
Showing posts from June, 2016

Modular Composition of Coordination Services

This paper appeared at Usenix ATC'16 last week. The paper considers the problem of scaling ZooKeeper to WAN deployments. (Check this earlier post for a brief review of ZooKeeper.) The paper suggests a client-side-only modification to ZooKeeper, and calls the resulting system ZooNet. ZooNet consists of a federation of ZooKeeper clusters, one at each datacenter/region. ZooNet assumes the workload is highly partitionable, so the data is partitioned among the ZooKeeper clusters, each accompanied by learners in the remote datacenters/regions. Each ZooKeeper cluster processes only updates for its own data partition, and if applications in different regions need to access unrelated items, they can do so independently and in parallel at their own site. However, the problem with such a deployment is that it does not preserve ZooKeeper's sequential execution semantics. Consider the example in Figure 1. (It is not clear to me why the so-called "loosely-coupled" application…
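To make the federation idea concrete, here is a minimal client-side sketch of my own (an illustration in the spirit of ZooNet's client-side fix, not code from the paper), assuming two ZooKeeper ensembles accessed through kazoo's Python client: the router sends each operation to the cluster that owns the key's path prefix, and injects a sync before a read whenever the client switches clusters, so the read observes updates already committed at the other site. The hostnames and prefixes are made up.

```python
from kazoo.client import KazooClient

class FederatedClient:
    """Client-side router over a federation of ZooKeeper ensembles.
    Each ensemble owns one path prefix (its data partition)."""

    def __init__(self, ensembles):
        # ensembles: dict mapping a path prefix to a KazooClient
        self.ensembles = ensembles
        self.last = None  # the ensemble the previous operation went to
        for zk in self.ensembles.values():
            zk.start()

    def _route(self, path):
        for prefix, zk in self.ensembles.items():
            if path.startswith(prefix):
                return zk
        raise KeyError("no partition owns " + path)

    def write(self, path, data):  # data must be bytes
        zk = self._route(path)
        self.last = zk
        zk.set(path, data)

    def read(self, path):
        zk = self._route(path)
        # When a read switches to a different cluster, issue a sync
        # first so the read observes that cluster's committed updates.
        if zk is not self.last:
            zk.sync(path)
        self.last = zk
        return zk.get(path)[0]

# Hypothetical two-region deployment; hostnames are made up.
client = FederatedClient({
    "/east": KazooClient(hosts="zk-east.example.com:2181"),
    "/west": KazooClient(hosts="zk-west.example.com:2181"),
})
```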

How to package your ideas using the Winston star

I came across this advice serendipitously by reading a random Hacker News comment. The advice shows up in the last 10 minutes of an Artificial Intelligence lecture by Patrick Winston. Winston, a prominent professor at MIT, tells his class that he will share with them important advice that may make or break their careers. It is about how to package and present ideas. His advice is simple. Follow this 5-tip star to package your ideas better. All the tips start with "S":
- Symbol: use a symbol to make your idea visual and memorable
- Slogan: find a catchy slogan for your idea
- Surprise: highlight the unconventional/counterintuitive/interesting part of your idea
- Salient: focus on the essential part of your idea; remember: less is more, fewer ideas is better
- Story: package your idea with a story; human brains are hardwired for stories
Here let me try to apply the Winston Star method to itself, to make this more concrete. Symbol: the star is the symbol of the Winston…

Nobody wants to read your shit

Earlier I wrote about how much I liked "The War of Art" by Steven Pressfield. This newly released book by Pressfield picks up where "The War of Art" left off. While the former focused on the psychological aspects of the writing process, this book focuses on the structural/mechanical aspects of writing. The book was freely distributed as a PDF and ebook for a limited time for promotion purposes. (It looks like the promotion has ended.) I read it in one sitting and found it interesting. This book can benefit anyone who needs to communicate in writing. (Of course, if you are a novice writer, start with The Elements of Style.) The book gives a very interesting account of what Steven learned over a 50+ year career spanning all forms of writing, including ad writing, Hollywood script-writing, novels, and nonfiction. The first chapter lays down the most important lesson Steven has learned: "Nobody wants to read your shit", and the rest of the book talks about…

Progress by impossibility proofs

I recently listened to this TED talk by Google X director Astro Teller. The talk is about the Google X moonshot projects and the process they use for managing them. Google X moonshot projects aim to address big, important, and challenging problems. Teller tells (yay, pun!) that the process they use to manage a moonshot project is basically to try to find ways to kill the project. The team first tries to identify the bottleneck in a project by focusing on its most important/critical/risky part. Then they try to either solve that hardest part or show that it is unsolvable (in a feasible manner) and kill the project. Teller claims that, at Google X, they actually incentivize people to kill projects by awarding bonuses, and celebrate when a project gets proved impossible and killed. And if a project still survives, that is a successful moonshot project that has potential for transformative change. Although this approach looks counterintuitive at first, it is actually a…

TensorFlow: A system for large-scale machine learning

This paper was uploaded to arxiv.org on May 27, 2016, so it is the most recent description of Google TensorFlow. (Five months ago, I commented on an earlier TensorFlow whitepaper, if you want to check that first.) Below I summarize the main points of this paper using several sentences/paragraphs from the paper with some paraphrasing. I end the post with my wild speculations about TensorFlow. (This speculation thing is getting strangely addictive for me.) TensorFlow is built leveraging Google's experience with their first-generation distributed machine learning system, DistBelief. The core idea of this paper is that TensorFlow's dataflow representation subsumes existing work on parameter server systems (including DistBelief), and offers a uniform programming model that allows users to harness large-scale heterogeneous systems, both for production tasks and for experimenting with new approaches. TensorFlow versus Parameter Server systems: DistBelief was based on…
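To make the "stateful dataflow subsumes a parameter server" point concrete, here is a minimal sketch of my own (not from the paper), assuming the TF 1.x-era graph API: the mutable Variable nodes for W and b live inside the dataflow graph itself, playing the role that parameter-server shards play in DistBelief-style systems.

```python
import numpy as np
import tensorflow as tf  # assumes the TF 1.x graph API

# Build a dataflow graph for linear regression y = W*x + b.
# W and b are stateful Variable nodes inside the graph itself,
# which is how the dataflow model subsumes a parameter server.
x = tf.placeholder(tf.float32, shape=[None, 1])
y_true = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
y_pred = tf.matmul(x, W) + b

loss = tf.reduce_mean(tf.square(y_pred - y_true))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# Executing the graph: each run() both reads and mutates model state.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    xs = np.random.rand(100, 1).astype(np.float32)
    ys = 3.0 * xs + 1.0  # synthetic targets for the toy problem
    for _ in range(200):
        sess.run(train_op, feed_dict={x: xs, y_true: ys})
    print(sess.run([W, b]))  # should approach W=3, b=1
```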
