Updates on Open Source Distributed Consensus

There’s been more activity in the distributed consensus space recently.

At the Hypertable talk yesterday, Doug mentioned Hyperspace, their Chubby-style distributed lock manager. Though I think it’s missing the ‘distributed’ part for now.

To provide some level of high availability, Hypertable needs something akin to Chubby. We’ve decided to call this service Hyperspace. Initially we plan to implement this service as a single server. This single server implementation will later be replaced with a replicated version based on Paxos or the Spread toolkit.
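To make the role of such a service concrete, here is a minimal in-memory sketch of what a Chubby-style advisory lock service offers: clients try to acquire a named lock, and only one holder exists at a time. This is purely illustrative, not Hyperspace or Chubby itself; a real service replicates this state (e.g. via Paxos) so it survives server failures.

```python
# Illustrative stand-in for a Chubby-style lock service.
# A production service would replicate this state across servers.

class LockService:
    def __init__(self):
        self._holders = {}  # lock name -> client id

    def try_acquire(self, name, client):
        """Return True if `client` now holds the lock `name`.

        Re-acquiring a lock you already hold succeeds (it is idempotent).
        """
        if self._holders.get(name) in (None, client):
            self._holders[name] = client
            return True
        return False

    def release(self, name, client):
        """Release the lock, but only if `client` actually holds it."""
        if self._holders.get(name) == client:
            del self._holders[name]


# Example: two servers race for a master lock; only one wins.
svc = LockService()
print(svc.try_acquire("/hypertable/master", "server-a"))  # True
print(svc.try_acquire("/hypertable/master", "server-b"))  # False
svc.release("/hypertable/master", "server-a")
print(svc.try_acquire("/hypertable/master", "server-b"))  # True
```

The lock path and server names are hypothetical. The hard part, of course, is not this logic but keeping it consistent across replicas, which is exactly what Paxos-based replication would add.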

ZooKeeper seems to be making some progress as well.

Check out this recent video presentation (which I probably can’t embed, so here’s the link).

In 2006 we were building distributed applications that needed a master, aka coordinator, aka controller, to manage the subprocesses of the applications. It was a scenario that we had encountered before and something that we saw repeated over and over again inside and outside of Yahoo!.

For example, we have an application that consists of a bunch of processes. Each process needs to be aware of the other processes in the system. The processes need to know how requests are partitioned among them. They need to be aware of configuration changes and failures. Generally an application-specific central control process manages these needs, but because these control programs are specific to their applications, they represent a recurring development cost for each distributed application. Because each control program is rewritten, it doesn’t get the investment of development time to become truly robust, making it an unreliable single point of failure.

We developed ZooKeeper to be a generic coordination service that can be used in a variety of applications. The API consists of less than a dozen functions and mimics the familiar file system API. Because it is used by many applications, we can spend time making it robust and resilient to server failures. We also designed it to have good performance so that it can be used extensively by applications to do fine-grained coordination.
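To illustrate what a filesystem-like coordination API looks like, here is an in-memory sketch of a ZooKeeper-style tree of nodes (“znodes”) with create/get/children/delete operations. This is not the real ZooKeeper client; the class, method names, and behavior (no watches, no ephemeral nodes, no sessions) are a simplified approximation for illustration only.

```python
# Simplified, in-memory approximation of a ZooKeeper-style znode tree.
# Real ZooKeeper adds watches, ephemeral/sequential nodes, and replication.

class ZNodeStore:
    def __init__(self):
        self._nodes = {"/": b""}  # path -> data; the root always exists

    def create(self, path, data=b""):
        """Create a node; its parent must already exist."""
        parent = path.rsplit("/", 1)[0] or "/"
        if parent not in self._nodes:
            raise KeyError("no such parent: " + parent)
        if path in self._nodes:
            raise KeyError("node exists: " + path)
        self._nodes[path] = data
        return path

    def get_data(self, path):
        """Read the data stored at a node."""
        return self._nodes[path]

    def get_children(self, path):
        """List a node's direct children, sorted by name."""
        prefix = path.rstrip("/") + "/"
        return sorted(
            p[len(prefix):] for p in self._nodes
            if p.startswith(prefix) and "/" not in p[len(prefix):]
        )

    def delete(self, path):
        """Remove a node."""
        del self._nodes[path]


# Example: the group-membership pattern described above -- each process
# registers itself under /workers so peers can discover one another.
# (Paths and addresses here are made up for the example.)
store = ZNodeStore()
store.create("/workers")
store.create("/workers/host1", b"10.0.0.1:9000")
store.create("/workers/host2", b"10.0.0.2:9000")
print(store.get_children("/workers"))  # ['host1', 'host2']
```

In real ZooKeeper, a process would register with an *ephemeral* node that the server deletes automatically when the process’s session dies, which is how peers learn about failures without a custom control program.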

We use a lock coordinator in Spinn3r and are very happy with the results. It’s a very simple system, so it provides a LOT of functionality without much pain or maintenance.

Paxos Made Live is out as well. (I haven’t had time to read it yet.)
