"A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable."
This famous quote from Leslie Lamport, an A.M. Turing Award laureate, summarizes the challenges of building and maintaining distributed systems. But why do we need such complicated systems at all?
With the advent of the internet and smarter devices, the amount of data that needs to be processed has exploded. Everyday activities we take for granted, like ordering an Uber, watching a show on Netflix, running a Google search, shopping online, or interacting with social media, are all powered by hundreds of distributed services. And all of these services are built on the backbone of a handful of foundational papers in distributed systems.
While this list is definitely not comprehensive, here are some of my favorite papers that have had a massive impact on the world of distributed systems.
While not a traditional paper, the CAP theorem was first introduced by Eric Brewer as a conjecture in a keynote address at the 2000 ACM Symposium on Principles of Distributed Computing (PODC). It was later formalized and proved by Seth Gilbert and Nancy Lynch in the paper "Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services."
Eric Brewer's CAP theorem is a fundamental concept in distributed systems theory. It states that a distributed data store cannot simultaneously provide more than two of three guarantees: consistency, availability, and partition tolerance. All of the other papers mentioned here apply this principle and make the necessary tradeoffs in their systems.
The CAP theorem often sparks debate, much of which hinges on how readers interpret its definitions. Martin Kleppmann's "A Critique of the CAP Theorem" provides a better framework for discussing these tradeoffs.
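To see the tradeoff in miniature, here is a toy sketch (mine, not from any of these papers) of the choice facing a replica that has been cut off from its peers during a partition: reject requests to preserve consistency, or answer from possibly stale local state to preserve availability.

```python
# Toy model of a partitioned replica; the flag and values are illustrative.
class Replica:
    def __init__(self, prefer_consistency: bool):
        self.prefer_consistency = prefer_consistency
        self.local_value = "v1"     # last value replicated before the partition
        self.partitioned = True     # peers are currently unreachable

    def read(self):
        if self.partitioned and self.prefer_consistency:
            # CP choice: refuse to answer rather than risk returning stale data.
            raise TimeoutError("cannot confirm latest value; rejecting request")
        # AP choice: stay available, but the answer may be stale.
        return self.local_value

print(Replica(prefer_consistency=False).read())  # "v1", possibly stale
Replica(prefer_consistency=True).read()          # raises TimeoutError
```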
In this seminal paper from 2001, Leslie Lamport presents the Paxos algorithm for achieving consensus in a distributed system in an easy and accessible way. Paxos-based consensus protocols form the backbone of many distributed databases, storage systems, messaging platforms, and coordination services used across the industry. It heavily influenced technologies like Google's Chubby, Google's Spanner, Apache ZooKeeper, and Apache BookKeeper.
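As a rough illustration of the protocol's two phases, here is a minimal single-decree Paxos sketch. It runs in one process with in-memory acceptors and a single proposer; a real implementation must handle networks, failures, and competing proposers.

```python
# Minimal single-decree Paxos sketch: one proposer, in-memory acceptors.
class Acceptor:
    def __init__(self):
        self.promised = -1      # highest proposal number promised so far
        self.accepted = None    # (number, value) of the last accepted proposal

    def prepare(self, n):
        # Phase 1b: promise to ignore proposals numbered below n,
        # and report any previously accepted value.
        if n > self.promised:
            self.promised = n
            return self.accepted
        return "nack"

    def accept(self, n, value):
        # Phase 2b: accept unless a higher-numbered promise was made.
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return True
        return False

def propose(acceptors, n, value):
    # Phase 1a: gather promises from a majority.
    promises = [a.prepare(n) for a in acceptors]
    granted = [p for p in promises if p != "nack"]
    if len(granted) <= len(acceptors) // 2:
        return None
    # If any acceptor already accepted a value, we are bound to propose
    # the value of the highest-numbered accepted proposal.
    prior = [p for p in granted if p is not None]
    if prior:
        value = max(prior)[1]
    # Phase 2a: ask acceptors to accept; chosen once a majority agrees.
    votes = sum(a.accept(n, value) for a in acceptors)
    return value if votes > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(3)]
print(propose(acceptors, n=1, value="leader=node-a"))  # -> "leader=node-a"
```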
The Google File System (GFS) paper introduces a scalable distributed file system for large, data-intensive applications running on commodity hardware, and it is the foundation for many distributed file systems that followed. GFS served as a major inspiration for HDFS, the distributed file system used by the Apache Hadoop framework, and eventually Amazon S3 (even though S3 is fundamentally different).
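A toy sketch of the read path the paper describes: the master serves only metadata, and clients fetch chunk data directly from chunkservers, keeping the master off the data path. The names and table below are illustrative, not GFS's actual interfaces.

```python
CHUNK_SIZE = 64 * 1024 * 1024  # GFS used fixed 64 MB chunks

# Master metadata: (file path, chunk index) -> replica locations.
chunk_table = {
    ("/logs/web.log", 0): ["chunkserver-a", "chunkserver-b", "chunkserver-c"],
}

def locate(path: str, offset: int):
    index = offset // CHUNK_SIZE            # which chunk holds this byte?
    replicas = chunk_table[(path, index)]   # one metadata lookup at the master
    # The client now reads offset % CHUNK_SIZE directly from a chunkserver.
    return index, offset % CHUNK_SIZE, replicas

print(locate("/logs/web.log", 1_000_000))
# (0, 1000000, ['chunkserver-a', 'chunkserver-b', 'chunkserver-c'])
```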
This paper introduces the MapReduce programming model, which demonstrates a scalable approach to processing large-scale datasets using distributed computing infrastructure. MapReduce played a pivotal role in the "big data" revolution, enabling organizations to harness the power of distributed computing to analyze and derive insights from massive datasets. Combining GFS and MapReduce is what allowed Google to process petabytes of data and organize the data of the internet.
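The model is easiest to see in the paper's canonical word-count example. The sketch below runs the map, shuffle, and reduce steps in a single process; a real MapReduce runtime shards the input, distributes each step across a cluster, and shuffles intermediate pairs between machines.

```python
from collections import defaultdict

def map_fn(document: str):
    for word in document.split():
        yield word, 1                       # emit intermediate (key, value) pairs

def reduce_fn(word: str, counts: list):
    return word, sum(counts)                # combine all values for one key

documents = ["the quick brown fox", "the lazy dog", "the fox"]

# Shuffle phase: group every intermediate value by key.
groups = defaultdict(list)
for doc in documents:
    for word, count in map_fn(doc):
        groups[word].append(count)

print(dict(reduce_fn(w, c) for w, c in groups.items()))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```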
The MapReduce paper (along with GFS) inspired the development of an entire ecosystem of tools and libraries built around Apache Hadoop, such as Apache Hive (data warehouse infrastructure built on Hadoop), Apache Pig (high-level data flow language for Hadoop), Apache Spark (in-memory data processing engine), Apache HBase (distributed NoSQL database), and many others.
The Bigtable paper presents a distributed storage system for managing structured data at Google. Once MapReduce and GFS allowed Google to process data at scale in a cost-effective manner, the next step was to enable access to that data in a reliable and highly available manner. Bigtable provided a flexible, high-performance solution for applications like web indexing, Google Earth, and Google Finance.
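The paper describes Bigtable's data model as "a sparse, distributed, persistent multi-dimensional sorted map" indexed by row key, column key, and timestamp. A toy in-memory stand-in for that map, using the paper's own reversed-URL row-key example (the helper names here are mine):

```python
table = {}  # (row key, column key, timestamp) -> value

def put(row, column, timestamp, value):
    table[(row, column, timestamp)] = value

def get_latest(row, column):
    # Bigtable keeps multiple timestamped versions per cell;
    # here we simply scan for the newest one.
    versions = [(ts, v) for (r, c, ts), v in table.items()
                if (r, c) == (row, column)]
    return max(versions)[1] if versions else None

# Row keys for web indexing were reversed URLs, so pages from the same
# domain sort next to each other (the paper's web-indexing example).
put("com.cnn.www", "contents:", timestamp=1, value="<html>v1</html>")
put("com.cnn.www", "contents:", timestamp=2, value="<html>v2</html>")
print(get_latest("com.cnn.www", "contents:"))  # <html>v2</html>
```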
Just as MapReduce revolutionized the "big data" era, the Bigtable paper was the driving force behind the "NoSQL" era. Many of the design principles and architectural concepts introduced in the Bigtable paper were used in technologies like Apache HBase, Cassandra, MongoDB, and others. While some of these applications use different data models (e.g., MongoDB), they share common principles such as horizontal scalability, fault tolerance, and automatic sharding.
The Dynamo paper presents the design and implementation of a highly available key-value store developed by Amazon. Dynamo addressed the need for real-time access to highly dynamic data, such as the items in your shopping cart. The paper introduced "eventual consistency" as a core principle of distributed systems design, relaxing consistency guarantees to achieve high availability and performance (hi, CAP theorem!).
From the paper itself, “Compared to Bigtable, Dynamo targets applications that require only key/value access with primary focus on high availability where updates are not rejected even in the wake of network partitions or server failures.”
Similar to Bigtable, the Dynamo paper heavily influenced subsequent technologies like Riak, Voldemort, Cassandra, and even event streaming technologies like Apache Kafka.
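One of the paper's core techniques is consistent hashing: nodes and keys are hashed onto a ring, and each key is stored on the first N nodes encountered walking clockwise (its "preference list"). A minimal sketch; real Dynamo adds virtual nodes per server, and the node names here are illustrative.

```python
import hashlib
from bisect import bisect

def ring_position(name: str) -> int:
    # Hash node names and keys onto the same ring.
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

nodes = sorted(["node-a", "node-b", "node-c", "node-d"], key=ring_position)
positions = [ring_position(n) for n in nodes]

def preference_list(key: str, n_replicas: int = 3):
    # First node clockwise from the key's position, then its successors.
    start = bisect(positions, ring_position(key)) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(n_replicas)]

print(preference_list("cart:user-42"))  # e.g. ['node-c', 'node-d', 'node-a']
```

Adding or removing a node only remaps the keys adjacent to it on the ring, which is what lets Dynamo scale incrementally.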
Facebook's rapid growth necessitated a database solution capable of handling massive amounts of data and supporting a large number of concurrent users. While Bigtable and Dynamo were quite influential in their own right, Cassandra went a step further: by open-sourcing it under the Apache License alongside the paper, Facebook made this class of technology accessible to the entire industry.
Cassandra differentiated itself from the previous two by providing a tunable consistency model, allowing users to choose between strong consistency (like Bigtable) and eventual consistency (like Dynamo) based on their application requirements.
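In today's Cassandra, that choice is a per-statement setting. A sketch using the DataStax Python driver (pip install cassandra-driver), assuming a local cluster and an illustrative carts table:

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(["127.0.0.1"]).connect("shop")

# QUORUM: a majority of replicas must acknowledge -> stronger consistency.
write = SimpleStatement(
    "INSERT INTO carts (user_id, item) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(write, ("user-42", "book"))

# ONE: any single replica suffices -> higher availability, possibly stale reads.
read = SimpleStatement(
    "SELECT item FROM carts WHERE user_id = %s",
    consistency_level=ConsistencyLevel.ONE,
)
rows = session.execute(read, ("user-42",))
```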
This paper introduces Apache ZooKeeper and presents its design principles and algorithms for providing highly reliable and scalable coordination services in distributed systems. Before ZooKeeper, software developers often had to implement their own ad-hoc solutions for distributed coordination and consensus.
ZooKeeper proposed a centralized service for distributed coordination, offering primitives such as distributed locks, leader election, and configuration management. This simplified the development of distributed applications by offloading complex coordination logic to ZooKeeper. One of the most common use cases for ZooKeeper is service discovery.
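A sketch of that service-discovery pattern using the kazoo Python client (pip install kazoo); the paths and addresses are illustrative. Each instance registers an ephemeral node that ZooKeeper deletes automatically when the instance's session dies, so the directory always reflects live instances.

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Register this instance: ephemeral + sequential yields a unique node
# that disappears if the process crashes or loses its session.
zk.create("/services/api/instance-", b"10.0.0.5:8080",
          ephemeral=True, sequence=True, makepath=True)

# Discover live instances; kazoo re-invokes the callback on every change.
@zk.ChildrenWatch("/services/api")
def on_change(children):
    print("live instances:", children)
```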
This paper introduces Apache Kafka, a distributed messaging system designed for high-throughput, fault-tolerant processing of event streams. Kafka's publication as a research paper and its open-source release as an Apache project established it as a standard messaging system for highly scalable and fault-tolerant real-time data processing and event-driven architectures.
By modeling messages as a partitioned, replayable log, Kafka was quite influential in enabling the Lambda architecture, which combines batch processing and stream processing to handle large volumes of data with low latency and high throughput.
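A sketch of the producer/consumer model using the kafka-python client (pip install kafka-python); the broker address, topic, and group name are illustrative.

```python
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
# Events with the same key land in the same partition, preserving their order.
producer.send("page-views", key=b"user-42", value=b'{"page": "/checkout"}')
producer.flush()

# Consumers in one group split the partitions among themselves; each tracks
# its own offset, so the same log can feed many independent consumers.
consumer = KafkaConsumer("page-views",
                         bootstrap_servers="localhost:9092",
                         group_id="analytics",
                         auto_offset_reset="earliest")
for record in consumer:
    print(record.partition, record.offset, record.value)
```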
This paper introduces Resilient Distributed Datasets (RDDs), the core abstraction in Apache Spark, which enables fault-tolerant, in-memory data processing across distributed clusters. Spark's in-memory execution engine provides significantly faster performance compared to MapReduce (which has a disk-based execution model), especially for iterative algorithms, machine learning, and interactive analytics.
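Here is the same word count as the MapReduce example above, expressed as RDD transformations in PySpark (pip install pyspark). The pipeline is lazy: nothing runs until the collect() action, and cache() keeps the result in memory for reuse, which is where Spark's advantage over disk-based MapReduce comes from.

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-sketch")

lines = sc.parallelize(["the quick brown fox", "the lazy dog", "the fox"])
counts = (lines.flatMap(lambda line: line.split())   # lazy transformation
               .map(lambda word: (word, 1))          # lazy transformation
               .reduceByKey(lambda a, b: a + b)      # still lazy
               .cache())                             # keep in memory for reuse

print(counts.collect())   # action: triggers the actual computation
sc.stop()
```

Because each RDD records the lineage of transformations that produced it, Spark can recompute a lost partition from its parents instead of replicating the data.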
These papers cover a broad range of topics in distributed systems, including storage systems, consensus algorithms, fault tolerance, and scalability. Reading them will provide a solid foundation in the principles and practices of building and managing distributed systems.
If you're starting your journey in distributed systems and want to learn more, or you're already an expert and simply want to refresh your fundamentals, there is no better way to learn than reading some of these foundational papers on distributed systems.