Exploring the World of Distributed Systems: PACELC Theorem or Why CAP Is Not Enough

by Anton Kuklin, April 28th, 2023

Too Long; Didn't Read

This article heavily relies on the previous article about the CAP theorem. The idea of PACELC is to extend CAP: if a system is currently under a Partition (P), it has to choose between Availability (A) and Consistency (C); Else (E), if the system has no partition at the moment, it has to choose a tradeoff between Latency (L) and Consistency (C).

Important notice: This article heavily relies on the previous article about the CAP theorem. I won’t cover topics and points covered there, so if you don’t really know much about the CAP theorem, I highly recommend starting with that article first: https://hackernoon.com/exploring-the-world-of-distributed-systems-a-beginners-guide-to-the-cap-theorem

Preface

As you recall, the CAP theorem was introduced back in 2000, which is a relatively long time ago, and it changed the way developers approach and reason about distributed systems design constraints.


It might look like the CAP theorem put everything in its place, but a number of questions remained open.


First of all, the CAP theorem mostly covers the case when a system is partitioned; in that case, to continue operating, it can either be available or consistent. But what happens when there’s no partitioning? Is it both consistent and available?


This raises the second question: what, in reality, is an “available” system? If it responds after, let’s say, 10 minutes, is it still available?


All those points led to the idea that the CAP theorem is not enough and that its constraints and rules should be extended.

Necessary Definitions

Latency

From a general point of view, latency is a time delay between the cause and the effect of some physical change in the system being observed. (Wikipedia)


This definition might be too abstract and not so practical, so in our case, when we are interested in a distributed system (a multi-node database, for instance), latency will be the time interval between the moment we send a request and the moment we receive the corresponding response.


Under the hood, it also consists of multiple sub-latencies, like network latency, query latency, etc.
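

To make this concrete, here is a minimal Go sketch (purely illustrative; the endpoint is just a stand-in for any call to a node of our system) that measures latency as the interval between sending a request and receiving its response:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	start := time.Now()

	// Stand-in for a request to any node of our distributed system.
	resp, err := http.Get("https://example.com")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Latency: time between sending the request and getting the response.
	fmt.Println("observed latency:", time.Since(start))
}
```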

Consistency

Although we already covered the definition of consistency in the previous article, we need to look at it more closely, because there are multiple types of consistency: strong consistency and eventual consistency.


In the CAP theorem, the letter C stands for strong consistency, which we fully covered there, but what is eventual consistency?


Eventual consistency is a guarantee that a write operation will eventually take effect on all nodes in a cluster, provided the system operates for long enough. For example, consider a cluster of three nodes: 1, 2, and 3. The user calls node 1 to perform a write operation.


In this case, eventual consistency is the guarantee that, at some point in time, nodes 2 and 3 will receive that write update too.


This means two things:


  1. There might be a time frame during which read calls to nodes 1, 2, and 3 return different data.


  2. Eventually, it is guaranteed that nodes 1, 2, and 3 will return the same data on a corresponding read call (assuming no new writes are performed in the meantime).
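

To make these two points concrete, here is a minimal Go sketch of a toy, in-memory three-node “cluster” with an artificial replication delay (all names and timings here are mine, purely for illustration):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// node is a toy replica storing a single value.
type node struct {
	mu    sync.RWMutex
	value string
}

func (n *node) write(v string) { n.mu.Lock(); n.value = v; n.mu.Unlock() }
func (n *node) read() string   { n.mu.RLock(); defer n.mu.RUnlock(); return n.value }

func main() {
	n1, n2, n3 := &node{value: "v1"}, &node{value: "v1"}, &node{value: "v1"}

	// The write lands on node 1 and is acknowledged right away;
	// replication to nodes 2 and 3 happens later, in the background.
	n1.write("v2")
	go func() {
		time.Sleep(100 * time.Millisecond) // simulated replication lag
		n2.write("v2")
		n3.write("v2")
	}()

	// Point 1: during the replication window, reads disagree.
	fmt.Println("right after the write:", n1.read(), n2.read(), n3.read())

	// Point 2: eventually all nodes converge (no newer writes happen here).
	time.Sleep(200 * time.Millisecond)
	fmt.Println("a bit later:          ", n1.read(), n2.read(), n3.read())
}
```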

Data Replication

We’ve already mentioned that a distributed system (a database, following the previous examples) has to propagate updates to separate nodes, which means the data gets replicated across different nodes.


This might be required to distribute the read load across different nodes due to the amount of traffic. Other use cases relate to a requirement to place nodes physically closer to end users (same continent/country/region/city, etc.), so that less time is required to reach a node over the network.


It also makes sense to have multiple nodes to increase reliability, covering the cases when a physical machine, server rack, or data center becomes unavailable due to some malfunction. We will cover examples of how this replication might happen later on.

PACELC

First, we need to start with the right pronunciation - “pass-elk” (as noted in the official paper).


The idea of the PACELC theorem is to extend the CAP theorem by saying that if a system is currently under a Partition (P), it has to choose between Availability (A) and Consistency (C); Else (E), if the system has no partition at the moment, it has to choose a tradeoff between Latency (L) and Consistency (C).


As I mentioned at the beginning of the article, we won’t cover the PAC part (which is basically the CAP theorem), and we will focus on ELC. Let’s try to understand why such a tradeoff is required by looking at some specific examples.


Let’s say our DB has three nodes: 1, 2, and 3. Node 1 is the primary node that accepts all the write requests, and nodes 2 and 3 are secondary nodes, which means they accept only read requests.


As we remember, in this case there is no problem with the connections between nodes: any node can reach any other, and everything operates as “normal”. This leads us to the question of how exactly to perform a write query.


It might not be obvious, but in our case, we have multiple options for performing a write; let’s check a few of them.

Low Latency Setup

When node 1 accepts a write request, it will respond with success, after which it will apply those changes locally. After the response, node 1 will pass those changes to nodes 2 and 3.


This makes the setup eventually consistent: it is the least consistent of the options and the most efficient in terms of latency.
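

A minimal sketch of this write path, reusing the toy `node` type from the eventual-consistency example above (the function name is mine, not any real database’s API):

```go
// lowLatencyWrite responds first; the local apply and the replication
// to the secondaries both happen after the success response.
func lowLatencyWrite(primary *node, secondaries []*node, v string) {
	go func() {
		primary.write(v) // apply locally after responding
		for _, s := range secondaries {
			s.write(v) // then propagate to nodes 2 and 3
		}
	}()
	// Returning here means responding with success immediately:
	// minimal latency, but only eventual consistency.
}
```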



Most Consistent Setup

Node 1 accepts a write request, applies it locally, passes it to nodes 2 and 3, and only after that responds with success. As you can imagine, this setup is strongly consistent and has the longest latency.
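

The same kind of sketch, with the ordering reversed: nothing is acknowledged until every node has applied the change (again, just a sketch on the toy `node` type, not a real replication protocol):

```go
// stronglyConsistentWrite applies the change locally, waits until every
// secondary has applied it too, and only then responds with success.
func stronglyConsistentWrite(primary *node, secondaries []*node, v string) {
	primary.write(v) // 1. apply locally
	for _, s := range secondaries {
		s.write(v) // 2. replicate to nodes 2 and 3, waiting for each
	}
	// 3. Only now respond with success: strongest consistency, longest latency.
}
```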



Hybrid Setup

There’s also a hybrid setup. When node 1 accepts a write request, it will apply the change locally, pass the update to node 2, respond with success, and only after that update node 3.


This would make the system less consistent than in the second case, but its latency will also be relatively smaller.
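

One way to sketch the hybrid setup is to parameterize the write path by how many nodes must hold the change before we acknowledge it (I’ll call this `ackAfter`, my own name; with three nodes, `ackAfter = 1` roughly corresponds to the low-latency setup, `ackAfter = 3` to the most consistent one, and `ackAfter = 2` to the hybrid described above):

```go
// hybridWrite acknowledges once the change has reached ackAfter nodes
// (the primary counts as the first one); the remaining secondaries are
// updated in the background after the response.
func hybridWrite(primary *node, secondaries []*node, v string, ackAfter int) {
	primary.write(v)
	applied, i := 1, 0
	for ; applied < ackAfter && i < len(secondaries); i++ {
		secondaries[i].write(v)
		applied++
	}
	// Respond with success here: ackAfter nodes now hold the latest value.

	rest := secondaries[i:]
	go func() {
		for _, s := range rest { // finish replication asynchronously
			s.write(v)
		}
	}()
}
```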



I would like to focus on this case more closely. This setup seems like a “golden mean” because we are making some sort of compromise between consistency and latency. However, is there a way to make the same setup even more strongly consistent for reads?


Actually, there is one approach we haven’t covered yet, since we haven’t discussed ways to adjust reads. In this setup, after a successful write response, two of our nodes hold the latest update, and one stays outdated for some time.


This means that users routed to this outdated node by a load balancer won’t receive the latest data. But what if read requests are sent not to one node, but to two nodes? This actually guarantees that users receive the latest data, because any two nodes out of the three will include at least one that has already applied the write.


Admittedly, the challenge of figuring out which of the returned values is the latest one is a separate task, and a topic for a separate article, but physically, nothing prevents us from doing that.




By looking at this example, we can even derive a simple formula. Let's make the following definitions:


R is the number of nodes a read call will reach out to.


W is the number of nodes a write will be applied to before responding with success.


N is the total number of nodes in the cluster.


If R + W > N, then we have a guarantee that reads will be strongly consistent; if R + W <= N, some portion of read requests won’t be able to see the latest data.


This also shows that we can control not only the overall latency but also the balance between read and write latency by changing the values of R and W.
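

Here is a tiny, self-contained Go sketch of this rule (the standard read/write quorum intuition; the function name and the example configurations are mine):

```go
package main

import "fmt"

// strongReads reports whether reads are guaranteed to see the latest
// acknowledged write: if R + W > N, every set of R nodes we read from
// overlaps every set of W nodes a write reached in at least one node.
func strongReads(r, w, n int) bool {
	return r+w > n
}

func main() {
	n := 3
	for _, cfg := range []struct{ r, w int }{
		{1, 1}, // fastest reads and writes, but reads may be stale
		{1, 3}, // slow writes, fast strongly consistent reads
		{2, 2}, // the hybrid setup above, with reads going to two nodes
		{3, 1}, // fast writes, but reads must contact every node
	} {
		fmt.Printf("R=%d W=%d N=%d -> strongly consistent reads: %v\n",
			cfg.r, cfg.w, n, strongReads(cfg.r, cfg.w, n))
	}
}
```

Raising W makes writes slower but lets reads stay cheap, and raising R does the opposite, which is exactly the read/write latency balance mentioned above.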


This is obviously not a full list of possible data replication strategies, and the ones mentioned are much more complex under the hood.


But the overall idea we are interested in is the demonstration that, one way or another, we have to choose between the degree of consistency and latency.

Conclusion

Although the CAP theorem seemed, for a long time, like a sufficient way to understand the constraints of distributed systems design, eventually it became obvious that it is not enough.


To also cover the constraints during the non-partitioned system state, the PACELC theorem was built on top of the CAP theorem. As we have covered in this article, even without a partition, we have to choose a balance between the latency and consistency of our system.


We have also seen that there is a way to balance not only the overall latency but also read and write latency. All these different setups have their own use cases and implementation mechanisms, which I will cover in my next article. Stay tuned!