Seele is an interesting blockchain project, but their whitepaper is highly technical. During the past couple of weeks, I have received questions from people asking whether I could explain the Seele project to them in an easy and understandable way. I created this multi-part series because I believe Seele could benefit from a clearer, easier-to-understand overview of their innovative blockchain technology. In each article I will provide an in-depth explanation of one of Seele's innovative features and try to explain it in a way that is easy to understand, even for someone with a non-technical background.
In part 1, I will explain Seele's Neural Consensus Algorithm. Part 2 will be about Seele's Heterogeneous Forest Network architecture. Part 3 will be about Seele's Value Transport Protocol, and in part 4 I will elaborate on how Seele incorporates their Quick Value Internet Connection protocol.
Seele in a nutshell
Seele positions itself as a blockchain 4.0. Their aim is to improve on Bitcoin (blockchain 1.0), Ethereum (blockchain 2.0) and EOS/Dfinity/Cosmos (blockchain 3.0). All previous blockchain generations suffer from the scalability, security and efficiency paradox, meaning that it is difficult, if not impossible, to optimize one or two of these pillars without compromising the others. For instance, EOS has the potential to scale up to 1 million transactions per second, but in order to achieve such a high throughput the agent nodes may become susceptible to attacks, hence compromising on security.
Seele is trying to improve upon existing blockchains by building a new network infrastructure from the ground up, without getting caught in the scalability, efficiency and security paradox. To accomplish this, Seele strives to implement the following features:
➡️ Neural Consensus Algorithm
➡️ Heterogeneous Forest Network architecture
➡️ Value Transport Protocol
➡️ Quick Value Internet Connection protocol
So now that we know what Seele is trying to accomplish, let's dive into the first feature of Seele's blockchain technology: their Neural Consensus Algorithm.
Neural Consensus Algorithm
Neural Consensus Algorithm is a highly technical term that can best be explained by looking at how the neurons in our brain work. The following example was posted in Seele's Telegram, so for clarity's sake let's use it here as well. Consider the visual recognition process within the brain. When the eye receives external light signals, billions of neurons in the brain are linked to identify the signal and determine what it represents. This procedure is essentially the consensus of billions of neurons. Seele proposes a neural consensus algorithm that transforms the consensus problem into the processing of large-scale concurrent requests, whereby multiple nodes participate collaboratively and concurrently. The higher the degree of participation by the nodes, the faster consensus convergence is reached.
The idea of a neural consensus algorithm is closely related to that of neural networks, a technique used in machine learning and artificial intelligence; gaining a better understanding of how neural networks work therefore helps in understanding Seele's neural consensus algorithm.
So, in essence, neural networks are a form of artificial learning that works as follows. Imagine we have a robot that we want to teach to see and identify objects. We first feed the robot certain input values, for instance the external light signals that make up a particular image; let's say, in this case, a soccer ball (as the World Cup is about to take place) or a basketball. Our goal is to train the robot so it can tell with near-certainty whether it is looking at a soccer ball or a basketball. Similar to our brain, the computer learns through different neurons (which we call nodes) that work together by passing information between each node to reach a certain output value, in our case "soccer ball" or "basketball". Within neural networks, each connection between nodes is assigned a different weight, corresponding to the importance of that information to the node. For instance, in the above image we see that input node B has a stronger connection (higher weight, w = 0.9) to node C than input node A has (notice that node A's green line, w = 0.1, is much thinner than node B's orange line). This means that the value of node B has a larger impact on the value attributed to node C than node A has. So, in a neural network, all the connections between the different nodes have their own weight, and together they determine the output, in our example the object the robot thinks it sees.
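To make the weighted-connection idea concrete, here is a minimal Python sketch of that single node C. The input values are made up for illustration, and this is of course a toy model of one neural network node, not Seele's code:

```python
import math

def sigmoid(x):
    """Squash the weighted sum into a value between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical input values for nodes A and B (e.g. two light-signal features)
node_a, node_b = 0.5, 0.8

# Connection weights from the example: A -> C is weak, B -> C is strong
w_a_c, w_b_c = 0.1, 0.9

# Node C's value: the weighted sum of its inputs, passed through an activation
node_c = sigmoid(node_a * w_a_c + node_b * w_b_c)
print(f"value of node C: {node_c:.3f}")  # dominated by node B's input
```

Because B's weight is nine times larger than A's, node C's value is driven almost entirely by whatever node B reports.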
So now you may be wondering: how is the weight of each connection determined? Good question, and that is exactly what neural network learning entails. At the start of the learning process, random weights are assigned to the connections between the nodes in the network.
As you can imagine, these initial weights can be way off, producing a lot of wrong predictions that do not match the actual outcomes. We call these errors, and they play an important part in evaluating the learning process. Ultimately, we want our robot to be 99% accurate in answering whether the object is a soccer ball or a basketball, hence we want an error of 1% or less. The image below shows an example where our robot is correct only 1 out of 3 times (an error of 66%, which is not that great yet).
Because the initial weights can be way off, the network then assigns different weights to the connections and checks whether this results in a more accurate output (a higher percentage of correct answers). This process is called back-propagation: the network traces back through the connections and adjusts the weights between the nodes. After that is done, the outputs are calculated again. This process is repeated over and over, constantly calibrating the connection weights, until an adequate error threshold is reached (in our case we want 99% correctness, hence an error of 1% or less).
To visualize this learning process, imagine a ball being dropped down one slope and rolling up another slope and back again (see image below). At first the error (the height the ball reaches on each slope) is large, but with each iteration the error decreases until it settles at (almost) zero, which would be a perfect match between the predicted outcome and the real outcome.
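Here is a toy Python sketch of that calibration loop, using a handful of made-up "ball" feature samples and the single weighted node from before. The data, learning rate and error measure are all invented for illustration (not taken from Seele), but it shows the error shrinking as the weights are adjusted iteration after iteration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Made-up training data: two features per object, label 1 = soccer ball, 0 = basketball
samples = [((0.2, 0.9), 1.0), ((0.8, 0.1), 0.0),
           ((0.3, 0.8), 1.0), ((0.9, 0.2), 0.0)]

# Step 1: start with arbitrary ("random") weights
w1, w2, learning_rate = 0.5, 0.5, 1.0

for iteration in range(2000):
    total_error = 0.0
    for (x1, x2), label in samples:
        prediction = sigmoid(x1 * w1 + x2 * w2)
        error = prediction - label
        total_error += error ** 2
        # Step 2: back-propagate - nudge each weight against its error gradient
        grad = error * prediction * (1 - prediction)
        w1 -= learning_rate * grad * x1
        w2 -= learning_rate * grad * x2
    # Step 3: repeat, watching the error roll downhill like the ball in the analogy
    if iteration % 500 == 0:
        print(f"iteration {iteration}: total squared error = {total_error:.3f}")

print(f"final total squared error = {total_error:.3f}")
```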
One of the big advantages of neural networks is that their performance scales roughly linearly as the number of nodes increases. Hence, the more nodes there are in the network, the faster convergence becomes and the better the performance will be.
Now that we have a bit of an understanding of what a neural network is, we can go on to the other technical term Seele mentions in their whitepaper, namely: the Practical Byzantine Fault Tolerance Algorithm (PBFT).
To explain the Practical Byzantine Fault Tolerance Algorithm, it is important to know that it was designed as a solution to a problem presented in the form of the following allegory (source): Imagine that several divisions of the Byzantine army are camped outside an enemy city, each division commanded by its own general. The generals can communicate with one another only by messenger. After observing the enemy, they must decide upon a common plan of action. However, some of the generals may be traitors, trying to prevent the loyal generals from reaching agreement. The generals must decide on when to attack the city, but they need a strong majority of their army to attack at the same time. The generals must have an algorithm to guarantee that (a) all loyal generals decide upon the same plan of action, and (b) a small number of traitors cannot cause the loyal generals to adopt a bad plan. The loyal generals will all do what the algorithm says they should, but the traitors may do anything they wish. The algorithm must guarantee condition (a) regardless of what the traitors do. The loyal generals should not only reach agreement, but should agree upon a reasonable plan.
In the above allegory, the generals are the nodes participating in the distributed network. The messengers they are sending back and forth are the connections between the nodes. The collective goal of the "loyal" generals is to decide whether the information submitted to the blockchain is valid or not.
In the Byzantine Generals Problem, the solution seems deceptively simple. However, its difficulty is indicated by the surprising fact that no solution will work unless more than two-thirds of the generals are loyal. Hence, if one-third or more of the nodes are bad actors, no consensus can be reached. Seele therefore proposes a new consensus mechanism called ε-differential agreement (EDA).
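The "more than two-thirds loyal" requirement is easy to express as a simple bound. The small sketch below only computes the classical Byzantine fault-tolerance limit for a few hypothetical network sizes; it is not part of Seele's protocol:

```python
def max_tolerated_faults(n: int) -> int:
    """Largest number of Byzantine ("traitor") nodes a classical BFT network of n nodes survives:
    agreement is only guaranteed while strictly fewer than one third of the nodes are faulty."""
    return (n - 1) // 3

for n in (4, 10, 100):
    f = max_tolerated_faults(n)
    print(f"{n} nodes: consensus holds with up to {f} traitors ({f / n:.0%} of the network)")
```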
The whitepaper is very vague about what EDA exactly does, and it was hard for me to decipher. Since I am not sure I understand it fully myself, I will refrain from trying to explain it in detail (as I could be wrong about it), but it appears to transform the consensus problem into an asynchronous request-processing procedure that is very robust with respect to the overall connectivity of the network.
With the EDA algorithm, nodes can verify input using different sampling rates, and by doing so they can remain effective even when more than 33% of the nodes are bad actors; this makes it an improvement over Byzantine consensus algorithms.
To conclude, Seele's Neural Consensus Algorithm improves upon existing consensus mechanisms by:
➡️ **Changing the consensus process from discrete voting to continuous voting**: Traditionally, a node's vote on a block represents its "opinion" as 0 or 1 (yes or no), but now a vote can be a continuous value (e.g. values between 0 and 1 are also possible); see the toy illustration after this list.
➡️ **Efficiency parameters can be adjusted**: Optimal system efficiency can be obtained by adjusting the relevant parameters in real time.
➡️ **Energy saving**: There is no head-node selection process in the algorithm, and no Proof of Work or Proof of Stake is required, which greatly reduces energy consumption.
➡️ **Low transmission overhead**: The algorithm does not need to connect with most nodes during the consensus process, which saves on transmission overhead and reduces the node's dependence on the system's network structure as much as possible.
➡️ **Compatible with a variety of network structures**: The consensus algorithm adapts well to both the traditional chain structure and the Directed Acyclic Graph (DAG) structure.
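To make the first point a bit more tangible, here is a purely hypothetical illustration of discrete versus continuous voting. This is emphatically not Seele's EDA algorithm (the whitepaper does not spell it out); the node confidences and the 0.5 acceptance threshold are invented for the example:

```python
# NOTE: illustrative only - shows what "votes between 0 and 1" could look like
# versus hard yes/no votes. It does not reproduce Seele's actual EDA mechanism.
from statistics import mean

# Hypothetical confidence scores from five nodes about one block (0 = invalid, 1 = valid)
confidences = [0.92, 0.85, 0.10, 0.78, 0.64]

# Discrete voting: each node rounds its opinion to 0 or 1, and the majority wins
discrete_votes = [1 if c >= 0.5 else 0 for c in confidences]
discrete_result = sum(discrete_votes) > len(discrete_votes) / 2

# Continuous voting: the raw confidences are aggregated directly
continuous_result = mean(confidences) >= 0.5

print(f"discrete votes {discrete_votes} -> accept block: {discrete_result}")
print(f"mean confidence {mean(confidences):.2f} -> accept block: {continuous_result}")
```

Intuitively, the continuous form lets each node express how confident it is instead of being forced into a hard yes/no, which is the property the whitepaper highlights.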
I hope this article provided a better picture of Seele's Neural Consensus Algorithm. In the next article we will dive deeper into Seele's Heterogeneous Forest Network architecture. If you have questions about Seele's Neural Consensus Algorithm or Seele's other technical features, you can always send me a message.
Subscribe to my channels Steemit, Medium and Twitter if you would like to be informed about blockchain and cryptocurrency news. My handle is @LindaCrypto for all channels. You can also read my articles on LinkedIn.
If you like my article please give me a clap. Thank you!
LindaCrypto