Blockchain technology has been a hot topic of discussion, not only in academia but also among industry practitioners. Blockchain capacity, more popularly known as scalability, deserves a central place in those discussions. The vision of making blockchain an industry standard requires the technology to be upgraded so that it can handle massive workloads.
It is common knowledge that, despite being the oldest blockchain design in existence, Bitcoin’s blockchain is used as the benchmark of success when creating a new blockchain system. It can currently handle only about 7 transactions per second (7 tps), far below the payment industry leader’s capacity of roughly 56,000 tps. New designs first aim to surpass Bitcoin’s capacity; the next goal is to exceed the industry leader’s.
There was a debate among Bitcoin enthusiasts on how to scale the current capacity. Some simply wanted to raise the block size to achieve a higher tps rate, while others wanted to keep the legacy 1 MB hard cap Satoshi Nakamoto put in the original design and instead build workarounds such as Segregated Witness (SegWit) and the Lightning Network. As a result of this scalability debate, the first group went its separate way by forking the original Bitcoin to create Bitcoin Cash. The latter group activated SegWit and is currently building the Lightning Network.
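To see why block size drives the tps figure, here is a back-of-the-envelope sketch. The ~250-byte average transaction size and 10-minute block interval are assumed figures for illustration, not numbers from this article:

```python
# Rough throughput estimate for a Bitcoin-like chain.
# Assumptions: average transaction of ~250 bytes, one block every 10 minutes.

def max_tps(block_size_bytes, avg_tx_size_bytes=250, block_interval_s=600):
    """Theoretical upper bound on transactions per second."""
    txs_per_block = block_size_bytes // avg_tx_size_bytes
    return txs_per_block / block_interval_s

# Bitcoin's legacy 1 MB cap:
print(round(max_tps(1_000_000), 1))   # ~6.7 tps, close to the oft-quoted 7
# An 8 MB cap (the size Bitcoin Cash launched with):
print(round(max_tps(8_000_000), 1))   # ~53.3 tps
```

Even an 8x larger block only raises the ceiling linearly, which is why the other camp looked to off-chain approaches instead of block size alone.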
As the story above shows, network capacity is a pressing issue in any blockchain system. The problem is that no blockchain design can escape the triple constraint: trust, capacity, and cost. I coined this blockchain triple constraint after comparing old and new designs of both private and public blockchains.
The trust constraint is the level of confidence users have when putting their information into the system: whether the validators (e.g., miners) are able to game the system by any means. Capacity is the number of transactions the system can confirm within a given timeframe, while cost is everything that must be provisioned for the system to run, e.g., computing power, bandwidth, storage, and so on.
A system with a high level of trust means users do not need to rely on the honesty of any individual validator; as long as at least a majority of validators behave honestly, the system still runs smoothly. Bitcoin is an example of such a system: its users do not care who mines their transactions, because of the enormous cost poured into securing the network. However, this high level of trust has a massive impact on capacity, since it is difficult to increase on-chain storage given the variation in Internet speeds at a global scale. That is why developers are now trying to move the on-chain burden off-chain with the Lightning Network.
When you read about a system with high capacity, rest assured that sacrifices are being paid dearly on another constraint: either trust is degraded (by employing some form of delegated validators or even a central authority) or cost is raised (by requiring specialized machines, high-speed networks, etc.). At the moment it may be infeasible to escape the triple constraint, though if a genuinely new approach emerges, we may need to redefine the constraints for that system. For now, we can use the triple constraint to analyse new blockchain projects. Most of these projects are overrated: their PR does not really describe their technology, but instead offers complex words that laypeople do not really understand.
Image source: Techcrunch.