Ethereum’s Centralization Endgame Makes The Case For Building On Bitcoin
In a recent blog post called “Endgame”, Ethereum founder Vitalik Buterin addressed the concerns around undue centralization of Ethereum. But not so that he might dismiss those claims. Nay, he’d rather confirm them.
There are a couple of noteworthy comments at the beginning of this article, such as “average ‘big block chain’” and “acceptably trustless and censorship resistant, at least by my standards.”
Clearly, Bitcoin is not considered the average blockchain, even by Buterin. All of us remember the Block Size Wars, where a hard fork known as Bitcoin Cash emerged from a fundamental disagreement around the acceptable block size for Bitcoin.
To summarize, Bitcoin as we know it today stood on the side of the everyman, keeping block sizes small so that anyone who wished to could easily participate as a node. Proponents of what became Bitcoin Cash wanted to rival the likes of Visa in its ability to process transactions quickly, and demanded larger blocks in order to meet their idea of transactional demand.
The Lightning Network and Layer 2 applications allowed this scaling to ultimately happen on Bitcoin off-chain, which is how El Salvador, for instance, was able to practically accept bitcoin as a legal tender currency.
Now, one might be tempted to utter, “He didn’t say ‘Ethereum,’ he’s talking about other projects.” Fine then, let’s continue, young padawan.
“Trying To Decentralize”
Buterin then provided a roadmap for how one might achieve his “standards” of trustlessness and censorship resistance.
Let’s break them down. First, “second tier of staking.” What’s he going on about? What is “staking” and how does it work?
Staking is the basis of a consensus model known as “proof of stake,” used by other cryptocurrency platforms; Ethereum would be the most prominent to use it, if it ever makes good on its promises to adopt it.
A consensus model is a way for all of the nodes, or participants in the network, to agree on the information contained within each block of its blockchain. These second-tier holders would validate, while the larger “stakers” would create blocks.
Bitcoin runs on a model called “proof of work.” In this consensus model, think of computers using electricity to solve a puzzle. The resources spent to solve the puzzle are the “work” in proof of work. It actually requires effort and resources.
Ethereum’s proof of stake, however, would require no resource expenditure once it switches from proof of work (God knows when that would be as they change the date constantly), which is cited as a feature, not a bug, by its proponents.
But if there’s no resource expenditure, how are the blocks validated through consensus? The answer is: staking. In order to stake on the Ethereum network, you would be required to have 32 ether. Going off of the floating value of $4,000 per ether, that puts the requirement at roughly $128,000 worth of ether to be a validator. Staking means locking your funds up with the network, so you can’t touch these staked ether, or move them. Your asset is at stake, and can be lost. You are giving the network the ability to use your funds. See where the name comes from?
“An attestation is a validator’s vote, weighted by the validator’s balance,” Ethereum’s documentation explains. “Attestations are broadcasted by validators in addition to blocks.”
The higher the balance (with a maximum of 32 ETH per validator), the more weight the vote carries in validating transactions, which is not to be confused with creating a new block. The more ETH you have available, the more likely you are to be chosen to participate in the process, whether through weighted voting or by running multiple validators each holding the maximum amount of ETH.
This attestation, or validation, is where the aforementioned second tier of staking comes into play.
A “second tier” would allow those with smaller amounts of money to stake as well, but this doesn’t change the fact that those with the most ultimately control everything. This is just to make retail investors feel better about themselves.
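The dynamic described above can be sketched in a few lines of Python. This is a toy model, not Ethereum’s actual validator-selection algorithm, and all the stake figures are invented for illustration; it simply shows that selection odds scale directly with stake, so whoever controls the most ether controls the most attestations.

```python
# Toy stake-weighted validator selection (NOT Ethereum's real algorithm).
# Selection probability is proportional to stake, so an entity running
# many maxed-out 32-ETH validators dominates the attestation process.
import random

stakes = {
    "whale_1": 32 * 100,  # one entity running 100 maxed-out validators
    "whale_2": 32 * 50,
    "retail_1": 32,       # a single 32-ETH validator
    "retail_2": 32,
}

def pick_validator(stakes, rng):
    """Pick one validator, with probability proportional to its stake."""
    total = sum(stakes.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for name, stake in stakes.items():
        acc += stake
        if r <= acc:
            return name
    return name  # guard against float rounding at the top end

rng = random.Random(0)
picks = [pick_validator(stakes, rng) for _ in range(10_000)]
for name in stakes:
    print(f"{name}: {picks.count(name) / len(picks):.1%} of selections")
```

With these made-up numbers, the two whales hold about 98% of the stake, so they win about 98% of the selections; a “second tier” lowers the entry price without touching that arithmetic.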
Next, let’s refer back to Buterin’s second point from the “Endgame” roadmap, “Introduce either fraud-proofs, or ZK-SNARKs.”
This is basically a way of compressing data so that the validators are not required to see as much of the information. This is accomplished by providing a public set of parameters or rules for validating the information.
The problem here is that trust is usually required. If the secret parameters used to generate the public setup are not deleted by the participant who created them, they can be used maliciously to counterfeit currency.
I won’t go into a massive explanation of what these things are; just know that the point is to compress data into a cryptographic format so that smaller validators have less work to do. This is hardly a foolproof system, given the trust built into most of its use cases.
“Hence, for this to work it’s absolutely imperative that whoever creates those points is trustworthy and actually deletes k once they created the ten points. This is where the concept of a ‘trusted setup’ comes from.”
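The danger that quote describes can be made concrete with a toy polynomial commitment in Python. This is a deliberately simplified, insecure sketch (tiny numbers, no pairings, nothing like a production SNARK): the setup publishes powers of a secret point s “in the exponent,” and anyone who secretly keeps s can construct different “data” that yields the exact same commitment.

```python
# Toy illustration (NOT real cryptography) of why the "trusted setup"
# secret must be destroyed. A KZG-style commitment to a polynomial f is
# C = g^(f(s)) mod p, built from published powers g^(s^i).
p = 2_147_483_647   # prime modulus for the toy group (2^31 - 1)
q = p - 1           # order of the multiplicative group
g = 7               # toy base element
s = 123_456         # the setup secret ("toxic waste") -- must be deleted!

# Public setup parameters: g^(s^i) mod p for small i.
setup = [pow(g, pow(s, i, q), p) for i in range(4)]

def commit(coeffs):
    """Commit to polynomial coefficients using only the public setup."""
    c = 1
    for ci, pt in zip(coeffs, setup):
        c = (c * pow(pt, ci, p)) % p
    return c

honest = [5, 3, 0, 1]   # f(x) = 5 + 3x + x^3
C = commit(honest)

# An attacker who KEPT s can forge: f'(x) = f(x) + t*(x - s) satisfies
# f'(s) = f(s), so different "data" produces the SAME commitment.
t = 42
forged = honest[:]
forged[0] = (forged[0] - t * s) % q  # constant term gains -t*s
forged[1] = (forged[1] + t) % q      # x term gains +t

print("forgery succeeds:", forged != honest and commit(forged) == C)
```

An honest participant deletes s immediately after publishing the setup; if anyone retains it, every commitment made against that setup can be forged, which is exactly why the trust in a “trusted setup” matters.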
Later on in that post, Buterin discusses his hope that ZK-SNARK rollups would scale, which would make it “a difficult market to enter” by making the process more of a strain on the validator.
It’s important to note that while SNARKs require a trusted and permissioned private key, there are other options available. zk-STARKs, for instance, seek to resolve this issue.
“First and foremost, zk-STARKs have solved the trusted setup problem. They completely remove the need for multiple parties to create the private key needed for the string. Instead, everything needed to generate the proofs is public and the proofs are generated from random numbers. zk-STARKs actually remove the requirement in zk-SNARKs for asymmetric cryptography and instead use the hash functions similar to those found in Bitcoin mining.”
Why would this not be the default solution to retain a trustless system? Buterin answered that on his blog:
“However, this comes at a cost: the size of a proof goes up from 288 bytes to a few hundred kilobytes. Sometimes the cost will not be worth it, but at other times, particularly in the context of public blockchain applications where the need for trust minimization is high, it may well be.”
Developers could work to shrink these proof sizes and allow for smaller datasets; however, in typical Ethereum fashion, the focus is scale and speed. There’s no value placed on decentralization or trustlessness, only efficiency. Which is exactly why zk-STARKs were not addressed in “Endgame.”
Remember earlier when we talked about Buterin’s “standards” for trustlessness, and centralization? Let’s continue, because all I see is required trust and centralized liquidity.
The next two steps Buterin included in his roadmap, “data availability sampling” and “secondary transaction channels,” will be addressed briefly. Data availability sampling is a way for validators to check that block data is actually available while downloading only a small portion of the blockchain, avoiding larger download requirements.
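The sampling idea can be sketched with a toy simulation (hypothetical numbers, not the actual Ethereum design): a light validator requests a handful of random chunks of a block, and a producer withholding even a quarter of the data is caught with overwhelming probability.

```python
# Toy data availability sampling (illustrative only).
# A block is split into chunks; a dishonest producer withholds 25% of them.
# A sampler downloads only a few random chunks instead of the whole block.
import random

CHUNKS = 256
withheld = set(range(64))  # producer hides chunks 0..63 (25% of the block)

def looks_available(rng, samples=20):
    """True only if every sampled chunk can actually be fetched."""
    return all(rng.randrange(CHUNKS) not in withheld for _ in range(samples))

# Chance of missing the withheld 25% in all 20 samples: 0.75**20, ~0.3%.
trials = 1_000
fooled = sum(looks_available(random.Random(seed)) for seed in range(trials))
print(f"sampler fooled in {fooled} of {trials} trials")
```

Each sample is cheap, yet the chance of being fooled shrinks exponentially with the sample count, which is what lets small validators check availability without large downloads.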
Secondary transaction channels would work like the Lightning Network mentioned earlier: a Layer 2 that allows transactions to happen off-chain, to be submitted at a later point. There’s nothing inherently wrong with wanting a Layer 2 protocol for scale, but when the need for one emerges from centralized control of data driven by massive block sizes, that is a problem.
Still with me? On we go!
End Goal For The “Endgame”
In “Endgame,” Buterin then addresses what the fruits of this labor would hold:
“What do we get after all of this is done? We get a chain where block production is still centralized, but block validation is trustless and highly decentralized, and specialized anti-censorship magic prevents the block producers from censoring.”
Block production is still centralized. The entire consensus model that dictates the whole network is still controlled by those who have the most money. “Validation” at this point means trusting random nodes to verify a zk-SNARK with little information, coming to a two-thirds agreement in order to meet an arbitrary threshold and stamp it complete.
But, he said block validation is trustless, right? Hardly. We discussed how the idea of zk-SNARKs will lead to creating a trusted party. Seems like the opposite of trustlessness to me.
Even saying that block validation would be “highly decentralized” still seems like a stretch. Would it be more decentralized than if the change isn’t made? Absolutely. But when you’re starting from zero, any increase looks better than nothing.
Scaling The Centralization
“Imagine that one particular rollup – whether Arbitrum, Optimism, Zksync, StarkNet or something completely new — does a really good job of engineering their node implementation, to the point where it really can do 10,000 transactions per second if given powerful enough hardware.”
This is the best part, because what do you think he wrote after the paragraph that followed?
“Once again, we get a world where block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented.”
Now remember, according to Buterin’s earlier statements in “Endgame,” zk-SNARKs would make this “a difficult market to enter.” Yet somehow scaling these rollups, which adds validation strain and deepens that centralization, is supposed to make block validation trustless? No. The third-party trust requirement simply now operates at a larger scale.
The Side Chick Problem Of Sidechains
This was Buterin’s comment in the blog when he began to address the idea of multiple rollups. When another project is built on top of Ethereum, users will often rely on a process known as bridging, which allows one to bounce between chains without paying fees, or gas, on the main chain (Ethereum).
“It seems like we could have it all: decentralized validation, robust censorship resistance, and even distributed block production, because the rollups are all individually small and so easy to start producing blocks in. But the decentralization of block production may not last, because of the possibility of cross-domain MEV.”
Let’s assume that I didn’t spend this entire article arguing that there is no decentralized block validation and that this entire paragraph is accurate. Pay attention to that last sentence: “Decentralization of block production may not last, because of the possibility of cross-domain MEV.”
What is cross-domain MEV? And didn’t this entire blog state repetitively that there is no decentralized block production already? Oh, he must be saying that the small amount that exists would die completely because of this. So, what is it? The term comes from a research paper titled “Unity Is Strength”:
“One example of such is the Ethereum modular architecture, with its beacon chain, its execution chain, its Layer 2s, and soon its shards. These can all be thought as separate blockchains, heavily inter-connected with one another, and together forming an ecosystem. In this work, we call each of these interconnected blockchains ‘domains,’ and study the manifestation of Maximal Extractable Value (MEV, a generalization of ‘Miner Extractable Value’) across them.”
In their example, the authors of “Unity Is Strength” are using Ethereum and Layer 2 protocols as separate blockchains, but deeply connected ones. A Layer 2 can be anything built on top of Ethereum that requires blocks to be solved.
“In other words, we investigate whether there exists extractable value that depends on the ordering of transactions in two or more domains jointly,” the “Unity Is Strength” authors wrote.
The MEV refers to the value you can extract by changing the ordering of transactions. So, imagine a scenario across multiple blockchains (or in Ethereum’s case, different second layer rollups, sidechains, etc.). Which chain comes first? Think about someone using Polygon (a Layer 2 protocol for Ethereum that seeks to transact between chains). Is there value to be extracted by placing the Ethereum transactions first? How does that affect the sidechain, relegated to a secondary, tertiary or even lesser level of importance? Polygon ends up with lower priority.
“We find that Cross-Domain MEV can be used to measure the incentive for transaction sequencers in different domains to collude with one another, and study the scenarios in which there exists such an incentive,” per the “Unity Is Strength” authors.
Cross-domain MEV is the process of determining the value of a specific sequencing order of transactions from two or more domains.
Which chain is more valuable in the sequence? More valuable chains give their consensus makers more leverage in negotiating to share profit with other chains when there is MEV to realize. This gives the consensus maker power and reason to prioritize one chain over another.
What happens as one chain becomes more important than another? The preferred, most important chain (Ethereum in this case) receives larger staking, which means much of the network becomes devoted to extracting its value. This creates demand on a specific side of the transactions, leading liquidity to centralize around the greatest extractable value. Now, not only is the consensus model centralized, but the entire platform becomes centralized against its own Layer 2 protocols. This dynamic creates the ability to distort consensus on other layers or chains.
Collusion across chains allows leverage to be held against the network as MEV is prioritized. The creation of a multitude of tokens leads to competition in MEV and creates a priority queue.
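A toy Python model (entirely made-up prices and transactions) makes the ordering point concrete: a sequencer who controls the joint ordering across two domains can search all orderings for the one that maximizes its own extractable profit.

```python
# Toy cross-domain MEV (illustrative only, invented numbers).
# The sequencer's arbitrage only pays off if "buy_cheap" on domain A
# lands before "sell_dear" on domain B in the JOINT ordering.
from itertools import permutations

txs = [("A", "buy_cheap"), ("B", "sell_dear"), ("A", "victim_buy")]

def sequencer_profit(order):
    """Profit the sequencer captures from one joint ordering of both domains."""
    price_a = 100   # toy asset price on domain A
    price_b = 110   # toy asset price on domain B (fixed, higher)
    bought_at = None
    profit = 0
    for _domain, tx in order:
        if tx == "victim_buy":
            price_a += 5                  # victim's trade pushes A's price up
        elif tx == "buy_cheap":
            bought_at = price_a           # sequencer buys on A at current price
        elif tx == "sell_dear" and bought_at is not None:
            profit = price_b - bought_at  # sells on B, pocketing the spread
            bought_at = None
    return profit

profits = {order: sequencer_profit(order) for order in permutations(txs)}
best = max(profits, key=profits.get)
print("best joint ordering:", [tx for _, tx in best], "-> profit:", profits[best])
```

Here the spread captured ranges from 0 to 10 depending purely on the joint ordering, so a sequencer with influence over both domains has a direct financial incentive to coordinate, or collude on, that ordering.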
I don’t think Buterin is maliciously intending to be deceitful. I have respect for what he has accomplished, and this is in no way meant to be an attack on him, or his future ambitions. But I purposefully reject this narrative.
His blog started by admitting that he was giving in to centralization and requiring trust, but that it was being done in a way that meets his “standards.” The small amount of decentralization that remains in Ethereum block production will die as this roadmap completes. The addition of zk-SNARKs, or any other zero-knowledge proof method they attempt to install, will result in scaling that leads to even further centralization. Money will dictate this platform, and maybe that’s the intention. I admire the efforts of scaling and secondary tiers of staking in order for retail to have a larger presence. But that doesn’t make it right.
Bitcoin maintains its low block size so that nodes and miners alike can participate without massive hardware requirements, or unsustainable liquidity demands. While Ethereum upgrades focus on creating a false ideology of decentralization, Bitcoin’s upgrades will continue supporting world-changing development, furthering security, scaling with little-to-no fees (Strike, we love you), and allowing its users the privacy they deserve.
This is a guest post by Shawn Amick. Opinions expressed are entirely their own and do not necessarily reflect those of BTC Inc or Bitcoin Magazine.