I first discovered Bitcoin in 2010 while doing research on distributed systems. I have followed it closely ever since and have continually researched the technology. Today, I write blockchain software at the transaction level, including custom scripts and signatures, and I have a very good understanding of the cryptography involved. I was also the first person to provide software capable of generating segwit transactions to create (claim) transactions on many of the forks - including BCH (before its replay protection). I also know Bitcoin Cash fairly well.
This post is about my views on the scaling approaches and forks.
The problem with scaling is not hard drive space. It's not block size. It's partly networking and CPU, but really, it's a systems problem. It's a systems problem that BCH does not solve, and that the approach taken by BCH to date will not solve. The current architecture of the BTC and BCH blockchains has limitations. LN does not remove those limitations, but it does mitigate the problem until a true solution can be developed.
I'll start by explaining what's wrong with the approach taken by BCH.
Problems with BCH
- BCH is centralized. The fact that you can actually do a hard fork is proof: hard forks can only happen when development and control are centralized. This is a philosophical flaw in BCH. You cannot have a (freedom-providing) currency which is centralized. Mitigation: soft forks - see BTC point 1 below.
- The (former) quadratic hashing problem, and verification scalability. The amount of data hashed to compute or verify signatures grows with the square of the number of non-segwit inputs. As blocks get bigger, the computation required increases quadratically. This is a processing problem: larger, more complex transactions and blocks take significant CPU resources. Mitigations:
A) Segwit / BIP-143. BCH actually adopted the hashing method used in Segwit for its transactions - which happened after the initial fork. However, the BCH implementation does not fix malleability as Segwit does. So BCH and BTC have more-or-less resolved the quadratic hashing problem (BCH copied the BTC solution).
B) BCH checkpoints. This one is highly controversial, and also a huge risk for chain fragmentation. There is now a rolling 10-block checkpoint, rolled out via hard fork by Bitcoin ABC, meaning nodes reject any reorganization deeper than 10 blocks from the tip. So what happens if an 11-block orphaned chain is broadcast to the network? BCH nodes would not agree on which is the true blockchain, since both are valid. How hard would this be to do? BCH has about 1-2% of the hashing power of the BTC network. In terms of cost, that would take an attacker about $100,000 to have a chance of executing the attack.
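The quadratic growth from point 2 can be sketched numerically. This is a simplified size model, not real Bitcoin serialization - the byte counts below are rough assumptions - but it shows the shape of the problem: with the legacy sighash, signing each input hashes a copy of the whole transaction, while a BIP-143-style digest hashes a roughly fixed-size preimage per input.

```python
# Simplified model (byte counts are assumptions, not real serialization).

INPUT_BYTES = 148    # rough size of a legacy input (assumption)
OUTPUT_BYTES = 34    # rough size of an output (assumption)
OVERHEAD_BYTES = 10  # version, counts, locktime (assumption)

def legacy_hashed_bytes(n_inputs: int, n_outputs: int = 2) -> int:
    tx_size = n_inputs * INPUT_BYTES + n_outputs * OUTPUT_BYTES + OVERHEAD_BYTES
    return n_inputs * tx_size            # each input re-hashes the whole tx

def bip143_hashed_bytes(n_inputs: int, n_outputs: int = 2) -> int:
    PREIMAGE_BYTES = 156                 # fixed-size digest preimage (assumption)
    return n_inputs * PREIMAGE_BYTES     # linear in the number of inputs

for n in (10, 100, 1000):
    print(n, legacy_hashed_bytes(n), bip143_hashed_bytes(n))
```

Going from 100 to 1000 inputs multiplies the legacy hashing work by roughly 100x, while the BIP-143-style work grows by exactly 10x.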
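That ~$100,000 figure can be reproduced with back-of-envelope arithmetic. The rental rate below is a hypothetical illustrative number, not a market quote; the point is only that reorganizing a minority chain is within reach of a motivated attacker.

```python
# Back-of-envelope cost of secretly mining an 11-block BCH chain with
# rented hashpower. The rental rate is a hypothetical assumption.

BLOCKS = 11
BLOCK_TIME_MIN = 10.0              # target block interval, minutes
NETWORK_RENT_USD_PER_MIN = 900.0   # assumed cost to rent hashpower equal
                                   # to the entire BCH network (hypothetical)

multiple = 2.0                                 # attacker rents 2x the network rate
minutes = BLOCKS * BLOCK_TIME_MIN / multiple   # expected time to find 11 blocks
cost = minutes * multiple * NETWORK_RENT_USD_PER_MIN
print(f"~${cost:,.0f}")                        # ~$99,000
```

Note that the multiple cancels out: the total cost is set by the network rental rate and the reorg depth, not by how fast the attacker chooses to mine.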
- Transaction censorship. Yes - you read that right. BCH can censor transactions, and this has already happened. In May this year, there was an "attempt at a 51%" attack by an unknown miner. From the article: "When the unknown miner tried to take the coins themselves, [BTC.top and BTC.com] saw & immediately decided to re-organize and remove these [transactions], in favor of their own [transactions], spending the same P2SH coins, [and] many others". Can you believe this? If they can justify this for something they labeled an attack, they can do it for any other reason as well. This goes back to the dangers of point 1 above. Mitigation: ? How can you ever be sure the coin will not be censored?
- It's a pseudo cargo cult. BCH has Schnorr signatures before BTC. How can this be, you ask? Well, the signature scheme used does not actually make much of a difference to the blockchain. BTC originally (as developed by Satoshi) used the Elliptic Curve Digital Signature Algorithm (ECDSA) over the secp256k1 curve. Schnorr signatures work over the same curve, so the same private key can be used to generate a Schnorr signature (and Schnorr is technically a better scheme than ECDSA). Basically, a signature is a signature, so it's actually somewhat trivial to swap out ECDSA for a Schnorr scheme via hard fork (oh, the centralization). The advantage of Schnorr is that it makes signature aggregation easy (well, at least possible). This makes verifying transactions much faster and also reduces the size of transactions - a great improvement for the problem in point 2. Again, we see BCH taking the work of Blockstream developers (i.e. Pieter Wuille) and implementing it in their own coin.
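To see why "a signature is a signature", here is a toy Schnorr implementation in a small prime-order group. This is NOT secp256k1 and the parameters are deliberately tiny and insecure; it only illustrates that the private key is just a discrete-log secret, and that the signature equation s = k + e*x is linear in the secrets - which is exactly the property that enables aggregation.

```python
import hashlib
import secrets

# Toy group: the order-q subgroup of Z_p^*, with p = 2q + 1 a safe prime.
# Insecure toy parameters for illustration only.
p = 2039
q = 1019
g = 4             # a square mod p, so it generates the order-q subgroup

def keygen():
    x = secrets.randbelow(q - 1) + 1  # private key: a discrete-log secret
    return x, pow(g, x, p)            # (private, public)

def challenge(R, P, msg):
    data = f"{R}|{P}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x, P, msg):
    k = secrets.randbelow(q - 1) + 1  # per-signature nonce
    R = pow(g, k, p)
    e = challenge(R, P, msg)
    s = (k + e * x) % q               # linear in k and x -> aggregatable
    return R, s

def verify(P, msg, R, s):
    e = challenge(R, P, msg)
    return pow(g, s, p) == (R * pow(P, e, p)) % p   # g^s == R * P^e

x, P = keygen()
R, s = sign(x, P, "hello")
print(verify(P, "hello", R, s))            # True
print(verify(P, "hello", R, (s + 1) % q))  # False: forged s
```

Because verification is just checking g^s == R * P^e, sums of signatures verify against sums of keys and nonces - the basis of the aggregation schemes (e.g. MuSig) built on Schnorr.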
Another non-technical issue I have is that while the BTC camp also has some very vocal "strange" ideas, the schism in thinking in the BCH camp astounds me. They claim to follow "Satoshi's Vision" (not referring to the fork here) and yet are actually implementing things like 10-block checkpoints.
So what is BTC doing? Here are a few things I'm excited about.
- Upgrade through Soft Forks (Thanks, SegWit). There is no central development in Bitcoin. A hard fork upgrade is essentially not possible since BTC is actually decentralized. Luckily, an upgrade path was introduced with SegWit, so future soft forks will be easier. So why don't we see them coming now? Well, the first soft fork will likely deploy many features all at once - including Taproot, NOINPUT, and Schnorr. Why soft forks? The short answer is trust and safety. A soft fork is activated by a majority of hash power, you don't need to upgrade all nodes at once, and the risk of a chain split is mitigated. When you're dealing with something worth $200 billion and growing in market cap, you want to take the safe path. It's not just in case of software error - it's also a philosophical fundamental of Bitcoin: decentralization and trust (or rather, the lack of need for trust). That's what keeps giving BTC its value (not the 1MB block limit).
- Transaction efficiency. Ok, so let's talk about that 1MB block limit the BCH camp is obsessed with. It was introduced in 2010 by Satoshi as a soft fork, meant to mitigate a transaction inflation attack. The actual objective of blockchain scaling is to increase the transaction rate. While this can be achieved by increasing block size (which comes at a cost, and also has limits), it can also be achieved by reducing the size of transactions. That's what SegWit does. That's what Schnorr does. That's what Rootstock does. That's what scriptless scripts will do. The goal is to increase transaction rates. That's what is going on here. Is an increase to the block size still possible? Sure, it may be, but I believe that should be a last resort after the real innovations in transaction efficiency have been applied first.
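The arithmetic here is simple: throughput is block space divided by transaction size, so halving the average transaction size has exactly the same effect on transaction rate as doubling the block size. The average sizes below are illustrative assumptions, not measured values.

```python
# Throughput = block space / avg tx size / block interval.
BLOCK_BYTES = 1_000_000     # the 1MB limit
BLOCK_INTERVAL_S = 600      # target 10-minute blocks

def tx_per_second(avg_tx_bytes: int) -> float:
    return BLOCK_BYTES / avg_tx_bytes / BLOCK_INTERVAL_S

print(round(tx_per_second(500), 2))   # ~3.33 tx/s at 500-byte transactions
print(round(tx_per_second(250), 2))   # half the tx size -> double the rate
```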
- Privacy. We have this now. LN finally gives BTC its first actual transaction privacy. Via layer 2, BTC can now process (and settle) more transactions than all of VISA/Mastercard. That's referring to quantity, not yet variety - that will come with other LN additions such as splicing and multi-party channels. Other privacy work in development includes Schnorr aggregation and scriptless scripts (Mimblewimble). The future is bright.
Some other thoughts:
I do not support the "BTC is like gold" arguments. I strongly believe BTC is to become a currency. This is why I support LN now - and transaction efficiency. I do not want high fees (blamed on the 1MB limit), but I also appreciate the need for a fee market as block rewards decrease.
I am not opposed to a blocksize increase. But first we need to improve transaction efficiency.
The UTXO database size will eventually become an issue. When keys hold only a few satoshi of value each, we are facing a database on the order of 2,100,000,000,000,000 entries. That's 2.1 quadrillion entries (assuming we don't add more decimal places). This is a database which is to be synchronized across all nodes on the planet. How will we manage scaling this? Well, there are proposals for sharding, and really, the solution may be to use layer 2 for transactions: the database becomes a settlement layer storing the channels, and future transactions happen on layer 2.
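The 2.1 quadrillion figure comes straight from the supply arithmetic:

```python
# Worst case: every satoshi of the eventual 21M BTC supply sits in its
# own UTXO (and no further subdivision of the satoshi is ever added).
TOTAL_BTC = 21_000_000
SAT_PER_BTC = 100_000_000
max_utxo_entries = TOTAL_BTC * SAT_PER_BTC
print(f"{max_utxo_entries:,}")   # 2,100,000,000,000,000
```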