Devs: We're adding Schnorr signatures to bitcoin!
Me: Awesome.
Devs: Also we're enhancing privacy by masking output scripts behind the most-common spend pathway.
Me: Seems smart.
Devs: You can even use ring signatures for large anonymity set proofs of membership!
Me: Wow, that is cool!
Devs: And we're arranging for every bitcoin to get stolen at some unpredictable date in the near future.
Me: Wait, what?
Devs: ...
What is this Taproot thing anyway?
“Taproot” is the latest soft-fork feature extension to be proposed by the Bitcoin developers to the community for adoption. Most people reading this are probably familiar with the proposal, but I’m sure there are some of you feeling out of the loop. Maybe you were forwarded this essay by a friend but are not yourself up to speed with the latest goings-on in the Bitcoin space. Or maybe you've been living under a rock. Either way, here’s the short summary:
The developers of the Bitcoin Core reference implementation are proposing that the Bitcoin network adopt a new type of script for protecting bitcoin outputs, one which is related to the old scripting system but carries many significant advantages. Most importantly, this new script replaces the venerable ECDSA signature algorithm with the simpler, easier to implement, and more capable Schnorr digital signature scheme. The simpler mathematics defining Schnorr signatures let many multi-party or multi-key use cases perform an optimization at signing time which produces smaller transactions, without changing the underlying cryptographic security. When this optimization is done, the multi-party/multi-key use cases become indistinguishable from the single-key signatures produced by typical wallets. This allows advanced users with bespoke signing rules to “hide in the crowd” and prevents their custom rules from watermarking their transactions.
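To make that concrete, here is a toy sketch of two-party Schnorr aggregation in Python, over a small multiplicative group rather than secp256k1. Be warned: this is the naive scheme (vulnerable to rogue-key attacks; the construction actually proposed for Bitcoin is the hardened MuSig protocol), and the group parameters and helper names are invented for illustration:

```python
import hashlib

# Toy Schnorr group: safe prime p = 2*q + 1, generator g of prime order q.
# (Bitcoin uses the secp256k1 elliptic curve group instead; the linearity
# argument is identical, only the group differs.)
p, q, g = 2039, 1019, 4

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def partial_sign(x, k, msg, R, Y):
    # Schnorr partial signature s = k + e*x (mod q); the challenge e
    # commits to the aggregate nonce R and aggregate key Y.
    e = H(R, Y, msg)
    return (k + e * x) % q

def verify(Y, R, s, msg):
    e = H(R, Y, msg)
    return pow(g, s, p) == (R * pow(Y, e, p)) % p

# Two signers with independent keys and nonces.
x1, x2 = 123, 456        # private keys
k1, k2 = 789, 321        # nonces (toy values; real nonces must be random)
Y = (pow(g, x1, p) * pow(g, x2, p)) % p   # aggregate pubkey = product of pubkeys
R = (pow(g, k1, p) * pow(g, k2, p)) % p   # aggregate nonce

s = (partial_sign(x1, k1, "close channel", R, Y)
     + partial_sign(x2, k2, "close channel", R, Y)) % q
assert verify(Y, R, s, "close channel")   # the partials sum to one signature
```

The whole trick is linearity: the partial signatures simply add up to one ordinary-looking signature under the combined key.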
Like the Segwit update which activated in 2017, there are also a slew of other script improvements being added along with Taproot, more because of the presented opportunity than because of any relation to Schnorr signatures per se[1]: an inner script version for easier script updates, a signature “annex” which will be used in the future to implement delegation and perhaps other digital signature extensions, a new signature verification opcode, and the relaxation of various script limits.
The bitcoin developers recognized that nearly all smart contracts on Bitcoin are of the form “either everyone agrees, or ensure the following conditions are met for conflict-resolution: _________,” that the “everyone agrees” part can be expressed as a multi-party/multi-key Schnorr signature, and that most contracts are resolved cooperatively[2]. To capitalize on this fact, Taproot privileges the “everyone agrees” cooperative closure by making it look on-chain like a single-key signature, with no publicly visible indication of the other, unused conditions for conflict resolution.
So far, so good. These are all things that I have advocated for in the past, and some of which were even my suggestions or related to features that have already activated on Tradecraft/Freicoin. I’m definitely for the incremental evolution of Bitcoin in the above directions.
Now as a final efficiency gain, the proposed Taproot extension places the “everyone agrees” multi-key aggregate Schnorr pubkey as a naked pubkey in the scriptPubKey of a Taproot output. Unlike Tradecraft/Freicoin’s MAST[3] implementation, and unlike BIPs 98/116/117 which I proposed to Bitcoin, the conflict resolution mechanisms of Taproot are hidden by committing to them inside the aforementioned naked pubkey which is part of the UTXO set. If a conflict-resolution script is required, the commitment is revealed at spend time. Otherwise the cooperative-close outcomes look like any other single-key spend. This clever hack achieves privacy benefits by allowing all outputs and cooperative spends to look the same on-chain, saves space because the public key doesn’t need to be repeated in the spend witness, and allows for some pretty cool tricks like efficient non-interactive ring-signature proofs of ownership that confirm you own an output of a specific value but without revealing exactly which output is yours. Even those few scripts which don’t share an “everyone agrees” multi-party cooperative outcome can participate by choosing a nothing-up-my-sleeve random pubkey, which nobody knows the discrete log for, as the base key to which they commit their actual spend script.
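And here is the Taproot commitment trick itself, sketched in the same toy group as the Schnorr example above. The hash is a simplified stand-in for BIP-341’s tagged hashes, and the names are my own:

```python
import hashlib

p, q, g = 2039, 1019, 4   # same toy group as the Schnorr sketch above

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = 123                                  # internal "everyone agrees" key
Y = pow(g, x, p)
root = "merkle root of the conflict-resolution scripts"

t = H(Y, root)                           # tweak commits to the script tree
Y_out = (Y * pow(g, t, p)) % p           # the naked pubkey published on-chain

# Cooperative close: sign with the tweaked secret x + t; on-chain this is
# indistinguishable from any ordinary single-key spend.
assert pow(g, (x + t) % q, p) == Y_out

# Conflict resolution: reveal Y and the tree root; anyone can verify that
# Y_out really committed to those scripts all along.
assert (Y * pow(g, H(Y, root), p)) % p == Y_out
```

Note what the sketch makes plain: Y_out sits on-chain as a spendable public key from the moment the output is created, which is exactly the property at issue below.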
However let me repeat: this final efficiency gain and other benefits come from presenting a naked secp256k1 pubkey on chain at the time the output is created which has absolute spend authority for the underlying bitcoins. And this, I’m afraid, is where future historians will note that it all went terribly, terribly wrong.
Let’s talk about classical computers
No I’m not talking about the Commodore 64 or the Apple ][, the classic computers of my childhood. At least not specifically. I’m talking about machines studied by the field of computer science, which encompasses all information processing devices that can be built in a universe governed by the classical laws of Newtonian mechanics or Maxwell’s electromagnetism. 5nm silicon transistors, or the clockwork mechanism of a fine Swiss watch—they all share the same properties, which have been worked out by computer scientists over the better part of the last century.
Classical computers push information around, store it, transform it, and compare it with other data to make decisions. The fundamental unit of a digital computer is the bit, representing a 1 or a 0. Anything can be reduced to mathematical models, and all math can be represented with digital logic. And with maths we can show the inherent difficulty of various practical problems, meaning the minimum amount of work that must be done to solve a problem of a general sort. For example, if you give me a list of 1,000 integers in random order I can write a program that sorts the list. But no matter how I structure my program, it will still take at least 10,000 steps to sort, as sorting has an inherent complexity of O(n*log(n))[4].
Somewhat surprisingly, we don’t have airtight proofs about the complexity of solving various number-theory challenges that make up the base primitives of cryptography. But we have general attacks that work against any crypto system, and strong reasons[5] to believe that no better attack exists for the systems that are currently in use.
For a hash function like SHA-256, it takes 2^N operations to find an input (not necessarily the original input) which hashes to a given value, called a preimage, and 2^(N/2) operations to find two inputs which hash to the same value (a collision), where N is the bit length of the hash output (N=256 for SHA-256, hence the name).
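You can watch that 2^(N/2) birthday bound in action by deliberately weakening the hash. A toy Python experiment, truncating SHA-256 to 32 bits so it finishes in seconds:

```python
import hashlib, itertools

def h32(i: int) -> bytes:
    # Toy 32-bit hash: SHA-256 truncated to 4 bytes (N = 32).
    return hashlib.sha256(str(i).encode()).digest()[:4]

seen = {}
for i in itertools.count():
    d = h32(i)
    if d in seen:
        # Expected after roughly 2^(32/2) = 65,536 attempts; a specific
        # preimage would instead take ~2^32 attempts. The same scaling
        # with N = 256 yields the 2^128 and 2^256 figures above.
        print(f"collision after {i} tries: inputs {seen[d]} and {i}")
        break
    seen[d] = i
```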
For elliptic curve digital signatures, it takes 2^(N/2) operations to uncover the private key for a given public key using Pollard’s rho algorithm, where N is the bit length (log2) of the group size. Bitcoin’s secp256k1 curve is a group of almost 2^256 elements, so it takes approximately 2^(256/2) = 2^128 operations to cryptographically break a key.
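Square-root attacks are easy to demonstrate at toy scale. The sketch below uses baby-step giant-step, a close cousin of Pollard’s rho with the same square-root cost (rho’s advantage is that it needs almost no memory), against a 1019-element group:

```python
from math import isqrt

def bsgs(g, y, p, q):
    """Solve g^x = y (mod p) in ~2*sqrt(q) group operations, where g has
    prime order q. The same square-root scaling gives ~2^128 operations
    for a group of almost 2^256 elements."""
    m = isqrt(q) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: table of g^j
    step = pow(g, -m, p)                         # g^(-m) mod p (Python 3.8+)
    gamma = y
    for i in range(m):                           # giant steps: y * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * step) % p
    return None

# Toy group: p = 2039 = 2*1019 + 1, with g = 4 of prime order q = 1019.
assert bsgs(4, pow(4, 777, 2039), 2039, 1019) == 777
```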
Let’s talk about quantum cryptography
The laws of classical physics, Newtonian dynamics and Maxwell electromagnetism, assume the world consists of things with definite, real, determinate states at every moment. It is not possible for a transistor to be both signaling and not signaling at the same time, for a capacitor to be both charged and empty, or for a cat to be both alive and dead. However, in the 1920s physicists discovered that the world does not operate so neatly at the subatomic scale: isolated particles can exhibit weird behavior where they occupy two or more states at once, and are randomly assigned their final state only when measured. Critically, two or more particles that are experiencing such quantum ambiguity can interact and cause their states to become entangled with each other, meaning that the value of one depends on the still-undetermined value of the other.
Naturally, physicists and computer scientists wondered if devices could be built which make use of these quantum properties to perform computation. The answer is yes, and it turns out that such machines can do things which classical computers cannot. The intuition is not too complex: a classical computer works by calculating answers in a mechanical way. If you are looking for a solution to a problem, you need to check every plausible input until you find the answer you are looking for. If you are smart you can reduce your work by using the structure of the input to avoid testing more solutions than are necessary. Cryptography is built up from problems for which we believe there are no faster ways to solve than trying every input[6].
But quantum computers work differently. Essentially you entangle a bunch of quantum bits, or “qubits” together, initialized at first to hold all possible values. Then you constrain their entangled state in such a way as to block-off any wrong answers, so that whatever value they do hold is a solution. You then measure the qubits, forcing them to resolve to a single value. But they’re so heavily constrained that the only value they can take is a right answer for the problem you’re trying to solve.
To solve a 256-bit elliptic curve discrete logarithm, for example, would require a few thousand qubits and around a hundred billion gate operations—the aforementioned constraints placed on the qubits. Keeping thousands of qubits stable for hundreds of billions of gate operations is a substantial challenge, but if you could, then you could reverse a secp256k1 pubkey into its underlying private key in a few minutes or hours.
The situation with hash functions is markedly better. Grover’s algorithm is the best available general approach for finding hash preimages, and Grover-based methods also give the best known collision attacks. It reduces the number of operations needed for finding a preimage to SHA-256 to 2^128, and roughly 2^85 for a collision. In the immortal words of Anatoly Dyatlov: “Not great, not terrible.” A reduction in security to roughly 85 bits of collision resistance is merely a regression to around pre-Segwit’s P2SH security level of 80 bits, which is still considered secure today.
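Grover’s quadratic speedup is simple enough to simulate classically at toy scale. Here is a state-vector sketch in Python/numpy that finds one marked item among 16 in about (pi/4)*sqrt(16) ≈ 3 iterations instead of ~16 guesses:

```python
import numpy as np

N, target = 16, 11                       # search space of 2^4 states
state = np.full(N, 1 / np.sqrt(N))       # uniform superposition over all inputs

for _ in range(int(np.pi / 4 * np.sqrt(N))):   # ~pi/4 * sqrt(N) = 3 iterations
    state[target] *= -1                  # oracle: flip the marked amplitude
    state = 2 * state.mean() - state     # diffusion: reflect about the mean

print(np.argmax(state ** 2))             # -> 11, with probability ~0.96
```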
Okay, but aren’t quantum computers way in the future?
No. This is a reasonable default intuition to have since quantum computers have been talked about for decades without much visible progress for most of that time. But advances are coming fast and furious now, with quantum capabilities increasing at a super-exponential rate.
The number of qubits which can be kept entangled together has been increasing at an exponential rate, doubling every year and a half or so.
The error rate of quantum computers has been decreasing at a similar pace, which means the number of supported gate operations has been exponentially increasing as a result.
On the theory side, there are continuing advances in the efficiency of quantum computing algorithms applied to cryptanalysis problems. Any reduction in the resources needed to break practical crypto systems brings closer the estimated date of a practical attack.
Each of these areas is increasing capabilities at an exponential rate, resulting in super-exponential progress in aggregate. We’re only about six doublings away from the scale needed to attack secp256k1 using current algorithms. At about 18 months between doublings, that could be as soon as the end of the decade. And lest you convince yourself we’re at the end of an adoption curve, consider this:
In 2019 and 2020 both IBM and Google demonstrated for the first time so-called quantum supremacy. They used quantum computers (roughly 50 qubits in size) to calculate solutions to problems which are infeasible to solve by classical means. The problems solved were toy problems to be sure, but not toy-sized instances. Quantum computers are now a practical means of solving real-world problems, and this has triggered a massive investment into the arms race for better quantum computers, including new entrants into the field. The level of funding and excitement is unlike anything we’ve seen in the past, and while that doesn’t change the fundamental challenges, it does mean that research is no longer resource-limited. Many hands make light work, many eyes make all bugs shallow. It is prudent to expect advances to accelerate, not slow, in the years ahead.
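For what it’s worth, the six-doublings timeline above is simple arithmetic. A back-of-the-envelope sketch, where the ~50-qubit starting point, the few-thousand-qubit target, and the 18-month doubling interval are this essay’s assumptions rather than measurements:

```python
qubits, year = 50, 2020        # ~50 entangled qubits demonstrated 2019-2020
while qubits < 3200:           # a few thousand qubits needed for secp256k1
    qubits *= 2                # one doubling...
    year += 1.5                # ...every ~18 months
print(qubits, year)            # -> 3200 2029.0: "the end of the decade"
```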
The situation is remarkably similar to the growth surrounding AI and deep learning in the last decade. Almost exactly 10 years ago, in 2011, Google advanced the field by spending millions of dollars to train a computer to recognize cat videos on YouTube. As I write this in 2021, Tesla is rolling out Full Self Driving to beta testers and “deep fakes” are destroying our trust in audio/video evidence. Do not underestimate what can be accomplished in 10 years of a growth industry.
Meanwhile, almost 5 years have passed since Segwit was merged into Bitcoin Core, and Segwit adoption still hovers around 50%. Do not make the mistake of overestimating how quickly new technology can be deployed and adopted in our industry. I’ll get back to this point.
What does the crypto apocalypse mean for Bitcoin?
Effective quantum computers at scale break the elliptic curve discrete logarithm: any exposed public key would be vulnerable, and could be solved into its underlying private key in mere minutes or hours. Such quantum computers would only weaken hash function effectiveness: enough to drive an orderly adoption of post-quantum cryptography, but not to such a degree that it would enable immediate theft of funds.
As noted in a footnote to BIP-341, there were about 6.25M bitcoins protected by known public keys in 2019. So that is 6.25M coins that are vulnerable to immediate theft if one had access to quantum computers large enough to perform secp256k1 discrete log calculations. That’s about 1/3 of all bitcoins in existence at this time.
That’s bad. Really, really bad. Worst-case scenario, if all 6.25M exposed coins suddenly move at once it would be the largest theft in history and utterly destroy public confidence in Bitcoin. More likely it would happen slowly over time, starting with older unidentified coins whose theft is less likely to be noticed by their owners. Still, someone would eventually notice, and when the word is out there would be market panic. As one of the developers of the Taproot proposal puts it:
I believe these numbers are evidence that the "public key hashes protect against a quantum computer" argument is (currently) bogus. Even when your own coins are encumbered with a PKH construction, you can't claim much security if 37% of the supply is at risk.
- Pieter Wuille (March 19, 2019)
I am disappointed in this tweet. First of all, it conflates two separate issues: recovery of lost or long-dormant coins from Bitcoin’s early history, and the theft of large amounts of more recent bitcoins exposed by address/pubkey reuse. Only about 1.75M bitcoins are vulnerable through exposed pubkeys on the blockchain, and most of these are early solo-mined coins or multi-key scripts which predate wide adoption of P2SH. More than 4M of the vulnerable coins are exposed through reused keys or scripts, a more recent and unfortunately common practice of certain custodial services. Any theft is bad, but there is a pragmatic distinction to be made between the theft of coins from a small number of early adopters who haven’t moved the coins in over a decade and may have lost the keys entirely, vs. raiding the cold wallet of a widely used custodial wallet or exchange. The former is regrettable but survivable; the latter would be a crisis of public confidence that we might well not be able to recover from. Fortunately though, key and script reuse for currently circulating bitcoins is precisely something that we can fix.
Second, it doesn’t clearly distinguish between two possible interpretations:
1. Bitcoin is currently more vulnerable to quantum breakthroughs than you might think. We need to deploy post-quantum cryptography and get people to move their coins prior to such breakthroughs.

or:

2. We can’t fix these quantum vulnerabilities, and the exposure is so high that Bitcoin will die once adversaries are able to break secp256k1. When that finally does happen, let us be thankful that we had a good run. So long and thanks for all the sats.
I’m not one to ascribe ill intent to the author of a tweet, given the inherent limitations of that communication platform. Besides, I’ve worked with Pieter for many years. I’m 99% sure he falls in (1). But what I can’t reconcile is how anyone who believes (1) can simultaneously be advocating for a technology which forces the user to opt into quantum vulnerabilities in order to receive the benefits of Schnorr signatures.
As developers, it is our responsibility to do what we can to prevent, forestall, or otherwise mitigate the downstream effects of this inevitable theft of coins. This means creating or maintaining quantum-resistant solutions for coin ownership, and promoting custodial practices which avoid creating quantum vulnerabilities, such as revealing the public key for an output only at time of spend rather than at time of creation.
Yes this means we’ll have an extra 32 bytes per output for Taproot—or perhaps an extra 48 bytes if we want to be proactive about post-quantum hash output widths. This also means we won’t be able to have super compact ring signatures without bulky zero-knowledge hash preimage proofs. But those are prices that we ought to be willing to pay for security against quantum adversaries.
But Taproot went the other way. In the Taproot model, we give up quantum resistance in exchange for these small efficiency gains and trendy crypto properties. This is, I believe, analogous to knowing full well the danger posed by global warming yet deciding to build new coal-fired power plants in 2021.
We can, and ought to do better.
What can we do to fix this?
Replacing hash functions with quantum-secure variants is easy—just increase the width. Switching from SHA-256 to SHA-512 would be more than adequate. Although replacing the hash function used in P2WPKH or P2WSH outputs would be relatively simple, it wouldn’t accomplish much because the chain ultimately depends on Merkle hash trees using SHA-256. Moving to SHA-512 for the whole chain would require using something like Forward Blocks, which isn’t quite ready yet. Still, the need isn’t that pressing: 80 bits of security is still sufficient for the foreseeable future.
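To illustrate how cheap the hash-side fix is, here is a hypothetical wider output commitment matching the “extra 48 bytes” figure mentioned earlier. The name and truncation choice are invented for this sketch; no such output type is actually specified anywhere:

```python
import hashlib

def p2qsh_commitment(witness_script: bytes) -> bytes:
    # Hypothetical "pay to quantum-safe script hash": SHA-512 truncated to
    # 384 bits, giving ~2^192 preimage resistance and ~2^128 collision
    # resistance even against quantum search, vs. today's 32-byte SHA-256
    # commitments.
    return hashlib.sha512(witness_script).digest()[:48]

print(len(p2qsh_commitment(b"OP_CHECKSIG")))   # -> 48 bytes per output
```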
Replacing elliptic curve digital signatures is far more important, but unfortunately there are fewer options available today. There are some mature post-quantum digital signature schemes, like the hash-based Lamport signatures, but they are large and don’t have attached zero-knowledge proof systems. There are a whole slew of post-quantum digital signature schemes being developed and pitted against each other in the NIST Post-Quantum Cryptography (PQC) standardization project. Some of these have the right properties Bitcoin developers would be looking for in a drop-in replacement for Schnorr signatures, such as a compact key and/or signature size, the ability to construct multi-key signature schemes, and an attached zero-knowledge proof system. But it is still a rapidly evolving field with many options and no clear winner. Since a consensus-layer digital signature scheme cannot be easily updated, the prudent choice is to wait and see how the PQC standardization finalists fare.
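Lamport signatures show both the promise and the size problem in about twenty lines of Python. A minimal sketch (toy code: strictly one-time use, and a real deployment would need hardened randomness and key-state management):

```python
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random preimages; the pubkey is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk   # ~16 KiB each: the "large" complaint made concrete

def msg_bits(msg):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one preimage per message bit: an ~8 KiB signature.
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][b] for i, (b, s) in enumerate(zip(msg_bits(msg), sig)))

sk, pk = keygen()
sig = sign(sk, b"spend output #0")
assert verify(pk, b"spend output #0", sig)   # security rests only on hashes
```

The security assumption is nothing but hash preimage resistance, which quantum computers merely dent; the cost is kilobytes of key and signature material per use.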
So really there isn’t a technical solution that is ready to deploy today. We need to sponsor work on replacing secp256k1 and SHA-256 with quantum-secure alternatives, such as one of the PQC finalists for digital signatures and a SHA-512 based Forward Block chain for hash tree replacement. But in the meantime we need to work with service operators to reform or retire protocols which cause public keys to be exposed, and to move old bitcoins which are vulnerable. We need to run a public service campaign to build awareness of the vulnerability and help people ensure their own coins are secured against quantum adversaries.
But more than anything else, we need to stop making the situation worse. Rule #1 when you find yourself in a hole: stop digging. We should NOT deploy Taproot as specified, or any other technology which requires naked pubkeys on-chain.
Where we go from here
There are serious existing quantum vulnerabilities in Bitcoin, and the proposed Taproot extension only makes the situation worse. We should not allow Taproot to activate on Bitcoin, or if it does then we should discourage its use.
We should invest time and resources into developing compact post-quantum digital signatures and zero-knowledge proof systems which can serve as a replacement for elliptic curve digital signatures. And we should continue work on transition mechanisms for moving off of SHA-256 based Merkle trees and commitment structures.
And while in the meantime we ought to bring the universally better Schnorr signatures to Bitcoin, we need to do so in the context of a proposal that does not make the Bitcoin ecosystem more vulnerable to quantum adversaries.
Footnotes

[1] However, the new signature verification opcode, CHECKSIGADD, is made possible by the properties of Schnorr signatures.
[2] It is essential that the conflict-resolution mechanisms exist, or else there would be endemic problems. But if the conflict-resolution mechanisms of a specific contract are present and ironclad, no party gains anything from avoiding a cooperative close, as the cooperative close usually involves smaller transactions and therefore lower fees assessed against all parties.
[3] Merkleized Alternative Script Trees. Tradecraft/Freicoin’s Segwit implementation allows the creator(s) of an output to commit to an arbitrary number of possible spend pathways with different scripts for each, but they only need to reveal one at time of spend.
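A sketch of the reveal-one-pathway idea, using plain SHA-256 concatenation in place of the domain-separated tree hashing a real implementation would use:

```python
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [H(x) for x in leaves]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

scripts = [b"cooperative close", b"timeout refund", b"arbiter spend", b"key recovery"]
root = merkle_root(scripts)   # only this 32-byte root is committed to

# Spending via scripts[1] reveals that one script plus two sibling hashes;
# the other three scripts stay private.
proof = [H(scripts[0]), H(H(scripts[2]) + H(scripts[3]))]
node = H(proof[0] + H(scripts[1]))        # scripts[1] is the right child
assert H(node + proof[1]) == root         # the path checks out against the root
```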
[4] Read “big-oh N-log-N.” This means that sorting N integers must take at least N*log2[N] individual compare-and-swap operations to complete. So with N=1,000, we see that it must take at least 1000 * log2[1000] ~= 10,000 compare-and-swap operations to sort a list of 1,000 random integers.
[5] This is a lie; we don’t know. Nobody has been able to prove that factoring integers into their prime components is actually hard, for example. There could very well be a simple way to do it. But after a half century of trying, with fame and fortune awaiting anyone who finds a solution, no one has. So we assume that factoring large composite numbers on a classical computer is a hard problem, and build systems around that assumption.
[6] Sort of; this is strictly true only of hash functions. Both prime factoring and elliptic curve discrete log have faster-than-brute-force solutions. But the difficulty of solving these problems does appear to relate to the size of the search space in a practically useful way. For example, Pollard’s rho algorithm offers a square-root reduction in the search space for discrete log, but since it was published in 1975 we have yet to find a faster general algorithm. So we believe ECDL on a 256-bit curve to require sqrt[2^256] = 2^128 operations on a classical computer.