A research note I wrote recently. For years there has been a worry about how much transaction fungibility defects affect user privacy. Now we can put specific numbers on it. If there is a tradeoff between requiring stricter transaction formats and some other goal, we can weigh the options against the estimated privacy benefits.
If your wallet uses the “standard” wallet2 method to create Monero transactions, this issue mostly doesn’t affect you. As far as I know, wallets that use wallet2 include the GUI, CLI, Feather, Cake, Monerujo, and Stack Wallet.
Direct link to the PDF: https://github.com/Rucknium/misc-research/blob/main/Monero-Fungibility-Defect-Classifier/pdf/classify-real-spend-with-fungibility-defects.pdf
Abstract
Many parts of Monero’s transaction format, such as tx_extra contents, the fee paid to miners, and the decoy selection algorithm, are not standardized by node relay rules or blockchain consensus. Instead, alternative Monero wallet implementations are free to set these transaction characteristics in ways that are unique to the wallet implementation. Therefore, observers of the blockchain data can determine that a transaction was likely created by a nonstandard implementation. The distinguishing characteristics of transactions create many “anonymity puddles” instead of one “anonymity pool”. An adversary that aims to guess the real spend of a ring signature can exploit the information contained in these characteristics, referred to as “fungibility defects”.
This note defines a simple classification rule that leverages information about the fungibility defects of each ring signature’s 16 members. The classification rule is applied to the rings in all transactions that have the defect. A ring member having the defect increases the probability that it is the real spend, because a user will often spend “change” outputs from transactions that were created by their own nonstandard wallet. Using basic probability concepts, I develop a closed-form expression for the probability that the classifier correctly classifies a ring member as the real spend. This probability, the Positive Predictive Value (PPV), is a function of the ring size, the probability that a user spends change in a ring, and the proportion of transaction outputs on the blockchain that have the defect. These three values are either defined by Monero’s protocol rules or can be accurately estimated directly from the blockchain data. For example, when these values are 16, 40%, and 5%, respectively, the probability that the classifier correctly classifies a ring member as the real spend is 31.7%, much higher than the 1/16 = 6.25% probability of randomly guessing between the 16 ring members.
A serious, critical study like this is valuable, as opposed to “easy” and “practical” write-ups, which are sometimes oversimplified or even harmful: e.g. an unfounded “guide” stating that one should not use Feather on Tails.
Somewhat more trivially, one could also wonder whether letting users freely customize fees is really a good idea: for example, a person who always selects a non-default fee (e.g. “fastest” or “slow”) might stick out.
Could you please answer a question about the motivation behind AGPL vs. MIT asked elsewhere?
Thanks. Regarding AGPL vs. MIT: if you mean cuprate, the Rust implementation of a Monero node, I was just the messenger for that CCS fundraiser; I am not involved in the project. But there is some intersection with the fungibility defects issue. If Monero’s main wallet code, wallet2, were AGPL (it isn’t; it is BSD, which is similar to MIT), then the closed-source multi-coin wallets that implement Monero wouldn’t be able to use it. That would increase the number of wallets producing fungibility defects, because they wouldn’t use the wallet2 procedure.