Lately the network has been fairly congested due to a massive increase in adoption, with smart contracts being used for everything from DEXes and NFT trading to monetary policies on assets. While the growth in adoption is really amazing, it does lead to transactions being delayed or even failing, which has never happened anywhere near this widespread before.
The network infrastructure itself is holding up without any issues; the main pitfalls have all been with light wallets. So understanding how “the” mempool works hopefully helps to explain why that happened, why it’s a challenge for the light wallet providers, and how proposed scaling solutions such as Hydra may affect the challenges faced recently.
Let’s start with what’s probably most intuitive to understand: blocks.
Any time a staking pool is the slot leader for the current slot (they know that in advance, there’s a schedule for this), it’s their task to fill that block with transactions. In order to put transactions in there, they first have to know which transactions are out there.
It wouldn’t be terribly decentralized or usable if all transactions had to be sent to one single source, or directly to the staking pool which will mint the next block. In order to “see” which transactions are out there, all transactions from wallets (no matter if it’s a full node wallet or a light wallet) go to the mempool.
How the mempool is structured
The mempool is – in general – just a pile of data thrown together. It’s not a “clean” first-in-first-out pipeline; it’s more of a heap of things which are put into a corner until someone picks them up.
Kinda like your to-do notes: they may start neatly in one place, but they tend to spread out a lot once they grow faster than they’re worked off. Ideally everyone in the world could see the same transactions in the mempool, but there are some logistical and structural issues with getting to that sort of state.
In general, people submitting a transaction will have it land in one part of the mempool network, and it gets propagated throughout all the nodes until it’s generally visible. For the network to work properly, transactions don’t need to be available everywhere instantly – take a look at SOL and the minimum hardware requirements for its validators to see why – the mempool just needs to be eventually consistent.
It’s absolutely fine if a transaction from a rural area with underdeveloped infrastructure and a shaky connection needs a while to “hit the chain”. As long as the data is visible to the pool minting their block at some point it’s good enough.
Something people may be more familiar with is torrents. Someone has a piece of data – for example a Linux ISO, to make hosting it cheaper – and wants to distribute it. The nodes on Cardano work similarly: they query each other for any transactions which aren’t yet on the chain, download them from nodes which have them in their local mempool, and share them with other nodes which don’t yet have them.
If you run Daedalus on your computer and have the patience to let it sync then you’re running a full node on your computer. Once you submit a transaction your node will talk to other nodes which are running and share your transaction with them so it’s eventually visible to whatever staking pool mints a block next.
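The propagation described above can be sketched in a few lines. This is a toy model of pull-based gossip, not actual cardano-node internals – all names and the sync mechanics here are illustrative:

```python
# Toy sketch of mempool gossip: each node keeps its own set of pending
# transactions and repeatedly pulls anything it is missing from its peers.
# Illustrative only, not the real tx-submission protocol.

class Node:
    def __init__(self, name):
        self.name = name
        self.mempool = set()   # tx ids this node currently knows about
        self.peers = []

    def submit(self, tx_id):
        self.mempool.add(tx_id)

    def sync_round(self):
        # Pull any transactions our peers have that we don't.
        for peer in self.peers:
            self.mempool |= peer.mempool

# Three nodes in a line: A <-> B <-> C
a, b, c = Node("A"), Node("B"), Node("C")
a.peers, b.peers, c.peers = [b], [a, c], [b]

a.submit("tx1")        # a wallet submits to node A only
for _ in range(2):     # after a couple of gossip rounds...
    for node in (a, b, c):
        node.sync_round()

print("tx1" in c.mempool)  # True: eventually consistent
```

Note that the transaction reaches node C even though nobody ever sent it there directly – that’s the “eventually consistent” property doing the work.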
There is no “one” mempool
Every node has its own mempool. They share it with other nodes so any transaction will eventually be everywhere unless it’s included in a block before that.
But there is no one singular, global mempool.
It’s a network of all the nodes.
Where this can cause issues
Each node’s mempool has a limit on how much data it can contain. Usually this limit isn’t hit, since Daedalus only needs to handle your own transactions, transactions used to be handled faster than they were created, and adoption of larger, shared services wasn’t as pronounced as it is right now.
Lately – especially with high-caliber NFT projects and DExes driving up demand massively – there are spikes in demand which are easily an order of magnitude higher than usual. You can bet that any time aeoniumsky releases something into the wild, there will be hundreds or thousands of people hammering out transactions as fast as they can; that’s more than one node can handle.
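A capacity-bounded mempool behaves roughly like this sketch (the numbers and names are made up for illustration): once the limit is reached, further submissions are simply rejected.

```python
# Minimal sketch of a capacity-bounded mempool (illustrative only): once the
# byte limit is reached, further submissions are rejected and it's up to the
# submitter to retry later.

class BoundedMempool:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.txs = {}

    def try_add(self, tx_id, size_bytes):
        if self.used + size_bytes > self.capacity:
            return False          # mempool full -> submission rejected
        self.txs[tx_id] = size_bytes
        self.used += size_bytes
        return True

pool = BoundedMempool(capacity_bytes=1000)
print(pool.try_add("tx1", 600))   # True
print(pool.try_add("tx2", 600))   # False: would exceed capacity
```

The key point: the node doesn’t slow down gracefully when it’s full, it just stops accepting. Whoever sat in front of it has to deal with the rejection.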
And that’s where light wallets break.
No matter if it’s Yoroi with their nodes at Emurgo, Nami with Blockfrost or CC with Firehose (or is it Phyrehose? Can’t find them), at some point their nodes are full.
So while you may see transactions not being included quickly even when using Daedalus, that’s mostly because all the mempools contain way more transactions than can be handled right now. But the transaction will either go through or it will fail.
Sidenote: Failing transactions
Transactions on Cardano can fail, and this happens every now and then. Daedalus and a lot of other wallets send a “time to live” (TTL) with their transaction.
If you send a transaction from Daedalus, it will be valid for roughly two hours – and that is an on-chain constraint! After that slot the transaction becomes invalid and cannot be included in a block anymore.
The good thing about this is that you can be sure the transaction won’t be processed out of the blue days later; it failed cleanly. This is why Daedalus can show with absolute certainty that it failed. Things are currently more difficult for light wallets.
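The TTL check itself is trivial, which is what makes the guarantee so strong. A hedged sketch (slot numbers are made up; I’m assuming the mainnet rate of one slot per second for the two-hour figure):

```python
# Sketch of a time-to-live check: a transaction carries a final slot and
# becomes permanently invalid past it. Slot numbers here are invented;
# the 1 slot/second rate is the current mainnet assumption.

def is_still_valid(tx_ttl_slot, current_slot):
    return current_slot <= tx_ttl_slot

current_slot = 50_000_000
ttl = current_slot + 7_200          # roughly two hours at 1 slot/second

print(is_still_valid(ttl, current_slot))          # True: can still be included
print(is_still_valid(ttl, current_slot + 8_000))  # False: failed cleanly
```

Because every node evaluates the exact same comparison, a wallet can declare a transaction dead the moment the chain tip passes the TTL slot – no guessing involved.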
Centralized services failing
Don’t get me wrong, what Emurgo, Blockfrost and CC are doing for Cardano is amazing. I’m not hating on them, their teams are great.
What I’m trying to point out is that no matter how amazing the teams are, centralized points of failure are more likely to cause issues for larger groups exactly when reliability matters most. All the centralized nodes are built to handle everyday traffic and ordinary spikes, but they cannot sensibly run 30x the hardware they need just to also handle these extreme spikes.
Additionally, Cardano nodes cannot be deployed as “just in time” containers in a few milliseconds; they need to synchronize and are pretty large. It’s not necessarily the providers’ fault if their mempools fill up – this happens and will continue to happen as long as there are order-of-magnitude spikes in traffic.
One scenario which has happened a lot lately: transactions were sent to these services while the mempool was already full, so they were never included there. There is no mempool² of all transactions waiting to be admitted into the mempool. If the applications are well made, they will have implemented queuing systems – essentially a mempool² – but that may differ from provider to provider.
Most significantly: for the user it’s often impossible to know whether their transaction is in the mempool, or whether it never landed there and never will.
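The “mempool²” idea can be sketched as a provider-side queue in front of the node. This is purely illustrative – I’m not claiming any provider implements it this way:

```python
# Sketch of a submission queue ("mempool squared"): hold transactions while
# the node's mempool is full and retry them later, instead of silently
# dropping them. Illustrative, not any provider's actual code.

from collections import deque

class SubmissionQueue:
    def __init__(self, mempool_try_add):
        self.try_add = mempool_try_add   # callable: tx -> bool
        self.pending = deque()

    def submit(self, tx):
        if not self.try_add(tx):
            self.pending.append(tx)      # park it instead of losing it
            return "queued"
        return "in mempool"

    def drain(self):
        # Retry queued transactions whenever the mempool frees up.
        while self.pending and self.try_add(self.pending[0]):
            self.pending.popleft()

# A fake one-slot mempool for demonstration
slots = {"free": 1}
def try_add(tx):
    if slots["free"] > 0:
        slots["free"] -= 1
        return True
    return False

q = SubmissionQueue(try_add)
print(q.submit("tx1"))   # in mempool
print(q.submit("tx2"))   # queued
slots["free"] += 1       # a block frees up space
q.drain()
print(len(q.pending))    # 0
```

Crucially, a queue like this only helps the user if its state (“queued” vs “in mempool”) is actually surfaced in the wallet UI – which is exactly what was missing in the recent incidents.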
How Hydra can help with this
This is where my knowledge is a little more sparse so someone more knowledgeable might disagree. Any comments would be very welcome to my discord handle “[the Minister]#0001”.
Hydra is not a “run it and we have one million TPS for everything” solution, but it helps to take some traffic off the mainnet and settle it at a later point. Parties which need to send a lot of transactions to each other can enter a Hydra head, run smart contracts in there, send funds back and forth, and then settle on the main chain, keeping all the smaller transactions off of it. What makes Hydra nicer than payment channels like the Lightning Network on BTC is that participants will be able to send funds to people who are not inside the Hydra head. Those recipients only get the funds once the head closes, but being limited to participants inside the channel is not a constraint here.
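As a rough mental model of the head lifecycle just described (this is my simplified picture, not the actual Hydra protocol – all names are invented): participants lock funds, exchange many off-chain transfers, and only the net result hits the chain on close.

```python
# Rough sketch of a Hydra-head-like lifecycle: lock funds, transact
# off-chain, settle the net result once. Not the real protocol.

from collections import defaultdict

class HydraHeadSketch:
    def __init__(self, deposits):
        self.balances = dict(deposits)        # funds locked into the head
        self.outside_payouts = defaultdict(int)

    def transfer(self, sender, receiver, amount):
        assert self.balances[sender] >= amount
        self.balances[sender] -= amount
        if receiver in self.balances:
            self.balances[receiver] += amount
        else:
            # payment to someone outside the head: delivered on close
            self.outside_payouts[receiver] += amount

    def close(self):
        # one settlement on-chain instead of many small transactions
        return dict(self.balances), dict(self.outside_payouts)

head = HydraHeadSketch({"alice": 100, "bob": 50})
head.transfer("alice", "bob", 30)
head.transfer("bob", "carol", 20)   # carol is NOT in the head
final, payouts = head.close()
print(final)    # {'alice': 70, 'bob': 60}
print(payouts)  # {'carol': 20}
```

Two off-chain transfers collapsed into one settlement, and carol received funds without ever joining the head – she just had to wait for it to close.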
By allowing parties to take traffic away from the chain, it effectively increases capacity, and it incentivizes parties to do so by giving them faster transactions, much better fee scaling, and less exposure to network congestion.
But just like with road traffic, added capacity could induce even more demand, so it remains to be seen whether this will keep mempools from reaching capacity. My hope is that it at the very least reduces the magnitude of the spikes: if traffic can spike 2-4x compared to the usual load, things become much more reasonable to handle than they are now.
Opinion: What wallets can do NOW to make things better
In my opinion, the core thing a lot of wallets should aim for is informing the user about the state of their transaction. CC had their mempools fill up yesterday, but they do have a queue for incoming transactions (which took a good bit of time to drain, but eventually was resolved), whereas transactions in Nami often just “disappeared”.
In order to improve this wallets should ideally include these three measures:
- Send transactions with a time to live and reflect that in the UI
- Let the user know about any issues with the services mempool
- Unmistakably show the user the time to live and status of that transaction
Users should be able to see whether their transaction is pending, confirmed or failed. And transactions should fail after some time. The TTL can be set to any duration, but there should be a definitive deadline after which the transaction cannot be included anymore.
Failed transactions don’t incur any fees; that’s one of the great features Cardano offers.
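The status logic the measures above boil down to is small enough to sketch directly (illustrative names – this isn’t any wallet’s actual code): given what the wallet knows, every transaction resolves to exactly one of three states.

```python
# Sketch of the three-state transaction status a wallet should show.
# Illustrative only; real wallets also track confirmation depth etc.

def tx_status(confirmed_on_chain, ttl_slot, current_slot):
    if confirmed_on_chain:
        return "confirmed"
    if current_slot > ttl_slot:
        return "failed"          # past its TTL: can never be included now
    return "pending"             # still waiting, show the remaining TTL

print(tx_status(False, ttl_slot=100, current_slot=50))   # pending
print(tx_status(True,  ttl_slot=100, current_slot=150))  # confirmed
print(tx_status(False, ttl_slot=100, current_slot=150))  # failed
```

The point is that there is no fourth state: with a TTL attached, “it just disappeared” is never a valid answer to give the user.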
To sum it up:
- There is not one singular mempool, it’s a network of all the nodes out there
- Light wallets are centralized -> many people use few nodes -> They can be “full”
- Feedback to the user about the state of their transaction is vital
- Light wallets are great but will remain more error prone