A Transaction's Journey on Solana
In the previous article, we introduced Tower BFT, Solana's stake-weighted Proof-of-Stake consensus mechanism based on a replication algorithm called pBFT (practical Byzantine Fault Tolerance). In this article, we will delve deeper into the journey a transaction goes through from being submitted to being confirmed on the Solana blockchain - our aim is to explain exactly what happens under the hood when you, for example, send someone some SOL or swap SOL for a memecoin on a DEX using Phantom, Solflare or NuFi.
When you do this, the app you're using sends your transaction to the blockchain to be executed and added to the Solana ledger. The first step is to actually get the transaction into the network, so let's now have a look at how that happens.
RPC-Full Nodes
In our previous articles, we used the term "node" in the context of a single validator node that's actively participating in the network by voting on as many slots as possible and collecting staking rewards. However, there is also another type of node on the Solana network, the so-called "RPC-full" node. The purpose of RPC-full (RPC stands for remote procedure call) nodes is to act as a gateway to the blockchain network for all Solana dApps and individual users.
An RPC-full node on the Solana blockchain is a node that runs exactly the same solana-validator binary that actively voting validator nodes do, but with the voting services and everything related to them deactivated (RPC-full nodes do not vote on slots, do not build a vote tower, do not have a stake in the network and also cannot be included in the leader schedule rotation). As of 2025, there are a little over 5,000 full nodes on the Solana network: roughly 1,300 are actively voting validators and about 3,700 are non-voting RPC-full nodes. In addition to this, a lot of incoming traffic also goes through JSON-RPC relay proxies that can tap into the network and forward transactions and queries where necessary, but do not run the solana-validator binary.
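
To make this concrete, here is a minimal sketch of what talking to an RPC-full node looks like from a dApp's point of view, using the @solana/web3.js client in TypeScript. The endpoint is Solana's public mainnet RPC URL, and the queried address is just the System Program's well-known ID, used here as a placeholder.

```ts
// Minimal sketch: a dApp querying an RPC-full node over JSON-RPC via @solana/web3.js.
import { Connection, PublicKey, LAMPORTS_PER_SOL } from "@solana/web3.js";

async function main() {
  // The RPC-full node exposes a JSON-RPC interface over HTTP(S) and WebSockets.
  const connection = new Connection("https://api.mainnet-beta.solana.com", "confirmed");

  // A typical read query: ask the node for an account's balance in lamports.
  const account = new PublicKey("11111111111111111111111111111111"); // placeholder address
  const lamports = await connection.getBalance(account);
  console.log(`Balance: ${lamports / LAMPORTS_PER_SOL} SOL`);
}

main().catch(console.error);
```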
Okay, now that we know the difference between the two node types, let's have a look at what happens when a node receives a transaction from outside the network.
TPU and Gulf Stream
Every node that runs the solana-validator binary can serve as the entry node (where data from the outside first enters the network) for submitted transactions and queries. Each node runs a TPU service (TPU stands for Transaction Processing Unit), which actively listens for incoming data packets from users, RPC relays and other nodes.
When the entry node receives a batch of incoming transactions, it first verifies each transaction's Ed25519 signature and discards invalid transactions and duplicates. The remaining transactions are then handed over to Gulf Stream - Solana's transaction forwarding protocol.
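
As a rough illustration of this sig-verify and dedup step, here is a hedged sketch in TypeScript using the tweetnacl library for Ed25519. The IncomingTx shape, the sigVerifyAndDedup function and the dedup-by-signature strategy are illustrative stand-ins, not the actual validator code.

```ts
// Illustrative sketch (not the real TPU code): verify Ed25519 signatures and drop duplicates.
import nacl from "tweetnacl";

interface IncomingTx {
  messageBytes: Uint8Array;  // the serialized transaction message (the bytes that were signed)
  signature: Uint8Array;     // 64-byte Ed25519 signature
  signerPubkey: Uint8Array;  // 32-byte Ed25519 public key of the fee payer
}

const seenSignatures = new Set<string>();

function sigVerifyAndDedup(batch: IncomingTx[]): IncomingTx[] {
  return batch.filter((tx) => {
    const key = Buffer.from(tx.signature).toString("base64");
    if (seenSignatures.has(key)) return false; // drop duplicates
    const valid = nacl.sign.detached.verify(tx.messageBytes, tx.signature, tx.signerPubkey);
    if (valid) seenSignatures.add(key);
    return valid; // drop transactions with invalid signatures
  });
}
```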
Gulf Stream is the first implementation of a mempool-less solution for distributing transactions in a production blockchain.
In most blockchains, each node has a "waiting room" for incoming transactions that are valid but not yet added to a block. This waiting room is called a mempool (short for memory pool), and all nodes forward the incoming transactions they have added to their own mempools to their peers, so the same transactions are replicated and waiting in the mempool of every node on the network. When it's time to add a new block to the blockchain, the current proposer (PoS) or the miner that mines the block (PoW) will execute transactions from their copy of the mempool, often sorting them not by the order in which they arrived, but by the fee they will receive for executing each transaction in order to maximize profit (this is why increasing the fee on an already submitted transaction usually speeds up confirmation on Bitcoin and Ethereum mainnet).
Since the Proof of History hash chain on Solana serves as a universal clock for all nodes and allows the network to generate a deterministic leader schedule for each roughly two-day epoch, the Gulf Stream service running on each node can simply take all valid incoming transactions and forward them directly to the current leader as they come in (and, by default, also to the next 2 upcoming leaders, in case the current leader is offline and not assembling blocks), completely eliminating the need for a global mempool. In practice, a leader often has most of the transactions before it's their turn to assemble a block, so block building can begin immediately without any delays, which improves overall network speed.
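
Here is a small sketch of that forwarding logic, assuming the node already has the epoch's leader schedule. The LeaderSchedule interface and forwardTargets function are made up for illustration; they are not Gulf Stream's actual internals.

```ts
// Illustrative sketch of mempool-less forwarding: send transactions straight to the current
// leader and the next few leaders from the known schedule (no global mempool involved).
type Slot = number;
type ValidatorId = string;

interface LeaderSchedule {
  leaderAt(slot: Slot): ValidatorId; // deterministic for the whole epoch
}

function forwardTargets(schedule: LeaderSchedule, currentSlot: Slot, upcoming = 2): ValidatorId[] {
  // Start with the current leader, then scan ahead for the next `upcoming` distinct leaders
  // (each leader normally holds 4 consecutive slots), in case the current one is offline.
  const targets: ValidatorId[] = [schedule.leaderAt(currentSlot)];
  for (let slot = currentSlot + 1; targets.length <= upcoming && slot <= currentSlot + 4 * (upcoming + 1); slot++) {
    const leader = schedule.leaderAt(slot);
    if (!targets.includes(leader)) targets.push(leader);
  }
  return targets;
}
```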
Stake-weighted Quality-of-Service
Since every node on the network knows who the current and future leaders are for each slot, an attacker could theoretically overwhelm the leader by spamming its open TPU port with fake transactions to overload its buffer, halting the network as a result. To mitigate this, during periods of heightened network activity the leader reserves 80% of its inbound transaction capacity in each slot for data coming from other validators with a stake in the network (this is stake-weighted, so the more SOL a validator has staked, the more transactions it will be able to get into each slot when the network is congested). The remaining 20% is left open to incoming transactions from non-staking RPC-full nodes. We call this 80/20 split stake-weighted QoS, or Quality-of-Service, and it was implemented because a validator with a stake is much less likely to spam the network: there's no incentive for them to do so, and they'd also be putting their stake at risk.
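
A simplified model of that split might look like the sketch below. The 80/20 numbers come straight from the text; the Peer type, the capacity units and the even split among unstaked senders are assumptions made purely for illustration.

```ts
// Illustrative sketch: dividing a leader's per-slot ingest capacity under stake-weighted QoS.
interface Peer {
  id: string;
  stake: number; // lamports staked; 0 for non-staking RPC-full nodes
}

function allocateCapacity(peers: Peer[], totalUnits: number): Map<string, number> {
  const stakedPool = Math.floor(totalUnits * 0.8); // 80% reserved for staked validators
  const openPool = totalUnits - stakedPool;        // 20% left open to unstaked senders
  const totalStake = peers.reduce((sum, p) => sum + p.stake, 0);
  const unstakedCount = peers.filter((p) => p.stake === 0).length;

  const allocation = new Map<string, number>();
  for (const p of peers) {
    if (p.stake > 0) {
      // Staked peers share the 80% pool in proportion to their stake.
      allocation.set(p.id, Math.floor((stakedPool * p.stake) / totalStake));
    } else {
      // Unstaked peers (e.g. RPC-full nodes) split the remaining 20% evenly in this sketch.
      allocation.set(p.id, unstakedCount > 0 ? Math.floor(openPool / unstakedCount) : 0);
    }
  }
  return allocation;
}
```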
In a spam scenario, packets sent by validators with a stake are much less likely to get dropped by the leader than packets sent by IPs with no stake. Since this implementation of spam protection heavily favours traffic from validator nodes over RPC-full nodes, it may seem counter-intuitive that there are more RPC-full nodes than validators on the network - with a ratio of around 3 to 1!
This, however, has a simple practical explanation - there are more RPC-full nodes because most network load is read/query oriented (e.g. "What's the current balance on this account? What's the current state of this program?" - basically questions that dApps are asking the blockchain in order to be able to provide their services to their users).
Operators running a core voting validator with a big SOL stake often completely firewall their node from the outside and configure their TPU port to only listen to other validators, which means that they usually do not forward a lot of incoming transactions to the leader via Gulf Stream.
RPC-full node operators, on the other hand, run a fleet of RPC-full nodes as a service for the public (and since they do not receive staking rewards from any of these nodes, access to this service is usually paid), along with one "edge validator" node that has a small stake in the network. The purpose of this validator isn't to maximize staking rewards (as it is for the big staker from the example above), but to ensure service availability when the network is congested. If the network is overwhelmed, all incoming transaction traffic from their RPC-full node fleet will be routed through this one edge validator in order to maximize exposure to the 80% of bandwidth reserved for stakers and get as many transactions confirmed as possible.
Slot Leader's Responsibilities
When a node becomes the leader for its slot window based on the current epoch's leader schedule, it will start a process that ends with newly assembled block data for the current slot being sent out to all validators to be voted on.
The leader should already have most of the data it needs ahead of its turn, thanks to Gulf Stream forwarding incoming transactions to upcoming leaders. When a node becomes the leader, it takes the last PoH tick hash from the last produced block and starts hashing it, continuing the PoH sequence. Incoming transactions are verified and then executed (Solana calls the execution stage the "banking stage" and the service responsible for it the "Bank" - just a fun fact), and all new entries to the ledger are timestamped with PoH tick hashes.
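
Conceptually, the PoH sequence the leader keeps extending is just a chain of SHA-256 hashes, occasionally mixing in data from executed transactions. The sketch below shows the shape of it; the seed value, tick counts and mix-in data are placeholders, not the real implementation.

```ts
// Sketch of a PoH-style hash chain: each hash depends on the previous one, and entries
// can be timestamped by mixing their data into the chain. Values are placeholders.
import { createHash } from "node:crypto";

function nextHash(prev: Buffer, mixin?: Buffer): Buffer {
  const hasher = createHash("sha256").update(prev);
  if (mixin) hasher.update(mixin); // e.g. a hash of an executed transaction batch
  return hasher.digest();
}

// Continue the chain from the last tick hash of the previous block (placeholder seed here).
let poh = createHash("sha256").update("last tick hash of the previous block").digest();
for (let i = 0; i < 5; i++) {
  poh = nextHash(poh); // empty ticks keep the clock running even with no transactions
}
poh = nextHash(poh, Buffer.from("entry: executed transaction batch")); // a timestamped entry
console.log(poh.toString("hex"));
```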
It's important to note that what the leader produces is essentially a continuous stream of executed transactions intertwined with PoH tick hashes - exactly 64 ticks per slot (and usually 4 consecutive slots per leader rotation). This data is sent to validators in real time as it's being generated, and NOT only after the full block has been assembled, as is the case with many other blockchains. We'll come back to this in a moment when we get to what happens on validators before they vote - just make sure you keep this in mind!
Shreds
Once transactions are executed, the leader node puts them through a process called shredding, along with the recorded PoH tick hashes. Shredding breaks the data down into small, easily transferable packets called shreds - the MTU (maximum transmission unit, the largest possible size of one packet) of one Solana shred is around 1232 bytes, but shreds can of course be smaller.
Shreds are created in batches of 64 (the number of shreds in the final batch for each slot can of course be lower), and there are two types of shreds in each batch, split 50/50: data shreds and coding shreds. Data shreds carry the executed transactions along with PoH ticks, while coding shreds are Reed-Solomon parity shreds that let each recipient validator reconstruct the whole batch as long as it receives at least half of the 64 shreds. The leader's Ed25519 signature is added to each shred upon creation (validators will discard any received shred whose signature doesn't verify against the current slot leader's public key). Each shred is also indexed so the validators that receive them know how to order them.
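
To give a feel for what shredding produces, here is a hedged sketch that chops a slot's entry stream into indexed data shreds. The DataShred type and makeDataShreds function are illustrative; real shreds carry more metadata, and the Reed-Solomon coding-shred generation is only indicated with a comment.

```ts
// Illustrative shredding sketch: slice a slot's serialized entries into indexed data shreds.
const SHRED_BYTES = 1232; // ≈ packet MTU from the article; the real data payload is a bit smaller

interface DataShred {
  slot: number;
  index: number;                 // lets receiving validators reorder the shreds
  payload: Uint8Array;           // a slice of the entry stream (transactions + PoH ticks)
  leaderSignature?: Uint8Array;  // Ed25519 signature the leader attaches to each shred
}

function makeDataShreds(slot: number, entryBytes: Uint8Array): DataShred[] {
  const shreds: DataShred[] = [];
  for (let offset = 0, index = 0; offset < entryBytes.length; offset += SHRED_BYTES, index++) {
    shreds.push({ slot, index, payload: entryBytes.slice(offset, offset + SHRED_BYTES) });
  }
  // For every batch of 32 data shreds, 32 Reed-Solomon coding shreds would be generated here,
  // so receivers can rebuild the batch from any half of the 64 shreds (omitted in this sketch).
  return shreds;
}
```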
Once the block data is shredded, it is time to send it to validators so they can replay the data and vote on confirming the block. Broadcasting the shreds to validators is done via Turbine, Solana's block propagation protocol, which we'll have a look at next.
Turbine and the TVU
Turbine has a tree-based broadcasting structure and its design is inspired by the BitTorrent protocol - data is sliced into tiny pieces (shreds in our case) and each recipient node on the network also functions as an immediate relay, forwarding the shreds it receives to a deterministic subset of validators. On Solana, this subset is called a neighborhood. The default number of logical peers in one neighborhood is 200, and the fan-out tree isn't static: neighborhoods are dynamically shuffled, and the shuffle is deterministic per shred, using a seed generated from data unique to that shred.
After the leader generates shreds, it sends them one by one to the peers in its neighborhood, rotating the root peer after each shred. This is considered the first layer of the broadcasting structure (remember, it's tree-based, which means the data doesn't just travel horizontally, but also downwards in layers).
When a validator receives a shred, it resends it both to all of the peers in its neighborhood and to a predetermined set of validators in different neighborhoods in the next lower layer. A node will receive each shred multiple times - from different nodes in different neighborhoods in the layer above and also from the peers in its own neighborhood - which keeps the load on each node fairly low and also ensures minimal data loss.
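
The per-shred shuffle can be modelled as a deterministic, seeded reordering of the peer list, as in the sketch below. The seed derivation, the fan-out size and the absence of stake-weighting are all simplifications made for illustration.

```ts
// Illustrative sketch of a deterministic, per-shred peer shuffle in the spirit of Turbine.
import { createHash } from "node:crypto";

function seededShuffle<T>(items: T[], seed: Buffer): T[] {
  // Every node computes the same ordering from the same seed.
  const keyed = items.map((item, i) => ({
    item,
    key: createHash("sha256").update(seed).update(String(i)).digest().readUInt32BE(0),
  }));
  keyed.sort((a, b) => a.key - b.key);
  return keyed.map((entry) => entry.item);
}

// Each node retransmits a received shred to a set of peers chosen from the deterministic
// shuffle, seeded with data unique to that shred (e.g. its slot and index).
function retransmitTargets(peers: string[], shredSeed: Buffer, fanout = 200): string[] {
  return seededShuffle(peers, shredSeed).slice(0, fanout);
}
```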
And remember, each validator only needs 50% of the shreds in each batch, because the Reed-Solomon coding allows it to reconstruct the full block even if some pieces are missing. If a validator still doesn't have all the shreds it needs, it can also send a repair request to its immediate peers and ask for the missing shreds directly.
The service responsible for reconstructing the block data from received shreds is called the TVU - the Transaction Validation Unit. It verifies that all executed transactions are valid and also replays the PoH ticks. Every replicating validator processes this data in parallel across multiple CPU/GPU cores, so replaying block data is much quicker than the leader's real-time stream of execution and hashing.
Once the validator confirms that everything in the block is in order, it immediately casts a vote on the block, and since this vote is just a regular Solana transaction, it is sent to the current leader via Gulf Stream and processed together with user transactions. If the ledger shows that ⅔+ of all effectively staked SOL has voted to add this block to the blockchain, the block is confirmed, typically within the next few slots (under normal conditions, confirmation takes around 0.8-1.2 seconds).
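
The confirmation rule itself boils down to a stake-weighted supermajority check, roughly like this sketch (the VoteRecord type and isConfirmed function are illustrative):

```ts
// Illustrative supermajority check behind "confirmed": more than 2/3 of active stake has voted.
interface VoteRecord {
  validator: string;
  stake: number; // active stake backing this validator, in lamports
}

function isConfirmed(votesForBlock: VoteRecord[], totalActiveStake: number): boolean {
  const votedStake = votesForBlock.reduce((sum, vote) => sum + vote.stake, 0);
  return votedStake * 3 > totalActiveStake * 2; // strictly more than 2/3 of all staked SOL
}
```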