A Transaction's Journey on Solana

In the previous article, we introduced Tower BFT, Solana's stake-weighted Proof-of-Stake consensus mechanism based on the PBFT (Practical Byzantine Fault Tolerance) replication algorithm. In this article, we will delve deeper into the journey transactions go through from being submitted to being confirmed on the Solana blockchain. We aim to explain exactly what happens under the hood when you, for example, send someone some SOL or swap SOL for a memecoin on a DEX using Phantom, Solflare or NuFi.

When you do this, the app you're using sends your transaction to the blockchain to be executed and added to the Solana ledger. The first step is to actually get the transaction into the network, so let's now have a look at how that happens.

RPC-Full Nodes

In our previous articles, we used the term "node" in the context of a single validator node that's actively participating in the network by voting on as many slots as possible and collecting staking rewards. However, there is also another type of node on the Solana network, the so-called "RPC-full" node. The purpose of RPC-full (RPC stands for remote procedure call) nodes is to act as a gateway to the blockchain network for all Solana dApps and individual users.

You can think of RPC nodes as the first line of contact with a blockchain for the outside world - kind of like calling the "contact us" support number of a company. When you call this number, you can ask basic questions (such as "What is the current state of the ledger?" in our case) and if you need something more specific, your request will be redirected to the appropriate department within the company. And although our example makes this sound somewhat bureaucratic and slow, RPC nodes can answer your questions and redirect your requests in milliseconds (it's the 21st century and computers are quick!).

An RPC-full node on the Solana blockchain is a node that runs exactly the same solana‑validator binary that actively voting validator nodes do, but with the voting services and everything related to them deactivated (RPC-full nodes do not vote on slots, do not build a vote tower, do not have a stake in the network and also cannot be included in the leader schedule rotation). As of 2025, there are a little over 5,000 full nodes on the Solana blockchain's network: roughly 1,300 are actively voting validators and about 3,700 are non‑voting RPC‑full nodes. In addition to this, a lot of incoming traffic also goes through JSON-RPC relay proxies that can tap into the network and forward transactions and queries where necessary, but do not run the solana‑validator binary.
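Talking to an RPC node happens over plain JSON-RPC 2.0 via HTTP. As a minimal sketch, here is how a client could build such a request body; `getSlot` and `getBalance` are real Solana RPC methods, while the endpoint URL in the comment is just the public mainnet example and may not suit production use:

```python
import json

def make_rpc_request(method, params=None):
    """Build a JSON-RPC 2.0 request body in the shape Solana RPC nodes expect."""
    body = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": method,
        "params": params or [],
    }
    return json.dumps(body)

# getSlot asks the RPC node for the slot it is currently processing.
request = make_rpc_request("getSlot")
print(request)

# To actually send it, POST the body to an RPC endpoint, e.g. with urllib:
#   urllib.request.urlopen(urllib.request.Request(
#       "https://api.mainnet-beta.solana.com",   # public example endpoint
#       data=request.encode(),
#       headers={"Content-Type": "application/json"}))
```

The same pattern covers everything from balance queries (`getBalance`) to submitting a signed transaction (`sendTransaction`) - only `method` and `params` change.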

Okay, now that we know the difference between the two node types, let's have a look at what happens when a node receives a transaction from outside the network.

TPU and Gulf Stream

Every node that runs the solana‑validator binary can serve as the entry node (where data from the outside first enters the network) for submitted transactions and queries. Each node is running a TPU service (TPU stands for Transaction Processing Unit), which is actively listening for incoming data packets from users, RPC relays and other nodes.

When the entry node receives a batch of incoming transactions, the node first verifies each transaction's Ed25519 signature and discards invalid txs and duplicates. These transactions are then handed over to Gulf Stream - Solana's transaction forwarding protocol.
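The verify-and-deduplicate step can be sketched as a simple filter keyed on transaction signatures. Note this is only an illustration of the flow: the Python standard library has no Ed25519 support, so `verify_signature` below is a stand-in placeholder for the real cryptographic check:

```python
def verify_signature(tx):
    # Placeholder: a real entry node verifies the Ed25519 signature over the
    # transaction's message bytes. Here we just read a simulated validity flag.
    return tx.get("valid", True)

def filter_batch(batch, seen_signatures):
    """Drop invalid transactions and duplicates before handing the rest
    over to Gulf Stream, mirroring the entry node's behaviour described above."""
    accepted = []
    for tx in batch:
        sig = tx["signature"]
        if sig in seen_signatures or not verify_signature(tx):
            continue  # duplicate or bad signature: discard
        seen_signatures.add(sig)
        accepted.append(tx)
    return accepted

seen = set()
batch = [
    {"signature": "sig-a", "valid": True},
    {"signature": "sig-a", "valid": True},   # duplicate, dropped
    {"signature": "sig-b", "valid": False},  # bad signature, dropped
    {"signature": "sig-c", "valid": True},
]
print([tx["signature"] for tx in filter_batch(batch, seen)])  # ['sig-a', 'sig-c']
```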

Gulf Stream is the first implementation of a mempool-less solution for distributing transactions in a production blockchain.

In most blockchains, each node has a "waiting room" for incoming transactions that are valid, but not added to a block yet. This waiting room is called a mempool (short for memory pool) and all nodes forward incoming transactions they have added to their own mempools to their peers, so the same transactions are replicated and waiting in the mempool of every node on the network. When it's time to add a new block to the blockchain, the current proposer (PoS) or a miner that mines the block (PoW) will execute transactions from their copy of the mempool, often sorting them not by the order in which they arrived, but by the fee they will receive for executing the transaction to maximize profit (this is why increasing the fee on an already submitted transaction usually speeds up the confirmation on Bitcoin and Ethereum mainnet).
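The fee-sorting behaviour of a traditional mempool is easy to picture as a priority queue. A toy sketch (not how any specific chain implements it, just the general idea):

```python
import heapq

class Mempool:
    """Toy mempool: transactions wait here until a block producer picks them,
    highest fee first."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: keeps arrival order among equal fees

    def add(self, tx_id, fee):
        # heapq is a min-heap, so we negate the fee to pop the largest first.
        heapq.heappush(self._heap, (-fee, self._counter, tx_id))
        self._counter += 1

    def pop_highest_fee(self):
        return heapq.heappop(self._heap)[2]

pool = Mempool()
pool.add("cheap-tx", fee=10)
pool.add("generous-tx", fee=500)
pool.add("medium-tx", fee=100)
print(pool.pop_highest_fee())  # generous-tx
```

This is also why fee bumping works: re-adding the same transaction with a higher fee moves it toward the front of the producer's queue.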

Since the Proof of History hash chain on Solana serves as a universal clock for all nodes and allows the network to generate a deterministic leader schedule for each roughly 2-day-long epoch, the Gulf Stream service running on each node can just take all valid incoming transactions and forward them directly to the current leader as they come (and by default, also to the next 2 upcoming leaders, in case the current leader is offline and not assembling blocks), completely eliminating the need for a global mempool. In practice, a leader often has most of the transactions before it's their turn to assemble a block, so block building can begin immediately, without any delays.
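Because the leader schedule is known to everyone in advance, picking forwarding targets is a pure lookup. A minimal sketch, assuming a simple slot-to-leader mapping (the `lookahead=2` default mirrors the "next 2 upcoming leaders" behaviour described above):

```python
def forwarding_targets(leader_schedule, current_slot, lookahead=2):
    """Return the current leader plus the next `lookahead` distinct upcoming
    leaders from a deterministic slot -> leader schedule."""
    targets = [leader_schedule[current_slot]]
    slot = current_slot + 1
    while len(targets) < 1 + lookahead and slot in leader_schedule:
        leader = leader_schedule[slot]
        if leader not in targets:
            targets.append(leader)  # skip repeated slots of the same leader
        slot += 1
    return targets

# Leaders typically hold 4 consecutive slots each.
schedule = {s: "validator-A" for s in range(0, 4)}
schedule.update({s: "validator-B" for s in range(4, 8)})
schedule.update({s: "validator-C" for s in range(8, 12)})
print(forwarding_targets(schedule, current_slot=2))
# ['validator-A', 'validator-B', 'validator-C']
```

Every node computes the same answer from the same schedule, which is exactly what makes a shared mempool unnecessary.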

Stake-weighted Quality-of-Service

Since there is no mempool acting as a buffer, the current leader's ingest capacity becomes the scarce resource during congestion. Solana manages it with stake-weighted Quality-of-Service (SWQoS): the leader reserves the bulk of its incoming transaction capacity for connections from staked validators, with each validator's share proportional to its stake, while a smaller portion remains open for unstaked RPC nodes. Transactions forwarded through well-staked peers are therefore much less likely to be dropped when the network is busy.
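Stake-weighted QoS means a leader divides its ingest capacity among staked peers proportionally to their stake. The following is only a toy allocation sketch: the 80/20 staked/unstaked split and all numbers are illustrative assumptions, not protocol constants quoted from the source:

```python
def swqos_allocation(stakes, total_capacity, staked_share=0.8):
    """Illustrative capacity split: a fixed share (assumed 80% here) is divided
    among staked peers proportionally to stake; the rest stays open for
    unstaked connections."""
    staked_capacity = int(total_capacity * staked_share)
    total_stake = sum(stakes.values())
    allocation = {
        node: int(staked_capacity * stake / total_stake)
        for node, stake in stakes.items()
    }
    allocation["unstaked-pool"] = total_capacity - staked_capacity
    return allocation

stakes = {"validator-A": 3_000_000, "validator-B": 1_000_000}
print(swqos_allocation(stakes, total_capacity=1000))
# {'validator-A': 600, 'validator-B': 200, 'unstaked-pool': 200}
```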

Slot Leader's Responsibilities

When a node becomes the leader for its slot window based on the current epoch's leader schedule, it will start a process that ends with newly assembled block data for the current slot being sent out to all validators to be voted on.

The leader should already have a lot of the data it needs to process ahead of its turn because of Gulf Stream forwarding incoming transactions to upcoming leaders. When a node becomes the leader, it will take the last PoH tick hash from the last produced block and start hashing it, continuing the PoH sequence. Incoming transactions are verified and then executed (Solana calls the execution stage the "banking stage" and the service responsible for this the "Bank", just a fun fact) and new entries to the ledger are all timestamped with the PoH tick hashes.
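PoH really is just repeated SHA-256 hashing, with transaction data occasionally mixed in to timestamp it. A minimal sketch:

```python
import hashlib

def poh_extend(seed, num_hashes):
    """Continue a Proof-of-History chain: repeatedly hash the previous output.
    Each output proves that sequential work (= time) passed since the seed."""
    h = seed
    for _ in range(num_hashes):
        h = hashlib.sha256(h).digest()
    return h

def poh_record(prev_hash, tx_data):
    """Mix transaction data into the chain, timestamping it: the resulting
    hash can only exist after both prev_hash and tx_data were known."""
    return hashlib.sha256(prev_hash + tx_data).digest()

# The new leader picks up from the last block's final tick hash...
last_tick = poh_extend(b"previous block's final tick", 1000)
# ...and stamps an executed transaction into the sequence.
entry = poh_record(last_tick, b"transfer 1 SOL from X to Y")
print(entry.hex())
```

Any verifier can recompute the same chain independently and confirm the ordering, which is what lets validators replay a block in parallel later.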

It's important to note that what the leader produces is essentially a continuous stream of executed transactions intertwined with PoH tick hashes for exactly 64 ticks per slot (and usually 4 consecutive slots per leader rotation) - this data is sent to validators in real time as it's being generated and NOT only after the full data block has been assembled, as it is with most other blockchains. We'll come back to this in a moment when we get to what happens on validators before they vote, just make sure you keep this in mind!

Shreds

Once transactions are executed, the leader node puts them through a process called shredding, along with the recorded PoH tick hashes. Shredding breaks the data down into small, easily transferable packets called shreds - the MTU (maximum transmission unit, the largest possible size of one packet) of one Solana shred is around 1232 bytes, but they can of course be smaller.

Shreds are created in batches of 64 (the final batch for each slot can of course be smaller) and there are two types of shreds in each batch, split 50/50: data shreds and coding shreds. Data shreds carry the executed transactions along with PoH ticks, while coding shreds are Reed-Solomon parity shards that let each recipient validator reconstruct the whole batch as long as it receives at least half of the 64 shreds, in any combination. The leader signs each shred with its Ed25519 key upon creation (validators will discard any received shred whose signature doesn't verify against the current slot leader's identity key). Each shred is also indexed so the validators that receive them know how to order them.
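Real Reed-Solomon coding is too involved for a blog snippet, but the recovery idea can be shown with a deliberately simplified stand-in: a single XOR parity shred, which can reconstruct any one missing shred (Solana's 32 parity shreds generalize this to any 32 missing out of 64):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_batch(data_shreds):
    """Toy stand-in for Reed-Solomon: append one XOR parity shred.
    All shreds are assumed to be equal length here."""
    parity = data_shreds[0]
    for shred in data_shreds[1:]:
        parity = xor_bytes(parity, shred)
    return data_shreds + [parity]

def recover_missing(batch_with_one_hole):
    """Recompute the single missing shred (marked as None) by XOR-ing
    everything that did arrive."""
    received = [s for s in batch_with_one_hole if s is not None]
    result = received[0]
    for shred in received[1:]:
        result = xor_bytes(result, shred)
    return result

shreds = [b"aaaa", b"bbbb", b"cccc"]
batch = make_batch(shreds)
batch[1] = None  # pretend shred #1 was lost in transit
print(recover_missing(batch))  # b'bbbb'
```

Reed-Solomon does the same job with far more flexibility: instead of tolerating one loss per parity shred in a fixed pattern, any 32 of the 64 shreds are enough to rebuild the batch.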

Once the block data is shredded, it is time to send it to validators so they can replay the data and vote on confirming the block. Broadcasting the shreds to validators is done via Turbine, Solana's block propagation protocol, which we'll have a look at next.

Turbine and the TVU

Turbine has a tree-based broadcasting structure and its design is inspired by the BitTorrent protocol - data is sliced into tiny bits (shreds in our case) and each recipient node on the network also functions as an immediate relay, uploading the shreds it receives to a deterministic subset of validators. On Solana, this subset is called a neighborhood. The default number of logical peers in one neighborhood is 200 and the fan-out tree isn't static: neighborhoods are dynamically shuffled, with a deterministic per-shred shuffle seeded by unique data from the shred itself.

After the leader generates shreds, it sends them one by one to the peers in its neighborhood, rotating the root peer after each shred. This is considered the first layer of the broadcasting structure (remember, it's tree-based, which means that the data doesn't just travel horizontally, but also downwards in layers).

When a validator receives a shred, it resends it both to all of the peers in its neighborhood and also to a predetermined set of validators in different neighborhoods in the next lower layer. A node will receive each shred multiple times from different nodes in different neighborhoods in the layer above and also from the peers in its own neighborhood, which keeps the load on each node fairly low and also ensures minimal data loss.
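The key trick is that the per-shred tree needs no coordination: every node derives the same shuffle from the shred itself. A simplified sketch below; real Turbine additionally weights the ordering by stake and uses a much larger fanout, so treat the seeding and layering here as illustrative only:

```python
import hashlib
import random

def turbine_order(validators, shred_seed):
    """Deterministically shuffle the validator list per shred: every node
    derives the identical ordering from the shred's seed bytes, so the whole
    network agrees on the tree without exchanging any messages."""
    rng = random.Random(hashlib.sha256(shred_seed).digest())
    order = list(validators)
    rng.shuffle(order)
    return order

def split_into_layers(order, fanout):
    """The first `fanout` nodes form layer 1; each subsequent slice forms
    the next layer down the retransmission tree."""
    return [order[i:i + fanout] for i in range(0, len(order), fanout)]

validators = [f"validator-{i}" for i in range(10)]
layers = split_into_layers(turbine_order(validators, b"slot-42-shred-7"), fanout=3)
for depth, layer in enumerate(layers, start=1):
    print(f"layer {depth}: {layer}")
```

Because the shuffle changes per shred, no single node sits at the top of the tree for every packet, which spreads bandwidth load evenly and limits how much any one faulty node can disrupt.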

And remember, each validator only needs 50% of the shreds from each batch because the Reed-Solomon encoding allows it to reconstruct the full block even if some bits are missing. If a validator still doesn't have all the shreds it needs, it can also send a repair request to its immediate peers and ask for the missing shreds directly.

The service responsible for reconstructing the block data from received shreds is called the TVU - Transaction Validation Unit. It verifies that all executed transactions are valid and also replays the PoH ticks. Every replica validator processes this data in parallel across multiple CPU/GPU cores, so replaying the block data is much quicker than the leader's original real-time execution and hashing.

Once the validator confirms that everything in the block is in order, it immediately casts a vote on the block. Since a vote is just a regular Solana transaction, it is sent to the current leader via Gulf Stream and processed together with user transactions. If the ledger shows that ⅔+ of all effectively staked SOL voted to add this block to the blockchain, the block is confirmed, typically within the next few slots (under normal conditions, confirmation takes around 0.8-1.2 seconds).
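The supermajority check itself boils down to comparing voted stake against total stake. A minimal sketch with made-up stake numbers:

```python
from fractions import Fraction

def is_confirmed(votes, stakes):
    """A block is confirmed once validators holding more than 2/3 of the
    total active stake have voted for it. Using Fraction avoids any
    floating-point edge cases right at the threshold."""
    total_stake = sum(stakes.values())
    voted_stake = sum(stakes[v] for v, voted in votes.items() if voted)
    return Fraction(voted_stake, total_stake) > Fraction(2, 3)

stakes = {"A": 40, "B": 35, "C": 25}
print(is_confirmed({"A": True, "B": True, "C": False}, stakes))  # True  (75% voted)
print(is_confirmed({"A": True, "B": False, "C": True}, stakes))  # False (65% voted)
```

Note that the threshold is weighted by stake, not by validator count: two well-staked validators can confirm a block that dozens of small ones abstained from.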
