RFC 2920 – SMTP Pipelining
Why This RFC Exists
Standard SMTP is a strict request-response protocol. The client sends one command, waits for the server's response, then sends the next. Each round trip adds latency, especially across high-latency network links. For a message with 50 recipients, the client would need 50 separate round trips just for the RCPT TO commands.
RFC 2920 defines the PIPELINING SMTP extension, which allows the client to send multiple commands in a batch without waiting for individual responses. The server buffers and processes the commands in order, then sends all responses back. This dramatically reduces the total time for an SMTP session.
Pipelining is one of the most widely supported SMTP extensions. Nearly every modern mail server advertises it, and most SMTP client libraries use it by default.
How It Works
- The client sends `EHLO` and confirms the server advertises `PIPELINING` in its capability list.
- The client groups together commands that are safe to pipeline, typically `MAIL FROM`, one or more `RCPT TO`, and `DATA`.
- The client sends all of these commands in rapid succession without waiting for individual replies.
- The server processes each command in order and sends back one response per command, in order.
- The client reads all responses and handles any errors (e.g., a rejected recipient).
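The flow above can be sketched in Python with raw sockets. This is a minimal illustration, not a production client: it assumes every reply fits on a single line (real servers may send multi-line replies), and it uses a `socketpair` with canned responses standing in for a PIPELINING-capable server.

```python
import socket

def send_pipelined(sock, commands):
    """Write a whole batch of commands in one send, then read one
    reply line per command, in order (single-line replies assumed)."""
    batch = b"".join(cmd.encode("ascii") + b"\r\n" for cmd in commands)
    sock.sendall(batch)  # one write, no waiting between commands
    reader = sock.makefile("rb")
    return [reader.readline().decode("ascii").rstrip("\r\n")
            for _ in commands]

# Demo: a socketpair stands in for the server, with canned replies queued.
client, server = socket.socketpair()
server.sendall(b"250 sender OK\r\n250 recipient OK\r\n354 go ahead\r\n")
replies = send_pipelined(client, [
    "MAIL FROM:<alice@example.com>",
    "RCPT TO:<bob@example.net>",
    "DATA",
])
print(replies)
```

Because the commands go out in a single write and the replies are read back afterwards, the whole envelope costs one round trip instead of three.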
SMTP Example
Without pipelining (5 round trips for the envelope):
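An illustrative transcript for a three-recipient message (`C:` is the client, `S:` the server; addresses are placeholders):

```
C: MAIL FROM:<alice@example.com>
S: 250 OK
C: RCPT TO:<bob@example.net>
S: 250 OK
C: RCPT TO:<carol@example.net>
S: 250 OK
C: RCPT TO:<dave@example.net>
S: 250 OK
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
```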
With pipelining (1 round trip for the entire envelope):
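The same illustrative envelope with all five commands sent as one batch (addresses are placeholders):

```
C: MAIL FROM:<alice@example.com>
C: RCPT TO:<bob@example.net>
C: RCPT TO:<carol@example.net>
C: RCPT TO:<dave@example.net>
C: DATA
S: 250 OK
S: 250 OK
S: 250 OK
S: 250 OK
S: 354 Start mail input; end with <CRLF>.<CRLF>
```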
Key Technical Details
Which Commands Can Be Pipelined
Not all SMTP commands are safe to pipeline. RFC 2920 divides commands into two categories:
| Safe to Pipeline | Must Wait for Response |
|---|---|
| `MAIL FROM` | `EHLO` / `HELO` |
| `RCPT TO` | `STARTTLS` |
| `DATA` | `AUTH` |
| `RSET` | `QUIT` |
| `NOOP` | `DATA` content (the dot-stuffed body) |
Commands that change the connection state (`EHLO`, `STARTTLS`, `AUTH`) are synchronization points: the client must wait for the response before sending anything else.
Error Handling
When pipelining, some commands in a batch may succeed while others fail. The client must match responses to commands in order. A rejected RCPT TO does not invalidate the entire transaction — the message is still delivered to the accepted recipients. However, if MAIL FROM is rejected, subsequent RCPT TO commands will also fail.
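Matching replies back to recipients by position can be sketched as follows (a hypothetical helper, assuming one single-line reply per pipelined `RCPT TO`):

```python
def accepted_recipients(recipients, rcpt_replies):
    """Pair each pipelined RCPT TO reply with its recipient, in order,
    and keep only the recipients the server accepted (2xx codes)."""
    return [rcpt for rcpt, reply in zip(recipients, rcpt_replies)
            if reply.startswith("2")]

# One rejected recipient does not invalidate the transaction:
recipients = ["bob@example.net", "carol@example.net", "dave@example.net"]
replies = ["250 OK", "550 no such user", "250 OK"]
print(accepted_recipients(recipients, replies))
```

The message would still be delivered to `bob` and `dave`; only `carol` bounces.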
TCP Buffering Considerations
Pipelining relies on TCP's buffering. The client writes multiple commands to the socket without reading, trusting that the TCP send buffer can hold them. For very large batches of RCPT TO commands (hundreds or thousands), the client may need to pipeline in groups to avoid filling the TCP buffer and deadlocking.
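Grouped pipelining can be sketched like this (the group size of 100 is an arbitrary illustrative choice, not a value from the RFC):

```python
def rcpt_groups(recipients, group_size=100):
    """Split a large recipient list into batches small enough that each
    pipelined group of RCPT TO commands fits in the TCP send buffer."""
    for i in range(0, len(recipients), group_size):
        yield recipients[i:i + group_size]

# A 250-recipient message becomes three pipelined groups:
groups = list(rcpt_groups([f"user{n}@example.net" for n in range(250)]))
print([len(g) for g in groups])
```

The client pipelines one group, drains all of its responses, then pipelines the next, so it never writes unboundedly while the server's replies back up.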
Interaction with STARTTLS
After an EHLO response that includes STARTTLS, the client must not pipeline STARTTLS with other commands. The TLS handshake changes the state of the entire connection, so it's a hard synchronization point. After TLS is established and a new EHLO is sent, pipelining can resume.
Common Mistakes
- Pipelining `EHLO` or `STARTTLS` with other commands. These are synchronization points. Pipelining them causes protocol errors because the server's response changes the session state that subsequent commands depend on.
- Not reading all responses before acting on errors. If you pipeline 5 commands, you must read all 5 responses, even if the first one is an error. Abandoning the response stream corrupts the protocol state.
- Assuming all servers support pipelining. While support is nearly universal, pipelining is an extension. Always check the `EHLO` response for `PIPELINING` before using it, and fall back to one-command-at-a-time mode if it's absent.
- Pipelining after MAIL FROM is rejected. If `MAIL FROM` is rejected and you've already pipelined `RCPT TO` commands, they'll all fail. Read the `MAIL FROM` response before pipelining recipients, or be prepared to handle cascading failures.
- Sending DATA content in the pipeline. The `DATA` command itself can be pipelined, but the message body that follows cannot. You must wait for the `354` response to `DATA` before sending the message content.
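The capability check mentioned above can be sketched by scanning the multi-line `EHLO` reply (a hypothetical helper, assuming the raw reply text with `250-`/`250 ` line prefixes):

```python
def supports_pipelining(ehlo_reply: str) -> bool:
    """Scan an EHLO reply for the PIPELINING keyword. Capability lines
    look like '250-KEYWORD [params]'; the final line uses '250 KEYWORD'."""
    keywords = set()
    for line in ehlo_reply.splitlines():
        if line.startswith("250") and len(line) > 4:
            keywords.add(line[4:].split()[0].upper())
    return "PIPELINING" in keywords

reply = ("250-mail.example.com greets client\r\n"
         "250-PIPELINING\r\n"
         "250 SIZE 52428800")
print(supports_pipelining(reply))
```

If the check fails, the client should drop back to one command per round trip.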
Deliverability Impact
- Faster delivery for multi-recipient messages. Pipelining reduces an N-recipient message from N+2 round trips to approximately 2 round trips for the envelope phase. On high-latency connections, this saves seconds per message.
- Higher throughput for bulk sending. When sending many messages through the same connection, pipelining lets you overlap the envelope of the next message with the data transfer of the current one, maximizing connection utilization.
- Reduced connection time. Shorter sessions mean less time holding open connections. This is important for servers that enforce per-connection time limits or rate limits based on connection duration.
- Better behavior under load. Pipelining reduces the number of network round trips, which reduces the load on both the sending and receiving servers. This makes your sending infrastructure more efficient.
- Per-recipient rejection still works. Pipelining does not reduce the server's ability to reject individual recipients. Each `RCPT TO` still gets its own response code, so bounce handling works exactly the same.