Author: Sami A | Category: Database Performance | Tags: Database, Performance, DB Network Tuning, DB Linux Tuning
Introduction
Database performance over the network is not only about bandwidth and latency. It is also heavily influenced by how data is packaged, buffered, and queued between the Oracle client, the listener, and the operating system’s TCP/IP stack.
This article explains how Oracle Session Data Unit (SDU), Linux TCP socket buffer sizes, and network device queue parameters work together, and how proper tuning can reduce round-trips, improve throughput, and stabilize performance in high-load environments such as Exadata, RAC, or Data Guard.
1. Session Data Unit (SDU)
The Session Data Unit (SDU) defines the size of the data packet exchanged between the Oracle client and server during SQL*Net communication.
In simple terms, SDU controls how much Oracle payload is sent in each network packet.
The default SDU size is typically 8192 bytes, but Oracle allows values of up to 65535 bytes.
Where SDU is configured
- sqlnet.ora
- tnsnames.ora
- Listener configuration (listener.ora)
Example:
DEFAULT_SDU_SIZE=65535
Or per connection:
(DESCRIPTION= (SDU=65535))
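In context, a per-connection SDU setting sits inside the connect descriptor in tnsnames.ora. A sketch follows; the alias, host, and service name are placeholders, not values from this article:

```
# Hypothetical tnsnames.ora entry; ORCLPDB, db01.example.com, and the
# service name are illustrative placeholders.
ORCLPDB =
  (DESCRIPTION =
    (SDU = 65535)
    (ADDRESS = (PROTOCOL = TCP)(HOST = db01.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orclpdb))
  )
```

Note that SDU is negotiated: the effective value is the smaller of what the client and the server side are configured for, so both ends should be raised together.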
Why SDU matters
- Larger SDU reduces the number of packets required to transfer large result sets.
- Fewer packets mean fewer round trips.
- Especially beneficial for:
- Data Guard redo transport
- RMAN backups over network
- Large batch queries
- ETL operations
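The round-trip savings can be made concrete with a rough calculation. Ignoring TCP segmentation and protocol overhead, the number of SDU buffers needed to ship a 1 MB result set drops sharply as SDU grows. A minimal sketch:

```shell
# Rough illustration: SDU buffers needed to ship a 1 MB result set.
# This ignores TCP segmentation (MSS) and SQL*Net protocol overhead.
PAYLOAD=1048576                 # 1 MB result set, in bytes
for SDU in 8192 65535; do
  UNITS=$(( (PAYLOAD + SDU - 1) / SDU ))   # ceiling division
  echo "SDU=$SDU -> $UNITS buffers"
done
```

With the default 8192-byte SDU this takes 128 buffers; at 65535 bytes it takes 17, roughly an 8x reduction in SQL*Net send operations.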
When increasing SDU helps
- High bandwidth networks (10G, 25G, 40G)
- High latency connections
- Large data transfers
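One way to confirm which SDU value was actually negotiated (a sketch; exact trace contents vary by Oracle version) is to enable client-side SQL*Net tracing and search the resulting .trc file for the SDU value. The trace directory below is a placeholder:

```
# sqlnet.ora (client side) -- tracing sketch; the directory is a placeholder.
# Remember to disable tracing again after the test: it is verbose.
DIAG_ADR_ENABLED = off
TRACE_LEVEL_CLIENT = 16
TRACE_DIRECTORY_CLIENT = /tmp/sqlnet_trace
```

After a test connection, grep the generated trace file for "SDU" to see the negotiated size.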
2. TCP Socket Buffer Sizes
TCP uses memory buffers to store data before sending and after receiving.
If these buffers are too small, throughput is limited even if the network is fast.
Key Linux parameters:
net.ipv4.tcp_rmem → Receive buffer
net.ipv4.tcp_wmem → Send buffer
Each parameter has three values:
min default max
Example:
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432
Meaning
- Minimum: Guaranteed per socket
- Default: Used for normal connections
- Maximum: Upper limit autotuning can reach
Impact on Oracle
- Controls how much data TCP can keep in flight before waiting for an ACK.
- Larger buffers improve throughput in high-latency networks.
- Important for:
- Data Guard SYNC/ASYNC transport
- GoldenGate
- RMAN duplication over network
- Exadata client connections
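A common way to size the buffer maximum is the bandwidth-delay product (BDP): the amount of data that must be in flight to keep the link busy. The maximum TCP buffer should be at least the BDP. A minimal sketch, assuming a hypothetical 10 Gbit/s link with a 2 ms round-trip time:

```shell
# Bandwidth-delay product (BDP) sketch.
# Assumed link: 10 Gbit/s, 2 ms round-trip time (illustrative values).
BANDWIDTH_BPS=10000000000       # link speed in bits per second
RTT_MS=2                        # round-trip time in milliseconds
# BDP bytes = (bits/s) * RTT(s) / 8; divide by 1000 first for the ms -> s step
BDP_BYTES=$(( BANDWIDTH_BPS / 1000 * RTT_MS / 8 ))
echo "BDP = $BDP_BYTES bytes"
```

Here the BDP is 2,500,000 bytes (about 2.4 MB), so the 33554432 (32 MB) maximum in the example above leaves ample headroom; on slower or lower-latency links a smaller maximum avoids wasting kernel memory.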
3. Core Socket Limits
These parameters define the maximum allowed buffer sizes system-wide.
net.core.rmem_max
net.core.wmem_max
Example:
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
If these values are small, increasing tcp_rmem or tcp_wmem will have no effect because the OS will cap them.
Best practice
Always increase:
net.core.rmem_max
net.core.wmem_max
before tuning TCP buffers.
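To make the core limits persistent across reboots, they belong in a sysctl configuration file rather than being set ad hoc. A sketch; the file name is a convention, not a requirement:

```
# /etc/sysctl.d/99-net-core.conf -- illustrative values, not mandates
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
```

Apply without rebooting via `sysctl --system` (or `sysctl -p /etc/sysctl.d/99-net-core.conf`).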
4. Network Device Queue Parameters
These control how packets are queued at the NIC (Network Interface Card) level.
txqueuelen
Defines the transmit queue length for a network interface.
Check value:
ip link show eth0
Change:
ip link set eth0 txqueuelen 10000
Effect
- Larger queue reduces packet drops during bursts.
- Useful for high throughput environments.
net.core.netdev_max_backlog
Defines how many packets can be queued when the kernel cannot process them fast enough.
Example:
net.core.netdev_max_backlog = 30000
Impact
- Prevents packet loss during traffic spikes.
- Important for:
- RAC interconnect
- High connection count systems
- Backup or migration windows
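Whether the backlog is actually being exceeded can be checked in /proc/net/softnet_stat: one line per CPU, all fields hexadecimal, and the second field counts packets dropped because the netdev_max_backlog queue was full. A minimal sketch that parses a sample line (on a live system, read the file itself the same way):

```shell
# Parse one line of /proc/net/softnet_stat (sample data shown here).
# Field 1 = packets processed, field 2 = packets dropped because the
# netdev_max_backlog queue was full. All fields are hexadecimal.
sample="0000272d 00000003 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000"
dropped_hex=$(echo "$sample" | awk '{print $2}')
dropped=$(printf '%d' "0x$dropped_hex")   # hex -> decimal
echo "dropped=$dropped"
```

A steadily increasing drop counter on a busy CPU is the signal that raising net.core.netdev_max_backlog (or spreading interrupt load) is worthwhile.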
5. How All Parameters Work Together
Data flow path:
Oracle Session → SDU → TCP Socket Buffer → Kernel Queue → NIC Queue → Network
If any layer is undersized:
- Packets fragment
- Round trips increase
- Throughput drops
- CPU interrupts increase
Optimal tuning requires alignment between:
- Oracle SDU
- TCP buffer sizes
- Kernel limits
- NIC queue depth
6. Practical Tuning Strategy
- Increase OS limits first:
net.core.rmem_max
net.core.wmem_max
- Tune TCP buffers:
net.ipv4.tcp_rmem
net.ipv4.tcp_wmem
- Adjust network queues:
txqueuelen
net.core.netdev_max_backlog
- Finally optimize Oracle SDU.
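Putting the steps above together, a persistent configuration might look like the following. This is a sketch using the example sizes from this article; the right values should be derived from your own bandwidth-delay product and memory budget:

```
# /etc/sysctl.d/99-oracle-network.conf -- illustrative values, not mandates
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432
net.core.netdev_max_backlog = 30000
```

Apply with `sysctl --system`. Two settings live outside sysctl: txqueuelen is set per interface (`ip link set eth0 txqueuelen 10000`) and needs a udev rule or network script to persist, and the SDU is set in sqlnet.ora (`DEFAULT_SDU_SIZE=65535`) on both the client and the server side.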
7. Recommended Use Cases
These tunings provide the biggest benefit in:
- Exadata environments
- Data Guard redo transport
- Cross-region DR
- High throughput OLTP
- RMAN over network
- Consolidated databases with many sessions
Conclusion
Network performance tuning is not achieved by a single parameter.
Oracle SDU controls how data is packaged, TCP buffers control how data is staged, and Linux network queues control how packets are handled under pressure.
When these layers are aligned, the database can fully utilize modern high-speed networks, reduce round trips, and deliver consistent performance even during peak load.
Proper testing with tools such as AWR, ASH, OSWatcher, and network statistics is essential to validate improvements and avoid over-allocation of memory.

