Hardware Integration
jNetWorks SDK provides seamless integration with high-performance hardware backends while maintaining a unified API. The same application code works across libpcap (for development), DPDK, and Napatech — with hardware-specific optimizations applied transparently via the ProtocolStack SPI.
Backends can replace software implementations with hardware-accelerated versions during the specification lifecycle (the resolve() and build() phases). This enables complex operations to be offloaded without any code changes.
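In practice this means the same configuration code runs over libpcap on a developer workstation and on a SmartNIC in production. The sketch below is illustrative only: the backend-selection call (forBackend) is a hypothetical placeholder, not a documented factory method, while getProtocol(), enableReassembly(), and preferHardwareOffload() appear later in this section.
// Hypothetical sketch: only the backend choice changes between environments.
ProtocolStack stack = ProtocolStack.forBackend("pcap");   // placeholder; e.g. "dpdk" or "napatech" in production
stack.getProtocol(IpProtocol.class)
     .enableReassembly()
     .preferHardwareOffload(true);   // falls back to software where no offload exists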
Supported Backends
Backend | License | Offload capabilities | Buffering
libpcap | Apache 2.0 | Software-only processing; no hardware offload | Single packet at a time
DPDK | Commercial | RSS hashing, zero-copy access to large hugepage-backed buffers | Large multi-GB NUMA-aware device buffers
Napatech | Commercial | IP fragment table offload, advanced descriptors, protocol dissection offload | Up to 64 large multi-GB NUMA-aware buffers
Transparent Offload via ProtocolStack SPI
The ProtocolStack collaborates with backends through SPI services:
During resolve(), the backend is queried via BackendContext for supported capabilities. If hardware offload is available and preferred, incompatible software processors are replaced.
In build(), hardware-accelerated Processor implementations are instantiated.
Example (user code remains identical):
stack.getProtocol(IpProtocol.class)
     .enableReassembly()
     .preferHardwareOffload(true); // Automatic fallback to software

If the backend supports it (e.g., the Napatech IP fragment table), the hardware version is used seamlessly.
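Behind the scenes, the substitution can be pictured roughly as follows. This is a conceptual sketch only: the methods shown on BackendContext and the software fallback class are invented names, not the documented SPI; only BackendContext and Processor are named on this page.
// Conceptual only: how a protocol's resolve() phase might pick a Processor.
// BackendContext method names below are hypothetical placeholders.
Processor reassembly;
void resolve(BackendContext backend) {
    if (preferHardwareOffload && backend.supportsIpReassemblyOffload()) {
        reassembly = backend.ipReassemblyProcessor();   // hardware-managed fragment table
    } else {
        reassembly = new SoftwareIpReassembly();        // portable software fallback
    }
}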
Backend-Specific Offloads
Napatech (NTAPI) Backend
Napatech SmartNICs offload heavy operations to dedicated hardware:
IP Fragment Table Offload — Hardware-managed reassembly with large tables and timeout handling.
Advanced Hardware Packet Descriptors — Rich metadata (nanosecond timestamps, tunnel info, error flags, VLAN stacking).
Protocol Dissection Offload — Hardware parsing up to L4 (Ethernet → IP → TCP/UDP), reducing CPU load.
Hash Generation — Built-in flow distribution for perfect stream affinity.
Buffering: Zero-copy access to up to 64 large device buffers (multi-GB, NUMA-aware). Packets can be held for medium-duration processing before release.
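As an illustration of what the richer descriptors expose to application code, the sketch below uses hypothetical accessor names (timestampNanos(), flowHash()); the actual descriptor API is defined by the Napatech backend, not by this page. take() and release() are described later in this section.
Packet packet = stream.take();
long tsNanos  = packet.timestampNanos();   // assumed accessor: hardware nanosecond timestamp
long flowHash = packet.flowHash();         // assumed accessor: hardware flow hash for stream affinity
// ... route by flowHash, timestamp with tsNanos, then release the packet ...
packet.release();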
DPDK Backend
DPDK enables kernel-bypass on commodity high-speed NICs:
Hash Generation — RSS-based (hardware or emulated) for 2/3/5-tuple flow distribution.
Advanced Packet Descriptors — NIC-specific metadata (checksum status, VLAN, timestamps where supported).
Protocol Dissection — Software-based but highly optimized.
Buffering: Zero-copy access to large hugepage-backed memory regions (typically GB-range, NUMA-aware). Allows medium-duration packet retention.
libpcap Backend
Pure software fallback.
Single-packet delivery only.
No hardware offload or multi-buffer access.
Packet Retention and Cloning
DPDK and Napatech expose large shared device buffers:
Packets acquired via stream.take() reference memory in these buffers. They may be held for medium-duration processing (e.g., across multiple worker stages).
They must eventually be release()-d to return them to the device pool, as sketched below.
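A minimal sketch of that lifecycle, assuming an already-opened capture stream named stream and a hypothetical application callback process(); only take() and release() come from this page, and the release() call is shown on the packet itself for illustration.
Packet packet = stream.take();      // zero-copy reference into the shared device buffer
try {
    process(packet);                // medium-duration work while the buffer slot is held
} finally {
    packet.release();               // return the slot to the device pool
}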
For longer-term retention or modification:
Configure PacketPolicy.memoryCopy(), which automatically copies packets to a managed memory pool on take().
Or use a custom PacketFactory via PacketPolicy.factoryCopy().
At any time, Packet cloned = packet.clone(); creates an independent copy using the packet's defined clonePolicy (of type PacketPolicy).
Example:
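A minimal sketch, assuming an already-opened stream named stream and an application-defined archive() helper (both illustrative); take(), release(), clone(), and clonePolicy come from the description above.
// The acquired packet references shared device-buffer memory and must be released;
// clone() produces an independent copy that can outlive the original.
Packet packet = stream.take();
Packet cloned = packet.clone();     // independent copy per the packet's defined clonePolicy
packet.release();                   // original slot returns to the device pool
archive(cloned);                    // hypothetical helper: retain or modify the clone freely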
Performance Benefits
Hardware backends dramatically reduce CPU utilization and increase throughput:
Offloaded reassembly/dissection → near-zero CPU cycles per packet
Large shared buffers → fewer syscalls and better cache behavior
NUMA awareness → optimal memory locality
The unified API ensures development and testing remain simple with libpcap, while production deployments unlock full hardware potential.
See the Tutorials for practical examples leveraging these capabilities.