To test with highly realistic traffic patterns, we need to step back and focus on production network patterns. Most importantly, real traffic is framed in application workflows, not oversimplified HTTP or TCP bandwidth and concurrency figures. For testing and validation, this means the orchestrating element of emulated traffic is the user generating it (Spirent's term is 'SimUser'). A SimUser has attributes (session state stored in DUT state tables; IP/MAC/VLAN identity; and per-user network impairments such as distance, latency, jitter, loss, and asymmetric routes), so it behaves like a real user for testing purposes. From a measurement perspective, the most important unit of measure is user Quality of Experience (QoE). Capturing it requires application workflows realistic enough to assess service availability, access predictability, and performance (for example, page load time with no transaction errors). Everything else is secondary to QoE.
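To make the attribute list above concrete, here is a minimal sketch of how a per-user emulation record might be modeled. The class and field names are illustrative assumptions, not Spirent's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical model of the per-user attributes described above.
# Names are illustrative only; they do not reflect Spirent's real interfaces.
@dataclass
class SimUser:
    ip: str                    # per-user IP identity
    mac: str                   # per-user MAC identity
    vlan: int                  # VLAN membership
    latency_ms: float = 0.0    # one-way delay impairment (distance/latency)
    jitter_ms: float = 0.0     # delay-variation impairment
    loss_pct: float = 0.0      # packet-loss impairment
    session_state: dict = field(default_factory=dict)  # state the DUT must track

# One emulated user with its own identity and impairment profile.
user = SimUser(ip="10.0.1.25", mac="00:1b:21:aa:bb:cc", vlan=100,
               latency_ms=35.0, jitter_ms=4.0, loss_pct=0.1)
```

Because every SimUser carries its own identity and impairment profile, the DUT must maintain a distinct state entry per user, which is precisely what simple bulk traffic fails to exercise.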
From a traffic-loading perspective, one user drives an application workflow (for example, stepping through an application performing actions such as posting forms or streaming video). Each element of the workflow is encapsulated in a TLS 1.2/1.3 tunnel, which is statefully negotiated and periodically renegotiated, with the associated level of encryption. A single page in a web-based application (such as an eCommerce site) will typically contain 150-300 URLs that are processed according to the browser's mapping rules (pipelining, transactions per connection, and connection-close rules). This generates and/or reuses tens of TCP connections (statefully tracked by the DUT) and drives traffic. From a measurement perspective, all stack elements are measured as a coordinated unit, so a single failing point affects overall user satisfaction (QoE). Thus a single application 'page', just one element of a multipage workflow, is substantially more complex than simple HTTP GET-style traffic.
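The page mechanics above can be sketched with a toy model: 150-300 URL fetches mapped onto a small pool of reused connections, with QoE judged on total page load time and transaction errors. This is a deliberately simplified serial model (real browsers pipeline and parallelize), and all names and costs here are assumptions for illustration:

```python
# Toy page-load model, not a real traffic generator: n_urls requests are
# mapped round-robin onto a reusable connection pool. New connections pay a
# simplified TCP + TLS setup cost; every request pays one round trip.
def load_page(n_urls: int, pool_size: int = 20, rtt_ms: float = 40.0):
    connections = [0] * pool_size        # per-connection request counts
    total_time_ms = 0.0
    errors = 0                           # QoE criterion: zero transaction errors
    for i in range(n_urls):
        conn = i % pool_size             # reuse connections round-robin
        if connections[conn] == 0:
            total_time_ms += 2 * rtt_ms  # TCP + TLS handshake cost (simplified)
        connections[conn] += 1
        total_time_ms += rtt_ms          # one request/response exchange
    return {"load_time_ms": total_time_ms,
            "errors": errors,
            "connections_used": sum(1 for c in connections if c)}

# One mid-sized page: 200 URLs over a pool of 20 connections.
result = load_page(n_urls=200)
```

Even this crude model shows why a single page exercises far more DUT state than a bulk HTTP GET stream: the DUT must track every connection in the pool plus the TLS session state behind each one.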
The key to measuring real-world performance
Realism substantially affects the device under test (DUT). Simple traffic exercises only a few buffers/queues, limited state tracking, and few other policies. Even if the network is not performing state tracking, impairments in the DUT may disrupt the entangled nature of a workflow yet go undetected with simple traffic. If the network does statefully track and manage traffic, then simple traffic will not exercise the policies that matter. For these reasons, simple traffic is best suited, and most methodologically accurate, for measuring the synthetic engineering limits of a DUT's fundamental performance attributes.
Simple traffic, however, is not suitable for measuring real-world performance, scale, or user experience. There is a danger of misinterpreting simple traffic's overall value and what it tells us about the DUT's ability to perform in a production network. Certainly, network architecture and engineering should never rely on simple traffic results alone, as they frequently over-predict capacity.
Solutions for measuring true network scale and performance
The testing requirements outlined above cannot be addressed by the majority of network test and assessment solutions. Spirent CyberFlood directly addresses all stages of test and assessment. It can measure synthetic engineering limits rapidly. It provides unique value by emulating real users and their application workflows, addressing the orchestration chain described above through an extensive database, the CyberFlood TestCloud content feed, of tens of thousands of application workflows, along with the ability to measure application user experience. With these capabilities, customers are always in a ready-to-test state with the most current application versions and devices. Because CyberFlood traffic is real, meaningful, Layer 7+ (with SimUser emulation), and able to generate very high user concurrencies, it can measure the true scalability and effects on the target DUT or the overall network under test.
Learn more about Spirent CyberFlood.