Validating Application Performance and Latency in Dynamic Cloud Environments

A wide range of solutions from Amazon Web Services (AWS) provides flexible, secure options for organizations in the public cloud, including gateways and network connection choices. Learn how to measure application throughput and latency key performance indicators (KPIs) to right-size AWS deployments.

Amazon Virtual Private Cloud (VPC) provides the ability to logically partition a section of the AWS public cloud, where AWS resources can be launched in isolation.

Organizations benefit from this approach because it combines the security of private clouds with the scalability and convenience of public clouds. There are numerous use cases for it, for example deployments where public-facing services (e.g., web servers) are coupled with private-facing services (e.g., databases and application servers).

Amazon VPC comprises many user-configurable components that can be customized to meet the demands of today's applications. There are several options in terms of network gateways, services, and connections. One such option is AWS Transit Gateway, which provides a hub-and-spoke design for connecting VPCs.
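For readers unfamiliar with the hub-and-spoke model, the setup can be sketched with the AWS CLI. This is an illustrative configuration fragment, not part of any CyberFlood workflow, and all resource IDs (`tgw-…`, `vpc-…`, `subnet-…`, `rtb-…`) are placeholders:

```shell
# Create the Transit Gateway -- the "hub" that interconnects VPCs.
aws ec2 create-transit-gateway \
    --description "hub for inter-VPC test traffic"

# Attach each VPC (a "spoke") to the hub via a subnet in that VPC.
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0aaa1111bbbb22223 \
    --subnet-ids subnet-0ccc3333dddd44445

# In each VPC's route table, send traffic destined for the other
# VPC's CIDR block through the Transit Gateway.
aws ec2 create-route \
    --route-table-id rtb-0eee5555ffff66667 \
    --destination-cidr-block 10.1.0.0/16 \
    --transit-gateway-id tgw-0123456789abcdef0
```

With attachments and routes in place for both VPCs, instances in one VPC can reach the other through the gateway, which is the path exercised in the tests below.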

Such variations in provisioning options, coupled with an underlying cloud infrastructure that is not under the organization's control, may impact end-user quality of experience (QoE), making it imperative to gauge end-to-end performance of the installation prior to and during deployment.

Furthermore, there is always the possibility of additional congestion from neighboring workloads at busy times impacting an organization's applications, which again is outside its control. Open-source tools have limitations in performing these types of assessments: they lack realism in the network traffic they simulate, do not offer thorough reporting and visualization, can be cumbersome to access, and may come with their own security risks.

Spirent CyberFlood helps users validate the performance, scalability, and network security of app- and content-aware solutions on premises and in both private and public clouds.

In this follow-up to our earlier post about proactive performance and security assessment in public clouds, we share two video demonstrations showing how CyberFlood can measure bandwidth and latency in AWS inter-VPC scenarios within a single region.

Inter-VPC back-to-back baseline assessment

The following diagram illustrates this use case, where we deploy two distinct CyberFlood Virtual (CFv) instances in two separate AWS VPCs, which then communicate with each other through an AWS Transit Gateway:

See CyberFlood inter-VPC back-to-back baseline assessment in action:

The CyberFlood HTTP Throughput Test can be used to measure bandwidth and latency across various deployments. In this set of tests, a private cloud (ESXi) with CyberFlood and virtual routers achieved over 8 Gbps of HTTP throughput with sub-second latency for back-to-back (no DUT) tests. A similar setup and tests in AWS inter- and intra-VPC scenarios achieved 5 Gbps of HTTP throughput, with latency exceeding that of the private cloud case.
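CyberFlood reports these KPIs directly; for readers who want to sanity-check raw counters from any test tool, the underlying arithmetic is straightforward. A minimal sketch (function names and sample figures are illustrative, not CyberFlood's API):

```python
def throughput_gbps(bytes_transferred: int, duration_s: float) -> float:
    """Convert a byte count over a test window to gigabits per second."""
    return bytes_transferred * 8 / duration_s / 1e9

def avg_latency_ms(latencies_ms: list[float]) -> float:
    """Mean of per-transaction latency samples, in milliseconds."""
    return sum(latencies_ms) / len(latencies_ms)

# Example with made-up counters: 75 GB transferred in a 60 s window.
gbps = throughput_gbps(75_000_000_000, 60.0)   # 10.0 Gbps
lat = avg_latency_ms([0.8, 1.2, 0.9, 1.1])     # 1.0 ms
```

Comparing such figures between a back-to-back baseline and a run through the Transit Gateway isolates the latency and throughput cost of the cloud network path itself.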

See below for a few result samples from CyberFlood Virtual reports:

Inter-VPC back-to-back baseline assessment

Private cloud (ESXi) back-to-back baseline assessment

The table below offers examples of the proactive validation organizations can leverage to assure quality user experiences for their business-critical applications:

Throughput and latency results measured by the CyberFlood test solution

Learn how Spirent security testing solutions can help assess the performance and security strength of your organization’s public cloud network.




Reza Saadat


Reza Saadat is a Senior Technical Marketing Engineer in Spirent's Application and Security group, with more than 25 years of experience in computer and data communications technologies. At Spirent, he works with the product management, engineering, and sales teams to bring leading-edge application and security test solutions to market for network equipment manufacturers, enterprises, and service providers. His deep knowledge of the industry, markets, and software development, combined with collaborative design and development skills, has produced numerous hardware and software solutions released by companies including IBM Corp. and Cisco Systems.