
New Performance Pressure for Latest App-Driven Data Center Deals


Every network and application brings unique characteristics and vulnerability to network impairments. Having a deep understanding of how apps behave in a range of expected scenarios is a critical step in assuring performance. Read how pre-deployment testing and verification will help relieve doubts and reveal the unknown.

A perfect storm of tech readiness, big data demands and an unstoppable trend toward cloud-hosted apps and services has been a boon for data centers.

Suddenly, these network operators are serving unprecedented network IT needs. They’re capitalizing on mass IoT deployments in which physical devices, vehicles and home appliances send a flurry of data for real-time processing.

As mobile network operators strategically offload certain network management elements in the shift to 5G, a bevy of virtual network functions is being spun up constantly. Data centers have been prepared to meet this demand. Ubiquitous 100G Ethernet technology has helped instill confidence in widescale high-speed availability, providing flexible breakout options and a high-density, cost-effective path to 400G and early 800G deployments.

And just in time.

The sizeable increase in the volume of data being constantly stored and accessed makes 100G not a nice-to-have, but a necessity.

Data center switches migrating to 25/50/100G Ethernet. Source: Dell’Oro July 2021 - Long Term Ethernet Switch Forecast

100G just a piece of the performance puzzle

Data center operators understand that 100G on its own isn’t adequate for meeting the exacting performance requirements that are accompanying these new opportunities.

After all, these data centers are not just offering the latest network speeds. They are fundamentally entering new areas of business. They’re having to become experts in network function virtualization (NFV), edge computing, webscale networks and network slicing architectures.

The revenue possibilities are limitless. But so, potentially, are the headaches that accompany them. That’s because these new lines of business also bring responsibility for maintaining strict quality of service (QoS).

Suddenly, the pressure is on to assure performance in line with contracted service level agreements (SLAs). The challenge here is that data center operators don’t have an efficient, accurate way to verify app performance under real network conditions. This stokes potential app failure scenarios, threatening lucrative new lines of business.

Why pre-deployment insights are key to a high-performing future

While performance has traditionally been a post-deployment concern, the high stakes for emerging apps being spun up in data centers demand better insight into the issues that may occur, along with risk mitigation strategies to contain them.

Every network and application brings unique characteristics and vulnerability to network impairments. Therefore, having a deep understanding of how apps behave in a range of expected scenarios is a critical step in assuring performance.

In our recent work with data center operators, we’ve begun to identify pre-deployment verification use cases in the areas of data center migration and data center interconnect:

  • Emulating customer networks to demonstrate risk mitigation. For enterprises conducting migrations, cost efficiencies must be weighed against potential impacts to mission-critical applications. By emulating customer networks in advance of deployment, the impact of common impairments like latency and packet loss can be examined. When customers understand how real-world challenges arising from complex interactions between network components will be addressed, a plan can be developed for mitigating risk and demonstrating how SLAs will be met.

  • Assuring 100G performance. In data center interconnect scenarios, apps will be hosted remotely, with high-speed network connectivity between locations representing a potential weak link. These deployment scenarios can require both east-west and north-south traffic flows, potentially introducing latency-driven performance issues. By emulating specific network environments and introducing latency and packet loss, application performance boundaries can be identified before deployment (a minimal sketch of this approach follows this list). This helps reveal where implementing load balancing and WAN optimization technologies will support improved QoS.
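As a concrete illustration of this kind of pre-deployment check, the minimal sketch below uses the Linux tc/netem queueing discipline to step through increasing one-way delay (plus a small fixed packet loss rate) on a lab interface while timing an application's responses. The interface name, endpoint URL, delay steps and loss rate are illustrative assumptions only, not a Spirent or Calnex workflow; dedicated impairment emulators apply the same principle at line rate and with far finer control.

```python
# Minimal sketch: sweep emulated WAN delay with Linux tc/netem and time an
# application's responses at each step. Requires root privileges and a lab
# interface that carries traffic to the application under test.
# IFACE, APP_URL, the delay steps and the loss rate are assumptions.
import subprocess
import time
import urllib.request

IFACE = "eth0"                       # assumed lab-facing interface
APP_URL = "http://10.0.0.10/health"  # assumed endpoint of the app under test

def set_impairment(delay_ms: int, loss_pct: float) -> None:
    """Replace any existing root qdisc with netem delay + loss."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
        check=True)

def clear_impairment() -> None:
    """Remove the netem qdisc so the interface returns to normal."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=False)

def time_one_request() -> float:
    """Time a single request to the application under test, in seconds."""
    start = time.perf_counter()
    urllib.request.urlopen(APP_URL, timeout=10).read()
    return time.perf_counter() - start

if __name__ == "__main__":
    try:
        for delay in (0, 10, 25, 50, 100):   # one-way delay steps, in ms
            set_impairment(delay_ms=delay, loss_pct=0.1)
            time.sleep(1)                    # let the new qdisc take effect
            elapsed = time_one_request()
            print(f"delay={delay:>3} ms  loss=0.1%  response={elapsed:.3f} s")
    finally:
        clear_impairment()
```

Plotting the recorded response times against the injected delay makes an application's performance boundary visible, and repeatable, before any production traffic or SLA is at risk.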

Removing the guesswork, instilling confidence

Data center operators stand on the cusp of massive revenue and service evolution opportunities. As they traverse this new terrain, pre-deployment testing and verification will help relieve doubts and reveal the unknown. Armed with the right insight, they will be able to make confident decisions about network architectures and the SLAs they’ll support.

Learn about the latest test solutions for ensuring predictable infrastructure performance.

Guest contributor: David Robertson, Product Manager, Calnex Solutions

Malathi Malla

Malathi Malla leads the Cloud, Data Center and Virtualization segment for Spirent. Responsible for Product Marketing, Technical Marketing, and Product Management, she drives go-to-market strategy across Cloud and IP solutions. She has over 14 years of hi-tech experience at both Silicon Valley start-ups and large companies, including Citrix, IBM, Sterling Commerce (software division of AT&T), and Comergent Technologies. Malathi also represents Spirent as Marketing prime in open source communities such as the Open Networking Foundation and OpenDaylight. Join the conversation and connect with Malathi on LinkedIn or follow her on Twitter at @malathimalla.