
The Emergence of the Universal Appliance

Containers are an example of a universal solution - one that revolutionized the shipping industry

In 1956, Malcolm McLean introduced an invention that changed cargo shipping forever: the shipping container. The container provides a standard, universal packing solution for transporting whatever you need to ship. Because containers are of standard size, they can be transported on trucks, trains or ships.

Containers are a universal solution that revolutionized an entire industry. Can the emergence of the standard PC server platform as a universal computing platform herald the proliferation of even more innovative dedicated network appliances in IP networks?

The growth of network appliances
Over the last decade, we have all become familiar with packet networks and equipment such as routers and switches. But packet networks have grown more intelligent in recent years as ever more demanding, real-time services have migrated to IP. While intelligence has been added to routers to handle these new service types, a need has also developed for dedicated network appliances that perform specific functions in the IP network.

A good example is network performance monitoring. Routers can create and collect NetFlow data, which can be used to monitor the performance of IP networks. However, processing this data for many sessions places an unacceptable load on the router and diverts it from its main task: forwarding packets. It therefore makes sense to off-load this work to dedicated network performance monitoring appliances.

Another example is network security. Many routers provide network security features, but as we move to higher-speed networks, there is growing interest in using dedicated network security devices, such as intrusion detection and prevention systems (IDS/IPS), to detect threats in real time and take immediate action.

This trend can be seen throughout the network, with dedicated appliances for network analysis, network forensics, network test and measurement, network optimization and network security. These are highly intelligent solutions that can process vast amounts of data in real time. They are essential in establishing IP networks as intelligent, multi-service transport networks.

From proprietary to standard hardware platforms
Until recently, the high performance required by these solutions dictated a system design similar to the routing products they were designed to off-load, namely a proprietary hardware design. The doctrine has been that only a customized, proprietary design can provide the performance you need to meet the real-time demands of high-speed network monitoring.

But an alternative system design approach has been gathering momentum over the last few years based on standard off-the-shelf platforms. Standard PC servers have established themselves as a credible hardware platform alternative to in-house proprietary design and have been embraced by a number of network monitoring solution vendors who recognize that the value of their solutions lies in the application software provided. The hardware platform just needs to provide the raw computing power, memory bandwidth and fast input/output of data that these solutions require.

With the latest server platforms based on new multi-core CPU architectures, the raw processing power and memory bandwidth are available to perform even the most demanding tasks. However, there is one area that these standard platforms cannot address: fast input/output of data, especially for real-time network analysis applications.

Standard Network Interface Cards (NICs) provided with standard servers do not have the real-time throughput capacity and efficiency needed for high-speed network monitoring. NICs can provide fast input/output for data packets addressed to a specific server MAC/IP address, but cannot provide the same performance for all traffic when all MAC/IP addresses must be monitored. This is especially the case when moving to 10 Gbps networking.

Fortunately, specialist network adapters have emerged to fill the gap.

Focusing R&D effort
The combination of standard server platforms and intelligent real-time network adapters establishes the universal appliance platform for high-performance network monitoring or any other application that requires real-time packet capture, analysis and re-transmission at speeds up to 10 Gbps without losing packet data.

The emergence of such a universal appliance is significant. It effectively separates the application software from the hardware supporting it. This allows a multitude of dedicated application software solutions to be supported by a single hardware platform, where features can be added or the application software can even be replaced entirely without changing the server. Vendors of network monitoring, analysis, test & measurement, optimization and security solutions can thus concentrate on the application and focus their R&D investment on software development rather than diverting attention to hardware development.

Not only does this mean more focus, but it also comes at a lower cost. Standard PC server platforms enjoy economies of scale, leading to relatively low unit prices. A standard server costing a few thousand dollars is more than adequate to provide the CPU power and memory performance required for 10G applications. It is therefore possible to deliver a low-cost, high-performance hardware platform with zero investment in hardware development.

But to make it work, you need an intelligent real-time network adapter. Let's take a look at the fast input/output challenge in real-time network monitoring and how intelligent real-time network adapters help to meet it.

The limitations of standard NICs
Fast input/output in real-time network monitoring requires that all data is captured no matter the packet size, link utilization or line speed. Standard Network Interface Cards (NICs) have been used for this task in the past, but as the graphs in Figures 1a and 1b below show, they face significant challenges in 10 Gbps real-time network monitoring:

Figure 1a: Real throughput on a 10 Gbps port for standard NICs (Source: CESNET performance tests)

Figure 1b: CPU load handling 10 Gbps data traffic on 10 Gbps port (Source: CESNET performance tests)

The graph in Figure 1a shows the effective throughput that can be achieved without losing packets at the port. It refers to Ethernet frames, which are used to transport IP packets in IP networks. Ethernet frames (and IP packets) can have any size. The size is determined by the application, but also by conditions on the network: if the network, or parts of it, is heavily loaded, smaller packets/frames may be used, as these have a better chance of reaching the destination in a congested network.

Table 1 below shows the theoretical limit for the throughput one should expect on a 10 Gbps port. Note that throughput naturally falls as the frame size is reduced. With smaller frame sizes, there are more frames to be handled, and the preamble and inter-frame gap associated with each frame become more significant. This is pure overhead and reduces the effective throughput.

Table 1: Theoretical maximum throughput for a 10 Gbps Ethernet port

As can be seen in Figure 1a, for large Ethernet frame sizes, throughput is close to the theoretical limit. However, as frame sizes decrease, the effective throughput drops off dramatically to less than 1 Gbps at small frame sizes.
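
To make the overhead arithmetic concrete, here is a minimal sketch (not part of the original article or of Table 1 itself) showing how the theoretical maximum can be computed, assuming the standard 20 bytes of per-frame overhead on the wire, i.e., a 7-byte preamble, 1-byte start-of-frame delimiter and 12-byte inter-frame gap:

# Sketch of the arithmetic behind a table like Table 1: theoretical maximum
# throughput on a 10 Gbps Ethernet port, assuming 20 bytes of per-frame
# overhead (preamble + start-of-frame delimiter + inter-frame gap).

LINE_RATE_GBPS = 10.0
OVERHEAD_BYTES = 7 + 1 + 12

def max_throughput_gbps(frame_size_bytes: int) -> float:
    """Share of the line rate that carries actual frame data."""
    return LINE_RATE_GBPS * frame_size_bytes / (frame_size_bytes + OVERHEAD_BYTES)

for size in (64, 128, 256, 512, 1024, 1518):
    print(f"{size:>5}-byte frames: {max_throughput_gbps(size):.2f} Gbps")

# 64-byte frames yield roughly 7.6 Gbps of frame data, 1518-byte frames
# roughly 9.9 Gbps, which is why throughput falls as frame size shrinks.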

Typical frame sizes for Internet communication lie in the range from 128 to 1024 bytes with 300 bytes an often referenced frame size for tests. In this range, it can be seen that throughput is at best 6 Gbps and can be as low as 1 Gbps!

The graphs above are based on 10 Gbps port throughput, but the issue is the same for 1 Gbps ports. What distinguishes the two cases is the additional load placed on the CPU for handling data traffic. For 1 Gbps ports, the CPU load is high but acceptable, whereas for 10 Gbps ports, as Figure 1b shows, almost two-thirds of the CPU resources are used just to handle Ethernet frames. This is not acceptable for many of the compute- and data-intensive network applications that are now becoming common in the network.

The explanation for this considerable workload is that standard NICs are designed to interrupt the CPU each time a frame is received and needs to be handled. The CPU must decide what to do with the frame, re-order and de-duplicate received frames, discard frames that are invalid, and so on. This, obviously, is a distraction for CPUs that should be busy running the network application in question.
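
To see why this per-frame interrupt model breaks down at 10 Gbps, it helps to estimate how many frames arrive per second on a fully loaded link. The following back-of-the-envelope sketch (an illustration, not a measurement from the article) assumes the same 20 bytes of per-frame overhead as above and one interrupt per frame with no coalescing:

# Rough estimate of the frame arrival rate on a fully loaded 10 Gbps link.
# With a standard NIC raising one interrupt per frame and no coalescing,
# this is also the interrupt rate the CPU has to service.

LINE_RATE_BPS = 10e9
OVERHEAD_BYTES = 20  # preamble + start-of-frame delimiter + inter-frame gap

def frames_per_second(frame_size_bytes: int) -> float:
    bits_per_frame_on_wire = (frame_size_bytes + OVERHEAD_BYTES) * 8
    return LINE_RATE_BPS / bits_per_frame_on_wire

for size in (64, 300, 1518):
    print(f"{size:>5}-byte frames: {frames_per_second(size) / 1e6:.2f} million frames/s")

# At 64-byte frames this is close to 15 million frames (and interrupts) per
# second, leaving little CPU time for the monitoring application itself.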

Intelligent real-time network adapters, on the other hand, are designed for real-time network monitoring. In particular, they are designed to provide full throughput at the theoretical limit without losing packets, no matter the packet size. They are also designed to do this without overloading the CPU, by off-loading many of the tasks normally performed by the CPU. The results can be seen below (see Figures 2a and 2b):

Figure 2a: Napatech NT20E throughput performance

Figure 2b: Napatech NT20E CPU load performance

As can be seen, throughput can be maximized to the theoretical limit while CPU load is reduced to less than 1%. A lower CPU load ensures that more processing power is delivered back to the application, which means a faster application with the ability to process more data. Intelligent real-time network adapters, such as Napatech's, can bridge the performance gap, making standard off-the-shelf servers a viable and powerful universal platform for network appliances.

Parallel processing using multiple CPU cores
The latest CPUs provide multiple cores, effectively 2, 4 or 8 CPUs in one chip. However, to take advantage of this, it must be possible to run multiple instances of one application or several different applications on the available CPU cores. It must also be possible to direct the right traffic to each application instance. Now, instead of one flow of data being processed by a single application, 2, 4 or 8 flows can be processed in parallel.

While methods exist to implement multi-threading or multiple instances of the same application software on multiple CPU cores, standard NICs are not designed to provide data to multiple application instances in an intelligent way. In standard NIC implementations, Ethernet frames are treated on a frame-by-frame basis as a single flow. It is up to the operating system to copy the frames to all of the relevant application instances, which is both a time-consuming and wasteful process.

Napatech network adapters provide a unique capability to intelligently define multiple data flows based on an examination of the Ethernet frames received. The flows can be defined based on the source and destination ports and addresses in the Ethernet, IP and TCP/UDP headers, but also on tunnel identifiers if a tunneling protocol has been used, such as SCTP, GRE or GTP.

Once these flows are defined, they can be directed to up to 32 different CPU cores for processing by an application instance. A Direct Memory Access (DMA) process is used, which means that the operating system does not need to be involved and no copying of frames is necessary. This removes delays and avoids wasting memory, leading to a faster, more efficient data transfer.
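
The idea of flow-based distribution can be illustrated with a simple sketch. The code below is a hypothetical illustration of hash-based flow distribution in general, not the Napatech API: frames with the same 5-tuple are always mapped to the same per-core queue, so each application instance receives complete flows.

# Hypothetical illustration of hash-based flow distribution (not the
# Napatech API): frames belonging to the same flow always land in the same
# per-core queue, so each application instance sees complete flows.

from collections import namedtuple
import zlib

FiveTuple = namedtuple("FiveTuple", "src_ip dst_ip src_port dst_port proto")

NUM_CORES = 8
queues = [[] for _ in range(NUM_CORES)]  # stand-in for per-core buffers

def core_for_flow(ft: FiveTuple) -> int:
    """Deterministically map a flow to one of the available CPU cores."""
    key = f"{ft.src_ip}|{ft.dst_ip}|{ft.src_port}|{ft.dst_port}|{ft.proto}".encode()
    return zlib.crc32(key) % NUM_CORES

def dispatch(frame: bytes, ft: FiveTuple) -> None:
    queues[core_for_flow(ft)].append(frame)

# Two frames of the same flow end up in the same queue, i.e., on the same core.
flow = FiveTuple("10.0.0.1", "10.0.0.2", 40000, 443, "TCP")
dispatch(b"frame-1", flow)
dispatch(b"frame-2", flow)
print(core_for_flow(flow), [len(q) for q in queues])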

The net result is real-time, parallel processing of multiple flows of data where each flow can be processed and managed differently, if one so chooses.

From standard server to universal appliance
The pieces are now in place to provide a universal appliance platform that can support any real-time network analysis application. This not only provides a relatively inexpensive yet powerful and reliable platform, but also offers flexibility in the choice of server platform and of the application to run on it, thanks to the separation of hardware from software. More importantly, it allows providers of network monitoring, analysis, test & measurement, optimization and security solutions to focus their energy on software development rather than on hardware development.

Just as containers revolutionized the shipping industry, can the Universal Appliance concept do the same for dedicated network appliances and IP networks?

More Stories By Daniel Joseph Barry

Daniel Joseph Barry is VP Positioning and Chief Evangelist at Napatech and has over 20 years of experience in the IT and Telecom industry. Prior to joining Napatech in 2009, he was Marketing Director at TPACK, a leading supplier of transport chip solutions to the Telecom sector.

From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now Ignis Photonyx) following various positions in product development, business development and product management at Ericsson. He joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc degree in Electronic Engineering from Trinity College Dublin.
