Agile Computing: Article

The Emergence of the Universal Appliance

Containers are an example of a universal solution - one that revolutionized the shipping industry

In 1956, Malcolm McLean invented a shipping system that revolutionized cargo shipping forever, namely the container. The shipping container provides a standard, universal packing solution that can be used for transporting whatever you need to ship. Containers can be transported on trucks, trains or on ships, because they are of standard size.

Containers are an example of a universal solution that revolutionized the shipping industry. Can the emergence of the standard PC server platform as a universal computing platform herald the proliferation of even more innovative dedicated network appliances in IP networks?

The growth of network appliances
Over the last decade, we have all become familiar with packet networks including equipment like routers and switches. But packet networks have become more intelligent over the last few years as more and more demanding, real-time services have migrated to IP. While intelligence has been added to routers to deal with these new service types, a need has developed for dedicated network appliances that can perform specific functions in the IP network.

A good example is network performance monitoring. Routers can create and collect NetFlow data, which can be used to monitor the performance of IP networks. However, processing this data for many sessions places an unacceptable load on the router, diverting it from its main task: forwarding packets. It therefore makes sense to off-load this work to dedicated network performance monitoring appliances.

Another example is network security. Many routers provide network security features, but as we move to higher speed networks, there is interest in using dedicated network security devices, such as intrusion detection and prevention systems (IDS/IPS) to detect threats in real-time and take immediate action.

This is a trend seen throughout the network with dedicated appliances for network analysis, network forensics, network test and measurement, network optimization and network security. These are highly intelligent solutions, which have the ability to process a vast amount of data in real-time. They are essential in establishing IP networks as multi-service and intelligent transport networks.

From proprietary to standard hardware platforms
Until recently, the high performance required by these solutions dictated a system design similar to the routing products they were designed to off-load, namely a proprietary hardware design. The doctrine has been that only a customized, proprietary design can provide the performance you need to meet the real-time demands of high-speed network monitoring.

But an alternative system design approach has been gathering momentum over the last few years based on standard off-the-shelf platforms. Standard PC servers have established themselves as a credible hardware platform alternative to in-house proprietary design and have been embraced by a number of network monitoring solution vendors who recognize that the value of their solutions lies in the application software provided. The hardware platform just needs to provide the raw computing power, memory bandwidth and fast input/output of data that these solutions require.

With the latest server platforms based on new multi-core CPU architectures, the raw processing power and memory bandwidth are available to perform even the most demanding tasks. However, there is one area that these standard platforms cannot address – fast input/output of data, especially for real-time network analysis applications.

Standard Network Interface Cards (NICs) provided with standard servers do not have the real-time throughput capacity and efficiency needed for high-speed network monitoring. NICs can provide fast input/output of data packets destined for a specific server MAC/IP address, but cannot provide the same performance for all traffic when monitoring of all MAC/IP addresses is required. This is especially the case when moving to 10 Gbps networking.

Fortunately, specialist network adapters have emerged to fill the gap.

Focusing R&D effort
The combination of standard server platforms and intelligent real-time network adapters establishes the universal appliance platform for high-performance network monitoring or any other application that requires real-time packet capture, analysis and re-transmission at speeds up to 10 Gbps without losing packet data.

The emergence of such a universal appliance is significant. It effectively separates the application software from the hardware supporting it. This allows a multitude of dedicated application software solutions to be supported by a single hardware platform where addition of features or even a total replacement of application software supported by the server is possible. Vendors of network monitoring, analysis, test & measurement, optimization and security solutions can thus concentrate on the application and focus their R&D investment on software development rather than diverting attention to hardware development.

Not only does this mean more focus, but it also comes at a lower cost! Standard PC server platforms enjoy economies of scale, leading to relatively low unit prices. A standard server costing a few thousand dollars is more than adequate to meet the CPU power and memory performance requirements of 10G applications. It is therefore possible to provide a low-cost, high-performance hardware platform with zero investment in hardware development.

But to make it work, you need an intelligent real-time network adapter. Let’s take a look at the fast input/output challenge for real-time network monitoring and how intelligent real-time network adapters help to meet these challenges.

The limitations of standard NICs
Fast input/output in real-time network monitoring requires that all data is captured no matter the packet size, link utilization or line speed. Standard NICs have been used for this task in the past, but as the graphs in figures 1a and 1b below show, they face significant challenges in 10 Gbps real-time network monitoring:

Figure 1a: Real throughput on a 10 Gbps port for standard NICs (Source: CESNET performance tests)

Figure 1b: CPU load handling 10 Gbps data traffic on 10 Gbps port (Source: CESNET performance tests)

The graph in figure 1a shows the effective throughput that can be achieved without losing packets at the port. It refers to Ethernet frames, which are used to transport IP packets in IP networks. Ethernet frames (and IP packets) can have any size. The size is determined by the application, but also by conditions on the network – if the network or parts of it are heavily loaded, smaller packets/frames may be used, as these have a better chance of reaching the destination in a congested network.

Table 1 below shows the theoretical limit for the throughput one should expect on a 10 Gbps port. Note that throughput naturally falls as the frame size is reduced. With smaller frame sizes, there are more frames to be handled and the preamble and inter-frame gap associated with each frame becomes more significant. This is pure overhead and reduces the effective throughput.

Table 1: Theoretical maximum throughput for a 10 Gbps Ethernet port

As can be seen in figure 1a, for large Ethernet frame sizes, throughput is close to the theoretical limit. However, as frame sizes decrease, the effective throughput drops off dramatically to less than 1 Gbps at small frame sizes.

Typical frame sizes for Internet communication lie in the range from 128 to 1024 bytes, with 300 bytes an often-referenced frame size for tests. In this range, throughput is at best 6 Gbps and can be as low as 1 Gbps!
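The overhead arithmetic behind Table 1 can be sketched in a few lines of code. The 8-byte preamble and 12-byte inter-frame gap are the standard Ethernet values; the function itself is an illustrative reconstruction, not taken from the cited tests:

```python
# Theoretical frame-data throughput on a 10 Gbps Ethernet port.
# Each frame carries 20 bytes of fixed wire overhead:
# an 8-byte preamble plus a 12-byte inter-frame gap.
PREAMBLE = 8          # bytes
INTER_FRAME_GAP = 12  # bytes
LINE_RATE_GBPS = 10.0

def max_throughput_gbps(frame_size: int) -> float:
    """Effective throughput (Gbps of frame data) for a given frame size in bytes."""
    wire_size = frame_size + PREAMBLE + INTER_FRAME_GAP
    return LINE_RATE_GBPS * frame_size / wire_size

for size in (64, 128, 512, 1024, 1518):
    print(f"{size:5d} B frames: {max_throughput_gbps(size):.2f} Gbps")
```

At 64-byte frames the overhead alone costs almost a quarter of the line rate, which is why the theoretical ceiling in Table 1 falls as frame size shrinks.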

The graphs above are based on 10 Gbps port throughput, but the issue is the same for 1 Gbps ports. What distinguishes these two cases is the additional load that is placed on the CPU for handling of data traffic. For 1 Gbps ports, the CPU load is high, but acceptable, whereas for 10 Gbps ports, as figure 1b shows, almost 2/3 of the CPU resources are used just in handling Ethernet frames. This is not acceptable for many of the compute- and data-intensive network applications that are now becoming common in the network.

The explanation for this considerable workload is that standard NICs are designed to interrupt the CPU each time a frame is received and needs to be handled. The CPU must decide what to do with the frame, re-order and de-duplicate received frames, discard invalid frames, and so on. This, obviously, is a distraction for CPUs that should be busy running the network application in question.
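To see why per-frame interrupts become untenable, consider the frame rate at small frame sizes. This back-of-the-envelope calculation (again using the standard 20 bytes of preamble and inter-frame-gap overhead) is illustrative rather than drawn from the cited tests:

```python
# Frames per second on a fully loaded 10 Gbps link.
# A 64-byte frame occupies 64 + 8 (preamble) + 12 (gap) = 84 bytes on the wire.
LINE_RATE_BPS = 10_000_000_000
OVERHEAD = 20  # preamble + inter-frame gap, in bytes

def frames_per_second(frame_size: int) -> float:
    """Maximum frame arrival rate for a given frame size in bytes."""
    return LINE_RATE_BPS / ((frame_size + OVERHEAD) * 8)

print(f"{frames_per_second(64) / 1e6:.2f} million frames/s")
```

At 64-byte frames the link can deliver roughly 14.9 million frames per second; servicing one interrupt per frame at that rate leaves little CPU time for the application itself.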

Intelligent real-time network adapters, on the other hand, are designed for real-time network monitoring. In particular, they are designed to provide full throughput at the theoretical limit without losing packets no matter the packet size. They are also designed to do this without overloading the CPU by off-loading many of the tasks normally performed by the CPU. The results can be seen below (see figure 2a and 2b):

Figure 2a: Napatech NT20E throughput performance

Figure 2b: Napatech NT20E CPU load performance

As can be seen, throughput can be maximized to the theoretical limit while CPU load is reduced to less than 1%. A lower CPU load means more processing power is delivered back to the application, allowing it to run faster and process more data. Intelligent real-time network adapters, such as Napatech's, can bridge the performance gap, making standard off-the-shelf servers a viable and powerful universal platform for network appliances.

Parallel processing using multiple CPU cores
The latest CPUs provide multiple cores, effectively 2, 4 or 8 CPUs in one chip. However, to take advantage of this, it must be possible to run multiple instances of one application or several different applications on the available CPU cores. It must also be possible to direct the right traffic to each application instance. Now, instead of one flow of data being processed by a single application, 2, 4 or 8 flows can be processed in parallel.

While methods exist to implement multi-threading or multiple instances of the same application software on multiple CPU cores, standard NICs are not designed to provide data to multiple application instances in an intelligent way. In standard NIC implementations, Ethernet frames are treated on a frame-by-frame basis as a single flow. It is up to the operating system to copy the frames to all of the relevant application instances, which is a time-consuming and wasteful process.

Napatech network adapters provide a unique capability to intelligently define multiple data flows based on an examination of the Ethernet frames received. The flows can be defined based on the source and destination addresses and ports in the Ethernet, IP and TCP/UDP/SCTP headers, but also on tunnel identifiers if a tunneling protocol such as GRE or GTP has been used.

Once these flows are defined, they can be directed to up to 32 different CPU cores for processing by an application instance. A Direct Memory Access (DMA) process is used, which means the operating system does not need to be involved and no copying of frames is necessary. This removes delays and avoids wasting memory, leading to faster, more efficient data transfer.
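Napatech's mechanism is implemented in adapter hardware, but the general idea of keeping every frame of a flow on the same core can be sketched in software as a hash over the flow's 5-tuple. The function below is a hypothetical illustration of that idea, not the adapter's actual algorithm:

```python
import zlib

NUM_CORES = 8  # hypothetical number of application instances

def core_for_flow(src_ip: str, dst_ip: str, src_port: int,
                  dst_port: int, protocol: int,
                  num_cores: int = NUM_CORES) -> int:
    """Hash the 5-tuple so every frame of a flow lands on the same core."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{protocol}".encode()
    return zlib.crc32(key) % num_cores

# All frames of one TCP flow map deterministically to the same core:
core = core_for_flow("192.0.2.1", "198.51.100.7", 12345, 80, 6)
print(f"flow assigned to core {core}")
```

Because the mapping is deterministic, per-flow state never has to be shared across cores; adapters typically also offer a symmetric variant of the hash so that both directions of a flow reach the same core.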

The net result is real-time, parallel processing of multiple flows of data where each flow can be processed and managed differently, if one so chooses.

From standard server to universal appliance
The pieces are now in place to provide a universal appliance platform that can support any real-time network analysis application. This provides not only a relatively cheap yet powerful and reliable platform, but also flexibility in the choice of server platform and of the application to run on it, thanks to the separation of hardware from software. More importantly, it allows providers of network monitoring, analysis, test & measurement, optimization and security solutions to focus their energy on software development rather than on hardware development.

Just as containers revolutionized the shipping industry, can the Universal Appliance concept do the same for dedicated network appliances and IP networks?

More Stories By Daniel Joseph Barry

Daniel Joseph Barry is VP Positioning and Chief Evangelist at Napatech and has over 20 years experience in the IT and Telecom industry. Prior to joining Napatech in 2009, he was Marketing Director at TPACK, a leading supplier of transport chip solutions to the Telecom sector.

From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now Ignis Photonyx) following various positions in product development, business development and product management at Ericsson. He joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc degree in Electronic Engineering from Trinity College Dublin.
