Evolution of Data Center Architecture & Edge Computing

Wednesday, September 14th, 2016 at 6:30 PM

TI Auditorium

PROGRAM

6:30 - 7:00 PM Networking & Refreshments
7:00 - 8:00 PM Talks
8:00 - 8:30 PM Panel Session
8:30 - 8:45 PM Speaker Appreciation & Adjournment

Chair: Sameer Herlekar
Organizers: Saurabh Sureka and Sameer Herlekar

Session Abstract: Join us for a session on "Evolution of Data Center Architecture & Edge Computing". In this session, the speakers will provide insight into the newer hardware and software architectures that are shaping edge computing.

Speaker: Muthurajan Jayakumar (M Jay)

Bio: M Jay joined Intel in 1991 and has served in a variety of roles and divisions, including 64-bit CPU front-side bus architect and 64-bit HAL developer, before joining the DPDK team, where he has worked since 2009. M Jay holds 21 US patents, issued both individually and jointly, all while working at Intel. In 2016 he received the Intel Achievement Award, Intel's highest honor, recognizing innovation and results.

Title: Cloud Data Center – Architecture, Protocol and Design Trade-Off Choices

Abstract: The presentation will begin by zooming in on the scaling problems of the traditional 3-tier data center. We will highlight how flattening to a 2-tier cloud data center architecture benefits scalability. As a usage-model example, we will touch upon NVGRE and VXLAN and examine their protocol-level differences. Tunneling choices 1) at the hypervisor, 2) at the NIC, and 3) at the ToR switch will be raised and their trade-offs outlined. Given that a cloud data center has a deterministic, static topology, the value of the Spanning Tree Protocol, with its effectively lower utilization of the network infrastructure, will be questioned.
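
For readers who want a concrete point of reference for the protocol-level comparison, the sketch below lays out the two encapsulation headers in C. This is illustrative material, not part of the talk; field layouts follow RFC 7348 (VXLAN) and RFC 7637 (NVGRE). Both protocols carry a 24-bit virtual network identifier, but VXLAN is carried over UDP (destination port 4789) while NVGRE reuses the GRE key field.

    /* Illustrative only: outer encapsulation headers for the two tunneling
     * protocols compared in the talk.  All fields are in network byte order. */
    #include <stdint.h>

    /* VXLAN (RFC 7348): runs over UDP, destination port 4789.  An 8-byte
     * header carries a 24-bit VXLAN Network Identifier (VNI). */
    struct vxlan_hdr {
        uint8_t  flags;          /* 0x08 when a valid VNI is present */
        uint8_t  reserved1[3];
        uint8_t  vni[3];         /* 24-bit virtual network identifier */
        uint8_t  reserved2;
    };

    /* NVGRE (RFC 7637): reuses GRE with protocol type 0x6558 (Transparent
     * Ethernet Bridging).  The 32-bit key field holds a 24-bit Virtual
     * Subnet ID (VSID) plus an 8-bit FlowID. */
    struct nvgre_hdr {
        uint16_t flags_version;  /* Key Present bit (0x2000) must be set */
        uint16_t protocol_type;  /* 0x6558 */
        uint8_t  vsid[3];        /* 24-bit virtual subnet identifier */
        uint8_t  flow_id;        /* per-flow entropy for ECMP hashing */
    };

One practical consequence of this difference, relevant to the hypervisor/NIC/ToR trade-offs above: VXLAN's outer UDP source port gives conventional 5-tuple ECMP hashing something to work with, whereas NVGRE relies on switches understanding the GRE FlowID, which affects how much of the tunneling work can be pushed into the NIC or the ToR switch.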

Speaker: Mark Douglas

Bio: Mark Douglas is a Distinguished Member of Technical Staff at NXP Semiconductors, serving as a Software and Tools Field Applications Engineer in the Networking Group. He has 28 years of experience in embedded systems and networking in the San Jose Bay Area, working with major networking companies on appliance deployment, specifically with bootloaders, operating systems, VMs, and tools. He is currently engaged in all things concerning virtualization benchmarking. Previously he was with Red Hat, Inc.

Title: Challenges for Next Generation Fog and IoT Computing: Performance of Virtual Machines on Edge Devices

Abstract: Ever-increasing bandwidth requirements on Internet edge equipment, compounded by the growing need for near-instantaneous content delivery to multiple IoT devices, have stressed the traditional hardware and software architectures of these edge devices to a new level. Given the need to consolidate both:

1) traditional access-point features and

2) "on demand" content delivery capabilities,

there is a driving need to investigate new approaches in platform architectures for future edge devices.

One approach to meeting these growing feature-consolidation demands is to apply the partitioning qualities of virtualization to these new edge designs. Virtualization is common in the cloud and other compute-server environments. Providing instant content and Internet access at the edge of the network is known as fog computing. With fog computing, virtual machine (VM) feature partitioning and capability must be optimized for isolation of features, and the performance of these consolidated/partitioned features in guest VMs, together with performance-optimized I/O for file systems and network interfaces, is critical. As IoT continues to grow, access points and other edge equipment for IoT devices will evolve into consolidated fog servers with multiple features running on the same SoC. These VMs will both share and directly map storage devices and networking interfaces. The introduction of fog servers brings VM demands not seen in the past for equipment at this point in the network. Currently available SoC devices based on ARMv8 64-bit technology, combined with a fog-optimized software architecture that provides proper feature partitioning and consolidation, can in fact meet the functional requirements of next-generation fog computing systems.
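
To make the "directly map" idea more concrete, the following is a minimal sketch of how a network interface can be handed straight to a userspace driver or guest on Linux through the VFIO interface, the same mechanism QEMU/KVM uses for device passthrough. It is not material from the talk; the IOMMU group number (12) and PCI address (0000:01:00.0) are placeholders, and error handling is reduced to the bare minimum.

    /* Minimal VFIO sketch: hand a PCI network device directly to a userspace
     * driver or VM.  Group number and PCI address below are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    int main(void)
    {
        /* The VFIO container represents one IOMMU context. */
        int container = open("/dev/vfio/vfio", O_RDWR);
        if (container < 0 ||
            ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION) {
            perror("vfio container");
            return 1;
        }

        /* Open the IOMMU group the NIC belongs to (placeholder: group 12),
         * bind it to the container, and select the Type-1 IOMMU model. */
        int group = open("/dev/vfio/12", O_RDWR);
        if (group < 0 ||
            ioctl(group, VFIO_GROUP_SET_CONTAINER, &container) ||
            ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU)) {
            perror("vfio group");
            return 1;
        }

        /* Obtain a file descriptor for the device itself (placeholder
         * address).  From here the device's BARs can be mmap()ed and DMA
         * mappings set up with VFIO_IOMMU_MAP_DMA -- the same path QEMU/KVM
         * uses to pass the NIC through to a guest VM. */
        int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");
        if (device < 0) {
            perror("vfio device");
            return 1;
        }

        printf("device fd %d ready for mmap and DMA mapping\n", device);
        return 0;
    }

On a fog server of the kind described above, this style of direct assignment is what lets one guest own a NIC outright for line-rate I/O, while other guests share para-virtualized (e.g., virtio) storage and network devices.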

This presentation will summarize the platform requirements for fog computing and then show performance data for VM use cases involving memory and network devices. The data shows that fog computing consolidation is achievable with existing multicore SoC architectures, and that the VM overhead incurred in deployment is well within acceptable limits for these devices.