ICT Today October/November/December 2022

Issue link: https://www.e-digitaleditions.com/i/1476549

NETWORK SLICING BRINGS A NEW CHALLENGE

Network slicing seems simple at first glance: dedicate a portion of the cloud-native network to a specific customer or use case and effectively "wall it off" from the rest of the cloud so that the slice looks like a dedicated, separate network. Each slice can be configured to deliver specific service capabilities, such as the ultra-reliable low-latency communication (URLLC) needed for mission-critical services, the massive device scalability needed for massive machine-type communications (mMTC), or the very high throughput needed for enhanced mobile broadband (eMBB). From the perspective of the network and service layers, this is a relatively well-understood task.

However, the reality is that each slice shares the cloud infrastructure hardware resources with several other slices or networks, including the public network. This is where things can get tricky. The physical servers that the cloud-native network runs on have a finite amount of central processing unit (CPU) power, memory, and input-output (I/O) resources that need to be allocated, scheduled, and shared. For example, modern servers contain multiple cores and CPUs, each of which shares certain physical, finite resources such as memory, cache, and network I/O.

Without visibility into the performance of those server resources, such as individual CPU utilization, memory and cache bandwidth and utilization, and a vast array of other details, the service assurance solution may not be able to explain why a particular slice is not meeting its SLA commitments. When managing networks comprised of thousands of servers underpinning thousands of slices and millions of services, the problem can quickly spiral out of control.
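The visibility gap described above can be sketched as a simple correlation check: given which servers host which slices, flag the hosts whose platform counters might explain an SLA miss. The metric names, thresholds, and data shapes below are illustrative assumptions, not the API of any real assurance product.

```python
# Hypothetical sketch: correlate slice SLA violations with platform-level
# metrics from the servers hosting each slice. All names and thresholds
# here are invented for illustration.

def find_suspect_servers(slice_sla, server_metrics, hosting_map,
                         cpu_thresh=0.90, cache_mbps_thresh=20_000):
    """Return servers whose platform counters may explain an SLA miss.

    slice_sla:      {slice_id: {"latency_ms": float, "target_ms": float}}
    server_metrics: {server_id: {"cpu_util": float, "cache_bw_mbps": float}}
    hosting_map:    {slice_id: [server_id, ...]}
    """
    suspects = {}
    for slice_id, sla in slice_sla.items():
        if sla["latency_ms"] <= sla["target_ms"]:
            continue  # slice is meeting its SLA; nothing to explain
        for server in hosting_map.get(slice_id, []):
            m = server_metrics[server]
            # Saturated CPU or cache bandwidth on a shared host is a
            # classic "noisy neighbor" signature.
            if m["cpu_util"] >= cpu_thresh or m["cache_bw_mbps"] >= cache_mbps_thresh:
                suspects.setdefault(slice_id, []).append(server)
    return suspects

sla = {"urllc-1": {"latency_ms": 4.8, "target_ms": 1.0},
       "embb-7": {"latency_ms": 9.0, "target_ms": 10.0}}
metrics = {"srv-a": {"cpu_util": 0.97, "cache_bw_mbps": 31_000},
           "srv-b": {"cpu_util": 0.40, "cache_bw_mbps": 5_000}}
hosting = {"urllc-1": ["srv-a", "srv-b"], "embb-7": ["srv-b"]}

print(find_suspect_servers(sla, metrics, hosting))
# {'urllc-1': ['srv-a']}
```

The point of the sketch is the join itself: SLA data alone says only that "urllc-1" is late; joining it with per-server platform observability narrows the cause to one saturated host.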
Under certain loading conditions, the performance of a specific CPU may be degraded by the other CPUs within the same socket that share those resources. This is the so-called "noisy neighbor" problem, and it can be a real issue for network slices.

ADDRESSING THE EDGE

To make matters worse, there is the unique challenge of the MEC scenario. Unlike larger data centers, where hardware resources and power can essentially be considered infinite, the MEC is a highly resource-constrained environment. This can be especially challenging if the MEC supports a multi-tenant or multi-slice environment. The CSP needs to be very aware of infrastructure resource utilization and power consumption to ensure continued customer QoE.

To stay within the physical and operational constraints of the MEC, orchestration or power management decisions will need to be made to guarantee the performance of the services or slices under various loading conditions. By correlating QoS and QoE assurance data and insights with platform observability data and insights, the MEC orchestrator can optimize the MEC environment to ensure fair access to all hardware resources so that one process or customer cannot starve another.
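One way to picture such an orchestration decision is a minimal admission check: a workload is placed on a MEC node only if it fits the node's remaining CPU and power budget and would not push any single tenant past a fair share of the CPU. The resource model, field names, and the 50% fairness cap are assumptions made for this sketch; a real MEC orchestrator is far richer.

```python
# Illustrative sketch of a resource- and power-aware MEC admission check.
# Node/workload fields and the fair_share cap are invented for illustration.

def can_place(workload, node, fair_share=0.5):
    """Admit a workload onto a MEC node only if it fits within the node's
    remaining CPU and power budget AND its tenant stays under `fair_share`
    of total CPU (a simple anti-starvation guard)."""
    cpu_left = node["cpu_total"] - node["cpu_used"]
    watts_left = node["power_cap_w"] - node["power_used_w"]
    tenant_cpu = node["tenant_cpu"].get(workload["tenant"], 0)
    fits = workload["cpu"] <= cpu_left and workload["watts"] <= watts_left
    fair = (tenant_cpu + workload["cpu"]) <= fair_share * node["cpu_total"]
    return fits and fair

node = {"cpu_total": 16, "cpu_used": 10, "power_cap_w": 400,
        "power_used_w": 330, "tenant_cpu": {"tenant-a": 7}}

# tenant-a already holds 7 of 16 cores; 2 more would exceed its 50% share.
print(can_place({"tenant": "tenant-a", "cpu": 2, "watts": 40}, node))  # False
# tenant-b holds nothing yet, and the node has the headroom.
print(can_place({"tenant": "tenant-b", "cpu": 2, "watts": 40}, node))  # True
```

Note that the first request is rejected even though the node has spare capacity: the fairness guard, not the resource budget, is what prevents one tenant from starving the others.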
