Compute Express Link (CXL) is an emerging memory interconnect standard that increases memory capacity per server. With CPU core counts scaling faster than memory capacity, CXL is a promising way to expand memory while reducing operational costs and carbon emissions. However, integrating CXL into cloud environments poses unique challenges, particularly ensuring that the multiple tenants sharing a physical server do not suffer performance interference from one another.
About the Research
The paper, titled "Managing Memory Tiers with CXL in Virtualized Environments," co-authored by Columbia CS students Yuhong Zhong and Ryan Wee and EE professor Asaf Cidon, presents two key contributions:
1. Intel Flat Memory Mode: The first hardware-managed tiering system for CXL. It manages data placement between local DRAM and CXL memory at cache-line granularity inside the processor's memory controller, delivering performance close to regular DRAM, with no more than 5% degradation for over 82% of workloads.
2. Memstrata: A lightweight multi-tenant memory allocator. It uses page coloring to eliminate inter-VM contention and allocates more local DRAM to VMs whose access patterns are sensitive to hardware tiering. On real CXL hardware, Memstrata reduces the degradation of outlier workloads from over 30% to below 6%.
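To make cache-line-granularity tiering concrete, here is a toy Python model loosely inspired by the paper's description of Flat Memory Mode: each slot pairs one DRAM-resident line with one CXL-resident line, and touching the line currently in CXL causes the hardware to swap the pair. The class, slot mapping, and swap policy are illustrative assumptions for this sketch, not Intel's actual design.

```python
# Toy model of hardware-managed tiering at cache-line granularity.
# All names and the swap policy are illustrative assumptions.

CACHE_LINE = 64  # bytes per cache line

class TieredMemory:
    """Pairs each DRAM line slot with a CXL line slot (1:1 capacity split).
    Exactly one of the two lines mapped to a slot is in DRAM at a time;
    accessing the line currently in CXL swaps the pair, so placement is
    managed per cache line rather than per page."""

    def __init__(self, num_slots):
        self.num_slots = num_slots
        # in_dram[slot] records which of the slot's two lines (0 or 1) is in DRAM.
        self.in_dram = [0] * num_slots
        self.swaps = 0

    def access(self, addr):
        line = addr // CACHE_LINE
        slot = line % self.num_slots          # which slot this line maps to
        which = (line // self.num_slots) % 2  # which of the slot's two lines
        if self.in_dram[slot] == which:
            return "DRAM hit"
        self.in_dram[slot] = which            # hardware swaps the pair
        self.swaps += 1
        return "CXL access (lines swapped)"

mem = TieredMemory(num_slots=1024)
print(mem.access(0))          # line 0 starts DRAM-resident: "DRAM hit"
print(mem.access(1024 * 64))  # its CXL-mapped partner: triggers a swap
```

Because the swap happens per cache line in the memory controller, hot lines migrate to DRAM without any OS page-migration machinery, which is what lets this mode stay close to DRAM performance for most workloads.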
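Memstrata's page-coloring idea can be sketched in a few lines: pages are grouped into colors, each VM receives an exclusive set of colors (so VMs never contend for the same tiered slots), and colors backed purely by local DRAM go to the most tiering-sensitive VMs first. The color function, color counts, sensitivity scores, and allocation policy below are simplified assumptions for illustration, not the paper's exact algorithm.

```python
# Sketch of page-coloring-based DRAM allocation in the spirit of Memstrata.
# Constants and policy are illustrative assumptions, not the real system.

NUM_COLORS = 16   # illustrative; real color counts depend on the hardware
DRAM_COLORS = 4   # assume the lowest colors are backed by dedicated local DRAM

def page_color(pfn):
    """Color of a physical page frame. Pages of different colors never share
    hardware-tiered slots, so coloring isolates VMs from one another."""
    return pfn % NUM_COLORS

def assign_colors(vms):
    """vms: list of (name, tiering_sensitivity, colors_wanted).
    Each VM gets an exclusive set of colors (eliminating inter-VM contention),
    and the most tiering-sensitive VMs are served DRAM-backed colors first."""
    pool = list(range(NUM_COLORS))  # colors 0..DRAM_COLORS-1 are pure DRAM
    assignment = {}
    for name, _sens, want in sorted(vms, key=lambda v: -v[1]):
        assignment[name] = [pool.pop(0) for _ in range(want)]
    return assignment

vms = [("batch-vm", 0.1, 4), ("latency-vm", 0.9, 4)]
print(assign_colors(vms))  # latency-vm receives the DRAM-backed colors 0-3
```

The key design point is that isolation comes from exclusivity of colors, while performance for sensitive workloads comes from which colors (DRAM-backed vs. hardware-tiered) each VM is handed.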
Future Impact
This research is expected to have a broad impact on industry, as CXL is rapidly becoming a major new memory standard backed by nearly all major hardware vendors and cloud providers. The work provides the first proof of concept for running CXL in a cloud environment and lays a foundation for future advances in cloud memory management.
For more details on this work, see the full paper, "Managing Memory Tiers with CXL in Virtualized Environments."