Serverless has emerged as the next dominant cloud paradigm thanks to its scalability and flexible billing model. In serverless, developers structure their cloud services as a set of functions connected in a workflow, while the provider takes responsibility for dynamically scaling each function’s resources. This division of labor opens opportunities for systems researchers to innovate in serverless computing. However, leading serverless providers, such as AWS Lambda, rely on proprietary infrastructure that is ill-suited for systems research in academia.
To enable research in serverless systems, we have recently introduced vHive, an open-source ecosystem for serverless benchmarking, experimentation, and innovation across the serverless stack, which is now in use at 30+ universities and companies worldwide. With vHive, we have found that modern clouds are bound by data movement, in particular by loading the initial function state when spawning a new function instance (a “cold start”) and by slow cross-function data communication. To reduce cold-start delays, we show that serverless functions operate over working sets that remain stable across invocations and can therefore be prefetched. To accelerate communication, we devise a serverless-native communication fabric that combines high performance with autoscaling.
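The prefetching idea above can be illustrated with a minimal sketch. This is not vHive's actual implementation; it is a toy record-and-replay model in which the pages a function touches on its first invocation are recorded, so that a later cold start can load that working set from the snapshot in one batch instead of faulting pages in one by one. All names (`SnapshotPrefetcher`, the dict-based "snapshot") are hypothetical.

```python
PAGE_SIZE = 4096  # bytes; typical x86-64 page size

class SnapshotPrefetcher:
    """Toy model: record a function's page working set, then prefetch it."""

    def __init__(self, snapshot):
        # snapshot: dict mapping page number -> page contents (the on-disk image)
        self.snapshot = snapshot
        self.recorded_working_set = None  # pages touched during the record phase

    def first_invocation(self, pages_touched):
        # Record phase: note which pages the function demand-faulted in.
        self.recorded_working_set = set(pages_touched)
        return {p: self.snapshot[p] for p in pages_touched}

    def prefetch(self):
        # Replay phase: before a later cold start, load the recorded working
        # set in one batch, avoiding per-page demand faults.
        assert self.recorded_working_set is not None, "nothing recorded yet"
        return {p: self.snapshot[p] for p in self.recorded_working_set}

# Toy usage: a 4 MiB snapshot of which the function only ever touches 3 pages.
snapshot = {p: bytes(PAGE_SIZE) for p in range(1024)}
pf = SnapshotPrefetcher(snapshot)
pf.first_invocation([3, 17, 42])       # record on the first cold start
warm_pages = pf.prefetch()             # batch-load before the next one
print(sorted(warm_pages))              # -> [3, 17, 42]
```

The key assumption, which the talk's measurements support, is that the working set is stable: the set recorded on one invocation is a good predictor of the pages the next invocation will need.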
About the speaker
Dmitrii is a Postdoctoral Researcher at ETH Zurich, joining Nanyang Technological University Singapore as an Assistant Professor later this year. He received a PhD from the University of Edinburgh and BSc/MSc degrees from the Moscow Institute of Physics and Technology. Dmitrii’s research interests lie at the intersection of Computer Systems and Architecture, with a current focus on support for cloud and serverless computing.