Cold-start times have been the be-all and end-all metric for research on serverless cloud computing over the past decade. Reducing the impact of cold starts matters because they can be the biggest contributor to a serverless function's end-to-end execution time. Recent studies from leading public and private cloud providers, however, indicate that, in practice, a majority of serverless functions are triggered by non-interactive workloads. To substantiate this, we study the types of serverless functions used in 35~academic publications and find that over 80% of functions are not semantically latency-sensitive. If a function is non-interactive and latency-insensitive, is end-to-end execution time the right metric to optimize in serverless? What if cold starts did not matter that much, after all?
In this vision paper, we explore what serverless environments would look like if cold starts did not matter. By analyzing production traces from cloud providers, serverless usage surveys, and prior research, we make the case that serverless research should focus on high resource usage and function throughput instead of cold starts. Based on this insight, we describe the design of DFaaS, a novel serverless environment that implements an execution model in which functions can be arbitrarily delayed. Our micro-benchmarks suggest that DFaaS has the potential to substantially improve the resource usage of serverless clouds and to significantly lower the economic costs for users.
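To make the delayed-execution model concrete, the sketch below illustrates one way a delay-tolerant invocation interface could look. This is a minimal illustration under our own assumptions: the `DelayedInvoker` class, the `max_delay` parameter, and the `has_idle_capacity` probe are hypothetical names for exposition, not the actual DFaaS interface.

```python
import heapq
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class PendingCall:
    deadline: float                          # latest acceptable start time
    fn: Callable = field(compare=False)      # serverless function body
    args: tuple = field(compare=False, default=())

class DelayedInvoker:
    """Hypothetical delay-tolerant scheduler. Each invocation declares how
    long it may be postponed; the platform may run it at any point before
    that deadline, e.g., when spare capacity is available."""

    def __init__(self, has_idle_capacity: Callable[[], bool]):
        self.has_idle_capacity = has_idle_capacity  # platform idle probe
        self._queue: list[PendingCall] = []

    def invoke(self, fn: Callable, args: tuple = (), max_delay: float = 300.0):
        # Enqueue rather than execute immediately; max_delay bounds the wait.
        heapq.heappush(self._queue, PendingCall(time.time() + max_delay, fn, args))

    def tick(self):
        # Called periodically: run calls whose deadline has passed, and
        # opportunistically drain queued calls while capacity is idle.
        while self._queue:
            head = self._queue[0]
            if head.deadline <= time.time() or self.has_idle_capacity():
                call = heapq.heappop(self._queue)
                call.fn(*call.args)
            else:
                break
```

A periodic `tick` of this kind lets the platform pack delayed invocations into otherwise idle periods, which is the general mechanism by which a DFaaS-style system could trade per-invocation latency for higher resource usage and throughput.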