
Cold Start

The initial latency spike that occurs when a serverless function, container, or service instance is invoked after a period of inactivity and must initialize its runtime environment before processing the request.

Cold starts happen because serverless platforms deallocate resources from idle functions. When a new request arrives, the platform must provision a container, load the runtime, initialize dependencies, and establish database connections before executing your code. This adds hundreds of milliseconds to several seconds of latency on the first request.
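The initialization-then-execute split can be seen in how a typical handler is structured: module-level code runs once when the container is provisioned, while the handler body runs on every request. A minimal sketch (the setup function and timings here are hypothetical stand-ins for dependency loading and connection setup):

```python
import time

# Module-level code runs once per container, i.e. during the cold start.
_init_began = time.monotonic()

def expensive_setup():
    # Stand-in for importing heavy dependencies and opening a
    # database connection; the sleep simulates that one-time cost.
    time.sleep(0.3)
    return {"db": "connected"}

RESOURCES = expensive_setup()            # paid once, on the first invocation
INIT_MS = (time.monotonic() - _init_began) * 1000

def handler(event, context=None):
    # Warm invocations reuse RESOURCES and skip the setup cost entirely.
    return {"init_ms": round(INIT_MS), "db": RESOURCES["db"]}
```

On a warm instance, only `handler` runs, which is why the second and later requests avoid the latency spike.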

Cold start severity varies by runtime and platform. Node.js and Python functions on AWS Lambda typically cold-start in 200-500ms. Java and .NET functions can take 1-3 seconds due to heavier runtimes. AI inference functions loading large models can take 10-30 seconds, making cold starts a serious UX concern.

Mitigation strategies include provisioned concurrency (keeping instances warm at a fixed cost), periodic pinging to prevent deallocation, minimizing dependency size, using lightweight runtimes, and lazy-loading heavy resources. For AI features, teams often keep model-serving containers warm with minimum replica counts rather than relying on purely serverless scaling.
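One of the strategies above, lazy-loading heavy resources, can be sketched as a thread-safe wrapper that defers loading until first use; the `load_weights` function and its payload are hypothetical stand-ins for reading large model weights:

```python
import threading

class LazyResource:
    """Defer loading a heavy resource until it is first needed,
    then cache it for all subsequent (warm) invocations."""

    def __init__(self, loader):
        self._loader = loader
        self._value = None
        self._lock = threading.Lock()

    def get(self):
        # Double-checked locking: load at most once per container.
        if self._value is None:
            with self._lock:
                if self._value is None:
                    self._value = self._loader()
        return self._value

def load_weights():
    # Stand-in for fetching multi-gigabyte model weights from disk or
    # object storage; in a real function this dominates cold-start time.
    return {"weights": [0.1, 0.2, 0.3]}

MODEL = LazyResource(load_weights)

def handler(event, context=None):
    model = MODEL.get()  # first call pays the load; later calls are cheap
    return {"n_weights": len(model["weights"])}
```

This keeps the container's cold start itself fast (nothing heavy runs at import time) and shifts the load cost to the first request that actually needs the resource.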
