
Edge Computing

A distributed computing paradigm that processes data closer to the source of generation rather than in a centralized data center. Edge computing reduces latency, conserves bandwidth, and enables real-time processing for latency-sensitive applications.

Edge computing places compute resources at the network edge, in locations such as CDN points of presence (PoPs), ISP facilities, or on the device itself. This architecture is essential when milliseconds matter, bandwidth is constrained, or data privacy requires local processing. The edge handles initial processing and filtering, sending only relevant results to centralized systems for storage and deeper analysis.
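
The filter-at-the-edge pattern can be illustrated with a short sketch. The endpoint URL, threshold, and function names below are hypothetical placeholders rather than any specific edge platform's API: an edge node summarizes raw readings locally and forwards only the aggregate and the anomalies to the central system.

```python
import json
import statistics
from urllib import request

# Hypothetical central ingestion endpoint; substitute your own service.
CENTRAL_ENDPOINT = "https://central.example.com/ingest"
ANOMALY_THRESHOLD = 3.0  # flag readings more than 3 standard deviations from the mean


def process_at_edge(readings: list[float]) -> dict:
    """Summarize raw readings locally so only a small payload leaves the edge."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings) or 1.0
    anomalies = [r for r in readings if abs(r - mean) / stdev > ANOMALY_THRESHOLD]
    return {"count": len(readings), "mean": mean, "anomalies": anomalies}


def forward_to_center(summary: dict) -> None:
    """Send the compact summary to the centralized system for storage and analysis."""
    body = json.dumps(summary).encode()
    req = request.Request(
        CENTRAL_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # in production, add timeouts, retries, and batching
```

The point is the shape of the data flow: thousands of raw readings stay at the edge, while a payload of a few hundred bytes travels to the center.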

For AI products, edge computing unlocks use cases that centralized inference cannot serve. Running lightweight models at the edge enables real-time personalization, on-device content moderation, and instant predictions without round-trip latency to a cloud endpoint. Growth teams benefit because faster AI responses directly improve user engagement metrics. However, edge deployment introduces challenges: models must be small enough to fit edge memory and compute budgets, updates must propagate across a distributed fleet, and monitoring becomes more complex. The trade-off between model sophistication and inference speed is a key architectural decision that affects both product quality and infrastructure cost.
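
One way to make the sophistication-versus-speed trade-off concrete is to route each request based on its latency budget. The model classes, round-trip estimate, and function names below are illustrative assumptions, not a reference implementation:

```python
import time

CLOUD_ROUND_TRIP_MS = 120  # assumed typical round trip to a cloud inference endpoint


class TinyEdgeModel:
    """Stand-in for a small distilled or quantized model deployed to edge nodes."""

    def predict(self, features: list[float]) -> float:
        # Placeholder scoring logic; a real model would run in an on-device runtime.
        return sum(features) / (len(features) or 1)


def call_cloud_model(features: list[float]) -> float:
    """Stand-in for an RPC to a larger, higher-quality model in the cloud."""
    time.sleep(CLOUD_ROUND_TRIP_MS / 1000)  # simulate the network round trip
    return sum(f * f for f in features) / (len(features) or 1)


EDGE_MODEL = TinyEdgeModel()


def personalize(features: list[float], latency_budget_ms: float) -> float:
    """Serve from the edge when the budget is tight; defer to the cloud otherwise."""
    if latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        return EDGE_MODEL.predict(features)  # smaller model, no round trip
    return call_cloud_model(features)        # slower, but higher-capacity model
```

Real routing logic is rarely this clean, but the structure shows where the cost lands: the edge path trades model capacity for predictable latency, and the fleet of edge models must be versioned and monitored separately from the cloud endpoint.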

Related Terms

Content Delivery Network

A geographically distributed network of proxy servers that caches and delivers content from locations closest to end users. CDNs reduce latency, improve load times, and absorb traffic spikes by serving content from edge nodes rather than a single origin server.

Serverless Computing

A cloud execution model where the provider dynamically manages server allocation and scaling. Developers deploy functions or containers without provisioning infrastructure, paying only for actual compute time consumed rather than reserved capacity.

Function as a Service

A serverless computing category where developers deploy individual functions that execute in response to events. FaaS platforms like AWS Lambda, Google Cloud Functions, and Azure Functions handle all infrastructure management, scaling each function independently.
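
As a concrete shape of the event-driven model, here is a minimal Python handler in the style AWS Lambda expects for an API Gateway proxy integration; the response fields follow that integration's conventions, and the greeting logic is purely illustrative.

```python
import json


def lambda_handler(event, context):
    """Invoked by the platform once per event; the developer provisions no servers."""
    # For an API Gateway proxy integration, the event carries the HTTP request.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```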

Platform as a Service

A cloud computing model that provides a complete development and deployment environment without managing underlying infrastructure. PaaS offerings like Heroku, Vercel, and Google App Engine handle servers, storage, networking, and runtime configuration.

Infrastructure as a Service

A cloud computing model that provides virtualized computing resources over the internet. IaaS offerings like AWS EC2, Google Compute Engine, and Azure Virtual Machines give teams full control over servers, storage, and networking without owning physical hardware.

Container Orchestration

The automated management of containerized applications across a cluster of machines, handling deployment, scaling, networking, and health monitoring. Kubernetes is the dominant orchestration platform, providing declarative configuration for complex distributed systems.