Autoencoder

A neural network trained to compress input data into a compact latent representation and then reconstruct the original input from that representation, learning efficient data encodings in the process.

Autoencoders consist of an encoder that compresses the input into a lower-dimensional bottleneck (latent space) and a decoder that reconstructs the input from this compressed representation. By forcing information through a narrow bottleneck, the network learns to capture the most essential features of the data, discarding noise and redundancy.
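The encoder/decoder/bottleneck structure can be sketched with a minimal linear autoencoder trained by gradient descent. This is an illustrative toy, not a production architecture: the data, dimensions, and learning rate below are all invented for the example, and real autoencoders use deep nonlinear networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions that actually lie on a 2-D subspace,
# so a 2-unit bottleneck can capture the essential structure.
latent_true = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
X = latent_true @ mixing

d_in, d_latent = 8, 2
W_enc = rng.normal(scale=0.1, size=(d_in, d_latent))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(d_latent, d_in))  # decoder weights

def reconstruction_error(X):
    Z = X @ W_enc        # encode: compress into the bottleneck
    X_hat = Z @ W_dec    # decode: reconstruct from the latent code
    return np.mean((X - X_hat) ** 2)

lr = 0.01
initial_error = reconstruction_error(X)
for _ in range(500):
    Z = X @ W_enc
    X_hat = Z @ W_dec
    err = X_hat - X                       # derivative of the squared error
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_error = reconstruction_error(X)
print(initial_error, final_error)  # reconstruction error drops sharply
```

Because the toy data genuinely has 2-D structure, the 2-unit bottleneck is enough for near-perfect reconstruction; with a narrower bottleneck than the data's intrinsic dimensionality, some reconstruction error would remain.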

Variants include denoising autoencoders (trained to reconstruct clean inputs from corrupted ones, learning robust representations), variational autoencoders (VAEs, which learn a smooth, continuous latent space suitable for generation), and sparse autoencoders (which encourage sparse activations for more interpretable features). The choice of variant follows from the task: robustness, generation, or interpretability.
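The distinctive step in a VAE is that the encoder outputs a distribution rather than a point: a mean and (log-)variance per latent dimension, sampled via the reparameterization trick and regularized by a KL term toward a standard normal. A minimal sketch of that step, with hypothetical encoder outputs for a single input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for one input; a real encoder network
# would produce these from the data.
mu = np.array([0.5, -1.0])
log_var = np.array([-0.2, 0.1])

# Reparameterization trick: sample z = mu + sigma * eps so that the
# sampling step stays differentiable with respect to mu and log_var.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
# This term pulls the posterior toward the prior, which is what makes
# the latent space smooth and usable for generation.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
print(z, kl)
```

At generation time, the decoder is fed samples drawn directly from the N(0, I) prior rather than from an encoder output.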

For production applications, autoencoders are useful for dimensionality reduction (compressing data while preserving structure), anomaly detection (inputs the model reconstructs poorly are flagged as anomalous, since the network only learned to compress patterns seen in training), data denoising (removing noise while preserving signal), and feature learning (the latent representation serves as a compact feature vector for downstream tasks). In growth contexts, autoencoder-based anomaly detection can flag unusual user behavior, and latent representations can power similarity-based recommendation systems.
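The anomaly-detection use can be sketched end to end. For brevity this sketch computes the optimal *linear* autoencoder in closed form via SVD (equivalent to PCA) instead of training a network; a trained deep autoencoder would be scored the same way, by reconstruction error. The "traffic" data and the off-pattern point are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" behavior lies near a 2-D subspace of an 8-D feature space.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 8))
X_train = latent @ mixing + 0.05 * rng.normal(size=(500, 8))

# Top-2 right singular vectors serve as encoder/decoder weights
# (the data is roughly zero-mean here; in practice, center it first).
_, _, Vt = np.linalg.svd(X_train, full_matrices=False)
components = Vt[:2]                     # shape (2, 8)

def anomaly_score(x):
    z = x @ components.T                # encode into the 2-D latent space
    x_hat = z @ components              # decode back to 8 dimensions
    return np.mean((x - x_hat) ** 2)    # reconstruction error as the score

normal_point = rng.normal(size=(1, 2)) @ mixing   # follows the learned pattern
weird_point = rng.normal(size=(1, 8)) * 3          # off-subspace behavior

print(anomaly_score(normal_point), anomaly_score(weird_point))
```

In deployment, a threshold on this score (chosen from held-out normal data, e.g. a high percentile of training-set errors) separates routine inputs from ones worth flagging.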

Related Terms