Results for "dimensionality reduction"
Bottleneck: A narrow hidden layer that forces the network to learn compact representations.
Unsupervised learning: Learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling data distributions.
Singular value decomposition (SVD): Decomposes a matrix into orthogonal components; used in embeddings and compression.
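A minimal sketch of the idea (NumPy is an assumption; the entry names no library): factor a matrix into orthonormal directions and singular values, then keep only the leading component for compression.

```python
import numpy as np

# Illustrative matrix; U and Vt have orthonormal columns/rows,
# s holds singular values in decreasing order.
A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

A_rebuilt = U @ np.diag(s) @ Vt            # exact reconstruction
A_rank1 = s[0] * np.outer(U[:, 0], Vt[0])  # best rank-1 approximation

print(np.allclose(A, A_rebuilt))  # True
```

Truncating to the top-k singular values is the mechanism behind classical embedding and compression methods such as LSA.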
Rank: The number of linearly independent rows or columns of a matrix.
Feature: A measurable property or attribute used as model input (raw or engineered), such as age, pixel intensity, or token ID.
Latent space: The internal space where learned representations live; operations here often correlate with semantics or generative factors.
Information gain: The reduction in uncertainty achieved by observing a variable; used in decision trees and active learning.
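A minimal sketch of the decision-tree use, assuming Shannon entropy in bits and a hypothetical binary split (NumPy is an assumption):

```python
import numpy as np

def entropy(labels):
    # Shannon entropy (bits) of a label array.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(parent, left, right):
    # Parent entropy minus the size-weighted entropy of the children.
    n = len(parent)
    return entropy(parent) - (len(left) / n) * entropy(left) \
                           - (len(right) / n) * entropy(right)

parent = np.array([0, 0, 1, 1])       # entropy = 1 bit
left, right = parent[:2], parent[2:]  # a perfect split: each child is pure
print(information_gain(parent, left, right))  # 1.0
```

A split that leaves both children as mixed as the parent would score 0; decision-tree learners pick the split with the highest gain.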
State space: All possible configurations an agent may encounter.
Autoencoder: A model that compresses its input into a latent space and reconstructs it.
Eigenvector: A vector whose direction is unchanged under a linear transformation; it is only scaled by its eigenvalue.
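The defining property A v = λ v can be checked directly (NumPy is an assumption; `eigh` is used because the example matrix is symmetric):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
vals, vecs = np.linalg.eigh(A)  # eigenvalues ascending, eigenvectors as columns

v = vecs[:, 0]   # an eigenvector
lam = vals[0]    # its eigenvalue
print(np.allclose(A @ v, lam * v))  # A only scales v: True
```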
World model: Modeling how the environment evolves in latent space.
Convolutional neural network (CNN): A network using convolution operations with weight sharing and locality; effective for images and signals.
Transformer: An architecture based on self-attention and feedforward layers; the foundation of modern LLMs and many multimodal models.
Mutual information: Quantifies the information shared between two random variables.
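For discrete variables with a known joint table, I(X;Y) = Σ p(x,y) log₂ p(x,y)/(p(x)p(y)). A small sketch (NumPy is an assumption):

```python
import numpy as np

def mutual_information(joint):
    # I(X;Y) in bits from a joint probability table p(x, y).
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0  # skip zero cells: 0 * log 0 is taken as 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Perfectly correlated binary variables: one fully determines the other.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])
print(mutual_information(joint))  # 1.0 bit

# Independent variables share no information.
indep = np.outer([0.5, 0.5], [0.5, 0.5])
print(mutual_information(indep))  # 0.0
```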
Attention head: A single attention mechanism within multi-head attention.
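A single head computes scaled dot-product attention, softmax(QKᵀ/√d_k)V. A minimal NumPy sketch with hypothetical shapes (3 queries, 5 keys, head dimension 4):

```python
import numpy as np

def attention_head(Q, K, V):
    # One head: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, head dim 4
K = rng.normal(size=(5, 4))   # 5 key positions
V = rng.normal(size=(5, 4))
out, w = attention_head(Q, K, V)
print(out.shape)  # (3, 4); each row of w sums to 1
```

Multi-head attention runs several such heads in parallel on learned projections of Q, K, V and concatenates the outputs.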
Positional encoding: Encodes token position explicitly, often via sinusoids.
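A sketch of the sinusoidal variant from the original Transformer, where even dimensions get sines and odd dimensions get cosines at geometrically spaced frequencies (NumPy is an assumption):

```python
import numpy as np

def sinusoidal_positions(num_positions, dim):
    # PE[pos, 2i]   = sin(pos / 10000^(2i/dim))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i/dim))
    pos = np.arange(num_positions)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000 ** (2 * i / dim))
    pe = np.zeros((num_positions, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positions(50, 16)
print(pe.shape)  # (50, 16); at position 0, sin terms are 0 and cos terms are 1
```

Each position gets a distinct vector that is added to (or concatenated with) the token embeddings, since self-attention alone is order-invariant.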
Diffusion model: A generative model that learns to reverse a gradual noising process.
Marginalization: Eliminating variables by summing or integrating over them.
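In the discrete case this is just summing a joint table over the unwanted axis. A tiny sketch with a hypothetical joint p(x, y) over two binary variables (NumPy is an assumption):

```python
import numpy as np

joint = np.array([[0.1, 0.3],
                  [0.2, 0.4]])  # p(x, y)

p_x = joint.sum(axis=1)  # marginalize out y: p(x) = sum_y p(x, y)
p_y = joint.sum(axis=0)  # marginalize out x: p(y) = sum_x p(x, y)
print(p_x)  # [0.4 0.6]; each marginal still sums to 1
```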
Cross-attention: Attention computed between different sequences or modalities, with queries from one and keys/values from the other.
Monte Carlo estimation: Approximating expectations via random sampling.
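The estimator is just a sample mean: E[f(X)] ≈ (1/N) Σ f(xᵢ). A sketch estimating E[X²] for X ~ N(0, 1), whose true value is 1 (NumPy is an assumption):

```python
import numpy as np

rng = np.random.default_rng(42)
samples = rng.normal(size=100_000)

# Sample mean of f(x) = x^2 approximates the expectation E[X^2] = 1.
estimate = np.mean(samples ** 2)
print(estimate)  # close to 1.0; error shrinks as 1/sqrt(N)
```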
Importance sampling: Sampling from an easier proposal distribution and reweighting the samples to correct for the mismatch with the target.
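A sketch under hypothetical choices: the target p is N(3, 1), the proposal q is the wider N(0, 3), and each sample is weighted by p(x)/q(x) so that E_q[w·x] recovers E_p[X] = 3 (NumPy is an assumption):

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 3.0, size=200_000)  # draw from the easy proposal q

w = normal_pdf(x, 3.0, 1.0) / normal_pdf(x, 0.0, 3.0)  # importance weights p/q
estimate = np.mean(w * x)  # unbiased estimate of E_p[X] = 3
print(estimate)  # close to 3.0
```

The proposal must cover the target (q > 0 wherever p > 0); a too-narrow proposal gives huge weights and a high-variance estimate.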
Memoization: Caching the results of expensive computations so repeated calls avoid recomputation.
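The classic illustration is recursive Fibonacci, where caching turns exponential-time recursion into linear time (the standard library's `functools.lru_cache` does the bookkeeping):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each fib(k) is computed once, then served from the cache.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # 23416728348467685, computed instantly
```

Without the cache, `fib(80)` would make on the order of 2^80 recursive calls.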
Configuration space: The space of all possible robot configurations, such as joint angles.
Existential risk: A risk that threatens humanity’s survival.