Results for "direct optimization"
Controls the size of parameter updates; too high and training diverges, too low and it trains slowly or gets stuck.
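A minimal sketch of this trade-off, assuming a toy objective f(w) = w² (function and values are illustrative):

```python
# Gradient of f(w) = w**2 is 2w; watch how the learning rate changes behavior.
def gradient_descent(lr, steps=20, w=1.0):
    for _ in range(steps):
        w -= lr * 2 * w          # update step scaled by the learning rate
    return w

print(gradient_descent(0.1))    # ~0.01:  converges
print(gradient_descent(1.1))    # ~38:    diverges (|w| grows every step)
print(gradient_descent(1e-4))   # ~0.996: barely moves
```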
Variability introduced by minibatch sampling during SGD.
Limiting gradient magnitude to prevent exploding gradients.
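A sketch of global-norm clipping; the function name and threshold are illustrative assumptions:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    # rescale all gradients together if their combined L2 norm exceeds the cap
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        grads = [g * (max_norm / total_norm) for g in grads]
    return grads

grads = [np.array([3.0, 4.0]), np.array([12.0])]    # global norm = 13
clipped = clip_by_global_norm(grads, max_norm=1.0)  # rescaled to norm 1
```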
Matrix of second derivatives describing local curvature of loss.
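An illustrative finite-difference sketch; the test function and evaluation point are assumptions for the example:

```python
import numpy as np

def hessian(f, x, eps=1e-4):
    # second-order finite differences: H[i, j] ~ d^2 f / (dx_i dx_j)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + f(x)) / eps ** 2
    return H

f = lambda v: v[0] ** 2 * v[1] + v[1] ** 3
print(hessian(f, np.array([1.0, 2.0])))
# analytic Hessian at (1, 2): [[2y, 2x], [2x, 6y]] = [[4, 2], [2, 12]]
```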
Matrix of curvature information about the loss surface, such as the Hessian or an approximation to it used by second-order optimizers.
Optimizing policies directly via gradient ascent on expected reward.
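A hedged sketch of the idea: REINFORCE on a toy 3-armed bandit with a softmax policy. The reward vector, step size, and variable names are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([1.0, 2.0, 3.0])    # expected reward per arm
theta = np.zeros(3)                          # policy logits
lr = 0.1

for _ in range(2000):
    probs = np.exp(theta) / np.exp(theta).sum()
    a = rng.choice(3, p=probs)
    r = true_rewards[a] + rng.normal()       # noisy observed reward
    grad_log_pi = -probs                     # softmax: grad log pi(a) = e_a - probs
    grad_log_pi[a] += 1.0
    theta += lr * r * grad_log_pi            # gradient ascent on expected reward

print(probs)  # probability mass concentrates on the best arm (index 2)
```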
Optimization when the objective or the data are random, so updates must rely on noisy estimates rather than exact values.
Measure of vector magnitude; used in regularization and optimization.
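A short sketch of the two most common norms and a typical regularized objective (penalty weight illustrative):

```python
import numpy as np

w = np.array([3.0, -4.0])
l1 = np.abs(w).sum()           # L1 norm: 7.0, encourages sparsity
l2 = np.sqrt((w ** 2).sum())   # L2 norm: 5.0, encourages small weights
# a typical regularized objective: loss(w) + lam * l2**2
```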
Model optimizes objectives misaligned with human values.
The lowest loss value attainable in principle; a floor that optimization can approach but not go below.
A subfield of AI where models learn patterns from data to make predictions or decisions, improving with experience rather than explicit rule-coding.
The field of building systems that perform tasks associated with human intelligence—perception, reasoning, language, planning, and decision-making—via algorithms.
Training with a small labeled dataset plus a larger unlabeled dataset, leveraging assumptions like smoothness/cluster structure.
A branch of ML using multi-layer neural networks to learn hierarchical representations, often excelling in vision, speech, and language.
Methods that learn training procedures or initializations so models can adapt quickly to new tasks with little data.
Learning where data arrives sequentially and the model updates continuously, often under changing distributions.
The learned numeric values of a model adjusted during training to minimize a loss function.
Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
Average of squared residuals; common regression objective.
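The definition written out directly in code (example arrays are illustrative):

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

print(mse(np.array([1.0, 2.0, 3.0]), np.array([1.5, 2.0, 2.0])))  # ~0.4167
```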
Iterative method that updates parameters in the direction of negative gradient to minimize loss.
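A minimal sketch of batch gradient descent on least-squares regression; the data, step size, and iteration count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)

w, lr = np.zeros(3), 0.1
for _ in range(500):
    grad = 2 / len(y) * X.T @ (X @ w - y)   # gradient of the MSE objective
    w -= lr * grad                           # step in the negative-gradient direction

print(w)  # close to w_true
```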
A gradient method using random minibatches for efficient training on large datasets.
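A hedged sketch of minibatch SGD on the same kind of least-squares setup; batch size and learning rate are illustrative. The outer loop also shows what one epoch means:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=1000)

w, lr, batch = np.zeros(3), 0.05, 32
for epoch in range(20):                      # one epoch = one full pass over the data
    perm = rng.permutation(len(y))           # reshuffle each epoch
    for start in range(0, len(y), batch):
        idx = perm[start:start + batch]
        Xb, yb = X[idx], y[idx]
        grad = 2 / len(yb) * Xb.T @ (Xb @ w - yb)  # noisy minibatch gradient
        w -= lr * grad

print(w)
```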
Popular optimizer combining momentum and per-parameter adaptive step sizes via first/second moment estimates.
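A sketch of the Adam update with the paper's default betas and epsilon; the gradient function, starting point, and step count are assumptions:

```python
import numpy as np

def adam(grad_fn, w, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    m = np.zeros_like(w)    # first-moment (mean) estimate
    v = np.zeros_like(w)    # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)         # bias correction
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter step size
    return w

print(adam(lambda w: 2 * w, np.array([5.0])))  # minimizes w**2, ends near 0
```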
One complete traversal of the training dataset during training.
A parameterized function composed of interconnected units organized in layers with nonlinear activations.
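A minimal sketch of a two-layer network's forward pass; shapes and initialization scale are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)   # hidden layer with ReLU nonlinearity
    return h @ W2 + b2                 # linear output layer

print(forward(rng.normal(size=(2, 4))).shape)  # (2, 1)
```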
Nonlinear functions enabling networks to approximate complex mappings; ReLU variants dominate modern DL.
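The common activations written out directly (definitions only, no library assumed):

```python
import numpy as np

relu = lambda x: np.maximum(0.0, x)
leaky_relu = lambda x, a=0.01: np.where(x > 0, x, a * x)  # ReLU variant
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
tanh = np.tanh

x = np.array([-2.0, 0.0, 2.0])
print(relu(x), leaky_relu(x), sigmoid(x))
```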
Gradients grow too large, causing divergence; mitigated by clipping, normalization, careful init.
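A tiny illustration of the mechanism: backpropagation multiplies per-layer Jacobians, so gradient norms can grow geometrically with depth (the Jacobian here is an assumed toy example; see the clipping sketch above for one mitigation):

```python
import numpy as np

g = np.ones(4)                 # gradient at the output, norm 2
J = 1.5 * np.eye(4)            # per-layer Jacobian with spectral norm 1.5
for _ in range(50):            # backprop through 50 layers
    g = J.T @ g
print(np.linalg.norm(g))       # ~2 * 1.5**50, roughly 1.3e9
```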
An RNN variant using gates to mitigate vanishing gradients and capture longer context.
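A hedged sketch of one LSTM cell step using the standard gate equations; dimensions, initialization, and sequence length are illustrative assumptions:

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # one step of the standard LSTM cell: four gates from a single projection
    H = h.shape[-1]
    z = x @ W + h @ U + b
    i = sigmoid(z[..., 0 * H:1 * H])   # input gate
    f = sigmoid(z[..., 1 * H:2 * H])   # forget gate
    o = sigmoid(z[..., 2 * H:3 * H])   # output gate
    g = np.tanh(z[..., 3 * H:4 * H])   # candidate cell values
    c = f * c + i * g                  # additive update helps gradients persist
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 5
W = rng.normal(size=(D, 4 * H)) * 0.1
U = rng.normal(size=(H, 4 * H)) * 0.1
b = np.zeros(4 * H)
h = c = np.zeros((1, H))
for x in rng.normal(size=(7, 1, D)):   # unroll over a length-7 sequence
    h, c = lstm_step(x, h, c, W, U, b)
```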
Constraining outputs to retrieved or provided sources, often with citation, to improve factual reliability.
Updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.
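A hedged sketch assuming PyTorch; `pretrained_model`, `num_classes`, `task_loader`, the `.head` attribute, and the hidden size 768 are placeholders for illustration, not a specific model's API:

```python
import torch
from torch import nn

model = pretrained_model                  # placeholder: a model loaded from a checkpoint
for p in model.parameters():
    p.requires_grad = False               # optionally freeze the pretrained backbone
model.head = nn.Linear(768, num_classes)  # assumed new task head (sizes illustrative)

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for x, y in task_loader:                  # placeholder task-specific data loader
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)           # standard supervised update on new data
    loss.backward()
    optimizer.step()
```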