Results for "hidden variables"
Neural network: A parameterized function built from interconnected units ("neurons") organized in layers with nonlinear activations.
Universal approximation theorem: Neural networks with at least one hidden layer can approximate any continuous function on a compact domain to arbitrary accuracy, given enough units and a suitable activation.
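A common formal statement of this guarantee, assuming a continuous non-polynomial activation \(\sigma\) and a compact domain \(K\):

```latex
% For any continuous f on K and any eps > 0, some one-hidden-layer
% network g of width N attains uniform error below eps:
\forall \varepsilon > 0 \;\; \exists\, N,\, \{w_i, b_i, v_i\} :\quad
g(x) = \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^{\top} x + b_i\right),
\qquad
\sup_{x \in K} \bigl| f(x) - g(x) \bigr| < \varepsilon
```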
Recurrent neural networks (RNNs): Networks with recurrent connections for processing sequences; largely supplanted by Transformers for many tasks.
Bottleneck layer: A narrow hidden layer that forces the network to learn compact representations, as in an autoencoder.
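A minimal sketch of such a bottleneck, with illustrative dimensions (16-dim inputs squeezed through a 2-dim code); the weights are random and untrained, shown only to make the shapes concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the source):
# 16-dim input -> 2-dim bottleneck -> 16-dim reconstruction.
W_enc = rng.normal(size=(16, 2)) * 0.1   # encoder weights
W_dec = rng.normal(size=(2, 16)) * 0.1   # decoder weights

def encode(x):
    return np.tanh(x @ W_enc)            # compact 2-dim code

def decode(z):
    return z @ W_dec                     # reconstruction from the code

x = rng.normal(size=(4, 16))             # batch of 4 inputs
z = encode(x)
x_hat = decode(z)
print(z.shape, x_hat.shape)              # (4, 2) (4, 16)
```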
Speech recognition (ASR): Converting spoken audio into text, often using encoder-decoder or transducer architectures.
Acoustic model: Maps audio signals to linguistic units such as phonemes.
Prosody: The temporal and pitch characteristics of speech, such as rhythm, stress, and intonation.
Concept drift: The relationship between inputs and outputs changes over time, requiring monitoring and periodic model updates.
Representation learning: Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
Latent space: The internal space where learned representations live; operations in this space often correlate with semantics or generative factors.
Hyperparameters: Configuration choices that are not learned directly (or not typically learned) and that govern training or architecture, such as learning rate or layer count.
Causal inference: A framework for reasoning about cause-effect relationships beyond correlation, often using structural assumptions and experiments.
Rademacher complexity: Measures a hypothesis class's ability to fit random noise; used to bound generalization error.
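A standard formalization is the empirical Rademacher complexity of a hypothesis class \(\mathcal{H}\) on a sample \(S = (x_1, \dots, x_m)\), where the \(\sigma_i\) are independent uniform \(\pm 1\) variables:

```latex
\hat{\mathfrak{R}}_S(\mathcal{H})
  = \mathbb{E}_{\sigma}\!\left[
      \sup_{h \in \mathcal{H}} \frac{1}{m} \sum_{i=1}^{m} \sigma_i \, h(x_i)
    \right]
```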
Normalizing flows: Exact-likelihood generative models built from invertible transformations.
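The exact likelihood comes from the change-of-variables formula: for an invertible transform \(f\) mapping data \(x\) to a base variable \(z = f(x)\) with density \(p_Z\),

```latex
\log p_X(x) = \log p_Z\!\bigl(f(x)\bigr)
  + \log \left| \det \frac{\partial f(x)}{\partial x} \right|
```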
Diffusion model: A generative model that learns to reverse a gradual noising process.
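The forward (noising) half of this process has a closed form. A sketch assuming a simple linear beta-schedule (the schedule and step count are illustrative choices, not canonical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear beta-schedule (an illustrative assumption).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)          # cumulative signal fraction

def q_sample(x0, t):
    """Closed-form forward noising: x_t = sqrt(a_t) x_0 + sqrt(1 - a_t) eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.standard_normal(8)                  # a toy "clean" sample
x_early = q_sample(x0, 10)                   # mostly signal
x_late = q_sample(x0, T - 1)                 # nearly pure noise
print(alpha_bar[10] > 0.99, alpha_bar[T - 1] < 1e-4)   # True True
```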
Counterfactual: What would have happened under conditions different from those actually observed.
Interventional distribution: Models the effects of interventions, written with the do-operator as do(X=x).
Average treatment effect (ATE): The expected causal effect of a treatment across a population.
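For a binary treatment X, this quantity is commonly written with the do-operator:

```latex
\mathrm{ATE} = \mathbb{E}\bigl[\,Y \mid \mathrm{do}(X = 1)\,\bigr]
             - \mathbb{E}\bigl[\,Y \mid \mathrm{do}(X = 0)\,\bigr]
```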
Prediction drift: A shift in a model's output distribution over time, often a symptom of data or concept drift.
Linear algebra: The mathematical foundation for ML involving vector spaces, matrices, and linear transformations.
Jacobian: The matrix of first-order partial derivatives of a vector-valued function.
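A minimal numerical sketch: estimating this matrix for a toy vector-valued function by central differences (the function and step size are illustrative choices):

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Numerical Jacobian of vector-valued f at x via central differences."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (np.asarray(f(x + step)) - np.asarray(f(x - step))) / (2 * eps)
    return J

# Toy function f(x, y) = (x*y, x + y); its exact Jacobian is [[y, x], [1, 1]].
f = lambda v: np.array([v[0] * v[1], v[0] + v[1]])
J = jacobian_fd(f, [2.0, 3.0])
print(np.round(J, 4))   # ~[[3. 2.] [1. 1.]]
```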
Gradient: The vector of partial derivatives pointing in the direction of steepest ascent of a function.
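As an illustration of steepest ascent, a few gradient-ascent steps on a toy concave function (the function, step size, and iteration count are illustrative assumptions):

```python
import numpy as np

# Toy concave objective f(x, y) = -(x - 1)^2 - (y + 2)^2, maximized at (1, -2).
grad = lambda v: np.array([-2 * (v[0] - 1), -2 * (v[1] + 2)])

v = np.zeros(2)
for _ in range(200):
    v = v + 0.1 * grad(v)          # step in the steepest-ascent direction
print(np.round(v, 3))              # approaches [ 1. -2.]
```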
Expected value: The average value of a random variable under a distribution.
Law of large numbers: The sample mean converges to the expected value as the sample size grows.
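A quick Monte Carlo illustration (the seed and sample sizes are arbitrary): the sample mean of Uniform(0, 1) draws approaches the true mean 0.5 as n grows.

```python
import numpy as np

rng = np.random.default_rng(42)

# Absolute error of the sample mean of n Uniform(0, 1) draws vs. the true mean 0.5.
errors = {n: abs(rng.uniform(size=n).mean() - 0.5) for n in (10, 1_000, 100_000)}
print(errors)
```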
Plant: In control theory, the physical system being controlled.
Stability: The property that a system returns to equilibrium after a disturbance.
Dynamics: The equations governing how a system's state changes over time.
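A minimal sketch connecting these two control-theory ideas, assuming a toy first-order stable system dx/dt = -x integrated with forward Euler: a disturbed state decays back to the equilibrium at 0.

```python
# Hypothetical first-order stable system: dx/dt = -x (equilibrium at x = 0).
def step(x, dt=0.01):
    return x + dt * (-x)           # forward-Euler update

x = 1.0                            # disturbance away from equilibrium
for _ in range(1000):              # simulate 10 seconds
    x = step(x)
print(abs(x) < 0.001)              # True: the disturbance has decayed
```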
Kinematics: The study of motion without considering the forces that cause it.
Survival analysis: Predicting disease progression or time-to-event outcomes such as patient survival.