Results for "statistical learning"
ReLU (rectified linear unit): the activation function max(0, x); improves gradient flow and training speed in deep networks.
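As defined above, the function is a single clamp; a minimal sketch (the name `relu` is illustrative):

```python
def relu(x: float) -> float:
    # Pass positive inputs through unchanged; zero out negatives.
    return max(0.0, x)
```

Because the derivative is exactly 1 wherever x > 0, gradients on active units are not attenuated, which is the "gradient flow" benefit mentioned above.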
Normalization layers: techniques that stabilize and speed up training by normalizing activations; LayerNorm is common in Transformers.
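A minimal sketch of the normalization step itself (the learned scale and shift of real LayerNorm are omitted; names are illustrative):

```python
import math

def layer_norm(x, eps=1e-5):
    # Normalize a feature vector to zero mean and (near-)unit variance.
    # Real LayerNorm also applies a learned elementwise gamma and beta.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]
```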
Vanishing gradients: gradients shrink as they propagate backward through layers, slowing learning in early layers; mitigated by ReLU, residual connections, and normalization.
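The shrinkage can be seen with arithmetic alone: backprop multiplies per-layer derivatives, and the sigmoid derivative never exceeds 0.25, so even the best case decays geometrically with depth (a toy illustration, not a simulation):

```python
def best_case_sigmoid_chain(depth: int) -> float:
    # sigmoid'(x) = s(x) * (1 - s(x)) peaks at 0.25 (at x = 0), so a
    # chain of `depth` such layers scales the gradient by at most 0.25**depth.
    g = 1.0
    for _ in range(depth):
        g *= 0.25
    return g
```

Ten sigmoid layers already cap the gradient factor at about 1e-6, which is why ReLU, residual connections, and normalization help.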
System prompt: a high-priority instruction layer setting overarching behavior constraints for a chat model.
Fine-tuning: updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.
Direct Preference Optimization (DPO): a preference-based training method that optimizes a policy directly from pairwise comparisons without an explicit RL loop.
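For one preference pair the loss reduces to a logistic loss on a margin of log-probability ratios; a sketch assuming per-response log-probabilities are already summed (names are illustrative):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Margin: how much more the policy prefers the chosen response
    # over the rejected one than the frozen reference model does.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))
```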
Reward model: a model trained to predict human preferences (or utility) for candidate outputs; used in RLHF-style pipelines.
Algorithmic bias: systematic differences in model outcomes across groups; arises from data, labels, and deployment context.
Guardrails: rules and controls around generation (filters, validators, structured outputs) to reduce unsafe or invalid behavior.
SHAP (SHapley Additive exPlanations): a feature-attribution method grounded in cooperative game theory, widely used to explain predictions in tabular settings.
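The game-theoretic core is the Shapley value itself, which is exact but exponential in the number of features (SHAP approximates it at scale); a sketch over a toy value function:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    # value(S) maps a frozenset of feature names to the model payoff for
    # that coalition; phi[i] averages feature i's marginal contribution
    # over all coalitions, weighted by the Shapley kernel.
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                S = frozenset(subset)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(S | {i}) - value(S))
        phi[i] = total
    return phi
```

For an additive payoff each feature's Shapley value is exactly its own contribution, and the values always sum to value(all) - value(empty) (the efficiency property).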
Class imbalance: when some classes are rare, requiring reweighting, resampling, or specialized metrics.
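Reweighting is the lightest-touch option; one common scheme weights each class inversely to its frequency (scikit-learn's `class_weight="balanced"` uses the same formula):

```python
from collections import Counter

def balanced_class_weights(labels):
    # weight[c] = n_samples / (n_classes * count[c]); rare classes get
    # larger weights, so every class contributes equally to the loss.
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}
```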
Data augmentation: expanding training data via transformations (flips, noise, paraphrases) to improve robustness.
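For numeric features the simplest transform is additive noise; a sketch (the flip and paraphrase analogues for images and text need domain-specific tooling):

```python
import random

def jitter(sample, sigma=0.1, seed=None):
    # Return a noisy copy of a numeric feature vector; the label is
    # assumed unchanged, which is what makes this an augmentation.
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in sample]
```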
Model governance: policies and practices for approving, monitoring, auditing, and documenting models in production.
Model cards: standardized documentation describing intended use, performance, limitations, data, and ethical considerations.
Parameter-efficient fine-tuning (PEFT): techniques that fine-tune small additional components rather than all weights, reducing compute and storage.
LoRA (Low-Rank Adaptation): a PEFT method that injects trainable low-rank matrices into layers, enabling efficient fine-tuning.
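The effective weight after adaptation is W + (alpha/r) * B A, with B (d_out x r) and A (r x d_in) trainable and W frozen; a pure-Python sketch with nested lists (shapes follow the LoRA paper's convention; names are illustrative):

```python
def lora_effective_weight(W, A, B, alpha=1.0):
    # W: d_out x d_in frozen base weight; A: r x d_in; B: d_out x r.
    # Only A and B (r * (d_in + d_out) numbers) are trained, versus
    # d_out * d_in for full fine-tuning.
    r = len(A)
    scale = alpha / r
    out = [row[:] for row in W]
    for i in range(len(W)):
        for j in range(len(W[0])):
            out[i][j] += scale * sum(B[i][k] * A[k][j] for k in range(r))
    return out
```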
Adversarial examples: inputs crafted to cause model errors or unsafe behavior, often imperceptible in vision or subtle in text.
Data poisoning: maliciously inserting or altering training data to implant backdoors or degrade performance.
Agent memory: mechanisms for retaining context across turns and sessions: scratchpads, vector memories, structured stores.
Computer vision: AI focused on interpreting images and video: classification, detection, segmentation, tracking, and 3D understanding.
Rademacher complexity: measures a hypothesis class's ability to fit random noise; used to bound generalization error.
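The definition can be estimated directly for a finite hypothesis class: draw random ±1 labels and measure how well the best hypothesis correlates with them (a Monte-Carlo sketch; names are illustrative):

```python
import random

def empirical_rademacher(predictions, n_trials=2000, seed=0):
    # predictions: one list of outputs h(x_1..x_n) per hypothesis.
    # Estimates E_sigma[ sup_h (1/n) * sum_i sigma_i * h(x_i) ].
    rng = random.Random(seed)
    n = len(predictions[0])
    total = 0.0
    for _ in range(n_trials):
        sigma = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        total += max(sum(s * h for s, h in zip(sigma, hyp)) / n
                     for hyp in predictions)
    return total / n_trials
```

A class containing both a hypothesis and its negation can always partly track the random signs, so it measures higher than a single constant hypothesis.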
Non-convex optimization: optimization over a loss surface with multiple local minima and saddle points; typical of neural network training.
Gradient noise: variability in gradient estimates introduced by minibatch sampling during SGD.
Sharp minimum: a narrow minimum of the loss landscape, often associated with poorer generalization.
Highway networks: an early architecture using learned gates for skip connections.
Scaling laws: empirical power laws linking model size, data, and compute to performance.
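The canonical form is a power law in one resource with the others unconstrained, e.g. L(N) = (N_c / N)**alpha for parameter count N; the constants below are the Kaplan et al. (2020) fits and are illustrative, not universal:

```python
def loss_vs_params(n_params, n_c=8.8e13, alpha=0.076):
    # Predicted cross-entropy loss as a function of parameter count,
    # with data and compute treated as effectively unconstrained.
    return (n_c / n_params) ** alpha
```

Doubling the model size multiplies the predicted loss by 2**-alpha, roughly a 5% reduction at these constants.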
Router (gating network): in a Mixture-of-Experts layer, chooses which experts process each token.
Action space: the set of all actions available to the agent.
Markov decision process (MDP): a formal framework for sequential decision-making under uncertainty.
Bellman equation: the fundamental recursive relationship defining optimal value functions.
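Treating the equation as an update rule gives value iteration, which converges to the unique fixed point V*; a sketch on a tabular MDP (the data-structure layout is illustrative):

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    # Bellman optimality: V*(s) = max_a [ R[s][a] + gamma * E_{s'}[V*(s')] ].
    # P[s][a] is a dict {next_state: probability}; R[s][a] is the
    # expected immediate reward. Iterate until successive sweeps agree.
    V = {s: 0.0 for s in P}
    while True:
        V_new = {
            s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                   for a in P[s])
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new
        V = V_new
```

Because the Bellman backup is a gamma-contraction, the sweep-to-sweep difference shrinks geometrically and the loop terminates.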