Enables a model to invoke external computation or lookup (e.g., a calculator, search engine, or database).
Shift in feature distribution over time.
The learned numeric values of a model adjusted during training to minimize a loss function.
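A minimal sketch of this adjustment process: one-variable linear regression whose two parameters are updated by full-batch gradient descent on a squared-error loss. The data, model, and learning rate below are illustrative, not from any specific source.

```python
# Parameters (w, b) of a 1-D linear model, updated by gradient
# descent to reduce mean squared error on a small dataset.

def gd_step(w, b, xs, ys, lr=0.1):
    """One full-batch gradient step on the MSE loss."""
    n = len(xs)
    # Gradients of (1/n) * sum((w*x + b - y)^2) w.r.t. w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    return w - lr * grad_w, b - lr * grad_b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # underlying relation: y = 2x + 1
w, b = 0.0, 0.0             # initial parameter values
for _ in range(500):
    w, b = gd_step(w, b, xs, ys)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

Because the data lie exactly on a line, the parameters converge to the loss minimizer (w = 2, b = 1); with noisy data they would settle at the least-squares fit instead.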
How well a model performs on new data drawn from the same (or a similar) distribution as the training data.
Selecting the most informative samples to label (e.g., uncertainty sampling) to reduce labeling cost.
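A minimal sketch of uncertainty sampling for a binary classifier: from an unlabeled pool, pick the items whose predicted probability is closest to 0.5, since labeling those is expected to be most informative. The pool and probabilities below are made up for illustration.

```python
# Uncertainty sampling: rank unlabeled items by how unsure the
# current model is about them, and request labels for the top k.

def uncertainty_sample(pool, probs, k):
    """Return indices of the k pool items the model is least sure about."""
    # For a binary prediction p, min(p, 1 - p) is largest near p = 0.5.
    scores = [min(p, 1 - p) for p in probs]
    ranked = sorted(range(len(pool)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

pool = ["a", "b", "c", "d", "e"]
probs = [0.95, 0.52, 0.10, 0.48, 0.80]   # model's P(class=1) per item
print(uncertainty_sample(pool, probs, 2))  # → [1, 3]
```

Items "b" and "d" sit nearest the decision boundary, so they are selected; confident predictions like "a" and "c" are skipped, which is how labeling cost is reduced.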
A narrow minimum often associated with poorer generalization.
Systematic error introduced by simplifying assumptions in a learning algorithm.
Built-in assumptions guiding learning efficiency and generalization.
Learns the score function ∇ₓ log p(x) of the data distribution to drive generative sampling.
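To make the score concrete: for a 1-D Gaussian N(mu, sigma²) it has the closed form -(x - mu)/sigma², and repeatedly stepping along it pulls a point toward the mode (Langevin sampling adds Gaussian noise on top of each step). The starting point and step size below are illustrative.

```python
def score(x, mu=0.0, sigma=1.0):
    """Score of N(mu, sigma^2): d/dx log p(x) = -(x - mu) / sigma^2."""
    return -(x - mu) / sigma ** 2

# Noise-free score ascent: follow the score toward high density.
x = 5.0                    # start far from the mode
for _ in range(2000):
    x += 0.01 * score(x)
print(round(x, 6))  # → 0.0 (converged to the mode mu = 0)
```

Score-based generative models learn this function with a neural network instead of a formula, then sample by following the learned score with added noise.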
Exact likelihood generative models using invertible transforms.
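A minimal sketch of why invertible transforms give exact likelihoods: for a one-layer affine flow x = a·z + b with a standard normal base density, the change-of-variables formula yields log p(x) = log N(z; 0, 1) - log|a|. The parameters a and b below are illustrative.

```python
import math

def affine_flow_logpdf(x, a, b):
    """Exact log-likelihood under a single affine flow layer."""
    z = (x - b) / a                                     # invert x = a*z + b
    base = -0.5 * z * z - 0.5 * math.log(2 * math.pi)   # log N(z; 0, 1)
    return base - math.log(abs(a))                      # log|det dz/dx| = -log|a|

# Since the flow is affine, this matches the closed-form N(b, a^2) density.
print(round(affine_flow_logpdf(1.0, 2.0, 1.0), 4))  # → -1.6121
```

Real flow models stack many such invertible layers with learned parameters; the log-determinant terms simply accumulate, keeping the likelihood exact.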
Two-network setup where generator fools a discriminator.
Startup latency for services.
Finding mathematical equations from data.
Privacy risk analysis under GDPR-like laws.
Methods that learn training procedures or initializations so models can adapt quickly to new tasks with little data.
Letting an LLM call external functions/APIs to fetch data, compute, or take actions, improving reliability.
Model-generated content that is fluent but unsupported by evidence or incorrect; mitigated by grounding and verification.
Reinforcement learning from human feedback: uses preference data to train a reward model and optimize the policy.
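A minimal sketch of the reward-model half of this pipeline: given scores for a preferred ("chosen") and a dispreferred ("rejected") response, the Bradley-Terry preference loss -log sigmoid(r_chosen - r_rejected) is small when the model ranks the preferred response higher. The scores below are illustrative.

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry loss on a single preference pair."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))   # -log(sigmoid(margin))

good = preference_loss(2.0, -1.0)   # reward model agrees with the human label
bad = preference_loss(-1.0, 2.0)    # reward model disagrees
print(good < bad)  # → True
```

Training minimizes this loss over many labeled pairs; the resulting reward model then scores responses during policy optimization.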
Capabilities that appear only beyond certain model sizes.
Persistent directional movement over time.
Identifying abrupt changes in data generation.
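A minimal sketch of offline change-point detection for a single mean shift: scan every split of the series and pick the one minimizing within-segment squared error. The series below is made up for illustration; real detectors handle multiple change points and noise models.

```python
def best_changepoint(xs):
    """Return the split index that best separates two segment means."""
    def sse(seg):
        if not seg:
            return 0.0
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    # Cost of splitting before index k: squared error within each segment.
    costs = {k: sse(xs[:k]) + sse(xs[k:]) for k in range(1, len(xs))}
    return min(costs, key=costs.get)

series = [0.1, -0.2, 0.0, 0.1, 5.2, 4.9, 5.1, 5.0]
print(best_changepoint(series))  # → 4 (the mean jumps between index 3 and 4)
```

The chosen split is where the data-generating process changed: the first segment hovers near 0, the second near 5.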
Models effects of interventions (do(X=x)).
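A minimal sketch of the do-operator in a tiny structural causal model Z → X → Y with confounding Z → Y: intervening with do(X=x) cuts the Z → X edge, so X is set by hand instead of being generated from Z. All coefficients are illustrative.

```python
import random

def sample(do_x=None):
    """Draw one sample from the SCM; do_x overrides X's mechanism."""
    z = random.gauss(0, 1)
    x = z if do_x is None else do_x   # intervention severs Z -> X
    y = 2 * x + z                     # Y depends on X and on confounder Z
    return y

random.seed(0)
n = 100_000
interventional = sum(sample(do_x=1.0) for _ in range(n)) / n
print(round(interventional))  # → 2, since E[Y | do(X=1)] = 2*1 + E[Z] = 2
```

Merely *observing* X = 1 would instead imply Z = 1 here (since X = Z observationally), giving E[Y | X=1] = 3; the gap between 3 and 2 is exactly the confounding that the do-operator removes.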
Agents communicate via shared state.
Maintaining alignment under new conditions.
Task instruction without examples.
Applying learned patterns incorrectly.
Centralized AI expertise group.
External sensing of surroundings (vision, audio, lidar).
Differences between simulated and real physics.
AI systems assisting clinicians with diagnosis or treatment decisions.