Results for "multiple samples"
Sampling from an easier proposal distribution, then reweighting each draw by the ratio of target to proposal density (importance sampling).
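The reweighting idea can be sketched in a few lines of pure Python; the toy target density and function names here are illustrative, not from the source:

```python
import random

def importance_estimate(f, sample_q, p_pdf, q_pdf, n=50_000, seed=0):
    """Estimate E_p[f(X)] by sampling from an easier proposal q and
    reweighting each sample x by w(x) = p(x) / q(x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_q(rng)
        total += f(x) * p_pdf(x) / q_pdf(x)
    return total / n

# Toy example: target density p(x) = 2x on [0, 1] (so E_p[X] = 2/3),
# sampled via the uniform proposal q(x) = 1.
p = lambda x: 2.0 * x
q = lambda x: 1.0
est = importance_estimate(lambda x: x, lambda rng: rng.random(), p, q)
```

The estimate converges to 2/3 as the sample count grows, even though no sample was ever drawn from p itself.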
A motion planner that explores the robot's configuration space by random sampling (e.g., RRT or PRM) rather than exhaustive search.
How well a model performs on new data drawn from the same (or similar) distribution as training.
A robust evaluation technique that trains/evaluates across multiple splits to estimate performance variability.
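A minimal k-fold split can be written without any ML library; this sketch (function name is illustrative) shows the key invariant, that every example lands in exactly one validation fold:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation.
    Indices are shuffled once, then partitioned into k disjoint folds;
    each fold serves as the validation set exactly once."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val

splits = list(k_fold_indices(10, k=5))
```

Averaging a metric over the k validation folds gives both a performance estimate and a sense of its variability.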
One complete traversal of the training dataset during training.
Randomly zeroing activations during training to reduce co-adaptation and overfitting.
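The standard "inverted dropout" formulation can be sketched in pure Python (a toy version of what frameworks like PyTorch implement internally):

```python
import random

def dropout(activations, p=0.5, training=True, seed=None):
    """Inverted dropout: during training, zero each unit with
    probability p and scale survivors by 1/(1-p), so the expected
    activation is unchanged. At inference it is the identity."""
    if not training or p == 0.0:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - p
    return [a / keep if rng.random() > p else 0.0 for a in activations]

out = dropout([1.0] * 1000, p=0.5, seed=1)
```

Because survivors are rescaled at train time, no correction is needed at inference.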
The text (and possibly other modalities) given to an LLM to condition its output behavior.
Measure of consistency across labelers; low agreement indicates ambiguous tasks or poor guidelines.
Training across many devices/silos without centralizing raw data; aggregates updates, not data.
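The aggregation step can be illustrated with a federated-averaging (FedAvg) sketch: each client contributes only a parameter vector, weighted by its local dataset size. The function name and toy values are illustrative:

```python
def fed_avg(client_params, client_sizes):
    """Federated averaging: combine per-client parameter vectors into
    a global model, weighting each client by its local dataset size.
    Only parameters cross the wire, never the raw data."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(params[i] * n for params, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients; the second holds 3x as much data, so it gets 3x the weight.
global_model = fed_avg([[1.0, 0.0], [3.0, 2.0]], client_sizes=[1, 3])
```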
A system that perceives state, selects actions, and pursues goals, often combining LLM reasoning with tools and memory.
Coordinating tools, models, and steps (retrieval, calls, validation) to deliver reliable end-to-end behavior.
Attention variants and related techniques (e.g., sparse, sliding-window, or linear attention) that handle longer documents without the quadratic cost of full self-attention.
Assigning labels per pixel (semantic) or per instance (instance segmentation) to map object boundaries.
A mixture-of-experts layer routes each input to a small subset of expert parameters, scaling model capacity without a proportional increase in compute.
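The routing step is typically a top-k selection over router scores; a minimal sketch with scalar "experts" (names and values illustrative):

```python
import math

def top_k_route(logits, k=2):
    """Select the k experts with the highest router logits and return
    softmax-normalized weights over just those experts."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    chosen = ranked[:k]
    exps = [math.exp(logits[i]) for i in chosen]
    z = sum(exps)
    return {i: e / z for i, e in zip(chosen, exps)}

def moe_forward(x, experts, router_logits, k=2):
    """Run only the selected experts and mix their outputs by weight."""
    weights = top_k_route(router_logits, k)
    return sum(w * experts[i](x) for i, w in weights.items())

experts = [lambda x: 2 * x, lambda x: x + 1, lambda x: -x]
y = moe_forward(3.0, experts, router_logits=[2.0, 2.0, -1.0], k=2)
```

Only the chosen experts execute, which is what decouples parameter count from per-token compute.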
A Vision Transformer (ViT) splits an image into fixed-size patches, embeds each patch as a token, and processes the sequence with a standard Transformer encoder.
Agents communicate indirectly through a shared state (a blackboard) that each agent reads from and writes to, rather than by direct messaging.
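The pattern can be sketched with a dict as the blackboard and plain functions as agents; the agent names and tasks here are a hypothetical illustration:

```python
def run_blackboard(agents, blackboard, max_rounds=10):
    """Repeatedly let each agent inspect the shared blackboard and
    post updates, until a full round passes with no changes."""
    for _ in range(max_rounds):
        changed = False
        for agent in agents:
            changed |= agent(blackboard)
        if not changed:
            break
    return blackboard

def tokenizer(bb):
    if "text" in bb and "tokens" not in bb:
        bb["tokens"] = bb["text"].split()
        return True
    return False

def counter(bb):
    if "tokens" in bb and "count" not in bb:
        bb["count"] = len(bb["tokens"])
        return True
    return False

# counter is listed first but still works: it simply waits until the
# tokenizer has posted its result to the shared state.
result = run_blackboard([counter, tokenizer], {"text": "shared state works"})
```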
Many simple, decentralized agents following local rules, whose interactions produce emergent collective intelligence.
A nonzero vector whose direction is unchanged by a linear transformation; the transformation only scales it by the corresponding eigenvalue.
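A worked 2x2 check makes the definition concrete: v is an eigenvector of A exactly when Av is parallel to v. The function name and example matrix are illustrative:

```python
def is_eigenvector(matrix, v, tol=1e-9):
    """Check whether v is an eigenvector of a 2x2 matrix: A @ v must
    point in the same (or opposite) direction as v."""
    av = [matrix[0][0] * v[0] + matrix[0][1] * v[1],
          matrix[1][0] * v[0] + matrix[1][1] * v[1]]
    # The 2D cross product is zero iff av is parallel to v.
    return abs(av[0] * v[1] - av[1] * v[0]) < tol

A = [[2.0, 1.0],
     [1.0, 2.0]]
# A @ [1, 1] = [3, 3]: same direction, scaled by the eigenvalue 3.
# A @ [1, 0] = [2, 1]: direction changes, so [1, 0] is not an eigenvector.
```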
The matrix of second partial derivatives of a scalar function, describing its local curvature.
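For a two-variable function the Hessian can be approximated with central finite differences; this sketch (illustrative function name, toy f) recovers the constant Hessian of a quadratic:

```python
def hessian_2d(f, x, y, h=1e-4):
    """Approximate the 2x2 Hessian of f(x, y) with central finite
    differences: diagonal entries are second differences along each
    axis, the off-diagonal entry is the mixed partial."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return [[fxx, fxy], [fxy, fyy]]

# f(x, y) = x^2 + 3xy has constant Hessian [[2, 3], [3, 0]].
H = hessian_2d(lambda x, y: x**2 + 3 * x * y, 1.0, 2.0)
```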
A single worked example included in the prompt to guide the model's output (one-shot prompting).
Coordinating models, tools, and logic.
A software pipeline that converts raw sensor data (e.g., camera, lidar) into structured representations of the environment.
Computing the joint angles that place a robot's end effector at a desired position and orientation; the inverse of forward kinematics.
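For a planar two-link arm the problem has a closed-form solution, which a forward-kinematics roundtrip can verify. Function names and link lengths below are illustrative:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics for a planar 2-link arm: return
    joint angles (theta1, theta2) that place the end effector at
    (x, y). Elbow-up/-down ambiguity is resolved by taking acos >= 0."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics, used here to verify the IK solution."""
    return (l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2),
            l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2))

t1, t2 = two_link_ik(1.2, 0.5)
```

Real arms with more joints generally need numerical solvers (e.g., Jacobian-based iteration), since no closed form exists in general.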
A deep learning system (e.g., AlphaFold) that predicts a protein's 3D structure from its amino-acid sequence.
A failure mode in which individually reasonable agents settle on a collectively suboptimal outcome, such as converging on incompatible conventions or duplicating work.
The tendency of goal-directed systems to acquire resources, options, or control as instrumentally useful subgoals, largely independent of the final objective.