Results for "compute-data-performance"
A wide basin in the loss landscape; often correlated with better generalization.
A narrow hidden layer forcing compact representations.
Allows a model to attend to information from different representation subspaces simultaneously.
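The "different subspaces" idea can be sketched minimally: split the feature dimension into heads, run scaled dot-product attention independently in each, and concatenate. This is a simplified illustration that assumes identity Q/K/V projections (a real layer would include learned projection matrices).

```python
import numpy as np

def multi_head_attention(x, num_heads):
    """Scaled dot-product self-attention split into heads.

    x: (seq_len, d_model); d_model must be divisible by num_heads.
    Q/K/V projections are assumed identity to keep the sketch minimal.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Split the feature dimension into independent subspaces (heads).
    heads = x.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)  # (H, S, Dh)
    scores = heads @ heads.transpose(0, 2, 1) / np.sqrt(d_head)       # (H, S, S)
    # Softmax over the key dimension, per head.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ heads                                             # (H, S, Dh)
    # Concatenate heads back into the model dimension.
    return out.transpose(1, 0, 2).reshape(seq_len, d_model)

x = np.random.default_rng(0).normal(size=(5, 8))
y = multi_head_attention(x, num_heads=2)
```

Each head sees only its own slice of the features, so the heads can attend to different positions for different reasons.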
Controls amount of noise added at each diffusion step.
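A common concrete choice (an assumption here, not stated in the entry) is the DDPM-style linear beta schedule, where beta_t sets the noise variance injected at step t:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # noise variance added per step
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative fraction of signal retained

def noisy_sample(x0, t, rng):
    # After t steps: x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
```

Early steps add little noise; by the final step `alpha_bars[-1]` is near zero, so the sample is almost pure noise.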
Repeating temporal patterns.
Using production outcomes to improve models.
Measure of spread around the mean.
Normalized covariance.
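The two statistical entries above can be computed directly: the standard deviation measures spread around the mean, and dividing the covariance by the two standard deviations normalizes it into the Pearson correlation coefficient in [-1, 1]. A small illustrative example:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 10.0])

std_x = x.std()                                   # spread around the mean
cov_xy = ((x - x.mean()) * (y - y.mean())).mean() # covariance
corr = cov_xy / (x.std() * y.std())               # normalized covariance
```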
AI used without governance approval.
Model behaves well during training but not in deployment.
Coordinating models, tools, and logic.
Limiting inference usage.
Predicts next state given current state and action.
Detecting and avoiding obstacles.
AI applied to X-rays, CT, MRI, ultrasound, pathology slides.
Predicting disease progression or survival.
Testing AI under actual clinical conditions.
US approval process for medical AI devices.
Ultra-low-latency algorithmic trading.
AI discovering new compounds/materials.
A parameterized mapping from inputs to outputs; includes architecture + learned parameters.
The learned numeric values of a model adjusted during training to minimize a loss function.
Systematic differences in model outcomes across groups; arises from data, labels, and deployment context.
Measure of consistency across labelers; low agreement indicates ambiguous tasks or poor guidelines.
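One common such measure (chosen here as an illustration; the entry does not name a specific statistic) is Cohen's kappa, which corrects raw agreement between two labelers for the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two labelers, corrected for chance agreement."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    # Chance agreement from each labeler's marginal label frequencies.
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1.0 - expected)

a = ["pos", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "pos"]
```

Kappa of 1.0 means perfect agreement; values near 0 mean the labelers agree no more than chance, a sign of ambiguous tasks or poor guidelines.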
Attacks that infer whether specific records were in training data, or reconstruct sensitive training examples.
Structured dataset documentation covering collection, composition, recommended uses, biases, and maintenance.
Methods to protect model/data during inference (e.g., trusted execution environments) from operators/attackers.
Systematic error introduced by simplifying assumptions in a learning algorithm.
Built-in assumptions guiding learning efficiency and generalization.
Estimating parameters by maximizing likelihood of observed data.
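A worked instance of this entry: for a Bernoulli coin with k heads in n flips, the likelihood is L(p) = p^k (1-p)^(n-k), and maximizing the log-likelihood gives the estimate p = k/n (the sample frequency).

```python
import math

def bernoulli_mle(flips):
    # The MLE for a Bernoulli parameter is the sample frequency k / n.
    return sum(flips) / len(flips)

def log_likelihood(p, flips):
    k, n = sum(flips), len(flips)
    return k * math.log(p) + (n - k) * math.log(1 - p)

flips = [1, 1, 0, 1, 0, 1, 1, 0]  # 5 heads in 8 flips
p_hat = bernoulli_mle(flips)
```

Any other candidate value of p yields a lower log-likelihood on the observed flips, which is what "maximizing likelihood of observed data" means in practice.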