Measures similarity between vectors, e.g., cosine similarity and the projection of one vector onto another.
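A minimal sketch of the similarity and projection computations mentioned above, using only the standard formulas (the helper names are illustrative):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (|u| * |v|)
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def project(u, v):
    # Projection of u onto v: ((u . v) / (v . v)) * v
    scale = dot(u, v) / dot(v, v)
    return [scale * b for b in v]

u, v = [3.0, 4.0], [1.0, 0.0]
print(cosine_similarity(u, v))  # 0.6
print(project(u, v))            # [3.0, 0.0]
```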
Ensuring learned behavior matches intended objective.
Using limited human feedback to guide large models.
Model behaves well during training but not after deployment.
Asking a model to review and improve its own output.
Applying learned patterns to contexts where they do not hold.
Train/test environment mismatch.
Model relies on spurious signals that correlate with the target but are not causally relevant.
Startup latency for services.
AI systems that perceive and act in the physical world through sensors and actuators.
Algorithm computing control actions.
Artificial environment for training/testing agents.
Randomizing simulation parameters to improve real-world transfer.
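Domain randomization, as defined above, amounts to sampling fresh simulator parameters per training episode; the parameter names and ranges below are illustrative assumptions, not a specific simulator's API:

```python
import random

def sample_sim_params(rng):
    # Hypothetical physics parameters and ranges (illustrative only):
    # drawing new values each episode prevents the policy from
    # overfitting to a single fixed simulator configuration.
    return {
        "friction": rng.uniform(0.5, 1.5),
        "mass_scale": rng.uniform(0.8, 1.2),
        "sensor_noise_std": rng.uniform(0.0, 0.05),
    }

rng = random.Random(0)
for episode in range(3):
    params = sample_sim_params(rng)
    print(f"episode {episode}: {params}")
```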
Performance drop when moving from simulation to reality.
Directly optimizing control policies.
Reward only given upon task completion.
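A sparse reward as defined above can be sketched as a function that pays out only at the goal state; the 1-D walk below is an illustrative assumption:

```python
def sparse_reward(state, goal):
    # Nonzero reward only upon task completion; zero everywhere else.
    return 1.0 if state == goal else 0.0

# Illustrative episode on a 1-D line: the agent steps from 0 toward 4
# and receives no learning signal until the final step.
goal = 4
rewards = [sparse_reward(s, goal) for s in range(5)]
print(rewards)  # [0.0, 0.0, 0.0, 0.0, 1.0]
```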
Inferring human goals from behavior.
Automated assistance identifying disease indicators.
AI supporting legal research, drafting, and analysis.
AI-assisted review of legal documents.
Predicting protein 3D structure from sequence.
AI selecting next experiments.
AI tacitly coordinating prices.
Rate at which AI capabilities improve.
Research ensuring AI remains safe.
The learned numeric values of a model adjusted during training to minimize a loss function.
A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
Minimizing average loss on training data; can overfit when data is limited or biased.
When a model fits noise/idiosyncrasies of training data and performs poorly on unseen data.
When a model cannot capture underlying structure, performing poorly on both training and test data.
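The last three entries (empirical risk minimization, overfitting, underfitting) can be illustrated by minimizing training loss with polynomials of increasing degree on noisy data; the sine target and chosen degrees are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, size=x.shape)  # noisy training data
x_test = np.linspace(0.0, 1.0, 200)
y_test = np.sin(2 * np.pi * x_test)  # noise-free targets for evaluation

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)  # least squares = empirical risk minimization
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
# Degree 1 underfits (high error on both sets). Raising the degree always
# lowers the training error, but past some point only by fitting noise,
# so a lower training error no longer implies a lower test error.
```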