Results for "high-risk"
High-Risk AI System
AI used in sensitive domains requiring compliance.
High-risk AI systems are AI systems whose failure can have serious consequences. For example, AI used in medical devices or self-driving cars is considered high-risk because mistakes could harm people. Because of this, there are strict rules that these systems must follow.
A datastore optimized for similarity search over embeddings, enabling semantic retrieval at scale.
Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
System design where humans validate or guide model outputs, especially for high-stakes decisions.
A hypothesis class is PAC-learnable if, with high probability, an approximately correct hypothesis can be learned from finitely many samples.
Learns the score (∇ log p(x)) for generative sampling.
Diffusion performed in latent space for efficiency.
Flat high-dimensional regions slowing training.
Applying patterns learned during training to situations where they do not hold.
A model's predicted probabilities do not match its observed correctness rates.
High-fidelity virtual model of a physical system.
A function measuring prediction error (and sometimes calibration), guiding gradient-based optimization.
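A minimal sketch of this idea: mean squared error as the loss, driving gradient descent on a one-parameter linear model. All names here (`mse`, `xs`, `ys`, `w`, `lr`) are illustrative, not from any library.

```python
def mse(y_pred, y_true):
    """Mean squared error: average squared prediction error."""
    return sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_true)

# Fit y = w * x by stepping against the gradient of the loss.
# d(MSE)/dw = 2 * mean((w*x - y) * x).
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # data generated by y = 2x
w, lr = 0.0, 0.1

for _ in range(100):
    grad = 2 * sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # gradient descent step: move w to reduce the loss
```

After training, `w` converges to the data-generating slope of 2.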
How well a model performs on new data drawn from the same (or similar) distribution as training.
Randomly zeroing activations during training to reduce co-adaptation and overfitting.
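A sketch of the common "inverted dropout" variant, assuming activations arrive as a plain list; the function name and signature are illustrative.

```python
import random

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p and scale the
    survivors by 1/(1-p), so the expected activation is unchanged.
    At inference time (training=False) it is the identity."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0 for a in activations]
```

The 1/(1-p) rescaling is what lets the same network run unmodified at inference time.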
Breaking documents into pieces for retrieval; chunk size/overlap strongly affect RAG quality.
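A minimal character-level sketch of chunking with overlap (real pipelines usually split on tokens or sentence boundaries); `size` and `overlap` defaults are arbitrary illustrations.

```python
def chunk(text, size=200, overlap=50):
    """Split text into fixed-size chunks whose neighbors share `overlap`
    characters, so content spanning a boundary stays intact in at least
    one chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, i, step = [], 0, size - overlap
    while i < len(text):
        chunks.append(text[i:i + size])
        if i + size >= len(text):
            break
        i += step
    return chunks
```

Larger overlap improves recall at chunk boundaries but increases index size and retrieval redundancy.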
A formal privacy framework ensuring outputs do not reveal much about any single individual’s data contribution.
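One standard mechanism under this framework is Laplace noise addition; the sketch below uses inverse-CDF sampling and is illustrative only, not a hardened implementation (production systems need floating-point defenses).

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace(0, sensitivity/epsilon) noise, making
    this single query answer epsilon-differentially private."""
    b = sensitivity / epsilon
    u = random.random() - 0.5  # u in [-0.5, 0.5)
    # Inverse-CDF sample of Laplace(0, b)
    return true_value - b * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
```

For a counting query (sensitivity 1), smaller epsilon means stronger privacy and noisier answers.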
Training across many devices/silos without centralizing raw data; aggregates updates, not data.
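The aggregation step can be sketched as federated averaging (FedAvg): each client sends weights, not data, and the server combines them weighted by local dataset size. Names and list-based weights are illustrative simplifications.

```python
def fed_avg(client_weights, client_sizes):
    """Combine per-client weight vectors into a global model, weighting
    each client by its local dataset size. Only weights leave the
    clients; raw training data stays local."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
        for j in range(dim)
    ]
```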
Systematic review of model/data processes to ensure performance, fairness, security, and policy compliance.
Constraining model outputs into a schema used to call external APIs/tools safely and deterministically.
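A sketch of the validation-before-dispatch side of this: check a model-produced call against a per-tool schema before invoking anything. The registry format and tool here are hypothetical, not a real API.

```python
# Hypothetical tool registry: required argument types plus the callable.
TOOLS = {
    "get_weather": {
        "required": {"city": str},
        "fn": lambda city: f"Weather in {city}: sunny",
    },
}

def dispatch(call):
    """Validate a model-produced call dict against the tool's schema,
    then invoke the tool; reject unknown tools or malformed arguments."""
    spec = TOOLS.get(call.get("name"))
    if spec is None:
        raise ValueError(f"unknown tool: {call.get('name')!r}")
    args = call.get("arguments", {})
    for key, typ in spec["required"].items():
        if not isinstance(args.get(key), typ):
            raise ValueError(f"argument {key!r} must be {typ.__name__}")
    return spec["fn"](**args)
```

Rejecting malformed calls before execution is what makes tool use deterministic and safe even when model output is not.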
A measure of a model class’s expressive capacity based on its ability to shatter datasets.
Measures a model’s ability to fit random noise; used to bound generalization error.
Central catalog of deployed and experimental models.
Logged record of model inputs, outputs, and decisions.
Inferring sensitive features of training data.
Average value under a distribution.
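A quick Monte Carlo illustration: estimate an expectation by averaging over samples. The function names are illustrative; the analytic value E[X²] = 1/3 for X ~ Uniform(0, 1) is standard.

```python
import random

def expectation(f, sample, n=100_000):
    """Monte Carlo estimate of E[f(X)]: average f over n draws from X."""
    return sum(f(sample()) for _ in range(n)) / n

# E[X^2] for X ~ Uniform(0, 1) is 1/3; the estimate converges to it.
random.seed(0)
est = expectation(lambda x: x * x, random.random)
```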
Review process before deployment.
Process for managing AI failures.
Governance of model changes.
AI used without governance approval.
Learning action mapping directly from demonstrations.
Ensuring robots do not harm humans.