Risk Stratification
Intermediate: Grouping patients by predicted outcomes.
Risk stratification means sorting patients into groups based on how likely they are to get sicker. For example, doctors can use information such as age, medical history, and test results to identify which patients are at high risk for complications. It is similar to how insurance companies group applicants by how likely they are to file a claim.
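The idea above can be sketched as a simple point-based scoring rule. Everything here is hypothetical for illustration: the thresholds, weights, and tier cutoffs are made up and do not come from any clinical guideline.

```python
# A minimal sketch of rule-based risk stratification.
# All weights and cutoffs below are illustrative assumptions, not clinical values.

def risk_tier(age, chronic_conditions, abnormal_labs):
    """Assign a patient to a low/medium/high risk tier using a simple point score."""
    score = 0
    if age >= 65:
        score += 2                 # older patients accrue extra points
    score += chronic_conditions    # one point per chronic condition
    score += 2 * abnormal_labs     # abnormal test results weigh more
    if score >= 5:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

print(risk_tier(age=72, chronic_conditions=2, abnormal_labs=1))  # high
print(risk_tier(age=40, chronic_conditions=0, abnormal_labs=0))  # low
```

In practice the score would come from a fitted statistical or machine-learning model rather than hand-picked weights, but the output is the same kind of object: a small set of ordered risk tiers.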
Related entries:
- Quantifying financial risk.
- Central log of AI-related risks.
- Classifying models by impact level.
- Risk of incorrect financial models.
- AI used in sensitive domains requiring compliance.
- Existential risk from AI systems.
- European regulation classifying AI systems by risk.
- US framework for AI risk governance.
- Probability of treatment assignment given covariates.
- A hidden variable influences both cause and effect, biasing naive estimates of causal impact.
- Minimizing average loss on training data; can overfit when data is limited or biased.
- Categorizing AI applications by impact and regulatory risk.
- Maximum expected loss under normal conditions.
- Risk threatening humanity's survival.
- Framework for identifying, measuring, and mitigating model risks.
- Required human review for high-risk decisions.
- International AI risk standard.
- Models estimating recidivism risk.
- Simulating adverse scenarios.
- Predicting borrower default risk.
- Privacy risk analysis under GDPR-like laws.
- Restricting distribution of powerful models.
- A function measuring prediction error (and sometimes calibration), guiding gradient-based optimization.
- How well a model performs on new data drawn from the same (or similar) distribution as training.
- Randomly zeroing activations during training to reduce co-adaptation and overfitting.
- Breaking documents into pieces for retrieval; chunk size/overlap strongly affect RAG quality.
- A formal privacy framework ensuring outputs do not reveal much about any single individual's data contribution.
- Training across many devices/silos without centralizing raw data; aggregates updates, not data.
- Samples from the k highest-probability tokens to limit unlikely outputs.
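One of the entries above, randomly zeroing activations during training (dropout), can be sketched in a few lines. This is the common "inverted dropout" variant, assumed here for illustration: survivors are scaled by 1/(1-p) so the expected activation is unchanged at inference time.

```python
import random

def dropout(activations, p=0.5, rng=random):
    """Zero each activation with probability p; scale survivors by 1/(1-p)
    (inverted dropout) so the expected value of each unit is unchanged."""
    return [0.0 if rng.random() < p else a / (1.0 - p) for a in activations]

print(dropout([1.0, 2.0, 3.0], p=0.0))  # p=0 keeps every activation unchanged
```

Dropout is applied only during training; at evaluation time the layer is a no-op.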
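The last entry, sampling from the k highest-probability tokens (top-k sampling), can likewise be sketched directly from its definition. The function name and signature here are illustrative, not from any particular library.

```python
import math
import random

def top_k_sample(logits, k, rng=random):
    """Sample a token index from the k highest-scoring entries of `logits`."""
    # Indices of the k largest logits.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over only the surviving logits (shift by the max for stability).
    m = max(logits[i] for i in top)
    weights = [math.exp(logits[i] - m) for i in top]
    return rng.choices(top, weights=weights, k=1)[0]

print(top_k_sample([0.1, 5.0, -1.0], k=1))  # k=1 always picks the argmax: 1
```

With k=1 this degenerates to greedy decoding; larger k trades determinism for diversity while still excluding the long tail of unlikely tokens.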