Results for "testing framework"
Formal framework for sequential decision-making under uncertainty.
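Such a sequential-decision framework can be sketched with value iteration on a tiny made-up two-state, two-action problem; the transition and reward tables below are invented purely for illustration.

```python
import numpy as np

# Hypothetical 2-state, 2-action problem, invented for illustration.
# P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0
    [[0.0, 1.0], [0.5, 0.5]],   # transitions from state 1
])
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])
gamma = 0.9                      # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Repeat the Bellman optimality backup until values stop changing."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * P @ V    # Q[s, a] = R[s, a] + gamma * sum_s' P[s,a,s'] V[s']
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, gamma)
```

The converged values are bounded by max-reward / (1 - gamma), and the greedy policy is read off the final Q-table.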
GNN framework where nodes iteratively exchange and aggregate messages from neighbors.
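One such exchange-and-aggregate round can be sketched on a toy graph; the adjacency list, features, and mean aggregation below are assumptions for illustration, with no learned weights.

```python
import numpy as np

# Hypothetical 3-node undirected graph and node features, invented for illustration.
adj = {0: [1, 2], 1: [0], 2: [0]}
H = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])

def message_passing_step(H, adj):
    """Each node averages its neighbors' features, then mixes with its own."""
    out = np.empty_like(H)
    for v, nbrs in adj.items():
        msg = H[nbrs].mean(axis=0)       # aggregate messages from neighbors
        out[v] = 0.5 * H[v] + 0.5 * msg  # simple combine step (no learned weights)
    return out

H1 = message_passing_step(H, adj)
```

Real GNN layers replace the fixed averaging and mixing with learned transformations, but the neighbor-aggregation skeleton is the same.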
European regulation classifying AI systems by risk.
Mathematical framework for modeling and controlling dynamical systems over time.
Learning by minimizing prediction error.
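Minimizing prediction error can be shown with plain gradient descent on squared error for a 1-D linear fit; the data points and step size below are made up for illustration.

```python
# Toy dataset generated by y = 2x + 1 (invented for illustration).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # gradients of mean squared prediction error w.r.t. w and b
    gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * gw
    b -= lr * gb
```

After enough steps the parameters recover the generating slope and intercept.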
Multi-agent setting in which agents coordinate to optimize collective outcomes.
Acting to minimize surprise or free energy.
System-level design for general intelligence.
Minimizing average loss on training data; can overfit when data is limited or biased.
A conceptual framework describing error as the sum of systematic error (bias) and sensitivity to data (variance).
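The decomposition can be checked empirically: for an estimator's error around a known target, mean squared error equals squared bias plus variance. The sample-mean estimator and simulation sizes below are assumptions for illustration.

```python
import numpy as np

# Repeatedly draw small samples and estimate the population mean (true value 0).
rng = np.random.default_rng(0)
true_mean = 0.0
estimates = np.array([rng.normal(true_mean, 1.0, size=5).mean()
                      for _ in range(20000)])

mse = np.mean((estimates - true_mean) ** 2)       # total error
bias2 = (estimates.mean() - true_mean) ** 2       # systematic error, squared
var = estimates.var()                             # sensitivity to the sample

# The identity mse == bias2 + var holds exactly for these definitions.
```

This is an algebraic identity for squared error, not an approximation, which is what makes the decomposition a useful diagnostic.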
A model that assigns probabilities to sequences of tokens; often trained by next-token prediction.
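The simplest instance of assigning probabilities to token sequences is a count-based bigram model; the toy corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Made-up toy corpus (not from the source).
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each token follows each preceding token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_probs(prev):
    """Conditional distribution P(next token | previous token)."""
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

# P(next | "the"): "cat" with probability 2/3, "mat" with probability 1/3
```

Neural language models replace the count table with a learned function, but the training objective is the same next-token prediction.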
A high-priority instruction layer setting overarching behavior constraints for a chat model.
Stepwise reasoning patterns that can improve multi-step tasks; often handled implicitly or summarized for safety/privacy.
Architecture that retrieves relevant documents (e.g., from a vector DB) and conditions generation on them to reduce hallucinations.
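The retrieval step can be sketched with bag-of-words vectors and cosine similarity over a tiny in-memory corpus; the document texts and scoring choices are assumptions for illustration, standing in for a real vector DB.

```python
import math

# Hypothetical mini-corpus, invented for illustration.
docs = {
    "d1": "reset your password from the settings page",
    "d2": "the api returns json by default",
    "d3": "password rules require twelve characters",
}

def bow(text):
    """Bag-of-words term counts."""
    v = {}
    for w in text.split():
        v[w] = v.get(w, 0) + 1
    return v

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb)

def retrieve(query, k=2):
    """Return the k document ids most similar to the query."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(docs[d])), reverse=True)[:k]

# The retrieved texts would then be prepended to the generation prompt
# so the model answers grounded in them.
```

Production systems swap the bag-of-words vectors for learned embeddings and an approximate-nearest-neighbor index, but the retrieve-then-condition flow is the same.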
Reinforcement learning from human feedback: uses preference data to train a reward model and optimize the policy.
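The reward-model stage can be sketched on a single preference pair, assuming the common Bradley-Terry objective: minimize the negative log-probability that the chosen response outscores the rejected one. The reward values here are invented for illustration.

```python
import math

def preference_loss(r_chosen, r_rejected):
    """-log sigmoid(r_chosen - r_rejected): the Bradley-Terry pairwise loss."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# The loss shrinks as the reward model scores the chosen response
# further above the rejected one, and equals log(2) when they tie.
```

Training sums this loss over many human-labeled pairs; the resulting reward model then scores rollouts during policy optimization.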
Feature attribution method grounded in cooperative game theory for explaining predictions in tabular settings.
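The game-theoretic attribution can be computed exactly for a tiny coalition game by enumerating all subsets; the payoff function below is invented for illustration, and practical tools approximate this sum for real models.

```python
from itertools import combinations
from math import factorial

players = [0, 1, 2]   # stand-ins for three features

def value(coalition):
    # Made-up additive payoffs: player 0 contributes 1, player 1
    # contributes 2, player 2 contributes nothing.
    return sum({0: 1.0, 1: 2.0, 2: 0.0}[p] for p in coalition)

def shapley(i):
    """Exact Shapley value: weighted marginal contribution over all subsets."""
    n = len(players)
    others = [p for p in players if p != i]
    total = 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += w * (value(S + (i,)) - value(S))
    return total
```

Because this toy game is additive, each player's Shapley value equals its standalone contribution, which makes the enumeration easy to sanity-check.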
Policies and practices for approving, monitoring, auditing, and documenting models in production.
System for running consistent evaluations across tasks, versions, prompts, and model settings.
Allows gradients to bypass layers, enabling very deep networks.
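The bypass can be sketched as output = x + f(x): the identity path carries the signal even when the sublayer f contributes almost nothing. Shapes and weight scales below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(4, 4))    # tiny weights, so f(x) is near zero

def f(x):
    """A small ReLU sublayer standing in for the residual branch."""
    return np.maximum(0.0, x @ W)

def residual_block(x):
    return x + f(x)                         # skip connection adds the input back

x = rng.normal(size=(4,))
y = residual_block(x)
# With near-zero sublayer weights the block is close to the identity map,
# which is why stacking many such blocks stays trainable.
```

During backpropagation the identity term contributes a gradient of 1 through every block, so the signal cannot vanish along the skip path.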
Routes inputs to subsets of parameters for scalable capacity.
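Routing can be sketched with top-1 gating: a learned score picks a single expert per input, so only that expert's parameters are used. The gate and expert weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d = 3, 4
gate_W = rng.normal(size=(d, n_experts))                 # gating weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_forward(x):
    """Score experts, route to the best one, run only that expert."""
    scores = x @ gate_W                 # one gating logit per expert
    e = int(np.argmax(scores))          # top-1 routing decision
    return e, x @ experts[e]            # only the chosen expert computes

x = rng.normal(size=(d,))
expert_id, y = moe_forward(x)
```

Total parameter count grows with the number of experts while per-input compute stays roughly constant, which is the scalability argument for this design.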
Separates planning from execution in agent architectures.
Categorizing AI applications by impact and regulatory risk.
Central catalog of deployed and experimental models.
Probabilistic energy-based neural network with hidden variables.
GNN using attention to weight neighbor contributions dynamically.
Graphical model expressing factorization of a probability distribution.
Autoencoder using probabilistic latent variables and KL regularization.
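The KL regularizer has a closed form when the encoder outputs a diagonal Gaussian and the prior is standard normal; the function below computes that term, with inputs invented for illustration.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ) = -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

# The KL term is zero exactly when the approximate posterior matches
# the prior (mu = 0, sigma = 1), and grows as the encoder drifts away.
```

Training adds this term to the reconstruction loss, pulling latent codes toward the prior so that sampling from it yields plausible decodes.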
Exact likelihood generative models using invertible transforms.
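Exact likelihood via invertible transforms can be sketched with a one-layer affine flow x = a*z + b over a standard-normal base, using the change-of-variables formula; the parameter values are invented for illustration.

```python
import numpy as np

def flow_log_prob(x, a=2.0, b=1.0):
    """Log-density of x under the flow x = a*z + b, z ~ N(0, 1)."""
    z = (x - b) / a                                # invert the transform
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))   # log N(z; 0, 1)
    return log_base - np.log(abs(a))               # subtract log |dx/dz|

# The result matches the density of N(b, a^2) evaluated at x, since an
# affine flow of a Gaussian base is itself Gaussian.
```

Deep flows stack many such invertible layers; the log-likelihood is the base log-density plus the sum of each layer's log-Jacobian term.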
Two-network setup in which a generator learns to fool a discriminator.
Models time evolution via hidden states.
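Hidden-state time evolution can be sketched with a minimal recurrent update h_t = tanh(W h_{t-1} + U x_t); the weights, sizes, and input sequence below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(3, 3))   # hidden-to-hidden weights
U = rng.normal(scale=0.5, size=(3, 2))   # input-to-hidden weights

def step(h, x):
    """One time step: the new hidden state summarizes all inputs so far."""
    return np.tanh(W @ h + U @ x)

h = np.zeros(3)                          # initial hidden state
for x in rng.normal(size=(5, 2)):        # run over a short input sequence
    h = step(h, x)
```

The tanh keeps every hidden coordinate in (-1, 1), and the same weights are reused at every step, which is what lets the state carry information across time.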