Results for "probability mapping"
Models that learn to generate samples resembling training data.
Learns the score (∇ log p(x)) for generative sampling.
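The score is the gradient of the log-density with respect to the input. A minimal sketch, using the standard normal (where the score is analytically -x) and a finite difference to confirm it; all numbers are illustrative:

```python
import math

# For a standard normal, log p(x) = -x**2/2 - log(sqrt(2*pi)),
# so the score d/dx log p(x) is exactly -x.
def log_p(x):
    return -x**2 / 2 - math.log(math.sqrt(2 * math.pi))

# Central finite difference of log p(x); for a quadratic this is
# exact up to floating-point error.
def score(x, h=1e-5):
    return (log_p(x + h) - log_p(x - h)) / (2 * h)

print(round(score(1.5), 4))  # → -1.5
```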
Generative model that learns to reverse a gradual noise process.
Autoencoder using probabilistic latent variables and KL regularization.
Predicting future values from past observations.
Identifying abrupt changes in the data-generating process.
Measure of spread around the mean.
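A minimal sketch of population variance, the mean squared deviation from the mean (toy data):

```python
# Population variance: average squared distance of each value from the mean.
def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

print(variance([2, 4, 4, 4, 5, 5, 7, 9]))  # → 4.0
```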
Eliminating variables by integrating over them.
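For discrete variables, integrating out becomes summing out: a sketch marginalizing a made-up joint p(x, y) over y by summing that axis away:

```python
# Toy joint distribution p(x, y): rows index x, columns index y.
joint = [
    [0.10, 0.20],   # p(x=0, y=0), p(x=0, y=1)
    [0.30, 0.40],   # p(x=1, y=0), p(x=1, y=1)
]

# Marginal p(x): sum over the y axis (each row).
p_x = [sum(row) for row in joint]
print(p_x)
```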
Optimization under uncertainty.
Assigning a role or identity to the model.
Sampling multiple outputs and selecting the consensus answer.
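The selection step is a majority vote over the sampled answers. A sketch with stub data standing in for real model samples (no model is called):

```python
from collections import Counter

def majority_vote(answers):
    # Return the most frequent answer among the sampled outputs.
    return Counter(answers).most_common(1)[0][0]

# Five hypothetical samples drawn from the same prompt (stub data).
samples = ["42", "41", "42", "42", "40"]
print(majority_vote(samples))  # → 42
```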
Central log of AI-related risks.
Enables external computation or lookup.
Storing results to reduce compute.
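A minimal sketch using Python's built-in memoization: each distinct input is computed once, so the naive exponential recursion becomes linear:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Repeated calls with the same n are served from the cache,
    # avoiding the exponential blow-up of naive recursion.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))  # → 102334155
```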
Inferring the agent’s internal state from noisy sensor data.
Ensuring models comply with lending fairness laws.
Maximum loss expected at a given confidence level under normal market conditions.
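A sketch of 95% historical VaR as the 5th percentile of an empirical return distribution, with losses reported as positive numbers; the returns below are made up:

```python
# Historical 95% VaR: the loss that the worst 5% of outcomes exceed.
def var_95(returns):
    ordered = sorted(returns)
    idx = int(0.05 * len(ordered))   # index of the 5%-worst outcome
    return -ordered[idx]             # report the loss as a positive number

returns = [-0.08, -0.03, -0.01, 0.00, 0.01, 0.01, 0.02, 0.02, 0.03, 0.04,
           0.00, 0.01, -0.02, 0.02, 0.01, 0.03, -0.01, 0.02, 0.00, 0.01]
print(var_95(returns))  # → 0.03
```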