Results for "market making"
A discipline ensuring AI systems are fair, safe, transparent, privacy-preserving, and accountable throughout their lifecycle.
System design where humans validate or guide model outputs, especially for high-stakes decisions.
Tendency to trust automated suggestions even when incorrect; mitigated by UI design, training, and checks.
A system that perceives state, selects actions, and pursues goals—often combining LLM reasoning with tools and memory.
Mechanisms for retaining context across turns/sessions: scratchpads, vector memories, structured stores.
Methods for breaking goals into steps; can be classical (A*, STRIPS) or LLM-driven with tool calls.
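The entry above names A* as one classical planning method. A minimal sketch of A* on a 4-connected grid, with an admissible Manhattan-distance heuristic (the grid, walls, and function name here are illustrative, not from the source):

```python
import heapq

def a_star(start, goal, walls, width, height):
    """Return a list of cells from start to goal, or None if unreachable."""
    def h(cell):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls
                    and cost[current] + 1 < cost.get(nxt, float("inf"))):
                cost[nxt] = cost[current] + 1
                came_from[nxt] = current
                heapq.heappush(frontier, (cost[nxt] + h(nxt), nxt))
    return None
```

An LLM-driven planner would replace this fixed search with model-proposed steps and tool calls, but the goal decomposition idea is the same.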
Measures a hypothesis class’s capacity to fit random noise; used to bound generalization error.
A theoretical framework analyzing what classes of functions can be learned, how efficiently, and with what guarantees.
A measure of randomness or uncertainty in a probability distribution.
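The definition above can be made concrete with Shannon entropy, H(X) = -Σ p(x) log₂ p(x), which is maximal for a uniform distribution and zero for a deterministic one (a minimal sketch in bits):

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

entropy([0.5, 0.5])        # fair coin: 1 bit
entropy([1.0])             # deterministic outcome: 0 bits
entropy([0.25] * 4)        # fair 4-sided die: 2 bits
```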
Systematic error introduced by simplifying assumptions in a learning algorithm.
Quantifies shared information between random variables.
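For two discrete variables, the shared information described above is I(X;Y) = Σ p(x,y) log₂ [p(x,y) / (p(x)p(y))]: zero under independence, and equal to the marginal entropy when one variable determines the other. A small sketch over a joint probability table (the table layout is illustrative):

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table joint[x][y]."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

mutual_information([[0.25, 0.25], [0.25, 0.25]])  # independent: 0 bits
mutual_information([[0.5, 0.0], [0.0, 0.5]])      # perfectly correlated: 1 bit
```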
Estimating parameters by maximizing likelihood of observed data.
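For a Bernoulli model this maximization has a closed form, the sample frequency; a brute-force grid search over the log-likelihood confirms it (the data here are illustrative):

```python
import math

def log_likelihood(p, data):
    """Bernoulli log-likelihood of 0/1 data under parameter p."""
    return sum(math.log(p if x else 1 - p) for x in data)

def bernoulli_mle(data):
    # Closed-form maximizer of the Bernoulli log-likelihood: the sample mean.
    return sum(data) / len(data)

data = [1, 0, 1, 1, 0, 1]
p_hat = bernoulli_mle(data)  # 4/6
# Grid search agrees: no nearby p attains a higher likelihood.
grid = [i / 100 for i in range(1, 100)]
best = max(grid, key=lambda p: log_likelihood(p, data))
```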
Updating beliefs about parameters using observed evidence and prior distributions.
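The standard conjugate example of this updating: a Beta(a, b) prior over a coin's bias plus observed flips yields a Beta(a + heads, b + tails) posterior (function name and data are illustrative):

```python
def beta_bernoulli_update(a, b, flips):
    """Conjugate update: Beta(a, b) prior + 0/1 flips -> Beta posterior."""
    heads = sum(flips)
    return a + heads, b + len(flips) - heads

# Uniform prior Beta(1, 1), then observe 3 heads and 1 tail.
a_post, b_post = beta_bernoulli_update(1, 1, [1, 1, 0, 1])
posterior_mean = a_post / (a_post + b_post)  # (1 + 3) / (2 + 4) = 2/3
```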
Optimization with multiple local minima/saddle points; typical in neural networks.
Attention mechanisms that reduce the quadratic time and memory cost of full self-attention, e.g. via sparsity patterns, low-rank approximation, or kernel feature maps.
All possible configurations an agent may encounter.
Strategy mapping states to actions.
Fundamental recursive relationship defining optimal value functions.
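The recursion above is the Bellman optimality backup, V(s) = maxₐ [R(s,a) + γ Σ_s' P(s'|s,a) V(s')]; iterating it to a fixed point is value iteration. A sketch on a tiny two-state MDP (states, actions, and rewards are illustrative):

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality backup until successive values converge."""
    n = len(R)
    V = [0.0] * n
    while True:
        V_new = [
            max(R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(n))
                for a in range(len(R[s])))
            for s in range(n)
        ]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new

# Two states, two actions ("stay", "move"); state 1 pays reward 1.
P = [[[1.0, 0.0], [0.0, 1.0]],   # from state 0: stay -> 0, move -> 1
     [[0.0, 1.0], [1.0, 0.0]]]   # from state 1: stay -> 1, move -> 0
R = [[0.0, 0.0],
     [1.0, 1.0]]
V = value_iteration(P, R)  # V[1] -> 1/(1 - 0.9) = 10, V[0] -> 0.9 * 10 = 9
```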
Extending agents with long-term memory stores.
Coordination arising without explicit programming.
Models evaluating and improving their own outputs.
Framework for identifying, measuring, and mitigating model risks.
Ensuring decisions can be explained and traced.
Central catalog of deployed and experimental models.
Logged record of model inputs, outputs, and decisions.
Legal or policy requirement to explain AI decisions.
Embedding signals to prove model ownership.
GNN framework where nodes iteratively exchange and aggregate messages from neighbors.
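One round of the neighbor exchange described above, with sum aggregation and an additive update (a minimal sketch; real GNNs use learned message and update functions):

```python
def message_passing_round(features, adjacency):
    """Each node sums its neighbors' feature vectors into its own state."""
    new_features = []
    for node, feat in enumerate(features):
        msgs = [features[nbr] for nbr in adjacency[node]]
        agg = [sum(vals) for vals in zip(*msgs)] if msgs else [0.0] * len(feat)
        new_features.append([f + m for f, m in zip(feat, agg)])
    return new_features

adj = {0: [1], 1: [0, 2], 2: [1]}   # path graph 0 - 1 - 2
feats = [[1.0], [2.0], [3.0]]
out = message_passing_round(feats, adj)  # [[3.0], [6.0], [5.0]]
```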
Autoencoder using probabilistic latent variables and KL regularization.
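The KL regularizer mentioned above has a closed form when the posterior is a diagonal Gaussian N(μ, σ²) and the prior is standard normal: KL = -½ Σ (1 + log σ² - μ² - σ²). A sketch of just that term (the full encoder/decoder is omitted):

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, exp(log_var)) || N(0, 1)), summed over dimensions."""
    return -0.5 * sum(1 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

kl_to_standard_normal([0.0, 0.0], [0.0, 0.0])  # posterior equals prior: KL = 0
kl_to_standard_normal([1.0], [0.0])            # shifted mean: KL = 0.5
```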
Extension of convolution to graph domains using adjacency structure.
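A common instance of this extension is the symmetric-normalized propagation rule H' = D^(-1/2)(A + I)D^(-1/2)H, popularized by GCNs; a sketch of one propagation step, with learned weights and the nonlinearity omitted (the example graph is illustrative):

```python
import math

def gcn_propagate(adj, features):
    """One normalized propagation step over an adjacency matrix with self-loops."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in a_hat]
    out = []
    for i in range(n):
        out.append([
            sum(a_hat[i][j] / math.sqrt(deg[i] * deg[j]) * features[j][k]
                for j in range(n))
            for k in range(len(features[0]))
        ])
    return out

adj = [[0, 1], [1, 0]]      # two connected nodes
feats = [[1.0], [1.0]]
out = gcn_propagate(adj, feats)  # identical features are preserved
```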