Results for "label prediction"
A shift in the distribution of a model's outputs over time.
Differences between training and deployed patient populations.
Training objective where the model predicts the next token given previous tokens (causal modeling).
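The next-token objective can be sketched by showing how a token sequence becomes (input, target) pairs: targets are simply the inputs shifted left by one. The token IDs below are illustrative, not from any real tokenizer.

```python
# Forming (input, target) pairs for causal (next-token) modeling.
tokens = [5, 12, 7, 3, 9]

# Each position predicts the token that follows it, so targets are the
# input sequence shifted left by one position.
inputs = tokens[:-1]   # [5, 12, 7, 3]
targets = tokens[1:]   # [12, 7, 3, 9]

pairs = list(zip(inputs, targets))
```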
Pixel-wise classification of image regions.
A model that assigns probabilities to sequences of tokens; often trained by next-token prediction.
Probabilistic graphical model for structured prediction.
Monte Carlo method for state estimation.
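A minimal bootstrap-filter sketch for a 1-D state, assuming a Gaussian random-walk motion model and Gaussian observation noise (both assumptions, not part of the definition). The function name `particle_filter_step` is illustrative.

```python
import math
import random

def particle_filter_step(particles, observation, motion_noise=0.5, obs_noise=1.0):
    """One bootstrap-filter step: predict, weight, resample.
    A 1-D sketch; the random-walk motion model is an assumption."""
    # Predict: propagate each particle through the motion model.
    moved = [p + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: Gaussian likelihood of the observation given each particle.
    weights = [math.exp(-0.5 * ((observation - p) / obs_noise) ** 2) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Repeated steps concentrate the particle cloud around states consistent with the observations, which is how the filter tracks the hidden state.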
Low-latency prediction per request.
Learning by minimizing prediction error.
Predicting case success probabilities.
Deep learning system for protein structure prediction.
A mismatch between training and deployment data distributions that can degrade model performance.
A structured collection of examples used to train/evaluate models; quality, bias, and coverage often dominate outcomes.
Penalizes confident wrong predictions heavily; standard for classification and language modeling.
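The "penalizes confident wrong predictions heavily" property follows directly from the negative log: loss grows without bound as the probability assigned to the true class approaches zero. A minimal sketch:

```python
import math

def cross_entropy(probs, true_idx):
    """Negative log-probability assigned to the true class."""
    return -math.log(probs[true_idx])

# A confident wrong prediction incurs far higher loss than an unsure one.
confident_wrong = cross_entropy([0.98, 0.01, 0.01], true_idx=1)  # ~4.61
unsure = cross_entropy([0.40, 0.30, 0.30], true_idx=1)           # ~1.20
```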
Measure of consistency across labelers; low agreement indicates ambiguous tasks or poor guidelines.
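One common such measure for two annotators is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A stdlib-only sketch (the function name is mine):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each annotator's marginal label frequencies.
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it is preferred over raw percent agreement.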
Selecting the most informative samples to label (e.g., uncertainty sampling) to reduce labeling cost.
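Uncertainty sampling can be sketched by ranking unlabeled examples by the entropy of the model's predicted class distribution and labeling the top-k. The function name and the probability rows below are illustrative.

```python
import math

def most_uncertain(prob_rows, k):
    """Indices of the k examples with the highest predictive entropy."""
    def entropy(row):
        return -sum(p * math.log(p) for p in row if p > 0)
    ranked = sorted(range(len(prob_rows)),
                    key=lambda i: entropy(prob_rows[i]), reverse=True)
    return ranked[:k]

# Predicted class probabilities for three unlabeled examples.
pool = [[0.99, 0.01], [0.55, 0.45], [0.70, 0.30]]
```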
Assigning labels per pixel (semantic) or per instance (instance segmentation) to map object boundaries.
Assigning category labels to images.
Train/test environment mismatch.
Learning from data by constructing “pseudo-labels” (e.g., next-token prediction, masked modeling) without manual annotation.
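For masked modeling, the pseudo-labels are the original tokens at randomly masked positions. A sketch under the assumption of a single mask token and a fixed masking probability (the 15% default mirrors common practice; the function name is mine):

```python
import random

def mask_tokens(tokens, mask_id, mask_prob=0.15, rng=None):
    """Build (corrupted_input, targets) for masked modeling.
    Targets hold the original token at masked positions, None elsewhere."""
    rng = rng or random.Random()
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            corrupted.append(mask_id)   # hide the token from the model
            targets.append(tok)         # ...and make it the label
        else:
            corrupted.append(tok)
            targets.append(None)
    return corrupted, targets
```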
A function measuring prediction error (and sometimes calibration), guiding gradient-based optimization.
Studying internal mechanisms or input influence on outputs (e.g., saliency maps, SHAP, attention analysis).
Networks with recurrent connections for sequences; largely supplanted by Transformers for many tasks.
Feature attribution method grounded in cooperative game theory for explaining individual predictions; widely applied in tabular settings.
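The game-theoretic definition can be illustrated by computing exact Shapley values over all coalitions. This enumerates an exponential number of subsets, so it is only a sketch of the definition on tiny feature sets, not how practical SHAP implementations work.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values by coalition enumeration.
    value_fn maps a frozenset of features to a payoff."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(len(others) + 1):
            for coal in combinations(others, r):
                s = len(coal)
                # Shapley weight for a coalition of size s.
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                # Marginal contribution of f to this coalition.
                total += weight * (value_fn(frozenset(coal) | {f})
                                   - value_fn(frozenset(coal)))
        phi[f] = total
    return phi
```

For an additive payoff, each feature's Shapley value equals its own contribution, which is a quick sanity check on the implementation.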
Local surrogate explanation method approximating model behavior near a specific input.
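The local-surrogate idea can be sketched in one dimension: sample points near the input, query the black-box model, and fit a simple linear model to those samples. This omits the proximity kernel that methods like LIME use; the function name and radius are assumptions.

```python
import random

def local_surrogate(predict, x, radius=0.1, n_samples=200, rng=None):
    """Fit a linear surrogate (slope, intercept) to `predict` near x."""
    rng = rng or random.Random()
    xs = [x + rng.uniform(-radius, radius) for _ in range(n_samples)]
    ys = [predict(v) for v in xs]
    # Ordinary least squares for a one-feature linear model.
    mx = sum(xs) / n_samples
    my = sum(ys) / n_samples
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return slope, my - slope * mx
```

Near a point, the surrogate's slope approximates the black-box model's local sensitivity, even when the model is nonlinear globally.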
Inputs crafted to cause model errors or unsafe behavior, often imperceptible in vision or subtle in text.
Systematic error introduced by simplifying assumptions in a learning algorithm.
Error due to sensitivity to fluctuations in the training dataset.
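The two error sources above can be made concrete with a Monte-Carlo simulation: a rigid estimator that ignores the data has high bias and zero variance, while a flexible one that follows a single data point has no bias and high variance. The setup (estimating the mean of a unit-variance Gaussian) and all names are illustrative.

```python
import random

def bias_variance(estimator, true_value, n_trials, sample_size, rng):
    """Monte-Carlo estimate of (bias^2, variance) for an estimator of the
    mean of a Normal(true_value, 1) population."""
    estimates = []
    for _ in range(n_trials):
        sample = [rng.gauss(true_value, 1.0) for _ in range(sample_size)]
        estimates.append(estimator(sample))
    mean_est = sum(estimates) / n_trials
    bias_sq = (mean_est - true_value) ** 2
    variance = sum((e - mean_est) ** 2 for e in estimates) / n_trials
    return bias_sq, variance

# Rigid estimator: ignores the data entirely (high bias, zero variance).
rigid = lambda sample: 0.0
# Flexible estimator: follows one data point (no bias, high variance).
flexible = lambda sample: sample[0]
```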
GNN framework where nodes iteratively exchange and aggregate messages from neighbors.
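One round of the exchange-and-aggregate pattern can be sketched with mean aggregation over an adjacency list. Real GNN layers apply learned weight matrices and nonlinearities; this unweighted average is a deliberate simplification.

```python
def message_passing_round(features, neighbors):
    """One round of mean-aggregation message passing.
    `features` maps node -> scalar feature; `neighbors` maps node -> list
    of neighboring nodes. Each node averages its own feature with the mean
    of its neighbors' features."""
    new = {}
    for node, feat in features.items():
        msgs = [features[n] for n in neighbors[node]]
        aggregated = sum(msgs) / len(msgs) if msgs else 0.0
        new[node] = 0.5 * (feat + aggregated)
    return new

# A path graph a - b - c with a feature spike at node c.
features = {'a': 0.0, 'b': 0.0, 'c': 4.0}
neighbors = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
```

Repeating the round diffuses information along edges, so multi-hop neighbors eventually influence each node's representation.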
Graphs containing multiple node or edge types with different semantics.