Results for "timing labels"
Aligns transcripts with audio timestamps.
Learning from data by constructing “pseudo-labels” (e.g., next-token prediction, masked modeling) without manual annotation.
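The pseudo-label idea above can be sketched concretely. This is a minimal illustration of next-token prediction targets derived from raw text; the toy tokens and helper name are hypothetical, not from the source.

```python
def next_token_pairs(tokens):
    """Turn a raw token sequence into (input, target) pairs with no manual labels.

    Each prefix of the sequence serves as the input, and the token that
    follows it is the automatically constructed "pseudo-label".
    """
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

# Example: three tokens yield two training pairs.
pairs = next_token_pairs(["the", "cat", "sat"])
# pairs == [(["the"], "cat"), (["the", "cat"], "sat")]
```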
Minimizing average loss on training data; can overfit when data is limited or biased.
Human or automated process of assigning targets; quality, consistency, and guidelines matter heavily.
A structured collection of examples used to train/evaluate models; quality, bias, and coverage often dominate outcomes.
A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
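A minimal sketch of such a scalar objective, assuming a toy regression setup: mean squared error over the data plus an L2 regularization term. The function and parameter names are illustrative.

```python
def objective(preds, targets, weights, lam=0.01):
    """Expected loss over data plus an L2 penalty on the weights.

    preds/targets: model outputs and ground truth (same length).
    weights: model parameters to regularize.
    lam: regularization strength (hypothetical default).
    """
    mse = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)
    l2 = lam * sum(w * w for w in weights)
    return mse + l2
```

The regularization term trades a small increase in training loss for simpler weights, which is one common guard against overfitting.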
Updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.
Systematic differences in model outcomes across groups; arises from data, labels, and deployment context.
Selecting the most informative samples to label (e.g., uncertainty sampling) to reduce labeling cost.
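Uncertainty sampling, the example mentioned above, can be sketched as follows: rank unlabeled samples by the entropy of the model's predicted class distribution and send the most uncertain ones to annotators. The helper names are illustrative.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def pick_most_uncertain(prob_rows, k=1):
    """Return indices of the k samples the model is least sure about.

    prob_rows: one predicted probability distribution per unlabeled sample.
    Higher entropy = more uncertain = more informative to label next.
    """
    ranked = sorted(range(len(prob_rows)),
                    key=lambda i: entropy(prob_rows[i]),
                    reverse=True)
    return ranked[:k]

# The near-uniform prediction (index 1) is selected for labeling first.
chosen = pick_most_uncertain([[0.99, 0.01], [0.5, 0.5], [0.9, 0.1]], k=1)
```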
Training a smaller “student” model to mimic a larger “teacher,” often improving efficiency while retaining performance.
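One common way to train the student is to match its temperature-softened output distribution to the teacher's. A minimal sketch, assuming raw logits from both models and a KL-divergence matching loss; the names and default temperature are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature that smooths the distribution when > 1."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the student's softened distribution to the teacher's.

    Zero when the student exactly reproduces the teacher's distribution;
    grows as the student's predictions drift away from it.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student's current predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The higher temperature exposes the teacher's relative preferences among wrong classes ("dark knowledge"), which is often more informative for the student than hard labels alone.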
Assigning labels per pixel (semantic segmentation) or per object instance (instance segmentation) to map object boundaries.
Assigning labels per pixel (semantic segmentation) or per object instance (instance segmentation) to map object boundaries.
Measures a model’s ability to fit random noise; used to bound generalization error.
Measures divergence between true and predicted probability distributions.
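A minimal sketch of one such measure, cross-entropy H(p, q) = -Σ pᵢ log qᵢ, which is minimized exactly when the predicted distribution matches the true one. The function name and example values are illustrative.

```python
import math

def cross_entropy(true_p, pred_p):
    """Cross-entropy between a true distribution and a predicted one.

    For a one-hot true distribution this reduces to -log of the
    probability the model assigned to the correct class.
    """
    return -sum(p * math.log(q) for p, q in zip(true_p, pred_p) if p > 0)

# One-hot target, model puts 0.8 on the correct class: loss is -log(0.8).
loss = cross_entropy([1.0, 0.0], [0.8, 0.2])
```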
Models that learn to generate samples resembling training data.
Assigning category labels to images.
Pixel-wise classification of image regions.
Using limited human feedback to guide large models.