GNN using attention to weight neighbor contributions dynamically.
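A minimal numpy sketch of the idea (single center node, single head; the plain dot-product score and the `attention_aggregate` name are illustrative stand-ins for a GAT's learned scoring function):

```python
import numpy as np

def attention_aggregate(h, neighbors):
    """Aggregate one node's neighbor features with attention weights.

    h: (n, d) node feature matrix; neighbors: indices of the node's
    neighbors. Scores are plain dot products for illustration (a GAT
    uses a learned scoring network); softmax turns them into weights.
    """
    center = h[0]                      # treat node 0 as the center node
    scores = h[neighbors] @ center     # one score per neighbor
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()           # softmax over neighbors
    return weights @ h[neighbors]      # weighted mix of neighbor features

h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
agg = attention_aggregate(h, [1, 2])   # node 0 attends over nodes 1 and 2
```

Neighbors more similar to the center receive higher weight, so the aggregation adapts per node rather than averaging uniformly.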
A single example included in the prompt to guide output.
Multiple examples included in the prompt to guide output.
Estimating robot position within a map.
Studying internal mechanisms or input influence on outputs (e.g., saliency maps, SHAP, attention analysis).
Local surrogate explanation method approximating model behavior near a specific input.
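A rough numpy sketch of the local-surrogate idea (no proximity kernel or feature selection, so only LIME-like; `local_surrogate` and the toy black box are assumptions):

```python
import numpy as np

def local_surrogate(f, x, rng, n=500, scale=0.1):
    """Fit a linear model to f near x: sample small perturbations,
    query the black box, and solve least squares. The coefficients
    approximate the model's local feature influence around x."""
    X = x + rng.normal(scale=scale, size=(n, x.size))
    y = np.array([f(xi) for xi in X])
    A = np.hstack([X, np.ones((n, 1))])       # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]                          # local feature weights

rng = np.random.default_rng(0)
f = lambda v: 3.0 * v[0] - 2.0 * v[1]         # stand-in black box
w = local_surrogate(f, np.array([1.0, 1.0]), rng)
```

For a genuinely nonlinear model, the recovered weights are only valid near the chosen input, which is the point of a local explanation.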
Hidden behavior activated by specific triggers, causing targeted mispredictions or undesired outputs.
Attacks that infer whether specific records were in training data, or reconstruct sensitive training examples.
Embedding signals to prove model ownership.
Graphs containing multiple node or edge types with different semantics.
AI limited to specific domains.
A measurable property or attribute used as model input (raw or engineered), such as age, pixel intensity, or token ID.
The internal space where learned representations live; operations here often correlate with semantics or generative factors.
A table summarizing classification outcomes, foundational for metrics like precision, recall, specificity.
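With the four cells of a binary confusion matrix in hand, those metrics are one-liners (the counts below are made up):

```python
# Binary confusion matrix cells (illustrative counts).
tp, fp, fn, tn = 80, 10, 20, 90

precision   = tp / (tp + fp)   # of predicted positives, how many were right
recall      = tp / (tp + fn)   # of actual positives, how many were found
specificity = tn / (tn + fp)   # of actual negatives, how many were found
```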
Randomly zeroing activations during training to reduce co-adaptation and overfitting.
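A minimal numpy sketch of inverted dropout (the common variant that rescales surviving activations at train time so inference needs no change):

```python
import numpy as np

def dropout(x, p, rng, training=True):
    """Zero each activation with probability p during training and
    rescale survivors by 1/(1-p) so the expected value is unchanged;
    at inference, pass activations through untouched."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(1000)
out = dropout(x, 0.5, rng)      # roughly half the entries become 0, rest 2.0
```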
Mechanism that computes context-aware mixtures of representations; scales well and captures long-range dependencies.
Attention where queries/keys/values come from the same sequence, enabling token-to-token interactions.
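A minimal single-head sketch in numpy, with queries/keys/values all taken directly from the input (a real layer would first project the sequence through learned Q/K/V weight matrices):

```python
import numpy as np

def self_attention(x):
    """x: (seq_len, d). Queries, keys, and values are all x itself here;
    each output row is a context-aware mixture of every input row."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)              # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ x                         # mix values by attention

x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = self_attention(x)                          # same shape as x
```

Because every token attends to every other token in one step, distance in the sequence does not limit which interactions the layer can capture.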
The text (and possibly other modalities) given to an LLM to condition its output behavior.
Crafting prompts to elicit desired behavior, often using role, structure, constraints, and examples.
Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
Controlled experiment comparing variants by random assignment to estimate causal effects of changes.
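The causal-effect estimate is usually paired with a significance check; a sketch of the standard two-proportion z-statistic for comparing conversion rates (counts are made up):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for comparing two conversion rates under random
    assignment, using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts 13% vs. A's 10% on 1000 users each.
z = two_proportion_z(100, 1000, 130, 1000)
```

A |z| above roughly 1.96 corresponds to significance at the 5% level for a two-sided test.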
Selecting the most informative samples to label (e.g., uncertainty sampling) to reduce labeling cost.
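Least-confidence uncertainty sampling is only a few lines; a sketch assuming the model exposes class probabilities for the unlabeled pool:

```python
import numpy as np

def least_confident(probs, k):
    """Pick the k unlabeled samples whose top predicted-class
    probability is lowest, i.e. where the model is least sure.
    probs: (n_samples, n_classes)."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

probs = np.array([[0.90, 0.10],
                  [0.55, 0.45],   # closest to a coin flip
                  [0.70, 0.30]])
picked = least_confident(probs, 1)
```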
Practices for operationalizing ML: versioning, CI/CD, monitoring, retraining, and reliable production management.
System for running consistent evaluations across tasks, versions, prompts, and model settings.
Constraining model outputs into a schema used to call external APIs/tools safely and deterministically.
Forcing predictable formats for downstream systems; reduces parsing errors and supports validation/guardrails.
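A minimal guardrail sketch using only the stdlib: parse the model's text as JSON and check it against an expected shape before it reaches downstream code (the `REQUIRED` schema is a hypothetical example; a real system might use a full schema validator):

```python
import json

# Hypothetical required shape for a model's output.
REQUIRED = {"name": str, "age": int}

def parse_structured(raw):
    """Parse raw model text as JSON and verify expected fields/types;
    reject anything malformed before it hits downstream systems."""
    obj = json.loads(raw)
    for field, typ in REQUIRED.items():
        if not isinstance(obj.get(field), typ):
            raise ValueError(f"bad or missing field: {field}")
    return obj

ok = parse_structured('{"name": "Ada", "age": 36}')
```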
A narrow hidden layer forcing compact representations.
Strategy mapping states to actions.
Expected return of taking an action in a state (and following the policy thereafter).
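This quantity (the action-value, or Q, function) is commonly learned with the tabular Q-learning update; a tiny sketch with made-up states and rewards:

```python
# One tabular Q-learning update:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
alpha, gamma = 0.5, 0.9

Q = {("s0", "left"): 0.0, ("s0", "right"): 0.0,
     ("s1", "left"): 1.0, ("s1", "right"): 2.0}

s, a, r, s_next = "s0", "right", 1.0, "s1"
best_next = max(Q[(s_next, an)] for an in ("left", "right"))
Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```

Here the estimate for ("s0", "right") moves halfway toward the bootstrapped target 1.0 + 0.9 * 2.0.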
Multiple agents interacting cooperatively or competitively.