Risk
Insecure Output Handling
Description
Model outputs are passed to users or downstream systems without filtering or validation. Because model output can carry attacker-influenced content, this can lead to cross-site scripting (XSS), policy violations, or data leakage.
Example
LLM output is rendered in a web application without HTML encoding, enabling stored XSS.
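The vulnerable pattern above can be sketched as follows; the payload string and template are hypothetical, but the mechanism is the same in any framework that interpolates raw model output into HTML:

```python
# Hypothetical model response containing an attacker-influenced payload,
# e.g. echoed from a poisoned document the model summarized.
model_output = "Here is your summary. <script>steal(document.cookie)</script>"

# VULNERABLE: interpolating raw model output into an HTML template stores
# the payload verbatim, so the victim's browser will execute it (stored XSS).
unsafe_page = f"<div class='answer'>{model_output}</div>"

# The script tag survives intact in the rendered page.
print("<script>" in unsafe_page)
```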
Assets Affected
Model Response
AI App
Mitigation
- Output encoding
- Content security policies
- Output sanitization and validation
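A minimal sketch of the first and third mitigations, using Python's standard-library `html.escape` for output encoding; the helper names and the length/control-character policy are illustrative assumptions, not a prescribed implementation:

```python
import html
import re

def validate_model_output(text: str, max_len: int = 4000) -> str:
    """Reject or clean outputs that violate simple structural policies (sketch)."""
    if len(text) > max_len:
        raise ValueError("model output exceeds length policy")
    # Illustrative policy: strip control characters the UI never needs.
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)

def sanitize_model_output(text: str) -> str:
    """Encode model output so the browser renders it as text, not markup."""
    return html.escape(text)

raw = "<img src=x onerror=alert(1)>"
safe = sanitize_model_output(validate_model_output(raw))
# '&lt;img src=x onerror=alert(1)&gt;' renders as literal text, not a tag.
print(safe)
```

Encoding belongs at the rendering boundary, not inside the model pipeline, so every response path is covered; a Content-Security-Policy header on the serving app adds a second layer if encoding is ever missed.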
Standards Mapping
- ISO 42001: A.8.2, A.6.2.6
- OWASP Top 10 for LLM: LLM05
- NIST AI RMF: MEASURE 2.4, MANAGE 2.4
- DASF v2: MODEL SERVING 10.2