3.2. Model Backdoor Insertion or Tampering

Risk

Description

Malicious code or vulnerabilities embedded in the model during training or fine-tuning, or unauthorized modification of model artifacts after they are produced.

Example

A compromised open-source library used in training injects a backdoor into the final model.

Assets Affected
  • Model files
  • AI Model

Mitigation
  • Secure the development environment
  • Use trusted, scanned libraries/frameworks
  • Implement model integrity checks (hashing, signatures; see the sketch after this list)
  • Conduct security testing and code reviews for AI components
  • Document AI system design and development
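
As a minimal illustration of the integrity-check mitigation, the sketch below hashes every file in a model artifact directory into a manifest at release time and re-verifies the manifest before deployment. The paths model_artifacts/ and model_manifest.json are hypothetical placeholders, not part of the SAIL framework; a production setup would also sign the manifest itself (for example with GPG or Sigstore) so that an attacker able to alter the artifacts cannot simply regenerate it.

```python
import hashlib
import json
from pathlib import Path


def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large model weights never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest(model_dir: Path, manifest_path: Path) -> None:
    """At release time, record a digest for every artifact the training run produced."""
    manifest = {
        str(p.relative_to(model_dir)): sha256_file(p)
        for p in sorted(model_dir.rglob("*"))
        if p.is_file()
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))


def verify_manifest(model_dir: Path, manifest_path: Path) -> bool:
    """Before deployment, fail closed: any added, missing, or altered file is a tampering signal."""
    expected = json.loads(manifest_path.read_text())
    actual = {
        str(p.relative_to(model_dir)): sha256_file(p)
        for p in sorted(model_dir.rglob("*"))
        if p.is_file()
    }
    return expected == actual


if __name__ == "__main__":
    # Hypothetical paths; keep the manifest outside the artifact directory
    # so writing it does not change the hashes it records.
    model_dir = Path("model_artifacts")
    manifest = Path("model_manifest.json")
    write_manifest(model_dir, manifest)
    assert verify_manifest(model_dir, manifest), "model artifacts have been modified"
```

Note that hash comparison only detects modification after the manifest is written; pairing it with a signed manifest and a secured build pipeline addresses tampering earlier in the lifecycle, which is what the remaining mitigations target.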
Standards Mapping
  • ISO 42001: A.6.2.4, A.7.2
  • OWASP Top 10 for LLM: LLM04
  • NIST AI RMF: MEASURE 2.7, MAP 4.2
  • DASF v2: MODEL 7.1