Risk
Intellectual Property (IP) Theft of Models
Description
Unauthorized copying, extraction, or reverse-engineering of proprietary trained models during the development or pre-deployment stages.
Example
An insider with read access to the model repository downloads a valuable proprietary model's weights and exfiltrates them before deployment-time controls (such as encryption, restricted serving endpoints, or access logging) are in place.
Assets Affected
Model files
AI Model
Model metadata
Mitigation
- Implement strong, least-privilege access controls on model artifacts and training environments, so that only the pipeline identities that need full weights can read them
- Encrypt models at rest, with keys held outside the artifact store so that a copy of the storage backend alone is not enough to recover the model
- Use watermarking or obfuscation techniques; watermarking does not prevent theft but supports attribution of a leaked model, while obfuscation raises the cost of reverse-engineering
- Enforce legal agreements/NDAs
- Monitor access to model repositories and alert on bulk downloads, downloads by human users rather than deployment pipelines, and pulls spanning many models in a short window
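As an added example, not part of this risk entry or of any of the referenced standards, the following Python script implements the monitoring mitigation as detection logic run over constructed model-registry access log lines. It flags weight downloads by principals outside the deployment pipeline, download volume above a per-hour budget, and pulls spanning many distinct models in one window. The JSON-lines log format, field names, principal names, weight-file suffixes and thresholds are all assumptions chosen for the illustration.

```python
"""Flag suspicious pulls of model weights from a registry access log.

Added example. The log format, field names, principal names and the
thresholds below are assumptions, not taken from any specific registry.
"""
import json
from collections import defaultdict
from datetime import datetime, timedelta

# Service identities expected to pull full model weights (assumed names).
DEPLOY_PRINCIPALS = {"svc-deploy-pipeline", "svc-model-server"}
# File suffixes treated as weight artifacts (assumed).
WEIGHT_SUFFIXES = (".safetensors", ".bin", ".pt", ".pth", ".gguf", ".onnx")
# Per-principal budget inside a sliding window (assumed thresholds).
WINDOW = timedelta(hours=1)
MAX_BYTES_PER_WINDOW = 5 * 1024 ** 3   # 5 GiB
MAX_DISTINCT_MODELS = 2

# Constructed sample log (JSON lines): one routine pipeline pull, one
# metadata read, then a human user pulling several full models late at night.
SAMPLE_LOG = """\
{"ts": "2024-05-01T09:00:00Z", "principal": "svc-deploy-pipeline", "action": "download", "artifact": "models/ranker-v7/weights.safetensors", "bytes": 2147483648, "src_ip": "10.0.5.12"}
{"ts": "2024-05-01T09:05:00Z", "principal": "alice", "action": "read_metadata", "artifact": "models/ranker-v7/card.json", "bytes": 2048, "src_ip": "10.0.9.30"}
{"ts": "2024-05-01T22:10:00Z", "principal": "bob", "action": "download", "artifact": "models/ranker-v7/weights.safetensors", "bytes": 2147483648, "src_ip": "10.0.9.41"}
{"ts": "2024-05-01T22:14:00Z", "principal": "bob", "action": "download", "artifact": "models/llm-base-13b/weights-00001.safetensors", "bytes": 4294967296, "src_ip": "10.0.9.41"}
{"ts": "2024-05-01T22:20:00Z", "principal": "bob", "action": "download", "artifact": "models/embedder-v3/weights.safetensors", "bytes": 1073741824, "src_ip": "10.0.9.41"}
{"ts": "2024-05-01T22:31:00Z", "principal": "bob", "action": "download", "artifact": "models/llm-base-13b/weights-00002.safetensors", "bytes": 4294967296, "src_ip": "10.0.9.41"}
"""


def parse(log_text):
    """Parse JSON lines into events sorted by timestamp."""
    events = []
    for line in log_text.splitlines():
        line = line.strip()
        if not line:
            continue
        ev = json.loads(line)
        ev["ts"] = datetime.fromisoformat(ev["ts"].replace("Z", "+00:00"))
        events.append(ev)
    events.sort(key=lambda e: e["ts"])
    return events


def model_name(artifact):
    """Assumed layout: models/<model-name>/<file>; fall back to the full path."""
    parts = artifact.split("/")
    return parts[1] if len(parts) > 1 else artifact


def is_weights_pull(ev):
    return ev["action"] == "download" and ev["artifact"].endswith(WEIGHT_SUFFIXES)


def analyze(log_text):
    """Return a list of findings; each has a rule name, principal and timestamp."""
    findings = []
    recent = defaultdict(list)  # principal -> weight pulls inside WINDOW
    for ev in parse(log_text):
        if not is_weights_pull(ev):
            continue
        p, ts = ev["principal"], ev["ts"].isoformat()
        # Rule 1: full weights pulled by something other than the deploy pipeline.
        if p not in DEPLOY_PRINCIPALS:
            findings.append({"rule": "unexpected_principal", "principal": p,
                             "ts": ts, "artifact": ev["artifact"]})
        # Maintain the sliding window for this principal (applies to every
        # identity, since a compromised service account can exfiltrate too).
        window = [e for e in recent[p] if ev["ts"] - e["ts"] <= WINDOW]
        window.append(ev)
        recent[p] = window
        # Rule 2: bytes pulled inside the window exceed the budget.
        total = sum(e["bytes"] for e in window)
        if total > MAX_BYTES_PER_WINDOW:
            findings.append({"rule": "volume", "principal": p, "ts": ts,
                             "bytes_in_window": total})
        # Rule 3: too many distinct models touched inside the window.
        models = {model_name(e["artifact"]) for e in window}
        if len(models) > MAX_DISTINCT_MODELS:
            findings.append({"rule": "breadth", "principal": p, "ts": ts,
                             "models": sorted(models)})
    return findings


if __name__ == "__main__":
    for f in analyze(SAMPLE_LOG):
        print(json.dumps(f))
```

The routine pipeline pull and the metadata read produce no findings; every finding points at the human user, first for pulling weights at all, then for volume once the second pull passes 5 GiB, then for breadth once a third model is touched. In a real deployment the thresholds would be tuned against the registry's baseline, and the alerts fed to the same pipeline that handles other insider-threat signals.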
Standards Mapping
- ISO 42001: A.6.2.4, A.10.2
- NIST AI RMF: MEASURE 2.7, MANAGE 1.4
- DASF v2: MODEL MANAGEMENT 8.2