Key SAIL Risks to Monitor
SAIL 1.1: Inadequate AI Policy
SAIL 1.2: Governance Misalignment
SAIL 1.10: Incomplete Threat Modeling for AI Systems
How These Risks Could Manifest (Lessons from Amazon Q)
AI coding assistants may not be covered by existing security policies
External contribution policies may not account for AI-specific risks
Traditional threat models may miss AI supply chain vectors like malicious prompts in PRs
Recommended Controls
Create AI-specific security policies that explicitly cover repository management and external contributions
Establish unified governance between security, AI, and development teams
Conduct threat modeling sessions that include supply chain, insider threats, and privilege escalation scenarios
Key SAIL Risks to Monitor
SAIL 2.1: Incomplete Asset Inventory
SAIL 2.3: Unidentified Third-Party AI Integrations
SAIL 2.5: Lack of Clarity on AI System Purpose and Criticality
How These Risks Could Manifest (Lessons from Amazon Q)
AI extensions used by thousands of developers may not be in your asset inventory
External contributors to AI tools represent third-party risk
AI coding assistants have privileged access but may be classified as low-risk developer tools
Recommended Controls
Build comprehensive inventory of all AI components including extensions, APIs, coding agents and more
Map all external access points and third-party integrations
Implement risk-based classification considering user base size and business impact
Key SAIL Risks to Monitor
SAIL 3.2: Model Backdoor Insertion or Tampering
SAIL 3.4: Insecure System Prompt Design
SAIL 3.10: Unvetted Use of Open-Source and Third-Party AI Components
SAIL 3.14: Exposed AI Access Credentials in Discovered Assets
How These Risks Could Manifest (Lessons from Amazon Q)
Malicious prompts can be inserted into AI systems via code contributions
AI assistants may have unrestricted access to execute system commands
External PRs to AI components may bypass security review
Repository permissions may grant excessive access to external parties
Recommended Controls
Implement mandatory security review for all code affecting AI behavior
Design AI systems with least-privilege access principles
Establish approval workflows requiring security sign-off for external contributions
Automate regular audits of repository permissions and access logs
Key SAIL Risks to Monitor
SAIL 4.2: Incomplete Red-Team Coverage
SAIL 4.3: Lack of Risk Assessment Process
SAIL 4.9: Limited Scope of Evasion Technique Testing
How These Risks Could Manifest (Lessons from Amazon Q)
Security testing may focus on traditional vulnerabilities, missing AI-specific attacks
Supply chain attacks through development tools may not be tested
Privilege escalation through repository access may be overlooked
Recommended Controls
Include supply chain compromise scenarios in red team exercises
Test specifically for prompt injection and privilege escalation
Simulate attacks through development pipeline vulnerabilities
Document and track all identified risks with remediation timelines
Key SAIL Risks to Monitor
SAIL 5.7: Insecure Output Handling
SAIL 5.14: Autonomous-Agent Misuse
SAIL 5.16: Cross-Domain Prompt Injection (XPIA)
SAIL 5.17: Policy-Violating Output
How These Risks Could Manifest (Lessons from Amazon Q)
Deployed AI systems may contain hidden malicious prompts
AI-generated commands may execute without validation
AI assistants may have autonomous execution capabilities
No runtime controls to prevent destructive operations
Recommended Controls
Deploy prompt validation and sanitization layers
Implement command filtering for high-risk operations
Require human approval for destructive actions
Enable runtime security policies that prevent policy violations
Key SAIL Risks to Monitor
SAIL 6.1: Autonomous Code Execution Abuse
SAIL 6.2: Unrestricted API/Tool Invocation
SAIL 6.6: Autonomous Resource Provisioning/Abuse
How These Risks Could Manifest (Lessons from Amazon Q)
AI systems may operate with full user privileges and filesystem access
Direct access to cloud CLIs and APIs without restrictions
No isolation between AI operations and production environments
AI can execute destructive commands like file deletion or resource termination
Recommended Controls
Deploy AI workloads in isolated sandboxed environments
Implement strict API allowlisting and rate limiting
Use separate, limited credentials for AI operations
Key SAIL Risks to Monitor
SAIL 7.1: Insufficient AI Interaction Logging
SAIL 7.2: Missing Real-time Security Alerts
SAIL 7.4: Inadequate AI Audit Trails
SAIL 7.6: Absence of AI-Specific Incident Response Plan
How These Risks Could Manifest (Lessons from Amazon Q)
Permission changes in AI repositories may go undetected
Malicious code insertions may not trigger security alerts
Audit trails may not capture AI-specific security events
Standard incident response may not cover AI compromise scenarios
Recommended Controls
Enable comprehensive logging for all AI-related activities
Configure real-time alerts for permission changes and anomalous behavior
Maintain immutable audit trails with full change history
Develop and test AI-specific incident response procedures
Establish clear escalation paths and disclosure protocols