In previous articles, we framed AI security as protecting confidentiality, integrity, and availability of the whole AI system, not just the model. We also mapped AI risks onto familiar secure development lifecycle (SDLC) thinking, treating data and model artifacts as first-class build inputs and outputs.This article examines the primary security risk for enterprise large language model (LLM) applications: prompt injection. This vulnerability occurs when the model fails to distinguish between data and instructions, allowing external prompts to seize control of the system. The risk is particular