AI is already embedded across security stacks, but much of its decision-making is opaque and detached from organizational context. In a call to action, SANS Fellow Mark Baggett urges defenders to build and tune their own lightweight AI utilities, using plain-English prompts to generate queries in jq, SQL, and regex, so analysts can speed investigations and reduce "translation" toil. The pitch: regain control over what data models learn from and how they score risk, while keeping human judgment at the center. The piece argues for pragmatic skills, namely baseline Python, an understanding of how models interpret inputs, and practical ML literacy, combined with disciplined habits: auditing where AI operates today, challenging outputs, automating one weekly task, and sharing lessons with peers. The aim isn't replacing tools, but counterbalancing blind spots and raising velocity. Baggett will expand on the approach at SANS 2026, framing AI as a force multiplier rather than a substitute for human responsibility.
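The "challenge outputs" habit pairs naturally with the prompt-to-query workflow: before an assistant-generated regex goes into a detection pipeline, an analyst can vet it against known-good and known-bad log samples. Below is a minimal sketch of that idea in baseline Python; the function name and the candidate pattern are illustrative, not from the source.

```python
import re


def vet_generated_pattern(pattern, should_match, should_not_match):
    """Check an AI-generated regex against analyst-supplied examples before use.

    Returns (ok, report): ok is False if the pattern fails to compile,
    misses any expected sample, or fires on any counterexample.
    """
    try:
        rx = re.compile(pattern)
    except re.error as exc:
        return False, f"invalid regex: {exc}"

    misses = [s for s in should_match if not rx.search(s)]
    false_hits = [s for s in should_not_match if rx.search(s)]
    if misses or false_hits:
        return False, f"missed: {misses}; false hits: {false_hits}"
    return True, "pattern passed all checks"


# Hypothetical example: a pattern an assistant might produce when asked,
# in plain English, to "match IPv4 addresses in firewall logs".
candidate = r"\b(?:\d{1,3}\.){3}\d{1,3}\b"
ok, report = vet_generated_pattern(
    candidate,
    should_match=["src=10.0.0.5 dst=192.168.1.1 action=deny"],
    should_not_match=["upgraded to version 2.4.1 build 7"],
)
```

The point is not the regex itself but the discipline: the model translates intent into syntax, while the analyst supplies ground-truth examples and retains the accept/reject decision.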
Related articles:
– NIST AI Risk Management Framework
– MITRE ATLAS: Adversarial Threat Landscape for AI Systems
– OWASP Top 10 for Large Language Model Applications