Voice of reason
As IT teams race to weave AI into production systems, a growing concern is what happens when models “hallucinate” at the worst possible time—or make damaging autonomous decisions. Byron Cook, a vice president and distinguished scientist at Amazon who also serves as a part-time program manager at DARPA, is among the researchers pushing for stronger safeguards through automated reasoning. Cook leads AWS’s Automated Reasoning Group, which focuses on tools designed to validate AI outputs and provide provable security guarantees, aiming to help humans verify whether an AI system’s response is actually correct before it’s trusted in critical workflows.
ChatGPT it?
For years, IT professionals relied on search engines and community forums to troubleshoot everything from obscure error codes to major outages, honing “Google-fu” to find precise answers quickly. Now, many are increasingly turning to large language models like ChatGPT and Gemini, a shift that’s changing how technical knowledge is discovered—and raising new reliability issues. Tom Bachant, founder and CEO of Unthread, warns that “AI slop” is beginning to crowd search results, creating a scenario where low-quality, AI-generated content rises to the top while chat-based answers can still be confidently wrong, leaving IT workers stuck with fast responses that may not be verifiable.
Your input matters!
Federal officials are asking the public to weigh in on a fast-moving security challenge: how to protect agentic AI systems that can plan and act with limited human oversight. On Jan. 8, the Center for AI Standards and Innovation (CAISI), housed within NIST at the Department of Commerce, issued a request for information seeking real-world examples, best practices, case studies, and actionable recommendations. The agency says it wants deeper insight into current risks and vulnerabilities, which security practices may need to change in an agentic era, and how organizations should evaluate the security of these systems as adoption accelerates.