US Government unveils new AI security guidelines for critical infrastructure to mitigate AI-related threats

The U.S. government has released new security guidelines to protect critical infrastructure from artificial intelligence (AI) threats, focusing on assessing AI risks, facilitating safe AI use, and ensuring privacy protection. The guidance covers the governance, mapping, measurement, and management functions across the AI lifecycle, emphasizing transparency, secure design practices, and sector-specific risk assessments. The guidelines aim to address AI-augmented attacks, adversarial manipulation of AI, and vulnerabilities in AI systems themselves, necessitating secure deployment practices and validation of model sources.

The measures follow the Five Eyes alliance's cybersecurity recommendations and best practices, including securing deployment environments, reviewing AI model sources, and implementing strict access controls to guard against malicious cyber actors targeting AI systems with prompt injection attacks. Incidents such as the Keras 2 neural network library vulnerability and the Crescendo jailbreak technique illustrate the need to protect AI systems from manipulation and exploitation, as they are increasingly targeted for data theft, espionage, and influence operations.
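As a minimal sketch of the "review AI model sources" and secure-deployment practices described above, the snippet below verifies a downloaded model artifact against a publisher-supplied SHA-256 digest before loading it with Keras's safe_mode, which rejects the arbitrary-code deserialization paths (e.g., Lambda layers) of the kind behind the Keras 2 vulnerability. The file path and digest are hypothetical placeholders, and a Keras 3 environment is assumed.

```python
import hashlib
from pathlib import Path

# Hypothetical placeholders: in practice the path and digest come from your
# model registry or the publisher's signed release notes.
MODEL_PATH = Path("models/classifier.keras")
EXPECTED_SHA256 = "0" * 64  # replace with the published digest

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large models need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_verified_model(path: Path, expected_sha256: str):
    # Source validation: refuse to load an artifact whose hash does not match
    # the digest published alongside the model.
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(
            f"Model digest mismatch: expected {expected_sha256}, got {actual}"
        )

    # Safe deserialization: Keras 3's safe_mode refuses to load models that
    # would execute arbitrary code on load (e.g., via Lambda-layer
    # deserialization), the class of issue behind the Keras 2 vulnerability.
    import keras
    return keras.saving.load_model(str(path), safe_mode=True)

if __name__ == "__main__":
    model = load_verified_model(MODEL_PATH, EXPECTED_SHA256)
    model.summary()
```

Note that the hash check addresses supply-chain integrity (whether the file was tampered with in transit), while safe_mode limits the blast radius if a malicious artifact slips through; the two controls are complementary rather than interchangeable.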