The article summarizes a talk by Microsoft's Siva Sundaramoorthy on the security risks of generative AI, covering concerns about accuracy, adoption risks, and distinct threat maps for AI usage, AI applications, and AI platforms. The risks he identifies include bias, misinformation, deception, lack of accountability, overreliance, and intellectual property issues. Sundaramoorthy urges security teams to mitigate these risks by understanding their use cases, securing AI much as they would other systems, and addressing AI-specific challenges such as disclosure of sensitive information and data poisoning.

Security teams must balance the risks and benefits of generative AI

Despite the uncertainties surrounding AI, there are established ways to secure AI solutions: leveraging risk management frameworks from organizations such as NIST and OWASP, using evaluation tools from Microsoft and Google, and applying data sanitization and access control methods. Sundaramoorthy also stresses the importance of transparency, control, and compliance standards in securing AI effectively, while addressing concerns about the ROI of AI and the potential failures, both malicious and benign, that can occur in AI systems.
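As a rough illustration of two of the mitigations mentioned above, the sketch below shows data sanitization (redacting sensitive values before they reach a model prompt) and a simple access-control gate. The patterns, role names, and function names are hypothetical examples, not anything described in the article:

```python
import re

# Illustrative patterns for sensitive data; a real deployment would use a
# vetted PII-detection library, not two hand-written regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical allowlist of roles permitted to query the model.
ALLOWED_ROLES = {"analyst", "admin"}

def sanitize(text: str) -> str:
    """Redact sensitive values before user text is placed in a prompt."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def build_prompt(role: str, user_input: str) -> str:
    """Apply access control first, then sanitize the input into a prompt."""
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role {role!r} may not query the model")
    return f"User question: {sanitize(user_input)}"

print(build_prompt("analyst", "Reach me at jane@example.com re: 123-45-6789"))
```

This only gates and redacts inputs; defending against data poisoning or prompt injection requires controls on training data and model outputs as well.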
https://www.techrepublic.com/article/microsoft-generative-ai-security-risk-reduction-isc2/