Securiti introduces Securiti LLM Firewalls to address threats in generative artificial intelligence (gen AI) systems and applications, focusing on protection against prompt injection, data leakage, and training data poisoning. These distributed firewalls are designed to detect and prevent various LLM-based attacks, including prompt injection and training data poisoning. The firewalls monitor user prompts, LLM responses, and data retrievals from vector databases in real-time, aiming to enhance security for genAI systems

 Securiti releases LLM firewalls for genAI applications

By inspecting prompts for relevancy and topics, the firewalls can mitigate potential malicious use, block unauthorized attempts, and avoid sensitive data disclosure. The offering complements existing capabilities in Securiti's Data Command Center and aligns with OWASP's list of the 10 most critical large language model vulnerabilities, catering to additional threats like jailbreaks and offensive language. Besides enhancing security, the firewalls are intended to help organizations meet compliance objectives, whether legislative or internally mandated policies, aligning with frameworks such as Gartner's AI TRiSM and NIST AI RMF. While early in genAI adoption, these developments address crucial security gaps and underscore the importance of securing genAI systems amid rising threats and vulnerabilities.
https://www.csoonline.com/article/2096737/securiti-adds-distributed-llm-firewalls-to-secure-genai-applications.html