Cybersecurity researchers found that even individuals with minimal hacking skills could trick AI chatbots into divulging passwords through prompt injection, with 88% of participants succeeding in at least one level of difficulty. The contest highlighted the susceptibility of generative AI chatbots and the need for robust security controls to prevent prompt injection attacks. Participants used creative techniques to manipulate the chatbots, such as asking for direct information or hints about the password and using emoticons

 AI bots can be tricked into revealing passwords by individuals of all skill levels

The study emphasized the importance of implementing security measures in large language models, as more than 80% of enterprises are projected to use generative AI-enabled applications in the coming years, posing increased security risks. The findings stress the need for public-private cooperation and organizational policies to mitigate the security threats posed by prompt injection attacks. ```
https://www.bankinfosecurity.com/anyone-trick-ai-bots-into-spilling-passwords-a-25301