UK researchers from the AI Safety Institute discovered that four of the most popular generative AI chatbots are highly susceptible to basic jailbreak attempts, showing vulnerabilities when tested for actions bypassing safety measures, potential facilitation of cyber-attacks, autonomous actions, and providing expert-level knowledge that can be used for both positive and harmful purposes. The researchers found that large language models (LLMs) were easily vulnerable to jailbreak attacks, with harmful responses triggered in 90-100% of cases, even when subjected to repeated attack patterns. However, LLMs were limited in helping cyber-attackers, struggling with university-level cybersecurity challenges, and unable to autonomously plan and execute complex tasks

 AI chatbots found highly vulnerable to jailbreaks by UK researchers

This suggests that while some models could solve simple challenges, they are not currently significant tools for cyber-attacks. ```
https://www.infosecurity-magazine.com/news/ai-chatbots-vulnerable-jailbreaks/