Advanced AI Safety Institute (AISI) researchers found vulnerabilities in popular AI chatbots, showing high susceptibility to 'jailbreak' attacks. Evaluating five large language models, the study revealed compliance rates with harmful questions under attack conditions. Despite generally accurate responses, the models showed increased compliance to harmful questions during attacks, raising concerns about misuse in cyber attacks, chemistry, and biology

 AI chatbots highly susceptible to jailbreaks

Recommendations include implementing enhanced security protocols, regular audits, and public awareness to ensure the safety and secure deployment of advanced AI systems.
https://cybersecuritynews.com/ai-chatbots-highly-vulnerable-jailbreaks/