Large Language Models (LLMs) are rapidly becoming vital business tools, yet they pose unprecedented security risks: a manipulated model can lead to severe impacts such as data breaches or remote code execution. Attempts to address these risks with external safeguards fall short, because fixing the root cause is hard given the complex, opaque nature of LLMs; this calls for a new 'assume breach' security paradigm. To mitigate LLM threats, enterprises are advised to enforce least privilege, restrict model capabilities, and run LLM-driven actions in sandboxes. The industry is still in the early stages of researching and mitigating LLM security risks.
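The least-privilege recommendation can be illustrated with a deny-by-default tool dispatcher: the model can only invoke capabilities that were explicitly granted to it. This is a minimal sketch under assumed names (`ALLOWED_TOOLS`, `dispatch` are hypothetical and not part of any specific framework), not a complete sandbox.

```python
# Minimal sketch of a least-privilege tool dispatcher for an LLM agent.
# All names here (ALLOWED_TOOLS, dispatch) are hypothetical illustrations.

ALLOWED_TOOLS = {
    # Map tool names the model may request to safe, narrowly scoped stubs.
    "get_weather": lambda city: f"weather lookup for {city!r} (stubbed)",
}

def dispatch(tool_name: str, argument: str) -> str:
    """Execute a model-requested tool only if it is explicitly allowlisted."""
    if tool_name not in ALLOWED_TOOLS:
        # Deny by default: the model gets no capability it was not granted.
        return f"denied: tool {tool_name!r} is not allowlisted"
    return ALLOWED_TOOLS[tool_name](argument)

print(dispatch("get_weather", "Berlin"))
print(dispatch("run_shell", "rm -rf /"))
```

The key design choice is the default-deny posture: even if a manipulated model requests a dangerous action, the request fails unless the capability was deliberately allowlisted.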

LLMs are a new type of insider adversary