Security Stop Press : LLM Malicious “Prompt Injection” Attack Warning

The UK’s National Cyber Security Centre (NCSC) has warned of the susceptibility of existing Large Language Models (LLMs) to malicious “prompt injection” attacks. These are where a user creates inputs intended to cause an AI model to behave in an unintended way e.g., generating offensive content or disclosing confidential information.

This means that businesses integrating LLMs like ChatGPT into their business, products, or services could be leaving themselves open to risks like inaccurate, controversial, or biased data content, data poisoning and concealed prompt injection attacks.

The advice is for businesses to establish cybersecurity principles and make sure that they are able to deal with even the worst case scenario of whatever their LLM-powered app is permitted to do.

Posted in

Mklink

MICROSOFT OFFICE 365
YOUR COMPLETE OFFICE IN THE CLOUD

Bringing together everyone's favourite productivity tools with the benefits of cloud-based communication and collaboration, Microsoft have developed a platform that is both technically & commercially-sound for businesses of any shape.