The National Computer Emergency Response Team (CERT) has issued a warning about using AI chatbots like OpenAI’s ChatGPT, cautioning users about potential cybersecurity and privacy risks.
While these tools have become popular for work and personal use, CERT highlights that they also bring vulnerabilities.
Chatbot interactions can involve sensitive information, such as business plans or personal messages, which could be exposed if a data breach occurs. Hackers might misuse this data, creating risks of intellectual property theft, reputational damage, and even legal consequences.
The advisory from CERT stated that AI chatbots are increasingly targeted by cybercriminals using advanced tactics, like phishing disguised as legitimate chatbot conversations, to trick users into sharing confidential information.
There's also a risk of malware infections if chatbots are used on devices that aren’t secure. CERT suggests stronger cybersecurity measures to protect users from such threats.
How to stay safe from cybercrime
To stay safe, CERT advises individuals not to enter sensitive data in chatbot interfaces. Disabling chat-saving options and deleting any conversations with personal information can also help reduce risks.
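The advisory does not prescribe any particular tooling for this, but a minimal sketch of the idea is a small filter that scrubs obvious personal identifiers from a prompt before it is pasted or sent to a chatbot. The patterns and the redact_prompt helper below are illustrative assumptions, not part of CERT's guidance, and a real deployment would rely on a vetted data-loss-prevention tool.

```python
import re

# Illustrative patterns only; a production setup would use a vetted DLP library.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\+?\d[\d -]{8,14}\d"),
}

def redact_prompt(text: str) -> str:
    """Replace anything that looks like personal data with a placeholder
    before the text reaches a chatbot."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    prompt = "Email the contract to ali@example.com and bill card 4111 1111 1111 1111."
    print(redact_prompt(prompt))
    # -> Email the contract to [REDACTED EMAIL] and bill card [REDACTED CARD_NUMBER].
```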
Another key recommendation is to ensure devices are free from malware and have updated security settings when using chatbots.
For organisations, CERT recommends using secure devices dedicated to chatbot interactions and implementing strict access controls. Encryption for chatbot communications and regular staff training on cybersecurity are also essential to keep sensitive information safe.
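The encryption recommendation is largely about transport security. As a minimal sketch, assuming a hypothetical internal chatbot endpoint at https://chatbot.example.internal rather than any real service, a client can refuse plain HTTP and insist on certificate-verified TLS 1.2 or higher instead of trusting defaults:

```python
import json
import ssl
import urllib.request

# Hypothetical internal chatbot endpoint; replace with the organisation's real service URL.
CHATBOT_URL = "https://chatbot.example.internal/v1/chat"

def send_encrypted(prompt: str) -> str:
    # Refuse unencrypted traffic outright.
    if not CHATBOT_URL.startswith("https://"):
        raise ValueError("Chatbot traffic must use HTTPS")

    # create_default_context() verifies certificates; also require TLS 1.2 or newer.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    request = urllib.request.Request(
        CHATBOT_URL,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, context=context, timeout=10) as response:
        return response.read().decode("utf-8")
```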
Companies are encouraged to set up monitoring tools to catch any suspicious chatbot activity and to create response plans in case of a breach.
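What such monitoring looks like in practice is not spelled out in the advisory. One assumed approach, sketched below with made-up keywords and thresholds, is to scan a gateway log of outgoing prompts and flag entries that mention sensitive terms or contain unusually large pastes so a security team can review them.

```python
from dataclasses import dataclass

# Keywords and thresholds are illustrative; tune them to the organisation's data.
SENSITIVE_KEYWORDS = {"password", "confidential", "salary", "contract", "api key"}
MAX_PROMPT_CHARS = 2000  # unusually long pastes can indicate bulk data leaving the network

@dataclass
class PromptEvent:
    user: str
    prompt: str

def flag_suspicious(events: list[PromptEvent]) -> list[str]:
    """Return human-readable alerts for prompts that warrant review."""
    alerts = []
    for event in events:
        lowered = event.prompt.lower()
        hits = [kw for kw in SENSITIVE_KEYWORDS if kw in lowered]
        if hits:
            alerts.append(f"{event.user}: prompt mentions {', '.join(sorted(hits))}")
        if len(event.prompt) > MAX_PROMPT_CHARS:
            alerts.append(f"{event.user}: unusually large prompt ({len(event.prompt)} chars)")
    return alerts

if __name__ == "__main__":
    sample = [PromptEvent("j.khan", "Summarise this confidential contract: ...")]
    for alert in flag_suspicious(sample):
        print(alert)
```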
CERT’s advisory underscores the importance of proactive cybersecurity practices. Regular software updates, application whitelisting, and clear crisis communication plans are critical for both individuals and organisations using AI chatbots.