Employees today rely heavily on neural network-based language models such as ChatGPT to perform their tasks. A survey conducted by Kaspersky in Russia found that 11% of respondents had used chatbots, and around 30% believe these tools have the potential to change work roles in the future.
Other surveys indicate that 50% of office employees in Belgium and 65% in the UK rely on ChatGPT. Google Trends data for the term “ChatGPT” also shows a clear pattern of use on weekdays, which may be linked to employees using the tool during work hours.
With this growing reliance on chatbots in the workplace, a crucial question arises: can these bots be trusted with sensitive company data? To answer it, Kaspersky researchers identified four key risks that companies may face when their employees use ChatGPT:
Data theft or breach on the provider’s side
Although the operators of large language model-based chatbots are major players in the technology sector, they are not immune to hacking or data theft. For example, in some instances ChatGPT users have been able to see messages from other users’ chat histories.
Data leakage through chatbots
In theory, messages exchanged with chatbots could be used to train future language models. Large language models are also prone to “unintended memorization”: they can memorize unique sequences, such as phone numbers, that do not improve model quality but pose a privacy threat, since any data that ends up in the training set may reach other users, intentionally or unintentionally.
Malicious clients
In environments where official services such as ChatGPT are blocked, employees may turn to unofficial alternatives in the form of programs, websites, or bots, and risk downloading malware disguised as a fake client or application.
Account hacking
Attackers can gain access to employees’ accounts, and the data stored in them, through phishing attacks or stolen login credentials. For example, Kaspersky Digital Footprint Intelligence regularly finds posts on dark web forums offering access to accounts on chatbot platforms for sale.
In conclusion, chatbots may jeopardize the privacy of both users and companies. Responsible developers should therefore explain in their privacy policies how data is used to train their language models.
Kaspersky’s analysis of popular chatbots (including ChatGPT, the ChatGPT API, Anthropic Claude, Bing Chat, Bing Chat Enterprise, You.com, Google Bard, and Alloy Studios’ Genius app) indicates that security and privacy standards are higher in chatbots aimed at corporate customers (B2B), given the greater risks posed to company information.
Accordingly, the terms and conditions governing data collection, storage, and processing place more emphasis on protection in this segment than in services aimed at individual consumers (B2C). The business-oriented tools in this analysis usually do not store conversation logs by default, and in some cases do not send any data to the vendor’s servers at all, since the chatbot runs locally within the customer’s network.
Anna Larkina, a security and privacy expert at Kaspersky, said: “Having explored the potential risks of using chatbots based on large language models for work purposes, we found that the risk of sensitive data leakage is highest when employees use personal accounts at work. This makes raising employee awareness of the risks of chatbot use a priority for companies. On the one hand, employees need to understand which data is confidential or personal, or constitutes a trade secret, and why it should not be fed into a chatbot. On the other hand, companies should establish clear rules for using such services, if they allow them at all.”