OpenAI announced the launch of the GPT Store in early January, a marketplace where users can access chatbots that are built on the company's models and designed by developers to perform specific tasks.
These customized versions, known as GPTs (Generative Pre-trained Transformers), are designed to cater to different needs and fields such as writing, research, programming, education, and productivity. On January 31, 2024, OpenAI introduced a new ChatGPT feature that lets users summon a GPT into any conversation with the chatbot by typing the @ symbol and selecting the GPT from a list.
Tagging a GPT this way, much like @-mentioning a person in Slack, integrates it seamlessly into the conversation. The mentioned GPT receives the full context of the discussion, and users can summon multiple GPTs in the same chat to cover different use cases and needs.
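To picture how mention-based summoning might work under the hood, here is a minimal Python sketch. The registry, function names, and message format are hypothetical illustrations of the routing pattern, not OpenAI's actual API:

```python
import re

# Hypothetical registry of custom GPTs that can be @-mentioned in a chat.
GPT_REGISTRY = {
    "codehelper": "You are a coding assistant.",
    "travelplanner": "You plan trips.",
}

MENTION = re.compile(r"@(\w+)")

def route_message(conversation: list[dict], user_message: str) -> dict:
    """Dispatch a message to a mentioned GPT, handing it the whole chat.

    The mentioned GPT gets the entire conversation history, mirroring how
    a summoned GPT has full context of the discussion so far.
    """
    match = MENTION.search(user_message)
    target = match.group(1).lower() if match else None
    system_prompt = GPT_REGISTRY.get(target, "You are the default assistant.")
    return {
        "system": system_prompt,
        "messages": conversation + [{"role": "user", "content": user_message}],
    }

# Example: "@codehelper how do I reverse a list?" routes to the coding GPT
# while every earlier turn of the conversation stays in scope.
```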
Following the announcement that customized GPTs can be summoned into ChatGPT conversations, Kaspersky experts emphasized the importance of exercising caution and deliberation when sharing sensitive information with these models.
Vladislav Tushkanov, Technical Director of the Research Development Group in Kaspersky's AI Research Technology team, commented: “Custom AI bots, known as GPTs, can leverage external resources and tools for advanced tasks. OpenAI is therefore developing a mechanism that lets users review and approve the actions a custom GPT takes, helping to protect user data. When a custom GPT attempts to send data to any external party, the user is asked to allow or deny the request, with an additional option to inspect the data via a drop-down in the user interface. The same safeguards apply to GPTs summoned into ChatGPT conversations.”
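The allow/deny flow Tushkanov describes resembles a classic consent gate placed in front of any outbound request. The Python sketch below illustrates that general pattern only; it is an assumption-laden mock-up, not OpenAI's implementation:

```python
import json

def send_with_approval(url: str, payload: dict) -> bool:
    """Ask the user to allow or deny an outbound request, with the option
    to inspect the exact data first (the equivalent of the drop-down)."""
    while True:
        choice = input(
            f"GPT wants to send data to {url}. [a]llow / [d]eny / [i]nspect: "
        ).strip().lower()
        if choice == "i":
            # Show the full payload so the user can decide with eyes open.
            print(json.dumps(payload, indent=2))
        elif choice == "a":
            return True   # caller may now perform the request
        elif choice == "d":
            return False  # request is blocked before any data leaves
```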
Tushkanov added: “Nevertheless, users must stay alert and act cautiously despite these safeguards, scrutinizing each request carefully. Beyond that, user data can leak from any chatbot in other ways, through bugs or vulnerabilities in the service, especially if the provider retains data for model training or if user accounts are compromised. Users should therefore refrain from sharing personal and confidential information with any online chatbot.”
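One practical habit that follows from this advice is scrubbing obvious identifiers from text before it ever reaches a chatbot. The sketch below is purely illustrative; the regex patterns are assumptions and nowhere near thorough enough for real PII detection:

```python
import re

# Illustrative patterns only; real PII detection needs far more than regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace obvious personal identifiers before text leaves the machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(scrub("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# -> "Reach me at [email removed] or [phone removed]."
```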