Amazon Q, Amazon’s AI chatbot for business, is facing significant challenges: severe hallucinations and leaks of confidential data. Some employees are sounding the alarm over accuracy and privacy issues, according to leaked documents obtained by the tech newsletter Platformer.
The leaked material reportedly includes the locations of AWS data centers, internal discount programs, and unreleased product features.
Amazon Q’s early troubles come as Amazon works to dispel the perception that it lags behind Microsoft, Google, and others in building tools and infrastructure based on generative AI.
In September, the company announced an investment of around $4 billion in the AI startup Anthropic.
In a statement, Amazon downplayed the significance of the employee discussions. A company spokesperson said: “Some employees share comments through internal channels, which is a common practice at Amazon. We have not identified any security issues as a result of these comments and we appreciate all the feedback we have received. We continue to monitor Amazon Q as it transitions from a preview product to being publicly available.”
Responding to the employees’ claims, the spokesperson maintained that the Amazon Q chatbot had not leaked any sensitive information.
Amazon Q is currently available as a free preview, pitched by Amazon as an enterprise counterpart to ChatGPT.
According to Amazon executives, the chatbot can answer developers’ questions about AWS capabilities, edit source code, and cite its sources.
Amazon Q competes with similar products from Microsoft and Google at a lower price, and Amazon has promoted the chatbot as safer than consumer tools such as ChatGPT.
An internal document on Amazon Q’s deployment and its erroneous responses indicates that it can hallucinate and return harmful or inappropriate answers, including outdated security information that could put customer accounts at risk.
The risks described in the document are typical of large language models, all of which produce inaccurate or inappropriate responses at least some of the time.
Unlike chatbots that draw on the public internet, Amazon Q pulls information from company data repositories, code repositories, and enterprise systems, which raises the stakes if that information is wrong or if the bot surfaces it to the wrong people.