A new study by researchers at Purdue University found that the AI chatbot ChatGPT is often unreliable when answering developers' programming questions. Analyzing 517 questions drawn from Stack Overflow, the researchers found that roughly 52% of the chatbot's answers contained incorrect information.
The errors ranged from misunderstanding the concepts behind a question to factual inaccuracies and logical mistakes in the generated code and technical terminology.
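As a hypothetical illustration (not an example from the study itself), a "logical error" of this kind is often code that looks plausible and runs, yet mishandles an edge case:

```python
def average(values):
    # Plausible-looking but flawed: raises ZeroDivisionError
    # when the list is empty.
    return sum(values) / len(values)

def safe_average(values):
    # Corrected version: handle the empty-list edge case explicitly.
    if not values:
        return 0.0
    return sum(values) / len(values)

print(safe_average([2, 4, 6]))  # 4.0
print(safe_average([]))         # 0.0
```

Errors like this are easy to miss in a quick read, which is part of why they slip past developers reviewing generated answers.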
The study also criticized ChatGPT's tendency to produce answers that are longer and more complex than the question requires, which can confuse developers.
Even so, in a survey of 12 programmers, about a third said they preferred ChatGPT's answers for their clear, well-organized style.
The findings matter because programming errors can compound over time, leading to system failures and application crashes that harm entire teams or organizations.
Programmers are therefore advised to cross-check with other tools such as GitHub Copilot, and to verify any code ChatGPT produces through human review or code analysis tools, in order to improve quality and reduce errors.
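A minimal sketch of that verification workflow, assuming Python and using only the standard library: first confirm a generated snippet even parses, then execute it and run targeted unit checks, since syntactically valid code can still be logically wrong. The `median` snippet below is a hypothetical stand-in for AI-generated code.

```python
import ast

def looks_syntactically_valid(source: str) -> bool:
    """Cheap first pass: confirm the snippet at least parses as Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# Hypothetical AI-generated snippet under review (illustration only).
generated = """
def median(values):
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
"""

assert looks_syntactically_valid(generated)

# Second pass: execute in an isolated namespace and assert on
# known inputs, including the even-length edge case.
namespace = {}
exec(generated, namespace)
median = namespace["median"]
assert median([3, 1, 2]) == 2
assert median([1, 2, 3, 4]) == 2.5
```

In practice, a linter or static analyzer (e.g. one run in CI) plus a real test suite would replace these ad-hoc checks, but the two-pass idea, parse first, then test behavior, is the same.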
The researchers conclude by urging caution and awareness when relying on ChatGPT's answers for programming tasks, given how frequently the chatbot errs.