Artificial intelligence researchers on the X platform have expressed doubt that Q Star poses a threat to humanity, arguing that the project is simply an extension of current work at OpenAI and other AI research labs. Among the skeptics is Yann LeCun, chief AI scientist at Meta.
Rick Lamers, who writes the "Coding with Intelligence" newsletter, pointed to a lecture that John Schulman, a co-founder of OpenAI, gave seven years ago at the Massachusetts Institute of Technology, in which he described a mathematical function called "Q Star."
Many researchers believe the "Q" in Q Star stands for "Q-learning," a reinforcement-learning technique in which a model improves at a task by taking actions and receiving rewards for the correct ones.
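In Q-learning, the model keeps a table of values for each state–action pair and nudges those values toward the reward plus the best discounted future value. A minimal sketch on a toy "chain" environment (the environment and hyperparameters are illustrative, not from any OpenAI system):

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a simple chain: the agent must move right to reach the goal."""
    # Q[state][action]: action 0 = move left, action 1 = move right
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        state = 0
        while state != n_states - 1:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = 0 if Q[state][0] > Q[state][1] else 1
            next_state = max(0, state - 1) if action == 0 else state + 1
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # Core update: move Q toward reward + discounted best future value
            Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
            state = next_state
    return Q

random.seed(0)  # for a reproducible demonstration
Q = q_learning()
# After training, the greedy action at every non-terminal state is "move right"
print([0 if q[0] > q[1] else 1 for q in Q[:-1]])
```

The learned greedy policy moves right at every state, the behavior that was rewarded, which is the sense in which the technique "helps the model learn and improve in a specific task."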
Researchers suggest that "Star" could be a reference to A Star (A*), a search algorithm that explores the nodes of a graph and finds paths between them.
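A* works by expanding nodes in order of the cost already paid plus a heuristic estimate of the cost remaining. A small sketch on a grid, using the standard Manhattan-distance heuristic (the grid and setup are illustrative):

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 0/1 grid (1 = wall), 4-way movement."""
    def h(p):
        # Admissible heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    # Priority queue ordered by f = g (cost so far) + h (estimated cost to go)
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
print(path)  # shortest route around the wall
```

Because the heuristic never overestimates, A* is guaranteed to return a shortest path, here the seven-node route around the wall.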
In 2014, Google's AI lab DeepMind applied Q-learning to build an artificial intelligence system capable of playing Atari 2600 games at human-level performance.
The roots of the A Star algorithm date back to an academic study published in 1968, and researchers at the University of California, Irvine, have for several years been exploring ways to improve A Star using Q-learning — the same direction OpenAI now appears to be pursuing.
Researcher Nathan Lambert of the Allen Institute for Artificial Intelligence said he believes Q Star is primarily aimed at using AI to solve high-school-level math problems, not at destroying humanity.
Lambert stated, "Earlier this year, OpenAI published its work on improving the mathematical reasoning of language models using a technique called reward modeling, and it will be interesting to see whether better math skills do more than just make OpenAI's chatbot ChatGPT a better coding assistant."
Mark Riedl, a computer science professor at the Georgia Institute of Technology, criticized the reports on Q Star and the broader media frenzy around OpenAI's pursuit of artificial general intelligence — AI capable of performing any task a human can.
Reports suggested that Q Star could be a step toward artificial general intelligence, but researchers are skeptical, including Riedl, who said, "There is no evidence that large language models or any other technology under development at OpenAI is on a path to artificial general intelligence or to any human-annihilation scenario. OpenAI is simply building on current ideas and finding new ways to advance them."
Riedl added that researchers in academia and industry are pursuing all of these same ideas and have already published several papers on them in the past six months, so OpenAI's researchers are unlikely to have fundamentally different ideas from the many others striving to advance the field.
Lamers argues that combining the Q Star project with some of the techniques described in the research paper OpenAI researchers published in May could significantly enhance the capabilities of language models.
Lamers says, "Based on the research, it is possible that OpenAI has discovered a way to control the inference chains of language models, guiding them down desired logical paths to reach the desired results."
This would reduce the likelihood of models following flawed or uncommon reasoning patterns into harmful or incorrect conclusions, and most AI researchers agree on the need for ethical methods of training large language models that allow them to process information both efficiently and safely.
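One simple way to picture this kind of guidance, in the spirit of the reward-modeling work mentioned above, is to generate several candidate reasoning chains and keep the one whose individual steps score highest. The sketch below is purely illustrative: the `score_step` function is a hypothetical stand-in for a learned reward model, not OpenAI's actual method.

```python
def score_step(step: str) -> float:
    """Hypothetical stand-in for a learned process reward model.
    Here it simply rewards steps that show explicit arithmetic."""
    return 1.0 if "=" in step else 0.0

def best_chain(chains):
    """Pick the candidate reasoning chain whose steps score highest overall."""
    return max(chains, key=lambda chain: sum(score_step(s) for s in chain))

candidates = [
    ["The answer is probably 12."],                       # unsupported guess
    ["3 * 3 = 9", "9 + 3 = 12", "So the answer is 12."],  # explicit step-by-step chain
]
print(best_chain(candidates))
```

Scoring each step, rather than only the final answer, is what lets this kind of selection steer a model toward "desired logical paths" instead of merely desired conclusions.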