Where will AGI take us: Is Q* the future of artificial intelligence?
June 27, 2024
Artificial General Intelligence (AGI) represents the next frontier in artificial intelligence, and companies such as OpenAI are already working towards it.
But what exactly is Project Q*, which is speculated to be an advanced form of intelligence that could outperform human cognitive abilities? And what implications does it hold for the future of AI and society?
Q*? AGI? Never heard of it.
Pronounced Q-Star, Q* is an ongoing project of OpenAI, the creator of ChatGPT. If you haven’t heard of Q* before, know that it’s basically a super-advanced learning algorithm that promises to take artificial intelligence to the next level.
Project Q*, although shrouded in mystery, has stood out for its reported ability to solve mathematical challenges, which is no small feat in the field of artificial intelligence. This ability not only represents a technical breakthrough, but also marks a strategic step towards endowing AI with human-like reasoning capabilities, potentially paving the way for AGI.
So… what can AGI and Q* do?
AGI differs from the artificial intelligence we know today in its ability to reason. Platforms like ChatGPT and Google Gemini (formerly Google Bard), for example, excel at recognizing patterns and understanding data to answer your questions, but they are not able to think for themselves – they always resort to existing data to provide an answer.
AGI takes this to another level. If we ask ChatGPT a maths question that has never been asked before, the answer may well be incorrect, since there is no logic behind it. AGI, on the other hand, is capable of logical-mathematical understanding. It doesn’t just memorise answers – it understands the problem in its entirety and works through it step by step.
This is why Q* is making waves in the tech community. Unlike the models used so far, Q* is smarter in its search for new information. According to several sources, the new model was able to solve maths problems at the level of primary school students, which suggests the technology is getting closer to reasoning capabilities equal to or greater than human intelligence.
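
To make the distinction concrete, here is a toy Python sketch – purely illustrative and hypothetical, not how ChatGPT or Q* actually work – contrasting an answerer that can only repeat results it has memorised with a solver that works the sum out step by step.

# Toy illustration only: answer-by-recall versus answer-by-reasoning.
# The names and data below are hypothetical, chosen just to make the point.

SEEN_BEFORE = {"2 + 2": 4, "10 - 3": 7}  # stands in for memorised training data

def lookup_answer(question):
    # Recall-style answering: works only if this exact question was seen before.
    return SEEN_BEFORE.get(question)

def step_by_step_answer(question):
    # Reasoning-style answering: parse the operands and actually do the sum.
    left, op, right = question.split()
    a, b = int(left), int(right)
    return a + b if op == "+" else a - b

print(lookup_answer("37 + 48"))        # None: recall alone fails on a new question
print(step_by_step_answer("37 + 48"))  # 85: working through the steps generalises

The recall-based function fails on any question it has not literally seen, while the step-by-step solver handles new ones – a rough analogy for the leap from pattern matching to reasoning described above.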
The two sides of the coin: should we fear it or praise it?
The news about Project Q* that emerged at the end of 2023 raised concerns, especially as it coincided with the dismissal of OpenAI’s CEO, Sam Altman (and his subsequent reinstatement), and with reports that the company’s researchers had warned of a breakthrough in AI that could “jeopardise humanity”. A bad start for the project, so to speak.
However, regarding the secrecy behind the project, OpenAI claims that it is keeping the details private to prevent misuse. At the end of the day, the company should only share details about the project once it has defined all the necessary security rules.
If it can indeed solve mathematical problems and learn on its own, Q* will push the boundaries of AI and open new avenues for innovation. Yet this evolution also raises ethical concerns: how to control the use of these systems and how to guarantee their safety are topics on the table.
Well…
Although we still don’t know for sure what Q* will be capable of doing (and whether it will ever see the light of day), we can be sure of one thing: if the project goes ahead, it will revolutionize the way AI thinks and solves problems.