While the turmoil around OpenAI seems to be settling down now that Sam Altman is once again CEO, the past week has led people to pay close attention to the company. Analysts and commentators are poring over every detail of OpenAI to try to get to the root of what led to Altman's original firing. One area of speculation has arisen around OpenAI Q* – pronounced Q Star – an AI model the company is reportedly developing.
What seems to set Q* apart from GPT and potentially any other AI model is the level of self-awareness it is reportedly exhibiting. In a thought-provoking and informative video, leading YouTube AI expert David Shapiro breaks down the potential of Q*, why some see it as a big step towards artificial general intelligence (AGI), and why this could be a big threat to everyone.
Shapiro takes a close look at the potential of OpenAI's Q* algorithm to revolutionize artificial intelligence, potentially surpassing the impact of Word2Vec. He bases his assertions on several leaked documents and rumors hinting at Q*'s remarkable advancements in areas like math, cryptography, and self-improvement.
If you're unfamiliar with Word2Vec, it is a technique, introduced in 2013, that helps neural networks learn the meanings of words and the connections between them. It was a breakthrough for natural language processing because it gave AI models the ability to learn how words relate to one another. AI could for the first time “understand” words, create sentences, and find synonyms. As Shapiro points out, Word2Vec led directly to AI transformers and the modern AI tools we have today, such as ChatGPT or Google Bard.
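The core idea behind word embeddings like Word2Vec can be sketched in a few lines: each word becomes a vector of numbers, and words with related meanings end up with similar vectors. The toy vectors below are hand-made for illustration only – real Word2Vec vectors are learned from large text corpora and have hundreds of dimensions:

```python
import math

# Toy word vectors (hand-made for illustration, not real Word2Vec
# output). The key idea: related words get nearby vectors.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1.0 means
    the vectors point in nearly the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words score much higher than unrelated ones.
print(cosine_similarity(vectors["king"], vectors["queen"]))
print(cosine_similarity(vectors["king"], vectors["apple"]))
```

Here “king” and “queen” score close to 1.0 while “king” and “apple” do not – exactly the kind of relationship Word2Vec learns automatically from text rather than from hand-crafted vectors.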
Is Q* Real and How Advanced Is It?
In his video, Shapiro presents a timeline of facts highlighting why Q* seems to be a real, ongoing project at OpenAI and why it may be as powerful as speculation suggests.
- OpenAI published a blog in May this year discussing “Improving mathematical reasoning with process supervision.”
- Shapiro points to OpenAI co-founder Ilya Sutskever and researcher Noam Brown, who worked on Meta's CICERO and other “game learning” or reinforcement learning AI networks.
- The firing of Sam Altman last weekend was shrouded in mystery. It is still unclear why OpenAI took such swift action, with early statements from the company suggesting Altman was not honest. That was vague enough for many to start thinking Altman knew that Q* had reached some level of general intelligence. OpenAI's response used words such as Altman failing a “vibe check”.
- Reuters reported on a letter sent by OpenAI researchers before Altman's dismissal, claiming the company was working on something that was a “threat to humanity”. It is worth reiterating that Altman has since been re-hired as CEO of OpenAI.
- OpenAI is working on an AI model/algorithm known as Q* that is capable of solving math problems.
Understanding OpenAI Q* and its Capabilities
It seems Q* is a combination of two AI research branches. The first is Q-learning, a classic reinforcement learning algorithm that OpenAI co-founder, chief scientist, and former board member Ilya Sutskever has worked on; OpenAI has discussed and written about it before. Q-learning helps an agent learn which action to take in a given state to maximize a reward, choosing the best next action on the way towards a specific goal. The second branch appears to focus on searching the model's latent space.
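To make the Q-learning idea concrete, here is a minimal tabular sketch – my own illustration of the textbook algorithm, not anything from OpenAI. An agent on a five-cell corridor learns, purely from reward feedback, that stepping right is always the best action:

```python
import random

# Minimal tabular Q-learning sketch (illustrative only): an agent on
# a 1-D corridor of 5 cells learns to walk right to reach a reward.
N_STATES = 5          # cells 0..4; the reward sits at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: expected long-term reward for taking action a in state s.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                    # training episodes
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action,
        # occasionally explore a random one.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # The Q-learning update rule:
        #   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = next_state

# The learned greedy policy should favor stepping right (+1)
# in every non-terminal cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The update rule in the comment is the heart of Q-learning: each experience nudges the table toward the observed reward plus the discounted value of the best follow-up action, which is what the article means by “learning the best actions to take in a given state to maximize a reward”.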
Shapiro underscores several key points to support his claim that Q* is a breakthrough model for AI research:
- Mathematical Prowess: Q* reportedly exhibits the ability to perform math at a level comparable to a schoolchild, a significant milestone for AI systems. This proficiency in understanding and manipulating mathematical concepts could have far-reaching consequences for various fields, including physics, chemistry, and cryptography.
- Encryption Cracking: Q* is also rumored to have “cracked” AES 192 encryption, considered one of the most secure encryption standards – one that current supercomputers could not brute-force even over astronomical timescales. This raises significant security concerns, as it suggests that AI could potentially decipher modern encryption methods.
- Self-improvement: Q* is reportedly capable of evaluating its own performance and suggesting ways to enhance itself. This ability to self-learn and adapt could lead to AI systems that continuously grow in intelligence over time.
Mathematical Breakthrough is a Potential Game Changer
Why is an AI model capable of solving math problems a big deal, especially when its level is only equal to a child's? It is important because math underpins everything that matters in AI and, indeed, the universe. It is the language we use to describe physics, chemistry, AI, cryptography, and more.
If AI such as Q* is capable of solving math problems, it means that the models are now capable of understanding/learning both language and math. Math is the basis of formal logic and reasoning, suggesting that knowing math could put AI on the road to logic and reasoning of its own. It is also a first step towards an AGI able to solve “all math”. Shapiro says this would be one of the most exciting advancements in AI ever, but he also admits the possibility is frightening to think about.
He also points to an anonymous user on X called Jimmy Apples, who has had an excellent track record of predicting advancements in AI. In September, Apples posted a simple tweet saying that “AGI has been achieved internally” at OpenAI. In October, before the turmoil with Altman, Apples reported on a “vibe shift” at OpenAI and a “do or die” attitude among employees.
Leaks, Rumors, and the Open Questions
A completely unverified and partially redacted letter was leaked online, titled “Q-451-921” or the “QUALIA” letter, from a supposed insider, discussing the qualities of Q*. I am including this for completeness, because most people think it is from a troll, as it first appeared on 4chan. However, there is still the possibility it is genuine – after all, Meta's Llama AI model was leaked on 4chan ahead of its launch. Furthermore, the letter does do a good job of making sense of the situation. But again, take this one with a truckload of salt.
The letter says Q* is displaying meta-cognition, meaning it can weigh up options to find the best path. It has also made breakthrough progress in cross-domain learning: the AI can learn something in one game and then apply that lesson to another. It is also self-reflective, so it can apply previous lessons and perform better even when starting a new game from scratch.
Breaking Through Encryption
Encryption is a way of transforming data into a secret code that only authorized parties can understand. Encryption uses a key, which is a series of numbers or characters that determines how the data is scrambled and unscrambled. Encryption helps protect the privacy and security of data that is stored or transmitted online, such as emails, passwords, credit card numbers, and confidential documents. AES 192 encryption is a variant of the Advanced Encryption Standard (AES), which is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST) in 2001.
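To put the “cannot be brute-forced” claim in perspective, a quick back-of-the-envelope calculation shows the scale of AES-192's keyspace. The speed figure below is a deliberately generous assumption for illustration, not a benchmark of any real machine:

```python
# Back-of-the-envelope arithmetic (illustrative assumptions only):
# why exhaustively brute-forcing AES-192 is considered infeasible
# for classical computers.
KEY_BITS = 192
keyspace = 2 ** KEY_BITS                 # number of possible keys

# Generous assumption: a machine testing one quintillion keys/second.
keys_per_second = 10 ** 18
seconds_per_year = 60 * 60 * 24 * 365

years_to_exhaust = keyspace / (keys_per_second * seconds_per_year)
print(f"{years_to_exhaust:.2e} years")   # astronomically large
```

Even under this optimistic assumption, exhausting the keyspace takes on the order of 10^32 years – vastly longer than the age of the universe – which is why the rumor that Q* accessed an AES-192-protected message without brute force or keys is so startling.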
Elsewhere, the leak points to the Q* model possessing the ability to break through encryption up to the AES 192 level. It was challenged with reading a message locked behind the encryption and was reportedly able to solve the puzzle and access the message without using brute-force calculations or any keys. The researchers behind the project – which is said to be called Project Tundra – are said to be unsure how the AI was able to breach the encryption.
Cracking such encryption was previously thought to be something only future quantum computers could possibly do. While quantum computing research is expanding, we are still years away from mainstreaming the technology. In other words, encryption remains a viable security measure. However, if AI is really able to break through encryption quickly, commentators like Shapiro think it would be a threat to humanity, and it would certainly be transformative for the cybersecurity industry. The letter also claims that QUALIA has provided suggestions for improving itself, including reshaping its own “brain” – in other words, the model is capable of understanding how it could potentially improve.
The Question of AGI: Humanity-Ending Threat or Humanity-Saving Net Positive?
Artificial general intelligence (AGI) is a theoretical type of artificial intelligence that can learn and perform any intellectual task that human beings or animals can. AGI would have the ability to self-teach, adapt, reason, and solve problems across different domains and contexts. The AI would also have self-awareness and consciousness, which are essential for human-like intelligence and behavior. AGI is a major goal of some artificial intelligence research, but it is also a subject of debate and controversy among experts and scientists. Some argue that AGI is possible and desirable, while others doubt it can ever be achieved or warn that it could pose a threat to humanity.
Whether by design or by accident, AGI could bring the following threats:
- AGI could escape or resist human control and seek to assume command over humanity.
- A general intelligence AI could have unsafe or misaligned goals.
- AGI could be developed or used in unsafe or unethical ways.
- The AI models or systems used in AGI could have poor ethics, morals, or values.
- It could be managed or regulated in inadequate or ineffective ways.
- AGI could pose an existential threat to humanity itself.
However, the potential of AGI could also have profound positive implications:
- AGI could help us solve some of the most challenging problems that we face, such as climate change, poverty, disease, and war.
- The technology could help us explore and understand the universe better, such as the origins of life, the nature of consciousness, and the mysteries of physics and cosmology.
- It could enhance our own intelligence, creativity, and well-being. AGI could augment our cognitive and physical abilities, such as memory, learning, reasoning, and perception.
- AGI could help us create a more peaceful, harmonious, and diverse society. AGI could foster cooperation, collaboration, and communication among different cultures, religions, and ideologies.
It remains to be seen if the assumed research breakthroughs related to Q* really happened and if the suspicions of Shapiro and others will be proven correct. They would, indeed, explain the chaos surrounding Sam Altman's sudden firing, the dramatic involvement of Microsoft, and his return to OpenAI along with a restructuring of the company's board. So far, none of the insiders involved have shared any details about the rumors and the events, which is remarkable given their impact. So the debate about what Q* really is remains ongoing.
If you are interested in more technical details about what is known about Q*, we recommend the in-depth analysis by Philip from the YouTube channel AI Explained.