Artificial General Intelligence in 2025: Good Luck With That
AI experts have said it would likely be 2050 before AGI hits the market. OpenAI CEO Sam Altman says 2025, but it’s a very difficult problem to solve.
A few years ago, AI experts were predicting that artificial general intelligence (AGI) would become a reality by 2050. OpenAI has been pushing the art of the possible, along with Big Tech, but despite Sam Altman’s estimate of 2025, realizing AGI is unlikely soon.
“We can’t presume that we’re close to AGI because we really don’t understand current AI, which is a far cry from the dreamed-of AGI. We don’t know how current AIs arrive at their conclusions, nor can current AIs even explain to us the processes by which that happens,” says HP Newquist, author of The BrainMakers and executive director of The Relayer Group, a consulting firm that tracks the development of practical AI. “That’s a huge gap that needs to be closed before we can start creating an AI that can do what every human can do. And a hallmark of human thinking, which AGI will attempt to replicate, is being able to explain the rationale for coming up with a solution to a problem or an answer to a question. We’re still trying to keep existing LLMs from hallucinating.”
OpenAI is currently alpha testing advanced voice mode, which is designed to sound human (for example, pausing occasionally to draw a breath, as a human speaker would). It can also detect emotion and non-verbal cues. This advancement will help AI seem more human-like, which is important, but there’s more work to do.
Edward Tian, CEO of ZeroGPT, which detects GenAI use in text, also believes the realization of AGI will take time.
“The idea behind artificial general intelligence is creating the most human-like AI possible -- a type of AI that can teach itself and essentially operate in an autonomous manner. So, one of the most obvious challenges is creating AI in a way that allows the developers to be able to take their hands off eventually, as the goal is for it to operate on its own,” says Tian in an email interview. “Technology, no matter how advanced, cannot be human, so the challenge is trying to develop it to be as human as possible. That also leads to ethical dilemmas regarding oversight. There are certainly a lot of people out there who are concerned about AI having too much autonomy and control, and those concerns are valid. How do developers make AGI while also being able to limit its abilities when necessary? Because of all these questions and our limited capabilities and regulations at the present [time] I think that 2025 isn’t realistic.”
What Achieving AGI Will Take
Current AI -- artificial narrow intelligence (ANI) -- performs a specific task well, but it cannot generalize that knowledge to suit a different use case.
“Given how long it took to build current AI models, which suffer from inconsistent outputs, flawed data sources, and unexplainable biases, it would likely make sense to perfect what already exists rather than start working on even more complex models,” says Max Li, CEO of decentralized AI data provider Oort and an adjunct associate professor in the department of electrical engineering at Columbia University. “In academia, for many components of AGI, we do not even know why it works, nor why it does not work.”
To achieve AGI, a system needs to do more than just produce outputs and engage in conversation, which means that LLMs alone won’t be enough.
“It should also be able to continuously learn, forget, make judgments that consider others, including the environment in which the judgments are made, and a lot more. From that perspective, we’re still very far,” says Alex Jaimes, chief AI officer at AI company Dataminr, in an email interview. “It’s hard to imagine AGI that doesn’t include social intelligence, and current AI systems don’t have any social capabilities, such as understanding how their behavior impacts others, cultural and social norms, etc. And of course, AGI would require capabilities in processing and producing not just text, but other kinds of modalities -- not just ‘understanding’ sounds and visual inputs, but potentially also haptics.”
For example, GPT-4 can generate human-like text, but it can’t perform tasks that require understanding physical-world dynamics, such as robotics or sensory perception.
“To get to AGI, we need advanced learning algorithms that can generalize and learn autonomously, integrated systems that combine various AI disciplines, massive computational power, diverse data and a lot of interdisciplinary collaboration,” says Sergey Kastukevich, deputy CTO at gambling software company SOFTSWISS. “For example, current AI models like those used in autonomous vehicles require enormous datasets and computational power just to handle driving in specific conditions, let alone achieve general intelligence.”
LLMs are based on complex transformer models. While they are incredibly powerful and even exhibit some emergent intelligence, the transformer is pretrained and does not learn in real time.
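That pretrain-then-freeze distinction can be sketched with a toy model. Everything here is illustrative -- a single-parameter "model," not a real transformer -- but it shows the key point: learning only happens during the training phase, and inference afterward reads the weights without ever updating them.

```python
# Toy illustration of "pretrained, then frozen at inference."
# The model and numbers are hypothetical, not any real LLM API.

class TinyModel:
    def __init__(self):
        self.weight = 0.0  # single parameter, for illustration only

    def train_step(self, x, y, lr=0.1):
        # Gradient step on squared error -- happens only during pretraining.
        pred = self.weight * x
        grad = 2 * (pred - y) * x
        self.weight -= lr * grad

    def infer(self, x):
        # Inference only reads the weights; nothing is updated.
        return self.weight * x

model = TinyModel()
for _ in range(100):            # "pretraining" phase: weights change
    model.train_step(2.0, 4.0)  # learns the mapping y = 2x

frozen = model.weight
model.infer(10.0)               # "deployment": repeated inference...
assert model.weight == frozen   # ...leaves the weights unchanged
```

An AGI-capable system, by contrast, would need some mechanism for updating itself from new experience after deployment -- continual learning -- which is exactly what this architecture lacks.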