Artificial General Intelligence in 2025: Good Luck With That

AI experts have said it would likely be 2050 before AGI hits the market. OpenAI CEO Sam Altman says 2025, but it’s a very difficult problem to solve.

Lisa Morgan, Freelance Writer

August 27, 2024

A few years ago, AI experts were predicting that artificial general intelligence (AGI) would become a reality by 2050. OpenAI, along with the rest of Big Tech, has been pushing the art of the possible, but despite Sam Altman’s estimate of 2025, AGI is unlikely to arrive that soon. 

“We can’t presume that we’re close to AGI because we really don’t understand current AI, which is a far cry from the dreamed-of AGI. We don’t know how current AIs arrive at their conclusions, nor can current AIs even explain to us the processes by which that happens,” says HP Newquist, author of The BrainMakers and executive director of The Relayer Group, a consulting firm that tracks the development of practical AI. “That’s a huge gap that needs to be closed before we can start creating an AI that can do what every human can do. And a hallmark of human thinking, which AGI will attempt to replicate, is being able to explain the rationale for coming up with a solution to a problem or an answer to a question. We’re still trying to keep existing LLMs from hallucinating.” 

OpenAI is currently alpha testing an advanced voice mode designed to sound human, pausing occasionally to draw a breath the way a person does when speaking. It can also detect emotion and non-verbal cues. This advancement will help AI seem more human-like, which is important, but there’s more work to do. 


Edward Tian, CEO of ZeroGPT, which detects GenAI use in text, also believes the realization of AGI will take time. 

“The idea behind artificial general intelligence is creating the most human-like AI possible -- a type of AI that can teach itself and essentially operate in an autonomous manner. So, one of the most obvious challenges is creating AI in a way that allows the developers to be able to take their hands off eventually, as the goal is for it to operate on its own,” says Tian in an email interview. “Technology, no matter how advanced, cannot be human, so the challenge is trying to develop it to be as human as possible. That also leads to ethical dilemmas regarding oversight. There are certainly a lot of people out there who are concerned about AI having too much autonomy and control, and those concerns are valid. How do developers make AGI while also being able to limit its abilities when necessary? Because of all these questions and our limited capabilities and regulations at the present [time], I think that 2025 isn’t realistic.” 

What Achieving AGI Will Take 

Current AI -- artificial narrow intelligence (ANI) -- performs a specific task well, but it cannot generalize that knowledge to a different use case. 


“Given how long it took to build current AI models, which suffer from inconsistent outputs, flawed data sources, and unexplainable biases, it would likely make sense to perfect what already exists rather than start working on even more complex models,” says Max Li, CEO of decentralized AI data provider Oort and an adjunct associate professor in the department of electrical engineering at Columbia University. “In academia, for many components of AGI, we do not even know why it works, nor why it does not work.” 

To achieve AGI, a system needs to do more than just produce outputs and engage in conversation, which means that LLMs alone won’t be enough. 

“It should also be able to continuously learn, forget, make judgments that consider others, including the environment in which the judgments are made, and a lot more. From that perspective, we’re still very far,” says Alex Jaimes, chief AI officer at AI company Dataminr, in an email interview. “It’s hard to imagine AGI that doesn’t include social intelligence, and current AI systems don’t have any social capabilities, such as understanding how their behavior impacts others, cultural and social norms, etc. And of course, AGI would require capabilities in processing and producing not just text, but other kinds of modalities -- not just ‘understanding’ sounds and visual inputs, but potentially also haptics.” 


For example, GPT-4 can generate human-like text, but it can’t perform tasks that require an understanding of physical-world dynamics, such as robotics or sensory perception. 

“To get to AGI, we need advanced learning algorithms that can generalize and learn autonomously, integrated systems that combine various AI disciplines, massive computational power, diverse data and a lot of interdisciplinary collaboration,” says Sergey Kastukevich, deputy CTO at gambling software company SOFTSWISS. “For example, current AI models like those used in autonomous vehicles require enormous datasets and computational power just to handle driving in specific conditions, let alone achieve general intelligence.” 

LLMs are based on complex transformer models. While they are incredibly powerful and even exhibit some emergent intelligence, a transformer is pretrained and does not learn in real time. 
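To make that limitation concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and the small gpt2 checkpoint purely for illustration. The model can condition on whatever appears in its prompt, but nothing at inference time updates its weights.

```python
# Minimal sketch: a pretrained transformer generates text with frozen weights.
# Assumes the Hugging Face "transformers" library and the small "gpt2" model,
# chosen only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: no gradient updates, no learning

prompt = "Artificial general intelligence will"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():  # weights stay exactly as they were after pretraining
    output_ids = model.generate(**inputs, max_new_tokens=20)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Run this twice with new facts added to the prompt: the model can use them
# in-context, but its parameters never change between calls.
```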

“The transformer model’s reasoning and planning capabilities are still fairly basic, although there is some progress with agentic systems. In fact, this is one of the hottest areas of generative AI and a major focus of research. A user can set forth an objective, and the AI agent can come up with a plan and carry out the tasks,” says Abhi Maheshwari, CEO of AI-driven platform Aisera. “For AGI, there will need to be some breakthroughs with AI models. They will need to be able to generalize about situations without having to be trained on a particular scenario. A system will also need to do this in real time, just like a human can when they intuitively understand something.”  
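For a feel of what “agentic” means in practice, here is a toy sketch of the objective-to-plan-to-tasks loop Maheshwari describes. The llm() function is a stubbed-out stand-in for a call to a real language model, and the helper names are illustrative, not any vendor’s actual API.

```python
# Toy sketch of an agent loop: take an objective, ask a model for a plan,
# then execute each step. llm() is a placeholder for a real model call.
def llm(prompt: str) -> str:
    # Placeholder: a real agent would call a hosted or local model here.
    if prompt.startswith("List the steps"):
        return "1. Gather requirements\n2. Draft a solution\n3. Review the result"
    return f"(model output for: {prompt[:40]}...)"

def plan_objective(objective: str) -> list[str]:
    # Ask the model to break the objective into ordered steps.
    steps = llm(f"List the steps needed to: {objective}")
    return [s.strip() for s in steps.splitlines() if s.strip()]

def run_step(step: str, context: str) -> str:
    # Carry out one step (in practice: call a tool or the model again).
    return llm(f"Given progress so far:\n{context}\nCarry out this step: {step}")

if __name__ == "__main__":
    objective = "summarize last quarter's support tickets"
    context = ""
    for step in plan_objective(objective):
        result = run_step(step, context)
        context += f"\n{step} -> {result}"
    print(context)
```

The point of the sketch is the structure, not the stub: today’s agents still lean on a pretrained model at every step, which is why generalizing to unseen scenarios in real time remains the open problem Maheshwari describes.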

In addition, Maheshwari says AGI will likely require a new hardware architecture, such as quantum computing, since GPUs will probably not be sufficient. That architecture will also need to be much more energy efficient and not require massive data centers. 

Other Considerations 

Adnan Masood, chief AI architect at digital transformation services company UST, says AGI will need to be able to do several things that aren’t possible just yet. Specifically: 

  • Generalization: AGI trained on medical data could also diagnose a mechanical failure. 

  • Open-ended learning: AGI will need to move beyond human guidance and reinforcement learning on defined tasks and pursue knowledge on its own. 

  • Causal reasoning: AGI should be able to explain cause and effect, such as concluding that a crop failed because the soil was infected with bacteria. 

  • Sensory input and processing: AGI will need to better understand context, such as the environment in which it is operating. 

LLMs are beginning to do causal inference and will eventually be able to reason. They’ll also have better problem-solving and cognitive capabilities based on the ability to ingest data from multiple sources. 

“One of the gaps we have right now is the physical robotics aspect of things. Robotics hasn’t had its ChatGPT moment yet, so picking up a water bottle and doing something with it requires a significant amount of training,” says Masood. “I visit the MIT Media Lab and Robotics Lab every quarter and I spend a lot of time with the researchers. Reinforcement learning hasn’t really panned out there.” 

Another obvious requirement is safety regulation. 

“The idea that a machine or robot could possess intelligence that matches a human’s is both exciting and terrifying at the same time. On one hand, it will free up humans to focus on more meaningful, desirable jobs or pursuits and could lead to a type of utopian society where differences are resolved, wars cease, and humankind lives a more peaceful existence,” says Oort’s Li. “On the other hand, AGI-powered robots might turn against humans, like Sam Altman predicted, and could potentially exterminate them. Knowledge is power and, while humans may have stayed at the top of the food chain by possessing the most [of it] in our planet’s history, it’s chilling to contemplate what could happen once that is no longer the case.” 

While many data scientists have been quick to dismiss an AI-caused dystopian future in years past, they’ve done so in the context of ANI. Now many are saying that though AGI won’t be realized for a decade or more, it will need to be regulated to ensure that it operates in the best interests of humans.  

“AI regulation is going to be challenging, and AGI regulation is going to be even more challenging,” says UST’s Masood. “The biggest risk is misalignment. If [AI’s] goals diverge from human values, that can lead to harmful outcomes.”  

Another concern is a kill switch. Should AGI go rogue, humans would want to shut it down, but would the system defend itself against what it perceives as an existential threat? 

In addition, bias and discrimination are already huge issues that would persist, since humans are their root cause. Moreover, higher levels of automation will have an economic impact that isn’t fully understood just yet. 

“What would policy people do if we have widespread automation all over the place causing job displacement and economic [in]equality?” says Masood. “Security threats [are] another one. If AGI is exploited for malicious purposes, what policy can you put in place so that people or nation states don’t start attacking each other?” 

And finally, there’s the issue of known unknowns versus unknown unknowns, the unintended consequences. 

“Technological policies and international governments need to have a better understanding of AI models, how they are designed, and how they operate, especially as AI tools become more capable and move into the AGI era,” says Ron Reiter, CTO and co-founder of cloud-native data security platform provider Sentra. “When it comes to the power of AI, we’re only just beginning to scratch the surface of what’s possible.  However, at some point, AGI will be able to teach itself, learning from its mistakes to the point where some researchers believe it will become infinitely smart. In anticipation of this future, the international community needs to develop a series of ethical AI practices that can be enforced and promote a better future for mankind.” 

About the Author

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.

