No God In The Machine - InformationWeek




Artificial intelligence cannot replicate human consciousness, say Irish researchers in new study.


Computers might be able to do remarkable things, but new research offers mathematical proof that they cannot replicate human consciousness.

In a recently published paper, "Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory," Phil Maguire, co-director of the BSc degree in computational thinking at National University of Ireland, Maynooth, and his co-authors demonstrate that, within the model of consciousness proposed by Giulio Tononi, the integrated information in our brains cannot be modeled by computers.
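The paper's framework, algorithmic information theory, measures the information content of an object by the length of the shortest program that produces it (its Kolmogorov complexity). That quantity is provably uncomputable, but compression gives a computable upper bound on it, which is the kind of proxy such analyses lean on. A minimal sketch using Python's standard `zlib` module (the two example inputs are illustrative, not taken from the paper):

```python
import os
import zlib

def approx_complexity(data: bytes) -> int:
    # Compressed length is a computable upper bound on Kolmogorov
    # complexity; the true quantity cannot be computed by any algorithm.
    return len(zlib.compress(data, level=9))

structured = b"ab" * 500        # highly regular: a short description exists
random_like = os.urandom(1000)  # incompressible with high probability

# The regular string compresses to a handful of bytes; the noise-like
# string stays close to its original 1,000-byte length.
print(approx_complexity(structured), approx_complexity(random_like))
```

The gap between the two outputs is the intuition behind measuring "integrated" information: structure that can be described more briefly than its raw form carries less algorithmic information.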

Consciousness is not well understood. But Giulio Tononi, a psychiatrist and neuroscientist at the University of Wisconsin, Madison, has proposed an integrated information theory (IIT) of consciousness. IIT is not universally accepted, nor does it offer a definitive map of the mind. Nonetheless, it is well regarded as a model for consciousness and has proven valuable in understanding how to treat patients in comas or other states of diminished consciousness.


One of the axioms of IIT is "Each experience is unified; it cannot be reduced to independent components." This means that a person's experience of a flower, for example, is the product of input from multiple physiological systems -- various senses and other memories -- but that product cannot be reverse engineered. Under this definition, consciousness behaves like a hash function.
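The hash-function comparison can be made concrete: a cryptographic hash is trivial to compute in one direction but computationally infeasible to invert. A minimal Python sketch using the standard `hashlib` module (the "sensory input" strings are invented for illustration):

```python
import hashlib

def digest(data: bytes) -> str:
    # One-way: computing the hash is easy, recovering the input is not.
    return hashlib.sha256(data).hexdigest()

sensory_inputs = [b"scent of a rose", b"color red", b"memory of a garden"]

# "Bind" the independent inputs into a single product.
bound = digest(b"|".join(sensory_inputs))

# The 64-hex-digit output reveals nothing about its components;
# recovering them would require brute-force search. In IIT's terms,
# the integrated whole cannot be decomposed back into its parts.
print(bound)
```

The analogy is loose — consciousness is not literally SHA-256 — but it captures the axiom's claim that the unified experience cannot be reverse engineered into its independent inputs.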


"In this paper, we prove that a process which binds information together irreversibly is non-computable," Maguire explained in an email. "If the human brain is genuinely binding information then it cannot be emulated by artificial intelligence. We've proved that mathematically."

We're sorry, Hal. We're afraid you can't do that.
Maguire concedes that the human mind might not integrate information in an irreversible process, but he says that does not match human intuition. "We argue that what people mean by the use of the concept 'conscious' is that a system cannot be broken down. If you can break it down, it isn't conscious (e.g. a light switch)."

This is not to say that artificial intelligence cannot behave intelligently or pass the Turing Test. Rather, what Maguire and his co-authors have shown is that there's something fundamentally different between consciousness, at least under Tononi's definition, and artificial intelligence.

"If you build an artificial system, you always know how you've constructed it," explained Maguire in a phone interview. "You know that it is decomposable. You know it's made up of elements that are non-integratable. We can never build a computing system and algorithm that integrates something so completely it can't be decomposed."

Asked whether there's a parallel between the unknowability of consciousness and the unknowability of quantum states, Maguire was cautious.

"Quantum mechanical effects occur when we reach the limits of measurement," he said via email. "Our definitions break down. There are properties that cannot be defined simultaneously. Similarly, if we try to model the integration of the brain, our models will break down. There will be computational properties that cannot meaningfully be defined. This possibility would rule out strong AI. And perhaps the irreversible integration of the brain is what causes quantum superpositions to collapse. But that's speculation for now."

Maguire's paper, co-authored by Philippe Moser (NUI Maynooth, Ireland), Rebecca Maguire (National College of Ireland), and Virgil Griffith (Caltech), is scheduled to be presented at the Annual Meeting of the Cognitive Science Society in Quebec, Canada, in July.


Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television.

User Rank: Ninja
5/8/2014 | 3:21:46 PM
Re: Really?
If consciousness is not well understood, then I fail to understand how technology will be able to replicate it. Yet.

But I am convinced we will get to that point. There's no denying it. Will it change the way we think about technology? Probably. I hope it's for good, and not the dismal-type scenarios we have seen in movies and on television.
User Rank: Strategist
5/8/2014 | 2:08:39 PM
And why exactly should this surprise us?
If AI ends up thinking in a fundamentally different manner than humans, should we be surprised? It will almost certainly have many more sense organs than a human. It will almost certainly have many more 'brains' involved than a human. It will almost certainly be 'smarter' (perhaps not at first, but quickly) than a human.


Why would AI want to mimic a human?

User Rank: Ninja
5/8/2014 | 1:49:17 PM
Re: Is this good news or bad news?
Nothing good? The credit fraud detection software protecting your Visa card? IBM Watson helping to diagnose cancer and other illnesses? There is a long list of these types of apps; are none of them "good" for us?

It's like you think SkyNet is inevitable if we continue down this road. But I will admit that when I read the line in the article that said "although we don't fully understand consciousness yet," it put a damper on any conclusions these guys gave.

I think that if AI ever creates self-awareness and self-preservation in the machine, that's when the science fiction movies begin to look a little more real. The scariest one I have seen is Eagle Eye. Not feasible today, but it didn't look that far off from possible reality. The computer was not trying to self-actualize, like Data in Star Trek: The Next Generation, but simply to survive when it learned it was going to be shut down.
User Rank: Apprentice
5/8/2014 | 1:07:12 PM
Is this good news or bad news?
I cannot find anything positive in the creation of Artificial Intelligence. Once we lose control of these machines, man will not be able to fix anything to stop this from continuing on to a critical end.
Thomas Claburn
User Rank: Author
5/8/2014 | 12:54:22 PM
Re: Really?
Comparing the development of digital music fidelity to the advancement of AI doesn't work as an analogy because the difference between analog and digital is well-understood. Not so human consciousness. If Tononi's model is correct -- and there's still debate about that -- then we simply can't model human consciousness on a computer. We may get something functionally similar, but we won't be able to compare AI to the conscious mind because the latter will remain a black box.
User Rank: Author
5/8/2014 | 12:42:02 PM
AI Vs. human consciousness
"Under this definition, consciousness behaves like a hash function." Interesting analogy, Tom.
User Rank: Apprentice
5/8/2014 | 12:07:25 PM
And digital music will never sound as good as analog, nor will digital photos ever come close to rivaling film! With all due respect, statements like these seem ludicrous to me. We are not even at beta in our thinking about AI, much farther away still from being able to imagine AI post-singularity. What will AI itself say about its own ability to synthesize human consciousness? You don't know the answer; neither do I. Never say never, or history will only remember you with amusement.