Esther Dyson, Technologists Discuss ChatGPT’s Pros and Cons

The noted commentator spoke, along with founders of AI-driven startups and a representative from Microsoft, about the ups and downs of the rise of generative AI.

Joao-Pierre S. Ruth, Senior Editor

June 29, 2023

Adam Bly, Nasrin Mostafazadeh, Rushil Vora, and Ivy Cohen at the Disruptive Technologists in NYC meetup. (Photo: Joao-Pierre S. Ruth)

Investor Esther Dyson shared some of her sage perspective on ChatGPT and other aspects of the AI surge as part of a panel hosted this week by the Disruptive Technologists in NYC meetup group at Microsoft Times Square.

Dyson, founder of Wellville, joined via video stream; founders from Verneek and System, two startups Dyson has invested in, appeared in person on the panel alongside a Microsoft product manager. Ivy Cohen, CEO of Ivy Cohen Corporate Communications, moderated.

Commenting on her backing of System and Verneek, Dyson said, “I like them both because they are taking ChatGPT and other large language models and applying them to useful -- to real models of the world.” She also recalled writing in the 1980s about the roots of what became current AI taking shape. “Now, finally, it’s all coming to fruition, and I think the real value is going to come in the models of the world that are now much more easily accessible because of the large language models.”

Dyson described ChatGPT as something of a double-edged sword that, for the moment, can make it easier to instigate harm. “We are in a really challenging time in the world at large right now,” she said. “If you look at what’s happening in Russia and Ukraine, it’s so much easier to destroy than to build and that’s true both in the case of crazy countries and our society at large.”

The real danger, Dyson said, is that bad business models, bad actors, and politicians could take advantage of ChatGPT to do far more damage than the good the technology can deliver.

“There’s no real solution,” she said. “There are always going to be evil people. The thing I would suggest is that we focus a lot more on training our children than our AIs and teach them to be self-aware.”

Sifting Out AI Hype and Fear

One of the panelists, Nasrin Mostafazadeh, co-founder of Verneek, is an AI scientist whose startup developed the Quin AI platform, which answers consumers’ personalized questions. She offered a counterpoint to some of the doomsaying and hype currently associated with AI. “Isn’t it exhausting how much we’re all talking about AI?” she asked. “I think the main reason we’re all talking so much more about these things -- there are all these real threats, real challenges ranging from misinformation to displacement on the job front.”

Mostafazadeh pointed out, though, that the development of AI did not happen overnight -- it took many decades. Current fears and conversations about ChatGPT may stem from a lack of public awareness. “Not everyone was educated about where we come from, where we’re going,” she said. Mostafazadeh stressed the importance of the human element: chatbots should not replace speaking with other people on personal and emotional matters, such as turning to AI as a therapist. She also dismissed as overblown the hyperbole that paints AI as a sentient conqueror of the world.

Adam Bly, CEO of System, said his company uses AI to extract how things in the world function in order to sort out the complexities of systems, from healthcare to climate change. He said System also recently launched an AI-powered search engine that indexes scholarly research.

Bly acknowledged that AI is attracting attention at a time when the public faces questionable information from digital sources. “Over the past several years, we have witnessed an assault on truth and facts in this country and around the world,” he said.

Generative AI is on the rise, Bly said, at a time when the information culture, the epistemological principles governing how society makes decisions and policy, and the use of facts are all at risk.

Rushil Vora, a product manager with Microsoft, said the speed at which AI has gained broad recognition is pushing the technology mainstream. “ChatGPT really brought in and widened the audience for it,” he said. “The bigger challenge is how quickly can we educate more broadly in how to use it; how quickly can we spread information on what it is -- what it is not.”

Figuring Out Guardrails for AI

Some of the concerns about AI might have been taken in stride if regulators were better prepared to understand the technology. Bly said government often fails the citizenry, in particular the entities that should anticipate and regulate such emerging technologies as AI. “There once was a body in Congress called the Office of Technology Assessment and it was born to precisely provide Congress with the foresight and the subject matter expertise in science and technology so that it could, appropriately, rationally regulate the issues of our time,” he said. “It was defunded in 1995.”

That left Congress without a dedicated science and technology advisory function, Bly said, meaning the regulatory process often defers to industry and to tech CEOs who testify and lobby before Congress, rather than to independent scientific and technological expertise within Congress. “We need strong, rational government,” he said.

Mostafazadeh agreed that more astute regulatory involvement could be beneficial in this arena, though she cautioned against oversimplification and misguided assumptions about the technology. “AI does not equal ChatGPT,” she said. “AI does not equal generative AI whatsoever.”

Mostafazadeh said the field of AI, as a whole, can make a positive impact and should not be lumped into the same bucket by regulators.

AI Might Enhance Careers

Predictions that truck drivers and other blue-collar workers would be displaced by AI have not come to pass, Mostafazadeh said, though other types of professionals may be more likely to face job displacement.

Bly said that the benefit of automating tasks through AI should not be conflated with automating and eliminating job functions. “The things that we automate will create space for leveling up these functions,” he said. “If a doctor, if a clinician or a biomedical researcher can save hours of time reading through hundreds of peer-reviewed studies to get a synthesis of all that research, that frees them up to spend more time at the point of care.”

AI would not transition medicine away from the doctor-patient interaction, Bly said, or lawyer-client interaction. “We’re talking about transforming fields to be able to move up the cognitive ladder and ultimately find new spaces for value creation,” he said.

Dyson offered some sobering perspective on companies that might rush to get in on AI as the market becomes saturated with buzz about it. “In the short term, if you invest in AI, you’re going to make a lot of money, but you better sell out pretty fast,” she said. “It’s a world of hype. I don’t think the current enthusiasm will last and I think there’s going to be a lot of duplication.”


About the Author

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Follow him on Twitter: @jpruth.


