What Just Broke?: Should AI Come with Warning Labels?

AI still has a lot to learn, but does the zeal to explore its possibilities overshadow its encoded flaws to the detriment of an unaware public?

Joao-Pierre S. Ruth, Senior Editor

March 3, 2023

Of late I have sat in on some conversations in which people who are obviously much smarter than I am, and who enjoy much higher pay grades, discussed the transcendence and transformation of AI -- generative AI in particular.

While I cannot name-drop or repeat their perspectives, I will speak to some ongoing concerns I have about such discussions of AI.

What I do not hear enough about are the built-in shortcomings of AI -- and not just the breaking points where algorithms go belly-up. I am talking about flaws in how AI is trained in the first place, the bias that gets baked in from the start: problems that are features rather than bugs in the system.

Oftentimes there is some general acknowledgement that AI is still in development and has much more to learn before it can take over the world. Missing from that acknowledgement is a detailed breakdown of how AI might function as coded yet produce flawed or tainted results by design.
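To make that point concrete, consider a minimal, hypothetical sketch in Python (the dataset, group labels, and approval threshold are all invented for illustration). Nothing in the code malfunctions; the skewed outcome travels straight from the historical training data into the model's decisions:

```python
from collections import defaultdict

# Hypothetical historical loan decisions as (group, approved) pairs.
# Group "B" was approved far less often for reasons unrelated to merit.
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 30 + [("B", False)] * 70
)

def train(records):
    """'Train' by memorizing each group's historical approval rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += approved  # True counts as 1
        counts[group][1] += 1
    return {group: approvals / total
            for group, (approvals, total) in counts.items()}

def predict(model, group):
    """Approve whenever the learned rate clears 50% -- no bug anywhere."""
    return model[group] >= 0.5

model = train(history)
for group in ("A", "B"):
    decision = "approve" if predict(model, group) else "deny"
    print(group, round(model[group], 2), "->", decision)
# Output: A 0.8 -> approve
#         B 0.3 -> deny
```

The toy model functions exactly as coded, which is the trouble: the flaw is not a crash or a bug but a faithful reproduction of the bias already present in the data it learned from.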

Listen to the full episode of "What Just Broke" for more.

About the Author

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Follow him on Twitter: @jpruth.

