Generative AI: Modeling the Right Integration Plan

To successfully adopt and integrate generative AI, organizations should focus on more than technology. People, operations, and governance matter, too.

Nathan Eddy, Freelance Writer

May 1, 2023


The new era of generative artificial intelligence will change how enterprises approach digital transformation.

To accelerate adoption, businesses must understand how they can build with generative AI and what they can build with the technology. This means crafting a roadmap with a business-driven mindset and a people-first approach to identify priority use cases.

A recent KPMG survey of 225 US executives at companies with revenue of $1 billion or more found that while two-thirds of respondents believe generative AI will have a major impact on their organization, 60% say they are still a year or two away from implementing their first solution.

The survey also found that fewer than half of respondents believe they have the right technology, talent, and governance in place to successfully implement generative AI.

Organizations Unprepared for Generative AI

Todd Lohr, KPMG's US technology consulting leader, explains that even though AI has been around for decades, the introduction of generative AI appears to have caught organizations off guard.

“Based on the survey findings and our ongoing discussions with clients, it’s clear most organizations aren’t fully prepared to adopt generative AI,” he says.

However, organizations that can swiftly adapt their strategies and processes to embed it will establish a competitive advantage -- and see that gap widen as efficiencies from generative AI continue to accelerate.

“As organizations institutionalize generative AI, it’s critical that they also establish a responsible AI framework to mitigate the risks and ethical considerations associated with it,” he adds.

Wayne Butterfield, partner with global technology research and advisory firm ISG, says the correct first step for enterprise-wide usage is generative AI built within enterprise-grade products.

“With any roadmap, organizations need to avoid chasing the shiny object and start with the problems they are trying to solve,” he says. “To date, most generative AI use cases save time when compared with an entirely human-led creation process.”

He explains that the time savings AI delivers still apply to a relatively narrow set of tasks. A specific set of key performance indicators will be critical to demonstrating the success of early-stage generative AI implementations.

Lan Guan, global lead, data and AI at Accenture Cloud First, adds that the adoption of generative AI brings fresh urgency to the need for every company to have a robust responsible AI compliance program in place.

“This includes controls for assessing the potential risk of generative AI use cases at the design stage and a means to embed responsible AI approaches throughout the business,” she says.

Implementing AI Roadmap Requires Multiple Stakeholders

From Butterfield's perspective, key stakeholders must include -- at a minimum -- members of IT leadership, data security, automation, and the C-suite. “Getting this right is incredibly important, as the repercussions could include regulatory fines and IP infringement cases,” he says.

Many current generative AI tools have been trained on predominantly unknown data sources, meaning generated content may look like a great outcome on the surface but prove to be less than helpful, trustworthy, or ethical.

Lohr points out when it comes to establishing the group of stakeholders responsible for developing a generative AI adoption strategy, there are no silver bullets.

“There is no one-size-fits-all approach to implementing generative AI in an organization,” he explains. “Developing a generative AI adoption strategy will require input from multiple stakeholders -- particularly senior leadership, IT, legal, and HR -- to ensure the policies, procedures, and practices are ethical and effective.”

He adds that early investment in data and analytics provides a strong foundation for adoption.

“Organizations with experience in data engineering, data science, and machine learning operations will be well-equipped to develop, train, and deploy AI models,” Lohr says.

He explains generative AI experiments are emerging across various departments within organizations.

“Initially, most organizations will likely explore it through decentralized experiments to understand its potential and identify suitable use cases,” he says. “Only a few organizations may choose to establish generative AI centers of excellence or dedicate an entire function to it from the outset.”

Taking a Collaborative, People-First Approach

Lohr points out widespread adoption of AI requires a collaborative approach, with business, IT, and AI stakeholders working together.

This “power of three” ensures AI initiatives align with business objectives, are supported by the right technology infrastructure, and leverage the latest advancements in AI.

Guan advises tech and IT leadership to take a people-first approach to driving the adoption of generative AI.

“For businesses to take the leap, change agent opinion leaders are needed to help drive this effort forward,” she notes. “They, in turn, will need the support of leadership across the organization -- from operations to human resources and regulatory affairs teams.”

This means building talent in technical competencies like AI engineering and enterprise architecture, and training people across the organization to work effectively with AI-infused processes.

Robust AI Governance of Critical Import

Butterfield says that in the short term, generative AI governance should ensure the right tools with built-in generative AI are made available to the business, rather than allowing a free-for-all with the open-source tools currently making headlines.

“Short-term governance will mean keeping a tight lid on anything that will be sent externally,” he explains. “This will limit publicly airing any mistakes, while internal use cases are understood and the ‘explainability’ of the various technologies is improved upon.”

He adds that new, enterprise-grade assistants like Microsoft’s Copilot will be a next step toward wider rollout, but only once the value of such co-creation capability is understood.

“A tight collaboration between the key stakeholders will be an important part of any governance model for the next 12 months at a minimum,” Butterfield says.

Guan says rather than implementing reactive compliance strategies, companies must proactively develop responsible AI capabilities driven from the top down. “Guided by a people-first approach, responsible AI must be CEO-led with a focus on training and awareness,” she says.

The goal should be to ensure that execution and compliance align with the company's core values and principles while being flexible enough to evolve with the fast pace of changing technology.

“AI governance should be responsible by design and should not be an afterthought,” Guan says. “It should be something you address from day one.”


About the Author

Nathan Eddy

Freelance Writer

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.

