Revamping IT for AI System Support
Companies are stepping up their investments in artificial intelligence. But is IT ready to execute on it?
At a Glance
- IT will ultimately be the implementer and the sustainer of AI technology.
- Cornerstones of AI responsibility that are important for IT to address: development, deployment, governance, and AI support.
- The challenge for companies and for IT is that the law always lags technology.
More artificial intelligence applications and integrations will make their way into enterprises over the next few years. They will require IT to develop new workflows and skills to support them. These workflows and skills include a methodology for iteratively testing and deploying applications; a focus on data integrity and cleaning; the definition of AI rulesets; mindfulness of ethical and legal guardrails that need to be incorporated; collaborative teams with end users; and a maintenance strategy for AI.
Companies are just beginning to get their arms around this. How do they best position IT for support of development and deployment?
Where AI Is Today
In 2023, chip manufacturer AMD interviewed 2,500 global IT leaders and asked them about AI. Half of all survey respondents said they were ready to start adopting AI, but half also said they didn’t feel IT was ready for AI.
Other C-suite executives also expressed doubts about AI readiness. In 2023, CNN reported that 42% of CEOs who participated in the Yale CEO Summit not only expressed trepidation about AI, but actually felt that AI had the potential to “destroy humanity” in five to 10 years. “It’s pretty dark and alarming,” commented Yale professor Jeffrey Sonnenfeld.
The bottom line is that every company feels it will need AI in order to compete, but no one yet understands AI’s many unknowns or what it will bring to the workplace or the world.
IT stands in the middle of this fear and confusion because it will ultimately be the implementer and the sustainer of AI technology.
What IT Should Do Now
Some of the early implementations of AI have actually been in areas of IT operations, such as security management, workflow management, and automated resource allocation. All of these are mature IT technical disciplines, so there isn’t much danger from unknown variables, circumstances, or decisions that AI makes. This keeps both IT and enterprise risks from AI low.
The use of AI in IT operations also gives IT early experience with the emerging technology that can be put to use as companies deploy AI in non-IT areas. However, the early advantages from AI that IT has gained from its own applications could stop there.
For example, what happens when a business AI application begins producing errant results or fails in operation? Who is responsible? How do you know that the AI application and what it teaches itself through machine learning (ML) will continue to adhere to the legal, ethical and governance standards that the company sets for itself? Finally, who makes the decision that an AI system is finally mature enough to go live?
In recent conversations I’ve had with CIOs, most say that these questions haven’t been addressed in their companies.
Addressing the 4 Cornerstones of IT’s AI Responsibility
There are four cornerstones of AI responsibility that are important for IT to address: development, deployment, governance, and support.
AI development. Like Agile, AI development methodology will be iterative, but unlike Agile, AI projects will never end. This is because business conditions, priorities, and information continuously change, as does how we -- and AI -- think about them.
Once a business identifies a use case, an AI model is designed, built, and tested against a large and varied body of data from diverse sources. The gold standard for AI accuracy is that it must agree 95% of the time with what human subject matter experts would conclude from the same data. In some cases, such as predicting long-term general trends, users might be satisfied with a 70% accuracy rate. In all cases, an AI accuracy metric must be defined.
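The accuracy metric described above can be sketched in a few lines. This is an illustrative example only; the function names and the 95% default are assumptions drawn from the gold standard mentioned here, not from any specific library.

```python
# Illustrative sketch: measure agreement between an AI model's
# conclusions and human subject matter experts on the same data.
# The 0.95 default mirrors the gold standard described above.

def agreement_rate(model_outputs, expert_labels):
    """Fraction of cases where the model matches the human experts."""
    if len(model_outputs) != len(expert_labels):
        raise ValueError("model and expert results must align")
    matches = sum(m == e for m, e in zip(model_outputs, expert_labels))
    return matches / len(expert_labels)

def meets_accuracy_target(model_outputs, expert_labels, target=0.95):
    """True if the model agrees with experts at least `target` of the time."""
    return agreement_rate(model_outputs, expert_labels) >= target
```

For a long-term trend predictor, the same check would simply be run with `target=0.70`, which is why defining the metric per use case matters.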
As part of the development process, IT, data science teams, and end users build in machine learning algorithms that are able to further refine AI models and thinking as new patterns of data and their implications are uncovered by the AI software.
To attain 95% accuracy in decisions and outcomes, the AI must work with high quality, clean data. It will be IT’s job to prepare this data, and to vet data that is imported from other vendors.
Vendors must also be verified for clean data, security, and governance. These tasks are likely to fall to IT.
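Vetting an incoming vendor data feed might look something like the sketch below. The specific checks (required fields, null rate, duplicates) and thresholds are hypothetical examples; real vetting criteria would come from the company's own data quality and governance standards.

```python
# Illustrative sketch of vetting a vendor data feed before it is used
# for AI training. Checks and thresholds are assumptions, not a standard.

def vet_records(records, required_fields, max_null_rate=0.01):
    """Return a list of human-readable problems found in vendor records."""
    problems = []
    seen = set()
    null_count = 0
    for rec in records:
        for field in required_fields:
            if field not in rec:
                problems.append(f"missing field: {field}")
            elif rec[field] is None:
                null_count += 1
        key = tuple(sorted(rec.items()))        # detect exact duplicates
        if key in seen:
            problems.append(f"duplicate record: {rec}")
        seen.add(key)
    total = len(records) * len(required_fields) or 1
    if null_count / total > max_null_rate:
        problems.append(f"null rate {null_count / total:.2%} exceeds limit")
    return problems
```

A feed that returns an empty problem list passes this first gate; security and governance verification would be separate, largely contractual steps.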
AI deployment. An AI system can be deployed once it reaches its target accuracy metric. Deployment can be fairly straightforward if the AI is to be used as a standalone system. However, if AI must be integrated into other applications, there is an impact to existing systems and business processes that must be considered.
For instance, if an AI engine is to be used as an assessment tool to see what types of loans an applicant can qualify for, the software needs to be integrated into the loan underwriting process. This will entail changes to the loan underwriting workflow, to how underwriters work, and to other systems.
At first glance, this looks routine: users, IT business analysts, and developers perform many system/process integrations, so changing a loan decision process shouldn’t be that hard. However, AI integration is different. AI rulesets are under constant revision, and this could introduce glitches into surrounding processes and systems at any time. The changes could also transform employee training (and retraining) into a continuous process. IT installation and ongoing support must be ready for this.
AI governance. The security and governance standards companies develop come from within, from regulators, or from auditors. The risk with AI is that it can overstep these guidelines as it retrains itself on new data.
“It’s important for everybody to understand how fast this [AI] is going to change,” said Eric Schmidt, former CEO and chairman of Google. “The negatives are quite profound.” Among the concerns is that AI firms still had “no solutions for issues around algorithmic bias or attribution, or for copyright disputes now in litigation over the use of writing, books, images, film, and artworks in AI model training. Many other as yet unforeseen legal, ethical, and cultural questions are expected to arise across all kinds of military, medical, educational, and manufacturing uses.”
The challenge for companies and for IT is that the law always lags technology. There will be few hard and fast rules for AI as it advances relentlessly. So, AI runs the risk of running off ethical and legal guardrails. In this environment, legal cases are likely to arise that define case law and how AI issues will be addressed. The danger for IT and companies is that they don’t want to become the defining cases for the law by getting sued.
CIOs can take action by raising awareness of AI as a corporate risk management concern to their boards and CEOs. In daily work, if there is a question about AI ethics or legality, legal and regulatory experts should immediately be consulted.
AI support. AI support from IT will come in several different forms:
- Support for AI integration issues with systems and business processes.
- Support, fixes, and retesting for AI models when those models begin to lose accuracy.
- A failover strategy for AI that must be temporarily suspended from production because of failing accuracy.
Removing any AI app from production will require both systems and employees to fail over to alternate business processes while the AI gets re-tuned. Because IT has little experience with supporting AI in production or failing over to alternate business processes, there are few ways to calculate a mean time to recovery for AI.
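The accuracy-loss trigger for such a failover can be sketched as a rolling monitor. Everything here is an assumption for illustration: the window size, the threshold, and the class name are hypothetical, and in practice the "reviewed outcome" would come from human review of the AI's live decisions.

```python
from collections import deque

# Illustrative sketch: track a deployed model's rolling accuracy against
# later human-reviewed outcomes, and flag when it has degraded enough
# that the AI should be suspended and business processes failed over.

class AccuracyMonitor:
    def __init__(self, window=100, min_accuracy=0.95):
        self.results = deque(maxlen=window)   # most recent decisions only
        self.min_accuracy = min_accuracy

    def record(self, model_output, reviewed_outcome):
        """Log whether the model's decision matched the reviewed outcome."""
        self.results.append(model_output == reviewed_outcome)

    @property
    def accuracy(self):
        if not self.results:
            return 1.0  # no evidence of degradation yet
        return sum(self.results) / len(self.results)

    def should_fail_over(self):
        """True when rolling accuracy falls below the minimum."""
        return self.accuracy < self.min_accuracy
```

A monitor like this only answers *when* to pull the AI; the harder, unsolved part the article describes is having the alternate manual or legacy process ready to absorb the work.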
Summary
AI will require new development and support strategies from IT, and it will also draw on new skills.
Vendor (and vendor data) management will be paramount. On the administrative side, IT will need to be proactive with data vendors to ensure that data quality, governance, and security standards are met. Data preparation techniques like ETL (extract, transform, load) will be standard in AI development projects. IT business analysts who interact with users and with data scientists will need to be productively conversant with both groups during AI development, deployment, and support. Plus, ongoing maintenance of AI systems will seem more like a data truth sustainability effort than simply oiling the wheels of systems. Collectively, these realities will force changes in how IT develops, deploys, and maintains applications, and how it interacts with users, attorneys, regulators, and vendors.
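The ETL step mentioned above can be as small as the sketch below: pull rows from a raw feed, normalize and filter them, and load the result. The CSV layout, field names, and cleaning rules are all hypothetical.

```python
import csv
import io

# Minimal illustrative ETL (extract, transform, load) pass over a raw
# CSV feed -- the kind of data preparation step AI projects depend on.
# Field names and cleaning rules are assumptions for this example.

def etl(csv_text):
    """Extract rows from CSV text, clean them, and load them into a list."""
    rows = csv.DictReader(io.StringIO(csv_text))           # extract
    cleaned = []
    for row in rows:
        amount = (row.get("amount") or "").strip()
        if not amount:                                     # drop incomplete rows
            continue
        cleaned.append({                                   # transform
            "customer": row["customer"].strip().title(),
            "amount": round(float(amount), 2),
        })
    return cleaned                                         # load (here: in memory)
```

Production pipelines would add logging, schema validation, and a real load target, but the extract/transform/load shape stays the same.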
Now is the time for CIOs to plot IT development, deployment, and support directions while AI is still nascent.