AI-based test automation will be a key feature of DevOps in enterprise multi-cloud environments in the 2020s.

James Kobielus, Tech Analyst, Consultant and Author

October 7, 2019


Testing is a thankless chore in all software engineering initiatives. It’s a burden that developers and operations personnel usually wish to offload to automated tooling.

Automation is an ongoing trend in software development and operations (DevOps). Increasingly, artificial intelligence is at the heart of automated software testing in the new world of cloud-native computing and 24x7 DevOps workflows. As modern applications sprawl across complex multi-cloud and service-mesh environments, AI-driven DevOps automation will become ever more essential. In fact, it’s integral to the long-running developer shift toward “low code,” “no code,” and other augmented programming methodologies and tools.

Very few IT practitioners now doubt that AI can automate code testing faster, better, and more cheaply than manual methods. Over the past several years, adoption of AI-driven software-test automation practices has grown, as evidenced by research such as the World Quality Report 2018-2019. In a continuous integration/continuous deployment (CI/CD) context, AI can drive automated testing of source code changes upon check-in and notify development and operations personnel when those tests fail. It can be applied to functional, performance, and usability testing, as well as to identifying and resolving test failures.
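To make that concrete, here is a minimal sketch, in Python, of what an AI-assisted check-in hook might look like. Every name in it is an illustrative assumption rather than any vendor’s actual API: predict_failure_risk stands in for a model trained on historical pass/fail data, notify_team for whatever alerting channel a team uses, and pytest is assumed as the test runner.

```python
# Minimal sketch of an AI-assisted check-in test hook. All names are
# hypothetical placeholders, not a real CI vendor's API.
import subprocess
from typing import List

def predict_failure_risk(test_path: str, changed_files: List[str]) -> float:
    """Stand-in for a model trained on historical pass/fail data that
    scores how likely a test is to fail given this check-in's changes."""
    # A real implementation would load a trained classifier; this crude
    # heuristic just treats tests that share a name with a changed file
    # as riskier.
    stems = {f.rsplit("/", 1)[-1].split(".")[0] for f in changed_files}
    return 0.9 if any(s and s in test_path for s in stems) else 0.1

def notify_team(failed_tests: List[str]) -> None:
    """Placeholder for alerting development and operations personnel."""
    print("Check-in tests failed: " + ", ".join(failed_tests))

def run_checkin_tests(changed_files: List[str], all_tests: List[str]) -> List[str]:
    # Run the riskiest tests first so defects surface as early as possible.
    ranked = sorted(all_tests,
                    key=lambda t: predict_failure_risk(t, changed_files),
                    reverse=True)
    failures = [t for t in ranked
                if subprocess.run(["pytest", t]).returncode != 0]
    if failures:
        notify_team(failures)
    return failures
```

The model’s job here is simply to reorder work: the riskiest tests run first, so a bad check-in is flagged, and the team notified, as early in the run as possible.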

AI’s chief uses in DevOps test automation include the following:

Coverage: AI-driven tooling can help assure continued full coverage of all testing scenarios. Using adaptive machine learning, it can update tests as code changes, tweak tests for statistical outlier cases, leverage optical recognition of pattern-based user-interface controls to make test automation more resilient to UI changes, and flag anomalous, unused, and redundant test cases that signal gaps in test-case portfolios.

Speed: AI-driven tooling can automate the scripting, execution, and analysis of tests as fast as code gets deployed or changed. It can accelerate detection of software defects, speed the feedback loop on defects from operations back to development, and expand the set of test cases that can be executed in parallel on every run.

Quality: AI-driven tooling can identify software quality issues, apply test inputs, validate outputs, and emulate users or other conditions. It can automate the accuracy, transparency, repeatability, and efficiency of software tests. It can find and fix broken tests and verify that the user interface renders correctly for the end user. And it can leverage supervised, unsupervised, and reinforcement learning to detect defects proactively, predict failure points, and optimize testing.

Optimization: AI-driven tooling can leverage historical quality-assurance data to identify the most appropriate test scenarios. It can optimize test orchestration for each release and prioritize tests by automatically identifying flaky failures that don’t indicate a problem in the application under test, as sketched below. And it can assess pass/fail outcomes for complex and subjective tests.
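To illustrate the optimization idea, here is a minimal sketch that assumes a simple record format, tuples of (test name, whether it failed, whether the failure reflected a real defect), drawn from past runs. It ranks tests by historical failure rate while discounting flaky tests whose failures rarely correspond to genuine defects:

```python
# Minimal sketch of history-based test prioritization. The record
# format and the flakiness rule are assumptions for illustration.
from collections import defaultdict

def prioritize(history):
    """history: iterable of (test_name, failed, was_real_defect) records.
    Returns test names ordered most-informative-first."""
    stats = defaultdict(lambda: {"runs": 0, "fails": 0, "real": 0})
    for test, failed, real_defect in history:
        s = stats[test]
        s["runs"] += 1
        s["fails"] += failed
        s["real"] += failed and real_defect

    def score(test):
        s = stats[test]
        fail_rate = s["fails"] / s["runs"]
        # Fraction of failures that pointed at a genuine defect; flaky
        # tests (frequent failures, no defects) get pushed down the queue.
        signal = s["real"] / s["fails"] if s["fails"] else 1.0
        return fail_rate * signal

    return sorted(stats, key=score, reverse=True)

history = [
    ("test_login", True, True),
    ("test_login", False, False),
    ("test_flaky_ui", True, False),
    ("test_flaky_ui", True, False),
    ("test_checkout", False, False),
]
print(prioritize(history))  # test_login outranks the flaky UI test
```

A production tool would fold in far richer signals (code-change coverage, test age, runtime cost), but the core move is the same: spend scarce test-execution time where it is most likely to reveal a real problem.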

Over the next several years, it’s very likely that leading enterprise application tool vendors will add AI-driven software test automation to their solutions. Already, many offer varying degrees of machine-learning-augmented coding as a rapid application development capability, and using the same technology to automate testing of that code would be a logical add-on.

However, in spite of this promise, AI-driven software test automation tools are still an immature, niche segment. Commercial offerings such as AI Testbot, Appdiff, Applitools, Appvance, Autify, Functionize, Infostretch, Mabl, ReTest, Selenic, Test.ai, and Testim.io have yet to gain broad adoption among software developers. Most of these apply principally to DevOps for web and mobile applications, rather than to the broader market of enterprise and cloud-native application development. And though there is a professional association for advancing AI-driven software testing, it appears to have few members and little visibility.

Going forward, I predict that more vendors of low-code tooling will support automatic testing of AI-optimized code-builds. Future developers may simply check off program requirements in a high-level GUI and then, with a single click, auto-generate, auto-test, and deploy the predictively best-fit code-build, as determined by embedded AI, into the target runtime environment.

This may involve AI-automated testing of AI-generated code, using machine-learning models trained, through supervised learning, on functional and performance metrics derived from previously deployed code-builds. Alternatively, future code-testing automation tooling may leverage reinforcement learning: an RL-driven agent might try different ways of testing distributed code modules by trial and error, automatically learning which orchestrations are faster, more accurate, and more resource-efficient than others.
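A toy version of that reinforcement-learning loop is an epsilon-greedy bandit that picks among candidate orchestrations on each run and learns from a reward blending speed, accuracy, and resource use. The orchestration names and the simulated reward below are illustrative assumptions, not any shipping tool’s behavior:

```python
# Toy epsilon-greedy bandit for choosing a test orchestration.
# ORCHESTRATIONS and the simulated reward are illustrative assumptions.
import random

ORCHESTRATIONS = ["serial", "parallel_by_module", "parallel_by_history"]

def choose(rewards, counts, epsilon=0.1):
    """Usually exploit the best average reward; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ORCHESTRATIONS)
    return max(ORCHESTRATIONS,
               key=lambda o: rewards[o] / counts[o] if counts[o] else 0.0)

rewards = {o: 0.0 for o in ORCHESTRATIONS}
counts = {o: 0 for o in ORCHESTRATIONS}

for run in range(200):
    choice = choose(rewards, counts)
    # In a real pipeline the reward would be measured from an actual test
    # run (duration, defect-detection rate, resource spend). Here we
    # simulate one orchestration being reliably better than the others.
    if choice == "parallel_by_history":
        reward = random.uniform(0.7, 1.0)
    else:
        reward = random.uniform(0.0, 0.6)
    counts[choice] += 1
    rewards[choice] += reward

best = max(ORCHESTRATIONS, key=lambda o: rewards[o] / max(counts[o], 1))
print("Learned best orchestration:", best)
```

After a few hundred runs the bandit converges on the orchestration with the best observed trade-off, which is exactly the trial-and-error behavior described above.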

Whatever shape the underlying AI takes, it’s clear that test automation will be a key feature of the distributed “AIOps” control planes that will pervade enterprise multi-cloud environments in the 2020s.

About the Author

James Kobielus

Tech Analyst, Consultant and Author

James Kobielus is an independent tech industry analyst, consultant, and author. He lives in Alexandria, Virginia.
