Enterprises are automating more tasks to improve operational efficiencies and lower costs, but how far will they go, and why?

Lisa Morgan, Freelance Writer

February 11, 2021

6 Min Read

Businesses have been automating tasks for a long time, especially on manufacturing floors. More recently, automation has been seeping into every corner of the enterprise as companies struggle to compete. To remain competitive, they need to continuously accelerate time to value and embrace higher levels of agility in a way that can only be achieved through a human-machine partnership.

Organizations will be forced to adopt higher levels of automation as the competitive pressure continues to build.

Automated systems versus autonomous systems

Task automation typically occurs when an organization wants to produce something faster, cheaper, more repeatably, and at a higher scale than human employees can. The ripest tasks for automation are those that are boring and repetitive -- the kinds of tasks humans don't do well.

In fact, one of the motivations for automating tasks is to reduce the number of errors caused by humans simply because they were bored, distracted or tired.

The most extreme form of automation is an autonomous system that operates without human intervention. That's not to say that autonomous systems don't need oversight, however.

"Automation is a necessary, functional component of an autonomous system. 'Autonomous' implies a degree of artificial intelligence, decision making that is not necessarily rule or workflow based, rather taking actions based on new patterns that are not hard coded into the system," said Robert Greene, senior director, Oracle Autonomous Database product management. "Automation…still requires a human to make the decision to invoke [an] action, so a human is still in the loop."

Deciding what to automate and what to make autonomous

Organizations are automating more tasks using robotic process automation (RPA), and in some cases they're inheriting autonomous capabilities from the enterprise products they use, such as the Oracle Autonomous Database.

"You start out by automating smaller steps with smaller stakes, so your organization builds its internal capacity to do automation well and learns how to make it work in hybrid situations that involve people," said Chris Nicholson, founder and CEO of deep reinforcement learning solution provider Pathmind. "You also want to automate processes that are easy for people to review and give feedback on, such as text recognition in document processing to make sure the machine got it right."
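The hybrid, review-friendly automation Nicholson describes can be sketched in a few lines: results the machine is confident about pass through automatically, while the rest are queued for a person to check. (The confidence threshold, field names, and sample data below are illustrative assumptions, not part of any product mentioned in this article.)

```python
# Minimal human-in-the-loop sketch: auto-accept high-confidence
# text-recognition results, route low-confidence ones to human review.

REVIEW_THRESHOLD = 0.90  # illustrative cutoff; tuned per use case in practice

def triage(ocr_results):
    """Split results into auto-accepted and needs-human-review lists."""
    accepted, needs_review = [], []
    for item in ocr_results:
        # Each item: {"text": <recognized string>, "confidence": 0.0-1.0}
        if item["confidence"] >= REVIEW_THRESHOLD:
            accepted.append(item)
        else:
            needs_review.append(item)  # a person double-checks these
    return accepted, needs_review

results = [
    {"text": "Invoice #1042", "confidence": 0.98},
    {"text": "T0tal: $5OO.00", "confidence": 0.61},  # garbled scan, low confidence
]
auto, manual = triage(results)
```

The point of starting with small-stakes steps like this is that the review queue itself becomes the feedback loop: what humans correct tells the organization where the automation can safely be widened.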

According to Oracle's Greene, security, task efficiency and quality all improve with autonomous oversight and decisioning.

"It's actually the interactions and fleets of endpoints that create the environment where autonomous decisioning becomes crucial," said Greene. "Humans can look at a single database and make final decisions, but humans cannot look at millions of databases and make timely decisions."

Examples of autonomous systems

Autonomous systems make instant decisions and adaptations based on a complex web of information that may be changing all the time (e.g., an autonomous car). For example, AI for blockchain solutions provider Fetch.AI launched smart city zoning infrastructure trials in Munich that use autonomous AI agents to optimize resource usage and reduce the city's carbon footprint.

The company also has autonomous AI travel agents operating in 770,000 hotels that market, negotiate and trade inventory in the Fetch.ai network. Both hotels and their guests benefit from cost savings of up to 10%, according to CEO and Co-founder Humayun Sheikh.

"Autonomous systems do not need to replace every automated program, but for autonomous systems to take off, interconnectivity is the key," said Sheikh. "There needs to be a network for widespread adoption."

The risks of autonomous systems

Like automation, autonomous systems can do tasks that humans have traditionally done, albeit faster and at a greater scale. However, precisely because such a system operates independently, organizations should be as mindful of its risks as they are of its benefits.

"[A]utonomy increases the complexity of systems. Increased complexity raises the likelihood of unpredictable results, including new kinds of failures," said Pathmind's Nicholson. "Unlike people, algorithms and predictive machine learning models cannot always explain their behavior (although some people are not good at that either). [The] lack of visibility into why the failures occur can make them hard to prevent."

The lack of transparency (explainability) could also expose a company to fines and legal action, particularly if humans are harmed by the system.

Transparency is the most oft-discussed topic when it comes to ethical AI, and another critical issue is accountability: Who should be held responsible if an autonomous system goes awry? The people who designed it, the people who used it, the people who authorized its use, or some combination of those individuals? The question is simple. The answer is not, because it may be fact-specific, determined as a matter of law, or both.

And what happens when organizations interconnect different autonomous systems? Their behavior will need to be orchestrated.

"[I]t seems far more likely that the interactions will be more contractual in nature wherein the organizations seeking to integrate their respective autonomous systems will first establish the rules of engagement for such integrations," said Michael Richardson, founder and managing director of business transformation consultancy Tech-Azur.

Expect to see more autonomous system standards

As automation becomes more commonplace in enterprises, also expect to see the rise of autonomous systems.

"By and large, autonomous capabilities are currently based on proprietary networks of systems from various vendors. The constituent parts are integrated by a software engineer to provide a solution [that] limits the scalability, availability, and applicability of autonomous systems to a much smaller set of use cases relative to automation and even AI-enabled automation," said Ram Chakravarti, CTO at enterprise software company BMC Software. "As this concept matures, with the mainstreaming of software platforms for autonomous systems, there will be agreed-upon industry standards and interaction protocols that will enable the abstraction of the hardware, sensors, devices, etc. within the ecosystem."

The IEEE P7008, P7009 and P7010 standards already address nudging by autonomous systems, autonomous system fail-safes and well-being metrics, respectively. Expect to see more autonomous systems standards in the future.

In the meantime, don't expect to see a fully autonomous business any time soon.

 

Follow up with these articles on automation:

How to Get Automation Right

Enterprise Guide to Robotic Process Automation  

Intelligent Automation: A Step Ahead of AI

 

About the Author(s)

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.
