The DevOps concept's focus on speed to market and continuous release can leave gaps in software quality. Understand which ones you can live with, and which are critical.

Guest Commentary

November 9, 2017

6 Min Read

Last year, Sylvain Kalache, a former operations engineer at Slideshare and now co-founder of Holberton School, recalled how a DevOps misstep at Slideshare locked out 60,000 users.

“We were a small startup,” said Kalache. “Our goals in adopting DevOps were to achieve optimum efficiency.” Slideshare prospered. It was acquired in 2012 by LinkedIn for $119 million. Nevertheless, its DevOps success was not without its setbacks.

“A software engineer was working on a database-related project and trying out a tool that offered the ability to explore a MySQL database graphically,” said Kalache. “He decided to reorganize the order of the database columns in that tool so that the data would make more sense to him. What he did not know was that it was also changing the columns’ order in production on the actual database, locking it, which brought down Slideshare.net and shut out the more than 60,000 users trying to access it.”
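Kalache didn’t name the tool or the exact statements it issued, but the mechanism is easy to reconstruct. In MySQL, moving a column is a schema change, and on older servers that kind of ALTER TABLE rebuilds the entire table by copying it, blocking writes until it finishes. The sketch below is purely illustrative, with an invented table and column, of what a graphical “move this column” click can translate to behind the scenes:

```python
# Illustration only: the table and column names are invented, not Slideshare's schema.
# In MySQL, moving a column is DDL. On servers that predate online DDL (MySQL 5.6),
# this kind of ALTER TABLE rebuilds the whole table by copying it and blocks writes
# until it finishes -- on a large production table, that looks exactly like an outage.
import mysql.connector  # assumes the mysql-connector-python package

conn = mysql.connector.connect(
    host="db.example.com", user="app", password="secret", database="slides"
)
cur = conn.cursor()

# What looks like a cosmetic "move this column up" click in a GUI becomes:
cur.execute("ALTER TABLE presentations MODIFY COLUMN view_count INT AFTER title")

cur.close()
conn.close()
```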

The outage could have been avoided if a risk-assessment quality check had been performed before the change reached production, but it wasn’t.

Most importantly, what happened at Slideshare is not unique.

Many companies that employ DevOps today want continuous release and fast time to market for their software, but they don’t always match that speed with a continuous quality process.

A recent survey by Parasoft, which provides automated testing tools, indicated that many organizations operating in DevOps and Agile environments overlook QA factors like reliability, testability, availability and resiliency of applications.

This point came to light for me several years ago, when a non-technical acquaintance who had launched a restaurant review website showed me how he could build an end-to-end telephony app on his website in less than two minutes by clicking on icons in a DevOps code generator. The system automated the development process and the app worked, but it wasn’t optimized for the many different devices his end users would be using to access the website.

This is a dilemma for DevOps: How do you facilitate fast time to market with a product that could take down an app or embarrass your company in the eyes of customers?


Companies are placing their bets on automated testing, like the ability of an end user on a DevOps team to quickly run an automated test against a changed website to see if any links were broken by the new software. There are even DevOps automation tools for load and environmental stress testing.

"It only makes sense to use automated testing tools when the costs of acquiring the tool and building and maintaining the tests are less than the efficiency gained from the effort,” said John Overbaugh, a senior software development engineer at Microsoft.

However, it is still almost universally agreed that the kind of comprehensive testing performed in traditional software development environments is not consistently evident in DevOps, whose selling point is time to market, sometimes at the expense of testing.

“To solve the dilemma, a lot of folks advised automating everything,” said John Lunsford, a senior program manager at Quality Logic, a cloud-based testing platform to which sites can offload their QA and that uses automated test tools in addition to real human testing. “You have to look at what you can automate, and the automation that you use must be able to demonstrate repeatability.”

Lunsford believes that one of the best deployments of testing automation in DevOps is “sanity check” automation, the kind that checks for web links that might have been broken after you have revised a site, or that can seek out adverse impacts of software changes on other software routines and systems that your app interfaces with.
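In practice, that kind of sanity check is often just a short script wired into the deployment pipeline. The sketch below is a generic illustration rather than any vendor’s product: it fetches a single page, follows every link it finds, and exits non-zero if any of them no longer resolve (the URL is a placeholder).

```python
# Minimal post-deploy "sanity check" sketch: crawl one page and flag broken links.
# A generic illustration, not any vendor's tool; the URL below is a placeholder.
import sys
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests  # assumes the requests package is installed


class LinkCollector(HTMLParser):
    """Collects href targets from <a> tags on a single page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith(("http", "/")):
                    self.links.append(value)


def broken_links(page_url):
    """Return a list of (url, reason) pairs for links that no longer resolve."""
    page = requests.get(page_url, timeout=10)
    page.raise_for_status()
    parser = LinkCollector()
    parser.feed(page.text)

    broken = []
    for link in parser.links:
        url = urljoin(page_url, link)
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                broken.append((url, f"HTTP {resp.status_code}"))
        except requests.RequestException as exc:
            broken.append((url, str(exc)))
    return broken


if __name__ == "__main__":
    failures = broken_links("https://www.example.com/")
    for url, reason in failures:
        print(f"BROKEN: {url} ({reason})")
    sys.exit(1 if failures else 0)  # a non-zero exit fails the pipeline stage
```

Hooked into the release pipeline, the non-zero exit code stops the deploy before customers ever click the broken navigation.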

This brings us to the crux of the DevOps testing challenge: effective integration of the newly designed app with the operating software and hardware that surround it and that it must interface with.

“It’s important to better educate developers on the workings of infrastructure,” said Kalache. “Many of them have never been exposed to production infrastructure….You can’t expect everyone to naturally know the hidden rules.”

Kalache is right.

The non-IT businessman who designed a phone app on his website on the fly will never know whether it works in every operating environment or on every mobile device. The same risk applies to commercial vendors of continuously released software, which may thoroughly test releases in their own environments but lack knowledge of the particularities of their customers’ environments.

What can DevOps managers do to address the QA challenge?

Educate developers and other DevOps participants about your system infrastructure. End users and individuals whose IT careers have been spent exclusively in point-and-click DevOps environments benefit from a fundamental understanding of IT infrastructure and the pitfalls of not testing an app with the infrastructure it interacts with.

Developers and system gurus should work together. DevOps people should work collaboratively with DBAs and systems professionals, who can alert them to problems the app could cause. This collaboration can head off quality issues before they surface.

Understand the needs of your particular DevOps environment and select the right set of test tools/automation. There is a plethora of DevOps test tools on the market, but not all of them are suited to every company. If your DevOps focus is frontend website development, there are tools that address this. If your goal is ensuring that every app you develop works uniformly on each mobile device, there are test and automation tools that do that, too.

Use virtual environments for testing. There are automated tools that can test your app in a variety of operating environments, but they might not cover every scenario your company must support. Where the automation leaves gaps, test those scenarios manually.
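As a rough sketch of what an automated environment matrix can look like, the parameterized check below confirms that each supported client gets a successful, mobile-aware response. The device list, user-agent strings and URL are placeholders, and real device coverage would also have to exercise rendering and input:

```python
# Sketch of a device/environment matrix check. The user-agent strings and URL
# are simplified placeholders; real coverage would also exercise rendering and input.
import pytest
import requests  # assumes the pytest and requests packages are installed

SITE = "https://www.example.com/"

DEVICES = {
    "desktop_browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "iphone_safari": "Mozilla/5.0 (iPhone; CPU iPhone OS like Mac OS X) Mobile",
    "android_browser": "Mozilla/5.0 (Linux; Android) Mobile",
}


@pytest.mark.parametrize("device", sorted(DEVICES))
def test_site_responds_for_device(device):
    resp = requests.get(SITE, headers={"User-Agent": DEVICES[device]}, timeout=10)
    assert resp.status_code == 200, f"{device}: got HTTP {resp.status_code}"
    # A crude proxy for mobile readiness: the page should declare a viewport.
    assert '<meta name="viewport"' in resp.text.lower(), f"{device}: no viewport meta tag"
```

Anything the matrix cannot reasonably simulate belongs on that manual test list.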

Never forget your customers. There are many end users who are comfortable with DevOps and don’t mind a mistake or two, but if your application is customer-facing, it must work right the first time and every time. Customer-facing apps should be submitted to a full battery of manual and automated testing that covers functionality, usability, accuracy, integration, security and recovery. There should be no exceptions to the rule.

Mary Shacklett is owner of Transworld Data in Seattle. She is an experienced IT professional, writer, and IT, marketing and advertising consultant. Mary has a bachelor of science degree from the University of Wisconsin, a master's degree from the University of Southern California and a doctorate of law from William Howard Taft University.

