An effective DevOps program relies on application data to highlight patterns such as which features users rely upon.

Andi Mann, Chief Technology Advocate at Splunk

March 21, 2017

7 Min Read

DevOps practitioners have more in common with the cowboys of the Old West than with modern-day, process-obsessed enterprise architects. These "shoot from the hip" cowboys of IT base their decisions on speed, gut feeling, and output, and so far that approach has largely worked. On top of that, a typical DevOps process relies on separate tools for discrete functions such as project management, source code control, provisioning and configuration, test execution, workflow automation, and more. But for large and distributed enterprises, practitioners are going to need more than intuition, loosely coupled tools, and free-flowing decision-making to achieve repeatable, proven success: data-driven methods must be put in place.

Today, DevOps is becoming mainstream, and well-established companies are looking to scale their DevOps “culture” beyond the software delivery lifecycle and IT operations to reach across the entirety of their businesses.

Large enterprises are less agile, more complex, and more financially, legally, and geographically constrained. Having thousands of developers across multiple geographies -- duplicate tools, specialized teams, varying processes, distributed locations, and multiple business units -- makes visibility even harder to achieve.

Because of this need to scale, the C-suite is becoming increasingly involved in DevOps, and leadership is looking for more visibility into DevOps workflows to enable streamlined, company-wide implementation. DevOps is “growing up.”

While maintaining the core principles of DevOps -- agility, speed, velocity, and quality -- businesses are taking the next step by building a data-first process around it, one that fits the continuous flow of work the industry has become accustomed to.

DevOps needs the data to scale

What we see today are managers who have been at the same company for more than 20 years making rapid-fire decisions to streamline their development cycle. Yet a production line is only as efficient as the data used to back up the processes in place. When a company puts out a new feature on its website or mobile app, no one knows how that feature is being used or what about it is garnering interest.

[Andi Mann will be a featured speaker during the Interop ITX conference in Las Vegas when he presents Effective Leadership for DevOps Teams and Programs on May 17.]

This is exactly why Amaya Gaming uses data to drive its DevOps decisions. Using data extracted from systems across development, test, acceptance, and production environments, executives can monitor revenue streams to respond rapidly to any unexpected changes, good or bad, and can quickly see which new features are being used, and how, by the end users of Amaya's services. This insight is fed back into the development cycle and informs future business decisions, while development teams use the same data-driven approach to detect and remediate bugs, and service operations use the data for troubleshooting, all in real time and based on real data.
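None of that insight exists unless the applications themselves emit it. As a minimal sketch of the instrumentation involved (the event shape, field names, and feature name here are illustrative, not Amaya's or any vendor's actual API), each feature interaction can be logged as a structured event that downstream teams can query:

    import json
    import logging
    from datetime import datetime, timezone

    # A structured logger for feature-usage events; in practice these events would be
    # shipped to a log or analytics platform rather than printed to the console.
    usage_log = logging.getLogger("feature_usage")
    usage_log.setLevel(logging.INFO)
    usage_log.addHandler(logging.StreamHandler())

    def record_feature_use(feature, user_id, context=None):
        """Emit one machine-readable event per feature interaction."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "feature": feature,
            "user_id": user_id,
            "context": context or {},
        }
        usage_log.info(json.dumps(event))

    # Example: called from the handler for a newly released feature.
    record_feature_use("quick-checkout", user_id="u-1842", context={"platform": "mobile"})

Once every feature emits events in a consistent shape, "is anyone using this, and how?" becomes a query rather than a guess.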

Without that visibility across the development cycle, stakeholders on both the business and IT sides of the company suffer: the risk of re-releasing error-ridden software rises, response times slow, security vulnerabilities creep in, infrastructure becomes oversaturated, reputation is damaged, and unnecessary costs pile up.

Another problem DevOps practitioners run into is executives asking the same difficult-to-answer questions: “Why did this application feature perform better than that one? What can we learn from this?” It’s hard to realize the benefits and returns without aggregating the data points before and after. Experienced, long-tenured managers have the intuitive instincts, but that is hardly a viable and repeatable process or mentality. Put yourself in the shoes of executives trying to build a self-organizing team. They don’t have the experience to make those intuitive calls that happen to justify each business decision, but they recognize the benefit of a collaborative group working toward the same goal.

Data needs to be interwoven throughout the DevOps process, providing the foundation for best practices that empower and align teams, no matter their size or geographic location. The more of this data teams collect and connect, the more valuable it becomes. Successful DevOps teams at large organizations are using objective metrics mined from that data to gain insight into the end-to-end delivery cycle, from the initial idea for a new capability all the way to customer engagement and the resulting revenue. Collecting, correlating, and analyzing data from all contributing systems and development steps gives every contributing team, including the C-suite, a unified view into the velocity, quality, and business impact of application delivery, maintaining a measurable and proven process across relevant business segments and decision makers.
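What does that correlation step look like in practice? Here is a minimal sketch, assuming each tool can export events tagged with a shared release identifier (the tools, field names, and sample numbers are purely illustrative):

    from collections import defaultdict

    # Hypothetical events exported from separate tools, each tagged with a shared release ID.
    commits     = [{"release": "1.4", "sha": "a1b2"}, {"release": "1.4", "sha": "c3d4"}]
    builds      = [{"release": "1.4", "status": "passed"}, {"release": "1.4", "status": "failed"}]
    deployments = [{"release": "1.4", "env": "prod", "duration_min": 12}]
    incidents   = [{"release": "1.4", "severity": "minor"}]

    def correlate(release_id):
        """Join per-tool events into one view of velocity and quality for a release."""
        view = defaultdict(dict)
        view[release_id]["commits"] = sum(c["release"] == release_id for c in commits)
        view[release_id]["build_pass_rate"] = (
            sum(b["status"] == "passed" for b in builds if b["release"] == release_id)
            / max(1, sum(b["release"] == release_id for b in builds))
        )
        view[release_id]["prod_deploys"] = sum(d["release"] == release_id for d in deployments)
        view[release_id]["incidents"] = sum(i["release"] == release_id for i in incidents)
        return dict(view)

    print(correlate("1.4"))
    # {'1.4': {'commits': 2, 'build_pass_rate': 0.5, 'prod_deploys': 1, 'incidents': 1}}

The point is not these particular rollups but the join: once events from planning, build, deploy, and operations share a common key, a single view across teams falls out of simple aggregation.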

DevOps puts on its grownup pants

Measurement and repeatability are key to creating a successful DevOps process within a large organization that executives can be comfortable with. The data generated by a DevOps toolchain provides a wealth of the information needed to achieve that process. In turn, executives can leverage the data that comes out of everyday workflows to make better, data-driven decisions.

For example, one large, global entertainment business measures every step of its application delivery lifecycle to determine what is working well and what needs improvement. The organization makes highly accurate planning and resourcing decisions using detailed data on releases in progress, stories being worked on, code check-outs and check-ins, build executions, deployments and production status, and even resourcing and staffing/cost data from the project management system. With this data-driven approach, it can balance teams and resources, predict release dates, reduce delivery costs, address process gaps and failures, detect and reject ‘bad changes’, and drive faster, better code into production. It is one of many examples where data-driven decisions have proved an enormous benefit to a company with a DevOps practice already in place.
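To give a flavor of the kind of metric such an organization might compute (a hypothetical sketch; the story IDs and dates are invented), a handful of timestamps from the project-management and deployment tools is enough to start forecasting:

    from datetime import datetime
    from statistics import median

    # Hypothetical lifecycle timestamps pulled from project-management and deployment tools.
    stories = [
        {"id": "STORY-101", "started": "2017-02-01", "deployed": "2017-02-09"},
        {"id": "STORY-102", "started": "2017-02-03", "deployed": "2017-02-15"},
        {"id": "STORY-103", "started": "2017-02-06", "deployed": "2017-02-13"},
    ]

    def cycle_time_days(story):
        """Days from work starting on a story to it reaching production."""
        fmt = "%Y-%m-%d"
        started = datetime.strptime(story["started"], fmt)
        deployed = datetime.strptime(story["deployed"], fmt)
        return (deployed - started).days

    times = [cycle_time_days(s) for s in stories]
    print("median cycle time:", median(times), "days")  # a simple input for release-date forecasts

Tracked over time and per team, even a simple figure like median cycle time turns release-date conversations from opinions into trend lines.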

DevOps processes like this are already happening at large enterprises. At the BBC in the UK, the Dev and Ops teams responsible for the BBC TV website report to the Marketing teams on which new website features (e.g. “tiles” for new shows, quizzes, and other viewer engagements) are being used, how much, by whom, and more. Marketing can then make rapid, data-driven decisions on how best to double down on successful promotions, or redirect efforts that are not driving the desired outcomes.

CA Technologies benefits from a similar process with its popular SaaS solution, CA Agile Central (formerly Rally Software). By tracking each individual interaction with the software service, the CA Technologies DevOps team provides rapid feedback to product management on customer engagement, giving it accurate, data-driven insight into ‘known good’ features and potential problems so it can iterate rapidly and provide customers with an even better service.

These enterprises focus on incremental changes, take multiple data points into consideration, and do not release the software until the data says it is fit for release. When data rules, software-release decisions are driven by the data rather than the date.
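As a back-of-the-envelope illustration of the feedback loop both examples rely on (the features, users, and “double down” threshold below are all made up), aggregating raw interaction events per feature is often enough to tell marketing or product management where to focus:

    from collections import Counter

    # Hypothetical per-interaction events like those described above; names are illustrative.
    events = [
        {"feature": "new-show-tile", "user": "u1"}, {"feature": "new-show-tile", "user": "u2"},
        {"feature": "quiz",          "user": "u1"}, {"feature": "new-show-tile", "user": "u3"},
    ]

    usage = Counter(e["feature"] for e in events)
    reach = {f: len({e["user"] for e in events if e["feature"] == f}) for f in usage}

    # Simple rule of thumb: double down on features with broad reach, revisit the rest.
    for feature, count in usage.most_common():
        verdict = "double down" if reach[feature] >= 3 else "revisit"
        print(f"{feature}: {count} uses by {reach[feature]} users -> {verdict}")
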

At a high level, management understands business decisions driven by data and appreciates the visibility needed to ensure processes are accurate and generating revenue. Processes play a central role in ensuring consistent quality, removing redundancies, and streamlining QA. To be successful in a large enterprise, DevOps practitioners need to standardize, collect, and share key dev and ops metrics for the greater good. The ‘continuous delivery’ model for software at the heart of DevOps emphasizes common goals while executing on individual responsibilities. Scaling that mentality so that everyone has insight across all teams empowers employees to make needed changes while working toward the same goal.

For large enterprises to take the next step in digital transformation, DevOps processes must be backed by data. With data, executives can correlate business metrics with code changes to gain new business insights, improve the user experience by delivering better-performing code, and even protect reputation by delivering secure and compliant code. While keeping the core values of continuous improvement, development, and open communication, we are at the point where DevOps without data will not scale. Data gives executives the means to improve measurement, collaboration, and engagement.

About the Author

Andi Mann

Chief Technology Advocate at Splunk

Andi Mann is an accomplished digital executive with global expertise as a strategist, technologist, innovator, marketer, and communicator. With over 30 years' experience, he is a sought-after advisor, commentator, and speaker. Andi has coauthored two books. He blogs at Andi Mann – Ubergeek and tweets as @AndiMann.
