DevOps and Continuous Delivery Plumbing - Unblocking the Pipes

DevOps teams and Continuous Delivery processes must continue to adapt and improve

Jack Welch, the former CEO of GE, once said: "If the rate of change on the outside is happening faster than the rate of change on the inside, the end is in sight." This rings truer than ever, especially because business success is now inextricably linked to an organization's ability to deliver high-quality software innovations - innovations that disrupt existing markets and carve out new ones.

Like the businesses they've helped digitally transform, DevOps teams and Continuous Delivery processes must themselves continue to adapt and improve. Demands will increase to the point where the dizzying deployments seen today become standard and routine tomorrow. Even with a great culture, a plethora of tools and herculean team efforts, there will come a point where other systemic issues impose a limit on what's actually achievable with DevOps.

One way to address this is with what I call Continuous Delivery plumbing - finding every process and technology issue causing a blockage, applying automation to clear the pipes, and ultimately increasing the flow of value to customers. It sounds simple in theory, but like actual plumbing, you'll need to get your hands dirty.

Any idle time is terminal - Continuous Delivery goals like faster lead times often remain elusive because of the constraints deliberately or unintentionally placed on IT. It's hard, of course, to counter entrenched culture and procedural excess, but we continue to be plagued by problems that are well within our control to fix. These include the usual suspects: development waiting on infrastructure dependencies, manual and error-prone release processes, too many handoffs, and, of course, leaving testing and monitoring until too late in the lifecycle.

Tools driving process improvements have helped to some extent. Open source nuggets like Git and Jenkins enable developers to quickly integrate code and automate builds so that problems are detected earlier. Other advanced techniques like containerization are making application portability and reusability a reality, while service virtualization - simulating constrained or unavailable systems - allows developers, testers and performance teams to work in parallel for faster delivery.
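To make the service virtualization idea concrete, here's a minimal sketch in Python using only the standard library: a stub that stands in for an unavailable downstream dependency so teams can develop and test against it in parallel. The /payments/status endpoint, response shape and latency figure are purely illustrative assumptions, not any particular product's API.

```python
# Minimal service-virtualization sketch: a stand-in for a downstream
# dependency (here, a hypothetical /payments/status endpoint) so dev, test
# and performance teams can work in parallel before the real system exists.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SIMULATED_LATENCY_SECONDS = 0.2  # assumption: mimic a constrained system

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/payments/status":
            time.sleep(SIMULATED_LATENCY_SECONDS)  # emulate a slow dependency
            body = json.dumps({"status": "ok", "stub": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Point test environments at localhost:8080 instead of the real system.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```

Point a test environment at the stub instead of the real dependency, and the simulated latency can even be tuned to mimic the constrained system under load.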

All these (and many other) tools have a key role to play, but in the context of Continuous Delivery, we often lack the insights needed to purposefully direct our considerable investments in pipeline automation - to, if you will, automate the automation. For example, node-based configuration management is a wonderful thing, but how much more powerful would it be if those configurations were managed in the context of an actual application-level baseline during the release process? Similarly, how much time could we save if test assets were automatically generated from dynamic performance baselines established during release cycles?
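As a hedged sketch of what "automating the automation" might look like, the following Python snippet derives load-test assertions from a recorded per-endpoint latency baseline. The baseline file name, its JSON shape and the 25% headroom factor are all assumptions made for illustration.

```python
# Sketch of "automating the automation": derive load-test thresholds from a
# dynamic performance baseline captured during earlier release cycles.
# File name and metric fields are hypothetical.
import json
import statistics

def thresholds_from_baseline(path="release_baseline.json", headroom=1.25):
    """Turn recorded per-endpoint latencies into pass/fail test assertions."""
    with open(path) as f:
        baseline = json.load(f)  # e.g. {"/checkout": [120.0, 135.0, 128.0], ...}
    assertions = {}
    for endpoint, samples in baseline.items():
        p95 = statistics.quantiles(samples, n=20)[18]  # 95th percentile
        assertions[endpoint] = {"max_p95_ms": round(p95 * headroom, 1)}
    return assertions

if __name__ == "__main__":
    print(json.dumps(thresholds_from_baseline(), indent=2))
```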

Quality inspection actually sucks - There's a lot to love about DevOps and Lean, especially the transformative thinking (à la W. Edwards Deming) on why quality should start and end with the customer. Now, in the consumer-centric age, customers rate businesses on the quality of software interactions and how quickly those experiences can be improved and extended.

But maintaining a fluid balance of speed and quality has proved difficult with existing processes. Too often, interrupt-driven code inspections, QA testing and rigid compliance checks are grossly mismatched to more agile styles of development and the types of applications now being delivered. Many existing processes also only give an indication of quality shortfalls, rather than providing teams the information needed to drive quality improvements. For example, application performance management (mostly used in production) should also be built into the Continuous Delivery process itself. That will help DevOps teams continue to find the quality "spot fires", yes, but also build the feedback loops needed to do what's really awesome - extinguish them completely.
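One way to build APM into the pipeline itself is a quality gate that queries monitoring data for the release candidate and fails the build on regression. The sketch below assumes a hypothetical metrics endpoint, JSON shape and thresholds; a real team would call their own monitoring tool's API here.

```python
# Hedged sketch of an APM-backed quality gate inside the delivery pipeline.
# The metrics URL, JSON shape and thresholds are illustrative assumptions.
import json
import sys
import urllib.request

METRICS_URL = "http://apm.example.internal/api/metrics?release=candidate"
MAX_ERROR_RATE = 0.01     # assumption: 1% of requests
MAX_P95_LATENCY_MS = 300  # assumption: latency budget

def fetch_metrics(url=METRICS_URL):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)  # e.g. {"error_rate": 0.004, "p95_latency_ms": 240}

def gate(metrics):
    failures = []
    if metrics["error_rate"] > MAX_ERROR_RATE:
        failures.append(f"error rate {metrics['error_rate']:.3f} exceeds {MAX_ERROR_RATE}")
    if metrics["p95_latency_ms"] > MAX_P95_LATENCY_MS:
        failures.append(f"p95 latency {metrics['p95_latency_ms']}ms exceeds {MAX_P95_LATENCY_MS}ms")
    return failures

if __name__ == "__main__":
    failures = gate(fetch_metrics())
    for failure in failures:
        print("GATE FAILED:", failure)
    sys.exit(1 if failures else 0)  # non-zero exit fails the pipeline stage
```

Wired into a pipeline stage, the non-zero exit code turns a quality shortfall into an immediate, actionable build failure rather than a production surprise.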

The bar will never be high enough - As application architecture transitions from monolithic to microservices, operational capabilities will become a critical business differentiator. With literally thousands of loosely coupled services being deployed at different rates, success will depend on managing these new platforms at scale. There are other specific challenges too. Newer dynamic microservice architectures with design-for-failure approaches make it increasingly difficult to build consistent development environments, which, when combined with the complexities surrounding messaging and service interaction, makes comprehensive testing much more challenging.

From a purely quantitative perspective, release automation processes can (provided they scale) solve many of these issues. However, as we continue to raise the bar, it's also important to ensure that Continuous Delivery leverages and fuses with other processes as a means to drive improvements. For example, by capturing realistic performance information before testing, cross-functional teams can develop much more confidence in releases. This is far preferable to the traditional approach, where monitoring is only ever used to detect problems after the proverbial horse has bolted.
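As a companion to the threshold-generation sketch earlier, here is one hedged way to capture that realistic performance information before testing: harvesting per-endpoint latency samples from access logs into a baseline file. The log format and file names are illustrative assumptions.

```python
# Companion sketch to the threshold generator above: capture realistic
# per-endpoint latency samples before testing, so test data reflects real
# traffic. Log format and file names are illustrative assumptions.
import json
from collections import defaultdict

def capture_baseline(access_log="access.log", out="release_baseline.json"):
    """Extract per-endpoint latencies from a simple space-delimited log:
    '<method> <endpoint> <status> <latency_ms>' per line."""
    samples = defaultdict(list)
    with open(access_log) as f:
        for line in f:
            try:
                _method, endpoint, _status, latency_ms = line.split()
                samples[endpoint].append(float(latency_ms))
            except ValueError:
                continue  # skip malformed lines
    with open(out, "w") as f:
        json.dump(samples, f, indent=2)

if __name__ == "__main__":
    capture_baseline()
```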

Business success now hinges on the ability to constantly meet the demand for innovative, high-quality applications. But this is challenging if organizations rely on systems and processes that were only ever designed to deploy software in larger increments over longer cycles. Achieving Continuous Delivery to overcome these obstacles is a fundamental goal of DevOps. This means always ensuring the "pipes are unblocked" by removing constraints, improving testing efficiency, and enriching processes to increase the velocity and quality of software releases.

More Stories By Pete Waterhouse

Pete Waterhouse, Senior Strategist at CA Technologies, is a business technologist with 20+ years’ experience in development, strategy, marketing and executive management. He is a recognized thought leader, speaker and blogger – covering key trends such as DevOps, Mobility, Cloud and the Internet of Things.
