
Could DevOps exist without cloud models?

The GigaOm DevOps market landscape report is nearing completion, distilling conversations and briefings into a mere 8,500-word narrative. Yes, it’s big, but it could have been bigger, for the simple reason that DevOps touches, and is therefore helped and hindered by, every aspect of IT and indeed the broader business. Security and governance, service-level delivery, customer experience, API and data management, deployment and orchestration, legacy migration and integration: all of these affect DevOps success, or cause what we have termed DevOps Friction.

While the report is about DevOps, and not all these other things (the line had to be drawn somewhere), one aspect rings out like a bell. I go back to an early conversation I had with ex-analyst Michael Coté, who brings a hard-earned yet homespun wisdom to technology conversations. I paraphrase, but Coté’s point was, pretty much: “The kids of today, they don’t know any other way of building things than using cloud-based architectures.”

With that, he lifted his rifle and shot a can off the fence. Okay, no he didn’t, he talked about the foolishness of caring about operating system versions rather than just using what’s offered by the cloud provider. It took me back to a software audit I undertook, many years ago, when the ultra-modern JBoss interface layer built onto a Progress back-end had been customised by a freelancer who promptly left, leaving the organisation with a poorly documented, legacy open source deployment… but I digress.

Many, if not all, startups work on the basis of using what they are given, innovating on top rather than worrying about what’s under the hood (or bonnet, as we say over here. I know, right?). They then also adopt some form of DevOps practice, as the faster they can add new features, the more quickly their organisation will progress: the notion of the constant beta has been replaced by measuring success in terms of delivery cycles.

Bluntly, the startup innovation approach wouldn’t work without cloud. Providers such as AWS know this; they also know their job is to deliver as many new capabilities as they can, feeding the sausage machine of innovation, however much this complicates things for anyone trying to understand what is going on. AWS is more SkyMall than Sears, its business model likewise built on the dynamism of new-feature delivery.

This truth also applies to the toolsets around DevOps, which are geared up to help deploy to cloud-based resources, orchestrate services, deploy containers and spin up virtual machines. If a single cloud is your target, the DevOps pipeline is a sight simpler than if you are deploying to an in-house, hybrid and/or multi-cloud environment. The latter, of course, reflects the vast majority of enterprises today.
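That extra pipeline complexity is easy to illustrate. Here is a minimal, purely hypothetical sketch (the target names and step names are illustrative, not tied to any real toolchain): with one cloud target there is a single code path, while a hybrid or multi-cloud estate multiplies the authentication, packaging and verification work per target.

```python
# Hypothetical sketch: pipeline work grows with each deployment target.
# Target and step names are illustrative, not any specific toolchain.

def deploy(artifact: str, targets: list[str]) -> list[str]:
    """Return the pipeline steps needed to ship one artifact everywhere."""
    steps = []
    for target in targets:
        # Each target brings its own auth, packaging and orchestration quirks.
        steps += [
            f"authenticate:{target}",
            f"package:{artifact}:{target}",
            f"deploy:{artifact}:{target}",
            f"verify:{target}",
        ]
    return steps

single = deploy("web-app", ["public-cloud"])
hybrid = deploy("web-app", ["public-cloud", "on-prem", "second-cloud"])
print(len(single), len(hybrid))  # the step count triples with three targets
```

The point of the toy example is only that every additional target multiplies, rather than adds to, the pipeline surface area the DevOps team must maintain.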

The point, and the central notion behind the report, is that enterprises don’t have it easy: DevOps needs to roll with the punches, rather than sneering from the sidelines about how much easier everything could be. We are where we are: enterprises are complex, wrapped up in historical structures, governance and legacy, and need to be approached accordingly. They might want to adopt cloud wholesale, and may indeed do so at some point in the future, but getting there will be an evolution, not an overnight, flick-the-switch transformation.

DevOps Friction comes from this reality, and many providers are looking to do something about it. As per a recent conversation with my colleague Christina von Seherr-Thoss, such developments as VMware running on AWS, or indeed Kubernetes-VMware integration, help close the gap between the now-embedded models of the data centre, and the capabilities of the cloud. This isn’t just about making things work together: it’s also transferring some of the weight of processing from internal, to external models.

And, by doing so, it’s helping organisations let go of the stuff that doesn’t matter. We’ve long talked about data gravity, in that most data now sits outside the organisation, but an equally important notion is that processing gravity hasn’t followed, making enterprise DevOps harder as a result. I personally don’t care where things run: if you can run your own cloud, go for it. More important is whether you are locked into a mindset where you tinker with infrastructure, or whether you use what you are given and innovate on top.

Right now, enterprise organisations are looking to adopt DevOps as part of a bigger push, to become more innovative and adapt faster to evolving customer needs — that is, digital transformation. Enterprises are always going to struggle with the weight of complexity and size: as startups grow up, they hit the same challenges. But traditional organisations can do themselves a favour and shift to a model that breaks their dependency on servers, storage and so on.

While we don’t deep-dive into infrastructure and cloud advances in our DevOps report, it seems both fundamental and inevitable that organisations that see technology infrastructure as an externally provided (and relatively fixed) platform will be able to innovate faster than those that see it as a primary focus. Breaking the link with infrastructure, minimising dependencies, and using the operating systems you are given and building on top could be the most important thing your organisation does.



from Gigaom https://gigaom.com/2018/09/14/could-devops-exist-without-cloud-models/
