
Three Steps to Successful DevOps

On a recent Voices in Innovation podcast, GigaOm’s VP of Research Jon Collins talked to Harbinder Kang, Global Head of Developer Operations at Finastra, about how to implement DevOps successfully in any organization. Harbinder outlined some fantastic tips based on his experiences working with global organizations and outlined an easy-to-follow three-step process for those just starting on their DevOps journey.

Below is a transcript of their conversation, edited for brevity and clarity, including all of Harbinder’s hints and recommendations:


Jon: Hi Harbinder, thanks for taking the time to chat with me today. What were your early experiences of development and how did you move to become an expert in DevOps?

Harbinder: I started out working at a startup in the fintech space with 50 people, selling enterprise software. At a startup you find yourself being thrown around trying different things: customer support, QA, build development, going on-site, and all the rest.

I found myself gravitating towards the whole build engineering and release space before people called it DevOps. The project at the time had an ad hoc Perl build script, and it never worked reliably. So I decided to put continuous integration in place: I implemented CruiseControl over a weekend and brought CI to the project. I did a lot of projects after that, and it kind of snowballed.

Along the way, I found myself in other areas of the delivery process: test automation, test strategies, understanding what it means to work within both a waterfall methodology and an Agile methodology. That led to where I am now, being able to look at DevOps holistically for projects and understanding what it takes and what the business wants when it comes to transforming. Recently I’ve been working on the operational performance of production systems, what that means in the DevOps space, and the engineering practices it takes to build highly available software.

Jon: You stand on one side of the development process, asking “how can we make this better?”

Harbinder: Yeah, and I’ve learned to work with stakeholders beyond the development group: the R&D group and the operations group as well. Where I am now, for example, implementing continuous delivery practices, you need to work with the larger enterprise and its heavyweight processes and controls if you want to be able to push to production on a daily basis. You can’t ignore those elements of the organization. Today, although it’s far more holistic with the teams I work with to achieve those goals, it’s still essentially the same process.

Jon: Where do you start and what do you see as the key elements of pain your organization still faces?

Harbinder: The approach I like to take is a holistic view first. In my role now I’m more top-down at least to start with. I use two tools for this:

  • The metrics from the State of DevOps Report

DevOps Research and Assessment (DORA) metrics allow me to have a very balanced conversation with both engineering and the business. It puts tension at the right points between throughput and stability.

So you can have a discussion with development, and they’ll be asking for a build that runs in nanoseconds. Then you could talk to a change approval team: they want change approval boards and they’ll make you wait a week before you push into production because it’s all about stability.

Using these metrics helps the business understand what it actually wants to do and helps to focus on system-level outcomes. If you don’t do that, I’ve found you can fall into common pitfalls: you can use metrics that optimize locally, at the cost of the overall outcome, and they don’t end up linking back to business outcomes. I’ve found it a good framework to use as the foundation for improvement.
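To make the balance Harbinder describes concrete, here is a minimal sketch (not from the conversation; all records and field names are hypothetical) of how the four DORA metrics pair throughput against stability when computed from simple deployment logs:

```python
from datetime import timedelta
from statistics import median

# Hypothetical deployment records over a one-week window.
# lead_time: commit-to-production duration; restore: downtime if the deploy failed.
deployments = [
    {"lead_time": timedelta(hours=4),  "failed": False, "restore": None},
    {"lead_time": timedelta(hours=30), "failed": True,  "restore": timedelta(hours=2)},
    {"lead_time": timedelta(hours=6),  "failed": False, "restore": None},
    {"lead_time": timedelta(hours=12), "failed": False, "restore": None},
]
period_days = 7

# Throughput side of the tension: how often and how fast changes ship.
deployment_frequency = len(deployments) / period_days
lead_time_for_changes = median(d["lead_time"] for d in deployments)

# Stability side: how often changes break, and how quickly service recovers.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)
time_to_restore = median(f["restore"] for f in failures)

print(f"Deploys/day: {deployment_frequency:.2f}")
print(f"Median lead time: {lead_time_for_changes}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Median time to restore: {time_to_restore}")
```

Reporting the two pairs together is what puts "tension at the right points": neither development nor the change approval team can optimize its own pair without the other pair showing the cost.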

  • Value Stream Maps

They allow us to take a step back, forget the technology, the process and the people for a second, and just map out what’s going on and discover what the set of constraints are that are actually affecting us, rather than anecdotally what we feel may be the case.

We begin to pick up where the biggest waste exists and it may not necessarily be where stakeholders ask you to focus.

It will also break a siloed approach to thinking, because by necessity you are going to different stakeholders across the value stream and getting different versions of the value stream. You then put it all together into a consistent view.
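A toy model (illustrative only; every step name and figure below is hypothetical) shows how a value stream map surfaces where the biggest waste actually sits, rather than where stakeholders assume it is:

```python
# Toy value stream: each step has hands-on process time and upstream wait time,
# in hours, gathered from the different stakeholders across the stream.
value_stream = [
    ("Code review",     {"process": 1.0, "wait": 8.0}),
    ("Build & test",    {"process": 0.5, "wait": 0.5}),
    ("Change approval", {"process": 1.0, "wait": 40.0}),
    ("Deploy",          {"process": 0.5, "wait": 2.0}),
]

total_process = sum(step["process"] for _, step in value_stream)
total_wait = sum(step["wait"] for _, step in value_stream)

# Flow efficiency: fraction of total elapsed time spent on hands-on work.
flow_efficiency = total_process / (total_process + total_wait)

# The constraint is the step with the longest queue, not the slowest work.
bottleneck, _ = max(value_stream, key=lambda kv: kv[1]["wait"])

print(f"Flow efficiency: {flow_efficiency:.0%}")
print(f"Largest wait: {bottleneck}")
```

In this made-up example the hands-on work is a small fraction of elapsed time, and the dominant wait sits in change approval rather than engineering, which is exactly the kind of non-anecdotal finding the map is meant to produce.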

From there we get into: “This is where we are, what do we want to do?”:

  • Is this long-lived software where we just look to optimize margins, profitability, and productivity, where there is no goal to change the software architecture investment, and where the technical debt isn’t going to change?
  • Or is it a new project, where the organization has aspirations for growth, there is ideation, and the organization wants internet scale, continuous delivery, and a system that is available at all times?
  • In the middle, you’ve got products that have been successful in one model, typically software that was sold under a licensing arrangement, where the client has an aspiration of moving it across into a managed service, and that’s a really interesting challenge at that point.

I’ve found using this framework allows me to attack all three of these problems.

In DevOps, there’s a set of constraints you need to be aware of, within what you need to do, and it’s not always in your hands in terms of the outcomes the business wants. For example, you can throw all the DevOps tooling and automation at a product, but if it’s a very lumpy, heavily customized product, there’s only so far you’re going to get in operational performance if you’re not looking at the cost of ownership: it may not be a profitable model. It’s a very nuanced conversation.

Jon: This is fascinating. It’s about how to make sure value continues to be delivered over time. Getting stakeholders on side at all is a big challenge for many in DevOps, so what would you say to organizations that really struggle with even starting these conversations?

Harbinder: Working across the business there are places where that conversation is still a sticking point and there are projects where the stakeholders have got past that point and have aspirations for change.

It boils down to that, but there are also a lot of projects in the middle. I’ve found technology is not the hard part; engineering human behavior is a lot harder to navigate. So I think the approach has to be the same in each case. There has to be a level of buy-in: if it doesn’t exist, you’re dead on arrival and setting yourself up for failure. These conversations have to be had.

Executive sponsorship in a large enterprise is very important. If you are trying to enact this change at too low a level of seniority, without exec sponsorship, you can’t force the issue. Ultimately, having worked with executives that buy into this and those that don’t, I find sponsorship to be the one compelling factor. Some executives say: why do I want to change things? But if you can shift the attitude to: let’s do this, I want to work with you, then once they start engaging and having conversations about what needs to be better, the floodgates open. I have the privilege of working at the moment with a CTO who gets this, evangelizes this, and facilitates this. Without that, you’re not going to be successful.

Jon: The Value Stream Mapping technique could be a really powerful way of gaining executive buy-in, because you can say: look, this is what it looks like and this is what we want to do. Building on that, can you illustrate the power of change and demonstrate where you have improved business outcomes by improving the value stream?

Harbinder: The Value Stream is a great way to talk to the business, it takes the jargon out of the conversation, it doesn’t require you to understand the technicalities, it lets you have a very balanced conversation that is very easy to understand.

On the other hand, the DORA Metrics are a great way to start talking to engineering and other teams about a balanced approach to delivery. The DORA Metrics are a great acid test as well for the business side of the house, to see if they get it or not. Do those metrics resonate with them or not? It’s a great way to test if the business is ready to have that conversation in addition to any value stream mapping you’ve been doing.

Jon: You mentioned shifting left, starting to get into deciding what to do in the first place and then making sure you are doing the correct thing, across development and engineering. In your experience, what lessons can be learned, what can operational engineering gain, particularly in this world of container engineering, and how far right can we go?

Harbinder: We tackle this a lot at the moment.

I believe a mature DevOps model is one in which you are able to collapse it all the way down to service teams owning their feature from cradle to grave and taking responsibility for it while it lives in production and defending its availability.

I’ve been very fortunate recently to realize that site reliability practices help start that conversation. These are the questions you start to answer:

  • Do you understand your service level objectives for your service on a regular, say weekly, basis?
  • Do you understand how you are adhering to that, what are the fluctuations of that?
  • Are you established enough to have an error budget in place, and once you do, do you govern yourself by it?
  • Do you pace your feature development in balance with your operational performance?
  • If you start exceeding your error budgets are you rebalancing in favor of stability?
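The error-budget arithmetic behind these questions can be sketched in a few lines (a hypothetical example, not Finastra's actual numbers, assuming an availability SLO measured as the fraction of successful requests):

```python
# Hypothetical 99.9% availability SLO over a 30-day window,
# measured as good requests / total requests.
slo_target = 0.999
total_requests = 10_000_000
failed_requests = 7_200

# The error budget is everything the SLO permits you to fail.
allowed_failures = (1 - slo_target) * total_requests
budget_consumed = failed_requests / allowed_failures

print(f"Error budget: {allowed_failures:.0f} failed requests")
print(f"Budget consumed: {budget_consumed:.0%}")

# The governance rule from the conversation: when the budget is exhausted,
# rebalance feature work in favor of stability.
if budget_consumed >= 1.0:
    print("Budget exhausted: pause feature pace, invest in stability")
else:
    print("Within budget: feature pace can continue")
```

The point of governing by the budget, rather than by raw uptime, is that it turns "pace your feature development in balance with your operational performance" into a mechanical rule the service team can apply weekly.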

And once those same engineers are the ones responding to those incidents to avoid customer impact, the conversations become so relevant to them. They suddenly want:

  • To bring in engineering practices to make their lives easier
  • Good monitoring practices
  • A good incident response platform
  • Blameless post-mortems that allow them to learn and build resilience into the system

Those conversations are so much easier to have when service teams own production. They are accountable, but ideally, you need to support them with, for example, a cloud operations team. But the difference is that the cloud operations team isn’t the first one to take the call, the service team is, and if they need help, they go to the cloud operations team. It’s more the shift in responsibility, they will always need other teams’ support, but they’re on call.

Jon: Given everything we’ve talked about, from all your years of experience, what’s the key that unlocks everything around sponsorship, around best practice, around understanding the right metrics?

Harbinder: There are three steps:

  1. Make sure you’re having a holistic conversation, that acknowledges all the needs and definitions of value within the organization.
  2. Be top down with your approach – have the conversation with different stakeholders, gauge appetite for change, and be realistic about whether it’s going to happen or not.
  3. Form communities bottom up – I strongly believe in community practices, guilds, coalescing around topics of improving and sprouting change, using open-source, proof of concepts and giving people the chance to fail, so they have a space where they can try to do things better, then try to connect the two.

I think, ultimately, you need to find the excuse to bring DevOps into your organization that is compelling to those at the top and at the bottom.

However you do that – Agile implementations, Cloud Adoption, New product launches – seek out the opportunities to start the conversation and anchor yourself to them.


You can listen to the entire conversation here.



from Gigaom https://gigaom.com/2020/05/20/three-steps-to-successful-devops/
