
Webinar: How to empower engineering teams to improve, at scale

How do you empower engineering teams to improve, at scale?

Following the success of our recent workshop at SEACON, Matthew Skelton, Team Topologies co-author and Director at Conflux, and Benedict Steele, Chief Delivery Officer at Armakuni, discussed in this webinar how the Multi-team Software Delivery Assessment (MSDA) approach can be used to assess and empower engineering teams in fast-paced businesses.

With fast flow approaches like Continuous Delivery and Team Topologies, different teams and streams use a range of different practices and techniques.

So, how can you find and emphasise the good practices? How do teams know what good looks like? And how can you assess these practices across tens or hundreds of teams on a regular basis?

Watch the replay here:

Webinar Q&A responses:

  • We gather informal qualitative feedback after a team session, but more detailed engagement metrics would need to be done in a separate tool. We’ve also been told anecdotally by leaders and managers that there is an uplift in morale after we’ve run the sessions.

  • It’s completely up to you who has access to the output of the MSDA exercise. Ideally, teams would share their results openly and discuss their successes and failures whilst trying to improve. However, we’re aware that the culture of some organisations isn’t that generative yet. We recommend discussing with the teams beforehand how they’d like the data shared (openly, anonymously, or not at all). In our experience, openly is always best. The themes in the MSDA are available via softwaredeliveryassessment.com

  • We usually gather qualitative data immediately after an MSDA team session - for example: how valuable was the session for the team? The quantitative metrics emerge over time, but we strongly recommend that you do not use the metrics to compare teams directly. Instead, use metrics in aggregate to show trends across groups of teams or the whole organisation.

  • In our experience of running this with Data Science teams, we’ve found it has helped reveal the gaps in their understanding and shown the unknown unknowns, especially in the space where Data Science crosses over into modern engineering practices. For MLOps teams, we’ve found it covers a lot of what they need, but there are some gaps. We’re working on an enhanced version for these types of teams, so watch this space - or if you’d like to get involved then please get in touch.

  • If teams want to spend a long time discussing details, there could be a wider need for more discussion on practices. Consider making more time - perhaps via lunchtime talks or an Internal Tech Conference - so that the MSDA sessions can remain more focused because people have already had a chance to talk through the background details. You may also be going into a level of detail that isn’t needed at that moment - or your reason for gathering the information isn’t clear. If you’re trying to understand your whole ecosystem, then you may end up in analysis paralysis. If that’s the case, we’d recommend a focused look at one or two dimensions (the ones where you believe the most pain is felt) and working to improve those.

  • In the long term, the aim is to empower the end customer to run and interpret the MSDA themselves. In the early stages, however, a partner like Armakuni or Conflux would supervise and guide the assessment, combining it with other activities to help shape a transformation.

  • The Spotify Squad Health Check has been used by many teams around the world over many years as a foundation or part of an assessment approach. Any “validity” comes from its use in multiple organisations over time. The practices and criteria in the MSDA are based on accepted good practice in organisations around the world. Some of the measures are backed up by the DORA and Accelerate findings, and others are based on sound expert and practitioner advice. We’re not claiming any particular empirical basis, but we have confidence in the approaches based on our practical experience.

  • Facilitators help teams to assess their own practices but do not themselves assess teams directly.

  • That is correct. We may provide a dedicated website in the future, but for now the details are directly on GitHub.

  • We always see it as having two benefits - if you’re in a team, it provides a baseline for you to track progress against. If you’re a leader it helps you reveal blindspots and identify cross-team patterns where extra investment or intervention may be needed.

  • This is an example of where we need informed facilitators who know what the criteria and concepts mean. The details behind the questions or criteria are typically all found in books referenced in each dimension, so facilitators can read the books to deepen their knowledge ahead of an MSDA session. As well as the knowledge, we’d also recommend that facilitators have some experience in the practices discussed.

  • Absolutely. First, establish that the exercise isn’t one of judgement (or even really of assessment); it’s to take a baseline for the team to improve upon. Then, once you’ve run the exercise, focus on the issues that are within their gift to control, and make time for them to improve those issues. Finally, use the output from the MSDA to show people outside the team the impact that issues caused elsewhere are having on the team - use the MSDA as a case for investment or for change.

  • The assessment at Comparative Agility is comprehensive, but proprietary. The MSDA is “open core” meaning that you can use the assessment without licence costs. The MSDA is focused on software delivery whereas Comparative Agility addresses additional areas.

  • Yes: the MSDA includes some DORA metrics deliberately, but MSDA tackles some additional dimensions. MSDA and DORA are very well aligned and both are based partly on the same underlying books (especially Continuous Delivery and Accelerate) and practices.

  • The primary focus of MSDA is to empower teams to improve. MSDA does that by encouraging discussion within and across teams about what good looks like. A survey by itself does not include any discussion or self-improvement opportunities.

  • MSDA is a team-centric approach, so it’s teams that self-reflect, not individuals. Given that the key focus of MSDA is to empower teams to improve at scale, we think it’s vital to make the core of the assessment a self-assessment. However, there is a place for automatically-collected metrics for things like cycle time, deployment frequency, failed deployments, Work-In-Progress, flow efficiency, and so forth. Remember to ensure that any automatically-collected metrics feel like empowerment to the teams, otherwise teams will disengage and the assessment will be counterproductive.

  • A key part of MSDA is exactly that teams feel a sense of psychological safety when involved in the assessment process. The goal must be to empower teams to improve, not to compare teams and rank them.

  • It’s possible to protect teams by careful use of the data collected from the assessment. Avoid giving managers or C-level people access to the team-level data and instead aggregate the data and show only trends and averages.
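The aggregation approach described above - sharing only trends and averages rather than team-level data - can be sketched in a few lines of Python. This is purely an illustration with hypothetical team names, dimensions, and scores (none of these are part of the MSDA itself):

```python
from statistics import mean

# Hypothetical per-team assessment scores (1-5) for a few dimensions.
# Rows like these should stay with the teams themselves; only the
# aggregates computed below would be shared with managers or leaders.
team_scores = {
    "team-a": {"continuous-delivery": 3, "operability": 2, "testing": 4},
    "team-b": {"continuous-delivery": 4, "operability": 3, "testing": 3},
    "team-c": {"continuous-delivery": 2, "operability": 2, "testing": 5},
}

def aggregate(scores):
    """Average each dimension across all teams, hiding team identities."""
    dimensions = {}
    for per_team in scores.values():
        for dimension, score in per_team.items():
            dimensions.setdefault(dimension, []).append(score)
    return {d: round(mean(vals), 2) for d, vals in dimensions.items()}

print(aggregate(team_scores))
# e.g. {'continuous-delivery': 3.0, 'operability': 2.33, 'testing': 4.0}
```

Run over several assessment rounds, these aggregates show organisation-wide trends per dimension without exposing (or ranking) any individual team.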

 

Meet our speakers

 

Matthew Skelton, Founder at Conflux

Matthew Skelton is co-author of Team Topologies: organizing business and technology teams for fast flow. Recognized by TechBeacon in 2018, 2019, and 2020 as one of the top 100 people to follow in DevOps, Matthew curates the well-known DevOps team topologies patterns at devopstopologies.com. He is Head of Consulting at Conflux and specializes in Continuous Delivery, operability, and organization dynamics for modern software systems.

 

Benedict Steele, Chief Delivery Officer, Armakuni

Benedict is a creative technology leader who balances lived experience with emerging best practice. He has almost 20 years’ experience in technology, product, and engineering - transforming organisations of all sizes by helping them adopt new methods, techniques, and ways of working.

He is the Chief Delivery Officer at Armakuni, responsible for client delivery.

Outside of work he has very little free time due to the demands of his tiny humans. When he does get some free time he spends it wishing that he'd got a dog instead.

Previous
September 30

DevOpsDays 2022: The next level pipelines delivering Net Zero

Next
November 7

SEACON 2022: The next level pipelines delivering Net Zero