Change

There’s no such thing as a committed outcome

A joint article written with Ellie Taylor, a fellow Ways of Working Enablement Specialist here at Nationwide.

The Golden Thread

As we start a new year, focusing on outcomes has been a common topic of discussion in our work over the last few weeks.

Within Nationwide, we are working towards instilling a ‘Golden Thread’ in our work — starting with our strategy, then moving down to three-year and in-year outcomes, breaking these down further into quarterly Objectives and Key Results (OKRs) and finally Backlog Items (and, if you’d like to go even further, Tasks).

This means that everyone can see how the work they’re doing fits (or maybe does not fit!) into the goals of our organisation. When you consider the work of Daniel Pink, we’re really focusing here on the ‘purpose’ aspect of motivation, trying to instil it in everything we do.

As Enablement Specialists, part of our role is to help coach the leaders and teams we work with across our Member Missions to really focus on outcomes over outputs. This is a common mantra you’ll hear recited in the Agile world, with some even arguing that it should in fact be the other way round. However, orientating around outcomes is our chosen path, and thus we must focus on helping teams articulate good outcomes.

Yet, in moving to new, outcome-oriented ways of working, a pattern (or rather, an anti-pattern) has emerged: the concept of a ‘committed outcome’.

“You can’t change that, it’s a committed outcome”

“We’ve got our committed outcomes and what the associated benefits are”

“Great! We’ve got our committed outcomes for this year!”

This became a hot topic amongst our team — what really is an outcome and is it possible to have a committed outcome?

What is an Outcome?

If we were to look at a purely dictionary definition of the word, an outcome is “something that follows as a result or consequence”. For what this means in a Lean-Agile context, the book Outcomes Over Output helpfully defines an outcome as “a change in human behaviour”. We can then tweak this for our context to mean ‘something that follows as a result or consequence, which could also be defined as a change in member or colleague behaviour’.

However, this definition brings about uncertainty. How can we have certainty over outcomes, given they’re changes in behaviour? Well, a simple way is to call them committed outcomes. That way we’ll know what we’ll be getting. Right?

Well, this doesn’t quite work…

Outcomes & Cynefin

This is where those focused on introducing new ways of working, and particularly leadership, should look to leverage Cynefin. Cynefin is a sense-making model originally developed by Dave Snowden to help leaders make decisions by understanding how predictable or unpredictable their problems are. In the world of business agility, it’s often a first port of call to help us understand the particular context teams are working in, and in turn how best to help them maximise their flow of work by selecting a helpful approach from the big bag of tools and techniques.

The model introduces four domains: Clear, Complicated, Complex and Chaotic, which can then be further classified as predictable or unpredictable in nature.

There is also a state of Confused where it is unclear which domain the problem fits into and further work is needed to establish this before attempting to identify the solution.

Source: About — Cynefin Framework

Both Clear and Complicated problems are considered by the model to be predictable, since they have repeatable solutions. That is, the same solution can be applied to the same problem, and it will always work. The difference between the two is the level of expertise needed to solve them.

The solution to Clear problems is so obvious that a child can find it, or if expertise is needed the solution is still obvious and there’s normally only one way to solve the problem. Here is where it’s appropriate to use “best practice”. [example: tying shoelaces, riding a bike]

In the Complicated domain, more expertise is needed as the problem gets more complicated. The outcome is predictable because it has been solved before, but it will take an expert to get there. [example: a mechanic fixing a car, a watchmaker fixing a watch]

The Complex and Chaotic domains are considered unpredictable. The biggest difference between the two, from a business agility perspective, is whether it is safe to test and learn: ‘yes’ for Complex and definitely ‘no’ for Chaotic.

Complex problems are ones where the solution, and the way in which to get to that solution (i.e. the practices), emerge over time, because we’ve never done them before in this context, environment, etc. Cause and effect are only understood with hindsight, so you can only know what happens, and any side-effects of a solution, once it is created. The model describes this activity as probing: trying out some stuff to find out what happens. And this is key to why this domain is a sweet spot for business agility. We need to try out something in such a way, typically small, that ensures that if it doesn’t work as expected, the consequences are minimised. This is often referred to in our community as ‘safe to fail’.

And finally, Chaos. This is typically a transient state; it resolves itself quickly (not always in your favour!) and is very unpredictable. This domain is certainly not a place for safe-to-fail activity. Decisive action is the way forward, but there is also a high degree of choice in the solution, and so novel practices often emerge.

Ok, so what’s the issue?

The issue here is that when we think back to focusing on outcomes, and specifically when you hear someone say something is a committed outcome, what’s more likely is that it’s a (committed) output.

It’s something you’re writing down that you’re going to ‘do’ or ‘produce’. Because most of what we do sits in the Complex domain, we can’t possibly know (for certain) whether what we are going to ‘do’ will definitely achieve the outcome we’re after until we do it. We also don’t even know if the outcome we think we are after is the right one. Thus, it is nonsensical (and probably impossible!) to ‘commit’ to it. It’s unfortunately trying to apply thinking from the Clear domain to something that is Complex. This is a worry, as now these outcomes become something that we’ll do (output) rather than something we’ll go after (outcome).

In lots of Agile Transformation or Ways of Working initiatives, this manifests itself at team level, where large numbers of Scrum teams are stuck in a world where they still fixate on a “committed number of items/story points” — ignoring the fact that this left Scrum ten years ago. Scrum Teams commit to going after both long-term (product) and short-term (sprint) goals, expressed as outcomes, with the work in the product/sprint backlog being how they try to go after those goals. They do this because they know their work is complex. The same goes for the wider, organisation-wide ‘transformation’, which is treated as a programme where, by a pre-determined end date (usually 12 months), we will all be ‘transformed’. This of course can only be demonstrated in output (number of teams, number of people trained and certified, etc.) due to the mindset it is being approached with.

The problem with committing to an outcome (read: output) is that it stifles empowerment, creativity and innovation, turning your golden thread from something meaningful and purposeful that celebrates accountable freedom into output-oriented agile theatre, measured like a feature factory.

Ultimately, this means any new ways of working approach is likely to be sub-optimal at best — it’s a pivot without a pivot, leading to everyone in the system delivering output, delivering this output faster, yet perplexed at the lack of meaningful impact. It means we neglect the outcomes, and the experimentation with many possible routes to our real desired results: delighting members and colleagues.

What we want to focus on are the problems we want to solve, which comes back to the member, user or colleague behaviours that drive business results, and the things we can do to help nudge these. Ideally, we’d then have meaningful and, where appropriate, flow- and value-based measures to quantify these and track progress.

Summary

In closing, some key points to reaffirm when focusing on outcomes:

  • Outcomes are something that follows as a result or consequence

  • Cynefin is a sense-making model originally developed by Dave Snowden to help leaders make decisions by understanding how predictable or unpredictable their problems are.

  • Cynefin has four domains: Clear, Complicated, Complex and Chaotic, which can then be further classified as predictable or unpredictable in nature.

  • There is also a state of Confused where it is unclear which domain the problem fits into and further work is needed to establish this before attempting to identify the solution.

  • The work we do regarding change is generally in the Complex domain

  • As the work is Complex, there is no way we can possibly ‘commit’ to what an outcome will be, as the relationship between what we do and what results from it is not known up front

  • Outcomes are things that we’d like to happen (more engaged staff, happier members) because of the work that we do

  • When you hear committed outcomes — people most likely mean outputs

  • Use the outputs as an opportunity to focus on the real problems we want to solve

  • The problems we want to solve should come back to the member, user or colleague behaviours that drive business results (which are the actual ‘outcomes’ we want to go after)

What do you think about when referring to outcomes? 

Have you had similar experiences?

Let us know in the comments below or tweet your thoughts to Ellie or me :)

ThoughtSpot and the four flow metrics

Focusing on flow

As Ways of Working Enablement Specialists, one of our primary focuses is flow. Flow is the movement of value through your product development system. Some of the most common methods teams use in their day-to-day are Scrum, Kanban, or Scrum with Kanban.

Optimising flow in a Scrum context requires defining what flow means. Scrum is founded on empirical process control theory, or empiricism. Key to empirical process control is the frequency of the transparency, inspection, and adaptation cycle — which we can also describe as the Cycle Time through the feedback loop.

Kanban can be defined as a strategy for optimising the flow of value through a process that uses a visual, work-in-progress limited pull system. Combining these two in a Scrum with Kanban context means providing a focus on improving the flow through the feedback loop; optimising transparency and the frequency of inspection and adaptation for both the product and the process.

Quite often, product teams will think that the use of a Kanban board alone is a way to improve flow; after all, that is one of its primary focuses as a method. Taking this further, many Scrum teams will also proclaim that “we do Scrum with Kanban” or “we like to use ScrumBan” without understanding what this means if you really do focus on flow in the context of Scrum. This often becomes akin to pouring dressing all over your freshly made salad, then claiming to eat healthily!

Images via Idearoom / Adam Luck / Scrum Master Stances

If I were to be more direct, put simply: Scrum using a Kanban board ≠ Scrum with Kanban.

All these methods have a key focus on empiricism and flow — therefore visualisation and measurement of flow metrics is essential, particularly when incorporating these into the relevant events in a Scrum context.

The four flow metrics

There are four basic metrics of flow that teams need to track (a short code sketch showing how each can be calculated follows the list):

  • Throughput — the number of work items finished per unit of time.

  • Work in Progress (WIP) — the number of work items started but not finished. The team can use the WIP metric to provide transparency about their progress towards reducing their WIP and improving their flow.

  • Cycle Time — the amount of elapsed time between when a work item starts and when a work item finishes.

  • Work Item Age — the amount of time between when a work item started and the current time. This applies only to items that are still in progress.
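
To make those definitions concrete, here is a minimal sketch of how the four metrics could be calculated from basic work item data. It is purely illustrative: the WorkItem record is an assumption rather than an excerpt from any real system, and counting both end days (“+1”, so a same-day finish counts as one day) is just one common convention.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WorkItem:
    started: date
    finished: Optional[date] = None  # None while the item is still in progress

def throughput(items, window_start, window_end):
    """Throughput: number of work items finished within a date window."""
    return sum(1 for i in items
               if i.finished and window_start <= i.finished <= window_end)

def wip(items, on):
    """WIP: number of work items started but not finished on a given date."""
    return sum(1 for i in items
               if i.started <= on and (i.finished is None or i.finished > on))

def cycle_time(item):
    """Cycle Time: elapsed days between start and finish. Counting both
    end days (+1) is a common convention, so a same-day finish takes one day."""
    return (item.finished - item.started).days + 1

def work_item_age(item, today):
    """Work Item Age: elapsed days so far for an item still in progress."""
    return (today - item.started).days + 1
```

With item-level cycle times and ages in hand, the percentile views described below fall out naturally.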

Generating these in ThoughtSpot

ThoughtSpot is what we use for generating insights on different aspects of work in Nationwide; it’s one of the key products offered to the rest of the organisation by Marc Price and Zsolt Berend from our Measurement & Insight Accelerator. This can be as low-level as individual product teams, or as high-level as aggregated views across our different Member Missions. We produce ‘answers’ from our data which are then pinned to ‘pinboards’ for others to view.

Our four flow metrics are there as a pinboard for teams to consume, filtering to their details/context and viewing the charts. If they want to, they can then pin these to their own pinboards for sharing with others.

For visualising the data, we use the following (a rough code sketch of the Cycle Time chart follows the list):

  • Throughput — a line chart for the number of items finished per unit of time.

  • WIP — a line chart with the number of items in progress on a given date.

  • Cycle Time — a scatter plot where each dot is an item, plotted against how long it took (in days) and its completed date. Supported by an 85th percentile line showing how long, in days, items took to complete.

  • Work Item Age — a scatter plot where each dot is an item, plotted against its current column on the board and how long it has been there. Supported by the average age of WIP in the system.
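
Our actual charts live in ThoughtSpot, but as a rough sketch of what the Cycle Time scatter involves, a matplotlib equivalent might look like this (the function and inputs are hypothetical):

```python
import matplotlib.pyplot as plt
import numpy as np

def cycle_time_scatter(completed_dates, cycle_times_days):
    """Scatter of cycle time vs. completed date, with an 85th percentile line."""
    pct85 = np.percentile(cycle_times_days, 85)
    plt.scatter(completed_dates, cycle_times_days, alpha=0.6)
    plt.axhline(pct85, linestyle="--",
                label=f"85th percentile: {pct85:.0f} days")
    plt.xlabel("Completed date")
    plt.ylabel("Cycle time (days)")
    plt.legend()
    plt.show()
```

The Work Item Age scatter is the same idea, with the board column on the x-axis and the item’s age on the y-axis.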

Using these in Scrum Events

Throughput (Sprint Planning, Review & Retrospective) — teams can use this as part of Sprint Planning when forecasting the number of items for the Sprint Backlog.

It can also surface in Sprint Reviews when it comes to discussing release forecasts or product roadmaps (although I would encourage the use of Monte Carlo simulations in this context — more on this in a later blog), as well as being reviewed in the Sprint Retrospective, where teams inspect and adapt their processes to find ways to improve throughput (or to validate whether previous experiments have improved it).
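
As a teaser for that later blog, the idea behind a Monte Carlo throughput forecast is simple enough to sketch: resample your historical daily throughput many times and see what range of totals falls out. Everything below, numbers included, is hypothetical:

```python
import random

def monte_carlo_forecast(daily_throughput_history, days, runs=10_000):
    """Simulate how many items might be finished in `days` working days
    by resampling the team's historical daily throughput."""
    totals = sorted(
        sum(random.choice(daily_throughput_history) for _ in range(days))
        for _ in range(runs)
    )
    # 85% of the simulated futures finished at least this many items
    return totals[int(runs * 0.15)]

# Illustrative history: items finished per working day over recent weeks
history = [0, 2, 1, 0, 3, 1, 1, 0, 2, 4]
print(monte_carlo_forecast(history, days=10))
```

The output reads as “in 85% of simulations we finished at least N items in ten days”, which is a far more honest planning statement than a single committed number.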

Work In Progress (Daily Scrum & Sprint Retrospective) — as the Daily Scrum focuses on what’s currently happening in the sprint/with the work, the WIP chart is a good one to look at here (potentially seeing if WIP is too high).

The chart is also a great input into the Sprint Retrospective, particularly for seeing where WIP is trending — if teams are optimising their WIP then you would expect this to be relatively stable/low; if it is high or highly volatile, then you need to “stop starting and start finishing”, or find ways to improve your workflow.

Cycle Time (Sprint Planning, Review & Retrospective) — looking at the 85th/95th percentiles of Cycle Time can be a useful input when deciding what items to take into the Sprint Backlog. Can we deliver this within our 85th percentile time? If not, can we break it down? If we can, then let’s add it to the backlog. It also works as an estimation technique: stakeholders know that when work is started on an item, there is an 85% likelihood it will be finished within n days. Want it sooner? OK, well that might only have a 50% likelihood, so can we collaborate to break it down into something smaller? Then let’s add that to a backlog refinement discussion.
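
In code terms, that “can we deliver this within our 85th percentile time?” check is a one-liner on top of historical cycle times. A hypothetical sketch (the function names are mine, not from ThoughtSpot or any other tool):

```python
import numpy as np

def sle(cycle_times_days, percentile=85):
    """Service level expectation: 'percentile% of items finished within n days'."""
    return int(np.ceil(np.percentile(cycle_times_days, percentile)))

def fits_in_sprint(cycle_times_days, sprint_days=10):
    """Does the 85th-percentile cycle time fit within a single sprint?"""
    return sle(cycle_times_days) <= sprint_days

history = [3, 4, 5, 6, 7, 9, 10, 2, 5, 8]  # illustrative cycle times in days
print(sle(history), fits_in_sprint(history))  # -> 9 True
```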

In the Sprint Review it can be used by looking at trends — for example, if your cycle times are highly varied, are there larger constraints in the “system” that we need stakeholders to help with? Finally, it provides a great discussion point for Retrospectives: we can use it to deep dive into outliers to find out what happened and how to improve, see if there is a big difference in our 50th/85th percentiles (and how to reduce this gap), and/or see if the improvements we have implemented as outcomes of previous discussions are having a positive impact on cycle time.

Work Item Age (Sprint Planning & Daily Scrum) — this is a significantly underutilised chart that so many teams could benefit from. If you incorporate it into your Daily Scrums, it will likely lead to many more conversations about getting work done (due to item age) rather than generic updates. Compare work item age to the 85th percentile of your cycle time — is an item likely to exceed this? Is that OK? Should we (and can we) slice it down further to get some value out there and faster feedback sooner? All very good, flow-based insights this chart can provide.

It may also play a part in Sprint Planning — do you have items left over from the previous sprint? What should we do with those? All good inputs into the planning conversation.
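
To show how lightweight that age check can be, here is an illustrative helper that flags in-progress items approaching the 85th percentile of historical cycle time (names and numbers are made up):

```python
import numpy as np

def flag_aging(ages_in_days, done_cycle_times_days, threshold=0.8):
    """Return the ages of in-progress items that have reached `threshold`
    (e.g. 80%) of the 85th-percentile cycle time of finished items."""
    limit = np.percentile(done_cycle_times_days, 85)
    return [age for age in ages_in_days if age >= threshold * limit]

# Three items currently in progress, aged 2, 8 and 12 days
print(flag_aging([2, 8, 12], [3, 4, 5, 6, 7, 9, 10]))  # -> [8, 12]
```

Anything flagged becomes a Daily Scrum conversation: swarm on it, slice it, or consciously accept the risk.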

Summary

To summarise, focusing on flow involves more than just using a Kanban board to visualise your work. To really take a flow-based approach, and incorporate the foundations of optimising WIP and empiricism, teams should use the four key flow metrics of Throughput, WIP, Cycle Time and Work Item Age. If you’re using these in the context of Scrum, look to incorporate them appropriately into the different Scrum events.

For those wanting to experiment with these concepts in a safe space, I recommend checking out TWiG — The Work In Progress Game (which now has a handy facilitator and participant guide). And for any Nationwide folks reading this who are curious about flow in their context, be sure to check out the Four Key Flow Metrics pinboard on our ThoughtSpot platform.

Further/recommended reading:

Kanban Guide (Dec 2020 Edition) — KanbanGuides.org

Kanban Guide for Scrum Teams (Jan 2021 Edition) — Scrum.org

Basic Metrics of Flow — Dan Vacanti & Prateek Singh

Four Key Flow Metrics and how to use them in Scrum events — Yuval Yeret

TWiG — The Work In Progress Game

Weeknotes #39 - Agile not WAgile

Agile not WAgile

This week we’ve been reviewing a number of our projects that are tagged as being delivered using Agile ways of working within our main delivery portfolio. Whilst we ultimately do want to shift from project to product, we recognise that right now we’re still doing a lot of ‘project-y’ style delivery, and that this will never completely go away. So, in parallel, we’re trying to at least get people familiar with what Agile delivery is all about, even if they’re delivering from a project perspective.

The real catalyst for this was one of our charts, where we look at the work being started and the split between Agile (blue line) vs. Waterfall (orange line).

The aspiration, of course, is that with a strategic goal to be ‘agile by default’, the chart should indeed look something like it does here, with the orange line only creeping up slightly when needed, and people generally looking to adopt Agile as much as they can.

When I saw the chart looking like the above last week, I must admit I got suspicious! I felt that we definitely were not noticing the changes in behaviours, mindset and outcomes that the chart would suggest, which prompted a more thorough review.

The review was not intended to act as the Agile police(!), as we very much want to help people move to new ways of working, but to make sure people had correctly understood what Agile is really about at its core, and whether they are indeed doing that as part of their projects.

The review is still ongoing, but currently it looks like so (changing the waterfall/agile field retrospectively updates the chart):

The main problems observed were things such as a lack of frequent delivery, with project teams still doing one big deployment to production at the end before going ‘live’ (but lots of deployments to test environments). Projects might be using tools such as Azure DevOps and some form of Agile events (maybe daily scrums), but work is still being delivered in phases (Dev / Test / UAT / Live). A common theme was also not getting early feedback and changing direction/priorities based on it (hardly a surprise if you are infrequently getting stuff into production!).

Inspired by the Agile BS detector from the US Department of Defense, I prepared a one-pager to help people quickly understand if their application of Agile to their projects is right, or if they need to rethink their approach:

Here’s hoping the blue line goes up (but this time against the criteria above), or at least that more people approach us for help in getting there.

Team Health Check

This week we had our sprint review for the project our grads are working on, helping develop a team health check web app that teams can use to conduct monthly self-assessments across different areas of team needs and ways of working.

Again, I was blown away by what the team had managed to achieve this sprint. Not only had they managed to go from a very basic, black-and-white version of the app to a fully PwC-branded version, they’d also successfully worked with Dave (aka DevOps Dave) to configure a full CI/CD pipeline for any future changes. As the PO for the project, I’ll now be in control of any future releases via the release gate in Azure DevOps. Very impressive stuff! Hopefully now we can share it more widely and get teams using it.

Next Week

Next week will be the last weeknotes for a few weeks, whilst we all recharge and eat lots over Christmas. I’m looking at finalising training for the new year and getting a run-through from Rachel in our team of our new Product Management course!

Weeknotes #36 - Refreshing Mindsets & Cargo Cults

Refreshing Mindsets

This week was the second week of our first sprint working with our graduate intake on our team health check web app. It was great to see over the past week or so that the team, despite not having much of a technical background, had gone away and created a very small app using a mix of Python and an Azure SQL database for the responses. It just goes to show how taking the work to a team and allowing them to work in an environment where they can be creative (rather than prescribing the ‘how’) can lead to a great outcome. Whilst the app is not quite in a ‘releasable’ state yet, in just a short time it really isn’t far away from something a larger group of Agile Delivery Managers and Coaches can use. It’s refreshing not to have to take on the battle of convincing hearts and minds, and instead to work with a group of people who recognise this is the right way to work and are just happy to get on and deliver. Thanks to all of them for their efforts so far!

Cargo Culting

“Cargo culting” is a term used when people believe they can achieve benefits by adopting or copying certain behaviours, actions or techniques. They don’t consider why the benefits occur; instead, they just blindly copy the behaviours to try to get similar results.

In the agile world, this is becoming increasingly commonplace, with the Spotify model being the latest fad for cargo culting in organisations. Organisations hear about how Spotify, or companies like ING, are scaling Agile ways of working, which sounds great, but in practice it is incredibly hard and nowhere near as simple as just redesigning the organisation into squads, tribes, chapters and guilds.

In a training session with some of our client-facing teams this week, I used the above as an example of what cargo culting is like. Experienced practitioners need to be aware that the Spotify model is one tool in the toolbox, and that there are lots of possible paths to organisational agility. Spotify themselves never referred to it as a model, nor do they use it anymore, and ING has moved towards experimenting with LeSS in addition to the Spotify model. Dogma is one of the worst traps you can fall into when moving to new ways of working, particularly when you don’t stop and reassess whether this is actually the right way for this context. Alignment on language is important, but it should not come at the expense of first finding what works in the environment.

Next Week

Next week I’ll be running an Agile Foundations training session, and we (finally!) have Rachel joining our team as a Product Manager. I’m super excited to have her as part of the team, whilst hopeful we can control the flow of requests her way so she doesn’t feel swamped. Looking forward to having her join PwC!

Weeknotes #33 - Right to Left

Right to Left

This week I finished reading Mike Burrows’ latest book, Right to Left.

Yet again Mike manages to expertly tie together numerous aspects of Agile, Lean and everything else, in a manner that’s easy to digest and understandable from a reader/practitioner perspective. One of my favourite sections of the book is the concept of the ‘Outside-In’ Service Delivery Review. As you can imagine from the title of the book, it takes the perspective of the right (needs, outcomes, etc.) as an input, over the left (roles, events, etc.), and then applies this thinking across the board, for example in the Service Delivery Review meeting. This is really handy for where we are on our own journey, as we emphasise the need to focus on outcomes in grouping and moving to product teams that provide a service to the organisation. One area of this is how you construct the agenda of a service review.

I’ve slightly tweaked Mike’s take on matters, but most of the format/wording is still the same:

With a Service Review coming soon, the hope is that we can start adopting this format as a loose agenda going forward, in particular due to its right-to-left perspective.

Formulating the above has also helped with clarity around the different events and cadences we want teams to be thinking about in choosing their own ways of working. I’ve always been a fan of the kanban cadences and their inputs/outputs into each other:

However, I wanted to tweak this again to be a bit simpler, to be relevant to more teams, and to align with some of what teams are already doing. Sonya Siderova has a nice addition to the above, with some overarching themes for each meeting, which again I’ve tailored to our context:

These will obviously vary depending on what level (team/service) we’re focusing on, but my hope is that something like the image above will give teams a clearer steer on what they should be thinking about and the intended purpose of each.

Digital Accelerators

We had another session for our Digital Accelerators this week, which seemed to be very well received by our attendees. We did make a couple of changes for this one based on the feedback from last week, removing 2–3 slides and changing the Bad Breath MVP exercise from 2 groups to 4 groups.

It’s amazing how much difference a little tweak can make, as the session did feel like it flowed a lot more easily this time, with plenty of opportunity for people to ask questions.

Last week’s session was apparently one of the highest scoring across the whole week (and apparently received the biggest cheer when the recap video showed photos of people playing the ball point game!), with a feedback score of 4.38/5 — hopefully these small changes lead to an even higher score once we get the feedback!

Next Week

Next week is a quieter one, with a trip to Manchester on Tuesday to meet Dave, our new DevOps Engineer, as well as to help coach one of our teams on ‘Product’ thinking within one of our larger IT projects at the minute. Looking forward to some different types of challenges there, and to seeing how we can start growing that product management capability.

Weeknotes #32 - Little Bets & Digital Accelerators

Little Bets

A few weeks ago, I was chatting to a colleague in our Robotic Process Automation (RPA) team who was telling me about how the team had moved to working in two-week sprints. They mentioned how they were finding it hard to keep momentum and energy up, in particular towards the end of the sprint when it came to getting input to the retro. I asked what day of the week they were starting the sprint, to which they replied “Monday”, of course meaning the sprint finished on a Friday. My suggestion was to move the start of the sprint (keeping the two-week cadence) to a Wednesday, as no one really wants to be reviewing work or thinking about how to get better (introspection being a notoriously tough ask anyway) on a Friday. They said they would take it away, run it as an experiment and let me know how it went. This week the team had their review and retrospective, with the feedback being that the team much preferred this approach, and that the inputs to the retro were much more meaningful and collaborative.

It reminded me that sometimes, as coaches, we need to recognise that we can achieve big through small, and that a tiny tweak can make the world of difference to a team. I’ve recently found myself getting very frustrated with bigger changes we want to make and concepts not landing with people, despite repeated attempts at engagement and involvement. Actually, sometimes it’s better to focus on those tiny tweaks/experiments that can make a big difference.

This concept is explained really well in Peter Sims’ “Little Bets”, a great book on innovation in organisations through making a series of little bets, learning critical information from lots of little failures and from small but significant wins.

Here’s to more little bets with teams, rather than big changes!

Digital Accelerators

This week we also ran the first of two sessions introducing Agile to individuals taking part in our Digital Accelerator programme at PwC. The programme is one of the largest investments by the firm, centred on upskilling our people in all things digital, covering everything from cleansing data and blockchain to 3D printing and drones.

Our slot was 90 minutes long, in which we introduced the manifesto and the “Agile Mindset”, including a couple of exercises such as the Ball Point Game and the Bad Breath MVP. With 160 people there, we had to run 4 concurrent sessions with 40 people in each, which was the smallest group size we were allowed!

I thoroughly enjoyed my session, as it had been a while since I’d done a short, taster session on Agile — good to brush off the cobwebs! The energy in the room was great, with some maybe getting a little too competitive with plastic balls!

Seems like the rest of our team also enjoyed it, and the attendee feedback was very positive. We also had some additional help from colleagues co-facilitating the exercises, which I’m very thankful for, as it would have been chaotic without them! Looking forward to hearing how the Digital Accelerators take this back to their day-to-day, and hopefully it generates some future work for us with new teams.

Next week

Next week is another busy one. I’m helping support a proposal around Enterprise Agility for a client, as well as having our first sprint review for our ways of working programme. On top of that, we have another Digital Accelerator session to run, so it’s a busy period for our team!