Do you want to know the best way you can provide positive feedback to me? Sure, you can share this article on your favorite social media channel. Better yet, forward the email to others in your organization who might get some value from it. This particular article reflects many conversations I have had with Engineering, so perhaps they would be a good audience for a forward.
A small but important, customer-value-focused terminology rant this week.
One of my favorite things about startups (and any vibrant organization) is how they reflect on how things are organized and done — always looking to improve. Are we delivering the most value to customers? Can we do it more efficiently? How can we improve our culture? Why do we do it this way?
I had just such a conversation with the head of engineering at an important client. Something was broken in our software development process, and it clearly needed fixing.
The development process is not Scrum, but a newly implemented, home-grown agile methodology with a 2-week sprint cadence. The process looks a little like this (sketched in code after the list):
Sprint Backlog - Work agreed to include in the planned sprint.
In Progress - Work pulled from Backlog and actively being worked on.
Review - Work ready for a second set of eyes to review (assuming we don’t use XP or Pair programming to begin with).
Done - When coding, review, and other Definition of Done tasks have been met.
Deployed - When DevOps deploys the work into Production.
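To make that concrete, here is a minimal sketch of those states as code. This is purely illustrative: the column names come from the list above, but the `State` enum and its naming are my own, not anything from the client's actual tooling.

```python
from enum import Enum

class State(Enum):
    """Workflow states as defined today (names taken from the list above)."""
    SPRINT_BACKLOG = "Sprint Backlog"
    IN_PROGRESS = "In Progress"
    REVIEW = "Review"
    DONE = "Done"          # engineering and DoD tasks complete -- not yet live
    DEPLOYED = "Deployed"  # DevOps has pushed the work into Production
```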
Do you see what is wrong?
Like most engineering teams, we have built a process that “feels” complete by having a Done state near the end. Work gets into the Done state when all the engineering work, QA, and other Definition of Done tasks are complete.
The problem is that we see a huge pile-up in Done. This pile-up can even include work carried over from one sprint to the next, because the process for deploying into production is not fully automated and cannot be done with the flip of a switch.
Instead, DevOps often needs to do some extra manual work to move items from Done into Production. Further, because of the manual work involved, that work usually needs some additional QA.
So I ask the question: is it really Done when engineering has completed its development tasks?
Oftentimes the agreed Definition of Done includes a checklist item of “Ready to Deploy”. In an organization that is not yet mature enough to safely, easily, and repeatedly deploy changes into production, I argue that this process is badly defined.
The word Done implies to many that the work is complete. However, if the customer cannot derive value out of the work, then I argue this work ain’t done. Calling it Done will cause external stakeholders to think value has been delivered. Calling it Done will lead product team members to incorrectly think they have crossed the finish line.
I suggest that any organization that finds itself in a situation where “Done” work is not actually getting into the hands of users (or is not ready to ship, in the case of on-prem software) should revise this standard process model.
With a minor tweak, we can change the discussion internally and externally. Our goal is to deliver value to customers. Unless and until that code is deployed, nobody should consider that work Done.
Now, with this minor tweak in process terminology (rename Done to Ready to Deploy, and rename Deployed to Done), we create an entirely different discussion between product, engineering, and DevOps. What can we all do to get work Done?
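Here is the same sketch with the tweak applied, again with illustrative naming of my own; note that the flow is unchanged, only the labels move:

```python
from enum import Enum

class State(Enum):
    """Same flow, relabeled so Done means customers can actually get value."""
    SPRINT_BACKLOG = "Sprint Backlog"
    IN_PROGRESS = "In Progress"
    REVIEW = "Review"
    READY_TO_DEPLOY = "Ready to Deploy"  # was "Done": engineering complete
    DONE = "Done"                        # was "Deployed": live in Production

# Hypothetical helper: a pile-up in Ready to Deploy is the signal to swarm.
def needs_swarm(items_by_state: dict[State, int]) -> bool:
    return items_by_state.get(State.READY_TO_DEPLOY, 0) > 0
```

With this naming, any dashboard built on these states reads the way a customer would read it: nothing counts until it reaches Done.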
It is clear that we all should be swarming on that Ready to Deploy state, figuring out both how to get the current WIP unstuck and how to fix the long-term issues that create a continuing bottleneck in the value stream.
As long as that Done column is empty, we have achieved nothing for our customers.
Product Management must stand up for the Customer in the definition of such internal processes, to ensure they are designed in the Customer's interest. Calling something Done that is not is clearly not in the interest of any Customer.
Words matter. They impact the mindset of the team. The mindset impacts the outcomes achieved. When your Deployment process becomes fully automated, immediate, and error-free, then I encourage you to switch back to the old way of calling work Done before Deployment.