Leveraging iterative design methods to "fail safely"
November 11, 2023
Responding to Pavel Samsonov's article on Medium:
This article offers an important critique of a version of agile methodology that is concerned only with optimizing development cycles to ship working software more quickly and at greater frequency. Its argument goes something like this: the value of software depends on usefulness, and usefulness depends on solving the right problem; we can't solve the right problem until we've learned how we might be wrong (e.g., misaligned focus, faulty assumptions); and the later in a development cycle we discover we are wrong, the more costly it is for the team and the customer. The solution it offers is to practice "agile within design," meaning that we need iterative methods to test problem framing and assumptions early in the design/development cycle, so that the incentives are to "fail safely" and learn from it. Without these design and user research methods, what often happens is that teams "quietly ignore" emerging concerns because of sunk costs toward "outdated commitments."
Here is a passage that I found particularly astute:
Iteration is perhaps the best-known pan-agile best practice[:]...focus our efforts on quickly solving one problem, and use feedback from that release to guide what we need to do next.
While iteration certainly gets code into production more quickly (due to starting small), it doesn’t help much when it comes to delivering value to customers. Agile [as typically practiced] pays lip service to “build-measure-learn” but in practice, conversations always revolve around build velocity (how can we get the thing out the door faster) rather than measure velocity or learning velocity. ...
Unfortunately, though the “Big Design Up Front” phase has been eliminated, [typical Agile approaches] do not provide any alternative design phase. They center the entire conversation on a very narrow piece of the development process: what happens after we have defined the problem and the broad-strokes solution, and are discussing only the sequence in which that solution should be delivered. The question of whether we understand the problem is swept under the rug: the important thing is to build, and then all our questions will be answered by data.
Let's repeat that last sentence:
The question of whether we understand the problem is swept under the rug: the important thing is to build, and then all our questions will be answered by data.
In my experience, this accounts for nearly all situations where time and costs balloon far beyond what was predicted at the outset: the team does not understand the problem, the solutions it develops are misaligned with user needs and wants, and costs accumulate trying to deliver something that satisfies no one.
Read the article: