Humans want to put things into categories. Life is easier that way. At work, particularly for software organisations, planning often means putting your work in a box: Is it a Project work stream or an Experimentation one? Is it Engineering or Product? Is it Date-driven or Result-driven?
Having categories is great because they allow everyone to quickly grasp the fundamental characteristics of a given piece of work, without having to look too hard at the specifics. System 1 thinking will do the trick. We know that for something to be a feature, for example, it needs a set of requirements. A date-driven initiative needs requirements plus a deadline. Outcome-driven streams are often date-flexible, but assume a static goal.
Regardless of how you categorise, then, you’re reducing the dimensionality of the problem to make it seem understandable, achievable, ambitious yet realistic. Let’s look at some examples.
The project management trifecta, or triple constraint, wrong in its own unique way, for a long time provided the best oversimplification available to anyone building products or services. It boiled delivery down to its quality, cost and time dimensions and established a “pick two out of three” rule: you can only have fast and good if significantly increasing cost is an option.
Some have added Scope to this and made it four dimensions. Others still have tried to roll a more complicated (and therefore less appealing and popular) die with Time, Risk, Cost, Resources, Quality and Scope.
No matter how many variables you add to your function, sooner or later you’ll end up modelling the world into categories and creating hard boundaries between them, rather than assuming things exist on a continuum. It’s easier to build a quarterly roadmap of features with fixed-length time-boxes than to constantly list how their changing costs, feature sets, quality assurance processes, marketing analyses and external risk monitoring might affect their delivery.
For example, a common split is between features and experiments. If an arbitrary threshold of expected-outcome risk is crossed, organisations will stop calling something an experiment and start calling it a feature. In the feature world, scope becomes a largely fixed dimension to be estimated, as there’s a set of functional and non-functional requirements that define it. This, in turn, allows engineers to estimate its complexity, which then allows managers to set a time boundary they can measure themselves against. Delivering the feature, with its full set of requirements, on time, means success.
In the real world, if we monitor our expected outcomes against the results we actually produce, we’ll see more of a bell curve: 100% accuracy or inaccuracy in our predictions will be outliers.
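If you want to see the shape of that curve for yourself, the monitoring can be as simple as logging estimated versus actual figures per piece of work. A minimal sketch, with invented numbers purely for illustration:

```python
import statistics

# Hypothetical log of (estimated, actual) values per initiative,
# in whatever unit you estimate in. Invented numbers, for illustration.
history = [(10, 13), (8, 7), (21, 34), (5, 5), (13, 20),
           (8, 12), (3, 2), (13, 11), (21, 25), (5, 9)]

# Accuracy ratio: 1.0 is a perfect prediction.
ratios = [actual / estimated for estimated, actual in history]
print(f"median ratio: {statistics.median(ratios):.2f}")

# Bucket the ratios to eyeball the distribution: most land near the
# middle, while perfectly right (or wildly wrong) calls are outliers.
for lo in (0.0, 0.5, 1.0, 1.5, 2.0):
    count = sum(1 for r in ratios if lo <= r < lo + 0.5)
    print(f"{lo:.1f}-{lo + 0.5:.1f}: {'#' * count}")
```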
In contrast, if organisations treated that piece of work as an experiment, a way to test a hypothesis, they’d see scope as a variable to be minimised, time would be “as fast as possible”, and estimations would be kept at a very high level, just accurate enough to ascertain whether the experiment is worth running given its expected ROI.
Experiments also treat that expected ROI itself as a variable. Whereas features imply a need to build, experiments question whether it’d be worth doing so. They use the language of minimum viable products, of iteration designed to assess causal links in order to learn about customers quickly and purposefully.
Build-Measure-Learn cycles were popularised by the Lean Startup as the way to achieve validated learning about customers.
There are multiple ways of lowering the risk of your decision-making and making better predictions: grounding them in information extracted from past realities or cases, applying inductive reasoning to generalise from bottom-up observations.
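Reference-class forecasting is one such inductive technique: instead of reasoning about a new initiative from first principles, resample the distribution of how long similar work actually took. A minimal Monte Carlo sketch, again with invented historical data:

```python
import random

# Hypothetical cycle times (days) of past, similar pieces of work.
past_cycle_times = [4, 6, 6, 7, 9, 11, 12, 15, 22, 30]

def forecast(n_items: int, n_simulations: int = 10_000) -> list[int]:
    """Bootstrap: resample history to simulate delivering n_items."""
    return sorted(
        sum(random.choice(past_cycle_times) for _ in range(n_items))
        for _ in range(n_simulations)
    )

runs = forecast(n_items=5)
# Quote percentiles, not a single date: the forecast is a distribution.
for p in (50, 85, 95):
    print(f"P{p}: {runs[len(runs) * p // 100]} days")
```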
In contrast, top-down strategy in high-entropy, low-information scenarios, such as Internet-economy companies, is still assumed to be deductive. We design a theory and then attempt to prove it with observations. We often call that strategy or goal-setting, but we fail to make it falsifiable: to state it in such a way that a single negative occurrence could invalidate it.
Because top-down strategy is usually deductive and its theories are assumed to be true, executives will ask teams to be precise in their estimates. If a strategic action’s expected ROI is considered a low-risk variable, then naturally what matters is how fast and reliably teams can execute its subordinate features.
The problem, of course, is that this is often not the case. The classic innovator’s dilemma means that good management decisions eventually lead to a company’s demise, because it cannot respond to evolving market circumstances. The pace at which incumbents innovate is outmatched by their startup competitors. The management problem here is that companies aren’t structured and incentivised to respond to the type of change that will eventually break them, due to ossified business-as-usual processes that were designed to solve a different category of problem.
Signatories of the Agile Manifesto, of course, knew precisely that, and so do companies like Spotify, Amazon and Netflix, which create small, highly talented, highly autonomous teams that can tackle problems at scale, even if they have to compete with one another internally.
But what does this mean for a team in a traditionally shaped organisation? It means teams will be incentivised to solve for predictability (how fast and reliably they can deliver on the Exec’s vision), not for learning and evolving to stay ahead of the curve.
The pitfall, of course, is that full predictability is not only impossible in a dynamic environment; it is costly and can be counterproductive. The hypothesis is that the returns to being predictable follow an S-curve, while the cost of achieving it increases exponentially. Humans are also famously bad at predicting stuff.
[Chart: ROI of estimates, plotting resources (y-axis) against degree of predictability (x-axis).]
Complete guessing, while cheap, can be harmful: agreeing to a date and then missing it can cost an organisation a lot of money. The problem, of course, is that trying to be extremely predictable can be extremely costly and offer next to zero net benefit, given the opportunity costs it entails. You need to get it “just right”.
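To make that hypothesis concrete, here’s a toy model of the trade-off. The logistic benefit curve, the exponential cost curve and every constant in it are assumptions chosen for illustration, not measurements:

```python
import math

def benefit(p: float) -> float:
    """Returns to predictability: an S-curve that saturates near 100."""
    return 100 / (1 + math.exp(-10 * (p - 0.4)))

def cost(p: float) -> float:
    """Cost of achieving predictability p: grows exponentially."""
    return math.exp(6 * p) - 1

# Scan p in [0, 1] and find where net value peaks: the "just right" point.
best_net, best_p = max(
    (benefit(p / 100) - cost(p / 100), p / 100) for p in range(101)
)
print(f"net value peaks at predictability ~{best_p:.2f} (net {best_net:.1f})")
```

Under these made-up parameters, net value peaks at a middling degree of predictability; pushing further costs more than it returns.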
Then there are the second-order effects. A premature commitment to a timeline always involves making a number of assumptions. Variables that are inherent to the creation process get heavily simplified: technical complexity, a deep understanding of customer needs, clear scope, a correct measure of business impact, the right skillset and tooling, and so on.
The degree of resolution you can accrue as you get familiar with a problem is analogous to the coastline paradox. There is no well-defined length to a coastline, just as there is no fixed scope/time/cost/quality in any project. Because the problems we’re solving are often fractal in complexity, only by acquiring a deeper understanding of each of the underlying variables will we be able to understand how to capture the expected returns.
Will we measure the coastline in kilometres, metres, nanometres, or using the Planck length?
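Richardson’s empirical observation, which Mandelbrot built on, makes the analogy concrete: the measured length grows as the ruler shrinks, roughly L(g) = M * g^(1 - D), where g is the ruler length and D > 1 the fractal dimension. A small sketch, using the often-quoted D ≈ 1.25 for the west coast of Britain and an arbitrary scaling constant:

```python
# Richardson effect: measured coastline length L(g) = M * g**(1 - D),
# where g is the ruler length and D > 1 the fractal dimension.
M = 3_000   # arbitrary scaling constant, for illustration only
D = 1.25    # an often-quoted estimate for Britain's west coast

for ruler_km in (200, 100, 50, 10, 1):
    length = M * ruler_km ** (1 - D)
    print(f"ruler {ruler_km:>4} km -> measured length {length:,.0f} km")
```

The finer the ruler, the longer the coastline; likewise, the deeper you dig into a problem, the more scope, risk and cost you discover.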
Treating these as constants, rather than as variables to be managed (as in managed chaos), will have an impact on inventiveness, flexibility and individuals’ ability to respond to change in order to provoke positive outcomes. If we believe that access to knowledge is a market advantage that will cumulatively produce a competitive advantage, for example via data network effects, it must follow that instead of predictability we should aim to maximise our rate of learning. Observe, Orient, Decide, Act.
Maximise learning about what? Easy answer: customer value, measured by any proxy or leading indicator you can reliably find that predicts long-term customer satisfaction, business viability, task achievement, removal of friction, or value-proposition stickiness.
Giving teams autonomy and a goal that can be frequently challenged bottom-up is not only key to having happy, engaged people, but it’s also a fundamental trait of functioning product organisations.
Even when faced with date-sensitive initiatives (think Apple managing iPhone launches, Sony releasing a new PlayStation, deadlines for regulatory compliance, a startup’s runway ending), the trick is not to settle for predictably delivering a fixed set of requirements at an arbitrary quality threshold. It is, instead, to set your teams up with the budget to discover, deliver and optimise the right scope, the right level of quality, the right amount of risk and the appropriate cost to achieve the maximum possible impact. Measure teams’ success by their business impact. Let them pitch new problems and discover new meanings and missions.
Create a process for accountability where you recurrently ask them what problem they’re solving, how they know it’s the right problem, and how they’ll know if they’ve solved it.
Whereas some teams are using Scrum for this (posit: 90%+ of all teams are only using Scrum to iteratively deliver on their waterfall), others choose 6-week development cycles, some use innovation accounting and others still take advantage of longer cycle times and do stop/start/continue analyses at the end of each cycle (think Spotify’s DIBB framework again).
In short, this is meant to incentivise problem-solving through evidence-based processes, giving teams the autonomy to learn, adapt and even falsify their current problem statements altogether.
It will soon become obvious to teams managing their own fate that the best way to discover the successful outliers is to lower the risk of their bets when they have very little information, and to slowly shift to a more Bayesian process once they start learning and have informed priors. They’ll begin to understand where they should be going, and will do better at predicting the ROI of their micro-trade-offs. Over time, they’ll also need to evaluate where they are on their development S-curve, and whether further extraction will only offer diminishing returns, so they can feed that back to the organisation.
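What that shift can look like in practice: a Beta-Bernoulli sketch of a team updating its belief about, say, a conversion rate as experiment results come in. The prior and the data are invented; the point is that the posterior tightens as evidence accumulates, and bets can grow with it:

```python
# Beta-Bernoulli updating of a success rate. All numbers are invented.
alpha, beta = 1, 1                              # flat prior: we know nothing

experiment_batches = [(12, 88), (30, 170), (55, 245)]  # (successes, failures)
for successes, failures in experiment_batches:
    alpha += successes
    beta += failures
    mean = alpha / (alpha + beta)
    # Variance of a Beta(alpha, beta) posterior.
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    print(f"posterior mean {mean:.3f}, stdev {var ** 0.5:.3f}")
```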
Getting to an outcome is messy, and you need to fail a lot to learn what works.
If we acknowledge that the categories we create are inherently artificial and that amazing outcomes aren’t the result of linear processes, we’ll be better set up for success. If the goal is to maximise impact for our businesses (and hopefully society in the process), we’ll need techniques that embrace uncertainty. Chaos is uncomfortable. So is setting up the right culture to work within it. But do we have any other choice?