We are living through a global entrepreneurial renaissance. Entrepreneurs are everywhere, and thanks to the Internet, along with technologies enabled by open source and cloud computing, there has never been a better time to launch a new product or start a new business. It is both cheaper and faster than ever before.
But there is a dark cloud in all of this. While we are building more products than ever before, the sad reality is that the success rate of those products hasn't changed much. The odds are still heavily stacked against starting a new business, and many of these products still fail.
This is a real problem because we pour a lot of our time, money and effort into these products. More importantly, these failures can be a real setback, both emotionally and financially, especially for a first-time entrepreneur.
But failure is key to innovation. Good ideas are rare and hard to find. You have to go through lots of ideas that don’t work, before you can find the ones that do.
The answer lies in embracing “controlled failure” and asking a very important question: why?
Below, find an exclusive excerpt for StartupNation.com from “SCALING LEAN: Mastering the Key Metrics for Startup Growth” by Ash Maurya, with permission of Portfolio, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © Ash Maurya, 2016.
Chapter 9: Dealing with Failure
Can you find the common theme across these discoveries: penicillin, the microwave oven, the X-ray, gunpowder, plastics and vulcanized rubber?
Yes, they were all accidental discoveries. But because they were accidental, it’s easy to dismiss them as lucky breaks. However, there was more than luck at play. All these discoveries started as failed experiments.
In each of these cases, the inventors were seeking a specific outcome and instead got a different outcome. But instead of throwing away their “failed” experiments, they did something very different from most people: they asked why.
Innovation experiments are no different. Achieving breakthrough, then, is less about luck and more about a rigorous search. The reason the hockey-stick trajectory has a long flat portion in the beginning is not because the founders are lazy and not working hard, but because before you can find a business model that works, you have to go through lots of stuff that doesn’t.
Most entrepreneurs, however, run away from failure. At the first sign of failure, they rush to course correct without taking the requisite time to dig deeper and get to the root cause of the failure. In the Lean Startup methodology, the term “pivot” is often used to justify this kind of course correction. But this, of course, is a misuse of the term: A pivot not grounded in learning is simply a “see what sticks” strategy.
The key to breakthrough isn’t running away from failure but, like the inventors above, digging in your heels and asking why. The “fail fast” meme is commonly used to reinforce this sentiment. But I’ve found that the taboo of failure runs so deep (everywhere except maybe in Silicon Valley) that “failing fast” is not enough to get people to accept failure as a prerequisite to achieving breakthrough. You need to completely remove the word “failure” from your vocabulary.
Try to see so-called failures as instances where your model of customer behavior did not match your observed experience. In these instances, you need to either try a different approach or revise your model. Just as the quality of your input ideas drives your results, the quality of your post-experiment analyses drives your next breakthrough insights.
The Analysis step in the GO LEAN framework is where you attempt to reconcile your observed results with your expected outcomes. Your results provide a feedback loop that you process in reverse—first at the experiment level, then at the strategy or Validation Plan level, and finally in your models.
Analyze your experiment
If your observed results match up against your expected outcome, you can pat yourself on the back and move to the next step of analyzing your Validation Plan against the goal.
If, however, you were pitting two (or more) different approaches as competing experiments (an A/B test), it is possible that each approach yields positive results, but you can’t keep both. You need to declare a winner.
If, on the other hand, your observed results do not match up with your expected outcomes, you need to spend time understanding why before moving forward.
This might be accomplished in a number of ways:
- Review Captured Artifacts: Revisit captured artifacts such as notes and recorded customer interviews in search of insights you might have missed before.
- Conduct a Five Whys Analysis: Run a Five Whys session with other team members in search of deeper causes for your unexpected outcomes.
- Do More Extensive Data Mining: Analyze your data differently or dig into a different data set of micro metrics that might help you uncover patterns of causality in your observed results.
- Run a Follow-up Learning Experiment: As metrics can tell you only what occurred—not why—sometimes the best course of action is to run a follow-up learning experiment designed to gather more data.
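When an A/B test pits competing approaches against each other, declaring a winner usually comes down to checking whether the observed difference in conversion rates is large enough to trust. The book does not prescribe a specific statistical test; the sketch below uses a standard two-proportion z-test at roughly 95% confidence, and the visitor and conversion numbers are hypothetical.

```python
import math

def declare_winner(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Compare variant A (conv_a conversions out of n_a visitors)
    against variant B using a two-sided two-proportion z-test.
    Returns 'A', 'B', or None if the difference is not significant."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    if z > z_crit:
        return "B"
    if z < -z_crit:
        return "A"
    return None  # not enough evidence yet: keep the experiment running

# Hypothetical landing-page test: A converts 48/1000, B converts 74/1000
print(declare_winner(48, 1000, 74, 1000))  # prints "B"
```

A `None` result is itself useful information: it tells you the experiment has not produced a decisive signal, which usually means running it longer rather than picking a winner prematurely.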
Analyze your strategy
Next you analyze your observed results at the strategy level. Based on your observed results, a Validation Plan can go into one of four possible next states:
- Retire: This is when you successfully break the constraint and achieve the macro goal that you set out to achieve with this strategy. You retire this strategy and move on to prioritizing the next promising ideas in your backlog.
- Persevere: This is when you gather enough positive signals to warrant staying the course on the current strategy. You can move on to the next step of analyzing the strategy against your models, which helps you decide the next experiment to run. For example, if your overall strategy was launching a new feature, your first experiment might involve testing interest in that feature. Getting enough positive signals, as dictated by your models, gives you permission to stay the course.
- Pivot: This is when you don’t gather enough positive signals to stay the course, but you know why, and aren’t ready yet to give up on the strategy. A pivot represents a change in direction based on newly uncovered learning while staying focused on the goal. For example, if your overall strategy was testing for Problem/Solution Fit, your first experiment might involve finding interview leads using your blog. If you fail to get enough leads but know the reason is that your blog audience does not overlap with your ideal early customer segment, you would not give up on the overall strategy but pivot to testing a different channel such as guest blogging or advertising.
- Reset: This is when you gather enough negative signals to invalidate a strategy. As there is clearly no point in staying the course, you make a decision to reallocate resources to more promising ideas in your backlog.
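The four possible next states above follow an ordered decision rule: first check whether the goal was achieved, then whether the signals were positive, then whether you understand the cause of failure. A minimal sketch of that logic, with hypothetical predicate names not taken from the book:

```python
from enum import Enum

class NextState(Enum):
    RETIRE = "retire"        # constraint broken, macro goal achieved
    PERSEVERE = "persevere"  # enough positive signals to stay the course
    PIVOT = "pivot"          # negative signals, but the cause is understood
    RESET = "reset"          # strategy invalidated: reallocate resources

def next_state(goal_achieved, positive_signals, cause_understood):
    """Map observed results to one of the four possible next states
    of a Validation Plan (illustrative decision rule)."""
    if goal_achieved:
        return NextState.RETIRE
    if positive_signals:
        return NextState.PERSEVERE
    if cause_understood:
        return NextState.PIVOT
    return NextState.RESET
```

For example, a failed experiment whose root cause you have identified (say, the wrong channel for reaching early adopters) maps to `next_state(False, False, True)`, i.e., a pivot rather than a reset.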
Update your models
Finally, you need to ensure that you always keep your models updated after each experiment—especially your customer factory model, which will change a lot more frequently than your Lean Canvas and traction models. That said, remember that finding a business model that works is a search-versus-execution problem. Much as you pit several competing strategies or experiments against one another, you also pit several competing business models against one another. For this reason, it’s just as important to frequently revisit your business model stories and ensure that they reflect your latest learning.
Decide next actions
With your analysis done, you are now ready to decide next actions. Based on your results and updated models, you reevaluate whether the current constraint is broken and, if so, you search for a new constraint. Remember that the steps in your customer factory are highly interdependent. Changes in one area often have ripple effects in other areas. When you don’t constantly monitor the entire system at a macro level, inertia can set in and lead you to fall into the local optimization trap. This is when you fail to recognize that your current efforts have successfully broken a constraint and you keep on optimizing further—at the expense of tackling the next weakest link.
From this analysis, you then decide to double down on certain strategies, discontinue others, and maybe even add new ones.