Our bias as modellers
Our human biases can sometimes get in the way of effective analysis. Whenever you are changing something in your model, running a sensitivity or trying to answer an analytical question, it is always useful to create a hypothesis about what you think is going to happen.
For instance, when the IRR (internal rate of return) is changed, does another variable increase? When a different variable is changed, will the leverage go down? Having that hypothesis ready before you undertake the analysis is key.
If you don’t have a hypothesis before you start, your bias will be to believe your model and to rationalise the results.
As Kenny explains in the video: ‘you will run something, you will change something and you will look at the results, and then you will set about trying to explain the results which are coming out of your model’.
We are ‘sense-making creatures’: we cannot exist without making sense of the world; it is what our brains are programmed to do.
The Two Possibilities
If we have a hypothesis before making a change and the results turn out differently, there are two possibilities:
1) The model is wrong, or we do not understand how it works.
2) Our hypothesis was wrong, and the model is teaching us something about the business we are modelling.
Both scenarios are useful data, and we can dig into them to interpret the results correctly.
We still have to be careful not to construct explanations for which we have no evidence, so testing the changes and refining our hypotheses is critical.
And we should expect our hypotheses to be wrong frequently. This is why we have models: to help us continually refine our understanding of commercial dynamics and to eliminate our bias as modellers. If we could always predict correctly from intuition, we wouldn’t need a model.
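To make the discipline concrete, here is a minimal sketch in Python of what writing the hypothesis down before running a sensitivity could look like. The model function, variable names and relationships below are hypothetical placeholders, not taken from any real project model.

```python
# A minimal sketch of hypothesis-driven sensitivity testing.
# run_model, its inputs and its relationships are hypothetical placeholders.

def run_model(debt_ratio: float) -> dict:
    """Toy stand-in for a financial model: returns a few key outputs."""
    irr = 0.12 - 0.05 * debt_ratio   # in this toy model, IRR falls as gearing rises
    dscr = 1.8 - 1.0 * debt_ratio    # debt service cover also falls
    return {"irr": irr, "dscr": dscr}

def test_sensitivity(new_debt_ratio: float, expected_direction: dict) -> None:
    """Record the hypothesis first, apply the change, then compare the
    actual movement of each output against the expected direction."""
    base = run_model(debt_ratio=0.60)
    scenario = run_model(debt_ratio=new_debt_ratio)
    for output, direction in expected_direction.items():
        moved_up = scenario[output] > base[output]
        hypothesis_held = moved_up if direction == "up" else not moved_up
        status = "as hypothesised" if hypothesis_held else "INVESTIGATE"
        print(f"{output}: {base[output]:.3f} -> {scenario[output]:.3f} ({status})")

# Hypothesis, written down BEFORE the run: raising the debt ratio
# will push the IRR up and the DSCR down.
test_sensitivity(new_debt_ratio=0.75, expected_direction={"irr": "up", "dscr": "down"})
```

In this toy example the hypothesis about the IRR deliberately disagrees with the stand-in model, so the check flags it for investigation, which is exactly the prompt to ask whether the model or the hypothesis is at fault.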
This video was first posted on LinkedIn; see the post here.
If you enjoyed this blog post, perhaps you would like to read more of our articles here.