There’s a relatively new decision bias that’s rampant in business. We could call it the “machine learning illusion.”
We don’t mean that machine learning isn’t useful. It clearly can be. From banking to medicine, this statistical tool is delivering real benefits in consequential decisions. Thanks to advances in the technology, we are better able to recognize patterns in complex data and optimize crucial processes.
But one illusion among decision makers is the belief that machine learning is always useful: if one’s understanding and predictions stem from machine learning based on big data, then they must be reliable enough to base decisions on.
After all, the results are produced by complex and cutting-edge analyses on a rich information resource, conducted by statistically sophisticated analysts.
What could go wrong?
Two Key Conditions
For machine learning to improve decisions reliably and sustainably, two fundamental conditions need to hold:
1. The data needs to be representative of the situation.
But this is not guaranteed. Samples often suffer from selection issues (certain outcomes may be missing from the data entirely), and there can be significant delays between causes and effects. If so, what the machine learns gives a distorted picture of what is actually going on. In fact, companies have been facing all sorts of problems when their machines discriminate against certain content or people. (The selection problem is illustrated in the sketch after this list.)
2. The situation needs to be stable.
This is also not guaranteed. If the setting changes constantly and unpredictably, then the lessons of machine learning can quickly become obsolete and unreliable. The analyses may fit the existing data well, but that would not necessarily lead to better predictions. As a result, managers would become overconfident and fail to take the necessary precautions in time. (The same sketch below illustrates this drift problem.)
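To see how these failures play out, consider a minimal sketch in Python. NumPy and scikit-learn are assumed to be available, and the “credit scoring” setup, cutoffs, and sample sizes are all invented purely for illustration. A model trained on a selectively observed sample, and a model trained before the world drifts, both tend to score worse on representative, current data than models trained under the right conditions:

# A minimal sketch of both failure modes, using invented "credit scoring"
# data. NumPy and scikit-learn are assumed installed; every name and
# number here is hypothetical, chosen purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, cutoff=0.0):
    # One score feature; the true outcome flips, noisily, at a cutoff.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] + rng.normal(scale=0.5, size=n) > cutoff).astype(int)
    return x, y

# Condition 1 violated: selection bias. We observe outcomes only for
# applicants whose score exceeded 1.0, then deploy the model on everyone.
X, y = make_data(5000)
observed = X[:, 0] > 1.0
biased_model = LogisticRegression().fit(X[observed], y[observed])
full_model = LogisticRegression().fit(X, y)  # representative baseline
X_all, y_all = make_data(5000)
print("biased sample:", round(biased_model.score(X_all, y_all), 2))
print("representative sample:", round(full_model.score(X_all, y_all), 2))

# Condition 2 violated: instability. The world's cutoff drifts after
# training, so a model that fit the old data well misfires in the new regime.
stale_model = LogisticRegression().fit(*make_data(5000))
X_new, y_new = make_data(5000, cutoff=1.5)
fresh_model = LogisticRegression().fit(*make_data(5000, cutoff=1.5))
print("trained before drift:", round(stale_model.score(X_new, y_new), 2))
print("trained after drift:", round(fresh_model.score(X_new, y_new), 2))

The point is not the specific numbers, which depend on the toy setup, but the pattern: a good fit to the available data says little about performance once either condition is violated.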
Kind vs. Wicked
If both of these conditions hold approximately, then we’d be in a kind learning environment. Machine learning would indeed help shed reliable light on what happened in the past and what the future holds.
But if at least one of these conditions fails, then we’d find ourselves in a wicked learning environment. Insights from machine learning would be less reliable and less durable guides for decisions than we’d hope. Instead, there would be a good chance that the learning leads to an illusion of understanding and predictability.
To complicate things further, once the lessons and forecasts of machine learning become available, they are hard to ignore. Machine-approved insights can seem irrefutable, and decisions based on them can be hard to reverse. Unlearning and relearning become difficult.
That’s why, when managers design their companies’ data collection, analysis, and presentation standards, they need to acknowledge a potential “machine learning illusion” early on by checking how the “two key conditions” may be violated.