When it comes to forecasting the future, we can get into hot water in a hurry. After all, no two economists can agree on anything, I’m told. The recent collapse in financial markets seems to bear that out!
Forecasting seems to be an especially risky activity when the results challenge accepted thinking. Highly qualified scientists who have learned one way of doing things aren’t easily persuaded that somebody has a better way of working things out.
We’re experiencing a little of this at the moment – not because our audience is Luddite, but precisely the opposite: it’s extremely well qualified in its field. What we’re doing verges on “black magic”, mostly because we can’t easily explain what’s going on.
At a recent meeting we got into a discussion about whether our prediction system is “early detection” or “forecasting”.
The “early detection” guys were clinicians – people who know enough about physiology to recognize that if one thing is happening, it will continue to deteriorate and finish up as something else. This knowledge is based on experience, of course. The result is a prediction based on what has happened before.
The “forecasting” guys were engineers. These are people who “model” various parameters and use math to calculate the probability of outcomes. The result is a level of confidence regarding a prediction of what should happen, based on known parameters.
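To make the engineers’ approach concrete, here is a minimal sketch of that style of forecasting: a handful of modelled parameters are combined into a single probability, which serves as the level of confidence. Everything here – the parameter values, the weights, the logistic form – is invented for illustration; a real model would be fitted to data.

```python
import math

def outcome_probability(params, weights, bias):
    """Toy logistic model: combine modelled parameter readings into a
    probability (a confidence level) that an outcome will occur.
    The weights and bias here are hand-picked, not fitted."""
    score = bias + sum(w * p for w, p in zip(weights, params))
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical scaled readings for two modelled parameters,
# with invented weights and bias.
p = outcome_probability([0.8, 0.6], weights=[1.5, 2.0], bias=-1.0)
print(f"Confidence of outcome: {p:.2f}")  # → 0.80
```

The point isn’t the particular formula; it’s that the engineers can point to every input and every step that produced the confidence figure.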
Both are valid approaches. Unfortunately, both suggest our approach doesn’t meet the standards demanded by science: basically, we can’t explain the inputs to the calculations or how the outputs are derived.
Actually we aren’t much different to the strange old lady reading the tea leaves 😦
Luckily the other group we’re working with – Neural Network experts – have come across the problem before. They give us confidence we’re on the right track.
Bayesian Neural Networks add value precisely because they get into stuff mere humans can’t cope with. They process literally millions of “what if, then” questions and provide us with answers meeting our criteria.
They recognize patterns that traditional science can’t see.
We’re now in a place where, as humans, we can’t explain exactly what the BANN (Bayesian Artificial Neural Network) has found. Our only quality check is how accurate its pattern recognition proves to be.
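A rough sketch of what “millions of what-if questions” looks like in a Bayesian network: instead of one fixed set of weights, the network keeps a distribution over weights, samples many plausible networks, and reports both a prediction and a spread. All the numbers below (weight means, spreads, the single input) are made up for illustration – a one-weight toy, not our system.

```python
import math, random

random.seed(42)

def sample_prediction(x, w_mean, w_std, b_mean, b_std):
    """Draw one plausible network from the weight distributions
    ("what if these were the weights?") and make its prediction."""
    w = random.gauss(w_mean, w_std)
    b = random.gauss(b_mean, b_std)
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Monte Carlo over many sampled networks: the mean is the prediction,
# the spread is the confidence we can attach to it.
samples = [sample_prediction(0.7, w_mean=2.0, w_std=0.5,
                             b_mean=-0.5, b_std=0.2)
           for _ in range(10_000)]
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"prediction {mean:.2f} with spread {std:.2f}")
```

Scale this from one weight to thousands, across dozens of physiological parameters, and you get answers with confidence levels – but no human-readable account of why.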
We do know what the software is doing – we built it, after all. But digging into the results of those millions of calculations would take more years than any of us have left.
Our BANN is interrogating the physiology parameters down to a level the human brain can’t cope with. It’s finding stuff that one day others will understand. But in the meantime we can only decide whether to believe it, or not, based on the accuracy of its predictions.
We can’t explain to the clinicians and engineers the association between physiology parameters.
We can only tell them when we’re seeing predictions which turn out to be right more often than they’re wrong.
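That “right more often than wrong” check is itself very simple to state. A sketch, with entirely hypothetical yes/no alerts compared against what actually happened:

```python
def hit_rate(predictions, outcomes):
    """Fraction of predictions that matched the real outcome.
    When the model's internals can't be explained, this is the
    only quality check available."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(predictions)

# Invented batch of yes/no alerts versus observed outcomes.
preds  = [1, 0, 1, 1, 0, 1, 0, 1]
actual = [1, 0, 0, 1, 0, 1, 0, 1]
rate = hit_rate(preds, actual)
print(f"right {rate:.0%} of the time")  # → right 88% of the time
```

No physiology, no probability theory – just a running score of hits and misses, which is exactly what we can show the clinicians and engineers.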
Those of us who have been around computers for long enough have always known there would come a time when the machine could take over.
It seems Avert-IT is finding that point.