Lies, damned lies, and (bad) AIs

Along with the recent advances in machine learning have come a series of ethical and security concerns. For example, there is a whole body of ongoing research on corrupting training datasets in order to cause specific, incorrect inferences. If you haven’t seen it, glance over this paper, in which the authors caused image recognition to misidentify physical road signs by attaching stickers to them: Robust Physical-World Attacks on Deep Learning Models. Other papers have talked more generally about the problems of safety and security in machine learning: https://blog.acolyer.org/2017/11/29/concrete-problems-in-ai-safety/.
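To make the training-set-corruption idea concrete, here is a deliberately toy sketch (my own illustration, not the technique from either paper above, and unlike the sticker attack it tampers with training data rather than inputs at inference time): flipping the labels of the training points nearest a chosen target makes a simple classifier mislabel that specific input while behaving normally almost everywhere else.

```python
# Toy illustration of training-set poisoning via label flipping.
# Assumes NumPy and scikit-learn; real attacks are far subtler than this.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (200, 2)),   # class 0 cluster
               rng.normal(+2, 1, (200, 2))])  # class 1 cluster
y = np.array([0] * 200 + [1] * 200)
target = np.array([[2.0, 2.0]])               # clearly a class-1 point

clean = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("clean model predicts:   ", clean.predict(target))    # -> [1]

# Poison the training set: relabel the 15 training points closest to the target.
nearest = np.argsort(np.linalg.norm(X - target, axis=1))[:15]
y_poisoned = y.copy()
y_poisoned[nearest] = 0

poisoned = KNeighborsClassifier(n_neighbors=5).fit(X, y_poisoned)
print("poisoned model predicts:", poisoned.predict(target))  # -> [0]
```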

Meanwhile, all sorts of unfair, unpleasant, and even potentially deadly (think ML for diagnosis and treatment, as just one example) forms of (presumably) unintentional bias have slipped into our ML models: The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity

I am not a trained statistician, but my father taught statistics and social science for many years, and I learned about the pragmatic problems of statistics from him at the breakfast table. Dad would read the newspaper, come across a report of some new scientific study or poll, and call out all the problems with it: everything from lack of control groups, leading question phrasing, and insufficient sample sizes, to more subtle statistical fallacies like Simpson’s Paradox. As I’ve said elsewhere, I had a bit of a weird childhood, but in this case it left me with both an appreciation for and a healthy skepticism of statistics.
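If you haven’t run into Simpson’s Paradox before, a small worked example makes it click. The numbers below are illustrative (patterned on the often-cited kidney-stone treatment data): treatment A wins in every subgroup, yet treatment B wins once the subgroups are pooled, simply because the subgroups differ in size and difficulty.

```python
# Illustrative Simpson's Paradox: A beats B in each subgroup, B beats A pooled.
groups = {
    # group: (A successes, A total, B successes, B total)
    "mild cases":   (81, 87, 234, 270),
    "severe cases": (192, 263, 55, 80),
}

for name, (a_s, a_n, b_s, b_n) in groups.items():
    print(f"{name}: A = {a_s / a_n:.0%}, B = {b_s / b_n:.0%}")

# Pooling the groups reverses the comparison.
a_s = sum(g[0] for g in groups.values()); a_n = sum(g[1] for g in groups.values())
b_s = sum(g[2] for g in groups.values()); b_n = sum(g[3] for g in groups.values())
print(f"pooled: A = {a_s / a_n:.0%}, B = {b_s / b_n:.0%}")
```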

In many ways, statistics are a way of compressing or summarizing a dataset, and almost by definition this is a lossy process: Same stats, different graphs: generating datasets with varied appearance and identical statistics through simulated annealing. But you have to ask yourself: how do you know what was lost in that summarization?
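A quick way to see that loss for yourself, using two of Anscombe’s classic quartet datasets rather than the simulated-annealing approach from the paper: one is a noisy line, the other a smooth curve, yet their means, variances, correlations, and least-squares fits all agree to a couple of decimal places.

```python
# Two of Anscombe's quartet datasets: very different shapes, nearly identical summaries.
import numpy as np

x  = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])  # noisy line
y2 = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])   # smooth curve

for name, y in (("dataset I", y1), ("dataset II", y2)):
    slope, intercept = np.polyfit(x, y, 1)
    print(f"{name}: mean={y.mean():.2f}  var={y.var(ddof=1):.2f}  "
          f"corr={np.corrcoef(x, y)[0, 1]:.3f}  fit: y = {slope:.2f}x + {intercept:.2f}")
# Plot them, and the summary statistics stop looking like the whole story.
```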

So, naively, it seems to me that we should not be surprised that machine learning, which is largely based on various statistical techniques, needs to be viewed with a similar degree of skepticism. One of the great discoveries of the past few years is that if you throw a ton of data and processing power at a problem, you can often get excellent results from relatively simple algorithms, whether that problem be vision, playing games like Chess and Go, or interpreting chest X-rays. These algorithms can find correlations, and sometimes suggest causal relationships, that would not occur to a human researcher.

Unfortunately, the outputs of these algorithms are often just big matrices full of unexplained numbers. And this is exactly where the issues above come into play: without any explanation for why an AI works as it does, how can we know if it is really correct?

Statisticians have traditionally addressed this by modeling. The model has served to frame the input, guide the computation, and explain the results of experiments and polls. Of course, this risks constraining the results too much; as mentioned above, many of the most exciting results in ML have been unexpected and could not have been predicted beforehand by a model. Still, I think we do need explainable models as outputs of these algorithms. This is a very active area of research, and it will be fun to see how it advances over the next few years.