Much of the recent progress in machine learning has been driven by neural networks (NNs); in particular, deep learning (DL) has delivered record-breaking performance across many domains.
In fields with a direct human impact, e.g. medicine and finance, legal or ethical considerations can require that decisions be interpretable or explainable. This requirement often rules out NNs and other so-called ‘black-box’ models.
Several techniques have been proposed to help explain the internal workings of NNs, many of them focused on image and visual recognition tasks.
One of these methods, feature occlusion, which attributes importance to parts of the input by masking them and measuring the resulting change in the model's output, shows promise for non-image and multi-modal data.
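As a rough illustration of the idea (a minimal sketch for tabular data, not the specific procedure we will evaluate), the following assumes a trained model exposed as a callable `model` mapping a 2-D input array to scores, and per-feature baseline values such as training-set means; all names here are hypothetical:

import numpy as np

def occlusion_importance(model, x, baseline):
    """Score each feature by how much occluding it shifts the model output.

    model    -- hypothetical callable: (n_samples, n_features) array -> scores
    x        -- a single input example, shape (n_features,)
    baseline -- per-feature 'neutral' values used to occlude, e.g. training means
    """
    base_score = model(x[np.newaxis, :])[0]
    importances = np.empty(x.shape[0], dtype=float)
    for i in range(x.shape[0]):
        occluded = x.copy()
        occluded[i] = baseline[i]  # mask feature i with its neutral value
        importances[i] = base_score - model(occluded[np.newaxis, :])[0]
    return importances  # larger |value| suggests feature i mattered more

Because the method only queries the model's inputs and outputs, it applies equally to any modality for which a sensible occlusion baseline can be defined.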
We propose to investigate feature occlusion, and potentially to develop further techniques, as general methods for explaining and interpreting deep learning models.