Face Recognition Model Analyzer / OE Fall 2018

Overview

Machine learning allows us to learn a model for a given task, such as facial recognition, with a high degree of accuracy. However, once these models are generated they are often treated as black boxes, and their limitations are often unknown to the end user. To address this issue, we developed a semantically enabled system that allows users to explore the limits of a facial recognition model. We integrate "smart" images with classification results to discover common causes of misclassification, capture the provenance of the model learning process, and describe the structure of the learned model from a data-centric perspective. We evaluated our tool by loading the Labeled Faces in the Wild dataset, enriching the images with Kumar attribute tags, and exploring two popular face recognition models, FaceNet and DLib. Using our tool we discovered several limitations: both models have trouble classifying images with cranial occlusion, both have trouble classifying mugshot-like images, and DLib does significantly better at recognizing people with long hair.
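As a sketch of the kind of analysis the tool enables, the SPARQL query below ranks image attribute tags by how often they co-occur with misclassified images. The frma: namespace and the Prediction, evaluatesImage, isCorrect, and hasAttributeTag names are illustrative assumptions, not the ontology's actual vocabulary.

    PREFIX frma: <http://example.org/frma#>   # hypothetical namespace

    # Count misclassified test images per attribute tag to surface
    # attributes (e.g., cranial occlusion) that correlate with errors.
    SELECT ?tag (COUNT(?image) AS ?misclassified)
    WHERE {
      ?prediction a frma:Prediction ;
                  frma:evaluatesImage ?image ;
                  frma:isCorrect false .
      ?image frma:hasAttributeTag ?tag .
    }
    GROUP BY ?tag
    ORDER BY DESC(?misclassified)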

Machine learning, and the amount of money being spent on machine learning research, has exploded over the last 10 years. New machine learning techniques are being applied to everything from video game creation to disease diagnosis to better communication between machines and humans. However, one of the main limitations of machine learning algorithms is that the learned models are often impossible for humans to understand and are often only evaluated for accuracy on a specific dataset for a specific task. This results in future users treating the model as a black box without really understanding its limitations. Another limitation is the inability to explain how a model arrives at a prediction, which makes it difficult for users to trust pre-learned models. There are use cases where this is perfectly fine; however, it is unacceptable in scenarios where life-and-death decisions must be made, such as driving vehicles. As a result, a potential user building a system for such a scenario often has insufficient information to make a truly informed decision when choosing which model to use for a new application.

Ontology

To address these issues we developed the Face Recognition Model Analyzer Ontology (FRMA), which semantically describes face recognition models, the images used to train and test a model, and the predictions generated by a model. Using this ontology we can explore the strengths and weaknesses of various models from an image-attribute perspective using SPARQL queries, allowing future users to better evaluate the effectiveness of a pre-trained model for their use case.
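For instance, a query along the following lines could compare how often each model misclassifies images carrying a particular attribute tag, such as long hair. As above, the frma: names are illustrative placeholders rather than the published ontology terms.

    PREFIX frma: <http://example.org/frma#>   # hypothetical namespace

    # Per-model error rate on images with a given attribute tag,
    # e.g., to compare FaceNet and DLib on long-haired subjects.
    SELECT ?model (AVG(IF(?correct, 0.0, 1.0)) AS ?errorRate)
    WHERE {
      ?prediction a frma:Prediction ;
                  frma:producedBy ?model ;
                  frma:evaluatesImage ?image ;
                  frma:isCorrect ?correct .
      ?image frma:hasAttributeTag "long hair" .
    }
    GROUP BY ?model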

Team

Acknowledgement

This work was supported by Prof. Deborah L. McGuinness, Ms. Elisa Kendall, Jim McCusker, and Rebecca Cowan for the class Ontologies, Fall 2018, at Rensselaer Polytechnic Institute.

This work was conducted using the Protégé resource, which is supported by grant GM10331601 from the National Institute of General Medical Sciences of the United States National Institutes of Health.



Face Recognition Model Analyzer

Associated Classes