Interpretable Classification Rules in Relaxed Logical Form

Abstract

Interpretability has become a central thread in ML research. As ML algorithms continue to permeate critical application domains such as medicine, law, and transportation, it becomes increasingly important to allow human domain experts to understand and interact with ML solutions. ML algorithms that produce predictions in the form of rules are arguably among the most interpretable, but their discrete combinatorial structure makes them computationally hard to learn. Here we generalize the widely popular CNF rules and introduce relaxed-CNF rules. These rules are far more flexible in fitting data (they have higher capacity) yet remain roughly as interpretable to people as traditional CNF rules. We consider relaxed definitions of the standard OR/AND operators that allow exceptions both in the construction of a clause and in the selection of clauses in a rule. We first describe an exact ILP formulation, which is computationally expensive. We then propose an incremental approach that generates accurate, interpretable relaxed-CNF rules with significantly improved runtime performance.
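
To make the relaxation concrete, here is a minimal sketch (not code from the paper) that reads a relaxed-CNF rule as a pair of "at least M" thresholds: a clause counts as satisfied when at least a threshold number of its literals hold, and the rule predicts the positive class when at least a threshold number of clauses are satisfied. The function and parameter names (eval_relaxed_cnf, eta_clause, eta_rule) are illustrative assumptions, not the paper's notation.

    # Sketch of relaxed-CNF evaluation over a binary feature vector x.
    # A clause is a list of (feature_index, expected_value) literals.
    def eval_relaxed_cnf(clauses, x, eta_clause, eta_rule):
        fired = 0
        for clause in clauses:
            # Count how many literals of this clause the sample satisfies.
            satisfied = sum(1 for i, v in clause if x[i] == v)
            if satisfied >= eta_clause:   # clause tolerates exceptions
                fired += 1
        return int(fired >= eta_rule)     # rule tolerates unsatisfied clauses

    # Ordinary CNF is the special case eta_clause = 1 (each clause is an OR)
    # and eta_rule = number of clauses (the rule is an AND over all clauses);
    # other threshold settings admit the "exceptions" described above.
    rule = [[(0, 1), (2, 0)],   # clause 1 over features x0, x2
            [(1, 1), (3, 1)]]   # clause 2 over features x1, x3
    x = [1, 0, 0, 1]
    print(eval_relaxed_cnf(rule, x, eta_clause=1, eta_rule=2))  # -> 1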

Publication
In the IJCAI workshops on XAI (Explainable Artificial Intelligence) and DSO (Data Science meets Optimization)