Generating Random Instances of Weighted Model Counting: An Empirical Analysis with Varying Primal Treewidth

Abstract

Weighted model counting (WMC) is an extension of propositional model counting with applications to probabilistic inference and other areas of artificial intelligence. In recent experiments, WMC algorithms perform similarly overall but with significant differences on specific subsets of benchmarks. A good understanding of the differences in the performance of algorithms requires identifying key characteristics that favour some algorithms over others. In this paper, we introduce a random model for WMC instances with a parameter that influences primal treewidth—the parameter most commonly used to characterise the difficulty of an instance. We then use this model to experimentally compare the performance of WMC algorithms c2d, Cachet, d4, DPMC, and miniC2D. Using these random instances, we show that the easy-hard-easy pattern is different for algorithms based on dynamic programming and algebraic decision diagrams than for all other solvers. We also show how all WMC algorithms scale exponentially with respect to primal treewidth and how this scalability varies across algorithms and densities. Finally, we combine insights from experiments involving both random and competition instances to determine how the best-performing WMC algorithm varies depending on clause density and primal treewidth.
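For readers unfamiliar with the task itself, the following is a minimal, illustrative Python sketch of weighted model counting by brute-force enumeration: the sum, over all satisfying assignments of a CNF formula, of the product of the weights of the literals set to true. It is not the random instance generator or any of the solvers studied in the paper; the example formula and weights are made up for illustration.

```python
from itertools import product

def weighted_model_count(clauses, weights, n_vars):
    """Brute-force WMC: sum over satisfying assignments of the product
    of literal weights. Exponential in n_vars; for illustration only."""
    total = 0.0
    for assignment in product([False, True], repeat=n_vars):
        # A clause (a list of signed ints, DIMACS-style) is satisfied
        # if at least one of its literals is true under the assignment.
        satisfied = all(
            any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
            for clause in clauses
        )
        if satisfied:
            w = 1.0
            for v in range(1, n_vars + 1):
                w *= weights[v] if assignment[v - 1] else weights[-v]
            total += w
    return total

# Hypothetical example: (x1 OR x2) AND (NOT x1 OR x3) with literal weights.
clauses = [[1, 2], [-1, 3]]
weights = {1: 0.3, -1: 0.7, 2: 0.6, -2: 0.4, 3: 0.5, -3: 0.5}
print(weighted_model_count(clauses, weights, n_vars=3))
```

Practical WMC solvers such as those compared in the paper avoid this exponential enumeration, e.g. via knowledge compilation or dynamic programming, which is why structural parameters like primal treewidth matter for their performance.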

Publication
In Proceedings of the International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR)