Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning

Author(s): Gil Lederman, Markus N. Rabe, Edward A. Lee, and Sanjit A. Seshia

Citation
Gil Lederman, Markus N. Rabe, Edward A. Lee, and Sanjit A. Seshia. "Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning". International Conference on Learning Representations (ICLR), April 26-May 1, 2020.

Abstract
We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning. We focus on a backtracking search algorithm, which can already solve formulas of impressive size, up to hundreds of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For a family of challenging problems in 2QBF, we learn a heuristic that solves significantly more formulas than the existing handwritten heuristics.
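
The core idea, scoring branching candidates from a graph encoding of the formula so that the cost of a prediction scales with the formula rather than a fixed input size, can be illustrated with a small sketch. Everything below (the literal-clause graph encoding, the message-passing step, and all function and variable names) is an assumption made for illustration only; it is not the paper's architecture or training procedure, and the weights here are random rather than learned.

    # Illustrative sketch (not the paper's method): score branching candidates
    # for the CNF matrix of a QBF by message passing over the literal-clause
    # incidence graph. All names, dimensions, and weights are assumptions.
    import numpy as np

    def literal_index(lit, num_vars):
        """Map a signed DIMACS-style literal to a row index in [0, 2*num_vars)."""
        v = abs(lit) - 1
        return 2 * v + (0 if lit > 0 else 1)

    def score_branching_variables(clauses, num_vars, rounds=2, dim=8, seed=0):
        """Return one heuristic score per variable via untrained message passing."""
        rng = np.random.default_rng(seed)
        lit_emb = rng.normal(size=(2 * num_vars, dim))      # literal embeddings
        W_cl = rng.normal(size=(dim, dim)) / np.sqrt(dim)   # literal -> clause weights
        W_lit = rng.normal(size=(dim, dim)) / np.sqrt(dim)  # clause -> literal weights
        readout = rng.normal(size=(dim,))                   # per-variable scoring vector

        # Incidence matrix A[c, l] = 1 iff literal l occurs in clause c.
        A = np.zeros((len(clauses), 2 * num_vars))
        for c, clause in enumerate(clauses):
            for lit in clause:
                A[c, literal_index(lit, num_vars)] = 1.0

        for _ in range(rounds):
            cl_emb = np.tanh(A @ lit_emb @ W_cl)    # aggregate literal embeddings into clauses
            lit_emb = np.tanh(A.T @ cl_emb @ W_lit) # aggregate clause embeddings back into literals

        # Combine both polarities of each variable into a single score.
        var_emb = lit_emb.reshape(num_vars, 2, dim).mean(axis=1)
        return var_emb @ readout

    if __name__ == "__main__":
        # (x1 or ~x2) and (x2 or x3) and (~x1 or ~x3), variables 1..3
        clauses = [[1, -2], [2, 3], [-1, -3]]
        scores = score_branching_variables(clauses, num_vars=3)
        print("pick variable", int(np.argmax(scores)) + 1, "scores:", scores)

In the paper the scoring network is trained with reinforcement learning inside the solver's decision loop; the random weights above only illustrate the data flow from formula structure to per-variable scores, which is what makes the approach independent of formula size.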

Citation Formats

  • HTML
                    
    Gil Lederman, Markus N. Rabe, Edward A. Lee, and Sanjit A. Seshia.
    "<a href="https://www.icyphy.org/publications/2020_LedermanEtAl/">Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning</a>".
    <i>International Conference on Learning Representations (ICLR)</i>, April 26-May 1, 2020.
                    
                    
  • Plain Text
                    
    Gil Lederman, Markus N. Rabe, Edward A. Lee, and Sanjit A. Seshia.
    "Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning".
    International Conference on Learning Representations (ICLR), April 26-May 1, 2020.
                    
                    
  • BibTeX
                        
    @inproceedings{LedermanEtAl:20:LearningQBF,
        author = {Gil Lederman and Markus N. Rabe and Edward A. Lee and Sanjit A. Seshia},
        title = {Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning},
        booktitle = {International Conference on Learning Representations (ICLR)},
        month = {April 26-May 1},
        year = {2020},
        abstract = {We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning. We focus on a backtracking search algorithm, which can already solve formulas of impressive size, up to hundreds of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For a family of challenging problems in 2QBF, we learn a heuristic that solves significantly more formulas than the existing handwritten heuristics.},
        URL = {https://www.icyphy.org/publications/2020_LedermanEtAl/}
    }