Terminology notes from the paper Evaluating CDCL Variable Scoring Schemes.

Biere A., Fröhlich A. (2015) Evaluating CDCL Variable Scoring Schemes. In: Heule M., Weaver S. (eds) Theory and Applications of Satisfiability Testing — SAT 2015. SAT 2015. Lecture Notes in Computer Science, vol 9340. Springer, Cham. https://doi-org-s.era.lib.swjtu.edu.cn/10.1007/978-3-319-24318-4_29


1. conflict-index

The conflict-index is the total number of conflicts that have occurred so far.

   
2. variable bumping

The process of updating scores of variables is also referred to as variable
bumping [7]. Note, however, that in modern solvers and also in our experiments
we not only bump variables of the learned clause, but all seen variables occurring in antecedents used to derive the learned clause through a regular input resolution chain [25] from existing clauses.
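As a hedged illustration of that remark (this is not the paper's code; the function name, the clause representation as lists of DIMACS-style literals, and the score dictionary are all assumptions), bumping every variable seen in the antecedents might be sketched as:

```python
# Minimal sketch: during conflict analysis, every variable occurring in an
# antecedent clause of the resolution chain counts as "seen". All seen
# variables get their score bumped once, not only those in the learned clause.
def bump_seen(antecedents, scores, inc=1.0):
    """antecedents: iterable of clauses (lists of signed ints, DIMACS-style).
    scores: dict mapping variable -> score, updated in place."""
    seen = {abs(lit) for clause in antecedents for lit in clause}
    for var in seen:
        scores[var] = scores.get(var, 0.0) + inc
    return seen
```

Note that a variable appearing in several antecedents is still bumped only once per conflict, since the seen set deduplicates it.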

   
3. decision heuristic

Related terms: decision score, decision variable, dynamic decision heuristics.


 

VSIDS (variable state independent decaying sum)

CDCL (conflict-driven clause learning)

ACIDS (average conflict-index decision score)

EVSIDS (exponential VSIDS)

VMTF (variable move-to-front) scheme

NVSIDS (normalized VSIDS) [6]: an exponential moving average of how often a variable occurred in antecedents of learned clauses.
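The NVSIDS moving average can be sketched as follows (an assumed formulation, not the paper's code: after each conflict all scores decay by a factor f, and variables occurring in antecedents additionally receive 1 - f, so scores stay in [0, 1]):

```python
# Sketch of NVSIDS as an exponential moving average over conflicts.
def nvsids_update(scores, bumped, f=0.95):
    """scores: dict variable -> score in [0, 1], updated in place.
    bumped: set of variables occurring in antecedents of the learned clause."""
    # every score decays towards zero
    for var in list(scores):
        scores[var] *= f
    # bumped variables move towards one
    for var in bumped:
        scores[var] = scores.get(var, 0.0) + (1 - f)
```

A variable bumped in every conflict converges to score 1; one never bumped decays to 0.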

 

Also:

clause move-to-front (CMTF)

binary decision diagrams (BDDs) [18]

first-order dynamic decision heuristics

second-order dynamic heuristics

 

 

   
4. phase saving [22]

However, almost all modern CDCL solvers implement phase saving [22], which always reassigns a decision variable to the phase to which it was last assigned.
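A minimal sketch of that mechanism (the class and method names here are hypothetical, not taken from any particular solver):

```python
# Sketch of phase saving: remember the phase of every assignment and reuse
# the saved phase when the variable is later picked as a decision again.
class PhaseSaving:
    def __init__(self, default=False):
        self.saved = {}          # variable -> last assigned phase
        self.default = default   # phase for never-assigned variables

    def on_assign(self, var, value):
        self.saved[var] = value  # record the phase at every assignment

    def decide_phase(self, var):
        return self.saved.get(var, self.default)
```

On backtracking the saved phases are deliberately kept, which is what makes restarts cheap to recover from.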

   
5. on EVSIDS

Typical values for g are in the range of 1.01 to 1.2. Small values have been shown to be useful for hard satisfiable instances (like cryptographic instances). Large values are useful with very frequent restarts, particularly in combination with the reuse-trail technique [28]. In Glucose 2.3, even without reusing the trail, it was thus suggested to slowly decrease g over time from a large value to a small one. In the (new) version of Lingeling used in our experiments, g is kept at 1.2.

 

On VSIDS

VSIDS. The variable state independent decaying sum (VSIDS) of Chaff [5] maintains a variable score for each variable. The basic idea is that variables with large score are preferred decisions. The original VSIDS implementation in Chaff worked as follows. Variables are stored in an array used to search for a decision variable. After learning a clause, the score of its variables is incremented. Further, every 256th conflict, all variable scores are divided by 2, and the array is sorted w.r.t. decreasing score. This process is also called variable rescoring.
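The original Chaff scheme just described might be sketched like this (a hedged illustration with hypothetical names, not Chaff's actual C++ code):

```python
# Sketch of the original Chaff VSIDS: bump the variables of each learned
# clause by 1, and every 256th conflict halve all scores and re-sort the
# decision array by decreasing score ("variable rescoring").
class ChaffVSIDS:
    RESCORE_INTERVAL = 256

    def __init__(self):
        self.score = {}      # variable -> integer score
        self.conflicts = 0

    def on_learned(self, clause):
        self.conflicts += 1
        for lit in clause:
            var = abs(lit)
            self.score[var] = self.score.get(var, 0) + 1
        if self.conflicts % self.RESCORE_INTERVAL == 0:
            for var in self.score:
                self.score[var] //= 2    # periodic halving = slow decay

    def order(self):
        # the array searched for the next decision, highest score first
        return sorted(self.score, key=self.score.get, reverse=True)
```

The periodic halving is what makes recent conflicts weigh more than old ones, without touching scores on every conflict.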

The schemes above can be compared against the overview table in the paper.
   
6.

Variable scores play a role while (a) bumping variables participating in deriving
a learned clause, (b) deciding or searching for the next decision variable, (c)
unassigning variables during backtracking, (d) rescoring variable scores either
for explicit smoothing in VSIDS or due to protecting scores from overflow during
bumping, and (e) comparing past decisions on the trail to maximize trail
reuse [28].

First, we explain a fast implementation for VMTF, focusing on (a)-(c).

Next, we address its extension to precise scoring schemes using floating-point numbers, which in previous implementations followed the example set by MiniSAT to use a binary heap data structure.

Last, we discuss (d) and (e).
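A minimal sketch of the VMTF idea for (a) and (b) follows. This is an assumed simplification: the paper's fast implementation uses a doubly linked list with enqueue timestamps and a cached search position, whereas the hypothetical `VMTFQueue` below just scans from the front.

```python
# Sketch of a VMTF queue: bumped variables move to the front; the next
# decision is the first unassigned variable found from the front.
from collections import OrderedDict

class VMTFQueue:
    def __init__(self, variables):
        # the last entry of the OrderedDict is the "front" of the queue
        self.queue = OrderedDict((v, None) for v in variables)

    def bump(self, var):
        # (a) bumping moves the variable to the front
        self.queue.move_to_end(var)

    def decide(self, assigned):
        # (b) search from the front for the first unassigned variable
        for var in reversed(self.queue):
            if var not in assigned:
                return var
        return None

    # (c) on backtracking, unassigned variables simply become eligible
    # again; the queue itself needs no update.
```

Real implementations avoid rescanning from the front on every decision by remembering where the last search stopped.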

   
7. reuse-trail optimization

The reuse-trail optimization [28] is based on the following observation. After
a restart, it often happens that the same decisions are taken and the trail ends
up with the same assigned variables. Thus, the whole restart was useless.

By comparing the scores of previously assigned decisions with the score of the next decision variable before restarting, this situation can be avoided. With some effort, this technique can be lifted to our generic queue implementation. To simplify the comparison in favor of a clean experiment, the results presented in Sect. 4 are without reuse-trail (except for sc14ayv, the old 2014 version of Lingeling).
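The comparison can be sketched as follows (an assumed reading of [28]; the function is hypothetical): keep the prefix of the trail whose decision variables score at least as high as the variable that would be decided next anyway, since the solver would just re-make those decisions after the restart.

```python
# Sketch of reuse-trail: how many decision levels can survive a restart?
def reuse_trail_level(decisions, scores, next_decision):
    """decisions: decision variables in trail order (oldest first).
    Returns the number of leading decision levels that can be kept."""
    threshold = scores[next_decision]
    keep = 0
    for var in decisions:
        if scores[var] < threshold:
            break  # from here on the new decision would be picked instead
        keep += 1
    return keep
```

If the whole trail scores above the next decision, the restart would reproduce it entirely and can be skipped.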

   

8. overview statistics from the experiments

(1)   

In our experiments in Sect. 4, the default decision heuristic (evsids in Tab. 2) bumped on average 276 literals per learned clause of average length 105 (on 275 considered instances). (That is, learned clauses have an average length of 105, while on average 276 variables participate in activity bumping per learned clause.)

 

(2)

However, the number of propagated variables per decision can be quite large (on average, 323 propagations per decision for 275 benchmarks in the evsids column in Tab. 2). Removing them eagerly is too costly. (That is, with the EVSIDS decision scheme, each decision propagates 323 variables on average.)
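The standard alternative to eager removal is lazy deletion, as in MiniSAT-style heaps: assigned variables stay in the priority queue, and stale entries are simply discarded when popped. A hedged sketch (the function is hypothetical; real solvers also handle score updates by pushing fresh entries):

```python
# Sketch of lazy deletion from the decision priority queue.
import heapq

def next_decision(heap, assigned, scores):
    """heap holds (-score, var) pairs; entries for assigned variables or
    with outdated scores are popped and dropped lazily."""
    while heap:
        neg_score, var = heapq.heappop(heap)
        if var in assigned or -neg_score != scores[var]:
            continue          # stale entry, skip it
        return var
    return None
```

This way a decision costs a few pops in the common case, instead of one heap deletion per propagated variable.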

 

(3)

MiniSAT/Glucose uses 10^100 as an upper score limit, which is only a slightly smaller maximum limit than ours of 2^512 ≈ 10^154, but then does not use any truncation for small scores, which means that the minimum score exponent in MiniSAT is (roughly) -2^10. So Lingeling uses 9 bits for positive scores and 9 bits for negative scores, while MiniSAT uses slightly less than 9 bits for positive scores and (almost) full 10 bits for negative scores.
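The exponent arithmetic behind that comparison can be checked directly (a sketch using Python floats; the variable names are mine):

```python
# Sanity check of the limits quoted above: 10^100 needs about 332 binary
# exponent steps, i.e. slightly less than 9 bits of exponent magnitude
# (2^9 = 512), while Lingeling's 2^512 is about 10^154.
import math

minisat_limit_exp = 100 * math.log2(10)    # binary exponent of 10^100
lingeling_limit_dec = 512 * math.log10(2)  # decimal exponent of 2^512
```

So both limits sit in the same ballpark; the real difference is how the two solvers spend the remaining exponent range on small (decayed) scores.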

   
9. additive bumping and multiplicative decay

Conflict-Driven Clause-Learning (CDCL) SAT solvers crucially depend on the Variable State Independent Decaying Sum (VSIDS) branching heuristic for their performance. Although VSIDS was proposed nearly fifteen years ago, and many other branching heuristics for SAT solving have since been proposed, VSIDS remains one of the most effective branching heuristics. Despite its widespread use and repeated attempts to understand it, this additive bumping and multiplicative decay branching heuristic has remained an enigma.

——Liang J.H., Ganesh V., Zulkoski E., Zaman A., Czarnecki K. (2015) Understanding VSIDS Branching Heuristics in Conflict-Driven Clause-Learning SAT Solvers. In: Piterman N. (eds) Hardware and Software: Verification and Testing. HVC 2015. Lecture Notes in Computer Science, vol 9434. Springer, Cham. https://doi.org/10.1007/978-3-319-26287-1_14

References

  1. Balint, A., Belov, A., Heule, M.J.H., Järvisalo, M. (eds.): Proceedings of SAT Competition 2013. Volume B-2013-1 of Department of Computer Science Series of Publications B. University of Helsinki (2013)
  2. Belov, A., Heule, M.J.H., Järvisalo, M. (eds.): Proceedings of SAT Competition 2014. Volume B-2014-2 of Department of Computer Science Series of Publications B. University of Helsinki (2014)
  3. Marques-Silva, J.P., Lynce, I., Malik, S.: Conflict-driven clause learning SAT solvers. [41], 131–153
  4. Marques-Silva, J.P., Sakallah, K.A.: GRASP: a search algorithm for propositional satisfiability. IEEE Trans. Computers 48(5), 506–521 (1999)
  5. Moskewicz, M.W., Madigan, C.F., Zhao, Y., Zhang, L., Malik, S.: Chaff: engineering an efficient SAT solver. In: Proceedings of the 38th Design Automation Conference, DAC 2001, pp. 530–535. ACM, Las Vegas, June 18–22, 2001
  6. Biere, A.: Adaptive restart strategies for conflict driven SAT solvers. In: Kleine Büning, H., Zhao, X. (eds.) SAT 2008. LNCS, vol. 4996, pp. 28–33. Springer, Heidelberg (2008)
  7. Eén, N., Sörensson, N.: An extensible SAT-solver. In: Giunchiglia, E., Tacchella, A. (eds.) SAT 2003. LNCS, vol. 2919, pp. 502–518. Springer, Heidelberg (2004)
  8. Ryan, L.: Efficient algorithms for clause-learning SAT solvers. Master's thesis, Simon Fraser University (2004)
  9. Goldberg, E.I., Novikov, Y.: BerkMin: a fast and robust SAT-solver. In: 2002 Design, Automation and Test in Europe Conference and Exposition (DATE 2002), pp. 142–149. IEEE Computer Society, Paris, March 4–8, 2002
  10. Gershman, R., Strichman, O.: HaifaSat: a new robust SAT solver. In: Ur, S., Bin, E., Wolfsthal, Y. (eds.) Hardware and Software Verification and Testing. LNCS, vol. 3875, pp. 76–89. Springer, Heidelberg (2006)
  11. Biere, A.: P{re,i}coSAT@SC 2009. In: SAT 2009 Competitive Event Booklet, pp. 42–43 (2009)
  12. Heule, M.J.H., Kullmann, O., Wieringa, S., Biere, A.: Cube and conquer: guiding CDCL SAT solvers by lookaheads. In: Eder, K., Lourenço, J., Shehory, O. (eds.) HVC 2011. LNCS, vol. 7261, pp. 50–65. Springer, Heidelberg (2012)
  13. Heule, M., van Maaren, H.: Look-ahead based SAT solvers. [41], 155–184
  14. Ansótegui, C., Giráldez-Cru, J., Levy, J.: The community structure of SAT formulas. In: Cimatti, A., Sebastiani, R. (eds.) SAT 2012. LNCS, vol. 7317, pp. 410–423. Springer, Heidelberg (2012)
  15. Newsham, Z., Ganesh, V., Fischmeister, S., Audemard, G., Simon, L.: Impact of community structure on SAT solver performance. In: Sinz, C., Egly, U. (eds.) SAT 2014. LNCS, vol. 8561, pp. 252–268. Springer, Heidelberg (2014)
  16. Ansótegui, C., Bonet, M.L., Giráldez-Cru, J., Levy, J.: The fractal dimension of SAT formulas. In: Demri, S., Kapur, D., Weidenbach, C. (eds.) IJCAR 2014. LNCS, vol. 8562, pp. 107–121. Springer, Heidelberg (2014)
  17. Davis, M., Logemann, G., Loveland, D.W.: A machine program for theorem-proving. Commun. ACM 5(7), 394–397 (1962)
  18. Bryant, R.E.: Graph-based algorithms for Boolean function manipulation. IEEE Trans. Computers 35(8), 677–691 (1986)
  19. Biere, A., Cimatti, A., Clarke, E., Zhu, Y.: Symbolic model checking without BDDs. In: Cleaveland, W.R. (ed.) TACAS 1999. LNCS, vol. 1579, p. 193. Springer, Heidelberg (1999)
  20. Jeroslow, R.G., Wang, J.: Solving propositional satisfiability problems. Annals of Mathematics and Artificial Intelligence 1(1–4), 167–187 (1990)
  21. Marques-Silva, J.: The impact of branching heuristics in propositional satisfiability algorithms. In: Barahona, P., Alferes, J.J. (eds.) EPIA 1999. LNCS (LNAI), vol. 1695, pp. 62–74. Springer, Heidelberg (1999)
  22. Pipatsrisawat, K., Darwiche, A.: A lightweight component caching scheme for satisfiability solvers. In: Marques-Silva, J., Sakallah, K.A. (eds.) SAT 2007. LNCS, vol. 4501, pp. 294–299. Springer, Heidelberg (2007)
  23. Zhang, H.: SATO: an efficient propositional prover. In: McCune, W. (ed.) CADE 1997. LNCS, vol. 1249, pp. 272–275. Springer, Heidelberg (1997)
  24. Biere, A.: PicoSAT essentials. JSAT 4(2–4), 75–97 (2008)
  25. Beame, P., Kautz, H.A., Sabharwal, A.: Towards understanding and harnessing the potential of clause learning. J. Artif. Intell. Res. (JAIR) 22, 319–351 (2004)
  26. Han, H., Somenzi, F.: On-the-fly clause improvement. In: Kullmann, O. (ed.) SAT 2009. LNCS, vol. 5584, pp. 209–222. Springer, Heidelberg (2009)
  27. Hamadi, Y., Jabbour, S., Sais, L.: Learning for dynamic subsumption. In: 21st IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2009, Newark, New Jersey, USA, pp. 328–335. IEEE Computer Society, November 2–4, 2009
  28. van der Tak, P., Ramos, A., Heule, M.J.H.: Reusing the assignment trail in CDCL solvers. JSAT 7(4), 133–138 (2011)
  29. Nadel, A.: Backtrack search algorithms for propositional logic satisfiability: review and innovations. Master's thesis, Hebrew University (2002)
  30. Sleator, D.D., Tarjan, R.E.: Amortized efficiency of list update and paging rules. Commun. ACM 28(2), 202–208 (1985)
  31. Biere, A.: Lingeling and friends entering the SAT Challenge 2012. In: Balint, A., Belov, A., Diepold, D., Gerber, S., Järvisalo, M., Sinz, C. (eds.) Proceedings SAT Challenge 2012: Solver and Benchmark Descriptions. Volume B-2012-2 of Department of Computer Science Series of Publications B. University of Helsinki, pp. 33–34 (2012)
  32. Biere, A.: Yet another local search solver and Lingeling and friends entering the SAT Competition 2014. [2], 39–40
  33. Audemard, G., Simon, L.: Predicting learnt clauses quality in modern SAT solvers. In: Boutilier, C. (ed.) Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI 2009, Pasadena, California, USA, pp. 399–404, July 11–17, 2009
  34. Audemard, G., Simon, L.: Glucose 2.3 in the SAT 2013 Competition. [1], 42–43
  35. Oh, C.: MiniSat HACK 999ED, MiniSat HACK 1430ED and SWDiA5BY. [2], 46–47
  36. Järvisalo, M., Heule, M.J.H., Biere, A.: Inprocessing rules. In: Gramlich, B., Miller, D., Sattler, U. (eds.) IJCAR 2012. LNCS, vol. 7364, pp. 355–370. Springer, Heidelberg (2012)
  37. Audemard, G., Simon, L.: Refining restarts strategies for SAT and UNSAT. In: Milano, M. (ed.) CP 2012. LNCS, vol. 7514, pp. 118–126. Springer, Heidelberg (2012)
  38. Eén, N., Biere, A.: Effective preprocessing in SAT through variable and clause elimination. In: Bacchus, F., Walsh, T. (eds.) SAT 2005. LNCS, vol. 3569, pp. 61–75. Springer, Heidelberg (2005)
  39. Wallner, J.P.: Benchmark for complete and stable semantics for argumentation frameworks. [2], 84–85
  40. Biere, A., Heule, M.J.H., Järvisalo, M., Manthey, N.: Equivalence checking of HWMCC 2012 circuits. [1], 104
  41. Biere, A., Heule, M.J.H., van Maaren, H., Walsh, T. (eds.): Handbook of Satisfiability. Volume 185 of Frontiers in Artificial Intelligence and Applications. IOS Press (2009)

Copyright notice: this is an original post by yuweng1689, licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.
Link to this post: https://www.cnblogs.com/yuweng1689/p/13794554.html