research-article
DOI: 10.1145/1708046.1708058

Efficient parallel programming in Poly/ML and Isabelle/ML

Published: 19 January 2010

ABSTRACT

The ML family of languages and LCF-style interactive theorem proving have been closely related from their beginnings about 30 years ago. Here we report on a recent project to adapt both the Poly/ML compiler and the Isabelle theorem prover to current multicore hardware. Checking theories and proofs in typical Isabelle applications takes minutes or hours, and users expect to make efficient use of "home machines" with 2-8 cores, or more.

Poly/ML and Isabelle are big and complex software systems that have evolved over more than two decades. Faced with the requirement to deliver a stable and efficient parallel programming environment, many infrastructure layers had to be reworked: from low-level system threads to high-level principles of value-oriented programming. At each stage we carefully selected from the many existing concepts for parallelism, and integrated them in a way that fits smoothly into the idea of purely functional ML with the addition of synchronous exceptions and asynchronous interrupts.

From the Isabelle/ML perspective, the main concept for managing parallel evaluation is that of "future values". Scheduling is implicit, but it is also possible to specify dependencies and priorities. In addition, block-structured groups of futures with propagation of exceptions allow for alternative functional evaluation (such as parallel search), without requiring user code to tackle concurrency. Our library also provides the usual parallel combinators for functions on lists, and analogous versions for prover tactics.
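The future-value interface can be illustrated with a small Isabelle/ML sketch. The structure and function names below follow the `Future` and `Par_List` modules of the Isabelle/ML library; exact signatures may vary between Isabelle versions, and `slow_fib` is a hypothetical example function, not part of the library:

    (* Hypothetical expensive function, for illustration only. *)
    fun slow_fib n = if n < 2 then n else slow_fib (n - 1) + slow_fib (n - 2);

    (* Fork independent evaluations; scheduling onto worker
       threads is implicit. *)
    val x = Future.fork (fn () => slow_fib 30);
    val y = Future.fork (fn () => slow_fib 31);

    (* join blocks until a result is available; an exception raised
       during evaluation is re-raised at the joining point. *)
    val sum = Future.join x + Future.join y;

    (* The usual parallel list combinators are built on futures. *)
    val results = Par_List.map slow_fib [25, 26, 27, 28];

Note that user code stays purely functional: a future is created and joined like an ordinary value, and failure surfaces as an ordinary ML exception rather than as an explicit concurrency construct.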

Despite substantial reorganization in the background, only minimal changes are occasionally required in user ML code, and none at the Isabelle application level (where parallel theory and proof processing is fully implicit). The present implementation is able to address more than 8 cores effectively, while the earlier version of the official Isabelle2009 release works best for 2-4 cores. Scalability beyond 16 cores still poses some extra challenges, and will require further improvements of the Poly/ML runtime system (heap management and garbage collection), and additional parallelization of Isabelle application logic.


Published in:

DAMP '10: Proceedings of the 5th ACM SIGPLAN Workshop on Declarative Aspects of Multicore Programming
January 2010, 90 pages
ISBN: 9781605588599
DOI: 10.1145/1708046
Copyright © 2010 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

