
Abstract

High Performance Computing (HPC) provides the support needed to run advanced application programs efficiently. The Message Passing Interface (MPI) is the de facto standard for providing an HPC environment on clusters connected over fast interconnects and gigabit LANs. The MPI standard itself is architecture neutral and programming-language independent. C++ is a widely accepted choice for implementing the MPI specifications, as in MPICH and LAM/MPI. Apart from C++, other efforts implement the MPI specifications in languages such as Java, Python, and C#. Moreover, MPI implementations exist for different network layouts, such as grids and peer-to-peer networks. With so many implementations providing a wide range of functionality, programmers and users find it difficult to choose the best option for a specific problem. This paper provides an in-depth survey of the available MPI implementations across languages and network layouts. Several assessment parameters are identified to analyze the implementations along with their strengths and weaknesses.
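The message-passing model that the surveyed implementations share can be illustrated with a small sketch, independent of any real MPI binding: each "rank" works on private data and communicates only through explicit messages, in the spirit of MPI_Send/MPI_Recv with a sum reduction gathered at rank 0. Threads and a queue stand in for MPI processes here purely so the sketch is self-contained; real MPI ranks are separate processes with private address spaces.

```python
# Hedged illustration of the message-passing model (NOT a real MPI binding):
# each "rank" computes a private partial result and "sends" it as an explicit
# message; a gathering rank "receives" one message per rank and reduces them.
import threading
import queue

def worker(rank: int, size: int, outbox: "queue.Queue") -> None:
    # Each rank sums a strided slice of 0..99 held in its private state.
    partial = sum(range(rank, 100, size))
    outbox.put((rank, partial))  # explicit "send" toward the gathering rank

def parallel_sum(size: int = 4) -> int:
    outbox: "queue.Queue" = queue.Queue()
    ranks = [threading.Thread(target=worker, args=(r, size, outbox))
             for r in range(size)]
    for t in ranks:
        t.start()
    # "Rank 0" receives exactly one message per rank, like a sum reduction.
    total = sum(outbox.get()[1] for _ in range(size))
    for t in ranks:
        t.join()
    return total

print(parallel_sum())  # 4950 == sum(range(100))
```

The same structure, ported to any of the MPI implementations discussed in the paper, becomes a send/receive pair plus a reduction; the sketch only fixes the communication pattern, not the API of any particular binding.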






Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hafeez, M., Asghar, S., Malik, U.A., Rehman, A.u., Riaz, N. (2011). Survey of MPI Implementations. In: Cherifi, H., Zain, J.M., El-Qawasmeh, E. (eds) Digital Information and Communication Technology and Its Applications. DICTAP 2011. Communications in Computer and Information Science, vol 167. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22027-2_18


  • DOI: https://doi.org/10.1007/978-3-642-22027-2_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-22026-5

  • Online ISBN: 978-3-642-22027-2

  • eBook Packages: Computer Science (R0)
