{{Infobox software
| name = OpenMP
| logo =
| author = OpenMP Architecture Review Board[1]
| developer = OpenMP Architecture Review Board[1]
| latest_release_version = 5.0
| latest_release_date = {{start date and age|2018|11|8}}
| operating_system = Cross-platform
| platform = Cross-platform
| genre = Extension to C, C++, and Fortran; API
| license = Various[2]
| website = {{URL|openmp.org}}
}}

OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran,[3] on most platforms, instruction-set architectures and operating systems, including Solaris, AIX, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.[2][4][5]

OpenMP is managed by the nonprofit technology consortium OpenMP Architecture Review Board (OpenMP ARB), jointly defined by a group of major computer hardware and software vendors, including AMD, IBM, Intel, Cray, HP, Fujitsu, Nvidia, NEC, Red Hat, Texas Instruments, Oracle Corporation, and more.[1]

OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer. An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and the Message Passing Interface (MPI), such that OpenMP is used for parallelism within a (multi-core) node while MPI is used for parallelism between nodes. There have also been efforts to run OpenMP on software distributed shared memory systems,[6] to translate OpenMP into MPI,[7][8] and to extend OpenMP for non-shared-memory systems.[9]

== Design ==
{{See also|Fork–join model}}
OpenMP is an implementation of multithreading, a method of parallelizing whereby a master thread (a series of instructions executed consecutively) forks a specified number of slave threads and the system divides a task among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.

The section of code that is meant to run in parallel is marked accordingly, with a compiler directive that will cause the threads to form before the section is executed.[3] Each thread has an id attached to it which can be obtained using a function (called omp_get_thread_num()); the thread id is an integer, and the master thread has an id of 0. After the execution of the parallelized code, the threads join back into the master thread, which continues onward to the end of the program.

By default, each thread executes the parallelized section of code independently. Work-sharing constructs can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.

The runtime environment allocates threads to processors depending on usage, machine load and other factors. The runtime environment can assign the number of threads based on environment variables, or the code can do so using functions. The OpenMP functions are included in a header file labelled omp.h in C/C++.
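A minimal C sketch of this fork–join behavior (an illustration, not taken from the specification; the order of the printed lines varies from run to run):

<syntaxhighlight lang="c">
#include <omp.h>
#include <stdio.h>

int main(void)
{
    // Fork: the master thread spawns a team; every thread runs this block.
    #pragma omp parallel
    {
        int id = omp_get_thread_num();   // per-thread id; the master thread is 0
        printf("thread %d of %d\n", id, omp_get_num_threads());
    }
    // Join: the team synchronizes here and only the master thread continues.
    return 0;
}
</syntaxhighlight>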
== History ==
The OpenMP Architecture Review Board (ARB) published its first API specification, OpenMP for Fortran 1.0, in October 1997. In October the following year it released the C/C++ standard. 2000 saw version 2.0 of the Fortran specification, with version 2.0 of the C/C++ specification being released in 2002. Version 2.5 is a combined C/C++/Fortran specification that was released in 2005.

Up to version 2.0, OpenMP primarily specified ways to parallelize highly regular loops, as they occur in matrix-oriented numerical programming, where the number of iterations of the loop is known at entry time. This was recognized as a limitation, and various task-parallel extensions were added to implementations. In 2005, an effort to standardize task parallelism was formed, which published a proposal in 2007, taking inspiration from task-parallelism features in Cilk, X10 and Chapel.[10]

Version 3.0 was released in May 2008. Included in the new features in 3.0 is the concept of tasks and the task construct,[11] significantly broadening the scope of OpenMP beyond the parallel loop constructs that made up most of OpenMP 2.0.[12]

Version 4.0 of the specification was released in July 2013.[13] It adds or improves the following features: support for accelerators; atomics; error handling; thread affinity; tasking extensions; user-defined reduction; SIMD support; Fortran 2003 support.[14]{{full citation needed|date=March 2015}}

The current version is 5.0, released in November 2018. Note that not all compilers (and operating systems) support the full set of features of the latest version(s).

== Core elements ==
The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data-environment management, thread synchronization, user-level runtime routines and environment variables. In C/C++, OpenMP uses #pragmas. The OpenMP-specific pragmas are listed below.

=== Thread creation ===
The pragma omp parallel is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread is denoted as the master thread, with thread ID 0.

Example (C program): display "Hello, world." using multiple threads.
<syntaxhighlight lang="c">
#include <stdio.h>

int main(void)
{
    #pragma omp parallel
    printf("Hello, world.\n");
    return 0;
}
</syntaxhighlight>

Use flag -fopenmp to compile using GCC:

 $ gcc -fopenmp hello.c -o hello

Output on a computer with two cores, and thus two threads:

 Hello, world.
 Hello, world.

However, the output may also be garbled because of the race condition caused by the two threads sharing the standard output (the same can happen in C++ with std::cout):

 Hello, wHello, woorld.
 rld.

=== Work-sharing constructs ===
Used to specify how to assign independent work to one or all of the threads:

* omp for or omp do: used to split up loop iterations among the threads, also called loop constructs.
* sections: assigning consecutive but independent code blocks to different threads.
* single: specifying a code block that is executed by only one thread; a barrier is implied at the end.
* master: similar to single, but the code block is executed by the master thread only, with no barrier implied at the end.

Example: initialize the value of a large array in parallel, using each thread to do part of the work.
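A minimal version of such a program might look as follows (a sketch; the array size 100000 is chosen to match the iteration ranges discussed below):

<syntaxhighlight lang="c">
int main(void)
{
    int a[100000];

    // OpenMP splits the iterations among the team of threads;
    // each thread receives its own private copy of the loop variable i.
    #pragma omp parallel for
    for (int i = 0; i < 100000; i++) {
        a[i] = 2 * i;
    }

    return 0;
}
</syntaxhighlight>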
This example is embarrassingly parallel, and depends only on the value of {{mono|i}}. The OpenMP {{mono|parallel for}} flag tells the OpenMP system to split this task among its working threads. The threads will each receive a unique and private version of the variable.[15] For instance, with two worker threads, one thread might be handed a version of {{mono|i}} that runs from 0 to 49999 while the second gets a version running from 50000 to 99999.

=== Clauses ===
Since OpenMP is a shared-memory programming model, most variables in OpenMP code are visible to all threads by default. But sometimes private variables are necessary to avoid race conditions, and values need to be passed between the sequential part and the parallel region (the code block executed in parallel), so data-environment management is introduced as data-sharing attribute clauses appended to the OpenMP directive. The different types of clauses include data-sharing attribute clauses (such as shared, private, firstprivate, lastprivate and reduction), synchronization clauses, scheduling clauses (static, dynamic or guided) and IF control clauses.
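As a minimal sketch of one such clause (the program itself is illustrative, not from the specification), the reduction clause gives every thread a private copy of an accumulator and combines the copies when the threads join:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void)
{
    double sum = 0.0;

    // Each thread sums into a private copy of "sum"; the private
    // copies are added together when the parallel loop ends.
    #pragma omp parallel for reduction(+: sum)
    for (int i = 1; i <= 1000; i++) {
        sum += 1.0 / i;
    }

    printf("harmonic sum = %f\n", sum);
    return 0;
}
</syntaxhighlight>

Without the clause, concurrent updates of sum would be a race condition of exactly the kind described above.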
=== User-level runtime routines ===
Used to modify and check the number of threads, detect whether the execution context is in a parallel region, query the number of processors in the current system, set and unset locks, provide timing functions, etc.

=== Environment variables ===
A method to alter the execution features of OpenMP applications. Used to control loop-iteration scheduling, the default number of threads, etc. For example, OMP_NUM_THREADS specifies the number of threads for an application.
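A short C sketch exercising a few of these routines (the routine names are part of the standard OpenMP API; the program around them is illustrative):

<syntaxhighlight lang="c">
#include <omp.h>
#include <stdio.h>

int main(void)
{
    omp_set_num_threads(4);        // request four threads for later regions
    double t0 = omp_get_wtime();   // portable wall-clock timer

    #pragma omp parallel
    {
        if (omp_get_thread_num() == 0)   // only the master thread reports
            printf("%d threads on %d processors\n",
                   omp_get_num_threads(), omp_get_num_procs());
    }

    printf("elapsed: %f seconds\n", omp_get_wtime() - t0);
    return 0;
}
</syntaxhighlight>

The same thread count can instead be requested from the environment, e.g. OMP_NUM_THREADS=4 ./a.out, in which case the omp_set_num_threads call can be omitted.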
== Implementations ==
OpenMP has been implemented in many commercial compilers. For instance, Visual C++ 2005, 2008, 2010, 2012 and 2013 support it (OpenMP 2.0, in the Professional, Team System, Premium and Ultimate editions[16][17][18]), as does Intel Parallel Studio for various processors.[19] Oracle Solaris Studio compilers and tools support the latest [https://web.archive.org/web/20081004161456/http://openmp.org/wp/openmp-specifications/ OpenMP specifications] with productivity enhancements for the Solaris OS (UltraSPARC and x86/x64) and Linux platforms. The Fortran, C and C++ compilers from The Portland Group also support OpenMP 2.5. GCC has supported OpenMP since version 4.2.

Compilers with an implementation of OpenMP 3.0 include GCC since version 4.4.[22] Several compilers support OpenMP 3.1, among them GCC since version 4.7,[22] the Intel C++ and Fortran compilers,[23] IBM XL C/C++ and XL Fortran,[24][25] and Clang since version 3.7.[26]
Compilers supporting OpenMP 4.0 include GCC since version 4.9[28] and the Intel compilers since version 15.0.[29]
There are also auto-parallelizing compilers that generate source code annotated with OpenMP directives.
Several profilers and debuggers expressly support OpenMP.
== Pros and cons ==
{{Refimprove section|date=February 2017}}
Pros:

* Portable multithreading code: the same directives work across compilers and platforms in C, C++ and Fortran.
* Incremental parallelism: a sequential program can be parallelized one region at a time, with little or no change to the serial code itself; compilers without OpenMP support simply ignore the pragmas.
* Both coarse-grained and fine-grained parallelism are possible.
Cons:

* Risk of introducing difficult-to-debug synchronization bugs and race conditions.[33][34]
* Requires a compiler that supports OpenMP.
* Runs efficiently only on shared-memory multiprocessor platforms, and scalability is limited by the memory architecture.
== Performance expectations ==
One might expect to get an N-times speedup when running a program parallelized using OpenMP on an N-processor platform. However, this seldom occurs, for these reasons:

* A large portion of the program may not be parallelized by OpenMP, which limits the attainable speedup according to Amdahl's law (made precise below).
* N processors in a symmetric multiprocessor have N times the computation power, but the memory bandwidth usually does not scale up N times, so memory-bound code often saturates the shared memory path first.
* Parallelization itself adds overhead: thread creation, scheduling and synchronization all cost time, and problems such as load imbalance and false sharing can erode the gains further.
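The first point is the familiar Amdahl bound, stated here for context (the symbols are generic, not OpenMP-specific): if a fraction <math>p</math> of the running time can be parallelized, the speedup on <math>N</math> processors is at most

<math>S(N) = \frac{1}{(1 - p) + p/N}.</math>

For example, with <math>p = 0.9</math> and <math>N = 8</math> the speedup is at most about 4.7, and no number of processors can push it beyond <math>1/(1-p) = 10</math>.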
== Thread affinity ==
Some vendors recommend setting the processor affinity on OpenMP threads to associate them with particular processor cores.[36][37][38] This minimizes thread migration and the cost of context switching among cores. It also improves data locality and reduces cache-coherency traffic among the cores (or processors).
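Since OpenMP 4.0, affinity can be requested portably through environment variables (a usage illustration; hello is the example program compiled earlier): running OMP_PLACES=cores OMP_PROC_BIND=close ./hello pins each thread to a core and keeps the team on neighboring cores.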
== References ==
1. {{cite web |url=http://openmp.org/wp/about-openmp/ |title=About the OpenMP ARB and OpenMP.org |publisher=OpenMP.org |date=2013-07-11 |accessdate=2013-08-14 |archiveurl=https://web.archive.org/web/20130809153922/http://openmp.org/wp/about-openmp/ |archivedate=2013-08-09}}
2. {{cite web |url=http://openmp.org/wp/openmp-compilers/ |title=OpenMP Compilers |publisher=OpenMP.org |date=2013-04-10 |accessdate=2013-08-14}}
3. {{cite book |last=Silberschatz |first=Abraham |last2=Galvin |first2=Peter Baer |last3=Gagne |first3=Greg |title=Operating System Concepts |publisher=Wiley |location=Hoboken, N.J. |isbn=978-1-118-06333-0 |pages=181–182 |edition=9th |date=2012-12-17}}
4. OpenMP Tutorial at Supercomputing 2008
5. Using OpenMP – Portable Shared Memory Parallel Programming – Download Book Examples and Discuss
6. {{cite journal |last=Costa |first=J.J. |display-authors=etal |date=May 2006 |title=Running OpenMP applications efficiently on an everything-shared SDSM |journal=Journal of Parallel and Distributed Computing |volume=66 |issue=5 |pages=647–658 |doi=10.1016/j.jpdc.2005.06.018}}
7. {{cite book |last=Basumallik |first=Ayon |last2=Min |first2=Seung-Jai |last3=Eigenmann |first3=Rudolf |title=Programming Distributed Memory Sytems [sic] using OpenMP |journal=Proceedings of the 2007 IEEE International Parallel and Distributed Processing Symposium |pages=1–8 |location=New York |publisher=IEEE Press |year=2007 |doi=10.1109/IPDPS.2007.370397 |isbn=978-1-4244-0909-9 |citeseerx=10.1.1.421.8570}} A [https://www.cs.rochester.edu/~cding/Announcements/HIPS07/openmp.pdf preprint is available on Chen Ding's home page]; see especially Section 3 on translation of OpenMP to MPI.
8. {{cite journal |last=Wang |first=Jue |last2=Hu |first2=ChangJun |last3=Zhang |first3=JiLin |last4=Li |first4=JianJiang |date=May 2010 |title=OpenMP compiler for distributed memory architectures |journal=Science China Information Sciences |volume=53 |issue=5 |pages=932–944 |doi=10.1007/s11432-010-0074-0}} ({{as of|2016}} the KLCoMP software described in this paper does not appear to be publicly available)
9. [https://software.intel.com/en-us/articles/cluster-openmp-for-intel-compilers Cluster OpenMP] (a product that used to be available for Intel C++ Compiler versions 9.1 to 11.1 but was dropped in 13.0)
10. {{cite conference |first1=Eduard |last1=Ayguade |first2=Nawal |last2=Copty |first3=Alejandro |last3=Duran |first4=Jay |last4=Hoeflinger |first5=Yuan |last5=Lin |first6=Federico |last6=Massaioli |first7=Ernesto |last7=Su |first8=Priya |last8=Unnikrishnan |first9=Guansong |last9=Zhang |title=A proposal for task parallelism in OpenMP |conference=Proc. Int'l Workshop on OpenMP |year=2007 |url=http://people.ac.upc.edu/aduran/papers/2007/tasks_iwomp07.pdf}}
11. {{cite web |url=http://www.openmp.org/mp-documents/spec30.pdf |title=OpenMP Application Program Interface, Version 3.0 |date=May 2008 |accessdate=2014-02-06 |publisher=openmp.org}}
12. {{cite conference |title=A Runtime Implementation of OpenMP Tasks |first1=James |last1=LaGrone |first2=Ayodunni |last2=Aribuki |first3=Cody |last3=Addison |first4=Barbara |last4=Chapman |conference=Proc. Int'l Workshop on OpenMP |year=2011 |pages=165–178 |doi=10.1007/978-3-642-21487-5_13 |citeseerx=10.1.1.221.2775}}
13. {{cite web |url=http://openmp.org/wp/openmp-40-api-released/ |title=OpenMP 4.0 API Released |publisher=OpenMP.org |date=2013-07-26 |accessdate=2013-08-14 |archiveurl=https://web.archive.org/web/20131109175921/http://openmp.org/wp/openmp-40-api-released/ |archivedate=2013-11-09}}
14. {{cite web |url=http://www.openmp.org/mp-documents/OpenMP4.0.0.pdf |title=OpenMP Application Program Interface, Version 4.0 |date=July 2013 |accessdate=2014-02-06 |publisher=openmp.org}}
15. {{cite web |url=http://supercomputingblog.com/openmp/tutorial-parallel-for-loops-with-openmp/ |title=Tutorial – Parallel for Loops with OpenMP |date=2009-07-14}}
16. Visual C++ Editions, Visual Studio 2005
17. Visual C++ Editions, Visual Studio 2008
18. Visual C++ Editions, Visual Studio 2010
19. David Worthington, "Intel addresses development life cycle with Parallel Studio" {{Webarchive|url=https://web.archive.org/web/20120215032407/http://www.sdtimes.com/INTEL_ADDRESSES_DEVELOPMENT_LIFE_CYCLE_WITH_PARALLEL_STUDIO/About_INTEL_and_MULTICORE/33497 |date=2012-02-15}}, SDTimes, 26 May 2009 (accessed 28 May 2009)
20. "XL C/C++ for Linux Features" (accessed 9 June 2009)
21. {{cite web |url=http://developers.sun.com/sunstudio/features/ |title=Oracle Technology Network for Java Developers |publisher=Developers.sun.com |accessdate=2013-08-14}}
22. {{cite web |url=https://gcc.gnu.org/wiki/openmp |title=openmp – GCC Wiki |publisher=Gcc.gnu.org |date=2013-07-30 |accessdate=2013-08-14}}
23. {{cite web |url=http://software.intel.com/en-us/articles/intel-c-and-fortran-compilers-now-support-the-openmp-31-specification/ |title=Intel C++ and Fortran Compilers now support the OpenMP 3.1 Specification |publisher=Software.intel.com |date=2011-09-06 |accessdate=2013-08-14}}
24. https://www.ibm.com/support/docview.wss?uid=swg27007322&aid=1
25. http://www-01.ibm.com/support/docview.wss?uid=swg27007323&aid=1
26. {{cite web |url=http://llvm.org/releases/3.7.0/tools/clang/docs/ReleaseNotes.html#openmp-support |title=Clang 3.7 Release Notes |publisher=llvm.org |accessdate=2015-10-10}}
27. {{cite web |url=https://www.absoft.com/ |title=Absoft Home Page |accessdate=2019-02-12}}
28. {{cite web |url=https://www.gnu.org/software/gcc/gcc-4.9/changes.html |title=GCC 4.9 Release Series – Changes |publisher=www.gnu.org}}
29. {{cite web |url=https://software.intel.com/en-us/articles/openmp-40-features-in-intel-compiler-150 |title=OpenMP 4.0 Features in Intel Compiler 15.0 |publisher=Software.intel.com |date=2014-08-13}}
30. {{cite journal |doi=10.1016/j.parco.2012.05.005 |title=OpenMP parallelism for fluid and fluid-particulate systems |year=2012 |last1=Amritkar |first1=Amit |last2=Tafti |first2=Danesh |last3=Liu |first3=Rui |last4=Kufrin |first4=Rick |last5=Chapman |first5=Barbara |journal=Parallel Computing |volume=38 |issue=9 |page=501}}
31. {{cite journal |doi=10.1016/j.jcp.2013.09.007 |title=Efficient parallel CFD-DEM simulations using OpenMP |year=2014 |last1=Amritkar |first1=Amit |last2=Deb |first2=Surya |last3=Tafti |first3=Danesh |journal=Journal of Computational Physics |volume=256 |page=501 |bibcode=2014JCoPh.256..501A |title-link=CFD-DEM}}
32. [https://www.openmp.org/updates/openmp-accelerator-support-gpus/ OpenMP Accelerator Support for GPUs]
33. Detecting and Avoiding OpenMP Race Conditions in C++
34. Alexey Kolosov, Evgeniy Ryzhkov, Andrey Karpov: "32 OpenMP traps for C++ developers"
35. Stephen Blair-Chappell, Intel Corporation, "Becoming a Parallel Programming Expert in Nine Minutes", presentation at the ACCU 2010 conference
36. {{cite journal |doi=10.1535/itj.1104.08 |title=Multi-Core Software |date=2007-11-15 |last1=Chen |first1=Yurong |journal=Intel Technology Journal |volume=11 |issue=4}}
37. {{cite web |url=http://www.spec.org/omp/results/res2008q1/omp2001-20080128-00288.html |title=OMPM2001 Result |date=2008-01-28 |publisher=SPEC}}
38. {{cite web |url=http://www.spec.org/omp/results/res2003q2/omp2001-20030401-00079.html |title=OMPM2001 Result |date=2003-04-01 |publisher=SPEC}}