Intel C++ Compiler

  1. Overview

  2. Optimizations

  3. Architectures

  4. Description of packaging

  5. History since 2003

  6. Flags and manuals

  7. Debugging

  8. Reception

  9. See also

  10. References

  11. External links

{{Infobox software
| name = Intel C++ Compiler
| developer = Intel
| latest_release_version = 19.0.1 (XE 2019)
| latest_release_date = {{Start date and age|2018|11|08}}[1]
| operating system = Windows, Mac, Linux, FreeBSD
| genre = Compiler
| license = Trialware
| website = {{URL|software.intel.com/en-us/intel-compilers}}
}}{{Infobox software
| name = Intel C++ Compiler for Android
| developer = Intel
| latest_release_version = 14.0.1
| latest_release_date = {{Start date and age|2013|11|12}}[2]
| operating system = Windows, OS X
| genre = Compiler
| license = Trialware
| website = {{URL|software.intel.com/c-compiler-android}}
}}

Intel C++ Compiler, also known as icc or icl, is a group of C and C++ compilers from Intel available for Windows, Mac, Linux, FreeBSD[3] and Intel-based Android devices.

Overview

The compilers generate optimized code for IA-32 and Intel 64 architectures, and non-optimized code for non-Intel but compatible processors, such as certain AMD processors. A specific release of the compiler (11.1) is available for development of Linux-based applications for IA-64 (Itanium 2) processors.

The 14.0 compiler added support for Intel-based Android devices and optimized vectorization and SSE family instructions for performance. The 13.0 release added support for the Intel Xeon Phi coprocessor. The compiler continues to support automatic vectorization, which can generate SSE, SSE2, SSE3, SSSE3, SSE4, AVX and AVX2 SIMD instructions, as well as the embedded variant for Intel MMX and MMX 2.[4] Using such instructions through the compiler can improve performance for some applications running on IA-32 and Intel 64 architectures, compared with applications built by compilers that do not support these instructions.
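A minimal sketch of the kind of loop the auto-vectorizer targets follows; the file and function names are illustrative, and the exact reporting options vary between compiler versions.

```cpp
// saxpy.cpp -- a loop shaped for auto-vectorization (illustrative file name).
#include <cstddef>
#include <cstdio>

// Independent iterations over contiguous arrays: the classic pattern the
// vectorizer turns into packed SSE/AVX multiplies and adds at -O2 and above.
// __restrict is a common compiler extension telling the optimizer the arrays
// do not overlap, which removes one obstacle to vectorization.
void saxpy(float a, const float* x, float* __restrict y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main() {
    float x[1024], y[1024];
    for (std::size_t i = 0; i < 1024; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(3.0f, x, y, 1024);
    std::printf("%f\n", y[0]);   // 5.0
    return 0;
}
```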

Intel compilers support Cilk Plus, a capability for writing vectorized and parallel code that can run on IA-32 and Intel 64 processors or be offloaded to Xeon Phi coprocessors. They also continue to support OpenMP 4.0, symmetric multiprocessing, automatic parallelization, and Guided Auto-Parallelization (GAP). With the add-on Cluster OpenMP capability, the compilers can also automatically generate Message Passing Interface calls for distributed memory multiprocessing from OpenMP directives.
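A brief, illustrative sketch of the two styles mentioned above, assuming a compiler with Cilk Plus support and OpenMP enabled (flag spellings such as -qopenmp or /Qopenmp vary by version); the function and file names are hypothetical.

```cpp
// parallel_scale.cpp -- hypothetical sketch of Cilk Plus and OpenMP usage.
#include <cilk/cilk.h>   // Cilk Plus keywords (Intel-specific extension)
#include <cstddef>
#include <cstdio>

void scale_cilk(float* a, float s, std::size_t n) {
    // cilk_for lets the Cilk runtime split iterations across worker threads.
    cilk_for (std::size_t i = 0; i < n; ++i)
        a[i] *= s;
}

void scale_openmp(float* a, float s, std::size_t n) {
    // OpenMP 4.0 composite construct: thread-level and SIMD parallelism.
    #pragma omp parallel for simd
    for (std::size_t i = 0; i < n; ++i)
        a[i] *= s;
}

int main() {
    const std::size_t n = 1u << 20;
    static float a[1u << 20];
    for (std::size_t i = 0; i < n; ++i) a[i] = 1.0f;
    scale_cilk(a, 2.0f, n);
    scale_openmp(a, 0.5f, n);
    std::printf("%f\n", a[0]);   // back to 1.0
    return 0;
}
```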

Intel C++ is compatible with Microsoft Visual C++ on Windows and integrates into Microsoft Visual Studio. On Linux and Mac, it is compatible with GNU Compiler Collection (GCC) and the GNU toolchain. Intel C++ Compiler for Android is hosted on Windows, OS X or Linux and is compatible with the Android NDK, including gcc and the Eclipse IDE. Intel compilers are known for the application performance they can enable as measured by benchmarks, such as the SPEC CPU benchmarks.

Optimizations

Intel compilers are optimized for computer systems using processors that support Intel architectures. They are designed to minimize stalls and to produce code that executes in the fewest possible cycles. The Intel C++ Compiler supports three separate high-level techniques for optimizing the compiled program: interprocedural optimization (IPO), profile-guided optimization (PGO), and high-level optimizations (HLO). The Intel C++ compiler in the Parallel Studio XE products also supports tools, techniques and language extensions for adding and maintaining application parallelism on IA-32 and Intel 64 processors, and enables compiling for Intel Xeon Phi processors and coprocessors.

Profile-guided optimization refers to a mode of optimization where the compiler is able to access data from a sample run of the program across a representative input set. The data would indicate which areas of the program are executed more frequently, and which areas are executed less frequently. All optimizations benefit from profile-guided feedback because they are less reliant on heuristics when making compilation decisions.
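As an illustration, consider code with a rarely taken error branch: with profile feedback the compiler can keep the hot path contiguous and lay out the cold path out of line instead of guessing. The example below is hypothetical, not taken from Intel's documentation.

```cpp
// pgo_shape.cpp -- illustrative code whose layout benefits from PGO.
#include <cstdio>

int process(int value) {
    if (value < 0) {                    // cold path in typical input sets
        std::fprintf(stderr, "bad value\n");
        return -1;
    }
    return value * 2;                   // hot path, kept fall-through by PGO
}

int main() {
    long sum = 0;
    for (int i = 0; i < 1000000; ++i)
        sum += process(i);              // representative training workload
    std::printf("%ld\n", sum);
    return 0;
}
```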

High-level optimizations are optimizations performed on a version of the program that more closely represents the source code. This includes loop interchange, loop fusion, loop fission, loop unrolling, data prefetch, and more.[5]
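A small sketch of what loop interchange does: the "before" nest strides across rows in its inner loop, while the interchanged version gives the unit-stride accesses that caches and prefetchers prefer. The compiler can apply the transformation itself at higher optimization levels; the hand-written "after" version is shown only to make the effect visible.

```cpp
// interchange.cpp -- the effect of loop interchange, one of the HLO passes.
#include <cstdio>

const int N = 512;
static double m[N][N];

void scale_before(double s) {
    for (int j = 0; j < N; ++j)        // inner loop jumps N doubles each step
        for (int i = 0; i < N; ++i)
            m[i][j] *= s;
}

void scale_after(double s) {
    for (int i = 0; i < N; ++i)        // interchanged: contiguous inner loop
        for (int j = 0; j < N; ++j)
            m[i][j] *= s;
}

int main() {
    m[0][0] = 1.0;
    scale_before(2.0);
    scale_after(3.0);
    std::printf("%f\n", m[0][0]);      // 6.0
    return 0;
}
```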

Interprocedural optimization applies typical compiler optimizations (such as constant propagation) but using a broader scope that may include multiple procedures, multiple files, or the entire program.[6]
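A hypothetical two-file sketch of what the broader scope buys: compiled separately, the compiler cannot see that the helper is always called with a constant; with the documented IPO switches (-ipo on Linux, /Qipo on Windows) the whole-program view lets it propagate the constant, inline the call and fold the loop body away. File and function names are illustrative.

```cpp
// Two translation units, shown together for brevity.

// --- helper.cpp ---
int scale(int x, int factor) {
    return x * factor;
}

// --- main.cpp ---
int scale(int x, int factor);   // normally declared in a shared header

int main() {
    int total = 0;
    for (int i = 0; i < 100; ++i)
        total += scale(i, 0);   // IPO can prove this is always 0
    return total;
}
```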

Intel's compiler has been criticized for applying, by default, floating-point optimizations not allowed by the C standard and that require special flags with other compilers such as gcc.[7]
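A small example of why such optimizations are value-changing: IEEE 754 addition is not associative, so reassociating an expression, which a relaxed floating-point model permits, can alter the result. The flags mentioned in the comments (-fp-model precise for the Intel compiler, -ffast-math for gcc) are the commonly documented ones and are cited as context rather than verified against a specific version.

```cpp
// reassoc.cpp -- floating-point addition is not associative.
// Strict left-to-right evaluation, as the C and C++ standards specify, gives a
// different answer than the algebraically equivalent reassociated form that a
// value-unsafe optimizer may substitute. With the Intel compiler the stricter
// behaviour is requested with -fp-model precise (/fp:precise on Windows);
// with gcc the relaxed behaviour must instead be opted into with -ffast-math.
#include <cstdio>

int main() {
    float big = 1.0e8f, small = 1.0f;
    float strict  = (big + small) - big;   // 'small' is lost to rounding: 0
    float reassoc = big - big + small;     // reassociated form: 1
    std::printf("strict=%g reassociated=%g\n", strict, reassoc);
    return 0;
}
```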

Architectures

  • IA-32
  • x86-64 (Intel 64 and AMD64)
  • Intel Xeon Phi coprocessor
  • IA-64 (Itanium 2)

Description of packaging

Except for the Intel Bi-Endian C++ Compiler, Intel C++ compilers are not available in standalone form. They are available in suites:

  • Intel Parallel Studio XE for development of technical, enterprise, and high-performance computing applications on Windows, Linux and Mac
  • Intel System Studio for development of system and app software for embedded systems or devices running Windows, Linux or Android

The suites include other build tools, such as libraries, and tools for threading and performance analysis.

History since 2003

{| class="wikitable"
! Compiler version !! Release date !! Major new features
|-
| Intel C++ Compiler 8.0 || December 15, 2003 || Precompiled headers, code-coverage tools.
|-
| Intel C++ Compiler 8.1 || September 2004 || AMD64 architecture (for Linux).
|-
| Intel C++ Compiler 9.0 || June 14, 2005 || AMD64 architecture (for Windows), software-based speculative pre-computation (SSP) optimization, improved loop optimization reports.
|-
| Intel C++ Compiler 10.0 || June 5, 2007 || Improved parallelizer and vectorizer, Streaming SIMD Extensions 4 (SSE4), new and enhanced optimization reports for advanced loop transformations, new optimized exception handling implementation.
|-
| Intel C++ Compiler 10.1 || November 7, 2007 || New OpenMP* compatibility runtime library: with the new OpenMP RTL, libraries and objects built by Visual C++ can be mixed and matched. The new libraries are enabled with "/Qopenmp /Qopenmp-lib:compat" on Windows and "-openmp -openmp-lib:compat" on Linux. This version supports more intrinsics from Visual Studio 2005. VS2008 support was command line only in this release; IDE integration was not yet supported.
|-
| Intel C++ Compiler 11.0 || November 2008 || Initial C++11 support. VS2008 IDE integration on Windows. OpenMP 3.0. Source Checker for static memory/parallel diagnostics.
|-
| Intel C++ Compiler 11.1 || June 23, 2009 || Support for the latest Intel SSE4.2, AVX and AES instructions. Parallel Debugger Extension. Improved integration into Microsoft Visual Studio, Eclipse CDT 5.0 and Mac Xcode IDE.
|-
| Intel C++ Composer XE 2011 up to Update 5 (compiler 12.0) || November 7, 2010 || Cilk Plus language extensions, Guided Auto-Parallelism, improved C++11 support.[8]
|-
| Intel C++ Composer XE 2011 Update 6 and above (compiler 12.1) || September 8, 2011 || Cilk Plus language extensions updated to support specification version 1.1 and available on Mac OS X in addition to Windows and Linux, Threading Building Blocks updated to support version 4.0, Apple blocks supported on Mac OS X, improved C++11 support including variadic templates, OpenMP 3.1 support.
|-
| Intel C++ Composer XE 2013 (compiler 13.0) || September 5, 2012 || Linux-based support for Intel Xeon Phi coprocessors, support for Microsoft Visual Studio 12 (Desktop), support for gcc 4.7, support for Intel AVX 2 instructions, updates to existing functionality focused on improved application performance.[9]
|-
| Intel C++ Composer XE 2013 SP1 (compiler 14.0) || September 4, 2013 || Online installer; support for Intel Xeon Phi coprocessors; preview Win32-only support for Intel graphics; improved C++11 support
|-
| Intel C++ Composer XE 2013 SP1 Update 1 (compiler 14.0.1) || October 18, 2013 || Japanese localization of 14.0; Windows 8.1 and Xcode 5.0 support
|-
| Intel C++ Compiler for Android (compiler 14.0.1) || November 12, 2013 || Hosted on Windows, Linux, or OS X; compatible with Android NDK tools including the gcc compiler and Eclipse
|-
| Intel C++ Composer XE 2015 (compiler 15.0) || July 25, 2014 || Full C++11 language support; additional OpenMP 4.0 and Cilk Plus enhancements
|-
| Intel C++ Composer XE 2015 Update 1 (compiler 15.0.1) || October 30, 2014 || AVX-512 support; Japanese localization
|-
| Intel C++ 16.0 || August 25, 2015 || Suite-based availability (Intel Parallel Studio XE, Intel System Studio)
|-
| Intel C++ 17.0 || September 15, 2016 || Suite-based availability (Intel Parallel Studio XE, Intel System Studio)
|-
| Intel C++ 18.0 || January 26, 2017 || Suite-based availability (Intel Parallel Studio XE, Intel System Studio)
|}

Flags and manuals

Documentation can be found at the Intel Software Technical Documentation site.

{| class="wikitable"
! Windows !! Linux, macOS & FreeBSD !! Comment
|-
| /Od || -O0 || No optimization
|-
| /O1 || -O1 || Optimize for size
|-
| /O2 || -O2 || Optimize for speed and enable some optimization
|-
| /O3 || -O3 || Enable all optimizations of O2, plus intensive loop optimizations
|-
| /arch:SSE3 || -msse3 || Enables SSE3, SSE2 and SSE instruction set optimizations for non-Intel CPUs[10]
|-
| /fast || -fast || Shorthand. On Windows this equates to "/O3 /Qipo /QxHost /Qprec-div-"; on Linux, "-O3 -ipo -static -xHOST -no-prec-div". Note that the processor-specific optimization flag (-xHOST) optimizes for the processor the code is compiled on; it is the only flag of -fast that may be overridden
|-
| /Qprof-gen || -prof_gen || Compile the program and instrument it for a profile-generating run
|-
| /Qprof-use || -prof_use || May only be used after running a program that was previously compiled using prof_gen. Uses profile information during each step of the compilation process
|}
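As a usage sketch, a profile-guided build with the flags from the table is a three-step process: instrument, run on representative input, then recompile with the profile. The command spellings below follow the table but vary by compiler version and platform, so treat them as an assumption-laden outline rather than verified invocations.

```cpp
// pgo_demo.cpp -- hypothetical profile-guided build outline.
//
//   step 1: instrumented build
//     Windows:  icl /Qprof-gen pgo_demo.cpp      Linux:  icc -prof_gen pgo_demo.cpp
//   step 2: run the instrumented binary on representative input to collect a profile
//   step 3: optimized rebuild using the collected profile
//     Windows:  icl /Qprof-use /O2 pgo_demo.cpp  Linux:  icc -prof_use -O2 pgo_demo.cpp
#include <cstdio>

int main(int argc, char**) {
    long acc = 0;
    for (int i = 0; i < 1000000; ++i)
        acc += (argc > 1) ? i : i / 2;   // branch bias recorded by the training run
    std::printf("%ld\n", acc);
    return 0;
}
```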

Debugging

The Intel compiler produces debugging information in the formats standard for the common debuggers: DWARF 2 on Linux, as used by gdb, and COFF for Windows. The flags to compile with debugging information are /Zi on Windows and -g on Linux. Debugging is done on Windows using the Visual Studio debugger and on Linux using gdb.

While the Intel compiler can generate gprof-compatible profiling output, Intel also provides a kernel-level, system-wide statistical profiler called Intel VTune Amplifier. VTune can be used from a command line or through an included GUI on Linux or Windows. It can also be integrated into Visual Studio on Windows or Eclipse on Linux. In addition to the VTune profiler, there is Intel Advisor, which specializes in vectorization optimization and provides tools for threading design and prototyping.

Intel also offers a tool for memory and threading error detection called Intel Inspector XE. For memory errors, it helps detect memory leaks, memory corruption, allocation/de-allocation API mismatches and inconsistent memory API usage. For threading errors, it helps detect data races (both heap and stack), deadlocks, and thread and synchronization API errors.

Reception

Intel and third parties have published benchmark results to substantiate performance leadership claims over other commercial, open-source and AMD compilers and libraries on Intel and non-Intel processors. Intel and AMD have documented flags to use on the Intel compilers to get optimal performance on Intel and AMD processors.[11][12] Nevertheless, the Intel compilers have been known to produce sub-optimal code for processors from vendors other than Intel. For example, Steve Westfield wrote in a 2005 article at the AMD website:[13]

{{cquote|text=Intel 8.1 C/C++ compiler uses the flag -xN (for Linux) or -QxN (for Windows) to take advantage of the SSE2 extensions. For SSE3, the compiler switch is -xP (for Linux) and -QxP (for Windows). ... With the -xN/-QxN and -xP/-QxP flags set, it checks the processor vendor string—and if it's not "GenuineIntel", it stops execution without even checking the feature flags. Ouch!}}

The Danish developer and scholar Agner Fog wrote in 2009:[14]

{{cquote|text=The Intel compiler and several different Intel function libraries have suboptimal performance on AMD and VIA processors. The reason is that the compiler or library can make multiple versions of a piece of code, each optimized for a certain processor and instruction set, for example SSE2, SSE3, etc. The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string is "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version.}}
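To make the dispatching described above concrete, the sketch below reads the CPUID vendor string the way a dispatcher could; it is illustrative only and is not Intel's dispatcher code. It assumes the <cpuid.h> intrinsic available with gcc, clang and icc on x86 (MSVC exposes __cpuid in <intrin.h> instead).

```cpp
// vendor_check.cpp -- illustrative sketch of a vendor-string check.
#include <cpuid.h>
#include <cstring>
#include <cstdio>

bool vendor_is_genuine_intel() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return false;
    char vendor[13];
    std::memcpy(vendor + 0, &ebx, 4);   // CPUID leaf 0 returns the vendor
    std::memcpy(vendor + 4, &edx, 4);   // string split across EBX, EDX, ECX
    std::memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
    return std::strcmp(vendor, "GenuineIntel") == 0;
}

int main() {
    // A vendor-based dispatcher would pick a code path here; the criticism is
    // that checking the string rather than the feature flags penalizes
    // compatible non-Intel CPUs.
    std::printf("GenuineIntel: %s\n", vendor_is_genuine_intel() ? "yes" : "no");
    return 0;
}
```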

This vendor-specific CPU dispatching decreases the performance of software built with an Intel compiler or an Intel function library on non-Intel processors, possibly without the programmer's knowledge. This has allegedly led to misleading benchmarks.[14] A legal battle between AMD and Intel over this and other issues was settled in November 2009.[15] In late 2010, Intel settled a US Federal Trade Commission antitrust investigation.[16]

The FTC settlement included a disclosure provision where Intel must:[17]

{{cquote|text=publish clearly that its compiler discriminates against non-Intel processors (such as AMD's designs), not fully utilizing their features and producing inferior code.}}

In compliance with this rule, Intel added an "optimization notice" to its compiler descriptions stating that they "may or may not optimize to the same degree for non-Intel microprocessors" and that "certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors". It says that:[18]

{{cquote|text=Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.}}

As reported by The Register[19] in July 2013, Intel was suspected of "benchmarksmanship" when it was shown that the object code produced by the Intel compiler for the AnTuTu Mobile Benchmark omitted portions of the benchmark, making Intel's performance appear higher relative to ARM platforms.

See also

  • AMD Optimizing C/C++ Compiler
  • Intel Parallel Studio XE
  • Intel Integrated Performance Primitives (IPP)
  • Intel Data Analytics Acceleration Library (DAAL)
  • Intel Math Kernel Library (MKL)
  • Intel Threading Building Blocks (TBB)
  • Cilk Plus
  • VTune Amplifier
  • Intel Fortran Compiler
  • Intel Developer Zone (Intel DZ; support and discussion)

References

1. ^{{cite web |title= Intel C++ Compiler 19.0 Release Notes|url= https://software.intel.com/en-us/articles/intel-cpp-compiler-release-notes#2019u1}}
2. ^{{cite web |title=Intel C++ Compiler for Android documentation |url=http://software.intel.com/c-compiler-android}}
3. ^{{cite web|url=https://software.intel.com/en-us/articles/intel-system-studio-2016-for-freebsd|title=Intel® System Studio 2016 for FreeBSD* {{!}} Intel® Software|website=software.intel.com|language=en|access-date=2018-03-15}}
4. ^A. J. C. Bik, The Software Vectorization Handbook (Intel Press, Hillsboro, OR, 2004), {{ISBN|0-9743649-2-4}}.
5. ^The Software Optimization Cookbook, High-Performance Recipes for IA-32 Platforms, Richard Gerber, Aart J.C. Bik, Kevin B. Smith, and Xinmin Tian, Intel Press, 2006
6. ^Intel C++ Compiler XE 13.0 User and Reference Guides
7. ^The pitfalls of verifying floating-point computations, by David Monniaux, also printed in ACM Transactions on programming languages and systems (TOPLAS), May 2008; section 4.3.2 discusses nonstandard optimizations.
8. ^This note is attached to the release in which Cilk Plus was introduced. This URL points to current documentation: http://software.intel.com/en-us/intel-composer-xe/
9. ^Intel C++ Composer XE 2013 Release Notes  http://software.intel.com/en-us/articles/intel-c-composer-xe-2013-release-notes/
10. ^{{cite web|url=http://www.intel.com/software/products/compilers/docs/cwin/release_notes.htm |title=Intel® Compilers {{!}} Intel® Developer Zone |publisher=Intel.com |date=1999-02-22 |accessdate=2012-10-13}}
11. ^  {{webarchive|url=https://web.archive.org/web/20100323062819/http://software.intel.com/sites/products/documentation/hpc/compilerpro/en-us/cpp/win/compiler_c/index.htm|date=March 23, 2010}}
12. ^{{cite web |url=http://developer.amd.com/Assets/CompilerOptQuickRef-61004100.pdf |title=Archived copy |accessdate=2011-03-30 |deadurl=yes |archiveurl=https://web.archive.org/web/20110322061401/http://developer.amd.com/Assets/CompilerOptQuickRef-61004100.pdf |archivedate=2011-03-22 |df= }}
13. ^{{cite web|url=http://developer.amd.com/documentation/articles/pages/4292005119.aspx|title=Your Processor, Your Compiler, and You: The Case of the Secret CPUID String|publisher=}}
14. ^{{cite web|url=http://www.agner.org/optimize/blog/read.php?i=49|title=Agner`s CPU blog - Intel's "cripple AMD" function|website=www.agner.org}}
15. ^{{cite web|url=http://download.intel.com/pressroom/legal/AMD_settlement_agreement.pdf |title=Settlement agreement |website=download.intel.com |format=PDF}}
16. ^{{cite web|url=http://newsroom.intel.com/community/intel_newsroom/blog/2010/08/04/intel-and-us-federal-trade-commission-reach-tentative-settlement |title=Intel and U.S. Federal Trade Commission Reach Tentative Settlement |publisher=Newsroom.intel.com |date=2010-08-04 |accessdate=2012-10-13}}
17. ^FTC, Intel Reach Settlement; Intel Banned From Anticompetitive Practices
18. ^{{cite web|title=Optimization Notice|url=http://software.intel.com/en-us/articles/optimization-notice|publisher=Intel Corporation|accessdate=11 December 2013}}
19. ^{{cite web|url=https://www.theregister.co.uk/2013/07/12/intel_atom_didnt_beat_arm|title=Analyst: Tests showing Intel smartphones beating ARM were rigged|publisher=}}

External links

  • [https://software.intel.com/en-us/c-compilers/iss Intel C++ Compiler for Android]
  • Compilers in Parallel Studio XE 2013
  • Cilk Plus Open Source Site
  • TBB Open Source Site
  • Free download of Intel compilers for non-commercial use
{{Intel software}}

Categories: C compilers | C++ compilers | Intel software
