High Performance Parallelism Pearls shows how to leverage parallelism on processors and coprocessors with the same programming model, illustrating the most effective ways to tap the computational potential of systems with Intel® Xeon Phi™ coprocessors and Intel® Xeon® processors or other multicore processors. The book includes examples of successful programming efforts drawn from industries and domains such as chemistry, engineering, and environmental science. Each chapter in this edited work includes a detailed explanation of the programming techniques used, along with high performance results on both Intel Xeon Phi coprocessors and multicore processors. Learn from dozens of new examples and case studies: "success stories" that demonstrate not just the features of these powerful systems, but also how to leverage parallelism across heterogeneous systems.
- Promotes consistent standards-based programming, showing in detail how to code for high performance on multicore processors and Intel® Xeon Phi™ coprocessors
- Presents examples from multiple vertical domains, illustrating the parallel optimizations used to modernize real-world codes
- Source code available for download to facilitate further exploration
1. Introduction
2. Towards an Efficient Godunov's Scheme on Phi
3. Better Concurrency and SIMD on HBM
4. Case Study: Analyzing and Optimizing Concurrency
5. Plesiochronous Phasing Barriers
6. Parallel Evaluation of Fault Tree Expressions
7. Deep-Learning and Numerical Optimization
8. Optimizing Gather/Scatter Patterns
9. A Many-Core Implementation of the Direct N-Body Problem
10. N-Body Methods on Intel® Xeon Phi™ Coprocessors
11. Dynamic Load Balancing Using OpenMP 4.0
12. Concurrent Kernel Offloading
13. Heterogeneous Computing with MPI
14. Power Analysis on the Intel® Xeon Phi™ Coprocessor
15. Integrating Intel Xeon Phi Coprocessors into a Cluster
16. Native File Systems
17. NWChem: Quantum Chemistry Simulations at Scale
18. Efficient Nested Parallelism on Large-Scale Systems
19. Performance Optimization of Black-Scholes Pricing
20. Host and Coprocessor Data Transfer through the COI
21. High Performance Ray Tracing with Embree
22. Portable Performance with OpenCL
23. Characterization and Auto-tuning of 3DFD
24. Profiling-Guided Optimization of Cache Performance
25. Heterogeneous MPI Optimization with ITAC
26. Scalable Out-of-Core Solvers on a Cluster
27. Sparse Matrix-Vector Multiplication: Parallelization and Vectorization
28. Morton Order Improves Performance
James Reinders is a senior engineer who joined Intel Corporation in 1989 and has contributed to projects including the world's first TeraFLOP supercomputer (ASCI Red), as well as compiler and architecture work for a number of Intel processors and parallel systems. He has been a driver behind Intel's development into a major provider of software development products and serves as its chief software evangelist. He has published numerous articles, contributed to several books, and is widely interviewed on parallelism. He has managed software development groups, customer service and consulting teams, and business development and marketing teams. A sought-after keynote speaker on parallel programming, he is the author or co-author of three books currently in print, including Structured Parallel Programming, published by Morgan Kaufmann in 2012.
Jim Jeffers was the primary strategic planner and one of the first full-time employees on the program that became Intel® MIC. He served as the program's lead software engineering manager and formed and launched the software development team. As the program evolved, he became manager of the workloads (applications) and software performance team. He has some of the deepest insight into the market, architecture, and programming usages of the MIC product line, and has been a developer and development manager for embedded and high-performance systems for close to 30 years.