||Presentations / Speakers
||Registration at Entrance Hall in the Main Building
From the latency to the throughput age, Jesus Labarta (BSC) [Slide]
Abstract: The talk will present a vision of how multicore and new architectures are impacting parallel programming in the high-performance context, and what we think should be the result of the revolution we are living through.
As parallel programmers, we have been writing codes driven by our mental models of the machines we were targeting. The evolution towards increasing complexity, scale and variability in our systems is generating a growing divergence between our mental models of how systems behave and how they actually behave. This creates a feedback loop in which programmer productivity, code maintainability and performance portability are severely damaged.
In this context we consider that programming model developments should concentrate efforts on providing clean interfaces that let programmers focus on specifying algorithms, computations and the data they use. The runtime should take responsibility for mapping those demands onto the available platform, optimizing locality and resource utilization in a very dynamic and responsive way. From this point of view, we consider that task-based models, in the direction in which the OpenMP standard is evolving, provide the fundamental mechanisms that support such decoupling of programs from architectural details.
A programming model lacking fundamental mechanisms will certainly result in low-quality programs, but having a model that properly supports those mechanisms does not guarantee the ideal result. We claim that the actual revolution has to be an important change in the mindset of programmers. It is our belief that the deep, fundamental change that will characterize this revolution is a transition from the still prevalent latency-dominated mentality to a throughput-oriented mentality, which we consider the key to successfully addressing the exascale challenge. This will require time, best-practice demonstrations and training, but a quiet revolution is possible.
The talk will elaborate on this vision and present examples of how it drives the OmpSs model and associated runtime developments at BSC.
Speaker Details: Jesus Labarta has been a full professor of Computer Architecture at the Technical University of Catalonia (UPC) since 1990. Since 1981 he has lectured on computer architecture, operating systems, computer networks and performance evaluation. His research interests have centered on parallel computing, covering areas from multiprocessor architecture, memory hierarchy, programming models, parallelizing compilers, operating systems and parallelization of numerical kernels to performance analysis and prediction tools.
Since 2005 he has been responsible for the Computer Science Research Department within the Barcelona Supercomputing Center (BSC). He has been involved in research cooperation with many leading companies on HPC-related topics. His major current lines of work relate to performance analysis tools, programming models and resource management. His team distributes the open-source BSC tools (Paraver and Dimemas) and performs research on increasing the intelligence embedded in performance analysis tools. He is involved in the development of the OmpSs programming model and its implementations for SMP, GPU and cluster platforms. He has been involved in exascale activities such as IESP and EESI, where he was responsible for the Runtime and Programming Model sections of the respective roadmaps. He leads the programming models and resource management activities in the HPC subproject of the Human Brain Project.
(Session Chair: Bronis R. de Supinski, LLNL)
||Session 5: Extensions (Session Chair: Alice Koniges, Berkeley Lab / NERSC)
- Reducing the Functionality Gap between Auto-Vectorization and Explicit Vectorization: Compress/Expand and Histogram
Hideki Saito, Serguei Preis, Nikolay Panchenko, and Xinmin Tian
- A Proposal to OpenMP for Addressing the CPU Oversubscription Challenge
Yonghong Yan, Jeff R. Hammond, Chunhua Liao, and Alexandre E. Eichenberger
||Session 6: Tools (Session Chair: Nawal Copty, Oracle)
- Testing Infrastructure for OpenMP Debugging Interface Implementations
Joachim Protze, Dong Ahn, Ignacio Laguna, Martin Schulz, and Matthias Mueller
- The secrets of the accelerators unveiled: Tracing heterogeneous executions through OMPT
Germán Llort, Antonio Filgueras, Daniel Jiménez-González, Harald Servat, Xavier Teruel, Estanislao Mercadal, Carlos Álvarez, Judit Giménez, Xavier Martorell, Eduard Ayguadé, and Jesús Labarta
- Language-Centric Performance Analysis of OpenMP Programs with Aftermath
Andi Drebes, Jean-Baptiste Bréjon, Antoniu Pop, Karine Heydemann, and Albert Cohen
||Lunch (at Cafe Half Time, on the first basement floor of the Nara National Museum) [Route]
||Session 7: Accelerator programming (Session Chair: Eric Stotzer, Texas Instruments)
- Pragmatic Performance Portability with OpenMP 4.x
Matt Martineau, James Price, Simon McIntosh-Smith, and Wayne Gaudin
- Multiple Target Task Sharing Support for the OpenMP Accelerator Model
Guray Ozen, Sergi Mateo, James Beyer, Eduard Ayguade, and Jesus Labarta
- Early Experiences Porting Three Applications to OpenMP 4.5
Ian Karlin, Tom Scogland, Arpith C. Jacob, Samuel F. Antao, Gheorghe-Teodor Bercea, Carlo Bertolli, Bronis R. de Supinski, Erik W. Draeger, Alexandre E. Eichenberger, Jim Glosli, Holger Jones, Adam Kunen, David Poliakoff, and David F. Richards
- Design and Preliminary Evaluation of Omni OpenACC Compiler for Massive MIMD Processor PEZY-SC
Akihiro Tabuchi, Yasuyuki Kimura, Sunao Torii, Hideo Matsufuru, Tadashi Ishikawa, Taisuke Boku, and Mitsuhisa Sato
||Session 8: Performance evaluations and optimization (Session Chair: Thomas Scogland, LLNL)
- Evaluating OpenMP Implementations for Java Using PolyBench
Xing Fan, Rui Feng, Oliver Sinnen, and Nasser Giacaman
- Transactional Memory for Algebraic Multigrid Smoothers
Barna Bihari, Ulrike Yang, Michael Wong, and Bronis R. de Supinski
- Supporting Adaptive Privatization Techniques for Irregular Array Reductions in Task-parallel Programming Models
Jan Ciesko, Sergi Mateo, Xavier Teruel, Xavier Martorell, Eduard Ayguade, and Jesus Labarta