Flexibly Scalable High Performance Architectures with Embedded Photonics

Keren Bergman (Columbia University, US)

This event is free of charge and open to the general public. The lecture is given in English.

Friday, June 14, 2019
15:45 - 16:35
Room: HG F 30

Computing systems are critically challenged to meet the performance demands of applications particularly driven by the explosive growth in data analytics. Data movement, dominated by energy costs and limited ‘chip-escape’ bandwidth densities, is a key physical layer roadblock to these systems’ scalability. Integrated silicon photonics with deeply embedded optical connectivity is on the cusp of enabling revolutionary data movement and extreme performance capabilities. Beyond alleviating the bandwidth/energy bottlenecks, embedded photonics can enable new disaggregated architectures that leverage the distance independence of optical transmission. We will discuss how the envisioned modular system interconnected by a unified photonic fabric can be flexibly composed to create custom architectures tailored for specific applications.

Keren Bergman is the Charles Batchelor Professor of Electrical Engineering at Columbia University, where she also serves as the Faculty Director of the Columbia Nano Initiative. Prof. Bergman received her B.S. from Bucknell University in 1988, and her M.S. (1991) and Ph.D. (1994) from MIT, all in Electrical Engineering. At Columbia, Bergman leads the Lightwave Research Laboratory, encompassing multiple cross-disciplinary programs at the intersection of computing and photonics. Bergman serves on the Leadership Council of the American Institute of Manufacturing (AIM) Photonics, leading projects that support the institute's silicon photonics manufacturing capabilities and datacom applications. She is a Fellow of the Optical Society of America (OSA) and IEEE.


Investigating Epistatic and Pleiotropic Genetic Architectures in Bioenergy and Human Health

Dan Jacobson (Oak Ridge National Laboratory, US)

Wednesday, June 12, 2019
10:10 - 11:00
Room: HG F 30

The new CoMet application, recipient of the 2018 ACM Gordon Bell Prize, implements the 2-way and 3-way Proportional Similarity metric and the Custom Correlation Coefficient using native or adapted GEMM kernels optimized for GPU architectures. On Summit, it reaches nearly 300 quadrillion element comparisons per second and over 2.3 mixed-precision ExaOps by exploiting the Tensor Core hardware of the NVIDIA Volta GPUs. These similarity metrics form the core of large-scale Genome-Wide Epistasis Studies (GWES) and pleiotropy studies, which seek to identify genetic variants that contribute to individual phenotypes, including susceptibility (or robustness) to disease. We are using CoMet to investigate the genetic architectures underlying complex traits in applications ranging from bioenergy to human clinical genomics.
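As a rough illustration of what the 2-way metric computes (this sketch is not CoMet's GPU implementation, which maps the calculation onto GEMM kernels across all vector pairs), the Proportional Similarity between two non-negative feature vectors can be written as the overlap of their normalized profiles:

```python
def proportional_similarity(p, q):
    """2-way Proportional Similarity (Czekanowski) between two
    non-negative vectors of equal length.

    Each vector is first normalized to sum to 1; the metric is then
    PS(p, q) = sum_i min(p_i, q_i) = 1 - 0.5 * sum_i |p_i - q_i|,
    ranging from 0 (disjoint profiles) to 1 (identical profiles).
    """
    sp, sq = sum(p), sum(q)
    pn = [x / sp for x in p]
    qn = [x / sq for x in q]
    return sum(min(a, b) for a, b in zip(pn, qn))

# Identical profiles score 1, disjoint profiles score 0.
print(proportional_similarity([2.0, 2.0], [1.0, 1.0]))  # 1.0
print(proportional_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

A GWES run evaluates this metric over every pair (or, for the 3-way case, every triple) of genetic variant vectors, which is what makes GEMM-style batching on Tensor Cores so effective.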

Dan Jacobson’s career as a computational systems biologist has included leadership roles in academic, corporate, NGO and national lab settings. His lab focuses on the development and subsequent application of computational methods to biological datasets. These methods are applied to various population-scale multiomics data sets in an attempt to better understand the functional relationships at play in biological organisms and communities. His group at ORNL studies many systems - from viruses to microbes to plants to Drosophila, mice and humans. His lab is actively involved in the development of new exascale applications for biology and he is a recent recipient of the Gordon Bell Prize.


Large-Scale Optimization Strategies for Typical HPC Workloads

Yu Liu (Inspur, China)

This event is promoted by Inspur (PASC19 Platinum Sponsor)

Friday, June 14, 2019
08:00 - 08:45
Room: HG F 30

Ensuring the performance of applications running on large-scale clusters is a primary focus of HPC research. In this talk, we will present our strategies for analyzing and optimizing the performance of applications from various fields of research on large-scale HPC clusters. Our strategies comprehensively analyze the runtime characteristics of applications, the parallelization strategies of physical models, algorithmic implementations, and other technical details. Optimization then proceeds at three levels (platform optimization, technological innovation, and model innovation), targeted to each application based on these characteristics. Using this approach, we have optimized a range of applications by exploiting state-of-the-art CPU instruction sets, tuning network communication patterns, and devising innovative parallel strategies.

Dr. Liu is the Head of the HPC Application Support Team at Inspur. Since joining Inspur, he and his team have engaged primarily in the optimization and acceleration of large-scale scientific computing applications in the fields of meteorology, oceanography, climatology, physics, life sciences and chemistry. At Inspur, he has designed and developed the in-house software package "Teye" for monitoring and analyzing the characteristics of HPC applications. In addition, he has refined and deepened the Inspur HPC application characteristics analysis method and distilled a methodology for profiling computational science applications from the perspectives of theory and algorithms. He and his team have been involved in the design and optimization of the core codes and algorithms for a number of research projects in multi-disciplinary computational science. Dr. Liu received his PhD in Condensed Matter Physics from the Chinese Academy of Sciences in 2011.


High Performance Computing for Instabilities in Aerospace Propulsion Systems

Thierry Poinsot (Institut de Mécanique des Fluides de Toulouse and CERFACS, France)

Thursday, June 13, 2019
19:00 - 19:50
Room: HG F 30

Combustion produces more than 80 percent of the world's energy. This will continue for a long time, as global energy demand is growing much faster than new renewable sources can supply. Our civilization must accommodate the growth of combustion sources while, at the same time, keeping global warming and pollution under control. Science has a key role in this scenario: it must optimize combustion systems far beyond the present state of the art. One promising path is to use high performance computing to simulate and optimize combustors before they are built. This talk focuses on aerospace propulsion, where optimization often leads to instabilities in which combustion couples with acoustics, producing unacceptable oscillations (the most famous example is the Apollo engine, which required 1330 full-scale tests to reach acceptable oscillation levels). The talk will show how simulation is used to control these problems, both in real gas turbine engines and in rocket engines.

Thierry Poinsot is a research director at IMFT CNRS, head of the CFD group at CERFACS, senior research fellow at Stanford University, and a consultant for various companies. His group has contributed a significant body of recent research in the field of LES of turbulent combustion in gas turbines. He teaches numerical methods and combustion at many schools and universities worldwide. He has authored more than 200 papers in refereed journals and 250 conference communications. He is the author, with Dr. D. Veynante, of the textbook "Theoretical and Numerical Combustion", and is the editor of "Combustion and Flame". In 2017, he received the Zeldovich Gold Medal of the Combustion Institute. He also gave the prestigious Hottel plenary lecture at the Symposium on Combustion in Seoul (2016).


Microsoft Optics for the Cloud – A New Approach to Data Centre Technology

Scarlet Schwiderski-Grosche (Microsoft Research Cambridge, UK)

This event is promoted by Microsoft Switzerland Ltd. (PASC19 Platinum Sponsor)

Thursday, June 13, 2019
08:00 - 08:45
Room: HG F 30

New hardware technologies such as rack-scale computers (RSCs) are redefining the landscape of data center computing today, achieving both higher bandwidth and lower latency. As a basic building block of a redesigned stack of hardware, OS, storage, and networking, such infrastructure is increasingly suitable for HPC workloads. However, most of today's data center technology was designed or conceived before the cloud existed, and many of these technologies represent compromises encumbered by legacy thinking. At Microsoft Research Cambridge, we are exploring optical technologies across the three primary resources: networking, storage, and compute.
In this talk, we will outline the future challenges in the data center and discuss the limitations of current technology. We will take a holistic end-to-end view of the needs of the cloud and present our work on optical networking and storage.

Dr. Scarlet Schwiderski-Grosche is a Director at Microsoft Research Cambridge. She drives strategic research partnerships with academic institutions in EMEA, including the Swiss Joint Research Center with EPFL and ETH Zurich and the Inria Joint Center. Moreover, Scarlet leads academic outreach around optical storage and networking for Microsoft Optics for the Cloud. Scarlet has ten years’ experience driving collaborations with academia and industry around cutting-edge and highly interdisciplinary research projects and has managed a multitude of academic programs including the EMEA PhD Scholarship Programme and the EMEA PhD Summer Schools.
