Abstraction, Orchestration and Modelling of Data Movement in Heterogeneous Memory Systems

Data movement is a constraining factor in almost all HPC applications and workflows. The reasons for this ubiquity include physical design constraints, environmental/power limitations, the relative advancement of processors versus memory, and rapid increases in dataset sizes. While decades of research and innovation in HPC have resulted in robust and powerful optimising environments, even basic data-movement optimisation remains a challenge. In many cases, fundamental abstractions suited to the expression of data are still missing, as is a well-functioning model of the various memory types and features. Performance portability on exascale systems requires that heterogeneous memories be used intelligently and abstractly in the middleware/runtime rather than requiring explicit, laborious hand-coding. To do so, capacity, bandwidth and latency considerations at multiple levels must be understood (and often modelled) at runtime. Furthermore, the semantics of data usage within applications must be evident in the programming model. Several research projects in the US and in Europe are presenting solutions either to a piece of this problem (EPiGRAM-HS, Tuyere) or to the problem as a whole (Maestro, Unity). This minisymposium will present a sample of the most relevant research concerning programming abstractions, models and runtimes for data movement from the perspectives of system/software vendors (Cray), world-class supercomputer centres (ORNL, KTH) and application scientists (ECMWF).

Organizer(s): Adrian Tate (Cray Inc.), Dirk Pleiter (Forschungszentrum Jülich), and Stefano Markidis (KTH Royal Institute of Technology)

Domain(s): Computer Science and Applied Mathematics

Accelerating High Energy Physics with GPUs

Thanks to their dedicated architecture, graphics processing units (GPUs) are the workhorse of parallel computing and have driven the majority of growth in computational power over the past decade. The data processing pipeline in high energy physics is a very heterogeneous application, composed of dedicated algorithms, some of which parallelize well and others that do not. While porting algorithms to GPU programming frameworks like CUDA has its own challenges, other algorithms need to be rethought from the ground up to take advantage of the tremendous throughput of modern devices. We propose a review of the large-scale use of GPUs in high energy physics data processing, including event reconstruction, simulation, triggering and analysis. We wish to demonstrate the challenges in adopting this technology and discuss the way forward within the community. With this, we hope to provide a good overall picture of the current advancements and challenges in using GPUs in production.

Organizer(s): Javier Duarte (Fermi National Accelerator Laboratory), Peter Messmer (NVIDIA Inc.), and Dorothea Von Bruch (LPNHE)

Domain(s): Computer Science and Applied Mathematics, Physics

Adaptive Mesh Refinement in the Era of Platform Heterogeneity, Part I and II

Adaptive mesh refinement (AMR) is an important method that enables many mesh-based applications to run at effectively higher resolution within limited computing resources by allowing high resolution only where it is really needed. This advantage comes at a cost: greater complexity in the mesh-management machinery, and challenges with load distribution. The management of platform heterogeneity is challenging enough for applications; it becomes a bigger challenge still when AMR is involved. Options such as asynchronous communication and hierarchy management for parallelism and memory come into play. Different groups using AMR-based applications are bringing different approaches to this challenge. An interesting exercise is to compare these approaches to see whether they are truly conceptually different, or whether the differences lie only in the details. This two-session minisymposium will include presentations from developers of dominant AMR packages, or of scientific codes based on AMR, about their approaches to managing heterogeneity. The final slot will be an open discussion about the merits of the various approaches.

Organizer(s): Anshu Dubey (Argonne National Laboratory, University of Chicago), Michael Norman (UC San Diego, SDSC), and Martin Berzins (University of Utah)

Domain(s): Computer Science and Applied Mathematics, Physics, Solid Earth Dynamics

Advances in Computational Seismology and Earth Sciences, Part I, II and III

Recent advances in theory and numerical methods, in parallel with the availability of massive high-quality data sets and high-performance computing, provide unprecedented opportunities to improve our understanding of the Earth's interior and its mechanisms. The goal of this session is to bring computational and Earth scientists together to form a platform to discuss the current status, challenges and future directions in computational seismology and computational Earth sciences. Talks highlight advances through numerical (high-performance computing) simulations, their algorithms and their scientific outcomes. They help to identify opportunities and trends made possible by better computers and data, and to uncover challenges and problems arising from better data, computer-architecture changes and more complex software. Contributions include, but are not limited to, the areas of earthquake engineering, passive and active-source seismic imaging, links to geodynamical modelling, observational data and laboratory experiments in conjunction with computational approaches such as numerical solvers, large-scale workflows, big data, optimisation strategies and machine learning on HPC systems.

This session is dedicated to Dimitri Komatitsch.

Organizer(s): Alice-Agnes Gabriel (Ludwig Maximilian University of Munich), Ebru Bozdag (Colorado School of Mines), and Tobias Weinzierl (Durham University)

Domain(s): Solid Earth Dynamics

Advances in Interdisciplinarity between Ocean, Climate Simulation and Deep Learning

As the most important tools for understanding, forecasting and projecting ocean and climate processes, ocean and climate models have made great progress in the last decade. The mainstream of ocean and climate model development focuses on higher resolution, more accurate parameterizations of unresolved physical processes, and the inclusion of more biogeochemical processes. At the same time, exascale high-performance computing (HPC) has gradually matured. Using supercomputers efficiently, exploiting features such as multi-core processors, hyperthreading, large caches, high-bandwidth interconnects and high-speed I/O, has become a critical condition for model development. Meanwhile, the development of information technologies has led to a new era in computation, affecting almost all fields of science and engineering. Recently, with increasingly large amounts of simulation and observation data, machine learning, and especially deep learning, has shown its potential in ocean and climate simulation, for example in subgrid parameterization schemes, ENSO prediction and typhoon track forecasting. The convergence of simulation and deep learning brings both challenges and opportunities. This minisymposium encompasses interdisciplinary research on ocean and climate simulation/forecasting/prediction/projection and deep learning - from algorithms and new frameworks through to exciting results, outlooks and perspectives.

Organizer(s): Xingrong Chen (National Marine Environment Forecasting Center), Xiaomeng Huang (Tsinghua University), and Zhenya Song (First Institute of Oceanography)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Climate and Weather, Engineering

Application Scaling and Porting on an FPGA-based Supercomputer in the EuroEXA Project

The international race to develop the world’s first exascale supercomputer is the next frontier in High Performance Computing (HPC). But the creation of an exascale supercomputer requires substantial changes to current technological models, including in the areas of energy consumption, scalability, network topology, memory, storage, resilience and, consequently, the programming models and systems software – none of which can currently scale to these performance levels. The EuroEXA project (euroexa.eu) aims to provide the template for an upcoming exascale system by co-designing and implementing a petascale-level prototype with ground-breaking characteristics. To accomplish this, the project takes a holistic approach, innovating across both the technology and application/system-software pillars. EuroEXA proposes a balanced architecture for compute- and data-intensive applications that builds on cost-efficient, modular integration enabled by novel inter-die links, and embraces FPGA acceleration for computational, networking and storage operations. In this minisymposium, EuroEXA experts from the different application domains (climate/weather, physics/energy and life sciences/bioinformatics) will show how the EuroEXA hardware and software stack has helped them to port and scale their applications, and how it has helped to advance their science.

Organizer(s): Tom Vander Aa (IMEC), and Paul Carpenter (Barcelona Supercomputing Center)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Chemistry and Materials, Climate and Weather, Physics, Solid Earth Dynamics, Life Sciences, Engineering

Artificial Intelligence and Knowledge Representation in Chemical Sciences

Artificial Intelligence (AI), data and knowledge representation are the key ingredients of the current revolution in many areas of science and engineering. This renaissance is having important implications in the chemical sciences too, with the development of new methodologies and practical solutions to long-standing problems such as, for example, the design of entire synthetic pathways in organic chemistry. At the core of AI is data, which is needed for training the different models. Data and the representation of its knowledge can be used to speed up the development of customised solutions for daily industrial problems. In this minisymposium we wish to cover the relevant aspects of AI, data and knowledge representation for applications in chemistry, by bringing together world-leading experts in the field of knowledge extraction and AI in chemical research. The topics will relate to the key advancements in chemistry produced by AI, the digitalisation and availability of data across the chemical community, and how existing AI models and knowledge-representation technologies can be creatively exploited to bring innovation to chemical science. The target audience is computational chemists and materials scientists, cheminformaticians, and chemists who wish to grasp how AI will affect design and processes in their field.

Organizer(s): Teodoro Laino (IBM Research), Philippe Schwaller (IBM Research), and Theophile Gaudin (IBM Research)

Domain(s): Computer Science and Applied Mathematics, Chemistry and Materials

Basic Libraries for Advanced Simulations: BLAS Redux for Extreme Scale Computations, Part I and II

Many computational science and engineering (CSE) teams that rely on parallel computers for high-performance computations will face significant challenges in the coming years as computing architectures move completely to highly concurrent node architectures. Single-level distributed-memory computations using MPI become increasingly difficult to sustain as the only source of improved performance. CSE teams must develop new algorithms and software to exploit node architectures with many cores, vector units, highly threaded accelerators, and heterogeneous combinations of these devices. The disruption and cost of this transition can be daunting for CSE teams that rely on legacy scientific codes. The talks in this two-part minisymposium will describe the Extreme-scale Scientific Software Stack (E4S), developed in part by the US Exascale Computing Project (ECP); present a deep dive into application codes that simulate flow physics over the wide range of continuum and rarefied flow regimes; demonstrate the science made possible by the fundamental building blocks that undergird the application codes by providing improved parallel performance; and describe strategies for developing highly scalable scientific codes via reusable software components that also ensure software portability across the heterogeneous computing landscape. The use of mini-apps as building blocks for scientific codes will also be explored.

Organizer(s): Ramesh Balakrishnan (Argonne National Laboratory), Michael Heroux (Sandia National Laboratories), and Philipp Schlatter (KTH Royal Institute of Technology)

Domain(s): Computer Science and Applied Mathematics, Engineering

BigData4Science: Moving Research in Scientific Fields One Step Forward with Big Data Technologies

Advances in computer science have brought us to the gates of the exascale era. In order to master this massive computational power, both industry and academia are investing effort in developing and improving new software tools and techniques. Some of these, like big data and machine learning technologies, have been quickly adopted and are widely used by the community. Many scientific domains benefit from such software, which helps expand knowledge in their study areas. For example, big data technologies give scientists the opportunity to tackle complex scientific questions that would otherwise be impossible to address. Furthermore, the combination of big data and machine learning has opened a whole new field of scientific possibilities. This minisymposium explores how different scientific domains (life sciences, physics) apply big data technologies and deep learning to move their research one step forward. We focus on those cases where it would be impossible to address the scientific questions without using big data, machine learning or a combination of both. We also emphasize the need to foresee the technical requirements behind the experiments and to have the computational ecosystem in place well in advance to ensure the success of the scientific study.

Organizer(s): Judit Planas (EPFL)

Domain(s): Computer Science and Applied Mathematics, Physics, Life Sciences

Breaking the Wall in Computational Astrophysics: Current Bottlenecks and How to Address Them towards the Exascale Era

Computational astrophysics is one of the most computationally demanding fields and has benefited greatly from advances in HPC since the first supercomputers became available. Astrophysics is also one of the fields that will benefit most from the next era of advances in HPC, such as exascale computing capabilities. Nevertheless, current astrophysics applications still suffer from several bottlenecks that prevent them from scaling well enough to leverage the computational power of the next generation of supercomputers. These bottlenecks relate to deep time-domain hierarchies, fault tolerance, and/or load imbalance, among many others. The aim of this minisymposium is to discuss current strategies for overcoming scaling and parallelization issues in astrophysical applications, including the role of accelerators, which are central to the development of the most challenging and ambitious scientific applications in the exascale era.

Organizer(s): Ruben Cabezon (University of Basel), and Roger Käppeli (ETH Zurich)

Domain(s): Computer Science and Applied Mathematics, Physics

Bridging the Software Productivity Gap for Weather and Climate Models, Part I and II

The emergence of diverse and complex massively parallel heterogeneous hardware solutions has a large impact on the programming models traditionally used in existing large and complex weather and climate models, which may well be decades old and not easily adaptable to new, and possibly multiple, programming models. Porting existing large community codes to multiple architectures is a daunting task and may lead to multiple code variants that are more complex and more difficult to maintain, depending on hardware choices. In order to increase productivity and maintainability, while retaining a high degree of performance in this emerging landscape, a complete rethink of design choices is required, based on abstraction and separation of concerns. This approach implies large rewriting efforts for existing codes, with a considerable emphasis on low-level infrastructure developments and automatic code generation. As a result, numerous new technologies and approaches have emerged in the past years to provide new programming models and abstractions for concurrency and data. This minisymposium will provide an update on infrastructure developments, source-to-source translation tools and domain-specific languages (DSLs) as ways to address the challenge of bridging the software productivity gap for weather and climate models.

Organizer(s): Rupert Ford (Science and Technology Facilities Council), Willem Deconinck (ECMWF), and Carlos Osuna Escamilla (MeteoSwiss)

Domain(s): Climate and Weather

Bringing Scientific Applications Written in Fortran to the Exascale Era: How Software Engineering Can Help to Fill the Gap

Fortran is one of the most prominent languages for scientific applications in the High Performance Computing community. In some cases, development of these applications started more than 20 years ago, and they are still actively developed and used by large scientific communities. For some science domains, such as materials science and astrophysics, they represent the applications that consume the most computing resources in several computing centers. For this reason, it becomes crucial to extend the existing approaches to application programming, particularly as the exascale era approaches, when a disruption in application design will likely be necessary to achieve effective exascale performance levels. Although most of these codes use features of older Fortran standards (mainly F95), the language itself has been modernised in newer revisions of the standard (F03/F08, F18), some of which introduce new software-engineering capabilities. In this minisymposium we will discuss, with invited presentations by prominent experts in the field, how the combination of new Fortran standard features and software engineering techniques can help existing Fortran applications face the exascale challenges.

Organizer(s): Tiziano Müller (University of Zurich), Marcella Iannuzzi (University of Zurich), and Arjen Markus (Deltares Institute)

Domain(s): Computer Science and Applied Mathematics, Chemistry and Materials

Community Codes in Economics

In the last decade, many researchers in finance and economics have adopted computationally intensive methods to generate new insights. However, researchers from these disciplines usually do not have a solid background in software engineering and computational methods. As a result, they struggle to share code efficiently and have trouble taking advantage of recent advances in hardware and software. Only recently have initiatives formed that try to overcome these problems. Some pioneers have written open-source software packages that can serve as efficient starting points for new projects. Others have analyzed the particular challenges that arise when using modern hardware to achieve massive parallelism in economic or financial applications. The aim of this minisymposium is threefold: 1) authors of open-source packages for research in these emerging disciplines present and promote their projects; 2) computational experts elaborate on the benefits and challenges of using modern hardware to achieve massive parallelism in typical applications in finance and economics; 3) the minisymposium provides a platform for fruitful exchange between economists and computational experts, in the hope of sparking innovation and fostering lasting collaborations.

Organizer(s): Janos Gabler (University of Bonn), and Philipp Eisenhauer (University of Bonn)

Domain(s): Emerging Application Domains

Computational Advances in Macroeconomic Applications: GPUs, Algorithms and Heterogeneous Agent Models

Serguei Maliar's paper applies deep learning to macroeconomic models. The goal of the paper is to demonstrate that deep learning techniques can be used to analyze rather complex economic models in a simple and general manner. The authors show how to cast the typical dynamic economic model into a form suitable for deep learning analysis, and how to design a version of a deep learning algorithm that can construct a numerical solution to the model. Xavier Ragot will solve for optimal Ramsey policies in heterogeneous-agent models with aggregate shocks, providing a new, simple theory based on projection onto the space of idiosyncratic histories to obtain a finite-dimensional state-space representation. Eric Aldrich's paper discusses issues related to GPU computing for economic problems. It highlights new methodologies and resources that are available for solving and estimating economic models, and emphasizes situations where they are useful and others where they are impractical. Ralph Luetticke's paper describes a method for solving heterogeneous-agent models with aggregate risk and many idiosyncratic states formulated in discrete time. It extends the method proposed by Reiter (2009) and complements recent work by Ahn et al. (2017) on how to solve such models in continuous time.

Organizer(s): Florian Oswald (Sciences Po)

Domain(s): Emerging Application Domains

Computational Biomedicine

This workshop presents numerical models based on modern computational techniques for the simulation of various tissues and organs, in relation to biomedical applications. The purpose is to get a better description of still poorly understood physiological processes and/or to provide the clinical community with new tools for treatment planning and diagnosis.

Organizer(s): Bastien Chopard (University of Geneva), Nicolas Salamin (University of Lausanne), and Fabio Nobile (EPFL)

Domain(s): Computer Science and Applied Mathematics, Physics, Life Sciences

Computational Performance Evaluation for Hardware and Software Alternatives to Increase the HPC Efficiency of Earth System Models

In recent years, HPC has evolved from a technology crucial mainly to the academic research community into an acknowledged key component of numerical modelling, bringing with it several challenges that must be solved. These challenges cannot be met by mere extrapolation but require radical innovation in computing technologies and numerical implementations, and evaluating these alternatives, both before and after adoption, is mandatory. This minisymposium will provide an overview of the computational performance evaluation of some of the models used by the community, together with different approaches to increase the performance of these models using software and hardware alternatives. It will focus largely on the methodology used to evaluate the performance of our atmospheric and ocean models. Moreover, two alternatives will be presented and evaluated. The first is a methodology for achieving reduced-precision versions of the models used, including 1) the development and methodology required, 2) the computational improvement achieved, and 3) the effects, if any, on the quality and accuracy of the results. Additionally, a hardware alternative will be evaluated and presented, in which the computational profiling methodology is applied to cloud computing, comparing simulations of an atmospheric and an ocean model on a typical supercomputer and on the Amazon infrastructure.

Organizer(s): Mario Acosta (Barcelona Supercomputing Center), and Tim Whitcomb (Marine Meteorology Division, Naval Research Laboratory)

Domain(s): Climate and Weather

Cross-Platform Programming Language for High Energy Physics Applications

With a diversity of accelerators commercially available (FPGA, GPU, ASIC, …) and others to come, it seems evident that high energy physics computation frameworks should support heterogeneous architectures in a way that is transparent to the programming user. Experiment software has been developed over several decades, and it would be extremely labour-intensive to port it to multiple target architectures; this would not even be viable should the program need to be executed over heterogeneous resources. High-level programming models like OpenACC, OpenMP, High-Level Synthesis or TensorFlow abstract the algorithmic part of a program from its execution, making it possible to run seamlessly over multiple platforms. Memory management is becoming the energy-consumption bottleneck, and dedicated data-driven computation frameworks help keep the overhead low. We propose to review the various options for architecture-agnostic programming frameworks and to discuss the challenges of the intensive high energy physics data pipeline.

Organizer(s): Amir Farbin (University of Texas at Arlington), Jennifer Ngadiuba (CERN), and Jean-Roch Vlimant (California Institute of Technology)

Domain(s): Computer Science and Applied Mathematics, Physics

Cutting Edge Machine Learning for High Energy Physics Applications

With the deluge of data and the increased complexity of detectors on the horizon of the high-luminosity Large Hadron Collider, processing data will become even more challenging. Physicists will have to outsmart nature and come up with improved algorithms. Machine learning is very attractive in this respect, as algorithms can be learned directly from data. Interpretability, or the lack thereof, is a limiting factor, but progress is being made. Deep learning applications to high energy physics challenges have met great success in recent years in particle identification, event classification, signal extraction, object reconstruction, and anomaly detection. Even though it seems possible to learn physics solely from data, models still perform better when they are infused with domain knowledge. We propose several topical reviews of proofs of concept in applying deep learning to high energy physics problems. Beyond the proofs of concept, we propose to discuss challenges in implementing these methods within the experiments' data pipelines and computing infrastructure.

Organizer(s): Maurizio Pierini (CERN), David Rousseau (LAL), and Catherine Schuman (Oak Ridge National Laboratory)

Domain(s): Computer Science and Applied Mathematics, Physics

Developments of Climate and Weather Models on Modern Supercomputers

The growing performance of supercomputers provides great opportunities for improving the skill of climate and weather prediction. In the past decade, considerable achievements have been made in adapting existing numerical models, and developing new ones, to best benefit from new hardware and increasingly massive parallel computing environments. Much attention has been paid to designing new numerical algorithms for different architectures, searching for alternative coupling and parallel computing schemes, and developing new-generation multi-scale models for different components of the climate system. As state-of-the-art supercomputers become much more powerful and new architectures are designed to reduce energy consumption, how to improve climate and weather models for better performance on new platforms remains a challenge for all scientists dedicated to model development and supercomputing. In this minisymposium, four outstanding scientists in the field of model development and supercomputing related to climate and weather simulations will review recent achievements and discuss future research directions in this field.

Organizer(s): Xunqiang Yin (First Institute of Oceanography), Xi Chen (Princeton University), and Qiang Wang (Alfred Wegener Institute, Germany)

Domain(s): Climate and Weather

The Exabyte Data Challenge

Various data-intensive scientific domains must deal with exabytes of data before they reach the exaflop. Data management at these extreme scales is challenging and covers pre-processing, data production and data-analysis workflows, among others. While there are many research approaches and science databases that aim to manage data and have pushed their limits over time, practitioners still struggle to manage their data in the petabyte era, for instance to achieve high performance and to provide means of easily locating data upon request. With billions of files, the scalability of manual and fine-grained data management in HPC environments reaches its limits. Various domain-specific solutions have been developed that mitigate performance and management issues, enabling data management in the petabyte era. However, with new storage technologies and heterogeneous environments, the challenges increase, and so does the development effort for individual solutions. In this minisymposium, speakers from environmental science (Met Office and ECMWF), CERN, and the Square Kilometre Array will address this matter for different domains; each speaker will present the challenges faced in their scientific domain today, give an outlook for the future, and present state-of-the-art approaches the community follows to mitigate the data deluge.

Organizer(s): Julian Kunkel (University of Reading), Bryan Lawrence (NCAS), and Joachim Biercamp (DKRZ)

Domain(s): Computer Science and Applied Mathematics, Climate and Weather, Physics

Extreme CFD for Engineering Applications

Computational fluid dynamics is one of the main drivers of exascale computing, both due to its high relevance in today's world (from nanofluidics up to planetary flows) and due to the inherent multiscale properties of turbulence. The numerical treatment is notoriously difficult owing to the disparate scales in turbulence, the global Poisson solve for incompressible and weakly compressible flows, and the need to correctly resolve local features such as flow separation, and shocks in compressible flows. The recent trend in numerical methods is towards high-fidelity methods (for instance continuous and discontinuous Galerkin) which are well suited to modern computers; however, relevant issues such as scaling, accelerators and heterogeneous systems, high-order meshing and error control are still far from solved when it comes to the largest-scale simulations, e.g. in automotive and aeronautical applications. In this minisymposium, we will gather experts from various institutions to discuss current and future issues of extreme-scale CFD in engineering applications, with a special focus on accurate CFD methods. In particular, we will introduce the recently started European Centre of Excellence for Engineering Applications "Excellerat", with talks focusing on aeronautics, acoustics, automotive applications, and postprocessing suitable for large-scale simulation data.

Organizer(s): Philipp Schlatter (KTH Royal Institute of Technology), Niclas Jansson (KTH Royal Institute of Technology), and Ramesh Balakrishnan (Argonne National Laboratory)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Physics, Engineering

High-Performance Computing for Earthquake Simulation, Geohazard Modeling, and Seismic Imaging, Part I and II

Earth science relies on numerical methods now more than ever before. Dealing with ever-growing massive observation data, simulating earthquakes and geodynamic problems, imaging the Earth's interior at ever finer scales, solving geological engineering issues with complicated settings: all of these studies benefit from the development of high-performance computing (HPC). To embrace the big-data era and harness the power of HPC, the goal of this minisymposium is to discuss the scientific achievements and the theoretical/technical improvements in numerical methods made possible by HPC, as well as emerging new frontiers such as AI in computational geosciences. The minisymposium also provides a chance for computer scientists and Earth scientists to work together to address the future challenges of the Earth sciences.

Organizer(s): Youyi Ruan (Nanjing University)

Domain(s): Solid Earth Dynamics

High-Resolution Weather and Climate Simulations: High-Performance Computing and Science Case

For decades, weather and climate models have significantly improved in predictive skill. This was possible, among other things, due to a steadily increasing resolution that became accessible through increasing supercomputing capacity. Increases in resolution from 500 km to 5 km have been enabled by a million-fold increase in computational power. Global kilometre-resolving simulations will eventually make it possible to represent individual clouds and deep convection explicitly within simulations, promising an additional jump in predictive skill. However, it is still not clear if and when these simulations will become reality, despite the upcoming exascale era. In this minisymposium, recent advances towards global kilometre-resolving weather and climate simulations are discussed. This will include an assessment of I/O and in-situ approaches to cope with the related big-data challenges, algorithmic and performance enhancements for the models, and discussions of the scientific value of these high-resolution simulations. Amongst others, findings from the HPC-driven European centre of excellence ESiWACE on exascale computing for weather and climate models, as well as from the international DYAMOND intercomparison initiative, will be presented to complement the HPC and science picture and to stimulate discussions about the grand challenge of enabling operational weather and climate simulation at a resolution of O(1 km).

Organizer(s): Philipp Neumann (German Climate Computing Centre), and Peter Düben (European Centre for Medium-Range Weather Forecasts)

Domain(s): Climate and Weather

HPC Challenges in Kinetic Simulations of Plasmas, Part I: Eulerian Approach; Part II: Particle-Based Approach; Part III: Semi-Lagrangian and other approaches

The most comprehensive description of plasma phenomena involves kinetic theory, with applications in different fields, such as magnetic or inertial fusion, industrial plasmas and astrophysics. Solving kinetic problems requires overcoming multiple challenges: the high dimension of phase space (6D), time and spatial scales spanning several orders of magnitude, various non-linearities, and the need to solve the combined system of Vlasov/Fokker-Planck equations consistently with the Maxwell equations. Reduced models have been devised, tailored to particular applications: for example, gyrokinetic theory for turbulent transport, or hybrid fluid/kinetic models for fast-particle effects on MHD modes. Nevertheless, the resulting equations remain numerically very challenging and, with the aim of carrying out ever more realistic simulations, require running on the most powerful computers available at any time. To make efficient use of these HPC resources, and to maintain readiness for future developments, efforts are being pursued at multiple levels: refactoring and porting legacy codes to new and emerging architectures; improving code design to facilitate future portability and the implementation of physics models of increased complexity; and developing improved algorithms as well as innovative numerical approaches that run efficiently on future HPC platforms.

Organizer(s): Laurent Villard (EPFL), Stephan Brunner (EPFL), and Claudio Gheller (EPFL)

Domain(s): Physics

HPUQ: High Performance Uncertainty Quantification - Portable Frameworks for General Applications

Uncertainty quantification (UQ) has long been central to the calibration and validation of scientific models. The steady increase in computational power is gradually enabling UQ even for very computationally expensive numerical simulations. However, the heterogeneity and nonlinearity of the models, the need for hardware- and programming-language-agnostic model interfaces, and the growing amount of available experimental data of diverse quality pose a number of challenges to the development of UQ frameworks for HPC. In addition, plain embarrassingly parallel Bayesian inference algorithms (for instance, independent Markov chains) are being outcompeted by more efficient and adaptive, but also more communication-intensive, methods such as EMCEE, TMCMC, and SMC (optionally coupled with complex nonlinear particle filtering procedures) or Approximate Bayesian Computation. Driven by recent developments in these fields, we aim with this minisymposium to gather the developers of state-of-the-art, modular, scalable, and hardware-independent UQ frameworks together with application domain scientists. Our explicit goals are to expose researchers from different fields to the available parallel tools for UQ, and to discuss the challenges relevant to performing UQ on complex models on modern HPC hardware.

Organizer(s): Jonas Šukys (Eawag), and Marco Bacci (Eawag)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Engineering

Identifying Relevant Communities in Immense Networks: Clustering Algorithms that Leverage High-Performance Computing

Community detection aims to identify clusters of closely inter-connected nodes within a network – a problem that is encountered in numerous diverse domains, from marketing and forecasting to virtually every scientific field. Many of these applications model their data as networks by representing objects of interest as nodes and pairwise relationships between these objects as edges between the nodes. A pressing issue for network clustering is the inability to scale to massive datasets while ensuring reliable results. Another, subtler but highly impactful, issue is the lack of a consistent fundamental definition of 'community' across divergent domains. Expectations of sphericity or Euclidean space, and the neglect of singleton nodes that do not belong in any cluster, are common pitfalls compromising the accuracy of results. This minisymposium will explore recent advances in a variety of algorithms that exploit high-performance computing while scrutinizing their underlying assumptions. A key theme will be to dispel the 'one size fits all' tendency by contrasting algorithms, their objectives, and their computational limits.

Organizer(s): Sharlee Climer (University of Missouri - St Louis), and Daniel Jacobson (Oak Ridge National Laboratory)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Climate and Weather, Life Sciences

Interoperability of Abstractions, High Level Languages and Intermediate Representations for High Productivity of Weather and Climate Models

Numerical weather prediction and climate models are complex scientific applications that need to run on large parallel computer systems. The rapid change of computing architectures, and the increasing diversity of architectures and programming models required to run these applications, present a significant challenge for the modelling community: retaining single source codes that run efficiently on multiple architectures, now and in the future. To address this performance portability problem, numerous solutions have emerged in recent years, i.e. source-to-source translators, domain-specific languages (DSLs) and libraries that abstract the details of efficiently implementing the physical equations. However, each of these solutions applies to a particular domain and supports only certain types of horizontal grids, numerical methods or computational patterns. Reuse of abstractions and tools, such as optimizers, across the weather and climate domain will be key to enabling the sustainability and maintainability of the ecosystem of tools and libraries for weather and climate prediction applications. The standardization of interfaces will allow higher-level abstractions to be built, which will, in turn, increase scientific productivity by improving the ability to develop models. We will present and discuss recent efforts that have begun on the interoperability and standardization of these abstractions.

Organizer(s): Carlos Osuna Escamilla (MeteoSwiss), Willem Deconinck (ECMWF), and Rupert Ford (Science and Technology Facilities Council)

Domain(s): Climate and Weather

Large Scale Simulation in Geodynamics

Computational geophysics has reaped enormous benefits from advances in high performance computing; large forward and inverse models of myriad processes have allowed continually expanding insight into the formation, evolution, and present-day structure of the Earth and other planets. Multiscale phenomena, enormous spatial and temporal domains, and a growing set of relevant physical processes have created new challenges in scalability: scalable algorithms are required, data-processing and I/O workloads grow, and software becomes more complex. Performance-driven co-design has become increasingly important in addressing these scalability challenges - domain scientists, applied mathematicians, software developers, and HPC systems architects have a growing incentive to work more closely together. In this minisymposium, we discuss recent advances in the modelling of large-scale geodynamical processes in the Earth and other terrestrial planets, including mantle and lithospheric dynamics. In particular, we focus on simulations and software taking advantage of scalable algorithms on modern HPC systems.

Organizer(s): Paul Tackley (ETH Zurich), and Patrick Sanan (ETH Zurich)

Domain(s): Computer Science and Applied Mathematics, Physics, Solid Earth Dynamics

Machine Learning Applied to Scientific Modeling

Numerical modeling of natural phenomena is an important aspect of modern research. Many tools have been developed that reproduce natural processes to offer in-silico predictions for a large variety of systems, including, for example, bio-physical phenomena, hydro- and aero-mechanical devices, and large-scale weather forecasting. A recent trend is to replace ab-initio modeling of physical systems with heuristics-based predictions, saving significant computational cost. In this minisymposium, experts in the field will provide perspectives on the application of machine learning approaches to this class of problems.

Organizer(s): Bastien Chopard (University of Geneva), Nicolas Salamin (University of Lausanne), and Jonas Lätt (University of Geneva)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Chemistry and Materials, Climate and Weather, Physics, Solid Earth Dynamics, Life Sciences, Engineering

Machine Learning for HPC

Research and industry invest considerable effort into using high performance computing to advance machine learning. In this minisymposium, we propose to explore the exact reverse question: how can machine learning advance high performance computing? As supercomputing systems become more powerful and more complex, the amount of monitoring information they produce grows substantially, making it more challenging to understand and process these logs accurately. Moreover, the expanding landscape of complex and heterogeneous hardware, programming paradigms and tools means that more expertise is needed to develop optimal programs for supercomputing systems and make them portable. At the same time, advances in machine learning and natural language processing, as well as the large amounts of code made available by open-source repositories, present opportunities to tackle with machine learning problems that were previously unsolved. Four talks will present methods that use machine learning in current HPC systems or applications for performance optimization, automatic code analysis, and system monitoring. By showcasing these research topics, we intend to raise awareness in the HPC community of the solutions machine learning offers to rising HPC problems, as well as to create a platform for exchange among actors at the interface between HPC and ML.

Organizer(s): Alice Shoshana Jakobovits (ETH Zurich / CSCS)

Domain(s): Computer Science and Applied Mathematics

Machine Learning in Weather and Climate

Weather and climate science offers an overwhelming amount of data, both in terms of observations and model output. This data is used to understand parts of the non-linear behaviour of the Earth System (1) by extracting additional knowledge from the data or (2) by improving the quality of the models that are used for weather and climate predictions. An example of the former is the improved prediction of large-scale phenomena such as El Niño. An example of the latter is the improvement of a physics parameterisation scheme of atmosphere and ocean models through detailed analysis of the errors in a large number of datasets. One way to realise these opportunities is to use new tools from machine learning. This minisymposium showcases examples of the current, practical usage of machine learning in the weather and climate domain. Applications to use cases (1) and (2) above will be discussed, as well as how machine learning can assist in reducing the computational cost of models.

Organizer(s): Peter Dueben (ECMWF), and Samantha Adams (Met Office)

Domain(s): Climate and Weather

Mapping Parallel Scientific Applications onto Complex Architectures Portably and Efficiently

Many scientific applications are expected to provide breakthroughs in science and engineering with exascale supercomputers. To this end, large parallel codes need to be mapped efficiently to complex architectures involving heterogeneous compute and memory resources, including multiple levels of memory, e.g., high-bandwidth and high-capacity memories. The process of mapping processes, threads, GPU kernels, etc. efficiently to a supercomputer is complex and machine dependent. In fact, many performance workshops and tutorials in computer and computational science dedicate a significant portion of their time to this problem alone, even before delving into their innovative approaches. In this minisymposium, we examine the challenges posed by this problem, together with ongoing solutions by world-class supercomputer centers and private industry. There are two challenges. First, the efficient mapping of hybrid compute abstractions (e.g., MPI+X) to the underlying hardware in a manner that is portable across architectures. Second, the ways of expressing these mappings, and the features available, differ significantly from one runtime system to another. We will discuss promising mapping algorithms as well as mechanisms and interfaces to adequately host such algorithms portably. This minisymposium aims to foster collaborations in this area with feedback from the scientific community.

Organizer(s): Edgar Leon (Lawrence Livermore National Laboratory)

Domain(s): Computer Science and Applied Mathematics

Modeling Cloud Physics: Preparing for Exascale

Clouds play an important role in the weather and climate system. High-quality observations and experiments have advanced our understanding of cloud processes. Studies have shown that detailed parameterizations of processes such as aerosol-cloud-precipitation interactions, cumulus convection and cloud radiative forcing are essential for accurate weather and climate predictions. Meanwhile, the increasing heterogeneity of computational architectures and diversity of programming models pose a pivotal challenge for high performance scientific computing in the exascale era. In this minisymposium we will describe various research efforts to incorporate cloud-resolving capability into major weather and climate models using diverse programming models that effectively target pre-exascale and exascale supercomputers. Under the Exascale Computing Project (ECP), the first effort focuses on integrating a cloud-resolving convective parameterization (superparameterization) into the Energy Exascale Earth System Model (E3SM) using OpenACC to target GPUs. Second, the Simple Cloud Resolving E3SM Atmosphere Model (SCREAM) effort aims to develop a new global cloud-resolving model written in templated C++ and Kokkos for performance portability. The third speaker will discuss the acceleration of cloud physics and atmospheric models using GPUs and FPGAs in the context of Met Office Unified Model (UM) components. Finally, a panel session is intended to discuss application development experiences and collaboration opportunities.

Organizer(s): Sarat Sreepathi (Oak Ridge National Laboratory), Katherine Evans (Oak Ridge National Laboratory), and Wei Zhang (Oak Ridge National Laboratory)

Domain(s): Computer Science and Applied Mathematics, Climate and Weather, Physics

Multidimensional Stellar Evolution: Bridging the Modelling and Computational Challenges

The evolution of stars is one of the keys to understanding the nucleosynthesis of our Universe. Given the extremely long time scales (mega- to giga-years) and the large spatial scales, studies of stellar evolution have traditionally been carried out in one spatial dimension. However, the fundamental process of convection is an inherently multidimensional phenomenon. This has led researchers to aim for multidimensional simulations of certain phases of stellar evolution where convective and turbulent processes dominate the dynamics. The availability of highly efficient computing hardware and new algorithms, such as high-order schemes, well-balanced discretizations and implicit time integration methods, has made this aim reachable in recent years. The understanding of convective mixing processes gathered from these simulations is then fed back into lower-dimensional simulations, thereby increasing their physical fidelity. This hierarchical combination of simulations of varying dimensionality leads to highly efficient algorithms. This minisymposium brings together domain experts from astrophysics and HPC to discuss the current state of the art and future directions for the field. In particular, the presentations will focus on the multidisciplinary challenges for astrophysics and numerical modelling, and how these can be tackled on current and future heterogeneous HPC platforms.

Organizer(s): Roger Käppeli (ETH Zurich), and Rolf Walder (ENS Lyon)

Domain(s): Computer Science and Applied Mathematics, Physics

Numerical Methods and HPC Challenges in Magneto Hydro Dynamics (MHD) Modelling in Plasma Physics

The HPC challenges and numerical methods in different domains of plasma physics, in particular in magnetic fusion and astrophysics, have many common features, such as complex 3D geometry and a wide range of spatial and time scales to be covered, especially when MHD instabilities are involved. The first presentation will address challenges in MHD modelling of fusion devices, in particular ITER, using the example of the non-linear MHD code JOREK in complex realistic geometry. The fluid-kinetic formulation and the numerical and parallelisation methods of the fully implicit time-evolution scheme will be discussed, along with some illustrative examples of MHD instabilities and their control in ITER. The second presentation will discuss adaptive mesh refinement (AMR) methods for hyperbolic and elliptic PDEs in MHD simulations and the design of an MPI-parallel geometric multigrid library. The third presentation will cover the efficient application of a multidimensional Riemann solver on structured meshes for relativistic MHD astrophysical applications.

Organizer(s): Marina Becoulet (CEA/IRFM)

Domain(s): Computer Science and Applied Mathematics, Physics

On the Use of Exotic Computation Architectures in High Energy Physics Applications

With the expansion of von Neumann-architecture processing units reaching its limits, several new models of computation are gaining attention from the high energy physics community. These exotic architectures are considered potential platforms for scientific computation. Even though they are not foreseen to be used in the near future, the community needs to be aware of their potential and prepare for when such new accelerators might become commercially viable and available. These new architectures require new programming paradigms and constrain the types of algorithms they can run. We propose to discuss the use of non-standard computing architectures as accelerators for parts of high energy physics applications. This includes, but is not limited to, applications running on field-programmable gate arrays, neuromorphic architectures, and quantum chips. With this minisymposium, we hope to create a picture of the state of the art, raise awareness of these architectures and guide future developments.

Organizer(s): Tobias Golling (University of Geneva), Sofia Vallecorsa (CERN), and Jean-Roch Vlimant (California Institute of Technology)

Domain(s): Computer Science and Applied Mathematics, Physics

The Pangeo Platform for Interactive Data Analysis

Pangeo is an international community committed to fostering collaboration around the open-source scientific Python ecosystem for ocean/atmosphere/land/climate science, supporting development with domain-specific geoscience packages, and improving the scalability of these tools to handle petabyte-scale datasets on HPC and cloud platforms. The Pangeo ecosystem of Python packages includes Dask, Xarray, Iris, and Jupyter, and developing interoperability around these packages has led to significant advancements in the last year in Xarray- and Iris-compatible data cataloging and discovery software, Jupyter Notebook dashboarding, cloud-friendly data formats for analysis, and hybrid HPC-cloud technologies and environments, to name a few. In this session, we cover some of the recent advancements made by the Pangeo community.

Organizer(s): Allison Baker (The National Center for Atmospheric Research), Kevin Paul (The National Center for Atmospheric Research), and Niall Robinson (Met Office)

Domain(s): Climate and Weather

Parallel High-Dimensional Approximation: Uncertainty Quantification and Machine Learning, Part I and II

The aim of this minisymposium is to discuss the latest research at the intersection of parallel computing and high-dimensional approximation. High-dimensional approximation drives, e.g., uncertainty quantification and machine learning, as well as big data and simulations of complex physics models. It is well known that the approximation of functions of growing dimension suffers from the curse of dimensionality. Over the last decades, many powerful mathematical tools have been developed to weaken or overcome this. These include, but are not limited to, sparse approximation, (quasi-)Monte Carlo, multi-level (multigrid) / multi-fidelity techniques, (sparse) tensor product and low-rank approximations, hierarchical matrices, compressed sensing and meshfree methods. There is growing interest in solving high-dimensional approximation problems at large scale. While many of the discussed methods have good or even optimal approximation properties and complexities for larger dimensions, some have been developed primarily for sequential execution. However, to solve large-scale approximation problems, it becomes necessary to develop fast, scalable and parallel numerical methods. This minisymposium includes contributions in high-dimensional approximation ranging from initial studies of parallel techniques up to full-scale parallel methods that run on large HPC clusters. We focus on both algorithm-oriented and application-centered research.

Organizer(s): Michael Multerer (Università della Svizzera italiana), Olaf Schenk (Università della Svizzera italiana), and Peter Zaspel (University of Basel)

Domain(s): Computer Science and Applied Mathematics, Chemistry and Materials, Physics, Engineering

Programming Models to Enable Scalable Resilience for Extreme Scale Computing Systems

With the growing scale and complexity of computational systems, HPC applications are increasingly susceptible to a wide variety of hardware and software faults, making failure mitigation at the runtime and application layers more essential. Resilience has become a first-class citizen in enabling productive use of extreme-scale HPC systems. Today, the major application-level resilience scheme is coordinated checkpoint and restart (C/R), which involves global coordination of processes and threads. Despite the recent progress in I/O technology and the emergence of efficient C/R techniques, this global recovery model entails inherent scalability issues, given that the majority of failures happen at a single process or node (local failures). Recently, several alternative approaches have been proposed to enable localized responses to local failures, but their feasibility is yet to be studied. In this minisymposium, we will discuss the recent progress of runtime and library approaches for extreme-scale resilience, including state-of-the-art C/R, new fault-tolerance proposals for MPI, and a localized recovery model facilitated by emerging asynchronous many-task parallel programming models.

Organizer(s): Keita Teranishi (Sandia National Laboratories), Aurelien Bouteiller (University of Tennessee), and Kolla Hemanth (Sandia National Laboratories)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Chemistry and Materials, Climate and Weather, Physics, Solid Earth Dynamics, Life Sciences, Engineering

Python Frameworks for HPC

Python has become one of the predominant programming languages among scientists due to its simplicity and a very solid ecosystem of scientific libraries for data handling, computing and visualization. As Python keeps gaining popularity, more and more efforts aim to move it beyond prototyping and workflow management to also running production applications on large HPC systems. This minisymposium gathers people working on domain-specific Python frameworks for HPC applications and systems, so as to start a conversation on the possibilities of Python as an effective language for high-quality HPC applications. We wish this minisymposium to be a starting point for reasoning about code maintainability and scalability (code size/productivity/debuggability/profiling), deployment models on HPC systems, optimization opportunities and programming models, but also for highlighting the opportunities that the Python ecosystem could offer for the next generation of parallel systems and applications, such as interactive scientific applications. The talks will present different ongoing efforts by several authors to scale actual scientific frameworks to large HPC systems using different techniques, such as the handling of multiple processes and threads, and code generation. This will allow us to assess the advantages and disadvantages of the different solutions and gain a better understanding of the challenges ahead.

Organizer(s): Mauro Bianco (ETH Zurich / CSCS), and Enrique González Paredes (ETH Zurich / CSCS)

Domain(s): Computer Science and Applied Mathematics

Resilient Solvers in Exascale Atmospheric Models

Numerical weather prediction and climate studies rely on efficient algorithms to deliver accurate simulations under tight operational constraints, as horizontal resolutions approaching a few kilometres in global models test the performance of legacy codes on massively parallel computing architectures. In this context, time-explicit discretizations are impractical, as their time step size is constrained by meteorologically insignificant fast waves. Therefore, implicit or semi-implicit schemes are employed, requiring the solution of large systems of equations, which takes up a sizeable portion of computing time. The minisymposium will bring together numerical modellers and high-performance computing experts to explore the resiliency and scalability of current strategies for linear solvers in atmospheric models. The session will focus on fault tolerance, necessary in a context where hundreds of thousands of cores are employed for round-the-clock simulations, and on efficiency. Contributions on reduced precision in existing implementations will be surveyed, as well as efforts to supply solvers with suitable preconditioners that can accelerate convergence. Thus, the talks will give perspectives on modelling approaches that can optimize computational resources and provide fault-robust simulations without sacrificing the accuracy of forecasts produced by existing models.

Organizer(s): Tommaso Benacchio (Politecnico di Milano), and Luca Bonaventura (Politecnico di Milano)

Domain(s): Computer Science and Applied Mathematics, Climate and Weather

Scalable Cross-Facility Workflows: Addressing Impedance Mismatches between Facilities

Scientific campaigns are increasingly tightening the feedback and validation loop between simulation and observational data. The linking of experimental and observational data from empirically driven facilities with computational facilities is giving rise to cross-facility workflows. As we scale up such pipelines for scientific discovery, these cross-facility workflows require each participating facility to overcome hurdles to working in an end-to-end manner. These hurdles impede the rapid stand-up of workflows that straddle facilities. This minisymposium brings together the perspectives of disparate collaborating facilities, explores each facility’s needs, and aims to offer a roadmap for aligning cross-facility goals and overcoming impedance mismatches within collaborations.

Organizer(s): Arjun Shankar (Oak Ridge National Laboratory), Sadaf Alam (ETH Zurich / CSCS), and Jack Wells (Oak Ridge National Laboratory)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Chemistry and Materials

Scalable Distributed Deep Learning

Deep learning (DL) is increasingly employed in various scientific domains such as cosmology, medical analysis and diagnosis, geophysics, and biology. In addition, ready-to-use DL packages facilitate the use of this powerful technique by the broader scientific community. However, challenges arise as dataset size and model complexity increase: while traditional numerical methods nowadays commonly benefit from HPC environments, this is not yet the case for scalable DL frameworks. This minisymposium aims to share experiences of recent advances in DL applications and algorithms at scale and to provide a platform for knowledge exchange. The minisymposium gives an overview of distributed DL and parallelization strategies, addresses scalable communication-efficient algorithms for machine learning, and finally presents two distributed DL applications based on convolutional neural networks, in cosmology research and in the pharmaceutical industry, running on a supercomputer and in a cloud computing environment, respectively.

Organizer(s): Jarunan Panyasantisuk (ETH Zurich), and Thomas Wüst (ETH Zurich)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Physics, Life Sciences

Scientists’ choice of 'X' in MPI+X: Life Sciences, Plasma Physics, Climate & Weather Forecasting, and other HPC Benchmark Suites

Lately, nodes are becoming fatter: they have an increasing amount of computing capability and an increasing amount of memory per node. We are also observing that hardware systems are taking diverse routes; some demonstrate intense heterogeneity, such as Summit, and some less so, such as the UK's Isambard and RIKEN's Post-K. In either case, the rich feature sets of these powerful systems need a powerful programming environment. This minisymposium will feature four speakers who will focus on the 'X' in MPI+X, while exploring different types of applications from various scientific domains, including life sciences and climate and weather forecasting, as well as a variety of popular HPC benchmark suites, with the goal of inspiring ideas for programming the rich nodes of upcoming exascale architectures.

Organizer(s): Sunita Chandrasekaran (University of Delaware)

Domain(s): Emerging Application Domains, Climate and Weather, Life Sciences

Towards Sustainable Scientific Software through Better Engineering, Development, Documentation, Publication and Curation

Scientific software tends to outlive the timespan of Ph.D. theses and project grants. Nevertheless, this essential component of the scientific knowledge-discovery process has often been neglected because scientific publications are the primary measure of scientific advancement. Fortunately, this is currently changing, but domain scientists in particular find themselves burdened with many computer- and computational-science tasks that are foreign to their area of expertise. This minisymposium presents a set of best practices, tools and web portals that ease this process and enable more productive programming for all participating parties. In this way, a long-term sustainable software infrastructure that benefits from all modern techniques - open-source licensing, public review processes, continuous integration, documentation, community building - can be achieved without tremendous additional effort.

Organizer(s): Guido Juckeland (Helmholtz-Zentrum Dresden-Rossendorf), and David Bernholdt (Oak Ridge National Laboratory)

Domain(s): Computer Science and Applied Mathematics, Engineering

The Use of Explainable-AI and Network Models to Analyze Complex Biological Systems, Part I and II

The cost of generating biological data is dropping exponentially, a decrease that has far outstripped predictions based on Moore's Law. This has ushered in a new era of systems biology in which there are unprecedented opportunities to gain insights into complex biological systems. The dominant paradigm of high-throughput systems biology is the use of new technologies to generate massive amounts of data that can then be analyzed computationally for new insights and hypothesis generation. Integrated biological models need to capture the higher-order complexity of the interactions among cellular components. Solving such complex combinatorial problems will give us unprecedented levels of understanding of biological systems. However, understanding higher-order sets of relationships among biological objects leads to a combinatorial explosion in the search space of biological data. These exponentially increasing volumes of data, combined with the desire to model ever more sophisticated sets of relationships within a cell and across an organism (or, in some cases, even ecosystems), pose enormous computational challenges. A full model of all of the higher-order interactions within a biological system and with its environment is one of the ultimate grand challenges of systems biology and has led to the need for new HPC-driven, explainable-AI-based algorithms.

Organizer(s): Daniel Jacobson (Oak Ridge National Laboratory), Kjiersten Fagnan (Lawrence Berkeley National Laboratory), and Ben Brown (Lawrence Berkeley National Laboratory)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Life Sciences

Using OpenACC for Fluid Dynamics and Atmospheric Prediction Studies: Stories from Application Developers

This minisymposium will focus on application developers in climate and weather modeling and computational fluid dynamics who will share their stories of migrating thousands to millions of lines of legacy code, written in C/C++ and Fortran, to modern heterogeneous platforms using a directive-based parallel programming model, OpenACC - a model that is heavily driven by application developers. The minisymposium is built along the themes of PASC, which offers an excellent forum for an exchange of competences in scientific computing and computational science. Speakers will share their success stories and, more importantly, the challenges they encountered along the way, offering PASC attendees productive insights into the porting process of real-world legacy codes.

Organizer(s): Anne Kusters (Forschungszentrum Jülich), and Sunita Chandrasekaran (University of Delaware)

Domain(s): Computer Science and Applied Mathematics, Emerging Application Domains, Climate and Weather, Physics