European Urgent Computing Workshop - EuroHPC Summit Week 2021
24 March 2021

ChEESE presents a workshop titled "European Urgent Computing Workshop" at the EuroHPC Summit Week 2021 on 24 March 2021.

Description:

Several high-impact phenomena require on-demand execution of HPC applications under strong time constraints. Examples include, among others, impacts from geohazards (e.g. propagation of tsunami waves, earthquakes, volcanic eruptions), atmospheric or ocean toxic dispersal (e.g. accidental nuclear release, pathogen emission), or extreme weather events (e.g. large-scale flooding). Prompt reaction to these scenarios requires tier-0 computing infrastructures, complicated data workflows, and engagement with stakeholders formally involved in emergency management (e.g. the European Emergency Response Coordination Centre, ERCC) through shared protocols and policies. Interest in setting up Urgent Computing (UC) services in Europe is growing through the ChEESE CoE, PRACE, and recommendations of the EuroHPC INFRAG Access Policy Group.

Following the workshops on Urgent HPC held during SC19 and SC20, we would like to organize a similar event with European HPC stakeholders, motivated by the need for these services in Europe. This half-day workshop will feature around five talks of 25 minutes each, including contributions from other European HPC-related initiatives such as the ESiWACE CoE and the VESTEC and LEXIS projects. The event will finish with an interactive panel discussion on the need for HPC in urgent decision making.

Time         Session
12:00-12:10  Welcome - Arnau Folch (Barcelona Supercomputing Center)
12:10-12:35  Urgent Computing Applications for Radiological Dispersion in the Atmosphere - Antonio Cervone (Italian National Agency for New Technologies, Energy and Sustainable Economic Development)
12:35-12:55  HPC for Urgent Tsunami Computation - Steven Gibbons (Norwegian Geotechnical Institute)
12:55-13:20  Urgent Computing for Seismic Hazard Assessment: Progress and Challenges - Marta Pienkowska (ETH Zurich)
13:20-13:45  Break
13:45-14:10  Frontiers and Challenges in the Operational Forecast of Volcanic Ash - Sara Barsotti (Icelandic Met Office, use case) and Arnau Folch (Barcelona Supercomputing Center, HPC)
14:10-14:35  Extreme-Scale Computing for the Advanced Prediction of Weather Extremes - Tiago Quintino (European Centre for Medium-Range Weather Forecasts)
14:35-15:00  Panel discussion and Q&A - Josep de la Puente (Barcelona Supercomputing Center, ChEESE); Thierry Goubier (Laboratory for Integration of Systems and Technology, LEXIS); Andreas Gerndt (German Aerospace Center, VESTEC); Tiago Quintino (European Centre for Medium-Range Weather Forecasts, ESiWACE)

Urgent Computing Applications for Radiological Dispersion in the Atmosphere by Antonio Cervone (ENEA)

The dispersion of radioactive pollutants in the atmosphere, whether caused by an accident in a nuclear-related installation or by a malicious dispersion device, can pose serious hazards to the population. Emergency centers around Europe must be equipped to deal with such threats with simulation tools that can model the dispersion of such pollutants in very short times (faster than real time) in order to support decision-makers. The stringent time requirements have historically led to the use of simplified and over-conservative models, such as the Gaussian plume model, which, however, poorly represents the real pollutant distribution in complex terrain and large urbanized areas.
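To illustrate why the Gaussian plume model is considered a simplified baseline, the sketch below implements its standard steady-state concentration formula (crosswind Gaussian spread plus a ground-reflection term). This is a generic textbook formulation, not code from the talk, and all parameter values are hypothetical:

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration with ground reflection.

    q: emission rate (g/s), u: mean wind speed (m/s),
    y, z: crosswind and vertical receptor coordinates (m),
    h: effective release height (m),
    sigma_y, sigma_z: dispersion coefficients (m) at the
    receptor's downwind distance.
    Returns concentration in g/m^3.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # The image-source term models reflection at the ground (z = 0).
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical example: ground-level concentration on the plume axis
# for a 10 g/s release at 50 m effective height in a 5 m/s wind.
c = gaussian_plume(q=10.0, u=5.0, y=0.0, z=0.0,
                   h=50.0, sigma_y=8.0, sigma_z=5.0)
```

The formula's symmetry in y and its single-point terrain assumption are exactly the simplifications that break down in complex terrain and urban areas, motivating the Lagrangian models discussed next.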
More accurate models, such as Lagrangian micro-dispersion models, can deal with the geometric complexity and produce far more accurate results, but they have not yet been adopted due to their large computational requirements. In this framework, HPC-optimized simulation tools should be developed in order to guarantee the timely delivery of accurate results and replace the old solutions still in use. The development of atmospheric dispersion tools in the HPC environment should revolve around the creation of integrated platforms that share objectives with other Urgent HPC fields, such as integration with real-time field measurements, automatic acquisition of sensor data, cybersecurity, and targeted visualization tools. Finally, source identification and localization can exploit HPDA through AI-trained surrogate inverse models that leverage extensive datasets of synthetic direct simulations.

HPC for Urgent Tsunami Computation by Steven Gibbons (NGI)

Tsunamis can strike coastal populations within minutes of being generated, leaving very little time for warning. In this context, Faster Than Real Time (FTRT) numerical simulations are needed to forecast and warn of tsunamis effectively. Such urgent computations have until recently been infeasible for tsunami early warning. The implementation of efficient numerical tsunami codes on Graphics Processing Units (GPUs) now allows much faster simulations and has opened the door to FTRT tsunami computation. While a tsunami simulation from a single source can now run many times faster than real time, realistic tsunami forecasts require vast numbers of concurrent FTRT simulations, given the significant source uncertainty immediately after a large earthquake. This application is called Probabilistic Tsunami Forecasting (PTF). FTRT and PTF are so-called pilot demonstrators in the European Commission-funded ChEESE project (www.cheese-coe.eu). A third tsunami-related pilot demonstrator in ChEESE is Probabilistic Tsunami Hazard Analysis (PTHA). Both the PTF and PTHA applications make extensive use of faster-than-real-time computations as a basis for more rigorous assessments. In PTHA, we quantify the likelihood of exceeding a specified metric of tsunami inundation at a given location within a given time interval, providing scientific guidance for decision making in coastal engineering and evacuation planning. This presentation will give an overview of tsunami urgent computing methodologies, first addressing FTRT and then how an upscaling of HPC resources could significantly improve the accuracy and feasibility of more complex methods such as PTF and local PTHA.
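The exceedance quantification at the heart of PTF and PTHA can be illustrated with a minimal sketch (not ChEESE code): given an ensemble of FTRT scenario simulations, each with an associated source probability, the probability of exceeding an inundation threshold at a location is the summed weight of the scenarios that exceed it. All weights and heights below are hypothetical:

```python
def exceedance_probability(scenario_weights, max_heights, threshold):
    """P(max inundation height > threshold) over a weighted ensemble.

    scenario_weights: probability of each source scenario (sum to 1),
    max_heights: simulated maximum inundation height (m) per scenario
                 at one coastal location.
    """
    return sum(w for w, hmax in zip(scenario_weights, max_heights)
               if hmax > threshold)

weights = [0.5, 0.3, 0.2]   # hypothetical source-scenario probabilities
heights = [0.4, 1.2, 3.5]   # hypothetical simulated max heights (m)
curve = {t: exceedance_probability(weights, heights, t)
         for t in (0.5, 1.0, 2.0)}
# curve: {0.5: 0.5, 1.0: 0.5, 2.0: 0.2}
```

In practice each entry of `max_heights` is the output of one FTRT simulation, which is why vast numbers of concurrent runs on HPC resources are needed before the exceedance curve becomes meaningful.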

Frontiers and challenges in the operational forecast of volcanic ash by Sara Barsotti (IMO) and Arnau Folch (BSC)
Prompt reaction to natural hazards requires computing infrastructures, complicated data workflows, and engagement with stakeholders formally involved in emergency management. The Center of Excellence for Exascale in Solid Earth (ChEESE; https://cheese-coe.eu) is preparing flagship codes for the upcoming Exascale era, together with workflows and Pilot Demonstrators (PDs), to furnish HPC services to industry and public governance bodies, including urgent computing for earthquakes, tsunamis, and volcanoes.

Volcanic ash and tephra represent a serious threat to the aviation community (both aircraft in flight and airport infrastructure) and can have severe impacts on human health, the environment, the climate, and society as a whole. The capability of anticipating the dispersal of tephra and ash, its concentration in the atmosphere, and the amount of volcanic material accumulating on the ground is essential to mitigate these potential impacts. High-resolution simulations with data assimilation and real-time forecasting are needed to meet the demand for high-quality and reliable information when an eruption starts. The ChEESE approach tries to overcome the limitations of current operational approaches and to push for the next generation of urgent computing products.

The PD was tested by BSC and IMO in an operational environment during the VOLCICE exercise held in March 2021. On that occasion, an eruption at the Jan Mayen volcano was simulated and the main actions for responding to the emergency were practiced. The main purpose was to validate the urgent computing service provided by ChEESE to the Icelandic Meteorological Office for producing high-resolution forecasts of volcanic ash cloud dispersal, needed to issue the first two official products disseminated to the aviation sector (so-called SIGMETs). Issuing timely and accurate SIGMETs within the first 30 minutes of an eruption can be challenging, but it is essential to guarantee safe operations for all aircraft at risk of exposure to contaminated air.

Extreme-Scale Computing for the Advanced Prediction of Weather Extremes by Tiago Quintino (ECMWF)

Numerical weather prediction (NWP) has a long record of delivering reliable daily forecasts of extreme events up to weeks ahead. Examples include wind storms, heavy precipitation, cold spells, heat waves and droughts, which affect all sectors of society. With climate change, the probability of occurrence and the intensity of extremes are very likely changing, which poses great challenges for climate projections based on numerical simulations to reach a sufficient level of reliability for decision making.
The operational schedules of urgent computing in global NWP production allocate roughly hourly slots for collecting and pre-processing hundreds of millions of observations, producing the initial conditions with very complex mathematical data assimilation algorithms, and running ensemble forecasts to predict the most likely evolution of weather and extremes. Limited-area systems run similar workflows but with more frequent daily updates and higher spatial resolution over selected regions.
Traditionally, these workflows are operated on dedicated HPC and data handling/dissemination systems, and the underlying software is bespoke and has evolved over decades. The increasing concern about computing and data-handling efficiency - given growing demand for higher resolution, more model complexity and larger ensembles - has led to significant investments in code and workflow redesign in this community.
The obvious need for evidence based decision-making in support of climate change adaptation and society's need to deal responsibly with extremes has raised the bar for such investments in software development. This raises the question of how Europe can focus resources and coordinate its excellence in science and technology to address one of the biggest challenges of our generation.