Projects
Current Projects
Intel Parallel Computing Center: The Intel PCC at SDSC for Earthquake Simulation will be an interdisciplinary research center with the goal of modernizing SCEC's highly scalable 3D earthquake modeling environment, called AWP-ODC. The modernization will leverage the latest multi-core Intel Xeon processors and the many-core, self-hosted, next-generation Intel Xeon Phi processor architecture.
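As a rough illustration of the kind of loop modernization involved, the sketch below shows a stencil update whose outer loops are spread across cores and whose unit-stride inner loop is vectorized with OpenMP SIMD, the style of loop that the wide vector units on Xeon and Xeon Phi reward. It is a minimal sketch, not AWP-ODC code; the array sizes, layout, and coefficient are hypothetical.

/* Illustrative only: threaded, vectorized 7-point stencil sweep. */
#include <stdio.h>
#include <stdlib.h>

#define NX 256
#define NY 256
#define NZ 256
#define IDX(i,j,k) ((size_t)(i)*NY*NZ + (size_t)(j)*NZ + (size_t)(k))

int main(void)
{
    float *u  = calloc((size_t)NX * NY * NZ, sizeof *u);
    float *un = calloc((size_t)NX * NY * NZ, sizeof *un);
    if (!u || !un) return 1;
    u[IDX(NX/2, NY/2, NZ/2)] = 1.0f;       /* point disturbance */
    const float c = 0.1f;                  /* hypothetical coefficient */

    /* Outer loops are shared among threads; the unit-stride k loop is
     * vectorized so each core keeps its SIMD lanes full. */
    #pragma omp parallel for collapse(2) schedule(static)
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++) {
            #pragma omp simd
            for (int k = 1; k < NZ - 1; k++)
                un[IDX(i,j,k)] = u[IDX(i,j,k)]
                    + c * (u[IDX(i+1,j,k)] + u[IDX(i-1,j,k)]
                         + u[IDX(i,j+1,k)] + u[IDX(i,j-1,k)]
                         + u[IDX(i,j,k+1)] + u[IDX(i,j,k-1)]
                         - 6.0f * u[IDX(i,j,k)]);
        }

    printf("center value after one sweep: %f\n", un[IDX(NX/2, NY/2, NZ/2)]);
    free(u);
    free(un);
    return 0;
}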
Fault Tolerance Project: The current checkpoint/restart approach may introduce an unacceptable amount of overhead on some file systems. In collaboration with CSM, we are developing a fault tolerance framework in which the surviving application processes adapt themselves to failures. In collaboration with ORNL, we are integrating ADIOS to scale up checkpointing on the Lustre file system.
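As background on what application-level checkpoint/restart looks like, the sketch below writes each rank's slice of the state to a shared file with collective MPI-IO. It is a minimal sketch under stated assumptions: the file name, state size, and interval are hypothetical, and the project's actual framework relies on ADIOS and fault-adaptive processes rather than this bare pattern.

/* Illustrative only: collective MPI-IO checkpoint of per-rank state. */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 1024                    /* hypothetical per-rank state size */

static void checkpoint(MPI_Comm comm, const double *state, int n, int step)
{
    char fname[64];
    snprintf(fname, sizeof fname, "ckpt_%06d.bin", step);

    int rank;
    MPI_Comm_rank(comm, &rank);

    MPI_File fh;
    MPI_File_open(comm, fname, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each rank writes at its own offset; the collective call lets the
     * MPI-IO layer aggregate requests, which matters on Lustre. */
    MPI_Offset offset = (MPI_Offset)rank * n * (MPI_Offset)sizeof(double);
    MPI_File_write_at_all(fh, offset, state, n, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double state[LOCAL_N];
    for (int i = 0; i < LOCAL_N; i++) state[i] = (double)i;

    for (int step = 0; step < 100; step++) {
        /* ... advance the simulation one time step ... */
        if (step % 25 == 0)             /* hypothetical checkpoint interval */
            checkpoint(MPI_COMM_WORLD, state, LOCAL_N, step);
    }

    MPI_Finalize();
    return 0;
}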
Supercomputing On Demand: SDSC Supports Event-Driven Science. HPGeoC supports Caltech users on demand for urgent earthquake science applications. The NSF XSEDE supercomputing resource Trestles is allocated to open up this new computing paradigm. We have developed novel ways of utilizing this type of allocation, as well as scheduling and job-handling procedures.
Blue Waters Project: This project is part of an NSF PRAC award. On Blue Waters, our research will investigate how earthquake ruptures produce high-frequency ground motions. High-frequency ground motions are known to have an important impact on seismic hazards. Existing HPC systems cannot achieve the physical scale range needed to explore the source of high frequencies.
CyberShake SGT Calculation: AWP-ODC is a highly scalable, parallel finite-difference application developed at SDSC and SDSU to simulate the dynamic rupture and wave propagation that occur during an earthquake. We have developed strain Green's tensor (SGT) creation and seismogram synthesis. The GPU-based SGT calculations achieved a 6.5x speedup on XK7 nodes compared to XE6 nodes; this improved computational efficiency in CyberShake waveform modeling will save hundreds of millions of processor core-hours in creating a California statewide physics-based seismic hazard map.
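For context, CyberShake's reciprocity-based approach turns a site seismogram into a combination of stored SGT traces and a rupture's moment-rate time function; at its core this is a discrete convolution, summed over moment-tensor components and rupture points. The sketch below shows only that core convolution for a single component and a single point source, with hypothetical trace lengths and values.

/* Illustrative only: schematic seismogram synthesis as a discrete
 * convolution of one SGT trace with a source moment-rate function. */
#include <stdio.h>

#define NT_SGT 8                        /* hypothetical SGT trace length */
#define NT_SRC 4                        /* hypothetical source samples   */

int main(void)
{
    double sgt[NT_SGT] = {0.0, 0.2, 0.5, 0.3, 0.1, 0.0, 0.0, 0.0};
    double moment_rate[NT_SRC] = {1.0, 2.0, 1.0, 0.5};
    double dt = 0.05;                   /* time step, seconds */
    double seis[NT_SGT + NT_SRC - 1] = {0.0};

    /* seis[n] = dt * sum_k sgt[k] * moment_rate[n-k] */
    for (int n = 0; n < NT_SGT + NT_SRC - 1; n++)
        for (int k = 0; k < NT_SGT; k++) {
            int m = n - k;
            if (m >= 0 && m < NT_SRC)
                seis[n] += dt * sgt[k] * moment_rate[m];
        }

    for (int n = 0; n < NT_SGT + NT_SRC - 1; n++)
        printf("t = %5.2f  u = %g\n", n * dt, seis[n]);
    return 0;
}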
SCEC Data Visualization: This project focuses on the visualization of a series of large earthquake simulations in the interest of gaining scientific insight into the impact of Southern San Andreas Fault earthquake scenarios on Southern California. In addition to creating its own software, the group also uses tools installed and maintained on SDSC computational resources.
Simulating Earthquake Faults (FESD): HPGeoC researchers are assisting researchers from six other universities and the US Geological Survey (USGS) in developing detailed, large-scale computer simulations of earthquake faults under a $4.6 million National Science Foundation (NSF) grant announced in September 2011. The initial focus is on the North American plate boundary and the San Andreas system of Northern and Southern California.
San Andreas Fault Zone Plasticity: Producing realistic seismograms at high frequencies will require several improvements in anelastic wave propagation engines, including the implementation of nonlinear material behavior. This project supports the development of nonlinear material behavior in both the CPU- and GPU-based wave propagation solvers. Simulations of the ShakeOut earthquake scenario have shown that nonlinearity could reduce earlier predictions of long-period (0-0.5 Hz) ground motions in the Los Angeles basin by 30-70%.
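One common way nonlinear material behavior enters a wave propagation solver is a yield check on the deviatoric stress followed by a return toward the yield surface; the sketch below uses a Drucker-Prager style criterion purely as an illustration. The exact formulation in the SCEC CPU and GPU solvers may differ, and the cohesion, friction angle, and stress values here are hypothetical.

/* Illustrative only: Drucker-Prager style yield check on one stress state. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979323846;

    /* Hypothetical stress state (Pa), compression negative. */
    double sxx = -60.0e6, syy = -50.0e6, szz = -55.0e6;
    double sxy =  20.0e6, syz = 0.0,     sxz = 0.0;

    double cohesion = 5.0e6;            /* hypothetical cohesion, Pa   */
    double phi = 30.0 * PI / 180.0;     /* hypothetical friction angle */

    /* Mean stress and deviatoric components. */
    double sm  = (sxx + syy + szz) / 3.0;
    double dxx = sxx - sm, dyy = syy - sm, dzz = szz - sm;

    /* Second deviatoric invariant J2 and the pressure-dependent strength. */
    double J2 = 0.5 * (dxx*dxx + dyy*dyy + dzz*dzz)
              + sxy*sxy + syz*syz + sxz*sxz;
    double tau   = sqrt(J2);
    double yield = cohesion * cos(phi) - sm * sin(phi);
    if (yield < 0.0) yield = 0.0;

    if (tau > yield)
        /* Plastic: scale deviatoric stresses back toward the yield surface. */
        printf("yielding: scale deviatoric stress by %.3f\n", yield / tau);
    else
        printf("elastic: tau = %.3e <= yield = %.3e\n", tau, yield);
    return 0;
}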
Finished Projects
SCEC M8 Simulation: M8 is the largest earthquake simulation ever conducted, a collaborative effort led by SCEC involving more than 30 seismologists and computational scientists and supported by a DOE INCITE allocation award. It presented tremendous computational and I/O challenges. The simulation was conducted on NCCS Jaguar and was an ACM Gordon Bell finalist at Supercomputing '10.
GPU Acceleration Project: HPGeoC has developed a hybrid CUDA-MPI version of the AWP-ODC code that achieved 2.3 Pflop/s sustained performance and enabled a 0-10 Hz rough-fault rupture simulation on ORNL Titan. This code is also used to study the effects of nonlinearity on surface waves during large earthquakes on the San Andreas fault.
Topology-aware Communication and Scheduling (HECURA-2): Topology-aware MPI communication, mapping, and scheduling is a new research area. The goal is to exploit knowledge of the network topology to optimize communication, both point-to-point and collective. We are participating in a joint project among OSU, TACC, and SDSC as a case study in how to implement new topology-aware MPI software at the application level.
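As one concrete example of topology awareness at the MPI level, an MPI-3 distributed graph communicator lets the application declare exactly which ranks exchange data, so the library may reorder ranks to fit the machine topology. This is a minimal sketch, assuming a hypothetical 1-D ring neighborhood rather than the decomposition actually used in the SCEC codes.

/* Illustrative only: neighborhood collective on a topology-aware communicator. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank talks to its left and right neighbor on a ring. */
    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;
    int neighbors[2] = { left, right };

    MPI_Comm ring;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   2, neighbors, MPI_UNWEIGHTED,
                                   2, neighbors, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL,
                                   1 /* allow rank reordering */, &ring);

    /* One value to each neighbor, one value from each neighbor. */
    int sendbuf[2] = { rank, rank };
    int recvbuf[2] = { -1, -1 };
    MPI_Neighbor_alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, ring);

    printf("rank %d received %d (left) and %d (right)\n",
           rank, recvbuf[0], recvbuf[1]);

    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}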
Petascale Inference in Earthquake System Science (PetaShake-2): This is a cross-disciplinary, multi-institutional SCEC/CME collaboration. We are providing a platform-independent petascale earthquake application that can enlist petascale computing to tackle PetaShake problems through a graduated series of milestone calculations.
Petascale Cyberfacility for Physics-based Seismic Hazard Analysis (PetaSHA-3): The SCEC PetaSHA-3 project is sponsored by NSF to provide society with better predictions of earthquake hazards. The project will provide the high-performance computing required to achieve the objectives for earthquake source physics and ground motion prediction outlined in the SCEC3 (2007-2012) research plan.
Recent Completions
- PetaShake-1 (https://scec.usc.edu/scecpedia/PetaSHA1_Project): An advanced computational platform designed to support high-resolution simulations of large earthquakes on initial NSF petascale machines, supported by an NSF OCI and GEO grant.
- HECURA-1: In collaboration with OSU and TACC, we developed non-blocking one-sided and two-sided communication and computation/communication overlap to improve the parallel efficiency of SCEC seismic applications (a schematic of the overlap pattern follows this list).
- PetaSHA-1/2 (http://scec.usc.edu/research/cme/projects/petasha-2): Cross-disciplinary, multi-institutional collaborations coordinated by SCEC, each a 2-year EAR/IF project of the same name, to develop a cyberfacility with a common simulation framework for executing SHA computational pathways.
- TeraShake: The TeraShake simulations model the rupture of a 230-kilometer stretch of the San Andreas fault and the resulting magnitude 7.7 earthquake. TeraShake was a multi-institution collaboration led by SCEC/CME.
- ShakeOut: The Great California ShakeOut is a statewide earthquake drill. It is held in October each year and serves as preparation for what to do before, during, and after an earthquake.
- Parallelization of Regional Spectral Method (RSM)
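The computation/communication overlap developed under HECURA-1 (second item above) follows a familiar pattern: post non-blocking halo exchanges, update the interior points that do not depend on the halo while messages are in flight, then wait and finish the boundary points. The sketch below shows only that pattern; the array size, ring neighbors, and update routines are hypothetical placeholders.

/* Illustrative only: overlapping halo exchange with interior computation. */
#include <mpi.h>
#include <stdio.h>

#define N 1024                              /* hypothetical local grid size */

static void update_interior(double *u, int n)
{ (void)u; (void)n; /* stencil on points that need no halo data */ }

static void update_boundary(double *u, int n, double halo_lo, double halo_hi)
{ (void)u; (void)n; (void)halo_lo; (void)halo_hi; /* edge points */ }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double u[N];
    for (int i = 0; i < N; i++) u[i] = rank;

    int lo = (rank - 1 + size) % size;      /* ring neighbors */
    int hi = (rank + 1) % size;
    double halo_lo = 0.0, halo_hi = 0.0;
    MPI_Request req[4];

    /* 1. Start the halo exchange without blocking. */
    MPI_Irecv(&halo_lo, 1, MPI_DOUBLE, lo, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(&halo_hi, 1, MPI_DOUBLE, hi, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(&u[0],    1, MPI_DOUBLE, lo, 1, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(&u[N-1],  1, MPI_DOUBLE, hi, 0, MPI_COMM_WORLD, &req[3]);

    /* 2. Overlap: update everything that does not need the halo. */
    update_interior(u, N);

    /* 3. Finish communication, then update the dependent boundary points. */
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    update_boundary(u, N, halo_lo, halo_hi);

    if (rank == 0) printf("halo exchange overlapped with computation\n");
    MPI_Finalize();
    return 0;
}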
High Performance Computing Allocations
- NSF XSEDE XRAC Allocations
- DOE INCITE Allocation
- NSF PRAC Blue Waters Allocation
- ANL Early Science Program Allocation