Filter: King Arthur 3rd Floor
 

8:00am

Student Tutorial: Supercomputing in Plain English
    Monday July 16, 2012 8:00am - 5:00pm @ King Arthur 3rd Floor

    ABSTRACT:

    • Lecture: Overview: What the Heck is Supercomputing?
      This session provides a broad overview of High-Performance Computing (HPC). Topics include: what is supercomputing?; the fundamental issues of HPC (storage hierarchy, parallelism); hardware primer; introduction to the storage hierarchy; introduction to parallelism via an analogy (multiple people working on a jigsaw puzzle); Moore's Law; the motivation for using HPC.
    • Lab: Running A Job on a Supercomputer
      In this hands-on lab session, you'll get an account on one or more supercomputers, and you'll get a chance to run a job. If your Unix/Linux skills have gotten a little rusty, this will be a great refresher.
    • Lecture: The Tyranny of the Storage Hierarchy
      This session focuses on the implications of a fundamental reality: fast implies expensive implies small, and slow implies cheap implies large. Topics include: registers; cache, RAM, and the relationship between them; cache hits and misses; cache lines; cache mapping strategies (direct, fully associative, set associative); cache conflicts; write-through vs. write-back; locality; tiling; hard disk; virtual memory. A key point: Parallel performance can be hard to predict or achieve without understanding the storage hierarchy. (A short demonstration of the locality point appears after this list.)
    • Lab: Running Benchmarks on a Supercomputer
      In this hands-on lab session, you'll benchmark a matrix-matrix multiply code to discover the configuration that gets the best performance.
    • Other topics may be introduced if time permits.
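    A short demonstration of the locality point from the storage-hierarchy lecture, assuming Python with NumPy on the attendee's laptop (the array size is arbitrary, and this sketch is not the tutorial's actual matrix-multiply exercise):

      import time
      import numpy as np

      # Array far larger than any cache level (~490 MB); the size is an illustrative choice.
      X = np.random.rand(8000, 8000)

      def traverse_rows(X):
          # Walk the array in memory order: each row is contiguous, so whole
          # cache lines are used before being evicted.
          total = 0.0
          for i in range(X.shape[0]):
              total += X[i, :].sum()
          return total

      def traverse_cols(X):
          # Walk the array against memory order: consecutive elements of a column
          # are a full row (64 KB) apart, so cache reuse is poor.
          total = 0.0
          for j in range(X.shape[1]):
              total += X[:, j].sum()
          return total

      for name, fn in (("row-wise", traverse_rows), ("column-wise", traverse_cols)):
          t0 = time.perf_counter()
          fn(X)
          print(f"{name} traversal: {time.perf_counter() - t0:.2f} s")

    On most laptops the column-wise traversal is noticeably slower, for no reason other than memory access order; the lab's matrix-multiply benchmark explores the same effect through tile sizes.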

    STRONGLY ENCOURAGED: Laptop (Windows, MacOS or Linux); free software might need to be downloaded during the session

    PREREQUISITES: One recent semester of programming in C or C++; recent basic experience with any Unix-like operating system (could be Linux but doesn't have to be). (Attendees with Fortran experience will be able to follow along.) No previous HPC experience will be required.

     



    Speakers


    Tom Murphy is a Chartered Engineer with a degree in Acoustics...


    Type Tutorial



10:00am

Science: Membrane protein
    Tuesday July 17, 2012 10:00am - 10:30am @ King Arthur 3rd Floor

    Science: Membrane protein simulations under asymmetric ionic concentrations

    Abstract: Important cellular processes, such as cell-cell recognition, signal transduction, and transport of electrical signals are controlled by membrane proteins. Membrane proteins act as gatekeepers of the cellular environment by allowing passage of ions, small molecules, or nascent proteins under specific environmental signals such as transmembrane voltage, changes in ionic concentration, or binding of a ligand. Molecular dynamics simulations of membrane proteins, performed in a lipid bilayer environment, mimic the cellular environment by representing the solvent, lipids, and the protein in full atomistic detail. These simulations employ periodic boundary conditions in three dimensions to avoid artifacts associated with the finite size of the system. Under these conditions, the membrane protein system is surrounded by ionic solutions on either side of the membrane whose properties cannot be changed independently. We have developed a computational method that allows simulations of membrane proteins under periodic boundary conditions while controlling the properties of the two ionic solutions independently. In this method, an energy barrier is introduced between the two adjacent unit cells and separates the two ionic solutions. The height of the barrier affects the chemical potential of the ions on each side of the barrier, and thus allows for individual control over ionic properties. During the course of the simulation, the height of the barrier is adjusted dynamically to reach the proper ionic concentration on each side. This method has been implemented in the Tcl interface of the molecular dynamics program NAMD.
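    As a rough illustration of the feedback idea described above (the actual implementation lives in NAMD's Tcl interface and is not reproduced here), the Python sketch below adjusts a barrier height toward a target concentration with an invented proportional-control rule and a toy stand-in for the MD engine; every name and number is a placeholder:

      def update_barrier(height, conc_measured, conc_target, gain=0.5):
          # One feedback step: adjust the barrier in proportion to the relative
          # concentration error. Gain, sign convention and bounds are placeholders.
          error = (conc_measured - conc_target) / conc_target
          return max(0.0, height + gain * error)

      def fake_measured_concentration(height, feed_conc=100.0):
          # Toy stand-in for the MD engine: a higher barrier lets fewer ions cross,
          # so the concentration measured on the dilute side drops (invented response).
          return feed_conc / (1.0 + height)

      target = 10.0   # mM, the dilute side in the Kv1.2 example below
      height = 1.0    # arbitrary starting barrier height
      for cycle in range(100):
          measured = fake_measured_concentration(height)
          height = update_barrier(height, measured, target)
      print(f"barrier height {height:.2f} -> concentration "
            f"{fake_measured_concentration(height):.1f} mM (target {target} mM)")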

     

    We have applied this method to simulate the voltage-gated potassium channel Kv1.2 under physiological conditions, in which the extracellular solution is made of 10mM KCl and 100mM NaCl, while the intracellular solution has an ionic concentration of 100mM KCl and 10mM NaCl. The simulations maintain a 1:10 and 10:1 ratio between ionic concentrations on each side. The simulations are performed under a voltage bias of 100mV and provide the first simulation of potassium channels under exact physiological conditions.

    The method has also been applied to simulate ionic currents passing through OmpF, an outer membrane porin, under membrane potentials. Here we were able to accurately calculate the reversal potential of the OmpF channel in a tenfold salt gradient of 0.1M intracellular to 1M extracellular KCl. Our results agree with experimental ion conductance measurements and reproduce key features of ion permeation and selectivity of the OmpF channel. Specifically, the I-V plots obtained under asymmetric ionic solutions revealed the natural asymmetry in the channel caused by increased conductance rates observed at positive potentials, as well as the inherent cation-selectivity of the OmpF pore. Therefore, we have developed a method that directly relates molecular dynamics simulations of ionic currents to electrophysiological measurements in ion channels.

     



    Speakers

    Type Science Track
    Session Titles Biological Applications


10:30am

Science: Exploiting HPC
    Tuesday July 17, 2012 10:30am - 11:00am @ King Arthur 3rd Floor

    Science: Exploiting HPC Resources for the 3D-Time Series Analysis of Caries Lesion Activity.

    Abstract: We present a research framework to analyze 3D-time series caries lesion activity based on collections of SkyScan μ-CT images taken at different times during the dynamic caries process. Analyzing caries progression (or reversal) is data-driven and computationally demanding. It involves segmenting high-resolution μ-CT images, constructing 3D models suitable for interactive visualization, and analyzing 3D and 4D (3D + time) dental images. Our development exploits XSEDE's supercomputing, storage, and visualization resources to facilitate the knowledge discovery process. In this paper, we describe the required image processing algorithms and then discuss the parallelization of these methods to utilize XSEDE's high performance computing resources. We then present a workflow for visualization and analysis using ParaView. This workflow enables quantitative analysis as well as three-dimensional comparison of multiple temporal datasets from the longitudinal dental research studies. Such quantitative assessment and visualization can help us to understand and evaluate the underlying processes that arise from dental treatment, and therefore can have significant impact in the clinical decision-making process and caries diagnosis.



    Speakers

    Type Science Track
    Session Titles Biological Applications


11:00am

Science: Transforming molecular biology
    Tuesday July 17, 2012 11:00am - 11:30am @ King Arthur 3rd Floor

    Science: Transforming molecular biology research through extreme acceleration of AMBER molecular dynamics simulations: Sampling for the 99%.

    Abstract: This talk will cover recent developments in the acceleration of Molecular Dynamics Simulations using NVIDIA Graphics Processing Units with the AMBER software package. In particular it will focus on recent algorithmic improvements aimed at accelerating the rate at which phase space is sampled. A recent success has been the reproduction and extension of key results from the DE Shaw 1 millisecond Anton MD simulation of BPTI (Science, Vol. 330 no. 6002 pp. 341-346) with just 2.5 days of dihedral boosted AMD sampling on a single GPU workstation (Pierce L, Walker R.C. et al. JCTC, 2012 in review). These results show that with careful algorithm design it is possible to obtain sampling of rare biologically relevant events that occur on the millisecond timescale using just a single $500 GTX580 graphics card and a desktop workstation. Additional developments highlighted will include the acceleration of AMBER MD simulations using graphics processing units, including Amazon EC2 and Microsoft Azure cloud-based automated ensemble calculations, a new precision model optimized for the upcoming Kepler architecture (Walker R.C. et al, JCP, 2012, in prep), as well as approaches for running large-scale multi-dimensional GPU-accelerated replica exchange calculations on Keeneland and BlueWaters.

     



    Speakers

    Type Science Track
    Session Titles Biological Applications


11:30am

Science: Invited Talk: Multiscale
    Tuesday July 17, 2012 11:30am - 12:00pm @ King Arthur 3rd Floor

    Science: Invited Talk:  Multiscale simulations of blood-flow: from a platelet to an artery

    Abstract: We review our recent advances on multiscale modeling of blood flow including blood rheology. We focus on the objectives, methods, computational complexity and overall methodology for simulations at the level of glycocalyx (<1 micron), blood cells (2-8 microns) and up to larger arteries (O(cm)). The main findings of our research and future directions are summarized. We discuss the role of High Performance Computers for multiscale modeling and present new parallel visualization tools. We also present results of simulations performed with our coupled continuum-atomistic solver on up to 300K cores, modeling initial stages of blood clot formation in a brain aneurysm.



    Speakers

    Type Science Track
    Session Titles Biological Applications


2:45pm

Science: Optimization of Density
    Tuesday July 17, 2012 2:45pm - 3:15pm @ King Arthur 3rd Floor

    Science: Optimization of Density Functional Tight-Binding and Classical Reactive Molecular Dynamics for High-Throughput Simulations of Carbon Materials

    Abstract: Carbon materials and nanostructures (fullerenes, nanotubes) are promising building blocks of nanotechnology. Potential applications include optical and electronic devices, sensors, and nano-scale machines. The multiscale character of processes related to fabrication and physics of such materials requires using a combination of different approaches such as (a) classical dynamics, (b) direct Born-Oppenheimer dynamics, (c) quantum dynamics for electrons and (d) quantum dynamics for selected nuclei. We describe our effort on optimization of classical reactive molecular dynamics and the density-functional tight-binding method, which is a core method in our direct and quantum dynamics studies. We find that optimization is critical for efficient use of high-end machines. Choosing the optimal configuration for the numerical library and compilers can result in a four-fold speedup of direct dynamics as compared with the default programming environment. The integration algorithm and parallelization approach must also be tailored for the computing environment. The efficacy of possible choices is discussed.

     



    Speakers

    Type Science Track
    Session Titles Materials


3:15pm

Science: A toolkit
    Tuesday July 17, 2012 3:15pm - 3:45pm @ King Arthur 3rd Floor

    Science: A toolkit for the analysis and visualization of free volume in materials

    Abstract: A suite of tools is presented that enables analysis of free volume in terms of accepted standard metrics. The tools are extensible through the use of standard UNIX utilities, making them applicable to the output of many standard simulation packages. They also include utilities for rapid development of visual output not available in other packages, and can be extended and modified for other types of spatial data.

     



    Speakers

    Type Science Track
    Session Titles Materials


3:45pm

Science: Extending Parallel
    Tuesday July 17, 2012 3:45pm - 4:15pm @ King Arthur 3rd Floor

    Science: Extending Parallel Scalability of LAMMPS and Multiscale Reactive Molecular Simulations

    Abstract: Conducting molecular dynamics (MD) simulations involving chemical reactions in large-scale condensed phase systems (liquids, proteins, fuel cells, etc…) is a computationally prohibitive task even though many new ab-initio based methodologies (i.e., AIMD, QM/MM) have been developed. Chemical processes occur over a range of length scales and are coupled to slow (long time scale) system motions, which make adequate sampling a challenge. Multistate methodologies, such as the multistate empirical valence bond (MS-EVB) method, which are based on effective force fields, are more computationally efficient and enable the simulation of chemical reactions over the necessary time and length scales to properly converge statistical properties. 

    The typical parallel scaling bottleneck in both reactive and nonreactive all-atom MD simulations is the accurate treatment of long-range electrostatic interactions. Currently, Ewald-type algorithms rely on three-dimensional Fast Fourier Transform (3D-FFT) calculations. The parallel scaling of these 3D-FFT calculations can be severely degraded at higher processor counts due to necessary MPI all-to-all communication. This poses an even bigger problem in MS-EVB calculations, since the electrostatics, and hence the 3D-FFT, must be evaluated many times during a single time step. 

    Due to the limited scaling of the 3D-FFT in MD simulations, the traditional single-program-multiple-data (SPMD) parallelism model is only able to utilize several hundred CPU cores, even for very large systems. However, with a proper implementation of a multi-program (MP) model, large systems can scale to thousands of CPU cores. This paper will discuss recent efforts in collaboration with XSEDE advanced support to implement the MS-EVB model in the scalable LAMMPS MD code, and to further improve parallel scaling by implementing MP parallelization algorithms in LAMMPS. These algorithms improve parallel scaling in both the standard LAMMPS code and LAMMPS with MS-EVB, thus facilitating the efficient simulation of large-scale condensed phase systems, which include the ability to model chemical reactions. 
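    A minimal mpi4py sketch of the multi-program idea described above: the global communicator is split so that one partition is dedicated to reciprocal-space (3D-FFT) work while the remaining ranks handle real-space interactions, keeping each partition's collectives off the full machine. The partition sizes and division of labor are illustrative and are not the actual LAMMPS/MS-EVB implementation:

      # Run with, e.g.:  mpiexec -n 8 python mp_split.py   (assumes mpi4py)
      from mpi4py import MPI

      world = MPI.COMM_WORLD
      rank, size = world.Get_rank(), world.Get_size()

      # Illustrative split: the last quarter of the ranks form a dedicated partition
      # for reciprocal-space (3D-FFT) work; the rest handle real-space interactions.
      n_kspace = max(1, size // 4)
      color = 1 if rank >= size - n_kspace else 0      # 0 = real space, 1 = k-space
      partition = world.Split(color=color, key=rank)

      # Collectives issued on `partition` involve only that partition's ranks, so the
      # all-to-all behind a parallel 3D-FFT no longer spans the whole job.
      local_value = float(rank)                        # placeholder for real work
      partition_sum = partition.allreduce(local_value, op=MPI.SUM)
      print(f"world rank {rank}: partition {color}, "
            f"local rank {partition.Get_rank()}/{partition.Get_size()}, sum {partition_sum}")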

     



    Speakers

    Type Science Track
    Session Titles Materials


4:45pm

BOF: XSEDE: Review and Directions After Year One
 
 

10:00am

Science: Comparing the performance
    Wednesday July 18, 2012 10:00am - 10:30am @ King Arthur 3rd Floor

    Science: Comparing the performance of group detection algorithm in serial and parallel processing environments

    Abstract: Developing an algorithm for group identification from a collection of individuals without grouping data has been getting significant attention because of the need for increased understanding of groups and teams in online environments. This study used space, time, task, and players’ virtual behavioral indicators from a game database to develop an algorithm to detect groups over time. The group detection algorithm was primarily developed for a serial processing environment and later modified to allow for parallel processing on Gordon. For a collection of data representing 192 days of game play (approximately 140 gigabytes of log data), the computation required 270 minutes for the major step of the analysis when running on a single processor. The same computation required 22 minutes when running on Gordon with 16 processors. The provision of massive compute nodes and the rich shared memory environment on Gordon has improved the performance of our analysis by a factor of 12. Besides demonstrating the possibility to save time and effort, this study also highlights some lessons learned for transforming a serial detection algorithm to parallel environments.
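    The quoted improvement matches the timings (270 minutes serially versus 22 minutes on 16 processors is roughly a 12x speedup). Below is a schematic of the parallelization pattern, assuming the per-day logs can be processed independently before a final merge; the file names, detect_groups, and merge are invented placeholders, not the authors' code:

      from multiprocessing import Pool

      def detect_groups(day_log):
          # Placeholder for the per-day detection step (space, time, task and
          # behavioral indicators -> candidate groups); returns a dummy record here.
          return {"day": day_log, "groups": []}

      def merge(per_day_results):
          # Placeholder for stitching per-day groups into groups that persist over time.
          return per_day_results

      if __name__ == "__main__":
          day_logs = [f"day_{d:03d}.log" for d in range(192)]   # 192 days of play (names invented)
          with Pool(processes=16) as pool:                      # 16 workers, as on Gordon
              per_day = pool.map(detect_groups, day_logs)
          print(f"processed {len(merge(per_day))} days")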

     



    Speakers

    Type Science Track
    Session Titles Data and Analytics


10:30am

Science: High performance
    Wednesday July 18, 2012 10:30am - 11:00am @ King Arthur 3rd Floor

    Science: High performance data mining of social media

    Abstract: Difficulties in designing a system to mine social media lie in web service restrictions, legal permissions and security, as well as in network and execution engine latency. On small-scale tests with Twitter data, our data mining algorithm achieves an F score of .90 for identification of streets, buildings, place names and place abbreviations. But at large scale, to maintain accuracy and efficiency, we have had to develop techniques to manage the real-time data load. Our contribution is a set of algorithm and architecture strategies for multi-core and parallel processing that avoid major program refactoring.
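    For readers unfamiliar with the metric, the F figure quoted above is the harmonic mean of precision and recall; the operating points below are hypothetical, chosen only to show what an F score of .90 can correspond to:

      def f_score(precision, recall):
          # F measure: harmonic mean of precision and recall.
          return 2 * precision * recall / (precision + recall)

      # Hypothetical operating points, each giving F close to 0.90:
      print(round(f_score(0.90, 0.90), 3))    # 0.9
      print(round(f_score(0.95, 0.855), 3))   # 0.9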

     



    Speakers

    Type Science Track
    Session Titles Data and Analytics


11:00am

Science: Multiple Concurrent Queries
    Wednesday July 18, 2012 11:00am - 11:30am @ King Arthur 3rd Floor

    Science: Multiple Concurrent Queries on Demand: Large Scale Video Analysis in a Flash Memory Environment as a Case for Humanities Supercomputing

    Abstract: [Please see the attached pdf for this abstract with figures integrated, as well as additional author names.]

    The Large Scale Video Analytics (LSVA) research project is a newly supported effort that explores the viability of a human-machine hybrid approach to managing massive video archives for the purposes of research and scholarship. Video databases are characterized by incomplete metadata and wildly divergent content tags; and while machine reading has a low efficacy rate, human tagging is generally too labor intensive to be viable. Thus, in the LSVA project, a prototype of the approach that integrates multiple algorithms for image recognition, scene-completion, and crowd-sourced image tagging will be developed such that the system grows smarter and more valuable with increased usage. Building on interdisciplinary research in the humanities and social sciences on one hand (film theory, collective intelligence, visual communication), and computer science on the other (signal processing, large feature extraction for machine-reading, algorithmic pattern recognition), the LSVA project will enable researchers to test different algorithms by placing them into a workflow, applying them to the same video dataset in real-time, and finally analyzing the results using cinematic theories of representation.

    Currently, the process of understanding and utilizing the content of a large database of video archives is time consuming and laborious. Besides the large size of the archives, other key challenges to effectively analyzing the video archives are limited metadata and lack of precise understanding of the actual content of the archive.

    For many years, scholars have required high-performance computing resources for analyzing and examining digital videos. However, due to usage policies and technical limitations, supercomputers required scholars to work in a batch-oriented workflow. The batch-oriented workflow is at odds with the typical workflow of scholars, which is exploratory and iterative in nature such that the results of one query are used to inform the next query. Batch-oriented workflow interrupts this process and can hinder rather than help discovery.

    The arrival of the XSEDE resource “Gordon”, a supercomputer with extensive flash memory, transformed the relationship between this research method and HPC. Its architecture opened the possibility for researchers to interactively, and on-demand, query large databases in real-time, including databases of digital videos. Additionally, the computational capability of Gordon is sufficient for extensive analysis of video-assets in real-time for determining which videos to return in response to a query. This is a computationally intensive process involving queries that cannot be anticipated ahead of time.

    This project will be using the Gordon supercomputer not only to pre-process videos to automatically extract meaningful metadata, but also as an interactive engine that allows the researchers to generate queries on the fly for which metadata that was extracted a priori is not sufficient. In order to be useful to researchers, we are combining an interactive database, a robust web-based front-end (Medici), and powerful visualization representations to aid the researcher in understanding the contents of the video-footage without requiring them to watch every frame of every movie. Given that there is more video than one could ever view in a lifetime on YouTube alone, with more added to it and other video hosting sites on a daily basis, the need for and implications of this type of meta-level analysis are great indeed.

     

    Due to the need for a high-quality end-user experience (low latency and high throughput), the LSVA project has dedicated and interactive access to Gordon’s I/O nodes. In the first phase of this project, the database and video archive will be resident on Gordon’s I/O node and Lustre File System. In the future, we will experiment with federated databases located at different sites across the country.

    This work builds on the NCSA Medici system as the front-end that the user interacts with (see Fig. 2). Medici comes well-equipped to allow automated processes to be dropped into a technology-supported workflow. Medici also provides easy tagging and grouping of data elements using an RDF model at the back-end.

     

    Conclusions

    Though we are in the preliminary stages of this project, we are enthusiastic and confident about building an on-demand interactive query engine for video archives and designing a user-interface with appropriate visualizations to support real-time video analysis and querying. Ultimately, we hope to turn this system into a science gateway that can be used by the community of film scholars, social scientists, computer scientists, and artists.

     



    Speakers

    Type Science Track
    Session Titles Data and Analytics


11:30am

Science: Invited Talk: Mythbusting
    Wednesday July 18, 2012 11:30am - 12:00pm @ King Arthur 3rd Floor

    Invited Talk: Mythbusting with nanoHUB.org - the first science-gateway software as a service cloud focused on end-to-end application users AND application developers

    Abstract: Gordon Moore’s 1965 prediction of continued semiconductor device down-scaling and circuit up-scaling has become a self-fulfilling prophecy in the past 40 years. Open source code development and sharing of the process modeling software SUPREM and the circuit modeling software SPICE were two critical technologies that enabled the down-scaling of semiconductor devices and up-scaling of circuit complexity. SPICE was originally a teaching tool that transitioned into a research tool, was disseminated by an inspired engineering professor via tapes, and improved by users who provided constructive feedback to a multidisciplinary group of electrical engineers, physicists, and numerical analysts. Ultimately SPICE and SUPREM transitioned into all electronic design software packages that power today’s 280 billion dollar semiconductor industry.

    Can we duplicate such multi-disciplinary software development starting from teaching and research in a small research group and leading to true economic impact? What technologies might advance such a process? How can we deliver such software to a broad audience? How can we teach the next generation of engineers and scientists on the latest research software? What are critical user requirements? What are critical developer requirements? What are the incentives for faculty members to share their competitive advantages? How do we know early on if such an infrastructure is successful? This presentation will show how nanoHUB.org addresses these questions.

    By serving a community of 230,000 users in the past 12 months with an ever-growing collection of 3,000 resources, including over 220 simulation tools, nanoHUB.org has established itself as “the world’s largest nanotechnology user facility” [1]. nanoHUB.org is driving significant knowledge transfer among researchers and speeding transfer from research to education, quantified with usage statistics, usage patterns, collaboration patterns, and citation data from the scientific literature. Over 850 nanoHUB citations in the literature resulting in a secondary citation h-index of 41 prove that high quality research by users outside of the pool of original tool developers can be enabled by nanoHUB processes. In addition to high-quality content, critical attributes of nanoHUB success are its open access, ease of use, utterly dependable operation, low-cost and rapid content adaptation and deployment, and open usage and assessment data. The open-source HUBzero software platform, built for nanoHUB and now powering many other hubs, is architected to deliver a user experience corresponding to these criteria.

    In June 2011 the National Science and Technology Council published Materials Genome Initiative for Global Competitiveness [2], writing “Accelerating the pace of discovery and deployment of advanced material systems will therefore be crucial to achieving global competitiveness in the 21st century.” The Council goes on to say, "Open innovation will play a key role in accelerating the development of advanced computational tools. … An existing system that is a good example of a first step toward open innovation is the nanoHUB, a National Science Foundation program run through the Network for Computational Nanotechnology."

    [1] Quote by Mikhail Roco, Senior Advisor for Nanotechnology, National Science Foundation.
    [2] http://www.whitehouse.gov/sites/default/files/microsites/ostp/materials_genome_initiative-final.pdf



    Speakers

    Type Science Track
    Session Titles Data and Analytics


1:15pm

Science: Excited States
    Wednesday July 18, 2012 1:15pm - 1:45pm @ King Arthur 3rd Floor

    Science: Excited States in Lattice QCD using the Stochastic LapH Method

    Abstract: A new method for computing the mass spectrum of excited baryons and mesons from the temporal correlations of quantum-field operators in quantum chromodynamics is described. The correlations are determined using Markov-chain Monte Carlo estimates of QCD path integrals formulated on an anisotropic space-time lattice. Access to the excited states of interest requires determinations of lower-lying multi-hadron state energies, necessitating the use of multi-hadron operators. Evaluating the correlations of such multi-hadron operators is difficult with standard methods. A new stochastic method of treating the low-lying modes of quark propagation which exploits a new procedure for spatially-smearing quark fields, known as Laplacian Heaviside smearing, makes such calculations possible for the first time. A new operator for studying glueballs, a hypothetical form of matter comprised predominantly of gluons, is also tested, and computing the mixing of this glueball operator with a quark-antiquark operator and multiple two-pion operators is shown to be feasible. 
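    The Laplacian Heaviside (LapH) smearing mentioned above projects the quark fields onto the lowest-lying eigenmodes of the gauge-covariant lattice Laplacian (schematically S = Theta(sigma^2 + Laplacian)). The toy Python sketch below builds such a projector for a free one-dimensional periodic lattice purely to show the structure; the production calculation uses the three-dimensional Laplacian with the gauge links of each configuration, and the cutoff here is arbitrary:

      import numpy as np

      L = 16          # lattice sites in the toy 1D example
      cutoff = 1.0    # keep modes with -Laplacian eigenvalue below this (arbitrary)

      # Free 1D periodic lattice Laplacian: (-lap psi)_x = 2 psi_x - psi_{x+1} - psi_{x-1}
      eye = np.eye(L)
      minus_lap = 2 * eye - np.roll(eye, 1, axis=0) - np.roll(eye, -1, axis=0)

      evals, evecs = np.linalg.eigh(minus_lap)
      V = evecs[:, evals < cutoff]       # Heaviside step: keep only the low-lying modes

      # LapH smearing operator: the projector S = V V^T onto the kept modes.
      S = V @ V.T

      psi = np.random.rand(L)            # stand-in for a quark field on one timeslice
      psi_smeared = S @ psi
      print(f"kept {V.shape[1]} of {L} modes; S is a projector: {np.allclose(S @ S, S)}")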

     



    Speakers

    Type Science Track
    Session Titles Quantum Methods


1:45pm

Science: Benchmark Calculations
    Wednesday July 18, 2012 1:45pm - 2:15pm @ King Arthur 3rd Floor

    Science: Benchmark Calculations for Multi-Photon Ionization of the Hydrogen Molecule and the Hydrogen Molecular Ion by Short-Pulse Intense Laser Radiation

    Abstract: We provide an overview of our recent work on the implementation of the finite-element discrete-variable representation to study the interaction of a few-cycle intense laser pulse with the H₂ and H₂⁺ molecules. The problem is formulated in prolate spheroidal coordinates, the ideal system for a diatomic molecule, and the time-dependent Schrödinger equation is solved on a space-time grid. The physical information is extracted by projecting the time-evolved solution to the appropriate field-free states of the problem.

     



    Speakers

    Type Science Track
    Session Titles Quantum Methods


2:15pm

Science: Electrostatic Screening

2:45pm

Science: Quantum Algorithms
    Wednesday July 18, 2012 2:45pm - 3:15pm @ King Arthur 3rd Floor

    Science: Quantum Algorithms for Predicting the Properties of Complex Materials

    Abstract: A central goal in computational materials science is to find efficient methods for solving the Kohn-Sham equation. The realization of this goal would allow one to predict materials properties such as phase stability, structure and optical and dielectric properties for a wide variety of materials. Typically, a solution of the Kohn-Sham equation requires computing a set of low-lying eigenpairs. Standard methods for computing such eigenpairs require two procedures: (a) maintaining the orthogonality of an approximation space, and (b) forming approximate eigenpairs with the Rayleigh-Ritz method. These two procedures scale cubically with the number of desired eigenpairs. Recently, we presented a method, applicable to any large Hermitian eigenproblem, by which the spectrum is partitioned among distinct groups of processors. This "divide and conquer" approach serves as a parallelization scheme at the level of the solver, making it compatible with existing schemes that parallelize at a physical level and at the level of primitive operations, e.g., matrix-vector multiplication. In addition, among all processor sets, the size of any approximation subspace is reduced, thereby reducing the cost of orthogonalization and the Rayleigh-Ritz method. We will address the key aspects of the algorithm, its implementation in real space, and demonstrate the accuracy of the algorithm by computing the electronic structure of some representative materials problems.
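    A toy illustration of the spectrum-partitioning idea, assuming SciPy: the low end of the spectrum is divided into windows, and each window's eigenpairs are computed independently (here with shift-invert Lanczos), so no group ever orthogonalizes against the full set of desired eigenvectors. The matrix, window centers, and solver are illustrative stand-ins, not the authors' real-space Kohn-Sham implementation:

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh

      # Toy Hamiltonian: 1D Laplacian plus a weak random diagonal "potential".
      n = 2000
      rng = np.random.default_rng(0)
      H = sp.diags([np.full(n - 1, -1.0), 2.0 + 0.1 * rng.random(n), np.full(n - 1, -1.0)],
                   offsets=[-1, 0, 1], format="csc")

      # Partition the low end of the spectrum into windows; in the parallel algorithm
      # each window would be assigned to its own group of processors, which computes
      # and orthogonalizes only its own small batch of eigenvectors.
      window_centers = [0.05, 0.15, 0.25, 0.35]
      k_per_window = 8

      all_vals = []
      for sigma in window_centers:
          # Shift-invert targets the eigenvalues closest to sigma.
          vals, vecs = eigsh(H, k=k_per_window, sigma=sigma, which="LM")
          all_vals.append(np.sort(vals))

      print(np.round(np.concatenate(all_vals), 4))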

     



    Speakers

    Type Science Track
    Session Titles Quantum Methods
    Tags Software and Middleware


3:45pm

Panel: Campus Bridging and the GFFS Pilot Project: Pilot site reports
    Wednesday July 18, 2012 3:45pm - 5:15pm @ King Arthur 3rd Floor

    Abstract: The XSEDE Campus Bridging team has been facilitating a pilot program for the Global Federated File System software at 2 XSEDE test sites and 4 pilot sites, with the goal of making use of the GFFS software on campus and at XSEDE resources in order to share data and facilitate computational workflows for research. Representatives from the pilot sites will discuss their use cases and requirements for GFFS and their fit with the use cases developed by the XSEDE Campus Bridging and Architecture and Design teams.

    Panel Moderator:
    - Jim Ferguson, National Institute for Computational Sciences, University of Tennessee

    Panel Participants:
    - Guy Almes, Texas A&M University
    - Toby Axelsson, University of Kansas

     



    Speakers
    Texas A&M University

    National Institute for Computational Sciences, University of...

    University of Kansas


    Type Panel Session


5:30pm

BOF: (CANCELLED) XSEDE User Portal, Mobile and Social Media Integration
    Wednesday July 18, 2012 5:30pm - 6:30pm @ King Arthur 3rd Floor

    BOF: XSEDE User Portal, XUP Mobile and Social Media Integration

    Abstract: The XSEDE User Portal (XUP) provides an integrated interface for XSEDE users to access the information and services available to them through the XSEDE project. The XUP allows users to accomplish many things, including: 

    • View system account information 

    • Log in to XSEDE resources 

    • Transfer files both between XSEDE resources and between their desktop and XSEDE resources 

    • Request allocations, and view and manage project allocation usage 

    • Monitor the status of HPC, storage, and visualization resources 

    • Access documentation and news 

    • Register for training 

    • Receive consulting support 

    A companion to the XUP, the XSEDE User Portal Mobile, gives users access to many of the above capabilities through a mobile interface.

    The XUP team will lead a discussion designed to enhance the capabilities of the XSEDE User Portal, to improve the XSEDE User Portal mobile interface, and potentially to develop native mobile app versions. This will include exploring ideas to leverage and integrate other popular web-based services into the XUP, including social media. Social media has revolutionized how users communicate with each other and make effective use of the services available to them. The challenge is how to leverage and integrate social media to advance scientific research.

    The purpose of this BoF is to collect user feedback about the current XSEDE User Portal and its mobile interface, and to discuss how to best integrate social media and other popular online capabilities into the XUP project to help make XSEDE users more productive and to promote the science that is accomplished in XSEDE.



    Speakers

    Type BOF


 
 

8:45am

Science: Massively parallel
    Thursday July 19, 2012 8:45am - 9:15am @ King Arthur 3rd Floor

    Science: Massively parallel direct numerical simulations of forced compressible turbulence: a hybrid MPI/OpenMP approach

    Abstract: A highly scalable simulation code for turbulent flows which solves the fully compressible Navier-Stokes equations is presented. The code, which supports one, two and three dimensional domain decompositions, is shown to scale well on up to 262,144 cores. Introducing multiple levels of parallelism based on distributed message passing and shared-memory paradigms results in a reduction of up to 33% of communication time at large core counts. The code has been used to generate a large database of homogeneous isotropic turbulence in a stationary state created by forcing the largest scales in the flow. The scaling of spectra of velocity and density fluctuations is presented. While the former follow classical theories strictly valid for incompressible flows, the latter present a more complicated behavior. Fluctuations in velocity gradients and derived quantities exhibit extreme though rare fluctuations, a phenomenon known as intermittency. The simulations presented provide data to disentangle Reynolds and Mach number effects.
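    A minimal sketch of the two-level parallelism described above, assuming mpi4py and SciPy: MPI ranks each own a slab of the periodic grid, while the transforms on each slab run on several shared-memory threads (SciPy's workers argument standing in for OpenMP). The grid size and thread count are illustrative, and the production solver is of course a compiled MPI/OpenMP code rather than Python:

      # Run with, e.g.:  mpiexec -n 4 python hybrid_fft.py   (assumes mpi4py and SciPy)
      import numpy as np
      from mpi4py import MPI
      from scipy import fft

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Distributed level: each MPI rank owns one slab of an N^3 periodic grid
      # (N kept small here and assumed divisible by the number of ranks).
      N = 128
      nz_local = N // size
      u_local = np.random.rand(nz_local, N, N)    # stand-in for one velocity component

      # Shared-memory level: transforms over the two locally complete axes run on
      # several threads within the node (the role OpenMP plays in the actual solver).
      u_hat_local = fft.fftn(u_local, axes=(1, 2), workers=4)

      # Global quantities still need message passing: reduce the local spectral "energy".
      local_energy = float(np.sum(np.abs(u_hat_local) ** 2))
      total_energy = comm.allreduce(local_energy, op=MPI.SUM)
      if rank == 0:
          print(f"{N}^3 grid on {size} ranks x 4 threads, spectral energy {total_energy:.3e}")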

     



    Speakers

    Type Science Track
    Session Titles Astrophysics
    Tags File Systems


9:15am

Science: High Accuracy
    Thursday July 19, 2012 9:15am - 9:45am @ King Arthur 3rd Floor

    Science: High Accuracy Gravitational Waveforms from Black Hole Binary Inspirals Using OpenCL

    Abstract: There is a strong need for high-accuracy and efficient modeling of extreme-mass-ratio binary black hole systems (EMRIs) because these are strong sources of gravitational waves that would be detected by future observatories. In this article, we present sample results from our Teukolsky EMRI code: a time-domain Teukolsky equation solver (a linear, hyperbolic, partial differential equation solver using finite-differencing), that takes advantage of several mathematical and computational enhancements to efficiently generate long-duration and high-accuracy EMRI waveforms. 

     We emphasize here the computational advances made in the context of this code. Currently there is considerable interest in making use of many-core processor architectures, such as Nvidia and AMD graphics processing units (GPUs) for scientific computing. Our code uses the Open Computing Language (OpenCL) for taking advantage of the massive parallelism offered by modern GPU architectures. We present the performance of our Teukolsky EMRI code on multiple modern processors architectures and demonstrate the high level of accuracy and performance it is able to achieve. We also present the code's scaling performance on a large supercomputer i.e. NSF's XSEDE resource, Keeneland.

     



    Speakers

    Type Science Track
    Session Titles Astrophysics
    Tags File Systems


9:45am

Science: A High throughput
    Thursday July 19, 2012 9:45am - 10:15am @ King Arthur 3rd Floor

    Science: A High throughput workflow environment for cosmological simulations

    Abstract: The cause of cosmic acceleration remains an important unanswered question in cosmology. The Dark Energy Survey (DES) is a joint DoE-NSF project that will perform a sensitive survey of cosmic structure traced by galaxies and quasars across 5000 sq deg of sky. DES will be the first project to combine four different methods (supernova brightness, the acoustic scale of galaxy clustering, the population of groups and clusters of galaxies, and weak gravitational lensing) to study dark matter, dark energy, and departures from general relativistic gravity via evolution of the cosmic expansion rate and growth rate of linear density perturbations. Realizing the full statistical power of this and complementary surveys requires support from cosmological simulations to address the many potential sources of systematic error, particularly errors that are shared jointly across the tests of cosmic acceleration using cosmic structure. 

    We are coordinating a Blind Cosmology Challenge (BCC) process for DES, in which a variety of synthetic sky realizations in different cosmologies will be analyzed, in a blind manner, by DES science teams. The BCC process requires us to generate a suite of roughly 50 2048^3-particle N-body simulations that sample the space-time structure in a range of cosmic volumes. These simulations are dressed with galaxies, and the resulting catalog-level truth tables are then processed with physical (e.g., gravitational lensing) and telescope/instrument effects (e.g., survey mask) before their release to science teams. We describe here our efforts to embed control of the catalog production process within a workflow engine that employs a service-oriented architecture to manage XSEDE job requests. We describe the approach, including workflow tests and extensions, and present first production results for the N-body portion of the workflow. We propose future extensions aimed toward a science gateway service for astronomical sky

     



    Speakers

    Type Science Track
    Session Titles Astrophysics
    Tags File Systems


10:45am

General Session: Campus Champion Panel
 
