1. NASA’s Science and Engineering
Applications in the Future
ZettaFLOPS Forum: Frontiers of Extreme Computing
October 26, 2005, Santa Cruz, California
Dr. Rupak Biswas
Chief (Acting), NASA Advanced
Supercomputing (NAS) Division
NASA Ames Research Center
Moffett Field, California
2. NASA’s Mission Directorates
• Aeronautics Research Mission Directorate
(ARMD):
– To pioneer the identification, development,
verification, transfer, application, and
commercialization of high-payoff aeronautics and
space transportation technologies.
Artist concept of a vision for the National Air Transportation System
in 2025, allowing airport and airspace capacity to be more
responsive, adaptable and dynamic.
• Exploration Systems Mission Directorate (ESMD):
– To develop capabilities and supporting research and
technology that enable sustained and affordable human
and robotic exploration; includes the biological and
physical research necessary to ensure the health and
safety of crew during long duration space flight.
Artist concept of a future lunar exploration mission.
• Science Mission Directorate (SMD):
– To carry out the scientific exploration of the Earth, Moon, Mars, and beyond; chart the best route of discovery; and reap the benefits of Earth and space exploration for society.
Sidelong view of Saturn's rings captured by the Cassini spacecraft on Dec. 14, 2004.
3. NASA’s Mission Directorates
(cont.)
• Space Operations Mission Directorate (SOMD):
– To provide many critical enabling capabilities that make
possible much of the science, research, and exploration
achievements of the rest of NASA. It does this through the
three themes of the International Space Station, the Space
Shuttle Program, and Flight Support.
International Space Station
• NASA Engineering and Safety Center (NESC):
– The NESC is an independent organization, which was chartered in the wake of the Space Shuttle Columbia accident
to serve as an Agency-wide technical resource focused on
engineering excellence. The objective of the NESC is to
improve safety by performing in-depth independent
engineering assessments, testing, and analysis to uncover
technical vulnerabilities and to determine appropriate
preventative and corrective actions for problems, trends or
issues within NASA's programs, projects and institutions.
4. Integrated Safe Spacecraft Design:
2020 Goal
• Vision
– Full simulation and optimization of multiple vehicle designs with safety analysis to
enable automated identification and simulation of failures and effects against a suite
of health management technologies for survivability analysis and cost trade-offs.
Real-time generation of flight simulation enables pilot-in-the-loop design.
• Technology Advances
– Full, time-accurate, multi-disciplinary vehicle simulations with high-fidelity modeling
of safety critical elements
– Real-time data generation for piloted simulation
– Integration of health management strategies into vehicle behavior models
• Aerospace Technology Benefits
– Mission Safety - Supports order of magnitude improvement in mission safety from
2nd Gen RLV baseline
– Mission Affordability - Supports development of cost-effective survivable systems
through higher design certainty and lower requirement for safety margin
– Development of advanced tools and processes for rapid, high-confidence design -
Enables early evaluation and decision making within a virtual design process
– Revolutionary solution for fundamentally new missions - Enables simulation and
evaluation of self-repairing systems technologies
5. Supercomputing Requirements
[Figure: Supercomputing requirements chart plotting main memory (bytes, 10^6 to 10^15) against theoretical processor speed (FLOPS, 10^8 through 10^21 - giga, tera, peta, exa, zetta). Fidelity levels for single-discipline, single-configuration aerothermodynamic analysis range from non-linear inviscid flow simulation (EUL) through Reynolds-averaged Navier-Stokes flow simulation (RANS) and detached eddy flow simulation (DES) to large eddy simulation (LES) and direct Navier-Stokes (DNS), with a turbulence modeling gap between them. Hardware points: Cray YMP, Cray C-90, SGI Origin Lomax (2001), SGI Origin Chapman (2002), SGI Altix Columbia (2004), and projected new hardware in 2010, 2020, and 2040+. Flow regimes span attached flows only; mildly separated flows, transition, relaminarization, control flap flows; and massively separated flows, base flows, bluff body flows. Legend: A - airfoil; W - wing/component; SC - spacecraft/aircraft; R/O - PRA or GA optimization.]
7. Columbia: World Class Supercomputing
• Currently the world’s third fastest
supercomputer providing 62 Tflops
peak and 52 Tflops Linpack
sustained performance
• Conceived, designed, built, and
deployed in just 120 days
• A 20-node constellation built on
proven 512-processor nodes
• Largest SGI system in the world with
over 10,000 Itanium 2 processors
• Provides the largest node size incorporating commodity parts (512) and the largest shared-memory environment (2048)
• 88% efficiency tops the scalar systems on the Top500 list
• Most importantly, having mission impact almost immediately
Systems: SGI Altix 3700 and 3700-BX2
Processors: 10,240 Intel Itanium 2
Global Shared Memory: 20 Terabytes
Front-End: SGI Altix 3700 (64 proc.)
Online Storage: 440 Terabytes RAID
Offline Storage: 6 Petabytes STK Silo
Internal Networks:
Internode Comm: InfiniBand
Hi-Speed: 10 Gigabit Ethernet
8. Exploration Systems:
Space Flight Applications
• In computational fluid dynamics:
– Real-time, high-fidelity simulation for digital flight will be possible.
– With today's technology and computing capabilities, we focus on high-fidelity simulation of certain phenomena on a specific section of the vehicle. Some examples are propulsion, external body dynamics with six degrees of freedom (debris transport analysis), re-entry, fluid/structure interaction, etc.
– In the future, these simulations will have to be very fast and integrated at the system level so that a complete flight can be simulated in real time.
Return to Flight: Six-degree-of-freedom CFD analyses to determine the impact conditions and locations, using the aerodynamic characteristics of potential debris.
Flowliner: Instantaneous snapshot from time-accurate fuel flowliner analysis using 66 million grid points with 262 overlapped zones.
POC: Cetin Kiris, Mike Aftosmis, Stuart Rogers, NASA Ames Research Center, CA
9. Exploration Systems:
Digital Astronaut
Human Brain Circulatory System under Altered Gravity
• For astronauts, blood circulation and body fluid
distribution undergo significant adaptation both
during and after long-duration space flights.
• To assess the impact of changing gravitational
forces on human space flight, it is essential to
quantify the blood flow characteristics in the brain
under varying gravity conditions.
• Currently, NASA is working on blood flow
simulations in the arterial system of an astronaut.
With increased computational capabilities, we will be able to:
– Extend the simulations from just the arterial system to the entire body; then, extend this capability to couple with other systems such as the respiratory system
– Construct a bridge between the macroscopic and microscopic (molecular) scales; then, extend studies from the capillary level to the cell level
This will enable us to predict astronauts' performance during long space flights.
Human-specific geometry of the cerebral arterial tree reconstructed from magnetic resonance images is used in conjunction with supercomputing technology to establish large-scale continuum fluid simulations.
[Figure labels: MICROGRAVITY, CIRCULATORY SYSTEM, RADIATION SHIELDS]
POC: Cetin Kiris, NASA Ames Research Center, CA
10. Earth Science: Finite-Volume General
Circulation Model (fvGCM)
• Even with unlimited computing resources, there will be a hard limit on how far we can go in resolution, beyond which we cannot possibly model without also modeling society, biology (such as whale movements), etc. We will also need to model human behavior if the resolution is of the order of 1 meter.
• The ultimate useful min(dx, dy, dz) in a global model would be about 10 meters. In that case, the required increase in computing power would be roughly (10 km / 10 m)^4 = (10^3)^4 = 10^12 times more than what Columbia currently provides (see the back-of-envelope sketch below).
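A minimal back-of-envelope sketch of this scaling argument, assuming (as the exponent of 4 in the bullet above implies) that cost grows with the fourth power of the linear refinement - three spatial dimensions plus a proportionally shorter time step. The grid spacings are the ones quoted above.

```python
# Back-of-envelope sketch of the resolution scaling argument above.
# Assumption (implicit in the slide's exponent of 4): cost grows as
# (linear refinement)^4 -- three spatial dimensions plus a shorter time step.

current_dx_m = 10_000   # ~10 km grid spacing, the slide's reference resolution
target_dx_m = 10        # ~10 m, the "ultimate useful" min(dx, dy, dz)

refinement = current_dx_m / target_dx_m      # 10^3
cost_factor = refinement ** 4                # 10^12

print(f"Linear refinement factor: {refinement:.0e}")
print(f"Required increase in computing power: {cost_factor:.0e} x Columbia")
```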
Katrina: Very promising track predictions, comparable to the NHC predictions, at different resolutions (1/4 deg and 1/8 deg) from a 5-day forecast (1/8 degree fvGCM shown).
Higher Resolution Hurricane Track Prediction: fvGCM code simulations of Hurricane Frances, 09/04 (total precipitable water; resolution: 1/12th of a degree).
POC: Bowen Shen, NASA Goddard Space Flight Center
11. Earth Science: Estimating the
Circulation and Climate of the Ocean
(ECCO)
Two CPU-intensive problems that the ECCO consortium is working on, but that are unlikely to be solved in a definitive way during the next 25 years:
• The first problem is convergence of numerical ocean model solutions as resolution is increased. By some estimates, the ocean is a turbulent fluid with upwards of 10^24 degrees of freedom at each instant of time. To date, the largest computation that ECCO has conducted on Columbia is an ocean simulation with approximately 10^9 degrees of freedom at each time step. Taking into account the shorter time steps that are needed to simulate smaller volumes of water, we may not have a definitive answer to the question of convergence until available computational power is increased by a factor of 10^20 (see the sketch after this slide).
• The second problem is ocean state estimation. Assuming 1-second time steps, an exhaustive search of all possible solutions for the above ocean model over 1000 years (the overturning time scale of the oceans) would require approximately a 10^60 increase in computer FLOPS relative to Columbia.
• Add to the above model the atmosphere, land, and ice processes, and clearly there is a very long way to go before earth scientists will be fully satisfied with computing capability.
To improve specification of error statistics and parameterization of small-scale processes in ECCO and to investigate solution convergence, a series of full-depth, global-ocean, and sea-ice simulations at increasingly higher resolution (1/4, 1/8, and now 1/16 deg) are being carried out on the 2048-CPU partition of Columbia. The figure shows the one-month sea-surface height difference in the Gulf Stream region from these three integrations (left panel: 1/4 deg; middle panel: 1/8 deg; right panel: 1/16 deg). Color scale is -0.125 m to 0.125 m.
POC: Dimitris Menemenlis, Jet Propulsion Lab, California Institute of Technology, Pasadena, CA
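A rough sketch of how a factor of that order can arise, assuming (our assumption, in the spirit of the fvGCM estimate) that compute cost grows with the fourth power of linear refinement, i.e. as the degrees-of-freedom ratio to the 4/3 power for a 3-D grid; the degrees-of-freedom figures are the ones quoted in the first bullet.

```python
# Rough sketch of the ECCO convergence estimate above (first bullet).
# Assumption (ours): compute cost grows as (linear refinement)^4 -- three
# spatial dimensions plus a proportionally shorter time step -- which equals
# (degrees-of-freedom ratio)^(4/3) for a 3-D grid.

dof_ocean = 1e24       # estimated degrees of freedom of the turbulent ocean
dof_columbia = 1e9     # largest ECCO computation on Columbia so far

dof_ratio = dof_ocean / dof_columbia        # 10^15
cost_factor = dof_ratio ** (4.0 / 3.0)      # ~10^20

print(f"Degrees-of-freedom ratio: {dof_ratio:.0e}")
print(f"Implied increase in computing power: {cost_factor:.0e}")
```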
12. Space Science:
Stellar Models and Supernovae
The influence of computers in the next 25 years will be much greater
than the huge impact they have had in the last 25.
• In astronomy, large ground-based telescopes will use adaptive optics and other computer-assisted
data enhancement techniques to do observations from the ground that presently can only be done
from space.
• With a 1000-fold increase in present computer power, models will start from a given presupernova
model (mass, angular momentum, distribution, etc) and determine the explosion - including gamma-
ray bursts as a subset, as well as the properties of a neutron star, pulsar, magnetar, or black hole that
is produced, the nucleosynthesis, and the appearance of the supernova remnant. This includes a
detailed description of the neutron star magnetic field inside and out.
• Within 10 years, snapshots of presupernova evolution studied in 3D with magnetic fields will give a much better understanding of the transport of angular momentum, convection, convective overshoot, etc., so that the presupernova model has a good physical basis.
• Nucleosynthesis will be calculated in all stellar models
and supernovae with unprecedented accuracy.
Improvements in cross sections will also occur in
laboratory and computational nuclear physics. The
models will be able to describe the chemical evolution
of galaxies of all types, not just the Milky Way.
POC: Stan Woosley, University of California, Santa Cruz
13. Space Science:
Stellar Models and Supernovae
• Shown here is an animation of a reactive rising bubble in conditions appropriate for a Type Ia supernova. The standard picture of an SN Ia is that it begins as one or more hotspots near the center of a carbon/oxygen white dwarf star. These
hotspots quickly burn the carbon fuel to nickel, via
thermonuclear fusion reactions, and a flame is
formed. The hot ash is less dense than the
surrounding fuel, so the bubble of ash will
buoyantly rise, while the flame continues to burn
outward.
• In these simulations, we were interested in understanding the role of the turbulence that develops on the sides of the bubble; in particular, whether these turbulent eddies can cause the bubble to shed sparks of hot, partially burned fuel or ash, which would then ignite the star in other regions.
• These calculations are very computationally
demanding, requiring 100s of millions of zones to
accurately capture the flame structure and the
developing turbulence. With zettaflop capability,
we could certainly capture this transition to
turbulence and gain a detailed understanding of
the evolution of these bubbles.
POC: Mike Zingale, Stan Woosley, University of California, Santa Cruz; John Bell, Marc Day, and Charles Rendleman at
Lawrence Berkeley National Laboratory.
14. Space Science: Simulating Convection and Magnetic
Field Generation in the Interiors of Planets and Stars
Our goals and dreams expand much faster than computer power…
• With four or five times the computing resources currently available, it would be possible to simulate the interior dynamics of stars and planets as strongly turbulent convection in 3D, as can only now be done in 2D. Comparisons of 2D laminar and turbulent simulations clearly show fundamental differences. This suggests that our current 3D simulations, which are at best weakly turbulent, may still be far from realistic. Simulating strongly turbulent convective dynamos requires much greater spatial and temporal resolution.
• So it's not that our solutions would be just a little more accurate if we had more computational resources; they would likely be fundamentally different and lead to new discoveries and predictions.
Snapshot of the entropy from one of our simulations of turbulent convection in a rapidly rotating disk or equatorial plane of a star or giant planet
• Although the current solutions do resemble observations to first order and our understanding of these processes
continues to improve, we cannot include all the spatial and temporal scales that are part of the actual turbulent
mechanisms. The situation has improved significantly over the past two decades and no doubt will continue to
improve over the next two decades. Hopefully by then, it will be clear that we will be simulating all the important
scales.
• We would also like to include the more detailed physics, chemistry and radiative transfer in our 3D time-
dependent models that currently only 1D (spherically-symmetric) evolution models can include.
• We would like to simulate every major body in the solar system simultaneously with all the interactions among
them included, while simulating their internal dynamics. The computational resources needed to do this would
be difficult to estimate - but there will never be a time when those working on state-of-the-art problems will feel
they have enough resources.
POC: Gary Glatzmaier, Earth Science Dept., University of California, Santa Cruz
16. Computational Chemistry
Computational chemists are currently interested in two areas: radiation biology and computational materials science.
• Simulation of Radiation Damage to DNA:
– Doubling or tripling the computing power would allow us to study damage to the Watson-Crick base pair quantum mechanically. Currently, we can only apply quantum mechanics to individual bases. It would also allow us to study the role of water and protein in more detail.
– An unlimited computing facility would allow us to follow the radiation damage from the initial hit by space radiation through the subsequent chemical reactions that occur in the cell, leading to the biological response. At present these studies are piecemeal.
• Computational Material Science:
– In multi-scale modeling of materials, doubling or tripling the computing power would allow us to extend both the size of the quantal region and the molecular dynamics region. This is important for simulating energetic reactions such as pyrolysis of the TPS during a high-speed vehicle entry into the atmosphere.
Multi-scale modeling of materials and bioscience - 10-base pair DNA
POC: Winifred Huo, NASA Ames Research Center
17. ZettaFLOP Visualization and Data
Analysis
With zettaFLOP capabilities, we would be able to achieve:
• Visualization of zettabyte datasets
• High-quality ray traced volume rendering with realistic shading models (true shadows, accurate
material reflectance & absorption)
• Interactive radiosity calculations
• Interactive 3D LIC (line integral convolution - "van Gogh" technique)
• Interactive feature exploration and detection, using sophisticated kernel methods, non-linear fitting,
etc.
• Interactive "causality exploration", using high-order Bayesian conditional probability networks
• Natural language interfaces to visualization
applications
• Simulations would themselves be the visualization techniques, i.e. there would be no separation between the computation, analysis, and visualization stages (true "interactive visual supercomputing")
• Sensory devices could provide extremely good
immersion, using feedback even of saccadic
eye movements
• Neural network-based "cognitive prosthetics"
could assist data analysis and exploration,
using, e.g., map seeking circuits, adaptive
resonance, probability collectives and other
information theoretic techniques.
POC: Chris Henze, NASA Ames Research Center
Artist concept of a visualization tool - a double hyperwall
18. Integrated Safe Spacecraft Design:
2010 Goal
• Vision
– Single vehicle design integrating full, high fidelity multi-disciplinary analyses with
FMEA. Enables perturbation of the simulation to introduce failures and re-fly
through mission profiles to determine survivability.
• Technology Advances
– Full 3-D multidisciplinary simulations
• Benefits
– Mission Safety - Supports 2nd Generation RLV goals of 1:10,000 risk of crew loss
– Develop revolutionary technologies to enable new aerospace capabilities -
Enables an order of magnitude safer human space flight missions.
19. Aeronautics Research: High-Lift
Aerodynamics
• The grid requirements for an accurate computation of high-lift aerodynamics are staggering. For the simple geometry in the figure below, systematic refinement of the grid resulted in 46 million cells before a reasonable level of CLmax agreement was achieved. With the combination of Columbia run time and queue structure, it took 135 days of round-the-clock submittals to get one 13-point lift polar.
• A colleague, Dr. Shahyar Pirzadeh, is presently
trying to apply these guidelines to a Boeing 777
in high-lift configuration. He is presently up to
108 million cells and is getting some results
indicating that this may not be adequate. These
calculations are taking weeks and weeks on 360
processors.
• Therefore, with unlimited computational capacity, we would like to perform these computations in a few days or less (a rough turnaround sketch follows this slide).
Trapezoidal wing high-lift geometry and typical lift polar
POC: Neal Frink, NASA Langley Research Center, Virginia; Mark S. Chaffin, Cessna Aircraft Company
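A rough sketch of the turnaround arithmetic behind this slide, using the figures quoted above (46 million cells, 135 days, 13 polar points, 108 million cells for the 777). The linear-in-cell-count extrapolation is our own illustrative assumption, not a claim from the slide.

```python
# Rough turnaround arithmetic for the high-lift computations above.
# Slide figures: 46 million cells and 135 days of round-the-clock submittals
# for one 13-point lift polar on Columbia; the Boeing 777 high-lift grid is
# 108 million cells. Assumption (ours, for illustration only): wall-clock per
# solution scales roughly linearly with cell count at fixed resources.

cells_trap_wing = 46e6
days_per_polar = 135
points_per_polar = 13

days_per_point = days_per_polar / points_per_polar   # ~10.4 days per angle of attack
print(f"Days per polar point (trapezoidal wing): {days_per_point:.1f}")

cells_777 = 108e6
scaled = days_per_point * cells_777 / cells_trap_wing
print(f"Naive linear estimate for the 777 grid: {scaled:.0f} days per point")
```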
20. Space Science:
Solar Simulations in the Zettaflop Era
• Solar convection zone simulations could
be expanded to include multiple super-
granules with a 2-4x increase in computer
power. This would allow a highly credible
analysis of the physics of large-scale
photospheric phenomena.
• Another 2-4x would allow simulation of the
largest photospheric scales, the giant
cells.
• Zettaflop performance would allow a simulation of the full convection zone, from 70% of the solar radius out into the atmosphere, at a horizontal resolution sufficient to resolve granules. This would include all important scales of motion and so give a complete picture of internal solar dynamics. A very thorough understanding of solar activity and space weather generation would then follow. (A rough sketch of the implied scale-up follows this slide.)
Current solar convection zone simulations are limited to boxes of approximately 10% of the solar radius on a side. These require roughly 200,000 processor hours on Columbia.
POC: Alan Wray, NASA Ames Research Center
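A rough sketch of the scale-up implied on this slide, combining the 2-4x steps quoted above with Columbia's 62 Tflops peak from the Columbia slide; the ratio of a zettaflop machine to Columbia is our own illustrative calculation.

```python
# Rough sketch of the scale-up implied on this slide, using figures quoted
# in this deck (Columbia's 62 Tflops peak is taken from the Columbia slide).

columbia_peak_flops = 62e12      # Columbia peak performance
zettaflop = 1e21                 # a zettaflop-class machine

# Near-term steps quoted on this slide, each a 2-4x increase in computer power
supergranules = (2, 4)           # multiple supergranules
giant_cells = (2, 4)             # largest photospheric scales (giant cells)

low = supergranules[0] * giant_cells[0]
high = supergranules[1] * giant_cells[1]
print(f"Near-term cumulative increase: {low}x to {high}x")

# Jump from Columbia-class peak to zettaflop-class peak
print(f"Zettaflop vs. Columbia peak: ~{zettaflop / columbia_peak_flops:.1e}x")
```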
21. So Where Are We?
• The Science
– Production CFD codes executing 100x
C90 numbers of just a few years ago.
– Throughput 100x (or more) above that of
a few years ago.
– Earth/Space Science codes executing
2-4x faster than last year’s best efforts,
100x throughput over last year’s efforts.
• The Systems (1997 - present)
– New expanded shared memory architectures:
First 256, 512, and 1024 CPU Origin systems.
First 256p, 512p Altix SSI systems.
– First 2048p NUMAlinked 512p Altix cluster.
• The Future?
– Expanded Altix SSI to 4096?
– Expanded Altix NUMAlinked clusters to 16Kp?
– Serious upgrades to CPUs
22. Conclusion: Advanced Development
Concepts
• Several orders of magnitude increase in effective computational power needed to radically
extend the range of design options to be explored or radically shorten the design cycle
• Computer technology of massively parallel processing combined with single processor
speed increases will support the above
• Computing methods and new architectures are needed to match over a spectrum of
applications
• New paradigms are needed to harness a very large number of processors
• Need to provide advanced development tools,
processes and products to increase design
confidence, and reduce the design cycle time for
aircraft and space vehicles by 50% in 10 years
and 75% in 25 years
• Currently, answers to “what if” questions require hours, days, even months. To support the designer's train of thought, these answers should come in seconds
• Progress in computer technology will be achieved by two ingredients: faster processors, and more of them - yet the system needs to maintain the appearance of a single virtual computer to the user
POC: Jaroslaw Sobieski (LaRC), Ultrafast Computing Team Report, Feb. 1999
23. Consequences of Architecture Diversity
In the old days, single processor speed increases made our codes
run faster; simple and easy.
• Now, there are a multitude of processors and memory architectures available, in a single
or virtual computer. It is unlikely that smart operating systems will completely mask the
architectural diversity
– New task: tailor solution to architecture
– New opportunity: specify architecture that suits a class of applications
• We need many processors, do we know how to use them?
– Current experience shows diminishing returns setting in when the number of processors reaches the 100’s
• Why: Types of Parallelism
– Coarse-grained: replicated code, different inputs (problem-dependent)
– Coarse-grained: partitioned domain (diminishing returns)
– Fine-grained: existing code rearranged (machine-dependent, almost useless)
– Fine-grained: existing solution algorithm recoded (machine-dependent, limited
usefulness)
– Radical, new paradigms to be invented
• New paradigms are needed to exploit more than 100’s of processors (see the illustrative sketch after this slide)
POC: Jaroslaw Sobieski (LaRC), Ultrafast Computing Team Report, Feb. 1999
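A minimal sketch contrasting the first two "coarse-grained" flavors listed above, written in Python's multiprocessing; the function names and the toy 1-D smoothing problem are illustrative assumptions, not code from the report.

```python
# Illustrative sketch (not from the report) of two coarse-grained parallelism
# flavors listed above: (1) replicated code with different inputs, and
# (2) a partitioned domain, here a toy 1-D smoothing step.
from multiprocessing import Pool

def simulate_case(angle_of_attack):
    # Flavor 1: the same (toy) analysis code, run independently per input case.
    return angle_of_attack, 0.1 * angle_of_attack  # pretend "lift coefficient"

def relax_subdomain(chunk):
    # Flavor 2: each worker smooths its own slice of a partitioned 1-D domain.
    # (Halo exchange between neighbours is omitted; that coupling is exactly
    # where the diminishing returns mentioned above come from.)
    offset, cells = chunk
    smoothed = [
        cells[i] if i in (0, len(cells) - 1)
        else 0.5 * (cells[i - 1] + cells[i + 1])
        for i in range(len(cells))
    ]
    return offset, smoothed

if __name__ == "__main__":
    with Pool(4) as pool:
        # (1) Replicated code, different inputs: one case per worker.
        polar = pool.map(simulate_case, [0, 4, 8, 12])
        # (2) Partitioned domain: split one grid into per-worker chunks.
        grid = [float(i % 7) for i in range(32)]
        chunks = [(s, grid[s:s + 8]) for s in range(0, 32, 8)]
        pieces = pool.map(relax_subdomain, chunks)
    print("Toy lift polar:", polar)
    print("Relaxed chunk offsets:", [p[0] for p in sorted(pieces)])
```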
24. How to get engineering computing to ride the
wave of the future in computer technology
• The engineering computing market is small relative to that in business and entertainment. Therefore, it constitutes a
niche where the Government seed money might make a real difference.
• In the interdisciplinary arena, one should continue to
– monitor, understand the new computer hardware and software technologies and architectures
– develop an understanding of the capabilities that are likely to be delivered by the commercial development
regardless of the Government actions
– Influence development of the new computer hardware and software technologies and architectures
– Develop understanding of the match between various types of engineering computing jobs and various
computer architectures, and the match frequencies
– Formulate the need for new developments at the integrating framework level and at the level of each particular discipline
– Formulate standards and requirements as needed by the tool integration, MDO environment, and the new
architectures
– Develop methods for effective utilization of the system analysis and MDO for various classes of the new
architectures, taking into consideration the computing load balancing among the processors
– Recommend long term investment strategy based on the above information
– Foster and coordinate disciplinary developments and application projects
– Facilitate education and training
• In each disciplinary domain, one will need to
– Commit to gearing-up to the exploitation of new computer architectures in hardware and software.
– Reexamine and restructure the disciplinary algorithms, and to develop new paradigms where needed,
accounting fully for MDO
– formulate local disciplinary standards and requirements compatible with the ones established in the
interdisciplinary arena
– develop and validate the restructured algorithms and the new paradigms, implementing the standards and
requirements
POC: Jaroslaw Sobieski (LaRC), Ultrafast Computing Team Report, Feb. 1999
25. “Compute as Fast as the Engineers can Think!”
• The charter for the Ultrafast Computing Team Report (Feb.
1999) was to examine impact of new computer
architectures on computing in the engineering design
process because:
– The aerospace vehicle design process is too long; not
computing fast enough is a major culprit
– Computer technology offers new opportunities in
massively heterogeneous and concurrent processing
that should be exploited.
• Examining two user scenarios: RLV and HSCT, it
was determined that:
– Major computing tasks need to be reduced from hours
to seconds
– Effective computing speed needs to increase by several orders of magnitude to achieve that
– Computer technology of massively parallel processing
must combine with new methods to achieve that goal
– There is usually one week for the partnership to
determine which proposed configuration to pursue.
– The objective is to maximize the return on investment
over the life of the vehicle, including the assumptions
of 10 years and 36 launches per year.
POC: Jaroslaw Sobieski (LaRC), Ultrafast Computing Team Report, Feb. 1999
Vision: Computing that underlies the engineering design should be so capable that it no longer acts as a brake on the flow of creative human thought in the design process. The capability of the human mind to formulate concepts and digest data, rather than the computing, would then pace that process.
From NASA/TM-1999-209715, “Compute as Fast as the Engineers Can Think!” Ultrafast Computing Team Final Report, R. T. Biedron, P. Mehrotra, M. L. Nelson, F. S. Preston, J. J. Rehder, J. L. Rogers, D. H. Rudy, J. Sobieski, and O. O. Storaasli, Langley Research Center, Hampton, Virginia.
ARMD: To pioneer the identification, development, verification, transfer, application, and commercialization of high-payoff aeronautics and space transportation technologies. It is responsible for guiding and managing NASA's aeronautics research, and defining the investments that NASA makes on behalf of the Nation. These investments, by definition, are for long-term high-risk undertakings that are beyond the scope, capacities, or risk limits of others to perform.
ESMD: To create a constellation of new capabilities, supporting technologies, and foundational research that enables sustained and affordable human and robotic exploration. It results from integrating the responsibility of the previous Office of Exploration Systems and the Office of Biological and Physical Research, including research and development efforts focused on crew health and life-support systems, countermeasures, and radiation protection. The ESMD will address strategic technical challenges and minimize the health and safety risks for the crew of any space vehicle.
SMD: To support basic and applied research in Earth and space science. The SMD research program includes the development of major space flight missions; analysis of data from prior missions; conduct of major field campaigns; and the Supporting Research and Technology (SR&T) program, which includes development of instruments for suborbital flights and potential missions, detector development, complementary laboratory research, and theoretical studies. The SMD also supports the development of decision-making tools for science-based policy and management decisions.
SOMD: To provide many critical enabling capabilities that make possible much of the science, research, and exploration achievements of the rest of NASA. It does this through the three themes of the International Space Station, the Space Shuttle Program, and Flight Support.
NESC: The NESC is an independent organization, which was chartered in the wake of the Space Shuttle Columbia accident to serve as an Agency-wide technical resource focused on engineering excellence. The objective of the NESC is to improve safety by performing in-depth independent engineering assessments, testing, and analysis to uncover technical vulnerabilities and to determine appropriate preventative and corrective actions for problems, trends or issues within NASA's programs, projects and institutions.
This chart represents the compute requirements for three different aspects of spacecraft design: physical model fidelity (e.g. turbulence modeling), probabilistic risk assessment (PRA), and optimization based on the genetic algorithm (GA). Turbulence models are required for all the models between the Euler (inviscid) approximation and the direct Navier-Stokes (DNS) model. For the Euler equations, viscous effects are not simulated; for the DNS model, all the scales (eddies) not dissipated by physical viscosity are resolved, and hence no modeling of unresolved scales is necessary. PRA based on data derived from high-fidelity simulation will typically require several hundred to thousands of complete configuration simulations (labelled “SC”) and encompasses several different disciplines. Optimization of a complete configuration will require GA-based optimization because of the numerous design variables and multiple objective functions; gradient-based methods are not appropriate under these conditions. For simple objective functions and few design variables, about 1,000 function evaluations are required. For many more design variables and multiple objective functions, experience indicates that about 50,000 function evaluations are necessary.
The notes for this slide are already captured in its bullets: going from the capillary level to the cell level, from macroscopic to microscopic scales, and including other systems such as the respiratory system.
During 2004, the model was run in real-time experimentally at 0.25 degree resolution producing remarkable results in hurricane track and intensity forecasting [Atlas et al., 2005]. In 2005, the model horizontal resolution was further doubled reaching 0.125 degrees. This resolution makes the fvGCM comparable to the first mesoscale resolving atmospheric General Circulation Model (GCM) at the Earth Simulator Center (ESC) [Ohfuchi et al., 2004]. Nine 5-day 0.125 degree simulations of three hurricanes (Frances, Ivan and Jeanne) in 2004, which gave comparable track predictions to those at 0.25 degree resolution, are presented first for model validation. Then the focus moves on to the simulations of Catalina eddies and Hawaiian lee vortices. Numerical results show that the model is capable of simulating the formation of these mesoscale vortices, which did not appear in initial conditions, but were generated through the interaction between the synoptic scale flows and surface forcing (e.g., the land-water contrasts and topography). To our knowledge, this is the first such successful demonstration with a global model.
(http://ecco.jpl.nasa.gov/~dimitri/articles/SC05.pdf). NAS and JPL have teamed up to dramatically accelerate the development of a highly complex and unique model of the Earth’s oceans. The ECCO team produces time-evolving, 3D estimates of the global state of the ocean in near real-time. These estimates are obtained by incorporating into the model vast amounts of data - such as sea level, current speed, surface temperature, and salinity-which are gathered from instruments in the ocean and from space satellites like NASA’s TOPEX/Poseidon and JASON. Scientists use these realistic, time-evolving estimates as a practical tool to better understand how the ocean currents affect Earth’s climate, to study the role of the ocean in the Earth’s uptake of carbon dioxide, and more accurately predict events like El Nino and global warming. By using Columbia, researchers now get results in a few months that previously took several years to obtain. NAS supports the ECCO project by solving technical issues such as data transfer and storage, and has developed new methods to allow scientists to visualize their results.
Graphic: Supernova. Type Ia supernovae are the brightest thermonuclear explosions in the universe. Their brilliance rivals that of their host galaxy (or ten billion suns), and they have become important “standard candles” in the quest to measure the size of the universe. The explosion begins as a few hot spots near the center of a white dwarf star experience a runaway in their nuclear energy generation. An unstable front of turbulent combustion races through the star, turning most of it into iron and blowing it apart. A first-principles understanding of these astronomical “bombs” has eluded astrophysicists for decades.
Using the Columbia supercomputer housed at the NASA Advanced Supercomputing (NAS) facility, researchers from the University of California, Santa Cruz and Lawrence Berkeley National Laboratory have simulated the nuclear fusion flame long enough to see its turbulent structure develop. These simulations are quite complex, and only a massively parallel computer with a lot of memory like Columbia is capable of handling the calculation.