Realizing the vision of numerically simulating a complete aircraft across its flight envelope, optimizing an advanced configuration, or certifying an autonomous system will demand greater use of supercomputers. But high-performance computing is approaching a technological cusp, and it is not clear what shape the next generation of supercomputers will take.
Aerospace does not rank highly in supercomputer ownership, according to the benchmark Top 500 list. NASA’s Pleiades at Ames Research Center is ranked 21st, well behind the fastest machine, China’s Tianhe-2. The Air Force Research Laboratory’s Spirit is 24th and the highest-ranked supercomputer owned by a manufacturer is Airbus’s HPC4, at 72 on the list.
Tianhe-2 has 3.12 million computing cores and a benchmarked performance of almost 33,900 teraflops—trillion floating-point operations per second—or 33.9 petaflops. Second fastest, at 17.6 petaflops, is Oak Ridge National Laboratory’s Titan, which is set to be overtaken in 2015 by Trinity, a new Cray computer at Los Alamos National Laboratory for certifying the U.S. nuclear stockpile through simulation.
The U.S. Defense Department has a network of high-performance computing centers, but increasingly restrictive computer security is keeping scientists and engineers from accessing supercomputers in their workplaces. So the Pentagon is deploying a “software-as-a-service” web portal providing secure access via browser to high-performance computing and computational engineering tools.
The Pentagon also is fielding new multi-physics tools for use in acquisition, and the Create program is developing a suite of web-based and government-owned applications for the design of aircraft, ships and antennas. Create Air Vehicles comprises DaVinci, a conceptual design tool, and Kestrel and Helios, high-fidelity analysis tools for fixed- and rotary-wing aircraft, respectively.
Supercomputing’s next step is expected to be massively parallel exascale machines capable of 1,000 petaflops, roughly 30 times the benchmarked performance of Tianhe-2. But there are competing candidate technologies with different architectures, including quantum, superconducting, molecular and neuromorphic computing.
Lockheed Martin in 2010 purchased the first commercially available quantum computer from Canada’s D-Wave. The 512-qubit D-Wave 2 is based at the University of Southern California (USC). In 2013, Google joined forces with NASA to install a D-Wave 2 at Ames Research Center. These machines are being used to explore how best to use quantum computers.
In a conventional computer, bits are either 0 or 1, but quantum bits (qubits) can be 0, 1 or a superposition of both states. A register of n qubits can represent 2^n states at once, creating the possibility of scaling computing power exponentially with the number of qubits. Quantum computers may therefore solve certain problems far faster than conventional machines.
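To make that scaling concrete, the short Python sketch below is a purely classical illustration, with no quantum hardware or library involved: it enumerates the basis states of a small hypothetical register and shows that describing n qubits requires tracking 2^n amplitudes, whereas n classical bits hold only one value at a time. The register size and equal-superposition choice are arbitrary, chosen only for demonstration.

import itertools
import math

n = 3  # hypothetical register size chosen for illustration

# An equal superposition assigns amplitude 1/sqrt(2**n) to every basis state.
amplitude = 1.0 / math.sqrt(2 ** n)
state = {bits: amplitude for bits in itertools.product("01", repeat=n)}

print(f"{n} qubits -> {len(state)} basis states tracked at once")
for bits, amp in state.items():
    # Squaring the amplitude gives the probability of observing that state.
    print("".join(bits), f"amplitude {amp:.3f}", f"probability {amp ** 2:.3f}")

Adding one more qubit doubles the number of amplitudes, which is why simulating even modest quantum registers quickly overwhelms conventional machines.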
The D-Wave is an “adiabatic” computer that encodes problems into the lowest-energy state of a quantum system. The machine is best suited to solving optimization problems in which several competing criteria must be balanced, often called “traveling salesman” problems. The computer can test a large number of candidate states in milliseconds to find the best solution, the one with the lowest energy.
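As a rough illustration of the kind of energy-minimization problem an annealer such as the D-Wave targets, the sketch below uses ordinary classical simulated annealing, in standard-library Python, on a small made-up Ising problem. The couplings, problem size and cooling schedule are arbitrary choices for demonstration, not anything derived from D-Wave’s hardware or software.

import math
import random

random.seed(1)

# Hypothetical toy problem: 6 variables with random pairwise couplings.
n = 6
J = {(i, j): random.choice([-1.0, 1.0]) for i in range(n) for j in range(i + 1, n)}

def energy(spins):
    # Ising-style energy: sum of J_ij * s_i * s_j over all couplings.
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

spins = [random.choice([-1, 1]) for _ in range(n)]
current = energy(spins)
best, best_e = list(spins), current
temperature = 5.0
while temperature > 0.01:
    i = random.randrange(n)
    spins[i] = -spins[i]                      # propose flipping one spin
    candidate = energy(spins)
    delta = candidate - current
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        current = candidate                   # accept the move
        if current < best_e:
            best, best_e = list(spins), current
    else:
        spins[i] = -spins[i]                  # reject: undo the flip
    temperature *= 0.99                       # cool gradually

print("lowest energy found:", best_e, "with spins", best)

The idea this classical stand-in shares with adiabatic quantum computing is the encoding of the objective as an energy function whose minimum corresponds to the best trade-off among competing criteria; the quantum annealer searches that same kind of landscape by relaxing into its lowest-energy state rather than by flipping bits one at a time.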
Lockheed is experimenting with the D-Wave for verification and validation of software, a task becoming prohibitively lengthy and costly as systems become more complex. It could also test adaptive, non-deterministic software that cannot be certified by other means, says Ray Johnson, chief technology officer. NASA and Google are looking into machine learning applications. Lockheed, meanwhile, has teamed with the University of Maryland to develop a different type of quantum computing platform that can be used without requiring a deep understanding of its internal workings.
“Classical computing can take us only so far,” says Johnson. “Critical systems will become so complex, problems will take too long or become too expensive to solve using even our most powerful supercomputers. We believe that the next computational revolution will stem from applied quantum science.”