Exascale computing needs more funding, say federal computer scientists


Funding for Energy Department supercomputing efforts would need to grow by at least $400 million annually for an exascale computer to be feasible by 2020, a computer scientist told a May 22 House hearing.

"At that funding level, we think it's feasible--not guaranteed, but feasible--to deploy a system by 2020. Of course, we made those estimates a few years ago when we had more runway than we have now," said Rick Stevens, associate laboratory director at Argonne National Laboratory. At current funding levels, the United States wouldn't have an exascale computer until the middle of the next decade, Stevens added.

Fiscal 2014 budget proposal documents show the DOE Office of Science requesting $465.59 million for the Advanced Scientific Computing Research program and the National Nuclear Security Administration requesting $401.04 million for the Advanced Simulation and Computing Campaign. NNSA relies on supercomputers to run simulations in lieu of live tests for assessing the reliability of the nuclear weapon stockpile.

Both Japan and China have large government investments in exascale development, Stevens told the House Science, Space, and Technology subcommittee on energy. He added that under current funding levels, China will probably reach exascale computing years ahead of the United States.

An exascale computer would have a thousand times the capacity of the first petascale computer, which came online in 2008. Exascale capability is important to American economic competitiveness, national security and healthcare advancements, witnesses told the House panel. At current levels of industry research investment, the private sector is not likely to reach exascale capability until after 2020, Stevens said.

Achieving exascale capability requires advances in several areas of computing, witnesses noted.

"Current system architectures today can't simply be scaled up to produce a useable and cost-effective system," said Dona Crawford, associate director for computation at Lawrence Livermore National Laboratory. In principle, a petascale system could be simply expanded until it could execute an exaflop--1018 floating point operations per second--but that machine "would fill the room and would take 100 megawatts of power, and that's not a cost-effective system," Crawford added.

For more:
- go to the hearing webpage (prepared testimony and webcast available)
