New NCEP operational climate/weather supercomputer


I found this to be interesting and wonder if anybody knows anything more about it:

http://rdhpcs.noaa.gov/wcossfinal/ (official site)

https://www.fbo.gov/index?s=opportunity&mode=form&tab=core&id=95cd1819c245d2491a6c6ad8b720bb95&_cview=0 (overall solicitation)

https://www.fbo.gov/utils/view?id=cc031d6f3b30b8e6f872e4e53136c851 (contracting documents... the statement of work, i.e. the good stuff, starts at page 95)

NOAA is beginning the contracting process for a new operational climate and weather forecasting supercomputer worth up to $500 million. The contracting documents call for two operational supercomputers (a primary and a backup) and facilities to house them. For all you tech geeks out there, there's a lot of detailed information about exactly what type of computing currently powers NCEP's models and what will power them going forward.

The documents seem to indicate that there will be performance upgrades galore, but they don't appear to include any changes to the GFS/NAM/SREF/etc. modeling software itself.

The "performance" aspect of the modeling suite/software is completely different from the scientific/product/quality aspect. There is typically language in the procurement requiring the vendor to speed up our benchmark codes by some factor, either through hardware upgrades or software engineering (most of it typically ends up being hardware, but IBM has done some great software work for us in the past... again, to speed things up or run more efficiently).

The actual changes to the models that would impact the quality of the products put out are handled by scientists within the Environmental Modeling Center (and collaborators, such as folks at NOAA/GSD in Boulder).
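
For anyone curious what a "speed up our benchmark codes by some factor" clause boils down to in practice, here's a minimal Python sketch. The benchmark names, timings, and the 2x factor below are all made up for illustration; the actual benchmark suite and acceptance criteria are spelled out in the statement of work linked above.

```python
# Hypothetical illustration of checking a procurement speedup requirement:
# compare wall-clock times for the same benchmark codes on the current and
# proposed systems. All names and numbers below are invented for the example.

current_seconds = {"global_model_bench": 3600.0, "data_assim_bench": 1800.0}
proposed_seconds = {"global_model_bench": 1500.0, "data_assim_bench": 800.0}

REQUIRED_SPEEDUP = 2.0  # e.g. "benchmarks shall run at least 2x faster"

for name, t_current in current_seconds.items():
    t_proposed = proposed_seconds[name]
    speedup = t_current / t_proposed  # >1 means the proposed system is faster
    verdict = "meets" if speedup >= REQUIRED_SPEEDUP else "misses"
    print(f"{name}: {speedup:.2f}x speedup ({verdict} the {REQUIRED_SPEEDUP}x requirement)")
```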

Does it really need a backup? Can't they just use the extra hardware for some higher res runs or something?

Absolutely. First of all, there was a massive failure back in the 80s due to a fire ... and no backup machine existed to immediately take over. I couldn't even imagine the repercussions if we had something similar happen nowadays.

Secondly, the backup machine gives us an environment exactly like the operational one for developing and testing prototype/future systems (without interfering with or sharing resources with the regularly scheduled jobs, i.e., the operational models).

Does it really need a backup? Can't they just use the extra hardware for some higher res runs or something?

'Backup' computers almost never sit around idle; they're used for all sorts of things, including being made available to other gov't departments and agencies.

When I worked at a heavy-duty D.O.D. computer complex (6 mainframes) in Carderock, Md., I remember the Commerce Dep't being a heavy user of our equipment. Perhaps with Commerce's new equipment the roles will be reversed, lol.

Good news. Maybe after the upgrade they will be able to increase the resolution of the GEFS ensemble and make it a bit more useful.

The GEFS members are tentatively scheduled to get a resolution increase (and a move to the newer version of the GFS model) sometime in the next year or so (I need to check the date). It's been on the table for a while, but has been slowed by a lack of available resources and potential delays in operational products.

DTK,

What is the benefit of upgrades like this? I understand the new machine(s) won't directly affect the scientific/product/quality aspect, but have you heard or do you know how a new supercomputer will help forecasts? Will it enable better software to be run, or more complex modeling to be done?

Thanks

The bigger the machine, the more you can do (so yes, better software and more complex modeling). Just a few obvious examples (a rough sketch of the relative compute cost follows the list):

- Higher resolution / multi-nested

- Larger ensembles

- More complex / compute-intensive parameterizations (convective/precipitation schemes, cloud microphysics/species, land surface, radiation, etc.)

- More complex data assimilation algorithms (model initialization) including ensemble or hybrid methods
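
To put some very rough numbers on why those items eat machine time, here's a back-of-envelope Python sketch. It assumes cost scales with the number of grid points times the number of time steps, and that halving the horizontal grid spacing also roughly halves the stable time step (the usual CFL hand-waving). These are textbook rules of thumb, not figures from the contracting documents.

```python
# Back-of-envelope scaling for the items above. Assumes compute cost is
# proportional to (horizontal grid points) x (time steps) x (ensemble members)
# x (a physics-complexity fudge factor). Rules of thumb only, not NCEP numbers.

def relative_cost(dx_ratio, members=1, physics_factor=1.0):
    """Cost relative to a single baseline run.

    dx_ratio: new grid spacing / old grid spacing (0.5 = doubled resolution)
    members: number of ensemble members
    physics_factor: multiplier for more expensive parameterizations
    """
    grid_points = (1.0 / dx_ratio) ** 2  # points scale as 1/dx in each horizontal direction
    time_steps = 1.0 / dx_ratio          # CFL: a finer grid needs a proportionally shorter step
    return grid_points * time_steps * members * physics_factor

print(relative_cost(1.0))                                   # baseline           -> 1x
print(relative_cost(0.5))                                   # double resolution  -> 8x
print(relative_cost(0.5, members=20))                       # + 20-member ensemble -> 160x
print(relative_cost(0.5, members=20, physics_factor=1.5))   # + pricier physics    -> 240x
```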

Absolutely. First of all, there was a massive failure back in the 80s due to a fire ... and no backup machine existed to immediately take over. I couldn't even imagine the repercussions if we had something similar happen nowadays.

Secondly, the backup machine gives us an environment exactly like the operational one for developing and testing prototype/future systems (without interfering with or sharing resources with the regularly scheduled jobs, i.e., the operational models).

Good deal! I didn't even think of that. Bring it on! :)

The "performance" aspect of the modeling suite/software is completely different from the scientific/product/quality aspect. There is typically language in the procurement requiring the vendor to speed up our benchmark codes by some factor, either through hardware upgrades or software engineering (most of it typically ends up being hardware, but IBM has done some great software work for us in the past... again, to speed things up or run more efficiently).

The actual changes to the models that would impact the quality of the products put out are handled by scientists within the Environmental Modeling Center (and collaborators, such as folks at NOAA/GSD in Boulder).

More nor'easters passing over the benchmark? </:weenie:>

We had a similar problem in the late 1990s when the Cray burned. Timeliness of model output fell to 50-60% for a while afterward. Didn't realize it had happened in the 1980s as well.

We're talking about the same thing (I just knew it happened before I came to NCEP, and didn't realize it was actually in the late 90s).
