San Diego (CA) – Many scientific advances are observed through a kind of Internet haze that makes them difficult to visualize or fully appreciate. What is happening in San Diego this very day deserves far more appreciation than it might outwardly appear.
The efforts of those at the San Diego Supercomputer Center (SDSC) are nothing short of brilliant. Their teams have rolled out programs that demonstrate just how mature their operations are becoming. They’re pushing not only supercomputing science, but also the real-world application of that science.
The SDSC should be a model for others to emulate. Their software is robust, and it gives customers more of what they need to succeed. Their teams are working together and have rolled out changes allowing unprecedented use of their hardware. Just a few weeks ago they made a sweeping policy change on one of their supercomputers: in response to a major earthquake, the new system can dedicate the machine’s entire compute load to analyzing the event immediately. The analysis is completed within 30 minutes, including graphics and data in human-readable terms. And all of it is now handled automatically, thanks to just some of the advancements they’ve made.
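The article doesn’t describe SDSC’s actual emergency policy in detail, but the core idea of clearing the machine for an urgent analysis job can be sketched roughly. The following Python sketch is purely illustrative (every name here is invented, not SDSC’s software): preemptible jobs are suspended for later requeueing, and the event analysis takes over the freed machine.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Job:
    name: str
    preemptible: bool = True  # most routine jobs can be suspended

@dataclass
class Cluster:
    running: List[Job] = field(default_factory=list)

    def preempt_all(self) -> List[Job]:
        """Suspend every preemptible job, freeing the machine."""
        suspended = [j for j in self.running if j.preemptible]
        self.running = [j for j in self.running if not j.preemptible]
        return suspended

def on_earthquake(cluster: Cluster, analysis: Job) -> List[Job]:
    """Emergency policy sketch: clear the machine, dedicate it to the event.

    Returns the suspended jobs so they can be requeued afterward.
    """
    suspended = cluster.preempt_all()
    cluster.running.append(analysis)
    return suspended
```

In a real center the “suspend” step would involve checkpointing and the scheduler’s own preemption machinery; the point of the sketch is only the policy shape: a physical event automatically reshapes what the whole machine is doing.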
And now today we read about an extension of that capability, this time for more general customers and uses. Customers will now have some control over their own job scheduling. They’ll be able to work within their allocations to schedule last-minute changes and responses to failed jobs. A day-long job might hit a minor issue early on, something that keeps it from succeeding. Under the previous system those hours of compute time would be lost as other jobs moved forward. With the new software, however, the order of a customer’s own jobs can be rearranged within their allocated windows of compute time. This allows for greater debugging, testing, and inter-department scheduling as needs arise.
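The failed-job scenario above can be sketched in a few lines. This is a hypothetical Python toy, not SDSC’s scheduler (the class and job names are invented): each user holds a queue of pending jobs inside their allocation window, and when a job fails early they can resubmit it and promote it ahead of their own remaining work, rather than forfeiting the window.

```python
from collections import deque

class UserQueue:
    """One user's pending jobs within an allocation window (illustrative only)."""

    def __init__(self, jobs):
        self.pending = deque(jobs)

    def submit(self, name):
        """Add a job to the back of this user's queue."""
        self.pending.append(name)

    def promote(self, name):
        """Move a pending job to the front of this user's own queue."""
        self.pending.remove(name)
        self.pending.appendleft(name)

    def next_job(self):
        """Hand the scheduler the user's next job to run."""
        return self.pending.popleft()

q = UserQueue(["sim-a", "sim-b", "sim-c"])
failed = q.next_job()   # "sim-a" starts, then fails early in the run
q.submit(failed)        # the user fixes the input and resubmits it...
q.promote(failed)       # ...then runs it next, ahead of sim-b and sim-c
assert list(q.pending) == ["sim-a", "sim-b", "sim-c"]
```

The key design point the article describes is that reordering happens only within the user’s own allocation, so one customer’s juggling never touches another customer’s share of the machine.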
In short, the SDSC’s software is making the hardware more robust and usable. And it’s all due to teamwork. It’s that same kind of perceptual change that’s often lost in the Internet haze, especially without a hands-on feel for what it takes to make these centers operate. For SDSC to take what were previously hard-and-fast constants in their system and essentially turn them into variables through software demonstrates just how mature and capable their teams have become. They’re stepping back from the physical and moving into the virtual. They’re taking the machine and wielding it as a tool, rather than yielding to it as a set of imposed limitations.
When I visited the National Center for Supercomputing Applications (NCSA) in Urbana, Illinois, this past June, I was absolutely floored by how much teamwork is required to make the facility operate. And when I read about advancements like these from the SDSC, and how that teamwork is being built up internally and then applied to its customers, I realize just how much of an advance this is. And I can visualize where SDSC and other centers will be going in the months to come. Read the NCSA link above to learn how these nationally funded supercomputing centers operate, and to get a real feel for what it’s like.
Faster compute engines will be rolled out almost continually from now until the end of time. But it’s the people behind those compute engines who make everything so special. Desires and abilities once only dreamed about are being made real through cooperative effort and teamwork.