How computer simulation is revolutionizing science and engineering [Q&A]


In 2020, companies like Tesla and Aerion (maker of supersonic jets) touted 'digital twins' as cornerstones of their product design prowess.

This concept of representing physical objects digitally is an extension of the broader computer simulation industry, which has been decades in the making. If 'software is eating the world', it seems that in the applied sciences computer simulation has become the standard utensil.

One interesting company, and one addressing the vast market of enterprises for which simulation is fundamental to digital R&D, is San Francisco-based Rescale. The company today announced $50 million in Series C funding, with new investors including Samsung and NVIDIA.

We spoke with Rescale CEO Joris Poort to get some insight into the rise of computer simulation and how his company is arming research engineers in these algorithmically intensive domains with a cloud platform designed specifically for this type of workload.

BN: Can you tell us a little bit about your backstory and how that led you to the idea for Rescale?

JP: I was an engineer at Boeing working on the 787 Dreamliner. We were trying to apply machine learning to optimize a composite wing design. It was a very sophisticated challenge with many different layers -- more than 20 million variables spanning everything from aerodynamics to structural design to manufacturing constraints, you name it. Ultimately our efforts reduced the wing's weight by more than 150 pounds, which saved Boeing more than one hundred million dollars in the final design delivered to the airlines.

In that process we ran into huge challenges getting the compute infrastructure set up to run all the simulations we needed, as well as the complexity of configuring the simulation software itself. There really was no platform built for research scientists doing these types of simulations in the cloud, so we saw an opportunity to start a company around enabling these algorithmically complex simulations.

The old way of building products was to physically make something and see if it worked. As computers got more powerful, engineers could instead work with digital representations of whatever they were trying to build. We saw a massive opportunity to support people running these types of workloads so they didn’t have to set everything up themselves, as we did at Boeing. When we started the company, we were lucky to be funded by investors like Jeff Bezos, Microsoft, Richard Branson, Peter Thiel and many others who saw that massive opportunity as well.

BN: So what is Rescale doing today?

JP: We're really focused on accelerating cloud adoption for the science and engineering community, with two goals in mind. First is removing the hardware performance constraints of relying solely on on-premises infrastructure, and second is enabling the research collaboration that’s only possible with cloud connectivity. Today, only 20 percent of high performance computing (HPC) workloads run in the cloud; the industry dramatically lags Global 2000 enterprise cloud adoption, for many reasons. Rescale was founded to bring HPC workloads to the cloud -- to lower costs, accelerate R&D innovation, power faster computer simulations, and allow the science and research community to take advantage of the latest specialized architectures for machine learning and artificial intelligence without massive capital investments in bespoke new data centers.

The Rescale platform is the scientific community’s first cloud platform optimized for algorithmically complex workloads, including simulation and artificial intelligence, with integrations for more than 600 of the world’s most popular HPC software applications and more than 80 specialized hardware architectures. Rescale allows any scientist or engineer to run any workload on any major public cloud, including AWS, Google Cloud, IBM, Microsoft Azure, Oracle, and more.

BN: What is it about the HPC field that has made it more cautious about moving to the cloud?

JP: It's very complicated. About $20 billion is spent every year building on-prem high performance computing data centers; that's what companies in research and science are used to. In these domains the bar for performance is very high, and there is a lot of highly proprietary data with special security considerations.

There was previously a raw technical advantage to running HPC on-prem. People used to Cray supercomputers were slow to move to the cloud because they wanted that type of specialized hardware. Fast forward to two years ago, and the popular public clouds have invested heavily in equipment that rivals the best you could get on-prem, so that objection no longer applies.

But what's really driving our success is that researchers and engineers want to be able to do their work faster. Senior R&D leadership is shifting very rapidly from caution about the inhibitors to cloud adoption toward a much sharper focus on its upside: speed to market, design and iteration cycles, and engineering productivity -- all the same reasons the cloud is winning out as the platform for the enterprise. These organizations are recognizing that their expensive PhD science engineers are worth far more to them speeding up simulations and exploring design outcomes than figuring out how to set up software and hardware.

Image credit: design36/depositphotos.com
