August 10, 2020
The enormous increase in computing power from more powerful laptops, workstations and compute clusters means that we can now simulate larger, more complex engineering problems. At Ansys, our developers are constantly updating and optimizing our tools to take advantage of this leap in compute technology, which gives you access to faster processors, storage systems and communication networks.
The key is achieving the right balance of hardware and software to optimize the performance of a high-performance computing (HPC) cluster. A balanced system pairs modern CPUs with advanced instruction sets (such as AVX-512), sufficient RAM and a fast interconnect on the hardware side with optimized contact settings on the software side. Below are specific examples of how up-to-date hardware and code improvements help.
You can often reap the most benefit by upgrading your RAM. A sufficient amount of RAM is critical because it enables the program to hold the solution data in memory (in-core) during a simulation. Otherwise, this data has to be staged on the hard drive (out-of-core), resulting in a significant loss of performance.
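The in-core versus out-of-core distinction can be sketched with a back-of-the-envelope check. The per-degree-of-freedom memory figure below is a hypothetical placeholder, not an Ansys number; real direct sparse solvers vary widely by model and element type.

```python
# Illustrative sketch (not an Ansys formula): estimate whether a direct
# sparse solve fits in RAM, using an assumed per-DOF memory cost.

def solver_memory_gb(dofs, bytes_per_dof=10_000):
    """Rough in-core memory estimate; bytes_per_dof is a hypothetical
    figure and varies widely with mesh topology and solver settings."""
    return dofs * bytes_per_dof / 1e9

def runs_in_core(dofs, ram_gb, bytes_per_dof=10_000):
    # If the factorized matrix does not fit in RAM, the solver must
    # spill to disk (out-of-core), which is typically far slower.
    return solver_memory_gb(dofs, bytes_per_dof) <= ram_gb

print(runs_in_core(2_000_000, ram_gb=64))   # 20 GB needed  -> True
print(runs_in_core(10_000_000, ram_gb=64))  # 100 GB needed -> False
```

The point of the sketch is the cliff edge: once the estimate crosses the installed RAM, performance does not degrade gradually but drops to disk speed.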
You can gain additional acceleration by upgrading the communication networks in large clusters. For example, migrating from a 10 Gbps Ethernet connection to an InfiniBand interconnect could reduce solution times by at least a third.
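A rough transfer-time calculation shows why the interconnect matters for distributed solves. The data volume below is a made-up example, and the link speeds are nominal line rates (10 Gbps Ethernet versus a 200 Gbps InfiniBand HDR link); real MPI performance also depends heavily on latency, which this arithmetic ignores.

```python
# Back-of-the-envelope sketch: time to move a fixed amount of data
# between cluster nodes over two interconnects at nominal line rate.

def transfer_seconds(data_gb, link_gbps):
    """Seconds to move data_gb gigabytes over a link_gbps link."""
    return data_gb * 8 / link_gbps  # GB -> gigabits, then divide by Gbit/s

data_gb = 50  # hypothetical per-step exchange volume between nodes
print(f"10 Gbps Ethernet:   {transfer_seconds(data_gb, 10):.0f} s")   # 40 s
print(f"200 Gbps InfiniBand: {transfer_seconds(data_gb, 200):.0f} s") # 2 s
```

On a communication-bound run, that gap shows up directly in the fraction of wall-clock time the cores spend waiting on the network rather than computing.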
Cutting-edge technology from chip manufacturers now gives you access to chips that support very wide vector instructions. Roughly speaking, the bit count refers to the width of the CPU's vector registers, i.e., how much data each instruction can process at once. So, when you upgrade from 256-bit to 512-bit vector instructions, your runs can be around 15% faster.
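The arithmetic behind the vector width is simple: a double-precision value occupies 64 bits, so the register width divided by 64 gives the number of values one vector instruction can process. This is a minimal illustration of that ratio, not a benchmark.

```python
# How many double-precision (64-bit) values fit in one vector register,
# i.e. how many can be processed by a single vector instruction.

DOUBLE_BITS = 64

def doubles_per_instruction(register_bits):
    return register_bits // DOUBLE_BITS

print(doubles_per_instruction(256))  # AVX2:    4 doubles per instruction
print(doubles_per_instruction(512))  # AVX-512: 8 doubles per instruction
```

Note that doubling the per-instruction throughput rarely doubles wall-clock speed, since memory bandwidth and clock behavior also come into play; that is consistent with the roughly 15% end-to-end gain cited above.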
While we strive to take advantage of improved hardware, our developers are also hard at work developing new, smarter numerical algorithms. Nonlinear contact, for example, places a heavy burden on the solvers, as detecting and evaluating potential contact between bodies requires a very large computational effort. Classifying contacts as small or large sliding, and splitting the contact calculations across multiple compute cores, has reduced the total computational effort, with users seeing up to a 25% improvement in solve times.
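The idea of splitting contact calculations across cores can be sketched generically: partition the candidate node pairs into chunks and check each chunk for proximity in a separate process. This is a toy illustration of the parallelization pattern, not Ansys's contact algorithm; the node coordinates and tolerance are invented for the example.

```python
# Minimal sketch of parallel contact detection: candidate node pairs are
# partitioned into chunks, and each chunk is checked in its own process.
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations
import math

TOLERANCE = 0.15  # hypothetical contact-detection distance

def close_pairs(chunk):
    """Return the pairs in this chunk whose nodes lie within tolerance."""
    return [(i, j) for (i, p), (j, q) in chunk
            if math.dist(p, q) <= TOLERANCE]

def detect_contacts(nodes, workers=4):
    pairs = list(combinations(enumerate(nodes), 2))
    size = max(1, len(pairs) // workers)
    chunks = [pairs[k:k + size] for k in range(0, len(pairs), size)]
    found = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Each chunk is evaluated independently, so the work scales
        # across cores; results are merged afterward.
        for result in pool.map(close_pairs, chunks):
            found.extend(result)
    return sorted(found)

if __name__ == "__main__":
    nodes = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.05, 5.0)]
    print(detect_contacts(nodes))  # [(0, 1), (2, 3)]
```

Because each chunk is independent, the detection phase parallelizes cleanly; the harder part in a real solver, which this sketch omits, is load-balancing the chunks and resolving the detected contacts consistently.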