High-Performance Computing: A Catalyst for Change

As an Intel Fellow and Chief Technical Officer for the HPC Ecosystem at Intel, Mark Seager has a unique perspective on high-performance computing. He is glad to offer practical advice for executives who want to begin investing in HPC or amplify their current efforts. But what really gets Seager excited is the potential to apply HPC to address global challenges like cancer. Recently, he shared his thought-provoking views in an interview with Dimensions.


DIMENSIONS: As the visionary for high-performance computing (HPC) at Intel, what are the major issues you focus on? What are the key challenges in HPC today?

MARK SEAGER: While we’ve made tremendous progress in HPC over the last 10 years, there is still huge potential for more. Our goal at Intel is to develop technologies that support exascale computing — where a billion billion (10^18) calculations can be made every second. To reach that goal, we clearly need to design more efficient systems that improve performance while reducing power consumption. We need to improve our ability to program highly parallel systems — with more efficient message passing and improved threading — to enable parallel computing built on Intel’s Scalable System Framework. Finally, we must develop computing systems with greater resiliency, so that very large numerical problems can run to completion despite component failures. If we can make progress on all these fronts, we can achieve the speed and efficiency improvements needed to support exascale computing.
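
To make the message-passing and threading combination concrete, here is a minimal hybrid MPI + OpenMP sketch of the general pattern Seager describes. It is an illustration only, not Intel framework code; the workload and sizes are invented.

```c
/* Minimal hybrid MPI + OpenMP sketch: each rank sums a local slice of
 * work with threads, then the partial sums are combined with a single
 * message-passing reduction. Compile with an MPI wrapper, e.g.:
 *   mpicc -fopenmp hybrid_sum.c -o hybrid_sum
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define LOCAL_N 1000000  /* illustrative per-rank workload */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* On-node parallelism: OpenMP threads share the local reduction. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+ : local_sum)
    for (long i = 0; i < LOCAL_N; i++)
        local_sum += 1.0 / (double)((long)rank * LOCAL_N + i + 1);

    /* Inter-node parallelism: one collective message-passing step. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}
```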

DIMENSIONS: There’s so much buzz in the business press about HPC today. What are the biggest misconceptions executives have about high-performance computing?

MS: Probably the biggest misconception is, “My company doesn’t need to invest in HPC.” The truth is that the majority of businesses can leverage the power of high-performance computing to manage their core tasks better, faster and less expensively.

Let’s take the example of product design and development. Simulation has cut a great deal of time and cost out of the process by allowing engineers to solve problems in the virtual world, instead of building and testing physical prototypes. I think it’s safe to say that simulation has shortened the average design cycle by 30 percent. In today’s environment of intense competition and more frequent product launches, that’s a significant advantage. However, to fully capitalize on simulation, most engineering teams need to work in a high-performance computing environment that’s capable of handling large problems within relatively short design cycles.

Another example is Big Data. The Internet of Things is creating huge amounts of information about what customers want and how products are performing in the field. While this is certainly a positive thing, sometimes the sheer volume of information is overwhelming. For instance, when GE collects in-flight data on its jet engines — their stress level, temperature, noise, vibration and other characteristics — more than a terabyte of data is generated per flight. That’s an extreme example, but today every company is collecting a lot of information, and HPC can help store and analyze it much faster and more cost-effectively.
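
As a toy illustration of the kind of analysis HPC enables on telemetry like this, the sketch below scans a large batch of sensor samples in parallel and flags over-temperature readings. The record layout, threshold and numbers are invented; a production pipeline would stream data from a parallel file system rather than generate it in place.

```c
/* Illustrative sketch: a threaded pass over in-flight telemetry that
 * flags over-temperature samples and accumulates summary statistics.
 * Compile with, e.g.: cc -fopenmp telemetry_scan.c -o telemetry_scan
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    double temperature;  /* degrees C */
    double vibration;    /* arbitrary units */
} Sample;

#define N_SAMPLES 10000000
#define TEMP_LIMIT 900.0  /* hypothetical alert threshold */

int main(void)
{
    Sample *s = malloc(N_SAMPLES * sizeof *s);
    if (!s) return 1;

    /* Stand-in for loading a flight's worth of sensor data. */
    for (long i = 0; i < N_SAMPLES; i++) {
        s[i].temperature = 600.0 + (double)(i % 400);
        s[i].vibration = (double)(i % 97);
    }

    long alerts = 0;
    double vib_sum = 0.0;

    /* Each thread scans its own slice; reductions combine results. */
    #pragma omp parallel for reduction(+ : alerts, vib_sum)
    for (long i = 0; i < N_SAMPLES; i++) {
        if (s[i].temperature > TEMP_LIMIT)
            alerts++;
        vib_sum += s[i].vibration;
    }

    printf("over-temperature samples: %ld\n", alerts);
    printf("mean vibration: %f\n", vib_sum / N_SAMPLES);
    free(s);
    return 0;
}
```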

"Everything around us is going to become more intelligent and more interactive."

DIMENSIONS: As companies begin their HPC journey, what’s the biggest obstacle to getting the capability up and running? How can executives overcome this obstacle?

MS: The biggest challenge is our tendency, as executives, to require a complete business case — and a demonstrated return on investment — before we make any kind of move. Most management teams ask the IT staff to prove that HPC will deliver a “win” before a new system is approved. Certainly executives need reassurance that there will be business benefits, but this cannot become a drawn-out process that keeps the company from moving forward. Agility and speed are needed to keep up with the evolution of HPC and avoid being left behind.

I also believe we’re going to see artificial intelligence become more commonplace and more practical. Right now, large enterprises design many products via simulation, and many of those simulations are still iterative and require manual human interaction. What if the design process itself became more automated and machine-led? Human designers have built-in limitations; for example, they weigh a product’s look and aesthetics. Artificial intelligence is not bound by that approach: it takes the design parameters and arrives at a solution constrained only by the specified design criteria, as the sketch below illustrates. We could see some dramatic new breakthrough products if we can incorporate artificial intelligence and HPC into different phases of product development.
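
Here is a minimal sketch of that idea: a machine-led search that samples candidate designs at random and keeps the cheapest one meeting the stated criteria. The “design” (a beam’s width and height) and its cost and constraint formulas are invented placeholders for a real simulation-in-the-loop optimizer.

```c
/* Toy sketch of machine-led design search: random sampling within
 * stated design criteria, keeping the best feasible candidate.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct { double width, height; } Design;

/* Hypothetical objective: material used (smaller is better). */
static double cost(Design d) { return d.width * d.height; }

/* Hypothetical design criterion: bending capacity w*h^2 must
 * exceed a required value, else the candidate is infeasible. */
static int feasible(Design d)
{
    return d.width * d.height * d.height >= 50.0;
}

static double rand_in(double lo, double hi)
{
    return lo + (hi - lo) * ((double)rand() / RAND_MAX);
}

int main(void)
{
    srand(42);
    Design best = { 10.0, 10.0 };  /* feasible starting point */

    for (int trial = 0; trial < 100000; trial++) {
        Design d = { rand_in(0.1, 10.0), rand_in(0.1, 10.0) };
        if (feasible(d) && cost(d) < cost(best))
            best = d;  /* keep the cheapest design meeting criteria */
    }

    printf("best design: width=%.3f height=%.3f cost=%.3f\n",
           best.width, best.height, cost(best));
    return 0;
}
```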

We’re also going to see more simulation capabilities integrated into the product development process. Already we’ve seen simulation create a dramatic impact, but in the future simulation will not be a separate step. It will be built right into design software, so that scenarios can be modeled, and results realized, in real time at every stage of product development.

Finally, additive manufacturing — or 3-D printing — is set to revolutionize how we make products. As the maker movement takes hold, manufacturing is becoming democratized. That means, eventually, simulation software and other professional design tools will need to be available and easily accessible to entry-level users. That’s going to create an incredible depth and breadth of engineering data, including open source and inexpensive proprietary design IP, and HPC can help facilitate how all that information gets stored and processed.

When the above trends are coupled with virtual and augmented reality, the time needed to design new products can be radically reduced, because designers will be able to interact naturally with high-resolution, time-dependent 3-D data. With AI-generated augmentation of the virtual scene, data trends, anomalies and suggestions for improvement will bring deep domain expertise to a wider range of users. This will drive HPC adoption beyond the traditional white-lab-coat elite and contribute to the democratization of manufacturing.

Intel at a Glance

2015 revenues: $55.4 billion

Number of employees: 107,300

Headquarters: Santa Clara, California

"High-performance computing can contribute to solving many global challenges that involve studying large amounts of data — including climate change, energy discovery, national security, and nutrition and hunger."

Intel® Xeon Phi™
Supported by a comprehensive technology roadmap and a robust ecosystem, the Intel® Xeon Phi™ processor is a future-ready solution that maximizes your return on investment through open-standards code that is flexible, portable and reusable.

DIMENSIONS: What excites you most about the future of high-performance computing?

MS: Clearly, businesses are going to continue to realize significant financial and strategic benefits from their use of HPC. But, personally, I’m really passionate about the power of HPC to improve quality of life by extending its use to broader challenges like human health and disease.

For instance, the National Cancer Institute’s Moonshot initiative was created with $1 billion in U.S. government funding and the goal of dramatically advancing cancer treatment research. There is so much data out there about the genetic characteristics of cancer patients, historic responses to certain treatments, geographic concentrations of cancer patients and other relevant factors. But it’s impossible for human researchers to sift through this mostly paper-based information, separate the critical from the trivial, and draw reasonable conclusions. To accomplish that, we need artificial intelligence and analytics, supported by HPC. That’s the only way health-related projects like this can digitize patient records with informative annotations and identify patterns across large numbers of patients and years of treatment.

High-performance computing can contribute to solving many global challenges that involve studying large amounts of data — including climate change, energy discovery, national security, and nutrition and hunger. The implications are enormous.

I recently saw a movie about a team of investigative journalists and their search for the truth. They were going through all this paper — public records, newspaper archives, church directories — and trying to make connections. I thought to myself, “That era is coming to an end.” And it should come to an end, because there is so much latent information and insight locked up in paper records. Today, we have a wealth of technology tools for scanning digital data, extracting information, making connections between records, finding patterns or anomalies, and analyzing trends. Soon those tools might even be able to make autonomous recommendations via machine learning and cognitive computing. As that happens, it’s going to accelerate our progress in many avenues of research that have the potential to change our lives. Without high-performance computing, that future would not be possible — and that’s why I find my work at Intel so gratifying.
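
As a small illustration of the pattern- and anomaly-finding tools described here, the sketch below flags records whose values lie more than three standard deviations from the mean. The data is synthetic with one planted outlier; real record-linkage and machine-learning pipelines are far richer than this.

```c
/* Minimal z-score anomaly scan over a set of record values.
 * Compile with, e.g.: cc anomaly.c -o anomaly -lm
 */
#include <math.h>
#include <stdio.h>

#define N 12

int main(void)
{
    /* Synthetic "digitized record" values with one planted outlier. */
    double v[N] = { 10, 11, 9, 10, 12, 10, 9, 11, 10, 95, 11, 10 };

    double mean = 0.0;
    for (int i = 0; i < N; i++) mean += v[i];
    mean /= N;

    double var = 0.0;
    for (int i = 0; i < N; i++) var += (v[i] - mean) * (v[i] - mean);
    double sd = sqrt(var / N);

    /* Flag records far from the bulk of the data. */
    for (int i = 0; i < N; i++) {
        double z = (v[i] - mean) / sd;
        if (fabs(z) > 3.0)
            printf("record %d is anomalous (z = %.2f)\n", i, z);
    }
    return 0;
}
```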

About Mark Seager

Mark Seager leads HPC strategy for Intel’s Enterprise and Government Group and represents Intel on the OpenSFS board of directors. At Intel, he is working on an ecosystem approach to develop and build HPC systems with exascale capabilities. Previously, he was assistant department head for Advanced Computing Technology within the Integrated Computing and Communications department at Lawrence Livermore National Laboratory. He received a B.S. degree in mathematics and astrophysics from the University of New Mexico at Albuquerque, as well as a Ph.D. in numerical analysis from the University of Texas at Austin.
