AGENDA
Session Type:
- Executive Keynotes
- IDEAS Workshop
- IDEAS Fountain
Workshop Tracks:
- Voltage Variability & Timing
- RTL-driven Power Efficiency & IR Sign-off Coverage
- Power Integrity Signoff for Complex SoCs
- Multiphysics Solutions by Ansys
- 3D-IC Electrothermal & Electromagnetics
- Highlighted Solutions
Opening Keynote: Accelerating Moore and Beyond Moore with Multiphysics
Presented By: Vic Kulkarni, VP, Chief Strategist, Ansys
Executive Keynote: TSMC & ANSYS – A Partnership for Your Creativity
Presented By: Suk Lee, Senior Director, Design Infrastructure Management Division, TSMC
The presentation starts with highlights of TSMC's trinity of strengths: Manufacturing Excellence, Technology Leadership, and Customer Trust, followed by an overview of the Open Innovation Platform (OIP) ecosystem and the ANSYS/TSMC collaboration over the years. The latest updates on silicon manufacturing capability and process technologies are also presented. Last but not least, 3DIC stacking and advanced packaging technologies are discussed, along with the latest ANSYS/TSMC design solutions that enable both silicon and 3DIC design for our mutual customers.
Executive Keynote: Accelerating AI Compute with Wafer Scale
Presented By: Dhiraj Mallick, VP Engineering and Business Development, Cerebras Systems
AI compute is the most important computational workload of our generation. AI has risen from obscurity to top-of-mind, with widespread and growing applications. However, it is profoundly computationally intensive. A report by OpenAI shows that the compute required to train the largest models is doubling every 3.5 months, a rate 25,000 times faster than Moore's Law.
This voracious demand for compute means that AI is constrained not by applications or ideas, but by the availability of compute. Testing a single new hypothesis, that is, training a new model, takes weeks or months and can cost hundreds of thousands of dollars in compute time. This is a significant drag on the pace of innovation. Google, Facebook, and others have noted that long training time is the key impediment to progress in AI: many great ideas are ignored because they take too long to train.
Dhiraj Mallick, VP of Engineering and Business Development at Cerebras Systems, will discuss how AI's true potential can be realized by eliminating this primary impediment to the advancement of AI: reducing the time it takes to train models from months to minutes and from weeks to seconds. Cerebras recently introduced the CS-1, which comprises the Wafer Scale Engine (WSE), the first and only trillion-transistor processor for AI applications; the CS-1 system, which delivers power, cooling, and data to the WSE; and the Cerebras software platform, which enables quick deployment of the full system and allows researchers to use their existing software models without modification.
In this session, Mallick will share his perspective on the state of the AI industry and the technologies that will enable AI to continue to rapidly grow and evolve.
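To make the growth-rate comparison above concrete, the short sketch below works through the underlying exponential arithmetic. The 24-month Moore's Law doubling period and the 72-month window are illustrative assumptions of mine (the resulting ratio depends strongly on the window chosen and is not meant to reproduce the 25,000x figure quoted above); only the ~3.5-month AI-compute doubling period comes from the abstract.

```python
# Back-of-the-envelope comparison of two exponential growth rates.
# Assumptions: Moore's Law doubling every 24 months and a 72-month window
# (both illustrative); AI training compute doubling every 3.5 months (from the abstract).

def growth(months, doubling_period_months):
    """Total growth factor after `months`, given a fixed doubling period."""
    return 2 ** (months / doubling_period_months)

window = 72  # months, assumed for illustration
ai_growth = growth(window, 3.5)
moore_growth = growth(window, 24.0)
print(f"AI compute: ~{ai_growth:,.0f}x   Moore's Law: ~{moore_growth:.0f}x   "
      f"ratio: ~{ai_growth / moore_growth:,.0f}x over {window} months")
```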
Executive Keynote: 2.5D and 3D – The Road Ahead
Presented By: Vicki Mitchell, VP Engineering, Central Engineering Systems Group, Arm
Rob Harrison, Sr. Director, Systems Implementation, Arm
Moore's law may be slowing; however, it is still expected to have life for the next several years, enabled primarily by advances in patterning such as EUV. That said, logic performance at the same power is also slowing down. As we enter this fifth wave of computing, we are witnessing massive demand for ever-increasing computational power. Edge compute, 5G and beyond, IoT, data, and AI, to name but a few, are changing the way we think about designing IP and systems to solve the workloads of tomorrow. We see more and more examples of systems being built with the integration techniques enabled by 2.5D and 3D connectivity. The heterogeneous integration now unfolding is being accelerated by 2.5D and 3D technology. This creates challenges around system integration and chip design, which must take package analysis and design into account. New ways of defining systems are required, along with new considerations and tooling for design and analysis, to deliver these complex but optimized solutions. We will discuss the challenges ahead.
Methodology for Accurate Analysis of Dynamic Voltage Drop Induced Clock Jitter for Improved PPA
Presented By: Google
Accelerating Timing Closure for ADAS SoCs with Transistor-Accurate Path-Based Analysis
Presented By: STMicroelectronics
Fast and Accurate Modeling of Dynamic Voltage Drop Impact on Timing Signoff
Presented By: NVIDIA
Physical Design-aware Early Prototyping and IREM Analysis
Presented By: Samsung
Pre-RTL Power Estimation Model Based on Fine-grained Analytics
Presented By: Intel
Novel RTL Power Regression and Minimization Workflow for Mobile GPU Cores
Presented By: Qualcomm
Scoring Vectors for IR Sign-Off Using Power Weighted Toggle Coverage Metrics
Presented By: MediaTek
Cycle Selection for Robust IR Sign-Off Using Ansys Seascape Advanced Power Analytics
Presented By: NVIDIA
Voltage-Drop-Aware Methodology for Scan-Chain Grouping and PDN Weakness Prediction Using Build Quality Metrics
Presented By: Broadcom
Power Integrity Signoff Flow for Complex SoCs
Presented By: Synaptics
Power Integrity Flow & Analyses for Mixed Signal NVM Flash Products
Presented By: STMicroelectronics
Generation of Accurate Thermal Views of Standard Cells for SoC-Level Thermal-aware EM Analysis
Presented By: Samsung
Simulation Driven Approach to Analyzing Side Channel Leakage Vulnerability in a Pre-Silicon Design Flow
Presented By: Ansys
IDEAS Fountain Panel Discussion - Women In Technology Accelerating Diversity and Inclusion
Presented By: Semiconductor Engineering, Synopsys, Ansys, Bloom Energy, Science Applications International Corporation
IDEAS Fountain Roundtable - Shifting Left with Moore and Beyond Moore
Presented By: Annapurna Labs, Amazon, Ansys, Arm, MediaTek, Mellanox Technologies (NVIDIA)
Executive Keynote: 5G and Beyond: Future Technology and Research Challenges and Opportunities
Presented By: Mallik Tatipamula, CTO, Ericsson
This presentation details key industry and market trends for 5G and beyond, and corresponding technology, architecture and research challenges and opportunities.
Executive Keynote: Building a Superconducting Quantum Annealing Processor
Presented By: Allison MacDonald, Lead Experimental Physicist, D-Wave
As we approach the limitations of Moore's law scaling, quantum computing is gaining traction as an alternate information processing paradigm. This talk will examine the quantum annealing algorithm, discuss D-Wave's implementation of a 2000-qubit processor and introduce a few real-world problems that have been investigated with that processor.
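For context on what a quantum annealer actually optimizes, here is a toy, purely classical sketch: it brute-forces the minimum of a small Ising energy function of the kind annealing hardware is built to minimize. The bias and coupling values are arbitrary illustrations and have no connection to D-Wave's processor.

```python
# Toy illustration of the optimization problem quantum annealing addresses:
# find spins s_i in {-1, +1} minimizing E(s) = sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j.
# The h and J values below are arbitrary, chosen only for illustration.
from itertools import product

h = [0.5, -0.2, 0.1, -0.4]                      # per-spin biases (assumed)
J = {(0, 1): -1.0, (1, 2): 0.8, (2, 3): -0.6}   # pairwise couplings (assumed)

def energy(s):
    e = sum(h[i] * s[i] for i in range(len(s)))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

# Classical brute force over all 2^n spin configurations; an annealer searches
# this energy landscape physically rather than by enumeration.
best = min(product((-1, 1), repeat=len(h)), key=energy)
print("ground state:", best, "energy:", energy(best))
```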
Executive Keynote: Abundant-Data Computing: The N3XT 1,000X
Presented By: Subhasish Mitra, Professor, Stanford University
The world’s appetite for analyzing massive amounts of data is growing dramatically. The computation demands of these abundant-data applications, such as deep learning, far exceed the capabilities of today’s computing systems, and can no longer be met by isolated improvements in transistor technologies, memories or integrated circuit architectures alone. We must create transformative NanoSystems which exploit unique properties of underlying nanotechnologies to implement new architectures. This talk will present the N3XT (Nano-Engineered Computing Systems Technology) approach that enables such NanoSystems through: (i) new computing system architectures leveraging emerging (logic and memory) nanotechnologies and their dense 3D integration with fine-grained connectivity for computation immersed in memory, (ii) new logic devices (such as carbon nanotube field-effect transistors for high-speed and low-energy circuits) as well as high-density non-volatile memory (such as resistive RAM that can store multiple bits inside each memory cell), amenable to (iii) ultra-dense (monolithic) 3D integration of thin layers of logic and memory devices that are fabricated at a low temperature. N3XT is not an academic concept -- N3XT hardware prototypes have been demonstrated in commercial silicon facilities. N3XT NanoSystems target 1,000X system-level energy-delay-product benefits especially for abundant-data applications. Such massive benefits enable coming generations of applications that push new frontiers, from deeply-embedded computing systems all the way to the cloud.
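As a quick reference for the 1,000X target mentioned above, the sketch below spells out the energy-delay-product (EDP) metric, i.e. energy multiplied by delay. The particular split into a 50x energy gain and a 20x delay gain is hypothetical, chosen only to show how the two factors multiply.

```python
# Energy-delay product (EDP) = energy consumed * execution delay.
# The specific 50x / 20x split below is hypothetical, for illustration only.

def edp(energy_joules, delay_seconds):
    return energy_joules * delay_seconds

baseline = edp(energy_joules=1.0, delay_seconds=1.0)            # normalized baseline system
improved = edp(energy_joules=1.0 / 50, delay_seconds=1.0 / 20)  # assumed 50x energy, 20x delay gains
print(f"EDP improvement: {baseline / improved:.0f}x")           # 50 * 20 = 1000x
```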
Executive Keynote: Securing Systems for Future Defense Applications
Presented By: Len Orlando III, Air Force Research Laboratory Sensors Directorate, Wright-Patterson AFB
Using Multiphysics Analysis to Ensure Data Integrity in 5G Systems
Presented By: Ansys
Changing the Game in Autonomous Vehicles
Presented By: Ansys
Ansys Cloud – HPC As Easy As It Should Be
Presented By: Ansys
Reliability Challenges in Advanced Packaging
Presented By: Ansys
Silicon Photonics: A Timely Marriage of Microelectronics and Photonics
Presented By: HPE
Advanced Reliability Analysis for FinFET designs
Presented By: Xilinx
How Electromagnetics are enabling 112Gbps and 224Gbps Serial Links
Presented By: Alphawave
Timing Signoff for Chiplet-Based Designs with Ansys
Presented By: Broadcom
Redefining Sign-off with more SeaScape Products
Presented By: Ansys
2.5D/3DIC multi-physics challenges and solutions
Presented By: Ansys
Unique Advanced Analytics Accelerate Power Grid Sign-off: Ensure Vector Coverage, Identify Key Aggressors
Presented By: Ansys
A Review of HPC Technologies in Ansys HFSS
Presented By: Ansys
Emulation-based Power Analysis using Real Scenario and Workload Applications
Presented By: Mentor Graphics
Enabling Shift-Left in Design Closure through Analysis-Driven Optimization
Presented By: Synopsys
Ansys On-Chip Electromagnetic Tools - Innovations & Improvements
Presented By: Ansys
Executive Keynote: Using Artificial Intelligence in Engineering Simulation
Presented By: Prith Banerjee, CTO, Ansys
Over the past 50 years, Ansys has become a leader in engineering simulation software. The world around us is governed by the laws of physics, which are captured by equations that model various physics; we solve these equations using numerical methods such as finite element analysis and finite difference methods. Over the same 50 years, the world of Artificial Intelligence has progressed from Expert Systems to Machine Learning to Deep Learning. In this talk we will explore the use of AI, Machine Learning, and Deep Learning to improve engineering simulation around four use cases: (1) Improving Customer Productivity, (2) Augmented Simulation, (3) Revolutionizing Engineering Design, and (4) Business Intelligence. We will discuss the use of data-driven and physics-informed neural network methods for simulation, as well as machine-learning-based partial differential equation solvers that can speed up engineering simulation by several orders of magnitude. We will show early results of our AI/ML journey in the world of simulation.
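As a toy illustration of the data-driven simulation idea mentioned in the abstract, the sketch below fits a cheap surrogate model to a handful of samples from an "expensive" solver and then sweeps thousands of design points at negligible cost. The stand-in solver, the 1-D design variable, and the polynomial surrogate are simplifications of mine, not anything drawn from Ansys products.

```python
# Toy sketch of a data-driven surrogate for an expensive simulation.
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly physics solve (e.g., minutes of FEA per sample)."""
    return np.sin(2 * np.pi * x) + 0.5 * x  # assumed response, for illustration

# 1) Run the "real" solver at a few training points.
x_train = np.linspace(0.0, 1.0, 12)
y_train = expensive_simulation(x_train)

# 2) Fit a cheap surrogate (polynomial least squares here; kriging or neural
#    networks would be typical choices in practice).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=7))

# 3) Sweep thousands of candidate designs using the surrogate instead of the solver.
x_sweep = np.linspace(0.0, 1.0, 5000)
y_pred = surrogate(x_sweep)

err = np.max(np.abs(y_pred - expensive_simulation(x_sweep)))
print(f"max surrogate error over the sweep: {err:.3e}")
```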