2022 IEEE CASS Seasonal School: AI/ML for IC Design and EDA

Registration:

Register using the Eventbrite registration link to receive the Zoom link.

Organizers:

Student and postdoc Local Arrangement Co-Chairs will assist with logistics, the e-book, Zoom, etc.

Dates:

  • Nov. 4 and 5: US Central Time 9am - noon
  • Nov. 18 and 19: US Central Time 9am - noon

Overview:

In this CAS Seasonal School, we will invite distinguished speakers from both academia and industry to teach attendees about the state of the art in applying AI/ML to modern chip design and design automation.

This Seasonal School will be held virtually, i.e., no travel is needed for speakers or attendees. We plan to start each day at a time that is feasible in Asia, Europe, and North America (e.g., 7:00am Pacific = 4:00pm CET = 8:30pm IST = 11:00pm in much of Asia). Depending on the level of interest, we will also organize group viewings at UCSD and UT Austin to accommodate students as well as local industry attendees who wish to meet in person.

We plan for a four-day Seasonal School. Each day will feature either three talks with Q&A, or two talks with Q&A plus a lab or demo.

E-book and Contents:

In the e-book, annotated slides from each of the talks below will form a chapter. An Introduction chapter, provided by the organizers, will give an overview of the current state and major threads of AI/ML for IC Design and EDA, and motivate the selection of each day’s topics for the Seasonal School.

Schedule: Nov. 4 and 5; Nov. 18 and 19 (all sessions 9am - noon, US Central Time)

  • Nov. 4: Deep / Reinforcement Learning
    • 9-10am Mark Ren, Senior Manager, NVIDIA Research, “Machine Learning for EDA Optimization” 
    • 10-11am Ismail Bustany, Fellow, AMD, “Learning to Optimize”
    • 11-noon Joe Jiang, Staff Software Engineer and Manager, Google Brain, “Circuit Training: An open-source framework for generating chip floor plans with distributed deep reinforcement learning” [+demo]
  • Nov. 5: Applications / Future Frontiers
    • 9-10am Sachin Sapatnekar, Professor, University of Minnesota, “Automating Analog Layout: Why This Time is Different”
    • 10-11am Sung Kyu Lim, Professor, Georgia Institute of Technology, “Machine Learning-Powered Tools and Methodologies for 3D Integration”
    • 11-noon Shobha Vasudevan, Researcher at Google and Adjunct Professor at UIUC, “ML for Verification”
  • Nov. 18: Standard Platforms for ML in EDA and IC Design
    • 9-10:30am Kerim Kalafala, Senior Technical Staff Member, IBM (co-chair, AI/ML for EDA Special Interest Group, Si2), “Exchanging EDA data for AI/ML using Standard API” [+demo]
    • 10:30-noon Jinwook Jung, Research Staff Member, IBM Research, “IEEE CEDA DATC: Establishing Research Foundations for ML-Enabled EDA and IC Design” [+demo]
  • Nov. 19: Manufacturability, Testing, Reliability, and Security
    • 9-10am Bei Yu, Associate Professor, Chinese University of Hong Kong, “VLSI Mask Optimization: From Shallow To Deep Learning”
    • 10-11am Li-C. Wang, Professor, UC Santa Barbara, “ML for Testing and Yield”
    • 11-noon Muhammad Shafique, Professor of Computer Engineering, NYU Abu Dhabi, “ML for Cross-Layer Reliability and Security”

Talk Abstracts and Speaker Bios:

Mark Ren, Senior Manager, NVIDIA Research, “Machine Learning for EDA Optimization”

Abstract: In this talk, I will discuss interesting ML techniques for challenging EDA optimization problems. I will cover commonly used techniques such as sequential model-based optimization and reinforcement learning, and I will also introduce two promising ones: self-supervised learning (SSL) and gradient-descent-based optimization leveraging deep learning frameworks and architectures. SSL learns the manifold of optimized EDA solution data; conditioned on the problem input, it can directly produce an optimized solution. Gradient-descent-based optimization is very efficient in high-dimensional spaces, and, powered by deep learning frameworks and architectures, it can solve many EDA optimization problems efficiently. I will illustrate the applications of these techniques in various physical design problems and discuss the challenges of applying them. Finally, I will outline three main approaches to integrating ML with conventional EDA algorithms, and explain the importance of integrating ML, as well as GPU acceleration, into EDA.
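
For readers new to the second idea, here is a minimal editorial sketch (not material from the talk) that uses PyTorch's automatic differentiation to optimize a toy four-cell placement by gradient descent; the netlist, the smooth wirelength surrogate, and the penalty weight are all hypothetical choices.

    import torch

    # Hypothetical 2-pin netlist: pairs of cell indices connected by a net.
    nets = [(0, 1), (1, 2), (2, 3), (3, 0)]
    pos = torch.randn(4, 2, requires_grad=True)  # (x, y) position of each cell

    opt = torch.optim.Adam([pos], lr=0.05)
    for step in range(200):
        opt.zero_grad()
        # Differentiable wirelength surrogate: squared length of each net.
        wirelength = sum(((pos[a] - pos[b]) ** 2).sum() for a, b in nets)
        # Quadratic penalty keeping cells inside a [-1, 1] x [-1, 1] die.
        penalty = ((pos.abs() - 1.0).clamp(min=0) ** 2).sum()
        loss = wirelength + 10.0 * penalty
        loss.backward()  # gradients come from the deep learning framework
        opt.step()

Work in this vein typically replaces the toy surrogate with differentiable wirelength, density, and timing models and runs on GPUs.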

Bio:
Haoxing Ren (Mark) leads the Design Automation research group at NVIDIA Research. His research interests are machine learning applications in design automation and GPU-accelerated EDA. Before joining NVIDIA in 2016, he spent 15 years at IBM Microelectronics and IBM Research working on physical design and logic synthesis tools and methodologies for IBM microprocessor and ASIC designs. He received several IBM technical achievement awards, including one for his work on improving microprocessor design productivity. He has published many papers in the field of design automation, including several book chapters on logic synthesis and physical design, and received best paper awards at ISPD 2013, DAC 2019, and TCAD 2021. He earned his PhD in Computer Engineering from the University of Texas at Austin in 2006.

Ismail Bustany, Fellow, AMD, “Learning to Optimize”

Abstract:

Bio: Dr. Ismail Bustany is a Fellow at AMD, where he works on physical design implementation and MLCAD. He has served on the technical program committees of ISPD, ISQED, and DAC, and was the 2019 ISPD General Chair. He currently serves on the organizing committees of ICCAD and SLIP. He organized the 2014 and 2015 ISPD detailed routing-driven placement contests and co-organized the 2017 ICCAD detailed placement contest. His research interests include physical design, computationally efficient optimization algorithms, MLCAD, sparse matrix computation/acceleration, and partitioning algorithms. He earned his B.S. in CSE from UC San Diego and M.S./Ph.D. in EECS from UC Berkeley.

Joe Jiang, Staff Software Engineer and Manager, Google Brain, “Circuit Training: An open-source framework for generating chip floor plans with distributed deep reinforcement learning” [+demo]

Abstract: Chip floorplanning is the engineering task of designing the physical layout of a computer chip. Despite five decades of research, chip floorplanning has defied automation, requiring months of intense effort by physical design engineers to produce manufacturable layouts. Here we present a deep reinforcement learning approach to chip floorplanning. In under six hours, our method automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area. To achieve this, we pose chip floorplanning as a reinforcement learning problem, and develop an edge-based graph convolutional neural network architecture capable of learning rich and transferable representations of the chip. As a result, our method utilizes past experience to become better and faster at solving new instances of the problem, allowing chip design to be performed by artificial agents with more experience than any human designer. Our method was used to design the next generation of Google’s artificial intelligence (AI) accelerators, and has the potential to save thousands of hours of human effort for each new generation. Finally, we believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields.
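
To make the reinforcement learning formulation concrete, here is a deliberately tiny, hypothetical sketch of macro placement as an episodic environment; this is not Circuit Training itself, and the coarse grid, the end-of-episode spread-based reward, and the random stand-in policy are all invented for illustration.

    import random

    GRID = 8  # hypothetical coarse grid over the die

    class ToyFloorplanEnv:
        """Place one macro per step onto a GRID x GRID canvas."""
        def __init__(self, num_macros=5):
            self.num_macros = num_macros

        def reset(self):
            self.placed = []  # (row, col) of macros placed so far
            return tuple(self.placed)

        def step(self, action):
            self.placed.append(divmod(action, GRID))  # decode action to (row, col)
            done = len(self.placed) == self.num_macros
            # Reward only at episode end: negative spread as a crude wirelength proxy.
            if done:
                rows = [r for r, _ in self.placed]
                cols = [c for _, c in self.placed]
                reward = -float((max(rows) - min(rows)) + (max(cols) - min(cols)))
            else:
                reward = 0.0
            return tuple(self.placed), reward, done

    env = ToyFloorplanEnv()
    state, done = env.reset(), False
    while not done:
        action = random.randrange(GRID * GRID)  # stand-in for a learned policy
        state, reward, done = env.step(action)

In the talk's setting, the random stand-in is replaced by a learned policy built on an edge-based graph convolutional network and trained on the episode reward.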

Bio: Wenjie (Joe) Jiang is currently a Tech Lead Manager at Google Research, Brain Team. His research interest is in applying modern machine learning to computer systems and chip design, including physical design (floorplanning, chip placement, and routing) and hardware RTL verification. He worked on the Circuit Training project, which applies deep reinforcement learning to the chip placement problem, and led the adoption of Circuit Training for Google's TPU chips. He also co-founded Design2Vec, an RTL learning method for verifying hardware designs, which has been applied to TPUs. He holds a PhD in computer science from Princeton University.

Sachin Sapatnekar, Professor, University of Minnesota, “Automating Analog Layout: Why This Time is Different”

Abstract: For decades, analog design has stubbornly resisted automation, even as significant parts of digital design flows have embraced it. The reasons for this resistance were rooted in the fact that analog designs were small and “easy” for an expert to comprehend. Despite the efforts of the brightest minds, using the best mathematical techniques of the time, analog EDA tools have struggled in competition with the expert. Has ML changed anything? This talk is based on recent experience with developing ALIGN, an open-source analog layout automation flow that has been applied to a wide range of design types and technology nodes. The talk overviews the lessons learned – the advantages, as well as the perils and pitfalls – of applying ML to analog design to enhance designer productivity.

Bio: Sachin S. Sapatnekar is the Henle Chair in ECE and Distinguished McKnight University Professor at the University of Minnesota, and serves as PI of the ALIGN project. His current research interests include design automation methods for analog and digital circuits, circuit reliability, and algorithms and architectures for machine learning. He is a recipient of the NSF CAREER Award, the SRC Technical Excellence Award, the Semiconductor Industry Association’s University Researcher Award, and 12 Best Paper awards. He has served as Editor-in-Chief of the IEEE Transactions on CAD and General Chair of the ACM/IEEE Design Automation Conference (DAC). He is a Fellow of the IEEE and the ACM.

Sung Kyu Lim, Professor, Georgia Institute of Technology, “Machine Learning-Powered Tools and Methodologies for 3D Integration”

Abstract: 3D integrated circuits are a key technological option for sustaining the Moore’s Law trajectory beyond conventional scaling. In this class, we learn how machine learning algorithms can solve two important physical design problems for 3D ICs. First, we use unsupervised graph learning to conduct tier partitioning in 3D ICs; we discuss how a graph neural network can extract important features from the given circuit and the underlying 3D IC manufacturing technology specifications to guide tier partitioning for PPA optimization. Second, we learn how machine learning is used to predict wire RC parasitics in the final 3D IC layout before attempting the actual physical design; we discuss how such prediction can be done accurately and how this information can be exploited during the subsequent 3D IC physical design for more rigorous PPA optimization.
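
As a hedged illustration of the second topic (an editorial sketch, not the speaker's tool), the snippet below fits a regressor that predicts a net's post-layout capacitance from simple pre-layout features; the feature set, the synthetic training data, and the model choice are all assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # Hypothetical per-net features: [fanout, bbox half-perimeter, crosses a tier cut].
    X = rng.random((1000, 3))
    # Synthetic stand-in for measured post-layout capacitance.
    y = 0.3 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2] + 0.05 * rng.standard_normal(1000)

    model = RandomForestRegressor(n_estimators=100).fit(X[:800], y[:800])
    print("held-out R^2:", model.score(X[800:], y[800:]))  # sanity-check accuracy

In practice, such predicted parasitics can then be fed back into the subsequent 3D physical design for more rigorous PPA optimization.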

Bio: Prof. Sung Kyu Lim received his Ph.D. degree from UCLA in 2000. He joined the School of Electrical and Computer Engineering at the Georgia Institute of Technology in 2001, where he is currently the Motorola Solutions Foundation Professor. His research focuses on architecture, design, and electronic design automation for 2.5D and 3D ICs, and he has published more than 400 papers on these topics. He received Best Paper Awards from the IEEE Transactions on Electromagnetic Compatibility in 2021 and the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2022. He has served as a program manager for the DARPA Microsystems Technology Office (MTO) since 2022.

Shobha Vasudevan, Researcher at Google and Adjunct Professor at UIUC, “ML for Verification”

Abstract:

Bio:

Jinwook Jung, Research Staff Member, IBM Research, “IEEE CEDA DATC: Establishing Research Foundations for ML-Enabled EDA and IC Design” [+demo]

Abstract: Machine learning (ML) for IC design often faces a "small data" challenge: it takes a huge amount of time and effort to run multiple P&R flows with various tool settings, constraints, and parameters in order to obtain useful training data for ML-enabled EDA. In this regard, systematic and scalable execution of hardware design experiments, together with standards for sharing data and models, is an essential element of ML-based EDA and chip design. In this talk, I will present the efforts of the IEEE CEDA Design Automation Technical Committee (DATC) to establish research foundations for ML-enabled EDA and IC design.
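
As one small example of the "systematic and scalable execution" theme, the sketch below logs per-run flow metrics as newline-delimited JSON records so that experiments accumulate into shareable training data; the field names are purely illustrative and do not reproduce the actual METRICS2.1 schema.

    import json
    import time

    # Hypothetical record for one P&R run; see METRICS2.1 for the real schema.
    run = {
        "design": "gcd",
        "flow_stage": "route",
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "parameters": {"clock_period_ns": 1.0, "target_utilization": 0.6},
        "metrics": {"wns_ns": -0.05, "total_power_mw": 12.3, "drc_count": 0},
    }
    with open("run_metrics.jsonl", "a") as f:
        f.write(json.dumps(run) + "\n")  # one JSON record per flow run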

Bio: Jinwook Jung is a Research Staff Member at the IBM Thomas J. Watson Research Center. At IBM, he works to advance design methodologies for AI hardware accelerators and high-performance microprocessors, leveraging machine learning and cloud technologies. He received his Ph.D. degree in EE from KAIST.

Bei Yu, Associate Professor, Chinese University of Hong Kong, “VLSI Mask Optimization: From Shallow To Deep Learning”

Abstract: The continued scaling of integrated circuit technologies, along with increased design complexity, has exacerbated the challenges associated with manufacturability and yield. In today’s semiconductor manufacturing, lithography plays a fundamental role in printing design patterns on silicon. However, the growing complexity and variation of the manufacturing process have tremendously increased lithography modeling and simulation costs; both the role and the cost of mask optimization – now indispensable in the design process – have increased. Parallel to these developments are the recent advancements in machine learning, which have provided a far-reaching, data-driven perspective for problem solving. In this talk, we shed light on recent deep-learning-based approaches that provide a new lens through which to examine traditional mask optimization challenges. We present hotspot detection techniques, leveraging advanced learning paradigms, that have demonstrated unprecedented efficiency. Moreover, we demonstrate the role deep learning can play in optical proximity correction (OPC) by presenting its successful application in our full-stack mask optimization framework.
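
For orientation, hotspot detection is commonly cast as binary classification of rasterized layout clips; the minimal CNN below is an editorial sketch under that framing, with an invented clip size, layer shapes, and random stand-in data, not the models from the talk.

    import torch
    import torch.nn as nn

    # Tiny CNN classifier: layout clip in, hotspot / non-hotspot logits out.
    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 2),  # 64x64 clip pooled twice -> 16x16 maps
    )

    clips = torch.rand(8, 1, 64, 64)    # stand-in for rasterized layout clips
    labels = torch.randint(0, 2, (8,))  # stand-in hotspot labels
    loss = nn.CrossEntropyLoss()(model(clips), labels)
    loss.backward()                     # gradients for one training step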

Bio: Bei Yu is currently an Associate Professor in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his Ph.D. degree in Electrical and Computer Engineering from the University of Texas at Austin in 2014. His current research interests include machine learning with applications in EDA and computer vision. He served as TPC Chair of the 1st ACM/IEEE Workshop on Machine Learning for CAD (MLCAD), has served on the program committees of DAC, ICCAD, DATE, ASPDAC, and ISPD, and serves on the editorial boards of ACM Transactions on Design Automation of Electronic Systems (TODAES) and Integration, the VLSI Journal. He is Editor of the IEEE TCCPS Newsletter.

Prof. Yu has received nine Best Paper Awards (DATE 2022, ICCAD 2021 and 2013, ASPDAC 2021 and 2012, ICTAI 2019, Integration, the VLSI Journal in 2018, ISPD 2017, and the SPIE Advanced Lithography Conference 2016), six other Best Paper Award nominations (DATE 2021, ICCAD 2020, ASPDAC 2019, DAC 2014, ASPDAC 2013, and ICCAD 2011), six ICCAD/ISPD contest awards, the ACM SIGDA Meritorious Service Award, and the IEEE CEDA Ernest S. Kuh Early Career Award.

Li-C. Wang, Professor, UC Santa Barbara, “ML for Testing and Yield”

Abstract: Applying machine learning to test data analytics has been researched for many years. In the field of semiconductor test, data seems abundant and opportunities to take advantage of modern ML technologies seem plentiful. Nonetheless, we have not observed a level of adoption of modern ML in our industry similar to that in applications such as computer vision and language understanding. In view of this gap, this talk discusses the promises of, and barriers to, realizing an ML solution in test data analytics. To overcome the potential barriers, this talk advocates a top-down approach that starts with questions at the operational intelligence level and then sketches a system to provide a specification for the required machine learning services. Only after such a system view is specified can one better understand what ML components are needed and whether they can be realized with current ML technologies and the available data. Practical examples and experimental results are used to illustrate the top-down approach and its key considerations.

Bio: Li-C. Wang is a professor in the ECE department at the University of California, Santa Barbara. He received his PhD in 1996 from the University of Texas at Austin and was previously with the Motorola PowerPC Design Center. He joined UCSB in 2000. During his first decade at UCSB, his research investigated how machine learning might be utilized in design and test flows; in recent years, it has focused on building a virtual assistant for semiconductor test data analytics. He has received 9 Best Paper and 2 Honorable Mention Paper Awards from major conferences, including more recent ones from ITC 2014, VTS 2016, ITC 2018, ITC 2019, ITC 2020, and VLSI-DAT 2019. He is the recipient of the 2010 Technical Excellence Award from the Semiconductor Research Corporation (SRC) and the 2017 IEEE TTTC Bob Madge Innovation Award. He is a Fellow of the IEEE and was General Chair of the International Test Conference (ITC) in 2017 and 2018; he will again be General Chair of ITC in 2023. In 2017, he participated in founding the first ITC Asia conference and served as its General Co-Chair.

Muhammad Shafique, Professor of Computer Engineering, NYU Abu Dhabi, “ML for Cross-Layer Reliability and Security”

Abstract: In the deep nano-scale regime, reliability has emerged as one of the major design issues for high-density integrated systems. Key reliability-related issues include soft errors, high temperature, and aging effects (e.g., NBTI: Negative Bias Temperature Instability), which jeopardize correct application execution. A tremendous amount of research effort has been invested at individual system layers. Moreover, in an era of growing cyber-security threats, modern computing systems experience a wide range of security threats at different layers of the software and hardware stacks. However, considering the escalating reliability and security costs, designing a highly reliable and secure system requires engaging multiple system layers (i.e., both hardware and software) to achieve cost-effective robustness.

This talk provides an overview of important reliability issues, prominent state-of-the-art techniques, and various hardware-software collaborative reliability modeling and optimization techniques developed at our lab, with a focus on recent work on ML-based reliability techniques. The talk will then discuss how advanced ML techniques can be leveraged to devise new types of hardware security attacks, for instance on logic-locked circuits. Toward the end, I will briefly cover the reliability and security challenges of embedded machine learning on resource- and energy-constrained devices subjected to unpredictable and harsh scenarios.

Bio: Muhammad Shafique received his Ph.D. degree in Computer Science from the Karlsruhe Institute of Technology (KIT), Germany, in 2011. Afterwards, he established and led a highly recognized research group at KIT for several years and conducted impactful collaborative R&D activities across the globe. Besides co-founding a technology startup in Pakistan, he was an initiator and team lead of an ICT R&D project there, and he has established strong research ties with multiple universities in Pakistan and worldwide, where he has been actively co-supervising various R&D activities and student/research theses since 2011, resulting in top-quality research outcomes and scientific publications. Before KIT, he was with Streaming Networks Pvt. Ltd., where he was involved in research and development of video coding systems for several years. In October 2016, he joined the Institute of Computer Engineering at the Faculty of Informatics, Technische Universität Wien (TU Wien), Vienna, Austria, as a Full Professor of Computer Architecture and Robust, Energy-Efficient Technologies. Since September 2020, Dr. Shafique has been with New York University (NYU), where he is currently a Full Professor and director of the eBrain Lab at NYU Abu Dhabi in the UAE, and a Global Network Professor at the Tandon School of Engineering, NYU New York City, USA. He is also a Co-PI/Investigator in multiple NYUAD centers, including the Center of Artificial Intelligence and Robotics (CAIR), the Center of Cyber Security (CCS), the Center for InTeractIng urban nEtworkS (CITIES), and the Center for Quantum and Topological Systems (CQTS).

Dr. Shafique has demonstrated success in obtaining several prestigious grants, leading team projects, meeting demonstration deadlines, motivating team members to peak performance, and completing independent challenging tasks. This experience is corroborated by strong technical knowledge and an educational record as a Gold Medalist throughout. He also possesses an in-depth understanding of various video coding standards and machine learning algorithms. His research interests include ML for electronic design automation (EDA), AI and machine learning hardware and system-level design, brain-inspired computing, quantum machine learning, cognitive autonomous systems, wearable healthcare, energy-efficient systems, robust computing, hardware security, emerging technologies, FPGAs, MPSoCs, embedded systems, and EDA for quantum computing, with a special focus on cross-layer analysis, modeling, design, and optimization of computing and memory systems. The resulting technologies and tools are deployed in application use cases from the Internet-of-Things (IoT), Smart Cyber-Physical Systems (CPS), and ICT for Development (ICT4D) domains.

Dr. Shafique has given several Keynotes, Invited Talks, and Tutorials at premier venues. He has also organized many special sessions at flagship conferences (such as DAC, ICCAD, DATE, IOLTS, and ESWeek). He has served as Associate Editor and Guest Editor of prestigious journals, including the IEEE Transactions on Computer-Aided Design (TCAD), IEEE Design and Test Magazine (D&T), ACM Transactions on Embedded Computing Systems (TECS), IEEE Transactions on Sustainable Computing (T-SUSC), and Elsevier MICPRO. He has served as TPC Chair of several conferences, including CODES+ISSS, IGSC, ISVLSI, PARMA-DITAM, RTML, ESTIMedia, and LPDC; General Chair of ISVLSI, IGSC, DDECS, and ESTIMedia; Track Chair at DAC, ICCAD, DATE, IOLTS, DSD, and FDL; and PhD Forum Chair of ISVLSI. He has also served on the program committees of numerous prestigious IEEE/ACM conferences, including ICCAD, DAC, MICRO, ISCA, DATE, CASES, ASPDAC, and FPL. He was recognized as a member of the ACM TODAES Distinguished Review Board in 2022. He is a senior member of the IEEE and the IEEE Signal Processing Society (SPS), and a professional member of the ACM, SIGARCH, SIGDA, SIGBED, and HIPEAC. He holds one US patent and has (co-)authored 6 books, 15+ book chapters, 350+ papers in premier journals and conferences, and over 50 archive articles.

Dr. Shafique received the prestigious 2015 ACM/SIGDA Outstanding New Faculty Award, the AI-2000 Chip Technology Most Influential Scholar Award in 2020 and 2022, the ATRC’s ASPIRE Award for Research Excellence in 2021, six gold medals in his educational career, and several best paper awards and nominations at prestigious conferences such as CODES+ISSS, DATE, DAC, and ICCAD, as well as a Best Master’s Thesis Award, the DAC’14 Designer Track Best Poster Award, IEEE Transactions on Computers “Feature Paper of the Month” Awards, and a Best Lecturer Award. His research work on aging optimization for GPUs was featured as a Research Highlight in the February 2018 issue of Nature Electronics. Dr. Shafique was named in NYU’s 2021 Faculty Honors List. His students have also secured many prestigious student and research awards in the research community, as well as high-tech jobs at top industrial and research organizations.