TILOS EVENTS


September 18, 2023


TILOS - OPTML++ Seminar Series: MACHINE LEARNING FROM WEAK, NOISY, AND BIASED SUPERVISION

Speaker: Masashi Sugiyama, University of Tokyo and RIKEN

Abstract: In statistical inference and machine learning, we face a variety of uncertainties such as training data with insufficient information, label noise, and bias. In this talk, I will give an overview of our research on reliable machine learning, including weakly supervised classification (positive unlabeled classification, positive confidence classification, complementary label classification, etc.), noisy label classification (noise transition estimation, instance-dependent noise, clean sample selection, etc.), and transfer learning (joint importance-predictor estimation for covariate shift adaptation, dynamic importance estimation for full distribution shift, continuous distribution shift, etc.).
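As one concrete instance of the weak supervision discussed above, positive-unlabeled (PU) classification rewrites the classification risk using only positive and unlabeled data. A standard unbiased formulation, shown here as background rather than as the specific estimators covered in the talk, is:

```latex
% Unbiased PU risk with class prior \pi_p, loss \ell, positive marginal p_p,
% and unlabeled marginal p_u:
\hat{R}_{\mathrm{PU}}(g)
  = \pi_p\, \mathbb{E}_{x \sim p_p}\big[\ell(g(x), +1)\big]
  + \mathbb{E}_{x \sim p_u}\big[\ell(g(x), -1)\big]
  - \pi_p\, \mathbb{E}_{x \sim p_p}\big[\ell(g(x), -1)\big].
```

Non-negative variants clip the last two terms at zero when their sum becomes negative, which helps prevent overfitting with flexible models.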


June 2, 2023


TILOS Seminar Series: AI ETHICS ROUNDTABLE


Summer 2023


Data Science Discovery Bootcamp

In Summer 2023, we partnered with the Diversity in Data Science student organization to host a week-long day camp for students from the Sweetwater Union High School District. The Data Science Discovery Boot Camp aims to empower underrepresented high school students with the skills and confidence needed to pursue a college education and careers in data science. Over the week, students were introduced to programming and data science through hands-on activities designed to make the topic approachable and understandable for students with little to no coding experience.


May 19, 2023


TILOS Seminar Series: LEARNING FROM DIVERSE AND SMALL DATA

Speaker: Ramya Korlakai Vinayak, Assistant Professor, University of Wisconsin–Madison


May 15, 2023


TILOS - OPTML++ Seminar Series: THE HIDDEN CONVEX OPTIMIZATION LANDSCAPE OF DEEP NEURAL NETWORKS

Speaker: Mert Pilanci, Stanford University

Abstract: Since deep neural network training problems are inherently non-convex, their recent dramatic success largely relies on non-convex optimization heuristics and experimental findings. Despite significant advancements, the non-convex nature of neural network training poses two central challenges: first, understanding the underlying mechanisms that contribute to model performance, and second, achieving efficient training with low computational cost and energy consumption. The performance of non-convex models is notably influenced by the selection of optimization methods and hyperparameters, including initialization, mini-batching, and step sizes. Conversely, convex optimization problems are characterized by their robustness to these choices, allowing for the efficient and consistent achievement of globally optimal solutions, irrespective of optimization parameters.


April 26, 2023


TILOS - OPTML++ Seminar Series: SUMS OF SQUARES: FROM ALGEBRA TO ANALYSIS

Speaker: Francis Bach, INRIA, ENS, and PSL Paris

Abstract: The representation of non-negative functions as sums of squares has become an important tool in many modeling and optimization tasks. Traditionally applied to polynomial functions, it requires rich tools from algebraic geometry that led to many developments in the last twenty years. In this talk, I will look at this problem from a functional analysis point of view, leading to new applications and new results on the performance of sum-of-squares optimization.
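For background (not a result specific to the talk), the classical polynomial sum-of-squares representation and its functional-analytic counterpart based on a feature map and a positive semidefinite operator can be sketched as:

```latex
% Polynomial SOS: f is a sum of squared polynomials q_i, hence non-negative
f(x) = \sum_{i=1}^{m} q_i(x)^2 \;\ge\; 0.

% Kernel/feature-map variant: \phi maps into a Hilbert space \mathcal{H},
% and A is a positive semidefinite operator on \mathcal{H}
f(x) = \langle \phi(x),\, A\, \phi(x) \rangle_{\mathcal{H}}, \qquad A \succeq 0.
```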


April 19, 2023


TILOS Seminar Series: ML TRAINING STRATEGIES INSPIRED BY HUMANS’ LEARNING SKILLS

Speaker: Pengtao Xie

Abstract: Humans, as the most powerful learners on the planet, have accumulated many learning skills, such as learning through tests, interleaving learning, self-explanation, and active recall, to name a few. These learning skills and methodologies enable humans to learn new topics more effectively and efficiently. We are interested in investigating whether humans’ learning skills can be borrowed to help machines learn better. Specifically, we aim to formalize these skills and leverage them to train better machine learning (ML) models. To achieve this goal, we develop a general framework—Skillearn—which provides a principled way to represent humans’ learning skills mathematically and to use the formally represented skills to improve the training of ML models. In two case studies, we apply Skillearn to formalize two human learning skills, learning by passing tests and interleaving learning, and use the formalized skills to improve neural architecture search.


February 15, 2023


TILOS Seminar Series: ENGINEERING THE FUTURE OF SOFTWARE WITH AI

Speaker: Dr. Ruchir Puri, Chief Scientist, IBM Research, IBM Fellow, Vice-President IBM Corporate Technology

Abstract: Software has become woven into every aspect of our society, and it is fair to say that “Software has eaten the world”. More recently, advances in AI are starting to transform every aspect of our society as well. These two tectonic forces of transformation—“Software” and “AI”—are colliding, resulting in a seismic shift: a future where software itself will be built, maintained, and operated by AI, pushing us towards a future where “Computers can program themselves!” In this talk, we will discuss these forces of “AI for Code” and how the future of software engineering is being redefined by AI.


January 18, 2023


TILOS Seminar Series: CAUSAL DISCOVERY FOR ROOT CAUSE ANALYSIS

Speaker: Murat Kocaoglu, Assistant Professor, Purdue University

Abstract: Cause-effect relations are crucial for several fields, from medicine to policy design as they inform us of the outcomes of our actions a priori. However, causal knowledge is hard to curate for complex systems that might be changing frequently. Causal discovery algorithms allow us to extract causal knowledge from the available data. In this talk, first, we provide a short introduction to algorithmic causal discovery. Next, we propose a novel causal discovery algorithm from a collection of observational and interventional datasets in the presence of unobserved confounders, with unknown intervention targets. Finally, we demonstrate the effectiveness of our algorithm for root-cause analysis in microservice architectures.


Throughout 2022


Field Trips

We hosted two field trips in 2022, the first in the Spring and a second in the Fall. Students took a tour of the UC San Diego campus, listened to lectures from faculty, and interacted with the entire pipeline of academics, from undergraduates and graduate students to postdocs and faculty. If you are interested in a field trip, please fill out this form.


November 16, 2022


TILOS Seminar Series: RARE GEMS: FINDING LOTTERY TICKETS AT INITIALIZATION

Speaker: Dimitris Papailiopoulos, Associate Professor, University of Wisconsin–Madison

Abstract: Large neural networks can be pruned to a small fraction of their original size, with little loss in accuracy, by following a time-consuming “train, prune, re-train” approach. Frankle & Carbin conjectured in 2019 that we can avoid this by training lottery tickets, i.e., special sparse subnetworks found at initialization that can be trained to high accuracy. However, a subsequent line of work presents concrete evidence that current algorithms for finding trainable networks at initialization fail simple baseline comparisons, e.g., against training random sparse subnetworks. Finding lottery tickets that train to better accuracy than simple baselines remains an open problem. In this work, we resolve this open problem by discovering Rare Gems: sparse, trainable networks at initialization that achieve high accuracy even before training. When Rare Gems are trained with SGD, they achieve accuracy competitive with or better than Iterative Magnitude Pruning (IMP) with warmup.
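For readers unfamiliar with pruning masks, the sketch below applies plain global magnitude pruning to weights at initialization, i.e., it produces the kind of sparse binary mask that lottery-ticket methods search over. It is a generic illustration only, not the algorithm from the paper.

```python
import numpy as np

def global_magnitude_mask(weights, sparsity=0.9):
    """Binary masks keeping the globally largest-magnitude entries.

    weights: list of numpy arrays (one per layer), e.g. at initialization.
    sparsity: fraction of entries to zero out across all layers.
    """
    flat = np.concatenate([w.ravel() for w in weights])
    k = int(len(flat) * sparsity)
    threshold = np.partition(np.abs(flat), k)[k]  # k-th smallest magnitude
    return [(np.abs(w) >= threshold).astype(w.dtype) for w in weights]

# Usage: mask the random initial weights of a toy two-layer network.
rng = np.random.default_rng(0)
w = [rng.normal(size=(784, 256)), rng.normal(size=(256, 10))]
masks = global_magnitude_mask(w, sparsity=0.9)
sparse_w = [wi * mi for wi, mi in zip(w, masks)]
print(sum(m.sum() for m in masks) / sum(m.size for m in masks))  # ~0.1 kept
```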


October 12, 2022


TILOS Seminar Series: ROBUST AND EQUITABLE UNCERTAINTY ESTIMATION

Speaker: Aaron Roth, Professor, University of Pennsylvania

Abstract: Machine learning provides us with an amazing set of tools to make predictions, but how much should we trust particular predictions? To answer this, we need a way of estimating the confidence we should have in particular predictions of black-box models. Standard tools for doing this give guarantees that are averages over predictions. For instance, in a medical application, such tools might paper over poor performance on one medically relevant demographic group if it is made up for by higher performance on another group. Standard methods also depend on the data distribution being static—in other words, the future should be like the past.

In this lecture, I will describe new techniques to address both of these problems: methods for producing prediction sets for arbitrary black-box prediction models that have correct empirical coverage even when the data distribution changes in arbitrary, unanticipated ways, and that maintain correct coverage even when we zoom in to focus on demographic groups that can be arbitrary and intersecting. When we only want correct group-wise coverage and are willing to assume that the future will look like the past, our algorithms are especially simple.

This talk is based on two papers that are joint work with Osbert Bastani, Varun Gupta, Chris Jung, Georgy Noarov, and Ramya Ramalingam.
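As background, the sketch below implements vanilla split conformal prediction, which gives marginal (average-case) coverage under exchangeability. The methods described in the talk extend this type of guarantee to arbitrary distribution shift and to arbitrary intersecting groups, which this simple version does not do.

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_y, test_pred, alpha=0.1):
    """Prediction intervals from a held-out calibration set.

    cal_pred, cal_y: black-box predictions and true labels on calibration data.
    Returns (lower, upper) so that, under exchangeability, the true label lands
    in the interval with probability roughly 1 - alpha on average.
    """
    residuals = np.abs(cal_y - cal_pred)
    n = len(residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(residuals, level)
    return test_pred - q, test_pred + q

# Usage with a toy black-box predictor.
rng = np.random.default_rng(0)
cal_y = rng.normal(size=500)
cal_pred = cal_y + 0.3 * rng.normal(size=500)   # imperfect predictions
test_pred = np.array([0.0, 1.5, -2.0])
lo, hi = split_conformal_interval(cal_pred, cal_y, test_pred, alpha=0.1)
print(np.c_[lo, hi])
```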


September 28, 2022


TILOS Seminar Series: ON POLICY OPTIMIZATION METHODS FOR CONTROL

Speaker: Maryam Fazel, Professor, University of Washington

Abstract: Policy Optimization methods enjoy wide practical use in reinforcement learning (RL) for applications ranging from robotic manipulation to game-playing, partly because they are easy to implement and allow for richly parameterized policies. Yet their theoretical properties, from optimality to statistical complexity, are still not fully understood. To help develop a theoretical basis for these methods, and to bridge the gap between RL and control theoretic approaches, recent work has studied whether gradient-based policy optimization can succeed in designing feedback control policies.

In this talk, we start by showing the convergence and optimality of these methods for linear dynamical systems with quadratic costs, where despite nonconvexity, convergence to the optimal policy occurs under mild assumptions. Next, we make a connection between convex parameterizations in control theory on one hand, and the Polyak-Lojasiewicz property of the nonconvex cost function, on the other. Such a connection between the nonconvex and convex landscapes provides a unified view towards extending the results to more complex control problems.
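For concreteness, the linear-quadratic setting referenced above treats the cost as a function of a static state-feedback gain; a standard formulation (background notation, not taken from the papers behind the talk) is:

```latex
% Dynamics and linear state-feedback policy:
x_{t+1} = A x_t + B u_t, \qquad u_t = -K x_t, \qquad x_0 \sim \mathcal{D}.

% LQR cost as a function of the gain K; nonconvex in K, yet gradient
% dominated over the set of stabilizing gains, which underlies convergence:
J(K) = \mathbb{E}_{x_0 \sim \mathcal{D}}
       \left[\, \sum_{t=0}^{\infty} x_t^{\top} Q\, x_t + u_t^{\top} R\, u_t \right],
\qquad Q \succeq 0, \; R \succ 0.
```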


Fall 2022


Data-Driven Robotic Art Activity

In Fall 2022, the Data-Driven Robotic Art activity had students find the linear regression model for a two-variable dataset and use it to drive two servos representing those variables. The activity was first piloted in an after-school format with six high school students, and then as an in-classroom activity for 38 students at Morse High School. To learn more about this project, you can see a PowerPoint presentation here that gives instructions for running it. You can also fill out this form.
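A minimal sketch of the underlying math, using a hypothetical two-variable dataset and a generic angle mapping (the actual classroom materials and servo hardware interface are not shown here):

```python
import numpy as np

# Hypothetical dataset: x = hours of sunlight, y = plant growth in cm.
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([1.1, 2.0, 2.9, 4.2, 5.0])

# Least-squares fit of y = slope * x + intercept.
slope, intercept = np.polyfit(x, y, deg=1)

def to_servo_angle(value, lo, hi):
    """Map a data value in [lo, hi] to a 0-180 degree servo angle."""
    return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0) * 180.0)

x_new = 7.0
y_pred = slope * x_new + intercept
print(to_servo_angle(x_new, x.min(), x.max()),    # servo for the input variable
      to_servo_angle(y_pred, y.min(), y.max()))   # servo for the predicted variable
```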


September 21, 2022


TILOS Seminar Series: NON-CONVEX OPTIMIZATION FOR LINEAR QUADRATIC GAUSSIAN (LQG) CONTROL

Speaker: Yang Zheng, Assistant Professor, UC San Diego

Abstract: Recent studies have started to apply machine learning techniques to the control of unknown dynamical systems and have achieved impressive empirical results. However, the convergence behavior, statistical properties, and robustness performance of these approaches are often poorly understood due to the non-convex nature of the underlying control problems. In this talk, we revisit Linear Quadratic Gaussian (LQG) control and present recent progress towards its landscape analysis from a non-convex optimization perspective. We view the LQG cost as a function of the controller parameters and study its analytical and geometrical properties. Due to the inherent symmetry induced by similarity transformations, the LQG landscape is very rich yet complicated. We show that 1) the set of stabilizing controllers has at most two path-connected components, and 2) despite the nonconvexity, all minimal stationary points (controllable and observable controllers) are globally optimal. Based on this special non-convex optimization landscape, we further introduce a novel perturbed policy gradient (PGD) method to escape a large class of suboptimal stationary points (including high-order saddles). These results shed some light on the performance analysis of direct policy gradient methods for solving the LQG problem. The talk is based on our recent papers: https://arxiv.org/abs/2102.04393 and https://arxiv.org/abs/2204.00912.


August 17, 2022


TILOS Seminar Series: MACHINE LEARNING FOR DESIGN METHODOLOGY AND EDA OPTIMIZATION

Speaker: Haoxing Ren, NVIDIA

Abstract: In this talk, I will first illustrate how ML helps improve design quality as well as design productivity from a design methodology perspective, with examples in digital and analog design. Then I will discuss the potential of applying ML to solve challenging EDA optimization problems, focusing on three promising ML techniques: reinforcement learning (RL), physics-based modeling, and self-supervised learning (SSL). RL learns to optimize the problem by converting the EDA problem objectives into environment rewards; it can be applied either to solve an EDA problem directly or as part of a conventional EDA algorithm. Physics-based modeling enables more accurate and transferable learning for EDA problems. SSL learns the manifold of optimized EDA solution data; conditioned on the problem input, it can directly produce a solution. I will illustrate applications of these techniques in standard cell layout, computational lithography, and gate sizing problems. Finally, I will outline three main approaches to integrating ML with conventional EDA algorithms and discuss the importance of adopting GPU computing in EDA.


July 20, 2022


TILOS Seminar Series: HOW TO USE MACHINE LEARNING FOR COMBINATORIAL OPTIMIZATION? RESEARCH DIRECTIONS AND CASE STUDIES

Speaker: Sherief Reda, Professor, Brown University and Principal Research Scientist at Amazon

Abstract: Combinatorial optimization methods are routinely used in many scientific fields to identify optimal solutions among a large but finite set of possible solutions for problems of interest. Given the recent success of machine learning techniques in the classification of natural signals (e.g., voice, image, text), it is natural to ask how machine learning methods can be used to improve the solution quality or the runtime of combinatorial optimization algorithms. In this talk, I will provide a general taxonomy and research directions for the use of machine learning techniques in combinatorial optimization. I will illustrate these directions using a number of case studies from my group's research, which include (1) improving the quality of results of integer linear programming (ILP) solvers using deep metric learning, and (2) using reinforcement learning techniques to optimize the size of graphs arising in digital circuit design.


June 29, 2022


TILOS Seminar Series: THE FPGA PHYSICAL DESIGN FLOW THROUGH THE EYES OF ML

Speaker: Dr. Ismail Bustany, Fellow at AMD

Abstract: The FPGA physical design (PD) flow has innate features that differentiate it from its sibling, the ASIC PD flow. FPGA device families service a wide range of applications, have much longer lifespans in production use, and bring templatized logic layout and routing interconnect fabrics whose characteristics are captured by detailed device models and simpler timing and routing models (e.g. buffered interconnect and abstracted routing resources). Furthermore, the FPGA PD flow is a “one-stop shop” from synthesis to bitstream generation. This avails complete access to annotate, instrument, and harvest netlist and design features. These key differences provide rich opportunities to exploit both device data and design application specific contexts in optimizing various components of the PD flow. In this talk, I will present examples for the application of ML in device modeling and parameter optimization, draw attention to exciting research opportunities for applying the “learning to optimize” paradigm to solving the placement and routing problems, and share some practical learnings.


June 15, 2022


TILOS Seminar Series: REASONING NUMERICALLY

Speaker: Sicun Gao, Assistant Professor, UC San Diego

Abstract: Highly-nonlinear continuous functions have become a pervasive model of computation. Despite newsworthy progress, the practical success of “intelligent” computing is still restricted by our ability to answer questions regarding their quality and dependability: How do we rigorously know that a system will do exactly what we want it to do and nothing else? For traditional software and hardware systems that primarily use digital and rule-based designs, automated reasoning has provided the fundamental principles and widely-used tools for ensuring their quality in all stages of design and engineering. However, the rigid symbolic formulations of typical automated reasoning methods often make them unsuitable for dealing with computation units that are driven by numerical and data-driven approaches. I will overview some of our attempts in bridging this gap. I will highlight how the core challenge of NP-hardness is shared across discrete and continuous domains, and how it motivates us to seek the unification of symbolic, numerical, and statistical perspectives towards better understanding and handling of the curse of dimensionality.


May 18, 2022


TILOS Seminar Series: DEEP GENERATIVE MODELS AND INVERSE PROBLEMS

Speaker: Alexandros G. Dimakis, Professor, The University of Texas at Austin

Abstract: Sparsity has given us MP3, JPEG, MPEG, Faster MRI and many fun mathematical problems. Deep generative models like GANs, VAEs, invertible flows and Score-based models are modern data-driven generalizations of sparse structure. We will start by presenting the CSGM framework by Bora et al. to solve inverse problems like denoising, filling missing data, and recovery from linear projections using an unsupervised method that relies on a pre-trained generator. We generalize compressed sensing theory beyond sparsity, extending Restricted Isometries to sets created by deep generative models. Our recent results include establishing theoretical results for Langevin sampling from full-dimensional generative models, generative models for MRI reconstruction and fairness guarantees for inverse problems.
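The CSGM recovery step mentioned above can be summarized in one optimization problem; this is the generic formulation of Bora et al., written here for reference:

```latex
% Given measurements y = A x^* + noise and a pretrained generator G,
% search the latent space for the code whose image best explains y:
\hat{z} \in \arg\min_{z}\; \| y - A\, G(z) \|_2^2, \qquad \hat{x} = G(\hat{z}).
```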


May 11, 2022


TILOS - OPTML++ Seminar Series: CONSTANT REGRET IN ONLINE DECISION-MAKING

Speaker: Siddhartha Banerjee, Cornell University

Abstract: I will present a class of finite-horizon control problems, where we see a random stream of arrivals, need to select actions in each step, and where the final objective depends only on the aggregate type-action counts; this includes many widely-studied control problems including online resource-allocation, dynamic pricing, generalized assignment, online bin packing, and bandits with knapsacks. For such settings, I will introduce a unified algorithmic paradigm, and provide a simple yet general condition under which these algorithms achieve constant regret, i.e., additive loss compared to the hindsight optimal solution which is independent of the horizon and state-space. These results stem from an elementary coupling argument, which may prove useful for many other questions in online decision-making. Time permitting, I will illustrate this by showing how we can use this technique to incorporate side information and historical data in these settings, and achieve constant regret with as little as a single data trace.


April 27, 2022


TILOS - OPTML++ Seminar Series: EQUILIBRIUM COMPUTATION, DEEP MULTI-AGENT LEARNING, AND MULTI-AGENT REINFORCEMENT LEARNING

Speaker: Constantinos Daskalakis, MIT


April 20, 2022

TILOS Seminar Series: LEARNING IN THE PRESENCE OF DISTRIBUTION SHIFTS: HOW DOES THE GEOMETRY OF PERTURBATIONS PLAY A ROLE?

Speaker: Hamed Hassani, Assistant Professor, University of Pennsylvania

Abstract: In this talk, we will focus on the emerging field of (adversarially) robust machine learning. The talk will be self-contained and no particular background on robust learning will be needed. Recent progress in this field has been accelerated by the observation that despite unprecedented performance on clean data, modern learning models remain fragile to seemingly innocuous changes such as small, norm-bounded additive perturbations. Moreover, recent work in this field has looked beyond norm-bounded perturbations and has revealed that various other types of distributional shifts in the data can significantly degrade performance. However, in general our understanding of such shifts is in its infancy and several key questions remain unaddressed.

The goal of this talk is to explain why robust learning paradigms have to be designed—and sometimes rethought—based on the geometry of the input perturbations. We will cover a wide range of perturbation geometries, from simple norm-bounded perturbations to sparse, natural, and more general distribution shifts. As we will show, the geometry of the perturbations necessitates fundamental modifications to the learning procedure as well as the architecture in order to ensure robustness. In the first part of the talk, we will discuss our recent theoretical results on robust learning with respect to various geometries, along with fundamental tradeoffs between robustness and accuracy, phase transitions, etc. The remaining portion of the talk will be about developing practical robust training algorithms and evaluating the resulting (robust) deep networks against state-of-the-art methods on naturally-varying, real-world datasets.
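As a concrete example of the simplest geometry mentioned above (an l-infinity norm-bounded additive perturbation), the sketch below crafts a one-step FGSM-style perturbation against a toy logistic-regression model. It is a generic illustration, not the robust training methods from the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """One-step l-infinity attack on a logistic model p(y=1|x) = sigmoid(w.x + b).

    x: input vector, y: label in {0, 1}, eps: perturbation budget.
    The cross-entropy loss gradient w.r.t. x is (sigmoid(w.x + b) - y) * w.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Usage: for a linear model, the perturbation provably increases the loss.
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1
x, y = rng.normal(size=5), 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # model confidence in y=1 drops
```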


April 5, 2022


Professor Andrew Kahng (UC San Diego) presented at the opening of the Department of Energy's AI-Enhanced Co-Design for Microelectronics 2022 workshop, organized by Sandia National Laboratories


March 29, 2022


26th ACM International Symposium on Physical Design keynote: "Leveling Up: A Trajectory of OpenROAD, TILOS and Beyond"

Speaker: Professor Andrew Kahng, UC San Diego

March 16, 2022


TILOS Seminar Series: THE CONNECTIONS BETWEEN DISCRETE GEOMETRIC MECHANICS, INFORMATION GEOMETRY, ACCELERATED OPTIMIZATION, AND MACHINE LEARNING

Speaker: Melvin Leok, Professor, UC San Diego

Abstract: Geometric mechanics describes Lagrangian and Hamiltonian mechanics geometrically, and information geometry formulates statistical estimation, inference, and machine learning in terms of geometry. A divergence function is an asymmetric distance between two probability densities that induces differential geometric structures and yields efficient machine learning algorithms that minimize the duality gap. The connection between information geometry and geometric mechanics will yield a unified treatment of machine learning and structure-preserving discretizations. In particular, the divergence function of information geometry can be viewed as a discrete Lagrangian, which is a generating function of the kind of symplectic map that arises in discrete variational mechanics. This identification allows the methods of backward error analysis to be applied, and the symplectic map generated by a divergence function can be associated with the exact time-h flow map of a Hamiltonian system on the space of probability distributions. We will also discuss how time-adaptive Hamiltonian variational integrators can be used to discretize the Bregman Hamiltonian, whose flow generalizes the differential equation that describes the dynamics of the Nesterov accelerated gradient descent method.
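For readers unfamiliar with divergence functions, the Bregman divergence is one canonical example (the Kullback-Leibler divergence arises from the negative-entropy generator); this is background notation rather than a result from the talk:

```latex
% Bregman divergence generated by a strictly convex, differentiable \phi:
D_{\phi}(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle \;\ge\; 0,
\qquad D_{\phi}(x, y) = 0 \iff x = y.
```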


February 16, 2022


TILOS Seminar Series: MCMC VS. VARIATIONAL INFERENCE FOR CREDIBLE LEARNING AND DECISION MAKING AT SCALE

Speaker: Yian Ma, Assistant Professor, UC San Diego

Abstract: Professor Ma will introduce some recent progress towards understanding the scalability of Markov chain Monte Carlo (MCMC) methods and their comparative advantage with respect to variational inference. Further, he will discuss an optimization perspective on the infinite dimensional probability space, where MCMC leverages stochastic sample paths while variational inference projects the probabilities onto a finite dimensional parameter space. Three ingredients will be the focus of this discussion: non-convexity, acceleration, and stochasticity. This line of work is motivated by epidemic prediction, where we need uncertainty quantification for credible predictions and informed decision making with complex models and evolving data.
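As a concrete instance of an MCMC method built on stochastic sample paths, the unadjusted Langevin algorithm is shown below for background; the talk treats its scalability and acceleration more broadly:

```latex
% Unadjusted Langevin algorithm targeting \pi(x) \propto e^{-U(x)},
% with step size \eta and i.i.d. noise \xi_k \sim \mathcal{N}(0, I):
x_{k+1} = x_k - \eta\, \nabla U(x_k) + \sqrt{2\eta}\; \xi_k.
```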


January 19, 2022


TILOS Seminar Series: REAL-TIME SAMPLING AND ESTIMATION: FROM IOT MARKOV PROCESSES TO DISEASE SPREAD PROCESSES

Speaker: Shirin Saeedi Bidokhti, Assistant Professor, University of Pennsylvania

Abstract: The Internet of Things (IoT) and social networks have provided unprecedented information platforms. The information is often governed by processes that evolve over time and/or space (e.g., on an underlying graph) and they may not be stationary or stable. We seek to devise efficient strategies to collect real-time information for timely estimation and inference. This is critical for learning and control.

In the first part of the talk, we focus on the problem of real-time sampling and estimation of autoregressive Markov processes over random access channels. For the class of policies in which decision making has to be independent of the source realizations, we make a bridge with the recent notion of Age of Information (AoI) to devise novel distributed policies that utilize local AoI for decision making. We also provide strong guarantees for the performance of the proposed policies. More generally, allowing decision making to be dependent on the source realizations, we propose distributed policies that improve upon the state of the art by a factor of approximately six. Furthermore, we numerically show the surprising result that despite being decentralized, our proposed policy has a performance very close to that of centralized scheduling.
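For reference, the Age of Information used above measures the time elapsed since the generation of the most recently received update; a common way to write it is:

```latex
% u(t): generation time of the latest update received by time t
\Delta(t) = t - u(t).
```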

In the second part of the talk, we go beyond time-evolving processes by looking at spread processes that are defined over time as well as an underlying network. We consider the spread of an infectious disease such as COVID-19 in a network of people and design sequential testing (and isolation) strategies to contain the spread. To this end, we develop a probabilistic framework to sequentially learn nodes’ probabilities of infection (using test observations) by an efficient backward-forward update algorithm that first infers the state of the relevant nodes in the past before propagating that forward into the future. We further argue that if nodes’ probabilities of infection were accurately known at each time, exploitation-based policies that test the most likely nodes would be myopically optimal in a relevant class of policies. However, when our belief about the probabilities is wrong, exploitation can be arbitrarily bad, as we provably show, while a policy that combines exploitation with random testing can contain the spread faster. Accordingly, we propose exploration policies in which nodes are tested probabilistically based on our estimated probabilities of infection. Using simulations, we show in several interesting settings how exploration helps contain the spread by detecting more infected nodes in a timely manner and by providing a more accurate estimate of the nodes’ probabilities of infection.


January 18, 2022


Synopsys APUP Speaker Series Special Session: "AI/ML, Optimization and EDA in TILOS, an NSF National AI Research Institute"

Speaker: Andrew Kahng, Professor, UC San Diego

January 17, 2022


Tutorial at the 2022 Asia and South Pacific Design Automation Conference: "IEEE CEDA DATC RDF and METRICS2.1: Toward a Standard Platform for ML-Enabled EDA and IC Design"

Presenters: Jinwook Jung (IBM Research), Andrew B. Kahng (UC San Diego), Seungwon Kim (UC San Diego), Ravi Varadarajan (UC San Diego)


December 15, 2021


TILOS Seminar Series: CLOSING THE VIRTUOUS CYCLE OF AI FOR IC AND IC FOR AI

Speaker: David Pan, Professor, The University of Texas at Austin

Abstract: The recent artificial intelligence (AI) boom has been driven primarily by the confluence of three forces: algorithms, big data, and the computing power enabled by modern integrated circuits (ICs), including specialized AI accelerators. This talk will present a closed-loop perspective on synergistic AI and agile IC design with two main themes, AI for IC and IC for AI. As semiconductor technology enters the era of extreme scaling and heterogeneous integration, IC design and manufacturing complexities become extremely high. More intelligent and agile IC design technologies are needed than ever to optimize performance, power, manufacturability, design cost, etc., and deliver scaling equivalent to Moore’s Law. This talk will present some recent results leveraging modern AI and machine learning advancements, with domain-specific customizations, for agile IC design and manufacturing, including the open-sourced DREAMPlace (DAC'19 and TCAD'21 Best Paper Awards), the DARPA-funded MAGICAL project for analog IC design automation, and LithoGAN for design-technology co-optimization. Meanwhile, on the IC for AI frontier, customized ICs, including those with beyond-CMOS technologies, can drastically improve AI performance and energy efficiency by orders of magnitude. I will present our recent results on hardware and software co-design for optical neural networks and photonic ICs (which won the 2021 ACM Student Research Competition Grand Finals 1st Place). Closing the virtuous cycle between AI and IC holds great potential to significantly advance the state of the art of each.


December 7, 2021


Designer, IP and Embedded Systems Track Presentation at the 58th Design Automation Conference: "Exchanging EDA data for AI/ML using Standard API"

Presenters: Kerim Kalafala (IBM), Lakshmanan Balasubramanian (Texas Instruments), Firas Mohammed (Silvaco), Andrew B. Kahng (UC San Diego)


December 6, 2021


Tutorial at the 58th Design Automation Conference: "Adding machine learning to the mix of EDA optimization algorithms"

Presenters: Ismail Bustany (Xilinx), Andrew B. Kahng (UC San Diego), Padmani Gopalakrishnan (Xilinx)


November 17, 2021


TILOS Seminar Series: A MIXTURE OF PAST, PRESENT, AND FUTURE

Speaker: Arya Mazumdar, Associate Professor, UC San Diego

Abstract: The problems of heterogeneity pose major challenges in extracting meaningful information from data as well as in the subsequent decision making or prediction tasks. Heterogeneity brings forward some very fundamental theoretical questions of machine learning. For unsupervised learning, a standard technique is the use of mixture models for statistical inference. However for supervised learning, labels can be generated via a mixture of functional relationships. We will provide a survey of results on parameter learning in mixture models, some unexpected connections with other problems, and some interesting future directions.
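A canonical example of labels generated by a mixture of functional relationships is mixed linear regression, written here as background for the survey above:

```latex
% Each sample draws a hidden component z_i, then a noisy linear response:
z_i \sim \mathrm{Categorical}(\pi_1, \ldots, \pi_k), \qquad
y_i = \langle \beta_{z_i},\, x_i \rangle + \varepsilon_i, \qquad
\varepsilon_i \sim \mathcal{N}(0, \sigma^2).
```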


November 2, 2021


"METRICS2.1 and Flow Tuning in the IEEE CEDA Robust Design Flow and OpenROAD" paper presentation in Session 7C of ICCAD-2021 [Link]


October 16, 2021


Xiaolong Wang (UC San Diego) presented on Learning to Perceive Videos for Embodiment at the Second Tutorial on Large Scale Holistic Video Understanding at ICCV-2021 [YouTube]


October 15, 2021


Andrew B. Kahng (UC San Diego) spoke in the "Industry" segment of the NSF Integrated Circuits Research, Education, and Workforce Development Workshop [Link]


Fall 2021


Data Science Automatons Activity

In Fall 2021, the Data Science Automatons activity was co-developed with Tina Tom, a middle school teacher from Chula Vista Middle School. The activity has students build an automaton using different materials for the cam to test how friction affects the mechanism. Students record data points for each of the materials and look for a pattern, then use this information to make a decision about their final project. Once they have created their automaton, students are asked to decorate it in a way that helps them make a decision. For example, one student made an automaton that helps them decide what to draw.

In Fall 2022, Ms. Tina Tom, the middle school teacher who co-developed the curriculum for the Data Science Automatons activity, ran the activity on her own for the first time, without TILOS staff support, with her 37 students. We offered a professional development opportunity on November 19, 2022, for five additional middle school teachers and one high school teacher; the session was led by Tina Tom. Of the six teachers who attended, three administered the activity in their classrooms, reaching almost 200 students (198 kits were distributed). Two of the six teachers were from Urban Discovery Academy Charter and one was from Mar Vista Middle School. Future professional development sessions are being planned so that more teachers and students can participate.

If you’re interested in learning more about this activity, please fill out this form.