Event
Erik Bertelli & Matt Olfat - UC Berkeley
Wednesday, September 24

3108 Etcheverry Hall
3:30-5:00pm

This seminar features two talks, presented by Erik Bertelli and Matt Olfat.

Erik Bertelli

Abstract: High-tech products such as smartphones or wearables are multi-generational in the sense that every year, a new version of the product is introduced, with some features shared with the previous year’s model as well as added functionality. This setting poses unique problems for manufacturers in making operational decisions regarding production and warranty servicing of these products. For each generation, the manufacturer must at some point decide to stop producing that version of the product, yet this decision often comes before the items themselves are out of warranty. How should companies manage future warranty claims? Professor Candace Yano and PhD student Erik Bertelli focus on optimal production and warranty decisions in settings where warranty fulfillment may involve provision of new replacements of the same model, repair using spare-parts inventory, and rebates for upgrading to a newer generation of product. This research will enable the manufacturer not only to minimize the expected cost of satisfying warranty claims, but also to reduce electronic waste from unnecessary replacement items and spare parts.
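To give a feel for the trade-off the abstract describes, here is a deliberately simplified "last-time build" sketch (my illustration, not the authors' model): at the end of production, choose how many spare units to build against uncertain future warranty claims, balancing the scrap/e-waste cost of leftovers against the rebate cost of uncovered claims. Under these assumptions the problem reduces to a classic newsvendor quantile rule.

```python
import numpy as np

# Illustrative simplification, not the Yano-Bertelli model: pick a final
# build quantity q of spares to cover uncertain future warranty claims D.
# Leftover spares become e-waste at unit cost c_o; claims beyond q are
# satisfied with upgrade rebates at unit cost c_u. This is a newsvendor
# problem, so the optimal q is the c_u / (c_u + c_o) quantile of D.

rng = np.random.default_rng(1)
claims = rng.poisson(lam=100, size=100_000)  # simulated future claim counts

c_o = 20.0   # overage: disposal cost per unused spare (hypothetical value)
c_u = 80.0   # underage: rebate cost per uncovered claim (hypothetical value)

critical_ratio = c_u / (c_u + c_o)
q_opt = int(np.quantile(claims, critical_ratio))

def expected_cost(q):
    """Monte Carlo estimate of expected overage plus underage cost."""
    over = np.maximum(q - claims, 0)
    under = np.maximum(claims - q, 0)
    return (c_o * over + c_u * under).mean()

print(q_opt, round(expected_cost(q_opt), 2))
```

Because rebates cost more than scrap here, the critical ratio exceeds one half and the optimal build quantity sits above the mean claim count; the real problem in the talk is richer, since repair from spares and replacement with new units compete as fulfillment channels.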

Bio: Erik Bertelli is a third-year PhD student in the Industrial Engineering and Operations Research department at UC Berkeley. His research focuses on inventory control for multi-generational high-tech products with uncertain warranty obligations.

Matt Olfat - Fair Optimization: Deep Notions of Fairness in Machine Learning

Abstract: Classification algorithms such as Support Vector Machines (SVM) output an array of probabilities or score functions, which are then thresholded to generate binary predictions. The prevalence of these machine learning methods in commercial use has raised concerns about their ability to codify and amplify biases present in data. Previous techniques to address this problem mainly rely on pre- or post-processing steps; they are often blind to the score functions underlying predictions, making their fairness guarantees non-robust to changes in thresholding. Furthermore, pre- and post-processing are inherently greedy and thus may excessively penalize accuracy. In response, we propose the framework of Fair Optimization to enforce robust fairness at training time. The goal of this approach is to capture independence through constraints tractable in an optimization framework, effectively controlling the discriminatory ability of a learner within the learning stage. We show that our method is able to find fair classifiers that retain accuracy on a number of real datasets, and present generalizations of our method to the setting of unsupervised learning as well.
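The key idea, fairness enforced on the score function rather than on thresholded predictions, can be sketched as follows. This is a toy stand-in for the talk's method, not its actual formulation: a linear logistic scorer trained on synthetic data with a penalty on the mean-score gap between two protected groups. Because the constraint acts on scores directly, the fairness it buys survives any later choice of threshold.

```python
import numpy as np

# Toy fairness-at-training-time sketch (not the Fair Optimization method
# itself): penalize the difference in mean scores across protected groups
# while minimizing logistic loss, so fairness holds for every threshold.

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)                  # protected attribute (0 or 1)
X = rng.normal(size=(n, 2)) + group[:, None]   # features correlated with group
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 1).astype(float)

def loss(w, b, lam):
    s = X @ w + b                              # score function
    p = 1.0 / (1.0 + np.exp(-s))
    nll = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    gap = s[group == 1].mean() - s[group == 0].mean()
    return nll + lam * gap ** 2                # penalize mean-score disparity

def train(lam, steps=2000, lr=0.1, eps=1e-5):
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        # central-difference gradients keep the sketch dependency-free
        gw = np.array([(loss(w + eps * e, b, lam) - loss(w - eps * e, b, lam))
                       / (2 * eps) for e in np.eye(2)])
        gb = (loss(w, b + eps, lam) - loss(w, b - eps, lam)) / (2 * eps)
        w -= lr * gw
        b -= lr * gb
    return w, b

def score_gap(w, b):
    s = X @ w + b
    return abs(s[group == 1].mean() - s[group == 0].mean())

gap_plain = score_gap(*train(lam=0.0))   # unconstrained baseline
gap_fair = score_gap(*train(lam=5.0))    # fairness-penalized
print(f"score gap: plain={gap_plain:.3f}, fair={gap_fair:.3f}")
```

A mean-score penalty is only a first moment of the independence condition the abstract targets; richer constraints (e.g., on higher moments or the full score distribution) are what make the framework's guarantees robust, at the price of a harder optimization problem.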

Bio: Matt Olfat is a PhD candidate in IEOR at UC Berkeley. He received his B.S. in Systems Engineering and Mathematics from the University of Virginia in 2014, and his M.S. in Industrial Engineering and Operations Research from UC Berkeley in 2016. His research interests include fairness in machine learning, applications of machine learning in public policy, and decompositions of high-dimensional datasets.