TeaPACS 2023

International Workshop on Teaching Performance Analysis of Computer Systems

Final Program

09:15 Opening Remarks
09:30 Talk 1: Giuliano Casale
10:15 Talk 2: Diwakar Krishnamurthy
11:00 Coffee Break
11:30 FCRC Plenary
12:30 Lunch
13:45 Discussion Session (D1): What to teach
14:45 Talk 3: Mohammad Hajiesmaili
15:30 Coffee Break
16:00 Talk 4: Ziv Scully
16:45 Discussion Session (D2): How to teach
17:45 Closing Remarks

Speaker: Giuliano Casale

Title: Performance evaluation teaching in the age of cloud computing

Abstract: Cloud computing has been a primary ICT driver during the last 15 years, fostering sharp changes in performance engineering practices across the computing industry and, at the same time, profoundly steering research trends in academia. A distinctive trait of this paradigm is that cloud engineers can programmatically control software and system performance, raising the expectation that computing graduates who find employment in this area will be ready to engage with hands-on performance engineering problems. This, in turn, calls for a broader and more profound education on performance topics as part of the computing curriculum. This talk will present my own experience and views on the intersection between cloud computing and performance evaluation teaching. I will also give practical examples of how we can engage students in the performance evaluation discipline and prepare them to tackle quality-of-service engineering challenges that arise in the cloud domain.
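
As a deliberately simplified illustration of what "programmatically controlling performance" can mean in the cloud setting, the sketch below implements a toy threshold-based autoscaling rule of the kind students can build and experiment with. The thresholds, utilization signal, and replica bounds are hypothetical choices, not taken from the talk.

    # Toy illustration (not from the talk): a threshold-based autoscaling
    # rule, i.e., a program that adjusts capacity to control performance.
    # The thresholds and replica bounds below are hypothetical placeholders.

    def autoscale(replicas, utilization, low=0.3, high=0.7,
                  min_replicas=1, max_replicas=20):
        """Return a new replica count from a simple hysteresis rule."""
        if utilization > high and replicas < max_replicas:
            return replicas + 1          # scale out to shed load
        if utilization < low and replicas > min_replicas:
            return replicas - 1          # scale in to cut cost
        return replicas                  # stay put inside the dead band

    # Example: driving the rule with a few observed utilization samples.
    replicas = 2
    for u in [0.85, 0.9, 0.6, 0.2, 0.15]:
        replicas = autoscale(replicas, u)
        print(f"utilization={u:.2f} -> replicas={replicas}")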

Bio: Giuliano Casale joined the Department of Computing at Imperial College London in 2010, where he is currently a Reader. He teaches and does research in performance engineering and cloud computing, topics on which he has published more than 150 refereed papers. He has served on the technical program committee of several conferences in performance and dependability. His research work has received multiple recognitions, including best paper awards at ACM SIGMETRICS, IEEE/IFIP DSN, and IEEE INFOCOM. He serves on the editorial boards of IEEE TNSM and ACM TOMPECS, as editor-in-chief of Elsevier PEVA, and as the current chair of ACM SIGMETRICS.

Speaker: Diwakar Krishnamurthy

Title: Teaching Software Performance Evaluation to Undergrads: Lessons Learned and Challenges

Abstract: Recent high-profile performance-related outages and problems in industry clearly establish the importance of imparting performance evaluation skills to students at the undergrad level. Yet, performance engineering is rarely a required course in software engineering programs around the world. The typical undergrad student is naturally drawn towards coding courses and courses on topics they believe to be in demand in industry, e.g., AI and ML. Curriculum designers, while sympathetic, often cite student pressures and other factors, such as accreditation requirements from engineering bodies, to argue against including a required performance evaluation course.

As a long-time instructor of a required undergrad software performance evaluation course, I will describe some of my experiences operating in such a climate. Specifically, I will outline key strategies I have followed to motivate students and overcome their resistance to the somewhat analytical nature of performance analysis. I will also offer my observations on how undergrad curricula can be tuned to instil a performance-aware mindset in students. Finally, I will point out ongoing challenges and invite the audience to brainstorm some solutions.

Bio: Diwakar Krishnamurthy is a Professor in the Department of Electrical and Software Engineering at the University of Calgary in Calgary, Alberta, Canada. Diwakar’s research interests span all aspects of characterizing, testing, modeling, and optimizing the performance of software systems. Recently, his efforts have focused on automated performance management of cloud and big data systems. A key emphasis of his research is to devise practical performance engineering techniques that can be operationalized in industry contexts. He has collaborated closely with industry partners such as HP, SAP, and Huawei towards this objective. Diwakar has served on the TPCs of many performance-themed conferences, such as ACM SIGMETRICS and ACM/SPEC ICPE. He has won or been nominated for many teaching awards, including a win for teaching an undergrad performance evaluation course.

Speaker: Mohammad Hajiesmaili

Title: Teaching Learning-augmented Algorithms with Societal Design Criteria

Abstract: Traditionally, computer systems are designed to optimize classic notions of performance, such as flow completion time and cost. System performance is then typically evaluated by characterizing theoretical bounds in worst-case settings. For the next generation of computer systems, it is essential to elevate societal criteria, such as carbon awareness and equity, to first-class design goals. However, classic performance metrics may conflict with societal criteria. Building a foundational understanding and performance evaluation of systems with these inherent tradeoffs leads to two categories of challenges that impact the core educational components of algorithm design and performance analysis courses.

We will use the online knapsack problem as a running example (see the sketch below) to contextualize how learning-augmented and societally aware system design calls for broader and new educational components in classic algorithms and performance analysis courses.
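
To make the running example concrete, here is a toy learning-augmented threshold policy for the 0/1 online knapsack. The worst-case threshold follows the classic exponential form for value densities in [L, U]; the prediction p_hat and the trust parameter lam are illustrative devices for blending advice with worst-case safety, not the specific algorithm from the talk.

    import math

    # Toy learning-augmented policy for the 0/1 online knapsack. Items
    # arrive one at a time, each with a value and a weight, and must be
    # accepted or rejected irrevocably. Value densities are assumed to
    # lie in [L, U]. The prediction p_hat (a guessed critical density)
    # and the trust parameter lam in [0, 1], which trades consistency
    # (trust the advice) for robustness, are illustrative choices.

    def classic_threshold(z, L, U):
        """Worst-case exponential threshold at capacity fraction z."""
        return (U * math.e / L) ** z * (L / math.e)

    def run_policy(items, capacity, L, U, p_hat, lam):
        """Accept an item iff its value density clears the blended threshold."""
        used, value = 0.0, 0.0
        for v, w in items:  # (value, weight) pairs
            z = used / capacity
            threshold = lam * p_hat + (1 - lam) * classic_threshold(z, L, U)
            if w <= capacity - used and v / w >= threshold:
                used += w
                value += v
        return value

    # Example: a good prediction lets lam near 1 accept the high-density
    # items; a bad prediction is damped by the worst-case term as lam shrinks.
    items = [(2.0, 1.0), (8.0, 1.0), (3.0, 1.0), (9.0, 1.0)]
    print(run_policy(items, capacity=2.0, L=1.0, U=10.0, p_hat=7.5, lam=0.8))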

Bio: Mohammad Hajiesmaili is an Assistant Professor in the Manning College of Information and Computer Sciences at the University of Massachusetts Amherst. He directs the Sustainability, Optimization, Learning, and Algorithms Research (SOLAR) lab, where the research focus is on developing optimization, machine learning, and algorithmic tools to improve the energy and carbon efficiency of digital and cyberinfrastructure. He was named to Popular Science’s Brilliant 10 in 2022, featuring his work on the decarbonization of the internet. His awards and honors include an NSF CAREER Award, an Amazon Research Award, a Google Faculty Research Award, and other awards from NSF, VMware, and Adobe. He has received five best-paper runner-up awards at the ACM e-Energy conference.

Speaker: Ziv Scully

Title: The Role of Advanced Math in Teaching Performance Modeling

Abstract: How should we teach performance modeling without assuming a deep mathematical background? One approach is to focus on rigorously studying relatively simple stochastic models that do not require too much mathematical background. But this may leave students underprepared to reason about systems in practice: real systems have multiple servers, non-Poisson arrivals, heavy tails, and other features that demand more complex stochastic models. Yet rigorously studying such models requires more mathematical background than many computer science and engineering students have.
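
As a minimal sketch of what such a "relatively simple stochastic model" looks like in a course, the code below simulates a first-come-first-served M/M/1 queue and checks the estimate against the exact mean response time 1/(mu - lambda). The rates and sample size are arbitrary illustrations.

    import random

    # Minimal sketch of a "simple stochastic model" an intro course can
    # treat rigorously: an FCFS M/M/1 queue, where simulation results can
    # be checked against the closed-form mean response time 1/(mu - lam).
    # Parameter values here are arbitrary illustrations.

    def simulate_mm1(lam, mu, n_jobs, seed=0):
        """Simulate an FCFS M/M/1 queue; return the mean response time."""
        rng = random.Random(seed)
        arrival = depart = 0.0
        total_response = 0.0
        for _ in range(n_jobs):
            arrival += rng.expovariate(lam)          # Poisson arrivals
            start = max(arrival, depart)             # wait for the server
            depart = start + rng.expovariate(mu)     # exponential service
            total_response += depart - arrival
        return total_response / n_jobs

    lam, mu = 0.8, 1.0
    print("simulated E[T]:", simulate_mm1(lam, mu, n_jobs=200_000))
    print("exact     E[T]:", 1 / (mu - lam))  # = 5.0 for these rates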

For students to reason about complex stochastic models, I believe they need powerful, generally applicable tools. In the first half of this talk, I argue that students would be well served by tools like continuous-state Markov processes, state space collapse, and Palm calculus. But these are advanced topics even for students with a strong math background. Can we teach them to all students? I believe the key is to start from a higher-level mathematical foundation for performance modeling, optimizing for accessibility and modeling flexibility over rigor. In the second half of the talk, I outline what this foundation could look like.

Bio: Ziv Scully starts as an Assistant Professor at the Cornell School of Operations Research and Information Engineering in July 2023. He received his PhD from Carnegie Mellon University in summer 2022, where he was advised by Mor Harchol-Balter and Guy Blelloch. Following that, he was a research fellow at the Simons Institute for the Theory of Computing at UC Berkeley (fall 2022) and a postdoc at Harvard CS and MIT CSAIL, where he was mentored by Michael Mitzenmacher and Piotr Indyk (spring 2023).

Ziv researches queueing, stochastic processes, and decision-making under uncertainty. One particular focus of his has been scheduling, with the aim of enabling queueing theory to analyze more complex policies in more complex systems. His work on this topic has been recognized by multiple awards from INFORMS, ACM SIGMETRICS, and IFIP PERFORMANCE, including winning the 2022 INFORMS Nicholson Competition and receiving the 2022 SIGMETRICS Doctoral Dissertation Award.

Discussion Sessions

Beyond the invited talks, the success of the workshop depends on productive dialogue and deliberation among all participants. We therefore include two discussion sessions, during which the speakers will sit with the other attendees and lead the discussion on two topics:

(D1) What to teach

To keep up with the times, what new topics should be included in a performance modeling and analysis course, and what classical topics can be dropped?

What should the relationship be between such a course and (i) the standard curriculum for Computer Science and (ii) currently popular courses?

What do industry practitioners need to know regarding performance modeling and analysis? Etc.

(D2) How to teach

To suit students’ academic preparation and learning habits, what innovations can instructors bring to the design of assignments, labs, projects, and theses?

What has worked well, and what has not? Etc.