University of Illinois at Urbana-Champaign
Faria Kalim is a PhD candidate at the University of Illinois at Urbana-Champaign, advised by Professor Indy Gupta. Her research interests lie in the area of distributed systems, with a focus on building correct, reliable, and performant systems. She is a recipient of the Sohaib and Sara Abbasi Fellowship (2015-2020) and the Mavis Future Faculty Fellowship (2019-2020). She was a research intern at VMware Research in the summer of 2019 and at IBM Research in the summer of 2017. Prior to UIUC, she completed her undergraduate degree at the National University of Sciences and Technology, Pakistan, in 2015, where she was awarded the gold medal for being the top graduating student.
Stream Processing Systems with Performance Guarantees
Stream processing systems have become invaluable as more and more applications must process massive amounts of continuously arriving data in real time; indeed, the streaming analytics market is expected to grow to $13.7 billion by 2021. The jobs that run on these systems have strict performance goals, e.g., they must produce results with low latency. My research focuses on letting users easily express performance requirements for each stream processing job, and on building systems that scale these jobs automatically to meet their performance goals, thus massively reducing the burden on the user. To this end, I have worked on a) predicting the performance of a stream processing job as it scales up or down, and b) designing automated schedulers that place these jobs on machines in a low-cost, resource-efficient manner.

First, in collaboration with Twitter, we have built Caladrius, a system that forecasts the future traffic load of a stream processing job and models its expected performance, with or without changes in resource allocation. Second, we have devised Henge, a scheduler that lets users running multiple stream processing jobs on a single, consolidated cluster express each job's latency or throughput service level objective (SLO) as a single "intent" that holds in spite of variation in workload. In such a cluster, Henge adapts continually to meet jobs' respective intents despite limited cluster resources and dynamically varying workloads. I am also extending these ideas to let users trade off among job deployment options on cloud services such as AWS to meet their exact cost and performance goals.
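The intent-driven adaptation described above can be pictured as a feedback loop that measures each job against its SLO and steers spare resources toward the job furthest from its goal. The sketch below is purely illustrative, assuming a toy utility function and a simplistic latency model; the names (`Job`, `utility`, `rebalance_step`) are hypothetical and are not Henge's actual API.

```python
# Toy sketch of an intent-driven scheduling loop in the spirit of Henge.
# The utility function and latency model are illustrative assumptions,
# not the system's actual implementation.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    slo_latency_ms: float       # the job's "intent": target latency
    measured_latency_ms: float  # currently observed latency
    parallelism: int            # current number of instances

def utility(job: Job) -> float:
    """Fraction of the latency SLO currently achieved (1.0 = intent met)."""
    return min(1.0, job.slo_latency_ms / job.measured_latency_ms)

def rebalance_step(jobs: list[Job], spare_slots: int) -> list[str]:
    """Repeatedly give one extra instance to the job furthest from its
    intent, while spare cluster capacity remains."""
    actions = []
    while spare_slots > 0:
        worst = min(jobs, key=utility)
        if utility(worst) >= 1.0:
            break  # every job meets its SLO; stop consuming resources
        old_p = worst.parallelism
        worst.parallelism += 1
        # Toy model: latency shrinks inversely with parallelism.
        worst.measured_latency_ms *= old_p / worst.parallelism
        spare_slots -= 1
        actions.append(worst.name)
    return actions

# One job misses its SLO (utility 0.4), the other meets it (capped at 1.0),
# so both spare slots go to the struggling job.
jobs = [Job("ads", 100.0, 250.0, 2), Job("logs", 200.0, 180.0, 4)]
actions = rebalance_step(jobs, spare_slots=2)
```

In this toy run both spare slots are assigned to the job violating its SLO, while the job already meeting its intent is left untouched; a real scheduler must also handle reclaiming resources and convergence under changing workloads.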