Matt Fredrikson
Associate Professor, Software and Societal Systems
Bio
Matt Fredrikson is an Associate Professor in the School of Computer Science at Carnegie Mellon University. He is a member of CyLab, the Societal Computing Program, and the Principles of Programming Group. His research spans security, privacy, formal methods, and programming languages. Before joining Carnegie Mellon, he completed his Ph.D. in Computer Science at the University of Wisconsin-Madison in 2015.
Research
Security, privacy, and inference
Predictive models generated by machine learning algorithms are used extensively in modern applications. They allow analysts to distill complex data sources into succinct programs that produce valuable information about underlying trends and patterns. A large body of prior research examines the privacy risks that arise when these data sources contain sensitive or proprietary information and are leaked either in their original form or after “anonymization”. Much less well understood are the risks that arise when machine learning models trained on these data sources are made available through applications. In recent work I initiated the study of model inversion, which characterizes an attacker's ability to infer secret facts about the training data in this setting. I am also interested in the security risks that arise from deploying trained models in adversarial environments, where traditional methods of reasoning about program correctness offer little help.
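To make the idea concrete, the following minimal sketch illustrates one common style of model inversion attack: given white-box access to a trained classifier, gradient ascent on the input recovers a representative example of a target class from the model's own confidence scores. The model, input shape, and hyperparameters here are placeholders, and this is a simplified illustration rather than a faithful reproduction of any published attack.

import torch

def invert_class(model, target_class, input_shape, steps=500, lr=0.1):
    """Reconstruct an input the model strongly associates with target_class."""
    model.eval()
    # Start from a neutral (all-zero) input and treat it as the optimization variable.
    x = torch.zeros(1, *input_shape, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Maximize the model's confidence in the target class, i.e. minimize
        # the negative log-probability it assigns to that class.
        loss = -torch.log_softmax(logits, dim=1)[0, target_class]
        loss.backward()
        opt.step()
        # Keep the reconstruction inside the valid feature range (here [0, 1]).
        x.data.clamp_(0.0, 1.0)
    return x.detach()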
Models for privacy-aware programming
Modern applications often rely on detailed personal data collected from users, despite growing awareness among users and administrators of the risks of disclosing such information. A number of theoretical frameworks have emerged that give precise notions of acceptable disclosure, allowing algorithm designers to deliver functionality driven by personal data while still placing hard limits on the degree to which confidentiality is breached. The main appeal of these frameworks is their ability to provide rigorous guarantees, but subtle implementation mistakes often invalidate these guarantees in practice. To address this problem, I work on developing new theory, tools, and language-based techniques that simplify the task of writing correct privacy-aware code.
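As a concrete example of such a framework, the sketch below implements the Laplace mechanism from differential privacy, assuming a simple counting query with sensitivity 1; the epsilon value and data are illustrative. Even this short routine hides pitfalls: naive floating-point noise sampling is known to leak information, exactly the kind of subtle implementation mistake that motivates tool support.

import numpy as np

def noisy_count(records, predicate, epsilon):
    """Release a count under epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1: adding or removing a single record
    changes the true answer by at most 1, so Laplace(1/epsilon) noise suffices.
    Note: this uses numpy's textbook sampler; production implementations must
    guard against floating-point attacks on the noise distribution.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative query: how many of these ages exceed 40, with epsilon = 0.5?
ages = [23, 35, 41, 52, 29, 67]
print(noisy_count(ages, lambda a: a > 40, epsilon=0.5))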
Secure software systems
Some of the key outstanding challenges in security and privacy lie in understanding why promising theoretical approaches often fail to translate into effective defenses. Much of my work is concerned with developing formal analysis techniques that provide insight into the problems that might exist in a system, building countermeasures that give provable guarantees, and measuring the effectiveness of these solutions in real settings. Two recent themes in this work are flaws in implementations of popular cryptographic primitives and protocols, and hybrid enforcement models for temporal and stateful security policies in web applications and low-level systems code.
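One way to picture the enforcement side of this work: a temporal, stateful policy can be enforced at runtime by a security automaton that tracks state across program events and halts execution before a violating action. The policy, event names, and API below are invented for illustration and are not drawn from any specific system.

class PolicyViolation(Exception):
    pass

class NoSendAfterSecretRead:
    """Security automaton for the temporal policy:
    once a secret file has been read, no network send may occur."""

    def __init__(self):
        self.tainted = False  # automaton state: has a secret been read?

    def before_event(self, event, target):
        if event == "read" and target.startswith("/secrets/"):
            self.tainted = True  # transition to the tainted state
        elif event == "send" and self.tainted:
            # The next action would violate the policy; halt instead.
            raise PolicyViolation("network send after reading a secret")

monitor = NoSendAfterSecretRead()
monitor.before_event("read", "/secrets/key.pem")  # allowed; records the read
try:
    monitor.before_event("send", "example.com")   # blocked by the policy
except PolicyViolation as err:
    print("blocked:", err)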