Carnegie Mellon University

Today, technology and the data it generates about us are so interwoven with our daily lives that they have become nearly invisible to users. Our cell phones gather GPS and usage data, tracking our location. The applications we use store our credit card information and extensive activity logs. Smart home devices network with other devices in our homes, all unseen.

Now, perhaps more than ever, it is imperative that we better understand the key concerns involved in securing and anonymizing data across an ever-growing range of technologies. Nowhere is this more apparent than in the recent data breaches that have exposed countless users' private information and activity: data they believed to be safe and private.

Our research in Privacy and Security brings together a cross-disciplinary faculty to address these complex socio-technical challenges. Whether exploring security concerns in ubiquitous computing, analyzing data collection and management practices in the Internet of Things, or studying how to make privacy policies more useful, our faculty combine their domain expertise to do groundbreaking work that touches the lives of millions.

Example Research

Mites: A Privacy-Aware General-Purpose Sensing Infrastructure for Smart Buildings

The Mites project tackles one of the most complex challenges in smart building technology: how to deploy comprehensive sensing infrastructure while respecting privacy, ensuring security, and maintaining community trust in a shared space. Through an innovative combination of hardware design, privacy-preserving architecture, and extensive community engagement, the researchers developed a system that balances the competing needs of different stakeholders, from building managers seeking efficiency to occupants concerned about surveillance. The work exemplifies how technical innovation must be deeply integrated with social considerations: an iterative design process incorporated community feedback and led to novel solutions such as location obfuscation to protect occupant privacy. The project showcases the kind of interdisciplinary thinking central to Societal Computing, where cutting-edge technical solutions are shaped by and responsive to human needs and social dynamics.
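
As a rough illustration (not the Mites system's actual code), the sketch below shows one simple form of location obfuscation: coarsening a sensor reading's location tag from a precise coordinate to a floor- and zone-level label before it leaves the edge device. The field and function names are hypothetical.

```python
# Illustrative sketch only, not the Mites implementation: coarsen a sensor's
# location metadata to a zone-level label so exact positions never leave the device.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float
    x_m: float       # position within the building, in meters (hypothetical fields)
    y_m: float
    floor: int

def obfuscate_location(reading: Reading, zone_size_m: float = 10.0) -> dict:
    """Replace precise coordinates with a coarse zone identifier."""
    zone_x = int(reading.x_m // zone_size_m)
    zone_y = int(reading.y_m // zone_size_m)
    return {
        "sensor_id": reading.sensor_id,
        "value": reading.value,
        # Only the coarse zone is published; exact coordinates are dropped here.
        "location": f"floor-{reading.floor}/zone-{zone_x}-{zone_y}",
    }

if __name__ == "__main__":
    r = Reading(sensor_id="mite-042", value=23.7, x_m=18.4, y_m=3.1, floor=2)
    print(obfuscate_location(r))
```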

Learn more

Hertzbleed: Turning Power Side-Channel Attacks Into Remote Timing Attacks on x86

This work introduces Hertzbleed, a novel attack that turns power side channels into remote timing attacks on modern x86 CPUs. By exploiting the data-dependent frequency changes induced by dynamic voltage and frequency scaling (DVFS), Hertzbleed lets a remote attacker infer cryptographic keys from timing variations alone, without direct power measurements. The study has significant implications for cryptographic implementations: even constant-time code can be vulnerable to remote timing attacks, challenging long-held assumptions about side-channel resistance in CPU architectures.
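
To make the measurement side concrete, here is a minimal sketch of the general methodology (not the Hertzbleed exploit itself): an attacker who can only time requests compares latency distributions across two classes of inputs. The `query_server` function is a hypothetical stand-in for a remote call to a machine running constant-time cryptographic code.

```python
# Sketch of the timing-measurement methodology only. If DVFS makes CPU frequency
# depend on the data being processed, even code with a fixed instruction count
# shows a data-dependent wall-clock difference that a remote attacker can observe.
import statistics
import time

def query_server(payload: bytes) -> None:
    # Hypothetical stand-in for a network round trip to the victim server;
    # a real measurement would time an HTTP/TLS request instead.
    _ = sum(payload) % 251

def sample_latencies(payload: bytes, trials: int = 10_000) -> list[float]:
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        query_server(payload)
        samples.append(time.perf_counter() - start)
    return samples

if __name__ == "__main__":
    # Two input classes the attacker hypothesizes drive the CPU to different frequencies.
    class_a = sample_latencies(b"\x00" * 64)
    class_b = sample_latencies(b"\xff" * 64)
    print(f"median A: {statistics.median(class_a):.3e} s")
    print(f"median B: {statistics.median(class_b):.3e} s")
    # A persistent gap between the medians is the kind of signal Hertzbleed exploits.
```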

Learn more

Speculative Privacy Concerns About AR Glasses Data Collection

This paper explores privacy concerns related to data collection by future consumer-grade augmented reality (AR) glasses. Through semi-structured interviews with current AR users, the authors examine attitudes toward the collection of 15 types of sensitive data, such as face images, brain waves, and bystander voiceprints. Findings reveal diverse privacy concerns, often rooted in context-specific values and expectations. Participants expressed desires for customizable privacy controls and highlighted risks to marginalized groups. This research provides valuable insights for AR designers and policymakers on building privacy-respecting technologies in the evolving AR landscape.

Learn more

GFWeb: Measuring the Great Firewall's Web Censorship at Scale

This paper introduces GFWeb, a large-scale system designed to monitor and analyze web censorship by the Great Firewall (GFW) of China. Over a 20-month study involving more than a billion domain tests, GFWeb uncovers extensive HTTP and HTTPS blocking by the GFW, providing the most comprehensive dataset of censored domains to date. Findings highlight the GFW’s evolving methods, including its asymmetrical and bidirectional blocking behavior, which has implications for measuring and circumventing internet censorship. This work offers critical insights into the architecture of censorship systems and informs future censorship measurement and evasion strategies.
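
For intuition, the sketch below shows a single HTTP probe in the general style of GFW measurement work, not GFWeb's actual pipeline: send a request whose Host header names the domain under test along a path that crosses the GFW, and treat an injected connection reset as a censorship signal. The server address is a placeholder, not a real vantage point.

```python
# Minimal single-domain probe illustrating the general idea only. The GFW is known
# to inject TCP RSTs for blocked HTTP Host values; a large-scale system like GFWeb
# repeats and cross-validates such tests over many domains and vantage points.
import socket

PROBE_SERVER = ("203.0.113.10", 80)  # placeholder (TEST-NET-3), not a real server

def probe_domain(domain: str, timeout: float = 5.0) -> str:
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {domain}\r\n"
        f"Connection: close\r\n\r\n"
    ).encode()
    try:
        with socket.create_connection(PROBE_SERVER, timeout=timeout) as sock:
            sock.sendall(request)
            response = sock.recv(4096)
    except ConnectionResetError:
        return "likely censored (connection reset)"
    except socket.timeout:
        return "inconclusive (timeout)"
    return "reachable" if response else "inconclusive (empty response)"

if __name__ == "__main__":
    for d in ["example.com", "a-sensitive-test-domain.example"]:
        print(d, "->", probe_domain(d))
```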

Learn more

WaVe: a verifiably secure WebAssembly sandboxing runtime

WaVe advances WebAssembly (Wasm) security by creating a runtime system that enforces memory and resource isolation through automated verification. Designed to implement the WebAssembly System Interface (WASI), WaVe ensures that Wasm applications interact safely with the operating system without compromising memory or access boundaries. This study highlights WaVe’s ability to deliver robust security while performing on par with industry-standard Wasm runtimes, making it a critical innovation for secure Wasm deployments in various applications.
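
The snippet below is not WaVe itself; it uses the `wasmtime` Python bindings to sketch the kind of WASI capability confinement that WaVe formally verifies: the guest module can only touch the one host directory explicitly preopened for it. The module path is a placeholder, and exact method names may differ across wasmtime-py versions.

```python
# Sketch of WASI-style sandboxing using wasmtime's Python bindings (illustrative,
# not the WaVe runtime): the guest's filesystem view is limited to one preopened
# host directory, and all other OS access is denied by default.
from wasmtime import Engine, Store, Module, Linker, WasiConfig

engine = Engine()
store = Store(engine)

wasi = WasiConfig()
wasi.inherit_stdout()
wasi.inherit_stderr()
# The guest sees only this host directory, mounted at "/".
wasi.preopen_dir("sandbox_data", "/")
store.set_wasi(wasi)

linker = Linker(engine)
linker.define_wasi()

module = Module.from_file(engine, "app.wasm")   # placeholder WASI module
instance = linker.instantiate(store, module)

# WASI command modules expose `_start` as their entry point.
instance.exports(store)["_start"](store)
```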

Learn more

Legal Accountability as Software Quality: A U.S. Data Processing Perspective

This paper proposes "Legal Accountability" as a core software quality to embed regulatory compliance directly into software design, transforming it from a corporate oversight activity into a principal design focus. Through the lens of U.S. data processing law, the authors outline five essential properties—traceability, completeness, validity, auditability, and continuity—that ensure software accountability to the law. This perspective highlights how legal and engineering experts can co-design systems that prioritize compliance alongside other key software qualities, presenting a new framework for legally accountable software development.
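
The paper defines qualities rather than an API, but as a rough sketch of how "traceability" and "auditability" might surface in code, consider an append-only log that ties every processing event to the legal basis and policy clause it relies on. All class and field names below are hypothetical.

```python
# Illustrative sketch only: recording each data-processing event with its legal
# basis so auditors and engineers can trace what was processed, why, and under
# what authority.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProcessingEvent:
    data_subject_id: str
    data_category: str        # e.g. "location", "contact-info"
    purpose: str              # e.g. "fraud-detection"
    legal_basis: str          # e.g. "consent", "contract", or a statutory citation
    policy_clause: str        # pointer into the governing privacy policy
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AccountabilityLog:
    """Append-only record that can be replayed during an audit."""
    def __init__(self) -> None:
        self._events: list[ProcessingEvent] = []

    def record(self, event: ProcessingEvent) -> None:
        self._events.append(event)

    def events_for(self, data_subject_id: str) -> list[ProcessingEvent]:
        return [e for e in self._events if e.data_subject_id == data_subject_id]

if __name__ == "__main__":
    log = AccountabilityLog()
    log.record(ProcessingEvent("user-17", "location", "fraud-detection",
                               "consent", "privacy-policy §4.2"))
    print(log.events_for("user-17"))
```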

Learn more

TEO: Ephemeral Ownership for IoT Devices to Provide Granular Data Control

To address privacy concerns for users and bystanders around IoT devices in shared spaces, this paper introduces TEO (Ephemeral Ownership for IoT). TEO allows temporary co-ownership of devices and data, giving users control over collected information during their usage period while maintaining robust privacy through encrypted storage. Verified for security and implemented as a lightweight library, TEO shows minimal performance impact, making it a viable solution for shared IoT environments like rentals or offices where privacy and control are essential.
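
As a toy illustration of the core idea (not TEO's verified protocol), the sketch below encrypts data captured during a user's tenancy under a session key held by the temporary co-owner; discarding that key when the tenancy ends leaves the stored records unreadable to later occupants. It assumes the `cryptography` package, and all class names are hypothetical.

```python
# Toy sketch of ephemeral ownership: per-tenancy data keys that the temporary
# co-owner holds and the device forgets at checkout. Not TEO's actual design.
from cryptography.fernet import Fernet

class EphemeralSession:
    def __init__(self) -> None:
        self._key = Fernet.generate_key()      # held by the temporary co-owner
        self._fernet = Fernet(self._key)
        self.stored: list[bytes] = []          # what the device actually persists

    def record(self, reading: bytes) -> None:
        self.stored.append(self._fernet.encrypt(reading))

    def export_key(self) -> bytes:
        return self._key                       # handed to the co-owner's app

    def end(self) -> None:
        self._key = None                       # device forgets the key at checkout
        self._fernet = None

if __name__ == "__main__":
    session = EphemeralSession()
    session.record(b"occupancy=1 temp=22.5")
    guest_key = session.export_key()
    session.end()
    # Only someone holding guest_key can read what stayed on the device.
    print(Fernet(guest_key).decrypt(session.stored[0]))
```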

Learn more

SoK: Content moderation for end-to-end encryption

Our online interactions are increasingly safeguarded by end-to-end encryption (E2EE), yet this very protection complicates efforts to counter harmful content like hate speech and misinformation. Without solutions that balance privacy with accountability, we risk either compromising user security or allowing unchecked content to thrive. This paper provides a structured framework to tackle these challenges, examining detection, response, and transparency methods that preserve privacy while enabling effective content moderation. The research opens new paths for designing systems that address both security and societal harm, offering essential insights for researchers in security, cryptography, and policy.
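
One concrete "response"-side mechanism surveyed in this literature is message franking, where the sender commits to the plaintext so a recipient can later report it verifiably. The sketch below is a simplified HMAC-based version for illustration, not any particular messenger's scheme.

```python
# Simplified message-franking sketch: the sender commits to the plaintext with a
# one-time key; the recipient can report the message by revealing both, and the
# platform checks the commitment it stored at delivery time without ever having
# been able to read the message in transit.
import hashlib
import hmac
import os

def frank(message: bytes) -> tuple[bytes, bytes]:
    """Sender side: produce a one-time franking key and a commitment to the message."""
    franking_key = os.urandom(32)
    commitment = hmac.new(franking_key, message, hashlib.sha256).digest()
    return franking_key, commitment

def verify_report(message: bytes, franking_key: bytes, commitment: bytes) -> bool:
    """Platform side: check a reported message against the stored commitment."""
    expected = hmac.new(franking_key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, commitment)

if __name__ == "__main__":
    msg = b"abusive message contents"
    key, commit = frank(msg)                      # commitment travels with the ciphertext
    print(verify_report(msg, key, commit))        # True: report is authentic
    print(verify_report(b"forged", key, commit))  # False: forged report is rejected
```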

Learn more

Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks

AI technologies are advancing rapidly, but with them come a host of emerging privacy risks that threaten personal security in unprecedented ways. This paper presents a taxonomy of privacy risks specific to AI, derived from an analysis of 321 documented incidents. The authors identify 12 categories of risks, from deepfakes and physiognomic profiling to amplified surveillance, offering a structured framework to help AI practitioners understand and mitigate these threats. This taxonomy underscores the urgent need for robust privacy safeguards tailored to the unique challenges posed by AI.

Learn more