Alex Chen - PhD Research Portfolio

Alex Chen

PhD Candidate in Computer Science
Stanford University • AI Research Lab

About Me

Exploring the intersection of AI, ethics, and human-centered design

I am a PhD candidate in Computer Science at Stanford University, working under the supervision of Dr. Jane Smith in the AI Research Lab. My research focuses on developing ethical AI systems that can learn efficiently from limited data, with particular emphasis on transparency, fairness, and human-interpretable machine learning.

My work sits at the intersection of technical innovation and social responsibility. I'm passionate about creating AI systems that not only perform well but also provide meaningful explanations for their decisions, especially in high-stakes domains like healthcare and criminal justice.

Prior to my doctoral studies, I completed my Master's degree in Machine Learning at MIT, where I first became interested in the ethical implications of algorithmic decision-making. My undergraduate work was in Mathematics and Computer Science at UC Berkeley, with a minor in Philosophy.

Beyond research, I'm actively involved in science communication and policy discussions around AI governance. I frequently write about the challenges and opportunities in responsible AI development, and I mentor undergraduate students who are interested in pursuing research at the intersection of technology and society.

Research Projects

Current investigations in responsible AI and machine learning

Explainable AI for Healthcare

Developing interpretable deep learning models for medical diagnosis that provide clinically meaningful explanations. This work aims to build trust between AI systems and healthcare professionals while maintaining diagnostic accuracy.

Active Research

Fairness in Algorithmic Decision-Making

Investigating bias detection and mitigation techniques in machine learning systems used for consequential decisions, with a focus on developing frameworks that balance accuracy with fairness across different demographic groups.

Under Review

Privacy-Preserving Federated Learning

Creating protocols for collaborative machine learning that maintain strong privacy guarantees while enabling knowledge sharing across institutions, particularly in sensitive domains like healthcare and finance.

Collaboration

Human-AI Interaction Design

Exploring how to design AI interfaces that support human decision-making rather than replacing it. This includes work on explanation design, trust calibration, and human-in-the-loop systems.

Early Stage

Blog & Reflections

Thoughts on AI research, academia, and the PhD journey

Reflections on Responsible AI Research

📅 March 15, 2024 ⏱️ 6 min read 💭 Research Ethics
AI Ethics • Research • PhD Life

As AI systems become more prevalent in society, the responsibility of researchers to consider the broader implications of their work has never been more critical. In this post, I reflect on the challenges and opportunities in conducting responsible AI research...

The Interpretability-Performance Trade-off: A False Dichotomy?

📅 February 28, 2024 ⏱️ 8 min read 💭 Technical Deep-dive
Interpretability • Machine Learning • Theory

The common assumption that we must choose between model performance and interpretability may be outdated. Recent advances in explainable AI suggest we can have both, but it requires rethinking how we approach model design and evaluation...

Navigating the Third Year: Lessons from the PhD Trenches

📅 January 20, 2024 ⏱️ 4 min read 💭 Personal
PhD Journey • Academia • Personal Growth

The third year of a PhD is often described as the most challenging. Having recently emerged from this phase, I want to share some reflections on impostor syndrome, research pivots, and finding your voice as a researcher...

Publications

Peer-reviewed papers and preprints

Towards Trustworthy AI: A Framework for Explainable Medical Diagnosis
A. Chen, J. Smith, K. Williams | Conference on Neural Information Processing Systems (NeurIPS) 2024

We propose a novel framework for developing explainable AI systems in medical diagnosis that balances accuracy with interpretability. Our approach uses attention mechanisms and concept activation vectors to provide clinically relevant explanations while maintaining state-of-the-art performance on diagnostic tasks.
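
For readers curious what the concept-activation-vector component might look like in practice, here is a minimal sketch in the spirit of TCAV-style analysis: fit a linear probe that separates concept activations from random activations, take the normal to its decision boundary as the concept vector, and measure how often the class logit's gradient points along it. The function names, probe choice, and inputs are illustrative assumptions, not the paper's released code.

```python
# Hedged sketch of a concept-activation-vector (CAV) analysis step.
# Inputs are assumed to be precomputed layer activations and logit gradients.
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """Fit a linear probe separating concept vs. random activations;
    the CAV is the unit-normalized normal to its decision boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    cav = probe.coef_.ravel()
    return cav / np.linalg.norm(cav)

def concept_sensitivity_score(logit_grads, cav):
    """Fraction of examples whose class-logit gradient has a positive
    directional derivative along the CAV (a TCAV-style score)."""
    return float((logit_grads @ cav > 0).mean())
```
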

Fairness-Aware Meta-Learning for Few-Shot Classification
A. Chen, M. Johnson, R. Davis | International Conference on Machine Learning (ICML) 2024

This paper addresses the challenge of maintaining fairness across demographic groups in few-shot learning scenarios. We introduce a meta-learning algorithm that explicitly optimizes for both accuracy and fairness metrics, demonstrating improved equity without significant performance degradation.
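
As a rough illustration of what "explicitly optimizes for both accuracy and fairness metrics" can mean, the sketch below adds a demographic-parity penalty to a standard classification loss. It is a simplified stand-in, assuming binary classification and two demographic groups; the paper's actual fairness metric and its meta-learning inner/outer loop are not reproduced here.

```python
# Hedged sketch of a joint accuracy + fairness objective (not the paper's exact loss).
import torch
import torch.nn.functional as F

def fairness_aware_loss(logits, labels, groups, lam=1.0):
    """Cross-entropy plus lam * |gap in predicted positive rates|
    between two groups (groups is a tensor of 0s and 1s)."""
    ce = F.cross_entropy(logits, labels)
    pos_prob = torch.softmax(logits, dim=1)[:, 1]  # P(y_hat = positive class)
    gap = pos_prob[groups == 0].mean() - pos_prob[groups == 1].mean()
    return ce + lam * gap.abs()
```
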

Privacy-Preserving Federated Learning with Differential Privacy Guarantees
A. Chen, S. Lee, T. Brown | arXiv preprint 2024 (Under Review at ICLR)

We develop a federated learning protocol that provides formal differential privacy guarantees while maintaining competitive performance. Our approach uses novel noise injection and aggregation techniques to enable collaborative learning across sensitive datasets.
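
To make the "noise injection and aggregation" idea concrete, here is a hedged sketch of a DP-FedAvg-style server step: clip each client update, average, and add Gaussian noise calibrated to the per-client sensitivity. The clipping norm, noise multiplier, and function name are assumptions for illustration; a real protocol would also track the cumulative privacy budget with a privacy accountant.

```python
# Hedged sketch of clip-and-noise aggregation for differentially private
# federated averaging (illustrative parameters, not the paper's protocol).
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each client update to clip_norm, average them, and add Gaussian
    noise scaled to the averaged-update sensitivity (clip_norm / n_clients)."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```
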

Get In Touch

I'm always interested in discussing research collaborations and ideas