Alex Chen
About Me
Exploring the intersection of AI, ethics, and human-centered design
I am a PhD candidate in Computer Science at Stanford University, working under the supervision of Dr. Jane Smith in the AI Research Lab. My research focuses on developing ethical AI systems that can learn efficiently from limited data, with particular emphasis on transparency, fairness, and human-interpretable machine learning.
My work sits at the intersection of technical innovation and social responsibility. I'm passionate about creating AI systems that not only perform well but also provide meaningful explanations for their decisions, especially in high-stakes domains like healthcare and criminal justice.
Prior to my doctoral studies, I completed my Master's degree in Machine Learning at MIT, where I first became interested in the ethical implications of algorithmic decision-making. My undergraduate work was in Mathematics and Computer Science at UC Berkeley, with a minor in Philosophy.
Beyond research, I'm actively involved in science communication and policy discussions around AI governance. I frequently write about the challenges and opportunities in responsible AI development, and I mentor undergraduate students who are interested in pursuing research at the intersection of technology and society.
Research Projects
Current investigations in responsible AI and machine learning
Explainable AI for Healthcare
Developing interpretable deep learning models for medical diagnosis that provide clinically meaningful explanations. This work aims to build clinicians' trust in AI systems while maintaining diagnostic accuracy.
Active Research
Fairness in Algorithmic Decision-Making
Investigating bias detection and mitigation techniques in machine learning systems used for consequential decisions. The focus is on developing frameworks that balance accuracy with fairness across different demographic groups.
Under Review
Privacy-Preserving Federated Learning
Creating protocols for collaborative machine learning that maintain strong privacy guarantees while enabling knowledge sharing across institutions, particularly in sensitive domains like healthcare and finance.
Collaboration
Human-AI Interaction Design
Exploring how to design AI interfaces that support human decision-making rather than replacing it. This includes work on explanation design, trust calibration, and human-in-the-loop systems.
Early Stage
Blog & Reflections
Thoughts on AI research, academia, and the PhD journey
Reflections on Responsible AI Research
As AI systems become more prevalent in society, the responsibility of researchers to consider the broader implications of their work has never been more critical. In this post, I reflect on the challenges and opportunities in conducting responsible AI research...
The Interpretability-Performance Trade-off: A False Dichotomy?
The common assumption that we must choose between model performance and interpretability may be outdated. Recent advances in explainable AI suggest we can have both, but it requires rethinking how we approach model design and evaluation...
Navigating the Third Year: Lessons from the PhD Trenches
The third year of a PhD is often described as the most challenging. Having recently emerged from this phase, I want to share some reflections on impostor syndrome, research pivots, and finding your voice as a researcher...
Publications
Peer-reviewed papers and preprints
We propose a novel framework for developing explainable AI systems in medical diagnosis that balances accuracy with interpretability. Our approach uses attention mechanisms and concept activation vectors to provide clinically relevant explanations while maintaining state-of-the-art performance on diagnostic tasks.
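To make the concept-activation-vector idea concrete, here is a minimal sketch (not the method from the paper; all function names and toy data are illustrative assumptions): fit a linear classifier separating a layer's activations on concept examples from activations on random examples, take its normal vector as the CAV, and measure how the class logit changes along that direction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """Fit a linear boundary between concept and random activations;
    its unit normal vector is the concept activation vector (CAV)."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def concept_sensitivity(grad_of_logit_wrt_acts, cav):
    """Directional derivative of the class logit along the CAV:
    a positive value means the concept pushes the prediction up."""
    return grad_of_logit_wrt_acts @ cav

# Toy example: 128-dim layer activations for concept vs. random inputs
rng = np.random.default_rng(0)
concept_acts = rng.normal(0.5, 1.0, size=(50, 128))
random_acts = rng.normal(0.0, 1.0, size=(50, 128))
cav = concept_activation_vector(concept_acts, random_acts)
print(concept_sensitivity(rng.normal(size=128), cav))
```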
This paper addresses the challenge of maintaining fairness across demographic groups in few-shot learning scenarios. We introduce a meta-learning algorithm that explicitly optimizes for both accuracy and fairness metrics, demonstrating improved equity without significant performance degradation.
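As a loose illustration of jointly optimizing accuracy and a fairness metric (a toy single-task loss, not the paper's meta-learning algorithm), the sketch below adds a demographic-parity penalty to binary cross-entropy; the weight `lam` and the example batch are assumed values.

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    """Absolute difference in mean positive prediction rate between two groups."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def joint_loss(preds, labels, groups, lam=0.5):
    """Binary cross-entropy (accuracy term) plus a fairness penalty weighted
    by lam; a meta-learning variant would optimize this across few-shot tasks."""
    eps = 1e-8
    bce = -np.mean(labels * np.log(preds + eps)
                   + (1 - labels) * np.log(1 - preds + eps))
    return bce + lam * demographic_parity_gap(preds, groups)

# Toy batch: predicted probabilities, true labels, group membership
preds = np.array([0.9, 0.2, 0.7, 0.4])
labels = np.array([1, 0, 1, 0])
groups = np.array([0, 0, 1, 1])
print(joint_loss(preds, labels, groups))
```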
We develop a federated learning protocol that provides formal differential privacy guarantees while maintaining competitive performance. Our approach uses novel noise injection and aggregation techniques to enable collaborative learning across sensitive datasets.
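The paper's protocol is more involved, but a generic differentially private aggregation round in the DP-FedAvg style can be sketched as: clip each client's update to bound its influence, add Gaussian noise calibrated to the clipping norm, then average. The clipping norm and noise multiplier below are illustrative, not the paper's settings.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_mult=0.8, rng=None):
    """One DP aggregation round: clip each client's update vector,
    sum, add Gaussian noise scaled to clip_norm, and average."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Toy round: three clients, each submitting a 10-dim model-update vector
rng = np.random.default_rng(42)
updates = [rng.normal(size=10) for _ in range(3)]
print(dp_federated_average(updates, rng=rng))
```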
Get In Touch
I'm always interested in discussing research collaborations and ideas