I am a machine learning researcher and engineer completing an M.S. in Computer Science at the University of Southern California, with a background in Computer Engineering from Shahid Beheshti University.
My work spans deep learning theory, bioinformatics, and medical AI, with a focus on multiview and multimodal learning, generative and probabilistic models, and graph neural networks. This work has led to peer-reviewed publications in Nature Communications (2025) and IEEE Access (2024).
More recently, I have concentrated on large language models, particularly alignment, safety, and mechanistic interpretability. My ongoing work explores circuit-level understanding and behavioral steering of LLMs, including multi-layer steering methods inspired by transformer circuit discovery.
In parallel, I build end-to-end ML systems across multimodal RAG, LLM evaluation, and deep research agent pipelines. I am particularly interested in bridging theory and real-world systems, carrying insights from representation learning and interpretability into practical, scalable AI.

