Doctoral Consortium
Research in responsible computing and AI can involve interdisciplinary, multidisciplinary, and transdisciplinary work, which adds extra challenges to an already challenging doctoral career trajectory.
The Doctoral Consortium will take place on October 27th, 2025. Doctoral students in areas related to responsible computing, responsible AI, human-AI interaction, and policy who are admitted to the Doctoral Consortium will receive 1:1 and small-group mentorship from established responsible computing and AI faculty, along with opportunities for networking and establishing themselves as future voices in the area. We also plan career skill development sessions, such as proposal writing. Consortium participants will also be expected to present at a poster session on October 28th and to participate in interactive visioning sessions during the main Summit event.
The Doctoral Consortium will take place in the TSRB Building, 85 Fifth Street NW, Atlanta. You will find the entrance in the courtyard past the Moe's and Subway restaurants.

Monday, October 27th, 2025
The consortium will take place in the TSRB building, on the 2nd floor.
| Time | Location | Activity |
|---|---|---|
| 10:00 - 10:15 | TSRB 223 | Welcome |
| 10:15 - 10:55 | TSRB 223 | Lightning Talks |
| 10:55 - 11:05 | | Quick Break |
| 11:05 - 12:00 | GVU Cafe | Lightning Talks |
| 12:00 - 1:00 | GVU Cafe | Lunch with Mentors |
| 1:15 - 2:15 | GVU Cafe | Career Panel |
| 2:15 - 2:30 | | Break |
| 2:30 - 3:30 | GVU Cafe | Birds-of-a-feather small group sessions with mentors |
| 3:30 - 3:45 | | Break |
| 3:45 - 5:15 | GVU Cafe | Proposal Writing Tutorial |
| 5:15 | | Adjourn for the day |
Meet the 2025 Doctoral Consortium Cohort
- Sina Rismanchian (University of California, Irvine)
- His focus is on researching how to harness the benefits of computer science and data science to advance K-12 education and to explore educational theories using quantitative methods.
- Ali Shirali (University of California, Berkeley)
- “My research broadly explores how modern AI should be designed, deployed, and evaluated when embedded in human and societal contexts.”
- Anna Kawakami (Carnegie Mellon University)
- “My goal is to normalize workplace AI deployments that enhance worker expertise, capabilities, and wellbeing as a first-order objective.”
- Jordan Taylor (Carnegie Mellon University)
- “My research focuses on how queer communities negotiate the values embedded within sociotechnical systems, such as online communities and generative AI models.”
- Taneea S Agrawaal (University of Toronto)
- “My research envisions a human-centered approach to climate data, risk and modeling that expands contemporary climate technologies to be more place-based and environmentally just.”
- Saloni Dash (University of Washington)
- “My research interests primarily lie in understanding how AI can influence cognitive mechanisms underpinning (mis)information processing & sensemaking.”
- Eve Fleisig (University of California, Berkeley)
- “My research lies at the intersection of natural language processing (NLP) and AI ethics: how can we create NLP systems that we trust to work for all users, without perpetuating societal harms?”
- Jocelyn Shen (MIT Media Lab)
- “My research advances human-centered AI to foster empathic interaction and social connection, while safeguarding against socio-emotional harms of emerging technologies.”
- Charlotte Li (Northwestern University)
- “With my research, I aim to improve how people derive insights from complex, raw data and to make mass communication of information more accessible and equitable.”
- Shi Ding (Georgia Tech)
- “My work spans intelligent tutoring systems, responsible AI, AI in education and AI literacy.”
- Kartik Sharma (Georgia Tech)
- “Through my research, my goal is to study and build machine learning (🤖) models that are robust to perturbations (🦺) and can be easily controlled (🎛️) by users, specifically in relational (e.g., graph), ordered (e.g., language, decision process), and dynamic (e.g., dynamic graph, multi-agent) environments.”
- Yasmine Belghith (Georgia Tech)
- “My research investigates how people learn and interact with technologies to inform the design of interactive learning interventions and environments.”
- Lingqing Wang (Georgia Tech)
- His work focuses in part on how everyday users perceive and interact with explainable AI (XAI), emphasizing adoption barriers and design preferences.
- Xingyu Li (Georgia Tech)
- She explores how Emotion AI can be applied to enhance interpersonal relationships. Her approach is multifaceted, drawing on computer science, the arts, design, and mechanical engineering from the outset of each project.
- Charles Nimo (Georgia Tech)
- Currently, his research focuses on understanding and improving the cultural awareness and adaptability of large language models.
- Aman Khullar (Georgia Tech)
- “My research focuses on the social impact of AI, particularly how care work and workers could be supported through LLM-based systems.”
- Amal Alabdulkarim (Georgia Tech)
- Works on algorithms and representations for reasoning about how future expectations and past events play into an agent’s decision-making, in order to generate actionable explanations.
- Kayla Evans (Georgia Tech)
- Working on how power dynamics impact online communication (and vice versa) in the workplace.
Meet the Mentors
- Munmun De Choudhury, Georgia Tech
- Rosa Arriaga, Georgia Tech
- Mark Riedl, Georgia Tech
- Anna Huang, MIT
- Dylan Hadfield-Menell, MIT
- Chris MacLellan, Georgia Tech
- Ding Wang, Google
- Josiah Hester, Georgia Tech
Call For Participation (Closed)
Doctoral students in responsible computing, responsible AI, human-centered AI, human-AI interaction, policy, and related areas are encouraged to apply using the Application Form. Your advisor will need to submit a 1-page reference letter to Mark Riedl and Kartik Goyal.
Applications submitted before September 15, 2025 will receive full consideration.
We especially encourage doctoral students who are just before or just after their thesis proposal defense, as this is the period when additional mentorship can have substantial impact.
We also especially encourage doctoral students who are working in labs that operate from a different epistemic background than that of the student. For example, a student with traditional CS training working in a policy lab (or vice versa), or a student with traditional HCI training working in an AI lab (or vice versa).
Other strong candidates are welcome and will be fully considered.
Applicants selected for participation in the Consortium will have their flights, lodging, and other travel expenses covered (US only; sorry, our funds have restrictions).