Research & Publications


I explore ethical considerations that might arise from the use of collaborative filtering algorithms in dating apps. Collaborative filtering algorithms predict a target user's preferences from the past behavior of similar users. By recommending products through this process, they influence the news we read, the movies we watch, and more, and they have proven extremely powerful on platforms like Amazon and Google. Recommender systems on dating apps are likely to group people by race, since users exhibit similar patterns of behavior: users on dating platforms tend to segregate themselves by race, exclude races other than their own from romantic and sexual consideration, and generally show a preference for white men and women. As collaborative filtering algorithms learn from these patterns to predict preferences and build recommendations, they can homogenize the behavior of dating app users and exacerbate biased sexual and romantic behavior.
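The mechanism described above can be sketched in a few lines. This is a minimal, illustrative example of user-based collaborative filtering, not the algorithm of any actual dating platform: the toy "like" matrix, the `cosine_sim` helper, and the `predict` function are all assumptions made for demonstration. It shows how a user who has behaved like another user inherits that user's preferences in the recommendation score.

```python
import numpy as np

# Toy "like" matrix: rows are users, columns are profiles;
# 1 = liked, 0 = passed. All data here is illustrative.
ratings = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 0, 0, 0],   # user 1 -- behaves like user 0
    [0, 0, 1, 1, 0],   # user 2 -- opposite pattern
])

def cosine_sim(a, b):
    """Cosine similarity between two like-vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(target, item):
    """Score an unseen profile for `target` by weighting the
    other users' likes by their similarity to the target."""
    others = [u for u in range(len(ratings)) if u != target]
    weights = np.array([cosine_sim(ratings[target], ratings[u]) for u in others])
    likes = np.array([ratings[u, item] for u in others])
    total = weights.sum()
    return float(weights @ likes / total) if total else 0.0

# User 1 has not seen profile 4. Similar user 0 liked it, so the
# predicted score is high; dissimilar user 2's pass contributes
# nothing. The past behavior of "similar users" drives the
# recommendation -- including any racialized patterns it encodes.
score = predict(target=1, item=4)
```

Because the weights come entirely from behavioral similarity, any group-level pattern in past swipes (such as racial self-segregation) is reproduced and amplified in the scores, which is precisely the homogenizing effect discussed above.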

Morgan Luck’s gamer’s dilemma (Luck 2009) rests on our diverging intuitions about virtual murder and virtual child molestation in video games: virtual murder seems permissible while virtual child molestation does not, yet there is no obvious morally relevant difference between the two. I look into competitive games to expand Rami Ali's dissolution of the dilemma (Ali 2015): I argue that when competitors consent to participate in a competition, the rules of the competition supersede everyday moral intuitions. Virtual children cannot be represented as giving consent to be molested because (1) children cannot be represented as giving sexual consent, and (2) consent to the possibility of being molested cannot be given. This creates a morally relevant difference between virtual murder and virtual molestation.


  • Folk Theories and User Strategies on Dating Apps: How users understand and manage their experience with algorithmic matchmaking (with Min Kyung Lee), iConference 2022 – Information for a Better World: Shaping the Global Future, forthcoming

The goal of this paper is to understand the experience of users with algorithmic filtering on dating apps by identifying the folk theories and strategies that users employ to maximize their success. Research on dating apps has so far focused narrowly on what we call algorithmic pairing: an explicit pairing of two users through a displayed compatibility score. However, the algorithms behind more recent dating apps work in the background, and it is not clear to users if and how algorithmic filtering mediates their interaction with other users. This study identifies user goals and behaviors specific to dating apps that use algorithmic filtering: while some users employ various strategies to boost their “attractiveness score” to match with as many people as possible, others attempt to teach the algorithm about their unique preferences if they believe that the filtering is not working in their favor. Our research adds to the growing literature on folk theory formation by introducing dating apps as a novel context for research. Since folk theories are developed with specific goals in mind, they reveal user concerns around algorithmic filtering. Our hope is that this paper starts a conversation on the practical and ethical question of algorithmic intervention in sexual and romantic preferences and behavior.

Prior research has established the feasibility of conducting online interviews and observations, yet there is limited guidance on how to interact with participants when conducting fully mediated research with screen-sharing and video. This study, conducted during the early phases of the COVID-19 pandemic, included 15 volunteer tweet annotators working with an emergency response organization. This methods contribution uses cues-related and surveillance theories to reveal challenges and best practices when asking research participants to share their screen, be on video, and participate in a multiple-interview study. The findings suggest that researchers conducting online-mediated research should be prepared to provide technical support for the devices and interfaces participants use during the study, find ways to “see” beyond what is on the mediated screen, and consider ethical issues not often discussed. In addition to these findings, an output of this research is two brief training videos useful for other researchers embarking on fully mediated research.


  • Bad AI and Beyond: Exploring How Popular Media Shape the Perceived Opportunities and Threats of AI: I am currently working with a research team to study entertainment media's role in shaping the public's perception of artificial intelligence. The project is part of Good Systems, an initiative by the University of Texas at Austin to investigate how to define, evaluate, and build a “Good System.”

  • Human-AI Teaming for Big Data Analytics to Enhance Response to the COVID-19 Pandemic: At the start of the COVID-19 pandemic, I worked with a research team that received a National Science Foundation RAPID/Collaborative grant to study the real-time decisions that digital volunteers make when quickly converting social media data into codes for machine learning. This will allow us to improve the human-machine teaming process. The linked grant page gives the full details of the award.



  • UT Austin Graduate Student Assembly Travel Award, Spring 2021
    $500 - Travel award to present my work on algorithmic matchmaking at the University of Arizona's Graduate Student Feminist Philosophy Conference.

  • Vice-President for Research Special Research Grant, Fall 2020
    $900 - Research Grant to study user experience with algorithmic matchmaking on dating apps.

  • Association Philippe Jabr Scholarship, Fall 2020
    $10,000 - Competitive university scholarship to Lebanese students in higher education.

  • Good Systems COVID-19 Research Graduate Student Award, Summer 2020
    $3,960 - Graduate Research Funding to assist with COVID-19 research teams at UT Austin (see above).