Latest AI Research From CMU and LinkedIn Explains Long-term Dynamics of Fairness Intervention in Connection Recommender Systems

    Nowadays, several social networking sites rely heavily on connection recommendations. Surveys reveal that recommended connections may account for more than 50% of the social network graph. Connection recommender systems present users with a list of suggested connections through prompts like “People you may know.” Being connected to the right people brings significant benefits, such as career opportunities or greater visibility on a platform, depending on the use case. Given these platforms’ influence, it becomes crucial to treat all users fairly. It is not clear, though, how fairness can be upheld, or even what a “fair” system means in this setting. Moreover, most interventions enforce equality only at discrete steps, while fairness in a dynamic system like this depends on second-order effects that compound over time. In connection recommendation, an intervention typically means imposing a parity requirement on each slate of recommendations, such as ensuring equal exposure of male and female users for each query, while expecting that this will eventually lead to more equitable network sizes.

    Researchers from Carnegie Mellon University and LinkedIn thoroughly investigated what counts as ‘fair’ in connection recommender systems. They showed that standard statistical notions of recommendation fairness cannot ultimately guarantee equitable network sizes. The researchers further concluded that such fairness interventions are not even enough to stop the amplification of preexisting biases. The recommendation cycle they consider works as follows: a user requests connection recommendations, for instance by opening the platform’s “People You May Know” page. The system then computes relevance scores between the source user (the user seeking recommendations) and other unconnected users (the destination users). Subject to fairness constraints, a ranking of destination users is produced from these relevance scores, and the source user is shown these potential connections. The cycle then repeats after new connections have been formed.
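    The cycle described above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the `scores` callable, the acceptance rule, and the slate size `k` are all assumptions made for the sketch.

```python
import random

def recommend_cycle(graph, scores, source, k=5):
    """One pass of the recommendation cycle (illustrative sketch only).

    graph  : dict mapping each user id to a set of connected user ids
    scores : callable (source, dest) -> relevance score
    """
    # 1. Candidate destinations: everyone the source is not yet connected to.
    candidates = [u for u in graph if u != source and u not in graph[source]]
    # 2. Rank candidates by relevance score; a fairness constraint would
    #    re-rank or randomize this step.
    slate = sorted(candidates, key=lambda u: scores(source, u), reverse=True)[:k]
    # 3. Here the source accepts each suggestion with probability equal
    #    to its score, and accepted connections update the graph.
    for dest in slate:
        if random.random() < scores(source, dest):
            graph[source].add(dest)
            graph[dest].add(source)
    return slate
```

    New connections formed in step 3 change the graph, and therefore the candidate sets and scores of later queries, which is exactly why the dynamics matter.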

    For each recommendation query, the system receives a collection of relevance scores, from which it derives a probabilistic ranking. The ranking probabilities are chosen to maximize the source user’s expected utility, a common practice in the recommendation literature. The key modeling question is how the relevance scores are produced. Relevance scores typically model the probability of a connection forming if recommended, a measure of downstream engagement, or some combination of the two. The researchers chose a synthetic logistic regression model with three plausible features. First, when the prediction target is the probability of connection, the network size of the source member is used: users with larger networks are typically more engaged on the platform and thus more likely to be proactive in forming connections. Second, users are assumed more likely to connect to destination users who are themselves well connected. Finally, users are assumed more likely to connect if they share similar demographics, interests, educational backgrounds, jobs, and so on.
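    A minimal sketch of such a logistic relevance model is below. The weights and bias are illustrative placeholders, not the paper's fitted parameters; the three features mirror the ones described above.

```python
import math

def relevance_score(src_degree, dst_degree, similarity,
                    w=(0.05, 0.05, 1.0), b=-4.0):
    """Hypothetical logistic model of connection probability.

    src_degree : network size of the source user
    dst_degree : network size of the destination user
    similarity : demographic/interest similarity in [0, 1]
    Weights w and bias b are illustrative, not the paper's values.
    """
    z = b + w[0] * src_degree + w[1] * dst_degree + w[2] * similarity
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability in (0, 1)
```

    With positive weights, the predicted connection probability rises with either user's network size and with their similarity, which is what drives the feedback loop studied in the paper.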

    For the simulation, the researchers assume a fixed-size graph of evolving connections with two groups of users; the majority group, which is initially better connected, comprises 65% of all members. First, a ground-truth model was constructed by fixing the parameter values of the logistic regression function described above. This function was then used to simulate a dataset of recommendations and formed connections, on which the system’s logistic regression model was trained. The entire simulation was run with and without each fairness intervention to allow comparison, and the final results were averaged over ten runs. The findings showed that, in the absence of intervention, the gap in average network size between groups widens rapidly over time. Network sizes exhibit a group-wise rich-get-richer effect, resulting in a power-law distribution with a lower mean in the minority group.
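    The structure of such a simulation can be sketched as follows. This is a deliberately simplified toy, not the paper's experiment: the score function, group bonus, number of rounds, and slate size are all assumptions, and the majority's initial advantage is approximated here through group size and a homophily-style bonus rather than a seeded graph.

```python
import math
import random

def simulate(n=200, majority_frac=0.65, rounds=30, slate_size=5, seed=0):
    """Toy two-group recommendation simulation (illustrative only)."""
    rng = random.Random(seed)
    n_major = int(n * majority_frac)
    group = [0] * n_major + [1] * (n - n_major)   # 0 = majority, 1 = minority
    degree = [0] * n

    def score(s, d):
        # relevance grows with the destination's degree (rich-get-richer)
        # plus a same-group homophily bonus
        z = 0.02 * degree[d] + (0.5 if group[s] == group[d] else 0.0)
        return 1.0 / (1.0 + math.exp(-z))

    for _ in range(rounds):
        s = rng.randrange(n)                      # a user requests recommendations
        candidates = [d for d in range(n) if d != s]
        slate = sorted(candidates, key=lambda d: score(s, d),
                       reverse=True)[:slate_size]
        for d in slate:
            if rng.random() < score(s, d):        # suggestion accepted
                degree[s] += 1
                degree[d] += 1

    def mean_degree(g):
        return sum(deg for deg, gr in zip(degree, group) if gr == g) / group.count(g)
    return mean_degree(0), mean_degree(1)
```

    Tracking the two group means over many rounds (and averaging over runs, as the paper does) is how the widening network-size gap would be measured.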

    Under the demographic parity of exposure intervention, the majority group’s share of exposure in recommendations drops to 66.3%, indicating that the intervention has the intended effect on exposure. However, majority group members, who already have larger networks, request recommendations more frequently. And even with equal exposure, majority group members are more likely to receive connection invitations because of their higher relevance scores. The dynamic parity of utility intervention addresses some of these issues: as intended, the majority group’s share of newly formed connections drops to 65.4%. Despite this constraint, the gap in average network sizes still widens over time. One cause is that majority group members request recommendations more frequently; in addition, majority source users tend to form more connections per recommendation query, which further contributes to their larger share.
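    One simple way to realize an equal-exposure constraint at the slate level is to re-rank candidates so that slate positions alternate between groups. This is a sketch of the general idea, not the paper's exact intervention; the function name and the strict alternation policy are assumptions.

```python
def parity_rerank(ranked, groups, k):
    """Build a slate of k candidates with (roughly) equal group exposure.

    ranked : candidate ids in descending relevance order
    groups : dict mapping candidate id -> group label (0 or 1)
    """
    by_group = {0: [u for u in ranked if groups[u] == 0],
                1: [u for u in ranked if groups[u] == 1]}
    slate, turn = [], 0
    while len(slate) < k and (by_group[0] or by_group[1]):
        if by_group[turn]:                 # take the best remaining candidate
            slate.append(by_group[turn].pop(0))
        else:                              # fall back if this group ran out
            slate.append(by_group[1 - turn].pop(0))
        turn = 1 - turn                    # alternate groups per position
    return slate
```

    As the findings above indicate, equalizing exposure this way does not equalize outcomes: acceptance still depends on relevance scores and on who requests recommendations, so the network-size gap can keep growing.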

    The study demonstrates that unconstrained connection recommendation produces a group-wise rich-get-richer effect. Although enforcing demographic parity of exposure or dynamic parity of utility between groups reduces bias amplification, it is insufficient to stop network-size disparities from growing over time. Connection recommendation acts on a dynamic system, and achieving fair outcomes over time requires taking those dynamics into account. Overall, one-shot or time-aggregated static fairness measurement in recommender systems can create a false perception of fairness and encourage fairness-enhancing techniques with unintended consequences. The team’s research has been published in the proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES 2022).

    This article is a research summary written by Marktechpost staff based on the research paper 'Long-term Dynamics of Fairness Intervention in Connection Recommender Systems'. All credit for this research goes to the researchers on this project. Check out the paper and reference article.

    Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Goa. She is passionate about the fields of Machine Learning, Natural Language Processing and Web Development. She enjoys learning more about the technical field by participating in several challenges.
