In a new paper, researchers from Carnegie Mellon University and Stevens Institute of Technology propose a new way of thinking about the fairness of AI decisions. They draw on a well-established tradition known as social welfare optimization, which aims to make decisions fairer by focusing on the overall benefits and harms to individuals. This method can be used to evaluate the industry-standard assessment tools for AI fairness, which compare approval rates across protected groups.

"In assessing fairness, the AI community tries to ensure equitable treatment for groups that differ in economic level, race, ethnic background, gender, and other categories," explained John Hooker, professor of operations research at the Tepper School of Business at Carnegie Mellon, who coauthored the study and presented it at the International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR) on May 29 in Uppsala, Sweden, where the paper received the Best Paper Award.

Imagine a situation in which an AI system decides who gets approved for a mortgage or who gets a job interview. Traditional fairness methods might only ensure that the same percentage of people from different groups is approved. But what if being denied a mortgage has a much bigger negative impact on someone from a disadvantaged group than on someone from an advantaged group? By employing social welfare optimization, AI systems can make decisions that lead to better outcomes for everyone, especially for those in disadvantaged groups.

The study focuses on "alpha fairness," a method for striking a balance between being fair and producing the most overall benefit. Alpha fairness can be tuned to weight fairness or efficiency more heavily, depending on the situation. Hooker and his coauthors show how social welfare optimization can be used to compare the group fairness assessments currently used in AI.
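The alpha-fairness tradeoff described above can be sketched numerically. The formula below is the standard alpha-fairness welfare function from the economics literature; the utility numbers are illustrative assumptions, not figures from the paper. For positive individual utilities, alpha = 0 recovers the utilitarian sum, alpha = 1 uses the proportional-fairness (logarithmic) form, and larger alpha weights the worst-off individuals more heavily:

```python
import math

def alpha_fair_welfare(utilities, alpha):
    """Alpha-fairness social welfare of a vector of positive utilities.

    alpha = 0 is the utilitarian sum; alpha = 1 is the proportional-
    fairness (log) form; alpha -> infinity approaches max-min fairness.
    """
    if alpha == 1:
        return sum(math.log(u) for u in utilities)
    return sum(u ** (1 - alpha) / (1 - alpha) for u in utilities)

# Hypothetical utilities for four applicants under two mortgage-approval
# decisions. Decision B shifts one approval toward a disadvantaged
# applicant for whom denial is costlier; total utility is the same.
decision_a = [10.0, 2.0, 8.0, 1.0]
decision_b = [10.0, 2.0, 4.0, 5.0]

for alpha in (0, 1, 2):
    wa = alpha_fair_welfare(decision_a, alpha)
    wb = alpha_fair_welfare(decision_b, alpha)
    print(f"alpha={alpha}: A={wa:.3f}  B={wb:.3f}")
```

At alpha = 0 the two decisions tie (both utility sums equal 21), so a purely utilitarian criterion is indifferent between them; at alpha = 1 or 2 the welfare function prefers decision B, reflecting the larger harm that denial imposes on the disadvantaged applicants.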
By using this method, we can understand the benefits of applying different group fairness tools in different contexts. It also ties these assessment tools to the larger world of fairness-efficiency standards used in economics and engineering.

Derek Leben, associate teaching professor of business ethics at the Tepper School, and Violet Chen, assistant professor at Stevens Institute of Technology, who received her Ph.D. from the Tepper School, coauthored the study.

"Common group fairness criteria in AI typically compare statistical metrics of AI-supported decisions across different groups, ignoring the actual benefits or harms of being selected or rejected," said Chen. "We propose a direct, welfare-centric approach to assess group fairness by optimizing decision social welfare. Our findings offer new perspectives on selecting and justifying group fairness criteria."

"Our findings suggest that social welfare optimization can shed light on the intensely discussed question of how to achieve group fairness in AI," Leben said.

The study is important for both AI system developers and policymakers. Developers can create more equitable and effective AI models by adopting a broader approach to fairness and understanding the limitations of existing fairness measures. The work also highlights the importance of considering social justice in AI development, ensuring that technology promotes equity across diverse groups in society.