Renowned computer scientist Rediet Abebe describes how computing can be used for social good, and how far we still have to go.

Can computing be used for social good? It’s a question socially conscious computer scientists struggle with, especially now, as evidence mounts that AI algorithms perpetuate the stereotypes and harms already present in society rather than mitigating them.

Last Friday, students gathered for a lecture at the Brown Institute, a joint Columbia-Stanford venture into cutting-edge data science and computing ideas. The lecture, aptly named “Roles for Computing in Social Justice”, was led by Rediet Abebe, a shining star among young computing scholars in academic circles.

Abebe began the lecture by reminding the (virtual) room of the situation before us. This year, more than ever, underlying issues in our society have come to light, exposing how fundamentally our systems fail to work for everyone within them. She brought up the example of essential workers: immigrants, and specifically undocumented immigrants, make up a vast share of the essential workforce, so when COVID hit, propping up our society rested largely on their shoulders. The problem is that undocumented immigrants, and low-income workers in general, are among those most shut out of American healthcare, and so the people most at risk of and most exposed to COVID-19 were also the ones furthest from support systems and ways to keep safe. In parallel, we have discovered, time and time again, how the AI models we trained to be “impartial” in fact reflect a deep, ugly bias against the very people they were meant to protect. Abebe brought up the specific example of AI used in criminal justice to identify suspects, which was found to have a severe racial bias.

With that context, Abebe breaks down the three pitfalls of using computing for social good. First, solutionism: the assumption that societal problems can be solved by computing; in other words, the idea that “if we just find the right algorithm, the problem will fix itself.” Second, tinkering: the assumption that societal structures are fixed, and that the best we can do as computer scientists is tinker with them rather than reinvent them. In other words, computer scientists tend to want to improve a current system instead of creating a new one, even if the old one is built on a fundamentally bad foundation. Finally, diversion: distracting from the root cause of a problem by addressing its symptoms instead. In other words, this is a way to shift accountability away from computing and towards some unspecified “other” who is the cause of societal problems.

So, given these pitfalls, does computing even have a role to play? With so much room for error, it might seem that computing should not take on such huge problems at all, and should instead stick to small-scale, less morally hefty subjects. Abebe argues that, in fact, computing can be key to solving societal problems, as long as it is used intentionally and in the right ways. She presents four roles that create a framework for understanding computing as a tool for social good. In the lecture she goes over the first two in detail, but all four are presented in her co-authored paper Roles for Computing in Social Change.

The first role is computing as a diagnostic tool. Perhaps better than any other tool, computing can be used to precisely measure social problems and diagnose how they manifest in technical systems. In fact, the majority of current work on computing for social good is diagnostic in nature; studies of gender and racial bias in facial recognition are one example.

Abebe brings up a specific project she worked on with Microsoft, on a team trying to understand how HIV/AIDS information and misinformation spreads through search engines, and how bias could factor into search results. The team analyzed search queries and results from people in African countries, where HIV/AIDS is an incredibly serious and widespread problem. From this analysis, they were able to glean the common types of questions asked about HIV/AIDS and group them into categories: symptoms, drugs, natural cures, stigma, breastfeeding, and others. They then looked into how search engines answer each type of question, and how accurate the presented information is.

One such discovery was that questions about natural cures surface far more disreputable information. For example, for the question “does garlic cure AIDS,” a search engine’s first result comes from mirclegarlic.com, a site that spreads a great deal of misinformation. On the other hand, the search “antiretroviral treatment for HIV” yields reputable medical sources and a detailed Wikipedia page as the first results. The team was able to quantify the “quality” of answers given by search engines, and found that searches relating to symptoms and natural cures often yielded low-quality information, while searches relating to drugs and breastfeeding often yielded high-quality information. Abebe explains a few causes for this phenomenon, one of which is simply that there is less high-quality information available about natural cures and symptoms. As she put it, “If the page doesn’t exist, the search engine can’t find it”. In fact, when the team looked further into this idea, they found that medical websites, such as those of the NIH or the WHO, have roughly five times more pages about drugs than about natural cures. Of course, this makes sense, as natural cures are not scientific, but the result is that people who search for natural cures are not directed to good sources, and are instead shown mostly bad ones.
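To make the idea of quantifying result “quality” concrete, here is a minimal sketch of how such an analysis might look. The domain lists, scoring scheme, and toy queries below are illustrative assumptions of mine, not data or code from the Microsoft study.

```python
# A toy quality scorer for search results, grouped by query category.
# Domain labels and scores here are hypothetical, not from the study.

REPUTABLE = {"nih.gov", "who.int", "en.wikipedia.org"}  # known medical authorities
DISREPUTABLE = {"mirclegarlic.com"}                     # known misinformation source

def domain_score(domain: str) -> float:
    """Score one result domain: 1.0 reputable, 0.0 disreputable, 0.5 unknown."""
    if domain in REPUTABLE:
        return 1.0
    if domain in DISREPUTABLE:
        return 0.0
    return 0.5

def category_quality(top_results: dict) -> float:
    """Average the score of the first result across every query in a category."""
    scores = [domain_score(results[0]) for results in top_results.values() if results]
    return sum(scores) / len(scores)

# Toy data: the top results returned for one query in each category.
natural_cures = {"does garlic cure AIDS": ["mirclegarlic.com", "who.int"]}
drugs = {"antiretroviral treatment for HIV": ["en.wikipedia.org", "nih.gov"]}

print(category_quality(natural_cures))  # 0.0 -> low-quality top result
print(category_quality(drugs))          # 1.0 -> high-quality top result
```

Even a crude scorer like this, run over thousands of queries, would surface the pattern Abebe describes: categories whose top results cluster toward the low end of the scale.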

This example was all to show how computing can be a very effective diagnostic tool. By analyzing search data, the team was able to determine one of the causes of the spread of misinformation on the internet, and was even able to track down popular myths and ideas about HIV/AIDS to inform medical professionals. An important takeaway, though, is that diagnosis does not equal treatment. From this data, it might be easy to say “well then, just make more pages debunking misinformation”, but that answer is not implied by the data, nor is it necessarily the correct one. Computing can certainly inform treatment, but a computed diagnosis should never be confused with an actual treatment for the problem. Another, broader example of this is the tracking of statistics about mass incarceration. Essentially every demographic statistic about mass incarceration has been calculated, down to incredible specifics. But knowing the details of a problem does not mean we know how to solve it, as evidenced by the fact that mass incarceration continues today, despite all the statistics we have.

Abebe’s second role is computing as a formalizer. Computing as a discipline requires problems to be defined in terms of very clear inputs and outputs, so reshaping societal problems into inputs and outputs can reveal the core nature of the issue at hand. There are currently many vague statements in law and policy – a social worker should act in the best interest of a child, an employer aims to hire the most qualified applicant, a policy aims to improve quality of life, and so on. To solve these problems mathematically, those vague terms must be defined and explored.

The example Abebe brings up for this role is eviction and housing insecurity, a very real problem, especially in COVID times. In her 2020 paper Subsidy Allocations in the Presence of Income Shocks, Abebe and her co-authors explore how income shocks (that is, sharp, unexpected drops in income) affect eviction rates, and how governments can best aid people so that income shocks do not result in eviction. What they found was that, depending on how you set up the optimization problem, the results could be vastly different. For example, do you want to get the maximum number of people above a certain threshold, or instead make sure that nobody falls below a different, lower threshold? Depending on the terms of the problem, the government should either pay out small sums at regular intervals (such as a monthly check) or give large one-time grants to individuals who apply. And of course, the sums to be given out vary along with the rest of the problem. Thus, although this paper does not give a clear diagnosis, it shows governments and other aid-givers how they can formalize a problem to calculate how much aid they should give, and by what mechanism.
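As a rough illustration of how much the chosen objective matters, here is a minimal sketch of the two formulations described above. The greedy heuristics and toy numbers are my own simplifications for exposition, not the actual model from the paper.

```python
# Two toy formalizations of "who should receive subsidies?". Both spend the
# same budget but optimize different objectives, so they favor different
# allocations. Illustrative heuristics only, not the paper's model.

def count_above_threshold(incomes, budget, threshold):
    """Objective A: maximize how many households end up at or above `threshold`.
    Greedy: lift the households that are cheapest to lift first."""
    gaps = sorted(max(0, threshold - y) for y in incomes)
    helped = 0
    for gap in gaps:
        if gap > budget:
            break
        budget -= gap
        helped += 1
    return helped

def raise_worst_off(incomes, budget, step=1):
    """Objective B: raise the minimum income as high as possible.
    Greedy: repeatedly give the next unit of budget to the worst-off household."""
    incomes = list(incomes)
    while budget >= step:
        incomes[incomes.index(min(incomes))] += step
        budget -= step
    return min(incomes)

incomes = [5, 8, 20, 40]  # toy monthly incomes, arbitrary units
budget = 10
print(count_above_threshold(incomes, budget, threshold=15))  # 3 households end up at/above 15
print(raise_worst_off(incomes, budget))                      # minimum income rises from 5 to 11
```

Note the contrast: objective A spends the entire budget lifting the household already closest to the threshold and gives nothing to the poorest one, while objective B does the opposite. The “right” answer is not a computational question; formalizing the problem simply makes the value judgment visible.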

The final two roles, not discussed in the lecture, are that “Computing serves as rebuttal when it illuminates the boundaries of what is possible through technical means. And computing acts as synecdoche when it makes long-standing social problems newly salient in the public eye,” a quote taken directly from Abebe’s aforementioned paper. She did not describe these roles during the lecture, but encouraged listeners to read through the paper themselves.

Abebe finished the lecture with a more direct call-out to computer scientists. Sometimes, when computer scientists create solutions that do not entirely work, their rebuttal is “well, that’s the policy-maker’s job” or “that’s an implementation issue, not an algorithm issue.” Abebe cautions that computing that does not take implementation into account is a job half-done. Computing projects, especially ones that deal with social good or change, must be thought through to completion, if not by every team member, then at least by a few. If a solution only works up to a certain point in the pipeline, we have no idea whether it works at all, because we cannot trust that things will simply work just because we set them in motion. To mitigate this, Abebe advises computer scientists to build relationships with the communities they are trying to help, and to ask community members to weigh in on solutions. People on the ground will always have an innate sense of what will work and what won’t, and will be able to point out flaws in an algorithm that more removed team members may not catch.

As the lecture finished, moderator Mark Hansen opened a short Q&A. One question with a particularly long answer was “Should we train computer scientists to be more like humanities majors?” Abebe’s answer was essentially yes. Currently, computing is treated as a hard science and operates under the assumption that “if we just give people the right tools, they’ll do the right thing.” As has been proven enough times, that is not the case. Abebe said that computing needs to be taught in a more human-focused way, with a particular focus on diversifying computing spaces and on teamwork and cooperation. On diversity specifically, Abebe stressed that it is not just a flowery term to put on college brochures: diversity fundamentally creates better solutions and more well-rounded computer scientists, because, as mentioned before, if you don’t have a person in the room with a certain lived experience, the best you can ever do is guess and approximate, which does not always yield a good result.

Abebe also brought up how computer science programs are often tailored for careers in big tech – the pipeline to working at Google, Microsoft, and other tech companies is well understood, and students are pushed towards careers in that sphere. She described how, when she was an undergrad, she wanted to look into careers at NGOs or other non-profits, but there were so few resources that the closest she could come was academia. An easier pipeline into NGO and non-profit work, she argued, could be incredibly useful, both for aspiring students and for the organizations themselves.

AI-themed header via Pixabay