How designing better algorithms can help us design better, more just societies

Computer algorithms play a role in many parts of our lives, big and small: from the content that shows up in our social media feeds and search results, to the quality of our education and our chances of getting a job interview.
Lately, a kind of reckoning has begun as people realize these algorithms are not the perfectly rational and impartial alternatives to flawed human reasoning we once thought them to be. Now, computer scientists are looking for ways to fix algorithms by adding the human element back into their design.
Computer scientist Rediet Abebe believes that better algorithm design principles can be used to diagnose some of the deep-seated societal issues that show up in algorithmic results.
“A lot of times when we think about algorithmic fairness or algorithmic justice or algorithmic discrimination, we focus on one particular use of an algorithm and we try to improve that,” she told Spark’s Nora Young. “And I think we lose sight of the broader issue, that there’s these issues further upstream.”
In 2009, Abebe, then an undergraduate student at Harvard University, worked on a project that investigated the system used to match students in Cambridge, Mass., with public schools. An algorithm assigned students to schools, giving priority based on proximity and on whether a student had siblings already enrolled, though it was also supposed to take into account each student’s stated top-three school choices.
But what Abebe and her team discovered was that the system failed to account for the segregated nature of the city’s neighbourhoods. As a result, students from higher-income neighbourhoods were more likely to be assigned to the city’s top public schools, while students from racialized and lower-income households were often matched with schools they didn’t want.
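To make the priority structure concrete, here is a minimal sketch, in Python, of one round of a choice-based assignment mechanism with sibling and proximity priorities. The data format, names and single-round logic are invented for illustration; this is not the actual Cambridge system, which was more involved.

```python
# A toy, single-round sketch of choice-based school assignment with
# sibling and proximity priorities. Invented data and logic -- not the
# actual Cambridge mechanism.

def first_round(students, capacity):
    """Every student applies to their first-choice school; an oversubscribed
    school keeps siblings first, then the closest applicants, and bumps
    the rest to later rounds."""
    pools = {school: [] for school in capacity}
    for s in students:
        pools[s["first_choice"]].append(s)
    admitted, bumped = {}, []
    for school, pool in pools.items():
        # Priority order: an enrolled sibling beats everything, then distance.
        pool.sort(key=lambda a: (a["sibling_at"] != school, a["dist"][school]))
        admitted[school] = pool[:capacity[school]]
        bumped += pool[capacity[school]:]
    return admitted, bumped

students = [
    {"first_choice": "Top", "sibling_at": None, "dist": {"Top": 0.5}},  # nearby
    {"first_choice": "Top", "sibling_at": None, "dist": {"Top": 4.0}},  # far away
]
admitted, bumped = first_round(students, {"Top": 1})
print(admitted["Top"][0]["dist"]["Top"])  # 0.5 -- the nearby student wins the seat
print(bumped[0]["dist"]["Top"])           # 4.0 -- the far student is bumped
```

Nothing in the sketch mentions income or race, yet in a segregated city the proximity term can reproduce both, which is exactly the kind of upstream issue Abebe points to.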
“In a sense, it’s sort of like an instance of algorithmic discrimination. But … it was also an opportunity for us to think about what was going on in education here in Cambridge,” Abebe explained.
WIRED journalist Sidney Fussell identified two main issues with algorithm use in public domains like education: the process of selecting a dataset to “train” the algorithm, and the transparency of this selection process.
He pointed to this summer’s “A-level fiasco” in the U.K., where an algorithm was used to estimate students’ exam marks after the exams were cancelled, and thousands saw their grades drop below their university admissions requirements, as an example of both issues.

Fussell said that to figure out what went wrong with the grading algorithm, “the whole supply chain” of that algorithm should be examined.
He said the algorithm served as a way to standardize predictions of students’ test results based on past performance data, not just their own but that of other students from their schools. The problem with this approach is that students could end up held back by those data points, unable to access opportunities earned through their own hard work studying for a final exam.
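To see how that can happen, here is a minimal sketch, in Python, of rank-based standardization. The function, the data and the mapping rule are all invented for illustration; this is only the general shape of the approach Fussell describes, not the actual model used in the U.K.

```python
# A minimal sketch of rank-based grade standardization -- NOT the actual
# U.K. model. Each school's historical grade distribution caps what its
# current students can receive.

def standardize_grades(student_ranks, historical_grades):
    """Assign grades by mapping each student's within-school rank onto
    the school's historical grade distribution.

    student_ranks: list of (student_name, rank), where rank 1 = strongest.
    historical_grades: past grades at this school, e.g. ["A", "B", "C"].
    """
    cohort = sorted(student_ranks, key=lambda s: s[1])
    pool = sorted(historical_grades)  # "A" < "B" < "C": best grades first
    results = {}
    for i, (name, _) in enumerate(cohort):
        # Each student inherits a grade slot from past cohorts, scaled to
        # the current cohort's size -- individual effort never enters.
        slot = int(i * len(pool) / len(cohort))
        results[name] = pool[slot]
    return results

# A school that historically produced no A grades cannot award one now,
# no matter how strong this year's students are.
print(standardize_grades(
    [("Asha", 1), ("Ben", 2), ("Chloe", 3)],
    ["B", "B", "C", "C", "D", "D"],
))
# {'Asha': 'B', 'Ben': 'C', 'Chloe': 'D'}
```

However hard the top-ranked student studied, her ceiling is set by other students’ past results, which is the opportunity problem described above.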
Then there’s a wider issue of consent in using algorithms to make decisions in public spaces like education. “With algorithms that are designed to predict things, you end up in situations where the people that are going to be judged have the least amount of say in every other step of the process,” Fussell said.
Inclusive algorithm design
One way to achieve transparency is to involve community members in the process of algorithm design — something Rediet Abebe thinks is vital in her work.
It’s one of the pillars of the Mechanism Design for Social Good (MD4SG), an interdisciplinary initiative co-founded by Abebe, which uses algorithm design techniques to improve access for underserved communities.
“We are really interested in building these trust- and respect-based collaborations with other domains, and also really working towards making sure that our little community is as representative of the people that we’re working for as possible,” Abebe said.
MD4SG’s interdisciplinary team helps avoid treating technological solutions as the only option. “We have to think critically about how we see computer science as being a part of a broader solution,” she said.
Fussell said that algorithm use can often signal a lack of investment in other social resources in an area. “We’re sort of shortcutting [with] these algorithms, which is leading us to realize in all these different spaces, ‘Oh, we don’t actually know how a lot of this stuff works. We don’t actually know what human to talk to, to hold accountable when things go wrong.’”

The alignment problem
Another challenge Fussell identified in using algorithms to solve social problems is the selection of training data.
Part of the issue is something called the alignment problem: the discrepancy between what an algorithm is intended to do and what it actually does.
“Machine learning [models] are systems that, rather than being explicitly programmed, are trained by examples, and [we] hope that with enough repetition they get the pattern,” author Brian Christian explained. “And the question is, is the pattern that they’re getting the pattern that you intend for them to get?”
Christian, who is a visiting scholar at the University of California, Berkeley, discusses this issue in his new book, The Alignment Problem: Machine Learning and Human Values. He said that while aligning the intention of algorithms with their output is a technological problem, ensuring that the design is ethical and fair is “more of a sociological or political problem.”
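As a toy illustration of the mismatch Christian describes, the following Python sketch fits a model on invented data in which a spurious feature (school funding) tracks the outcome more tightly than the intended one (hours studied). The numbers are made up, and ordinary least squares stands in for “machine learning.”

```python
# A toy alignment problem: the model finds *a* pattern in the training
# examples, just not the one we intended. All data is invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200

hours = rng.uniform(0, 10, n)              # intended signal: student effort
funding = hours + rng.normal(0, 0.5, n)    # spurious proxy that co-varies with effort
outcome = funding + rng.normal(0, 0.5, n)  # in this sample, funding drives the outcome

X = np.column_stack([hours, funding])
weights, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"weight on hours studied:  {weights[0]:.2f}")   # close to 0
print(f"weight on school funding: {weights[1]:.2f}")   # close to 1
# We intended "effort predicts outcomes"; the fitted model learned
# "funding predicts outcomes" -- the pattern in the data, not the intent.
```

Correcting the fitted weights is the technological half of the problem; deciding whether funding should ever stand in for merit is the sociological half Christian points to.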

“I think we still have a huge set of questions to deal with that await us on the other side of that, around what is to be aligned with whom.”
Journalist Sidney Fussell believes recent cases of algorithmic discrimination and misuse, like the Cambridge Analytica scandal, have generated enough public pressure to start thinking about an ethical framework.
“I think that we are in real time developing … ethics around algorithms and technology,” he said. “We can develop ethical systems for the things we consume. It just takes a while.”