

Unjust Algorithms

by Zoë Kooyman, published on Jul 06, 2022

Injustices driven by artificial intelligence (AI) have worsened rapidly in recent years. Algorithmic decision-making systems are used more than ever by organizations, educational institutions, and governments seeking to analyze data and make predictions. The Free Software Foundation (FSF) is working through this issue, and its many scenarios, to be able to say useful things about how it relates to software freedom. Our call for papers on Copilot was a first step in this direction.

Complex as they are, these are still proprietary software systems that integrate AI. Often they are algorithmic systems in which only the inputs and outputs can be viewed. The system is trained on a selection of base categories of information, after which information goes in and a verdict comes out, but what led to that conclusion is unknown. This makes AI systems hard to understand, even for the people who wrote the code.
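The opacity can be sketched in a few lines of Python. The class, field names, and weights below are all invented for illustration; the point is only that the caller sees a `predict()` method and nothing else, while the learned internals that actually decide the verdict stay hidden.

```python
# Hypothetical sketch of a "black box" decision system: callers see only
# inputs and outputs; the learned internals that produce the verdict are
# hidden. All names and numbers here are invented for illustration.

class BlackBoxModel:
    def __init__(self):
        # Weights "learned" from training data. With a proprietary system,
        # neither these weights nor the training data can be inspected.
        self._weights = {"income": -0.4, "age": 0.1, "postcode_risk": 0.9}
        self._threshold = 0.5

    def predict(self, applicant: dict) -> str:
        # The only visible behavior: a record goes in, a verdict comes out.
        score = sum(self._weights.get(k, 0.0) * v for k, v in applicant.items())
        return "flagged" if score > self._threshold else "cleared"

model = BlackBoxModel()
# Feature values are assumed to be normalized to the 0-1 range.
verdict = model.predict({"income": 0.2, "age": 0.3, "postcode_risk": 0.8})
print(verdict)  # the applicant sees only the verdict, never the reasoning
```

Even in this toy version, the person judged by the system has no way to learn that, say, their postcode dominated the score; with real proprietary systems, neither do auditors.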

These systems (referred to as black box systems) may have the potential or the intent to do good, but technology is not objective, and at the FSF, we believe that all software should be free. Governments, in particular, have the responsibility to demand that the software they use be free, and the public has a right to that software. The scale at which the increased use of artificial intelligence affects people's lives is immense, making this matter of computational sovereignty all the more urgent.

Regulators around the globe have not failed to notice the dangers involved in using AI, either. In late 2018, both the United States and the European Union (EU) started working on obtaining guidance and forming regulations to deal with the proliferation and pervasive use of AI. In September 2021, Brazil's Congress passed a bill that creates a legal framework for AI.

But acknowledging the fast-paced integration of AI into our society without proper oversight didn't stop the US tax agency from recently trying to implement facial recognition for its systems. Worth exploring is the Dutch toeslagenaffaire (Dutch childcare benefits scandal), in which 26,000 people were affected and 1,675 children were removed from their parents' custody.

The Dutch tax office that administers social benefits used an AI-based software system to automate the identification of errors and fraud. The system's training data blatantly violated privacy laws and led to biased enforcement: it flagged data points such as dual nationality, low income, and "non-Western appearance" as major risk indicators for fraud. Because of people's blind trust in technology, this first system was then also used to teach another algorithm, this time affecting the childcare allowance unit.
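To see why access to the code and its identifiers matters, consider a hypothetical risk scorer, not the actual Dutch system, whose code and training data were never made public. The attribute names below echo those reported in the scandal, but the weights, threshold, and logic are invented for illustration:

```python
# Hypothetical risk-scoring sketch. Attribute names echo reported risk
# indicators from the scandal; the weights and threshold are invented.
# The real system's code was never published, which is exactly the problem.

RISK_WEIGHTS = {
    "dual_nationality": 0.6,   # a protected attribute used as a proxy
    "low_income": 0.3,
    "prior_correction": 0.1,   # e.g. a routine paperwork correction
}
THRESHOLD = 0.5

def fraud_risk(record: dict) -> float:
    """Sum the weights of every risk indicator present in the record."""
    return sum(w for key, w in RISK_WEIGHTS.items() if record.get(key))

def flag_for_enforcement(record: dict) -> bool:
    return fraud_risk(record) >= THRESHOLD

# Two families with identical paperwork, differing only in nationality:
family_a = {"low_income": True, "prior_correction": True}
family_b = {"low_income": True, "prior_correction": True,
            "dual_nationality": True}
print(flag_for_enforcement(family_a))  # False
print(flag_for_enforcement(family_b))  # True: flagged purely by nationality
```

Had the weights table and the list of indicators been open to inspection, the discriminatory weighting would have been visible to auditors and to the public long before it ruined anyone's life.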

The agencies, emboldened by the data provided by the system, ruthlessly penalized the families by withdrawing their aid and fining them tens of thousands, sometimes hundreds of thousands, of euros. The debts ruined lives and led to people losing their homes and their relationships. Some even lost custody of their children. The whole affair also led to the resignation of the entire Dutch cabinet. Citizens were discriminated against without their knowledge, were unjustly denied any explanation of why they were treated this way, and were never given a chance to investigate, to question the results of the system, or to defend themselves at any point in the process.

Of the scandal, Amnesty International noted, "the fact the tax authorities used a black box system and a self-learning algorithm obstructed accountability and transparency and were incompatible with the principles of good governance, legality, and the rule of law. The use, workings, and effect of the risk classification model were hidden from the public for a long time."

This is just one story of many showing what is at stake: it reveals the snowballing, disastrous effects of the lack of free software in government, as well as our limited understanding of the consequences of using machine learning. In the EU, regulators have taken note of this scandal as a warning. European Commission executive vice-president Margrethe Vestager said the toeslagenaffaire is "exactly what every government should be scared of." Proposed legislation speaks of adding checks and balances conducted by humans, and the European AI Act thus far proposes a "pyramid of risks" that restricts the use of so-called "high-risk" AI systems and bans certain "unacceptable" uses, including, in light of the events in the Netherlands, social scoring by public authorities.

There is good intent to avoid a repetition of scandals of the toeslagenaffaire kind, but this is a fast-moving field, and it affects people's lives daily. Yet the definitions of transparency and of the need to study the source code remain vague. Without the freedom to inspect the source code, which has to include the software's algorithms, relying on self-assessment creates a loophole in which the organization ends up policing its own actions.

When a system is nonfree, the argument for "enforcement" of regulations can never truly be made. What the draft legislation typically lacks is the central argument that software should be free (as in freedom). We do not yet have an elegant definition of what elements must be shared along with the program when we are talking about machine learning, but we know we need to hold up the GNU General Public License's (GPL) definition of source code as the preferred form for modification, and the importance of installation information, as guiding lights for charting the right path. In this case, had it been made public what software and what identifiers were used to teach the system to identify its victims, the system might have had less room to cause this much harm. Systems that play a major role in how our lives unfold should offer the possibility to be checked, and checked again.

You can make a difference as these laws and regulations take shape. Many governments now have initiatives that are open to public feedback. Get informed about where proprietary AI software systems are used and about their dangers, and demand that your government deploy free software and protect its computational sovereignty: government software must be free software and must be readily available for inspection by its citizens.

Image by mystic_mabel, Copyright ©2009 mystic_mabel. This image is licensed under a Creative Commons Attribution ShareAlike 2.0 Unported license.


