
A Snapshot of Algorithmic Activism: Spotlighting Encode Justice
Artificial intelligence (AI) has become ubiquitous in our everyday lives. At this point, it is easier to use AI than to avoid it. It tracks the spread of disease, monitors (spies on) our social media, and powers manufacturing robots and smart assistants like Siri and Alexa.
One of the founding fathers of computer science, mathematician Dr. Alan Turing, began AI’s journey with a simple question: “Can machines think?” He proposed this idea in his 1950 paper “Computing Machinery and Intelligence,” one of his many contributions to computing, computer science, codebreaking, morphogenesis, and general acts of heroism.
Despite our best efforts, AI takes after the billions of people who now create and interact with it. In fact, real-world biases infected AI from its infancy: Turing himself was persecuted by the British government for being gay, despite his service as an instrumental codebreaker in WWII. He was put on trial for “gross indecency” and ultimately chemically castrated. He died at 41 in 1954 under ambiguous circumstances. In 1952, shortly before he pled guilty, Turing wrote in a letter to a friend:
“I’m afraid that the following syllogism may be used by some in the future.
Turing believes machines think
Turing lies with men
Therefore machines do not think…”
Much like the story of one of its creators, AI’s story is riddled with real-world bias. While it has brought many positives to our society, it has also caused innumerable instances of injustice, especially for already marginalized communities.
This month, we’re spotlighting our fiscal project Encode Justice, “a coalition of youth activists and changemakers fighting for human rights, accountability, and justice under AI.”
Encode Justice works with a global community of volunteers to advance algorithmic justice. Their mission is to “champion informed AI policy and encourage youth to confront the challenges of the age of automation through political advocacy, community organizing, education programming, and content creation.”
We asked a few members of the Encode Justice team to explain AI, its persistent biases, and what they plan to do to confront it. If you want to get involved with Encode Justice, be sure to reach out to them here.
---
What is artificial intelligence (AI) and what role does it play in our daily lives?
Pranav Jaganathan, Director of Finance and Technology
Artificial intelligence is, simply put, a simulation of human or natural intelligence: programs trained on data sets produce computer behavior that closely resembles how humans would behave. Researchers work every day to narrow the gap between AI and human intelligence with new data sets and computer programs.
In our daily lives, AI plays a large number of roles. Amazon Alexas and Google Homes are probably the most recognizable forms of artificial intelligence technology, and ones that are incredibly helpful day to day. But there are other uses of AI, including facial recognition technology and algorithmic tools that rely on basic AI programming.
What is data bias?
Pranav Jaganathan, Director of Finance and Technology
Data bias, specifically in AI, is when the data used produces results that are prejudiced in one way or another. AI depends on the quality of the data used to train it, as well as on the programmers themselves. Because of this dependency, biased data leads the technology to act in biased ways, which can harm people.
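The dependency described above can be illustrated with a minimal Python sketch. All numbers and names here are made up for illustration; nothing below comes from a real system. The idea is that when one group barely appears in the training data, whatever the “model” learns mostly reflects the over-represented group.

```python
# Hypothetical face-brightness readings used to "train" a simple model.
# All values are invented for illustration.
group_a = [0.8, 0.82, 0.78, 0.81] * 25   # 100 well-lit samples
group_b = [0.3, 0.35, 0.32, 0.28]        # only 4 samples

training_sample = group_a + group_b

# The "model" is just the mean of what it saw. Because group B is barely
# represented, the learned value reflects group A almost exclusively.
learned = sum(training_sample) / len(training_sample)

print(round(learned, 2))  # 0.78 -- close to group A, far from group B
```

A system calibrated this way would behave reasonably for group A and poorly for group B, even though nobody intended it to discriminate: the bias rode in on the data.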
How does bias in AI impact equity on a national and global scale?
Kristen Crawford, Director of Outreach
Bias in AI shows up in the outcomes of machine learning and prediction technology. It is when these systems are implemented in crucial areas of government, healthcare, and finance that we see their impact on equity.
Many of the social media apps we use are accessible globally, and their algorithms have the same harmful effects no matter where they are used. The startup Clearview AI, founded by an Australian entrepreneur, gathers data from platforms like YouTube and Facebook and has created a facial recognition system that can identify individuals from just a picture. The selling and accessibility of data from companies like Clearview AI can put anyone in the world at risk from biased algorithms.
It’s not that AI is entirely bad or evil; it’s the applications we as youth have noticed in our everyday lives that make us fear for the future. Seeing technology implemented without proper regulation or risk assessment is scary. There is no consideration of who could be harmed, and now that these systems are so widespread, the victims could be anyone around the world.

What types of bias do you find are most common in AI?
Pranav Jaganathan, Director of Finance and Technology
I think one of the most common types of bias in AI has to be historical data bias. Many machine learning algorithms are trained on historical data, since our world relies quite a bit on past mistakes to improve. This becomes a problem when the historical data itself is prejudiced against a certain group of people.
For example, risk assessment tools are used in courts around the country to estimate how likely a convicted person is to return to crime. Historically, when courts made that call themselves, they were often prejudiced against people of color. By training machine learning algorithms on this historical record, the risk assessment tool inherits that prejudice, because it takes race, age, type of crime, and similar factors into account when making its decision.
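The mechanism above can be made concrete with a toy sketch. This is not any real court tool; the records, labels, and numbers are all invented. It shows how a score computed from historically skewed judgments hands back different results for two people convicted of the identical crime, purely because race is in the data.

```python
# Hypothetical historical court records: (race, crime, judged_high_risk).
# The labels reflect past human judgments, which were skewed by race.
# Every value here is made up for illustration.
history = (
    [("white", "theft", False)] * 40 + [("white", "theft", True)] * 10
    + [("black", "theft", False)] * 25 + [("black", "theft", True)] * 25
)

# A toy "risk assessment tool": the fraction of past defendants with the
# same race and crime who were judged high-risk. Because race is a feature,
# identical circumstances produce different scores.
def risk_score(race, crime):
    matched = [judged for r, c, judged in history if r == race and c == crime]
    return sum(matched) / len(matched)

print(risk_score("white", "theft"))  # 0.2
print(risk_score("black", "theft"))  # 0.5
```

Nothing in the code is malicious; the disparity is a faithful replay of the prejudiced decisions baked into the training records, which is exactly the problem with training on biased history.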
Another type of bias would definitely be bias of the actual programmer. If they are prejudiced in some way, it could show up in the programming of an algorithm/AI.
Austen Wyche, Director of Advocacy
Oftentimes, when artificial intelligence is created for new digital products, its creators believe it is nondiscriminatory technology, but it is common for these platforms to discriminate based on race, sex, sexual orientation, ethnicity, skin color, gender identity, and other characteristics protected under state and federal laws.
Due to these biases, we see products such as facial recognition technology and ad-delivery services discriminating against people of color, even though it is not the original intent of the program.
Kristen Crawford, Director of Outreach
The most common bias we find in AI is racial bias. It’s prevalent in all forms of tech, whether used for surveillance or for predictive data models. Not to shamelessly plug, but Encode Justice has wonderful articles addressing the issues that cut across these spheres.
It’s present in the medical field, law enforcement, and even social media. Vox released a video titled “Are We Automating Racism?” that showed how Twitter’s image previews tended to highlight lighter faces over darker complexions in tweeted pictures. Similar technology is seen on TikTok with its beauty prediction systems.
A group of Chinese scientists at the South China University of Technology published a paper on an algorithm’s effectiveness at ranking individual users by attractiveness. The prediction technology was built from a dataset consisting of female Asian faces, with diversity extending only to white males and females. I conducted a research study my senior year of high school to see just how powerful this technology is, and my results were unsettling. It takes the geometric makeup of a user’s face and promotes those matching the desired “shape,” mainly Asian and Caucasian individuals, on its For You Page. After scrolling the For You Page for five minutes, 41.66% of the content shown featured white-passing individuals.
One of our participants (who fit the Eurocentric beauty standard) gained so much popularity over the course of our research that she even began using the account for her own personal use. What struck me was how harmful this could be on apps that the impressionable youth spends the majority of their time on.
It further promotes a specific agenda and, in the process, instills an unconscious bias, distorting the idea of beauty by tying it to skin tones and certain facial features. This unconscious bias is the very reason healthcare is costly for some and not others, why certain neighborhoods have more police presence and abortion clinics, and why several individuals have been wrongfully arrested.
What are some of the historical roots of AI that led us to this point?
Pranav Jaganathan, Director of Finance and Technology
Technology has always pushed to expand because of growing consumer demand for the fastest and best product available. The drive of human innovation has led us to overlook some very real flaws in AI that could be solved with a little more time.
There has always been the notion that our lives need to be made easier and AI has been pushed to be that factor that does make our lives easier. But without proper oversight and development, AI can definitely be a risk to human rights and freedoms.
Do you think that surveillance and the right to privacy will be impacted in the near future due to the overturning of Roe v. Wade?
Austen Wyche, Director of Advocacy
Due to the decision in Dobbs v. Jackson Women’s Health Organization, the right to privacy for pregnant individuals will be significantly decreased across the United States.
It is possible that prosecutors in states that outlaw the right to reproductive healthcare can use data from apps in order to determine the timeline of a pregnancy, which poses a large threat to privacy and the safety of women and pregnant individuals. It is also possible that facial recognition technology may be used against those seeking reproductive care, which would be a serious violation of the right to privacy and accessibility to abortion services.
Pranav Jaganathan, Director of Finance and Technology
Surveillance and the right to privacy will definitely be impacted by the overturning of Roe v. Wade. The overturning cast the first real doubt on whether the right to privacy (in this case, a woman’s reproductive health) is one of the unenumerated rights guaranteed by the Constitution.
As of right now, that right is being infringed upon by many state governments. Data from period-tracking and fertility apps is being used to determine whether a woman has had an abortion since the ruling or is planning to get one. Cameras outside abortion clinics are being used to identify women seeking abortions and the doctors performing them. Both are examples of the government infringing on the right to privacy using tools built on machine learning algorithms.
Kristen Crawford, Director of Outreach
Definitely. The overturning of Roe v. Wade has raised some concerns about the use of facial recognition outside of reproductive health clinics. This era of surveillance and criminalization infringes on citizens’ constitutional rights, shifting the position of cisgender women and individuals with a uterus from victim to criminal.
Period tracking apps have become a weapon of surveillance because of their ability to share personal data with the government if requested. Americans now have to actively take precautions to protect their inalienable rights from those they were taught to trust.
Do you see AI as a net positive or net negative in terms of its societal impact?
Pranav Jaganathan, Director of Finance and Technology
I see it as net neutral. AI definitely has its positives: it helps with our daily lives, it assists many people in their professions, and it makes our daily lives more efficient.
At the same time, as with any new innovation, there has to be a limit to how far we, as a society, let it take over our lives. With AI, surveillance tech (especially after 9/11), and algorithmic technology, that limit has either not been set or not been found. The result is a lack of oversight in the field of AI, which can create a situation where AI poses more dangers to society than benefits.
In a perfect world, what would AI be used for and how?
Pranav Jaganathan, Director of Finance and Technology
In a perfect world, what AI is used for wouldn’t exactly change. It’s how it is used that needs to be altered, to ensure our rights are not infringed upon.
AI is definitely one of the greatest human innovations of our day. But people seem to forget that it is simply a tool, not the final decision-maker, when it comes to risk assessment or identifying suspects through facial recognition technology. In a perfect world, AI would be used as a tool to help people with simple tasks rather than as a replacement for a human being.
Additionally, in a perfect world, oversight in the various fields that employ AI would be a requirement to use it.
What are Encode Justice’s solutions to bias in AI?
Pranav Jaganathan, Director of Finance and Technology
Our main mission is that we want to see accountability for AI and algorithmic tools. We want to see that our leaders understand that AI can be dangerous and that oversight in this field is required for our human rights and freedoms.
We hope to see the government make informed AI policy, or ban AI use in a particular field until regulations are developed. Our grievances are not with the technology but with the lack of oversight of the technology, and we need legislation to create that oversight.
Kristen Crawford, Director of Outreach
Regulation, awareness, and education! We ask members of government to be mindful when legislation regarding technology comes to the floor. Even a small amount of knowledge about the effects of these technologies can save the country (and even the world) from harmful biases.
We raise awareness in every way we can to remind those we trust in office to keep our future, our health, our privacy, and our safety in mind. Through this, we can also remind voters to look further into the names on their ballots to ensure a better, more tech-conscious future. Personally, I think of the government’s quick response to Tony Stark’s suit in Iron Man 2. There was a sense of social responsibility there!
Encode Justice isn’t anti-AI; we are a strong force of like-minded individuals safeguarding human rights in an age of technological growth and advancement.
How can the public help?
Pranav Jaganathan, Director of Finance and Technology
Education is such a powerful tool. The best way for the public to help is to get educated about the harms of AI technology and to spread awareness of those potential harms. Most people don’t encounter the harms of AI on a daily basis, which leads to ignorance of the issue. By learning that the harms do exist, we, as a generation, can be proactive rather than reactive in fighting for the accountability of AI and algorithmic technology.
Austen Wyche, Director of Advocacy
In order to promote algorithmic justice and prevent the harmful impact of discriminatory algorithms, it is imperative that the public learn more about the issue at hand. The number one problem in public outreach on technology issues is a lack of experience and publicity.
Research more into ad-delivery algorithms, facial recognition technology, hiring algorithms, and more services made available by artificial intelligence.
Find organizations (or start them) in your community and reach out to local policymakers and stakeholders to determine the best course of action. Lobbying stakeholders can be very effective, and will allow you to broaden your audience.
Kristen Crawford, Director of Outreach
Exercise your right to vote, protest, sign petitions, lobby, flood your feed with informative posts!
Start off small, with a simple retweet and a bit of self-practice identifying potential biases you might not be aware of. Educate yourself on your new discoveries, familiarize yourself with some common misconceptions regarding these technologies, and learn how to address them.
Of course you can share Encode Justice’s content with friends and family, and gather some important resources from our podcast (EJ: on AIr) as well.

What should folks do if they want to get involved with Encode Justice?
Pranav Jaganathan, Director of Finance and Technology
Encode Justice offers a variety of ways to get involved! Our fellowship and chapter programs are some of our largest and not at all hard to get involved in. You can definitely start a chapter, join a chapter in your state, or join the organization as one of our fellows.
We also offer many workshops through our Education sector, and many phone and email banks through our Advocacy sector, giving young people opportunities to start their journey in advocacy. Additionally, many of our chapters, and sometimes our sectors, host town halls with professors, representatives, and nonprofit leaders who can help educate us on how to further this mission.
Austen Wyche, Director of Advocacy
If folks want to get involved with Encode Justice, reach out! Encode Justice is a worldwide coalition of young people working toward more equitable artificial intelligence and technology in the 21st century, and we want to keep building that coalition.
Whether you are in the United States, Africa, Asia, Europe, or anywhere across the globe, fill out the form on encodejustice.org and we would love to talk to you more about how to improve algorithmic justice efforts.
---
If you want to read more about Encode Justice’s work, please be sure to read A Seat at the Table by Andrew Brennen and Emma Leiken from Omidyar Network! You can learn more about Encode Justice and other youth organizers across the country.
And be sure to visit the Encode Justice website and follow Encode Justice on Instagram!