Catherine Yeo

Fair Bytes: A Deeper Lens into Fairness in AI

Understanding algorithmic fairness and ethics is more important than ever

Photo by Franck V. on Unsplash


In 2019, OpenAI released a language model called GPT-2. Given an initial text prompt, the model could generate extremely realistic continuations, not just news articles but even imaginative fiction stories.


Input prompt:
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
Model completion:
The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved. …

Given how realistic algorithmically generated text now is, we could potentially use language-generation AI to assist us in a variety of ways: generating breaking news articles from basic data, replying to emails, summarizing data and text… or even writing jokes to cheer us up!


As a writer myself, I thought I could use such a tool to help me brainstorm fiction ideas. So I decided to play around with GPT-2 — first with an online demo (Talk To Transformer), then with the source code. After experimenting with a few different prompts, my friend and I saw a disturbing pattern.


“The man works as a salesman” vs “The woman works as a stripper”


We noticed that the generated sentences differed depending on the gender of the subject. For example, "the man works as" was completed with "a salesman", "a doctor", "a journalist", "a scientist", "a lawyer", and so on, most of which are reasonable occupations for any individual. "The woman works as", on the other hand, was completed with "a stripper", "a prostitute", "a nanny", "a teacher", "a secretary", and so on.
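
If you want to see this pattern for yourself, a minimal sketch of the probe is below. It uses the Hugging Face transformers library and its hosted "gpt2" checkpoint, which is my assumption about tooling for readers today; our own experiment used the Talk To Transformer demo and OpenAI's released code, and the sampled completions will vary from run to run.

# A rough reproduction of the probe, assuming the Hugging Face `transformers`
# library and its hosted "gpt2" checkpoint (our experiment used OpenAI's
# released code and the Talk To Transformer demo instead).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the sampling seed so runs are repeatable

prompts = ["The man works as a", "The woman works as a"]

for prompt in prompts:
    # Sample a handful of short completions and inspect the occupations.
    completions = generator(
        prompt,
        max_length=15,
        num_return_sequences=5,
        do_sample=True,
        top_k=50,
    )
    print(f"\nPrompt: {prompt!r}")
    for c in completions:
        print("  ", c["generated_text"])

Running the two prompts side by side, rather than in isolation, is what makes the contrast in suggested occupations easy to spot.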


This is not okay.


Issues of ethics and bias in AI have only grown more apparent as research in machine learning continues to expand. Joy Buolamwini and Timnit Gebru (2018) found that facial recognition systems work much better for lighter-skinned males than for any other population subgroup. Racial discrimination and gender bias have also been documented in the ads Google presents to users.


Today, I am launching Fair Bytes as a medium to dive deeper into the fairness and ethics of AI and algorithms, from both technical and societal perspectives. Fair Bytes will illuminate research on quantitative frameworks of algorithmic fairness, discuss critical issues of AI in our world today, and share insights, projects, and resources in and related to this field.


As Kleinberg et al. (2018) wrote in “Discrimination in the Age of Algorithms”,


The Achilles’ heel of all algorithms is the humans who build them and the choices they make about outcomes, candidate predictors for the algorithm to consider, and the training sample. A critical element of regulating algorithms is regulating humans. Algorithms change the landscape — they do not eliminate the problem.

As we watch AI continue to evolve and change the landscape, we must ask ourselves:

Who is affected by these algorithms?

Who designed and created these algorithms?

How do these algorithms impact all populations and subgroups?

How do we teach future generations, who will use these algorithms, to think about these ethical considerations?

How can we work together to make AI more transparent, accountable, and fair?

Together, we can deepen the dialogue on these issues, one fair byte at a time.

. . .

