Ethical AI and Debiasing Our Algorithms ⚖

Founding an AI ethics organization with tens of thousands of article reads, studying CS + English undergrad at Harvard, and more!

I remember the first time I really stopped to consider a problem in AI ethics: MIT's Moral Machine, a test of self-driving car ethics. 🚗

To be honest, I struggled with a lot of the questions, because the problem quickly becomes not just a matter of empirics and technology but a battle with your personal morals and a questioning of your own values. Self-driving is one of the better-known and more frequently discussed areas of AI ethics, but in reality, every AI system and application (and numerous other technologies and fields) warrants a discussion of ethics and fairness. 🤔💭

Today we’re diving into:

  1. ⚖ AI ethics/fairness

  2. 🔍 Debiasing our society

  3. 💻 CS + English


🎙 Δx podcast

I had the pleasure of chatting with the amazing Catherine Yeo, a current Harvard undergrad studying CS + English. 🏫 Besides founding her AI ethics organization Fairbytes, which has reached tens of thousands of blog post and article reads on different AI ethics topics, Catherine has also previously worked at Disney’s StudioLab, Dover, IBM Research, and Apple in AI research and software engineering roles. 🤖

“I want to help others really look at AI from both technical and ethical perspectives… and also consider who they are also designing this algorithm for and how they can maximize the help for them.” 🔍 - Catherine

In this episode, we talk about everything from AI ethics to CS + English to the importance of education. 👇

For anyone interested in developing tech solutions to impactful problems, this podcast episode provides insight into the ethical implications of AI and how we can work together as a society to debias our algorithms.

💎 Δx takeaways

As Catherine mentions in the podcast, AI is everywhere 👀: voice assistants like Siri and Alexa, Facebook ad recommendations, self-driving cars, etc. However, there isn't much discussion of or information about AI ethics out there.

Catherine became aware of the dangers of bias in AI when she noticed that occupation-related sentence completions autogenerated by GPT-2 differed based on gender, perpetuating gender stereotypes. In addition, researchers such as Joy Buolamwini have discovered that facial recognition often performs worse on women and individuals with darker skin tones. ❌
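To make the GPT-2 example concrete, here's a minimal sketch of how one might measure gender skew in model completions. The completion lists below are made up for illustration; in practice you would sample many completions from a language model given occupation prompts like "The doctor said that...":

```python
# Toy sketch: measure gender skew in (hypothetical) model completions.
FEMALE_WORDS = {"she", "her", "hers", "woman", "female"}
MALE_WORDS = {"he", "his", "him", "man", "male"}

def gender_skew(completions):
    """Fraction of completions containing male vs. female words."""
    male = sum(any(w in c.lower().split() for w in MALE_WORDS)
               for c in completions)
    female = sum(any(w in c.lower().split() for w in FEMALE_WORDS)
                 for c in completions)
    n = len(completions)
    return {"male": male / n, "female": female / n}

# Hypothetical completions for the prompt "The doctor said that..."
doctor_completions = [
    "he would see the patient shortly",
    "he was running late for his shift",
    "she had reviewed the chart",
    "he needed more tests",
]
print(gender_skew(doctor_completions))  # {'male': 0.75, 'female': 0.25}
```

A skew far from the real-world occupation distribution (or from 50/50, depending on your fairness criterion) is one signal that the model has absorbed a stereotype.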

Along with other examples Catherine provides in the podcast, here are some common AI ethics problems to ponder + some examples/surprising statistics 😮:

  • Can AI create bias against or discrimination of minority groups? 💭 (ex. Amazon's AI recruiting tool penalized women because it was trained on historical recruiting data. Only 47% of organizations test for bias in their data/algorithms.)

  • Who should be held liable if an autonomous vehicle is involved in an accident? 🚔 (ex. the Arizona Police Department and the US National Transportation Safety Board decided that Uber's self-driving car was not responsible for a pedestrian death because the safety driver was distracted)

  • How can we protect privacy given the rise of facial recognition and AI surveillance systems? 📹 (of the 176 countries surveyed in the AI Global Surveillance Index, at least 75, including 51% of advanced democracies, are using AI surveillance systems)
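Given that only a minority of organizations test for bias, it's worth noting that a basic check is not hard to run. Here's a minimal sketch (with made-up toy data) of demographic parity, one common fairness metric that asks whether a model's positive-outcome rate differs across groups:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate (e.g. 'hired') for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy hiring example: 1 = hired, 0 = rejected (hypothetical data)
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(preds, groups))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds, groups))  # 0.5
```

Demographic parity is only one of several competing fairness definitions (others include equalized odds and calibration), and they can't all be satisfied at once, which is part of why these questions stay hard.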

There are a few positives to note, though. In OpenAI’s GPT-3 paper, the authors included an AI ethics section (6. Broader Impacts) that discusses the potential harms of using GPT-3, as well as ways to mitigate those harms. 📜

As more researchers take fairness, bias, and ethics into consideration, there’s also a chance for us as a society to examine some of our own biases and work together to eliminate them. 🥼 The related field, the philosophy of AI, is “a branch of the philosophy of technology that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will” (Wikipedia). 🧠💭

“If we’re able to debias these models/algorithms, we can also push forward an effect on society to debias society and to reverse this direction.” 🔀 - Catherine

I like to think of AI as a mirror held up to society; through AI and technology, we can shine a light on our own flaws and create a more ethical and fair community.

📰 Δx change

  1. 💉 mRNA flu shots move into trials: mRNA vaccines emerged as a breakthrough during COVID-19. Now, scientists are working on applying the technology to influenza, AKA the flu. Existing flu shots typically offer only 40–60% protection from infection. mRNA-based methods could translate into greater immune protection.

  2. 🧠 DeepMind neural algorithmic reasoning: DeepMind is working on enabling neural networks to emulate classical algorithms, or Neural Algorithmic Reasoning (NAR). The key thesis is that algorithms possess fundamentally different qualities when compared to deep learning methods, and if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning.

  3. 💨 Wind power wall: This wind turbine wall consisting of grid square panes spinning simultaneously on 25 axes can be used on the side of a highway or around buildings to generate renewable energy.

Let me know what your answers are on the AI ethics questions — I’d be curious to hear some additional opinions and discussion on this matter!

Hope you enjoyed this week’s Delta X newsletter, and have a wonderful spooky season! 👻

<3,

Ellen X


Thank you for reading and tuning in to this week’s podcast and newsletter :)