Harry and I discussed the role AI will play in the future of our workplaces and how our biases, conscious or unconscious, can be transferred directly into its algorithms. We also asked how, to combat this growing threat, we can all improve our knowledge of the issue and reduce the likelihood of these biases taking hold.
“We’ve got to ensure that when we’re training algorithms to make decisions on our behalf that we’re not giving it biased training that we can’t undo.” – Harry Gaskell
Harry Gaskell is the Chief Innovation Officer and a Partner at EY UK&I. He is also a Chair at the Employers Network for Equality & Inclusion (enei). You can follow Harry on LinkedIn.
We cover a wide range of topics, including:
- How AI can be used to empower our workplace
- Could AI be fairer than humans?
- Sources of bias in AI that we need to combat
- What is required to properly train unbiased AI
- Challenges of undoing bias in AI
- Can AI facilitate a fair selection of senior leaders?
- Examples of how AI can fail to tell the difference between things
- Who should be held responsible for AI from a legal perspective
- Potential actions we can implement to make AI systems more accurate
- Is an international standard required to regulate AI?
- Social responsibility and AI
“If we’re not careful we might start building AI which is around for decades that was never built around principles such as ethics, social responsibility, accountability, explainability, reliability, lineage.” – Harry Gaskell
And much more. Please enjoy, and be sure to grab a copy of Racism at Work: The Danger of Indifference and connect with Professor Binna Kandola OBE on LinkedIn to join the conversation or share your thoughts.
You can also listen on Apple Podcasts, Spotify, Stitcher, TuneIn or in any other podcasting app by searching for “Racism at Work Podcast”, or simply by asking Siri and Alexa to “play the Racism at Work podcast”.
Mentioned in the show
- History of AI in healthcare [9:35]
- Amazon scraps secret AI recruiting tool that showed bias against women [11:50]
- Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech [13:32]
- Is this a wolf? Understanding bias in machine learning [25:03]
- AI is the future – but where are the women? [31:45]
- Trusted AI – IBM Research [34:15]
- Putting artificial intelligence (AI) to work by EY [34:20]