3 Steps To Tackle The Problem Of Bias In Artificial Intelligence
© 2018 Bernard Marr, Bernard Marr & Co. All rights reserved

Introduction
One of the problems in society that AI decision-making was meant to solve was bias. After
all, aren't computers less likely than humans to have inherent views on, for example, race,
gender, and sexuality?
Well, that was true back in the days when, as a general rule, computers could only do what
we told them. The rollout of machine learning, thanks to the explosion of Big Data, and the
emergence of affordable computers with enough processing power to handle it have
changed all that.
In the old days, the term “garbage in, garbage out” concisely summed up the importance of
high-quality data. When you give computers the wrong information to work with, the
results they come up with are unlikely to be helpful.
Back then, this was mostly a problem for computer programmers and analysts. Today, when
computers are routinely making decisions about whether we are invited to job interviews,
eligible for a mortgage, or a candidate for surveillance by law enforcement and security
services, it’s a problem for everybody.
ProPublica Study
In possibly the highest profile example of getting this wrong so far, a study found that an
AI algorithm used by parole authorities in the US to predict the likelihood of criminals
reoffending was biased against black people.
Exactly how this came about is unknown, as the workings of the proprietary algorithm have
not been made available for independent auditing. But the ProPublica study found that
the system overestimated the likelihood of black offenders going on to commit further crimes
after completing their sentences, while underestimating the likelihood of white offenders
doing the same.
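To make the kind of disparity ProPublica measured concrete, here is a minimal sketch of one common check: among people who did not reoffend, how often was each group wrongly labelled "high risk"? All the records below are invented for illustration; the real study worked with thousands of actual case records.

```python
# Invented records for illustration: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", False, False), ("B", True,  False), ("B", True, True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in `group` who were wrongly flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

print(false_positive_rate("A"))  # 2 of 3 non-reoffenders flagged
print(false_positive_rate("B"))  # 1 of 3 non-reoffenders flagged
```

A large gap between these two rates is one way a system can be "biased against" a group even if its overall accuracy looks reasonable.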
Biased AI systems are likely to become an increasingly widespread problem as artificial
intelligence moves out of the data science labs and into the real world. The
“democratization of AI” undoubtedly has the potential to do a lot of good, by putting
intelligent, self-learning software in the hands of us all.
But there’s also a very real danger that without proper training on data evaluation and
spotting the potential for bias in data, vulnerable groups in society could be hurt or have
their rights impinged by biased AI.
Research At IBM
It's possible AI may be the solution to, as well as the cause of, this problem. Researchers at
IBM are working on automated bias-detection algorithms, trained to mimic the anti-bias
processes humans use when making decisions, in order to mitigate our own inbuilt biases.
This includes evaluating the consistency with which we (or machines) make decisions. If
there is a difference in the solution chosen to two different problems, despite the
fundamentals of each situation being similar, then there may be bias for or against some
of the non-fundamental variables. In human terms, this could emerge as racism,
xenophobia, sexism or ageism.
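The consistency idea described above can be sketched very simply: feed a decision function two cases that are identical in the fundamentals and differ only in a non-fundamental attribute, and see whether the outcome changes. The `decide` function below is a deliberately biased toy stand-in, not IBM's system; the names and data are all assumptions for the sketch.

```python
def decide(applicant):
    """Toy decision rule with a hidden bias against group "B"."""
    score = applicant["experience_years"] * 10 + applicant["test_score"]
    if applicant["group"] == "B":  # the hidden, non-fundamental penalty
        score -= 20
    return score >= 80

def consistency_check(decide, case, attribute, values):
    """Vary only `attribute`; more than one outcome means the decision depends on it."""
    outcomes = set()
    for v in values:
        variant = dict(case, **{attribute: v})
        outcomes.add(decide(variant))
    return outcomes

case = {"experience_years": 5, "test_score": 40, "group": "A"}
print(consistency_check(decide, case, "group", ["A", "B"]))  # {True, False}
```

Here the fundamentals (experience and test score) are held fixed, so the changed outcome exposes the dependence on group membership, which in human terms could correspond to racism, sexism, or ageism.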
While this is interesting and vital work, the potential for bias to derail drives for equality
and fairness runs deeper, to levels which may not be so easy to fix with algorithms.
Accenture and Responsible AI
I spoke to Dr. Rumman Chowdhury, Accenture's lead for responsible AI, who pointed out that
there may be situations where the data and algorithms are clean, but societal biases still
throw a spanner in the works.
She said, "With societal bias, you can have perfect data and a perfect model, but we have
an imperfect world.”
“Think about the use of AI in hiring … you use all of your historical data to train a model
on who should be hired and why. Then you parse their resume or look at people’s faces
while they’re interviewing.
“But you’re assuming that the only reason people are hired and promoted is pure
meritocracy, and we actually know that not to be true.
“So, in this case, there's nothing wrong with the data, and there's nothing wrong with the
model, what's wrong is that ingrained biases in society have led to unequal outcomes in
the workplace, and that isn't something you can fix with an algorithm.”
In very simplified terms, an algorithm might pick a white, middle-aged man to fill a
vacancy based on the fact that other white, middle-aged men were previously hired to the
same position, and subsequently promoted.
This would be overlooking the fact that the reason he was hired, and promoted, was more
down to the fact he is a white, middle-aged man, rather than that he was good at the job.
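A toy illustration of this feedback loop, using entirely invented data: a naive model that scores candidates by how often people "like them" were hired in the historical record will simply reproduce whatever pattern, fair or not, produced that record.

```python
# Invented history in which equally qualified candidates had unequal outcomes.
history = [
    {"profile": "white middle-aged man", "hired": True},
    {"profile": "white middle-aged man", "hired": True},
    {"profile": "white middle-aged man", "hired": True},
    {"profile": "equally qualified woman", "hired": False},
    {"profile": "equally qualified woman", "hired": False},
]

def hire_rate(profile):
    """Historical hire rate for candidates matching this profile."""
    matches = [h for h in history if h["profile"] == profile]
    return sum(h["hired"] for h in matches) / len(matches)

# A "model" that recommends whoever has the higher historical hire rate
# ranks the candidates unequally, even though qualifications were equal
# by construction in this data.
print(hire_rate("white middle-aged man"))    # 1.0
print(hire_rate("equally qualified woman"))  # 0.0
```

Nothing in the code is "wrong"; the inequality lives entirely in the training data, which is exactly Chowdhury's point.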
Chowdhury lists three specific steps which organizations can take to minimize the risk of
perpetuating societal biases.
• The first is to look at the algorithms themselves and ensure that nothing about the way
they are coded perpetuates bias. This is particularly necessary when AI is constantly
making predictions which are out-of-step with reality (as seems to be the case with the
US probation example mentioned above).
• Second is to consider ways in which AI itself can help to mitigate against the risk of
biased data – IBM’s bias detection algorithms could play a part here.
• Thirdly, we must “make sure our house is in order – we can’t expect an AI algorithm
that has been trained on data that comes from society to be better than society –
unless we’ve explicitly designed it to be.”
Regulation of AI
This leads on to the discussion of the regulation of AI: who will be responsible for setting
the parameters within which AI operates, teaching machines what data is valid to learn from,
and recognising where inbuilt societal biases could limit AI's ability to make decisions
that are both valuable and ethical?
Tech leaders including Google, Facebook and Apple jointly formed the Partnership on AI
in 2016 to encourage research on the ethics of AI, including issues of bias. Part of the
partnership's work involves informing legislators, but this "top-down" approach may not
produce solutions to every problem, and may even stifle innovation.
Chowdhury says, "What we don't want … is every AI project at a company having to be
judged by some governance group, that's not going to make projects go forward. I call
that model 'police patrol', where we have the police going around trying to stop and
arrest criminals. That doesn't create a good culture for ethical behaviour."
Neither should the burden of regulation and enforcement be put solely on the front-line –
the data scientists themselves, argues Chowdhury.
"Yes, the data scientist plays a role, the AI researcher plays a role, but at a corporation,
there are many moving parts.
“We put a lot of responsibility on the data scientist … but they shouldn’t shoulder all of it.”
Basically, if society is at a stage where we are ready to democratize AI, by making it
available to all, then we need to be ready to democratize the oversight and regulation of
AI ethics.
Chowdhury refers to this concept as the Fire Warden model.
"Think about how if there's a fire in your building right now, everyone knows what to do –
you all meet outside at a pre-arranged location, someone will raise the alarm – you won't
put out the fire, but you've been educated on how to respond.
“That’s what I want to see in the governance of AI systems, everybody has a role to play,
everyone’s roles are a bit different, but everyone understands how to raise ethical issues.”
Crucially, this will only work if there is faith that someone will put out the fire: no one
would bother calling the fire brigade if they believed it lacked the ability and
motivation to do its job. Some top-down regulation will undoubtedly be a necessary
part of tackling the issue of AI bias.
But building a culture of reporting and accountability throughout an organization means
there will be a far greater chance to spot and halt bias in data, algorithms, or systems
before it is perpetuated and becomes harmful.
Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a
strategic business & technology advisor to governments and companies. He helps
organisations improve their business performance, use data more intelligently, and
understand the implications of new technologies such as artificial intelligence, big data,
blockchains, and the Internet of Things.
LinkedIn has ranked Bernard as one of the world’s top 5 business influencers. He is a frequent
contributor to the World Economic Forum and writes a regular column for Forbes. Every day
Bernard actively engages his 1.5 million social media followers and shares content that
reaches millions of readers.