I presented this keynote at the German Green Party's Netzpolitischer Kongress #NK16 on October 28, 2016. In this talk, I discussed a framework for developing a more ethical future society.
Toward an ethical framework for the digital society
1. Für eine Ethik der digitalen Gesellschaft
Toward an Ethic for the Digital Society
Jillian C. York
Netzpolitischer Kongress #NK16
2. What is a just social policy for the digital age?
What kind of future society do we want?
What role does technology play in that society?
How do we get there?
6. In the digital future, what does it mean to be human?
Credit: Fritz Lang
14. Where are we now?
We’re entering a new era of technological development
Techno-solutionism is the leading ideology of Silicon Valley
The speed of development has outpaced the speed of governance and regulation
It’s time to take a step back
15. It’s time to ask:
Who is developing these new technologies?
What motivates them?
What biases do they have?
How do these biases play into the development of new tech?
16. So...what are our principles, then?
How do we work toward a new ethical framework?
17. Governance must be human-centered.
“We cannot outsource our moral responsibilities to machines” - Zeynep Tufekci
What is a just social policy for the digital age? How do we formulate an ethical approach to regulation moving forward?
In order to answer these questions, I think we need to focus in on what the future of technology actually looks like in practice.
What are we currently afraid of? As our cars become driverless, as AI takes over our jobs, the fear of the future is a fear of losing our bodily autonomy, and of losing our worth as humans.
These are big questions, and I don’t have all the answers. Today, I’m going to focus on the most sensitive aspect of digitalization, and perhaps one of the most vital elements of being human: Our agency as individual human beings.
What does it mean to be human in the digital future?
Censorship of the body
Rupi Kaur example - Instagram
Existing platform policies (e.g., Facebook’s)
Algorithmic determination of bodies (so-called “skin filters”) demonstrates machine bias - recognition of white bodies vs. brown ones.
Discriminatory algorithms
Google’s photo app was classifying Black people as gorillas
“This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.” - Kate Crawford
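Crawford's point can be made concrete with a toy sketch. The numbers and the "filter" below are entirely synthetic and do not describe any real product; the sketch only shows the mechanism: a filter that fits an acceptable tone range to an imbalanced training pool ends up with a range that misses the underrepresented group almost entirely.

```python
# Toy illustration of the "data problem" (all values synthetic):
# a naive "skin filter" learns an acceptance interval from its training data.
import random
import statistics

random.seed(42)

# Synthetic single-channel "tone" values: 190 lighter-toned training samples
# and only 10 darker-toned ones -- a 95/5 imbalance in the training pool.
light_train = [random.gauss(200, 15) for _ in range(190)]
dark_train = [random.gauss(80, 15) for _ in range(10)]
train = light_train + dark_train

# "Training" = fit a mean +/- 2*stdev acceptance interval to the whole pool.
mu = statistics.mean(train)
sigma = statistics.stdev(train)
lo, hi = mu - 2 * sigma, mu + 2 * sigma

def detected(tone):
    """The filter flags a value as 'skin' only if it falls in the learned range."""
    return lo <= tone <= hi

def detection_rate(center, n=1000):
    """Fraction of fresh samples from a group that the filter detects."""
    return sum(detected(random.gauss(center, 15)) for _ in range(n)) / n

rate_light = detection_rate(200)  # group the training data over-represents
rate_dark = detection_rate(80)    # group the training data barely contains
```

Because the learned interval is dominated by the over-represented group, `rate_light` comes out near 1.0 while `rate_dark` collapses toward zero, even though nothing in the code mentions either group explicitly: the bias lives entirely in the data.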
Surveillance
Facial recognition
“One in two American adults is in a law enforcement face recognition network.” - Georgetown Law report
“Face recognition may be least accurate for those it is most likely to affect: African Americans.” - same
Regulators need to pass laws to limit the use of facial recognition technology to cases where there’s a reasonable suspicion of criminal activity
EFF’s recommendations:
Expand ongoing investigations of police practices and include in future investigations an examination of whether the use of surveillance technologies, including face recognition technology, has had a disparate impact on communities of color; and
Consult with and advise the FBI to examine whether the use of face recognition has had a disparate impact on communities of color.
Labor issues
Self-driving cars, Uber, Airbnb
Low-wage jobs replaced first. We can’t think about this without thinking about how we keep people’s lives livable.
Borders
Biometric identification
And refugees: the most invasive technologies are often piloted on the most vulnerable.
This image shows Syrian refugees using biometric identification to access cash.
Social media analysis at the US border - unlikely to be human-parsed. What biases will come into play?
Societal Capture
Walled gardens
When Free Basics is the only internet you know, and all your data is stored within the FB ecosystem, you are a different kind of digital citizen
Daily life: processed computation
Siri, Allo, and Alexa could all have the same input (i.e., always-on voice capture) but paint very different pictures of who they think you are. The rules governing closed, proprietary ecosystems can drive decisions (i.e., consumption) predicated upon the compelling interests of their manufacturers.
Where are we now?
The end of Web 2.0, premo ergo sum (“I click, therefore I am”)
We’re rapidly entering a new era; libertarian Silicon Valley is developing new ideas and technologies faster than we can regulate them.
In his 2013 book To Save Everything: Click Here, Evgeny Morozov described the concept of techno-solutionism as the approach commonly used by Silicon Valley: that all social and political issues are problems to be solved, and to be solved by technology.
One stark example: The “homeless hotspots” at SXSW a few years ago - wherein the bodies of homeless individuals were utilized as wifi hotspots for the benefit of the elite attendees.
We must also recognize who is building the technologies, and how their biases play into the development of technological solutions.
The assumption of whiteness: “When the white male is imagined as the end user of virtual technologies, this assumption shapes the types of experiences and interfaces that are designed, which in turn allows whiteness, and white maleness in particular, to continue to circulate as a type of neutral user experience.” - Kara Melton.
No internet for young girls: because of the lack of diversity on engineering teams, safeguards may not be built into system design to prevent downstream issues like the sexual harassment seen on Twitter.
As a society, it’s imperative that we move beyond thinking just about what technology can do for us, and think about what technological solutions mean for humanity.
What are our principles? How do we work toward a new ethical framework?
These are complex problems that I don’t believe any one person can solve. In order to devise a new ethical framework for the future of our digital society, I think we have to go back to basics and ask some fundamental questions.
First, we have to ask ourselves what we want from both our society and an increasingly technological civilization.
What do we really need? Hint: It’s probably not an IoT toaster.
Second, we need to think outside of the box. Too often, we get stuck in old frameworks that may not fit new ideas.
For example, I work primarily on the issue of censorship by social media companies. The law says companies can regulate however they like, so when I give talks about this, corporate types often approach me and say “but corporations have the right to do what they want when it comes to speech.” Sure, they do, legally. But why limit ourselves to thinking inside of that framework if it doesn’t serve the aims of building a better society?
We’re often working with the assumption that technology will improve our lives and our societies, but whose assumption is that and who does it serve?
Governance must be human-centered. This doesn’t mean that there’s no space in our future society for AI or algorithms, but that, as Zeynep Tufekci has said, “we cannot outsource our moral responsibilities to machines.”
Joi Ito’s “society-in-the-loop.”
Human-in-the-loop machine learning tries to create systems that either allow domain experts to do the training, or at least involve them in it, by building machines that learn through interactions with experts. At the heart of human-in-the-loop computation is the idea of building models not just from data, but also from the human perspective on the data.
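A minimal sketch of the human-in-the-loop idea, using an invented 1-D example (the "expert" here is just an oracle function standing in for a human): the model asks the expert to label only the points it is least certain about, and refits after each answer.

```python
# Human-in-the-loop sketch (purely illustrative): a 1-D threshold classifier
# queries a "human expert" about its most uncertain points, then retrains.

def expert_label(x):
    """Stand-in for the human domain expert; the true boundary is x > 5.0."""
    return x > 5.0

pool = [i * 0.5 for i in range(21)]   # unlabeled data: 0.0, 0.5, ..., 10.0
labeled = {0.0: False, 10.0: True}    # two seed labels to start from

def fit_threshold(labels):
    """'Training': place the boundary midway between the two classes."""
    pos = [x for x, y in labels.items() if y]
    neg = [x for x, y in labels.items() if not y]
    return (min(pos) + max(neg)) / 2

for _ in range(5):  # five rounds of asking the expert
    t = fit_threshold(labeled)
    unlabeled = [x for x in pool if x not in labeled]
    # the point closest to the current boundary is the most uncertain one
    query = min(unlabeled, key=lambda x: abs(x - t))
    labeled[query] = expert_label(query)  # the human answers; the model keeps it

final = fit_threshold(labeled)  # boundary after only 5 expert interactions
```

With just five targeted questions, the learned boundary closes in on the true one (5.0), because every query was spent exactly where the model was unsure; the expert's judgment, not bulk data, steers the training.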
“Society-in-the-loop” means figuring out how to take input from, and then gain the buy-in of, the public as the ultimate creator.
In developing an ethical framework for regulation, we must remember that we have existing human rights frameworks to fall back on.
Over the past few years, I’ve counted numerous attempts to create some sort of declaration for the Internet. This, I believe, is exceptionalism. Instead, we need to look at our existing rights - from the right to privacy to freedom of expression and assembly, and beyond - and think of how we can apply them to a more digitalized society.
Informed constituencies
Ensuring inclusion of most vulnerable populations both in decision-making and development of technologies
A big step towards countering discriminatory algorithms is being able to understand them. This is easier said than done, since the way proprietary algorithms work is usually a closely guarded secret.
Two American researchers believe that the General Data Protection Regulation (GDPR) may provide citizens of EU member states with a way to demand explanations of the decisions algorithms make about them.
Whether it does or not remains to be seen, but this raises an important point: interpretability of algorithms is in the public’s interest, and a robust right to explanation and understanding of algorithms is something we need to fight for.
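What might an "explanation" of an algorithmic decision actually look like? Here is a hypothetical sketch, with invented feature names and weights, of the simplest interpretable case: a linear scoring model that can report exactly how much each input contributed to its decision. Proprietary systems are rarely this transparent, which is precisely the problem.

```python
# Hypothetical interpretable decision model (features and weights invented):
# every decision comes with a per-feature breakdown of why it was made.
WEIGHTS = {"income": 0.5, "years_at_address": 0.3, "missed_payments": -0.8}
THRESHOLD = 1.0

def decide(applicant):
    """Return (decision, explanation): each feature's signed contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, why = decide(
    {"income": 3.0, "years_at_address": 2.0, "missed_payments": 1.0}
)
# `why` tells the applicant which factors helped and which hurt,
# e.g. that missed_payments pulled the score down.
```

A person refused by such a model could see exactly which factor tipped the decision and contest it; the "right to explanation" debate is about whether people can demand even this much from opaque, proprietary systems.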