SCL Annual Conference 2019: Regulating social media platforms for interoperability
1. Regulating social media
platforms for interoperability
SCL Annual Conference
2 October 2019
Professor Chris Marsden
University of Sussex School of Law
5. Largely governed through self-regulation
Technology giants appear set to persuade us that
self-regulation remains the only effective route to
legal accountability for machine learning systems,
jeopardising the sustainable introduction of smart
contracts,
permitting algorithmic discrimination and
compromising the implementation of privacy law.
7. Discriminatory data is likely to lead
to discriminatory results
Discriminatory algorithms,
as well as those not designed to filter out discrimination,
can make those results more discriminatory still.
Justice requires that lawyers study algorithmic outcomes
in order to ascertain such discrimination,
a process that may be highly inefficient, while the discrimination itself
offends natural justice and fundamental rights.
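The kind of outcome audit this calls for can be sketched in a few lines. The decision data, group labels and the 80% ("four-fifths rule") threshold below are illustrative assumptions, not taken from the talk:

```python
# Hypothetical sketch: auditing algorithmic outcomes for discrimination.
# Data and the four-fifths threshold are illustrative assumptions only.

def selection_rates(outcomes):
    """Compute the favourable-outcome rate per group.

    outcomes: list of (group, approved) pairs, approved being True/False.
    """
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, protected, reference):
    """Ratio of the protected group's rate to the reference group's rate.

    A ratio below 0.8 is a common ('four-fifths rule') red flag.
    """
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

# Illustrative credit decisions: (group, approved)
decisions = (
    [("A", True)] * 80 + [("A", False)] * 20 +   # group A: 80% approved
    [("B", True)] * 50 + [("B", False)] * 50     # group B: 50% approved
)
ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
print(round(ratio, 3))  # 0.625, below the 0.8 warning line
```

The point is that the audit needs the outcomes, not the source code: the computation above never inspects the algorithm itself.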
8. Public administration has generic
solutions
Administrative law
Natural justice – at least ‘reasonableness’
Right to explanation/remedy?
Discrimination law –
applies to corporate decisions
Specialist technology law
Biomedical/nanotech
Railways, roads, telecoms
Data Protection
11. Council of Europe: to err is human;
invoking AI complexity does not absolve responsibility
12. Caveat: regulation may not be suitable,
appropriate or feasible for many algorithms
But for those algorithms that regulators are
most concerned about, in
sectors that make the most sensitive
socioeconomic decisions,
it is a remedy that can be explored.
13. Sensitive public facing sectors?
Banking/Credit, Insurance,
Medical Care & Research,
Social Care,
Policing and Security,
Education,
Transport
AI-piloted Airliners &
Autonomous Vehicles,
Social media
Telecommunications.
14. Transparency and replicability are
not the solutions to AI/ML problems
Transparency is the first requirement of legal recourse
(though some algorithms can be reverse engineered without
transparency “under the hood” of the machine).
It is not sufficient, however, for several reasons,
despite claims that the ability to study an algorithm and its operation
provides a remedy for users who suffer as a result of its decisions.
15. Things change!
Both the training data and the algorithm itself change constantly
e.g. it is impossible to forecast real-time outcomes of Google searches:
a vast SEO business attempts approximations without complete accuracy.
The remedy that can be achieved is only replicability –
taking an ‘old’ algorithm and its data at a previous point in time
to demonstrate whether the algorithm and its data became discriminatory.
Note just how incomplete a remedy this is:
it allows, in effect, ‘slow motion replays’
while the game rushes onwards.
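A minimal sketch of such a replicability audit, assuming hypothetical archived model versions and frozen data (all names, rules and figures are invented for illustration):

```python
# Hypothetical sketch of 'replicability': archive an algorithm version together
# with its data at fixed points in time, then replay each snapshot to see when
# outcomes became skewed. Everything below is an illustrative assumption.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Snapshot:
    taken_at: str                   # when the snapshot was archived
    model: Callable[[dict], bool]   # frozen decision function
    applicants: List[dict]          # frozen input data

def replay(snapshot: Snapshot) -> dict:
    """Re-run the archived model on the archived data; rate per group."""
    counts = {}
    for person in snapshot.applicants:
        g = person["group"]
        n, ok = counts.get(g, (0, 0))
        counts[g] = (n + 1, ok + (1 if snapshot.model(person) else 0))
    return {g: ok / n for g, (n, ok) in counts.items()}

# Two frozen model versions: v1 scores on income only; v2 (inadvertently)
# penalises group B through a postcode proxy feature.
v1 = lambda p: p["income"] >= 30_000
v2 = lambda p: p["income"] >= 30_000 and p["postcode"] != "B-area"

people = [
    {"group": "A", "income": 40_000, "postcode": "A-area"},
    {"group": "A", "income": 25_000, "postcode": "A-area"},
    {"group": "B", "income": 40_000, "postcode": "B-area"},
    {"group": "B", "income": 35_000, "postcode": "B-area"},
]

history = [Snapshot("2018-01", v1, people), Snapshot("2019-01", v2, people)]
for snap in history:
    # replay shows group B's approval rate collapsing between snapshots
    print(snap.taken_at, replay(snap))
```

This is exactly the ‘slow motion replay’: it can show when the archived system became discriminatory, but says nothing about the live system running today.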
17. AI regulation and 'ethics washing'
Undertaken by technology companies and their
professional advisors
to persuade policy makers that
self-regulation is the only effective route to legal
accountability for Machine Learning systems,
1. jeopardising the sustainable introduction of
smart contracts,
2. permitting algorithmic discrimination and
3. compromising implementation of data
protection law.
19. Ethics washing will fail
Cursory research into the
history of communications regulation and
Internet law
demonstrates the falsity of this self-regulation proposition.
See:
Marsden, C. (2018) “Prosumer Law and Network Platform Regulation: The Long
View Towards Creating Offdata”, 2 Georgetown Tech. L.R. 2, pp.376-398;
Marsden, C. and T. Meyer (2019) Report for European Parliament: “The effects of
automated content recognition (ACR) technology-based disinformation initiatives on
freedom of expression and media pluralism”
20. Need for systematic redress
by external agency
Ben Wagner (2019) Liable, but Not in Control?
Ensuring Meaningful Human Agency in Automated Decision-Making
Systems, Policy & Internet, Vol. 11, No. 1, 2019, 104-122 at
https://onlinelibrary.wiley.com/doi/pdf/10.1002/poi3.198
Self-driving cars,
police searches using social media/PNR,
Facebook content moderation
22. What can and should be done?
1. Ethical standards for all AI deployed in ‘wild’ – to public
1. ISO standards being formed, basic privacy/human rights impact
assessment
2. Non-mandated interoperability option for public communications providers
– Instant Messaging/Search/Social Media companies
3. APIs of dominant (SMP) operators opened
Based on the Microsoft remedies in the longest, most expensive antitrust case in
EC history: the case started in 1993 in the US, in the EU 1998-2010
Google case started 2009 – ongoing a decade later
Commission decision of 27 June 2017 Case AT.39740 - Google Search (shopping)
23. 1. Ethical standards for all AI
deployed in ‘wild’ – to public
ISO standards being formed
1. Can be quite powerful influencers, cf. ISO 27001 on cybersecurity
2. Typically the realm of technical engineering, not normative, standards
3. Embedded in national laws they can become a weak co-regulatory signal
Basic privacy/human rights impact assessment
1. Proposed by UN Special Rapporteur Prof. David Kaye
2. See also ‘Regulating Code’ (Brown/Marsden)
3. AI impact assessment suggested by European Data Protection
Supervisor
24. Standards still important!
Standards Australia chairing ISO Working Party:
ISO/IEC JTC 1/SC 42 Artificial intelligence
https://www.iso.org/committee/6794475.html
Australian Computer Society AI Ethics Committee:
https://www.acs.org.au/governance/ai-ethics-committee.html
Data61 (Australian Commonwealth Scientific and Industrial Research
Organisation (CSIRO):
Dawson D, Schleiger E, Horton J, McLaughlin J, Robinson C, Quezada
G, Scowcroft J and Hajkowicz S (2019) Artificial Intelligence: Australia’s Ethics
Framework. Data61 CSIRO, https://data61.csiro.au/en/Our-Work/AI-Framework
Greenleaf, Graham and Clarke, Roger and Lindsay, David F., (2019)
Does AI Need Governance? The Potential Roles of a ‘Responsible Innovation
Organisation’ in Australia; Submission to the Human Rights Commissioner on
the White Paper Artificial Intelligence: Governance and Leadership
http://dx.doi.org/10.2139/ssrn.3346149
UK Information Commissioner’s Office, Feedback request — profiling and
automated decision-making, 6 April 2017,
https://ico.org.uk/media/about-the-ico/consultations/2013894/ico-feedback-request-profiling-and-automated-decisionmaking.pdf
25. Interoperability as an algorithmic
regulatory remedy
An attempt to move beyond glances in the rear-view mirror
(the Silicon Valley mantra is “move fast and break things”):
enforce access to the dominant regulated company’s API
(Application Programming Interface).
This enables brokers, comparator programmes and regulators
to access algorithms in real time under controlled conditions
to observe the algorithm’s behaviour.
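How a regulator-facing probe under such controlled conditions might look: a matched-pair test submitted through an API, with a mocked scoring endpoint standing in for a real, access-mandated platform API (the function names and scoring logic are assumptions for illustration):

```python
# Hypothetical sketch of regulator access to a platform API under controlled
# conditions: submit matched pairs of test profiles that differ only in one
# attribute and observe whether the (mocked) ranking algorithm treats them
# differently. 'platform_rank' stands in for a real, access-mandated endpoint.

import copy

def platform_rank(profile: dict) -> int:
    """Mock of the platform's scoring endpoint (illustrative only)."""
    score = profile["engagement"] * 10
    if profile.get("region") == "rural":   # hidden skew, planted for the demo
        score -= 5
    return score

def paired_probe(api, base_profile: dict, attribute: str, alt_value):
    """Query the API twice with profiles identical except for one attribute."""
    control = copy.deepcopy(base_profile)
    variant = copy.deepcopy(base_profile)
    variant[attribute] = alt_value
    return api(control), api(variant)

base = {"engagement": 3, "region": "urban"}
control_score, variant_score = paired_probe(platform_rank, base, "region", "rural")
print(control_score, variant_score)   # 30 25: same profile, lower rural score
```

Because the probe runs against the live API rather than an archived snapshot, it observes today's behaviour, complementing the retrospective ‘replay’ remedy above.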
26. 2. Interoperability option for
public communications providers
Instant Messaging/Search/Social Media companies
1. Not so radical – required for broadcasters and telcos
1. Electronic Programme Guides
2. Telephone numbering schemes
3. NOT interconnection – up to smaller IMs to decide how to comply
4. Co-regulatory standards
2. Not as utilities but as media providers
1. This is NOT common carrier regulation
2. Not equivalent to energy/postal providers
3. Not as publishers but as printers
1. Arguments on fake news/hate speech for another time
2. Attempts to impose ‘Duty of Care’ fiduciary in UK/US are highly inappropriate
29. EU Commissioner Vestager on
interoperability and large platforms
3 June speech: “Competition and the Digital Economy”
https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/competition-and-digital-economy_en
“Making sure that products made by one company will work
properly with those made by others –
can be vital to keep markets open for competition.”
Microsoft’s takeover of LinkedIn approval depended on
agreement to keep Office working properly,
not just with LinkedIn,
but also with other professional social networks.
“Commission will need to keep a close eye on strategies that
undermine interoperability”
31. 3. Dominant (SMP) operators
API opened
If dominant – competition and consumer remedy
1. ACCC finds dominance by Facebook & Google
2. Only applies to platform aspects of their business
1. i.e. iTunes, not Apple phones
Microsoft remedies in the longest, most expensive
antitrust case in EC history – $5 billion fines
1. Case started in 1993 in the US, in the EU 1998-2014
1. Google case started 2009 – ongoing a decade later
32. Note this is not about the
advertising market (only a proxy)
33. Three models – proposed by
Brown/Marsden 2008, 2013
Model 1: Must-carry obligations
broadcasters & Electronic Programme Guides
Model 2: API disclosure requirements
Microsoft from EC rulings
Case T-201/04, Microsoft v Commission, EU:T:2007:289, 1088
Decision 24 May 2004 Case C-3/37792 Microsoft; Decision of
16 December 2009 in Case 39530 Microsoft (Tying)
Model 3: Interconnect requirements
Applied to telcos, especially with SMP
34. Interoperability? Three types
Protocol interoperability
the ability of services/products to interconnect technically –
the usual sense of interoperability in competition policy
Data interoperability
recalling Mayer-Schönberger/Cukier:
a slice of data provided to competitors
Full protocol interoperability
what the telecoms sector often thinks of as full
interconnection
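The ‘slice of data’ idea can be sketched concretely: export only an agreed subset of a user record in a documented, portable format that a competitor can ingest. The schema and field names below are invented for illustration:

```python
# Hypothetical sketch of 'data interoperability': a dominant platform exports a
# defined slice of a user's data in a portable, documented format; the rest of
# the profile (e.g. the ad-targeting model) stays behind. Schema is illustrative.

import json

def export_slice(user_record: dict, fields=("user_id", "contacts", "posts")) -> str:
    """Serialise only the agreed slice of the record, not the full profile."""
    portable = {k: user_record[k] for k in fields if k in user_record}
    return json.dumps(portable, sort_keys=True)

def import_slice(payload: str) -> dict:
    """A competing service reads the same documented format."""
    return json.loads(payload)

record = {
    "user_id": "u123",
    "contacts": ["u456"],
    "posts": ["hello"],
    "ad_profile": {"secret": True},   # stays behind: not part of the slice
}
payload = export_slice(record)
print("ad_profile" in payload)   # False: only the agreed slice moves
```

The regulatory work is in defining the slice and the schema; the serialisation itself, as the sketch shows, is trivial.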
35. Why interoperate?
It’s the economics!
A mechanism for achieving any-to-any connectivity –
promotes innovation.
There is nothing less valuable than a network with one user!
Interoperability increases the value of networks and
promotes efficient investment in, and use of, infrastructure.
Essential for new entrants to compete with existing
operators on a non-discriminatory basis – promotes entry.
36. Is this remedy more broadly applicable?
Banking/insurance/medical algorithmic ‘AI’?
Self-driving vehicles?
Depends on a variety of socio-economic factors
Many sectors have regulators working on
‘regulatory sandbox’ solutions
Interoperability extensively used in sectors with
which we are most familiar
37. Consumer Data Right?
Oz CDR to deliver open banking, open energy and open telecoms?
Many Europeans – well, we few – are very excited about the CDR model
UK Furman Review of Digital Markets: ‘data mobility’
Competition and Markets Authority: Data, Technology & Analytics unit
Innovation and Intelligence team: audit algorithms & research tech markets
39. Christopher Kuner, Fred H. Cate, Orla Lynskey, Christopher Millard, Nora Ni Loideain,
and Dan Jerker B. Svantesson, ‘Expanding the artificial intelligence-data protection
debate’ (2018) 8 (4) International Data Privacy Law, 289
Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of
Automated Decision-Making Does Not Exist in the General Data Protection Regulation’
(2017) 7 (2) International Data Privacy Law 76;
Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Counterfactual Explanations without
Opening the Black Box: Automated Decisions and the GDPR’ (2018) Harvard Journal of Law & Technology 1
Andrew D. Selbst and Julia Powles, ‘Meaningful information and the right to
explanation’ (2017) 7 (4) International Data Privacy Law 233.
Lilian Edwards, Michael Veale, ‘Slave to the algorithm? Why a ’right to an explanation’
is probably not the remedy you are looking for’ (2017) 16 (1) Duke Law & Technology
Review 18;
Lilian Edwards, Michael Veale, ‘Enslaving the Algorithm: From a "Right to an
Explanation" to a "Right to Better Decisions”?’ (2018) 16 (3) IEEE Security & Privacy 46
Lilian Edwards, Michael Veale, ‘Clarity, surprises, and further questions in the Article 29
Working Party draft guidance on automated decision-making and profiling’ (2018) 34 (2)
Computer Law & Security Review 398
40. 10 Steps towards Ethical AI
1. Transparency
Geeks love this; it’s almost meaningless to the average user
2. Explainability
See above – more useful is replicability
3. Consent
See GDPR on meaningful & ‘course of business’
4. Discrimination
Garbage in/Garbage out
5. Accountability to Stakeholders
6. Portability
Australia’s Consumer Data Right!
7. Redress and Appeal
8. Algorithmic Literacy
See ‘how to programme your VCR’
9. Independent oversight
10. Governance
Hosanagar advocates for the creation of an independent Algorithmic
Safety Board, modeled on the Federal Reserve Board
https://www.vox.com/the-highlight/2019/5/22/18273284/ai-algorithmic-bill-of-rights-accountability-transparency-consent-bias