Open innovation and AI: Can OpenAI benefit humanity?
Kasper Groes Ludvigsen
klud@itu.dk
Re-assembling innovation
KBREINN1KU-Autumn 2017
Exam essay
ITU
Science and technology have long been identified as major sources of economic and social development (Schumpeter, 1939; Kondratiev, 1978); however, their contribution to social well-being is no longer taken for granted (Pellizzoni, 2012). This seems particularly true of research in the field
of artificial intelligence (AI). The topic of AI is widely discussed with actors in the domain arguing
publicly about the implications of AI on our society (Kasriel, 2017) and mainstream media frequently
report on the progress of AI (e.g. The New York Times overview of AI related articles (The New York
Times, 2017)). While it is widely accepted that AI holds great benefits for the capabilities of the human race (Bostrom, 2017), prominent research and industry figures also warn against the potential dangers of AI (Price, 2017; David, 2017), some even calling the potential invention of artificial superintelligence (ASI) an existential risk to humanity (Sulleyman, 2017). ASI is an AI much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills (Bostrom, 2006). AI experts generally believe that ASI will be achieved before 2075 (Müller & Bostrom, 2014). Amodei et al. (2016) argue that a multitude of technical problems exist in
relation to prevention of accidents in AI systems. An accident may be a situation in which a human
designer intended the system to perform a certain task or achieve a certain objective (perhaps
informally specified), but the system produced harmful or unexpected results. This is a common issue
in all engineering disciplines, but it may be particularly important to address when building AI systems
(Steinhardt, 2015). One of the main challenges in AI development is the value alignment problem,
which is the challenge inherent in building intelligence which is provably aligned with human values,
and it is a problem that must be addressed even for relatively unintelligent AI systems (Russell, n.d.). As
such, this challenge becomes more and more pressing as we approach ASI. However, human values are
obviously difficult to define. In response to concerns about the development of AI, a non-profit, open
source AI research company called OpenAI was founded to “advance digital intelligence in the way
that is most likely to benefit humanity as a whole” and “enact a safe path” to AI (Brockman &
Sutskever, 2015, section 1). This mission statement seems to implicitly assume that openness is the model of innovation best suited to that mission. However, the company’s open research strategy
has been criticized (Metz, 2016). To shed further light onto this criticism, this paper is concerned with
answering the following research question:
What influence can the openness of OpenAI’s research have on the company’s ability to “advance
digital intelligence in the way that is most likely to benefit humanity”?
Answering this question is an important exercise in making sense of one of the most notable actors
within AI research - a technology which, as shown above, could potentially have far reaching
consequences. The remainder of the paper is structured as follows: First, I review literature relevant to the research question. Then, I explain the paper’s methodology and introduce the reader more thoroughly to the case company, OpenAI. Next, I analyze the case, taking the reviewed literature as my point of departure. Lastly, I discuss the findings, bringing in other theoretical perspectives, and conclude.
Literature review
In the following, I review literature relevant to answering the research question.
Open Innovation
Innovation was once conceived as being the work of a lone entrepreneur bringing innovations to
markets, but new models of innovation acknowledge that innovation processes are interactive and that
innovators rely on interaction with users, suppliers and with a range of institutions in the innovation
system (Laursen & Salter, 2006). In this conception of innovation, actors do not innovate alone. Rather,
they are nested in communities of practice and embedded in a dense network of interactions (Scott and
Brown, 1999; Brown and Duguid, 2000). Open innovation is one such type of innovation, and it refers
to "the use of purposive inflows and outflows of knowledge to accelerate internal innovation and
expand the markets for external use of innovation" (Chesbrough 2006, p. 1). Open innovation in the
form of open search for new ideas is linked to innovative performance, and theory suggests that firms
that are too internally focused will miss opportunities (Laursen & Salter, 2006; Chesbrough, 2006).
“Openness” is also a phenomenon in software development, where it refers to “the practice of releasing
into the public domain (continuously and as promptly as is practicable) all relevant source code and
platforms and publishing freely about algorithms and scientific insights and ideas gained in the course
of the research.” (Bostrom, 2017, p. 1). It is worth noting here that, in Bostrom’s (ibid.) conception, “openness” is not a binary variable, as it can take on many forms.
Power and subjugated knowledges
As the following sections will show, analysis and discussion of OpenAI’s activities can be situated in
the discourse on subjugated knowledges, and this theoretical perspective plays a crucial role in the
reassembling of OpenAI as an innovation. The topic of subjugated knowledges is therefore briefly
introduced here.
Through a struggle over time, global unitary knowledges have subjugated a wide range of knowledges
and disqualified them as ‘‘beneath the required level of cognition or scientificity’’ (Foucault, 1980, p.
82). Global unitary knowledges are the privileging of the methods of science, and these have led to the
subjugation of previously established erudite knowledge and of knowledge located at the margins of
society. These subjugated knowledges have been excluded from the ‘‘legitimate domains of formal
knowledge’’ (White & Epston, 1990, p. 26).
According to Hartman (2000), Foucault was concerned with how such knowledge operates as an exercise of
power and practice of knowledge at the local level. Foucault (1980, p. 52) asserts that “it is not possible
for power to be exercised without knowledge, it is impossible for knowledge not to engender power”.
Thus, in a Foucauldian perspective, the established regimes can be deemed self-sustaining because they enter into a self-reinforcing cycle in which the knowledge they produce legitimates their power, and
their power legitimates their knowledge. At its logical conclusion, this relation between knowledge and
power sustains the subjugation of knowledge and makes it difficult for knowledge outside of the
established regimes to surface and become legitimate. The practical application of a unitary body of
knowledge is exemplified by Hartman (2008, p. 20):
"For example, that powerful global and unitary body of knowledge, the
Diagnostic and Statistical Manual of Mental Disorders, Third Edition
(American Psychiatric Association, 1980), which is centrally established and
encoded in economic, medical, and educational systems, is practiced at the
most local level--in the relationship between a social worker and a client. When
a social worker is required by an agency’s funding needs or by the rules of
third-party payers to attach a diagnostic label to a client, a powerful and
privileged classification system has entered this relationship and in all
likelihood has affected the worker’s thinking, the relationship, and the client’s
self-definition.”
Methodology
This paper is the result of qualitative inquiry in which I applied document and content analysis to
documentary secondary data (Saunders, Lewis & Thornhill, 2009). In analysing this data, a deductive
approach was applied in which the theoretical framework used in the analysis was framed a priori. I
used open innovation theory to analyze the extent to which OpenAI’s innovation processes are open
and to create a basis for understanding to what extent the company’s openness enables or hampers its
mission. I then introduce into the analysis Foucault’s notion of subjugated knowledges, which allowed
me to problematize the extent to which OpenAI’s innovation process is open. In the case of OpenAI, these theories are closely interlinked, as they allow us to make sense of an observed phenomenon in different but complementary ways. The analyzed documents are online news
articles, OpenAI’s mission statement, its first blog post in which the background for founding the
company is described and summaries of the company’s 39 research papers. The latter were used in a
content analysis which was guided by Kassarjian’s (1977) framework and served to identify how many
of the articles fall into three categories created for the purpose of this analysis: “AI safety”, “the solicitation of public opinion on AI development” or “efforts to identify global human values”. The
first category is relevant given the company’s goal to “enact a safe path” to AI. The second and third
are relevant because they allow me to assess the extent to which the company makes use of purposive
inflows of knowledge from the broader public and efforts made by the company to identify what
human values are. This allowed me to assess the extent to which OpenAI’s innovation process and
research activities help accomplish its mission. I drew on Kassarjian’s (1977) framework because it
allows for the integration of existing categories into his framework; thus it allowed me to integrate the
three content categories mentioned above.
In order to ensure the reliability and validity of the data, only data from authoritative news outlets and OpenAI’s own website was used. Dochartaigh’s (2002) recommendations for assessing the authority of documents available via the internet were used as a guide for the selection of authoritative sources.
Using a deductive approach has certain strengths, including linking the research to an existing body of knowledge and providing an initial analytical framework (Saunders, Lewis & Thornhill, 2009). In using a deductive approach, there is a risk of introducing premature closure in the investigated issues (ibid.). My awareness of this issue guided my analysis so as to avoid it. An advantage of the chosen methodology is that secondary data sources are likely to be of higher quality than what could be
collected by oneself (Stewart & Kamins 1993). Also, the data collection method offered me access to
insights into an organization that would otherwise be off-limits. In addition, the data I used is
permanent and publicly available allowing others to access it easily thereby opening up my conclusions
to public scrutiny (Saunders, Lewis & Thornhill, 2009). A limitation of the methodology is that other insights would likely have emerged if other methods of data collection, such as interviews, had
been used. The use of secondary data sources also means that control over data quality is lost
(Saunders, Lewis & Thornhill, 2009).
The case
With more than 1 billion US dollars in funding (Brockman & Sutskever, 2015) and backing from
prominent figures in the technology and research sectors, OpenAI is a resourceful actor which can
potentially yield great progress in AI development. Given the potential of OpenAI’s research activities,
it is important to assess how the openness of the company’s research can affect its ability to achieve its
mission. Given the research question, the most relevant aspects of OpenAI to look into are the
background for founding the company, the extent to which the company’s innovation processes are
open, the extent to which the company allows for knowledge to flow in and out of the company and the
nature of the company’s research activities. According to the company’s “Launch blog post”
(Brockman & Sutskever, 2015), the unpredictability of AI development, and the profound impacts it
can have on humanity, creates a need for “a leading research institution which can prioritize a good
outcome for all over its own self-interest” and the founders are hoping OpenAI will become that
institution (ibid, paragraph 7). With this in mind, the company strives “to build value for everyone”, but
according to one of the co-founders, the exact goal of the company is “a little vague” (Friend, 2016).
The company encourages its researchers to publish all their work and any patented technology will be
publicly available, but one of the co-chairs has also stated publicly that the company will not release all
its source code (ibid). OpenAI will “collaborate with others across many institutions” and expect to
work with other companies, and a number of its research papers have been authored in collaboration
with other actors external to the organization. Here, I particularly notice that OpenAI does not mention
collaborations or consultations with the broader public. Indeed, the company is “planning a way to
allow wide swaths of the world to elect representatives to a new governance board” (Friend, 2016), but judging from the lack of empirical evidence to the contrary, the governance board has yet to
materialize. As mentioned, I analyzed the 39 research papers published by OpenAI in order to
determine the number of papers related to AI safety, the solicitation of public opinion on AI
development or efforts to identify global human values. The results of this analysis are seen below:
Table 1: Number of published articles related to AI safety, solicitation of public opinion
and identifying human values

                                          AI safety   Solicitation of    Identifying global
                                                      public opinion     human values
Number of articles                        3           0                  0
Percentage of total published articles    7.69 %      0 %                0 %
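For transparency, the tallies and percentages in Table 1 can be reproduced with a short script. The sketch below is purely illustrative: the per-category counts are the ones reported above, not the underlying coding data, and only the percentages are derived.

```python
# Illustrative recomputation of Table 1.
TOTAL_PAPERS = 39

category_counts = {
    "AI safety": 3,
    "Solicitation of public opinion": 0,
    "Identifying global human values": 0,
}

for category, count in category_counts.items():
    # Percentage of the 39 published papers falling into this category.
    share = round(count / TOTAL_PAPERS * 100, 2)
    print(f"{category}: {count} of {TOTAL_PAPERS} papers ({share} %)")
```

This reproduces the 7.69 % figure (3 of 39 papers) used in the analysis below.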
Analysis
The following analysis shows how an innovation process can be dismantled and reassembled using
various innovation-related theories. In particular, it shows how an empirical observation, i.e. the lack of
public consultation and purposive inflow of knowledge, can be analyzed using disparate theories and
how these analyses can yield different but complementary understandings of the observed phenomenon
and its implications. In this section, I apply the theoretical lenses of open innovation and subjugated
knowledges to the case of OpenAI. To reiterate, open innovation is “the use of purposive inflows and
outflows of knowledge to accelerate internal innovation and expand the markets for external use of
innovation" (Chesbrough 2006, p. 1). At first sight, it seems natural to classify OpenAI as an instance
of open innovation, but upon closer inspection, this might not be the case. As is clear from the case
description above, OpenAI definitely has a purposive outflow of knowledge, as exemplified by the
encouragement of researchers to publish findings and the release of source code. The company also has
some inflow of knowledge, exemplified by active collaboration with other actors within the research
and industry domain. However, OpenAI does not actively solicit the public opinion on issues related to
the development of AI. Although the company plans to involve the broader society, these plans have
yet to materialize. It therefore seems fair to conclude that, despite the apparent openness of the
company’s innovation process, the process lacks inflow of knowledge to some extent, and it is
therefore not open enough to be characterized as open innovation. This raises a theoretical question: Is
the applied definition of open innovation too rigid? After all, OpenAI’s innovation process is quite
open. If we fail to acknowledge OpenAI as an instance of open innovation, our research of open
innovation cases, hence our understanding and knowledge of open innovation, will be limited. The
failure of the applied definition to encompass OpenAI is therefore not trivial. One could argue that, just
like openness in software development is not a binary variable, neither should open innovation be. I
therefore introduce a distinction between directed and undirected open innovation. Acknowledging the
importance of networks in innovation, the terms are inspired by Newman’s (2003) characterisation of
network edges as being either directed, meaning that information runs in only one direction, or being
undirected, meaning that information can flow in both directions. It must be noted that directed and undirected open innovation should be considered two ends of a continuum rather than a binary
variable. Using this typology, OpenAI is mostly an instance of directed open innovation due to the
large extent to which the company makes use of a purposive outflow of knowledge and the somewhat
limited extent to which inflow of knowledge is used.
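To make Newman’s (2003) edge distinction concrete, it can be sketched in code. The node names below (“OpenAI”, “public”, “research partners”) are hypothetical illustrations of the typology, not a model of OpenAI’s actual network:

```python
# Directed edge: knowledge flows one way only (e.g. outflow from OpenAI).
directed_edges = {("OpenAI", "public")}

# Undirected edge: knowledge flows both ways; each pair is stored as a
# frozenset so that (a, b) and (b, a) denote the same edge.
undirected_edges = {frozenset({"OpenAI", "research partners"})}

def can_flow(source, target):
    """True if knowledge can flow from source to target in this toy network."""
    return (source, target) in directed_edges or any(
        source in edge and target in edge for edge in undirected_edges
    )

print(can_flow("OpenAI", "public"))             # True: outflow of knowledge
print(can_flow("public", "OpenAI"))             # False: no inflow on a directed edge
print(can_flow("research partners", "OpenAI"))  # True: undirected collaboration
```

On this sketch, OpenAI’s missing public consultation corresponds to the absence of an edge carrying knowledge from the public back into the company.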
The assessment of the impact of OpenAI’s openness on its ability to accomplish its mission is complex.
In the short term, the outflow of knowledge from OpenAI will likely yield positive outcomes, but this
may not be the case in the long term. The desirability of the long term consequences of openness
depends on whether the objective is to benefit current or future generations. Openness about safety
measures and goals is likely to be positive on both counts. However, other forms of openness, for
instance regarding source code, science and possibly capability could increase competition around the
time of the introduction of advanced AI, which could increase the probability that “winning the AI
race” is incompatible with applying safety measures which slow down the development process or
impose constraints on the performance of the AI (Bostrom, 2017). As such, it seems the very open
nature of OpenAI’s knowledge outflow may be problematic in the longer term.
In the short and long term, an issue also arises due to the lack of a purposive inflow of knowledge,
particularly in relation to its lack of inflow of knowledge in the form of public consultation. This is
because open search in open innovation processes is correlated with better innovative performance (Laursen & Salter, 2006).
This means that not only would the purposive inflow of knowledge in the form of public consultation
make it more likely that OpenAI achieves its goal of benefitting humanity, it would also make the
company achieve this goal more quickly in the sense that its innovative performance would increase. A
practical recommendation arising from this analysis is therefore that OpenAI should purposefully use
the inflow of knowledge of the public in the development of AI.
On the subjugation of knowledge and the nature of OpenAI’s research activities
The apparent failure of OpenAI to have a purposive inflow of knowledge from all societal actors and to
actively solicit public opinion on innovation efforts can also be understood from the perspective of
Foucault’s subjugated knowledges. OpenAI clearly attempts to justify its research by reference to
benefits it will have for humanity. However, applying Foucault’s (1980) perspective on subjugated
knowledge in the analysis of OpenAI allows us to understand that the position assumed by OpenAI
subjugates knowledge of the public. As such, OpenAI may fail to produce societally beneficial research
because the company fails to consult the general public, and instead bases its research on what its
employees deem to be the safe path to AGI. This places OpenAI among the centralized, political,
economic and institutional regimes, and like Foucault, one could be concerned with how the regimes
exercise power as they are practiced at the local level. Applying this Foucauldian perspective to the
innovation case of OpenAI, the fact that knowledge produced by OpenAI is privileged is not only
problematic because it subjugates other knowledges. The real problem may arise in the way this
knowledge is practiced at the local level, i.e. how the knowledge affects the lives of those whose
knowledge is subjugated. One could imagine a situation in which knowledge produced by OpenAI aids
the development of AI which ends up being used in ways that further subjugates knowledge or which is
not aligned with the worldviews of actors at the local level. If OpenAI achieves its goal of becoming
the dominant research institution within the field of AI, it is of particular importance that public
opinion is solicited and that the subjugation of knowledge is prevented in this process.
Returning to Hartman’s (2008) example of a powerful, global and unitary body of knowledge, the
Diagnostic and Statistical Manual of Mental Disorders, an analogy to AI development can be made.
Instead of a human doctor applying the manual, it could be applied by an AGI and practiced at the most
local level, namely in the relationship between a social worker and a client. If the worker’s thinking, the relationship and the client’s self-definition are affected by the application of the manual by a social worker, these will all be radically changed once an AGI enters the relationship. The social worker might not be needed, the relationship will be between a human and a machine, and the client’s self-definition will be affected by the functioning of a machine. For this reason, it is highly important that
knowledges are not subjugated in the discourse on the desirable development of AI.
If knowledges remain subjugated in OpenAI’s innovation processes, it seems unlikely that the company
will accomplish its mission of creating AI for the benefit of humanity, because if you do not know what
humanity thinks of AI, it will be impossible to build something that humanity approves of. OpenAI’s
apparent subjugation of knowledge plausibly also means that the company will be unable to sufficiently
tackle the value alignment problem. As mentioned, the value alignment problem is the challenge
inherent in building intelligence which is provably aligned with human values. Again, with no efforts
in identifying human values, the company will arguably not be able to design AI which aligns with
human values. Thus, the lack of inflow of knowledge, and particularly the subjugation of knowledge
will reduce OpenAI’s ability to accomplish its mission. As stated in the introduction, the value
alignment problem must be addressed even for relatively unintelligent AI systems, so from this point of
view, the fact that OpenAI is not engaging in the identification of human values or the resolution of the
value alignment problem through its research makes matters worse. It could also be argued that for a
company with the goal of “enacting the safe path to AGI”, more than 7.69 % of published articles
should be about AI safety.
Discussion
As stated in the analysis, OpenAI’s lack of inflow of knowledge from the public could hamper its
mission. In the following, I will discuss various counter arguments and the challenges OpenAI will face
if it implements the plan of including the public in its innovation processes.
OpenAI’s plan to include the public
One could argue that OpenAI’s plans to include the public should to some extent absolve the company
of the criticism presented above. Accepting such an argument assumes that a company will do what it
says it will do, and such an argument could be countered with reference to organizational hypocrisy
(Brunsson, 2003). The mere fact that the company “plans” to include the broader public may not be
more than an instance of organizational hypocrisy - an act of communication undertaken by a company
as a means of postponing the implementation of a decision taken to satisfy certain stakeholders.
Critique of deliberative innovation processes
Critics of deliberative democracy assert that collective crafting is constituted through acts of power
such as control and exclusion (Mouffe 1999). This critique not only applies to deliberative democracy
at the state level, but also to the application of deliberative processes in innovation (Oudheusden, 2014). Thus, we cannot simply consider a deliberative innovation process an inclusive “weighing of interests”
(ibid, 73), as any attempt by OpenAI to include subjugated knowledges in its innovation processes
would be subject to battles for power and the right to be heard. This theoretical disposition points to the
practical difficulties OpenAI would have in facilitating broad inclusion of subjugated knowledges in its
innovation processes. Unfortunately, empirical evidence of how deliberation is accomplished is sparse
(ibid.). Thus, future research should address questions pertaining to the distribution of power, such as
“What actors are relevant to include and which are not?” Another challenge OpenAI would face would
be to make the outcomes of deliberative processes count in policy and science arenas (Oudheusden,
2014). OpenAI is indeed an important actor in the field of AI development, but it is not the only actor,
and it is therefore important for the company to learn to wield political influence and strategies
(Wesselink & Hoppe, 2011).
The desirability of democratic inclusion
Through the theoretical lenses applied in the analysis, OpenAI’s lack of inflow of knowledge and the
company’s apparent subjugation of knowledge are criticized on the grounds that they will hamper its ability to accomplish its mission. This critique also implicitly assumes that expertise is a negotiated attribute (Nahuis and
Van Lente 2008), meaning that expertise is not considered the prerogative of scientists or other
formally recognized experts, but of publics more broadly. It is generally recognized that the inclusion
of the public in deliberation often leads to tensions between formally recognized experts and laypersons
over what constitutes evidence and who is entitled to speak why, when and on behalf of whom or what
(Oudheusden, 2014). This points to a challenge that OpenAI would have to learn to manage if it were
to fulfill its plan of establishing a governance board to engage in deliberation on the development of
AI. This challenge also ties back to the notions of power, control and exclusion in deliberative
innovation processes mentioned above.
Due to its openness, non-profit status and its mission of doing research free of economic obligations,
OpenAI can be viewed as a rebellion against market-based innovation much like innovation commons
(Allen & Potts, 2016; Brian, 2015). In addition, OpenAI’s plan of allowing “wide swaths of the world
to elect representatives to a new governance board” seems akin to the concept of democracy. OpenAI’s
innovation process and plan to include the broader public in its endeavors can therefore be criticized
from the perspective of Hayek (1960) who is a notable proponent of the notion that democracy is not as
important as the conservation of the free market, because the free market will create or sustain the
liberty of the individual, whereas democracy tends to diminish it (Hayek, 1960). He argues that
government should only be guided by majority opinion if the opinion is independent of government,
but that in many instances it is not. Majority decisions, he posits, show us what people want at the moment, but not what would be in their interest if they were better informed; unless their opinion can be changed by persuasion, such decisions are of no value. From this perspective, OpenAI can be
criticized as opposing the free market and potentially limiting the liberty of the individual, and Hayek’s
critique of democracy points to the necessity of a well-informed discussion when including the broader
publics in deliberative innovation processes. The extent to which Hayek’s criticism of democracy can be
applied to the case of OpenAI provides an interesting avenue for further inquiry. Speaking of OpenAI’s
apparent rebellion against market forces entering into research, I also welcome a broad discussion on
the extent to which OpenAI is in fact solving the right problem. As proposed by Allworth (2015),
attention should be directed at solving the problem that market interests hinder the development of
technology which benefits humanity.
Conclusion
This paper set out to investigate how the openness of OpenAI’s innovation activities can affect the
company’s ability to achieve its mission which is to “advance digital intelligence in the way that is
most likely to benefit humanity”. An important empirical observation was made in the course of
analyzing OpenAI, namely that the company does not include the broader public in its innovation
processes. In addition, a mere 7.69 % of the company’s research papers are on AI safety which seems
low given the company’s mission of enacting the safe path to AGI, and none of its research activities
strive to identify global human values or solicit the public’s opinion. This constitutes a lack of purposive inflow of knowledge, and OpenAI’s innovation processes can therefore not be deemed open innovation. Thus,
the company’s innovation performance will likely be hampered. To allow for a more flexible
perspective on what constitutes open innovation, I propose using the term directed open innovation to
describe instances of open innovation in which there is only either inflow or outflow of knowledge, and
the term undirected open innovation to describe cases with both inflow and outflow of knowledge.
This distinction will enable the investigation of a broader range of companies such as OpenAI under
the open innovation framework, thus improving our understanding of what openness in innovation is. I
therefore consider this to be one of the main contributions of this paper. Using this distinction, which is
a continuum rather than a binary variable, OpenAI is mostly an instance of directed open innovation.
The lack of public consultation also constitutes subjugation of knowledges, which allows us to
understand that knowledge produced by OpenAI is unitary and privileged. Therefore, when OpenAI’s
knowledge is practiced at the local level, problems may arise. One could imagine a situation in which
knowledge produced by OpenAI aids the development of AI which ends up being used in ways which
further subjugates knowledge or which is not aligned with the worldviews of actors at the local level.
Due to OpenAI’s subjugation of knowledge and the lack of inflow of knowledge into the organization,
the AI developed by OpenAI may be misaligned with human values. Thus the company will fail to
tackle an important issue in AI development.
In sum, from the perspectives of the applied theories, OpenAI’s failure to include the public in its
innovation processes reduces the company’s ability to achieve its goal, because it subjugates
knowledges thus failing to identify human values and align with those, and because the lack of a
purposive inflow of knowledge in the form of soliciting public opinion could hamper its innovative
performance.
References
Allen, D.W.E. & Potts, J., (2016). How innovation commons contribute to discovering and
developing new technologies. International Journal of the Commons. 10(2), pp.1035–1054.
DOI: http://doi.org/10.18352/ijc.644
Allworth, J. (2015). Is OpenAI Solving the Wrong Problem? Retrieved November 23, 2017, from
https://hbr.org/2015/12/is-openai-solving-the-wrong-problem
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete
Problems in AI Safety. Retrieved from http://arxiv.org/abs/1606.06565
Bostrom, N. (2017). Strategic Implications of Openness in AI Development. Global Policy.
https://doi.org/10.1111/1758-5899.12403
Bostrom, N. (2006). How long before superintelligence? Linguistic and Philosophical Investigations,
5(1), 11–30. Retrieved from https://nickbostrom.com/superintelligence.html
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Brian, M. (2015). "From capitalism to commons". Anarcho-Syndicalist Review, 64.5
Brockman, G., & Sutskever, I. (2015). Introducing OpenAI. Retrieved November 19, 2017, from
https://blog.openai.com/introducing-openai/
Brown, J. S., & Duguid, P. (2000). The social life of information. Boston, MA: Harvard Business
School Press.
Brunsson, N. (2003). Organized hypocrisy. In B. Czarniawska & G. Sevón (Eds.), The Northern Lights:
Organization theory in Scandinavia (pp. 201–222). Copenhagen: CBS Press.
Chesbrough, H. (2012). Open innovation: Where we've been and where we're going. Research-Technology
Management. https://doi.org/10.5437/08956308X5504085
David, J. E. (2017). Elon Musk issues a stark warning about A.I., calls it a bigger threat than North
Korea. Retrieved November 6, 2017, from https://www.cnbc.com/2017/08/11/elon-musk-issues-a-
stark-warning-about-a-i-calls-it-a-bigger-threat-than-north-korea.html
Dochartaigh, N.O. (2002) The Internet Research Handbook: A Practical Guide for Students and
Researchers in the Social Sciences. London: Sage.
Friend, T. (2016). Sam Altman's manifest destiny. The New Yorker. Retrieved November 22, 2017,
from https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny
Hartman, A. (2000). In Search of Subjugated Knowledge. Journal of Feminist Family Therapy, 11(4),
19–23. https://doi.org/10.1300/J086v11n04_03
Kasriel, S. (2017). Why Elon Musk is wrong about AI. Retrieved November 22, 2017 from
http://fortune.com/2017/07/27/elon-musk-mark-zuckerberg-ai-debate-work/
Kassarjian, H. H. (1977). Content analysis in consumer research. Journal of Consumer Research, 4(1), 8–18.
Kondratiev, N. (1978). The long wave in economic life (W. Stolper, Trans.). Lloyds Bank
Review, 129, 41–60.
Markoff, J. (2015). Silicon Valley investors to bankroll artificial-intelligence center. Retrieved
November 22, 2017, from https://www.seattletimes.com/business/technology/silicon-valley-
investors-to-bankroll-artificial-intelligence-center/
Metz, C. (2016). Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free. Retrieved
November 22, 2017, from https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-
to-set-artificial-intelligence-free/
Miller, E. F. (2010). Hayek's The Constitution of Liberty: An account of its argument. London:
The Institute of Economic Affairs. Retrieved from
http://iea.org.uk/sites/default/files/publications/files/Hayek's Constitution of Liberty.pdf
Müller, V. C., & Bostrom, N. (2014). Future Progress in Artificial Intelligence: A Survey of Expert
Opinion. Fundamental Issues of Artificial Intelligence. Retrieved from
https://nickbostrom.com/papers/survey.pdf
Mouffe, C. (1999). Deliberative democracy or agonistic pluralism? Social Research, 66(3), 745–758.
Newman, M. E. J. (2003). The structure and function of complex networks. SIAM Review, 45(2),
167–256. Retrieved from https://arxiv.org/pdf/cond-mat/0303516.pdf
OpenAI. (n.d.). About OpenAI. Retrieved November 6, 2017, from https://openai.com/about/
Pellizzoni, L. (2012). Strong will in a messy world: Ethics and the government of technoscience.
NanoEthics, 6, 257–272. https://doi.org/10.1007/s11569-012-0159-x
Price, E. (2017). Stephen Hawking Thinks AI Could Help Robots Take Over The World. Retrieved
November 6, 2017, from http://fortune.com/2017/11/03/stephen-hawking-danger-ai/
Russell, S. (n.d.). Of myths and moonshine. Retrieved November 18, 2017, from
https://www.edge.org/conversation/the-myth-of-ai#26015
Saunders, M., Lewis, P., & Thornhill, A. (2009). Research methods for business students (5th ed.).
Harlow: Prentice Hall.
Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of
Research and Development, 3(3), 209–229.
Schumpeter, J. A. (1939). Business cycles. New York: McGraw-Hill.
Steinhardt, J. (2015). Long-term and short-term challenges to ensuring the safety of AI systems.
Retrieved June 13, 2016, from https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-of-ai-systems/
Stewart, D. W., & Kamins, M. A. (1993). Secondary research: Information sources and methods
(2nd ed.). Newbury Park, CA: Sage.
Sulleyman, A. (2017). Elon Musk: AI is a "fundamental existential risk for human civilisation" and
creators must slow down. Retrieved November 22, 2017, from
http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-human-
civilisation-existential-risk-artificial-intelligence-creator-slow-down-tesla-a7845491.html
Swan, J., & Scarbrough, H. (2005). The politics of networked innovation. Human Relations, 58(7),
913–943.
The New York Times. (2017). Artificial intelligence. Retrieved November 22, 2017, from
https://www.nytimes.com/topic/subject/artificial-intelligence
Tufekci, Z. (2014). Big Questions for Social Media Big Data: Representativeness, Validity and Other
Methodological Pitfalls. In Proceedings of the Eighth International AAAI Conference on Weblogs
and Social Media (pp. 505–514). Retrieved from
https://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/viewFile/8062/8151
Van Oudheusden, M. (2014). Where are the politics in responsible innovation? European governance,
technology assessments, and beyond. Journal of Responsible Innovation, 1(1), 67–86.
Wesselink, A., & Hoppe, R. (2011). If post-normal science is the solution, what is the problem?
The politics of activist environmental science. Science, Technology, & Human Values, 36(3), 389–412.
White, M., & Epston, D. (1990). Narrative means to therapeutic ends. New York: W. W. Norton.
Contenu connexe

Similaire à Open innovation and artificial intelligence: Can OpenAI benefit humanity?

In Search of an Open Innovation Theory
In Search of an Open Innovation TheoryIn Search of an Open Innovation Theory
In Search of an Open Innovation Theory
Varun Deo
 
Big Data Research Trend and Forecast (2005-2015): An Informetrics Perspective
Big Data Research Trend and Forecast (2005-2015): An Informetrics PerspectiveBig Data Research Trend and Forecast (2005-2015): An Informetrics Perspective
Big Data Research Trend and Forecast (2005-2015): An Informetrics Perspective
The International Journal of Business Management and Technology
 
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
AJHSSR Journal
 
Report IndividualHCI_MaiBodin_Oct 31 2014
Report IndividualHCI_MaiBodin_Oct 31 2014Report IndividualHCI_MaiBodin_Oct 31 2014
Report IndividualHCI_MaiBodin_Oct 31 2014
Mai Bodin
 
Objectification Is A Word That Has Many Negative Connotations
Objectification Is A Word That Has Many Negative ConnotationsObjectification Is A Word That Has Many Negative Connotations
Objectification Is A Word That Has Many Negative Connotations
Beth Johnson
 
EXPLORING THE USE OF GROUNDED THEORY AS A METHODOLOGICAL.docx
EXPLORING THE USE OF GROUNDED THEORY AS A METHODOLOGICAL.docxEXPLORING THE USE OF GROUNDED THEORY AS A METHODOLOGICAL.docx
EXPLORING THE USE OF GROUNDED THEORY AS A METHODOLOGICAL.docx
ssuser454af01
 
Why a profession needs a discipline
Why a profession needs a disciplineWhy a profession needs a discipline
Why a profession needs a discipline
Sue Myburgh
 
Artificial Intelligence and Life in 2030. Standford U. Sep.2016
Artificial Intelligence and Life in 2030. Standford U. Sep.2016Artificial Intelligence and Life in 2030. Standford U. Sep.2016
Artificial Intelligence and Life in 2030. Standford U. Sep.2016
Peerasak C.
 

Similaire à Open innovation and artificial intelligence: Can OpenAI benefit humanity? (20)

In Search of an Open Innovation Theory
In Search of an Open Innovation TheoryIn Search of an Open Innovation Theory
In Search of an Open Innovation Theory
 
User motivation in crowdsourcing
User motivation in crowdsourcingUser motivation in crowdsourcing
User motivation in crowdsourcing
 
Big Data Research Trend and Forecast (2005-2015): An Informetrics Perspective
Big Data Research Trend and Forecast (2005-2015): An Informetrics PerspectiveBig Data Research Trend and Forecast (2005-2015): An Informetrics Perspective
Big Data Research Trend and Forecast (2005-2015): An Informetrics Perspective
 
APPLICATIONS OF HUMAN-COMPUTER INTERACTION IN MANAGEMENT INFORMATION SYSTEMS
APPLICATIONS OF HUMAN-COMPUTER INTERACTION IN MANAGEMENT INFORMATION SYSTEMSAPPLICATIONS OF HUMAN-COMPUTER INTERACTION IN MANAGEMENT INFORMATION SYSTEMS
APPLICATIONS OF HUMAN-COMPUTER INTERACTION IN MANAGEMENT INFORMATION SYSTEMS
 
Analytic Essay Examples
Analytic Essay ExamplesAnalytic Essay Examples
Analytic Essay Examples
 
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
Debate on Artificial Intelligence in Justice, in the Democracy of the Future,...
 
Report IndividualHCI_MaiBodin_Oct 31 2014
Report IndividualHCI_MaiBodin_Oct 31 2014Report IndividualHCI_MaiBodin_Oct 31 2014
Report IndividualHCI_MaiBodin_Oct 31 2014
 
Do not blame it on the algorithm an empirical assessment of multiple recommen...
Do not blame it on the algorithm an empirical assessment of multiple recommen...Do not blame it on the algorithm an empirical assessment of multiple recommen...
Do not blame it on the algorithm an empirical assessment of multiple recommen...
 
Objectification Is A Word That Has Many Negative Connotations
Objectification Is A Word That Has Many Negative ConnotationsObjectification Is A Word That Has Many Negative Connotations
Objectification Is A Word That Has Many Negative Connotations
 
Ai is the new black
Ai is the new black Ai is the new black
Ai is the new black
 
Knowledge Gap: The Magic behind Knowledge Expansion
Knowledge Gap: The Magic behind Knowledge ExpansionKnowledge Gap: The Magic behind Knowledge Expansion
Knowledge Gap: The Magic behind Knowledge Expansion
 
EXPLORING THE USE OF GROUNDED THEORY AS A METHODOLOGICAL.docx
EXPLORING THE USE OF GROUNDED THEORY AS A METHODOLOGICAL.docxEXPLORING THE USE OF GROUNDED THEORY AS A METHODOLOGICAL.docx
EXPLORING THE USE OF GROUNDED THEORY AS A METHODOLOGICAL.docx
 
Bullshiters - Who Are They And What Do We Know About Their Lives
Bullshiters - Who Are They And What Do We Know About Their LivesBullshiters - Who Are They And What Do We Know About Their Lives
Bullshiters - Who Are They And What Do We Know About Their Lives
 
The AI Now Report The Social and Economic Implications of Artificial Intelli...
The AI Now Report  The Social and Economic Implications of Artificial Intelli...The AI Now Report  The Social and Economic Implications of Artificial Intelli...
The AI Now Report The Social and Economic Implications of Artificial Intelli...
 
Why a profession needs a discipline
Why a profession needs a disciplineWhy a profession needs a discipline
Why a profession needs a discipline
 
Ai 100 report_0901fnlc_single
Ai 100 report_0901fnlc_singleAi 100 report_0901fnlc_single
Ai 100 report_0901fnlc_single
 
Ai 100 report_0906fnlc_single
Ai 100 report_0906fnlc_singleAi 100 report_0906fnlc_single
Ai 100 report_0906fnlc_single
 
Ai 100 report_0906fnlc_single
Ai 100 report_0906fnlc_singleAi 100 report_0906fnlc_single
Ai 100 report_0906fnlc_single
 
Artificial Intelligence and Life in 2030. Standford U. Sep.2016
Artificial Intelligence and Life in 2030. Standford U. Sep.2016Artificial Intelligence and Life in 2030. Standford U. Sep.2016
Artificial Intelligence and Life in 2030. Standford U. Sep.2016
 
Intelligence Analysis
Intelligence AnalysisIntelligence Analysis
Intelligence Analysis
 

Dernier

Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Victor Rentea
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
panagenda
 

Dernier (20)

Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot ModelMcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
 
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
 
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
Platformless Horizons for Digital Adaptability
Platformless Horizons for Digital AdaptabilityPlatformless Horizons for Digital Adaptability
Platformless Horizons for Digital Adaptability
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 
Vector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptxVector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptx
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 

Open innovation and artificial intelligence: Can OpenAI benefit humanity?

  • 1. 1 Open innovation and AI: Can OpenAI benefit humanity? Kasper Groes Ludvigsen klud@itu.dk Re-assembling innovation KBREINN1KU-Autumn 2017 Exam essay ITU
  • 2. 2 Science and technology have long been identified as major sources of economic and social development (Schumpeter 1939; Kondratiev 1978), however, their contribution to social well-being is no longer taken for granted (Pellizzoni 2012). This seems to be particularly true for research in the field of artificial intelligence (AI). The topic of AI is widely discussed with actors in the domain arguing publicly about the implications of AI on our society (Kasriel, 2017) and mainstream media frequently report on the progress of AI (e.g. The New York Times overview of AI related articles (The New York Times, 2017)). While it is widely accepted that AI hold great benefits for the capabilities of the human race (Bostrom, 2017), prominent research and industry figures also warn against the potential dangers of AI (Price, 2017; David, 2017), some even calling the potential invention of artificial super intelligence (ASI) an existential risk to humanity (Sulleyman, 2017). ASI is an AI much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills (Bostrom, 2006). AI experts generally believe that ASI will be achieved before 2075 (Müller & Bostrom, 2014). Amodei et al (2016) argue that a multitude of technical problems exist in relation to prevention of accidents in AI systems. An accident may be a situation in which a human designer intended the system to perform a certain task or achieve a certain objective (perhaps informally specified), but the system produced harmful or unexpected results. This is a common issue in all engineering disciplines, but it may be particularly important to address when building AI systems (Steinhardt, 2015). 
One of the main challenges in AI development is the value alignment problem, which is the challenge inherent in building intelligence which is provably aligned with human values, and it is a problem that must be addressed even for relatively unintelligent AI systems (Russel, n.d). As such, this challenge becomes more and more pressing as we approach ASI. However, human values are obviously difficult to define. In response to concerns about the development of AI, a non-profit, open source AI research company called OpenAI was founded to “advance digital intelligence in the way that is most likely to benefit humanity as a whole” and “enact a safe path” to AI (Brockman & Sutskever, 2015, section 1). This mission statement seems to implicitly assume that openness is the optimal model of innovation considering its mission. However, the company’s open research strategy has been criticized (Metz, 2016). To shed further light onto this criticism, this paper is concerned with answering the following research question:
  • 3. 3 What influence can the openness OpenAI’s research have on the company’s ability to “advance digital intelligence in the way that is most likely to benefit humanity”? Answering this question is an important exercise in making sense of one of the most notable actors within AI research - a technology which, as shown above, could potentially have far reaching consequences. The remainder of the paper is structured as follows: First, I review literature relevant to the research questions. Then, I explain the methodology of the paper and introduce the reader more thoroughly to the case company OpenAI. After, I analyze the case with point of departure in the reviewed literature and lastly I discuss the findings, bringing in other theoretical perspectives and conclude on the findings. Literature review In the following, I review literature relevant to answering the research question. Open Innovation Innovation was once conceived as being the work of a lone entrepreneur bringing innovations to markets, but new models of innovation acknowledge that innovation processes are interactive and that innovators rely on interaction with users, suppliers and with a range of institutions in the innovation system (Laursen & Salter, 2006). In this conception of innovation, actors do not innovate alone. Rather, they are nested in communities of practice and embedded in a dense network of interactions (Scott and Brown, 1999; Brown and Duguid, 2000). Open innovation is one such type of innovation, and it refers to "the use of purposive inflows and outflows of knowledge to accelerate internal innovation and expand the markets for external use of innovation" (Chesbrough 2006, p. 1). Open innovation in the form of open search for new ideas is linked to innovative performance, and theory suggests that firms that are too internally focused will miss opportunities (Laursen & Salter, 2006; Chesbrough, 2006). 
“Openess” is also a phenomenon in software development where it refers to “the practice of releasing into the public domain (continuously and as promptly as is practicable) all relevant source code and platforms and publishing freely about algorithms and scientific insights and ideas gained in the course
  • 4. 4 of the research.” (Bostrom, 2017, p. 1). It is worth noting here that, in Bostrom’s (ibid) conception of “openess”, it is not a binary variable, as it can take on many forms. Power and subjugated knowledges As the proceeding sections will show, analysis and discussion of OpenAI’s activities can be situated in the discourse on subjugated knowledges, and this theoretical perspective plays a crucial role in the reassembling of OpenAI as an innovation. The topic of subjugated knowledges is therefore briefly introduced here. Through a struggle over time, global unitary knowledges have subjugated a wide range of knowledges and disqualified them as ‘‘beneath the required level of cognition or scientificity’’ (Foucault, 1980, p. 82). Global unitary knowledges are the privileging of the methods of science, and these have led to the subjugation of previously established erudite knowledge and of knowledge located at the margins of society. These subjugated knowledges have been excluded from the ‘‘legitimate domains of formal knowledge’’ (White & Epston, 1990, p. 26). According to Hartman (2000), Foucault was concerned with how this knowledge was exercise of power and practice of knowledge at the local level. Foucault (1980, p. 52) asserts that “it is not possible for power to be exercised without knowledge, it is impossible for knowledge not to engender power”. Thus, in a Foucauldian perspective, the established regimes can be deemed to be self-sustaining because they enter into a virtuous cycle where the knowledge they produce legitimates their power, and their power legitimates their knowledge. At its logical conclusion, this relation between knowledge and power sustains the subjugation of knowledge and makes it difficult for knowledge outside of the established regimes to surface and become legitimate. The practical application of a unitary body of knowledge is exemplified by Hartman (2008, p. 
20) "For example, that powerful global and unitary body of knowledge, the Diagnostic and Statistical Manual of Mental Disorders, Third Edition (American Psychiatric Association, 1980), which is centrally established and encoded in economic, medical, and educational systems, is practiced at the
  • 5. 5 most local level--in the relationship between a social worker and a client. When a social worker is required by an agency’s funding needs or by the rules of third-party payers to attach a diagnostic label to a client, a powerful and privileged classification system has entered this relationship and in all likelihood has affected the worker’s thinking, the relationship, and the client’s self-definition.” Methodology This paper is the result of qualitative inquiry in which I applied document and content analysis to documentary secondary data (Saunders, Lewis & Thornhill, 2009). In analysing this data, a deductive approach was applied in which the theoretical framework used in the analysis was framed a priori. I used open innovation theory to analyze the extent to which OpenAI’s innovation processes are open and to create a basis for understanding to what extent the company’s openness enables or hampers its mission. I then introduce into the analysis Foucault’s notion of subjugated knowledges, which allowed me to problematize the extent to which OpenAI’s innovation process is open. In the case of OpenAI, these theories are closely interlinked as they allow us to make sense in different but complementing ways of an observed phenomenon in the case of OpenAI. The analyzed documents are online news articles, OpenAI’s mission statement, its first blog post in which the background for founding the company is described and summaries of the company’s 39 research papers. The latter were used in a content analysis which was guided by Kassarjian’s (1977) framework and served to identify how many of the articles fall into three categories crated for the purpose of this analysis: “AI safety”, “the solicitation of public opinion on AI development“ or “efforts to identify global human values”. The first category is relevant given the company’s goal to “enact a safe path” to AI. 
The second and third are relevant because they allow me to assess the extent to which the company make use of purposive inflows of knowledge from the broader public and efforts made by the company to identify what human values are. This allowed me to assess the extent to which OpenAI’s innovation process and research activities helps accomplish its mission. I drew from Kassarjian’s (1977) framework because it allows for the integration of existing categories into his framework; thus it allowed me to integrate the three content categories mentioned above.
  • 6. 6 In order to ensure reliability and validity of the data, only data from authoritative news outlets and OpenAI’s own website was used. Dochartaigh’s (2002) recommendations for assessment of the authority of documents available via the internet was used as a guide for the selection authoritative sources. Using a deductive approach has certain strengths, including linking the research into an existing body of knowledge and provide an initial analytical framework (Saunders, Lewis & Thornhill, 2009). In using a deductive approach, there is a risk of introducing a premature closure in the investigated issues (ibid.) My awareness of this issue guided my analysis so as to avoid it. The pros of the chosen methodology is that secondary data sources are likely to be of higher quality than what could be collected by oneself (Stewart & Kamins 1993). Also, the data collection method offered me access to insights into an organization that would otherwise be off-limits. In addition, the data I used is permanent and publicly available allowing others to access it easily thereby opening up my conclusions to public scrutiny (Saunders, Lewis & Thornhill, 2009). The limitations of the methodology is that other insights would likely have occurred if other methods for data collection such as interviews had been used. The use of secondary data sources also means that control over data quality is lost (Saunders, Lewis & Thornhill, 2009). The case With more than 1 billion US dollars in funding (Brockman & Sutskever, 2015) and backing from prominent figures in the technology and research sectors, OpenAI is a resourceful actor which can potentially yield great progress in AI development. Given the potential of OpenAI’s research activities, it is important to assess how the openness of the company’s research can affect its ability to achieve its mission. 
Consequently of the research question, the most relevant aspects of OpenAI to look into is the background for founding the company, the extent to which the company’s innovation processes are open, the extent to which the company allows for knowledge to flow in and out of the company and the nature of the company’s research activities. According to the company’s “Launch blog post” (Brockman & Sutskever, 2015), the unpredictability of AI development, and the profound impacts it can have on humanity, creates a need for “a leading research institution which can prioritize a good
  • 7. 7 outcome for all over its own self-interest” and the founders are hoping OpenAI will become that institution (ibid, paragraph 7). With this in mind, the company strives “to build value for everyone”, but according to one of the co-founders, the exact goal of the company is “a little vague” (Friend, 2016). The company encourages its researchers to publish all their work and any patented technology will be publicly available, but one of the co-chairs has also stated publicly that the company will not release all its source code (ibid). OpenAI will “collaborate with others across many institutions” and expect to work with other companies, and a number of its research papers have been authored in collaboration with other actors external to the organization. Here, I particularly notice that OpenAI does not mention collaborations or consultations with the broader public. Indeed, the company is “planning a way to allow wide swaths of the world to elect representatives to a new governance board” (Friend, 2016, ), but judging from a lack of empirical evidence of the contrary, the governance board has yet to materialize. As mentioned, I analyzed the 39 research papers published by OpenAI in order to determine the number of papers related to AI safety, the solicitation of public opinion on AI development or efforts to identify global human values. The results of this analysis are seen below: Table 1: Number of published articles related to AI safety, solicitation of public opinion and identifying human values AI safety Solicitation of public opinion Identifying global human values Number of articles 3 0 0 Percentage of total number of published articles 7.69 % 0 % 0 %
Analysis

The following analysis shows how an innovation process can be dismantled and reassembled using various innovation-related theories. In particular, it shows how an empirical observation, i.e. the lack of public consultation and of a purposive inflow of knowledge, can be analyzed using disparate theories, and how these analyses can yield different but complementary understandings of the observed phenomenon and its implications. In this section, I apply the theoretical lenses of open innovation and subjugated knowledges to the case of OpenAI. To reiterate, open innovation is "the use of purposive inflows and outflows of knowledge to accelerate internal innovation and expand the markets for external use of innovation" (Chesbrough, 2006, p. 1). At first sight, it seems natural to classify OpenAI as an instance of open innovation, but upon closer inspection, this might not be the case. As is clear from the case description above, OpenAI definitely has a purposive outflow of knowledge, as exemplified by the encouragement of researchers to publish their findings and the release of source code. The company also has some inflow of knowledge, exemplified by active collaboration with other actors within the research and industry domains. However, OpenAI does not actively solicit public opinion on issues related to the development of AI. Although the company plans to involve the broader society, these plans have yet to materialize. It therefore seems fair to conclude that, despite the apparent openness of the company's innovation process, the process lacks inflow of knowledge to some extent, and it is therefore not open enough to be characterized as open innovation. This raises a theoretical question: Is the applied definition of open innovation too rigid? After all, OpenAI's innovation process is quite open.
If we fail to acknowledge OpenAI as an instance of open innovation, our study of open innovation cases, and hence our understanding and knowledge of open innovation, will be limited. The failure of the applied definition to encompass OpenAI is therefore not trivial. One could argue that, just as openness in software development is not a binary variable, neither should open innovation be. I therefore introduce a distinction between directed and undirected open innovation. Acknowledging the importance of networks in innovation, the terms are inspired by Newman's (2003) characterisation of network edges as being either directed, meaning that information runs in only one direction, or undirected, meaning that information can flow in both directions. It must be noted that directed and undirected open innovation should be considered two ends of a continuum rather than a binary variable. Using this typology, OpenAI is mostly an instance of directed open innovation due to the
large extent to which the company makes use of a purposive outflow of knowledge and the somewhat limited extent to which an inflow of knowledge is used. Assessing the impact of OpenAI's openness on its ability to accomplish its mission is complex. In the short term, the outflow of knowledge from OpenAI will likely yield positive outcomes, but this may not be the case in the long term. The desirability of the long-term consequences of openness depends on whether the objective is to benefit current or future generations. Openness about safety measures and goals is likely to be positive on both counts. However, other forms of openness, for instance regarding source code, science and possibly capability, could increase competition around the time of the introduction of advanced AI. This could increase the probability that "winning the AI race" is incompatible with applying safety measures which slow down the development process or impose constraints on the performance of the AI (Bostrom, 2017). As such, the very open nature of OpenAI's knowledge outflow may be problematic in the longer term. In both the short and the long term, an issue also arises from the lack of a purposive inflow of knowledge, particularly in the form of public consultation. This is because open search in open innovation processes is correlated with better innovative performance. Not only would a purposive inflow of knowledge in the form of public consultation make it more likely that OpenAI achieves its goal of benefitting humanity, it would also help the company achieve this goal more quickly, in the sense that its innovative performance would increase. A practical recommendation arising from this analysis is therefore that OpenAI should make purposive use of inflows of knowledge from the public in the development of AI.
On the subjugation of knowledge and the nature of OpenAI's research activities

The apparent failure of OpenAI to have a purposive inflow of knowledge from all societal actors and to actively solicit public opinion on its innovation efforts can also be understood from the perspective of Foucault's subjugated knowledges. OpenAI clearly attempts to justify its research by reference to the benefits it will have for humanity. However, applying Foucault's (1980) perspective on subjugated knowledge to the analysis of OpenAI allows us to understand that the position assumed by OpenAI subjugates the knowledge of the public. As such, OpenAI may fail to produce societally beneficial research
because the company fails to consult the general public, and instead bases its research on what its employees deem to be the safe path to AGI. This places OpenAI among the centralized political, economic and institutional regimes, and like Foucault, one could be concerned with how these regimes exercise power as their knowledge is practiced at the local level. Applying this Foucauldian perspective to the innovation case of OpenAI, the fact that knowledge produced by OpenAI is privileged is not only problematic because it subjugates other knowledges. The real problem may arise in the way this knowledge is practiced at the local level, i.e. how the knowledge affects the lives of those whose knowledge is subjugated. One could imagine a situation in which knowledge produced by OpenAI aids the development of AI which ends up being used in ways that further subjugate knowledge, or which is not aligned with the worldviews of actors at the local level. If OpenAI achieves its goal of becoming the dominant research institution within the field of AI, it is of particular importance that public opinion is solicited and that the subjugation of knowledge is prevented in the process. Returning to Hartman's (2000) example of a powerful unitary body of knowledge, the Diagnostic and Statistical Manual of Mental Disorders, an analogy to AI development can be made. Instead of a human doctor applying the manual, it could be applied by an AGI and practiced at the most local level, namely in the relationship between a social worker and a client. If the worker's thinking, the relationship and the client's self-definition are affected by the application of the manual by a social worker, these will all be radically changed once an AGI enters the relationship. The social worker might not be needed, the relationship will be between a human and a machine, and the client's self-definition will be affected by the functioning of a machine.
For this reason, it is highly important that knowledges are not subjugated in the discourse on the desirable development of AI. If knowledges remain subjugated in OpenAI's innovation processes, it seems unlikely that the company will accomplish its mission of creating AI for the benefit of humanity, because if you do not know what humanity thinks of AI, it will be impossible to build something that humanity approves of. OpenAI's apparent subjugation of knowledge plausibly also means that the company will be unable to sufficiently tackle the value alignment problem. As mentioned, the value alignment problem is the challenge inherent in building intelligence which is provably aligned with human values. Again, with no effort to identify human values, the company will arguably not be able to design AI which aligns with human values. Thus, the lack of inflow of knowledge, and particularly the subjugation of knowledge,
will reduce OpenAI's ability to accomplish its mission. As stated in the introduction, the value alignment problem must be addressed even for relatively unintelligent AI systems, so from this point of view, the fact that OpenAI is not engaging in the identification of human values or the resolution of the value alignment problem through its research makes matters worse. It could also be argued that for a company whose goal is "enacting the safe path to AGI", more than 7.69 % of published articles should be about AI safety.

Discussion

As stated in the analysis, OpenAI's lack of inflow of knowledge from the public could hamper its mission. In the following, I discuss various counterarguments and the challenges OpenAI will face if it implements its plan of including the public in its innovation processes.

OpenAI's plan to include the public

One could argue that OpenAI's plans to include the public should to some extent absolve the company of the criticism presented above. Accepting such an argument assumes that a company will do what it says it will do, and such an argument can be countered with reference to organizational hypocrisy (Brunsson, 2003). The mere fact that the company "plans" to include the broader public may be no more than an instance of organizational hypocrisy: an act of communication undertaken by a company as a means of postponing the implementation of a decision taken to satisfy certain stakeholders.

Critique of deliberative innovation processes

Critics of deliberative democracy assert that collective crafting is constituted through acts of power such as control and exclusion (Mouffe, 1999). This critique applies not only to deliberative democracy at the state level, but also to the application of deliberative processes in innovation (Van Oudheusden, 2014).
Thus, we cannot simply consider a deliberative innovation process an inclusive "weighing of interests" (ibid., 73), as any attempt by OpenAI to include subjugated knowledges in its innovation processes would be subject to battles over power and the right to be heard. This theoretical disposition points to the practical difficulties OpenAI would have in facilitating broad inclusion of subjugated knowledges in its
innovation processes. Unfortunately, empirical evidence of how deliberation is accomplished is sparse (ibid.). Thus, future research should address questions pertaining to the distribution of power, such as "Which actors are relevant to include and which are not?" Another challenge for OpenAI would be to make the outcomes of deliberative processes count in policy and science arenas (Van Oudheusden, 2014). OpenAI is indeed an important actor in the field of AI development, but it is not the only actor, and it is therefore important for the company to learn to wield political influence and strategies (Wesselink & Hoppe, 2011).

The desirability of democratic inclusion

Through the theoretical lenses applied in the analysis, OpenAI's lack of inflow of knowledge and the company's apparent subjugation of knowledge are criticized on the grounds that they will hamper its ability to accomplish its mission. This critique also implicitly assumes that expertise is a negotiated attribute (Nahuis & Van Lente, 2008), meaning that expertise is not considered the prerogative of scientists or other formally recognized experts, but of publics more broadly. It is generally recognized that the inclusion of the public in deliberation often leads to tensions between formally recognized experts and laypersons over what constitutes evidence and who is entitled to speak, why, when, and on behalf of whom or what (Van Oudheusden, 2014). This points to a challenge that OpenAI would have to learn to manage if it were to fulfill its plan of establishing a governance board to engage in deliberation on the development of AI. This challenge also ties back to the notions of power, control and exclusion in deliberative innovation processes mentioned above. Due to its openness, non-profit status and its mission of doing research free of economic obligations, OpenAI can be viewed as a rebellion against market-based innovation, much like innovation commons (Allen & Potts, 2016; Brian, 2015).
In addition, OpenAI's plan of allowing "wide swaths of the world to elect representatives to a new governance board" seems akin to the concept of democracy. OpenAI's innovation process and its plan to include the broader public in its endeavors can therefore be criticized from the perspective of Hayek (1960), a notable proponent of the notion that democracy is less important than the preservation of the free market, because the free market will create or sustain the liberty of the individual, whereas democracy tends to diminish it. He argues that
government should only be guided by majority opinion if that opinion is independent of government, but that in many instances it is not. Majority decisions, he posits, show us what people want at the moment, but not what would be in their interest if they were better informed, and unless their opinion can be changed by persuasion, they are of no value. From this perspective, OpenAI can be criticized as opposing the free market and thus potentially limiting the liberty of the individual, and Hayek's critique of democracy points to the necessity of a well-informed discussion when including the broader public in deliberative innovation processes. The extent to which Hayek's criticism of democracy can be applied to the case of OpenAI provides an interesting avenue for further inquiry. Regarding OpenAI's apparent rebellion against market forces in research, I also welcome a broad discussion of the extent to which OpenAI is in fact solving the right problem. As proposed by Allworth (2015), attention should be directed at solving the problem that market interests hinder the development of technology which benefits humanity.

Conclusion

This paper set out to investigate how the openness of OpenAI's innovation activities can affect the company's ability to achieve its mission, which is to "advance digital intelligence in the way that is most likely to benefit humanity". An important empirical observation was made in the course of analyzing OpenAI, namely that the company does not include the broader public in its innovation processes. In addition, a mere 7.69 % of the company's research papers concern AI safety, which seems low given the company's mission of enacting the safe path to AGI, and none of its research activities strive to identify global human values or solicit the public's opinion. This constitutes a lack of purposive inflow of knowledge, and OpenAI's innovation processes can therefore not be deemed open innovation.
Thus, the company's innovative performance will likely be hampered. To allow for a more flexible perspective on what constitutes open innovation, I propose using the term directed open innovation to describe instances of open innovation in which there is only either inflow or outflow of knowledge, and the term undirected open innovation to describe cases with both inflow and outflow of knowledge. This distinction will enable the investigation of a broader range of companies, such as OpenAI, under the open innovation framework, thus improving our understanding of what openness in innovation is. I therefore consider this to be one of the main contributions of this paper. Using this distinction, which is
a continuum rather than a binary variable, OpenAI is mostly an instance of directed open innovation. The lack of public consultation also constitutes a subjugation of knowledges, which allows us to understand that knowledge produced by OpenAI is unitary and privileged. Therefore, when OpenAI's knowledge is practiced at the local level, problems may arise. One could imagine a situation in which knowledge produced by OpenAI aids the development of AI which ends up being used in ways which further subjugate knowledge or which are not aligned with the worldviews of actors at the local level. Due to OpenAI's subjugation of knowledge and the lack of inflow of knowledge into the organization, the AI developed by OpenAI may be misaligned with human values. Thus, the company will fail to tackle an important issue in AI development. In sum, from the perspectives of the applied theories, OpenAI's failure to include the public in its innovation processes reduces the company's ability to achieve its goal, because it subjugates knowledges, thus failing to identify human values and align with them, and because the lack of a purposive inflow of knowledge in the form of soliciting public opinion could hamper its innovative performance.
References

Allen, D.W.E., & Potts, J. (2016). How innovation commons contribute to discovering and developing new technologies. International Journal of the Commons, 10(2), 1035–1054. DOI: http://doi.org/10.18352/ijc.644

Allworth, J. (2015). Is OpenAI Solving the Wrong Problem? Retrieved November 23, 2017, from https://hbr.org/2015/12/is-openai-solving-the-wrong-problem

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety. Retrieved from http://arxiv.org/abs/1606.06565

Bostrom, N. (2017). Strategic Implications of Openness in AI Development. Global Policy. https://doi.org/10.1111/1758-5899.12403

Bostrom, N. (2006). How long before superintelligence? Linguistic and Philosophical Investigations, 5(1), 11–30. Retrieved from https://nickbostrom.com/superintelligence.html

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. OUP Oxford.

Brian, M. (2015). From capitalism to commons. Anarcho-Syndicalist Review, 64.5

Brockman, G., & Sutskever, I. (2015). Introducing OpenAI. Retrieved November 19, 2017, from https://blog.openai.com/introducing-openai/

Brown, J. S., & Duguid, P. (n.d.). The social life of information. Retrieved from https://books.google.be/books/about/The_Social_Life_of_Information.html?id=D-WjL_HRbNQC&redir_esc=y

Brunsson, N. (2003). Organized hypocrisy. In B. Czarniawska and G. Sevón (Eds.), Northern Lights, 201–222, CBS Press.

Chesbrough, H. (2012). Open Innovation - Where We've Been and Where We're Going. Research-Technology Management. https://doi.org/10.5437/08956308X5504085

David, J. E. (2017). Elon Musk issues a stark warning about A.I., calls it a bigger threat than North Korea. Retrieved November 6, 2017, from https://www.cnbc.com/2017/08/11/elon-musk-issues-a-stark-warning-about-a-i-calls-it-a-bigger-threat-than-north-korea.html
Dochartaigh, N.O. (2002). The Internet Research Handbook: A Practical Guide for Students and Researchers in the Social Sciences. London: Sage.

Friend, T. (2016). Sam Altman's Manifest Destiny. The New Yorker. Retrieved November 22, 2017, from https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny

Hartman, A. (2000). In Search of Subjugated Knowledge. Journal of Feminist Family Therapy, 11(4), 19–23. https://doi.org/10.1300/J086v11n04_03

Kasriel, S. (2017). Why Elon Musk is wrong about AI. Retrieved November 22, 2017, from http://fortune.com/2017/07/27/elon-musk-mark-zuckerberg-ai-debate-work/

Kassarjian, H. H. (1977). Content analysis in consumer research. Journal of Consumer Research, 4(1), 8–18.

Kondratiev, N. (1978). The Long Wave in Economic Life. Translated by W. Stolper. Lloyds Bank Review, 129, 41–60.

Markoff, J. (2015). Silicon Valley investors to bankroll artificial-intelligence center. Retrieved November 22, 2017, from https://www.seattletimes.com/business/technology/silicon-valley-investors-to-bankroll-artificial-intelligence-center/

Metz, C. (2016). Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free. Retrieved November 22, 2017, from https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/

Miller, E. F. (2010). Hayek's The Constitution of Liberty. London: The Institute of Economic Affairs. Retrieved from http://iea.org.uk/sites/default/files/publications/files/Hayek's Constitution of Liberty.pdf

Müller, V. C., & Bostrom, N. (2014). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. Fundamental Issues of Artificial Intelligence. Retrieved from https://nickbostrom.com/papers/survey.pdf

Mouffe, C. (1999). Deliberative Democracy or Agonistic Pluralism? Social Research, 66(3), 745–758.

Newman, M. E. J. (2003). The structure and function of complex networks. Retrieved from https://arxiv.org/pdf/cond-mat/0303516.pdf

OpenAI. (n.d.). About OpenAI. Retrieved November 6, 2017, from https://openai.com/about/
Pellizzoni, L. (2012). Strong Will in a Messy World. Ethics and the Government of Technoscience. Nanoethics, 6, 257–272. Retrieved from https://link-springer-com.esc-web.lib.cbs.dk:8443/content/pdf/10.1007%2Fs11569-012-0159-x.pdf

Price, E. (2017). Stephen Hawking Thinks AI Could Help Robots Take Over The World. Retrieved November 6, 2017, from http://fortune.com/2017/11/03/stephen-hawking-danger-ai/

Russell, S. (n.d.). Of Myths And Moonshine. Retrieved November 18, 2017, from https://www.edge.org/conversation/the-myth-of-ai#26015

Saunders, M., Lewis, P., & Thornhill, A. (2009). Research methods for business students (5th edn). Prentice Hall.

Samuel, A. L. (1959). Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development, 3(3), 209–229.

Schumpeter, J. A. (1939). Business Cycles. New York: McGraw Hill.

Steinhardt, J. (2015). Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems. Retrieved June 13, 2016, from https://jsteinhardt.wordpress.com/2015/06/24/long-term-and-short-term-challenges-to-ensuring-the-safety-ofai-systems/

Stewart, D.W., & Kamins, M.A. (1993). Secondary Research: Information Sources and Methods (2nd edn). Newbury Park, CA: Sage.

Sulleyman, A. (2017). Elon Musk: AI is a "fundamental existential risk for human civilisation" and creators must slow down. Retrieved November 22, 2017, from http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-human-civilisation-existential-risk-artificial-intelligence-creator-slow-down-tesla-a7845491.html

Swan, J., & Scarbrough, H. (2005). The politics of networked innovation. Human Relations, 58(7), 913–943.

The New York Times. (2017). Artificial Intelligence - The New York Times. Retrieved November 22, 2017, from https://www.nytimes.com/topic/subject/artificial-intelligence

Tufekci, Z. (2014). Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls. In Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media (pp. 505–514). Retrieved from https://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/viewFile/8062/8151
Van Oudheusden, M. (2014). Where are the politics in responsible innovation? European governance, technology assessments, and beyond. Journal of Responsible Innovation, 1(1), 67–86.

Wesselink, A., & Hoppe, R. (2011). If Post-Normal Science Is the Solution, What Is the Problem?

White, M., & Epston, D. (1990). Narrative means to therapeutic ends. Norton.