The Issue of Dual-Use
Should engineers design tech that can be used for evil as well as for good?
James Michie
Ethics Essay – Rough Draft
ENGR 482-913
November 14, 2014
TO: Austin Styer
Introduction
The focus of this paper is the so-called dual-use dilemma, an ethical issue that engineers
and those in the scientific community often face. We will first discuss what is meant by
dual-use and then give a few cases that exemplify the problem. Next, we will look at this
problem through the lenses of different ethical theories, starting with Aristotle's virtue ethics,
then moving to utilitarian ethics, and ending with Kantian ethics. Lastly, I will outline what I
think is the best way to handle dual-use issues based on what we glean from these different
ethical theories.
Dual-Use: Definition and Cases
Dual-use, at least in the context of this paper, refers to the potential for technology or
research to be used for both good and bad purposes [4]. The dual-use dilemma, then, is an ethical
dilemma because it is concerned with “promoting good in the context of its potential for also
causing harm” [3]. The dual-use issue is one that concerns engineers and researchers, as they are
the ones creating or discovering the technology in question. However, governments and other
powerful actors should also be concerned about this dilemma, as they are usually the ones
pushing or hindering the development of technology. In the dual-use dilemma, the fear is
usually (though perhaps not always) that some outside, malevolent entity will get hold of the
technology rather than that the developers themselves will create it with evil intentions [3].
Cases of dual-use can be found in many different types of technology. For instance, one
could make the case that computer algorithms suffer from the dual-use dilemma. They could be
used to track down information or connect the dots between phenomena that would otherwise
go unnoticed. However, computer algorithms could also be (and were) used by the NSA and other
governments to spy on the public [1] (note that I am not trying to say whether the NSA's actions
were right or wrong, as that is a matter of much debate, but rather that the algorithms could be
used in a way their creators did not intend and may not have approved of).
Let's look at another example. Say you go to work for a company that is trying to
improve rotor blade designs in helicopters. These improvements could be used for civilian
purposes but could also be used for military ones [2] (for example, making the Apache
helicopter an even better weapon of war). Should an engineer who personally worked on these
rotors be held responsible for the consequences of the military use of this technology as well
as the civil use (i.e., should they be held responsible for both the good and the potentially bad uses)?
What about technology that can have extreme negative consequences? Think about the
field of biological sciences, for instance. Recently, a vaccine-resistant strain of mousepox was
genetically engineered (created to “combat the periodic plagues of mice in Australia” [3]). Some
scientists synthesized a live polio virus from scratch. Others remade the 1918 Spanish Flu virus
[4]. Certainly, these all have beneficial sides to them. Mousepox kills the recurring plagues of
mice. Being able to synthesize polio makes the virus easier to study and helps researchers better
understand the disease (kind of like reverse-engineering, but for biologists). Reconstructing the
1918 Spanish Flu virus likely had similar purposes. Nevertheless, these things, in the wrong
hands, could cause a pandemic. The mousepox work, for example, could lead to the creation “of a
highly virulent strain of smallpox resistant to available vaccines” [3]. Is it right to make
something that can do good if it can do just as much (if not more) bad? Let's look at this from the
viewpoint of a few different ethical theories, starting with Aristotle's virtue ethics.
Analysis: Virtue Ethics
Aristotle in ancient Greece started the whole discussion on virtue ethics. He defined three
main terms that are critical to understanding the basics of virtue ethics: happiness (eudaimonia),
virtue (arete), and an end or final goal (telos). First, he used the term happiness to mean
flourishing, thriving, or good spirit. In a way, Aristotle would say that anything that was growing
or prospering was experiencing happiness. Second, Aristotle defined virtue as anything that was
exceptional when compared to things similar to itself, such as an Olympic sprinter being virtuous
because he or she shattered the world record to win a gold medal. Finally, a goal or end to
Aristotle meant the thing that all (or most) people seek above all else. For the vast majority of
people, this goal would be happiness. Linking all three terms together, Aristotle would say that
the goal of life is to be happy or to live “the good life” and that we achieve this happiness when
we realize our potential or become an “outstanding specimen” of humanity.
So how would Aristotle view the dual-use dilemma? He would probably look at two
separate communities: 1) the engineering or scientific community and 2) everyone else. For the
first group, one might make the point that the issue of dual-use is essentially moot. After all, for
these engineers and scientists (especially the scientists), flourishing means growing their
field or profession as much as possible. Therefore, whether their research or technology is used
for good or bad does not matter; what matters is how “virtuous” or outstanding the technology or
discovery is. This conclusion is a little underwhelming.
However, there is another way to look at this first group. For example, the engineering
and scientific professions cannot really flourish if everyone else begins to despise them (and take
away funding) because of their devilish creations. Obviously, then, these professionals should
have concern for how their technology affects society. They should be sure that society is better
off overall because of their inventions or discoveries so that their reputation and means of
practice are preserved. Any invention or discovery that would degrade the trust of the
non-technical community would therefore be non-virtuous and should not see the light of day (I
know that this whole argument sounds logical yet hollow morally, but we'll flesh things out a bit
more in the “My Recommendation” section).
What about the second group? How does the technology produced by the technical
community affect everyone else? What does it mean for non-technical people to flourish?
Aristotle would probably make the case that these people flourish when they are allowed to live
their lives unhindered, when they are allowed to seek areas where they can become “specimen of
their kind” without looming fears of dangerous tech falling into the wrong hands. If it did fall
into the wrong hands, then large groups of people could be targeted and harmed (which would
decidedly not be flourishing for them). This view would, in many ways, limit the scale that
engineering and science should be allowed to reach. Based on this conclusion, the atom bomb
would have been a non-virtuous creation because of its potential to wipe out cities. Of course,
one would have to decide how big of a scale is too big. There may also be times when even
highly dangerous inventions would still be virtuous because they allow humanity to flourish
more than if they did not exist. For example, some may say that the atom bomb was a good
invention because it paved the way for the many nuclear plants that now provide power the
world over.
Now that we've gotten a taste of Aristotle's view, let's see how utilitarians would tackle the
dual-use dilemma.
Analysis: Utilitarian Ethics
Utilitarians hold that an act is right or wrong based on its consequences. Any act that
maximizes pleasure or wellbeing is right. All the other acts that do not maximize pleasure or
wellbeing are (at least to some extent) wrong. It is important to note that the word consequences
in the first sentence of this paragraph refers to the repercussions of an act in a holistic sense.
Everyone that the act affects and how it affects them should be considered.
Utilitarians, similar to Aristotle, would look at how advancements in the science and
engineering fields affect not only those fields and their proponents but also the general nontechnical
community. Utilitarians would see jumps in research or technology as increasing the wellbeing
of the fields of science and engineering. They would also see it as increasing the pleasure of the
engineers and scientists that are the driving force behind these advancements.
However, for the general non-technical community, it becomes less one-sided. Whether
new technical developments increase this group's wellbeing depends on how they are used and
how well they lend themselves to different applications. Technology that has a high potential for
benefit when used for good yet a low potential for harm when misused would be
good in the eyes of utilitarians. Of course, the opposite is true as well. Another thing that
utilitarians would have to consider, though, is what new technology any piece of technology
could lead to. For instance, if new technology has only bad applications but is a stepping stone
toward a future invention that has overwhelmingly positive applications, then the development of the new
technology would most likely be seen as good. The final decision on whether to develop some
technology or research some field would depend, then, on its long-term benefits (including
future discoveries or inventions based on it and how these would be used) for the fields of
science and engineering and for the general non-technical community as a whole. Kant's ethical
theories, I think, have a much different way of deciding what is and is not virtuous in this case.
Analysis: Kantian Ethics
Kant viewed ethics differently than utilitarians and Aristotle (Aristotle's view seems
almost the opposite of Kant's). Kant believed that whether something was ethical or not
depended on two things: duty and will. He defined duty as some action or behavior that any
responsible, rational person would (and should) do in a certain situation. On the other hand, Kant
defined “will” as the driving force that leads people to follow their duty and, in the end,
determines how pure the act of following their duty was. The purest act that can be done, Kant
would say, was one in which the person's will was aligned with their duty. This alignment would
occur best when the person doing the act had no self-interest or potential for gain from the act.
Kant would not have divided people into groups like Aristotle and the utilitarians. Rather,
he would have put himself in the shoes of the scientist or engineer. What is this person's duty to
his or her company? What duty does this scientist have to his field? What duty does this engineer
have to her government? What duty does this biologist have to the public? These questions
would be Kant's focus. Whether the engineer or scientist wants to research or design something
or gains pleasure from doing so would be irrelevant to Kant (in fact, Kant would probably say
that the designer's will would be purest when he or she does not wish to design it).
Kant might look at the above questions as a hierarchy of duties. He would then say that
engineers should follow their foremost duty and then follow their second (or third or fourth) duty
as long as it does not directly conflict with their primary one. For example, Kant might arrange
an engineer's duties as follows: duty to the public (primary responsibility as stated by the NSPE
code), duty to government, duty to field, and lastly duty to company.
So, Kant would answer the dual-use dilemma by saying that an engineer should perform
his or her duty to the public (not just the people of that engineer's government but all) without
regard to their other duties if they conflict with this duty (this is starting to sound like the laws of
robotics, is it not?). However, if an engineer's duty to the public and the next lesser duty can both
be fulfilled, then that is a better or more virtuous path. Therefore, an engineer would be acting
most virtuously when he or she fulfills all these duties at once with a pure will. However, fulfilling
all of them at once would be rather difficult, as each of these duties tends to compete with the
others. Designing a weapon may fulfill an engineer's duty to the government, but it may not fulfill
their duty to the public (in fact it may detrimentally affect the public if used incorrectly). In the
same way, an engineer who fulfills their duty to their field may neglect their duty to their
company (for example, spending a lot of money on research that is important scientifically but
which is of no real benefit to the company). Therefore, to act virtuously as an engineer in dual-
use situations, one should first fulfill their duty to the public, then if possible their duty to their
government, then (as long as it does not conflict with fulfilling the previous two) their duty to
their field, and then lastly their duty to their company.
Before we wrap up, let me give you a quick recommendation (which you can entirely
ignore if you would like).
My Recommendation
To be honest, this section is more of my take on this issue. You, as the reader, can take it
or leave it. Often, scientists and engineers try to understand something or solve a problem that
they really should have left well enough alone. Even if there really is a problem, they could have
simpler and less dangerous solutions (honestly, there are better ways to kill mice than making a
vaccine-resistant pox virus that could be altered to target humans).
Often, the greatest dangers of dual-use come when the technical community does not
understand the full consequences of what they are doing. For instance, we (America) dropped
atom bombs on Hiroshima and Nagasaki. We knew well enough about the explosive radius. What
we did not know was that there was a much larger nuclear fallout radius. When someone
(whoever it was) created the internet, they probably did not think about what cyber warfare
would be like in the future. We often end up thinking we know more than we do. We often
(without realizing it sometimes, I think) believe that we are gods in some sense and that we can
solve all the problems of the universe if we are given enough time, resources, and iterations.
I am not trying to say that technology is evil. It has led to many amazing things that
have been wonderful for humanity. If there is one thing that we can draw from the dual-use
dilemma, it is this: each new piece of technology solves one problem and then introduces another
problem (even if that problem is just that the technology can be used for ill). We may be able to
say that technology that solves one problem and introduces a lesser one is good. However, what
(I think at least) we can definitely say is that technology is not the solution to the woes of
mankind (not the deep ones anyway). You will have to look higher for that.
Conclusion
Overall, the dual-use dilemma can be defined as the ability of discoveries and technology
to be used for evil purposes as well as for good purposes. Aristotle and utilitarians would tackle
this dilemma by dividing people into two groups: technical and nontechnical.
For Aristotle, the technical group would be virtuous (flourishing) basically whenever the
scope of their knowledge or field is growing. This growth would depend on how the
nontechnical group views the technical group as the nontechnical group provides the funding and
resources (and often future technical people) that the technical group needs to grow. What is
virtuous for the nontechnical group depends on how the discovery or technology affects them. If
something leads them to flourish overall (live safely, have the time and resources to pursue areas
in which they can become “specimen of their kind”), then it is virtuous or good. However, if it has the
opposite effect, it leads to decay rather than growth and is therefore non-virtuous.
For utilitarians, the conclusion on whether a piece of technology is good depends on how
it benefits the technical field itself (one discovery often leads to another in the future), how it
leads to increased pleasure for the members of that field (scientists are often very personally
invested in their research, and ecstatic when they make a discovery) and how it (and future
technology based on it) will affect the nontechnical community at large. If the sum of all these
considerations is positive, then the technology should be developed even if it has dual-uses. If
not, then the technology should be left well enough alone.
Kant would argue that the concern of scientists and engineers with regard to the dual-use
dilemma should be to properly fulfill their duty, first to the public, then to their government, then
to their field, and lastly to their company. Good technology or research, then, is of a type that
fulfills an engineer's duty to the public, then if possible their duty to their government, then their
duty to their field (so long as it does not conflict with the previous duties), and finally to fulfill
their duty to their company. Designing a piece of technology that fulfills all these duties would
then be the most virtuous act an engineer could do. However, this situation is often difficult to
achieve because these duties often compete with one another.
References
[1] A. El-Zein, "As engineers, we must consider the ethical implications of our
work," The Guardian, 5 December 2013. [Online]. Available:
http://www.theguardian.com/commentisfree/2013/dec/05/engineering-
moral-effects-technology-impact. [Accessed 13 November 2014].
[2] aerostudents, "Ethics Essay: What are the ethical implications of working for
an arms manufacturer as an engineer, which," aerostudents.com, [Online].
Available: http://aerostudents.com/files/ethics/EthicsExampleEssay1.pdf.
[Accessed 13 November 2014].
[3] S. Miller and M. Selgelid, "Ethical and Philosophical Consideration of the
Dual-Use Dilemma in the Biological Sciences," Sci Eng Ethics, vol. 13, pp. 523-580, 2007.
[4] M. Selgelid, "Chapter 1: Ethics Engagement of the Dual-Use Dilemma:
Progress and Potential," [Online]. Available:
http://press.anu.edu.au//education_ethics/pdf/ch01.pdf. [Accessed 13
November 2014].