A Code of Ethics for Technologists

Hans-Christoph Steiner

Applications of Interactive Technologies


In his essay “The Circle of Empathy”, Adam Brate contrasts the differing views of Bill Joy and Jaron Lanier on what technology will bring us in the not-so-distant future.  Drawing on the opinions of many of the foremost thinkers on technology, Brate takes the reader through the issues that concern Joy and Lanier, ending with a surprisingly optimistic view of the future of technology despite their stern warnings.

 

There is widespread disagreement on whether technology is benefiting us or will ultimately destroy us.  The “democratization of evil”, as Joy calls it, means that technology developed to aid humanity can also be used to further evil, and in fact makes it easier to do so.  There is broad agreement among scientists and technologists on this point.  As Brate writes, “[technologists] can never ignore the obvious and painful realization that technology, no matter how well intended, is a double-edged sword.”[1]

 

Joy's fear leads him to propose an outright ban on the development of certain technologies he deems too risky: GNR (genetic engineering, nanotechnology, and robotics).  Joy's grounds for a ban are solid, but in practice it will never work.  Human society has been trying to ban behaviors deemed evil for millennia, yet we still have crime, prostitution, and so on.  Therefore, as John Gilmore states, a ban would only drive the development of such technologies underground, where it could not be regulated or even followed by society at large.

 

Technology’s impact upon human society is rarely a factor in its development today.  First and foremost, the ability to make a profit is what guides the development of new technology.  This is tied to what people want, though not necessarily to what will make our society work better.  Rarely do we consider in depth the true repercussions of a given technology before releasing it upon the world.  The automobile is a good example.  The development of our cities has become totally centered on the car, an approach that entirely ignores the effects on the daily life of the city beyond transportation.  Expressways, for example, were built to let more cars pass through a neighborhood, without anyone realizing that the road would cut that neighborhood in half.  Worse, moving more cars through a neighborhood means that more cars can get to it, making traffic worse rather than better.

 

The other major driving factor in the development of technology is the military.  The military funds the development of much of the technology that later becomes widespread, such as the Internet.  But the military tends to develop technology to serve a short-term goal, defeating the enemy at hand, without paying much heed to the new technology's long-term impact.  Atomic fission is the perfect example.  During World War II, the push was to develop the atomic bomb before the Germans did, and then, after VE Day, to produce the superweapon that would avert the need for an invasion of Japan.  Little serious thought was given to the long-term effects of the atomic bomb by the government, the military, or even the scientists who were developing it.

 

Joy presents one idea that could go a long way toward slowing the development of 'evil' technology: a code of ethics.  The field of medicine has had its own code of ethics for thousands of years, the Hippocratic Oath, which has been quite effective in keeping the work of physicians in the realm of good rather than evil.  It is by no means perfect, but it is far better than nothing.  Applying a similarly binding oath to technologists and scientists of all kinds would likewise go a long way toward keeping the work of technologists in the realm of good.  If the possible uses of a technology were discussed before it was developed, its developers would be much more aware of the range of possible outcomes of releasing it.  That, in turn, would lead to more responsible development efforts.  A code of ethics provides a framework to foster such discussions.

 

There were a few Manhattan Project scientists who realized that, even though the A-bomb would aid the short-term goal of defeating the Nazis, in the long term it was wrong and would cause much greater harm.  Their concerns started a debate among the project members and ultimately caused those scientists to drop out of the project.  As the director of the project, Robert Oppenheimer thought that he was guided by ethics because he was working to eliminate the fascists.  But he was later haunted by what he had helped create and spent the rest of his life writing about applying ethics to science.  Oppenheimer came to believe in applying a code of ethics to scientific endeavors after he watched his creation trigger an arms race to develop weapons capable of destroying the entire world.  As he said in 1948, “the [Manhattan Project] physicists have known sin; and this is a knowledge they cannot lose.”

 

Norbert Wiener, writing about the future of human-machine interaction, is an example of a technologist who was concerned with the ethics of creating technology.  He was keenly interested in how the widespread introduction of computers would affect human society.  Most importantly, he realized that even technology developed with the best intentions can have disastrous consequences: “The mere fact that we have made the machine does not guarantee that we have the proper information to stop it.”[2]

 

So there exists a large body of support for applying ethics to science and technology, yet there has been little work on formalizing a code of ethics for technologists.  Joy's article is a step in this direction, but he was sidetracked by the idea of an outright ban.  Others, such as Lanier, argue that we are not capable of developing truly dangerous computer-based technology because current software is so buggy.  This is a naïve idea, since we have already proven that we can develop very destructive technologies such as the atomic bomb.

 

Many people also dismiss 'evil' technology as an age-old problem, saying that we have survived thus far and therefore no regulation is needed.  Brate points out that fear of technology is as old as the Old Testament, but that does not mean we can afford to ignore technology's problems.  What makes our time different is the extent to which our technology rules our lives.  Increasingly, we interact with the world through our creations rather than by natural means.  As technology becomes ever more ubiquitous, it affects us more and more.  A code of ethics needs to be established as an essential part of science and technology, to guide their development as they become part of our every institution and our every moment.

 

A few examples of a code of ethics guiding scientific development come from fiction.  In I, Robot, Isaac Asimov created his Three Laws of Robotics: “1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.”[3]  In Star Trek, Gene Roddenberry insisted that the protagonists be guided by the 'Prime Directive', a simple ethical code stating that they not interfere with the technological development of other cultures.
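
Asimov's laws form a strict precedence hierarchy: each law yields to the ones above it.  As a minimal sketch of that structure, consider the short Python snippet below; the situation fields and the decision logic are invented here purely for illustration, since Asimov stated the laws as prose, not as a formal algorithm.

    from dataclasses import dataclass

    @dataclass
    class Situation:
        action_harms_human: bool    # would acting injure a human? (First Law)
        inaction_harms_human: bool  # would not acting let a human come to harm? (First Law)
        ordered_by_human: bool      # was the action ordered by a human? (Second Law)
        destroys_robot: bool        # would acting destroy the robot? (Third Law)

    def must_act(s: Situation) -> bool:
        """Decide whether the robot should perform the action,
        checking the Three Laws strictly in order of precedence."""
        # First Law outranks everything: never harm a human,
        # and never allow harm through inaction.
        if s.action_harms_human:
            return False
        if s.inaction_harms_human:
            return True
        # Second Law: obey human orders (the First Law was already checked).
        if s.ordered_by_human:
            return True
        # Third Law: self-preservation, subordinate to the first two laws.
        return not s.destroys_robot

    # An order that would destroy the robot must still be obeyed,
    # because the Second Law outranks the Third.
    assert must_act(Situation(False, False, True, True))

Even this toy version makes the point of a precedence-ordered code: reorder the checks and the robot's behavior changes entirely.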

 

Joy presents a brief outline of his idea of a code of ethics by echoing Manhattan Project physicist Hans Bethe's call for all scientists to “cease and desist from work creating, developing, improving, and manufacturing nuclear weapons and other weapons of potential mass destruction.”  But it is not only in the development of GNR and weapons that this idea of a code of ethics is appearing.  Aaron Marcus, an interface designer and specialist in internationalization, said that when designing an application you need to ask, “Is it a gift in the English sense of the word, or Gift in the German sense of the word?”[4]  (Gift is the German word for poison.)

 

There are plenty of precedents for developing a code of ethics, from Oppenheimer's experience to Joy's ideas about the future.  We now need to develop that code and start figuring out how to apply it most effectively.  A few common threads run through these various ethical ideas.  First, scientists and technologists must consider the long-term effects and look beyond the problem at hand.  Second, though the vast majority of technology is developed with good intentions, it is imperative that possible evil uses be considered before development begins.  And third, an idea that runs through every code of ethics and every world religion: do not cultivate destruction.  If a concerted effort is made to consider how a new technology will impact society beyond solving the immediate problem, the often small changes needed to ameliorate future problems will become apparent.



[1] Brate, Adam.  Technomanifestos.  Texere Ltd., 2002: 324.

[2] Wiener, Norbert.  Cybernetics.  MIT Press, 1948.

[3] Asimov, Isaac.  I, Robot.  Doubleday, 1950.

[4] Marcus, Aaron.  Speech to NYU ITP students, 2002.