# Week 13
## Slide 1 - ETHICAL ASPECTS OF EMERGING AND CONVERGING TECHNOLOGIES

## Slide 2 - EMERGING AND CONVERGING TECHNOLOGIES

EMERGING AND CONVERGING TECHNOLOGIES • Chapter 12 examines ethical aspects of three key emerging/converging technologies: ➢ ambient intelligence (AMI), ➢ nanocomputing, ➢ autonomous machines (AMs). • This chapter also examines issues in the emerging field of machine ethics, and it describes a "dynamic" ethical framework for addressing challenges likely to arise from emerging technologies.
KNOW THESE TERMS
## Slide 3 - CONVERGING TECHNOLOGIES AND TECHNOLOGICAL CONVERGENCE

CONVERGING TECHNOLOGIES AND TECHNOLOGICAL CONVERGENCE • We must first consider what is meant by the concept of "technological convergence." • Howard Rheingold (1992) states that technological convergence is a phenomenon that occurs when unrelated technologies or technological paths intersect or "converge unexpectedly" to create an entirely new field.
## Slide 4 - CONVERGING TECHNOLOGIES AND PERVASIVE COMPUTING

CONVERGING TECHNOLOGIES AND PERVASIVE COMPUTING • Currently, cybertechnology is converging with non-cybertechnologies at an unprecedented pace. ➢ For example, cyber-specific technologies are converging with non-cybertechnologies, such as biotechnology and nanotechnology. ➢ This makes it difficult to identify ethical issues (e.g., privacy) because of the multiple fields involved, e.g., biology and computer technologies.
## Slide 5 - AMBIENT INTELLIGENCE (AMI)

AMBIENT INTELLIGENCE (AMI) • Ambient Intelligence (AmI) is typically defined as a technology that enables people to live and work in environments that respond to them in "intelligent ways" (Aarts and Marzano, 2003; Brey, 2005; and Weber et al., 2005). ➢ A mother and her child arrive home. As the car pulls into the driveway, the mother is immediately recognized by a surveillance camera that disables the alarm, unlocks the front door as she approaches it, and turns on the lights to a level of brightness that the home control system has learned she likes. • Is this a good thing?
## Slide 6 - AMI (CONTINUED)

AMI (CONTINUED) • This isn't new! • Nearly 20 years ago, an "aware home" was developed by the Georgia Institute of Technology! • AMI has benefited from, and has been made possible by, developments in the field of artificial intelligence. • AMI has also benefited from the convergence of three key technological components, which underlie it: 1) pervasive computing, 2) ubiquitous communication (NOT ubiquitous computing), 3) intelligent user interfaces (IUIs).
## Slide 7 - PERVASIVE COMPUTING

PERVASIVE COMPUTING • What, exactly, is pervasive computing? • The Centre for Pervasive Computing (www.pervasive.dk) defined it as a computing environment where information and communication technology are "everywhere, for everyone, at all times."
## Slide 8 - PERVASIVE COMPUTING (CONTINUED)

PERVASIVE COMPUTING (CONTINUED) • Pervasive computing is made possible because of the increasing ease with which circuits can be embedded into objects both small and large, including wearable, even disposable items. • For example, it now pervades the work sphere, cars, public transportation systems, the health sector, the market, and our homes (Bütschi et al., 2005). • For pervasive computing to operate at its full potential, continuous and ubiquitous communication between devices is also needed.
## Slide 9 - UBIQUITOUS COMMUNICATION

UBIQUITOUS COMMUNICATION • Ubiquitous communication aims at ensuring flexible and omnipresent communication between interlinked computer devices (Raisinghani et al., 2004) via technologies such as: ➢ wireless local area networks (W-LANs), ➢ wireless personal area networks (W-PANs), ➢ wireless body area networks (W-BANs), ➢ Radio Frequency Identification (RFID).
INTELLIGENT USER INTERFACES (IUIS) • Intelligent User Interfaces (or IUIs) have been made possible by developments in AI. • Brey (2005) notes that IUIs go beyond traditional interfaces such as a keyboard, mouse, and monitor.
IUIS (CONTINUED) • IUIs improve human interaction with technology by making it more intuitive and more efficient than was previously possible with traditional interfaces. • With IUIs, computers can "know" and sense far more about a person – including information about that person's situation, context, or environment – than was possible with traditional interfaces.
IUIS (CONTINUED) • With IUIs, AMI remains in the background and is virtually invisible to the user. • Brey notes that with IUIs, people can be: ➢ surrounded with hundreds of intelligent networked computers that are "aware of their presence, personality, and needs" ➢ but be unaware of the existence of these IUIs in their environments. • IUIs also enable profiling, which Brey describes as "the ability to personalize and automatically adapt to a particular user's behaviour patterns."
ETHICAL AND SOCIAL ISSUES AFFECTING AMI • We briefly examine three kinds of ethical/social issues affecting AMI: 1. freedom and autonomy; 2. technological dependency; 3. privacy, surveillance, and the "Panopticon."
AUTONOMY AND FREEDOM INVOLVING AMI • AMI's supporters suggest humans will gain more control over the environments with which they interact because technology will be more responsive to their needs. • However, Brey notes that "greater control" is presumed to be gained through a "delegation of control to machines." ➢ Gaining control by giving it away
AUTONOMY AND FREEDOM (CONTINUED) • Brey describes three ways in which AMI may make the human environment more controllable because it can: i. become more responsive to the voluntary actions, intentions, and needs of users; ii. supply humans with detailed and personal information about their environment; and iii. do what people want without them having to engage in any cognitive or physical effort.
AUTONOMY AND FREEDOM (CONTINUED) • AMI can also diminish the amount of control that humans have over their environments because it can: i. make incorrect inferences about the user, the user's actions, or the situation; ii. require corrective actions on the part of the user; and iii. represent the needs and interests of parties other than the user. • AMI could undermine human freedom and autonomy if humans become too dependent on machines for their judgements and decisions.
TECHNOLOGICAL DEPENDENCY • Consider how much we have already come to depend on cybertechnology in conducting so many activities in our day-to-day lives. • In the future, will humans come to depend on the kind of smart objects and smart environments (made possible by AMI technology) in ways that exceed our current dependency on cybertechnology?
TECHNOLOGICAL DEPENDENCY (CONTINUED) • On the one hand, IUIs could relieve us of having to worry about performing many of our routine day-to-day tasks, which can be considered tedious and boring. • On the other hand, however, IUIs could also eliminate much of the cognitive effort that has, in the past, enabled us to be fulfilled and to flourish as humans.
TECHNOLOGICAL DEPENDENCY (CONTINUED) • What would happen to us if we were to lose some of our cognitive capacities because of an increased dependency on cybertechnology?
A (PRE)CAUTIONARY TALE In his short story The Machine Stops (1909), E.M. Forster portrays a futuristic society that, initially, seems like an ideal or utopian world. In fact, his story anticipated many yet - to - be developed technologies such as television and videoconferencing. But it also illustrates how humans have transferred control of much of their lives to a global Machine, which is capable of satisfying their physical and spiritual needs and desires. In surrendering so much control to the Machine, however, people begin to lose touch with the natural world. After a while, defects appear in the Machine, and eventually it breaks down. Unfortunately, no one remembers how to repair it. In Forster's tale, some of the characters begin to realize just how dependent they have become on this machine.
PRIVACY, SURVEILLANCE, AND THE PANOPTICON • Langheinrich (2001) argues that with respect to privacy and surveillance, there are four features that differentiate AMI from other (mostly earlier) kinds of computing applications: i. ubiquity, ii. invisibility, iii. sensing, iv. memory amplification.
PRIVACY, SURVEILLANCE, AND THE PANOPTICON (CONTINUED) • Because computing devices are ubiquitous or omnipresent in AMI environments, privacy threats are more pervasive in scope than with earlier technologies. • He also notes that because computers are virtually invisible in AMI environments, it is unlikely that users will realize that computing devices are present and are being used to collect and disseminate their personal data. ➢ Recorded by "tireless electronic devices, from the kitchens and living room of our homes to our weekend trips in cars." ➢ No aspect of our lives will be secluded from digitization.
PRIVACY, SURVEILLANCE, AND THE PANOPTICON (CONTINUED) • Langheinrich also believes that AmI poses a more significant threat to privacy than earlier computing technologies because: a) sensing devices associated with IUIs may become so sophisticated that they will be able to sense (private) human emotions like fear, stress, and excitement; b) this technology has the potential to create a memory or "life-log" – i.e., a complete record of someone's past.
BIG BROTHER FEAR
SURVEILLANCE AND THE PANOPTICON • Čas (2004) notes that in AMI environments, no one can be sure that he or she is not being observed. • Because of AMI environments, it may be prudent for a person to assume that information about his or her presence (at any location and at any time) is being recorded.
SURVEILLANCE AND THE PANOPTICON (CONTINUED) • Čas believes that it is realistic to assume that any activity (or inactivity) about us that is being monitored in an AMI environment may be used in any context in the future. • So, people in AMI environments are subject to a virtual "panopticon."
PANOPTICON/INSPECTION HOUSE Bentham (an 18th-century social reformer) conceived the idea for managing a prison environment based on the notion of the panopticon. Imagine a prison comprised of glass cells, all arranged in a circle, where prisoners could be observed at any moment by a prison guard who sits at a rotating desk facing the prisoners' cells. Further imagine that the inmates cannot see anyone or anything outside their cells, even though they can be observed by the prison guard at any time. Although a prisoner cannot be certain that he is being observed at any given moment, it would be prudent for him to assume that he is being observed at every moment. The prisoner's realization that he could be observed continuously, and his fear about what could happen to him if he is observed doing something that is not permitted in the cell, would likely be sufficient to control the prisoner's behavior.

NANOTECHNOLOGY AND NANOCOMPUTING • A number of ethical and social controversies arise at the intersection of two distinct technologies that are now also converging – cybertechnology and nanotechnology. • Rosalind Berne (2015) defines nanotechnology as "the study, design, and manipulation of natural phenomena, artificial phenomena, and technological phenomena at the nanometer level." • K. Eric Drexler describes the field as a branch of engineering dedicated to the development of extremely small electronic circuits and mechanical devices built at the molecular level of matter.
NANOTECHNOLOGY AND NANOCOMPUTING • Drexler (1991) predicted that developments in nanotechnology will result in computers at the nano-scale, no bigger in size than bacteria, called nanocomputers. • Nanocomputers can be designed using various types of architectures. • An electronic nanocomputer would operate in a manner similar to present-day computers, differing primarily in terms of size and scale.
NANOTECHNOLOGY AND NANOCOMPUTERS (CONTINUED) • To appreciate the scale of future nanocomputers, imagine a mechanical or electronic device whose dimensions are measured in nanometers (billionths of a meter, or units of 10⁻⁹ meter). • Merkle (2001) predicts that nano-scale computers will be able to deliver a billion billion instructions per second – i.e., a billion times faster than today's desktop computers. • Some predict that future nanocomputers will also be built from biological material such as DNA.
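The scale and speed figures above lend themselves to a quick back-of-the-envelope check. The Python sketch below (not from the textbook) takes Merkle's "billion billion instructions per second" at face value and assumes roughly a billion instructions per second for a desktop of that era; both numbers are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope check of Merkle's speed comparison (illustrative only).
nanocomputer_ips = 1e18   # "a billion billion" instructions per second
desktop_ips = 1e9         # assumed ~1 billion instructions/second for a contemporary desktop

speedup = nanocomputer_ips / desktop_ips
print(f"Predicted speedup: {speedup:.0e}x")   # -> 1e+09x, i.e., a billion times faster

# Scale reminder: one nanometer is 10**-9 meter (a billionth of a meter).
nanometer_in_meters = 1e-9
print(f"1 nm = {nanometer_in_meters} m")
```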
NANOETHICS: IDENTIFYING AND ANALYZING ETHICAL ISSUES IN NANOTECHNOLOGY • Moor and Weckert (2004) believe that assessing ethical issues that arise at the nano-scale is important because of the kinds of "policy vacuums" that arise. • They do not argue that a separate field of applied ethics called nanoethics is necessary. • But they make a strong case for why an analysis of ethical issues at the nano-level is now critical.
NANOETHICS (CONTINUED) • Moor and Weckert identify three distinct kinds of ethical concerns at the nano-level that warrant analysis: 1. privacy and control; 2. longevity; 3. runaway nanobots.
ETHICAL ASPECTS OF NANOTECHNOLOGY: PRIVACY ISSUES • We will be able to construct nano-scale information-gathering systems that can also track people. • It will become extremely easy to put a nano-scale transmitter in a room, or onto someone's clothing. • Individuals may have no idea that these devices are present or that they are being monitored and tracked by them. • Moor and Weckert believe that invasions of privacy and unjustified control over others will most likely increase.
ETHICAL ASPECTS OF NANOTECHNOLOGY: LONGEVITY ISSUES • Moor and Weckert note that while many see longevity as a good thing, there could also be negative consequences. • For example, they point out that we could have a population problem if the life expectancy of individuals were to change dramatically. • Would the already old stay old longer, and would the young remain young longer?
ETHICAL ASPECTS OF NANOTECHNOLOGY: RUNAWAY NANOBOTS • When nanobots work to our benefit, they build what we desire. • But when nanobots work incorrectly, they can build what we don't want. • Some critics worry that the (unintended) replication of these bots could get out of hand.
SHOULD RESEARCH/DEVELOPMENT IN NANOCOMPUTING BE ALLOWED TO CONTINUE? • Joseph Weizenbaum (1976) argued that computer science research that can have "irreversible and not entirely foreseeable side effects" should not be undertaken. • Bill Joy (2000) has argued that because developments in nanocomputing are threatening to make us an "endangered species," the only realistic alternative is to limit its development. • If Joy and others are correct about the dangers of nanotechnology, we must seriously consider whether research in this area should be limited.
SHOULD NANOTECHNOLOGY RESEARCH/DEVELOPMENT BE PROHIBITED? • Ralph Merkle (2001) would disagree with Joy and others on limiting nano-level research. • Merkle argues that if research in nanotechnology is prohibited, or even restricted, it will be done "underground." • If this happens, nano research would not be regulated by governments and professional agencies concerned with social responsibility.
SHOULD WE PRESUME IN FAVOR OF CONTINUED NANO RESEARCH? • Weckert (2006) argues that potential disadvantages that could result from research in a particular field are not in themselves sufficient grounds for halting research. ➢ He suggests that there should be a presumption in favor of freedom in research. • But Weckert also argues that it should be permissible to restrict or even forbid research where it can be clearly shown that harm is more likely than not to result from that research.
ASSESSING NANOTECHNOLOGY RISKS: APPLYING THE PRECAUTIONARY PRINCIPLE • Questions about how best to proceed in scientific research when there are concerns about harm to the public good are often examined via the Precautionary Principle. • Weckert and Moor (2004) interpret the precautionary principle to mean the following: ➢ If some action has a possibility of causing harm, then that action should not be undertaken or some measure should be put in its place to minimize or eliminate the potential harms.
NANOTECHNOLOGY, RISK, AND THE PRECAUTIONARY PRINCIPLE (CONTINUED) • Weckert and Moor believe that when the precautionary principle is applied to questions about nanotechnology research and development, it needs to be analyzed in terms of three different "categories of harm": 1) direct harm, 2) harm by misuse, 3) harm by mistake or accident. • The kinds of risks for each differ significantly.
NANOTECHNOLOGY, RISK, AND THE PRECAUTIONARY PRINCIPLE (CONTINUED) • With respect to direct harm, Weckert and Moor analyze a scenario in which the use of nanoparticles in products could be damaging to the health of some people. • They also note that the kinds of risks involved in direct harm are very different from those arising in the example they use to illustrate harm by misuse – i.e., developments in nano-electronics that could endanger personal privacy.
NANOTECHNOLOGY, RISK, AND THE PRECAUTIONARY PRINCIPLE (CONTINUED) • Regarding harm by mistake or accident, Weckert and Moor describe a scenario in which nanotechnology could lead to the development of self-replicating, and thus "runaway," nanobots. (This kind of harm will occur only if mistakes are made or accidents occur). • You can't legislate against mistakes or accidents! • Weckert and Moor argue that when assessing the risks of nanotechnology via the precautionary principle, we need to look not only at potential harms per se, but also at the relationship between "the initial action and the potential harm."
NANOTECHNOLOGY, RISK, AND THE PRECAUTIONARY PRINCIPLE (CONTINUED) ➢ In their example involving direct harm, the relationship is fairly clear and straightforward: we simply need to know more about the scientific evidence for nanoparticles causing harm. ➢ In their case involving potential misuse of nanotechnology, e.g., in endangering personal privacy, the relationship is less clear. ➢ In the case of the third kind of harm, Weckert and Moor claim that we need evidence regarding the "propensity of humans to make mistakes or the propensity of accidents to happen."
NANOTECHNOLOGY, RISK, AND THE PRECAUTIONARY PRINCIPLE (CONTINUED) • Weckert offers the following solution or strategy: If a prima facie case can be made that some research will likely cause harm…then the burden of proof should be on those who want the research carried out to show that it is safe.
NANOTECHNOLOGY, RISK, AND THE PRECAUTIONARY PRINCIPLE (CONTINUED) • He also believes that there should be: …a presumption in favour of freedom until such time a prima facie case is made that the research is dangerous. The burden of proof then shifts from those opposing the research to those supporting it. At that stage the research should not begin or be continued until a good case can be made that it is safe.
AUTONOMOUS MACHINES (AMS) • AMs include any computerized system/agent/robot that is capable of acting and making decisions independently of human oversight. • An AM also can interact with and adapt to (changes in) its environment, and it can learn (as it functions).
AMS (CONTINUED) • The expression "autonomous machine" includes three conceptually distinct, but sometimes overlapping, autonomous technologies: 1) (autonomous) artificial agents, 2) autonomous systems, 3) (autonomous as opposed to "tele") robots. • The key attribute that links together these otherwise distinct (software) programs, systems, and entities is their ability to act autonomously, or at least act independently of human intervention.
AMS (CONTINUED): SOME EXAMPLES AND APPLICATIONS • An influential 2009 report by the UK's Royal Academy of Engineering identifies various kinds of devices, entities, and systems that also fit nicely under our category of AM, which include: ➢ driverless transport systems (in commerce); ➢ unmanned vehicles in military/defense applications (e.g., "drones"); ➢ robots on the battlefield; ➢ autonomous robotic surgery devices; ➢ personal care support systems.
AMS (CONTINUED): SOME EXAMPLES AND APPLICATIONS • Patrick Lin (2012) identifies a wide range of sectors in which AMs (or what he calls "robots") now operate, six of which include: ➢ labor and service (Roomba vacuum cleaner); ➢ military and security (drones); ➢ research and education (Mars Rover); ➢ entertainment (ASIMO); ➢ medical and healthcare (robotic pharmacists); ➢ personal care and companionship (CareBot).
CAN AN AM BE AN (ARTIFICIAL) MORAL AGENT? • Luciano Floridi (2011) believes that AMs can be moral agents because they ➢ (a) are "sources of moral action"; and ➢ (b) can cause moral harm or moral good. • In Chapter 11, we saw that Floridi distinguished between "moral patients" (as receivers of moral action) and moral agents (as sources of moral action). • All information entities, in Floridi's view, deserve consideration (minimally at least) as moral patients, even if they are unable to qualify as moral agents.
KNOW DIFFERENCE BETWEEN MORAL AGENTS AND MORAL PATIENTS
AMS AS MORAL AGENTS (CONTINUED) • Floridi also believes that autonomous AMs would qualify as moral agents because of their (moral) efficacy. • Deborah Johnson (2006), who also believes that AMs have moral efficacy, argues that AMs qualify only as "moral entities" and not moral agents because AMs lack freedom. • Himma (2009) argues that because these entities lack consciousness and intentionality, they cannot satisfy the conditions for moral agency.
AMS AS MORAL AGENTS: MOOR'S MODEL • James Moor (2006) takes a different tack in analyzing this question by focusing on various kinds of "moral impacts" that AMs can have. • First, he notes that computers can be viewed as normative (non-moral) agents – independently of the question whether they are also moral agents – because of the "normative impacts" their actions have (irrespective of any moral impacts).
MOOR'S MODEL (CONTINUED) • Moor also points out that because computers are designed for specific purposes, they can be evaluated in terms of how well, or how poorly, they perform in accomplishing the tasks they are programmed to carry out. ➢ For example, he considers the case of a computer program designed to play chess (such as Deep Blue) that can be evaluated normatively (independent of ethics). ➢ How well did it play chess?
MOOR'S MODEL (CONTINUED) • Moor notes that some normative impacts made possible by computers can also be moral or ethical in nature. • He argues that the consequences, and potential consequences, of "ethical agents" can be analyzed in four levels: 1. Ethical Impact Agents, 2. Implicit Ethical Agents, 3. Explicit Ethical Agents, 4. Full Ethical Agents.
go over agents
MOOR'S MODEL (CONTINUED) • In Moor's scheme: ➢ ethical-impact agents (i.e., the weakest sense of moral agent) will have (at least some) ethical consequences to their acts; ➢ implicit-ethical agents have some ethical considerations built into their design and "will employ some automatic ethical actions for fixed situations";
MOOR'S MODEL (CONTINUED) ➢ explicit-ethical-agents will have, or at least act as if they have, "more general principles or rules of ethical conduct that are adjusted and interpreted to fit various kinds of situations"; ➢ full-ethical agents "can make ethical judgments about a wide variety of situations" and in many cases can "provide some justification for them."
MOOR'S MODEL (CONTINUED) • Moor provides some examples of the first two categories: 1. An ethical-impact agent can include a "robotic camel jockey" (a technology used in Qatar to replace young boys as jockeys, and thus freeing those boys from slavery in the human trafficking business). 2. Implicit-ethical agents include an airplane's automatic pilot system and an ATM – both have built-in programming designed to prevent harm from happening to the aircraft, and to prevent ATM customers from being short-changed in financial transactions.
MOOR'S MODEL (CONTINUED) 3. Explicit ethical agents would be able to calculate the best ethical action to take in a specific situation and would be able to make decisions when presented with ethical dilemmas. 4. Full-ethical agents have the kind of ethical features that we usually attribute to ethical agents like us (i.e., what Moor describes as "normal human adults"), including consciousness and free will.
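For readers who find a concrete representation helpful, the following minimal Python sketch arranges Moor's four categories as an ordered scale and maps the slide's own examples onto the first two levels. The class and variable names are invented for illustration; they are not part of Moor's account.

```python
from enum import IntEnum

class EthicalAgentLevel(IntEnum):
    """Moor's four levels of ethical agents, ordered from weakest to strongest."""
    ETHICAL_IMPACT = 1   # acts have ethical consequences (e.g., robotic camel jockey)
    IMPLICIT = 2         # ethical considerations built into the design (e.g., autopilot, ATM)
    EXPLICIT = 3         # applies general ethical principles to varied situations
    FULL = 4             # consciousness, free will, judgment -- "normal human adults"

# Hypothetical mapping of the examples from the slides onto the categories
examples = {
    "robotic camel jockey": EthicalAgentLevel.ETHICAL_IMPACT,
    "airplane autopilot": EthicalAgentLevel.IMPLICIT,
    "ATM": EthicalAgentLevel.IMPLICIT,
}

for system, level in examples.items():
    print(f"{system}: level {level.value} ({level.name})")
```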
MOOR'S MODEL (CONTINUED) • Moor does not claim that either explicit or full-ethical (artificial) agents exist or that they will be available anytime in the near term. • Even if AMs may never qualify as full moral agents, Wallach and Allen (2009) believe that they can have "functional morality," based on two key dimensions: i. autonomy, ii. sensitivity to ethical values.
WALLACH AND ALLEN'S CRITERIA FOR "FUNCTIONAL MORALITY" FOR AMS • Wallach and Allen also note that we do not yet have systems with both high autonomy and high sensitivity. • They point out that an autopilot is an example of a system that has significant autonomy (in a limited domain) but little sensitivity to ethical values. • Wallach and Allen also note that ethical-decision support systems (such as those used in the medical field to assist doctors) provide decision makers with access to morally relevant information (and thus suggest a high level of sensitivity to moral values), but these systems have virtually no autonomy.
FUNCTIONAL MORALITY (CONTINUED) • Wallach and Allen argue that it is not necessary that AMs be moral agents in the sense that humans are. • They believe that all we need to do is to design machines to act "as if" they are moral agents and thus "function" as such.
TRUST AND AUTHENTICITY IN THE CONTEXT OF AMS • What is trust in the context of AMs, and what does a trust relationship involving humans and AMs entail? ➢ For example, can we trust AMs to always act in our best interests, especially AMs designed in such a way that they cannot be shut down by human operators? • We limit our discussion to two basic questions: I. What would it mean for a human to trust an AM? II. Why is that question important? • First, we need to define what is meant by trust in general.
WHAT IS TRUST? • A typical dictionary defines trust as "firm reliance on the integrity, ability, or character of a person or thing." • Consider that I am able to trust a human because the person in whom I place my trust not only can disappoint me (or let me down) but can also betray me. ➢ For example, that person, as an autonomous agent, can freely elect to breach the trust I placed in them.
TRUST (CONTINUED) • Some argue that trust also has an emotive aspect, and that this may be especially important in understanding trust in the context of AMs. • Sherry Turkle worries about what can happen when machines appear to us "as if" they have feelings. ➢ Turkle describes a phenomenon called the "Eliza effect," which was initially associated with a response that some users had to an interactive software program called "Eliza" (designed by Joseph Weizenbaum at MIT in the 1960s).
TRUST (CONTINUED) • Turkle notes that the Eliza program, which was designed to use language conversationally, solicited trust on the part of users. • Although Eliza was only a software program, Turkle suggests that it could nevertheless be viewed as a "relational entity," or what she calls a "relational artifact," because of the way people responded to, and confided in, it. • In this sense, she notes that Eliza seemed to have a strong emotional impact on the students who interacted with it. • But Turkle also notes that while Eliza "elicited trust" on the part of these students, it understood nothing about them.
TRUST AND "ATTACHMENT" IN AMS • Turkle worries that when a machine appears to be interested in people, it can "push our Darwinian buttons…which causes people to respond as if they were in a relationship." • Turkle suggests that because AMs can be designed in ways that make people feel as if a machine cares about them, people can develop feelings of trust in, and attachment to, that machine.
TRUST AND "AUTHENTICITY" • Turkle notes that Cynthia Breazeal, one of Kismet's designers who had also developed a "maternal connection" with this AM while she was a student at MIT, had a difficult time separating from Kismet when she left that institution. • Kismet • In Turkle's view, this factor raises questions of both trust and authenticity • She worries that, unlike in the past, humans must now be able to distinguish between authentic and simulated relationships. (virtual influencers?)
MACHINE ETHICS AND (DESIGNING) MORAL MACHINES • Anderson and Anderson (2011) describe machine ethics as an interdisciplinary field of research that is primarily concerned with developing ethics for machines, as opposed to developing ethics for humans who "use machines." • In their view, machine ethics is concerned with ➢ giving machines ethical principles, or a procedure for discovering ways to resolve ethical dilemmas they may encounter, enabling them to function in an ethically responsible manner through their own decision making.
MACHINE ETHICS (CONTINUED) • One way in which the field of machine ethics has expanded upon traditional computer ethics is by asking how computers can be made into "explicit moral reasoners." • In their answer to this question, Wallach and Allen first draw an important distinction between "reasoning about ethics" and "ethical decision making." ➢ For example, they acknowledge that even if one could build artificial systems capable of reasoning about ethics, it does not necessarily follow that these systems would be genuine "ethical decision makers."
MACHINE ETHICS (CONTINUED) • Wallach and Allen's main interest in how AMs can be made into moral reasoners is more practical than theoretical in nature. • Wallach and Allen also believe that the challenge of figuring out how to provide software/hardware agents with moral decision-making capabilities is urgent.
MORAL MACHINES • Can / should we build "moral machines"? • The kind of moral machines that Wallach and Allen have in mind are AMs that are capable of both a) making moral decisions; b) acting in ways that "humans generally consider to be ethically acceptable behavior." • We should note that the idea of designing machines that could behave morally, i.e., with a set of moral rules embedded in them, is not entirely new.
DESIGNING "MORAL MACHINES" • In the 1940s, Isaac Asimov anticipated the need for ethical rules that would guide the robots of the future. • He then formulated his (now classic) Three Laws of Robots: 1. A robot may not injure a human being, or through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
DESIGNING MORAL MACHINES (CONTINUED) • Numerous critics have questioned whether the three laws articulated by Asimov are adequate to meet the kinds of ethical challenges that current AMs pose. • But relatively few of these critics have proposed clear and practical guidelines for how to embed machines with ethical instructions that would be generally acceptable to most humans.
DESIGNING MORAL MACHINES (CONTINUED) • The Andersons have developed an "automated dialogue" – i.e., a system involving an ethicist and an artificial system that functions "more or less independently in a particular domain." • They believe that this is an important first step in building moral machines because it enables the artificial system to learn both: a) the "ethically relevant features of the dilemmas it will encounter" (within that domain), b) the appropriate prima facie duties and decision principles it will need to resolve the dilemmas.
DESIGNING MORAL MACHINES (CONTINUED) • Wallach and Allen seem far less concerned with questions about whether AMs can be full moral agents than with questions about how we can design AMs to act in ways that conform to our received notions of morally acceptable behaviour. • S. Anderson (2011) echoes this point when she notes that her primary concern also is with whether machines "can perform morally correct actions and can justify them if asked."
DESIGNING "MORAL MACHINES" AND THE IMPORTANCE OF MACHINE ETHICS • Why is continued work on designing moral machines in particular, and in machine ethics in general, important? 1) ethics (itself) is important; 2) future machines will likely have increased autonomy; 3) designing machines to behave ethically will help us better understand ethics. • Moor's third reason reinforces Wallach and Allen's claim that developments in machine ethics could also help us to better understand our own nature as moral reasoners.
A "DYNAMIC" ETHICAL FRAMEWORK FOR GUIDING RESEARCH IN NEW AND EMERGING TECHNOLOGIES • Some of the ethical concerns affecting AMs, as well as the other new/emerging technologies that we have examined, directly impact the software engineers/programmers who design the technologies. • But virtually everyone will be affected by these technologies in the near future. • We all would benefit from clear ethical guidelines that address research/development in new and emerging technologies.
THE NEED FOR ETHICAL GUIDELINES FOR EMERGING TECHNOLOGIES (CONTINUED) • As research began on the Human Genome Project (HGP) in the 1990s, researchers developed an ethical framework that came to be known as ELSI (Ethical, Legal, and Social Implications) to anticipate, in advance, some HGP-related ethical issues that would likely arise. • Prior to the ELSI Program, ethics was typically "reactive" in the sense that it had followed scientific developments, rather than informing scientific research. ➢ For example, Moor (2004) notes that in most scientific research areas, ethics has had to play "catch up," because guidelines were developed in response to cases where serious harm had already resulted.
THE NEED FOR CLEAR ETHICAL GUIDELINES FOR EMERGING TECHNOLOGIES (CONTINUED) • Ray Kurzweil (2005) believes that an ELSI-like model should be developed and used to guide researchers working in one area of emerging technologies – nanotechnology. • Many consider the ELSI framework to be an ideal model because it is a "proactive" (rather than a reactive) ethics framework.
THE NEED FOR ETHICAL GUIDELINES (CONTINUED) • Moor (2008) is critical of the ELSI model because it employs a scheme that he calls an "ethics-first" framework. • He believes that this kind of ethical framework has problems because: a) it depends on a "factual determination" of the specific harms and benefits of a technology before an ethical assessment can be done; and b) in the case of nanotechnology, it is very difficult to know what the future will be.
THE NEED FOR ETHICAL GUIDELINES (CONTINUED) • Moor also argues that because new and emerging technologies promise "dramatic change," it is no longer satisfactory to do "ethics as usual." • Instead, he claims that we need to be: ➢ better informed in our "ethical thinking"; and ➢ more proactive in our "ethical action."
THE NEED FOR ETHICAL GUIDELINES (CONTINUED) • Moor and Weckert (2004) note that if we use an ELSI-like ethics model, it might seem appropriate to put a moratorium on research in a specific new/emerging technology until we get all of the facts. • But they also believe that while a moratorium would halt technology developments, it will not advance ethics in that area of emerging technologies.
A moratorium is a temporary suspension of an activity or law until future consideration warrants lifting the suspension.
THE NEED FOR ETHICAL GUIDELINES (CONTINUED) • Moor and Weckert also argue that turning back to a traditional "ethics-last model" is not desirable either. • They note that once a technology is in place, much unnecessary harm may already have occurred. • So, for Moor and Weckert, neither an ethics-first (i.e., ELSI-like) nor an ethics-last model is satisfactory for emerging technologies.
A "DYNAMIC" ETHICAL FRAMEWORK THAT CONTINUALLY NEEDS TO BE UPDATED • Moor and Weckert argue that ethics is something that needs to be done continually as: ➢ a specific (new) technology develops; and ➢ that technology's potential consequences become better understood. • Ethics is "dynamic" in that the factual component on which it relies has to be continually updated.
MOOR'S DYNAMIC ETHICAL FRAMEWORK FOR NEW/EMERGING TECHNOLOGIES • We add a fourth component or step to our (threefold) framework developed in Chapter 1, that is: ➢ Step 4. Update the ethical analysis by continuing to: a) differentiate between the factual/descriptive and normative components of the new or emerging technology under consideration; b) revise the policies affecting that technology as necessary, especially as the factual data changes or as information about the potential social impacts becomes clearer.
IMPORTANT NEW STEP IN OUR ETHICAL FRAMEWORK FROM CHAPTER 1
MOOR'S DYNAMIC ETHICAL FRAMEWORK (CONTINUED) • As information about plans for the design and development of a (particular) new technology becomes available, we can: ➢ loop back to Step 1 (of our model in Chap. 1), and ➢ proceed carefully through each subsequent step in the expanded ethical framework. • This four-step framework can also be applied as new information about existing technologies and their features becomes available.
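As a rough illustration only, the "loop back" idea can be pictured as the following Python pseudocode. The function names are placeholders invented here; the slides do not spell out how Steps 1-3 from Chapter 1 or the Step 4 revisions are actually carried out.

```python
# Minimal sketch of the "dynamic," four-step loop described above (hypothetical placeholders).

def apply_chapter1_framework(technology):
    """Steps 1-3: the threefold framework from Chapter 1 (treated as a black box here)."""

def separate_factual_from_normative(technology):
    """Step 4a: differentiate the factual/descriptive components from the normative ones."""

def revise_policies(technology):
    """Step 4b: revise policies as factual data and social-impact information change."""

def dynamic_ethical_analysis(technology, new_information_available):
    # Loop back to Step 1 whenever new information about the technology emerges.
    while new_information_available(technology):
        apply_chapter1_framework(technology)           # Steps 1-3
        separate_factual_from_normative(technology)    # Step 4a
        revise_policies(technology)                    # Step 4b
```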
FINAL EXAM • Same format as midterm • Part Multiple Choice/True False • Part Short Answer • Part Long Answer • Covers material since midterm • Answer the questions IN YOUR OWN WORDS!
## Slide 1 - Final Exam Review

Final Exam Review
## Slide 2 - 5 Pt Question

5 pt question Paul "Cougar" Rambis is an Iraq War veteran who lost a leg in combat. Before entering the military, he was a fairly accomplished golfer and had planned to "turn professional" after completing his tour of duty in the U.S. Army. Initially, his dreams seemed shattered when he was severely wounded by an explosive device he encountered while on a routine patrol. But, then, Cougar learned that a new kind of bionic leg had recently been developed and that he was at the top of the list to receive one of these remarkable limbs. When Cougar returned home (with his new "leg" in place), he resumed his golfing activities. But when he wished to declare himself a professional golfer, Cougar was informed that he would be unable to participate in professional golf competitions because of his artificial leg. However, Cougar responded that his new leg, though artificial, was a natural replacement for his original (biological or natural) leg and that, as such, it did not enhance his ability to swing a golf club or to endure the rigors associated with walking through the typical 18-hole golf course. What are the ethical considerations in this case study? In your opinion, should Cougar be permitted to become a professional golfer? Why or why not? (5 pts)
## Slide 3 - Key Points in Answer

Key points in answer If Cougar's artificial leg qualifies as a therapeutic device, that is, by simply restoring his body functions to "normal", should Cougar be allowed to compete as a professional golfer? On the other hand, if that "leg" does not injure as easily, and does not age in the way that natural legs do, is Cougar's new leg merely a "therapeutic" replacement? In other words, does it enhance his ability to compete, even if only minimally? • Tavani, Herman T. (2016) Ethics and Technology, 5th edition, p. 314. "Conventional" implants in the form of devices designed to "correct" deficiencies have been around and used for some time to assist patients in their goal of achieving "normal" states. The question for the Professional Golfers Association is whether there are criteria that can be used to evaluate Cougar's artificial leg vs. a human leg of a professional golfer to determine that there either are or are not differences between the two that would make Cougar's artificial leg an unacceptable advantage to Cougar in competition. Ultimately, the PGA has the right to make whatever rules they like.
## Slide 4 - 3 Pt Question

3 pt question What does Lessig mean by the following claim, "In cyberspace, code is the law"?
## Slide 5 - 3 Pt Question Answer

3 pt question answer Code, for Lessig, consists of programs, devices, and protocols – that is, the sum total of the software and hardware – that constitute cyberspace. Code sets the terms upon which one can enter or exit cyberspace. Code is not optional. Section 9.1.3, p. 241.
## Slide 6 - 1 Pt Written Answer

1 pt written answer Give an example of a cyber-exacerbated crime.
## Slide 7 - 1 Pt Written Answer

1 pt written answer Any of • cyberstalking, internet pedophilia, internet pornography, cyberbullying. Not examples of • cyberpiracy, cybertrespass, cybervandalism, defacing a web page, copying proprietary info, using someone's password, viruses, denial-of-service (DoS) attacks.
## Slide 8 - 1 Pt mc/ Tf Question

1 pt mc/ tf question The Panopticon refers to: • The provision of an enhanced environment by using technology. • The concept of always being under observation. • Items that provide heads up display of information. • Nanotechnology. • All of these.
## Slide 9 - 1 mc/ Tf Answer

1 mc/ tf answer The Panopticon refers to: • The concept of always being under observation.