Page 348 - Handbook of Electronic Assistive Technology
Chapter 11 • Robotics 337
response. In addition, PMAs can only generate force in tension, through contraction; therefore, at least two actuators are often used per DOF to provide antagonistic movement, like natural skeletal muscles (Huo et al., 2016). Pneumatic muscles are compliant and well known for their exceptionally high power- and force-to-weight/volume ratios, which make them attractive for use in rehabilitation robots (Yeh et al., 2010).
5. Series elastic actuators (SEAs) – SEAs place an elastic component between the power source and the output shaft. By measuring the deflection of the elastic component, the output force can then be computed from Hooke's law. This arrangement decreases the inertia and intrinsic impedance of the actuator, allowing more accurate and stable force control in unconstrained environments and thereby increasing patient safety (Maciejasz et al., 2014; Huo et al., 2016).
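The antagonistic pairing of PMAs described above can be sketched in a few lines. This is an illustrative model only, not from the chapter: it assumes a simple linear pressure-to-tension relation (real PMAs are strongly nonlinear) and a fixed pulley radius, both hypothetical values.

```python
# Sketch of an antagonistic pair of pneumatic muscle actuators (PMAs)
# driving one rotary degree of freedom. Each muscle can only pull
# (tension >= 0), so two are needed, like a biceps/triceps pair.

def pma_tension(pressure_kpa: float, gain: float = 2.0) -> float:
    """Hypothetical linear pressure-to-tension model (N). The max()
    enforces the tension-only property: a PMA cannot push."""
    return max(0.0, gain * pressure_kpa)

def joint_torque(p_agonist: float, p_antagonist: float,
                 radius_m: float = 0.03) -> float:
    """Net joint torque (N*m): the two tensions act on opposite sides
    of the joint, so their difference times the radius gives torque."""
    return (pma_tension(p_agonist) - pma_tension(p_antagonist)) * radius_m

# Equal pressures give zero net torque but nonzero co-contraction,
# which stiffens the joint:
print(joint_torque(100.0, 100.0))  # 0.0
# Pressurizing one side more rotates the joint toward it:
print(joint_torque(150.0, 50.0))   # positive net torque
```

The co-contraction case shows why compliance matters clinically: the same joint angle can be held at different stiffness levels by raising or lowering both pressures together.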
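The SEA force-sensing principle in item 5 reduces to a one-line application of Hooke's law, F = k·x. The sketch below is illustrative; the spring stiffness and positions are assumed values, not figures from the chapter.

```python
# Sketch of series elastic actuator (SEA) force sensing. The motor drives
# the load through a spring; measuring the spring's deflection and
# applying Hooke's law (F = k * x) yields the transmitted force without
# a dedicated force sensor.

SPRING_K = 5000.0  # assumed spring stiffness in N/m

def sea_output_force(motor_pos_m: float, load_pos_m: float,
                     k: float = SPRING_K) -> float:
    """Force transmitted to the load: Hooke's law on the deflection
    between the motor-side and load-side spring ends."""
    deflection = motor_pos_m - load_pos_m
    return k * deflection

# 2 mm of spring compression transmits about 10 N to the load:
print(sea_output_force(0.012, 0.010))
```

Because the controller regulates a spring deflection rather than a stiff position, unexpected patient motion simply changes the deflection instead of producing a large contact force, which is the safety property the text highlights.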
Roboethics
Robot ethics, or roboethics, is an area of study that aims to understand the ethical implications and consequences of robotic technology (Scheutz, 2013). The basis of roboethics is the set of laws of robotics written by the author and scientist Isaac Asimov. In his publication (Asimov, 1950), he defined the 'laws of robotics' as:
• First Law: A robot may not harm a human being or, through inaction, allow a human
being to come to harm.
• Second Law: A robot must obey the orders given to it by human beings, except where
such orders would conflict with the First Law.
• Third Law: A robot must protect its own existence as long as such protection does not
conflict with the First or Second Laws.
• Zeroth Law (sometimes called the Fourth Law): No robot may harm humanity or, through inaction, allow humanity to come to harm.
Though at the time the ‘laws’ were conceived these technologies seemed far-fetched,
the current status of robotic technology is closer to what Asimov had envisaged. Murphy
and Woods (2009) suggest that Asimov's laws are based on functional morality, which assumes that robots have (or will have) sufficient agency and cognition to make moral decisions. Morality can be defined as the ability to distinguish
between what is right and what is wrong. Asimov’s laws assume that robots behave as if
they were people. However, it is humans who design and use the robots who must be sub-
ject to any law and the ultimate responsibility for ensuring robots behave well must always
lie with human beings (Boden et al., 2017).
In the field of assistive technology, roboethics is particularly important to address for the safeguarding of patients. There is ongoing debate about the ethical implications of the anthropomorphism of robots (Alsegier, 2016). The application of robotics is still in its early stages, and our understanding of its consequences for society is developing alongside it.