For a machine to be classified as intelligent it needs to draw from human concepts such
as autonomy, consciousness, learning, free will and decision-making. Autonomy is defined
as the ability required for carrying on successful activities (Amigoni and Schiaffonati, 2005).
Robotics is steadfastly heading toward the replication of an intelligent and autonomous
being. It is argued that the more autonomy a robot or machine is allowed, the greater the need for it to abide by a set of principles compatible with a socially aligned moral code.
New technology must meet certain requirements before it is accepted, including legal, social and global considerations. The concern is that there is no centralised system for establishing responsibility for ensuring these requirements are in place. According to Alsegier (2016), when a robot develops a technical fault that causes harm to the individual using it, various questions arise: Who becomes the responsible party in this event? Who is responsible for the ethical implications? Could the robot itself be held responsible? Is the engineer who developed the robot responsible? Or is the company or the government that allowed the use of the robot the responsible party?
Other concerns also exist when considering robots in human environments:
• Robots replacing humans could increase unemployment and thereby worsen socioeconomic problems.
• Psychological problems may arise from difficulties with attachment to other humans and, further in the future as robots become increasingly human-like, from possible confusion between what is real and what is robotic.
A centralised protocol is therefore required. Alsegier (2016) suggests a set of solutions for the future:
1. The creation of limitations and laws to be applied to the development and control of robots. As part of this, the content of robotic research would be made available to the public, and scientists would take responsibility for informing and educating the public on the uses of any new robot, clearly stating the short-term and long-term effects of its use.
2. Any humanoid robot (including SARs) would have to pass a series of tests and would be evaluated by ‘neutral’ scientists able to assess any technical issues the user may face when using the robot. Given the human-like nature of these devices, it is suggested that sociologists be involved to understand the effects on people’s behaviour and form part of the review process for new products, ensuring there are no damaging effects on society.
3. The final stage of testing would come under the jurisdiction of the government, which would be responsible for clearly stating the legal liabilities involved in the development of new robotics.
4. A universal set of rules for the production of intelligent robots would include ethical responsibilities and safety considerations.