Date: May 2018
Marks available: 12
Reference code: 18M.1.HL.TZ0.6
Level: HL
Paper: 1
Time zone: no time zone
Command term: Discuss
Question number: 6
Adapted from: N/A
Question
Policing as a human activity?
Toby Walsh, Professor of Artificial Intelligence at the University of New South Wales, Australia, notes that the use of police robots raises “many important questions that we, as a society, have to think about”.
Singapore has started testing patrol robots that survey pedestrian areas in the city-state. Xavier, the mall-cop robot, will roll autonomously through the Toa Payoh central district for three weeks, scanning for “undesirable social behaviours”.
Figure 4 shows an example of a patrol robot.
Figure 4: An example of a patrol robot
[Image by Jdietsch. PatrolBot.jpg (https://commons.wikimedia.org/wiki/File:PatrolBot.jpg).
Under copyright and licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License,
https://creativecommons.org/licenses/by-sa/3.0/deed.en (image cropped)]
It has been claimed that the use of patrol robots will lead to more efficient policing.
Discuss the extent to which police departments should use patrol robots as a strategy to aid policing.
Markscheme
Answers may include:
Benefits of robots: (claim)
- Patrol robots can save lives (can defuse bombs, can be sent into other situations that would be dangerous for humans).
- Under pressure, human beings can make mistakes that patrol robots would not make.
- A robot’s sensors may be able to detect things that a human could not (for example, detecting gases or recognising faces).
- Emotions will not affect a robot’s decisions/behaviour, so its decision-making will be consistent.
Problems with robots: (counter-claim)
- How do we keep patrol robots from being hacked, i.e., taken over by third parties?
- Will police departments be tempted to weaponize their patrol robots in order to minimize the risk to officers (ethics, values)?
- Will communities accept their use? Will they feel surveilled (ethics)?
- Humans can be held responsible/accountable for their actions, whereas it is unclear who is to blame when a patrol robot causes harm.
- Humans can use intuition/experience/judgement to detect aspects of a situation that a patrol robot cannot.
- People may feel safer with human officers than with patrol robots.
- Can the patrol robots communicate with police officers in real time (networks, bandwidth, systems, feasibility)?
- Loss of human jobs as a result of automation.
Decision-making and guidelines that determine the extent to which robots can be used in policing:
- Who should decide how they are used?
- What kinds of patrol robots should be available to police?
- What are the guidelines/regulations for remotely killing a human being? Are they transparent?
- How should police who use patrol robots be trained?
- Who is accountable for the outcomes?
- Who will determine the balance of power between humans and patrol robots (values)?
In this question, it is expected that answers will balance terminology related to digital systems with terminology related to social and ethical impacts.
Keywords: crime, law, regulation, accountability, responsibility, acceptance, transparency, privacy, anonymity, intuition, judgement, surveillance, networks, bandwidth, robots, automation, decision-making, change, power, systems, ethics, values, feasibility
Refer to HL paper 1 Section B markbands when awarding marks. These can be found under the "Your tests" tab > supplemental materials > Digital society markbands and guidance document.