Software Agents, Anticipatory Ethics, and Accountability

This chapter takes up a case study of the accountability issues surrounding increasingly autonomous computer systems. In this early phase of their development, certain computer systems are being referred to as “software agents” or “autonomous systems” because they operate in a variety of ways that are seemingly independent of human control. However, because of the responsibility and liability issues involved, conceptualizing these systems as autonomous seems morally problematic and is likely to prove legally problematic as well. Whether software agents and autonomous systems are used to make financial decisions, control transportation, or carry out military missions, when something goes wrong, issues of accountability will indubitably arise. While it would seem that the law will ultimately have to address these issues, law is currently being used only minimally or indirectly to assign accountability for computer software failure. This nascent discussion of computer systems “in the making” seems a good focal point for considering innovative approaches to making law, governance, and ethics more helpful with regard to new technologies. For a start, anticipatory reasoning about how accountability and liability issues are likely to be handled in law could have an influence on the development of the technology (even if that anticipatory thinking is ultimately wrong). Such thinking could, in principle at least, shape the design of computer systems.