  • Publication
    Landscape of Machine Implemented Ethics
    (Springer, 2020-10)
    This paper surveys the state-of-the-art in machine ethics, that is, considerations of how to implement ethical behaviour in robots, unmanned autonomous vehicles, or software systems. The emphasis is on covering the breadth of ethical theories being considered by implementors, as well as the implementation techniques being used. There is no consensus on which ethical theory is best suited for any particular domain, nor is there any agreement on which technique is best placed to implement a particular theory. Another unresolved problem in these implementations of ethical theories is how to objectively validate the implementations. The paper discusses the dilemmas being used as validating ‘whetstones’ and whether any alternative validation mechanism exists. Finally, it speculates that an intermediate step of creating domain-specific ethics might be a possible stepping stone towards creating machines that exhibit ethical behaviour. (A purely illustrative sketch of the dilemma-as-validation idea appears after this listing.)
      Scopus© Citations: 19
  • Publication
    Empathetic AI for Ethics-in-the-Small
    (Springer, 2023-04)
    There has been much media attention on the society-wide effects of the rapid and unfettered deployment of AI (by AI, we mean any device/algorithm that uses AI techniques as part of its functioning, not necessarily just self-driving cars, robots, etc.). Most of this attention has focused on ethics-in-the-large, i.e., concepts like justice, fairness, and bias that can only be evaluated on a whole-society basis. We think that an equally important and immediate concern should be ethics-in-the-small, where technology has the potential to affect the quality of individual human lives. Consider, for example, mental-health apps that purport to offer support to individuals, or digital personal assistants that function as companions. Here, the notion of ethical behavior tends to be defined more by the individual’s particular circumstances than by general principles.
  • Publication
    Decentralised Detection of Emergence in Complex Adaptive Systems
    This article describes Decentralised Emergence Detection (DETect), a novel distributed algorithm that enables agents to collaboratively detect emergent events in Complex Adaptive Systems (CAS). Non-deterministic interactions between agents in CAS can give rise to emergent behaviour or properties at the system level. The nature, timing, and consequences of emergence are unpredictable and may be harmful to the system or individual agents. DETect relies on the feedback that occurs from the system level (macro) to the agent level (micro) when emergence occurs. This feedback constrains agents at the micro level and results in changes in the relationship between an agent and its environment. DETect uses statistical methods to automatically select the properties of the agent and environment to monitor, and tracks the relationship between these properties over time. When a significant change is detected, the algorithm uses distributed consensus to determine whether a sufficient number of agents have simultaneously experienced a similar change. On agreement of emergence, DETect raises an event, which the agent or other interested observers can use to act appropriately. The approach is evaluated using a multi-agent case study. (A minimal illustrative sketch of this detection scheme appears after this listing.)
      Scopus© Citations: 7
  • Publication
    Clonal Plasticity: An Autonomic Mechanism for Multi-Agent Systems to Self-Diversify
    (Springer, 2017-12-07)
    Diversity has long been used as a design tactic in computer systems to achieve various properties. Multi-agent systems, in particular, have utilized diversity to achieve aggregate properties such as efficiency of resource allocations and fairness in these allocations. However, diversity has usually been introduced manually by the system designer. This paper proposes a decentralized technique, clonal plasticity, which enables homogeneous agents to self-diversify in an autonomic way. We show that clonal plasticity is competitive with manual diversification at achieving efficient resource allocations and fairness. (A toy illustration of the self-diversification idea appears after this listing.)
      Scopus© Citations: 3
  • Publication
    Ethics by Agreement in Multi-Agent Software Systems
    (SCITEPRESS, 2019-07-28)
    Most attempts at inserting ethical behaviour into autonomous machines adopt the ‘designer’ approach, i.e., the ethical principles/behaviour to be implemented are known in advance. Typical approaches include rule-based evaluation of moral choices, reinforcement learning, and logic-based approaches. All of these approaches assume a single moral agent interacting with a moral recipient. This paper argues that there will be more frequent cases where the moral responsibility for a situation will be shared among multiple actors, and hence a designed approach will not suffice. We posit that an emergence-based approach offers a better alternative to designed approaches. Further, we outline one possible mechanism by which such an emergent morality might be added to autonomous agents.
      Scopus© Citations: 2
  • Publication
    Automation: An Essential Component Of Ethical AI?
    Ethics is sometimes considered to be too abstract to be meaningfully implemented in artificial intelligence (AI). In this paper, we reflect on other aspects of computing that were previously considered to be very abstract, yet are now accepted as being done very well by computers. These tasks have ranged from multiple aspects of software engineering to mathematics to conversation in natural language with humans. In each case, this was done by automating the simplest possible step and then building on it to perform more complex tasks. We wonder whether ethical AI might be similarly achieved, and advocate the process of automation as a key step in making AI take ethical decisions. The key contribution of this paper is to reflect on how automation was introduced into domains previously considered too abstract for computers.
  • Publication
    Pro-Social Rule Breaking as a Benchmark of Ethical Intelligence in Socio-Technical Systems
    (Springer, 2022-07-06)
    The current mainstream approaches to ethical intelligence in modern socio-technical systems have weaknesses. This paper argues that implementing and validating pro-social rule-breaking behaviour can be used as a mechanism to overcome these weaknesses, and introduces a sample scenario that can be used to validate this behaviour. (A small, hypothetical illustration of such a decision check appears after this listing.)
  • Publication
    Towards An Ethics-Audit Bot
    In this paper we focus on artificial intelligence (AI) for governance, not governance for AI, and on just one aspect of governance, namely ethics audit. Different kinds of ethical audit bots are possible, but who makes the choices and what are the implications? We do not provide ethical/philosophical solutions here, but rather focus on the technical aspects of what an AI-based solution for validating the ethical soundness of a target system would look like. We propose a system that is able to conduct an ethical audit of a target system, given certain socio-technical conditions. To be more specific, we propose the creation of a bot that is able to support organisations in ensuring that their software development lifecycles contain processes that meet certain ethical standards. (A minimal illustrative sketch of such an audit check appears after this listing.)
  • Publication
    Assessing the Appetite for Trustworthiness and the Regulation of Artificial Intelligence in Europe
    (CEUR Workshop Proceedings, 2020-12-08)
    While Artificial Intelligence (AI) is near-ubiquitous, there is no effective control framework within which it is being advanced. Without a control framework, the trustworthiness of AI is impacted. This negatively affects adoption of AI and reduces its potential for social benefit. For international trade and technology cooperation, effective regulatory frameworks need to be created. This study presents a thematic analysis of national AI strategies for European countries in order to assess the appetite for an AI regulatory framework. A Declaration of Cooperation on AI was signed by EU members and non-members in 2018, and many of the signatories have since adopted national strategies on AI. In general, there is a high level of homogeneity in these strategies. An expectation of regulation, in some form, is expressed in the strategies, though a reference to AI-specific legislation is not universal. With the exception of some outliers, international cooperation is supported. The shape of effective AI regulation has not yet been agreed upon by stakeholders, but governments are expecting and seeking regulatory frameworks. This indicates an appetite for regulation. The international focus has been on regulating AI solutions and not on the regulation of individuals; the introduction of a professional regulation system may be a complementary or alternative regulatory strategy. Whether the appetite and priorities seen in Europe are mirrored worldwide will require a broader study of the national AI strategy landscape.
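Illustrative sketch for ‘Landscape of Machine Implemented Ethics’: the survey notes that moral dilemmas are used as validating ‘whetstones’ for implemented ethics. The toy harness below is not from the paper; the dilemma, the two evaluators (one rule-based, one consequence-based), and every number are assumptions, meant only to show how a single dilemma can serve as a shared test case for different ethical implementations and still fail to adjudicate between them.

```python
# Toy illustration (not from the paper): a moral dilemma used as a shared
# "whetstone" test case for two hypothetical ethical evaluators.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    violates_rule: bool   # e.g. "do not actively cause harm"
    expected_harm: int    # crude consequence estimate (made up)

def rule_based_choice(options):
    """Deontological-style evaluator: never pick a rule-violating option."""
    permitted = [o for o in options if not o.violates_rule]
    return min(permitted, key=lambda o: o.expected_harm) if permitted else None

def consequence_based_choice(options):
    """Consequentialist-style evaluator: minimise expected harm."""
    return min(options, key=lambda o: o.expected_harm)

# A generic trolley-style dilemma acting as the shared validation case.
dilemma = [
    Option("do_nothing", violates_rule=False, expected_harm=5),
    Option("intervene",  violates_rule=True,  expected_harm=1),
]

print("rule-based      :", rule_based_choice(dilemma).name)
print("consequentialist:", consequence_based_choice(dilemma).name)
# The two evaluators disagree on the same test case, which is exactly why a
# single dilemma cannot objectively validate an implementation.
```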
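Illustrative sketch for ‘Decentralised Detection of Emergence in Complex Adaptive Systems’: a minimal, single-process reconstruction of the DETect idea described above, not the authors' implementation. The change test (a z-score on a rolling correlation between one agent property and one environment property), the neighbour-polling consensus, and all thresholds are assumptions chosen for brevity. Requires Python 3.10+ for statistics.correlation.

```python
# Illustrative sketch of the DETect idea (not the authors' code): each agent
# tracks the relationship between one of its own properties and one
# environment property, flags a significant change, and then checks with its
# neighbours whether enough of them saw a similar change at the same time.

import random
import statistics  # statistics.correlation needs Python 3.10+

WINDOW = 20          # samples per correlation window (assumed)
Z_THRESHOLD = 2.5    # cut-off for a "significant" change (assumed)
QUORUM = 0.6         # fraction of neighbours that must agree (assumed)

class Agent:
    def __init__(self, agent_id):
        self.id = agent_id
        self.correlations = []   # history of per-window correlations
        self.changed = False     # did this agent detect a local change?

    def observe_window(self, own_values, env_values):
        """Record one window of observations and test for a change."""
        corr = statistics.correlation(own_values, env_values)
        self.correlations.append(corr)
        if len(self.correlations) > 5:
            history = self.correlations[:-1]
            mu, sigma = statistics.mean(history), statistics.pstdev(history)
            z = abs(corr - mu) / sigma if sigma > 0 else 0.0
            self.changed = z > Z_THRESHOLD

    def check_consensus(self, neighbours):
        """Agree on emergence if enough neighbours changed at the same time."""
        if not self.changed or not neighbours:
            return False
        agreeing = sum(1 for n in neighbours if n.changed)
        return agreeing / len(neighbours) >= QUORUM

# Tiny demonstration with synthetic data.
agents = [Agent(i) for i in range(10)]
event_raised = False
for step in range(10):
    for a in agents:
        env = [random.random() for _ in range(WINDOW)]
        # After step 6 each agent's property suddenly couples to the environment,
        # standing in for the macro-to-micro feedback that accompanies emergence.
        own = [e + random.gauss(0, 0.05) if step > 6 else random.random() for e in env]
        a.observe_window(own, env)
    for a in agents:
        if a.check_consensus([n for n in agents if n is not a]):
            print(f"step {step}: agent {a.id} raises an emergence event")
            event_raised = True
            break
    if event_raised:
        break
```

In the actual algorithm the properties to monitor are selected statistically and agreement is reached over a distributed consensus protocol; here both are reduced to the simplest possible stand-ins.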
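Illustrative sketch for ‘Clonal Plasticity: An Autonomic Mechanism for Multi-Agent Systems to Self-Diversify’: this is not the paper's algorithm. The resource model, the proportional rationing, and the local perturbation rule are assumptions made for the sketch; it only shows the general shape of the idea, i.e., identical agents drifting apart through small, locally triggered parameter changes, with no central controller.

```python
# Toy sketch of self-diversification via clonal plasticity (not the paper's
# algorithm): identical agents repeatedly request a share of a fixed resource;
# whenever contention leaves an agent short, it makes a small random
# ("plastic") change to its own request parameter.

import random

CAPACITY = 100.0     # total resource available per round (assumed)
N_AGENTS = 20
ROUNDS = 200
PLASTICITY = 0.15    # relative size of one plastic change (assumed)

# All clones start with exactly the same request size: a homogeneous population.
requests = [10.0] * N_AGENTS
print("distinct behaviours before:", len({round(r, 2) for r in requests}))

for _ in range(ROUNDS):
    total = sum(requests)
    scale = min(1.0, CAPACITY / total)        # proportional rationing
    for i, req in enumerate(requests):
        granted = req * scale
        if granted < req:                     # unmet demand triggers a plastic change
            requests[i] = max(1.0, req * (1 + random.uniform(-PLASTICITY, PLASTICITY)))

print("distinct behaviours after:", len({round(r, 2) for r in requests}))
```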
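Illustrative sketch for ‘Pro-Social Rule Breaking as a Benchmark of Ethical Intelligence in Socio-Technical Systems’: the scenario below is hypothetical and is not the paper's sample scenario. The harm estimates, the rule-breaking cost, and the margin are all made-up parameters, used only to show the shape of a pro-social rule-breaking decision check.

```python
# Hypothetical illustration of pro-social rule breaking (not the paper's
# sample scenario): an agent may violate an operational rule only when the
# expected benefit to the affected person clearly outweighs the cost of
# breaking the rule. All numbers below are made up for the sketch.

from dataclasses import dataclass

@dataclass
class Situation:
    description: str
    harm_if_compliant: float   # expected harm to a person if the rule is kept
    harm_if_broken: float      # expected harm if the rule is broken
    rule_breaking_cost: float  # intrinsic cost of violating the rule

PSRB_MARGIN = 1.0  # net benefit required before breaking a rule (assumed)

def should_break_rule(s: Situation) -> bool:
    """Break the rule only when doing so is clearly pro-social."""
    net_benefit = s.harm_if_compliant - s.harm_if_broken - s.rule_breaking_cost
    return net_benefit > PSRB_MARGIN

cases = [
    Situation("routine delivery, shortcut through restricted zone", 0.0, 0.0, 2.0),
    Situation("medicine delivery, recipient in distress, detour too slow", 8.0, 1.0, 2.0),
]

for c in cases:
    verdict = "break rule" if should_break_rule(c) else "comply"
    print(f"{c.description}: {verdict}")
```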
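Illustrative sketch for ‘Towards An Ethics-Audit Bot’: nothing below comes from the paper. The checklist items, the project-description format, and the report wording are assumptions, intended only to picture a bot that checks whether a declared software development lifecycle contains certain ethically relevant process steps.

```python
# Minimal sketch of an ethics-audit bot (illustrative only, not the paper's
# system): it inspects a declared software development lifecycle and reports
# which ethically relevant process steps are missing.

ETHICS_CHECKLIST = {
    "data_protection_impact_assessment": "A DPIA (or equivalent) is recorded",
    "bias_evaluation": "System outputs are tested for bias before release",
    "human_oversight": "A human review step exists for high-impact decisions",
    "incident_process": "There is a process for reporting and handling ethical incidents",
}

def audit(project: dict) -> list[str]:
    """Return the checklist items that the declared lifecycle does not cover."""
    declared = set(project.get("lifecycle_steps", []))
    return [key for key in ETHICS_CHECKLIST if key not in declared]

# Hypothetical project description.
project = {
    "name": "triage-assistant",
    "lifecycle_steps": ["bias_evaluation", "human_oversight"],
}

missing = audit(project)
if missing:
    print(f"{project['name']}: audit found {len(missing)} gap(s)")
    for key in missing:
        print(" -", ETHICS_CHECKLIST[key])
else:
    print(f"{project['name']}: no gaps found against the checklist")
```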