[Framework Summary #3] Navigating the Landscape of AI Ethics and Responsibility

September 11, 2024

This week we summarize the third risk framework included in the AI Risk Repository: "Navigating the Landscape of AI Ethics and Responsibility", by Paulo Rupino Cunha and Jacinto Estima (2023).

Drawing on a systematic literature review and an analysis of real-world news about AI-infused systems, this framework clusters existing and emerging AI ethics and responsibility issues into six groups:

  1. Broken systems: Algorithms or training data leading to unreliable outputs, often disproportionately weighing variables like race or gender. These can cause significant harm to individuals through biased decision-making in areas like housing, relationships, or legal proceedings.
  2. Hallucinations: AI systems generating false information, particularly in conversational AI tools. This can lead to the spread of misinformation, especially among less knowledgeable users.
  3. Intellectual property rights violations: AI tools potentially infringing on creators' rights by using their work for training without permission or compensation. This includes issues with AI-generated code potentially violating open-source software licenses.
  4. Privacy and regulation violations: AI systems collecting and storing personal data without proper legal basis or user consent, potentially violating privacy laws like GDPR. This also includes the risk of exposing sensitive information through AI tool usage.
  5. Enabling malicious actors and harmful actions: AI technologies being used for nefarious purposes such as creating deep fakes, voice cloning, accelerating password cracking, or generating phishing emails and software exploits.
  6. Environmental and socioeconomic harms: The significant energy consumption and carbon footprint associated with AI applications, contributing to climate change and raising concerns about environmental sustainability.

Key features:

  - Concludes that AI ethics and responsibility need to be reflected upon and addressed across five dimensions: Research, Education, Development, Operation, and Business Model.
  - Classifies real-world cases of unethical or irresponsible uses of AI in its discussion section.
  - Identifies several key cases and issues that have prompted the development of taxonomies, conceptual models, and official regulations intended to better understand these problems and propose potential solutions.

What do you think of this framework?

Feel free to share your thoughts or any related resources in the comments.

References/further reading

Cunha, P. R., & Estima, J. (2023). Navigating the Landscape of AI Ethics and Responsibility. In N. Moniz, Z. Vale, J. Cascalho, C. Silva, & R. Sebastião (Eds.), Progress in Artificial Intelligence (EPIA 2023). Lecture Notes in Computer Science, vol. 14115. Springer, Cham.

© 2024 MIT AI Risk Repository