Governing the Automated Welfare State: Translations between AI Ethics and Anti-discrimination Regulation
Lussi, Ellinor Blom, Lund University, Sweden. ORCID iD: 0000-0002-7994-2695
Larsson, Stefan, Lund University, Sweden. ORCID iD: 0000-0002-8855-300X
Högberg, Charlotte, Lund University, Sweden. ORCID iD: 0000-0002-1462-6325
Kaun, Anne, Södertörn University, School of Culture and Education, Media and Communication Studies. ORCID iD: 0000-0002-5879-2130
2024 (English) In: Nordisk välfärdsforskning | Nordic Welfare Research, ISSN 1799-4691, E-ISSN 2464-4161, Vol. 9, no 2, p. 180-192. Article in journal (Refereed). Published.
Abstract [en]

There is an increasing demand to utilize technological possibilities in the Nordic public sector, and automated decision-making (ADM) has been deployed in some areas towards that end. While ADM is associated with a range of benefits, research shows that its use, with elements of AI, also carries risks of discrimination and unfair treatment, which has stimulated a flurry of normative guidelines. This article explores how a sample of these international, high-level principled ideas on fairness translates into the specific governance of ADM in national public-sector authorities in Sweden. It does so by answering the question of how ideas about AI ethics and fairness are considered in relation to anti-discrimination regulation in Swedish public-sector governance. Using a Scandinavian institutionalist approach to translation theory, we trace how ideas about AI governance and public-sector governance translate into state-authority practice; specifically, regarding the definition of ADM, how AI has impacted it as both discourse and technology, and the ideas of ‘ethics’ and ‘discrimination’. The results indicate that there is variance in how different organizations understand and translate ideas about AI ethics and discrimination. These tensions need to be addressed in order to develop AI governance practices.

Place, publisher, year, edition, pages
2024. Vol. 9, no 2, p. 180-192
Keywords [en]
Automated Decision-Making, public-sector governance, AI ethics, discrimination, fairness
National Category
Media and Communications
Identifiers
URN: urn:nbn:se:sh:diva-54324
DOI: 10.18261/nwr.9.2.6
Scopus ID: 2-s2.0-85196781289
OAI: oai:DiVA.org:sh-54324
DiVA, id: diva2:1875691
Available from: 2024-06-23 Created: 2024-06-23 Last updated: 2025-02-07 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Kaun, Anne
