Classification of multimodal translation errors in the entertainment industry: A proposal

Đorđević, Jasmina — University of Niš, Faculty of Philosophy, Serbia. ORCID iD: 0000-0001-6545-5581
Stamenković, Dušan — Södertörn University, School of Culture and Education, English language; University of Niš, Faculty of Philosophy, Serbia. ORCID iD: 0000-0002-0121-4591

2023 (English). In: Translator (Manchester), ISSN 1355-6509, E-ISSN 1757-0409, Vol. 29, no. 3, p. 265-280. Article in journal (Refereed). Published.
Abstract [en]

Most translation tasks in the entertainment industry involve multiple modes of communication, i.e. they are multimodal rather than solely language-based. A translator is expected to analyse, evaluate and transfer each of those modes in order to render an accurate translation of the source text. This is especially important in films, documentaries, TV and animated shows – multimodal scripts that are localised for various contexts. An important step in the translation process in the entertainment industry should be the identification of translation errors in the final product, based on a proper translation error classification. Given that the available translation error classifications rely solely on linguistic modes of communication, the aim of this paper is to propose a multimodal translation error classification grounded in the multimodality of the scripts to be translated, and thus to provide a reliable tool for quality checking the final translation product in the entertainment industry. In that way, translators in this industry will be alerted to recognise elements (e.g. tone of voice, facial expressions, proximity) present in multimodal scripts, where both the source and the target texts, as essential parts of those scripts, are multimodal products.

Place, publisher, year, edition, pages
Taylor & Francis Group, 2023. Vol. 29, no 3, p. 265-280
Keywords [en]
Translation error classification; Classification proposal; Multimodal translation; Multimodality; Entertainment industry
National Category
General Language Studies and Linguistics
Identifiers
URN: urn:nbn:se:sh:diva-49871
DOI: 10.1080/13556509.2021.2024654
ISI: 000850122400001
Scopus ID: 2-s2.0-85137940765
OAI: oai:DiVA.org:sh-49871
DiVA id: diva2:1694256
Available from: 2022-09-08. Created: 2022-09-08. Last updated: 2023-10-31. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Stamenković, Dušan

By author/editor

Đorđević, Jasmina; Stamenković, Dušan