Nowhere was this expansive and provocative approach to animation more evident than in the Animafest Scanner’s particular focus on the stakes of originality and authenticity in relation to digital technology, a focus underpinned by the multiplicity of cautionary tales I have encountered in my own ongoing research into digital de-aging, Hollywood’s virtual recreation of youth, and the online video production of deepfakes as the culmination of the computer’s growing powers of distortion. Thanks to funding from King’s Digital Futures Institute as part of its Microgrant scheme, I was able to present a paper at Animafest Scanner on manipulative technologies and novel machine learning processes, looking specifically at the TrueSync software – an artificial intelligence engine that creates sophisticated lip-synced visual translations. To date, TrueSync has been used primarily to adapt films for foreign release (whereby the performers’ original mouth movements are digitally retouched to match new language dubs) and to remove verbal expletives from footage in post-production. As a computer programme that relies on the transformation of facial features for the global transfer of popular film, TrueSync certainly sidesteps the clumsy synchrony of prior dubbing practices across cinema history, and in doing so provides flexible opportunities for multilingual film production. Yet its possible applications also raise timely questions about where the real-time manipulation of ‘live’ photographic footage (as offered by SyncWords and EzDubs) might lead, not least the collapse (or perhaps sharpening) of cultural specificity and national identity once images have the ability to speak in tongues.
From image-generating interfaces (DALL-E, Stable Diffusion, Midjourney) to lip-sync visualisation software and synthetic speech platforms (Resemble AI), deep learning algorithms and artificial intelligence systems like the neural network lab Flawless’ TrueSync undoubtedly now form an emergent part of the contemporary media and entertainment industries. Furthermore, the collision between technology and the face (as other authors on this blog have noted) has meant that digitised physiognomies have regularly been the bearers of how and where AI technologies are applied within new media production, and as such they are central to how we might better understand the new encounters that the human body is having with the digital through its ongoing technological mediation. The specific implications of the TrueSync software are yet to be fully felt across the film industry, yet recent news in Variety and elsewhere in the Hollywood trade press that Scarlett Johansson is to take legal action against an artificial intelligence app that utilises her facial likeness to create a digital doppelgänger suggests that TrueSync may not be immune to the kinds of crisis narratives that continue to define technological manipulations of the body. TrueSync’s story as a tool of digitally-assisted augmentation, with its techniques of video dubbing (or “vubbing”), is thus likely to be told through the familiar narratives of loss and gain: what affordances does digital processing offer, and where might its limitations and untruths lie?
With speakers, filmmakers, and creative practitioners from across the globe calling for fresh ways of understanding agency, performance, and presence in the digital era, the conclusions drawn around the acceleration of synthespians, proxies, and avatars surface a set of anxieties about the very disposability of the body that should be of interest to those working in a number of fields – including, of course, our very own Centre for Technology and the Body, which – like the Animafest Scanner – asks us to consider not only how we might live with technology, but also the potentially worrying terms under which technology might be living with us, face to interface.