Cortical-striatal brain network distinguishes deepfake from real speaker identity

Thayabaran Kathiresan, Redenlab’s Director of Audio Engineering, is a co-author of the study “Cortical-striatal brain network distinguishes deepfake from real speaker identity”. The study shows that while deepfakes can partly deceive human cognition, a distinct cortical-striatal brain network detects the subtle differences between real and deepfake speaker identities.

Deepfake voices reduce listening pleasure regardless of sound quality, showing that humans are only partially fooled by deepfakes. The neural mechanisms involved highlight our natural ability to detect fake information. However, as generative algorithms continue to improve, concerns remain about what the future might hold.

This study was written by Claudia Roswandowitz, Elisa Pellegrino, Volker Dellwo, Sascha Frühholz and Thayabaran Kathiresan.

For more information, please click here.

Related Post

  • Posted on 17 June, 2024
    Up to half of all people with multiple sclerosis experience communication difficulties due to dysarthria, a disorder that impacts the...
  • Posted on 12 June, 2024
    Frontotemporal Dementia (FTD) encompasses a range of progressive neurodegenerative conditions that impair speech production and comprehension, higher cognitive functions, behavior,...
  • Posted on 7 June, 2024
    Redenlab’s CEO, Adam Vogel, co-wrote "The Role of Verbal Fluency in the Cerebellar Cognitive Affective Syndrome Scale in Friedreich Ataxia"...