Thanks to rapid advances in generative AI and a wealth of training data from human actors fed into its model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant ones. They also do a better job of matching facial expressions—the tiny movements that can speak for us without words.

But this technological progress also signals a much larger social and cultural shift. Increasingly, much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming harder and harder to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences.

“I think we might just have to say goodbye to finding out about the truth in a quick way,” says Sandra Wachter, a professor at the Oxford Internet Institute, who researches the legal and ethical implications of AI. “The idea that you can just quickly Google something and know what’s fact and what’s fiction—I don’t think it works like that anymore.”

  • sgibson5150@slrpnk.net

    If I generate a key pair, use the private key to sign a file, distribute the file, and then publish the public key somewhere like Facebook, any recipient of the file can verify that it originated from my Facebook account. A commercial certificate is not required to do this. Whether the Facebook account holder is actually me is a separate problem, but hopefully major social media platforms require at least a photo ID.

    Edit: Sorry, I said public certificate when I meant commercial certificate.
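    The workflow the comment describes can be sketched with OpenSSL (a sketch, assuming OpenSSL 1.1.1 or later; the filenames are illustrative, not from the comment):

    ```shell
    # 1. Generate an Ed25519 key pair and extract the public half,
    #    which is what would be published on e.g. a Facebook account.
    openssl genpkey -algorithm ed25519 -out signing_key.pem
    openssl pkey -in signing_key.pem -pubout -out signing_key.pub.pem

    # 2. Sign a file with the private key (-rawin: Ed25519 signs the
    #    raw message rather than a pre-computed digest).
    echo "hello, recipients" > message.txt
    openssl pkeyutl -sign -rawin -inkey signing_key.pem \
        -in message.txt -out message.txt.sig

    # 3. Any recipient holding the published public key can check that
    #    the file was signed by the matching private key.
    openssl pkeyutl -verify -rawin -pubin -inkey signing_key.pub.pem \
        -in message.txt -sigfile message.txt.sig
    ```

    The verify step exits nonzero if the file or signature has been tampered with, which is the assurance of origin the comment relies on; it says nothing about who controls the account that published the public key.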