nothing to see here :)

  • WhatAmLemmy@lemmy.world
    1 year ago

    Although the underlying technology is similar, AI upscaling is not a valuable propaganda tool, so it is not in any way, shape, or form the same as AI “fakes”.

    The intention with AI upscaling is to enhance existing detail and remove artefacts while increasing size and scale, not to create a completely new or false image that is different from the input source or changes its narrative. It’s closer to this than it is to deep fakes or propaganda.
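
    Roughly what that looks like in practice, as an untested sketch: it assumes opencv-contrib-python and a pre-trained ESPCN model file like the one used in the OpenCV examples, and the file names are placeholders.

    ```python
    # Minimal AI-upscaling sketch using OpenCV's dnn_superres module
    # (requires opencv-contrib-python; "ESPCN_x4.pb" is a pre-trained
    # super-resolution model and the file names here are placeholders).
    import cv2

    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("ESPCN_x4.pb")       # learned super-resolution weights
    sr.setModel("espcn", 4)           # model name and upscale factor

    low_res = cv2.imread("input.jpg")
    upscaled = sr.upsample(low_res)   # the network infers the added detail

    cv2.imwrite("upscaled.jpg", upscaled)
    ```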

    • jet@hackertalks.com
      1 year ago

      You’re not wrong. But that’s done by inferring what should be there. So it’s still going to appear to be faked, because in a very real sense it is faked. It’s faked within a narrow band of expectations, but it is faked. A better way to send out photos like this is to include both the original and the enhanced version in the publication, to remove doubt.
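
      Something like this rough, untested sketch is all it would take to ship both versions side by side (file names are placeholders):

      ```python
      # Rough sketch: publish the original next to the AI-enhanced version
      # so readers can compare them (file names are placeholders).
      import cv2

      original = cv2.imread("original.jpg")
      enhanced = cv2.imread("upscaled.jpg")

      # Bring the original up to the enhanced image's size with plain
      # nearest-neighbour resizing, so nothing new is inferred for the reference.
      h, w = enhanced.shape[:2]
      reference = cv2.resize(original, (w, h), interpolation=cv2.INTER_NEAREST)

      cv2.imwrite("comparison.jpg", cv2.hconcat([reference, enhanced]))
      ```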

      • WhatAmLemmy@lemmy.world
        1 year ago

        True. Upscaling is likely to always trigger a positive with any AI-analysis tool, unless the tool has been calibrated to detect upscaling, probably with some reference to, or pre-processing of, the original image.

        So yes… Honestly, a visible disclaimer and a reference to the original should be a requirement for ANY digital image adjustment in ANY work of non-fiction, including adjustments made in Photoshop, like making a model skinnier or removing stretch marks. You shouldn’t be able to misrepresent reality to consumers without explicitly telling them it’s a misrepresentation.
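
        For example, a publisher’s pipeline could stamp that notice automatically. A rough, untested sketch with Pillow, where the file names and URL are only placeholders:

        ```python
        # Sketch: stamp a visible disclaimer onto an adjusted image and keep a
        # machine-readable reference to the unedited original in the PNG metadata
        # (Pillow; file names and the URL are placeholders).
        from PIL import Image, ImageDraw
        from PIL.PngImagePlugin import PngInfo

        img = Image.open("adjusted.png").convert("RGB")

        note = "Digitally altered image - original: https://example.org/original.png"
        ImageDraw.Draw(img).text((10, img.height - 20), note, fill="white")  # visible caption

        meta = PngInfo()
        meta.add_text("Disclaimer", note)  # same notice in the file's metadata
        img.save("adjusted_disclosed.png", pnginfo=meta)
        ```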