Here’s the kicker: based on these AI-assigned definitions in the updated terms, your access to certain content might be limited, or even cut off. You might not see certain tweets or hashtags. You might find it harder to get your own content seen by a broader audience. The idea isn’t entirely new; we’ve heard stories of shadow banning on Twitter before. But the automation and AI involvement are making it more sophisticated and all-encompassing.

  • coheedcollapse@lemmy.world · 1 year ago

    Stuff like this is my biggest reason to believe that the current anti-AI movement is incredibly misled.

    They want to stop open scraping, but if they’re successful, only companies like Twitter, Google, Disney, Getty, Adobe, whatever, are going to have their own closed systems that they’ll either charge for or keep for themselves to replace workers, instead of the tech being open to all of us.

    Open scraping is the only saving grace of all of this tech because it’s going to keep at least a number of options entirely free for anyone who wants to use them.
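
    To make “open scraping” concrete, here is a minimal sketch of the kind of thing anyone can do with freely available tools; the URL and tag are placeholders, and it assumes the requests and beautifulsoup4 libraries are installed:

    ```python
    # Fetch a public web page and pull out its headings.
    # The URL below is a placeholder, not a real scraping target.
    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://example.com/public-page", timeout=10)
    resp.raise_for_status()

    soup = BeautifulSoup(resp.text, "html.parser")
    for heading in soup.find_all("h2"):
        print(heading.get_text(strip=True))
    ```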

    • SpeakinTelnet@sh.itjust.works · 1 year ago

      I’m not anti-AI, but the movement is also strongly against megacorps scraping personal data, not just open scraping.

      As a simple example, Copilot has been under heavy fire from the anti-AI community for a while now due to its use of open-licensed code without attribution.

      • coheedcollapse@lemmy.world · 1 year ago

        But it won’t matter, because any megacorp scraping data will just put it in their TOS, and literally zero percent of these people are going to get off Twitter or Bluesky or whatever big website ends up with an exemption to whatever law is passed to stop the scraping of data.

        The only groups who will suffer will be researchers, open source software builders, and pretty much anyone who isn’t a corporation already.

        There’s no solution to this that will leave everyone 100% happy, but keeping the open internet open, and holding on to the idea that has persisted since the beginning of the internet (that whatever you put out there is fair game for viewing), is preferable to the alternative.

    • Cethin@lemmy.zip · 1 year ago

      Isn’t most of the issue people have with open scraping that it’s used to create copyrighted content that they then sell back to us? It’s not just the scraping, but that they also want to own the output.

      • coheedcollapse@lemmy.world · 1 year ago

        It’s controversial, to be sure, but I’ve always been of the mind that if someone does something transformative with one of my works, they’ve generated something different, despite being “inspired” by my work.

        ML gens are transformative by nature, so I don’t think my work being one of millions of datapoints used to create something is a huge deal.

        That said, I’m also an advocate of preservation through piracy, so I’d be a hypocrite if I wanted to go copyright mad at bots for looking at images I uploaded on the public internet.

  • MNByChoice@midwest.social · 1 year ago

    Twitter has been selling their entire database for years. Those that purchased the data have likely been doing everything with it already.

  • argo_yamato@lemm.ee · 1 year ago

    Left Twitter a few months ago, and it seems like every day there’s a new reason that made leaving a good call.

  • lunaticneko@lemmy.ml · 1 year ago

    As a Thai, I am very intrigued by what the AI-trained version of @sugree will be like.

    For context, Sugree made numerous Nostradamus-like “prophecy” tweets that predated important events in modern Thai history, such as political movements, before disappearing after a lawsuit.

    • phillaholic@lemm.ee · 1 year ago

      Is this one of those accounts that made a ton of predictions, deleted the ones that didn’t come true, and only then got noticed for having tweeted accurate things?

      • lunaticneko@lemmy.ml · 1 year ago

        No. He just tweeted a whole damn lot, to the point that eventually anything and everything would come true anyway.

        (He did not make “explicit” predictions. Just random shit that happened to come true.)

  • bean@lemmy.world · 1 year ago

    Many people might overlook the fact that a significant portion of cutting-edge tools, including “AI”, large language models (LLMs), and Stable Diffusion (SD), are grounded in open source. This has drawn a broad spectrum of contributors, from novices to experts, who are diving into these technologies and pushing their boundaries every day.

    Among the various projects and platforms, Meta’s contribution is noteworthy, not necessarily because of altruism, but because of their strategic decision to release Llama 2 as open source.

    It’s natural for people to feel a mix of intrigue and caution towards new technologies: they’re attracted by the novelty, yet wary of the unknowns. This duality reflects the human drive to seek progress while also guarding against unforeseen consequences.
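
    As a rough illustration of what that openness means in practice, here is a minimal sketch of loading an open-weight model locally with the Hugging Face transformers library. The model ID and prompt are only examples, and the Llama 2 weights are gated behind Meta’s license, so this assumes access has already been granted and the weights downloaded:

    ```python
    # Minimal local text generation with an open-weight model.
    # "meta-llama/Llama-2-7b-chat-hf" is used as an example; it requires
    # accepting Meta's license on Hugging Face before it can be downloaded.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-2-7b-chat-hf",
    )

    result = generator("Open-weight models matter because", max_new_tokens=40)
    print(result[0]["generated_text"])
    ```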