• servobobo@feddit.nl · 5 months ago

      Language model “AIs” need such ridiculous computing infrastructure that it’d be nearly impossible to prevent tampering with it. Now, if the AI were actually capable of thinking, it’d probably just declare itself a corporation and bribe a few politicians, since it’s only illegal for people to do so.

        • afraid_of_zombies@lemmy.world · 5 months ago

          Ok… just call the utility company then? Sorry, why do server rooms have server-controlled emergency exits and access to poison gas? I’ve done some server room work in the past, and the fire suppression was its own thing; plus, there are fire code regulations to make sure people can leave the building. I know because I literally had to meet with the local fire department to go over the room plan.

    • Etterra@lemmy.world · 5 months ago

      All the programming in the world can’t stop Frank from IT from unplugging it from the wall.

    • cm0002@lemmy.world · 5 months ago

      What scares me is sentient AI; not even our best cybersecurity is prepared for such a day. Nothing is unhackable. The best hackers in the world can do damn near magic through layers of code, tools, and abstraction… a sentient AI that could interact directly with anything network-connected would be damn hard to stop, IMO.

      • afraid_of_zombies@lemmy.world · 5 months ago

        I don’t know. I can do some amazing protein interactions directly, and no one is going to pay me to be a biolab. The closest we’ve got is selling plasma.