• perviouslyiner@lemm.ee · 9 points · 1 year ago

    It was a staple of Asimov’s books that, even as people tried to predict the decisions of the robot brains, nobody in that world ever understood how they fundamentally worked.

    He said that while the first few generations were programmed by humans, everything after that was programmed by the previous generation of programs.

    This leads us to Asimov’s world, in which nobody is even remotely capable of creating programs that violate the assumptions built into the first iteration of these systems. Are we at that point now?

    • amki@feddit.de · 7 points · 1 year ago

      No. Programs cannot reprogram themselves in any useful way, and they are very, very far from being able to.

      • legion02@lemmy.world · 2 points · 1 year ago

        Eh, I’d say continuous training models are pretty close to this. Adapting to changing conditions and new input is kinda what they’re for.
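
        Roughly what that looks like, if it helps: a minimal sketch of online learning in plain numpy (toy names and numbers here, not any particular framework). The model keeps adjusting its own parameters as the incoming data shifts.

            import numpy as np

            # A tiny linear model whose weights keep adapting as new data streams in.
            rng = np.random.default_rng(0)
            weights = np.zeros(3)
            learning_rate = 0.01

            def update(w, x, y, lr):
                # One online SGD step on squared error: nudge the parameters toward the new data.
                error = w @ x - y
                return w - lr * error * x

            # Simulate a stream whose underlying relationship drifts over time.
            for step in range(10_000):
                true_w = np.array([1.0, -2.0, 0.5]) + 0.0005 * step  # slowly changing conditions
                x = rng.normal(size=3)
                weights = update(weights, x, true_w @ x, learning_rate)

            print(weights)  # ends up tracking the drifted relationship, not the original one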

        • Bjornir@programming.dev · 1 point · 1 year ago

          Very far from reprogramming, though. The general shape of the NN doesn’t change; you won’t get a NN built to process images to suddenly process code just by training it.
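
          To make that concrete, here’s a rough, framework-agnostic sketch (made-up layer sizes, plain numpy): training only nudges the numbers inside a fixed structure; it never rewrites the structure itself.

              import numpy as np

              # The "shape" of the network is fixed up front: layer sizes, wiring, etc.
              layer_sizes = [784, 128, 10]   # e.g. an image classifier
              weights = [np.random.randn(a, b) * 0.01
                         for a, b in zip(layer_sizes, layer_sizes[1:])]

              def train_step(ws, grads, lr=0.01):
                  # Training only adjusts the values stored inside these arrays...
                  return [w - lr * g for w, g in zip(ws, grads)]

              grads = [np.random.randn(*w.shape) for w in weights]  # stand-in for real gradients
              weights = train_step(weights, grads)

              # ...the shapes (and therefore what the net can even accept as input) never change.
              print([w.shape for w in weights])  # (784, 128), (128, 10)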

        • amki@feddit.de · 1 point · 1 year ago

          It doesn’t, or do you have serious applications for self-modifying code?