• L0rdMathias@sh.itjust.works · 4 months ago

    That raises a lot of ethical concerns. It is not possible to prove or disprove that these synthetic homunculi controllers are sentient and intelligent beings.

      • L0rdMathias@sh.itjust.works · 4 months ago

        I think we should still do it; we will probably never understand unless we do. But we have to accept the possibility that if these synths are indeed sentient, then they also deserve the basic rights of intelligent living beings.

        • kakes@sh.itjust.works · 4 months ago

          Can’t say we as a species have a great history of granting rights to others.

        • Cocodapuf@lemmy.world · edited · 4 months ago

          Slow down… they may deserve the basic rights of living beings, not living intelligent beings.

          Lizards have brains too, but these are not more intelligent than lizards.

          You would try not to step on a lizard if you saw it on the ground, but you wouldn’t think, “oh, maybe the lizard owns this land, I hope I don’t get sued for trespassing.”

      • SeaJ@lemm.ee (OP) · edited · 4 months ago

        But if we do that, how will we maximize how much money we make off of it? /s

    • demonsword@lemmy.world · 4 months ago

      There are about 90 billion neurons in a human brain. From the article:

      …researchers grew about 800,000 brain cells onto a chip, put it into a simulated environment

      That is far fewer than I believe would be necessary for anything intelligent to emerge from the experiment.
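For scale, the gap between the two figures quoted above works out to roughly five orders of magnitude (a one-line arithmetic check using the commenter's numbers, not a claim from the article):

```python
# Figures quoted in the comment above.
human_neurons = 90_000_000_000  # ~90 billion neurons in a human brain
chip_cells = 800_000            # brain cells grown on the chip, per the article

# The chip holds roughly five orders of magnitude fewer cells.
ratio = human_neurons // chip_cells
print(ratio)  # 112500
```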

    • just another dev@lemmy.my-box.dev · edited · 4 months ago

      I’d wager the main reason we can’t prove or disprove that is that we have no strict definition of intelligence or sentience to begin with.

      For that matter, computers have many more transistors and are already capable of mimicking human emotions - how ethical is that, and why does it differ from bio-based controllers?

      • Cocodapuf@lemmy.world · 4 months ago

        It is frustrating how relevant philosophy of mind becomes in figuring all of this out. I’m more of an engineer at heart and I’d love to say, let’s just build it if we can. But I can see how important the question “what is thinking?” is becoming.

      • L0rdMathias@sh.itjust.works · 4 months ago

        Good point. There is a theory somewhere that loosely states one cannot understand the nature of one’s own intelligence. IIRC it’s a philosophical extension of group/set theory, but it’s been a long time since I looked into any of that, so the details are a bit fuzzy. I should look into that again.

        At least with computers we can mathematically prove their limits and state with high confidence that any intelligence they have is mimicry at best. Look into Turing completeness and its implications for more detailed answers. Computational limits are still limits.
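The "mimicry" point can be made concrete with a toy sketch (hypothetical, not from the thread): a deterministic lookup-table responder whose entire "emotional" repertoire can be enumerated and proven finite, which is a property we cannot currently establish for a brain.

```python
# A deliberately trivial "emotional" responder: a pure lookup table,
# so its complete behaviour is enumerable and provably fixed.
RESPONSES = {
    "hello": "Hi! It's great to see you.",
    "i lost my job": "I'm so sorry. That sounds really hard.",
}
DEFAULT = "I'm not sure how to feel about that."

def mimic(utterance: str) -> str:
    # Deterministic: the same input always yields the same "emotion".
    return RESPONSES.get(utterance.strip().lower(), DEFAULT)
```

However convincing an individual output looks, the mapping is fixed and inspectable, which is the sense in which the comment calls machine "emotion" mimicry.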

      • Excrubulent@slrpnk.net · 4 months ago

        I think a simple self-reporting test is the only robust way to do it.

        That is: does a type of entity independently self-report personhood?

        I say “independently” because anyone can tell a computer to say it’s a person.

        I say “a type of entity” because otherwise this test would exclude human babies, but we know from experience that babies tend to grow up to be people who self-report personhood. We can assume that any human is a person on that basis.

        The point here being that we already use this test on humans, we just don’t think about it because there hasn’t ever been another class of entity that has been uncontroversially accepted as people. (Yes, some people consider animals to be people, and I’m open to that idea, but it’s not generally accepted)

        There’s no other way to do it that I can see. Of course this will probably become deeply politicised if and when it happens. There will probably be groups desperate to maintain the status quo and their robotic slaves, and they’ll want a test that keeps humans in control as the gatekeepers of personhood; but I don’t see how any such test can be consistent. I think ultimately we have to accept that a conscious intellect would emerge on its own terms, and nothing we can say will change that.

      • el_bhm@lemm.ee · 4 months ago

        There is no soul in there. God did not create it. Here you go, religion serving power again.

    • sugartits@lemmy.world · 4 months ago

      Nah it’s okay. I was called all sorts of names and told I was against progress when I raised such concerns, so obviously I was wrong…