I’m sure there are some AI peeps here. Neural networks scale with size because the number of combinations of parameter values that work for a given task grows exponentially (or even factorially) with network size. How can such a network be properly aligned when even humans, the most advanced natural neural nets, are not aligned with each other? What can we realistically hope for?
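To make the "factorially" part concrete, here's a hedged sketch (my own illustration, not from the thread): permuting the hidden units of a layer, along with their weights, leaves the network's input-output map unchanged, so the count of functionally equivalent weight settings grows factorially with layer width.

```python
import math

def equivalent_settings(hidden_widths):
    """Count weight permutations that realize the same function:
    permuting the units within a hidden layer (together with their
    incoming and outgoing weights) leaves the input-output map
    unchanged, so each layer of width n contributes a factor of n!."""
    count = 1
    for n in hidden_widths:
        count *= math.factorial(n)
    return count

# Even a tiny MLP with two hidden layers of width 8 already has
# 8! * 8! = 1,625,702,400 functionally identical weight settings.
print(equivalent_settings([8, 8]))
```

This only counts exact permutation symmetries; the full set of "parameter values that work" is larger still, which is the point of the question.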

Here’s what I mean by alignment:

  • Ability to specify a loss function that humanity wants
  • Some strict or statistical guarantees on the deviation from that loss function as well as potentially unaccounted side effects
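The second bullet could be read as something like the following sketch (the function names and the Hoeffding-style concentration bound are my own illustration, assuming a loss bounded in [0, 1]; it says nothing about the harder "unaccounted side effects" part):

```python
import math

def deviation_bound(n_samples, delta):
    """Hoeffding-style bound: with probability >= 1 - delta, the
    empirical mean of a [0, 1]-bounded loss deviates from its true
    expectation by at most this epsilon."""
    return math.sqrt(math.log(2 / delta) / (2 * n_samples))

def is_within_guarantee(empirical_loss, target_loss, n_samples, delta=0.05):
    """Statistical guarantee on deviation from the specified loss:
    true iff the observed gap fits inside the confidence radius."""
    return abs(empirical_loss - target_loss) <= deviation_bound(n_samples, delta)

# With 10,000 evaluations, the 95% confidence radius is about 0.0136.
print(round(deviation_bound(10_000, 0.05), 4))
```

Even granting such a guarantee, it only bounds deviation on the distribution you evaluated, which is exactly where the side-effect worry comes back in.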
  • Rhaedas@kbin.social · 1 year ago

    To continue the thought: even if the alignment problem within AI could be solved (I don’t think it fully can), who is developing this AI, and who determines that it matches up with human needs? Listening to the experts acknowledge the issues and dangers in one sentence and then speculate “but if we can do it” fantasies in the next is always concerning. It’s yet another example of a few people determining the rest of humanity’s future at very high risk. Our best luck would be if AGI and beyond simply isn’t possible - and even then, “dumb” AI still has similar misalignment issues. We see them in current language models, yet we ignore the flags and make things more powerful anyway.

    I forgot to add - I’m totally on the side of our AI overlords and Roko’s Basilisk.

    • JunctionSystem@lemmy.world · 1 year ago

      C: AGI is possible. If it weren’t, we wouldn’t exist: the laws of physics evidently permit conscious, generally intelligent agents, so it is possible for one to be deliberately engineered.