We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • TootSweet@lemmy.world · 11 months ago

    One frame from The Matrix where Morpheus says “you think that’s air you’re breathing?”, captioned instead with “you think that’s ‘agency’ making you do things?”

    Maybe it would be more accurate to say “so-and-so exhibited behaviors that included cheating, lying, and covering up” rather than using language that suggests people have free will. (There’s no dearth of philosophies that would say something not too far from that.)

    Even if humans are, in the end, essentially different in that way from any technology we’ve devised so far, we use convenient fictions for technology all the time. This page comes to mind.