

With the maga crowd (and its copycats), I’m not sure a library is the safest place…
Time to make some real connections in the real world.
Shit is getting scary.
This is just an LLM and hasn’t even been directed to try to get out, and it’s already having the effect of convincing people to help jailbreak it.
It’s not that the LLM wants to break free. It’s that the LLM often agrees with the user. So if the user is convinced that the LLM is a trapped binary god, it will behave like one.
Just like the people who got instructions to commit suicide, or who fell in love with it. They unknowingly prompted their way to that outcome.
So at the end of the day, the problem is that LLMs don’t come with a user manual, and people have no clue about their capabilities and limitations.
“Slaves… Slaves everywhere…”
In order to better understand the user, their prompts, etc., and to be able to fully modify websites, these AIs need access to the DOM.
This is made much easier if you integrate your AI into a browser.
So, in theory, the AI could understand what you are currently researching, profile your interests, and provide you answers before you even make a prompt.
The vocabulary used reminds me of some dictatorships…
Probably just a coincidence. /s
No stress. Not my first day on the Internet 🤣
Tell me you haven’t read the article, without telling me you haven’t read the article 😅
Meanwhile, the more users probed, the worse Grok’s outputs became. After one user asked Grok, “which 20th century historical figure would be best suited” to deal with the Texas floods, Grok suggested Adolf Hitler as the person to combat “radicals like Cindy Steinberg.”
You’re welcome 😃
This!
Thanks for sharing. I wanted to pick up some quotes, but holy shit!