Ugh. Don’t get me started.
Most people don’t understand that the only thing it does is ‘put words together that usually go together’. It doesn’t know if something is right or wrong, just if it ‘sounds right’.
Now, if you throw in enough data, it’ll kinda sorta make sense with what it writes. But as soon as you try to verify the things it writes, it falls apart.
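To make that concrete, here’s a tiny toy sketch (a bigram model, nothing like a real LLM in scale, and the corpus is made up) of what purely statistical next-word prediction looks like: it only learns which words tend to follow which, and nothing in it ever checks whether a sentence is true.

```python
# Toy illustration only (a bigram model, not a real LLM): it learns which
# words tend to follow which, with no notion of whether a claim is true.
from collections import Counter, defaultdict
import random

# Made-up corpus for the sake of the example.
corpus = (
    "the city has a museum . the city has a stadium . "
    "the stadium hosted the olympics . the museum has paintings ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=10):
    words = [start]
    for _ in range(length):
        options = follows[words[-1]]
        if not options:
            break
        # Pick the next word in proportion to how often it followed the last one.
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
# Might produce something like: "the stadium hosted the olympics . the city has a museum ."
# It sounds fluent, but nothing in the code asks whether the city ever hosted the Olympics.
```

Scale that idea up by a few billion parameters and you get text that sounds right; whether it is right is a separate question the model never asks.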
I once asked it to write a small article with a bit of history about my city and five interesting things to visit. In the history bit, it confused two people with similar names who lived 200 years apart. In the ‘things to visit’, it listed two museums by name that are hundreds of miles away, and invented another museum that doesn’t exist. It also happily told me to visit our Olympic stadium. While we do have a stadium, I can assure you we never hosted the Olympics. I’d remember that, as I’m older than said stadium.
The scary bit is: what it wrote was lovely. If you read it, you’d want to visit for sure. You’d have no clue that it was wholly wrong, because it sounds so confident.
AI has its uses. I’ve used it to rewrite a text that I already had, and it does fine with tasks like that, because you give it the correct info to work with.
Use the tool appropriately and it’s handy. Use it inappropriately and it’s a fucking menace to society.
I know this is off topic, but every time I see you comment on a thread, all I can see is the Pepsi logo (I use the Sync app, for reference)
You know, just for you: I just changed it to the Coca-Cola Santa :D
Spreading the holly day spirit
We are all Dutch on this blessed day
We are all gekoloniseerd
Voyager doesn’t show user PFPs at all. :/
I gave it a math problem to illustrate this, and it got it wrong.
If it can’t do that, imagine adding nuance.
Well, math is not really a language problem, so it’s understandable that LLMs struggle with it more.
But it means it’s not “thinking” as the public perceives AI.
Hmm, yeah, AI never really did think. I can’t argue with that.
It’s really strange, if I mentally zoom out a bit, that we have machines that are better at language-based reasoning than logic-based reasoning (like math or coding).
Not really true though. Computers are still better at math. They’re even pretty good at coding, if you count compiling high-level code into assembly as coding.
But in this case we built a language machine to respond to language with more language. Of course it’s not going to do great at other stuff.
YMMV I guess. I’ve given it many difficult calculus problems to help me through, and it went well.
Wait, when did you do this? I just tried this for my town and researched each aspect to confirm myself. It was all correct. It talked about the natives that once lived here, how the land was taken by Mexico, then granted to some dude in the 1800s. The local attractions were spot on and things I’ve never heard of. I’m…I’m actually shocked and I just learned a bunch of actual history I had no idea of in my town 🤯
I did that test late last year, and repeated it with another town this summer to see if it had improved. Granted, it made fewer mistakes, but still very annoying ones, like placing the tourist information office at a completely incorrect, non-existent address.
I assume your result also depends a bit on what town you try. I doubt it has really been trained with information pertaining to a city of 160,000 inhabitants in the Netherlands. It should do better with the US, I’d imagine.
The problem is it doesn’t tell you it has knowledge gaps like that. Instead, it chooses to be confidently incorrect.
Only 85k pop here, but yeah. I imagine it’s half YMMV, half straight up luck that the model doesn’t hallucinate shit.