- cross-posted to:
- technology@lemmy.world
(cont’d)
…with “key takeaways” and regurgitated paragraphs that all follow the same format. It’s gross, and yet it generates an article long enough, with enough keywords, to show up on Google.
Reposting my comment from another similar thread to show that this is easily fixable, and that you should be wary of any non-reputable news source anyway.
I was curious how current LLMs might handle this with proper instructions, so I asked ChatGPT: “What can you tell me about this Reddit post? Would you write a news article about this? Analyze the trustworthiness of this information:” and pasted the text from the post. Here’s a part of its reply:
So it’s not even an issue with current models, just bad setup. An AutoGPT-style agent with several fact-checking questions added in could easily filter this stuff.
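Something like this, as a rough sketch (Python with the OpenAI client; the model name, the questions, and the YES/NO convention are just my own placeholders, not how any particular AutoGPT setup actually works):

```python
# Sketch of an automated fact-check gate before reposting/publishing an article.
# Assumes the OpenAI Python client; model name, questions, and pass rule are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FACT_CHECK_QUESTIONS = [
    "Are the claims in this text consistent with well-known facts?",
    "Does this text cite or reference any verifiable sources?",
    "Would a reputable outlet publish this as-is?",
]

def looks_trustworthy(post_text: str) -> bool:
    """Ask the model each fact-checking question; every answer must start with YES."""
    for question in FACT_CHECK_QUESTIONS:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Answer with YES or NO and one short reason."},
                {"role": "user", "content": f"{question}\n\n---\n{post_text}"},
            ],
        )
        answer = reply.choices[0].message.content.strip().upper()
        if not answer.startswith("YES"):
            return False
    return True
```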
Dragonflight is the latest expansion of World of Warcraft, so that last bullet point is wrong.
Half of the parts I cut ([…]) are ChatGPT mentioning its 2021 knowledge cutoff and suggesting you double-check that info. It mentioned that in this case as well.
If it were an AutoGPT-style agent with internet access, I think those mentions would trigger an automated online lookup to fact-check the claim.
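Roughly, the agent would just watch for that kind of hedge in the reply and kick off a search before trusting the claim. Another rough sketch, with the search tool stubbed out since it depends entirely on what the agent is wired up to:

```python
# Sketch of how an AutoGPT-style agent with internet access might react when the
# model hedges about its 2021 knowledge cutoff: spot the hedge, then fall back to
# an online lookup before accepting the claim. web_search() is a placeholder for
# whatever search tool the agent actually has.
import re

CUTOFF_HEDGE = re.compile(
    r"knowledge cutoff|as of (my last update|september 2021)|i cannot verify",
    re.IGNORECASE,
)

def web_search(query: str) -> list[str]:
    """Placeholder for the agent's real search tool (would return result snippets)."""
    return []

def needs_online_check(model_reply: str) -> bool:
    """True if the reply hedges in a way that should trigger a live lookup."""
    return bool(CUTOFF_HEDGE.search(model_reply))

def fact_check(claim: str, model_reply: str) -> bool:
    """Accept the claim only if the model didn't hedge, or a search result backs it up."""
    if not needs_online_check(model_reply):
        return True
    snippets = web_search(claim)
    return any(claim.lower() in s.lower() for s in snippets)
```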