• 0 Posts
  • 32 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • LLMs without some sort of symbolic reasoning layer aren’t actually able to hold a model of their context and the relationships within it. They predict the next token, but fall apart when you change the numbers in a problem or add some negation to the prompt.

    Awesome for protein research, summarization, speech recognition, speech generation, deep fakes, spam creation, RAG document summary, brainstorming, content classification, etc. I don’t even think we’ve found all the patterns they’d be great at predicting.

    There are tons of great uses, but just throwing more data, memory, compute, and power at transformers is likely to hit a wall without new models. All the AGI hype is a bit overblown. That’s not from me; that’s Noam Chomsky: https://youtu.be/axuGfh4UR9Q?t=9271.
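
To illustrate what “predict the next token” means at its simplest, here is a hedged toy sketch (a bigram counter, far cruder than a transformer, and not anything from the linked talk): the model only tracks which token tends to follow which, with no symbolic model of the quantities involved.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows which in a tiny corpus.
# The corpus and tokenization are made up for illustration.
corpus = "two plus two is four . three plus one is four . two plus three is five .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Greedily pick the most frequent continuation seen in training."""
    counts = following[token]
    return counts.most_common(1)[0][0] if counts else None

# "is" is most often followed by "four" in the training text, so the model
# says "four" no matter what numbers actually precede it in a new prompt.
print(predict_next("is"))
```

Change the numbers in the prompt and the surface statistics stay the same, which is the failure mode described above, just in miniature.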

  • The headline stat is a misinterpretation of the study, which was done by Arkose Labs, a company that “provides businesses with lasting bot prevention and account security by sapping the financial motivations of cybercriminals.”

    That’s pretty vague, but from skimming it, it sounds like they prevent automated account creation and takeover. The stat comes from the companies they have access to (who need bot protection badly enough to pay for it), and 76% of activity on those customers’ login and account-creation endpoints was malicious. That makes a lot more sense. All the various hacks and credential leaks result in bots banging stolen credentials into high-value sites.
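
The pattern described — bots hammering login endpoints with leaked credentials — can be sketched with a simple heuristic. This is a hypothetical illustration, not Arkose Labs’ actual method: credential stuffing tends to look like one source IP making many login attempts, almost all failing, spread across many distinct usernames. The function name, thresholds, and event format are all made up for the sketch.

```python
from collections import Counter

def flag_stuffing_ips(login_events, min_attempts=10, max_success_rate=0.1):
    """Flag source IPs whose login traffic looks like credential stuffing.

    login_events: iterable of (source_ip, username, succeeded) tuples.
    Thresholds are illustrative, not tuned values.
    """
    attempts, successes = Counter(), Counter()
    targets = {}
    for ip, user, ok in login_events:
        attempts[ip] += 1
        successes[ip] += ok
        targets.setdefault(ip, set()).add(user)

    flagged = []
    for ip, n in attempts.items():
        # Many attempts, almost all failing, across many different usernames:
        # the signature of replaying a leaked credential list.
        if (n >= min_attempts
                and successes[ip] / n <= max_success_rate
                and len(targets[ip]) >= min_attempts // 2):
            flagged.append(ip)
    return flagged
```

A real bot-prevention product layers far more signal on top (device fingerprinting, proof-of-work challenges, IP reputation), but the core observation is the same: legitimate users don’t fail logins across hundreds of different accounts.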