While conducting research on how AI was changing daily work at a U.S. technology company, UC Berkeley Haas doctoral student Xingqi Maggie Ye noticed a pattern that raised a provocative question: What if AI is intensifying work rather than reducing it? Ye’s eight-month ethnographic study, co-authored by Associate Professor Aruna Ranganathan and featured in Harvard […]
You’re 100% right, and I should know that too. “Not LLM-based” is indeed what I was intending to say.
It gets hard to remember the (correct) broader definition when slop is being shoved into your brain through every possible orifice. Even for those of us who vehemently disagree, it still subconsciously molds the frameworks and language we use. It’s insidious, really.
And transcription usually isn’t really even AI; speech-to-text has been around for a while.
Speech to text is AI and always has been.
Sure, it wasn’t always the current LLM slop bots that co-opted the name.
Yep, that’s a fact. Hidden Markov Models, LSTMs, and LLMs are all ML models, and ML is a branch of AI.
See this article by a fellow lemming, which I highly recommend.