I do a lot of writing of various kinds, and I could not disagree more strongly. Writing is a part of thinking. Thoughts are fuzzy, interconnected, nebulous things, impossible to communicate in their entirety. When you write, the real labor is converting that murky thought-stuff into something precise. It’s not uncommon in writing to have an idea all at once that takes many hours and thousands of words to communicate. How is an LLM supposed to help you with that? The LLM doesn’t know what’s in your head; using it is diluting your thought with statistically generated bullshit. If what you’re trying to communicate can withstand being diluted like that without losing value, then whatever it is probably isn’t meaningfully worth reading. If you use LLMs to help you write stuff, you are wasting everyone else’s time.
Yeah, I agree. You can see this in all AI-generated stuff: none of it has any purpose, no intention.
As for the people who say it's saving them time: I have to ask what they're doing that can be replaced by AI, whether they're actually any good at it, and whether the AI has improved their work or just made it happen faster at the expense of quality.
I have turned off all predictive writing of any kind on my devices; it gets in my head and stops me from forming my own thoughts. I want my authentic voice, and I can't stand the idea of a machine prompting me with its own idea of what I want to say.
Like… we’re prompting the AI, but are they really prompting us?
Amen. In fact, I wrote a whole thing about exactly this – without an LLM! Like most things I write, it took me many hours and evolved many times, but I take pleasure in communicating something to the reader, in the same way that I take pleasure in learning interesting things from other people's writing.
I don’t think that sounds like a good way to make a good paper that effectively communicates something complex, for the reasons in my previous comment.