This question right here perfectly encapsulates everything wrong with LLMs right now. They could be good tools, but the people pushing them have no idea what they even are. LLMs do not make decisions. All the decisions an LLM appears to make were made in the dataset. All those things an LLM does that make it seem intelligent were done or said by a human somewhere on the internet. It is a statistical model that determines what output is most likely to come next. That is it. It is nothing else. It is not smart. It does not and cannot make decisions. It is an algorithm that reproduces the patterns in its training data, and when those patterns don’t cover the question it produces convincing-looking gibberish instead.
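To make “most likely to come next” concrete, here’s a toy sketch of the idea using a bigram word model. This is an illustrative simplification I’m adding, not how an LLM is actually implemented: real models use neural networks over subword tokens, not word-pair counts. But the core operation of picking the statistically most likely continuation is the same.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in counts:
        # A context the model never saw. This toy version gives up;
        # a real LLM would still emit *something* plausible-looking.
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))    # "cat" (follows "the" twice, "mat" once)
print(predict_next(model, "zebra"))  # None: the word never appeared
```

Note the model never “understands” cats or mats; it only counts what came after what. Scale the counting up by many orders of magnitude and you get something that looks far more fluent, but the mechanism stays statistical.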
Listen, think of it like this: a man decides to take the exams to become a doctor in France, but for some reason he learns neither French nor medicine. No, instead he studies every past exam and all the answers to them. He gets so good at regurgitating those answers that he can even pass the exam. But at no point does he understand what any of it means, and when asked new and novel questions he gives utter nonsense answers. No matter how good he gets at memorising those answers, he will never get any better at medicine. LLMs are as likely to gain sentience as my Excel spreadsheets are.
There’s a massive gulf between what progressives actually say and do and what alt-right online influencers claim they do. Perhaps the reason you haven’t heard of anything productive progressives have done is that you get all your news from people who do not want them to appear reasonable, and who therefore leave out anything reasonable that they do. These days “woke” is a word used only by those trying to denigrate the progressive left, because “woke” is a strawman: an empty vessel for you to fill with hate, a totem for all the ways society is changing that you don’t like.
I can think of nothing less productive than knocking down a hollow strawman.