The future looks bleak as fuck.
The future is bleak, but not because of AI. It's bleak because of climate change, inflation, and the growing problems of pollution, plastics/nanoplastics, etc.
But all this fear-mongering about AI is just clickbait, statements taken out of context, and people talking about things they know nothing about.
Don’t get me wrong, there are real issues: legislation not keeping up with technology, deepfakes, intellectual property disputes, job losses, etc. But these issues are not insurmountable. At the very least, “AI” isn’t going to take over and kill us; humanity will kill humanity well before AI can.
It’s not about it taking over, or killing us. It’s about it removing the humanity from humans. Cultures will be erased. Organic art and music will be forgotten because the will to create will be replaced by the ease of replication.
AI won’t kill us. It’ll just suck everything real and human out of us.
Did people stop painting because we invented cameras? Older mediums will still have their purpose, and more artists may learn to thrive alongside new tools, doing things they never could before.
Ultimately we will have people able to naturally dictate entire worlds, games, and experiences to share, which they never could have accomplished alone.
It’s like giving smaller artists Disney money, as long as we make sure the technology isn’t exclusively held by closed proprietary systems. People keep praising Adobe for their training data, but is it worth it if only Adobe can have the tool, and you need to pay them absurd subscription fees to use it?
Creators will still create. They will just be empowered in doing so to the extent of their imagination.
More AI panic… whatever gets clicks i guess.
i’m still in the melanie mitchell school of thought. if we created an A.I. advanced enough to be an actual threat, it would need a human-analogous style of information processing, one that would allow machines to easily interpret instruction. there is no reasonable incentive for it to act outside of our instruction. don’t anthropomorphise it with an innate desire to keep living at the cost of humanity or anything else; we only have that due to evolution. i do not believe in the myth of a stupid super-intelligence capable of being an existential threat.
The AIs we have been building so far have no motive at all. Really, the danger at this point is not that they will go rogue and kill us all. The danger is that they will do exactly as they are told, when someone tells them to kill us all. Not that they are anywhere close to having that capability.
Consider a worse fate: they do exactly as we tell them to, until we become incapable of existing apart from them.
And then they break with no one to fix them.