As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
Sounds great in theory until you realise this is exactly the sort of law the big tech companies can afford to pay out under, and that will also be used to completely kill FOSS AI.
The liability wouldn’t fall on development, but on deployment.