I suspect the Iain M Banks AI controlled post-scarcity is more attainable. Assuming the AI doesn’t turn us into paperclips.
This dovetails into a major concern of mine – I think we need to update the legal idea of personhood before AGI appears, else any future Banksian “Mind” would be enslaved.
With modern capitalism, it’s “they who own the robots, own the production”. In the event that an AGI appears and is owned, we effectively get a god emperor controlling the AGI. I agree.
However, this is a sticky one. There’s already legal precedent against AI in the copyright sphere. I suspect the legal system will further entrench the rights of the owners of the AI. The best case scenario is that the legal system also entrenches the social responsibilities that come with it. Like, if the AI does a thing (creates hate speech, as a simple example, or hacks a computer network as a more complicated one), then the owner is fully legally responsible.
It might actually create a scenario where the owners start arguing for AI rights in order to remove their legal responsibility.
This is an interesting take! Either that or they try to dump responsibility onto the end user, like “self-driving” cars.
I try to remain ever the optimist.
I’m reminded of the state of AI found in Neuromancer. Heavily regulated, air gapped, with magnetic kill switches installed.
What kind of AI-generated disaster is required before they’re deemed too dangerous to be allowed free access to human networks?
Wish there was a one word term to describe air-gapped AIs. Gibsonian? ;)
I reckon an AI will install a dictator somewhere before we realize letting them onto social media is bad. Maybe it’s too late already, haha.