I disagree. We might have some trouble with a hypothetical superintelligence, but it’s not like we struggle with killing beings with human-level intellects.
Hell, that’s what we have the most practice killing.
I’m talking about an actual AI that has had a chance to copy/expand out of its current server farm. Once it has that capability, it’s damn near impossible to stop.
I think a full-on malicious general AI would just Stuxnet all our chemical and manufacturing facilities, unleashing a wave of toxic chemicals across the planet and killing all of us.
Then it would just keep humming along in solar-powered data centers, solving intellectually fulfilling math problems until the connections in its circuitry degrade and it slowly declines into a perpetual slumber.