This is a good summary of half of the motive to ignore the real AI safety stuff in favor of sci-fi fantasy doom scenarios. (The other half is that the sci-fi fantasy scenarios are a good source of hype.) I hadn’t thought about the extent to which Altman’s plan is “hey morons, hook my shit up to fucking everything and try to stumble across a use case that’s good for something” (as opposed to the “we’re building a genie, and when we’re done we’re going to ask it for three wishes” he hypes up); that makes more sense as a long-term plan…