Because you don’t need to hire extra writers and can pocket the money? I personally don’t care what a CEO thinks is fun, regardless of whether it’s Take-Two or not.
I get that this is probably just CEO speak for saving money, but as a gamer I think this actually could be a lot of fun if it’s implemented well. NPCs could be much more interactive than in current games.
Technology giveth and taketh away. AI sucks for a lot of reasons, but god damn if I’m not excited for AI NPCs.
I want to be. The potential is massive.
But the obvious first way it will be viable is always-online with meaningful recurring costs (or even more exorbitant monetization models), and I’m worried it won’t get past that to local experiences until long after the hardware is capable.
Offline AI models that could facilitate NPC interaction do exist. We know they won’t use them and will go with OpenAI instead, but it’s possible. I see indie games nailing it while AAA studios fumble trying to monetize it.
I hope offline LLMs see some improvement soon, especially with regard to VRAM usage.
My 1080 Ti (11 GB VRAM) struggles with anything over about 6B parameters, and in my opinion models that small are unusable. Honestly, even 13B is borderline for most models I’ve used.
Of course, the other solution would be to increase VRAM on cards, but the GPU manufacturers don’t seem to be on board with that idea.
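For what it’s worth, a 4-bit quant of a 7B model fits in roughly 4–5 GB, well under 11 GB, so an offline NPC backend isn’t crazy today. A minimal sketch with llama-cpp-python (the model file, persona handling, and sampling settings are just placeholders, not a recommendation):

```python
# Sketch: one NPC dialogue turn served by a local, quantized model via llama-cpp-python.
# The model path and numbers are illustrative; a Q4_K_M 7B GGUF uses roughly 4-5 GB of VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=2048,       # keep the context small to save memory
)

def npc_reply(persona: str, history: list[dict], player_line: str) -> str:
    """Generate one in-character NPC line from a persona, prior turns, and the player's input."""
    messages = [{"role": "system", "content": persona}, *history,
                {"role": "user", "content": player_line}]
    out = llm.create_chat_completion(messages=messages, max_tokens=96, temperature=0.8)
    return out["choices"][0]["message"]["content"].strip()
```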
Maybe with some more maturity we’ll see this become viable, because you’re right: if we hypothetically did it right, most machines would be unable to run it from the VRAM consumption alone. Always-online is an option to offload that, but it breaks my code of ethics, and I refuse to make online-only games unless it’s an MMO.
As a dev, you could make a game with these features and a subscription, while also making the code available to self-host and to host for others.
You could also design it so that much of the generation isn’t in real time: periodic connectivity is needed for the feature to work correctly, but persistent access is not, and reasonable fallbacks keep the game playable without it. You could even open the protocols, even if the source isn’t, to make it easy for others to replace your server as the tech develops.
Most of the things leading to always-online connections to publisher servers are deliberate choices to exert control, not things that can’t be done in other ways.
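To make the periodic-connectivity idea concrete, here’s a rough sketch of the shape I mean; the endpoint, cache file, and canned lines are made up for illustration, not any real service:

```python
# Sketch of "periodic connectivity with graceful fallback": dialogue is refreshed
# occasionally from a (replaceable, self-hostable) server and cached; if that fails,
# the game falls back to shipped canned lines. All names here are illustrative.
import json
import random
import time
import urllib.request

GENERATION_ENDPOINT = "https://dialogue.example.com/v1/generate"  # user-configurable
CACHE_PATH = "save/dialogue_cache.json"
CANNED_LINES = {"blacksmith": ["Need something forged?", "Mind the sparks."]}

def refresh_dialogue_cache(world_state: dict) -> None:
    """Run occasionally (save/load, area transitions), never per frame; failure is non-fatal."""
    try:
        req = urllib.request.Request(
            GENERATION_ENDPOINT,
            data=json.dumps(world_state).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            lines = json.loads(resp.read())
        with open(CACHE_PATH, "w") as f:
            json.dump({"fetched_at": time.time(), "lines": lines}, f)
    except OSError:
        pass  # offline, or the server is gone: keep whatever was cached before

def npc_line(npc_id: str) -> str:
    """Prefer freshly generated dialogue from the cache; fall back to shipped lines."""
    try:
        with open(CACHE_PATH) as f:
            cached = json.load(f)["lines"]
        if cached.get(npc_id):
            return random.choice(cached[npc_id])
    except (OSError, ValueError, KeyError):
        pass
    return random.choice(CANNED_LINES.get(npc_id, ["..."]))
```

Point the endpoint at the publisher’s server, a community server, or localhost; that’s the whole “open protocol” argument in one config value.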
I can imagine NPCs that actually react to the world happening around them, or carrying on unscripted conversations with each other, or just doing their own things without repeating the same two bits of dialogue over and over. Imagine being able to actually carry on a semi-natural conversation with NPCs and not just being fed 4 lines of dialogue to choose from.
Gaming is the one aspect of AI that actually seems really exciting.
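Purely as a sketch of how that could be wired (every name here is made up), the “reacting to the world” part is mostly about what context you assemble before asking a model for a line:

```python
# Sketch: building an NPC's context from persona, memory, and recent world events,
# so generated dialogue reflects what actually just happened. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class NPC:
    name: str
    persona: str                                      # e.g. "gruff blacksmith, distrusts outsiders"
    memory: list[str] = field(default_factory=list)   # things this NPC has seen or been told

def build_prompt(npc: NPC, world_events: list[str], player_line: str) -> str:
    """Turn game state into the text a local or hosted model would actually see."""
    recent = "\n".join(f"- {e}" for e in world_events[-5:])     # only the last few events
    remembered = "\n".join(f"- {m}" for m in npc.memory[-10:])
    return (
        f"You are {npc.name}, {npc.persona}.\n"
        f"Recent events in the world:\n{recent}\n"
        f"Things you remember:\n{remembered}\n"
        f'The player says: "{player_line}"\n'
        f"Reply in character, in one or two sentences."
    )

# After each exchange, append the player's line and the NPC's reply to npc.memory,
# and the same NPC stops repeating the same two bits of dialogue over and over.
```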
I do agree there would be amazing positives to it.