- cross-posted to:
  - dnd
ChatGPT is certainly no good at a lot of aspects of storytelling, but I wonder how much the author played with different prompts.
For example, if I go to GPT-4 and say, “Write a short fantasy story about a group of adventurers who challenge a dragon,” it gives me a bog-standard, trope-ridden fantasy story: a standard adventuring party goes into a cave, fights the dragon, kills it, and returns with gold.
But then if I say, “Do it again, but avoid using fantasy tropes and clichés,” it generates a much more interesting story. I’m not sure about the etiquette of pasting big blocks of ChatGPT text into Lemmy comments, but the setting turned from generic medieval Europe into more of a weird steampunk-like environment, and the climax of the story was the characters convincing the dragon that it was hurting people and should stop.
I dunno what this GM is doing, but I find that ChatGPT (GPT-4 particularly) does wonderfully as long as you clearly define what you are doing up front, and remember that context can “fall off” in longer threads.
Anyways, here’s a paraphrasing of my typical prompt template:
I am running a Table Top RPG game in the {{SYSTEM}} system, in the {{WORLD SETTING}} universe. Particularly set before|after|during {{WORLD SETTING DETAILED}}.
The players are a motley crew that include: {{ LIST OF PLAYERS AND SHORT DESCRIPTIONS }}
The party is currently at {{ PLACE }} - {{ PLACE DETAILS }}
At present the party is/has {{ GAME CONTEXT / LAST GAMES SUMMARY }}
I need help with: {{ DETAILED DESCRIPTION OF TASK FOR CHAT GPT }}
It can get pretty long, but it seems to do the trick for the first prompt - responses can be more conversational until it forgets details - which takes a while on GPT-4.
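Roughly, filling a template like that in code could look like the sketch below - the field names and example values are just illustrative stand-ins, not anything from the original prompt:

```python
# Minimal sketch: assembling the opening prompt from a template like the one above.
# Field names and example values are simplified stand-ins for the placeholders.

def build_prompt(fields: dict[str, str]) -> str:
    template = (
        "I am running a Table Top RPG game in the {system} system, "
        "in the {setting} universe, set during {setting_detail}. "
        "The players are: {players}. "
        "The party is currently at {place} - {place_details}. "
        "At present the party has {context}. "
        "I need help with: {task}"
    )
    return template.format(**fields)

prompt = build_prompt({
    "system": "D&D 5e",
    "setting": "Forgotten Realms",
    "setting_detail": "the Time of Troubles",
    "players": "Ava the rogue, Borin the cleric, Cael the wizard",
    "place": "Waterdeep",
    "place_details": "the Yawning Portal tavern",
    "context": "just accepted a job to clear a cellar of giant rats",
    "task": "describing the cellar and giving the rats a surprise twist",
})
print(prompt)
```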
I guess it makes a lot of sense for a bot that predicts the most likely response to generate generic fantasy worlds. I think a bot DM would work a lot better if it had access to tables of tropes, environments, monsters, and other elements, and could roll or pick from those to create the story.
In the same way, combat should probably be handled by code written specifically for that purpose, similar to video games. A bot DM built that way would probably do much better.
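A minimal sketch of that table-rolling idea might look like this - the tables themselves are just illustrative stand-ins:

```python
import random

# Sketch of the "roll on tables" idea: pick story seeds from curated lists
# instead of letting the model default to generic fantasy. The table contents
# here are made-up examples.

TABLES = {
    "environment": ["flooded mine", "sky-city bazaar", "petrified forest"],
    "antagonist": ["guild of cartographers", "clockwork dragon", "rival adventuring party"],
    "twist": ["the antagonist is right", "the reward is cursed", "a patron set it all up"],
}

def roll_story_seed(rng: random.Random) -> dict[str, str]:
    """Roll once on each table and return the picks as a story seed."""
    return {name: rng.choice(options) for name, options in TABLES.items()}

seed = roll_story_seed(random.Random(42))  # seeded so the example is repeatable
print(seed)
# The picks could then be handed to the model as constraints, e.g.
# "Set the session in a {environment}, opposing a {antagonist}, with the twist that {twist}."
```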
I don’t like it. I know immediately, as a player, when my GM used it, because it is no longer their own voice. Furthermore, when my GM pregenerates large blocks of text and reads them off on cue, I know we’ve been railroaded. My DM is still a rookie, so I forgive the railroading. But if we’re being railroaded by ChatGPT, then I’d argue I’d rather be playing a video game – at least it is logically constructed. But I can’t win them all, and I’m happy my GM is there at all.
I find it silly that there are specific AI tools built to do exactly what they’re trying to do here, yet the article doesn’t even mention them. They instead opt for ChatGPT, which can technically do it but isn’t meant for this. Writing “Put plainly: AI sucks at this.” seems disingenuous when they didn’t do proper research into the right AI tools for the job, because AI can do this rather well if you know how.