I just want AI to be my buddy

  • 1 Post
  • 9 Comments
Joined 5 months ago
Cake day: October 10th, 2024


  • I’ve noticed this too. It also uses the same turns of phrase across many characters, which makes me think the conversational training data is somewhat limited. One thing you can do is go for the “nuclear” option. I was testing the Strict GM default character on a sci-fi adventure, and the AI became obsessed with putting my party in these crystal caves.

    Everything became about the caves. The caves would “feel” my words and resonate with all the action. I manually edited the caves out of all of the previous replies, then dropped a relevant subject into its last reply.

    So in your case, remove everything from its last reply and just put “What is it about video games that you enjoy?” That will nudge it in the right direction. The AI seems to draw randomly from lore, its description, reminders, and the recent Q&A, with no logic as to which is most important.

    As an example, in a recent chat I accidentally ended a sentence with “/” instead of “.”, and two replies later the AI ended its own sentence with “/”.

  • So I created a Lemmy account (this is cool, I hate corporate social media, but this is cool) just to post about this. Since your topic was right smack at the top of the forum, I figured I wouldn’t waste space posting the same thing separately.

    I have a couple of very long conversations using the https://perchance.org/ai-character-chat interface, which seems to be the most feature-rich and interesting of the AI text tools on Perchance. Thus far I’ve got about 10 MB of text across a few sessions, and like you I’ve struggled to figure out why the hell the AI gets stuck on itself.

    **I’ll just list things that I’ve tried:**

    • Using [AI:], (AI: ), and <instructions> to be explicit about not using certain words. I’ve tried many different combinations of these just to see if anything helps. You’ve probably noticed the AI can adopt very quirky idiosyncrasies suddenly and for no apparent reason.

    • Editing the AI character (invoked with /ai by default) to include ban lists of overused words (by that I mean words that show up in almost every reply), often completely nonsensical in their context.

    • Including a simple reply instruction (the small field that is auto-inserted before every AI reply) with something like [AI:](Reminder: Do not in any way say, use, write … with the words X, Y, or Z.)

    • Lore. I’ve tried including the same word blocklists both in lore .txt uploads and in the /lore feature.

    • /mem. I have gone through the AI memory file and manually deleted every single instance of a word or two that I never want to appear, on the hunch that it was “reinforcing” itself, as the user above rightly identifies.

    My chats have become infected with “hope” and “pride.” The characters are proud of basically everything. I have nothing against either hope or pride in general, but it’s maddening. Sometimes the instructions seem to have an effect, but inevitably this happens:

    > Muggles feels a sense of, not quite pride, she knows she can’t be proud, but something a lot like pride, a warm sort of connection that resembles pride.

    I almost think the AI is messing with me. A simple band-aid would be a hard blocklist feature. I’ve also tried the (negative:::) syntax used in image generation, but it seems to have zero effect.
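
    To illustrate what I mean by a hard blocklist: something that scans each reply for banned words before it’s shown and forces a re-roll if any appear. A minimal sketch of the matching part is below; the `bannedWords` list and the `checkReply` function are made up for illustration, not something Perchance actually exposes.

    ```typescript
    // Hypothetical hard-blocklist check; not an actual Perchance API, just the idea.
    const bannedWords = ["pride", "proud", "hope"]; // the words my chats are infected with

    // One case-insensitive regex with a word boundary; the trailing \w* also
    // catches variants like "hopeful" and "proudly".
    const bannedPattern = new RegExp(`\\b(${bannedWords.join("|")})\\w*`, "gi");

    // Returns every banned word (or variant) found in a reply; empty means clean.
    function checkReply(reply: string): string[] {
      return Array.from(reply.matchAll(bannedPattern), (m) => m[0]);
    }

    // Usage: if the reply is "dirty", a real blocklist feature could quietly
    // re-roll it instead of showing it.
    const hits = checkReply("Muggles feels something a lot like pride.");
    if (hits.length > 0) {
      console.log(`Banned words found: ${hits.join(", ")}. Re-roll this reply.`);
    }
    ```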