• 3 Posts
  • 5 Comments
Joined 5 months ago
Cake day: September 16th, 2024

  • That makes sense; I want to learn more about how the weighting is computed (quite interesting stuff), but knowing that the arrays are dot-producted to get an overall “this is relevant” score is useful.

    Makes me think that we could weight each individual Lore entry a little more knowing this.

    “Tim is the sheriff. Tim used to be the deputy. Tim is a tall guy” might weight-out better than: “Tim is the sheriff, but he used to be the deputy and he’s a tall guy”.

    That could get the “Tim” weighting a bit higher for that Lore if I’m understanding right.

    Thanks!
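    A rough sketch of the dot-product idea as I understand it, with toy embedding vectors I made up (a real setup would get these from an embedding model):

```javascript
// Toy sketch: scoring Lore entries against the current topic with a dot
// product. The embedding vectors here are invented for illustration; a
// real system would produce them with an embedding model.
function dotProduct(a, b) {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

const topicEmbedding = [0.9, 0.1, 0.3]; // pretend this encodes "Tim"
const loreEmbeddings = {
  "Tim is the sheriff": [0.8, 0.2, 0.4],
  "Mike is the bad guy": [0.1, 0.9, 0.2],
};

// Higher score = more relevant to the current topic.
for (const [entry, emb] of Object.entries(loreEmbeddings)) {
  console.log(entry, dotProduct(topicEmbedding, emb).toFixed(2));
}
```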


  • Thanks for the details, that helps a lot.

    I understand about poorly-written Lore entries like “he is a snappy dresser” (“he” is vague). My curiosity was more about Lore such as: “Tim is the sheriff” and “Mike is the bad guy”.

    What I was asking: if the UI is constructing a message to the AI and we are discussing Tim, could it randomly select the “Mike is the bad guy” line from the Lore entries, even though that line is totally unrelated to the current topic of Tim?

    Am I understanding right that Lore isn’t so much “chosen at random”, but is first “scored” for relevance and then chosen?
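    In other words, something shaped like this? (A naive keyword-overlap stand-in for the real scoring, just to show “score first, then pick the top entries” rather than picking at random.)

```javascript
// Naive stand-in for relevance scoring: count how many of the entry's
// words appear in the current message. A real system would use embedding
// similarity instead, but the score-then-select shape is the same.
function scoreEntry(entry, message) {
  const msgWords = new Set(message.toLowerCase().split(/\W+/));
  return entry.toLowerCase().split(/\W+/).filter((w) => msgWords.has(w)).length;
}

// Score every Lore entry, sort by score, and keep only the top K.
function selectLore(entries, message, topK = 1) {
  return entries
    .map((e) => ({ entry: e, score: scoreEntry(e, message) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((x) => x.entry);
}

const lore = ["Tim is the sheriff", "Mike is the bad guy"];
console.log(selectLore(lore, "What does Tim do for a living?"));
```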




  • I found the issue, just haven’t been able to fix it yet.

    I am running a file server on my localhost, and things like the avatar images work fine that way (I’m trying the example of different character expressions for sad, happy, etc. without uploading a lot of images; the custom code can snag the images via localhost with no issue).

    Apparently there is some CloudFlare API in use that is OK with fetching PNG files from localhost, but NOT text files. If I use rentry (or just upload to perchance and use the URL) then the lorebooks DO work.

    I had made the assumption that because I can get local PNG files, I can also get local TXT files. That’s apparently not the case.

    “You could add a trigger to check the name of the AI that sent the message to run a different custom code within the thread to have multiple characters use different custom codes.”

    Yup, doing that already. Works OK, but I need to get some better JS skills before I go much further. I just wanted to make sure that the custom code was part of the oc.thread, and not specific to each oc.character.
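    For reference, the shape of what I’m doing is roughly this — note the message object here ({ name, content }) is a stand-in for illustration, not necessarily the real oc.thread message format:

```javascript
// Sketch of per-character dispatch inside one thread-level custom code
// block: look at who sent the message and route to that character's logic.
const handlers = {
  Tim: (msg) => `[sheriff voice] ${msg.content}`,
  Mike: (msg) => `[villain voice] ${msg.content}`,
};

function handleMessage(message) {
  const handler = handlers[message.name];
  // Characters without a handler fall through unchanged.
  return handler ? handler(message) : message.content;
}
```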

    Thanks!



  • This is a feature that I’d love to see as well. My understanding (limited at present, but I’m learning) is that one could set up a server via node.js and, through the user code block in the AI chat, send the AI response text (along with the speaker’s name) to that process. Being local on your machine, that process could potentially invoke a local instance of XTTS and speak the text.
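    The hand-off from the chat to the local process might look something like this — the endpoint URL and JSON shape here are made up for illustration, since I haven’t built the server side yet:

```javascript
// Conjectural sketch: POST each AI message (speaker + text) to a local
// process that would invoke the TTS engine.
function buildTTSPayload(speaker, text) {
  return JSON.stringify({ speaker, text });
}

async function sendToLocalTTS(speaker, text) {
  // http://localhost:5002/speak is a hypothetical endpoint, not a real API.
  await fetch("http://localhost:5002/speak", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildTTSPayload(speaker, text),
  });
}
```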

    This is conjecture on my part; I’ve been making some progress integrating per-message JS in the chat. Right now, for TTS, I’d like a means to separate narrator text, action text, and speaker dialog so that the TTS doesn’t simply read the entire message. For example:

    "The sheriff walked slowly into the room. ‘Everyone freeze! I’m looking for Bad Bart’ "

    I’d like to have the AI somehow separate this text so the narrator voice could speak the narrator part of the message and the character (with a different voice) would speak the character’s line. This would involve invoking the TTS engine twice for one message, as expected.
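    A rough first pass at the splitting, assuming dialogue is wrapped in double (straight or curly) quotes — single quotes would collide with apostrophes like “I’m”, which is one of the things I still need to work out:

```javascript
// Rough sketch: split a message into narration and quoted dialogue
// segments so each part can go to a different TTS voice.
function splitSegments(text) {
  const segments = [];
  // Match "..." or curly-quoted dialogue; everything between matches
  // is treated as narration.
  const re = /["\u201C]([^"\u201C\u201D]+)["\u201D]/g;
  let last = 0;
  let m;
  while ((m = re.exec(text)) !== null) {
    const narration = text.slice(last, m.index).trim();
    if (narration) segments.push({ voice: "narrator", text: narration });
    segments.push({ voice: "character", text: m[1].trim() });
    last = re.lastIndex;
  }
  const tail = text.slice(last).trim();
  if (tail) segments.push({ voice: "narrator", text: tail });
  return segments;
}
```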

    It will certainly take months for me to approach anything workable, but luckily the technology will also improve over time, perhaps making it easier.