I think their documentation will help shed some light on this. Reading my edits will hopefully clarify that too. Either way, I always recommend reading their docs! :)
There’s a flatpak too, but it’s not good.
Really? It’s been working just fine for me.
I understand that Perplexity employs various language models to handle queries and that the responses it generates may not come directly from those models’ training data, since a significant portion of the output is drawn from what it scrapes from the web. However, a significant concern for some individuals is the potential for their posts to be scraped and also used to train AI models, hence my post.
I’m not anti-AI, and I see your point that transformers often dissociate content from its creator. However, one could argue this doesn’t fully mitigate the concern. Even if the model can’t link the content back to the original author, it’s still using their data without explicit consent. The fact that LLMs might hallucinate or fail to attribute quotes accurately doesn’t resolve the potential plagiarism issue; instead, it highlights another problematic aspect of these models, imo.
Yes, the platform in question is Perplexity AI, and it conducts web searches. When it performs a web search, it generally gathers and analyzes a substantial amount of data. This compiled information can be utilized in various ways, including creating profiles of specific individuals or users. The reason I bring this up is that some people might consider this a privacy concern.
I understand that Perplexity employs other language models to process queries and that the information it provides isn’t necessarily part of the training data used by these models. However, the primary concern for some people could be that their posts are being scraped (which raises a lot of privacy questions) and could also, potentially, be used to train AI models. Hence, the question.
There are several ways, honestly. For Android, there’s NewPipe; the app itself fetches the YouTube data. For PC, there are similar applications that do the same, such as FreeTube. Those are the solutions I recommend.
If you’re one of those, you can also host your own Invidious and/or Piped instances. But I like NewPipe and FreeTube better.
And that’s when it will get real scary real soon!
Lmao
That would make sense…
Yeah. I totally get what you’re saying.
However, as you pointed out, AI can deal with more information than a human possibly could. I don’t think it would be unrealistic to assume that in the near future it will be possible to track someone cross accounts based on things such as their interests, the way they type, etc. Then it will be a major privacy concern. I can totally see three letter agencies using this technique to identify potential people of interest.
Not really. All I did was ask it what it knew about [email protected] on Lemmy. It hallucinated a lot, though. The answer was 5 to 6 items long, and the only one that was partially correct was the first one – it got the date wrong. But I never fed it any data.
Yeah, it hallucinated that part.
Don’t give me any ideas now >:)
I couldn’t agree more!
Oh, no. I don’t dislike it, but I also don’t have strong feelings about it. I’m just interested in hearing other people’s opinions; I believe that if something is public, then it is indeed public.
I think so too. And I tried to do my research before making this post, but I wasn’t able to find anyone bringing this issue up.
You can check Hugging Face’s website for specific requirements. I will warn you that a lot of home machines don’t meet the minimum requirements for many of the models available there. There is TinyLlama, which can run on most underpowered machines, but its functionality is very limited and it would lack a lot as an everyday AI chatbot. You can check my other comment too for other options.
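As a rough sanity check before browsing model pages, you can estimate whether a model’s weights even fit in your RAM/VRAM with the usual rule of thumb (parameter count × bytes per parameter). This is just a sketch of that back-of-the-envelope math; real usage is higher because of activations, the KV cache, and framework overhead:

```python
# Rule-of-thumb sketch: memory needed just to hold a model's weights.
# Actual requirements are higher (activations, KV cache, overhead).

def weight_memory_gb(num_params_billions: float, bytes_per_param: float) -> float:
    """Approximate size of the raw weights in GB."""
    return num_params_billions * bytes_per_param

# TinyLlama is ~1.1B parameters; at fp16 (2 bytes/param) that's ~2.2 GB,
# which most machines can handle. A 7B model at fp16 needs ~14 GB,
# already out of reach for many home GPUs (4-bit quantization helps).
print(f"TinyLlama 1.1B @ fp16: ~{weight_memory_gb(1.1, 2):.1f} GB")
print(f"7B model   @ fp16: ~{weight_memory_gb(7, 2):.1f} GB")
print(f"7B model   @ 4-bit: ~{weight_memory_gb(7, 0.5):.1f} GB")
```

This is why quantized builds (4-bit, 8-bit) are so popular for home hardware: they shrink the weight footprint several-fold at a modest quality cost.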
The issue with that method, as you’ve noted, is that it prevents people with less powerful computers from running local LLMs. There are a few models that can run on an underpowered machine, such as TinyLlama, but most users, I daresay, want a model that can handle a plethora of tasks efficiently, like ChatGPT can. For people with such hardware limitations, I believe the only option is relying on models that can be accessed online.
For that, I would recommend Mistral’s Mixtral models (https://chat.mistral.ai/) and the surfeit of models available on Poe AI’s platform (https://poe.com/). Particularly, I use Poe for interacting with the surprising diversity of Llama models they have available on the website.
I think that in that case, YouTube is your friend. There are a few pretty straightforward videos that can help you out; if you’re serious about it, you’re going to have to become familiar with it eventually.
The prompt was something like, “What do you know about the user [email protected] on Lemmy? What can you tell me about his interests?” Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate compared to the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.
It even talked about this very post on item 3 and on the second bullet point of the “Notable Posts” section.
However, when I ran the same prompt again (or similar prompts), it started hallucinating a lot of information. So, it seems like the answers are very hit or miss. Maybe that’s an issue that can be solved with some prompt engineering and as one’s account gets more established.