A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.

I’m into Free Software, self-hosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 4 Posts
  • 1.18K Comments
Joined 6 months ago
Cake day: June 25th, 2024


  • I skimmed the link you provided. Yes, that seems to include solid advice. Good for beginners, nothing new to me, since I’ve (somewhat) followed the AI hobby enthusiast community since LLaMA 1. But I have to look up what writing in all caps does; I suppose that severely messes with the tokenizer?! I’ve seen the big companies do this, too, in some of the leaked prompts.

    And I guess with the “early” models from 2023 and before, it was much more important to get the prompts exactly right and not confuse them. That got way better as models improved substantially, and now these models (at least) get what I want from them almost every time. But I think we’ve picked the low-hanging fruit, and we can’t expect the models themselves to improve as fast as they did in the past. So it’s down to prompting strategies and other methods to improve the performance of chatbots.
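    The all-caps effect can be illustrated with a toy tokenizer. This is purely an illustration with a made-up vocabulary, not a real BPE implementation: subword vocabularies are learned from common text, so frequent lowercase words often map to a single token while rarer all-caps variants get split into several pieces.

    ```python
    # Toy greedy longest-match tokenizer over a made-up vocabulary.
    # Real tokenizers (BPE etc.) work differently, but show the same effect:
    # common lowercase forms are single tokens, shouting gets fragmented.
    VOCAB = {"hello", "world", "HE", "LLO", "WO", "RLD"}

    def toy_tokenize(word):
        """Greedily match the longest vocabulary piece, falling back to
        single characters for anything unknown."""
        tokens, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):  # try longest piece first
                piece = word[i:j]
                if piece in VOCAB or j == i + 1:
                    tokens.append(piece)
                    i = j
                    break
        return tokens

    print(toy_tokenize("hello"))  # 1 token
    print(toy_tokenize("HELLO"))  # 2 pieces, i.e. more tokens for the same word
    ```

    So all-caps text isn’t unreadable to the model; it just gets encoded into different (and usually more) tokens than the lowercase form.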


  • Yes, that’d be my approach, too. They need to be forced to put in digital watermarks so everyone can check whether an article is from ChatGPT or an image is fake. We could easily do this with regulation and hefty fines. More or less robust watermarks are available, and anything would be better than nothing. OpenAI even developed a text watermarking solution; they just don’t activate it. (https://www.theverge.com/2024/8/4/24213268/openai-chatgpt-text-watermark-cheat-detection-tool)

    Another pet peeve of mine is these “nude” apps that swap faces or generate nude pictures from someone’s photos. There are services out there that happily generate nudes from children’s pictures. I filed a report with a European CSAM program after that outcry in Spain where some school kids generated unethical images of their classmates. (Just in case the police don’t read the news…) And half a year later, that app was still online. I suppose it still is… I really don’t know why we allow things like that.

    We could hold these companies accountable. And force them to implement some minimal standards.
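    The kind of text watermark mentioned above can be sketched roughly. This follows the published “green list” idea: pseudo-randomly partition the vocabulary based on the previous token and bias sampling toward the “green” half. Everything here is a simplified toy; the vocabulary and function names are made up:

    ```python
    import hashlib
    import random

    def green_list(prev_token, vocab, fraction=0.5):
        """Pseudo-randomly pick a 'green' subset of the vocabulary,
        seeded by the previous token. During generation the sampler
        would slightly prefer green tokens; the partition is
        reproducible later without storing anything."""
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        shuffled = sorted(vocab)
        rng.shuffle(shuffled)
        return set(shuffled[: int(len(shuffled) * fraction)])

    def green_fraction(tokens, vocab):
        """Detection: the share of tokens that land in the green list
        induced by their predecessor. Unwatermarked text hovers around
        `fraction`; watermarked text scores far above it."""
        hits = sum(tok in green_list(prev, vocab)
                   for prev, tok in zip(tokens, tokens[1:]))
        return hits / max(len(tokens) - 1, 1)

    vocab = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat"]
    # The detector needs only the text plus the hashing scheme, no database.
    print(green_fraction(["the", "cat", "sat", "on", "the", "mat"], vocab))
    ```

    Schemes like this survive some paraphrasing but not heavy rewriting, which is why “more or less robust” is about right.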



  • I think there’s a fundamental issue with stopping technology: a lot of it is dual-use. You can stab someone with a kitchen knife. Kill someone with an axe. There are legitimate uses for guns… You can use the internet to do evil things. Yet no one wants to cut their steak with a spoon… I think the same applies to AI. It’s massively useful to have machine translation at hand, voice recognition, smartphone cameras, and even smart assistants and chatbots. And I certainly hope they’ll help with some of the big issues of the 21st century. I don’t think you want to outlaw things like that, unless you’re the Amish.


  • Ultimately, though, we shouldn’t let the banks get away with it either. They make it easy for themselves, offer a broken system and let the customers (businesses and consumers) deal with the fallout. I think you can demand more responsibility from a bank.

    Of course you shouldn’t support this by using it as a payment method… But I use SEPA direct debits too, and haven’t blocked them or anything. There’s hardly a way around it. If things are ever going to get better, the banks need an incentive to do something. Because it’s neither my job nor the job of public transit operators to overhaul and develop payment methods.




  • I generally don’t block people for such things. I just stop responding, and that ends the conversation. Sometimes I’m in the mood to engage and we have a long (or short) argument. It can be anything: a misunderstanding, a different culture. Or it’s a troll, or someone stirring up drama, or yelling their narrow perspective at everyone. Or it makes me think that in the real world no one listens to their shit anymore, so they have to look for people online to “talk” to.

    But I do block people: immediately, for example, if they spread hate or misinformation, are overly argumentative, or attack people. Or spam. That’d be my main reason here.

    (And I really don’t have to hang out with people I don’t like. Just disagreeing or being mildly negative won’t do it for me. Not even starting an argument with me, if(!) it’s genuine and civil, I’m in the mood to talk, and people actually listen. Otherwise, there is no point in engaging. And a lot of argumentative people can’t listen, and that’s where I’m out.)







  • it doesn’t have physical access to reality

    Which is a severe limitation, isn’t it? First of all, it can’t do 99% of what I can do. But I’d also attribute things like being handy to intelligence. And it can’t be handy, since it has no hands. Same for sports/athletics, or driving a race car, which is at least a learned skill. And it has no sense of time passing. Or of which hand movements are part of a process it has read about (operating a coffee machine). So I’d argue it’s some kind of “book-smart”, but not smart in the same way as someone who has actually experienced something.

    It’s a bit philosophical. But I’m not sure about distinguishing intelligence from being skillful. If it’s enough to have theoretical knowledge without the ability to apply it… wouldn’t an encyclopedia or Wikipedia also be superintelligent? I mean, they sure store a lot of knowledge; they just can’t do anything with it, since they’re a book or a website…
    So I’d say intelligence has something to do with applying things, which ChatGPT can’t in a lot of ways.

    Ultimately I think this all goes together. But it’s currently debated whether you need a body at all to become intelligent or sentient. I just think intelligence isn’t a very useful concept if you don’t need to be able to apply it to tasks. But I’m sure we’ll see robotics and AI merge in the coming years/decades. And that’ll make this intelligence less narrow.


  • A base / pre-trained model is fed a large dataset of random text files: books, Wikipedia, etc. After that, the model can autocomplete text, and it has learned language and concepts about the world. But it won’t answer your questions. It’ll refine them, or think you’re writing an email or a long list of unanswered questions and write some more questions underneath instead of engaging with you. Or it’ll think it’s writing a novel and autocomplete “…that’s what the character asked while rolling their eyes.” Or something completely arbitrary like that.

    After that major first step, it gets fine-tuned to some task. The procedure is the same: it gets fed different text in almost the same way, which just continues the training. But now it’s text that tunes it to its role, for example being a chatbot. It’ll get lots of text that is a question, then a special character/token, and then an answer to that question. And it’ll learn to reply with a (correct) answer if you put in a question plus that token. It’ll probably also be fine-tuned to write dialogue as a chatbot and to follow instructions. (And refuse some things, speak in a less biased way, be nice…)

    You can also put in domain-specific data to make it learn/focus on medicine… I think that’s also called fine-tuning. But as far as I understand, teaching knowledge with arbitrary data comes before tuning it to follow instructions, or it might forget that.

    I think instruction tuning is a form of fine-tuning; it’s just called that to distinguish it from other forms of fine-tuning. But I’m not really an expert on any of this.
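    The question/token/answer format described above can be sketched in a couple of lines. The marker strings here are invented for illustration; real chat templates use model-specific special tokens:

    ```python
    # Hypothetical marker tokens; each model family defines its own.
    USER, ASSISTANT, END = "<|user|>", "<|assistant|>", "<|end|>"

    def format_example(question, answer):
        """Turn one Q/A pair into a single training string. Training still
        uses the ordinary next-token objective; the model just learns that
        text after the ASSISTANT marker should answer what came before it."""
        return f"{USER}{question}{ASSISTANT}{answer}{END}"

    print(format_example("What is the capital of France?", "Paris."))
    ```

    At inference time, the chat frontend wraps your message in the same markers and lets the model autocomplete everything after the assistant marker.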



  • I think superintelligence means smarter than the (single) most intelligent human.

    I’ve read these claims, but I’m not convinced. I’ve tried all the ChatGPTs etc., let them write emails for me, summarize, program some software… It’s way faster at generating text/images than me, but I’m sure I’m 40 IQ points more intelligent. Plus what it can do at all is kind of narrow. ChatGPT can’t even make me a sandwich or bring me coffee. Et cetera. So any comparison with a human has to be on a very small set of tasks anyway, for AI to compete at all.


  • hendrik@palaver.p3x.de to Short Stories@literature.cafe · [AI] Stereo Madness · 2 days ago

    Pretty consistent with what I get. The pacing isn’t good, and the game itself (which the story is named after) is missing altogether, except for the coin numbers. But the idea and the ending are creative and promising. With a little human guidance and a deeper meaning, this could become a short story.

    Mind sharing your prompt and which model/service you used?