• 0 Posts
  • 111 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • Not necessarily true. Premium subscribers are worth a ton more to content creators, because creators get a cut of the premium price for every premium view. And that cut is not insignificant compared to what a non-premium viewer brings in.

    For non-premium viewers to provide value to a sponsorship, they need to either purchase through an affiliate link, or at least look at it. Creators often don’t get paid for viewers who don’t click the link, and only a small percentage of users will ever do that. And even non-premium viewers already use ad and sponsor blockers.

    Don’t be confused - content creators would likely benefit greatly from a higher percentage of premium subscribers on their videos. Premium views are guaranteed income, while sponsorships are often a lot more volatile.
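    To put rough numbers on the per-view comparison (a minimal sketch - the rates below are hypothetical placeholders, not YouTube’s actual figures, which aren’t public):

    ```python
    # Hypothetical per-view revenue comparison (illustrative numbers only).
    ad_rpm = 3.00        # assumed ad revenue per 1000 monetized views (USD)
    premium_rpm = 12.00  # assumed premium revenue share per 1000 premium views (USD)

    ad_view_value = ad_rpm / 1000
    premium_view_value = premium_rpm / 1000

    print(f"Ad-supported view: ${ad_view_value:.4f} per view")
    print(f"Premium view:      ${premium_view_value:.4f} per view")
    # With these assumed rates, one premium view is worth four ad-supported views.
    ```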


  • Ideas are great - but execution is king, because execution is where most of your creativity actually shapes how the idea is represented. If you have a good idea and execute it well, it’s very hard for someone to take that away from you. If you have a good idea but execute it poorly, someone taking that idea and executing it better will leave you in the dust - but without the better execution, that wouldn’t work.

    Better execution isn’t always fair though - we often start out in life unable to compete because of a lack of experience, financing, and publicity. But it’s basically how the entire entertainment industry works: everyone shuffles ideas around and tries to execute them better (or differently enough) than the previous time the idea made the rounds.

    After finding good ideas, get people hooked on your execution, and they won’t be able to get that anywhere else - unless someone comes along and does it even better. With practice, that someone can also be you.



  • Yes, it would be much better at mitigating it, and it would beat all humans at truth accuracy in general. Truths that can be easily and individually proven, and/or that remain unchanged forever, can basically be right 100% of the time. But not all truths are that straightforward.

    What I mentioned can’t really be unlinked from the issue if you want to solve it completely. Have you ever found out later that something you told someone else as fact turned out not to be so? Essentially, you ‘hallucinated’ a truth that never existed, but you were just confident enough in it to share and spread it. That’s how we get myths, popular beliefs, and folklore.

    For those other truths, we simply take as true whatever has reached a likelihood we consider certain. But the ideas and concepts in our minds constantly float around on that scale. And since we can’t really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth that which is not. Avoiding that would mean having to be pretty much everywhere at once to interpret the information straight from the source - and then things like how fast you can process it all come into play. Without making guesses about what’s going to happen, you basically can’t function in reality.


  • Yes, a theoretical future AI that could self-correct would eventually become more powerful than humans, especially if you could give it ways to run orders of magnitude more self-correcting mechanisms at the same time. But it would still be making ever-so-small assumptions wherever there is a gap in the information it has.

    It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be.

    A round trip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there’s no guarantee that truth didn’t change in the milliseconds it took to become aware of the change. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) propagates at the speed of light at best.
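    A back-of-the-envelope check on that figure (the circumference and fibre light speed below are rounded approximations):

    ```python
    # Light circling the globe once in optical fibre.
    EARTH_CIRCUMFERENCE_KM = 40_000   # approximate equatorial circumference
    FIBER_LIGHT_SPEED_KM_S = 200_000  # ~2/3 of c, due to glass's refractive index

    trip_ms = EARTH_CIRCUMFERENCE_KM / FIBER_LIGHT_SPEED_KM_S * 1000
    print(f"One lap around the globe: {trip_ms:.0f} ms")  # -> 200 ms
    ```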

    > A big mistake you are making here is stating that it must be fed information that it knows to be true. This is not inherently true. You can train a model on all of the wrong things to do; as long as it has the capability to understand this, it shouldn’t be a problem.

    The dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it never will. And if it were incomplete, the model would have to make assumptions at some point based on the incomplete data it has, which opens it up to being wrong - which we would call a hallucination.


  • I’m not sure where you think I’m giving it too much credit, because as far as I can tell we already totally agree lol. You’re right that methods exist to diminish the effect of hallucinations - that’s what the scientific method is. But current AI has no physical body and can’t run experiments to verify objective reality. It can’t fact-check itself other than being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them with something probable - which is likely going to be bullshit.
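    As a toy sketch of what ‘filling gaps with something probable’ means mechanically (the probabilities below are made up for illustration, not taken from any real model):

    ```python
    import random

    # A language model samples a likely continuation, regardless of
    # whether that continuation is factually true.
    next_token_probs = {
        "in 1887": 0.45,  # plausible but wrong (construction started that year)
        "in 1889": 0.35,  # correct (the tower was completed in 1889)
        "in 1901": 0.20,  # wrong
    }

    tokens, weights = zip(*next_token_probs.items())
    choice = random.choices(tokens, weights=weights, k=1)[0]
    print(f"The Eiffel Tower was completed {choice}.")
    # With these made-up weights, the wrong year comes out most of the time.
    ```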

    My point was simply that truly fixing it would mean creating an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.


  • Hallucinations in AI are fairly well understood, as far as I’m aware - they’re explained at a high level on the Wikipedia page for the topic. And I’m honestly not making any objective assessment of the technology itself; I’m making a deduction based on the laws of nature and biological facts about real neural networks. (I do say AI is driven by the data it’s given, but that’s something even a layman might know.)

    How to mitigate hallucinations is definitely something the experts are actively discussing, with limited success so far (and I certainly don’t have an answer there either), but a true fix should be impossible.

    I can’t exactly say why I’m passionate about it. In part, I want people to be informed about what AI is and is not, because knowledge about the technology lets us make more informed decisions about the place AI takes in our society. But I’m also passionate about human psychology and creativity, and what we can learn about ourselves from the quirks we see in these technologies.



  • It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn’t exist in the physical world. Humans hallucinate too - all the time. It’s just that our approximations are usually correct, so we don’t call them hallucinations anymore. Realistically though, the signals coming from our feet take longer to process than those from our eyes, so our brain has to predict information to create a coherent experience. It’s also why we don’t notice our blinks, and why we don’t see the blind spot each of our eyes has.

    AI, being a more primitive version of our brains, will hallucinate far more - especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

    Hallucinations shouldn’t be treated like a bug. They are a feature - just not one the big tech companies wanted.

    When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.


  • ClamDrinker@lemmy.world to memes@lemmy.world · A bit late · 119 up / 9 down · edited 2 months ago

    The thing is, I’ve seen statements like this before. Except when I heard it, it was being used to justify ignoring women’s experiences and feelings regarding things like sexual harassment and feeling unsafe, since that’s “just a feeling” as well. It wasn’t okay then, and it’s not okay the other way around. The truth is that feelings do matter, on both sides. Everyone should feel safe and welcome in their surroundings, and how much they actually do is reflected in how those people feel.

    Men feeling respected and women feeling safe are not mutually exclusive outcomes. The sad part is that whoever is reading this here is far more likely to be an ally than a foe, yet the people who most need to hear the intended message will most likely never hear it, nor be bothered by it. There’s a stick being wedged here that is only meant to divide, and oh my god is it working.

    The original post about bears has completely lost all meaning, and any semblance of discussion is lost, because the metaphor is inflammatory by design. Sometimes that’s a good thing - highlighting a point through absurdity - but metaphors are fragile: if one is very likely to be misunderstood or to offend, the message gets lost in emotion. Personally, I think this metaphor is just highly ineffective at getting the message across, as it has driven people who would otherwise stand by the original message to the other side through the many uncharitable interpretations it invites. And among the crowd of reasonable people are those who confirm those interpretations and muddy the water, making women seem like misandrists and men like sexual assault deniers. This meme is simply terrible, and perhaps we can move on to a better version that actually gets the message across well, instead of setting people at each other’s throats.


  • The thing is, games have a minimum difficulty required to be generally enjoyable, and the designers have often built the game around it. The fun generally comes from obstacles that provide real resistance, which can be overcome by optimizing your strategy. That means these obstacles need to be mentally picked apart by the player to proceed. They are built like puzzles.

    This design philosophy - as anyone who plays these games can tell you - is deeply rewarding if you see it through, because it demands genuine improvement that you can notice and be proud of. Hence there is often a limit to how much easier you can make games like these without losing that: lower the difficulty enough, and you breeze past an obstacle before even realizing it was preventing you from doing something.

    It’s often not as easy as just tweaking numbers, and these development teams usually don’t have the time to rebalance a game for lower difficulties - so they just don’t.

    Honestly, the first wojak could be quite mad too, because making an easy game harder often misses the point as well: the game becomes more difficult, but doesn’t actually provide that carefully crafted feeling of constant improvement. Instead, some easy games become downright frustrating, because obstacles feel “cheap” or “lacking depth” now that you have to spend a lot more time on them.

    But making an easy game harder by just tweaking the numbers is definitely easier on the development team, and it gives existing players a chance to re-experience the game - something that wouldn’t happen the other way around. It’s almost certainly not a better option for new players wanting a harder difficulty, though.

    At the end of the day, there are often ways to get what you want - cheating, modding, or using ‘OP’ items in the game. Do whatever makes the game more enjoyable for you. But if you make it too easy on yourself, you might come out the other end wondering why other people enjoyed the game so much more than you did.