  • You seem to be missing what I’m saying. Maybe a biological comparison would help:

    An octopus is extremely smart, more so than even most mammals. It can solve basic logic puzzles, learn and navigate complex spaces, and plan and execute different and adaptive strategies to hunt prey. In spite of this, it can’t talk or write. No matter what you do, whether training it, trying to teach it, or even trying to develop an octopus-specific language, it will not be able to understand language. This isn’t because the octopus isn’t smart; it’s because it evolved to hunt food and hide from predators. Its brain has developed to understand how physics works and how to recognize patterns, but it just doesn’t have the ability to socialize, and nothing can change that short of rewiring its brain. Hand it a letter and it’ll try to catch fish with it rather than even considering trying to read it.

    AI is almost the reverse of this. An LLM has “evolved” (been trained) to write things that sound good, with little emphasis on understanding what it writes. Its “understanding” is about patterns in writing rather than underlying logic. This means that if the LLM encounters something that isn’t standard language, it will “flail” and start applying what it knows, regardless of how well it applies. In the chess example, that might mean just responding with the most common move, regardless of whether it can actually be played (a rough way to check this is sketched just below). Ultimately, no matter what you input, an LLM is trying to find and replicate patterns in language, not underlying logic.
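    To make the chess example concrete, here’s a minimal sketch (assuming the python-chess library, which isn’t part of the discussion above) of how you could check whether a move string produced by a model is even legal in a given position. A model that is only pattern-matching on notation will regularly fail this kind of check.

        # Minimal sketch: is a model-suggested move legal in the current position?
        # Assumes the python-chess library; the move string stands in for whatever
        # text an LLM happened to produce.
        import chess

        def is_suggestion_legal(fen: str, suggested_san: str) -> bool:
            """Return True if the suggested move (in SAN) is legal in the given position."""
            board = chess.Board(fen)
            try:
                board.parse_san(suggested_san)  # raises ValueError if illegal or unparseable
                return True
            except ValueError:
                return False

        # Example: after 1.e4 e5, Ke2 is legal for White, but castling (O-O) is not yet possible.
        board = chess.Board()
        board.push_san("e4")
        board.push_san("e5")
        print(is_suggestion_legal(board.fen(), "Ke2"))  # True
        print(is_suggestion_legal(board.fen(), "O-O"))  # False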


  • The LLM doesn’t have to imagine a board; if you feed it the rules of chess and the dimensions of the board, it should be able to “play in its head”.

    That assumes it knows how to play chess. It doesn’t. It knows how to have a passable conversation. Asking it to play chess is like putting bread into a blender and being confused when it doesn’t toast.

    But human working memory is shit compared to virtually every other animal’s. This and processing speed are supposed to be AI’s main draw.

    Processing speed and memory in the context of writing. Give it a bunch of chess boards or chess notation and it has no idea which it needs to remember, let alone where or how to move. If you want an AI to play chess, you train it on chess gameplay (see the sketch below for what that data might look like), not books and Reddit comments. AI isn’t a general-use tool.
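    To give a rough idea of what “training it on chess gameplay” would mean in practice, here’s a small sketch (again assuming the python-chess library; “games.pgn” is just a placeholder filename) that turns recorded games into plain move sequences, the kind of domain-specific data a chess model would actually need.

        # Rough sketch: convert recorded chess games (PGN) into move sequences.
        # Assumes the python-chess library; "games.pgn" is a placeholder filename.
        import chess.pgn

        def pgn_to_move_sequences(path: str) -> list[list[str]]:
            """Read every game in a PGN file and return each as a list of SAN moves."""
            sequences = []
            with open(path) as f:
                while True:
                    game = chess.pgn.read_game(f)
                    if game is None:  # end of file
                        break
                    board = game.board()
                    moves = []
                    for move in game.mainline_moves():
                        moves.append(board.san(move))  # record the move in standard notation
                        board.push(move)
                    sequences.append(moves)
            return sequences

        # Each sequence, e.g. ["e4", "c5", "Nf3", ...], could then serve as one
        # training sample for a chess-specific model, e.g. pgn_to_move_sequences("games.pgn").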




  • PlzGivHugs@sh.itjust.works to memes@lemmy.world · Soon
    Tomska comes to mind as a pretty hilarious example - not just because he turns them into skits, that’s normal enough. He had a whole saga trying to figure out how far he could push the boundaries of the VPN company sponsoring him before they would start intervening. It started off simply enough, with the South Park philosophy of “add provocative stuff so they cut that, rather than the jokes we like.” Rather than editing the script, they approved it as-is. He thought that was funny, and took it as a challenge. After increasingly crass and violent ads (on-brand for him, and with appropriate content warnings), he eventually went so far as to include an ad that even he considers way too far. Said ad later had to be edited out of the video it was included in. In my opinion, despite very obviously being ads, it’s collectively some of the funniest content he’s made.

    Here are his videos recapping the saga:


    Dear Surfshark, Please Fire Me

    Dear Surfshark, Please Forgive Me






  • Using your clones example, the Slay the Spire “clones” that give roguelike deckbuilders a bad name aren’t Inscryption or Monster Train or Balatro. It’s things like Across the Obelisk and Wildfrost, which are good but fail to capture what makes the others great, and the numerous low-effort copies you’ve likely never heard of, whose developers saw the genre as an easy way to make a good game without understanding it. It’s not that roguelike deckbuilders are bad, obviously; it’s that lazy or thoughtless use of their mechanics is. A game isn’t one mechanic, and trying to treat it as such just results in a messy or bad game.


  • It’s a crutch because it’s expected to hold the game up, rather than the game supporting its own weight. In your bullet-hell example, dodging isn’t a crutch; it’s the foundational mechanic. A better example would be a slot-machine system (something that is near-inherently engaging) being added to a bullet-hell game, not because it fits but because it’s fun on its own and helps distract from the fact that the developers haven’t put any effort into the core gameplay. The mechanic isn’t a crutch; its inclusion as a tacked-on addition is.


  • The mechanic itself isn’t the issue, but how it is implemented.

    It depends on how (and where) it’s implemented, is his point. It needs to be woven into the combat system as it is in FromSoft games, Batman, Ultrakill, or Cuphead, not tacked on because it’s easy or popular. Each of those uses parrying in a different way to enhance its combat. On the other hand, if you take the mechanic without the greater context or an understanding of why it works, it tends to stand out as bad or go unused. Doom Eternal is an example that immediately comes to mind. The whole game is about fast-paced combat, with a plethora of new mobility mechanics; that is, until you encounter one of the enemies you need to parry. Then the game comes to a grinding halt while you wait for the enemy to act so you can react, completely at odds with the rage-fueled persona and the mobility focus of every other mechanic. Compare that to Ultrakill, where parrying isn’t just a reactive way to mitigate damage; it’s a situational attack that lets you keep moving and keep up your carnage.

    Game mechanics work best when they’re cohesive. Parrying, due to its simplicity, can be tacked on easily, breaking that cohesion if it isn’t given the same weight as the rest of the mechanics.




  • They had some idea, although it was less certain than it appears in the context of Saving Private Ryan.

    First of all, there were efforts to weaken the defenses. Both naval bombardment and bombs dropped from planes were meant to significantly soften the defenses before the landing. According to the plans, this should have left the defenses greatly weakened. In reality, the naval bombardment was nowhere near heavy enough, and the bombers missed their targets due to bad weather. This was only discovered as the troops reached the beach.

    Once the infantry began landing, there was also supposed to be quite a bit more support for them. Specialized amphibious tanks were built, meant to be driven up onto the beaches to provide cover for the infantry. This almost immediately went awry, as the rough water swamped or sank dozens of the initial tanks, which led to the remaining tanks being brought in by landing craft, slowing their deployment. Even among those brought in by landing craft, many were lost.

    Also worth noting is that the Normandy beaches don’t actually look much like they do in the movie. When they land in Saving Private Ryan, it looks like they’re only 20 meters or so from the cliff. In actuality, it was much, much further. The famous photo from the landing, Into the Jaws of Death by Robert F. Sargent, gives you an idea of what the beach actually looked like and the conditions that morning: visibility was low, and the troops were likely a hundred meters or more from the cliff face. They were less likely to get shot the moment the gates opened than the movie suggests, although it was horrifically worse in nearly every other way.

    If you’re really interested in more detail, TimeGhost has an excellent documentary on the subject on their D-Day 24hrs channel (split into pieces to make it watchable as a series) that covers the background, the events of the day, and the surrounding context in extreme detail. That said, it’s a multi-day watch, given that it’s 24 hours long.





  • My point of contention is that the arguments you’re using are flawed, not your intentions. OpenAI, Meta, Disney, etc. are in the wrong because they pirate/freeboot and infringe on independent artists’ licenses. It’s not their use of technology or the derivative nature of the works it produces that is the problem: making AI the face of the issue shifts the blame away from the companies, and allows them to continue to pirate/freeboot/plagiarize (or steal, as you define it) from artists.

    Yes, part of my point is that capitalism is bad, but that’s further up the chain than what I was arguing. My point is that copyright law, and more importantly its implementation and enforcement, is broken. Basically all of your issues originate not with AI but with the fact that independent artists have no recourse when their copyrights are violated. AI wouldn’t be an issue if AI companies actually paid artists for their work, and artists could sue companies that infringe on their rights. The problem is that artists are being exploited and have no recourse.

    Using an allegory to hopefully make my point a bit clearer: imagine you have a shop of weavers (artists). The company running the shop brings in a loom (AI), starts chaining their workers to it, and claims it’s an Automatic Weaver™ (pirating and violating artists’ rights). The problem isn’t the loom, and blaming it shifts blame away from whoever decided to enslave their workers. Banning the loom doesn’t prevent the shop from just chaining the workers to their desks, as was often done in the past, nor does it prevent them from bringing in Automatic Potters™. If you want to stop this, even ignoring the larger spectre of capitalism, it should be the slavery that is outlawed (already done) and punished (not done), not the use of looms.

    If you are trying to fix the current state of AI and prevent artists from being exploited by massive companies in this way, banning AI will only slow it down and will limit potentially useful technology (that artists should be paid for). Rather than tackle one of the end results of the problem, you need to target it closer to its root: the fact that large companies can freely pirate, freeboot, and plagiarize smaller artists.


  • It isn’t current AI voice tech that was the issue; it was the potential of future AI they were worried about. AI voices, as they are now, are of similar quality to pulling someone off the street and putting them in front of a mid-range mic. If you care about quality at all, you’ll always need a human (barring massive changes to how AI tech functions).

    And to be clear, what about AI makes it the problem, rather than copyright? If I can use a voice synthesizer to replicate an actor’s voice, why is that fine and AI not? Shouldn’t the reproduction of an actor’s voice be right or wrong based on why it’s done and what its implications are, rather than on the technology used to replicate it?

    Edit: And to be clear, just because a company can use it as an excuse to lower wages doesn’t mean it’s a viable alternative to hiring workers. Claims that they could replace their workers with AI are just the usual capitalist bullshit excuse to exploit workers.