Lvxferre [he/him]

I have two chimps within, Laziness and Hyperactivity. They smoke cigs, drink yerba, fling shit at each other, and devour the face of anyone who gets close to either.

They also devour my dreams.

  • 53 Posts
  • 4.85K Comments
Joined 2 years ago
Cake day: January 12th, 2024


  • Lemmy has Reddit. PieFed has Lemmy.

    Also, from 4chan’s PoV, Reddit is more like a boogerman than a boogeyman: it’s that weirdo creep who makes you go “eew”, whom you avoid at all costs, and if you touch them by accident or out of social pressure (“why no handshake?”), you immediately wash your hands.

    Instead the actual boogeymen are internal: for /g/ it’s /a/, for /b/ and /int/ it’s /pol/, and for almost everyone else it’s /b/.




  • “This should not be seen, in our view, as a cautious or negative stance on Nvidia, but rather in the context of SoftBank needing at least $30.5bn of capital for investments in the Oct-Dec quarter, including $22.5bn for OpenAI and $6.5bn for Ampere,” Rolf Bulk, equity research analyst at New Street Research, told CNBC.

    When I read that, I was puzzled: when the bubble bursts, OpenAI will be way more affected than Nvidia, as the latter is basically the guy selling shovels in the gold rush. And odds are SoftBank’s CEO knows it; so why is he shifting investments this way?

    Then I remembered that this often-quoted excerpt from The 18th Brumaire of Louis Bonaparte applies here: “Hegel remarks somewhere that all great world-historic facts and personages appear, so to speak, twice. He forgot to add: the first time as tragedy, the second time as farce.”

    The “first time” here is the dotcom bubble, often compared with the current AI bubble. When the internet was becoming popular, you had that flood of dotcom businesses with overpriced stocks, stocks went brrr then kaboom, bursting around early '00. Like this:

    Note, however, how sharply prices rose in '99. I think SoftBank is betting on that: buy stocks, sell them juuuuust before the bubble bursts, and you get a nice profit.


  • Here’s the open letter. I also recommend reading what Wikipedia says about her, and drawing your own conclusions.

    “We are not predicting human-level AI next year,” a Commission spokesperson told Euractiv in response to the scientists’ open letter, arguing that AI is developing faster and less predictably than older forecasts had suggested.

    “This is about being prepared, not declaring a date,” they added. “Responsible planning is not guessing the future, it’s preparing for different scenarios.”

    CUT THE CRAP. Even if we interpret her statement as a figure of speech, she still fucked it up. She is a politician, dammit; it’s part of her job to be careful with the shit she says.



  • You got me curious, so I checked it.

    I downloaded this wordlist with 479k words, and used find+replace to count four strings: cie, cei, ie, ei. Here’s the result:

    • 16,566 ie (75%) vs. 5,649 ei (25%)
    • 875 cie (74%) vs. 302 cei (26%)

    So the basic rule (i before e) holds some merit, but the “except after c” part is bullshit - it’s practically the same distribution.

    Of course, this treats all words as equiprobable; the results would be different if you weighted each word by how often it actually appears in text.
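    For anyone who wants to replicate the count without manual find+replace, here’s a minimal Python sketch. The function and the demo list are made up here for illustration; feed it the real 479k-word list (one word per line) to reproduce the numbers above:

```python
from collections import Counter

def substring_counts(words):
    """Count occurrences of cie/cei/ie/ei across a word list."""
    counts = Counter()
    for word in words:
        word = word.strip().lower()
        for pattern in ("cie", "cei", "ie", "ei"):
            counts[pattern] += word.count(pattern)
    return counts

# Tiny stand-in list; swap in open("wordlist.txt") for the real thing.
# Note: the plain "ie"/"ei" counts include the "cie"/"cei" hits,
# exactly like a naive find+replace count would.
demo = ["believe", "receive", "science", "ceiling", "weird"]
c = substring_counts(demo)
print(c["ie"], c["ei"], c["cie"], c["cei"])
```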



  • It’s a general pattern someone noticed and then rhymed: ⟨ie⟩ is more likely to appear than ⟨ei⟩ in English, except after ⟨c⟩. But it is not a real rule; there’s no orthographic restriction behind the pattern, nor even an underlying phonemic reason. So you’re bound to see exceptions everywhere, to the point that the pattern is useless as a mnemonic.





  • Based on stuff said in the comments (“İt happened when I asked for weather, maybe someone can replicate it.”), I did some dumb test. Using duck.ai because… well, guess why I’m not subscribed to ChatGPT? Privacy. The article confirms my decision, by the way.

    Anyway, I was curious, I wanted to know which location it would assume I’m from.

    I don’t know which part is the dumbest - making shit up / lying / assuming, acknowledging its own intellectual dishonesty… or not taking spelling into account. (Using British spelling might not be a sign that someone is from the UK [I’m not], but it’s a pretty good sign the person is not in the USA.)


  • For any other game, this would be a sign the player isn’t too serious about winning. Nethack, though?

    You quaff the fountain. A bunch of water moccasins pops up.
    You go to the mines. Gnome has a wand.
    You pick up a grey stone. Loadstone. Cursed. You’re strained, can’t fight for shit. A newt kills you.
    Ants claim another victim. Go, Team Ant!
    You fall into a trap. It has poisonous spikes.
    [insert other 9000 ways the game tries to kill you]
    Do you want your possessions identified?


  • I predicted something similar. Basically: LLMs generating “fluff” speech for unimportant NPCs, so they don’t repeat the same thing over and over. Then the underlying tech (neural networks) replacing scripts to decide opponent actions; the main appeal will be that opponents adapt to your playstyle, as if you were “training” the model without realising it.

    For example, let’s say you’re playing a racing game. Your first runs will be pretty much the same as now. But in later runs, the AI catches on to your strategy and reacts to it, like:

    • if you’re often bumping enemies off the track, the AI avoids getting close to you, even if it needs to slow down a bit.
    • if you lead the run from the start, fuel consumption be damned, the AI might stay right behind you, as if “riding” your pressure drag, until the right moment, when it speeds up to run past you.
    • if you avoid helping the opponents with pressure drag, and keep the boosters/nitros for the last lap, the AI might do the same.
    • etc.

    It would be challenging for game designers, too; they’d need to find the right metrics for the AI. For example, if you simply “reward”/“punish” the AI based on wins and losses, it will use strategies like ganging up on the player - sure, that’s effective, but it’s not fun at all.
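    The adaptation idea above can be sketched with a toy example. Everything here is hypothetical - an opponent that tracks how often the player rams others (via a simple exponential moving average) and keeps more distance the more aggressive the player is:

```python
class AdaptiveOpponent:
    """Toy racing opponent that adapts to the player's aggression."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.caution = 0.0  # 0 = tailgates freely, 1 = keeps its distance

    def observe(self, player_rammed):
        # Exponential moving average of how often the player rams opponents.
        target = 1.0 if player_rammed else 0.0
        self.caution += self.learning_rate * (target - self.caution)

    def following_distance(self, base=5.0, max_extra=20.0):
        # Aggressive players see opponents hanging further back.
        return base + max_extra * self.caution

opp = AdaptiveOpponent()
for rammed in [True, True, True, False, True]:  # player rams a lot
    opp.observe(rammed)
print(opp.caution, opp.following_distance())
```

    The same skeleton generalises to the other bullets: any observable habit (boosting on the last lap, drafting, etc.) becomes a tracked statistic that nudges the opponent’s behaviour.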

    BTW, here’s a cool video of someone “teaching” an AI to play Monopoly. I believe the process will be similar-ish, except the models would already be somewhat pre-“trained” when you buy the game, and further “training” would come simply from the player playing it.



  • 15 years from now:

    The desktop computer will still be there, same as before. Console market share will decrease, but consoles will still be there. Smartphone games will become increasingly complex, and some will be made with phone joysticks in mind.

    A bunch of new input methods will be released; they’ll be seen as gimmicks, most will fade away. One or two will stick for longer.

    There’ll be a bigger crash than in 1983, and a lot of AAA studios will file for bankruptcy. It’ll be like a forest fire - as it burns down the old, big trees, it leaves space for smaller plants to thrive.

    You’ll see at least one game featuring a multiplayer version of any given current single-player genre. Multiplayer bullet heaven, multiplayer colony simulator, multiplayer dating sim…

    Speaking of bullet heavens (I mean things like Vampire Survivors), you’ll see elements of the genre creeping into other genres, much like RPG elements did in the past.

    Nethack version 3.8.4 will be released. It’ll have changes like

    • fixed exploit where players would #name a corpse Vladsbane and use it to kill Vlad, then use the output message to know if the corpse was fresh or tainted.
    • fixed critical bug: if you #wish for a statue of the fourth rider and cast stone to flesh on it, hitting it with a rubber chicken used to end your run. Now it correctly reverts the rider back into a statue.

    LLM presence in games will be subtle. For example, irrelevant NPCs might say things generated by an LLM instead of pre-scripted lines. This won’t become the standard, because LLMs tend to output hallucinations that confuse players (e.g. referring to some item that doesn’t exist), but some games will make good use of it.

    “Evolving” enemies, on the other hand, will be the next big hit. They won’t use any current AI model; instead, they’ll run local (and far simpler) neural networks. For example, if you kill lots of enemies by luring them into a pit, you might notice newer enemies learning to avoid pits.
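    As a toy illustration of such a “local, simpler network”: a single-neuron logistic learner that picks up “tiles next to pits are dangerous” from death/survival outcomes. All names and numbers here are made up for the sketch:

```python
import math

class PitWaryEnemy:
    """Single-neuron 'danger estimator' (illustrative names only)."""

    def __init__(self, lr=0.5):
        self.w = 0.0   # weight for the "pit is adjacent" feature
        self.b = 0.0   # bias: baseline fear level
        self.lr = lr

    def danger(self, pit_adjacent):
        # Sigmoid output in (0, 1): estimated chance of dying on this tile.
        return 1.0 / (1.0 + math.exp(-(self.w * pit_adjacent + self.b)))

    def learn(self, pit_adjacent, died):
        # One gradient step on logistic loss - each outcome is a training signal.
        err = self.danger(pit_adjacent) - died
        self.w -= self.lr * err * pit_adjacent
        self.b -= self.lr * err

enemy = PitWaryEnemy()
# The player keeps luring enemies into pits; later spawns "inherit" the updates.
for _ in range(200):
    enemy.learn(pit_adjacent=1.0, died=1.0)  # deaths next to pits
    enemy.learn(pit_adjacent=0.0, died=0.0)  # survival elsewhere
print(enemy.danger(1.0), enemy.danger(0.0))
```

    After training, danger(1.0) is high and danger(0.0) stays low, so a pathfinder that penalises high-danger tiles would start routing around pits - with no scripting involved.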



  • I live in a temperate region, so sub-zero temperatures are already kind of uncommon, even in winter, and it barely snows here - the last two times were in 1975 and 2013. (It does hail sometimes, though. Bloody hail last year killed one of my pepper plants ;_;)

    That said, I’d probably shape and pack individual blocks, perhaps even gluing them together with some water, if it’s cold enough for that.