• 12 Posts
Joined 2 years ago
Cake day: May 31, 2020


Right, so here’s what I believe to be facts, without having sources to prove every little detail:

Firefox’s main source of income is the default search engine deal with Google. Yes, Mozilla practically advertises Google Search by doing this, but Firefox does not send Google any more data than a visit to google.com itself would. If you change your default search engine, you’re completely unaffected.

Mozilla also does some advertising, but they are building their own (privacy-friendly) advertising network for that. They are not collaborating with Google for that.

The use of Google Analytics is for telemetry only, so they can improve their software with anonymized data.

This isn’t a great situation. Whenever they add privacy protections to Firefox, they’re biting the hand that feeds them + they’re competing with that hand + they need webpage owners to like them, too, since they have their own rendering engine.

But when it’s a decision about a smaller implementation detail, those parties won’t notice Mozilla’s decision and then Mozilla will gladly opt for the most privacy/user-friendly option.

If it is a larger decision, like good ad blocking, then they will often not make it the default, but give users the option to install an extension or change a setting. This is especially driven by the Tor Browser devs, who need these capabilities; if they’re not built into Firefox, they have to maintain their own patches on top of it.

So, with Firefox, we have a finance model that requires the user to configure a few things to get the most privacy-friendly option possible.

Vivaldi, Brave et al have a different model. They need significantly less money, because they’re not building their own engine. More than 99% of their code base is taken verbatim from Chromium/Blink. Those smaller implementation details were all decided on by Google.
And then they add content blockers on top to try to fix that.

This finance model generally allows them to be more privacy-friendly out of the box. But with 15 minutes of customizing Firefox, you get a privacy-friendly browser like no Chromium-based browser will ever be.
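To give a sense of what those 15 minutes of customizing can look like: these are a few real about:config preferences people commonly flip (via about:config directly or a user.js file in the profile folder; the exact set and values are a matter of taste, not an official hardening recipe):

```
// user.js -- read by Firefox at profile startup
user_pref("browser.contentblocking.category", "strict"); // strict Enhanced Tracking Protection
user_pref("privacy.resistFingerprinting", true);         // anti-fingerprinting work upstreamed from Tor Browser
user_pref("network.cookie.cookieBehavior", 5);           // Total Cookie Protection: isolate cookies per site
```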

@lienrag@mastodon.tedomum.net Yeah, and it’s not proof of a problem with the webpage either.

Google Analytics is bad on basically any webpage that uses it, because by default, it will share data with Google. But Mozilla has a deal with Google to block that: https://bugzilla.mozilla.org/show_bug.cgi?id=697436#c14

And you can use Google Analytics for just basic telemetry, which is not privacy invasive at all. You can do more, but this screenshot doesn’t actually provide evidence of that. And ad tracking will usually happen via ad domains, e.g. doubleclick.net.

I’m definitely on board with not just believing everything at face value, but then we need actual proof. Mozilla is legally a nonprofit with an express claim of wanting to protect privacy.
Any actual evidence of them breaking with that would set the internet ablaze. Any tech journalist would want that news story published. Their own employees would become whistleblowers sooner rather than later, because they are aware of the public image.

Therefore, if you don’t have conclusive evidence, I think it’s sane to assume that Mozilla is not being evil until you do find some. They are not a traditional company; for traditional companies, I have made the same observation that Google Analytics on the webpage == garbage. @Zerush@lemmy.ml

Just in general, 5 days of work for 2 days of living, doesn’t seem like a great deal.

Some people are kind of already living that romance with the Gemini protocol. It’s separate from the whole HTTP/HTML web, and you need a Gemini browser to access it. The markup language is rather similar to Markdown, so the fanciest tech available is images and ASCII art. Which is pretty hostile to advertising.
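For illustration, a page in Gemini’s “gemtext” markup looks roughly like this (the handful of line types shown is essentially the whole syntax; the content is made up):

```
# My cozy gemlog

Plain lines are just paragraphs of text.

=> gemini://example.org/posts/ An index of my posts
=> image.png A picture (images are links, never inlined)

* unordered list item
> a quoted line
```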

As far as I could tell, if you enjoy reading blog posts, this is actually quite a cozy little corner of the internet.

Again, I don’t know where you get the information from that Mozilla makes money off of surveillance. For many years now, they’ve had the problem that they’re overly reliant on Google, but from the search engine deal, not advertisements. See, for example, this article: https://www.zdnet.com/article/googles-back-its-firefoxs-default-search-engine-again-after-mozilla-ends-yahoo-deal/

They have tried to gain a foothold in advertising to reduce that dependence on Google, but that was always privacy-friendly advertising.

Firefox Sync is end-to-end-encrypted, too. See, for example:

They are generally able to recover sync data, because it’s supposed to be synced to one or more Firefox installations (it’s specifically not a backup service). When you request a password reset, they essentially just wipe what they have on their servers and then re-upload the data from your Firefox installations, encrypted with your new password.
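As a toy model of that reset flow (everything here is made up for illustration: the real Sync protocol derives keys differently and uses real encryption, not this XOR stand-in), the key point is that the server only ever holds ciphertext, so after a password reset it can only wipe and wait for a client to re-upload:

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    # Stand-in key derivation; the real protocol is more involved.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy symmetric "cipher" for illustration only -- NOT real cryptography.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class Server:
    """Only ever sees ciphertext; never the key or the plaintext."""
    def __init__(self):
        self.blob = None
    def upload(self, blob):
        self.blob = blob
    def wipe(self):
        self.blob = None  # password reset: discard everything

# The local Firefox installation holds the plaintext.
local_data = b"bookmarks + saved logins"
salt = b"fixed-demo-salt"

server = Server()
server.upload(xor_stream(derive_key("old password", salt), local_data))

# Password reset: the server can't re-encrypt (it has no key and no plaintext),
# so it wipes, and the client re-uploads under the new key.
server.wipe()
server.upload(xor_stream(derive_key("new password", salt), local_data))

# Only someone holding the new password can read the blob again.
print(xor_stream(derive_key("new password", salt), server.blob))
```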

Mozilla will support the APIs specified in MV3 to allow porting Chrome extensions easily. This does not mean they cannot offer other APIs.
And they have officially stated that they will continue to support the content-blocking API from MV2, for at least as long as there is no appropriate replacement.

See the “WebRequest” section here: https://blog.mozilla.org/addons/2022/05/18/manifest-v3-in-firefox-recap-next-steps/

Yeah, I’ve noticed that I’ll occasionally hesitate to click on that “Publish” button for a new software project, because I’ll think to myself, if someone starts using this, they’re fucked.

At the same time, I don’t want to put a disclaimer into every README stating that it’s hot garbage. Like, it’s a repo. Of course, it could contain software which is still in early development or unmaintained or whatever. And I’d rather tell what I’d like it to do someday rather than what ridiculous requirements it won’t fulfill.

I’ve kind of started to revel in my previously-not-really-strong decision to put my code up:

  1. as AGPL, which for example deters Google from ever using it, and
  2. on Codeberg, where it won’t get seen as much and it’s more at the heart of the open-source community rather than on this commercialized platform where most people only go to download released software.

I guess they mean that ensuring this safety is much more efficient when using such an algorithm. The naïve approach wouldn’t be eventually consistent, so it would require much more direct communication + conflict resolution between the servers.
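I don’t know which algorithm they actually use, but a minimal example of an eventually-consistent structure is a grow-only counter (G-Counter): each server only increments its own slot, and merging is a commutative, idempotent max(), so replicas can exchange state in any order and still converge without direct coordination:

```python
class GCounter:
    """Grow-only counter: one slot per node, merge by per-slot max."""
    def __init__(self, node_id, nodes):
        self.node_id = node_id
        self.counts = {n: 0 for n in nodes}

    def increment(self):
        # A node may only ever bump its own slot.
        self.counts[self.node_id] += 1

    def merge(self, other):
        # Commutative and idempotent: order and repetition don't matter.
        for n in self.counts:
            self.counts[n] = max(self.counts[n], other.counts[n])

    def value(self):
        return sum(self.counts.values())

nodes = ["a", "b"]
a, b = GCounter("a", nodes), GCounter("b", nodes)
a.increment(); a.increment()
b.increment()

# Exchange state in either direction; both replicas converge.
a.merge(b); b.merge(a)
print(a.value(), b.value())  # both converge to 3
```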

Oh, I can see that scenario. Mine was a rhetorical question, as I’ve been working in the data-shipping field for the past few years.

Thing is, if there’s a thousand power stations, there may well be a thousand different implementations + error codes, because for decades there was no need for a common method of error reporting.

The only common interface was humans. That’s why all of these implementations describe errors in human-readable text. And I would bet a lot of money that they’ve already had to extract those error codes from text logs.

Writing them out in, e.g., a standardized JSON format requires standardization efforts, which no one is going to push for while individually building these power stations.

That’s how you end up with a huge mess of different errors and differently described+formatted error codes, which only a human or human-imitating AI can attempt to read.
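That extraction step looks roughly like this in practice (the log lines, error codes, and regex patterns below are all hypothetical): one hand-written pattern per vendor format, each of which a human had to read the free text to write, normalized into a common JSON shape:

```python
import json
import re

# Hypothetical log lines -- every vendor formats errors differently.
logs = [
    "2023-04-01 12:00:03 ERR INV-042: inverter overtemperature, shutting down",
    "TURBINE FAULT code=E17 (gearbox vibration exceeds limit)",
]

# One hand-maintained pattern per vendor format.
patterns = [
    re.compile(r"ERR (?P<code>[A-Z]+-\d+): (?P<message>.*)"),
    re.compile(r"code=(?P<code>E\d+) \((?P<message>[^)]*)\)"),
]

def extract(line):
    """Normalize one free-text log line into a common JSON-ready dict."""
    for pat in patterns:
        m = pat.search(line)
        if m:
            return {"code": m.group("code"), "message": m.group("message")}
    return {"code": None, "message": line}  # unparsed fallback

print(json.dumps([extract(line) for line in logs], indent=2))
```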

I mean, there are definitely things they could have done that are less artificially intelligent, like keyword matching or even just counting how many error codes a power station produces. And I’m not sure you necessarily want a black-box AI deciding what gets power and what doesn’t. But realistically, companies around the planet will adopt similar approaches.
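For contrast, the “less artificially intelligent” baseline is deliberately dumb (keywords here are invented for the sketch): flag a station as critical if its error text contains any known-bad word.

```python
# Hypothetical list of known-bad words; in reality an operator would curate this.
CRITICAL_KEYWORDS = {"overtemperature", "fire", "shutdown", "vibration"}

def is_critical(error_text: str) -> bool:
    # Naive word-level match; no language "understanding" involved.
    words = set(error_text.lower().split())
    return not CRITICAL_KEYWORDS.isdisjoint(words)

print(is_critical("Gearbox vibration exceeds limit"))  # True
print(is_critical("Routine maintenance reminder"))     # False
```

It obviously misses paraphrases (“too hot” vs. “overtemperature”), which is presumably the gap the NLP approach is meant to close.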

TL;DR: They want to make their own business look unsuccessful, so that competition watchdogs allow the acquisition from Facebook.

And to further discredit that statement: I feel like GIFs are the closest the ‘old’ internet has to offer for what the young generation wants. There’s tons of GIFs being exported from TikTok every day.

The AI uses natural language processing to “understand” the text explanation behind each error code, said the engineer.

I mean, I’m glad it’s not just some dumb if-else chain, or even just basic circuitry, that’s being sold as “AI” here.

But at the same time: How did we get to a point where this is the best solution?

And that a few days after the load of GIFShell vulnerabilities. But nah, no need to patch anything.

You could do that, if you’re actively checking ping roundtrip times. It was in a local network and generally, the connection was working, so we didn’t think the ping or roundtrip times would be relevant.

Our software was just routinely logging ping+RTT and when scrolling through the logs, we noticed more or less by chance that the RTT is 300 ms, which is absurdly high for that context.

And well, this is just one example. If you’re doing time-sensitive stuff, it’s useful to know the timings of what you’re dealing with. It’s not usually essential, and you especially don’t need to know it for every possible context. But it can make your life easier.
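A cheap way to get such routine RTT numbers without raw-socket ICMP ping is timing a TCP connect (a sketch; the local throwaway listener just makes the demo self-contained, you’d point it at your sensor’s address instead):

```python
import socket
import threading
import time

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    # Rough RTT estimate: time to complete a TCP handshake.
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

# Throwaway local listener so the example runs anywhere.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=lambda: listener.accept(), daemon=True).start()

rtt = tcp_rtt_ms("127.0.0.1", port)
print(f"{rtt:.1f} ms")  # localhost should be well under 50 ms; 300 ms on a LAN is a red flag
```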


Yeah, I definitely wouldn’t have recognized it either, if I didn’t know the memes. It’s basically become its own internet culture thing.

I guess, because the original comic is presented in such a simple way, people thought they recognized it in other 4-panel comics. This led to people actually hiding it in other comics, which was eventually taken ad absurdum, kind of like rickrolling, where someone posts something as innocuous-looking as possible which will still make people think of the original.


So, is this …?

I’ve never seen the original, but from memes about there being lots of memes that are mimicking said thing, I feel like I might recognize it correctly.

Kotlin is a more complex programming language than Scala.

Scala still seems to have this bad rep from the times when functional programming features weren’t commonplace yet in most programming languages, because it stood for this oh-so-complex functional programming stuff.
Now, Kotlin is climbing up the TIOBE index, lauded by Java programmers as finally giving them the features they wanted, and as someone who’s now coded extensively in both, I don’t get it.

Kotlin is obviously heavily inspired by Scala and they seem to have roughly the same features. But Kotlin imposes tons of rules that limit these features in how you can use them.

However, I’m not talking about the good kind of rules, those which might help streamline the code style. It feels like they implemented half of each feature and then, rather than finishing the implementation, simply disallowed using it in any other way.
Arbitrary rules, which you as a programmer just have to memorize to please the language gods.

From a technical perspective, the only aspects where I can see Kotlin being better are its somewhat smoother Java interop and, if you want to build a DSL, some nice features for that. But those DSL features make the language worse / less streamlined when you don’t want to build a DSL, and it just seems to be worse in every other aspect.

Obviously, popularity rarely correlates with technical merit, I can accept that. But when people tell me I shouldn’t use Scala, because it’s so complex, I should use Kotlin instead, that shit fucking triggers me.

I mean, sometimes knowing ‘normal’ timing values can help you debug stuff.

For example, we were recently debugging why we sometimes didn’t get data in time from a network sensor. And we probably wouldn’t have thought much about the logs saying it took 300 ms to ping that sensor, if we didn’t know that 50 ms is enough to ping many webpages.

But yeah, unless you’re a performance tester, I cannot imagine why you would need to know all of these values, rather than just their rough relation to each other.

Plasma 5.25 Can Sync Accent Color with Wallpaper
From the release announcement: https://kde.org/announcements/plasma/5/5.25.0/

Hi, I created a community for the game Dungeon Crawl Stone Soup. It's an open-source game, so would've also fit onto [!opensourcegames@lemmy.ml](https://lemmy.ml/c/opensourcegames), but I wanted a place where I can post things which will only make sense, if you've actually played DCSS. So, this is quite a niche interest. I might be shouting into the void here, but that's okay. Of course, anyone else is welcome to shout with me. :)

dav1d is a decoder for the AV1 video codec. It's optimized for performance to achieve reasonable decoding speed even without hardware decoding support (which is still generally lacking for AV1).

Enlightenment 0.25.0 Release