The central webUI would be key for adoption by major players, and more time as well. It wasn’t long ago that Debian, Xorg, and Arch (still in progress) migrated to GitLab, for example. Those migrations are expensive in people and time.
And adoption by regular individuals, besides enabling the webUI, might be way harder, unless someone contributes resources to sr.ht to allow hosting projects for free, even if with no CI support. It’s hard to get individual adoption at any cost, even a really low one, when there are alternatives which, BTW, violate SW licenses, for free, :(
Seems better now, though still slower than it used to be. Libredd.it feels close to usual. So that’s it for me: I’ll keep using libredd.it, although I like teddit’s UI better, :(
Thanks!
As mentioned, I started noticing it being slower one or two weeks back. More than usual (there’s always been some slowness). So the only thing that came to mind was Reddit doing tricky things, but I have no clue how to prove it.
I’ll keep trying both libredd.it and teddit, hoping libredd.it reacts just a bit faster by virtue of being a mostly compiled frontend (Rust), though I don’t think that’ll help much, since I noticed similar slowness with it…
Thanks!
Better? :)
See, it all depends. As @Jeffrey@lemmy.ml mentioned, out of the box you can easily start mounting remote stuff in a secure way. Depending on the latency between you and the remote location, SSHFS might be more resilient than NFS, though in general it might be slower (data goes encrypted and encapsulated by default). Still, within the same local LAN (not as remote as mounting something from Texas into Panamá, for example), I’m more than OK with SSHFS. CIFS or smbfs is something I prefer to avoid unless there’s no option: you need a Samba server exposing a “shared” area, it requires MS-NT-style configuration to work, and managing access control and users is, well, NTish. To me it’s way simpler to access a remote FS through SSH on a device I already have SSH access to. So it boils down to NFS vs. SSHFS, and I consider SSHFS the easier, quicker to set up, and more secure way.
But “better”, besides being somewhat subjective, depends on your taste as well.
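For reference, this is the kind of out-of-the-box flow I mean with SSHFS; a minimal sketch, assuming key-based SSH access already works (the host and paths are made up for illustration):

```sh
# mount a remote directory over SSH (needs the sshfs package installed)
mkdir -p ~/mnt/remote
sshfs user@remote-host:/srv/data ~/mnt/remote

# work on the files as if they were local, then unmount
fusermount3 -u ~/mnt/remote   # plain `fusermount -u` on older fuse2 setups
```

No exports file, no Samba shares, no extra users to manage: if SSH works, the mount works.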
Also:
Tracking One Year of Malicious Tor Exit Relay Activities (Part II)
I’m wondering if it’s still that bad nowadays.
FYI: KMail does support Office 365 + Exchange; the thing about the Kontact suite is its Akonadi DB dependency and all the KDE deps it requires. Like anything KDE you install, it brings in a bunch of other stuff, usually nothing you end up using…
However, I do like how KMail integrates with local GnuPG, unlike Thunderbird’s librnp, which I end up replacing with Sequoia Octopus librnp…
I misread the article’s title, and yes, I didn’t see further signs of a privacy discussion within it, though this conclusion:
DRM’s purpose is to give content providers control over software and hardware providers, and it is satisfying that purpose well.
is precisely one of the things I dislike about DRM… At any rate, my bad on the title…
We don’t have to agree with his criteria, do we? Starting from the fact that most DRM implementations are not open source. Besides, in order to control what you use, DRM implicitly has access to see what you get, when you get it, where you use it, and so on. That’s by definition a privacy issue: they can get stats on what you consume, how often you use it, where, on which devices, and so on.
But the main issue with DRM, I’d agree, is not privacy itself; it’s an ethical one. DRM has never prevented piracy. Its main effect is controlling and limiting your use of what you acquire/buy: disallowing sharing (sometimes even with yourself), disallowing unauthorized devices, or disallowing access to content you should have access to unless you have an internet connection to the corp watching and controlling how you use such content, or whatever else is protected under DRM.
Of course, the blog comes from someone working at a big corp. At any rate, I guess not everyone who supports open source actually agrees with the FSF that DRM is unethical. It so happens I do…
https://www.fsf.org/campaigns/drm.html
https://www.defectivebydesign.org
https://www.defectivebydesign.org/what_is_drm
https://www.fsf.org/bulletin/2016/spring/we-need-to-fight-for-strong-encryption-and-stop-drm-in-web-standards
Ohh, there’s a tweet. However, I’ll have to see whether it’ll allow using OpenKeychain instead of TB’s own librnp, which I really dislike; on the desktop I use Sequoia Octopus librnp (on top of GnuPG) instead.
I really don’t like TB’s way of keeping and maintaining keys. I use GnuPG’s “external” key feature for my private key, but TB’s librnp still wants it stored in its own DB for no reason; otherwise it can’t do a thing. And what applies to FF applies to TB: they shouldn’t attempt to keep passwords and keys themselves. Better to use GnuPG, plus something like QtPass for passwords on the desktop; on Android there’s OpenKeychain and others… They have seen that it’s possible to do what the Sequoia team does, but I guess they like what they chose, :( Using Sequoia Octopus librnp on mobile might be rather complicated. It’s already somewhat tedious on distros that don’t officially support it, since TB’s changes lately tend to break Octopus, and besides, one needs to replace the library on every TB upgrade…
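To be clear about what that replacement involves, here’s a rough sketch of the swap on a desktop distro; the paths and file names are assumptions (they differ per distro and per how you built Octopus), so treat it as illustration only:

```sh
# back up Thunderbird’s bundled librnp and drop in the Octopus build
# (install dir is an assumption; it’s /usr/lib/thunderbird on some distros)
cd /usr/lib/thunderbird
sudo cp librnp.so librnp.so.orig
sudo cp ~/src/sequoia-octopus-librnp/target/release/libsequoia_octopus_librnp.so librnp.so
# and again after every TB upgrade, since upgrades restore the bundled library
```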
But for those using big corps’ email providers, then yes, TB on Android is good news. In general it’s good to have TB on mobile as well; I just wish they would provide more options to users. Extensions for GnuPG are all banned (admittedly, Enigmail was mangling TB’s code too much), and they don’t like Autocrypt either, so no options…
I prefer K-9, but that’s a matter of taste. The Gmail affair aside, I really never saw much difference (agreed, FairEmail is more “standard” in the way it treats folders, but once you get used to K-9, you see the benefits of its own ways).
On the Gmail affair: the route FairEmail chose for OAuth2 authentication with Gmail (K-9 doesn’t do it) is through having a Google account on your phone, so even if there’s a benefit over, say, the Gmail app, it’s terrible, even if you use LOS4microG or similar. I haven’t had a Google account for like 3 years now, and I recommend de-googling, but I understand it’s hard for many, particularly those using Google accounts for work, :(
Don’t worry, I had checked BiglyBT before. It does serve the dual function: it hooks into I2P trackers, which are special, and it can also hook into clearnet trackers, and whatever is being downloaded can be shared and exposed on both. It’s a specialized I2P torrent client, like Vuze.
That’s what I was trying to avoid, :( I’m looking to see if I could use any torrent client and just tunnel its traffic into the I2P router, as if it were a VPN or SSH tunnel. But so far, it seems you need a specialized torrent client, which can connect, at minimum, to I2P trackers, and use the different I2P file-sharing protocols…
If I’m mistaken, let me know, but it seems that’s the only way, at least from what I’ve read. Oh well, I don’t trust VPNs, and I don’t like the idea of using something I don’t trust unless forced to…
Thanks a lot!
Ohh, so I can use any torrent client (rtorrent, for example) as long as I only use I2P-style trackers, or so I understand from your post and also from the wiki, perhaps specifying the binding address and port, or something like that…
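Something like the following is what I’d try in ~/.rtorrent.rc; just a sketch under my assumptions (an I2P router running locally with its HTTP proxy on 127.0.0.1:4444, the usual default), not something I’ve verified:

```
# ~/.rtorrent.rc sketch: push tracker announces through the I2P HTTP proxy
network.http.proxy_address.set = "127.0.0.1:4444"

# bind to loopback so nothing leaks out over the clear internet
network.bind_address.set = 127.0.0.1

# disable DHT, PEX and UDP trackers, which would all bypass the proxy
dht.mode.set = disable
protocol.pex.set = no
trackers.use_udp.set = no
```

Even then, I suspect the actual peer connections wouldn’t traverse I2P without SAM support in the client, so this might only get the announces through; that’s exactly the part I’d need to confirm.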
Sorry if way too OT, :( What I2P torrent client are you using? I don’t like the idea of Vuze with a plugin, nor BiglyBT. I’m more inclined toward something like rtorrent (ncurses, and if run in a detached screen, you can monitor it remotely from any SSH session, without needing additional remote access or web publishing)…
Well, to me data brokers exist only because it’s possible to profit from user data, and in the end, no legislation will totally ban that practice. The issue is app developers trying to monetize through user data, and that monetizing model is not going anywhere: popularized by big companies, it’s now followed by pretty much anyone wanting to monetize.

Users really need to become aware of the issue in the first place, and not just a few of them; more like 80% or so. To start with, stop using OSes that share their data underneath, and since most of those are proprietary, that means getting rid of Android, iOS, MS, etc., and using at least open-source OSes, preferably FLOSS ones. Then start picking privacy-respecting apps, as FLOSS as possible. If a good chunk of users become aware and start changing their habits and consumption culture, then that monetizing model will really be impacted.

No legislation will cover all the corner cases needed to get rid of the bad practices; there will always be front/back doors found. It’s culture and consumption that need changing. And as with anything worthwhile, that’s the hardest to get, :(
Well, sourcehut can be self-hosted as well (ain’t it open source anyway?):
https://sr.ht/~sircmpwn/sourcehut
https://man.sr.ht/installation.md
That said, sourcehut has privacy-oriented and libre-oriented features GitLab doesn’t. But I understand that as of now, without a webUI, it’s pretty hard to adopt sourcehut; and even when it finally gets one, having already invested in GitLab (or, for the majority, GitHub), which implies time and resources, it might not be easy to even try sourcehut anyway.