• 0 Posts
  • 104 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • I had an on-site interview with the owner of a small IT company. He was 30 minutes late (and I’d arrived 10 minutes early to be… ya know, punctual).

    He offered no apologies and had this whole arrogance surrounding him. Complained that he had to drive to the office for this. Then after 5 minutes, it was obvious he didn’t even bother to look over my CV and was completely unprepared for the interview. … and somehow this was my fault.

    Of course, the interview didn’t go well (for either of us). He lowballed me at 30% below the average salary, while I was looking for 30% above. I rolled my eyes, shook hands, and left.

    Later, I got a call back from the recruiter “I had no idea you were asking that much. From what X (the owner) said, this was a complete disaster.” I said, “I agree” and politely hung up.

    In hindsight, I probably should have insisted on rescheduling (or just left) after 20 minutes. But I was young and didn’t have many interviews under my belt, so I took it as a learning experience.




  • It’s the “stringing it all together” that could be problematic.

    If you have multiple clients (desktop/cellphone) modifying the same entry (or even different entries in the same “database”), you need something smart enough to gracefully handle this, or at least tell you about it.

    I did the whole “syncing KeePass” thing and it was functional, but it also meant I needed to handle conflicts myself - which was annoying. I switched and really appreciate the whole “it just works” of self-hosted Bitwarden.
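
    To make the conflict case concrete, here’s a minimal sketch (Python, with made-up names - not how KeePass or Bitwarden are actually implemented) of why “copy the newest file” isn’t enough once two clients write to the same database:

    ```python
    # Toy model of a sync decision; all names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class VaultCopy:
        revision: int      # server revision this client last synced from
        local_edits: bool  # whether the client changed entries since then

    def sync_action(client: VaultCopy, server_revision: int) -> str:
        if client.revision == server_revision:
            # Nobody else wrote in the meantime: safe to upload local edits.
            return "upload" if client.local_edits else "up-to-date"
        if not client.local_edits:
            # Server moved ahead, we changed nothing: just download.
            return "download"
        # Both sides changed the same database since the last sync:
        # the tool has to merge entry-by-entry, or at least warn you.
        return "conflict"

    print(sync_action(VaultCopy(revision=5, local_edits=True), server_revision=7))  # -> conflict
    ```

    With “dumb” file syncing you end up handling that last branch yourself, which is exactly the annoyance I mean.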





  • Wow, thanks for the full transparency. You are awesome!

    My opinion would be option 2 (proxy requests), but with a higher cache TTL or simply an LRU (Least Recently Used) cache.

    If you’re getting throttled, that could be mitigated by increasing the cache retention period (or improving the cache hit rate).

    Another improvement: would it be possible to change the proxy so that, if the proxied requests are throttled, it simply sends the user an HTTP 302 to the origin (instead of a broken image)? There’s a rough sketch of this (plus the LRU cache) at the end of this comment.

    Regarding option 1 (full cache): I greatly appreciate your desire to hide/protect your users’ IPs, but it is outside the scope of what I expect from a Lemmy server. Maybe you could market and upsell this increased privacy as a subscription-based feature. However, if I want privacy, I’ll use a VPN.

    Regarding option 3 (user fetches content from origin): from a user’s perspective, I really don’t want my Lemmy experience to depend on hitting a bunch of (potentially) unreliable services. When I, as a lemm.ee user, request a post from Lemmy.world (for example), lemm.ee will proxy and cache that post and its comments. This is the distributed nature of Lemmy (as far as I understand it). Why restrict this caching to just posts/threads/comments and not include images (which, let’s face it, are as meaningful as pure text - especially w.r.t. memes)?
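
    As promised above, a rough sketch of the LRU cache plus the 302 fallback (Python pseudocode, not Lemmy’s actual code; all names are made up):

    ```python
    # Hypothetical image-proxy handler: serve from a small LRU cache, and if the
    # origin throttles the proxy, redirect the client to the origin instead of
    # returning a broken image.
    from collections import OrderedDict
    import urllib.request, urllib.error

    CACHE_MAX_ENTRIES = 1024
    _cache: "OrderedDict[str, bytes]" = OrderedDict()  # URL -> image bytes

    def proxy_image(origin_url: str):
        """Return (status, headers, body) for a proxied image request."""
        if origin_url in _cache:
            _cache.move_to_end(origin_url)          # cache hit: mark as recently used
            return 200, {}, _cache[origin_url]
        try:
            with urllib.request.urlopen(origin_url, timeout=10) as resp:
                body = resp.read()
        except urllib.error.HTTPError as err:
            if err.code in (429, 503):              # origin is throttling the proxy
                return 302, {"Location": origin_url}, b""  # bounce the user to the origin
            raise
        _cache[origin_url] = body                   # cache miss: store and evict the oldest
        if len(_cache) > CACHE_MAX_ENTRIES:
            _cache.popitem(last=False)
        return 200, {}, body
    ```

    A bigger cache (or longer TTL) raises the hit rate, and the 302 branch means the worst case for the user is “slightly less private”, not “broken image”.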



  • In addition, you can force your cellphone to GSM/2G (ie: super slow internet).

    It depends on what your TV does when it “activates”. If it just needs to activate/register, it should be fine. If it needs to update/upgrade/add a bunch of crapware, your internet will be so slow that you can turn the TV off before it finishes. (Note: there is a slim chance this could leave your TV in a broken state - if it does, simply do a factory reset and try again.)




  • I don’t want PCs to be like smartphones. I don’t want locked bootloaders.

    I’m sorry to burst your bubble, but since Microsoft made TPM mandatory for Windows 11+, locked-down bootloaders are on their way.

    Basically, TPM allows (Windows) software to validate/verify the integrity of the OS and hardware. This could also include the bootloader/BIOS if Microsoft chooses to go that far (there’s a toy sketch of the idea at the end of this comment).

    TPM is the PC equivalent of attestation on Android, which is the exact reason your banking app won’t work on a rooted/custom Android phone.

    That being said, we should embrace ARM. x86 (Intel/AMD) has 30+ years’ worth of “history” baked into each (CISC) chip. This complexity is why your PC draws soooo much power and generates soooo much heat.
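
    To illustrate the integrity-checking mechanism (a toy sketch only - this is the concept behind measured boot, not real TPM or Windows code):

    ```python
    # Each boot component is hashed into a running register (a PCR).
    # The final value only matches the expected one if every component
    # is exactly what the vendor expects - swap the bootloader and the
    # chain no longer verifies.
    import hashlib

    def extend(pcr: bytes, component: bytes) -> bytes:
        """TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(component))."""
        return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

    def measure(components) -> bytes:
        pcr = bytes(32)  # PCRs start zeroed at power-on
        for c in components:
            pcr = extend(pcr, c)
        return pcr

    expected = measure([b"firmware", b"vendor bootloader", b"kernel"])
    actual   = measure([b"firmware", b"custom bootloader", b"kernel"])
    print(expected != actual)  # True: one swapped component changes the whole chain
    ```

    Remote attestation (the Android banking-app case) is essentially the other side of this: the server refuses to talk to you unless the reported chain matches a value it trusts.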


  • This is loosely related to “online experience” (as you’ve covered most of the “tech tips”):

    When choosing a movie, don’t watch the trailers; instead, (blindly) watch what’s popular. (Obviously, if you’re into niche genres, this won’t work.)

    I’ve found Trakt is a good place to understand recent trends (and it just shows film posters). Then I’ll go to IMDb, maybe read the summary, but I always read the first/most popular user review and decide if it’s worth my time and money.

    The first/popular user review usually doesn’t contain spoilers.

    Since I’ve actively avoided trailers and spoilers, my enjoyment of films has nearly doubled - even for “bad movies” I probably wouldn’t have watched otherwise. It’s such a shame that a 2-minute trailer often shows many/most of the highlights of the film.



  • I’d proposed a potential solution.

    I’ll paraphrase: currently, every Lemmy instance (ie: lemm.ee, Lemmy.world, etc.) is an island. This is one of the strengths of Lemmy (federation), as we don’t have to worry about information being restricted, censored, or manipulated (ie: Reddit).

    However, as things currently stand, this federation comes at the expense of splitting the community between instances. asklemmy@lemmy.ml vs asklemmy@lemmy.world is a perfect example. Posts are either duplicated (which creates noise) or the split fosters a “Lemmy instance death by starvation”, meaning more and more conversations eventually drift towards one of the two asklemmy communities, leaving the other one to “starve out”. This defeats the entire purpose of federating.

    There has to be something better.

    For example, instead of “every instance is an island”, where the current hierarchy is “instance” -> “community” -> “post” -> “threads”, we could instead have “community (ie: asklemmy)” -> “post (ie: this post)” -> “instance (Lemmy.ml, Lemmy.world, etc.)” -> “threads (this comment)”.

    From a technical perspective, it would mean that each instance (that’s interested in hosting this supercommunity) would replicate the community names and posts (not the threads).

    Lemmy already kind of does this when a user pulls a post from another instance. For example, I’m on lemm.ee, but when I view posts from asklemmy@lemmy.world, lemm.ee will retrieve and cache them locally. Each instance would just need to share a unique identifier to associate the two communities/posts as “the same thing” (and this could simply be a hash of the community/post name; see the sketch at the end of this comment). Everything else would be UI.

    Each instance would take ownership of its copy of the community and post, which means it could moderate it according to its own standards.

    As an end user, you’d view a community and post, but the comments/threads would be grouped by the instance that hosts it. If there’s an instance you don’t like, you simply unsubscribe from it.

    For future iterations, it might be nice if the instance itself would auto-subscribe the user to, or suggest, other instances that host the same community. Meaning, if I subscribed to asklemmy@lemmy.ml, I’d automatically be subscribed to asklemmy@lemmy.world. However, to me as the user these are all separate subscriptions, so I can customize them as I see fit.
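
    To make the “unique identifier” part concrete, here’s a hypothetical sketch (not Lemmy’s actual data model; names are made up) of how two instances could agree that their communities are “the same thing” and how the UI could group threads by instance:

    ```python
    # Hypothetical: derive an instance-independent community ID from the name,
    # then group comments/threads by the instance that hosts (and moderates) them.
    import hashlib
    from collections import defaultdict

    def community_id(name: str) -> str:
        return hashlib.sha256(name.lower().encode()).hexdigest()[:16]

    # Both instances derive the same identifier from the community name...
    assert community_id("asklemmy") == community_id("AskLemmy")

    # ...while each instance keeps ownership of its own threads.
    comments = [
        {"community": community_id("asklemmy"), "instance": "lemmy.ml",    "text": "thread A"},
        {"community": community_id("asklemmy"), "instance": "lemmy.world", "text": "thread B"},
    ]

    by_instance = defaultdict(list)
    for c in comments:
        by_instance[c["instance"]].append(c["text"])

    for instance, threads in by_instance.items():
        print(instance, threads)  # the UI would render one collapsible group per instance
    ```

    Everything past the shared identifier (grouping threads, unsubscribing from an instance you don’t like) really is just UI on top of data each instance already has.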


  • I think OP is referring to the fact that bad actors exploiting facets of SEO (rather than providing “meaningful” content) used to have to programmatically generate content (pre-AI/LLM).

    For a real reader, it was obvious (at a quick glance) that this was meaningless garbage, as it was often large walls of text that didn’t make sense, or just lists of random keywords.

    With LLMs/AI, they’re still walls of text and random keywords, but now they’re grammatically/structurally correct and require no real effort to generate. Unfortunately, it means the reader actually needs to invest time reading it before realizing it’s garbage. You’ll also notice a growing trend in articles (especially “compare X vs Y” type articles) where the same content is recycled and rephrased to “pad” the article and give it a higher SEO ranking.