- 69 Posts
- 468 Comments
RagingHungryPanda@lemm.ee to Lemmy Be Wholesome@lemmy.world • Who'd like to visit Neuschwanstein castle? • 3 points • 26 days ago

I’ve been there. I recommend it. I dovetailed it with a trip to Baden-Baden (a hot-spring town with a great sauna resort) and a day trip to Karlsruhe. The Black Forest is nearby as well.
Do it!
RagingHungryPanda@lemm.ee to Ask Lemmy@lemmy.world • Wanting to restart my Webcomic. Any alternative to Wordpress that I could use on my website? • 2 points • 29 days ago

You need to follow it, but the thing is, you’re probably just as well off posting a link from your own account. It comes up as a separate account for me. I don’t think the federation there is really worth it.
RagingHungryPanda@lemm.ee to Ask Lemmy@lemmy.world • Wanting to restart my Webcomic. Any alternative to Wordpress that I could use on my website? • 2 points • 29 days ago

I’m hosting WriteFreely with Picsur as the image host for my blog. They’re pretty lightweight. Otherwise, a static site generator like others suggested.
RagingHungryPanda@lemm.ee to Ask Lemmy@lemmy.world • Wanting to restart my Webcomic. Any alternative to Wordpress that I could use on my website? • 3 points • 30 days ago

What format are you looking for? Something like a comic layout?
RagingHungryPanda@lemm.ee to Ask Lemmy@lemmy.world • how do I know that you guys are real and not bots? • 12 points • 1 month ago

I asked it to be more concise:
Response:
“Great point! It’s crucial to stay vigilant against bot activity on any platform. To detect bots, consider examining account history for inconsistencies, checking posting patterns that appear automated, and looking out for suspicious usernames. For server admins, collaborating with other instance admins can help share information and best practices. While no system is foolproof, proactive measures can significantly mitigate risks. Let’s work together to foster trust and maintain authenticity within our communities!”
RagingHungryPanda@lemm.ee to Ask Lemmy@lemmy.world • how do I know that you guys are real and not bots? • 23 points • 1 month ago

I asked my deepseek-r1:14b model and it gave this. It is quite wordy and likes bullet points. I wouldn’t bother typing all this out.
Thank you for bringing up these important questions about bot detection and trust in online communities. These are valid concerns, especially as technology evolves and becomes more sophisticated.
Verification on Reddit: While Reddit does employ measures to detect bots (such as CAPTCHAs and rate limiting), it’s true that no system is foolproof. However, the platform has been actively working on improving its anti-bot mechanisms over time. As a user or admin, staying informed about these measures can help us understand how communities are moderated.
Lemmy and Other Instances: On platforms like Lemmy (a federated social network), trust in other instances can be challenging because each instance operates independently. However, as a server admin, you can take steps to verify the credibility of an instance:
* Manual Verification: You can manually check accounts on other instances by reviewing their posting patterns and content for consistency with human behavior.
* Community Standards: Look for instances that follow similar community guidelines or moderation practices to yours. This can help ensure a level of trustworthiness.
* Transparency: Instances that are transparent about their moderation policies and allow communication between admins are generally more trustworthy.

Future Concerns: As AI and bot technology advance, the risk of sophisticated bots mimicking human behavior will increase. However, staying proactive by implementing verification systems (like PII checks) and educating your community about suspicious activity can mitigate these risks.
Spam vs. Human-like Bots: While traditional spam bots are easier to detect, human-like bots that use statistical data to mimic patterns are more insidious. This is why relying solely on automated tools isn’t enough—it requires a combination of technology and human oversight.
As server admins, we must remain vigilant but not overly discouraged. By fostering open communication within our communities and collaborating with other instance admins, we can build a network that prioritizes trust and authenticity.
It IS more helpful than “This should never happen.”
Thanks, I’ll check that out
RagingHungryPanda@lemm.ee to Asklemmy@lemmy.ml • What would life be like if reincarnation was a proven thing and we could remember all our past lives? • 9 points • 1 month ago

Oh God. I’ve already got enough baggage 😭
RagingHungryPanda@lemm.ee to World News@beehaw.org • My Afternoon with José Mujica: Uruguay’s leftist president passes, leaving an important legacy. • 4 points • 1 month ago

I enjoyed this read. It’s short, but it’s a look into someone totally different from just about anyone else in his position.
RagingHungryPanda@lemm.ee to Fuck Cars@lemmy.world • 5-minute headway isn't ideal, but it's a huge step in the right direction. I'm quite stoked for this. • English • 8 points • 1 month ago

I saw a joke where someone in Germany said they arrived too late for the 7:30am train, but were just in time for the 6:30am train. How late the trains run is practically a meme.
For the life of me, I’m not seeing where to add a tag or a label. I checked in 3 different UIs, including the main one.
I had wondered whether there should be Lemmy, Pixelfed, and maybe Mastodon communities for local cities.
I’ve been saving all of these today. Thanks a bunch!
RagingHungryPanda@lemm.ee to Fuck Cars@lemmy.world • 5-minute headway isn't ideal, but it's a huge step in the right direction. I'm quite stoked for this. • English • 24 points • 1 month ago

I wish we had 5-minute headways haha.
Thanks for giving it a good read-through! If you move to NVMe SSDs, you may find some of your problems just go away. The difference can be insane.
I was reading something recently about databases and disk layouts meant for transactional business applications versus ones meant for reporting, and one difference was that on disk the data was laid out either by row or by column.
That was a bit of a hasty write, so there are probably some issues with it, but that’s the gist.
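For what it’s worth, the row-vs-column idea can be sketched in a few lines of C# (the record and field names here are made up for illustration). A reporting query that only needs one field benefits from a columnar layout because that field sits contiguously, while a row layout drags every other field along with it:

```csharp
using System;
using System.Linq;

// Row-oriented: all fields of a record stored together.
record Order(int Id, string Customer, double Amount);

class LayoutSketch
{
    static void Main()
    {
        // Row layout: one array of whole records.
        var rows = Enumerable.Range(0, 1000)
            .Select(i => new Order(i, $"cust{i}", i * 0.5))
            .ToArray();

        // Column layout: one array per field.
        int[] ids = rows.Select(o => o.Id).ToArray();
        double[] amounts = rows.Select(o => o.Amount).ToArray();

        double rowSum = rows.Sum(o => o.Amount); // touches every field of every record
        double colSum = amounts.Sum();           // scans one contiguous array

        Console.WriteLine(rowSum == colSum); // same answer, different access pattern
    }
}
```

Same result either way; the difference is purely how much unrelated data the scan has to pull through the cache (or off the disk).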
yes? maybe, depending on what you mean.
Let’s say you’re doing a job that will involve reading 1M records or something. Pagination means you grab N records at a time, say 1000, in multiple queries as the job runs.
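As a minimal sketch of that loop (the “database” here is just an in-memory list standing in for whatever store you’re actually paging through, e.g. with OFFSET/FETCH in SQL):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class PaginationSketch
{
    // Stand-in data source; in real life this is your table.
    static readonly List<int> Database = Enumerable.Range(0, 10_500).ToList();

    // Stand-in for a real paged query.
    static List<int> ReadBatch(int offset, int limit) =>
        Database.Skip(offset).Take(limit).ToList();

    static void Main()
    {
        const int limit = 1000;
        int offset = 0, total = 0, batches = 0;

        while (true)
        {
            var batch = ReadBatch(offset, limit);
            if (batch.Count == 0) break; // empty page means we're done

            total += batch.Count; // do the actual per-batch work here
            batches++;
            offset += limit;
        }

        Console.WriteLine($"{batches} batches, {total} records");
    }
}
```

So 10,500 records at a limit of 1000 comes out to 11 queries, and memory use stays bounded by the batch size rather than the table size.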
Reading your post again to try and get context, it looks like you’re identifying duplicates as part of a job.
I don’t know what you’re using to determine a duplicate (whether it’s structural or not), but since you’re running on HDDs, it might be faster to get that information into RAM, then do the job in batches and update in batches. This will also allow you to do things like writing to the DB while doing CPU processing.
BTW, your hard disks are going to be your bottleneck unless you’re reaching out over the internet, so your best bet is to move that data onto an NVMe SSD. That’ll blow any other suggestion I have out of the water.
BUT! there are ways to help things out. I don’t know what language you’re working in. I’m a dotnet dev, so I can answer some things from that perspective.
One thing you may want to do, especially if there’s other traffic on this server:
- use WITH (NOLOCK) so that you’re not blocking other reads and writes on the tables you’re looking at (dirty reads are usually acceptable for a scan like this)
- use pagination, either with windowing or LIMIT/OFFSET (OFFSET/FETCH in T-SQL), to grab only a certain number of records at a time
Use a HashSet (record types make this easy, since they get value equality for free) or some other method of equality that’s property-based. Most Dictionary/HashSet types can take some kind of equality comparer.
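Here’s a small sketch of what property-based de-duplication with a custom comparer looks like; the `Item` type and its fields are made up, and the comparer here treats two items as duplicates when their names match, ignoring the Id:

```csharp
using System;
using System.Collections.Generic;

record Item(int Id, string Name); // records get value equality for free

// Custom "structural" equality: duplicates are decided by Name only.
class NameComparer : IEqualityComparer<Item>
{
    public bool Equals(Item a, Item b) =>
        string.Equals(a?.Name, b?.Name, StringComparison.OrdinalIgnoreCase);

    public int GetHashCode(Item item) =>
        StringComparer.OrdinalIgnoreCase.GetHashCode(item.Name);
}

class DedupSketch
{
    static void Main()
    {
        var seen = new HashSet<Item>(new NameComparer());
        var batch = new[]
        {
            new Item(1, "apple"),
            new Item(2, "Apple"), // duplicate by name, different Id
            new Item(3, "pear"),
        };

        foreach (var item in batch)
            if (!seen.Add(item)) // Add returns false if an equal item is already present
                Console.WriteLine($"duplicate: {item.Name}");

        Console.WriteLine($"unique: {seen.Count}");
    }
}
```

Swap the comparer for whatever actually defines “duplicate” in your data; the HashSet does the rest in O(1) per lookup.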
So, what you can do is asynchronously read from the disk into memory and start some kind of processing job. If that job also doesn’t require the disk, you can do another read while you’re processing. Don’t do a write and a read at the same time, since you’re on HDDs.
This might look something like:
```csharp
var offset = 0;
const int limit = 1000;

var task = readBatchFromDb(offset, limit);
var result = await task;

// If you only care about equality and not the data after use,
// you can just store the hash codes.
var data = new HashSet<YourType>(new YourTypeEqualityComparer());

while (!result.IsEmpty)
{
    offset = advance(offset);
    task = readBatchFromDb(offset, limit);     // start the next read batch

    var dataToWork = result.Except(data);      // don't rework any objects
    data.UnionWith(result);
    var dataToWrite = doYourThing(dataToWork); // CPU work while the read runs

    result = await task;          // finish the read before writing...
    await writeToDb(dataToWrite); // ...so we never read and write at once
    // (there's a lost optimization here: no CPU work happens during the write)
}
```

Or, set up a read/write job queue to keep things busy:

```csharp
abstract class IoJob
{
    public sealed class ReadJob(int offset, int limit) : IoJob
    {
        public int Offset { get; } = offset;
        public int Limit { get; } = limit;
        public Task<Data> ReadTask { get; set; }
    }

    public sealed class WriteJob(Data data) : IoJob
    {
        public Data Data { get; } = data;
        public Task WriteTask { get; set; }
    }
}

Task<IoJob> executeJob(IoJob job) => job switch
{
    // let's say this assigns the data to the ReadJob and returns it
    IoJob.ReadJob rj => readBatchFromDb(rj.Offset, rj.Limit),
    // and this returns the write job once the write completes
    IoJob.WriteJob wj => writeToDb(wj),
};

var jobs = new Queue<IoJob>();
jobs.Enqueue(new IoJob.ReadJob(offset, limit));
jobs.Enqueue(new IoJob.ReadJob(advance(offset), limit)); // second read, ready to go

while (jobs.Count > 0)
{
    var job = await executeJob(jobs.Dequeue());

    if (job is IoJob.ReadJob rj)
    {
        var result = await rj.ReadTask;
        if (result.IsEmpty) continue;

        offset = advance(offset);
        jobs.Enqueue(new IoJob.ReadJob(advance(offset), limit));

        var dataToWork = result.Except(data); // don't rework any objects
        data.UnionWith(result);
        jobs.Enqueue(new IoJob.WriteJob(doYourThing(dataToWork)));
    }
    // a completed WriteJob needs no further handling
}
```
I’m not entirely sure. I spent more than a year in Latin America and came back to prices being about 2-3x what I remember. Groceries before I left were 2x compared to before COVID.
Shit’s fucking expensive in the US.