They’re hiding the rules that decide when a captcha challenge gets triggered in the client: if an account racks up enough spam reports, that client can’t send any more messages until the captcha is solved. That’s it. The reason you can’t check how they’re doing it is that spammers would just read those rules as instructions on how to avoid getting caught.
Communication, messaging, everything is still E2EE. Nobody is getting anything out of this. If the FBI asks them for user data, they have nothing to hand over. They don’t need to warn users because they don’t keep any data anyway - as can be seen by the multiple subpoenas they’ve fought to make public, none of which turned up any useful info.
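To make it concrete, here’s a toy version of the kind of rule they’d be hiding. This is not Signal’s actual code or thresholds - the names, numbers, and window are all made up - it just shows the general shape of “enough spam reports means the client has to solve a captcha before it can send again”:

```python
# Toy sketch only - NOT Signal's actual logic. Illustrates the general idea of
# "enough spam reports => client must solve a captcha before sending again".
import time
from collections import defaultdict, deque

REPORT_THRESHOLD = 5             # made-up number
REPORT_WINDOW_SECS = 24 * 3600   # made-up window

reports = defaultdict(deque)     # account_id -> timestamps of spam reports
needs_captcha = set()            # accounts currently gated behind a captcha

def record_spam_report(account_id: str) -> None:
    now = time.time()
    q = reports[account_id]
    q.append(now)
    # Drop reports that fell out of the window.
    while q and now - q[0] > REPORT_WINDOW_SECS:
        q.popleft()
    if len(q) >= REPORT_THRESHOLD:
        needs_captcha.add(account_id)

def may_send(account_id: str) -> bool:
    # If this returns False, the service would answer the send attempt with a
    # "solve this captcha" challenge instead of relaying the (still E2EE) message.
    return account_id not in needs_captcha

def captcha_solved(account_id: str) -> None:
    needs_captcha.discard(account_id)
    reports[account_id].clear()
```

The exact thresholds and signals are the part they keep quiet, because publishing them tells spammers exactly how far they can push before getting challenged.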
Except phone numbers, dates / times, contacts… pretty much everything except message content.
This is incorrect.
They store:
The phone number you registered with
The date the account was created
The last day (not time) a client last pinged their servers
Signal’s access to your contacts lets the client (not them):
determine whether the contacts in their address book are Signal users without revealing the contacts in their address book to the Signal service [0]
They’ve been developing and improving contact discovery since at least 2014 [1]; I’d wager they know a thing or two about how to do it in a secure and scalable way. If you disagree or have evidence that proves otherwise, I’d love to be enlightened. The code is open [2]; anyone is free to test it and publish their findings. (There’s a rough sketch of the problem below the links.)
[0] https://signal.org/blog/private-contact-discovery/
[1] https://signal.org/blog/contact-discovery/
[2] https://github.com/signalapp/ContactDiscoveryService/
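To get a feel for why this is hard, here’s the naive hash-and-compare version of contact discovery. This is not what Signal ships - the 2014 post [1] explains that hashing alone is too weak because phone numbers have so little entropy that the hashes can be brute-forced, and the private contact discovery design [0] instead does the comparison inside an SGX secure enclave so the service never learns the address book. Everything below uses made-up numbers and is purely illustrative:

```python
# Naive contact discovery sketch - the approach Signal's blog posts explain is
# NOT good enough on its own, shown only to frame the problem.
import hashlib

def hash_number(phone: str) -> str:
    return hashlib.sha256(phone.encode()).hexdigest()

# Server side: hashes of registered users' numbers (toy data).
registered_hashes = {hash_number(n) for n in ["+15551230001", "+15551230002"]}

def naive_discovery(client_address_book: list[str]) -> list[str]:
    # Client uploads hashes; server answers which ones are registered.
    # Problem: the server (or anyone who obtains the hashes) can recover the
    # numbers by hashing all ~10^10 possibilities, so this leaks the contacts.
    matches = []
    for phone in client_address_book:
        if hash_number(phone) in registered_hashes:
            matches.append(phone)
    return matches

print(naive_discovery(["+15551230001", "+15559876543"]))
# Signal's actual private contact discovery [0] runs the comparison inside a
# remote-attested SGX enclave instead, so the service can't see the contacts.
```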
A simple system like that is easy to implement. I don’t think anyone’s questioning that they could build the worst possible anti-spam system, like the one you’re suggesting. The kinds of spam you see on modern systems need a bit more thought than “block if reported more than x times within some window”, because with that rule you could easily target people and disable their accounts remotely by coordinating false reports.
So yeah, it’s not magic if you’re happy with a dumb system that introduces other problems, but you really have to think things through if you want it to work well in the long run.
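Here’s a quick illustration of the difference - again, nothing to do with Signal’s real system, all names and thresholds invented. The naive version just counts reports, so any group of accounts can “vote” a stranger off the network. The slightly less dumb version only counts a report if the reporter actually received a message from the reported account, and counts each reporter once:

```python
# Illustration only, not Signal's system: why "block after N reports" is easy
# to weaponize, and one (still simplistic) way to blunt coordinated false reports.
from collections import defaultdict

THRESHOLD = 5  # made-up

messages_sent = defaultdict(set)   # sender -> recipients they actually messaged
reporters = defaultdict(set)       # reported account -> distinct valid reporters

def record_message(sender: str, recipient: str) -> None:
    messages_sent[sender].add(recipient)

def report_spam(reporter: str, reported: str) -> None:
    # A naive system would just increment a counter here, so a coordinated
    # group could disable anyone they pick without ever hearing from them.
    if reporter in messages_sent[reported]:   # reporter really got a message
        reporters[reported].add(reporter)     # each reporter counted only once

def should_challenge(account: str) -> bool:
    return len(reporters[account]) >= THRESHOLD
```

Even this has holes (spammers can rotate accounts, attackers can bait replies), which is exactly why the real rules take thought and aren’t published.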
I’ve never crashed my car; should everyone get rid of their car’s seat belts?
Your experience does not represent the world. I’ve only experienced 2 cases of spam on Signal, both within the last year; before that I’d had zero spam in the many years I’ve been using it. So, while my anecdote is just as invalid as your single data point, there’s definitely a trend of spam increasing as a service gains popularity, and it makes sense that they’re looking at better methods to block spammers.
I still don’t see why they want a super secure smart system to block with captcha
You don’t understand why Signal, one of the most secure messaging platforms available, wants a super secure smart system to block spammers? I think you answered your own question.
Telegram for example you can add your own bot to kick the bot users. If you get a direct message you can just block and report
Telegram stores all your data and can view everything you do - unless you opt into their inferior E2EE chat option, “Secret Chats” - so it’s much easier for them to moderate their service. When you report someone, Telegram moderators see your messages for review [0] and can limit the account’s capabilities. Signal can’t do that: everything is E2EE, so nobody but the intended recipient can view your messages and there’s nothing for them to review.
As you can see, without even digging into it too much, I’ve already found one case where Signal faces challenges that Telegram doesn’t. Things aren’t always as simple as they seem, especially not for Signal, since they’ve worked their asses off to keep as little data on their users as possible.
[0] https://www.telegram.org/faq_spam#q-what-happened-to-my-account
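If the difference isn’t obvious, here’s a toy model of it. A shared Fernet key stands in for the Signal protocol’s real key agreement (which is far more involved); the only point is that the server relays ciphertext it cannot read, so there is nothing for a moderator to review:

```python
# Toy model of why an E2EE service can't review reported messages. A shared
# Fernet key stands in for the Signal protocol's real key agreement.
from cryptography.fernet import Fernet

# Established end-to-end between Alice and Bob; the server never sees this key.
shared_key = Fernet.generate_key()

def alice_sends(plaintext: str) -> bytes:
    return Fernet(shared_key).encrypt(plaintext.encode())

def server_relays(ciphertext: bytes) -> bytes:
    # All the server (or a moderator, or a subpoena) gets is this opaque blob.
    print("server sees:", ciphertext[:40], b"...")
    return ciphertext

def bob_receives(ciphertext: bytes) -> str:
    return Fernet(shared_key).decrypt(ciphertext).decode()

msg = server_relays(alice_sends("hello bob"))
print("bob reads:", bob_receives(msg))
# A Telegram-style cloud chat, by contrast, keeps plaintext server-side,
# which is exactly what makes moderator review of reported messages possible.
```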