Is this really "proof"? I can't say I'm a Signal hater, but I'm not a lover either. However, I'm not sure that Signal itself explaining why Signal is privacy-friendly is enough to consider their service and products privacy-friendly. It might just be my opinion, though; I'm too used to companies making equivalent arguments.
deleted by creator
But you know they can't say no to the authorities. If they're told to start logging activity on an account, they have to; otherwise they would become criminals. And this was really the best of these cases: they decline the requests they can. But there are situations where even Swiss law can't protect you. If you're really concerned, you should try self-hosting.
Can't agree with you more. When they ask for your phone number, it means no privacy.
Wrong: a phone number relates to anonymity, not privacy. Privacy, security, and anonymity are three different things.
What you said is very enlightening; I hadn't realized these differences before.
I’m a little out of the loop. What’s going on with Telegram?
I think the point is that they were ordered to return records by law, and Signal made a legally binding response that they don’t have any records. They are demonstrating that they actually have no information on users. Would that be considered proof? Yes. It absolutely would be.
I have to ask: did you read the court order and the response, not just the summary they wrote about it?
deleted by creator
My understanding is that the code in question was limited to anti-spam. There's going to be some level of trust involved in using a centralized service, so I don't see it as such a huge issue, even as someone who prefers decentralized and FLOSS software for as much as possible.
They're hiding the function (the rules) that triggers a captcha challenge in the client once they get enough reports that an account is spamming, after which the client can't keep sending messages until the captcha is solved. That's it. The reason you can't check how they're doing it is that spammers would just read it as instructions on how to avoid getting caught.
Communication/messaging, everything, is still E2EE. Nobody is getting anything out of this. If the FBI asks them for user data, they are unable to share anything. They don't need to warn users because they don't keep any data anyway, as can be seen from the multiple subpoenas they've fought to make public, which continue to yield no useful info.
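For intuition, here is a minimal sketch in Python of the kind of report-then-captcha flow described above. It is not Signal's code: the function names, the report window, and the threshold are all invented for illustration, and the real trigger rules are exactly the part that is kept server-side and private.

```python
# Hypothetical sketch of a report-then-captcha flow. Nothing here comes from
# Signal's actual server; names, window, and threshold are made up.
import time
from collections import defaultdict

REPORT_WINDOW_SECS = 24 * 60 * 60   # made-up window
REPORT_THRESHOLD = 5                # made-up threshold

_reports = defaultdict(list)        # account_id -> list of report timestamps


def record_spam_report(account_id: str) -> None:
    """Called when a recipient reports a message as spam."""
    _reports[account_id].append(time.time())


def captcha_required(account_id: str) -> bool:
    """Server-side decision: has this account tripped the (hidden) rule?"""
    cutoff = time.time() - REPORT_WINDOW_SECS
    _reports[account_id] = [t for t in _reports[account_id] if t >= cutoff]
    return len(_reports[account_id]) >= REPORT_THRESHOLD


def try_send(account_id: str, deliver) -> str:
    """What the client sees: sending is paused until the captcha is solved."""
    if captcha_required(account_id):
        return "captcha_challenge"  # client must solve it before sending resumes
    deliver()
    return "sent"


for _ in range(5):
    record_spam_report("spammy-account")
print(try_send("spammy-account", deliver=lambda: None))  # "captcha_challenge"
```

The point of the sketch is only the shape of the flow: reports accumulate server-side, and at some hidden point the client is told to solve a captcha before it may send again.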
deleted by creator
A simple system like that is easy to implement. I don't think anyone's questioning that they could build the crudest possible anti-spam system, like the one you're suggesting. The kinds of spam you see on modern platforms need a bit more thought than "block anyone reported more than x times within some window", because otherwise you could target people and get them disabled remotely just by coordinating reports.
So yeah, it's not magic if all you want is a dumb system that may introduce other problems, but you really do have to think things through if you want it to work well in the long run.
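To illustrate the coordinated-report problem mentioned above, here is a small hypothetical Python sketch. The threshold, field names, and the "less naive" heuristic are all made up; they only show why raw report counts are not enough on their own.

```python
# Why a naive "block after N reports" rule is abusable, and one made-up way
# to resist it. All names and numbers are invented for illustration.
REPORT_THRESHOLD = 5  # a made-up cut-off, as in any naive rule


def naive_is_spammer(report_count: int) -> bool:
    # "Block if reported more than x times in a window": counts are all it sees.
    return report_count >= REPORT_THRESHOLD


# Five colluding accounts each file one bogus report against an innocent user...
print(naive_is_spammer(5))  # True: the victim gets rate-limited


def less_naive_is_spammer(reports: list[dict]) -> bool:
    # One way to resist that: only count distinct reporters whom the accused
    # actually messaged first, so strangers can't pile on.
    credible_reporters = {
        r["reporter"] for r in reports if r["accused_messaged_reporter_first"]
    }
    return len(credible_reporters) >= REPORT_THRESHOLD


bogus = [{"reporter": f"sock{i}", "accused_messaged_reporter_first": False}
         for i in range(5)]
print(less_naive_is_spammer(bogus))  # False: coordinated reports alone don't stick
```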
deleted by creator
I’ve never crashed my car, should everyone get rid of their car’s seat belts?
Your experience does not represent the world. I've only experienced two cases of spam on Signal, both within the last year; before that I'd had zero spam in the many years I've been using it. So, while my anecdote is just as invalid as your single data point, there's definitely a trend of increasing spam as a service gains popularity, and it makes sense that they're looking at better methods to block spammers.
I still don't see why they want a super secure smart system to block spammers with a captcha.
You don't understand why Signal, one of the most secure messaging platforms available, wants a super secure smart system to block spammers? I think you answered your own question.
With Telegram, for example, you can add your own bot to kick bot users, and if you get a direct spam message you can just block and report it.
Telegram stores all your data and can view everything you do (unless you opt into their inferior E2EE chat feature, "Secret Chats"), so it's easier for them to moderate their service. When you report someone, Telegram moderators see your messages for review [0] and can limit that account's capabilities. Signal can't view your messages because everything is E2EE: nobody but the intended recipient can read them, so there is nothing for moderators to review.
As you can see, without even digging into it much, I've already found one case where Signal faces challenges that Telegram doesn't. Things aren't always as simple as they seem, especially for Signal, since they've worked their asses off to hold as little data on their users as possible.
[0] https://www.telegram.org/faq_spam#q-what-happened-to-my-account
deleted by creator
Except phone numbers, dates / times, contacts… pretty much everything except message content.
This is incorrect.
They store:
The last day (not the time) that a client last pinged their servers.
Signal's access to your contacts lets the client (not them):
"determine whether the contacts in their address book are Signal users without revealing the contacts in their address book to the Signal service" [0].
They've been developing and improving contact discovery since at least 2014 [1], so I'd wager they know a thing or two about how to do it in a secure and scalable way. If you disagree or have evidence that proves otherwise, I'd love to be enlightened. The code is open [2]; anyone is free to test it and publish their findings. A simplified sketch of the basic idea follows the links below.
[0] https://signal.org/blog/private-contact-discovery/
[1] https://signal.org/blog/contact-discovery/
[2] https://github.com/signalapp/ContactDiscoveryService/
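For intuition only, here is a naive "hash the contacts" version of the idea in Python. It is not what the linked ContactDiscoveryService does, and the numbers and function names are invented; the posts above explain why plain hashing is too weak on its own (the phone-number space is small enough to brute-force), which is why the real service performs the comparison inside a secure enclave instead.

```python
# Naive hashed contact discovery, for illustration only. Not Signal's scheme;
# their write-ups explain why hashing alone is insufficient.
import hashlib


def hash_number(phone_number: str) -> str:
    return hashlib.sha256(phone_number.encode("utf-8")).hexdigest()


# --- Server side (made-up numbers): hashes of registered users' numbers. ---
REGISTERED = {hash_number(n) for n in ["+15551230001", "+15551230002"]}


def server_lookup(hashes: list[str]) -> set[str]:
    """Return the subset of submitted hashes that belong to registered users."""
    return {h for h in hashes if h in REGISTERED}


# --- Client side: submit hashes of the address book, never the raw numbers. ---
def discover(address_book: list[str]) -> list[str]:
    hashes = {hash_number(n): n for n in address_book}
    found = server_lookup(list(hashes))
    return [hashes[h] for h in found]


print(discover(["+15551230001", "+15559999999"]))  # ['+15551230001']
```

Even this toy version shows the shape of the protocol: the client submits derived values rather than the raw address book, and only learns which of its contacts are registered.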
deleted by creator
Fair point tho.
I think I did. I might not have understood it, though. However, is that response enough? Shouldn't Signal have some kind of audit by the authorities to confirm that what they responded is true?
Signal is open source, so the authorities can just check their code and see that they don’t have any of this information.
Not all of it any longer.
Sorry yeah, deleted it right away, not sure how you were able to see it.
Magic!
Heh… Come see https://lemmy.ca/post/15834; it looks like there are still some interesting federation bugs, as your comment still resides here, undeleted.
deleted by creator
Ahh good to know! Thanks