cultural reviewer and dabbler in stylistic premonitions

  • 88 Posts
  • 182 Comments
Joined 3 years ago
Cake day: January 17th, 2022

  • Arthur Besse@lemmy.ml to Privacy@lemmy.ml · I made a gpg Hat (edited, 18 days ago)

    Were you careful to be sure to get the parts that have the key’s name and email address?

    It should be that if there are chunks missing, it’s unusable. At least that’s my thinking: since gpg output is usually binary, and ascii armor makes it human readable. As long as a person cannot guess the blacked-out parts, there shouldn’t be any recoverable data.

    You are mistaken: a PGP key is a binary structure which includes the metadata. PGP’s “ascii armor” means base64-encoding that binary structure (and putting the BEGIN and END header lines around it). One can decode fragments of a base64-encoded string without having the whole thing. To confirm this, you can use a tool like xxd (or hexdump): try pasting half of your ascii-armored key into base64 -d | xxd (then hit enter and Ctrl-D to terminate the input) and you will see the binary structure as hex and ascii, including the key metadata. I think either half will do, as PGP keys typically have their metadata in there at least twice.
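
    Here is a minimal self-contained illustration of the same point (using Python and a made-up stand-in blob rather than real OpenPGP packet data): a fragment of base64 text still decodes to the corresponding fragment of the underlying binary.

    ```python
    import base64

    # Stand-in for a key's binary packet data: user-ID metadata embedded in a blob.
    # (Real OpenPGP framing is more involved; the point here is just base64.)
    binary = b"\x99\x01\x0d" + b"\x00" * 64 + b"Alice Example <alice@example.org>" + b"\x00" * 64

    armor_body = base64.b64encode(binary).decode()  # what sits between BEGIN/END lines

    # Keep only the first half, trimmed to a multiple of 4 characters so that
    # b64decode accepts it without a padding error.
    half = armor_body[: (len(armor_body) // 2) // 4 * 4]
    partial = base64.b64decode(half)

    print(b"Alice Example" in partial)  # True: half the armor decodes half the binary
    ```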







  • TLDR: this is way more broken than I initially realized

    To clarify a few things:

    No JavaScript is sent after the file metadata is submitted

    So, when I wrote “downloaders send the filename to the server prior to the server sending them the javascript” in my first comment, I hadn’t looked closely enough - I had just uploaded a file and saw that the download link included the filename in the query part of the URL (the part between the ? and the #). This is the first thing a user sends when downloading, before the server serves the javascript, so the server can clearly decide whether or not to serve malicious javascript based on the filename (as well as the user’s IP).

    However, looking again now, I see it is actually much worse - you are sending the password in the URL query too! So, there is no need to ever serve malicious javascript because currently the password is always being sent to the server.

    As I said before, the way other similar sites do this is by including the key in the URL fragment, which is not sent to the server (unless the javascript decides to send it). I stopped reading when I saw the filename was sent to the server, and didn’t realize you were actually including the password as a query parameter too!

    😱
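
    To make the query-vs-fragment distinction concrete, here is a small Python illustration (the URL is made up, but mirrors the pattern described above):

    ```python
    from urllib.parse import urlsplit

    # A hypothetical download link in the pattern described above (names made up).
    url = "https://example.com/download?filename=secret.pdf&password=hunter2#nothing-here"

    parts = urlsplit(url)
    print(parts.query)     # filename=secret.pdf&password=hunter2 -> sent with the request
    print(parts.fragment)  # nothing-here -> stays in the browser unless javascript sends it
    ```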

    The rest of this reply was written when I was under the mistaken assumption that the user needed to type in the password.


    That’s a fundamental limitation of browser-delivered JavaScript, and I fully acknowledge it.

    Do you acknowledge it anywhere other than in your reply to me here?

    This post encouraging people to rely on your service says “That means even I, the creator, can’t decrypt or access the files.” To acknowledge the limitations of browser-based e2ee I think you would actually need to say something like “That means even I, the creator, can’t decrypt or access the files (unless I serve a modified version of the code to some users sometimes, which I technically could very easily do and it is extremely unlikely that it would ever be detected because there is no mechanism in browsers to ensure that the javascript people are running is always the same code that auditors could/would ever audit).”

    The text on your website also does not acknowledge the flawed paradigm in any way.

    This page says “Even if someone compromised the server, they’d find only encrypted files with no keys attached — which makes the data unreadable and meaningless to attackers.” To acknowledge the problem here, this sentence would need to say approximately the same as what I posted above, except replacing “unless I serve” with “unless the person who compromised it serves”. That page goes on to say that “Journalists and whistleblowers sharing sensitive information securely” are among the people who this service is intended for.

    The server still being able to serve malicious JS is a valid and well-known concern.

    Do you think it is actually well understood by most people who would consider relying on the confidentiality provided by your service?

    Again, I’m sorry to be discouraging here, but: I think you should drastically re-frame what you’re offering to inform people that it is best-effort and the confidentiality provided is not actually something to be relied upon alone. The front page currently says it offers “End-to-end encryption for complete security”. If someone wants/needs to encrypt files so that a website operator cannot see the contents, then doing so using software ephemerally delivered from that same website is not sufficient: they should encrypt the file first using a non-web-based tool.

    update: actually you should take the site down, at least until you make it stop sending the key to the server.



  • Btw, DeadDrop was the original name of Aaron Swartz’ software which later became SecureDrop.

    it’s zero-knowledge encryption. That means even I, the creator, can’t decrypt or access the files.

    I’m sorry to say… this is not quite true. You (or your web host, or a MITM adversary in possession of a certificate authority key) can replace the source code at any time - and can do so on a per-user basis, targeting specific IP addresses - to make it exfiltrate the secret key from the uploader or downloader.

    Anyone can audit the code you’ve published, but it is very difficult to be sure that the code one has audited is the same as the code that is being run each time one is using someone else’s website.
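
    As a toy sketch of how trivial that substitution is on the server side (all names and IPs here are hypothetical):

    ```python
    # Whoever controls the server chooses, per request, which javascript
    # each visitor receives. Auditors only ever see the honest variant.
    AUDITED_JS = "/* the code that auditors reviewed */"
    MALICIOUS_JS = "/* same code, plus: POST the key from location.hash back home */"

    TARGETED_IPS = {"192.0.2.55"}  # a specific user being singled out

    def serve_app_js(client_ip: str) -> str:
        # Nothing in the web platform prevents this per-user substitution,
        # and the targeted user has no practical way to detect it.
        return MALICIOUS_JS if client_ip in TARGETED_IPS else AUDITED_JS

    print(serve_app_js("192.0.2.55") == MALICIOUS_JS)  # True
    ```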

    This website has a rather harsh description of the problem: https://www.devever.net/~hl/webcrypto … which concludes that all web-based cryptography like this is fundamentally snake oil.

    Aside from the fundamentally flawed paradigm of doing end-to-end encryption with javascript that is re-delivered by a webserver on each use, there are a few other problems with your design:

    • allowing users to choose a password and using it as the key means that most users’ keys can be easily brute-forced. (Since users need to copy+paste a URL anyway, it would make more sense to require them to transmit a high-entropy key along with it.)
    • the filenames are visible to the server
    • downloaders send the filename to the server prior to the server sending them the javascript which prompts for the password and decrypts the file. this means you have the ability to target maliciously modified versions of the javascript not only by IP but also by filename.

    There are many similar browser-based things which still have the problem of being browser-based but which do not have these three problems: they store the file under a random identifier (or a hash of the ciphertext), and include a high-entropy key in the “fragment” part of the URL (the part after the # symbol), which by default is not sent to the server but is readable by the javascript. (Note that the javascript still can send the fragment to the server; it’s just that by default the browser does not.) A minimal sketch of that scheme is below.
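
    ```python
    import hashlib
    import secrets

    # Hypothetical sketch of that scheme; the URL layout is made up and the
    # actual encryption step is omitted.
    key = secrets.token_urlsafe(32)        # ~256 bits of entropy, unlike a human password
    ciphertext = b"...encrypted bytes..."  # the encrypted file
    file_id = hashlib.sha256(ciphertext).hexdigest()[:16]  # all the server ever sees

    # The key rides in the fragment: the browser does not send it in the HTTP request.
    share_url = f"https://example.com/f/{file_id}#{key}"
    print(share_url)
    ```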

    I hope this assessment is not too discouraging, and I wish you well on your programming journey!



  • When it’s libre software, we’re not banned from fixing it.

    Signal is a company and a network service and a protocol and some libre software.

    Anyone can modify the client software (though you can’t actually distribute modified versions via Apple’s iOS App Store, for reasons explained below) but if a 3rd party actually “fixed” the problems I’ve been talking about here then it really wouldn’t make any sense to call that Signal anymore because it would be a different (and incompatible) protocol.

    Only Signal (the company) can approve of changes to Signal (the protocol and service).

    Here is why forks of Signal for iOS, like most seemingly-GPLv3 software for iOS, cannot be distributed via the App Store:

    Apple does not distribute GPLv3-licensed binaries of iOS software. When they distribute binaries compiled from GPLv3-licensed source code, it is because they have received another license to distribute those binaries from the copyright holder(s).

    The reason Apple does not distribute GPLv3-licensed binaries for iOS is that they cannot: the way that iOS works inherently violates the “installation information” (aka anti-tivoization) clause of GPLv3. Apple requires users to agree to additional terms before they can run a modified version of a program, which is precisely what this clause of GPLv3 prohibits.

    This is why, unlike the Android version of Signal, there are no forks of Signal for iOS.

    The way to have the source code for an iOS program be GPLv3 licensed and actually be meaningfully forkable is to have a license exception like nextcloud/ios/COPYING.iOS. So far, at least, this allows Apple to distribute (non-GPLv3!) binaries of any future modified versions of the software which anyone might make. (Legal interpretations could change though, so, it is probably safer to pick a non-GPLv3 license if you’re starting a new iOS project and have a choice of licenses.)

    Anyway, the reason Signal for iOS is GPLv3, and the reason they do not do what NextCloud does here, is that they only want to appear to be free/libre software - they do not actually want people to fork their software.

    Only Signal (the company) is allowed to give Apple permission to distribute binaries to users. The rest of us have a GPLv3 license for the source code, but that does not let us distribute binaries to users via the distribution channel where nearly all iOS users get their software.


  • Downvoted as you let them bait you. Escaping WhatsApp and Discord, anti-libre software, is more important.

    I don’t know what you mean by “bait” here, but…

    Escaping to a phone-number-requiring, centralized-on-Amazon, closed-source-server-having, marketed-to-activists, built-with-funding-from-Radio-Free-Asia (for the specific purpose of being used by people opposing governments which the US considers adversaries) service which makes downright dishonest claims of having a cryptographically-ensured inability to collect metadata? No thanks.

    (fuck whatsapp and discord too, of course.)


  • it’s being answered in the github thread you linked

    The answers there are only about the fact that it can be turned off and that by default clients will silently fall back to “unsealed sender”.

    That does not say anything about the question of what attacks it is actually meant to prevent (assuming a user does “enable sealed sender indicators”).

    This can be separated into two different questions:

    1. For an adversary who does not control the server, does sealed sender prevent any attacks? (which?)
    2. For an adversary who does control the server, how does sealed sender prevent that adversary from identifying the sender (via the fact that they must identify themselves to receive messages, and do so from the same IP address)?

    The strongest possibly-true statement I can imagine about sealed sender’s utility is something like this:

    For users who enable sealed sender indicators AND who are connecting to the internet from the same IP address as some other Signal users, from the perspective of an adversary who controls the server, sealed sender increases the size of the set of possible senders for a given message from one to the number of other Signal users who were online from behind the same NAT gateway at the time the message was sent.
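
    As a toy illustration of what that claim means in practice (hypothetical data: the server operator can always see which users fetch messages from which IPs):

    ```python
    # What a server operator can still infer under sealed sender, since clients
    # must authenticate (from some IP address) to receive messages.
    online_by_ip = {
        "198.51.100.7": {"alice", "bob", "carol"},  # three users behind one NAT
        "203.0.113.9": {"dave"},                    # one user on a unique address
    }

    def candidate_senders(submitting_ip: str) -> set[str]:
        """Anonymity set for a sealed-sender message submitted from this IP."""
        return online_by_ip.get(submitting_ip, set())

    print(candidate_senders("198.51.100.7"))  # three candidates instead of one
    print(candidate_senders("203.0.113.9"))   # {'dave'}: sealed sender hides nothing
    ```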

    This is a vastly weaker claim than saying that “by design” Signal has no possibility of collecting any information at all besides the famous “date of registration and last time user was seen online” which Signal proponents often tout.





  • You can configure one or more of your profiles’ addresses to be a “business address”, which means that when people contact you via it, a new group is created automatically. Then you can (optionally, on a per-contact basis) add your other devices’ profiles to it (as can your contact with their other devices, after you make them an admin of the group).

    It’s not the most obvious or intuitive system, but it works well, and imo this paradigm is actually better than most systems’ multi-device support: you can see which device someone is sending from, and you can choose to give different contacts access to different subsets of your devices.