• 71 Posts
  • 750 Comments
Joined 1 year ago
Cake day: September 13th, 2024

  • If you’re willing (and/or able) to disassemble the phone and solder to the main board, you should be able to replace the battery with a 3.7 V DC power supply.

    Unless there’s some validation protocol between the phone and the battery controller that I’m not aware of, which would make the phone refuse to accept power otherwise? I feel like that shouldn’t be the case if the battery is internal and they don’t expect you to replace it anyway.

    Definitely helps if it’s something like a Fairphone with the battery contacts already exposed and easy to access. Actually, someone should 3D print a Fairphone battery dummy that just has wires coming out of it to hook up to your own power supply. Maybe with an ATtiny chip to spoof whatever digital signals the phone wants from the battery, if that’s required.


  • An extra hard drive for offline backups of my home server. Just knowing I have a cold, unplugged copy of my data in my drawer has made me less paranoid about accidentally “rm -rf”-ing my computer and taking all the mount points with it, or about my dog catching her paw on a wire (she likes to run around haphazardly and is pretty clumsy) and dragging the entire hard drive enclosure down with her.

    Ideally I wouldn’t keep that drive in my house but I don’t have anywhere else to put it. Maybe someday I’ll get a safe deposit box or something but then my lazy ass probably wouldn’t bother to retrieve and sync my data nearly as often.
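
    For what it’s worth, the sync itself can be as simple as an rsync call wrapped in a small script. A rough sketch, where the source path and mount point are placeholders rather than my actual setup:

```python
#!/usr/bin/env python3
# Rough sketch of a cold-backup sync. SOURCE and BACKUP_MOUNT are
# placeholders for illustration, not real paths.
import os
import subprocess
import sys

SOURCE = "/srv/"                   # data to back up (placeholder)
BACKUP_MOUNT = "/mnt/cold-backup"  # where the offline drive mounts (placeholder)

# Refuse to run if the drive isn't actually mounted; otherwise rsync
# would happily fill the root filesystem instead of the backup drive.
if not os.path.ismount(BACKUP_MOUNT):
    sys.exit(f"{BACKUP_MOUNT} is not mounted; plug the backup drive in first.")

# -a preserves permissions and timestamps, --delete mirrors deletions
# so the cold copy exactly matches the source.
subprocess.run(["rsync", "-a", "--delete", SOURCE, BACKUP_MOUNT], check=True)
```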


  • This could have been an extension that only the people who want it download, and no one would have complained. Nothing like developing a full-featured extension system and then promptly not using it for a feature they know for a fact people have very polarized opinions about. What is it called when software ships with random shit nobody wanted or asked for that can’t be removed? Oh right! Bloatware!

    “Oh, just disable it”: yeah, just like the Android apps that came with my phone, which have “disable” instead of “uninstall” in settings. A nice big middle finger to the user that makes it clear they don’t want them in control of their own devices.

    If I wanted an “AI browser,” or a browser with every feature under the sun whether related to web browsing or not, I’d be using Edge. Nobody chooses Firefox because they want the same experience as a big tech corporate browser; they’d choose an actual big tech corporate browser in that case.



  • it sounds like a Linux password is a red herring, and a secure password even more so

    Yes and no. A secure password is extremely important against some security threats, but completely useless against others. It’s like vitamin C. If you don’t get enough, that’s a massive problem and opens you up to a ton of serious issues, same as if you don’t have enough complexity in your password. But even if you do, it won’t effectively protect you from, say, cancer or unprivileged malware respectively.

    There’s nothing stopping any program from attempting to bruteforce your Linux password, literally running through possibilities hoping to guess it. Modern password implementations usually have some form of bruteforce protection. If you’ve ever entered your password wrong in sudo or KDE’s lock screen, it usually hangs for a few seconds before telling you your password is wrong, even though any modern computer determined it was wrong in literally an instant. That delay exists to make endless random guessing impractical: with a sufficiently unique password, the total time to guess it becomes too long to be useful.

    Your phone, and optional tools on Linux (pam_faillock, for example), go a step further, imposing longer and longer delays with each subsequent failed attempt. They also prevent a malicious program from spawning many threads that each call sudo to bruteforce in parallel, by completely disabling access until the time penalty elapses (there’s a toy sketch of this below).

    Though you absolutely do need a sufficiently secure password, making it overly long has diminishing returns past a certain point. It doesn’t matter whether it would take millions or billions of years to bruteforce with a 1 second delay for wrong attempts; the important upgrade is going from the millions of seconds a simple password like “hunter2” would take to years.
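
    The escalating-delay idea is simple enough to sketch. This is a toy illustration, not how sudo or pam_faillock are actually implemented:

```python
import time

# Toy escalating-delay password check. Real systems enforce this in PAM
# (pam_faillock and friends), not in application code like this.
state = {"failures": 0, "locked_until": 0.0}

def check_password(attempt: str, real: str) -> bool:
    now = time.monotonic()
    # Refuse to even evaluate a guess until the penalty elapses. Because
    # the lockout is shared state, parallel guessing threads gain nothing.
    if now < state["locked_until"]:
        return False
    if attempt == real:
        state["failures"] = 0
        return True
    state["failures"] += 1
    # Double the penalty with every consecutive failure: 2s, 4s, 8s, ...
    state["locked_until"] = now + 2 ** state["failures"]
    return False
```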

    Also, a password with no encryption is like a padlock on a wooden box. Even without the key, they can just cut the box open. In computer terms, that’s someone reading the files on your SSD directly or injecting malware with root privileges; both completely bypass the check that’s “normally” supposed to stop unauthorized users. Encryption can help, but like you said, physical access is generally considered game over anyway, unless they get your computer while it’s off and it’s never returned to you for you to enter your password. A computer with literally everything encrypted wouldn’t be able to boot: your EFI partition and especially your BIOS/firmware have to stay unencrypted, and anything unencrypted can be tampered with by a sufficiently skilled attacker with physical access to add things like keyloggers and backdoors that sit dormant until you graciously decrypt everything for them.

    Your password strength matters a lot more with encryption, though. If you’re going to the trouble of full disk encrypting your computer, make the password as long and random as you can practically remember. If someone is trying to decrypt your computer’s drive, they’ve probably imaged it and are running the attack on a separate machine with no rate limiting whatsoever, and modern GPUs can do an enormous number of cryptographic operations in a short time. And don’t reuse that password for your user account once the disk is decrypted.
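
    To put rough numbers on the difference (the guess rates here are illustrative assumptions, not benchmarks of any specific hardware):

```python
# Back-of-the-envelope bruteforce math. The guess rates are illustrative
# assumptions, not benchmarks of any particular hardware.
SECONDS_PER_YEAR = 31_557_600

def years_to_exhaust(alphabet_size: int, length: int, guesses_per_sec: float) -> float:
    """Worst-case time to try every password of the given shape, in years."""
    return alphabet_size ** length / guesses_per_sec / SECONDS_PER_YEAR

# Online attack, rate-limited to ~1 guess/sec by the login delay:
print(years_to_exhaust(36, 7, 1))      # "hunter2"-shaped: ~2,500 years
# Offline attack on an imaged disk, assuming ~10 billion guesses/sec:
print(years_to_exhaust(36, 7, 1e10))   # same password: about 8 seconds
print(years_to_exhaust(70, 16, 1e10))  # 16 random printable chars: ~1e12 years
```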

    If SSH is disabled the class of attacks to be prevented are users ‘voluntarily’ running malware pretending to be goodware.

    More or less, as far as I know, provided you don’t have any other remote access enabled (VNC, RDP, AnyDesk/TeamViewer and similar, that weird Steam remote desktop app, a server running vulnerable software on an open port that can be hijacked, etc). The general rule in computing is: if you don’t need it, don’t enable it; otherwise it’s ripe for abuse. That said, your router should be configured to block access to local ports from the internet anyway, but if you have another infected device on your network, that’s a major threat. If you do want SSH, configure it to only accept the keys of your trusted devices (key-based auth, with PasswordAuthentication set to no in sshd_config) and not just respond with a password prompt to any device that comes knocking. A quick way to check those settings is sketched below.
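
    Here’s a toy script that checks the two relevant sshd_config directives. It assumes the stock /etc/ssh/sshd_config path and doesn’t follow Include directives, so treat it as a sketch:

```python
# Toy check of the two sshd_config directives mentioned above. Assumes
# the stock /etc/ssh/sshd_config path and ignores Include directives.
import re

WANTED = {"passwordauthentication": "no", "pubkeyauthentication": "yes"}

with open("/etc/ssh/sshd_config") as f:
    for line in f:
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        match = re.match(r"(\S+)\s+(\S+)", line)
        if match and match.group(1).lower() in WANTED:
            key, value = match.group(1).lower(), match.group(2).lower()
            verdict = "ok" if value == WANTED[key] else "CHECK THIS"
            print(f"{match.group(1)} {value}: {verdict}")
```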

    True, but does anyone operate this way? At that point it becomes an iPad or a Chromebook.

    “Trust” in computing is fickle and complicated, just like in real life. At the end of the day, you have to make a decision about who and what you personally trust. An iPad or Chromebook would be among the least trustworthy computers in my mind, because they’re locked down and administered by companies I absolutely do not trust, and though the locked-down architecture does prevent other malware from infecting them, there’s probably already malware by any other name on there, with proper Google or Apple security signatures, shipped from the factory.

    This is the same as if your distro maintainers were untrustworthy. They could slip malware into the official package manager or installer ISO and you’d never know. I personally trust a reputable Linux distro over the literal biggest tech corporations in the world, but I’m still putting my faith in an organization I don’t control, run by people I don’t personally know.

    Open source is more trustworthy than proprietary software because the source code is available, but even that isn’t completely guaranteed to stop malicious code from making it in. The recent xz backdoor comes to mind. You’re still trusting that the other people looking at the source code actually catch the malicious part, and that’s not guaranteed even with the most trustworthy people, when everyone working on the project is overworked, stressed, and gripped by tunnel vision to get their own small part done, like software developers tend to be. And even when someone does catch it, that might be months or years down the line, after the damage has already been done. There’s a reason a full security audit of an app can cost anywhere from thousands to millions of dollars depending on how big the codebase is.

    Also, because the vast majority of software isn’t compiled in a reproducible way, you don’t really have a guarantee that the actual binary executable on your computer exactly matches the source code unless you go through the (usually difficult and frustrating) process of compiling it yourself. Sure, you can probably assume that the official binary released by the source code’s authors and signed with their cryptographic keys matches the source, since both come from the same place, but that’s not a guarantee; you’re still trusting a person or organization. The best you can cheaply do is verify that the binary you have is the one they published, as sketched below.
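
    A minimal version of that check, comparing a download against a project’s published SHA-256 checksum (the file name and checksum are hypothetical placeholders):

```python
# Minimal integrity check: does the downloaded file match the SHA-256
# checksum the project published? File name and checksum are placeholders.
# Note this only proves you got the binary they published; you're still
# trusting the people who published it.
import hashlib

PUBLISHED_SHA256 = "paste the checksum from the project's release page here"

with open("some-release.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

if digest == PUBLISHED_SHA256:
    print("Checksum matches the published one.")
else:
    print("MISMATCH: corrupted download, or tampering somewhere.")
```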

    But wait, there’s more! The compiler you use is itself a program that needed to be compiled by another compiler, and so on and so forth, until you reach the point decades back when someone manually wrote the individual bits of the very first compiler in that chain. A malicious compiler can be made to obfuscate the fact that it’s malicious (this is Ken Thompson’s classic “Reflections on Trusting Trust” attack), and only manual review and reverse engineering of the raw binary (without reverse engineering software, mind you) can prove or disprove that it’s compromised.

    Finally, there’s hardware. Even if you audit every last bit of software, the processor itself has immense complexity that you can’t audit without (1) extremely expensive scientific equipment and (2) destroying it in the process, and that’s only one chip out of the tens of chips in a computer. Your processor could have secret instructions that bypass all security, and your only real hope of finding them is bruteforcing every possible input to see what happens. And proving the existence of a backdoor is intrinsically much easier than proving its absence.

    I’m not trying to scare you, but I do want to illustrate just how hard it is to have absolute trust in any computer. At the end of the day, you can never have a computer you completely trust unless you manually assembled it from raw materials (not aided by any existing computer) and hand-wrote every bit that goes into it. Like I said, we all have to decide to put faith in some person or organization we do not know. You could spend every waking minute auditing every last part of your computer, hardware and software, but then you wouldn’t have time to actually use it for the things you want to do. There’s no solution to this; there are only higher and lower degrees of trust and security, which only you can determine for yourself.

    So no, no one operates that way, because it’s impossible.

    It does look like flatpaks or docker containers isolate behavior, so that’s a win.

    Generally, yes, but remember there’s always the possibility of a bug that lets a process break out of the container. This isn’t unique to Docker; any sandbox or hypervisor can be breached if there’s an exploit, just like any other software. That doesn’t invalidate the value of containerization, but keep in mind that nothing is guaranteed to be completely safe and “malware proof.”


  • An AGI wouldn’t need to read every book because it can build on the knowledge it already has to draw new conclusions it wasn’t “taught.”

    Also, an AGI would be able to keep a consistent narrative regardless of the amount of data or context it has, because it could build an internal model of what is happening and selectively remember the most important things over the inconsequential ones (not to mention assess what’s important and what can be forgotten to shed processing overhead), all things a human does instinctively when given more information than the brain can immediately handle. Meanwhile, an LLM is totally dependent on how much context it actually has buffered, and giving it too much information will literally push all the old information out of its context, never to be recalled again. It has no ability to determine what’s worth keeping and what’s not, only what’s more or less recent.

    I’ve personally noticed this especially with smaller locally run LLMs with very limited context windows. If I start troubleshooting some Linux issue with one, I have to be careful with how much of a log I paste into the prompt, because if I paste too much, it will literally forget why I pasted the log in the first place. This is most obvious with Deepseek and other reasoning models, because they will actually start trying to figure out why they were given that input while “thinking,” but it’s a problem with any context-based model, because the context is its only active memory.

    I think the reason this happens so obviously when you paste too much in a single prompt, and less so when having a conversation of smaller prompts, is that the model also has its previous outputs in its context. So while it might have forgotten the very first prompt and response, the information gets repeated enough times across subsequent turns to stay in its recent context (ever notice how verbose AI tends to be? That could be a mitigation strategy). Meanwhile, when you give it a single prompt as big as or bigger than its context window, it completely overwrites the previous responses, leaving no hint of what was there before. There’s a toy sketch of the mechanism below.
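
    A toy version of the sliding window (real models count tokens rather than words, and real frontends trim more carefully, but the eviction behavior is the same idea):

```python
# Toy sliding context window: only the most recent messages that fit
# within the budget survive; everything older is simply gone.
CONTEXT_LIMIT = 50  # pretend the model can only "see" 50 words at once

def build_context(messages: list[str]) -> list[str]:
    """Keep only the most recent messages that fit in the window."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk from newest to oldest
        words = len(msg.split())
        if used + words > CONTEXT_LIMIT:
            break                    # everything older is dropped
        kept.append(msg)
        used += words
    return list(reversed(kept))

history = ["why does my service fail to start?"]  # the actual question: 7 words
history.append("line of log output " * 11)        # one giant paste: 44 words
print(build_context(history))  # only the log survives; the question is evicted
```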


  • I mean, compared to what? Picking your nose on transit? The people sitting across from you are probably a bigger source of “spying” (and judgment) than the cameras in that case. IMO if you’re okay with being spied on in your car, you really don’t have much more to worry about on a train or in a station.

    I further submit that cars, being your personal space but still very much “in public,” give you much more of an illusion of privacy while in most cases being just as invasive as transit, if not more so.

    Also, if we’re talking about only the transit or road system and not the spying at your destination, driving gives much more precise location data than transit: they’ll know exactly which house or building you pulled up to, versus which train station or bus stop you got off at. And if you do consider all surveillance, they can figure out where you’re going even if you walk, because there will be cameras at your destination.