• 0 Posts
  • 200 Comments
Joined 2 years ago
Cake day: June 7th, 2023

  • The main thing I have from that time is several large boxes hanging about taking up shelf space and a burning hatred of MMOs. My wife and I got into WoW during late Vanilla. We stood in line at midnight to get the collector’s edition box for WotLK and later again for Cataclysm (we weren’t that far gone when The Burning Crusade released). Shortly after Cataclysm released, there was the Midsummer Fire Festival and as we were playing through it, we hit that wall where any more quests became locked behind “Do these daily quests 10,000 times to progress” and the whole suspension of disbelief just came crashing down. I had already hated daily quests and the grindy elements of the game, but at that moment I just said, “fuck this” and walked away from the game.

    I do look back fondly on some of the good times we had in the game. Certainly in Vanilla there was some amazing writing and world crafting. We met some good people and had a lot of fun over the years and I don’t regret the time or money spent. However, one thing it taught me is just how pointless MMOs are. They are specifically designed to be endless treadmills. And this can be OK, so long as the treadmill itself is well designed and fun. But, so many of the elements exist just to eat time. Instead of being fun, they suck the fun out of the game and turn it into a job.

    We even tried a few other MMOs after that point (e.g. Star Wars) just because we wanted something to fill that niche in our gaming time. But invariably, there would be the grind mechanics which ruined the game for us. Or worse yet, pay to win mechanics where the game would literally dangle offers of “pay $X to shortcut this pointless grind” (ESO pops to mind for this). If the game is offering me ways to pay money to not play the game, then I’ll take the easier route and not play the game at all, thank you very much.

    So ya, WoW taught me to hate MMOs and grinding in games. And that’s good, I guess.




  • Location: ~87% of respondents are from Canada

    As others mentioned, this would be an interesting data point to validate. I'm not familiar with the server side of Lemmy, but does the server provide any logs which could be used with GeoIP to get a sense of the relative number of connections from different countries? While there is likely to be some misreporting due to VPN usage and the like, it's likely to be a low enough number of connections to be ignored as "noise" in the data. Depending on the VPNs in question, it may also be possible to run down many of the IP addresses in the connection logs which belong to VPNs and report "VPN user" as a distinct category. It would also be interesting to see this broken out by instance (e.g. what countries are hitting lemmy.world versus lemmy.ml versus lemmy.ca, etc.).
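To sketch what that log analysis might look like: the snippet below tallies connections per country from access-log lines. The log format is assumed to be nginx-style (client IP first), and the `GEO_DB` dict is a stand-in for a real GeoIP lookup (in practice you'd use something like the `geoip2` library with a MaxMind database).

```python
import re
from collections import Counter

# Stand-in for a real GeoIP database lookup; IPs and countries are illustrative.
GEO_DB = {"203.0.113.7": "CA", "198.51.100.2": "US", "203.0.113.9": "CA"}

# Client IP is the first whitespace-delimited field in nginx "combined" logs.
LOG_LINE = re.compile(r"^(\S+)\s")

def country_counts(log_lines):
    counts = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip malformed lines
        counts[GEO_DB.get(m.group(1), "unknown")] += 1
    return counts

logs = [
    '203.0.113.7 - - [01/Jan/2025] "GET /api/v3/post/list HTTP/1.1" 200',
    '198.51.100.2 - - [01/Jan/2025] "GET /api/v3/site HTTP/1.1" 200',
    '203.0.113.9 - - [01/Jan/2025] "GET /api/v3/post/list HTTP/1.1" 200',
]
print(country_counts(logs))  # Counter({'CA': 2, 'US': 1})
```

The same loop could emit "VPN" instead of a country when the IP matches a known VPN exit-node list.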

    All that said, thank you for sharing. These sorts of exercises can be interesting to understand what a population looks like.




  • If the goal is stability, I would have likely started with an immutable OS. This creates certain assurances for the base OS to be in a known good state.
    With that base, I’d tend towards:
    Flatpak > Container > AppImage

    My reasoning for this being:

    1. Installing software should not affect the base OS (nor can it with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self-contained.
    2. Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as Flatpaks, AppImages or containers.
    3. Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list: it has a built-in mechanism for updating. Container images can be made to update reasonably automatically, but have risks. By using something like docker-compose and having services tied to the ":latest" tag, images would auto-update. However, it's possible to have stacks where a breaking change is made in one service before another service is able to deal with it. So, I tend to pin things to specific versions and update those manually. Finally, while I really like AppImages, updating them is 100% manual.

    This leaves the question of apt packages or doing installs via make. And the answer is: don't do that. If there is no Flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple. Sure, they can get super complex and do some amazing stuff. You don't need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
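To ground the "make your own container" point: a minimal Dockerfile for a single application really can be a handful of lines. This is a sketch, not a recipe — the binary name and path are illustrative placeholders.

```dockerfile
# Minimal single-app container: small base image, one binary, non-root user.
FROM debian:stable-slim
RUN useradd --create-home appuser
# "mytool" is a hypothetical prebuilt binary sitting next to this Dockerfile.
COPY mytool /usr/local/bin/mytool
USER appuser
ENTRYPOINT ["/usr/local/bin/mytool"]
```

Pair it with a short docker-compose.yaml pinning a version tag (rather than ":latest") and updates become a deliberate, reviewable edit instead of a surprise.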


  • An economy is really just a way to distribute finite resources in a world with infinite wants. Even the most egalitarian of systems is going to require deciding who gets something and who doesn't (winners and losers). It's perfectly valid to be frustrated by being on the "doesn't" end of that equation. And we (US and other Western Democracies) could certainly do a lot more to shift some of the resources away from the few who are hoarding a lot of them, even without a radical "tear the system down" approach. The difficulty is the political will to do so.

    Unfortunately, mustering political will for a collective good, which may come with some individual losses, can be a tough sell. Especially when large parts of a population are comfortable. Not only do you have to convince people that the collective good is an overall good for them, you also have to convince them that the individual losses either won't affect them or will be mitigated by the upsides of the collective good. And given people's tendency to overemphasize short-term risks over long-term ones, this can be especially hard. But that doesn't mean you should give up, just that you need to sharpen your arguments and find ways to convince more people that things can be better for them, if they are willing to take that step.


  • Traditions exist to pass on learned knowledge and for social cohesion. Prior to widespread education, many local groups had to learn the same lessons and find a way to pass those on from person to person and generation to generation. Given that this also tended to coincide with societies not having the best grasp on reality (germ theory is not that old), the knowledge being passed on was often specious. But, it might also contain useful bits which worked.

    For example, some early societies would pack honey into a wound. Why? Fuck if they knew, but that was what the wise men said to do. It turns out that honey is a natural antiseptic and helps to prevent infection. They had no knowledge of this, but had built up a tradition around it, probably because it seemed to work. And so that got passed on.

    The other aspect of traditions is social. When people do a thing together, they tend to bond and become willing to engage in more pro-social behaviors. It isn't all that important what the activity is, so long as people do it together. The more people feel like they are part of the in-group, the more they will work to protect and sacrifice for that in-group.

    Sure, a lot of traditions are complete crap. They are superstition wrapped in a "that's the way we've always done it" attitude. But it's important not to overlook their significance to a population. The Christian Church ran headlong into this time and again through European history as it sought to convert various groups. Those groups tended to hold on to old traditions and just blended them into Christianity. This resulted in a fairly fractured religious landscape, but the Church generally tolerated it, because trying to quash it led to too many problems. While stories of various Easter and Christmas traditions being Pagan in origin are likely apocryphal, there are echoes of older religious beliefs hanging about.

    It's best to be careful when looking at a particular group's traditions and calling them "backwards" or some other epithet. Yes, they almost certainly have no basis in the scientific method. But the value of those traditions to a people is very real. And so long as they are not harmful to others, you're likely to do more harm trying to remove them than by simply allowing folks to just enjoy them.



  • It’s going to depend on what types of data you are looking to protect, how you have your wifi configured, what type of sites you are accessing and whom you are willing to trust.

    To start with, if you are accessing unencrypted websites (HTTP), at least part of the communications will be in the clear and open to inspection. You can mitigate this somewhat with a VPN. However, this means that you need to implicitly trust the VPN provider with a lot of data. Your communications to the VPN provider would be encrypted, though anyone observing your connection (e.g. your ISP) would be able to see that you are communicating with that VPN provider. And any communications from the VPN provider to/from the unencrypted website would also be in the clear and could be read by someone sniffing the VPN exit node's traffic (e.g. the ISP used by the VPN exit node). Lastly, the VPN provider would have a very clear view of the traffic and be able to associate it with you.

    For encrypted websites (HTTPS), the data portion of the communications will usually be well encrypted and safe from spying (more on this in a sec). However, it may be possible for someone (e.g. your ISP) to snoop on what domains you are visiting. There are two common ways to do this. The first is via DNS requests. Any time you visit a website, your browser needs to translate the domain name to an IP address. This is what DNS does, and it is not encrypted by default. Also, unless you have taken steps to avoid it, it's likely your ISP is providing DNS for you. This means they can simply log all your requests, giving them a good view of the domains you visit. You can use something like DNS over HTTPS (DoH), which encrypts DNS requests and sends them to specific servers; but this usually requires extra setup, and it works the same whether you're on your local WiFi or a 4G/5G network.

    The second way to track HTTPS connections is via Server Name Indication (SNI). In short, when you first connect to a web server, your browser needs to tell that server which domain it wants, so the server can send back the correct TLS certificate. This is unencrypted, and anyone in between (e.g. your ISP) can simply read the SNI field to know what domains you are connecting to. There are mitigations, specifically Encrypted Client Hello (ECH, the successor to Encrypted SNI), but that requires the web server to support it, and it's not widely used. This is also where a VPN can be useful, as the SNI data is encrypted between your system and the VPN exit node. Though again, it puts a lot of trust in the VPN provider, and the VPN provider's ISP could still see the SNI field as it leaves the VPN network. Though associating it with you specifically might be hard.
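To make the DNS point concrete, here's a small Python sketch (standard library only) that builds a raw DNS query packet by hand. The domain labels sit in the packet as plain bytes, which is exactly what an on-path observer gets to read when DNS isn't encrypted:

```python
import struct

def build_dns_query(domain: str) -> bytes:
    # Header per RFC 1035: ID, flags (recursion desired), 1 question, 0 answers.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split("."))
    # QTYPE=1 (A record), QCLASS=1 (IN).
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)
    return header + question

packet = build_dns_query("lemmy.ml")
# The hostname is right there in cleartext inside the packet bytes:
print(b"lemmy" in packet, b"ml" in packet)  # True True
```

DoH wraps this same packet inside an HTTPS request, which is what hides it from the ISP.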

    As for the encrypted data of an HTTPS connection, it is generally safe. So, someone might know you are visiting lemmy.ml, but they wouldn't be able to see what communities you are reading or what you are posting. That is, unless either your device or the server is compromised. This is why mobile device malware is a common attack vector for state-level threat actors. If they have malware on your device, then all the encryption in the world ain't helping you. There are also some attacks around forcing your browser to use weaker encryption or even the attacker compromising the server's certificate. Though these are likely in the realm of targeted attacks and unlikely to be used on a mass scale.

    So ya, not exactly an ELI5 answer, as there isn’t a simple answer. To try and simplify, if you are visiting encrypted websites (HTTPS) and you don’t mind your mobile carrier knowing what domains you are visiting, and your device isn’t compromised, then mobile data is fine. If you would prefer your home ISP being the one tracking you, then use your home wifi. If you don’t like either of them tracking you, then you’ll need to pick a VPN provider you feel comfortable with knowing what sites you are visiting and use their software on your device. And if your device is compromised, well you’re fucked anyway and it doesn’t matter what network you are using.



  • sylver_dragon@lemmy.world to Linux@lemmy.ml · Antiviruses?

    Ultimately, it's going to be down to your risk profile. What do you have on your machine which you wouldn't want to lose or have released publicly? For many folks, we have things like pictures and personal documents which we would be rather upset about if they ended up ransomed. And sadly, ransomware exists for Linux. LockBit, for example, is known to have a Linux variant. And this is something which does not require root access to do damage. Most of the stuff you care about as a user exists in user space and is therefore susceptible to malware running in a user context.

    The upshot is that due care can prevent a lot of malware. Don’t download pirated software, don’t run random scripts/binaries you find on the internet, watch for scam sites trying to convince you to paste random bash commands into the console (Clickfix is after Linux now). But, people make mistakes and it’s entirely possible you’ll make one and get nailed. If you feel the need to pull stuff down from the internet regularly, you might want to have something running as a last line of defense.

    That said, ClamAV is probably sufficient. It has a real-time scanning daemon and you can run regular, scheduled scans. For most home users, that’s enough. It won’t catch anything truly novel, but most people don’t get hit by the truly novel stuff. It’s more likely you’ll be browsing for porn/pirated movies and either get served a Clickfix/Fake AV page or you’ll get tricked into running a binary you thought was a movie. Most of these will be known attacks and should be caught by A/V. Of course, nothing is perfect. So, have good backups as well.
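For the "regular, scheduled scans" piece, a couple of crontab entries are enough. This is a sketch: the schedule, log path, and scan target are illustrative, and clamd's real-time scanning is configured separately in clamd.conf.

```
# Refresh ClamAV signatures daily at 02:00
0 2 * * * freshclam --quiet
# Weekly scan of home directories on Sunday at 03:00; -r recurses, -i logs only infected files
0 3 * * 0 clamscan -r -i --log=/var/log/clamscan.log /home
```

On systemd distros a timer unit would do the same job; the point is just that the scan runs without you remembering to kick it off.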



  • As a species, Homo sapiens have managed to adapt to every environment on Earth. We are the first species to have any measure of control over the natural forces which have wiped out countless other species. Diseases which once ravaged our populations are now gone or minor inconveniences, and we continue to find new ways to mitigate the worst effects of many diseases. Should a large asteroid be heading our way, we are the only species which may stand any chance of diverting it or mitigating the long term impacts when it does hit us. While it was certainly not a "choice", the evolution of higher cognition, problem solving and intra-species communication has put our species in a unique position of having a high degree of control over our fate. Sure, it has its downsides (we are the only species which might be able to end all life on Earth), but it's been a pretty amazing run for us. On the balance, I think we're in a much better position to keep going as a species than our ancestors or cousins (Homo erectus, Homo habilis, Neanderthals, great apes, chimpanzees, etc.).

    So, was it a “mistake”, I think the current state of evidence is against that. While it may result in a really shit deal for individuals of the species from time to time, as a species I think it would be silly to consider it a mistake.


  • Short answer, no.

    Long answer: We are a long way off from having anything close to the movie villain level of AI. Maybe we’re getting close to the paperclip manufacturing AI problem, but I’d argue that even that is often way overblown. The reason I say this is that such arguments are quite hand-wavy about leaps in capability which would be required for those things to become a problem. The most obvious of which is making the leap from controlling the devices an AI is intentionally hooked up to, to devices it’s not. And it also needs to make that jump without anyone noticing and asking, “hey, what’s all this then?” As someone who works in cybersecurity for a company which does physical manufacturing, I can see how it would get missed for a while (companies love to under-spend on cybersecurity). But eventually enough odd behavior gets picked up. And the routers and firewalls between manufacturing and anything else do tend to be the one place companies actually spend on cybersecurity. When your manufacturing downtime losses are measured in millions per hour, getting a few million a year for NDR tends to go over much better. And no, I don’t expect the AI to hack the cybersecurity, it first needs to develop that capability. AI training processes require a lot of time failing at doing something, that training is going to get noticed. AI isn’t magically good at anything, and while the learning process can be much faster, that speed is going to lead to a shit-ton of noise on the network. And guess what, we have AI and automation running on our behalf as well. And those are trained to shutdown rogue devices attacking the cybersecurity infrastructure.

    "Oh wait, but the AI would be sneaky, slow and stealthy!" Why would it? What would it have in its current model which would say "be slow and sneaky"? It wouldn't; you don't train AI models to do things which you don't need them to do. A paperclip optimizing AI wouldn't be trained on using network penetration tools. That's so far outside the needs of the model that the only thing it could introduce is more hallucinations and problems. And given all the Frankenstein's Monster stories we have built and are going to build around AI, as soon as we see anything resembling an AI reaching out for abilities we consider dangerous, it's going to get turned off. And that will happen long before it has a chance to learn about alternative power sources. It's much like zombie outbreaks in movies: for them to move much beyond patient zero requires either something really, really special about the "disease" or comically bad management of the outbreak. Sure, we're going to have problems as we learn what guardrails to put around AI, but the doom and gloom version of only needing one mistake is way overblown. There are so many stopping points along the way from single function AI to world dominating AI that it's kinda funny. And many of those stopping points are the same "the attacker (humans) only needs to get lucky once" situation. So no, I don't believe that the paperclip optimizer AI problem is all that real.

    That does take us to the question of a real general purpose AI being let loose on the internet to consume all human knowledge and become good at everything, which then decides to control everything. And maybe this might be a problem, if we ever get there. Right now, that sort of thing is so firmly in the realm of sci-fi that I don't think we can meaningfully analyze it. What we have today, fancy neural networks, LLMs and classifiers, puts us in the same ballpark as Jules Verne writing about space travel. Sure, he might have nailed one or two of the details; but the whole thing was so much more fantastically complex and difficult than he had any ability to conceive. Once we are closer to it, I expect we're going to see that it's not anything like we currently expect it to be. The computing power requirements may also limit its early deployment to only large universities and government projects, keeping its processing power well centralized. General purpose AI may well have the same decapitation problems humans do. It can have fantastical abilities, but it needs really powerful data centers to run, and those bring all the power, cooling, and not getting blown the fuck up with a JDAM problems of current AI data centers. Again, we could go back and forth making up ways for AI to techno-magic its way around those problems, but it's all just baseless speculation at this point. And that speculation will also inform the guardrails we build in at the time. It would boil down to the same game children play where they shoot each other with imaginary guns and have imaginary shields, and they each keep re-imagining their guns and shields to defeat the other's. So ya, it might be fun for a while, but it's ultimately pointless.


  • For someone who spends a lot of time alone and on a computer this will seem anathema, but go find some sort of physical activity (sport) and start engaging in it a few times a week. Not only does this get you out of the house, it creates opportunities to engage with people socially and it is good for your health.

    I am very much a stay at home, be in front of my computer type hermit. I was this way most of my life and even being married didn’t help much as my wife is the same. A good Friday night for us currently involves playing Baldur’s Gate 3 until much too late. We have a very small circle of friends and don’t get out much at all. However, now in my late 40’s I am having some health issues and that finally gave me the push to get out of my gaming chair and get my body moving. I took up climbing at an indoor rock climbing gym and I really enjoy it. The regularly changing routes on the walls mean that I get to engage the puzzle solving part of my brain, and I am pushed physically as I try to get better. In between climbs I’m near other people with an obvious shared interest and can practice talking to other people by discussing the routes (social skills are like all skills, they take practice). And the exercise has made my doctor visits a lot less “you’re going to die horribly” and more “we’ve got things pretty well controlled”. I also just feel better.

    So ya, go out and find some sort of physical activity you enjoy. Don't be afraid to try new things; you'll suck at them, but that's to be expected. The first step in being good at anything is sucking at it. Use that suckage to engage with other people and learn how to suck less. This will help you suck less at socializing. I won't say that any of this is easy, it's not. I know there is the hermit piece of me which always wants to fall back into just hiding out in my basement (literally, my office is in my basement). But I've also made a habit of climbing 2-3 times a week, and 3 years into doing that I am now looking forward to that time. I get excited when I walk into the gym and see one of the walls changed and now get to solve a new set of climbing routes. I still kinda suck, but not anywhere near as much as I did on my first day.


  • I started self hosting in the days well before containers (early 2000's). Having been through that hell, I'm very happy to have containers.
    I like to tinker with new things and with bare metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat out incompatible software. While these issues have gotten much better over the years, isolating applications avoids this problem completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better, I currently just rebuild the image, but it gets the job done.
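A compose file in the spirit of that template might look like the sketch below. The image name and paths are illustrative (the author's actual templates aren't shown); the ports are Valheim's default 2456-2458/UDP range, and the version tag is pinned per the earlier point about avoiding ":latest".

```
# Hypothetical per-game docker-compose.yaml built off a template.
services:
  valheim:
    build: .                              # Dockerfile tweaked for the game's Steam AppId
    image: local/valheim-server:1.0.0     # pinned tag; updates are a deliberate rebuild
    ports:
      - "2456-2458:2456-2458/udp"         # Valheim's default server ports
    volumes:
      - ./saves:/data/saves               # save data lives outside the container
    restart: unless-stopped
```

Keeping the save data in a bind mount is what makes the "just rebuild the image" update process safe: the container is disposable, the world files aren't.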