

No, it’s a panic, so it’s more similar to a segfault, but with some amount of unwinding. It can be “caught” but only at a thread boundary.


It is unwrap’s fault. If they did it properly, they would’ve had to explicitly deal with the problem, which would’ve clarified exactly what the problem was. In this case, I’d probably use expect() to add context. And when doing anything with strict size requirements, I’d explicitly check the size up front to make sure it’ll fit, again for better error reporting.
Proper error reporting could’ve made this a 5-min investigation.
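As a sketch of what that could look like in Rust (the function name, buffer size, and messages here are hypothetical, not the actual code in question):

```rust
// Hypothetical example: converting a buffer with a strict size requirement.
// Instead of a bare .unwrap() on the conversion, check the size explicitly
// and reserve expect() for invariants that were just verified.
fn load_config(buf: &[u8]) -> Result<[u8; 64], String> {
    // An explicit size check turns a mystery panic into an actionable error.
    if buf.len() != 64 {
        return Err(format!(
            "config buffer must be exactly 64 bytes, got {}",
            buf.len()
        ));
    }
    // If this ever panics, the message documents the invariant that broke.
    Ok(buf.try_into().expect("length was checked to be 64 above"))
}
```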
Also, the problem should’ve been caught in the first place with unit tests and a test deploy. Our process here is:
And we’re not a massive software shop, we have a few dozen devs in a company of thousands of people. If I worked at Cloudflare, I’d have more rigorous standards given the global impact of a bug (we have a few hundred users, not billions like Cloudflare).


It is precious and beyond compare. It has tools that most other languages lack to prove certain classes of bugs are impossible.
You can still introduce bugs, especially when you use certain features that the standard linter (clippy) can flag and that no team would silence globally. .unwrap() is very controversial in Rust and should never be used without clear justification in production code. Even in my pet projects, it’s the first thing I clear out once basic functionality is there.
This issue should’ve been caught at three separate stages:
The fact that it made it past all three makes me very concerned about how they do development over there. We’re a much smaller company and we’re not even a software company (software dev is <1% of the total company), and we do this. We don’t even use Rust, we’re a Python shop, yet we have robust static analysis for every change. It’s standard, and any company doing anything more than a small in-house tool used by 3 people should have these standards in place.
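For what it’s worth, enforcing that in Rust is cheap: clippy’s unwrap_used lint (opt-in, from the restriction group, so not on by default) can be denied crate-wide, and CI will then reject any bare .unwrap(). A minimal sketch:

```rust
// Deny bare .unwrap() across the whole crate; `cargo clippy` (and thus CI)
// rejects any code that reintroduces it.
#![deny(clippy::unwrap_used)]

fn main() {
    // let n: i32 = "42".parse().unwrap(); // clippy error: unwrap_used

    // You’re forced to state the assumption instead:
    let n: i32 = "42".parse().expect("literal is a valid i32");
    println!("{n}");
}
```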


Use something like Backblaze or Hetzner storage boxes for off-site backups. There are a number of tools for making this painless, so pick your favorite. If you have the means, I recommend doing a disaster recovery scenario every so often (i.e. disconnect existing drives, reinstall the OS, and load everything from remote backup).
Generally speaking, follow the 3-2-1 rule: keep 3 copies of your data, on 2 different types of storage, with 1 copy off-site.
For your situation, this could be: the live copy on your machine, a backup on a separate local drive, and the off-site copy on Backblaze or Hetzner.
You could rent a cloud server, but it’ll be a lot more expensive vs just renting storage.


Headlander was surprisingly fun. Probably not my favorite, but certainly up there.
I miss them. But I guess I’m in the minority on that one.


Exactly.
There’s a difference between gatekeeping and being transparent about what’s expected. I’m not suggesting people do it the hard way as some kind of hazing ritual, but because there’s a lot of practical value in maintaining your system yourself. Arch is simple, and their definition of simple means the devs aren’t going to do a ton for you outside of providing good documentation. If your system breaks, that’s on you, and it’s on you to fix it.
If reading through the docs isn’t your first instinct when something goes wrong, you’ll probably have a better experience with something else. There are plenty of other distros that will let you offload a large amount of that responsibility, and that’s the right choice for most people because most people don’t want to mess with their system, they want to use it.
Again, it’s not gatekeeping. I’m happy to help anyone work through the install process. I won’t do it for you, but I’ll answer any questions you might have by showing you where in the docs it is.


If you have reasonable practices, git blame will show you the original ticket, a link to the code review, and relevant information about the change.


Then just do it in your greenhouse. If you don’t have one, ask your help to build one.


Yes, Arch is really stable and has been for about 10 years. In fact, I started using Arch just before they became really stable (the /usr merge), and stuck with it for a few years after. It’s a fantastic distro! If openSUSE Tumbleweed stopped working for me, I’d probably go back to Arch. I ran it on multiple systems, and my main reason for switching was that I wanted a stable release cycle on servers and rolling on desktop, so I could use the same tools on both.
It has fantastic documentation, true, but most likely a new user isn’t going to go there, they’ll go to a forum post from a year ago and change something important. The whole point of going through the Arch install process is to force you to get familiar with the documentation. It’s really not that hard, and after the first install (which took a couple hours), the second took like 20 min. I learned far more in that initial install than I did in the 3-ish years I’d used other distros before trying Arch.
CachyOS being easy to set up defeats the whole purpose, since users won’t get familiar with the wiki. By all means, go install CachyOS immediately after the Arch install, but do yourself a favor and go through it once. You’ll understand everything from the boot process to managing system services so much better.


I 100% agree. If you want the Arch experience, you should have the full Arch experience IMO, and that includes the installation process. I don’t mean this in a gatekeepy way, I just mean that’s the target audience and that’s what the distro is expecting.
For a new user, I just cannot recommend Arch because, chances are, that’s not what they actually want. Most new users want to customize stuff, and you can do that with pretty much every distro.
For new users, I recommend Debian, Mint, or Fedora. They’re release based, which is what you want when starting out so stuff doesn’t change on you, and they have vibrant communities. After using it for a year or two, you’ll figure out what you don’t like about the distro and can pick something else.


I disagree. If you want to use Arch for the first time, install it the Arch way. It’s going to be hard, and that’s the point. Arch will need manual intervention at some point, and you’ll be expected to fix it.
If you use something like Manjaro or CachyOS, you’ll look up commands online and maybe it’ll work, but it might not. There’s a decent chance you’ll break something, and you’ll get mad.
Arch expects you to take responsibility for your system, and going through the official install process shows you can do that. Once you get through that once, go ahead and use an installer or fork. You know where to find documentation when something inevitably breaks, so you’re good to go.
If you’re unwilling to do the Arch install process but still want a rolling release, consider openSUSE Tumbleweed. It’s the trunk for several projects, some of them commercial, so you’re getting a lot of professional eyeballs on it. There’s a test suite every change needs to pass, and I’ve seen plenty of cases where they hold off on a change because a test fails. And when something does break (and it probably will at some point), you just snapper rollback and wait a few days. The community isn’t as big as other distros’, so I don’t recommend it for a first distro, but they’re also not nearly as impatient as the Arch forums.
Arch is a great distro, I used it for a few years without any major issues, but I did need to intervene several times. I’ve been on Tumbleweed about as long and I’ve only had to snapper rollback a few times, and that was the extent of the intervention.


All of them? Maybe an international consortium that pays devs in their home currency.


Back when I used an HDD in my laptop, I was able to get my boot down to 20s or so. I don’t understand what MS is doing…


You know what I want MS to do? Remove all the extra crap and just be a simple OS. The desktop should use 500MB or so of memory, boot should take a few seconds, and programs should launch just as quickly. Don’t do any weird caching nonsense, I don’t need tens of GBs of OS bloat, just give me a simple OS.
I have that w/ Linux. The only value Windows provides is app compatibility. Stop trying to be anything more than that.


Yup, and Linux probably boots faster. On my NVMe w/ full-disk encryption (done in software at the filesystem level, not by the disk’s own microcontroller), I boot to desktop in like 5 sec or less, and the desktop is fully usable. If I want to launch a program, I type the name and hit enter, and it launches in a couple seconds.
My M3 Mac is a little worse, since it gets confused about launching an app vs looking for a file, and it takes a bit longer to boot (20-30 seconds?).
But my SO’s Windows machine is something else. It takes a minute or two to boot, and after that it takes a minute or two to “settle.” I have no idea what it’s doing, but I generally get up and get a drink or something when my SO asks me to get something pulled up. Why is it so crappy?


The basic service is free. There’s an enterprise tier with more features, such as prioritizing IP ranges (e.g. geographical areas the company operates in).


And why take the risk? Just pay Cloudflare to take care of it instead of getting all the expertise in house, surely that’s cheaper, no?


“Everyone panic selling could spread over to people panic selling everything and trying to get their hands on cold hard cash so their entire life savings don’t vanish in an instant, so market-wide we could see big drops?”
Yeah, that’s basically what happens in a major correction. In fact, stock prices are largely the result of how many people are buying vs selling; more buyers than sellers causes prices to go up, more sellers than buyers causes prices to go down. Stock prices tend to have momentum precisely because of this (people try to jump on the bandwagon on the way up and jump off on the way down). And that’s also why we tend to see a quick recovery afterward once all the facts come out.
A 20-30% drop is a pretty big deal. It’s not anomalous though. There have been 19 major corrections (over 20% loss) over the past 150 years, meaning one happens roughly every 8 years on average (150/19 ≈ 7.9 years).
I don’t think this is like the .com crash or the financial crisis of the 2000s. But let’s say it is. If I bought at the peak of the .com bubble (March 10, 2000), I would’ve gotten 5.3% annualized growth over the 25 years since (so $1k would be $3,600-ish), assuming I don’t sell. The impact would be limited long term.
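A quick sanity check of that arithmetic (the inputs are just the figures above, not live market data):

```rust
fn main() {
    // ~19 corrections of 20%+ over ~150 years.
    println!("average gap: {:.1} years", 150.0 / 19.0); // ≈ 7.9

    // $1k compounding at 5.3%/yr from the March 2000 peak, for 25 years.
    let value = 1000.0 * 1.053_f64.powi(25);
    println!("ending value: ${:.0}", value); // ≈ $3637
}
```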
The AI bubble popping wouldn’t be the catastrophe many are making it out to be. I think it’ll be closer to the 2020 correction.
I think Nvidia is overvalued. I don’t think the economy will crash if AI crashes.


Nah, if there’s one thing they thoroughly test, it’s the spying.