

CAD was a big problem for me as well. I’ve been happy enough with OnShape (coming from Autodesk Inventor), but the extreme SaaS nature of it makes me worry.
Yeah, I think it’d be a pretty silly thing for us to ever try to do. My goal was to take their stupid idea, provide a slightly less stupid idea, and then say “or just don’t do space power at all and keep everything terrestrial.” Orbital solar power stations were lots of fun in science fiction, but panels are cheap, there’s plenty of land, and giant death masers that cook any birds flying into the beam are, uh, suboptimal.
We’ve had the template for this for decades. Put the solar panels in space where the thick soupy gunky spunky atmosphere doesn’t stop the little energy things from the sun. Collect the power in orbit. You just do that up there up in orbit okay? And then you fucking beam the power down to the surface you numpty fucks. Use a maser to send the power down to the surface and you can pick a frequency that isn’t affected by the gunky spunky and then the receivers on the ground can pick it up and they send the power through these things called wires to a building that uses the power and the building can use this neat little thing called CONVECTION to more efficiently remove the heat from the things using the electricity wow.
Or just, y’know, use less power and make use of ground based solar. We don’t need fucking AI data centers in space. Don’t get me wrong, I think it might be useful to, say, have some compute up in geostationary orbit that other satellites could punt some data to for computation. You could have an evenly spaced ring of the fuckers so the users up there can get some data crunching done with an RTT of like 50ms instead of 700ms. That seems like a hard sell, but it at least seems a bit tenable if you needed to reduce the data you’re sending back to Earth down to a more manageable amount with some preprocessing. That is still not fuckass gigawatt AI data centers. Fuck
The No Kings protest in Utah ended tragically because armed “peacekeepers” (aka armed civilians) shot at a protester who was open-carrying an AR-15 at the protest. The protester had no ill intentions, but the peacekeepers didn’t know that. The peacekeepers missed and killed a bystander.
That’s why you don’t open carry at protests. The untrained “good guy with a gun” is likely to shoot you. Carry concealed if you’re going to carry, or don’t bring a gun at all.
Oh god, please don’t use it for Bash. LLM-generated Bash is such a fucking pot of horse shit bad practices. Regular people have a hard enough time writing good Bash, and something trained on all the fucking crap on StackOverflow and GitHub is inevitably going to be so bad…
Signed, a senior dev who is the “Bash guy” for a very large team.
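To make that concrete, here’s a hypothetical example of the kind of thing I mean — generated Bash loves unquoted expansions and parsing ls, which blow up on whitespace, and the boring correct version is barely longer:

```shell
#!/usr/bin/env bash
# LLM-ish Bash tends toward:  for f in $(ls *.txt); do rm $f; done
# which breaks on spaces, newlines, and globs. The boring, correct version:
set -euo pipefail

dir=$(mktemp -d)
touch "$dir/a file.txt"        # a filename with a space: the classic killer

count=0
for f in "$dir"/*.txt; do       # glob directly instead of parsing ls
  [ -e "$f" ] || continue       # handle the zero-match case
  count=$((count + 1))
done
echo "found $count file(s)"
rm -rf "$dir"
```

The quoted glob finds the file with a space in its name; the `$(ls …)` version would have split it into two bogus words.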
Yo, I think your shit got hacked.
I’ve listened to most of the Culture series and I really liked it all. Look To Windward was especially good imo.
I love The Player of Games by Iain M. Banks. The audiobook is narrated by Peter Kenny and he does such a good job with it.
AFAIK, LFP thermal runaway can’t start fires. NMC or other lithium chemistries can and they scare me, but LFPs are pretty damn safe. That being said, I’m still stoked for sodium chemistries to be developed. If the round trip efficiency issues can be solved, then I think it’ll be a great solution for residential power storage.
I made the mistake of buying a Samsung washer/dryer set in 2017. The washer actually still works and the seal has held up well, but the dryer drum jumped its tracks within the first year, and both have been plagued with gremlins.
Fuck Samsung appliances and honestly most things Samsung sells.
It was a form from Google soliciting feedback on the thing.
Lovely, thank you for this. I’ve left my feedback, and I hope many, many other people do as well.
It also lacks any form of dependency management AFAICT. I don’t think there’s any way to say you depend on another service. I’m guessing you can probably order things lexically? But that’s, uh, shitty and bad.
I wrote and maintained a lot of sysvinit scripts and I fucking hated them. I wrote Upstart scripts and I fucking hated them. I wrote OpenRC scripts and I fucking hated them. Any init system that relies on one of the worst languages in common use nowadays can fuck right off. Systemd units are well documented, consistent, and reliable.
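For reference, this is roughly what I mean — a minimal (hypothetical) unit file, with the dependency ordering and supervision declared instead of hand-rolled in shell:

```ini
# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=My app
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
Type=simple            ; foreground process; systemd supervises it directly
ExecStart=/usr/local/bin/myapp --config /etc/myapp.toml
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Compare that to the PID-file juggling and `start-stop-daemon` incantations a sysvinit script needs to express the same thing.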
From my 30 seconds of looking, I actually like nitro a bit more than OpenRC or Upstart. It does seem like it’d struggle with daemons the way sysvinit scripts used to. Like, you have to write a process supervisor to track when your daemonized process dies so that it can then die and tell nitro (which is, ofc, a process supervisor), and it looks like the logging might be trickier in that case too. I fucking hate services that background themselves, but they do exist and systemd does a great job at handling those. It also doesn’t do any form of dependency management AFAICT, which is a more serious flaw.
Nitro seems like a good option for some use cases (although I cannot conceive why you’d want to run a service manager in a container when docker and k8s have robust service management built into them), but it’s never touching the disk on any of the tens of thousands of boxes I help administer. systemd is just too good.
Just journalctl | grep and you’re good to go. The binary log files contain a lot of metadata per message that makes it easy to do more advanced filtering without breaking existing log file parsers.
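As a sketch of what that metadata buys you: journalctl -o json emits one JSON object per message, so you can filter on structured fields like _SYSTEMD_UNIT or PRIORITY instead of grepping log text (the sample line below is made up):

```python
import json

# A made-up line shaped like `journalctl -o json` output.
sample = '{"_SYSTEMD_UNIT": "nginx.service", "PRIORITY": "3", "MESSAGE": "upstream timed out"}'

entry = json.loads(sample)

# grep can only see the message text; the journal's per-message metadata
# lets you filter on unit, priority, PID, etc. without parsing log lines.
is_nginx_error = (
    entry["_SYSTEMD_UNIT"] == "nginx.service" and int(entry["PRIORITY"]) <= 3
)
print(is_nginx_error)
```

(journalctl also does this filtering natively, e.g. `journalctl -u nginx -p err`, without you writing any code.)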
I’ll agree that list comprehensions can be a bit annoying to write because your IDE can’t help you until the basic loop is done, but you solve that by just doing the bare [x for x in xs] loop first and then adding whatever conditions and attr access/function calls you need.
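i.e., something like this (all the names are made up) — write the skeleton first so the IDE knows the loop variable’s type, then layer on the condition and the attribute access:

```python
# Hypothetical data for illustration.
class User:
    def __init__(self, name, active):
        self.name = name
        self.active = active

users = [User("alice", True), User("bob", False), User("carol", True)]

# Step 1: the bare loop, so autocomplete knows what `u` is:
#   names = [u for u in users]
# Step 2: layer on the condition and the attribute access:
names = [u.name.upper() for u in users if u.active]
print(names)  # -> ['ALICE', 'CAROL']
```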
Anubis has worked if that’s happening. The point is to make it computationally expensive to access a webpage, because that’s a natural rate limiter. It kinda sounds like it needs to be made more computationally expensive, however.
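The mechanism is basically hashcash. A minimal sketch of the idea (not Anubis’s actual implementation): the server hands out a nonce and a difficulty, and the client has to grind SHA-256 hashes until one has enough leading zero bits — verification is one hash, solving costs ~2^d:

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty_bits: int) -> int:
    """Find a counter such that sha256(challenge:counter) has
    `difficulty_bits` leading zero bits. Cost doubles per bit."""
    target = 1 << (256 - difficulty_bits)
    for counter in count():
        digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return counter

def verify(challenge: str, counter: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Solving at difficulty d costs ~2**d hashes on average; checking costs one.
# That asymmetry is the rate limiter: bump d and scrapers pay exponentially more.
answer = solve("server-nonce-123", 12)
print(verify("server-nonce-123", answer, 12))
```

"Make it more expensive" is literally just incrementing the difficulty parameter.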
Do you have any sources for the 10x memory thing? I’ve seen people who have made memory usage claims, but I haven’t seen benchmarks demonstrating this.
EDIT: glibc-based images wouldn’t be using service managers either. PID 1 is your application.
EDIT: In response to this:
There’s a reason a huge portion of docker images are alpine-based.
After months of research, my company pushed thousands and thousands of containers away from alpine for operational and performance reasons. You can get small images using glibc-based distros. Just look at chainguard if you want an example. We saved money (many many dollars a month) and had fewer tickets once we finished banning alpine containers. I haven’t seen a compelling reason to switch back, and I just don’t see much to recommend Alpine outside of embedded systems where disk space is actually a problem. I’m not going to tell you that you’re wrong for using it, but my experience has basically been a series of events telling me to avoid it. Also, I fucking hate the person that decided it wasn’t going to do search domains properly or DNS over TCP.
Debian is superior for server tasks. musl is designed to optimize for smaller binaries on disk. Memory is a secondary goal, and cpu time is a non-goal. musl isn’t meant to be fast, it’s meant to be small and easily embedded. Those are great things if you need to run in a network/disk constrained environment, but for a server? Why waste CPU cycles using a libc that is, by design, less time efficient?
EDIT: I had to fight this fight at my job. We had hundreds of thousands of Alpine containers running, and switching them to glibc-based containers resulted in quantifiable cloud spend savings. I’m not saying musl (or alpine) is bad, just that you have horses for courses.
The “f” stands for “file”. The C man page has some details on how it works: https://www.man7.org/linux/man-pages/man2/flock.2.html
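In case it’s useful, here’s a minimal sketch of using it from Python via fcntl.flock (same advisory flock(2) semantics; the lock file path is made up):

```python
import fcntl
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "flock-demo.lock")  # hypothetical path

f1 = open(path, "w")
fcntl.flock(f1, fcntl.LOCK_EX | fcntl.LOCK_NB)   # exclusive, non-blocking

# A second open file description can't take the lock while f1 holds it;
# LOCK_NB makes it raise instead of blocking.
f2 = open(path, "w")
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    contended = False
except BlockingIOError:
    contended = True

fcntl.flock(f1, fcntl.LOCK_UN)  # explicit release (also happens on close)
f1.close()
f2.close()
print(contended)
```

Note the lock is advisory: it only coordinates processes that also call flock, and it lives on the open file description, not the process.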