• 0 Posts
  • 28 Comments
Joined 15 days ago
Cake day: January 21st, 2025

  • Is it Teflon coated? If so, you should be careful. Many of the suggestions here are for increasingly abrasive options, which will scratch that coating and cause it to eventually flake off into your food, and that is definitely not something you want to eat.

    On one hand, the grease itself is probably not a food safety issue anymore. As with a cast iron pan, once oil/grease heats enough on a surface it polymerizes and essentially bonds to the surface. This is generally safe unless the oil is exposed to very high temps (beyond what is typically used for cooking), but it looks bad on stuff like sheet pans.

    However, you do want to be a bit more diligent about cleaning as a result. Unlike a cast iron pan, where the polymerized layer (the seasoning) is generally very smooth, this layer generally is not; it’s bumpy and has more nooks and crannies. That means there are more areas where filth and bacteria can be harbored. Not a huge issue, just make sure you clean well.



  • I assume they’re being pragmatic. Appealing to the US to remove copyright is a fool’s errand. Frankly, appealing to shorten the length of the protection period is too, given the nature of corporate protectionism, but it’s far more likely than America ever entertaining the idea of free information, regardless of the benefits that could ensue. Think of the shareholders! And yourself! Gotta hang that carrot in front of you. What if you finally write that novel and it blows up? I know you secretly want to be a multimillionaire! No one just makes art or has the passion to study and document something without financial gain, that’s dumb. Ignore all those forum posts where people do exactly that.




  • Oh, I didn’t mean larger like that, I meant width-wise. Standard rack width is 19 inches, so if it’s one of those specialty racks that’s narrower, the thing I said about repurposing an old 1U/2U is pointless because it won’t fit. Doesn’t mean you can’t or shouldn’t use this rack, just that that idea is no good.

    4U is fine unless you want to expand down the line with networking gear and such. However, if it’s a narrow rack I don’t think there will be much to put in it for those purposes? Depends on your goals. I have a larger rack, but I also have my whole networking stack in it: switch, PoE switch, UPS, router, NAS, etc.

    I would consider posting on the Unraid forums. There may be someone who has used similar hardware and can give guidance on how they approached the setup. The benefit of Unraid is ultimately that the support community is very solid.


  • How do you connect the drives? Looking at the specs there’s only one SATA port (which I don’t actually see anywhere, but the spec sheet says it’s there, although using it slows the second NVMe lane).

    USB-connected drives in a RAID array are not ideal. USB connectivity is not as solid as a direct SATA connection, and a drive suddenly disappearing from your array, especially a parity drive, is quite a headache.

    There’s no PCIe slot, so you can’t add an HBA for more SATA ports either. You could do one of those NVMe-to-SATA adapters, but I’ve heard bad things about their reliability.

    If it’s free, though, I definitely think it’s worth finding a way to make it work. The specs are more than enough for Unraid, and those tiny PCs are usually pretty power efficient, which is nice. But that’s the issue to work around: connecting the hard drives reliably.

    WRT what to put the drives in, it could be anything really. You could get a cheap broken 1U or 2U server case where someone’s pulled the motherboard and power supply, and rig something in there to hold them all. That should be more than enough space for 5 drives and will probably have cages for at least 2-3, maybe all 5 if you get lucky. Might even have hot swap bays. Dunno if it would fit though; that rack looks small and I couldn’t get the specs to load. Is it full sized or a tiny one?

    Could also see if there’s some kind of 3D-printed option. There’s probably a 3D-printable mount to rack mount that mini PC.


  • Not shocking to hear, he’s a scumbag at heart. But now if you say that, people will be like “uhhh how can you say that, he’s donated so much money.”

    Then when you point out that he’s donated literally 0% of his overall current net worth, that his past (and current, apparently) behavior has arguably cost as much human good as his donations have offset, if not more, etc., you’ll get whataboutism. “What have you done??”

    I don’t want philanthropy to be contingent on the whims of billionaires. Gates has done a lot, but it still has major issues: there is no real transparency, and it’s still autocratically controlled because he has a great deal of influence over his foundation. The even bigger issue is that he is by far the exception. Other billionaires donate minimally, only to maximize tax benefits and only for issues they have been personally impacted by.

    The other day I was with people who were watching a football game. The Eagles won, and I asked why the owner gets to speak first at the trophy ceremony, let alone at all, given it was the team’s effort. This led to a whole discussion, but one thing that came up was how he donates so much money to autism research because he has a grandson with autism. This was meant to appeal to me because I have a background working in autism research and I work with people with autism a lot.

    All I could think was, “how fucked up is it that we have to hope an obscenely rich person personally experiences the issue before they decide to bequeath funding?” This inherently means that things with a much higher prevalence, like autism (roughly 1 in 36) or dementia (prevalence varies widely by age range (2% to 13%), but ~10 million cases per year), will get tons of money. But what about far less common things? I’ve worked with people who have extremely rare conditions: Angelman syndrome, Prader-Willi, chromosomal deletions (rates of 1-2 per 10,000), or extremely rare things like Heller’s syndrome (rates of 1-2 per 100,000).

    This is why we fund things like NIMH: so that money can be fairly disbursed to ensure that everything gets researched. Teams of people research what needs to be researched. This isn’t even just about equity; sometimes researching lesser-known disorders leads to discoveries that are applicable in a broader context.

    But instead we let a few oligarchs hoard money. Most of them don’t bother to fund this stuff at all, and the few that do only bother when it’s something personally relevant to them. We have no say in the matter.


  • The 80s, 90s, and a few years into the early 2000s. Gates’ ruthlessness lasted decades, destroyed many businesses and lives, and is mostly whitewashed thanks to his philanthropic efforts, a few Reddit AMAs, and some Secret Santa participation.

    Not to mention the damage he did to computing as a whole. The nightmare of proprietary bullshit is something he did not architect, but he pushed it heavily and lobbied for it constantly. From an early stake in computing, he was in a position to push for interoperability and set a strong precedent for computers working together. Instead he and Microsoft made every effort to work against open standards: they would adopt open standards and extend them with proprietary extensions to intentionally ruin them. A lot of what is infuriating about modern tech can be traced back to precedent that Microsoft set at his direction.

    Reminder: despite every donation he has made, his net worth is higher now than it has ever been, and this has essentially always been the case. His philanthropy, while objectively good, is a measured PR effort that does not impact his overall obscene wealth and basically never has.




  • On-device isn’t always ideal. I don’t use Immich because I don’t have a large photo library, but I do use Komga. Nextcloud can sort and manage EPUBs/PDFs like Komga, but as poVoq said, the specialized solution is superior.

    This is the point where an on-device app is not the ideal solution, for me at least. Those apps exist; Tachiyomi and its forks can import a local library, and frankly even a fairly massive local library can fit on a cheap SD card.

    The point of the server is portability. With it I have portability across my devices: my library, reading status, metadata, etc. are available everywhere. I can read a book on my e-reader, close it, and the status is synced. I can pick up from my laptop and the same thing happens. I can pick up from my phone, download the book to my device, and keep reading while I’m away from home. If I wanted to I could open remote access to my server and avoid the need for downloading the books, but that’s a whole thing.

    I don’t think it would make sense to run a server solely for this but it’s a service that doesn’t take much in terms of resources and I read a lot.


  • I just keep them on a USB stick, with a copy on the array as well so they can also be checked for bitrot. Even doing it for every file it’s not that much data, and it’s scripted, so it’s done pretty continuously (I do it weekly).
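
    It’s nothing fancy; the idea is just “hash both copies and complain if they disagree,” roughly like this (a minimal sketch — the paths and hash choice are placeholders, not my exact script):

        import hashlib
        from pathlib import Path

        # Placeholder paths -- the real script points at the actual mount points.
        STICK = Path("/mnt/usb_stick/docs")
        ARRAY = Path("/mnt/user/docs")

        def sha256(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Hash every file on the stick and compare against the copy on the array.
        # If the two copies ever disagree, one of them has rotted (or gone missing).
        for stick_file in STICK.rglob("*"):
            if not stick_file.is_file():
                continue
            array_file = ARRAY / stick_file.relative_to(STICK)
            if not array_file.is_file():
                print(f"MISSING on array: {array_file}")
            elif sha256(stick_file) != sha256(array_file):
                print(f"MISMATCH (possible bitrot): {stick_file}")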

    Actual file backups are what I store off-site: 2 copies, one here and one off. My data generally doesn’t change all that much, so I don’t bother continually backing up most directories. It doesn’t make sense to have 30 backups of my TV folder with my shows; they’re the same shows. I have some redundancy, I don’t just do one and done, but tape media is expensive so I don’t do monthly backups either. Tape is wildly impractical for most home users though, and off-site with tape means you need a trusted place to put it that’s reasonably safe and has moderately decent climate/humidity. An advantage of tape, though, is that basically no one but the biggest of tech dorks is going to be able to read that data (versus something like leaving an external hard drive or Blu-ray at a friend’s house; even if you trust them a LOT, they might plug it in. Although encryption exists).

    It’s home data so it’s about balancing what makes sense with what’s cost effective and your risk tolerance

    Some data is crucial, of course. My personal documents are backed up far more regularly, like once an hour or so, and that’s where I use services like Backblaze. My business, which is healthcare oriented, is entirely different; that data is segregated and uses Backblaze as well as specialized software, since it involves PHI and HIPAA concerns. That’s backed up pretty much every few minutes.


  • Bitrot sucks

    ZFS protects against this. It has historically been a pain to work with for home users, but the recent implementation of raidz expansion has made things a lot easier, as you can now expand vdevs and increase the size of arrays without doubling the number of disks.

    This is potentially a great option for someone like you who is just starting out, but it still requires a minimum of 3 disks and the associated hardware. Sucks for people like me though, who built arrays lonnnnng before ZFS had this feature! It was literally upstreamed less than a year ago, so good timing on your part (or maybe bad, maybe it doesn’t work well? I haven’t read much about it tbf, but from the small amount I have read it seems to work fine, and they worked on it for years).

    Btrfs is also an option for similar reasons, as it has built-in protection against bitrot. If you read up on this there can be a lot of debate about whether it’s actually useful or dangerous; FWIW, the consensus seems to be that for single drives it’s fine.

    My array has a separate RAID 1 array of 2 TB NVMe drives, used as much higher speed cache/working storage for the services that run. E.g., if a torrent downloads it goes to the NVMe first, since that storage is much easier to work with than the slow rotational drives (which are even slower because they’re in a massive array); later, in the middle of the night, the file is moved to the large array for storage. Reading from the array is generally not an intensive operation, but writing to it can be, and a torrent that saturates my gigabit connection sometimes can’t keep up (same for other operations that aren’t internet dependent, like muxing or transcoding a video file). Anyway, that RAID 1 array uses btrfs and has had 0 issues. That said, I personally wouldn’t recommend btrfs for RAID 5/6, and given the nature of this cache array I don’t care at all about the data on it.
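
    In case the cache-then-move idea is unclear: the nightly move is handled automatically for me, but conceptually it boils down to something like this (a toy sketch only — the paths and the one-day cutoff are made up):

        import shutil
        import time
        from pathlib import Path

        # Toy version only -- paths and cutoff are placeholders; in practice the
        # array software's scheduled mover does this for me.
        CACHE = Path("/mnt/cache/downloads")
        ARRAY = Path("/mnt/user0/downloads")
        CUTOFF = time.time() - 24 * 60 * 60  # leave anything newer than a day on the fast cache

        for f in CACHE.rglob("*"):
            if f.is_file() and f.stat().st_mtime < CUTOFF:
                dest = ARRAY / f.relative_to(CACHE)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(f), str(dest))  # finished files migrate to the big slow array overnight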

    My array uses XFS. That doesn’t protect against bitrot. What you can do in this scenario is what I do: once a week I run a plugin that checksums all new files and verifies the checksums of old files. If a checksum doesn’t match it warns me, and I can then restore the affected file from backup and investigate the cause (SMART errors, bad SATA cable, ECC problem with RAM, etc.). The upside of my XFS array is that I can expand it very easily and storage is maximized: I have 2 parity drives, and at any point I can simply pop in another drive and extend the array. This was not an option with ZFS until about 9 months ago. It’s a relatively “dangerous” setup, but my array isn’t storing amazingly critical data, it’s fully backed up despite that, and despite all of that it’s been going for 6+ years and has survived at least 3 drive failures.
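
    The plugin does the heavy lifting, but the idea is roughly this (a rough sketch, not the actual plugin — the array path and checksum database location are placeholders):

        import hashlib
        import json
        from pathlib import Path

        # Sketch of the weekly job -- paths are placeholders, not my real config.
        ARRAY = Path("/mnt/user")
        DB = Path("/boot/config/checksums.json")

        def sha256(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        known = json.loads(DB.read_text()) if DB.exists() else {}

        for f in ARRAY.rglob("*"):
            if not f.is_file():
                continue
            digest = sha256(f)
            if str(f) not in known:
                known[str(f)] = digest                # new file: record its checksum
            elif known[str(f)] != digest:
                print(f"CHECKSUM MISMATCH: {f}")      # warn so it can be restored from backup

        DB.write_text(json.dumps(known, indent=2))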

    That said, my approach is inferior to btrfs and ZFS, because in this scenario they could repair from redundancy or roll back to a snapshot rather than my needing to manually restore from backup. One day I will likely rebuild my array with ZFS, especially now that raidz expansion is complete. I was basically waiting for that.

    As always double check everything I say. It is very possible someone will reply and tell me I’m stupid and wrong for several reasons. People can be very passionate about filesystems


  • Yeah I have a 15 drive array.

    You can do RAID 1, which is basically just keeping a constant copy of the drive. A lot of people don’t do this because they want to maximize storage space, but if you only have a 2-drive array it’s probably your safest option.

    It’s only when you get to 3 drives (a 2-drive array + parity) that you have some potential to maximize storage space. Note that you’re still basically sacrificing the space of an entire drive, but now you get double the usable space, and it’s more resilient overall because the data is spread out over multiple drives. It costs more, though, because obviously you need more drives.
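
    If the parity part sounds like magic, it’s basically just XOR. A toy illustration (single parity in the RAID 5 style; real arrays do this per stripe, but the principle is the same):

        # Toy illustration: single parity is just the XOR of the data drives,
        # so any one drive can be rebuilt from the survivors.
        drive1 = bytes([0x10, 0x22, 0x3C])
        drive2 = bytes([0xA0, 0x05, 0xFF])
        parity = bytes(a ^ b for a, b in zip(drive1, drive2))

        # Say drive1 dies: XOR the surviving drive with parity to get it back.
        rebuilt = bytes(p ^ b for p, b in zip(parity, drive2))
        assert rebuilt == drive1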

    Keep in mind none of these are backup solutions, though. It’s true that when a drive dies in a RAID array you can rebuild the data from the other drives, but it’s also true that this operation is extremely stressful and can kill the array. E.g., in RAID 1 a single drive dies, and while rebuilding onto a new drive the second drive, the one holding the copy of your data, starts having sector corruption; or in a parity setup one of the 3+ drives dies, and while you rebuild from parity the parity drive dies for similar reasons. These drives are normally only accessed occasionally, and a rebuild basically seeks to every sector on the drive if you have a lot of data, putting the drive under heavy read load for a very long period of time (like days), especially with very large modern drives (18, 20, 24 TB).

    So either be okay with your data going “poof” or back up your data as well. When I got started, I was okay with certain things going “poof”, like pirated media, and would back up essential documents to cloud providers. That was really the only feasible approach because my array is huge (about 200 TB, with about 100 TB used). But now I have tape backup, so I back everything up locally, although I still back up critical documents to Backblaze. Depends on your needs. I am very strict about not wanting to be tied to Google, Apple, Dropbox, etc., and my media collection is not simply stuff I can re-torrent; it’s a lot of custom media where I’ve put together the “best” version to my taste. But to set something like this up either takes a hefty investment or, if you’re like me, years of trawling e-waste/recycling centers and decommission auctions (and it’s still pricey then, but at least my data is on my server and not Google’s).




  • Yeah, there are plenty of apps that can rip from Tidal, Apple Music, etc. NoteBurner, Deemix, Deezloader, Musify, and NoteCable are all ones I tried that successfully ripped audio from streams to FLAC, but spectrals showed the FLAC was transcoded from a lossy source.

    Granted, this is basically inaudible and super nitpicky; honestly, show me the person who can truly hear the difference between a modern 320 kbps MP3 and a 16-bit FLAC in a double-blind test. But if you’re using these rippers to upload to a private tracker, especially for a popular release, I guarantee someone will check.

    That said, streamrip can get 16-bit Deezer, 24-bit Tidal MQA (which isn’t actually lossless), and 24/192 Qobuz, but you need a premium account and things break from time to time.

    https://github.com/nathom/streamrip

    Apple Music remains a very closely guarded secret, although I recently saw this: https://github.com/zhaarey/apple-music-downloader . I’ll have to create a burner account and a VM to play with it though, because it’s pretty sketch.



  • Most of the publicly available ones that claim to rip streaming services to lossless fail spectral checks. They can rip high-quality MP3s, which they then transcode to FLAC, but if you were to upload that somewhere like RED you’d get shit for it. Literally every one I’ve found has failed in the spectral check thread on RED.

    This MAY not apply to Spotify, as they don’t stream lossless to begin with.

    The people who can actually rip fully lossless files from Deezer, Apple Music, Qobuz, Tidal, etc. guard that info like crazy. The second a method gets public, you’d better believe all those companies patch it out. Plus it probably doesn’t hurt that being the one with the keys to the method gets you basically infinite ratio.