• 0 Posts
  • 135 Comments
Joined 3 years ago
Cake day: July 5th, 2023




  • The Fediverse is designed specifically to publish its data for others to use in an open manner.

    Sure, and if the AI companies want to configure their crawlers to actually use APIs and ActivityPub to efficiently scrape that data, great. The problem is that there have been crawlers doing things very inefficiently (whether by malice, ignorance, or misconfiguration), scraping the HTML of sites repeatedly, driving up some hosting costs, and effectively DoSing some of the sites.

    If you put honeypot URLs in the mix, keep out polite bots with robots.txt, and keep out humans by hiding those links, you can serve poisoned responses only on URLs that nobody should be visiting, without worrying too much about collateral damage to legitimate visitors.
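    As a minimal sketch of that setup (the `/trap/` path is made up for illustration):

```
# robots.txt — polite crawlers that honor this never see the trap
User-agent: *
Disallow: /trap/
```

```html
<!-- hidden from human visitors; only followed by crawlers
     that ignore robots.txt and parse the raw HTML -->
<a href="/trap/page-1" style="display:none">archive</a>
```

    Anything that requests `/trap/...` has, by construction, ignored robots.txt and isn't a human clicking links, so it's a reasonable target for poisoned output.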




  • If I am reading this correctly, anyone who wants to use this service can just configure their HTTP server to act as the man in the middle of the request, so that the crawler sees your URL but is retrieving poison fountain content from the poison fountain service.

    If so, that means the crawlers wouldn’t be able to filter by URL because the actual handler that responds to the HTTP request doesn’t ever see the canonical URL of the poison fountain.

    In other words, the trap is “self hosted” at your own URL, while the poisoned stream itself comes from the service’s URL, which the crawler never sees.
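    If that reading is right, a reverse-proxy sketch might look like this (the `/trap/` path and `poison.example.net` hostname are made up, and the directives assume nginx):

```nginx
# Hypothetical nginx sketch: requests to your honeypot path are
# transparently proxied to the poison-fountain service, so the
# crawler only ever sees your domain in the URL it fetched.
location /trap/ {
    proxy_pass https://poison.example.net/stream/;
    proxy_set_header Host poison.example.net;
}
```

    From the crawler’s side this is indistinguishable from content you host yourself, which is exactly why filtering by the poison fountain’s canonical URL wouldn’t work.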


  • The hot concept around the late 2000’s and early 2010’s was crowdsourcing: leveraging the expertise of volunteers to build consensus. Quora, Stack Overflow, Reddit, and similar sites came up in that time frame where people would freely lend their expertise on a platform because that platform had a pretty good rule set for encouraging that kind of collaboration and consensus building.

    Monetizing that goodwill didn’t just ruin the look and feel of the sites: it permanently altered people’s willingness to participate in those communities. Some, of course, don’t mind contributing. But many do choose to sit things out when they see the whole arrangement as enriching an undeserving middleman.





  • Apple supports its devices for a lot longer after release than most OEMs (a minimum of 5 years after the device was last available for sale from Apple, which might itself be 2 years of sales), but the impact of dropped support is much more pronounced, as you note. Apple usually declares a device obsolete 2 years after support ends, too, and stops selling parts and repair manuals, except for a few batteries supported to the 10-year mark. On the software/OS side, that usually means OS upgrades for 5-7 years, then 2 more years of security updates, for a total of 7-9 years of keeping a device reasonably up to date.

    So if you’re holding onto a 5-year-old laptop, Apple’s support tends to be much better than what you’d get for a 5-year-old laptop from a Windows OEM (especially with Windows 11 upgrade requirements failing to support some devices that were still on sale at the time of Windows 11’s release).

    But if you’ve got a 10-year-old Apple laptop, it’s harder to use normally than a 10-year-old Windows laptop.

    Also, don’t use the Apple App Store for software on your laptop. Use a reasonable package manager like Homebrew that doesn’t have the problems you describe. Or go find a mirror that hosts old macOS packages and install them yourself.


  • Most Costco-specific products, sold under their Kirkland brand, are pretty good. They’re always a good value, and sometimes they’re among the best in class even setting cost aside.

    I think Apple’s products improved when they started designing their own silicon chips for phones, then tablets, then laptops and desktops. I have beef with their operating systems but there’s no question that they’re better able to squeeze battery life out of their hardware because of that tight control.

    In the restaurant world, there are plenty of examples of a restaurant having a better product because they make something in house: sauces, breads, butchery, pickling, desserts, etc. There are counterexamples, too, but sometimes that kind of vertical integration can result in a better end product.




  • This write-up is really, really good. I think about these concepts whenever people dismiss astrophotography or other computation-heavy photography as fake, software-generated images, when the reality is that translating sensor data into a graphical representation for the human eye (with all the quirks of human vision, especially around brightness and color) requires conscious decisions about how the charges or voltages on a sensor get translated into pixels in a digital file.


  • “my general computing as a subscription to a server.”

    You say this, but I think most of us have offloaded formerly local computing to a server of some kind:

    • Email organization, including folders and attachments, has mostly shifted from a desktop client that downloaded messages and deleted them from the server, to web, app, and even IMAP interfaces onto a canonical copy that lives on the server.
    • A huge chunk of users have shifted their productivity tasks (word processing, spreadsheets, presentations, image editing and design) to web-based software.
    • A lot of math functionality is honestly just easier to plug into web-based calculators for finance, accounting, and even the higher level math that Wolfram Alpha excels at.
    • Lots of media organization, from photos to videos to music, is now done in cloud-based searchable albums and playlists.

    All of these used to be local uses of computing, and they can now be accessed from low-powered smartphones. Things like Chromebooks give a user access to 50-100% of what they’d be doing on a full-fledged, high-powered desktop, depending on individual needs and use cases.



  • Cutting edge chip making is several different processes all stacked together. The nations that are roughly aligned with the western capitalist order have split up responsibilities across many, many different parts of this, among many different companies with global presence.

    The fabrication itself needs to tie together several different processes controlled by different companies. TSMC in Taiwan is the current dominant fab company, but it’s not like there isn’t a wave of companies closely behind them (Intel in the US, Samsung in South Korea).

    There’s the chip design itself. Nvidia, Intel, AMD, Apple, Qualcomm, Samsung, and a bunch of other ARM licensees are designing chips, sometimes with the help of ARM itself. Many of these leaders are still American companies developing the design in American offices. ARM is British. Samsung is South Korean.

    Then there’s the actual equipment used in the fabs. The Dutch company ASML is the most famous, as they have a huge lead on the competition in manufacturing photolithography machines (although old Japanese competitors like Nikon and Canon want to get back in the game). But there are a lot of other companies specializing in specific equipment found in those fabs. The Japanese company Tokyo Electron and the American companies Applied Materials and Lam Research are in almost every fab in the West.

    Once the silicon is fabricated, the actual packaging of that silicon into the little black packages to be soldered onto boards is a bunch of other steps with different companies specializing in different processes relevant to that.

    Plus, advanced logic chips aren’t the only type of chips out there. There are analog and signal-processing chips, power chips, and other useful sensor chips for embedded applications, where companies like Texas Instruments dominate on less cutting-edge nodes. And there are memory/storage chips, where the market is dominated by three companies: the South Korean Samsung and SK Hynix, and the American Micron.

    TSMC is only one part of a tightly integrated ecosystem that it depends on. Nor is it limited to Taiwan: it owns fabs that are starting production in the US, Japan, and Germany.

    China is trying to replace literally every part of the chain with domestic manufacturing. Some parts are easier to replace than others, but insourcing the whole thing is going to be expensive, inefficient, and risky. Time will tell whether those costs and risks are worth it, but there’s by no means a guarantee that they can succeed.


  • No, X-rays are too energetic.

    Photolithography is basically shining some kind of electromagnetic radiation through a stencil so that specific lines are etched into the top “photoresist” layer of a silicon wafer. The radiation causes a chemical change wherever a photon hits, and the stencil blocks the photons in a particular pattern.

    Photons are subject to interference from other photons (and even themselves) based on wavelength, so smaller wavelengths (which mean higher energy) can resolve smaller, finer features, which ultimately means smaller transistors and more of them in any given area of silicon.
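    The wavelength-versus-feature-size tradeoff is usually quantified with the Rayleigh criterion, CD = k1 · λ / NA. That formula isn’t from the comment above, and the k1/NA numbers below are typical published ballpark values rather than figures for any specific machine:

```python
# Rayleigh criterion for photolithography resolution:
#   CD = k1 * wavelength / NA
# where CD is the smallest printable feature, k1 is a process-dependent
# factor, and NA is the numerical aperture of the projection optics.

def min_feature_nm(wavelength_nm: float, k1: float, na: float) -> float:
    """Smallest printable feature size (nm) per the Rayleigh criterion."""
    return k1 * wavelength_nm / na

# Illustrative comparison: immersion DUV (193 nm) vs current EUV (13.5 nm).
duv = min_feature_nm(193.0, k1=0.35, na=1.35)
euv = min_feature_nm(13.5, k1=0.35, na=0.33)
print(f"DUV: {duv:.1f} nm, EUV: {euv:.1f} nm")  # DUV: 50.0 nm, EUV: 14.3 nm
```

    Shrinking the wavelength from 193 nm to 13.5 nm is what buys the big jump in resolution, which is why the secondary-radiation problem at even shorter (X-ray) wavelengths matters so much.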

    But once the energy gets too high, as with X-ray photons, a secondary effect ruins things. The photons carry leftover energy even after exposing the photoresist; that excess excites electrons, which emit their own secondary radiation that bounces around beneath the surface. The resulting boundary between photoresist that has been exposed and photoresist that hasn’t becomes blurry and fuzzy, which wrecks the fine detail.

    So much of the 20 years leading up to commercialized EUV machines was about finding the wavelength optimized for feature size: small enough to make really fine details, but with energy low enough not to trigger those secondary reactions.