• 0 Posts
  • 12 Comments
Joined 2 months ago
Cake day: September 26th, 2024

  • Hardly any web developers had the deep skill set needed to pull it off.

    I’m personally of the opinion that it’s not so much a lack of talent that prevented graceful fallback from being adopted, but simply the amount of extra effort necessary to implement it properly.

    In my opinion, to do it properly you can’t make any assumptions about the browser your app is running on; you should never base anything on the reported user agent string. Instead, you need to test for each individual JavaScript, HTML, or sometimes even CSS feature, and design the experience around having a fallback for when that one singular piece of functionality isn’t present (see the sketch at the end of this comment). Otherwise you create a brand new problem where, for example, a forked Firefox browser with a custom user agent string doesn’t get recognized despite having the feature set to provide the full experience, and that person then gets screwed over.

    But yeah, that approach is incredibly cumbersome and time-consuming to code and test for. Even with libraries that help with properly detecting the capabilities of the browser, you’ll still need to implement granular fallbacks that work for your particular application, and that’s a lot of extra work.

    Add to that the fact that devs in this field are already burdened with supporting layouts and designs that must scale responsively to everything from a phone screen to a 100-inch TV, and it quickly becomes nearly impossible to actually finish any project on a realistic timeline. Doing it that way is a monumental task to undertake, and realistically it probably mainly benefits people who use NoScript or similar – so not a lot of people.
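
    To make that concrete, here’s a minimal sketch of the pattern I mean, using lazy-loading images as a hypothetical example (the probed feature is IntersectionObserver; nothing ever consults the user agent string):

    ```typescript
    // Hypothetical lazy-image loader: probe for the capability itself,
    // never the browser identity.
    function setupLazyImages(images: HTMLImageElement[]): void {
      if ("IntersectionObserver" in window) {
        // Full experience: load each image only as it scrolls into view.
        const observer = new IntersectionObserver((entries) => {
          for (const entry of entries) {
            if (entry.isIntersecting) {
              const img = entry.target as HTMLImageElement;
              img.src = img.dataset.src ?? "";
              observer.unobserve(img);
            }
          }
        });
        images.forEach((img) => observer.observe(img));
      } else {
        // Fallback: eagerly load everything. Works in any browser,
        // just less efficient.
        images.forEach((img) => { img.src = img.dataset.src ?? ""; });
      }
    }
    ```

    Now multiply that by every single feature your app depends on, and the cost becomes obvious.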


  • It’s one month later and I am back to reply:

    I don’t want to replace HTTP, or the web. But I also absolutely don’t want to build anything of greater complexity than what we have today. In other words, keep the web for what it’s doing now, but having an isolated app/container-based platform efficiently served through a browser might just be a good thing for everyone?

    5 years ago I was writing Rust code compiled to WebAssembly and then struggling to get it to run in a browser. I did that because I couldn’t write an efficient enough version of the algorithm I was following in JavaScript – probably on account of most things being objects. I got it to run eventually with decent enough performance, but it wasn’t fun gluing all that mess together (roughly the kind of glue sketched below). I think if there were a better delivery platform for WASM built into browsers, and maybe eventually mobile platforms, it would probably be better than today’s approach of serving cross-platform apps via HTTP.
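
    For flavor, this is roughly the shape of the JS-side glue involved (the module path and the alloc/compute exports here are hypothetical, and tools like wasm-bindgen generate much of this for you these days):

    ```typescript
    // Load a Rust-built WASM module and hand it a typed array through
    // linear memory. `alloc` and `compute` are assumed exports.
    async function runWasm(input: Float64Array): Promise<number> {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch("/algorithm.wasm"),
        {} // import object: anything the Rust side expects from the host
      );
      const exports = instance.exports as {
        memory: WebAssembly.Memory;
        // Assumed to allocate space for `len` f64s and return an
        // 8-byte-aligned byte offset into linear memory.
        alloc: (len: number) => number;
        compute: (ptr: number, len: number) => number;
      };
      // Copy the input into the module's linear memory, then call in.
      const ptr = exports.alloc(input.length);
      new Float64Array(exports.memory.buffer, ptr, input.length).set(input);
      return exports.compute(ptr, input.length);
    }
    ```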


  • OK, let’s try to narrow this down so our exchanges aren’t vague. To me, going from propellers to jet engines would have been “revolutionary”, but to you it may have just been incrementally expanding on the concept of a wing that keeps aircraft aloft.

    So for clarity, I’m not suggesting a complete replacement for HTTP. I don’t envision a world where the web as we know it gets fully “replaced”. But I do think it has outlived its purpose, and ultimately we should be seeking a better protocol for information exchange. Or, in other words, I don’t think formulating a solution that can provide privacy, integrity, etc. should be restricted to being built on HTTP just because that’s what we essentially consider the web to be today.


  • To keep a modicum of privacy and openness, the web is de-facto dependent on Firefox continuing to exist in the medium term. And it has to be paid for somehow.

    The web today has no privacy or openness. It has Gmail accounts, Russian propaganda bots, and AI SEO article spam. Does it matter which rose-tinted browser you care to view or interact with this shit through? I’m approaching 40, and the web has been such a fundamental part of my life that I’m sometimes bewildered about what I’d do without it. It is a sinking ship though, and at this point I’m much more interested in seeing alternatives to HTTP than in trying to save the mess we built on top of it.


  • To clarify, it doesn’t get disconnected. It’s that the laptop periodically doesn’t recognize that a storage device got plugged in, or that one was already plugged in when it was powered on.

    But no, I haven’t contacted them about it yet. I first need to check whether there are any dmesg/journalctl events worth following up on before contacting support, primarily because I don’t recall having issues like this when I had Windows installed, so I’m not convinced yet that it’s a hardware fault.



  • 11th gen Intel Framework 13 running Pop!_OS. I have many USB-related annoyances. For example, with their USB-A expansion cards, which they state support USB 3.2 Gen 2, I’m unable to get more than 30 MB/s. If I use the same device through a USB-A to USB-C adapter and a USB-C expansion card, I see 500–800 MB/s.

    I also have some weird issue where USB devices sometimes just don’t show up when plugged in, or when I boot with them plugged in. Re-inserting the device usually fixes it. I assumed it might be a hardware problem at first, but it happens on every port, with every device, regardless of whether it goes through a USB-A or USB-C card. Not sure what’s going on or how to go about debugging issues like this.


  • Yes, 30 FPS at best just makes my inputs feel laggy, but it also usually strains my eyes and has given me actual migraines. Bloodborne was the worst offender because of the need to focus on choppy boss animations.

    I’ve already answered the PC build question, but to summarize: any build comparable to a PS5 Pro that uses new components from brands that make reliable hardware typically costs over a grand USD. Also, most people I see recommending these builds don’t even bother including peripherals like a controller and kb+m in the cost. Not to mention that going the budget gaming PC route will also generally require additional time tinkering with graphics settings in each game to get adequate performance.

    Anyway, I’ve done this before: I had a higher-end PC in my living room hooked up to my TV a few years ago. The experience wasn’t terrible, but it also wasn’t as good as just having a console where everything is designed to be operated via controller. So honestly, I don’t see the point of paying extra money for something that seems like the worse option for me.

    I’ll be building a higher-end gaming PC with a 480 Hz OLED display in mind next year, but yeah, I won’t be using that from a couch.



  • You can use a controller on PC and also connect to this display with the same responsiveness and colors.

    I’ve done this in the past, when I had a desktop near my living room TV. I don’t these days, and the experience wasn’t good enough to justify rearranging my house rather than simply buying a console.

    Also, to get ahead of the people already twitching at the opportunity to inform me that I could build a dedicated PC just to keep next to my TV for gaming: sure, but the cost of building one with performance similar to the Pro, while using new components and avoiding AliExpress brands that may start a house fire one random evening, is over a grand at a minimum.

    I always thought consoles were for the exclusive games and to play with friends, not performance or graphics.

    Please, by all means, go email Sony and tell them not to bother with the PS6. Tell Nintendo to drop what they’re doing with the Switch 2. Us console gamers simply don’t care about performance or graphics upgrades. Surely they should have learned this by now.