If software worked and was good in 2005 on PCs with 2 GB of RAM and CPUs/GPUs vastly worse than modern ones, then why not write modern software the way that software was written? Why not leverage powerful hardware when needed, but keep resource demands low the rest of the time?

What are the reasons for which it might not work? What problems are there with this idea/approach? What architectural (and other) downgrades would this entail?

Note: I was not around at that time.

  • Ephera@lemmy.ml · 4 hours ago

    I’ve seen it argued that the best way to create lightweight software is to give devs old hardware to develop on.

    Which, yeah, I can see that. The problem is that as a dev, you might have some generic best practices in your head while coding, but beyond that, you don’t really concern yourself with performance until it becomes an issue. And on new hardware, you won’t notice the slowness until it’s already pretty bad for those on older hardware.

    But then, as the others said, there’s little incentive to actually give devs old hardware. In particular, it costs a lot of money to have your devs waiting for compilation on older hardware…

  • tal@lemmy.today · 18 hours ago

    then why not write modern software the way that software was written?

    Well, three reasons that come to mind:

    First, because it takes more developer time to write efficient software, so some of what developers have done is use new hardware not to get better performance, but to get cheaper software. If you want really extreme examples, read about the kind of insanity that went into making video games for the first three generations or so of video game consoles, on extremely limited hardware. I’d say that in most cases, this is the dominant factor.

    Second, because to a limited degree, the hardware has changed. For example, I was just talking with someone complaining that Counter-Strike 2 didn’t perform well on his system. Most systems today have many CPU cores, and heavyweight video games and some other CPU-intensive software will typically seek to take advantage of those. CS2 apparently only makes much use of one or two cores. Go back to 2005, and the ability to saturate more cores was much less useful.
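
    To make that concrete, here's a rough sketch (not from the game, obviously) of what "taking advantage of those cores" looks like in code, using Node's worker_threads. The work function and numbers are made up, and it assumes the file has been compiled to plain CommonJS JavaScript before running:

    ```ts
    // Sketch only: splitting a CPU-bound job across every core with Node's
    // worker_threads. Assumes this file has been compiled to CommonJS JS,
    // so __filename points at the runnable .js file.
    import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";
    import { cpus } from "node:os";

    // Stand-in for "real" work: a tight numeric loop that pegs one core.
    function crunch(start: number, end: number): number {
      let acc = 0;
      for (let i = start; i < end; i++) acc += Math.sqrt(i);
      return acc;
    }

    if (isMainThread) {
      const total = 200_000_000;
      const cores = cpus().length;            // 8, 16, 24... on modern desktops
      const chunk = Math.ceil(total / cores);

      const parts = [...Array(cores).keys()].map(
        (c) =>
          new Promise<number>((resolve, reject) => {
            // One worker per core, each handed its own slice of the range.
            const w = new Worker(__filename, {
              workerData: { start: c * chunk, end: Math.min((c + 1) * chunk, total) },
            });
            w.on("message", resolve);
            w.on("error", reject);
          })
      );

      Promise.all(parts).then((sums) =>
        console.log("total:", sums.reduce((a, b) => a + b, 0))
      );
    } else {
      // Worker side: do the assigned slice and report back.
      const { start, end } = workerData as { start: number; end: number };
      parentPort?.postMessage(crunch(start, end));
    }
    ```

    A 2005-style program would just call crunch(0, total) once on the single core it had; that version isn't wrong, it just leaves most of a modern CPU idle, which is roughly the complaint about CS2 above.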

    Third, in some cases, functionality is present that you might not immediately appreciate. For example, when I get a higher-resolution display in 2025, text typically doesn't become tiny; instead, it becomes sharper. In 2005, most text was rendered at fixed pixel sizes. Go back earlier and most text wasn't antialiased, and go back further still and on-screen fonts were mostly bitmap fonts, not vector. Each of those jumps made text rendering more compute-expensive, but also made it look nicer. And that's for something as simple as drawing "hello world" on the screen.
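
    To put rough numbers on the "sharper, not tinier" part: resolution-independent text keeps the physical size fixed and spends the extra pixels on detail. A tiny sketch (72 points per inch is the standard convention; the DPI values are just example displays):

    ```ts
    // A 12 pt glyph at different pixel densities. A point is 1/72 of an inch,
    // so the physical size stays constant while the pixel budget grows with DPI.
    function glyphHeightPx(pointSize: number, dpi: number): number {
      return pointSize * (dpi / 72);
    }

    for (const dpi of [96, 144, 220]) {   // ~2005 panel, 1.5x HiDPI, "Retina"-class
      console.log(`${dpi} dpi -> ${glyphHeightPx(12, dpi).toFixed(0)} px tall`);
    }
    // 96 dpi -> 16 px, 144 dpi -> 24 px, 220 dpi -> 37 px: same size on screen,
    // several times as many pixels to rasterize, hence more work and crisper text.
    // A 2005-style "16 px" font, by contrast, stays 16 px and just shrinks physically.
    ```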

  • zxqwas@lemmy.world · 17 hours ago

    Experienced SWEs get promoted into management and write very little code themselves.

    Also, all your customers have modern hardware, and when the user has spent 15 minutes pecking at the keyboard to enter the data, it doesn't matter that it took 10 seconds to load the program and 10 seconds to do the calculations instead of 1 second. You don't get to charge more, so you don't tell your new SWEs to make it faster, so you never get SWEs experienced in making things faster.

  • JustARegularNerd@lemmy.dbzer0.com · 21 hours ago

    I’m not an experienced developer, I’ve just done stuff in Java and Python before, so take what I say with a grain of salt.

    If we’re strictly talking interfaces, most modern software is a web browser showing you an interface made in HTML. Common ones that come to mind include Discord, Microsoft Teams and Spotify. You can usually tell from how hovering over action buttons always results in a pointing-hand cursor, and from how sluggishly these apps run even on decent hardware. This is often done with Electron, and these apps are often called Electron apps.

    The problem with this is that you're no longer running a native application with minimal overhead; you're running a whole-ass web engine.
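
    For a sense of what that means, here's roughly what the "desktop" side of such an app boils down to: a minimal Electron-style main process (file names are hypothetical, and this is a sketch of the pattern, not any particular app's code). Everything the user actually sees is the HTML/CSS/JS it loads into a bundled Chromium window.

    ```ts
    // main.ts -- hypothetical minimal Electron main process.
    // The app's entire UI is web content; this shell just opens a Chromium
    // window around it, and every copy of the app ships its own Chromium + Node.
    import { app, BrowserWindow } from "electron";

    function createWindow(): void {
      const win = new BrowserWindow({ width: 1200, height: 800 });
      // The whole interface is an ordinary web page, same as a browser build.
      win.loadFile("index.html");
    }

    app.whenReady().then(createWindow);

    // Quit when all windows are closed (except on macOS, per platform convention).
    app.on("window-all-closed", () => {
      if (process.platform !== "darwin") app.quit();
    });
    ```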

    This is (probably, IMO) because it's much easier to hire a frontend web developer and have them do up an interface than to have a dedicated desktop developer build it with whatever windowing library, and it also makes it easy to port the app to many systems (including mobile), given that HTML5, CSS and JS can all be made to work on any platform that can run a web engine.

    I also imagine that it makes the user interface consistent with the company's brand, rather than with your operating system. If you look at Discord on Windows, macOS and Linux, it looks almost identical on all three, differing only where necessary, such as the top window border. Meanwhile, if you look at LibreOffice (a native application) on Windows, macOS and Linux, the window styling is completely different on each system.

    Update: I realise after posting that I never explained performance considerations outside of the interface, but I hope that briefly going into interfaces already gives a good idea for software in general. If you're talking about games, that's a whole separate conversation.

  • dgdft@lemmy.world · 19 hours ago

    I’m a big fan of this approach to software; it works. PHP5, cgi-bin scripts, perl spaghetti, etc. are lit for hobby work.

    The tradeoff is that you have to pay a lot of attention to do things securely, and you have to hand roll a lot more of your codebase instead of relying on external packages.
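
    As a made-up example of what "hand roll it and mind security yourself" looks like in practice, here's the kind of thing a framework would otherwise do for you, sketched with Node's built-in http module rather than actual PHP5 or CGI:

    ```ts
    // Sketch: a framework-free request handler. The escaping helper is hand-rolled;
    // forget to call it and you've shipped an XSS hole instead of a greeting.
    import { createServer } from "node:http";

    // Minimal HTML escaping, written by hand because there's no template engine.
    function escapeHtml(s: string): string {
      return s
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;");
    }

    createServer((req, res) => {
      const url = new URL(req.url ?? "/", "http://localhost");
      const name = url.searchParams.get("name") ?? "world";
      res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
      // User input goes straight into markup, so it must be escaped right here.
      res.end(`<h1>Hello, ${escapeHtml(name)}!</h1>`);
    }).listen(8080);
    ```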

  • lennybird@lemmy.world · 20 hours ago

    As a general rule, you can have versatility or efficiency, but not both.

    You can have legible code or efficient code, but not both.

    Memory is comparatively cheap, which means labor hours are comparatively more costly; so why spend time optimizing when you can throw together bloated boilerplate frameworks and packages?

    • Ephera@lemmy.ml · 14 hours ago

      It should be said, though, that this really is just a general rule. Some frameworks and programming languages are definitely less efficient than others without providing more versatility.

  • bulwark@lemmy.world · 21 hours ago

    Because modern hardware includes instructions and dedicated circuitry that old software couldn't take advantage of. For example, H.264 video decoding has dedicated support built right into the silicon to speed up playback, and that support isn't there on older hardware.
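
    For a sense of how software actually asks for that silicon, here's a small sketch using the WebCodecs API available in Chromium-based browsers (the codec string and frame size are just example values):

    ```ts
    // Sketch: asking whether an H.264 stream can be decoded on dedicated hardware,
    // via the WebCodecs API. "avc1.640028" (H.264 High profile) is just an example.
    async function checkHardwareH264(): Promise<void> {
      const { supported } = await VideoDecoder.isConfigSupported({
        codec: "avc1.640028",
        codedWidth: 1920,
        codedHeight: 1080,
        hardwareAcceleration: "prefer-hardware", // only report support if the hardware path exists
      });
      console.log(supported ? "hardware H.264 decode available" : "no hardware decode for this config");
    }

    checkHardwareH264();
    ```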

    That being said, it really depends on the use case, and a lot of "modern" software is incredibly bloated and could benefit from not being designed as if RAM were an unlimited resource.