Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 322 Comments
Joined 2 years ago
Cake day: March 3rd, 2024



  • I see your point, but that was exactly a coping mechanism for something that didn’t have a solution. Is assisted suicide a modern version, a way to deal with an unsolvable problem? (And I’m all for it, btw; I’m just comparing the goals of the two.)

    I don’t think those are the same as finding ways to avoid grief, which is what replacing the lost individual is really about. I’m sure people in the therapy field have already explored whether there’s any benefit in prolonging it.

    But regarding the claim: I don’t even know how far the cloning has gone, or how well it’s been accepted. But I have heard that immediately getting another pet to replace the loss isn’t a good thing to do, for similar reasons on both the owner’s and the pet’s side, and the cloning is worse because it pretends to be the same animal (in most cases; I can’t speak for everyone). That’s how it was sold: getting your pet back. I can’t see how that becomes a better route through grief when there isn’t one, and it might turn to despair or anger when the new version of the pet doesn’t act the same as the old.

    But you’re right, there’s no data, it’s just a gut feeling based on my own experiences that I’m still dealing with in some respects.

    If anything, an AI that goes only as far as the visual is not a huge jump from watching old video of them. It’s a bit odd, but I can accept that times change and some things become normal that weren’t. An AI that responds back as if it were the person crosses the line I’ve been talking about. Some people think ChatGPT, with all its flaws, is still a person, so they’ll fall for this being their loved one speaking from the grave, and I still hold that living in that fantasy is not healthy for the mind.


  • From a science POV it makes sense as something to pursue, even just as a renewable biofuel. Algae grows fast, it’s where oil comes from, it’s a biological “fix”. It’s perfect. Except it didn’t work nearly as well as hoped.

    I looked into it a long time ago as a “solution” for how to best pull carbon out of the air and sequester it. Algae farms over deep-water areas, grown and culled, with the dead carbon sunk deep to stay out of the loop. Sounds perfect, doesn’t it?

    But in both scenarios there are so many costs and variables to consider that get left out when proponents are selling it. Some are just the “forgotten” costs of running the process, which pollutes on its own and takes energy (which comes with its own emissions). Some are effects outside the process that damage the environment in other ways. And then there’s the cost and impact of feeding the algae itself; it won’t grow in a vat of water alone. So many things change the net result, as the rough sketch below shows. And the case for fuel (which doesn’t lock the carbon away, so it’s no help against the carbon already in the air) assumes the fuel percentage per weight would be high enough to justify the rest of the costs. Exxon figured out it was not, while selling it as a miracle.
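    To make the “net result” point concrete, here’s a minimal back-of-envelope sketch in Python. Every number in it is a hypothetical placeholder, not a real figure from any algae project; the only point is how fast the headline capture shrinks once you subtract the emissions from running and feeding the process.

```python
# Back-of-envelope net-carbon sketch for an algae sequestration scheme.
# Every number below is a hypothetical placeholder, not measured data.

gross_capture_t = 100.0   # tonnes CO2 fixed by the algae per harvest cycle

# "Forgotten" costs: emissions from running the process itself.
pumping_energy_t = 15.0   # tonnes CO2 from pumps, circulation, harvesting
nutrient_supply_t = 20.0  # tonnes CO2 from producing and shipping the feed
processing_t = 10.0       # tonnes CO2 from drying, culling, sinking biomass

net_capture_t = gross_capture_t - (
    pumping_energy_t + nutrient_supply_t + processing_t
)

print(f"Gross capture: {gross_capture_t:.0f} t CO2")
print(f"Net capture:   {net_capture_t:.0f} t CO2 "
      f"({net_capture_t / gross_capture_t:.0%} of the headline number)")

# If the biomass is burned as biofuel instead of sunk, the fixed carbon
# goes right back into the air, so the sequestered term drops to roughly
# zero and the scheme at best displaces some new fossil emissions.
```

    With these made-up inputs the “sequestered” figure is already well below the gross number before any money costs enter the picture; plug in real measurements and the sign of the result can flip entirely.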






  • There are the comfortable, the wealthy, and the super rich. The first group still looks at money the way the rest of the population does, while the ultra wealthy (the top 0.1% or higher) use their assets for power. They almost never have to concern themselves with price tags; they’re irrelevant. What their influence allows them to do is far more important. So yes, the richest live an expensive lifestyle, but they don’t care.

    I agree with others on the middle-class falsehood. You either have enough assets and income to live well, or you don’t. At this point many millionaires are not that well off either, because their expenses put them in the same situation poorer people have to deal with. Maybe they’re not one paycheck away from disaster, but their buffer zone isn’t as large as they’d like in bad times. Likewise, there are “poor” people who manage their budgets well enough to be comfortable, but because they don’t have much, they’re at the mercy of everything around them, and that comfort can disappear quickly.

    The rich line is where you can lose an entire business, or a house, or some other large material thing, and the money part doesn’t faze you.





  • I could see you not reacting well to the gift and them being upset, but then it turned into something more than that. They made the mistake of giving you something you say they knew you don’t like. You held your line, and rather than let it sit for a bit, you insisted it had to go. Now you’re both mad/upset over a gift. Doesn’t make sense, does it? Even more so if the object isn’t worth much even new. Who is hurt more by this? You’re confused by their reaction, but were you actually hurt by the act of giving, even if the gift was unwanted? The core thing to ask yourself is why it became an argument, and whether it was worth it. It doesn’t even matter who was right.





  • One issue is that AI in its various forms makes it far easier than before to use such a tool without understanding its limitations. Garbage in, garbage out still applies, but if the user can’t tell the difference, the garbage gets spread as quality work. This has led to the term “AI slop”, which has since morphed into a general “I don’t like this post” label.

    Another, bigger issue is the origin of the training data, which has unfortunately tainted the good uses for these tools (when used within their limits, as stated before). I agree with this concern, but once LLMs and related AI became freely open to the public, that ship sailed. Even if a company could prove its AI was trained only on legitimately obtained information (which could make it more limited than the ones out there), would anyone believe them?

    A related issue on training would be how the AI was trained (ignoring the problem of the source of the data). The very fact that LLMs were modeled to give proper and positive answers only leads to the conclusion that it has long moved from a research project to find AGI into a marketing ploy to give the best impression on the ignorant public to profit from. This gets into the “AI slop” area of seemingly good results to the average user when it is not, but rather than slop it’s deception.