I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.

They send me documents they “put together” that are clearly ChatGPT-generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.

I feel like I am living in a nightmare.

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 6 hours ago

    We had a discussion about AI at work. Our consensus was that it doesn’t matter how you want to do your work. What matters is the result, not the process. Are you writing clean code and finishing tasks on time? That’s the metric. How you get there is up to you.

    • mavu@discuss.tchncs.de · 5 hours ago

      While leaving the decision to individuals sounds like a good idea, in the long term it is quite dumb.

      • if you let an LLM solve your software dev problems, you learn nothing. You don’t get better at handling that kind of problem, you don’t get faster, and you don’t build the experience of spotting the same problem again and having a solution ready.

      • you don’t train junior devs this way, and in 20 years there will be (or would be, if the bubble doesn’t pop first) a massive need for skilled software developers. The same goes for specialists in other fields; better pray that medical doctors handle their profession differently…

      • do you really enjoy tweaking prompts, dealing with “lying” LLMs, and the occasional deleted hard drive? Is that really what you want to do as a job?

      • (bonus point) Would your company be OK with someone paying a remote worker a fraction of their salary to do their tasks while doing nothing themselves? I doubt it. So, apparently, it does matter how the work gets done.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 5 hours ago (edited)

        Old enough to remember people making these same arguments about writing in anything but assembly, about garbage collection, and so on. Technology moves on, and every time there’s a new way of doing things, the people who invested time in the old way end up upset. You’re just engaging in a moral panic here.

        It’s also very clear that you haven’t used these tools yourself, and you’re just making up a straw man workflow that is divorced from reality.

        Meanwhile, your bonus point has nothing to do with technology itself. You’re complaining about how capitalism works.

        • zbyte64@awful.systems · 4 hours ago

          All the technologies you listed behave deterministically, or at least predictably enough that we generally don’t have to worry about surprises from that abstraction layer. Technology does not just move on; practitioners need to actually find it practical beyond the next project that satisfies the shareholders.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 4 hours ago

            Again, you’re discussing tools you haven’t actually used, and you clearly have no clue how they work. If you had used them, you would realize that agents can work against tests, which act as a contract they have to fulfill. I use these tools on a daily basis and I have no idea what surprises you’re talking about. As a practitioner, I find these things plenty practical.
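
            To make “tests as a contract” concrete, here is a rough sketch of that workflow: a human writes (or at least reviews) the tests up front, and the agent keeps regenerating its implementation until they all pass. The `slug` module and `slugify` function below are made-up illustrations, not code from any real project.

            ```python
            # Made-up example: a pytest file acting as the contract.
            # A human writes or reviews these assertions first; the agent's generated
            # implementation of `slugify` is only accepted once pytest reports them all passing.

            from slug import slugify  # hypothetical module the agent is asked to produce


            def test_lowercases_and_hyphenates():
                assert slugify("Hello World") == "hello-world"


            def test_collapses_repeated_separators():
                assert slugify("one  --  two") == "one-two"


            def test_empty_input_returns_empty_string():
                assert slugify("") == ""
            ```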

            • zbyte64@awful.systems · 3 hours ago

              I’ve literally integrated LLMs into a materials optimization routine at Apple. It’s dangerous to assume what strangers do and do not know.