• Fushuan [he/him]@lemmy.blahaj.zone · 13 hours ago

    Fyi, “AI” has been used in medical research for decades. GenAI is the one that’s wonky. I’d be surprised and sceptical of any researcher who would suggest genAI as the star tool when there are so many predictive ML models that already work so well…

• earthworm@sh.itjust.works · 14 hours ago (edited)

    “It’s almost certain” that AI will reach that level eventually, one researcher told Nature.

    Semafor doing so much work trying to launder this into a story: “one scientist” in the original article becomes multiple scientists in their headline.

    This is the first of three waves of AI in science, says Sam Rodriques, chief executive of FutureHouse — a research lab in San Francisco, California, that debuted an LLM designed to do chemistry tasks earlier this year.

    And the one “scientist” seems to have switched tracks from doing actual research to doing capitalism.

• nyan@lemmy.cafe · 15 hours ago

    I’m pretty sure that you can find one researcher, somewhere, who will agree with anything you say, including that the weather is being affected by a war between Martians and the people living inside the hollow earth. Especially if you’re offering a large bribe to said researcher to make a statement about something outside their field while they’re somewhat drunk, and then mutilating their remark out of context via the process fondly known as journalism.

    In other words, “one researcher” predicting something is pretty much worthless.

• CosmoNova@lemmy.world · 18 hours ago

    Technically, machines make most discoveries possible these days, but I have yet to see an electron microscope receive the prize. I don't see how this is any different.

• Bronzebeard@lemmy.zip · 17 hours ago

    We’ve been shoving large amounts of data into machine learning algorithms for ages now. Still need people to interpret the outputs and actually test that the results are accurate.

• Buffalox@lemmy.world · 17 hours ago

    “Eventually” is a cheap cop-out. I have no doubt AI will eventually surpass us; technology simply develops far faster than evolution does. But we are not there yet.

• Fyrnyx@kbin.melroy.org · 19 hours ago

    Well, until AI finds a cure for cancer, solves climate issues and fixes the economy for everyone, it is still shit.

• Frezik@lemmy.blahaj.zone · 13 hours ago

    This one probably will happen.

    The reason is that there are certain fields where you have to sift through massive amounts of data to find the thing you’re looking for. This is an ideal task for machine learning. It’s not going to replace real scientists, and it sure as hell shouldn’t replace peer review. It’s a tool with a use.

    As one example, the longest known black hole jet was recently discovered using ML techniques: https://www.caltech.edu/about/news/gargantuan-black-hole-jets-are-biggest-seen-yet
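    The “sift massive data for rare candidates” workflow described above can be illustrated with a toy sketch. This is not the actual pipeline from the Caltech work, just a minimal stand-in (a simple z-score outlier filter over synthetic data) showing the general pattern: an automated scorer reduces millions of records to a short candidate list, which humans then review and verify.

    ```python
    import random
    import statistics

    # Synthetic "survey": 10,000 ordinary measurements plus one planted anomaly.
    random.seed(0)
    readings = [random.gauss(100.0, 5.0) for _ in range(10_000)]
    readings[1234] = 180.0  # the rare object we hope to find

    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)

    # Automated sift: flag anything more than 6 sigma from the mean.
    # The output is a short candidate list for human follow-up,
    # not a finished discovery.
    candidates = [(i, x) for i, x in enumerate(readings)
                  if abs(x - mean) / stdev > 6]

    print(candidates)
    ```

    The point of the sketch is the division of labor: the filter does the bulk reduction, and people still interpret the handful of survivors, exactly as the comment says.
    
    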

• phdepressed@sh.itjust.works · 17 hours ago

    Eventually we’ll make AGI instead of this LLM bullshit. Assuming we don’t destroy ourselves first.