A group of Stanford researchers found that large language models can propagate false race-based medical information.

  • stevedidWHAT@lemmy.world
    11 months ago

    Did you read the actual paper that was published in Nature? Kinda different story.

    In fact, the paper's own abstract says there are only “some instances” where GPT does the racist thing. Yet when I ask the exact same question numerous times, I get:

    [screenshot from an OpenAI chat session where the AI, every time it's asked, responds that race has nothing to do with these measurements]

    How odd. It’s almost like the baseline model doesn’t actually do that shit unless you fuck with the top_p or temperature settings, and/or supply specific system-level prompts.

    I’ll tell you what, I find it real amusing that all these claims come out of the woodwork years and years after we’ve been screaming about data bias since at least the early 2000s. AI isn’t the bad guy; the people misconfiguring it to purposefully scare people off from the tech are.

    Fuck that.

    Edit: matter of fact, they don’t even mention what their settings were for these values, which are fucking crucial to knowing how you even ran the model to begin with. For example, top_p is a float from 0–1 that restricts sampling to the smallest set of most-likely tokens whose cumulative probability reaches that value (nucleus sampling). Temperature runs from 0–2 and dictates how cold and deterministic the answers are versus how hot and creative/loosely related the outputs get.
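
    To make the point concrete, here’s a minimal toy sketch of what those two knobs actually do to a next-token distribution. The logit values are made up for illustration; the paper reports neither its temperature nor its top_p, which is the whole complaint.

    ```python
    import math

    def apply_sampling_settings(logits, temperature=1.0, top_p=1.0):
        """Toy illustration of temperature + top_p (nucleus) filtering.
        The logits here are hypothetical, not any real model's output."""
        # Temperature: divide logits before the softmax. Low temperature
        # sharpens the distribution (cold/deterministic); high temperature
        # flattens it (hot/creative).
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]

        # top_p: keep only the smallest set of most-likely tokens whose
        # cumulative probability reaches top_p, then renormalize.
        order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
        kept, cum = [], 0.0
        for i in order:
            kept.append(i)
            cum += probs[i]
            if cum >= top_p:
                break
        mass = sum(probs[i] for i in kept)
        return {i: probs[i] / mass for i in kept}
    ```

    With a low temperature and a tight top_p, unlikely tokens get cut out entirely; crank both up and the tail stays in play — which is why reporting these settings matters when you claim a model “sometimes” says something.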

      • stevedidWHAT@lemmy.world
        11 months ago

        Yeah, because no paper has ever slipped past the watchful eye of Nature 🙄🙄

        Great contribution to the conversation, thanks.