The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”

  • d3Xt3r@lemmy.nz · 5 months ago (edited)
    In the footnotes they mention GPT-3.5. Their argument for not testing GPT-4 was that it was paid, so most users would be using 3.5. That's already factually incorrect, because the new GPT-4o (which they don't even mention) is now free. Finally, they didn't mention GPT-4 Turbo either, which is even better at coding than 4.

    • PM_ME_YOUR_ZOD_RUNES@sh.itjust.works · 5 months ago
      Anyone can use GPT-4 for free. Copilot uses GPT-4, and with a Microsoft account you can do up to 30 queries. I've used it a lot to create Excel VBA code for work and it's pretty good. Much better than GPT-3.5, that's for sure.

    • catloaf@lemm.ee · 5 months ago
      4 is free for a very small number of queries, then it switches back to 3.5. Or at least that's what happened to me the other day.