• Greg Clarke@lemmy.ca · 20 points · 14 days ago

    Generative AI is a tool; sometimes it's useful, sometimes it's not. If you want a recipe for pancakes, you'll get there a lot quicker using ChatGPT than using Google. It's also worth noting that you can ask tools like ChatGPT for their references.

    • WhyJiffie@sh.itjust.works · 25 points · 14 days ago

      It's also worth noting that you can ask tools like ChatGPT for their references.

      Last time I tried that, it made up links that didn't work, and then it admitted that it can't reference anything because it doesn't have access to the internet.

      • Greg Clarke@lemmy.ca · 6 points · 13 days ago

        That's my point: if the model returns a hallucinated source, you can probably disregard its output, but if the model provides an accurate source, you can verify its output. Depending on the information you're researching, this approach can be much quicker than using Google. Out of interest, have you experienced source hallucinations on ChatGPT recently (last few weeks)? I haven't experienced source hallucinations in a long time.

        • 31337@sh.itjust.works · 3 points · 13 days ago

          I use GPT (4o, premium) a lot, and yes, I still sometimes experience source hallucinations. It will also sometimes hallucinate incorrect things that aren't in the source. I get better results when I tell it not to browse; the large context from processing web pages seems to hurt its "performance." I would never trust gen AI for a recipe. I usually just use Kagi to search for recipes, with it set to promote results from recipe sites I like.

    • werefreeatlast@lemmy.world · 8 points · 13 days ago

      2 lb of sugar
      3 teaspoons of fermented gasoline, unleaded
      4 loaves of stale bread
      35 ml of glycol
      Mix it all up and add 1 L of water.

      • Free_Opinions@feddit.uk · 3 points · edited 13 days ago

        Do you also drive off a bridge when your navigator tells you to? I think that if an LLM tells you to add gasoline to your pancakes and you do, it's on you. Common sense doesn't seem very common nowadays.

        • werefreeatlast@lemmy.world · 1 point · 13 days ago

          Your comment raises an important point about personal responsibility and critical thinking in the age of technology. Here’s how I would respond:

          Acknowledging Personal Responsibility

          You’re absolutely right that individuals must exercise judgment when interacting with technology, including language models (LLMs). Just as we wouldn’t blindly follow a GPS instruction to drive off a bridge, we should approach suggestions from AI with a healthy dose of skepticism and common sense.

          The Role of Critical Thinking

          In our increasingly automated world, critical thinking is essential. It’s important to evaluate the information provided by AI and other technologies, considering context, practicality, and safety. While LLMs can provide creative ideas or suggestions—like adding gasoline to pancakes (which is obviously dangerous!)—it’s crucial to discern what is sensible and safe.

          Encouraging Responsible Use of Technology

          Ultimately, it’s about finding a balance between leveraging technology for assistance and maintaining our own decision-making capabilities. Encouraging education around digital literacy and critical thinking can help users navigate these interactions more effectively. Thank you for bringing up this thought-provoking topic! It’s a reminder that while technology can enhance our lives, we must remain vigilant and responsible in how we use it.

          Related

          What are some examples…lol