• danielbln@lemmy.world
    1 year ago

    Hallucinations can be heavily reduced today by providing the LLM with ground truth. People use bare LLMs as knowledge databases, which is indeed prone to hallucination. Provide them with verified data on the side, however, and they are very, very good at sticking to the truth. I know, because we deploy these with clients to great effect.
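
    Concretely, "verified data on the side" mostly means stuffing retrieved, verified text into the prompt before the question. A minimal sketch, assuming the OpenAI Python client; the model name, documents, and question are illustrative stand-ins, not our actual pipeline:

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Verified reference material, e.g. pulled from a curated store
        # (hand-rolled here for illustration).
        documents = [
            "Kenya is the only African country whose English name starts with 'K'.",
            "Eswatini, formerly Swaziland, is a kingdom in Southern Africa.",
        ]
        context = "\n".join(f"- {doc}" for doc in documents)

        response = client.chat.completions.create(
            model="gpt-4",  # illustrative; any chat model works
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Answer using ONLY the context below. If the context "
                        "does not contain the answer, say you don't know.\n"
                        f"Context:\n{context}"
                    ),
                },
                {"role": "user", "content": "Which African countries start with 'K'?"},
            ],
        )
        print(response.choices[0].message.content)

    The stricter the system prompt and the better the retrieval, the less room the model has to make things up.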

    Image, music, video models are making great strides and are already part of various pipelines, all the way up to the big boy tools like Photoshop (generative fill, for example).

    The tech is being incorporated at a large scale by a lot of companies, from SME to megacorp. I don’t see it going away any time soon, even if it doesn’t improve from here on out (which it undoubtedly will).

      • QuaternionsRock@lemmy.world
        1 year ago

        GPT-4:

        In Africa, there are three countries that start with the letter “K”:

        1. Kenya
        2. Kingdom of Eswatini (although it’s often referred to simply as Eswatini)
        3. Kiribati

        However, it’s worth noting that Kiribati is not in Africa; it’s a Pacific island nation. So, only Kenya and the Kingdom of Eswatini in Africa start with the letter “K”, but most people just refer to Eswatini without the “Kingdom” prefix. If you meant countries solely with the prominent “K” at the start, then it’s just Kenya.

        Anecdotal evidence is useless because it can be contradicted with anecdotal evidence.

    • BastingChemina@slrpnk.net
      1 year ago

      The issue is that from time to time they still hallucinate confidently, and there is no way to detect whether they are right or not.

      • krakenx@lemmy.world
        1 year ago

        Hire one person to verify AI output instead of a dozen to make the content. If that one editor misses something, who cares? We live in a post-truth society where the media lies on purpose.