• kromem@lemmy.world
    9 months ago

    In part this is because the SotA model is by far GPT-4, but OpenAI has pigeonholed it into being a ‘chatbot.’

    The earliest versions of it pre-release when it was being incorporated into Bing were amazing. Probably the most impressive thing I’ve seen in tech.

    But it was too human-like and was freaking users out, so rather than wait for the market to adjust, they did extensive fine-tuning to make a large language model trained to predict human output less likely to produce human-like output.

    The problem is that they don’t have a scalpel for this sort of thing, and they ended up with a model that’s very good as a chatbot within a certain scope but significantly impaired at some of the outside-the-box capabilities visible early on.

    And because it’s the SotA, everyone is now using it to fine-tune their own models.

    So the entire industry is being set back in practical applications outside of “kind of boring chatbot.”