• lloram239@feddit.de · 1 year ago

    I think we might end up with the Microsoft/Apple/Google situation all over again. While it’s easy to build an AI, having to jump between AIs for each and every task is no fun. I think the one that wins the golden goose is the one that manages to build a complete OS with AI at its core, i.e. instead of a Unix shell, you just have a ChatGPT-like thing sitting there that can interact with all your data and other software in a safe and reliable manner. Basically the computer from Star Trek, where you just tell it what you want and it figures out how to get it.
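
    As a toy illustration of that “LLM instead of a shell” idea (not any real product or API — `ask_llm` is a made-up placeholder, and the confirm-before-run step stands in for the “safe and reliable” part):

    ```python
    # Toy sketch of an LLM-as-shell loop. ask_llm is a placeholder, not a real API.
    import subprocess

    def ask_llm(prompt: str) -> str:
        # Placeholder for a real model call; it just echoes the request back
        # as a harmless command so the loop can be tried without any API key.
        request = prompt.rsplit("\n", 1)[-1]
        return f"echo 'would handle: {request}'"

    def nl_shell() -> None:
        while True:
            request = input("what do you want? > ").strip()
            if request in ("quit", "exit"):
                break
            command = ask_llm(
                "Translate this request into a single POSIX shell command, "
                "output only the command:\n" + request
            )
            # Safety gate: show the proposed command and let the user approve it.
            print(f"proposed: {command}")
            if input("run it? [y/N] ").strip().lower() == "y":
                subprocess.run(command, shell=True)

    if __name__ == "__main__":
        nl_shell()
    ```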

    That others can spin up their own LLM won’t help here, as whoever gets to be the default AI that pops up when you switch on your computer will be the one that has the control and can reap the benefits.

      • lloram239@feddit.de · 1 year ago

        Yes, but whoever overcomes those problems will be the next Microsoft/Apple/Google (or get rich by getting bought by one of them). I think a large paradigm shift in how we do computing is unavoidable; LLMs are way too powerful to just be left as chatbots.

        • nossaquesapao@lemmy.eco.br · 1 year ago

          Do you think these problems are solvable, and not inherent characteristics? I don’t know, I expect to see computers with high-performance AI modules, but not a fully AI-driven computer.

          • lloram239@feddit.de · 1 year ago

            Just have the LLM output verifiable scripts instead of manipulating the data directly itself, and put the data under version control so the AI can undo changes. All pretty doable, though maybe tricky to retrofit into old apps. A rough sketch of that pattern is below.
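
            A rough sketch of what that could look like, assuming the data directory is already a git repository; the function names and the idea that the LLM-written script has already been reviewed/verified are assumptions for illustration, not a real tool:

            ```python
            # Sketch: the LLM only emits a script; the data lives in a git repo,
            # and every run is snapshotted so its changes can be undone.
            import subprocess
            from pathlib import Path

            DATA_DIR = Path("data_repo")  # assumed to be an existing git repository

            def git(*args: str) -> None:
                subprocess.run(["git", "-C", str(DATA_DIR), *args], check=True)

            def run_llm_script(script_text: str, name: str = "llm_task.py") -> None:
                # script_text is assumed to have been verified/reviewed already.
                script_path = DATA_DIR / name
                script_path.write_text(script_text)

                # Snapshot before the script touches anything, so undo is trivial.
                git("add", "-A")
                git("commit", "--allow-empty", "-m", f"before running {name}")

                # The script only operates inside the versioned data directory.
                subprocess.run(["python", name], cwd=DATA_DIR, check=True)

                # Record whatever the script changed as its own commit.
                git("add", "-A")
                git("commit", "--allow-empty", "-m", f"changes made by {name}")

            def undo_last_change() -> None:
                # Roll the data back to the snapshot taken before the script ran.
                git("reset", "--hard", "HEAD~1")
            ```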