• quirzle@kbin.social · 3 points · 7 months ago

      Services running in GCP aren’t built into the phone, which is kinda the main point of the statement you took issue with.

      • sciencesebi@feddit.ro · 1 point · 7 months ago

        What does that have to do with CACHING? That’s client-server.

        No clue what you’re talking about.

        • evo@sh.itjust.works · 1 point · 7 months ago

          I can’t find a single production app that uses MLC LLM, because of the reasons I listed earlier (like the multi-GB size of any model that isn’t garbage).

          Qualcomm’s announcement is a tech demo, and they promised to actually do it next year…

          • sciencesebi@feddit.ro · 1 point · 7 months ago

            Who said anything about production or non-garbage? We’re not talking about quality of responses or spread. You can use distilled RoBERTa for all I give a fuck. We’re talking about whether they’re the first. They’re not.

            Are they the first to embed an LLM in an OS? Yes. A model with over x Bn params? Maybe, probably.

            But they ARE NOT the first to deploy gen AI on mobile.

            • evo@sh.itjust.works · 1 point · 7 months ago

              You’re just moving the goalposts. I ran an LLM on-device in an Android app I built a month ago. Does that make me the first to do it? No. They are the first to production with an actual product.
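
              For reference, a rough sketch of what on-device generation in an Android app can look like. This assumes Google’s MediaPipe LLM Inference API (the tasks-genai library) rather than MLC LLM, and the model path and token limit below are placeholders, not values from any real app:

                  // Kotlin sketch: on-device text generation via MediaPipe's LLM Inference task.
                  // Assumes the com.google.mediapipe:tasks-genai Gradle dependency and a model
                  // file already present on the device; the path below is a placeholder.
                  import android.content.Context
                  import com.google.mediapipe.tasks.genai.llminference.LlmInference

                  fun runLocalLlm(context: Context, prompt: String): String {
                      val options = LlmInference.LlmInferenceOptions.builder()
                          .setModelPath("/data/local/tmp/llm/model.bin") // placeholder model path
                          .setMaxTokens(512)                             // cap on prompt + response tokens
                          .build()

                      // Everything below runs on the phone itself; no network call is made.
                      val llm = LlmInference.createFromOptions(context, options)
                      return llm.generateResponse(prompt)
                  }

              The call itself is simple; the earlier point about multi-GB model files is what makes shipping this in a real product the hard part.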