• rufus@discuss.tchncs.de · 6 points · 7 months ago

    Isn’t that super slow? I mean, couldn’t that be slower than using llama.cpp on the CPU? (If you always have to transfer layers between SSD and RAM and over the PCIe bus into the GPU…)
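    For reference, a minimal sketch of what that per-layer shuffling looks like, in a PyTorch style; this isn’t the article’s (or llama.cpp’s) actual code, and the layer count and file names are made up:

    ```python
    import torch

    NUM_LAYERS = 32  # hypothetical model depth

    def load_layer(i: int) -> torch.nn.Module:
        # Assumes each layer was saved separately, e.g. torch.save(layer, f"layer_{i}.pt")
        return torch.load(f"layer_{i}.pt", map_location="cpu")  # SSD -> RAM

    @torch.no_grad()
    def forward_streamed(hidden: torch.Tensor) -> torch.Tensor:
        hidden = hidden.to("cuda")
        for i in range(NUM_LAYERS):
            layer = load_layer(i).to("cuda")  # RAM -> PCIe -> GPU
            hidden = layer(hidden)            # compute while resident in VRAM
            del layer                         # drop it to make room for the next layer
            torch.cuda.empty_cache()
        return hidden
    ```

    Every generated token repeats that whole loop, which is where the slowness would come from.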

    • tinwhiskers@lemmy.world (OP) · 6 points · 7 months ago

      I expect so, but as we start to get more agents capable of doing jobs without hand-holding, there are some jobs where time isn’t as important as ability. You could potentially run a very powerful model on a GPU with 24GB of memory.

      • abhibeckert@lemmy.world · 2 points · 7 months ago

        OK but the article implies that this approach saves money. I don’t think it does that at all.

        You know what’s cheaper than a GPU with 120GB of RAM? Renting one, for a split second. You can do that for like 1 cent.
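        Back-of-the-envelope, with an assumed hourly rate since exact prices vary:

        ```python
        hourly_rate = 2.00               # USD/hour for a big cloud GPU, assumed
        per_second = hourly_rate / 3600  # ~$0.00056 per second
        print(f"~{per_second * 100:.2f} cents per second of GPU time")
        ```

        So even a few seconds of inference comes in around a cent.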

        • tinwhiskers@lemmy.world (OP) · 1 point · 7 months ago

          Yeah, I’m not sure how they get that, but maybe: if you want to run a model in-house, as many people would prefer, you can run much more capable models on consumer-grade hardware and make savings there compared to needing more expensive kit. Many people already have decent hardware, and this extends what they can run before they need to fork out for new hardware.

          I know, I’m guessing.

        • rufus@discuss.tchncs.de · 1 point · 7 months ago

          Or just don’t bother with the GPU at all and get a much cheaper computer/cloud instance without one; do it on the CPU if you’re going to pipe everything through RAM anyway. Tests with llama.cpp (at least) have shown that it’s bound by RAM (bus width and speed). Even my old 4-core Xeon can do the matrix multiplications faster than it can get the numbers in, so the extra step of sending everything to the GPU and doing the computations there seems superfluous, unless I’m missing something. Sure, I use quantized values, and my computer is old and has DDR4 memory (and fewer memory channels than a proper, modern server), so the story could be a little different in other circumstances. But I’d be surprised if this changed fundamentally.
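          To put made-up but plausible numbers on that: each generated token has to read roughly all of the weights once, so tokens/sec is capped at bandwidth divided by model size:

          ```python
          model_size_gb = 4.0  # e.g. a 7B model at ~4-bit quantization, assumed
          ddr4_bw = 40.0       # GB/s, dual-channel DDR4, approximate
          vram_bw = 900.0      # GB/s, high-end GPU VRAM, approximate
          pcie_bw = 16.0       # GB/s, PCIe 3.0 x16, approximate

          for name, bw in [("DDR4 (CPU)", ddr4_bw),
                           ("GPU VRAM", vram_bw),
                           ("PCIe x16", pcie_bw)]:
              print(f"{name}: ~{bw / model_size_gb:.0f} tokens/sec upper bound")
          ```

          The PCIe bound is even lower than the DDR4 bound, which is exactly why streaming the weights into the GPU for every token can end up slower than just staying on the CPU.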

          I’m not sure renting vs buying makes a difference, though. That depends on how much you use your GPU, and how. Sure, if it’s idle most of the time, or sits under your table switched off at night, you’d be better off renting a cloud instance. But that’s just you using it wrong. If you buy a car and then only use it twice a year, it’s the same story. But not if you drive it to work every single weekday.
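          Rough break-even math, all numbers assumed:

          ```python
          gpu_price = 2000.0   # USD to buy, assumed
          rental_rate = 2.00   # USD/hour to rent a comparable GPU, assumed
          electricity = 0.05   # USD/hour to run your own, assumed

          breakeven = gpu_price / (rental_rate - electricity)
          print(f"Break-even after ~{breakeven:.0f} hours of actual use")  # ~1026 hours
          ```

          At eight hours every weekday that’s roughly half a year; at “twice a year” usage you’d never get there.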

          @tinwhiskers: You’re right. I kinda forgot that we also do stuff that isn’t fed to the user immediately. I can imagine slower inference being useful for indexing or summarizing things overnight. Or having it work in conjunction with a smaller model, maybe fact-checking things with its increased intelligence.