• PriorityMotif@lemmy.world · 30 days ago

    You can run it on your own machine. It won’t work on a phone right now, but I guarantee chip manufacturers are already working on custom SoCs that will be able to run a rudimentary local model.

    • MystikIncarnate@lemmy.ca · 29 days ago

      Both Apple and Google have integrated machine-learning optimisations into their processors, specifically for running ML workloads.

      As long as the software is optimised to use that hardware, the model will run fairly well.

      They don’t want independent ML chips; they want it baked into every processor.

        • MystikIncarnate@lemmy.ca · 29 days ago

          That’s fine; Qualcomm has followed suit, and Samsung is doing the same.

          I’m sure Intel and AMD are not far behind. They may already be doing this; I just haven’t kept up with the latest information from them.

          Eventually all processors will have it, whether you want it or not.

          I’m not saying this is a good thing, I’m saying this as a matter of fact.

    • TriflingToad@lemmy.world · 20 days ago (edited)

      It will run on a phone right now: Llama 3.2 on a Pixel 8.

      The only drawback is that it requires a lot of RAM, so I needed to close all other applications, but that could easily be fixed on the next phone. Other than that it was quite fast and only took ~3 GB of storage!
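
      For anyone curious, here’s a minimal sketch of the same idea using llama-cpp-python. This isn’t necessarily the app or runtime used on the Pixel 8 (that isn’t stated); the model filename and settings are assumptions, with a ~Q4-quantized 3B GGUF landing around the ~3 GB mentioned above.

      ```python
      # Hedged sketch: one way to run a quantized Llama 3.2 3B locally with
      # llama-cpp-python (pip install llama-cpp-python).
      from llama_cpp import Llama

      llm = Llama(
          model_path="Llama-3.2-3B-Instruct-Q4_K_M.gguf",  # assumed local GGUF file
          n_ctx=2048,  # modest context window to keep RAM usage down
      )

      out = llm("Summarise what an NPU does in one sentence.", max_tokens=64)
      print(out["choices"][0]["text"])
      ```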