For years I’ve had a dream of building a rack-mounted PC capable of splitting its resources to host multiple GPU-intensive VMs:

  • a few gaming VMs
  • a VM for work that can run Davinci Resolve and Blender renders
  • an LLM server
  • a Stable Diffusion server
  • a media server

Just to name a few possibilities…

Every time I’ve looked into it, it seemed like the technology just wasn’t there yet. I remember Linus Tech Tips taking a shot at it a few years ago; in the end he suggested the technology (for non-commercial entities) just wasn’t in a comfortable spot yet.

So how far off are we? Obviously AI-focused companies seem to make it work, but what possibilities exist for us self-hosters who might also want to run multiple displays in addition to web-GUI LLM servers? And without forking out crazy money for GPU virtualization software licenses?

  • LrdThndr@lemmy.world · 3 months ago (edited)

    I bought a cheap used Dell R710 on Facebook Marketplace for about $100, plus a UPS, rack, 10G switch, etc., from various other sellers. All told, I’ve got about $500 in my server setup.

    Installed Proxmox on it. It’s “free” if you don’t buy a license; you just have to put up with a little nag screen when you open the control panel, but it still works 100%, much like WinRAR.

    Works great.

    Edit: just realized this is in c/selfhosted AND I misunderstood the post. I’m gonna leave it here just on the off chance it’s useful to somebody, but I acknowledge it’s not what you’re looking for.

    • Lupec@lemm.ee · 3 months ago

      Btw, just in case you aren’t aware, the nag can be done away with. I don’t have a link off the top of my head, but it’s out there.
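      For what it’s worth, a minimal sketch of the commonly shared approach (the file path and matched string come from community guides and can change between Proxmox releases, and a package update will overwrite the edit):

          # Patch the subscription check out of the web UI toolkit
          # (keeps a .bak copy), then restart the proxy to apply it.
          sed -i.bak "s/data.status !== 'Active'/false/g" \
              /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
          systemctl restart pveproxy.service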

  • TCB13@lemmy.world · 3 months ago

    The technology has “been there” for a while, and it’s trivial to set up what you’re asking for. The issue is that games have anti-cheat engines that get triggered by virtualization and will get you banned.
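    For what it’s worth, people try to hide the VM from the guest with standard libvirt domain XML along these lines; this is a sketch of a common mitigation, not a guarantee, and kernel-level anti-cheats increasingly detect it (or ban for it) anyway:

        <!-- Mask common hypervisor tells from the guest OS -->
        <features>
          <kvm>
            <hidden state='on'/>  <!-- hide the KVM signature -->
          </kvm>
        </features>
        <cpu mode='host-passthrough'>
          <!-- drop the "hypervisor" CPUID bit the guest would see -->
          <feature policy='disable' name='hypervisor'/>
        </cpu>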

    • socphoenix@midwest.social · 3 months ago

      Which games do that? Running a passthrough GPU on Windows for Destiny and Halo at least gave me zero issues for years.

      • You999@sh.itjust.works · 3 months ago

        Anything using Vanguard (Valorant, League of Legends), BattlEye (PUBG, Destiny 2, Rainbow Six Siege), or Easy Anti-Cheat (Fortnite) blocks virtual machines. Vanguard is especially bad because, as of its last update, it won’t even allow the game to run with Intel VT-x/AMD-V enabled on bare metal.

        • umbrella@lemmy.ml · 3 months ago

          this just makes me wanna install bare-metal goody-two-shoes Windows and cheat using a $5 Arduino

      • Dark Arc@social.packetloss.gg · 3 months ago

        I’m surprised; I was pretty sure anything with BattlEye flat out rejected virtualization.

        I thought Destiny used BattlEye, so I must be mistaken on one of those points.

  • just_another_person@lemmy.world · 3 months ago

    You’re not really describing your use case here. Are you just trying to run a server that does all your rendering for you so you can play games elsewhere? Yes, that’s totally possible.

    If you’re trying to describe a business…no, it’s not possible, scalable, or profitable.

    I’m curious as to what your intentions are here though.

    • brownmustardminion@lemmy.ml (OP) · 3 months ago

      I have a workstation I use for video editing/VFX as well as gaming. Because of my work, I’m fortunate to have the latest high-end GPUs and a 160" projector screen. I also have a few TVs in various rooms around the house.

      Traditionally, if I want to watch something or play a video game, I have to go to the room with the Jellyfin/Plex/Roku box, and I’m limited to the work/gaming rig for gaming. I can’t run renders and game at the same time, and buying an entire new PC just so I can do both is a massive waste of money. If I want to do a test screening of a video I’m working on to see how it displays on various devices, I have to transfer the file around to all of them. That’s limiting and inefficient.

      I want to be able to go to any screen in my house (my living room TV, the large projector in my studio room, my tablet, or even my phone) and switch between:

      • my workstation display running on a Windows 10 VM
      • my Linux VM with a YouTube or Jellyfin player that I use as a daily driver
      • a Fedora or Windows VM dedicated to gaming, maybe SteamOS
      • a second gaming VM when a friend comes over for a LAN party, so we can both game without setting up a second rig
      • an LLM or Stable Diffusion server, without having to buy a new GPU with enough VRAM to run SDXL
  • Decipher0771@lemmy.ca · 3 months ago (edited)

    I’ve been doing exactly that at home for a couple years now. First with Parsec, now Sunshine/Moonlight.

    The host is Proxmox on a Ryzen 5800X with 64GB RAM; the GPU is a 2070 Super with vGPU-patched drivers from https://gitlab.com/polloloco/vgpu-proxmox

    When I’m gaming I’ll dedicate the full 8GB to my Windows VM; otherwise I split it into 2GB or 4GB chunks for Jellyfin or my home camera monitoring. 8GB can’t be split very many ways, and most things require at least 2GB to run.
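    With the patched driver the card is carved into mediated devices (mdevs), and each VM gets one as a hostpci entry in its Proxmox config. A rough sketch; the PCI address and the nvidia-259 profile name are placeholders that depend on your card and driver build:

        # See which vGPU profiles the patched driver exposes:
        ls /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types/

        # /etc/pve/qemu-server/101.conf (excerpt)
        # Hand this VM one slice; the chosen profile determines
        # the VRAM share (e.g. a 4GB slice of the 8GB card).
        hostpci0: 0000:01:00.0,mdev=nvidia-259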

    Locally at home I can run 1440p60 rock solid over Wi-Fi on any device: my phone, an old laptop, an Apple TV, a Raspberry Pi. Remotely I can do 1080p60, though it’s a bit more hit or miss depending on my network connection.

    Experimenting with LLMs, I’ve done that through the same Windows VM, or through an Ubuntu dev VM; it works the same way. I’m thinking of transitioning my gaming VM to Linux too.

    The amount of VRAM is the hard limitation to get past; the virtualization tech itself has been there for a while.

    But to be perfectly honest… it really was just a “let’s see if I could do this” type of task. Direct GPU passthrough is more straightforward, and it’s not really worth splitting 8GB these days; unless you get a card with significantly more VRAM, passthrough is much less work.

      • Decipher0771@lemmy.ca · 3 months ago

        Yeah, unfortunately. The 20xx series is the last generation supported by the patch so far; not sure if support for later cards is coming or not.

    • Lem453@lemmy.ca · 3 months ago

      This is really amazing! In theory, can you use 2GB with 4 different VMs?

      • Decipher0771@lemmy.ca · 3 months ago

        Sure, but you’ll most likely get diminishing returns, as consumer hardware doesn’t really have the resources to scale that way very well if all the VMs are running demanding apps simultaneously.

        Even for something like 4 VMs that just do NVENC, there are limits on how many streams the GPU can encode at once. I think there’s another patch that lets you raise that, but at some point you’ll run out of resources quickly. Even powerful consumer gear isn’t really designed to serve more than one user/app, and it starts to show the more you virtualize and split those resources.
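        If you want to see how close you are to that encoder ceiling, nvidia-smi can report live NVENC utilization (a quick check; column layout varies a little between driver versions):

            # Per-GPU utilization sampled once per second; the "enc"
            # column shows how busy the NVENC block is overall.
            nvidia-smi dmon -s u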

  • Byter@lemmy.one · 3 months ago

    I’ve also wanted to do this for a while, but there were always a few too many barriers to actually spin up the project. Here’s just a brain dump of things I’ve seen recently.

    vGPUs continue to be locked behind a license, but there is now vgpu_unlock.

    L1T just showed off PCIe “fabric” from Liqid that can switch physical devices between machines.

    Turning VMs on and off isn’t as slick as either of the above, but it is doable today. You’ll just have to build the switching automation yourself; at a minimum, that could be a shell script running QEMU/libvirt commands, like the sketch below.
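    A minimal sketch of that kind of switcher, assuming libvirt-managed VMs (the names gaming and workstation here are hypothetical) that both reference the same passed-through GPU:

        #!/bin/sh
        # Move the GPU from one libvirt VM to another; libvirt
        # rebinds the passed-through device via vfio-pci on start.
        virsh shutdown gaming
        # Wait until the guest has actually powered off, so the
        # GPU isn't claimed by two domains at once.
        while virsh domstate gaming | grep -q running; do sleep 1; done
        virsh start workstation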

  • Trincapinones@lemmy.dbzer0.com · 3 months ago

    I’ve recently tried to do that using Sunshine and different Linux gaming distros, and it was awful: the VM would work great for a few minutes and then suddenly crash, and I’d have to hard-stop it.

    All the people I’ve seen talking about it on the internet are using Windows VMs, so I guess I’m either doing something wrong or the only way to do it is through a Windows VM, which I won’t even try.

        • Sebbe@lemmy.sebbem.se · 2 months ago (edited)

          Hey, sorry I didn’t reply until now; life has been pretty hectic, and I also kinda borked my streaming VM right as I wrote that. I ran Nobara Linux for a while with KDE on Xorg and it actually worked pretty well. Then I decided to give Bazzite a try, but I didn’t like the whole immutable thing. I went back to Nobara, only to find that Steam Remote Play straight up didn’t work, and I couldn’t tell whether I had failed to set something up properly or Valve had just broken it while I was “away”. A couple of days ago I decided to abandon Remote Play for the time being and deployed Games on Whales instead, and it seems very promising so far: much easier than fiddling with VMs and GPU passthrough, and Sunshine/Moonlight has never failed me.

          • Trincapinones@lemmy.dbzer0.com · 2 months ago

            No worries, lol, we followed exactly the same steps and hit the same problems. In fact, I was procrastinating on documenting my problems in my Logseq, and I think I’ll just copy your explanation, because it’s exactly my case in everything. Thanks ^^