Need help understanding if this setup is even feasible.
I have an unused Dell OptiPlex 7010 I wanted to use as the base for an inference rig.
My idea was to get a 3060, a PCIe riser, and a 500 W power supply just for the GPU. Mechanically, I had the idea of making a backpack of sorts on the side panel to fit both the GPU and the extra power supply, since unfortunately it's an SFF machine.
What's making me wary of going through with it is the specs of the 7010 itself: it's a DDR3 system with a 3rd-gen i7-3770. I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow down to a crawl. (Using koboldcpp, if that matters.)
Do you think it's even worth going through with?
Edit: I may have found a ThinkCentre that uses DDR4 and that I can buy if I manage to sell the 7010, though I still don't know if it will be good enough.
I just got a second PSU just for powering multiple cards on a single bifurcated PCIe slot for a homelab-type thing. A snag I hit that you might be able to learn from: a PSU needs to be turned on by the motherboard before it can power a GPU. You need a ~$15 electrical relay board that passes the power-on signal from the motherboard to the second PSU, or it won't work.
It's gonna be slow as molasses partially offloaded onto regular RAM no matter what; it's not like DDR4 vs. DDR3 is that much different speed-wise. It might be a 10-15% increase, if that. If you're doing a plain partial offload and not some weird optimized MoE type of offloading, expect 1-5 tokens per second (realistically more like 2-3).
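To put rough numbers on it: partially offloaded generation is basically bound by how fast system RAM can stream the offloaded weights, since every generated token has to read them once. A quick sketch (the bandwidth and model-size figures are illustrative assumptions, not benchmarks):

```python
# Rough upper bound for token generation with part of the model in system RAM.
# Ceiling = memory bandwidth / offloaded bytes. Illustrative numbers only.

ddr3_bandwidth_gb_s = 25.6  # theoretical peak, dual-channel DDR3-1600 (assumed)
offloaded_gb = 7.0          # assume ~7 GB of a quantized 13B left in RAM

tokens_per_s = ddr3_bandwidth_gb_s / offloaded_gb
print(f"upper bound: {tokens_per_s:.1f} tok/s")
```

Real throughput lands well under that ceiling, which is why 2-3 tok/s is the realistic expectation.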
If you're doing real inference work and need speed, then VRAM is king: you want to fit it all within the GPU. How much VRAM does the 3060 you're looking at have?
If you're talking about something like the add2psu boards that jump the secondary power supply's PS_ON line when the primary's 12 V rail is ready, then I'm already on it the DIY way. Thanks for the heads up though :-).
5 tokens per second would be wonderful compared to what I'm using right now, since it averages ~1.5 tok/s with 13B models (koboldcpp through Vulkan on a Steam Deck). My main reasons for upgrading are bigger context/models plus trying to speed up prompt processing, but I feel like the latter will also be handicapped by offloading to RAM.
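For context, this is roughly how I'm running it on the Deck today vs. what I'd aim for with the 3060 (flags as I understand koboldcpp's CLI, and `mymodel.gguf` is a placeholder; worth double-checking against the README):

```shell
# Current setup: Vulkan backend on the Steam Deck, partial offload
python koboldcpp.py mymodel.gguf --usevulkan --gpulayers 20 --contextsize 4096

# With a 12 GB 3060 the goal would be every layer on the GPU via CUDA
python koboldcpp.py mymodel.gguf --usecublas --gpulayers 999 --contextsize 8192
```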
I'm looking at the 12 GB version. I'm also giving myself space to add another card in the future (most likely through a 1x mining riser) if I manage to save up enough, to bump it up to 24 GB with parallel processing, though I doubt I'll manage.
Sorry for the wall of text, and thanks for the help.
No worries :) A model fully loaded into the 12 GB of VRAM on a 3060 will give you a huge boost, around 15-30 tps depending on the 3060's memory bandwidth and tensor cores. It's really such a big difference; once you get a properly fitting quantized model you're happy with, you probably won't think about offloading to RAM again if you just want LLM inference. Check to make sure your motherboard supports PCIe bifurcation before you make any multi-GPU plans. I got super lucky with my motherboard allowing x4/x4/x4/x4 bifurcation for up to four GPUs, but I could easily have been screwed if it didn't.
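Same back-of-envelope as the RAM case, just with VRAM bandwidth instead (numbers are assumptions: ~360 GB/s spec-sheet bandwidth for a 3060, ~7 GB for a quantized 13B):

```python
# Same streaming-bound estimate, but with the whole model in VRAM.
vram_bandwidth_gb_s = 360.0  # RTX 3060 spec-sheet memory bandwidth, roughly
model_gb = 7.0               # assumed quantized 13B footprint

tokens_per_s = vram_bandwidth_gb_s / model_gb
print(f"theoretical ceiling: {tokens_per_s:.0f} tok/s")
```

Real-world efficiency sits well under the theoretical ceiling, which is how you end up in that 15-30 tps range.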