Niche Model of the Day: Nemotron 49B 3bpw exl3

huggingface.co/turboderp/Llama-3.3-Nemotron-Super-49B-v1-exl3 (3.0bpw)

This is one of the "smartest" models you can fit on a 24GB GPU right now, with no offloading and very little quantization loss. It feels big and insightful, like a better (albeit dry) Llama 3.3 70B with thinking, and it has more STEM world knowledge than QwQ 32B, yet it comfortably fits thanks to the new exl3 quantization!
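To see why a 49B model at 3.0bpw fits in 24GB, here's a quick back-of-the-envelope calculation (the exact parameter count and per-layer overheads are rough assumptions, not measured numbers):

```python
# Rough VRAM estimate for a ~49B-parameter model quantized to 3 bits per weight.
params = 49e9          # ~49 billion parameters (approximate)
bits_per_weight = 3.0  # exl3 quant at 3.0 bpw

weight_gib = params * bits_per_weight / 8 / 2**30
print(f"weights: ~{weight_gib:.1f} GiB")  # → weights: ~17.1 GiB
# That leaves several GiB on a 24 GB card for the KV cache,
# activations, and CUDA overhead.
```

In practice the quant header, embeddings, and cache precision shift this a bit, but the margin is comfortable.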
You need a backend that supports exl3, which at the moment means text-generation-webui, with TabbyAPI support coming soon.