this post was submitted on 21 Oct 2024
19 points (100.0% liked)

Free Open-Source Artificial Intelligence


For about half a year I stuck with 7B models at an aggressive 4-bit quantisation, because I had very bad experiences with an old qwen 0.5B model.

But recently I tried running smaller models like llama3.2 3B with an 8-bit quant, and qwen2.5-1.5B-coder at full 16-bit floating point, and those performed really well too on my 6 GB VRAM GPU (GTX 1060).

So now I am wondering: should I pull aggressive quants of big models, or light quants / raw 16-bit fp versions of smaller models?

What are your experiences with aggressive quants? I saw a video by that technovangelist guy on YouTube and he said that sometimes even 2-bit quants can be perfectly fine.
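
For context, a rough back-of-the-envelope sizing sketch (weights only; it ignores KV cache and runtime overhead, so real usage is a bit higher):

```python
# Back-of-the-envelope sizing: weight memory only, no KV cache or runtime overhead.
def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, params, bits in [
    ("llama3.2 3B @ 8-bit", 3.2, 8),
    ("qwen2.5-1.5B-coder @ fp16", 1.5, 16),
    ("7B @ 4-bit", 7, 4),
    ("8B @ 4-bit", 8, 4),
]:
    print(f"{name}: ~{approx_weight_gb(params, bits):.1f} GB")
```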

all 13 comments
[–] SGforce@lemmy.ca 5 points 14 hours ago (1 children)

Quantisation technology has improved a lot this past year, making very small quants viable for some uses. I think the general consensus is that an 8-bit quant will be nearly identical to the full model, and even a 6-bit quant can feel so close that you may not notice any loss of quality.

Going smaller than that is where the real trade-off occurs. 2-3 bit quants of much larger models can absolutely surprise you, though they will probably be inconsistent.

So it comes down to the task you're trying to accomplish. If it's programming related, go 6-bit and up for consistency, with the largest coding model you can fit. If it's creative writing or something similar, a much lower quant of a larger model is the way to go, in my opinion.

[–] Smorty@lemmy.blahaj.zone 2 points 13 hours ago (1 children)

Hmm, so what you're saying is that for creative generation one should use big-parameter models with aggressive quants, but when good structure is required, like with coding and JSON output, we want a high-bit quant of a model which actually fits into our VRAM?

I'm currently testing JSON output, so I guess a small Qwen model it is! (they advertised good JSON generations)

Does the difference between fp8 and fp16 influence the structure strongly, or are fp8 models fine for structured content?
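
Something like this minimal sketch with the ollama Python client is what I have in mind (the model tag is just an example, and format="json" asks Ollama to constrain the output to valid JSON):

```python
# Minimal sketch: structured JSON output from a small Qwen coder model via the
# ollama Python client. The model tag is just an example; use whatever quant you pulled.
import json
import ollama

response = ollama.chat(
    model="qwen2.5-coder:1.5b",  # example tag
    messages=[{
        "role": "user",
        "content": "Return a JSON object with the keys 'language' and 'year' for Python.",
    }],
    format="json",  # constrain the output to valid JSON
)

print(json.loads(response["message"]["content"]))
```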

[–] SGforce@lemmy.ca 1 points 13 hours ago (1 children)

fp8 would probably be fine, though the method used to make the quant would greatly influence that.

I don't know exactly how Ollama works, but I would think a more ideal choice would be one of these quants:

https://huggingface.co/bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF

A GGUF model would also allow some overflow into system RAM, if Ollama has that capability like some other inference backends do.

[–] Smorty@lemmy.blahaj.zone 2 points 13 hours ago (1 children)

Ollama does indeed have the ability to split memory between VRAM and RAM, but I always assumed it wouldn't make sense, since it would massively slow down generation.

I think Ollama already uses GGUF, since that is how you import a model from HF into Ollama anyway: you have to use the *.GGUF file.

As someone with experience in GLSL shader development, I know very well that communication between the GPU and CPU is super slow, and sending data from the GPU back to the CPU is a pretty heavy task. So I just assumed splitting wouldn't make any sense. I will try a full 7B (fp16) model now using my 32 GB of normal RAM to check out the speed. I'll edit this comment once I'm done and share the results.

[–] SGforce@lemmy.ca 1 points 11 hours ago (1 children)

With modern methods, running a larger model split between GPU and CPU can sometimes be fast enough. Here's an example: https://dev.to/maximsaplin/llamacpp-cpu-vs-gpu-shared-vram-and-inference-speed-3jpl
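
With llama-cpp-python, the split looks roughly like this (the path and layer count are just examples; you tune n_gpu_layers until VRAM is nearly full):

```python
# Rough sketch of GPU/CPU splitting with llama-cpp-python: n_gpu_layers sets how many
# transformer layers are offloaded to VRAM, the remainder run on the CPU from system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./Qwen2.5-Coder-1.5B-Instruct-Q6_K.gguf",  # example local GGUF file
    n_gpu_layers=20,  # example: offload 20 layers to the GPU, keep the rest on the CPU
    n_ctx=4096,
)

out = llm("Write a haiku about quantisation.", max_tokens=64)
print(out["choices"][0]["text"])
```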

[–] Smorty@lemmy.blahaj.zone 1 points 11 hours ago (1 children)

Oooh, a Windows-only feature; now I see why I haven't heard of this yet. Well, too bad I guess. It's time for me to switch to AMD anyway...

[–] SGforce@lemmy.ca 1 points 11 hours ago

Oh, that part is. But the splitting tech itself is built into llama.cpp.

[–] hendrik@palaver.p3x.de 1 points 11 hours ago* (last edited 10 hours ago)

A 2-bit or 3-bit quantization is quite a trade-off. At 2-bit, it'll probably be worse than a smaller model with a lighter quantization at the same effective size.

There is a sweet spot somewhere between 4 and 8 bit(?), and more than 8-bit seems to be a waste; it's pretty much indistinguishable from full precision.

General advice seems to be: Take the largest model you can fit at somewhere around 4bit or 5bit.

The official way to compare such things is to calculate the perplexity for all of the options and choose the one with the smallest perplexity that still fits.
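
Perplexity is just the exponential of the average negative log-likelihood per token, so a comparison boils down to something like this little sketch:

```python
# Minimal sketch: perplexity from per-token log-probabilities (natural log).
# Lower perplexity on the same evaluation text means the quant predicts it better.
import math

def perplexity(token_logprobs: list[float]) -> float:
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

print(perplexity([-1.2, -0.4, -2.3, -0.9]))  # example log-probs from an eval run
```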

And by the way: I don't really use the tiny models like the 3B-parameter ones. They write text, but they don't seem to be able to store a lot of knowledge, so they can't handle complex questions and they generally make up a lot of things. I usually use 7B to 14B parameter models; that's a proper small model. And I stick to 4-bit or 5-bit quants for llama.cpp.

Your graphics card should be able to run an 8B-parameter LLM (4-bit quantized). I'd prefer that to a 3B one; it'll be way more intelligent.

[–] j4k3@lemmy.world 2 points 13 hours ago (1 children)

I prefer a middle ground. My favorite model is still the 8x7B Mixtral, specifically the flat/dolphin/maid uncensored model. Llama 3 can be better in some areas, but its alignment is garbage in many areas.

[–] Smorty@lemmy.blahaj.zone 2 points 13 hours ago (1 children)

Yeaaa, those models are just too large for most people... You need about 56 GB of VRAM to run an 8-bit quant, and most people don't have a quarter of that.

Also, what specifically do you mean by alignment? Are you talking about finetuning or instruction alignment?