Tag: GPTQ

GPTQ or bitsandbytes: Which Quantization Method to Use for LLMs — Examples with Llama 2

As large language models (LLMs) have grown larger, with more and more parameters, new techniques to reduce their memory usage have been proposed. One of the most effective ways to reduce a model's size in memory is quantization. You can think of quantization as a compression technique for LLMs. In...
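
To make the comparison concrete, here is a minimal sketch of the bitsandbytes side: loading a model in 4-bit NF4 precision on the fly through Hugging Face Transformers. The checkpoint name and the specific NF4 settings are illustrative assumptions, not taken from the article, and the snippet assumes `bitsandbytes` and `accelerate` are installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative checkpoint, not from the article

# bitsandbytes quantizes the weights at load time; no calibration data needed
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4 bits
    bnb_4bit_quant_type="nf4",             # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPU(s)/CPU
)
```

Because quantization happens during loading, this approach trades a small amount of load-time overhead for the convenience of never producing a separate quantized checkpoint.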

4-bit Quantization with GPTQ

Recent advances in weight quantization make it possible to run massive large language models on consumer hardware, such as a LLaMA-30B model on an RTX 3090 GPU. This is possible thanks to novel 4-bit quantization techniques with minimal performance degradation, such as GPTQ, GGML, and NF4. I...
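
As a rough illustration of the GPTQ workflow, here is a minimal sketch using the `GPTQConfig` integration in Hugging Face Transformers, which calibrates and quantizes the weights layer by layer at load time. The checkpoint name, calibration dataset, and output directory are assumptions for illustration; the snippet assumes `optimum` and `auto-gptq` are installed and a GPU is available:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative checkpoint, not from the article
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Unlike bitsandbytes, GPTQ needs a small calibration set to measure
# and minimize the quantization error of each layer
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# Quantization runs once, while the model loads (this step requires a GPU)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=gptq_config,
    device_map="auto",
)

# Save the quantized weights so future loads skip the calibration step
model.save_pretrained("llama-2-7b-gptq-4bit")  # hypothetical output path
tokenizer.save_pretrained("llama-2-7b-gptq-4bit")
```

The up-front calibration cost is GPTQ's main practical difference from bitsandbytes: it takes longer the first time, but the saved checkpoint reloads quickly afterward.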