You are viewing a single comment's thread from:

RE: GPT4All: How to run "ChatGPT" locally on your PC, Facebook/Meta has ignited the open-source uncensored GPT community, what an irony 🚀

in STEMGeeks • 11 months ago

Is it RAM or VRAM that large language models need?
I have 128 GB of RAM but no VRAM


The cool thing is that you can run these models on either the GPU or the CPU, and you can even split the inference between them. The GPU is preferable because it's much faster, but it's entirely possible to run the whole model from system RAM on the CPU alone, so your 128 GB will work fine. See the sketch below.
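As a rough illustration, here's a minimal sketch using the `llama-cpp-python` bindings (llama.cpp is the same backend GPT4All builds on). The model filename is a placeholder, and the layer counts are just examples; the key knob is `n_gpu_layers`, which controls how the model is split between VRAM and RAM:

```python
# Minimal sketch, assuming llama-cpp-python is installed and you have
# a GGUF model file on disk (the path below is a placeholder).
from llama_cpp import Llama

# n_gpu_layers controls the CPU/GPU split:
#   0  -> run everything on the CPU (plain RAM, no VRAM needed)
#   -1 -> offload all layers to the GPU (fastest, needs enough VRAM)
#   N  -> offload N layers to VRAM, keep the rest in system RAM
llm = Llama(
    model_path="./models/some-7b-model.Q4_K_M.gguf",  # hypothetical file
    n_gpu_layers=0,   # CPU-only: fine with 128 GB RAM and no GPU
    n_ctx=2048,       # context window size
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```

With `n_gpu_layers=0` nothing touches the GPU at all; a quantized 7B model in Q4 needs only a few GB of RAM, so even much larger models fit comfortably in 128 GB, just at slower token speeds than on a GPU.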