Private GPT

Everyone uses AI and LLMs, but do you trust them with your data? Well, no worries! You can run an LLM locally on your machine and pull recent open models with Ollama.
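Once the server is up, Ollama also exposes an HTTP API on localhost:11434, so you can script against it instead of using the terminal. Here's a minimal sketch in Python; the model name is just an example, swap in whatever you've pulled:

```python
# Minimal sketch: query a locally running Ollama server over its HTTP API.
# Assumes Ollama is serving on its default port (11434) and that the model
# named below has already been pulled (e.g. with `ollama pull llama3`).
import json
import urllib.request

payload = {
    "model": "llama3",  # example model name; use any model you've pulled
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "stream": False,  # ask for one complete response instead of chunks
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["message"]["content"])  # the assistant's answer
```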

Ollama should automatically detect your GPU and run the model on it, but it falls back to the CPU when the GPU isn't supported (like mine), which leads to noticeably longer response times.
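You can check where a model actually ended up: Ollama's /api/ps endpoint lists the models currently loaded in memory, and comparing size_vram to the total size tells you how much of the model is sitting on the GPU (a size_vram of 0 means it's running entirely on the CPU). A quick sketch, assuming you've run a prompt first so a model is loaded:

```python
# Quick check of where a loaded model landed. Run a prompt first so the
# model is in memory, then compare size_vram (bytes in GPU memory) to the
# model's total size; 0% in VRAM means a CPU-only load.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    status = json.load(resp)

for m in status["models"]:
    on_gpu = (m["size_vram"] / m["size"] * 100) if m["size"] else 0
    print(f"{m['name']}: {on_gpu:.0f}% in VRAM")
```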

In my case, I have an Intel i5-12400 running at 4.4 GHz, so the response time wasn't that bad, but I really wanted the model to run on my RX 6700 XT GPU. I found an article that helped me achieve that.
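For RDNA2 cards like the RX 6700 XT, the commonly reported fix is setting HSA_OVERRIDE_GFX_VERSION=10.3.0 so ROCm treats the card as the officially supported gfx1030 chip; I can't vouch for other GPUs. Here's a rough sketch of launching the server with that override (in practice you'd set it in the service's environment, e.g. a systemd drop-in, rather than through a script):

```python
# Rough sketch: start the Ollama server with ROCm told to treat an
# RX 6700 XT (gfx1031) as the supported gfx1030 chip. The override value
# is the one commonly reported for RDNA2 cards; treat it as an assumption
# and adjust for your own GPU. Assumes `ollama` is on your PATH.
import os
import subprocess

env = dict(os.environ, HSA_OVERRIDE_GFX_VERSION="10.3.0")
subprocess.run(["ollama", "serve"], env=env, check=True)
```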

Finally, as cool as it is to run an LLM in the terminal, I wanted a ChatGPT-like UI, and I found a project that provides exactly that.