My experience training a local LLM (AI chatbot) on local data…

The author encountered challenges while attempting to feed local information into local Large Language Models (LLMs) via RAG (Retrieval-Augmented Generation). They explored methods such as Nvidia Chat with RTX, Ollama with Python scripts, and Ollama with Open WebUI. Results varied, with some methods producing inaccurate or incomplete outputs. By comparison, Microsoft Copilot, running GPT-4 Turbo, significantly outperformed the local methods.
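As a rough illustration of the RAG pattern the post describes, here is a minimal sketch of an Ollama-plus-Python approach. It is an assumption-laden example, not the author's actual scripts: it assumes a local Ollama server at the default `http://localhost:11434`, a model named `llama3`, and uses naive keyword-overlap retrieval where a real setup would use embeddings.

```python
# Minimal RAG sketch (hypothetical; not the post's actual code).
# Assumes a local Ollama server on the default port and a pulled "llama3" model.
import json
import urllib.request

def retrieve(query, chunks, k=1):
    """Pick the k chunks with the most word overlap with the query.
    (A real pipeline would use vector embeddings instead.)"""
    q = set(query.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.lower().split())),
                  reverse=True)[:k]

def build_prompt(query, context):
    """Stuff the retrieved context into the prompt so the model answers from it."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def ask_ollama(prompt, model="llama3"):
    """POST to Ollama's /api/generate endpoint (requires a running server)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

chunks = ["The backup job runs nightly at 2am.", "Lunch is served at noon."]
question = "When does the backup job run?"
context = retrieve(question, chunks)[0]
prompt = build_prompt(question, context)
# print(ask_ollama(prompt))  # uncomment with a local Ollama instance running
```

The quality problems the post reports often come down to the retrieval step: if the wrong chunk is stuffed into the prompt, even a good model answers badly.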

Windows is getting me down

**WARNING - this is a very niche rant** I'm quite a tech nerd. I enjoy gadgets, and phones, and even writing software and addons. A big part of this is that I just generally enjoy interacting with my computer's operating system to get stuff done. I've been firmly embedded in Windows most of my life… Continue Reading →
