A software stack for 64GB

When I wrote the previous post just over a week ago, I thought I had an operational AI configuration, but it turned out not to be the case. I was getting “Ollama not responding” more often than not when I sent in my prompts, and I eventually concluded that my 8-year-old laptop just wasn’t up to the job. I had already planned to upgrade it later this year, so I decided to bring that forward and do it right away. I elected to buy an HP Omen laptop with 64GB of RAM, and it duly arrived on Monday 16th March. There followed an intense period of installing applications and transferring data from my old laptop. There were some problems – there always are when you get a new machine – but by the following afternoon I was ready to restart my AI journey.

I started again by asking ChatGPT what tools and models it would suggest for my new 64GB laptop, and it recommended LM Studio running the “Mistral 7B Instruct” model, with AnythingLLM providing the front end and RAG capability. I duly downloaded and installed all this software, but hit a problem when I entered my first query: AnythingLLM is set up to provide a variety of system prompts (instructions that shape the AI’s responses and behaviour) which were not recognised by LM Studio and the Mistral model. ChatGPT first advised me to run another model, and when that didn’t work either, it suggested disabling AnythingLLM’s system prompts. Unfortunately, AnythingLLM wouldn’t let me do that. Eventually, after about two and a half hours, I gave up trying to troubleshoot the problem and took up another of ChatGPT’s suggestions: replacing LM Studio with Ollama running another Mistral model. This change took only about 15 minutes – and it worked! I started running my test questions through the new configuration and was getting answers back in 2-6 seconds – every time!
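For readers curious what “sending a prompt to Ollama” looks like under the hood, here is a minimal sketch of querying a local Ollama server directly over its REST API, without a front end like AnythingLLM. It assumes Ollama is running on its default port (11434) and that a Mistral model has been pulled under the name `mistral`; both the model name and the example prompt are illustrative, not the exact configuration described above.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot (non-chat) generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="mistral"):
    """Assemble the JSON body that Ollama's /api/generate endpoint expects."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete answer rather than a token stream
    }).encode("utf-8")

def ask(prompt, model="mistral"):
    """Send a prompt to the local Ollama server and return the answer text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model already pulled):
# print(ask("Explain retrieval-augmented generation in one sentence."))
```

A front end such as AnythingLLM is essentially wrapping calls like this, adding its own system prompts and retrieved document context before the request goes to the model – which is where the incompatibility described above can creep in.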

Now, throughout this process I was following ChatGPT’s guidance. I simply don’t have the knowledge to do any of this on my own, and, I must say, ChatGPT has been very clear and helpful; most answers provide options, a rationale for its suggestions, and a final summary of what should be done. However, as demonstrated by my experiences above, ChatGPT is not necessarily familiar with all aspects of all available products, nor fully aware of all potential problems. If it were, it wouldn’t have suggested the initial pairing of LM Studio with Mistral and AnythingLLM. Furthermore, when asked about functionality in a particular product, it often offers various possibilities depending on which version is being used, suggesting general knowledge rather than specific expertise. Of course, this is exactly what should be expected from an AI system. After all, it is only predicting the next word based on a whole load of training data.

Let me be clear: the guidance I’ve already received from ChatGPT has enabled me to make considerable progress in a relatively short period of time, and I plan to continue relying on it to guide my future steps in this journey – after all, I have no other option. However, I will remain alert to the possibility of its advice being incomplete, unsound or even wrong, and I will rely on my actual experiences with the software itself to draw my own conclusions.
