{"id":2760,"date":"2026-03-20T07:13:25","date_gmt":"2026-03-20T07:13:25","guid":{"rendered":"https:\/\/www.pwofc.com\/ofc\/?p=2760"},"modified":"2026-03-20T07:13:25","modified_gmt":"2026-03-20T07:13:25","slug":"a-software-stack-for-64gb","status":"publish","type":"post","link":"https:\/\/www.pwofc.com\/ofc\/2026\/03\/20\/a-software-stack-for-64gb\/","title":{"rendered":"A software stack for 64GB"},"content":{"rendered":"<p>When I wrote the previous post just over a week ago, I thought I had an operational AI configuration; but it turned out not to be the case. I was getting <em>\u201cOllama not responding\u201d<\/em> more often than not when I sent in my prompts, and I eventually concluded that my 8-year-old laptop just wasn\u2019t up to the job. I had already planned to upgrade it later this year, so I decided to bring that forward and do it right away. I elected to buy an HP Omen laptop with 64GB of RAM, and it duly arrived on Monday 16<sup>th<\/sup> March. There followed an intense period of installing applications and transferring data from my old laptop. There were some problems \u2013 there always are when you get a new machine \u2013 but by the following afternoon I was ready to restart my AI journey.<\/p>\n<p>I started again by asking ChatGPT what tools and models it would suggest for my new 64GB laptop, and it recommended LM Studio running the \u201cMistral 7B Instruct\u201d model with AnythingLLM providing the front end and RAG capability. I duly downloaded and installed all this software, but hit a problem when I entered my first query: AnythingLLM is set up to provide a variety of system prompts (instructions that shape the AI\u2019s responses and behaviour) which are not recognised by LM Studio and the Mistral model. ChatGPT first advised me to run another model, and when that didn\u2019t work either, it suggested disabling AnythingLLM\u2019s system prompts. Unfortunately, AnythingLLM wouldn\u2019t let me do that. 
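(For readers unfamiliar with the term: a system prompt is simply an extra message sent ahead of the user's question. As a minimal sketch \u2013 not the actual payload AnythingLLM sends, and with an illustrative model name \u2013 a chat request to an OpenAI-compatible local server, such as the one LM Studio exposes, has this shape:)

```python
import json

# Sketch of a chat request body for an OpenAI-compatible local server
# (LM Studio serves this style of API on localhost by default).
# The model name and prompt text are illustrative assumptions, not
# what AnythingLLM actually sends.
def build_chat_request(system_prompt, user_prompt, model="mistral-7b-instruct"):
    """Return a JSON request body; the "system" message is the system
    prompt that shapes the model's behaviour before the user's
    question is considered."""
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    })

print(build_chat_request(
    "You are a helpful assistant answering questions about a personal archive.",
    "What did I write about in March 2026?",
))
```

(A server and model that honour the "system" role will apply it; my problem, as far as I could tell, was that the combination I had installed did not.)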
Eventually, after about two and a half hours, I gave up trying to troubleshoot the problem and took up another of ChatGPT\u2019s suggestions: replacing LM Studio with Ollama running another Mistral model. This change only took about 15 minutes \u2013 and it worked! I started running my test questions through the new configuration and was getting answers back in 2\u20136 seconds \u2013 every time!<\/p>\n<p>Now, throughout this process I was following <a href=\"https:\/\/www.pwofc.com\/ofc\/wp-content\/uploads\/2026\/03\/2026-03-17-ChatGPT-on-the-number-of-LLMs-and-best-stack-for-64Gb-RAM.docx\">ChatGPT\u2019s guidance<\/a>. I simply don\u2019t have the knowledge to do any of this on my own, and, I must say, ChatGPT has been very clear and helpful; most answers provide options, a rationale for its suggestions, and a final summary of what should be done. However, as my experiences above demonstrate, ChatGPT is not necessarily familiar with all aspects of all available products, nor fully aware of all potential problems. If it were, it wouldn\u2019t have suggested the initial pairing of LM Studio with Mistral and AnythingLLM. Furthermore, when asked about functionality in a particular product, it often offers various possibilities depending on which version is being used, suggesting general knowledge rather than specific expertise. Of course, this is exactly what should be expected from an AI system. After all, it is only predicting the next word based on a whole load of training data.<\/p>\n<p>Let me be clear: the guidance I\u2019ve already received from ChatGPT has enabled me to make considerable progress in a relatively short period of time; and I plan to continue to rely on it to guide my future steps in this journey \u2013 after all, I have no other option. 
However, I will remain alert to the possibility of its advice being incomplete, unsound, or even wrong; and I will rely on my actual experience with the software itself to draw my own conclusions.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When I wrote the previous post just over a week ago, I thought I had an operational AI configuration; but it turned out not to be the case. I was getting \u201cOllama not responding\u201d more often than not when I &hellip; <a href=\"https:\/\/www.pwofc.com\/ofc\/2026\/03\/20\/a-software-stack-for-64gb\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[40],"tags":[],"class_list":["post-2760","post","type-post","status-publish","format-standard","hentry","category-ai-for-personal-archives"],"_links":{"self":[{"href":"https:\/\/www.pwofc.com\/ofc\/wp-json\/wp\/v2\/posts\/2760","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.pwofc.com\/ofc\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.pwofc.com\/ofc\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.pwofc.com\/ofc\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.pwofc.com\/ofc\/wp-json\/wp\/v2\/comments?post=2760"}],"version-history":[{"count":1,"href":"https:\/\/www.pwofc.com\/ofc\/wp-json\/wp\/v2\/posts\/2760\/revisions"}],"predecessor-version":[{"id":2762,"href":"https:\/\/www.pwofc.com\/ofc\/wp-json\/wp\/v2\/posts\/2760\/revisions\/2762"}],"wp:attachment":[{"href":"https:\/\/www.pwofc.com\/ofc\/wp-json\/wp\/v2\/media?parent=2760"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.pwofc.com\/ofc\/wp-json\/wp\/v2\/categories?post=2760"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.pwofc.com\/ofc\/wp-json\/wp\/v2\/tags?
post=2760"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}