Edwin Perkins

Low-Cost and Safe DeepSeek-R1

It has been a while since we looked at running large language models (LLMs) locally. In AI, things move fast; while the foundational aspects haven't changed much, the LLMs themselves have changed a great deal. We have seen...

The Short-Term Future of Computer Virtualization with VMware

By now most VMware customers are aware of Broadcom's acquisition of the popular computer virtualization software. The purchase has created a lot of confusion and uncertainty in an area of IT often counted upon for predictability and stability. IT...

Local LLMs Part 3 – Linux

In parts I and II of this series, we looked at setting up local LLMs on Apple macOS and Microsoft Windows, respectively. This post digs into using Linux to run an LLM. In several respects, Linux is different from both macOS and...

Local LLMs Part 2 – Microsoft Windows

In part one of this series, we looked at running Meta's LLaMA 2 large language model (LLM) directly on Apple Silicon-based computers. This allows a ChatGPT-like AI assistant to run without an Internet connection, but much more importantly to...

Local LLMs Part 1 – Apple MacOS

Running large language models on your local computer can be a safe and cost-effective way to use the latest artificial intelligence tools. This blog post outlines the steps needed for local AI use on Apple Macs with Apple Silicon CPUs.
