Alpaca makes it easy to run powerful AI language models on Linux

Want to use an AI language model (often called an LLM, or “Large Language Model”) like ChatGPT as a digital assistant, but have concerns about how your data is handled? Are the features of Microsoft Copilot appealing to you, but the potential privacy nightmare scares you? Maybe you just don’t want to pay a subscription fee or get locked into a third-party ecosystem. Maybe you’d rather embrace open source software. Whatever your reasons, running an AI model locally may be the solution.

When I first started digging into running LLMs locally on Linux, I thought I would have to rely on command line tools like Ollama. While Ollama is fully capable, and running anything in a terminal has a way of making you feel like a hacker, it doesn’t exactly boast a beginner-friendly interface.

Later, I discovered LM Studio, a cross-platform solution that runs natively on Windows, macOS, and Linux. It is powerful and flexible, with a traditional GUI. But if I’m honest, sometimes too much power and flexibility can be distracting or lead to breakage. Plus, the only aspect of LM Studio that is open source is the command line tool. I’m not inherently against proprietary, closed source software, but I appreciate that if a developer decides to stop working on an open source project, it can be forked and live on.

So my journey took me to Alpaca, which acts as a graphical front end for Ollama. And when we consider what really matters, the bottom line is this: it’s simply easy to use.
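Under the hood, a frontend like Alpaca talks to a local Ollama service over HTTP, which is also how any other program on your machine can reach the same models. As a rough sketch (assuming Ollama’s standard port 11434 and a hypothetical model tag `llama3.1:8b` — Alpaca’s bundled instance may be configured differently), a local chat request looks like this:

```python
import json
import urllib.request

# Ollama's conventional local endpoint. This is an assumption for
# illustration; the bundled backend may use a different port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for a local Ollama HTTP API.

    Nothing is sent until urlopen() is called — and even then,
    the traffic never leaves your machine."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send the prompt (requires a running backend with the model pulled):
#   with urllib.request.urlopen(build_chat_request("llama3.1:8b", "Hello!")) as r:
#       print(json.loads(r.read())["response"])
```

That everything stays on `localhost` is the whole privacy argument in one line: there is no remote server to log your prompts.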

Alpaca is simple and intuitive. It’s easy to install on any Linux distribution via Flathub, and comes bundled with the Ollama backend. There is no complicated setup involved; just choose an AI model to download and start chatting.

I started with Meta’s newly released Llama 3.1, which is open source and comes in 8 billion, 70 billion, and 405 billion parameter sizes. (Think of parameters as a way to measure the complexity of a language model. The higher the number, the more capable it is.) Granted, the 405 billion parameter version is massive and requires a 231GB download. By comparison, GPT-3’s parameter count was 175 billion. Using these full-fat models can bring the best consumer PC to its knees, but in my experience the smaller models are perfectly capable of being bona fide digital assistants and chatbots.
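To build an intuition for why parameter counts translate into big downloads, a back-of-the-envelope estimate helps. The sketch below assumes roughly 4-bit quantized weights, which is in the ballpark of typical Ollama downloads; real file sizes vary with the quantization scheme and metadata, which is why the 405B model’s actual 231GB download is somewhat larger than this naive figure:

```python
def approx_model_size_gb(params_billions: float, bits_per_weight: float = 4.0) -> float:
    """Rough download-size estimate: parameter count × bits per weight.

    4 bits per weight is an assumed typical quantization level;
    actual sizes depend on the scheme used."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return round(total_bytes / 1e9, 1)

approx_model_size_gb(8)    # ~4.0 GB — manageable on most modern PCs
approx_model_size_gb(405)  # ~202.5 GB — in the neighborhood of the 231GB quoted above
```

The same arithmetic explains why the 8B model is the sensible starting point for most desktops: it fits comfortably in the RAM of an ordinary machine, while the 405B model does not.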

On the plus side, Alpaca integrates natively with your system notifications, so you can minimize the app while it’s generating a response and be notified when the response is ready.

But if you decide a particular model isn’t for you, deleting it and freeing up some drive space is a breeze. Just go to the “Manage Models” menu, where you can search, browse, and delete dozens of models.

The Alpaca project came into being in June 2024, but the first stable release was launched just a few weeks ago. The improvements seem to be rolling out steadily, but there is a paper-cut issue that may affect some AMD GPU users: Alpaca uses CPU resources instead of the GPU, which makes processing slower. It appears to be a bug related to ROCm, but the developer is aware of it and investigating a fix.
