Running large language models locally
Model Shop

Image © sdecoret, 123RF.com
Ollama and Open WebUI let you join the AI revolution without relying on the cloud.
Large language models (LLMs) such as those behind OpenAI's [1] ChatGPT [2] are too resource-intensive to run locally on your own computer, which is why they are deployed as paid online services. Since ChatGPT's release, however, there have been significant advances in smaller LLMs. Many of these smaller models are open source or carry a liberal license (see the "Licenses" box). You can run them on your own computer without sending your input to a cloud server and without paying a fee to an online service.
Because these LLMs are computationally intensive and need a lot of RAM, running them on your CPU alone can be slow. For optimal performance, you need a GPU: GPUs have many parallel compute cores and fast dedicated memory. An NVIDIA or AMD GPU with 8GB of video RAM or more is recommended.
In addition to the hardware and the models, you also need software to run the models. One popular package is Ollama [3], named after Meta AI's Llama [4] family of large language models. Ollama is a command-line application for Linux, macOS, and Windows, and it can also run as a server that other software connects to.
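After installing Ollama, a command such as `ollama run llama3.2` pulls a model (here, a small Llama variant from the Ollama library) and starts an interactive chat. In server mode, Ollama listens on port 11434 and exposes a small REST API that other programs can call. The following Python sketch builds a request for the `/api/generate` endpoint using only the standard library; the endpoint path, port, and JSON field names follow the Ollama REST API as commonly documented, and the model name is an example you would substitute with one you have pulled:

```python
import json
import urllib.request

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an HTTP POST request for Ollama's /api/generate endpoint.

    Assumes the default Ollama server address (localhost:11434) and the
    documented JSON fields: "model", "prompt", and "stream".
    """
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of chunks
    }).encode("utf-8")
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Usage, with `ollama serve` running and the model already pulled:
#   req = build_request("llama3.2", "Explain Linux namespaces briefly.")
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

Because the request is plain HTTP with a JSON body, any language or tool (including `curl`) can talk to the server the same way, which is what lets front ends such as Open WebUI connect to a local Ollama instance.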
[...]