Oget: The Easiest Way to Install Ollama Models Offline
Oget is a command-line tool that lets you download Ollama models directly from the CDN and install them easily in offline environments.
The ollama pull command can turn into a nightmare, especially on slow or unstable internet connections. To permanently solve common problems such as interrupted downloads, timeout errors, and downloads restarting from scratch, we built the Oget tool.
What is Oget?
Oget 🦙 is a lightweight command-line tool that provides direct CDN download links for Ollama models and integrates these files into Ollama offline. It is designed entirely using standard Python libraries, without any external library dependencies.
The standard `ollama pull gemma2:2b` command downloads the model through Ollama's own service infrastructure; when the connection drops, the process usually fails. The Ollama registry, however, uses a standard OCI (Docker) layout. This means that by obtaining the manifest and blob URLs directly, you can complete the download far more stably and quickly with any download manager that supports resuming (IDM, aria2, wget, curl, etc.).
Oget automates this exact process, leaving the control entirely in your hands.
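Because the registry follows the OCI layout, the download URLs are plain string templates. Here is a minimal sketch of how they are assembled; `manifest_url` and `blob_url` are illustrative helpers, not Oget's actual API:

```python
# Sketch of the OCI-style URL layout used by the Ollama registry.
# manifest_url / blob_url are illustrative helpers, not Oget's API.
REGISTRY = "registry.ollama.ai"

def manifest_url(namespace: str, model: str, tag: str) -> str:
    # GET this URL to receive the model manifest as JSON
    return f"https://{REGISTRY}/v2/{namespace}/{model}/manifests/{tag}"

def blob_url(namespace: str, model: str, digest: str) -> str:
    # Each layer listed in the manifest is fetched by its sha256 digest
    return f"https://{REGISTRY}/v2/{namespace}/{model}/blobs/{digest}"

print(manifest_url("library", "gemma2", "2b"))
# → https://registry.ollama.ai/v2/library/gemma2/manifests/2b
```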
Installation
via pip (all platforms)
```shell
pip install oget
```
via AUR (Arch Linux)
```shell
# using yay
yay -S oget

# using paru
paru -S oget

# Manual
git clone https://aur.archlinux.org/oget.git
cd oget
makepkg -si
```
How to Use?
Using Oget basically consists of 3 main steps:
```mermaid
flowchart LR
    A(["🖥️ User"]) --> B["oget get gemma2:2b"]
    B --> C[("registry.ollama.ai<br/>Manifest + Blob URLs")]
    C --> D["curl / IDM / aria2<br/>Download files"]
    D --> E["oget install"]
    E --> F(["✅ ollama run gemma2:2b"])
```
Step 1 — Get the Download Links
You can get the download links for your desired model using the commands below:
```shell
oget get gemma2:2b
oget get deepseek-r1:7b
oget get huihui_ai/deepseek-r1-abliterated:8b
```
When this command runs successfully, the following information is listed on the screen:
- 📄 Direct URL address of the Manifest file
- 📦 CDN URL and file size of each download layer (blob) belonging to the model
- Ready-to-use `curl` commands for direct execution in the terminal
Example terminal output:
```
Manifest:
📄 https://registry.ollama.ai/v2/library/gemma2/manifests/2b

Curl command to download the manifest (run in your manifest folder):
curl -L "https://registry.ollama.ai/v2/library/gemma2/manifests/2b" -o "manifest"

Download links for layers:
1 - [1.6 GB] https://registry.ollama.ai/v2/library/gemma2/blobs/sha256:...
2 - [4.9 KB] https://registry.ollama.ai/v2/library/gemma2/blobs/sha256:...

Curl command to download all blobs (run in your blobs folder):
curl -L "..." -o "sha256-..."
```
Step 2 — Download the Files
You can download the listed files using the ready-to-use curl commands from the terminal output, or with any download manager you prefer (IDM, etc.), into two separate folders:
- The first folder must contain the manifest data.
- The second folder is for downloading the blob files that contain the model weights.
Since the links point directly at Ollama's CDN servers, you are not tied to the limitations of the standard pull flow.
Step 3 — Ollama Installation
After downloading the files, you can start the offline installation process:
```shell
# On Linux/macOS systems (sudo is required for permissions)
sudo oget install --model gemma2:2b --blobsPath ./downloads

# If you want to manually define the Ollama models directory:
sudo oget install --model gemma2:2b --blobsPath ./downloads --modelsPath ~/.ollama/models
```
Once the installation process is complete, you can start your Ollama model using the standard method:
```shell
ollama run gemma2:2b
```
The model is ready to run. This entire process occurs safely over local files, without the need for an internet connection.
Architecture and Background
The core logic of Oget lives in the cmd_get function. Ollama's remote registry, registry.ollama.ai, is structurally compatible with the standard Docker/OCI distribution protocol.
Oget sends a `GET /v2/<namespace>/<model>/manifests/<tag>` request to fetch the model's manifest in JSON format. This manifest lists every data layer that makes up the model, along with each layer's sha256 digest (for integrity verification) and its size on disk.
```python
url = f"https://{DEFAULT_REGISTRY}/v2/{namespace}/{model}/manifests/{tag}"
req = urllib.request.Request(url, headers={
    "Accept": "application/vnd.docker.distribution.manifest.v2+json"
})
```
Thanks to the data gathered from the manifest, a direct and high-speed CDN download route is generated for each blob (model data package):
```python
layer_url = f"https://{DEFAULT_REGISTRY}/v2/{namespace}/{model}/blobs/{digest}"
```
In the installation phase, the cmd_install function that executes the process organizes these locally downloaded data into folders according to the standard expected directory structure of Ollama:
```mermaid
graph TD
    ROOT["📁 models/"]
    ROOT --> MAN["📁 manifests/"]
    ROOT --> BLOBS["📁 blobs/"]
    MAN --> REG["📁 registry.ollama.ai/"]
    REG --> NS["📁 library/"]
    NS --> MOD["📁 gemma2/"]
    MOD --> TAG["📄 2b ← manifest"]
    BLOBS --> B1["📦 sha256-abc... ← model weights"]
    BLOBS --> B2["📦 sha256-def... ← tokenizer etc."]
```
Downloaded files are renamed using the sha256 hash format (Ollama's local storage convention). Even if the file you downloaded has a different name, Oget computes its SHA-256 hash in the background, verifies it, and copies the file into the correct directory under the correct name.
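The verify-and-rename step can be sketched with the standard library alone; `install_blob` below is a hypothetical helper illustrating the idea, not Oget's actual function:

```python
import hashlib
import shutil
from pathlib import Path

def install_blob(src: Path, blobs_dir: Path) -> Path:
    # Hypothetical sketch: hash the downloaded file, then copy it into the
    # blobs directory under Ollama's `sha256-<digest>` naming convention.
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    blobs_dir.mkdir(parents=True, exist_ok=True)
    dest = blobs_dir / f"sha256-{digest}"
    shutil.copy2(src, dest)  # the original filename no longer matters
    return dest
```

Comparing the computed digest against the one listed in the manifest is what lets the tool accept files regardless of the name they were downloaded under.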
Supported Model Naming Formats
Oget fully supports all naming standards (tags & namespaces) of Ollama:
| Format Type | Usage Example |
|---|---|
| `<model>:<tag>` | `gemma2:2b` |
| `<model>` | `gemma2` (automatically targets the `latest` tag) |
| `<namespace>/<model>:<tag>` | `huihui_ai/deepseek-r1-abliterated:8b` |
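Parsing these formats needs only a few lines; the defaults shown (`library` namespace, `latest` tag) match the table above. `parse_model_ref` is an illustrative helper, not Oget's actual API:

```python
def parse_model_ref(ref: str):
    # Illustrative sketch: split "<namespace>/<model>:<tag>", applying
    # Ollama's defaults when the namespace or tag is omitted.
    namespace, _, rest = ref.rpartition("/")
    model, _, tag = rest.partition(":")
    return namespace or "library", model, tag or "latest"

print(parse_model_ref("gemma2:2b"))  # → ('library', 'gemma2', '2b')
print(parse_model_ref("gemma2"))     # → ('library', 'gemma2', 'latest')
print(parse_model_ref("huihui_ai/deepseek-r1-abliterated:8b"))
```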
Automatic Data Directory Detection
Oget detects where your Ollama models are stored on your local disk with a smart priority order:
| Priority | Target |
|---|---|
| 1 | `--modelsPath` command-line argument (highest priority) |
| 2 | `OLLAMA_MODELS` environment variable |

If the directory cannot be found, Oget stops safely and prints an instructive error message.
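The fallback chain above can be sketched in a few lines; `resolve_models_dir` is a hypothetical helper mirroring the priority order, not Oget's actual code:

```python
import os
from pathlib import Path
from typing import Optional

def resolve_models_dir(cli_path: Optional[str] = None) -> Path:
    # 1. An explicit --modelsPath argument always wins
    if cli_path:
        return Path(cli_path).expanduser()
    # 2. Otherwise fall back to the OLLAMA_MODELS environment variable
    env_path = os.environ.get("OLLAMA_MODELS")
    if env_path:
        return Path(env_path).expanduser()
    # Error status: stop safely with an instructive message
    raise FileNotFoundError(
        "Ollama models directory not found: pass --modelsPath or set OLLAMA_MODELS"
    )
```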
Zero External Dependency Guarantee
One of Oget’s strongest aspects is that it does not need external libraries. It only uses Python standard libraries (argparse, urllib, json, shutil, hashlib, platform). Having Python 3.8 or above installed on your system is sufficient; it does not require additional installations like pip install requests.
Supported Platforms
Oget is designed to work platform-independently:
- ✅ Linux
- ✅ macOS
- ✅ Windows
If you have to download and manage large language models (LLMs) over restricted internet connections, such as metered, slow, or corporate networks, Oget makes the process far more flexible and fault-tolerant. Once the model data is on your local disk, you can install it offline on any machine within seconds, via USB drive or local network.
Source Code: fr0stb1rd/oget
PyPI Package: pypi.org/project/oget
Arch Linux AUR: aur.archlinux.org/packages/oget
