ollama
https://ollama.com

Get up and running with Llama 2, Mistral, Gemma, and other large language models.

You can find a list of models available for use at https://ollama.com/library.
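
Once the package is installed, a typical first run looks roughly like this (a minimal sketch; the service name is assumed to match the bundled ollama.service, and "llama2" is just an example model from the library):

    sudo systemctl enable --now ollama   # start the server shipped with this package
    ollama run llama2                    # pulls the model on first use, then opens an interactive prompt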

Source Files
Filename                           Size
_service                           802 Bytes
_servicedata                       234 Bytes
ollama-0.4.2.obscpio               17 MB
ollama-add-install-targets.patch   2.57 KB
ollama-lib64-runner-path.patch     686 Bytes
ollama-pr7499.patch                73.5 KB
ollama-use-external-cc.patch       704 Bytes
ollama-user.conf                   158 Bytes
ollama-verbose-tests.patch         352 Bytes
ollama.changes                     40.1 KB
ollama.obsinfo                     95 Bytes
ollama.service                     221 Bytes
ollama.spec                        4.57 KB
vendor.tar.zstd                    5.12 MB
Latest Revision
Oleksandr Ostrenko (birdwatcher) committed (revision 21)
* refactor install patch & build script
* verbose tests
Comments (1)

Hzu:

AMD users, install this one; it is the package you want for proper ROCm support. Thanks birdwatcher for taking the time to make Ollama and the ROCm modules fully available on Tumbleweed.

On a Radeon 780M there is no need to modify anything to get it running. However, due to limitations in ROCm (and perhaps in Ollama as well), you may be limited to 4096 MiB of VRAM. My GTT reports more than 7000 MiB of memory, yet ROCm only detects 4096 MiB and crashes on most 7B models, even though UMA is set to 16G in the BIOS.

As a workaround, you need to set a custom GTT size as well as TTM pool and page pool sizes to use your whole available VRAM. Instructions here: https://www.reddit.com/r/ROCm/comments/1g3lnuj/rocm_apu_680m_and_gtt_memory_on_arch/
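
For reference, that workaround boils down to kernel command-line parameters along these lines (a sketch only, not something this package configures; the parameter names come from the guides above, and the values assume you want to expose a full 16 GiB UMA carve-out, i.e. 4194304 pages of 4 KiB):

    # appended to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
    amdgpu.gttsize=16384 ttm.pages_limit=4194304 ttm.page_pool_size=4194304

    # then regenerate the bootloader config and reboot
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg

Double-check the parameter names against your kernel version before relying on them.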

There's an open PR in Ollama's repository as well: https://github.com/ollama/ollama/pull/6282
