Get up and running with Llama 2, Mistral, Gemma, and other large language models.
https://ollama.com
You can find a list of models available for use at https://ollama.com/library.
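Once the ollama service is running, models can be driven either with the `ollama run` CLI or over the local REST API, which by default listens on port 11434. The sketch below is a minimal illustration, not part of this package: it builds a request for the `/api/generate` endpoint and assumes a locally running server and an already-pulled model named `gemma` (any model from the library would do).

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # ollama's default listen address


def build_generate_request(model: str, prompt: str,
                           stream: bool = False) -> urllib.request.Request:
    """Build a POST request for ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model,
                          "prompt": prompt,
                          "stream": stream}).encode()
    return urllib.request.Request(
        OLLAMA_URL + "/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )


if __name__ == "__main__":
    # Requires a running server and a pulled model, e.g. `ollama pull gemma`
    req = build_generate_request("gemma", "Why is the sky blue?")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

With `stream` set to false the server returns a single JSON object whose `response` field holds the full completion; with streaming enabled it returns one JSON object per line instead.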
- Developed at science:machinelearning
- Sources inherited from project openSUSE:Factory
- 3 derived packages
Checkout Package:

    osc -A https://api.opensuse.org checkout openSUSE:Factory:Rebuild/ollama && cd $_
Source Files
Filename | Size
---|---
_service | 805 Bytes
_servicedata | 234 Bytes
enable-lto.patch | 1.44 KB
ollama-0.1.48.obscpio | 153 MB
ollama-user.conf | 90 Bytes
ollama.changes | 19.4 KB
ollama.obsinfo | 96 Bytes
ollama.service | 193 Bytes
ollama.spec | 2.62 KB
vendor.tar.zstd | 5.06 MB
Revision 11 (latest revision is 25)

Ana Guerrero (anag+factory) accepted request 1186033 from Eyad Issa (VaiTon) (revision 11)
- Update to version 0.1.48:
  * Fixed issue where Gemma 2 would continuously output when reaching context limits
  * Fixed out of memory and core dump errors when running Gemma 2
  * `/show info` will now show additional model information in `ollama run`
  * Fixed issue where `ollama show` would result in an error on certain vision models
- Update to version 0.1.47:
  * Added support for Google Gemma 2 models (9B and 27B)
  * Fixed issues with `ollama create` when importing from Safetensors
- Update to version 0.1.46:
  * Docs (#5149)
  * fix: quantization with template
  * Fix use_mmap parsing for modelfiles
  * Refine mmap default logic on linux
  * Bump latest fedora cuda repo to 39