Get up and running with Llama 2, Mistral, Gemma, and other large language models.
https://ollama.com
You can find a list of models available for use at https://ollama.com/library.
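Models from the library are fetched and run with the `ollama` CLI. A minimal sketch, assuming the ollama server is installed and running locally (the model name is just an example):

```shell
# Download a model from the library, then run a one-shot prompt against it.
ollama pull mistral
ollama run mistral "Why is the sky blue?"
```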
- Developed at science:machinelearning
- Sources inherited from project openSUSE:Factory
- 3 derived packages
Checkout Package
osc -A https://api.opensuse.org checkout openSUSE:Backports:SLE-15-SP4:FactoryCandidates/ollama && cd $_
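After checkout, the package can be built locally with `osc build`. A sketch; the repository and architecture names below are examples and depend on the build targets configured for this project:

```shell
# Build the package in a local chroot for a chosen repository/architecture pair.
# Repository name (openSUSE_Tumbleweed) and arch (x86_64) are illustrative.
osc build openSUSE_Tumbleweed x86_64
```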
Source Files

Filename | Size
---|---
_service | 804 Bytes
_servicedata | 234 Bytes
enable-lto.patch | 1.51 KB
ollama-0.3.3.obscpio | 147 MB
ollama-user.conf | 90 Bytes
ollama.changes | 25.1 KB
ollama.obsinfo | 95 Bytes
ollama.service | 193 Bytes
vendor.tar.zstd | 5.11 MB
Revision 17 (latest revision is 24)

Dominique Leuenberger (dimstar_suse) accepted request 1191409 from Eyad Issa (VaiTon) (revision 17)
- Update to version 0.3.3:
  * The /api/embed endpoint now returns statistics: total_duration, load_duration, and prompt_eval_count
  * Added usage metrics to the /v1/embeddings OpenAI compatibility API
  * Fixed issue where /api/generate would respond with an empty string if provided a context
  * Fixed issue where /api/generate would return an incorrect value for context
  * /show modelfile will now render MESSAGE commands correctly
- Update to version 0.3.2:
  * Fixed issue where ollama pull would not resume download progress
  * Fixed issue where phi3 would report an error on older versions
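The statistics added to /api/embed in 0.3.3 can be used to derive simple throughput figures. A sketch in Python that parses an illustrative response body (the numeric values are made up; the field names match the changelog, and durations are reported in nanoseconds):

```python
import json

# Illustrative /api/embed response body. Field names (total_duration,
# load_duration, prompt_eval_count) are the statistics added in 0.3.3;
# the values here are invented for the example.
sample = json.loads("""
{
  "model": "all-minilm",
  "embeddings": [[0.01, -0.02, 0.03]],
  "total_duration": 14143917,
  "load_duration": 1019500,
  "prompt_eval_count": 8
}
""")

total_s = sample["total_duration"] / 1e9          # nanoseconds -> seconds
tokens_per_s = sample["prompt_eval_count"] / total_s
print(f"embedded {sample['prompt_eval_count']} tokens "
      f"in {total_s:.4f}s ({tokens_per_s:.0f} tok/s)")
```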