Get up and running with Llama 2, Mistral, Gemma, and other large language models.
https://ollama.com
You can find a list of models available for use at https://ollama.com/library.
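As a quick usage sketch (the model name below is only an example drawn from that library, not part of this package's documentation), a model can be downloaded and run from the command line:

    ollama pull mistral   # download the model from the library
    ollama run mistral    # start an interactive session with it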
- Developed at science:machinelearning
- Sources inherited from project openSUSE:Factory
- 3 derived packages
- Checkout Package
osc -A https://api.opensuse.org checkout openSUSE:Backports:SLE-15-SP4:FactoryCandidates/ollama && cd $_
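Once checked out, the package can be built locally with osc. A minimal sketch follows; the repository and architecture arguments are placeholders, so pick real values from the list that `osc repos` prints for this project:

    osc repos                       # list repositories/architectures configured for the project
    osc build <repository> <arch>   # build the ollama package locally in a chroot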
Source Files
Filename | Size
---|---
_service | 804 Bytes
_servicedata | 234 Bytes
enable-lto.patch | 1.51 KB
ollama-0.3.6.obscpio | 176 MB
ollama-user.conf | 90 Bytes
ollama.changes | 26.8 KB
ollama.obsinfo | 95 Bytes
ollama.service | 193 Bytes
ollama.spec | 2.91 KB
vendor.tar.zstd | 5.11 MB
Revision 18 (latest revision is 24)
Dominique Leuenberger (dimstar_suse) accepted request 1194354 from Eyad Issa (VaiTon) (revision 18)
- Update to version 0.3.6:
  * Fixed issue where /api/embed would return an error instead of
    loading the model when the input field was not provided
  * ollama create can now import Phi-3 models from Safetensors
  * Added progress information to ollama create when importing GGUF files
  * Ollama will now import GGUF files faster by minimizing file copies
- Update to version 0.3.5:
  * Fixed issue where temporary files would not be cleaned up
  * Fixed rare error on startup caused by invalid model data
- Update to version 0.3.4:
  * New embedding models:
    - BGE-M3: a large embedding model from BAAI distinguished for its
      versatility in Multi-Functionality, Multi-Linguality, and
      Multi-Granularity
    - BGE-Large: a large embedding model trained in English
    - Paraphrase-Multilingual: a multilingual embedding model trained on
      parallel data for 50+ languages
  * New embedding API with batch support: Ollama now supports a new API
    endpoint, /api/embed, for embedding generation with these features:
    - Batches: generate embeddings for several documents in one request
    - Normalized embeddings: embeddings are now normalized, improving
      similarity results
    - Truncation: a new truncate parameter that errors when set to false
      and the input exceeds the model's context length
    - Metrics: responses include load_duration, total_duration and
      prompt_eval_count metrics
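The new /api/embed endpoint can be exercised over HTTP. Below is a minimal sketch assuming the ollama service is listening on its default address (localhost:11434) and that an embedding model such as bge-m3 has already been pulled; the model name and input strings are placeholders, not part of this changelog:

    # Batch request: one call returns a normalized embedding per input string.
    # With "truncate": false the request errors instead of truncating input
    # that exceeds the model's context length.
    curl http://localhost:11434/api/embed -d '{
      "model": "bge-m3",
      "input": ["first document", "second document"],
      "truncate": false
    }'

The response also carries the load_duration, total_duration and prompt_eval_count metrics noted above.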