Revisions of ollama
Ana Guerrero (anag+factory) accepted request 1225993 from Factory Maintainer (factory-maintainer) (revision 25)
Automatic submission by obs-autosubmit
Dominique Leuenberger (dimstar_suse) accepted request 1222485 from Guillaume GARDET (Guillaume_G) (revision 24)
Dominique Leuenberger (dimstar_suse) accepted request 1207827 from Guillaume GARDET (Guillaume_G) (revision 22)
Ana Guerrero (anag+factory) accepted request 1204591 from Eyad Issa (VaiTon) (revision 21)
- Update to version 0.3.12:
  * Llama 3.2: Meta's Llama 3.2 goes small with 1B and 3B models (see the sketch after this entry).
  * Qwen 2.5 Coder: the latest series of code-specific Qwen models, with significant improvements in code generation, code reasoning, and code fixing.
  * Ollama now supports ARM Windows machines
  * Fixed rare issue where Ollama would report a missing .dll file on Windows
  * Fixed performance issue for Windows without GPUs
  (forwarded request 1204394 from cabelo)
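A minimal sketch of exercising one of the new Llama 3.2 models through Ollama's /api/generate endpoint, assuming a local server on the default port 11434 and that the model has already been pulled (e.g. ollama pull llama3.2; "llama3.2" resolves to the 3B tag, "llama3.2:1b" to the 1B variant):

    # Sketch only: non-streaming generation request against a local Ollama server.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3.2",          # assumes the model was pulled beforehand
            "prompt": "Why is the sky blue?",
            "stream": False,              # return one JSON object instead of a stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])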
Dominique Leuenberger (dimstar_suse) accepted request 1194354 from Eyad Issa (VaiTon) (revision 18)
- Update to version 0.3.6:
  * Fixed issue where /api/embed would return an error instead of loading the model when the input field was not provided
  * ollama create can now import Phi-3 models from Safetensors
  * Added progress information to ollama create when importing GGUF files
  * Ollama will now import GGUF files faster by minimizing file copies
- Update to version 0.3.5:
  * Fixed issue where temporary files would not be cleaned up
  * Fix rare error when Ollama would start up due to invalid model data
- Update to version 0.3.4:
  * New embedding models
    - BGE-M3: a large embedding model from BAAI distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity
    - BGE-Large: a large embedding model trained in English
    - Paraphrase-Multilingual: a multilingual embedding model trained on parallel data for 50+ languages
  * New embedding API with batch support (see the sketch after this entry)
    - Ollama now supports a new API endpoint /api/embed for embedding generation
  * This API endpoint supports new features:
    - Batches: generate embeddings for several documents in one request
    - Normalized embeddings: embeddings are now normalized, improving similarity results
    - Truncation: a new truncate parameter that will error if set to false
    - Metrics: responses include load_duration, total_duration and prompt_eval_count metrics
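A sketch of the batched /api/embed endpoint described above, assuming a local Ollama server on port 11434; the bge-m3 model name is only an example of an embedding model that would need to be pulled first:

    # Sketch only: one request embeds a batch of documents and the response
    # carries the metrics fields listed in the changelog.
    import json
    import urllib.request

    payload = {
        "model": "bge-m3",
        "input": [                      # batch: several documents in one request
            "Ollama now ships a batched embedding API.",
            "Returned embeddings are normalized.",
        ],
        "truncate": True,               # False makes over-long input an error instead
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/embed",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())

    print(len(result["embeddings"]))    # one vector per input document
    print(result["total_duration"], result["load_duration"], result["prompt_eval_count"])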
Dominique Leuenberger (dimstar_suse) accepted request 1191409 from Eyad Issa (VaiTon) (revision 17)
- Update to version 0.3.3:
  * The /api/embed endpoint now returns statistics: total_duration, load_duration, and prompt_eval_count
  * Added usage metrics to the /v1/embeddings OpenAI compatibility API (see the sketch after this entry)
  * Fixed issue where /api/generate would respond with an empty string if provided a context
  * Fixed issue where /api/generate would return an incorrect value for context
  * /show modelfile will now render MESSAGE commands correctly
- Update to version 0.3.2:
  * Fixed issue where ollama pull would not resume download progress
  * Fixed issue where phi3 would report an error on older versions
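A sketch of the OpenAI-compatible /v1/embeddings route mentioned in the 0.3.3 entry, showing where the added usage metrics surface; it assumes a local server on port 11434 and uses all-minilm purely as an illustrative embedding model:

    # Sketch only: OpenAI-style embeddings request against Ollama's compatibility API.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/v1/embeddings",
        data=json.dumps({"model": "all-minilm", "input": "hello world"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())

    print(body["data"][0]["embedding"][:4])   # OpenAI-style response shape
    print(body["usage"])                      # the usage metrics added in 0.3.3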
Dominique Leuenberger (dimstar_suse) accepted request 1189982 from Eyad Issa (VaiTon) (revision 15)
- Update to version 0.3.0:
  * Ollama now supports tool calling with popular models such as Llama 3.1. This enables a model to answer a given prompt using tool(s) it knows about, making it possible for models to perform more complex tasks or interact with the outside world (see the sketch after this entry).
  * New models:
    ~ Llama 3.1
    ~ Mistral Large 2
    ~ Firefunction v2
    ~ Llama-3-Groq-Tool-Use
  * Fixed duplicate error message when running ollama create
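A hedged sketch of the tool-calling flow introduced in 0.3.0: a tool schema is passed to /api/chat and the model may respond with tool_calls instead of plain text. It assumes a local server on port 11434 with llama3.1 pulled; get_current_weather is a made-up example tool, not part of Ollama:

    # Sketch only: chat request advertising one callable tool.
    import json
    import urllib.request

    payload = {
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "What is the weather in Paris right now?"}],
        "stream": False,
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_current_weather",          # hypothetical tool
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        message = json.loads(resp.read())["message"]

    # If the model chose to call the tool, its name and arguments appear in
    # tool_calls; the caller runs the function and feeds the result back as a
    # follow-up "tool" message.
    print(message.get("tool_calls") or message["content"])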
Ana Guerrero (anag+factory) accepted request 1188404 from Eyad Issa (VaiTon) (revision 13)
- Fixed issue with shared libraries
- Added %check section
- Use -v when building
- Update to version 0.2.6:
  * New models: MathΣtral, a 7B model designed for math reasoning and scientific discovery by Mistral AI
  * Fixed issue where uppercase roles such as USER would no longer work in the chat endpoints
  * Fixed issue where empty system message would be included in the prompt
Ana Guerrero (anag+factory) accepted request 1187407 from Eyad Issa (VaiTon) (revision 12)
- Update to version 0.2.5:
- Update to version 0.2.4:
- Update to version 0.2.3:
- Update to version 0.2.2:
- Update to version 0.2.1:
- Update to version 0.2.0:
Ana Guerrero (anag+factory) accepted request 1186033 from Eyad Issa (VaiTon) (revision 11)
- Update to version 0.1.48:
  * Fixed issue where Gemma 2 would continuously output when reaching context limits
  * Fixed out of memory and core dump errors when running Gemma 2
  * /show info will now show additional model information in ollama run
  * Fixed issue where ollama show would result in an error on certain vision models
- Update to version 0.1.47:
  * Added support for Google Gemma 2 models (9B and 27B)
  * Fixed issues with ollama create when importing from Safetensors
- Update to version 0.1.46:
  * Docs (#5149)
  * fix: quantization with template
  * Fix use_mmap parsing for modelfiles
  * Refine mmap default logic on linux
  * Bump latest fedora cuda repo to 39
Dominique Leuenberger (dimstar_suse) accepted request 1183991 from Factory Maintainer (factory-maintainer) (revision 10)
Automatic submission by obs-autosubmit
Ana Guerrero (anag+factory) accepted request 1178089 from Eyad Issa (VaiTon) (revision 8)
- Update to version 0.1.40:
- Update to version 0.1.39:
Ana Guerrero (anag+factory) accepted request 1175956 from Eyad Issa (VaiTon) (revision 7)
- Added 15.6 build