Overview

Request 1196351 accepted

The specification file update was automatically generated by the osc command.

I am sending this update at the request of the Factory team, who asked that the .patch files be mentioned in the changelog.

- Remove NPU Compile Tool
* openvino-remove-npu-compile-tool.patch
- Update to 2024.3.0
- Summary of major features and improvements  
* More Gen AI coverage and framework integrations to minimize
code changes
+ OpenVINO pre-optimized models are now available on Hugging
Face, making it easier for developers to get started with
these models.
* Broader Large Language Model (LLM) support and more model
compression techniques.
+ Significant improvement in LLM performance on Intel
discrete GPUs with the addition of Multi-Head Attention
(MHA) and oneDNN enhancements.
* More portability and performance to run AI at the edge, in the
cloud, or locally.
+ Improved CPU performance when serving LLMs with the
inclusion of vLLM and continuous batching in the OpenVINO
Model Server (OVMS). vLLM is an easy-to-use open-source
library that supports efficient LLM inference and model
serving (see the client sketch after this list).
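
(A minimal client sketch for the OVMS OpenAI-compatible serving
mentioned above; the /v3 base path, port 8000, and the model name
"llama" are assumptions for illustration, not part of this changelog.)

    from openai import OpenAI

    # Point the standard OpenAI client at a local OVMS instance.
    client = OpenAI(base_url="http://localhost:8000/v3", api_key="unused")

    response = client.chat.completions.create(
        model="llama",  # hypothetical name from the OVMS model repository
        messages=[{"role": "user", "content": "What is OpenVINO?"}],
        max_tokens=128,
    )
    print(response.choices[0].message.content)
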
- Support Change and Deprecation Notices
* Using deprecated features and components is not advised.
They are available to enable a smooth transition to new
solutions and will be discontinued in the future. To keep
using discontinued features, you will have to revert to the
last LTS OpenVINO version supporting them. For more details,
refer to the OpenVINO Legacy Features and Components page.
* Discontinued in 2024.0:
+ Runtime components:
- Intel® Gaussian & Neural Accelerator (Intel® GNA). Consider
using the Neural Processing Unit (NPU) for low-powered
systems like Intel® Core™ Ultra or 14th generation
and beyond.
- OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API transition
guide for reference).
- All ONNX Frontend legacy API (known as ONNX_IMPORTER_API)
'PerformanceMode.UNDEFINED' property as part of the OpenVINO
Python API
+ Tools:
- Deployment Manager. See installation and deployment guides
for current distribution options.
- Accuracy Checker.
- Post-Training Optimization Tool (POT). Neural Network
Compression Framework (NNCF) should be used instead.
- A Git patch for NNCF integration with
huggingface/transformers. The recommended approach is to
use huggingface/optimum-intel for applying NNCF
optimization on top of models from Hugging Face.
- Support for Apache MXNet, Caffe, and Kaldi model formats.
Conversion to ONNX may be used as a solution.
* Deprecated and to be removed in the future:
+ The OpenVINO™ Development Tools package (pip install
openvino-dev) will be removed from installation options
and distribution channels beginning with OpenVINO 2025.0.
+ Model Optimizer will be discontinued with OpenVINO 2025.0.
Consider using the new conversion methods instead. For
more details, see the model conversion transition guide.
+ OpenVINO property Affinity API will be discontinued with
OpenVINO 2025.0. It will be replaced with CPU binding
configurations (ov::hint::enable_cpu_pinning); see the
sketch after this list.
+ OpenVINO Model Server components:
- “auto shape” and “auto batch size” (reshaping a model
in runtime) will be removed in the future. OpenVINO’s
dynamic shape models are recommended instead.
+ A number of notebooks have been deprecated. For an
up-to-date listing of available notebooks, refer to
the OpenVINO™ Notebook index (openvinotoolkit.github.io).
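
(A minimal sketch of the CPU binding hint that replaces the Affinity
API, assuming a local IR file "model.xml"; property naming follows
the OpenVINO 2024 Python API.)

    import openvino as ov
    import openvino.properties.hint as hints

    core = ov.Core()
    model = core.read_model("model.xml")  # hypothetical IR file

    # Pin inference threads to CPU cores instead of using the
    # deprecated Affinity property.
    compiled = core.compile_model(model, "CPU",
                                  {hints.enable_cpu_pinning: True})
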
- Add riscv-cpu-plugin subpackage
- Update to 2024.2.0
- More Gen AI coverage and framework integrations to minimize code
changes
* Llama 3 optimizations for CPUs, built-in GPUs, and discrete
GPUs for improved performance and efficient memory usage.
* Support for Phi-3-mini, a family of AI models that leverages
the power of small language models for faster, more accurate
and cost-effective text processing.
* Python Custom Operation is now enabled in OpenVINO, making it
easier for Python developers to code their custom operations
instead of using C++ custom operations (also supported).
Python Custom Operation empowers users to implement their own
specialized operations into any model.
* Notebooks expansion to ensure better coverage for new models.
Noteworthy notebooks added: DynamiCrafter, YOLOv10, Chatbot
notebook with Phi-3, and QWEN2.
- Broader Large Language Model (LLM) support and more model
compression techniques.
* GPTQ method for 4-bit weight compression added to NNCF for
more efficient inference and improved performance of
compressed LLMs (see the sketch after this list).
* Significant LLM performance improvements and reduced latency
for both built-in GPUs and discrete GPUs.
* Significant improvement in 2nd token latency and memory
footprint of FP16 weight LLMs on AVX2 (13th Gen Intel® Core™
processors) and AVX512 (3rd Gen Intel® Xeon® Scalable
Processors) based CPU platforms, particularly for small
batch sizes.
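
(A rough sketch of 4-bit weight compression with NNCF as mentioned
above; the IR path is hypothetical, and the data-aware GPTQ variant
additionally needs a calibration dataset passed via nncf.Dataset.)

    import nncf
    import openvino as ov

    core = ov.Core()
    model = core.read_model("llm.xml")  # hypothetical LLM IR

    compressed = nncf.compress_weights(
        model,
        mode=nncf.CompressWeightsMode.INT4_SYM,
        ratio=0.8,        # share of weights compressed to 4 bit
        group_size=128,   # quantization group size
    )
    ov.save_model(compressed, "llm-int4.xml")
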
- More portability and performance to run AI at the edge, in the
cloud, or locally.
* Model Serving Enhancements:
* Preview: OpenVINO Model Server (OVMS) now supports
OpenAI-compatible API along with Continuous Batching and
PagedAttention, enabling significantly higher throughput
for parallel inferencing, especially on Intel® Xeon®
processors, when serving LLMs to many concurrent users.
* OpenVINO backend for Triton Server now supports built-in
GPUs and discrete GPUs, in addition to dynamic
shapes support.
* Integration of TorchServe through torch.compile OpenVINO
backend for easy model deployment, provisioning to
multiple instances, model versioning, and maintenance
(see the first sketch after this list).
* Preview: addition of the Generate API, a simplified API
for text generation using large language models with only
a few lines of code. The API is available through the newly
launched OpenVINO GenAI package (see the second sketch
after this list).
* Support for Intel Atom® Processor X Series. For more details,
see System Requirements.
* Preview: Support for Intel® Xeon® 6 processor.
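
(A small sketch of the torch.compile OpenVINO backend noted in the
list above; the ResNet model and input shape are illustrative only.)

    import torch
    import torchvision.models as models
    import openvino.torch  # registers the "openvino" backend

    model = models.resnet50(weights=None).eval()
    compiled_model = torch.compile(model, backend="openvino")

    with torch.no_grad():
        out = compiled_model(torch.randn(1, 3, 224, 224))
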
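(A minimal sketch of the Generate API from the new OpenVINO GenAI
package, installable via pip install openvino-genai; the model
directory is a hypothetical path to an already exported LLM.)

    import openvino_genai

    pipe = openvino_genai.LLMPipeline("./TinyLlama-ov", "CPU")
    print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
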
- Support Change and Deprecation Notices
* Using deprecated features and components is not advised.
They are available to enable a smooth transition to new
solutions and will be discontinued in the future.
To keep using discontinued features, you will have to revert
to the last LTS OpenVINO version supporting them. For more
details, refer to the OpenVINO Legacy Features and
Components page.
* Discontinued in 2024.0:
+ Runtime components:
- Intel® Gaussian & Neural Accelerator (Intel® GNA).
Consider using the Neural Processing Unit (NPU) for
low-powered systems like Intel® Core™ Ultra or 14th
generation and beyond.
- OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API
transition guide for reference).
- All ONNX Frontend legacy API (known as ONNX_IMPORTER_API)
- 'PerformanceMode.UNDEFINED' property as part of the
OpenVINO Python API
+ Tools:
- Deployment Manager. See installation and deployment
guides for current distribution options.
- Accuracy Checker.
- Post-Training Optimization Tool (POT). Neural Network
Compression Framework (NNCF) should be used instead.
- A Git patch for NNCF integration with
huggingface/transformers. The recommended approach
is to use huggingface/optimum-intel for applying NNCF
optimization on top of models from Hugging Face.
- Support for Apache MXNet, Caffe, and Kaldi model formats.
Conversion to ONNX may be used as a solution.
* Deprecated and to be removed in the future:
+ The OpenVINO™ Development Tools package (pip install
openvino-dev) will be removed from installation options
and distribution channels beginning with OpenVINO 2025.0.
+ Model Optimizer will be discontinued with OpenVINO 2025.0.
Consider using the new conversion methods instead. For
more details, see the model conversion transition guide
(a conversion sketch follows this list).
+ OpenVINO property Affinity API will be discontinued with
OpenVINO 2025.0. It will be replaced with CPU binding
configurations (ov::hint::enable_cpu_pinning).
+ OpenVINO Model Server components:
- “auto shape” and “auto batch size” (reshaping a model in
runtime) will be removed in the future. OpenVINO’s dynamic
shape models are recommended instead.
+ A number of notebooks have been deprecated. For an
up-to-date listing of available notebooks, refer to the
OpenVINO™ Notebook index (openvinotoolkit.github.io).
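
(A minimal sketch of the conversion flow that supersedes Model
Optimizer, assuming a local ONNX file; ov.convert_model is the
in-Python replacement for the legacy mo tool.)

    import openvino as ov

    model = ov.convert_model("model.onnx")  # hypothetical input model
    ov.save_model(model, "model.xml")       # writes IR (.xml + .bin)
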
- Fix sample source path in build script:
* openvino-fix-build-sample-path.patch
- Update to 2024.1.0
- More Generative AI coverage and framework integrations to
minimize code changes.
* Mixtral and URLNet models optimized for performance
improvements on Intel® Xeon® processors.
* Stable Diffusion 1.5, ChatGLM3-6B, and Qwen-7B models
optimized for improved inference speed on Intel® Core™
Ultra processors with integrated GPU.
* Support for Falcon-7B-Instruct, a GenAI Large Language Model
(LLM) ready-to-use chat/instruct model with superior
performance metrics.
* New Jupyter Notebooks added: YOLO V9, YOLO V8
Oriented Bounding Boxes Detection (OBB), Stable Diffusion
in Keras, MobileCLIP, RMBG-v1.4 Background Removal, Magika,
TripoSR, AnimateAnyone, LLaVA-Next, and RAG system with
OpenVINO and LangChain.
- Broader Large Language Model (LLM) support and more model
compression techniques.
* LLM compilation time reduced through additional optimizations
with compressed embedding. Improved 1st token performance of
LLMs on 4th and 5th generations of Intel® Xeon® processors
with Intel® Advanced Matrix Extensions (Intel® AMX).
* Better LLM compression and improved performance with oneDNN,
INT4, and INT8 support for Intel® Arc™ GPUs.
* Significant memory reduction for select smaller GenAI
models on Intel® Core™ Ultra processors with integrated GPU.
- More portability and performance to run AI at the edge,
in the cloud, or locally.
* The preview NPU plugin for Intel® Core™ Ultra processors
is now available in the OpenVINO open-source GitHub
repository, in addition to the main OpenVINO package on PyPI.
* The JavaScript API is now more easily accessible through
the npm repository, enabling JavaScript developers’ seamless
access to the OpenVINO API.
* FP16 inference on ARM processors now enabled for the
Convolutional Neural Network (CNN) by default.
- Support Change and Deprecation Notices
* Using deprecated features and components is not advised. They
are available to enable a smooth transition to new solutions
and will be discontinued in the future. To keep using
discontinued features, you will have to revert to the last
LTS OpenVINO version supporting them.
* For more details, refer to the OpenVINO Legacy Features
and Components page.
* Discontinued in 2024.0:
+ Runtime components:
- Intel® Gaussian & Neural Accelerator (Intel® GNA).
Consider using the Neural Processing Unit (NPU)
for low-powered systems like Intel® Core™ Ultra or
14th generation and beyond.
- OpenVINO C++/C/Python 1.0 APIs (see 2023.3 API
transition guide for reference).
- All ONNX Frontend legacy API (known as
ONNX_IMPORTER_API)
- 'PerformanceMode.UNDEFINED' property as part of
the OpenVINO Python API
+ Tools:
- Deployment Manager. See installation and deployment
guides for current distribution options.
- Accuracy Checker.
- Post-Training Optimization Tool (POT). Neural Network
Compression Framework (NNCF) should be used instead.
- A Git patch for NNCF integration with
huggingface/transformers. The recommended approach
is to use huggingface/optimum-intel for applying
NNCF optimization on top of models from Hugging
Face (see the sketch after this section).
- Support for Apache MXNet, Caffe, and Kaldi model
formats. Conversion to ONNX may be used as
a solution.
* Deprecated and to be removed in the future:
+ The OpenVINO™ Development Tools package (pip install
openvino-dev) will be removed from installation options
and distribution channels beginning with OpenVINO 2025.0.
+ Model Optimizer will be discontinued with OpenVINO 2025.0.
Consider using the new conversion methods instead. For
more details, see the model conversion transition guide.
+ OpenVINO property Affinity API will be discontinued with
OpenVINO 2025.0. It will be replaced with CPU binding
configurations (ov::hint::enable_cpu_pinning).
+ OpenVINO Model Server components:
- “auto shape” and “auto batch size” (reshaping a model
in runtime) will be removed in the future. OpenVINO’s
dynamic shape models are recommended instead.
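
(A short sketch of the huggingface/optimum-intel route recommended
above in place of the dropped NNCF transformers patch; the model id
is illustrative.)

    from optimum.intel import OVModelForCausalLM

    model = OVModelForCausalLM.from_pretrained(
        "TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative model id
        export=True,        # convert to OpenVINO IR on the fly
        load_in_8bit=True,  # NNCF 8-bit weight compression at export
    )
    model.save_pretrained("tinyllama-ov")
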
- License update: play safe and list all third party licenses as
part of the License tag.
- Switch to _service file as tagged Source tarball does not
include `./thirdparty` submodules.
- Update openvino-fix-install-paths.patch to fix python module
install path.
- Enable python module and split it out into a python subpackage
(for now default python3 only).
- Explicitly build python metadata (dist-info) and install it
(needs simple sed hackery to support "officially" unsupported
platform ppc64le).
- Specify ENABLE_JS=OFF to turn off javascript bindings as
building these requires downloading npm stuff from the network.
- Build with system pybind11.
- Bump _constraints for updated disk space requirements.
- Drop empty %check section, rpmlint was misleading when it
recommended adding this.
- Numerous specfile cleanups:
* Drop redundant `mv` commands and use `install` where
appropriate.
* Build with system protobuf.
* Fix Summary tags.
* Trim package descriptions.
* Drop forcing CMAKE_BUILD_TYPE=Release, let macro default
RelWithDebInfo be used instead.
* Correct naming of shared library packages.
* Separate out libopenvino_c.so.* into own shared lib package.
* Drop rpmlintrc rule used to hide shlib naming mistakes.
* Rename Source tarball to %{name}-%{version}.EXT pattern.
* Use ldconfig_scriptlet macro for post(un).
- Add openvino-onnx-ml-defines.patch -- Define ONNX_ML at compile
time when using system onnx to allow using 'onnx-ml.pb.h'
instead of 'onnx.pb.h', the latter not being shipped with
openSUSE's onnx-devel package (gh#onnx/onnx#3074).
- Add openvino-fix-install-paths.patch: Change hard-coded install
paths in upstream cmake macro to standard Linux dirs.
- Add openvino-ComputeLibrary-include-string.patch: Include header
for std::string.
- Add external devel packages as Requires for openvino-devel.
- Pass -Wl,-z,noexecstack to %build_ldflags to avoid an exec stack
issue with the Intel CPU plugin.
- Use ninja for build.
- Adapt _constraints file for correct disk space and memory
requirements.
- Add empty %check section.
- Initial package
- Version 2024.0.0
- Add openvino-rpmlintrc.

Request History
cabelo (Alessandro de Oliveira Faria) created request

Guillaume_G (Guillaume GARDET) accepted request
