File product-composer-0.4.20.obscpio of Package product-composer
--- product-composer-0.4.20/.github/workflows/tests.yaml ---

name: 'tests'

on:
  pull_request:
    branches: ['main']

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  unit:
    name: "unit"
    runs-on: 'ubuntu-latest'
    strategy:
      fail-fast: false
      matrix:
        container:
          - 'registry.opensuse.org/opensuse/tumbleweed'
    container:
      image: ${{ matrix.container }}
    steps:
      - name: 'Install packages'
        run: |
          zypper -n modifyrepo --disable repo-openh264 || :
          zypper -n --gpg-auto-import-keys refresh
          zypper -n install python3 python3-pip python3-pydantic python3-pytest python3-rpm python3-setuptools python3-solv python3-PyYAML
      - uses: actions/checkout@v4
      - name: 'Run unit tests'
        run: |
          pip3 config set global.break-system-packages 1
          pip3 install --no-dependencies -e .
          pytest tests

--- product-composer-0.4.20/.gitignore ---

.venv
examples/repos
src/productcomposer.egg-info
src/productcomposer/__pycache__
src/productcomposer/core/__pycache__
src/productcomposer/api/__pycache__
output

--- product-composer-0.4.20/Makefile ---

# Project management tasks.

VENV = .venv
PYTHON = . $(VENV)/bin/activate && python
PYTEST = $(PYTHON) -m pytest

$(VENV)/.make-update: pyproject.toml
	python3 -m venv $(VENV)
	$(PYTHON) -m pip install -U pip  # needs to be updated first
	$(PYTHON) -m pip install -e ".[dev]"
	touch $@

.PHONY: dev
dev: $(VENV)/.make-update

.PHONY: docs
docs: dev
	asciidoc docs/productcomposer.adoc

.PHONY: test-unit
test-unit: dev
	$(PYTEST) tests/unit/

.PHONY: check
check: test-unit

--- product-composer-0.4.20/README.rst ---

product-composer
================

This is the successor of product-builder: a tool to create rpm product
repositories inside of Open Build Service based on a larger pool of packages.
It is starting as small as possible, just enough for ALP products atm.

Currently it supports:

- processing based on a list of rpm package names
- optional filters for architectures, versions and flavors can be defined
- it can either just take a single rpm of a given name or all of them
- it can post-process updateinfo data
- post-processing like rpm meta data generation

Not yet implemented:

- create bootable iso files

Development
===========

Create the development environment:

.. code-block:: console

   $ python -m venv .venv
   $ .venv/bin/python -m pip install -e ".[dev]"

Run tests:

.. code-block:: console

   $ .venv/bin/python -m pytest -v tests/

Build documentation:

.. code-block:: console

   $ make docs

Installation
============

Packaging and distributing a Python application depends on the target
operating system(s) and execution environment, which could be a Python
virtual environment, Linux container, or native application.

Install the application to a self-contained Python virtual environment:

.. code-block:: console

   $ python -m venv .venv
   $ .venv/bin/python -m pip install <project source>
   $ cp -r <project source>/etc .venv/
   $ .venv/bin/productcomposer --help

Execution
=========

The installed application includes a wrapper script for command line
execution. The location of this script depends on how the application was
installed.

Configuration
-------------

The application uses `TOML`_ files for configuration. Configuration supports
runtime parameter substitution via a shell-like variable syntax, *i.e.*
``var = ${VALUE}``. CLI invocation will use the current environment for
parameter substitution, which makes it simple to pass host-specific values
to the application without needing to change the config file for every
installation.

.. code-block:: toml

   mailhost = $SENDMAIL_HOST

Logging
-------

The application uses standard `Python logging`_. All logging goes to
``STDERR``, and the logging level can be set via the config file or on the
command line.

.. _TOML: https://toml.io
.. _Python logging: https://docs.python.org/3/library/logging.html
.. _mdklatt/cookiecutter-python-app: https://github.com/mdklatt/cookiecutter-python-app

--- product-composer-0.4.20/docs/.gitignore ---

# Ignore Sphinx build artifacts.
_build

--- product-composer-0.4.20/docs/build_description.adoc ---

== productcompose build description options

=== minimal version

 product_compose_schema: 0.2
 vendor: I_and_myself
 name: my_product
 version: 1.0
 product-type: module
 architectures: [ x86_64 ]
 packages:
 - my-single-rpm-package

=== build options

The build options may be used to change the behaviour of the build process.
The options are described in the details below. Just add them to enable the
desired functionality; no further arguments are allowed.

=== flavors

Flavors can be defined with any name. These can be used to build multiple
media from one build description.

Each flavor may define its own architecture list. It can also be used to add
different package sets.

You need to add a _multibuild file to your sources to enable the build, as
sketched below.
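For illustration only, a flavors section that overrides the architecture list
might look like this (flavor names invented; compare the
examples/ftp.productcompose file later in this archive):

 flavors:
   standard: {}
   arm:
     architectures: [ aarch64 ]

Assuming the usual OBS _multibuild format, the matching _multibuild file
would then list the flavor names:

 <multibuild>
   <flavor>standard</flavor>
   <flavor>arm</flavor>
 </multibuild>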
=== iso

Enables iso file generation and requires configuration of iso9660 headers.

=== unpack

unpack defines the packageset to be used for extracting the content of the
rpm packages directly onto the medium. These rpm packages need to provide
their files below /usr/lib/skelcd/CD1.

Currently the content is extracted only onto the first/main medium, not onto
source or debug media.

=== packagesets

The packages list lists the rpm names to be put on the medium. There is
usually one master list, and in addition there can be further optional
lists. The additional lists can be filtered by flavors and/or architectures.

A packageset requires at least a packages definition, but may optionally
also have a name, flavors or architectures. An illustrative packageset is
sketched after the subsections below.

==== name

Defines the name of the package set. 'main' is the default name.

==== architecture

Lists the architectures where the set is to be used. The default is all
architectures.

==== flavor

Lists the flavors where the set is to be used. The default is all flavors.

==== add

Can be used to add further packagesets by specifying their names.

A special packageset called '__all__' will add all package names available
locally.

==== sub

Can be used to remove packages of the specified packageset names.

==== intersect

Can be used to filter packages with the specified package set lists.

==== packages

Lists all package names to be added. This is just the rpm name, not the
file name.
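For illustration only, a packageset combining these keys might look like
this (all set names and package names invented):

 packagesets:
   - name: base
     packages:
       - bash
       - coreutils
   - name: x11
     flavors: [ DVD ]
     architectures: [ x86_64 ]
     add:
       - base
     packages:
       - xorg-x11-server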
=== Details

==== name

The product name.

==== version

The product version.

==== summary

The product name in explaining words. It will be presented to the user on
overview screens.

==== product-type

Either 'base' for operating systems or 'module' for any product depending on
an existing installation. 'extension' is handled as an alias for 'module'.

==== architectures

An array of the master architectures to be put into the repository. This can
be used to build a single repository usable for many hardware architectures.

product composer will automatically fall back to "noarch" packages if the
package is not found natively.

Setting a global architecture list is optional when architectures are listed
for each flavor.

==== bcntsynctag

Optionally defines a bcntsynctag for OBS. OBS will sync the build counter
over all packages in the same repository and architecture according to this
tag.

==== milestone

Optionally defines a milestone which will be used by OBS at release time.
This can be used to turn candidate builds into a Beta1, for example.

==== build_options

===== take_all_available_versions

By default only "the best" version of each rpm is taken. Use this switch to
put all candidates on the medium, for example for maintenance repositories.

===== ignore_missing_packages

Missing packages lead by default to a build failure. Use this switch to
continue. The missing packages are still listed in the build log.

===== hide_flavor_in_product_directory_name

The flavor name is by default part of the directory name of the build
result. This can be disabled when each flavor has a different arch list;
otherwise conflicts can happen.

===== add_slsa_provenance

Add slsa provenance files for each rpm if available.

===== abort_on_empty_updateinfo

Existing updateinfo.xml files are scanned by default and reduced to the
available package binaries. In case none are found, the update is skipped.
Enabling this option leads to a build failure instead.

==== iso

===== publisher

For setting the iso9660 PUBLISHER header.

===== vendor_id

For setting the iso9660 VENDOR_ID header.

===== tree

Can be set to "drop" for creating only the iso files.

==== installcheck

Runs a repository closure test for each architecture. This will report any
missing dependencies and abort.

===== ignore_errors

For reporting the dependency errors, but ignoring them.

==== debug

Configure the handling of debuginfo and debugsource rpms. Use either
debug: include to include them, debug: drop to drop all debug packages, or
debug: split to create a separate medium with -Debug suffix.

Missing debug packages will always be ignored.

==== packages

The package list. It can contain either a simple name, or a name extended
by a >, >=, =, <, <= operator to specify a version constraint, as in the
sketch below.

The syntax for the version is rpm-like:

 [EPOCH:]VERSION[-RELEASE]

A missing epoch means epoch zero. If the release is missing, it matches any
release.

The package list can be valid globally or limited to specific flavors or
architectures.
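For illustration, version constraints in a package list might look like this
(package names are only examples; the glibc line mirrors the
examples/ftp.productcompose file in this archive):

 packages:
   - kernel-default       # any version
   - glibc > 2.38-9       # newer than version 2.38, release 9
   - mypkg >= 1:1.0       # epoch 1, version 1.0 or newer, any release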
==== product_compose_schema

Defines the level of the yaml syntax. Please expect incompatible changes at
any time atm. This will be used to provide backward compatibility once we
have stabilized.

==== product_directory_name

Can be used to specify a directory or medium name manually. The default is
"name-version". The directory name will always be suffixed by the
architecture and build number.

==== source

Configure the handling of src or nosrc rpms for the picked binaries. Use
either source: include to include all source packages, source: drop to drop
all source packages, or source: split to create a separate medium with
-Source suffix.

A missing source package leads to a build failure unless the
ignore_missing_packages build option is used.

==== vendor

Defines the company responsible for the content. Can be, for example,
openSUSE or SUSE. It is used by the install stack.

==== set_updateinfo_from

Can be set to replace the "from" attribute in updateinfo.xml files with a
fixed value. This is shown as the patch provider by the zypp stack.
Otherwise the value stays; OBS sets the packager from the _patchinfo file
here by default.

==== set_updateinfo_id_prefix

Sets a fixed prefix on all ids of included updateinfo data. It is not added
again if the prefix exists already. This can be used to have a common
identifier for an update for many products, while still being able to
identify the filtering for a specific product.

==== block_updates_under_embargo

The current default is to include maintenance updates under embargo. This
option can be set to abort when an embargo date is in the future.

--- product-composer-0.4.20/docs/productcomposer.adoc ---

= productcomposer
:toc:
:icons:
:numbered:
:website: https://www.geckito.org/

== Goals

A lightweight successor of product builder.

It is used to generate product rpm repositories out of a pool of rpms.
Unlike product builder, these can also be used to ship maintenance updates.

.Currently it supports:
- processing based on a list of rpm package names. product composer does not
  take care of dependencies atm.
- providing matching source and/or debug packages for picked rpm packages.
  These can be either included into the main repository or prepared via
  extra repositories
- optional filters for architectures, versions and flavors can be defined
- it can provide either just a single rpm of a given name or all of them
- it can post-process updateinfo data
- post-processing to provide various rpm meta data generation

Not yet implemented:

- create bootable iso files

== Design

product composer is supposed to be used only inside of OBS builds atm. OBS
or osc prepares all binary rpm candidates in a local directory before
starting the build.

== Setup in OBS

You will require OBS 2.11 or later.

.Create a new repository with any name, either in a new or existing project.
- The product-composer package must be available in any repository listed in
  the path elements.
- All scheduler architectures where packages are taken from must be listed.

Your build description file may have any name, but must have a
.productcompose suffix.

The build type for the repository must be set to

 Type: productcompose

in the build configuration (aka prjconf).

== Special setup for maintenance

Ensure to build your patchinfo builds in a repository where "local" is the
first architecture.

Your productcompose file may provide all versions of each rpm if you enable
"take_all_available_versions" in the build options; a minimal sketch
follows.
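A minimal sketch of such a maintenance description, using only fields
documented above (vendor, name and the package entry are placeholders):

 product_compose_schema: 0.2
 vendor: openSUSE
 name: my-maintenance-update
 version: 1.0
 product-type: module
 architectures: [ x86_64 ]
 build_options:
   - take_all_available_versions
 packages:
   - my-patched-rpm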
include::build_description.adoc[]

--- product-composer-0.4.20/docs/productcomposer.html ---

[Generated AsciiDoc HTML output: CSS, JavaScript and a rendered copy of
productcomposer.adoc plus build_description.adoc. Omitted here as a
duplicate of the documents above. Last updated 2023-12-05 15:26:40 CET.]
--- product-composer-0.4.20/etc/config.toml ---

[core]
logging = "WARNING"

--- product-composer-0.4.20/examples/ftp.productcompose ---

# Our initial schema version. Be prepared that it breaks until we are
# in full production mode
product_compose_schema: 0.2
vendor: openSUSE
name: Tumbleweed
version: 1.0
product-type: base # or module

# summary is the short product description as available in meta data
summary: openSUSE Tumbleweed

# OBS specials:
# bcntsynctag: MyProductFamily
# milestone: Beta1

# scc data has no effect on the build result, it is just managing data
# for the infrastructure
scc:
  description: >
    openSUSE Tumbleweed is the rolling distribution by the openSUSE.org
    project.
#  family: sl-micro
#  free: false

iso:
  publisher:
  volume_id:
#  tree: drop

build_options:
### For maintenance, otherwise only "the best" version of each package is picked:
  - take_all_available_versions
#  - ignore_missing_packages
#  - hide_flavor_in_product_directory_name
#  - block_updates_under_embargo
#  - add_slsa_provenance

#installcheck:
#  - ignore_errors

# Enable collection of source and debug packages. Either "include" it
# on main medium, "drop" it or "split" it away on extra medium.
source: split
debug: drop

# The default architecture list. Each of these will be put on the medium.
# It is optional to have a default list, when each flavor defines an
# architecture list. The main package won't be built in that case.
architectures: [x86_64]

# A flavor list, each flavor may change the architecture list
flavors:
  small: {}
  large_arm:
    architectures: [armv7l, aarch64]
    name: Tumbleweed_ARM
    summary: openSUSE Tumbleweed ARM

unpack:
  - unpackset
  - unpackset_powerpc_DVD_only

# packages to be put on the medium
packagesets:
  - name: unpackset_powerpc_DVD_only
    flavors:
      - DVD medium
    architectures:
      - ppc64le
    packages:
      - Super-Special-Slideshow-for-DVD_medium-on-ppc64le
  - name: unpackset
    packages:
      - skelcd-openSUSE
      - skelcd-openSUSE-installer
  - name: 32bit
    architectures:
      - i586
      - i686
    packages:
      - kernel-default-pae
  - packages:
      - kernel-default
      # take only glibc packages newer than 2.38-9
      # note: this works like a rpm dependency, i.e.
      #   the release part is optional and epochs can be specified
      #   with EPOCH: prefix
      - glibc > 2.38-9
    add:
      - 32bit
    supportstatus: l2

--- product-composer-0.4.20/pyproject.toml ---

[project]
name = "productcomposer"
description = "OBS product image creator"
authors = [
    { name = "Adrian Schröter", email = "adrian@suse.de" },
]
license = {file = "LICENSE"}
requires-python = ">=3.11"
dependencies = [
    "rpm",
    "zstandard",
    "pydantic<2",
    "pyyaml",
]
dynamic = ["version", "readme"]

[project.urls]
"Homepage" = "https://somewhere"

[project.scripts]
productcomposer = "productcomposer.cli:main"

[project.optional-dependencies]
dev = [
    "pytest>=7.3.1,<8",
    "sphinx>=6.2.1,<7",
    "sphinx_rtd_theme>=1.2.1,<2",
]

[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[tool.setuptools.dynamic]
version = {attr = "productcomposer.__version__"}
readme = {file = ["README.rst"], content-type = "text/x-rst"}

[tool.setuptools.packages.find]
where = ["src"]

--- product-composer-0.4.20/src/productcomposer/__init__.py ---

""" Package for the obs product builder application. """
from .__version__ import __version__
from .__main__ import main

--- product-composer-0.4.20/src/productcomposer/__main__.py ---

""" Main application entry point.

    python -m productcomposer ...

"""

def main():
    """ Execute the application. """
    raise NotImplementedError

# Make the script executable.
if __name__ == "__main__":
    raise SystemExit(main())

--- product-composer-0.4.20/src/productcomposer/__version__.py ---

""" Current version of the obs product builder application.

This project uses the Semantic Versioning scheme in conjunction with
PEP 0440:

    <https://semver.org/>
    <https://www.python.org/dev/peps/pep-0440>

Major versions introduce significant changes to the API, and backwards
compatibility is not guaranteed. Minor versions are for new features and
other backwards-compatible changes to the API. Patch versions are for bug
fixes and internal code changes that do not affect the API. Development
versions are incomplete states of a release.

Version 0.x should be considered a development version with an unstable API,
and backwards compatibility is not guaranteed for minor versions.

"""
__version__ = "0.0.0"

--- product-composer-0.4.20/src/productcomposer/api/__init__.py ---

""" Application commands common to all interfaces. """
from .parse import main as parse

__all__ = "parse",

--- product-composer-0.4.20/src/productcomposer/api/parse.py ---

""" Implement the hello command.
""" from ..core.logger import logger def main(name="World") -> str: """ Execute the command. :param name: name to use in greeting """ logger.debug("executing hello command") return f"Hello, parser!" 07070100000018000081A400000000000000000000000166FA600300009A0C000000000000000000000000000000000000003300000000product-composer-0.4.20/src/productcomposer/cli.py""" Implementation of the command line interface. """ import os import re import shutil import subprocess import gettext from datetime import datetime from argparse import ArgumentParser from xml.etree import ElementTree as ET import yaml from .core.logger import logger from .core.PkgSet import PkgSet from .core.Package import Package from .core.Pool import Pool from .wrappers import CreaterepoWrapper from .wrappers import ModifyrepoWrapper __all__ = "main", ET_ENCODING = "unicode" tree_report = {} # hashed via file name # hardcoded defaults for now chksums_tool = 'sha512sum' # global db for supportstatus supportstatus = {} # per package override via supportstatus.txt file supportstatus_override = {} def main(argv=None) -> int: """ Execute the application CLI. :param argv: argument list to parse (sys.argv by default) :return: exit status """ # # Setup CLI parser # parser = ArgumentParser('productcomposer', description='An example sub-command implementation') subparsers = parser.add_subparsers(required=True, help='sub-command help') # One sub parser for each command verify_parser = subparsers.add_parser('verify', help='The first sub-command') build_parser = subparsers.add_parser('build', help='The second sub-command') verify_parser.set_defaults(func=verify) build_parser.set_defaults(func=build) # Generic options for cmd_parser in [verify_parser, build_parser]: cmd_parser.add_argument('-f', '--flavor', help='Build a given flavor') cmd_parser.add_argument('-v', '--verbose', action='store_true', help='Enable verbose output') cmd_parser.add_argument('--reposdir', action='store', help='Take packages from this directory') cmd_parser.add_argument('filename', default='default.productcompose', help='Filename of product YAML spec') # build command options build_parser.add_argument('-r', '--release', default=None, help='Define a build release counter') build_parser.add_argument('--disturl', default=None, help='Define a disturl') build_parser.add_argument('--vcs', default=None, help='Define a source repository identifier') build_parser.add_argument('--clean', action='store_true', help='Remove existing output directory first') build_parser.add_argument('out', help='Directory to write the result') # parse and check args = parser.parse_args(argv) filename = args.filename if not filename: # No subcommand was specified. print("No filename") parser.print_help() die(None) # # Invoke the function # args.func(args) return 0 def die(msg, details=None): if msg: print("ERROR: " + msg) if details: print(details) raise SystemExit(1) def warn(msg, details=None): print("WARNING: " + msg) if details: print(details) def note(msg): print(msg) def build(args): flavor = None if args.flavor: f = args.flavor.split('.') if f[0] != '': flavor = f[0] if not args.out: # No subcommand was specified. 
print("No output directory given") parser.print_help() die(None) yml = parse_yaml(args.filename, flavor) directory = os.getcwd() if args.filename.startswith('/'): directory = os.path.dirname(args.filename) reposdir = args.reposdir if args.reposdir else directory + "/repos" supportstatus_fn = os.path.join(directory, 'supportstatus.txt') if os.path.isfile(supportstatus_fn): parse_supportstatus(supportstatus_fn) pool = Pool() note(f"scanning: {reposdir}") pool.scan(reposdir) if args.clean and os.path.exists(args.out): shutil.rmtree(args.out) product_base_dir = get_product_dir(yml, flavor, args.release) create_tree(args.out, product_base_dir, yml, pool, flavor, args.vcs, args.disturl) def verify(args): parse_yaml(args.filename, args.flavor) def parse_yaml(filename, flavor): with open(filename, 'r') as file: yml = yaml.safe_load(file) if 'product_compose_schema' not in yml: die('missing product composer schema') if yml['product_compose_schema'] != 0 and yml['product_compose_schema'] != 0.1 and yml['product_compose_schema'] != 0.2: die(f"Unsupported product composer schema: {yml['product_compose_schema']}") if 'flavors' not in yml: yml['flavors'] = [] if flavor: if flavor not in yml['flavors']: die("Flavor not found: " + flavor) f = yml['flavors'][flavor] # overwrite global values from flavor overwrites for tag in ['architectures', 'name', 'summary', 'version', 'product-type', 'product_directory_name']: if tag in f: yml[tag] = f[tag] if 'iso' in f: if not 'iso' in yml: yml['iso'] = {} for tag in ['volume_id', 'publisher', 'tree']: if tag in f['iso']: yml['iso'][tag] = f['iso'][tag] if 'architectures' not in yml or not yml['architectures']: die("No architecture defined. Maybe wrong flavor?") if 'build_options' not in yml or yml['build_options'] is None: yml['build_options'] = [] if 'installcheck' in yml and yml['installcheck'] is None: yml['installcheck'] = [] return yml def parse_supportstatus(filename): with open(filename, 'r') as file: for line in file.readlines(): a = line.strip().split(' ') supportstatus_override[a[0]] = a[1] def get_product_dir(yml, flavor, release): name = yml['name'] + "-" + str(yml['version']) if 'product_directory_name' in yml: # manual override name = yml['product_directory_name'] if flavor and not 'hide_flavor_in_product_directory_name' in yml['build_options']: name += "-" + flavor if yml['architectures']: visible_archs = yml['architectures'] if 'local' in visible_archs: visible_archs.remove('local') name += "-" + "-".join(visible_archs) if release: name += "-Build" + str(release) if '/' in name: die("Illegal product name") return name def run_helper(args, cwd=None, fatal=True, stdout=None, stdin=None, failmsg=None): if stdout is None: stdout = subprocess.PIPE if stdin is None: stdin = subprocess.PIPE popen = subprocess.Popen(args, stdout=stdout, stdin=stdin, cwd=cwd) output = popen.communicate()[0] if isinstance(output, bytes): output = output.decode(errors='backslashreplace') if popen.returncode: if failmsg: msg="Failed to " + failmsg else: msg="Failed to run " + args[0] if fatal: die(msg, details=output) else: warn(msg, details=output) return output if stdout == subprocess.PIPE else '' def create_tree(outdir, product_base_dir, yml, pool, flavor, vcs=None, disturl=None): if not os.path.exists(outdir): os.mkdir(outdir) maindir = outdir + '/' + product_base_dir if not os.path.exists(maindir): os.mkdir(maindir) workdirectories = [ maindir ] debugdir = sourcedir = None if "source" in yml: if yml['source'] == 'split': sourcedir = outdir + '/' + product_base_dir + 
def create_tree(outdir, product_base_dir, yml, pool, flavor, vcs=None, disturl=None):
    if not os.path.exists(outdir):
        os.mkdir(outdir)

    maindir = outdir + '/' + product_base_dir
    if not os.path.exists(maindir):
        os.mkdir(maindir)
    workdirectories = [maindir]

    debugdir = sourcedir = None
    if "source" in yml:
        if yml['source'] == 'split':
            sourcedir = outdir + '/' + product_base_dir + '-Source'
            os.mkdir(sourcedir)
            workdirectories.append(sourcedir)
        elif yml['source'] == 'include':
            sourcedir = maindir
        elif yml['source'] != 'drop':
            die("Bad source option, must be either 'include', 'split' or 'drop'")
    if "debug" in yml:
        if yml['debug'] == 'split':
            debugdir = outdir + '/' + product_base_dir + '-Debug'
            os.mkdir(debugdir)
            workdirectories.append(debugdir)
        elif yml['debug'] == 'include':
            debugdir = maindir
        elif yml['debug'] != 'drop':
            die("Bad debug option, must be either 'include', 'split' or 'drop'")

    for arch in yml['architectures']:
        note(f"Linking rpms for {arch}")
        link_rpms_to_tree(maindir, yml, pool, arch, flavor, debugdir, sourcedir)

    for arch in yml['architectures']:
        note(f"Unpack rpms for {arch}")
        unpack_meta_rpms(maindir, yml, pool, arch, flavor, medium=1)  # only for first medium atm

    repos = []
    if disturl:
        match = re.match("^obs://([^/]*)/([^/]*)/.*", disturl)
        if match:
            obsname = match.group(1)
            project = match.group(2)
            repo = f"obsproduct://{obsname}/{project}/{yml['name']}/{yml['version']}"
            repos = [repo]
    if vcs:
        repos.append(vcs)

    default_content = ["pool"]
    for file in os.listdir(maindir):
        if not file.startswith('gpg-pubkey-'):
            continue
        args = ['gpg', '--no-keyring', '--no-default-keyring', '--with-colons',
                '--import-options', 'show-only', '--import', '--fingerprint']
        out = run_helper(args, stdin=open(f'{maindir}/{file}', 'rb'),
                         failmsg="Finger printing of gpg file")
        for line in out.splitlines():
            if not str(line).startswith("b'fpr:"):
                continue
            default_content.append(str(line).split(':')[9])

    note("Create rpm-md data")
    run_createrepo(maindir, yml, content=default_content, repos=repos)
    if debugdir:
        note("Create rpm-md data for debug directory")
        run_createrepo(debugdir, yml, content=["debug"], repos=repos)
    if sourcedir:
        note("Create rpm-md data for source directory")
        run_createrepo(sourcedir, yml, content=["source"], repos=repos)

    if not os.path.exists(maindir + '/repodata'):
        die("run_createrepo did not create a repodata directory")

    note("Write report file")
    write_report_file(maindir, maindir + '.report')
    if sourcedir and maindir != sourcedir:
        note("Write report file for source directory")
        write_report_file(sourcedir, sourcedir + '.report')
    if debugdir and maindir != debugdir:
        note("Write report file for debug directory")
        write_report_file(debugdir, debugdir + '.report')

    # CHANGELOG file
    # the tools read the subdirectory of the maindir from environment variable
    os.environ['ROOT_ON_CD'] = '.'
if os.path.exists("/usr/bin/mk_changelog"): args = ["/usr/bin/mk_changelog", maindir] run_helper(args) # ARCHIVES.gz if os.path.exists("/usr/bin/mk_listings"): args = ["/usr/bin/mk_listings", maindir] run_helper(args) # media.X structures FIXME mediavendor = yml['vendor'] + ' - ' + product_base_dir mediaident = product_base_dir # FIXME: calculate from product provides mediaproducts = [yml['vendor'] + '-' + yml['name'] + ' ' + str(yml['version']) + '-1'] create_media_dir(maindir, mediavendor, mediaident, mediaproducts) create_checksums_file(maindir) create_susedata_xml(maindir, yml) if debugdir: create_susedata_xml(debugdir, yml) if sourcedir: create_susedata_xml(sourcedir, yml) if 'installcheck' in yml: for arch in yml['architectures']: note(f"Run installcheck for {arch}") args = ['installcheck', arch, '--withsrc'] args.append(find_primary(maindir)) if debugdir: args.append(find_primary(debugdir)) if sourcedir: args.append(find_primary(sourcedir)) run_helper(args, fatal=(not 'ignore_errors' in yml['installcheck']), failmsg="run installcheck validation") create_updateinfo_xml(maindir, yml, pool, flavor, debugdir, sourcedir) # Add License File and create extra .license directory licensefilename = '/license.tar' if os.path.exists(maindir + '/license-' + yml['name'] + '.tar') or os.path.exists(maindir + '/license-' + yml['name'] + '.tar.gz'): licensefilename = '/license-' + yml['name'] + '.tar' if os.path.exists(maindir + licensefilename + '.gz'): run_helper(['gzip', '-d', maindir + licensefilename + '.gz'], failmsg="Uncompress of license.tar.gz failed") if os.path.exists(maindir + licensefilename): note("Setup .license directory") licensedir = maindir + ".license" if not os.path.exists(licensedir): os.mkdir(licensedir) args = ['tar', 'xf', maindir + licensefilename, '-C', licensedir] output = run_helper(args, failmsg="extract license tar ball") if not os.path.exists(licensedir + "/license.txt"): die("No license.txt extracted", details=output) mr = ModifyrepoWrapper( file=maindir + licensefilename, directory=os.path.join(maindir, "repodata"), ) mr.run_cmd() os.unlink(maindir + licensefilename) # meta package may bring a second file or expanded symlink, so we need clean up if os.path.exists(maindir + '/license.tar'): os.unlink(maindir + '/license.tar') if os.path.exists(maindir + '/license.tar.gz'): os.unlink(maindir + '/license.tar.gz') for workdir in workdirectories: # detached signature args = ['/usr/lib/build/signdummy', '-d', workdir + "/repodata/repomd.xml"] run_helper(args, failmsg="create detached signature") if os.path.exists(workdir + '/CHECKSUMS'): args = ['/usr/lib/build/signdummy', '-d', workdir + '/CHECKSUMS'] run_helper(args, failmsg="create detached signature for CHECKSUMS") # pubkey with open(workdir + "/repodata/repomd.xml.key", 'w') as pubkey_file: args = ['/usr/lib/build/signdummy', '-p'] run_helper(args, stdout=pubkey_file, failmsg="write signature public key") # do we need an ISO file? 
        if 'iso' in yml:
            note("Create iso files")
            application_id = re.sub(r'^.*/', '', maindir)
            args = ['/usr/bin/mkisofs', '-quiet', '-p', 'Product Composer - http://www.github.com/openSUSE/product-composer']
            args += ['-r', '-pad', '-f', '-J', '-joliet-long']
            # FIXME: do proper multi arch handling
            isolinux = 'boot/' + yml['architectures'][0] + '/loader/isolinux.bin'
            if os.path.isfile(workdir + '/' + isolinux):
                args += ['-no-emul-boot', '-boot-load-size', '4', '-boot-info-table']
                args += ['-hide', 'glump', '-hide-joliet', 'glump']
                args += ['-eltorito-alt-boot', '-eltorito-platform', 'efi']
                args += ['-no-emul-boot']
                # args += [ '-sort', $sort_file ]
                # args += [ '-boot-load-size', block_size("boot/"+arch+"/loader") ]
                args += ['-b', isolinux]
            if 'publisher' in yml['iso'] and yml['iso']['publisher'] is not None:
                args += ['-publisher', yml['iso']['publisher']]
            if 'volume_id' in yml['iso'] and yml['iso']['volume_id'] is not None:
                args += ['-V', yml['iso']['volume_id']]
            args += ['-A', application_id]
            args += ['-o', workdir + '.iso', workdir]
            run_helper(args, cwd=outdir, failmsg="create iso file")
            # simple tagmedia call ... we may add options for padding or triggering media check later
            args = ['tagmedia', '--digest', 'sha256', workdir + '.iso']
            run_helper(args, cwd=outdir, failmsg="tagmedia iso file")
            # creating .sha256 for the iso file
            with open(workdir + ".iso.sha256", 'w') as sha_file:
                # the argument must not contain the path
                args = ['sha256sum', workdir.split('/')[-1] + '.iso']
                run_helper(args, cwd=outdir, stdout=sha_file, failmsg="create .iso.sha256 file")
            if 'tree' in yml['iso'] and yml['iso']['tree'] == 'drop':
                args = ['rm', '-rf', workdir]
                run_helper(args, failmsg="dropping rpm-md tree")

    # create SBOM data
    if os.path.exists("/usr/lib/build/generate_sbom"):
        spdx_distro = f"{yml['name']}-{yml['version']}"
        note(f"Creating sbom data for {spdx_distro}")
        # SPDX
        args = ["/usr/lib/build/generate_sbom",
                "--format", 'spdx',
                "--distro", spdx_distro,
                "--product", maindir]
        with open(maindir + ".spdx.json", 'w') as sbom_file:
            run_helper(args, stdout=sbom_file, failmsg="run generate_sbom for SPDX")
        # CycloneDX
        args = ["/usr/lib/build/generate_sbom",
                "--format", 'cyclonedx',
                "--distro", spdx_distro,
                "--product", maindir]
        with open(maindir + ".cdx.json", 'w') as sbom_file:
            run_helper(args, stdout=sbom_file, failmsg="run generate_sbom for CycloneDX")


# create media info files
def create_media_dir(maindir, vendorstr, identstr, products):
    media1dir = maindir + '/' + 'media.1'
    if not os.path.isdir(media1dir):
        os.mkdir(media1dir)
    # we do only support separate media atm
    with open(media1dir + '/media', 'w') as media_file:
        media_file.write(vendorstr + "\n")
        media_file.write(identstr + "\n")
        media_file.write("1\n")
    if products:
        with open(media1dir + '/products', 'w') as products_file:
            for productname in products:
                products_file.write('/ ' + productname + "\n")


def create_checksums_file(maindir):
    with open(maindir + '/CHECKSUMS', 'a') as chksums_file:
        for subdir in ('boot', 'EFI', 'docu', 'media.1'):
            if not os.path.exists(maindir + '/' + subdir):
                continue
            for root, dirnames, filenames in os.walk(maindir + '/' + subdir):
                for name in filenames:
                    relname = os.path.relpath(root + '/' + name, maindir)
                    run_helper([chksums_tool, relname], cwd=maindir, stdout=chksums_file)


# create a fake package entry from an updateinfo package spec
def create_updateinfo_package(pkgentry):
    entry = Package()
    for tag in 'name', 'epoch', 'version', 'release', 'arch':
        setattr(entry, tag, pkgentry.get(tag))
    return entry


def generate_du_data(pkg, maxdepth):
    dirs = pkg.get_directories()
    seen = set()
    dudata_size = {}
    dudata_count = {}
    for dir, filedatas in dirs.items():
        size = 0
        count = 0
        for filedata in filedatas:
            (basename, filesize, cookie) = filedata
            if cookie:
                # count hard linked files only once
                if cookie in seen:
                    continue
                seen.add(cookie)
            size += filesize
            count += 1
        if dir == '':
            dir = '/usr/src/packages/'
        dir = '/' + dir.strip('/')
        subdir = ''
        depth = 0
        for comp in dir.split('/'):
            if comp == '' and subdir != '':
                continue
            subdir += comp + '/'
            if subdir not in dudata_size:
                dudata_size[subdir] = 0
                dudata_count[subdir] = 0
            dudata_size[subdir] += size
            dudata_count[subdir] += count
            depth += 1
            if depth > maxdepth:
                break
    dudata = []
    for dir, size in sorted(dudata_size.items()):
        dudata.append((dir, size, dudata_count[dir]))
    return dudata


# Get supported translations based on installed packages
def get_package_translation_languages():
    i18ndir = '/usr/share/locale/en_US/LC_MESSAGES'
    p = re.compile('package-translations-(.+).mo')
    languages = set()
    for file in os.listdir(i18ndir):
        m = p.match(file)
        if m:
            languages.add(m.group(1))
    return sorted(list(languages))


# get the primary file name from repomd.xml
def find_primary(directory):
    ns = '{http://linux.duke.edu/metadata/repo}'
    tree = ET.parse(directory + '/repodata/repomd.xml')
    return directory + '/' + tree.find(f".//{ns}data[@type='primary']/{ns}location").get('href')


# Create the main susedata.xml with translations, support, and disk usage information
def create_susedata_xml(rpmdir, yml):
    susedatas = {}
    susedatas_count = {}

    # find translation languages
    languages = get_package_translation_languages()

    # create gettext translator objects
    i18ntrans = {}
    for lang in languages:
        i18ntrans[lang] = gettext.translation(f'package-translations-{lang}', languages=['en_US'])

    primary_fn = find_primary(rpmdir)

    # read the compressed primary.xml
    openfunction = None
    if primary_fn.endswith('.gz'):
        import gzip
        openfunction = gzip.open
    elif primary_fn.endswith('.zst'):
        import zstandard
        openfunction = zstandard.open
    else:
        die(f"unsupported primary compression type ({primary_fn})")
    tree = ET.parse(openfunction(primary_fn, 'rb'))
    ns = '{http://linux.duke.edu/metadata/common}'

    # Create main susedata structure
    susedatas[''] = ET.Element('susedata')
    susedatas_count[''] = 0

    # go over every rpm file of the repo via the primary
    for pkg in tree.findall(f".//{ns}package[@type='rpm']"):
        name = pkg.find(f'{ns}name').text
        arch = pkg.find(f'{ns}arch').text
        pkgid = pkg.find(f'{ns}checksum').text
        version = pkg.find(f'{ns}version').attrib
        susedatas_count[''] += 1
        package = ET.SubElement(susedatas[''], 'package', {'name': name, 'arch': arch, 'pkgid': pkgid})
        ET.SubElement(package, 'version', version)

        # add supportstatus
        if name in supportstatus and supportstatus[name] is not None:
            ET.SubElement(package, 'keyword').text = f'support_{supportstatus[name]}'

        # add disk usage data
        location = pkg.find(f'{ns}location').get('href')
        if os.path.exists(rpmdir + '/' + location):
            p = Package()
            p.location = rpmdir + '/' + location
            dudata = generate_du_data(p, 3)
            if dudata:
                duelement = ET.SubElement(package, 'diskusage')
                dirselement = ET.SubElement(duelement, 'dirs')
                for duitem in dudata:
                    ET.SubElement(dirselement, 'dir', {'name': duitem[0], 'size': str(duitem[1]), 'count': str(duitem[2])})

        # get summary/description/category of the package
        summary = pkg.find(f'{ns}summary').text
        description = pkg.find(f'{ns}description').text
        category = pkg.find(".//{http://linux.duke.edu/metadata/rpm}entry[@name='pattern-category()']")
        category = Package._cpeid_hexdecode(category.get('ver')) if
category else None # look for translations for lang in languages: isummary = i18ntrans[lang].gettext(summary) idescription = i18ntrans[lang].gettext(description) icategory = i18ntrans[lang].gettext(category) if category is not None else None if isummary == summary and idescription == description and icategory == category: continue if lang not in susedatas: susedatas[lang] = ET.Element('susedata') susedatas_count[lang] = 0 susedatas_count[lang] += 1 ipackage = ET.SubElement(susedatas[lang], 'package', {'name': name, 'arch': arch, 'pkgid': pkgid}) ET.SubElement(ipackage, 'version', version) if isummary != summary: ET.SubElement(ipackage, 'summary', {'lang': lang}).text = isummary if idescription != description: ET.SubElement(ipackage, 'description', {'lang': lang}).text = idescription if icategory != category: ET.SubElement(ipackage, 'category', {'lang': lang}).text = icategory # write all susedata files for lang, susedata in sorted(susedatas.items()): susedata.set('xmlns', 'http://linux.duke.edu/metadata/susedata') susedata.set('packages', str(susedatas_count[lang])) ET.indent(susedata, space=" ", level=0) mdtype = (f'susedata.{lang}' if lang else 'susedata') susedata_fn = f'{rpmdir}/{mdtype}.xml' with open(susedata_fn, 'x') as sd_file: sd_file.write(ET.tostring(susedata, encoding=ET_ENCODING)) mr = ModifyrepoWrapper( file=susedata_fn, mdtype=mdtype, directory=os.path.join(rpmdir, "repodata"), ) mr.run_cmd() os.unlink(susedata_fn) # Add updateinfo.xml to metadata def create_updateinfo_xml(rpmdir, yml, pool, flavor, debugdir, sourcedir): if not pool.updateinfos: return missing_package = False # build the union of the package sets for all requested architectures main_pkgset = PkgSet('main') for arch in yml['architectures']: pkgset = main_pkgset.add(create_package_set(yml, arch, flavor, 'main', pool=pool)) main_pkgset_names = main_pkgset.names() uitemp = None for u in sorted(pool.lookup_all_updateinfos()): note("Add updateinfo " + u.location) for update in u.root.findall('update'): needed = False parent = update.findall('pkglist')[0].findall('collection')[0] # drop OBS internal patchinforef element for pr in update.findall('patchinforef'): update.remove(pr) if 'set_updateinfo_from' in yml: update.set('from', yml['set_updateinfo_from']) id_node = update.find('id') if 'set_updateinfo_id_prefix' in yml: # avoid double application of same prefix id_text = re.sub(r'^'+yml['set_updateinfo_id_prefix'], '', id_node.text) id_node.text = yml['set_updateinfo_id_prefix'] + id_text for pkgentry in parent.findall('package'): src = pkgentry.get('src') # check for embargo date embargo = pkgentry.find('embargo_date') if embargo is not None: try: embargo_time = datetime.strptime(embargo.text, '%Y-%m-%d %H:%M') except ValueError: embargo_time = datetime.strptime(embargo.text, '%Y-%m-%d') if embargo_time > datetime.now(): print("WARNING: Update is still under embargo! 
", update.find('id').text) if 'block_updates_under_embargo' in yml['build_options']: die("shutting down due to block_updates_under_embargo flag") # clean internal elements for internal_element in ['supportstatus', 'superseded_by', 'embargo_date']: for e in pkgentry.findall(internal_element): pkgentry.remove(e) # check if we have files for the entry if os.path.exists(rpmdir + '/' + src): needed = True continue if debugdir and os.path.exists(debugdir + '/' + src): needed = True continue if sourcedir and os.path.exists(sourcedir + '/' + src): needed = True continue name = pkgentry.get('name') pkgarch = pkgentry.get('arch') # do not insist on debuginfo or source packages if pkgarch == 'src' or pkgarch == 'nosrc': parent.remove(pkgentry) continue if name.endswith('-debuginfo') or name.endswith('-debugsource'): parent.remove(pkgentry) continue # ignore unwanted architectures if pkgarch != 'noarch' and pkgarch not in yml['architectures']: parent.remove(pkgentry) continue # check if we should have this package if name in main_pkgset_names: updatepkg = create_updateinfo_package(pkgentry) if main_pkgset.matchespkg(None, updatepkg): warn(f"package {updatepkg} not found") missing_package = True parent.remove(pkgentry) if not needed: if 'abort_on_empty_updateinfo' in yml['build_options']: die(f'Stumbled over an updateinfo.xml where no rpm is used: {id_node.text}') continue if not uitemp: uitemp = open(rpmdir + '/updateinfo.xml', 'x') uitemp.write("<updates>\n ") uitemp.write(ET.tostring(update, encoding=ET_ENCODING)) if uitemp: uitemp.write("</updates>\n") uitemp.close() mr = ModifyrepoWrapper( file=os.path.join(rpmdir, "updateinfo.xml"), directory=os.path.join(rpmdir, "repodata"), ) mr.run_cmd() os.unlink(rpmdir + '/updateinfo.xml') if missing_package and not 'ignore_missing_packages' in yml['build_options']: die('Abort due to missing packages') def run_createrepo(rpmdir, yml, content=[], repos=[]): product_name = yml['name'] product_summary = yml['summary'] or yml['name'] product_summary += " " + str(yml['version']) product_type = '/o' if 'product-type' in yml: if yml['product-type'] == 'base': product_type = '/o' elif yml['product-type'] in ['module', 'extension']: product_type = '/a' else: die('Undefined product-type') cr = CreaterepoWrapper(directory=".") cr.distro = product_summary cr.cpeid = f"cpe:{product_type}:{yml['vendor']}:{yml['name']}:{yml['version']}" cr.repos = repos # cr.split = True # cr.baseurl = "media://" cr.content = content cr.excludes = ["boot"] cr.run_cmd(cwd=rpmdir, stdout=subprocess.PIPE) def unpack_one_meta_rpm(rpmdir, rpm, medium): tempdir = rpmdir + "/temp" os.mkdir(tempdir) run_helper(['unrpm', '-q', rpm.location], cwd=tempdir, failmsg=f"extract {rpm.location}") skel_dir = tempdir + "/usr/lib/skelcd/CD" + str(medium) if os.path.exists(skel_dir): shutil.copytree(skel_dir, rpmdir, dirs_exist_ok=True) shutil.rmtree(tempdir) def unpack_meta_rpms(rpmdir, yml, pool, arch, flavor, medium): missing_package = False for unpack_pkgset_name in yml.get('unpack', []): unpack_pkgset = create_package_set(yml, arch, flavor, unpack_pkgset_name, pool=pool) for sel in unpack_pkgset: rpm = pool.lookup_rpm(arch, sel.name, sel.op, sel.epoch, sel.version, sel.release) if not rpm: warn(f"package {sel} not found") missing_package = True continue unpack_one_meta_rpm(rpmdir, rpm, medium) if missing_package and not 'ignore_missing_packages' in yml['build_options']: die('Abort due to missing packages') def create_package_set_compat(yml, arch, flavor, setname): if setname == 'main': oldname = 
'packages' elif setname == 'unpack': oldname = 'unpack_packages' else: return None if oldname not in yml: return PkgSet(setname) if setname == 'unpack' else None pkgset = PkgSet(setname) for entry in list(yml[oldname]): if type(entry) == dict: if 'flavors' in entry: if flavor is None or flavor not in entry['flavors']: continue if 'architectures' in entry: if arch not in entry['architectures']: continue pkgset.add_specs(entry['packages']) else: pkgset.add_specs([str(entry)]) return pkgset def create_package_set_all(setname, pool, arch): if pool is None: die('need a package pool to create the __all__ package set') pkgset = PkgSet(setname) pkgset.add_specs([n for n in pool.names(arch) if not (n.endswith('-debuginfo') or n.endswith('-debugsource'))]) return pkgset def create_package_set(yml, arch, flavor, setname, pool=None): if 'packagesets' not in yml: pkgset = create_package_set_compat(yml, arch, flavor, setname) if pkgset is None: die(f'package set {setname} is not defined') return pkgset pkgsets = {} for entry in list(yml['packagesets']): name = entry['name'] if 'name' in entry else 'main' if name in pkgsets and pkgsets[name] is not None: die(f'package set {name} is already defined') pkgsets[name] = None if 'flavors' in entry: if flavor is None or entry['flavors'] is None: continue if flavor not in entry['flavors']: continue if 'architectures' in entry: if arch not in entry['architectures']: continue pkgset = PkgSet(name) pkgsets[name] = pkgset if 'supportstatus' in entry: pkgset.supportstatus = entry['supportstatus'] if 'packages' in entry and entry['packages']: pkgset.add_specs(entry['packages']) for setop in 'add', 'sub', 'intersect': if setop not in entry: continue for oname in entry[setop]: if oname == '__all__' and oname not in pkgsets: pkgsets[oname] = create_package_set_all(oname, pool, arch) if oname == name or oname not in pkgsets: die(f'package set {oname} does not exist') if pkgsets[oname] is None: pkgsets[oname] = PkgSet(oname) # instantiate if setop == 'add': pkgset.add(pkgsets[oname]) elif setop == 'sub': pkgset.sub(pkgsets[oname]) elif setop == 'intersect': pkgset.intersect(pkgsets[oname]) else: die(f"unsupported package set operation '{setop}'") if setname not in pkgsets: die(f'package set {setname} is not defined') if pkgsets[setname] is None: pkgsets[setname] = PkgSet(setname) # instantiate return pkgsets[setname] def link_rpms_to_tree(rpmdir, yml, pool, arch, flavor, debugdir=None, sourcedir=None): singlemode = True if 'take_all_available_versions' in yml['build_options']: singlemode = False add_slsa = False if 'add_slsa_provenance' in yml['build_options']: add_slsa = True main_pkgset = create_package_set(yml, arch, flavor, 'main', pool=pool) missing_package = None for sel in main_pkgset: if singlemode: rpm = pool.lookup_rpm(arch, sel.name, sel.op, sel.epoch, sel.version, sel.release) rpms = [rpm] if rpm else [] else: rpms = pool.lookup_all_rpms(arch, sel.name, sel.op, sel.epoch, sel.version, sel.release) if not rpms: warn(f"package {sel} not found for {arch}") missing_package = True continue for rpm in rpms: link_entry_into_dir(rpm, rpmdir, add_slsa=add_slsa) if rpm.name in supportstatus_override: supportstatus[rpm.name] = supportstatus_override[rpm.name] else: supportstatus[rpm.name] = sel.supportstatus srcrpm = rpm.get_src_package() if not srcrpm: warn(f"package {rpm} does not have a source rpm") continue if sourcedir: # so we need to add also the src rpm srpm = pool.lookup_rpm(srcrpm.arch, srcrpm.name, '=', None, srcrpm.version, srcrpm.release) if srpm: 
link_entry_into_dir(srpm, sourcedir, add_slsa=add_slsa) else: details = f" required by {rpm}" warn(f"source rpm package {srcrpm} not found", details=details) missing_package = True if debugdir: drpm = pool.lookup_rpm(arch, srcrpm.name + "-debugsource", '=', None, srcrpm.version, srcrpm.release) if drpm: link_entry_into_dir(drpm, debugdir, add_slsa=add_slsa) drpm = pool.lookup_rpm(arch, rpm.name + "-debuginfo", '=', rpm.epoch, rpm.version, rpm.release) if drpm: link_entry_into_dir(drpm, debugdir, add_slsa=add_slsa) if missing_package and not 'ignore_missing_packages' in yml['build_options']: die('Abort due to missing packages') def link_file_into_dir(source, directory, name=None): if not os.path.exists(directory): os.mkdir(directory) if name is None: name = os.path.basename(source) outname = directory + '/' + name if not os.path.exists(outname): if os.path.islink(source): # osc creates a repos/ structure with symlinks to it's cache # but these would point outside of our media shutil.copyfile(source, outname) else: os.link(source, outname) def link_entry_into_dir(entry, directory, add_slsa=False): canonfilename = entry.canonfilename outname = directory + '/' + entry.arch + '/' + canonfilename if not os.path.exists(outname): link_file_into_dir(entry.location, directory + '/' + entry.arch, name=canonfilename) add_entry_to_report(entry, outname) if add_slsa: slsalocation = entry.location.removesuffix('.rpm') + '.slsa_provenance.json' if os.path.exists(slsalocation): slsaname = canonfilename.removesuffix('.rpm') + '.slsa_provenance.json' link_file_into_dir(slsalocation, directory + '/' + entry.arch, name=slsaname) def add_entry_to_report(entry, outname): # first one wins, see link_file_into_dir if outname not in tree_report: tree_report[outname] = entry def write_report_file(directory, outfile): root = ET.Element('report') if not directory.endswith('/'): directory += '/' for fn, entry in sorted(tree_report.items()): if not fn.startswith(directory): continue binary = ET.SubElement(root, 'binary') binary.text = 'obs://' + entry.origin for tag in 'name', 'epoch', 'version', 'release', 'arch', 'buildtime', 'disturl', 'license': val = getattr(entry, tag, None) if val is None or val == '': continue if tag == 'epoch' and val == 0: continue if tag == 'arch': binary.set('binaryarch', str(val)) else: binary.set(tag, str(val)) if entry.name.endswith('-release'): cpeid = entry.product_cpeid if cpeid: binary.set('cpeid', cpeid) tree = ET.ElementTree(root) tree.write(outfile) if __name__ == "__main__": try: status = main() except Exception as err: # Error handler of last resort. 
logger.error(repr(err)) logger.critical("shutting down due to fatal error") raise # print stack trace else: raise SystemExit(status) # vim: sw=4 et 07070100000019000041ED00000000000000000000000266FA600300000000000000000000000000000000000000000000003100000000product-composer-0.4.20/src/productcomposer/core0707010000001A000081A400000000000000000000000166FA600300001441000000000000000000000000000000000000003C00000000product-composer-0.4.20/src/productcomposer/core/Package.py""" Package base class """ import os import re import rpm import functools @functools.total_ordering class Package: def __init__(self, location=None, rpm_ts=None): if location is None: return self.location = location h = self._read_rpm_header(rpm_ts=rpm_ts) for tag in 'name', 'epoch', 'version', 'release', 'arch', 'sourcerpm', \ 'buildtime', 'disturl', 'license', 'filesizes', 'filemodes', \ 'filedevices', 'fileinodes', 'dirindexes', 'basenames', 'dirnames': val = h[tag] if isinstance(val, bytes): val = val.decode('utf-8') setattr(self, tag, val) if not self.sourcerpm: self.arch = 'nosrc' if h['nosource'] or h['nopatch'] else 'src' def __eq__(self, other): return (self.name, self.evr) == (other.name, other.evr) def __lt__(self, other): if self.name == other.name: return rpm.labelCompare((self.epoch, self.version, self.release), (other.epoch, other.version, other.release)) == -1 return self.name < other.name def __str__(self): return self.nevra @property def evr(self): if self.epoch and self.epoch != "0": return f"{self.epoch}:{self.version}-{self.release}" return f"{self.version}-{self.release}" @property def nevra(self): return f"{self.name}-{self.evr}.{self.arch}" @property def canonfilename(self): return f"{self.name}-{self.version}-{self.release}.{self.arch}.rpm" @property def provides(self): h = self._read_rpm_header() if h is None: return None return [dep.DNEVR()[2:] for dep in rpm.ds(h, 'provides')] def _read_rpm_header(self, rpm_ts=None): if self.location is None: return None if rpm_ts is None: rpm_ts = rpm.TransactionSet() rpm_ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES) fd = os.open(self.location, os.O_RDONLY) h = rpm_ts.hdrFromFdno(fd) os.close(fd) return h @staticmethod def _cpeid_hexdecode(p): pout = '' while True: match = re.match(r'^(.*?)%([0-9a-fA-F][0-9a-fA-F])(.*)', p) if not match: return pout + p pout = pout + match.group(1) + chr(int(match.group(2), 16)) p = match.group(3) @functools.cached_property def product_cpeid(self): cpeid_prefix = "product-cpeid() = " for dep in self.provides: if dep.startswith(cpeid_prefix): return Package._cpeid_hexdecode(dep[len(cpeid_prefix):]) return None def get_src_package(self): if not self.sourcerpm: return None match = re.match(r'^(.*)-([^-]*)-([^-]*)\.([^\.]*)\.rpm$', self.sourcerpm) if not match: return None srcpkg = Package() srcpkg.name = match.group(1) srcpkg.epoch = None # sadly unknown srcpkg.version = match.group(2) srcpkg.release = match.group(3) srcpkg.arch = match.group(4) return srcpkg def matches(self, arch, name, op, epoch, version, release): if name is not None and self.name != name: return False if arch is not None and self.arch != arch: if arch == 'src' or arch == 'nosrc' or self.arch != 'noarch': return False if op is None: return True # special case a missing release or epoch in the match as labelCompare # does not handle it tepoch = self.epoch if epoch is not None else None trelease = self.release if release is not None else None cmp = rpm.labelCompare((tepoch, self.version, trelease), (epoch, version, release)) if cmp > 0: return '>' in op if cmp < 0: 
return '<' in op return '=' in op def get_directories(self): h = self._read_rpm_header() if h is None: return None dirs = {} filedevs = h['filedevices'] fileinos= h['fileinodes'] filesizes = h['filesizes'] filemodes = h['filemodes'] dirnames = h['dirnames'] dirindexes = h['dirindexes'] basenames = h['basenames'] if not basenames: return dirs for basename, dirindex, filesize, filemode, filedev, fileino in zip(basenames, dirindexes, filesizes, filemodes, filedevs, fileinos): dirname = dirnames[dirindex] if isinstance(basename, bytes): basename = basename.decode('utf-8') if isinstance(dirname, bytes): dirname = dirname.decode('utf-8') if dirname != '' and not dirname.endswith('/'): dirname += '/' if not dirname in dirs: dirs[dirname] = [] cookie = f"{filedev}/{fileino}" if (filemode & 0o170000) != 0o100000: filesize = 0 dirs[dirname].append((basename, filesize, cookie)) return dirs # vim: sw=4 et 0707010000001B000081A400000000000000000000000166FA6003000014F7000000000000000000000000000000000000003E00000000product-composer-0.4.20/src/productcomposer/core/PkgSelect.py""" Package selector specification """ import re import rpm class PkgSelect: def __init__(self, spec, supportstatus=None): self.supportstatus = supportstatus match = re.match(r'([^><=]*)([><=]=?)(.*)', spec.replace(' ', '')) if match: self.name = match.group(1) self.op = match.group(2) epoch = '0' version = match.group(3) release = None if ':' in version: (epoch, version) = version.split(':', 2) if '-' in version: (version, release) = version.rsplit('-', 2) self.epoch = epoch self.version = version self.release = release else: self.name = spec self.op = None self.epoch = None self.version = None self.release = None def matchespkg(self, arch, pkg): return pkg.matches(arch, self.name, self.op, self.epoch, self.version, self.release) @staticmethod def _sub_ops(op1, op2): if '>' in op2: op1 = re.sub(r'>', '', op1) if '<' in op2: op1 = re.sub(r'<', '', op1) if '=' in op2: op1 = re.sub(r'=', '', op1) return op1 @staticmethod def _intersect_ops(op1, op2): outop = '' if '<' in op1 and '<' in op2: outop = outop + '<' if '>' in op1 and '>' in op2: outop = outop + '>' if '=' in op1 and '=' in op2: outop = outop + '=' return outop def _cmp_evr(self, other): release1 = self.release if self.release is not None else other.release release2 = other.release if other.release is not None else self.release return rpm.labelCompare((self.epoch, self.version, release1), (other.epoch, other.version, release2)) def _throw_unsupported_sub(self, other): raise RuntimeError(f"unsupported sub operation: {self}, {other}") def _throw_unsupported_intersect(self, other): raise RuntimeError(f"unsupported intersect operation: {self}, {other}") def sub(self, other): if self.name != other.name: return self if other.op is None: return None if self.op is None: out = self.copy() out.op = PkgSelect._sub_ops('<=>', other.op) return out cmp = self._cmp_evr(other) if cmp == 0: if (self.release is not None and other.release is None) or (other.release is not None and self.release is None): self._throw_unsupported_sub(other) out = self.copy() out.op = PkgSelect._sub_ops(self.op, other.op) return out if out.op != '' else None elif cmp < 0: if '>' in self.op: self._throw_unsupported_sub(other) return None if '<' in other.op else self elif cmp > 0: if '<' in self.op: self._throw_unsupported_sub(other) return None if '>' in other.op else self self._throw_unsupported_sub(other) def intersect(self, other): if self.name != other.name: return None if other.op is None: return self if 
self.op is None:
            return other
        cmp = self._cmp_evr(other)
        if cmp == 0:
            if self.release is not None or other.release is None:
                out = self.copy()
            else:
                out = other.copy()
            out.op = PkgSelect._intersect_ops(self.op, other.op)
            if out.op == '':
                if (self.release is not None and other.release is None) or (other.release is not None and self.release is None):
                    self._throw_unsupported_intersect(other)
                return None
            return out
        elif cmp < 0:
            if '>' in self.op and '<' not in other.op:
                return other
            if '<' in other.op and '>' not in self.op:
                return self
            if '<' not in other.op and '>' not in self.op:
                return None
        elif cmp > 0:
            if '>' in other.op and '<' not in self.op:
                return self
            if '<' in self.op and '>' not in other.op:
                return other
            if '<' not in self.op and '>' not in other.op:
                return None
        self._throw_unsupported_intersect(other)

    def copy(self):
        out = PkgSelect(self.name)
        out.op = self.op
        out.epoch = self.epoch
        out.version = self.version
        out.release = self.release
        out.supportstatus = self.supportstatus
        return out

    def __str__(self):
        if self.op is None:
            return self.name
        evr = self.version
        if self.release is not None:
            evr = evr + '-' + self.release
        if self.epoch and self.epoch != '0':
            evr = self.epoch + ':' + evr
        return self.name + ' ' + self.op + ' ' + evr

    def __hash__(self):
        if self.op:
            return hash((self.name, self.op, self.epoch, self.version, self.release))
        else:
            return hash(self.name)

    def __eq__(self, other):
        if self.name != other.name:
            return False
        return str(self) == str(other)

# vim: sw=4 et
0707010000001C000081A400000000000000000000000166FA600300000AA7000000000000000000000000000000000000003B00000000product-composer-0.4.20/src/productcomposer/core/PkgSet.py
""" Package selection set """

from .PkgSelect import PkgSelect


class PkgSet:
    def __init__(self, name):
        self.name = name
        self.pkgs = []
        self.byname = None
        self.supportstatus = None

    def _create_byname(self):
        byname = {}
        for sel in self.pkgs:
            name = sel.name
            if name not in byname:
                byname[name] = []
            byname[name].append(sel)
        self.byname = byname

    def _byname(self):
        if self.byname is None:
            self._create_byname()
        return self.byname

    def add_specs(self, specs):
        for spec in specs:
            sel = PkgSelect(spec, supportstatus=self.supportstatus)
            self.pkgs.append(sel)
        self.byname = None

    def add(self, other):
        s1 = set(self)
        for sel in other.pkgs:
            if sel not in s1:
                if self.supportstatus is not None and sel.supportstatus is None:
                    sel = sel.copy()
                    sel.supportstatus = self.supportstatus
                self.pkgs.append(sel)
                s1.add(sel)
        self.byname = None

    def sub(self, other):
        otherbyname = other._byname()
        pkgs = []
        for sel in self.pkgs:
            name = sel.name
            if name not in otherbyname:
                pkgs.append(sel)
                continue
            for osel in otherbyname[name]:
                if sel is not None:
                    sel = sel.sub(osel)
            # keep whatever remains after subtracting every matching selector
            if sel is not None:
                pkgs.append(sel)
        self.pkgs = pkgs
        self.byname = None

    def intersect(self, other):
        otherbyname = other._byname()
        s1 = set()
        pkgs = []
        for sel in self.pkgs:
            name = sel.name
            if name not in otherbyname:
                continue
            for osel in otherbyname[name]:
                isel = sel.intersect(osel)
                if isel and isel not in s1:
                    pkgs.append(isel)
                    s1.add(isel)
        self.pkgs = pkgs
        self.byname = None

    def matchespkg(self, arch, pkg):
        if self.byname is None:
            self._create_byname()
        if pkg.name not in self.byname:
            return False
        for sel in self.byname[pkg.name]:
            if sel.matchespkg(arch, pkg):
                return True
        return False

    def names(self):
        if self.byname is None:
            self._create_byname()
        return set(self.byname.keys())

    def __str__(self):
        return self.name + "(" + ", ".join(str(p) for p in self.pkgs) + ")"

    def __iter__(self):
        return iter(self.pkgs)

# vim:
sw=4 et 0707010000001D000081A400000000000000000000000166FA6003000008BA000000000000000000000000000000000000003900000000product-composer-0.4.20/src/productcomposer/core/Pool.py""" Pool base class """ import os import rpm from .Package import Package from .Updateinfo import Updateinfo class Pool: def __init__(self): self.rpms = {} self.updateinfos = {} def make_rpm(self, location, rpm_ts=None): return Package(location, rpm_ts=rpm_ts) def make_updateinfo(self, location): return Updateinfo(location) def add_rpm(self, pkg, origin=None): if origin is not None: pkg.origin = origin name = pkg.name if not name in self.rpms: self.rpms[name] = [] self.rpms[name].append(pkg) def add_updateinfo(self, uinfo): self.updateinfos[uinfo.location] = uinfo def scan(self, directory): ts = rpm.TransactionSet() ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES) for dirpath, dirs, files in os.walk(directory): reldirpath = os.path.relpath(dirpath, directory) for filename in files: fname = os.path.join(dirpath, filename) if filename.endswith('updateinfo.xml'): uinfo = self.make_updateinfo(fname) self.add_updateinfo(uinfo) elif filename.endswith('.rpm'): pkg = self.make_rpm(fname, rpm_ts=ts) self.add_rpm(pkg, os.path.join(reldirpath, filename)) def lookup_all_rpms(self, arch, name, op=None, epoch=None, version=None, release=None): if name not in self.rpms: return [] return [rpm for rpm in self.rpms[name] if rpm.matches(arch, name, op, epoch, version, release)] def lookup_rpm(self, arch, name, op=None, epoch=None, version=None, release=None): return max(self.lookup_all_rpms(arch, name, op, epoch, version, release), default=None) def lookup_all_updateinfos(self): return self.updateinfos.values() def names(self, arch=None): if arch is None: return set(self.rpms.keys()) names = set() for name in self.rpms: for pkg in self.rpms[name]: if pkg.matches(arch, None, None, None, None, None): names.add(name) break return names # vim: sw=4 et 0707010000001E000081A400000000000000000000000166FA6003000001D9000000000000000000000000000000000000003F00000000product-composer-0.4.20/src/productcomposer/core/Updateinfo.py""" Updateinfo base class """ import functools from xml.etree import ElementTree as ET @functools.total_ordering class Updateinfo: def __init__(self, location=None): if location is None: return self.root = ET.parse(location).getroot() self.location = location def __eq__(self, other): return self.location == other.location def __lt__(self, other): return self.location < other.location # vim: sw=4 et 0707010000001F000081A400000000000000000000000166FA600300000026000000000000000000000000000000000000003D00000000product-composer-0.4.20/src/productcomposer/core/__init__.py""" Core implementation package. """ 07070100000020000081A400000000000000000000000166FA600300000DFC000000000000000000000000000000000000003B00000000product-composer-0.4.20/src/productcomposer/core/config.py""" Global application configuration. This module defines a global configuration object. Other modules should use this object to store application-wide configuration values. """ from pathlib import Path from string import Template import re try: import tomllib # Python 3.11+ except ModuleNotFoundError: import tomli as tomllib from .logger import logger __all__ = "config", "TomlConfig" class _AttrDict(dict): """ A dict-like object with attribute access. """ def __getitem__(self, key: str): """ Access dict values by key. :param key: key to retrieve """ value = super(_AttrDict, self).__getitem__(key) if isinstance(value, dict): # For mixed recursive assignment (e.g. 
`a["b"].c = value` to work # as expected, all dict-like values must themselves be _AttrDicts. # The "right way" to do this would be to convert to an _AttrDict on # assignment, but that requires overriding both __setitem__ # (straightforward) and __init__ (good luck). An explicit type # check is used here instead of EAFP because exceptions would be # frequent for hierarchical data with lots of nested dicts. self[key] = value = _AttrDict(value) return value def __getattr__(self, key: str) -> object: """ Get dict values as attributes. :param key: key to retrieve """ return self[key] def __setattr__(self, key: str, value: object): """ Set dict values as attributes. :param key: key to set :param value: new value for key """ self[key] = value return class TomlConfig(_AttrDict): """ Store data from TOML configuration files. """ def __init__(self, paths=None, root=None, params=None): """ Initialize this object. :param paths: one or more config file paths to load :param root: place config values at this root :param params: mapping of parameter substitutions """ super().__init__() if paths: self.load(paths, root, params) return def load(self, paths, root=None, params=None): """ Load data from configuration files. Configuration values are read from a sequence of one or more TOML files. Files are read in the given order, and a duplicate value will overwrite the existing value. If a root is specified the config data will be loaded under that attribute. :param paths: one or more config file paths to load :param root: place config values at this root :param params: mapping of parameter substitutions """ try: paths = [Path(paths)] except TypeError: # Assume this is a sequence of paths. pass if params is None: params = {} for path in paths: # Comments must be stripped prior to template substitution to avoid # any unintended semantics such as stray `$` symbols. comment = re.compile(r"\s*#.*$", re.MULTILINE) with open(path, "rt") as stream: logger.info(f"Reading config data from '{path}'") conf = comment.sub("", stream.read()) toml = Template(conf).substitute(params) data = tomllib.loads(toml) if root: self.setdefault(root, {}).update(data) else: self.update(data) return config = TomlConfig() 07070100000021000081A400000000000000000000000166FA600300000BE0000000000000000000000000000000000000003B00000000product-composer-0.4.20/src/productcomposer/core/logger.py""" Global application logging. All modules use the same global logging object. No messages will be emitted until the logger is started. """ from logging import getLogger, getLoggerClass, setLoggerClass from logging import Formatter, NullHandler, StreamHandler __all__ = "logger", class _Logger(getLoggerClass()): """ Message logger. """ LOGFMT = "%(asctime)s;%(levelname)s;%(name)s;%(message)s" def __init__(self, name=None): """ Initialize this logger. Loggers with the same name refer to the same underlying object. Names are hierarchical, e.g. 'parent.child' defines a logger that is a descendant of 'parent'. :param name: logger name (application name by default) """ # With a NullHandler, client code may make logging calls without regard # to whether the logger has been started yet. The standard Logger API # may be used to add and remove additional handlers, but the # NullHandler should always be left in place. super().__init__(name or __name__.split(".")[0]) self.addHandler(NullHandler()) # default to no output return def start(self, level="WARN", stream=None): """ Start logging to a stream. Until the logger is started, no messages will be emitted. 
This applies to all loggers with the same name and any child loggers. Multiple streams can be logged to by calling start() for each one. Calling start() more than once for the same stream will result in duplicate records to that stream. Messages less than the given priority level will be ignored. The default level conforms to the *nix convention that a successful run should produce no diagnostic output. Call setLevel() to change the logger's priority level after it has been stared. Available levels and their suggested meanings: DEBUG - output useful for developers INFO - trace normal program flow, especially external interactions WARN - an abnormal condition was detected that might need attention ERROR - an error was detected but execution continued CRITICAL - an error was detected and execution was halted :param level: logger priority level :param stream: output stream (stderr by default) """ self.setLevel(level.upper()) handler = StreamHandler(stream) handler.setFormatter(Formatter(self.LOGFMT)) handler.setLevel(self.level) self.addHandler(handler) return def stop(self): """ Stop logging with this logger. """ for handler in self.handlers[1:]: # Remove everything but the NullHandler. self.removeHandler(handler) return # Never instantiate a Logger object directly, always use getLogger(). setLoggerClass(_Logger) # applies to all subsequent getLogger() calls logger = getLogger(__name__.split(".", 1)[0]) # use application name 07070100000022000081A400000000000000000000000166FA60030000015B000000000000000000000000000000000000003800000000product-composer-0.4.20/src/productcomposer/defaults.py""" Product composer executes programs that have their own defaults. These defaults rarely change, but if they do, they'll impact product composes. To avoid such unexpected changes, we define our defaults here and explicitly pass them to the programs. """ CREATEREPO_CHECKSUM_TYPE: str = "sha512" CREATEREPO_GENERAL_COMPRESS_TYPE: str = "zstd" 07070100000023000041ED00000000000000000000000266FA600300000000000000000000000000000000000000000000003500000000product-composer-0.4.20/src/productcomposer/wrappers07070100000024000081A400000000000000000000000166FA600300000054000000000000000000000000000000000000004100000000product-composer-0.4.20/src/productcomposer/wrappers/__init__.pyfrom .createrepo import CreaterepoWrapper from .modifyrepo import ModifyrepoWrapper 07070100000025000081A400000000000000000000000166FA60030000036A000000000000000000000000000000000000003F00000000product-composer-0.4.20/src/productcomposer/wrappers/common.py__all__ = ( "BaseWrapper", "Field", ) import os import subprocess from abc import abstractmethod from pydantic import BaseModel from pydantic import Field class BaseWrapper(BaseModel, validate_assignment=True, extra="forbid"): @abstractmethod def get_cmd(self) -> list[str]: pass def run_cmd(self, check=True, stdout=None, stderr=None, cwd=None, env=None) -> subprocess.CompletedProcess: cmd = self.get_cmd() if env: # merge partial user-specified env with os.environ and pass it to the program call full_env = os.environ.copy() full_env.update(env) env = full_env return subprocess.run( cmd, check=check, stdout=stdout, stderr=stderr, cwd=cwd, env=env, encoding="utf-8", ) 07070100000026000081A400000000000000000000000166FA60030000060E000000000000000000000000000000000000004300000000product-composer-0.4.20/src/productcomposer/wrappers/createrepo.pyfrom .common import * from .. 
import defaults class CreaterepoWrapper(BaseWrapper): directory: str = Field() baseurl: str | None = Field(default=None) checksum_type: str = Field(default=defaults.CREATEREPO_CHECKSUM_TYPE) content: list[str] | None = Field(default=None) cpeid: str | None = Field(default=None) distro: str | None = Field(default=None) repos: list[str] | None = Field(default=None) excludes: list[str] | None = Field(default=None) general_compress_type: str = Field(default=defaults.CREATEREPO_GENERAL_COMPRESS_TYPE) split: bool = Field(default=False) def get_cmd(self): cmd = ["createrepo", self.directory] cmd.append("--no-database") cmd.append("--unique-md-filenames") cmd.append(f"--checksum={self.checksum_type}") cmd.append(f"--general-compress-type={self.general_compress_type}") if self.baseurl: cmd.append(f"--baseurl={self.baseurl}") if self.content: for i in self.content: cmd.append(f"--content={i}") if self.distro: if self.cpeid: cmd.append(f"--distro={self.cpeid},{self.distro}") else: cmd.append(f"--distro={self.distro}") if self.excludes: for i in self.excludes: cmd.append(f"--excludes={i}") if self.repos: for i in self.repos: cmd.append(f"--repo={i}") if self.split: cmd.append("--split") return cmd 07070100000027000081A400000000000000000000000166FA6003000003A0000000000000000000000000000000000000004300000000product-composer-0.4.20/src/productcomposer/wrappers/modifyrepo.pyfrom pydantic.types import DirectoryPath from pydantic.types import FilePath from .common import * from .. import defaults class ModifyrepoWrapper(BaseWrapper): file: FilePath = Field() directory: DirectoryPath = Field() checksum_type: str = Field(default=defaults.CREATEREPO_CHECKSUM_TYPE) compress: bool = Field(default=True) compress_type: str = Field(default=defaults.CREATEREPO_GENERAL_COMPRESS_TYPE) mdtype: str | None = Field(default=None) def get_cmd(self): cmd = ["modifyrepo", self.file, self.directory] cmd.append("--unique-md-filenames") cmd.append(f"--checksum={self.checksum_type}") if self.compress: cmd.append("--compress") else: cmd.append("--no-compress") cmd.append(f"--compress-type={self.compress_type}") if self.mdtype: cmd.append(f"--mdtype={self.mdtype}") return cmd 07070100000028000041ED00000000000000000000000266FA600300000000000000000000000000000000000000000000001E00000000product-composer-0.4.20/tests07070100000029000081A400000000000000000000000166FA60030000002D000000000000000000000000000000000000002900000000product-composer-0.4.20/tests/.gitignore# Ignore pytest cache files. .pytest_cache/ 0707010000002A000041ED00000000000000000000000266FA600300000000000000000000000000000000000000000000002500000000product-composer-0.4.20/tests/assets0707010000002B000081A400000000000000000000000166FA600300000043000000000000000000000000000000000000003000000000product-composer-0.4.20/tests/assets/conf1.tomlstr = "$$str" # literal `$`, no substitution var = "${var1}$var2" 0707010000002C000081A400000000000000000000000166FA600300000035000000000000000000000000000000000000003000000000product-composer-0.4.20/tests/assets/conf2.tomlvar = "${var1}$var3" # override `var` in conf1.toml 0707010000002D000081A400000000000000000000000166FA60030000008C000000000000000000000000000000000000002900000000product-composer-0.4.20/tests/pytest.ini[pytest] # Names with with a leading underscore are ignored. 
python_files = test_*.py python_classes = [A-Z]*Test python_functions = test_* 0707010000002E000041ED00000000000000000000000266FA600300000000000000000000000000000000000000000000002300000000product-composer-0.4.20/tests/unit0707010000002F000041ED00000000000000000000000266FA600300000000000000000000000000000000000000000000002800000000product-composer-0.4.20/tests/unit/core07070100000030000081A400000000000000000000000166FA6003000007E8000000000000000000000000000000000000004000000000product-composer-0.4.20/tests/unit/core/test_config.py.disabled""" Test suite for the core.config module. """ from pathlib import Path import pytest from {{ cookiecutter.app_name }}.core.config import * # tests __all__ class TomlConfigTest(object): """ Test suite for the YamlConfig class. """ @classmethod @pytest.fixture def files(cls, tmp_path): """ Return configuration files for testing. """ files = "conf1.toml", "conf2.toml" return tuple(Path("tests", "assets", item) for item in files) @classmethod @pytest.fixture def params(cls): """ Define configuration parameters. """ return {"var1": "VAR1", "var2": "VAR2", "var3": "VAR3"} def test_item(self): """ Test item access. """ config = TomlConfig() config["root"] = {} config["root"]["key"] = "value" assert config["root"]["key"] == "value" return def test_attr(self): """ Test attribute access. """ config = TomlConfig() config.root = {} config.root.key = "value" assert config.root.key == "value" return @pytest.mark.parametrize("root", (None, "root")) def test_init(self, files, params, root): """ Test the __init__() method for loading a file. """ merged = {"str": "$str", "var": "VAR1VAR3"} config = TomlConfig(files, root, params) if root: assert config == {root: merged} else: assert config == merged return @pytest.mark.parametrize("root", (None, "root")) def test_load(self, files, params, root): """ Test the load() method. """ merged = {"str": "$str", "var": "VAR1VAR3"} config = TomlConfig() config.load(files, root, params) if root: assert config == {root: merged} else: assert config == merged return # Make the module executable. if __name__ == "__main__": raise SystemExit(pytest.main([__file__])) 07070100000031000081A400000000000000000000000166FA6003000008AF000000000000000000000000000000000000004000000000product-composer-0.4.20/tests/unit/core/test_logger.py.disabled""" Test suite for the core.logger module. The script can be executed on its own or incorporated into a larger test suite. However the tests are run, be aware of which version of the package is actually being tested. If the package is installed in site-packages, that version takes precedence over the version in this project directory. Use a virtualenv test environment or setuptools develop mode to test against the development version. """ from logging import DEBUG from io import StringIO import pytest from obsimager.core.logger import logger as _logger @pytest.fixture def logger(): """ Get the global logger object for testing. """ yield _logger _logger.stop() # reset logger after each test return class LoggerTest(object): """ Test suite for the Logger class. """ def test_start(self, capsys, logger): """ Test the start method. """ message = "test message" logger.start("debug") logger.debug(message) _, stderr = capsys.readouterr() assert logger.level == DEBUG assert message in stderr return def test_stop(self, capsys, logger): """ Test the stop() method. 
""" logger.start("debug") logger.stop() logger.critical("test") _, stderr = capsys.readouterr() assert not stderr return def test_restart(self, capsys, logger): """ Test a restart. """ debug_message = "debug message" logger.start("INFO") logger.debug(debug_message) _, stderr = capsys.readouterr() assert debug_message not in stderr logger.stop() logger.start("DEBUG") logger.debug(debug_message) _, stderr = capsys.readouterr() assert debug_message in stderr return def test_stream(self, logger): """ Test output to an alternate stream. """ message = "test message" stream = StringIO() logger.start("debug", stream) logger.debug(message) assert message in stream.getvalue() return # Make the module executable. if __name__ == "__main__": raise SystemExit(pytest.main([__file__])) 07070100000032000081A400000000000000000000000166FA60030000034F000000000000000000000000000000000000003800000000product-composer-0.4.20/tests/unit/test_api.py.disabled""" Test suite for the api module. The script can be executed on its own or incorporated into a larger test suite. However the tests are run, be aware of which version of the module is actually being tested. If the library is installed in site-packages, that version takes precedence over the version in this project directory. Use a virtualenv test environment or setuptools develop mode to test against the development version. """ import pytest from obsimager.api import * # tests __all__ def test_hello(): """ Test the hello() function. """ assert hello() == "Hello, World!" return def test_hello_name(): """ Test the hello() function with a name. """ assert hello("foo") == "Hello, foo!" return # Make the script executable. if __name__ == "__main__": raise SystemExit(pytest.main([__file__])) 07070100000033000081A400000000000000000000000166FA6003000005D6000000000000000000000000000000000000003800000000product-composer-0.4.20/tests/unit/test_cli.py.disabled""" Test suite for the cli module. The script can be executed on its own or incorporated into a larger test suite. However the tests are run, be aware of which version of the module is actually being tested. If the library is installed in site-packages, that version takes precedence over the version in this project directory. Use a virtualenv test environment or setuptools develop mode to test against the development version. """ from shlex import split from subprocess import call from sys import executable import pytest from obsimager.cli import * # test __all__ @pytest.fixture(params=("--help", "hello")) def command(request): """ Return the command to run. """ return request.param def test_main(command): """ Test the main() function. """ try: status = main(split(command)) except SystemExit as ex: status = ex.code assert status == 0 return def test_main_none(): """ Test the main() function with no arguments. """ with pytest.raises(SystemExit) as exinfo: main([]) # displays a help message and exits gracefully assert exinfo.value.code == 1 def test_script(command): """ Test command line execution. """ # Call with the --help option as a basic sanity check. cmdl = f"{executable} -m obsimager.cli {command} --help" assert 0 == call(cmdl.split()) return # Make the script executable. if __name__ == "__main__": raise SystemExit(pytest.main([__file__])) 07070100000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000B00000000TRAILER!!!245 blocks