File pueue-3.4.1.obscpio of Package pueue
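The `07070100…` runs that appear between the files below are cpio "newc" headers: the `.obscpio` file is a plain cpio archive whose members are shown concatenated. As a sketch (assuming a local copy of the archive and a standard `cpio` binary), its contents could be listed and extracted with:

```shell
# List the members of the archive (newc format, magic number 070701).
cpio -it < pueue-3.4.1.obscpio

# Extract the members into the current directory, preserving paths.
cpio -idv < pueue-3.4.1.obscpio
```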
File pueue-3.4.1/.config/nextest.toml

# See https://nexte.st/book/configuration.html for format and defaults
# Profiles defined here inherit from profile.default

# profile used in GitHub test runs
# - Retry a few times to detect flaky tests
# - Call out every test as it finishes, including slow, skipped and flaky tests
# - List failures again at the end.
# - Run all tests even if some failed.
# - Output test results in JUnit format.
[profile.ci]
# "retries" defines the number of times a test should be retried. If set to a
# non-zero value, tests that succeed on a subsequent attempt will be marked as
# flaky. Can be overridden through the `--retries` option.
retries = 2

# * none: no output
# * fail: show failed (including exec-failed) tests
# * retry: show flaky and retried tests
# * slow: show slow tests
# * pass: show passed tests
# * skip: show skipped tests (most useful for CI)
# * all: all of the above
#
# Each value includes all the values above it; for example, "slow" includes
# failed and retried tests.
status-level = "all"

# * "immediate-final": output failures as soon as they happen and at the end of
#   the test run; combination of "immediate" and "final"
failure-output = "immediate-final"

# Cancel the test run on the first failure. For CI runs, consider setting this
# to false.
fail-fast = false

[profile.ci.junit]
# Output a JUnit report into the given file inside 'store.dir/<profile-name>'.
# The default value for store.dir is 'target/nextest', so the following file
# is written to the target/nextest/ci/ directory.
path = "junit.xml"

# profile used in GitHub coverage runs
# - lower retry count as a compromise between speed and resilience
# - no fail-fast to at least keep coverage percentages accurate.
[profile.coverage]
retries = 1
fail-fast = false

File pueue-3.4.1/.github/ISSUE_TEMPLATE/bug_report.yml

name: Bug Report 🐛
description: Create a report to help improve the project
labels: ["t: bug"]
title: "[Bug]"
body:
  - type: markdown
    attributes:
      value: |
        Please take the time to fill out all the relevant fields below.
  - type: textarea
    id: description-of-bug
    attributes:
      label: Describe the bug
      description: A clear and concise description of what the bug is.
      placeholder: A short description
    validations:
      required: true
  - type: textarea
    id: steps-to-reproduce
    attributes:
      label: Steps to reproduce
      description: Steps to reproduce the behavior.
      placeholder: |
        1. I added a task with `pueue add -- this is the command`
        2. Then I did ...
    validations:
      required: true
  - type: textarea
    id: debug-output
    attributes:
      label: Debug logs (if relevant)
      description: |
        This is mostly important for crashes, panics and weird daemon behavior.
        Logs help me to debug a problem, especially if the bug is something
        that's not clearly visible.

        You can get detailed log output by launching `pueue` or `pueued` with
        the `-vvv` flag directly after the binary name.
      placeholder: |
        ```
        Some log output here
        ```
    validations:
      required: false
  - type: input
    id: operating-system
    attributes:
      label: Operating system
      description: The operating system you're using.
      placeholder: iOS 8 / Windows 10 / Ubuntu 22.04
    validations:
      required: true
  - type: input
    id: pueue-version
    attributes:
      label: Pueue version
      description: |
        The current pueue version you're using.
        You get the `pueue`/`pueued` version by calling `pueue --version`.
      placeholder: v3.1.2
    validations:
      required: true
  - type: textarea
    id: additional-context
    attributes:
      label: Additional context
      description: Add any other context about the problem here.
      placeholder: |
        Anything else you want to add.
    validations:
      required: false

File pueue-3.4.1/.github/ISSUE_TEMPLATE/feature_request.yaml

name: Feature request
description: Suggest an idea for this project
labels: ["t: feature"]
body:
  - type: markdown
    attributes:
      value: |
        Please note that the project is considered **feature-complete.**
        Hence, no new major features will be added.
        Only UX/UI improvements and QoL features will be considered.

        If this is the case for your feature, please take the time to fill
        out all the relevant fields below.
  - type: textarea
    id: feature
    attributes:
      label: A detailed description of the feature you would like to see added.
      description: |
        Explain how that feature would look and how it should behave.
      placeholder: |
        I would love to see a configuration option to configure the color
        temperature of the terminal output.

        For instance, this could be done by adding a new configuration field
        to the `pueue.yml`.
        It would be enough for me to have a simple toggle between a `light`
        and `dark` mode.
    validations:
      required: true
  - type: textarea
    id: user-story
    attributes:
      label: Explain your usecase of the requested feature
      description: |
        I need to know what a feature is going to be used for, before I can
        decide if and how it's going to be implemented.
        The more information you provide, the better I understand your
        problem ;).
      placeholder: |
        I'm using a light terminal colorscheme and reading Pueue's output can
        be really hard from time to time.
        It would be awesome if there was an option to have darker colors, so
        it's easier to read the output.
    validations:
      required: true
  - type: textarea
    id: alternatives
    attributes:
      label: Alternatives
      description: |
        If your problem can be solved in multiple ways, I would like to hear
        the possible alternatives you've considered.
        Some problems really don't have any feasible alternatives, in that
        case don't bother answering this question :)
      placeholder: |
        I could add a wrapper around `pueue` that takes any output and
        rewrites the ANSI escape codes.
        However, this is very cumbersome and not user-friendly.
        This is why I think this should be in the upstream project.
    validations:
      required: false
  - type: textarea
    id: additional-context
    attributes:
      label: Additional context
      description: Add any other context about the problem here.
      placeholder: |
        Anything else you want to add such as sketches, screenshots, etc.
    validations:
      required: false

File pueue-3.4.1/.github/dependabot.yml

version: 2
updates:
  - package-ecosystem: github-actions
    directory: "/"
    schedule:
      interval: daily

File pueue-3.4.1/.github/pull_request_template.md

Thanks for sending a pull request!

## Checklist

Please make sure the PR adheres to this project's standards:

- [ ] I included a new entry to the `CHANGELOG.md`.
- [ ] I checked `cargo clippy` and `cargo fmt`. The CI will fail otherwise anyway.
- [ ] (If applicable) I added tests for this feature or adjusted existing tests.
- [ ] (If applicable) I checked if anything in the wiki needs to be changed.
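The bug-report template above asks for verbose logs. As a usage sketch (the subcommand shown is only illustrative), the `-vvv` flag goes directly after the binary name, as the template describes:

```shell
# Run the daemon in the foreground with maximum log verbosity.
pueued -vvv

# The client accepts the flag in the same position.
pueue -vvv status
```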
File pueue-3.4.1/.github/workflows/coverage.yml

name: "Test Coverage"

on:
  push:
    branches:
      - main
    paths:
      - ".github/**/*"
      - ".codecov.yml"
      - "**.rs"
      - "**/Cargo.toml"
      - "**/Cargo.lock"
  pull_request:
    branches:
      - main
    paths:
      - ".github/**/*"
      - ".codecov.yml"
      - "**.rs"
      - "**/Cargo.toml"
      - "**/Cargo.lock"

jobs:
  publish:
    name: Create test coverage on ${{ matrix.os }} for ${{ matrix.target }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        target:
          - x86_64-unknown-linux-gnu
        include:
          - os: ubuntu-latest
            target: x86_64-unknown-linux-gnu
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          components: llvm-tools-preview
          targets: ${{ matrix.target }}

      - uses: actions/cache@v3
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cargo-coverage-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-coverage-${{ matrix.target }}-
            ${{ runner.os }}-cargo-${{ matrix.target }}-

      - name: Install cargo-llvm-cov and nextest
        uses: taiki-e/install-action@v2
        with:
          tool: cargo-llvm-cov,nextest

      - name: Generate code coverage
        env:
          NEXTEST_PROFILE: coverage # defined in .config/nextest.toml
        run: cargo llvm-cov nextest --all-features --workspace --lcov --output-path lcov.info

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          fail_ci_if_error: false
          files: lcov.info
          token: ${{ secrets.CODECOV_TOKEN }}

File pueue-3.4.1/.github/workflows/lint.yml

name: Lint the code

on:
  push:
    branches:
      - main
    paths:
      - ".github/**/*"
      - "**.rs"
      - "**/Cargo.toml"
      - "**/Cargo.lock"
  pull_request:
    branches:
      - main
    paths:
      - ".github/**/*"
      - "**.rs"
      - "**/Cargo.toml"
      - "**/Cargo.lock"

jobs:
  linting:
    name: Lint on ${{ matrix.os }} for ${{ matrix.target }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        target:
          - x86_64-unknown-linux-gnu
          - aarch64-unknown-linux-musl
          - armv7-unknown-linux-musleabihf
          - arm-unknown-linux-musleabihf
          - x86_64-pc-windows-msvc
          - x86_64-apple-darwin
          - aarch64-apple-darwin
        include:
          - os: ubuntu-latest
            target: x86_64-unknown-linux-gnu
          - os: ubuntu-latest
            target: aarch64-unknown-linux-musl
          - os: ubuntu-latest
            target: armv7-unknown-linux-musleabihf
          - os: ubuntu-latest
            target: arm-unknown-linux-musleabihf
          - os: windows-latest
            target: x86_64-pc-windows-msvc
          - os: macos-latest
            target: x86_64-apple-darwin
          - os: macos-latest
            target: aarch64-apple-darwin
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          components: llvm-tools-preview
          targets: ${{ matrix.target }}

      - uses: actions/cache@v3
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: lint-${{ runner.os }}-cargo-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            lint-${{ runner.os }}-cargo-${{ matrix.target }}-
            ${{ runner.os }}-cargo-${{ matrix.target }}-

      - name: Install cargo-sort
        run: cargo install cargo-sort || exit 0
        if: matrix.target != 'x86_64-pc-windows-msvc'

      # ----- Actual linting logic ------
      # These lines should mirror the `just lint` command.
      - name: cargo fmt
        run: cargo fmt --all -- --check

      - name: cargo sort
        run: cargo sort --workspace --check
        # Don't run cargo-sort on windows, as the formatting behavior seems to
        # be slightly different:
        # https://github.com/DevinR528/cargo-sort/issues/56
        if: matrix.target != 'x86_64-pc-windows-msvc'

      - name: cargo clippy
        run: cargo clippy --tests --workspace -- -D warnings

File pueue-3.4.1/.github/workflows/package-binary.yml

name: Packaging

on:
  push:
    tags:
      - "v*.*.*"

jobs:
  publish:
    name: Publish on ${{ matrix.os }} for ${{ matrix.target }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        target:
          - x86_64-unknown-linux-musl
          - aarch64-unknown-linux-musl
          - armv7-unknown-linux-musleabihf
          - arm-unknown-linux-musleabihf
          - x86_64-pc-windows-msvc
          - x86_64-apple-darwin
          - aarch64-apple-darwin
        include:
          - os: ubuntu-latest
            target: x86_64-unknown-linux-musl
            client_artifact_name: target/x86_64-unknown-linux-musl/release/pueue
            daemon_artifact_name: target/x86_64-unknown-linux-musl/release/pueued
            client_release_name: pueue-linux-x86_64
            daemon_release_name: pueued-linux-x86_64
            cross: true
            strip: true
          - os: ubuntu-latest
            target: aarch64-unknown-linux-musl
            client_artifact_name: target/aarch64-unknown-linux-musl/release/pueue
            daemon_artifact_name: target/aarch64-unknown-linux-musl/release/pueued
            client_release_name: pueue-linux-aarch64
            daemon_release_name: pueued-linux-aarch64
            cross: true
            strip: false
          - os: ubuntu-latest
            target: armv7-unknown-linux-musleabihf
            client_artifact_name: target/armv7-unknown-linux-musleabihf/release/pueue
            daemon_artifact_name: target/armv7-unknown-linux-musleabihf/release/pueued
            client_release_name: pueue-linux-armv7
            daemon_release_name: pueued-linux-armv7
            cross: true
            strip: false
          - os: ubuntu-latest
            target: arm-unknown-linux-musleabihf
            client_artifact_name: target/arm-unknown-linux-musleabihf/release/pueue
            daemon_artifact_name: target/arm-unknown-linux-musleabihf/release/pueued
            client_release_name: pueue-linux-arm
            daemon_release_name: pueued-linux-arm
            cross: true
            strip: false
          - os: windows-latest
            target: x86_64-pc-windows-msvc
            client_artifact_name: target/x86_64-pc-windows-msvc/release/pueue.exe
            daemon_artifact_name: target/x86_64-pc-windows-msvc/release/pueued.exe
            client_release_name: pueue-windows-x86_64.exe
            daemon_release_name: pueued-windows-x86_64.exe
            cross: false
            strip: true
          - os: macos-latest
            target: x86_64-apple-darwin
            client_artifact_name: target/x86_64-apple-darwin/release/pueue
            daemon_artifact_name: target/x86_64-apple-darwin/release/pueued
            client_release_name: pueue-macos-x86_64
            daemon_release_name: pueued-macos-x86_64
            cross: false
            strip: true
          - os: macos-latest
            target: aarch64-apple-darwin
            client_artifact_name: target/aarch64-apple-darwin/release/pueue
            daemon_artifact_name: target/aarch64-apple-darwin/release/pueued
            client_release_name: pueue-darwin-aarch64
            daemon_release_name: pueued-darwin-aarch64
            cross: false
            strip: true
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          components: llvm-tools-preview
          targets: ${{ matrix.target }}

      - name: cargo build
        uses: houseabsolute/actions-rust-cross@v0
        with:
          command: build
          args: --release --locked
          target: ${{ matrix.target }}

      - name: Compress client
        uses: svenstaro/upx-action@v2
        with:
          file: ${{ matrix.client_artifact_name }}
          args: --lzma
          strip: ${{ matrix.strip }}
        if: matrix.target != 'x86_64-pc-windows-msvc'

      - name: Compress daemon
        uses: svenstaro/upx-action@v2
        with:
          file: ${{ matrix.daemon_artifact_name }}
          args: --lzma
          strip: ${{ matrix.strip }}
        if: matrix.target != 'x86_64-pc-windows-msvc'

      - name: Upload client binaries to release
        uses: svenstaro/upload-release-action@v2
        with:
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          file: ${{ matrix.client_artifact_name }}
          asset_name: ${{ matrix.client_release_name }}
          tag: ${{ github.ref }}
          overwrite: true

      - name: Upload daemon binaries to release
        uses: svenstaro/upload-release-action@v2
        with:
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          file: ${{ matrix.daemon_artifact_name }}
          asset_name: ${{ matrix.daemon_release_name }}
          tag: ${{ github.ref }}
          overwrite: true

      - uses: svenstaro/upload-release-action@v2
        with:
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          file: utils/pueued.service
          tag: ${{ github.ref }}
          asset_name: systemd.pueued.service
          body: ${{ steps.changelog_reader.outputs.log_entry }}
        if: matrix.target == 'x86_64-unknown-linux-musl'

File pueue-3.4.1/.github/workflows/test-report.yml

# This workflow makes it possible to publish test reports without running into
# permission issues when the test workflow was run from a fork or by Dependabot.
#
# The test workflow uploads a junit file per matrix target as an artifact, plus
# the workflow events file, both of which this workflow builds upon. Note that
# the events file artifact, specifically, is expected to be named 'Event File'.
#
# See the [Publish Test Results action documentation][ptr] for more information.
#
# [ptr]: https://github.com/marketplace/actions/publish-test-results#support-fork-repositories-and-dependabot-branches
name: "Test Report"

on:
  workflow_run:
    workflows: ["Test Build"]
    types:
      - completed

permissions: {}

jobs:
  test-results:
    name: Test Results
    runs-on: ubuntu-latest
    if: github.event.workflow_run.conclusion != 'skipped'
    permissions:
      checks: write
      # permission to comment on PRs
      pull-requests: write
      # permission to download artifacts
      actions: read
    steps:
      - name: Download and extract artifacts
        env:
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
        run: |
          # Unzip all artifacts created by the triggering workflow into
          # directories under an `artifacts/` directory.
          #
          # This uses `gh api` to output the name and URL for each artifact as
          # tab-separated lines, then uses `read` to take each name and URL
          # and download those to named zip files, finally extracting those
          # zip files into directories with matching names.
          mkdir -p artifacts && cd artifacts

          # The artifacts URL from the *triggering* test workflow, not *this*
          # workflow.
          artifacts_url=${{ github.event.workflow_run.artifacts_url }}

          gh api "$artifacts_url" -q '.artifacts[] | [.name, .archive_download_url] | @tsv' | while read artifact
          do
            IFS=$'\t' read name url <<< "$artifact"
            gh api $url > "$name.zip"
            unzip -d "$name" "$name.zip"
          done

      # Run the publisher. Note that it is given the 'Event File' artifact
      # created by the test workflow so it has the *original* webhook payload
      # to base its context on.
      - name: Publish Test Results
        uses: EnricoMi/publish-unit-test-result-action@v2
        with:
          commit: ${{ github.event.workflow_run.head_sha }}
          event_file: artifacts/Event File/event.json
          event_name: ${{ github.event.workflow_run.event }}
          junit_files: "artifacts/**/*.xml"

File pueue-3.4.1/.github/workflows/test.yml

name: "Test Build"

on:
  push:
    branches:
      - main
    paths:
      - ".github/**/*"
      - "**.rs"
      - "**/Cargo.toml"
      - "**/Cargo.lock"
  pull_request:
    branches:
      - main
    paths:
      - ".github/**/*"
      - "**.rs"
      - "**/Cargo.toml"
      - "**/Cargo.lock"

jobs:
  publish:
    name: Test on ${{ matrix.os }} for ${{ matrix.target }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        target:
          - x86_64-unknown-linux-gnu
          - aarch64-unknown-linux-musl
          - armv7-unknown-linux-musleabihf
          - arm-unknown-linux-musleabihf
          - x86_64-pc-windows-msvc
          - x86_64-apple-darwin
          - aarch64-apple-darwin
        include:
          - os: ubuntu-latest
            target: x86_64-unknown-linux-gnu
            cross: false
          - os: ubuntu-latest
            target: aarch64-unknown-linux-musl
            cross: true
          - os: ubuntu-latest
            target: armv7-unknown-linux-musleabihf
            cross: true
          - os: ubuntu-latest
            target: arm-unknown-linux-musleabihf
            cross: true
          - os: windows-latest
            target: x86_64-pc-windows-msvc
            cross: false
          - os: macos-latest
            target: x86_64-apple-darwin
            cross: false
          - os: macos-latest
            target: aarch64-apple-darwin
            cross: true
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          components: llvm-tools-preview
          targets: ${{ matrix.target }}

      - name: Install cargo-nextest
        uses: taiki-e/install-action@v2
        with:
          tool: nextest

      - name: Install cargo-cross
        uses: taiki-e/install-action@v2
        with:
          tool: cross
        if: ${{ matrix.cross }}

      - uses: actions/cache@v3
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cargo-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-${{ matrix.target }}-

      # ----- Non-Cross path
      - name: cargo build
        run: cargo build --target=${{ matrix.target }}
        if: ${{ !matrix.cross }}

      - name: cargo test
        run: cargo nextest run --workspace --target=${{ matrix.target }}
        env:
          NEXTEST_PROFILE: ci # defined in .config/nextest.toml
        if: ${{ !matrix.cross }}

      # ----- Cross path
      #- name: Install qemu
      #  run: apt-get install --assume-yes binfmt-support qemu-user-static qemu-user
      #  if: ${{ matrix.cross }}

      - name: cargo build
        run: cross build --target=${{ matrix.target }}
        if: ${{ matrix.cross }}

      # We don't do automated testing for cross builds yet.
      # - They don't work in the CI. I have yet to figure out why things aren't set up properly.
      # - The tests run way too slow and all kinds of race conditions are triggered.
      #   Until we find a way to run time-related tests in an ultra slow environment, this needs to be postponed.
      #- name: cargo test
      #  run: cross test run --workspace --target=${{ matrix.target }}
      #  env:
      #    NEXTEST_PROFILE: ci # defined in .config/nextest.toml
      #  if: ${{ matrix.cross }}

      # ----- Test result artifacts are used by the test-report.yaml workflow.
      - name: upload test results
        uses: actions/upload-artifact@v3
        if: ${{ !matrix.cross }}
        with:
          name: Test results (${{ matrix.target }})
          path: target/nextest/ci/junit.xml

  # the event file (containing the JSON payload for the webhook triggering this
  # workflow) is needed to generate test result reports with the correct
  # context. See the test-report.yaml workflow for details.
  event_file:
    name: "Event File"
    runs-on: ubuntu-latest
    steps:
      - name: Upload
        uses: actions/upload-artifact@v3
        with:
          name: Event File
          path: ${{ github.event_path }}

File pueue-3.4.1/.gitignore

# Generated by Cargo
# will have compiled files and executables
target/
lib/target/

# These are backup files generated by rustfmt
*.rs.bk

*_stdout*
*_stderr*

/utils/completions

# OS generated files #
######################
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

File pueue-3.4.1/CHANGELOG.md

# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Added

## [3.4.1] - 2024-06-04

### Added

- Nushell autocompletion script [#527](https://github.com/Nukesor/pueue/pull/527)
- Add FreeBSD process helper to facilitate FreeBSD builds

### Changed

- Replace `chrono-english` with the `interim` drop-in replacement. [#534](https://github.com/Nukesor/pueue/issues/534)

## [3.4.0] - 2024-03-22

### Added

- Support modification of task priorities via `pueue edit --priority/-o` and `pueue restart --edit-priority/-o` [#449](https://github.com/Nukesor/pueue/issues/449).
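As a usage sketch for the priority flags in the changelog entries above (task command and priority values are made up; per a later entry, `pueue edit` opens the configured `$EDITOR` for the selected property):

```shell
# Enqueue a task with an explicit priority (flag introduced in 3.2.0).
pueue add --priority 5 -- ./long-running-job.sh

# Open the editor to adjust the priority of an existing task (3.4.0).
pueue edit --priority 12
```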
- If no output directory is provided in `completions`, the generated file is printed to `stdout` [#489](https://github.com/Nukesor/pueue/issues/489).
- Allow setting the `parallel_tasks` value of groups to `0`. Setting this value allows unlimited tasks for that group [#500](https://github.com/Nukesor/pueue/issues/500).

### Fixed

- Include priority in `Task`s' `Debug` output [#493](https://github.com/Nukesor/pueue/issues/493)
- Made the daemon exit gracefully (exit code 0) on SIGINT and SIGTERM. [#504](https://github.com/Nukesor/pueue/issues/504)
- Fix reading of configuration files that lack a `shared` section. [#505](https://github.com/Nukesor/pueue/issues/505)
- Respect the `-g` flag when using the `status` filter query. [#508](https://github.com/Nukesor/pueue/issues/508)

## [3.3.3] - 2024-01-04

### Fixed

- Bump `ring` from 0.16 to 0.17 to add riscv64 support [#484](https://github.com/Nukesor/pueue/issues/484).
- Fix that the `add --priority` flag tried to get multiple arguments [#486](https://github.com/Nukesor/pueue/issues/486).

## [3.3.2] - 2023-11-28

### Fixed

- Fixed panic when calling `parallel` without arguments [#477](https://github.com/Nukesor/pueue/issues/477)
- Fixed wrong default location for `pueue_aliases.yml` [#480](https://github.com/Nukesor/pueue/issues/480)
- Fix typos

## [3.3.1] - 2023-10-27

### Fixed

- Daemonization doesn't work if pueued is not in $PATH [#299](https://github.com/Nukesor/pueue/issues/299)

## [3.3.0] - 2023-10-21

### Added

- Support the `PUEUE_CONFIG_PATH` environment variable in addition to the `--config` option. [#464](https://github.com/Nukesor/pueue/issues/464)

### Fixed

- Support parameter parsing for signal names in uppercase (`SIGINT`) and in short form (`INT`|`int`). [#455](https://github.com/Nukesor/pueue/issues/455)
- Better error messages for pid-related I/O errors. [#466](https://github.com/Nukesor/pueue/issues/466)

### Changed

- QoL improvement: Don't pause groups if there are no queued tasks. [#452](https://github.com/Nukesor/pueue/issues/452)
  Auto-pausing of groups was only done to prevent the unwanted execution of other tasks, but this isn't necessary if there are no queued tasks.

### Added

- `clear` and `cleanup` aliases for the `clean` subcommand.

The two following features are very new and marked as "experimental" for the time being. They might be reworked in a later release, since working with shells is always tricky and this definitely needs more testing.

- Experimental: Allow configuration of the shell command that executes task commands. [#454](https://github.com/Nukesor/pueue/issues/454)
- Experimental: Allow injection of hard-coded environment variables via config file. [#454](https://github.com/Nukesor/pueue/issues/454)

## [3.2.0] - 2023-06-13

### Added

- Add the `-j/--json` flag to `pueue group` to get a machine-readable list of all current groups. [#430](https://github.com/Nukesor/pueue/issues/430)
- Add `pueued.plist` template to run pueue with launchd on MacOS. [#429](https://github.com/Nukesor/pueue/issues/429)
- Add query syntax documentation to `pueue status` [#438](https://github.com/Nukesor/pueue/issues/429)
- Add the `--priority/-o` flag to `pueue add` [#429](https://github.com/Nukesor/pueue/issues/427).
  This feature can be used to gain easier control over the order in which tasks are executed. This was previously only possible via `pueue switch`.
- Add the `success` wait status. With this status, `pueue` will exit with `1` as soon as a single task fails. [#434](https://github.com/Nukesor/pueue/issues/434)

### Fix

- Fix broken bash autocompletion. Temporarily changes the names in the help texts to `pueue` and `pueued`. [#426](https://github.com/Nukesor/pueue/issues/426)
- Reword, extend and format most subcommand help texts.

### Change

- Don't fail on `follow` if a followed task exists but hasn't started yet. [#436](https://github.com/Nukesor/pueue/issues/436)
- Fail with a `1` exit code when a followed task disappears or doesn't exist in the first place. [#436](https://github.com/Nukesor/pueue/issues/436)

## [3.1.2] - 2023-02-26

### Fixed

- Fixed changes to stdout not being printed after each I/O copy when using `pueue follow`. [#416](https://github.com/Nukesor/pueue/issues/416)

## [3.1.1] - 2023-02-12

### Fixed

- Fixed missing newlines after `status`, `log` and `follow` [#414](https://github.com/Nukesor/pueue/issues/414).

## [3.1.0] - 2023-02-08

### Added

- Allow waiting for a specific task state when using `pueue wait` [#400](https://github.com/Nukesor/pueue/issues/400).

### Fixed

- Point to a new patched fork of `darwin-libproc`, as the original has been deleted. This fixes the development builds of pueue on Apple platforms.

## [3.0.1] - 2022-12-31

### Fixed

- Bump `command-group` to fix broken windows process handling [#402](https://github.com/Nukesor/pueue/issues/402)

## [3.0.0] - 2022-12-12

This release was planned to be a much smaller one, but you know how these things go. A new major version is appropriate, as the process handling has been completely refactored. Thanks to the work of [@mjpieters](https://github.com/mjpieters), Pueue now uses process groups to manage subprocesses, preventing detached processes by default! This also closes a long-standing issue and brings the support for MacOs on par with Linux!

v3.0.0 also adds the long-requested query/filter logic for the `status` command and lots of other quality-of-life improvements. The test coverage and development tooling have never been better; the project continues to improve!

### Breaking Changes

- Tasks are now started in a process group, and `pueue kill` will kill all processes in the group [#372](https://github.com/Nukesor/pueue/issues/372). The `--children` cli flag has been deprecated (signals go to the whole group, always).
  This brings pueue's task handling in line with how interactive shells handle jobs. As a side effect, it prevents detached processes and thereby covers the 90% use case users usually expect.

### Changed

- `pueue log` output now includes the task label, if any. [#355](https://github.com/Nukesor/pueue/issues/355)
- Enable `pueue edit` to edit multiple properties in one go.

### Added

- _status querying_! `pueue status` now implements the first version of a simple query logic. The filtering/order/limit logic is also applied to the `--json` output.
  This allows you to:
  - `columns=id,status,path` select the exact columns you want to be shown.
  - `[column] [<|>|=|~] [value]` apply various filters to columns. There's only a fixed set of operations on a small set of columns available for now. If you need more filtering capabilities, please create an issue or a PR :).
  - `limit [last|first] 10` limit the results that'll be shown.
  - `order_by [column] [asc|desc]` order by certain columns.
  - For exact info on the syntax, check the [syntax file](https://github.com/Nukesor/pueue/blob/main/client/query/syntax.pest). I still have to write detailed docs on how to use it.
- Show a hint when calling `pueue log` if the task output has been truncated. [#318](https://github.com/Nukesor/pueue/issues/318)
- Add `Settings.shared.alias_file`, which allows setting the location of the `pueue_aliases.yml` file.
- Added functionality to edit a task's label [#354](https://github.com/Nukesor/pueue/issues/354).
- Added the `created_at` and `enqueued_at` metadata fields on `Task` [#356](https://github.com/Nukesor/pueue/issues/356). They'll only be exposed when running `status --json` for now.

### Fixed

- Interpret the `$EDITOR` command, when editing a task's command/path, as a shell expression instead of an executable ([#336](https://github.com/Nukesor/pueue/issues/336)). This gives users more control over how their editor should be started.
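The query pieces listed under "status querying" combine into single `pueue status` invocations. A sketch, assuming the column and filter names from the bullet points (the authoritative grammar is the linked `syntax.pest` file):

```shell
# Select only specific columns.
pueue status "columns=id,status,path"

# Filter by a column, then order and limit the result.
pueue status "status=success order_by id desc limit first 10"
```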
- Don't show the version warning message between daemon and client, when using any `--json` flag. - Fix some test failures in non-standard environments for NixOS test suite ([#346](https://github.com/Nukesor/pueue/issues/346)). - The time in pueue's logs will now be in localtime instead of UTC [#385](https://github.com/Nukesor/pueue/issues/385). - MacOs support has been brought on par with Linux. ### Misc - Continuation of testing the `pueue` client, pushing the test coverage from ~70% to ~73%. - A codecov.yml syntax error was corrected, which prevented Codecov from applying the repository-specific configuration. - CI tests are now run using cargo nextest, for faster test execution, flaky test handling and better test output. - The macos test suite is now the same as that for Linux, including the client and daemon test suites. ## \[2.1.0\] - 2022-07-21 ### Added - Use the new `--color` command-line switch to control when pueue will use colors in its output. Fixes [#311](https://github.com/Nukesor/pueue/issues/311) by [mjpieters](https://github.com/mjpieters). The default is `auto`, which means it'll enable colors when connected to a TTY. The other options are `never` and `always`. ### Fixed - Only style the `group` header in status output when on a TTY ([#319](https://github.com/Nukesor/pueue/pull/319)) by [mjpieters](https://github.com/mjpieters). ### Changed - Exit `pueue follow` when reading logs, as soon as the followed task is no longer active. - Properly formatted debug output. - Hide `Task.envs` and `AddMessage.envs` in debug output, as they were too verbose and contained possibly sensible information. ### Misc - Enable CI linting on all platforms ([#323](https://github.com/Nukesor/pueue/pull/323)) by [mjpieters](https://github.com/mjpieters). - Add CI caching ([#322](https://github.com/Nukesor/pueue/pull/322)) by [mjpieters](https://github.com/mjpieters). 
- Fix missing toolchain bug in CI ([#321](https://github.com/Nukesor/pueue/pull/321)) by [mjpieters](https://github.com/mjpieters).
- Set up code-coverage in CI.
- Test suite for the `pueue` client, pushing the test coverage from ~53% to ~70%.

## \[2.0.4\] - 2022-06-05

### Fixed

- Return the correct path from `pueue_lib::settings::configuration_directories()`, when we get a path from `dirs::config_dir()` (was `/home/<user>/.config/pueue.yaml/`, is now again `/home/<user>/.config/pueue/`).
- Use the correct path to delete the PID file during shutdown.

## \[2.0.3\] - 2022-06-04

### Fixed

- Use the `dirs` crate for platform-specific directory discovery. [#311](https://github.com/Nukesor/pueue/issues/311)
  The previous trivial implementation was error-prone in some edge-cases. For instance, Pueue fell back to the shared directory, if the `$XDG_RUNTIME_DIR` couldn't be found. This resulted in a recurrence of [#302](https://github.com/Nukesor/pueue/issues/302) in non-XDG environments.
  Furthermore, Pueue used the wrong directories for its configuration and cache on Apple and Windows platforms. This is now fixed.
  This change is a bit tricky:
  - It's a fix on one hand (correct directories for Apple & Windows + fix for [#311](https://github.com/Nukesor/pueue/issues/311)).
  - It's somewhat of a **breaking change** for Apple & Windows on the other hand?
  I still decided to make this a patch release, as the next major release is still in the pipeline and needs a lot of work.
  [#302](https://github.com/Nukesor/pueue/issues/302) will still show up in Apple/Windows environments, as there doesn't seem to be a runtime directory equivalent for those platforms.

## \[2.0.2\] - 2022-03-22

### Added

- Better debug output for migration instructions from v1 to v2 [#298](https://github.com/Nukesor/pueue/issues/298).
- Better error output and error context for some filesystem related errors (continuation).
- Add a new option to specify the location of the `PID` file: `shared.pid_path` [#302](https://github.com/Nukesor/pueue/issues/302).

### Fixed

- Some options weren't properly passed on to the forked daemon instance, when starting `pueued` with the `-d` flag:
  - the `-vvv` flags
  - the `--profile` option.
- Autocompletion shell scripts. Their generation is now also tested to prevent future regressions.
- Move the `PID` file into the runtime directory to prevent rare startup issues after crashes + reboot. [#302](https://github.com/Nukesor/pueue/issues/302). This won't cause any problems for running clients/daemons, making this a backward-compatible change.
- The `format-status` option now respects the order in which tasks are piped back into pueue, as long as they're passed in list form [#301](https://github.com/Nukesor/pueue/issues/301). Tasks that are passed as a map will still be displayed in increasing order.

## \[2.0.1\] - 2022-03-12

### Added

- Better error output and error context for filesystem related errors [#293](https://github.com/Nukesor/pueue/issues/293).

### Fixed

- Commands no longer inherit environment variables from the daemon process by [drewkett](https://github.com/drewkett) [#297](https://github.com/Nukesor/pueue/pull/297). Previously, the daemon environment variables bled into the subprocesses.

## \[2.0.0\] - 2022-02-18

This release marks the second stable release of Pueue. Shortly after releasing `v1.0.0`, a few shortcomings of some design decisions became apparent. This release aims to remove those shortcomings and to add important missing features.

Some of those changes required breaking changes to both internal APIs and data structures, as well as the CLI interfaces and the configuration file. Since this project sticks to SemVer, this meant that a new major release was necessary.

Hopefully, this will be the last stable release for quite a while.
There are a few features planned that might introduce further breaking changes, but those will most likely need quite some time to implement (if we manage to implement them at all).

Anyhow, I'm quite pleased with the overall state of this release! A lot of cool and convenient stuff has been added and quite a bit of internal logic has been streamlined and cleaned up. Also a huge thanks to all contributors that helped work on this version!

### Added

- Shell auto-completion value hints for some arguments (zsh and fish only).
- Introduce the `rm` (remove), `re` (restart) and `fo` (follow) subcommand aliases [#245](https://github.com/Nukesor/pueue/issues/245).
- Allow setting the number of parallel tasks at group creation by [Spyros Roum](https://github.com/SpyrosRoum) [#249](https://github.com/Nukesor/pueue/issues/249).
- When calling `pueue` without a subcommand, the `status` command will be called by default [#247](https://github.com/Nukesor/pueue/issues/247).
- Add the `--group` parameter to the `pueue clean` command [#248](https://github.com/Nukesor/pueue/issues/248).
- Add `output` for a task's log output as a template parameter for callbacks [#269](https://github.com/Nukesor/pueue/issues/269).
- Add `--lines` parameter to `pueue follow` to only show a specified number of lines from stdout before following [#270](https://github.com/Nukesor/pueue/issues/270).
- Notify the user if a task is added to a paused group [#265](https://github.com/Nukesor/pueue/issues/265).
- Notify the user that killing whole groups also pauses those groups [#265](https://github.com/Nukesor/pueue/issues/265).
- Implementation of configuration profiles [#244](https://github.com/Nukesor/pueue/issues/244). This supports multiple profiles in a single `pueue.yml`, which can be loaded via the `--profile/-p $name` flag.
- Added the `shared.runtime_directory` config variable for any runtime related files, such as sockets.
- `XDG_CONFIG_HOME` is respected for Pueue's config directory [#243](https://github.com/Nukesor/pueue/issues/243).
- `XDG_DATA_HOME` is used if the `pueue_directory` config isn't explicitly set [#243](https://github.com/Nukesor/pueue/issues/243).
- `XDG_RUNTIME_DIR` is used if the new `runtime_directory` config isn't explicitly set [#243](https://github.com/Nukesor/pueue/issues/243).
- The unix socket is now located in the `runtime_directory` by default [#243](https://github.com/Nukesor/pueue/issues/243).
- The `format-status` subcommand [#213](https://github.com/Nukesor/pueue/issues/213). This is a preliminary feature, which allows users to use external tools, such as `jq`, to filter Pueue's `state -j` output and pipe it back into `format-status` to display it. This feature will probably be removed once a proper internal filter logic has been added. \
  The simplest usage looks like this: `pueue status --json | jq -c '.tasks' | pueue format-status`
- Show currently active commands when calling `pueue wait`.

### Changed

- Improved memory footprint for reading partial logs.
- Always only show the last X lines of output when using `pueue log` without additional parameters.
- `pueue parallel` without arguments now also shows the groups with their current limit, like `pueue group`. [#264](https://github.com/Nukesor/pueue/issues/264)
- Configuration files will no longer be changed programmatically [#241](https://github.com/Nukesor/pueue/issues/241).
- Default values for almost all configuration variables have been added [#241](https://github.com/Nukesor/pueue/issues/241).
- **Breaking changes:** `stderr` and `stdout` of Pueue's tasks are now combined into a single file. This means a few things:
  - One doesn't have to filter for stderr any longer.
  - All logs are now combined in a single chronologically correct log file.
  - One **can no longer** filter for stderr/stdout specific output.
- **Breaking changes:** The `group` subcommand now has `group add [-p $count] $name` and `group remove $name` subcommands. The old `group [-a,-p,-r]` flags have been removed.
- **Breaking changes:** The configuration for groups can no longer be done via the configuration file. This means that groups can only be edited, created or deleted via the commandline interface. The number of parallel tasks will also be reset to `1` when upgrading.

### Removed

- No longer read `/etc/pueue/` configuration files. Pueue isn't designed as a system-wide service, hence it doesn't make sense to have system-wide configuration files.
- If multiple configuration files are found, they're no longer merged together. Instead, only the first file will be used.

### Fixed

- Recover tasks from `Locked` state if editing fails [#267](https://github.com/Nukesor/pueue/issues/267)
- `pueue log` now behaves the same for local and remote logs. Remote logs previously showed more lines under some circumstances.
- Panic due to a rogue `.unwrap()` when filtering for a non-existing group in `pueue status`.

## \[1.0.6\] - 2022-01-05

### Fixed

- The `--after` flag on `add` no longer accepted multiple parameters. This was due to a change in Clap's API in their bump from beta to full v3 release.

## \[1.0.5\] - 2022-01-02

### Changed

- Update to stable clap v3.0.

### Fix

- Panic instead of looping endlessly, if the `task_log` directory disappears.

## \[1.0.4\] - 2021-11-12

### Fix

- Hard panic of the daemon when one tries to switch a task with itself [#262](https://github.com/Nukesor/pueue/issues/262).

## \[1.0.3\] - 2021-09-15

### Fix

- The `default` group wasn't created, if the `pueue.yml` config file didn't contain it. [#242](https://github.com/Nukesor/pueue/issues/242). This led to crashes and undefined behavior in the daemon and the client. This bug was introduced in `1.0.0` due to changes to the internal data structures and several added features.
It only popped up now, due to [#236](https://github.com/Nukesor/pueue/issues/236) being fixed, as the config is now being correctly used. This only affects users with quite old pueue configs or custom config files.

## \[1.0.2\] - 2021-09-12

### Feature

This feature wasn't supposed to be added to v1.0.2 and breaks semantic versioning. I'm still getting used to this, sorry for any inconvenience.

- Add the `--working-directory` parameter to the `pueue add` command [#227](https://github.com/Nukesor/pueue/issues/227).

### Fix

- Settings weren't always read on daemon restart. [#236](https://github.com/Nukesor/pueue/issues/236). This bug was introduced in `1.0.0` due to large-scale refactorings and insufficient testing.

## \[1.0.1\] - 2021-08-20

### Fix

- Update to clap `v3.0.0-beta.4`. The upgrade from beta.2 to beta.4 introduced breaking changes, which led to compiler errors when doing a `cargo install` without `--locked`. A beta upgrade seems to be handled like a patch version in semantic versioning. This isn't a bug per se, but it leads to confusion when people forget the `--locked` flag during install.

## \[1.0.0\] - 2021-08-19

A lot of things happened during this release. Even though quite a few new features were added, the main effort went into increasing stability and inter-version compatibility. The goal of this release is to push the code quality, error handling, test coverage and stability to a level that justifies a v1.0 release. \
Since this project follows semantic versioning, this means no breaking changes and backward compatibility on minor version upgrades. \
This also means that I'm quite certain that there are no critical bugs in the project and that all important and planned features have been implemented. Unless some critical issues pop up, this can be seen as a finished version of the project!

**Disclaimer:** This project is mainly developed for Linux. Windows and macOS/Apple platforms are partially supported, but this is a community effort.
Therefore, v1.0 might be misleading for those platforms. \
I hope you understand that I cannot wait for someone to implement missing features for these platforms. I want this project to move forward.

### Added

- `~` is respected in configuration paths by [dadav](https://github.com/dadav) for [#191](https://github.com/Nukesor/pueue/issues/191).
- Use `pueue kill --signal SigTerm` to send Unix signals directly to Pueue's processes. [#202](https://github.com/Nukesor/pueue/issues/202)
- Support for other `apple` platforms. New build artifacts for `ios-aarch64`.
- Option in config file to use the `--in-place` flag on `restart` by default.
- `--failed-in-group [group_name]` for `restart`. That way you can restart all failed tasks of a specific group [#211](https://github.com/Nukesor/pueue/issues/211)
- Options in config file to configure the time and datetime format in `pueue status` for [#212](https://github.com/Nukesor/pueue/issues/212).
- Add a worker pool representation for groups to Pueue [#218](https://github.com/Nukesor/pueue/issues/218). The task's group name and the pool's worker id for a given task are then injected into the environment variables of the subprocess. This allows users to map Pueue's internal group and worker logic to external resources:

  ```
  ./run_on_gpu_pool --gpu $PUEUE_WORKER_ID --pool $PUEUE_GROUP
  ```

- The last lines of `stderr` and `stdout` are now available in the callback command. [#196](https://github.com/Nukesor/pueue/issues/196).
- Add the `callback_log_lines` setting for the daemon, specifying the number of lines returned to the callback. [#196](https://github.com/Nukesor/pueue/issues/196).
- Add a PID file to `$pueue_directory/pueue.pid`, which will be used to check whether there's an already running daemon.

### Changed

- Use the next available id instead of constantly increasing ids. This results in ids being reused on `pueue clean` or `pueue remove` of the last tasks in a queue.
- Show the date in `pueue status` for the `start` and `end` fields, if the task didn't start today.
- Backward compatible protocol for stable version changes with `serde_cbor`.
- Detection of old daemon versions during client->daemon handshake.
- Overall better debug messages.
- Use tokio's async runtime and set a hardcoded limit of 4 worker threads, which is already more than enough.
- Add a debug message, when using `pueue wait` or `pueue wait -g some_group` and there are no tasks in the group.
- Stabilized internal daemon shutdown and restoration logic.
- Rename `Index` to `Id` in `pueue status` to free up screen space.
- Remove the `Exitcode` column in `pueue status` and include the exit code in the `Failed` status to free up screen space.
- You can no longer remove groups, if there are still tasks assigned to that group.
- A non-zero exit code will be returned, if no tasks were affected by an action.

### Datastructures

A whole lot of Pueue's internal data structures have been refactored. The main goal of this was to prevent impossible/invalid states wherever possible. Overall, this resulted in sleeker and much more maintainable code. However, this broke backward compatibility with pre-v1.0 in numerous places.

- The JSON structure of the `Task` struct changed significantly, as data depending on the current status has been moved into the `TaskStatus` enum.
- Many messages have been touched, as several new enums have been introduced and many fields have been removed.

### Fixed

- Handle a very rare race condition, where tasks with failed dependencies start anyway.
- `pueue log --json` now works again. [#186](https://github.com/Nukesor/pueue/issues/186) By default, only a few lines of output will be provided, but this can be configured via the `--full` and `--lines` options.
- Use crossbeam's mpsc channels, resulting in faster execution of users' instructions.
- Fix an issue where the daemon was shutting down so fast, there wasn't enough time to respond to the client that it's actually shutting down.

### Removed

- Removed the `enqueue` parameter from callbacks, as the callback is only run for finished tasks.

## \[0.12.2\] - 2021-04-20

### Fixed

- Remove task logs on `pueue remove`. [#187](https://github.com/Nukesor/pueue/issues/187)
- Improve Windows support by [oiatz](https://github.com/oiatz). [#114](https://github.com/Nukesor/pueue/issues/114)
- Fix empty output for empty groups when requesting a specific group with `status -g $name`. [#190](https://github.com/Nukesor/pueue/issues/190)
- Fix missing output when explicitly requesting the default group with `status -g default`. [#190](https://github.com/Nukesor/pueue/issues/190)

## \[0.12.1\] - 2021-03-12

### Fixed

- Dependent tasks didn't update the ids of their dependencies, if a dependency's id was changed via `pueue switch` [#185](https://github.com/Nukesor/pueue/issues/185)

### Changed

- Show the status of the default group, if there are no tasks in the queue.

## \[0.12.0\] - 2021-02-10

**Info for all packagers:** \
In case you updated your packaging rules for the new layout in v0.11, those changes need to be reverted. \
The new repository layout with workspaces didn't work out that well. Managing two crates in a single repository in combination with `cargo release` turned out to be quite annoying.

### Added

- `--all-failed` flag for `restart`. This will restart all tasks that didn't finish with a `Success` status. [#79](https://github.com/Nukesor/pueue/issues/79)
- New config option `client.dark_mode` by [Mephistophiles](https://github.com/Mephistophiles). [#178](https://github.com/Nukesor/pueue/issues/178) Default: `false`. Adds the ability to switch to dark colors instead of regular colors.

### Changed

- Rename/change some flags on the `restart` subcommand.
  1. Rename `--path` to `--edit-path`. The short flag stays the same (`p`).
  1.
Rename the short flag for `--start-immediately` to `-k`.
- Dependency bump to pueue-lib `v0.12.1`

### Fixed

- `-s` flag overload on the `restart` command. `--start-immediately` and `--stashed` collided.
- Error on BSD due to the inability to get the username from the system registry. [#173](https://github.com/Nukesor/pueue/issues/173)

## \[0.11.2\] - 2021-02-01

### Changed

- Readability of the `log` command has been further improved.
- Dependency bump to pueue-lib `v0.11.2`

## \[0.11.1\] - 2021-01-19

### Fixed

- Wrong version (`pueue-v0.11.0-alpha.0`) due to an error in the build process with the new project structure. [#169](https://github.com/Nukesor/pueue/issues/169)

## \[0.11.0\] - 2021-01-18

### Added

- Add the `--lines` flag to the `log` subcommand. This is used to only show the last X lines of each task's stdout and stderr.
- Add the `--full` flag to the `log` subcommand. This is used to show the whole logfile of each task's stdout and stderr.
- Add the `--successful-only` flag to the `clean` subcommand. This lets you keep all important logs of failed tasks, while freeing up some screen space.

### Changed

- If multiple tasks are selected, `log` now only shows the last few lines for each log. You can use the new `--full` option to get the old behavior.

## \[0.10.2\] - 2020-12-31

### Fixed

- It was possible to remove tasks with active dependants, i.e. tasks which have a dependency and haven't finished yet. This didn't lead to any crashes, but could lead to unwanted behavior, since the dependent tasks simply started due to the dependency no longer existing. It's however still possible to delete dependencies as long as their dependants are deleted as well.

## \[0.10.1\] - 2020-12-29

### Fixed

- Panic, when using `pueue status` and only having tasks in non-default groups.

## \[0.10.0\] - 2020-12-29

This release adds a lot of breaking changes! I tried to clean up, refactor and streamline as much code as possible. `v0.10.0` aims to be the last release before hitting v1.0.0.
\
From that point on I'll try to maintain backward compatibility for as long as possible (v2.0.0). \
Please read this changelog carefully.

### Changed

- Use TLS encryption for all TCP communication. [#52](https://github.com/Nukesor/pueue/issues/52)
- Updated Crossterm and thereby bumped the required rust version to `1.48`.
- Extract the shared `secret` into a separate file. [#52](https://github.com/Nukesor/pueue/issues/52) This will allow users to publicly sync their config directory between machines.
- Change the default secret length from 20 to 512 chars. [#52](https://github.com/Nukesor/pueue/issues/52)
- Lots of internal code cleanup/refactoring/restructuring.
- Exit the client with a non-zero exit code when getting a failure message from the daemon.
- The `group` list output has been properly styled.
- Use unix sockets by default on unix systems. [#165](https://github.com/Nukesor/pueue/issues/165)
- All unix socket code and configuration has been removed when building for Windows.

### Added

- Add the `shared.host` configuration variable. [#52](https://github.com/Nukesor/pueue/issues/52) This finally allows accepting outside connections, but comes with some security implications.
- Create a self-signed ECDSA cert/key for TLS crypto with [rcgen](https://github.com/est31/rcgen). [#52](https://github.com/Nukesor/pueue/issues/52)
- Error messages have been improved in many places.
- `daemon.pause_all_on_failure` config, which pauses all groups as soon as a task fails.
- `daemon.pause_group_on_failure` config, which only pauses the group of the affected task instead of everything.
- Users can add some additional information to tasks with the `task add --label $LABEL` option, which will be displayed when calling `pueue status`. [#155](https://github.com/Nukesor/pueue/issues/155)
- `--escape` flag on the `add` subcommand, which takes all given parameter strings and escapes special characters.
  [#158](https://github.com/Nukesor/pueue/issues/158)
- Remove `--task-ids` for `wait`. Now it's used the same way as start/kill/pause etc.
- Add an option `--print-task-id` to only return the task id on `add`. This allows for better scripting. [#151](https://github.com/Nukesor/pueue/issues/151)

### Removed

- Removed the `daemon.pause_on_failure` configuration variable in favor of the other two previously mentioned options.
- Removed the `--port` and `--unix-socket-path` cli flags on the client in favor of the `--config` flag.
- Removed the `--port` flag on the daemon in favor of the `--config` flag.

### Fixed

- Properly pass the `--config` CLI argument to the daemonized `pueued` instance.
- The `--default` flag on the `kill` command has been removed, since this was the default anyway. That makes this command's behavior consistent with the `start` and `pause` commands.
- Allow the old `kill [task_ids...]` behavior. You no longer need the `-t` flag to kill tasks. This broke in one of the previous refactorings.

### Internal

- The default group is now an actual group.

## \[0.9.0\] - 2020-12-14

### Added

- The `wait` subcommand. This allows you to wait for all tasks in the default queue / a specific group to finish. [#117](https://github.com/Nukesor/pueue/issues/117) On top of this, you can also specify specific task ids.
- New client configuration `show_expanded_aliases` (default: `false`). Determines whether the original input command or the expanded alias will be shown when calling `status`.
- New `--in-place` option for `restart`, which resets and reuses the existing task instead of creating a new one. [#147](https://github.com/Nukesor/pueue/issues/147)

### Changed

- Don't update the status of tasks with failed dependencies on paused queues. This allows fixing dependency chains without having to restart all tasks, in combination with `pause_on_failure` and the new `--in-place` restart option.

### Fixed

- `pause_on_failure` pauses the group of the failed task.
  Previously this always paused the default queue.
- Properly display version when using `-V`. (#143)
- Execute callbacks for tasks with failed dependencies.
- Execute callbacks for tasks that failed to spawn at all.
- Persist state changes when handling tasks that failed to spawn.
- Set proper start/end times for all tasks that failed in any way.

### Changed

- The original user command will be used when editing a task's command. As a result of this, aliases will be re-applied after editing a command.

## \[0.8.2\] - 2020-11-20

### Added

- Add the `exit_code` parameter to callback hooks. (#138)
- Add a confirmation message when using `reset` with running tasks by [quebin31](https://github.com/quebin31). [#140](https://github.com/Nukesor/pueue/issues/140)

### Changed

- Update to the beta branch of Clap v3. Mainly for better auto-completion scripts.

## \[0.8.1\] - 2020-10-27

### Added

- Add `start`, `end` and `enqueue` time parameters to callback hooks by [soruh](https://github.com/soruh).
- Config flag to truncate content in `status`. (#123)

### Fixed

- ZSH completion script fix by [ahkrr](https://github.com/ahkrr).

## \[0.8.0\] - 2020-10-25

This version adds breaking changes:

- The configuration file structure has been changed. There's now a `shared` section.
- The configuration files have been moved to a dedicated `pueue` subdirectory.

### Added

- Unix socket support [#90](https://github.com/Nukesor/pueue/issues/90)
- New option to specify a configuration file on startup for daemon and client.
- Warning messages for removing/killing tasks [#111](https://github.com/Nukesor/pueue/issues/111) by [Julian Kaindl](https://github.com/kaindljulian)
- Better message on `pueue group` when there are no groups yet.
- Guide on how to connect to remote hosts via ssh port forwarding.

### Changed

- Move a lot of documentation from the README and FAQ into Github's wiki. The docs have been restructured at the same time.
- Never create a default config when starting the client.
  Only starting the daemon can do that.
- Better error messages when connecting with a wrong secret.
- Windows: The configuration file will now also be placed in `%APPDATA%\Local\pueue`.

### Fixed

- Fixed a panic when killing and immediately removing a task. [#119](https://github.com/Nukesor/pueue/issues/119)
- Fixed a broken, non-responsive daemon on panics in threads. [#119](https://github.com/Nukesor/pueue/issues/119)
- Don't allow empty commands on `add`.
- The client will never persist/write the configuration file. [#116](https://github.com/Nukesor/pueue/issues/116)
- The daemon will only persist the configuration file on startup, if anything changes. [#116](https://github.com/Nukesor/pueue/issues/116)
- (Probably fixed) Malformed configuration file. [#116](https://github.com/Nukesor/pueue/issues/116)

## \[0.7.2\] - 2020-10-05

### Fixed

- Non-existing tasks were displayed as successfully removed. [#108](https://github.com/Nukesor/pueue/issues/108)
- Remove child process handling logic for macOS, since the library simply doesn't support this.
- Remove unneeded `config` features and reduce compile time by ~10%. Contribution by [LovecraftianHorror](https://github.com/LovecraftianHorror) [#112](https://github.com/Nukesor/pueue/issues/112)
- Remove futures-timers, effectively reducing compile time by ~14%. [#112](https://github.com/Nukesor/pueue/issues/112)
- Update to comfy-table v1.1.0, reducing compile time by another ~10%. [#112](https://github.com/Nukesor/pueue/issues/112)

### Changed

- Linux process handling now always sends signals to its direct children, if the root process is a `sh -c` process. Previously, this behavior was somewhat ambiguous and inconsistent. [#109](https://github.com/Nukesor/pueue/issues/109)

### Added

- Update workflow to build arm binaries.

## \[0.7.0\] - 2020-07-23

### Added

- New `-e` and `-p` flags to edit tasks on restart. `-e` for `command`, `-p` for `path`. Both can be added at the same time.
### Changed

- Internal refactoring of the client code. Mostly structure.

### Fixed

- Improved CLI validation. Several subcommands accepted empty task id vectors, when they shouldn't.

## \[0.6.3\] - 2020-07-11

### Changed

- Don't do any output styling, if `stdout` is not a tty.

## \[0.6.2\] - 2020-07-11

### Fixed

- Fix local `stderr` formatting for `log`.
- Fix missing sleep in local `follow` loop, resulting in single-core 100% CPU usage.

## \[0.6.1\] - 2020-06-14

### Changed

- New default behavior for `follow`. Implemented by [JP-Ellis](https://github.com/JP-Ellis).
- Delete everything in Pueue's `task_logs` folder on `reset`.

## \[0.6.0\] - 2020-06-07

### Added

- `pueue_aliases.yml`, which allows some shell-like aliasing.
- `-c` flag for `kill` and `reset`.

## \[0.5.1\] - 2020-05-31

### Added

- `--children/-c` flag for `start` and `stop`. This sends the `SIGSTOP`/`SIGSTART` signal not only to the main process of a task, but also to its direct children. This is, for instance, useful if you're starting tasks via a shell script.

### Fixed

- Fixed a formatting bug in `pueue log`. Fixed by [sourcefrog](https://github.com/sourcefrog).

## \[0.5.0\] - 2020-05-15

### Added

- Groups! Tasks can now be assigned to a group. Each group acts as its own queue and each group has its own setting for parallel task execution. Groups can also be paused/resumed individually.
- Added the `--group` flag for `status`. This will only print tasks of a specific group.
- Add the new `--default` flag to `kill`. With this flag, only tasks in the default queue will be affected.
- Users can now specify a custom callback that'll be called whenever tasks finish.
- Environment variable capture. Tasks will now start with the variables of the environment `pueue add` is being called in.

### Changed

- `log` now also works on running and paused tasks. It thereby replaces some of `show`'s functionality.
- Rename `show` to `follow`. `follow` is now only for actually following the output of a single command.
- `follow` (previously `show`) now also reads directly from disk, if `read_local_logs` is set to `true`.
- The `--all` flag now affects all groups AND the default queue for `kill`, `start` and `pause`.

## \[0.4.0\] - 2020-05-04

### Added

- Dependencies! This adds the `--after [ids]` option. Implemented by [tinou98](https://github.com/tinou98).
  Tasks with this option will only be started if all specified dependencies successfully finish. Tasks with failed dependencies will fail as well.
- New state `FailedToStart`. Used if the process cannot be started.
- New state `DependencyFailed`. Used if any dependency of a task fails.
- New config option `read_local_logs`. Default: `true`.
  We assume that the daemon and client run on the same machine by default. This removes the need to send logs via socket, since the client can directly read the log files.
  Set to `false` if you, for instance, use Pueue in combination with SSH port forwarding.

### Changed

- Pueue no longer stores log output in its backup files.
- Process log output is no longer permanently stored in memory. This significantly reduced RAM usage for large log outputs. Huge thanks for helping with this to [sourcefrog](https://github.com/sourcefrog)!
- Process log output is compressed in-memory on read from disk. This leads to reduced bandwidth and RAM usage.

## \[0.3.1\] - 2020-04-10

### Fixed

- Set `start` for processes. (Seems to have broken in 0.2.0)

## \[0.3.0\] - 2020-04-03

### Added

- `pause_on_failure` configuration flag. Set this to true to pause the daemon as soon as a task fails.
- Add `--stashed` flag to `restart`.
- Add `-p/--path` flag to allow editing of a stashed/queued task's path.
- Better network utilization for `pueue log`.

### Fixed

- Respect `Killed` tasks on `pueue clean`.
- Show `Killed` status in `pueue log`.
- Fix `pueue log` formatting.
- Show daemon status if no tasks exist.
- Better error messages when the daemon isn't running.
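The `pause_on_failure` flag above is set in the daemon's configuration file. A minimal sketch of what such a `pueue.yml` fragment could look like; the placement under a `daemon` section is an assumption based on the `daemon.pause_on_failure` key name referenced in later releases, not a verified layout for this version:

```yaml
# Hypothetical pueue.yml fragment:
# pause the daemon as soon as any task fails.
daemon:
  pause_on_failure: true
```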
## \[0.2.0\] - 2020-03-25

### Added

- New `--delay` flag, which delays enqueueing of a task. Can be used on `start` and `enqueue`. Implemented by [taylor1791](https://github.com/taylor1791).
- `--stashed` flag for `pueue add` to add a task in stashed mode. Implemented by [taylor1791](https://github.com/taylor1791).

### Changed

- Generating completion files moved away from `build.rs` to the new `pueue completions {shell} {output_dir}` subcommand. This seems to be the proper way to generate completion files with clap. There is a `build_completions.sh` script to build all completion files to the known location for your convenience.

### Fixed

- Fix the `edit` command.
- Several wrong state restorations after restarting pueue.

## \[0.1.6\] - 2020-02-05

### Fixed

- \[BUG\] Fix wrong TCP receiving logic.
- Automatically create the config directory.
- Fix and reword cli help texts.

## \[0.1.5\] - 2020-02-02

### Changed

- Basic Windows support. Huge thanks to [Lej77](https://github.com/Lej77) for implementing this!
- Integrate completion script build in `build.rs`.

## \[0.1.4\] - 2020-01-31

### Changed

- Dependency updates

## \[0.1.3\] - 2020-01-29

### Changed

- Change table design of `pueue status`.

## \[0.1.2\] - 2020-01-28

### Fixed

- Handle broken UTF8 in `show` with `-f` and `-e` flags.
- Allow restart of `Killed` processes.

## \[0.1.1\] - 2020-01-28

### Added

- Add the `--daemonize` flag to the daemon to daemonize `pueued` without using a service manager.
- Add the `shutdown` subcommand to the client to manually kill the pueue daemon.

### Changed

- Replace prettytables-rs with comfy-table.
- Replace termion with crossterm.

pueue-3.4.1/Cargo.lock:

# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3 [[package]] name = "addr2line" version = "0.22.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6e4503c46a5c0c7844e948c9a4d6acd9f50cccb4de1c48eb9e291ea17470c678" dependencies = [ "gimli", ] [[package]] name = "adler" version = "1.0.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe" [[package]] name = "aho-corasick" version = "1.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e60d3430d3a69478ad0993f19238d2df97c507009a52b3c10addcd7f6bcb916" dependencies = [ "memchr", ] [[package]] name = "android-tzdata" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e999941b234f3131b00bc13c22d06e8c5ff726d1b6318ac7eb276997bbb4fef0" [[package]] name = "android_system_properties" version = "0.1.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "819e7219dbd41043ac279b19830f2efc897156490d7fd6ea916720117ee66311" dependencies = [ "libc", ] [[package]] name = "anstream" version = "0.6.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "418c75fa768af9c03be99d17643f93f79bbba589895012a80e3452a19ddda15b" dependencies = [ "anstyle", "anstyle-parse", "anstyle-query", "anstyle-wincon", "colorchoice", "is_terminal_polyfill", "utf8parse", ] [[package]] name = "anstyle" version = "1.0.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "038dfcf04a5feb68e9c60b21c9625a54c2c0616e79b72b0fd87075a056ae1d1b" [[package]] name = "anstyle-parse" version = "0.2.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c03a11a9034d92058ceb6ee011ce58af4a9bf61491aa7e1e59ecd24bd40d22d4" dependencies = [ "utf8parse", ] [[package]] name = "anstyle-query" version = "1.0.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"a64c907d4e79225ac72e2a354c9ce84d50ebb4586dee56c82b3ee73004f537f5" dependencies = [ "windows-sys 0.52.0", ] [[package]] name = "anstyle-wincon" version = "3.0.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "61a38449feb7068f52bb06c12759005cf459ee52bb4adc1d5a7c4322d716fb19" dependencies = [ "anstyle", "windows-sys 0.52.0", ] [[package]] name = "anyhow" version = "1.0.86" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b3d1d046238990b9cf5bcde22a3fb3584ee5cf65fb2765f454ed428c7a0063da" [[package]] name = "assert_cmd" version = "2.0.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ed72493ac66d5804837f480ab3766c72bdfab91a65e565fc54fa9e42db0073a8" dependencies = [ "anstyle", "bstr 1.9.1", "doc-comment", "predicates", "predicates-core", "predicates-tree", "wait-timeout", ] [[package]] name = "async-trait" version = "0.1.80" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c6fa2087f2753a7da8cc1c0dbfcf89579dd57458e36769de5ac750b4671737ca" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "autocfg" version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c4b4d0bd25bd0b74681c0ad21497610ce1b7c91b1022cd21c80c6fbdd9476b0" [[package]] name = "backtrace" version = "0.3.72" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "17c6a35df3749d2e8bb1b7b21a976d82b15548788d2735b9d82f329268f71a11" dependencies = [ "addr2line", "cc", "cfg-if", "libc", "miniz_oxide", "object", "rustc-demangle", ] [[package]] name = "base64" version = "0.22.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6" [[package]] name = "beef" version = "0.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3a8241f3ebb85c056b509d4327ad0358fbbba6ffb340bf388f26350aeda225b1" [[package]] name = 
"better-panic" version = "0.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6fa9e1d11a268684cbd90ed36370d7577afb6c62d912ddff5c15fc34343e5036" dependencies = [ "backtrace", "console", ] [[package]] name = "bindgen" version = "0.69.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a00dc851838a2120612785d195287475a3ac45514741da670b735818822129a0" dependencies = [ "bitflags 2.5.0", "cexpr", "clang-sys", "itertools", "lazy_static", "lazycell", "proc-macro2", "quote", "regex", "rustc-hash", "shlex", "syn", ] [[package]] name = "bitflags" version = "1.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a" [[package]] name = "bitflags" version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "cf4b9d6a944f767f8e5e0db018570623c85f3d925ac718db4e06d0187adb21c1" [[package]] name = "block-buffer" version = "0.10.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71" dependencies = [ "generic-array", ] [[package]] name = "bstr" version = "0.2.17" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ba3569f383e8f1598449f1a423e72e99569137b47740b1da11ef19af3d5c3223" dependencies = [ "lazy_static", "memchr", "regex-automata 0.1.10", ] [[package]] name = "bstr" version = "1.9.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "05efc5cfd9110c8416e471df0e96702d58690178e206e61b7173706673c93706" dependencies = [ "memchr", "regex-automata 0.4.6", "serde", ] [[package]] name = "bumpalo" version = "3.16.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c" [[package]] name = "byteorder" version = "1.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" 
checksum = "1fd0f2584146f6f2ef48085050886acf353beff7305ebd1ae69500e27c67f64b" [[package]] name = "bytes" version = "1.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "514de17de45fdb8dc022b1a7975556c53c86f9f0aa5f534b98977b171857c2c9" [[package]] name = "cc" version = "1.0.98" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "41c270e7540d725e65ac7f1b212ac8ce349719624d7bcff99f8e2e488e8cf03f" [[package]] name = "cexpr" version = "0.6.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6fac387a98bb7c37292057cffc56d62ecb629900026402633ae9160df93a8766" dependencies = [ "nom", ] [[package]] name = "cfg-if" version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" [[package]] name = "cfg_aliases" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fd16c4719339c4530435d38e511904438d07cce7950afa3718a84ac36c10e89e" [[package]] name = "chrono" version = "0.4.38" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a21f936df1771bf62b77f047b726c4625ff2e8aa607c01ec06e5a05bd8463401" dependencies = [ "android-tzdata", "iana-time-zone", "js-sys", "num-traits", "serde", "wasm-bindgen", "windows-targets 0.52.5", ] [[package]] name = "clang-sys" version = "1.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a483f3cbf7cec2e153d424d0e92329d816becc6421389bd494375c6065921b9b" dependencies = [ "glob", "libc", "libloading", ] [[package]] name = "clap" version = "4.5.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "90bc066a67923782aa8515dbaea16946c5bcc5addbd668bb80af688e53e548a0" dependencies = [ "clap_builder", "clap_derive", ] [[package]] name = "clap_builder" version = "4.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"ae129e2e766ae0ec03484e609954119f123cc1fe650337e155d03b022f24f7b4" dependencies = [ "anstream", "anstyle", "clap_lex", "strsim", ] [[package]] name = "clap_complete" version = "4.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "dd79504325bf38b10165b02e89b4347300f855f273c4cb30c4a3209e6583275e" dependencies = [ "clap", ] [[package]] name = "clap_complete_nushell" version = "4.5.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "80d0e48e026ce7df2040239117d25e4e79714907420c70294a5ce4b6bbe6a7b6" dependencies = [ "clap", "clap_complete", ] [[package]] name = "clap_derive" version = "4.5.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "528131438037fd55894f62d6e9f068b8f45ac57ffa77517819645d10aed04f64" dependencies = [ "heck 0.5.0", "proc-macro2", "quote", "syn", ] [[package]] name = "clap_lex" version = "0.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "98cc8fbded0c607b7ba9dd60cd98df59af97e84d24e49c8557331cfc26d301ce" [[package]] name = "colorchoice" version = "1.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0b6a852b24ab71dffc585bcb46eaf7959d175cb865a7152e35b348d1b2960422" [[package]] name = "comfy-table" version = "7.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b34115915337defe99b2aff5c2ce6771e5fbc4079f4b506301f5cf394c8452f7" dependencies = [ "crossterm", "strum", "strum_macros", "unicode-width", ] [[package]] name = "command-group" version = "5.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a68fa787550392a9d58f44c21a3022cfb3ea3e2458b7f85d3b399d0ceeccf409" dependencies = [ "nix 0.27.1", "winapi", ] [[package]] name = "console" version = "0.15.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0e1f83fc076bd6dd27517eacdf25fef6c4dfe5f1d7448bafaaf3a26f13b5e4eb" dependencies = [ "encode_unicode", "lazy_static", 
"libc", "windows-sys 0.52.0", ] [[package]] name = "core-foundation-sys" version = "0.8.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "06ea2b9bc92be3c2baa9334a323ebca2d6f074ff852cd1d7b11064035cd3868f" [[package]] name = "cpufeatures" version = "0.2.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "53fe5e26ff1b7aef8bca9c6080520cfb8d9333c7568e1829cef191a9723e5504" dependencies = [ "libc", ] [[package]] name = "crossterm" version = "0.27.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f476fe445d41c9e991fd07515a6f463074b782242ccf4a5b7b1d1012e70824df" dependencies = [ "bitflags 2.5.0", "crossterm_winapi", "libc", "parking_lot", "winapi", ] [[package]] name = "crossterm_winapi" version = "0.9.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "acdd7c62a3665c7f6830a51635d9ac9b23ed385797f70a83bb8bafe9c572ab2b" dependencies = [ "winapi", ] [[package]] name = "crypto-common" version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1bfb12502f3fc46cca1bb51ac28df9d618d813cdc3d2f25b9fe775a34af26bb3" dependencies = [ "generic-array", "typenum", ] [[package]] name = "ctrlc" version = "3.4.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "672465ae37dc1bc6380a6547a8883d5dd397b0f1faaad4f265726cc7042a5345" dependencies = [ "nix 0.28.0", "windows-sys 0.52.0", ] [[package]] name = "deranged" version = "0.3.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b42b6fa04a440b495c8b04d0e71b707c585f83cb9cb28cf8cd0d976c315e31b4" dependencies = [ "powerfmt", ] [[package]] name = "diff" version = "0.1.13" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "56254986775e3233ffa9c4d7d3faaf6d36a2c09d30b20687e9f88bc8bafc16c8" [[package]] name = "difflib" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"6184e33543162437515c2e2b48714794e37845ec9851711914eec9d308f6ebe8" [[package]] name = "digest" version = "0.10.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" dependencies = [ "block-buffer", "crypto-common", ] [[package]] name = "dirs" version = "5.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "44c45a9d03d6676652bcb5e724c7e988de1acad23a711b5217ab9cbecbec2225" dependencies = [ "dirs-sys", ] [[package]] name = "dirs-sys" version = "0.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "520f05a5cbd335fae5a99ff7a6ab8627577660ee5cfd6a94a6a929b52ff0321c" dependencies = [ "libc", "option-ext", "redox_users", "windows-sys 0.48.0", ] [[package]] name = "doc-comment" version = "0.3.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fea41bba32d969b513997752735605054bc0dfa92b4c56bf1189f2e174be7a10" [[package]] name = "either" version = "1.12.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3dca9240753cf90908d7e4aac30f630662b02aebaa1b58a3cadabdb23385b58b" [[package]] name = "encode_unicode" version = "0.3.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a357d28ed41a50f9c765dbfe56cbc04a64e53e5fc58ba79fbc34c10ef3df831f" [[package]] name = "env_filter" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a009aa4810eb158359dda09d0c87378e4bbb89b5a801f016885a4707ba24f7ea" dependencies = [ "log", "regex", ] [[package]] name = "env_logger" version = "0.11.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "38b35839ba51819680ba087cd351788c9a3c476841207e0b8cee0b04722343b9" dependencies = [ "anstream", "anstyle", "env_filter", "humantime", "log", ] [[package]] name = "equivalent" version = "1.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum 
= "5443807d6dff69373d433ab9ef5378ad8df50ca6298caf15de6e52e24aaf54d5" [[package]] name = "errno" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "534c5cf6194dfab3db3242765c03bbe257cf92f22b38f6bc0c58d59108a820ba" dependencies = [ "libc", "windows-sys 0.52.0", ] [[package]] name = "fastrand" version = "2.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9fc0510504f03c51ada170672ac806f1f105a88aa97a5281117e1ddc3368e51a" [[package]] name = "fnv" version = "1.0.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1" [[package]] name = "futures" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "645c6916888f6cb6350d2550b80fb63e734897a8498abe35cfb732b6487804b0" dependencies = [ "futures-channel", "futures-core", "futures-executor", "futures-io", "futures-sink", "futures-task", "futures-util", ] [[package]] name = "futures-channel" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "eac8f7d7865dcb88bd4373ab671c8cf4508703796caa2b1985a9ca867b3fcb78" dependencies = [ "futures-core", "futures-sink", ] [[package]] name = "futures-core" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "dfc6580bb841c5a68e9ef15c77ccc837b40a7504914d52e47b8b0e9bbda25a1d" [[package]] name = "futures-executor" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a576fc72ae164fca6b9db127eaa9a9dda0d61316034f33a0a0d4eda41f02b01d" dependencies = [ "futures-core", "futures-task", "futures-util", ] [[package]] name = "futures-io" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a44623e20b9681a318efdd71c299b6b222ed6f231972bfe2f224ebad6311f0c1" [[package]] name = "futures-macro" version = "0.3.30" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "87750cf4b7a4c0625b1529e4c543c2182106e4dedc60a2a6455e00d212c489ac" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "futures-sink" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9fb8e00e87438d937621c1c6269e53f536c14d3fbd6a042bb24879e57d474fb5" [[package]] name = "futures-task" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "38d84fa142264698cdce1a9f9172cf383a0c82de1bddcf3092901442c4097004" [[package]] name = "futures-timer" version = "3.0.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f288b0a4f20f9a56b5d1da57e2227c661b7b16168e2f72365f57b63326e29b24" [[package]] name = "futures-util" version = "0.3.30" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3d6401deb83407ab3da39eba7e33987a73c3df0c82b4bb5813ee871c19c41d48" dependencies = [ "futures-channel", "futures-core", "futures-io", "futures-macro", "futures-sink", "futures-task", "memchr", "pin-project-lite", "pin-utils", "slab", ] [[package]] name = "generic-array" version = "0.14.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a" dependencies = [ "typenum", "version_check", ] [[package]] name = "getrandom" version = "0.2.15" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c4567c8db10ae91089c99af84c68c38da3ec2f087c3f82960bcdbf3656b6f4d7" dependencies = [ "cfg-if", "libc", "wasi", ] [[package]] name = "gimli" version = "0.29.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "40ecd4077b5ae9fd2e9e169b102c6c330d0605168eb0e8bf79952b256dbefffd" [[package]] name = "glob" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b" 
[[package]] name = "half" version = "1.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1b43ede17f21864e81be2fa654110bf1e793774238d86ef8555c37e6519c0403" [[package]] name = "handlebars" version = "5.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d08485b96a0e6393e9e4d1b8d48cf74ad6c063cd905eb33f42c1ce3f0377539b" dependencies = [ "log", "pest", "pest_derive", "serde", "serde_json", "thiserror", ] [[package]] name = "hashbrown" version = "0.14.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e5274423e17b7c9fc20b6e7e208532f9b19825d82dfd615708b70edd83df41f1" [[package]] name = "heck" version = "0.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8" [[package]] name = "heck" version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea" [[package]] name = "hermit-abi" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d231dfb89cfffdbc30e7fc41579ed6066ad03abda9e567ccafae602b97ec5024" [[package]] name = "hex" version = "0.4.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70" [[package]] name = "humantime" version = "2.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9a3a5bfb195931eeb336b2a7b4d761daec841b97f947d34394601737a7bba5e4" [[package]] name = "iana-time-zone" version = "0.1.60" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e7ffbb5a1b541ea2561f8c41c087286cc091e21e556a4f09a8f6cbf17b69b141" dependencies = [ "android_system_properties", "core-foundation-sys", "iana-time-zone-haiku", "js-sys", "wasm-bindgen", "windows-core", ] [[package]] name = "iana-time-zone-haiku" version = "0.1.2" 
source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f31827a206f56af32e590ba56d5d2d085f558508192593743f16b2306495269f" dependencies = [ "cc", ] [[package]] name = "indexmap" version = "2.2.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "168fb715dda47215e360912c096649d23d58bf392ac62f73919e831745e40f26" dependencies = [ "equivalent", "hashbrown", ] [[package]] name = "interim" version = "0.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9afd0f0bff60c0e845844b6ee665e07990541ef3b70d8cd21861cf85b69fbef4" dependencies = [ "chrono", "logos", ] [[package]] name = "is_terminal_polyfill" version = "1.70.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f8478577c03552c21db0e2724ffb8986a5ce7af88107e6be5d2ee6e158c12800" [[package]] name = "itertools" version = "0.12.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ba291022dbbd398a455acf126c1e341954079855bc60dfdda641363bd6922569" dependencies = [ "either", ] [[package]] name = "itoa" version = "1.0.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "49f1f14873335454500d59611f1cf4a4b0f786f9ac11f4312a78e4cf2566695b" [[package]] name = "js-sys" version = "0.3.69" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "29c15563dc2726973df627357ce0c9ddddbea194836909d655df6a75d2cf296d" dependencies = [ "wasm-bindgen", ] [[package]] name = "lazy_static" version = "1.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646" [[package]] name = "lazycell" version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "830d08ce1d1d941e6b30645f1a0eb5643013d835ce3779a5fc208261dbe10f55" [[package]] name = "libc" version = "0.2.155" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"97b3888a4aecf77e811145cadf6eef5901f4782c53886191b2f693f24761847c" [[package]] name = "libloading" version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c2a198fb6b0eada2a8df47933734e6d35d350665a33a3593d7164fa52c75c19" dependencies = [ "cfg-if", "windows-targets 0.48.5", ] [[package]] name = "libproc" version = "0.14.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ae9ea4b75e1a81675429dafe43441df1caea70081e82246a8cccf514884a88bb" dependencies = [ "bindgen", "errno", "libc", ] [[package]] name = "libredox" version = "0.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c0ff37bd590ca25063e35af745c343cb7a0271906fb7b37e4813e8f79f00268d" dependencies = [ "bitflags 2.5.0", "libc", ] [[package]] name = "linux-raw-sys" version = "0.4.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "78b3ae25bc7c8c38cec158d1f2757ee79e9b3740fbc7ccf0e59e4b08d793fa89" [[package]] name = "lock_api" version = "0.4.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "07af8b9cdd281b7915f413fa73f29ebd5d55d0d3f0155584dade1ff18cea1b17" dependencies = [ "autocfg", "scopeguard", ] [[package]] name = "log" version = "0.4.21" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "90ed8c1e510134f979dbc4f070f87d4313098b704861a105fe34231c70a3901c" [[package]] name = "logos" version = "0.14.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "161971eb88a0da7ae0c333e1063467c5b5727e7fb6b710b8db4814eade3a42e8" dependencies = [ "logos-derive", ] [[package]] name = "logos-codegen" version = "0.14.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e31badd9de5131fdf4921f6473d457e3dd85b11b7f091ceb50e4df7c3eeb12a" dependencies = [ "beef", "fnv", "lazy_static", "proc-macro2", "quote", "regex-syntax 0.8.3", "syn", ] [[package]] name = "logos-derive" version = "0.14.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "1c2a69b3eb68d5bd595107c9ee58d7e07fe2bb5e360cc85b0f084dedac80de0a" dependencies = [ "logos-codegen", ] [[package]] name = "matchers" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8263075bb86c5a1b1427b5ae862e8889656f126e9f77c484496e8b47cf5c5558" dependencies = [ "regex-automata 0.1.10", ] [[package]] name = "memchr" version = "2.7.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6c8640c5d730cb13ebd907d8d04b52f55ac9a2eec55b440c8892f40d56c76c1d" [[package]] name = "minimal-lexical" version = "0.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a" [[package]] name = "miniz_oxide" version = "0.7.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "87dfd01fe195c66b572b37921ad8803d010623c0aca821bea2302239d155cdae" dependencies = [ "adler", ] [[package]] name = "mio" version = "0.8.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a4a650543ca06a924e8b371db273b2756685faae30f8487da1b56505a8f78b0c" dependencies = [ "libc", "wasi", "windows-sys 0.48.0", ] [[package]] name = "nix" version = "0.27.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2eb04e9c688eff1c89d72b407f168cf79bb9e867a9d3323ed6c01519eb9cc053" dependencies = [ "bitflags 2.5.0", "cfg-if", "libc", ] [[package]] name = "nix" version = "0.28.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ab2156c4fce2f8df6c499cc1c763e4394b7482525bf2a9701c9d79d215f519e4" dependencies = [ "bitflags 2.5.0", "cfg-if", "cfg_aliases", "libc", ] [[package]] name = "nom" version = "7.1.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d273983c5a657a70a3e8f2a01329822f3b8c8172b73826411a55751e404a0a4a" dependencies = [ "memchr", "minimal-lexical", ] 
[[package]] name = "nu-ansi-term" version = "0.46.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "77a8165726e8236064dbb45459242600304b42a5ea24ee2948e18e023bf7ba84" dependencies = [ "overload", "winapi", ] [[package]] name = "num-conv" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "51d515d32fb182ee37cda2ccdcb92950d6a3c2893aa280e540671c2cd0f3b1d9" [[package]] name = "num-traits" version = "0.2.19" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841" dependencies = [ "autocfg", ] [[package]] name = "num_cpus" version = "1.16.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4161fcb6d602d4d2081af7c3a45852d875a03dd337a6bfdd6e06407b61342a43" dependencies = [ "hermit-abi", "libc", ] [[package]] name = "num_threads" version = "0.1.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5c7398b9c8b70908f6371f47ed36737907c87c52af34c268fed0bf0ceb92ead9" dependencies = [ "libc", ] [[package]] name = "object" version = "0.35.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b8ec7ab813848ba4522158d5517a6093db1ded27575b070f4177b8d12b41db5e" dependencies = [ "memchr", ] [[package]] name = "once_cell" version = "1.19.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3fdb12b2476b595f9358c5161aa467c2438859caa136dec86c26fdd2efe17b92" [[package]] name = "option-ext" version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "04744f49eae99ab78e0d5c0b603ab218f515ea8cfe5a456d7629ad883a3b6e7d" [[package]] name = "overload" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b15813163c1d831bf4a13c3610c05c0d03b39feb07f7e09fa234dac9b15aaf39" [[package]] name = "parking_lot" version = "0.12.3" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "f1bf18183cf54e8d6059647fc3063646a1801cf30896933ec2311622cc4b9a27" dependencies = [ "lock_api", "parking_lot_core", ] [[package]] name = "parking_lot_core" version = "0.9.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1e401f977ab385c9e4e3ab30627d6f26d00e2c73eef317493c4ec6d468726cf8" dependencies = [ "cfg-if", "libc", "redox_syscall 0.5.1", "smallvec", "windows-targets 0.52.5", ] [[package]] name = "pem" version = "3.0.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8e459365e590736a54c3fa561947c84837534b8e9af6fc5bf781307e82658fae" dependencies = [ "base64", "serde", ] [[package]] name = "pest" version = "2.7.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "560131c633294438da9f7c4b08189194b20946c8274c6b9e38881a7874dc8ee8" dependencies = [ "memchr", "thiserror", "ucd-trie", ] [[package]] name = "pest_derive" version = "2.7.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "26293c9193fbca7b1a3bf9b79dc1e388e927e6cacaa78b4a3ab705a1d3d41459" dependencies = [ "pest", "pest_generator", ] [[package]] name = "pest_generator" version = "2.7.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3ec22af7d3fb470a85dd2ca96b7c577a1eb4ef6f1683a9fe9a8c16e136c04687" dependencies = [ "pest", "pest_meta", "proc-macro2", "quote", "syn", ] [[package]] name = "pest_meta" version = "2.7.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d7a240022f37c361ec1878d646fc5b7d7c4d28d5946e1a80ad5a7a4f4ca0bdcd" dependencies = [ "once_cell", "pest", "sha2", ] [[package]] name = "pin-project-lite" version = "0.2.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bda66fc9667c18cb2758a2ac84d1167245054bcf85d5d1aaa6923f45801bdd02" [[package]] name = "pin-utils" version = "0.1.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184" [[package]] name = "portpicker" version = "0.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "be97d76faf1bfab666e1375477b23fde79eccf0276e9b63b92a39d676a889ba9" dependencies = [ "rand", ] [[package]] name = "powerfmt" version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "439ee305def115ba05938db6eb1644ff94165c5ab5e9420d1c1bcedbba909391" [[package]] name = "ppv-lite86" version = "0.2.17" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5b40af805b3121feab8a3c29f04d8ad262fa8e0561883e7653e024ae4479e6de" [[package]] name = "predicates" version = "3.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "68b87bfd4605926cdfefc1c3b5f8fe560e3feca9d5552cf68c466d3d8236c7e8" dependencies = [ "anstyle", "difflib", "predicates-core", ] [[package]] name = "predicates-core" version = "1.0.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b794032607612e7abeb4db69adb4e33590fa6cf1149e95fd7cb00e634b92f174" [[package]] name = "predicates-tree" version = "1.0.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "368ba315fb8c5052ab692e68a0eefec6ec57b23a36959c14496f0b0df2c0cecf" dependencies = [ "predicates-core", "termtree", ] [[package]] name = "pretty_assertions" version = "1.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "af7cee1a6c8a5b9208b3cb1061f10c0cb689087b3d8ce85fb9d2dd7a29b6ba66" dependencies = [ "diff", "yansi", ] [[package]] name = "proc-macro2" version = "1.0.84" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ec96c6a92621310b51366f1e28d05ef11489516e93be030060e5fc12024a49d6" dependencies = [ "unicode-ident", ] [[package]] name = "procfs" version = "0.16.0" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "731e0d9356b0c25f16f33b5be79b1c57b562f141ebfcdb0ad8ac2c13a24293b4" dependencies = [ "bitflags 2.5.0", "hex", "lazy_static", "procfs-core", "rustix", ] [[package]] name = "procfs-core" version = "0.16.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2d3554923a69f4ce04c4a754260c338f505ce22642d3830e049a399fc2059a29" dependencies = [ "bitflags 2.5.0", "hex", ] [[package]] name = "pueue" version = "3.4.1" dependencies = [ "anyhow", "assert_cmd", "better-panic", "chrono", "clap", "clap_complete", "clap_complete_nushell", "comfy-table", "command-group", "crossterm", "ctrlc", "env_logger", "handlebars", "interim", "log", "pest", "pest_derive", "pretty_assertions", "procfs", "pueue-lib", "rstest", "serde", "serde_derive", "serde_json", "serde_yaml", "shell-escape", "similar-asserts", "simplelog", "snap", "strum", "strum_macros", "tempfile", "test-log", "tokio", "whoami", ] [[package]] name = "pueue-lib" version = "0.26.1" dependencies = [ "anyhow", "async-trait", "better-panic", "byteorder", "chrono", "command-group", "dirs", "handlebars", "libproc", "log", "portpicker", "pretty_assertions", "procfs", "rand", "rcgen", "rev_buf_reader", "rustls", "rustls-pemfile", "serde", "serde_cbor", "serde_derive", "serde_json", "serde_yaml", "shellexpand", "snap", "strum", "strum_macros", "tempfile", "thiserror", "tokio", "tokio-rustls", "whoami", "winapi", ] [[package]] name = "quote" version = "1.0.36" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0fa76aaf39101c457836aec0ce2316dbdc3ab723cdda1c6bd4e6ad4208acaca7" dependencies = [ "proc-macro2", ] [[package]] name = "rand" version = "0.8.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404" dependencies = [ "libc", "rand_chacha", "rand_core", ] [[package]] name = "rand_chacha" version = "0.3.1" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88" dependencies = [ "ppv-lite86", "rand_core", ] [[package]] name = "rand_core" version = "0.6.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c" dependencies = [ "getrandom", ] [[package]] name = "rcgen" version = "0.13.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "54077e1872c46788540de1ea3d7f4ccb1983d12f9aa909b234468676c1a36779" dependencies = [ "pem", "ring", "rustls-pki-types", "time", "yasna", ] [[package]] name = "redox_syscall" version = "0.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4722d768eff46b75989dd134e5c353f0d6296e5aaa3132e776cbdb56be7731aa" dependencies = [ "bitflags 1.3.2", ] [[package]] name = "redox_syscall" version = "0.5.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "469052894dcb553421e483e4209ee581a45100d31b4018de03e5a7ad86374a7e" dependencies = [ "bitflags 2.5.0", ] [[package]] name = "redox_users" version = "0.4.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bd283d9651eeda4b2a83a43c1c91b266c40fd76ecd39a50a8c630ae69dc72891" dependencies = [ "getrandom", "libredox", "thiserror", ] [[package]] name = "regex" version = "1.10.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c117dbdfde9c8308975b6a18d71f3f385c89461f7b3fb054288ecf2a2058ba4c" dependencies = [ "aho-corasick", "memchr", "regex-automata 0.4.6", "regex-syntax 0.8.3", ] [[package]] name = "regex-automata" version = "0.1.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6c230d73fb8d8c1b9c0b3135c5142a8acee3a0558fb8db5cf1cb65f8d7862132" dependencies = [ "regex-syntax 0.6.29", ] [[package]] name = "regex-automata" version = "0.4.6" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "86b83b8b9847f9bf95ef68afb0b8e6cdb80f498442f5179a29fad448fcc1eaea" dependencies = [ "aho-corasick", "memchr", "regex-syntax 0.8.3", ] [[package]] name = "regex-syntax" version = "0.6.29" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f162c6dd7b008981e4d40210aca20b4bd0f9b60ca9271061b07f78537722f2e1" [[package]] name = "regex-syntax" version = "0.8.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "adad44e29e4c806119491a7f06f03de4d1af22c3a680dd47f1e6e179439d1f56" [[package]] name = "relative-path" version = "1.9.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ba39f3699c378cd8970968dcbff9c43159ea4cfbd88d43c00b22f2ef10a435d2" [[package]] name = "rev_buf_reader" version = "0.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8c0f2e47e00e29920959826e2e1784728a3780d1a784247be5257258cc75f910" dependencies = [ "memchr", ] [[package]] name = "ring" version = "0.17.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c17fa4cb658e3583423e915b9f3acc01cceaee1860e33d59ebae66adc3a2dc0d" dependencies = [ "cc", "cfg-if", "getrandom", "libc", "spin", "untrusted", "windows-sys 0.52.0", ] [[package]] name = "rstest" version = "0.19.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9d5316d2a1479eeef1ea21e7f9ddc67c191d497abc8fc3ba2467857abbb68330" dependencies = [ "futures", "futures-timer", "rstest_macros", "rustc_version", ] [[package]] name = "rstest_macros" version = "0.19.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "04a9df72cc1f67020b0d63ad9bfe4a323e459ea7eb68e03bd9824db49f9a4c25" dependencies = [ "cfg-if", "glob", "proc-macro2", "quote", "regex", "relative-path", "rustc_version", "syn", "unicode-ident", ] [[package]] name = "rustc-demangle" version = "0.1.24" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "719b953e2095829ee67db738b3bfa9fa368c94900df327b3f07fe6e794d2fe1f" [[package]] name = "rustc-hash" version = "1.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "08d43f7aa6b08d49f382cde6a7982047c3426db949b1424bc4b7ec9ae12c6ce2" [[package]] name = "rustc_version" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bfa0f585226d2e68097d4f95d113b15b83a82e819ab25717ec0590d9584ef366" dependencies = [ "semver", ] [[package]] name = "rustix" version = "0.38.34" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "70dc5ec042f7a43c4a73241207cecc9873a06d45debb38b329f8541d85c2730f" dependencies = [ "bitflags 2.5.0", "errno", "libc", "linux-raw-sys", "windows-sys 0.52.0", ] [[package]] name = "rustls" version = "0.23.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "79adb16721f56eb2d843e67676896a61ce7a0fa622dc18d3e372477a029d2740" dependencies = [ "log", "once_cell", "ring", "rustls-pki-types", "rustls-webpki", "subtle", "zeroize", ] [[package]] name = "rustls-pemfile" version = "2.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "29993a25686778eb88d4189742cd713c9bce943bc54251a33509dc63cbacf73d" dependencies = [ "base64", "rustls-pki-types", ] [[package]] name = "rustls-pki-types" version = "1.7.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "976295e77ce332211c0d24d92c0e83e50f5c5f046d11082cea19f3df13a3562d" [[package]] name = "rustls-webpki" version = "0.102.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ff448f7e92e913c4b7d4c6d8e4540a1724b319b4152b8aef6d4cf8339712b33e" dependencies = [ "ring", "rustls-pki-types", "untrusted", ] [[package]] name = "rustversion" version = "1.0.17" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"955d28af4278de8121b7ebeb796b6a45735dc01436d898801014aced2773a3d6" [[package]] name = "ryu" version = "1.0.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f" [[package]] name = "scopeguard" version = "1.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49" [[package]] name = "semver" version = "1.0.23" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "61697e0a1c7e512e84a621326239844a24d8207b4669b41bc18b32ea5cbf988b" [[package]] name = "serde" version = "1.0.203" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7253ab4de971e72fb7be983802300c30b5a7f0c2e56fab8abfc6a214307c0094" dependencies = [ "serde_derive", ] [[package]] name = "serde_cbor" version = "0.11.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2bef2ebfde456fb76bbcf9f59315333decc4fda0b2b44b420243c11e0f5ec1f5" dependencies = [ "half", "serde", ] [[package]] name = "serde_derive" version = "1.0.203" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "500cbc0ebeb6f46627f50f3f5811ccf6bf00643be300b4c3eabc0ef55dc5b5ba" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "serde_json" version = "1.0.117" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "455182ea6142b14f93f4bc5320a2b31c1f266b66a4a5c858b013302a5d8cbfc3" dependencies = [ "itoa", "ryu", "serde", ] [[package]] name = "serde_yaml" version = "0.9.34+deprecated" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6a8b1a1a2ebf674015cc02edccce75287f1a0130d394307b36743c2f5d504b47" dependencies = [ "indexmap", "itoa", "ryu", "serde", "unsafe-libyaml", ] [[package]] name = "sha2" version = "0.10.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"793db75ad2bcafc3ffa7c68b215fee268f537982cd901d132f89c6343f3a3dc8" dependencies = [ "cfg-if", "cpufeatures", "digest", ] [[package]] name = "sharded-slab" version = "0.1.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f40ca3c46823713e0d4209592e8d6e826aa57e928f09752619fc696c499637f6" dependencies = [ "lazy_static", ] [[package]] name = "shell-escape" version = "0.1.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "45bb67a18fa91266cc7807181f62f9178a6873bfad7dc788c42e6430db40184f" [[package]] name = "shellexpand" version = "3.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "da03fa3b94cc19e3ebfc88c4229c49d8f08cdbd1228870a45f0ffdf84988e14b" dependencies = [ "dirs", ] [[package]] name = "shlex" version = "1.3.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64" [[package]] name = "similar" version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fa42c91313f1d05da9b26f267f931cf178d4aba455b4c4622dd7355eb80c6640" dependencies = [ "bstr 0.2.17", "unicode-segmentation", ] [[package]] name = "similar-asserts" version = "1.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e041bb827d1bfca18f213411d51b665309f1afb37a04a5d1464530e13779fc0f" dependencies = [ "console", "similar", ] [[package]] name = "simplelog" version = "0.12.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "16257adbfaef1ee58b1363bdc0664c9b8e1e30aed86049635fb5f147d065a9c0" dependencies = [ "log", "termcolor", "time", ] [[package]] name = "slab" version = "0.4.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8f92a496fb766b417c996b9c5e57daf2f7ad3b0bebe1ccfca4856390e3d3bb67" dependencies = [ "autocfg", ] [[package]] name = "smallvec" version = "1.13.2" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "3c5e1a9a646d36c3599cd173a41282daf47c44583ad367b8e6837255952e5c67" [[package]] name = "snap" version = "1.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1b6b67fb9a61334225b5b790716f609cd58395f895b3fe8b328786812a40bc3b" [[package]] name = "socket2" version = "0.5.7" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ce305eb0b4296696835b71df73eb912e0f1ffd2556a501fcede6e0c50349191c" dependencies = [ "libc", "windows-sys 0.52.0", ] [[package]] name = "spin" version = "0.9.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6980e8d7511241f8acf4aebddbb1ff938df5eebe98691418c4468d0b72a96a67" [[package]] name = "strsim" version = "0.11.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f" [[package]] name = "strum" version = "0.26.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5d8cec3501a5194c432b2b7976db6b7d10ec95c253208b45f83f7136aa985e29" [[package]] name = "strum_macros" version = "0.26.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c6cf59daf282c0a494ba14fd21610a0325f9f90ec9d1231dea26bcb1d696c946" dependencies = [ "heck 0.4.1", "proc-macro2", "quote", "rustversion", "syn", ] [[package]] name = "subtle" version = "2.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc" [[package]] name = "syn" version = "2.0.66" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c42f3f41a2de00b01c0aaad383c5a45241efc8b2d1eda5661812fda5f3cdcff5" dependencies = [ "proc-macro2", "quote", "unicode-ident", ] [[package]] name = "tempfile" version = "3.10.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"85b77fafb263dd9d05cbeac119526425676db3784113aa9295c88498cbf8bff1" dependencies = [ "cfg-if", "fastrand", "rustix", "windows-sys 0.52.0", ] [[package]] name = "termcolor" version = "1.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "06794f8f6c5c898b3275aebefa6b8a1cb24cd2c6c79397ab15774837a0bc5755" dependencies = [ "winapi-util", ] [[package]] name = "termtree" version = "0.4.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3369f5ac52d5eb6ab48c6b4ffdc8efbcad6b89c765749064ba298f2c68a16a76" [[package]] name = "test-log" version = "0.2.16" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3dffced63c2b5c7be278154d76b479f9f9920ed34e7574201407f0b14e2bbb93" dependencies = [ "env_logger", "test-log-macros", "tracing-subscriber", ] [[package]] name = "test-log-macros" version = "0.2.16" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5999e24eaa32083191ba4e425deb75cdf25efefabe5aaccb7446dd0d4122a3f5" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "thiserror" version = "1.0.61" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c546c80d6be4bc6a00c0f01730c08df82eaa7a7a61f11d656526506112cc1709" dependencies = [ "thiserror-impl", ] [[package]] name = "thiserror-impl" version = "1.0.61" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "46c3384250002a6d5af4d114f2845d37b57521033f30d5c3f46c4d70e1197533" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "thread_local" version = "1.1.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8b9ef9bad013ada3808854ceac7b46812a6465ba368859a37e2100283d2d719c" dependencies = [ "cfg-if", "once_cell", ] [[package]] name = "time" version = "0.3.36" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5dfd88e563464686c916c7e46e623e520ddc6d79fa6641390f2e3fa86e83e885" dependencies = [ 
"deranged", "itoa", "libc", "num-conv", "num_threads", "powerfmt", "serde", "time-core", "time-macros", ] [[package]] name = "time-core" version = "0.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ef927ca75afb808a4d64dd374f00a2adf8d0fcff8e7b184af886c3c87ec4a3f3" [[package]] name = "time-macros" version = "0.2.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3f252a68540fde3a3877aeea552b832b40ab9a69e318efd078774a01ddee1ccf" dependencies = [ "num-conv", "time-core", ] [[package]] name = "tokio" version = "1.37.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1adbebffeca75fcfd058afa480fb6c0b81e165a0323f9c9d39c9697e37c46787" dependencies = [ "backtrace", "bytes", "libc", "mio", "num_cpus", "pin-project-lite", "socket2", "tokio-macros", "windows-sys 0.48.0", ] [[package]] name = "tokio-macros" version = "2.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5b8a1e28f2deaa14e508979454cb3a223b10b938b45af148bc0986de36f1923b" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "tokio-rustls" version = "0.26.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0c7bc40d0e5a97695bb96e27995cd3a08538541b0a846f65bba7a359f36700d4" dependencies = [ "rustls", "rustls-pki-types", "tokio", ] [[package]] name = "tracing" version = "0.1.40" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c3523ab5a71916ccf420eebdf5521fcef02141234bbc0b8a49f2fdc4544364ef" dependencies = [ "pin-project-lite", "tracing-core", ] [[package]] name = "tracing-core" version = "0.1.32" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c06d3da6113f116aaee68e4d601191614c9053067f9ab7f6edbcb161237daa54" dependencies = [ "once_cell", "valuable", ] [[package]] name = "tracing-log" version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"ee855f1f400bd0e5c02d150ae5de3840039a3f54b025156404e34c23c03f47c3" dependencies = [ "log", "once_cell", "tracing-core", ] [[package]] name = "tracing-subscriber" version = "0.3.18" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ad0f048c97dbd9faa9b7df56362b8ebcaa52adb06b498c050d2f4e32f90a7a8b" dependencies = [ "matchers", "nu-ansi-term", "once_cell", "regex", "sharded-slab", "thread_local", "tracing", "tracing-core", "tracing-log", ] [[package]] name = "typenum" version = "1.17.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825" [[package]] name = "ucd-trie" version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ed646292ffc8188ef8ea4d1e0e0150fb15a5c2e12ad9b8fc191ae7a8a7f3c4b9" [[package]] name = "unicode-ident" version = "1.0.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "3354b9ac3fae1ff6755cb6db53683adb661634f67557942dea4facebec0fee4b" [[package]] name = "unicode-segmentation" version = "1.11.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d4c87d22b6e3f4a18d4d40ef354e97c90fcb14dd91d7dc0aa9d8a1172ebf7202" [[package]] name = "unicode-width" version = "0.1.12" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "68f5e5f3158ecfd4b8ff6fe086db7c8467a2dfdac97fe420f2b7c4aa97af66d6" [[package]] name = "unsafe-libyaml" version = "0.2.11" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "673aac59facbab8a9007c7f6108d11f63b603f7cabff99fabf650fea5c32b861" [[package]] name = "untrusted" version = "0.9.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1" [[package]] name = "utf8parse" version = "0.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"711b9620af191e0cdc7468a8d14e709c3dcdb115b36f838e601583af800a370a" [[package]] name = "valuable" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "830b7e5d4d90034032940e4ace0d9a9a057e7a45cd94e6c007832e39edb82f6d" [[package]] name = "version_check" version = "0.9.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f" [[package]] name = "wait-timeout" version = "0.2.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9f200f5b12eb75f8c1ed65abd4b2db8a6e1b138a20de009dacee265a2498f3f6" dependencies = [ "libc", ] [[package]] name = "wasi" version = "0.11.0+wasi-snapshot-preview1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423" [[package]] name = "wasite" version = "0.1.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "b8dad83b4f25e74f184f64c43b150b91efe7647395b42289f38e50566d82855b" [[package]] name = "wasm-bindgen" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4be2531df63900aeb2bca0daaaddec08491ee64ceecbee5076636a3b026795a8" dependencies = [ "cfg-if", "wasm-bindgen-macro", ] [[package]] name = "wasm-bindgen-backend" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "614d787b966d3989fa7bb98a654e369c762374fd3213d212cfc0251257e747da" dependencies = [ "bumpalo", "log", "once_cell", "proc-macro2", "quote", "syn", "wasm-bindgen-shared", ] [[package]] name = "wasm-bindgen-macro" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a1f8823de937b71b9460c0c34e25f3da88250760bec0ebac694b49997550d726" dependencies = [ "quote", "wasm-bindgen-macro-support", ] [[package]] name = "wasm-bindgen-macro-support" version = "0.2.92" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "e94f17b526d0a461a191c78ea52bbce64071ed5c04c9ffe424dcb38f74171bb7" dependencies = [ "proc-macro2", "quote", "syn", "wasm-bindgen-backend", "wasm-bindgen-shared", ] [[package]] name = "wasm-bindgen-shared" version = "0.2.92" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "af190c94f2773fdb3729c55b007a722abb5384da03bc0986df4c289bf5567e96" [[package]] name = "web-sys" version = "0.3.69" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "77afa9a11836342370f4817622a2f0f418b134426d91a82dfb48f532d2ec13ef" dependencies = [ "js-sys", "wasm-bindgen", ] [[package]] name = "whoami" version = "1.5.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a44ab49fad634e88f55bf8f9bb3abd2f27d7204172a112c7c9987e01c1c94ea9" dependencies = [ "redox_syscall 0.4.1", "wasite", "web-sys", ] [[package]] name = "winapi" version = "0.3.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419" dependencies = [ "winapi-i686-pc-windows-gnu", "winapi-x86_64-pc-windows-gnu", ] [[package]] name = "winapi-i686-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6" [[package]] name = "winapi-util" version = "0.1.8" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4d4cc384e1e73b93bafa6fb4f1df8c41695c8a91cf9c4c64358067d15a7b6c6b" dependencies = [ "windows-sys 0.52.0", ] [[package]] name = "winapi-x86_64-pc-windows-gnu" version = "0.4.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" [[package]] name = "windows-core" version = "0.52.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"33ab640c8d7e35bf8ba19b884ba838ceb4fba93a4e8c65a9059d08afcfc683d9" dependencies = [ "windows-targets 0.52.5", ] [[package]] name = "windows-sys" version = "0.48.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "677d2418bec65e3338edb076e806bc1ec15693c5d0104683f2efe857f61056a9" dependencies = [ "windows-targets 0.48.5", ] [[package]] name = "windows-sys" version = "0.52.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "282be5f36a8ce781fad8c8ae18fa3f9beff57ec1b52cb3de0789201425d9a33d" dependencies = [ "windows-targets 0.52.5", ] [[package]] name = "windows-targets" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9a2fa6e2155d7247be68c096456083145c183cbbbc2764150dda45a87197940c" dependencies = [ "windows_aarch64_gnullvm 0.48.5", "windows_aarch64_msvc 0.48.5", "windows_i686_gnu 0.48.5", "windows_i686_msvc 0.48.5", "windows_x86_64_gnu 0.48.5", "windows_x86_64_gnullvm 0.48.5", "windows_x86_64_msvc 0.48.5", ] [[package]] name = "windows-targets" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6f0713a46559409d202e70e28227288446bf7841d3211583a4b53e3f6d96e7eb" dependencies = [ "windows_aarch64_gnullvm 0.52.5", "windows_aarch64_msvc 0.52.5", "windows_i686_gnu 0.52.5", "windows_i686_gnullvm", "windows_i686_msvc 0.52.5", "windows_x86_64_gnu 0.52.5", "windows_x86_64_gnullvm 0.52.5", "windows_x86_64_msvc 0.52.5", ] [[package]] name = "windows_aarch64_gnullvm" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "2b38e32f0abccf9987a4e3079dfb67dcd799fb61361e53e2882c3cbaf0d905d8" [[package]] name = "windows_aarch64_gnullvm" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7088eed71e8b8dda258ecc8bac5fb1153c5cffaf2578fc8ff5d61e23578d3263" [[package]] name = "windows_aarch64_msvc" version = "0.48.5" source = 
"registry+https://github.com/rust-lang/crates.io-index" checksum = "dc35310971f3b2dbbf3f0690a219f40e2d9afcf64f9ab7cc1be722937c26b4bc" [[package]] name = "windows_aarch64_msvc" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "9985fd1504e250c615ca5f281c3f7a6da76213ebd5ccc9561496568a2752afb6" [[package]] name = "windows_i686_gnu" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "a75915e7def60c94dcef72200b9a8e58e5091744960da64ec734a6c6e9b3743e" [[package]] name = "windows_i686_gnu" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "88ba073cf16d5372720ec942a8ccbf61626074c6d4dd2e745299726ce8b89670" [[package]] name = "windows_i686_gnullvm" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "87f4261229030a858f36b459e748ae97545d6f1ec60e5e0d6a3d32e0dc232ee9" [[package]] name = "windows_i686_msvc" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8f55c233f70c4b27f66c523580f78f1004e8b5a8b659e05a4eb49d4166cca406" [[package]] name = "windows_i686_msvc" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "db3c2bf3d13d5b658be73463284eaf12830ac9a26a90c717b7f771dfe97487bf" [[package]] name = "windows_x86_64_gnu" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "53d40abd2583d23e4718fddf1ebec84dbff8381c07cae67ff7768bbf19c6718e" [[package]] name = "windows_x86_64_gnu" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4e4246f76bdeff09eb48875a0fd3e2af6aada79d409d33011886d3e1581517d9" [[package]] name = "windows_x86_64_gnullvm" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0b7b52767868a23d5bab768e390dc5f5c55825b6d30b86c844ff2dc7414044cc" [[package]] name = "windows_x86_64_gnullvm" 
version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "852298e482cd67c356ddd9570386e2862b5673c85bd5f88df9ab6802b334c596" [[package]] name = "windows_x86_64_msvc" version = "0.48.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ed94fce61571a4006852b7389a063ab983c02eb1bb37b47f8272ce92d06d9538" [[package]] name = "windows_x86_64_msvc" version = "0.52.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "bec47e5bfd1bff0eeaf6d8b485cc1074891a197ab4225d504cb7a1ab88b02bf0" [[package]] name = "yansi" version = "0.5.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "09041cd90cf85f7f8b2df60c646f853b7f535ce68f85244eb6731cf89fa498ec" [[package]] name = "yasna" version = "0.5.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e17bb3549cc1321ae1296b9cdc2698e2b6cb1992adfa19a8c72e5b7a738f44cd" dependencies = [ "time", ] [[package]] name = "zeroize" version = "1.8.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "ced3678a2879b30306d323f4542626697a464a97c0a07c9aebf7ebca65cd4dde" 07070100000011000081A4000000000000000000000001665F1B69000003D6000000000000000000000000000000000000001700000000pueue-3.4.1/Cargo.toml# The project is a top-level crate *as well* as a workspace. # The `pueue_lib` crate lives in the `lib` folder. # The following is the shared configuration for both pueue and its lib [workspace] members = ["pueue", "pueue_lib"] resolver = "2" [workspace.package] authors = ["Arne Beer <contact@arne.beer>"] homepage = "https://github.com/nukesor/pueue" repository = "https://github.com/nukesor/pueue" license = "MIT" edition = "2021" rust-version = "1.67" [workspace.dependencies] # Chrono version is hard pinned to a specific version. 
# See https://github.com/Nukesor/pueue/issues/534 chrono = { version = "0.4", features = ["serde"] } command-group = "5" log = "0.4" serde = "1.0" serde_json = "1.0" serde_yaml = "0.9" serde_derive = "1.0" snap = "1.1" strum = "0.26" strum_macros = "0.26" tokio = { version = "1.36", features = ["rt-multi-thread", "time", "io-std"] } handlebars = "5.1" anyhow = "1" better-panic = "0.3" pretty_assertions = "1" [profile.release] lto = "thin" 07070100000012000081A4000000000000000000000001665F1B690000033C000000000000000000000000000000000000001700000000pueue-3.4.1/Cross.toml[target.x86_64-unknown-linux-musl] image = "ghcr.io/cross-rs/x86_64-unknown-linux-musl:main" pre-build = [ """ dpkg --add-architecture amd64 && \ apt-get update && \ apt-get install --assume-yes lld clang """ ] [target.aarch64-unknown-linux-musl] image = "ghcr.io/cross-rs/aarch64-unknown-linux-musl:main" pre-build = [ """ apt-get update && \ apt-get install --assume-yes lld clang """ ] [target.armv7-unknown-linux-musleabihf] image = "ghcr.io/cross-rs/armv7-unknown-linux-musleabihf:main" pre-build = [ """ apt-get update && \ apt-get install --assume-yes lld clang """ ] [target.arm-unknown-linux-musleabihf] image = "ghcr.io/cross-rs/arm-unknown-linux-musleabihf:main" pre-build = [ """ apt-get update && \ apt-get install --assume-yes lld clang """ ] 07070100000013000081A4000000000000000000000001665F1B6900000311000000000000000000000000000000000000001500000000pueue-3.4.1/Justfile# Bump all deps, including incompatible version upgrades bump: just ensure_installed upgrade cargo update cargo upgrade --incompatible cargo test --workspace # Run the test suite with nexttest nextest: just ensure_installed nextest cargo nextest run --workspace # If you change anything in here, make sure to also adjust the lint CI job! 
lint: just ensure_installed sort cargo fmt --all -- --check cargo sort --workspace --check cargo clippy --tests --workspace -- -D warnings format: just ensure_installed sort cargo fmt cargo sort --workspace ensure_installed *args: #!/bin/bash cargo --list | grep -q {{ args }} if [[ $? -ne 0 ]]; then echo "error: cargo-{{ args }} is not installed" exit 1 fi 07070100000014000081A4000000000000000000000001665F1B690000042F000000000000000000000000000000000000001400000000pueue-3.4.1/LICENSEMIT License Copyright (c) 2018-2022 Arne Beer Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
07070100000015000081A4000000000000000000000001665F1B69000038E2000000000000000000000000000000000000001600000000pueue-3.4.1/README.md# Pueue [![Test Build](https://github.com/Nukesor/pueue/actions/workflows/test.yml/badge.svg)](https://github.com/Nukesor/pueue/actions/workflows/test.yml) [![Crates.io](https://img.shields.io/crates/v/pueue)](https://crates.io/crates/pueue) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Downloads](https://img.shields.io/github/downloads/nukesor/pueue/total.svg)](https://github.com/nukesor/pueue/releases) [![codecov](https://codecov.io/gh/nukesor/pueue/branch/main/graph/badge.svg)](https://codecov.io/gh/nukesor/pueue) ![Pueue](https://raw.githubusercontent.com/Nukesor/images/main/pueue-v2.0.0.gif) Pueue is a command-line task management tool for sequential and parallel execution of long-running tasks. Simply put, it's a tool that **p**rocesses a q**ueue** of shell commands. On top of that, there are a lot of convenient features and abstractions. Since Pueue is not bound to any terminal, you can control your tasks from any terminal on the same machine. The queue will be continuously processed, even if you no longer have any active ssh sessions. **Pueue is considered feature-complete :tada:.** All features that were planned have been added and only minor improvements, bug-fixes and regular maintenance work will get merged. - [Features](https://github.com/Nukesor/pueue#features) - [Installation](https://github.com/Nukesor/pueue#installation) - [How to use it](https://github.com/Nukesor/pueue#how-to-use-it) - [Similar Projects](https://github.com/Nukesor/pueue#similar-projects) - [Design Goals](https://github.com/Nukesor/pueue#design-goals) - [Contributing](https://github.com/Nukesor/pueue#contributing) ## Features - Scheduling - Add tasks as you go. - Run multiple tasks at once. You decide how many tasks should run concurrently. - Change the order of the scheduled tasks. 
  - Specify dependencies between tasks.
  - Schedule tasks to run at a specific time.
- Process interaction
  - Easy output inspection.
  - Send input to running processes.
  - Pause/resume tasks, when you need some processing power right NOW!
- Task groups (multiple queues)
  - Each group can have several tasks running in parallel.
  - Pause/start tasks by group.
- Background process execution
  - The `pueued` daemon runs in the background. No need to be logged in.
  - Commands are executed in their respective working directories.
  - The current environment variables are copied when adding a task.
  - Commands are run in a shell, which allows the full feature set of shell scripting.
- Consistency
  - The queue is always saved to disk and restored on kill/system crash.
  - Logs are persisted onto the disk and survive a crash.
- Miscellaneous
  - A callback hook to, for instance, set up desktop notifications.
  - JSON output for `log` and `status` if you want to display info about tasks in another program.
  - A `wait` subcommand to wait for specific tasks, a group (or everything) to finish.
  - A lot more. Check the `-h` option of each subcommand for detailed options.
- Cross Platform
  - Linux is fully supported and battle-tested.
  - MacOS is fully supported and on par with Linux.
  - Windows is fully supported and has been working fine for quite a while.

- [Why should I use it](https://github.com/Nukesor/pueue/wiki/FAQ#why-should-i-use-it)
- [Advantages over Using a Terminal Multiplexer](https://github.com/Nukesor/pueue/wiki/FAQ#advantages-over-using-a-terminal-multiplexer)

## What Pueue is **not**

Pueue is **not** designed to be a heavy-duty programmable (scriptable) task scheduler/executor.
The focus of `pueue` lies on human interaction, i.e. it's supposed to be used by a real person on some kind of OS.
See [the Design Goals section](#design-goals).

Due to this, the feature set of `pueue` and `pueued` as well as their implementation and architecture have been kept simple by design!
Even though it can be scripted to some degree, it hasn't been built for this and there's no official support!
There's definitely a need for a complex task scheduler/executor with advanced API access and scheduling options, but that's the job of another project, as this is not what Pueue has been built for.

## Installation

There are a few different ways to install Pueue.

#### Package Manager

<a href="https://repology.org/project/pueue/versions"><img align="right" src="https://repology.org/badge/vertical-allrepos/pueue.svg" alt="Packaging status"></a>

The preferred way to install Pueue is to use your system's package manager.
This will usually deploy service files and completions automatically.

Pueue has been packaged for quite a few distributions; check the table on the right for more information.

#### Prebuilt Binaries

Statically linked (if possible) binaries for Linux (incl. ARM), Mac OS and Windows are built on each release. \
You can download the binaries for the client and the daemon (`pueue` and `pueued`) for each release on the [release page](https://github.com/Nukesor/pueue/releases). \
Just download both binaries for your system, rename them to `pueue` and `pueued` and place them in your `$PATH`/program folder.

#### Via Cargo

Pueue is built for the current `stable` Rust version.
It might compile on older versions, but this isn't tested or officially supported.

```bash
cargo install --locked pueue
```

This will install Pueue to `$CARGO_HOME/bin/pueue` (the default is `~/.cargo/bin/pueue`).

#### From Source

Pueue is built for the current `stable` Rust version.
It might compile on older versions, but this isn't tested or officially supported.

```bash
git clone git@github.com:Nukesor/pueue
cd pueue
cargo build --release --locked
```

The final binaries will be located in `target/release/{pueue,pueued}`.

## How to Use it

Check the wiki to [get started](https://github.com/Nukesor/pueue/wiki/Get-started) :).
There are also detailed sections for (hopefully) every important feature:

- [Configuration](https://github.com/Nukesor/pueue/wiki/Configuration)
- [Groups](https://github.com/Nukesor/pueue/wiki/Groups)
- [Advanced usage](https://github.com/Nukesor/pueue/wiki/Advanced-usage)
- [Connect to remote](https://github.com/Nukesor/pueue/wiki/Connect-to-remote)

On top of that, there is a help option (-h) for all commands.

```text
Interact with the Pueue daemon

Usage: pueue [OPTIONS] [COMMAND]

Commands:
  add            Enqueue a task for execution. There're many different options when scheduling a task.
                 Check the individual option help texts for more information.
                 Furthermore, please remember that scheduled commands are executed via your system shell.
                 This means that the command needs proper shell escaping.
                 The safest way to preserve shell escaping is to surround your command with quotes, for example:
                 pueue add 'ls $HOME && echo "Some string"'
  remove         Remove tasks from the list. Running or paused tasks need to be killed first
  switch         Switches the queue position of two commands. Only works on queued and stashed commands
  stash          Stashed tasks won't be automatically started. You have to enqueue them or start them by hand
  enqueue        Enqueue stashed tasks. They'll be handled normally afterwards
  start          Resume operation of specific tasks or groups of tasks.
                 By default, this resumes the default group and all its tasks.
                 Can also be used to force-start specific tasks.
  restart        Restart failed or successful task(s).
                 By default, identical tasks will be created and enqueued, but it's possible to restart in-place.
                 You can also edit a few properties, such as the path and the command, before restarting.
  pause          Either pause running tasks or specific groups of tasks.
                 By default, pauses the default group and all its tasks.
                 A paused queue (group) won't start any new tasks.
  kill           Kill specific running tasks or whole task groups.
                 Kills all tasks of the default group when no ids or a specific group are provided.
  send           Send something to a task. Useful for sending confirmations such as 'y\n'
  edit           Edit the command, path or label of a stashed or queued task.
                 By default only the command is edited. Multiple properties can be added in one go.
  group          Use this to add or remove groups. By default, this will simply display all known groups.
  status         Display the current status of all tasks
  format-status  Accept a list or map of JSON pueue tasks via stdin and display it just like "pueue status".
                 A simple example might look like this:
                 pueue status --json | jq -c '.tasks' | pueue format-status
  log            Display the log output of finished tasks.
                 Only the last few lines will be shown by default.
                 If you want to follow the output of a task, please use the "follow" subcommand.
  follow         Follow the output of a currently running task. This command works like "tail -f"
  wait           Wait until tasks are finished.
                 By default, this will wait for all tasks in the default group to finish.
                 Note: This will also wait for all tasks that aren't somehow 'Done'.
                 Includes: [Paused, Stashed, Locked, Queued, ...]
  clean          Remove all finished tasks from the list
  reset          Kill all tasks, clean up afterwards and reset EVERYTHING!
  shutdown       Remotely shut down the daemon. Should only be used if the daemon isn't started by a service manager
  parallel       Set the amount of allowed parallel tasks
                 By default, adjusts the amount of the default group.
                 No tasks will be stopped if this is lowered. This limit is only considered when tasks are scheduled.
  completions    Generates shell completion files. This can be ignored during normal operations
  help           Print this message or the help of the given subcommand(s)

Options:
  -v, --verbose...
          Verbose mode (-v, -vv, -vvv)
      --color <COLOR>
          Colorize the output; auto enables color output when connected to a tty
          [default: auto] [possible values: auto, never, always]
  -c, --config <CONFIG>
          If provided, Pueue only uses this config file.
          This path can also be set via the "PUEUE_CONFIG_PATH" environment variable.
          The command-line option overrides the environment variable!
  -p, --profile <PROFILE>
          The name of the profile that should be loaded from your config file
  -h, --help
          Print help
  -V, --version
          Print version
```

## Design Goals

Pueue is designed to be a convenient helper tool for a single user.
It's supposed to work stand-alone and without any external integration.
The idea is to keep it simple and to prevent feature creep.

Also, **Pueue is considered feature-complete :tada:.**
All features that were planned have been added and only minor improvements, bug-fixes and regular maintenance work will get merged.

For the record, the following features weren't included as they're out of scope:

- Distributed task management/execution.
- Multi-user task management.
- Sophisticated task scheduling for optimal load balancing.
- Tight system integration or integration with external tools.
- Explicit support for scripting.

If you're adamant about scripting it anyway, take a look at the `pueue-lib` library, which provides proper API calls for `pueued`.
However, keep in mind that `pueued` is still supposed to be a minimalistic task executor with as little scheduling logic as possible.

There seems to be a need for a project that satisfies all the points mentioned above, but that will be the job of another tool.
I very much encourage forking Pueue and I would love to see forks grow into other cool projects!

## Similar Projects

#### slurm

[Slurm](https://slurm.schedmd.com/overview.html) is a feature-rich and widely used cluster management and scheduling system.
If you find yourself in need of complex setups such as multiple worker pools or distributed nodes, slurm will be much better suited than Pueue.

#### GNU Parallel

A robust and featureful parallel processor with text-based joblog and n-retries.
[GNU Parallel](https://www.gnu.org/software/parallel/parallel_tutorial.html) is able to scale to multi-host parallelization and integrates deeply with different tools and shells, among other advanced features.
`Pueue` differentiates itself from GNU Parallel by focusing more on visibility across many different long-running commands, and by creating a central location for commands to be stored, rather than GNU Parallel's focus on chunking a specific task.

#### Pm2

[pm2](https://pm2.keymetrics.io/docs/usage/quick-start/) is a process management tool, whose focus is more on the management of recurring and long-lived tasks.
It seems to be quite mature and has a rich interface.

#### nq

A very lightweight job queue system which requires no setup, maintenance, supervision, or any long-running processes. \
[Link to project](https://github.com/leahneukirchen/nq)

#### task-spooler

_task spooler_ is a Unix batch system where the tasks spooled run one after the other. \
Links to the [ubuntu manpage](http://manpages.ubuntu.com/manpages/xenial/man1/tsp.1.html) and a [fork on Github](https://github.com/xenogenesi/task-spooler). The original website seems to be down.

## Contributing

Feature requests and pull requests are very much appreciated and welcome!

Anyhow, please talk to me a bit about your ideas before you start hacking!
It's always nice to know what you're working on and I might have a few suggestions or tips :)

Depending on the type of your contribution, you should branch off from the `main` branch.
Pueue is mature enough to no longer need a `development` branch; all changes are collected on `main` before a new release is pushed.
Urgent hotfixes might get deployed on a separate branch, but this will be decided on a case-by-case basis.

There's also the [Architecture Guide](https://github.com/Nukesor/pueue/blob/main/docs/Architecture.md), which is supposed to give you a brief overview and introduction to the project.
Copyright © 2019 Arne Beer ([@Nukesor](https://github.com/Nukesor)) 07070100000016000081A4000000000000000000000001665F1B69000000DA000000000000000000000000000000000000001800000000pueue-3.4.1/codecov.ymlignore: - "**/*.lock" - "**/*.toml" - "**/*.md" - "utils" - "**/tests" - "LICENSE" - ".github" - ".gitignore" coverage: status: project: default: target: auto threshold: 2% 07070100000017000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001100000000pueue-3.4.1/docs07070100000018000081A4000000000000000000000001665F1B6900001710000000000000000000000000000000000000002100000000pueue-3.4.1/docs/Architecture.md# Architecture Guide This document is supposed to give you a short introduction to the project. \ It explains the project structure, so you can get a rough overview of the overall architecture. Feel free to expand this document! - [Overall Structure](https://github.com/Nukesor/pueue/blob/main/ARCHITECTURE.md#overall-structure) - [Daemon](https://github.com/Nukesor/pueue/blob/main/ARCHITECTURE.md#daemon) - [Request Handler](https://github.com/Nukesor/pueue/blob/main/ARCHITECTURE.md#request-handler) - [TaskHandler](https://github.com/Nukesor/pueue/blob/main/ARCHITECTURE.md#taskhandler) - [Shared State](https://github.com/Nukesor/pueue/blob/main/ARCHITECTURE.md#shared-state) - [Code Style](https://github.com/Nukesor/pueue/blob/main/ARCHITECTURE.md#code-style) ## Overall Structure This project is divided into two modules, the client (`pueue`) and the daemon (`pueued`). \ _Pueue_ also depends on [pueue-lib](https://github.com/nukesor/pueue-lib). _Pueue-lib_ contains everything that is shared between the daemon and the client. This includes: - The protocol used for communicating. - Settings, since they're parsed by both binaries. - All data structs, namely `state`, `task` and `message`. - Helper to interact with task's logs. ## Daemon The daemon is composed of two main components. 1. Request handling in `pueue/src/daemon/network/`. 
This is the code responsible for communicating with clients.
In `pueue/src/daemon/network/message_handler/` you can find neatly separated handlers for all of Pueue's subcommands.
2. The TaskHandler in `pueue/src/daemon/task_handler/`. It's responsible for everything regarding process interaction.

All information that's not sub-process specific is stored in the `State` (`pueue-lib/state.rs`) struct. \
Both components share a reference to the State, an `Arc<Mutex<State>>`.
That way we can guarantee a single source of truth and a consistent state.

It's also important to know that there's an `mpsc` channel. \
This channel is used to send on-demand messages from the network request handler to the TaskHandler.
This includes instructions like "start/pause/kill" sub-processes or "reset everything".

### Request handling

The `pueue/src/daemon/network/socket.rs` module contains the logic for accepting client connections and receiving payloads.
The request accept and handle logic is a single async-await loop run by the main thread.

The payload is then deserialized to `Message` (`pueue-lib/message.rs`) and handled by its respective function.
All functions used for handling these messages can be found in `pueue/src/daemon/network/message_handler`.

Many messages can be instantly handled by simply modifying or reading the state. \
However, sometimes the TaskHandler has to be notified if something involves modifying actual system processes (start/pause/kill tasks).
That's when the `mpsc` channel to the TaskHandler comes into play.

### TaskHandler

The TaskHandler is responsible for actually starting and managing system processes. \
It's further important to note that it runs in its own thread.

The TaskHandler runs a never-ending loop, which checks a few times each second whether

- there are new instructions in the `mpsc` channel.
- a new task can be started.
- tasks finished and can be finalized.
- delayed tasks can be enqueued (`-d` flag on `pueue add`).
- a few other things.
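The channel-based handoff between the request handler and the TaskHandler can be sketched with `std::sync::mpsc`. Note that `TaskMessage` and `run_demo` below are hypothetical stand-ins chosen for illustration, not Pueue's actual types (the real message enum lives in pueue-lib):

```rust
use std::sync::mpsc;
use std::thread;

/// Hypothetical stand-in for the instructions the request handler
/// forwards to the TaskHandler (not Pueue's real `Message` enum).
enum TaskMessage {
    Start(usize),
    Kill(usize),
}

/// Spawns a "TaskHandler" thread that drains the channel, while the
/// "request handler" side sends on-demand instructions into it.
fn run_demo() -> Vec<String> {
    let (sender, receiver) = mpsc::channel::<TaskMessage>();

    let task_handler = thread::spawn(move || {
        let mut log = Vec::new();
        // In Pueue, this loop also starts and finalizes tasks;
        // here we only record the received instructions.
        while let Ok(message) = receiver.recv() {
            match message {
                TaskMessage::Start(id) => log.push(format!("start {id}")),
                TaskMessage::Kill(id) => log.push(format!("kill {id}")),
            }
        }
        log
    });

    sender.send(TaskMessage::Start(0)).unwrap();
    sender.send(TaskMessage::Kill(0)).unwrap();
    drop(sender); // Closing the channel lets the receiver loop terminate.

    task_handler.join().unwrap()
}

fn main() {
    println!("{:?}", run_demo());
}
```

The important property mirrored here is that the request handler never touches processes directly; it only enqueues instructions, and the TaskHandler applies them in its own loop.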
Check the `TaskHandler::run` function in `pueue/src/daemon/task_handler/mod.rs`.
The TaskHandler is by far the most complex piece of code in this project, but there is also a lot of documentation.

## Shared State

Whenever you're writing some core logic in Pueue, please make sure to understand how mutexes work.

Try to be conservative with your `state.lock()` calls, since this also blocks the request handler!
Only use the state if you absolutely have to.

At the same time, you should also lock early enough to prevent inconsistent states.
Operations should generally be atomic. \
Anyhow, working with mutexes is usually straightforward, but can sometimes be a little bit tricky.

## Code Style

This is the output of `tokei ./pueue ./pueue_lib` on commit `84a2d47` on 2022-12-27.

```
===============================================================================
 Language            Files        Lines         Code     Comments       Blanks
===============================================================================
 JSON                    2          238          238            0            0
 Markdown                2          310            0          192          118
 Pest                    1           69           43           12           14
 TOML                    2          140          112           12           16
 YAML                    1           27           27            0            0
-------------------------------------------------------------------------------
 Rust                  137        12983         9645         1179         2159
 |- Markdown           127         1571            0         1450          121
 (Total)                          14554         9645         2629         2280
===============================================================================
 Total                 145        13767        10065         1395         2307
===============================================================================
```

### Format and Clippy

`cargo fmt` and `cargo clean && cargo clippy` should never return any warnings on the current stable Rust version!

PRs are automatically checked for these two and won't be accepted unless everything looks fine.

### Comments

1. All functions must have a doc block.
2. All non-trivial structs must have a doc block.
3. Rather too many inline comments than too few.
4. Non-trivial code should be well documented!

In general, please add a lot of comments. It makes maintenance, collaboration and reviews MUCH easier.
07070100000019000081A4000000000000000000000001665F1B69000001B6000000000000000000000000000000000000001A00000000pueue-3.4.1/docs/Cross.md# Cross compilation Compilation and testing for other architectures is rather easy with `cross`. 1. Install `cargo-cross`. 1. Make sure to install `qemu`. - On Arch-Linux install `qemu-user-static-binfmt`. - On Ubuntu install `binfmt-support` and `qemu-user-static`. Run the build/test against the target infrastructure, I.e.: - `cross build --target=aarch64-unknown-linux-musl` - `cross test --target=aarch64-unknown-linux-musl` 0707010000001A000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001200000000pueue-3.4.1/pueue0707010000001B000081A4000000000000000000000001665F1B6900000915000000000000000000000000000000000000001D00000000pueue-3.4.1/pueue/Cargo.toml[package] name = "pueue" version = "3.4.1" description = "A cli tool for managing long running shell commands." keywords = ["shell", "command", "parallel", "task", "queue"] readme = "../README.md" authors = { workspace = true } homepage = { workspace = true } repository = { workspace = true } license = { workspace = true } edition = { workspace = true } rust-version = { workspace = true } [badges] maintenance = { status = "actively-developed" } [dependencies] anyhow = "1.0" chrono = { workspace = true } clap = { version = "4.5.1", features = ["derive", "cargo", "help"] } clap_complete = "4.5.1" clap_complete_nushell = "4.5.1" comfy-table = "7" command-group = { workspace = true } ctrlc = { version = "3", features = ["termination"] } handlebars = { workspace = true } interim = { version = "0.1.2", features = ["chrono"] } log = { workspace = true } pest = "2.7" pest_derive = "2.7" pueue-lib = { version = "0.26.1", path = "../pueue_lib" } serde = { workspace = true } serde_derive = { workspace = true } serde_json = { workspace = true } shell-escape = "0.1" simplelog = "0.12" snap = { workspace = true } strum = { workspace = true } strum_macros = { 
workspace = true } tempfile = "3" tokio = { workspace = true } [dev-dependencies] anyhow = { workspace = true } assert_cmd = "2" better-panic = { workspace = true } # Make it easy to view log output for select tests. # Set log level for tests with RUST_LOG=<level>, use with failed tests or # disable test stdout/stderr capture (`cargo test -- --nocapture` / `cargo # nextest run --no-capture`) env_logger = "0.11" pretty_assertions = { workspace = true } rstest = "0.19" serde_yaml = { workspace = true } similar-asserts = "1" test-log = "0.2" # We don't need any of the default features for crossterm. # However, the windows build needs the windows feature enabled. [target.'cfg(not(windows))'.dependencies] crossterm = { version = "0.27", default-features = false } [target.'cfg(windows)'.dependencies] crossterm = { version = "0.27", default-features = false, features = ["windows"] } # Test specific dev-dependencies [target.'cfg(any(target_os = "linux", target_os = "freebsd"))'.dependencies] whoami = "1" # Test specific Linux dev-dependencies [target.'cfg(target_os = "linux")'.dependencies] procfs = { version = "0.16", default-features = false } 0707010000001C000081A4000000000000000000000001665F1B69000001D9000000000000000000000000000000000000001C00000000pueue-3.4.1/pueue/README.md# Pueue These are the internal library files that **shouldn't** be used by anything but `pueue` itself. If you're looking for a way to install the `pueue` binaries, please refer to [Pueue's crates.io page](https://crates.io/crates/pueue) or the [Github repository](https://github.com/nukesor/pueue). 
If you're looking for a way to programmatically interface with `pueue` via Rust code, please take a look at the [`pueue_lib`](https://docs.rs/pueue-lib/latest/pueue_lib/). 0707010000001D000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001600000000pueue-3.4.1/pueue/src0707010000001E000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001A00000000pueue-3.4.1/pueue/src/bin0707010000001F000081A4000000000000000000000001665F1B690000146B000000000000000000000000000000000000002300000000pueue-3.4.1/pueue/src/bin/pueue.rsuse std::path::PathBuf; use anyhow::{bail, Context, Result}; use clap::{CommandFactory, Parser}; use clap_complete::{generate, generate_to, shells}; use log::warn; use simplelog::{Config, ConfigBuilder, LevelFilter, SimpleLogger}; use pueue_lib::settings::Settings; use pueue::client::cli::{CliArguments, Shell, SubCommand}; use pueue::client::client::Client; /// This is the main entry point of the client. /// /// At first we do some basic setup: /// - Parse the cli /// - Initialize logging /// - Read the config /// /// Once all this is done, we init the [Client] struct and start the main loop via [Client::start]. #[tokio::main(flavor = "current_thread")] async fn main() -> Result<()> { // Parse commandline options. let opt = CliArguments::parse(); // In case the user requested the generation of a shell completion file, create it and exit. if let Some(SubCommand::Completions { shell, output_directory, }) = &opt.cmd { return create_shell_completion_file(shell, output_directory); } // Init the logger and set the verbosity level depending on the `-v` flags. let level = match opt.verbose { 0 => LevelFilter::Error, 1 => LevelFilter::Warn, 2 => LevelFilter::Info, _ => LevelFilter::Debug, }; // Try to initialize the logger with the timezone set to the Local time of the machine.
let mut builder = ConfigBuilder::new(); let logger_config = match builder.set_time_offset_to_local() { Err(_) => { warn!("Failed to determine the local time of this machine. Fallback to UTC."); Config::default() } Ok(builder) => builder.build(), }; SimpleLogger::init(level, logger_config).unwrap(); // Try to read settings from the configuration file. let (mut settings, config_found) = Settings::read(&opt.config).context("Failed to read configuration.")?; // Load any requested profile. if let Some(profile) = &opt.profile { settings.load_profile(profile)?; } // Error if no configuration file can be found, as this is an indicator, that the daemon hasn't // been started yet. if !config_found { bail!("Couldn't find a configuration file. Did you start the daemon yet?"); } // Warn if the deprecated --children option was used if let Some(subcommand) = &opt.cmd { if matches!( subcommand, SubCommand::Start { children: true, .. } | SubCommand::Pause { children: true, .. } | SubCommand::Kill { children: true, .. } | SubCommand::Reset { children: true, .. } ) { println!(concat!( "Note: The --children flag is deprecated and will be removed in a future release. ", "It no longer has any effect, as this command now always applies to all processes in a task." )); } } // Create client to talk with the daemon and connect. let mut client = Client::new(settings, opt) .await .context("Failed to initialize client.")?; client.start().await?; Ok(()) } /// [clap] is capable of creating auto-generated shell completion files. /// This function creates such a file for one of the supported shells and puts it into the /// specified output directory. 
fn create_shell_completion_file(shell: &Shell, output_directory: &Option<PathBuf>) -> Result<()> { let mut app = CliArguments::command(); app.set_bin_name("pueue"); // Output a completion file to a directory, if one is provided if let Some(output_directory) = output_directory { let completion_result = match shell { Shell::Bash => generate_to(shells::Bash, &mut app, "pueue", output_directory), Shell::Elvish => generate_to(shells::Elvish, &mut app, "pueue", output_directory), Shell::Fish => generate_to(shells::Fish, &mut app, "pueue", output_directory), Shell::PowerShell => { generate_to(shells::PowerShell, &mut app, "pueue", output_directory) } Shell::Zsh => generate_to(shells::Zsh, &mut app, "pueue", output_directory), Shell::Nushell => generate_to( clap_complete_nushell::Nushell, &mut app, "pueue", output_directory, ), }; completion_result.context(format!("Failed to generate completions for {shell:?}"))?; return Ok(()); } // Print the completion file to stdout let mut stdout = std::io::stdout(); match shell { Shell::Bash => generate(shells::Bash, &mut app, "pueue", &mut stdout), Shell::Elvish => generate(shells::Elvish, &mut app, "pueue", &mut stdout), Shell::Fish => generate(shells::Fish, &mut app, "pueue", &mut stdout), Shell::PowerShell => generate(shells::PowerShell, &mut app, "pueue", &mut stdout), Shell::Zsh => generate(shells::Zsh, &mut app, "pueue", &mut stdout), Shell::Nushell => generate( clap_complete_nushell::Nushell, &mut app, "pueue", &mut stdout, ), }; Ok(()) } 07070100000020000081A4000000000000000000000001665F1B6900000981000000000000000000000000000000000000002400000000pueue-3.4.1/pueue/src/bin/pueued.rsuse std::process::Command; use anyhow::Result; use clap::Parser; use log::warn; use simplelog::{Config, ConfigBuilder, LevelFilter, SimpleLogger}; use pueue::daemon::cli::CliArguments; use pueue::daemon::run; #[tokio::main(flavor = "multi_thread", worker_threads = 4)] async fn main() -> Result<()> { // Parse commandline options. 
let opt = CliArguments::parse(); if opt.daemonize { return fork_daemon(&opt); } // Set the verbosity level of the logger. let level = match opt.verbose { 0 => LevelFilter::Error, 1 => LevelFilter::Warn, 2 => LevelFilter::Info, _ => LevelFilter::Debug, }; // Try to initialize the logger with the timezone set to the Local time of the machine. let mut builder = ConfigBuilder::new(); let logger_config = match builder.set_time_offset_to_local() { Err(_) => { warn!("Failed to determine the local time of this machine. Fallback to UTC."); Config::default() } Ok(builder) => builder.build(), }; SimpleLogger::init(level, logger_config).unwrap(); run(opt.config, opt.profile, false).await } /// This is a simple and cheap custom fork method. /// Simply spawn a new child with identical arguments and exit right away. fn fork_daemon(opt: &CliArguments) -> Result<()> { let mut arguments = Vec::<String>::new(); if let Some(config) = &opt.config { arguments.push("--config".to_string()); arguments.push(config.to_string_lossy().into_owned()); } if let Some(profile) = &opt.profile { arguments.push("--profile".to_string()); arguments.push(profile.clone()); } if opt.verbose > 0 { arguments.push("-".to_string() + &"v".repeat(opt.verbose as usize)); } // Try to get the path to the current binary, since it may not be in the $PATH. // If we cannot detect it (for some unknown reason), fallback to the raw `pueued` binary name. let current_exe = if let Ok(path) = std::env::current_exe() { path.to_string_lossy().clone().to_string() } else { println!("Couldn't detect path of current binary. 
Falling back to 'pueue' in $PATH"); "pueued".to_string() }; Command::new(current_exe) .args(&arguments) .spawn() .expect("Failed to fork new process."); println!("Pueued is now running in the background"); Ok(()) } 07070100000021000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001D00000000pueue-3.4.1/pueue/src/client07070100000022000081A4000000000000000000000001665F1B6900005064000000000000000000000000000000000000002400000000pueue-3.4.1/pueue/src/client/cli.rsuse std::path::PathBuf; use chrono::prelude::*; use chrono::TimeDelta; use clap::ArgAction; use clap::{Parser, ValueEnum, ValueHint}; use interim::*; use pueue_lib::network::message::Signal; use super::commands::WaitTargetStatus; #[derive(Parser, Debug)] pub enum SubCommand { #[command( about = "Enqueue a task for execution.\n\ There're many different options when scheduling a task.\n\ Check the individual option help texts for more information.\n\n\ Furthermore, please remember that scheduled commands are executed via your system shell.\n\ This means that the command needs proper shell escaping.\n\ The safest way to preserve shell escaping is to surround your command with quotes, for example:\n\ pueue add 'ls $HOME && echo \"Some string\"'", trailing_var_arg = true )] Add { /// The command to be added. #[arg(required = true, num_args(1..), value_hint = ValueHint::CommandWithArguments)] command: Vec<String>, /// Specify current working directory. #[arg(name = "working-directory", short = 'w', long, value_hint = ValueHint::DirPath)] working_directory: Option<PathBuf>, /// Escape any special shell characters (" ", "&", "!", etc.). /// Beware: This implicitly disables nearly all shell specific syntax ("&&", "&>"). #[arg(short, long)] escape: bool, /// Immediately start the task. #[arg(name = "immediate", short, long, conflicts_with = "stashed")] start_immediately: bool, /// Create the task in Stashed state. /// Useful to avoid immediate execution if the queue is empty. 
#[arg(short, long, conflicts_with = "immediate")] stashed: bool, /// Prevents the task from being enqueued until 'delay' elapses. See "enqueue" for accepted formats. #[arg(name = "delay", short, long, conflicts_with = "immediate", value_parser = parse_delay_until)] delay_until: Option<DateTime<Local>>, /// Assign the task to a group. Groups kind of act as separate queues. /// I.e. all groups run in parallel and you can specify the amount of parallel tasks for each group. /// If no group is specified, the default group will be used. #[arg(short, long)] group: Option<String>, /// Start the task once all specified tasks have successfully finished. /// As soon as one of the dependencies fails, this task will fail as well. #[arg(name = "after", short, long, num_args(1..))] dependencies: Vec<usize>, /// Start this task with a higher priority. /// The higher the number, the faster it will be processed. #[arg(short = 'o', long)] priority: Option<i32>, /// Add some information for yourself. /// This string will be shown in the "status" table. /// There's no additional logic connected to it. #[arg(short, long)] label: Option<String>, /// Only return the task id instead of a text. /// This is useful when working with dependencies. #[arg(short, long)] print_task_id: bool, }, /// Remove tasks from the list. /// Running or paused tasks need to be killed first. #[command(alias("rm"))] Remove { /// The task ids to be removed. #[arg(required = true)] task_ids: Vec<usize>, }, /// Switches the queue position of two commands. /// Only works on queued and stashed commands. Switch { /// The first task id. task_id_1: usize, /// The second task id. task_id_2: usize, }, /// Stashed tasks won't be automatically started. /// You have to enqueue them or start them by hand. Stash { /// Stash these specific tasks. #[arg(required = true)] task_ids: Vec<usize>, }, /// Enqueue stashed tasks. They'll be handled normally afterwards. 
    #[command(after_help = "DELAY FORMAT:

    The --delay argument must be either a number of seconds or a \"date expression\" similar to GNU \
    \"date -d\" with some extensions. It does not attempt to parse all natural language, but is \
    incredibly flexible. Here are some supported examples.

    2020-04-01T18:30:00   // RFC 3339 timestamp
    2020-4-1 18:2:30      // Optional leading zeros
    2020-4-1 5:30pm       // Informal am/pm time
    2020-4-1 5pm          // Optional minutes and seconds
    April 1 2020 18:30:00 // English months
    1 Apr 8:30pm          // Implies current year
    4/1                   // American form date
    wednesday 10:30pm     // The closest wednesday in the future at 22:30
    wednesday             // The closest wednesday in the future
    4 months              // 4 months from today at 00:00:00
    1 week                // 1 week at the current time
    1days                 // 1 day from today at the current time
    1d 03:00              // The closest 3:00 after 1 day (24 hours)
    3h                    // 3 hours from now
    3600s                 // 3600 seconds from now
")]
    Enqueue {
        /// Enqueue these specific tasks.
        task_ids: Vec<usize>,

        /// Delay enqueuing these tasks until 'delay' elapses. See DELAY FORMAT below.
        #[arg(name = "delay", short, long, value_parser = parse_delay_until)]
        delay_until: Option<DateTime<Local>>,
    },

    #[command(
        about = "Resume operation of specific tasks or groups of tasks.\n\
            By default, this resumes the default group and all its tasks.\n\
            Can also be used to force-start specific tasks.",
        verbatim_doc_comment
    )]
    Start {
        /// Start these specific tasks. Paused tasks will be resumed.
        /// Queued or Stashed tasks will be force-started.
        task_ids: Vec<usize>,

        /// Resume a specific group and all paused tasks in it.
        /// The group will be set to running and its paused tasks will be resumed.
        #[arg(short, long, conflicts_with = "all")]
        group: Option<String>,

        /// Resume all groups!
        /// All groups will be set to running and paused tasks will be resumed.
        #[arg(short, long)]
        all: bool,

        /// Deprecated: this switch no longer has any effect.
        #[arg(short, long)]
        children: bool,
    },

    #[command(
        about = "Restart failed or successful task(s).\n\
            By default, identical tasks will be created and enqueued, but it's possible to restart in-place.\n\
            You can also edit a few properties, such as the path and the command, before restarting.",
        alias("re")
    )]
    Restart {
        /// Restart these specific tasks.
        task_ids: Vec<usize>,

        /// Restart all failed tasks across all groups.
        /// Nice to use in combination with `-i/--in-place`.
        #[arg(short, long)]
        all_failed: bool,

        /// Like `--all-failed`, but only restart failed tasks of a specific group.
        /// The group will be set to running and its paused tasks will be resumed.
        #[arg(short = 'g', long, conflicts_with = "all_failed")]
        failed_in_group: Option<String>,

        /// Immediately start the tasks, no matter how many open slots there are.
        /// This will ignore any dependencies tasks may have.
        #[arg(short = 'k', long, conflicts_with = "stashed")]
        start_immediately: bool,

        /// Set the restarted task to a "Stashed" state.
        /// Useful to avoid immediate execution.
        #[arg(short, long)]
        stashed: bool,

        /// Restart the task by reusing the already existing tasks.
        /// This will overwrite any previous logs of the restarted tasks.
        #[arg(short, long)]
        in_place: bool,

        /// Restart the task by creating a new identical task.
        /// Only applies if you have the restart_in_place configuration set to true.
        #[arg(long)]
        not_in_place: bool,

        /// Edit the tasks' commands before restarting.
        #[arg(short, long)]
        edit: bool,

        /// Edit the tasks' paths before restarting.
        #[arg(short = 'p', long)]
        edit_path: bool,

        /// Edit the tasks' labels before restarting.
        #[arg(short = 'l', long)]
        edit_label: bool,

        /// Edit the tasks' priorities before restarting.
        #[arg(short = 'o', long)]
        edit_priority: bool,
    },

    #[command(about = "Either pause running tasks or specific groups of tasks.\n\
        By default, pauses the default group and all its tasks.\n\
        A paused queue (group) won't start any new tasks.")]
    Pause {
        /// Pause these specific tasks.
        /// Does not affect the default group, groups or any other tasks.
        task_ids: Vec<usize>,

        /// Pause a specific group.
        #[arg(short, long, conflicts_with = "all")]
        group: Option<String>,

        /// Pause all groups!
        #[arg(short, long)]
        all: bool,

        /// Only pause the specified group and let already running tasks finish by themselves.
        #[arg(short, long)]
        wait: bool,

        /// Deprecated: this switch no longer has any effect.
        #[arg(short, long)]
        children: bool,
    },

    #[command(about = "Kill specific running tasks or whole task groups.\n\
        Kills all tasks of the default group when no ids or a specific group are provided.")]
    Kill {
        /// Kill these specific tasks.
        task_ids: Vec<usize>,

        /// Kill all running tasks in a group. This also pauses the group.
        #[arg(short, long, conflicts_with = "all")]
        group: Option<String>,

        /// Kill all running tasks across ALL groups. This also pauses all groups.
        #[arg(short, long)]
        all: bool,

        /// Deprecated: this switch no longer has any effect.
        #[arg(short, long)]
        children: bool,

        /// Send a UNIX signal instead of simply killing the process.
        /// DISCLAIMER: This bypasses Pueue's process handling logic!
        /// You might enter weird invalid states, use at your own discretion.
        #[arg(short, long, ignore_case(true))]
        signal: Option<Signal>,
    },

    /// Send something to a task. Useful for sending confirmations such as 'y\n'.
    Send {
        /// The id of the task.
        task_id: usize,

        /// The input that should be sent to the process.
        input: String,
    },

    #[command(
        about = "Edit the command, path, label, or priority of a stashed or queued task.\n\
            By default only the command is edited.\n\
            Multiple properties can be added in one go."
    )]
    Edit {
        /// The task's id.
        task_id: usize,

        /// Edit the task's command.
#[arg(short, long)] command: bool, /// Edit the task's path. #[arg(short, long)] path: bool, /// Edit the task's label. #[arg(short, long)] label: bool, /// Edit the task's priority. #[arg(short = 'o', long)] priority: bool, }, #[command(about = "Use this to add or remove groups.\n\ By default, this will simply display all known groups.")] Group { /// Print the list of groups as json. #[arg(short, long)] json: bool, #[command(subcommand)] cmd: Option<GroupCommand>, }, /// Display the current status of all tasks. Status { /// Users can specify a custom query to filter for specific values, order by a column /// or limit the amount of tasks listed. /// Use `--help` for the full syntax definition. #[arg( long_help = "Users can specify a custom query to filter for specific values, order by a column or limit the amount of tasks listed. Syntax: [column_selection]? [filter]* [order_by]? [limit]? where: - column_selection := `columns=[column]([column],)*` - column := `id | status | command | label | path | enqueue_at | dependencies | start | end` - filter := `[filter_column] [filter_op] [filter_value]` (note: not all columns support all operators, see \"Filter columns\" below.) - filter_column := `start | end | enqueue_at | status | label` - filter_op := `= | != | < | > | %=` (`%=` means 'contains', as in the test value is a substring of the column value) - order_by := `order_by [column] [order_direction]` - order_direction := `asc | desc` - limit := `[limit_type]? 
[limit_count]` - limit_type := `first | last` - limit_count := a positive integer Filter columns: - `start`, `end`, `enqueue_at` contain a datetime which support the operators `=`, `!=`, `<`, `>` against test values that are: - date like `YYYY-MM-DD` - time like `HH:mm:ss` or `HH:mm` - datetime like `YYYY-MM-DDHH:mm:ss` (note there is currently no separator between the date and the time) Examples: - `status=running` - `columns=id,status,command status=running start > 2023-05-2112:03:17 order_by command first 5` The formal syntax is defined here: https://github.com/Nukesor/pueue/blob/main/pueue/src/client/query/syntax.pest More documentation is on the query syntax PR: https://github.com/Nukesor/pueue/issues/350#issue-1359083118" )] query: Vec<String>, /// Print the current state as json to stdout. /// This does not include the output of tasks. /// Use `log -j` if you want everything. #[arg(short, long)] json: bool, #[arg(short, long)] /// Only show tasks of a specific group group: Option<String>, }, #[command( about = "Accept a list or map of JSON pueue tasks via stdin and display it just like \"pueue status\".\n\ A simple example might look like this:\n\ pueue status --json | jq -c '.tasks' | pueue format-status", after_help = "DISCLAIMER:\n\ This command is a temporary workaround until a proper filtering language for \"status\" has been implemented. It might be removed in the future." )] FormatStatus { #[arg(short, long)] /// Only show tasks of a specific group group: Option<String>, }, #[command(about = "Display the log output of finished tasks.\n\ Only the last few lines will be shown by default.\n\ If you want to follow the output of a task, please use the \"follow\" subcommand.")] Log { /// View the task output of these specific tasks. task_ids: Vec<usize>, /// Print the resulting tasks and output as json. /// By default only the last lines will be returned unless --full is provided. /// Take care, as the json cannot be streamed! 
/// If your logs are really huge, using --full can use all of your machine's RAM. #[arg(short, long)] json: bool, /// Only print the last X lines of each task's output. /// This is done by default if you're looking at multiple tasks. #[arg(short, long, conflicts_with = "full")] lines: Option<usize>, /// Show the whole output. #[arg(short, long)] full: bool, }, /// Follow the output of a currently running task. /// This command works like "tail -f". #[command(alias("fo"))] Follow { /// The id of the task you want to watch. /// If no or multiple tasks are running, you have to specify the id. /// If only a single task is running, you can omit the id. task_id: Option<usize>, /// Only print the last X lines of the output before following #[arg(short, long)] lines: Option<usize>, }, #[command(about = "Wait until tasks are finished.\n\ By default, this will wait for all tasks in the default group to finish.\n\ Note: This will also wait for all tasks that aren't somehow 'Done'.\n\ Includes: [Paused, Stashed, Locked, Queued, ...]")] Wait { /// This allows you to wait for specific tasks to finish. task_ids: Vec<usize>, /// Wait for all tasks in a specific group #[arg(short, long, conflicts_with = "all")] group: Option<String>, /// Wait for all tasks across all groups and the default group. #[arg(short, long)] all: bool, /// Don't show any log output while waiting #[arg(short, long)] quiet: bool, /// Wait for tasks to reach a specific task status. #[arg(short, long)] status: Option<WaitTargetStatus>, }, /// Remove all finished tasks from the list. #[command(aliases(["cleanup", "clear"]))] Clean { /// Only clean tasks that finished successfully. #[arg(short, long)] successful_only: bool, /// Only clean tasks of a specific group #[arg(short, long)] group: Option<String>, }, /// Kill all tasks, clean up afterwards and reset EVERYTHING! Reset { /// Deprecated: this switch no longer has any effect. #[arg(short, long)] children: bool, /// Don't ask for any confirmation. 
#[arg(short, long)] force: bool, }, /// Remotely shut down the daemon. Should only be used if the daemon isn't started by a service manager. Shutdown, #[command(about = "Set the amount of allowed parallel tasks\n\ By default, adjusts the amount of the default group.\n\ No tasks will be stopped, if this is lowered.\n\ This limit is only considered when tasks are scheduled.")] Parallel { /// The amount of allowed parallel tasks. /// Setting this to 0 means an unlimited amount of parallel tasks. parallel_tasks: Option<usize>, /// Set the amount for a specific group. #[arg(name = "group", short, long)] group: Option<String>, }, /// Generates shell completion files. /// This can be ignored during normal operations. Completions { /// The target shell. #[arg(value_enum)] shell: Shell, /// The output directory to which the file should be written. #[arg(value_hint = ValueHint::DirPath)] output_directory: Option<PathBuf>, }, } #[derive(Parser, Debug)] pub enum GroupCommand { /// Add a group by name. Add { name: String, /// Set the amount of parallel tasks this group can have. /// Setting this to 0 means an unlimited amount of parallel tasks. #[arg(short, long)] parallel: Option<usize>, }, /// Remove a group by name. /// This will move all tasks in this group to the default group! Remove { name: String }, } #[derive(Parser, ValueEnum, Debug, Clone, PartialEq, Eq)] pub enum ColorChoice { Auto, Never, Always, } #[derive(Parser, ValueEnum, Debug, Clone, PartialEq, Eq)] pub enum Shell { Bash, Elvish, Fish, PowerShell, Zsh, Nushell, } #[derive(Parser, Debug)] #[command( name = "pueue", about = "Interact with the Pueue daemon", author, version )] pub struct CliArguments { /// Verbose mode (-v, -vv, -vvv) #[arg(short, long, action = ArgAction::Count)] pub verbose: u8, /// Colorize the output; auto enables color output when connected to a tty. #[arg(long, value_enum, default_value = "auto")] pub color: ColorChoice, /// If provided, Pueue only uses this config file. 
    /// This path can also be set via the "PUEUE_CONFIG_PATH" environment variable.
    /// The commandline option overwrites the environment variable!
    #[arg(short, long, value_hint = ValueHint::FilePath)]
    pub config: Option<PathBuf>,

    /// The name of the profile that should be loaded from your config file.
    #[arg(short, long)]
    pub profile: Option<String>,

    #[command(subcommand)]
    pub cmd: Option<SubCommand>,
}

fn parse_delay_until(src: &str) -> Result<DateTime<Local>, String> {
    if let Ok(seconds) = src.parse::<i64>() {
        let delay_until = Local::now()
            + TimeDelta::try_seconds(seconds)
                .ok_or(format!("Failed to get timedelta from {seconds} seconds"))?;
        return Ok(delay_until);
    }

    if let Ok(date_time) = parse_date_string(src, Local::now(), Dialect::Us) {
        return Ok(date_time);
    }

    Err(String::from(
        "could not parse as seconds or date expression",
    ))
}

// File: pueue-3.4.1/pueue/src/client/client.rs
use std::env::{current_dir, vars};
use std::io::{self, stdout, Write};
use std::{borrow::Cow, collections::HashMap};

use anyhow::{bail, Context, Result};
use clap::crate_version;
use crossterm::tty::IsTty;
use log::error;

use pueue_lib::network::message::*;
use pueue_lib::network::protocol::*;
use pueue_lib::network::secret::read_shared_secret;
use pueue_lib::settings::Settings;
use pueue_lib::state::PUEUE_DEFAULT_GROUP;

use crate::client::cli::{CliArguments, ColorChoice, GroupCommand, SubCommand};
use crate::client::commands::*;
use crate::client::display::*;

/// This struct contains the base logic for the client.
/// The client is responsible for connecting to the daemon, sending instructions
/// and interpreting their responses.
///
/// Most commands are a simple ping-pong. However, some commands require a more complex
/// communication pattern, such as the `follow` command, which can read local files,
/// or the `edit` command, which needs to open an editor.
pub struct Client {
    subcommand: SubCommand,
    settings: Settings,
    style: OutputStyle,
    stream: GenericStream,
}

/// This is a small helper which either returns a given group or the default group.
pub fn group_or_default(group: &Option<String>) -> String {
    group
        .clone()
        .unwrap_or_else(|| PUEUE_DEFAULT_GROUP.to_string())
}

/// This is a small helper which determines a task selection depending on
/// given commandline parameters.
/// I.e. whether the default group, a set of tasks or a specific group should be selected.
/// `start`, `pause` and `kill` can target either of these three selections.
///
/// If no parameters are given, it falls back to the default group.
pub fn selection_from_params(
    all: bool,
    group: &Option<String>,
    task_ids: &[usize],
) -> TaskSelection {
    if all {
        TaskSelection::All
    } else if let Some(group) = group {
        TaskSelection::Group(group.clone())
    } else if !task_ids.is_empty() {
        TaskSelection::TaskIds(task_ids.to_owned())
    } else {
        TaskSelection::Group(PUEUE_DEFAULT_GROUP.into())
    }
}

impl Client {
    /// Initialize a new client.
    /// This includes establishing a connection to the daemon:
    /// - Connect to the daemon.
    /// - Authorize via secret.
    /// - Check for version incompatibilities.
    pub async fn new(settings: Settings, opt: CliArguments) -> Result<Self> {
        // Connect to daemon and get stream used for communication.
        let mut stream = get_client_stream(&settings.shared)
            .await
            .context("Failed to initialize stream.")?;

        // Next we do a handshake with the daemon
        // 1. Client sends the secret to the daemon.
        // 2. If successful, the daemon responds with their version.
        let secret = read_shared_secret(&settings.shared.shared_secret_path())?;
        send_bytes(&secret, &mut stream)
            .await
            .context("Failed to send secret.")?;

        // Receive and parse the response. We expect the daemon's version as UTF-8.
let version_bytes = receive_bytes(&mut stream) .await .context("Failed to receive version during handshake with daemon.")?; if version_bytes.is_empty() { bail!("Daemon went away after sending secret. Did you use the correct secret?") } let version = match String::from_utf8(version_bytes) { Ok(version) => version, Err(_) => { bail!("Daemon sent invalid UTF-8. Did you use the correct secret?") } }; // Info if the daemon runs a different version. // Backward compatibility should work, but some features might not work as expected. if version != crate_version!() { // Only show warnings if we aren't supposed to output json. let show_warning = if let Some(subcommand) = &opt.cmd { match subcommand { SubCommand::Status { json, .. } => !json, SubCommand::Log { json, .. } => !json, SubCommand::Group { json, .. } => !json, _ => true, } } else { true }; if show_warning { println!( "Different daemon version detected '{version}'. Consider restarting the daemon." ); } } // Determine whether we should color/style our output or not. // The user can explicitly disable/enable this, otherwise we check whether we are on a TTY. let style_enabled = match opt.color { ColorChoice::Auto => stdout().is_tty(), ColorChoice::Always => true, ColorChoice::Never => false, }; let style = OutputStyle::new(&settings, style_enabled); // Determine the subcommand that has been called by the user. // If no subcommand is given, we default to the `status` subcommand without any arguments. let subcommand = opt.cmd.unwrap_or(SubCommand::Status { json: false, group: None, query: Vec::new(), }); Ok(Client { settings, style, stream, subcommand, }) } /// This is the function where the actual communication and logic starts. /// At this point everything is initialized, the connection is up and /// we can finally start doing stuff. /// /// The command handling is split into "simple" and "complex" commands. pub async fn start(&mut self) -> Result<()> { // Return early, if the command has already been handled. 
        if self.handle_complex_command().await? {
            return Ok(());
        }

        // The handling of "generic" commands is encapsulated in this function.
        self.handle_simple_command().await?;

        Ok(())
    }

    /// Handle all complex client-side functionalities.
    /// Some functionalities need special handling and are contained in their own functions
    /// with their own communication code.
    /// Some examples for special handling include:
    /// - reading local files
    /// - sending multiple messages
    /// - interacting with other programs
    ///
    /// Returns `Ok(true)`, if the current command has been handled by this function.
    /// This indicates that the client can now shut down.
    /// If `Ok(false)` is returned, the client will continue and handle the Subcommand in the
    /// [Client::handle_simple_command] function.
    async fn handle_complex_command(&mut self) -> Result<bool> {
        // This match handles all "complex" commands.
        match &self.subcommand {
            SubCommand::Reset { force, .. } => {
                // Get the current state and check if there're any running tasks.
                // If there are, ask the user if they really want to reset the state.
                let state = get_state(&mut self.stream).await?;
                let running_tasks = state
                    .tasks
                    .iter()
                    .filter_map(|(id, task)| if task.is_running() { Some(*id) } else { None })
                    .collect::<Vec<_>>();

                if !running_tasks.is_empty() && !force {
                    self.handle_user_confirmation("remove running tasks", &running_tasks)?;
                }

                // Now that we got the user's consent, we return `false` and let the
                // `handle_simple_command` function process the subcommand as usual to send
                // a `reset` message to the daemon.
                Ok(false)
            }
            SubCommand::Edit {
                task_id,
                command,
                path,
                label,
                priority,
            } => {
                let message = edit(
                    &mut self.stream,
                    &self.settings,
                    *task_id,
                    *command,
                    *path,
                    *label,
                    *priority,
                )
                .await?;
                self.handle_response(message)?;
                Ok(true)
            }
            SubCommand::Wait {
                task_ids,
                group,
                all,
                quiet,
                status,
            } => {
                let selection = selection_from_params(*all, group, task_ids);
                wait(&mut self.stream, &self.style, selection, *quiet, status).await?;
                Ok(true)
            }
            SubCommand::Restart {
                task_ids,
                all_failed,
                failed_in_group,
                start_immediately,
                stashed,
                in_place,
                not_in_place,
                edit,
                edit_path,
                edit_label,
                edit_priority,
            } => {
                // `not_in_place` supersedes both other configs
                let in_place =
                    (self.settings.client.restart_in_place || *in_place) && !*not_in_place;

                restart(
                    &mut self.stream,
                    &self.settings,
                    task_ids.clone(),
                    *all_failed,
                    failed_in_group.clone(),
                    *start_immediately,
                    *stashed,
                    in_place,
                    *edit,
                    *edit_path,
                    *edit_label,
                    *edit_priority,
                )
                .await?;
                Ok(true)
            }
            SubCommand::Follow { task_id, lines } => {
                // If we're supposed to read the log files from the local system, we don't have to
                // do any communication with the daemon.
                // Thereby we handle this in a separate function.
                if self.settings.client.read_local_logs {
                    local_follow(
                        &mut self.stream,
                        &self.settings.shared.pueue_directory(),
                        task_id,
                        *lines,
                    )
                    .await?;
                    return Ok(true);
                }
                // Otherwise, we forward this to the `handle_simple_command` function.
                Ok(false)
            }
            SubCommand::FormatStatus { .. } => {
                format_state(
                    &mut self.stream,
                    &self.subcommand,
                    &self.style,
                    &self.settings,
                )
                .await?;
                Ok(true)
            }
            _ => Ok(false),
        }
    }

    /// Handle logic that's super generic on the client-side.
    /// This (almost) always follows a singular ping-pong pattern.
    /// One message to the daemon, one response, done.
    ///
    /// The only exception is streaming of log output.
    /// In that case, we send one request and continue receiving until the stream shuts down.
async fn handle_simple_command(&mut self) -> Result<()> { // Create the message that should be sent to the daemon // depending on the given commandline options. let message = self.get_message_from_opt()?; // Create the message payload and send it to the daemon. send_message(message, &mut self.stream).await?; // Check if we can receive the response from the daemon let mut response = receive_message(&mut self.stream).await?; // Handle the message. // In some scenarios, such as log streaming, we should continue receiving messages // from the daemon, which is why we have a while loop in place. while self.handle_response(response)? { response = receive_message(&mut self.stream).await?; } Ok(()) } /// Most returned messages can be handled in a generic fashion. /// However, some commands require to continuously receive messages (streaming). /// /// If this function returns `Ok(true)`, the parent function will continue to receive /// and handle messages from the daemon. Otherwise the client will simply exit. fn handle_response(&self, message: Message) -> Result<bool> { match message { Message::Success(text) => print_success(&self.style, &text), Message::Failure(text) => { print_error(&self.style, &text); std::process::exit(1); } Message::StatusResponse(state) => { let tasks = state.tasks.values().cloned().collect(); let output = print_state(*state, tasks, &self.subcommand, &self.style, &self.settings)?; println!("{output}"); } Message::LogResponse(task_logs) => { print_logs(task_logs, &self.subcommand, &self.style, &self.settings) } Message::GroupResponse(groups) => { let group_text = format_groups(groups, &self.subcommand, &self.style); println!("{group_text}"); } Message::Stream(text) => { print!("{text}"); io::stdout().flush().unwrap(); return Ok(true); } Message::Close => return Ok(false), _ => error!("Received unhandled response message"), }; Ok(false) } /// Prints a warning and prompt for a given action and tasks. /// Returns `Ok(())` if the action was confirmed. 
fn handle_user_confirmation(&self, action: &str, task_ids: &[usize]) -> Result<()> { // printing warning and prompt let task_ids = task_ids .iter() .map(|t| format!("task{t}")) .collect::<Vec<String>>() .join(", "); println!("You are trying to {action}: {task_ids}",); let mut input = String::new(); loop { print!("Do you want to continue [Y/n]: "); io::stdout().flush()?; input.clear(); io::stdin().read_line(&mut input)?; match input.chars().next().unwrap() { 'N' | 'n' => { println!("Aborted!"); std::process::exit(1); } '\n' | 'Y' | 'y' => { break; } _ => { continue; } } } Ok(()) } /// Convert the cli command into the message that's being sent to the server, /// so it can be understood by the daemon. /// /// This function is pretty large, but it consists mostly of simple conversions /// of [SubCommand] variant to a [Message] variant. fn get_message_from_opt(&self) -> Result<Message> { Ok(match &self.subcommand { SubCommand::Add { command, working_directory, escape, start_immediately, stashed, group, delay_until, dependencies, priority, label, print_task_id, } => { // Either take the user-specified path or default to the current working directory. let path = working_directory .as_ref() .map(|path| Ok(path.clone())) .unwrap_or_else(current_dir)?; let mut command = command.clone(); // The user can request to escape any special shell characters in all parameter strings before // we concatenated them to a single string. if *escape { command = command .iter() .map(|parameter| shell_escape::escape(Cow::from(parameter)).into_owned()) .collect(); } AddMessage { command: command.join(" "), path, // Catch the current environment for later injection into the task's process. 
envs: HashMap::from_iter(vars()), start_immediately: *start_immediately, stashed: *stashed, group: group_or_default(group), enqueue_at: *delay_until, dependencies: dependencies.to_vec(), priority: priority.to_owned(), label: label.clone(), print_task_id: *print_task_id, } .into() } SubCommand::Remove { task_ids } => { if self.settings.client.show_confirmation_questions { self.handle_user_confirmation("remove", task_ids)?; } Message::Remove(task_ids.clone()) } SubCommand::Stash { task_ids } => Message::Stash(task_ids.clone()), SubCommand::Switch { task_id_1, task_id_2, } => SwitchMessage { task_id_1: *task_id_1, task_id_2: *task_id_2, } .into(), SubCommand::Enqueue { task_ids, delay_until, } => EnqueueMessage { task_ids: task_ids.clone(), enqueue_at: *delay_until, } .into(), SubCommand::Start { task_ids, group, all, .. } => StartMessage { tasks: selection_from_params(*all, group, task_ids), } .into(), SubCommand::Pause { task_ids, group, wait, all, .. } => PauseMessage { tasks: selection_from_params(*all, group, task_ids), wait: *wait, } .into(), SubCommand::Kill { task_ids, group, all, signal, .. } => { if self.settings.client.show_confirmation_questions { self.handle_user_confirmation("kill", task_ids)?; } KillMessage { tasks: selection_from_params(*all, group, task_ids), signal: signal.clone(), } .into() } SubCommand::Send { task_id, input } => SendMessage { task_id: *task_id, input: input.clone(), } .into(), SubCommand::Group { cmd, .. } => match cmd { Some(GroupCommand::Add { name, parallel }) => GroupMessage::Add { name: name.to_owned(), parallel_tasks: parallel.to_owned(), }, Some(GroupCommand::Remove { name }) => GroupMessage::Remove(name.to_owned()), None => GroupMessage::List, } .into(), SubCommand::Status { .. } => Message::Status, SubCommand::Log { task_ids, lines, full, .. 
            } => {
                let lines = determine_log_line_amount(*full, lines);
                let message = LogRequestMessage {
                    task_ids: task_ids.clone(),
                    send_logs: !self.settings.client.read_local_logs,
                    lines,
                };
                Message::Log(message)
            }
            SubCommand::Follow { task_id, lines } => StreamRequestMessage {
                task_id: *task_id,
                lines: *lines,
            }
            .into(),
            SubCommand::Clean {
                successful_only,
                group,
            } => CleanMessage {
                successful_only: *successful_only,
                group: group.clone(),
            }
            .into(),
            SubCommand::Reset { force, .. } => {
                if self.settings.client.show_confirmation_questions && !force {
                    self.handle_user_confirmation("reset", &Vec::new())?;
                }
                ResetMessage {}.into()
            }
            SubCommand::Shutdown => Shutdown::Graceful.into(),
            SubCommand::Parallel {
                parallel_tasks,
                group,
            } => match parallel_tasks {
                Some(parallel_tasks) => {
                    let group = group_or_default(group);
                    ParallelMessage {
                        parallel_tasks: *parallel_tasks,
                        group,
                    }
                    .into()
                }
                None => GroupMessage::List.into(),
            },
            SubCommand::FormatStatus { .. } => bail!("FormatStatus has to be handled earlier"),
            SubCommand::Completions { .. } => bail!("Completions have to be handled earlier"),
            SubCommand::Restart { .. } => bail!("Restarts have to be handled earlier"),
            SubCommand::Edit { .. } => bail!("Edits have to be handled earlier"),
            SubCommand::Wait { .. } => bail!("Wait has to be handled earlier"),
        })
    }
}

// File: pueue-3.4.1/pueue/src/client/commands/edit.rs
use std::env;
use std::io::{Read, Seek, Write};
use std::path::{Path, PathBuf};

use anyhow::{bail, Context, Result};
use pueue_lib::settings::Settings;
use tempfile::NamedTempFile;

use pueue_lib::network::message::*;
use pueue_lib::network::protocol::*;
use pueue_lib::process_helper::compile_shell_command;

/// This function handles the logic for editing tasks.
/// At first, we request the daemon to send us the task to edit. /// This also results in the task being `Locked` on the daemon side, preventing it from being /// started or manipulated in any way, as long as we're editing. /// /// After receiving the task information, the user can then edit it in their editor. /// Upon exiting the text editor, the line will then be read and sent to the server pub async fn edit( stream: &mut GenericStream, settings: &Settings, task_id: usize, edit_command: bool, edit_path: bool, edit_label: bool, edit_priority: bool, ) -> Result<Message> { // Request the data to edit from the server and issue a task-lock while doing so. let init_message = Message::EditRequest(task_id); send_message(init_message, stream).await?; let init_response = receive_message(stream).await?; // In case we don't receive an EditResponse, something went wrong // Return the response to the parent function and let the client handle it // by the generic message handler. let Message::EditResponse(init_response) = init_response else { return Ok(init_response); }; // Edit the command if explicitly specified or if no flags are provided (the default) let edit_command = edit_command || !edit_path && !edit_label && !edit_priority; // Edit all requested properties. let edit_result = edit_task_properties( settings, &init_response.command, &init_response.path, &init_response.label, init_response.priority, edit_command, edit_path, edit_label, edit_priority, ); // Any error while editing will result in the client aborting the editing process. // However, as the daemon moves tasks that're edited into the `Locked` state, we cannot simply // exit the client. We rather have to notify the daemon that the editing process was interrupted. // In the following, we notify the daemon of any errors, so it can restore the task to its previous state. let edited_props = match edit_result { Ok(inner) => inner, Err(error) => { eprintln!("Encountered an error while editing. 
Trying to restore the task's status."); // Notify the daemon that something went wrong. let edit_message = Message::EditRestore(task_id); send_message(edit_message, stream).await?; let response = receive_message(stream).await?; match response { Message::Failure(message) | Message::Success(message) => { eprintln!("{message}"); } _ => eprintln!("Received unknown response: {response:?}"), }; return Err(error); } }; // Create a new message with the edited properties. let edit_message = EditMessage { task_id, command: edited_props.command, path: edited_props.path, label: edited_props.label, delete_label: edited_props.delete_label, priority: edited_props.priority, }; send_message(edit_message, stream).await?; Ok(receive_message(stream).await?) } #[derive(Default)] pub struct EditedProperties { pub command: Option<String>, pub path: Option<PathBuf>, pub label: Option<String>, pub delete_label: bool, pub priority: Option<i32>, } /// Takes several task properties and edit them if requested. /// The `edit_*` booleans are used to determine which fields should be edited. /// /// Fields that have been edited will be returned as their `Some(T)` equivalent. /// /// The returned values are: `(command, path, label)` #[allow(clippy::too_many_arguments)] pub fn edit_task_properties( settings: &Settings, original_command: &str, original_path: &Path, original_label: &Option<String>, original_priority: i32, edit_command: bool, edit_path: bool, edit_label: bool, edit_priority: bool, ) -> Result<EditedProperties> { let mut props = EditedProperties::default(); // Update the command if requested. if edit_command { props.command = Some(edit_line(settings, original_command)?); }; // Update the path if requested. if edit_path { let str_path = original_path .to_str() .context("Failed to convert task path to string")?; let changed_path = edit_line(settings, str_path)?; props.path = Some(PathBuf::from(changed_path)); } // Update the label if requested. 
if edit_label { let edited_label = edit_line(settings, &original_label.clone().unwrap_or_default())?; // If the user deletes the label in their editor, an empty string will be returned. // This is an indicator that the task should no longer have a label, in which case we // set the `delete_label` flag. if edited_label.is_empty() { props.delete_label = true; } else { props.label = Some(edited_label); }; } // Update the priority if requested. if edit_priority { props.priority = Some(edit_line(settings, &original_priority.to_string())?.parse()?); }; Ok(props) } /// This function enables the user to edit a task's details. /// Save any string to a temporary file, which is opened in the specified `$EDITOR`. /// As soon as the editor is closed, read the file content and return the line. fn edit_line(settings: &Settings, line: &str) -> Result<String> { // Create a temporary file with the command so we can edit it with the editor. let mut file = NamedTempFile::new().expect("Failed to create a temporary file"); writeln!(file, "{line}").context("Failed to write to temporary file.")?; // Get the editor that should be used from the environment. let editor = match env::var("EDITOR") { Err(_) => bail!("The '$EDITOR' environment variable couldn't be read. Aborting."), Ok(editor) => editor, }; // Compile the command to start the editor on the temporary file. // We escape the file path for good measure, but it shouldn't be necessary. let path = shell_escape::escape(file.path().to_string_lossy()); let editor_command = format!("{editor} {path}"); let status = compile_shell_command(settings, &editor_command) .status() .context("Editor command did somehow fail. Aborting.")?; if !status.success() { bail!("The editor exited with a non-zero code. Aborting."); } // Read the file. let mut file = file.into_file(); file.rewind() .context("Couldn't seek to start of file. 
Aborting.")?; let mut line = String::new(); file.read_to_string(&mut line) .context("Failed to read Command after editing")?; // Remove any trailing newlines from the command. while line.ends_with('\n') || line.ends_with('\r') { line.pop(); } Ok(line.trim().to_string()) } 07070100000026000081A4000000000000000000000001665F1B690000069A000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/src/client/commands/format_state.rsuse std::{ collections::BTreeMap, io::{self, prelude::*}, }; use anyhow::{Context, Result}; use pueue_lib::{network::protocol::GenericStream, settings::Settings, task::Task}; use crate::client::{ cli::SubCommand, display::{print_state, OutputStyle}, }; /// This function tries to read a map or list of JSON serialized [Task]s from `stdin`. /// The tasks will then get deserialized and displayed as a normal `status` command. /// The current group information is pulled from the daemon in a new `status` call. pub async fn format_state( stream: &mut GenericStream, command: &SubCommand, style: &OutputStyle, settings: &Settings, ) -> Result<()> { // Read the raw input to a buffer let mut stdin = io::stdin(); let mut buffer = Vec::new(); stdin .read_to_end(&mut buffer) .context("Failed to read json from stdin.")?; // Convert it to a valid utf8 stream. If this fails, it cannot be valid JSON. let json = String::from_utf8(buffer).context("Failed to convert stdin input to UTF8")?; // Try to deserialize the input as a map of tasks first. // If this doesn't work, try a list of tasks. let map_deserialize = serde_json::from_str::<BTreeMap<usize, Task>>(&json); let tasks: Vec<Task> = if let Ok(map) = map_deserialize { map.into_values().collect() } else { serde_json::from_str(&json).context("Failed to deserialize from JSON input.")? 
}; let state = super::get_state(stream) .await .context("Failed to get the current state from daemon")?; let output = print_state(state, tasks, command, style, settings)?; print!("{output}"); Ok(()) } 07070100000027000081A4000000000000000000000001665F1B69000007CA000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/src/client/commands/local_follow.rsuse std::path::Path; use anyhow::{bail, Result}; use pueue_lib::network::protocol::GenericStream; use crate::client::commands::get_state; use crate::client::display::follow_local_task_logs; /// This function reads a log file from the filesystem and streams it to `stdout`. /// This is the default behavior of `pueue`'s log reading logic, which is only possible /// if `pueued` runs on the same environment. /// /// `pueue follow` can be called without a `task_id`, in which case we check whether there's a /// single running task. If that's the case, we default to it. /// If there are multiple tasks, the user has to specify which task they want to follow. pub async fn local_follow( stream: &mut GenericStream, pueue_directory: &Path, task_id: &Option<usize>, lines: Option<usize>, ) -> Result<()> { let task_id = match task_id { Some(task_id) => *task_id, None => { // The user didn't provide a task id. // Check whether we can find a single running task to follow. 
let state = get_state(stream).await?; let running_ids: Vec<_> = state .tasks .iter() .filter_map(|(&id, t)| if t.is_running() { Some(id) } else { None }) .collect(); match running_ids.len() { 0 => { bail!("There are no running tasks."); } 1 => running_ids[0], _ => { let running_ids = running_ids .iter() .map(|id| id.to_string()) .collect::<Vec<_>>() .join(", "); bail!( "Multiple tasks are running, please select one of the following: {running_ids}", ); } } } }; follow_local_task_logs(stream, pueue_directory, task_id, lines).await?; Ok(()) } 07070100000028000081A4000000000000000000000001665F1B6900000742000000000000000000000000000000000000002D00000000pueue-3.4.1/pueue/src/client/commands/mod.rs//! This module contains the logic for all non-trivial commands, such as `follow`, `restart`, //! `wait`, etc. //! //! "non-trivial" vaguely means that we, for instance, have to do additional requests to the //! daemon, open some files on the filesystem, edit files and so on. //! All commands that cannot be simply handled by handling requests or using `pueue_lib`. use anyhow::Result; use pueue_lib::network::protocol::*; use pueue_lib::state::State; use pueue_lib::{network::message::Message, task::Task}; mod edit; mod format_state; mod local_follow; mod restart; mod wait; pub use edit::edit; pub use format_state::format_state; pub use local_follow::local_follow; pub use restart::restart; pub use wait::{wait, WaitTargetStatus}; // This is a helper function for easy retrieval of the current daemon state. // The current daemon state is often needed in more complex commands. pub async fn get_state(stream: &mut GenericStream) -> Result<State> { // Create the message payload and send it to the daemon. 
send_message(Message::Status, stream).await?; // Check if we can receive the response from the daemon let message = receive_message(stream).await?; match message { Message::StatusResponse(state) => Ok(*state), _ => unreachable!(), } } // This is a helper function for easy retrieval of a single task from the daemon state. pub async fn get_task(stream: &mut GenericStream, task_id: usize) -> Result<Option<Task>> { // Create the message payload and send it to the daemon. send_message(Message::Status, stream).await?; // Check if we can receive the response from the daemon let message = receive_message(stream).await?; let state = match message { Message::StatusResponse(state) => state, _ => unreachable!(), }; Ok(state.tasks.get(&task_id).cloned()) } 07070100000029000081A4000000000000000000000001665F1B69000015B8000000000000000000000000000000000000003100000000pueue-3.4.1/pueue/src/client/commands/restart.rsuse anyhow::{bail, Result}; use pueue_lib::network::message::*; use pueue_lib::network::protocol::*; use pueue_lib::settings::Settings; use pueue_lib::state::FilteredTasks; use pueue_lib::task::{Task, TaskResult, TaskStatus}; use crate::client::commands::edit::edit_task_properties; use crate::client::commands::get_state; /// When restarting tasks, the remote state is queried and an [AddMessage] /// is created from the existing task in the state. /// /// This is done on the client-side, so we can easily edit the task before restarting it. /// It's also necessary to get all failed tasks, in case the user specified the `--all-failed` flag. 
#[allow(clippy::too_many_arguments)] pub async fn restart( stream: &mut GenericStream, settings: &Settings, task_ids: Vec<usize>, all_failed: bool, failed_in_group: Option<String>, start_immediately: bool, stashed: bool, in_place: bool, edit_command: bool, edit_path: bool, edit_label: bool, edit_priority: bool, ) -> Result<()> { let new_status = if stashed { TaskStatus::Stashed { enqueue_at: None } } else { TaskStatus::Queued }; let state = get_state(stream).await?; // Filter to get done tasks let done_filter = |task: &Task| task.is_done(); let filtered_tasks = if all_failed || failed_in_group.is_some() { // Either all failed tasks or all failed tasks of a specific group need to be restarted. // First we have to get all finished tasks (Done) let filtered_tasks = if let Some(group) = failed_in_group { state.filter_tasks_of_group(done_filter, &group) } else { state.filter_tasks(done_filter, None) }; // Now pick the failed tasks. let failed = filtered_tasks .matching_ids .into_iter() .filter(|task_id| { let task = state.tasks.get(task_id).unwrap(); !matches!(task.status, TaskStatus::Done(TaskResult::Success)) }) .collect(); // We return an empty vec for the mismatching tasks, since there shouldn't be any. // Any user-provided ids are ignored in this mode. FilteredTasks { matching_ids: failed, ..Default::default() } } else if task_ids.is_empty() { bail!("Please provide the ids of the tasks you want to restart."); } else { state.filter_tasks(done_filter, Some(task_ids)) }; // Build a RestartMessage, if the tasks should be replaced instead of creating a copy of the // original task. This is only relevant if `in_place` is `true`. 
let mut restart_message = RestartMessage { tasks: Vec::new(), stashed, start_immediately, }; // Go through all `Done` tasks we found and restart them. for task_id in &filtered_tasks.matching_ids { let task = state.tasks.get(task_id).unwrap(); let mut new_task = Task::from_task(task); new_task.status = new_status.clone(); // Edit any properties, if requested. let edited_props = edit_task_properties( settings, &task.command, &task.path, &task.label, task.priority, edit_command, edit_path, edit_label, edit_priority, )?; // Add the tasks to the singular message, if we want to restart the tasks in-place. // And continue with the next task. The message will then be sent after the for loop. if in_place { restart_message.tasks.push(TaskToRestart { task_id: *task_id, command: edited_props.command, path: edited_props.path, label: edited_props.label, delete_label: edited_props.delete_label, priority: edited_props.priority, }); continue; } // In case we don't do in-place restarts, we have to add a new task. // Create an AddMessage to send the task to the daemon from the updated info and the old task. let add_task_message = AddMessage { command: edited_props.command.unwrap_or_else(|| task.command.clone()), path: edited_props.path.unwrap_or_else(|| task.path.clone()), envs: task.envs.clone(), start_immediately, stashed, group: task.group.clone(), enqueue_at: None, dependencies: Vec::new(), priority: edited_props.priority.or(Some(task.priority)), label: edited_props.label.or_else(|| task.label.clone()), print_task_id: false, }; // Send the cloned task to the daemon and abort on any failure messages. send_message(add_task_message, stream).await?; if let Message::Failure(message) = receive_message(stream).await? { bail!(message); }; } // Send the singular in-place restart message to the daemon. if in_place { send_message(restart_message, stream).await?; if let Message::Failure(message) = receive_message(stream).await? 
{ bail!(message); }; } if !filtered_tasks.matching_ids.is_empty() { println!("Restarted tasks: {:?}", filtered_tasks.matching_ids); } if !filtered_tasks.non_matching_ids.is_empty() { println!( "Couldn't restart tasks: {:?}", filtered_tasks.non_matching_ids ); } Ok(()) } 0707010000002A000081A4000000000000000000000001665F1B69000025FF000000000000000000000000000000000000002E00000000pueue-3.4.1/pueue/src/client/commands/wait.rsuse std::collections::{HashMap, HashSet}; use std::time::Duration; use anyhow::Result; use chrono::Local; use crossterm::style::{Attribute, Color}; use pueue_lib::network::message::TaskSelection; use pueue_lib::state::State; use strum_macros::{Display, EnumString}; use tokio::time::sleep; use pueue_lib::network::protocol::GenericStream; use pueue_lib::task::{Task, TaskResult, TaskStatus}; use crate::client::{commands::get_state, display::OutputStyle}; /// The `wait` subcommand can wait for these specific stati. #[derive(Default, Debug, Clone, PartialEq, Display, EnumString)] pub enum WaitTargetStatus { #[default] #[strum(serialize = "done", serialize = "Done")] Done, #[strum(serialize = "success", serialize = "Success")] Success, #[strum(serialize = "queued", serialize = "Queued")] Queued, #[strum(serialize = "running", serialize = "Running")] Running, } /// Wait until tasks are done. /// Tasks can be specified by: /// - Default queue (no parameter given) /// - Group /// - A list of task ids /// - All tasks (`all == true`) /// /// By default, this will output status changes of tasks to `stdout`. /// Pass `quiet == true` to suppress any logging. pub async fn wait( stream: &mut GenericStream, style: &OutputStyle, selection: TaskSelection, quiet: bool, target_status: &Option<WaitTargetStatus>, ) -> Result<()> { let mut first_run = true; // Create a list of tracked tasks. // This way we can track any status changes and if any new tasks are added. 
let mut watched_tasks: HashMap<usize, TaskStatus> = HashMap::new(); // Since tasks can be removed by users, we have to track tasks that actually finished. let mut finished_tasks: HashSet<usize> = HashSet::new(); // Wait for either a provided target status or the default (`Done`). let target_status = target_status.clone().unwrap_or_default(); loop { let state = get_state(stream).await?; let tasks = get_tasks(&state, &selection); if tasks.is_empty() { println!("No tasks found for selection {selection:?}"); return Ok(()); } // Iterate over all matching tasks for task in tasks.iter() { // Get the previous status of the task. // Add it to the watchlist if we don't know this task yet. let Some(previous_status) = watched_tasks.get(&task.id).cloned() else { if finished_tasks.contains(&task.id) { continue; } // Add new/unknown tasks to our watchlist watched_tasks.insert(task.id, task.status.clone()); if !quiet { log_new_task(task, style, first_run); } continue; }; // The task's status didn't change, continue as there's nothing to do. if previous_status == task.status { continue; } // Update the (previous) task status and log any changes watched_tasks.insert(task.id, task.status.clone()); if !quiet { log_status_change(previous_status, task, style); } } // We can stop waiting, if every task reached its target state. // We have to check all watched tasks and handle any tasks that get removed. let task_ids: Vec<usize> = watched_tasks.keys().cloned().collect(); for task_id in task_ids { // Get the correct task. If it no longer exists, remove it from the task list. let Some(task) = tasks.iter().find(|task| task.id == task_id) else { watched_tasks.remove(&task_id); continue; }; // Check if the task hit the target status. if reached_target_status(task, &target_status) { watched_tasks.remove(&task_id); finished_tasks.insert(task_id); } // If we're waiting for `Success`ful tasks, check if any of the tasks failed. // If so, exit with a `1`. 
if target_status == WaitTargetStatus::Success && task.failed() { std::process::exit(1); } } if watched_tasks.is_empty() { break; } // Sleep for a few seconds. We don't want to hurt the CPU. // However, we allow faster polling when in a test environment. let mut sleep_time = 2000; if std::env::var("PUEUED_TEST_ENV_VARIABLE").is_ok() { sleep_time = 250; } sleep(Duration::from_millis(sleep_time)).await; first_run = false; } Ok(()) } /// Check if a task reached the target status. /// Other stati that can only occur after that status will also qualify. fn reached_target_status(task: &Task, target_status: &WaitTargetStatus) -> bool { match target_status { WaitTargetStatus::Queued => { task.status == TaskStatus::Queued || task.status == TaskStatus::Running || matches!(task.status, TaskStatus::Done(_)) } WaitTargetStatus::Running => { task.status == TaskStatus::Running || matches!(task.status, TaskStatus::Done(_)) } WaitTargetStatus::Done => matches!(task.status, TaskStatus::Done(_)), WaitTargetStatus::Success => matches!(task.status, TaskStatus::Done(TaskResult::Success)), } } /// Get the correct tasks depending on a given TaskSelection. fn get_tasks(state: &State, selection: &TaskSelection) -> Vec<Task> { match selection { // Get all tasks TaskSelection::All => state.tasks.values().cloned().collect(), // Get all tasks with the given ids TaskSelection::TaskIds(task_ids) => state .tasks .iter() .filter(|(id, _)| task_ids.contains(id)) .map(|(_, task)| task.clone()) .collect(), // Get all tasks of a specific group TaskSelection::Group(group) => state .tasks .iter() .filter(|(_, task)| task.group.eq(group)) .map(|(_, task)| task.clone()) .collect::<Vec<Task>>(), } } /// Write a log line about a newly discovered task. 
fn log_new_task(task: &Task, style: &OutputStyle, first_run: bool) { let current_time = Local::now().format("%H:%M:%S").to_string(); let color = get_color_for_status(&task.status); let task_id = style.style_text(task.id, None, Some(Attribute::Bold)); let status = style.style_text(&task.status, Some(color), None); if !first_run { // Don't log non-active tasks in the initial loop. println!("{current_time} - New task {task_id} with status {status}"); return; } if task.is_running() { // Show currently running tasks for better user feedback. println!("{current_time} - Found active Task {task_id} with status {status}",); } } /// Write a log line about a status change of a task. fn log_status_change(previous_status: TaskStatus, task: &Task, style: &OutputStyle) { let current_time = Local::now().format("%H:%M:%S").to_string(); let task_id = style.style_text(task.id, None, Some(Attribute::Bold)); // Check if the task has finished. // In case it has, show the task's result in human-readable form. // Color some parts of the output depending on the task's outcome. 
if let TaskStatus::Done(result) = &task.status { let text = match result { TaskResult::Success => { let status = style.style_text("0", Some(Color::Green), None); format!("Task {task_id} succeeded with {status}") } TaskResult::DependencyFailed => { let status = style.style_text("failed dependencies", Some(Color::Red), None); format!("Task {task_id} failed due to {status}") } TaskResult::FailedToSpawn(_) => { let status = style.style_text("failed to spawn", Some(Color::Red), None); format!("Task {task_id} {status}") } TaskResult::Failed(exit_code) => { let status = style.style_text(exit_code, Some(Color::Red), Some(Attribute::Bold)); format!("Task {task_id} failed with {status}") } TaskResult::Errored => { let status = style.style_text("IO error", Some(Color::Red), Some(Attribute::Bold)); format!("Task {task_id} experienced an {status}.") } TaskResult::Killed => { let status = style.style_text("killed", Some(Color::Red), None); format!("Task {task_id} has been {status}") } }; println!("{current_time} - {text}"); return; } // The task didn't finish yet, but changed its state (e.g. from `Queued` to `Running`). // Inform the user about this change. 
let new_status_color = get_color_for_status(&task.status); let previous_status_color = get_color_for_status(&previous_status); let previous_status = style.style_text(previous_status, Some(previous_status_color), None); let new_status = style.style_text(&task.status, Some(new_status_color), None); println!("{current_time} - Task {task_id} changed from {previous_status} to {new_status}",); } fn get_color_for_status(task_status: &TaskStatus) -> Color { match task_status { TaskStatus::Running | TaskStatus::Done(_) => Color::Green, TaskStatus::Paused | TaskStatus::Locked => Color::White, _ => Color::White, } } 0707010000002B000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002500000000pueue-3.4.1/pueue/src/client/display0707010000002C000081A4000000000000000000000001665F1B6900000F82000000000000000000000000000000000000002F00000000pueue-3.4.1/pueue/src/client/display/follow.rsuse std::io::{self, Write}; use std::path::Path; use std::time::Duration; use anyhow::Result; use tokio::time::sleep; use pueue_lib::{ log::{get_log_file_handle, get_log_path, seek_to_last_lines}, network::protocol::GenericStream, }; use crate::client::commands::get_task; /// Follow the log output of a running task. /// /// If no task is specified, this will check for the following cases: /// /// - No running task: Wait until the task starts running. /// - Single running task: Follow the output of that task. /// - Multiple running tasks: Print out the list of possible tasks to follow. pub async fn follow_local_task_logs( stream: &mut GenericStream, pueue_directory: &Path, task_id: usize, lines: Option<usize>, ) -> Result<()> { // It might be that the task is not yet running. // Ensure that it exists and is started. loop { let Some(task) = get_task(stream, task_id).await? else { println!("Pueue: The task to be followed doesn't exist."); std::process::exit(1); }; // Task started up, we can start to follow. 
if task.is_running() || task.is_done() { break; } sleep(Duration::from_millis(1000)).await; } let mut handle = match get_log_file_handle(task_id, pueue_directory) { Ok(stdout) => stdout, Err(err) => { println!("Failed to get log file handles: {err}"); return Ok(()); } }; let path = get_log_path(task_id, pueue_directory); // Stdout handle to directly stream log file output to `io::stdout`. // This prevents us from allocating any large amounts of memory. let mut stdout = io::stdout(); // If `lines` is passed as an option, we only want to show the last `X` lines. // To achieve this, we seek the file handle to the start of the `Xth` line // from the end of the file. // The loop following this section will then only copy those last lines to stdout. if let Some(lines) = lines { if let Err(err) = seek_to_last_lines(&mut handle, lines) { println!("Error seeking to last lines from log: {err}"); } } // The interval at which the task log is checked and streamed to stdout. let log_check_interval = 250; // We check in regular intervals whether the task finished. // This is something we don't want to do in every loop, as we have to communicate with // the daemon. That's why we only do it now and then. let task_check_interval = log_check_interval * 2; let mut last_check = 0; loop { // Check whether the file still exists. Exit if it doesn't. if !path.exists() { println!("Pueue: Log file has gone away. Has the task been removed?"); return Ok(()); } // Read the next chunk of text from the last position. if let Err(err) = io::copy(&mut handle, &mut stdout) { println!("Pueue: Error while reading file: {err}"); return Ok(()); }; // Flush the stdout buffer to actually print the output. if let Err(err) = stdout.flush() { println!("Pueue: Error while flushing stdout: {err}"); return Ok(()); }; // Check every `task_check_interval` whether the task: // 1. Still exists // 2. Is still running // // In case either is not, exit. 
if (last_check % task_check_interval) == 0 { let Some(task) = get_task(stream, task_id).await? else { println!("Pueue: The followed task has been removed."); std::process::exit(1); }; // Task exited by itself. We can stop following. if !task.is_running() { return Ok(()); } } last_check += log_check_interval; let timeout = Duration::from_millis(log_check_interval); sleep(timeout).await; } } 0707010000002D000081A4000000000000000000000001665F1B69000007EC000000000000000000000000000000000000002E00000000pueue-3.4.1/pueue/src/client/display/group.rsuse crossterm::style::{Attribute, Color}; use pueue_lib::{ network::message::GroupResponseMessage, state::{Group, GroupStatus}, }; use crate::client::cli::SubCommand; use super::OutputStyle; /// Print some info about the daemon's current groups. /// This is used when calling `pueue group`. pub fn format_groups( message: GroupResponseMessage, cli_command: &SubCommand, style: &OutputStyle, ) -> String { // Get commandline options to check whether we should return the groups as json. let json = match cli_command { SubCommand::Group { json, .. } => *json, // If `parallel` is called without an argument, the group info is shown. SubCommand::Parallel { parallel_tasks: None, group: None, } => false, _ => { panic!("Got wrong Subcommand {cli_command:?} in format_groups. This shouldn't happen.") } }; if json { return serde_json::to_string(&message.groups).unwrap(); } let mut text = String::new(); let mut group_iter = message.groups.iter().peekable(); while let Some((name, group)) = group_iter.next() { let styled = get_group_headline(name, group, style); text.push_str(&styled); if group_iter.peek().is_some() { text.push('\n'); } } text } /// Return some nicely formatted info about a given group. /// This is also used as a headline that's displayed above group's task tables. 
pub fn get_group_headline(name: &str, group: &Group, style: &OutputStyle) -> String { // Style group name let name = style.style_text(format!("Group \"{name}\""), None, Some(Attribute::Bold)); // Print the current state of the group. let status = match group.status { GroupStatus::Running => style.style_text("running", Some(Color::Green), None), GroupStatus::Paused => style.style_text("paused", Some(Color::Yellow), None), }; format!("{} ({} parallel): {}", name, group.parallel_tasks, status) } 0707010000002E000081A4000000000000000000000001665F1B6900000BAD000000000000000000000000000000000000002F00000000pueue-3.4.1/pueue/src/client/display/helper.rsuse std::collections::BTreeMap; use chrono::{DateTime, Local, LocalResult}; use pueue_lib::{settings::Settings, task::Task}; /// Try to get the start of the current date to the best of our abilities. /// Throw an error, if we can't. pub fn start_of_today() -> DateTime<Local> { let result = Local::now() .date_naive() .and_hms_opt(0, 0, 0) .expect("Failed to find start of today.") .and_local_timezone(Local); // Try to get the start of the current date. // If there's no unambiguous result for today's midnight, we pick the first value as a backup. match result { LocalResult::None => panic!("Failed to find start of today."), LocalResult::Single(today) => today, LocalResult::Ambiguous(today, _) => today, } } /// Sort given tasks by their groups. /// This is needed to print a table for each group. pub fn sort_tasks_by_group(tasks: Vec<Task>) -> BTreeMap<String, Vec<Task>> { // We use a BTreeMap, since groups should be ordered alphabetically by their name let mut sorted_task_groups = BTreeMap::new(); for task in tasks.into_iter() { if !sorted_task_groups.contains_key(&task.group) { sorted_task_groups.insert(task.group.clone(), Vec::new()); } sorted_task_groups.get_mut(&task.group).unwrap().push(task); } sorted_task_groups } /// Returns the formatted `start` and `end` text for a given task. /// /// 1. 
If the start || end is today, skip the date. /// 2. Otherwise show the date in both. /// /// If the task doesn't have a start and/or end yet, an empty string will be returned /// for the respective field. pub fn formatted_start_end(task: &Task, settings: &Settings) -> (String, String) { // Get the start time. // If the task didn't start yet, just return two empty strings. let start = match task.start { Some(start) => start, None => return ("".into(), "".into()), }; // If the task started today, just show the time. // Otherwise show the full date and time. let started_today = start >= start_of_today(); let formatted_start = if started_today { start .format(&settings.client.status_time_format) .to_string() } else { start .format(&settings.client.status_datetime_format) .to_string() }; // Get finish time, if already set. Otherwise only return the formatted start. let end = match task.end { Some(end) => end, None => return (formatted_start, "".into()), }; // If the task ended today we only show the time. // In all other circumstances, we show the full date. 
let finished_today = end >= start_of_today(); let formatted_end = if finished_today { end.format(&settings.client.status_time_format).to_string() } else { end.format(&settings.client.status_datetime_format) .to_string() }; (formatted_start, formatted_end) } 0707010000002F000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002900000000pueue-3.4.1/pueue/src/client/display/log07070100000030000081A4000000000000000000000001665F1B6900000B54000000000000000000000000000000000000003100000000pueue-3.4.1/pueue/src/client/display/log/json.rsuse std::collections::{BTreeMap, HashMap}; use std::io::Read; use serde_derive::{Deserialize, Serialize}; use snap::read::FrameDecoder; use pueue_lib::log::{get_log_file_handle, read_last_lines}; use pueue_lib::network::message::TaskLogMessage; use pueue_lib::settings::Settings; use pueue_lib::task::Task; /// This is the output struct used for the JSON representation of a task and its log output. #[derive(Clone, Debug, Deserialize, Serialize)] pub struct TaskLog { pub task: Task, pub output: String, } pub fn print_log_json( task_log_messages: BTreeMap<usize, TaskLogMessage>, settings: &Settings, lines: Option<usize>, ) { let mut tasks: BTreeMap<usize, Task> = BTreeMap::new(); let mut task_log: BTreeMap<usize, String> = BTreeMap::new(); // Convert the TaskLogMessage into a proper JSON serializable format. // Output in TaskLogMessages, if it exists, is compressed. // We need to decompress and convert to normal strings. 
for (id, message) in task_log_messages { tasks.insert(id, message.task); if settings.client.read_local_logs { let output = get_local_log(settings, id, lines); task_log.insert(id, output); } else { let output = get_remote_log(message.output); task_log.insert(id, output); } } // Now assemble the final struct that will be returned let mut json = BTreeMap::new(); for (id, mut task) in tasks { let (id, output) = task_log.remove_entry(&id).unwrap(); task.envs = HashMap::new(); json.insert(id, TaskLog { task, output }); } println!("{}", serde_json::to_string(&json).unwrap()); } /// Read logs directly from local files for a specific task. fn get_local_log(settings: &Settings, id: usize, lines: Option<usize>) -> String { let mut file = match get_log_file_handle(id, &settings.shared.pueue_directory()) { Ok(file) => file, Err(err) => { return format!("(Pueue error) Failed to get log file handle: {err}"); } }; // Only return the last few lines. if let Some(lines) = lines { return read_last_lines(&mut file, lines); } // Read the whole local log output. let mut output = String::new(); if let Err(error) = file.read_to_string(&mut output) { return format!("(Pueue error) Failed to read local log output file: {error:?}"); }; output } /// Read logs from compressed remote logs. /// If logs don't exist, an empty string will be returned. 
fn get_remote_log(output_bytes: Option<Vec<u8>>) -> String { let Some(bytes) = output_bytes else { return String::new(); }; let mut decoder = FrameDecoder::new(&bytes[..]); let mut output = String::new(); if let Err(error) = decoder.read_to_string(&mut output) { return format!("(Pueue error) Failed to decompress remote log output: {error:?}"); } output } 07070100000031000081A4000000000000000000000001665F1B69000009B5000000000000000000000000000000000000003200000000pueue-3.4.1/pueue/src/client/display/log/local.rsuse std::fs::File; use std::io::{self, Stdout}; use crossterm::style::{Attribute, Color}; use pueue_lib::log::{get_log_file_handle, seek_to_last_lines}; use pueue_lib::settings::Settings; use crate::client::display::OutputStyle; /// The daemon didn't send any log output, since we didn't request any. /// If that's the case, read the log file from the local pueue directory. pub fn print_local_log( task_id: usize, style: &OutputStyle, settings: &Settings, lines: Option<usize>, ) { let mut file = match get_log_file_handle(task_id, &settings.shared.pueue_directory()) { Ok(file) => file, Err(err) => { println!("Failed to get log file handle: {err}"); return; } }; // Stdout handler to directly write log file output to io::stdout // without having to load anything into memory. let mut stdout = io::stdout(); print_local_file( &mut stdout, &mut file, &lines, style.style_text("output:", Some(Color::Green), Some(Attribute::Bold)), ); } /// Print a local log file of a task. fn print_local_file(stdout: &mut Stdout, file: &mut File, lines: &Option<usize>, header: String) { if let Ok(metadata) = file.metadata() { if metadata.len() != 0 { // Indicates whether the full log output is shown or just the last part of it. 
let mut output_complete = true; // Only print the last lines if requested if let Some(lines) = lines { match seek_to_last_lines(file, *lines) { Ok(complete) => output_complete = complete, Err(err) => { println!("Failed reading local log file: {err}"); return; } } } // Add a hint if we should limit the output to X lines **and** there are actually more // lines than that given limit. let mut line_info = String::new(); if !output_complete { line_info = lines.map_or(String::new(), |lines| format!(" (last {lines} lines)")); } // Print a newline between the task information and the first output. println!("\n{header}{line_info}"); // Print everything if let Err(err) = io::copy(file, stdout) { println!("Failed reading local log file: {err}"); }; } } } 07070100000032000081A4000000000000000000000001665F1B6900001BF5000000000000000000000000000000000000003000000000pueue-3.4.1/pueue/src/client/display/log/mod.rsuse std::collections::BTreeMap; use comfy_table::{Attribute as ComfyAttribute, Cell, CellAlignment, Table}; use crossterm::style::Color; use pueue_lib::network::message::TaskLogMessage; use pueue_lib::settings::Settings; use pueue_lib::task::{Task, TaskResult, TaskStatus}; use super::OutputStyle; use crate::client::cli::SubCommand; mod json; mod local; mod remote; use json::*; use local::*; use remote::*; /// Determine how many lines of output should be printed/returned. /// `None` implies that all lines are printed. /// /// By default, everything is returned for single tasks and only some lines for multiple tasks. /// `json` is an exception to this: in json mode, we always return only some lines /// (unless explicitly requested otherwise). 
/// /// `full` always forces the full log output. /// `lines` forces a specific number of lines. pub fn determine_log_line_amount(full: bool, lines: &Option<usize>) -> Option<usize> { if full { None } else if let Some(lines) = lines { Some(*lines) } else { // By default, only some lines are shown per task Some(15) } } /// Print the log output of finished tasks. /// Either print the logs of every task /// or only print the logs of the specified tasks. pub fn print_logs( mut task_logs: BTreeMap<usize, TaskLogMessage>, cli_command: &SubCommand, style: &OutputStyle, settings: &Settings, ) { // Get the actual commandline options. // This is necessary to know how we should display/return the log information. let SubCommand::Log { json, task_ids, lines, full, } = cli_command else { panic!("Got wrong Subcommand {cli_command:?} in print_log. This shouldn't happen"); }; let lines = determine_log_line_amount(*full, lines); // Return the server response in json representation. if *json { print_log_json(task_logs, settings, lines); return; } // Check some early return conditions if task_ids.is_empty() && task_logs.is_empty() { println!("There are no finished tasks"); return; } if !task_ids.is_empty() && task_logs.is_empty() { println!("There are no finished tasks for your specified ids"); return; } // Iterate over each task and print the respective log. let mut task_iter = task_logs.iter_mut().peekable(); while let Some((_, task_log)) = task_iter.next() { print_log(task_log, style, settings, lines); // Add a newline if there is another task that's going to be printed. if let Some((_, task_log)) = task_iter.peek() { if matches!( &task_log.task.status, TaskStatus::Done(_) | TaskStatus::Running | TaskStatus::Paused, ) { println!(); } } } } /// Print the log of a single task. /// /// message: The message returned by the daemon. This message includes all /// requested tasks and the tasks' logs, if we don't read local logs. 
/// lines: Whether we should reduce the log output of each task to a specific number of lines. /// `None` implies that everything should be printed. /// This is only relevant if we read local logs. fn print_log( message: &mut TaskLogMessage, style: &OutputStyle, settings: &Settings, lines: Option<usize>, ) { let task = &message.task; // We only show logs of finished or running tasks. if !matches!( task.status, TaskStatus::Done(_) | TaskStatus::Running | TaskStatus::Paused ) { return; } print_task_info(task, style); if settings.client.read_local_logs { print_local_log(message.task.id, style, settings, lines); } else if message.output.is_some() { print_remote_log(message, style, lines); } else { println!("Logs requested from pueue daemon, but none received. Please report this bug."); } } /// Print some information about a task, which is displayed on top of the task's log output. fn print_task_info(task: &Task, style: &OutputStyle) { // Print task id and exit code. let task_cell = style.styled_cell( format!("Task {}: ", task.id), None, Some(ComfyAttribute::Bold), ); let (exit_status, color) = match &task.status { TaskStatus::Paused => ("paused".into(), Color::White), TaskStatus::Running => ("running".into(), Color::Yellow), TaskStatus::Done(result) => match result { TaskResult::Success => ("completed successfully".into(), Color::Green), TaskResult::Failed(exit_code) => { (format!("failed with exit code {exit_code}"), Color::Red) } TaskResult::FailedToSpawn(_err) => ("Failed to spawn".to_string(), Color::Red), TaskResult::Killed => ("killed by system or user".into(), Color::Red), TaskResult::Errored => ("some IO error.\n Check daemon log.".into(), Color::Red), TaskResult::DependencyFailed => ("dependency failed".into(), Color::Red), }, _ => (task.status.to_string(), Color::White), }; let status_cell = style.styled_cell(exit_status, Some(color), None); // The styling of the task number and status is done by a single-row table. 
let mut table = Table::new(); table.load_preset("││─ └──┘ ─ ┌┐ "); table.set_content_arrangement(comfy_table::ContentArrangement::Dynamic); table.set_header(vec![task_cell, status_cell]); // Explicitly force styling, in case we aren't on a tty, but `--color=always` is set. if style.enabled { table.enforce_styling(); } println!("{table}"); // All other information is aligned and styled by using a separate table. let mut table = Table::new(); table.load_preset(comfy_table::presets::NOTHING); table.set_content_arrangement(comfy_table::ContentArrangement::Dynamic); // Command and path table.add_row(vec![ style.styled_cell("Command:", None, Some(ComfyAttribute::Bold)), Cell::new(&task.command), ]); table.add_row(vec![ style.styled_cell("Path:", None, Some(ComfyAttribute::Bold)), Cell::new(task.path.to_string_lossy()), ]); if let Some(label) = &task.label { table.add_row(vec![ style.styled_cell("Label:", None, Some(ComfyAttribute::Bold)), Cell::new(label), ]); } // Start and end time if let Some(start) = task.start { table.add_row(vec![ style.styled_cell("Start:", None, Some(ComfyAttribute::Bold)), Cell::new(start.to_rfc2822()), ]); } if let Some(end) = task.end { table.add_row(vec![ style.styled_cell("End:", None, Some(ComfyAttribute::Bold)), Cell::new(end.to_rfc2822()), ]); } // Set the padding of the left column to 0 to align the keys to the right. let first_column = table.column_mut(0).unwrap(); first_column.set_cell_alignment(CellAlignment::Right); first_column.set_padding((0, 0)); println!("{table}"); } 07070100000033000081A4000000000000000000000001665F1B69000006F4000000000000000000000000000000000000003300000000pueue-3.4.1/pueue/src/client/display/log/remote.rsuse std::io; use anyhow::Result; use crossterm::style::{Attribute, Color}; use snap::read::FrameDecoder; use pueue_lib::network::message::TaskLogMessage; use super::OutputStyle; /// Prints log output received from the daemon. 
/// This function is only called after the caller has ensured that the task's output is `Some`, /// but we still match on `Some` defensively instead of unwrapping. pub fn print_remote_log(task_log: &TaskLogMessage, style: &OutputStyle, lines: Option<usize>) { if let Some(bytes) = task_log.output.as_ref() { if !bytes.is_empty() { // Add a hint if we should limit the output to X lines **and** there are actually more // lines than that given limit. let mut line_info = String::new(); if !task_log.output_complete { line_info = lines.map_or(String::new(), |lines| format!(" (last {lines} lines)")); } // Print a newline between the task information and the first output. let header = style.style_text("output:", Some(Color::Green), Some(Attribute::Bold)); println!("\n{header}{line_info}"); if let Err(err) = decompress_and_print_remote_log(bytes) { println!("Error while parsing stdout: {err}"); } } } } /// We cannot easily stream log output from the daemon to the client (yet). /// Right now, the output is compressed in the daemon and sent as a single payload to the /// client. In here, we take that payload, decompress it and stream it directly to stdout. fn decompress_and_print_remote_log(bytes: &[u8]) -> Result<()> { let mut decompressor = FrameDecoder::new(bytes); let stdout = io::stdout(); let mut write = stdout.lock(); io::copy(&mut decompressor, &mut write)?; Ok(()) } 07070100000034000081A4000000000000000000000001665F1B690000039C000000000000000000000000000000000000002C00000000pueue-3.4.1/pueue/src/client/display/mod.rs//! This module contains all logic for printing or displaying structured information about the //! daemon. //! //! This includes formatting of task tables, group info, log inspection and log following. 
mod follow; mod group; pub mod helper; mod log; mod state; pub mod style; pub mod table_builder; use crossterm::style::Color; // Re-exports pub use self::follow::follow_local_task_logs; pub use self::group::format_groups; pub use self::log::{determine_log_line_amount, print_logs}; pub use self::state::print_state; pub use self::style::OutputStyle; /// Used to style any generic success message from the daemon. pub fn print_success(_style: &OutputStyle, message: &str) { println!("{message}"); } /// Used to style any generic failure message from the daemon. pub fn print_error(style: &OutputStyle, message: &str) { let styled = style.style_text(message, Some(Color::Red), None); println!("{styled}"); } 07070100000035000081A4000000000000000000000001665F1B6900001664000000000000000000000000000000000000002E00000000pueue-3.4.1/pueue/src/client/display/state.rsuse anyhow::Result; use pueue_lib::settings::Settings; use pueue_lib::state::{State, PUEUE_DEFAULT_GROUP}; use pueue_lib::task::Task; use super::{helper::*, table_builder::TableBuilder, OutputStyle}; use crate::client::cli::SubCommand; use crate::client::display::group::get_group_headline; use crate::client::query::apply_query; /// Get the output for the state of the daemon in a nicely formatted table. /// If there are multiple groups, each group with a task will have its own table. /// /// We pass the tasks as a separate parameter and as a list. /// This allows us to print the tasks in the order passed to the `format-status` subcommand. pub fn print_state( mut state: State, mut tasks: Vec<Task>, cli_command: &SubCommand, style: &OutputStyle, settings: &Settings, ) -> Result<String> { let mut output = String::new(); let (json, group_only, query) = match cli_command { SubCommand::Status { json, group, query } => (*json, group.clone(), Some(query)), SubCommand::FormatStatus { group } => (false, group.clone(), None), _ => panic!("Got wrong Subcommand {cli_command:?} in print_state. 
This shouldn't happen!"), }; let mut table_builder = TableBuilder::new(settings, style); if let Some(query) = query { let query_result = apply_query(&query.join(" "), &group_only)?; table_builder.set_visibility_by_rules(&query_result.selected_columns); tasks = query_result.apply_filters(tasks); tasks = query_result.order_tasks(tasks); tasks = query_result.limit_tasks(tasks); } // If the json flag is specified, print the state as json and exit. if json { if query.is_some() { state.tasks = tasks.into_iter().map(|task| (task.id, task)).collect(); } output.push_str(&serde_json::to_string(&state).unwrap()); return Ok(output); } if let Some(group) = group_only { print_single_group(state, tasks, style, group, table_builder, &mut output); return Ok(output); } print_all_groups(state, tasks, style, table_builder, &mut output); Ok(output) } /// The user requested only a single group to be displayed. /// /// Print this group or show an error if this group doesn't exist. fn print_single_group( state: State, tasks: Vec<Task>, style: &OutputStyle, group_name: String, table_builder: TableBuilder, output: &mut String, ) { // Sort all tasks by their respective group; let mut sorted_tasks = sort_tasks_by_group(tasks); let Some(group) = state.groups.get(&group_name) else { eprintln!("There exists no group \"{group_name}\""); return; }; // Only a single group is requested. Print that group and return. let tasks = sorted_tasks.entry(group_name.clone()).or_default(); let headline = get_group_headline(&group_name, group, style); output.push_str(&headline); // Show a message if the requested group doesn't have any tasks. if tasks.is_empty() { output.push_str(&format!( "\nTask list is empty. Add tasks with `pueue add -g {group_name} -- [cmd]`" )); return; } let table = table_builder.build(tasks); output.push_str(&format!("\n{table}")); } /// Print all groups. All tasks will be shown in the table of their assigned group. /// /// This will create multiple tables, one table for each group. 
fn print_all_groups( state: State, tasks: Vec<Task>, style: &OutputStyle, table_builder: TableBuilder, output: &mut String, ) { // Early exit and hint if there are no tasks in the queue. // Print the state of the default group anyway, since that's the information users want to // see most of the time. if state.tasks.is_empty() { let headline = get_group_headline( PUEUE_DEFAULT_GROUP, state.groups.get(PUEUE_DEFAULT_GROUP).unwrap(), style, ); output.push_str(&format!("{headline}\n")); output.push_str("\nTask list is empty. Add tasks with `pueue add -- [cmd]`"); return; } // Sort all tasks by their respective group. let sorted_tasks = sort_tasks_by_group(tasks); // Always print the default queue at the very top, if no specific group is requested. if sorted_tasks.contains_key(PUEUE_DEFAULT_GROUP) { let tasks = sorted_tasks.get(PUEUE_DEFAULT_GROUP).unwrap(); let headline = get_group_headline( PUEUE_DEFAULT_GROUP, state.groups.get(PUEUE_DEFAULT_GROUP).unwrap(), style, ); output.push_str(&headline); let table = table_builder.clone().build(tasks); output.push_str(&format!("\n{table}")); // Add a newline if there are further groups to be printed if sorted_tasks.len() > 1 { output.push('\n'); } } // Print a table for every other group that has any tasks let mut sorted_iter = sorted_tasks.iter().peekable(); while let Some((group, tasks)) = sorted_iter.next() { // We always want to print the default group at the very top. // That's why we print it before this loop and skip it in here. 
if group.eq(PUEUE_DEFAULT_GROUP) { continue; } let headline = get_group_headline(group, state.groups.get(group).unwrap(), style); if !output.is_empty() { output.push('\n'); } output.push_str(&headline); let table = table_builder.clone().build(tasks); output.push_str(&format!("\n{table}")); // Add a newline between groups if sorted_iter.peek().is_some() { output.push('\n'); } } } 07070100000036000081A4000000000000000000000001665F1B6900000D87000000000000000000000000000000000000002E00000000pueue-3.4.1/pueue/src/client/display/style.rsuse pueue_lib::settings::Settings; use comfy_table::{Attribute as ComfyAttribute, Cell, Color as ComfyColor}; use crossterm::style::{style, Attribute, Color, Stylize}; /// OutputStyle wrapper for actual colors depending on settings /// - Enables styles if color mode is 'always', or if color mode is 'auto' and output is a tty. /// - Using dark colors if dark_mode is enabled #[derive(Debug, Clone)] pub struct OutputStyle { /// Whether or not ANSI styling is enabled pub enabled: bool, /// Whether dark mode is enabled. pub dark_mode: bool, } impl OutputStyle { /// init color-scheme depending on settings pub const fn new(settings: &Settings, enabled: bool) -> Self { Self { enabled, dark_mode: settings.client.dark_mode, } } /// Return the desired crossterm color depending on whether we're in dark mode or not. fn map_color(&self, color: Color) -> Color { if self.dark_mode { match color { Color::Green => Color::DarkGreen, Color::Red => Color::DarkRed, Color::Yellow => Color::DarkYellow, _ => color, } } else { color } } /// Return the desired comfy_table color depending on whether we're in dark mode or not. 
fn map_comfy_color(&self, color: Color) -> ComfyColor { if self.dark_mode { return match color { Color::Green => ComfyColor::DarkGreen, Color::Red => ComfyColor::DarkRed, Color::Yellow => ComfyColor::DarkYellow, _ => ComfyColor::White, }; } match color { Color::Green => ComfyColor::Green, Color::Red => ComfyColor::Red, Color::Yellow => ComfyColor::Yellow, _ => ComfyColor::White, } } /// A helper method for easily styling text, /// while also preventing styling if we're printing to a non-tty output. /// If there's any kind of styling in the code, it should be done with the help of this method. pub fn style_text<T: ToString>( &self, text: T, color: Option<Color>, attribute: Option<Attribute>, ) -> String { let text = text.to_string(); // Styling disabled if !self.enabled { return text; } let mut styled = style(text); if let Some(color) = color { styled = styled.with(self.map_color(color)); } if let Some(attribute) = attribute { styled = styled.attribute(attribute); } styled.to_string() } /// A helper method to produce styled Comfy-table cells. /// Use this anywhere you need to create Comfy-table cells, so that the correct /// colors are used depending on the current color mode and dark-mode preset. 
pub fn styled_cell<T: ToString>( &self, text: T, color: Option<Color>, attribute: Option<ComfyAttribute>, ) -> Cell { let mut cell = Cell::new(text.to_string()); // Styling disabled if !self.enabled { return cell; } if let Some(color) = color { cell = cell.fg(self.map_comfy_color(color)); } if let Some(attribute) = attribute { cell = cell.add_attribute(attribute); } cell } } 07070100000037000081A4000000000000000000000001665F1B6900002669000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/src/client/display/table_builder.rsuse chrono::TimeDelta; use comfy_table::presets::UTF8_HORIZONTAL_ONLY; use comfy_table::{Cell, ContentArrangement, Row, Table}; use crossterm::style::Color; use pueue_lib::settings::Settings; use pueue_lib::task::{Task, TaskResult, TaskStatus}; use super::helper::{formatted_start_end, start_of_today}; use super::OutputStyle; use crate::client::query::Rule; /// This builder is responsible for determining which table columns should be displayed and /// building a full [comfy_table] from a list of given [Task]s. #[derive(Debug, Clone)] pub struct TableBuilder<'a> { settings: &'a Settings, style: &'a OutputStyle, /// Whether the columns to be displayed are explicitly selected by the user. /// If that's the case, we won't do any automated checks whether columns should be displayed or /// not. selected_columns: bool, /// The following fields represent which columns should be displayed when executing /// `pueue status`. `true` for any column means that it'll be shown in the table. 
id: bool, status: bool, priority: bool, enqueue_at: bool, dependencies: bool, label: bool, command: bool, path: bool, start: bool, end: bool, } impl<'a> TableBuilder<'a> { pub fn new(settings: &'a Settings, style: &'a OutputStyle) -> Self { Self { settings, style, selected_columns: false, id: true, status: true, priority: false, enqueue_at: false, dependencies: false, label: false, command: true, path: true, start: true, end: true, } } pub fn build(mut self, tasks: &[Task]) -> Table { self.determine_special_columns(tasks); let mut table = Table::new(); table .set_content_arrangement(ContentArrangement::Dynamic) .load_preset(UTF8_HORIZONTAL_ONLY) .set_header(self.build_header()) .add_rows(self.build_task_rows(tasks)); // Explicitly force styling, in case we aren't on a tty, but `--color=always` is set. if self.style.enabled { table.enforce_styling(); } table } /// By default, several columns aren't shown until there's at least one task with relevant data. /// This function determines whether any of those columns should be shown. fn determine_special_columns(&mut self, tasks: &[Task]) { if self.selected_columns { return; } // Check whether there are any tasks with a non-default priority. if tasks.iter().any(|task| task.priority != 0) { self.priority = true; } // Check whether there are any delayed tasks. let has_delayed_tasks = tasks.iter().any(|task| { matches!( task.status, TaskStatus::Stashed { enqueue_at: Some(_) } ) }); if has_delayed_tasks { self.enqueue_at = true; } // Check whether there are any tasks with dependencies. if tasks.iter().any(|task| !task.dependencies.is_empty()) { self.dependencies = true; } // Check whether there are any tasks with a label. if tasks.iter().any(|task| task.label.is_some()) { self.label = true; } } /// Take a list of given [pest] rules from our `crate::client::query::column_selection::apply` logic. /// Set the column visibility based on these rules. 
pub fn set_visibility_by_rules(&mut self, rules: &[Rule]) { // Don't change anything, if there're no rules if rules.is_empty() { return; } // First of all, make all columns invisible. self.id = false; self.status = false; self.priority = false; self.enqueue_at = false; self.dependencies = false; self.label = false; self.command = false; self.path = false; self.start = false; self.end = false; // Make sure we don't do any default column visibility checks of our own. self.selected_columns = true; for rule in rules { match rule { Rule::column_id => self.id = true, Rule::column_status => self.status = true, Rule::column_priority => self.priority = true, Rule::column_enqueue_at => self.enqueue_at = true, Rule::column_dependencies => self.dependencies = true, Rule::column_label => self.label = true, Rule::column_command => self.command = true, Rule::column_path => self.path = true, Rule::column_start => self.start = true, Rule::column_end => self.end = true, _ => (), } } } /// Build a header row based on the current selection of columns. fn build_header(&self) -> Row { let mut header = Vec::new(); // Create table header row if self.id { header.push(Cell::new("Id")); } if self.status { header.push(Cell::new("Status")); } if self.priority { header.push(Cell::new("Prio")); } if self.enqueue_at { header.push(Cell::new("Enqueue At")); } if self.dependencies { header.push(Cell::new("Deps")); } if self.label { header.push(Cell::new("Label")); } if self.command { header.push(Cell::new("Command")); } if self.path { header.push(Cell::new("Path")); } if self.start { header.push(Cell::new("Start")); } if self.end { header.push(Cell::new("End")); } Row::from(header) } fn build_task_rows(&self, tasks: &[Task]) -> Vec<Row> { let mut rows = Vec::new(); // Add rows one by one. for task in tasks.iter() { let mut row = Row::new(); // Users can set a max height per row. 
if let Some(height) = self.settings.client.max_status_lines { row.max_height(height); } if self.id { row.add_cell(Cell::new(task.id)); } if self.status { // Determine the human readable task status representation and the respective color. let status_string = task.status.to_string(); let (status_text, color) = match &task.status { TaskStatus::Running => (status_string, Color::Green), TaskStatus::Paused | TaskStatus::Locked => (status_string, Color::White), TaskStatus::Done(result) => match result { TaskResult::Success => (TaskResult::Success.to_string(), Color::Green), TaskResult::DependencyFailed => { ("Dependency failed".to_string(), Color::Red) } TaskResult::FailedToSpawn(_) => ("Failed to spawn".to_string(), Color::Red), TaskResult::Failed(code) => (format!("Failed ({code})"), Color::Red), _ => (result.to_string(), Color::Red), }, _ => (status_string, Color::Yellow), }; row.add_cell(self.style.styled_cell(status_text, Some(color), None)); } if self.priority { row.add_cell(Cell::new(task.priority.to_string())); } if self.enqueue_at { if let TaskStatus::Stashed { enqueue_at: Some(enqueue_at), } = task.status { // Only show the date if the task is not supposed to be enqueued today. let enqueue_today = enqueue_at <= start_of_today() + TimeDelta::try_days(1).unwrap(); let formatted_enqueue_at = if enqueue_today { enqueue_at.format(&self.settings.client.status_time_format) } else { enqueue_at.format(&self.settings.client.status_datetime_format) }; row.add_cell(Cell::new(formatted_enqueue_at)); } else { row.add_cell(Cell::new("")); } } if self.dependencies { let text = task .dependencies .iter() .map(|id| id.to_string()) .collect::<Vec<String>>() .join(", "); row.add_cell(Cell::new(text)); } if self.label { row.add_cell(Cell::new(task.label.as_deref().unwrap_or_default())); } // Add command and path. 
if self.command { if self.settings.client.show_expanded_aliases { row.add_cell(Cell::new(&task.command)); } else { row.add_cell(Cell::new(&task.original_command)); } } if self.path { row.add_cell(Cell::new(task.path.to_string_lossy())); } // Add start and end info let (start, end) = formatted_start_end(task, self.settings); if self.start { row.add_cell(Cell::new(start)); } if self.end { row.add_cell(Cell::new(end)); } rows.push(row); } rows } } 07070100000038000081A4000000000000000000000001665F1B6900000075000000000000000000000000000000000000002400000000pueue-3.4.1/pueue/src/client/mod.rspub mod cli; #[allow(clippy::module_inception)] pub mod client; mod commands; pub(crate) mod display; pub mod query; 07070100000039000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002300000000pueue-3.4.1/pueue/src/client/query0707010000003A000081A4000000000000000000000001665F1B690000066D000000000000000000000000000000000000003700000000pueue-3.4.1/pueue/src/client/query/column_selection.rsuse anyhow::Result; use pest::iterators::Pair; use super::{QueryResult, Rule}; pub fn apply(section: Pair<'_, Rule>, query_result: &mut QueryResult) -> Result<()> { // This query is expected to be structured like this: // `columns = [(column (, column)*]` let mut columns_pairs = section.into_inner(); // Pop the `column` and `=` let _columns_word = columns_pairs.next().unwrap(); let _equals = columns_pairs.next().unwrap(); // Get the list of columns. let multiple_columns = columns_pairs.next().unwrap(); // Extract all columns from the multiple_columns.inner iterator // The structure is like this // ``` // Pair { // rule: multiple_columns, // span: Span { // str: "id,status", // start: 7, // end: 16, // }, // inner: [ // Pair { // rule: column, // span: Span { // str: "id", // start: 7, // end: 9, // }, // inner: [ // Pair { // rule: id, // span: Span { // str: "id", // start: 7, // end: 9, // }, // inner: [], // }, // ], // }, // ... 
// ] // } // ``` let mut columns = multiple_columns .into_inner() .map(|pair| pair.into_inner().next().unwrap().as_rule()) .collect::<Vec<Rule>>(); query_result.selected_columns.append(&mut columns); Ok(()) } 0707010000003B000081A4000000000000000000000001665F1B6900002C00000000000000000000000000000000000000002E00000000pueue-3.4.1/pueue/src/client/query/filters.rs#![allow(bindings_with_variant_name)] use anyhow::{bail, Context, Result}; use chrono::{DateTime, Local, NaiveDate, NaiveDateTime, NaiveTime, TimeDelta}; use pest::iterators::Pair; use pueue_lib::task::{Task, TaskResult, TaskStatus}; use super::{QueryResult, Rule}; enum DateOrDateTime { DateTime(DateTime<Local>), Date(NaiveDate), } /// Parse a datetime/date/time filter. /// Such a filter can be applied to either the `start`, `end` or `enqueue_at` field. /// /// This filter syntax looks like this is expected to be: /// `[enqueue_at|start|end] [>|<|=|!=] [YYYY-MM-DD HH:mm:SS|HH:mm:SS|YYYY-MM-DD]` /// /// The data structure looks something like this: /// Pair { /// rule: datetime_filter, /// span: Span { /// str: "start=2022-09-01", /// start: 0, /// end: 16, /// }, /// inner: [ /// Pair { /// rule: column_start, /// span: Span { /// str: "start", /// start: 0, /// end: 5, /// }, /// inner: [], /// }, /// Pair { /// rule: operator, /// span: Span { /// str: "=", /// start: 5, /// end: 6, /// }, /// inner: [ /// Pair { /// rule: eq, /// span: Span { /// str: "=", /// start: 5, /// end: 6, /// }, /// inner: [], /// }, /// ], /// }, /// Pair { /// rule: date, /// span: Span { /// str: "2022-09-01", /// start: 6, /// end: 16, /// }, /// inner: [], /// }, /// ], /// } pub fn datetime(section: Pair<'_, Rule>, query_result: &mut QueryResult) -> Result<()> { let mut filter = section.into_inner(); // Get the column this filter should be applied to. 
// Either of [Rule::column_enqueue_at | Rule::column_start | Rule::column_end] let column = filter.next().unwrap(); let column = column.as_rule(); // Get the operator that should be applied in this filter. // Either of [Rule::eq | Rule::neq | Rule::lt | Rule::gt] let operator = filter.next().unwrap().as_rule(); // Get the point in time which we should filter for. // This can be either a Date or a DateTime. let operand = filter.next().unwrap(); let operand_rule = operand.as_rule(); let operand = match operand_rule { Rule::time => { let time = NaiveTime::parse_from_str(operand.as_str(), "%X") .context("Expected hh:mm:ss time format")?; let today = Local::now().date_naive(); let datetime = today.and_time(time).and_local_timezone(Local).unwrap(); DateOrDateTime::DateTime(datetime) } Rule::datetime => { let datetime = NaiveDateTime::parse_from_str(operand.as_str(), "%F %X") .context("Expected YYYY-MM-DD hh:mm:ss date time format")?; DateOrDateTime::DateTime(datetime.and_local_timezone(Local).unwrap()) } Rule::date => { let date = NaiveDate::parse_from_str(operand.as_str(), "%F") .context("Expected YYYY-MM-DD date format")?; DateOrDateTime::Date(date) } _ => bail!("Expected either a date, datetime or time expression."), }; let filter_function = Box::new(move |task: &Task| -> bool { // Get the field we should apply the filter to. let field = match column { Rule::column_enqueue_at => { let TaskStatus::Stashed { enqueue_at: Some(enqueue_at), } = task.status else { return false; }; enqueue_at } Rule::column_start => { let Some(start) = task.start else { return false; }; start } Rule::column_end => { let Some(end) = task.end else { return false; }; end } _ => return true, }; // Apply the operator to the operands. // The operator might have a different meaning depending on the type of datetime/date // we're dealing with. // E.g. when working with dates, `>` should mean bigger than the end of that day. // `<` however should mean before that day. 
match operand { DateOrDateTime::DateTime(datetime) => match operator { Rule::eq => field == datetime, Rule::neq => field != datetime, Rule::lt => field < datetime, Rule::gt => field > datetime, _ => true, }, DateOrDateTime::Date(date) => { // Get the start of the given day. // Use the most inclusive datetime in case of ambiguity let start_of_day = date .and_hms_opt(0, 0, 0) .expect("Couldn't determine start of day for given date.") .and_local_timezone(Local); let start_of_day = match start_of_day.latest() { None => return false, Some(datetime) => datetime, }; // Get the end of the given day. // Use the most inclusive datetime in case of ambiguity let end_of_day = (date + TimeDelta::try_days(1).unwrap()) .and_hms_opt(0, 0, 0) .expect("Couldn't determine start of day for given date.") .and_local_timezone(Local); let end_of_day = match end_of_day.latest() { None => return false, Some(datetime) => datetime, }; match operator { Rule::eq => field > start_of_day && field < end_of_day, Rule::neq => field < start_of_day && field > end_of_day, Rule::lt => field < start_of_day, Rule::gt => field > end_of_day, _ => true, } } } }); query_result.filters.push(filter_function); Ok(()) } /// Parse a filter for the label field. 
/// /// This filter syntax looks like this: /// `label [=|!=] string` /// /// The data structure looks something like this: /// Pair { /// rule: label_filter, /// span: Span { /// str: "label=test", /// start: 0, /// end: 10, /// }, /// inner: [ /// Pair { /// rule: column_label, /// span: Span { /// str: "label", /// start: 0, /// end: 5, /// }, /// inner: [], /// }, /// Pair { /// rule: eq, /// span: Span { /// str: "=", /// start: 5, /// end: 6, /// }, /// inner: [], /// }, /// Pair { /// rule: label, /// span: Span { /// str: "test", /// start: 6, /// end: 10, /// }, /// inner: [], /// }, /// ], /// } pub fn label(section: Pair<'_, Rule>, query_result: &mut QueryResult) -> Result<()> { let mut filter = section.into_inner(); // The first word should be the `label` keyword. let _label = filter.next().unwrap(); // Get the operator that should be applied in this filter. // Can be either of [Rule::eq | Rule::neq]. let operator = filter.next().unwrap().as_rule(); // Get the name of the label we should filter for. let operand = filter.next().unwrap().as_str().to_string(); // Build the label filter function. let filter_function = Box::new(move |task: &Task| -> bool { let Some(label) = &task.label else { return operator == Rule::neq; }; match operator { Rule::eq => label == &operand, Rule::neq => label != &operand, Rule::contains => label.contains(&operand), _ => false, } }); query_result.filters.push(filter_function); Ok(()) } /// Parse a filter for the status field. 
/// /// This filter syntax looks like this: /// `status [=|!=] [status]` /// /// The data structure looks something like this: /// Pair { /// rule: status_filter, /// span: Span { /// str: "status=success", /// start: 0, /// end: 14, /// }, /// inner: [ /// Pair { /// rule: column_status, /// span: Span { /// str: "status", /// start: 0, /// end: 6, /// }, /// inner: [], /// }, /// Pair { /// rule: eq, /// span: Span { /// str: "=", /// start: 6, /// end: 7, /// }, /// inner: [], /// }, /// Pair { /// rule: status_success, /// span: Span { /// str: "success", /// start: 7, /// end: 14, /// }, /// inner: [], /// }, /// ], /// } pub fn status(section: Pair<'_, Rule>, query_result: &mut QueryResult) -> Result<()> { let mut filter = section.into_inner(); // The first word should be the `status` keyword. let _status = filter.next().unwrap(); // Get the operator that should be applied in this filter. // Can be either of [Rule::eq | Rule::neq] let operator = filter.next().unwrap().as_rule(); // Get the status we should filter for. let operand = filter.next().unwrap().as_rule(); // Build the filter function for the task's status. let filter_function = Box::new(move |task: &Task| -> bool { let matches = match operand { Rule::status_queued => matches!(task.status, TaskStatus::Queued), Rule::status_stashed => matches!(task.status, TaskStatus::Stashed { .. 
                }),
                Rule::status_running => matches!(task.status, TaskStatus::Running),
                Rule::status_paused => matches!(task.status, TaskStatus::Paused),
                Rule::status_success => {
                    matches!(&task.status, TaskStatus::Done(TaskResult::Success))
                }
                Rule::status_failed => {
                    // Any `Done` status that isn't `Success` counts as failed.
                    let mut matches = false;
                    if let TaskStatus::Done(result) = &task.status {
                        if !matches!(result, TaskResult::Success) {
                            matches = true;
                        }
                    }
                    matches
                }
                _ => return false,
            };

            match operator {
                Rule::eq => matches,
                Rule::neq => !matches,
                _ => false,
            }
        });

    query_result.filters.push(filter_function);

    Ok(())
}
0707010000003C000081A4000000000000000000000001665F1B690000073B000000000000000000000000000000000000002C00000000pueue-3.4.1/pueue/src/client/query/limit.rs
use anyhow::{bail, Context, Result};
use pest::iterators::Pair;

use super::{QueryResult, Rule};

/// An enum indicating whether the first or the last tasks in the
/// `pueue status` output should be shown.
pub enum Limit {
    First,
    Last,
}

/// Parse a limit condition.
///
/// This limit syntax looks like this:
/// `[first|last] [count]`
///
/// The data structure looks something like this:
/// Pair {
///     rule: limit_condition,
///     span: Span {
///         str: "first 2",
///         start: 0,
///         end: 7,
///     },
///     inner: [
///         Pair {
///             rule: first,
///             span: Span {
///                 str: "first",
///                 start: 0,
///                 end: 5,
///             },
///             inner: [],
///         },
///         Pair {
///             rule: limit_count,
///             span: Span {
///                 str: "2",
///                 start: 6,
///                 end: 7,
///             },
///             inner: [],
///         },
///     ],
/// }
pub fn limit(section: Pair<'_, Rule>, query_result: &mut QueryResult) -> Result<()> {
    let mut limit_condition = section.into_inner();

    // The first word should be either the `first` or the `last` keyword.
    let direction = limit_condition.next().unwrap();
    let direction = match direction.as_rule() {
        Rule::first => Limit::First,
        Rule::last => Limit::Last,
        _ => bail!("Expected either of [first|last]"),
    };

    // Get the number of tasks the output should be limited to.
    let amount = limit_condition.next().unwrap();
    let count: usize = amount
        .as_str()
        .parse()
        .context("Expected a number >0 for limit condition")?;

    if count == 0 {
        bail!("Expected a number >0 for limit condition");
    }

    query_result.limit = Some((direction, count));
    Ok(())
}
0707010000003D000081A4000000000000000000000001665F1B6900001789000000000000000000000000000000000000002A00000000pueue-3.4.1/pueue/src/client/query/mod.rs
// Clippy generates a false-positive for an empty generated docstring in the query parser code.
#![allow(clippy::empty_docs)]

use anyhow::{bail, Context, Result};
use pest::Parser;
use pest_derive::Parser;

use pueue_lib::task::{Task, TaskResult, TaskStatus};

mod column_selection;
mod filters;
mod limit;
mod order_by;

use limit::Limit;
use order_by::Direction;

/// See the pest docs on how this derive macro works and how to use pest:
/// https://docs.rs/pest/latest/pest/
#[derive(Parser)]
#[grammar = "./src/client/query/syntax.pest"]
struct QueryParser;

type FilterFunction = dyn Fn(&Task) -> bool;

/// All applicable information that has been extracted from the query.
#[derive(Default)]
pub struct QueryResult {
    /// Filter results for a single group.
    group: Option<String>,

    /// The list of columns that should be displayed.
    pub selected_columns: Vec<Rule>,

    /// A list of filter functions that should be applied to the list of tasks.
    filters: Vec<Box<FilterFunction>>,

    /// The column and direction by which the tasks should be ordered.
    order_by: Option<(Rule, Direction)>,

    /// The maximum amount of tasks to show, counted from the start or the end of the list.
    limit: Option<(Limit, usize)>,
}

impl QueryResult {
    /// Take a list of tasks and apply all filters to it.
    pub fn apply_filters(&self, tasks: Vec<Task>) -> Vec<Task> {
        let mut iter = tasks.into_iter();

        // If requested, only look at tasks of a specific group.
        if let Some(group) = &self.group {
            iter = iter
                .filter(|task| task.group == *group)
                .collect::<Vec<Task>>()
                .into_iter();
        }

        for filter in self.filters.iter() {
            iter = iter.filter(filter).collect::<Vec<Task>>().into_iter();
        }

        iter.collect()
    }

    /// Take a list of tasks and sort it by the requested column and direction.
    pub fn order_tasks(&self, mut tasks: Vec<Task>) -> Vec<Task> {
        // Only apply ordering if it was requested.
        let Some((column, direction)) = &self.order_by else {
            return tasks;
        };

        // Sort the tasks by the specified column.
        tasks.sort_by(|task1, task2| match column {
            Rule::column_id => task1.id.cmp(&task2.id),
            Rule::column_status => {
                /// Rank a task status to allow ordering by status.
                /// Returns a u8 based on the expected lifecycle of a task,
                /// from stashed up to done.
                fn rank_status(task: &Task) -> u8 {
                    match &task.status {
                        TaskStatus::Stashed { .. } => 0,
                        TaskStatus::Locked => 1,
                        TaskStatus::Queued => 2,
                        TaskStatus::Paused => 3,
                        TaskStatus::Running => 4,
                        TaskStatus::Done(result) => match result {
                            TaskResult::Success => 6,
                            _ => 5,
                        },
                    }
                }

                rank_status(task1).cmp(&rank_status(task2))
            }
            Rule::column_label => task1.label.cmp(&task2.label),
            Rule::column_command => task1.command.cmp(&task2.command),
            Rule::column_path => task1.path.cmp(&task2.path),
            Rule::column_start => task1.start.cmp(&task2.start),
            Rule::column_end => task1.end.cmp(&task2.end),
            _ => std::cmp::Ordering::Less,
        });

        // Reverse the order if descending order was requested.
        if let Direction::Descending = direction {
            tasks.reverse();
        }

        tasks
    }

    /// Take a list of tasks and apply the requested limit to it.
    pub fn limit_tasks(&self, tasks: Vec<Task>) -> Vec<Task> {
        // Only apply the limit if one was requested.
        let Some((direction, count)) = &self.limit else {
            return tasks;
        };

        // Don't do anything if:
        // - we don't have to limit
        // - the limit is invalid
        if tasks.len() <= *count || *count == 0 {
            return tasks;
        }

        match direction {
            Limit::First => tasks[0..*count].to_vec(),
            Limit::Last => tasks[(tasks.len() - count)..].to_vec(),
        }
    }
}

/// Take a given `pueue status QUERY` and apply it to all components that are involved in the
/// `pueue status` process:
///
/// - TableBuilder: The component responsible for building the table and determining which
///     columns should or need to be displayed.
///     A `columns [columns]` statement will define the set of visible columns.
pub fn apply_query(query: &str, group: &Option<String>) -> Result<QueryResult> {
    let mut parsed = QueryParser::parse(Rule::query, query).context("Failed to parse query")?;

    let mut query_result = QueryResult {
        group: group.clone(),
        ..Default::default()
    };

    // Expect there to be exactly one pair for the full query.
    // Return early if we got an empty query.
    let Some(parsed) = parsed.next() else {
        return Ok(query_result);
    };

    // Make sure we really got a query.
    if parsed.as_rule() != Rule::query {
        bail!("Expected a valid query");
    }

    // Get the sections of the query
    let sections = parsed.into_inner();

    // Go through each section and handle it accordingly
    for section in sections {
        // The `columns=[columns]` section
        // E.g.
`columns=id,status,start,end` match section.as_rule() { Rule::column_selection => column_selection::apply(section, &mut query_result)?, Rule::datetime_filter => filters::datetime(section, &mut query_result)?, Rule::label_filter => filters::label(section, &mut query_result)?, Rule::status_filter => filters::status(section, &mut query_result)?, Rule::order_by_condition => order_by::order_by(section, &mut query_result)?, Rule::limit_condition => limit::limit(section, &mut query_result)?, _ => (), } } Ok(query_result) } 0707010000003E000081A4000000000000000000000001665F1B69000009A1000000000000000000000000000000000000002F00000000pueue-3.4.1/pueue/src/client/query/order_by.rs#![allow(bindings_with_variant_name)] use anyhow::Result; use pest::iterators::Pair; use super::{QueryResult, Rule}; pub enum Direction { Ascending, Descending, } /// Parse an order_by condition. /// /// This filter syntax looks like this: /// `order_by [column] [asc|desc]` /// /// The data structure looks something like this: /// Pair { /// rule: order_by_condition, /// span: Span { /// str: "order_by label desc", /// start: 0, /// end: 19, /// }, /// inner: [ /// Pair { /// rule: order_by, /// span: Span { /// str: "order_by", /// start: 0, /// end: 8, /// }, /// inner: [], /// }, /// Pair { /// rule: column, /// span: Span { /// str: "label", /// start: 9, /// end: 14, /// }, /// inner: [ /// Pair { /// rule: column_label, /// span: Span { /// str: "label", /// start: 9, /// end: 14, /// }, /// inner: [], /// }, /// ], /// }, /// Pair { /// rule: descending, /// span: Span { /// str: "desc", /// start: 15, /// end: 19, /// }, /// inner: [], /// }, /// ], /// } pub fn order_by(section: Pair<'_, Rule>, query_result: &mut QueryResult) -> Result<()> { let mut order_by_condition = section.into_inner(); // The first word should be the `order_by` keyword. let _order_by = order_by_condition.next().unwrap(); // Get the column we should order by. // The column is wrapped by a `Rule::column` keyword. 
    let column_keyword = order_by_condition.next().unwrap();
    let column = column_keyword.into_inner().next().unwrap().as_rule();

    // Get the direction we should order by.
    // If no direction is provided, default to `Ascending`.
    let direction = match order_by_condition.next().map(|pair| pair.as_rule()) {
        Some(Rule::ascending) => Direction::Ascending,
        Some(Rule::descending) => Direction::Descending,
        _ => Direction::Ascending,
    };

    query_result.order_by = Some((column, direction));

    Ok(())
}
0707010000003F000081A4000000000000000000000001665F1B69000009D7000000000000000000000000000000000000002F00000000pueue-3.4.1/pueue/src/client/query/syntax.pest
WHITESPACE = _{ " " }
COMMA = _{ "," }

// Definition of possible comparison operators for task filtering.
eq = { ^"=" }
neq = { ^"!=" }
lt = { ^"<" }
gt = { ^">" }
contains = { ^"%=" }

// Definition of all columns
column_id = { ^"id" }
column_status = { ^"status" }
column_priority = { ^"priority" }
column_command = { ^"command" }
column_label = { ^"label" }
column_path = { ^"path" }
column_enqueue_at = { ^"enqueue_at" }
column_dependencies = { ^"dependencies" }
column_start = { ^"start" }
column_end = { ^"end" }

// A single column, and a comma-separated list of columns.
column = { column_id | column_status | column_command | column_label | column_path | column_enqueue_at | column_dependencies | column_start | column_end }
multiple_columns = { column ~ (COMMA ~ column )* }

// ----- Column visibility -----

// The columns clause used to specify the columns that should be shown.
columns_word = { ^"columns" } column_selection = { columns_word ~ eq ~ multiple_columns } // ----- Filtering ----- // Status filter status_queued = { ^"queued" } status_stashed = { ^"stashed" } status_paused = { ^"paused" } status_running = { ^"running" } status_success = { ^"success" } status_failed = { ^"failed" } status_filter = { column_status ~ (eq | neq) ~ (status_queued | status_stashed | status_running | status_paused | status_success | status_failed) } // Label filter label = { ANY* } label_filter = { column_label ~ ( eq | neq | contains ) ~ label } // Time related filters datetime = { ASCII_DIGIT{4} ~ "-" ~ ASCII_DIGIT{2} ~ "-" ~ ASCII_DIGIT{2} ~ ASCII_DIGIT{2} ~ ":" ~ ASCII_DIGIT{2} ~ (":" ~ ASCII_DIGIT{2})? } date = { ASCII_DIGIT{4} ~ "-" ~ ASCII_DIGIT{2} ~ "-" ~ ASCII_DIGIT{2} } time = { ASCII_DIGIT{2} ~ ":" ~ ASCII_DIGIT{2} ~ (":" ~ ASCII_DIGIT{2})? } datetime_filter = { (column_start | column_end | column_enqueue_at) ~ (eq | neq | lt | gt) ~ (datetime | date | time) } // ----- Ordering ----- order_by = { ^"order_by" } ascending = { ^"asc" } descending = { ^"desc" } order_columns = { column_id | column_status | column_command | column_label | column_path | column_start | column_end } order_by_condition = { order_by ~ column ~ (ascending | descending)? } // ----- Limit ----- first = { ^"first" } last = { ^"last" } limit_count = { ASCII_DIGIT* } limit_condition = { (first | last) ~ limit_count } // ----- The final query syntax ----- query = { SOI ~ column_selection? ~ ( datetime_filter | status_filter | label_filter )*? ~ order_by_condition? ~ limit_condition? 
~ EOI } 07070100000040000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001D00000000pueue-3.4.1/pueue/src/daemon07070100000041000081A4000000000000000000000001665F1B6900000404000000000000000000000000000000000000002400000000pueue-3.4.1/pueue/src/daemon/cli.rsuse std::path::PathBuf; use clap::{ArgAction, Parser, ValueHint}; #[derive(Parser, Debug)] #[command(name = "pueued", about = "Start the Pueue daemon", author, version)] pub struct CliArguments { /// Verbose mode (-v, -vv, -vvv) #[arg(short, long, action = ArgAction::Count)] pub verbose: u8, /// If this flag is set, the daemon will start and fork itself into the background. /// Closing the terminal won't kill the daemon any longer. /// This should be avoided and rather be properly done using a service manager. #[arg(short, long)] pub daemonize: bool, /// If provided, Pueue only uses this config file. /// This path can also be set via the "PUEUE_CONFIG_PATH" environment variable. /// The commandline option overwrites the environment variable! #[arg(short, long, value_hint = ValueHint::FilePath)] pub config: Option<PathBuf>, /// The name of the profile that should be loaded from your config file. 
    #[arg(short, long)]
    pub profile: Option<String>,
}
07070100000042000081A4000000000000000000000001665F1B6900001986000000000000000000000000000000000000002400000000pueue-3.4.1/pueue/src/daemon/mod.rs
use std::path::Path;
use std::sync::{Arc, Mutex};
use std::{fs::create_dir_all, path::PathBuf};

use anyhow::{bail, Context, Result};
use log::warn;
use std::sync::mpsc::channel;

use pueue_lib::error::Error;
use pueue_lib::network::certificate::create_certificates;
use pueue_lib::network::message::Shutdown;
use pueue_lib::network::protocol::socket_cleanup;
use pueue_lib::network::secret::init_shared_secret;
use pueue_lib::settings::Settings;
use pueue_lib::state::State;

use self::state_helper::{restore_state, save_state};
use crate::daemon::network::socket::accept_incoming;
use crate::daemon::task_handler::{TaskHandler, TaskSender};

pub mod cli;
mod network;
mod pid;
/// Contains re-usable helper functions that operate on the pueue-lib state.
pub mod state_helper;
mod task_handler;

/// The main entry point for the daemon logic.
/// It's basically the `main`, but publicly exported as a library.
/// That way we can properly do integration testing for the daemon.
///
/// For the purpose of testing, some things shouldn't be run during tests.
/// There are some global operations that crash during tests, such as the ctrlc handler.
/// This is because tests in the same file are executed in multiple threads.
/// Since the threads share the same global space, this would crash.
pub async fn run(config_path: Option<PathBuf>, profile: Option<String>, test: bool) -> Result<()> {
    // Try to read settings from the configuration file.
    let (mut settings, config_found) =
        Settings::read(&config_path).context("Error while reading configuration.")?;

    // We couldn't find a configuration file.
    // This probably means that Pueue has been started for the first time and we have to create a
    // default config file once.
    if !config_found {
        if let Err(error) = settings.save(&config_path) {
            bail!("Failed saving config file: {error:?}.");
        }
    };

    // Load any requested profile.
    if let Some(profile) = &profile {
        settings.load_profile(profile)?;
    }

    init_directories(&settings.shared.pueue_directory())?;
    if !settings.shared.daemon_key().exists() && !settings.shared.daemon_cert().exists() {
        create_certificates(&settings.shared).context("Failed to create certificates.")?;
    }
    init_shared_secret(&settings.shared.shared_secret_path())
        .context("Failed to initialize shared secret.")?;
    pid::create_pid_file(&settings.shared.pid_path()).context("Failed to create pid file.")?;

    // Restore the previous state and save any changes that might have happened during this
    // process. If no previous state exists, just create a new one.
    // Create a new empty state if any errors occur, but print the error message.
    let state = match restore_state(&settings.shared.pueue_directory()) {
        Ok(Some(state)) => state,
        Ok(None) => State::new(),
        Err(error) => {
            warn!("Failed to restore previous state:\n {error:?}");
            warn!("Using clean state instead.");
            State::new()
        }
    };

    // Save the state once at the very beginning.
    save_state(&state, &settings).context("Failed to save state on startup.")?;
    let state = Arc::new(Mutex::new(state));

    let (sender, receiver) = channel();
    let sender = TaskSender::new(sender);
    let mut task_handler = TaskHandler::new(state.clone(), settings.clone(), receiver);

    // Don't set ctrlc and panic handlers during testing.
    // This is necessary for multithreaded integration testing, since multiple listeners per
    // process aren't allowed. On top of this, ctrlc also somehow breaks test error output.
    if !test {
        setup_signal_panic_handling(&settings, &sender)?;
    }

    std::thread::spawn(move || {
        task_handler.run();
    });

    accept_incoming(sender, state.clone(), settings.clone()).await?;

    Ok(())
}

/// Initialize all directories needed for normal operation.
fn init_directories(pueue_dir: &Path) -> Result<()> {
    // Pueue base path
    if !pueue_dir.exists() {
        create_dir_all(pueue_dir).map_err(|err| {
            Error::IoPathError(pueue_dir.to_path_buf(), "creating main directory", err)
        })?;
    }

    // Daemon log dir
    let log_dir = pueue_dir.join("log");
    if !log_dir.exists() {
        create_dir_all(&log_dir)
            .map_err(|err| Error::IoPathError(log_dir, "creating log directory", err))?;
    }

    // Certificate dir
    let certs_dir = pueue_dir.join("certs");
    if !certs_dir.exists() {
        create_dir_all(&certs_dir)
            .map_err(|err| Error::IoPathError(certs_dir, "creating certificate directory", err))?;
    }

    // Task log dir
    let logs_dir = pueue_dir.join("task_logs");
    if !logs_dir.exists() {
        create_dir_all(&logs_dir)
            .map_err(|err| Error::IoPathError(logs_dir, "creating task log directory", err))?;
    }

    Ok(())
}

/// Setup signal handling and panic handling.
///
/// On SIGINT and SIGTERM, we exit gracefully by sending a DaemonShutdown message to the
/// TaskHandler. This is to prevent dangling processes and other weird edge-cases.
///
/// On panic, we want to cleanup existing unix sockets and the PID file.
fn setup_signal_panic_handling(settings: &Settings, sender: &TaskSender) -> Result<()> {
    let sender_clone = sender.clone();

    // This section handles Shutdown via SigTerm/SigInt process signals
    // Notify the TaskHandler, so it can shutdown gracefully.
    // The actual program exit will be done via the TaskHandler.
    ctrlc::set_handler(move || {
        // Notify the task handler
        sender_clone
            .send(Shutdown::Graceful)
            .expect("Failed to send Message to TaskHandler on Shutdown");
    })?;

    // Try to do some final cleanup, even if we panic.
let settings_clone = settings.clone(); let orig_hook = std::panic::take_hook(); std::panic::set_hook(Box::new(move |panic_info| { // invoke the default handler and exit the process orig_hook(panic_info); // Cleanup the pid file if let Err(error) = pid::cleanup_pid_file(&settings_clone.shared.pid_path()) { println!("Failed to cleanup pid after panic."); println!("{error}"); } // Remove the unix socket. if let Err(error) = socket_cleanup(&settings_clone.shared) { println!("Failed to cleanup socket after panic."); println!("{error}"); } std::process::exit(1); })); Ok(()) } 07070100000043000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002500000000pueue-3.4.1/pueue/src/daemon/network07070100000044000081A4000000000000000000000001665F1B6900001319000000000000000000000000000000000000003300000000pueue-3.4.1/pueue/src/daemon/network/follow_log.rsuse std::io::Read; use std::path::Path; use std::time::Duration; use anyhow::Result; use pueue_lib::log::*; use pueue_lib::network::message::*; use pueue_lib::network::protocol::{send_message, GenericStream}; use pueue_lib::state::SharedState; /// Handle the continuous stream of a message. pub async fn handle_follow( pueue_directory: &Path, stream: &mut GenericStream, state: &SharedState, message: StreamRequestMessage, ) -> Result<Message> { // The user can specify the id of the task they want to follow // If the id isn't specified and there's only a single running task, this task will be used. // However, if there are multiple running tasks, the user will have to specify an id. let task_id = if let Some(task_id) = message.task_id { task_id } else { // Get all ids of running tasks let state = state.lock().unwrap(); let running_ids: Vec<_> = state .tasks .iter() .filter_map(|(&id, t)| if t.is_running() { Some(id) } else { None }) .collect(); // Return a message on "no" or multiple running tasks. 
match running_ids.len() { 0 => { return Ok(create_failure_message("There are no running tasks.")); } 1 => running_ids[0], _ => { let running_ids = running_ids .iter() .map(|id| id.to_string()) .collect::<Vec<_>>() .join(", "); return Ok(create_failure_message(format!( "Multiple tasks are running, please select one of the following: {running_ids}" ))); } } }; // It might be that the task is not yet running. // Ensure that it exists and is started. loop { { let state = state.lock().unwrap(); let Some(task) = state.tasks.get(&task_id) else { return Ok(create_failure_message( "Pueue: The task to be followed doesn't exist.", )); }; // The task is running or finished, we can start to follow. if task.is_running() || task.is_done() { break; } } tokio::time::sleep(Duration::from_millis(1000)).await; } let mut handle = match get_log_file_handle(task_id, pueue_directory) { Err(_) => { return Ok(create_failure_message( "Couldn't find output files for task. Maybe it finished? Try `log`", )) } Ok(handle) => handle, }; // Get the output path. // We need to check continuously, whether the file still exists, // since the file can go away (e.g. due to finishing a task). let path = get_log_path(task_id, pueue_directory); // If `lines` is passed as an option, we only want to show the last `X` lines. // To achieve this, we seek the file handle to the start of the `Xth` line // from the end of the file. // The loop following this section will then only copy those last lines to stdout. if let Some(lines) = message.lines { if let Err(err) = seek_to_last_lines(&mut handle, lines) { println!("Error seeking to last lines from log: {err}"); } } loop { // Check whether the file still exists. Exit if it doesn't. if !path.exists() { return Ok(create_success_message( "Pueue: Log file has gone away. Has the task been removed?", )); } // Read the next chunk of text from the last position. 
let mut buffer = Vec::new(); if let Err(err) = handle.read_to_end(&mut buffer) { return Ok(create_failure_message(format!("Pueue Error: {err}"))); }; let text = String::from_utf8_lossy(&buffer).to_string(); // Only send a message, if there's actual new content. if !text.is_empty() { // Send the next chunk. let response = Message::Stream(text); send_message(response, stream).await?; } // Check if the task in question does: // 1. Still exist // 2. Is still running // // In case it's not, close the stream. { let state = state.lock().unwrap(); let Some(task) = state.tasks.get(&task_id) else { return Ok(create_failure_message( "Pueue: The followed task has been removed.", )); }; // The task is done, just close the stream. if !task.is_running() { return Ok(Message::Close); } } // Wait for 1 second before sending the next chunk. tokio::time::sleep(Duration::from_millis(1000)).await; } } 07070100000045000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000003500000000pueue-3.4.1/pueue/src/daemon/network/message_handler07070100000046000081A4000000000000000000000001665F1B6900000E50000000000000000000000000000000000000003C00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/add.rsuse chrono::Local; use pueue_lib::aliasing::insert_alias; use pueue_lib::network::message::*; use pueue_lib::state::{GroupStatus, SharedState}; use pueue_lib::task::{Task, TaskStatus}; use super::*; use crate::daemon::state_helper::save_state; use crate::ok_or_return_failure_message; /// Invoked when calling `pueue add`. /// Queues a new task to the state. /// If the start_immediately flag is set, send a StartMessage to the task handler. pub fn add_task( message: AddMessage, sender: &TaskSender, state: &SharedState, settings: &Settings, ) -> Message { let mut state = state.lock().unwrap(); if let Err(message) = ensure_group_exists(&mut state, &message.group) { return message; } // Ensure that specified dependencies actually exist. 
    let not_found: Vec<_> = message
        .dependencies
        .iter()
        .filter(|id| !state.tasks.contains_key(id))
        .collect();
    if !not_found.is_empty() {
        return create_failure_message(format!(
            "Unable to setup dependencies: task(s) {not_found:?} not found",
        ));
    }

    // Create a new task and add it to the state.
    let mut task = Task::new(
        message.command,
        message.path,
        message.envs,
        message.group,
        TaskStatus::Locked,
        message.dependencies,
        message.priority.unwrap_or(0),
        message.label,
    );

    // Set the starting status.
    if message.stashed || message.enqueue_at.is_some() {
        task.status = TaskStatus::Stashed {
            enqueue_at: message.enqueue_at,
        };
    } else {
        task.status = TaskStatus::Queued;
        task.enqueued_at = Some(Local::now());
    }

    // Check if there are any aliases that should be applied.
    // If one is found, we expand the command, otherwise we just take the original command.
    // Either way, we save the (possibly expanded) command separately and keep the original
    // command in its own field.
    //
    // This gives us a better debugging experience, and the user can opt to show either the
    // original command or the expanded command in their `status` view.
    task.command = insert_alias(settings, task.original_command.clone());

    // Sort and deduplicate dependency ids.
    task.dependencies.sort_unstable();
    task.dependencies.dedup();

    // Check if the task's group is paused before we pass it to the state
    let group_status = state
        .groups
        .get(&task.group)
        .expect("We ensured that the group exists.")
        .status;
    let group_is_paused = matches!(group_status, GroupStatus::Paused);

    // Add the task and persist the state.
    let task_id = state.add_task(task);
    ok_or_return_failure_message!(save_state(&state, settings));

    // Notify the task handler, in case the client wants to start the task immediately.
    if message.start_immediately {
        sender
            .send(StartMessage {
                tasks: TaskSelection::TaskIds(vec![task_id]),
            })
            .expect(SENDER_ERR);
    }

    // Create the customized response for the client.
let mut response = if message.print_task_id { task_id.to_string() } else if let Some(enqueue_at) = message.enqueue_at { let enqueue_at = enqueue_at.format("%Y-%m-%d %H:%M:%S"); format!("New task added (id {task_id}). It will be enqueued at {enqueue_at}") } else { format!("New task added (id {task_id}).") }; // Notify the user if the task's group is paused if !message.print_task_id && group_is_paused { response.push_str("\nThe group of this task is currently paused!") } create_success_message(response) } 07070100000047000081A4000000000000000000000001665F1B6900001D9A000000000000000000000000000000000000003E00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/clean.rsuse pueue_lib::log::clean_log_handles; use pueue_lib::network::message::*; use pueue_lib::state::SharedState; use pueue_lib::task::{TaskResult, TaskStatus}; use super::*; use crate::daemon::state_helper::{is_task_removable, save_state}; use crate::ok_or_return_failure_message; fn construct_success_clean_message(message: CleanMessage) -> String { let successful_only_fix = if message.successful_only { " successfully" } else { "" }; let group_fix = message .group .map(|name| format!(" from group '{name}'")) .unwrap_or_default(); format!("All{successful_only_fix} finished tasks have been removed{group_fix}") } /// Invoked when calling `pueue clean`. /// Remove all failed or done tasks from the state. pub fn clean(message: CleanMessage, state: &SharedState, settings: &Settings) -> Message { let mut state = state.lock().unwrap(); let filtered_tasks = state.filter_tasks(|task| matches!(task.status, TaskStatus::Done(_)), None); for task_id in &filtered_tasks.matching_ids { // Ensure the task is removable, i.e. there are no dependant tasks. if !is_task_removable(&state, task_id, &[]) { continue; } if message.successful_only || message.group.is_some() { if let Some(task) = state.tasks.get(task_id) { // Check if we should ignore this task, if only successful tasks should be removed. 
                if message.successful_only
                    && !matches!(task.status, TaskStatus::Done(TaskResult::Success))
                {
                    continue;
                }

                // Users can specify a specific group to be cleaned.
                // Skip the task if that's the case and the task's group doesn't match.
                if message.group.is_some() && message.group.as_deref() != Some(&task.group) {
                    continue;
                }
            }
        }
        let _ = state.tasks.remove(task_id).unwrap();
        clean_log_handles(*task_id, &settings.shared.pueue_directory());
    }

    ok_or_return_failure_message!(save_state(&state, settings));

    create_success_message(construct_success_clean_message(message))
}

#[cfg(test)]
mod tests {
    use super::super::fixtures::*;
    use super::*;
    use pretty_assertions::assert_eq;
    use tempfile::TempDir;

    fn get_message(successful_only: bool, group: Option<String>) -> CleanMessage {
        CleanMessage {
            successful_only,
            group,
        }
    }

    trait TaskAddable {
        fn add_stub_task(&mut self, id: &str, group: &str, task_result: TaskResult);
    }

    impl TaskAddable for State {
        fn add_stub_task(&mut self, id: &str, group: &str, task_result: TaskResult) {
            let task = get_stub_task_in_group(id, group, TaskStatus::Done(task_result));
            self.add_task(task);
        }
    }

    /// Get the clean test state with the required groups.
    fn get_clean_test_state(groups: &[&str]) -> (SharedState, Settings, TempDir) {
        let (state, settings, tempdir) = get_state();

        {
            let mut state = state.lock().unwrap();
            for &group in groups {
                if !state.groups.contains_key(group) {
                    state.create_group(group);
                }

                state.add_stub_task("0", group, TaskResult::Success);
                state.add_stub_task("1", group, TaskResult::Failed(1));
                state.add_stub_task("2", group, TaskResult::FailedToSpawn("error".to_string()));
                state.add_stub_task("3", group, TaskResult::Killed);
                state.add_stub_task("4", group, TaskResult::Errored);
                state.add_stub_task("5", group, TaskResult::DependencyFailed);
            }
        }

        (state, settings, tempdir)
    }

    #[test]
    fn clean_normal() {
        let (state, settings, _tempdir) = get_stub_state();

        // Only task 1 will be removed, since it's the only task with a `Done` status.
        let message = clean(get_message(false, None), &state, &settings);

        // Return message is correct
        assert!(matches!(message, Message::Success(_)));
        if let Message::Success(text) = message {
            assert_eq!(text, "All finished tasks have been removed");
        };

        let state = state.lock().unwrap();
        assert_eq!(state.tasks.len(), 4);
    }

    #[test]
    fn clean_normal_for_all_results() {
        let (state, settings, _tempdir) = get_clean_test_state(&[PUEUE_DEFAULT_GROUP]);

        // All finished tasks should be removed when calling default `clean`.
        let message = clean(get_message(false, None), &state, &settings);

        // Return message is correct
        assert!(matches!(message, Message::Success(_)));
        if let Message::Success(text) = message {
            assert_eq!(text, "All finished tasks have been removed");
        };

        let state = state.lock().unwrap();
        assert!(state.tasks.is_empty());
    }

    #[test]
    fn clean_successful_only() {
        let (state, settings, _tempdir) = get_clean_test_state(&[PUEUE_DEFAULT_GROUP]);

        // Only successfully finished tasks should get removed when
        // calling `clean` with the `successful_only` flag.
let message = clean(get_message(true, None), &state, &settings); // Return message is correct assert!(matches!(message, Message::Success(_))); if let Message::Success(text) = message { assert_eq!(text, "All successfully finished tasks have been removed"); }; // Assert that only the first entry has been deleted (TaskResult::Success) let state = state.lock().unwrap(); assert_eq!(state.tasks.len(), 5); assert!(!state.tasks.contains_key(&0)); } #[test] fn clean_only_in_selected_group() { let (state, settings, _tempdir) = get_clean_test_state(&[PUEUE_DEFAULT_GROUP, "other"]); // All finished tasks should be removed in the selected group ('other'). let message = clean(get_message(false, Some("other".into())), &state, &settings); // Return message is correct assert!(matches!(message, Message::Success(_))); if let Message::Success(text) = message { assert_eq!( text, "All finished tasks have been removed from group 'other'" ); }; // Assert that only the 'other' group has been cleared let state = state.lock().unwrap(); assert_eq!(state.tasks.len(), 6); assert!(state.tasks.iter().all(|(_, task)| &task.group != "other")); } #[test] fn clean_only_successful_only_in_selected_group() { let (state, settings, _tempdir) = get_clean_test_state(&[PUEUE_DEFAULT_GROUP, "other"]); // Only successfully finished tasks should be removed in the 'other' group. let message = clean(get_message(true, Some("other".into())), &state, &settings); // Return message is correct assert!(matches!(message, Message::Success(_))); if let Message::Success(text) = message { assert_eq!( text, "All successfully finished tasks have been removed from group 'other'" ); }; // Assert that only the first entry has been deleted from the 'other' group (TaskResult::Success) let state = state.lock().unwrap(); assert_eq!(state.tasks.len(), 11); assert!(!state.tasks.contains_key(&6)); } } 
07070100000048000081A4000000000000000000000001665F1B6900000EFF000000000000000000000000000000000000003D00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/edit.rsuse pueue_lib::aliasing::insert_alias; use pueue_lib::network::message::*; use pueue_lib::state::SharedState; use pueue_lib::task::TaskStatus; use super::*; use crate::daemon::state_helper::save_state; use crate::ok_or_return_failure_message; /// Invoked when calling `pueue edit`. /// If a user wants to edit a task, we need to send them the current command. /// Lock the task to prevent execution before the user has finished editing the command. pub fn edit_request(task_id: usize, state: &SharedState) -> Message { // Check whether the task exists and is queued/stashed. Abort if that's not the case. let mut state = state.lock().unwrap(); match state.tasks.get_mut(&task_id) { Some(task) => { if !task.is_queued() && !task.is_stashed() { return create_failure_message("You can only edit a queued/stashed task"); } task.prev_status = task.status.clone(); task.status = TaskStatus::Locked; EditResponseMessage { task_id: task.id, command: task.original_command.clone(), path: task.path.clone(), label: task.label.clone(), priority: task.priority, } .into() } None => create_failure_message("No task with this id."), } } /// Invoked after closing the editor on `pueue edit`. /// Now we actually update the task with the updated data from the client. pub fn edit(message: EditMessage, state: &SharedState, settings: &Settings) -> Message { // Check whether the task exists and is locked. Abort if that's not the case. let mut state = state.lock().unwrap(); match state.tasks.get_mut(&message.task_id) { Some(task) => { if task.status != TaskStatus::Locked { return create_failure_message("Task is no longer locked."); } // Restore the task to its previous state. task.status = task.prev_status.clone(); // Update command if applicable. 
if let Some(command) = message.command { task.original_command = command.clone(); task.command = insert_alias(settings, command); } // Update path if applicable. if let Some(path) = message.path { task.path = path; } // Update label if applicable. if message.label.is_some() { task.label = message.label; } else if message.delete_label { task.label = None; } // Update priority if applicable. if let Some(priority) = message.priority { task.priority = priority; } ok_or_return_failure_message!(save_state(&state, settings)); create_success_message("Command has been updated") } None => create_failure_message(format!("Task to edit has gone away: {}", message.task_id)), } } /// Invoked if a client fails to edit a task and asks the daemon to restore the task's status. pub fn edit_restore(task_id: usize, state: &SharedState) -> Message { // Check whether the task exists and is queued/stashed. Abort if that's not the case. let mut state = state.lock().unwrap(); match state.tasks.get_mut(&task_id) { Some(task) => { if task.status != TaskStatus::Locked { return create_failure_message("The requested task isn't locked"); } task.status = task.prev_status.clone(); create_success_message(format!( "The requested task's status has been restored to '{}'", task.status )) } None => create_failure_message("No task with this id."), } } 07070100000049000081A4000000000000000000000001665F1B69000005B8000000000000000000000000000000000000004000000000pueue-3.4.1/pueue/src/daemon/network/message_handler/enqueue.rsuse chrono::Local; use pueue_lib::network::message::*; use pueue_lib::state::SharedState; use pueue_lib::task::TaskStatus; use crate::daemon::network::response_helper::*; /// Invoked when calling `pueue enqueue`. /// Enqueue specific stashed tasks. pub fn enqueue(message: EnqueueMessage, state: &SharedState) -> Message { let mut state = state.lock().unwrap(); let filtered_tasks = state.filter_tasks( |task| matches!(task.status, TaskStatus::Stashed { .. 
} | TaskStatus::Locked), Some(message.task_ids), ); for task_id in &filtered_tasks.matching_ids { // We just checked that they're there and the state is locked. It's safe to unwrap. let task = state.tasks.get_mut(task_id).expect("Task should be there."); // Either specify the point of time the task should be enqueued or enqueue the task // immediately. if message.enqueue_at.is_some() { task.status = TaskStatus::Stashed { enqueue_at: message.enqueue_at, }; } else { task.status = TaskStatus::Queued; task.enqueued_at = Some(Local::now()); } } let text = if let Some(enqueue_at) = message.enqueue_at { let enqueue_at = enqueue_at.format("%Y-%m-%d %H:%M:%S"); format!("Tasks will be enqueued at {enqueue_at}") } else { String::from("Tasks are enqueued") }; compile_task_response(&text, filtered_tasks) } 0707010000004A000081A4000000000000000000000001665F1B6900000982000000000000000000000000000000000000003E00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/group.rsuse pueue_lib::network::message::*; use pueue_lib::state::{SharedState, PUEUE_DEFAULT_GROUP}; use super::TaskSender; use crate::daemon::network::message_handler::ok_or_failure_message; use crate::daemon::network::response_helper::ensure_group_exists; use crate::ok_or_return_failure_message; /// Invoked on `pueue groups`. /// Manage groups. /// - Show groups /// - Add group /// - Remove group pub fn group(message: GroupMessage, sender: &TaskSender, state: &SharedState) -> Message { let mut state = state.lock().unwrap(); match message { GroupMessage::List => { // Return information about all groups to the client. 
GroupResponseMessage { groups: state.groups.clone(), } .into() } GroupMessage::Add { name, parallel_tasks, } => { if state.groups.contains_key(&name) { return create_failure_message(format!("Group \"{name}\" already exists")); } // Propagate the message to the TaskHandler, which is responsible for actually // manipulating our internal data let result = sender.send(GroupMessage::Add { name: name.clone(), parallel_tasks, }); ok_or_return_failure_message!(result); create_success_message(format!("Group \"{name}\" is being created")) } GroupMessage::Remove(group) => { if let Err(message) = ensure_group_exists(&mut state, &group) { return message; } if group == PUEUE_DEFAULT_GROUP { return create_failure_message("You cannot delete the default group".to_string()); } // Make sure there are no tasks in that group. if state.tasks.iter().any(|(_, task)| task.group == group) { return create_failure_message( "You cannot remove a group, if there're still tasks in it.".to_string(), ); } // Propagate the message to the TaskHandler, which is responsible for actually // manipulating our internal data let result = sender.send(GroupMessage::Remove(group.clone())); ok_or_return_failure_message!(result); create_success_message(format!("Group \"{group}\" is being removed")) } } } 0707010000004B000081A4000000000000000000000001665F1B69000008C3000000000000000000000000000000000000003D00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/kill.rsuse pueue_lib::network::message::*; use pueue_lib::state::SharedState; use super::{TaskSender, SENDER_ERR}; use crate::daemon::network::response_helper::{ensure_group_exists, task_action_response_helper}; /// Invoked when calling `pueue kill`. /// Forward the kill message to the task handler, which then kills the process. pub fn kill(message: KillMessage, sender: &TaskSender, state: &SharedState) -> Message { let mut state = state.lock().unwrap(); // If a group is selected, make sure it exists. 
if let TaskSelection::Group(group) = &message.tasks { if let Err(message) = ensure_group_exists(&mut state, group) { return message; } } // Construct a response depending on the selected tasks. let response = if let Some(signal) = &message.signal { match &message.tasks { TaskSelection::TaskIds(task_ids) => task_action_response_helper( "Tasks are being killed", task_ids.clone(), |task| task.is_running(), &state, ), TaskSelection::Group(group) => create_success_message(format!( "Sending signal {signal} to all running tasks of group {group}.", )), TaskSelection::All => { create_success_message(format!("Sending signal {signal} to all running tasks.")) } } } else { match &message.tasks { TaskSelection::TaskIds(task_ids) => task_action_response_helper( "Tasks are being killed", task_ids.clone(), |task| task.is_running(), &state, ), TaskSelection::Group(group) => create_success_message(format!( "All tasks of group \"{group}\" are being killed. The group will also be paused!!!" )), TaskSelection::All => { create_success_message("All tasks are being killed. All groups will be paused!!!") } } }; if let Message::Success(_) = response { // Forward the message to the task handler, but only if there is something to kill. sender.send(message).expect(SENDER_ERR); } response } 0707010000004C000081A4000000000000000000000001665F1B690000079D000000000000000000000000000000000000003C00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/log.rsuse std::collections::BTreeMap; use pueue_lib::log::read_and_compress_log_file; use pueue_lib::network::message::*; use pueue_lib::settings::Settings; use pueue_lib::state::SharedState; /// Invoked when calling `pueue log`. /// Return tasks and their output to the client. pub fn get_log(message: LogRequestMessage, state: &SharedState, settings: &Settings) -> Message { let state = { state.lock().unwrap().clone() }; // Return all logs, if no specific task id is specified. 
let task_ids = if message.task_ids.is_empty() { state.tasks.keys().cloned().collect() } else { message.task_ids }; let mut tasks = BTreeMap::new(); for task_id in task_ids.iter() { if let Some(task) = state.tasks.get(task_id) { // We send log output and the task at the same time. // This isn't as efficient as sending the raw compressed data directly, // but it's a lot more convenient for now. let (output, output_complete) = if message.send_logs { match read_and_compress_log_file( *task_id, &settings.shared.pueue_directory(), message.lines, ) { Ok((output, output_complete)) => (Some(output), output_complete), Err(err) => { // Fail early if there's some problem with getting the log output return create_failure_message(format!( "Failed reading process output file: {err:?}" )); } } } else { (None, true) }; let task_log = TaskLogMessage { task: task.clone(), output, output_complete, }; tasks.insert(*task_id, task_log); } } Message::LogResponse(tasks) } 0707010000004D000081A4000000000000000000000001665F1B69000016D6000000000000000000000000000000000000003C00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/mod.rsuse std::fmt::Display; use pueue_lib::network::message::*; use pueue_lib::settings::Settings; use pueue_lib::state::SharedState; use super::TaskSender; use crate::daemon::network::response_helper::*; mod add; mod clean; mod edit; mod enqueue; mod group; mod kill; mod log; mod parallel; mod pause; mod remove; mod restart; mod send; mod start; mod stash; mod switch; pub static SENDER_ERR: &str = "Failed to send message to task handler thread"; pub fn handle_message( message: Message, sender: &TaskSender, state: &SharedState, settings: &Settings, ) -> Message { match message { Message::Add(message) => add::add_task(message, sender, state, settings), Message::Clean(message) => clean::clean(message, state, settings), Message::Edit(message) => edit::edit(message, state, settings), Message::EditRequest(task_id) => edit::edit_request(task_id, state), 
Message::EditRestore(task_id) => edit::edit_restore(task_id, state), Message::Enqueue(message) => enqueue::enqueue(message, state), Message::Group(message) => group::group(message, sender, state), Message::Kill(message) => kill::kill(message, sender, state), Message::Log(message) => log::get_log(message, state, settings), Message::Parallel(message) => parallel::set_parallel_tasks(message, state), Message::Pause(message) => pause::pause(message, sender, state), Message::Remove(task_ids) => remove::remove(task_ids, state, settings), Message::Reset(message) => reset(message, sender), Message::Restart(message) => restart::restart_multiple(message, sender, state, settings), Message::Send(message) => send::send(message, sender, state), Message::Start(message) => start::start(message, sender, state), Message::Stash(task_ids) => stash::stash(task_ids, state), Message::Switch(message) => switch::switch(message, state, settings), Message::Status => get_status(state), _ => create_failure_message("Not yet implemented"), } } /// Invoked when calling `pueue reset`. /// Forward the reset request to the task handler. /// The handler then kills all children and clears the task queue. fn reset(message: ResetMessage, sender: &TaskSender) -> Message { sender.send(message).expect(SENDER_ERR); create_success_message("Everything is being reset right now.") } /// Invoked when calling `pueue status`. /// Return the current state. fn get_status(state: &SharedState) -> Message { let state = state.lock().unwrap().clone(); Message::StatusResponse(Box::new(state)) } fn ok_or_failure_message<T, E: Display>(result: Result<T, E>) -> Result<T, Message> { match result { Ok(inner) => Ok(inner), Err(error) => Err(create_failure_message(format!( "Failed to save state. This is a bug: {error}" ))), } } #[macro_export] macro_rules! 
ok_or_return_failure_message { ($expression:expr) => { match ok_or_failure_message($expression) { Ok(task_id) => task_id, Err(error) => return error, } }; } #[cfg(test)] mod fixtures { use std::collections::HashMap; use std::env::temp_dir; use std::sync::{Arc, Mutex}; use tempfile::TempDir; pub use pueue_lib::settings::Settings; pub use pueue_lib::state::{SharedState, State, PUEUE_DEFAULT_GROUP}; pub use pueue_lib::task::{Task, TaskResult, TaskStatus}; pub fn get_settings() -> (Settings, TempDir) { let tempdir = TempDir::new().expect("Failed to create test pueue directory"); let mut settings = Settings::default(); settings.shared.pueue_directory = Some(tempdir.path().to_owned()); (settings, tempdir) } pub fn get_state() -> (SharedState, Settings, TempDir) { let (settings, tempdir) = get_settings(); // Create the normal pueue directories. let log_dir = tempdir.path().join("log"); if !log_dir.exists() { std::fs::create_dir(log_dir).expect("Failed to create test log dir"); } let task_log_dir = tempdir.path().join("task_log"); if !task_log_dir.exists() { std::fs::create_dir(task_log_dir).expect("Failed to create test task log dir"); } let state = State::new(); (Arc::new(Mutex::new(state)), settings, tempdir) } /// Create a new task with stub data in the given group pub fn get_stub_task_in_group(id: &str, group: &str, status: TaskStatus) -> Task { Task::new( id.to_string(), temp_dir(), HashMap::new(), group.to_string(), status, Vec::new(), 0, None, ) } /// Create a new task with stub data pub fn get_stub_task(id: &str, status: TaskStatus) -> Task { get_stub_task_in_group(id, PUEUE_DEFAULT_GROUP, status) } pub fn get_stub_state() -> (SharedState, Settings, TempDir) { let (state, settings, tempdir) = get_state(); { // Queued task let mut state = state.lock().unwrap(); let task = get_stub_task("0", TaskStatus::Queued); state.add_task(task); // Finished task let task = get_stub_task("1", TaskStatus::Done(TaskResult::Success)); state.add_task(task); // Stashed task let task 
= get_stub_task("2", TaskStatus::Stashed { enqueue_at: None }); state.add_task(task); // Running task let task = get_stub_task("3", TaskStatus::Running); state.add_task(task); // Paused task let task = get_stub_task("4", TaskStatus::Paused); state.add_task(task); } (state, settings, tempdir) } } 0707010000004E000081A4000000000000000000000001665F1B6900000273000000000000000000000000000000000000004100000000pueue-3.4.1/pueue/src/daemon/network/message_handler/parallel.rsuse pueue_lib::network::message::*; use pueue_lib::state::SharedState; use crate::daemon::network::response_helper::*; /// Set the parallel tasks for a specific group. pub fn set_parallel_tasks(message: ParallelMessage, state: &SharedState) -> Message { let mut state = state.lock().unwrap(); let group = match ensure_group_exists(&mut state, &message.group) { Ok(group) => group, Err(message) => return message, }; group.parallel_tasks = message.parallel_tasks; create_success_message(format!( "Parallel tasks setting for group \"{}\" adjusted", &message.group )) } 0707010000004F000081A4000000000000000000000001665F1B69000005B2000000000000000000000000000000000000003E00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/pause.rsuse pueue_lib::network::message::*; use pueue_lib::state::SharedState; use pueue_lib::task::TaskStatus; use super::{TaskSender, SENDER_ERR}; use crate::daemon::network::response_helper::*; /// Invoked when calling `pueue pause`. /// Forward the pause message to the task handler, which then pauses groups/tasks/everything. pub fn pause(message: PauseMessage, sender: &TaskSender, state: &SharedState) -> Message { let mut state = state.lock().unwrap(); // If a group is selected, make sure it exists. if let TaskSelection::Group(group) = &message.tasks { if let Err(message) = ensure_group_exists(&mut state, group) { return message; } } // Construct a response depending on the selected tasks. 
let response = match &message.tasks { TaskSelection::TaskIds(task_ids) => task_action_response_helper( "Tasks are being paused", task_ids.clone(), |task| matches!(task.status, TaskStatus::Running), &state, ), TaskSelection::Group(group) => { create_success_message(format!("Group \"{group}\" is being paused.")) } TaskSelection::All => create_success_message("All queues are being paused."), }; if let Message::Success(_) = response { // Forward the message to the task handler, but only if there is something to pause. sender.send(message).expect(SENDER_ERR); } response } 07070100000050000081A4000000000000000000000001665F1B6900001201000000000000000000000000000000000000003F00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/remove.rsuse pueue_lib::log::clean_log_handles; use pueue_lib::network::message::*; use pueue_lib::settings::Settings; use pueue_lib::state::SharedState; use pueue_lib::task::{Task, TaskStatus}; use super::ok_or_failure_message; use crate::daemon::network::response_helper::*; use crate::daemon::state_helper::{is_task_removable, save_state}; use crate::ok_or_return_failure_message; /// Invoked when calling `pueue remove`. /// Remove tasks from the queue. /// We have to ensure that those tasks aren't running! pub fn remove(task_ids: Vec<usize>, state: &SharedState, settings: &Settings) -> Message { let mut state = state.lock().unwrap(); // Filter all running tasks, since we cannot remove them. let filter = |task: &Task| { matches!( task.status, TaskStatus::Queued | TaskStatus::Stashed { .. } | TaskStatus::Done(_) | TaskStatus::Locked ) }; let mut filtered_tasks = state.filter_tasks(filter, Some(task_ids)); // Don't delete tasks if other tasks depend on them. // However, we allow deleting those tasks if they're supposed to be deleted as well. 
for task_id in filtered_tasks.matching_ids.clone() { if !is_task_removable(&state, &task_id, &filtered_tasks.matching_ids) { filtered_tasks.non_matching_ids.push(task_id); filtered_tasks.matching_ids.retain(|id| id != &task_id); }; } for task_id in &filtered_tasks.matching_ids { state.tasks.remove(task_id); clean_log_handles(*task_id, &settings.shared.pueue_directory()); } ok_or_return_failure_message!(save_state(&state, settings)); compile_task_response("Tasks removed from list", filtered_tasks) } #[cfg(test)] mod tests { use super::super::fixtures::*; use super::*; use pretty_assertions::assert_eq; #[test] fn normal_remove() { let (state, settings, _tempdir) = get_stub_state(); // 3 and 4 aren't allowed to be removed, since they're running. // The rest will succeed. let message = remove(vec![0, 1, 2, 3, 4], &state, &settings); // Return message is correct assert!(matches!(message, Message::Success(_))); if let Message::Success(text) = message { assert_eq!( text, "Tasks removed from list: 0, 1, 2\nThe command failed for tasks: 3, 4" ); }; let state = state.lock().unwrap(); assert_eq!(state.tasks.len(), 2); } #[test] fn removal_of_dependencies() { let (state, settings, _tempdir) = get_stub_state(); { let mut state = state.lock().unwrap(); // Add a task with a dependency to a finished task let mut task = get_stub_task("5", TaskStatus::Queued); task.dependencies = vec![1]; state.add_task(task); // Add a task depending on the previous task -> Linked dependencies let mut task = get_stub_task("6", TaskStatus::Queued); task.dependencies = vec![5]; state.add_task(task); } // Make sure we cannot remove a task with dependencies. 
let message = remove(vec![1], &state, &settings); // Return message is correct assert!(matches!(message, Message::Failure(_))); if let Message::Failure(text) = message { assert_eq!(text, "The command failed for tasks: 1"); }; { let state = state.lock().unwrap(); assert_eq!(state.tasks.len(), 7); } // Make sure we cannot remove a task with recursive dependencies. let message = remove(vec![1, 5], &state, &settings); // Return message is correct assert!(matches!(message, Message::Failure(_))); if let Message::Failure(text) = message { assert_eq!(text, "The command failed for tasks: 1, 5"); }; { let state = state.lock().unwrap(); assert_eq!(state.tasks.len(), 7); } // Make sure we can remove tasks with dependencies if all dependencies are specified. let message = remove(vec![1, 5, 6], &state, &settings); // Return message is correct assert!(matches!(message, Message::Success(_))); if let Message::Success(text) = message { assert_eq!(text, "Tasks removed from list: 1, 5, 6"); }; { let state = state.lock().unwrap(); assert_eq!(state.tasks.len(), 4); } } } 07070100000051000081A4000000000000000000000001665F1B6900000CCC000000000000000000000000000000000000004000000000pueue-3.4.1/pueue/src/daemon/network/message_handler/restart.rsuse chrono::Local; use pueue_lib::settings::Settings; use std::sync::MutexGuard; use pueue_lib::aliasing::insert_alias; use pueue_lib::network::message::*; use pueue_lib::state::{SharedState, State}; use pueue_lib::task::TaskStatus; use super::{task_action_response_helper, TaskSender, SENDER_ERR}; /// This is a small wrapper around the actual in-place task `restart` functionality. /// /// The "not in-place" restart functionality is actually just a copy the finished task + create a /// new task, which is completely handled on the client-side. 
pub fn restart_multiple( message: RestartMessage, sender: &TaskSender, state: &SharedState, settings: &Settings, ) -> Message { let task_ids: Vec<usize> = message.tasks.iter().map(|task| task.task_id).collect(); let mut state = state.lock().unwrap(); // We have to compile the response beforehand. // Otherwise we would no longer know which tasks were actually capable of being restarted. let response = task_action_response_helper( "Tasks restarted", task_ids.clone(), |task| task.is_done(), &state, ); // Actually restart all tasks for task in message.tasks.into_iter() { restart(&mut state, task, message.stashed, settings); } // Tell the task manager to start the task immediately if requested. if message.start_immediately { sender .send(StartMessage { tasks: TaskSelection::TaskIds(task_ids), }) .expect(SENDER_ERR); } response } /// This is invoked whenever a task is actually restarted (in-place) without creating a new task. /// Update a possibly changed path/command/label and reset all info from the previous run. /// /// The "not in-place" restart functionality is actually just a copy of the finished task + create a /// new task, which is completely handled on the client-side. fn restart( state: &mut MutexGuard<State>, to_restart: TaskToRestart, stashed: bool, settings: &Settings, ) { // Check if we actually know this task. let Some(task) = state.tasks.get_mut(&to_restart.task_id) else { return; }; // We cannot restart tasks that haven't finished yet. if !task.is_done() { return; } // Either enqueue the task or stash it. if stashed { task.status = TaskStatus::Stashed { enqueue_at: None }; task.enqueued_at = None; } else { task.status = TaskStatus::Queued; task.enqueued_at = Some(Local::now()); }; // Update command if applicable. if let Some(new_command) = to_restart.command { task.original_command = new_command.clone(); task.command = insert_alias(settings, new_command); } // Update path if applicable. 
if let Some(path) = to_restart.path { task.path = path; } // Update label if applicable. if to_restart.label.is_some() { task.label = to_restart.label; } else if to_restart.delete_label { task.label = None } // Update priority if applicable. if let Some(priority) = to_restart.priority { task.priority = priority; } // Reset all variables of any previous run. task.start = None; task.end = None; } 07070100000052000081A4000000000000000000000001665F1B6900000450000000000000000000000000000000000000003D00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/send.rsuse pueue_lib::network::message::*; use pueue_lib::state::SharedState; use pueue_lib::task::TaskStatus; use super::{TaskSender, SENDER_ERR}; /// Invoked when calling `pueue send`. /// The message will be forwarded to the task handler, which then sends the user input to the process. /// In here we only do some error handling. pub fn send(message: SendMessage, sender: &TaskSender, state: &SharedState) -> Message { // Check whether the task exists and is running. Abort if that's not the case. { let state = state.lock().unwrap(); match state.tasks.get(&message.task_id) { Some(task) => { if task.status != TaskStatus::Running { return create_failure_message("You can only send input to a running task"); } } None => return create_failure_message("No task with this id."), } } // Forward the message to the task handler, which sends the input to the process. sender.send(message).expect(SENDER_ERR); create_success_message("Message is being sent to the process.") } 07070100000053000081A4000000000000000000000001665F1B6900000631000000000000000000000000000000000000003E00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/start.rsuse pueue_lib::network::message::*; use pueue_lib::state::SharedState; use pueue_lib::task::TaskStatus; use super::{TaskSender, SENDER_ERR}; use crate::daemon::network::response_helper::*; /// Invoked when calling `pueue start`. 
/// Forward the start message to the task handler, which then starts the process(es). pub fn start(message: StartMessage, sender: &TaskSender, state: &SharedState) -> Message { let mut state = state.lock().unwrap(); // If a group is selected, make sure it exists. if let TaskSelection::Group(group) = &message.tasks { if let Err(message) = ensure_group_exists(&mut state, group) { return message; } } let response = match &message.tasks { TaskSelection::TaskIds(task_ids) => task_action_response_helper( "Tasks are being started", task_ids.clone(), |task| { matches!( task.status, TaskStatus::Paused | TaskStatus::Queued | TaskStatus::Stashed { .. } ) }, &state, ), TaskSelection::Group(group) => { create_success_message(format!("Group \"{group}\" is being resumed.")) } TaskSelection::All => create_success_message("All queues are being resumed."), }; if let Message::Success(_) = response { // Forward the message to the task handler, but only if something can be started sender.send(message).expect(SENDER_ERR); } // Return a response depending on the selected tasks. response } 07070100000054000081A4000000000000000000000001665F1B6900000368000000000000000000000000000000000000003E00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/stash.rsuse pueue_lib::network::message::*; use pueue_lib::state::SharedState; use pueue_lib::task::TaskStatus; use crate::daemon::network::response_helper::*; /// Invoked when calling `pueue stash`. /// Stash specific queued tasks. /// They won't be executed until they're enqueued or explicitly started. 
pub fn stash(task_ids: Vec<usize>, state: &SharedState) -> Message { let mut state = state.lock().unwrap(); let filtered_tasks = state.filter_tasks( |task| matches!(task.status, TaskStatus::Queued | TaskStatus::Locked), Some(task_ids), ); for task_id in &filtered_tasks.matching_ids { if let Some(ref mut task) = state.tasks.get_mut(task_id) { task.status = TaskStatus::Stashed { enqueue_at: None }; task.enqueued_at = None; } } compile_task_response("Tasks are stashed", filtered_tasks) } 07070100000055000081A4000000000000000000000001665F1B6900001A9A000000000000000000000000000000000000003F00000000pueue-3.4.1/pueue/src/daemon/network/message_handler/switch.rsuse pueue_lib::network::message::*; use pueue_lib::settings::Settings; use pueue_lib::state::SharedState; use pueue_lib::task::TaskStatus; use super::ok_or_failure_message; use crate::daemon::state_helper::save_state; use crate::ok_or_return_failure_message; /// Invoked when calling `pueue switch`. /// Switch the position of two tasks in the upcoming queue. /// We have to ensure that those tasks are either `Queued` or `Stashed` pub fn switch(message: SwitchMessage, state: &SharedState, settings: &Settings) -> Message { let mut state = state.lock().unwrap(); let task_ids = [message.task_id_1, message.task_id_2]; let filtered_tasks = state.filter_tasks( |task| matches!(task.status, TaskStatus::Queued | TaskStatus::Stashed { .. }), Some(task_ids.to_vec()), ); if !filtered_tasks.non_matching_ids.is_empty() { return create_failure_message("Tasks have to be either queued or stashed."); } if task_ids[0] == task_ids[1] { return create_failure_message("You cannot switch a task with itself."); } // Get the tasks. 
Expect them to be there, since we found no mismatch let mut first_task = state.tasks.remove(&task_ids[0]).unwrap(); let mut second_task = state.tasks.remove(&task_ids[1]).unwrap(); // Switch task ids let first_id = first_task.id; let second_id = second_task.id; first_task.id = second_id; second_task.id = first_id; // Put tasks back in again state.tasks.insert(first_task.id, first_task); state.tasks.insert(second_task.id, second_task); for (_, task) in state.tasks.iter_mut() { // If the task depends on both, we can just keep it as it is. if task.dependencies.contains(&first_id) && task.dependencies.contains(&second_id) { continue; } // If one of the ids is in the task's dependency list, replace it with the other one. if let Some(old_id) = task.dependencies.iter_mut().find(|id| *id == &first_id) { *old_id = second_id; task.dependencies.sort_unstable(); } else if let Some(old_id) = task.dependencies.iter_mut().find(|id| *id == &second_id) { *old_id = first_id; task.dependencies.sort_unstable(); } } ok_or_return_failure_message!(save_state(&state, settings)); create_success_message("Tasks have been switched") } #[cfg(test)] mod tests { use pretty_assertions::assert_eq; use tempfile::TempDir; use super::super::fixtures::*; use super::*; fn get_message(task_id_1: usize, task_id_2: usize) -> SwitchMessage { SwitchMessage { task_id_1, task_id_2, } } fn get_test_state() -> (SharedState, Settings, TempDir) { let (state, settings, tempdir) = get_state(); { let mut state = state.lock().unwrap(); let task = get_stub_task("0", TaskStatus::Queued); state.add_task(task); let task = get_stub_task("1", TaskStatus::Stashed { enqueue_at: None }); state.add_task(task); let task = get_stub_task("2", TaskStatus::Queued); state.add_task(task); let task = get_stub_task("3", TaskStatus::Stashed { enqueue_at: None }); state.add_task(task); let mut task = get_stub_task("4", TaskStatus::Queued); task.dependencies = vec![0, 3]; state.add_task(task); let mut task = get_stub_task("5", 
TaskStatus::Stashed { enqueue_at: None }); task.dependencies = vec![1]; state.add_task(task); let mut task = get_stub_task("6", TaskStatus::Queued); task.dependencies = vec![2, 3]; state.add_task(task); } (state, settings, tempdir) } #[test] /// A normal switch between two ids works perfectly fine. fn switch_normal() { let (state, settings, _tempdir) = get_test_state(); let message = switch(get_message(1, 2), &state, &settings); // Return message is correct assert!(matches!(message, Message::Success(_))); if let Message::Success(text) = message { assert_eq!(text, "Tasks have been switched"); }; let state = state.lock().unwrap(); assert_eq!(state.tasks.get(&1).unwrap().command, "2"); assert_eq!(state.tasks.get(&2).unwrap().command, "1"); } #[test] /// Tasks cannot be switched with themselves. fn switch_task_with_itself() { let (state, settings, _tempdir) = get_test_state(); let message = switch(get_message(1, 1), &state, &settings); // Return message is correct assert!(matches!(message, Message::Failure(_))); if let Message::Failure(text) = message { assert_eq!(text, "You cannot switch a task with itself."); }; } #[test] /// If any task that is specified as a dependency gets switched, /// all dependants need to be updated. fn switch_task_with_dependant() { let (state, settings, _tempdir) = get_test_state(); switch(get_message(0, 3), &state, &settings); let state = state.lock().unwrap(); assert_eq!(state.tasks.get(&4).unwrap().dependencies, vec![0, 3]); } #[test] /// A task with two dependencies shouldn't experience any change if those two dependencies /// switched places. fn switch_double_dependency() { let (state, settings, _tempdir) = get_test_state(); switch(get_message(1, 2), &state, &settings); let state = state.lock().unwrap(); assert_eq!(state.tasks.get(&5).unwrap().dependencies, vec![2]); assert_eq!(state.tasks.get(&6).unwrap().dependencies, vec![1, 3]); } #[test] /// You can only switch tasks that are either stashed or queued.
/// Everything else should result in an error message. fn switch_invalid() { let (state, settings, _tempdir) = get_state(); let combinations: Vec<(usize, usize)> = vec![ (0, 1), // Queued + Done (0, 3), // Queued + Stashed (0, 4), // Queued + Running (0, 5), // Queued + Paused (2, 1), // Stashed + Done (2, 3), // Stashed + Stashed (2, 4), // Stashed + Running (2, 5), // Stashed + Paused ]; for ids in combinations { let message = switch(get_message(ids.0, ids.1), &state, &settings); // Assert that we get a Failure message with the correct text. assert!(matches!(message, Message::Failure(_))); if let Message::Failure(text) = message { assert_eq!(text, "Tasks have to be either queued or stashed."); }; } } } 07070100000056000081A4000000000000000000000001665F1B690000006E000000000000000000000000000000000000002C00000000pueue-3.4.1/pueue/src/daemon/network/mod.rspub mod follow_log; pub mod message_handler; pub mod response_helper; pub mod socket; use super::TaskSender; 07070100000057000081A4000000000000000000000001665F1B6900000AA4000000000000000000000000000000000000003800000000pueue-3.4.1/pueue/src/daemon/network/response_helper.rsuse std::sync::MutexGuard; use pueue_lib::network::message::{create_failure_message, create_success_message, Message}; use pueue_lib::state::{FilteredTasks, Group, State}; use pueue_lib::task::Task; use crate::daemon::state_helper::LockedState; /// Check whether the given group exists. Return a failure message if it doesn't. pub fn ensure_group_exists<'state>( state: &'state mut LockedState, group: &str, ) -> Result<&'state mut Group, Message> { let group_keys: Vec<String> = state.groups.keys().cloned().collect(); if let Some(group) = state.groups.get_mut(group) { return Ok(group); } Err(create_failure_message(format!( "Group {group} doesn't exist. Use one of these: {group_keys:?}", ))) } /// Compile a response for actions that affect several given tasks. /// These actions can sometimes only succeed for a part of the given tasks.
/// /// That's why this helper exists: based on a given `filter` criterion, it determines /// for which tasks the action succeeded and for which it failed. pub fn task_action_response_helper<F>( message: &str, task_ids: Vec<usize>, filter: F, state: &MutexGuard<State>, ) -> Message where F: Fn(&Task) -> bool, { // Get all matching/mismatching task_ids for all given ids and statuses. let filtered_tasks = state.filter_tasks(filter, Some(task_ids)); compile_task_response(message, filtered_tasks) } /// Compile a response for instructions with multiple task ids. /// A custom message will be combined with a text about all matching tasks /// and possibly tasks for which the instruction cannot be executed. pub fn compile_task_response(message: &str, filtered_tasks: FilteredTasks) -> Message { let matching_ids: Vec<String> = filtered_tasks .matching_ids .iter() .map(|id| id.to_string()) .collect(); let non_matching_ids: Vec<String> = filtered_tasks .non_matching_ids .iter() .map(|id| id.to_string()) .collect(); let matching_ids = matching_ids.join(", "); // We don't have any mismatching ids, return the simple message. if filtered_tasks.non_matching_ids.is_empty() { return create_success_message(format!("{message}: {matching_ids}")); } let mismatched_message = "The command failed for tasks"; let mismatching_ids = non_matching_ids.join(", "); // All given ids are invalid. if matching_ids.is_empty() { return create_failure_message(format!("{mismatched_message}: {mismatching_ids}")); } // Some ids were valid, some were invalid.
create_success_message(format!( "{message}: {matching_ids}\n{mismatched_message}: {mismatching_ids}", )) } 07070100000058000081A4000000000000000000000001665F1B6900001614000000000000000000000000000000000000002F00000000pueue-3.4.1/pueue/src/daemon/network/socket.rsuse std::time::{Duration, SystemTime}; use anyhow::{bail, Context, Result}; use clap::crate_version; use log::{debug, info, warn}; use tokio::time::sleep; use pueue_lib::error::Error; use pueue_lib::network::message::*; use pueue_lib::network::protocol::*; use pueue_lib::network::secret::read_shared_secret; use pueue_lib::settings::Settings; use pueue_lib::state::SharedState; use crate::daemon::network::follow_log::handle_follow; use crate::daemon::network::message_handler::{handle_message, SENDER_ERR}; use crate::daemon::task_handler::TaskSender; /// Poll the listener and accept new incoming connections. /// Create a new future to handle the message and spawn it. pub async fn accept_incoming( sender: TaskSender, state: SharedState, settings: Settings, ) -> Result<()> { let listener = get_listener(&settings.shared).await?; // Read secret once to prevent multiple disk reads. let secret = read_shared_secret(&settings.shared.shared_secret_path())?; loop { // Poll incoming connections. let stream = match listener.accept().await { Ok(stream) => stream, Err(err) => { warn!("Failed connecting to client: {err:?}"); continue; } }; // Start a new task for the request let sender_clone = sender.clone(); let state_clone = state.clone(); let secret_clone = secret.clone(); let settings_clone = settings.clone(); tokio::spawn(async move { let _result = handle_incoming( stream, sender_clone, state_clone, settings_clone, secret_clone, ) .await; }); } } /// Continuously poll the existing incoming futures. /// In case we received an instruction, handle it and create a response future. /// The response future is added to unix_responses and handled in a separate function. 
async fn handle_incoming( mut stream: GenericStream, sender: TaskSender, state: SharedState, settings: Settings, secret: Vec<u8>, ) -> Result<()> { // Receive the secret once and check whether the client is allowed to connect let payload_bytes = receive_bytes(&mut stream).await?; // Didn't receive any bytes. The client disconnected. if payload_bytes.is_empty() { info!("Client went away"); return Ok(()); } let start = SystemTime::now(); // Return if we got a wrong secret from the client. if payload_bytes != secret { let received_secret = String::from_utf8(payload_bytes)?; warn!("Received invalid secret: {received_secret}"); // Wait for 1 second before closing the socket when receiving an invalid secret. // This mitigates timing attacks. let remaining_sleep_time = Duration::from_secs(1) - SystemTime::now() .duration_since(start) .context("Couldn't calculate duration. Did the system time change?")?; sleep(remaining_sleep_time).await; bail!("Received invalid secret"); } // Send a short `ok` byte to the client, so it knows that the secret has been accepted. // This is also the current version of the daemon, so the client can inform the user if the // daemon needs a restart in case a version difference exists. send_bytes(crate_version!().as_bytes(), &mut stream).await?; // Get the directory for convenience purposes. let pueue_directory = settings.shared.pueue_directory(); loop { // Receive the actual instruction from the client let message_result = receive_message(&mut stream).await; if let Err(Error::EmptyPayload) = message_result { debug!("Client went away"); return Ok(()); } // In case of a deserialization error, send the error to the client and return early. if let Err(Error::MessageDeserialization(err)) = message_result { send_message( create_failure_message(format!("Failed to deserialize message: {err}")), &mut stream, ) .await?; return Ok(()); } let message = message_result?; let response = match message { // The client requested the output of a task.
// Since this involves streaming content, we have to do some special handling. Message::StreamRequest(message) => { handle_follow(&pueue_directory, &mut stream, &state, message).await? } // Initialize the shutdown procedure. // The message is forwarded to the TaskHandler, which is responsible for // gracefully shutting down. // // This is an edge-case as we have to respond to the client first. // Otherwise the daemon might shut down too fast and we wouldn't be // able to send the message back to the client. Message::DaemonShutdown(shutdown_type) => { let response = create_success_message("Daemon is shutting down"); send_message(response, &mut stream).await?; // Notify the task handler. sender.send(shutdown_type).expect(SENDER_ERR); return Ok(()); } _ => { // Process a normal message. handle_message(message, &sender, &state, &settings) } }; // Respond to the client. send_message(response, &mut stream).await?; } }
fn check_for_running_daemon(pid_path: &Path) -> Result<()> { info!("Placing pid file at {pid_path:?}"); let mut file = File::open(pid_path) .map_err(|err| Error::IoPathError(pid_path.to_path_buf(), "opening pid file", err))?; let mut pid = String::new(); file.read_to_string(&mut pid) .map_err(|err| Error::IoPathError(pid_path.to_path_buf(), "reading pid file", err))?; let pid: u32 = pid .parse() .context(format!("Failed to parse PID from file: {pid_path:?}"))?; if process_exists(pid) { bail!( "Pid file already exists and another daemon seems to be running.\n\ Please stop the daemon beforehand or delete the file manually: {pid_path:?}", ); } Ok(()) } /// Create a file containing the current pid of the daemon's main process. /// Fails if it already exists or cannot be created. pub fn create_pid_file(pid_path: &Path) -> Result<()> { // If an old PID file exists, check if the referenced process is still running. // The pid might not have been properly cleaned up, if the machine or Pueue crashed hard. if pid_path.exists() { check_for_running_daemon(pid_path)?; } let mut file = File::create(pid_path) .map_err(|err| Error::IoPathError(pid_path.to_path_buf(), "creating pid file", err))?; file.write_all(std::process::id().to_string().as_bytes()) .map_err(|err| Error::IoPathError(pid_path.to_path_buf(), "writing pid file", err))?; Ok(()) } /// Remove the daemon's pid file. /// Errors if it doesn't exist or cannot be deleted. 
pub fn cleanup_pid_file(pid_path: &Path) -> Result<(), Error> { std::fs::remove_file(pid_path) .map_err(|err| Error::IoPathError(pid_path.to_path_buf(), "removing pid file", err)) } 0707010000005A000081A4000000000000000000000001665F1B6900002067000000000000000000000000000000000000002D00000000pueue-3.4.1/pueue/src/daemon/state_helper.rsuse std::collections::BTreeMap; use std::fs; use std::path::{Path, PathBuf}; use std::sync::MutexGuard; use std::time::SystemTime; use anyhow::{Context, Result}; use chrono::prelude::*; use log::{debug, info}; use pueue_lib::settings::Settings; use pueue_lib::state::{Group, GroupStatus, State, PUEUE_DEFAULT_GROUP}; use pueue_lib::task::{TaskResult, TaskStatus}; pub type LockedState<'a> = MutexGuard<'a, State>; /// Check if a task can be deleted. \ /// We have to check all dependant tasks that haven't finished yet. /// This is necessary to prevent deletion of tasks which are specified as a dependency. /// /// `to_delete` A list of task ids, which should also be deleted. /// This allows removing dependency tasks as well as their dependants. pub fn is_task_removable(state: &LockedState, task_id: &usize, to_delete: &[usize]) -> bool { // Get all task ids of any dependant tasks. let dependants: Vec<usize> = state .tasks .iter() .filter(|(_, task)| { task.dependencies.contains(task_id) && !matches!(task.status, TaskStatus::Done(_)) }) .map(|(_, task)| task.id) .collect(); if dependants.is_empty() { return true; } // Check if the dependants are supposed to be deleted as well. let should_delete_dependants = dependants.iter().all(|task_id| to_delete.contains(task_id)); if !should_delete_dependants { return false; } // Lastly, do a recursive check if there are any dependants on our dependants dependants .iter() .all(|task_id| is_task_removable(state, task_id, to_delete)) } /// A small helper for handling task failures. \ /// Users can specify whether they want to pause the task's group or the /// whole daemon on a failed task.
This function wraps that logic and decides if anything should be /// paused depending on the current settings. /// /// `group` should be the group of the failed task. pub fn pause_on_failure(state: &mut LockedState, settings: &Settings, group: &str) { if settings.daemon.pause_group_on_failure { if let Some(group) = state.groups.get_mut(group) { group.status = GroupStatus::Paused; } } else if settings.daemon.pause_all_on_failure { state.set_status_for_all_groups(GroupStatus::Paused); } } /// Do a full reset of the state. /// This doesn't reset any processes! pub fn reset_state(state: &mut LockedState, settings: &Settings) -> Result<()> { backup_state(state, settings)?; state.tasks = BTreeMap::new(); state.set_status_for_all_groups(GroupStatus::Running); save_state(state, settings) } /// Convenience wrapper around save_to_file. pub fn save_state(state: &State, settings: &Settings) -> Result<()> { save_state_to_file(state, settings, false) } /// Save the current state in a file with a timestamp. /// At the same time remove old state logs from the log directory. /// This function is called when large changes to the state are applied, e.g. clean/reset. pub fn backup_state(state: &LockedState, settings: &Settings) -> Result<()> { save_state_to_file(state, settings, true)?; rotate_state(settings).context("Failed to rotate old log files")?; Ok(()) } /// Save the current state to disk. \ /// We do this to restore in case of a crash. \ /// If log == true, the file will be saved with a time stamp. /// /// In comparison to the daemon -> client communication, the state is saved /// as JSON for readability and debugging purposes.
fn save_state_to_file(state: &State, settings: &Settings, log: bool) -> Result<()> { let serialized = serde_json::to_string(&state).context("Failed to serialize state.")?; let path = settings.shared.pueue_directory(); let (temp, real) = if log { let path = path.join("log"); let now: DateTime<Utc> = Utc::now(); let time = now.format("%Y-%m-%d_%H-%M-%S"); ( path.join(format!("{time}_state.json.partial")), path.join(format!("{time}_state.json")), ) } else { (path.join("state.json.partial"), path.join("state.json")) }; // Write to temporary log file first, to prevent loss due to crashes. fs::write(&temp, serialized).context("Failed to write temp file while saving state.")?; // Overwrite the original with the temp file, if everything went fine. fs::rename(&temp, &real).context("Failed to overwrite old state while saving state")?; if log { debug!("State backup created at: {real:?}"); } else { debug!("State saved at: {real:?}"); } Ok(()) } /// Restore the last state from a previous session. \ /// The state is stored as json in the `pueue_directory`. /// /// If the state cannot be deserialized, an empty default state will be used instead. \ /// All groups with queued tasks will be automatically paused to prevent unwanted execution. pub fn restore_state(pueue_directory: &Path) -> Result<Option<State>> { let path = pueue_directory.join("state.json"); // Ignore if the file doesn't exist. It doesn't have to. if !path.exists() { info!("Couldn't find state from previous session at location: {path:?}"); return Ok(None); } info!("Restoring state"); // Try to load the file. let data = fs::read_to_string(&path).context("State restore: Failed to read file.")?; // Try to deserialize the state file. let mut state: State = serde_json::from_str(&data).context("Failed to deserialize state.")?; // Restore all tasks. // While restoring the tasks, check for any invalid/broken statuses.
for (_, task) in state.tasks.iter_mut() { // Handle ungraceful shutdowns while executing tasks. if task.status == TaskStatus::Running || task.status == TaskStatus::Paused { info!( "Setting task {} with previous status {:?} to new status {:?}", task.id, task.status, TaskResult::Killed ); task.status = TaskStatus::Done(TaskResult::Killed); } // Handle crash during editing of the task command. if task.status == TaskStatus::Locked { task.status = TaskStatus::Stashed { enqueue_at: None }; } // Go through all tasks and move any task whose group is no longer // listed in the configuration file to the default group. let group = match state.groups.get_mut(&task.group) { Some(group) => group, None => { task.set_default_group(); state .groups .entry(PUEUE_DEFAULT_GROUP.into()) .or_insert(Group { status: GroupStatus::Running, parallel_tasks: 1, }) } }; // If there are any queued tasks, pause the group. // This should prevent any unwanted execution of tasks due to a system crash. if task.status == TaskStatus::Queued { info!( "Pausing group {} to prevent unwanted execution of previous tasks", &task.group ); group.status = GroupStatus::Paused; } } Ok(Some(state)) } /// Remove old logs that aren't needed any longer. fn rotate_state(settings: &Settings) -> Result<()> { let path = settings.shared.pueue_directory().join("log"); // Get all log files in the directory with their respective system time. let mut entries: BTreeMap<SystemTime, PathBuf> = BTreeMap::new(); let mut directory_list = fs::read_dir(path)?; while let Some(Ok(entry)) = directory_list.next() { let path = entry.path(); let metadata = entry.metadata()?; let time = metadata.modified()?; entries.insert(time, path); } // Remove all files above the threshold. // Old files are removed first (implicitly by the BTree order).
let mut number_entries = entries.len(); let mut iter = entries.iter(); while number_entries > 10 { if let Some((_, path)) = iter.next() { fs::remove_file(path)?; number_entries -= 1; } } Ok(()) } 0707010000005B000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002A00000000pueue-3.4.1/pueue/src/daemon/task_handler0707010000005C000081A4000000000000000000000001665F1B6900001300000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/src/daemon/task_handler/callback.rsuse handlebars::RenderError; use super::*; impl TaskHandler { /// Users can specify a callback that's fired whenever a task finishes. /// Execute the callback by spawning a new subprocess. pub fn spawn_callback(&mut self, task: &Task) { // Return early, if there's no callback specified let Some(template_string) = &self.settings.daemon.callback else { return; }; // Build the command to be called from the template string in the configuration file. let callback_command = match self.build_callback_command(task, template_string) { Ok(callback_command) => callback_command, Err(err) => { error!("Failed to create callback command from template with error: {err}"); return; } }; let mut command = compile_shell_command(&self.settings, &callback_command); // Spawn the callback subprocess and log if it fails. let spawn_result = command.spawn(); let child = match spawn_result { Err(error) => { error!("Failed to spawn callback with error: {error}"); return; } Ok(child) => child, }; debug!("Spawned callback for task {}", task.id); self.callbacks.push(child); } /// Take the callback template string from the configuration and insert all parameters from the /// finished task. pub fn build_callback_command( &self, task: &Task, template_string: &str, ) -> Result<String, RenderError> { // Init Handlebars. We set to strict, as we want to show an error on missing variables. let mut handlebars = Handlebars::new(); handlebars.set_strict_mode(true); // Add templating variables. 
let mut parameters = HashMap::new(); parameters.insert("id", task.id.to_string()); parameters.insert("command", task.command.clone()); parameters.insert("path", (*task.path.to_string_lossy()).to_owned()); parameters.insert("group", task.group.clone()); // Result takes the TaskResult Enum strings, unless it didn't finish yet. if let TaskStatus::Done(result) = &task.status { parameters.insert("result", result.to_string()); } else { parameters.insert("result", "None".into()); } // Format and insert start and end times. let print_time = |time: Option<DateTime<Local>>| { time.map(|time| time.timestamp().to_string()) .unwrap_or_default() }; parameters.insert("start", print_time(task.start)); parameters.insert("end", print_time(task.end)); // Read the last lines of the process' output and make it available. if let Ok(output) = read_last_log_file_lines( task.id, &self.pueue_directory, self.settings.daemon.callback_log_lines, ) { parameters.insert("output", output); } else { parameters.insert("output", "".to_string()); } let out_path = get_log_path(task.id, &self.pueue_directory); // Using Display impl of PathBuf which isn't necessarily a perfect // representation of the path but should work for most cases here parameters.insert("output_path", out_path.display().to_string()); // Get the exit code if let TaskStatus::Done(result) = &task.status { match result { TaskResult::Success => parameters.insert("exit_code", "0".into()), TaskResult::Failed(code) => parameters.insert("exit_code", code.to_string()), _ => parameters.insert("exit_code", "None".into()), }; } else { parameters.insert("exit_code", "None".into()); } handlebars.render_template(template_string, &parameters) } /// Look at all running callbacks and log any errors. /// If everything went smoothly, simply remove them from the list. pub fn check_callbacks(&mut self) { let mut finished = Vec::new(); for (id, child) in self.callbacks.iter_mut().enumerate() { match child.try_wait() { // Handle a child error.
Err(error) => { error!("Callback failed with error {error:?}"); finished.push(id); } // Child process did not exit yet. Ok(None) => continue, Ok(exit_status) => { info!("Callback finished with exit code {exit_status:?}"); finished.push(id); } } } finished.reverse(); for id in finished.iter() { self.callbacks.remove(*id); } } } 0707010000005D000081A4000000000000000000000001665F1B6900000F7C000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/src/daemon/task_handler/children.rsuse command_group::GroupChild; use std::collections::BTreeMap; /// This structure is needed to manage worker pools for groups. /// It's a newtype pattern around a nested BTreeMap, which implements some convenience functions. /// /// The datastructure represents the following data: /// BTreeMap<group_name, BTreeMap<group_worker_id, (task_id, subprocess_handle)>> pub struct Children(pub BTreeMap<String, BTreeMap<usize, (usize, GroupChild)>>); impl Children { /// Returns whether there are any active tasks across all groups. pub fn has_active_tasks(&self) -> bool { self.0.iter().any(|(_, pool)| !pool.is_empty()) } /// A convenience function to check whether there's a child with a given task_id. /// We have to do a nested linear search, as this datastructure isn't indexed via task_ids. pub fn has_child(&self, task_id: usize) -> bool { for pool in self.0.values() { for (child_task_id, _) in pool.values() { if child_task_id == &task_id { return true; } } } false } /// A convenience function to get a mutable child by its respective task_id. /// We have to do a nested linear search over all children of all pools, /// because this datastructure isn't indexed via task_ids. pub fn get_child_mut(&mut self, task_id: usize) -> Option<&mut GroupChild> { for pool in self.0.values_mut() { for (child_task_id, child) in pool.values_mut() { if child_task_id == &task_id { return Some(child); } } } None } /// A convenience function to get a list with all task_ids of all children.
pub fn all_task_ids(&self) -> Vec<usize> { let mut task_ids = Vec::new(); for pool in self.0.values() { for (task_id, _) in pool.values() { task_ids.push(*task_id) } } task_ids } /// Returns the next free worker slot for a given group. /// This function doesn't take Pueue's configuration into account; it simply returns the next /// free integer key, starting from 0. /// /// This function should only be called when spawning a new process. /// At this point, we're sure that the worker pool for the given group already exists, hence /// the expect call. pub fn get_next_group_worker(&self, group: &str) -> usize { let pool = self .0 .get(group) .expect("The worker pool should be initialized when getting the next worker id."); // This does a simple linear scan over the worker keys of the process group. // Keys in a BTreeMap are ordered, which is why we can start at 0 and check for each entry // if it is the same as our current id. // // E.g. if all slots up to the last one are full, we walk through all keys and increment by // one each time. // If the second slot is free, we increment once, break the loop in the second iteration // and return the new id. let mut next_worker_id = 0; for worker_id in pool.keys() { if worker_id != &next_worker_id { break; } next_worker_id += 1; } next_worker_id } /// Inserts a new child into the worker pool of the given group. /// /// This function should only be called when spawning a new process. /// At this point, we're sure that the worker pool for the given group already exists, hence /// the expect call.
pub fn add_child(&mut self, group: &str, worker_id: usize, task_id: usize, child: GroupChild) { let pool = self .0 .get_mut(group) .expect("The worker pool should be initialized when inserting a new child."); pool.insert(worker_id, (task_id, child)); } } 0707010000005E000081A4000000000000000000000001665F1B69000008BC000000000000000000000000000000000000003A00000000pueue-3.4.1/pueue/src/daemon/task_handler/dependencies.rsuse super::*; use pueue_lib::state::Group; impl TaskHandler { /// Ensure that no `Queued` tasks have any failed dependencies. /// Otherwise set their status to `Done` and result to `DependencyFailed`. pub fn check_failed_dependencies(&mut self) { // Clone the state ref, so we don't have two mutable borrows later on. let state_ref = self.state.clone(); let mut state = state_ref.lock().unwrap(); // Get id's of all tasks with failed dependencies let has_failed_deps: Vec<_> = state .tasks .iter() .filter(|(_, task)| task.status == TaskStatus::Queued && !task.dependencies.is_empty()) .filter_map(|(id, task)| { // At this point we got all queued tasks with dependencies. // Go through all dependencies and ensure they didn't fail. let failed = task .dependencies .iter() .flat_map(|id| state.tasks.get(id)) .filter(|task| task.failed()) .map(|task| task.id) .next(); failed.map(|f| (*id, f)) }) .collect(); // Update the state of all tasks with failed dependencies. for (id, _) in has_failed_deps { // Get the task's group, since we have to check if it's paused. let group = if let Some(task) = state.tasks.get(&id) { task.group.clone() } else { continue; }; // Only update the status, if the group isn't paused. // This allows users to fix and restart dependencies in-place without // breaking the dependency chain. if let Some(&Group { status: GroupStatus::Paused, .. 
}) = state.groups.get(&group) { continue; } let task = state.tasks.get_mut(&id).unwrap(); task.status = TaskStatus::Done(TaskResult::DependencyFailed); task.start = Some(Local::now()); task.end = Some(Local::now()); self.spawn_callback(task); } } } 0707010000005F000081A4000000000000000000000001665F1B6900001329000000000000000000000000000000000000003900000000pueue-3.4.1/pueue/src/daemon/task_handler/finish_task.rsuse anyhow::Context; use super::*; use crate::daemon::state_helper::{pause_on_failure, save_state}; use crate::ok_or_shutdown; impl TaskHandler { /// Check whether there are any finished processes /// In case there are, handle them and update the shared state pub fn handle_finished_tasks(&mut self) { let finished = self.get_finished(); // Nothing to do. Early return if finished.is_empty() { return; } // Clone the state ref, so we don't have two mutable borrows later on. let state_ref = self.state.clone(); let mut state = state_ref.lock().unwrap(); for ((task_id, group, worker_id), error) in finished.iter() { // Handle std::io errors on child processes. // I have never seen something like this, but it might happen. 
if let Some(error) = error { let (_taks_id, _child) = self .children .0 .get_mut(group) .expect("Worker group must exist when handling finished tasks.") .remove(worker_id) .expect("Errored child went missing while handling finished task."); let group = { let task = state.tasks.get_mut(task_id).unwrap(); task.status = TaskStatus::Done(TaskResult::Errored); task.end = Some(Local::now()); self.spawn_callback(task); task.group.clone() }; error!("Child {} failed with io::Error: {:?}", task_id, error); pause_on_failure(&mut state, &self.settings, &group); continue; } // Handle any tasks that exited with some kind of exit code let (_task_id, mut child) = self .children .0 .get_mut(group) .expect("Worker group must exist when handling finished tasks.") .remove(worker_id) .expect("Child of task {} went away while handling finished task."); // Get the exit code of the child. // Errors really shouldn't happen in here, since we already checked if it's finished // with try_wait() before. let exit_code_result = child.wait(); let exit_code = exit_code_result .context(format!( "Failed on wait() for finished task {task_id} with error: {error:?}" )) .unwrap() .code(); // Processes with exit code 0 exited successfully // Processes with `None` have been killed by a Signal let result = match exit_code { Some(0) => TaskResult::Success, Some(exit_code) => TaskResult::Failed(exit_code), None => TaskResult::Killed, }; // Update all properties on the task and get the group for later let group = { let task = state .tasks .get_mut(task_id) .expect("Task was removed before child process has finished!"); task.status = TaskStatus::Done(result.clone()); task.end = Some(Local::now()); self.spawn_callback(task); task.group.clone() }; if let TaskResult::Failed(_) = result { pause_on_failure(&mut state, &self.settings, &group); } // Already remove the output files, if the daemon is being reset anyway if self.full_reset { clean_log_handles(*task_id, &self.pueue_directory); } } ok_or_shutdown!(self, 
save_state(&state, &self.settings)); } /// Gather all finished tasks and sort them by finished and errored. /// Returns a list of finished task ids and whether they errored or not. fn get_finished(&mut self) -> Vec<((usize, String, usize), Option<std::io::Error>)> { let mut finished = Vec::new(); for (group, children) in self.children.0.iter_mut() { for (worker_id, (task_id, child)) in children.iter_mut() { match child.try_wait() { // Handle a child error. Err(error) => { finished.push(((*task_id, group.clone(), *worker_id), Some(error))); } // Child process did not exit yet Ok(None) => continue, Ok(_exit_status) => { info!("Task {task_id} just finished"); finished.push(((*task_id, group.clone(), *worker_id), None)); } } } } finished } } 07070100000060000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000003300000000pueue-3.4.1/pueue/src/daemon/task_handler/messages07070100000061000081A4000000000000000000000001665F1B6900000DBC000000000000000000000000000000000000003C00000000pueue-3.4.1/pueue/src/daemon/task_handler/messages/group.rsuse std::collections::BTreeMap; use log::{error, info}; use pueue_lib::network::message::GroupMessage; use crate::daemon::state_helper::save_state; use crate::daemon::task_handler::{Shutdown, TaskHandler}; use crate::ok_or_shutdown; impl TaskHandler { /// Handle the addition and the removal of groups. /// /// This is done in the TaskHandler, as we also have to create/remove worker pools. /// I.e. 
we have to touch three things: /// - state.groups /// - state.config.daemon.groups /// - self.children pub fn handle_group_message(&mut self, message: GroupMessage) { let cloned_state_mutex = self.state.clone(); let mut state = cloned_state_mutex.lock().unwrap(); match message { GroupMessage::List => {} GroupMessage::Add { name, parallel_tasks, } => { if state.groups.contains_key(&name) { error!("Group \"{name}\" already exists"); return; } let group = state.create_group(&name); if let Some(parallel_tasks) = parallel_tasks { group.parallel_tasks = parallel_tasks; } info!("New group \"{name}\" has been created"); // Create the worker pool. self.children.0.insert(name, BTreeMap::new()); // Persist the state. ok_or_shutdown!(self, save_state(&state, &self.settings)); } GroupMessage::Remove(group) => { if !state.groups.contains_key(&group) { error!("Group \"{group}\" to be removed doesn't exist"); return; } // Make sure there are no tasks in that group. if state.tasks.iter().any(|(_, task)| task.group == group) { error!("Tried to remove group \"{group}\", while it still contained tasks."); return; } if let Err(error) = state.remove_group(&group) { error!("Error while removing group: \"{error}\""); return; } // Make sure the worker pool exists and is empty. // There shouldn't be any children, if there are no tasks in this group. // Those are critical errors, as they indicate desynchronization inside our // internal datastructures, which is really bad. if let Some(pool) = self.children.0.get(&group) { if !pool.is_empty() { error!("Encountered a non-empty worker pool, while removing a group. This is a critical error. Please report this bug."); self.initiate_shutdown(Shutdown::Emergency); return; } } else { error!("Encountered a group without a worker pool, while removing a group. This is a critical error. Please report this bug."); self.initiate_shutdown(Shutdown::Emergency); return; } // Actually remove the worker pool.
self.children.0.remove(&group); // Persist the state. ok_or_shutdown!(self, save_state(&state, &self.settings)); info!("Group \"{group}\" has been removed"); } } } } 07070100000062000081A4000000000000000000000001665F1B69000013E7000000000000000000000000000000000000003B00000000pueue-3.4.1/pueue/src/daemon/task_handler/messages/kill.rsuse log::{error, info, warn}; use pueue_lib::network::message::{Signal, TaskSelection}; use pueue_lib::process_helper::*; use pueue_lib::state::GroupStatus; use pueue_lib::task::TaskStatus; use crate::daemon::state_helper::{save_state, LockedState}; use crate::daemon::task_handler::{Shutdown, TaskHandler}; use crate::ok_or_shutdown; impl TaskHandler { /// Kill specific tasks or groups. /// /// By default, this kills tasks with Rust's subprocess handling "kill" logic. /// However, the user can decide to send unix signals to the processes as well. /// /// `issued_by_user` This is `true` when a kill is issued by an actual user. /// It is `false`, if the daemon resets or during shutdown. /// /// In case `true` is given and a `group` or `all` are killed, the affected groups should /// be paused under some circumstances. This is mostly to prevent any further task execution /// during an emergency. These circumstances are: /// - There're further queued or scheduled tasks in a killed group. /// /// `signal` Don't kill the task as usual, but rather send a unix process signal. pub fn kill(&mut self, tasks: TaskSelection, issued_by_user: bool, signal: Option<Signal>) { let cloned_state_mutex = self.state.clone(); let mut state = cloned_state_mutex.lock().unwrap(); // Get the keys of all tasks that should be killed let task_ids = match tasks { TaskSelection::TaskIds(task_ids) => task_ids, TaskSelection::Group(group_name) => { // Ensure that a given group exists. (Might not happen due to concurrency) if !state.groups.contains_key(&group_name) { return; }; // Check whether the group should be paused before killing the tasks.
if should_pause_group(&state, issued_by_user, &group_name) { let group = state.groups.get_mut(&group_name).unwrap(); group.status = GroupStatus::Paused; } // Determine all running or paused tasks in that group. let filtered_tasks = state.filter_tasks_of_group( |task| matches!(task.status, TaskStatus::Running | TaskStatus::Paused), &group_name, ); info!("Killing tasks of group {group_name}"); filtered_tasks.matching_ids } TaskSelection::All => { // Pause all groups, if applicable let group_names: Vec<String> = state.groups.keys().cloned().collect(); for group_name in group_names { if should_pause_group(&state, issued_by_user, &group_name) { state.set_status_for_all_groups(GroupStatus::Paused); } } info!("Killing all running tasks"); self.children.all_task_ids() } }; for task_id in task_ids { if let Some(signal) = signal.clone() { self.send_internal_signal(task_id, signal); } else { self.kill_task(task_id); } } ok_or_shutdown!(self, save_state(&state, &self.settings)); } /// Send a signal to a specific child process. /// This is a wrapper around [send_signal_to_child], which does a little bit of /// additional error handling. pub fn send_internal_signal(&mut self, task_id: usize, signal: Signal) { let child = match self.children.get_child_mut(task_id) { Some(child) => child, None => { warn!("Tried to kill non-existing child: {task_id}"); return; } }; if let Err(err) = send_signal_to_child(child, signal) { warn!("Failed to send signal to task {task_id} with error: {err}"); }; } /// Kill a specific task and handle it accordingly. /// Triggered on `reset` and `kill`. pub fn kill_task(&mut self, task_id: usize) { if let Some(child) = self.children.get_child_mut(task_id) { kill_child(task_id, child).unwrap_or_else(|err| { warn!("Failed to send kill to task {task_id} child process {child:?} with error {err:?}"); }) } else { warn!("Tried to kill non-existing child: {task_id}"); } } } /// Determine, whether a group should be paused during a kill command. 
/// It should only be paused if: /// - The kill was issued by the user, i.e. it wasn't issued by a system during shutdown/reset. /// - The group that's being killed must have queued or stashed-enqueued tasks. fn should_pause_group(state: &LockedState, issued_by_user: bool, group: &str) -> bool { if !issued_by_user { return false; } // Check if there're tasks that're queued or enqueued. let filtered_tasks = state.filter_tasks_of_group(|task| task.is_queued(), group); !filtered_tasks.matching_ids.is_empty() } 07070100000063000081A4000000000000000000000001665F1B6900000534000000000000000000000000000000000000003A00000000pueue-3.4.1/pueue/src/daemon/task_handler/messages/mod.rsuse std::time::Duration; use log::warn; use pueue_lib::network::message::*; use crate::daemon::task_handler::TaskHandler; mod group; mod kill; mod pause; mod send; mod start; impl TaskHandler { /// Some client instructions require immediate action by the task handler /// This function is also responsible for waiting pub fn receive_messages(&mut self) { // Sleep for a few milliseconds. We don't want to hurt the CPU. 
let timeout = Duration::from_millis(200); if let Ok(message) = self.receiver.recv_timeout(timeout) { self.handle_message(message); }; } fn handle_message(&mut self, message: Message) { match message { Message::Pause(message) => self.pause(message.tasks, message.wait), Message::Start(message) => self.start(message.tasks), Message::Kill(message) => self.kill(message.tasks, true, message.signal), Message::Send(message) => self.send(message.task_id, message.input), Message::Reset(_) => self.reset(), Message::Group(message) => self.handle_group_message(message), Message::DaemonShutdown(shutdown) => { self.initiate_shutdown(shutdown); } _ => warn!("Received unhandled message {message:?}"), } } } 07070100000064000081A4000000000000000000000001665F1B69000009E2000000000000000000000000000000000000003C00000000pueue-3.4.1/pueue/src/daemon/task_handler/messages/pause.rsuse log::{error, info}; use pueue_lib::network::message::TaskSelection; use pueue_lib::state::GroupStatus; use pueue_lib::task::TaskStatus; use crate::daemon::state_helper::{save_state, LockedState}; use crate::daemon::task_handler::{ProcessAction, Shutdown, TaskHandler}; use crate::ok_or_shutdown; impl TaskHandler { /// Pause specific tasks or groups. /// /// `wait` decides whether running tasks will be kept running until they finish on their own. pub fn pause(&mut self, selection: TaskSelection, wait: bool) { let cloned_state_mutex = self.state.clone(); let mut state = cloned_state_mutex.lock().unwrap(); // Get the keys of all tasks that should be paused let keys: Vec<usize> = match selection { TaskSelection::TaskIds(task_ids) => task_ids, TaskSelection::Group(group_name) => { // Ensure that a given group exists. (Might not happen due to concurrency) let group = match state.groups.get_mut(&group_name) { Some(group) => group, None => return, }; // Pause a specific group.
group.status = GroupStatus::Paused; info!("Pausing group {group_name}"); let filtered_tasks = state.filter_tasks_of_group( |task| matches!(task.status, TaskStatus::Running), &group_name, ); filtered_tasks.matching_ids } TaskSelection::All => { // Pause all groups, since we're pausing the whole daemon. state.set_status_for_all_groups(GroupStatus::Paused); info!("Pausing everything"); self.children.all_task_ids() } }; // Pause all tasks that were found. if !wait { for id in keys { self.pause_task(&mut state, id); } } ok_or_shutdown!(self, save_state(&state, &self.settings)); } /// Pause a specific task. /// Send a signal to the process to actually pause the OS process. fn pause_task(&mut self, state: &mut LockedState, id: usize) { match self.perform_action(id, ProcessAction::Pause) { Err(err) => error!("Failed pausing task {id}: {err:?}"), Ok(success) => { if success { state.change_status(id, TaskStatus::Paused); } } } } } 07070100000065000081A4000000000000000000000001665F1B69000002E4000000000000000000000000000000000000003B00000000pueue-3.4.1/pueue/src/daemon/task_handler/messages/send.rsuse std::io::Write; use log::{error, warn}; use crate::daemon::task_handler::TaskHandler; impl TaskHandler { /// Send some input to a child process' stdin. 
pub fn send(&mut self, task_id: usize, input: String) { let child = match self.children.get_child_mut(task_id) { Some(child) => child, None => { warn!("Task {task_id} finished before input could be sent"); return; } }; { let child_stdin = child.inner().stdin.as_mut().unwrap(); if let Err(err) = child_stdin.write_all(&input.into_bytes()) { error!("Failed to send input to task {task_id} with err {err:?}"); }; } } } 07070100000066000081A4000000000000000000000001665F1B6900000E1B000000000000000000000000000000000000003C00000000pueue-3.4.1/pueue/src/daemon/task_handler/messages/start.rsuse log::{error, info, warn}; use pueue_lib::network::message::TaskSelection; use pueue_lib::state::GroupStatus; use pueue_lib::task::TaskStatus; use crate::daemon::state_helper::{save_state, LockedState}; use crate::daemon::task_handler::{ProcessAction, Shutdown, TaskHandler}; use crate::ok_or_shutdown; impl TaskHandler { /// Start specific tasks or groups. /// /// By default, this command only resumes tasks. /// However, if specific task_ids are provided, tasks can actually be force-started. /// Of course, they can only be started if they're in a valid status, i.e. Queued/Stashed. pub fn start(&mut self, tasks: TaskSelection) { let cloned_state_mutex = self.state.clone(); let mut state = cloned_state_mutex.lock().unwrap(); let task_ids = match tasks { TaskSelection::TaskIds(task_ids) => { // Start specific tasks. // This is handled differently and results in an early return, as this branch is // capable of force-spawning processes, instead of simply resuming tasks. for task_id in task_ids { // Continue all children that are simply paused if self.children.has_child(task_id) { self.continue_task(&mut state, task_id); } else { // Start processes for all tasks that haven't been started yet self.start_process(task_id, &mut state); } } ok_or_shutdown!(self, save_state(&state, &self.settings)); return; } TaskSelection::Group(group_name) => { // Ensure that a given group exists. 
(Might not happen due to concurrency) let group = match state.groups.get_mut(&group_name) { Some(group) => group, None => return, }; // Set the group to running. group.status = GroupStatus::Running; info!("Resuming group {}", &group_name); let filtered_tasks = state.filter_tasks_of_group( |task| matches!(task.status, TaskStatus::Paused), &group_name, ); filtered_tasks.matching_ids } TaskSelection::All => { // Resume all groups and the default queue info!("Resuming everything"); state.set_status_for_all_groups(GroupStatus::Running); self.children.all_task_ids() } }; // Resume all specified paused tasks for task_id in task_ids { self.continue_task(&mut state, task_id); } ok_or_shutdown!(self, save_state(&state, &self.settings)); } /// Send a start signal to a paused task to continue execution. fn continue_task(&mut self, state: &mut LockedState, task_id: usize) { // Task doesn't exist if !self.children.has_child(task_id) { return; } // Task is already done if state.tasks.get(&task_id).unwrap().is_done() { return; } let success = match self.perform_action(task_id, ProcessAction::Resume) { Err(err) => { warn!("Failed to resume task {}: {:?}", task_id, err); false } Ok(success) => success, }; if success { state.change_status(task_id, TaskStatus::Running); } } } 07070100000067000081A4000000000000000000000001665F1B690000290E000000000000000000000000000000000000003100000000pueue-3.4.1/pueue/src/daemon/task_handler/mod.rsuse std::collections::{BTreeMap, HashMap}; use std::path::PathBuf; use std::process::Child; use std::process::Stdio; use std::sync::mpsc::{Receiver, SendError, Sender}; use anyhow::Result; use chrono::prelude::*; use command_group::CommandGroup; use handlebars::Handlebars; use log::{debug, error, info}; use pueue_lib::log::*; use pueue_lib::network::message::*; use pueue_lib::network::protocol::socket_cleanup; use pueue_lib::process_helper::*; use pueue_lib::settings::Settings; use pueue_lib::state::{GroupStatus, SharedState}; use pueue_lib::task::{Task, 
TaskResult, TaskStatus}; use crate::daemon::pid::cleanup_pid_file; use crate::daemon::state_helper::{reset_state, save_state}; mod callback; /// A helper newtype struct, which implements convenience methods for our child process management /// datastructure. mod children; /// Logic for handling dependencies mod dependencies; /// Logic for finishing and cleaning up completed tasks. mod finish_task; /// This module contains all logic that's triggered by messages received via the mpsc channel. /// These messages are sent by the threads that handle the client messages. mod messages; /// Everything regarding actually spawning task processes. mod spawn_task; use self::children::Children; /// This is a little helper macro, which looks at a critical result and shuts the /// TaskHandler down, if an error occurred. This is mostly used if the state cannot /// be written due to IO errors. /// Those errors are considered unrecoverable and we should initiate a graceful shutdown /// immediately. #[macro_export] macro_rules! ok_or_shutdown { ($task_manager:ident, $result:expr) => { match $result { Err(err) => { error!("Initializing graceful shutdown. Encountered error in TaskHandler: {err}"); $task_manager.initiate_shutdown(Shutdown::Emergency); return; } Ok(inner) => inner, } }; } /// Sender<TaskMessage> wrapper that takes Into<Message> as a convenience option #[derive(Debug, Clone)] pub struct TaskSender { sender: Sender<Message>, } impl TaskSender { pub fn new(sender: Sender<Message>) -> Self { Self { sender } } #[inline] pub fn send<T>(&self, message: T) -> Result<(), SendError<Message>> where T: Into<Message>, { self.sender.send(message.into()) } } pub struct TaskHandler { /// The state that's shared between the TaskHandler and the message handling logic. state: SharedState, /// The receiver for the MPSC channel that's used to push notifications from our message /// handling to the TaskHandler.
receiver: Receiver<Message>, /// Pueue's subprocess and worker pool representation. Take a look at [Children] for more info. children: Children, /// These are the currently running callbacks. They're usually very short-lived. callbacks: Vec<Child>, /// A simple flag which is used to signal that we're currently doing a full reset of the daemon. /// This flag prevents new tasks from being spawned. full_reset: bool, /// Whether we're currently in the process of a graceful shutdown. /// Depending on the shutdown type, we're exiting with different exit codes. shutdown: Option<Shutdown>, /// The settings that are passed at program start. settings: Settings, // Some static settings that are extracted from `settings` for convenience purposes. pueue_directory: PathBuf, } impl TaskHandler { pub fn new(shared_state: SharedState, settings: Settings, receiver: Receiver<Message>) -> Self { // Clone the pointer, as we need to regularly access it inside the TaskHandler. let state_clone = shared_state.clone(); let state = state_clone.lock().unwrap(); // Initialize the subprocess management structure. let mut pools = BTreeMap::new(); for group in state.groups.keys() { pools.insert(group.clone(), BTreeMap::new()); } TaskHandler { state: shared_state, receiver, children: Children(pools), callbacks: Vec::new(), full_reset: false, shutdown: None, pueue_directory: settings.shared.pueue_directory(), settings, } } /// Main loop of the task handler. /// In here a few things happen: /// /// - Receive and handle instructions from the client. /// - Handle finished tasks, i.e. cleanup processes, update statuses. /// - Callback handling logic. This is rather uncritical. /// - Enqueue any stashed processes which are ready for being queued. /// - Ensure tasks with dependencies have no failed ancestors. /// - Check whether we should perform a shutdown. /// - If the client requested a reset: reset the state if all children have been killed and handled. /// - Check whether we can spawn new tasks.
/// /// This first step waits for 200ms while receiving new messages. /// This prevents this loop from running hot, but also means that we only check if a new task /// can be scheduled or if tasks are finished, every 200ms. pub fn run(&mut self) { loop { self.receive_messages(); self.handle_finished_tasks(); self.check_callbacks(); self.enqueue_delayed_tasks(); self.check_failed_dependencies(); if self.shutdown.is_some() { // Check if we're in shutdown. // If all tasks are killed, we do some cleanup and exit. self.handle_shutdown(); } else if self.full_reset { // Wait until all tasks are killed. // Once they are, reset everything and go back to normal self.handle_reset(); } else { // Only start new tasks, if we aren't in the middle of a reset or shutdown. self.spawn_new(); } } } /// Initiate shutdown, which includes killing all children. /// We don't have to pause any groups, as no new tasks will be spawned during shutdown anyway. /// Any groups with queued tasks will be automatically paused on state-restoration. fn initiate_shutdown(&mut self, shutdown: Shutdown) { self.shutdown = Some(shutdown); self.kill(TaskSelection::All, false, None); } /// Check if all tasks are killed. /// If they aren't, we'll wait a little longer. /// Once they are, we do some cleanup and exit. fn handle_shutdown(&mut self) { // There are still active tasks. Continue waiting until they're killed and cleaned up. if self.children.has_active_tasks() { return; } // Lock the state. This prevents any further connections/alterations from this point on. let _state = self.state.lock().unwrap(); // Remove the unix socket.
if let Err(error) = socket_cleanup(&self.settings.shared) { println!("Failed to cleanup socket during shutdown."); println!("{error}"); } // Cleanup the pid file if let Err(error) = cleanup_pid_file(&self.settings.shared.pid_path()) { println!("Failed to cleanup pid during shutdown."); println!("{error}"); } // Actually exit the program the way we're supposed to. // Depending on the current shutdown type, we exit with different exit codes. if matches!(self.shutdown, Some(Shutdown::Emergency)) { std::process::exit(1); } std::process::exit(0); } /// Users can issue a reset of the daemon. /// If that's the case, the `self.full_reset` flag is set to true, all children are killed /// and no new tasks will be spawned. /// This function checks whether all killed children have been handled. /// If that's the case, completely reset the state. fn handle_reset(&mut self) { // Don't do any reset logic, if we aren't in reset mode or if some children are still up. if self.children.has_active_tasks() { return; } let mut state = self.state.lock().unwrap(); if let Err(error) = reset_state(&mut state, &self.settings) { error!("Failed to reset state with error: {error:?}"); }; if let Err(error) = reset_task_log_directory(&self.pueue_directory) { panic!("Error while resetting task log directory: {error}"); }; self.full_reset = false; } /// Kill all children by using the `kill` function and set the `full_reset` flag. /// This will prevent new tasks from being spawned until the reset is complete. fn reset(&mut self) { self.full_reset = true; self.kill(TaskSelection::All, false, None); } /// As time passes, some delayed tasks may need to be enqueued.
/// Gather all stashed tasks and enqueue them if it is after the task's enqueue_at fn enqueue_delayed_tasks(&mut self) { let state_clone = self.state.clone(); let mut state = state_clone.lock().unwrap(); let mut changed = false; for (_, task) in state.tasks.iter_mut() { if let TaskStatus::Stashed { enqueue_at: Some(time), } = task.status { if time <= Local::now() { info!("Enqueuing delayed task: {}", task.id); task.status = TaskStatus::Queued; task.enqueued_at = Some(Local::now()); changed = true; } } } // Save the state if a task has been enqueued if changed { ok_or_shutdown!(self, save_state(&state, &self.settings)); } } /// This is a small wrapper around the real platform-dependent process handling logic. /// It only ensures that the process we want to manipulate really exists. fn perform_action(&mut self, id: usize, action: ProcessAction) -> Result<bool> { match self.children.get_child_mut(id) { Some(child) => { debug!("Executing action {action:?} to {id}"); send_signal_to_child(child, &action)?; Ok(true) } None => { error!("Tried to execute action {action:?} to non-existing task {id}"); Ok(false) } } } } 07070100000068000081A4000000000000000000000001665F1B69000021F8000000000000000000000000000000000000003800000000pueue-3.4.1/pueue/src/daemon/task_handler/spawn_task.rsuse std::io::Write; use super::*; use crate::daemon::state_helper::{pause_on_failure, save_state, LockedState}; use crate::ok_or_shutdown; impl TaskHandler { /// See if we can start a new queued task. pub fn spawn_new(&mut self) { let cloned_state_mutex = self.state.clone(); let mut state = cloned_state_mutex.lock().unwrap(); // Check whether a new task can be started. // Spawn tasks until we no longer have free slots available. while let Some(id) = self.get_next_task_id(&state) { self.start_process(id, &mut state); } } /// Search and return the next task that can be started.
/// Preconditions for a task to be started: /// - The task is in Queued state /// - There are free slots in the task's group /// - The group is running /// - The task has all its dependencies in `Done` state /// /// Order in which tasks are picked (descending relevancy): /// - Task with highest priority first /// - Task with lowest ID first pub fn get_next_task_id(&mut self, state: &LockedState) -> Option<usize> { // Get all tasks that could theoretically be started right now. let mut potential_tasks: Vec<&Task> = state .tasks .iter() .filter(|(_, task)| task.status == TaskStatus::Queued) .filter(|(_, task)| { // Make sure the task is assigned to an existing group. let group = match state.groups.get(&task.group) { Some(group) => group, None => { error!( "Got task with unknown group {}. Please report this!", &task.group ); return false; } }; // Let's check if the group is running. If it isn't, simply return false. if group.status != GroupStatus::Running { return false; } // If parallel tasks are set to `0`, this means an unlimited amount of tasks may // run at any given time. if group.parallel_tasks == 0 { return true; } // Get the currently running tasks by looking at the actually running processes. // They're sorted by group, which makes this quite convenient. let running_tasks = match self.children.0.get(&task.group) { Some(children) => children.len(), None => { error!( "Got valid group {}, but no worker pool has been initialized. This is a bug!", &task.group ); return false } }; // Make sure there are free slots in the task's group running_tasks < group.parallel_tasks }) .filter(|(_, task)| { // Check whether all dependencies for this task are fulfilled. task.dependencies .iter() .flat_map(|id| state.tasks.get(id)) .all(|task| matches!(task.status, TaskStatus::Done(TaskResult::Success))) }) .map(|(_, task)| task) .collect(); // Order the tasks based on their priority and their task id. // Tasks with higher priority go first.
// Tasks with the same priority are ordered by their id in ascending order, meaning that // tasks with smaller id will be processed first. potential_tasks.sort_by(|a, b| { // If they have the same prio, decide the execution order by task_id! if a.priority == b.priority { return a.id.cmp(&b.id); } // Otherwise, let the priority decide. b.priority.cmp(&a.priority) }); // Return the id of the first task (if one has been found). potential_tasks.first().map(|task| task.id) } /// Actually spawn a new subprocess /// The output of subprocesses is piped into a separate file for easier access pub fn start_process(&mut self, task_id: usize, state: &mut LockedState) { // Check if the task exists and can actually be spawned. Otherwise do an early return. match state.tasks.get(&task_id) { Some(task) => { if !matches!( &task.status, TaskStatus::Stashed { .. } | TaskStatus::Queued | TaskStatus::Paused ) { info!("Tried to start task with status: {}", task.status); return; } } None => { info!("Tried to start non-existing task: {task_id}"); return; } }; // Try to get the log file to which the output of the process will be written. // Panic if this doesn't work! This is unrecoverable. let (stdout_log, stderr_log) = match create_log_file_handles(task_id, &self.pueue_directory) { Ok((out, err)) => (out, err), Err(err) => { panic!("Failed to create child log files: {err:?}"); } }; // Get all necessary info for starting the task let (command, path, group, mut envs) = { let task = state.tasks.get(&task_id).unwrap(); ( task.command.clone(), task.path.clone(), task.group.clone(), task.envs.clone(), ) }; // Build the shell command that should be executed. let mut command = compile_shell_command(&self.settings, &command); // Determine the worker's id depending on the current group. // Inject that info into the environment.
let worker_id = self.children.get_next_group_worker(&group); envs.insert("PUEUE_GROUP".into(), group.clone()); envs.insert("PUEUE_WORKER_ID".into(), worker_id.to_string()); // Spawn the actual subprocess let spawned_command = command .current_dir(path) .stdin(Stdio::piped()) .env_clear() .envs(envs.clone()) .stdout(Stdio::from(stdout_log)) .stderr(Stdio::from(stderr_log)) .group_spawn(); // Check if the task managed to spawn let child = match spawned_command { Ok(child) => child, Err(err) => { let error = format!("Failed to spawn child {task_id} with err: {err:?}"); error!("{}", error); // Write some debug log output to the task's log file. // This should always work, but print a detailed error if it doesn't. if let Ok(mut file) = get_writable_log_file_handle(task_id, &self.pueue_directory) { let log_output = format!("Pueue error, failed to spawn task. Check your command.\n{error}"); let write_result = file.write_all(log_output.as_bytes()); if let Err(write_err) = write_result { error!("Failed to write spawn error to task log: {}", write_err); } } // Update all necessary fields on the task. let group = { let task = state.tasks.get_mut(&task_id).unwrap(); task.status = TaskStatus::Done(TaskResult::FailedToSpawn(error)); task.start = Some(Local::now()); task.end = Some(Local::now()); self.spawn_callback(task); task.group.clone() }; pause_on_failure(state, &self.settings, &group); ok_or_shutdown!(self, save_state(state, &self.settings)); return; } }; // Save the process handle in our self.children datastructure. self.children.add_child(&group, worker_id, task_id, child); let task = state.tasks.get_mut(&task_id).unwrap(); task.start = Some(Local::now()); task.status = TaskStatus::Running; // Overwrite the task's environment variables with the new ones, containing the // PUEUE_WORKER_ID and PUEUE_GROUP variables.
task.envs = envs; info!("Started task: {}", task.command); ok_or_shutdown!(self, save_state(state, &self.settings)); } } 07070100000069000081A4000000000000000000000001665F1B69000000BA000000000000000000000000000000000000001D00000000pueue-3.4.1/pueue/src/lib.rs// This lint is generating way too many false-positives. // Ignore it for now. #![allow(clippy::assigning_clones)] #![doc = include_str!("../README.md")] pub mod client; pub mod daemon; 0707010000006A000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001800000000pueue-3.4.1/pueue/tests0707010000006B000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001F00000000pueue-3.4.1/pueue/tests/client0707010000006C000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002A00000000pueue-3.4.1/pueue/tests/client/_snapshots0707010000006D000081A4000000000000000000000001665F1B6900000005000000000000000000000000000000000000003A00000000pueue-3.4.1/pueue/tests/client/_snapshots/follow__defaulttest 0707010000006E000081A4000000000000000000000001665F1B690000003F000000000000000000000000000000000000004700000000pueue-3.4.1/pueue/tests/client/_snapshots/follow__fail_on_disappearingtest Pueue: Log file has gone away. Has the task been removed? 0707010000006F000081A4000000000000000000000001665F1B690000002E000000000000000000000000000000000000004700000000pueue-3.4.1/pueue/tests/client/_snapshots/follow__fail_on_non_existingPueue: The task to be followed doesn't exist. 
07070100000070000081A4000000000000000000000001665F1B6900000008000000000000000000000000000000000000003D00000000pueue-3.4.1/pueue/tests/client/_snapshots/follow__last_lines5 6 7 8 07070100000071000081A4000000000000000000000001665F1B690000012F000000000000000000000000000000000000003900000000pueue-3.4.1/pueue/tests/client/_snapshots/group__colored[1mGroup "default"[0m (1 parallel): [38;5;11mpaused[39m [1mGroup "test_2"[0m (2 parallel): [38;5;10mrunning[39m [1mGroup "test_3"[0m (3 parallel): [38;5;10mrunning[39m [1mGroup "test_5"[0m (5 parallel): [38;5;10mrunning[39m [1mGroup "testgroup"[0m (2 parallel): [38;5;10mrunning[39m 07070100000072000081A4000000000000000000000001665F1B69000000BD000000000000000000000000000000000000003900000000pueue-3.4.1/pueue/tests/client/_snapshots/group__defaultGroup "default" (1 parallel): running Group "test_2" (2 parallel): running Group "test_3" (3 parallel): running Group "test_5" (5 parallel): running Group "testgroup" (2 parallel): running 07070100000073000081A4000000000000000000000001665F1B690000009B000000000000000000000000000000000000003F00000000pueue-3.4.1/pueue/tests/client/_snapshots/wait__multiple_tasksNew task 1 with status Queued Task 0 changed from Stashed to Running Task 0 succeeded with 0 Task 1 changed from Queued to Running Task 1 succeeded with 0 07070100000074000081A4000000000000000000000001665F1B6900000027000000000000000000000000000000000000004A00000000pueue-3.4.1/pueue/tests/client/_snapshots/wait__single_task_target_statusTask 0 changed from Stashed to Running 07070100000075000081A4000000000000000000000001665F1B690000008A000000000000000000000000000000000000004000000000pueue-3.4.1/pueue/tests/client/_snapshots/wait__success_failureTask 0 changed from Stashed to Running Task 1 changed from Stashed to Queued Task 0 failed with 127 Task 1 changed from Queued to Running 
07070100000076000081A4000000000000000000000001665F1B690000003F000000000000000000000000000000000000004000000000pueue-3.4.1/pueue/tests/client/_snapshots/wait__success_successTask 0 changed from Stashed to Running Task 0 succeeded with 0 07070100000077000081A4000000000000000000000001665F1B690000006C000000000000000000000000000000000000003E00000000pueue-3.4.1/pueue/tests/client/_snapshots/wait__target_statusNew task 1 with status Stashed Task 1 changed from Stashed to Running Task 0 changed from Stashed to Queued 07070100000078000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002A00000000pueue-3.4.1/pueue/tests/client/_templates07070100000079000081A4000000000000000000000001665F1B69000001A7000000000000000000000000000000000000003700000000pueue-3.4.1/pueue/tests/client/_templates/log__colored┌───────────────────────────────────┐ │[1m Task 0: [0m [38;5;10m completed successfully [39m│ └───────────────────────────────────┘ Command: echo test Path: {{ cwd }} Start: {{ task_0_start_long }} End: {{ task_0_end_long }} [38;5;10m[1moutput:[0m test 0707010000007A000081A4000000000000000000000001665F1B690000017E000000000000000000000000000000000000003700000000pueue-3.4.1/pueue/tests/client/_templates/log__default┌───────────────────────────────────┐ │ Task 0: completed successfully │ └───────────────────────────────────┘ Command: echo test Path: {{ cwd }} Start: {{ task_0_start_long }} End: {{ task_0_end_long }} output: test 0707010000007B000081A4000000000000000000000001665F1B69000001F6000000000000000000000000000000000000003A00000000pueue-3.4.1/pueue/tests/client/_templates/log__last_lines┌───────────────────────────────────┐ │ Task 0: completed successfully │ └───────────────────────────────────┘ Command: echo '1 2 3 4 5 6 7 8 9 10' Path: {{ cwd }} Start: {{ task_0_start_long }} End: {{ task_0_end_long }} output: (last 5 lines) 6 7 8 9 10 
0707010000007C000081A4000000000000000000000001665F1B690000019A000000000000000000000000000000000000003A00000000pueue-3.4.1/pueue/tests/client/_templates/log__with_label┌───────────────────────────────────┐ │ Task 0: completed successfully │ └───────────────────────────────────┘ Command: echo test Path: {{ cwd }} Label: {{ task_0_label }} Start: {{ task_0_start_long }} End: {{ task_0_end_long }} output: test 0707010000007D000081A4000000000000000000000001665F1B6900000256000000000000000000000000000000000000003A00000000pueue-3.4.1/pueue/tests/client/_templates/status__colored[1mGroup "default"[0m (1 parallel): [38;5;10mrunning[39m ────────────────────────────────────────────── Id Status Command Start End ══════════════════════════════════════════════ 0 [38;5;10m Success [39m ls {{ task_0_start }} {{ task_0_end }} ────────────────────────────────────────────── 0707010000007E000081A4000000000000000000000001665F1B6900000230000000000000000000000000000000000000003A00000000pueue-3.4.1/pueue/tests/client/_templates/status__defaultGroup "default" (1 parallel): running ────────────────────────────────────────────── Id Status Command Start End ══════════════════════════════════════════════ 0 Success ls {{ task_0_start }} {{ task_0_end }} ────────────────────────────────────────────── 0707010000007F000081A4000000000000000000000001665F1B69000003F1000000000000000000000000000000000000003700000000pueue-3.4.1/pueue/tests/client/_templates/status__fullGroup "default" (1 parallel): running ────────────────────────────────────────────────────────────────── Id Status Enqueue At Deps Label Command Start End ══════════════════════════════════════════════════════════════════ 0 Stashed {{ task_0_enqueue_at }} test ls ────────────────────────────────────────────────────────────────── 1 Queued 0 ls ────────────────────────────────────────────────────────────────── 
07070100000080000081A4000000000000000000000001665F1B6900000466000000000000000000000000000000000000004200000000pueue-3.4.1/pueue/tests/client/_templates/status__multiple_groupsGroup "testgroup" (1 parallel): running ────────────────────────────────────────────── Id Status Command Start End ══════════════════════════════════════════════ 0 Success ls {{ task_0_start }} {{ task_0_end }} ────────────────────────────────────────────── Group "testgroup2" (1 parallel): running ────────────────────────────────────────────── Id Status Command Start End ══════════════════════════════════════════════ 1 Success ls {{ task_1_start }} {{ task_1_end }} ────────────────────────────────────────────── 07070100000081000081A4000000000000000000000001665F1B6900000232000000000000000000000000000000000000003F00000000pueue-3.4.1/pueue/tests/client/_templates/status__single_groupGroup "testgroup" (1 parallel): running ────────────────────────────────────────────── Id Status Command Start End ══════════════════════════════════════════════ 0 Success ls {{ task_0_start }} {{ task_0_end }} ────────────────────────────────────────────── 07070100000082000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002600000000pueue-3.4.1/pueue/tests/client/helper07070100000083000081A4000000000000000000000001665F1B69000015EB000000000000000000000000000000000000003800000000pueue-3.4.1/pueue/tests/client/helper/compare_output.rsuse std::collections::HashMap; use anyhow::{bail, Context, Result}; use chrono::Local; use handlebars::Handlebars; use std::fs::read_to_string; use std::path::PathBuf; use pueue_lib::settings::*; use pueue_lib::task::TaskStatus; use crate::helper::get_state; /// Read the current state and extract the tasks' info into a context. pub async fn get_task_context(settings: &Settings) -> Result<HashMap<String, String>> { // Get the current state let state = get_state(&settings.shared).await?; let mut context = HashMap::new(); // Get the current daemon cwd. 
context.insert( "cwd".to_string(), settings .shared .pueue_directory() .to_string_lossy() .to_string(), ); for (id, task) in state.tasks { let task_name = format!("task_{id}"); if let Some(start) = task.start { // Use datetime format for datetimes that aren't today. let format = if start.date_naive() == Local::now().date_naive() { &settings.client.status_time_format } else { &settings.client.status_datetime_format }; let formatted = start.format(format).to_string(); context.insert(format!("{task_name}_start"), formatted); context.insert(format!("{task_name}_start_long"), start.to_rfc2822()); } if let Some(end) = task.end { // Use datetime format for datetimes that aren't today. let format = if end.date_naive() == Local::now().date_naive() { &settings.client.status_time_format } else { &settings.client.status_datetime_format }; let formatted = end.format(format).to_string(); context.insert(format!("{task_name}_end"), formatted); context.insert(format!("{task_name}_end_long"), end.to_rfc2822()); } if let Some(label) = &task.label { context.insert(format!("{task_name}_label"), label.to_string()); } if let TaskStatus::Stashed { enqueue_at: Some(enqueue_at), } = task.status { // Use datetime format for datetimes that aren't today. let format = if enqueue_at.date_naive() == Local::now().date_naive() { &settings.client.status_time_format } else { &settings.client.status_datetime_format }; let enqueue_at = enqueue_at.format(format); context.insert(format!("{task_name}_enqueue_at"), enqueue_at.to_string()); } } Ok(context) } /// This function takes the name of a snapshot template, applies a given context to the template /// and compares it with a given process's `stdout`. 
pub fn assert_template_matches( name: &str, stdout: Vec<u8>, context: HashMap<String, String>, ) -> Result<()> { let path = PathBuf::from(env!("CARGO_MANIFEST_DIR")) .join("tests") .join("client") .join("_templates") .join(name); let actual = String::from_utf8(stdout).context("Got invalid utf8 as stdout!")?; let Ok(mut expected) = read_to_string(&path) else { println!("Actual output:\n{actual}"); bail!("Failed to read template file {path:?}") }; // Handle the snapshot as a template, if there's some context. if !context.is_empty() { // Init Handlebars. We set to strict, as we want to show an error on missing variables. let mut handlebars = Handlebars::new(); handlebars.set_strict_mode(true); expected = handlebars .render_template(&expected, &context) .context(format!( "Failed to render template for file: {name} with context {context:?}" ))?; } assert_strings_match(expected, actual)?; Ok(()) } /// Convenience wrapper to compare process stdout with snapshots. pub fn assert_snapshot_matches_stdout(name: &str, stdout: Vec<u8>) -> Result<()> { let actual = String::from_utf8(stdout).context("Got invalid utf8 as stdout!")?; assert_snapshot_matches(name, actual) } /// This function takes the name of a snapshot and ensures that it is the same as the actual /// provided string. pub fn assert_snapshot_matches(name: &str, actual: String) -> Result<()> { let path = PathBuf::from(env!("CARGO_MANIFEST_DIR")) .join("tests") .join("client") .join("_snapshots") .join(name); let Ok(expected) = read_to_string(&path) else { println!("Actual output:\n{actual}"); bail!("Failed to read template file {path:?}") }; assert_strings_match(expected, actual)?; Ok(()) } /// Check whether two outputs are identical. /// For convenience purposes, we trim trailing whitespaces. 
pub fn assert_strings_match(mut expected: String, mut actual: String) -> Result<()> { expected = expected .lines() .map(|line| line.trim_end().to_owned()) .collect::<Vec<String>>() .join("\n"); actual = actual .lines() .map(|line| line.trim_end().to_owned()) .collect::<Vec<String>>() .join("\n"); if expected != actual { println!("Expected output:\n-----\n{expected}\n-----"); println!("\nGot output:\n-----\n{actual}\n-----"); println!( "\n{}", similar_asserts::SimpleDiff::from_str(&expected, &actual, "expected", "actual") ); bail!("The stdout of the command doesn't match the expected string"); } Ok(()) } 07070100000084000081A4000000000000000000000001665F1B69000000A5000000000000000000000000000000000000002D00000000pueue-3.4.1/pueue/tests/client/helper/mod.rsmod compare_output; mod run; // Re-export all helper functions for this test as a convenience. pub use crate::helper::*; pub use compare_output::*; pub use run::*; 07070100000085000081A4000000000000000000000001665F1B6900000C3C000000000000000000000000000000000000002D00000000pueue-3.4.1/pueue/tests/client/helper/run.rsuse std::collections::HashMap; use anyhow::{Context, Result}; use assert_cmd::prelude::*; use std::process::{Command, Output, Stdio}; use pueue_lib::settings::Shared; use pueue_lib::task::TaskStatus; use crate::helper::get_state; /// Spawn a client command that connects to a specific daemon. pub fn run_client_command(shared: &Shared, args: &[&str]) -> Result<Output> { // Inject an environment variable into the pueue command. // This is used to ensure that the environment is properly captured and forwarded. let mut envs = HashMap::new(); envs.insert("PUEUED_TEST_ENV_VARIABLE", "Test"); run_client_command_with_env(shared, args, envs) } /// Run the status command without the path being included in the output. pub async fn run_status_without_path(shared: &Shared, args: &[&str]) -> Result<Output> { // Inject an environment variable into the pueue command. 
// This is used to ensure that the environment is properly captured and forwarded. let mut envs = HashMap::new(); envs.insert("PUEUED_TEST_ENV_VARIABLE", "Test"); let state = get_state(shared).await?; println!("{state:?}"); let mut base_args = vec!["status"]; // Since we want to exclude the path, we have to manually assemble the // list of columns that should be displayed. // We start with the base columns, check which optional columns should be // included based on the current task list and add any of those columns at // the correct position. let mut columns = vec!["id,status"]; // Add the enqueue_at column if necessary. if state.tasks.iter().any(|(_, task)| { if let TaskStatus::Stashed { enqueue_at } = task.status { return enqueue_at.is_some(); } false }) { columns.push("enqueue_at"); } // Add the `deps` column if necessary. if state .tasks .iter() .any(|(_, task)| !task.dependencies.is_empty()) { columns.push("dependencies"); } // Add the `label` column if necessary. if state.tasks.iter().any(|(_, task)| task.label.is_some()) { columns.push("label"); } // Add the remaining base columns. columns.extend_from_slice(&["command", "start", "end"]); let column_filter = format!("columns={}", columns.join(",")); base_args.push(&column_filter); base_args.extend_from_slice(args); run_client_command_with_env(shared, &base_args, envs) } /// Spawn a client command that connects to a specific daemon. /// Accepts a list of environment variables that'll be injected into the client's env. pub fn run_client_command_with_env( shared: &Shared, args: &[&str], envs: HashMap<&str, &str>, ) -> Result<Output> { let output = Command::cargo_bin("pueue")? 
.arg("--config") .arg(shared.pueue_directory().join("pueue.yml").to_str().unwrap()) .args(args) .envs(envs) .current_dir(shared.pueue_directory()) .stdout(Stdio::piped()) .stderr(Stdio::piped()) .output() .context(format!("Failed to execute pueue with {args:?}"))?; Ok(output) } 07070100000086000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002B00000000pueue-3.4.1/pueue/tests/client/integration07070100000087000081A4000000000000000000000001665F1B6900000363000000000000000000000000000000000000003A00000000pueue-3.4.1/pueue/tests/client/integration/completions.rsuse std::process::{Command, Stdio}; use anyhow::{Context, Result}; use assert_cmd::prelude::*; use rstest::rstest; /// Make sure that shell completion generation works for all supported shells. #[rstest] #[case("zsh")] #[case("elvish")] #[case("bash")] #[case("fish")] #[case("power-shell")] #[case("nushell")] #[test] fn autocompletion_generation(#[case] shell: &'static str) -> Result<()> { let output = Command::cargo_bin("pueue")? .arg("completions") .arg(shell) .arg("./") .stdout(Stdio::piped()) .stderr(Stdio::piped()) .current_dir(env!("CARGO_TARGET_TMPDIR")) .output() .context(format!("Failed to run completion generation for {shell}:"))?; assert!( output.status.success(), "Completion for {shell} didn't finish successfully." ); Ok(()) } 07070100000088000081A4000000000000000000000001665F1B6900000AED000000000000000000000000000000000000003C00000000pueue-3.4.1/pueue/tests/client/integration/configuration.rsuse std::{ collections::HashMap, process::{Child, Command, Stdio}, }; use anyhow::{bail, Context, Result}; use assert_cmd::prelude::CommandCargoExt; use pueue_lib::{ settings::{Shared, PUEUE_CONFIG_PATH_ENV}, state::State, }; use crate::helper::*; /// Spawn the daemon by calling the actual pueued binary. /// This is basically the same as the `standalone_daemon` logic, but it uses the /// `PUEUE_CONFIG_PATH` environment variable instead of the `--config` flag.
pub async fn standalone_daemon_with_env_config(shared: &Shared) -> Result<Child> { // Inject an environment variable into the daemon. // This is used to test that the spawned subprocesses won't inherit the daemon's environment. let mut envs = HashMap::new(); envs.insert("PUEUED_TEST_ENV_VARIABLE", "Test".to_owned()); envs.insert( PUEUE_CONFIG_PATH_ENV, shared .pueue_directory() .join("pueue.yml") .to_string_lossy() .to_string(), ); let child = Command::cargo_bin("pueued")? .arg("-vvv") .envs(envs) .stdout(Stdio::piped()) .stderr(Stdio::piped()) .spawn()?; let tries = 20; let mut current_try = 0; // Wait up to 1s for the unix socket to pop up. let socket_path = shared.unix_socket_path(); while current_try < tries { sleep_ms(50).await; if socket_path.exists() { return Ok(child); } current_try += 1; } bail!("Daemon didn't boot in stand-alone mode after 1sec") } /// Test that both the daemon and the client respect the config file provided via the /// `PUEUE_CONFIG_PATH` environment variable. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn run_with_env_config_path() -> Result<()> { let (settings, _tempdir) = daemon_base_setup()?; let mut child = standalone_daemon_with_env_config(&settings.shared).await?; let shared = &settings.shared; // Check if the client can connect to the daemon. let mut envs = HashMap::new(); envs.insert( PUEUE_CONFIG_PATH_ENV, shared .pueue_directory() .join("pueue.yml") .to_string_lossy() .to_string(), ); let output = Command::cargo_bin("pueue")? .args(["status", "--json"]) .envs(envs) .current_dir(shared.pueue_directory()) .stdout(Stdio::piped()) .stderr(Stdio::piped()) .output() .context("Failed to execute pueue with env config variable".to_string())?; // Deserialize the message and make sure it's a status response.
let response = String::from_utf8_lossy(&output.stdout); let state: State = serde_json::from_str(&response)?; assert!(state.tasks.is_empty(), "State must have no tasks"); child.kill()?; Ok(()) } 07070100000089000081A4000000000000000000000001665F1B690000159F000000000000000000000000000000000000003300000000pueue-3.4.1/pueue/tests/client/integration/edit.rsuse std::collections::HashMap; use anyhow::{Context, Result}; use pueue_lib::task::TaskStatus; use crate::client::helper::*; /// Test that editing a task without any flags only updates the command. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn edit_task_default() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a stashed message which we'll edit later on. let mut message = create_add_message(shared, "this is a test"); message.stashed = true; send_message(shared, message) .await .context("Failed to add stashed task.")?; // Update the task's command by piping a string to the temporary file. let mut envs = HashMap::new(); envs.insert("EDITOR", "echo 'expected command string' > "); run_client_command_with_env(shared, &["edit", "0"], envs)?; // Make sure that the command has been updated. let state = get_state(shared).await?; let task = state.tasks.get(&0).unwrap(); assert_eq!(task.command, "expected command string"); // All other properties should be unchanged. assert_eq!(task.path, daemon.tempdir.path()); assert_eq!(task.label, None); assert_eq!(task.priority, 0); Ok(()) } /// Test that editing multiple task properties works as expected. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn edit_all_task_properties() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a stashed message which we'll edit later on.
let mut message = create_add_message(shared, "this is a test"); message.stashed = true; send_message(shared, message) .await .context("Failed to add stashed task.")?; // Update all task properties by piping a string to the respective temporary file. let mut envs = HashMap::new(); envs.insert("EDITOR", "echo 'expected string' > "); run_client_command_with_env( shared, &["edit", "--command", "--path", "--label", "0"], envs, )?; // Make sure that all properties have been updated. let state = get_state(shared).await?; let task = state.tasks.get(&0).unwrap(); assert_eq!(task.command, "expected string"); assert_eq!(task.path.to_string_lossy(), "expected string"); assert_eq!(task.label, Some("expected string".to_string())); Ok(()) } /// Ensure that deleting the label in the editor results in the deletion of the task's label. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn edit_delete_label() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a stashed message which we'll edit later on. let mut message = create_add_message(shared, "this is a test"); message.stashed = true; message.label = Some("Testlabel".to_owned()); send_message(shared, message) .await .context("Failed to add stashed task.")?; // Echo an empty string into the file. let mut envs = HashMap::new(); envs.insert("EDITOR", "echo '' > "); run_client_command_with_env(shared, &["edit", "--label", "0"], envs)?; // Make sure that the label has indeed been deleted. let state = get_state(shared).await?; let task = state.tasks.get(&0).unwrap(); assert_eq!(task.label, None); Ok(()) } /// Ensure that updating the priority in the editor results in the modification of the task's priority. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn edit_change_priority() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a stashed message which we'll edit later on.
let mut message = create_add_message(shared, "this is a test"); message.stashed = true; message.priority = Some(0); send_message(shared, message) .await .context("Failed to add stashed task.")?; // Echo a new priority into the file. let mut envs = HashMap::new(); envs.insert("EDITOR", "echo '99' > "); run_client_command_with_env(shared, &["edit", "--priority", "0"], envs)?; // Make sure that the priority has indeed been updated. let state = get_state(shared).await?; let task = state.tasks.get(&0).unwrap(); assert_eq!(task.priority, 99); Ok(()) } /// Test that automatic restoration of a task's state works if the edit command fails for some /// reason. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn fail_to_edit_task() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a stashed message which we'll edit later on. let mut message = create_add_message(shared, "this is a test"); message.stashed = true; send_message(shared, message) .await .context("Failed to add stashed task.")?; // Run an editor command that crashes. let mut envs = HashMap::new(); envs.insert("EDITOR", "non_existing_test_binary"); let output = run_client_command_with_env(shared, &["edit", "0"], envs)?; assert!( !output.status.success(), "The command should fail, as the command isn't valid" ); // Make sure that nothing has changed and the task is `Stashed` again.
let state = get_state(shared).await?; let task = state.tasks.get(&0).unwrap(); assert_eq!(task.command, "this is a test"); assert_eq!(task.status, TaskStatus::Stashed { enqueue_at: None }); Ok(()) } 0707010000008A000081A4000000000000000000000001665F1B6900001639000000000000000000000000000000000000003500000000pueue-3.4.1/pueue/tests/client/integration/follow.rsuse anyhow::{Context, Result}; use rstest::rstest; use crate::client::helper::*; pub fn set_read_local_logs(daemon: &mut PueueDaemon, read_local_logs: bool) -> Result<()> { // Force the client to read remote logs via config file. daemon.settings.client.read_local_logs = read_local_logs; // Persist the change, so it can be seen by the client. daemon .settings .save(&Some(daemon.tempdir.path().join("pueue.yml"))) .context("Couldn't write pueue config to temporary directory")?; Ok(()) } /// Test that the `follow` command works with the log being streamed locally and by the daemon. #[rstest] #[case(true)] #[case(false)] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn default(#[case] read_local_logs: bool) -> Result<()> { let mut daemon = daemon().await?; set_read_local_logs(&mut daemon, read_local_logs)?; let shared = &daemon.settings.shared; // Add a task and wait until it started. assert_success(add_task(shared, "sleep 1 && echo test").await?); wait_for_task_condition(shared, 0, |task| task.is_running()).await?; // Execute `follow`. // This will result in the client receiving the streamed output until the task finished. let output = run_client_command(shared, &["follow"])?; assert_snapshot_matches_stdout("follow__default", output.stdout)?; Ok(()) } /// Test that the remote `follow` command works, if one specifies to only show the last few lines /// of recent output. 
#[rstest] #[case(true)] #[case(false)] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn last_lines(#[case] read_local_logs: bool) -> Result<()> { let mut daemon = daemon().await?; set_read_local_logs(&mut daemon, read_local_logs)?; let shared = &daemon.settings.shared; // Add a task which echoes 8 lines of output. assert_success(add_task(shared, "echo \"1\n2\n3\n4\n5\n6\n7\n8\" && sleep 1").await?); wait_for_task_condition(shared, 0, |task| task.is_running()).await?; // Follow the task, but only print the last 4 lines of the output. let output = run_client_command(shared, &["follow", "--lines=4"])?; assert_snapshot_matches_stdout("follow__last_lines", output.stdout)?; Ok(()) } /// If a task exists but hasn't started yet, wait for it to start. #[rstest] #[case(true)] #[case(false)] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn wait_for_task(#[case] read_local_logs: bool) -> Result<()> { let mut daemon = daemon().await?; set_read_local_logs(&mut daemon, read_local_logs)?; let shared = &daemon.settings.shared; // Add a normal task that will start in 2 seconds. run_client_command(shared, &["add", "--delay", "2 seconds", "echo test"])?; // Wait for the task to start and follow until it finishes. let output = run_client_command(shared, &["follow", "0"])?; assert_snapshot_matches_stdout("follow__default", output.stdout)?; Ok(()) } /// Fail when following a non-existing task. #[rstest] #[case(true)] #[case(false)] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn fail_on_non_existing(#[case] read_local_logs: bool) -> Result<()> { let mut daemon = daemon().await?; set_read_local_logs(&mut daemon, read_local_logs)?; let shared = &daemon.settings.shared; // Execute `follow` on a non-existing task. // The client should exit with exit code `1`.
let output = run_client_command(shared, &["follow", "0"])?; assert!(!output.status.success(), "follow got an unexpected exit 0"); assert_snapshot_matches_stdout("follow__fail_on_non_existing", output.stdout)?; Ok(()) } // /// This test is commented for the time being. // /// There's a race condition that can happen from time to time. // /// It's especially reliably hit on MacOS for some reason. // /// // /// What happens is that the daemon resets in between reading the output of the file // /// and the check whether the task actually still exists in the daemon. // /// There's really no way to properly work around this. // /// So I'll keep this commented for the time being. // /// // /// // /// Fail and print an error message when a followed task disappears. // #[rstest] // #[case(true)] // #[case(false)] // #[tokio::test(flavor = "multi_thread", worker_threads = 2)] // async fn fail_on_disappearing(#[case] read_local_logs: bool) -> Result<()> { // let mut daemon = daemon().await?; // set_read_local_logs(&mut daemon, read_local_logs)?; // let shared = &daemon.settings.shared; // // // Add a task that echoes something and waits for a while. // assert_success(add_task(shared, "echo test && sleep 20").await?); // wait_for_task_condition(shared, 0, |task| task.is_running()).await?; // // // Reset the daemon after 2 seconds. At this point, the client will already be following the // // output and should notice that the task went away. // // This is a bit hacky, but our client test helper always waits for the command to finish // // and I'm feeling too lazy to add a new helper function now. // let moved_shared = shared.clone(); // tokio::task::spawn(async move { // sleep_ms(2000).await; // // Reset the daemon // send_message(&moved_shared, ResetMessage {}) // .await // .expect("Failed to send Start tasks message"); // }); // // // Execute `follow` and remove the task // // The client should exit with exit code `1`.
// let output = run_client_command(shared, &["follow", "0"])?; // // assert_snapshot_matches_stdout("follow__fail_on_disappearing", output.stdout)?; // // Ok(()) // } 0707010000008B000081A4000000000000000000000001665F1B69000009EE000000000000000000000000000000000000003400000000pueue-3.4.1/pueue/tests/client/integration/group.rsuse std::collections::BTreeMap; use anyhow::{Context, Result}; use pueue_lib::network::message::*; use pueue_lib::state::{Group, GroupStatus}; use crate::client::helper::*; /// Test that adding a group and getting the group overview works. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn default() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a group via the cli interface. run_client_command(shared, &["group", "add", "testgroup", "--parallel=2"])?; wait_for_group(shared, "testgroup").await?; // Get the group status output let output = run_client_command(shared, &["group"])?; assert_snapshot_matches_stdout("group__default", output.stdout)?; Ok(()) } /// Test that adding a group and getting the group overview with the `--color=always` flag works. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn colored() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a group via the cli interface. run_client_command(shared, &["group", "add", "testgroup", "--parallel=2"])?; // Pauses the default queue while waiting for tasks // We do this to ensure that paused groups are properly colored. 
let message = PauseMessage { tasks: TaskSelection::Group(PUEUE_DEFAULT_GROUP.into()), wait: true, }; send_message(shared, message) .await .context("Failed to send message")?; wait_for_group_status(shared, PUEUE_DEFAULT_GROUP, GroupStatus::Paused).await?; // Get the group status output let output = run_client_command(shared, &["--color", "always", "group"])?; assert_snapshot_matches_stdout("group__colored", output.stdout)?; Ok(()) } /// Make sure that getting the list of groups as json works. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn json() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Get the group status output let output = run_client_command(shared, &["group", "--json"])?; let json = String::from_utf8_lossy(&output.stdout); println!("{json}"); let state = get_state(shared).await?; let deserialized_groups: BTreeMap<String, Group> = serde_json::from_str(&json).context("Failed to deserialize json state")?; assert_eq!( deserialized_groups, state.groups, "The serialized groups differ from the actual groups from the state." ); Ok(()) } 0707010000008C000081A4000000000000000000000001665F1B6900001590000000000000000000000000000000000000003200000000pueue-3.4.1/pueue/tests/client/integration/log.rsuse std::collections::{BTreeMap, HashMap}; use anyhow::{Context, Result}; use pueue_lib::task::Task; use rstest::rstest; use serde_derive::Deserialize; use crate::client::helper::*; /// Test that the `log` command works for both: /// - The log being streamed by the daemon. /// - The log being read from the local files. #[rstest] #[case(true)] #[case(false)] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn read(#[case] read_local_logs: bool) -> Result<()> { let mut daemon = daemon().await?; let shared = &daemon.settings.shared; // Force the client to read remote logs via config file. daemon.settings.client.read_local_logs = read_local_logs; // Persist the change, so it can be seen by the client. 
daemon .settings .save(&Some(daemon.tempdir.path().join("pueue.yml"))) .context("Couldn't write pueue config to temporary directory")?; // Add a task and wait until it finishes. assert_success(add_task(shared, "echo test").await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; let output = run_client_command(shared, &["log"])?; let context = get_task_context(&daemon.settings).await?; assert_template_matches("log__default", output.stdout, context)?; Ok(()) } /// Test that the `log` command properly truncates content and hints this to the user for: /// - The log being streamed by the daemon. /// - The log being read from the local files. #[rstest] #[case(true)] #[case(false)] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn read_truncated(#[case] read_local_logs: bool) -> Result<()> { let mut daemon = daemon().await?; let shared = &daemon.settings.shared; // Force the client to read remote logs via config file. daemon.settings.client.read_local_logs = read_local_logs; // Persist the change, so it can be seen by the client. daemon .settings .save(&Some(daemon.tempdir.path().join("pueue.yml"))) .context("Couldn't write pueue config to temporary directory")?; // Add a task and wait until it finishes. assert_success(add_task(shared, "echo '1\n2\n3\n4\n5\n6\n7\n8\n9\n10'").await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; let output = run_client_command(shared, &["log", "--lines=5"])?; let context = get_task_context(&daemon.settings).await?; assert_template_matches("log__last_lines", output.stdout, context)?; Ok(()) } /// If a task has a label, it is included in the log output #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn task_with_label() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a task and wait until it finishes. 
run_client_command(shared, &["add", "--label", "test_label", "echo test"])?; wait_for_task_condition(shared, 0, |task| task.is_done()).await?; let output = run_client_command(shared, &["log"])?; let context = get_task_context(&daemon.settings).await?; assert_template_matches("log__with_label", output.stdout, context)?; Ok(()) } /// Calling `log` with the `--color=always` flag colors the output as expected. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn colored() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a task and wait until it finishes. assert_success(add_task(shared, "echo test").await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; let output = run_client_command(shared, &["--color", "always", "log"])?; let context = get_task_context(&daemon.settings).await?; assert_template_matches("log__colored", output.stdout, context)?; Ok(()) } /// This is the output struct used for task logs. /// Since the Pueue client isn't exposed as a library, we have to declare our own for testing /// purposes. The counterpart can be found in `client/display/log/json.rs`. #[derive(Debug, Deserialize)] pub struct TaskLog { pub task: Task, pub output: String, } /// Calling `pueue log --json` prints the expected json output to stdout. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn json() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a task and wait until it finishes. assert_success(add_task(shared, "echo test").await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; let output = run_client_command(shared, &["log", "--json"])?; // Deserialize the json back to the original task BTreeMap.
let json = String::from_utf8_lossy(&output.stdout); let mut task_logs: BTreeMap<usize, TaskLog> = serde_json::from_str(&json) .context(format!("Failed to deserialize json tasks: \n{json}"))?; // Get the actual BTreeMap from the daemon let mut state = get_state(shared).await?; let original_task = state.tasks.get_mut(&0).unwrap(); // Clean the environment variables, as they aren't transmitted when calling `log`. original_task.envs = HashMap::new(); let task_log = task_logs.get_mut(&0).expect("Expected one task log"); assert_eq!( original_task, &task_log.task, "Deserialized task and original task aren't equal" ); // Append a newline to the deserialized task's output, which is automatically done when working // with the shell. assert_eq!("test", task_log.output); Ok(()) } 0707010000008D000081A4000000000000000000000001665F1B6900000071000000000000000000000000000000000000003200000000pueue-3.4.1/pueue/tests/client/integration/mod.rsmod completions; mod configuration; mod edit; mod follow; mod group; mod log; mod restart; mod status; mod wait; 0707010000008E000081A4000000000000000000000001665F1B69000019DF000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/tests/client/integration/restart.rsuse std::collections::HashMap; use anyhow::{bail, Result}; use pueue_lib::task::{TaskResult, TaskStatus}; use crate::client::helper::*; /// Test that restarting a task while editing its command works as expected. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn restart_and_edit_task_command() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a task and wait for it to finish. assert_success(add_task(shared, "ls").await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // Set the editor to a command which replaces the temporary file's content. let mut envs = HashMap::new(); envs.insert("EDITOR", "echo 'sleep 60' > "); // Restart the task, edit its command and wait for it to start.
run_client_command_with_env(shared, &["restart", "--in-place", "--edit", "0"], envs)?; wait_for_task_condition(shared, 0, |task| task.is_running()).await?; // Make sure that the command has been updated. let state = get_state(shared).await?; let task = state.tasks.get(&0).unwrap(); assert_eq!(task.command, "sleep 60"); assert_eq!(task.status, TaskStatus::Running); Ok(()) } /// Test that restarting a task while editing its path works as expected. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn restart_and_edit_task_path() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a task and wait for it to finish. assert_success(add_task(shared, "ls").await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // Set the editor to a command which replaces the temporary file's content. let mut envs = HashMap::new(); envs.insert("EDITOR", "echo '/tmp' > "); // Restart the task, edit its path and wait for it to finish. run_client_command_with_env(shared, &["restart", "--in-place", "--edit-path", "0"], envs)?; wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // Make sure that the path has been updated. let state = get_state(shared).await?; let task = state.tasks.get(&0).unwrap(); assert_eq!(task.path.to_string_lossy(), "/tmp"); Ok(()) } /// Test that restarting a task while editing both its command and its path works as expected. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn restart_and_edit_task_path_and_command() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a task and wait for it to finish. assert_success(add_task(shared, "ls").await.unwrap()); wait_for_task_condition(shared, 0, |task| task.is_done()) .await .unwrap(); // Set the editor to a command which replaces the temporary file's content.
let mut envs = HashMap::new(); envs.insert("EDITOR", "echo 'replaced string' > "); // Restart the task, edit its command, path, and label, and wait for it to finish. // The task will fail afterwards, but it should still be edited. run_client_command_with_env( shared, &[ "restart", "--in-place", "--edit", "--edit-path", "--edit-label", "0", ], envs, )?; wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // Make sure that the command, path, and label have all been updated. let state = get_state(shared).await?; let task = state.tasks.get(&0).unwrap(); assert_eq!(task.command, "replaced string"); assert_eq!(task.path.to_string_lossy(), "replaced string"); assert_eq!(task.label, Some("replaced string".to_owned())); // Also the task should have been restarted and failed. if let TaskStatus::Done(TaskResult::FailedToSpawn(_)) = task.status { } else { bail!("The task should have failed"); }; Ok(()) } /// Test that restarting a task while editing its priority works as expected. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn restart_and_edit_task_priority() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a task and wait for it to finish. assert_success(add_task(shared, "ls").await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // Set the editor to a command which replaces the temporary file's content. let mut envs = HashMap::new(); envs.insert("EDITOR", "echo '99' > "); // Restart the task, edit its priority and wait for it to finish. run_client_command_with_env( shared, &["restart", "--in-place", "--edit-priority", "0"], envs, )?; wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // Make sure that the priority has been updated. let state = get_state(shared).await?; let task = state.tasks.get(&0).unwrap(); assert_eq!(task.priority, 99); Ok(()) } /// Test that restarting a task **not** in place works as expected.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn normal_restart_with_edit() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a task and wait for it to finish. assert_success(add_task(shared, "ls").await?); let original_task = wait_for_task_condition(shared, 0, |task| task.is_done()).await?; assert!( original_task.enqueued_at.is_some(), "Task is done and should have enqueue_at set." ); // Set the editor to a command which replaces the temporary file's content. let mut envs = HashMap::new(); envs.insert("EDITOR", "echo 'sleep 60' > "); // Restart the task, edit its command and wait for it to start. run_client_command_with_env(shared, &["restart", "--edit", "0"], envs)?; wait_for_task_condition(shared, 1, |task| task.is_running()).await?; // Make sure that the command has been updated. let state = get_state(shared).await?; let task = state.tasks.get(&1).unwrap(); assert_eq!(task.command, "sleep 60"); assert_eq!(task.status, TaskStatus::Running); // Since we created a copy, the new task should be created after the first one. assert!( original_task.created_at < task.created_at, "New task should have a newer created_at." ); // The enqueued_at time should also be newer. assert!( original_task.enqueued_at.unwrap() < task.enqueued_at.unwrap(), "The second run should be enqueued after the first run." ); Ok(()) }
#[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn full() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a delayed task so we can use it as a dependency. run_client_command( shared, &["add", "--label", "test", "--delay", "1 minute", "ls"], )?; // Add a second command that depends on the first one. run_client_command(shared, &["add", "--after=0", "ls"])?; let output = run_status_without_path(shared, &[]).await?; let context = get_task_context(&daemon.settings).await?; assert_template_matches("status__full", output.stdout, context)?; Ok(()) } ///// Calling `status` with the `--color=always` flag, colors the output as expected. //#[tokio::test(flavor = "multi_thread", worker_threads = 2)] //async fn colored() -> Result<()> { // let daemon = daemon().await?; // let shared = &daemon.settings.shared; // // // Add a task and wait until it finishes. // assert_success(add_task(shared, "ls").await?); // wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // // let output = run_status_without_path(shared, &["--color", "always"]).await?; // // let context = get_task_context(&daemon.settings).await?; // assert_stdout_matches("status__colored", output.stdout, context)?; // // Ok(()) //} /// Test status for single group #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn single_group() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a new group add_group_with_slots(shared, "testgroup", 1).await?; // Add a task to the new testgroup. run_client_command(shared, &["add", "--group", "testgroup", "ls"])?; // Add another task to the default group. run_client_command(shared, &["add", "--stashed", "ls"])?; // Make sure the first task finished.
wait_for_task_condition(shared, 0, |task| task.is_done()).await?; let output = run_status_without_path(shared, &["--group", "testgroup"]).await?; // The output should only show the first task let context = get_task_context(&daemon.settings).await?; assert_template_matches("status__single_group", output.stdout, context)?; Ok(()) } /// Multiple groups #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn multiple_groups() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a new group add_group_with_slots(shared, "testgroup", 1).await?; add_group_with_slots(shared, "testgroup2", 1).await?; // Add a task to the new testgroup. run_client_command(shared, &["add", "--group", "testgroup", "ls"])?; // Add another task to the default group. run_client_command(shared, &["add", "--group", "testgroup2", "ls"])?; // Make sure the second task finished. wait_for_task_condition(shared, 1, |task| task.is_done()).await?; let output = run_status_without_path(shared, &[]).await?; // The output should show multiple groups let context = get_task_context(&daemon.settings).await?; assert_template_matches("status__multiple_groups", output.stdout, context)?; Ok(()) } /// Calling `pueue status --json` will result in the current state being printed to the cli. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn json() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a task and wait until it finishes. assert_success(add_task(shared, "ls").await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; let output = run_client_command(shared, &["status", "--json"])?; let json = String::from_utf8_lossy(&output.stdout); let deserialized_state: State = serde_json::from_str(&json).context("Failed to deserialize json state")?; let state = get_state(shared).await?; assert_eq!( deserialized_state, *state, "Json state differs from actual daemon state." 
); Ok(()) } 07070100000090000081A4000000000000000000000001665F1B6900001767000000000000000000000000000000000000003300000000pueue-3.4.1/pueue/tests/client/integration/wait.rsuse std::{ process::Output, thread::{self, JoinHandle}, }; use anyhow::Result; use tokio::time::sleep; use crate::client::helper::*; use pueue_lib::settings::Shared; /// All lines have the following pattern: /// 01:49:42 - New task 1 with status Queued /// /// The following code trims all timestamps from the log output. /// We cannot work with proper timings, as these times are determined by the client. /// They are unknown to us. fn clean_wait_output(stdout: Vec<u8>) -> Vec<u8> { let log = String::from_utf8_lossy(&stdout); let mut log = log .lines() .map(|line| line.split('-').nth(1).unwrap().trim_start()) .collect::<Vec<&str>>() .join("\n"); log.push('\n'); log.as_bytes().to_owned() } /// Spawn the `wait` subcommand in a separate thread. /// We expect it to finish later on its own. async fn spawn_wait_client(shared: &Shared, args: Vec<&'static str>) -> JoinHandle<Result<Output>> { let shared_clone = shared.clone(); let wait_handle = thread::spawn(move || run_client_command(&shared_clone, &args)); // Sleep for half a second to give `pueue wait` time to properly start. sleep(std::time::Duration::from_millis(500)).await; wait_handle } /// Test that `wait` will detect new commands and wait until all queued commands are done. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn multiple_tasks() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Run a command that'll run for a short time after a delay. // The `pueue wait` command will be spawned directly afterwards, resulting in the spawned // process waiting for this command to finish.
run_client_command(shared, &["add", "--delay", "2 seconds", "sleep 1"])?; let wait_handle = spawn_wait_client(shared, vec!["wait"]).await; // We now spawn another task that should be picked up and waited upon // by the `wait` process. run_client_command(shared, &["add", "--after=0", "sleep 1"])?; let output = wait_handle.join().unwrap()?; let stdout = clean_wait_output(output.stdout); assert_snapshot_matches_stdout("wait__multiple_tasks", stdout)?; Ok(()) } /// Test that `wait` will correctly wait for the correct status on tasks. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn target_status() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Run a command that'll run for a short time after a delay. run_client_command(shared, &["add", "--delay", "4 seconds", "sleep 20"])?; // Wait for all tasks to be queued. // task0 will go from `Stashed` to `Queued` to `Running`. // task1 will go from `Stashed` to `Queued`. // // `Running` fulfills the `Queued` condition, which is why the `wait` command should // exit as soon as the second task is enqueued. let wait_handle = spawn_wait_client(shared, vec!["wait", "--status", "queued"]).await; // We now spawn another task. run_client_command(shared, &["add", "--delay", "1 seconds", "sleep 5"])?; let output = wait_handle.join().unwrap()?; let stdout = clean_wait_output(output.stdout); assert_snapshot_matches_stdout("wait__target_status", stdout)?; Ok(()) } /// Test that `wait` will correctly wait for the correct status on a single task. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn single_task_target_status() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Run a command that'll run for a short time after a short delay. run_client_command(shared, &["add", "--delay", "2 seconds", "sleep 20"])?; // We now spawn another task that should be queued after a long time.
// The wait command shouldn't wait for this one. run_client_command(shared, &["add", "--delay", "20 seconds", "sleep 5"])?; // The wait should exit as soon as task0 changes to `Queued`. let wait_handle = spawn_wait_client(shared, vec!["wait", "0", "--status", "queued"]).await; let output = wait_handle.join().unwrap()?; let stdout = clean_wait_output(output.stdout); assert_snapshot_matches_stdout("wait__single_task_target_status", stdout)?; Ok(()) } /// Test that `wait success` will correctly wait for successful tasks. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn success_success() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Run a command that'll run for a short time after a delay. run_client_command(shared, &["add", "--delay", "1 seconds", "sleep 2"])?; let wait_handle = spawn_wait_client(shared, vec!["wait", "--status", "success"]).await; let output = wait_handle.join().unwrap()?; assert!(output.status.success(), "Got non-zero exit code on wait."); let stdout = clean_wait_output(output.stdout); assert_snapshot_matches_stdout("wait__success_success", stdout)?; Ok(()) } /// Test that `wait success` will fail with exit code 1 if a single task fails. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn success_failure() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Run two commands; the first will immediately fail, the second should (theoretically) succeed. run_client_command( shared, &["add", "--delay", "2 seconds", "sleep 2 && failing_command"], )?; run_client_command(shared, &["add", "--delay", "2 seconds", "sleep 2"])?; let wait_handle = spawn_wait_client(shared, vec!["wait", "--status", "success"]).await; let output = wait_handle.join().unwrap()?; assert!( !output.status.success(), "Got unexpected zero exit code on wait."
); let stdout = clean_wait_output(output.stdout); assert_snapshot_matches_stdout("wait__success_failure", stdout)?; Ok(()) } 07070100000091000081A4000000000000000000000001665F1B6900000027000000000000000000000000000000000000002600000000pueue-3.4.1/pueue/tests/client/mod.rsmod helper; mod integration; mod unit; 07070100000092000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002400000000pueue-3.4.1/pueue/tests/client/unit07070100000093000081A4000000000000000000000001665F1B6900000012000000000000000000000000000000000000002B00000000pueue-3.4.1/pueue/tests/client/unit/mod.rsmod status_query; 07070100000094000081A4000000000000000000000001665F1B690000220A000000000000000000000000000000000000003400000000pueue-3.4.1/pueue/tests/client/unit/status_query.rsuse std::{collections::HashMap, path::PathBuf}; use anyhow::Result; use chrono::{Local, TimeZone}; use pretty_assertions::assert_eq; use rstest::rstest; use pueue::client::query::{apply_query, Rule}; use pueue_lib::state::PUEUE_DEFAULT_GROUP; use pueue_lib::task::{Task, TaskResult, TaskStatus}; /// A small helper function to reduce a bit of boilerplate. pub fn build_task() -> Task { Task::new( "sleep 60".to_owned(), PathBuf::from("/tmp"), HashMap::new(), PUEUE_DEFAULT_GROUP.to_owned(), TaskStatus::Queued, Vec::new(), 0, None, ) } /// Build a list of some pre-built tasks that are used for testing.
pub fn test_tasks() -> Vec<Task> { let mut tasks = Vec::new(); // Failed task let mut failed = build_task(); failed.id = 0; failed.status = TaskStatus::Done(TaskResult::Failed(255)); failed.start = Some(Local.with_ymd_and_hms(2022, 1, 10, 10, 0, 0).unwrap()); failed.end = Some(Local.with_ymd_and_hms(2022, 1, 10, 10, 5, 0).unwrap()); failed.label = Some("label-10-0".to_string()); tasks.insert(failed.id, failed); // Successful task let mut successful = build_task(); successful.id = 1; successful.status = TaskStatus::Done(TaskResult::Success); successful.start = Some(Local.with_ymd_and_hms(2022, 1, 8, 10, 0, 0).unwrap()); successful.end = Some(Local.with_ymd_and_hms(2022, 1, 8, 10, 5, 0).unwrap()); successful.label = Some("label-10-1".to_string()); tasks.insert(successful.id, successful); // Stashed task let mut stashed = build_task(); stashed.status = TaskStatus::Stashed { enqueue_at: None }; stashed.id = 2; stashed.label = Some("label-10-2".to_string()); tasks.insert(stashed.id, stashed); // Scheduled task let mut scheduled = build_task(); scheduled.status = TaskStatus::Stashed { enqueue_at: Some(Local.with_ymd_and_hms(2022, 1, 10, 11, 0, 0).unwrap()), }; scheduled.id = 3; scheduled.group = "testgroup".to_string(); tasks.insert(scheduled.id, scheduled); // Running task let mut running = build_task(); running.status = TaskStatus::Running; running.id = 4; running.start = Some(Local.with_ymd_and_hms(2022, 1, 2, 12, 0, 0).unwrap()); tasks.insert(running.id, running); // Add two queued tasks let mut queued = build_task(); queued.id = 5; tasks.insert(queued.id, queued.clone()); // Task 6 depends on task 5 queued.id = 6; queued.dependencies.push(5); tasks.insert(queued.id, queued); tasks } fn test_tasks_with_query(query: &str, group: &Option<String>) -> Result<Vec<Task>> { let mut tasks = test_tasks(); let query_result = apply_query(query, group)?; tasks = query_result.apply_filters(tasks); tasks = query_result.order_tasks(tasks); tasks = query_result.limit_tasks(tasks); 
Ok(tasks) } /// Select only specific columns for printing #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn column_selection() -> Result<()> { let result = apply_query("columns=id,status,command", &None)?; assert_eq!( result.selected_columns, [Rule::column_id, Rule::column_status, Rule::column_command] ); Ok(()) } /// Select the first few entries of the list #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn limit_first() -> Result<()> { let tasks = test_tasks_with_query("first 4", &None)?; assert!(tasks.len() == 4); assert_eq!(tasks[0].id, 0); assert_eq!(tasks[3].id, 3); Ok(()) } /// Select the last few entries of the list #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn limit_last() -> Result<()> { let tasks = test_tasks_with_query("last 4", &None)?; assert!(tasks.len() == 4); assert_eq!(tasks[0].id, 3); assert_eq!(tasks[3].id, 6); Ok(()) } /// Order the test state by task status. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn order_by_status() -> Result<()> { let tasks = test_tasks_with_query("order_by status", &None)?; let expected = vec![ TaskStatus::Stashed { enqueue_at: None }, TaskStatus::Stashed { enqueue_at: Some(Local.with_ymd_and_hms(2022, 1, 10, 11, 0, 0).unwrap()), }, TaskStatus::Queued, TaskStatus::Queued, TaskStatus::Running, TaskStatus::Done(TaskResult::Failed(255)), TaskStatus::Done(TaskResult::Success), ]; let actual: Vec<TaskStatus> = tasks.iter().map(|task| task.status.clone()).collect(); assert_eq!(actual, expected); Ok(()) } /// Filter by start date #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn filter_start() -> Result<()> { let tasks = test_tasks_with_query("start>2022-01-10 09:00:00", &None)?; assert!(tasks.len() == 1); assert_eq!(tasks[0].id, 0); Ok(()) } /// Filtering in combination with groups works as expected #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn filter_with_group() -> Result<()> { let tasks = 
test_tasks_with_query("status=stashed", &Some("testgroup".to_string()))?; assert!(tasks.len() == 1); assert_eq!(tasks[0].id, 3); Ok(()) } /// Filter by end date, given both as a plain date and as a datetime. #[rstest] #[case("2022-01-10")] #[case("2022-01-10 09:00:00")] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn filter_end_with_time(#[case] format: &'static str) -> Result<()> { let tasks = test_tasks_with_query(&format!("end<{format}"), &None)?; assert!(tasks.len() == 1); assert_eq!(tasks[0].id, 1); Ok(()) } /// Filter tasks by status #[rstest] #[case(TaskStatus::Queued, 2)] #[case(TaskStatus::Running, 1)] #[case(TaskStatus::Paused, 0)] #[case(TaskStatus::Done(TaskResult::Success), 1)] #[case(TaskStatus::Done(TaskResult::Failed(255)), 1)] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn filter_status(#[case] status: TaskStatus, #[case] match_count: usize) -> Result<()> { // Get the correct query keyword for the given status. let status_filter = match status { TaskStatus::Queued => "queued", TaskStatus::Stashed { .. } => "stashed", TaskStatus::Running => "running", TaskStatus::Paused => "paused", TaskStatus::Done(TaskResult::Success) => "success", TaskStatus::Done(TaskResult::Failed(_)) => "failed", _ => anyhow::bail!("Got unexpected TaskStatus in filter_status"), }; let tasks = test_tasks_with_query(&format!("status={status_filter}"), &None)?; for task in tasks.iter() { let id = task.id; assert_eq!( task.status, status, "Expected a different task status on task {id} based on filter {status:?}" ); } assert_eq!( tasks.len(), match_count, "Got a different amount of tasks than expected for the status filter {status:?}." ); Ok(()) } /// Filter tasks by label with the `%=` (contains), `=` (equals), and `!=` (not equals) filters.
#[rstest] #[case("%=", "label", 3)] #[case("%=", "label-10", 3)] #[case("%=", "label-10-1", 1)] #[case("=", "label-10-1", 1)] #[case("!=", "label-10-1", 6)] #[case("!=", "label-10", 7)] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn filter_label( #[case] operator: &'static str, #[case] label_filter: &'static str, #[case] match_count: usize, ) -> Result<()> { let tasks = test_tasks_with_query(&format!("label{operator}{label_filter}"), &None)?; for task in tasks.iter() { // Make sure the task either has no label or the label doesn't match the filter. if operator == "!=" { if let Some(label) = &task.label { assert_ne!( label, label_filter, "Label '{label}' matched exact filter '{label_filter}'" ); } continue; } let label = task.label.as_ref().expect("Expected task to have a label"); if operator == "%=" { // Make sure the label contained our filter. assert!( label.contains(label_filter), "Label '{label}' didn't contain filter '{label_filter}'" ); } else if operator == "=" { // Make sure the label exactly matches the filter. assert_eq!( label, &label_filter, "Label '{label}' didn't match exact filter '{label_filter}'" ); } } assert_eq!( tasks.len(), match_count, "Got a different amount of tasks than expected for the label filter: {label_filter}." 
); Ok(()) } 07070100000095000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001F00000000pueue-3.4.1/pueue/tests/daemon07070100000096000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002400000000pueue-3.4.1/pueue/tests/daemon/data07070100000097000081A4000000000000000000000001665F1B6900000DF3000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/tests/daemon/data/v2.0.0_state.json{ "settings": { "client": { "restart_in_place": true, "read_local_logs": true, "show_confirmation_questions": false, "show_expanded_aliases": false, "dark_mode": false, "max_status_lines": 10, "status_time_format": "%H:%M:%S", "status_datetime_format": "%Y-%m-%d %H:%M:%S" }, "daemon": { "pause_group_on_failure": false, "pause_all_on_failure": false, "callback": "notify-send \"Task {{ id }}\nCommand: {{ command }}\nPath: {{ path }}\nFinished with status '{{ result }}'\nTook: $(bc <<< \"{{end}} - {{start}}\") seconds\"", "callback_log_lines": 10 }, "shared": { "pueue_directory": null, "runtime_directory": null, "use_unix_socket": true, "unix_socket_path": null, "host": "127.0.0.1", "port": "6924", "daemon_cert": null, "daemon_key": null, "shared_secret_path": null }, "profiles": {} }, "tasks": { "0": { "id": 0, "original_command": "ls", "command": "ls", "path": "/home/nuke/.local/share/pueue", "envs": {}, "group": "default", "dependencies": [], "label": null, "status": { "Done": "Success" }, "prev_status": "Queued", "start": "2022-05-09T18:41:29.273563806+02:00", "end": "2022-05-09T18:41:29.473998692+02:00" }, "1": { "id": 1, "original_command": "ls", "command": "ls", "path": "/home/nuke/.local/share/pueue", "envs": { "PUEUE_WORKER_ID": "0", "PUEUE_GROUP": "test" }, "group": "test", "dependencies": [], "label": null, "status": { "Done": "Success" }, "prev_status": "Queued", "start": "2022-05-09T18:43:30.683677276+02:00", "end": "2022-05-09T18:43:30.884243263+02:00" }, "2": { "id": 2, "original_command": "ls", 
"command": "ls", "path": "/home/nuke/.local/share/pueue", "envs": { "PUEUE_WORKER_ID": "0", "PUEUE_GROUP": "test" }, "group": "test", "dependencies": [], "label": null, "status": "Queued", "prev_status": "Queued", "start": null, "end": null }, "3": { "id": 3, "original_command": "ls stash_it", "command": "ls stash_it", "path": "/home/nuke/.local/share/pueue", "envs": {}, "group": "default", "dependencies": [], "label": null, "status": { "Stashed": { "enqueue_at": null } }, "prev_status": { "Stashed": { "enqueue_at": null } }, "start": null, "end": null } }, "groups": { "default": { "status": "Running", "parallel_tasks": 1 }, "test": { "status": "Paused", "parallel_tasks": 2 } }, "config_path": null } 07070100000098000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002B00000000pueue-3.4.1/pueue/tests/daemon/integration07070100000099000081A4000000000000000000000001665F1B6900000A9C000000000000000000000000000000000000003200000000pueue-3.4.1/pueue/tests/daemon/integration/add.rsuse anyhow::Result; use chrono::Local; use pueue_lib::network::message::TaskSelection; use pueue_lib::task::*; use crate::helper::*; /// Test if adding a normal task works as intended. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_normal_add() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; let pre_addition_time = Local::now(); // Add a task that instantly finishes assert_success(add_task(shared, "sleep 0.01").await?); // Wait until the task finished and get state let task = wait_for_task_condition(shared, 0, |task| task.is_done()).await?; let post_addition_time = Local::now(); // Make sure the task's created_at and enqueue_at times are viable. 
assert!( task.created_at > pre_addition_time && task.created_at < post_addition_time, "Make sure the created_at time is set correctly" ); assert!( task.enqueued_at.unwrap() > pre_addition_time && task.enqueued_at.unwrap() < post_addition_time, "Make sure the enqueue_at time is set correctly" ); // The task finished successfully assert_eq!( get_task_status(shared, 0).await?, TaskStatus::Done(TaskResult::Success) ); Ok(()) } /// Test if adding a task in stashed state works. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_stashed_add() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Tell the daemon to add a task in stashed state. let mut message = create_add_message(shared, "sleep 60"); message.stashed = true; assert_success(send_message(shared, message).await?); // Make sure the task is actually stashed. let task = wait_for_task_condition(shared, 0, |task| { matches!(task.status, TaskStatus::Stashed { .. }) }) .await?; assert!( task.enqueued_at.is_none(), "An unqueued task shouldn't have enqueue_at set." ); Ok(()) } /// Pause the default group and make sure that immediately spawning a task still works. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_add_with_immediate_start() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Pause the daemon and prevent tasks from being automatically spawned. pause_tasks(shared, TaskSelection::All).await?; // Tell the daemon to add a task that must be immediately started. assert_success(add_and_start_task(shared, "sleep 60").await?); // Make sure the task is actually being started.
wait_for_task_condition(shared, 0, |task| task.is_running()).await?; Ok(()) } 0707010000009A000081A4000000000000000000000001665F1B6900000C7E000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/tests/daemon/integration/aliases.rsuse std::collections::HashMap; use anyhow::Result; use pueue_lib::network::message::*; use pueue_lib::task::*; use crate::helper::*; /// Test that using aliases when adding a task normally works. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_add_with_alias() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; let mut aliases = HashMap::new(); aliases.insert("non_existing_cmd".into(), "echo".into()); create_test_alias_file(daemon.tempdir.path(), aliases)?; // Add a task whose command should be replaced by an alias assert_success(add_task(shared, "non_existing_cmd test").await?); // Wait until the task finished and get state wait_for_task_condition(shared, 0, |task| task.is_done()).await?; let task = get_task(shared, 0).await?; // The task finished successfully and its command has replaced the alias. assert_eq!(task.status, TaskStatus::Done(TaskResult::Success)); assert_eq!(task.command, "echo test"); assert_eq!(task.original_command, "non_existing_cmd test"); // Make sure we see an actual "test" in the output. // This ensures that we really called "echo". let log = get_task_log(shared, 0, None).await?; assert_eq!(log, "test\n"); Ok(()) } /// Test that aliases are applied when a task's command is changed on restart. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_restart_with_alias() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a task whose command should fail and wait for it to finish. assert_success(add_task(shared, "non_existing_cmd test").await?); let task = wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // Ensure the command hasn't been mutated and the task failed.
assert_eq!(task.command, "non_existing_cmd test"); assert_eq!(task.status, TaskStatus::Done(TaskResult::Failed(127))); // Create the alias file which will replace the new command with "echo". let mut aliases = HashMap::new(); aliases.insert("replaced_cmd".into(), "echo".into()); create_test_alias_file(daemon.tempdir.path(), aliases)?; // Restart the task while editing its command. let message = RestartMessage { tasks: vec![TaskToRestart { task_id: 0, command: Some("replaced_cmd test".to_string()), path: None, label: None, delete_label: false, priority: Some(0), }], start_immediately: true, stashed: false, }; send_message(shared, message).await?; let task = wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // The task finished successfully and its command has replaced the alias. assert_eq!(task.original_command, "replaced_cmd test"); assert_eq!(task.command, "echo test"); assert_eq!(task.status, TaskStatus::Done(TaskResult::Success)); // Make sure we see an actual "test" in the output. // This ensures that we really called "echo". let log = get_task_log(shared, 0, None).await?; assert_eq!(log, "test\n"); Ok(()) } 0707010000009B000081A4000000000000000000000001665F1B69000012DD000000000000000000000000000000000000003400000000pueue-3.4.1/pueue/tests/daemon/integration/clean.rsuse anyhow::Result; use pueue_lib::network::message::*; use crate::helper::*; /// Ensure that clean only removes finished tasks #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_normal_clean() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // This should result in one failed, one finished, one running and one queued task. for command in &["failing", "ls", "sleep 60", "ls"] { assert_success(add_task(shared, command).await?); } // Wait for task2 to start. This implies that task[0,1] are done. 
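The clean behaviour exercised by these tests boils down to a simple filtering rule. A minimal std-only sketch of that rule, using a simplified `Status` enum and a made-up `clean` helper (the real types live in `pueue_lib` and the real logic in the daemon):

```rust
use std::collections::BTreeMap;

// Simplified stand-ins for pueue's task states; hypothetical, not pueue_lib's types.
#[derive(Debug, PartialEq)]
enum Status {
    Queued,
    Running,
    Success,
    Failed,
}

/// Remove finished tasks; with `successful_only`, failed tasks are kept.
fn clean(tasks: &mut BTreeMap<usize, Status>, successful_only: bool) {
    tasks.retain(|_, status| match status {
        // Unfinished tasks are never cleaned.
        Status::Queued | Status::Running => true,
        // Failed tasks survive a `--successful-only` clean.
        Status::Failed => successful_only,
        // Successfully finished tasks are always cleaned.
        Status::Success => false,
    });
}

fn main() {
    let mut tasks = BTreeMap::from([
        (0, Status::Failed),
        (1, Status::Success),
        (2, Status::Running),
        (3, Status::Queued),
    ]);
    clean(&mut tasks, true);
    assert!(tasks.contains_key(&0));
    assert!(!tasks.contains_key(&1));
    assert!(tasks.contains_key(&2));
    assert!(tasks.contains_key(&3));
}
```

This mirrors `test_successful_only_clean`: the failed task 0 remains while the successful task 1 is removed.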
wait_for_task_condition(shared, 2, |task| task.is_running()).await?; // Send the clean message let clean_message = CleanMessage { successful_only: false, group: None, }; send_message(shared, clean_message).await?; // Assert that task 0 and 1 have both been removed let state = get_state(shared).await?; assert!(!state.tasks.contains_key(&0)); assert!(!state.tasks.contains_key(&1)); Ok(()) } /// Ensure only successful tasks are removed, if the `-s` flag is set. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_successful_only_clean() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // This should result in one failed and one finished task. for command in &["failing", "ls"] { assert_success(add_task(shared, command).await?); } // Wait for task 1 to finish. This implies that task 0 is also done. wait_for_task_condition(shared, 1, |task| task.is_done()).await?; // Send the clean message let clean_message = CleanMessage { successful_only: true, group: None, }; send_message(shared, clean_message).await?; // Assert that task 0 is still there, as it failed. let state = get_state(shared).await?; assert!(state.tasks.contains_key(&0)); // Task 1 should have been removed. assert!(!state.tasks.contains_key(&1)); Ok(()) } /// Ensure only tasks of the selected group are cleaned up #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_clean_in_selected_group() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; add_group_with_slots(shared, "other", 1).await?; for group in &[PUEUE_DEFAULT_GROUP, "other"] { for command in &["failing", "ls", "sleep 60", "ls"] { assert_success(add_task_to_group(shared, command, group).await?); } } // Wait for task6 to start. This implies task[4,5] in the 'other' group being finished.
wait_for_task_condition(shared, 6, |task| task.is_running()).await?; // Send the clean message let clean_message = CleanMessage { successful_only: false, group: Some("other".to_string()), }; send_message(shared, clean_message).await?; // Assert that all tasks of the default group are still there let state = get_state(shared).await?; assert!(state.tasks.contains_key(&0)); assert!(state.tasks.contains_key(&1)); assert!(state.tasks.contains_key(&2)); assert!(state.tasks.contains_key(&3)); // Assert that the finished tasks 4 and 5 have been removed, while 6 and 7 remain assert!(!state.tasks.contains_key(&4)); assert!(!state.tasks.contains_key(&5)); assert!(state.tasks.contains_key(&6)); assert!(state.tasks.contains_key(&7)); Ok(()) } /// Ensure only successful tasks of the selected group are removed, if the `-s` flag is set. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_clean_successful_only_in_selected_group() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; add_group_with_slots(shared, "other", 1).await?; for group in &[PUEUE_DEFAULT_GROUP, "other"] { for command in &["failing", "ls", "sleep 60", "ls"] { assert_success(add_task_to_group(shared, command, group).await?); } } // Wait for task6 to start. This implies task[4,5] in the 'other' group being finished. wait_for_task_condition(shared, 6, |task| task.is_running()).await?; // Send the clean message let clean_message = CleanMessage { successful_only: true, group: Some("other".to_string()), }; send_message(shared, clean_message).await?; let state = get_state(shared).await?; // group default assert!(state.tasks.contains_key(&0)); assert!(state.tasks.contains_key(&1)); assert!(state.tasks.contains_key(&2)); assert!(state.tasks.contains_key(&3)); // group other assert!(state.tasks.contains_key(&4)); // Task 5 should have been removed.
assert!(!state.tasks.contains_key(&5)); assert!(state.tasks.contains_key(&6)); assert!(state.tasks.contains_key(&7)); Ok(()) } 0707010000009C000081A4000000000000000000000001665F1B69000009F5000000000000000000000000000000000000003300000000pueue-3.4.1/pueue/tests/daemon/integration/edit.rsuse std::path::PathBuf; use anyhow::{bail, Result}; use test_log::test; use pueue_lib::network::message::*; use pueue_lib::settings::Shared; use pueue_lib::state::GroupStatus; use pueue_lib::task::*; use crate::helper::*; async fn create_edited_task(shared: &Shared) -> Result<EditResponseMessage> { // Add a task assert_success(add_task(shared, "ls").await?); // The task should now be queued assert_eq!(get_task_status(shared, 0).await?, TaskStatus::Queued); // Send a request to edit that task let response = send_message(shared, Message::EditRequest(0)).await?; if let Message::EditResponse(payload) = response { Ok(payload) } else { bail!("Didn't receive EditResponse after requesting edit.") } } /// Test that the full flow of editing a task works as intended. #[test(tokio::test(flavor = "multi_thread", worker_threads = 2))] async fn test_edit_flow() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Pause the daemon. That way the command won't be started. pause_tasks(shared, TaskSelection::All).await?; wait_for_group_status(shared, PUEUE_DEFAULT_GROUP, GroupStatus::Paused).await?; let response = create_edited_task(shared).await?; assert_eq!(response.task_id, 0); assert_eq!(response.command, "ls"); assert_eq!(response.path, daemon.tempdir.path()); assert_eq!(response.priority, 0); // The task should be locked after the edit request succeeded. assert_eq!(get_task_status(shared, 0).await?, TaskStatus::Locked); // You cannot start a locked task. It should still be locked afterwards.
start_tasks(shared, TaskSelection::TaskIds(vec![0])).await?; assert_eq!(get_task_status(shared, 0).await?, TaskStatus::Locked); // Send the final message of the protocol and actually change the task. let response = send_message( shared, EditMessage { task_id: 0, command: Some("ls -ahl".into()), path: Some("/tmp".into()), label: Some("test".to_string()), delete_label: false, priority: Some(99), }, ) .await?; assert_success(response); // Make sure the task has been changed and enqueued. let task = get_task(shared, 0).await?; assert_eq!(task.command, "ls -ahl"); assert_eq!(task.path, PathBuf::from("/tmp")); assert_eq!(task.label, Some("test".to_string())); assert_eq!(task.status, TaskStatus::Queued); assert_eq!(task.priority, 99); Ok(()) } 0707010000009D000081A4000000000000000000000001665F1B69000003B8000000000000000000000000000000000000004400000000pueue-3.4.1/pueue/tests/daemon/integration/environment_variables.rsuse anyhow::Result; use crate::helper::*; /// Make sure that the daemon's environment variables don't bleed into the spawned subprocesses. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_isolated_task_environment() -> Result<()> { let (settings, _tempdir) = daemon_base_setup()?; let mut child = standalone_daemon(&settings.shared).await?; let shared = &settings.shared; // Spawn a task which prints a special environment variable. // This environment variable is injected into the daemon's environment. // It shouldn't show up in the task's environment, as the task should be isolated! 
assert_success(add_and_start_task(shared, "echo $PUEUED_TEST_ENV_VARIABLE").await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; let log = get_task_log(shared, 0, None).await?; // The log output should be empty assert_eq!(log, "\n"); child.kill()?; Ok(()) } 0707010000009E000081A4000000000000000000000001665F1B6900000AC7000000000000000000000000000000000000003400000000pueue-3.4.1/pueue/tests/daemon/integration/group.rsuse anyhow::Result; use pueue_lib::network::message::*; use crate::helper::*; /// Add and directly remove a group. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_add_and_remove() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a new group add_group_with_slots(shared, "testgroup", 1).await?; // Try to add the same group again. This should fail let add_message = GroupMessage::Add { name: "testgroup".to_string(), parallel_tasks: None, }; assert_failure(send_message(shared, add_message).await?); // Remove the newly added group and wait for the deletion to be processed. let remove_message = GroupMessage::Remove("testgroup".to_string()); assert_success(send_message(shared, remove_message.clone()).await?); wait_for_group_absence(shared, "testgroup").await?; // Make sure it got removed let state = get_state(shared).await?; assert!(!state.groups.contains_key("testgroup")); Ok(()) } /// Users cannot delete the default group. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_cannot_delete_default() -> Result<()> { let daemon = daemon().await?; let message = GroupMessage::Remove(PUEUE_DEFAULT_GROUP.to_string()); assert_failure(send_message(&daemon.settings.shared, message).await?); Ok(()) } /// Users cannot delete a non-existing group. 
#[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_cannot_delete_non_existing() -> Result<()> { let daemon = daemon().await?; let message = GroupMessage::Remove("doesnt_exist".to_string()); assert_failure(send_message(&daemon.settings.shared, message).await?); Ok(()) } /// Groups with tasks shouldn't be able to be removed. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_cannot_delete_group_with_tasks() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a new group add_group_with_slots(shared, "testgroup", 1).await?; // Add a task assert_success(add_task_to_group(shared, "ls", "testgroup").await?); wait_for_task_condition(&daemon.settings.shared, 0, |task| task.is_done()).await?; // We shouldn't be capable of removing that group let message = GroupMessage::Remove("testgroup".to_string()); assert_failure(send_message(shared, message).await?); // Remove the task from the group let remove_message = Message::Remove(vec![0]); send_message(shared, remove_message).await?; // Removal should now work. let message = GroupMessage::Remove("testgroup".to_string()); assert_success(send_message(shared, message).await?); Ok(()) } 0707010000009F000081A4000000000000000000000001665F1B6900001108000000000000000000000000000000000000003300000000pueue-3.4.1/pueue/tests/daemon/integration/kill.rsuse anyhow::Result; use pretty_assertions::assert_eq; use rstest::rstest; use pueue_lib::network::message::*; use pueue_lib::state::GroupStatus; use pueue_lib::task::*; use crate::helper::*; /// Test if killing running tasks works as intended. /// /// We test different ways of killing those tasks. /// - Via the --all flag, which just kills everything. /// - Via the --group flag, which just kills everything in the default group. /// - Via specific ids. /// /// If a whole group or everything is killed, the respective groups should also be paused, /// as long as there's no further queued task. 
/// This is a security measure to prevent unwanted task execution in an emergency. #[rstest] #[case( KillMessage { tasks: TaskSelection::All, signal: None, }, true )] #[case( KillMessage { tasks: TaskSelection::Group(PUEUE_DEFAULT_GROUP.into()), signal: None, }, true )] #[case( KillMessage { tasks: TaskSelection::TaskIds(vec![0, 1, 2]), signal: None, }, false )] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_kill_tasks_with_pause( #[case] kill_message: KillMessage, #[case] group_should_pause: bool, ) -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add multiple tasks and start them immediately for _ in 0..3 { assert_success(add_and_start_task(shared, "sleep 60").await?); } // Wait until all tasks are running for id in 0..3 { wait_for_task_condition(shared, id, |task| task.is_running()).await?; } // Add three more tasks that will be enqueued normally. for _ in 0..3 { assert_success(add_task(shared, "sleep 60").await?); } // Send the kill message send_message(shared, kill_message).await?; // Make sure all tasks get killed for id in 0..3 { wait_for_task_condition(shared, id, |task| { matches!(task.status, TaskStatus::Done(TaskResult::Killed)) }) .await?; } // Groups should be paused in specific modes. if group_should_pause { let state = get_state(shared).await?; assert_eq!( state.groups.get(PUEUE_DEFAULT_GROUP).unwrap().status, GroupStatus::Paused ); } Ok(()) } /// This test ensures the following rule: /// If a whole group or everything is killed, the respective groups should not be paused, as long /// as there's no further queued task in that group. /// /// We test different ways of killing those tasks. /// - Via the --all flag, which just kills everything. /// - Via the --group flag, which just kills everything in the default group. /// - Via specific ids.
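The pause-on-kill rule described in the doc comment above can be sketched with a hypothetical helper (`status_after_group_kill` is made up for illustration; it is not pueue's actual scheduler code): after a whole group is killed, the group is paused only if queued tasks remain in it, so nothing starts unexpectedly during an emergency kill.

```rust
// Simplified stand-in for pueue_lib's GroupStatus.
#[derive(Clone, Copy, PartialEq, Debug)]
enum GroupStatus {
    Running,
    Paused,
}

/// Hypothetical sketch: decide a group's status after all its tasks were killed.
fn status_after_group_kill(queued_tasks_in_group: usize) -> GroupStatus {
    if queued_tasks_in_group > 0 {
        // Queued tasks remain: pause the group to prevent them from starting.
        GroupStatus::Paused
    } else {
        // Nothing left to start, so the group may keep running.
        GroupStatus::Running
    }
}

fn main() {
    // test_kill_tasks_with_pause enqueues three extra tasks, so the group pauses.
    assert_eq!(status_after_group_kill(3), GroupStatus::Paused);
    // test_kill_tasks_without_pause leaves no queued tasks, so it keeps running.
    assert_eq!(status_after_group_kill(0), GroupStatus::Running);
}
```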
#[rstest] #[case( KillMessage { tasks: TaskSelection::All, signal: None, } )] #[case( KillMessage { tasks: TaskSelection::Group(PUEUE_DEFAULT_GROUP.into()), signal: None, } )] #[case( KillMessage { tasks: TaskSelection::TaskIds(vec![0, 1, 2]), signal: None, } )] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_kill_tasks_without_pause(#[case] kill_message: KillMessage) -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add multiple tasks and start them immediately for _ in 0..3 { assert_success(add_and_start_task(shared, "sleep 60").await?); } // Wait until all tasks are running for id in 0..3 { wait_for_task_condition(shared, id, |task| task.is_running()).await?; } // Add a dummy group that also shouldn't be paused. add_group_with_slots(shared, "testgroup", 1).await?; // Send the kill message send_message(shared, kill_message).await?; // Make sure all tasks get killed for id in 0..3 { wait_for_task_condition(shared, id, |task| { matches!(task.status, TaskStatus::Done(TaskResult::Killed)) }) .await?; } // Groups should not be paused, since no other queued tasks exist at this point in time. let state = get_state(shared).await?; assert_eq!( state.groups.get(PUEUE_DEFAULT_GROUP).unwrap().status, GroupStatus::Running ); assert_eq!( state.groups.get("testgroup").unwrap().status, GroupStatus::Running ); Ok(()) } 070701000000A0000081A4000000000000000000000001665F1B6900001451000000000000000000000000000000000000003200000000pueue-3.4.1/pueue/tests/daemon/integration/log.rsuse std::fs::read_to_string; use std::fs::File; use std::path::Path; use anyhow::{bail, Context, Result}; use pueue_lib::network::message::*; use tempfile::TempDir; use crate::helper::*; /// This function creates files `[0-19]` in the specified directory. /// The return value is the expected output. /// /// If `partial == true`, the expected output is only the last 5 lines.
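The helper below pads single-digit file names by hand so that lexicographic `ls` order matches numeric order. As a side note, a minimal sketch of the same padding using `format!`'s width specifier (`padded_name` is a made-up name, not part of the test suite):

```rust
// Zero-pad numbers to two digits so lexicographic order equals numeric order.
// `{:02}` pads with leading zeros to a minimum width of 2.
fn padded_name(number: usize) -> String {
    format!("{number:02}")
}

fn main() {
    let names: Vec<String> = (0..20).map(padded_name).collect();
    assert_eq!(names[1], "01");
    assert_eq!(names[15], "15");
    // Sorting lexicographically leaves the numerically ordered list unchanged.
    let mut sorted = names.clone();
    sorted.sort();
    assert_eq!(names, sorted);
}
```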
fn create_test_files(path: &Path, partial: bool) -> Result<String> { // Zero-pad single-digit numbers (e.g. 1 to 01), so they're correctly ordered when using `ls`. let names: Vec<String> = (0..20) .map(|number| { if number < 10 { let mut name = "0".to_string(); name.push_str(&number.to_string()); name } else { number.to_string() } }) .collect(); for name in &names { File::create(path.join(name))?; } // Only return the last 5 lines if partial output is requested. if partial { return Ok((15..20).fold(String::new(), |mut full, name| { full.push_str(&name.to_string()); full.push('\n'); full })); } // Create the full expected output. let mut expected_output = names.join("\n"); expected_output.push('\n'); Ok(expected_output) } /// Make sure that receiving the full output from the daemon works. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_full_log() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a temporary directory and put some files into it. let tempdir = TempDir::new().unwrap(); let tempdir_path = tempdir.path(); let expected_output = create_test_files(tempdir_path, false).context("Failed to create test files.")?; // Add a task that lists those files and wait for it to finish. let command = format!("ls {tempdir_path:?}"); assert_success(add_task(shared, &command).await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // Request all log lines let output = get_task_log(shared, 0, None).await?; // Make sure it's the same assert_eq!(output, expected_output); Ok(()) } /// Make sure that receiving partial output from the daemon works. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_partial_log() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Create a temporary directory and put some files into it.
let tempdir = TempDir::new().unwrap(); let tempdir_path = tempdir.path(); let expected_output = create_test_files(tempdir_path, true).context("Failed to create test files.")?; // Add a task that lists those files and wait for it to finish. let command = format!("ls {tempdir_path:?}"); assert_success(add_task(shared, &command).await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // Debug output to see what the file actually looks like: let real_log_path = shared.pueue_directory().join("task_logs").join("0.log"); let content = read_to_string(real_log_path).context("Failed to read actual file")?; println!("Actual log file contents: \n{content}"); // Request a partial log for task 0 let log_message = LogRequestMessage { task_ids: vec![0], send_logs: true, lines: Some(5), }; let response = send_message(shared, Message::Log(log_message)).await?; let logs = match response { Message::LogResponse(logs) => logs, _ => bail!("Received non LogResponse: {:#?}", response), }; // Get the received output let logs = logs.get(&0).unwrap(); let output = logs .output .clone() .context("Didn't find output on TaskLogMessage")?; let output = decompress_log(output)?; // Make sure it's the same assert_eq!(output, expected_output); Ok(()) } /// Ensure that stdout and stderr are properly ordered in log output. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_correct_log_order() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a task that produces several lines of output and wait for it to finish.
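The partial-log request above asks the daemon for the last 5 lines of output. A std-only sketch of that line-tail logic (`tail_lines` is a made-up helper for illustration; the daemon's real implementation works on compressed log files):

```rust
/// Return the last `n` lines of `output`, preserving their original order.
fn tail_lines(output: &str, n: usize) -> String {
    // Walk the lines from the back, take `n`, then restore the original order.
    let mut lines: Vec<&str> = output.lines().rev().take(n).collect();
    lines.reverse();
    let mut result = lines.join("\n");
    result.push('\n');
    result
}

fn main() {
    // Matches what `create_test_files(path, true)` expects: lines 15..20.
    let full: String = (0..20).map(|i| format!("{i:02}\n")).collect();
    assert_eq!(tail_lines(&full, 5), "15\n16\n17\n18\n19\n");
}
```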
let command = "echo 'test' && echo 'error' && echo 'test'"; assert_success(add_task(shared, command).await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // Request all log lines let log_message = LogRequestMessage { task_ids: vec![0], send_logs: true, lines: None, }; let response = send_message(shared, Message::Log(log_message)).await?; let logs = match response { Message::LogResponse(logs) => logs, _ => bail!("Received non LogResponse: {:#?}", response), }; // Get the received output let logs = logs.get(&0).unwrap(); let output = logs .output .clone() .context("Didn't find output on TaskLogMessage")?; let output = decompress_log(output)?; // Make sure it's the same assert_eq!(output, "test\nerror\ntest\n"); Ok(()) } 070701000000A1000081A4000000000000000000000001665F1B69000001C3000000000000000000000000000000000000003200000000pueue-3.4.1/pueue/tests/daemon/integration/mod.rsmod add; mod aliases; mod clean; mod edit; mod environment_variables; mod group; mod kill; mod log; mod parallel_tasks; mod pause; mod priority; mod remove; mod reset; mod restart; /// Tests regarding state restoration from a previous run. mod restore; /// Tests for shutting down the daemon. mod shutdown; mod spawn; mod start; mod stashed; /// Test that the worker pool environment variables are properly injected. mod worker_environment_variables; 070701000000A2000081A4000000000000000000000001665F1B6900000B87000000000000000000000000000000000000003D00000000pueue-3.4.1/pueue/tests/daemon/integration/parallel_tasks.rsuse anyhow::Result; use pretty_assertions::assert_eq; use pueue_lib::{network::message::ParallelMessage, task::*}; use crate::helper::*; /// Test that multiple groups with multiple slots work. /// /// For each group, Pueue should start tasks until all slots are filled. 
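The slot arithmetic these tests verify can be sketched with a hypothetical helper (`startable` is made up for illustration, not pueue's actual scheduler): a group starts queued tasks until its `parallel_tasks` limit is reached, and a limit of `0` means unlimited.

```rust
/// Hypothetical sketch: how many queued tasks a group may start right now.
fn startable(parallel_tasks: usize, running: usize, queued: usize) -> usize {
    if parallel_tasks == 0 {
        // A limit of 0 means unlimited: start everything that is queued.
        queued
    } else {
        // Otherwise only as many as there are free slots.
        queued.min(parallel_tasks.saturating_sub(running))
    }
}

fn main() {
    // 5 tasks in a fresh 3-slot group: 3 start, 2 stay queued.
    assert_eq!(startable(3, 0, 5), 3);
    // All slots occupied: nothing more may start.
    assert_eq!(startable(3, 3, 2), 0);
    // Limit raised to 0 while one task runs: all 9 remaining tasks start.
    assert_eq!(startable(0, 1, 9), 9);
}
```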
#[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_parallel_tasks() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // ---- First group ---- // Add a new group with 3 slots add_group_with_slots(shared, "testgroup_3", 3).await?; // Add 5 tasks to this group, only 3 should be started. for _ in 0..5 { assert_success(add_task_to_group(shared, "sleep 60", "testgroup_3").await?); } // Ensure those three tasks are started. for task_id in 0..3 { wait_for_task_condition(shared, task_id, |task| task.is_running()).await?; } // Tasks 3 and 4 should still be queued let state = get_state(shared).await?; for task_id in 3..5 { let task = state.tasks.get(&task_id).unwrap(); assert_eq!(task.status, TaskStatus::Queued); } // ---- Second group ---- // Add another group with 2 slots add_group_with_slots(shared, "testgroup_2", 2).await?; // Add another 5 tasks to this group, only 2 should be started. for _ in 0..5 { assert_success(add_task_to_group(shared, "sleep 60", "testgroup_2").await?); } // Ensure only two tasks are started. for task_id in 5..7 { wait_for_task_condition(shared, task_id, |task| task.is_running()).await?; } // Tasks 7-9 should still be queued let state = get_state(shared).await?; for task_id in 7..10 { let task = state.tasks.get(&task_id).unwrap(); assert_eq!(task.status, TaskStatus::Queued); } Ok(()) } /// Test that a group with a parallel limit of `0` runs an unlimited number of tasks. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_unlimited_parallel_tasks() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a new group with 1 slot add_group_with_slots(shared, "testgroup", 1).await?; // Add 10 long running tasks to this group, only 1 should be immediately started. for _ in 0..10 { assert_success(add_task_to_group(shared, "sleep 600", "testgroup").await?); } // Ensure the first task is started.
wait_for_task_condition(shared, 0, |task| task.is_running()).await?; // Update the parallel limit of the group to 0 let message = ParallelMessage { group: "testgroup".to_string(), parallel_tasks: 0, }; assert_success(send_message(shared, message).await?); // Make sure all other tasks are started as well in quick succession. for task_id in 1..10 { wait_for_task_condition(shared, task_id, |task| task.is_running()).await?; } Ok(()) } 070701000000A3000081A4000000000000000000000001665F1B6900000A93000000000000000000000000000000000000003400000000pueue-3.4.1/pueue/tests/daemon/integration/pause.rsuse anyhow::{Context, Result}; use pueue_lib::network::message::*; use pueue_lib::state::GroupStatus; use pueue_lib::task::*; use crate::helper::*; /// Make sure that no tasks will be started in a paused queue #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_pause_daemon() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // This pauses the daemon pause_tasks(shared, TaskSelection::All).await?; // Make sure the default group gets paused wait_for_group_status(shared, PUEUE_DEFAULT_GROUP, GroupStatus::Paused).await?; // Add a task and give the taskmanager time to theoretically start the process add_task(shared, "ls").await?; sleep_ms(500).await; // Make sure it's not started assert_eq!(get_task_status(shared, 0).await?, TaskStatus::Queued); Ok(()) } /// Make sure that running tasks will be properly paused #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_pause_running_task() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Start a long running task and make sure it's started add_task(shared, "sleep 60").await?; wait_for_task_condition(shared, 0, |task| task.is_running()).await?; // This pauses the daemon pause_tasks(shared, TaskSelection::All).await?; // Make sure the task as well as the default group get paused wait_for_task_condition(shared, 0, |task| 
matches!(task.status, TaskStatus::Paused)).await?; let state = get_state(shared).await?; assert_eq!( state.groups.get(PUEUE_DEFAULT_GROUP).unwrap().status, GroupStatus::Paused ); Ok(()) } /// A queue can get paused, while the tasks may finish on their own. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_pause_with_wait() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Start a long running task and make sure it's started add_task(shared, "sleep 60").await?; wait_for_task_condition(shared, 0, |task| task.is_running()).await?; // Pauses the default queue while waiting for tasks let message = PauseMessage { tasks: TaskSelection::Group(PUEUE_DEFAULT_GROUP.into()), wait: true, }; send_message(shared, message) .await .context("Failed to send message")?; // Make sure the default group gets paused, but the task is still running wait_for_group_status(shared, PUEUE_DEFAULT_GROUP, GroupStatus::Paused).await?; let state = get_state(shared).await?; assert_eq!(state.tasks.get(&0).unwrap().status, TaskStatus::Running); Ok(()) } 070701000000A4000081A4000000000000000000000001665F1B6900000C8E000000000000000000000000000000000000003700000000pueue-3.4.1/pueue/tests/daemon/integration/priority.rsuse anyhow::Result; use rstest::rstest; use pueue_lib::network::message::TaskSelection; use crate::helper::*; /// For tasks with the same priority, lowest ids are started first. #[rstest] #[case(0)] #[case(-1)] #[case(1)] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_default_ordering(#[case] priority: i32) -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Pause the daemon and prevent tasks from being automatically spawned. pause_tasks(shared, TaskSelection::All).await?; // Add two tasks with the same priority.
assert_success(add_task_with_priority(shared, "sleep 10", priority).await?); assert_success(add_task_with_priority(shared, "sleep 10", priority).await?); // Resume the daemon. start_tasks(shared, TaskSelection::All).await?; // Make sure task 0 is being started and task 1 is still waiting. wait_for_task_condition(shared, 0, |task| task.is_running()).await?; wait_for_task_condition(shared, 1, |task| task.is_queued()).await?; Ok(()) } /// Tasks with a higher priority should be executed before tasks with a lower priority. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_highest_priority_first() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Pause the daemon and prevent tasks from being automatically spawned. pause_tasks(shared, TaskSelection::All).await?; // Add one task with default priority and two with increasing priorities. assert_success(add_task(shared, "sleep 10").await?); assert_success(add_task_with_priority(shared, "sleep 10", 1).await?); assert_success(add_task_with_priority(shared, "sleep 10", 2).await?); // Resume the daemon. start_tasks(shared, TaskSelection::All).await?; // Make sure the highest-priority task 2 is started while tasks 0 and 1 are still waiting. wait_for_task_condition(shared, 2, |task| task.is_running()).await?; wait_for_task_condition(shared, 1, |task| task.is_queued()).await?; wait_for_task_condition(shared, 0, |task| task.is_queued()).await?; Ok(()) } /// Tasks with the default priority should be executed before tasks with a negative priority. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_default_priority_over_negative_priority() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Pause the daemon and prevent tasks from being automatically spawned. pause_tasks(shared, TaskSelection::All).await?; // Add two tasks with negative priorities and one with the default priority.
assert_success(add_task_with_priority(shared, "sleep 10", -2).await?); assert_success(add_task_with_priority(shared, "sleep 10", -1).await?); assert_success(add_task(shared, "sleep 10").await?); // Resume the daemon. start_tasks(shared, TaskSelection::All).await?; // Make sure the default-priority task 2 is started while tasks 0 and 1 are still waiting. wait_for_task_condition(shared, 2, |task| task.is_running()).await?; wait_for_task_condition(shared, 0, |task| task.is_queued()).await?; wait_for_task_condition(shared, 1, |task| task.is_queued()).await?; Ok(()) } 070701000000A5000081A4000000000000000000000001665F1B690000070D000000000000000000000000000000000000003500000000pueue-3.4.1/pueue/tests/daemon/integration/remove.rsuse anyhow::Result; use pueue_lib::network::message::*; use crate::helper::*; /// Ensure that only removable tasks can be removed. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_normal_remove() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // We'll add some tasks. // Task 0-2 will be immediately handled by the daemon, the other three tasks are queued for // now. However, we'll manipulate them in such a way that we'll end up with this mapping: // 0 -> failed // 1 -> success // 2 -> running // 3 -> paused // 4 -> queued // 5 -> stashed for command in &["failing", "ls", "sleep 60", "sleep 60", "ls", "ls"] { assert_success(add_task(shared, command).await?); } // Wait for task2 to start. This implies task[0,1] being finished. wait_for_task_condition(shared, 2, |task| task.is_running()).await?; // Explicitly start task3, wait for it to start and directly pause it.
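The priority tests above boil down to one ordering rule: the highest priority wins, and the lowest id breaks ties. A std-only sketch of that rule (`next_task_id` is a made-up helper for illustration, not pueue's API):

```rust
use std::cmp::Reverse;

/// Pick the next queued task from `(id, priority)` pairs:
/// highest priority first, lowest id as tie-breaker.
fn next_task_id(queued: &[(usize, i32)]) -> Option<usize> {
    queued
        .iter()
        // Reverse(priority) makes the minimum correspond to the highest priority.
        .min_by_key(|(id, priority)| (Reverse(*priority), *id))
        .map(|(id, _)| *id)
}

fn main() {
    // Equal priorities: the lowest id wins (test_default_ordering).
    assert_eq!(next_task_id(&[(0, 0), (1, 0)]), Some(0));
    // A higher priority wins (test_highest_priority_first).
    assert_eq!(next_task_id(&[(0, 0), (1, 1), (2, 2)]), Some(2));
    // Default priority beats negative ones (test_default_priority_over_negative_priority).
    assert_eq!(next_task_id(&[(0, -2), (1, -1), (2, 0)]), Some(2));
}
```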
start_tasks(shared, TaskSelection::TaskIds(vec![3])).await?; wait_for_task_condition(shared, 3, |task| task.is_running()).await?; pause_tasks(shared, TaskSelection::TaskIds(vec![3])).await?; // Stash task 5 send_message(shared, Message::Stash(vec![5])).await?; let remove_message = Message::Remove(vec![0, 1, 2, 3, 4, 5]); send_message(shared, remove_message).await?; // Ensure that every task which isn't currently running or paused has been removed let state = get_state(shared).await?; assert!(!state.tasks.contains_key(&0)); assert!(!state.tasks.contains_key(&1)); assert!(state.tasks.contains_key(&2)); assert!(state.tasks.contains_key(&3)); assert!(!state.tasks.contains_key(&4)); assert!(!state.tasks.contains_key(&5)); Ok(()) } 070701000000A6000081A4000000000000000000000001665F1B69000003EE000000000000000000000000000000000000003400000000pueue-3.4.1/pueue/tests/daemon/integration/reset.rsuse anyhow::{Context, Result}; use pueue_lib::network::message::*; use crate::helper::*; /// A reset command kills all tasks and forces a clean state. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_reset() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add some tasks and make sure the long-running one is started. add_task(shared, "ls").await?; add_task(shared, "failed").await?; add_task(shared, "sleep 60").await?; add_task(shared, "ls").await?; wait_for_task_condition(shared, 2, |task| task.is_running()).await?; // Reset the daemon send_message(shared, ResetMessage {}) .await .context("Failed to send Reset message")?; // Resetting is asynchronous, wait for the first task to disappear. wait_for_task_absence(shared, 0).await?; // All tasks should have been removed.
let state = get_state(shared).await?; assert!(state.tasks.is_empty()); Ok(()) } 070701000000A7000081A4000000000000000000000001665F1B6900000C5B000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/tests/daemon/integration/restart.rsuse std::path::PathBuf; use anyhow::Result; use pueue_lib::network::message::*; use crate::helper::*; /// Ensure that restarting a task in-place resets its state and possibly updates the command and /// path to the new values. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_restart_in_place() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a single task that instantly finishes. assert_success(add_task(shared, "sleep 0.1").await?); // Wait for task 0 to finish. let original_task = wait_for_task_condition(shared, 0, |task| task.is_done()).await?; assert!( original_task.enqueued_at.is_some(), "Task is done and should have enqueue_at set." ); // Restart task 0 with an extended sleep command and a different path. let restart_message = RestartMessage { tasks: vec![TaskToRestart { task_id: 0, command: Some("sleep 60".to_string()), path: Some(PathBuf::from("/tmp")), label: Some("test".to_owned()), delete_label: false, priority: Some(0), }], start_immediately: false, stashed: false, }; assert_success(send_message(shared, restart_message).await?); let state = get_state(shared).await?; assert_eq!(state.tasks.len(), 1, "No new task should be created"); // Task 0 should soon be started again let task = wait_for_task_condition(shared, 0, |task| task.is_running()).await?; // The created_at time should be the same, as we updated in place assert_eq!( original_task.created_at, task.created_at, "created_at shouldn't change on 'restart -i'" ); // The enqueued_at time should have been updated assert!( original_task.enqueued_at.unwrap() < task.enqueued_at.unwrap(), "The second run should be enqueued after the first run."
); // Make sure both command and path were changed let state = get_state(shared).await?; let task = state.tasks.get(&0).unwrap(); assert_eq!(task.command, "sleep 60"); assert_eq!(task.path, PathBuf::from("/tmp")); assert_eq!(task.label, Some("test".to_owned())); Ok(()) } /// Ensure that a running task cannot be restarted. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_cannot_restart_running() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a single long-running task. assert_success(add_task(shared, "sleep 60").await?); // Wait for task 0 to start. wait_for_task_condition(shared, 0, |task| task.is_running()).await?; // Try to restart task 0 while it is still running. let restart_message = RestartMessage { tasks: vec![TaskToRestart { task_id: 0, command: None, path: None, label: None, delete_label: false, priority: Some(0), }], start_immediately: false, stashed: false, }; assert_failure(send_message(shared, restart_message).await?); Ok(()) } 070701000000A8000081A4000000000000000000000001665F1B6900000764000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/tests/daemon/integration/restore.rsuse anyhow::Result; use pretty_assertions::assert_eq; use pueue_lib::network::message::TaskSelection; use pueue_lib::state::GroupStatus; use crate::helper::*; /// The daemon should start in the same state as before shutdown, if no tasks are queued. /// This function tests for the running state. #[tokio::test] async fn test_start_running() -> Result<()> { let (settings, _tempdir) = daemon_base_setup()?; let mut child = standalone_daemon(&settings.shared).await?; let shared = &settings.shared; // Kill the daemon and wait for it to shut down. assert_success(shutdown_daemon(shared).await?); wait_for_shutdown(&mut child).await?; // Boot it up again let mut child = standalone_daemon(&settings.shared).await?; // Assert that the group is still running.
let state = get_state(shared).await?; assert_eq!( state.groups.get(PUEUE_DEFAULT_GROUP).unwrap().status, GroupStatus::Running ); child.kill()?; Ok(()) } /// The daemon should start in the same state as before shutdown, if no tasks are queued. /// This function tests for the paused state. #[tokio::test] async fn test_start_paused() -> Result<()> { let (settings, _tempdir) = daemon_base_setup()?; let mut child = standalone_daemon(&settings.shared).await?; let shared = &settings.shared; // This pauses the daemon pause_tasks(shared, TaskSelection::All).await?; // Kill the daemon and wait for it to shut down. assert_success(shutdown_daemon(shared).await?); wait_for_shutdown(&mut child).await?; // Boot it up again let mut child = standalone_daemon(&settings.shared).await?; // Assert that the group is still paused. let state = get_state(shared).await?; assert_eq!( state.groups.get(PUEUE_DEFAULT_GROUP).unwrap().status, GroupStatus::Paused ); child.kill()?; Ok(()) } 070701000000A9000081A4000000000000000000000001665F1B690000061C000000000000000000000000000000000000003700000000pueue-3.4.1/pueue/tests/daemon/integration/shutdown.rsuse anyhow::{Context, Result}; use crate::helper::*; /// Spin up the daemon and send a SIGTERM shortly afterwards. /// This should trigger the graceful shutdown and kill the process. #[tokio::test] async fn test_ctrlc() -> Result<()> { let (settings, _tempdir) = daemon_base_setup()?; let mut child = standalone_daemon(&settings.shared).await?; use command_group::{Signal, UnixChildExt}; // Send SIGTERM signal to process via nix child .signal(Signal::SIGTERM) .context("Failed to send SIGTERM to daemon")?; // Sleep for 500ms and give the daemon time to shut down sleep_ms(500).await; let result = child.try_wait(); assert!(matches!(result, Ok(Some(_)))); let code = result.unwrap().unwrap(); assert!(matches!(code.code(), Some(0))); Ok(()) } /// Spin up the daemon and send a graceful shutdown message afterwards. 
/// The daemon should shut down normally and exit with code 0. #[tokio::test] async fn test_graceful_shutdown() -> Result<()> { let (settings, _tempdir) = daemon_base_setup()?; let mut child = standalone_daemon(&settings.shared).await?; // Kill the daemon gracefully and wait for it to shut down. assert_success(shutdown_daemon(&settings.shared).await?); wait_for_shutdown(&mut child).await?; // Sleep for 500ms and give the daemon time to shut down sleep_ms(500).await; let result = child.try_wait(); assert!(matches!(result, Ok(Some(_)))); let code = result.unwrap().unwrap(); assert!(matches!(code.code(), Some(0))); Ok(()) } 070701000000AA000081A4000000000000000000000001665F1B690000062E000000000000000000000000000000000000003400000000pueue-3.4.1/pueue/tests/daemon/integration/spawn.rsuse std::io::Read; use anyhow::{Context, Result}; use rstest::rstest; use pueue_lib::{ log::get_log_file_handle, task::{TaskResult, TaskStatus}, }; use crate::helper::*; /// Make sure a task that isn't able to spawn prints out an error message to the task's log file. #[rstest] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_fail_to_spawn_task() -> Result<()> { // Start a custom daemon that uses a shell command that doesn't exist. let (mut settings, tempdir) = daemon_base_setup()?; settings.daemon.shell_command = Some(vec!["thisshellshouldreallynotexist.hopefully".to_string()]); let tempdir_path = tempdir.path().to_path_buf(); settings .save(&Some(tempdir_path.join("pueue.yml"))) .context("Couldn't write pueue config to temporary directory")?; let daemon = daemon_with_settings(settings, tempdir).await?; let shared = &daemon.settings.shared; // Try to start a task. That task should then fail.
assert_success(add_task(shared, "sleep 60").await?); let task = wait_for_task_condition(shared, 0, |task| task.failed()).await?; assert!(matches!( task.status, TaskStatus::Done(TaskResult::FailedToSpawn(_)) )); // Get the log output and ensure that there's the expected error log from the daemon. let mut log_file = get_log_file_handle(0, &tempdir_path)?; let mut output = String::new(); log_file.read_to_string(&mut output)?; assert!(output.starts_with("Pueue error, failed to spawn task. Check your command.")); Ok(()) } 070701000000AB000081A4000000000000000000000001665F1B6900000808000000000000000000000000000000000000003400000000pueue-3.4.1/pueue/tests/daemon/integration/start.rsuse anyhow::Result; use pueue_lib::network::message::*; use pueue_lib::task::*; use rstest::rstest; use crate::helper::*; /// Test if explicitly starting tasks and resuming tasks works as intended. /// /// We test different ways of resuming tasks. /// - Via the --all flag, which resumes everything. /// - Via the --group flag, which resumes everything in a specific group (in our case 'default'). /// - Via specific ids. #[rstest] #[case( StartMessage { tasks: TaskSelection::All, } )] #[case( StartMessage { tasks: TaskSelection::Group(PUEUE_DEFAULT_GROUP.into()), } )] #[case( StartMessage { tasks: TaskSelection::TaskIds(vec![0, 1, 2]), } )] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_start_tasks(#[case] start_message: StartMessage) -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add multiple tasks; only a single one will be started by default. for _ in 0..3 { assert_success(add_task(shared, "sleep 60").await?); } // Wait for task 0 to start on its own. // We have to do this, otherwise we'll start task 1/2 beforehand, which prevents task 0 from being // started on its own.
wait_for_task_condition(shared, 0, |task| task.is_running()).await?; // Start tasks 1 and 2 manually start_tasks(shared, TaskSelection::TaskIds(vec![1, 2])).await?; // Wait until all tasks are running for id in 0..3 { wait_for_task_condition(shared, id, |task| task.is_running()).await?; } // Pause the whole daemon and wait until all tasks are paused pause_tasks(shared, TaskSelection::All).await?; for id in 0..3 { wait_for_task_condition(shared, id, |task| matches!(task.status, TaskStatus::Paused)) .await?; } // Send the start message send_message(shared, start_message).await?; // Ensure all tasks are running for id in 0..3 { wait_for_task_condition(shared, id, |task| task.is_running()).await?; } Ok(()) } 070701000000AC000081A4000000000000000000000001665F1B690000108D000000000000000000000000000000000000003600000000pueue-3.4.1/pueue/tests/daemon/integration/stashed.rsuse anyhow::{Context, Result}; use chrono::{DateTime, Local, TimeDelta}; use pueue_lib::state::GroupStatus; use rstest::rstest; use pueue_lib::network::message::*; use pueue_lib::settings::Shared; use pueue_lib::task::*; use crate::helper::*; /// Helper to add a stashed task to the daemon. pub async fn add_stashed_task( shared: &Shared, command: &str, stashed: bool, enqueue_at: Option<DateTime<Local>>, ) -> Result<Message> { let mut message = create_add_message(shared, command); message.stashed = stashed; message.enqueue_at = enqueue_at; send_message(shared, message) .await .context("Failed to send add message") } /// Tasks can be stashed and scheduled for being enqueued at a specific point in time. /// /// Furthermore, these stashed tasks can then be manually enqueued again.
#[rstest] #[case(true, None)] #[case(true, Some(Local::now() + TimeDelta::try_minutes(2).unwrap()))] #[case(false, Some(Local::now() + TimeDelta::try_minutes(2).unwrap()))] #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_enqueued_tasks( #[case] stashed: bool, #[case] enqueue_at: Option<DateTime<Local>>, ) -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; assert_success(add_stashed_task(shared, "sleep 10", stashed, enqueue_at).await?); // The task should be added in stashed state. let task = wait_for_task_condition(shared, 0, |task| task.is_stashed()).await?; assert!( task.enqueued_at.is_none(), "Stashed tasks shouldn't have an enqueued_at date set." ); // Assert the correct point in time has been set, in case `enqueue_at` is specified. if enqueue_at.is_some() { let status = get_task_status(shared, 0).await?; assert!(task.is_stashed()); if let TaskStatus::Stashed { enqueue_at: inner } = status { assert_eq!(inner, enqueue_at); } } let pre_enqueue_time = Local::now(); // Manually enqueue the task let enqueue_message = EnqueueMessage { task_ids: vec![0], enqueue_at: None, }; send_message(shared, enqueue_message) .await .context("Failed to send enqueue message")?; // Make sure the task is started after being enqueued let task = wait_for_task_condition(shared, 0, |task| task.is_running()).await?; assert!( task.enqueued_at.unwrap() > pre_enqueue_time, "Enqueued tasks should have an enqueued_at time set." ); Ok(()) } /// Delayed stashed tasks will be enqueued. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_delayed_tasks() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // The task will be stashed and automatically enqueued after about 1 second.
let response = add_stashed_task( shared, "sleep 10", true, Some(Local::now() + TimeDelta::try_seconds(1).unwrap()), ) .await?; assert_success(response); // The task should be added in stashed state for about 1 second. wait_for_task_condition(shared, 0, |task| task.is_stashed()).await?; // Make sure the task is started after being automatically enqueued. sleep_ms(800).await; wait_for_task_condition(shared, 0, |task| task.is_running()).await?; Ok(()) } /// Stash a task that's currently queued for execution. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_stash_queued_task() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Pause the daemon pause_tasks(shared, TaskSelection::All).await?; wait_for_group_status(shared, "default", GroupStatus::Paused).await?; // Add a task that's queued for execution. add_task(shared, "sleep 10").await?; // Stash the task send_message(shared, Message::Stash(vec![0])) .await .context("Failed to send Stash message")?; let task = get_task(shared, 0).await?; assert_eq!(task.status, TaskStatus::Stashed { enqueue_at: None }); assert!( task.enqueued_at.is_none(), "Stashed tasks shouldn't have an enqueued_at date set." ); Ok(()) } 070701000000AD000081A4000000000000000000000001665F1B6900000F5F000000000000000000000000000000000000004B00000000pueue-3.4.1/pueue/tests/daemon/integration/worker_environment_variables.rsuse anyhow::Result; use pueue_lib::{network::message::TaskSelection, state::PUEUE_DEFAULT_GROUP}; use crate::helper::*; /// Make sure that the expected worker variables are injected into the tasks' environment variables /// for a single task on the default queue. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_single_worker() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add some tasks that finish instantly. for _ in 0..3 { assert_success(add_env_task(shared, "sleep 0.1").await?); } // Wait a second.
Since the tasks are run sequentially, the timings are sometimes a bit tight. sleep_ms(1000).await; // Wait for the last task to finish. wait_for_task_condition(shared, 2, |task| task.is_done()).await?; // All tasks should have the worker id 0, as the tasks are processed sequentially. let state = get_state(shared).await?; for task_id in 0..3 { assert_worker_envs(shared, &state, task_id, 0, PUEUE_DEFAULT_GROUP).await?; } Ok(()) } /// Make sure the correct workers are used when having multiple slots. /// /// Slots should be properly freed and re-used. /// Add some tasks to a group with three slots: /// /// Task0-2 are started in parallel. /// Task3-4 are started in parallel once Task0-2 finished. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_multiple_worker() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Pause the group before adding the tasks. // Adding tasks takes a while and the first task might already be finished // when we add the last one. pause_tasks(shared, TaskSelection::Group("test_3".to_string())).await?; // Add three tasks. They will be started in the same main loop iteration // and run in parallel. for _ in 0..3 { assert_success(add_env_task_to_group(shared, "sleep 0.1", "test_3").await?); } // Start and wait for the tasks start_tasks(shared, TaskSelection::Group("test_3".to_string())).await?; wait_for_task_condition(shared, 2, |task| task.is_done()).await?; // The first three tasks should have the same worker id's as the task ids. // They ran in parallel and each should have their own worker id assigned. let state = get_state(shared).await?; for task_id in 0..3 { assert_worker_envs(shared, &state, task_id, task_id, "test_3").await?; } // Spawn two more tasks and wait for them. // They should now get worker0 and worker1, as there aren't any other running tasks. 
pause_tasks(shared, TaskSelection::Group("test_3".to_string())).await?; for _ in 0..2 { assert_success(add_env_task_to_group(shared, "sleep 0.1", "test_3").await?); } start_tasks(shared, TaskSelection::Group("test_3".to_string())).await?; wait_for_task_condition(shared, 4, |task| task.is_done()).await?; let state = get_state(shared).await?; // Task3 gets worker0 assert_worker_envs(shared, &state, 3, 0, "test_3").await?; // Task4 gets worker1 assert_worker_envs(shared, &state, 4, 1, "test_3").await?; Ok(()) } /// Make sure the worker pools are properly initialized when manually adding a new group. #[tokio::test(flavor = "multi_thread", worker_threads = 2)] async fn test_worker_for_new_pool() -> Result<()> { let daemon = daemon().await?; let shared = &daemon.settings.shared; // Add a new group add_group_with_slots(shared, "testgroup", 1).await?; // Add a tasks that finishes instantly. assert_success(add_env_task_to_group(shared, "sleep 0.1", "testgroup").await?); wait_for_task_condition(shared, 0, |task| task.is_done()).await?; // The task should have the correct worker id + group. let state = get_state(shared).await?; assert_worker_envs(shared, &state, 0, 0, "testgroup").await?; Ok(()) } 070701000000AE000081A4000000000000000000000001665F1B6900000033000000000000000000000000000000000000002600000000pueue-3.4.1/pueue/tests/daemon/mod.rsmod integration; mod state_backward_compatibility; 070701000000AF000081A4000000000000000000000001665F1B6900000518000000000000000000000000000000000000003F00000000pueue-3.4.1/pueue/tests/daemon/state_backward_compatibility.rsuse std::fs::File; use std::io::prelude::*; use anyhow::{Context, Result}; use pueue::daemon::state_helper::restore_state; use tempfile::TempDir; use pueue_lib::settings::Settings; /// From 0.12.2 on, we aim to have full backward compatibility. /// For this reason, an old v0.12.2 serialized state has been checked in. /// /// We have to be able to restore from that state at all costs. 
/// Everything else results in a breaking change and needs a major version change. /// /// On top of simply having an old state, I also added a few non-existing fields. /// This should be handled as well. #[test] fn test_restore_from_old_state() -> Result<()> { better_panic::install(); let old_state = include_str!("data/v2.0.0_state.json"); let temp_dir = TempDir::new()?; let temp_path = temp_dir.path(); // Open v0.12.2 file and write old state to it. let temp_state_path = temp_dir.path().join("state.json"); let mut file = File::create(temp_state_path)?; file.write_all(old_state.as_bytes())?; let mut settings = Settings::default(); settings.shared.pueue_directory = Some(temp_path.to_path_buf()); let state = restore_state(&settings.shared.pueue_directory()) .context("Failed to restore state in test")?; assert!(state.is_some()); Ok(()) } 070701000000B0000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001F00000000pueue-3.4.1/pueue/tests/helper070701000000B1000081A4000000000000000000000001665F1B6900000959000000000000000000000000000000000000002A00000000pueue-3.4.1/pueue/tests/helper/asserts.rsuse anyhow::{bail, Result}; use pueue_lib::network::message::*; use pueue_lib::settings::Shared; use pueue_lib::state::State; use super::send_message; /// Assert that a message is a successful message. pub fn assert_success(message: Message) { assert!( matches!(message, Message::Success(_)), "Expected to get SuccessMessage, got {message:?}", ); } /// Assert that a message is a failure message. pub fn assert_failure(message: Message) { assert!( matches!(message, Message::Failure(_)), "Expected to get FailureMessage, got {message:?}", ); } /// Make sure the expected environment variables are set. /// This also makes sure, the variables have properly been injected into the processes' /// environment. 
pub async fn assert_worker_envs( shared: &Shared, state: &State, task_id: usize, worker: usize, group: &str, ) -> Result<()> { let task = state.tasks.get(&task_id).unwrap(); // Make sure the environment variables have been properly set. assert_eq!( task.envs.get("PUEUE_GROUP"), Some(&group.to_string()), "Worker group didn't match for task {task_id}", ); assert_eq!( task.envs.get("PUEUE_WORKER_ID"), Some(&worker.to_string()), "Worker id hasn't been correctly set for task {task_id}", ); // Get the log output for the task. let response = send_message( shared, LogRequestMessage { task_ids: vec![task_id], send_logs: true, lines: None, }, ) .await?; let Message::LogResponse(message) = response else { bail!("Expected LogResponse got {response:?}") }; // Make sure the PUEUE_WORKER_ID and PUEUE_GROUP variables are present in the output. // They're always printed as to the [add_env_task] function. let log = message .get(&task_id) .expect("Log should contain requested task."); let stdout = log.output.clone().unwrap(); let output = String::from_utf8_lossy(&stdout); assert!( output.contains(&format!("WORKER_ID: {worker}")), "Output should contain worker id {worker} for task {task_id}. Got: {output}", ); assert!( output.contains(&format!("GROUP: {group}")), "Output should contain worker group {group} for task {task_id}. Got: {output}", ); Ok(()) } 070701000000B2000081A4000000000000000000000001665F1B69000009E7000000000000000000000000000000000000002900000000pueue-3.4.1/pueue/tests/helper/daemon.rsuse std::fs::File; use std::io::Read; use std::path::Path; use std::process::Child; use anyhow::{anyhow, bail, Context, Result}; use pueue_lib::network::message::*; use pueue_lib::settings::*; use super::*; /// Send the Shutdown message to the test daemon. 
pub async fn shutdown_daemon(shared: &Shared) -> Result<Message> { let message = Shutdown::Graceful; send_message(shared, message) .await .context("Failed to send Shutdown message") } /// Get a daemon pid from a specific pueue directory. /// This function gives the daemon a little time to boot up, but ultimately crashes if it takes too /// long. pub async fn get_pid(pid_path: &Path) -> Result<i32> { // Give the daemon about 1 sec to boot and create the pid file. let sleep = 50; let tries = TIMEOUT / sleep; let mut current_try = 0; while current_try < tries { // The daemon didn't create the pid file yet. Wait for 50ms and try again. if !pid_path.exists() { sleep_ms(sleep).await; current_try += 1; continue; } let mut file = File::open(pid_path).context("Couldn't open pid file")?; let mut content = String::new(); file.read_to_string(&mut content) .context("Couldn't read pid file")?; // The file has been created but not yet been written to. if content.is_empty() { sleep_ms(50).await; current_try += 1; continue; } let pid = content .parse::<i32>() .map_err(|_| anyhow!("Couldn't parse value: {content}"))?; return Ok(pid); } bail!("Couldn't find pid file after about 1 sec."); } /// Waits for a daemon to shut down. pub async fn wait_for_shutdown(child: &mut Child) -> Result<()> { // Give the daemon about 1 sec to shut down. let sleep = 50; let tries = TIMEOUT / sleep; let mut current_try = 0; while current_try < tries { // Try to read the process exit code. If this succeeds or // an error is returned, the process is gone. if let Ok(None) = child.try_wait() { // Process is still alive, wait a little longer sleep_ms(sleep).await; current_try += 1; continue; } // Process is gone; either there was a status code // or the child is not a child of this process (highly // unlikely).
return Ok(()); } bail!("Pueued daemon didn't shut down after about 2 sec."); } 070701000000B3000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002900000000pueue-3.4.1/pueue/tests/helper/factories070701000000B4000081A4000000000000000000000001665F1B6900000207000000000000000000000000000000000000003200000000pueue-3.4.1/pueue/tests/helper/factories/group.rsuse anyhow::Result; use pueue_lib::network::message::*; use pueue_lib::settings::*; use crate::helper::*; /// Create a new group with a specific amount of slots. pub async fn add_group_with_slots(shared: &Shared, group_name: &str, slots: usize) -> Result<()> { let add_message = GroupMessage::Add { name: group_name.to_string(), parallel_tasks: Some(slots), }; assert_success(send_message(shared, add_message.clone()).await?); wait_for_group(shared, group_name).await?; Ok(()) } 070701000000B5000081A4000000000000000000000001665F1B6900000041000000000000000000000000000000000000003000000000pueue-3.4.1/pueue/tests/helper/factories/mod.rspub mod group; pub mod task; pub use group::*; pub use task::*; 070701000000B6000081A4000000000000000000000001665F1B6900000814000000000000000000000000000000000000003100000000pueue-3.4.1/pueue/tests/helper/factories/task.rsuse anyhow::{Context, Result}; use pueue_lib::network::message::*; use pueue_lib::settings::*; use crate::helper::*; /// Adds a task to the test daemon. pub async fn add_task(shared: &Shared, command: &str) -> Result<Message> { send_message(shared, create_add_message(shared, command)) .await .context("Failed to to add task.") } /// Adds a task to the test daemon and starts it immediately. pub async fn add_and_start_task(shared: &Shared, command: &str) -> Result<Message> { let mut message = create_add_message(shared, command); message.start_immediately = true; send_message(shared, message) .await .context("Failed to to add task.") } /// Adds a task to the test daemon. 
pub async fn add_task_with_priority( shared: &Shared, command: &str, priority: i32, ) -> Result<Message> { let mut message = create_add_message(shared, command); message.priority = Some(priority); send_message(shared, message) .await .context("Failed to add task.") } /// Adds a task to a specific group of the test daemon. pub async fn add_task_to_group(shared: &Shared, command: &str, group: &str) -> Result<Message> { let mut message = create_add_message(shared, command); message.group = group.to_string(); send_message(shared, message) .await .context("Failed to add task to group.") } /// Mini wrapper around add_task, which creates a task that echoes PUEUE's worker environment /// variables to `stdout`. pub async fn add_env_task(shared: &Shared, command: &str) -> Result<Message> { let command = format!("echo WORKER_ID: $PUEUE_WORKER_ID; echo GROUP: $PUEUE_GROUP; {command}"); add_task(shared, &command).await } /// Just like [add_env_task], but the task gets added to a specific group. pub async fn add_env_task_to_group(shared: &Shared, command: &str, group: &str) -> Result<Message> { let command = format!("echo WORKER_ID: $PUEUE_WORKER_ID; echo GROUP: $PUEUE_GROUP; {command}"); add_task_to_group(shared, &command, group).await } 070701000000B7000081A4000000000000000000000001665F1B6900001B00000000000000000000000000000000000000002B00000000pueue-3.4.1/pueue/tests/helper/fixtures.rsuse std::collections::HashMap; use std::env::temp_dir; use std::fs::{canonicalize, File}; use std::io::Write; use std::path::{Path, PathBuf}; use std::process::{Child, Command, Stdio}; use anyhow::{bail, Context, Result}; use assert_cmd::prelude::*; use tempfile::{Builder, TempDir}; use tokio::io::{self, AsyncWriteExt}; use pueue::daemon::run; use pueue_lib::settings::*; use crate::helper::*; /// All info about a booted standalone test daemon. /// This daemon is executed in the same async environment as the rest of the test.
pub struct PueueDaemon { pub settings: Settings, pub tempdir: TempDir, pub pid: i32, } /// A helper function which creates some test config, sets up a temporary directory and spawns /// a daemon into the async tokio runtime. /// This is done in 90% of our tests, hence this convenience helper. pub async fn daemon() -> Result<PueueDaemon> { let (settings, tempdir) = daemon_base_setup()?; daemon_with_settings(settings, tempdir).await } /// A helper function which takes a Pueue config, a temporary directory and spawns /// a daemon into the async tokio runtime. pub async fn daemon_with_settings(settings: Settings, tempdir: TempDir) -> Result<PueueDaemon> { // Uncomment the next line to get some daemon logging. // Ignore any logger initialization errors, as multiple loggers will be initialized. //let _ = simplelog::SimpleLogger::init(log::LevelFilter::Debug, simplelog::Config::default()); let pueue_dir = tempdir.path(); let path = pueue_dir.to_path_buf(); // Start/spin off the daemon and get its PID tokio::spawn(run_and_handle_error(path, true)); let pid = get_pid(&settings.shared.pid_path()).await?; let sleep = 50; let tries = TIMEOUT / sleep; let mut current_try = 0; // Wait up to 1s for the unix socket to pop up. let socket_path = settings.shared.unix_socket_path(); while current_try < tries { sleep_ms(sleep).await; if socket_path.exists() { create_test_groups(&settings.shared).await?; return Ok(PueueDaemon { settings, tempdir, pid, }); } current_try += 1; } bail!("Daemon didn't boot after 1sec") } /// Internal helper function, which wraps the daemon main logic inside tokio and prints any errors.
async fn run_and_handle_error(pueue_dir: PathBuf, test: bool) -> Result<()> { if let Err(err) = run(Some(pueue_dir.join("pueue.yml")), None, test).await { let mut stdout = io::stdout(); stdout .write_all(format!("Encountered error: {err:?}").as_bytes()) .await .expect("Failed to write to stdout."); stdout.flush().await?; return Err(err); } Ok(()) } /// Spawn the daemon by calling the actual pueued binary. /// This function also checks for the pid file and the unix socket to appear. pub async fn standalone_daemon(shared: &Shared) -> Result<Child> { // Inject an environment variable into the daemon. // This is used to test that the spawned subprocesses won't inherit the daemon's environment. let mut envs = HashMap::new(); envs.insert("PUEUED_TEST_ENV_VARIABLE", "Test"); let child = Command::cargo_bin("pueued")? .arg("--config") .arg(shared.pueue_directory().join("pueue.yml").to_str().unwrap()) .arg("-vvv") .envs(envs) .stdout(Stdio::piped()) .stderr(Stdio::piped()) .spawn()?; let sleep = 50; let tries = TIMEOUT / sleep; let mut current_try = 0; // Wait up to 1s for the unix socket to pop up. let socket_path = shared.unix_socket_path(); while current_try < tries { sleep_ms(sleep).await; if socket_path.exists() { return Ok(child); } current_try += 1; } bail!("Daemon didn't boot in stand-alone mode after 1sec") } /// This is the base setup for all daemon test setups. pub fn daemon_base_setup() -> Result<(Settings, TempDir)> { // Init the logger for debug output during tests. // We ignore the result, as the logger can be initialized multiple times due to the // way tests are run in Rust. //use log::LevelFilter; //use simplelog::{Config, SimpleLogger}; //let _ = SimpleLogger::init(LevelFilter::Info, Config::default()); // Create a temporary directory used for testing. // The path is canonicalized to ensure test consistency across platforms.
    let tempdir = Builder::new()
        .prefix("pueue-")
        .tempdir_in(canonicalize(temp_dir())?)?;
    let tempdir_path = tempdir.path();
    std::fs::create_dir(tempdir_path.join("certs")).unwrap();

    let shared = Shared {
        pueue_directory: Some(tempdir_path.to_path_buf()),
        runtime_directory: Some(tempdir_path.to_path_buf()),
        alias_file: Some(tempdir_path.join("pueue_aliases.yml")),
        host: "localhost".to_string(),
        port: "51230".to_string(),
        daemon_cert: Some(tempdir_path.join("certs").join("daemon.cert")),
        daemon_key: Some(tempdir_path.join("certs").join("daemon.key")),
        shared_secret_path: Some(tempdir_path.join("secret")),
        ..Default::default()
    };

    let client = Client {
        max_status_lines: Some(15),
        status_datetime_format: "%Y-%m-%d %H:%M:%S".into(),
        ..Default::default()
    };

    #[allow(deprecated)]
    let daemon = Daemon {
        callback_log_lines: 15,
        ..Default::default()
    };

    let settings = Settings {
        client,
        daemon,
        shared,
        profiles: HashMap::new(),
    };

    settings
        .save(&Some(tempdir_path.join("pueue.yml")))
        .context("Couldn't write pueue config to temporary directory")?;

    Ok((settings, tempdir))
}

/// Create a few test groups that have various parallel task settings.
pub async fn create_test_groups(shared: &Shared) -> Result<()> {
    add_group_with_slots(shared, "test_2", 2).await?;
    add_group_with_slots(shared, "test_3", 3).await?;
    add_group_with_slots(shared, "test_5", 5).await?;

    wait_for_group(shared, "test_3").await?;
    wait_for_group(shared, "test_5").await?;

    Ok(())
}

/// Create an alias file that'll be used by the daemon to do task aliasing.
/// This file should be created in the daemon's temporary runtime directory.
pub fn create_test_alias_file(config_dir: &Path, aliases: HashMap<String, String>) -> Result<()> {
    let content = serde_yaml::to_string(&aliases)
        .context("Failed to serialize alias configuration file.")?;

    // Write the serialized content to our alias file.
    let path = config_dir.join("pueue_aliases.yml");
    let mut alias_file = File::create(path).context("Failed to open alias file")?;
    alias_file
        .write_all(content.as_bytes())
        .context("Failed writing to alias file")?;

    Ok(())
}
070701000000B8000081A4000000000000000000000001665F1B6900000517000000000000000000000000000000000000002600000000pueue-3.4.1/pueue/tests/helper/log.rs
use std::io::Read;

use anyhow::{bail, Context, Result};
use pueue_lib::network::message::*;
use pueue_lib::settings::*;
use snap::read::FrameDecoder;

use super::*;

// Log output is sent in a compressed form from the daemon.
// We have to unpack it first.
pub fn decompress_log(bytes: Vec<u8>) -> Result<String> {
    let mut decoder = FrameDecoder::new(&bytes[..]);
    let mut output = String::new();
    decoder
        .read_to_string(&mut output)
        .context("Failed to decompress remote log output")?;

    Ok(output)
}

/// Convenience function to get the log of a specific task.
/// `lines: None` requests all log lines.
pub async fn get_task_log(shared: &Shared, task_id: usize, lines: Option<usize>) -> Result<String> {
    let message = LogRequestMessage {
        task_ids: vec![task_id],
        send_logs: true,
        lines,
    };
    let response = send_message(shared, message).await?;
    let mut logs = match response {
        Message::LogResponse(logs) => logs,
        _ => bail!("Didn't get log response in get_task_log"),
    };

    let log = logs
        .remove(&task_id)
        .context("Didn't find log of requested task")?;
    let bytes = log
        .output
        .context("Didn't get log output even though requested.")?;

    decompress_log(bytes)
}
070701000000B9000081A4000000000000000000000001665F1B6900000506000000000000000000000000000000000000002600000000pueue-3.4.1/pueue/tests/helper/mod.rs
//! This module contains helper functions which are used by both the client and daemon tests.
use anyhow::Result;
use tokio::io::{self, AsyncWriteExt};

pub use pueue_lib::state::PUEUE_DEFAULT_GROUP;

mod asserts;
mod daemon;
mod factories;
mod fixtures;
mod log;
mod network;
mod state;
mod task;
mod wait;

pub use self::log::*;
pub use asserts::*;
pub use daemon::*;
pub use factories::*;
pub use fixtures::*;
pub use network::*;
pub use state::*;
pub use task::*;
pub use wait::*;

// Global acceptable test timeout in milliseconds.
const TIMEOUT: u64 = 5000;

/// A helper function to sleep for a given amount of milliseconds.
/// Only used to avoid the boilerplate of importing the same stuff all over the place.
pub async fn sleep_ms(ms: u64) {
    tokio::time::sleep(std::time::Duration::from_millis(ms)).await;
}

/// A small helper function, which instantly writes the given string to stdout with a newline.
/// Useful for debugging async tests.
#[allow(dead_code)]
pub async fn async_println(out: &str) -> Result<()> {
    let mut stdout = io::stdout();
    stdout
        .write_all(out.as_bytes())
        .await
        .expect("Failed to write to stdout.");

    stdout
        .write_all("\n".as_bytes())
        .await
        .expect("Failed to write to stdout.");
    stdout.flush().await?;

    Ok(())
}
070701000000BA000081A4000000000000000000000001665F1B6900000859000000000000000000000000000000000000002A00000000pueue-3.4.1/pueue/tests/helper/network.rs
use anyhow::{anyhow, bail, Context, Result};
use pueue_lib::network::message::*;
use pueue_lib::network::protocol::{
    get_client_stream, receive_bytes, receive_message, send_bytes,
    send_message as internal_send_message, GenericStream,
};
use pueue_lib::network::secret::read_shared_secret;
use pueue_lib::settings::Shared;

/// This is a small convenience wrapper that sends a message and immediately returns the response.
pub async fn send_message<T>(shared: &Shared, message: T) -> Result<Message>
where
    T: Into<Message>,
{
    let mut stream = get_authenticated_stream(shared).await?;

    // Send the message to the daemon.
    internal_send_message(message, &mut stream)
        .await
        .map_err(|err| anyhow!("Failed to send message: {err}"))?;

    // Check if we can receive the response from the daemon.
    receive_message(&mut stream)
        .await
        .map_err(|err| anyhow!("Failed to receive message: {err}"))
}

/// Create a new stream that already finished the handshake and secret exchange.
///
/// Pueue creates a new socket stream for each command, which is why we do it the same way.
async fn get_authenticated_stream(shared: &Shared) -> Result<GenericStream> {
    // Connect to the daemon and get the stream used for communication.
    let mut stream = match get_client_stream(shared).await {
        Ok(stream) => stream,
        Err(err) => {
            panic!("Couldn't get client stream: {err}");
        }
    };

    // Next we do a handshake with the daemon:
    // 1. The client sends the secret to the daemon.
    // 2. If successful, the daemon responds with its version.
    let secret =
        read_shared_secret(&shared.shared_secret_path()).context("Couldn't read shared secret.")?;
    send_bytes(&secret, &mut stream)
        .await
        .context("Failed to send secret during handshake with daemon.")?;
    let version_bytes = receive_bytes(&mut stream)
        .await
        .context("Failed to receive version during handshake with daemon.")?;
    if version_bytes.is_empty() {
        bail!("Daemon went away after sending secret. Did you use the correct secret?")
    }

    Ok(stream)
}
070701000000BB000081A4000000000000000000000001665F1B69000001E4000000000000000000000000000000000000002800000000pueue-3.4.1/pueue/tests/helper/state.rs
use anyhow::{bail, Result};
use pueue_lib::network::message::*;
use pueue_lib::settings::*;
use pueue_lib::state::State;

use super::*;

/// Convenience function for getting the current state from the daemon.
pub async fn get_state(shared: &Shared) -> Result<Box<State>> {
    let response = send_message(shared, Message::Status).await?;
    match response {
        Message::StatusResponse(state) => Ok(state),
        _ => bail!("Didn't get status response in get_state"),
    }
}
070701000000BC000081A4000000000000000000000001665F1B6900000795000000000000000000000000000000000000002700000000pueue-3.4.1/pueue/tests/helper/task.rs
use std::collections::HashMap;
use std::env::vars;

use anyhow::{anyhow, Context, Result};
use pueue_lib::network::message::*;
use pueue_lib::settings::*;
use pueue_lib::task::{Task, TaskStatus};

use crate::helper::*;

/// Create a bare AddMessage for testing.
/// This is just here to minimize boilerplate code.
pub fn create_add_message(shared: &Shared, command: &str) -> AddMessage {
    AddMessage {
        command: command.into(),
        path: shared.pueue_directory(),
        envs: HashMap::from_iter(vars()),
        start_immediately: false,
        stashed: false,
        group: PUEUE_DEFAULT_GROUP.to_string(),
        enqueue_at: None,
        dependencies: Vec::new(),
        priority: None,
        label: None,
        print_task_id: false,
    }
}

/// Helper to either continue the daemon or start specific tasks.
pub async fn start_tasks(shared: &Shared, tasks: TaskSelection) -> Result<Message> {
    let message = StartMessage { tasks };

    send_message(shared, message)
        .await
        .context("Failed to send Start tasks message")
}

/// Helper to pause the daemon or specific tasks.
pub async fn pause_tasks(shared: &Shared, tasks: TaskSelection) -> Result<Message> {
    let message = PauseMessage { tasks, wait: false };

    send_message(shared, message)
        .await
        .context("Failed to send Pause message")
}

/// Convenience wrapper around `get_state` if you only need a single task.
pub async fn get_task(shared: &Shared, task_id: usize) -> Result<Task> {
    let state = get_state(shared).await?;
    let task = state
        .tasks
        .get(&task_id)
        .ok_or_else(|| anyhow!("Couldn't find task {task_id}"))?;

    Ok(task.clone())
}

/// Convenience wrapper around `get_task` if you really only need the task's status.
pub async fn get_task_status(shared: &Shared, task_id: usize) -> Result<TaskStatus> {
    let task = get_task(shared, task_id).await?;
    Ok(task.status)
}
070701000000BD000081A4000000000000000000000001665F1B690000127D000000000000000000000000000000000000002700000000pueue-3.4.1/pueue/tests/helper/wait.rs
/// This module contains helper functions to work with Pueue's asynchronous nature during tests.
/// As the daemon often needs some time to process requests by the client internally, we cannot
/// check whether the requested actions have been taken right away.
///
/// Using continuous lookups, we can allow long waiting times, while still having fast tests if
/// things don't take that long.
use anyhow::{bail, Result};
use pueue_lib::settings::Shared;
use pueue_lib::state::GroupStatus;
use pueue_lib::task::Task;

use crate::helper::TIMEOUT;

use super::{get_state, sleep_ms};

/// This is a small helper function, which checks in very short intervals, whether a task fulfills
/// a certain condition. This is necessary to prevent long or potentially flaky timeouts in our tests.
pub async fn wait_for_task_condition<F>(
    shared: &Shared,
    task_id: usize,
    condition: F,
) -> Result<Task>
where
    F: Fn(&Task) -> bool,
{
    let sleep = 50;
    let tries = TIMEOUT / sleep;
    let mut current_try = 0;

    while current_try <= tries {
        let state = get_state(shared).await?;
        match state.tasks.get(&task_id) {
            Some(task) => {
                // Check if the condition is met.
                if condition(task) {
                    return Ok(task.clone());
                }

                // The status didn't change to the target yet. Try again.
                current_try += 1;
                sleep_ms(sleep).await;
                continue;
            }
            None => {
                bail!("Couldn't find task {task_id} while waiting for condition")
            }
        }
    }

    bail!("Task {task_id} didn't fulfill the condition within the timeout.")
}

/// This is a small helper function, which checks in very short intervals, whether a task has been
/// deleted. This is necessary, as task deletion is an asynchronous operation.
pub async fn wait_for_task_absence(shared: &Shared, task_id: usize) -> Result<()> {
    let sleep = 50;
    let tries = TIMEOUT / sleep;
    let mut current_try = 0;

    while current_try <= tries {
        let state = get_state(shared).await?;
        if state.tasks.contains_key(&task_id) {
            current_try += 1;
            sleep_ms(sleep).await;
            continue;
        }

        return Ok(());
    }

    bail!("Task {task_id} hasn't been removed within the timeout.")
}

/// This is a small helper function, which checks in very short intervals, whether a group has been
/// initialized. This is necessary, as group creation became an asynchronous task.
pub async fn wait_for_group(shared: &Shared, group: &str) -> Result<()> {
    let sleep = 50;
    let tries = TIMEOUT / sleep;
    let mut current_try = 0;

    while current_try <= tries {
        let state = get_state(shared).await?;
        if !state.groups.contains_key(group) {
            current_try += 1;
            sleep_ms(sleep).await;
            continue;
        }

        return Ok(());
    }

    bail!("Group {group} didn't show up within the timeout.")
}

/// This is a small helper function, which checks in very short intervals, whether a group has been
/// deleted. This is necessary, as group deletion became an asynchronous task.
pub async fn wait_for_group_absence(shared: &Shared, group: &str) -> Result<()> {
    let sleep = 50;
    let tries = TIMEOUT / sleep;
    let mut current_try = 0;

    while current_try <= tries {
        let state = get_state(shared).await?;
        if state.groups.contains_key(group) {
            current_try += 1;
            sleep_ms(sleep).await;
            continue;
        }

        return Ok(());
    }

    bail!("Group {group} hasn't been removed within the timeout.")
}

/// Waits for a specific status on a specific group.
pub async fn wait_for_group_status(
    shared: &Shared,
    group: &str,
    expected_status: GroupStatus,
) -> Result<()> {
    // Give the daemon some time to change the group status.
    let sleep = 50;
    let tries = TIMEOUT / sleep;
    let mut current_try = 0;

    while current_try < tries {
        let state = get_state(shared).await?;
        match state.groups.get(group) {
            Some(group) => {
                if group.status == expected_status {
                    return Ok(());
                }

                // The status didn't change to the expected status. Try again.
                current_try += 1;
                sleep_ms(sleep).await;
                continue;
            }
            None => {
                bail!("Couldn't find group {group} while waiting for status change")
            }
        }
    }

    bail!("Group {group} didn't change to state {expected_status:?} within the timeout")
}
070701000000BE000081A4000000000000000000000001665F1B690000004D000000000000000000000000000000000000002100000000pueue-3.4.1/pueue/tests/tests.rs
#[cfg(unix)]
mod helper;

#[cfg(unix)]
mod client;
#[cfg(unix)]
mod daemon;
070701000000BF000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001600000000pueue-3.4.1/pueue_lib
070701000000C0000081A4000000000000000000000001665F1B6900000038000000000000000000000000000000000000002100000000pueue-3.4.1/pueue_lib/.gitignore
# Ignore Cargo.lock, since this is a library
Cargo.lock
070701000000C1000081A4000000000000000000000001665F1B69000030DC000000000000000000000000000000000000002300000000pueue-3.4.1/pueue_lib/CHANGELOG.md
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

This project adheres **somewhat** to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
The concept of SemVer is applied to the daemon/client API, but not the library API itself.

## [0.26.0] - 2024-03-22

### Added

- Added `priority` field on `EditResponseMessage`, `EditMessage` and `TaskToRestart`.

## [0.25.1] - 2024-01-04

### Changed

- Bump dependencies. Most notably `ring` from 0.16 to 0.17 to add riscv64 support [#484](https://github.com/Nukesor/pueue/issues/484).
## [0.25.0] - 2023-10-21

### Added

- `Task::is_stashed()`
- `Default` impl for `Setting`
- Support the `PUEUE_CONFIG_PATH` environment variable in addition to the `--config` option. [#464](https://github.com/Nukesor/pueue/issues/464)
- Experimental: Allow configuration of the shell command that executes task commands. [#454](https://github.com/Nukesor/pueue/issues/454)
- Experimental: Allow injection of hard-coded environment variables via config file. [#454](https://github.com/Nukesor/pueue/issues/454)

### Changed

- The `filter_tasks_*` functions of `State` have been refactored to be clearer and more ergonomic to use.

## [0.24.0] - 2023-06-13

### Added

- New setting `daemon.shell_command` to configure how the command shall be executed.
- New setting `daemon.env_vars` to inject hard-coded environment variables into the process.

### Changed

- Refactor `State::filter_*` functions to return a proper type.

## [0.23.0] - 2023-06-13

### Added

- Add `priority` field to `Task`
- Remove `tempdir` dependency

## [0.22.0]

This version was skipped due to an error during release :).

## [0.21.3] - 2023-02-12

### Changed

- Switched the test suite on MacOS to use the new `libproc::processes::pids_by_type()` API to enumerate PIDs in a program group, removing the need to depend on the unmaintained `darwin-libproc` library. [#409](https://github.com/Nukesor/pueue/issues/409)

## [0.21.2] - 2023-02-08

### Fix

- Point to a new patched fork of `darwin-libproc`, as the original has been deleted. This fixes the development builds for pueue on Apple platforms.

## [0.21.0] - 2022-12-12

### Breaking Changes

- Tasks are now started in a process group, with signals (including SIGTERM) optionally sent to the whole group. [#372](https://github.com/Nukesor/pueue/issues/372)
- Renamed `TasksToRestart` to `TaskToRestart`.
- Make `TaskToRestart::path` and `TaskToRestart::command` optional.
- Make `EditMessage::path` and `EditMessage::command` optional.
- The `children` flag has been removed for the `Start`-, `Pause`-, `Kill`- and `ResetMessage`.
- No longer support TLS 1.2 certificates, only accept version 1.3. All generated certificates were 1.3 anyway, so there shouldn't be any breakage, unless users created their own certs.

### Added

- Added `Settings.shared.alias_file`, which can be used to specify the location of the `pueue_aliases.yml`.
- Added functionality to edit a task's label [#354](https://github.com/Nukesor/pueue/issues/354).
  - `TaskToRestart.label`
  - `TaskToRestart.delete_label`
  - `EditMessage.label`
  - `EditMessage.delete_label`
- Added `Task.enqueued_at` and `Task.created_at` metadata fields [#356](https://github.com/Nukesor/pueue/issues/356).

### Changed

- The module structure of the platform-specific networking code has been streamlined.
- The process handling code has been moved from the daemon to `pueue_lib`. See [#336](https://github.com/Nukesor/pueue/issues/336). The reason for this is that the client will need some of these process handling capabilities to spawn shell commands when editing tasks.

## [0.20.0] - 2022-07-21

### Added

- `Message::Close`, which indicates to the client that everything is done and the connection is being closed.

### Removed

- Breaking change: Backward compatibility logic for the old group structure in the main state.
- Breaking change: The `State` no longer owns a copy of the current settings. This became possible due to the group configuration no longer being part of the configuration file.

### Fixed

- The networking logic wasn't able to handle rapid successive messages until now. If two messages were sent in quick succession, the client would receive both messages in one go. The reason for this was simply that the receiving buffer was always 1400 bytes in size, even if the actual payload was much smaller. This wasn't a problem until now, as there was no scenario where two messages were sent immediately one after another.
## [0.19.6] - unreleased

### Added

- Docs on what Pueue's communication protocol looks like [#308](https://github.com/Nukesor/pueue/issues/308).

## [0.19.5] - 2022-03-22

### Added

- Settings option to configure the pid path.

## [0.19.4] - 2022-03-12

### Added

- New `Error::IoPathError`, which provides better error messages for path-related IoErrors.
- `Error::RawIoError` for all generic IoErrors that already have some context.

### Changed

- More info in `Error::IoError` for better context on some IoErrors.

### Removed

- `Error::LogWrite` in favor of the new `IoPathError`.
- `Error::LogRead` in favor of the new `IoPathError`.
- `Error::FileNotFound` in favor of the new `IoPathError`.

## [0.19.3] - 2022-02-18

### Changed

- Use PathBuf in all messages and structs for paths.

## [0.19.2] - 2022-02-07

### Changed

- Make most configuration sections optional.

## [0.19.1] - 2022-01-31

- Update some dependencies for better library stability.

## [0.19.0] - 2022-01-30

### Added

- Add optional `group` field to CleanMessage.
- Add optional `parallel_tasks` field to the Group create message.
- Introduced a `Group` struct, which is used to store information about groups in the `State`.
- Added the `shared.runtime_directory` config variable for any runtime-related files, such as sockets.
- `XDG_CONFIG_HOME` is respected for Pueue's config directory [#243](https://github.com/Nukesor/pueue/issues/243).
- `XDG_DATA_HOME` is used if the `pueue_directory` config isn't explicitly set [#243](https://github.com/Nukesor/pueue/issues/243).
- `XDG_RUNTIME_DIR` is used if the new `runtime_directory` config isn't explicitly set [#243](https://github.com/Nukesor/pueue/issues/243).
- Add `lines` to `LogRequestMessage` [#270](https://github.com/Nukesor/pueue/issues/270).
- Add a new message type `Message::EditRestore`, which is used to notify the daemon of a failed editing process.

### Removed

- Remove the `settings.daemon.default_parallel_tasks` setting, as it doesn't have any effect.
### Changed

- Switch from `async-std` to tokio.
- Update to rustls 0.20.
- **Breaking:** Logs are now no longer split into two files, for stderr and stdout respectively, but rather written to a single file for both.
- **Breaking:** The unix socket is now located in the `runtime_directory` by default [#243](https://github.com/Nukesor/pueue/issues/243).
- **Breaking:** `Shared::pueue_directory` changed from `PathBuf` to `Option<PathBuf>`.
- **Breaking:** `Settings::read_with_defaults` no longer takes a boolean as first parameter. Instead, it returns a tuple of `(Settings, bool)`, with the boolean indicating whether a config file has been found.
- **Breaking:** The type of `State.group` changed from `BTreeMap<String, GroupStatus>` to a `BTreeMap<String, Group>` with the new `Group` struct.
- **Breaking:** The `GroupResponseMessage` now also uses the new `Group` struct.

## [0.18.1] - 2021-09-15

### Added

- Add the `PUEUE_DEFAULT_GROUP` constant, which provides a consistent way of working with the `"default"` group.

### Fix

- Always insert the "default" group into `settings.daemon.group` on read.

## [0.18.0] - 2021-07-27

### Change

- Make `GroupMessage` an enum to prevent impossible states.
- Introduce the `TaskSelection` enum to prevent impossible states in the Kill-/Start-/PauseMessage structs.

## [0.17.2] - 2021-07-09

### Fix

- Fix the default for `client.restart_in_place` to the previous default.

## [0.17.1] - 2021-07-08

### Fix

- Add missing config defaults for `client.status_time_format` and `client.status_datetime_format`.

## [0.17.0] - 2021-07-08

### Added

- Add config option to restart tasks with `in_place` by default.

### Changed

- Remove defaults for backward compatibility. We broke that in the last version anyway, so we can use this opportunity to clean up a little.

## [0.16.0] - 2021-07-05

This release aims to remove non-generic logic from `State` that should be moved to the `Pueue` project.

### Added

- Add config option for datetime/time formatting in pueue status.
### Changed

- Make `State::config_path` public.

### Removed

- `State::handle_task_failure`
- `State::is_task_removable`
- `State::task_ids_in_group_with_stati` in favor of `State::filter_tasks_of_group`
- `State::save`, `State::backup`, `State::restore` and all related functions.
- State-related errors from the custom `Error` type.

## [0.15.0] - 2021-07-03

Several non-backward-compatible breaking API changes to prevent impossible states.

### Changed

- Remove `tasks_of_group_in_statuses` and `tasks_in_statuses` in favor of the generic filter functions `filter_tasks_of_group` and `filter_tasks`.
- Move `TaskResult` into `TaskStatus::Done(TaskResult)` to prevent impossible states.
- Move `enqueue_at` into `TaskStatus::Stashed{enqueue_at: Option<DateTime<Local>>}` for a better contextual data structure.

## [0.14.1] - 2021-06-21

### Added

- Messages now implement PartialEq for better testability.

## [0.14.0] - 2021-06-15

### Changed

- Add `ShutdownType` to `DaemonShutdownMessage`.

## [0.13.1] - 2021-06-04

- Add `State::tasks_of_group_in_statuses`

## [0.13.0] - 2021-05-28

### Changed

- Use `serde_cbor` instead of `bincode` to allow protocol backward compatibility between versions.
- Use the next id that's available. This results in ids being reused on `pueue clean` or `pueue remove` of the last tasks in a queue.
- Paths are now accessed via functions by [dadav](https://github.com/dadav) for [Pueue #191](https://github.com/Nukesor/pueue/issues/191).
- Remove the `full` flag from TaskLogRequestMessage.
- Automatically create the `$pueue_directory/certs` directory on `create_certificates` if it doesn't exist yet.
- Remove the `require_config` flag from `Settings::read`, since it's implicitly `true`.
- Rename `Settings::new` to `Settings::read_with_defaults`.
- Return errors via `Result` in `State` functions with io.
- Don't write the State on every change. Users have to call `state::save()` manually from now on.
### Added

- `~` is now respected in configuration paths by [dadav](https://github.com/dadav) for [Pueue #191](https://github.com/Nukesor/pueue/issues/191).
- New function `read_last_log_file_lines` for [#196](https://github.com/Nukesor/pueue/issues/196).
- Add `callback_log_lines` setting for the daemon, specifying the amount of lines returned to the callback. [#196](https://github.com/Nukesor/pueue/issues/196)
- Support for other `apple` platforms by [althiometer](https://github.com/althiometer).
- Added backward compatibility tests for the v0.12.2 state.
- Added SignalMessage and a Signal enum for a list of all supported Unix signals.

### Fixed

- Only try to remove log files if they actually exist.

## [0.12.2] - 2021-03-30

### Changed

- Clippy adjustment: Transform `&PathBuf` to `&Path` in function parameter types. This should be backward-compatible, since `&PathBuf` dereferences to `&Path`.

## [0.12.1] - 2021-02-09

### Added

- `dark_mode` client configuration flag by [Mephistophiles](https://github.com/Mephistophiles).

## [0.12.0] - 2021-02-04

Moved into a stand-alone repository for better maintainability.

### Changed

- Change the packet size from 1.5 Kbyte to 1.4 Kbyte to prevent packet splitting on smaller MTUs.
- Add LOTS of documentation.
- Hide modules that aren't intended for public use.
- Rename `GenericListener` to `Listener` and vice versa.

### Removed

- Remove the unused `group_or_default` function.

## [0.11.2] - 2021-02-01

### Changed

- Use `127.0.0.1` instead of `localhost` as default host. This prevents any unforeseen consequences if somebody deletes the default `localhost` entry from their `/etc/hosts` file.

## [0.11.0] - 2020-01-18

### Fixed

- Moved into a stand-alone repository for better maintainability.
- Don't parse the config path if it's a directory.
- Error with "Couldn't find config at path {:?}" when passing a directory via `--config`.
- Fixed missing newline between tasks in `log` output.
070701000000C2000081A4000000000000000000000001665F1B690000076D000000000000000000000000000000000000002100000000pueue-3.4.1/pueue_lib/Cargo.toml
[package]
name = "pueue-lib"
version = "0.26.1"
description = "The shared library to work with the Pueue client and daemon."
keywords = ["pueue"]
readme = "README.md"
authors = { workspace = true }
homepage = { workspace = true }
repository = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
rust-version = { workspace = true }

[badges]
maintenance = { status = "actively-developed" }

[dependencies]
anyhow = "1.0"
async-trait = "0.1"
byteorder = "1.5"
chrono = { workspace = true }
command-group = { workspace = true }
dirs = "5.0"
handlebars = { workspace = true }
log = { workspace = true }
rand = "0.8"
rcgen = "0.13"
rev_buf_reader = "0.3"
rustls = { version = "0.23", features = [
    "ring",
    "logging",
    "std",
    "tls12",
], default-features = false }
rustls-pemfile = "2"
serde = { workspace = true }
serde_cbor = "0.11"
serde_derive = { workspace = true }
serde_json = { workspace = true }
serde_yaml = "0.9"
shellexpand = "3.1"
snap = { workspace = true }
strum = { workspace = true }
strum_macros = { workspace = true }
thiserror = "1.0"
tokio = { workspace = true, features = ["macros", "net", "io-util"] }
tokio-rustls = { version = "0.26", default-features = false }

[dev-dependencies]
anyhow = { workspace = true }
better-panic = { workspace = true }
portpicker = "0.1"
pretty_assertions = { workspace = true }
tempfile = "3"
tokio = { workspace = true }

# --- Platform specific dependencies ---

# Windows
[target.'cfg(windows)'.dependencies]
winapi = { version = "0.3", features = [
    "tlhelp32",
    "errhandlingapi",
    "processthreadsapi",
    "minwindef",
    "impl-default",
] }

# Unix
[target.'cfg(unix)'.dependencies]
whoami = "1"

[target.'cfg(any(target_os = "linux", target_os = "macos"))'.dependencies]
libproc = "0.14.6"

# Linux only
[target.'cfg(target_os = "linux")'.dependencies]
procfs = { version = "0.16",
default-features = false }
070701000000C3000081A4000000000000000000000001665F1B690000042F000000000000000000000000000000000000001E00000000pueue-3.4.1/pueue_lib/LICENSE
MIT License

Copyright (c) 2018-2021 Arne Beer

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
070701000000C4000081A4000000000000000000000001665F1B69000003FA000000000000000000000000000000000000002000000000pueue-3.4.1/pueue_lib/README.md
# Pueue-lib

[![Test Build](https://github.com/Nukesor/pueue/actions/workflows/test.yml/badge.svg)](https://github.com/Nukesor/pueue/actions/workflows/test.yml)
[![Crates.io](https://img.shields.io/crates/v/pueue-lib)](https://crates.io/crates/pueue-lib)
[![docs](https://docs.rs/pueue-lib/badge.svg)](https://docs.rs/pueue-lib/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

This is the shared library used by the [Pueue](https://github.com/nukesor/pueue/) client and daemon.
It contains common components such as:

- Everything about the [Task](task::Task), [TaskResult](task::TaskResult) etc.
- The [State](state::State), which represents the current state of the daemon.
- Network code. Everything you need to communicate with the daemon.
- Other helper code and structs.

Pueue-lib is a stand-alone crate, so it can be used by third-party applications to either manipulate or monitor the daemon, or to write their own front-end for the daemon.
070701000000C5000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000001A00000000pueue-3.4.1/pueue_lib/src
070701000000C6000081A4000000000000000000000001665F1B690000072A000000000000000000000000000000000000002600000000pueue-3.4.1/pueue_lib/src/aliasing.rs
use std::collections::HashMap;
use std::fs::File;
use std::io::prelude::*;

use log::info;

use crate::error::Error;
use crate::settings::Settings;

/// Return the contents of the alias file, if it exists and can be parsed. \
/// The file has to be located in `pueue_directory` and named `pueue_aliases.yml`.
pub fn get_aliases(settings: &Settings) -> Result<HashMap<String, String>, Error> {
    // Go through all config directories and check for an alias file.
    let path = settings.shared.alias_file();

    // Return early if we cannot find the file.
    if !path.exists() {
        info!("Didn't find pueue alias file at {path:?}.");
        return Ok(HashMap::new());
    };

    // Read the file content.
    let mut alias_file = File::open(&path)
        .map_err(|err| Error::IoPathError(path.clone(), "opening alias file", err))?;
    let mut content = String::new();
    alias_file
        .read_to_string(&mut content)
        .map_err(|err| Error::IoPathError(path.clone(), "reading alias file", err))?;

    serde_yaml::from_str(&content).map_err(|err| {
        Error::ConfigDeserialization(format!("Failed to read alias configuration file:\n{err}"))
    })
}

/// Check if there exists an alias for a given command.
/// Only the first word will be replaced.
pub fn insert_alias(settings: &Settings, command: String) -> String { // Get the first word of the command. let first = match command.split_whitespace().next() { Some(first) => first, None => return command, }; let aliases = match get_aliases(settings) { Err(err) => { info!("Couldn't read aliases file: {err}"); return command; } Ok(aliases) => aliases, }; if let Some(alias) = aliases.get(first) { return command.replacen(first, alias, 1); } command } 070701000000C7000081A4000000000000000000000001665F1B6900000568000000000000000000000000000000000000002300000000pueue-3.4.1/pueue_lib/src/error.rsuse std::path::PathBuf; #[derive(thiserror::Error, Debug)] pub enum Error { #[error("Error while building path: {}", .0)] InvalidPath(String), /// Any errors regarding the certificate setup. #[error("Invalid or malformed certificate: {}", .0)] CertificateFailure(String), #[error("{}", .0)] Connection(String), #[error("Got an empty payload")] EmptyPayload, #[error("Couldn't deserialize message:\n{}", .0)] MessageDeserialization(String), #[error("Couldn't serialize message:\n{}", .0)] MessageSerialization(String), #[error("Error while reading configuration:\n{}", .0)] ConfigDeserialization(String), #[error("Some error occurred. {}", .0)] Generic(String), #[error("I/O error while {}:\n{}", .0, .1)] IoError(String, std::io::Error), #[error("Unexpected I/O error:\n{}", .0)] RawIoError(#[from] std::io::Error), #[error("I/O error at path {:?} while {}:\n{}", .0, .1, .2)] IoPathError(PathBuf, &'static str, std::io::Error), /// Thrown if one tries to create the unix socket, but it already exists. /// Another daemon instance might be already running. #[error( "There seems to be an active pueue daemon.\n\ If you're sure there isn't, please remove the \ socket inside the pueue_directory manually." 
)] UnixSocketExists, } 070701000000C8000081A4000000000000000000000001665F1B6900000312000000000000000000000000000000000000002100000000pueue-3.4.1/pueue_lib/src/lib.rs#![doc = include_str!("../README.md")] /// Shared module for internal logic! /// Contains helpers for command aliasing. pub mod aliasing; /// Pueue lib's own Error implementation. pub mod error; /// Helper classes to read and write log files of Pueue's tasks. pub mod log; pub mod network; /// Shared module for internal logic! /// Contains helpers to spawn shell commands and examine and interact with processes. pub mod process_helper; /// This module contains all platform unspecific default values and helper functions for working /// with our setting representation. mod setting_defaults; /// Pueue's configuration representation. pub mod settings; /// The main struct used to represent the daemon's current state. pub mod state; /// Everything regarding Pueue's tasks. pub mod task; 070701000000C9000081A4000000000000000000000001665F1B6900001FA3000000000000000000000000000000000000002100000000pueue-3.4.1/pueue_lib/src/log.rsuse std::fs::{read_dir, remove_file, File}; use std::io::{self, prelude::*, Read, SeekFrom}; use std::path::{Path, PathBuf}; use log::error; use rev_buf_reader::RevBufReader; use snap::write::FrameEncoder; use crate::error::Error; /// Get the path to the log file of a task. pub fn get_log_path(task_id: usize, pueue_dir: &Path) -> PathBuf { let task_log_dir = pueue_dir.join("task_logs"); task_log_dir.join(format!("{task_id}.log")) } /// Create and return the two file handles for the `(stdout, stderr)` log file of a task. /// These are two handles to the same file.
pub fn create_log_file_handles(task_id: usize, pueue_dir: &Path) -> Result<(File, File), Error> { let log_path = get_log_path(task_id, pueue_dir); let stdout_handle = File::create(&log_path) .map_err(|err| Error::IoPathError(log_path, "getting stdout handle", err))?; let stderr_handle = stdout_handle .try_clone() .map_err(|err| Error::IoError("cloning stderr handle".to_string(), err))?; Ok((stdout_handle, stderr_handle)) } /// Return the file handle for the log file of a task. pub fn get_log_file_handle(task_id: usize, pueue_dir: &Path) -> Result<File, Error> { let path = get_log_path(task_id, pueue_dir); let handle = File::open(&path) .map_err(|err| Error::IoPathError(path, "getting log file handle", err))?; Ok(handle) } /// Return a writable file handle for the log file of a task. pub fn get_writable_log_file_handle(task_id: usize, pueue_dir: &Path) -> Result<File, Error> { let path = get_log_path(task_id, pueue_dir); let handle = File::options() .write(true) .open(&path) .map_err(|err| Error::IoPathError(path, "getting log file handle", err))?; Ok(handle) } /// Remove the log files of a task. pub fn clean_log_handles(task_id: usize, pueue_dir: &Path) { let path = get_log_path(task_id, pueue_dir); if path.exists() { if let Err(err) = remove_file(path) { error!("Failed to remove stdout file for task {task_id} with error {err:?}"); }; } } /// Return the output of a task. \ /// Task output is compressed using [snap] to save some memory and bandwidth. /// Return type is `(Vec<u8>, bool)` /// - `Vec<u8>` the compressed task output. /// - `bool` Whether the full task output has been read. /// `false` indicates that the log output has been truncated. pub fn read_and_compress_log_file( task_id: usize, pueue_dir: &Path, lines: Option<usize>, ) -> Result<(Vec<u8>, bool), Error> { let mut file = get_log_file_handle(task_id, pueue_dir)?; let mut content = Vec::new(); // Indicates whether the full log output is shown or just the last part of it.
let mut output_complete = true; // Move the cursor to the last few lines of the file. if let Some(lines) = lines { output_complete = seek_to_last_lines(&mut file, lines)?; } // Compress the full log input and pipe it into the snappy compressor { let mut compressor = FrameEncoder::new(&mut content); io::copy(&mut file, &mut compressor) .map_err(|err| Error::IoError("compressing log output".to_string(), err))?; } Ok((content, output_complete)) } /// Return the last lines of a task's output. \ /// This output is uncompressed and may take a lot of memory, which is why we only read /// the last few lines. pub fn read_last_log_file_lines( task_id: usize, pueue_dir: &Path, lines: usize, ) -> Result<String, Error> { let mut file = get_log_file_handle(task_id, pueue_dir)?; // Get the last few lines of the file Ok(read_last_lines(&mut file, lines)) } /// Remove all files in the log directory. pub fn reset_task_log_directory(pueue_dir: &Path) -> Result<(), Error> { let task_log_dir = pueue_dir.join("task_logs"); let files = read_dir(&task_log_dir) .map_err(|err| Error::IoPathError(task_log_dir, "reading task log files", err))?; for file in files.flatten() { if let Err(err) = remove_file(file.path()) { error!("Failed to delete log file: {err}"); } } Ok(()) } /// Read the last `amount` lines of a file to a string. /// /// Only use this for logic that doesn't stream from daemon to client! /// For streaming logic use the `seek_to_last_lines` and compress any data. // We allow this clippy check. // The iterators cannot be chained, as RevBufReader.lines doesn't implement the necessary traits.
#[allow(clippy::needless_collect)] pub fn read_last_lines(file: &mut File, amount: usize) -> String { let reader = RevBufReader::new(file); let lines: Vec<String> = reader .lines() .take(amount) .map(|line| line.unwrap_or_else(|_| "Pueue: Failed to read line.".to_string())) .collect(); lines.into_iter().rev().collect::<Vec<String>>().join("\n") } /// Seek the cursor of the current file to the beginning of the line that's located `amount` newlines /// from the back of the file. /// /// The `bool` return value indicates whether we sought to the start of the file (there were fewer /// lines than the limit). `true` means that the handle is now at the very start of the file. pub fn seek_to_last_lines(file: &mut File, amount: usize) -> Result<bool, Error> { let mut reader = RevBufReader::new(file); // The position from which the RevBufReader starts reading. // The file size might change while we're reading the file. Hence we have to save it now. let start_position = reader .get_mut() .stream_position() .map_err(|err| Error::IoError("seeking to start of file".to_string(), err))?; let start_position: i64 = start_position.try_into().map_err(|_| { Error::Generic("Failed to convert start cursor position to i64".to_string()) })?; let mut total_read_bytes: i64 = 0; let mut found_lines = 0; // Read in 4KB chunks until there's either nothing left or we found `amount` newline characters. 'outer: loop { let mut buffer = vec![0; 4096]; let read_bytes = reader .read(&mut buffer) .map_err(|err| Error::IoError("reading next log chunk".to_string(), err))?; // Return if there's nothing left to read. // We hit the start of the file and read fewer lines than specified. if read_bytes == 0 { return Ok(true); } // Check each byte for a newline. // Even though the RevBufReader reads from behind, the bytes in the buffer are still in forward // order.
// Since we want to scan from the back, we have to reverse the buffer. for byte in buffer[0..read_bytes].iter().rev() { total_read_bytes += 1; if *byte != b'\n' { continue; } // We found a newline. found_lines += 1; // We haven't visited the requested amount of lines yet. if found_lines != amount + 1 { continue; } // The RevBufReader most likely already went past this point. // That's why we have to set the cursor to the position of the last newline. // Calculate the distance from the start to the desired location. let distance_to_file_start = start_position - total_read_bytes + 1; // Cast it to u64. If it somehow became negative, just seek to the start of the // file. let distance_to_file_start: u64 = distance_to_file_start.try_into().unwrap_or(0); // We can safely unwrap `start_position`, as we previously cast it from a u64. if distance_to_file_start < start_position.try_into().unwrap() { // Seek to the position. let file = reader.get_mut(); file.seek(SeekFrom::Start(distance_to_file_start)) .map_err(|err| { Error::IoError("seeking to correct position".to_string(), err) })?; } break 'outer; } } Ok(false) } 070701000000CA000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002200000000pueue-3.4.1/pueue_lib/src/network070701000000CB000081A4000000000000000000000001665F1B69000009CA000000000000000000000000000000000000003100000000pueue-3.4.1/pueue_lib/src/network/certificate.rsuse std::fs::File; use std::io::Write; use std::path::Path; use log::info; use rcgen::{generate_simple_self_signed, CertifiedKey}; use crate::error::Error; use crate::settings::Shared; /// This creates the default certificates at the default `pueue_dir/certs` location.
pub fn create_certificates(shared_settings: &Shared) -> Result<(), Error> { let daemon_cert_path = shared_settings.daemon_cert(); let daemon_key_path = shared_settings.daemon_key(); if daemon_key_path.exists() || daemon_cert_path.exists() { if !(daemon_key_path.exists() && daemon_cert_path.exists()) { return Err(Error::CertificateFailure( "Not all default certificates exist, some are missing. \ Please fix your cert/key paths.\n \ You can also remove the `$pueue_directory/certs` directory \ and restart the daemon to create new certificates/keys." .into(), )); } info!("All default keys do exist."); return Ok(()); } let subject_alt_names = vec!["pueue.local".to_string(), "localhost".to_string()]; let CertifiedKey { cert, key_pair } = generate_simple_self_signed(subject_alt_names).map_err(|_| { Error::CertificateFailure("Failed to generate self-signed daemon certificate.".into()) })?; // The certificate is now valid for "pueue.local" and "localhost". let ca_cert = cert.pem(); write_file(ca_cert, "daemon cert", &daemon_cert_path)?; let ca_key = key_pair.serialize_pem(); write_file(ca_key, "daemon key", &daemon_key_path)?; Ok(()) } fn write_file(blob: String, name: &str, path: &Path) -> Result<(), Error> { info!("Generate {name}."); let mut file = File::create(path) .map_err(|err| Error::IoPathError(path.to_path_buf(), "creating certificate", err))?; file.write_all(&blob.into_bytes()) .map_err(|err| Error::IoPathError(path.to_path_buf(), "writing certificate", err))?; #[cfg(not(target_os = "windows"))] { use std::os::unix::fs::PermissionsExt; let mut permissions = file .metadata() .map_err(|_| Error::CertificateFailure("Failed to read certificate metadata.".into()))?
.permissions(); permissions.set_mode(0o640); std::fs::set_permissions(path, permissions) .map_err(|_| Error::CertificateFailure("Failed to set certificate permissions.".into()))?; } Ok(()) } 070701000000CC000081A4000000000000000000000001665F1B6900002BE5000000000000000000000000000000000000002D00000000pueue-3.4.1/pueue_lib/src/network/message.rsuse std::collections::{BTreeMap, HashMap}; use std::path::PathBuf; use chrono::prelude::*; use serde_derive::{Deserialize, Serialize}; use strum_macros::{Display, EnumString}; use crate::state::{Group, State}; use crate::task::Task; /// Macro to simplify creating [From] implementations for each variant-contained /// struct; e.g. `impl_into_message!(AddMessage, Message::Add)` to make it possible /// to use `AddMessage { }.into()` and get a `Message::Add()` value. macro_rules! impl_into_message { ($inner:ident, $variant:expr) => { impl From<$inner> for Message { fn from(message: $inner) -> Self { $variant(message) } } }; } /// This is the main message enum. \ /// Everything that's sent between the daemon and a client can be represented by this enum. #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub enum Message { Add(AddMessage), Remove(Vec<usize>), Switch(SwitchMessage), Stash(Vec<usize>), Enqueue(EnqueueMessage), Start(StartMessage), Restart(RestartMessage), Pause(PauseMessage), Kill(KillMessage), /// Used to send some input to a process's stdin Send(SendMessage), /// The first part of the three-step protocol to edit a task. /// This one requests an edit from the daemon. EditRequest(usize), /// This is sent by the client if something went wrong during the editing process. /// The daemon will go ahead and restore the task's old state. EditRestore(usize), /// The daemon locked the task and responds with the task's details. EditResponse(EditResponseMessage), /// The client sends the edited details to the daemon.
Edit(EditMessage), Group(GroupMessage), GroupResponse(GroupResponseMessage), Status, StatusResponse(Box<State>), Log(LogRequestMessage), LogResponse(BTreeMap<usize, TaskLogMessage>), /// The client requests a continuous stream of a task's log. StreamRequest(StreamRequestMessage), /// The next chunk of output that's sent to the client. Stream(String), Reset(ResetMessage), Clean(CleanMessage), DaemonShutdown(Shutdown), Success(String), Failure(String), /// Simply notify the client that the connection is now closed. /// This is used to, for instance, close a `follow` stream if the task finished. Close, Parallel(ParallelMessage), } /// This enum is used to express a selection of tasks. /// As commands can be executed on various sets of tasks, we need some kind of data structure to /// explicitly and unambiguously specify the selection. #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub enum TaskSelection { TaskIds(Vec<usize>), Group(String), All, } #[derive(PartialEq, Eq, Clone, Deserialize, Serialize)] pub struct AddMessage { pub command: String, pub path: PathBuf, pub envs: HashMap<String, String>, pub start_immediately: bool, pub stashed: bool, pub group: String, pub enqueue_at: Option<DateTime<Local>>, pub dependencies: Vec<usize>, pub priority: Option<i32>, pub label: Option<String>, pub print_task_id: bool, } /// We use a custom `Debug` implementation for [AddMessage], as the `envs` field just has /// too much info in it and makes the log output much too verbose. /// /// Furthermore, there might be secrets in the environment, resulting in a possible leak /// if users copy-paste their log output for debugging.
impl std::fmt::Debug for AddMessage { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("Task") .field("command", &self.command) .field("path", &self.path) .field("envs", &"hidden") .field("start_immediately", &self.start_immediately) .field("stashed", &self.stashed) .field("group", &self.group) .field("enqueue_at", &self.enqueue_at) .field("dependencies", &self.dependencies) .field("label", &self.label) .field("print_task_id", &self.print_task_id) .finish() } } impl_into_message!(AddMessage, Message::Add); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct SwitchMessage { pub task_id_1: usize, pub task_id_2: usize, } impl_into_message!(SwitchMessage, Message::Switch); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct EnqueueMessage { pub task_ids: Vec<usize>, pub enqueue_at: Option<DateTime<Local>>, } impl_into_message!(EnqueueMessage, Message::Enqueue); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct StartMessage { pub tasks: TaskSelection, } impl_into_message!(StartMessage, Message::Start); /// The messages used to restart tasks. /// It's possible to update the command and paths when restarting tasks. #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct RestartMessage { pub tasks: Vec<TaskToRestart>, pub start_immediately: bool, pub stashed: bool, } impl_into_message!(RestartMessage, Message::Restart); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct TaskToRestart { pub task_id: usize, /// Restart the task with an updated command. pub command: Option<String>, /// Restart the task with an updated path. pub path: Option<PathBuf>, /// Restart the task with an updated label. pub label: Option<String>, /// Cbor cannot represent `Option<Option<T>>` yet, which is why we have to utilize a /// boolean to indicate that the label should be released, rather than an `Some(None)`. 
pub delete_label: bool, /// Restart the task with an updated priority. pub priority: Option<i32>, } #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct PauseMessage { pub tasks: TaskSelection, pub wait: bool, } impl_into_message!(PauseMessage, Message::Pause); /// This is a small custom Enum for all currently supported unix signals. /// Supporting all unix signals would be a mess, since there is a LOT of them. /// /// This is also needed for usage in clap, since nix's Signal doesn't implement [Display] and /// [std::str::FromStr]. #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize, Display, EnumString)] #[strum(ascii_case_insensitive)] pub enum Signal { #[strum(serialize = "sigint", serialize = "int", serialize = "2")] SigInt, #[strum(serialize = "sigkill", serialize = "kill", serialize = "9")] SigKill, #[strum(serialize = "sigterm", serialize = "term", serialize = "15")] SigTerm, #[strum(serialize = "sigcont", serialize = "cont", serialize = "18")] SigCont, #[strum(serialize = "sigstop", serialize = "stop", serialize = "19")] SigStop, } #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct KillMessage { pub tasks: TaskSelection, pub signal: Option<Signal>, } impl_into_message!(KillMessage, Message::Kill); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct SendMessage { pub task_id: usize, pub input: String, } impl_into_message!(SendMessage, Message::Send); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct EditResponseMessage { pub task_id: usize, pub command: String, pub path: PathBuf, pub label: Option<String>, pub priority: i32, } impl_into_message!(EditResponseMessage, Message::EditResponse); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct EditMessage { pub task_id: usize, pub command: Option<String>, pub path: Option<PathBuf>, pub label: Option<String>, /// Cbor cannot represent `Option<Option<T>>` yet, which is why we have to utilize a 
/// boolean to indicate that the label should be released, rather than an `Some(None)`. pub delete_label: bool, pub priority: Option<i32>, } impl_into_message!(EditMessage, Message::Edit); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub enum GroupMessage { Add { name: String, parallel_tasks: Option<usize>, }, Remove(String), List, } impl_into_message!(GroupMessage, Message::Group); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct GroupResponseMessage { pub groups: BTreeMap<String, Group>, } impl_into_message!(GroupResponseMessage, Message::GroupResponse); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct ResetMessage {} impl_into_message!(ResetMessage, Message::Reset); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct CleanMessage { #[serde(default = "bool::default")] pub successful_only: bool, #[serde(default = "Option::default")] pub group: Option<String>, } impl_into_message!(CleanMessage, Message::Clean); /// Determines which type of shutdown we're dealing with. #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub enum Shutdown { /// Emergency is most likely a system unix signal or a CTRL+C in a terminal. Emergency, /// Graceful is user initiated and expected. Graceful, } impl_into_message!(Shutdown, Message::DaemonShutdown); #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct StreamRequestMessage { pub task_id: Option<usize>, pub lines: Option<usize>, } impl_into_message!(StreamRequestMessage, Message::StreamRequest); /// Request logs for specific tasks. /// /// `task_ids` specifies the requested tasks. If none are given, all tasks are selected. /// `send_logs` Determines whether logs should be sent at all. /// `lines` Determines whether only a few lines of log should be returned. 
#[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct LogRequestMessage { pub task_ids: Vec<usize>, pub send_logs: bool, pub lines: Option<usize>, } impl_into_message!(LogRequestMessage, Message::Log); /// Helper struct for sending tasks and their log output to the client. #[derive(PartialEq, Eq, Clone, Deserialize, Serialize)] pub struct TaskLogMessage { pub task: Task, #[serde(default = "bool::default")] /// Indicates whether the log output has been truncated or not. pub output_complete: bool, pub output: Option<Vec<u8>>, } /// We use a custom `Debug` implementation for [TaskLogMessage], as the `output` field /// has too much info in it and renders log output unreadable. impl std::fmt::Debug for TaskLogMessage { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { f.debug_struct("TaskLogMessage") .field("task", &self.task) .field("output_complete", &self.output_complete) .field("output", &"hidden") .finish() } } #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct ParallelMessage { pub parallel_tasks: usize, pub group: String, } impl_into_message!(ParallelMessage, Message::Parallel); pub fn create_success_message<T: ToString>(text: T) -> Message { Message::Success(text.to_string()) } pub fn create_failure_message<T: ToString>(text: T) -> Message { Message::Failure(text.to_string()) } 070701000000CD000081A4000000000000000000000001665F1B6900000A2C000000000000000000000000000000000000002900000000pueue-3.4.1/pueue_lib/src/network/mod.rs//! This module contains everything that's necessary to communicate with the pueue daemon or one of //! its clients. //! //! ## Sockets //! //! Pueue's communication can happen either via TLS encrypted TCP sockets or via UNIX sockets. //! The mode of communication is usually specified via the configuration file and the daemon only //! listens on a single type of socket. //! //! - Unix sockets are unencrypted //! - TCP sockets are encrypted via TLS //! //! ## Communication //! //! 
Sending and receiving raw bytes is handled via the [send_bytes](crate::network::protocol::send_bytes) //! and [receive_bytes](crate::network::protocol::receive_bytes) functions. //! Details on how they work can be found on the respective function docs. //! //! There are also the convenience functions [send_message](crate::network::protocol::send_message) //! and [receive_message](crate::network::protocol::receive_message), which automatically handle //! serialization and deserialization for you. //! //! The payloads themselves are defined via the `Message` enum that can be found in the //! crate::network::message module. //! //! The serialization/deserialization format that's used by `pueue_lib` is `cbor`. //! //! ## Protocol //! //! Before the real data exchange starts, a simple handshake + authorization is done //! by the client and daemon. //! An example on how to do this can be found in pueue's `crate::client::Client::new()` function. //! //! The following steps are written from the client's perspective: //! //! - Connect to socket. //! - Send the secret's bytes. //! - Receive the daemon's version (utf-8 encoded), which is sent if the secret was correct. //! - Send the actual message. //! - Receive the daemon's response. //! //! In the case of most messages, the daemon is ready to receive the next message from //! the client once it has sent its response. //! //! However, some message types are special. The log `follow`, for instance, basically //! works like a stream. //! I.e. the daemon continuously sends new messages with the new log output until //! the socket is closed by the client. /// Used by the daemon to initialize the TLS certificates. pub mod certificate; /// This contains the main [Message](message::Message) enum and all its structs used to /// communicate with the daemon or client. pub mod message; /// This is probably the most interesting part for you. pub mod protocol; /// Functions to write and read the secret to/from a file.
pub mod secret; /// Low-level socket handling code. pub mod socket; /// Helper functions for reading and handling TLS files. mod tls; 070701000000CE000081A4000000000000000000000001665F1B69000020E1000000000000000000000000000000000000002E00000000pueue-3.4.1/pueue_lib/src/network/protocol.rsuse std::io::Cursor; use byteorder::{BigEndian, ReadBytesExt, WriteBytesExt}; use log::debug; use serde_cbor::de::from_slice; use serde_cbor::ser::to_vec; use tokio::io::{AsyncReadExt, AsyncWriteExt}; use crate::error::Error; use crate::network::message::*; // Reexport all stream/socket related stuff for convenience purposes pub use super::socket::*; // We choose a packet size of 1280 to be on the safe side regarding IPv6 MTU. pub const PACKET_SIZE: usize = 1280; /// Convenience wrapper around send_bytes. /// Serialize a message and feed the bytes into send_bytes. pub async fn send_message<T>(message: T, stream: &mut GenericStream) -> Result<(), Error> where T: Into<Message>, { let message: Message = message.into(); debug!("Sending message: {message:#?}",); // Prepare command for transfer and determine message byte size let payload = to_vec(&message).map_err(|err| Error::MessageSerialization(err.to_string()))?; send_bytes(&payload, stream).await } /// Send a Vec of bytes. /// This is part of the basic protocol beneath all communication. \ /// /// 1. Sends a u64 as 8 bytes in BigEndian mode, which tells the receiver the length of the payload. /// 2. Send the payload in chunks of [PACKET_SIZE] bytes. pub async fn send_bytes(payload: &[u8], stream: &mut GenericStream) -> Result<(), Error> { let message_size = payload.len() as u64; let mut header = Vec::new(); WriteBytesExt::write_u64::<BigEndian>(&mut header, message_size).unwrap(); // Send the request size header first. // Afterwards send the request.
stream .write_all(&header) .await .map_err(|err| Error::IoError("sending request size header".to_string(), err))?; // Split the payload into chunks of [PACKET_SIZE] (1280) bytes. // 1.5 Kbyte is the usual MTU, but some carriers have a little less, such as Wireguard. for chunk in payload.chunks(PACKET_SIZE) { stream .write_all(chunk) .await .map_err(|err| Error::IoError("sending payload chunk".to_string(), err))?; } Ok(()) } /// Receive a byte stream. \ /// This is part of the basic protocol beneath all communication. \ /// /// 1. First off, the sender sends a u64 as an 8 byte vector in BigEndian mode, which specifies /// the length of the payload we're going to receive. /// 2. Receive chunks of [PACKET_SIZE] bytes until we have received all expected bytes. pub async fn receive_bytes(stream: &mut GenericStream) -> Result<Vec<u8>, Error> { // Receive the header with the overall message size let mut header = vec![0; 8]; stream .read_exact(&mut header) .await .map_err(|err| Error::IoError("reading request size header".to_string(), err))?; let mut header = Cursor::new(header); let message_size = ReadBytesExt::read_u64::<BigEndian>(&mut header)? as usize; // Buffer for the whole payload let mut payload_bytes = Vec::with_capacity(message_size); // Receive chunks until we reached the expected message size while payload_bytes.len() < message_size { let remaining_bytes = message_size - payload_bytes.len(); let mut chunk_buffer: Vec<u8> = if remaining_bytes < PACKET_SIZE { // The remaining bytes fit into less than our PACKET_SIZE. // In this case, we have to be exact to prevent us from accidentally reading bytes // of the next message that might already be in the queue. vec![0; remaining_bytes] } else { // Create a static buffer with our max packet size.
vec![0; PACKET_SIZE] }; // Read data and get the amount of received bytes let received_bytes = stream .read(&mut chunk_buffer) .await .map_err(|err| Error::IoError("reading next chunk".to_string(), err))?; if received_bytes == 0 { return Err(Error::Connection( "Connection went away while receiving payload.".into(), )); } // Extend the total payload bytes by the part of the buffer that has been filled // during this iteration. payload_bytes.extend_from_slice(&chunk_buffer[0..received_bytes]); } Ok(payload_bytes) } /// Convenience wrapper that receives a message and converts it into a Message. pub async fn receive_message(stream: &mut GenericStream) -> Result<Message, Error> { let payload_bytes = receive_bytes(stream).await?; if payload_bytes.is_empty() { return Err(Error::EmptyPayload); } // Deserialize the message. let message: Message = from_slice(&payload_bytes).map_err(|err| Error::MessageDeserialization(err.to_string()))?; debug!("Received message: {message:#?}"); Ok(message) } #[cfg(test)] mod test { use std::time::Duration; use async_trait::async_trait; use pretty_assertions::assert_eq; use tokio::net::{TcpListener, TcpStream}; use tokio::task; use super::*; use crate::network::socket::Stream as PueueStream; // Implement generic Listener/Stream traits, so we can test stuff on normal TCP #[async_trait] impl Listener for TcpListener { async fn accept<'a>(&'a self) -> Result<GenericStream, Error> { let (stream, _) = self.accept().await?; Ok(Box::new(stream)) } } impl PueueStream for TcpStream {} #[tokio::test] async fn test_single_huge_payload() -> Result<(), Error> { let listener = TcpListener::bind("127.0.0.1:0").await?; let addr = listener.local_addr()?; // The message that should be sent let payload = "a".repeat(100_000); let message = create_success_message(payload); let original_bytes = to_vec(&message).expect("Failed to serialize message."); let listener: GenericListener = Box::new(listener); // Spawn a sub thread that: // 1. 
Accepts a new connection // 2. Reads a message // 3. Sends the same message back task::spawn(async move { let mut stream = listener.accept().await.unwrap(); let message_bytes = receive_bytes(&mut stream).await.unwrap(); let message: Message = from_slice(&message_bytes).unwrap(); send_message(message, &mut stream).await.unwrap(); }); let mut client: GenericStream = Box::new(TcpStream::connect(&addr).await?); // Create a client that sends a message and instantly receives it send_message(message, &mut client).await?; let response_bytes = receive_bytes(&mut client).await?; let _message: Message = from_slice(&response_bytes) .map_err(|err| Error::MessageDeserialization(err.to_string()))?; assert_eq!(response_bytes, original_bytes); Ok(()) } /// Test that multiple messages can be sent by a sender. /// The receiver must be able to handle those messages, even if multiple are in the buffer /// at once. #[tokio::test] async fn test_successive_messages() -> Result<(), Error> { let listener = TcpListener::bind("127.0.0.1:0").await?; let addr = listener.local_addr()?; let listener: GenericListener = Box::new(listener); // Spawn a sub thread that: // 1. Accepts a new connection. // 2. Immediately sends two messages in quick succession. task::spawn(async move { let mut stream = listener.accept().await.unwrap(); send_message(create_success_message("message_a"), &mut stream) .await .unwrap(); send_message(create_success_message("message_b"), &mut stream) .await .unwrap(); }); // Create a receiver stream let mut client: GenericStream = Box::new(TcpStream::connect(&addr).await?); // Wait for a short time to allow the sender to send all messages tokio::time::sleep(Duration::from_millis(500)).await; // Get both individual messages that have been sent.
let message_a = receive_message(&mut client).await.expect("First message"); let message_b = receive_message(&mut client).await.expect("Second message"); assert_eq!(Message::Success("message_a".to_string()), message_a); assert_eq!(Message::Success("message_b".to_string()), message_b); Ok(()) } } 070701000000CF000081A4000000000000000000000001665F1B690000078A000000000000000000000000000000000000002C00000000pueue-3.4.1/pueue_lib/src/network/secret.rsuse std::fs::File; use std::io::prelude::*; use std::path::Path; use rand::{distributions::Alphanumeric, Rng}; use crate::error::Error; /// Read the shared secret from a file. pub fn read_shared_secret(path: &Path) -> Result<Vec<u8>, Error> { let mut file = File::open(path).map_err(|err| { Error::IoPathError( path.to_path_buf(), "opening secret file. Did you start the daemon at least once?", err, ) })?; let mut buffer = Vec::new(); file.read_to_end(&mut buffer) .map_err(|err| Error::IoPathError(path.to_path_buf(), "reading secret file", err))?; Ok(buffer) } /// Generate a random secret and write it to a file. pub fn init_shared_secret(path: &Path) -> Result<(), Error> { if path.exists() { return Ok(()); } const PASSWORD_LEN: usize = 512; let mut rng = rand::thread_rng(); let secret: String = std::iter::repeat(()) .map(|()| rng.sample(Alphanumeric)) .map(char::from) .take(PASSWORD_LEN) .collect(); let mut file = File::create(path) .map_err(|err| Error::IoPathError(path.to_path_buf(), "creating shared secret", err))?; file.write_all(&secret.into_bytes()) .map_err(|err| Error::IoPathError(path.to_path_buf(), "writing shared secret", err))?; // Set proper file permissions for unix filesystems #[cfg(not(target_os = "windows"))] { use std::os::unix::fs::PermissionsExt; let mut permissions = file .metadata() .map_err(|err| { Error::IoPathError(path.to_path_buf(), "reading secret file metadata", err) })? 
.permissions(); permissions.set_mode(0o640); std::fs::set_permissions(path, permissions).map_err(|err| { Error::IoPathError(path.to_path_buf(), "setting secret file permissions", err) })?; } Ok(()) } 070701000000D0000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002900000000pueue-3.4.1/pueue_lib/src/network/socket070701000000D1000081A4000000000000000000000001665F1B69000001AB000000000000000000000000000000000000003000000000pueue-3.4.1/pueue_lib/src/network/socket/mod.rs//! Socket handling is platform specific code. //! //! The submodules of this module represent the different implementations for //! each supported platform. //! Depending on the target, the respective platform is read and loaded into this scope. /// Shared socket logic #[cfg_attr(not(target_os = "windows"), path = "unix.rs")] #[cfg_attr(target_os = "windows", path = "windows.rs")] mod platform; pub use self::platform::*; 070701000000D2000081A4000000000000000000000001665F1B69000018D8000000000000000000000000000000000000003100000000pueue-3.4.1/pueue_lib/src/network/socket/unix.rsuse std::convert::TryFrom; use async_trait::async_trait; use log::info; use rustls::pki_types::ServerName; use tokio::io::{AsyncRead, AsyncWrite}; use tokio::net::{TcpListener, TcpStream, UnixListener, UnixStream}; use tokio_rustls::TlsAcceptor; use crate::error::Error; use crate::network::tls::{get_tls_connector, get_tls_listener}; use crate::settings::Shared; /// Unix specific cleanup handling when getting a SIGINT/SIGTERM. pub fn socket_cleanup(settings: &Shared) -> Result<(), std::io::Error> { // Clean up the unix socket if we're using it and it exists. if settings.use_unix_socket && settings.unix_socket_path().exists() { std::fs::remove_file(settings.unix_socket_path())?; } Ok(()) } /// A new trait, which can be used to represent Unix- and TcpListeners. \ /// This is necessary to easily write generic functions where both types can be used. 
#[async_trait] pub trait Listener: Sync + Send { async fn accept<'a>(&'a self) -> Result<GenericStream, Error>; } /// This is a helper struct for TCP connections. /// TCP should always be used in conjunction with TLS. /// That's why this helper exists, which encapsulates the logic of accepting a new /// connection and initializing the TLS layer on top of it. /// This way we can expose an `accept` function and implement the Listener trait. pub(crate) struct TlsTcpListener { tcp_listener: TcpListener, tls_acceptor: TlsAcceptor, } #[async_trait] impl Listener for TlsTcpListener { async fn accept<'a>(&'a self) -> Result<GenericStream, Error> { let (stream, _) = self .tcp_listener .accept() .await .map_err(|err| Error::IoError("accepting new tcp connection.".to_string(), err))?; let tls_stream = self .tls_acceptor .accept(stream) .await .map_err(|err| Error::IoError("accepting new tls connection.".to_string(), err))?; Ok(Box::new(tls_stream)) } } #[async_trait] impl Listener for UnixListener { async fn accept<'a>(&'a self) -> Result<GenericStream, Error> { let (stream, _) = self .accept() .await .map_err(|err| Error::IoError("accepting new unix connection.".to_string(), err))?; Ok(Box::new(stream)) } } /// A new trait, which can be used to represent Unix- and Tls encrypted TcpStreams. \ /// This is necessary to write generic functions where both types can be used. pub trait Stream: AsyncRead + AsyncWrite + Unpin + Send {} impl Stream for UnixStream {} impl Stream for tokio_rustls::server::TlsStream<TcpStream> {} impl Stream for tokio_rustls::client::TlsStream<TcpStream> {} /// Convenience type, so we don't have to write `Box<dyn Listener>` all the time. pub type GenericListener = Box<dyn Listener>; /// Convenience type, so we don't have to write `Box<dyn Stream>` all the time. \ /// This also prevents name collisions, since `Stream` is imported in many preludes. pub type GenericStream = Box<dyn Stream>; /// Get a new stream for the client.
\ /// This can either be a UnixStream or a TLS-encrypted TcpStream, depending on the parameters. pub async fn get_client_stream(settings: &Shared) -> Result<GenericStream, Error> { // Create a unix socket, if the config says so. if settings.use_unix_socket { let unix_socket_path = settings.unix_socket_path(); let stream = UnixStream::connect(&unix_socket_path) .await .map_err(|err| { Error::IoPathError( unix_socket_path, "connecting to daemon. Did you start it?", err, ) })?; return Ok(Box::new(stream)); } // Connect to the daemon via TCP let address = format!("{}:{}", &settings.host, &settings.port); let tcp_stream = TcpStream::connect(&address).await.map_err(|_| { Error::Connection(format!( "Failed to connect to the daemon on {address}. Did you start it?" )) })?; // Get the configured rustls TlsConnector let tls_connector = get_tls_connector(settings) .await .map_err(|err| Error::Connection(format!("Failed to initialize tls connector:\n{err}.")))?; // Initialize the TLS layer let stream = tls_connector .connect(ServerName::try_from("pueue.local").unwrap(), tcp_stream) .await .map_err(|err| Error::Connection(format!("Failed to initialize tls:\n{err}.")))?; Ok(Box::new(stream)) } /// Get a new listener for the daemon. \ /// This can either be a UnixListener or a TcpListener, depending on the parameters. pub async fn get_listener(settings: &Shared) -> Result<GenericListener, Error> { if settings.use_unix_socket { let socket_path = settings.unix_socket_path(); info!("Using unix socket at: {socket_path:?}"); // Check if the socket already exists. // In case it does, we have to check whether it's an active socket. // If it is, we have to throw an error, because another daemon is already running. // Otherwise, we can simply remove it.
if socket_path.exists() { if get_client_stream(settings).await.is_ok() { return Err(Error::UnixSocketExists); } std::fs::remove_file(&socket_path).map_err(|err| { Error::IoPathError(socket_path.clone(), "removing old socket", err) })?; } let unix_listener = UnixListener::bind(&socket_path) .map_err(|err| Error::IoPathError(socket_path, "creating unix socket", err))?; return Ok(Box::new(unix_listener)); } // This is the listener, which accepts low-level TCP connections let address = format!("{}:{}", &settings.host, &settings.port); info!("Binding to address: {address}"); let tcp_listener = TcpListener::bind(&address) .await .map_err(|err| Error::IoError("binding tcp listener to address".to_string(), err))?; // This is the TLS acceptor, which initializes the TLS layer let tls_acceptor = get_tls_listener(settings)?; // Create a struct, which accepts connections and initializes a TLS layer in one go. let tls_listener = TlsTcpListener { tcp_listener, tls_acceptor, }; Ok(Box::new(tls_listener)) } 070701000000D3000081A4000000000000000000000001665F1B6900000E80000000000000000000000000000000000000003400000000pueue-3.4.1/pueue_lib/src/network/socket/windows.rsuse std::convert::TryFrom; use async_trait::async_trait; use rustls::pki_types::ServerName; use tokio::io::{AsyncRead, AsyncWrite}; use tokio::net::{TcpListener, TcpStream}; use tokio_rustls::TlsAcceptor; use crate::error::Error; use crate::network::tls::{get_tls_connector, get_tls_listener}; use crate::settings::Shared; /// Windows-specific cleanup handling when getting a SIGINT/SIGTERM. pub fn socket_cleanup(_settings: &Shared) -> Result<(), Error> { Ok(()) } /// This is a helper struct for TCP connections. /// TCP should always be used in conjunction with TLS. /// That's why this helper exists, which encapsulates the logic of accepting a new /// connection and initializing the TLS layer on top of it. /// This way we can expose an `accept` function and implement the Listener trait.
pub struct TlsTcpListener { tcp_listener: TcpListener, tls_acceptor: TlsAcceptor, } /// A new trait, which can be used to represent Unix- and TcpListeners. /// This is necessary to easily write generic functions where both types can be used. #[async_trait] pub trait Listener: Sync + Send { async fn accept<'a>(&'a self) -> Result<GenericStream, Error>; } #[async_trait] impl Listener for TlsTcpListener { async fn accept<'a>(&'a self) -> Result<GenericStream, Error> { let (stream, _) = self.tcp_listener.accept().await?; Ok(Box::new(self.tls_acceptor.accept(stream).await?)) } } /// A new trait, which can be used to represent Unix- and Tls encrypted TcpStreams. /// This is necessary to write generic functions where both types can be used. pub trait Stream: AsyncRead + AsyncWrite + Unpin + Send {} impl Stream for tokio_rustls::server::TlsStream<TcpStream> {} impl Stream for tokio_rustls::client::TlsStream<TcpStream> {} /// Two convenience types, so we don't have to write `Box<dyn ...>` all the time. pub type GenericListener = Box<dyn Listener>; pub type GenericStream = Box<dyn Stream>; /// Get a new stream for the client. /// On Windows, this is always a TLS-encrypted TcpStream. pub async fn get_client_stream(settings: &Shared) -> Result<GenericStream, Error> { // Connect to the daemon via TCP let address = format!("{}:{}", settings.host, settings.port); let tcp_stream = TcpStream::connect(&address).await.map_err(|_| { Error::Connection(format!( "Failed to connect to the daemon on {address}. Did you start it?"
)) })?; // Get the configured rustls TlsConnector let tls_connector = get_tls_connector(settings) .await .map_err(|err| Error::Connection(format!("Failed to initialize tls connector: {err}.")))?; // Initialize the TLS layer let stream = tls_connector .connect(ServerName::try_from("pueue.local").unwrap(), tcp_stream) .await .map_err(|err| Error::Connection(format!("Failed to initialize tls: {err}.")))?; Ok(Box::new(stream)) } /// Get a new TCP/TLS listener for the daemon. pub async fn get_listener(settings: &Shared) -> Result<GenericListener, Error> { // This is the listener, which accepts low-level TCP connections let address = format!("{}:{}", settings.host, settings.port); let tcp_listener = TcpListener::bind(&address).await.map_err(|err| { Error::Connection(format!("Failed to listen on address {address}. {err}")) })?; // This is the TLS acceptor, which initializes the TLS layer let tls_acceptor = get_tls_listener(settings)?; // Create a struct, which accepts connections and initializes a TLS layer in one go. let tls_listener = TlsTcpListener { tcp_listener, tls_acceptor, }; Ok(Box::new(tls_listener)) } 070701000000D4000081A4000000000000000000000001665F1B6900000F7B000000000000000000000000000000000000002900000000pueue-3.4.1/pueue_lib/src/network/tls.rsuse std::fs::File; use std::io::BufReader; use std::path::Path; use std::sync::Arc; use tokio_rustls::{TlsAcceptor, TlsConnector}; use rustls::pki_types::{CertificateDer, PrivateKeyDer}; use rustls::{ClientConfig, RootCertStore, ServerConfig}; use rustls_pemfile::{pkcs8_private_keys, rsa_private_keys}; use crate::error::Error; use crate::settings::Shared; /// Initialize our client [TlsConnector]. \ /// 1. Trust our own CA. ONLY our own CA. /// 2. Don't use client authentication. pub async fn get_tls_connector(settings: &Shared) -> Result<TlsConnector, Error> { // Only trust server-certificates signed with our own CA.
let ca = load_ca(&settings.daemon_cert())?; let mut cert_store = RootCertStore::empty(); cert_store.add(ca).map_err(|err| { Error::CertificateFailure(format!("Failed to build RootCertStore: {err}")) })?; let config: ClientConfig = ClientConfig::builder() .with_root_certificates(cert_store) .with_no_client_auth(); Ok(TlsConnector::from(Arc::new(config))) } /// Configure the server using rustls. \ /// A TLS server needs a certificate and a fitting private key. pub fn get_tls_listener(settings: &Shared) -> Result<TlsAcceptor, Error> { // Set the server-side key and certificate that should be used for all communication. let certs = load_certs(&settings.daemon_cert())?; let key = load_key(&settings.daemon_key())?; let config = ServerConfig::builder() .with_no_client_auth() .with_single_cert(certs, key) .map_err(|err| Error::CertificateFailure(format!("Failed to build TLS Acceptor: {err}")))?; Ok(TlsAcceptor::from(Arc::new(config))) } /// Load the passed certificates file fn load_certs<'a>(path: &Path) -> Result<Vec<CertificateDer<'a>>, Error> { let file = File::open(path) .map_err(|err| Error::IoPathError(path.to_path_buf(), "opening cert", err))?; let certs: Vec<CertificateDer> = rustls_pemfile::certs(&mut BufReader::new(file)) .collect::<Result<Vec<_>, std::io::Error>>() .map_err(|_| Error::CertificateFailure("Failed to parse daemon certificate.".into()))? .into_iter() .map(CertificateDer::from) .collect(); Ok(certs) } /// Load the passed keys file. /// Only the first key will be used. It should match the certificate.
fn load_key<'a>(path: &Path) -> Result<PrivateKeyDer<'a>, Error> { let file = File::open(path) .map_err(|err| Error::IoPathError(path.to_path_buf(), "opening key", err))?; // Try to read pkcs8 format first let keys = pkcs8_private_keys(&mut BufReader::new(&file)) .collect::<Result<Vec<_>, std::io::Error>>() .map_err(|_| Error::CertificateFailure("Failed to parse pkcs8 format.".into())); if let Ok(keys) = keys { if let Some(key) = keys.into_iter().next() { return Ok(PrivateKeyDer::Pkcs8(key)); } } // Try the normal rsa format afterwards. let keys = rsa_private_keys(&mut BufReader::new(file)) .collect::<Result<Vec<_>, std::io::Error>>() .map_err(|_| Error::CertificateFailure("Failed to parse daemon key.".into()))?; if let Some(key) = keys.into_iter().next() { return Ok(PrivateKeyDer::Pkcs1(key)); } Err(Error::CertificateFailure(format!( "Couldn't extract private key from keyfile {path:?}", ))) } fn load_ca<'a>(path: &Path) -> Result<CertificateDer<'a>, Error> { let file = File::open(path) .map_err(|err| Error::IoPathError(path.to_path_buf(), "opening cert", err))?; let cert = rustls_pemfile::certs(&mut BufReader::new(file)) .collect::<Result<Vec<_>, std::io::Error>>() .map_err(|_| Error::CertificateFailure("Failed to parse daemon certificate.".into()))? 
.into_iter() .map(CertificateDer::from) .next() .ok_or_else(|| Error::CertificateFailure("Couldn't find CA certificate in file".into()))?; Ok(cert) } 070701000000D5000041ED000000000000000000000002665F1B6900000000000000000000000000000000000000000000002900000000pueue-3.4.1/pueue_lib/src/process_helper070701000000D6000081A4000000000000000000000001665F1B69000000DF000000000000000000000000000000000000003200000000pueue-3.4.1/pueue_lib/src/process_helper/apple.rsuse libproc::libproc::{proc_pid, task_info}; /// Check whether a specific process exists or not pub fn process_exists(pid: u32) -> bool { proc_pid::pidinfo::<task_info::TaskInfo>(pid.try_into().unwrap(), 0).is_ok() } 070701000000D7000081A4000000000000000000000001665F1B6900000165000000000000000000000000000000000000003400000000pueue-3.4.1/pueue_lib/src/process_helper/freebsd.rsuse std::path::Path; /// Check whether a specific process exists or not pub fn process_exists(pid: u32) -> bool { Path::new(&format!("/proc/{pid}")).is_dir() } #[cfg(test)] pub mod tests { /// Get all processes in a process group pub fn get_process_group_pids(_pgrp: i32) -> Vec<i32> { // TODO: implement for FreeBSD. Vec::new() } } 070701000000D8000081A4000000000000000000000001665F1B6900000140000000000000000000000000000000000000003200000000pueue-3.4.1/pueue_lib/src/process_helper/linux.rsuse procfs::process; /// Check whether a specific process exists or not pub fn process_exists(pid: u32) -> bool { match pid.try_into() { Err(_) => false, Ok(pid) => match process::Process::new(pid) { Ok(process) => process.is_alive(), Err(_) => false, }, } } 070701000000D9000081A4000000000000000000000001665F1B6900000D6F000000000000000000000000000000000000003000000000pueue-3.4.1/pueue_lib/src/process_helper/mod.rs//! Subprocess handling is platform specific code. //! //! The submodules of this module represent the different implementations for //! each supported platform. //! Depending on the target, the respective platform is read and loaded into this scope.
use std::{collections::HashMap, process::Command}; use crate::{network::message::Signal as InternalSignal, settings::Settings}; // Unix specific process handling // Shared between Linux and Apple #[cfg(unix)] mod unix; #[cfg(unix)] pub use self::unix::*; #[cfg(unix)] use command_group::Signal; // Platform specific process support #[cfg_attr(target_os = "linux", path = "linux.rs")] #[cfg_attr(target_vendor = "apple", path = "apple.rs")] #[cfg_attr(target_os = "windows", path = "windows.rs")] #[cfg_attr(target_os = "freebsd", path = "freebsd.rs")] mod platform; pub use self::platform::*; /// Pueue directly interacts with processes. /// Since these interactions can vary depending on the current platform, this enum is introduced. /// The intent is to keep any platform specific code out of the top level code, /// even if that implies adding some layers of abstraction. #[derive(Debug)] pub enum ProcessAction { Pause, Resume, } impl From<&ProcessAction> for Signal { fn from(action: &ProcessAction) -> Self { match action { ProcessAction::Pause => Signal::SIGSTOP, ProcessAction::Resume => Signal::SIGCONT, } } } impl From<InternalSignal> for Signal { fn from(signal: InternalSignal) -> Self { match signal { InternalSignal::SigKill => Signal::SIGKILL, InternalSignal::SigInt => Signal::SIGINT, InternalSignal::SigTerm => Signal::SIGTERM, InternalSignal::SigCont => Signal::SIGCONT, InternalSignal::SigStop => Signal::SIGSTOP, } } } /// Take a platform specific shell command and insert the actual task command via templating. pub fn compile_shell_command(settings: &Settings, command: &str) -> Command { let shell_command = get_shell_command(settings); let mut handlebars = handlebars::Handlebars::new(); handlebars.set_strict_mode(true); handlebars.register_escape_fn(handlebars::no_escape); // Make the command available to the template engine.
let mut parameters = HashMap::new(); parameters.insert("pueue_command_string", command); // We allow users to provide their own shell command. // They should use the `{{ pueue_command_string }}` placeholder. let mut compiled_command = Vec::new(); for part in shell_command { let compiled_part = handlebars .render_template(&part, &parameters) .unwrap_or_else(|_| { panic!("Failed to render shell command for template: {part} and parameters: {parameters:?}") }); compiled_command.push(compiled_part); } let executable = compiled_command.remove(0); // Build the command from the compiled executable and its arguments. let mut command = Command::new(executable); for arg in compiled_command { command.arg(&arg); } // Inject custom environment variables. if !settings.daemon.env_vars.is_empty() { log::info!( "Inject environment variables: {:?}", &settings.daemon.env_vars ); command.envs(&settings.daemon.env_vars); } command } 070701000000DA000081A4000000000000000000000001665F1B69000025DB000000000000000000000000000000000000003100000000pueue-3.4.1/pueue_lib/src/process_helper/unix.rs// We allow anyhow in here, as this is a module that'll be strictly used internally. // As soon as it's obvious that this code is intended to be exposed to library users, we have to // go ahead and replace any `anyhow` usage by proper error handling via our own Error type. use anyhow::Result; use command_group::{GroupChild, Signal, UnixChildExt}; use log::info; use crate::settings::Settings; pub fn get_shell_command(settings: &Settings) -> Vec<String> { let Some(ref shell_command) = settings.daemon.shell_command else { return vec![ "sh".into(), "-c".into(), "{{ pueue_command_string }}".into(), ]; }; shell_command.clone() } /// Send a signal to one of Pueue's child process group handles.
pub fn send_signal_to_child<T>(child: &mut GroupChild, signal: T) -> Result<()> where T: Into<Signal>, { child.signal(signal.into())?; Ok(()) } /// This is a helper function to safely kill a child process group. /// Its purpose is to properly kill all processes and prevent any dangling processes. pub fn kill_child(task_id: usize, child: &mut GroupChild) -> std::io::Result<()> { match child.kill() { Ok(_) => Ok(()), Err(ref e) if e.kind() == std::io::ErrorKind::InvalidData => { // Process already exited info!("Task {task_id} has already finished by itself."); Ok(()) } Err(err) => Err(err), } } #[cfg(test)] mod tests { use std::process::Command; use std::thread::sleep; use std::time::Duration; use anyhow::Result; use command_group::CommandGroup; use libproc::processes::{pids_by_type, ProcFilter}; use log::warn; use pretty_assertions::assert_eq; use super::*; use crate::process_helper::{compile_shell_command, process_exists}; /// List all PIDs that are part of the process group pub fn get_process_group_pids(pgrp: u32) -> Vec<u32> { match pids_by_type(ProcFilter::ByProgramGroup { pgrpid: pgrp }) { Err(error) => { warn!("Failed to get list of processes in process group {pgrp}: {error}"); Vec::new() } Ok(mut processes) => { // MacOS doesn't list the main process in this group if !processes.iter().any(|pid| pid == &pgrp) && !process_is_gone(pgrp) { processes.push(pgrp) } processes } } } /// Assert that certain process id no longer exists fn process_is_gone(pid: u32) -> bool { !process_exists(pid) } #[test] fn test_spawn_command() { let settings = Settings::default(); let mut child = compile_shell_command(&settings, "sleep 0.1") .group_spawn() .expect("Failed to spawn echo"); let ecode = child.wait().expect("failed to wait on echo"); assert!(ecode.success()); } #[test] /// Ensure a `sh -c` command will be properly killed without detached processes. 
fn test_shell_command_is_killed() -> Result<()> { let settings = Settings::default(); let mut child = compile_shell_command(&settings, "sleep 60 & sleep 60 && echo 'this is a test'") .group_spawn() .expect("Failed to spawn echo"); let pid = child.id(); // Sleep a little to give everything a chance to spawn. sleep(Duration::from_millis(500)); // Get all child processes, so we can make sure they no longer exist afterwards. // The process group id is the same as the parent process id. let group_pids = get_process_group_pids(pid); assert_eq!(group_pids.len(), 3); // Kill the process and make sure it'll be killed. assert!(kill_child(0, &mut child).is_ok()); // Sleep a little to give all processes time to shutdown. sleep(Duration::from_millis(500)); // collect the exit status; otherwise the child process hangs around as a zombie. child.try_wait().unwrap_or_default(); // Assert that the direct child (sh -c) has been killed. assert!(process_is_gone(pid)); // Assert that all child processes have been killed. assert_eq!(get_process_group_pids(pid).len(), 0); Ok(()) } #[test] /// Ensure a `sh -c` command will be properly killed without detached processes when using unix /// signals directly. fn test_shell_command_is_killed_with_signal() -> Result<()> { let settings = Settings::default(); let mut child = compile_shell_command(&settings, "sleep 60 & sleep 60 && echo 'this is a test'") .group_spawn() .expect("Failed to spawn echo"); let pid = child.id(); // Sleep a little to give everything a chance to spawn. sleep(Duration::from_millis(500)); // Get all child processes, so we can make sure they no longer exist afterwards. // The process group id is the same as the parent process id. let group_pids = get_process_group_pids(pid); assert_eq!(group_pids.len(), 3); // Kill the process and make sure it'll be killed. send_signal_to_child(&mut child, Signal::SIGKILL).unwrap(); // Sleep a little to give all processes time to shutdown. 
sleep(Duration::from_millis(500)); // collect the exit status; otherwise the child process hangs around as a zombie. child.try_wait().unwrap_or_default(); // Assert that the direct child (sh -c) has been killed. assert!(process_is_gone(pid)); // Assert that all child processes have been killed. assert_eq!(get_process_group_pids(pid).len(), 0); Ok(()) } #[test] /// Ensure that a `sh -c` process with a child process that has children of its own /// will properly kill all processes and their children's children without detached processes. fn test_shell_command_children_are_killed() -> Result<()> { let settings = Settings::default(); let mut child = compile_shell_command(&settings, "bash -c 'sleep 60 && sleep 60' && sleep 60") .group_spawn() .expect("Failed to spawn echo"); let pid = child.id(); // Sleep a little to give everything a chance to spawn. sleep(Duration::from_millis(500)); // Get all child processes, so we can make sure they no longer exist afterwards. // The process group id is the same as the parent process id. let group_pids = get_process_group_pids(pid); assert_eq!(group_pids.len(), 3); // Kill the process and make sure its children will be killed. assert!(kill_child(0, &mut child).is_ok()); // Sleep a little to give all processes time to shutdown. sleep(Duration::from_millis(500)); // collect the exit status; otherwise the child process hangs around as a zombie. child.try_wait().unwrap_or_default(); // Assert that the direct child (sh -c) has been killed. assert!(process_is_gone(pid)); // Assert that all child processes have been killed. assert_eq!(get_process_group_pids(pid).len(), 0); Ok(()) } #[test] /// Ensure a normal command without `sh -c` will be killed. fn test_normal_command_is_killed() -> Result<()> { let mut child = Command::new("sleep") .arg("60") .group_spawn() .expect("Failed to spawn echo"); let pid = child.id(); // Sleep a little to give everything a chance to spawn. 
sleep(Duration::from_millis(500)); // No further processes exist in the group let group_pids = get_process_group_pids(pid); assert_eq!(group_pids.len(), 1); // Kill the process and make sure it'll be killed. assert!(kill_child(0, &mut child).is_ok()); // Sleep a little to give all processes time to shutdown. sleep(Duration::from_millis(500)); // collect the exit status; otherwise the child process hangs around as a zombie. child.try_wait().unwrap_or_default(); assert!(process_is_gone(pid)); Ok(()) } #[test] /// Ensure a normal command and all its children will be /// properly killed without any detached processes. fn test_normal_command_children_are_killed() -> Result<()> { let mut child = Command::new("bash") .arg("-c") .arg("sleep 60 & sleep 60 && sleep 60") .group_spawn() .expect("Failed to spawn echo"); let pid = child.id(); // Sleep a little to give everything a chance to spawn. sleep(Duration::from_millis(500)); // Get all child processes, so we can make sure they no longer exist afterwards. // The process group id is the same as the parent process id. let group_pids = get_process_group_pids(pid); assert_eq!(group_pids.len(), 3); // Kill the process and make sure it'll be killed. assert!(kill_child(0, &mut child).is_ok()); // Sleep a little to give all processes time to shutdown. sleep(Duration::from_millis(500)); // collect the exit status; otherwise the child process hangs around as a zombie. child.try_wait().unwrap_or_default(); // Assert that the direct child (sh -c) has been killed. assert!(process_is_gone(pid)); // Assert that all child processes have been killed. assert_eq!(get_process_group_pids(pid).len(), 0); Ok(()) } } 070701000000DB000081A4000000000000000000000001665F1B6900003935000000000000000000000000000000000000003400000000pueue-3.4.1/pueue_lib/src/process_helper/windows.rs// We allow anyhow in here, as this is a module that'll be strictly used internally. 
// As soon as it's obvious that this code is intended to be exposed to library users, we have to // go ahead and replace any `anyhow` usage by proper error handling via our own Error type. use anyhow::{bail, Result}; use command_group::GroupChild; use log::{error, info, warn}; use winapi::shared::minwindef::FALSE; use winapi::shared::ntdef::NULL; use winapi::um::errhandlingapi::GetLastError; use winapi::um::handleapi::{CloseHandle, INVALID_HANDLE_VALUE}; use winapi::um::processthreadsapi::{OpenThread, ResumeThread, SuspendThread}; use winapi::um::tlhelp32::{ CreateToolhelp32Snapshot, Process32First, Process32Next, Thread32First, Thread32Next, PROCESSENTRY32, TH32CS_SNAPPROCESS, TH32CS_SNAPTHREAD, THREADENTRY32, }; use winapi::um::winnt::THREAD_SUSPEND_RESUME; use crate::settings::Settings; /// Shim signal enum for windows. pub enum Signal { SIGINT, SIGKILL, SIGTERM, SIGCONT, SIGSTOP, } pub fn get_shell_command(settings: &Settings) -> Vec<String> { let Some(ref shell_command) = settings.daemon.shell_command else { // Run a `powershell` invocation that first sets the output encoding to utf8 and then runs the user provided command. return vec![ "powershell".into(), "-c".into(), "[Console]::OutputEncoding = [Text.UTF8Encoding]::UTF8; {{ pueue_command_string }}" .into(), ]; }; shell_command.clone() } /// Send a signal to a windows process. pub fn send_signal_to_child<T>(child: &mut GroupChild, signal: T) -> Result<()> where T: Into<Signal>, { let pids = get_cur_task_processes(child.id()); if pids.is_empty() { bail!("Process has just gone away"); } let signal: Signal = signal.into(); match signal { Signal::SIGSTOP => { for pid in pids { for thread in get_threads(pid) { suspend_thread(thread); } } } Signal::SIGCONT => { for pid in pids { for thread in get_threads(pid) { resume_thread(thread); } } } _ => { bail!("Trying to send unix signal on a windows machine.
This isn't supported."); } } Ok(()) } /// Kill a child process pub fn kill_child(task_id: usize, child: &mut GroupChild) -> std::io::Result<()> { match child.kill() { Ok(_) => Ok(()), Err(ref e) if e.kind() == std::io::ErrorKind::InvalidData => { // Process already exited info!("Task {task_id} has already finished by itself."); Ok(()) } Err(err) => Err(err), } } /// Get current task pid, all child pid and all children's children /// TODO: see if this can be simplified using QueryInformationJobObject /// on the job object created by command_group. fn get_cur_task_processes(task_pid: u32) -> Vec<u32> { let mut all_pids = Vec::new(); // Get all pids by BFS let mut parent_pids = vec![task_pid]; while let Some(pid) = parent_pids.pop() { all_pids.push(pid); get_child_pids(pid, &mut parent_pids); } // Keep parent pid ahead of child. We need execute action for parent process first. all_pids.reverse(); all_pids } /// Get child pids of a specific process. fn get_child_pids(target_pid: u32, pid_list: &mut Vec<u32>) { unsafe { // Take a snapshot of all processes in the system. // While enumerating the set of processes, new processes can be created and destroyed. let snapshot_handle = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, target_pid); if snapshot_handle == INVALID_HANDLE_VALUE { error!("Failed to get process {target_pid} snapShot"); return; } // Walk the list of processes. 
let mut process_entry = PROCESSENTRY32 { dwSize: std::mem::size_of::<PROCESSENTRY32>() as u32, ..Default::default() }; if Process32First(snapshot_handle, &mut process_entry) == FALSE { error!("Couldn't get first process."); CloseHandle(snapshot_handle); return; } loop { if process_entry.th32ParentProcessID == target_pid { pid_list.push(process_entry.th32ProcessID); } if Process32Next(snapshot_handle, &mut process_entry) == FALSE { break; } } CloseHandle(snapshot_handle); } } /// Get all thread ids of a specific process fn get_threads(target_pid: u32) -> Vec<u32> { let mut threads = Vec::new(); unsafe { // Take a snapshot of all threads in the system. // While enumerating the set of threads, new threads can be created and destroyed. let snapshot_handle = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0); if snapshot_handle == INVALID_HANDLE_VALUE { error!("Failed to get process {target_pid} snapshot"); return threads; } // Walk the list of threads. let mut thread_entry = THREADENTRY32 { dwSize: std::mem::size_of::<THREADENTRY32>() as u32, ..Default::default() }; if Thread32First(snapshot_handle, &mut thread_entry) == FALSE { error!("Couldn't get first thread."); CloseHandle(snapshot_handle); return threads; } loop { if thread_entry.th32OwnerProcessID == target_pid { threads.push(thread_entry.th32ThreadID); } if Thread32Next(snapshot_handle, &mut thread_entry) == FALSE { break; } } CloseHandle(snapshot_handle); } threads } /// Suspend a thread /// Each thread has a suspend count (with a maximum value of `MAXIMUM_SUSPEND_COUNT`). /// If the suspend count is greater than zero, the thread is suspended; otherwise, the thread is not suspended and is eligible for execution. /// Calling `SuspendThread` causes the target thread's suspend count to be incremented. /// Attempting to increment past the maximum suspend count causes an error without incrementing the count. 
/// [SuspendThread](https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-suspendthread) fn suspend_thread(tid: u32) { unsafe { // Attempt to convert the thread ID into a handle let thread_handle = OpenThread(THREAD_SUSPEND_RESUME, FALSE, tid); if thread_handle != NULL { // If SuspendThread fails, the return value is (DWORD) -1 if u32::max_value() == SuspendThread(thread_handle) { let err_code = GetLastError(); warn!("Failed to suspend thread {tid} with error code {err_code}"); } } CloseHandle(thread_handle); } } /// Resume a thread /// ResumeThread checks the suspend count of the subject thread. /// If the suspend count is zero, the thread is not currently suspended. Otherwise, the subject thread's suspend count is decremented. /// If the resulting value is zero, then the execution of the subject thread is resumed. /// [ResumeThread](https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-resumethread) fn resume_thread(tid: u32) { unsafe { // Attempt to convert the thread ID into a handle let thread_handle = OpenThread(THREAD_SUSPEND_RESUME, FALSE, tid); if thread_handle != NULL { // If ResumeThread fails, the return value is (DWORD) -1 if u32::max_value() == ResumeThread(thread_handle) { let err_code = GetLastError(); warn!("Failed to resume thread {tid} with error code {err_code}"); } } CloseHandle(thread_handle); } } /// Check whether a process with the given pid currently exists. pub fn process_exists(pid: u32) -> bool { unsafe { let handle = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); let mut process_entry = PROCESSENTRY32 { dwSize: std::mem::size_of::<PROCESSENTRY32>() as u32, ..Default::default() }; // `Process32First` must be called to start the walk before `Process32Next`. if Process32First(handle, &mut process_entry) == FALSE { CloseHandle(handle); return false; } loop { if process_entry.th32ProcessID == pid { CloseHandle(handle); return true; } if Process32Next(handle, &mut process_entry) == FALSE { break; } } CloseHandle(handle); } false } #[cfg(test)] mod test { use std::process::Command; use std::thread::sleep; use std::time::Duration; use command_group::CommandGroup; use super::*; use crate::process_helper::compile_shell_command; /// Check that a certain process id no longer exists fn process_is_gone(pid: u32) -> bool { !process_exists(pid) } /// A test helper function, which ensures that a specific number of subprocesses can be /// observed for a given PID in a given time window. /// If the correct number can be observed, the process ids are then returned. /// /// The process count is checked every few milliseconds for the given duration. fn assert_process_ids(pid: u32, expected_processes: usize, millis: usize) -> Result<Vec<u32>> { // Check every 50 milliseconds. let interval = 50; let tries = millis / interval; let mut current_try = 0; while current_try <= tries { // Continue waiting if the count doesn't match. let process_ids = get_cur_task_processes(pid); if process_ids.len() != expected_processes { current_try += 1; sleep(Duration::from_millis(interval as u64)); continue; } return Ok(process_ids); } let count = get_cur_task_processes(pid).len(); bail!("{expected_processes} processes were expected. Last process count was {count}") } #[test] fn test_spawn_command() { let settings = Settings::default(); let mut child = compile_shell_command(&settings, "sleep 0.1") .group_spawn() .expect("Failed to spawn echo"); let ecode = child.wait().expect("failed to wait on echo"); assert!(ecode.success()); } #[ignore] #[test] /// Ensure a `powershell -c` command will be properly killed without detached processes. /// /// This test is ignored for now, as it is flaky from time to time. /// See https://github.com/Nukesor/pueue/issues/315 fn test_shell_command_is_killed() -> Result<()> { let settings = Settings::default(); let mut child = compile_shell_command(&settings, "sleep 60; sleep 60; echo 'this is a test'") .group_spawn() .expect("Failed to spawn echo"); let pid = child.id(); // Get all processes, so we can make sure they no longer exist afterwards. 
let process_ids = assert_process_ids(pid, 1, 5000)?; // Kill the process and make sure it'll be killed. assert!(kill_child(0, &mut child).is_ok()); // Sleep a little to give all processes time to shutdown. sleep(Duration::from_millis(500)); // Assert that the direct child (powershell -c) has been killed. assert!(process_is_gone(pid)); // Assert that all child processes have been killed. for pid in process_ids { assert!(process_is_gone(pid)); } Ok(()) } #[ignore] #[test] /// Ensure that killing a `powershell -c` process with a child process that has children of its own /// will properly kill all processes and their children's children without detached processes. fn test_shell_command_children_are_killed() -> Result<()> { let settings = Settings::default(); let mut child = compile_shell_command(&settings, "powershell -c 'sleep 60; sleep 60'; sleep 60") .group_spawn() .expect("Failed to spawn echo"); let pid = child.id(); // Get all processes, so we can make sure they no longer exist afterwards. let process_ids = assert_process_ids(pid, 2, 5000)?; // Kill the process and make sure it'll be killed. assert!(kill_child(0, &mut child).is_ok()); // Assert that the direct child (powershell -c) has been killed. sleep(Duration::from_millis(500)); assert!(process_is_gone(pid)); // Assert that all child processes have been killed. for pid in process_ids { assert!(process_is_gone(pid)); } Ok(()) } #[ignore] #[test] /// Ensure a normal command without `powershell -c` will be killed. fn test_normal_command_is_killed() -> Result<()> { let mut child = Command::new("ping") .arg("localhost") .arg("-t") .group_spawn() .expect("Failed to spawn ping"); let pid = child.id(); // Get all processes, so we can make sure they no longer exist afterwards. let _ = assert_process_ids(pid, 1, 5000)?; // Kill the process and make sure it'll be killed. assert!(kill_child(0, &mut child).is_ok()); // Sleep a little to give all processes time to shutdown. 
sleep(Duration::from_millis(500)); assert!(process_is_gone(pid)); Ok(()) } #[ignore] #[test] /// Ensure a normal command and all its children will be /// properly killed without any detached processes. fn test_normal_command_children_are_killed() -> Result<()> { let mut child = Command::new("powershell") .arg("-c") .arg("sleep 60; sleep 60; sleep 60") .group_spawn() .expect("Failed to spawn echo"); let pid = child.id(); // Get all processes, so we can make sure they no longer exist afterwards. let process_ids = assert_process_ids(pid, 1, 5000)?; // Kill the process and make sure it'll be killed. assert!(kill_child(0, &mut child).is_ok()); // Sleep a little to give all processes time to shutdown. sleep(Duration::from_millis(500)); // Assert that the direct child (powershell -c) has been killed. assert!(process_is_gone(pid)); // Assert that all child processes have been killed. for pid in process_ids { assert!(process_is_gone(pid)); } Ok(()) } } 070701000000DC000081A4000000000000000000000001665F1B690000020D000000000000000000000000000000000000002E00000000pueue-3.4.1/pueue_lib/src/setting_defaults.rs/// The `Default` impl for `bool` is `false`. /// This function covers the `true` case. 
pub(crate) fn default_true() -> bool { true } pub(crate) fn default_host() -> String { "127.0.0.1".to_string() } pub(crate) fn default_port() -> String { "6924".to_string() } pub(crate) fn default_status_time_format() -> String { "%H:%M:%S".to_string() } pub(crate) fn default_status_datetime_format() -> String { "%Y-%m-%d\n%H:%M:%S".to_string() } pub(crate) fn default_callback_log_lines() -> usize { 10 } 070701000000DD000081A4000000000000000000000001665F1B69000047E6000000000000000000000000000000000000002600000000pueue-3.4.1/pueue_lib/src/settings.rsuse std::collections::HashMap; use std::fs::{create_dir_all, File}; use std::io::{prelude::*, BufReader}; use std::path::{Path, PathBuf}; use log::info; use serde_derive::{Deserialize, Serialize}; use shellexpand::tilde; use crate::error::Error; use crate::setting_defaults::*; /// The environment variable that can be set to overwrite pueue's config path. pub const PUEUE_CONFIG_PATH_ENV: &str = "PUEUE_CONFIG_PATH"; /// All settings which are used by both the client and the daemon #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct Shared { /// Don't access this property directly, but rather use the getter with the same name. /// It's only public to allow proper integration testing. /// /// The directory that is used for all of pueue's state. \ /// I.e. task logs, state dumps, etc. pub pueue_directory: Option<PathBuf>, /// Don't access this property directly, but rather use the getter with the same name. /// It's only public to allow proper integration testing. /// /// The location where runtime related files will be placed. /// Defaults to `pueue_directory` unless `$XDG_RUNTIME_DIR` is set. pub runtime_directory: Option<PathBuf>, /// Don't access this property directly, but rather use the getter with the same name. /// It's only public to allow proper integration testing. /// /// The location of the alias file used by the daemon/client when working with /// aliases. 
pub alias_file: Option<PathBuf>, /// If this is set to true, unix sockets will be used. /// Otherwise we default to TCP+TLS #[cfg(not(target_os = "windows"))] #[serde(default = "default_true")] pub use_unix_socket: bool, /// Don't access this property directly, but rather use the getter with the same name. /// It's only public to allow proper integration testing. /// /// The path to the unix socket. #[cfg(not(target_os = "windows"))] pub unix_socket_path: Option<PathBuf>, /// The TCP hostname/ip address. #[serde(default = "default_host")] pub host: String, /// The TCP port. #[serde(default = "default_port")] pub port: String, /// The path where the daemon's PID is located. /// This is by default in `runtime_directory/pueue.pid`. pub pid_path: Option<PathBuf>, /// Don't access this property directly, but rather use the getter with the same name. /// It's only public to allow proper integration testing. /// /// The path to the TLS certificate used by the daemon. \ /// This is also used by the client to verify the daemon's identity. pub daemon_cert: Option<PathBuf>, /// Don't access this property directly, but rather use the getter with the same name. /// It's only public to allow proper integration testing. /// /// The path to the TLS key used by the daemon. pub daemon_key: Option<PathBuf>, /// Don't access this property directly, but rather use the getter with the same name. /// It's only public to allow proper integration testing. /// /// The path to the file containing the shared secret used to authenticate the client. pub shared_secret_path: Option<PathBuf>, } /// All settings which are used by the client #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct Client { /// If set to true, all tasks will be restarted in place, instead of creating a new task. /// False is the default, as you'll lose the logs of the previously failed tasks when /// restarting tasks in place. 
#[serde(default = "Default::default")] pub restart_in_place: bool, /// Whether the client should read the logs directly from disk or whether it should /// request the data from the daemon via socket. #[serde(default = "default_true")] pub read_local_logs: bool, /// Whether the client should show a confirmation question on potentially dangerous actions. #[serde(default = "Default::default")] pub show_confirmation_questions: bool, /// Whether aliases specified in `pueue_aliases.yml` should be expanded in the `pueue status` view /// or shown in their short form. #[serde(default = "Default::default")] pub show_expanded_aliases: bool, /// Whether the client should use dark shades instead of regular colors. #[serde(default = "Default::default")] pub dark_mode: bool, /// The max number of lines each task gets in the `pueue status` view. pub max_status_lines: Option<usize>, /// The format that will be used to display time formats in `pueue status`. #[serde(default = "default_status_time_format")] pub status_time_format: String, /// The format that will be used to display datetime formats in `pueue status`. #[serde(default = "default_status_datetime_format")] pub status_datetime_format: String, } /// All settings which are used by the daemon #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct Daemon { /// Whether a group should be paused as soon as a single task fails #[serde(default = "Default::default")] pub pause_group_on_failure: bool, /// Whether the daemon (and all groups) should be paused as soon as a single task fails #[serde(default = "Default::default")] pub pause_all_on_failure: bool, /// The callback that's called whenever a task finishes. pub callback: Option<String>, /// Environment variables that will be injected into all executed processes. #[serde(default = "Default::default")] pub env_vars: HashMap<String, String>, /// The amount of log lines from stdout/stderr that are passed to the callback command. 
#[serde(default = "default_callback_log_lines")] pub callback_log_lines: usize, /// The command that should be used for task and callback execution. /// The following are the only officially supported modes for Pueue. /// /// Unix default: /// `vec!["sh", "-c", "{{ pueue_command_string }}"]`. /// /// Windows default: /// `vec!["powershell", "-c", "[Console]::OutputEncoding = [Text.UTF8Encoding]::UTF8; {{ pueue_command_string }}"]` pub shell_command: Option<Vec<String>>, } impl Default for Shared { fn default() -> Self { Shared { pueue_directory: None, runtime_directory: None, alias_file: None, #[cfg(not(target_os = "windows"))] unix_socket_path: None, #[cfg(not(target_os = "windows"))] use_unix_socket: true, host: default_host(), port: default_port(), pid_path: None, daemon_cert: None, daemon_key: None, shared_secret_path: None, } } } impl Default for Client { fn default() -> Self { Client { restart_in_place: false, read_local_logs: true, show_confirmation_questions: false, show_expanded_aliases: false, dark_mode: false, max_status_lines: None, status_time_format: default_status_time_format(), status_datetime_format: default_status_datetime_format(), } } } impl Default for Daemon { fn default() -> Self { Daemon { pause_group_on_failure: false, pause_all_on_failure: false, callback: None, callback_log_lines: default_callback_log_lines(), shell_command: None, env_vars: HashMap::new(), } } } /// The parent settings struct. \ /// This contains all other setting structs. #[derive(PartialEq, Eq, Clone, Default, Debug, Deserialize, Serialize)] pub struct Settings { #[serde(default = "Default::default")] pub client: Client, #[serde(default = "Default::default")] pub daemon: Daemon, #[serde(default = "Default::default")] pub shared: Shared, #[serde(default = "HashMap::new")] pub profiles: HashMap<String, NestedSettings>, } /// The nested settings struct for profiles. \ /// In contrast to the normal `Settings` struct, this struct doesn't allow profiles. 
/// That way we prevent nested profiles and problems with self-referencing structs. #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct NestedSettings { #[serde(default = "Default::default")] pub client: Client, #[serde(default = "Default::default")] pub daemon: Daemon, #[serde(default = "Default::default")] pub shared: Shared, } pub fn default_configuration_directory() -> Option<PathBuf> { dirs::config_dir().map(|dir| dir.join("pueue")) } /// Get the default config directory. /// If no config can be found, fall back to the current directory. pub fn configuration_directories() -> Vec<PathBuf> { if let Some(config_dir) = default_configuration_directory() { vec![config_dir, PathBuf::from(".")] } else { vec![PathBuf::from(".")] } } /// Little helper which expands a given path's `~` characters to a fully qualified path. pub fn expand_home(old_path: &Path) -> PathBuf { PathBuf::from(tilde(&old_path.to_string_lossy()).into_owned()) } impl Shared { pub fn pueue_directory(&self) -> PathBuf { if let Some(path) = &self.pueue_directory { expand_home(path) } else if let Some(path) = dirs::data_local_dir() { path.join("pueue") } else { PathBuf::from("./pueue") } } /// Get the current runtime directory in the following order of precedence. /// 1. Config value /// 2. Environment configuration /// 3. Pueue directory pub fn runtime_directory(&self) -> PathBuf { if let Some(path) = &self.runtime_directory { expand_home(path) } else if let Some(path) = dirs::runtime_dir() { path } else { self.pueue_directory() } } /// The unix socket path can either be explicitly specified or it's simply placed in the /// current runtime directory. #[cfg(not(target_os = "windows"))] pub fn unix_socket_path(&self) -> PathBuf { if let Some(path) = &self.unix_socket_path { expand_home(path) } else { self.runtime_directory() .join(format!("pueue_{}.socket", whoami::username())) } } /// The location of the alias file used by the daemon/client when working with /// task aliases. 
pub fn alias_file(&self) -> PathBuf { if let Some(path) = &self.alias_file { expand_home(path) } else if let Some(config_dir) = default_configuration_directory() { config_dir.join("pueue_aliases.yml") } else { PathBuf::from("pueue_aliases.yml") } } /// The daemon's pid path can either be explicitly specified or it's simply placed in the /// current runtime directory. pub fn pid_path(&self) -> PathBuf { if let Some(path) = &self.pid_path { expand_home(path) } else { self.runtime_directory().join("pueue.pid") } } pub fn daemon_cert(&self) -> PathBuf { if let Some(path) = &self.daemon_cert { expand_home(path) } else { self.pueue_directory().join("certs").join("daemon.cert") } } pub fn daemon_key(&self) -> PathBuf { if let Some(path) = &self.daemon_key { expand_home(path) } else { self.pueue_directory().join("certs").join("daemon.key") } } pub fn shared_secret_path(&self) -> PathBuf { if let Some(path) = &self.shared_secret_path { expand_home(path) } else { self.pueue_directory().join("shared_secret") } } } impl Settings { /// Try to read existing config files, while using default values for non-existing fields. /// If successful, this will return a full config as well as a boolean on whether we found an /// existing configuration file or not. /// /// The default local config locations depend on the current target. pub fn read(from_file: &Option<PathBuf>) -> Result<(Settings, bool), Error> { // If no explicit path is provided, we look for the PUEUE_CONFIG_PATH env variable. let from_file = from_file .clone() .or_else(|| std::env::var(PUEUE_CONFIG_PATH_ENV).map(PathBuf::from).ok()); // Load the config from a very specific file path if let Some(path) = &from_file { // Open the file in read-only mode with buffer. 
let file = File::open(path) .map_err(|err| Error::IoPathError(path.clone(), "opening config file", err))?; let reader = BufReader::new(file); let settings = serde_yaml::from_reader(reader) .map_err(|err| Error::ConfigDeserialization(err.to_string()))?; return Ok((settings, true)); }; info!("Parsing config files"); let config_dirs = configuration_directories(); for directory in config_dirs.into_iter() { let path = directory.join("pueue.yml"); info!("Checking path: {path:?}"); // Check if the file exists and parse it. if path.exists() && path.is_file() { info!("Found config file at: {path:?}"); // Open the file in read-only mode with buffer. let file = File::open(&path) .map_err(|err| Error::IoPathError(path, "opening config file.", err))?; let reader = BufReader::new(file); let settings = serde_yaml::from_reader(reader) .map_err(|err| Error::ConfigDeserialization(err.to_string()))?; return Ok((settings, true)); } } info!("No config file found. Use default config."); // Return a default configuration if we couldn't find a file. Ok((Settings::default(), false)) } /// Save the current configuration as a file to the given path. \ /// If no path is given, the default configuration path will be used. \ /// The file is then written to the main configuration directory of the respective OS. pub fn save(&self, path: &Option<PathBuf>) -> Result<(), Error> { let config_path = if let Some(path) = path { path.clone() } else if let Ok(path) = std::env::var(PUEUE_CONFIG_PATH_ENV) { PathBuf::from(path) } else if let Some(path) = dirs::config_dir() { let path = path.join("pueue"); path.join("pueue.yml") } else { return Err(Error::Generic( "Failed to resolve default config directory. User home cannot be determined." 
.into(), )); }; let config_dir = config_path .parent() .ok_or_else(|| Error::InvalidPath("Couldn't resolve config directory".into()))?; // Create the config dir, if it doesn't exist yet if !config_dir.exists() { create_dir_all(config_dir).map_err(|err| { Error::IoPathError(config_dir.to_path_buf(), "creating config dir", err) })?; } let content = match serde_yaml::to_string(self) { Ok(content) => content, Err(error) => { return Err(Error::Generic(format!( "Configuration file serialization failed:\n{error}" ))) } }; let mut file = File::create(&config_path).map_err(|err| { Error::IoPathError(config_dir.to_path_buf(), "creating settings file", err) })?; file.write_all(content.as_bytes()).map_err(|err| { Error::IoPathError(config_dir.to_path_buf(), "writing settings file", err) })?; Ok(()) } /// Try to load a profile. Error if it doesn't exist. pub fn load_profile(&mut self, profile: &str) -> Result<(), Error> { let profile = self.profiles.remove(profile).ok_or_else(|| { Error::ConfigDeserialization(format!("Couldn't find profile with name \"{profile}\"")) })?; self.client = profile.client; self.daemon = profile.daemon; self.shared = profile.shared; Ok(()) } } #[cfg(test)] mod test { use super::*; /// Check if profiles get loaded correctly. #[test] fn test_load_profile() { // Create some default settings and ensure that default values are loaded. let mut settings = Settings::default(); assert_eq!( settings.client.status_time_format, default_status_time_format() ); assert_eq!( settings.daemon.callback_log_lines, default_callback_log_lines() ); assert_eq!(settings.shared.host, default_host()); // Create a new profile with slightly different values. 
let mut profile = Settings::default(); profile.client.status_time_format = "test".to_string(); profile.daemon.callback_log_lines = 100_000; profile.shared.host = "quatschhost".to_string(); let profile = NestedSettings { client: profile.client, daemon: profile.daemon, shared: profile.shared, }; settings.profiles.insert("testprofile".to_string(), profile); // Load the profile and ensure the new values are now loaded. settings .load_profile("testprofile") .expect("We just added the profile"); assert_eq!(settings.client.status_time_format, "test"); assert_eq!(settings.daemon.callback_log_lines, 100_000); assert_eq!(settings.shared.host, "quatschhost"); } /// A proper pueue [Error] should be thrown if the profile cannot be found. #[test] fn test_error_on_missing_profile() { let mut settings = Settings::default(); let result = settings.load_profile("doesn't exist"); let expected_error_message = "Couldn't find profile with name \"doesn't exist\""; if let Err(Error::ConfigDeserialization(error_message)) = result { assert_eq!(error_message, expected_error_message); return; } panic!("Got unexpected result when expecting missing profile error: {result:?}"); } } 070701000000DE000081A4000000000000000000000001665F1B6900001CD2000000000000000000000000000000000000002300000000pueue-3.4.1/pueue_lib/src/state.rsuse std::collections::BTreeMap; use std::sync::{Arc, Mutex}; use serde_derive::{Deserialize, Serialize}; use crate::error::Error; use crate::task::{Task, TaskStatus}; pub const PUEUE_DEFAULT_GROUP: &str = "default"; pub type SharedState = Arc<Mutex<State>>; /// Represents the current status of a group. /// Each group acts as a queue and can be managed individually. #[derive(PartialEq, Eq, Clone, Debug, Copy, Deserialize, Serialize)] pub enum GroupStatus { Running, Paused, } /// The representation of a group. 
#[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct Group { pub status: GroupStatus, pub parallel_tasks: usize, } /// This is the full representation of the current state of the Pueue daemon. /// /// This includes /// - The currently used settings. /// - The full task list /// - The current status of all tasks /// - All known groups. /// /// However, the State does NOT include: /// - Information about child processes /// - Handles to child processes /// /// That information is saved in the daemon's TaskHandler. /// /// Most functions implemented on the state shouldn't be used by third party software. /// The daemon is constantly changing and persisting the state. \ /// Any changes applied to a state and saved to disk will most likely be overwritten /// after a short time. /// /// /// The daemon uses the state as a piece of shared memory between its threads. /// It's wrapped in a Mutex, which allows us to guarantee sequential access to any crucial /// information, such as status changes and incoming commands by the client. #[derive(PartialEq, Eq, Clone, Debug, Deserialize, Serialize)] pub struct State { /// All tasks currently managed by the daemon. pub tasks: BTreeMap<usize, Task>, /// All groups with their current state and configuration. pub groups: BTreeMap<String, Group>, } impl Default for State { fn default() -> Self { Self::new() } } /// A little helper struct that's returned by the state's task filter functions. /// Contains all task ids of tasks that matched and didn't match a given condition. #[derive(Debug, Default)] pub struct FilteredTasks { pub matching_ids: Vec<usize>, pub non_matching_ids: Vec<usize>, } impl State { /// Create a new default state. 
pub fn new() -> State { let mut state = State { tasks: BTreeMap::new(), groups: BTreeMap::new(), }; state.create_group(PUEUE_DEFAULT_GROUP); state } /// Add a new task pub fn add_task(&mut self, mut task: Task) -> usize { let next_id = match self.tasks.keys().max() { None => 0, Some(id) => id + 1, }; task.id = next_id; self.tasks.insert(next_id, task); next_id } /// A small helper to change the status of a specific task. pub fn change_status(&mut self, id: usize, new_status: TaskStatus) { if let Some(ref mut task) = self.tasks.get_mut(&id) { task.status = new_status; }; } /// Add a new group to the daemon. \ /// This also checks whether the given group already exists. /// Create a state.group entry and a settings.group entry if it doesn't. pub fn create_group(&mut self, name: &str) -> &mut Group { self.groups.entry(name.into()).or_insert(Group { status: GroupStatus::Running, parallel_tasks: 1, }) } /// Remove a group. /// This also iterates through all tasks and sets any tasks' group /// to the `default` group if it matches the deleted group. pub fn remove_group(&mut self, group: &str) -> Result<(), Error> { if group.eq(PUEUE_DEFAULT_GROUP) { return Err(Error::Generic( "You cannot remove the default group.".into(), )); } self.groups.remove(group); // Reset all tasks with removed group to the default. for (_, task) in self.tasks.iter_mut() { if task.group.eq(group) { task.set_default_group(); } } Ok(()) } /// Set the group status (running/paused) for all groups including the default queue. pub fn set_status_for_all_groups(&mut self, status: GroupStatus) { for (_, group) in self.groups.iter_mut() { group.status = status; } } /// Get all ids of tasks inside a specific group. pub fn task_ids_in_group(&self, group: &str) -> Vec<usize> { self.tasks .iter() .filter(|(_, task)| task.group.eq(group)) .map(|(id, _)| *id) .collect() } /// This checks whether some tasks match the expected filter criteria. \ /// The first result is the list of task_ids that match these statuses. 
\ /// The second result is the list of task_ids that don't match these statuses. \ /// /// By default, this checks all tasks in the current state. If a list of task_ids is /// provided as the third parameter, only those tasks will be checked. pub fn filter_tasks<F>(&self, condition: F, task_ids: Option<Vec<usize>>) -> FilteredTasks where F: Fn(&Task) -> bool, { // Either use all tasks or only the explicitly specified ones. let task_ids = match task_ids { Some(ids) => ids, None => self.tasks.keys().cloned().collect(), }; self.filter_task_ids(condition, task_ids) } /// Same as [State::filter_tasks], but only checks for tasks of a specific group. pub fn filter_tasks_of_group<F>(&self, condition: F, group: &str) -> FilteredTasks where F: Fn(&Task) -> bool, { // Return empty vectors if there's no such group. if !self.groups.contains_key(group) { return FilteredTasks::default(); } // Filter all task ids of tasks that match the given group. let task_ids = self .tasks .iter() .filter(|(_, task)| task.group == group) .map(|(id, _)| *id) .collect(); self.filter_task_ids(condition, task_ids) } /// Internal function used to check which of the given tasks match the provided filter. /// /// Returns a [FilteredTasks] struct containing all matching and non-matching task ids. fn filter_task_ids<F>(&self, condition: F, task_ids: Vec<usize>) -> FilteredTasks where F: Fn(&Task) -> bool, { let mut matching_ids = Vec::new(); let mut non_matching_ids = Vec::new(); // Filter all task ids that match the provided statuses. for task_id in task_ids.iter() { // Check whether the task exists and save all non-existing task ids. match self.tasks.get(task_id) { None => { non_matching_ids.push(*task_id); continue; } Some(task) => { // Check whether the task status matches the filter. 
if condition(task) { matching_ids.push(*task_id); } else { non_matching_ids.push(*task_id); } } }; } FilteredTasks { matching_ids, non_matching_ids, } } } 070701000000DF000081A4000000000000000000000001665F1B6900001A5F000000000000000000000000000000000000002200000000pueue-3.4.1/pueue_lib/src/task.rsuse std::{collections::HashMap, path::PathBuf}; use chrono::prelude::*; use serde_derive::{Deserialize, Serialize}; use strum_macros::Display; use crate::state::PUEUE_DEFAULT_GROUP; /// This enum represents the status of the internal task handling of Pueue. /// Its variants represent the internal task life-cycle. #[derive(PartialEq, Eq, Clone, Debug, Display, Serialize, Deserialize)] pub enum TaskStatus { /// The task is queued and waiting for a free slot Queued, /// The task has been manually stashed. It won't be executed until it's manually enqueued Stashed { enqueue_at: Option<DateTime<Local>> }, /// The task is started and running Running, /// A previously running task has been paused Paused, /// Task finished. The actual result of the task is handled by the [TaskResult] enum. Done(TaskResult), /// Used while the command of a task is edited (to prevent starting the task) Locked, } /// This enum represents the exit status of an actually spawned program. /// It's only used once a task has finished or failed in some way. #[derive(PartialEq, Eq, Clone, Debug, Display, Serialize, Deserialize)] pub enum TaskResult { /// Task exited with 0 Success, /// The task failed in some other kind of way (error code != 0) Failed(i32), /// The task couldn't be spawned. Probably a typo in the command FailedToSpawn(String), /// Task has been actively killed by either the user or the daemon on shutdown Killed, /// Some kind of IO error. This should barely ever happen. Please check the daemon logs. Errored, /// A dependency of the task failed. DependencyFailed, } /// Representation of a task. /// start will be set the second the task starts processing. 
/// `result`, `output` and `end` won't be initialized until the task has finished.
#[derive(PartialEq, Eq, Clone, Deserialize, Serialize)]
pub struct Task {
    pub id: usize,
    #[serde(default = "Local::now")]
    pub created_at: DateTime<Local>,
    #[serde(default = "Default::default")]
    pub enqueued_at: Option<DateTime<Local>>,
    pub original_command: String,
    pub command: String,
    pub path: PathBuf,
    pub envs: HashMap<String, String>,
    pub group: String,
    pub dependencies: Vec<usize>,
    #[serde(default = "Default::default")]
    pub priority: i32,
    pub label: Option<String>,
    pub status: TaskStatus,
    /// This field is only used when editing the path/command of a task.
    /// It's necessary, since we enter the `Locked` state during editing.
    /// However, we have to go back to the previous state after we finished editing.
    ///
    /// TODO: Refactor this into a `TaskStatus::Locked{previous_status: TaskStatus}`.
    pub prev_status: TaskStatus,
    pub start: Option<DateTime<Local>>,
    pub end: Option<DateTime<Local>>,
}

impl Task {
    #[allow(clippy::too_many_arguments)]
    pub fn new(
        original_command: String,
        path: PathBuf,
        envs: HashMap<String, String>,
        group: String,
        starting_status: TaskStatus,
        dependencies: Vec<usize>,
        priority: i32,
        label: Option<String>,
    ) -> Task {
        Task {
            id: 0,
            created_at: Local::now(),
            enqueued_at: None,
            original_command: original_command.clone(),
            command: original_command,
            path,
            envs,
            group,
            dependencies,
            priority,
            label,
            status: starting_status.clone(),
            prev_status: starting_status,
            start: None,
            end: None,
        }
    }

    /// A convenience function used to duplicate a task.
    pub fn from_task(task: &Task) -> Task {
        Task {
            id: 0,
            created_at: Local::now(),
            enqueued_at: None,
            original_command: task.original_command.clone(),
            command: task.command.clone(),
            path: task.path.clone(),
            envs: task.envs.clone(),
            group: task.group.clone(),
            dependencies: Vec::new(),
            priority: 0,
            label: task.label.clone(),
            status: TaskStatus::Queued,
            prev_status: TaskStatus::Queued,
            start: None,
            end: None,
        }
    }

    /// Whether the task has a running process managed by the TaskHandler.
    pub fn is_running(&self) -> bool {
        matches!(self.status, TaskStatus::Running | TaskStatus::Paused)
    }

    /// Whether the task's process finished.
    pub fn is_done(&self) -> bool {
        matches!(self.status, TaskStatus::Done(_))
    }

    /// Check if the task errored. \
    /// Returns `false` if the task either:
    /// 1. finished successfully, or
    /// 2. didn't finish yet.
    pub fn failed(&self) -> bool {
        match &self.status {
            TaskStatus::Done(result) => !matches!(result, TaskResult::Success),
            _ => false,
        }
    }

    /// Convenience helper on whether a task is stashed.
    pub fn is_stashed(&self) -> bool {
        matches!(self.status, TaskStatus::Stashed { .. })
    }

    /// Check whether a task is queued or might soon be enqueued.
    pub fn is_queued(&self) -> bool {
        matches!(
            self.status,
            TaskStatus::Queued | TaskStatus::Stashed { enqueue_at: Some(_) }
        )
    }

    /// Small convenience function to set the task's group to the default group.
    pub fn set_default_group(&mut self) {
        self.group = String::from(PUEUE_DEFAULT_GROUP);
    }

    pub fn is_in_default_group(&self) -> bool {
        self.group.eq(PUEUE_DEFAULT_GROUP)
    }
}

/// We use a custom `Debug` implementation for [Task], as the `envs` field just has too much
/// info in it and makes the log output much too verbose.
///
/// Furthermore, there might be secrets in the environment, resulting in a possible leak if
/// users copy-paste their log output for debugging.
impl std::fmt::Debug for Task {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_struct("Task")
            .field("id", &self.id)
            .field("original_command", &self.original_command)
            .field("command", &self.command)
            .field("path", &self.path)
            .field("envs", &"hidden")
            .field("group", &self.group)
            .field("dependencies", &self.dependencies)
            .field("label", &self.label)
            .field("status", &self.status)
            .field("prev_status", &self.prev_status)
            .field("start", &self.start)
            .field("end", &self.end)
            .field("priority", &self.priority)
            .finish()
    }
}

File: pueue-3.4.1/pueue_lib/tests/data/v0.15.0_settings.yml

---
client:
  bogus_settings: ~
  restart_in_place: false
  read_local_logs: true
  show_confirmation_questions: false
  show_expanded_aliases: false
  max_status_lines: ~
  status_time_format: "%H:%M:%S"
  status_datetime_format: "%Y-%m-%d\n%H:%M:%S"
daemon:
  default_parallel_tasks: 1
  pause_group_on_failure: false
  pause_all_on_failure: false
  callback: "notify-send \"Task {{ id }}\nCommand: {{ command }}\nPath: {{ path }}\nFinished with status '{{ result }}'\nDuration: $(humanizer time -s $(bc <<< \"{{end}} - {{start}}\"))\""
  groups:
    test: 1
    webhook: 1
shared:
  pueue_directory: ~/.local/share/pueue
  use_unix_socket: true
  unix_socket_path: ~/.local/share/pueue/pueue.socket
  host: localhost
  port: "6924"
  daemon_cert: ~/.local/share/pueue/certs/daemon.cert
  daemon_key: ~/.local/share/pueue/certs/daemon.key
  shared_secret_path: ~/.local/share/pueue/shared_secret
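The custom `Debug` implementation above redacts the `envs` field so environment secrets can't leak into copy-pasted log output. A minimal, self-contained sketch of that redaction pattern (the trimmed-down `Task` here is hypothetical, carrying only a few of the real fields):

```rust
use std::collections::HashMap;
use std::fmt;

// Simplified stand-in for the real `Task`; only a few fields are mirrored.
struct Task {
    id: usize,
    command: String,
    envs: HashMap<String, String>,
}

// Print a placeholder instead of the environment map, so `{:?}` output
// never contains the actual values.
impl fmt::Debug for Task {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Task")
            .field("id", &self.id)
            .field("command", &self.command)
            .field("envs", &"hidden")
            .finish()
    }
}

fn main() {
    let mut envs = HashMap::new();
    envs.insert("API_TOKEN".to_string(), "secret".to_string());
    let task = Task { id: 0, command: "ls".to_string(), envs };

    let output = format!("{task:?}");
    // The placeholder shows up; the secret value does not.
    assert!(output.contains("hidden"));
    assert!(!output.contains("secret"));
    println!("{output}");
}
```

Any `{:?}` formatting of the struct now prints `envs: "hidden"` regardless of the map's contents.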
File: pueue-3.4.1/pueue_lib/tests/data/v0.19.0_state.json

{
  "settings": {
    "client": {
      "restart_in_place": true,
      "read_local_logs": true,
      "show_confirmation_questions": false,
      "show_expanded_aliases": false,
      "dark_mode": false,
      "max_status_lines": 10,
      "status_time_format": "%H:%M:%S",
      "status_datetime_format": "%Y-%m-%d %H:%M:%S"
    },
    "daemon": {
      "pause_group_on_failure": false,
      "pause_all_on_failure": false,
      "callback": "notify-send \"Task {{ id }}\nCommand: {{ command }}\nPath: {{ path }}\nFinished with status '{{ result }}'\nTook: $(bc <<< \"{{end}} - {{start}}\") seconds\"",
      "callback_log_lines": 10
    },
    "shared": {
      "pueue_directory": null,
      "runtime_directory": null,
      "use_unix_socket": true,
      "unix_socket_path": null,
      "host": "127.0.0.1",
      "port": "6924",
      "daemon_cert": null,
      "daemon_key": null,
      "shared_secret_path": null
    },
    "profiles": {}
  },
  "tasks": {
    "0": {
      "id": 0,
      "original_command": "ls",
      "command": "ls",
      "path": "/home/nuke/.local/share/pueue",
      "envs": {},
      "group": "default",
      "dependencies": [],
      "label": null,
      "status": { "Done": "Success" },
      "prev_status": "Queued",
      "start": "2022-05-09T18:41:29.273563806+02:00",
      "end": "2022-05-09T18:41:29.473998692+02:00"
    },
    "1": {
      "id": 1,
      "original_command": "ls",
      "command": "ls",
      "path": "/home/nuke/.local/share/pueue",
      "envs": {
        "PUEUE_WORKER_ID": "0",
        "PUEUE_GROUP": "test"
      },
      "group": "test",
      "dependencies": [],
      "label": null,
      "status": { "Done": "Success" },
      "prev_status": "Queued",
      "start": "2022-05-09T18:43:30.683677276+02:00",
      "end": "2022-05-09T18:43:30.884243263+02:00"
    },
    "2": {
      "id": 2,
      "original_command": "ls",
      "command": "ls",
      "path": "/home/nuke/.local/share/pueue",
      "envs": {
        "PUEUE_WORKER_ID": "0",
        "PUEUE_GROUP": "test"
      },
      "group": "test",
      "dependencies": [],
      "label": null,
      "status": "Queued",
      "prev_status": "Queued",
      "start": null,
      "end": null
    },
    "3": {
      "id": 3,
      "original_command": "ls stash_it",
      "command": "ls stash_it",
      "path": "/home/nuke/.local/share/pueue",
      "envs": {},
      "group": "default",
      "dependencies": [],
      "label": null,
      "status": { "Stashed": { "enqueue_at": null } },
      "prev_status": { "Stashed": { "enqueue_at": null } },
      "start": null,
      "end": null
    }
  },
  "groups": {
    "default": { "status": "Running", "parallel_tasks": 1 },
    "test": { "status": "Paused", "parallel_tasks": 2 }
  },
  "config_path": null
}

File: pueue-3.4.1/pueue_lib/tests/helper.rs

use tempfile::{Builder, TempDir};

use portpicker::pick_unused_port;
use pueue_lib::settings::*;

pub fn get_shared_settings(
    #[cfg_attr(target_os = "windows", allow(unused_variables))] use_unix_socket: bool,
) -> (Shared, TempDir) {
    // Create a temporary directory used for testing.
    let tempdir = Builder::new().prefix("pueue_lib-").tempdir().unwrap();
    let tempdir_path = tempdir.path();

    std::fs::create_dir(tempdir_path.join("certs")).unwrap();

    let shared_settings = Shared {
        pueue_directory: Some(tempdir_path.to_path_buf()),
        runtime_directory: Some(tempdir_path.to_path_buf()),
        alias_file: None,
        #[cfg(not(target_os = "windows"))]
        use_unix_socket,
        #[cfg(not(target_os = "windows"))]
        unix_socket_path: None,
        pid_path: None,
        host: "localhost".to_string(),
        port: pick_unused_port()
            .expect("There should be a free port")
            .to_string(),
        daemon_cert: Some(tempdir_path.join("certs").join("daemon.cert")),
        daemon_key: Some(tempdir_path.join("certs").join("daemon.key")),
        shared_secret_path: Some(tempdir_path.join("secret")),
    };

    (shared_settings, tempdir)
}

File: pueue-3.4.1/pueue_lib/tests/message_backward_compatibility.rs

use serde_cbor::de::from_slice;
use serde_cbor::ser::to_vec;
use serde_derive::{Deserialize, Serialize};

use pueue_lib::network::message::Message as OriginalMessage;

/// This is the main message enum. \
/// Everything that's communicated in Pueue can be serialized as this enum.
#[derive(Clone, Debug, Deserialize, Serialize)]
pub enum Message {
    Switch(SwitchMessage),
    Clean(CleanMessage),
}

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct SwitchMessage {
    pub task_id_1: usize,
    pub task_id_2: usize,
    pub some_new_field: usize,
}

#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct CleanMessage {}

/// Make sure we can deserialize old messages as long as we have default values set.
#[test]
fn test_deserialize_old_message() {
    let message = Message::Clean(CleanMessage {});
    let payload_bytes = to_vec(&message).unwrap();

    let message: OriginalMessage = from_slice(&payload_bytes).unwrap();
    if let OriginalMessage::Clean(message) = message {
        // The serialized message didn't have the `successful_only` property yet.
        // Instead the default `false` should be used.
        assert!(!message.successful_only);
    } else {
        panic!("It must be a clean message");
    }
}

/// Make sure we can deserialize new messages, even if new values exist.
#[test]
fn test_deserialize_new_message() {
    let message = Message::Switch(SwitchMessage {
        task_id_1: 0,
        task_id_2: 1,
        some_new_field: 2,
    });
    let payload_bytes = to_vec(&message).unwrap();

    let message: OriginalMessage = from_slice(&payload_bytes).unwrap();
    // The serialized message did have an additional field. The deserialization works anyway.
    assert!(matches!(message, OriginalMessage::Switch(_)));
}

File: pueue-3.4.1/pueue_lib/tests/settings_backward_compatibility.rs

use std::path::PathBuf;

use anyhow::{Context, Result};

use pueue_lib::settings::Settings;

/// From 0.15.0 on, we aim to have full backward compatibility.
/// For this reason, an old (slightly modified) v0.15.0 serialized settings file
/// has been checked in.
///
/// We have to be able to restore from that config at all costs.
/// Everything else results in a breaking change and needs a major version change.
/// (For `pueue_lib` as well as `pueue`!)
///
/// On top of simply having old settings, I also removed a few default fields.
/// This should be handled as well.
#[test]
fn test_restore_from_old_state() -> Result<()> {
    better_panic::install();
    let old_settings_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"))
        .join("tests")
        .join("data")
        .join("v0.15.0_settings.yml");

    // Open the v0.15.0 file and ensure the settings file can be read.
    let (_settings, config_found) = Settings::read(&Some(old_settings_path))
        .context("Failed to read old config with defaults:")?;

    assert!(config_found);

    Ok(())
}

File: pueue-3.4.1/pueue_lib/tests/state_backward_compatibility.rs

use std::{fs, path::PathBuf};

use anyhow::{Context, Result};

use pueue_lib::state::{GroupStatus, State, PUEUE_DEFAULT_GROUP};

/// From 0.18.0 on, we aim to have full backward compatibility for our state deserialization.
/// For this reason, an old (slightly modified) v0.18.0 serialized state has been checked in.
///
/// **Warning**: This is only one part of our state tests.
/// There is another full test suite in the `pueue` project, which deals with domain
/// specific state restoration logic. This test only checks whether we can
/// deserialize old state files.
///
/// We have to be able to restore from that state at all costs.
/// Everything else results in a breaking change and needs a major version change.
/// (For `pueue_lib` as well as `pueue`!)
///
/// On top of simply having an old state, I also removed a few default fields.
/// This should be handled as well.
#[test]
fn test_restore_from_old_state() -> Result<()> {
    better_panic::install();
    let path = PathBuf::from(env!("CARGO_MANIFEST_DIR"))
        .join("tests")
        .join("data")
        .join("v0.19.0_state.json");

    // Try to load the file.
    let data = fs::read_to_string(path).context("State restore: Failed to read file")?;

    // Try to deserialize the state file.
    let state: State = serde_json::from_str(&data).context("Failed to deserialize state.")?;

    // Make sure the groups are loaded.
    assert!(
        state.groups.contains_key(PUEUE_DEFAULT_GROUP),
        "Group 'default' should exist."
    );
    assert_eq!(
        state.groups.get(PUEUE_DEFAULT_GROUP).unwrap().status,
        GroupStatus::Running
    );

    assert!(
        state.groups.contains_key("test"),
        "Group 'test' should exist"
    );
    assert_eq!(
        state.groups.get("test").unwrap().status,
        GroupStatus::Paused
    );

    assert!(state.tasks.contains_key(&3), "Task 3 should exist");
    assert_eq!(state.tasks.get(&3).unwrap().command, "ls stash_it");

    Ok(())
}

File: pueue-3.4.1/pueue_lib/tests/tls_socket.rs

use anyhow::Result;
use pretty_assertions::assert_eq;
use serde_cbor::de::from_slice;
use serde_cbor::ser::to_vec;
use tokio::task;

use pueue_lib::network::certificate::create_certificates;
use pueue_lib::network::message::*;
use pueue_lib::network::protocol::*;

mod helper;

/// This tests whether we can create a listener and client that communicate via TLS sockets.
#[tokio::test]
async fn test_tls_socket() -> Result<()> {
    better_panic::install();
    let (shared_settings, _tempdir) = helper::get_shared_settings(false);

    // Create new stub tls certificates/keys in our temp directory.
    create_certificates(&shared_settings).unwrap();

    let listener = get_listener(&shared_settings).await.unwrap();
    let message = create_success_message("This is a test");
    let original_bytes = to_vec(&message).expect("Failed to serialize message.");

    // Spawn a sub thread that:
    // 1. Accepts a new connection
    // 2. Reads a message
    // 3. Sends the same message back
    task::spawn(async move {
        let mut stream = listener.accept().await.unwrap();
        let message_bytes = receive_bytes(&mut stream).await.unwrap();
        let message: Message = from_slice(&message_bytes).unwrap();
        send_message(message, &mut stream).await.unwrap();
    });

    let mut client = get_client_stream(&shared_settings).await.unwrap();

    // Create a client that sends a message and instantly receives it.
    send_message(message, &mut client).await.unwrap();
    let response_bytes = receive_bytes(&mut client).await.unwrap();
    let _message: Message = from_slice(&response_bytes).unwrap();

    assert_eq!(response_bytes, original_bytes);

    Ok(())
}

File: pueue-3.4.1/pueue_lib/tests/unix_socket.rs

#[cfg(not(target_os = "windows"))]
mod helper;

#[cfg(not(target_os = "windows"))]
mod tests {
    use anyhow::Result;
    use pretty_assertions::assert_eq;
    use serde_cbor::de::from_slice;
    use serde_cbor::ser::to_vec;
    use tokio::task;

    use pueue_lib::network::message::*;
    use pueue_lib::network::protocol::*;

    use super::*;

    /// This tests whether we can create a listener and client that communicate via unix sockets.
    #[tokio::test]
    async fn test_unix_socket() -> Result<()> {
        better_panic::install();
        let (shared_settings, _tempdir) = helper::get_shared_settings(true);
        let listener = get_listener(&shared_settings).await?;

        let message = create_success_message("This is a test");
        let original_bytes = to_vec(&message).expect("Failed to serialize message.");

        // Spawn a sub thread that:
        // 1. Accepts a new connection
        // 2. Reads a message
        // 3. Sends the same message back
        task::spawn(async move {
            let mut stream = listener.accept().await.unwrap();
            let message_bytes = receive_bytes(&mut stream).await.unwrap();
            let message: Message = from_slice(&message_bytes).unwrap();
            send_message(message, &mut stream).await.unwrap();
        });

        let mut client = get_client_stream(&shared_settings).await?;

        // Create a client that sends a message and instantly receives it.
        send_message(message, &mut client).await?;
        let response_bytes = receive_bytes(&mut client).await?;
        let _message: Message = from_slice(&response_bytes)?;

        assert_eq!(response_bytes, original_bytes);

        Ok(())
    }
}

File: pueue-3.4.1/utils/pueued.plist

<?xml version="1.0" encoding="UTF-8"?>
<!-- This is the plist file for the pueue daemon on macOS -->
<!-- Place pueued.plist in ~/Library/LaunchAgents -->
<!-- To enable the daemon, navigate into the directory (`cd ~/Library/LaunchAgents`) and type `launchctl load pueued.plist` -->
<!-- To start the daemon, type `launchctl start pueued` -->
<!-- To check that the daemon is running, type `launchctl list | grep pueued` -->
<!-- You have to change the program location if pueue is not installed with homebrew -->
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>pueued</string>
    <key>ProgramArguments</key>
    <array>
      <string>/opt/homebrew/bin/pueued</string>
      <string>-vv</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>

File: pueue-3.4.1/utils/pueued.service

# This is the service file for the pueue daemon.
# To enable the daemon, type `systemctl --user enable pueued.service`.
# To start the daemon, type `systemctl --user start pueued.service`.

[Unit]
Description=Pueue Daemon - CLI process scheduler and manager

[Service]
Restart=no
ExecStart=/usr/bin/pueued -vv

[Install]
WantedBy=default.target

File: pueue-3.4.1/utils/release_artifact_hashes.py

#!/bin/env python3
#
# A small helper script which downloads all release artifacts and creates a sha256 sum for each.
#
# Accepts a release tag as the first parameter, e.g. 'v3.1.0'.
# Otherwise it will print the sha256 sums for the latest release.
import sys

import requests
import hashlib

base_url = "https://github.com/Nukesor/pueue/releases/latest/download/"
if len(sys.argv) > 1:
    release = sys.argv[1]
    print(f"Sha256 sums for artifacts of release {release}")
    base_url = f"https://github.com/Nukesor/pueue/releases/download/{release}/"
else:
    print("Sha256 sums for artifacts of latest release")

artifacts = [
    "pueue-linux-x86_64",
    "pueued-linux-x86_64",
    "pueue-macos-x86_64",
    "pueued-macos-x86_64",
    "pueued-windows-x86_64.exe",
    "pueue-windows-x86_64.exe",
    "pueue-darwin-aarch64",
    "pueued-darwin-aarch64",
    "pueue-linux-aarch64",
    "pueued-linux-aarch64",
    "pueue-linux-arm",
    "pueued-linux-arm",
    "pueue-linux-armv7",
    "pueued-linux-armv7",
]

for artifact in artifacts:
    url = base_url + artifact
    response = requests.get(url, stream=True)

    sha256_hash = hashlib.sha256()
    if response.status_code == 200:
        for chunk in response.iter_content(4096):
            sha256_hash.update(chunk)
        sha256_sum = sha256_hash.hexdigest()
        print(f"{artifact}: {sha256_sum}")
    else:
        print(f"Failed to download {artifact}. Status code: {response.status_code}")
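The `filter_task_ids` helper in the `state.rs` excerpt at the top of this file dump partitions a set of task ids into matching and non-matching groups, with ids of non-existing tasks counted as non-matching. A minimal, self-contained sketch of that partition logic (plain `&str` statuses stand in for real `Task` values, and `filter_task_ids` is written as a free function rather than the real `State` method):

```rust
use std::collections::BTreeMap;

// Simplified mirror of `State::filter_task_ids`: split the given ids into
// (matching, non-matching) under a predicate; unknown ids are non-matching.
fn filter_task_ids<F>(
    tasks: &BTreeMap<usize, &str>,
    condition: F,
    ids: Vec<usize>,
) -> (Vec<usize>, Vec<usize>)
where
    F: Fn(&str) -> bool,
{
    let mut matching = Vec::new();
    let mut non_matching = Vec::new();
    for id in ids {
        match tasks.get(&id) {
            // The task exists and its status satisfies the filter.
            Some(status) if condition(status) => matching.push(id),
            // Either the status doesn't match or the task doesn't exist at all.
            _ => non_matching.push(id),
        }
    }
    (matching, non_matching)
}

fn main() {
    let mut tasks = BTreeMap::new();
    tasks.insert(0, "Queued");
    tasks.insert(1, "Running");
    tasks.insert(2, "Done");

    // Id 5 doesn't exist and must land in the non-matching bucket.
    let (matching, non_matching) =
        filter_task_ids(&tasks, |status| status == "Running", vec![0, 1, 2, 5]);
    assert_eq!(matching, vec![1]);
    assert_eq!(non_matching, vec![0, 2, 5]);
    println!("matching: {matching:?}, non-matching: {non_matching:?}");
}
```

Folding non-existing ids into the non-matching bucket (the `None` arm in the real method) lets callers report "task X does not match" and "task X does not exist" through a single `FilteredTasks`-style result.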