python-cotengra

Source Files
Filename                 Size
cotengra-0.6.2.tar.gz    3.17 MB (3,326,192 bytes)
python-cotengra.changes  5.49 KB (5,623 bytes)
python-cotengra.spec     3.22 KB (3,299 bytes)
Revision 5 (latest revision is 6)
Markéta Machová (mcalabkova) accepted request 1177286 from John Paul Adrian Glaubitz (glaubitz) (revision 5)
- Update to version 0.6.2
  * Fix final (output) contractions being mistakenly marked as not tensordot-able.
  * When `implementation="autoray"`, don't require a backend to have both
    `einsum` and `tensordot`; instead fall back to `cotengra`'s own
    implementations (see the sketch after this version's entries).
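
A minimal sketch of the fallback behavior described above, assuming the
`implementation` keyword of `ContractionTree.contract` and the
`cotengra.utils.rand_equation` helper from cotengra's documented API (neither
is spelled out in this changelog):

```python
import numpy as np
import cotengra as ctg

# a small random test equation: index tuples plus shapes and dimension sizes
inputs, output, shapes, size_dict = ctg.utils.rand_equation(10, 3, seed=1)
arrays = [np.random.rand(*s) for s in shapes]

tree = ctg.array_contract_tree(inputs, output, size_dict, optimize="greedy")

# dispatch pairwise contractions via autoray; as of 0.6.2 a backend missing
# either `einsum` or `tensordot` falls back to cotengra's own routines
out = tree.contract(arrays, implementation="autoray")
```
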
- from version 0.6.1
  * The number of workers initialized (for non-distributed pools) is now set to,
    in order of preference, 1. the environment variable `COTENGRA_NUM_WORKERS`,
    2. the environment variable `OMP_NUM_THREADS`, or 3. `os.cpu_count()`.
  * Add `RandomGreedyOptimizer` (`cotengra.pathfinders.path_basic.RandomGreedyOptimizer`),
    a lightweight and performant randomized greedy optimizer that eschews both
    hyperparameter tuning and full contraction tree construction, making it
    suitable for very large contractions (tens of thousands of tensors or more);
    see the sketch after this version's entries.
  * Add `optimize_random_greedy_track_flops`
    (`cotengra.pathfinders.path_basic.optimize_random_greedy_track_flops`),
    which runs N trials of (random) greedy path optimization while computing
    the FLOP count simultaneously. This, or its accelerated Rust counterpart
    in `cotengrust`, is the driver for the above optimizer.
  * Add `parallel="threads"` backend, and make it the default for `RandomGreedyOptimizer`
    when `cotengrust` is present, since its version of `optimize_random_greedy_track_flops`
    releases the GIL.
  * Significantly improve both the speed and memory usage of `SliceFinder`
    (`cotengra.slicer.SliceFinder`).
  * Alias `tree.total_cost()` to `tree.combo_cost()`.
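
A combined sketch of the 0.6.1 additions, assuming `RandomGreedyOptimizer` is
importable from the top level and that keyword names such as `max_repeats`,
`parallel` and `target_size` match the documented API (they are illustrative
here, not taken from this changelog):

```python
import os

# preference 1. for the worker count, checked before OMP_NUM_THREADS
# and os.cpu_count() (must be set before the pool is initialized)
os.environ["COTENGRA_NUM_WORKERS"] = "4"

import cotengra as ctg

inputs, output, shapes, size_dict = ctg.utils.rand_equation(100, 3, seed=42)

# lightweight randomized greedy search: no hyperparameter tuning and no
# full contraction tree construction during the trials
opt = ctg.RandomGreedyOptimizer(
    max_repeats=64,      # number of random greedy trials (assumed kwarg)
    parallel="threads",  # the new thread-pool backend
)
path = opt(inputs, output, size_dict)

# build a tree from the found path in order to inspect and slice it
tree = ctg.ContractionTree.from_path(inputs, output, size_dict, path=path)
print(tree.total_cost())  # alias of tree.combo_cost() as of this release

# SliceFinder (faster and leaner as of 0.6.1) searches for indices to slice
sf = ctg.SliceFinder(tree, target_size=2**27)
ix_sl, cost_sl = sf.search()
```
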
- from version 0.6.0
  * All input node legs and pre-processing steps are now calculated lazily,
    allowing slicing of indices, including those 'simplified' away (issue #31).
  * Make `tree.peak_size` (`cotengra.ContractionTree.peak_size`) more accurate,
    by taking the max assuming the left, right and parent intermediate tensors
    are all present at the same time (see the sketch after this version's
    entries).
  * Add simulated annealing tree refinement (in `path_simulated_annealing.py`),
    based on "Multi-Tensor Contraction for XEB Verification of Quantum
    Circuits" by Gleb Kalachev, Pavel Panteleev, and Man-Hong Yung.
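
A short sketch of the improved tree metrics, assuming `remove_ind_` as the
in-place slicing primitive on `ContractionTree` (an assumption from cotengra's
documented API, not this changelog):

```python
import cotengra as ctg

inputs, output, shapes, size_dict = ctg.utils.rand_equation(30, 3, seed=0)
tree = ctg.array_contract_tree(inputs, output, size_dict, optimize="greedy")

# peak_size now takes the max assuming left, right and parent
# intermediates are all in memory at the same time
print(tree.peak_size())

# legs and pre-processing are computed lazily, so slicing works even for
# indices a pre-processing step 'simplified' away
ix = next(iter(size_dict))
tree.remove_ind_(ix)     # slice over index ix (assumed in-place method)
print(tree.peak_size())  # typically reduced after slicing
```
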