Overview
Request 1177286 accepted
- Update to version 0.6.2
* Fix final (output) contractions being mistakenly marked as not tensordot-able.
* When `implementation="autoray"`, don't require a backend to have both
  `einsum` and `tensordot`; instead, fall back to `cotengra`'s own
  implementations (see the sketch after this changelog).
- from version 0.6.1
* The number of workers initialized (for non-distributed pools) is now set to,
  in order of preference: 1. the environment variable `COTENGRA_NUM_WORKERS`,
  2. the environment variable `OMP_NUM_THREADS`, or 3. `os.cpu_count()`
  (sketched below).
* Add [`RandomGreedyOptimizer`](cotengra.pathfinders.path_basic.RandomGreedyOptimizer),
  a lightweight and performant randomized greedy optimizer that eschews both
  hyperparameter tuning and full contraction tree construction, making it
  suitable for very large contractions (tens of thousands of tensors or more);
  usage sketch below.
* Add [`optimize_random_greedy_track_flops`](cotengra.pathfinders.path_basic.optimize_random_greedy_track_flops),
  which runs N trials of (random) greedy path optimization while computing the
  FLOP count simultaneously. This, or its accelerated Rust counterpart in
  `cotengrust`, is the driver for the above optimizer (example below).
* Add a `parallel="threads"` backend and make it the default for
  `RandomGreedyOptimizer` when `cotengrust` is present, since `cotengrust`'s
  version of `optimize_random_greedy_track_flops` releases the GIL (see below).
* Significantly improve both the speed and memory usage of
  [`SliceFinder`](cotengra.slicer.SliceFinder) (usage sketch below).
* Alias `tree.total_cost()` to `tree.combo_cost()` (see below).
- from version 0.6.0
* All input node legs and pre-processing steps are now computed lazily,
  allowing slicing of indices, including those 'simplified' away ({issue}`31`).
* Make [`tree.peak_size`](cotengra.ContractionTree.peak_size) more accurate
  by taking the maximum while assuming the left, right, and parent intermediate
  tensors are all present at the same time (see below).
* Add simulated annealing tree refinement (in `path_simulated_annealing.py`),
  based on "Multi-Tensor Contraction for XEB Verification of Quantum Circuits"
  by Gleb Kalachev, Pavel Panteleev, and Man-Hong Yung (sketch below).
Request History
glaubitz created request
mcalabkova accepted request
ok