
Releases: jcmgray/quimb

v1.13.0

20 Mar 00:55


What's Changed

Breaking Changes

  • ham_hubbard_hardcore: fix the description and sign convention of the hopping strength t.
  • heisenberg_from_edges: fix the sign convention of the magnetic field terms.
  • the quimb.tensor submodule structure has been refactored into tn1d, tn2d, tn3d, and tnag submodules for better organization. Imports from the old locations still work, but are deprecated. Public classes and functions such as MatrixProductState remain directly accessible from the top-level quimb.tensor module as before.

Enhancements:

Major updates to splitting/decomposing individual tensors/arrays:

  • add array_split and array_svals as the primary array-level entry points for matrix decomposition, consolidating dispatch logic that was previously internal to tensor_core.
  • add register_split_driver and register_svals_driver decorators for registering custom matrix decomposition methods with array_split and array_svals.
  • allow array_split to handle batches of matrices (for most methods).
  • array_split: automatically detect and forward valid kwargs to underlying decomposition methods.
  • tensor_split and array_split: expand the absorb options significantly beyond "left", "both", "right", and None to include "lorthog", "rorthog", "lfactor", "rfactor", "lsqrt", "rsqrt", and "s" for returning partial results (single factors or singular values only). The default has changed from "both" to "auto", which uses each method's natural default.
  • add method "svd:eig" with main implementation svd_via_eig for efficient SVD via hermitian eigen-decomposition, with shortcuts for all absorb modes. This can be faster (especially e.g. on GPU) than the standard SVD, but entails some loss of precision.
  • tensor_split: rename method option "eig" to "svd:eig" to make it clearer that this is an SVD split via eigen-decomposition. "eig" remains as a deprecated alias for "svd:eig".
  • add method "svd:rand" with main implementation svd_rand_truncated for randomized SVD with truncation, with shortcuts for all absorb modes. (This is a new, backend-agnostic implementation, as opposed to the existing 'rsvd' method.)
  • add method "qr:cholesky" with main implementation qr_via_cholesky for efficient QR- or LQ-like decompositions via Cholesky decomposition, with shortcuts for all absorb modes. This can be faster than the standard QR (especially on GPU) but entails some loss of precision.
  • tensor_split and array_split: add "lsqrt" and "rsqrt" absorb options, update cholesky decomposition to cholesky_regularized with shift as exposed parameter.
  • compute_oblique_projectors: allow method kwarg.
  • QR decomposition: add stabilize kwarg for controlling QR stabilization behavior.
  • decomposition methods: various compatibility improvements for JAX backend.
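The core trick behind the new "svd:eig" method can be sketched in a few lines of plain numpy. This is illustrative only: quimb's svd_via_eig additionally handles all the absorb modes, truncation, and backend dispatch, and the function name below is a stand-in, not the real API.

```python
import numpy as np

def svd_via_eig_sketch(A):
    """SVD of A via hermitian eigendecomposition of the gram matrix A^H A.

    Hypothetical sketch of the idea behind quimb's svd_via_eig: faster than
    a full SVD on some backends, at the cost of some precision.
    """
    # eigendecompose the (hermitian, PSD) gram matrix: A^H A = V s^2 V^H
    s2, V = np.linalg.eigh(A.conj().T @ A)
    # eigh returns ascending eigenvalues; flip to descending singular values
    s2, V = s2[::-1], V[:, ::-1]
    s = np.sqrt(np.clip(s2, 0.0, None))
    # recover U = A V / s -- tiny singular values lose precision here,
    # which is the trade-off the release notes mention
    U = (A @ V) / s
    return U, s, V.conj().T

A = np.random.default_rng(42).normal(size=(6, 4))
U, s, Vh = svd_via_eig_sketch(A)
assert np.allclose((U * s) @ Vh, A)
```

A similar gram-matrix shortcut underlies "qr:cholesky", where a Cholesky factorization of A^H A replaces the eigendecomposition.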

Other enhancements:


v1.12.1

13 Jan 00:13


Breaking Changes

  • bump the minimum required Python version to 3.11

Bug fixes:

Full Changelog: v1.12.0...v1.12.1

v1.12.0

10 Jan 02:18


What's Changed

Enhancements:

Bug fixes:

New Contributors

Full Changelog: v1.11.2...v1.12.0

v1.11.2

31 Jul 00:24


Enhancements:

Bug fixes:

  • fixes for MPS and MPO constructors when L=1 (#314)
  • tensor splitting with absorb="left" now correctly marks the left indices.
  • tn.isel: fix a bug when a value could not be compared to the string "r"
  • truncated SVD: make the n_chi comparison more robust across different backends

Full Changelog: v1.11.1...v1.11.2

v1.11.1

21 Jun 00:08


Enhancements:

  • add create_bond to tensor_canonize_bond and tensor_compress_bond for optionally creating a new bond between two tensors if they don't already share one. Also expose this as a flag to TensorNetwork1DFlat.compress and related functions (#294).
  • add ensure_bonds_exist for ensuring that all bonds in a 1D flat tensor network exist. Use this in the permute_arrays methods and optionally in the expand_bond_dimension method.
  • tn.draw(): permit empty network, and allow color=True to automatically color all tags.
  • tn.add_tag: add a record: Optional[dict] kwarg, to allow for easy rewinding of temporary tags without tracking the actual networks.
  • add qu.plot as a quick wrapper for calling matplotlib.pyplot.plot with the quimb style.
  • quimb.schematic: add zorder_delta kwarg for fine adjustments to layering of objects in approximately the same position.
  • operatorbuilder: big performance improvements and fixes for building matrix representations, including Z2 symmetry. Add default symmetry and sector options that can be overridden at build time. Add a lazy (slow, matrix-free) 'apply' method. Add a pauli_decompose transformation. Add an experimental PEPO builder for nearest-neighbor operators. Add unit tests.
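The record kwarg on tn.add_tag can be pictured with a toy model in which a network is just a mapping from tensor names to tag sets. All helper names below are hypothetical stand-ins; they only illustrate the rewinding pattern, not quimb's actual data structures.

```python
def add_tag_sketch(tags_by_tensor, tag, which, record=None):
    """Add `tag` to the tensors named in `which`, optionally noting in
    `record` which tensors actually gained it, so the change can be undone.

    Hypothetical analogue of TensorNetwork.add_tag(..., record=...).
    """
    for name in which:
        tags = tags_by_tensor[name]
        if tag not in tags:
            tags.add(tag)
            if record is not None:
                # remember only tags *we* added, never pre-existing ones
                record.setdefault(tag, set()).add(name)

def rewind_sketch(tags_by_tensor, record):
    """Remove exactly the tags noted in `record`, restoring the original."""
    for tag, names in record.items():
        for name in names:
            tags_by_tensor[name].discard(tag)

tn = {"T0": {"PSI"}, "T1": {"PSI", "TMP"}}
rec = {}
add_tag_sketch(tn, "TMP", ["T0", "T1"], record=rec)
rewind_sketch(tn, rec)
assert tn == {"T0": {"PSI"}, "T1": {"PSI", "TMP"}}
```

The point of the record is that "T1" already carried "TMP", so rewinding must strip the tag from "T0" only, without tracking a full copy of the network.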

Bug fixes:

Full Changelog: v1.11.0...v1.11.1

v1.11.0

15 May 00:06


Breaking Changes

  • move belief propagation to quimb.tensor.belief_propagation
  • calling tn.contract() when a non-zero value has accrued into tn.exponent now automatically re-absorbs that exponent.
  • binary tensor operations that would previously have errored will now align and broadcast.
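The exponent re-absorption can be pictured with a minimal scalar sketch. In quimb the "mantissa" is a whole tensor network whose norms have been stripped out during simplification; here it is just a float, and contract_sketch is a hypothetical stand-in for tn.contract().

```python
import math

# a network's scalar value is stored as mantissa * 10**exponent, which
# avoids overflow/underflow when many tensor norms are factored out
mantissa, exponent = 2.5, 300.0   # i.e. the value 2.5e300

def contract_sketch(mantissa, exponent):
    """Hypothetical analogue of the new tn.contract() behavior: the
    accrued 10**exponent factor is re-absorbed into the returned scalar."""
    return mantissa * 10.0**exponent

value = contract_sketch(mantissa, exponent)
assert math.isclose(value, 2.5e300)
```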

Enhancements:

Bug fixes:

  • fix MatrixProductState.measure for cupy backend arrays (#276).
  • fix linalg.expm dispatch (#275)
  • fix 'dm' 1d compress method for disconnected subgraphs
  • fix docs source lookup in quimb.tensor module
  • fix raw gate copying in Circuit (#285)

New Contributors

Full Changelog: v1.10.0...v1.11.0

v1.10.0

18 Dec 23:49


Enhancements

  • tensor network fitting: add method="tree" for when the ansatz is a tree - tensor_network_fit_tree
  • tensor network fitting: fix method="als" for complex dtype networks
  • tensor network fitting: allow method="als" to use an iterative solver suited to much larger tensors, by default a custom conjugate gradient implementation.
  • tensor_network_distance and fitting: support hyper indices explicitly via the output_inds kwarg
  • add tn.make_overlap and tn.overlap for computing the overlap between two tensor networks, $\langle O | T \rangle$, with explicit handling of outer indices to address hyper networks. Also add output_inds to tn.norm and tn.make_norm, as well as the squared kwarg.
  • replace all numba-based parallelism (prange and parallel vectorize) with explicit thread-pool based parallelism. This should be more reliable, and there is no need to set NUMBA_NUM_THREADS anymore. Remove the env var QUIMB_NUMBA_PAR.
  • Circuit: add dtype and convert_eager options. dtype specifies the data type the computation should be performed in. convert_eager specifies whether to apply this (and any to_backend calls) as soon as gates are applied (the default for MPS circuit simulation) or just prior to contraction (the default for exact contraction simulation).
  • tn.full_simplify: add a check_zero option (by default set to "auto") which explicitly checks for zero tensor norms when equalizing norms, to avoid log10(norm) resulting in -inf or nan. Since this creates a data dependency that breaks e.g. jax tracing, it is optional.
  • schematic.Drawing: add shorten kwarg to line drawing and curve drawing and examples to the docs.
  • TensorNetwork: add .backend and .dtype_name properties.
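The check_zero guard can be sketched with a toy norm-equalization pass over raw arrays. This is a hypothetical illustration of the failure mode, not quimb's implementation: the function name and signature are invented for this sketch.

```python
import numpy as np

def strip_exponent_sketch(arrays, check_zero=True):
    """Normalize each array to unit norm, accruing log10 of each norm
    into a shared exponent (cf. tn.exponent).

    With check_zero, an exactly-zero tensor is skipped rather than
    producing log10(0) = -inf; the check costs a data-dependent branch,
    which is why quimb makes it optional (it breaks e.g. jax tracing).
    """
    exponent = 0.0
    out = []
    for a in arrays:
        norm = np.linalg.norm(a)
        if check_zero and norm == 0.0:
            # dividing by zero and taking log10(0) would poison the result
            out.append(a)
            continue
        out.append(a / norm)
        exponent += np.log10(norm)
    return out, exponent

arrays = [np.full((2, 2), 10.0), np.zeros((2, 2))]
normed, exp = strip_exponent_sketch(arrays)
assert np.isfinite(exp)
```

Without the guard, the zero tensor would contribute -inf to the exponent and render the whole contraction value nan.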

PRs:

  • Circuit: add default dtype and convert_eager options by @jcmgray in #273
  • add fit(method="tree") and fix ALS for complex TNs by @jcmgray in #274

Full Changelog: v1.9.0...v1.10.0

v1.9.0

20 Nov 02:50


Breaking Changes

  • renamed MatrixProductState.partial_trace and MatrixProductState.ptr to MatrixProductState.partial_trace_to_mpo, to avoid confusion with other partial_trace methods, which usually produce a dense matrix.

Enhancements:

v1.8.4

20 Jul 21:00


What's Changed

  • fix MPS sample handling of RNG seed by @kevinsung in #248
  • fix bug in applying MPO lazily to MPS (#246)

New Contributors

Full Changelog: v1.8.3...v1.8.4

v1.8.3

10 Jul 23:20


Enhancements:

Full Changelog: v1.8.2...v1.8.3