Tensors
Tensor operations
Tensor contraction
Contract two tensors x and y over dimensions cix and ciy with a shared size. This can be done in place with a pre-allocated tensor z using contract!(z, x, y, cix, ciy), or otherwise contract(x, y, cix, ciy). The dimensions cix and ciy can be specified as integers or tuples of integers for contractions over multiple dimensions. Optionally, the tensors used in the contraction can be conjugated using the arguments conjx and conjy. Note that, by default, the result will be stored in the memory cache (using keyword argument tocache). If the contraction is not some intermediate step, and you would like to save the resulting tensor for future use, then use tocache=false.
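For readers more familiar with NumPy, the semantics of a pairwise contraction can be sketched with `np.tensordot`. This is only an illustrative analogue, not TeNe's implementation; note that TeNe dimension indices are 1-based, while NumPy axes are 0-based.

```python
import numpy as np

# Analogue of contract(x, y, 2, 1): contract dimension 2 of x (1-based)
# with dimension 1 of y. In NumPy's 0-based indexing these are axes 1 and 0.
x = np.random.randn(2, 3, 4)
y = np.random.randn(3, 5, 6)

z = np.tensordot(x, y, axes=(1, 0))
print(z.shape)  # (2, 4, 5, 6): uncontracted dims of x, then those of y
```

The result carries the uncontracted dimensions of x followed by those of y, matching the shapes in the examples below.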
TeNe.contract! — Function

contract!(z, x, y, cix, ciy, [conjx=false, conjy=false])

Contract tensors x and y across dimensions cix and ciy, and store the result in z. In-place version of contract.
Arguments
- `z`: tensor to store the result.
- `x`: first tensor to contract.
- `y`: second tensor to contract.
- `cix`: the dimensions of the first tensor to contract.
- `ciy`: the dimensions of the second tensor to contract.
- `conjx::Bool=false`: Take the complex conjugate of argument x?
- `conjy::Bool=false`: Take the complex conjugate of argument y?

Examples
julia> x = randn(ComplexF64, 2, 3, 4);
julia> y = randn(ComplexF64, 3, 5, 6);
julia> z = similar(x, (2, 4, 5, 6));
julia> contract!(z, x, y, 2, 1);

TeNe.contract — Function

contract(x, y, cix, ciy, [conjx=false, conjy=false]; kwargs...)

Contract tensors x and y across dimensions cix and ciy, and return the result as a new tensor z.
Arguments
- `x`: first tensor to contract.
- `y`: second tensor to contract.
- `cix`: the dimensions of the first tensor to contract.
- `ciy`: the dimensions of the second tensor to contract.
- `conjx::Bool=false`: Take the complex conjugate of argument x?
- `conjy::Bool=false`: Take the complex conjugate of argument y?

Optional Keyword Arguments
- `tocache::Bool=true`: store the result in the second level of the cache?
- `sublevel=:auto`: if stored in cache, at which sublevel? :auto finds non-aliased memory.

Examples
julia> x = randn(ComplexF64, 2, 3, 4);
julia> y = randn(ComplexF64, 3, 5, 6);
julia> z = contract(x, y, 2, 1);
julia> size(z)
(2, 4, 5, 6)

julia> x = randn(ComplexF64, 2, 3, 4, 5);
julia> y = randn(ComplexF64, 6, 5, 2, 7);
julia> z = contract(x, y, (1, 4), (3, 2));
julia> size(z)
(3, 4, 6, 7)

Tensor product
Take the tensor product over two tensors x and y to give a single tensor. This can be done in place with a pre-allocated tensor z using tensorproduct!(z, x, y), or otherwise tensorproduct(x, y). Optionally, the tensors used in the product can be conjugated using the arguments conjx and conjy. Note that, by default, the result will be stored in the memory cache (using keyword argument tocache). If the result is not some intermediate step, and you would like to save the resulting tensor for future use, then use tocache=false.
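As an illustration of what the tensor product computes (using NumPy as a stand-in, not TeNe itself): every element of the result is the product of one element of x and one element of y, and the dimensions of the two inputs are simply concatenated.

```python
import numpy as np

x = np.random.randn(2, 3)
y = np.random.randn(4, 5)

# axes=0 requests no contracted dimensions, i.e. the outer (tensor) product.
z = np.tensordot(x, y, axes=0)
print(z.shape)  # (2, 3, 4, 5)

# Each entry is an elementwise product of one entry from each input.
assert np.isclose(z[1, 2, 3, 4], x[1, 2] * y[3, 4])
```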
TeNe.tensorproduct! — Function

tensorproduct!(z, x, y, [conjx=false, conjy=false])

Compute the tensor product of the two tensors x and y, and store the result in z. Optionally, do the tensor product using the conjugate of the tensors.
Arguments
- `z`: tensor to store the result.
- `x`: first tensor.
- `y`: second tensor.
- `conjx::Bool=false`: Take the complex conjugate of argument x?
- `conjy::Bool=false`: Take the complex conjugate of argument y?

Examples
julia> x = randn(ComplexF64, 2, 3);
julia> y = randn(ComplexF64, 4, 5);
julia> z = similar(x, (2, 3, 4, 5));
julia> tensorproduct!(z, x, y);

TeNe.tensorproduct — Function

tensorproduct(x, y, [conjx=false, conjy=false])

Compute the tensor product of the two tensors x and y, and return the result. Optionally, do the tensor product using the conjugate of the tensors.
Arguments
- `x`: first tensor.
- `y`: second tensor.
- `conjx::Bool=false`: Take the complex conjugate of argument x?
- `conjy::Bool=false`: Take the complex conjugate of argument y?

Optional Keyword Arguments
- `tocache::Bool=true`: store the result in the second level of the cache?
- `sublevel=:auto`: if stored in cache, at which sublevel? :auto finds non-aliased memory.

Examples
julia> x = randn(ComplexF64, 2, 3);
julia> y = randn(ComplexF64, 4, 5);
julia> z = tensorproduct(x, y);
julia> size(z)
(2, 3, 4, 5)

Tensor trace
Compute the trace over multiple dimensions cix in a tensor x. This can be done in place with a pre-allocated tensor z using trace!(z, x, cix...), or otherwise trace(x, cix...). Optionally, the tensor used in the trace can be conjugated using the keyword argument conj. Note that, by default, the result will be stored in the memory cache (using keyword argument tocache). If the result is not some intermediate step, and you would like to save the resulting tensor for future use, then use tocache=false.
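The partial trace pairs up the given dimensions (which must have equal sizes) and sums over their shared index. A NumPy sketch of tracing dimensions 2 and 4 of a rank-4 tensor (an analogue for illustration, not TeNe's implementation):

```python
import numpy as np

x = np.random.randn(2, 3, 4, 3)

# Trace over dimensions 2 and 4 (1-based): both have size 3, and the
# repeated einsum index i sums x[a, i, b, i] over i.
z = np.einsum('aibi->ab', x)
print(z.shape)  # (2, 4)
```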
TeNe.trace! — Function

trace!(z, x, cix::Int...; kwargs...)

Compute the trace of `x` over dimensions `cix`, and store the result in `z`. In-place version of trace.
Optional Keyword Arguments
- `conj::Bool=false`: take the conjugate?

Examples
julia> x = randn(ComplexF64, 2, 3, 4, 3);
julia> z = similar(x, (2, 4));
julia> trace!(z, x, 2, 4);

TeNe.trace — Method

trace(x, cix::Int...; kwargs...)

Compute the trace of x over dimensions cix.
Optional Keyword Arguments
- `conj::Bool=false`: take the conjugate?
- `tocache::Bool=true`: store the result in the second level of the cache?
- `sublevel=:auto`: if stored in cache, at which sublevel? :auto finds non-aliased memory.

Examples
julia> x = randn(ComplexF64, 2, 3, 4, 3);
julia> y = trace(x, 2, 4);
julia> size(y)
(2, 4)

Permuting a single dimension
Permute the dimension at position i in tensor x to position j. This can be done in place with a pre-allocated tensor z using permutedim!(z, x, i, j), or otherwise permutedim(x, i, j). Note that, by default, the result will be stored in the memory cache (using keyword argument tocache). If the result is not some intermediate step, and you would like to save the resulting tensor for future use, then use tocache=false.
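In NumPy terms (an analogue only, with 0-based axes), moving one dimension to a new position corresponds to `np.moveaxis`:

```python
import numpy as np

x = np.random.randn(2, 3, 4, 5)

# Move dimension 2 to position 4 (1-based) -> move axis 1 to axis 3 (0-based).
z = np.moveaxis(x, 1, 3)
print(z.shape)  # (2, 4, 5, 3)
```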
TeNe.permutedim! — Function

permutedim!(z, x, i, j; kwargs...)

Permute dimension i to j for tensor x, storing the result in z. In-place version of permutedim.
Examples
julia> x = randn(ComplexF64, 2, 3, 4, 5);
julia> z = similar(x, (2, 4, 5, 3));
julia> permutedim!(z, x, 2, 4);

TeNe.permutedim — Function

permutedim(x, i::Int, j::Int; kwargs...)

Permute the dimension at position i to position j for tensor x.
Optional Keyword Arguments
- `tocache::Bool=true`: store the result in the second level of the cache?
- `sublevel=:auto`: if stored in cache, at which sublevel? :auto finds non-aliased memory.

Examples
julia> x = randn(ComplexF64, 2, 3, 4, 5);
julia> x = permutedim(x, 2, 4);
julia> size(x)
(2, 4, 5, 3)

Combining & restoring dimensions
Dimensions in a tensor can be combined into a single dimension, and restored using a key. This allows us to make efficient use of BLAS and LAPACK routines involving matrix operations.
Combine the dimensions cixs of tensor x. This can be done in place with a pre-allocated tensor z using key = combinedims!(z, x, cixs), or otherwise z, key = combinedims(x, cixs). Note that, by default, the result will be stored in the memory cache (using keyword argument tocache). If the result is not some intermediate step, and you would like to save the resulting tensor for future use, then use tocache=false.
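Under an assumed mechanism for combining dimensions (permute the chosen dimensions to the end, then reshape them into one), a NumPy sketch for illustration:

```python
import numpy as np

x = np.random.randn(4, 5, 6, 7)

# Combine dimensions 2 and 3 (1-based): move them to the end, then merge
# them into a single dimension of size 5 * 6 = 30. The "key" records the
# original positions and sizes so the reshape can be undone later.
moved = np.moveaxis(x, (1, 2), (-2, -1))   # shape (4, 7, 5, 6)
y = moved.reshape(4, 7, 30)
key = ((2, 3), (5, 6))                     # positions (1-based) and sizes
print(y.shape)  # (4, 7, 30)
```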
TeNe.combinedims! — Function

combinedims!(y, x, cixs)

Combine the dimensions cixs in tensor x, and store the result in y.
Examples
julia> x = randn(ComplexF64, 4, 5, 6, 7);
julia> y = similar(x, (4, 7, 30));
julia> key = combinedims!(y, x, (2, 3))
((2, 3), (5, 6))

TeNe.combinedims — Function

combinedims(x, cixs; kwargs...)

Combine the dimensions cixs in tensor x.
Returns the reshaped tensor, along with a key to restore the original permutations.
Optional Keyword Arguments
- `tocache::Bool=true`: store the result in the second level of the cache?
- `sublevel=:auto`: which sublevel to store in the cache?
- `return_copy=false`: Return the result in newly allocated memory from the cache? Only necessary if the combined dimensions are the last dimensions of `x`.

Examples
julia> x = randn(ComplexF64, 4, 5, 6, 7);
julia> y, key = combinedims(x, (2, 3));
julia> size(y)
(4, 7, 30)

After combining the dimensions, which returns a key, the dimensions of the tensor can be restored. This can be done in place with a pre-allocated tensor y using uncombinedims!(y, x, key), or otherwise y = uncombinedims(x, key). Note that, by default, the result will be stored in the memory cache (using keyword argument tocache). If the result is not some intermediate step, and you would like to save the resulting tensor for future use, then use tocache=false.
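To illustrate the combine-and-restore round trip in NumPy (an analogue, not TeNe's implementation): combining is a moveaxis plus reshape, and restoring inverts both steps using the recorded sizes.

```python
import numpy as np

x = np.random.randn(4, 5, 6, 7)

# Combine dimensions 2 and 3 (1-based) into one trailing dimension ...
y = np.moveaxis(x, (1, 2), (-2, -1)).reshape(4, 7, 30)

# ... then restore: split the trailing dimension back into sizes (5, 6)
# and move those axes to their original positions.
restored = np.moveaxis(y.reshape(4, 7, 5, 6), (-2, -1), (1, 2))
assert np.allclose(restored, x)
```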
TeNe.uncombinedims! — Function

uncombinedims!(y, x, key)

Uncombine the end dimensions of x according to the key, and store the result in y.
Examples
julia> x = randn(ComplexF64, 4, 5, 6, 7);
julia> z, key = combinedims(x, (2, 3));
julia> y = similar(x);
julia> uncombinedims!(y, z, key);

julia> isapprox(y, x)
true

TeNe.uncombinedims — Function

uncombinedims(x, key; kwargs...)

Uncombine the end dimensions in tensor x according to the key.
Optional Keyword Arguments
- `tocache::Bool=true`: store the result in the second level of the cache?
- `sublevel=:auto`: which sublevel to store in the cache?
- `return_copy=false`: Return the result in newly allocated memory from the cache? Only necessary if the combined dimensions are the last dimensions of `x`.

Examples
julia> x = randn(ComplexF64, 4, 5, 6, 7);
julia> y, key = combinedims(x, (2, 3));
julia> z = uncombinedims(y, key);
julia> size(z)
(4, 5, 6, 7)

Tensor factorisations
Singular value decomposition
The singular value decomposition $M = USV$ is typically applied to a matrix, but can equally be applied to a tensor to split the dimensions into separate tensors. This is done by permuting and reshaping the tensor into a matrix representation and applying the SVD. The dimensions to be contained in $V$ are specified by dims, with U, S, V = tsvd(x, dims).
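A NumPy sketch of the idea (assumed mechanics for illustration, not TeNe's exact implementation): with dims = (3, 4) already at the end of the tensor, reshape to a matrix, factorise, and fold the factors back into tensors.

```python
import numpy as np

x = np.random.randn(4, 5, 6, 7)

# Split off dims (3, 4): flatten dims (1, 2) into rows and (3, 4) into columns.
m = x.reshape(4 * 5, 6 * 7)
U, S, Vt = np.linalg.svd(m, full_matrices=False)
k = S.size  # number of singular values; truncation would shrink this

# Fold the matrix factors back into tensors sharing the new bond dimension k.
U = U.reshape(4, 5, k)
V = Vt.reshape(k, 6, 7)

# Contracting U * S with V over the bond recovers the original tensor.
approx = np.tensordot(U * S, V, axes=(2, 0))
assert np.allclose(approx, x)
```

Truncating to the largest few singular values (as controlled by cutoff, mindim, and maxdim below) shrinks k and yields a compressed approximation instead of an exact factorisation.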
TeNe.tsvd — Function

tsvd(x, dims; kwargs...)
tsvd(x, dim::Int; kwargs...)

Compute a singular value decomposition of tensor x. Separates the dimensions dims from the remainder.
Optional Keyword Arguments
- `cutoff::Float64=0.0`: Truncation criterion to reduce the bond dimension. Good values range from 1e-8 to 1e-14.
- `mindim::Int=1`: Minimum bond dimension after truncation.
- `maxdim::Int=0`: Maximum bond dimension for truncation. Set to 0 for no limit.

At a later date, we would like to improve the SVD to pre-allocate memory in the cache for the returned tensors (and to add optional parameters to pre-allocate the memory used to store the results).