Opt-einsum-path: Einsum Path Contraction Optimization for Tensor Contraction
⚓ Rust 📅 2025-08-21 👤 surdeus

Hi! This is a crate for scientific computation: chemistry, physics, and numerical methods.
This crate performs path optimization for tensor contraction; it does not perform the actual contraction computation itself.
This is a re-implementation of opt_einsum, with minor modifications. The opt_einsum documentation is also a good resource for users unfamiliar with tensor contraction paths.
Quick Example
This example involves a contraction of multiple tensors.
The computational cost of this contraction, if performed naively, scales as O(N^8); with an optimized tensor contraction path, it scales as O(N^5):
use opt_einsum_path::contract_path;
let expression = "μi,νa,μνκλ,κj,λb->iajb";
let nao = 1024; // μνκλ
let nocc = 128; // ij
let nvir = 896; // ab
let g = vec![nao, nao, nao, nao]; // G_μνκλ
let c = vec![nao, nocc]; // C_μi, C_κj
let d = vec![nao, nvir]; // D_λb, D_νa
let shapes = [c.clone(), d.clone(), g.clone(), c.clone(), d.clone()];
let path_result = contract_path(
    expression, // Expression to contract | &str
    &shapes,    // Shapes of the tensors  | &[Vec<usize>]
    "optimal",  // Optimization kind      | impl PathOptimizer
    None,       // Memory limit           | impl Into<SizeLimitType>
);
let (path, path_info) = path_result.unwrap();
println!("Path: {path:?}"); // Vec<Vec<usize>>
// Path: [[0, 2], [1, 3], [0, 2], [0, 1]]
println!("{path_info}"); // PathInfo
// Complete contraction: μi,νa,μνκλ,κj,λb->iajb
// Naive scaling: 8
// Optimized scaling: 5
// Naive FLOP count: 7.231e22
// Optimized FLOP count: 3.744e14
// Theoretical speedup: 1.931e8
// Largest intermediate: 1.374e11 elements
// --------------------------------------------------------------------------------
// scaling BLAS current remaining
// --------------------------------------------------------------------------------
// 5 GEMM μi,μνκλ->νκλi νa,κj,λb,νκλi->iajb
// 5 TDOT κj,νκλi->νλij νa,λb,νλij->iajb
// 5 GEMM νa,νλij->λija λb,λija->iajb
// 5 GEMM λb,λija->iajb iajb->iajb
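The reported FLOP counts can be sanity-checked by hand. The sketch below (plain Rust, no dependencies; the helper name `flop_count` is ours, not part of the opt_einsum_path API) uses opt_einsum's counting convention — the product of the sizes of all indices touched by a step, times an operation factor — to reproduce the naive and optimized totals printed above:

```rust
// Cost of one contraction step, following opt_einsum's convention:
// (product of the sizes of all involved indices) * op_factor, where
// op_factor = (number of input terms - 1), plus 1 if indices are summed over.
// This helper is illustrative only, not part of the opt_einsum_path API.
fn flop_count(dims: &[usize], num_terms: usize, has_contraction: bool) -> f64 {
    let overall: f64 = dims.iter().map(|&d| d as f64).product();
    let mut op_factor = (num_terms - 1).max(1) as f64;
    if has_contraction {
        op_factor += 1.0;
    }
    overall * op_factor
}

fn main() {
    let (nao, nocc, nvir) = (1024usize, 128, 896);

    // Naive: one pass over all eight indices μνκλiajb with 5 input terms.
    let naive = flop_count(&[nao, nao, nao, nao, nocc, nocc, nvir, nvir], 5, true);

    // Optimized path, summing the four pairwise steps listed above
    // (each slice holds the indices touched by that step):
    let optimized = flop_count(&[nao, nao, nao, nao, nocc], 2, true)  // μi,μνκλ->νκλi
        + flop_count(&[nao, nao, nao, nocc, nocc], 2, true)           // κj,νκλi->νλij
        + flop_count(&[nao, nao, nocc, nocc, nvir], 2, true)          // νa,νλij->λija
        + flop_count(&[nao, nocc, nocc, nvir, nvir], 2, true);        // λb,λija->iajb

    println!("naive:     {naive:.3e}");               // ~7.231e22
    println!("optimized: {optimized:.3e}");           // ~3.744e14
    println!("speedup:   {:.3e}", naive / optimized); // ~1.931e8
}
```

The totals match the `PathInfo` report: the bulk of the optimized cost comes from the first GEMM, which still touches all four `nao` indices.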
Supported Optimizers
- optimal: Slow, but always gives the best contraction path (found by depth-first search).
- greedy: Fast and generally good, but occasionally gives a sub-optimal contraction path.
- dp (dynamic programming): Robust, and generally gives an optimal contraction path. Can be configured to optimize different objectives (flops, size, write, combo, limit); defaults to the flops objective.
- branch (branch-and-bound): Robust; gives a better contraction path as the branch-search level increases (branch-1, branch-2, branch-all).
- random-greedy: Greedy with some randomized optimization techniques; after multiple iterations, the best contraction path found is selected. The number of iterations can be set, e.g. random-greedy-128 for 128 iterations; defaults to 32 iterations.
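To see why exhaustive search like the optimal optimizer pays off, consider the simplest case: a matrix chain. The toy depth-first search below (our own pure-Rust sketch, not the crate's implementation) tries every split point recursively and returns the cheapest multiplication count:

```rust
// Toy depth-first search for the cheapest matrix-chain order, illustrating
// the idea behind an exhaustive ("optimal") path search. This is an
// illustrative sketch, not the opt_einsum_path implementation.
fn best_cost(dims: &[usize]) -> usize {
    // dims encodes matrices M_i of shape dims[i] x dims[i+1].
    let n = dims.len() - 1; // number of matrices in the chain
    if n == 1 {
        return 0; // a single matrix needs no contraction
    }
    let mut best = usize::MAX;
    for split in 1..n {
        // Contract the left and right sub-chains, then multiply the results:
        // a dims[0] x dims[split] matrix times a dims[split] x dims[n] matrix.
        let cost = best_cost(&dims[..=split])
            + best_cost(&dims[split..])
            + dims[0] * dims[split] * dims[n];
        best = best.min(cost);
    }
    best
}

fn main() {
    // Three matrices: 10x1000, 1000x5, 5x2000.
    // (AB)C costs 10*1000*5 + 10*5*2000   =    150_000 multiplications;
    // A(BC) costs 1000*5*2000 + 10*1000*2000 = 30_000_000.
    println!("optimal chain cost: {}", best_cost(&[10, 1000, 5, 2000])); // 150000
}
```

For general einsum expressions the search space grows much faster than for a chain, which is why the heuristic optimizers (greedy, dp, branch, random-greedy) exist at all.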
The default optimizer is similar (though not identical) to the high-quality preset auto-hq in the original Python package opt_einsum:
- optimal for [0, 6) tensors;
- dp for [6, 20) tensors;
- random-greedy-128 for [20, inf) tensors.
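The selection rule above can be written down directly. This is a sketch of the stated thresholds only; the function name `default_optimizer_kind` is ours and is not part of the crate's API:

```rust
// Illustrative only: mirrors the default-optimizer thresholds stated above.
// `default_optimizer_kind` is a hypothetical name, not the crate's API.
fn default_optimizer_kind(num_tensors: usize) -> &'static str {
    match num_tensors {
        0..=5 => "optimal",            // [0, 6): exhaustive search is affordable
        6..=19 => "dp",                // [6, 20): dynamic programming
        _ => "random-greedy-128",      // [20, inf): randomized greedy, 128 iterations
    }
}

fn main() {
    println!("{}", default_optimizer_kind(5));  // optimal
    println!("{}", default_optimizer_kind(6));  // dp
    println!("{}", default_optimizer_kind(20)); // random-greedy-128
}
```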
Citation and Relation to opt_einsum
This crate was originally developed for the Rust tensor toolkit RSTSR and the electronic structure toolkit REST. It is formally NOT a project affiliated with opt_einsum.
The author thanks the original authors of opt_einsum and of the algorithms implemented in NumPy; their work greatly accelerated the development of electronic structure algorithms.
We refer to:
- Original implementation: opt_einsum
- Similar project in Python: cotengra, and its Rust counterpart cotengrust
- Citation: Daniel G. A. Smith and Johnnie Gray, opt_einsum - A Python package for optimizing contraction order for einsum-like expressions. Journal of Open Source Software, 2018, 3 (26), 753. doi: https://doi.org/10.21105/joss.00753