seemps.optimization.gradient_descent
- seemps.optimization.gradient_descent(H, guess, maxiter=1000, tol=1e-13, k_mean=10, tol_variance=1e-14, tol_up=None, strategy=<seemps.state.core.Strategy object>, callback=None)
Ground state search of Hamiltonian H by gradient descent.
- Parameters:
  - H (Union[MPO, MPOList, MPOSum]): Hamiltonian in MPO form.
  - guess (MPS | MPSSum): Initial guess of the ground state.
  - maxiter (int): Maximum number of iterations (defaults to 1000).
  - tol (float): Energy variation that indicates termination (defaults to 1e-13).
  - tol_up (float, defaults to tol): If the energy fluctuates upwards by less than this tolerance, continue the optimization.
  - tol_variance (float): Energy variance target (defaults to 1e-14).
  - strategy (Optional[Strategy]): Truncation strategy for linear combinations of MPS. Defaults to DESCENT_STRATEGY.
  - callback (Optional[Callable[[MPS, OptimizeResults], Any]]): A callable invoked after each iteration (defaults to None).
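A minimal usage sketch follows. Only the gradient_descent call signature comes from this page; the import paths for MPS and MPO, the site-tensor layouts (left bond, physical, physical, right bond for MPO tensors; left bond, physical, right bond for MPS tensors), and the energy field on the returned OptimizeResults are assumptions based on typical SeeMPS usage and may need adjustment.

```python
import numpy as np
from seemps.state import MPS                      # assumed import path
from seemps.operators import MPO                  # assumed import path
from seemps.optimization import gradient_descent

Z = np.diag([1.0, -1.0])   # Pauli Z
I2 = np.eye(2)


def field_mpo(n_sites: int) -> MPO:
    # Assumed construction: bond-dimension-2 MPO for H = sum_i Z_i,
    # with site tensors indexed as (left, physical, physical, right).
    A = np.zeros((2, 2, 2, 2))
    A[0, :, :, 0] = I2   # carry the identity before the Z term
    A[0, :, :, 1] = Z    # place one Z term
    A[1, :, :, 1] = I2   # carry the identity after the Z term
    first = A[[0], :, :, :]   # shape (1, 2, 2, 2)
    last = A[:, :, :, [1]]    # shape (2, 2, 2, 1)
    return MPO([first] + [A] * (n_sites - 2) + [last])


n = 10
H = field_mpo(n)

# Product-state guess |+>^n written as an MPS of (1, 2, 1) site tensors
plus = np.full((1, 2, 1), 1 / np.sqrt(2))
guess = MPS([plus.copy() for _ in range(n)])


def report(state, results):
    # Invoked after each iteration with the current MPS and the running
    # OptimizeResults; assumes the results object exposes `energy`.
    print(f"E = {results.energy:.12f}")


result = gradient_descent(H, guess, maxiter=500, tol=1e-11, callback=report)
print("Ground-state energy estimate:", result.energy)
```

For this diagonal Hamiltonian the exact ground-state energy is -n, which gives a quick sanity check on convergence alongside the tol and tol_variance stopping criteria.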