Simulations with complex measures
Monte Carlo simulation of systems with complex-valued measures
Nuclear Physics B - Proceedings Supplements, 1998
A simulation method based on RG blocking is shown to yield statistical errors smaller than those of the crude MC using absolute values of the original measures. The new method is particularly suited to the sign problem of indefinite or complex-valued measures. We demonstrate the many advantages of this method in the simulation of the 2D Ising model with complex-valued temperature.
Monte Carlo simulations with indefinite and complex-valued measures
Physical Review E, 1994
A method is presented to tackle the sign problem in the simulations of systems having indefinite or complex-valued measures. In general, this new approach is shown to yield statistical errors smaller than those of the crude Monte Carlo using absolute values of the original measures. Exactly solvable, one-dimensional Ising models with complex temperature and complex activity illustrate the considerable improvements and the workability of the new method even when the crude one fails.
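For concreteness, the "crude" estimator that these entries compare against can be illustrated on the exactly solvable case they use. The sketch below (not code from any of the cited papers; the coupling, chain length, and sample count are illustrative choices) samples a free 1D Ising chain with the absolute value of the complex Boltzmann weight and restores the phase by reweighting, reproducing the analytically continued bond correlation tanh(βJ).

```python
# Minimal sketch (not code from the cited papers): phase reweighting with the
# absolute value of the complex weight, for a free 1D Ising chain with complex
# inverse temperature.  Configurations are drawn with |w| and the estimator
#   <O> = <O e^{i*theta}>_{|w|} / <e^{i*theta}>_{|w|}
# restores the residual phase theta = Im(beta*J) * sum_i s_i s_{i+1}.
import numpy as np

rng = np.random.default_rng(0)
beta, J = 0.4 + 0.3j, 1.0             # complex coupling (illustrative values)
n_bonds, n_samples = 32, 100_000      # free chain: bonds b_i = s_i*s_{i+1} = ±1

# Under |w| the bonds are independent: P(b=+1) = exp(+Re(beta*J)) / (2*cosh(Re(beta*J))).
p_plus = np.exp(beta.real * J) / (2.0 * np.cosh(beta.real * J))
bonds = np.where(rng.random((n_samples, n_bonds)) < p_plus, 1, -1)

phase = np.exp(1j * beta.imag * J * bonds.sum(axis=1))   # residual complex phase
num = np.mean(bonds[:, 0] * phase)                        # <b_0 e^{i theta}>_{|w|}
den = np.mean(phase)                                      # <e^{i theta}>_{|w|} ("average sign")

print("reweighted <s_i s_{i+1}> :", num / den)
print("exact tanh(beta*J)       :", np.tanh(beta * J))
```

The average phase in the denominator shrinks as the chain grows or the imaginary part of the coupling increases, which is the statistical-error blow-up these papers aim to tame.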
Simulations with complex measure
Nuclear Physics B, 1998
A method is proposed to handle the sign problem in the simulation of systems having indefinite or complex-valued measures.
Monte Carlo Simulations with Complex-Valued Measure
International Journal of Modern Physics C, 1994
To tackle the sign problem in the simulations of systems having indefinite or complex-valued measures, we propose a new approach which yields statistical errors smaller than those of the crude Monte Carlo using absolute values of the original measures. The 1D complex-coupling Ising model is employed as an illustration.
The paradigm of complex probability and Monte Carlo methods
Systems Science & Control Engineering, OA, 2019
In 1933, Andrey Nikolaevich Kolmogorov established the system of five axioms that define the concept of mathematical probability. This system can be extended to include the set of imaginary numbers by adding three supplementary original axioms. Therefore, any experiment can be performed in the set C of complex probabilities, which is the sum of the set R of real probabilities and the set M of imaginary probabilities. The purpose here is to include additional imaginary dimensions to the experiment taking place in the ‘real’ laboratory in R and hence to evaluate all the probabilities. Consequently, the probability in the entire set C = R + M is permanently equal to one no matter what the stochastic distribution of the input random variable in R is; therefore the outcome of the probabilistic experiment in C can be determined perfectly. This is due to the fact that the probability in C is calculated after subtracting the chaotic factor of the random experiment from the degree of our knowledge. This novel complex probability paradigm is applied to the classical probabilistic Monte Carlo numerical methods and used to prove the convergence of these stochastic procedures in an original way.
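As a numerical aside, the identity quoted above (probability in C equal to one after subtracting the chaotic factor from the degree of our knowledge) can be checked directly from the CPP relations as they are usually stated: p_m = i(1 − p_r), DOK = |p_r + p_m|², Chf = 2i·p_r·p_m. The short sketch below is an assumed illustration of those relations, not the author's code.

```python
# Assumed illustration of the CPP identity Pc^2 = DOK - Chf = 1, using the
# relations as commonly stated in the complex probability paradigm:
#   p_m = i*(1 - p_r),  DOK = |p_r + p_m|^2,  Chf = 2*i*p_r*p_m  (real, <= 0).
import numpy as np

for p_r in np.linspace(0.0, 1.0, 6):
    p_m = 1j * (1.0 - p_r)                 # imaginary complementary probability
    dok = abs(p_r + p_m) ** 2              # degree of our knowledge
    chf = (2j * p_r * p_m).real            # chaotic factor
    pc2 = dok - chf                        # probability in C = R + M, squared
    print(f"p_r={p_r:.1f}  DOK={dok:.3f}  Chf={chf:+.3f}  Pc^2={pc2:.3f}")
```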
The Monte Carlo Techniques and the Complex Probability Paradigm
IntechOpen, 2020
The concept of mathematical probability was established in 1933 by Andrey Nikolaevich Kolmogorov by defining a system of five axioms. This system can be enhanced to encompass the imaginary numbers set after the addition of three novel axioms. As a result, any random experiment can be executed in the complex probabilities set C which is the sum of the real probabilities set R and the imaginary probabilities set M. We aim here to incorporate supplementary imaginary dimensions to the random experiment occurring in the “real” laboratory in R and therefore to compute all the probabilities in the sets R, M, and C. Accordingly, the probability in the whole set C = R + M is constantly equivalent to one independently of the distribution of the input random variable in R, and subsequently the output of the stochastic experiment in R can be determined absolutely in C. This is the consequence of the fact that the probability in C is computed after the subtraction of the chaotic factor from the degree of our knowledge of the nondeterministic experiment. We will apply this innovative paradigm to the well-known Monte Carlo techniques and to their random algorithms and procedures in a novel way.
The Novel Complex Probability Paradigm Applied to Monté Carlo Methods
B P International, 2024
Monte Carlo methods were central to the simulations required for the Manhattan Project, though severely limited by the computational tools at the time. In 1933, Andrey Nikolaevich Kolmogorov established the system of five axioms that define the concept of mathematical probability. This system can be developed to include the set of imaginary numbers by adding three supplementary original axioms. Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods, can also be interpreted as a mean-field particle Monte Carlo approximation of Feynman-Kac path integrals. Therefore, any experiment can be performed in the set C of complex probabilities, which is the sum of the set R of real probabilities and the set M of imaginary probabilities. The purpose here is to include additional imaginary dimensions to the experiment taking place in the "real" laboratory in R and hence to evaluate all the probabilities in R, M, and C. Consequently, the probability in the entire set C = R + M is permanently equal to one no matter what the stochastic distribution of the input random variable in R is; therefore the outcome of the probabilistic experiment in C can be determined perfectly. This is due to the fact that the probability in C is calculated after subtracting the chaotic factor of the random experiment from the degree of our knowledge. It is important to state here that one essential and very well-known probability distribution was considered in the current chapter, namely the discrete uniform probability distribution, together with a specific generator of uniform random numbers, although the original CPP model can be applied to any generator of uniform random numbers that exists in the literature. This will certainly yield analogous results and conclusions and will confirm without any doubt the success of my innovative theory. This novel complex probability paradigm will be applied to the classical probabilistic Monte Carlo numerical methods and used to prove the convergence of these stochastic procedures in an original way.
The factorization method for Monte Carlo simulations of systems with a complex action
Nuclear Physics B - Proceedings Supplements, 2004
We propose a method for Monte Carlo simulations of systems with a complex action. The method has the advantages of being in principle applicable to any such system and provides a solution to the overlap problem. In some cases, like in the IKKT matrix model, a finite size scaling extrapolation can provide results for systems whose size would make it prohibitive to simulate directly.
The convergence of complex Langevin simulations
Nuclear Physics B, 1994
It is proven that ensemble averages computed from a complex Langevin (CL) simulation will necessarily converge to the correct values if the ensemble averages become time independent. This is illustrated with two model problems defined on the compact spaces U(1) and S2, as well as with a lattice fermion model. For all three problems, the CL method is found to be applicable with few exceptions. For the U(1) problem, this is demonstrated via a semi-analytic solution for the expectation values. The difficulties of obtaining accurate numerical solutions of the stochastic differential equations are discussed.
Representation basis in quantum Monte Carlo calculations and the negative-sign problem
Physics Letters A, 1992
We present a method of exact estimation of the decay rate of the "average sign" which appears in the world-line quantum Monte Carlo calculations for frustrated spin systems. We observe the dependence of the negative-sign problem on the spin representation for the antiferromagnetic XXZ model on a triangular lattice.
A subset solution to the sign problem in random matrix simulations
Physical Review D, 2012
We present a solution to the sign problem in dynamical random matrix simulations of a two-matrix model at nonzero chemical potential. The sign problem, caused by the complex fermion determinants, is solved by gathering the matrices into subsets, whose sums of determinants are real and positive even though their cardinality only grows linearly with the matrix size. A detailed proof of this positivity theorem is given for an arbitrary number of fermion flavors. We performed importance sampling Monte Carlo simulations to compute the chiral condensate and the quark number density for varying chemical potential and volume. The statistical errors on the results show only a mild dependence on the matrix size and chemical potential, which confirms the absence of a sign problem in the subset method. This strongly contrasts with the exponential growth of the statistical error in standard reweighting methods, which was also analyzed quantitatively using the subset method. Finally, we show how the method elegantly resolves the Silver Blaze puzzle in the microscopic limit of the matrix model, where it is equivalent to QCD.
The mechanism of complex Langevin simulations
Journal of Statistical Physics, 1993
We discuss conditions under which expectation values computed from a complex Langevin process Z will converge to integral averages over a given complex-valued weight function. The difficulties in proving a general result are pointed out. For complex-valued polynomial actions, it is shown that for a process converging to a strongly stationary process one gets the correct answer for averages of polynomials if c_τ(k) ≡ E(e^{ikZ(τ)}) satisfies certain conditions. If these conditions are not satisfied, then the stochastic process is not necessarily described by a complex Fokker-Planck equation. The result is illustrated with the exactly solvable complex frequency harmonic oscillator.
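To make the closing example concrete, here is a minimal complex Langevin sketch for the exactly solvable "complex frequency" Gaussian model S(x) = σx²/2 with Re σ > 0 (an assumed illustration, not the paper's code; σ, step size, and run length are arbitrary choices). The long-time average of z² should reproduce the exact value 1/σ.

```python
# Assumed illustration (not the paper's code): complex Langevin for the
# Gaussian model S(x) = sigma*x^2/2 with complex sigma, Re(sigma) > 0.
# The complexified variable z obeys  dz = -sigma*z dt + dW  with real noise,
# and the stationary average of z^2 equals the exact complex value 1/sigma.
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0 + 0.5j                  # complex "frequency" (illustrative value)
dt, n_steps, n_therm = 0.01, 500_000, 5_000

z = 0.0 + 0.0j
acc, count = 0.0 + 0.0j, 0
for step in range(n_steps):
    z += -sigma * z * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
    if step >= n_therm:
        acc += z * z
        count += 1

print("CL estimate of <x^2>:", acc / count)
print("exact 1/sigma       :", 1.0 / sigma)
```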
The Factorization method for simulating systems with a complex action
2003
We propose a method for Monte Carlo simulations of systems with a complex action. The method has the advantages of being in principle applicable to any such system and provides a solution to the overlap problem. We apply it to the random matrix theory of finite density QCD, where we compare with analytic results. In this model we find non-commutativity of the limits µ → 0 and N → ∞, which could be of relevance in QCD at finite density.
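The factorization identity underlying the method can be illustrated on a toy two-variable model with a complex weight: the distribution of an observable under the full measure factorizes as ρ(x) ∝ ρ₀(x)·w₀(x), where ρ₀ is measured in the phase-quenched ensemble and w₀(x) is the average phase at fixed x. The sketch below is an assumed toy illustration only; the actual method obtains ρ₀ and w₀ from separate constrained simulations, which is what addresses the overlap problem, whereas here a single unconstrained run is simply binned.

```python
# Toy sketch of the factorization identity (assumed illustration, not the
# authors' code).  Complex weight  w(x1, x2) ∝ exp(-(x1^2+x2^2)/2 + i*a*x1*x2);
# for the observable O = x1, rho(o) ∝ rho_0(o) * w_0(o), with rho_0 measured in
# the phase-quenched ensemble and w_0(o) the average phase at fixed O = o.
# Exact result for comparison: <x1^2> = 1 / (1 + a^2).
import numpy as np

rng = np.random.default_rng(2)
a, n = 1.0, 200_000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)   # phase-quenched draw
phase = np.exp(1j * a * x1 * x2)

edges = np.linspace(-4.0, 4.0, 41)
centers = 0.5 * (edges[:-1] + edges[1:])
idx = np.digitize(x1, edges) - 1                           # bin index of each sample
rho0 = np.histogram(x1, bins=edges, density=True)[0]       # phase-quenched density
w0 = np.array([phase[idx == b].mean() if np.any(idx == b) else 0.0
               for b in range(len(centers))])              # average phase per bin

weight = rho0 * w0
o2_fact = np.sum(centers**2 * weight) / np.sum(weight)     # factorized estimate
o2_direct = np.mean(x1**2 * phase) / np.mean(phase)        # direct reweighting

print("factorized <x1^2>:", o2_fact.real)
print("direct reweighted:", o2_direct.real)
print("exact 1/(1+a^2)  :", 1.0 / (1.0 + a**2))
```

In this toy the average phase is not small, so direct reweighting also works; the factorized form becomes essential precisely when the phase-quenched and full distributions barely overlap.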
Green function Monte Carlo with stochastic reconfiguration: An effective remedy for the sign problem
2000
A recent technique, proposed to alleviate the "sign problem disease", is discussed in detail. As is well known, the ground state of a given Hamiltonian H can be obtained by applying the imaginary time propagator e^{−Hτ} to a given trial state ψ_T for large imaginary time τ and sampling statistically the propagated state ψ_τ = e^{−Hτ} ψ_T. However, the so-called "sign problem" may appear in the simulation, and such statistical propagation would be practically impossible without employing some approximation such as the well-known "fixed node" approximation (FN). The present method allows one to improve the FN dynamics with a systematic correction scheme. This is possible by the simple requirement that, after a short imaginary time propagation via the FN dynamics, a number p of correlation functions can be further constrained to be exact by a small perturbation of the FN propagated state, which is free of the sign problem. By iterating this scheme the Monte Carlo average sign, which is almost zero when there is a sign problem, remains stable and finite even for large τ. The proposed algorithm is tested against the exact diagonalization results available on finite lattices. It is also shown in a few test cases that the dependence of the results upon the few parameters entering the stochastic
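The projection step described above (applying e^{−Hτ} to a trial state and filtering onto the ground state) can be shown deterministically before any statistical sampling enters. The sketch below is an assumed illustration, not the GFMC algorithm itself: a generic symmetric matrix stands in for H, a linearized propagator (1 − Δτ·H) is applied repeatedly, and the mixed estimator ⟨ψ_T|H|ψ_τ⟩/⟨ψ_T|ψ_τ⟩ converges to the exact ground-state energy.

```python
# Assumed illustration of the imaginary-time projection that GFMC performs
# statistically.  Repeated application of (1 - dtau*H) to a trial state filters
# out excited components; the mixed estimator converges to the ground-state
# energy E0.  A generic symmetric matrix stands in for a spin Hamiltonian.
import numpy as np

rng = np.random.default_rng(3)
dim = 50
A = rng.standard_normal((dim, dim))
H = 0.5 * (A + A.T)                      # stand-in symmetric "Hamiltonian"
E0 = np.linalg.eigvalsh(H)[0]            # exact ground-state energy

psi_T = np.ones(dim) / np.sqrt(dim)      # trial state
psi = psi_T.copy()
dtau = 0.01
for step in range(2001):
    psi = psi - dtau * (H @ psi)         # one linearized step of exp(-H*dtau)
    psi /= np.linalg.norm(psi)           # keep the walker normalized
    if step % 500 == 0:
        e_mix = (psi_T @ (H @ psi)) / (psi_T @ psi)
        print(f"tau = {step * dtau:6.2f}   mixed estimator = {e_mix:+.4f}")

print(f"exact E0 = {E0:+.4f}")
```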
A Monte Carlo Sampling Scheme for the Ising Model
Journal of Statistical Physics, 2000
In this paper we describe a Monte Carlo sampling scheme for the Ising model and similar discrete state models. The scheme does not involve any particular method of state generation but rather focuses on a new way of measuring and using the Monte Carlo data.
Statistical Study of Complex Eigenvalues in Stochastic Systems
Research Journal of Applied Sciences, Engineering and Technology, Maxwell review, 2010
In this research we analyze the complex modes arising in multiple degree-of-freedom non-proportionally damped discrete linear stochastic systems. Complex eigenvalues intervene when unstable states, such as resonances, occur. Linear dynamic systems must generally be expected to exhibit non-proportional damping. Non-proportionally damped linear systems do not possess classical normal modes but possess complex modes. The proposed method is based on the transformation of random variables. The advantage of this method is that it gives us the probability density function of the real and imaginary parts of the complex eigenvalues for a stochastic mechanical system, i.e. a system with random parameters (Young's modulus, load). The proposed method is illustrated by considering a numerical example based on a linear array of damped spring-mass oscillators. It is shown that the approach can predict the probability density function with good accuracy when compared with independent Monte Carlo simulations.
Complex Dynamical Systems and Mathematical Modelling
Research & Reviews: Journal of Physics, 2019
So-called “Complex Dynamical Systems” (that is, systems displaying complex behavior) appear in condensed matter physics and chemistry and play a fundamental role in biological systems. They require a theoretical treatment in terms of “Mathematical Modelling”, with statistical formalisms being of great relevance. We present here a feature-style article describing and discussing the question, emphasizing the difficult issue of the presence of hidden constraints and the introduction of nonstandard statistics arising in the realm of “Information Theory”.