# Monte Carlo simulation: Importance sampling


Instead of choosing configurations (states) at random and then weighting them with $\exp(-E/k_B T)$, we choose configurations with probability $\exp(-E/k_B T)$ and then weight them evenly.
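As a concrete contrast, the "simple sampling" alternative can be sketched in a few lines: draw positions uniformly and weight each state by its Boltzmann factor. This is a minimal sketch, assuming the harmonic potential and unit temperature used in the problem below; the sampling range $[-5, 5]$ is an illustrative choice, not part of the problem statement.

```python
import math
import random

# Simple (non-importance) sampling: choose configurations x at random,
# then weight each by its Boltzmann factor exp(-U(x)/k_B T).
# With k_B T = k = 1 and U(x) = x^2/2, estimate <x^2> (exact value: 1).
rng = random.Random(0)
num, den = 0.0, 0.0
for _ in range(100_000):
    x = rng.uniform(-5.0, 5.0)      # configuration chosen uniformly at random
    w = math.exp(-x * x / 2.0)      # Boltzmann weight exp(-U(x))
    num += w * x * x
    den += w
mean_x2 = num / den                 # weighted average of x^2
```

This works here because the distribution is concentrated near the origin, but in high-dimensional problems almost all uniformly chosen configurations have negligible weight, which is exactly what importance sampling avoids.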

Consider a particle moving in one-dimensional space under the potential $U(x) = \frac{1}{2}kx^2$. The probability density function for the particle position $x$ is given by $f(x) = a\,e^{-U(x)/k_B T}$, in which $a$ is a normalization constant chosen so that $\int f(x)\,dx = 1$.

For simplicity, we set $k_B T / k = 1$ for computational purposes.

(a) Use the Metropolis algorithm to generate a sequence of states (i.e. a sequence of particle positions $x$, also called a Markov chain) according to the PDF, and evaluate the expectation values (mean values) of $x$, $x^2$, $x^3$, and $x^4$.
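A minimal sketch of part (a), assuming $k_B T / k = 1$ so that $f(x) \propto e^{-x^2/2}$ (a standard normal, for which the exact moments are $0$, $1$, $0$, $3$). The step size, burn-in length, and chain length are illustrative choices:

```python
import math
import random

def metropolis_chain(n_steps, step_size=1.0, burn_in=10_000, seed=0):
    """Generate a Markov chain sampling f(x) ~ exp(-x^2/2) via Metropolis."""
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for i in range(burn_in + n_steps):
        x_new = x + rng.uniform(-step_size, step_size)   # symmetric trial move
        # Accept with probability min(1, f(x_new)/f(x));
        # the normalization constant a cancels in the ratio.
        if rng.random() < math.exp(-(x_new * x_new - x * x) / 2.0):
            x = x_new
        if i >= burn_in:                                 # discard equilibration
            samples.append(x)
    return samples

samples = metropolis_chain(200_000)
moments = [sum(s**p for s in samples) / len(samples) for p in (1, 2, 3, 4)]
```

Note that only the ratio $f(x_{\text{new}})/f(x)$ enters the acceptance test, so the normalization constant $a$ never needs to be computed.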

(b) Plot the histogram of the $x$ values in the sequence and compare it with the PDF $f(x)$.
Long runs are necessary for obtaining good results: (i) we must wait sufficiently long for the sequence to reach the equilibrium distribution; (ii) the sequence used for averaging must be long enough to reduce statistical fluctuations.
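For part (b), one would typically plot the normalized histogram with `matplotlib.pyplot.hist(samples, bins=40, density=True)` against the exact density. The comparison itself can be sketched numerically without plotting; the bin count, range, and chain parameters below are illustrative choices, and the sampler is repeated here so the sketch is self-contained:

```python
import math
import random

def metropolis_chain(n_steps, step_size=1.0, burn_in=10_000, seed=1):
    """Markov chain sampling f(x) ~ exp(-x^2/2) via Metropolis (as in part a)."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for i in range(burn_in + n_steps):
        x_new = x + rng.uniform(-step_size, step_size)
        if rng.random() < math.exp(-(x_new * x_new - x * x) / 2.0):
            x = x_new
        if i >= burn_in:
            samples.append(x)
    return samples

def normalized_histogram(xs, lo=-4.0, hi=4.0, bins=40):
    """Bin the samples and normalize counts to a probability density."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in xs:
        if lo <= x < hi:
            counts[int((x - lo) / width)] += 1
    n = len(xs)
    # Return (bin center, estimated density) pairs.
    return [(lo + (i + 0.5) * width, c / (n * width))
            for i, c in enumerate(counts)]

xs = metropolis_chain(200_000)
hist = normalized_histogram(xs)
# Exact PDF with k_B T / k = 1: f(x) = exp(-x^2/2) / sqrt(2*pi).
f = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
max_err = max(abs(density - f(center)) for center, density in hist)
```

With a long enough chain, the histogram should track $f(x)$ closely; the residual deviation `max_err` shrinks as the run is lengthened, illustrating point (ii) about statistical fluctuations.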
