Generalized acceptance ratio

Wednesday, December 8th, 2010

The generalized acceptance ratio scheme is used intensively in molecular dynamics to obtain the free energy difference between two states, in particular in thermodynamic integration and free energy perturbation simulations. It is a post-simulation analysis tool that combines forward and backward sampling to minimize the error in the free energy. The idea was introduced by Charles H. Bennett in 1976, and the original version is called the Bennett Acceptance Ratio (BAR); here we focus on the generalized version, although the difference is very subtle.

Suppose we have a system with two states, 0 and 1. The free energy difference for transforming state 0 into state 1 is

\Delta A = \beta^{-1} \ln \frac{Q_{0}}{Q_{1}}

where Q_{0} and Q_{1} are the partition functions of state 0 and 1, respectively.
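
As a concrete toy illustration (my own, not part of the original derivation), consider two one-dimensional harmonic wells U_{i}=k_{i}x^{2}/2, whose partition functions are known in closed form; a few lines of Python (assuming numpy is available) reproduce \Delta A directly from the definition above:

import numpy as np

beta = 1.0            # inverse temperature, k_B T = 1
k0, k1 = 1.0, 4.0     # force constants of the two wells, U_i = k_i x^2 / 2

# partition functions of 1-D harmonic wells: Q_i = sqrt(2*pi/(beta*k_i))
Q0 = np.sqrt(2.0 * np.pi / (beta * k0))
Q1 = np.sqrt(2.0 * np.pi / (beta * k1))

dA = np.log(Q0 / Q1) / beta                  # Delta A = beta^{-1} ln(Q0/Q1)
print(dA, 0.5 * np.log(k1 / k0) / beta)      # both print ~0.693

Of course, for a real system Q_{0} and Q_{1} are never available in closed form; that is exactly why the estimators below are needed.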

Let’s do a little trick here

\frac{Q_{0}}{Q_{1}}=\frac{Q_{0}}{Q_{1}}\frac{\int d\vec{r}^{N} w(\vec{r}^{N})e^{-\beta (U_{0}+U_{1})}}{\int  d\vec{r}^{N} w(\vec{r}^{N})e^{-\beta (U_{0}+U_{1})}}

=\frac{Q_{0}}{Q_{1}} \frac{\int d\vec{r}^{N} w(\vec{r}^{N})e^{-\beta U_{0}}e^{ -\beta U_{1}}}{\int d\vec{r}^{N} w(\vec{r}^{N})e^{-\beta U_{0}}e^{ -\beta U_{1}}}

=\frac{\ll w e^{-\beta U_{0}}\gg_{1}}{\ll w e^{-\beta U_{1}}\gg_{0}}

=e^{\beta \Delta A} [Eq.1]
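
To see Eq.1 in action (again my own toy example, with an arbitrarily chosen positive w and the pair of harmonic wells introduced above), draw independent samples from the two Boltzmann distributions and evaluate the two averages; the estimate agrees with the exact value within statistical error:

import numpy as np

rng = np.random.default_rng(0)
beta, k1 = 1.0, 4.0
N0, N1 = 100000, 100000

# toy system: U0 = x^2/2, U1 = k1 (x-1)^2/2; both Boltzmann
# distributions are Gaussian, so we can sample them exactly
x0 = rng.normal(0.0, np.sqrt(1.0 / beta), N0)
x1 = rng.normal(1.0, np.sqrt(1.0 / (beta * k1)), N1)

U0 = lambda x: 0.5 * x**2
U1 = lambda x: 0.5 * k1 * (x - 1.0)**2

w = lambda x: 1.0 + np.cos(x)**2            # an arbitrary positive weight function

est = (w(x1) * np.exp(-beta * U0(x1))).mean() \
    / (w(x0) * np.exp(-beta * U1(x0))).mean()        # Eq.1 estimate of Q0/Q1
print(np.log(est) / beta, 0.5 * np.log(k1) / beta)   # both close to 0.693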

So we have introduced an arbitrary function w into the free energy difference calculation, and so far nothing is gained and nothing is lost. Before we continue to work with this new and, so far, trivial formula, we need to refresh our knowledge of error estimation using the delta method (first-order error propagation). Suppose we have a measurement f(X_{N}) of a sample X_{N} consisting of N independent configurations. Then the expectation value of f(X_{N}) can be written as the expansion:

E[f(X_{N})] = f(\overline{X}) + \sum_{j=1}^{n}\frac{f^{(j)}(\overline{X})}{j!}E[(X_{N}-\overline{X})^{j}]+O(N^{-(n+1)/2})

where f is bounded, and so are its first n+1 derivatives, \overline{X} is the true mean of X, and f^{(j)}(\overline{X}) denotes the jth derivative of f at \overline{X}. If we keep only the first-order term, we have

E[f(X_{N})] = f(\overline{X}) + \frac{\partial f}{\partial X}\vert _{X=\overline{X}} E(X_{N}-\overline{X})

the error for f(X_{N}) is then written as:

\sigma^{2}_{f(X)}=[\frac{\partial f}{\partial X}]^{2}\sigma^{2}(X) [Eq.2].
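
As a small check of Eq.2 (my own illustration, with f = \ln of a sample mean, which is exactly the form we will need below), the delta-method error estimate can be compared against a brute-force repetition of the whole measurement:

import numpy as np

rng = np.random.default_rng(1)
N = 2000

def experiment():
    # one "measurement": N independent exponential draws
    return rng.exponential(3.0, size=N)

sample = experiment()
X_N = sample.mean()
var_X = sample.var(ddof=1) / N               # variance of the sample mean
sigma_f = np.sqrt(var_X) / X_N               # Eq.2 with f = ln, |d ln X / dX| = 1/X

# brute-force check: repeat the whole measurement many times
reps = np.array([np.log(experiment().mean()) for _ in range(5000)])
print(sigma_f, reps.std(ddof=1))             # the two error estimates agree closely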

Now let us use Eq.2 to analyze the error in Eq.1 for the free energy difference. Applying Eq.2 with f = \ln to each of the two sample averages in Eq.1, and treating the two ensembles as statistically independent, leads to:

\sigma^{2}_{\beta \Delta A} = \frac{\ll [w e^{-\beta U_{1}}]^{2}\gg_{0}-\ll w e^{-\beta U_{1}}\gg_{0}^{2}}{N_{0}\ll w e^{-\beta U_{1}}\gg_{0}^{2}} + \frac{\ll [w e^{-\beta U_{0}}]^{2}\gg_{1}-\ll w e^{-\beta U_{0}}\gg_{1}^{2}}{N_{1}\ll w e^{-\beta U_{0}}\gg_{1}^{2}} [Eq.3]

where N_{0} and N_{1} are the numbers of independent configurations in states 0 and 1, respectively. Notice that the two denominators on the right-hand side of the equation come from [\frac{\partial f}{\partial X}]^2, as shown in Eq.2.
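
Continuing the toy example from above (again my own illustration), Eq.3 gives an error bar for the Eq.1 estimate directly from the two sets of samples; here with the simplest choice w = 1:

import numpy as np

rng = np.random.default_rng(2)
beta, k1 = 1.0, 4.0
N0, N1 = 50000, 50000

x0 = rng.normal(0.0, np.sqrt(1.0 / beta), N0)           # state 0 samples, U0 = x^2/2
x1 = rng.normal(1.0, np.sqrt(1.0 / (beta * k1)), N1)    # state 1 samples, U1 = k1 (x-1)^2/2

a0 = np.exp(-beta * 0.5 * k1 * (x0 - 1.0)**2)    # w exp(-beta U1) in state 0, with w = 1
a1 = np.exp(-beta * 0.5 * x1**2)                 # w exp(-beta U0) in state 1, with w = 1

beta_dA = np.log(a1.mean() / a0.mean())          # Eq.1
var = ((a0**2).mean() - a0.mean()**2) / (N0 * a0.mean()**2) \
    + ((a1**2).mean() - a1.mean()**2) / (N1 * a1.mean()**2)   # Eq.3
print(beta_dA, np.sqrt(var))                     # estimate and its statistical error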

Let us write out the ensemble averages in Eq.3 as explicit integrals:

\sigma^{2}_{\beta \Delta A}= \frac{\int d \vec{r}^{N} (\frac{Q_{0}}{N_{0}}e^{-\beta U_{1}} + \frac{Q_{1}}{N_{1}}e^{-\beta U_{0}} ) w^{2} e^{-\beta (U_{0}+U_{1})}}{(\int d \vec{r}^{N} w e^{-\beta (U_{0}+U_{1})})^{2}}-\frac{1}{N_{0}}-\frac{1}{N_{1}} [Eq.4]
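
To make the step from Eq.3 to Eq.4 explicit (this intermediate algebra is filled in by me), insert \ll \cdots \gg_{0} = Q_{0}^{-1}\int d\vec{r}^{N} \cdots e^{-\beta U_{0}} into the first term of Eq.3:

\frac{\ll [w e^{-\beta U_{1}}]^{2}\gg_{0}}{N_{0}\ll w e^{-\beta U_{1}}\gg_{0}^{2}} - \frac{1}{N_{0}} = \frac{Q_{0}}{N_{0}}\frac{\int d\vec{r}^{N} w^{2} e^{-\beta U_{1}} e^{-\beta (U_{0}+U_{1})}}{(\int d\vec{r}^{N} w e^{-\beta (U_{0}+U_{1})})^{2}} - \frac{1}{N_{0}}

and likewise for the second term with 0 and 1 interchanged; the sum of the two is Eq.4.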

One observation is that if we multiply w by a constant, the variance does not change at all. Since w is an arbitrary function, we are therefore free to fix its normalization through the constraint

\int d \vec{r}^{N} w e^{-\beta (U_{0}+U_{1})}=C_{1} [Eq.5].

Then minimizing the variance is formulated as minimizing Eq.4 subject to the equality constraint Eq.5, and we invoke a Lagrange multiplier to solve this constrained minimization problem:

\Lambda(w,\lambda)=\sigma^{2}_{\beta \Delta A}+\lambda (\int d \vec{r}^{N} w e^{-\beta (U_{0}+U_{1})}-C_{1})

where, using Eq.5 for the denominator of Eq.4, the variance is

\sigma^{2}_{\beta \Delta A}= \frac{1}{C_{1}^{2}}\int d \vec{r}^{N} (\frac{Q_{0}}{N_{0}}e^{-\beta U_{1}} + \frac{Q_{1}}{N_{1}}e^{-\beta U_{0}} ) w^{2} e^{-\beta (U_{0}+U_{1})} -\frac{1}{N_{0}}-\frac{1}{N_{1}}

The essence of the Lagrange multiplier method tells us that at the optimum

\nabla_{w,\lambda} \Lambda(w,\lambda)=0,

and this gives us the solution for w:

w = \frac{C_{2}}{\frac{Q_{0}}{N_{0}}e^{-\beta U_{1}} + \frac{Q_{1}}{N_{1}}e^{-\beta U_{0}}} [Eq.6]

where C_{2} is an arbitrary overall constant (it cancels when w is inserted into Eq.1), not to be confused with the shift constant C introduced below.
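
To make the step from the stationarity condition to Eq.6 explicit (algebra filled in by me), set the functional derivative of \Lambda with respect to w(\vec{r}^{N}) to zero:

\frac{\delta \Lambda}{\delta w} = \frac{2w}{C_{1}^{2}} (\frac{Q_{0}}{N_{0}}e^{-\beta U_{1}} + \frac{Q_{1}}{N_{1}}e^{-\beta U_{0}}) e^{-\beta (U_{0}+U_{1})} + \lambda e^{-\beta (U_{0}+U_{1})} = 0

The factor e^{-\beta (U_{0}+U_{1})} cancels, leaving w proportional to [\frac{Q_{0}}{N_{0}}e^{-\beta U_{1}} + \frac{Q_{1}}{N_{1}}e^{-\beta U_{0}}]^{-1}, which is Eq.6; \lambda is fixed by the constraint Eq.5 and only affects the irrelevant prefactor C_{2}.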

Now we substitute this w into Eq.1 and obtain

 \frac{Q_{0}}{Q_{1}}=\frac{\ll [1+e^{\beta (U_{0}-U_{1}+C)}]^{-1}\gg_{1}}{\ll [1+e^{\beta (U_{1}-U_{0}-C)} ]^{-1}\gg_{0}} e^{\beta C} [Eq.7]

where \beta C=\ln \frac{Q_{0} N_{1}}{Q_{1} N_{0}} and \beta \Delta A = \ln\frac{Q_{0}}{Q_{1}}.
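
To see where the Fermi-function form comes from (again, algebra filled in by me), substitute Eq.6 into the two averages of Eq.1 and divide numerator and denominator of w e^{-\beta U_{0}} by \frac{Q_{1}}{N_{1}}e^{-\beta U_{0}} (and of w e^{-\beta U_{1}} by \frac{Q_{0}}{N_{0}}e^{-\beta U_{1}}), using e^{\beta C}=\frac{Q_{0}N_{1}}{Q_{1}N_{0}}:

w e^{-\beta U_{0}} = \frac{C_{2} e^{-\beta U_{0}}}{\frac{Q_{0}}{N_{0}}e^{-\beta U_{1}} + \frac{Q_{1}}{N_{1}}e^{-\beta U_{0}}} = \frac{C_{2} N_{1}}{Q_{1}} [1+e^{\beta (U_{0}-U_{1}+C)}]^{-1}

w e^{-\beta U_{1}} = \frac{C_{2} N_{0}}{Q_{0}} [1+e^{\beta (U_{1}-U_{0}-C)}]^{-1}

Taking \ll \cdot \gg_{1} of the first expression and \ll \cdot \gg_{0} of the second and forming their ratio, the constant C_{2} cancels and the prefactor \frac{N_{1}Q_{0}}{N_{0}Q_{1}} = e^{\beta C} appears, which is exactly Eq.7.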

Let us reorganize Eq.7 to make it look nicer:

 e^{\beta \Delta A} = \frac{\ll [1+e^{\beta (U_{0}-U_{1}+C)}]^{-1}\gg_{1}}{\ll [1+e^{\beta  (U_{1}-U_{0}-C)} ]^{-1}\gg_{0}} e^{\beta C}

C= \Delta A+ \beta^{-1} \ln \frac {N_{1}}{ N_{0}}

Eq.7 is the so-called generalized acceptance ratio. It combines forward and backward sampling, because \Delta U = U_{1}-U_{0} enters with opposite signs in the numerator and the denominator of the right-hand side of Eq.7.

The free energy difference is obtained in a self-consistent way: since \Delta A appears on both sides of the nonlinear equation Eq.7 (through the constant C), we iterate until the value of the free energy difference converges.
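
A minimal sketch of this self-consistent iteration in Python, on the same toy pair of harmonic wells used above (my own example; the function and variable names are my own choices, and in a real application the \Delta U values would come from the two simulations):

import numpy as np

def fermi(x):
    # Fermi function f(x) = 1/(1+exp(x)); fine for this toy example
    # (a numerically safer form is advisable for large arguments in general)
    return 1.0 / (1.0 + np.exp(x))

rng = np.random.default_rng(3)
beta, k1 = 1.0, 4.0
N0, N1 = 20000, 20000

# samples from the two Boltzmann distributions (Gaussian for harmonic wells)
x0 = rng.normal(0.0, np.sqrt(1.0 / beta), N0)           # state 0: U0 = x^2/2
x1 = rng.normal(1.0, np.sqrt(1.0 / (beta * k1)), N1)    # state 1: U1 = k1 (x-1)^2/2

dU_fwd = beta * (0.5 * k1 * (x0 - 1.0)**2 - 0.5 * x0**2)   # beta (U1 - U0) on state-0 samples
dU_bwd = beta * (0.5 * x1**2 - 0.5 * k1 * (x1 - 1.0)**2)   # beta (U0 - U1) on state-1 samples

dA = 0.0                                     # initial guess for Delta A
for _ in range(200):
    C = dA + np.log(N1 / N0) / beta          # C = Delta A + beta^{-1} ln(N1/N0)
    num = fermi(dU_bwd + beta * C).mean()    # << [1+exp(beta(U0-U1+C))]^{-1} >>_1
    den = fermi(dU_fwd - beta * C).mean()    # << [1+exp(beta(U1-U0-C))]^{-1} >>_0
    dA_new = np.log(num / den) / beta + C    # Eq.7 rearranged for Delta A
    if abs(dA_new - dA) < 1e-10:
        break
    dA = dA_new

print(dA, 0.5 * np.log(k1) / beta)           # converges near the exact value 0.5 ln(k1)

With equal sample sizes the \ln(N_{1}/N_{0}) term vanishes, but it is kept here so that the code matches Eq.7 and its constant C literally.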

References:

1. Free Energy Calculations: Theory and Applications in Chemistry and Biology, Christophe Chipot and Andrew Pohorille, Springer.

2. Understanding Molecular Simulation: From Algorithms to Applications, D. Frenkel and B. Smit, Academic Press.

3. Good Practices in Free-Energy Calculations, Andrew Pohorille, Christopher Jarzynski, and Christophe Chipot, J. Phys. Chem. B 2010, 114, 10235-10253.

The three references are exceptionally good in their own ways and I absolutely recommend them!