This iterative approach to the problem[130,131] led to the
development of adaptive biasing potential methods that improve the
potential ``on the fly'' [132,60,58,133],
i.e., while the simulation is performed.
All these methods share the same basic idea, namely
``to introduce the concept of memory''[132] during a
simulation by changing the potential of mean force perceived by the
system, in order to penalize
conformations that have already been sampled.
The potential becomes history-dependent since
it is now a functional of the past trajectory along the reaction
coordinate.
Among these algorithms, the Wang-Landau [60] and the
metadynamics [58] algorithms have received the most attention in the
fields of Monte Carlo (MC) and Molecular Dynamics (MD) simulations,
respectively. This success is mainly due to the clarity and the ease
of implementation of the algorithm, which is essentially the same for the
two methods. The Wang-Landau algorithm was initially proposed as a
method to compute the density of states $g(E)$, and therefore the entropy
$S(E) = \ln g(E)$, of a simulated discrete system.
During a Wang-Landau MC simulation, $S(E)$ is estimated as
a histogram, increased by a fixed quantity at every
visited energy level, while moves are generated randomly and accepted with a Metropolis probability
$\min\left\{1, e^{-\Delta S}\right\}$,
where $\Delta S$ is the current estimate of the
entropy change produced by the move. Whereas a plain random walk in energy
would remain trapped in the entropy maxima,
the algorithm, which can easily be extended to the
computation of any entropy-related thermodynamic potential along a
generic collective variable, helps the system escape from these
maxima and reconstructs the entropy
$S(E)$. The metadynamics algorithm extends this approach to
off-lattice systems and to Molecular Dynamics.
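As a concrete illustration, the Wang-Landau update described above can be sketched as follows; the toy system (an open one-dimensional Ising chain) and all parameter values are illustrative choices, not taken from the original references.

```python
import math
import random

# Wang-Landau sketch on a toy discrete system: an open 1D Ising chain.
# The chain length, sweep count and entropy increment f are
# illustrative choices, not values from the original algorithm papers.

random.seed(0)
N = 10
spins = [1] * N

def energy(s):
    # nearest-neighbour coupling, open boundary conditions
    return -sum(s[i] * s[i + 1] for i in range(N - 1))

S = {}    # running estimate of the entropy S(E) = ln g(E)
H = {}    # visit histogram
f = 1.0   # fixed increment added to S(E) at each visit; in the full
          # algorithm f is progressively reduced as H(E) becomes flat

E = energy(spins)
for step in range(20000):
    i = random.randrange(N)
    spins[i] *= -1                      # propose a random spin flip
    E_new = energy(spins)
    # Metropolis acceptance with probability min{1, exp(-dS)},
    # where dS is the current estimate of the entropy change
    dS = S.get(E_new, 0.0) - S.get(E, 0.0)
    if dS <= 0 or random.random() < math.exp(-dS):
        E = E_new                       # accept the move
    else:
        spins[i] *= -1                  # reject: undo the flip
    # penalize the level just visited by raising its entropy estimate
    S[E] = S.get(E, 0.0) + f
    H[E] = H.get(E, 0) + 1
```

Because every visit raises $S(E)$ at the current level, the walker is systematically pushed toward rarely visited energies; the converged histogram of $S(E)$ estimates the entropy up to an additive constant.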
Metadynamics has been successfully applied to the
computation of free energy profiles in disparate fields, ranging from
chemical physics to biophysics and materials science. For a system in
the canonical ensemble, metadynamics reconstructs the free energy $F(s)$
along some reaction coordinate $s$
by means of a sum of Gaussian functions
deposited along the trajectory of the system. This sum
is used during the simulation as a biasing potential $V_G(s,t)$
that depends explicitly on time $t$, and, inverted in sign, it provides
the running estimate of the free energy profile:
\[
V_G(s,t) = w \sum_{t' < t} \exp\left[-\frac{\left(s - s(t')\right)^2}{2\,\delta s^2}\right],
\]
where $w$ and $\delta s$ are the height and the width of the Gaussians,
and the sum runs over the deposition times $t'$, taken at multiples of a
fixed stride $\tau_G$, with $s(t')$ the value of the reaction coordinate
at time $t'$.
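A minimal sketch of this deposition scheme is the following, using overdamped Langevin dynamics on a one-dimensional double-well potential; the potential, the dynamics and all parameter values are illustrative assumptions, not taken from the original references.

```python
import math
import random

# Metadynamics sketch: overdamped Langevin dynamics on a 1D double well,
# with Gaussians of height w and width ds deposited every tau_G steps.
# All parameters and the model potential are illustrative.

random.seed(0)
w, ds, tau_G = 0.2, 0.1, 100       # Gaussian height, width, stride
dt, kT = 1e-3, 0.2                 # time step and temperature
centers = []                       # deposition points s(t')

def U(s):                          # double well with minima at s = +/-1
    return (s * s - 1.0) ** 2

def dU(s):
    return 4.0 * s * (s * s - 1.0)

def V_G(s):                        # history-dependent bias V_G(s, t)
    return sum(w * math.exp(-(s - c) ** 2 / (2 * ds * ds)) for c in centers)

def dV_G(s):                       # gradient of the bias
    return sum(-w * (s - c) / (ds * ds)
               * math.exp(-(s - c) ** 2 / (2 * ds * ds)) for c in centers)

s = -1.0                           # start in the left well
for step in range(1, 20001):
    force = -dU(s) - dV_G(s)       # force from U + V_G
    s += force * dt + math.sqrt(2.0 * kT * dt) * random.gauss(0.0, 1.0)
    if step % tau_G == 0:
        centers.append(s)          # deposit a new Gaussian at s(t')

# the free energy profile is estimated as F(s) ~ -V_G(s, t)
F_est = {x / 10: -V_G(x / 10) for x in range(-15, 16)}
```

As the hills accumulate, the well being explored is progressively flattened and the system is pushed over the barrier; the sign-inverted bias then approximates the free energy profile over the visited region.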
The thermodynamic work $W$ spent
in changing the potential from the
original Hamiltonian $H$ to $H + V_G(s,t)$
can be computed through the relation
$W = \int_0^t \mathrm{d}t' \, \left.\frac{\partial V_G(s,t')}{\partial t'}\right|_{s = s(t')}$. In the
limit of an adiabatic transformation, this quantity is equal to the Helmholtz
free energy difference $\Delta F = F_1 - F_0$
between two systems with energy functions $H_0$ and $H_1$,
where $H_0 = H$ and $H_1 = H + V_G(s,t)$ [134]. However, if the process
is too fast with respect to the ergodic time scale, part of the work
spent during the switching will be dissipated in the system, resulting
in a non-equilibrium, non-canonical distribution, and in a systematic
error in the free energy estimate. In particular, it is assumed that
during a metadynamics simulation all the microscopic variables
other than the macroscopic reaction coordinate $s$ remain
in the equilibrium state corresponding to the current value of $s$ [135].
This property is known as the Markov property, and it summarizes the main
assumption of the algorithm: all the slow modes of the system coupled
to the reaction under study have to be known a priori, and they
have to be included in the set of reaction coordinates.
Therefore, at variance with the
methods presented in the previous chapters, metadynamics should be
considered a quasi-equilibrium method, in which the knowledge of
the variables that capture the mechanism of a reaction is exploited to
gain insight into the transition states and, more
generally, to compute the free energy landscape along the relevant
reaction coordinates.