Algorithms

molmod.binning – Binning

Binning is a useful technique for efficiently calculating all distances between a number of coordinates when you are only interested in the distances below a given cutoff. The algorithm consists of two major steps:

  1. Divide the given set of coordinates into bins on a regular grid
  2. Calculate the distances (or other useful things) between coordinates in neighboring bins.
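
The following is a minimal, illustrative sketch of these two steps in plain Python and numpy for a non-periodic system; the function name pairs_within is hypothetical, and the classes below implement the same idea far more carefully (periodic systems, non-cubic bins):

import numpy as np
from itertools import product

def pairs_within(coordinates, cutoff):
    # Step 1: assign each point to an integer grid cell with edge length
    # equal to the cutoff, so that any pair within the cutoff sits in the
    # same cell or in two neighboring cells.
    bins = {}
    for i, point in enumerate(coordinates):
        key = tuple(np.floor(point/cutoff).astype(int))
        bins.setdefault(key, []).append(i)
    # Step 2: compare points only within the same or neighboring cells
    # (27 cells in three dimensions).
    for key, members in bins.items():
        for offset in product((-1, 0, 1), repeat=3):
            other = bins.get(tuple(k + o for k, o in zip(key, offset)))
            if other is None:
                continue
            for i in members:
                for j in other:
                    if i < j:  # count each pair only once
                        distance = np.linalg.norm(coordinates[i] - coordinates[j])
                        if distance <= cutoff:
                            yield i, j, distance
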
class molmod.binning.PairSearchIntra(coordinates, cutoff, unit_cell=None, grid=None)

Iterator over all pairs of coordinates with a distance below a cutoff.

Example usage:

import numpy as np
from molmod.binning import PairSearchIntra

coordinates = np.random.uniform(0, 10, (10, 3))
for i, j, delta, distance in PairSearchIntra(coordinates, 2.5):
    print(i, j, distance)

Note that for periodic systems the minimum image convention is applied.

Arguments:
coordinates – An Nx3 numpy array with Cartesian coordinates
cutoff – The cutoff radius for the pair distances. Distances larger than the cutoff are neglected in the pair search.
Optional arguments:
unit_cell – Specifies the periodic boundary conditions
grid – Specification of the grid: either a floating point number, which results in cubic bins with the given edge length, or a UnitCell object, which results in non-cubic bins. In the latter case, and when a unit_cell is given, the unit cell vectors must be integer linear combinations of the grid cell vectors (for the directions that are active in the unit cell). If this is not the case, a ValueError is raised.

The default value of grid depends on other parameters:

  1. When no unit cell is given, it is equal to cutoff/2.9.
  2. When a unit cell is given, the grid cell is as close to cubic as possible, with spacings below cutoff/2 that are integer divisions of the unit cell spacings
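
For periodic systems, a hedged sketch (it assumes that molmod.unit_cells.UnitCell accepts a 3x3 matrix whose columns are the cell vectors; see molmod.unit_cells for the authoritative constructor):

import numpy as np
from molmod.binning import PairSearchIntra
from molmod.unit_cells import UnitCell

# Cubic box with edge length 10; the columns of the matrix are cell vectors.
unit_cell = UnitCell(np.identity(3, float)*10)
coordinates = np.random.uniform(0, 10, (10, 3))
for i, j, delta, distance in PairSearchIntra(coordinates, 2.5, unit_cell):
    # delta is the relative vector from i to j under the minimum image convention
    print(i, j, distance)
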
class molmod.binning.PairSearchInter(coordinates0, coordinates1, cutoff, unit_cell=None, grid=None)

Iterator over all pairs of coordinates, one from each set, with a distance below a cutoff.

Example usage:

import numpy as np
from molmod.binning import PairSearchInter

coordinates0 = np.random.uniform(0, 10, (10, 3))
coordinates1 = np.random.uniform(0, 10, (10, 3))
for i, j, delta, distance in PairSearchInter(coordinates0, coordinates1, 2.5):
    print(i, j, distance)

Note that for periodic systems the minimum image convention is applied.

Arguments:
coordinates0 – An Nx3 numpy array with Cartesian coordinates
coordinates1 – An Nx3 numpy array with Cartesian coordinates
cutoff – The cutoff radius for the pair distances. Distances larger than the cutoff are neglected in the pair search.
Optional arguments:
unit_cell – Specifies the periodic boundary conditions
grid – Specification of the grid: either a floating point number, which results in cubic bins with the given edge length, or a UnitCell object, which results in non-cubic bins. In the latter case, and when a unit_cell is given, the unit cell vectors must be integer linear combinations of the grid cell vectors (for the directions that are active in the unit cell). If this is not the case, a ValueError is raised.
The default value of grid depends on other parameters:
  1. When no unit cell is given, it is equal to cutoff/2.9.
  2. When a unit cell is given, the grid cell is as close to cubic as possible, with spacings below cutoff/2 that are integer divisions of the unit cell spacings

molmod.clusters – Clustering

Given a mixed set of related and unrelated data points, it is often useful to extract clusters of related items. The basic workflow is as follows:

cf = ClusterFactory()
while foo:                          # placeholder loop over your data
    cf.add_related(some, related, items)
for cluster in cf.iter_clusters():
    print(cluster)
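
A minimal concrete run of this workflow, using the documented get_clusters method and arbitrary integer items:

from molmod.clusters import ClusterFactory

cf = ClusterFactory()
# Items passed to a single add_related call are considered related.
cf.add_related(0, 1)
cf.add_related(1, 2)   # shares item 1, so it merges with the first cluster
cf.add_related(5, 6)   # unrelated, becomes a second cluster
for cluster in cf.get_clusters():
    print(cluster.items)  # e.g. the groups {0, 1, 2} and {5, 6}
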
class molmod.clusters.Cluster(items)

A set of related items

This is the most elementary implementation of a cluster. In practice one is often interested in extending the functionality of a cluster.

Argument:
items – the items that belong in this cluster
add_item(item)

Add an item to a cluster

update(other)

Merge another cluster into this cluster

class molmod.clusters.RuleCluster(items, rules=None)

Clusters based on rules

This is a typical derived Cluster class where the relation between the items consists of one or more rules, which one would like to know at the end of the clustering algorithm.

An example application is the shake algorithm where it is beneficial to group constraints that share certain degrees of freedom into a cluster of equations.

Argument:
items – the items that belong in this cluster
Optional argument:
rules – a list of rules that binds the items
update(other)

Extend the current cluster with data from another cluster

class molmod.clusters.ClusterFactory(cls=<class 'molmod.clusters.Cluster'>)

A very basic clustering algorithm

Optional argument:
cls – A class to construct new cluster objects [default=Cluster]

add_related(*objects)

Add related items

The arguments can be individual items or cluster objects containing several items.

When two groups of related items share one or more common members, they will be merged into one cluster.

get_clusters()

Returns a set with the clusters

molmod.ic – Internal coordinates

This implementation is pure python and sacrifices computational efficiency on the altar of programming flexibility. It is really easy to implement new types of internal coordinates since one only has to enter the formula that evaluates the internal coordinate. First and second order derivatives towards Cartesian coordinates require only a minimum of extra work.

Two auxiliary classes, Scalar and Vector3, support most of the mathematical operations required to compute the internal coordinates. Additionally, they know the chain rule for each operation and can therefore evaluate the derivatives simultaneously.

class molmod.ic.Scalar(size, deriv=0, value=0, index=None)

A scalar object with optional first and second order derivatives

Each input value with respect to which derivatives are computed has its own index. The numerical values of the derivatives are stored in the arrays self.d and self.dd. The value of the scalar itself is self.v.

Arguments:
size – The number of inputs on which this internal coordinate depends, e.g. a distance depends on 6 Cartesian coordinates.
deriv – Consider up to deriv order derivatives. (max=2)
value – The initial value.
index – If this scalar is one of the input variables, this is its index.

The scalar object supports several in place modifications.

copy()

Return a deep copy

inv()

In place invert

results()

Return the value and, optionally, the first and second order derivatives

class molmod.ic.Vector3(size, deriv=0, values=(0, 0, 0), indexes=(None, None, None))

A three-dimensional vector with optional first and second order derivatives.

This object is nothing more than a container for three Scalar objects.

Arguments:
size – The number of inputs on which this internal coordinate depends, e.g. a distance depends on 6 Cartesian coordinates.
deriv – Consider up to deriv order derivatives. (max=2)
values – The initial values.
indexes – If this vector is one of the input variables, these are the indexes of the components.
copy()

Return a deep copy

norm()

Return a Scalar object with the norm of this vector

molmod.ic.dot(r1, r2)

Compute the dot product

Arguments:
r1, r2 – two Vector3 objects

(Returns a Scalar)

molmod.ic.cross(r1, r2)

Compute the cross product

Arguments:
r1, r2 – two Vector3 objects

(Returns a Vector3)

molmod.ic.bond_length(rs, deriv=0)

Compute the distance between the two points rs[0] and rs[1]

Arguments:
rs – two numpy arrays, each with three elements
deriv – the derivatives to be computed: 0, 1 or 2 [default=0]

When no derivatives are computed, a tuple with a single result is returned
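
A short, hedged usage sketch (the tuple return convention follows the note above; the shape of the derivative array is printed rather than assumed):

import numpy as np
from molmod.ic import bond_length

rs = np.array([[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]])
d, = bond_length(rs)                # deriv=0: a tuple with a single element
print(d)                            # 5.0
d, grad = bond_length(rs, deriv=1)  # value plus derivatives towards the
print(grad.shape)                   # six Cartesian coordinates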

molmod.ic.pair_distance(rs, deriv=0)

Compute the distance between the two points rs[0] and rs[1]

Arguments:
rs – two numpy arrays, each with three elements
deriv – the derivatives to be computed: 0, 1 or 2 [default=0]

When no derivatives are computed, a tuple with a single result is returned

molmod.ic.bend_cos(rs, deriv=0)

Compute the cosine of the angle between the vectors rs[0]-rs[1] and rs[2]-rs[1]

Arguments:
rs – three numpy arrays, each with three elements
deriv – the derivatives to be computed: 0, 1 or 2 [default=0]

When no derivatives are computed, a tuple with a single result is returned

molmod.ic.bend_angle(rs, deriv=0)

Compute the angle between the vectors rs[0]-rs[1] and rs[2]-rs[1]

Arguments:
rs – three numpy arrays, each with three elements
deriv – the derivatives to be computed: 0, 1 or 2 [default=0]

When no derivatives are computed, a tuple with a single result is returned
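
For instance, a right angle (a hedged sketch following the same return convention):

import numpy as np
from molmod.ic import bend_angle

# The vectors rs[0]-rs[1] and rs[2]-rs[1] are orthogonal.
rs = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
angle, = bend_angle(rs)
print(angle)   # pi/2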

molmod.ic.dihed_cos(rs, deriv=0)

Compute the cosine of the angle between the planes rs[0], rs[1], rs[2] and rs[1], rs[2], rs[3]

Arguments:
rs – four numpy arrays, each with three elements
deriv – the derivatives to be computed: 0, 1 or 2 [default=0]

molmod.ic.dihed_angle(rs, deriv=0)

Compute the angle between the planes rs[0], rs[1], rs[2] and rs[1], rs[2], rs[3]

The sign convention corresponds to the IUPAC definition of the torsion angle: http://dx.doi.org/10.1351/goldbook.T06406

Arguments:
rs – four numpy arrays, each with three elements
deriv – the derivatives to be computed: 0, 1 or 2 [default=0]

When no derivatives are computed, a tuple with a single result is returned
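
For instance, a planar trans (anti) arrangement of four points (a hedged sketch; the torsion angle is ±pi, with the sign fixed by the IUPAC convention above):

import numpy as np
from molmod.ic import dihed_angle

rs = np.array([
    [0.0,  1.0, 0.0],
    [0.0,  0.0, 0.0],
    [1.0,  0.0, 0.0],
    [1.0, -1.0, 0.0],
])
angle, = dihed_angle(rs)
print(angle)   # ±pi for the trans arrangement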

molmod.ic.opbend_cos(rs, deriv=0)

Compute the cosine of the angle between the vector (rs[0],rs[3]) and plane rs[0],rs[1],rs[2]

Arguments:
rs – four numpy arrays, each with three elements
deriv – the derivatives to be computed: 0, 1 or 2 [default=0]

molmod.ic.opbend_angle(rs, deriv=0)

Compute the angle between the vector rs[0], rs[3] and the plane rs[0], rs[1], rs[2]

The sign convention is as follows: positive if rs[3] lies in the space above the plane rs[0], rs[1], rs[2] and negative if it lies below. 'Above' is defined by the right-hand rule going from rs[0]-rs[1] to rs[0]-rs[2].

Arguments:
rs – four numpy arrays, each with three elements
deriv – the derivatives to be computed: 0, 1 or 2 [default=0]

When no derivatives are computed, a tuple with a single result is returned.

molmod.ic.opbend_dist(rs, deriv=0)

Compute the out-of-plane distance, i.e. the distance between atom rs[3] and plane rs[0],rs[1],rs[2]

Arguments:
rs – four numpy arrays, each with three elements
deriv – the derivatives to be computed: 0, 1 or 2 [default=0]

molmod.ic.opbend_mcos(rs, deriv=0)

Compute the mean cosine of the three out-of-plane bending angles (see opbend_angle)

molmod.ic.opbend_mangle(rs, deriv=0)

Compute the mean value of the three out-of-plane bending angles (see opbend_angle)

molmod.minimizer – Minimizer

The implementation is mainly concerned with robustness, rather than computational efficiency. Example usage:

import numpy as np
from molmod.minimizer import (
    Minimizer, ConjugateGradient, NewtonLineSearch, ConvergenceCondition,
    StopLossCondition,
)

def fun(x, do_gradient=False):
    value = 2 + np.sin(x[0]) + np.cos(x[1]) + x[0]*x[0] + x[1]*x[1] - x[0]*x[1]
    if do_gradient:
        gradient = np.array([
            np.cos(x[0]) + 2*x[0] - x[1],
            -np.sin(x[1]) + 2*x[1] - x[0],
        ])
        return value, gradient
    else:
        return value

x_init = np.zeros(2, float)
search_direction = ConjugateGradient()
line_search = NewtonLineSearch()
convergence = ConvergenceCondition(grad_rms=1e-6, step_rms=1e-6)
stop_loss = StopLossCondition(max_iter=50)
minimizer = Minimizer(
    x_init, fun, search_direction, line_search, convergence, stop_loss,
    anagrad=True, verbose=True,
)
print("optimum", minimizer.x, fun(minimizer.x))

The signature of the function fun must always be the same as in the example. The first argument, x, is mandatory and contains a 1D numpy array with the function arguments. The second argument, do_gradient, is optional with default value False.

The returned values must also follow the same convention as in the example. When do_gradient==True, two return values are given. The first one is the function value and the second one is a 1D numpy array with the partial derivatives of the function towards the arguments. When do_gradient==False, only one value is returned, i.e. the function value.

class molmod.minimizer.SearchDirection

Abstract base class for a search direction method

is_sd()

Return True if the last direction was steepest descent

reset()

Reset the internal state of the search direction algorithm

This implies that the next direction will be steepest descent.

update(gradient, step)

Update the search direction given the latest gradient and step

class molmod.minimizer.SteepestDescent

The steepest descent method.

This method simply sets the search direction to minus the gradient. This method is the least efficient choice and becomes very inefficient for ill-conditioned problems.

is_sd()

Return True if the last direction was steepest descent

Always returns True in this case.

reset()

Reset the internal state of the search direction algorithm

Does nothing in this case.

update(gradient, step)

Update the search direction given the latest gradient and step

class molmod.minimizer.ConjugateGradient

The conjugate gradient method

This method is always superior to the steepest descent method in practical applications. An automatic reset mechanism reverts the search direction to the steepest descent when beta becomes negative.

is_sd()

Return True if the last direction was steepest descent

reset()

Reset the internal state of the search direction algorithm

update(gradient, step)

Update the search direction given the latest gradient and step

class molmod.minimizer.QuasiNewton

The quasi Newton method

is_sd()

Return True if the last direction was steepest descent

reset()

Reset the internal state of the search direction algorithm

update(gradient, step)

Update the search direction given the latest gradient and step

class molmod.minimizer.LineSearch(qmax=None)

Abstract base class for a line search

Optional argument:
qmax – The maximum step size of a line search
__call__(fun, f0, initial_step_size, epsilon)

Return the value that minimizes the one-dimensional function ‘fun’

Arguments:
fun – function to minimize (one-dimensional)
f0 – the function value at the starting point q=0
initial_step_size – a guess of the order of magnitude of step size to be found.
epsilon – a value that is small compared to initial_step_size
Returns:
success – a boolean indicating that the line search resulted in an improved solution
wolfe – a boolean indicating that the new solution satisfies Wolfe conditions
qopt – the position of the new solution on the line
fopt – the corresponding function value
limit_step(step)

Clip a step to the maximum allowed range

class molmod.minimizer.GoldenLineSearch(qtol, qmax=None, max_iter=None)

The golden section line search algorithm

Argument:
qtol – The threshold for displacements along the line. (When displacements become smaller than qtol, we assume convergence.)
Optional arguments:
qmax – The maximum step size of a line search
max_iter – the maximum number of iterations for the line search (only applies to the bracketing part)
__call__(fun, f0, initial_step_size, epsilon)

Return the value that minimizes the one-dimensional function ‘fun’

Arguments:
fun – function to minimize (one-dimensional)
f0 – the function value at the starting point q=0
initial_step_size – a guess of the order of magnitude of step size to be found. This is used to bracket the minimum.
epsilon – a value that is small compared to initial_step_size
Returns:
success – a boolean indicating that the line search resulted in an improved solution
wolfe – a boolean indicating that the new solution satisfies Wolfe conditions
qopt – the position of the new solution on the line
fopt – the corresponding function value

P.S. The wolfe parameter is always True, but this aspect is not guaranteed to be correct. Never use the GoldenLineSearch in combination with a quasi Newton method.

class molmod.minimizer.NewtonLineSearch(c1=0.0001, c2=0.1, max_iter=5, qmax=None)

The Newton line search algorithm

When the curvature is negative, a steepest descent step is tried, using the step size from the previous step multiplied by 1.5. If the new function value is higher, the step size is reduced by a factor of two. The latter is repeated at most max_iter times. If no lower value is found, the line search fails. (This is known as the backtracking algorithm.)

When the curvature is positive, Newton steps are performed. When the function value or the absolute value of the derivative at a new point increases, the procedure is interrupted and the last descent point is used. When there is no last descent point, backtracking is used. The Wolfe conditions are used to determine the convergence of the line search. At most max_iter Newton steps are allowed.

Optional arguments:
c1 – The coefficient in the first Wolfe condition (sufficient decrease of the function) [default=1e-4]
c2 – The coefficient in the second Wolfe condition (sufficient decrease of the derivative) [default=1e-1]. The default is optimal for the conjugate gradient method.
max_iter – the maximum number of iterations in the line search.
qmax – The maximum step size of a line search
__call__(fun, f0, initial_step_size, epsilon)

Return the value that minimizes the one-dimensional function ‘fun’

Arguments:
fun – function to minimize (one-dimensional)
f0 – the function value at the starting point q=0
initial_step_size – a guess of the order of magnitude of the step size to be found. This is used in case the default Newton line search fails and the routine reverts to backtracking as a last resort.
epsilon – a value that is small compared to initial_step_size
Returns:
success – a boolean indicating that the line search resulted in an improved solution
wolfe – a boolean indicating that the new solution satisfies Wolfe conditions
qopt – the position of the new solution on the line
fopt – the corresponding function value

class molmod.minimizer.Preconditioner(fun, each, grad_rms)

Base class for preconditioners

A preconditioner is a (linear) transformation of the unknowns to a new basis in which the Hessian of the minimum becomes a better-conditioned matrix. In these new coordinates the convergence of the minimizer will be faster. Picking the right preconditioner is a matter of experience. One must balance the extra computational cost of the preconditioner against the gains in computational cost because of the reduced number of iterations in the minimizer.

The preconditioners in this package act as wrappers around the function to be optimized. One just replaces a function by the preconditioner in the constructor of the Minimizer object. E.g.

>>> Minimizer(fun, ...)

becomes:

>>> Minimizer(SomePreconditioner(fun, ...), ...)

Also note that the convergence and stop loss conditions are not affected by the preconditioner. They get the gradient and step in original coordinates.

Arguments:
fun – the function whose arguments must be transformed
each – update the linear transformation after every 'each' minimizer steps without an update
grad_rms – only update when the rms value of the gradient (in the original coordinates) is below this threshold
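
Concretely, a hedged sketch that reuses fun and the classes from the minimizer example at the top of this module; the each and grad_rms values are illustrative, and get_final (documented under Minimizer below) returns the solution in original coordinates:

from molmod.minimizer import DiagonalPreconditioner

# Update the transformation at most every 10 steps, and only once the RMS
# gradient in original coordinates has dropped below 1e-2.
prec = DiagonalPreconditioner(fun, 10, 1e-2)
minimizer = Minimizer(
    x_init, prec, ConjugateGradient(), NewtonLineSearch(),
    ConvergenceCondition(grad_rms=1e-6, step_rms=1e-6),
    StopLossCondition(max_iter=50), anagrad=True, verbose=False,
)
print("optimum", minimizer.get_final())
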
__call__(x_prec, do_gradient=False)

The actual wrapper around the function call.

Arguments:
x_prec – the unknowns in preconditioned coordinates
do_gradient – if True, the gradient is also computed and transformed to preconditioned coordinates

Note that this implementation assumes that the preconditioner is a linear transformation.

do(x_orig)

Transform the unknowns to preconditioned coordinates

This method also transforms the gradient to original coordinates

undo(x_prec)

Transform the unknowns to original coordinates

This method also transforms the gradient to preconditioned coordinates

update(counter, f, x_orig, gradient_orig)

Perform an update of the linear transformation

Arguments:
counter – the iteration counter of the minimizer
f – the function value at x_orig
x_orig – the unknowns in original coordinates
gradient_orig – the gradient in original coordinates
Return value:
do_update – True when an update is required.

Derived classes must call this method to test whether the preconditioner requires updating. Derived classes must also return this boolean to their caller.

class molmod.minimizer.DiagonalPreconditioner(fun, each, grad_rms, epsilon=0.001, scale_limit=0.0)

The diagonal preconditioner

This preconditioner derives a diagonal transformation based on a finite difference approximation of the diagonal elements of the Hessian. The transformation is such that these diagonal elements would become equal after the transformation. This type of preconditioner is especially useful when the unknowns have different units. In many cases this preconditioner is a good trade-off between accelerated convergence and extra cost. In particular, when it is combined with a conjugate gradient minimizer, it can be more efficient than a quasi Newton method.

(For more general info on preconditioners, read the doc string of the Preconditioner base class.)

Arguments:
fun – the function whose arguments must be transformed
each – update the linear transformation after every 'each' minimizer steps without an update
grad_rms – only update when the rms value of the gradient (in the original coordinates) is below this threshold
Optional argument:
epsilon – a small scalar used for the finite differences (taken in previous preconditioned coordinates) [default=1e-3]
scale_limit – scales smaller than scale_limit times the largest scale are fixed to scale_limit times the largest scale
do(x_orig)

Transform the unknowns to preconditioned coordinates

This method also transforms the gradient to original coordinates

undo(x_prec)

Transform the unknowns to original coordinates

This method also transforms the gradient to preconditioned coordinates

update(counter, f, x_orig, gradient_orig)

Perform an update of the linear transformation

Arguments:
counter – the iteration counter of the minimizer
f – the function value at x_orig
x_orig – the unknowns in original coordinates
gradient_orig – the gradient in original coordinates
Return value:
done_update – True when an update has been done

The minimizer must reset the search direction method when an update has been done.

class molmod.minimizer.FullPreconditioner(fun, each, grad_rms, epsilon=0.001)

The full preconditioner

This preconditioner is a bit experimental. The transformation is such that the Hessian in the new coordinates becomes a constant matrix, i.e. diagonal with all elements the same.

Arguments:
fun – the function whose arguments must be transformed
each – update the linear transformation after every 'each' minimizer steps without an update
grad_rms – only update when the rms value of the gradient (in the original coordinates) is below this threshold
Optional argument:
epsilon – a small scalar used for the finite differences (taken in original coordinates) [default=1e-3]
do(x_orig)

Transform the unknowns to preconditioned coordinates

This method also transforms the gradient to original coordinates

undo(x_prec)

Transform the unknowns to original coordinates

This method also transforms the gradient to preconditioned coordinates

update(counter, f, x_orig, gradient_orig)

Perform an update of the linear transformation

Arguments:
counter – the iteration counter of the minimizer
f – the function value at x_orig
x_orig – the unknowns in original coordinates
gradient_orig – the gradient in original coordinates
Return value:
done_update – True when an update has been done

The minimizer must reset the search direction method when an update has been done.

class molmod.minimizer.ConvergenceCondition(step_rms=None, step_max=None, grad_rms=None, grad_max=None, rel_grad_rms=None, rel_grad_max=None)

Callable object that tests the convergence of the minimizer

Optional arguments:
step_rms – threshold for the RMS value of the step vector in the iterative minimizer
step_max – threshold for the maximum component of the step vector in the iterative minimizer
grad_rms – threshold for the RMS value of the gradient components of the function to be minimized
grad_max – threshold for the maximum value of the gradient components of the function to be minimized
rel_grad_rms – threshold for the RMS value of the gradient components of the function to be minimized, divided by the function value
rel_grad_max – threshold for the maximum value of the gradient components of the function to be minimized, divided by the function value

Only the arguments that are present define when the minimization has converged. All actual values must go below the given thresholds.

__call__(grad, step, f)

Return True when the minimizer has converged

Arguments:
grad – The gradient at the current point.
step – The last step vector.
f – The last function value.
get_header()

Returns the header for screen logging of the minimization

class molmod.minimizer.StopLossCondition(max_iter=None, fun_margin=None, grad_margin=None, step_min=None)

Callable object that checks if minimizer has lost track

Optional arguments:
max_iter – the maximum number of iterations allowed
fun_margin – if the function to be minimized goes above the lowest value so far plus this margin, the minimization is aborted
grad_margin – if the RMS value of the gradient components goes above the lowest value plus this threshold, the minimization is aborted
step_min – If the RMS step size drops below this margin, the optimization is interrupted.

Only the arguments that are present define when the minimization has lost track.

__call__(counter, fn, gradient, step)

Return True when the minimizer has lost track

reset()

class molmod.minimizer.Constraints(equations, threshold, rcond1=1e-10, max_iter=100)

Algorithm to apply half-open and conventional constraints during minimization.

The constraint solver internally works with a constraint cost function, defined as the sum of the squared constraint functions. The constraints are satisfied by bringing this cost function to zero. This is done in an iterative fashion. At each iteration, two attempts are made to lower the constraint cost function:

  1. Take a Levenberg-Marquardt-like step.
  2. If (1) fails or is too slow, take a step to fix only one of the constraints, i.e. the constraint which has the largest mismatch.
Arguments:
equations – a list of (sign, equation) pairs. sign can be +1, 0 or -1. equation is a function with one argument: the vector of unknowns in the minimizer. It returns the value of the constraint function and the gradient of that function. If sign is +1, the parameters are forced into the region where the constraint function is positive (similarly for -1, the constraint function is forced to be negative). When the sign is 0, the constraint function is forced to be zero.
threshold – The acceptable deviation from the constraints. The deviation is defined as the Euclidean norm of the (active) constraint functions.
Optional arguments:
rcond1 – During the iterative solution of the constraint equations in the shake algorithm, it may happen that an ill-conditioned set of equations must be solved. In that case rcond1 is the first ridge parameter used to regularize these equations. If needed, the ridge parameter is multiplied by 10 until a better fit of the constraints is found.
max_iter – The maximum number of iterations in the shake algorithm. This is used in several functions.
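
A hedged sketch of a constrained minimization, reusing fun and the classes from the minimizer example at the top of this module; the constraint below forces the unknowns onto the unit circle and the threshold is illustrative:

import numpy as np
from molmod.minimizer import Constraints

def circle(x):
    # Constraint function and its gradient: x[0]**2 + x[1]**2 - 1 = 0.
    return x[0]**2 + x[1]**2 - 1.0, np.array([2*x[0], 2*x[1]])

constraints = Constraints([(0, circle)], 1e-10)
minimizer = Minimizer(
    np.array([0.1, 2.0]), fun, ConjugateGradient(), NewtonLineSearch(),
    ConvergenceCondition(grad_rms=1e-6, step_rms=1e-6),
    StopLossCondition(max_iter=50), anagrad=True, verbose=False,
    constraints=constraints,
)
print(minimizer.get_final())
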
free_shake(x)

Brings unknowns to the constraints.

Arguments:
x – The unknowns.
project(x, vector)

Project a vector (gradient or direction) on the active constraints.

Arguments:
x – The unknowns.
vector – A numpy array with a direction or a gradient.

The return value is a gradient or direction, where the components that point away from the constraints are projected out. In the case of half-open constraints, the projection is only active if the vector points into the infeasible region.

safe_shake(x, fun, fmax)

Brings unknowns to the constraints, without increasing fun above fmax.

Arguments:
x – The unknowns.
fun – The function being minimized.
fmax – The highest allowed value of the function being minimized.

The function fun takes a mandatory argument x and an optional argument do_gradient:

x – the arguments of the function to be tested
do_gradient – When False, only the function value is returned. When True, a 2-tuple with the function value and the gradient is returned [default=False]

class molmod.minimizer.Minimizer(x_init, fun, search_direction, line_search, convergence_condition, stop_loss_condition, anagrad=False, epsilon=1e-06, verbose=True, callback=None, initial_step_size=1.0, constraints=None, debug_line=False)

A flexible multivariate minimizer

The minimizer searches (in principle) for the ‘nearest’ local minimum.

Arguments:
x_init – the initial guess for the minimum
fun – function to be minimized (see below)
search_direction – a SearchDirection object
line_search – a LineSearch object
convergence_condition – a ConvergenceCondition object
stop_loss_condition – a StopLossCondition object
Optional arguments:
anagrad – when set to True, analytical gradients are used
epsilon – a small value compared to expected changes in the unknowns [default=1e-6]. It is used to compute finite differences.
verbose – print progress information on screen [default=True]
callback – optional callback routine called after each iteration. The callback routine gets the minimizer as its first and only argument. [default=None]
initial_step_size – The initial step size used in the first call to the line search. For later line searches, the actual step size found by the previous line search is used as the initial step size. How the initial step size is used depends on the line search algorithm.
constraints – An instance of the Constraints class.
debug_line – If True, and when the line search fails, a plot with the line function will be made with matplotlib and written as 'line_failed_%s.png' % isodatetime.

The function fun takes a mandatory argument x and an optional argument do_gradient:

x – the arguments of the function to be tested
do_gradient – When False, only the function value is returned. When True, a 2-tuple with the function value and the gradient is returned [default=False]
get_final()

Return the final solution in the original coordinates

initialize()

propagate()

molmod.minimizer.check_anagrad(fun, x0, epsilon, threshold)

Check the analytical gradient using finite differences

Arguments:
fun – the function to be tested, more info below
x0 – the reference point around which the function should be tested
epsilon – a small scalar used for the finite differences
threshold – the maximum acceptable difference between the analytical gradient and the gradient obtained by finite differentiation

The function fun takes a mandatory argument x and an optional argument do_gradient:

x – the arguments of the function to be tested
do_gradient – When False, only the function value is returned. When True, a 2-tuple with the function value and the gradient is returned [default=False]
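
For instance, reusing fun from the minimizer example at the top of this module (the epsilon and threshold values are illustrative):

import numpy as np
from molmod.minimizer import check_anagrad

# Complains when the analytical gradient deviates from the finite
# difference estimate by more than the threshold.
check_anagrad(fun, np.random.normal(0, 1, 2), 1e-5, 1e-4)
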
molmod.minimizer.check_delta(fun, x, dxs, period=None)

Check the difference between two function values using the analytical gradient

Arguments:
fun – The function to be tested, more info below.
x – The argument vector.
dxs – A matrix where each row is a vector of small differences to be added to the argument vector.
Optional argument:
period – If the function value is periodic, one may provide the period such that differences are computed using periodic boundary conditions.

The function fun takes a mandatory argument x and an optional argument do_gradient:

x – The arguments of the function to be tested.
do_gradient – When False, only the function value is returned. When True, a 2-tuple with the function value and the gradient is returned. [default=False]

For every row in dxs, the following computation is repeated:

  1. D1 = f(x+dx) - f(x) is computed.
  2. D2 = 0.5 (grad f(x+dx) + grad f(x)) . dx is computed.

A threshold is set to the median of the D1 set. For each case where |D1| is larger than the threshold, |D1 - D2| should be smaller than the threshold.
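
For instance, reusing fun from the minimizer example at the top of this module (the magnitudes of the displacements are illustrative):

import numpy as np
from molmod.minimizer import check_delta

# 100 rows of small displacements around a random reference point.
x = np.random.normal(0, 1, 2)
dxs = np.random.uniform(-1e-4, 1e-4, (100, 2))
check_delta(fun, x, dxs)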

molmod.minimizer.compute_fd_hessian(fun, x0, epsilon, anagrad=True)

Compute the Hessian using the finite difference method

Arguments:
fun – the function for which the Hessian should be computed, more info below
x0 – the point at which the Hessian must be computed
epsilon – a small scalar step size used to compute the finite differences
Optional argument:
anagrad – when True, analytical gradients are used [default=True]

The function fun takes a mandatory argument x and an optional argument do_gradient:

x – the arguments of the function to be tested
do_gradient – When False, only the function value is returned. When True, a 2-tuple with the function value and the gradient is returned [default=False]
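
For instance, reusing fun from the minimizer example at the top of this module:

import numpy as np
from molmod.minimizer import compute_fd_hessian

# Finite difference Hessian of fun at the origin, with an illustrative step size.
hessian = compute_fd_hessian(fun, np.zeros(2, float), 1e-5)
print(hessian)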

molmod.symmetry – Symmetry

molmod.symmetry.compute_rotsym(molecule, graph, threshold=0.0018897261339212521)

Compute the rotational symmetry number

Arguments:
molecule – The molecule
graph – The corresponding bond graph
Optional argument:
threshold – a rotation is only considered to transform the molecule onto itself when it results in an RMSD below the given threshold
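
A hedged usage sketch; the input file name is hypothetical and set_default_graph is assumed to derive the bond graph from the geometry:

from molmod import Molecule
from molmod.symmetry import compute_rotsym

mol = Molecule.from_file("benzene.xyz")  # hypothetical input file
mol.set_default_graph()
print(compute_rotsym(mol, mol.graph))    # benzene has rotational symmetry number 12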

molmod.toyff – ToyFF

The ToyFF is not meant for accurate geometries, but rather to generate molecular geometries from scratch. The routines below start with just a random set of coordinates and turn that into a rough molecular geometry. Post-processing with a more reliable force field is mandatory.

molmod.toyff.guess_geometry(graph, unit_cell=None, verbose=False)

Construct a molecular geometry based on a molecular graph.

This routine does not require initial coordinates and will give a very rough picture of the initial geometry. Do not expect all details to be in perfect condition. A subsequent optimization with a more accurate level of theory is at least advisable.

Argument:
graph – The molecular graph of the system, see :class:molmod.molecular_graphs.MolecularGraph
Optional arguments:
unit_cell – periodic boundary conditions, see molmod.unit_cells.UnitCell
verbose – Show optimizer progress when True
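
A hedged usage sketch, assuming that guess_geometry returns a Molecule object; the file names are hypothetical:

from molmod import Molecule
from molmod.toyff import guess_geometry

mol = Molecule.from_file("ethanol.xyz")   # hypothetical input, used only for its graph
mol.set_default_graph()
guess = guess_geometry(mol.graph)
guess.write_to_file("ethanol_guess.xyz")  # assumes the result is a Molecule
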
molmod.toyff.tune_geometry(graph, mol, unit_cell=None, verbose=False)

Fine tune a molecular geometry, starting from a (very) poor guess of the initial geometry.

Do not expect all details to be in perfect condition. A subsequent optimization with a more accurate level of theory is at least advisable.

Arguments:
graph – The molecular graph of the system, see :class:molmod.molecular_graphs.MolecularGraph
mol – A :class:molmod.molecules.Molecule class with the initial guess of the coordinates
Optional arguments:
unit_cell – periodic boundary conditions, see molmod.unit_cells.UnitCell
verbose – Show optimizer progress when True

class molmod.toyff.ToyFF(graph, unit_cell=None)

A force field implementation for generating geometries.

See :func:guess_geometry and :func:tune_geometry for two practical use cases.

Argument:
graph – the molecular graph from which the force field terms are extracted. See :class:molmod.molecular_graphs.MolecularGraph
Optional argument:
unit_cell – periodic boundary conditions, see molmod.unit_cells.UnitCell
__call__(x, do_gradient=False)

Compute the energy (and gradient) for a set of Cartesian coordinates

Arguments:
x – the Cartesian coordinates
do_gradient – When set to True, the gradient is also computed and returned. [default=False]

class molmod.toyff.SpecialAngles

A database with precomputed valence angles from small molecules

get_angle(triplet)

Get a rest angle for a given triplet

A triplet consists of a tuple with six elements: (n0, v0, n1, v1, n2, v2) The indexes refer to consecutive atoms forming a valence angle. n0, n1 and n2 are the atom numbers of the angle and v0, v1 and v2 are the valences of the corresponding atoms. n1 and v1 are the values for the central atom in the angle.