Macromolecular docking


Macromolecular docking is the computational modelling of the quaternary structure of complexes formed by two or more interacting biological macromolecules. Protein–protein complexes are the most commonly attempted targets of such modelling, followed by protein–nucleic acid complexes.
The ultimate goal of docking is the prediction of the three-dimensional structure of the macromolecular complex of interest as it would occur in a living organism. Docking itself only produces plausible candidate structures. These candidates must be ranked using methods such as scoring functions to identify structures that are most likely to occur in nature.
The term "docking" originated in the late 1970s, with a more restricted meaning; then, "docking" meant refining a model of a complex structure by optimizing the separation between the interactors but keeping their relative orientations fixed. Later, the relative orientations of the interacting partners in the modelling was allowed to vary, but the internal geometry of each of the partners was held fixed. This type of modelling is sometimes referred to as "rigid docking". With further increases in computational power, it became possible to model changes in internal geometry of the interacting partners that may occur when a complex is formed. This type of modelling is referred to as "flexible docking".

Background

The biological roles of most proteins, as characterized by which other macromolecules they interact with, are known at best incompletely. Even those proteins that participate in a well-studied biological process may have unexpected interaction partners or functions which are unrelated to that process.
In cases of known protein–protein interactions, other questions arise. Genetic diseases are known to be caused by misfolded or mutated proteins, and there is a desire to understand what, if any, anomalous protein–protein interactions a given mutation can cause. In the distant future, proteins may be designed to perform biological functions, and a determination of the potential interactions of such proteins will be essential.
For any given set of proteins, the following questions may be of interest, from the point of view of technology or natural history:
Do these proteins bind in vivo?
If they do bind, what spatial configuration do they adopt in their bound state, and how strong is their interaction?
If they do not bind, can they be made to bind by inducing a mutation?
Protein–protein docking is ultimately envisaged to address all these issues. Furthermore, since docking methods can be based on purely physical principles, even proteins of unknown function may be docked. The only prerequisite is that their molecular structure has been either determined experimentally, or can be estimated by a protein structure prediction technique.
Protein–nucleic acid interactions feature prominently in the living cell. Transcription factors, which regulate gene expression, and polymerases, which catalyse replication, are composed of proteins, and the genetic material they interact with is composed of nucleic acids. Modeling protein–nucleic acid complexes presents some unique challenges, as described below.

History

In the 1970s, complex modelling revolved around manually identifying features on the surfaces of the interactors and interpreting the consequences for binding, function and activity; any computer programs were typically used at the end of the modelling process, to discriminate between the relatively few configurations which remained after all the heuristic constraints had been imposed. The first use of computers was in a study on hemoglobin interaction in sickle-cell fibres. This was followed in 1978 by work on the trypsin–BPTI complex. Computers discriminated between good and bad models using a scoring function which rewarded a large interface area and pairs of molecules in contact but not occupying the same space. The computer used a simplified representation of the interacting proteins, with one interaction centre for each residue. Favourable electrostatic interactions, including hydrogen bonds, were identified by hand.
In the early 1990s, more structures of complexes were determined, and available computational power had increased substantially. With the emergence of bioinformatics, the focus moved towards developing generalized techniques which could be applied to an arbitrary set of complexes at acceptable computational cost. The new methods were envisaged to apply even in the absence of phylogenetic or experimental clues; any specific prior knowledge could still be introduced at the stage of choosing between the highest ranking output models, or be framed as input if the algorithm catered for it.
1992 saw the publication of the correlation method, an algorithm which used the fast Fourier transform to give a vastly improved scalability for evaluating coarse shape complementarity on rigid-body models. This was extended in 1997 to cover coarse electrostatics.
In 1996 the results of the first blind trial were published, in which six research groups attempted to predict the complexed structure of TEM-1 beta-lactamase with the beta-lactamase inhibitor protein (BLIP). The exercise brought into focus the necessity of accommodating conformational change and the difficulty of discriminating between conformers. It also served as the prototype for the CAPRI assessment series, which debuted in 2001.

Rigid-body docking ''vs.'' flexible docking

If the bond angles, bond lengths and torsion angles of the components are not modified at any stage of complex generation, the procedure is known as rigid-body docking. Whether rigid-body docking is good enough for most purposes remains a subject of debate. When substantial conformational change occurs within the components at the time of complex formation, rigid-body docking is inadequate. However, scoring all possible conformational changes is prohibitively expensive in computer time. Docking procedures which permit conformational change, or flexible docking procedures, must intelligently select a small subset of possible conformational changes for consideration.

Methods

Successful docking requires two criteria:
Generating a set of configurations that reliably includes at least one nearly correct one.
Reliably distinguishing nearly correct configurations from the others.
For many interactions, the binding site is known on one or more of the proteins to be docked. This is the case for antibodies and for competitive inhibitors. In other cases, a binding site may be strongly suggested by mutagenic or phylogenetic evidence. Configurations where the proteins interpenetrate severely may also be ruled out a priori.
After making exclusions based on prior knowledge or stereochemical clash, the remaining space of possible complexed structures must be sampled exhaustively, evenly and with a sufficient coverage to guarantee a near hit. Each configuration must be scored with a measure that is capable of ranking a nearly correct structure above at least 100,000 alternatives. This is a computationally intensive task, and a variety of strategies have been developed.
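Even coverage of the search space is itself nontrivial; for the rotational part, one standard technique is Shoemake's method for drawing uniformly distributed random rotations as unit quaternions. A minimal sketch in Python (deterministic orientation grids are also used in practice; the function name is illustrative):

```python
import numpy as np

def uniform_random_rotation(rng):
    # Shoemake's method: a uniformly distributed random 3-D rotation,
    # returned as a unit quaternion (x, y, z, w), so that orientation
    # space is covered evenly rather than clustering in some regions.
    u1, u2, u3 = rng.random(3)
    return np.array([
        np.sqrt(1.0 - u1) * np.sin(2.0 * np.pi * u2),
        np.sqrt(1.0 - u1) * np.cos(2.0 * np.pi * u2),
        np.sqrt(u1) * np.sin(2.0 * np.pi * u3),
        np.sqrt(u1) * np.cos(2.0 * np.pi * u3),
    ])

rng = np.random.default_rng(0)
orientations = [uniform_random_rotation(rng) for _ in range(10_000)]
```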

Reciprocal space methods

Each of the proteins may be represented as a simple cubic lattice. Then, for the class of scores which are discrete convolutions, configurations related to each other by translation of one protein by an exact lattice vector can all be scored almost simultaneously by applying the convolution theorem. It is possible to construct reasonable, if approximate, convolution-like scoring functions representing both stereochemical and electrostatic fitness.
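A minimal sketch of the idea in Python, assuming both proteins have already been discretized onto matching cubic grids; the grid weighting described in the comments is illustrative, in the spirit of the Katchalski-Katzir correlation method, and each rotation of the second protein requires a separate pass:

```python
import numpy as np

def translation_scores(receptor_grid, ligand_grid):
    # Correlation theorem: one pair of forward FFTs and a single inverse
    # FFT score *every* lattice translation of the ligand at once.
    R = np.fft.fftn(receptor_grid)
    L = np.fft.fftn(ligand_grid)
    return np.fft.ifftn(R * np.conj(L)).real

# Illustrative weighting: receptor surface cells = 1, receptor core
# cells = -15 (penalizing interpenetration), ligand cells = 1, so a
# high correlation rewards surface contact without overlap.
N = 64
receptor = np.zeros((N, N, N))  # ... filled from atomic coordinates ...
ligand = np.zeros((N, N, N))    # ... filled from atomic coordinates ...
scores = translation_scores(receptor, ligand)
best_shift = np.unravel_index(np.argmax(scores), scores.shape)
```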
Reciprocal space methods have been used extensively for their ability to evaluate enormous numbers of configurations. They lose their speed advantage if torsional changes are introduced. Another drawback is that it is impossible to make efficient use of prior knowledge. The question also remains whether convolutions are too limited a class of scoring function to identify the best complex reliably.

Monte Carlo methods

In Monte Carlo methods, an initial configuration is refined by taking random steps which are accepted or rejected based on the change in score they induce, until a certain number of steps have been tried. The assumption is that convergence to the best structure should occur from a large class of initial configurations, only one of which needs to be considered. Initial configurations may be sampled coarsely, and much computation time can be saved. Because of the difficulty of finding a scoring function which is both highly discriminating for the correct configuration and also converges to the correct configuration from a distance, the use of two levels of refinement, with different scoring functions, has been proposed. Torsion can be introduced naturally to Monte Carlo as an additional property of each random move.
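A minimal Metropolis-style sketch in Python; the pose representation, the scoring function and the move generator are deliberately left abstract, and lower scores are assumed to be better:

```python
import numpy as np

def metropolis_dock(pose, score, propose, n_steps=10_000, temperature=1.0, seed=0):
    # Refine a docking pose by Metropolis Monte Carlo (lower score = better).
    rng = np.random.default_rng(seed)
    current, current_s = pose, score(pose)
    best, best_s = current, current_s
    for _ in range(n_steps):
        candidate = propose(current, rng)   # random rigid-body move;
        candidate_s = score(candidate)      # torsional moves could be added here
        # Metropolis criterion: accept improvements outright, and accept
        # worsening moves with Boltzmann probability exp(-dS / T) so the
        # search can escape local minima.
        if candidate_s <= current_s or rng.random() < np.exp((current_s - candidate_s) / temperature):
            current, current_s = candidate, candidate_s
            if current_s < best_s:
                best, best_s = current, current_s
    return best, best_s
```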
Monte Carlo methods are not guaranteed to search exhaustively, so that the best configuration may be missed even using a scoring function which would in theory identify it. How severe a problem this is for docking has not been firmly established.

Evaluation

Scoring functions

To find a score which forms a consistent basis for selecting the best configuration, studies are carried out on a standard benchmark of protein–protein interaction cases. Scoring functions are assessed on the rank they assign to the best structure, and on their coverage.
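These two measures are straightforward to compute once every candidate structure carries a score and a near-native label; an illustrative sketch, assuming higher scores are better:

```python
def near_native_rank(scores, is_near_native):
    # Rank (1 = best) of the highest-scoring near-native candidate
    # for a single benchmark case.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return next(rank + 1 for rank, i in enumerate(order) if is_near_native[i])

def coverage(ranks, top_n=10):
    # Fraction of benchmark cases with a near-native hit ranked in the top N.
    return sum(rank <= top_n for rank in ranks) / len(ranks)
```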
Types of scores studied include:
heuristic scores based on residue contacts,
shape complementarity of molecular surfaces,
free energies, estimated using parameters from molecular mechanics force fields such as CHARMM or AMBER,
phylogenetic desirability of the interface regions,
clustering coefficients.
It is usual to create hybrid scores by combining one or more categories above in a weighted sum whose weights are optimized on cases from the benchmark. To avoid bias, the benchmark cases used to optimize the weights must not overlap with the cases used to make the final test of the score.
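As a sketch of how such a weighted sum might be fitted (illustrative only: the matrix of score terms and the near-native labels are hypothetical, and real studies may optimize ranking performance directly rather than by least squares):

```python
import numpy as np

def fit_hybrid_weights(terms, labels):
    # terms: (n_candidates, n_score_types) matrix of individual score
    # components; labels: 1.0 for nearly correct structures, else 0.0.
    # Least-squares fit of the weights of the hybrid (weighted-sum) score.
    weights, *_ = np.linalg.lstsq(terms, labels, rcond=None)
    return weights

def hybrid_score(terms, weights):
    return terms @ weights

# To avoid bias, split the benchmark by *case*, never by individual
# structure, so the complexes used to fit the weights do not overlap
# with those used for the final test of the score.
```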
The ultimate goal in protein–protein docking is to select the ideal ranking solution according to a scoring scheme that would also give an insight into the affinity of the complex. Such a development would drive in silico protein engineering, computer-aided drug design and high-throughput annotation of which proteins bind or not. Several scoring functions have been proposed for binding affinity / free energy prediction. However, the predictions of nine commonly used scoring functions have been found to be nearly orthogonal to (i.e., almost uncorrelated with) experimentally determined binding affinities. It was also observed that some components of the scoring algorithms may display better correlation to the experimental binding energies than the full score, suggesting that significantly better performance might be obtained by combining the appropriate contributions from different scoring algorithms. Experimental methods for the determination of binding affinities include surface plasmon resonance, Förster resonance energy transfer, radioligand-based techniques, isothermal titration calorimetry, microscale thermophoresis, and spectroscopic and other fluorescence techniques. Textual information from scientific articles can provide useful cues for scoring.

Benchmarks

A benchmark of 84 protein–protein interactions with known complexed structures has been developed for testing docking methods. The set is chosen to cover a wide range of interaction types, and to avoid repeated features, such as the profile of interactors' structural families according to the SCOP database. Benchmark elements are classified into three levels of difficulty. The protein–protein docking benchmark contains examples of enzyme-inhibitor, antigen-antibody and homomultimeric complexes.
The latest version of the protein–protein docking benchmark consists of 230 complexes. A protein–DNA docking benchmark consists of 47 test cases. A protein–RNA docking benchmark was curated as a dataset of 45 non-redundant test cases containing only complexes solved by X-ray crystallography, along with an extended dataset of 71 test cases that also includes structures derived from homology modelling. The protein–RNA benchmark has been updated to include more structures solved by X-ray crystallography, and it now consists of 126 test cases. The benchmarks have a combined dataset of 209 complexes.
A binding affinity benchmark has been based on the protein–protein docking benchmark. 81 protein–protein complexes with known experimental affinities are included; these complexes span over 11 orders of magnitude in terms of affinity. Each entry of the benchmark includes several biochemical parameters associated with the experimental data, along with the method used to determine the affinity. This benchmark was used to assess the extent to which scoring functions could also predict affinities of macromolecular complexes.
This benchmark was subsequently peer reviewed and significantly expanded. The new set is diverse in terms of the biological functions it represents, with complexes that involve G-proteins and receptor extracellular domains, as well as antigen/antibody, enzyme/inhibitor and enzyme/substrate complexes. It is also diverse in terms of the partners' affinity for each other, with Kd ranging between 10⁻⁵ and 10⁻¹⁴ M. Nine pairs of entries represent closely related complexes that have a similar structure but a very different affinity, each pair comprising a cognate and a noncognate assembly. Because the unbound structures of the component proteins are available, conformational changes can be assessed. They are significant in most of the complexes, and large movements or disorder-to-order transitions are frequently observed. The set may be used to benchmark biophysical models aiming to relate affinity to structure in protein–protein interactions, taking into account the reactants and the conformational changes that accompany the association reaction, instead of just the final product.

The CAPRI assessment

The Critical Assessment of PRediction of Interactions (CAPRI) is an ongoing series of events in which researchers throughout the community try to dock the same proteins, as provided by the assessors. Rounds take place approximately every six months. Each round contains between one and six target protein–protein complexes whose structures have been recently determined experimentally. The coordinates are held privately by the assessors, with the cooperation of the structural biologists who determined them. The assessment of submissions is double blind.
CAPRI attracts a high level of participation and a high level of interest from the biological community in general. Although CAPRI results are of little statistical significance owing to the small number of targets in each round, the role of CAPRI in stimulating discourse is significant.