Dry lab
A dry lab is a laboratory in which computational or applied mathematical analyses are done on a computer-generated model to simulate a phenomenon in the physical realm, whether it be a molecule changing quantum states, the event horizon of a black hole, or anything else that might be impossible or too dangerous to observe under normal laboratory conditions (http://medical.merriam-webster.com/medical/dry%20lab). The term may also refer to a lab that uses primarily electronic equipment, such as a robotics lab, or to a laboratory space for the storage of dry materials (http://www.wbdg.org/design/lab_dry.php). To dry-lab can also mean supplying fictional (yet plausible) results in lieu of performing an assigned experiment. In the photo printing industry, a dry lab is a photo printing system that does not employ "wet" photographic chemicals.

In silico chemistry

As computing power has grown exponentially, this approach to research, often referred to as in silico (as opposed to in vitro), has attracted increasing attention, especially in the area of bioinformatics. Within bioinformatics, proteomics, the study of proteins, seeks to elucidate their unknown structures and folding patterns. The general approach to elucidating a protein's structure has been to purify the protein, crystallize it, and then send X-rays through the purified protein crystal to observe how the X-rays diffract into a specific pattern, a process referred to as X-ray crystallography. However, many proteins, especially those embedded in cellular membranes, are nearly impossible to crystallize because of their hydrophobic nature. Although other techniques exist, such as Ramachandran plotting and mass spectrometry, these alone generally do not lead to the full elucidation of protein structure or folding mechanisms.
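For context, the diffraction pattern mentioned above is governed by Bragg's law: X-rays reflecting off parallel planes of atoms interfere constructively only at angles satisfying

```latex
n\lambda = 2d\sin\theta
```

where \lambda is the X-ray wavelength, d the spacing between crystal planes, \theta the angle of incidence, and n an integer. Measuring the angles and intensities of these reflections is what allows crystallographers to work back to the arrangement of atoms in the crystal.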

Distributed computing

As a means of surpassing the limitations of these techniques, projects such as Folding@home and Rosetta@home aim to resolve this problem through computational analysis; this means of resolving protein structure is referred to as protein structure prediction. Although each lab takes a slightly different approach, the main concept is to find, from a myriad of protein conformations, the conformation with the lowest energy or, in the case of Folding@home, to find relatively low-energy conformations that could cause the protein to misfold and aggregate other proteins to itself, as in sickle cell anemia. The general scheme in these projects is that small batches of computations are sent to a computer, generally a volunteer's home computer, which analyzes the likelihood that a specific protein will take a certain shape or conformation based on the amount of energy required for the protein to stay in that shape; this way of processing data is generally referred to as distributed computing. The analysis is performed on an extraordinarily large number of different conformations, owing to the support of hundreds of thousands of home computers, in the hope of finding the conformation, or set of conformations, with the lowest possible energy relative to any conformations that are only slightly different. Although the number of possible conformations for any given protein is astronomically large (see the Levinthal paradox), with a reasonably large number of energy samplings one can predict, using methods of statistical inference, approximately which conformation within a range of conformations has the lowest expected energy.
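The sampling idea can be illustrated with a toy sketch. The conformation encoding and the energy function below are hypothetical stand-ins (real projects use detailed molecular force fields, not this quadratic toy): sample many random conformations, score each, and keep the lowest-energy candidate.

```python
import random

# Toy "conformation": a list of backbone dihedral angles in degrees.
# Toy energy function: a hypothetical stand-in for a real force field,
# minimized when every angle sits near -60 degrees.
def energy(conformation):
    return sum((angle + 60.0) ** 2 / 1000.0 for angle in conformation)

def random_conformation(n_residues, rng):
    return [rng.uniform(-180.0, 180.0) for _ in range(n_residues)]

rng = random.Random(0)
samples = [random_conformation(10, rng) for _ in range(50_000)]
energies = sorted(energy(c) for c in samples)

best = min(samples, key=energy)
print(f"lowest sampled energy: {energy(best):.2f}")

# Levinthal's point: even with only 3 states per residue, a 100-residue
# chain has 3**100 (about 5e47) conformations, so exhaustive search is
# hopeless and sampling plus statistical inference is used instead.
print(f"3**100 = {3**100:.3e}")
```

With the sorted energy list one can also look at the low-energy tail of the distribution rather than a single minimum, which is closer in spirit to the "set of conformations of lowest energy" described above.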
There are other factors, such as salt concentration, pH, ambient temperature, and chaperonins (proteins that assist in the folding of other proteins), that can greatly affect how a protein folds. However, if a given protein is shown to fold on its own, especially in vitro, these findings can be further supported. Once we can see how a protein folds, we can see how it functions, whether as a catalyst or in intracellular communication, e.g. neuroreceptor-neurotransmitter interaction. We can also better understand how certain compounds may enhance or inhibit the function of these proteins, and what role an elucidated protein plays in diseases such as Alzheimer's disease or Huntington's disease (http://folding.stanford.edu/English/FAQ-Diseases).
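The work-unit scheme described earlier, in which a server parcels out small batches of conformations and volunteer machines score them and report back, can be sketched in a deliberately simplified form. A local process pool stands in for the volunteers' computers, and the function names and the toy energy function are illustrative, not Folding@home's actual protocol:

```python
from multiprocessing import Pool
import random

def energy(conformation):
    # Hypothetical stand-in for a molecular force-field evaluation.
    return sum((angle + 60.0) ** 2 for angle in conformation)

def score_work_unit(work_unit):
    # Each "work unit" is a batch of conformations; a volunteer machine
    # scores its batch and reports the best (lowest-energy) result.
    return min((energy(c), c) for c in work_unit)

if __name__ == "__main__":
    rng = random.Random(42)
    conformations = [[rng.uniform(-180.0, 180.0) for _ in range(10)]
                     for _ in range(4_000)]
    # The "server" splits the conformations into work units of 500.
    work_units = [conformations[i:i + 500]
                  for i in range(0, len(conformations), 500)]
    with Pool(4) as pool:
        results = pool.map(score_work_unit, work_units)  # fan out, gather
    best_energy, best_conf = min(results)
    print(f"global best energy across all work units: {best_energy:.1f}")
```

The design point is that work units are independent, so they can be scored in any order, on unreliable machines, and simply re-issued if a result never comes back, which is what makes the volunteer-computing model workable.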

There are many other avenues of research in which the dry lab approach has been implemented. Other physical phenomena, such as sound, the properties of newly discovered or hypothetical compounds, and quantum mechanical models, have recently received more attention under this approach.

See also

  • http://biox.stanford.edu
  • Computer simulation
  • Computational physics
  • Protein structure prediction
  • Wet lab
  • Computational chemistry

The source of this article is wikipedia, the free encyclopedia.  The text of this article is licensed under the GFDL.