BIOINFORMATICS






Bioinformatics is the application of information technology to the field of molecular biology. The term bioinformatics was coined by Paulien Hogeweg in 1979 for the study of informatic processes in biotic systems. Its primary use since at least the late 1980s has been in genomics and genetics, particularly in those areas of genomics involving large-scale DNA sequencing. Bioinformatics now entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data. Over the past few decades, rapid developments in genomic and other molecular research technologies, together with advances in information technology, have combined to produce a tremendous amount of information related to molecular biology; bioinformatics is the name given to the mathematical and computing approaches used to glean understanding of biological processes from this information. Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning different DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.
The primary goal of bioinformatics is to increase our understanding of biological processes. What sets it apart from other approaches, however, is its focus on developing and applying computationally intensive techniques (e.g., pattern recognition, data mining, machine learning algorithms, and visualization) to achieve this goal. Major research efforts in the field include sequence alignment, gene finding, genome assembly, protein structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, genome-wide association studies and the modeling of evolution.
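As a concrete illustration of sequence alignment, here is a minimal sketch of the classic Needleman-Wunsch dynamic-programming algorithm in Python; it returns only the optimal alignment score, not the alignment itself, and the match, mismatch and gap scores as well as the two example sequences are arbitrary choices for illustration rather than values taken from this text.

```python
# Minimal Needleman-Wunsch global alignment score (illustrative scoring values).
def global_align(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # score[i][j] = best score for aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(global_align("GATTACA", "GCATGCU"))  # prints the optimal alignment score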
INTRODUCTION
At the beginning of the "genomic revolution", bioinformatics was applied to the creation and maintenance of databases for storing biological information, such as nucleotide and amino acid sequences. Development of this type of database involved not only design issues but also the development of complex interfaces whereby researchers could both access existing data and submit new or revised data.
In order to study how normal cellular activities are altered in different disease states, the biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data, including nucleotide and amino acid sequences, protein domains, and protein structures. The actual process of analyzing and interpreting data is referred to as computational biology. Important sub-disciplines within bioinformatics and computational biology include:
a) the development and implementation of tools that enable efficient access to, and use and management of, various types of information;
b) the development of new algorithms and statistical methods with which to assess relationships among members of large data sets, such as methods to locate a gene within a sequence, predict protein structure and/or function, and cluster protein sequences into families of related sequences.
Major research areas

Sequence analysis

Since the Phage Φ-X174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode polypeptides (proteins), RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Today, computer programs such as BLAST are used daily to search the genomes of thousands of organisms, containing billions of nucleotides. These programs can compensate for mutations (exchanged, deleted or inserted bases) in the DNA sequence, in order to identify sequences that are related, but not identical. A variant of this sequence alignment is used in the sequencing process itself. The so-called shotgun sequencing technique (which was used, for example, by The Institute for Genomic Research to sequence the first bacterial genome, Haemophilus influenzae) does not produce entire chromosomes, but instead generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly will usually contain numerous gaps that have to be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced today, and genome assembly algorithms are a critical area of bioinformatics research.
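To illustrate the assembly idea described above, the following toy sketch greedily merges error-free overlapping reads into a single sequence; the example reads are made up, and real assemblers must additionally cope with sequencing errors, repeats and billions of reads.

```python
# Toy greedy assembly of overlapping, error-free reads (conceptual sketch only).
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:  # no remaining overlaps; stop
            break
        merged = reads[i] + reads[j][olen:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# Hypothetical reads covering the sequence "ATGGCGTGCA"
print(greedy_assemble(["ATGGCG", "GCGTGC", "TGCA"]))
```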
Another aspect of bioinformatics in sequence analysis is annotation, which involves computational gene finding to search for protein-coding genes, RNA genes, and other functional sequences within a genome. Not all of the nucleotides within a genome are part of genes. Within the genomes of higher organisms, large parts of the DNA do not serve any obvious purpose. This so-called junk DNA may, however, contain unrecognized functional elements. Bioinformatics helps to bridge the gap between genome and proteome projects, for example in the use of DNA sequences for protein identification.
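A crude first step in computational gene finding is to scan a sequence for open reading frames (ORFs). The sketch below assumes the standard ATG start codon and stop codons, ignores splicing and the reverse strand, and uses an invented example sequence and length cutoff, so it only hints at what real gene predictors do.

```python
# Naive open reading frame (ORF) scan on the forward strand only.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=30):
    """Yield (start, end) of ORFs beginning with ATG and ending at a stop codon."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOPS and start is not None:
                if (i + 3 - start) // 3 >= min_codons:
                    yield (start, i + 3)
                start = None

# Tiny made-up example (length cutoff lowered so something is reported)
example = "CCATGAAATTTGGGTAACC"
print(list(find_orfs(example, min_codons=2)))
```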
Genome annotation

In the context of genomics, annotation is the process of marking the genes and other biological features in a DNA sequence. The first genome annotation software system was designed in 1995 by Dr. Owen White, who was part of the team at The Institute for Genomic Research that sequenced and analyzed the first genome of a free-living organism to be decoded, the bacterium Haemophilus influenzae. Dr. White built a software system to find the genes (places in the DNA sequence that encode a protein), the transfer RNA, and other features, and to make initial assignments of function to those genes. Most current genome annotation systems work similarly, but the programs available for analysis of genomic DNA are constantly changing and improving.
Computational evolutionary biology

Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists in several key ways; it has enabled researchers to:
• trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone;
• more recently, compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation;
• build complex computational models of populations to predict the outcome of the system over time;
• track and share information on an increasingly large number of species and organisms.
Future work endeavours to reconstruct the increasingly complex tree of life.
The area of research within computer science that uses genetic algorithms is sometimes confused with computational evolutionary biology, but the two areas are unrelated.
Analysis of gene expression

The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
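The kind of statistical comparison described above can be sketched in a few lines: a per-gene two-sample t-test on hypothetical expression values (the gene names and numbers are invented). Real analyses add normalization, careful replicate handling and multiple-testing correction.

```python
# Per-gene two-sample t-test on made-up expression values (illustration only).
from scipy.stats import ttest_ind

# Hypothetical expression measurements: gene -> (tumour replicates, normal replicates)
expression = {
    "GENE_A": ([8.1, 8.4, 8.3], [5.0, 5.2, 4.9]),
    "GENE_B": ([6.0, 6.1, 5.9], [6.2, 5.8, 6.0]),
}

for gene, (tumour, normal) in expression.items():
    stat, p = ttest_ind(tumour, normal)
    direction = "up" if sum(tumour) / len(tumour) > sum(normal) / len(normal) else "down"
    print(f"{gene}: {direction}-regulated in tumour, p = {p:.4f}")
```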
Analysis of regulation
Regulation is the complex orchestration of events starting with an extracellular signal such as a hormone and leading to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process. For example, promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). One can then apply clustering algorithms to that expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements.
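One simple way to find co-expressed genes, as described above, is to correlate their expression profiles across conditions. The sketch below uses invented profiles and an arbitrary correlation threshold; production pipelines use proper clustering algorithms on much larger matrices.

```python
# Group genes whose expression profiles are highly correlated (toy example).
import numpy as np

# Hypothetical expression profiles across five conditions
profiles = {
    "geneA": [1.0, 2.0, 3.0, 4.0, 5.0],
    "geneB": [1.1, 2.1, 2.9, 4.2, 5.1],   # tracks geneA
    "geneC": [5.0, 4.0, 3.0, 2.0, 1.0],   # anti-correlated with geneA
}

names = list(profiles)
corr = np.corrcoef([profiles[n] for n in names])

# Report pairs above an arbitrary co-expression threshold
threshold = 0.95
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if corr[i, j] > threshold:
            print(f"{names[i]} and {names[j]} appear co-expressed (r = {corr[i, j]:.2f})")
```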
Analysis of protein expression
Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. Bioinformatics is very much involved in making sense of protein microarray and HT MS data; the former approach faces similar problems as with microarrays targeted at mRNA, the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples where multiple, but incomplete peptides from each protein are detected.
Analysis of mutations in cancer
In cancer, the genomes of affected cells are rearranged in complex or even unpredictable ways. Massive sequencing efforts are used to identify previously unknown point mutations in a variety of genes in cancer. Bioinformaticians continue to produce specialized automated systems to manage the sheer volume of sequence data produced, and they create new algorithms and software to compare the sequencing results to the growing collection of human genome sequences and germline polymorphisms. New physical detection technologies are employed, such as oligonucleotide microarrays to identify chromosomal gains and losses (called comparative genomic hybridization), and single nucleotide polymorphism arrays to detect known point mutations. These detection methods simultaneously measure several hundred thousand sites throughout the genome, and when used in high throughput to measure thousands of samples they generate terabytes of data per experiment. Again, the massive amounts and new types of data generate new opportunities for bioinformaticians. The data are often found to contain considerable variability, or noise, and thus hidden Markov model and change-point analysis methods are being developed to infer real copy number changes.
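As a toy illustration of change-point analysis on copy-number data, the sketch below finds the single split point that minimises the within-segment sum of squared deviations in a simulated noisy signal; real methods use hidden Markov models or multi-change-point statistics, and the simulated data here are purely illustrative.

```python
# Toy single change-point detection on a simulated copy-number signal.
import numpy as np

def best_changepoint(signal):
    """Return the split index minimising the within-segment sum of squared deviations."""
    signal = np.asarray(signal, dtype=float)
    best_k, best_cost = None, np.inf
    for k in range(1, len(signal)):
        left, right = signal[:k], signal[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Simulated probe intensities: the copy number jumps halfway through
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(2.0, 0.2, 50), rng.normal(3.0, 0.2, 50)])
print("estimated change-point index:", best_changepoint(signal))
```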
Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.
Prediction of protein structure

Protein structure prediction is another important application of bioinformatics. The amino acid sequence of a protein, the so-called primary structure, can be easily determined from the sequence on the gene that codes for it. In the vast majority of cases, this primary structure uniquely determines a structure in its native environment. (Of course, there are exceptions, such as the bovine spongiform encephalopathy - aka Mad Cow Disease - prion.) Knowledge of this structure is vital in understanding the function of the protein. For lack of better terms, structural information is usually classified as one of secondary, tertiary and quaternary structure. A viable general solution to such predictions remains an open problem. As of now, most efforts have been directed towards heuristics that work most of the time.
One of the key ideas in bioinformatics is the notion of homology. In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In the structural branch of bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. In a technique called homology modeling, this information is used to predict the structure of a protein once the structure of a homologous protein is known. This currently remains the only way to predict protein structures reliably.
One example of this is the homology between human hemoglobin and the hemoglobin of legumes (leghemoglobin). Both serve the same purpose of transporting oxygen in the organism. Though these two proteins have quite different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes.
Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling.
Comparative genomics

The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. It is these intergenomic maps that make it possible to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Ultimately, whole genomes are involved in processes of hybridization, polyploidization and endosymbiosis, often leading to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models.
Many of these studies are based on homology detection and the computation of protein families.
Modeling biological systems

Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes which comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
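As a minimal sketch of this kind of simulation, the code below integrates a toy two-gene circuit (one gene constitutively expressed, the other repressed by it) with the Euler method; all rate constants and the Hill-type repression term are invented for illustration only.

```python
# Euler-method simulation of a toy two-gene circuit: gene X represses gene Y.
def simulate(steps=1000, dt=0.01, k_prod=1.0, k_deg=0.5, K=0.5, n=2):
    x, y = 0.0, 0.0
    for _ in range(steps):
        dx = k_prod - k_deg * x                          # X is produced constitutively
        dy = k_prod / (1 + (x / K) ** n) - k_deg * y     # Y is repressed by X (Hill term)
        x += dx * dt
        y += dy * dt
    return x, y

x, y = simulate()
print(f"approximate steady-state levels: X = {x:.2f}, Y = {y:.2f}")
```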
High-throughput image analysis

Computational technologies are used to accelerate or fully automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems augment an observer's ability to make measurements from a large or complex set of images, by improving accuracy, objectivity, or speed. A fully developed analysis system may completely replace the observer. Although these systems are not unique to biomedical imagery, biomedical imaging is becoming more important for both diagnostics and research. Some examples are:
• high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology)
• morphometrics
• clinical image analysis and visualization
• determining the real-time air-flow patterns in breathing lungs of living animals
• quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury
• making behavioral observations from extended video recordings of laboratory animals
• infrared measurements for metabolic activity determination
• inferring clone overlaps in DNA mapping, e.g. the Sulston score
Protein-protein docking

In the last two decades, tens of thousands of protein three-dimensional structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR). One central question for the biological scientist is whether it is practical to predict possible protein-protein interactions based only on these 3D shapes, without doing protein-protein interaction experiments. A variety of methods have been developed to tackle the protein-protein docking problem, though it seems that there is still much work to be done in this field.
Software and tools

Software tools for bioinformatics range from simple command-line tools, to more complex graphical programs and standalone web-services available from various bioinformatics companies or public institutions. The computational biology tool best-known among biologists is probably BLAST, an algorithm for determining the similarity of arbitrary sequences against other sequences, possibly from curated databases of protein or DNA sequences. BLAST is one of a number of generally available programs for doing sequence alignment. The NCBI provides a popular web-based implementation that searches their databases.
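For readers who want to try a BLAST search programmatically, the sketch below submits a query to NCBI's servers using the Biopython package (assumed to be installed); the query sequence is a made-up fragment and the search requires network access.

```python
# Submit a nucleotide query to NCBI BLAST via Biopython (requires network access).
from Bio.Blast import NCBIWWW

# A short, made-up DNA fragment used purely as an illustration
query = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"

# qblast(program, database, sequence) sends the search to NCBI's servers
result_handle = NCBIWWW.qblast("blastn", "nt", query)

# The raw result is XML; Bio.Blast.NCBIXML can parse it into records
with open("blast_result.xml", "w") as out:
    out.write(result_handle.read())
```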
Web services in bioinformatics

SOAP and REST-based interfaces have been developed for a wide variety of bioinformatics applications allowing an application running on one computer in one part of the world to use algorithms, data and computing resources on servers in other parts of the world. The main advantages derive from the fact that end users do not have to deal with software and database maintenance overheads.
Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment) and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single, standalone or web-based interface, to integrative, distributed and extensible bioinformatics workflow management systems.
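As one concrete example of a REST-style bioinformatics service, the sketch below fetches a FASTA record from NCBI's E-utilities using only the Python standard library; the accession number is just an illustration, and the EBI services mentioned above expose analogous interfaces.

```python
# Fetch a sequence record over REST using NCBI's E-utilities (illustrative accession).
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "db": "nucleotide",
    "id": "NM_000518",   # example accession: human beta-globin mRNA
    "rettype": "fasta",
    "retmode": "text",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?" + params

with urlopen(url) as response:
    print(response.read().decode()[:200])  # print the start of the FASTA record
```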








PLANT TISSUE CULTURE









Plant tissue culture is a practice used to propagate plants under sterile conditions, often to produce clones of a plant. Different techniques in plant tissue culture may offer certain advantages over traditional methods of propagation, including:
• The production of exact copies of plants that produce particularly good flowers, fruits, or have other desirable traits.
• The rapid production of mature plants.
• The production of multiples of plants in the absence of seeds or of the pollinators necessary to produce seeds.
• The regeneration of whole plants from plant cells that have been genetically modified.
• The production of plants in sterile containers that allows them to be moved with greatly reduced chances of transmitting diseases, pests, and pathogens.
• The production of plants from seeds that otherwise have very low chances of germinating and growing, e.g. orchids and Nepenthes.
• The clearing of particular plants of viral and other infections, and their rapid multiplication as 'cleaned stock' for horticulture and agriculture.
Plant tissue culture relies on the fact that many plant cells have the ability to regenerate a whole plant (totipotency). Single cells, plant cells without cell walls (protoplasts), pieces of leaves, or (less commonly) roots can often be used to generate a new plant on culture media given the required nutrients and plant hormones.
Techniques

Modern plant tissue culture is performed under aseptic conditions under filtered air. Living plant materials from the environment are naturally contaminated on their surfaces (and sometimes interiors) with microorganisms, so surface sterilization of starting materials (explants) in chemical solutions (usually alcohol or bleach) is required. Mercuric chloride is seldom used as a plant sterilant today, as it is dangerous to use and is difficult to dispose of. Explants are then usually placed on the surface of a solid culture medium, but are sometimes placed directly into a liquid medium, particularly when cell suspension cultures are desired. Solid and liquid media are generally composed of inorganic salts plus a few organic nutrients, vitamins and plant hormones. Solid media are prepared from liquid media with the addition of a gelling agent, usually purified agar. The composition of the medium, particularly the plant hormones and the nitrogen source (nitrate versus ammonium salts or amino acids), has profound effects on the morphology of the tissues that grow from the initial explant. For example, an excess of auxin will often result in a proliferation of roots, while an excess of cytokinin may yield shoots. A balance of both auxin and cytokinin will often produce an unorganised growth of cells, or callus, but the morphology of the outgrowth will depend on the plant species as well as the medium composition. As cultures grow, pieces are typically sliced off and transferred to new media (subcultured) to allow for growth or to alter the morphology of the culture. The skill and experience of the tissue culturist are important in judging which pieces to culture and which to discard.
As shoots emerge from a culture, they may be sliced off and rooted with auxin to produce plantlets which, when mature, can be transferred to potting soil for further growth in the greenhouse as normal plants. (Reference: Plant Tissue Culture: Theory and Practice, by Sant Saran Bhojwani and M. K. Razdan.)
Choice of explant
The tissue obtained from a plant for culturing is called an explant. Based on work with certain model systems, particularly tobacco, it has often been claimed that a totipotent explant can be grown from any part of the plant. However, this does not always hold in practice. In many species, explants of various organs vary in their rates of growth and regeneration, while some do not grow at all. The choice of explant material also determines whether the plantlets developed via tissue culture are haploid or diploid. The risk of microbial contamination is also increased with inappropriate explants. Thus it is very important that an appropriate choice of explant be made prior to tissue culture.
The specific differences in the regeneration potential of different organs and explants have various explanations. The significant factors include differences in the stage of the cells in the cell cycle, the availability of or ability to transport endogenous growth regulators, and the metabolic capabilities of the cells. The most commonly used tissue explants are the meristematic ends of the plants, such as the stem tip, axillary bud tip and root tip. These tissues have high rates of cell division and either concentrate or produce required growth-regulating substances, including auxins and cytokinins.
Some explants, like the root tip, are hard to isolate and are contaminated with soil microflora that become problematic during the tissue culture process. Certain soil microflora can form tight associations with the root systems, or even grow within the root. Soil particles bound to roots are difficult to remove without injury to the roots that then allows microbial attack. These associated microflora will generally overgrow the tissue culture medium before there is significant growth of plant tissue.
Aerial (above soil) explants are also rich in undesirable microflora. However, they are more easily removed from the explant by gentle rinsing, and the remainder usually can be killed by surface sterilization. Most of the surface microflora do not form tight associations with the plant tissue. Such associations can usually be found by visual inspection as a mosaic, de-colorization or localized necrosis on the surface of the explant.
An alternative for obtaining uncontaminated explants is to take explants from seedlings which are aseptically grown from surface-sterilized seeds. The hard surface of the seed is less permeable to penetration of harsh surface sterilizing agents, such as hypochlorite, so the acceptable conditions of sterilization used for seeds can be much more stringent than for vegetative tissues.
Applications
Plant tissue culture is used widely in plant science; it also has a number of commercial applications. Applications include:
• Micropropagation is widely used in forestry and in floriculture. Micropropagation can also be used to conserve rare or endangered plant species.
• A plant breeder may use tissue culture to screen cells rather than plants for advantageous characters, e.g. herbicide resistance/tolerance.
• Large-scale growth of plant cells in liquid culture inside bioreactors as a source of secondary products, like recombinant proteins used as biopharmaceuticals.
• To cross distantly related species by protoplast fusion and regeneration of the novel hybrid.
• To cross-pollinate distantly related species and then tissue culture the resulting embryo which would otherwise normally die (Embryo Rescue).
• For production of doubled monoploid (dihaploid) plants from haploid cultures to achieve homozygous lines more rapidly in breeding programmes, usually by treatment with colchicine which causes doubling of the chromosome number.
• As a tissue for transformation, followed by either short-term testing of genetic constructs or regeneration of transgenic plants.
• Certain techniques such as meristem tip culture can be used to produce clean plant material from virused stock, such as potatoes and many species of soft fruit.
• Micropropagation using meristem and shoot culture to produce large numbers of identical individuals.
Laboratories
Although some growers and nurseries have their own labs for propagating plants by the technique of tissue culture, a number of independent laboratories provide custom propagation services. The Plant Tissue Culture Information Exchange lists many commercial tissue culture labs. Since plant tissue culture is a very labour intensive process, this would be an important factor in determining which plants would be commercially viable to propagate in a laboratory.




DESIGNER BABY







The colloquial term "designer baby" refers to a baby whose genetic makeup has been artificially selected by genetic engineering combined with in vitro fertilisation to ensure the presence or absence of particular genes or characteristics. The term is derived by comparison with "designer clothing". It implies the ultimate commodification of children and is therefore usually used pejoratively to signal opposition to such use of reprogenetics.
ETHICS
A minority of bioethicists consider the process of designing a baby, once the reprogenetic technology is shown to be safe, to be a responsible and justifiable application of parental procreative liberty. Some believe such selection should be legally mandatory.
The usage of reprogenetics on one's offspring is said to be defensible as procreative beneficence, the moral obligation of parents to try to give their children the healthiest, happiest lives possible. Some futurists claim that it would put the human species on a path to participant evolution.
A common objection to the notion of using reprogenetic technologies to create a "designer baby" is based on the ethics of human experimentation. Modern bioethical codes such as the Declaration of Helsinki condemn experiments on humans that are unnecessary, dangerous, or without the subject's consent. A report by the American Association for the Advancement of Science (AAAS) voices these concerns in the context of inheritable genetic modification, concluding that this biotechnology "cannot presently be carried out safely and responsibly on human beings" and that "pressing moral concerns" have not yet been addressed.
Other objections to the idea of designer babies include the termination of embryos, which many disapprove of on moral and religious grounds; a pro-life group, for example, would not approve of discarding unused embryos. The social concerns go further. It is projected that genetic enhancement could produce a class of enhanced humans who look down on those without such enhancements. If genetic enhancement becomes available but remains extremely expensive, only the wealthy would be protected from inherited diseases and disabilities, and discrimination against those with disabilities could rise sharply. Lastly, humans have little experience with deliberately altering the genome; the results could have dire consequences and possibly damage the gene pool.
Genetic modification can be used to alter anything from gender to disease risk, and eventually appearance, personality, and even IQ. Another controversy facing the advancement of genetic modification technology is the price of such procedures and their potential to create a gap in society. Altering embryos is a fairly recent and very costly technology. If only the wealthy can pay for modifications that eliminate disease in their children and eventually select traits such as personality and appearance, the result could be an elite class with advantages the poor cannot afford.
Most opponents of this use of reprogenetic technology refer to its possible social implications, distinguishing between genetic modifications used to treat people with disabilities or diseases and those used to enhance healthy people. They are particularly wary of this technology's ability to lead to a new eugenics, where individuals are "bred" or designed to suit social preferences such as above-average height, certain hair color, increased intelligence, or greater memory. Not only is the prospect of future generations of "better people" a metaphysical concern, but apprehension also arises from the possibility that such groups of people might become prejudiced against one another due to a feeling of lost common humanity with non-enhanced or differently-enhanced groups. Within journalistic coverage of the issue, as well as within the analysis of bioconservative critics, the issue of safety takes a secondary role to that of humanity, because it is thought that the ethical issue of safety can eventually be resolved by innovation and so is a less fundamental concern. The so-called Frankenstein argument asserts that genetically engineering designer babies would compel us to think of each other as products or devices rather than people, and the spectre has often been raised (for instance by the Center for Genetics and Society) of young parents-to-be who might one day send away for a catalogue, compose a list of desirable features and order a custom infant produced to specification.
Genetic engineering of human beings is controversial in part because of how it is tested. The current test subjects for genetic engineering are animals such as mice and primates: scientists first perform their experiments on mice and rats and, if those succeed, move on to primates, whose DNA is the most similar to humans'. Some tests have proven successful, but many have failed, and when a test fails the subject is discarded. Genetic engineering is not tested on humans because that is against the law; whether animals should receive comparable protection is itself a much-debated topic. Someone who supports animal testing may not see this aspect of genetic engineering as an issue, while someone opposed to animal testing will probably view it differently.
The genetic modification of humans also poses an ethical debate about the rights of the baby. One side of this issue holds that the fetus should be free from genetic modification: once the modification takes place, the child is changed forever, and a modification completed prior to birth can never be reversed. The opposing view is that the parents hold the rights over their unborn child and so should have the option to alter their baby if they choose. This debate about genetic modification and the rights of a fetus is similar to the debate about abortion and whether the parents or the unborn child should decide the future of the fetus.

GENETICALLY MODIFIED FOOD






Genetically modified (GM) foods are foods derived from genetically modified organisms. Genetically modified organisms have had specific changes introduced into their DNA by genetic engineering, using a process of either Cisgenesis or Transgenesis. These techniques are much more precise than mutagenesis (mutation breeding) where an organism is exposed to radiation or chemicals to create a non-specific but stable change. Other techniques by which humans modify food organisms include selective breeding (plant breeding and animal breeding), and somaclonal variation.
GM foods were first put on the market in the early 1990s. Typically, genetically modified foods are transgenic plant products: soybean, corn, canola, and cotton seed oil. But animal products have also been developed. In 2006 a pig was controversially engineered to produce omega-3 fatty acids through the expression of a roundworm gene. Researchers have also developed a genetically modified breed of pigs that are able to absorb plant phosphorus more efficiently, and as a consequence the phosphorus content of their manure is reduced by as much as 60%.
Critics have objected to GM foods on several grounds, including perceived safety issues, ecological concerns, and economic concerns raised by the fact that these organisms are subject to intellectual property law.
METHOD
Genetic modification involves the insertion or deletion of genes. In the process of Cisgenesis genes are artificially transferred between organisms that could be conventionally bred. In the process of Transgenesis genes from a different species are inserted, which is a form of horizontal gene transfer. In nature this can occur when exogenous DNA penetrates the cell membrane for any reason. To do this artificially may require attaching the genes to a virus or just physically inserting the extra DNA into the nucleus of the intended host with a very small syringe, or with very small particles fired from a gene gun. However, other methods exploit natural forms of gene transfer, such as the ability of Agrobacterium to transfer genetic material to plants, or the ability of lentiviruses to transfer genes to animal cells.
DEVELOPMENT
The first commercially grown genetically modified whole food crop was a tomato (called FlavrSavr), which was modified to ripen without softening, by the Californian company Calgene. Calgene took the initiative to obtain FDA approval for its release in 1994 without any special labeling, although legally no such approval was required. It was welcomed by consumers who purchased the fruit at a substantial premium over the price of regular tomatoes. However, production problems and competition from a conventionally bred, longer-shelf-life variety prevented the product from becoming profitable. A variant of the Flavr Savr was used by Zeneca to produce tomato paste, which was sold in Europe during the summer of 1996. The labeling and pricing were designed as a marketing experiment, which proved, at the time, that European consumers would accept genetically engineered foods. In addition, various genetically engineered micro-organisms are routinely used as sources of enzymes for the manufacture of a wide variety of processed foods. These include alpha-amylase from bacteria, which converts starch to simple sugars, chymosin from bacteria or fungi that clots milk protein for cheese making, and pectinesterase from fungi which improves fruit juice clarity.
Growing GM Crops
Between 1997 and 2005, the total surface area of land cultivated with GMOs had increased by a factor of 50, from 17,000 km2 (4.2 million acres) to 900,000 km2 (222 million acres).
Although most GM crops are grown in North America, in recent years there has been rapid growth in the area sown in developing countries. For instance, in 2005 the largest increase in crop area planted to GM crops (soybeans) was in Brazil (94,000 km2 in 2005 versus 50,000 km2 in 2004). There has also been rapid and continuing expansion of GM cotton varieties in India since 2002. (Cotton is a major source of vegetable cooking oil and animal feed.) It is predicted that in 2008/9, 32,000 km2 of GM cotton will be harvested in India (up more than 100 percent from the previous season). Indian national average yields of GM cotton were seven times lower in 2002, because the parental cotton plant used in the genetically engineered variant was not well suited to the climate of India and failed. The publicity given to the transgenic Bt insect-resistance trait has encouraged the adoption of better-performing hybrid cotton varieties, and the Bt trait has substantially reduced losses to insect predation. Though controversial and often disputed, economic and environmental benefits of GM cotton in India to the individual farmer have been documented.
In 2003, countries that grew 99% of the global transgenic crops were the United States (63%), Argentina (21%), Canada (6%), Brazil (4%), China (4%), and South Africa (1%). The Grocery Manufacturers of America estimate that 75% of all processed foods in the U.S. contain a GM ingredient. In particular, Bt corn, which produces the pesticide within the plant itself, is widely grown, as are soybeans genetically designed to tolerate glyphosate herbicides. These constitute "input traits" that are aimed at financially benefiting the producers, and have indirect environmental benefits and marginal cost benefits to consumers.
In the US, by 2006, 89% of the planted area of soybeans, 83% of cotton, and 61% of maize were genetically modified varieties. Genetically modified soybeans carried herbicide-tolerance traits only, but maize and cotton carried both herbicide-tolerance and insect-protection traits (the latter largely the Bacillus thuringiensis Bt insecticidal protein). In the period 2002 to 2006, there were significant increases in the area planted to Bt-protected cotton and maize, and herbicide-tolerant maize also increased in sown area.
ECONOMIC AND POLITICAL EFFECTS
• Many proponents of genetically engineered crops claim they lower pesticide usage and have brought higher yields and profitability to many farmers, including those in developing nations.
• The United States has seen widespread adoption of genetically engineered corn, cotton and soybean crops over the last decade.
• In August 2003, Zambia cut off the flow of genetically modified food (mostly maize) from the UN's World Food Programme. This left a famine-stricken population without food aid.
• In December 2005 the Zambian government changed its mind in the face of further famine and allowed the importation of GM maize. However, the Zambian Minister for Agriculture Mundia Sikatana has insisted that the ban on genetically modified maize remains, saying "We do not want GM (genetically modified) foods and our hope is that all of us can continue to produce non-GM foods."
• In April 2004 Hugo Chávez announced a total ban on genetically modified seeds in Venezuela.
• In January 2005, the Hungarian government announced a ban on importing and planting of genetically modified maize seeds, which was subsequently authorized by the EU.
• On August 18, 2006, American exports of rice to Europe were interrupted when much of the U.S. crop was confirmed to be contaminated with unapproved engineered genes, possibly due to accidental cross-pollination with conventional crops.
FUTURE DEVELOPMENT
Future envisaged applications of GMOs are diverse and include drugs in food, bananas that produce human vaccines against infectious diseases such as Hepatitis B, metabolically engineered fish that mature more quickly, fruit and nut trees that yield years earlier, foods no longer containing properties associated with common intolerances, and plants that produce new plastics with unique properties. While their practicality or efficacy in commercial production has yet to be fully tested, the next decade may see exponential increases in GM product development as researchers gain increasing access to genomic resources that are applicable to organisms beyond the scope of individual projects. Safety testing of these products will also, at the same time, be necessary to ensure that the perceived benefits will indeed outweigh the perceived and hidden costs of development. Plant scientists, backed by results of modern comprehensive profiling of crop composition, point out that crops modified using GM techniques are less likely to have unintended changes than are conventionally bred crops.


HUMAN GENOME PROJECT





The Human Genome Project (HGP) was an international scientific research project with a primary goal to determine the sequence of chemical base pairs which make up DNA and to identify and map the approximately 20,000–25,000 genes of the human genome from both a physical and functional standpoint. The first available assembly of the genome was completed in 2000 by the UCSC Genome Bioinformatics Group, composed of Jim Kent (then a UCSC graduate student of molecular, cell and developmental biology), Patrick Gavin, Terrence Furey and David Kulp.
The project began in 1990, initially headed by James D. Watson at the U.S. National Institutes of Health. A working draft of the genome was released in 2000 and a complete one in 2003, with further analysis still being published. A parallel project was conducted outside of government by the Celera Corporation. Most of the government-sponsored sequencing was performed in universities and research centers from the United States, the United Kingdom, Canada, and New Zealand. The mapping of human genes is an important step in the development of medicines and other aspects of health care.
While the objective of the Human Genome Project is to understand the genetic makeup of the human species, the project also has focused on several other nonhuman organisms such as E. coli, the fruit fly, and the laboratory mouse. It remains one of the largest single investigational projects in modern science.
The HGP originally aimed to map the nucleotides contained in a haploid reference human genome (more than three billion). Several groups have announced efforts to extend this to diploid human genomes including the International HapMap Project, Applied Biosystems, Perlegen, Illumina, JCVI, Personal Genome Project, and Roche-454.
The "genome" of any given individual (except for identical twins and cloned organisms) is unique; mapping "the human genome" involves sequencing multiple variations of each gene. The project did not study the entire DNA found in human cells; some heterochromatic areas (about 8% of the total genome) remain un-sequenced.
BACKGROUND
The project began with the culmination of several years of work supported by the United States Department of Energy, in particular workshops in 1984 and 1986 and a subsequent initiative of the Department of Energy. The resulting 1987 report stated boldly, "The ultimate goal of this initiative is to understand the human genome" and that "knowledge of the human [genome] is as necessary to the continuing progress of medicine and other health sciences as knowledge of human anatomy has been for the present state of medicine." Candidate technologies were already being considered for the proposed undertaking at least as early as 1985.
James D. Watson was head of the National Center for Human Genome Research at the National Institutes of Health (NIH) in the United States starting from 1988. Largely due to his disagreement with his boss, Bernadine Healy, over the issue of patenting genes, Watson was forced to resign in 1992. He was replaced by Francis Collins in April 1993, and the name of the Center was changed to the National Human Genome Research Institute (NHGRI) in 1997.
The $3-billion project was formally founded in 1990 by the United States Department of Energy and the U.S. National Institutes of Health, and was expected to take 15 years. In addition to the United States, the international consortium comprised geneticists in the United Kingdom, France, Germany, Japan, China, and India.
Due to widespread international cooperation and advances in the field of genomics (especially in sequence analysis), as well as major advances in computing technology, a 'rough draft' of the genome was finished in 2000 (announced jointly by then US president Bill Clinton and the British Prime Minister Tony Blair on June 26, 2000). Ongoing sequencing led to the announcement of the essentially complete genome in April 2003, 2 years earlier than planned. In May 2006, another milestone was passed on the way to completion of the project, when the sequence of the last chromosome was published in the journal Nature.
STATE OF COMPLETION
There are multiple definitions of the "complete sequence of the human genome". According to some of these definitions, the genome has already been completely sequenced, and according to other definitions, the genome has yet to be completely sequenced. There have been multiple popular press articles reporting that the genome was "complete." The genome has been completely sequenced using the definition employed by the International Human Genome Project. A graphical history of the human genome project shows that most of the human genome was complete by the end of 2003. However, there are a number of regions of the human genome that can be considered unfinished:
• First, the central regions of each chromosome, known as centromeres, are highly repetitive DNA sequences that are difficult to sequence using current technology. The centromeres are millions (possibly tens of millions) of base pairs long, and for the most part these are entirely un-sequenced.
• Second, the ends of the chromosomes, called telomeres, are also highly repetitive, and for most of the 46 chromosome ends these too are incomplete. It is not known precisely how much sequence remains before the telomeres of each chromosome are reached, but as with the centromeres, current technological restraints are prohibitive.
• Third, there are several loci in each individual's genome that contain members of multigene families that are difficult to disentangle with shotgun sequencing methods - these multigene families often encode proteins important for immune functions.
• Other than these regions, there remain a few dozen gaps scattered around the genome, some of them rather large, but there is hope that all these will be closed in the next couple of years.
In summary: the best estimates of total genome size indicate that about 92.3% of the genome has been completed and it is likely that the centromeres and telomeres will remain un-sequenced until new technology is developed that facilitates their sequencing. Most of the remaining DNA is highly repetitive and unlikely to contain genes, but it cannot be truly known until it is entirely sequenced. Understanding the functions of all the genes and their regulation is far from complete. The roles of junk DNA, the evolution of the genome, the differences between individuals, and many other questions are still the subject of intense interest by laboratories all over the world.
GOALS
The sequence of the human DNA is stored in databases available to anyone on the Internet. The U.S. National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations such as the University of California, Santa Cruz, and Ensembl present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data, because the data themselves are difficult to interpret without such programs.
The process of identifying the boundaries between genes and other features in raw DNA sequence is called genome annotation and is the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. The best current technologies for annotation make use of statistical models that take advantage of parallels between DNA sequences and human language, using concepts from computer science such as formal grammars.
Another, often overlooked, goal of the HGP is the study of its ethical, legal, and social implications. It is important to research these issues and find the most appropriate solutions before they become large dilemmas whose effect will manifest in the form of major political concerns.
All humans have unique gene sequences. Therefore the data published by the HGP does not represent the exact sequence of each and every individual's genome. It is the combined genome of a small number of anonymous donors. The HGP genome is a scaffold for future work in identifying differences among individuals. Most of the current effort in identifying differences among individuals involves single nucleotide polymorphisms and the HapMap.
Key findings of Genome Project:
1. There are approx. 24,000 genes in human beings, the same range as in mice and twice that of roundworms. Understanding how these genes express themselves will provide clues to how diseases are caused.
2. All human races are 99.99 % alike, so racial differences are genetically insignificant.
3. Most genetic mutations occur in the male of the species, making males agents of change; they are also more likely to be responsible for genetic disorders.
4. Genomics has led to advances in genetic archaeology and has improved our understanding of how we evolved as humans and diverged from apes 25 million years ago. It also tells how our body works, including the mystery behind how the sense of taste works.
HOW IT WAS ACCOMPLISHED
Funding came from the US government through the National Institutes of Health in the United States, and from the UK charity the Wellcome Trust, which funded the Sanger Institute (then the Sanger Centre) in Great Britain, as well as from numerous other groups around the world. The genome was broken into smaller pieces, approximately 150,000 base pairs in length. These pieces were then spliced into a type of vector known as "bacterial artificial chromosomes", or BACs, which are derived from bacterial chromosomes that have been genetically engineered. The vectors containing the genes can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small "shotgun" project and then assembled. The larger 150,000-base-pair pieces are then put together to recreate chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are then mapped to chromosomes before being selected for sequencing.
The Human Genome Project is called a mega project because of the following facts:
1. The human genome has approx. 3.3 × 10^9 base pairs; if the cost of sequencing is US $3 per base pair, then the approximate cost will be about US $10 billion (see the short calculation after this list).
2. If the sequence obtained were to be stored in a typed form in books and if each page contains 1000 letters and each book contains 1000 pages, then 3300 such books would be needed to store the complete information.
3. The enormous quantity of data expected to be generated also necessitates the use of high speed computer hard-drives for data storage and super-computers for retrieval and analysis.
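The back-of-the-envelope figures above can be reproduced with a few lines of arithmetic (a simple check, not part of the original project documentation):

```python
# Rough arithmetic behind the "mega project" figures quoted above.
base_pairs = 3.3e9
cost_per_bp = 3                      # US dollars, as assumed in the text
print(f"estimated cost: ${base_pairs * cost_per_bp / 1e9:.1f} billion")

letters_per_book = 1000 * 1000       # 1000 letters per page x 1000 pages
print(f"books needed: {base_pairs / letters_per_book:.0f}")
```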
PUBLIC VERSUS PRIVATE APPROACHES
In 1998, a similar, privately funded quest was launched by the American researcher Craig Venter, and his firm Celera Genomics. Venter was a scientist at the NIH during the early 1990s when the project was initiated. The $300,000,000 Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project.
Celera used a riskier technique called whole genome shotgun sequencing, which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three billion base pair human genome.
Celera initially announced that it would seek patent protection on "only 200–300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100–300 targets. The firm eventually filed preliminary ("place-holder") patent applications on 6,500 whole or partial genes. Celera also promised to publish their findings in accordance with the terms of the 1996 "Bermuda Statement," by releasing new data annually (the HGP released its new data daily), although, unlike the publicly funded project, they would not permit free redistribution or scientific use of the data. The publicly funded competitor UC Santa Cruz was compelled to publish the first draft of the human genome before Celera for this reason. On July 7, 2000, the UCSC Genome Bioinformatics Group released a first working draft on the web. The scientific community downloaded one-half trillion bytes of information from the UCSC genome server in the first 24 hours of free and unrestricted access to the first ever assembled blueprint of our human species. This was a dramatic triumph for those who champion free access to information, and it occurred just days before Celera's own publication.
In March 2000, President Clinton announced that the genome sequence could not be patented, and should be made freely available to all researchers. The statement sent Celera's stock plummeting and dragged down the biotechnology-heavy Nasdaq. The biotechnology sector lost about $50 billion in market capitalization in two days.
Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper) and Science (which published Celera's paper) described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions with 150,000 gaps and the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts were announced in 2003 and 2005, filling in to ≈92% of the sequence currently.
The competition proved to be very good for the project, spurring the public groups to modify their strategy in order to accelerate progress. The rivals at UC Santa Cruz initially agreed to pool their data, but the agreement fell apart when Celera refused to deposit its data in the unrestricted public database GenBank. Celera had incorporated the public data into their genome, but forbade the public effort to use Celera data.
HGP is the most well known of many international genome projects aimed at sequencing the DNA of a specific organism. While the human DNA sequence offers the most tangible benefits, important developments in biology and medicine are predicted as a result of the sequencing of model organisms, including mice, fruit flies, zebrafish, yeast, nematodes, plants, and many microbial organisms and parasites.
In 2004, researchers from the International Human Genome Sequencing Consortium (IHGSC) of the HGP announced a new estimate of 20,000 to 25,000 genes in the human genome. Previously, 30,000 to 40,000 had been predicted, while estimates at the start of the project ranged as high as 2,000,000. The number continues to fluctuate, and it is now expected that it will take many years to agree on a precise value for the number of genes in the human genome.
HISTORY
In 1976, the genome of the RNA virus bacteriophage MS2 became the first complete genome to be determined, by Walter Fiers and his team at the University of Ghent (Ghent, Belgium). The idea for the shotgun technique came from the use of an algorithm that combined sequence information from many small fragments of DNA to reconstruct a genome. This technique was pioneered by Frederick Sanger to sequence the genome of phage Φ-X174, a bacteriophage (a virus that infects bacteria), which in 1977 became the first fully sequenced DNA genome. The technique was called shotgun sequencing because the genome was broken into millions of pieces, as if it had been blasted with a shotgun. To scale up the method, both the sequencing and the genome assembly had to be automated, as they were in the 1980s.
Those techniques were shown to be applicable to the sequencing of the first free-living bacterial genome, the 1.8 million base pair genome of Haemophilus influenzae, in 1995, and then to the first animal genome (~100 Mbp). The approach involved automated sequencers producing longer individual reads (approximately 500 base pairs at that time) and paired sequences separated by a fixed distance of around 2,000 base pairs; these were critical elements enabling the development of the first genome assembly programs for reconstructing large regions of genomes, known as 'contigs'.
Three years later, in 1998, the announcement by the newly formed Celera Genomics that it would scale up the shotgun sequencing method to the human genome was greeted with skepticism in some circles. The shotgun technique breaks the DNA into fragments of various sizes, ranging from 2,000 to 300,000 base pairs in length, forming what is called a DNA "library". Using an automated DNA sequencer, the DNA is read in 800 bp lengths from both ends of each fragment. Using a complex genome assembly algorithm and a supercomputer, the pieces are combined and the genome can be reconstructed from the millions of short, 800 base pair fragments. The success of both the public and the privately funded effort hinged upon a new, more highly automated capillary DNA sequencing machine, the Applied Biosystems 3700, which ran the DNA sequences through an extremely fine capillary tube rather than a flat gel. Even more critical was the development of a new, larger-scale genome assembly program, which could handle the 30–50 million sequences that would be required to sequence the entire human genome with this method. At the time, such a program did not exist. One of the first major projects at Celera Genomics was the development of this assembler, which was written in parallel with the construction of a large, highly automated genome sequencing factory. Development of the assembler was led by Brian Ramos. The first version of this assembler was demonstrated in 2000, when the Celera team joined forces with Professor Gerald Rubin to sequence the fruit fly Drosophila melanogaster using the whole-genome shotgun method. At 130 million base pairs, it was at least 10 times larger than any genome previously shotgun assembled. One year later, the Celera team published their assembly of the three billion base pair human genome.
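To make the assembly step concrete, here is a minimal Python sketch of greedy overlap-based assembly on toy data. It is not Celera's actual assembler: real assemblers must also cope with sequencing errors, repeats and paired-end constraints, all of which this toy version ignores. It simply merges, over and over, the two fragments with the longest suffix/prefix overlap until no significant overlap remains.

# Minimal sketch of greedy overlap-based shotgun assembly (toy example).

def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that matches a prefix of `b`."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a.endswith(b[:k]):
            best = k
    return best

def greedy_assemble(fragments, min_len=3):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, None, None)          # (overlap length, index i, index j)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    olen = overlap(a, b, min_len)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:                   # no overlaps left: return the contigs
            break
        merged = frags[i] + frags[j][olen:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags

# Toy reads "shot" from the 13-base sequence ATGCGTACGTTAG
reads = ["ATGCGTAC", "GTACGTT", "CGTTAG"]
print(greedy_assemble(reads))           # prints ['ATGCGTACGTTAG']

Run on the three toy reads above, the sketch reconstructs the original 13-base sequence; in the human genome project, the "fragments" numbered in the tens of millions, which is why a purpose-built assembler and a supercomputer were required.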
The Human Genome Project was a 13-year megaproject, launched in 1990 and completed in 2003. The project is closely associated with the branch of biology called bioinformatics. The Human Genome Project international consortium announced the publication of a draft sequence and analysis of the human genome, the genetic blueprint for the human being. An American company, Celera, led by Craig Venter, and a huge international collaboration of distinguished scientists led by Francis Collins, director of the U.S. National Human Genome Research Institute, both published their findings.
This megaproject was coordinated by the U.S. Department of Energy and the National Institutes of Health. During the early years of the project, the Wellcome Trust (U.K.) became a major partner, and other countries such as Japan, Germany, China and France contributed significantly. Already the atlas has revealed some startling facts. The two factors that made this project a success are:
1. Genetic engineering techniques, with which it is possible to isolate and clone any segment of DNA.
2. Availability of simple and fast technologies for determining DNA sequences.
As the most complex organisms, human beings were expected to have more than 100,000 genes, the combinations of DNA that provide the instructions for every characteristic of the body. Instead, the studies show that humans have only about 30,000 genes: around the same as mice, three times as many as flies, and only five times more than bacteria. Scientists report that not only are the numbers similar, the genes themselves, barring a few, are alike in mice and men. In a companion volume to the Book of Life, scientists have created a catalogue of 1.4 million single-letter differences, or single nucleotide polymorphisms (SNPs), and specified their exact locations in the human genome. This SNP map, the world's largest publicly available catalogue of SNPs, promises to revolutionize both the mapping of diseases and the tracing of human history. The sequence information from the consortium has been immediately and freely released to the world, with no restrictions on its use or redistribution. The information is scanned daily by scientists in academia and industry, as well as by commercial database companies providing key information services to biotechnologists. Already, many genes have been identified from the genome sequence, including more than 30 that play a direct role in human diseases. By dating the three million repeat elements and examining the pattern of interspersed repeats on the Y chromosome, scientists estimated the relative mutation rates in the X and Y chromosomes and in the male and female germ lines. They found that the ratio of mutations in males versus females is 2:1. Scientists point to several possible reasons for the higher mutation rate in the male germ line, including the fact that a greater number of cell divisions are involved in the formation of sperm than in the formation of eggs.
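As a simple illustration of what the SNP catalogue records, the short Python sketch below (toy sequences, not actual HGP data) scans two aligned sequences and reports every position where they differ by a single letter.

# Toy illustration of a single nucleotide polymorphism (SNP):
# a position where two aligned DNA sequences differ by one letter.

def find_snps(seq_a, seq_b):
    """Return (position, base_a, base_b) for each single-letter difference
    between two equal-length, pre-aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    return [(i, a, b) for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

person_1 = "ATGCCGTAATGC"
person_2 = "ATGCCATAATGC"   # differs at position 5 (G -> A)
print(find_snps(person_1, person_2))   # prints [(5, 'G', 'A')]

The real SNP map does the same kind of bookkeeping at genome scale, recording 1.4 million such positions together with their chromosomal coordinates.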
METHODS
The IHGSC used paired-end sequencing plus whole-genome shotgun mapping of large (≈100 kbp) plasmid clones, shotgun sequencing of smaller plasmid sub-clones, and a variety of other mapping data to orient and check the assembly of each human chromosome.
The Celera group emphasized the importance of the "whole-genome shotgun" sequencing method, relying on sequence information to orient and locate their fragments within the chromosome. However, they used the publicly available data from the HGP to assist in the assembly and orientation process, raising concerns that the Celera sequence was not independently derived.
BENEFITS
The work on interpretation of genome data is still in its initial stages. It is anticipated that detailed knowledge of the human genome will provide new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics, started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, disorders of hemostasis, cystic fibrosis, liver diseases and many others. Also, the etiologies of cancers, Alzheimer's disease and other areas of clinical interest are considered likely to benefit from genome information, which may lead in the long term to significant advances in their management.
There are also many tangible benefits for biological scientists. For example, a researcher investigating a certain form of cancer may have narrowed down his/her search to a particular gene. By visiting the human genome database on the world wide web, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its function(s), its evolutionary relationships to other human genes, or to genes in mice or yeast or fruit flies, possible detrimental mutations, interactions with other genes, body tissues in which this gene is activated, diseases associated with this gene or other datatypes.
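Much of this information can also be retrieved programmatically rather than through the web interface. The sketch below is a hedged example using Biopython's Entrez module to pull a GenBank record from NCBI; the e-mail address is a placeholder, and the accession number (the human TP53 mRNA) is chosen purely for illustration rather than taken from the text above.

# Hedged sketch: fetching a gene record from NCBI with Biopython's Entrez module.
from Bio import Entrez

# NCBI asks users of its E-utilities to identify themselves with an e-mail address.
Entrez.email = "researcher@example.org"   # placeholder address

# Fetch a GenBank-format nucleotide record. NM_000546 (human TP53 mRNA) is used
# only as an illustrative accession; any gene of interest could be substituted.
handle = Entrez.efetch(db="nucleotide", id="NM_000546", rettype="gb", retmode="text")
record_text = handle.read()
handle.close()

# Print the start of the record: definition line, organism, feature table, etc.
print(record_text[:1000])

The returned GenBank record carries much of the information described above, including the gene's annotated features, cross-references to protein products, and links to related sequences in other organisms.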
Further, a deeper understanding of disease processes at the level of molecular biology may suggest new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that might otherwise not have been possible.
The analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the similarities and differences between humans and our closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data from this project.
The Human Genome Diversity Project (HGDP), spinoff research aimed at mapping the DNA that varies between human ethnic groups, which was rumored to have been halted, actually did continue and to date has yielded new conclusions. In the future, HGDP could possibly expose new data in disease surveillance, human development and anthropology. HGDP could unlock secrets behind and create new strategies for managing the vulnerability of ethnic groups to certain diseases (see race in biomedicine). It could also show how human populations have adapted to these vulnerabilities.
Advantages of Human Genome Project:
1. Knowledge of the effects of variation of DNA among individuals can revolutionize the ways to diagnose, treat and even prevent a number of diseases that affect human beings.
2. It provides clues to the understanding of human biology.
ETHICAL, LEGAL AND SOCIAL ISSUES
The project's goals included not only identifying all of the approximately 24,000 genes in the human genome, but also addressing the ethical, legal, and social issues (ELSI) that might arise from the availability of genetic information. Five percent of the annual budget was allocated to addressing the ELSI arising from the project.
Debra Harry, Executive Director of the U.S. group Indigenous Peoples Council on Biocolonialism (IPCB), says that despite a decade of ELSI funding, the burden of genetics education has fallen on the tribes themselves to understand the motives of the Human Genome Project and its potential impacts on their lives. Meanwhile, the government has been busily funding projects studying indigenous groups without any meaningful consultation with the groups. (See Biopiracy.)
The main criticism of ELSI is its failure to address the conditions raised by population-based research, especially with regard to unique processes for group decision-making and cultural worldviews. Genetic variation research such as the HGP is group population research, but most ethical guidelines, according to Harry, focus on individual rights instead of group rights. She says the research represents a clash of cultures: indigenous peoples' lives revolve around collectivity and group decision making, whereas Western culture promotes individuality. Harry suggests that one of the challenges of ethical research is to include respect for collective review and decision making, while also upholding the Western model of individual rights.
The distribution of genes in mammalian chromosomes is striking. It turns out that human chromosomes have crowded urban centres, with many genes in close proximity to one another, and also vast expanses of unpopulated desert where only non-coding DNA can be found. This distribution of genes is in marked contrast to the genomes of many other organisms. The full set of proteins encoded by the human genome is more complex than those of the invertebrates because humans have rearranged old protein domains into a rich collection of new architectures. The sequence will serve as a foundation for a broad range of functional genomic tools to help biologists probe the function of genes in a more systematic manner. Comparative genomics will also offer scientists insights into important regions of the sequence that perform regulatory functions.

The human genome sequence is a great help in building the tools to conquer many of the illnesses that cause untold human suffering and premature death. Already the genome has helped to detect more than 30 disease genes, including some linked to common conditions such as breast cancer and colour blindness. There will now be much more emphasis on preventive medicine. The consortium's ultimate goal is to produce a completely 'finished' sequence with no gaps and 99.9% accuracy. Although the near-finished version is adequate for most biomedical research, the Human Genome Project made a commitment to filling all gaps and resolving all uncertainty in the sequence by the year 2003 C.E. The draft genome sequence has provided an initial look at the human gene content, but many uncertainties remain. One of the Human Genome Project's priorities will be to refine the data to accurately reflect every gene and every alternatively spliced form. Several steps are needed to reach this ambitious goal.


Bhuvan mapping

A review of ISRO Bhuvan features and performance

Here is a frank review of the features and performance of the ISRO Bhuvan beta release (the much anticipated satellite-based 3D mapping application from ISRO), comparing it to its supposed arch-rival Google Earth. From the beginning, Bhuvan has claimed that it is not competing with Google Earth in any way, but there was much hype and propaganda in the media saying that ISRO Bhuvan would be a Google Earth killer, at least in India. It looks like that cannot be the case anytime soon. Here is why:

  • While Google Earth works on a downloadable client, Bhuvan works within the browser (only supports Windows and IE 6 and above).
  • ISRO Bhuvan currently has serious performance issues. The site is very unstable; it gives up or hangs the browser every once in a while. When a layer (state, district, taluk, etc.) is turned on, it renders unevenly and sometimes fails to render at all. The navigation panel routinely failed to load, and it felt like a rare sighting when we could actually use the panel.
  • The promise of high resolution images has not been kept. While the service promises zoom down to 10 metres from ground level, as against 200 metres for Google Earth, we didn't encounter a single image with nearly that much detail. In fact, comparative results for a marquee location such as New Delhi's Connaught Place or Red Fort make it clear how inferior Bhuvan's performance is as of now.
  • The navigation tools are similar to Google Earth (GE).
  • The search doesn’t work if a query returns multiple results. A pop up window is supposed to give the multiple results from which the user is supposed to be able to choose. During two days of sporadic testing, we found the result only once. The rest of the time, the window would pop up, but nothing would be displayed. When the search is accurate, the software ‘flies in’ to the exact location, the same way as GE.
  • Users need to create an account and download a plug-in.
  • Bhuvan packs a lot of data on weather, waterbodies and the population details of various administrative units. We were unable to access the weather data. Clicking on the icons of administrative units shows basic information such as the population. For specialist users, Bhuvan might hold some attraction. For instance, there is a drought map which can be used to compare the drought situation across years, and there is a flood map that shows Bihar during the Kosi flood and after. With ISRO's backing, Bhuvan would be able to provide such relevant data from time to time, but the application needs major improvements in usability before it will be of interest to the ordinary user.
  • Users also cannot edit any data or tag locations.
  • We hope Bhuvan is able to fix the bugs soon. But even then, to be a credible alternative to existing mapping services, and even to get new users to try it, it must provide much higher resolution imagery. User interest will be piqued only when people can see their house or school or local street in high resolution. With ISRO's data, this should be easily doable.

Having said all this, we feel ISRO Bhuvan is still a very good step in the right direction for ISRO. We wish ISRO all the best and hope that Bhuvan matures quickly into a good service that can really compete with Google Earth.

download bhuvan mapping.............. here