Bioinformatics in the context of Biological



⭐ Core Definition: Bioinformatics

Bioinformatics (/ˌbaɪ.oʊˌɪnfərˈmætɪks/) is an interdisciplinary field of science that develops computational methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics draws on biology, chemistry, physics, computer science, data science, computer programming, information engineering, mathematics, and statistics to analyze and interpret biological data. This process is sometimes referred to as computational biology, although the distinction between the two terms is often disputed. To some, the term computational biology refers to building and using models of biological systems.

Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. These include repeatedly used, specific analysis "pipelines", particularly in the field of genomics, such as the identification of genes and single nucleotide polymorphisms (SNPs). Such pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which aims to understand the organizational principles within nucleic acid and protein sequences.
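As a toy illustration of the SNP-identification step such pipelines perform (a minimal sketch, not any particular tool's method; the sequences and the `find_snps` name are invented for illustration), positions where an aligned sample differs from a reference can be collected like this:

```python
def find_snps(ref, sample):
    """Return (position, reference_base, sample_base) for every site where
    two aligned, equal-length DNA sequences disagree."""
    if len(ref) != len(sample):
        raise ValueError("sequences must be aligned to the same length")
    return [(i, r, s)
            for i, (r, s) in enumerate(zip(ref, sample))
            if r != s]

# Example: the sample carries two single-nucleotide variants.
print(find_snps("ACGTACGT", "ACCTACGA"))  # [(2, 'G', 'C'), (7, 'T', 'A')]
```

Real pipelines do far more than this, of course: read alignment, base-quality filtering, and statistical variant calling all precede a step like the one sketched here.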


Bioinformatics in the context of Biomedical sciences

Biomedical sciences are a set of sciences applying portions of natural science or formal science, or both, to develop knowledge, interventions, or technology that are of use in healthcare or public health. Such disciplines as medical microbiology, clinical virology, clinical epidemiology, genetic epidemiology, and biomedical engineering are medical sciences. In explaining physiological mechanisms operating in pathological processes, however, pathophysiology can be regarded as basic science.

Biomedical Sciences, as defined by the UK Quality Assurance Agency for Higher Education Benchmark Statement in 2015, includes those science disciplines whose primary focus is the biology of human health and disease and ranges from the generic study of biomedical sciences and human biology to more specialised subject areas such as pharmacology, human physiology and human nutrition. It is underpinned by relevant basic sciences including anatomy and physiology, cell biology, biochemistry, microbiology, genetics and molecular biology, pharmacology, immunology, mathematics and statistics, and bioinformatics. As such, the biomedical sciences have a much wider range of academic and research activities and economic significance than that defined by hospital laboratory sciences. Biomedical Sciences are the major focus of bioscience research and funding in the 21st century.

View the full Wikipedia page for Biomedical sciences

Bioinformatics in the context of Molecular medicine

Molecular medicine is a broad field, where physical, chemical, biological, bioinformatics and medical techniques are used to describe molecular structures and mechanisms, identify fundamental molecular and genetic errors of disease, and to develop molecular interventions to correct them. The molecular medicine perspective emphasizes cellular and molecular phenomena and interventions rather than the previous conceptual and observational focus on patients and their organs.

View the full Wikipedia page for Molecular medicine

Bioinformatics in the context of Phylogenetic tree

A phylogenetic tree or phylogeny is a graphical representation of the evolutionary history of a set of species or taxa over a specific time. In other words, it is a branching diagram showing the evolutionary relationships among various biological species or other entities based upon similarities and differences in their physical or genetic characteristics. In evolutionary biology, all life on Earth is theoretically part of a single phylogenetic tree, indicating common ancestry. Phylogenetics is the study of phylogenetic trees. The main challenge is to find a phylogenetic tree representing optimal evolutionary ancestry between a set of species or taxa. Computational phylogenetics (also phylogeny inference) focuses on the algorithms involved in finding an optimal phylogenetic tree in the phylogenetic landscape.

Phylogenetic trees may be rooted or unrooted. In a rooted phylogenetic tree, each node with descendants represents the inferred most recent common ancestor of those descendants, and the edge lengths in some trees may be interpreted as time estimates. Each node is called a taxonomic unit. Internal nodes are generally called hypothetical taxonomic units, as they cannot be directly observed. Trees are useful in fields of biology such as bioinformatics, systematics, and phylogenetics. Unrooted trees illustrate only the relatedness of the leaf nodes and do not require the ancestral root to be known or inferred.
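The rooted-tree structure described above can be sketched as a simple parent-pointer data structure. This is a minimal illustration only: the taxa, node names, and branch lengths below are invented, and real phylogenetics software uses richer representations (e.g. the Newick format).

```python
# A rooted tree as a parent-pointer map: node -> (parent, branch_length).
# Internal node "AB" is a hypothetical taxonomic unit; the root's parent is None.
tree = {
    "root":  (None,   0.0),
    "AB":    ("root", 1.0),
    "Human": ("AB",   2.0),
    "Chimp": ("AB",   2.0),
    "Mouse": ("root", 4.0),
}

def path_to_root(tree, node):
    """List the nodes from `node` up to the root, inclusive."""
    path = [node]
    while tree[node][0] is not None:
        node = tree[node][0]
        path.append(node)
    return path

def mrca(tree, a, b):
    """Most recent common ancestor: the first ancestor of `b`
    (walking rootward) that is also an ancestor of `a`."""
    ancestors = set(path_to_root(tree, a))
    for node in path_to_root(tree, b):
        if node in ancestors:
            return node

print(mrca(tree, "Human", "Chimp"))  # AB
print(mrca(tree, "Human", "Mouse"))  # root
```

Summing the branch lengths along these paths is how "edge lengths interpreted as time estimates" becomes a concrete computation.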

View the full Wikipedia page for Phylogenetic tree

Bioinformatics in the context of Non-coding RNA

A non-coding RNA (ncRNA) is a functional RNA molecule that is not translated into a protein. The DNA sequence from which a functional non-coding RNA is transcribed is often called an RNA gene. Abundant and functionally important types of non-coding RNAs include transfer RNAs (tRNAs) and ribosomal RNAs (rRNAs), as well as small RNAs such as microRNAs, siRNAs, piRNAs, snoRNAs, snRNAs, exRNAs, scaRNAs and the long ncRNAs such as Xist and HOTAIR.

The number of non-coding RNAs within the human genome is unknown; however, recent transcriptomic and bioinformatic studies suggest that there are thousands of non-coding transcripts. Many of the newly identified ncRNAs have unknown functions, if any. There is no consensus on how much of non-coding transcription is functional: some believe most ncRNAs to be non-functional "junk RNA", spurious transcriptions, while others expect that many non-coding transcripts have functions to be discovered.

View the full Wikipedia page for Non-coding RNA

Bioinformatics in the context of Cluster analysis

Cluster analysis, or clustering, is a data analysis technique aimed at partitioning a set of objects into groups such that objects within the same group (called a cluster) exhibit greater similarity to one another (in some specific sense defined by the analyst) than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning.

Cluster analysis refers to a family of algorithms and tasks rather than one specific algorithm. It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem. The appropriate clustering algorithm and parameter settings (including parameters such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative process of knowledge discovery or interactive multi-objective optimization that involves trial and failure. It is often necessary to modify data preprocessing and model parameters until the result achieves the desired properties.
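One widely used clustering algorithm, k-means, illustrates the "small distances between cluster members" notion described above. The sketch below is a bare-bones version under simplifying assumptions (2-D points, Euclidean distance, fixed seed); the data are invented.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Naive k-means on 2-D points: assign each point to its nearest
    center, then move each center to the mean of its cluster."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        new = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
               if cl else c
               for c, cl in zip(centers, clusters)]
        if new == centers:   # converged: assignments stopped changing
            break
        centers = new
    return centers, clusters

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]: the two blobs are recovered
```

Note how the paragraph's caveats show up even here: `k`, the distance function, and the seed are all analyst choices, and a bad initialization can converge to a poor local optimum.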

View the full Wikipedia page for Cluster analysis

Bioinformatics in the context of Evolution of insects

The most recent understanding of the evolution of insects is based on studies of the following branches of science: molecular biology, insect morphology, paleontology, insect taxonomy, evolution, embryology, bioinformatics and scientific computing. The study of insect fossils is known as paleoentomology. It is estimated that the class of insects originated on Earth about 480 million years ago, in the Ordovician, at about the same time terrestrial plants appeared. Insects are thought to have evolved from a group of crustaceans. The first insects were landbound, but about 400 million years ago in the Devonian period one lineage of insects evolved flight, the first animals to do so. The oldest insect fossil has been proposed to be Rhyniognatha hirsti, estimated to be 400 million years old, but the insect identity of the fossil has been contested. Global climate conditions changed several times during the history of Earth, and along with it the diversity of insects. The Pterygotes (winged insects) underwent a major radiation in the Carboniferous (358 to 299 million years ago) while the Endopterygota (insects that go through different life stages with metamorphosis) underwent another major radiation in the Permian (299 to 252 million years ago).

Most extant orders of insects developed during the Permian period. Many of the early groups became extinct during the mass extinction at the Permo-Triassic boundary, the largest extinction event in the history of the Earth, around 252 million years ago. The survivors of this event evolved in the Triassic (252 to 201 million years ago) to what are essentially the modern insect orders that persist to this day. Most modern insect families appeared in the Jurassic (201 to 145 million years ago).

View the full Wikipedia page for Evolution of insects

Bioinformatics in the context of Deep learning

In machine learning, deep learning focuses on utilizing multilayered neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and revolves around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be supervised, semi-supervised or unsupervised.

Some common deep learning network architectures include fully connected networks, deep belief networks, recurrent neural networks, convolutional neural networks, generative adversarial networks, transformers, and neural radiance fields. These architectures have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.
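To make the "stacking artificial neurons and training them" idea concrete, here is a minimal sketch of a single artificial neuron, the unit that deep networks stack into many layers, trained by gradient descent on the logical AND function. The data, learning rate, and epoch count are illustrative choices, not from the source.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logical AND as a toy training set: (inputs, target).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias
lr = 0.5         # learning rate

for _ in range(2000):                        # epochs
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y                          # gradient of log-loss w.r.t. pre-activation
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

for (x1, x2), y in data:
    p = sigmoid(w[0] * x1 + w[1] * x2 + b)
    print((x1, x2), "->", round(p), "target", y)  # rounded outputs match AND
```

A deep network repeats this pattern at scale: each layer applies many such neurons to the previous layer's outputs, and backpropagation extends the same gradient rule through the whole stack.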

View the full Wikipedia page for Deep learning

Bioinformatics in the context of Biocuration

Biocuration is the field of life sciences dedicated to organizing biomedical data, information and knowledge into structured formats, such as spreadsheets, tables and knowledge graphs. The biocuration of biomedical knowledge is made possible by the cooperative work of biocurators, software developers and bioinformaticians and is at the base of the work of biological databases.

View the full Wikipedia page for Biocuration

Bioinformatics in the context of Latent variable

In statistics, latent variables (from Latin: present participle of lateo 'lie hidden') are variables that can only be inferred indirectly, through a mathematical model, from other variables that can be directly observed or measured. Such latent variable models are used in many disciplines, including engineering, medicine, ecology, physics, machine learning/artificial intelligence, natural language processing, bioinformatics, chemometrics, demography, economics, management, political science, psychology and the social sciences.

Latent variables may correspond to aspects of physical reality. These could in principle be measured, but may not be for practical reasons. Among the earliest expressions of this idea is Francis Bacon's polemic the Novum Organum, itself a challenge to the more traditional logic expressed in Aristotle's Organon.

View the full Wikipedia page for Latent variable

Bioinformatics in the context of Pattern recognition

Pattern recognition is the task of assigning a class to an observation based on patterns extracted from data. While similar, pattern recognition (PR) is not to be confused with pattern machines (PM) which may possess PR capabilities but their primary function is to distinguish and create emergent patterns. PR has applications in statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Pattern recognition has its origins in statistics and engineering; some modern approaches to pattern recognition include the use of machine learning, due to the increased availability of big data and a new abundance of processing power.

Pattern recognition systems are commonly trained from labeled "training" data. When no labeled data are available, other algorithms can be used to discover previously unknown patterns. Knowledge discovery in databases (KDD) and data mining have a larger focus on unsupervised methods and a stronger connection to business use. Pattern recognition focuses more on the signal and also takes acquisition and signal processing into consideration. It originated in engineering, and the term is popular in the context of computer vision: a leading computer vision conference is named Conference on Computer Vision and Pattern Recognition.
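The "assign a class to an observation" task can be sketched with one of the simplest classifiers trained from labeled data, a nearest-neighbor rule. This is a generic textbook sketch, not a method claimed by the source; the feature vectors and labels are invented.

```python
def nearest_neighbor_classify(train, x):
    """1-nearest-neighbor rule: assign `x` the label of the closest
    training example under squared Euclidean distance.

    train: list of (feature_vector, label) pairs.
    """
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda item: dist2(item[0], x))[1]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((5.0, 5.0), "b")]
print(nearest_neighbor_classify(train, (0.2, 0.1)))  # a
print(nearest_neighbor_classify(train, (4.9, 5.2)))  # b
```

Even this toy shows why labeled training data matter: the classifier's entire "knowledge" is the labeled examples it memorizes.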

View the full Wikipedia page for Pattern recognition

Bioinformatics in the context of Edit distance

In computational linguistics and computer science, edit distance is a string metric, i.e. a way of quantifying how dissimilar two strings (e.g., words) are to one another, measured by counting the minimum number of operations required to transform one string into the other. Edit distances find applications in natural language processing, where automatic spelling correction can determine candidate corrections for a misspelled word by selecting words from a dictionary that have a low distance to the word in question. In bioinformatics, it can be used to quantify the similarity of DNA sequences, which can be viewed as strings of the letters A, C, G and T.

Different definitions of an edit distance use different sets of string operations. The Levenshtein distance operations are the removal, insertion, or substitution of a character in the string. Because it is the most common metric, the term Levenshtein distance is often used interchangeably with edit distance.
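The Levenshtein distance just described can be computed with the classic dynamic-programming recurrence. The sketch below keeps only one row of the table in memory; the example strings are illustrative.

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string `a` into string `b`."""
    # prev[j] holds the distance between a[:i-1] and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance between a[:i] and the empty prefix of b
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            cur.append(min(prev[j] + 1,        # deletion from a
                           cur[j - 1] + 1,     # insertion into a
                           prev[j - 1] + cost))  # substitution (or match)
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
print(levenshtein("ACGT", "AGT"))        # 1: one deletion relates the DNA strings
```

The second example is the bioinformatics use mentioned above: treating DNA sequences as strings over {A, C, G, T} and scoring how many mutations separate them.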

View the full Wikipedia page for Edit distance

Bioinformatics in the context of Graph drawing

Graph drawing is an area of mathematics and computer science combining methods from geometric graph theory and information visualization to derive two-dimensional (or, sometimes, three-dimensional) depictions of graphs arising from applications such as social network analysis, cartography, linguistics, and bioinformatics.

A drawing of a graph or network diagram is a pictorial representation of the vertices and edges of a graph. This drawing should not be confused with the graph itself: very different layouts can correspond to the same graph. In the abstract, all that matters is which pairs of vertices are connected by edges. In the concrete, however, the arrangement of these vertices and edges within a drawing affects its understandability, usability, fabrication cost, and aesthetics. The problem gets worse if the graph changes over time by adding and deleting edges (dynamic graph drawing) and the goal is to preserve the user's mental map.

View the full Wikipedia page for Graph drawing

Bioinformatics in the context of Molecular biologist

Molecular biology /məˈlɛkjʊlər/ is a branch of biology that seeks to understand the molecular structures and chemical processes that are the basis of biological activity within and between cells. It is centered largely on the study of nucleic acids (such as DNA and RNA) and proteins. It examines the structure, function, and interactions of these macromolecules as they orchestrate processes such as replication, transcription, translation, protein synthesis, and complex biomolecular interactions. The field of molecular biology is multi-disciplinary, relying on principles from genetics, biochemistry, physics, mathematics, and more recently computer science (bioinformatics).

Though cells and other microscopic structures had been observed in organisms as early as the 18th century, a detailed understanding of the mechanisms and interactions governing their behavior did not emerge until the 20th century, when technologies used in physics and chemistry had advanced sufficiently to permit their application in the biological sciences. The term 'molecular biology' was first used in 1945 by the English physicist William Astbury, who described it as an approach focused on discerning the underpinnings of biological phenomena—i.e. uncovering the physical and chemical structures and properties of biological molecules, as well as their interactions with other molecules and how these interactions explain observations of so-called classical biology, which instead studies biological processes at larger scales and higher levels of organization. In 1953, Francis Crick, James Watson, Rosalind Franklin, and their colleagues at the Medical Research Council Unit, Cavendish Laboratory, were the first to describe the double helix model for the chemical structure of deoxyribonucleic acid (DNA), which is often considered a landmark event for the nascent field because it provided a physico-chemical basis by which to understand the previously nebulous idea of nucleic acids as the primary substance of biological inheritance. They proposed this structure based on previous research done by Franklin, which was conveyed to them by Maurice Wilkins and Max Perutz. Their work led to the discovery of DNA in other microorganisms, plants, and animals.

View the full Wikipedia page for Molecular biologist

Bioinformatics in the context of Conserved sequence

In evolutionary biology, conserved sequences are identical or similar sequences in nucleic acids (DNA and RNA) or proteins across species (orthologous sequences), or within a genome (paralogous sequences), or between donor and receptor taxa (xenologous sequences). Conservation indicates that a sequence has been maintained by natural selection.

A highly conserved sequence is one that has remained relatively unchanged far back up the phylogenetic tree, and hence far back in geological time. Examples of highly conserved sequences include the RNA components of ribosomes present in all domains of life, the homeobox sequences widespread amongst eukaryotes, and the tmRNA in bacteria. The study of sequence conservation overlaps with the fields of genomics, proteomics, evolutionary biology, phylogenetics, bioinformatics and mathematics.
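A simple way to quantify the conservation described above, given a multiple sequence alignment, is the fraction of sequences agreeing on the majority residue at each column. This is a minimal sketch only (real tools use more sophisticated scores such as entropy-based measures), and the aligned sequences are invented.

```python
from collections import Counter

def column_conservation(alignment):
    """For each column of an aligned set of equal-length sequences, return
    the fraction of sequences sharing that column's most common residue."""
    scores = []
    for column in zip(*alignment):
        majority_count = Counter(column).most_common(1)[0][1]
        scores.append(majority_count / len(column))
    return scores

aln = ["ACGT",
       "ACGA",
       "ACTT"]
print(column_conservation(aln))  # first two columns fully conserved (1.0)
```

Columns scoring 1.0 are candidates for being "highly conserved", i.e. maintained by natural selection across the sequences compared.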

View the full Wikipedia page for Conserved sequence

Bioinformatics in the context of Genomic

Genomics is an interdisciplinary field of molecular biology focusing on the structure, function, evolution, mapping, and editing of genomes. A genome is an organism's complete set of DNA, including all of its genes as well as its hierarchical, three-dimensional structural configuration. In contrast to genetics, which refers to the study of individual genes and their roles in inheritance, genomics aims at the collective characterization and quantification of all of an organism's genes, their interrelations and influence on the organism. Genes may direct the production of proteins with the assistance of enzymes and messenger molecules. In turn, proteins make up body structures such as organs and tissues as well as control chemical reactions and carry signals between cells. Genomics also involves the sequencing and analysis of genomes through the use of high-throughput DNA sequencing and bioinformatics to assemble and analyze the function and structure of entire genomes. Advances in genomics have triggered a revolution in discovery-based research and systems biology to facilitate understanding of even the most complex biological systems such as the brain.

The field also includes studies of intragenomic (within the genome) phenomena such as epistasis (effect of one gene on another), pleiotropy (one gene affecting more than one trait), heterosis (hybrid vigour), and other interactions between loci and alleles within the genome.

View the full Wikipedia page for Genomic

Bioinformatics in the context of Parasitology

Parasitology is the study of parasites, their hosts, and the relationship between them. As a biological discipline, the scope of parasitology is not determined by the organism or environment in question but by their way of life. This means it forms a synthesis of other disciplines, and draws on techniques from fields such as cell biology, bioinformatics, biochemistry, molecular biology, immunology, genetics, evolution and ecology.

View the full Wikipedia page for Parasitology