Ontology (information science) in the context of Semantic interoperability


⭐ Core Definition: Ontology (information science)

In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology.

Every academic discipline or field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain, interoperability of data systems, and discoverability of data. Translating research papers in any field becomes easier when experts from different countries maintain a controlled vocabulary that maps the jargon of their respective languages. For instance, the definition and ontology of economics is a primary concern in Marxist economics, but also in other subfields of economics. An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining what capital assets are at risk and by how much (see risk management).

Ontology (information science) in the context of Semantic interoperability

Semantic interoperability is the ability of computer systems to exchange data with unambiguous, shared meaning. Semantic interoperability is a requirement to enable machine computable logic, inferencing, knowledge discovery, and data federation between information systems.

Semantic interoperability is therefore concerned not just with the packaging of data (syntax), but the simultaneous transmission of the meaning with the data (semantics). This is accomplished by adding data about the data (metadata), linking each data element to a controlled, shared vocabulary. The meaning of the data is transmitted with the data itself, in one self-describing "information package" that is independent of any information system. It is this shared vocabulary, and its associated links to an ontology, which provides the foundation and capability of machine interpretation, inference, and logic.
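The idea of a self-describing information package can be sketched in plain Python. This is a minimal illustration, not any real standard: the vocabulary codes and field names below are invented for the example, loosely modeled on how clinical code systems pair an identifier with an agreed meaning.

```python
# Hypothetical controlled, shared vocabulary: code -> agreed meaning.
SHARED_VOCABULARY = {
    "8867-4": "Heart rate",
    "8480-6": "Systolic blood pressure",
}

def package(code: str, value, unit: str) -> dict:
    """Bundle a data element with metadata that resolves its meaning."""
    if code not in SHARED_VOCABULARY:
        raise KeyError(f"Unknown vocabulary code: {code}")
    return {
        "code": code,                        # link into the shared vocabulary
        "meaning": SHARED_VOCABULARY[code],  # meaning travels with the data
        "value": value,
        "unit": unit,
    }

pkg = package("8867-4", 72, "beats/min")
print(pkg["meaning"])  # any receiving system can resolve this without guessing
```

Because the meaning is carried inside the package rather than implied by a database schema, the receiving system needs only the shared vocabulary, not knowledge of the sender's internal data model.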


Ontology (information science) in the context of Knowledge representation

Knowledge representation (KR) aims to model information in a structured manner to formally represent it as knowledge in knowledge-based systems whereas knowledge representation and reasoning (KRR, KR&R, or KR²) also aims to understand, reason, and interpret knowledge. KRR is widely used in the field of artificial intelligence (AI) with the goal to represent information about the world in a form that a computer system can use to solve complex tasks, such as diagnosing a medical condition or having a natural-language dialog. KR incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that make complex systems easier to design and build. KRR also incorporates findings from logic to automate various kinds of reasoning.

Traditional KRR focuses more on the declarative representation of knowledge. Related knowledge representation formalisms mainly include vocabularies, thesauri, semantic networks, axiom systems, frames, rules, logic programs, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, model generators, and classifiers.
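The rule-plus-inference-engine pairing can be illustrated with a toy forward-chaining engine. This is a sketch of the general technique, not any named system; the medical facts and rule names are invented for the example.

```python
def forward_chain(facts, rules):
    """Apply (premises, conclusion) rules until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative declarative rules: premises entail a conclusion.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_test"),
]

derived = forward_chain({"has_fever", "has_cough"}, rules)
print("recommend_test" in derived)  # True: derived via two chained rules
```

The knowledge (the rules) stays declarative, while the engine supplies the reasoning, which is the separation traditional KRR aims for.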

View the full Wikipedia page for Knowledge representation

Ontology (information science) in the context of Process ontology

In philosophy, a process ontology refers to a universal model of the structure of the world as an ordered wholeness. Such ontologies are fundamental ontologies, in contrast to the so-called applied ontologies. Fundamental ontologies do not claim to be accessible to any empirical proof in themselves but to be a structural design pattern, out of which empirical phenomena can be explained and put together consistently. Throughout Western history, the dominating fundamental ontology has been the so-called substance theory. However, fundamental process ontologies have become more important in recent times, because progress in the discovery of the foundations of physics has spurred the development of a basic concept able to integrate such boundary notions as "energy", "object", and those of the physical dimensions of space and time.

In computer science, a process ontology is a description of the components and their relationships that make up a process. A formal process ontology is an ontology in the knowledge domain of operations. Often such ontologies take advantage of the benefits of an upper ontology. Planning software can be used to perform plan generation based on the formal description of the process and its constraints. Numerous efforts have been made to define a process/planning ontology.
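The computer-science sense of a process ontology, components plus ordering constraints from which a plan is generated, can be sketched with Python's standard-library topological sorter. The step names and dependencies below are invented for illustration.

```python
from graphlib import TopologicalSorter

# A tiny "process ontology": each step mapped to the steps it depends on.
process = {
    "cut": set(),
    "drill": {"cut"},
    "assemble": {"cut", "drill"},
    "paint": {"assemble"},
}

# Plan generation: any ordering consistent with the constraints.
plan = list(TopologicalSorter(process).static_order())
print(plan)  # ['cut', 'drill', 'assemble', 'paint']
```

Real planning software handles far richer constraints (resources, durations, alternatives), but the core move is the same: describe the process formally, then let a solver derive a valid plan.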

View the full Wikipedia page for Process ontology

Ontology (information science) in the context of Classification scheme

In information science and ontology, a classification scheme is an arrangement of classes or groups of classes. The activity of developing the schemes bears similarity to taxonomy, but with perhaps a more theoretical bent, as a single classification scheme can be applied over a wide semantic spectrum while taxonomies tend to be devoted to a single topic.

In the abstract, the resulting structures are a crucial aspect of metadata, often represented as a hierarchical structure and accompanied by descriptive information of the classes or groups. Such a classification scheme is intended to be used for the classification of individual objects into the classes or groups, and the classes or groups are based on characteristics which the objects (members) have in common.
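Classifying objects into classes defined by shared characteristics can be sketched as follows. The scheme is invented for illustration: each class lists a parent and the characteristics its members must share, and an object is assigned the most specific matching class.

```python
SCHEME = {
    # class: (parent, required characteristics of members)
    "document": (None,       {"has_text"}),
    "article":  ("document", {"has_text", "has_author"}),
    "preprint": ("article",  {"has_text", "has_author", "unreviewed"}),
}

def classify(characteristics: set) -> str:
    """Assign the most specific class whose requirements are all met."""
    best, best_size = None, -1
    for cls, (_, required) in SCHEME.items():
        if required <= characteristics and len(required) > best_size:
            best, best_size = cls, len(required)
    return best

print(classify({"has_text", "has_author"}))  # article
```

The hierarchy emerges from the requirements themselves: every preprint satisfies the article requirements, and every article satisfies the document requirements.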

View the full Wikipedia page for Classification scheme

Ontology (information science) in the context of Large language model

A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) and provide the core capabilities of modern chatbots. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.

They consist of billions to trillions of parameters and operate as general-purpose sequence models, generating, summarizing, translating, and reasoning over text. LLMs represent a significant new technology in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities like conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems.

View the full Wikipedia page for Large language model

Ontology (information science) in the context of Frame (artificial intelligence)

Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations".

They were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored as ontologies of sets.
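The frame idea, a bundle of slots describing a stereotyped situation, with defaults inherited from a more general frame, can be sketched in a few lines of Python. The slot and frame names are illustrative only, not taken from any frame language.

```python
class Frame:
    """A Minsky-style frame: named slots plus inheritance from a parent."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        # Look up locally first, then fall back to the parent's default.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

room = Frame("room", walls=4, has_door=True)
kitchen = Frame("kitchen", parent=room, has_stove=True)

print(kitchen.get("walls"))  # 4, a default inherited from the room frame
```

The inheritance chain is what makes frames economical: a specific frame records only what distinguishes its situation from the stereotype it specializes.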

View the full Wikipedia page for Frame (artificial intelligence)

Ontology (information science) in the context of Upper ontology

In information science, an upper ontology (also known as a top-level ontology, upper model, or foundation ontology) is an ontology (in the sense used in information science) that consists of very general terms (such as "object", "property", "relation") that are common across all domains. An important function of an upper ontology is to support broad semantic interoperability among a large number of domain-specific ontologies by providing a common starting point for the formulation of definitions. Terms in the domain ontology are ranked under the terms in the upper ontology, i.e., the upper ontology classes are superclasses or supersets of all the classes in the domain ontologies.
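The ranking of domain terms under upper-ontology terms can be sketched with a simple parent map. The two "domain ontologies" below (medical and engineering) are invented for illustration; both hook into the same general top-level terms, so a single subsumption check works across domains.

```python
PARENT = {
    # upper ontology: very general terms
    "entity": None,
    "object": "entity",
    "property": "entity",
    # illustrative medical domain ontology
    "organ": "object",
    "heart": "organ",
    # illustrative engineering domain ontology
    "machine": "object",
    "pump": "machine",
}

def is_a(cls, ancestor):
    """Walk up the hierarchy to test subsumption."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = PARENT[cls]
    return False

print(is_a("heart", "entity"), is_a("pump", "entity"))  # True True
```

Because both domain hierarchies bottom out in the same upper-level classes, software written against the upper ontology can handle terms from either domain without knowing their specifics.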

A number of upper ontologies have been proposed, each with its own proponents.

View the full Wikipedia page for Upper ontology

Ontology (information science) in the context of Cell ontology

The Cell Ontology is an ontology that aims at capturing the diversity of cell types in animals. It is part of the Open Biological and Biomedical Ontologies (OBO) Foundry. The Cell Ontology identifiers and organizational structure are used to annotate data at the level of cell types, for example in single-cell RNA-seq studies. It is one important resource in the construction of the Human Cell Atlas.

The Cell Ontology was first described in an academic article in 2005.

View the full Wikipedia page for Cell ontology

Ontology (information science) in the context of Body of knowledge

A body of knowledge (BOK or BoK) is the complete set of concepts, terms and activities that make up a professional domain, as defined by the relevant learned society or professional association. It is a type of knowledge representation by any knowledge organization. Several definitions of BOK have been developed, for example:

  • "Structured knowledge that is used by members of a discipline to guide their practice or work" (BOK-def).
  • "The prescribed aggregation of knowledge in a particular area an individual is expected to have mastered to be considered or certified as a practitioner" (BOK-def).
  • The systematic collection of activities and outcomes in terms of their values, constructs, models, principles and instantiations, which arises from continuous discovery and validation work by members of the profession and enables self-reflective growth and reproduction of the profession (Romme 2016).
  • A set of accepted and agreed upon standards and nomenclatures pertaining to a field or profession (INFORMS 2009).
  • A set of knowledge within a profession or subject area which is generally agreed as both essential and generally known (Oliver 2012).

A body of knowledge is the accepted ontology for a specific domain. A BOK is more than simply a collection of terms; a professional reading list; a library; a website or a collection of websites; a description of professional functions; or even a collection of information.

View the full Wikipedia page for Body of knowledge

Ontology (information science) in the context of ICD-11

The ICD-11 is the eleventh revision of the International Classification of Diseases (ICD). It replaces the ICD-10 as the global standard for recording health information and causes of death. The ICD is developed and annually updated by the World Health Organization (WHO). Development of the ICD-11 started in 2007 and spanned over a decade of work, involving over 300 specialists from 55 countries divided into 30 work groups, with an additional 10,000 proposals from people all over the world. Following an alpha version in May 2011 and a beta draft in May 2012, a stable version of the ICD-11 was released on 18 June 2018, and officially endorsed by all WHO members during the 72nd World Health Assembly on 25 May 2019.

The ICD-11 is a large ontology consisting of about 85,000 entities, also called classes or nodes. An entity can be anything that is relevant to health care. It usually represents a disease or a pathogen, but it can also be an isolated symptom or (developmental) anomaly of the body. There are also classes for reasons for contact with health services, social circumstances of the patient, and external causes of injury or death. The ICD-11 is part of the WHO-FIC, a family of medical classifications. The WHO-FIC contains the Foundation Component, which comprises all entities of the classifications endorsed by the WHO. The Foundation is the common core from which all classifications are derived. For example, the ICD-O is a derivative classification optimized for use in oncology. The primary derivative of the Foundation is called the ICD-11 MMS, and it is this system that is commonly referred to as simply "the ICD-11". MMS stands for Mortality and Morbidity Statistics. The ICD-11 is distributed under a Creative Commons BY-ND license.

View the full Wikipedia page for ICD-11

Ontology (information science) in the context of Semantic Web

The Semantic Web, sometimes known as Web 3.0, is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable.

To enable the encoding of semantics with the data, technologies such as Resource Description Framework (RDF) and Web Ontology Language (OWL) are used. These technologies are used to formally represent metadata. For example, an ontology can describe concepts, relationships between entities, and categories of things. These embedded semantics offer significant advantages such as reasoning over data and operating with heterogeneous data sources. These standards promote common data formats and exchange protocols on the Web, fundamentally the RDF. According to the W3C, "The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries." The Semantic Web is therefore regarded as an integrator across different content and information applications and systems.
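RDF's core idea, data as subject-predicate-object triples that can be merged from heterogeneous sources and queried uniformly, can be sketched in plain Python. The shortened URIs below (`ex:`, `rdf:type`) are invented stand-ins for full IRIs; a real system would use an RDF library and SPARQL.

```python
# Two hypothetical data sources, each a bag of triples.
source_a = [("ex:Alice", "rdf:type", "ex:Person"),
            ("ex:Alice", "ex:worksFor", "ex:AcmeCorp")]
source_b = [("ex:AcmeCorp", "rdf:type", "ex:Organization")]

# Merging heterogeneous sources is just set union of triples.
graph = set(source_a) | set(source_b)

def query(graph, s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in graph
            if (s is None or s == ts)
            and (p is None or p == tp)
            and (o is None or o == to)]

print(len(query(graph, p="rdf:type")))  # 2 typed resources, one per source
```

Because every statement has the same triple shape, data from independent systems composes without schema negotiation, which is what makes the model an integrator across applications.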

View the full Wikipedia page for Semantic Web

Ontology (information science) in the context of Map–territory relation

The map–territory relation is the relationship between an object and a representation of that object, as in the relation between a geographical territory and a map of it. Mistaking the map for the territory is a logical fallacy that occurs when someone confuses the semantics of a term with what it represents. Polish-American scientist and philosopher Alfred Korzybski remarked that "the map is not the territory" and that "the word is not the thing", encapsulating his view that an abstraction derived from something, or a reaction to it, is not the thing itself. Korzybski held that many people do confuse maps with territories, that is, confuse conceptual models of reality with reality itself. These ideas are crucial to general semantics, a system Korzybski originated.

The relationship has also been expressed in other terms, such as "the model is not the data", "all models are wrong", and Alan Watts's "The menu is not the meal." The concept is thus quite relevant throughout ontology and applied ontology regardless of any connection to general semantics per se (or absence thereof). Its avatars are thus encountered in semantics, statistics, logistics, business administration, semiotics, and many other applications.

View the full Wikipedia page for Map–territory relation

Ontology (information science) in the context of Semantic relationship

Contemporary ontologies share many structural similarities, regardless of the ontology language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes, and relations.
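These four shared components can be made concrete in a short sketch. The class, attribute, and relation names below are invented for illustration; real ontology languages (OWL, for instance) express the same structure with far richer semantics.

```python
from dataclasses import dataclass, field

@dataclass
class Individual:
    """An instance: it belongs to a class, carries attributes,
    and stands in relations to other individuals."""
    name: str
    cls: str                                       # class (concept)
    attributes: dict = field(default_factory=dict) # attribute -> value
    relations: list = field(default_factory=list)  # (relation, target) pairs

car = Individual("Ford Explorer", cls="Car",
                 attributes={"doors": 4},
                 relations=[("made_by", "Ford Motor Company")])

print(car.cls, car.attributes["doors"])  # Car 4
```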

View the full Wikipedia page for Semantic relationship

Ontology (information science) in the context of OBO Foundry

The Open Biological and Biomedical Ontologies (OBO) Foundry is a group of people who build and maintain ontologies related to the life sciences. The OBO Foundry establishes a set of ontology-development principles aimed at creating a suite of interoperable reference ontologies in the biomedical domain. Currently, there are more than a hundred ontologies that follow the OBO Foundry principles.

The OBO Foundry effort makes it easier to integrate biomedical results and carry out analysis in bioinformatics. It does so by offering a structured reference for terms of different research fields and their interconnections (e.g., a phenotype in a mouse model and its related phenotype in zebrafish).

View the full Wikipedia page for OBO Foundry