Formal language theory in the context of Noncontracting grammar


⭐ Core Definition: Formal language theory

In logic, mathematics, computer science, and linguistics, a formal language is a set of strings whose symbols are taken from a set called an alphabet.

The alphabet of a formal language consists of symbols that concatenate into strings (also called "words"). Words that belong to a particular formal language are sometimes called well-formed words. A formal language is often defined by means of a formal grammar such as a regular grammar or context-free grammar.

👉 Formal language theory in the context of Noncontracting grammar

In formal language theory, a noncontracting grammar (also called monotonic grammar) is a type of formal grammar whose production rules never decrease the total length of a string during derivation. This means that when applying any rule to transform one string into another, the resulting string must have at least as many symbols as the original.

Noncontracting grammars are significant because they are equivalent in expressive power to context-sensitive grammars and define the same class of languages (the context-sensitive languages) in the Chomsky hierarchy. This equivalence makes them important for understanding the computational limits of natural language processing and compiler design, as they can model complex linguistic phenomena while maintaining certain desirable mathematical properties. Some authors use the term context-sensitive grammar to refer to noncontracting grammars in general, though this usage varies in the literature.
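The length condition above is easy to check mechanically. A minimal Python sketch, assuming a grammar is represented as a list of (lhs, rhs) rule pairs of symbol strings (the representation and names are illustrative, not standard):

```python
def is_noncontracting(rules):
    """Return True if no production rule shortens the string it rewrites."""
    return all(len(rhs) >= len(lhs) for lhs, rhs in rules)

# A classic noncontracting grammar for {a^n b^n c^n : n >= 1},
# with uppercase letters as nonterminals:
rules = [
    ("S", "abc"),
    ("S", "aSBc"),
    ("cB", "Bc"),   # length 2 -> 2: allowed
    ("bB", "bb"),   # length 2 -> 2: allowed
]

print(is_noncontracting(rules))           # True
print(is_noncontracting([("AB", "a")]))   # False: rule shrinks the string
```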

Explore More Topics in this Dossier

Formal language theory in the context of Formal epistemology

Formal epistemology uses formal methods from decision theory, logic, probability theory and computability theory to model and reason about issues of epistemological interest. Work in this area spans several academic fields, including philosophy, computer science, economics, and statistics. The focus of formal epistemology has tended to differ somewhat from that of traditional epistemology, with topics like uncertainty, induction, and belief revision garnering more attention than the analysis of knowledge, skepticism, and issues with justification. Formal epistemology extends into formal language theory.

View the full Wikipedia page for Formal epistemology

Formal language theory in the context of Regular grammar

In theoretical computer science and formal language theory, a regular grammar is a grammar that is right-regular or left-regular. While their exact definition varies from textbook to textbook, they all require that each production rule contain at most one nonterminal symbol, and that this symbol sit consistently at one end of the rule's right-hand side (always at the end for right-regular grammars, always at the start for left-regular ones).

Every regular grammar describes a regular language.
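Under the common right-regular convention, every rule has one of the forms A → ε, A → a, or A → aB. That shape can be validated mechanically; a hedged Python sketch, again assuming rules are (lhs, rhs) string pairs:

```python
def is_right_regular(rules, nonterminals):
    """Check that every rule has the form A -> "", A -> a, or A -> aB:
    one terminal, optionally followed by exactly one nonterminal."""
    for lhs, rhs in rules:
        if len(lhs) != 1 or lhs not in nonterminals:
            return False
        if rhs == "":
            continue  # A -> empty string is allowed
        if len(rhs) == 1 and rhs not in nonterminals:
            continue  # A -> a
        if (len(rhs) == 2 and rhs[0] not in nonterminals
                and rhs[1] in nonterminals):
            continue  # A -> aB
        return False
    return True

rules = [("S", "aA"), ("A", "bA"), ("A", "")]   # generates a b*
print(is_right_regular(rules, {"S", "A"}))       # True
print(is_right_regular([("S", "Aa")], {"S", "A"}))  # False: left-regular rule
```

A left-regular check is symmetric (the nonterminal at the start of the right-hand side); note that mixing the two rule forms in one grammar can generate non-regular languages.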

View the full Wikipedia page for Regular grammar

Formal language theory in the context of Alphabet (formal languages)

In formal language theory, an alphabet, often called a vocabulary in the context of terminal and nonterminal symbols, is a non-empty set of indivisible symbols/characters/glyphs, typically thought of as representing letters, characters, digits, phonemes, or even words. The definition is used in a diverse range of fields including logic, mathematics, computer science, and linguistics. An alphabet may have any cardinality ("size") and, depending on its purpose, may be finite (e.g., the alphabet of letters "a" through "z"), countable, or even uncountable.

Strings, also known as "words" or "sentences", over an alphabet are defined as a sequence of the symbols from the alphabet set. For example, the alphabet of lowercase letters "a" through "z" can be used to form English words like "iceberg" while the alphabet of both upper and lower case letters can also be used to form proper names like "Wikipedia". A common alphabet is {0,1}, the binary alphabet, and "00101111" is an example of a binary string. Infinite sequences of symbols may be considered as well (see Omega language).
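The definition of strings over an alphabet is concrete enough to enumerate directly. A small Python sketch (the helper name `strings_over` is illustrative):

```python
from itertools import product

def strings_over(alphabet, max_len):
    """Yield every string over `alphabet` of length 0..max_len,
    shortest first; length 0 is the empty string."""
    for n in range(max_len + 1):
        for symbols in product(alphabet, repeat=n):
            yield "".join(symbols)

binary = ["0", "1"]   # the binary alphabet {0, 1}
print(list(strings_over(binary, 2)))
# ['', '0', '1', '00', '01', '10', '11']
```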

View the full Wikipedia page for Alphabet (formal languages)

Formal language theory in the context of Regular language

In theoretical computer science and formal language theory, a regular language (also called a rational language) is a formal language that can be defined by a regular expression, in the strict sense in theoretical computer science (as opposed to many modern regular expression engines, which are augmented with features that allow the recognition of non-regular languages).

Alternatively, a regular language can be defined as a language recognised by a finite automaton. The equivalence of regular expressions and finite automata is known as Kleene's theorem (after American mathematician Stephen Cole Kleene). In the Chomsky hierarchy, regular languages are the languages generated by Type-3 grammars.
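Kleene's theorem can be illustrated by checking a regular expression against a hand-built finite automaton for the same language. A Python sketch for one example language, binary strings containing an even number of 1s (both the pattern and the automaton are illustrative choices, not from the source):

```python
import re

# Regular-expression view; fullmatch anchors the whole string.
pattern = re.compile(r"(0*10*1)*0*")

def dfa_accepts(s):
    """Finite-automaton view: two states tracking the parity of 1s seen."""
    state = 0              # 0 = even number of 1s so far (the accepting state)
    for ch in s:
        if ch == "1":
            state ^= 1     # flip parity on each 1
    return state == 0

# The two descriptions agree on every test string, as Kleene's theorem predicts.
for s in ["", "0", "1", "11", "101", "0110"]:
    assert bool(pattern.fullmatch(s)) == dfa_accepts(s)
```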

View the full Wikipedia page for Regular language

Formal language theory in the context of Empty string

In formal language theory, the empty string, also known as the empty word or null string, is the unique string of length zero.
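A minimal Python illustration of the definition, plus one standard consequence (that the empty string is the identity element for concatenation, a fact not stated above but easy to verify):

```python
epsilon = ""                 # the empty string: the unique string of length 0
assert len(epsilon) == 0

# Identity element for concatenation: epsilon + w == w == w + epsilon
for w in ["", "a", "abc"]:
    assert epsilon + w == w and w + epsilon == w
```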

View the full Wikipedia page for Empty string

Formal language theory in the context of Kleene star

In formal language theory, the Kleene star (or Kleene operator or Kleene closure) refers to two related unary operations that can be applied either to an alphabet of symbols or to a formal language, i.e., a set of strings (finite sequences of symbols).

The Kleene star operator on an alphabet V generates the set V* of all finite-length strings over V, that is, finite sequences whose elements belong to V; in mathematics, it is more commonly known as the free monoid construction. The Kleene star operator on a language L generates another language L*, the set of all strings that can be obtained as a concatenation of zero or more members of L. In both cases, repetitions are allowed.
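Since L* is infinite whenever L contains a non-empty string, a program can only enumerate it up to a length bound. A Python sketch of that bounded enumeration (the function name is illustrative):

```python
def kleene_star_upto(L, max_len):
    """Return the set of strings in L* of length at most max_len, built by
    concatenating zero or more members of L; repetitions are allowed."""
    result = {""}            # zero concatenations give the empty string
    frontier = {""}
    while frontier:
        nxt = set()
        for prefix in frontier:
            for w in L:
                s = prefix + w
                if len(s) <= max_len and s not in result:
                    nxt.add(s)
        result |= nxt
        frontier = nxt
    return result

print(sorted(kleene_star_upto({"ab", "c"}, 4)))
# ['', 'ab', 'abab', 'abc', 'abcc', 'c', 'cab', 'cabc', 'cc', 'ccab', 'ccc', 'cccc']
```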

View the full Wikipedia page for Kleene star

Formal language theory in the context of Context-sensitive language

In formal language theory, a context-sensitive language is a formal language that can be defined by a context-sensitive grammar, where the applicability of a production rule may depend on the surrounding context of symbols. Unlike context-free grammars, which can apply rules regardless of context, context-sensitive grammars allow rules to be applied only when specific neighboring symbols are present, enabling them to express dependencies and agreements between distant parts of a string.

These languages correspond to type-1 languages in the Chomsky hierarchy and are equivalently defined by noncontracting grammars (grammars where production rules never decrease the total length of a string). Context-sensitive languages can model natural language phenomena such as subject-verb agreement, cross-serial dependencies, and other complex syntactic relationships that cannot be captured by simpler grammar types, making them important for computational linguistics and natural language processing.
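The equivalence with noncontracting grammars also yields a (very inefficient) recognition procedure: because sentential forms never shrink during a derivation, a breadth-first search from the start symbol can safely prune any form longer than the input. A Python sketch using a standard noncontracting grammar for {aⁿbⁿcⁿ : n ≥ 1} (the grammar encoding and names are illustrative):

```python
from collections import deque

# Noncontracting grammar for {a^n b^n c^n : n >= 1}; uppercase = nonterminal.
RULES = [("S", "abc"), ("S", "aSBc"), ("cB", "Bc"), ("bB", "bb")]

def derivable(target):
    """Breadth-first search over sentential forms starting from S.
    Noncontracting rules mean forms longer than the target are dead ends."""
    seen, queue = {"S"}, deque(["S"])
    while queue:
        form = queue.popleft()
        if form == target:
            return True
        for lhs, rhs in RULES:
            i = form.find(lhs)
            while i != -1:       # apply the rule at every occurrence of lhs
                new = form[:i] + rhs + form[i + len(lhs):]
                if len(new) <= len(target) and new not in seen:
                    seen.add(new)
                    queue.append(new)
                i = form.find(lhs, i + 1)
    return False

print(derivable("aabbcc"), derivable("aabbc"))  # True False
```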

View the full Wikipedia page for Context-sensitive language