Large Language Models in the context of "Transformer architecture"

⭐ Core Definition: Large Language Models

A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) and provide the core capabilities of modern chatbots. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.
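
The self-supervised training mentioned above requires no labeled data: the model is simply asked to predict each next token of raw text. Below is a minimal sketch of that objective, assuming PyTorch; the toy model, vocabulary size, and dimensions are illustrative stand-ins for a real transformer stack, not any particular system's configuration.

```python
# Minimal sketch of the self-supervised objective behind LLM pre-training:
# predict the next token at every position of a text sequence.
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64

# Toy stand-in for a deep transformer decoder.
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),
)

tokens = torch.randint(0, vocab_size, (1, 16))   # one tokenized text snippet
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # targets are inputs shifted by one

logits = model(inputs)                           # (batch, seq, vocab)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # the only "labels" are the text itself
```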

LLMs contain billions to trillions of parameters and operate as general-purpose sequence models that generate, summarize, translate, and reason over text. Their significance lies in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities such as conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems.
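
Because task selection happens entirely through the prompt, a single pre-trained model can be steered toward different tasks without retraining. A hedged sketch, assuming the Hugging Face transformers library; "gpt2" is a small public checkpoint used only for illustration and is itself too small to follow these instructions reliably.

```python
# Sketch: one pre-trained model handles different tasks purely through prompting.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Translate to French: 'Good morning' ->",
    "Summarize in one sentence: The transformer architecture replaced recurrence with attention.",
]

for prompt in prompts:
    out = generator(prompt, max_new_tokens=20, num_return_sequences=1)
    print(out[0]["generated_text"])
```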

In this Dossier

Large Language Models in the context of Computational semantics

Computational semantics is a subfield of computational linguistics. Its goal is to elucidate the cognitive mechanisms supporting the generation and interpretation of meaning in humans. It usually involves the creation of computational models that simulate particular semantic phenomena, and the evaluation of those models against data from human participants. While computational semantics is a scientific field, it has many applications in real-world settings and substantially overlaps with Artificial Intelligence.

Broadly speaking, the discipline can be subdivided into areas that mirror the internal organization of linguistics. For example, lexical semantics and frame semantics have active research communities within computational linguistics. Some popular methodologies are also strongly inspired by traditional linguistics. Most prominently, the area of distributional semantics, which underpins investigations into embeddings and the internals of Large Language Models, has roots in the work of Zellig Harris.
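
Distributional semantics rests on Harris's observation that words occurring in similar contexts tend to have similar meanings. A minimal sketch of that idea, assuming NumPy and a toy corpus (the words and window size are illustrative); modern embeddings and LLM internals can be viewed as learned, dense refinements of the same principle.

```python
# Toy distributional semantics: represent each word by the counts of the
# words that appear near it, then compare those count vectors.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts within a +/-1 word window.
counts = np.zeros((len(vocab), len(vocab)))
for i, word in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            counts[index[word], index[corpus[j]]] += 1

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "cat" and "dog" share contexts ("the ... sat"), so their vectors are close.
print(cosine(counts[index["cat"]], counts[index["dog"]]))
```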
