16-bit computing in the context of Intel 8086


⭐ Core Definition: 16-bit computing

In computer architecture, 16-bit integers, memory addresses, or other data units are those that are 16 bits (2 octets) wide. Also, 16-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers, address buses, or data buses of that size. 16-bit microcomputers are microcomputers that use 16-bit microprocessors.

A 16-bit register can store 2^16 different values. The range of integer values that can be stored in 16 bits depends on the integer representation used. With the two most common representations, the range is 0 through 65,535 (2^16 − 1) for representation as an (unsigned) binary number, and −32,768 (−1 × 2^15) through 32,767 (2^15 − 1) for representation as two's complement. Since 2^16 is 65,536, a processor with 16-bit memory addresses can directly access 64 KiB (65,536 bytes) of byte-addressable memory. If a system uses segmentation with 16-bit segment offsets, more can be accessed.
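
To make these ranges concrete, the minimal C sketch below prints the limits of unsigned and two's-complement 16-bit values and shows the modulo-2^16 wraparound of unsigned arithmetic; it is an illustration of the definitions above, not taken from any particular system.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t umax = UINT16_MAX;   /* 65,535  = 2^16 - 1 */
        int16_t  smin = INT16_MIN;    /* -32,768 = -(2^15)  */
        int16_t  smax = INT16_MAX;    /*  32,767 = 2^15 - 1 */

        printf("unsigned 16-bit range: 0 .. %u\n", (unsigned)umax);
        printf("signed 16-bit range: %d .. %d\n", smin, smax);

        /* Unsigned 16-bit arithmetic wraps around modulo 2^16. */
        uint16_t wrapped = (uint16_t)(umax + 1u);   /* 65,535 + 1 -> 0 */
        printf("65535 + 1 wraps to %u\n", (unsigned)wrapped);
        return 0;
    }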

👉 16-bit computing in the context of Intel 8086

The 8086 (also called iAPX 86) is a 16-bit microprocessor chip released by Intel on June 8, 1978, after development began in early 1976. It was followed by the Intel 8088 in 1979, which was a slightly modified chip with an external 8-bit data bus (allowing the use of cheaper and fewer supporting ICs).
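
In real mode, the 8086 forms a 20-bit physical address from two 16-bit values: the segment is shifted left by four bits and added to the offset (physical = segment × 16 + offset), which is how a 16-bit processor reaches 1 MiB of memory. The short C sketch below illustrates that calculation; the segment:offset values are examples only.

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real-mode address translation: physical = segment * 16 + offset,
       producing a 20-bit address (1 MiB address space). */
    static uint32_t physical_address(uint16_t segment, uint16_t offset) {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void) {
        /* 0xF000:0xFFF0 is the 8086 reset location, physical 0xFFFF0. */
        printf("0xF000:0xFFF0 -> 0x%05X\n",
               (unsigned)physical_address(0xF000, 0xFFF0));

        /* Different segment:offset pairs can name the same physical byte. */
        printf("0x1234:0x0005 -> 0x%05X\n",
               (unsigned)physical_address(0x1234, 0x0005));
        printf("0x1000:0x2345 -> 0x%05X\n",
               (unsigned)physical_address(0x1000, 0x2345));
        return 0;
    }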

The 8086 gave rise to the x86 architecture, which eventually became Intel's most successful line of processors. On June 5, 2018, Intel released a limited-edition CPU celebrating the 40th anniversary of the Intel 8086, called the Intel Core i7-8086K.

In this Dossier

16-bit computing in the context of W65C816S

The W65C816S (also 65C816 or 65816) is a 16-bit microprocessor (MPU) developed and sold by the Western Design Center (WDC). Introduced in 1985, the W65C816S is an enhanced version of the WDC 65C02 8-bit MPU, itself a CMOS enhancement of the venerable MOS Technology 6502 NMOS MPU. The 65C816 is the CPU for the Apple IIGS and, in modified form, the Super Nintendo Entertainment System.

The 65 in the part's designation comes from its 65C02 compatibility mode, and the 816 signifies that the MPU has selectable 8- and 16-bit register sizes. In addition to the availability of 16-bit registers, the W65C816S extends memory addressing to 24 bits, supporting up to 16 megabytes of random-access memory. It has an enhanced instruction set and a 16-bit stack pointer, as well as several new electrical signals for improved system hardware management.
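
As a rough illustration of those figures, the C sketch below shows that 24-bit addressing covers 2^24 bytes (16 MiB) and splits an example 24-bit address into the 8-bit bank and 16-bit offset the 65816 uses; the specific address chosen is hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* 24-bit addressing: 2^24 bytes = 16,777,216 bytes = 16 MiB. */
        uint32_t address_space = 1u << 24;
        printf("24-bit address space: %u bytes (%u MiB)\n",
               (unsigned)address_space, (unsigned)(address_space >> 20));

        /* A 24-bit address splits into an 8-bit bank and a 16-bit offset. */
        uint32_t addr   = 0x7E2000;             /* example address only */
        uint8_t  bank   = (uint8_t)(addr >> 16);
        uint16_t offset = (uint16_t)(addr & 0xFFFF);
        printf("0x%06X -> bank 0x%02X, offset 0x%04X\n",
               (unsigned)addr, (unsigned)bank, (unsigned)offset);
        return 0;
    }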

View the full Wikipedia page for W65C816S

16-bit computing in the context of Side-scrolling

A side-scrolling video game (alternatively side-scroller) is a video game viewed from a side-view camera angle where the screen follows the player as they move left or right. The jump from single-screen or flip-screen graphics to scrolling graphics during the golden age of arcade games was a pivotal leap in game design, comparable to the move to 3D graphics during the fifth generation.

Hardware support of smooth scrolling backgrounds is built into many arcade video games, some game consoles, and home computers. Examples include 8-bit systems like the Atari 8-bit computers and Nintendo Entertainment System, and 16-bit consoles, such as the Super Nintendo Entertainment System and Sega Genesis. These 16-bit consoles added multiple layers, which can be scrolled independently for a parallax scrolling effect.
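
The parallax effect amounts to moving each background layer by a different fraction of the camera's position, so distant layers appear to scroll more slowly. The sketch below is a generic C illustration of that idea; the layer names and scroll factors are hypothetical and not tied to any particular console's hardware.

    #include <stdio.h>

    int main(void) {
        /* Each layer scrolls at its own fraction of the camera position. */
        const char  *layer_name[]    = { "far clouds", "hills", "foreground" };
        const double scroll_factor[] = { 0.25, 0.5, 1.0 };   /* hypothetical */
        double camera_x = 320.0;     /* camera position in pixels */

        for (int i = 0; i < 3; i++) {
            double layer_offset = camera_x * scroll_factor[i];
            printf("%-10s scrolls to x offset %.1f\n",
                   layer_name[i], layer_offset);
        }
        return 0;
    }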

View the full Wikipedia page for Side-scrolling

16-bit computing in the context of UTF-16

UTF-16 (16-bit Unicode Transformation Format) is a character encoding that supports all 1,112,064 valid code points of Unicode. The encoding is variable-length, as code points are encoded with one or two 16-bit code units. UTF-16 arose from an earlier obsolete fixed-width 16-bit encoding now known as UCS-2 (for 2-byte Universal Character Set), once it became clear that more than 2^16 (65,536) code points were needed, including most emoji and important CJK characters such as those used for personal and place names.
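
The one-or-two-unit rule works as follows: code points up to U+FFFF fit in a single 16-bit unit, while larger code points are reduced by 0x10000 and split into a high surrogate (0xD800 plus the top 10 bits) and a low surrogate (0xDC00 plus the bottom 10 bits). The C sketch below encodes one code point this way; it is a minimal illustration, not a complete encoder (it does not reject invalid input such as unpaired surrogate values).

    #include <stdint.h>
    #include <stdio.h>

    /* Encode one Unicode code point as UTF-16; returns 1 or 2 units. */
    static int utf16_encode(uint32_t cp, uint16_t out[2]) {
        if (cp <= 0xFFFF) {                          /* single code unit */
            out[0] = (uint16_t)cp;
            return 1;
        }
        cp -= 0x10000;                               /* 20-bit remainder */
        out[0] = (uint16_t)(0xD800 + (cp >> 10));    /* high surrogate   */
        out[1] = (uint16_t)(0xDC00 + (cp & 0x3FF));  /* low surrogate    */
        return 2;
    }

    int main(void) {
        uint16_t units[2];
        int n = utf16_encode(0x1F600, units);        /* an emoji code point */
        printf("U+1F600 ->");
        for (int i = 0; i < n; i++)
            printf(" 0x%04X", (unsigned)units[i]);   /* 0xD83D 0xDE00 */
        printf("\n");
        return 0;
    }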

UTF-16 is used by the Windows API, and by many programming environments such as Java and Qt. The variable-length nature of UTF-16, combined with the fact that most characters are not variable-length (so variable length is rarely tested), has led to many bugs in software, including in Windows itself.

View the full Wikipedia page for UTF-16

16-bit computing in the context of PDP-11

The PDP-11 is a series of 16-bit minicomputers originally sold by Digital Equipment Corporation (DEC) from 1970 into the late 1990s, one of a set of products in the Programmed Data Processor (PDP) series. In total, around 600,000 PDP-11s of all models were sold, making it one of DEC's most successful product lines. The PDP-11 is considered by some experts to be the most popular minicomputer.

The PDP-11 included a number of innovative features in its instruction set and additional general-purpose registers that made it easier to program than earlier models in the PDP series. Further, the innovative Unibus system allowed external devices to be more easily interfaced to the system using direct memory access, opening the system to a wide variety of peripherals. The PDP-11 replaced the PDP-8 in many real-time computing applications, although both product lines lived in parallel for more than 10 years. The ease of programming of the PDP-11 made it popular for general-purpose computing.

View the full Wikipedia page for PDP-11

16-bit computing in the context of X86

x86 (also known as 80x86 or the 8086 family) is a family of complex instruction set computer (CISC) instruction set architectures initially developed by Intel, based on the 8086 microprocessor and its 8-bit-external-bus variant, the 8088. The 8086 was introduced in 1978 as a fully 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than can be covered by a plain 16-bit address. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486. Colloquially, their names were "186", "286", "386" and "486".

The term is not synonymous with IBM PC compatibility, as this implies a multitude of other computer hardware. Embedded systems and general-purpose computers used x86 chips before the PC-compatible market started, some of them before the IBM PC (1981) debut.

View the full Wikipedia page for X86

16-bit computing in the context of Windows NT

Windows NT is a proprietary graphical operating system produced by Microsoft as part of its Windows product line, the first version of which, Windows NT 3.1, was released on July 27, 1993. Originally made for the workstation, office, and server markets, the Windows NT line was made available to consumers with the release of Windows XP in 2001. The underlying technology of Windows NT continues to exist to this day with incremental changes and improvements, the latest Windows NT-based release being Windows Server 2025, announced in 2024.

The name "Windows NT" originally denoted the major technological advancements that it had introduced to the Windows product line, including eliminating the 16-bit memory access limitations of earlier Windows releases such as Windows 3.1. Each Windows release built on this technology is considered to be based on, if not a revision of Windows NT, even though the Windows NT name itself has not been used in any other Windows releases since Windows NT 4.0 in 1996.

View the full Wikipedia page for Windows NT

16-bit computing in the context of Intel 8080

The Intel 8080 is Intel's second 8-bit microprocessor. Introduced in April 1974, the 8080 was an enhanced successor to the earlier Intel 8008 microprocessor, though not binary compatible with it. Originally intended for use in embedded systems such as calculators, cash registers, computer terminals, and industrial robots, its performance soon led to adoption in a broader range of systems, ultimately launching the microcomputer industry.

Several key design choices contributed to the 8080’s success. Its 40‑pin package simplified interfacing compared to the 8008’s 18‑pin design, enabling a more efficient data bus. The transition to NMOS technology provided faster transistor speeds than the 8008's PMOS, also making it TTL compatible. An expanded instruction set and a full 16-bit address bus allowed the 8080 to access up to 64 KB of memory, quadrupling the capacity of its predecessor. A broader selection of support chips further enhanced its functionality. Many of these improvements stemmed from customer feedback, as designer Federico Faggin and others at Intel heard from industry about shortcomings in the 8008 architecture.

View the full Wikipedia page for Intel 8080

16-bit computing in the context of Data General Nova

The Nova is a series of 16-bit minicomputers released by the American company Data General. The Nova family was very popular in the 1970s and ultimately sold tens of thousands of units.

The first model, known simply as "Nova", was released in 1969. The Nova was packaged into a single 3U rack-mount case and had enough computing power to handle most simple tasks. The Nova became popular in science laboratories around the world. It was followed the next year by the SuperNOVA, which ran roughly four times as fast, making it the fastest minicomputer for several years.

View the full Wikipedia page for Data General Nova

16-bit computing in the context of Atari ST

Atari ST is a line of personal computers from Atari Corporation and the successor to the company's 8-bit computers. The initial model, the Atari 520ST, had limited release in April–June 1985, and was widely available in July. It was the first personal computer with a bitmapped color graphical user interface, using a version of Digital Research's GEM environment from February 1985. The Atari 1040ST, released in 1986 with 1 MB of memory, was the first home computer with a cost per kilobyte of RAM under US$1/KB.

After Jack Tramiel purchased the assets of the Atari, Inc. consumer division in 1984 to create Atari Corporation, the 520ST was designed in five months by a small team led by Shiraz Shivji. Alongside the Macintosh, Amiga, Apple IIGS, and Acorn Archimedes, the ST is part of a mid-1980s generation of computers with 16 or 16/32-bit processors, 256 KB or more of RAM, and mouse-controlled graphical user interfaces. "ST" officially stands for "Sixteen/Thirty-two", referring to the Motorola 68000's 16-bit external bus and 32-bit internals.

View the full Wikipedia page for Atari ST

16-bit computing in the context of Motorola 68000

The Motorola 68000 (sometimes shortened to Motorola 68k or m68k and usually pronounced "sixty-eight-thousand") is a 16/32-bit complex instruction set computer (CISC) microprocessor, introduced in 1979 by Motorola Semiconductor Products Sector.

The design implements a 32-bit instruction set, with 32-bit registers and a 16-bit internal data bus. The address bus is 24 bits and does not use memory segmentation, which made it easier to program for. Internally, it uses a 16-bit data arithmetic logic unit (ALU) and two 16-bit arithmetic units used mostly for addresses, and has a 16-bit external data bus. For this reason, Motorola termed it a 16/32-bit processor.
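
One practical consequence of the 24-bit address bus is that only the low 24 bits of a 32-bit address reach memory on the original 68000. The hypothetical C sketch below shows the masking this corresponds to and the resulting 16 MiB addressable range.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* The 68000 has 32-bit address registers but 24 external address
           lines, so the effective address is the value modulo 2^24. */
        uint32_t addr      = 0xDEADBEEF;          /* arbitrary 32-bit value */
        uint32_t effective = addr & 0x00FFFFFF;   /* what reaches the bus   */

        printf("register value:  0x%08X\n", (unsigned)addr);
        printf("address on bus:  0x%06X\n", (unsigned)effective);
        printf("addressable RAM: %u MiB\n", (unsigned)((1u << 24) >> 20));
        return 0;
    }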

View the full Wikipedia page for Motorola 68000