Microprocessor in the context of Ball grid array




⭐ Core Definition: Microprocessor

A microprocessor is a computer processor for which the data processing logic and control is included on a single integrated circuit (IC), or a small number of ICs. The microprocessor contains the arithmetic, logic, and control circuitry required to perform the functions of a computer's central processing unit (CPU). The IC is capable of interpreting and executing program instructions and performing arithmetic operations. The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results (also in binary form) as output. Microprocessors contain both combinational logic and sequential digital logic, and operate on numbers and symbols represented in the binary number system.
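The clock-driven, register-based cycle described above can be sketched in a few lines. The four-instruction machine below is hypothetical, invented purely for illustration; real microprocessors implement far larger instruction sets directly in hardware.

```python
# Toy sketch of a fetch-decode-execute loop on a one-register machine.
# The ISA (LOAD/ADD/AND/HALT) is hypothetical, for illustration only.

def run(program, steps=100):
    """Execute a list of (opcode, operand) pairs; return the accumulator."""
    acc = 0   # accumulator register
    pc = 0    # program counter
    for _ in range(steps):            # each iteration models one clocked step
        if pc >= len(program):
            break
        op, arg = program[pc]         # fetch
        pc += 1
        if op == "LOAD":              # decode + execute
            acc = arg
        elif op == "ADD":             # arithmetic on binary data
            acc += arg
        elif op == "AND":             # combinational logic on binary data
            acc &= arg
        elif op == "HALT":
            break
    return acc

result = run([("LOAD", 0b1100), ("ADD", 1), ("AND", 0b0111), ("HALT", 0)])
# → 5 (0b1101 masked by 0b0111 gives 0b0101)
```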

The integration of a whole CPU onto a single or a few integrated circuits using very-large-scale integration (VLSI) greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by highly automated metal–oxide–semiconductor (MOS) fabrication processes, resulting in a relatively low unit price. Single-chip processors increase reliability because there are fewer electrical connections that can fail. As microprocessor designs improve, the cost of manufacturing a chip (with smaller components built on a semiconductor chip the same size) generally stays the same, according to Rock's law.


In this Dossier

Microprocessor in the context of CMOS

Complementary metal–oxide–semiconductor (CMOS, pronounced "sea-moss", /ˈsiːmɑːs/, /-ɒs/) is a type of metal–oxide–semiconductor field-effect transistor (MOSFET) fabrication process that uses complementary and symmetrical pairs of p-type and n-type MOSFETs for logic functions. CMOS technology is used for constructing integrated circuit (IC) chips, including microprocessors, microcontrollers, memory chips, and other digital logic circuits. CMOS overtook NMOS logic as the dominant MOSFET fabrication process for very large-scale integration (VLSI) chips in the 1980s, replacing earlier transistor–transistor logic (TTL) technology at the same time. CMOS has since remained the standard fabrication process for MOSFET semiconductor devices. As of 2011, 99% of IC chips, including most digital, analog and mixed-signal ICs, were fabricated using CMOS technology.

In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer. Bardeen's concept forms the basis of CMOS technology today. The CMOS process was presented by Fairchild Semiconductor's Frank Wanlass and Chih-Tang Sah at the International Solid-State Circuits Conference in 1963. Wanlass later filed US patent 3,356,858 for CMOS circuitry and it was granted in 1967. RCA commercialized the technology with the trademark "COS-MOS" in the late 1960s, forcing other manufacturers to find another name, leading to "CMOS" becoming the standard name for the technology by the early 1970s.

Two important characteristics of CMOS devices are high noise immunity and low static power consumption. Since one transistor of the MOSFET pair is always off, the series combination draws significant power only momentarily during switching between on and off states. Consequently, CMOS devices do not produce as much waste heat as other forms of logic, like NMOS logic or transistor–transistor logic (TTL), which normally have some standing current even when not changing state. These characteristics allow CMOS to integrate a high density of logic functions on a chip. It was primarily for this reason that CMOS became the most widely used technology to be implemented in VLSI chips.
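The low-static-power property described above can be seen in a toy model of a CMOS inverter, the simplest complementary pair. This is a logic-level sketch, not a circuit simulation.

```python
# Toy model of a CMOS inverter: a p-type and an n-type MOSFET share one
# gate input, so exactly one of them conducts in either steady state.

def cmos_inverter(gate):
    pmos_on = (gate == 0)              # p-type conducts when the gate is low
    nmos_on = (gate == 1)              # n-type conducts when the gate is high
    out = 1 if pmos_on else 0          # output pulled to supply or to ground
    static_path = pmos_on and nmos_on  # True would mean supply-to-ground current
    return out, static_path

for gate in (0, 1):
    out, leaks = cmos_inverter(gate)
    assert out == 1 - gate   # logical inversion
    assert not leaks         # one transistor is always off: no static current path
```

Power is drawn only transiently, while the input switches between these two steady states.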

View the full Wikipedia page for CMOS
↑ Return to Menu

Microprocessor in the context of Machine code

In computing, machine code is data encoded and structured to control a computer's central processing unit (CPU) via its programmable interface. A computer program consists primarily of sequences of machine-code instructions. Machine code is classified as native with respect to its host CPU since it is the language that the CPU interprets directly. A software interpreter is a virtual machine that processes virtual machine code.

A machine-code instruction causes the CPU to perform a specific task, such as loading a value from memory into a register, performing an arithmetic or logic operation, or jumping to another instruction in the program.
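As a concrete illustration of native instructions, here is a decoder for a hypothetical 8-bit encoding (high nibble selects the operation, low nibble is a small operand); real instruction formats such as x86 or ARM are far more involved.

```python
# Hypothetical machine-code format, for illustration only:
# one byte per instruction; high nibble = opcode, low nibble = operand.
OPCODES = {0x1: "LOAD", 0x2: "ADD", 0x3: "STORE", 0xF: "HALT"}

def decode(code: bytes):
    """Translate raw machine code into (mnemonic, operand) pairs."""
    return [(OPCODES[byte >> 4], byte & 0x0F) for byte in code]

decode(bytes([0x17, 0x23, 0xF0]))
# → [('LOAD', 7), ('ADD', 3), ('HALT', 0)]
```

Each byte maps directly to one CPU action, which is what makes the code "native" to its processor.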

View the full Wikipedia page for Machine code
↑ Return to Menu

Microprocessor in the context of GoTo telescope

In amateur astronomy, "GoTo" refers to a type of telescope mount and related software that can automatically point a telescope at astronomical objects that the user selects. Both axes of a GoTo mount are driven by a motor and controlled by a computer. It may be either a microprocessor-based integrated controller or an external personal computer. This differs from the single-axis semi-automated tracking of a traditional clock-drive equatorial mount.

The user can command the mount to point the telescope to the celestial coordinates that the user inputs, or to objects in a pre-programmed database including ones from the Messier catalogue, the New General Catalogue, and even major Solar System bodies (the Sun, Moon, and planets).
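The pointing computation a GoTo controller performs reduces to spherical trigonometry. The sketch below converts declination, site latitude, and hour angle into altitude using the standard formula; a real mount would also compute azimuth and apply alignment and refraction corrections.

```python
import math

def altitude_deg(dec_deg, lat_deg, hour_angle_deg):
    """Altitude of a target from its declination, the site latitude,
    and the hour angle (angle from the meridian), all in degrees."""
    dec, lat, ha = (math.radians(v) for v in (dec_deg, lat_deg, hour_angle_deg))
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    sin_alt = max(-1.0, min(1.0, sin_alt))   # guard against floating-point drift
    return math.degrees(math.asin(sin_alt))

# A target on the celestial equator, observed from the equator, 60° of
# hour angle from the meridian, stands about 30° above the horizon:
altitude_deg(0.0, 0.0, 60.0)   # → ~30.0
```

The mount's computer runs this kind of conversion continuously, turning catalogue coordinates into motor positions for both axes.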

View the full Wikipedia page for GoTo telescope
↑ Return to Menu

Microprocessor in the context of Calculator

A calculator is typically a portable electronic device used to perform calculations, ranging from basic arithmetic to complex mathematics.

The first solid-state electronic calculator was created in the early 1960s. Pocket-sized devices became available in the 1970s, especially after Intel developed the 4004, the first microprocessor, for the Japanese calculator company Busicom. Modern electronic calculators vary from cheap, give-away, credit-card-sized models to sturdy desktop models with built-in printers. They became popular in the mid-1970s as the incorporation of integrated circuits reduced their size and cost. By the end of that decade, prices had dropped to the point where a basic calculator was affordable to most, and they became common in schools.

View the full Wikipedia page for Calculator
↑ Return to Menu

Microprocessor in the context of Semiconductor device fabrication

Semiconductor device fabrication is the process used to manufacture semiconductor devices, typically integrated circuits (ICs) such as microprocessors, microcontrollers, and memories (such as RAM and flash memory). It is a multiple-step photolithographic and physico-chemical process (with steps such as thermal oxidation, thin-film deposition, ion implantation, etching) during which electronic circuits are gradually created on a wafer, typically made of pure single-crystal semiconducting material. Silicon is almost always used, but various compound semiconductors are used for specialized applications. Steps such as etching and photolithography can be used to manufacture other devices, such as LCD and OLED displays.

The fabrication process is performed in highly specialized semiconductor fabrication plants, also called foundries or "fabs", with the central part being the "clean room". In more advanced semiconductor devices, such as modern 14/10/7 nm nodes, fabrication can take up to 15 weeks, with 11–13 weeks being the industry average. Production in advanced fabrication facilities is completely automated, with automated material handling systems taking care of the transport of wafers from machine to machine.

View the full Wikipedia page for Semiconductor device fabrication
↑ Return to Menu

Microprocessor in the context of Wafer fabrication

Wafer fabrication is a procedure composed of many repeated sequential processes to produce complete electrical or photonic circuits on semiconductor wafers in a semiconductor device fabrication process. Examples include production of radio frequency (RF) amplifiers, LEDs, optical computer components, and microprocessors for computers. Wafer fabrication is used to build components with the necessary electrical structures.

The main process begins with integrated circuit design, where electrical engineers design the circuit, define its functions, and specify the signals, inputs/outputs, and voltages needed. These electrical circuit specifications are entered into electrical circuit design software, such as SPICE, and then imported into circuit layout programs, which are similar to those used for computer-aided design. This is necessary for the layers to be defined for photomask production. The resolution of the circuits increases rapidly with each step in design, as the scale of the circuits at the start of the design process is already measured in fractions of micrometers. Each step thus increases circuit density for a given area.

View the full Wikipedia page for Wafer fabrication
↑ Return to Menu

Microprocessor in the context of Desktop computer

A desktop computer, often abbreviated as desktop, is a personal computer designed for regular use at a stationary location on or near a desk (as opposed to a portable computer) due to its size and power requirements. The most common configuration has a case that houses the power supply, motherboard (a printed circuit board with a microprocessor as the central processing unit, memory, bus, certain peripherals and other electronic components), disk storage (usually one or more hard disk drives, solid-state drives, optical disc drives, and in early models floppy disk drives); a keyboard and mouse for input; and a monitor, speakers, and, often, a printer for output. The case may be oriented horizontally or vertically and placed either underneath, beside, or on top of a desk.

Desktop computers with their cases oriented vertically are referred to as towers. As the majority of cases offered since the mid-1990s are in this form factor, the term desktop has been retronymically used to refer to modern cases offered in the traditional horizontal orientation.

View the full Wikipedia page for Desktop computer
↑ Return to Menu

Microprocessor in the context of MEMS

MEMS (micro-electromechanical systems) is the technology of microscopic devices incorporating both electronic and moving parts. MEMS are made up of components between 1 and 100 micrometres in size (i.e., 0.001 to 0.1 mm), and MEMS devices generally range in size from 20 micrometres to a millimetre (i.e., 0.02 to 1.0 mm), although components arranged in arrays (e.g., digital micromirror devices) can be more than 1000 mm². They usually consist of a central unit that processes data (an integrated circuit chip such as a microprocessor) and several components that interact with the surroundings (such as microsensors).

Because of the large surface area to volume ratio of MEMS, forces produced by ambient electromagnetism (e.g., electrostatic charges and magnetic moments), and fluid dynamics (e.g., surface tension and viscosity) are more important design considerations than with larger scale mechanical devices. MEMS technology is distinguished from molecular nanotechnology or molecular electronics in that the latter two must also consider surface chemistry.
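The scaling argument above can be made concrete with a cube of side L: its surface-area-to-volume ratio is 6/L, so shrinking a part by a factor of 1000 raises the ratio by the same factor. Illustrative geometry only, but the trend is why surface forces dominate at MEMS scale.

```python
def surface_to_volume(side_m):
    """Surface-area-to-volume ratio of a cube, in 1/metres: 6*L^2 / L^3 = 6/L."""
    return 6 * side_m**2 / side_m**3

macro = surface_to_volume(0.1)      # a 10 cm part  → 60 per metre
mems = surface_to_volume(100e-6)    # a 100 µm part → 60,000 per metre
mems / macro                        # → ~1000: surface effects scale up as size shrinks
```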

View the full Wikipedia page for MEMS
↑ Return to Menu

Microprocessor in the context of Clock speed

Clock rate or clock speed in computing typically refers to the frequency at which the clock generator of a processor can generate pulses used to synchronize the operations of its components. It is used as an indicator of the processor's speed. Clock rate is measured in the SI unit of frequency hertz (Hz).

The clock rate of the first generation of computers was measured in hertz or kilohertz (kHz), while the first personal computers from the 1970s through the 1980s had clock rates measured in megahertz (MHz). In the 21st century the speed of modern CPUs is commonly advertised in gigahertz (GHz). This metric is most useful when comparing processors within the same family, holding constant other features that may affect performance.
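Since clock rate is a frequency, the time available for one cycle is simply its reciprocal. The figures below are illustrative.

```python
def cycle_time_ns(clock_hz):
    """Duration of one clock cycle in nanoseconds (period = 1/frequency)."""
    return 1e9 / clock_hz

cycle_time_ns(1_000_000)       # 1 MHz → 1000.0 ns per cycle
cycle_time_ns(3_000_000_000)   # 3 GHz → ~0.333 ns per cycle
```

This reciprocal relationship is why each jump from kHz to MHz to GHz shrank the cycle time a thousandfold.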

View the full Wikipedia page for Clock speed
↑ Return to Menu


Microprocessor in the context of Microcontroller

A microcontroller (MC, uC, or μC) or microcontroller unit (MCU) is a small computer on a single integrated circuit. A microcontroller contains one or more processor cores along with memory and programmable input/output peripherals. Program memory in the form of NOR flash, OTP ROM, or ferroelectric RAM is also often included on the chip, as well as a small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications consisting of various discrete chips.

In modern terminology, a microcontroller is similar to, but less sophisticated than, a system on a chip (SoC). A SoC may include a microcontroller as one of its components but usually integrates it with advanced peripherals like a graphics processing unit (GPU), a Wi-Fi module, or one or more coprocessors.
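Firmware on a microcontroller typically drives its programmable I/O peripherals by setting and clearing bits in registers. The register layout below (bit 5 controlling an LED pin) is hypothetical, chosen only to show the masking idiom.

```python
LED_PIN = 5   # hypothetical: bit 5 of a GPIO output register drives an LED

def set_bit(reg, bit):
    return reg | (1 << bit)    # drive the pin high

def clear_bit(reg, bit):
    return reg & ~(1 << bit)   # drive the pin low

gpio = 0b0000_0000
gpio = set_bit(gpio, LED_PIN)     # → 0b0010_0000, LED on
gpio = clear_bit(gpio, LED_PIN)   # → 0b0000_0000, LED off
```

On real hardware the register is a fixed memory address rather than a Python variable, but the bit manipulation is the same.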

View the full Wikipedia page for Microcontroller
↑ Return to Menu

Microprocessor in the context of Very large-scale integration

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining millions or billions of MOS transistors onto a single chip. VLSI began in the 1970s when MOS integrated circuit (metal oxide semiconductor) chips were developed and then widely adopted, enabling complex semiconductor and telecommunications technologies. Microprocessors and memory chips are VLSI devices.

Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI enables IC designers to add all of these into one chip.

View the full Wikipedia page for Very large-scale integration
↑ Return to Menu

Microprocessor in the context of Transistor count

The transistor count is the number of transistors in an electronic device (typically on a single substrate or silicon die). It is the most common measure of integrated circuit complexity (although the majority of transistors in modern microprocessors are contained in cache memories, which consist mostly of the same memory cell circuits replicated many times). The rate at which MOS transistor counts have increased generally follows Moore's law, which observes that transistor count doubles approximately every two years. However, being directly proportional to the area of a die, transistor count does not represent how advanced the corresponding manufacturing technology is. A better indication of this is transistor density which is the ratio of a semiconductor's transistor count to its die area.
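The two measures in this paragraph are easy to state numerically. The doubling-every-two-years projection is the rule of thumb from Moore's law as stated above; the starting values are illustrative, not figures for any real chip.

```python
def projected_count(count_now, years, doubling_period_years=2.0):
    """Moore's-law style projection: the count doubles every period."""
    return count_now * 2 ** (years / doubling_period_years)

def transistor_density(count, die_area_mm2):
    """Transistors per mm^2: a better gauge of how advanced the process is."""
    return count / die_area_mm2

projected_count(1_000_000, 10)            # five doublings → 32,000,000
transistor_density(10_000_000_000, 100)   # → 1e8 transistors per mm^2
```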

View the full Wikipedia page for Transistor count
↑ Return to Menu

Microprocessor in the context of Digital signal processor

A digital signal processor (DSP) is a specialized microprocessor chip, with its architecture optimized for the operational needs of digital signal processing. DSPs are fabricated on metal–oxide–semiconductor (MOS) integrated circuit chips. They are widely used in audio signal processing, telecommunications, digital image processing, radar, sonar and speech recognition systems, and in common consumer electronic devices such as mobile phones, disk drives and high-definition television (HDTV) products.

The goal of a DSP is usually to measure, filter or compress continuous real-world analog signals. Most general-purpose microprocessors can also execute digital signal processing algorithms successfully, but may not be able to keep up with such processing continuously in real-time. Also, dedicated DSPs usually have better power efficiency, thus they are more suitable in portable devices such as mobile phones because of power consumption constraints. DSPs often use special memory architectures that are able to fetch multiple data or instructions at the same time.
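The multiply-accumulate pattern a DSP is optimized for shows up even in the simplest digital filter. Below is a direct-form FIR filter in plain Python, here a three-tap moving average smoothing a spike; a DSP executes this inner loop in dedicated hardware, often fetching data and coefficients simultaneously.

```python
def fir_filter(samples, taps):
    """Direct-form FIR filter: each output is a weighted sum of recent inputs."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, tap in enumerate(taps):   # the multiply-accumulate inner loop
            if n - k >= 0:
                acc += tap * samples[n - k]
        out.append(acc)
    return out

# A 3-tap moving average smooths a spike in the signal:
fir_filter([0, 3, 6, 3, 0], [1/3, 1/3, 1/3])
# → approximately [0.0, 1.0, 3.0, 4.0, 3.0]
```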

View the full Wikipedia page for Digital signal processor
↑ Return to Menu

Microprocessor in the context of History of video games

The history of video games began in the 1950s and 1960s as computer scientists began designing simple games and simulations on minicomputers and mainframes. Spacewar! was developed by Massachusetts Institute of Technology (MIT) student hobbyists in 1962 as one of the first such games on a video display. The first consumer video game hardware was released in the early 1970s. The first home video game console was the Magnavox Odyssey, and the first arcade video games were Computer Space and Pong. After its home console conversions, numerous companies sprang up to capture Pong's success in both the arcade and the home by cloning the game, causing a series of boom and bust cycles due to oversaturation and lack of innovation.

By the mid-1970s, low-cost programmable microprocessors replaced the discrete transistor–transistor logic circuitry of early hardware, and the first ROM cartridge-based home consoles arrived, including the Atari Video Computer System (VCS). Coupled with rapid growth in the golden age of arcade video games, including Space Invaders and Pac-Man, the home console market also flourished. The 1983 video game crash in the United States was characterized by a flood of too many games, often of poor quality or clones of existing titles, and the sector saw competition from inexpensive personal computers and new types of games being developed for them. The crash prompted Japan's video game industry to take leadership of the market, which had suffered only minor impacts from the crash. Nintendo released its Nintendo Entertainment System in the United States in 1985, helping the failing video game sector rebound. The latter part of the 1980s and early 1990s saw video games driven by improvements and standardization in personal computers and by the console war between Nintendo and Sega as they fought for market share in the United States. The first major handheld video game consoles appeared in the 1990s, led by Nintendo's Game Boy platform.

View the full Wikipedia page for History of video games
↑ Return to Menu

Microprocessor in the context of Second generation of video game consoles

In the history of video games, the second-generation era refers to computer and video games, video game consoles, and handheld video game consoles available from 1976 to 1992. Notable platforms of the second generation include the Fairchild Channel F, Atari 2600, Intellivision, Odyssey 2, and ColecoVision. The generation began in November 1976 with the release of the Fairchild Channel F. This was followed by the Atari 2600 in 1977, the Magnavox Odyssey² in 1978, the Intellivision in 1979, and then the Emerson Arcadia 2001, ColecoVision, Atari 5200, and Vectrex, all in 1982. By the end of the era, there were over 15 different consoles. It coincided with, and was partly fueled by, the golden age of arcade video games. The generation also saw the entry of handheld consoles, chiefly Nintendo's foray into gaming, guided by the Blue Ocean philosophy of Gunpei Yokoi, and the release of the Game & Watch in 1980.

This peak era of popularity and innovation for the medium resulted in many games for second-generation home consoles being ports of arcade games. Space Invaders, the first "killer app" arcade game to be ported, was released in 1980 for the Atari 2600, though Atari had already ported its own arcade games to the 2600. Coleco packaged Nintendo's Donkey Kong with the ColecoVision when it was released in August 1982.

Built-in games, like those from the first generation, saw limited use during this era. Though the first generation Magnavox Odyssey had put games on cartridge-like circuit cards, the games had limited functionality and required TV screen overlays and other accessories to be fully functional. More advanced cartridges, which contained the entire game experience, were developed for the Fairchild Channel F, and most video game systems adopted similar technology. The first system of the generation and some others, such as the RCA Studio II, still came with built-in games while also being able to use cartridges. The popularity of game cartridges grew after the release of the Atari 2600. From the late 1970s to the mid-1990s, most home video game systems used cartridges until the technology was replaced by optical discs. The Fairchild Channel F was also the first console to use a microprocessor, which was the driving technology that allowed the consoles to use cartridges. Other technology such as screen resolution, color graphics, audio, and AI simulation was also improved during this era. The generation also saw the first handheld game cartridge system, the Microvision, which was released by toy company Milton Bradley in 1979.

View the full Wikipedia page for Second generation of video game consoles
↑ Return to Menu