Main memory in the context of Page cache

⭐ Core Definition: Main memory

Computer data storage or digital data storage is the retention of digital data via technology consisting of computer components and recording media. Digital data storage is a core function and fundamental component of computers.

Generally, the faster, volatile storage components are referred to as "memory", while the slower, persistent components are referred to as "storage". In the von Neumann architecture, the central processing unit (CPU) consists of two main parts: the control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. In practice, almost all computers use a memory hierarchy, which puts fast but small memory close to the CPU and slower but larger storage further away.

👉 Main memory in the context of Page cache

In computing, a page cache, sometimes also called disk cache, is a transparent cache for the pages originating from a secondary storage device such as a hard disk drive (HDD) or a solid-state drive (SSD). The operating system keeps a page cache in otherwise unused portions of the main memory (RAM), resulting in quicker access to the contents of cached pages and overall performance improvements. A page cache is implemented in kernels that use paged memory management, and is mostly transparent to applications.

Usually, all physical memory not directly allocated to applications is used by the operating system for the page cache. Since the memory would otherwise be idle and is easily reclaimed when applications request it, there is generally no associated performance penalty and the operating system might even report such memory as "free" or "available".
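The effect is easy to observe by timing the same large file read twice: the first read may have to fetch pages from the storage device, while the repeat is typically served from the page cache in RAM. The sketch below is illustrative only; the file path is a hypothetical placeholder and the timings will vary by system.

```python
import time

# Hypothetical path to a reasonably large existing file; any file of a few
# hundred megabytes will do (this path is a placeholder, not a real default).
PATH = "/var/tmp/sample.bin"

def timed_read(path):
    """Read the whole file in 1 MiB chunks and return the elapsed time."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):
            pass
    return time.perf_counter() - start

cold = timed_read(PATH)   # may have to fetch pages from the storage device
warm = timed_read(PATH)   # typically served from the page cache in RAM
print(f"first read:  {cold:.3f} s")
print(f"second read: {warm:.3f} s (page cache hit)")
```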

In this Dossier

Main memory in the context of Central processing unit

A central processing unit (CPU), also called a central processor, main processor, or just processor, is the primary processor in a given computer. Its electronic circuitry executes instructions of a computer program, such as arithmetic, logic, controlling, and input/output (I/O) operations. This role contrasts with that of external components, such as main memory and I/O circuitry, and specialized coprocessors such as graphics processing units (GPUs).

The form, design, and implementation of CPUs have changed over time, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic–logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching (from memory), decoding and execution (of instructions) by directing the coordinated operations of the ALU, registers, and other components. Modern CPUs devote much of their semiconductor area to caches and instruction-level parallelism to increase performance, and to CPU modes to support operating systems and virtualization.
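The fetch-decode-execute cycle that the control unit orchestrates can be sketched as a toy register machine. The three-instruction ISA, register names, and program below are invented for illustration and do not correspond to any real processor.

```python
# A toy CPU: the control loop fetches an instruction, decodes it, and the
# "ALU" executes it against a small register file and a flat data memory.

memory = {0: 7, 1: 35, 2: 0}             # data memory (address -> value)
registers = {"r0": 0, "r1": 0}            # register file
program = [                               # instruction memory
    ("LOAD", "r0", 0),                    # r0 <- memory[0]
    ("LOAD", "r1", 1),                    # r1 <- memory[1]
    ("ADD", "r0", "r1"),                  # r0 <- r0 + r1 (ALU operation)
    ("STORE", "r0", 2),                   # memory[2] <- r0
    ("HALT",),
]

pc = 0                                    # program counter
while True:
    instr = program[pc]                   # fetch
    op = instr[0]                         # decode
    if op == "HALT":
        break
    if op == "LOAD":
        _, reg, addr = instr
        registers[reg] = memory[addr]     # execute: memory -> register
    elif op == "ADD":
        _, dst, src = instr
        registers[dst] += registers[src]  # execute: ALU adds two registers
    elif op == "STORE":
        _, reg, addr = instr
        memory[addr] = registers[reg]     # execute: register -> memory
    pc += 1                               # control unit advances to the next instruction

print(memory[2])                          # 42
```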

View the full Wikipedia page for Central processing unit

Main memory in the context of S/360

The IBM System/360 (S/360) is a family of computer systems announced by IBM on April 7, 1964, and delivered between 1965 and 1978. System/360 was the first family of computers designed to cover both commercial and scientific applications and a complete range of sizes, from small, entry-level machines to large mainframes. The design distinguished between architecture and implementation, allowing IBM to release a suite of compatible designs at different prices. All models except the partially compatible Model 44 and the most expensive systems use microcode to implement the instruction set, which features 8-bit byte addressing and fixed-point binary, fixed-point decimal, and hexadecimal floating-point arithmetic. The System/360 family introduced IBM's Solid Logic Technology (SLT), which packed more transistors onto a circuit card, allowing more powerful but smaller computers, but did not include integrated circuits, which IBM considered too immature.

System/360's chief architect was Gene Amdahl, and the project was managed by Fred Brooks, responsible to Chairman Thomas J. Watson Jr. The commercial release was piloted by another of Watson's lieutenants, John R. Opel, who managed the 1964 launch. The slowest System/360 model announced in 1964, the Model 30, could perform up to 34,500 instructions per second, with memory from 8 to 64 KB. High-performance models came later; the 1967 IBM System/360 Model 91 could execute up to 16.6 million instructions per second. The larger 360 models could have up to 8 MB of main memory, though that much memory was unusual; a large installation might have as little as 256 KB of main storage, but 512 KB, 768 KB or 1024 KB was more common. Up to 8 megabytes of slower (8 microsecond) Large Capacity Storage (LCS) was also available for some models.

View the full Wikipedia page for S/360

Main memory in the context of Computer memory

Computer memory stores information, such as data and programs, for immediate use in the computer. The term memory is often synonymous with the terms RAM, main memory, or primary storage. Archaic synonyms for main memory include core (for magnetic core memory) and store.

Main memory operates at a high speed compared to mass storage which is slower but less expensive per bit and higher in capacity. Besides storing opened programs and data being actively processed, computer memory serves as a mass storage cache and write buffer to improve both reading and writing performance. Operating systems typically borrow RAM capacity for caching so long as it is not needed by running software. If needed, contents of the computer memory can be transferred to storage; a common way of doing this is through a memory management technique called virtual memory.
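On Linux (an assumption here; other operating systems expose similar accounting differently), this borrowing is visible in /proc/meminfo, which distinguishes memory that is completely unused from page-cache memory the kernel can reclaim for applications. A minimal sketch:

```python
# Linux-only sketch: read /proc/meminfo and compare truly free memory with
# memory borrowed for the page cache but still counted as available.
def meminfo():
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.split()[0])   # values are reported in kB
    return fields

info = meminfo()
print(f"MemFree:      {info['MemFree']:>10} kB  (completely unused)")
print(f"Cached:       {info['Cached']:>10} kB  (page cache, reclaimable)")
print(f"MemAvailable: {info['MemAvailable']:>10} kB  (estimate incl. reclaimable cache)")
```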

View the full Wikipedia page for Computer memory

Main memory in the context of Processor register

A processor register is a quickly accessible location available to a computer's processor. Registers usually consist of a small amount of fast storage, although some registers have specific hardware functions, and may be read-only or write-only. In computer architecture, registers are typically addressed by mechanisms other than main memory, but may in some cases be assigned a memory address, e.g. on the DEC PDP-10 and ICT 1900.

Almost all computers, whether load/store architecture or not, load items of data from a larger memory into registers where they are used for arithmetic operations, bitwise operations, and other operations, and are manipulated or tested by machine instructions. Manipulated items are then often stored back to main memory, either by the same instruction or by a subsequent one. Modern processors use either static or dynamic random-access memory (RAM) as main memory, with the latter usually accessed via one or more cache levels.

View the full Wikipedia page for Processor register

Main memory in the context of Computer multitasking

In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to end. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state (partial results, memory contents and computer register contents) and loading the saved state of another program and transferring control to it. This "context switch" may be initiated at fixed time intervals (pre-emptive multitasking), or the running program may be coded to signal to the supervisory software when it can be interrupted (cooperative multitasking).

Multitasking does not require parallel execution of multiple tasks at exactly the same time; instead, it allows more than one task to advance over a given period of time. Even on multiprocessor computers, multitasking allows many more tasks to be run than there are CPUs.
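Cooperative multitasking in particular can be sketched with Python generators: each task runs until it voluntarily yields, at which point its state is saved (implicitly, in the suspended generator) and control passes to the next task in a round-robin ready queue. This is an analogy for illustration, not how an operating system scheduler is actually implemented.

```python
from collections import deque

def task(name, steps):
    """A task that does a little work, then yields control back to the scheduler."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                        # cooperative: "I can be interrupted here"

# Round-robin scheduler: resume each task until its next yield, then switch.
ready = deque([task("A", 3), task("B", 2), task("C", 3)])
while ready:
    current = ready.popleft()
    try:
        next(current)                # context switch: restore and run the task
        ready.append(current)        # not finished yet, back into the ready queue
    except StopIteration:
        pass                         # task completed; drop it
```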

View the full Wikipedia page for Computer multitasking

Main memory in the context of Storage tube

Storage tubes are a class of cathode-ray tubes (CRTs) that are designed to hold an image for a long period of time, typically as long as power is supplied to the tube.

A specialized type of storage tube, the Williams tube, was used as a main memory system on a number of early computers, from the late 1940s into the early 1950s. They were replaced with other technologies, notably core memory, starting in the 1950s.

View the full Wikipedia page for Storage tube

Main memory in the context of CPU cache

A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) of accessing data from the main memory. A cache is a smaller, faster memory, located closer to a processor core, which stores copies of the data from frequently used main memory locations, avoiding the need to always refer to main memory, which may be tens to hundreds of times slower to access.

Cache memory is typically implemented with static random-access memory (SRAM), which requires multiple transistors to store a single bit. This makes it expensive in terms of the area it takes up, and in modern CPUs the cache is typically the largest part by chip area. The size of the cache needs to be balanced with the general desire for smaller chips which cost less. Some modern designs implement some or all of their cache using the physically smaller eDRAM, which is slower to use than SRAM but allows larger amounts of cache for any given amount of chip area.
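The benefit is often summarized as the average memory access time, AMAT = hit time + miss rate × miss penalty. The figures below are illustrative assumptions, not measurements of any particular CPU:

```python
# Average memory access time: AMAT = hit_time + miss_rate * miss_penalty.
# All numbers are illustrative assumptions, not real CPU figures.
hit_time_ns     = 1.0    # time to access the cache on a hit
miss_rate       = 0.05   # fraction of accesses that miss the cache
miss_penalty_ns = 100.0  # extra time to fetch the data from main memory

amat_ns = hit_time_ns + miss_rate * miss_penalty_ns
print(f"AMAT with cache:     {amat_ns:.1f} ns")          # 6.0 ns
print(f"Main memory latency: {miss_penalty_ns:.1f} ns")  # roughly the cost of every access without a cache
```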

View the full Wikipedia page for CPU cache

Main memory in the context of Data redundancy

In computer main memory, auxiliary storage and computer buses, data redundancy is the existence of data that is additional to the actual data and permits correction of errors in stored or transmitted data. The additional data can simply be a complete copy of the actual data (a type of repetition code), or only select pieces of data that allow detection of errors and reconstruction of lost or damaged data up to a certain level.

For example, by including computed check bits, ECC memory is capable of detecting and correcting single-bit errors within each memory word, while RAID 1 combines two hard disk drives (HDDs) into a logical storage unit that allows stored data to survive a complete failure of one drive. Data redundancy can also be used as a measure against silent data corruption; for example, file systems such as Btrfs and ZFS use data and metadata checksumming in combination with copies of stored data to detect silent data corruption and repair its effects.
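A minimal sketch of how computed check bits can locate and correct a single flipped bit, using a Hamming(7,4) code over one nibble. Real ECC memory uses wider SECDED codes per memory word, so this only illustrates the principle:

```python
# Hamming(7,4): three computed check bits protect four data bits, so any
# single flipped bit in the 7-bit codeword can be located and corrected.

def encode(d):                        # d = [d1, d2, d3, d4], each 0 or 1
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def decode(c):                        # c = 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # parity over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # parity over positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s3  # 0 means no single-bit error detected
    if error_pos:
        c[error_pos - 1] ^= 1         # correct the flipped bit in place
    return [c[2], c[4], c[5], c[6]]   # recover d1..d4

word = [1, 0, 1, 1]
stored = encode(word)
stored[5] ^= 1                        # simulate a single-bit error in the stored word
assert decode(stored) == word         # the error is located and corrected
```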

View the full Wikipedia page for Data redundancy

Main memory in the context of Write buffer

A write buffer is a type of data buffer that can be used to hold data being written from the cache to main memory or to the next cache in the memory hierarchy to improve performance and reduce latency. It is used in certain CPU cache architectures like Intel's x86 and AMD64. In multi-core systems, write buffers destroy sequential consistency. Some software disciplines, like C11's data-race-freedom, are sufficient to regain a sequentially consistent view of memory.

A variation of write-through caching is called buffered write-through.
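A toy model of the idea, not a description of any real CPU's store buffer: writes are absorbed by a small buffer and retired to the slower level later, while reads check the buffer first so the writer still sees its own most recent stores.

```python
from collections import OrderedDict

class WriteBuffer:
    """Toy write buffer sitting between a writer and a slower 'memory'."""

    def __init__(self, memory, capacity=4):
        self.memory = memory           # the slower next level (dict: addr -> value)
        self.capacity = capacity
        self.pending = OrderedDict()   # buffered writes, oldest first

    def write(self, addr, value):
        self.pending[addr] = value     # absorb the write without stalling
        self.pending.move_to_end(addr)
        if len(self.pending) > self.capacity:
            self.drain_one()           # buffer full: retire the oldest write

    def read(self, addr):
        if addr in self.pending:       # forward the buffered value to the reader
            return self.pending[addr]
        return self.memory.get(addr, 0)

    def drain_one(self):
        addr, value = self.pending.popitem(last=False)
        self.memory[addr] = value      # the write finally reaches the next level

ram = {}
wb = WriteBuffer(ram)
wb.write(0x10, 42)
print(wb.read(0x10), ram.get(0x10))    # 42 None -- visible locally, not yet in memory
```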

View the full Wikipedia page for Write buffer

Main memory in the context of Booting

In computing, booting is the process of starting a computer as initiated via hardware such as a physical button on the computer or by a software command. After it is switched on, a computer's central processing unit (CPU) has no software in its main memory, so some process must load software into memory before it can be executed. This may be done by hardware or firmware in the CPU, or by a separate processor in the computer system. On some systems a power-on reset (POR) does not initiate booting and the operator must initiate booting after POR completes. IBM uses the term Initial Program Load (IPL) on some product lines.

Restarting a computer is also called rebooting, which can be "hard", e.g. after electrical power to the CPU is switched from off to on, or "soft", where the power is not cut. On some systems, a soft boot may optionally clear RAM to zero. Both hard and soft booting can be initiated by hardware, such as a button press, or by a software command. Booting is complete when the operative runtime system, typically the operating system and some applications, is attained.

View the full Wikipedia page for Booting

Main memory in the context of Memory management unit

A memory management unit (MMU), sometimes called paged memory management unit (PMMU), is a computer hardware unit that examines all references to memory, and translates the memory addresses being referenced, known as virtual memory addresses, into physical addresses in main memory.

In modern systems, programs generally have addresses that access the theoretical maximum memory of the computer architecture, 32 or 64 bits. The MMU maps the addresses from each program into separate areas in physical memory, which is generally much smaller than the theoretical maximum. This is possible because programs rarely use large amounts of memory at any one time.
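A minimal sketch of the address arithmetic an MMU performs, assuming 4 KiB pages and a single-level page table with made-up mappings; real MMUs use multi-level page tables and TLBs, so this only illustrates the translation step:

```python
PAGE_SIZE = 4096                      # assume 4 KiB pages

# Toy single-level page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 7: 9}

def translate(vaddr):
    """Translate a virtual address to a physical address, as an MMU would."""
    vpn = vaddr // PAGE_SIZE          # virtual page number
    offset = vaddr % PAGE_SIZE        # offset within the page is unchanged
    if vpn not in page_table:
        raise RuntimeError(f"page fault at virtual address {vaddr:#x}")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))         # virtual page 1 -> frame 2, so 0x2234
print(hex(translate(0x7abc)))         # virtual page 7 -> frame 9, so 0x9abc
```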

View the full Wikipedia page for Memory management unit