Virtual address space in the context of Instruction set


⭐ Core Definition: Virtual address space

In computing, a virtual address space (VAS) is the range of contiguous virtual memory locations, called virtual addresses, that an operating system makes available to a process for executing instructions and storing data, and which it maps onto the physical addresses of the computer's hardware memory. The range of virtual addresses usually starts at a low address and can extend to the highest address allowed by the computer's instruction set architecture and supported by the operating system's pointer size, typically 4 bytes on 32-bit and 8 bytes on 64-bit systems. This arrangement provides several benefits, including security through process isolation, provided each process is given a separate address space.
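
As a quick, hedged illustration (not part of the source text), the following C sketch prints the pointer width the compiler targets and the highest address that width could in principle express; real instruction set architectures and operating systems typically expose only part of that theoretical range.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The pointer width of the build bounds the theoretical extent of
     * a process's virtual address space: 4 bytes on a 32-bit build,
     * 8 bytes on a 64-bit build. */
    size_t ptr_bytes = sizeof(void *);
    unsigned bits = (unsigned)(ptr_bytes * 8);

    /* Highest address expressible with that many bits (theoretical only;
     * e.g. current x86-64 CPUs translate fewer than 64 bits). */
    uint64_t top = (bits >= 64) ? UINT64_MAX
                                : (((uint64_t)1 << bits) - 1);

    printf("pointer size: %zu bytes (%u-bit virtual addresses)\n",
           ptr_bytes, bits);
    printf("theoretical top of the virtual address space: 0x%llx\n",
           (unsigned long long)top);
    return 0;
}
```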


In this Dossier

Virtual address space in the context of Memory management

Memory management (also dynamic memory management, dynamic storage allocation, or dynamic memory allocation) is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free it for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway (multitasking) at any time.
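
A minimal C sketch of the allocate/use/free cycle described above; the buffer size and contents are illustrative only, and the allocator together with the operating system's virtual memory machinery decides where in the process's virtual address space the block actually lands.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t n = 1024;
    char *buf = malloc(n);          /* request n bytes at run time */
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    strcpy(buf, "dynamically allocated");
    printf("%s at virtual address %p\n", buf, (void *)buf);

    free(buf);                      /* release the block for reuse */
    return 0;
}
```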

Several methods have been devised to increase the effectiveness of memory management. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the size of the virtual address space beyond the available amount of RAM by paging or swapping to secondary storage. The quality of the virtual memory manager can have an extensive effect on overall system performance. Virtual memory lets a computer appear to have more memory than is physically present and allows multiple processes to share it.
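
To make the "larger than RAM" point concrete, here is a hedged sketch assuming a Linux-like system where mmap supports the MAP_ANONYMOUS and MAP_NORESERVE flags: it reserves a large stretch of virtual address space without touching it, so no physical frames are committed until pages are first written. Whether such a reservation succeeds depends on the OS's overcommit and paging policy, and the 64 GiB figure assumes a 64-bit build.

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* 64 GiB of virtual address space, far more than many machines have
     * as physical RAM. Pages are only backed when first written. */
    size_t len = (size_t)64 << 30;

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    printf("reserved %zu GiB of virtual memory at %p\n",
           (size_t)(len >> 30), p);

    munmap(p, len);
    return 0;
}
```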

View the full Wikipedia page for Memory management

Virtual address space in the context of Virtual memory

In computing, virtual memory, or virtual storage, is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory".

The computer's operating system, using a combination of hardware and software indirection, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit (MMU), automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities, utilizing, e.g., disk storage, to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.
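
The address-translation step can be mimicked with a toy single-level page table in a few lines of C. This is a didactic model only, not how any particular MMU or kernel organizes its tables; the 4 KiB page size, the 16-entry table, and the frame numbers are all made up for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS 12u                     /* 4 KiB pages */
#define PAGE_SIZE (1u << PAGE_BITS)
#define NUM_PAGES 16u                     /* tiny illustrative table */

/* page_table[v] holds the physical frame for virtual page v,
 * or UINT32_MAX if the page is not resident (a page fault). */
static uint32_t page_table[NUM_PAGES] = {
    [0] = 7, [1] = 3, [2] = UINT32_MAX, [3] = 12,
};

/* Split the virtual address into page number and offset, look up the
 * frame, and recombine - the job an MMU does in hardware. */
static int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpage  = vaddr >> PAGE_BITS;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);

    if (vpage >= NUM_PAGES || page_table[vpage] == UINT32_MAX)
        return -1;                        /* would trap to the OS */

    *paddr = (page_table[vpage] << PAGE_BITS) | offset;
    return 0;
}

int main(void)
{
    uint32_t vaddrs[] = { 0x0123, 0x1FF0, 0x204C };
    for (size_t i = 0; i < sizeof vaddrs / sizeof vaddrs[0]; i++) {
        uint32_t paddr;
        if (translate(vaddrs[i], &paddr) == 0)
            printf("virtual 0x%04x -> physical 0x%04x\n",
                   (unsigned)vaddrs[i], (unsigned)paddr);
        else
            printf("virtual 0x%04x -> page fault\n", (unsigned)vaddrs[i]);
    }
    return 0;
}
```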

View the full Wikipedia page for Virtual memory

Virtual address space in the context of Light-weight process

In computer operating systems, a light-weight process (LWP) is a means of achieving multitasking. In the traditional meaning of the term, as used in Unix System V and Solaris, an LWP runs in user space on top of a single kernel thread and shares its address space and system resources with other LWPs within the same process. Multiple user-level threads, managed by a thread library, can be placed on top of one or many LWPs, allowing multitasking to be done at the user level, which can bring performance benefits.

In some operating systems there is no separate LWP layer between kernel threads and user threads; user threads are implemented directly on top of kernel threads. In those contexts, the term "light-weight process" typically refers to kernel threads, while "threads" refers to user threads. On Linux, user threads are implemented by allowing certain processes to share resources, which sometimes leads to these processes being called "light-weight processes". Similarly, from SunOS version 4 onwards (prior to Solaris), "light-weight process" referred to user threads.
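
As a hedged illustration of what sharing an address space means in practice, the POSIX-threads sketch below (compile with -pthread; it uses the portable pthreads API, not the Solaris LWP interfaces themselves) has two threads update one global counter through the same virtual address, which is exactly the property that LWPs and kernel threads within a single process provide.

```c
#include <stdio.h>
#include <pthread.h>

/* Both threads run inside one process, so they see the same virtual
 * address space: the same global variable at the same address. */
static int shared_counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        shared_counter++;               /* same memory seen by both threads */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("shared_counter = %d (at virtual address %p)\n",
           shared_counter, (void *)&shared_counter);
    return 0;
}
```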

View the full Wikipedia page for Light-weight process