Is Virtual Memory Necessary



So my understanding is that every process has its own virtual memory space ranging from 0x0 to 0xFF....F. These virtual addresses correspond to addresses in physical memory (RAM). Why is this level of abstraction helpful? Why not just use the direct addresses?


I understand why paging is beneficial, but not virtual memory.

Collin

3 Answers

There are many reasons to do this:


  • If you have a compiled binary, each function has a fixed address in memory and the assembly instructions to call functions have that address hardcoded. If virtual memory didn't exist, two programs couldn't be loaded into memory and run at the same time, because they'd potentially need to have different functions at the same physical address.

  • If two or more programs are running at the same time (or are being context-switched between) and use direct addresses, a memory error in one program (for example, reading a bad pointer) could destroy memory being used by the other process, taking down multiple programs due to a single crash.

  • On a similar note, there's a security issue where a process could read sensitive data in another program by guessing what physical address it would be located at and just reading it directly.

  • If you try to combat the two above issues by paging out one process's memory whenever you switch to another, you incur a massive performance hit on every context switch, because you might have to write the entire contents of memory to disk and read the next process's memory back in.

  • Depending on the hardware, some memory addresses might be reserved for physical devices (for example, video RAM or memory-mapped external devices). If programs are compiled without knowing that those addresses are significant, they might misconfigure or even damage attached devices by reading and writing to their memory. Worse, if that memory is read-only or write-only, the program might write bits to an address expecting them to stay there and then read back different values.
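The isolation described in the bullets above can be observed directly with a fork: after the OS clones an address space, the same virtual address refers to private physical memory in each process, so a child's write never reaches the parent. A minimal POSIX-only sketch (`os.fork` is unavailable on Windows):

```python
import os

x = [0]  # after fork, both processes refer to "the same" virtual address

pid = os.fork()          # child gets a copy-on-write clone of this address space
if pid == 0:             # child process
    x[0] = 99            # write lands on the child's private physical page
    os._exit(0)

os.waitpid(pid, 0)       # wait for the child to finish
print(x[0])              # parent's copy is untouched: prints 0
```

With direct physical addressing there would be nothing to clone; the child's store would clobber the parent's data.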

Hope this helps!

templatetypedef

Short answer: Program code and data required for execution of a process must reside in main memory to be executed, but main memory may not be large enough to accommodate the needs of an entire process.

Two proposals:

(1) Using a very large main memory to alleviate any need for storage allocation: this is not feasible due to its very high cost.

(2) Virtual memory: It allows processes that may not be entirely in the memory to execute by means of automatic storage allocation upon request. The term virtual memory refers to the abstraction of separating LOGICAL memory--memory as seen by the process--from PHYSICAL memory--memory as seen by the processor. Because of this separation, the programmer needs to be aware of only the logical memory space while the operating system maintains two or more levels of physical memory space.
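The logical-to-physical separation is implemented by translating each address through a page table. A toy sketch of that translation, assuming a 4096-byte page size and a made-up page table:

```python
PAGE_SIZE = 4096
# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 42}

def translate(vaddr):
    """Split a virtual address into (page, offset) and map page -> frame."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        # A real OS would handle this fault by loading the page from disk.
        raise MemoryError("page fault: page %d not resident" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1 maps to frame 3, so this prints 0x3234
```

The process only ever sees the left-hand (logical) addresses; the operating system is free to place, move, or evict the right-hand (physical) frames at will.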

More:

Early computer programmers divided programs into sections that were transferred into main memory for a period of processing time. As higher level languages became popular, the efficiency of complex programs suffered from poor overlay systems. The problem of storage allocation became more complex.

Two theories for solving the problem of inefficient memory management emerged -- static and dynamic allocation. Static allocation assumes that the availability of memory resources and the memory reference string of a program can be predicted. Dynamic allocation relies on memory usage increasing and decreasing with actual program needs, not on predicting memory needs.

Program objectives and machine advancements in the '60s made the predictions required for static allocation difficult, if not impossible. Therefore, the dynamic allocation solution was generally accepted, but opinions about implementation were still divided.

One group believed the programmer should continue to be responsible for storage allocation, which would be accomplished by system calls to allocate or deallocate memory. The second group supported automatic storage allocation performed by the operating system, because of increasing complexity of storage allocation and emerging importance of multiprogramming.
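The first camp's approach, explicit allocation through system calls, survives in today's `mmap(2)` interface. A minimal sketch using Python's `mmap` wrapper to request an anonymous page directly from the OS (the anonymous mapping here is purely for illustration):

```python
import mmap

# Explicitly ask the OS for one page of memory (anonymous mapping,
# backed by no file) -- the system-call style of allocation.
page = mmap.mmap(-1, 4096)

page[:5] = b"hello"      # use the page like a mutable buffer
print(bytes(page[:5]))   # b'hello'

page.close()             # explicitly return the page to the OS
```

The second camp's automatic allocation won out for ordinary program data, but explicit mappings like this remain the tool of choice when a program needs precise control over its memory.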


In 1961, two groups proposed a one-level memory store. One proposal called for a very large main memory to alleviate any need for storage allocation. This solution was not possible due to very high cost. The second proposal is known as virtual memory.

Source: cne/modules/vm/green/defn.html

eeoohee

The main purpose of virtual memory is multitasking and running large programs. Using physical memory directly would be faster, but RAM is far more expensive per byte than the disk storage that backs virtual memory.

Good luck!

gpanich

Not the answer you're looking for? Browse other questions tagged memory, memory-management, operating-system or ask your own question.