In the previous chapter we studied memory management techniques. We discussed the process address space and the three types of addresses, i.e. symbolic, relative, and physical, along with dynamic and static loading and linking. Swapping, memory allocation (both single and multiple partitions), and fragmentation (both internal and external) were the other major topics. Finally, the paging and segmentation techniques of memory management were discussed in detail. In this article I will discuss virtual memory in operating systems.

What is Virtual Memory?

A computer can address more memory than the amount of physical memory installed on the system. This extra addressable memory is what we call virtual memory. In this scheme, a section of the hard disk is set up to emulate the computer's RAM.

Purpose of Virtual Memory

The most visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory serves two purposes:

  • It extends the use of physical memory by using disk.
  • It provides memory protection, because each virtual address is translated to a physical address.

A program does not always need to be fully loaded in main memory. Some of the situations in which this holds are given below:

  • User-written error handling routines are used only when an error occurs in the data or computation.
  • Certain options and features of a program may be used rarely.
  • Many tables are assigned a fixed amount of address space even though only a small portion of the table is actually used.
  • The ability to execute a program that is only partially in memory offers many benefits.
  • Less I/O would be needed to load or swap each user program into memory.
  • A program would no longer be constrained by the amount of physical memory available.
  • Because each user program would take less physical memory, more programs could run at the same time, with a corresponding increase in CPU utilization and throughput.

Memory Management Unit

A memory management unit (MMU) is built into the hardware of general-purpose microprocessors. Virtual addresses are translated into physical addresses with the help of the MMU. A basic example of how this translation works is sketched below.
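
The following is a minimal sketch of MMU-style translation, assuming a 4 KB page size and a small illustrative page table; the mappings and addresses are made up purely for demonstration.

    PAGE_SIZE = 4096  # bytes per page (assumed for this example)

    # Hypothetical page table: virtual page number -> physical frame number
    page_table = {0: 5, 1: 9, 2: 3}

    def translate(virtual_address):
        """Split the virtual address into page number and offset, look the page
        up in the page table, and rebuild the physical address."""
        page_number = virtual_address // PAGE_SIZE
        offset = virtual_address % PAGE_SIZE
        if page_number not in page_table:
            raise LookupError("page fault: page %d is not in memory" % page_number)
        frame_number = page_table[page_number]
        return frame_number * PAGE_SIZE + offset

    print(translate(4100))  # page 1, offset 4 -> frame 9 -> 36868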

Demand Paging

Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system, where it is provided by demand segmentation.

Demand paging is similar to a paging system with swapping. Pages are not loaded in advance; they are loaded only when they are demanded, and processes reside in secondary memory. When a context switch occurs, the operating system does not copy any of the old program's pages out to disk or any of the new program's pages into main memory. Instead, execution of the new program starts as soon as its first page is loaded, and further pages are fetched as they are referenced.

Page Fault

If, during execution, a program references a page that is not available in memory because it was swapped out a short while ago, the processor treats this invalid memory reference as a page fault. Control is then transferred from the program to the operating system, which demands the page back into memory.
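
Below is a minimal sketch of a demand-paged access, assuming a toy page table (a Python dict standing in for the hardware table) and a hypothetical backing store that plays the role of the disk.

    PAGE_SIZE = 4096  # bytes per page (assumed)

    backing_store = {0: b"code...", 1: b"data...", 2: b"stack..."}  # illustrative disk contents
    page_table = {}   # virtual page number -> page contents currently loaded in "RAM"

    def access(virtual_address):
        """Return the page holding the address, loading it on demand if needed."""
        page_number = virtual_address // PAGE_SIZE
        if page_number not in page_table:                         # page fault: page not present
            print("page fault on page", page_number)
            page_table[page_number] = backing_store[page_number]  # OS brings it in from disk
        return page_table[page_number]

    access(0)      # first reference to page 0 faults and loads it
    access(100)    # same page, no fault
    access(8192)   # page 2 faults and is loaded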

Advantages:

Following are the advantages of demand paging:

  • Large virtual memory.
  • More efficient use of memory.
  • There is no limit on the degree of multiprogramming.

Disadvantages:

The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.

Page Replacement Algorithms:

Using a page replacement algorithm, the operating system decides:

  • which memory page to swap out;
  • which page to write to disk when a new page of memory needs to be allocated.

Paging takes place whenever a page fault occurs and a free page cannot be used to satisfy the allocation. There are two reasons for this:

  • No free pages are available.
  • The number of free pages is lower than the number required.

When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires waiting for the I/O to complete. This process determines the quality of the page replacement algorithm: the shorter the time spent waiting for page-ins, the better the algorithm.

A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to choose which pages to replace so that the total number of page misses is minimized, balanced against the cost of primary storage and the processor time of the algorithm itself. A number of page replacement algorithms are in use. An algorithm is evaluated by running it on a particular string of memory references and computing the number of page faults.

Reference String

The string of memory references is called a reference string. Reference strings can be generated artificially, or a given system can be traced and the address of each memory reference recorded. The latter choice produces a large amount of data, about which we note two things:

  • For a given page size, we consider only the page number, not the entire address.
  • If we have a reference to a page p, then any immediately following references to page p will never cause a page fault. Page p will be in memory after the first reference, so the immediately following references will not fault.
  • For example, consider the following sequence of addresses: 123, 215, 600, 1234, 76, 96.
  • If the page size is 100, the reference string is 1, 2, 6, 12, 0, 0, as sketched below.
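
The conversion from raw addresses to a reference string can be sketched as follows; the page size of 100 is simply the one used in the example above.

    def to_reference_string(addresses, page_size):
        """Keep only the page number of each address (integer division by the page size)."""
        return [address // page_size for address in addresses]

    print(to_reference_string([123, 215, 600, 1234, 76, 96], page_size=100))
    # -> [1, 2, 6, 12, 0, 0]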

First In First Out (FIFO) Algorithm:

The page selected for replacement is the oldest page in main memory. FIFO works as follows (see the sketch after this list):

  • It is easy to implement.
  • A list is maintained.
  • Pages are added at the head of the list and replaced from the tail.
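
A minimal sketch of FIFO replacement is given below, assuming 3 frames and reusing the illustrative reference string 1, 2, 6, 12, 0, 0 from the previous section.

    from collections import deque

    def fifo_page_faults(reference_string, num_frames):
        """Count page faults when the oldest resident page is always the one replaced."""
        frames = deque()          # left end = oldest resident page, right end = newest
        faults = 0
        for page in reference_string:
            if page not in frames:
                faults += 1
                if len(frames) == num_frames:
                    frames.popleft()      # evict the oldest page
                frames.append(page)       # the newly loaded page is now the newest
        return faults

    print(fifo_page_faults([1, 2, 6, 12, 0, 0], num_frames=3))  # 5 page faults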

Optimal Page Algorithm:

Of all the algorithms, the optimal page replacement algorithm has the lowest page fault rate. Such an algorithm exists and is known as OPT or MIN. It replaces the page that will not be used for the longest period of time, i.e. it uses the time at which a page is next going to be used.
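
Below is a minimal sketch of the OPT policy, again assuming 3 frames and the same illustrative reference string; it evicts the resident page whose next use lies furthest in the future.

    def opt_page_faults(reference_string, num_frames):
        """Replace the resident page that will not be needed for the longest time."""
        frames = []
        faults = 0
        for i, page in enumerate(reference_string):
            if page in frames:
                continue
            faults += 1
            if len(frames) < num_frames:
                frames.append(page)
                continue
            def next_use(p):
                # Distance to the next reference of p; infinite if p is never used again.
                future = reference_string[i + 1:]
                return future.index(p) if p in future else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
        return faults

    print(opt_page_faults([1, 2, 6, 12, 0, 0], num_frames=3))  # 5 page faults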

Least Recently Used (LRU) Algorithm:

This algorithm selects for replacement the page that has not been used for the longest period of time in main memory. It is easy to implement: a list is maintained, and pages are replaced by looking back in time, as sketched below.
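
A minimal sketch of LRU replacement, assuming 3 frames; an OrderedDict keeps the resident pages ordered from least recently used to most recently used.

    from collections import OrderedDict

    def lru_page_faults(reference_string, num_frames):
        """Replace the resident page that has gone unused for the longest time."""
        frames = OrderedDict()    # insertion order doubles as recency order
        faults = 0
        for page in reference_string:
            if page in frames:
                frames.move_to_end(page)       # the page was just used again
                continue
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)     # evict the least recently used page
            frames[page] = True
        return faults

    print(lru_page_faults([1, 2, 6, 12, 0, 0], num_frames=3))  # 5 page faults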

Page Buffering Algorithm:

  • A pool of free frames is kept so that a process can be restarted swiftly.
  • On a page fault, a page is selected to be replaced, as before.
  • The desired page is written into a frame taken from the free pool, the page table is marked, and the process is restarted.
  • When the dirty replaced page is later written out to disk, the frame holding it is added to the free pool (see the sketch after this list).
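
The idea can be sketched as below, under assumed simplifications: frames are identified only by number, the page numbers are invented for illustration, and writing a dirty page to disk is merely simulated.

    from collections import deque

    free_pool = deque([10, 11, 12])       # pool of free frames kept ready
    page_table = {0: 1, 1: 2, 2: 3}       # resident page -> frame number
    dirty = {0: True, 1: False, 2: True}  # whether each resident page was modified

    def handle_fault(faulting_page, victim_page):
        """Load the needed page into a frame from the free pool so the process can
        restart immediately, then retire the victim and recycle its frame."""
        frame = free_pool.popleft()           # take a ready frame from the pool
        page_table[faulting_page] = frame     # mark the page table; the process can restart now
        dirty[faulting_page] = False
        victim_frame = page_table.pop(victim_page)
        if dirty.pop(victim_page, False):
            pass                              # the dirty victim would be written to disk here
        free_pool.append(victim_frame)        # its frame joins the free pool

    handle_fault(faulting_page=5, victim_page=2)
    print(page_table, list(free_pool))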

Least Frequently Used (LFU) Algorithm:

In this algorithm, the page with the smallest reference count is selected for replacement. It suffers in situations where a page is used heavily during the initial phase of a process and then never used again: such a page keeps a high count and stays in memory even though it is no longer needed. A sketch is given below.
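
A minimal sketch of LFU replacement, assuming 3 frames; each resident page keeps a reference count and the page with the smallest count is evicted.

    def lfu_page_faults(reference_string, num_frames):
        """Replace the resident page that has been referenced the fewest times."""
        counts = {}               # resident page -> number of references so far
        faults = 0
        for page in reference_string:
            if page in counts:
                counts[page] += 1
                continue
            faults += 1
            if len(counts) == num_frames:
                victim = min(counts, key=counts.get)   # least frequently used page
                del counts[victim]
            counts[page] = 1
        return faults

    print(lfu_page_faults([1, 2, 6, 12, 0, 0], num_frames=3))  # 5 page faults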

Most Frequently Used (MFU) Algorithm:

This algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used, so the page with the largest count is the one selected for replacement.