Memory Management in Operating Systems

Memory management is the operating system functionality that handles primary memory and moves processes back and forth between main memory and disk during execution. It keeps track of every memory location, regardless of whether it is allocated to a process or free. It checks how much memory is to be allocated to a process, decides which process gets memory at what time, and updates its records whenever memory is freed or allocated. This chapter discusses the basic concepts related to memory management.

Process Address Space:

The address space of a process is the set of logical addresses that the process uses in its code for reference. We will explain logical addressing with the following example.

With 32-bit addressing, addresses can range from 0 to 0x7FFFFFFF, i.e. 2^31 possible values, for a total theoretical size of 2 gigabytes. The operating system maps logical addresses to physical addresses at the time memory is allocated to the program.
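
As a quick check of that arithmetic, the short C snippet below (illustrative only) prints the number of bytes addressable with 31 address bits, which works out to the 2 gigabytes mentioned above.

    #include <stdio.h>

    int main(void) {
        unsigned long bytes = 1UL << 31;    /* 2^31 addressable bytes */
        printf("%lu bytes = %lu MB = 2 GB\n", bytes, bytes >> 20);
        return 0;
    }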

Following are the three types of addresses used before and after memory is allocated to a program:

  • Symbolic Addresses: The addresses used in source code. They consist of basic elements such as variable names, constants and instruction labels.
  • Relative Addresses: At compile time, symbolic addresses are converted into relative addresses.
  • Physical Addresses: The addresses generated by the loader when a program is loaded into main memory.

Virtual and physical addresses are the same in the compile-time and load-time address-binding schemes, but they differ in the execution-time address-binding scheme.

What is logical address space?

The set of all logical addresses generated by a program is known as the logical address space.

What is physical address space?

The set of all physical addresses corresponding to these logical addresses is known as the physical address space.

At run time, the Memory Management Unit (MMU), a hardware device, maps virtual addresses to physical addresses. The MMU uses the following mechanism to convert virtual addresses to physical addresses:

  • Offset Addition: The value in the base register is added to every address generated by a user process, which is treated as an offset at the time it is sent to memory. To make this easier to follow, we will work through an example; a small code sketch also follows this list.

Example:

If the base register holds 10000, then an attempt by the user to access address location 100 will be dynamically relocated to location 10100.

  • User Interaction: The user program never deals with real physical addresses; it only works with virtual addresses.
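
The following small C sketch mimics the offset-addition step described above, assuming a simple relocation-register scheme; the base value 10000 and logical address 100 come from the example, while the limit value is an illustrative assumption.

    #include <stdio.h>

    /* Translate a logical address by adding the base (relocation) register,
       after checking it against the limit register. */
    long translate(unsigned long logical, unsigned long base, unsigned long limit) {
        if (logical >= limit)
            return -1;                      /* real hardware would raise a trap */
        return (long)(base + logical);      /* offset addition done by the MMU  */
    }

    int main(void) {
        unsigned long base = 10000, limit = 4000;   /* illustrative register values */
        printf("logical 100 -> physical %ld\n", translate(100, base, limit));  /* 10100 */
        return 0;
    }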

Static Vs Dynamic Loading:

The choice between static and dynamic loading is made during the development of a computer program.

When you load statically:

With static loading, the complete program is compiled and linked without leaving any external program or module dependency unresolved at compile time. The linker combines the object program with the other necessary object modules into an absolute program, which still uses logical addresses. The absolute program and its data are then loaded into memory for execution.

When you load dynamically:

With dynamic loading, the compiler compiles the program but includes only references to the routines that are to be loaded dynamically; the rest of the work is deferred to execution time. The dynamic routines of the library are stored on disk in relocatable form and are loaded into memory only when they are needed by the program.
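
On Unix-like systems, one common way to observe dynamic loading is the dlopen interface, which loads a shared library only when the program asks for it. The sketch below is illustrative; the library name libm.so.6 and the symbol cos are just convenient, widely available examples.

    #include <stdio.h>
    #include <dlfcn.h>          /* dlopen, dlsym, dlclose (may need -ldl when linking) */

    int main(void) {
        /* Load the math library at run time, not at link time. */
        void *handle = dlopen("libm.so.6", RTLD_LAZY);
        if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* Look up the routine only when it is actually needed. */
        double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
        if (cosine)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(handle);
        return 0;
    }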

Static and Dynamic Linking:

The two kinds of linking are elaborated below.

Static Linking:

As with static loading, all the modules needed by the program are combined by the linker into a single executable program so that there are no run-time dependencies.

Dynamic Linking:

When dynamic linking is used, it is not necessary to link the actual module or library with the program; instead, a reference to the dynamic module is provided at compile and link time.

Examples:

  • Dynamic Link Libraries (DLL) in Windows
  • Shared Objects in Unix

Swapping:

Swapping is the mechanism in which memory is freed by moving a process from main memory (RAM) to secondary memory (disk). At some later time the process is swapped back from secondary storage into main memory.

Although performance is affected by swapping, it helps in running multiple and big processes in parallel, and for this reason swapping is also known as a technique for memory compaction.

What is Total Time for Swapping?

The total swap time is made up of the time required to move the entire process from RAM to secondary memory and the time to copy the process back into main memory, plus the time the process takes to regain main memory.

Don’t worry, we will explain this procedure with an example.

Size of the process = 2048 KB

Data transfer rate = 1 MB per second = 1024 KB per second

How long will it take to swap the process out?

Here we divide the size of the process by the transfer rate, after converting the rate into kilobytes per second. So in this case we have:

Time to swap out 2048 KB = 2048 / 1024 = 2 seconds = 2000 milliseconds

Counting both the swap-out and the swap-in, it will take a full 4000 milliseconds, plus other overhead while the process competes to regain main memory.
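
The same arithmetic can be written as a small C program; the figures are the ones used in the example above.

    #include <stdio.h>

    int main(void) {
        double process_kb      = 2048.0;   /* size of the process in KB       */
        double rate_kb_per_sec = 1024.0;   /* 1 MB per second expressed in KB */

        double swap_out_ms = process_kb / rate_kb_per_sec * 1000.0;  /* 2000 ms */
        double total_ms    = 2.0 * swap_out_ms;                      /* out + in */

        printf("swap out: %.0f ms, total: %.0f ms\n", swap_out_ms, total_ms);
        return 0;
    }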

Memory Allocation:

Main memory is usually divided into two partitions:

  • Low memory: The operating system resides in this part of memory.
  • High memory: User processes are held in this part of memory.

Memory Allocation and Description:

Single-Partition Allocation:

In single-partition allocation, the relocation-register scheme is used to protect user processes from each other and from changes to operating system code and data. The relocation register contains the value of the smallest physical address, and the limit register contains the range of logical addresses; each logical address must be less than the limit register.

Multiple-Partition Allocation:

In multiple-partition allocation, main memory is divided into a number of fixed-sized partitions, and each partition should contain exactly one process. When a partition is free, a process is selected from the input queue and loaded into that partition. When the process terminates, the partition becomes available for another process.
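
The C sketch below simulates this fixed-partition scheme with a first-fit choice of partition; the partition and process sizes are illustrative assumptions.

    #include <stdio.h>
    #include <stdbool.h>

    #define NPART 4

    typedef struct {
        int  size_kb;
        bool free;
    } Partition;

    /* Place a process in the first free partition that is large enough. */
    int allocate(Partition parts[], int nparts, int proc_kb) {
        for (int i = 0; i < nparts; i++) {
            if (parts[i].free && parts[i].size_kb >= proc_kb) {
                parts[i].free = false;
                return i;                     /* index of the chosen partition    */
            }
        }
        return -1;                            /* no partition free: process waits */
    }

    int main(void) {
        Partition parts[NPART] = { {100, true}, {500, true}, {200, true}, {300, true} };
        int procs[] = { 212, 417, 112, 426 };          /* process sizes in KB */

        for (int i = 0; i < 4; i++) {
            int p = allocate(parts, NPART, procs[i]);
            if (p >= 0)
                printf("%d KB process -> partition %d (%d KB)\n",
                       procs[i], p, parts[p].size_kb);
            else
                printf("%d KB process must wait for a free partition\n", procs[i]);
        }
        return 0;
    }

Note that the 212 KB process occupies a 500 KB partition, leaving 288 KB unusable inside that partition: this is exactly the internal fragmentation discussed in the next section.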

Fragmentation:

As processes are loaded into and removed from memory, the free memory space is broken into little pieces. Sometimes a process cannot fit into a memory block because the block is too small, and the block remains unused. This problem is known as fragmentation. Following are the two types of fragmentation:

Internal Fragmentation:

The memory block assigned to a process is bigger than the process requires. The leftover portion remains unused and cannot be used by any other process.

External Fragmentation:

The total free memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.

The following shows how memory is wasted by fragmentation and how more free memory can be created out of fragmented memory with the help of compaction.

Reducing External Fragmentation:

External fragmentation can be reduced by compaction: shuffling the memory contents so that all free memory is placed together in one large block.
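
A toy illustration of compaction in C, assuming a simulated memory array in which letters mark bytes owned by processes and dots mark free holes (the layout itself is an illustrative assumption):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char   mem[] = "AA..BBB...CC....";   /* fragmented memory (illustrative) */
        size_t n = strlen(mem), write = 0;

        /* Slide every allocated byte toward low memory, preserving order. */
        for (size_t read = 0; read < n; read++)
            if (mem[read] != '.')
                mem[write++] = mem[read];
        memset(mem + write, '.', n - write); /* all free space is now one block */

        printf("after compaction: %s\n", mem);   /* AABBBCC......... */
        return 0;
    }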

Reducing Internal Fragmentation:

Internal fragmentation can be reduced by assigning the smallest partition that is still large enough for the process.

Paging:

A computer can address more memory than the amount physically installed on the system. This extra memory is a section of the hard disk set up to emulate the computer's RAM and is known as virtual memory. Virtual memory is implemented with the help of the paging technique.

Pages:

In paging, the process address space is broken into blocks of the same size called pages. The size of a page is a power of 2, typically ranging from 512 bytes to 8192 bytes. The size of a process is measured in the number of pages.

Frames:

Main memory is divided into small fixed-sized blocks of physical memory called frames. To obtain optimum utilization of main memory and to avoid external fragmentation, the size of a frame is kept equal to the size of a page.

The diagram below explains how pages and frames work together.

Address translation:

A page address is called a logical address and is represented by a page number and an offset.

Logical Address = Page Number + Offset

A frame address is called a physical address and is represented by a frame number and an offset.

Physical Address = Frame Number + Offset

A data structure called the page map table is used to keep track of the relation between the pages of a process and the frames in physical memory.

The following figure describes the mapping of a page onto a frame in physical memory.

When a frame is allocated to a page, the logical address of that page is translated into a physical address, and an entry is created in the page table to be used throughout the execution of the program. When a process has to execute, its pages can be loaded into any available memory frames.
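
The sketch below traces this translation in C for a tiny, hand-filled page map table; the 4 KB page size and the page-to-frame mapping are illustrative assumptions.

    #include <stdio.h>

    #define PAGE_SIZE 4096
    #define NPAGES    4

    /* page map table: page number -> frame number (illustrative values) */
    int page_table[NPAGES] = { 5, 2, 7, 0 };

    unsigned long to_physical(unsigned long logical) {
        unsigned long page   = logical / PAGE_SIZE;   /* page number          */
        unsigned long offset = logical % PAGE_SIZE;   /* offset within a page */
        unsigned long frame  = page_table[page];      /* look up the frame    */
        return frame * PAGE_SIZE + offset;            /* frame base + offset  */
    }

    int main(void) {
        unsigned long logical = 1 * PAGE_SIZE + 123;  /* page 1, offset 123 */
        printf("logical %lu -> physical %lu\n", logical, to_physical(logical));
        return 0;
    }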

For better understanding, consider an example. Suppose we have a program of 8 KB but only 5 KB of main memory is available at a given time. The concept of paging handles this situation: when the computer runs out of RAM, the operating system moves idle or unwanted pages into secondary memory to free RAM for other processes, and brings them back when the program needs them.

This continues during the whole execution of the program: the OS keeps removing idle pages from main memory, writing them to secondary memory, and bringing them back when they are required by the program.

Advantages of paging:

  • External fragmentation is reduced by paging.
  • It is an efficient memory management technique and its implementation is easy.
  • Swapping is easy because of equally sized frames and pages.

Disadvantages of Paging:

  • It cannot stop internal fragmentation.
  • Page tables require extra memory space, which can be a problem for a system with a small amount of RAM.

Segmentation:

Segmentation is also a memory management technique. In segmentation, each job is divided into several segments of different sizes, one for each module, containing the pieces that perform related functions. Each segment represents a separate logical address space of the program.

When a process executes, its segments are loaded into non-contiguous memory, though each individual segment is loaded into a contiguous block of available memory.

Segmentation is similar to paging, but paging uses fixed-length pages while segmentation uses variable-length segments.

What Does a Program Contain?

A program contains the following important parts:

  • Main function
  • Utility functions
  • Data Structures

The OS maintains a segment map table for every process, together with a list of free memory blocks, the segment numbers, their sizes and their corresponding memory locations in main memory. For each segment, the table stores its starting address and its length. A reference to a memory location includes a value that identifies a segment and an offset within it.
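
A minimal sketch of this lookup in C, assuming one segment each for the main function, the utility functions and the data structures listed above; the segment bases and limits in the table are illustrative assumptions.

    #include <stdio.h>

    typedef struct {
        unsigned long base;    /* starting address of the segment in main memory */
        unsigned long limit;   /* length of the segment                          */
    } Segment;

    /* segment map table (illustrative values) */
    Segment segment_table[] = {
        { 1400, 1000 },   /* segment 0: main function     */
        { 6300,  400 },   /* segment 1: utility functions */
        { 4300, 1100 },   /* segment 2: data structures   */
    };

    /* A logical address is a (segment number, offset) pair. */
    long to_physical(unsigned seg, unsigned long offset) {
        if (offset >= segment_table[seg].limit)
            return -1;                          /* offset out of range: trap */
        return (long)(segment_table[seg].base + offset);
    }

    int main(void) {
        printf("(2, 53)  -> %ld\n", to_physical(2, 53));    /* 4353                   */
        printf("(1, 500) -> %ld\n", to_physical(1, 500));   /* -1: addressing error   */
        return 0;
    }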

The process of segmentation is shown in the following figure,