Paging:
In computer operating systems that have their main memory divided into pages, paging (sometimes called swapping) is the transfer of pages between main memory and an auxiliary store, such as a hard disk drive.[1] Paging is an important part of virtual memory implementation in most contemporary general-purpose operating systems, allowing them to use disk storage for data that does not fit into physical RAM. Paging is usually implemented as architecture-specific code built into the kernel of the operating system.
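As a rough sketch of the mechanism, here is a toy single-level page table lookup in C. The page size, table size, and names are illustrative assumptions, not any particular architecture's layout:

#include <stdio.h>
#include <stdint.h>

/* Toy single-level page table: 4 KiB pages, so the low 12 bits of a
 * virtual address are the offset and the rest select a page.
 * Sizes and names here are made up for illustration. */
#define PAGE_SHIFT 12
#define NUM_PAGES  16

static int32_t page_table[NUM_PAGES]; /* frame number, or -1 if paged out */

int64_t translate(uint32_t vaddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);
    if (page >= NUM_PAGES || page_table[page] < 0)
        return -1; /* page fault: the OS would fetch the page from disk */
    return ((int64_t)page_table[page] << PAGE_SHIFT) | offset;
}

int main(void)
{
    for (int i = 0; i < NUM_PAGES; i++) page_table[i] = -1;
    page_table[2] = 7;                              /* page 2 lives in frame 7 */
    printf("%lld\n", (long long)translate(0x2ABC)); /* hit: frame 7 + offset   */
    printf("%lld\n", (long long)translate(0x5ABC)); /* -1: page fault          */
    return 0;
}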
Segmentation:
Segmentation is one approach to memory management and protection in the operating system. It has been superseded by paging for most purposes, but much of the terminology of segmentation is still used, "segmentation fault" being an example. Some operating systems still provide segmentation at some logical level, although paging is used as the main memory-management policy.
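A minimal sketch of segment-table translation, assuming made-up bases and limits (a real system adds protection bits and more):

#include <stdio.h>
#include <stdint.h>

/* Toy segment table: each segment has a base address and a limit.
 * A logical address (segment, offset) is valid only when
 * offset < limit; otherwise the hardware raises a fault -- the
 * origin of the term "segmentation fault".  Values are made up. */
struct segment { uint32_t base, limit; };

static const struct segment seg_table[] = {
    { 0x1000, 0x400 },   /* segment 0: e.g. code */
    { 0x8000, 0x200 },   /* segment 1: e.g. data */
};
#define NSEGS (sizeof seg_table / sizeof seg_table[0])

int64_t seg_translate(unsigned seg, uint32_t offset)
{
    if (seg >= NSEGS || offset >= seg_table[seg].limit)
        return -1;                        /* segmentation fault */
    return (int64_t)seg_table[seg].base + offset;
}

int main(void)
{
    printf("%lld\n", (long long)seg_translate(1, 0x10));  /* 0x8010 = 32784 */
    printf("%lld\n", (long long)seg_translate(1, 0x300)); /* -1: fault      */
    return 0;
}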
Wednesday, September 10, 2008
LINKED ALLOCATION
The problems with contiguous allocation can be traced directly to two requirements: that space be allocated contiguously, and that the files needing that space are of different sizes. These requirements can be avoided by using linked allocation.
In linked allocation, each file is a linked list of disk blocks. The directory contains a pointer to the first (and optionally the last) block of the file. For example, a file of five blocks starting at block 4 might continue at block 7, then block 16, block 10, and finally block 27. Each block contains a pointer to the next block, and the last block contains a NIL pointer. The value -1 may be used for NIL to differentiate it from block 0.
With linked allocation, each directory entry has a pointer to the first disk block of the file. This pointer is initialized to NIL (the end-of-list value) to signify an empty file. A write to the file removes the first free block from the free-space list and writes the data there; the new block is then linked to the end of the file. To read a file, the pointers are simply followed from block to block.
There is no external fragmentation with linked allocation: any free block can be used to satisfy a request. Notice also that there is no need to declare the size of a file when it is created; a file can continue to grow as long as there are free blocks. Linked allocation does have disadvantages, however. The major problem is that it cannot support direct access efficiently; it is effective only for sequential-access files. To find the ith block of a file, we must start at the beginning of the file and follow the pointers until the ith block is reached, and each access to a pointer requires a disk read.
Another severe problem is reliability. A bug in the operating system or a disk hardware failure might result in pointers being lost or damaged, and the file system could then pick up the wrong pointer and follow it into the free-space list or into another file.
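Here is a toy simulation of the example file above. The array next_block stands in for pointers that would really live inside the disk blocks themselves; everything else is illustrative:

#include <stdio.h>

#define NBLOCKS 32
#define NIL     (-1)   /* -1 marks end-of-file so block 0 stays usable */

/* next_block[b] holds the number of the block that follows block b in
 * its file, or NIL for the last block.  (On a real disk this pointer
 * lives inside the block itself; here it is just an array.) */
static int next_block[NBLOCKS];

/* Follow the chain to find the i-th block of a file: this is why
 * linked allocation is poor at direct access -- every hop would
 * cost a disk read. */
int nth_block(int first, int i)
{
    int b = first;
    while (i-- > 0 && b != NIL)
        b = next_block[b];
    return b;
}

int main(void)
{
    /* The example file from the text: blocks 4 -> 7 -> 16 -> 10 -> 27 */
    for (int b = 0; b < NBLOCKS; b++) next_block[b] = NIL;
    next_block[4]  = 7;  next_block[7]  = 16;
    next_block[16] = 10; next_block[10] = 27; next_block[27] = NIL;

    for (int i = 0; i < 5; i++)
        printf("block %d of file = %d\n", i, nth_block(4, i));
    return 0;
}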
File Allocation Methods
One main problem in file management is how to allocate space for files so that disk space is utilized effectively and files can be accessed quickly. Three major methods of allocating disk space are:
* Contiguous Allocation
* Linked Allocation
* Indexed Allocation
Each method has its advantages and disadvantages. Accordingly, some systems support all three (e.g. Data General's RDOS). More commonly, a system will use one particular method for all files.
Contiguous File Allocation
- Each file occupies a set of contiguous blocks on the disk
- Allocation using first fit/best fit (see the sketch after this list)
- A need for compaction
- Only the starting block and the length of the file in blocks are needed to work with the file
- Allows random access
- Problems with files that grow
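Below is a minimal first-fit sketch in C, showing the start + offset arithmetic that makes random access trivial. The bitmap, the sizes, and the helper name alloc_contiguous are assumptions for illustration, not any real file system's API:

#include <stdio.h>

#define NBLOCKS 64
static char used[NBLOCKS]; /* 1 = allocated */

/* First fit: scan for the first run of `len` free blocks.
 * Returns the starting block, or -1 if no hole is big enough
 * (the case that eventually forces compaction). */
int alloc_contiguous(int len)
{
    for (int start = 0; start + len <= NBLOCKS; start++) {
        int ok = 1;
        for (int i = 0; i < len && ok; i++)
            ok = !used[start + i];
        if (ok) {
            for (int i = 0; i < len; i++) used[start + i] = 1;
            return start;
        }
    }
    return -1;
}

int main(void)
{
    int start = alloc_contiguous(5);
    /* Random access is trivial: the i-th block is just start + i. */
    printf("file starts at block %d; block 3 is disk block %d\n",
           start, start + 3);
    return 0;
}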
Linked File Allocation
- Each file is a linked list of blocks
- No external fragmentation
- Effective for sequential access.
- Problematic for direct access
File Allocation Table (FAT)
- Variation of the linked list (used by MS-DOS and OS/2)
- A section of the disk at the beginning of each partition (Volume) is set aside to contain a FAT
- FAT has one entry for each disk block, pointing to the next block in the file.
- The linked list is implemented in that section (see the sketch after this list)
- Actual file blocks contain no links
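A toy version of the same chain, with the pointers held in one table as the FAT does. The in-memory array, sizes, and names are made up for illustration; real FAT entries also encode free and bad blocks:

#include <stdio.h>

#define NBLOCKS 32
#define FAT_EOF (-1)

/* The FAT lives in one reserved area of the volume: entry b names
 * the block that follows block b in its file.  Because the whole
 * table can be cached in memory, finding the i-th block means
 * chasing entries in RAM rather than reading each data block. */
static int fat[NBLOCKS];

int fat_nth(int first, int i)
{
    int b = first;
    while (i-- > 0 && b != FAT_EOF)
        b = fat[b];
    return b;
}

int main(void)
{
    for (int b = 0; b < NBLOCKS; b++) fat[b] = FAT_EOF;
    fat[4] = 7; fat[7] = 16; fat[16] = 10; fat[10] = 27; /* same file as before */

    printf("block 3 of file = %d\n", fat_nth(4, 3)); /* 10 */
    return 0;
}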
Indexed File Allocation
- Indexed allocation brings all the pointers together into one location: the index block (sketched after this list)
- Much more effective for direct access
- Inefficient for small files
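A minimal sketch: with an index block, the ith pointer is a direct array lookup. The block numbers reuse the earlier example, and the fixed size of 8 pointers is an illustrative assumption:

#include <stdio.h>

#define BLOCK_PTRS 8

/* Indexed allocation: the file's directory entry points at one
 * index block, and entry i of that block points at data block i.
 * Direct access is a single lookup, but even a tiny file pays for
 * a whole index block -- the small-file overhead noted above. */
static int index_block[BLOCK_PTRS] = { 4, 7, 16, 10, 27, -1, -1, -1 };

int main(void)
{
    int i = 3;
    printf("block %d of file = %d\n", i, index_block[i]); /* 10 */
    return 0;
}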
Monday, August 4, 2008
Round-Robin Scheduler
Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system, which assigns time slices to each process in equal portions and in order, handling all processes without priority. Round-robin scheduling is both simple and easy to implement, and starvation-free. Round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks.
The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn.
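A toy simulation of the policy (the burst times and the quantum are invented example numbers):

#include <stdio.h>

/* Round-robin sketch: cycle through the ready processes, letting
 * each run for at most one quantum before moving to the next.
 * Every process gets a turn each pass, so none can starve. */
int main(void)
{
    int burst[] = { 5, 3, 8 };            /* remaining CPU time per process */
    int n = 3, quantum = 2, clock = 0, left = n;

    while (left > 0) {
        for (int p = 0; p < n; p++) {
            if (burst[p] == 0) continue;   /* already finished */
            int run = burst[p] < quantum ? burst[p] : quantum;
            printf("t=%2d: P%d runs for %d\n", clock, p, run);
            clock += run;
            burst[p] -= run;
            if (burst[p] == 0) left--;
        }
    }
    return 0;
}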
context switch
A context switch (also sometimes referred to as a process switch or a task switch) is the switching of the CPU (central processing unit) from one process or thread to another. A context is the contents of a CPU's registers and program counter at any point in time. A register is a small amount of very fast memory inside a CPU (as opposed to the slower RAM main memory outside the CPU) that is used to speed the execution of computer programs by providing quick access to commonly used values, generally those in the midst of a calculation. A program counter is a specialized register that indicates the position of the CPU in its instruction sequence and holds either the address of the instruction being executed or the address of the next instruction to be executed, depending on the specific system.
In slightly more detail, context switching is the kernel (i.e., the core of the operating system) performing the following activities with regard to processes (including threads) on the CPU:
- suspending the progression of one process and storing the CPU's state (i.e., the context) for that process somewhere in memory
- retrieving the context of the next process from memory and restoring it in the CPU's registers
- returning to the location indicated by the program counter (i.e., the line of code at which the process was interrupted) in order to resume the process
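Here is a toy rendering of the three steps above in C. A real kernel does this in assembly on the actual hardware registers; the struct below only mimics the shape of the operation:

#include <stdio.h>
#include <stdint.h>

/* Toy picture of a context: just a register file and a program
 * counter.  The values and sizes are stand-ins for illustration. */
struct context {
    uint64_t regs[4];  /* stand-ins for general-purpose registers */
    uint64_t pc;       /* where to resume this process            */
};

static struct context saved[2]; /* one saved context per process */

void context_switch(int from, int to, struct context *cpu)
{
    saved[from] = *cpu;   /* step 1: store the outgoing context in memory */
    *cpu = saved[to];     /* step 2: restore the incoming context         */
    /* step 3: the hardware would now jump to cpu->pc and resume. */
}

int main(void)
{
    struct context cpu = { {1, 2, 3, 4}, 0x400000 };
    saved[1].pc = 0x500000;          /* process 1 was interrupted here */
    context_switch(0, 1, &cpu);
    printf("resuming at pc=0x%llx\n", (unsigned long long)cpu.pc);
    return 0;
}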
Dispatcher
In an operating system, the dispatcher is the module that actually gives control of the CPU to the process selected by the short-term scheduler. This involves switching context, switching to user mode, and jumping to the proper location in the user program to restart it. The dispatcher should be as fast as possible, since it is invoked during every process switch; the time it takes the dispatcher to stop one process and start another running is known as the dispatch latency.
Thread
A thread in computer science is short for a thread of execution. Threads are a way for a program to fork (or split) itself into two or more simultaneously (or pseudo-simultaneously) running tasks. Threads and processes differ from one operating system to another but, in general, a thread is contained inside a process and different threads in the same process share some resources while different processes do not.
On a single processor, multithreading generally occurs by time-division multiplexing ("time slicing"), in very much the same way as the parallel execution of multiple tasks (computer multitasking): the processor switches between different threads. This context switching can happen so fast as to give the illusion of simultaneity to an end user. On a multiprocessor or multi-core system, threading can be achieved via multiprocessing, wherein different threads and processes can run literally simultaneously on different processors or cores.
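A minimal POSIX threads sketch of two threads sharing one process's memory. The counter, the worker function, and the mutex use are illustrative; compile with cc demo.c -pthread:

#include <stdio.h>
#include <pthread.h>

/* Two threads in one process share its globals; two separate
 * processes would each get their own copy.  Because the counter
 * is shared, updates to it must be protected by a lock. */
static int shared = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* shared state needs locking */
        shared++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);  /* always 200000 with the lock */
    return 0;
}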