Thursday, September 25, 2008

Paging and Segmentation

Paging:
In computer operating systems that have their main memory divided into pages, paging (sometimes called swapping) is the transfer of pages between main memory and an auxiliary store, such as a hard disk drive.[1] Paging is an important part of virtual memory implementation in most contemporary general-purpose operating systems, allowing them to use disk storage for data that does not fit into physical RAM. Paging is usually implemented as architecture-specific code built into the kernel of the operating system.

Segmentation:
Segmentation is one approach to memory management and protection in the operating system. It has been superseded by paging for most purposes, but much of the terminology of segmentation is still used, "segmentation fault" being an example. Some operating systems still have segmentation at some logical level although paging is used as the main memory management policy.
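The core mechanical step in paging is address translation. As a rough sketch of the idea (assuming 4 KB pages and a made-up page table; the numbers are illustrative, not from any real system):

```python
# Sketch of virtual-to-physical address translation with 4 KB pages.
# The page table below is a made-up example, not from any real system.
PAGE_SIZE = 4096  # 2**12 bytes

page_table = {0: 5, 1: 9, 2: 3}  # virtual page number -> physical frame number

def translate(virtual_address):
    """Split the address into page number and offset, then look up the frame."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page_number not in page_table:
        raise KeyError("page fault: page %d is not in physical memory" % page_number)
    frame = page_table[page_number]
    return frame * PAGE_SIZE + offset

# Virtual address 4100 lies in page 1 at offset 4, so it maps into frame 9.
physical = translate(4100)
```

In real hardware this lookup is done by the MMU (with a TLB cache), and a missing entry triggers the page-fault handling that brings the page in from the auxiliary store.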

Wednesday, September 10, 2008

LINKED ALLOCATION

The problems in contiguous allocation can be traced directly to the requirement that the spaces be allocated contiguously and that the files that need these spaces are of different sizes. These requirements can be avoided by using linked allocation.

In linked allocation, each file is a linked list of disk blocks. The directory contains a pointer to the first (and optionally the last) block of the file. For example, a file of five blocks starting at block 4 might continue at block 7, then block 16, block 10, and finally block 27. Each block contains a pointer to the next block, and the last block contains a NIL pointer. The value -1 may be used for NIL to differentiate it from block 0.


With linked allocation, each directory entry has a pointer to the first disk block of the file. This pointer is initialized to nil (the end-of-list pointer value) to signify an empty file. A write to a file removes the first free block and writes to that block. This new block is then linked to the end of the file. To read a file, the pointers are just followed from block to block.

There is no external fragmentation with linked allocation. Any free block can be used to satisfy a request. Notice also that there is no need to declare the size of a file when that file is created. A file can continue to grow as long as there are free blocks. Linked allocation does have disadvantages, however. The major problem is that it does not support direct access efficiently; it is effective only for sequential-access files. To find the ith block of a file, we must start at the beginning of that file and follow the pointers until the ith block is reached. Note that each access to a pointer requires a disk read.
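The cost of finding the ith block can be made concrete with a toy model. This sketch reuses the block chain from the example above (4 -> 7 -> 16 -> 10 -> 27), counting one simulated disk read per pointer followed:

```python
# Toy model of linked allocation: each entry of `disk` holds (data, next_block).
# Block numbers follow the example above: 4 -> 7 -> 16 -> 10 -> 27 -> NIL (-1).
NIL = -1
disk = {
    4: ("block A", 7),
    7: ("block B", 16),
    16: ("block C", 10),
    10: ("block D", 27),
    27: ("block E", NIL),
}

def read_ith_block(start, i):
    """Follow pointers from the first block; each hop models one disk read."""
    block, reads = start, 0
    for _ in range(i):
        block = disk[block][1]  # one disk read just to find the next pointer
        reads += 1
    return disk[block][0], reads

data, reads = read_ith_block(4, 3)  # the 4th block (index 3) costs 3 prior reads
```

Reading the ith block always costs i disk reads just to locate it, which is exactly why linked allocation suits sequential access but not direct access.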


Another severe problem is reliability. A bug in the operating system or a disk hardware failure might result in pointers being lost or damaged, the effect of which could be picking up the wrong pointer and linking it into the free list or into another file.

File Allocation Methods

One main problem in file management is how to allocate space for files so that disk space is utilized effectively and files can be accessed quickly. Three major methods of allocating disk space are:
* Contiguous Allocation
* Linked Allocation
* Indexed Allocation.
Each method has its advantages and disadvantages. Accordingly, some systems support all three (e.g. Data General's RDOS). More commonly, a system will use one particular method for all files.

Contiguous File Allocation
  • Each file occupies a set of contiguous blocks on the disk
  • Allocation is done using first fit or best fit
  • Compaction may be needed
  • Only the starting block and the length of the file in blocks are needed to work with the file
  • Allows random access
  • Problems with files that grow.

Linked File Allocation

  • Each file is a linked list of blocks
  • No external fragmentation
  • Effective for sequential access.
  • Problematic for direct access

File Allocation Table (FAT)

  • Variation of the linked list (MS-DOS and OS/2)

- A section of the disk at the beginning of each partition (volume) is set aside to contain the FAT
- The FAT has one entry for each disk block, pointing to the next block in the file
- The linked list is implemented in that section
- The actual file blocks contain no links
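A minimal sketch of the idea, using a plain list as the FAT and the same hypothetical block chain as in the linked-allocation example (a real FAT also reserves special values for free and bad blocks, which are omitted here):

```python
# Minimal FAT sketch: the table sits at the start of the volume and holds,
# for each block, the number of the next block in the file (or an EOF marker).
EOF = -1
fat = [0] * 32  # hypothetical 32-block volume
fat[4], fat[7], fat[16], fat[10], fat[27] = 7, 16, 10, 27, EOF

def file_blocks(start):
    """Walk the FAT chain; the data blocks themselves contain no links."""
    chain, block = [], start
    while block != EOF:
        chain.append(block)
        block = fat[block]
    return chain

chain = file_blocks(4)
```

Because all the links live in one table, the whole FAT can be cached in memory, so finding the ith block no longer costs i disk reads of data blocks.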

Indexed File Allocation

  • Indexed allocation brings all the pointers together into one location: the index block
  • Much more effective for direct access
  • Inefficient for small files
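The contrast with linked allocation can be sketched in a couple of lines (block numbers are made up):

```python
# Indexed allocation sketch: one index block collects all the file's pointers,
# so the i-th data block is a single lookup away (good for direct access).
index_block = [9, 16, 1, 10, 25]  # hypothetical block numbers for a 5-block file

def ith_block(i):
    return index_block[i]  # direct access: no pointer chasing

third = ith_block(2)
```

The flip side is the inefficiency noted above: even a one-block file still consumes an entire index block for its single pointer.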

Monday, August 4, 2008

RoundRobin Scheduler

Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system, which assigns time slices to each process in equal portions and in order, handling all processes without priority. Round-robin scheduling is both simple and easy to implement, and starvation-free. Round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks.
The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn.
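A minimal sketch of the time-slicing just described (burst times and the quantum are made-up numbers; arrival times and context-switch cost are ignored for brevity):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR: `bursts` maps process name -> remaining CPU time.
    Returns the order in which processes finish."""
    queue = deque(bursts.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining > quantum:
            queue.append((name, remaining - quantum))  # time slice used up, back of the line
        else:
            finished.append(name)                      # process completes within its slice
    return finished

order = round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=4)
```

Every process is guaranteed a turn each cycle of the queue, which is why the policy is starvation-free.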

context switch

A context switch (also sometimes referred to as a process switch or a task switch) is the switching of the CPU (central processing unit) from one process or thread to another. A context is the contents of a CPU's registers and program counter at any point in time.

A register is a small amount of very fast memory inside a CPU (as opposed to the slower RAM main memory outside the CPU) that is used to speed the execution of computer programs by providing quick access to commonly used values, generally those in the midst of a calculation. A program counter is a specialized register that indicates the position of the CPU in its instruction sequence and which holds either the address of the instruction being executed or the address of the next instruction to be executed, depending on the specific system.

Context switching can be described in slightly more detail as the kernel (i.e., the core of the operating system) performing the following activities with regard to processes (including threads) on the CPU: (1) suspending the progression of one process and storing the CPU's state (i.e., the context) for that process somewhere in memory, (2) retrieving the context of the next process from memory and restoring it in the CPU's registers and (3) returning to the location indicated by the program counter (i.e., returning to the line of code at which the process was interrupted) in order to resume the process.

Dispatcher

In an operating system, the dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This involves switching context, switching to user mode, and jumping to the proper location in the user program to restart it. Since the dispatcher is invoked during every context switch, it should be as fast as possible; the time it takes the dispatcher to stop one process and start another running is known as the dispatch latency. Essentially, the dispatcher is the "conductor" of the processes, responsible for actually putting the chosen one on the CPU.

Thread

A thread in computer science is short for a thread of execution. Threads are a way for a program to fork (or split) itself into two or more simultaneously (or pseudo-simultaneously) running tasks. Threads and processes differ from one operating system to another but, in general, a thread is contained inside a process and different threads in the same process share some resources while different processes do not.
On a single processor, multithreading generally occurs by time-division multiplexing ("time slicing") in very much the same way as the parallel execution of multiple tasks (computer multitasking): the processor switches between different threads. This context switching can happen so fast as to give the illusion of simultaneity to an end user. On a multiprocessor or multi-core system, threading can be achieved via multiprocessing, wherein different threads and processes can run literally simultaneously on different processors or cores.

process

In computing, a process is an instance of a computer program that is being sequentially executed[1] by a computer system that has the ability to run several computer programs concurrently.


A single computer processor executes one or more instructions at a time (per clock cycle), one after the other (this is a simplification; for the full story, see superscalar CPU architecture). To allow users to run several programs at once (e.g., so that processor time is not wasted waiting for input from a resource), single-processor computer systems can perform time-sharing. Time-sharing allows processes to switch between being executed and waiting (to continue) to be executed. In most cases this is done very rapidly, providing the illusion that several processes are executing 'at once'. (This is known as concurrency or multiprogramming.)

Using more than one physical processor on a computer permits true simultaneous execution of more than one stream of instructions from different processes, but time-sharing is still typically used to allow more than one process to run at a time. (Concurrency is the term generally used to refer to several independent processes sharing a single processor; simultaneity is used to refer to several processes, each with their own processor.) Different processes may share the same set of instructions in memory (to save storage), but this is not known to any one process. Each execution of the same set of instructions is known as an instance: a completely separate instantiation of the program.

TIME SLICE:

1. With a multi-user system, a time-slice is the set amount of processing time each user gets.
2. With a single-user system, a time-slice is the set amount of processing time each program gets.


CPU Scheduler

We all have experience with waiting. At a bank, for instance, we take a numbered ticket on entering; several lines are processed at the same time, and each time the number display advances, somebody is served. A smaller version of the same thing happens in our operating system: lots of processes stand by, waiting to be served. The main difference is that we usually have just one CPU. For simplicity, we consider the situation with one CPU, which means only one process can be served at any one time.
The short-term scheduler is an intermediary that dispatches a process to use the CPU. The difference between the short-term scheduler and the long-term scheduler (also called the job scheduler) is that the former chooses a ready process already in memory, while the latter chooses which processes to bring from the secondary storage device into memory.
Processes exist in the operating system all the time, yet only one CPU can be used, so only one process can run at a time. The most intuitive and simplest policy is first in, first served: processes are served by the CPU one by one until nobody is waiting. But just as a customer who forgets his ticket number and has to phone his family for help holds up everyone behind him in a serial wait, a long job makes every later job wait; letting part of the next job's service proceed can shorten the total waiting time.
Many scheduling methods have been proposed, such as Round-Robin, FCFS (first come, first served), SJF (shortest job first), and multilevel feedback queue scheduling. I implemented four of them in Java with a full architecture, including a virtual CPU and virtual timing.
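To illustrate why order matters (this is a small Python sketch, not the Java implementation mentioned above), compare the average waiting time of FCFS and SJF on the same burst times, assuming all jobs arrive at time 0:

```python
def avg_wait_fcfs(bursts):
    """Average waiting time when jobs run strictly in arrival order."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed   # this job waited for everything before it
        elapsed += b
    return wait / len(bursts)

def avg_wait_sjf(bursts):
    """Shortest Job First: run the shortest jobs first."""
    return avg_wait_fcfs(sorted(bursts))

fcfs = avg_wait_fcfs([24, 3, 3])  # one long job arrives first
sjf = avg_wait_sjf([24, 3, 3])
```

With the long job first, FCFS gives an average wait of 17 time units; SJF drops it to 3, which is the bank-queue intuition above made precise.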

CPU SCHEDULING:

http://web.cs.wpi.edu/~cs3013/c07/lectures/Section05-Scheduling.pdf

CONTEXT SWITCH-SOME POINTS

A context switch (also sometimes referred to as a process switch or a task switch) is the switching of the CPU (central processing unit) from one process or thread to another. Context switching is an essential feature of multitasking operating systems. A multitasking operating system is one in which multiple processes execute on a single CPU seemingly simultaneously and without interfering with each other.
ACTIVITES OF CONTEXT SWITCH:
Context switching involves the following activities with regard to processes (including threads) on the CPU:
(1) suspending the progression of one process and storing the CPU's state (i.e., the context) for that process somewhere in memory,
(2) retrieving the context of the next process from memory and restoring it in the CPU's registers, and
(3) returning to the location indicated by the program counter (i.e., returning to the line of code at which the process was interrupted) in order to resume the process.
A context switch can also occur as a result of a hardware interrupt.

Saturday, August 2, 2008

Multilevel Queue Scheduling

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
Processes are permanently assigned to one queue, based on some property of the process, such as:
Memory size
Process priority
Process type
The algorithm chooses the process from the highest-priority occupied queue and runs that process either
preemptively or
non-preemptively.
Each queue has its own scheduling algorithm or policy.

Possibility I: If each queue has absolute priority over lower-priority queues, then no process in a queue can run unless all the higher-priority queues are empty.
For example, in the figure above, no process in the batch queue could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty.

Possibility II: If there is a time slice between the queues, then each queue gets a certain amount of CPU time, which it can then schedule among the processes in its queue. For instance:
80% of the CPU time to the foreground queue, using RR.
20% of the CPU time to the background queue, using FCFS.
Since processes do not move between queues, this policy has the advantage of low scheduling overhead, but it is inflexible.
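The 80/20 split can be sketched by deciding, tick by tick, which queue receives the CPU (the process names are hypothetical, and each queue's internal RR/FCFS choice is left out for brevity):

```python
# Sketch of the 80/20 time split: out of every 10 ticks, 8 go to the
# foreground queue (scheduled by RR) and 2 to the background queue (FCFS).
def schedule(foreground, background, ticks):
    """Return the sequence of queue names that received each CPU tick."""
    trace = []
    for t in range(ticks):
        if t % 10 < 8 and foreground:
            trace.append("fg")       # foreground's share of the cycle
        elif background:
            trace.append("bg")       # background's share (or fg is empty)
        elif foreground:
            trace.append("fg")
        else:
            trace.append("idle")
    return trace

trace = schedule(["P1", "P2"], ["B1"], ticks=10)
```

Over each cycle of 10 ticks the foreground queue receives 8 and the background queue 2, matching the stated proportions.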

DIFFERENCES BETWEEN PREEMPTIVE AND NON-PREEMPTIVE

Non-Preemptive: Non-preemptive algorithms are designed so that once a process enters the running state (is allocated the processor), it is not removed from the processor until it has completed its service time (or it explicitly yields the processor). context_switch() is called only when the process terminates or blocks.

Preemptive: Preemptive algorithms are driven by the notion of prioritized computation. The process with the highest priority should always be the one currently using the processor. If a process is currently using the processor and a new process with a higher priority enters the ready list, the process on the processor should be removed and returned to the ready list until it is once again the highest-priority process in the system.

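The difference boils down to a single decision point, sketched below (process names and priority numbers are hypothetical; higher number means higher priority):

```python
def next_to_run(current, ready, preemptive):
    """Decide which process holds the CPU next.
    `current` and the entries of `ready` are (name, priority) pairs."""
    if current is None:
        return max(ready, key=lambda p: p[1]) if ready else None
    if preemptive and ready:
        challenger = max(ready, key=lambda p: p[1])
        if challenger[1] > current[1]:
            return challenger  # preempt: a higher-priority process arrived
    return current             # non-preemptive: keep running until done or blocked

kept = next_to_run(("P1", 2), [("P2", 5)], preemptive=False)
switched = next_to_run(("P1", 2), [("P2", 5)], preemptive=True)
```

With the same arrival, the non-preemptive policy keeps P1 on the processor while the preemptive policy hands the CPU to the higher-priority P2.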

Tuesday, July 29, 2008

INTERPROCESS COMMUNICATION

Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to coordinate activities among different program processes that can run concurrently in an operating system. This allows a program to handle many user requests at the same time. Since even a single user request may result in multiple processes running in the operating system on the user's behalf, the processes need to communicate with each other. The IPC interfaces make this possible. Each IPC method has its own advantages and limitations so it is not unusual for a single program to use all of the IPC methods.
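One of the classic IPC interfaces is message passing over a pipe. Python's standard multiprocessing.Pipe shows the shape of the API; for brevity both ends are used within a single process here, whereas a real program would hand each end to a different process:

```python
from multiprocessing import Pipe

# A duplex pipe has two connection ends; messages sent into one end
# can be received from the other. Normally each end would be given to
# a separate process after fork/spawn; here both stay local for brevity.
parent_end, child_end = Pipe()

child_end.send({"request": "read", "file": "notes.txt"})  # hypothetical message
message = parent_end.recv()
```

Other common IPC mechanisms with the same flavor include message queues, shared memory, signals, and sockets; each trades off speed, structure, and whether the communicating processes must be on the same machine.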

Thursday, July 24, 2008

CONTEXT SWITCH

A context switch (also sometimes referred to as a process switch or a task switch) is the switching of the CPU (central processing unit) from one process or thread to another.
A context switch is sometimes described as the kernel suspending execution of one process on the CPU and resuming execution of some other process that had previously been suspended. Although this wording can help clarify the concept, it can be confusing in itself because a process is, by definition, an executing instance of a program. Thus the wording suspending progression of a process might be preferable.

PROCESS STATES

The following typical process states are possible on computer systems of all kinds. In most of these states, processes are "stored" in main memory.

Created
(Also called new.) When a process is first created, it occupies the "created" or "new" state. In this state, the process awaits admission to the "ready" state. This admission will be approved or delayed by a long-term, or admission, scheduler. In most desktop computer systems, this admission will be approved automatically; however, for real-time operating systems this admission may be delayed. In a real-time system, admitting too many processes to the "ready" state may lead to oversaturation and overcontention for the system's resources, leading to an inability to meet process deadlines. (A process is a program that is currently running, or that part of a program currently being executed by the processor.)

Ready
(Also called waiting or runnable.) A "ready" or "waiting" process has been loaded into main memory and is awaiting execution on a CPU (to be context switched onto the CPU by the dispatcher, or short-term scheduler). There may be many "ready" processes at any one point of the system's execution - for example, in a one-processor system, only one process can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution.

Running
(Also called active or executing.) A "running", "executing" or "active" process is a process which is currently executing on a CPU. From this state the process may exceed its allocated time slice and be context switched out and back to "ready" by the operating system; it may indicate that it has finished and be terminated; or it may block on some needed resource (such as an input/output resource) and be moved to a "blocked" state.

Blocked
(Also called sleeping.) Should a process "block" on a resource (such as a file, a semaphore or a device), it will be removed from the CPU (as a blocked process cannot continue execution) and will be in the blocked state. The process will remain "blocked" until its resource becomes available, which can unfortunately lead to deadlock. From the blocked state, the operating system may notify the process of the availability of the resource it is blocking on (the operating system itself may be alerted to the resource availability by an interrupt). Once the operating system is aware that a process is no longer blocking, the process is again "ready" and can from there be dispatched to its "running" state, and from there the process may make use of its newly available resource.

Terminated
A process may be terminated, either from the "running" state by completing its execution or by explicitly being killed. In either of these cases, the process moves to the "terminated" state. If a process is not removed from memory after entering this state, this state may also be called a zombie.
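The states above form a small state machine. A minimal sketch of the legal transitions they describe (simplified: suspended/swapped-out variants are omitted):

```python
# State machine for the process states described above; transitions not
# listed here (e.g. "blocked" -> "running") are invalid.
TRANSITIONS = {
    "created": {"ready"},
    "ready": {"running"},
    "running": {"ready", "blocked", "terminated"},
    "blocked": {"ready"},
    "terminated": set(),
}

def move(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError("illegal transition: %s -> %s" % (state, new_state))
    return new_state

# A typical life cycle: admitted, dispatched, blocks on I/O, resumes, finishes.
state = "created"
for step in ("ready", "running", "blocked", "ready", "running", "terminated"):
    state = move(state, step)
```

Note that a blocked process must pass back through "ready" before it can run again; it is never dispatched directly from "blocked" to "running".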

os concepts

http://informatik.unibas.ch/lehre/ws06/cs201/_Downloads/cs201-osc-svc-2up.pdf
Operating System (OS) - the layer of software with the application programs and users above it and the machine below it

Purposes:
convenience: transform the raw hardware into a machine that is more amiable to users
efficiency: manage the resources of the overall computer system
Operating systems are:
activated by interrupts from the hardware below or traps from the software above. Interrupts are caused by devices requesting attention from the cpu (processor). Traps are caused by illegal events, such as division by zero, or requests from application programs or from users via the command interpreter
not usually running: Silberschatz's comment (p.6) that "the operating system is the one program running at all times on the computer (usually called the kernel), with all else being application programs" is wrong or very confusing -- the operating system code is usually NOT running on the processor. However, part (or all) of the OS code is stored in main memory ready to run. Examples of operating systems are:
Windows Vista, Windows XP, Windows ME, Windows 2000, Windows NT, Windows 95, and all other members of the Windows family
UNIX, Linux, Solaris, Irix, and all other members of the UNIX family
MacOS 10 (OSX), MacOS 8, MacOS 7, and all other members of the MacOS family
Overall View of Operating System:
responds to processes: handles system calls (which are either sent as traps or by a special system-call instruction) and error conditions. The text sometimes refers to traps as "software interrupts".
responds to devices: handles interrupts (true interrupts from hardware)
Two crucial terms:
system call: a request to the operating system from a program; in UNIX, a system call looks like a call to a C function, and the set of system calls looks like a library of predefined functions
process: a program in execution (simple definition);
The operating system manages the execution of programs by having a table of processes, with one entry in the table for each executing entity with its own memory space.
Example 1: When a UNIX command such as 'ls' or 'who' is issued, a new process is created to run the executable file with this name;
Example 2: When you select Start -> Program -> Word in Windows, a new process is created to run the executable file with the name Winword.exe;
If two users are running the same program, the OS keeps the executions separate by having one process for each.
Major Services Provided by an Operating System: 1. process management and scheduling 2. main-memory management 3. secondary-memory management 4. input/output system management, including interrupt handling 5. file management 6. protection and security 7. networking 8. command interpretation
Other Services Provided by Operating Systems: 1. error detection and handling 2. resource allocation 3. accounting 4. configuration
Other Goals of OS Design: 1. easy to extend 2. portable - easy to move to different hardware 3. easy to install 4. easy to uninstall
The nucleus deals with the following:
Interrupt/trap handling - OS contains interrupt service routines (interrupt handlers), typically one for each possible type of interrupt from the hardware - Example: clock handler: handles the clock device, which ticks 60 (or more) times per second - OS also contains trap service routines (trap handlers), typically one for each possible type of trap from the processor
Short term scheduling - choosing which process to run next
Process management - creating and deleting processes - assigning privileges and resources to processes
Interprocess communication (ipc) - exchanging information between processes
Within the nucleus there are routines for:
managing registers
managing time
handling device interrupts
nucleus provides the environment in which processes exist
ultimately, every process depends on services provided by the nucleus
Command Interpreter or Shell
interface between user and OS
used to transform a request from the user into a request to the OS
can be GUI or line-oriented
the appearance of the command interpreter is the principal feature of the OS noted by users
In Windows, the command interpreter is based on a graphical user interface
In UNIX, there is a line-orientated command interpreter:
a login process is first created
a user interacts with it and the login validates the user
changes itself into a shell process by starting to run the executable code in /bin/csh
the user's commands are received by the shell process as a string of characters, e.g. elm hamilton or hist 20
the string of characters is parsed and one of three possibilities results.
If the first word matches an internal command of the shell, e.g., hist, the shell directly performs the requested operation.
Otherwise, if the first word is the name of a file in any of the list of directories to be searched for programs, e.g., /usr/local/bin/elm, the shell creates a new process and runs this program.
Otherwise, an error is reported.
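The three-way decision above can be sketched as follows (the internal-command set and the stand-in for scanning the search path are illustrative only, not a real shell's tables):

```python
# Sketch of the shell's three-way dispatch described above.
INTERNAL_COMMANDS = {"hist", "cd"}            # handled by the shell itself
SEARCH_PATH_PROGRAMS = {"elm", "ls", "who"}   # stand-in for scanning the search path

def dispatch(command_line):
    word = command_line.split()[0]
    if word in INTERNAL_COMMANDS:
        return "internal"        # shell directly performs the requested operation
    if word in SEARCH_PATH_PROGRAMS:
        return "fork-and-exec"   # shell creates a new process to run the program
    return "error"               # command not found: an error is reported

results = [dispatch(c) for c in ("hist 20", "elm hamilton", "frobnicate")]
```

In a real UNIX shell the second case is implemented with the fork() and exec() system calls, with the shell typically waiting for the child to finish.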
Layered Design
OS software is designed as a series of software layers.
Each layer of software provides services to the layer above and uses services provided by layers below.
If each layer is restricted to use only the services provided by the layer immediately below it, the approach is referred to as the strongly layered approach.
Advantages:
easy to design and implement one layer separately from other layers (modular)
easy to test (debugging)
easy to replace particular components
Disadvantages:
hard to choose/define layers
slows the OS down, e.g., in a strongly layered approach software in the highest layer (layer 5) can only call software in the lowest layer (layer 1) by calling layer 4, which calls layer 3, which calls layer 2, which calls layer 1.
Virtual Machine
A virtual machine is a software emulation of a real (hardware) or imaginary machine. It is completely implemented in software.
A virtual machine for the Intel 8086 processor is used to allow programs written for the 8086 to run on different hardware.
The user has the advantage of not having to purchase or maintain the correct hardware if it can be emulated on another machine. For example, if an 8086 virtual machine is available on a Sun, no 8086-compatible processor is required.
Java code is written for an imaginary machine called the Java Virtual Machine.
The user has the advantage of not having to purchase special hardware because the Java virtual machine is available to run on very many existing hardware/OS platforms.
As long as the Java Virtual Machines are implemented exactly according to specification, Java code is highly portable since it can run on all platforms without change.
Dual-Mode Operation
Dual-mode operation forms the basis for I/O protection, memory protection and CPU protection. In dual-mode operation, there are two separate modes: monitor mode (also called 'system mode' and 'kernel mode') and user mode. In monitor mode, the CPU can use all instructions and access all areas of memory. In user mode, the CPU is restricted to unprivileged instructions and a specified area of memory. User code should always be executed in user mode and the OS design ensures that it is. When responding to system calls, other traps/exceptions, and interrupts, OS code is run. The CPU automatically switches to monitor mode whenever an interrupt or trap occurs. So, the OS code is run in monitor mode.
Input/output protection: Input/output is protected by making all input/output instructions privileged. While running in user mode, the CPU cannot execute them; thus, user code, which runs in user mode, cannot execute them. User code requests I/O by making appropriate system calls. After checking the request, the OS code, which is running in monitor mode, can actually perform the I/O using the privileged instructions.
Memory protection: Memory is protected by partitioning the memory into pieces. While running in user mode, the CPU can only access some of these pieces. The boundaries for these pieces are controlled by the base register and the limit register (specifying bottom bound and number of locations, respectively). These registers can only be set via privileged instructions.
CPU protection: CPU usage is protected by using the timer device, the associated timer interrupts, and OS code called the scheduler. While running in user mode, the CPU cannot change the timer value or turn off the timer interrupt, because these require privileged operations. Before passing the CPU to a user process, the scheduler ensures that the timer is initialized and interrupts are enabled. When a timer interrupt occurs, the timer interrupt handler (OS code) can run the scheduler (more OS code), which decides whether or not to remove the current process from the CPU.
Other Key Concepts
Batch operating system:
originally referred to the case where a human operator would group together jobs with similar needs
now commonly means an operating system where no interaction between the user and their running process is possible
Multiprogrammed: multiple processes are in memory at the same time, and the CPU switches between them



Thursday, July 17, 2008

OPERATING SYSTEM SERVICES

Operating systems are responsible for providing essential services within a computer system:
Initial loading of programs and transfer of programs between secondary storage and main memory
Supervision of the input/output devices
File management
Protection facilities.
SERVICES :
The following are five services provided by an operating system for the convenience of users.
Program Execution:
The purpose of a computer system is to allow the user to execute programs, so the operating system provides an environment where the user can conveniently run programs. The user does not have to worry about memory allocation, multitasking, or anything else; these things are taken care of by the operating system.
Running a program involves allocating and deallocating memory, and CPU scheduling in the case of multiple processes. These functions cannot be given to user-level programs, so user-level programs cannot help the user run programs independently without help from the operating system.


I/O Operations:
Each program requires input and produces output, which involves the use of I/O. The operating system hides from the user the details of the underlying I/O hardware; all the user sees is that the I/O has been performed, without any details. So by providing I/O, the operating system makes it convenient for users to run programs.
For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.


File System Manipulation:
The output of a program may need to be written to new files, or input taken from existing files. The operating system provides this service, so the user does not have to worry about secondary storage management: the user gives a command to read or write a file and sees his or her task accomplished. Thus the operating system makes it easier for user programs to accomplish their tasks.
This service involves secondary storage management. The speed of I/O, which depends on secondary storage management, is critical to the speed of many programs, and hence it is best to let the operating system manage it rather than giving individual users control of it. It would not be difficult for user-level programs to provide these services, but for the reasons mentioned above it is best if this service is left to the operating system.


Communications:
There are instances where processes need to communicate with each other to exchange information. It may be between processes running on the same computer or running on the different computers. By providing this service the operating system relieves the user of the worry of passing messages between processes. In case where the messages need to be passed to processes on the other computers through a network it can be done by the user programs. The user program may be customized to the specifics of the hardware through which the message transits and provides the service interface to the operating system.


Error Detection:
An error in one part of the system may cause the complete system to malfunction. To avoid this, the operating system constantly monitors the system to detect errors, relieving the user of the worry that errors will propagate to various parts of the system and cause malfunctions.
This service cannot be handled by user programs because it involves monitoring, and in some cases altering, areas of memory, deallocating the memory of a faulty process, or relinquishing the CPU from a process stuck in an infinite loop. These tasks are too critical to be handed over to user programs; a user program given these privileges could interfere with the correct (normal) operation of the operating system.

Saturday, July 12, 2008

INTERRUPT

An interrupt is a hardware event that triggers the processor to jump from its current program counter to a specific point in the code. Interrupts are designed for special events whose occurrence cannot be predicted precisely (or at all). The MSP has many different kinds of events that can trigger interrupts, and for each one the processor sends execution to a unique, specific point in memory. Each interrupt is assigned a word-long segment at the upper end of memory, which is enough for a jump to the location where the interrupt will actually be handled.

Interrupts in general can be divided into two kinds: maskable and non-maskable. A maskable interrupt is one whose trigger event is not always important, so the programmer can decide that the event should not cause the program to jump. A non-maskable interrupt (like the reset button) is so important that it should never be ignored; the processor will always jump to it when it happens. Often, maskable interrupts are turned off by default to simplify the default behavior of the device, and special control registers allow maskable interrupts, individually or as a group, to be turned on.

Interrupts generally have a priority: when two interrupts happen at the same time, the higher-priority interrupt takes precedence over the lower-priority one. Thus if a peripheral timer goes off at the same time as the reset button is pushed, the processor will ignore the peripheral timer because the reset is more important (higher priority).
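The masking and priority rules above can be sketched in a few lines of Python. This is an illustrative model only: the interrupt names, the priority numbers, and the convention that a lower number means higher priority are all invented here, not taken from any real chip.

```python
# Toy model of interrupt selection: maskable interrupts can be ignored,
# and the highest-priority pending interrupt (lowest number) wins.

INTERRUPTS = {
    "reset":            {"priority": 0, "maskable": False},  # non-maskable
    "uart_rx":          {"priority": 3, "maskable": True},
    "peripheral_timer": {"priority": 5, "maskable": True},
}

def select_interrupt(pending, interrupts_enabled=True):
    """Pick the highest-priority pending interrupt.
    Maskable interrupts are dropped while interrupts are disabled;
    non-maskable ones always get through."""
    candidates = [
        name for name in pending
        if not INTERRUPTS[name]["maskable"] or interrupts_enabled
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda n: INTERRUPTS[n]["priority"])

# Reset wins over the peripheral timer, as described above.
print(select_interrupt({"peripheral_timer", "reset"}))
# With interrupts masked, a maskable timer event is simply ignored.
print(select_interrupt({"peripheral_timer"}, interrupts_enabled=False))
```

Running the sketch shows `reset` selected in the first case and `None` in the second, matching the reset-versus-timer example in the text.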

WHAT IS A SYSTEM CALL

A system call is a request made by a program to the operating system to perform a task, picked from a predefined set, which the program does not have the permissions required to execute in its own flow of execution. Most operations that interact with the system require permissions not available to a user-level process; for example, any I/O performed with a device present on the system, or any form of communication with other processes, requires the use of system calls.
The fact that improper use of the system can easily cause a system crash necessitates some level of control. The microprocessor architecture on practically all modern systems (except some embedded systems) offers a series of privilege levels. The (low) privilege level in which normal applications execute limits the address space of the program so that it cannot access or modify other running applications or the operating system itself, and prevents the application from directly using system devices (e.g., the frame buffer or network devices). But any normal application needs these abilities, so it can call the operating system: the OS executes at the highest level of privilege and allows applications to request services via system calls, which are often implemented through interrupts. If the request is allowed, the system enters a higher privilege level, executes a specific set of instructions over which the interrupting program has no direct control, then returns control to the former flow of execution. This concept also serves as a way to implement security.
With the development of separate operating modes with varying levels of privilege, a mechanism was needed for transferring control safely from less privileged modes to more privileged ones. Less privileged code cannot simply transfer control to more privileged code at an arbitrary point with an arbitrary processor state; allowing it to do so would break security. For instance, the less privileged code could cause the more privileged code to execute in the wrong order, or provide it with a bad stack.
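As a concrete illustration, the functions in Python's `os` module are thin wrappers over real system calls: each call below traps into the kernel, because a user-level process has no privilege to do the underlying work (process-table lookup, device I/O) itself. The temporary file comes from `tempfile`, so the sketch is self-contained and cleans up after itself.

```python
import os
import tempfile

# Each line below crosses the user/kernel boundary via a system call.
pid = os.getpid()                          # getpid(2): kernel reads the process table
fd, path = tempfile.mkstemp()              # open(2) under the hood
os.write(fd, b"written via write(2)\n")    # write(2): kernel drives the disk I/O
os.close(fd)                               # close(2)
os.remove(path)                            # unlink(2)

print(pid > 0)   # every process has a positive PID
```

The program itself never touches the disk controller; it only asks the kernel to do so, exactly as the paragraph above describes.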

Thursday, July 10, 2008

Real-Time Systems

Real-time computing (RTC) is the study of hardware and software systems that are subject to a "real-time constraint", i.e., operational deadlines from event to system response. By contrast, a non-real-time system is one for which there is no deadline, even if fast response or high performance is desired or preferred.

A real-time system may be one whose application can be considered (within context) mission critical. The anti-lock brakes on a car are a simple example of a real-time computing system: the real-time constraint here is the short time in which the brakes must be released to prevent the wheel from locking. Real-time computations can be said to have failed if they are not completed before their deadline, where the deadline is relative to an event. A real-time deadline must be met regardless of system load.

Hard and soft real-time systems
A system is said to be real-time if the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed. The classical conception is that in a hard or immediate real-time system, the completion of an operation after its deadline is considered useless; ultimately, this may lead to a critical failure of the complete system. A soft real-time system, on the other hand, will tolerate such lateness, and may respond with decreased service quality (e.g., dropping frames while displaying a video).
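The hard/soft distinction can be sketched as a value function: a late result is worth nothing in a hard system but merely worth less in a soft one. This is an illustrative model; the linear decay rate for the soft case is an arbitrary choice, not a standard formula.

```python
# Value of a computation's result as a function of when it completes,
# relative to its deadline (times in arbitrary units).

def value_of_result(completion_time, deadline, hard=True):
    if completion_time <= deadline:
        return 1.0                     # deadline met: full value
    if hard:
        return 0.0                     # hard real-time: a late result is useless
    # Soft real-time: value decays with lateness instead of vanishing.
    lateness = completion_time - deadline
    return max(0.0, 1.0 - 0.1 * lateness)

print(value_of_result(9, 10, hard=True))     # on time: full value
print(value_of_result(12, 10, hard=True))    # late and hard: worthless
print(value_of_result(12, 10, hard=False))   # late and soft: degraded value
```

This mirrors the video example above: a dropped frame (soft miss) degrades quality, while a missed engine-control deadline (hard miss) is an outright failure.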

Hard real-time systems are typically found interacting at a low level with physical hardware, in embedded systems. For example, a car engine control system is a hard real-time system because a delayed signal may cause engine failure or damage. Other examples of hard real-time embedded systems include medical systems such as heart pacemakers and industrial process controllers.
Hard Real Time System examples: ATM, Airbag

Soft real-time systems are typically those used where there is some issue of concurrent access and the need to keep a number of connected systems up to date with changing situations. Example: the software that maintains and updates the flight plans for commercial airliners. These can operate to a latency of seconds. Live audio-video systems are also usually soft real-time; violation of constraints results in degraded quality, but the system can continue to operate.

Soft Real Time System examples: Games, Washing Machine, Camcorder

Saturday, July 5, 2008

Definition of System Call

The invocation of an operating system routine. Operating systems contain sets of routines for performing various low-level operations. For example, all operating systems have a routine for creating a directory. If you want to execute an operating system routine from a program, you must make a system call.
A request by an active process for a service performed by the UNIX system kernel, such as I/O or process creation.

Hardware protection and interrupts

Computer-System Operation
I/O devices and the CPU can execute concurrently.
Each device controller is in charge of a particular device type.
Each device controller has a local buffer.
CPU moves data from/to main memory to/from the local buffers.
I/O is from the device to local buffer of controller.
Device controller informs CPU that it has finished its operation by causing an interrupt.
Common Functions of Interrupts
Interrupt transfers control to the interrupt service routine, generally, through the interrupt vector, which contains the addresses of all the service routines.
Interrupt architecture must save the address of the interrupted instruction.
Incoming interrupts are disabled while another interrupt is being processed to prevent a lost interrupt.
A trap is a software-generated interrupt caused either by an error or a user request.
An operating system is interrupt driven.
Interrupt Handling
The operating system preserves the state of the CPU by storing registers and the program counter.
Determines which type of interrupt has occurred:
polling
vectored interrupt system
Separate segments of code determine what action should be taken for each type of interrupt.
I/O Structure
I/O Interrupts
After I/O starts, control returns to user program only upon I/O completion.
wait instruction idles the CPU until the next interrupt.
wait loop (contention for memory access).
at most one I/O request is outstanding at a time; no simultaneous I/O processing.
After I/O starts, control returns to user program without waiting for I/O completion.
System call – request to the operating system to allow user to wait for I/O completion.
Device-status table contains entry for each I/O device indicating its type, address, and state (not functioning, idle or busy)
Multiple requests for the same device are maintained in a wait queue.
Operating system indexes into I/O device table to determine device status and to modify table entry to include interrupt.
DMA Structure
Used for high-speed I/O devices able to transmit information at close to memory speeds.
Device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention.
Only one interrupt is generated per block, rather than the one interrupt per byte.
Basic Operation
User program (or OS) requests data transfer
The OS finds a buffer (empty buffer for i/p or a full buffer for o/p) from the pool of buffers to transfer.
The Device Driver (part of OS) sets the DMA controller registers to use appropriate source and destination addresses and transfer length.
The DMA controller is instructed to start transfer operation
While transfer is going on the CPU is free to perform other tasks
DMA controller steals cycles from CPU
DMA controller interrupts CPU when the work is done.
Storage Structure
Main memory – only large storage media that the CPU can access directly.
Secondary storage – extension of main memory that provides large nonvolatile storage capacity.
Magnetic disks – rigid metal or glass platters covered with magnetic recording material.
Disk surface is logically divided into tracks, which are subdivided into sectors.
The disk controller determines the logical interaction between the device and the computer.
Storage Hierarchy
Storage systems organized in hierarchy:
speed
cost
volatility
Caching – copying information into faster storage system; main memory can be viewed as a fast cache for secondary storage.
Storage-Device Hierarchy:
Hardware Protection
Dual-Mode Operation

Sharing system resources requires operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly.
Provide hardware support to differentiate between at least two modes of operation.
User mode – execution done on behalf of a user.
Monitor mode (also supervisor mode or system mode) – execution done on behalf of the operating system.
Mode bit added to computer hardware to indicate the current mode: monitor (0) or user (1).
When an interrupt or fault occurs, hardware switches to monitor mode.
Privileged instructions can be issued only in monitor mode.
I/O Protection
All I/O instructions are privileged instructions.
Must ensure that a user program could never gain control of the computer in monitor mode (i.e., a user program that, as part of its execution, stores a new address in the interrupt vector).
Memory Protection
Must provide memory protection at least for the interrupt vector and the interrupt service routines.
In order to have memory protection, add two registers that determine the range of legal addresses a program may access:
base register – holds the smallest legal physical memory address.
limit register – contains the size of the range.
Memory outside the defined range is protected.
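The base/limit check above amounts to one comparison per access: an address is legal only if it falls in [base, base + limit). The register values below are arbitrary examples; in real hardware the check is done by circuitry on every memory reference, and a violation traps to the operating system rather than returning False.

```python
# Sketch of base/limit memory protection.

def check_address(addr, base, limit):
    """Return True if addr is a legal access for this program.
    Hardware would trap to the OS instead of returning False."""
    return base <= addr < base + limit

BASE, LIMIT = 300040, 120900   # example register contents

print(check_address(300040, BASE, LIMIT))   # first legal address
print(check_address(420939, BASE, LIMIT))   # last legal address
print(check_address(420940, BASE, LIMIT))   # one past the range: protected
```

Because the load instructions for these registers are privileged, only the operating system can move the legal window, which is what makes the protection effective.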
Protection Hardware:
When executing in monitor mode, the operating system has unrestricted access to both monitor and users’ memory.
The load instructions for the base and limit registers are privileged instructions.
CPU Protection
Timer – interrupts computer after specified period to ensure operating system maintains control.
Timer is decremented every clock tick.
When timer reaches the value 0, an interrupt occurs.
Timer commonly used to implement time sharing.
Timer also used to compute the current time.
Load-timer is a privileged instruction.
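The timer mechanism in the list above can be modeled directly: decrement a counter on each tick and force control back to the OS when it hits zero. The tick counts below are invented for illustration.

```python
# Sketch of CPU protection via a countdown timer.

def run_with_timer(ticks_allowed, program_length):
    """Simulate running a program under a timer.
    Returns (instructions executed, reason control returned to the OS)."""
    timer = ticks_allowed
    executed = 0
    while executed < program_length:
        if timer == 0:
            # Timer reached zero: interrupt fires, OS regains control.
            return executed, "timer interrupt: OS regains control"
        timer -= 1          # timer decremented every clock tick
        executed += 1
    return executed, "program finished within its quantum"

print(run_with_timer(5, 100))   # long program gets cut off after 5 ticks
print(run_with_timer(5, 3))     # short program finishes on its own
```

This is exactly the mechanism that makes time sharing possible: no user program, not even one stuck in an infinite loop, can hold the CPU past its quantum.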
General-System Architecture
Given that I/O instructions are privileged, how does the user program perform I/O?
System call – the method used by a process to request action by the operating system.
Usually takes the form of a trap to a specific location in the interrupt vector.
Control passes through the interrupt vector to a service routine in the OS, and the mode bit is set to monitor mode.
The monitor verifies that the parameters are correct and legal, executes the request, and returns control to the instruction following the system call.

Thursday, July 3, 2008

BOOT-STRAP LOADER

Boot-strap Loader
Boot-strapping is a three-stage process. An initial program is loaded by the hardware at boot time; this program reads the second and third stages off the disk into memory. The second stage loads the third stage into memory at the correct address range and initializes the machine. The third stage is Nachos, which runs at the end of the boot sequence.
First Stage - Loading the Kernel
Second Stage - CPU and Protected Mode Initializations
Third Stage - Run-time System Initialization

DIRECT MEMORY ACCESS

DMA stands for "Direct Memory Access." DMA is a method of transferring data from the computer's RAM to another part of the computer without processing it using the CPU. While most data that is input to or output from your computer is processed by the CPU, some data does not require processing, or can be processed by another device. In these situations, DMA can save processing time and is a more efficient way to move data from the computer's memory to other devices.

For example, a sound card may need to access data stored in the computer's RAM, but since it can process the data itself, it may use DMA to bypass the CPU. Video cards that support DMA can also access the system memory and process graphics without needing the CPU. Ultra DMA hard drives use DMA to transfer data faster than previous hard drives that required the data to first be run through the CPU.

In order for devices to use direct memory access, they must be assigned to a DMA channel. Each type of port on a computer has a set of DMA channels that can be assigned to each connected device.
For example, a PCI controller and a hard drive controller each have their own set of DMA channels.

Interrupt Vector

An interrupt vector is the memory address of an interrupt handler, or an index into an array called an interrupt vector table or dispatch table. Interrupt vector tables contain the memory addresses of interrupt handlers. When an interrupt is generated, the processor saves its execution state via a context switch, and begins execution of the interrupt handler at the interrupt vector
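A toy dispatch table makes the idea concrete. The vector numbers and handler names below are made up; a real interrupt vector table holds machine addresses of handler code, and the state save is done by hardware, not by copying a dictionary.

```python
# Toy interrupt vector table: an array indexed by interrupt number
# whose entries are the handlers.

def handle_timer():
    return "timer handled"

def handle_keyboard():
    return "keyboard handled"

def handle_page_fault():
    return "page fault handled"

IVT = [handle_timer, handle_keyboard, handle_page_fault]

def raise_interrupt(vector, saved_state):
    # Hardware saves the execution state first (a context switch)...
    context = dict(saved_state)    # stand-in for saving registers and the PC
    # ...then begins execution of the handler found at the vector.
    result = IVT[vector]()
    # On return, the saved context would be restored.
    return result, context

result, ctx = raise_interrupt(1, {"pc": 0x4000})
print(result)
```

Indexing into the table is what makes "vectored" interrupts fast compared with polling every possible source to find who raised the interrupt.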

Interrupt

In computing, an interrupt is an asynchronous signal from hardware indicating the need for attention or a synchronous event in software indicating the need for a change in execution. A hardware interrupt causes the processor to save its state of execution via a context switch, and begin execution of an interrupt handler. Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven. [1]
An act of interrupting is referred to as an interrupt request ("IRQ").

DIRECT MEMORY ACCESS

Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA including disk drive controllers, graphics cards, network cards, sound cards and GPUs. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor system-on-chips, where its processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly a processing element inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, overlapping computation and data transfer.
Without DMA, using programmed input/output (PIO) mode for communication with peripheral devices, or load/store instructions in the case of multicore chips, the CPU is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU initiates the transfer, does other operations while the transfer is in progress, and receives an interrupt from the DMA controller once the operation is done. This is especially useful in real-time computing applications where not stalling behind concurrent operations is critical. Another related application area is various forms of stream processing, where it is essential to have data processing and transfer in parallel in order to achieve sufficient throughput.
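The CPU-overhead difference can be sketched with a back-of-the-envelope model. The cycle counts below are invented for illustration; the point is only that PIO cost grows with the block size while DMA cost is a fixed setup plus one end-of-block interrupt.

```python
# Illustrative comparison of CPU cycles consumed by PIO vs. DMA
# for one block transfer. All cycle counts are made-up assumptions.

BLOCK_BYTES = 4096

def pio_cpu_cycles(bytes_moved, cycles_per_byte=2):
    # PIO: the CPU copies every byte itself and is busy the whole time.
    return bytes_moved * cycles_per_byte

def dma_cpu_cycles(setup=50, interrupt_handling=100):
    # DMA: the CPU programs the controller, then is free until a single
    # interrupt arrives when the whole block has been transferred.
    return setup + interrupt_handling

print(pio_cpu_cycles(BLOCK_BYTES))   # grows with the block size
print(dma_cpu_cycles())              # constant, regardless of block size
```

Under these assumptions a 4 KB block costs the CPU 8192 cycles with PIO but only 150 with DMA, which is why one interrupt per block beats one interrupt (or busy-wait) per byte.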

Wednesday, July 2, 2008

MEMORY ALLOCATION IN REAL-TIME OPERATING SYSTEM

Memory allocation is even more critical in an RTOS than in other operating systems.
Firstly, speed of allocation is important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block; however, this is unacceptable as memory allocation has to occur in a fixed time in an RTOS.
Secondly, memory can become fragmented as free regions become separated by regions that are in use. This can cause a program to stall, unable to get memory, even though there is theoretically enough available. Memory allocation algorithms that slowly accumulate fragmentation may work fine for desktop machines—when rebooted every month or so—but are unacceptable for embedded systems that often run for years without rebooting.
The simple fixed-size-blocks algorithm works astonishingly well for simple embedded systems.
Another real strength of fixed size blocks is for DSP systems particularly where one core is performing one section of the pipeline and the next section is being done on another core. In this case, fixed size buffer management with one core filling the buffers and another set of cores returning the buffers is very efficient. A DSP optimized RTOS like Unison Operating System or DSPnano RTOS provides these features.
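The fixed-size-blocks scheme described above can be sketched as a free list of block indices: allocation and release are single pops and pushes, so the worst-case time is constant (what an RTOS needs) and external fragmentation cannot accumulate. The pool sizes here are arbitrary examples.

```python
# Minimal fixed-size-blocks allocator: O(1) alloc and release,
# no list scanning, no fragmentation.

class FixedBlockPool:
    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        # The free list holds block indices; a real allocator
        # would hold the blocks' memory addresses.
        self.free = list(range(num_blocks))

    def alloc(self):
        if not self.free:
            return None            # pool exhausted: caller must handle it
        return self.free.pop()     # constant time, bounded

    def release(self, block):
        self.free.append(block)    # constant time, bounded

pool = FixedBlockPool(num_blocks=4, block_size=256)
a = pool.alloc()
b = pool.alloc()
pool.release(a)
print(a is not None and b is not None)
```

The trade-off, not mentioned above, is internal fragmentation: a request smaller than the block size still consumes a whole block. That is usually an acceptable price for deterministic timing.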

TIME-SHARING

  • Timesharing is the technique of scheduling a computer's time so that it is shared across multiple tasks and multiple users, with each user having the illusion that his or her computation is going on continuously.
  • It is a mode of operation that allows multiple independent users to share the resources of a multiuser computer system, including the CPU, bus, and memory. Generally, it is accomplished by interleaving computer usage, with independent applications or users taking turns using computer resources, although the appearance may be of concurrent usage.
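The interleaving in the second bullet can be sketched as round-robin scheduling: each task runs for one fixed time slice, then goes to the back of the queue until it is finished. The task names, their lengths, and the quantum of 2 time units are all invented for illustration.

```python
from collections import deque

# Sketch of time sharing by round-robin interleaving.

def round_robin(tasks, quantum=2):
    """tasks: dict of name -> remaining time units.
    Returns the order in which time slices are granted."""
    queue = deque(tasks.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)             # this user's turn on the CPU
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))   # not done: back of the line
    return order

print(round_robin({"alice": 4, "bob": 2, "carol": 3}))
```

With a quantum of 2, the schedule comes out as alice, bob, carol, alice, carol: each user makes steady progress, which at human time scales looks continuous.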

HISTORY OF TIME-SHARING

John McCarthy, the inventor of LISP (List Processing Language), first imagined this technique in the late 1950s. The first timesharing operating systems, BBN's "Little Hospital" and CTSS (Compatible Time-Sharing System), were deployed in 1962-63. The early hacker culture of the 1960s and 1970s grew up around the first generation of relatively cheap timesharing computers, notably the DEC (Digital Equipment Corporation) 10, 11, and VAX (Virtual Address eXtension) lines. But these were only cheap in a relative sense; though quite a bit less powerful than today's personal computers, they had to be shared by dozens or even hundreds of people each. The early hacker communities nucleated around places where it was relatively easy to get access to a timesharing account.

TIME SHARING SYSTEMS

Time-sharing system:
Time-sharing refers to sharing a computing resource among many users by multitasking.
Because early mainframes and minicomputers were extremely expensive, it was rarely possible to allow a single user exclusive access to the machine for interactive use. But because computers in interactive use often spend much of their time idly waiting for user input, it was suggested that multiple users could share a machine by allocating one user's idle time to service other users. Similarly, small slices of time spent waiting for disk, tape, or network input could be granted to other users.
Throughout the late 1960s and the 1970s, computer terminals were multiplexed onto large institutional mainframe computers (central computer systems), which in many implementations sequentially polled the terminals to see if there was any additional data or action requested by the computer user. Later interconnection technologies were interrupt-driven, and some of these used parallel data transfer technologies such as the IEEE 488 standard. Generally, computer terminals were found on college campuses in much the same places as desktop or personal computers are found today. In the earliest days of personal computers, many were in fact used as particularly smart terminals for time-sharing systems.
With the rise of microcomputing in the early 1980s, time-sharing faded into the background because the individual microprocessors were sufficiently inexpensive that a single person could have all the CPU time dedicated solely to their needs, even when idle.
The Internet has brought the general concept of time-sharing back into popularity. Expensive corporate server farms costing millions can host thousands of customers all sharing the same common resources. As with the early serial terminals, websites operate primarily in bursts of activity followed by periods of idle time. This bursting nature permits the service to be used by many website customers at once, and none of them notice any delays in communications until the servers start to get very busy.

DIFFERENCE BETWEEN DISTRIBUTED OS AND NETWORK OS


A network OS is used to manage networked computer systems and to create, maintain, and transfer files across that network. A distributed OS is similar to a network OS, but in addition the platform on which it runs should have a higher configuration, such as more RAM and a faster processor.

Difference between a loosely coupled system and a tightly coupled system

What is the difference between a loosely coupled system and a tightly coupled system? Give examples

One feature commonly characterizing tightly coupled systems is that they share a clock.
Therefore multiprocessors are typically tightly coupled but distributed workstations on a network are not.
Another difference is that: in a tightly-coupled system, the delay experienced when a message is sent from one computer to another is short, and data rate is high; that is, the number of bits per second that can be transferred is large. In a loosely-coupled system, the opposite is true: the intermachine message delay is large and the data rate is low. For example, two CPU chips on the same printed circuit board and connected by wires etched onto the board are likely to be tightly coupled, whereas two computers connected by a 2400 bit/sec modem over the telephone system are certain to be loosely coupled.

LOOSELY COUPLED SYSTEMS

Loose coupling describes an approach where integration interfaces are developed with minimal assumptions between the sending/receiving parties, thus reducing the risk that a change in one application/module will force a change in another application/module.
Loose coupling has multiple dimensions. Integration between two applications may be loosely coupled in time using Message-oriented middleware, meaning the availability of one system does not affect the other. Alternatively, integration may be loosely coupled in format using middleware to perform Data transformation, meaning differences in data models do not prevent integration. In Web Services or Service Oriented Architecture, loose coupling may mean simply that the implementation is hidden from the caller.

DRAWBACK OF DISTRIBUTED SYSTEM

If not planned properly, a distributed system can decrease the overall reliability of computations if the unavailability of a node can cause disruption of the other nodes. Leslie Lamport famously quipped: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." Troubleshooting and diagnosing problems in a distributed system can also be more difficult, because the analysis may require connecting to remote nodes or inspecting communication between nodes.
Many types of computation are not well suited for distributed environments, typically owing to the amount of network communication or synchronization that would be required between nodes. If bandwidth, latency, or communication requirements are too significant, then the benefits of distributed computing may be negated and the performance may be worse than a non-distributed environment.

What are the main types of system calls? Describe their purpose.

Process control:
End, abort; load, execute; create process, terminate process; get process attributes, set process attributes; wait for time; wait event, signal event; allocate and free memory.
File management:
Create file, delete file; open, close; read, write, reposition; get file attributes, set file attributes.
Device management :
Request device, release device; read, write, reposition; get device attributes, set device attributes; Logically attach or detach devices.
Information maintenance :
Get time or date, set time or date; get system data, set system data; get process, file, or device attributes; set process, file, or device attributes.
Communications:
Create, delete communication connection; send, receive messages; transfer status information;
Attach or detach remote devices.

Tuesday, July 1, 2008

Distributed system

Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime.
In distributed computing a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but parallel computing is most commonly used to describe program parts running simultaneously on multiple processors in the same computer. Both types of processing require dividing a program into parts that can run simultaneously, but distributed programs often must deal with heterogeneous environments, network links of varying latencies, and unpredictable failures in the network or the computers.

PROCESS

When a process is created, it needs to wait for the process scheduler (of the operating system) to set its status to "waiting" and load it into main memory from a secondary storage device (such as a hard disk or a CD-ROM). Once the process has been assigned to a processor by a short-term scheduler, a context switch is performed (loading the process into the processor) and the process state is set to "running" - where the processor executes its instructions. If a process needs to wait for a resource (such as waiting for user input, or waiting for a file to become available), it is moved into the "blocked" state until it no longer needs to wait - then it is moved back into the "waiting" state. Once the process finishes execution, or is terminated by the operating system, it is moved to the "terminated" state, where it waits to be removed from main memory.
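The lifecycle above is a small state machine, and writing it out as a transition table makes the legal paths explicit. The state names follow the paragraph's own wording ("waiting" here plays the role of the ready queue); the event names are invented labels for the transitions it describes.

```python
# Process lifecycle as a state machine, following the description above.

TRANSITIONS = {
    ("new", "admit"):        "waiting",     # loaded into main memory
    ("waiting", "dispatch"): "running",     # chosen by the short-term scheduler
    ("running", "block"):    "blocked",     # e.g. waiting for user input
    ("blocked", "unblock"):  "waiting",     # resource ready: back in the queue
    ("running", "exit"):     "terminated",  # finished or killed by the OS
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")

s = "new"
for event in ["admit", "dispatch", "block", "unblock", "dispatch", "exit"]:
    s = step(s, event)
print(s)
```

Note what the table forbids as much as what it allows: a blocked process cannot go straight back to running, for example; it must be dispatched again from the waiting queue.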

Monday, June 30, 2008

System call in operating system

A system call is a request made by any arbitrary program to the operating system for performing tasks -- picked from a predefined set -- which the said program does not have required permissions to execute in its own flow of execution. Most operations interacting with the system require permissions not available to a user level process, i.e. any I/O performed with any arbitrary device present on the system or any form of communication with other processes requires the use of system calls.

UNIX

Unix (officially trademarked as UNIX, sometimes also written as Unix with small caps) is a computer operating system originally developed in 1969 by a group of AT&T employees at Bell Labs including Ken Thompson, Dennis Ritchie and Douglas McIlroy. Today's Unix systems are split into various branches, developed over time by AT&T as well as various commercial vendors and non-profit organizations.
As of 2007, the owner of the trademark is The Open Group, an industry standards consortium. Only systems fully compliant with and certified to the Single UNIX Specification are qualified to use the trademark; others are called "Unix system-like" or "Unix-like".

Saturday, June 28, 2008

CLEAN BOOT

Starting (booting) a computer as minimalistically as possible. Typically when you start your computer, it loads many files and programs to customize your environment. A clean boot eliminates these optional features and loads only those files and programs that are absolutely required by the operating system.
A clean boot is a troubleshooting technique that allows you to get the computer up and running so that you can perform diagnostic tests to determine which elements of the normal boot process are causing problems.

COLD BOOT

Starting the computer by turning the power on. Turning power off and then back on again clears memory and many internal settings. Some program failures will lock up the computer and require a cold boot to use the computer again. A cold boot is the start-up of a computer from a powered-down, or off, state; it is also called a hard boot.

WARM BOOT

Refers to restarting a computer that is already turned on via the operating system. Restarting it returns the computer to its initial state. A warm boot is sometimes necessary when a program encounters an error from which it cannot recover. On PCs, you can perform a warm boot by pressing the Control, Alt, and Delete keys simultaneously. On Macs, you can perform a warm boot by pressing the Restart button.
Also called a soft boot.
Contrast with cold boot, turning a computer on from an off position.

Information about windows xp and windows vista

Windows XP has been out since around 2002. It is more stable, programs are generally written to run on it, and most of its security issues have been resolved and patched during its long time on the market. Windows Vista is Microsoft's newest operating system, released in January 2007. It is newer and has more security and added features, but because it is so new there are few bug fixes for it and many concerns about its security and usability. However, with Vista being the newest operating system, this is the direction Microsoft is taking. It is not as easy to find a computer with XP on it anymore (all computers sold at Future Shop now come with Vista loaded), and future software will be written to support Windows Vista. Before long, there will be programs that require Windows Vista to run; some games, such as Halo 2 and Shadowrun, already require it. Just as Windows XP succeeded Windows 2000 and Windows ME, Windows Vista is looking to succeed Windows XP. Put in the plainest terms, Vista is XP's sequel.

LINUX

LINUX:
Linux (pronounced /ˈlɪnəks/ or /ˈlɪnʊks/)[1] is the name usually given to any Unix-like computer operating system that uses the Linux kernel. Linux is one of the most prominent examples of free software and open source development: typically all underlying source code can be freely modified, used, and redistributed by anyone.[2]
The name "Linux" comes from the Linux kernel, started in 1991 by Linus Torvalds. The system's utilities and libraries usually come from the GNU operating system, announced in 1983 by Richard Stallman. The GNU contribution is the basis for the alternative name GNU/Linux.[3]
Predominantly known for its use in servers, Linux is supported by corporations such as Dell, Hewlett-Packard, IBM, Novell, Oracle Corporation, Red Hat, and Sun Microsystems. It is used as an operating system for a wide variety of computer hardware, including desktop computers, supercomputers,[4] and embedded devices such as e-book readers, video game systems (PlayStation 2, PlayStation 3 and Xbox[5]), mobile phones and routers.

VIRTUAL MEMORY

Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory, while in fact it may be physically fragmented and may even overflow onto disk storage. Systems that use this technique make programming of large applications easier and use real physical memory (e.g. RAM) more efficiently than those without virtual memory.
Note that "virtual memory" is not just "using disk space to extend physical memory size".
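The idea above can be sketched as a toy simulation: a process touches pages of a large virtual address space while only a few physical frames exist, and pages not currently in RAM are treated as being "on disk". All names here are illustrative; a real OS implements this in the kernel and hardware, not in application code.

```python
from collections import OrderedDict

class VirtualMemory:
    """Toy demand-paging model: a few physical frames back a large
    virtual address space; pages not resident count as 'on disk'."""
    def __init__(self, num_frames):
        self.free_frames = list(range(num_frames))
        self.page_table = OrderedDict()   # virtual page -> physical frame
        self.page_faults = 0

    def access(self, vpage):
        if vpage in self.page_table:      # page already resident in RAM
            return self.page_table[vpage]
        self.page_faults += 1             # page fault: bring page in from disk
        if not self.free_frames:          # RAM full: evict oldest page (FIFO)
            _, frame = self.page_table.popitem(last=False)
            self.free_frames.append(frame)
        frame = self.free_frames.pop()
        self.page_table[vpage] = frame
        return frame

vm = VirtualMemory(num_frames=2)
for page in [0, 1, 0, 2, 1]:              # three pages compete for two frames
    vm.access(page)
print(vm.page_faults)                      # 3 faults: pages 0, 1 and 2 load in
```

The FIFO eviction here is the simplest possible replacement policy; real systems usually approximate least-recently-used instead.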

SYMBIAN PHONES

Symbian Phones
Symbian OS is the world-leading open operating system that powers the most popular and advanced smartphones today from the world's leading handset manufacturers.
http://www.symbian.com/phones/index.html

Friday, June 27, 2008

Real time operating system


A real-time operating system (RTOS) is an operating system that guarantees a certain capability within a specified time constraint.

For example, an operating system might be designed to ensure that a certain object was available for a robot on an assembly line. In what is usually called a "hard" real-time operating system, if the calculation needed to make the object available by the designated time could not be performed, the operating system would terminate with a failure.

In a "soft" real-time operating system, the assembly line would continue to function but the production output might be lower as objects failed to appear at their designated time, causing the robot to be temporarily unproductive. Some real-time operating systems are created for a special application and others are more general purpose. Some existing general-purpose operating systems claim to be real-time operating systems. To some extent, almost any general-purpose operating system, such as Microsoft's Windows 2000 or IBM's OS/390, can be evaluated for its real-time qualities. That is, even if an operating system does not fully qualify, it may have characteristics that enable it to be considered as a solution to a particular real-time application problem.
In general, real-time operating systems are said to require:
Multitasking
Process threads that can be prioritized
A sufficient number of interrupt levels
Real-time operating systems are often required in small embedded operating systems that are packaged as part of microdevices. Some kernels can be considered to meet the requirements of a real-time operating system. However, since other components, such as device drivers, are also usually needed for a particular solution, a real-time operating system is usually larger than just the kernel.
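The hard/soft distinction above can be illustrated with a small scheduling sketch (purely illustrative, not a real RTOS API): tasks have compute times and deadlines, a hard system treats any missed deadline as a fatal failure, and a soft system simply counts the miss and keeps going.

```python
def run_tasks(tasks, hard=False):
    """tasks: list of (compute_time, deadline) pairs.
    Runs them earliest-deadline-first; returns the number of misses."""
    clock, misses = 0, 0
    for compute_time, deadline in sorted(tasks, key=lambda t: t[1]):
        clock += compute_time            # task occupies the CPU until done
        if clock > deadline:             # finished after its deadline
            if hard:                     # hard real-time: this is a failure
                raise RuntimeError("hard real-time deadline missed")
            misses += 1                  # soft real-time: degraded output only
    return misses

print(run_tasks([(2, 3), (2, 4)]))       # both finish in time: 0 misses
print(run_tasks([(2, 3), (3, 4)]))       # second task finishes at t=5: 1 miss
```

Earliest-deadline-first is only one possible policy; the point is how the two kinds of system react when the schedule cannot be met.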

SYMBIAN OS OVERVIEW

The Symbian OS was designed specifically for mobile devices. It has a very small memory footprint and low power consumption. This is very important, as users do not want to recharge their phones daily, and the small footprint allows the OS to run on small devices with limited memory. Unlike other proprietary operating systems, it is an open OS, enabling third-party developers to write and install applications independently from the device manufacturers.

HOW MEMORY MANAGEMENT UNIT WORKS

Modern MMUs typically divide the virtual address space (the range of addresses used by the processor) into pages, whose size is a power of two (2^n), usually a few kilobytes. The bottom n bits of the address (the offset within a page) are left unchanged. The upper address bits are the (virtual) page number. The MMU normally translates virtual page numbers to physical page numbers via an associative cache called a Translation Lookaside Buffer (TLB). When the TLB lacks a translation, a slower mechanism involving hardware-specific data structures or software assistance is used. The data found in such data structures are typically called page table entries (PTEs), and the data structure itself is typically called a page table. The physical page number is combined with the page offset to give the complete physical address.
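The translation steps above can be sketched in a few lines. This is a software model only (the real MMU and TLB are hardware), and the page-table contents here are made-up values for illustration.

```python
PAGE_BITS = 12                        # 2^12 = 4 KiB pages, a common size
PAGE_SIZE = 1 << PAGE_BITS

# Hypothetical page table: virtual page number -> physical page number
page_table = {0: 5, 1: 9, 7: 2}
tlb = {}                              # small cache of recent translations

def translate(vaddr):
    vpn = vaddr >> PAGE_BITS          # upper bits: virtual page number
    offset = vaddr & (PAGE_SIZE - 1)  # bottom 12 bits pass through unchanged
    if vpn in tlb:                    # fast path: TLB hit
        ppn = tlb[vpn]
    else:                             # slow path: walk the page table
        ppn = page_table[vpn]         # a KeyError here would be a page fault
        tlb[vpn] = ppn                # cache the translation for next time
    return (ppn << PAGE_BITS) | offset

print(hex(translate(0x1ABC)))         # vpn 1 -> ppn 9, offset 0xABC: 0x9abc
```

Note how only the page number is translated; the offset within the page is reused as-is in the physical address.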

What are the benefits of Symbian OS?

1. Wide selection of applications available for a range of mobile phones
2. Implements industry standard protocols, interfaces and management services for IT system integration
3. Application development using industry standard Java and C++ languages
4. Extensive connectivity options - including GSM, GPRS, CDMA, WCDMA, WiFi and Bluetooth

MULTIPROCESSING

Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor and/or the ability to allocate tasks between them.[1] There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple chips in one package, multiple packages in one system unit, etc.).
Multiprocessing sometimes refers to the execution of multiple concurrent software processes in a system as opposed to a single process at any one instant. However, the term multiprogramming is more appropriate to describe this concept, which is implemented mostly in software, whereas multiprocessing is more appropriate to describe the use of multiple hardware CPUs. A system can be both multiprocessing and multiprogramming, only one of the two, or neither of the two.
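A minimal sketch of the hardware sense of the term: Python's standard `multiprocessing` module starts separate OS processes that the system can schedule on different CPUs, in contrast to multiprogramming's interleaving of processes on whatever CPUs exist. The worker function here is a stand-in for any CPU-bound task.

```python
from multiprocessing import Pool, cpu_count

def square(n):
    return n * n                      # stand-in for a CPU-bound computation

if __name__ == "__main__":
    # Spread the work across up to four worker processes; on a
    # multiprocessor machine these can run truly in parallel.
    with Pool(processes=min(4, cpu_count())) as pool:
        results = pool.map(square, range(8))
    print(results)
```

The `if __name__ == "__main__"` guard is required on platforms that spawn worker processes by re-importing the script.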