Tuesday, July 29, 2008

INTERPROCESS COMMUNICATION

Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to coordinate activities among different program processes that can run concurrently in an operating system. This allows a program to handle many user requests at the same time. Since even a single user request may result in multiple processes running in the operating system on the user's behalf, the processes need to communicate with each other. The IPC interfaces make this possible. Each IPC method has its own advantages and limitations so it is not unusual for a single program to use all of the IPC methods.
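One common IPC method is the pipe. The sketch below, using Python's multiprocessing module, is a minimal illustration (the message strings are made up for the example): a parent process sends a request to a child process over a pipe and reads back the reply.

```python
# Minimal IPC sketch: a pipe between a parent and a child process.
from multiprocessing import Process, Pipe

def worker(conn):
    # The child receives a request over the pipe and sends back a reply.
    request = conn.recv()
    conn.send("handled: " + request)
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send("user request #1")   # coordinate via the pipe
    print(parent_conn.recv())             # -> handled: user request #1
    p.join()
```

Other IPC methods (shared memory, message queues, sockets, signals) trade off speed, capacity, and convenience, which is why a program may mix several of them.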

Thursday, July 24, 2008

CONTEXT SWITCH

A context switch (also sometimes referred to as a process switch or a task switch) is the switching of the CPU (central processing unit) from one process or thread to another.
A context switch is sometimes described as the kernel suspending execution of one process on the CPU and resuming execution of some other process that had previously been suspended. Although this wording can help clarify the concept, it can be confusing in itself because a process is, by definition, an executing instance of a program. Thus the wording "suspending progression of a process" might be preferable.

PROCESS STATES

The following typical process states are possible on computer systems of all kinds. In most of these states, processes are "stored" in main memory.

Created
(Also called new.) When a process is first created, it occupies the "created" or "new" state. In this state, the process awaits admission to the "ready" state. This admission will be approved or delayed by a long-term, or admission, scheduler. In most desktop computer systems, this admission is approved automatically; in real-time operating systems, however, it may be delayed, because admitting too many processes to the "ready" state may lead to oversaturation and overcontention for the system's resources, resulting in an inability to meet process deadlines. Here, a process means a program that is currently running, or the part of a program currently being used by the processor.

Ready
(Also called waiting or runnable.) A "ready" or "waiting" process has been loaded into main memory and is awaiting execution on a CPU (to be context switched onto the CPU by the dispatcher, or short-term scheduler). There may be many "ready" processes at any one point of the system's execution - for example, in a one-processor system, only one process can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution.

Running
(Also called active or executing.) A "running", "executing" or "active" process is a process which is currently executing on a CPU. From this state the process may exceed its allocated time slice and be context switched out and back to "ready" by the operating system; it may indicate that it has finished and be terminated; or it may block on some needed resource (such as an input/output resource) and be moved to a "blocked" state.

Blocked
(Also called sleeping.) Should a process "block" on a resource (such as a file, a semaphore or a device), it will be removed from the CPU (as a blocked process cannot continue execution) and will be in the blocked state. The process will remain "blocked" until its resource becomes available, which can unfortunately lead to deadlock. From the blocked state, the operating system may notify the process of the availability of the resource it is blocking on (the operating system itself may be alerted to the resource availability by an interrupt). Once the operating system is aware that a process is no longer blocking, the process is again "ready" and can from there be dispatched to its "running" state, and from there the process may make use of its newly available resource.

Terminated
A process may be terminated, either from the "running" state by completing its execution or by explicitly being killed. In either of these cases, the process moves to the "terminated" state. If a process is not removed from memory after entering this state, it may be called a zombie.
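The legal moves between the five states above can be captured as a small transition table. The sketch below is illustrative only: the allowed transitions are taken from the descriptions in this text, not from any particular operating system.

```python
# State-transition sketch for the process states described above.
# Allowed transitions are an assumption drawn from this text.
TRANSITIONS = {
    "created":    {"ready"},
    "ready":      {"running"},
    "running":    {"ready", "blocked", "terminated"},
    "blocked":    {"ready"},
    "terminated": set(),
}

def move(state, new_state):
    """Return new_state if the transition is legal, else raise."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical lifetime: admitted, dispatched, blocks on I/O, resumes, exits.
state = "created"
for nxt in ("ready", "running", "blocked", "ready", "running", "terminated"):
    state = move(state, nxt)
print(state)  # terminated
```

Note that a process never goes straight from "created" to "running": it must first be admitted to "ready" and then be dispatched.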

OS CONCEPTS

http://informatik.unibas.ch/lehre/ws06/cs201/_Downloads/cs201-osc-svc-2up.pdf
Operating System (OS) - the layer of software with the application programs and users above it and the machine below it

Purposes:
convenience: transform the raw hardware into a machine that is more amiable to users
efficiency: manage the resources of the overall computer system
Operating systems are:
activated by interrupts from the hardware below or traps from the software above. Interrupts are caused by devices requesting attention from the cpu (processor). Traps are caused by illegal events, such as division by zero, or requests from application programs or from users via the command interpreter
not usually running: Silberschatz's comment (p.6) that ``the operating system is the one program running at all times on the computer (usually called the kernel), with all else being application programs'' is wrong or very confusing -- the operating system code is usually NOT running on the processor. However, part (or all) of the OS code is stored in main memory ready to run. Examples of operating systems are:
Windows Vista, Windows XP, Windows ME, Windows 2000, Windows NT, Windows 95, and all other members of the Windows family
UNIX, Linux, Solaris, Irix, and all other members of the UNIX family
MacOS 10 (OSX), MacOS 8, MacOS 7, and all other members of the MacOS family
Overall View of Operating System:
responds to processes: handles system calls (which are either sent as traps or by a special system-call instruction) and error conditions. The text sometimes refers to traps as ``software interrupts''.
responds to devices: handles interrupts (true interrupts from hardware)
Two crucial terms:
system call: a request to the operating system from a program; in UNIX, a system call looks like a call to a C function, and the set of system calls looks like a library of predefined functions
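The same "looks like an ordinary function" observation holds in higher-level languages. In Python, for example, functions in the os module are thin wrappers around the corresponding UNIX system calls:

```python
# os.getpid and os.write are thin wrappers around the getpid() and
# write() system calls; to the caller they look like ordinary functions.
import os
import sys

pid = os.getpid()                    # getpid() system call
msg = f"hello from process {pid}\n"
# write() system call on the stdout file descriptor; returns bytes written
n = os.write(sys.stdout.fileno(), msg.encode())
```

Each call traps into the kernel, which performs the privileged work and returns the result as if from a normal function.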
process: a program in execution (simple definition);
The operating system manages the execution of programs by having a table of processes, with one entry in the table for each executing entity with its own memory space.
Example 1: When a UNIX command such as 'ls' or 'who' is issued, a new process is created to run the executable file with this name;
Example 2: When you select Start -> Program -> Word in Windows, a new process is created to run the executable file with the name Winword.exe;
If two users are running the same program, the OS keeps the executions separate by having one process for each.
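The separation can be demonstrated in miniature: two processes running the same code each get their own copy of memory, so mutating a variable in one does not affect the other. The sketch below uses Python's multiprocessing module; the names are illustrative.

```python
# Two processes run the same code; each has its own memory space.
import os
from multiprocessing import Process, Queue

counter = 0

def run(name, q):
    global counter
    counter += 1                         # changes only this process's copy
    q.put((name, counter, os.getpid()))

if __name__ == "__main__":
    q = Queue()
    procs = [Process(target=run, args=(n, q)) for n in ("user1", "user2")]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(q.get(), q.get())  # each reports counter == 1 and a distinct PID
    print(counter)           # 0 -- the parent's copy is untouched
```

Each process appears in the OS process table as a separate entry with its own address space, which is exactly what keeps the two executions independent.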
Major Services Provided by an Operating System:
1. process management and scheduling
2. main-memory management
3. secondary-memory management
4. input/output system management, including interrupt handling
5. file management
6. protection and security
7. networking
8. command interpretation
Other Services Provided by Operating Systems:
1. error detection and handling
2. resource allocation
3. accounting
4. configuration
Other Goals of OS Design:
1. easy to extend
2. portable - easy to move to different hardware
3. easy to install
4. easy to uninstall
The nucleus deals with the following:
Interrupt/trap handling - OS contains interrupt service routines (interrupt handlers), typically one for each possible type of interrupt from the hardware - Example: clock handler: handles the clock device, which ticks 60 (or more) times per second - OS also contains trap service routines (trap handlers), typically one for each possible type of trap from the processor
Short term scheduling - choosing which process to run next
Process management - creating and deleting processes - assigning privileges and resources to processes
Interprocess communication (IPC) - exchanging information between processes
Within the nucleus there are routines for:
managing registers
managing time
handling device interrupts
nucleus provides the environment in which processes exist
ultimately, every process depends on services provided by the nucleus
Command Interpreter or Shell
interface between user and OS
used to transform a request from the user into a request to the OS
can be GUI or line-oriented
the appearance of the command interpreter is the principal feature of the OS noted by users
In Windows, the command interpreter is based on a graphical user interface
In UNIX, there is a line-oriented command interpreter:
a login process is first created
a user interacts with it and the login validates the user
changes itself into a shell process by starting to run the executable code in /bin/csh
the user's commands are received by the shell process as a string of characters, e.g. elm hamilton or hist 20
the string of characters is parsed and one of three possibilities results.
If the first word matches an internal command of the shell, e.g., hist, the shell directly performs the requested operation.
Otherwise, if the first word is the name of a file in any of the list of directories to be searched for programs, e.g., /usr/local/bin/elm, the shell creates a new process and runs this program.
Otherwise, an error is reported.
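The three-way dispatch above can be sketched as a tiny shell loop. This is an illustration only: `hist` is treated here as a stand-in builtin, and real shells do much more (quoting, pipes, job control).

```python
# Minimal sketch of a shell's three-way dispatch:
# builtin -> run directly; found on PATH -> new process; else error.
import shutil
import subprocess

BUILTINS = {"hist": lambda args: print("(builtin) history", *args)}

def dispatch(line):
    words = line.split()
    if not words:
        return
    cmd, args = words[0], words[1:]
    if cmd in BUILTINS:                  # 1. internal command of the shell
        BUILTINS[cmd](args)
    elif shutil.which(cmd):              # 2. found in the PATH search
        subprocess.run([cmd] + args)     #    shell creates a new process
    else:                                # 3. neither: report an error
        print(f"{cmd}: command not found")

dispatch("hist 20")
dispatch("echo hello")
dispatch("no-such-cmd")
```

`shutil.which` plays the role of searching the list of directories for programs, and `subprocess.run` stands in for the fork-and-exec that a real shell performs.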
Layered Design
OS software is designed as a series of software layers.
Each layer of software provides services to the layer above and uses services provided by layers below.
If each layer is restricted to use only the services provided by the layer immediately below it, the approach is referred to as the strongly layered approach.
Advantages:
easy to design and implement one layer separately from other layers (modular)
easy to test (debugging)
easy to replace particular components
Disadvantages:
hard to choose/define layers
slows the OS down, e.g., in a strongly layered approach software in the highest layer (layer 5) can only call software in the lowest layer (layer 1) by calling layer 4, which calls layer 3, which calls layer 2, which calls layer 1.
Virtual Machine
A virtual machine is a software emulation of a real (hardware) or imaginary machine. It is completely implemented in software.
A virtual machine for the Intel 8086 processor is used to allow programs written for the 8086 to run on different hardware.
The user has the advantage of not having to purchase or maintain the correct hardware if it can be emulated on another machine. For example, if an 8086 virtual machine is available on a Sun, no 8086-compatible processor is required.
Java code is written for an imaginary machine called the Java Virtual Machine.
The user has the advantage of not having to purchase special hardware because the Java virtual machine is available to run on very many existing hardware/OS platforms.
As long as the Java Virtual Machines are implemented exactly according to specification, Java code is highly portable since it can run on all platforms without change.
Dual-Mode Operation
Dual-mode operation forms the basis for I/O protection, memory protection and CPU protection. In dual-mode operation, there are two separate modes: monitor mode (also called 'system mode' and 'kernel mode') and user mode. In monitor mode, the CPU can use all instructions and access all areas of memory. In user mode, the CPU is restricted to unprivileged instructions and a specified area of memory. User code should always be executed in user mode and the OS design ensures that it is. When responding to system calls, other traps/exceptions, and interrupts, OS code is run. The CPU automatically switches to monitor mode whenever an interrupt or trap occurs. So, the OS code is run in monitor mode.
Input/output protection: Input/output is protected by making all input/output instructions privileged. While running in user mode, the CPU cannot execute them; thus, user code, which runs in user mode, cannot execute them. User code requests I/O by making appropriate system calls. After checking the request, the OS code, which is running in monitor mode, can actually perform the I/O using the privileged instructions.
Memory protection: Memory is protected by partitioning the memory into pieces. While running in user mode, the CPU can only access some of these pieces. The boundaries for these pieces are controlled by the base register and the limit register (specifying the bottom bound and the number of locations, respectively). These registers can only be set via privileged instructions.
CPU protection: CPU usage is protected by using the timer device, the associated timer interrupts, and OS code called the scheduler. While running in user mode, the CPU cannot change the timer value or turn off the timer interrupt, because these require privileged operations. Before passing the CPU to a user process, the scheduler ensures that the timer is initialized and interrupts are enabled. When a timer interrupt occurs, the timer interrupt handler (OS code) can run the scheduler (more OS code), which decides whether or not to remove the current process from the CPU.
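A user-space analogy to the timer interrupt is the UNIX SIGALRM signal: the program arms a timer, continues with its "user code", and a registered handler asynchronously regains control when the timer fires. This is only an analogy (it is Unix-only, and real timer interrupts are fielded in kernel mode), but it shows the control-flow shape:

```python
# Unix-only sketch: SIGALRM as a user-space analogy to a timer interrupt.
import signal
import time

def on_timer(signum, frame):
    # Analogous to the timer interrupt handler invoking the scheduler.
    print("timer fired: a scheduler could now preempt this process")

signal.signal(signal.SIGALRM, on_timer)  # register the "interrupt handler"
signal.alarm(1)                          # "initialize the timer" first
time.sleep(2)                            # the "user code"; handler runs after ~1s
```

The key point mirrored here is ordering: the timer is armed before the untrusted work begins, so control is guaranteed to come back.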
Other Key Concepts
Batch operating system:
originally referred to the case where a human operator would group together jobs with similar needs
now commonly means an operating system where no interaction between the user and their running process is possible
Multiprogrammed: multiple processes in memory at the same time, and the CPU switches between them



Thursday, July 17, 2008

OPERATING SYSTEM SERVICES

Operating systems are responsible for providing essential services within a computer system:
Initial loading of programs and transfer of programs between secondary storage and main memory
Supervision of the input/output devices
File management
Protection facilities.
SERVICES :
The following are five services provided by an operating system for the convenience of users.
Program Execution:
The purpose of a computer system is to allow the user to execute programs, so the operating system provides an environment in which the user can conveniently run them. The user does not have to worry about memory allocation or multitasking; these things are taken care of by the operating system.
Running a program involves allocating and deallocating memory, and CPU scheduling in the case of multiple processes. These functions cannot be given to user-level programs, so user-level programs cannot run programs independently without help from the operating system.


I/O Operations:
Each program requires input and produces output, which involves the use of I/O. The operating system hides from the user the details of the underlying I/O hardware. All the user sees is that the I/O has been performed, without any of the details. So, by providing I/O, the operating system makes it convenient for users to run programs.
For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.


File System Manipulation:
The output of a program may need to be written into new files, or input taken from some files. The operating system provides this service, so the user does not have to worry about secondary storage management. The user gives a command for reading from or writing to a file and sees the task accomplished. Thus the operating system makes it easier for user programs to accomplish their tasks.
This service involves secondary storage management. The speed of I/O that depends on secondary storage management is critical to the speed of many programs, so it is best to let the operating system manage it rather than give individual users control of it. It is not difficult for user-level programs to provide these services, but for the above reasons this service is best left to the operating system.


Communications:
There are instances where processes need to communicate with each other to exchange information. This may be between processes running on the same computer or on different computers. By providing this service, the operating system relieves the user of the worry of passing messages between processes. In cases where messages need to be passed to processes on other computers through a network, this can be done by user programs: the user program may be customized to the specifics of the hardware through which the message transits, and it provides the service interface to the operating system.


Error Detection:
An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions.
This service cannot be handled by user programs because it involves monitoring, and in some cases altering, areas of memory, deallocating memory for a faulty process, or relinquishing the CPU from a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs; a user program given these privileges could interfere with the correct (normal) operation of the operating system.

Saturday, July 12, 2008

INTERRUPT

An interrupt is an event in hardware that triggers the processor to jump from its current program counter to a specific point in the code. Interrupts are designed to be special events whose occurrence cannot be predicted precisely (or at all). The MSP has many different kinds of events that can trigger interrupts, and for each one the processor will send the execution to a unique, specific point in memory. Each interrupt is assigned a word-long segment at the upper end of memory. This is enough memory for a jump to the location in memory where the interrupt will actually be handled. Interrupts in general can be divided into two kinds: maskable and non-maskable. A maskable interrupt is an interrupt whose trigger event is not always important, so the programmer can decide that the event should not cause the program to jump. A non-maskable interrupt (like the reset button) is so important that it should never be ignored; the processor will always jump to this interrupt when it happens. Often, maskable interrupts are turned off by default to simplify the default behavior of the device, and special control registers allow maskable interrupts, either as a group or individually, to be turned on. Interrupts generally have a "priority": when two interrupts happen at the same time, the higher-priority interrupt takes precedence over the lower-priority one. Thus if a peripheral timer goes off at the same time as the reset button is pushed, the processor will ignore the peripheral timer because the reset is more important (higher priority).

WHAT IS A SYSTEM CALL

A system call is a request made by any arbitrary program to the operating system for performing tasks -- picked from a predefined set -- which the said program does not have the required permissions to execute in its own flow of execution. Most operations interacting with the system require permissions not available to a user-level process; for example, any I/O performed with any arbitrary device present on the system, or any form of communication with other processes, requires the use of system calls.
The fact that improper use of the system can easily cause a system crash necessitates some level of control. The design of the microprocessor architecture on practically all modern systems (except some embedded systems) offers a series of privilege levels -- the (low) privilege level in which normal applications execute limits the address space of the program so that it cannot access or modify other running applications nor the operating system itself. It also prevents the application from using any system devices (e.g. the frame buffer or network devices). But obviously any normal application needs these abilities; thus it can call the operating system. The OS executes at the highest level of privilege and allows the applications to request services via system calls, which are often implemented through interrupts. If allowed, the system enters a higher privilege level, executes a specific set of instructions which the interrupting program has no direct control over, then returns control to the former flow of execution. This concept also serves as a way to implement security.
With the development of separate operating modes with varying levels of privilege, a mechanism was needed for transferring control safely from lesser privileged modes to higher privileged modes. Less privileged code could not simply transfer control to more privileged code at any arbitrary point and with any arbitrary processor state. To allow it to do so would allow it to break security. For instance, the less privileged code could cause the higher privileged code to execute in the wrong order, or provide it with a bad stack.

Thursday, July 10, 2008

Real-Time Systems

Real-time computing (RTC) is the study of hardware and software systems that are subject to a "real-time constraint"—i.e., operational deadlines from event to system response. By contrast, a non-real-time system is one for which there is no deadline, even if fast response or high performance is desired or even preferred.

A real time system may be one where its application can be considered (within context) to be mission critical. The anti-lock brakes on a car are a simple example of a real-time computing system — the real-time constraint in this system is the short time in which the brakes must be released to prevent the wheel from locking. Real-time computations can be said to have failed if they are not completed before their deadline, where their deadline is relative to an event. A real-time deadline must be met, regardless of system load.

Hard and soft real-time systems
A system is said to be real-time if the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed. The classical conception is that in a hard or immediate real-time system, the completion of an operation after its deadline is considered useless - ultimately, this may lead to a critical failure of the complete system. A soft real-time system on the other hand will tolerate such lateness, and may respond with decreased service quality (e.g., dropping frames while displaying a video).
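The hard/soft distinction hinges on what happens when a result arrives after its deadline. The sketch below, with made-up numbers, just measures whether a computation met its deadline relative to the triggering event; a hard real-time system would treat a False result as a failure, a soft one as degraded quality.

```python
# Sketch: check whether work completed within its deadline after an event.
import time

def handle_event(work, deadline_s):
    """Run work; return True if it finished within deadline_s of the event."""
    event_time = time.monotonic()
    work()
    lateness = time.monotonic() - event_time - deadline_s
    return lateness <= 0          # True: deadline met

met = handle_event(lambda: time.sleep(0.01), deadline_s=0.05)
print("deadline met" if met else "deadline missed (a failure if hard real-time)")
```

Real RTOS schedulers enforce this proactively (priority scheduling, admission control) rather than merely measuring it after the fact.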

Hard real-time systems are typically found interacting at a low level with physical hardware, in embedded systems. For example, a car engine control system is a hard real-time system because a delayed signal may cause engine failure or damage. Other examples of hard real-time embedded systems include medical systems such as heart pacemakers and industrial process controllers.
Hard Real Time System examples: ATM, Airbag

Soft real-time systems are typically those used where there is some issue of concurrent access and the need to keep a number of connected systems up to date with changing situations. Example: the software that maintains and updates the flight plans for commercial airliners. These can operate to a latency of seconds. Live audio-video systems are also usually soft real-time; violation of constraints results in degraded quality, but the system can continue to operate.

Soft Real Time System examples: Games, Washing Machine, Camcorder

Saturday, July 5, 2008

Definition of System Call

The invocation of an operating system routine. Operating systems contain sets of routines for performing various low-level operations. For example, all operating systems have a routine for creating a directory. If you want to execute an operating system routine from a program, you must make a system call.
A request by an active process for a service performed by the UNIX system kernel, such as I/O or process creation.

Hardware protection and interrupts

Computer-System Operation
I/O devices and the CPU can execute concurrently.
Each device controller is in charge of a particular device type.
Each device controller has a local buffer.
CPU moves data from/to main memory to/from the local buffers.
I/O is from the device to local buffer of controller.
Device controller informs CPU that it has finished its operation by causing an interrupt.
Common Functions of Interrupts
Interrupt transfers control to the interrupt service routine, generally, through the interrupt vector, which contains the addresses of all the service routines.
Interrupt architecture must save the address of the interrupted instruction.
Incoming interrupts are disabled while another interrupt is being processed to prevent a lost interrupt.
A trap is a software-generated interrupt caused either by an error or a user request.
An operating system is interrupt driven.
Interrupt Handling
The operating system preserves the state of the CPU by storing registers and the program counter.
Determines which type of interrupt has occurred:
polling
vectored interrupt system
Separate segments of code determine what action should be taken for each type of interrupt.
I/O Structure
I/O Interrupts
After I/O starts, control returns to user program only upon I/O completion.
wait instruction idles the CPU until the next interrupt.
wait loop (contention for memory access).
at most one I/O request is outstanding at a time; no simultaneous I/O processing.
After I/O starts, control returns to user program without waiting for I/O completion.
System call – request to the operating system to allow user to wait for I/O completion.
Device-status table contains entry for each I/O device indicating its type, address, and state (not functioning, idle or busy)
Multiple requests for the same device are maintained in a wait queue.
Operating system indexes into I/O device table to determine device status and to modify table entry to include interrupt.
DMA Structure
Used for high-speed I/O devices able to transmit information at close to memory speeds.
Device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention.
Only one interrupt is generated per block, rather than the one interrupt per byte.
Basic Operation
User program (or OS) requests data transfer
The OS finds a buffer (empty buffer for i/p or a full buffer for o/p) from the pool of buffers to transfer.
The Device Driver (part of OS) sets the DMA controller registers to use appropriate source and destination addresses and transfer length.
The DMA controller is instructed to start transfer operation
While transfer is going on the CPU is free to perform other tasks
DMA controller steals cycles from CPU
DMA controller interrupts CPU when the work is done.
Storage Structure
Main memory – only large storage media that the CPU can access directly.
Secondary storage – extension of main memory that provides large nonvolatile storage capacity.
Magnetic disks – rigid metal or glass platters covered with magnetic recording material.
Disk surface is logically divided into tracks, which are subdivided into sectors.
The disk controller determines the logical interaction between the device and the computer.
Storage Hierarchy
Storage systems organized in hierarchy:
speed
cost
volatility
Caching – copying information into faster storage system; main memory can be viewed as a fast cache for secondary storage.
Storage-Device Hierarchy:
Hardware Protection
Dual-Mode Operation

Sharing system resources requires operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly.
Provide hardware support to differentiate between at least two modes of operations.
User mode – execution done on behalf of a user.
Monitor mode (also supervisor mode or system mode) – execution done on behalf of operating system.
Mode bit added to computer hardware to indicate the current mode: monitor (0) or user (1).
When an interrupt or fault occurs hardware switches to monitor mode
Privileged instructions can be issued only in monitor mode.
I/O Protection
All I/O instructions are privileged instructions.
Must ensure that a user program could never gain control of the computer in monitor mode (i.e., a user program that, as part of its execution, stores a new address in the interrupt vector).
Memory Protection
Must provide memory protection at least for the interrupt vector and the interrupt service routines.
In order to have memory protection, add two registers that determine the range of legal addresses a program may access:
base register – holds the smallest legal physical memory address.
limit register – contains the size of the range.
Memory outside the defined range is protected.
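The base/limit check above reduces to a single comparison: an address is legal only if base <= address < base + limit. The register values below are made up for illustration.

```python
# Sketch of the base/limit memory-protection check.
# Register values are illustrative; they can only be set by
# privileged instructions, so user code cannot change them.
BASE, LIMIT = 300040, 120900

def legal(address):
    """True if the address falls inside the program's allotted range."""
    return BASE <= address < BASE + LIMIT

print(legal(300040))   # True  (first legal address)
print(legal(420940))   # False (one past the range -> trap to the OS)
```

On real hardware this comparison happens on every memory access; a failing check raises a trap, putting the OS back in control.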
Protection Hardware:
When executing in monitor mode, the operating system has unrestricted access to both monitor and users’ memory.
The load instructions for the base and limit registers are privileged instructions.
CPU Protection
Timer – interrupts computer after specified period to ensure operating system maintains control.
Timer is decremented every clock tick.
When timer reaches the value 0, an interrupt occurs.
Timer commonly used to implement time sharing.
Timer also used to compute the current time.
Load-timer is a privileged instruction.
General-System Architecture
Given that I/O instructions are privileged, how does the user program perform I/O?
System call – the method used by a process to request action by the operating system.
Usually takes the form of a trap to a specific location in the interrupt vector.
Control passes through the interrupt vector to a service routine in the OS, and the mode bit is set to monitor mode.
The monitor verifies that the parameters are correct and legal, executes the request, and returns control to the instruction following the system call.

Thursday, July 3, 2008

BOOT-STRAP LOADER

Boot-strap Loader
Boot-strapping is a three-stage process. An initial program is loaded by the hardware at boot time. This program reads the second and third stages off the disk into memory. The second stage loads the third stage into memory in the correct address range and initializes the machine. The third stage is Nachos, which runs at the end of the boot sequence.
First Stage - Loading the Kernel
Second Stage - CPU and Protected Mode Initializations
Third Stage - Run-time System Initialization

DIRECT MEMORY ACCESS

DMA (Direct Memory Access)
DMA is a method of transferring data from the computer's RAM to another part of the computer without processing it using the CPU. While most data that is input or output from your computer is processed by the CPU, some data does not require processing, or can be processed by another device. In these situations, DMA can save processing time and is a more efficient way to move data from the computer's memory to other devices.
For example, a sound card may need to access data stored in the computer's RAM, but since it can process the data itself, it may use DMA to bypass the CPU. Video cards that support DMA can also access the system memory and process graphics without needing the CPU. Ultra DMA hard drives use DMA to transfer data faster than previous hard drives that required the data to first be run through the CPU.
In order for devices to use direct memory access, they must be assigned to a DMA channel. Each type of port on a computer has a set of DMA channels that can be assigned to each connected device.
For example, a PCI controller and a hard drive controller each have their own set of DMA channels.

Interrupt Vector

An interrupt vector is the memory address of an interrupt handler, or an index into an array called an interrupt vector table or dispatch table. Interrupt vector tables contain the memory addresses of interrupt handlers. When an interrupt is generated, the processor saves its execution state via a context switch, and begins execution of the interrupt handler at the interrupt vector
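The dispatch-table idea can be modeled directly: the interrupt number indexes a table of handler addresses (here, Python functions standing in for addresses). The interrupt numbers and handler names below are made up for illustration.

```python
# Sketch of an interrupt vector table as a dispatch table.
def clock_handler():
    return "clock tick serviced"

def keyboard_handler():
    return "keyboard input serviced"

def default_handler():
    return "spurious interrupt ignored"

# The "vector table": interrupt number -> handler (address).
VECTOR_TABLE = {0: clock_handler, 1: keyboard_handler}

def on_interrupt(irq):
    handler = VECTOR_TABLE.get(irq, default_handler)  # vectored dispatch
    return handler()

print(on_interrupt(0))   # clock tick serviced
print(on_interrupt(9))   # spurious interrupt ignored
```

Real hardware does the same lookup in a fixed memory region, jumping to the stored address instead of calling a function.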

Interrupt

In computing, an interrupt is an asynchronous signal from hardware indicating the need for attention or a synchronous event in software indicating the need for a change in execution. A hardware interrupt causes the processor to save its state of execution via a context switch, and begin execution of an interrupt handler. Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven. [1]
An act of interrupting is referred to as an interrupt request ("IRQ").

DIRECT MEMORY ACCESS

Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards, and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-on-chip, where each processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly, a processing element inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, overlapping computation and data transfer.
Without DMA, using programmed input/output (PIO) mode for communication with peripheral devices, or load/store instructions in the case of multicore chips, the CPU is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU initiates the transfer, does other operations while the transfer is in progress, and receives an interrupt from the DMA controller once the operation is done. This is especially useful in real-time computing applications where not stalling behind concurrent operations is critical. Another and related application area is various forms of stream processing, where it is essential to have data processing and transfer in parallel in order to achieve sufficient throughput.
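The initiate/overlap/interrupt pattern described above can be mimicked with a background thread standing in for the DMA controller and an `Event` standing in for the completion interrupt. This is only a software analogy; real DMA is performed by dedicated hardware, and all names below are illustrative.

```python
import threading

# Toy model of the DMA pattern: the "CPU" (main thread) starts a
# transfer, keeps doing other work, and is notified when the "DMA
# controller" (a worker thread) finishes.

done = threading.Event()
log = []

def dma_transfer(src, dst):
    dst.extend(src)   # the controller moves the data on its own
    done.set()        # raise the completion "interrupt"

source = [1, 2, 3, 4]
dest = []

threading.Thread(target=dma_transfer, args=(source, dest)).start()

log.append("cpu doing other work")   # CPU stays free during the transfer
done.wait()                          # completion interrupt arrives
log.append("transfer complete")
```

The main thread never copies the data itself, which is exactly the CPU-offloading benefit the paragraph describes.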

Wednesday, July 2, 2008

MEMORY ALLOCATION IN REAL-TIME OPERATING SYSTEM

Memory allocation is even more critical in an RTOS than in other operating systems.
Firstly, speed of allocation is important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block; this is unacceptable in an RTOS, where memory allocation must occur within a fixed time.
Secondly, memory can become fragmented as free regions become separated by regions that are in use. This can cause a program to stall, unable to get memory, even though there is theoretically enough available. Memory allocation algorithms that slowly accumulate fragmentation may work fine for desktop machines—when rebooted every month or so—but are unacceptable for embedded systems that often run for years without rebooting.
The simple fixed-size-blocks algorithm works astonishingly well for simple embedded systems.
Another real strength of fixed-size blocks is in DSP systems, particularly where one core performs one section of the pipeline and the next section runs on another core. In this case, fixed-size buffer management, with one core filling the buffers and another set of cores returning them, is very efficient. A DSP-optimized RTOS such as the Unison Operating System or DSPnano RTOS provides these features.
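The fixed-size-blocks scheme can be sketched as a pre-allocated pool with a free list: allocation and deallocation are single pop/push operations, so they run in constant time and the pool cannot fragment. Class and parameter names below are invented for the example, not taken from any particular RTOS.

```python
# Sketch of a fixed-size block pool: every block is allocated up front,
# and the free list is a simple stack, so alloc/free are O(1) and there
# is no fragmentation (all blocks are interchangeable).

class BlockPool:
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        # Pre-allocate all blocks at startup, as an RTOS would.
        self.free_list = [bytearray(block_size) for _ in range(num_blocks)]

    def alloc(self):
        # Constant time: pop a block, or fail immediately - no scanning.
        return self.free_list.pop() if self.free_list else None

    def free(self, block):
        # Constant time: push the block back onto the free list.
        self.free_list.append(block)
```

Note that exhaustion is reported immediately rather than by searching ever longer lists, which is what makes the worst-case allocation time bounded.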

TIME-SHARING

  • Time-sharing is the technique of scheduling a computer's time so that it is shared across multiple tasks and multiple users, with each user having the illusion that his or her computation is going on continuously.
  • It is a mode of operation that allows multiple independent users to share the resources of a multiuser computer system, including the CPU, bus, and memory. Generally, it is accomplished by interleaving computer usage, with the independent applications or users taking turns using computer resources, although the appearance may be of concurrent usage.

HISTORY OF TIME-SHARING

John McCarthy, the inventor of LISP (List Processing Language), first imagined this technique in the late 1950s. The first timesharing operating systems, BBN's "Little Hospital" and CTSS (Compatible Time-Sharing System), were deployed in 1962-63. The early hacker culture of the 1960s and 1970s grew up around the first generation of relatively cheap timesharing computers, notably the DEC (Digital Equipment Corporation) 10, 11, and VAX (Virtual Address eXtension) lines. But these were only cheap in a relative sense; though quite a bit less powerful than today's personal computers, they had to be shared by dozens or even hundreds of people each. The early hacker communities nucleated around places where it was relatively easy to get access to a timesharing account.

TIME SHARING SYSTEMS

Time-sharing system:
Time-sharing refers to sharing a computing resource among many users by multitasking.
Because early mainframes and minicomputers were extremely expensive, it was rarely possible to allow a single user exclusive access to the machine for interactive use. But because computers in interactive use often spend much of their time idly waiting for user input, it was suggested that multiple users could share a machine by allocating one user's idle time to service other users. Similarly, small slices of time spent waiting for disk, tape, or network input could be granted to other users.
Throughout the late 1960s and the 1970s, computer terminals were multiplexed onto large institutional mainframe computers (central computer systems), which in many implementations sequentially polled the terminals to see if there was any additional data or action requested by the computer user. Later interconnection technologies were interrupt-driven, and some used parallel data transfer technologies such as the IEEE 488 standard. Generally, computer terminals were utilized on college properties in much the same places as desktop computers or personal computers are found today. In the earliest days of personal computers, many were in fact used as particularly smart terminals for time-sharing systems.
With the rise of microcomputing in the early 1980s, time-sharing faded into the background because the individual microprocessors were sufficiently inexpensive that a single person could have all the CPU time dedicated solely to their needs, even when idle.
The Internet has brought the general concept of time-sharing back into popularity. Expensive corporate server farms costing millions can host thousands of customers all sharing the same common resources. As with the early serial terminals, websites operate primarily in bursts of activity followed by periods of idle time. This bursting nature permits the service to be used by many website customers at once, and none of them notice any delays in communications until the servers start to get very busy.

DIFFERENCE BETWEEN DISTRIBUTED OS AND NETWORK OS


Difference between Network OS and Distributed OS:
A network OS manages a collection of networked computers and lets users create, maintain, and transfer files across that network; each machine runs its own copy of the operating system, and users are aware of which machine they are using. A distributed OS manages the same kind of networked hardware but presents it to users as a single system, hiding which machine actually runs a job or stores a file. Because it must coordinate resources across machines transparently, a distributed OS typically demands more of the underlying platform, such as larger memory and faster processors.

Difference between a loosely coupled system and a tightly coupled system

What is the difference between a loosely coupled system and a tightly coupled system? Give examples

One feature that commonly characterizes tightly coupled systems is that they share the clock.
Therefore multiprocessors are typically tightly coupled but distributed workstations on a network are not.
Another difference is that in a tightly-coupled system, the delay experienced when a message is sent from one computer to another is short, and the data rate is high; that is, the number of bits per second that can be transferred is large. In a loosely-coupled system, the opposite is true: the intermachine message delay is large and the data rate is low. For example, two CPU chips on the same printed circuit board and connected by wires etched onto the board are likely to be tightly coupled, whereas two computers connected by a 2400 bit/sec modem over the telephone system are certain to be loosely coupled.

LOOSELY COUPLED SYSTEMS

Loose coupling describes an approach where integration interfaces are developed with minimal assumptions between the sending/receiving parties, thus reducing the risk that a change in one application/module will force a change in another application/module.
Loose coupling has multiple dimensions. Integration between two applications may be loosely coupled in time using message-oriented middleware, meaning the availability of one system does not affect the other. Alternatively, integration may be loosely coupled in format using middleware to perform data transformation, meaning differences in data models do not prevent integration. In Web Services or Service-Oriented Architecture, loose coupling may mean simply that the implementation is hidden from the caller.
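Loose coupling in time can be sketched with a simple message queue: the sender deposits a message and moves on, with no requirement that the receiver be available at that moment. Here Python's `queue.Queue` stands in for real message-oriented middleware; the message content is invented for the example.

```python
import queue

# A mailbox decouples sender and receiver in time: the sender does not
# block on, or even know about, the receiver's availability.

mailbox = queue.Queue()

def sender(msg):
    mailbox.put(msg)       # fire and forget; no receiver required yet

def receiver():
    return mailbox.get()   # picks the message up whenever it is ready
```

If the receiver's code changes internally, the sender is unaffected as long as the message format stays the same, which is the risk reduction the paragraph describes.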

DRAWBACK OF DISTRIBUTED SYSTEM

If not planned properly, a distributed system can decrease the overall reliability of computations if the unavailability of a node can cause disruption of the other nodes. Leslie Lamport famously quipped: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." Troubleshooting and diagnosing problems in a distributed system can also become more difficult, because the analysis may require connecting to remote nodes or inspecting communication between nodes.
Many types of computation are not well suited for distributed environments, typically owing to the amount of network communication or synchronization that would be required between nodes. If bandwidth, latency, or communication requirements are too significant, then the benefits of distributed computing may be negated and the performance may be worse than in a non-distributed environment.

What are the main types of system calls? Describe their purpose.

Process control:
End, abort; load, execute; create process, terminate process; get process attributes, set process attributes; wait for time; wait event, signal event; allocate and free memory.
File management:
Create file, delete file; open, close; read, write, reposition; get file attributes, set file attributes.
Device management :
Request device, release device; read, write, reposition; get device attributes, set device attributes; Logically attach or detach devices.
Information maintenance :
Get time or date, set time or date; get system data, set system data; get process, file, or device attributes; set process, file, or device attributes.
Communications:
Create, delete communication connection; send, receive messages; transfer status information;
Attach or detach remote devices.
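Several of these categories can be seen through Python's `os` module, which wraps the underlying system calls (POSIX call names assumed). The file path below is generated for the example.

```python
import os
import tempfile
import time

# File management: open, write, reposition (lseek), read, close -
# each os.* call here maps to a system call of the same category.
path = os.path.join(tempfile.gettempdir(), "syscall_demo_%d.txt" % os.getpid())
fd = os.open(path, os.O_RDWR | os.O_CREAT)
os.write(fd, b"hello")
os.lseek(fd, 0, os.SEEK_SET)   # reposition back to the start
data = os.read(fd, 5)
os.close(fd)
os.unlink(path)                # delete file

# Information maintenance: get process attributes and the time of day.
pid = os.getpid()
now = time.time()
```

Process-control calls such as `fork`/`exec` and the communications calls follow the same pattern but are platform-specific, so they are omitted here.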

Tuesday, July 1, 2008

Distributed system

Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime.
In distributed computing, a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but parallel computing is most commonly used to describe program parts running simultaneously on multiple processors in the same computer. Both types of processing require dividing a program into parts that can run simultaneously, but distributed programs often must deal with heterogeneous environments, network links of varying latencies, and unpredictable failures in the network or the computers.

PROCESS

When a process is created, it needs to wait for the process scheduler (of the operating system) to set its status to "waiting" and load it into main memory from a secondary storage device (such as a hard disk or a CD-ROM).

Once the process has been assigned to a processor by a short-term scheduler, a context switch is performed (loading the process into the processor) and the process state is set to "running", where the processor executes its instructions. If a process needs to wait for a resource (such as waiting for user input, or waiting for a file to become available), it is moved into the "blocked" state until it no longer needs to wait; then it is moved back into the "waiting" state.

Once the process finishes execution, or is terminated by the operating system, it is moved to the "terminated" state, where it waits to be removed from main memory.
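The lifecycle above can be sketched as a small state machine. The transition table follows this post's terminology (where "waiting" means ready-to-run); the class and state names are chosen for illustration.

```python
# Legal transitions between process states, per the paragraph above:
# created -> waiting -> running -> {blocked, waiting, terminated},
# blocked -> waiting (when the awaited resource becomes available).

TRANSITIONS = {
    "created":    {"waiting"},                           # admitted by scheduler
    "waiting":    {"running"},                           # dispatched (context switch)
    "running":    {"blocked", "waiting", "terminated"},  # wait, preempt, or exit
    "blocked":    {"waiting"},                           # resource now available
    "terminated": set(),                                 # awaiting removal
}

class Process:
    def __init__(self):
        self.state = "created"

    def move(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError("illegal transition %s -> %s" % (self.state, new_state))
        self.state = new_state
```

Encoding the table explicitly makes illegal moves (such as `blocked` straight to `running`) fail loudly, mirroring how an OS never dispatches a blocked process directly.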