The Great Symbian


Computer programs must be in main memory (also called random-access memory or RAM) to be executed. Main memory is the only large storage area (millions to billions of bytes) that the processor can access directly. It is implemented in a semiconductor technology called dynamic random-access memory (DRAM), which forms an array of memory words. Each word has its own address. Interaction is achieved through a sequence of load or store instructions to specific memory addresses. The load instruction moves a word from main memory to a register in the CPU; the store instruction moves the content of a register to main memory. Aside from explicit loads and stores, the CPU automatically loads instructions from main memory for execution.
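The load/store interaction above can be sketched as a toy simulation (Python, with main memory as a word array and registers as a dict; nothing here models real hardware):

```python
# Toy model: main memory as an array of words, CPU registers as a dict.
memory = [0] * 1024              # main memory: addressable words
registers = {"r1": 0, "r2": 0}   # a couple of CPU registers

def load(reg, addr):
    """LOAD: copy a word from main memory into a CPU register."""
    registers[reg] = memory[addr]

def store(reg, addr):
    """STORE: copy a CPU register's content into main memory."""
    memory[addr] = registers[reg]

memory[100] = 42
load("r1", 100)                        # r1 now holds 42
registers["r2"] = registers["r1"] + 1  # compute in registers
store("r2", 101)                       # memory[101] now holds 43
```

Note the direction of each transfer: all computation happens in registers, and load/store are the only way data crosses between registers and memory.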

Direct Memory Access (DMA) Structure
• Used for high-speed I/O devices able to transmit information at close to memory speeds.
• Device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention.
• Only one interrupt is generated per block, rather than one interrupt per byte.
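The interrupt-count saving can be checked with a small sketch (the block size and transfer size below are made-up example values):

```python
# Compare interrupt counts: byte-at-a-time I/O raises one interrupt per
# byte, while DMA raises only one interrupt per transferred block.
def interrupts_per_byte(total_bytes):
    return total_bytes                 # one interrupt per byte

def interrupts_with_dma(total_bytes, block_size):
    # one interrupt per block; round up to cover a partial final block
    return -(-total_bytes // block_size)

total = 4096
print(interrupts_per_byte(total))        # 4096 interrupts
print(interrupts_with_dma(total, 512))   # 8 interrupts
```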

• When executing in kernel mode, the operating system has unrestricted access to both kernel and user memory.

• The load instructions for the base and limit registers are privileged instructions.


• Sharing system resources requires the operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly.

• Provides hardware support to differentiate between at least two modes of operations.

1. User mode – execution done on behalf of a user.

2. Kernel mode (also monitor mode, supervisor mode or system mode) – execution done on behalf of operating system.

• Mode bit added to computer hardware to indicate the current mode: kernel (0) or user (1).

• When an interrupt or fault occurs, hardware switches to kernel mode.

• Privileged instructions can be issued only in kernel mode.


• All I/O instructions are privileged instructions.

• Given that I/O instructions are privileged, how does the user program perform I/O?

• System call

– the method used by a process to request action by the operating system.

– Usually takes the form of a trap to a specific location in the interrupt vector.

– Control passes through the interrupt vector to a service routine in the OS, and the mode bit is set to kernel mode.

– The kernel verifies that the parameters are correct and legal, executes the request, and returns control to the instruction following the system call.
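The trap sequence above can be sketched as a toy dispatch loop (the syscall table, service names, and numbers are invented for illustration; the kernel (0) / user (1) mode-bit convention follows these notes):

```python
# Toy model of a system call: a trap switches the mode bit to kernel,
# the kernel validates the request, executes it, and returns in user mode.
KERNEL, USER = 0, 1          # mode-bit convention from these notes
mode = USER

def sys_write(text):
    return len(text)         # stand-in for a real I/O service routine

syscall_table = {1: sys_write}   # plays the role of the interrupt vector

def trap(syscall_number, *args):
    global mode
    mode = KERNEL                    # hardware switches to kernel mode
    try:
        if syscall_number not in syscall_table:
            raise ValueError("illegal system call")   # kernel verifies request
        return syscall_table[syscall_number](*args)   # execute the service
    finally:
        mode = USER                  # control returns to the user program

print(trap(1, "hello"))   # 5
```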

<--Use of a System Call to Perform an I/O-->

• Must ensure that a user program can never gain control of the computer in kernel mode. Otherwise, undesirable actions could occur, e.g. a user program that, as part of its execution, stores a new address in the interrupt vector.


• Must provide memory protection at least for the interrupt vector and the interrupt service routines.

• In order to have memory protection, add two registers that determine the range of legal addresses a program may access:

– Base Register – holds the smallest legal physical memory address.

– Limit Register – contains the size of the range.

• Memory outside the defined range is protected.
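The base/limit check can be sketched as follows (the register values are made-up examples):

```python
# Toy base/limit protection: an address is legal only when
# base <= addr < base + limit; anything else traps to the OS.
def check_address(addr, base, limit):
    if base <= addr < base + limit:
        return True
    raise MemoryError("addressing error: trap to operating system")

BASE, LIMIT = 300040, 120900                # example register contents
print(check_address(300040, BASE, LIMIT))   # True: first legal address
print(check_address(420939, BASE, LIMIT))   # True: last legal address
```

An access at `base + limit` or beyond raises the error, modeling the trap that protects memory outside the defined range.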


• Timer

– interrupts the computer after a specified period to ensure the operating system maintains control.

– Timer is decremented every clock tick.

– When timer reaches the value 0, an interrupt occurs.

• Timer commonly used to implement time sharing.

• Timer also used to compute the current time.

• Load-timer is a privileged instruction.
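A toy model of the countdown timer described above (the quantum and tick counts are arbitrary example values):

```python
# Toy timer: decremented on every clock tick; reaching zero raises an
# interrupt so the operating system regains control of the CPU.
def run_with_timer(ticks_needed, quantum):
    timer = quantum
    for tick in range(ticks_needed):
        if timer == 0:
            return ("timer interrupt", tick)   # OS regains control here
        timer -= 1                             # decremented each clock tick
    return ("job finished", ticks_needed)

print(run_with_timer(3, 10))    # ('job finished', 3)
print(run_with_timer(50, 10))   # ('timer interrupt', 10)
```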

• Storage systems organized in hierarchy.
– Speed
– Cost
– Size
– Volatility



• Caching – copying information into faster storage system; main memory can be viewed as a cache for secondary storage.

– Improves performance where a large access-time or transfer-rate disparity exists between two components.

– Memory caching: add a cache (a faster, smaller memory) between the CPU and main memory.

--> When need some data, check if it’s in cache

-->If yes, use the data from cache

--> If not, use data from main memory and put a copy in cache

– Disk caching: main memory can be viewed as a cache for disks
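The check-cache-then-memory rule above can be sketched with a dict as the cache (purely illustrative; real caches have fixed sizes and eviction policies):

```python
# Toy cache in front of "main memory": check the cache first, fall back
# to memory on a miss, and keep a copy of the fetched data in the cache.
cache = {}                            # small, fast storage (keyed by address)
main_memory = {0: "a", 1: "b", 2: "c"}

def read(addr):
    if addr in cache:                 # hit: use the data from cache
        return cache[addr], "hit"
    value = main_memory[addr]         # miss: use data from main memory...
    cache[addr] = value               # ...and put a copy in cache
    return value, "miss"

print(read(1))   # ('b', 'miss')  first access goes to main memory
print(read(1))   # ('b', 'hit')   repeat access is served from cache
```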

<--Cache Coherency and Consistency-->
• Cache coherency in multiprocessor systems
- Each CPU has a local cache
- A copy of X may exist in several caches --> must make sure that an update of X in one cache is immediately reflected in all other caches where X resides
- Hardware problem
• Cache consistency in distributed systems
- A master copy of the file resides at the server machine
- Copies of the same file scattered in caches of different client machines
- Must keep the cached copies consistent with the master file
- OS problem

• Main Memory – the only large storage medium that the CPU can access directly.
• Secondary Storage – extension of main memory that provides large nonvolatile storage capacity.
• Magnetic Disks
– rigid metal or glass platters covered with magnetic recording material.
– Disk surface is logically divided into tracks, which are subdivided into sectors.
– The disk controller determines the logical interaction between the device and the computer.
--> Moving Head Mechanism
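As a sketch of how a controller might map a track/sector position to a linear block number, here is the classic cylinder/head/sector to logical-block formula (the geometry values below are assumed, not from any particular disk):

```python
# Toy CHS-to-LBA mapping: tracks at the same arm position across all
# surfaces form a cylinder; sectors traditionally count from 1.
HEADS = 16                # heads (surfaces) per cylinder -- assumed
SECTORS_PER_TRACK = 63    # sectors per track -- assumed

def chs_to_lba(cylinder, head, sector):
    return (cylinder * HEADS + head) * SECTORS_PER_TRACK + (sector - 1)

print(chs_to_lba(0, 0, 1))   # 0: the very first sector on the disk
print(chs_to_lba(1, 0, 1))   # 1008: first sector of the next cylinder
```

This linearization is the kind of logical interaction the disk controller presents to the computer in place of raw platter geometry.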

• Magnetic Tapes

Magnetic tape is a medium for magnetic recording generally consisting of a thin magnetizable coating on a long and narrow strip of plastic. Nearly all recording tape is of this type, whether used for recording audio or video or for computer data storage. It was originally developed in Germany, based on the concept of magnetic wire recording. Devices that record and playback audio and video using magnetic tape are generally called tape recorders and video tape recorders respectively. A device that stores computer data on magnetic tape can be called a tape drive, a tape unit, or a streamer.
Magnetic tape revolutionized the broadcast and recording industries. In an age when all radio (and later television) was live, it allowed programming to be prerecorded. In a time when gramophone records were recorded in one take, it allowed recordings to be created in multiple stages and easily mixed and edited with a minimal loss in quality between generations. It is also one of the key enabling technologies in the development of modern computers. Magnetic tape allowed massive amounts of data to be stored in computers for long periods of time and rapidly accessed when needed.
Today, many other technologies exist that can perform the functions of magnetic tape. In many cases these technologies are replacing tape. Despite this, innovation in the technology continues and tape is still widely used.

Device-status table contains an entry for each I/O device, indicating its type, address, and state.
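Such a table can be sketched as a small list of entries (the device types and addresses below are invented for illustration):

```python
# Toy device-status table: one entry per I/O device, recording its
# type, address, and state, with a lookup keyed by device address.
device_status_table = [
    {"type": "card reader",  "address": 0x1170, "state": "idle"},
    {"type": "line printer", "address": 0x2210, "state": "busy"},
    {"type": "disk unit",    "address": 0x3330, "state": "busy"},
]

def lookup(address):
    for entry in device_status_table:
        if entry["address"] == address:
            return entry
    return None

print(lookup(0x2210)["state"])   # busy
```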




  • Interrupts and Traps. A great deal of the kernel consists of code that is invoked as the result of an interrupt or a trap.
  • While the words "interrupt" and "trap" are often used interchangeably in the context of operating systems, there is a distinct difference between the two.
  • An interrupt is a CPU event that is triggered by some external device.
  • A trap is a CPU event that is triggered by a program. Traps are sometimes called software interrupts. They can be deliberately triggered by a special instruction, or they may be triggered by an illegal instruction or an attempt to access a restricted resource.
    When an interrupt is triggered by an external device, the hardware will save the status of the currently executing process, switch to kernel mode, and enter a routine in the kernel.
  • This routine is a first level interrupt handler. It can either service the interrupt itself or wake up a process that has been waiting for the interrupt to occur.
    When the handler finishes, it usually causes the CPU to resume the process that was interrupted. However, the operating system may schedule another process instead.
    When an executing process requests a service from the kernel using a trap, the process status information is saved, the CPU is placed in kernel mode, and control passes to code in the kernel.
  • This kernel code is called the system service dispatcher. It examines parameters set before the trap was triggered, often information in specific CPU registers, to determine what action is required. Control then passes to the code that performs the desired action.
    When the service is finished, control is returned to either the process that triggered the trap or some other process.
  • Traps can also be triggered by a fault. In this case the usual action is to terminate the offending process. It is possible on some systems for applications to register handlers that will be invoked when certain conditions occur, such as division by zero.
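The handler-registration idea at the end of this list can be sketched as a table mapping fault conditions to handlers (the names are illustrative, not a real OS API):

```python
# Toy fault handling: a faulting process is terminated by default,
# unless it registered a handler for that condition beforehand.
fault_handlers = {}   # condition name -> registered handler function

def register_handler(condition, handler):
    fault_handlers[condition] = handler

def raise_fault(condition):
    if condition in fault_handlers:
        return fault_handlers[condition](condition)   # run the app's handler
    return "process terminated"                       # default action

print(raise_fault("divide-by-zero"))    # process terminated
register_handler("divide-by-zero", lambda c: "handled " + c)
print(raise_fault("divide-by-zero"))    # handled divide-by-zero
```

Real systems expose this through mechanisms such as Unix signal handlers.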

--> Code stored in ROM that is able to locate the kernel, load it into memory, and start its execution
--> In computing, booting is a bootstrapping process that starts operating systems when the user turns on a computer system.
Most computer systems can only execute code found in memory (ROM or RAM); modern operating systems are mostly stored on hard disk drives, live CDs, and USB flash drives. Just after a computer has been turned on, it doesn't have an operating system in memory. The computer's hardware alone cannot perform complicated actions of the operating system, such as loading a program from disk on its own; so a seemingly irresolvable paradox is created: to load the operating system into memory, one appears to need an operating system already installed.

  • In computing, batch processing is a system for processing data with little or no operator intervention. Batches of data are prepared in advance to be processed during regular ‘runs’ (for example, each night). This allows efficient use of the computer and is well suited to applications of a repetitive nature, such as bulk file format conversion, a company payroll, or the production of utility bills.

  • A multiprogrammed OS keeps several jobs (programs) in memory at a time. The operating system picks and begins to execute one of the jobs in memory. Eventually the job may have to wait for some task, such as a tape to be mounted or an I/O operation to complete. In a multiprogrammed OS, the OS is not idle; it simply switches to another job and executes it. As there is almost always some job to execute, the CPU is never idle.
  • Time-sharing is sharing a computing resource among many users by multitasking. Its introduction in the 1960s, and its emergence as the prominent model of computing in the 1970s, represented a major shift in the history of computing. By allowing a large number of users to interact simultaneously with a single computer, time-sharing dramatically lowered the cost of providing computing, while at the same time making the computing experience much more interactive.
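The switch-on-wait behavior of multiprogramming can be sketched as a toy job loop (job steps are abstract 'cpu'/'io' tags; real scheduling is far more involved):

```python
from collections import deque

# Toy multiprogramming: run a job's CPU steps until it must wait for
# I/O, then switch to another ready job instead of letting the CPU idle.
def run_multiprogrammed(jobs):
    """jobs: dict of name -> list of steps, each step 'cpu' or 'io'."""
    ready, trace = deque(jobs), []
    while ready:
        name = ready.popleft()
        while jobs[name] and jobs[name][0] == "cpu":
            jobs[name].pop(0)
            trace.append(name)            # this job holds the CPU
        if jobs[name]:                    # job must wait for I/O:
            jobs[name].pop(0)             # (assume the I/O completes)
            ready.append(name)            # switch to another job; requeue
    return trace

print(run_multiprogrammed({"A": ["cpu", "io", "cpu"], "B": ["cpu"]}))
# ['A', 'B', 'A']  -- the CPU stays busy while any job is runnable
```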


-> Jobs with similar needs are batched together and run through the computer as a group, by an operator or automatic job sequencer. Performance is increased by attempting to keep CPU and I/O devices busy at all times through buffering, off-line operation, spooling, and multiprogramming.


One major development involved timesharing, or timeslicing. The idea of multiprogramming was extended to allow multiple terminals to be connected to the computer, with each in-use terminal being associated with one or more jobs on the computer. The operating system is responsible for switching between the jobs, now often called processes, in such a way that favored user interaction. If the context switches occurred quickly enough, the user had the impression that he or she had direct access to the computer.
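Time-slicing can be sketched as a round-robin loop in which the timer preempts a process after each quantum (the burst times and quantum below are made-up values):

```python
from collections import deque

# Toy round-robin scheduler: each process runs for at most one quantum
# of ticks before the timer interrupt forces a context switch.
def round_robin(burst_times, quantum):
    queue = deque(burst_times.items())   # (name, remaining ticks)
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        schedule.append((name, ran))     # process holds the CPU for 'ran' ticks
        if remaining > ran:
            queue.append((name, remaining - ran))   # preempted: back of queue
    return schedule

print(round_robin({"P1": 5, "P2": 3}, quantum=2))
# [('P1', 2), ('P2', 2), ('P1', 2), ('P2', 1), ('P1', 1)]
```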


-> A real-time system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application. Sensors bring data to the computer. A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail.


-> Networked systems consist of multiple computers that are networked together, usually with a common operating system and shared resources. Users, however, are aware of the different computers that make up the system.


A network, in the simplest terms, is a communication path between two or more systems. Distributed systems depend on networking for their functionality.
1. Client-Server Systems
2. Peer-to-Peer Systems

Distributed systems also consist of multiple computers but differ from networked systems in that the multiple computers are transparent to the user. Often there are redundant resources and a sharing of the workload among the different computers, but this is all transparent to the user.


Handheld systems include personal digital assistants (PDAs) or cellular telephones with connectivity to a network such as the Internet.

Differentiate the design issues of OS between a stand-alone PC and a Workstation connected to a network.

->A stand-alone PC works on its own, while on a workstation connected to a network you can freely share your files and databases with other PCs.

A stand-alone PC is a desktop or laptop computer that is used on its own without requiring a connection to a local area network (LAN) or wide area network (WAN). Although it may be connected to a network, it is still a stand-alone PC as long as the network connection is not mandatory for its general use. In offices throughout the 1990s, millions of stand-alone PCs were hooked up to the local network for file sharing and mainframe access. Today, computers are commonly networked in the home so that family members can share an Internet connection as well as printers, scanners and other peripherals. When the computer is running local applications without Internet access, the machine is technically a stand-alone PC.

A workstation is a high-end microcomputer designed for technical or scientific applications. Intended primarily to be used by one person at a time, workstations are commonly connected to a local area network and run a multiuser operating system. The term workstation has also been used to refer to a mainframe computer terminal or a PC connected to a network.

Operating Systems can be explored from two viewpoints: the user and the system.

The user view of the computer varies by the interface being used. Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse and system unit. Such a system is designed for one user to monopolize its resources, to maximize the work that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance, and none paid to resource utilization.

Some users sit at a terminal connected to a mainframe or minicomputer. Other users are accessing the same computer through other terminals. These users share resources and may exchange information. The operating system is designed to maximize resource utilization.

Other users sit at workstations, connected to networks of other workstations and servers. These users have dedicated resources at their disposal, but they also share resources such as networking and servers.

Recently, many varieties of handheld computers have come into fashion. These devices are mostly standalone, used singly by individual users. Some are connected to networks, either directly by wire or through wireless modems. Due to power and interface limitations they perform relatively few remote operations. These operating systems are designed mostly for individual usability, but performance per amount of battery life is important as well.

Some computers have little or no user view. For example, embedded computers in home devices and automobiles may have numeric keypad, and may turn indicator lights on or off to show status, but mostly they and their operating systems are designed to run without user intervention.


We can view an operating system as a resource allocator. A computer system has many resources - hardware and software - that may be required to solve a problem. The operating system acts as the manager of these resources.

An operating system can also be viewed as a control program that manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

We have no universally accepted definition of what is part of the operating system. A simple viewpoint is that it includes everything a vendor ships when you order “the operating system”.

A more common definition is that the operating system is the one program running at all times on the computer (usually called the kernel), with all else being application programs. This is the one that we generally follow.


-> A client-server system or network is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Often clients and servers operate over a computer network on separate hardware. A server is a high-performance host that shares its resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.

-> Client-server describes the relationship between two computer programs in which one program, the client program, makes a service request to another, the server program. Standard networked functions such as email exchange, web access and database access, are based on the client-server model. For example, a web browser is a client program at the user computer that may access information at any web server in the world. To check your bank account from your computer, a web browser client program in your computer forwards your request to a web server program at the bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank database client, which in turn serves it back to the web browser client in your personal computer, which displays the information for you.
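A minimal runnable sketch of this relationship, using a one-shot TCP server and client on localhost (the request and reply text are invented for illustration):

```python
import socket
import threading

# Toy client-server exchange: the server listens for a request; the
# client initiates the session and receives the server's response.
def demo_client_server():
    server = socket.socket()
    server.bind(("127.0.0.1", 0))            # bind to any free local port
    server.listen(1)

    def serve_once():
        conn, _ = server.accept()            # server awaits incoming requests
        with conn:
            conn.recv(1024)                  # read the client's request
            conn.sendall(b"balance: 1000")   # serve the content back

    threading.Thread(target=serve_once, daemon=True).start()

    client = socket.create_connection(server.getsockname())
    client.sendall(b"GET balance")           # client initiates the session
    reply = client.recv(1024).decode()
    client.close()
    server.close()
    return reply

print(demo_client_server())   # balance: 1000
```

The same request/response shape underlies the web-browser and bank examples above, with HTTP and database protocols layered on top of the sockets.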

A server-based network (i.e., not peer-to-peer):


-> Peer-to-peer (P2P) system or networking is a method of delivering computer network services in which the participants share a portion of their own resources, such as processing power, disk storage, network bandwidth, and printing facilities. Such resources are provided directly to other participants without intermediary network hosts or servers. Peer-to-peer network participants are simultaneously providers and consumers of network services, which contrasts with other service models, such as traditional client-server computing.

A peer-to-peer based network:

- In computing, symmetric multiprocessing or SMP involves a multiprocessor computer architecture where two or more identical processors connect to a single shared main memory. Most common multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors.
SMP systems allow any processor to work on any task no matter where the data for that task are located in memory; with proper operating system support, SMP systems can easily move tasks between processors to balance the workload efficiently.
- Asymmetric multiprocessing or ASMP is a type of multiprocessing supported in DEC's VMS V.3 as well as a number of older systems including TOPS-10 and OS-360. It varies greatly from the standard processing model that we see in personal computers today. Due to the complexity and unique nature of this architecture, it was not adopted by many vendors or programmers during its brief stint between 1970 - 1980.
Whereas a symmetric multiprocessor or SMP treats all of the processing elements in the system identically, an ASMP system assigns certain tasks only to certain processors. In particular, only one processor may be responsible for fielding all of the interrupts in the system or perhaps even performing all of the I/O in the system. This makes the design of the I/O system much simpler, although it tends to limit the ultimate performance of the system. Graphics cards, physics cards and cryptographic accelerators which are subordinate to a CPU in modern computers can be considered a form of asymmetric multiprocessing. SMP is extremely common in the modern computing world; when people refer to "multi core" or "multi processing" they are most commonly referring to SMP.


- An operating system (commonly abbreviated to either OS or O/S) is an interface between hardware and user; it is responsible for the management and coordination of activities and the sharing of the resources of the computer. The operating system acts as a host for computing applications that are run on the machine. As a host, one of the purposes of an operating system is to handle the details of the operation of the hardware. This relieves application programs from having to manage these details and makes it easier to write applications. Almost all computers (including handheld computers, desktop computers, supercomputers, and video game consoles) as well as some robots, domestic appliances (dishwashers, washing machines), and portable media players use an operating system of some type. Some of the oldest models may, however, use an embedded operating system that may be contained on a compact disk or other data storage device.

It is easier to define an operating system by what it does than by what it is, but even this can be tricky. The primary goal of some operating systems is convenience for the user. The primary goal of other operating systems is efficient operation of the computer system. Operating systems and computer architecture have influenced each other a great deal. To facilitate the use of the hardware, researchers developed operating systems. Users of the operating systems then proposed changes in hardware design to simplify them. In this short historical review, notice how identification of operating-system problems led to the introduction of new hardware features.

- Multiprocessor systems with more than one CPU in close communication.
- Tightly coupled system – processors share memory and a clock; communication usually takes place through the shared memory.
- Symmetric multiprocessing (SMP)
·Each processor runs an identical copy of the operating system.
·Many processes can run at once without performance deterioration.
·Most modern operating systems support SMP.
- Asymmetric multiprocessing
·Each processor is assigned a specific task; a master processor schedules and allocates work to slave processors.
· More common in extremely large systems
Advantages of parallel system:
1. Increased throughput
2. Economical
3. Increased reliability
3.1. Graceful degradation
3.2. Fail-soft systems

