
  • The thread library schedules user-level threads to run on LWPs (lightweight processes)
  • Thread management is done by the user-level threads library
  • The thread library is responsible for scheduling user threads on the available schedulable entities; this makes thread context switching very fast, as it avoids system calls. However, it increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the user-land scheduler and the kernel scheduler.

MANY-TO-ONE MODEL

• Many user-level threads mapped to single kernel thread.

• Used on systems that do not support kernel threads.




ONE-TO-ONE MODEL

• Each user-level thread maps to a kernel thread.
• Creating a user thread requires creating the corresponding kernel thread.

• Windows NT/2000, OS/2




MANY-TO-MANY MODEL

• Allows many user-level threads to be mapped to many kernel threads.

• Allows the operating system to create a sufficient number of kernel threads.

• Solaris 2, IRIX, HP-UX










KERNEL THREADS

• Supported by the kernel

• Slower to create and manage than user threads

• If a thread performs a blocking system call, the kernel can schedule another thread in the application for execution.

• Multiple threads are able to run in parallel on multiprocessors.

• Examples
- Windows NT/2000
- Solaris 2
- Tru64 UNIX
- BeOS
- Linux

USER THREADS

• Thread management done by a user-level threads library

• Fast to create and manage threads

• If the kernel is single-threaded, then any user-level thread performing a blocking system call will cause the entire process to block

• No need for kernel intervention

• Drawback: all threads run within a single kernel-scheduled process; if one blocks, all block.

• Examples
- POSIX Pthreads (see the sketch below)
- Mach C-threads
- Solaris UI-threads
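
A minimal POSIX Pthreads sketch of creating and joining a thread, as referenced above (the worker function and its argument are placeholders, not part of the original notes):

    #include <pthread.h>
    #include <stdio.h>

    /* Simple thread function: prints its argument and exits. */
    static void *worker(void *arg)
    {
        printf("hello from thread %ld\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;

        /* Create a thread; with a one-to-one library this also creates a kernel thread. */
        if (pthread_create(&tid, NULL, worker, (void *)1L) != 0) {
            perror("pthread_create");
            return 1;
        }

        pthread_join(tid, NULL);   /* wait for the thread to finish */
        return 0;
    }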

BENEFITS OF MULTITHREADING

  • Responsiveness

- Can run even if one thread is blocked or busy

- Web browser example – one thread per client

  • Resource Sharing
  • Economy

- Creating and context-switching threads is low-cost

- Solaris 2: creating a process is roughly 30x slower, and a process context switch roughly 5x slower, than the corresponding thread operations

  • Utilization of MP Architectures – run each thread on a different CPU


  • A thread, also called a lightweight process (LWP), is the basic unit of CPU utilization.

  • It has its own program counter, a register set, and stack space.

  • It shares with its peer threads its code section, data section, and OS resources such as open files and signals, collectively called a task.

  • Single-threaded process has one program counter specifying the location of the next instruction to execute; the process executes instructions sequentially, one at a time, until completion
  • Multi-threaded process has one program counter per thread

INTERPROCESS COMMUNICATION (IPC)

• Mechanism for processes to communicate and to synchronize their actions.

• Message system – processes communicate with each other without resorting to shared variables.

IPC facility provides two operations:
– send(message) – message size fixed or variable
– receive(message)

• If P and Q wish to communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive

• Implementation of communication link
– physical (e.g., shared memory, hardware bus)
– logical (e.g., logical properties)
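
As a concrete illustration, two related UNIX processes can exchange a message over a pipe; a minimal sketch (error handling trimmed for brevity):

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                          /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {                 /* child plays the role of receiver Q */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);   /* receive(message) */
            buf[n > 0 ? n : 0] = '\0';
            printf("Q received: %s\n", buf);
            _exit(0);
        }

        /* parent plays the role of sender P */
        close(fd[0]);
        write(fd[1], "hello", strlen("hello"));              /* send(message) */
        close(fd[1]);
        wait(NULL);
        return 0;
    }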


DIRECT COMMUNICATION

• Processes must name each other explicitly:
  – Symmetric Addressing
    • send(P, message) – send to process P
    • receive(Q, message) – receive from process Q
  – Asymmetric Addressing
    • send(P, message) – send to process P
    • receive(id, message) – receive from any process; the system sets id to the sender

• Properties of Communication Link
  – Links are established automatically between pairs of communicating processes
  – Processes must know each other's ID
  – Exactly one link per pair of communicating processes

•Disadvantage: a process must know the name or ID of the process(es) it wishes to communicate with


INDIRECT COMMUNICATION

• Messages are directed to and received from mailboxes (also referred to as ports).
- Each mailbox has a unique id.
- Processes can communicate only if they share a mailbox.

• Properties of communication link
- Link established only if processes share a common mailbox
- A link may be associated with many processes.
- Each pair of processes may share several communication links.
- Link may be unidirectional or bi-directional.
• Operations
- create a new mailbox
- send and receive messages through mailbox
- destroy a mailbox

• Primitives are defined as:
  - send(A, message) – send a message to mailbox A
  - receive(A, message) – receive a message from mailbox A
  (A message-queue sketch appears at the end of this section.)

• Mailbox sharing – P1, P2, and P3 share mailbox A.
- P1, sends; P2 and P3 receive.
- Who gets the message?

• Solutions
- Allow a link to be associated with at most two processes.
- Allow only one process at a time to execute a receive operation.
- Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was.
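
The mailbox sketch referenced above: POSIX message queues behave much like the mailboxes described in this section. The queue name "/mbox_A", the message sizes, and the permissions are illustrative assumptions (on older glibc, link with -lrt):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

        /* Create (or open) mailbox "A". */
        mqd_t mq = mq_open("/mbox_A", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        /* send(A, message) */
        mq_send(mq, "hello", strlen("hello") + 1, 0);

        /* receive(A, message) – blocks until a message is available */
        char buf[64];
        if (mq_receive(mq, buf, sizeof(buf), NULL) > 0)
            printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink("/mbox_A");
        return 0;
    }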


SYNCHRONIZATION

• Message passing may be either blocking or non-blocking.

Blocking Send: sender blocked until message received by mailbox or process
Nonblocking Send: sender resumes operation immediately after sending
Blocking Receive: receiver blocks until a message is available
Nonblocking Receive: receiver returns immediately with either a valid or null message.

• Blocking is considered synchronous

• Non-blocking is considered asynchronous

• Send and receive primitives may be either blocking or non-blocking.
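
A non-blocking receive can be approximated with the same hypothetical mailbox by opening the queue with O_NONBLOCK and treating EAGAIN as "no message available"; this sketch assumes the queue from the previous example already exists:

    #include <errno.h>
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open the mailbox without blocking semantics. */
        mqd_t mq = mq_open("/mbox_A", O_RDONLY | O_NONBLOCK);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        char buf[64];                      /* must be at least the queue's mq_msgsize */
        if (mq_receive(mq, buf, sizeof(buf), NULL) == -1 && errno == EAGAIN)
            printf("non-blocking receive: no message available (null message)\n");
        else
            printf("received: %s\n", buf);

        mq_close(mq);
        return 0;
    }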

BUFFERING


• All messaging systems require a framework to temporarily buffer messages. These queues are implemented in one of three ways:

1.) Zero Capacity – 0 messages
Sender must wait for receiver (rendezvous).

2.) Bounded Capacity – finite length of n messages
Sender must wait if link full.

3.) Unbounded Capacity – infinite length. Sender never waits.

PRODUCER-CONSUMER EXAMPLE

• One process generates data – the producer

• The other process uses it – the consumer

• If directly connected – time coordination

- How would they coordinate in time?





• If not directly connected – have a buffer

- Buffer must be accessible to both

- Finite capacity N; number of slots in use K (see the bounded-buffer sketch below)
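
The bounded-buffer sketch referenced above, using POSIX threads; the capacity N, the item type, and the loop counts are arbitrary choices for illustration:

    #include <pthread.h>
    #include <stdio.h>

    #define N 8                                 /* finite capacity N */

    static int buffer[N];
    static int count = 0;                       /* number of slots in use (K) */
    static int in = 0, out = 0;
    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    static void *producer(void *arg)
    {
        for (int i = 0; i < 20; i++) {
            pthread_mutex_lock(&lock);
            while (count == N)                  /* buffer full: producer waits */
                pthread_cond_wait(&not_full, &lock);
            buffer[in] = i;                     /* produce an item */
            in = (in + 1) % N;
            count++;
            pthread_cond_signal(&not_empty);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < 20; i++) {
            pthread_mutex_lock(&lock);
            while (count == 0)                  /* buffer empty: consumer waits */
                pthread_cond_wait(&not_empty, &lock);
            int item = buffer[out];             /* consume an item */
            out = (out + 1) % N;
            count--;
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&lock);
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }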










COOPERATING PROCESSES

• Independent process cannot affect or be affected by the execution of another process.

• Cooperating process can affect or be affected by the execution of another process.

• Advantages of process cooperation
– Information sharing
– Computation speed-up
– Modularity
– Convenience

OS should be able to create and delete processes dynamically.
A. PROCESS CREATION
  • When the OS or a user process decides to create a new process, it can proceed as follows:
    - Assign a new process identifier and add its entry to the primary process table.
    - Allocate space for the process (program + data) and the user stack. The amount of space required can be set to default values depending on the process type. If a user process spawns a new process, the parent process can pass these values to the OS.
    - Create process control block.
    - Set the appropriate linkage, e.g., add the process to the ready queue.
    - Create other necessary data structures (e.g. to store accounting information).
  • Parent process creates child processes, which, in turn, create other processes, forming a tree of processes.
  • Resource sharing possibilities
    - Parent and children share all resources.
    - Children share subset of parent’s resources.
    - Parent and child share no resources.
  • Execution possibilities
    - Parent and children execute concurrently.
    - Parent waits until children terminate.
  • Address space possibilities
    - Child duplicate of parent.
    - Child has a program loaded into it.

B. PROCESS TERMINATION

  • A process terminates when it executes its last statement and asks the operating system to delete it by using the exit system call. At that time:
    - The child may return output data to its parent (collected via wait).
    - The process's resources are deallocated by the operating system.
  • Parent may terminate execution of child processes via an appropriate system call (e.g. abort). A parent may terminate the execution of one of its children for the following reasons:
    - Child has exceeded allocated resources.
    - Task assigned to child is no longer required.
    - Parent is exiting.
  • Some operating systems do not allow a child to continue if its parent terminates.
    - Cascading termination.
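
On UNIX-like systems these ideas map onto the fork, exec, wait, and exit system calls; a minimal sketch (the program being exec'ed, /bin/ls, is just an example):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* create a child: new PID, new PCB */

        if (pid == 0) {
            /* Child: load a new program into its address space. */
            execl("/bin/ls", "ls", (char *)NULL);
            _exit(1);                       /* reached only if exec fails */
        } else if (pid > 0) {
            int status;
            wait(&status);                  /* parent waits and collects the child's exit status */
            printf("child %d terminated with status %d\n",
                   (int)pid, WEXITSTATUS(status));
        } else {
            perror("fork");
            return 1;
        }
        return 0;
    }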

A. PROCESS SCHEDULING QUEUES



  • Job Queue – when a process enters the system, it is put in the job queue.

  • Ready Queue – the set of all processes residing in main memory that are ready and waiting to execute.
  • Device Queues – there may be many processes in the system requesting I/O. Since only one I/O request can be serviced by a particular device at a time, a process needing I/O may have to wait. The list of processes waiting for an I/O device is kept in a device queue for that particular device.

An example of a ready queue and various device queues is shown below.


B. SCHEDULERS

  • A process may migrate between the various queues.
  • The OS must select, for scheduling purposes, processes from these queues.
  • Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
    - It is invoked very infrequently (seconds, minutes) ⇒ (may be slow).
    - It controls the degree of multiprogramming.
  • Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU.
    - Short-term scheduler is invoked very frequently (milliseconds) ⇒ (must be fast).
  • Medium-term scheduler – selects which partially executed, swapped-out process should be brought back into the ready queue.



C. CONTEXT SWITCH

  • When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process – this is called a context switch.

  • The time it takes is dependent on hardware support.

  • Context-switch time is overhead; the system does no useful work while switching.

Steps in Context Switching

  • Save context of processor including program counter and other registers.
  • Update the PCB of the running process with its new state and other associate information.
  • Move the PCB to the appropriate queue - ready, blocked, etc.
  • Select another process for execution.
  • Update PCB of the selected process.
  • Restore CPU context from that of the selected process.
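
The kernel's context switch cannot be written portably in user code, but the POSIX ucontext API (deprecated yet still widely available) gives a user-space analogue of saving one CPU context and restoring another; a hedged sketch:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];          /* stack space for the second context */

    static void task(void)
    {
        printf("running in the second context\n");
        /* Returning resumes uc_link (main_ctx below). */
    }

    int main(void)
    {
        getcontext(&task_ctx);                  /* save the current CPU context as a template */
        task_ctx.uc_stack.ss_sp   = task_stack;
        task_ctx.uc_stack.ss_size = sizeof(task_stack);
        task_ctx.uc_link          = &main_ctx;  /* where to resume when task() returns */
        makecontext(&task_ctx, task, 0);

        printf("switching context...\n");
        swapcontext(&main_ctx, &task_ctx);      /* save main's registers, load task's */
        printf("back in the original context\n");
        return 0;
    }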





An operating system executes a variety of programs:

  • Batch System – jobs
  • Time-Shared Systems – user programs or tasks
  • Textbook uses the terms job and process almost interchangeably.

    PROCESS – a program in execution; process execution must progress in sequential fashion.
A process includes:
  • Program Counter

  • Stack

  • Data Section

A. PROCESS STATE



A process can be in one of many possible states:

  • New: The process is being created.

  • Running: Instructions are being executed.

  • Waiting: The process is waiting for some event to occur.

  • Ready: The process is waiting to be assigned to a processor.

  • Terminated: The process has finished execution.


PROCESS TRANSITIONS

As a process executes, it changes its state


Fig. Process State Transition Diagram

B. PROCESS CONTROL BLOCK

  • Each process in the operating system is represented by a process control block (PCB) – also called a task control block.

  • Information associated with each process includes:
    - Process state – new, ready, running, waiting...
    - Process identification information
    ° Unique process identifier (PID) - indexes (directly or indirectly) into the process table.
    ° User identifier (UID) - the user who is responsible for the job.
    ° Identifier of the process that created this process (PPID).
    - Program counter – To indicate the next instruction to be executed for this process.
    - CPU registers – include index registers, general-purpose registers, etc., so that the process can be restarted correctly after an interrupt occurs.
    - CPU scheduling information – such as process priority, pointers to scheduling queues, etc.
    - Memory-management information – includes base and limit registers, page tables, etc.
    - Accounting information – Amount of CPU and real time used, time limits, account number, job or process numbers and so on.
    - I/O status information – List of I/O devices allocated to this process, a list of open files etc.
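
The exact PCB layout is OS-specific; a simplified C sketch covering the fields listed above (all names, types, and sizes are illustrative assumptions, not any real kernel's definition):

    #include <stdint.h>
    #include <sys/types.h>

    /* Illustrative process states (see the process-state list above). */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Hypothetical saved CPU context. */
    struct cpu_context {
        uint64_t pc;                  /* program counter */
        uint64_t sp;                  /* stack pointer */
        uint64_t regs[16];            /* general-purpose registers */
    };

    /* Simplified process control block. */
    struct pcb {
        pid_t pid, ppid;              /* process and parent identifiers */
        uid_t uid;                    /* responsible user */
        enum proc_state state;        /* new, ready, running, waiting, ... */
        struct cpu_context ctx;       /* saved registers and program counter */
        int priority;                 /* CPU scheduling information */
        void *page_table;             /* memory-management information */
        uint64_t cpu_time_used;       /* accounting information */
        int open_files[32];           /* I/O status: open file descriptors */
        struct pcb *next;             /* linkage into the ready or a device queue */
    };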





C. THREADS


  • A thread, also called a lightweight process (LWP), is the basic unit of CPU utilization.

  • It has its own program counter, a register set, and stack space.

  • It shares with its peer threads its code section, data section, and OS resources such as open files and signals, collectively called a task.

  • The idea of a thread is that a process has five fundamental parts: code ("text"), data, stack, file I/O, and signal tables. "Heavy-weight processes" (HWPs) have a significant amount of overhead when switching: all the tables have to be flushed from the processor for each task switch. Also, the only way to achieve shared information between HWPs is through pipes and "shared memory". If a HWP spawns a child HWP using fork(), the only part that is shared is the text.
  • Threads reduce overhead by sharing fundamental parts. By sharing these parts, switching happens much more frequently and efficiently. Also, sharing information is not so "difficult" anymore: everything can be shared.

User-Level and Kernel-Level Threads

  • There are two types of threads: user-level and kernel-level.
  • User-level avoids the kernel and manages the tables itself.
  • These threads are implemented in user-level libraries rather than via system calls.
  • Often this is called "cooperative multitasking" where the task defines a set of routines that get "switched to" by manipulating the stack pointer.
  • Typically each thread "gives up" the CPU by calling an explicit switch, sending a signal, or doing an operation that involves the switcher. A timer signal can also force switches.
  • User threads typically can switch faster than kernel threads.

Thread States

  • Threads can be in one of several states: ready, blocked, running, or terminated.
  • Like processes, threads share the CPU, and only one thread at a time is in the running state.




























1. What are the major activities of an Operating System with regards to Process Management?




The operating system is responsible for the following activities in connection with process management.

Process creation and deletion.

  • Process suspension (process is in I/O wait queue, or “swapped” out to disk, …) and resumption (move to ready queue or execution) – manage the state of the process.
  • Provision of mechanisms for:
  1. Process synchronization - concurrent processing is supported thus the need for synchronization of processes or threads.
  2. Process communication
  3. Deadlock handling


2. What are the major activities of an Operating System with regards to Memory Management?

  • Keep track of which parts of memory are currently being used and by whom.
  • Decide which processes to load when memory space becomes available - long term or medium term scheduler.
  • Mapping addresses in a process to absolute memory addresses - at load time or run time.
  • Allocate and deallocate memory space as needed.
  • Memory partitioning, allocation, paging (VM), address translation, defrag, …
  • Memory protection


3. What are the major activities of an Operating System with regards to Secondary-Storage Management?
  • Free space management
  • Storage allocation
  • Disk scheduling – minimize seeks (arm movement … very slow operation)
  • Disk as the media for mapping virtual memory space
  • Disk caching for performance
  • Disk utilities: defrag, recovery of lost clusters, etc.


4. What are the major activities of an Operating System with regards to File Management?

    • File creation and deletion - system calls or commands.
    • Directory creation and deletion - system calls or commands.
    • Support of primitives for manipulating files and directories in an efficient manner - system calls or commands.
    • Mapping files onto secondary storage.
    • File backup on stable (nonvolatile) storage media.
      EX: File Allocation Table (FAT) for Windows/PC systems


    5. What is the purpose of the command-interpreter?

    • The program that reads and interprets control statements is variously called:
      - command-line interpreter (the "control-card interpreter" in the old batch days)
      - shell (in UNIX)
      - COMMAND.COM (in MS-DOS)
      Its function is to get and execute the next command statement.



    SYSTEM BOOT

    - Operating system must be made available to hardware so the hardware can start it

    • Small piece of code – bootstrap loader, locates the kernel, loads it into memory, and starts it
    • Sometimes two-step process where boot block at fixed location loads bootstrap loader
    • When power initialized on system, execution starts at a fixed memory location

    - Firmware used to hold initial boot code

    - Operating systems are designed to run on any of a class of machines; the system must be configured for each specific computer site

    - SYSGEN program obtains information concerning the specific configuration of the hardware system

    - Booting – starting a computer by loading the kernel

    - Bootstrap program – code stored in ROM that is able to locate the kernel, load it into memory, and start its execution

    VIRTUAL MACHINES

    - A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the operating system kernel as though they were all hardware.

    - A virtual machine provides an interface identical to the underlying bare hardware

    - The operating system host creates the illusion that a process has its own processor and its own (virtual) memory
    - Each guest provided with a (virtual) copy of underlying computer

    VIRTUAL MACHINE HISTORY AND BENEFITS

    - First appeared commercially in IBM mainframes in 1972
    - Fundamentally, multiple execution environments (different operating systems) can share the same hardware
    - Protected from each other
    - Some sharing of files can be permitted and controlled
    - Communicate with each other and with other physical systems via networking
    - Useful for development and testing
    - Consolidation of many low-resource-use systems onto fewer, busier systems
    - "Open Virtual Machine Format", a standard format for virtual machines, allows a VM to run within many different virtual machine (host) platforms






    EXAMPLES




    JAVA VIRTUAL MACHINE


    - Compiled Java programs are platform-neutral bytecodes executed by a Java Virtual Machine (JVM)
    - JVM consists of

    • Class loader
    • Class verifier
    • Runtime interpreter


    - Just-In-Time (JIT) compilers increase performance




    Simple Structure


    -->View the OS as a series of levels
    -->Each level performs a related subset of functions
    -->Each level relies on the next lower level to perform more primitive functions
    -->This decomposes a problem into a number of more manageable subproblems


    Layered Approach


    The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

    With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers.


    <--MS-DOS Layered Structure-->









    System calls provide the interface between a process and the operating system. These calls are generally available as assembly-language instructions.


    Some systems also allow system calls to be made from a high-level language, such as C.

    Three general methods are used to pass parameters between a running program and the operating system.
    - Pass parameters in registers.
    - Store the parameters in a table in memory, and the table address is passed as a parameter in a register.
    - The program pushes (stores) the parameters onto the stack, and the operating system pops them off the stack.
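
    On Linux, the register-based convention can be observed through the generic syscall() wrapper, which hands the system-call number and its parameters to the kernel (placed into registers by the ABI); a small sketch:

        #define _GNU_SOURCE
        #include <sys/syscall.h>
        #include <unistd.h>

        int main(void)
        {
            const char msg[] = "hello via a raw system call\n";

            /* SYS_write's parameters (fd, buffer, length) are loaded into
               registers before the trap into the kernel. */
            syscall(SYS_write, 1, msg, sizeof(msg) - 1);
            return 0;
        }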



    <--Types of System Calls-->




    -->Process control – load, execute, abort, end, create process, allocate and free memory, wait event etc.

    -->File management – Create file, delete file, open, close, read, write, get file attribute etc.

    -->Device management – Request device, release device, read, write, logically attach or detach device etc.

    -->Information maintenance – Get time and date, set time and date, get process attribute etc.
    -->Communications – create, close communication connection, send, receive messages, etc.


    Program execution – system capability to load a program into memory and to run it.

    I/O operations – since user programs cannot execute I/O operations directly, the operating system must provide some means to perform I/O.

    File-system manipulation – program capability to read, write, create, and delete files.

    Communications – exchange of information between processes executing either on the same computer or on different systems tied together by a network. Implemented via shared memory or message passing.

    Error detection – ensure correct computing by detecting errors in the CPU and memory hardware, in I/O devices, or in user programs.


    Additional functions exist not for helping the user, but rather for ensuring efficient system operations.
    • Resource allocation – allocating resources to multiple users or multiple jobs running at the same time.
    • Preemptable, nonpreemptable resources
    • Deadlock prevention and detection models
    • Accounting – keep track of and record which users use how much and what kinds of computer resources, for account billing or for accumulating usage statistics.
    • Protection – ensuring that all access to system resources is controlled.

    SYSTEM COMPONENTS



    Process Management



    A process is a program in execution. A process needs certain resources: CPU time, memory (address space), files, and I/O devices to accomplish its task.



    The operating system is responsible for the following activities in connection with process management.
    -->Process creation and deletion.
    -->Process suspension and resumption.
    -->Provision of mechanisms for:
    -->Process synchronization
    -->Process communication



    Main Memory Management




    Memory is a large array of words or bytes, each with its own address. It is a repository of instructions and data shared by the CPU and I/O devices. Main memory is a volatile storage device. It loses its contents in the case of system failure.




    The operating system is responsible for the following activities in connections with memory management:




    -->Decide which processes to load when memory space becomes available.
    -->Allocate and deallocate memory space as needed. Keep track of which parts of memory are currently being used and by whom.



    File Management




    A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data.



    The operating system is responsible for the following activities in connections with file management:




    -->File creation and deletion.
    -->Directory creation and deletion.
    -->Support of primitives for manipulating files and directories.
    -->Mapping files onto secondary storage.
    -->File backup on stable (nonvolatile) storage media.


    Before mounting,
    -->Files on the floppy are inaccessible

    After mounting the floppy on b,
    -->Files on the floppy are part of the file hierarchy


    I/O System Management


    The I/O system consists of:
    -->A buffer-caching system
    -->A general device-driver interface
    -->Drivers for specific hardware devices

    Secondary Storage Management


    Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory.

    Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.

    The operating system is responsible for the following activities in connection with disk management:


    -->Free space management
    -->Storage allocation
    -->Disk scheduling

    Protection System


    Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.

    The protection mechanism must:


    -->Distinguish between authorized and unauthorized usage.
    -->Specify the controls to be imposed.
    -->Provide a means of enforcement.

    Command-Interpreter System

    The command interpreter reads commands from the user or from a file of commands and executes them, usually by turning them into one or more system calls. It is usually not part of the kernel, since the command interpreter is subject to change.


    The program that reads and interprets control statements is called variously:
    -->command-line interpreter
    -->shell (in UNIX)

    Its function is to get and execute the next command statement






