The Great Symbian

Anything under the sun goes here!

1. Example of Resource Allocation Graph


Explanation:

· The Resource Allocation Graph (RAG) above is composed of three processes and four resources.
· R1 (resource 1) has only one instance
· R2 has two instances
· R3 has one instance
· R4 has three instances
· P1 requests an instance of R1
· P2 is holding an instance of R1
· P2 requests an instance of R3
· P3 holds an instance of R3
· There is no deadlock, since the graph contains no cycle




2. Example of Resource Allocation Graph


Explanation:

· The Resource Allocation Graph (RAG) above is composed of three processes and four resources.
· R1 (resource 1) has only one instance
· R2 has two instances
· R3 has one instance
· R4 has three instances
· P1 requests an instance of R1
· P2 is holding an instance of R1
· P2 requests an instance of R3
· P3 holds an instance of R3
· P1 and P2 each hold an instance of R2
· P3 requests an instance of R2
· There is a deadlock in RAG no. 2 because both instances of R2 are held by P1 and P2 while P3 is requesting an instance of R2. Since R2 cannot grant any instance to P3, and P1 and P2 are themselves waiting in the cycle, a deadlock occurs.




3. Example of Resource Allocation Graph



Explanation:

· The Resource Allocation Graph (RAG) above is composed of four processes and two resources.
· R1 and R2 each have two instances
· P1 requests an instance of R1
· P2 is holding an instance of R1
· P3 also holds an instance of R1
· P3 requests an instance of R2
· P4 holds an instance of R2
· P1 also holds an instance of R2
· There is no deadlock: although the graph contains a cycle (P1 -> R1 -> P3 -> R2 -> P1), P2 and P4 are not waiting for anything, so they can finish and release their instances, breaking the cycle.

4. Example of Resource Allocation Graph


Explanation:

· The Resource Allocation Graph (RAG) above is composed of two processes and two resources.
· P1 holds an instance of R1
· P2 requests an instance of R1
· P1 and P2 may request an instance of R2


5. Example of Resource Allocation Graph


Explanation:

· The RAG above is composed of two resources and two processes
· P1 holds an instance of R1
· P2 is requesting an instance of R1
· P2 holds an instance of R2
· P1 may request an instance of R2

A resource-allocation graph consists of a set of vertices V and a set of edges E.

· V is partitioned into two types:
» P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
» R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
· Request Edge – directed edge Pi -> Rj
· Assignment Edge – directed edge Rj -> Pi


RESOURCE ALLOCATION GRAPH

· Process (Pi)

· Resource type with 4 instances (Rj)

· Pi requests an instance of Rj (request edge)

· Pi is holding an instance of Rj (assignment edge)


How Would You Know If There's a Deadlock Based on the Resource Allocation Graph?
· If the graph contains no cycle, there is no deadlock.
· If the graph contains a cycle:
» if there is only one instance per resource type, then there is a deadlock.
» if there are several instances per resource type, there is only the possibility of deadlock (a small cycle-check sketch follows).
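To make the cycle rule concrete, here is a minimal Java sketch that stores a RAG as adjacency lists of plain directed edges (request edges Pi -> Rj and assignment edges Rj -> Pi) and runs a depth-first search to look for a cycle. The class and method names are invented for illustration; remember that with multiple instances per resource type a cycle only signals a possible deadlock.

import java.util.*;

// Sketch only: a resource-allocation graph as adjacency lists, where request
// edges (Pi -> Rj) and assignment edges (Rj -> Pi) are both plain directed
// edges. A cycle in the graph signals a (possible) deadlock.
public class RagCycleCheck {
    private final Map<String, List<String>> edges = new HashMap<>();

    // addEdge("P1", "R1") records a request edge; addEdge("R1", "P2") an assignment edge.
    public void addEdge(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        edges.computeIfAbsent(to, k -> new ArrayList<>());
    }

    // Standard depth-first-search cycle detection on a directed graph.
    public boolean hasCycle() {
        Set<String> visited = new HashSet<>();
        Set<String> onStack = new HashSet<>();
        for (String v : edges.keySet()) {
            if (dfs(v, visited, onStack)) return true;
        }
        return false;
    }

    private boolean dfs(String v, Set<String> visited, Set<String> onStack) {
        if (onStack.contains(v)) return true;   // back edge found: cycle
        if (visited.contains(v)) return false;
        visited.add(v);
        onStack.add(v);
        for (String next : edges.get(v)) {
            if (dfs(next, visited, onStack)) return true;
        }
        onStack.remove(v);
        return false;
    }

    public static void main(String[] args) {
        // Example 1 above: P1 requests R1, P2 holds R1, P2 requests R3, P3 holds R3.
        RagCycleCheck rag = new RagCycleCheck();
        rag.addEdge("P1", "R1");
        rag.addEdge("R1", "P2");
        rag.addEdge("P2", "R3");
        rag.addEdge("R3", "P3");
        System.out.println("Cycle found? " + rag.hasCycle());   // prints false: no deadlock
    }
}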





Recovery from Deadlock

Process Termination:
· Abort all deadlocked processes.
· Abort one process at a time until the deadlock cycle is eliminated.
· In which order should we choose to abort?
 - Priority of the process.
 - How long the process has computed, and how much longer until completion.
 - Resources the process has used.
 - Resources the process needs to complete.
 - How many processes will need to be terminated.
 - Is the process interactive or batch?

Resource Preemption

· Selecting a victim – minimize cost (a rough scoring sketch follows).
· Rollback – return to some safe state, and restart the process from that state.
· Starvation – the same process may always be picked as the victim, so include the number of rollbacks in the cost factor.
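As a rough illustration of victim selection, the sketch below scores each deadlocked process with an ad-hoc cost function. The fields and weights are invented for the example, and the rollback count is included so the same victim is not preempted forever.

import java.util.*;

// Illustrative only: pick the cheapest victim for resource preemption.
// The cost formula and its weights are made up; a real system would use
// its own accounting data for these criteria.
class Victim {
    String name;
    int importance;      // higher = more critical to keep running
    long cpuTimeUsed;    // how long the process has already computed
    int resourcesHeld;   // resources the process currently holds
    int rollbacks;       // how often it has already been rolled back

    Victim(String name, int importance, long cpuTimeUsed, int resourcesHeld, int rollbacks) {
        this.name = name;
        this.importance = importance;
        this.cpuTimeUsed = cpuTimeUsed;
        this.resourcesHeld = resourcesHeld;
        this.rollbacks = rollbacks;
    }

    // Lower cost = cheaper to preempt. Past rollbacks raise the cost so the
    // same process is not starved by being chosen every time.
    long cost() {
        return importance * 10L + cpuTimeUsed + resourcesHeld * 5L + rollbacks * 100L;
    }
}

public class VictimSelection {
    public static void main(String[] args) {
        List<Victim> deadlocked = List.of(
                new Victim("P1", 3, 40, 2, 0),
                new Victim("P2", 8, 900, 5, 2),
                new Victim("P3", 1, 10, 1, 0));
        Victim pick = Collections.min(deadlocked, Comparator.comparingLong(Victim::cost));
        System.out.println("Preempt resources from " + pick.name);   // prints P3 for these numbers
    }
}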

Deadlock Detection: allow the system to enter a deadlock
state and then recover.
· Requires deadlock detection algorithm
· Technique for forcibly preempting resources and/or terminating tasks

1.) ALLOW SYSTEM TO ENTER DEADLOCK STATE


2.) DETECTION ALGORITHM
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocation_i ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Request_i ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocation_i
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then Pi is deadlocked.

The algorithm requires on the order of O(m × n²) operations to detect whether the system is in a deadlocked state.
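The following is a direct sketch of the detection algorithm above in Java. It assumes the usual inputs: an Allocation matrix (n processes × m resource types), a Request matrix of the same shape, and an Available vector of length m; the sample numbers in main are illustrative only.

// Sketch of the deadlock-detection algorithm above.
public class DeadlockDetection {

    // Returns true if the system described by Allocation, Request and
    // Available is in a deadlocked state.
    static boolean isDeadlocked(int[][] allocation, int[][] request, int[] available) {
        int n = allocation.length, m = available.length;
        int[] work = available.clone();                 // (a) Work = Available
        boolean[] finish = new boolean[n];
        for (int i = 0; i < n; i++) {                   // (b) Finish[i] = true iff Allocation_i == 0
            finish[i] = isZero(allocation[i]);
        }

        boolean progress = true;                        // steps 2 and 3
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                if (!finish[i] && lessOrEqual(request[i], work)) {
                    for (int j = 0; j < m; j++) work[j] += allocation[i][j];
                    finish[i] = true;
                    progress = true;
                }
            }
        }

        for (boolean f : finish) {                      // step 4
            if (!f) return true;                        // some Pi never finished: deadlock
        }
        return false;
    }

    static boolean isZero(int[] v) {
        for (int x : v) if (x != 0) return false;
        return true;
    }

    static boolean lessOrEqual(int[] a, int[] b) {
        for (int j = 0; j < a.length; j++) if (a[j] > b[j]) return false;
        return true;
    }

    public static void main(String[] args) {
        // Illustrative snapshot: 5 processes, 3 resource types.
        int[][] allocation = {{0, 1, 0}, {2, 0, 0}, {3, 0, 3}, {2, 1, 1}, {0, 0, 2}};
        int[][] request    = {{0, 0, 0}, {2, 0, 2}, {0, 0, 0}, {1, 0, 0}, {0, 0, 2}};
        int[] available    = {0, 0, 0};
        System.out.println("Deadlocked? " + isDeadlocked(allocation, request, available));   // false
    }
}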

3.) RECOVERY SCHEME




Deadlock Prevention

Deadlock prevention restrains the ways in which requests can be made, by ensuring that at least one of the following necessary conditions cannot hold.


- Mutual Exclusion – not required for sharable resources; must hold for nonsharable resources.

- Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources.

· Require the process to request and be allocated all its resources before it begins execution, or allow the process to request resources only when it holds none.
· Low resource utilization; starvation is possible.
- No Preemption
· If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources currently being held
are released.
· Preempted resources are added to the list of resources for which the process is
waiting.
· Process will be restarted only when it can regain its old resources, as well as
the new ones that it is requesting.

- Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration (see the lock-ordering sketch below).
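The usual way to apply the circular-wait rule in code is to fix a global lock order and always acquire locks in that order. Below is a minimal Java sketch under that assumption; the lock names and their ordering are invented for illustration.

// Sketch: both threads acquire lockA (order 1) before lockB (order 2), so a
// cycle of waits cannot form.
public class LockOrdering {
    private static final Object lockA = new Object();   // order 1
    private static final Object lockB = new Object();   // order 2

    static void task(String name) {
        synchronized (lockA) {                           // always take the lowest-ordered lock first
            synchronized (lockB) {
                System.out.println(name + " holds both locks");
            }
        }
    }

    public static void main(String[] args) {
        new Thread(() -> task("T1")).start();
        new Thread(() -> task("T2")).start();
    }
}

If one of the threads instead acquired lockB first, the two threads could each end up holding one lock while waiting forever for the other, which is exactly the circular wait the ordering rule prevents.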

Methods for Handling Deadlocks

- Deadlock Prevention: ensure that the system will never enter
a deadlock state – expensive operations
· Need to monitor all lock acquisitions
· Selectively deny those that might lead to deadlock

- Deadlock Detection: allow the system to enter a deadlock
state and then recover.
· Requires deadlock detection algorithm
· Technique for forcibly preempting resources and/or terminating tasks
- Ignore the problem and pretend that deadlocks never occur
in the system
· Used by most operating systems, including UNIX

DEADLOCK CHARACTERIZATION

A deadlock can occur if the following four conditions hold
simultaneously:

1. Mutual exclusion: at least one resource must be held in a
nonsharable mode.

2. Hold and wait: a process must be holding at least one
resource and waiting to acquire additional resources held by
other processes.

3. No preemption: resources can’t be preempted.

4. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

MULTIPLE-PROCESSOR AND REAL-TIME SCHEDULING

  • CPU scheduling more complex when multiple CPUs are available

  • Homogeneous processors within a multiprocessor

  • Load sharing

  • Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing

  • Hard real-time systems – required to complete a critical task within a guaranteed amount of time
  • Soft real-time computing – requires that critical processes receive priority over less fortunate ones

THREAD SCHEDULING
  • Distinction between user-level and kernel-level threads
  • OS only schedules kernel-level threads. User-level threads are scheduled through a direct or indirect (LWP) mapping
  • Many-to-one and many-to-many models, thread library schedules user-level threads to run on LWP
    • Known as process-contention scope (PCS) since scheduling competition is within the process
  • Kernel thread scheduled onto available CPU is system-contention scope (SCS) – competition among all threads in system
  • Typically, PCS is priority based; the programmer can set user-level thread priorities (see the sketch below)
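As a small illustration of program-settable (PCS) priorities, the Java sketch below starts two threads with different priorities. Whether the platform scheduler actually honors these hints depends on how user-level threads map onto kernel threads.

// Sketch: setting user-level thread priorities from the program. Priorities
// are hints; the underlying OS scheduler may or may not honor them.
public class PriorityDemo {
    public static void main(String[] args) {
        Runnable work = () -> System.out.println(
                Thread.currentThread().getName() + " runs with priority "
                        + Thread.currentThread().getPriority());

        Thread low = new Thread(work, "low");
        Thread high = new Thread(work, "high");
        low.setPriority(Thread.MIN_PRIORITY);    // 1
        high.setPriority(Thread.MAX_PRIORITY);   // 10
        low.start();
        high.start();
    }
}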


Scheduling Algorithms
1. First-come, first-served (FCFS) scheduling
2. Shortest-job-first (SJF) scheduling
3. Priority scheduling
4. Round-robin (RR) scheduling
5. Multilevel queue scheduling
6. Multilevel feedback queue scheduling

- First-come, First-served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes.

- Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general priority-scheduling algorithm, which simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation. Aging is a technique to prevent starvation.

- Round-robin (RR) scheduling is more appropriate for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted and the process is put at the tail of the ready queue. The major problem is the selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.
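To make the quantum discussion concrete, here is a toy RR simulation in Java. It assumes all processes arrive at time 0 and ignores context-switch overhead; the burst times (24, 3, 3) and the quantum (4) are illustrative values only.

import java.util.*;

// Toy round-robin simulation: all processes arrive at time 0, burst times
// are in CPU time units, q is the time quantum, and context-switch time is
// ignored. The numbers are illustrative only.
public class RoundRobinSim {
    public static void main(String[] args) {
        int[] burst = {24, 3, 3};            // P1, P2, P3
        int q = 4;

        int n = burst.length;
        int[] remaining = burst.clone();
        int[] finishTime = new int[n];
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < n; i++) ready.add(i);

        int time = 0;
        while (!ready.isEmpty()) {
            int p = ready.poll();
            int run = Math.min(q, remaining[p]);
            time += run;
            remaining[p] -= run;
            if (remaining[p] > 0) ready.add(p);      // preempted: back of the ready queue
            else finishTime[p] = time;
        }

        double totalWait = 0;
        for (int i = 0; i < n; i++) {
            int waiting = finishTime[i] - burst[i];  // arrival time is 0
            totalWait += waiting;
            System.out.println("P" + (i + 1) + " waiting time = " + waiting);
        }
        System.out.println("Average waiting time = " + totalWait / n);   // about 5.67 for these values
    }
}

With a much larger quantum (say q = 30) the same run executes each process to completion in arrival order, which is the FCFS degeneration described above.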

The FCFS algorithm is nonpreemptive; the RR algorithm is preemptive. The SJF and priority algorithms may be either preemptive or nonpreemptive.

Multilevel queue algorithms allow different algorithms to be used for different classes of processes. The most common configuration is a foreground (interactive) queue that uses RR scheduling and a background (batch) queue that uses FCFS scheduling. Multilevel feedback queues allow processes to move from one queue to another.

Because such a wide variety of scheduling algorithms are available, we need methods to select among them. Analytic methods use mathematical analysis to determine the performance of an algorithm. Simulation methods determine performance by imitating the scheduling algorithm on a “representative” sample of processes, and computing the resulting performance.

Operating systems that support threads at the kernel level must schedule threads, not processes, for execution. This is the case with Solaris 2 and Windows 2000, where both systems schedule threads using a preemptive priority-based scheduling algorithm, including support for real-time threads. The Linux process scheduler also uses a priority-based algorithm with real-time support. The scheduling algorithms for these three operating systems typically favor interactive processes over batch and CPU-bound processes.

WINDOWS XP THREADS


Implements the one-to-one mapping
Each thread contains
- A thread id
- Register set
- Separate user and kernel stacks
- Private data storage area
The register set, stacks, and private storage area are known as the context of the thread
The primary data structures of a thread include:
- ETHREAD (executive thread block)
- KTHREAD (kernel thread block)
- TEB (thread environment block)

LINUX THREADS
- Linux refers to them as tasks rather than threads
- Thread creation is done through clone() system call
- clone() allows a child task to share the address space of the parent task (process)

JAVA THREADS
- Java threads are managed by the JVM

- Java threads may be created by either of the following (both styles are shown in the sketch after this list):
  • Extending Thread class
  • Implementing the Runnable interface
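A minimal example of the two creation styles listed above:

// Both thread-creation styles from the list above.
public class ThreadDemo {

    // Style 1: extend the Thread class and override run().
    static class Worker extends Thread {
        @Override
        public void run() {
            System.out.println("extended Thread running");
        }
    }

    public static void main(String[] args) {
        new Worker().start();

        // Style 2: implement the Runnable interface (here as a lambda)
        // and hand it to a Thread object.
        Runnable task = () -> System.out.println("Runnable running");
        new Thread(task).start();
    }
}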

