CMPS 111: Introduction to Operating Systems
Examinations
There will be two examinations, a midterm and a final exam.
Sample Final Exam Questions
Note: One of these questions will actually appear on the final exam.
- Early Intel processors such as the 8086 did not support dual-mode operation
(user and supervisor modes). This had a fairly major impact on the type of
system that could be implemented on these processors, and most did not support
multi-user operation. Discuss a few potential problems that could arise in
trying to support multi-user operation on such a system.
- What are the differences between a trap and an interrupt? What is the use
of each?
- What is a process? What information does an operating system generally
need to keep about running processes in order to execute them?
- Memory management is important in operating systems. Discuss the main problems
that can occur if memory is managed poorly.
- What are the five major activities of an operating system in regard to
file management? Why is the file interface used in many modern systems to
interact with devices as well as files?
- User-level threads are threads that are scheduled directly by the process
in which they live rather than by the kernel: the kernel schedules the processes,
and any process that has threads can schedule them however it wants within
its time slice. Describe briefly how such a scheme would work. Can you think
of any problems with such an implementation?
- Suppose that a scheduling algorithm (at the level of short-term CPU scheduling)
favors those processes that have used the least processor time in the recent
past. Why will this algorithm favor I/O-bound processes and yet not permanently
starve CPU-bound processes?
- What is the meaning of the term busy waiting? What other kinds of waiting
are there in an operating system? Can busy waiting be avoided altogether?
Explain your answer.
- Real-time processing is processing in which each process has deadlines: times
by which it must complete certain computations. What do you think are the
major differences between operating systems that support real-time processing
and those that support best-effort processing (the type of processing we have
been studying in this class)?
- Gridlock is a term describing a traffic situation in which there are so
many cars in the streets and the intersections that essentially no car can
move in any direction because other cars are in the way. Explain how this is
the same as deadlock in an operating system by showing how each of the four
conditions for deadlock holds in this situation.
- Explain the difference between logical and physical addresses.
- Consider a paging system with the page table stored in memory. If a memory
reference takes 200 nanoseconds, how long does a paged memory reference take?
If we add associative registers, and 75 percent of all page table references
are found in the associative registers, what is the effective memory reference
time? (Assume that looking for (and maybe finding) a page-table entry in the
associative memory takes zero time).
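As a sanity check on the arithmetic, here is a minimal sketch using only the figures given in the question (a 200 ns memory reference and a zero-cost associative lookup):

```python
MEM = 200  # nanoseconds per memory reference

# Without associative registers: one reference for the page table, one for the data.
paged = 2 * MEM                                   # 400 ns

# With associative registers (75% hit rate, zero-time lookup):
# hit  -> just the data reference; miss -> page-table reference + data reference.
hit_rate = 0.75
eat = hit_rate * MEM + (1 - hit_rate) * 2 * MEM   # effective access time

print(paged, eat)  # 400 250.0
```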
- Consider a demand-paging system with a paging disk that has an average
access and transfer time of 20 milliseconds. Addresses are translated through
a page table in main memory, with an access time of 1 microsecond per memory
access. Thus, each memory reference through the page table takes two accesses.
To improve this time we have added an associative memory that reduces access
time to one memory reference if the page table entry is in the associative
memory. Assume that 80 percent of the accesses are in the associative memory
and that, of the remaining, 10 percent (or 2 percent of the total) cause page
faults. What is the effective memory access time?
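One plausible accounting is sketched below. It assumes a page fault costs the 20 ms disk service plus the two-access page-table walk afterwards; other reasonable accountings shift the answer slightly.

```python
MEM = 1.0          # microseconds per memory access
FAULT = 20_000.0   # 20 ms paging-disk service time, in microseconds

hit = 0.80         # found in associative memory: one memory access
miss = 0.18        # page table in main memory: two memory accesses
fault = 0.02       # page fault: disk service, then the two accesses

eat = hit * MEM + miss * 2 * MEM + fault * (FAULT + 2 * MEM)
print(f"{eat:.2f} microseconds")
```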
- We have discussed LRU as an attempt to predict future memory access patterns
based on previous access patterns (i.e. if we haven’t accessed a particular
page in a while, we are not likely to reference it again soon). Another idea
that some researchers have explored is to record the memory reference pattern
from the last time the program was run and use it to predict what it will
access next time. Discuss the positive and negative aspects of this idea.
- Suppose that you have a UNIX file system where the disk block size is 1kB,
and an inode takes 128 bytes. Disk addresses take 32 bits, and the inode contains
space for 64 bytes of data (a recent optimization), 8 direct addresses, one
indirect, one double-indirect and one triple-indirect (the rest of the space
in the inode is taken up with other information such as ownership and protection).
An index block is the same size as a disk block. How much space (including
overhead) is required for files that are 1 byte, 1025 bytes, 65536 (64kB)
bytes, and 1048576 (1MB) bytes long? Hint: it may help if you
draw a picture of how inodes are used to locate the blocks making up a file.
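One way to organize the arithmetic is sketched below. The layout constants come straight from the question; treating the 64 inline bytes as holding the first 64 bytes of the file is an assumption about how the stated optimization works.

```python
BLOCK = 1024                 # disk block size in bytes
ADDR = 4                     # 32-bit disk address
PER_INDEX = BLOCK // ADDR    # 256 addresses per index block
INODE = 128                  # bytes per inode
INLINE = 64                  # data bytes stored directly in the inode (assumed)
DIRECT = 8                   # direct addresses in the inode

def ceil_div(a, b):
    return -(-a // b)

def file_space(size):
    """Total space (inode + data blocks + index-block overhead) for `size` bytes."""
    space = INODE
    blocks = ceil_div(max(0, size - INLINE), BLOCK)   # data blocks needed
    space += blocks * BLOCK
    left = max(0, blocks - DIRECT)                    # blocks beyond the directs
    if left > 0:                                      # single indirect block
        space += BLOCK
        left -= min(left, PER_INDEX)
    if left > 0:                                      # double indirect
        space += BLOCK                                # top-level index block
        space += ceil_div(min(left, PER_INDEX**2), PER_INDEX) * BLOCK
        left -= min(left, PER_INDEX**2)
    if left > 0:                                      # triple indirect
        space += BLOCK
        space += ceil_div(left, PER_INDEX**2) * BLOCK
        space += ceil_div(left, PER_INDEX) * BLOCK
    return space

for size in (1, 1025, 65536, 1048576):
    print(size, file_space(size))
```

Under this accounting a 1-byte file fits entirely in the inode, a 1025-byte file needs one data block, and the larger files start paying for index blocks.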
- In most file systems, an update to a block of a file is written back to
the same block on disk. Some modern high-performance file systems just write
the updated block of data into a nearby available disk block. Briefly describe
one advantage and one disadvantage of this strategy.
- Why are some devices block devices while others are character devices?
Why not just standardize them all to be character devices, or block devices?
- Explain why SSTF scheduling tends to favor accesses to middle cylinders
over the innermost and outermost cylinders.
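A quick simulation makes the bias visible. Everything here is a made-up workload (200 cylinders, a queue held at 8 uniformly random pending requests, seek cost proportional to distance), not a model of any particular disk:

```python
import random

random.seed(0)

CYLS = 200
head, now = CYLS // 2, 0.0
queue = [(random.randrange(CYLS), 0.0) for _ in range(8)]   # (cylinder, enqueue time)
waits = [[] for _ in range(CYLS)]

for _ in range(20000):
    # SSTF: serve the pending request closest to the current head position
    cyl, t0 = min(queue, key=lambda r: abs(r[0] - head))
    queue.remove((cyl, t0))
    now += abs(cyl - head) + 1                   # seek cost proportional to distance
    waits[cyl].append(now - t0)
    head = cyl
    queue.append((random.randrange(CYLS), now))  # keep 8 requests pending

def avg_wait(cyls):
    ws = [w for c in cyls for w in waits[c]]
    return sum(ws) / len(ws)

middle = avg_wait(range(CYLS // 3, 2 * CYLS // 3))
edges = avg_wait(list(range(CYLS // 3)) + list(range(2 * CYLS // 3, CYLS)))
print(f"average wait, middle third: {middle:.1f}  outer thirds: {edges:.1f}")
```

Because the head is usually nearer the middle than either edge, requests for middle cylinders tend to win the "shortest seek" comparison and wait less.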
- I have just invented a new scheduling algorithm that I claim gives the
highest priority to processes that have just entered the system, but is fair
to all processes. The algorithm works like this: There are two queues, one
for “new” processes and one for “old” processes. When a process enters the
system, it is put at the end of the “new” queue. After 2 milliseconds, regardless
of whether or not they have been scheduled, processes on the “new” queue go
to the end of the “old” queue. When it is time to schedule a process, the
system schedules the process at the head of one of the queues, alternating
between the two queues. Each process runs to completion before the next process
is scheduled. Assume that processes enter the system at random times and that
most processes take much longer than 2 milliseconds to execute. Does this
algorithm give the highest priority to new processes? Explain your answer.
Is this algorithm fair to all processes? Explain your answer.
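One way to probe the claim is to simulate it. The workload below is invented (random arrivals over 500 ms, service times of 5-30 ms, so well above the 2 ms promotion threshold), and falling back to the non-empty queue when the scheduled queue is empty is an assumption about the algorithm; the simulation simply counts which queue dispatches come from:

```python
import random
from collections import deque

random.seed(1)

# hypothetical workload: (arrival time, service time), both in milliseconds
pending = deque(sorted((random.uniform(0, 500), random.uniform(5, 30))
                       for _ in range(40)))

new_q, old_q = deque(), deque()
t = 0.0
turn_new = True                  # alternate between the "new" and "old" queues
from_new = from_old = 0

while pending or new_q or old_q:
    while pending and pending[0][0] <= t:        # admit arrivals up to time t
        new_q.append(pending.popleft())
    while new_q and t - new_q[0][0] > 2.0:       # 2 ms promotion to "old"
        old_q.append(new_q.popleft())
    if not new_q and not old_q:                  # idle until the next arrival
        t = pending[0][0]
        continue
    if turn_new and new_q:
        arr, svc = new_q.popleft()
        from_new += 1
    elif old_q:
        arr, svc = old_q.popleft()
        from_old += 1
    else:                                        # "old" empty; take from "new"
        arr, svc = new_q.popleft()
        from_new += 1
    turn_new = not turn_new
    t += svc                                     # run to completion

print(f"dispatched from new: {from_new}, from old: {from_old}")
```

Since every run lasts far longer than 2 ms, processes are almost always promoted to the "old" queue before they are ever scheduled, which is a useful observation when evaluating the "highest priority to new processes" claim.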