A89: supervisor vs usermode...
> I'm just curious about the difference between supervisor and
> usermode...Actually why does there even have to be 2 different
> modes...wouldn't everything be just as easy (easier) w/ 1 mode which has
> access to all components....ie: interrupts...??? Could someone possibly
> explain the main differences between these 2 modes and perhaps why it is
> even necessary???
Well, I can give it a try ...
When Motorola designed the 68000 they didn't have calculators in
mind. The 68000 was designed for high-end microprocessor systems, much
like a PPC or an Alpha today. This meant multitasking, multiuser systems.
The whole idea is that you want a computer on which you can run
programs and which does not crash even if a program crashes. Not to
mention that you may want to run multiple applications on the computer,
and you do not want them to crash each other, nor to peek into
each other's data, and so on.
A very neat solution is to provide the programs with an environment
in which they have no direct access to the inner workings of the
system - if they want to do something which involves a HW component,
e.g. a disk, they ask the (let's name it) operating system to do it.
The OS, on the other hand, must be able to talk to the HW and it must
have control over all user programs so that it can manage them.
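Just to make that concrete, here is a minimal sketch of how a user
program typically asks the OS for a service on the 68000 family; the
trap number and the function code below are made up for illustration,
the real convention depends on the OS:

        moveq   #3,d0           ; made-up function code: "read a sector"
        trap    #0              ; the CPU switches to supervisor mode and
                                ; enters the OS through exception vector 32
        ; back in user mode here; d0 holds whatever the OS returned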
So, you define 2 modes (on some CPUs, like the x86, there are more). One
allows access to absolutely everything - this is the supervisor mode.
The other limits the access to a level where a program can not
possibly bring the system to its knees. It basically means that user
code can not access HW directly and can not mess with CPU-internal
things which the user has no business touching.
Protecting memory ranges and the HW hanging off the CPU needs
additional stuff, such as an MMU, but the CPU itself must provide
enough information to its surroundings to allow them to decide whether
a particular bus access is valid or not.
Now in the case of the 68000 Motorola made 2 big mistakes which
screwed the whole thing up. One is that user code can read the high
byte of the status register, therefore it can determine that it is
running in user mode. The other is that the 68000 can not re-run an
instruction, which makes virtual machines and demand-paged virtual
memory almost impossible to implement (almost, because ingenious people
still managed to do it, with two CPUs running the same user code
with a one-insn delay, but that's a different story).
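To illustrate the first mistake, this little fragment runs happily in
user mode on a plain 68000 (on a 68010 or later the MOVE from SR is
privileged, so it would trap instead):

        move.w  sr,d0           ; not a privileged insn on the 68000
        btst    #13,d0          ; bit 13 of SR is the S (supervisor) bit
        ; the bit reads as 0 here, so the program knows it is in user
        ; mode - exactly what you do not want in a virtual machine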
So, when the CPU is not screwed up, that is 68010 and up, the CPU can
do the following things for you:
A program running in user mode can be made to believe that it is
actually running in supervisor mode. This is achieved by the
privilege violation trap. If the program tries to execute a
privileged instruction, for example, it wants to read the top byte of
the status register, an exception is taken, which, of course,
runs in real supervisor mode. The exception routine gets enough
information so that it can figure out what the user program was trying
to do and can emulate it. That is, when the exception routine
returns, the user program will not know that there was a whole bunch of
code executing instead of the single insn, and it will get a status
register value which the real supervisor made up for it.
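To give an idea of how this could look, here is a rough sketch of such
an emulation for a 68010 or later; the handler and its labels are made
up, it only recognises a single opcode, and the stack offsets assume
that nothing but the exception frame and one saved register are on the
supervisor stack:

PrivViol:                          ; hypothetical handler on vector 8
        move.l  a0,-(sp)           ; keep a scratch register
        movea.l 6(sp),a0           ; stacked PC points at the offender
        cmpi.w  #$40C0,(a0)        ; is it "move.w sr,d0" ($40C0)?
        bne.s   KillProgram        ; anything else: kick the program out
        move.w  4(sp),d0           ; start from the stacked SR ...
        ori.w   #$2000,d0          ; ... and fake the S bit, so the user
                                   ; believes it runs in supervisor mode
        addq.l  #2,6(sp)           ; step the stacked PC over the insn
        movea.l (sp)+,a0
        rte                        ; d0 carries the made-up SR value back
KillProgram:
        ; a real OS would terminate or signal the offending program here
        ...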
More or less the same trick can be used for every other thing that the
user might try to do while believing that it is in supervisor mode.
However, the real supervisor is in a position to see what the
user wants to do, allow only the benign attempts, and kick the
program out if it tries anything nasty.
The insn re-run is a feature which is very handy for virtual memory.
It does the following: if an insn ends up in a bus error (which means
that the address it supplied is illegal), an exception is
started. A whole lot of info about the internal state of the CPU is
saved on the stack, which is enough for the CPU to re-run, or more
precisely continue, the insn which failed exactly from where it died.
For example, if you have a move:
movea.l #100,a0         ; a0 = address 100
movea.l #200,a1         ; a1 = address 200
...
move.l  (a0),(a1)+      ; copy a longword from (a0) to (a1), bump a1
which will copy a longword from address 100 to address 200, the bus
cycles (after the move.l insn fetch) look like this:
read a word from 100
read a word from 102
write a word to 200
write a word to 202
and a1 will be incremented by 4.
Now assume that 202 is temporarily not accessible for some reason
(see later). The CPU will successfully execute the first 3 bus cycles
but the last one will end up in a bus error.
The BERR exception handler can then clean the situation up and make
202 writeable for the program. However, you can't just repeat the
insn, because you do not know whether a1 has not been updated yet, or
has already been incremented by 2, or by 4. In addition, if what sits
at 200 is not just a plain piece of memory but something which must be
written exactly once (which is the case with lots of HW elements),
then you are in big trouble.
With insn re-run capable CPUs (that is, 68010 and up), when the bus
error comes there will be enough information pushed onto the stack
so that you can instruct the CPU (after making 202 writeable) to
re-run the insn. It will suck the stuff back in from the stack, figure
out that it has done everything up to and including the write to 200,
so it will write to 202, update a1 and merrily chug along as if
nothing had happened.
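As a sketch of what the supervisor's side of this looks like on a
68010: the bus error handler digs the fault address out of the big
stack frame, makes that address accessible again (MapPageIn below is a
made-up routine standing in for all the MMU and disk work), and the
RTE lets the CPU carry on with the insn:

BusErr:                            ; hypothetical 68010 bus error handler
        move.l  a0,-(sp)           ; keep a scratch register
        movea.l 14(sp),a0          ; fault address from the format $8 frame
                                   ; (offset 10 in the frame, plus 4 for
                                   ; the register we just pushed)
        jsr     MapPageIn          ; made-up: map valid memory in at the
                                   ; address held in a0
        movea.l (sp)+,a0
        rte                        ; reloads the saved internal state and
                                   ; continues the insn where it faulted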
Now this may seem unwarranted complexity - after all, if your write
went wrong, you accessed an address which you shouldn't have, didn't you?
Well, not necessarily.
In a paged virtual memory environment the total memory requirement of
all concurrently running programs can be much higher than the actual
physical memory you have. What the system does is that, using the MMU,
it keeps the parts of the address space that the programs are actually
using (called the working set) in memory and all the rest on disk. This
works because most programs tend to spend stretches of
time manipulating only a small set of data, then they move along to
another chunk, work on that for a while and so on.
Now when this move to the other chunk happens, it may very well be
that that other chunk is sitting on the disk (historically
it's called being swapped out). However, the program knows nothing
about it, it just tries to access it - whoops, a bus error comes in,
since that particular address is not a valid memory reference at the
moment. The OS then takes a look at the address and sees that the
program did indeed access an address which belongs to it (i.e. it
was a completely valid operation), however, the data is not in
memory. It then allocates a bit of memory for the program and starts
the disk to deliver the data into that piece of RAM. While the disk is
busily whirring, looking for the data and pumping it into memory,
the system will of course let other programs execute. When the disk
finishes its chores it will cause an interrupt, which, of course, ends
up in the system. The system will then re-program the MMU so that
the piece of memory appears at the requested address for the
user program (regardless of the memory's real physical address) and
finally, often tens of milliseconds later, it will re-run the failed
insn and the user program merrily marches on with no knowledge of
the whole complicated business that happened during the execution of
that single instruction.
With the previous example, you can imagine it as if the move was
interrupted between the first and the second write, the system
executed tens of thousands of insns, including all sorts of other user
programs, the disk worked a bit in parallel with everything else, and
then suddenly everything went back to normal, the move finished and
life went on.
Now, systems like this could not be made without the supervisor and
user level distinction - more precisely, one could write one, but any
user program error (or a malicious program) could take the system
down (for example, Windows 3.x and AFAIK '95 are systems like that).
Even if you don't have a multiuser system, the user/supervisor thing
can come in very handy. In embedded systems, if your main program runs
in user mode and is not allowed to access HW and certain portions of
memory directly, you can protect your system somewhat. If you have
a bug in the user code and a stale pointer dereference would write to
a HW element or into critical system data, possibly toppling
everything over, the protection scheme will a) not let it do that and
b) inform the supervisor that something very bad happened.
The supervisor could then shut the system down gracefully or restart
it *and* log the actual mishap so that you can start chasing the bug.
So, while it is not critical in a calculator, it does not hurt.
It can help to find and track down bugs and gives limited protection
against programs going mad.
Regards,
Zoltan