                                                    The Client/Server Model

I.      Introduction

II.     Basic Concepts
        A.      Introduction
        B.      Message Passing
        C.      Processes
        D.      Message Passing and Synchronization
        E.      Deadlock
                        Detecting Deadlock
                        Avoiding Deadlock
        F.      Summary

III.    Servers
        A.      Introduction
        B.      Server Message Handling Mechanisms
                        Transaction Oriented Servers
                        State Oriented Servers
        C.      External Server Organization
                        The Manager Process
                        The Agent Process
                        The Interrupt Handler
        D.      Internal Server Organization
        E.      The Client Interface
                        A Minimum API Functionality
                        The Client/Server Message Structure
                        Making the Client Interface Functions Efficient
        F.      Priority Inversion
                        Client Driven Priority
                        Priority Message Queuing
        G.      Server Related Operating System Services
                        The Name Server
                        Proxies

IV.     Client/Server Example - Chat
        A.      Introduction
        B.      Overall Structure
        C.      The Chat Server
                        Design
                        Message and Data Structures
                        Privileges and Responsibilities
                        QNX Services
                        Pseudo Code
        D.      The Chat Client
                        Design
                        Message and Data Structures
                        Privileges and Responsibilities
                        QNX Services
                        Pseudo Code

V.      Appendix A
        A.      The Chat Server Source Code
I.     Introduction

        This document introduces the basic concepts and terminology that the
reader needs in order to develop Client/Server applications within the QNX
Real-Time Operating System.

        Topics of discussion include Processes, the Kernel and the Client/Server
model.  Terminology and differences between QNX and other operating
systems will also be pointed out.

        To solidify these concepts, a simple Client/Server application is designed,
developed and presented.  This sample program, called Chat, implements a multi-
user line-oriented conversation utility within the QNX environment.

        It is assumed that the reader has had prior exposure to multi-tasking
operating systems - but perhaps not those using message passing.  It is further
assumed that the reader has some programming experience.

        All programming examples included are coded in the C programming
language.

II.     Basic Concepts

        A.      Introduction

        The QNX Real-Time Operating System was designed as a multi-user,
network-distributed operating system.  The concepts behind QNX make it ideally
suited to solving real-time problems, and an understanding of these concepts is
helpful if you are going to program in the QNX environment.  This section
provides you with that background information.

        QNX is a real-time kernel coupled with a rich set of development tools and
utilities.  There is no need to cross-compile or work in a foreign environment
and download to the QNX system.  The entire edit, compile, execute and test
cycle of application development takes place within a single domain.  Further,
QNX adheres strictly to POSIX -- ensuring easy migration from other POSIX
operating environments.

        In an effort to provide a standardized applications programming interface,
the designers of QNX chose to use message passing to provide clean interaction
between system resources and the application program.  Message passing is the
fundamental communication mechanism for all processes in the QNX
environment.

        B.      Message Passing

        Message passing involves sending mutually agreed upon codes along with
some optional data, to instruct the keeper of a resource to do some work on your
behalf.  For example, the traditional open function in C requests that a file be
opened for reading and/or writing.  Under QNX, the underlying library function
will build and send a message on your behalf to the manager (or Server) of the file
system, in this case Fsys, to open the file with the specified access.  In the C
programming language, structures (records in other programming languages)
are typically used as the message mechanism.
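
        As a sketch of what such a message structure might look like (the action
code, field names and layout below are invented for illustration; they are not
the actual Fsys messages):

```c
#include <string.h>

/* Hypothetical request message: an agreed-upon action code followed
   by the data needed to fulfil the request. */
#define MSG_OPEN 0x0101

struct open_msg {
    unsigned short type;        /* action code */
    unsigned short mode;        /* requested access */
    char           path[64];    /* name of the file to open */
};

/* Building the message is essentially all a library interface
   function does before sending it to the Server on your behalf. */
void build_open(struct open_msg *msg, const char *path,
                unsigned short mode)
{
    msg->type = MSG_OPEN;
    msg->mode = mode;
    strncpy(msg->path, path, sizeof msg->path - 1);
    msg->path[sizeof msg->path - 1] = '\0';
}
```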

        C.      Processes

        A process is roughly equivalent to a program in other operating systems. 
There is exactly one process running at a time, but there may be many others
waiting to run.

        QNX processes can create and destroy other processes.  A typical
application running under QNX consists of several processes communicating with
each other and the real world through the message passing facility.

        Every process has a unique identification number, called a process id or pid. 
Pids are the handles by which processes refer to themselves and other processes.

        D.      Message Passing and Synchronization

        Processes in QNX communicate by passing messages between themselves. 
Message passing consists of three fundamental (primitive) operations: Send,
Receive and Reply.  There are also specialized versions of these three
primitives - but if you understand the basic three, the others are just variations.

        QNX doesn't impose a structure on messages.  The contents of a message
can be anything: ASCII printable characters, binary data or complex data
structures.  It's up to processes communicating with each other to decide on the
format of their messages.  By convention, messages usually consist of an action
code (number) and zero or more bytes of data.  This is the message structure
that all Quantum-provided Servers use; any application program that you
design will therefore integrate more seamlessly with the standard system
processes if it follows suit.

        Send always sends a message to a specified process.  When a process sends
a message, the sending process blocks until the receiving process acknowledges the
message by replying to the sender.  A blocked process can't run until it unblocks,
so other processes that are ready to run - will.  The act of replying doesn't block
the process that replied; it just unblocks the sending process.

        There is also a form of Send that is non-blocking.  This special send is
known as a Proxy.  Proxies are "go-between" processes that have the single
purpose of sending a previously prepared message on behalf of another process.
The sender can trigger a proxy to send its message and then go about its
business.  Proxies are particularly useful when the sending process must not
block; Server processes fall into this never-send-blocked category (as
explained later).  To help you better understand proxies, we will use them in
our sample Chat application.
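
        The proxy idiom can be modeled in isolation.  On QNX the real calls are
qnx_proxy_attach() and Trigger(); the stand-ins below (all names invented)
only mimic the bookkeeping -- a canned message prepared once, then fired any
number of times without the sender ever blocking:

```c
#include <string.h>

/* A stand-alone model of a proxy: attach records the previously
   prepared message, trigger "delivers" a copy without blocking. */
struct proxy {
    char msg[32];     /* the previously prepared message */
    int  pending;     /* triggers not yet delivered to the receiver */
};

void proxy_attach(struct proxy *p, const char *canned)
{
    strncpy(p->msg, canned, sizeof p->msg - 1);
    p->msg[sizeof p->msg - 1] = '\0';
    p->pending = 0;
}

/* The sender never blocks: triggering just queues a delivery of the
   canned message and returns at once. */
void proxy_trigger(struct proxy *p)
{
    p->pending++;
}
```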

        A process only receives a message when it explicitly executes Receive.  A
process can receive a message from a specific process (called receive specific), or
from any process (called receive general).

        If a process attempts to receive a message when no message is available,
it will receive block until one is.  If another process has already sent a
message, the receiving process gets it immediately and continues processing
without blocking.

        A process is free to wait as long as it wants before replying, and it may
receive more messages without replying to previous ones.  It is also free to
reply to messages in a different order than it received them.  Remember,
though: the sender stays blocked until its message is replied to.

        This method of passing messages is synchronous, since processes
synchronize with each other as they pass messages.  The Kernel handles the
synchronization of communicating processes.  It also handles the transfer of the
message from the data space of one process to the other.

        The QNX model for message passing has a number of advantages.  Writing
application processes is much easier because there are a small number of well-
behaved communications primitives.  The whole system has been optimized so
that message passing has low overhead.  The blocking nature of message passing
ensures that there's no worry about system resources (message queues, for
example) overflowing.  You will also find that the message passing model
translates well from high level design to implementation.

        The synchronous message passing model is also complete.  Any problem
that can be solved by other multi-tasking operating systems primitives
(semaphores or monitors, for example) can also be solved with message passing.
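
        To make the completeness claim concrete, here is a stand-alone sketch of
a counting semaphore built as a message-passing server.  The Send, Receive
and Reply calls themselves are elided; only the server's bookkeeping is
modeled.  The key trick is that a P request that cannot be granted simply is
not replied to yet, which is exactly how a QNX server "blocks" a client.  All
names are illustrative:

```c
#define MAX_WAITERS 16

struct sem_server {
    int count;                    /* available units */
    int waiters[MAX_WAITERS];     /* pids left reply-blocked on P */
    int nwait;
};

void sem_init(struct sem_server *s, int count)
{
    s->count = count;
    s->nwait = 0;
}

/* Handle a P request from pid: return 1 if the server should reply
   now (unit granted), 0 if the client stays reply-blocked. */
int sem_p(struct sem_server *s, int pid)
{
    if (s->count > 0) {
        s->count--;
        return 1;
    }
    s->waiters[s->nwait++] = pid;
    return 0;
}

/* Handle a V request: return the pid of a waiter to reply to (waking
   it), or -1 if nobody was waiting and the unit is banked. */
int sem_v(struct sem_server *s)
{
    if (s->nwait > 0)
        return s->waiters[--s->nwait];
    s->count++;
    return -1;
}
```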

        E.      Deadlock

        In any operating system, deadlock can occur if the system is not
carefully designed.

        Deadlock occurs when two or more processes appear to be blocked forever. 
For example, process A may be waiting for process B to do something, and process
B may be waiting for process A to do something.  This is the simplest form of
deadlock.  (See Figure 1.)

        Compound deadlock occurs when more than two processes are involved. 
Process A waits on B which waits on C which is waiting on A.  This type of
deadlock is more difficult to detect - especially as the number of processes involved
increases, but like the simple form of deadlock, this form also has a circular
dependency amongst the processes.  To solve the deadlock problem you have to
become a bit of a detective.  Figure 2 shows an example of compound deadlock.

                Detecting Deadlock

        Fortunately QNX provides the necessary tools to aid you in your deadlock
detective work.  For example, the system information utility sin will show you the
states of all processes and their correspondents.

        Simple deadlock can be confirmed by finding the two processes involved in
the message exchange and noting their states.  If they are both in the same
state (i.e. send blocked or receive blocked), and are corresponding with each
other, then simple deadlock has occurred.

        Compound deadlock can be identified in a similar fashion.  Pick a process
suspected of being involved and follow the correspondent chain.  The sin utility
gives you the correspondent under the field titled BLK.  If you continue in this
manner and eventually find that some process is waiting for a process you have
already passed as you follow the chain of correspondents, then again you have
deadlock.

        It's usually worthwhile to keep notes of the states and correspondents of
processes as you work through the above procedure.  This makes the backtracking
a little easier, and makes the dependencies more visible.
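
        The chain-following procedure above can be mechanized.  In this sketch,
blocked_on[i] records which process the i-th process is waiting on (what sin
reports in its BLK column), or -1 if it is not blocked; the names and the fixed
table size are assumptions of the sketch:

```c
#define MAX_PROCS 64

/* Follow the correspondent chain from a suspect process.  Deadlock
   is confirmed the moment the chain revisits a process already
   passed -- the circular dependency described above. */
int has_deadlock(const int *blocked_on, int n, int start)
{
    int visited[MAX_PROCS] = {0};
    int cur = start;

    while (cur >= 0 && cur < n) {
        if (visited[cur])
            return 1;             /* chain came back: deadlock */
        visited[cur] = 1;
        cur = blocked_on[cur];
    }
    return 0;                     /* chain ends at a running process */
}
```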

        Fortunately deadlock is rare in well-structured QNX applications.  If you
discover that your application is deadlocked, your only recourse is to return to the
design drawing board and look closely at the processes involved.

        Deadlock-free systems can be guaranteed by following one of two simple
design paradigms.

                Avoiding Deadlock

        If you ensure that each process only Sends or only Receives (but not
both), then regardless of the system's complexity, deadlock is avoided.

        Another technique is to organize your application into layers, and adopt
the convention that a lower layer can only send to a higher layer, never to the
same or a lower layer.  This also ensures a deadlock-free design.  (Layers are
not significant other than to categorize or organize your processes for the
explicit purpose of following the send-to-higher-layer convention.)

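
        The layering convention can even be checked mechanically at design
time.  The sketch below (names invented) takes each process's layer number
and the list of Send edges, and verifies that every Send goes strictly upward:

```c
/* sends[i][0] is the sending process, sends[i][1] the receiver.
   layer[] assigns each process its (arbitrary) layer number. */
int design_is_deadlock_free(int (*sends)[2], int nsends,
                            const int *layer)
{
    int i;
    for (i = 0; i < nsends; i++)
        if (layer[sends[i][0]] >= layer[sends[i][1]])
            return 0;   /* same- or downward-layer Send: cycle possible */
    return 1;
}
```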
        F.      Summary

        The basic parts of QNX include: the micro-kernel, processes and special
processes known as Servers.

        QNX uses a synchronous message passing scheme with three basic
primitives: Send, Receive and Reply.  There is a non-blocking form of Send
known as a proxy.

        Adopting either the paradigm of having processes only Send or only
Receive, or that of layering your application, ensures a deadlock-free system.

III.    Servers

        A.      Introduction

        We now have enough background to discuss processes known as Servers.

        By now you know that a Server is the manager of a resource and the Client
is the application program.  A Server accepts requests from Client processes and
arranges for those requests to be fulfilled.

        Servers themselves are well-defined processes with a common application
layer interface.  Often a Server provides access to a hardware device, but
Servers are not used exclusively as such.  A Server may instead provide a
software interface to a number of processes; for example, a Server could be
written to administer software queues.  For Servers that do manage hardware,
a well-designed Server isolates the application program from the details of
that hardware as much as possible.  Servers also make an application program
more portable with respect to the physical device: from one serial controller
chip (SCC) to another, a serial Server remains a serial Server from the
Client's perspective, as does the programming interface.

        Simple Servers consist of a single process and possibly an interrupt handler
(Figure 3).  More complex Servers can use one or more Agent processes and/or
interrupt handlers (Figure 4).  Agent processes are also often used to simplify the
design and/or operation of the Server.  Agents can also be used to enforce the
integrity between the Client application and the Server.

        If a Server is complex, that is, if it has more than one process, the main
controlling process is referred to as the Manager.  The other processes (Agents), if
used, further distribute the work to be done.

        The Manager controls the flow of all the work.  It ensures requests from
Client processes are handled in the appropriate order.  If hardware resources are
involved, then the Manager can ensure that they are used in the most efficient
way.  (Elevator seeking from within the file Server is a good example of ordering
requests to make device access more efficient.)

        Servers can also enforce security or integrity constraints.  The File Server
might enforce file access privileges, so that certain processes can read and write a
file, while others can only read it.

        Servers that provide access to devices may also have one or more interrupt
handler(s).  With the interrupt handling services provided by QNX, interrupt
handlers can be written in C and require no special set up or exit code.

        Many aspects of a Server depend on what the Server is meant to do.  While
a serial I/O Server is very different in the services it provides as compared to a
data acquisition Server, all Servers share one of two common message handling
mechanisms.

        B.      Server Message Handling Mechanisms

        The first message handling mechanism is transaction oriented while the
second is state oriented.

                Transaction Oriented Servers

        Transaction oriented Servers treat each request as new and complete unto
itself.  This requires the sender to provide all information to fulfil the request with
each and every message.  Further, the Server does not have to remember the last
action that was performed on behalf of a particular Client in the event that the
next request is related.  Nor does the Server usually have to worry about the
death of a Client (unless resources have been allocated by the Server on the
Client's behalf).

        A simple example of a Transaction Oriented Server can be illustrated
with the following C code fragment:

for ( ;; )
{
    pid = Receive( 0, &msg, sizeof msg );
    switch ( msg.type )
    {
        case READ_DATA:
            ...
            break;

        case WRITE_DATA:
            ...
            break;

        default:
            ...
            break;
    }
    Reply( pid, &msg, replySize );
}

        As can be seen, the Server is structured as an infinite loop,
receive-blocked waiting for any process (the 0 passed to Receive) that wants
to send to it.  The Server recognizes the work request code, performs the
service and replies to the sender.  The cycle then repeats.

        Each case performs an action, or calls a function, that sets the variables
msg and replySize.  A slight variation on this would remove the Reply from
the bottom of the loop and place one within each case.

                State Oriented Servers

        State oriented Servers are similar to transaction oriented Servers, but
typically require the sending process to open the resource before accessing it
and to close the resource when done.  The open request supplies a symbolic
name used to gain access to a resource; the Server's reply returns some sort
of handle identifier, which the Client then uses in subsequent requests.

        State oriented Servers are more likely to allocate resources on the Client's
behalf.  As a result, the Server needs to be notified in the event of the Client's
voluntary termination (close) or its untimely demise (abrupt termination).

        The File Server (Fsys) for example, is state oriented.  An initial open
request is followed by a series of read and/or write requests subsequently
followed by a close.

        Because state oriented Servers are more complicated, they warrant an
example.  As such, our Chat Server will be constructed based on this message
handling mechanism.
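
        A minimal model of the bookkeeping a state oriented Server keeps is
sketched below: an open allocates a handle recorded against the requesting
Client's pid, and a close (or notice of the Client's death) releases it.  The
message passing itself is elided, and the names and sizes are illustrative:

```c
#define MAX_HANDLES 8

struct handle_entry {
    int in_use;
    int owner_pid;
};

static struct handle_entry handles[MAX_HANDLES];

/* Handle an open request: return a handle for the reply, -1 if the
   Server has no resources left to allocate. */
int srv_open(int pid)
{
    int h;
    for (h = 0; h < MAX_HANDLES; h++) {
        if (!handles[h].in_use) {
            handles[h].in_use = 1;
            handles[h].owner_pid = pid;
            return h;
        }
    }
    return -1;
}

/* Handle a close: only the opening Client may release its handle.
   The same path serves for cleaning up after a Client's demise. */
int srv_close(int pid, int h)
{
    if (h < 0 || h >= MAX_HANDLES || !handles[h].in_use ||
        handles[h].owner_pid != pid)
        return -1;
    handles[h].in_use = 0;
    return 0;
}
```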

        C.      External Server Organization

        While Servers organize their command/response handling in one of two
ways, all Servers share a common external organization.  As mentioned above,
Servers are composed of up to three commonly named pieces: the Manager, the
Agent and the interrupt handler.  Let's look at each of these in more detail.

                The Manager Process

        A minimal Server has only a Manager.  The Manager, as the name
implies, manages all the Client requests.  It is the process that is initially
loaded by the system, and it is the process that creates the Agent processes, if
required.  If there are no Agents, the Manager still receives the requests but
must perform the work itself.

        Usually, the Manager provides the interface with which the Client
application interacts.  However, it is not uncommon to have Agent processes
between the Client and the Manager in more complex Servers.

                The Agent Process

        As mentioned above, a Server may have one or more Agents.  Agent
processes further distribute the work to be done by the Server.  Agents are often
associated on a per Client basis (an Agent per Client) or on device data flow
direction (one Agent to write to the device, and one Agent to read from the device)
(see Figure 4).

        A good rule to follow when writing Agent code is to make Agent processes
as hardware independent as possible.  If this is done correctly, Agents should be
the second easiest process to integrate on new devices after the Manager.

        One common mistake made by neophyte QNX programmers is assuming
that all work requests must be sent (via Send) to Agents.  In fact the opposite
is true: the Manager should never Send to an Agent process.  How then can
the Manager ask an Agent to do some work?  The answer is to have the Agent
process "report for service" and to have the Manager issue all work requests to
the Agent in the form of a Reply.  A typical Manager/Agent interaction goes as
follows: the Manager creates the Agent; the Agent runs through its
initialization code and then Sends to the Manager, reporting that it is ready to
handle work requests (thus "reports for service").  The Manager then simply
leaves the Agent reply-blocked until such time as the Agent's services are
required.  When the Agent is needed, the Manager Replies to the Agent with
the work request and then goes back into the receive-blocked state (by
executing Receive) so that it can accept additional Client requests.  Once the
Agent has completed its work, it Sends to the Manager with the results.

        An example of this would be distributing work among nodes involved in
computing a Fast Fourier Transform (FFT).  Agent processes could be created
on additional network nodes included in the calculation, and each would
report back to the Manager for service.  Once all Agents had reported back,
the calculation could be "sliced up" and a Reply sent to each Agent with its
segment of the calculation.  Once an Agent had finished its calculation, it
would Send the results back to the Manager, who would add this piece to the
puzzle.  This whole reply-receive-add cycle would repeat until the calculation
was completed.  Distributing work among Agents has the advantage that some
Agents (those with a numeric data processor, for example) will turn around
slices faster than others, but no single Agent becomes the limiting process.

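
        The report-for-service idiom can be modeled from the Manager's side.
Agents that have Sent their "ready" message are just pids parked in a list
(they sit reply-blocked); dispatching work is picking one to Reply to.  The
Send and Reply calls themselves are elided here, and all names are
illustrative:

```c
#define MAX_AGENTS 8

static int idle_agents[MAX_AGENTS];   /* reply-blocked, awaiting work */
static int n_idle;

/* An Agent has reported for service: remember it, do not reply yet. */
void agent_reports(int pid)
{
    idle_agents[n_idle++] = pid;
}

/* Work has arrived: return the pid of an Agent to Reply to with the
   work request, or -1 if all Agents are busy (queue the work). */
int dispatch_work(void)
{
    if (n_idle == 0)
        return -1;
    return idle_agents[--n_idle];
}
```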
                The Interrupt Handler

        A Server that manages a device will often have one or more Interrupt
Handlers. (Interrupt Handlers are sometimes referred to as interrupt service
routines or ISRs.)

        Interrupt Handlers are very device dependent.  They should be isolated
from the rest of the Server code and identified as such, to ease porting to
future devices.

        Under QNX, an ISR almost always communicates directly with the Manager
via a Proxy.

        The ISR resides within the Manager's process space, and as such, is
attached to the interrupt by the Manager (via the qnx_hint_attach function) as
part of its initialization sequence.

        It's interesting to note that, given QNX's flexibility, it is possible for an
interrupt handler (embedded in a process) to reside on another node or
processor and communicate with the Manager via a network-transparent
proxy.  This is of course not very efficient, but it can be done.

        One tip for making interrupt handling more efficient is to have your
interrupt handler gather multiple interrupts before triggering the proxy
(Figures 5 and 6).  Handling interrupts this way means fewer context switches
(which are relatively expensive) occur, since interrupt handlers themselves do
not cause a complete context switch.  Servicing interrupts without a complete
context switch makes for very lean real-time response times.
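
        The batching tip can be sketched as follows: the handler counts events
and only fires the proxy once per BATCH interrupts, so the Manager is
scheduled (one context switch) per batch rather than per event.  Here,
trigger_proxy() is a stand-in for QNX's Trigger(), and BATCH is an
illustrative tuning knob:

```c
#define BATCH 8

static int events_pending;
static int proxies_fired;

static void trigger_proxy(void)   /* stand-in for Trigger(proxy) */
{
    proxies_fired++;
}

/* Called once per hardware interrupt. */
void isr(void)
{
    if (++events_pending >= BATCH) {
        events_pending = 0;
        trigger_proxy();          /* wake the Manager once per batch */
    }
}
```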

        As the designer of the interrupt handler, you also have to make the
trade-off between latency and throughput.  On one hand, you can improve
throughput by having the interrupt handler process more of the data without
causing a context switch; on the other hand, by exiting your interrupt handler
quickly you reduce the interrupt latency.  (Interrupt latency is the time taken
from the reception of a hardware interrupt until the first instruction of a
software interrupt handler is executed.)  Also, if the interrupt handler causes
another process to be scheduled (by triggering a proxy), exiting your handler
quickly reduces the scheduling latency as well.  (Scheduling latency is the time
taken between the termination of an interrupt handler and the execution of
the first instruction of a driver process.)  Stated differently: if your application
demands a real-time response to all interrupt-generated events, keep the
amount of code in the interrupt handler to an absolute minimum.  This
ensures that interrupts at the handler's level and below are masked for the
least amount of time.  But if throughput is the more crucial issue, move more
code into the interrupt handler and the overall throughput of the system will
improve.  Ultimately, let the compute resources available to your application
be your guide.

        When an interrupt occurs, QNX masks all further interrupts at the same
priority level and below.  If the kernel is in an interrupt handler and a higher
priority interrupt occurs, then this new interrupt is NOT masked and handling
proceeds as expected.  When the higher priority interrupt finishes, the original
interrupt handler will be re-entered and allowed to continue.  This technique of
masking to the current level ensures that high priority interrupts are serviced in
accordance with their priority, thus reducing interrupt latency.

        Interrupt handlers in the QNX environment can be written entirely in C. 
Services are provided by the operating system to ensure that entry and exit to the
handler code is managed properly.

        Although QNX intelligently re-arranges the interrupt levels, giving
higher priority to the devices that require it, there are start-up arguments
passed to the Process Manager (Proc) that allow you to re-arrange the
priorities yourself.  Table 1 below lists the default levels.

        QNX Level    Hardware Level    Description
        Highest            3           Serial Port #2
                           4           Serial Port #1
                           5           Available on AT
                           6           Floppy Disk Controller
                           7           Available on AT
                           0           Timer 0
                           1           Keyboard
                           2           AT (slave 8259)
                           8           AT RTC (Real Time Clock)
                           9           IRQ2
                          10           AT/PS2 Reserved
                          11           AT/PS2 Reserved
                          12           AT/PS2 Reserved
                          13           AT/PS2 80287
                          14           AT/PS2 Disk Controller
        Lowest            15           AT/PS2 Reserved

                        Table 1 - QNX Interrupt Ordering

        D.      Internal Server Organization

        When writing a portable, maintainable Server, consider the following
guidelines while designing the Server and its application programming interface
(API):

        -       Try to make it as easy as possible for users to set up
                their applications to use the Server.  Pay special
                attention to the most commonly used parts of the
                Server and to how those cases can be handled with
                almost no set-up.

        -       Hide the coding of memory locations, device I/O ports,
                etc., as much as possible.  Your Server should assume a
                reasonable default hardware configuration, only forcing
                the user to enter hardware parameters when their
                configuration differs from the default.

        -       If a Server has hardware-dependent modules, keep
                them in separate files for ease of replacement later.  A
                little thought put into designing the interface between
                the machine-dependent and machine-independent
                parts of the Server often means that the only work
                required to use the Server with different hardware is to
                create the machine-dependent portion and link it with
                the rest of the Server.  Application code should not
                have to be changed, nor even re-compiled, to use the
                same Server with different hardware.

        E.      The Client Interface

        To make the Server more accessible to the Client process, the Server writer
should provide interface functions that build and send messages to the Server.  In
this way the Server will be guaranteed to receive at least a portion of the Client's
message in an intelligible form.

        Providing interface functions also allows for first level error checking before
a message is ever sent.

        Interface functions can also hide interface complexity from the Client
process by supplying information that is difficult for the Client process to obtain. 
For example, the file Server employs such a technique with its open function. 
The open function takes three (3) formal parameters: a filename, an access
method and a permission request.  You will notice that nowhere in that function is
the file Server pid specified, yet the request is sent as a message to Fsys. 
Obviously the pid and any additional information required to fulfil the request is
determined by the code contained within the open function.
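
        A sketch of an interface function in this style is shown below: the Client
supplies only high-level arguments, and the function builds the message,
locates the Server and sends, performing first-level error checking before
anything is sent.  Here locate_server() and send_msg() are stand-ins for the
real name-locate and Send calls, and the message layout and names are made
up for illustration:

```c
#include <string.h>

#define MSG_QPUT 0x1000            /* illustrative action code */

struct qput_msg {
    unsigned short type;
    char           text[64];
};

static int             last_pid;   /* what the stand-ins recorded */
static struct qput_msg last_sent;

static int locate_server(const char *name)
{
    (void)name;
    return 42;                     /* pretend pid of the Server */
}

static int send_msg(int pid, struct qput_msg *m)
{
    last_pid = pid;
    last_sent = *m;
    return 0;
}

/* The Client calls only this: no pid or message layout in sight. */
int queue_put(const char *text)
{
    struct qput_msg msg;

    if (strlen(text) >= sizeof msg.text)
        return -1;                 /* rejected before any Send */
    msg.type = MSG_QPUT;
    strcpy(msg.text, text);
    return send_msg(locate_server("queue_server"), &msg);
}
```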

        The important thing to remember is to keep the API to your Server as
simple as possible.  Provide Server interface functions that are few in number and
have a low formal parameter count.  This will ensure the least number of Client
coding errors.  Also, be sure to include prototypes for your interface functions.
This will allow the compiler's prototype checking to verify that the arguments
supplied to the Server functions are of the right type and order.

                A Minimum API Functionality

        For transaction oriented Servers there is no minimum interface function set. 
This is because each request is separate from all requests that occurred before and
after it.

        State Servers however, should provide at least two (2) API functions:  open
and close.  The open function should return a status code and some sort of
identifier to be used by the Client when making subsequent requests from the
Server.

                The Client/Server Message Structure

        As mentioned earlier, you, the Server writer, are free to choose any structure
and content for the messages that are passed between Client and Server.  However,
you would be better served if you adopted the QNX standard message format.

        If you choose to adopt QNX style messages, your Server will integrate
seamlessly into the QNX environment, and by default, you will have a program
structure in place to accept Server privileged messages from all of Quantum's
system processes including the process manager (more on this later).

        Quantum's system messages consist of a header portion and zero or more
data bytes.  This is true for both Send (therefore Receive) and Reply messages.
        The Send/Receive header consists of two (2) unsigned shorts contained
within a C structure.  The structure, called _sysmsg_hdr, contains the members
type and subtype.  This structure is declared in the file
/usr/include/sys/sys_msgs.h.  The Reply header is also declared in the same
file but has the C structure name of _sysmsg_hdr_reply - with two unsigned
short members status and zero.

        For the header's type member, the following table shows the various ranges
defined by Quantum:

msg.hdr.typeDescription0x0000 to 0x00ffProcess Manager messages0x0100 to 0x01ffI/O messages (common to all I/O servers)0x0200 to 0x02ffFile System Manager messages0x0300 to 0x03ffDevice Manager messages0x0400 to 0x04ffNetwork Manager messages0x0500 to 0x0fffReserved for future system processesTable 2 - Quantum defined message types
        The Server writer is free to use any value for type starting at 0x1000
(4096 decimal) up to the full size of an unsigned short (sixteen (16) bits, 0xffff or
65535 decimal).  (If this range seems limiting, don't forget about the second
unsigned short available in the subtype field.  Combining the two gives you over
4 billion unique Server action codes!)

        To request system messages, your Server must be a super-user process and
apply for the system messages with the qnx_pflags function.  There are various
flags that can be chosen, and you are referred to the text associated with that
function for more information.

        In our Chat application we will only concern ourselves with system
messages with header type equal to 0.  At the time of writing, only 4 subtypes
are defined.  These are symbolically represented by the following manifests:

                 _SYSMSG_SUBTYPE_SIGNAL
                 _SYSMSG_SUBTYPE_DEATH
                 _SYSMSG_SUBTYPE_TRACE
                 _SYSMSG_SUBTYPE_VERSION

        All of these manifests are defined in the sys_msgs.h file.  The trace
message is beyond the scope of this document, but the other three (3) messages
(signal, death and version) will be discussed below.  As well, the Chat Server will
accept all four (4) messages but only act on death notification and version
requests.

        Signal messages help the Server adapt to the unexpected withdrawal of a
Client from a message exchange as a result of that Client receiving an
asynchronous signal.  Imagine that we have a Client process that calls Send to
send a message to the Server.  Further imagine that the Client gets hit by an
asynchronous signal by some other (third) process through the kill function. 
This effectively causes the Client to become ready (a -1 is returned by the Send
function when this happens).  The Server, at this point, is in one of two states.  It
may not have received the message from the Client yet (i.e. the signal occurred before
the message exchange began, so the Server is unaware that a message was sent
and then removed), or the Server may have received the message and is now ready or
running - and is possibly performing the work on the Client's behalf.  In the first
case (the message didn't get through) the Server is none the wiser.  In the second
case (where the Server has already begun processing the request), Reply is a
non-blocking primitive, so the fact that the Client is no longer reply-blocked is
inconsequential, but the reply message simply disappears.  So how does the
Server guarantee the delivery of each and every message to a Client?  What is
needed is a mechanism that prevents Clients from withdrawing from the message
exchange in the event of a Client receiving an asynchronous signal.  This is where
the _PPF_SIGCATCH flag in the qnx_pflags function comes into play.  A process
that is reply blocked will be unable to withdraw from the message exchange
until the Server it is corresponding with permits it to do so (by replying to the
_SYSMSG_SUBTYPE_SIGNAL message).  This is exactly what we need to guarantee
end to end delivery of all messages.  From the Client's perspective, a Send to a
process that has requested signal messages will never fail (due to the Client
receiving an asynchronous signal).  From the Server's point of view, it ensures
that any asynchronous signal will be held off until after the Send, Receive,
Reply cycle has completed.

        Provided you have requested them, Death messages arrive in the event that
any process on the local node terminates (for any reason).  To give you an idea of
how to process these messages let's look at the Chat Server.  The Chat Server,
upon receiving a death message, checks to see if the pid matches that of any of the
processes that are engaged in the Chat session.  If it finds this to be the case, it
releases any resources that were allocated on the Client's behalf.  It is important
to note that regardless of whether the process was one that we cared about or
not, we must reply with a reply header status of EOK.  EOK is a C manifest
defined to be zero (0) indicating that everything is alright (a convention of the
POSIX committee).

        Finally, version messages are handled by filling in the
_sysmsg_version_reply structure (the details of which are shown in the Chat
Server code) and again replying with EOK.  Using this feature allows you to query
your Server at run-time with the "sin version" command.

        For all system messages, it is the Server's responsibility to reply in a timely
fashion, and if you must perform some work upon receiving a death message, it is
suggested that you reply immediately to the system message and then proceed
with the clean-up.

                Making the Client Interface Functions Efficient

        Messages sent through the Send, Receive and Reply primitives are copied
from the data space of the sender process, to the data space of the receiver
process, by the Kernel.  This means that the message must be contiguous in
memory.

        This restriction can be inconvenient.  For example, imagine the File Server
got a request to read a number of blocks from a file on the disk drive.  Internally
the data would be read and cached into buffers that would not necessarily be
contiguous.  Therefore to reply with the data to the Client would require one of
two solutions.  The first solution would be to have the Client fetch a single block
per request - incurring a context switch for each and every block.  The second
solution is to copy the buffers into a contiguous space and pass that whole thing
back to the Client process.  This obviously is not a good solution either, because
the data is copied twice (once by the Fsys to the buffer, and once by the Kernel to
Client buffer).  Further, the Client now has to pre-allocate a large enough message
to fulfil the worst case request.

        You as a Server writer are faced with a similar problem.  If you provide a
function to the Client process that hides some of the details of the message
structure (the header portion, for example), the data portion will be passed to you
by the Client and will then have to be appended to the header before calling the
Send primitive.  This also results in a double copy.

        To reduce the amount of message passing overhead under this scenario,
QNX provides a special class of message passing primitives known as the mx
primitives.  These primitives provide the same functionality as Send, Receive
and Reply, but are optimized to recognize that most messages consist of a header
followed by one or more data records.

        With the mx primitives you simply supply the appropriate message passing
primitive (Sendmx, Receivemx or Replymx) with a structure containing the
address of the header portion and the address of data portion(s) and the Kernel
takes care of the rest.

        F.      Priority Inversion

        One common problem affecting high priority Server processes as they
interact with lower priority Clients is that all work requests are treated equally. 
That is, regardless of a Client's priority, the work is performed by the Server at
its own priority.

        Having the Server ignore a Client's priority has the effect of causing the
Server to do the Client's work, but at the Server's priority.  This is referred to as
Priority Inversion.  Further, a low priority process could potentially preempt a
higher priority process by requesting a Server to do some work (thus robbing the
higher priority process of its ability to complete its more important work).

        There is no easy way to deal with Priority Inversion but there are a couple
of techniques provided by QNX to help ensure that Client requests are dealt with
in accordance to their importance (priority).

                Priority Message Queuing

        One technique used in offsetting priority inversion is to have messages
queued in accordance with the priority of the process making the request (again via
qnx_pflags) rather than in timed arrival order.  Queuing messages this way will
only help in the event that multiple work requests are waiting on the Server.  If
the Server typically has one (1) outstanding work request, Priority Message
Queuing will be of little value.

                Client Driven Priority

        Another technique is to have the Server change its priority to match the
Client currently making the work request (via qnx_pflags).  If the priority of the
Server is lowered or raised to the same level as the Client - then this ensures that
other processes running at a higher priority (than the Client) continue to receive
the processing cycles required.

        If the Server is processing on behalf of a low priority Client and a high
priority Client sends a work request, the Server will be immediately boosted to the
level of the Client that just sent that request.  This allows the Server to finish
processing the lower priority request, and begin working on behalf of the high
priority Client sooner.

        The Server's priority will be the maximum of the priorities of all Clients
send-blocked on it.  This Server option also automatically enables the aforementioned
Priority Message Queuing.

        G.      Server Related Operating System Services

                The Name Server

        Locating Servers (or other essential processes) can be a problem since pids
are assigned dynamically at process creation time.  There is no guarantee that a
given process will have the same pid each time it is created.

        To help make it easier to find shared processes, QNX allows any process
(including Servers) to register a symbolic name with the Process Manager.  The
symbolic name can then be queried (via qnx_name_locate) and if found the
function will return the pid associated with the symbolic name.  Once the pid has
been obtained, the usual message passing primitives can be used.

        Each Process Manager (there is exactly one Process Manager per processor)
maintains its own set of named processes.  Process names may be up to 32
characters in length.  If the first character is a forward slash (oblique, /), then the
name is considered to be global to all processors and network nodes.  If the oblique
is not the first character, the name will only be visible to processes on the same
processor or network node (global versus local visibility).

        The name you choose for your Server should be unique.  This can be done
by using your company name as the prefix.  This makes the Server provider
identifiable, and guarantees that if the name is unique within your
company it will be unique to the QNX community at large.  From a user's
perspective, this ensures that if more than one company provides a "daq" (data
acquisition) Server, naming collisions won't occur.  The following table provides
some naming examples.

NameScope/acme/daqTM3010Globalacme/daqTS6719Local/qnx/ChatServerGlobalTable 3
        In our sample Server, "/qnx/ChatServer" will be attached as its registered
name with the qnx_name_attach function.  Subsequently created Chat Clients
can then use the qnx_name_locate function with this symbolic name to locate
the Server.  Note that only one ChatServer is required per network.  Clients can
run on any processor or node and communicate with any other Client on the
network.

        Remember, if you do a qnx_name_locate and the named process is on a
different processor from which the call was made, a virtual circuit will be
established from your local node to the remote node.  A virtual circuit is the
mechanism through which messages are passed between nodes.  It follows then
that repeatedly calling qnx_name_locate without a corresponding
qnx_vc_detach will consume process table space.  It is therefore recommended
that you call qnx_name_locate once and store the process id returned in a
variable for future reference.

                Proxies

        Proxies are a very powerful extension to the existing message passing
primitives within the QNX environment.

        This form of a non-blocking send can help ensure that our Server is never
compromised (send blocked) on an application process.  Triggering a proxy is
conceptually the same as sending a message,  although the message cannot vary
from trigger to trigger.

        As stated earlier, Servers should never send messages to Client processes
lest the Client neglect to reply - leaving the Server reply-blocked.  Because
Proxies are non-blocking, they allow us to relax this rule.  Overall, this can greatly
simplify the design of the Server, and the Client/Server model is reduced to that
shown in Figure 4.

        Unlike regular messages, Proxies don't necessarily arrive in the same order
in which they were sent (triggered).  This is because the Process Manager only
keeps one copy of the Proxy in the process table and simply increments a count
indicating the number of times the Proxy has been triggered.  For example,
suppose that process B has two proxies (p1 & p2).  Further suppose that process A
is at a higher priority and is currently executing.  If process A triggers p1 then p2
alternatively for 2 counts (p1,p2,p1,p2) and then blocks, process B would receive
p1,p1,p2,p2.  See Figure 7.

        Servers provided with the QNX operating system know about and use
proxies.  For example, the device Server Dev can trigger a proxy whenever there is
keyboard input ready.  This prevents us from having to poll the keyboard using up
processor cycles unnecessarily.  The concept of Proxies is so universally applicable
to all applications that we will use them later in our Chat application.

IV.     Client/Server Example - Chat

        A.      Introduction

        This section will put together all the pieces that have been discussed from
the previous sections.  We will design, develop and implement a Chat Server.

        Chat can be used to allow many users to engage in an on-line conversation. 
The users may be on consoles, terminals, or modems anywhere on the network. 
As each user enters a line of text it is distributed to all other users participating
in the "chat".  This program is ideal for a multi-user conference system or large
networks.  Figure 8 shows the structure of the Server.

        B.      Overall Structure

        We will write Chat with two distinct parts, a Chat Server process with one
Chat Client process per user.  As the number of users changes so does the number
of Clients - but only one Server is ever needed.

        Our Chat Server and Client will be network transparent - a feature that is
easily obtainable within the QNX environment.

        C.      The Chat Server

        The Chat Server will be a state driven, synchronous, message passing
process.  We will use standard QNX services and will write it entirely in C.

        Through the use of proxies, Agent processes will not be needed as
intermediaries between the Server and the Client.  New message arrivals will
trigger a Client proxy requesting that the Client send for the new message.

        The Server will receive the Client requests directly and act on them itself
(this aspect of the Server is also exclusive of Agents).

        The Server will use the standard QNX message structure.  The message
will consist of a header portion with a type and subtype field followed by zero (0)
or more bytes of data.

                Design

        The Chat Server provides its services to trusted Client processes
(remember, we wrote both parts).  As such, the Server will never be compromised
by the Client.

        But most Server writers will not be in such a fortunate position.  In an
effort to demonstrate how a Server writer goes about protecting the Server, we
will adopt the un-trusted Client posture when designing and implementing our
server.  As such, each Client request (subsequent to the open) will have its handle
validated.

        It's important for the Client to allocate the proxy even though the Server
has enough information to do so itself.  This is because proxies must be allocated
on the processor (network node) local to the Client process - if network
transparency is to be achieved.

                Message and Data Structures

        Because of our adoption of the standard QNX message structure, we assign
type = 100016 for Chat messages.  The following is a table of combined (system
and Chat) symbolic message types that can be expected by the Server.

Manifest typeManifest subtypeDescription_SYSMSG_SYSMSG_SUBTYPE_DEATHProcess Manager process death message_SYSMSG_SYSMSG_SUBTYPE_VERSIONProcess Manager requesting version
informationCHATMSGCHATMSG_SUBTYPE_OPENChat Client request to join sessionCHATMSGCHATMSG_SUBTYPE_POSTChat Client posting of messageCHATMSGCHATMSG_SUBTYPE_FETCHChat Client request of new messagesCHATMSGCHATMSG_SUBTYPE_CLOSEChat Client request termination of sessionTable 4 - Chat Message Types
                Privileges and Responsibilities

        The Chat Server will accept the responsibility of receiving system messages. 
Even though all system messages will be replied to, only death notification and
version requests will be acted upon.

                QNX Services

        In addition to the standard library functions provided by the compiler for
things like I/O, we will also use some QNX specific services in order to fulfil our
Server design requirements.  These include:

                       Name registration (Our Chat Server will register globally).
                       Message passing (Using Receive/Reply primitives)
                       Proxies  (One proxy for new message notification, one proxy to
                        indicate keyboard activity).
                       Process death notification (In order to close sessions that may
                        terminate abnormally (killed)).

                Pseudo Code

        The following is an overview (in pseudo code) of the Chat Server operation.

Check that the Server has been started as a background process
Register name globally ensuring network wide uniqueness
apply for Server status from the Process Manager (all Server privileges as well as
death notification)

while ( FOREVER )
    Receive( GENERAL )
    switch ( type )
         case _SYSMSG
            switch ( subtype )
                case death:
                    if ( this is a Chat participant )
                        Perform CLOSE on Client's behalf
                        if ( no more participants )
                            reset message buffer
                        else
                            Inform other participants of unexpected
                            Client withdrawal
                        endif
                    endif
                    Reply( EOK )
                break
                case version
                    build version reply message
                    Reply( EOK )
                break
                default
                    Reply( E_NOSYS )
                break
        break
        case CHATMSG
            switch( subtype )
                case MSG_OPEN
                    Allocate Client open slot
                    Add a join message
                    Reply( EOK )
                break
                case MSG_POST
                    Add message to message buffer
                    Reply( EOK )
                break
                case MSG_FETCH
                    Get message from message buffer
                    Reply( EOK )
                break
                case MSG_CLOSE
                    De-allocate Client open slot
                    if ( no more Clients )
                        Reset message buffer
                    else
                        Post Client withdrawal message
                    endif
                    Reply( EOK )
                break
                default
                    Reply( E_NOSYS )
                break
            endswitch
        break
        default
            Reply( E_NOSYS )
        break
    endswitch
endwhile
        D.      The Chat Client

        Our Chat Client will be an event driven, synchronous, message passing
process.  It too will use standard QNX services and be written entirely in C.

                Design

        The Chat Client will send all the Chat related messages (type =
CHATMSG) outlined above - in addition, it will receive two additional messages
(via proxies) from the Server (in the event of new messages) and from the
keyboard (in the event of keystrokes).

        We have already stated that the Chat Client will allocate a proxy and pass
it to the Server as part of the open request, similarly a second proxy will be
allocated and given to Dev (the device Server for the console).  Dev will then be
told to trigger that proxy whenever it detects keyboard activity.

                Message and Data Structures

        In addition to the aforementioned Chat messages (type = CHATMSG) sent
to the Chat Server, the following two message types arrive via the Dev and Chat
Server proxies respectively.

Manifest typeManifest subtypeDescriptionMSG_KEYBOARD_READYnot usedProcess Manager process death messageMSG_MESSAGE_READYnot usedProcess Manager requesting version
informationTable 5 - Chat Client Proxy Messages
                Privileges and Responsibilities

        The Client is an ordinary process with no special privileges or
responsibilities.

                QNX Services

        Like the Server we will use the standard system library and use all the
services outlined in the Server section above.

                Pseudo Code

        The following is an overview (in pseudo code) of the Chat Client.

Locate the Chat Server
Obtain a local proxy to be given to the Chat Server
ChatOpen() - OPEN request to the Chat Server
Obtain a local proxy to be given to Dev
Turn on raw character processing from keyboard
Clear any triggers that may be outstanding on the keyboard
Arm a trigger on keyboard input (using our designated proxy)
Done := FALSE
while ( ! Done )

    Receive ( GENERAL )
    switch ( type )

        case MSG_KEYBOARD_READY
            Turn off raw processing of keyboard characters (line edit enabled)
            Print input prompt and get one line message
            if ( first character = '.' )
                Done = TRUE
                continue
            endif
            Turn on raw character processing from keyboard
            Arm a trigger on keyboard input (using our designated proxy)
            ChatPostMsg( ChatServer, Handle, Smsg )
        break

        case MSG_MESSAGE_READY
            do
                type = MSG_FETCH
                ChatFetchMsg( ChatServer, Handle, Rmsg )
            while ( another message is available )
        break

        default
            Log spurious message
    endswitch
endwhile
V.     Appendix A

        Included here is the complete Client/Server Chat Application.

        A.      The Chat Server Source Code

        There are three files that make up the application:  ChatServer.c, the
Server source code; ChatClient.c, the Client source code; and Chat.h, the header
file that contains manifests and definitions that are common to both the Client
and the Server.



                                         Insert Source Code Here