DNIX


DNIX was a Unix-like real-time operating system from the Swedish company Dataindustrier AB. A version called ABCenix was also developed for the ABC 1600 computer from Luxor.

History

Inception at DIAB in Sweden

Dataindustrier AB was started in 1970 by Lars Karlsson as a single-board computer manufacturer in Sundsvall, Sweden, producing a Zilog Z80-based computer called Data Board 4680. In 1978 DIAB started to work with the Swedish television company Luxor AB to produce the home and office computer series ABC 80 and ABC 800.
In 1983 DIAB independently developed its first UNIX-compatible machine, the DIAB DS90, based on the Motorola 68000 CPU. This is where D-NIX made its appearance, based on a UNIX System V license from AT&T. DIAB was, however, an industrial automation company and needed a real-time operating system, so it replaced the AT&T-supplied UNIX kernel with its own in-house developed, yet compatible, real-time variant. This kernel was originally a Z80 kernel called OS8.
Over time, the company also replaced several of the standard UNIX userspace tools with its own implementations, to the point where no code was derived from UNIX and its machines could be deployed independently of any AT&T UNIX license. Two years later, in cooperation with Luxor, a computer called ABC 1600 was developed for the office market, while in parallel DIAB continued to produce enhanced versions of the DS90 computer using newer Motorola CPUs such as the 68010, 68020, 68030 and eventually 68040. In 1990 DIAB was acquired by Groupe Bull, which continued to produce and support the DS machines under the brand name DIAB, with names such as DIAB 2320 and DIAB 2340, still running DIAB's version of DNIX.

Derivative at ISC Systems Corporation

ISC Systems Corporation (ISC) purchased the right to use DNIX in the late 1980s for use in its line of Motorola 68k-based banking computers. This code branch was the SVR2-compatible version, and it received extensive modification and development at ISC's hands. Notable features of this operating system were its support of demand paging, diskless workstations, multiprocessing, asynchronous I/O, the ability to mount processes on directories in the file system, and message passing. Its real-time support consisted largely of internal event-driven queues rather than list-search mechanisms, static process priorities in two classes, support for contiguous files, and memory locking. The quality of the orthogonal asynchronous event implementation was rarely matched by contemporary commercial operating systems, though some approached it.

The asynchronous I/O facility obviated the need for the Berkeley sockets select or SVR4's STREAMS poll mechanism, though a socket emulation library preserved socket semantics for backward compatibility. Another feature of DNIX was that none of the standard utilities rummaged around in the kernel's memory to do their job. System calls were used instead, which left the kernel's internal architecture free to change as required.

The handler concept allowed network protocol stacks to live outside the kernel, which greatly eased development and improved overall reliability, though at a performance cost. It also allowed foreign file systems to be user-level processes, again for improved reliability. The main file system, though it could have been an external process, was pulled into the kernel for performance reasons. Were it not for this, DNIX could well have been considered a microkernel, though it was not formally developed as such. Handlers could appear as any type of 'native' Unix file, directory structure, or device, and file I/O requests that a handler could not itself process could be passed off to other handlers, including the underlying one upon which the handler was mounted. Handler connections could also exist and be passed around independently of the file system, much like a pipe. One effect of this was that TTY-like 'devices' could be emulated without requiring a kernel-based pseudo-terminal facility.
An example of where a handler saved the day was in ISC's diskless workstation support, where a bug in the implementation meant that using named pipes on the workstation could induce undesirable resource locking on the fileserver. A handler was created on the workstation to field accesses to the afflicted named pipes until the appropriate kernel fixes could be developed. This handler required approximately 5 kilobytes of code to implement, an indication that a non-trivial handler did not need to be large.
ISC also received the right to manufacture DIAB's DS90-10 and DS90-20 machines as its file servers. The multiprocessor DS90-20s, however, were too expensive for the target market, so ISC designed its own servers and ported DNIX to them. ISC also designed its own GUI-based diskless workstations for use with these file servers, and ported DNIX once again. The asynchronous I/O support of DNIX allowed for easy event-driven programming in the workstations, which performed well even though they had relatively limited resources. A full-blown installation could consist of one server and up to 64 workstations. Though slow to boot up, such an array would perform acceptably in a bank-teller application. Besides the innate efficiency of DNIX, the associated DIAB C compiler was key to high performance. It generated particularly good code for the 68010, especially after ISC's refinements. The DIAB C compiler was, naturally, used to build DNIX itself, which was one of the factors contributing to its efficiency, and it is still available through Wind River Systems.
As of 2006, these systems were still in use in former Seattle-First National Bank branches, by then branded Bank of America. There may be, and probably are, other ISC customers still using DNIX in some capacity. Through ISC there was a considerable DNIX presence in Central and South America.

Asynchronous Events

DNIX's native system call was the dnix library function, analogous to the standard Unix unix or syscall function. It took multiple arguments, the first of which was a function code. Semantically this single call provided all appropriate Unix functionality, though it was syntactically different from Unix and had, of course, numerous DNIX-only extensions.
DNIX function codes were organized into two classes: Type 1 and Type 2. Type 1 commands were those associated with I/O activity, or anything that could potentially cause the issuing process to block. Major examples were F_OPEN, F_CLOSE, F_READ, F_WRITE, F_IOCR, F_IOCW, F_WAIT, and F_NAP. Type 2 commands were the remainder, such as F_GETPID and F_GETTIME; these could be satisfied by the kernel immediately.
To invoke asynchronicity, a special file descriptor called a trap queue had to have been created via the Type 2 opcode F_OTQ. A Type 1 call would have the F_NOWAIT bit OR-ed with its function value, and one of the additional parameters to dnix was the trap queue file descriptor. The return value from an asynchronous call was not the normal value but a kernel-assigned identifier. When the asynchronous request completed, a read of the trap queue file descriptor would return a small kernel-defined structure containing the identifier and result status. The F_CANCEL operation was available to cancel any asynchronous operation that had not yet completed; one of its arguments was the kernel-assigned identifier. In addition to the kernel-assigned identifier, one of the arguments given to any asynchronous operation was a 32-bit user-assigned identifier. This most often referenced a function pointer to the subroutine that would handle the completed I/O, but this was merely convention; it was the entity that read the trap queue elements that was responsible for interpreting this value.
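As an illustration, issuing a read asynchronously might have looked roughly like the following sketch. The dnix prototype, argument order, and buffer handling shown here are assumptions; only the opcode names and the F_NOWAIT and trap-queue conventions come from the description above.

/* Illustrative sketch only: the real dnix() prototype and argument order are
 * not documented here.  The opcodes (F_OTQ, F_OPEN, F_READ, F_NOWAIT) would
 * come from a DNIX system header, not reproduced in this sketch.            */
extern long dnix(int fc, ...);                  /* hypothetical prototype     */

char buf[512];

void issue_async_read(const char *path, long user_id)
{
    int tq = dnix(F_OTQ);                       /* Type 2: create a trap queue */
    int fd = dnix(F_OPEN, path, 0);             /* Type 1, issued synchronously */

    /* OR F_NOWAIT into the function code, pass the trap queue descriptor and
     * a 32-bit user-assigned identifier (by convention, often a pointer to
     * the completion routine).                                               */
    long id = dnix(F_READ | F_NOWAIT, fd, buf, (long)sizeof buf, tq, user_id);

    /* 'id' is the kernel-assigned identifier; it could later be handed to
     * F_CANCEL to abort the request if it had not yet completed.             */
    (void)id;
}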

The structure read from the trap queue was small; reconstructed here with illustrative field names, it contained roughly the following:

struct itrq {                   /* data read from the trap queue             */
        short   it_stat;        /* result status of the completed request    */
        short   it_rn;          /* kernel-assigned request identifier        */
        long    it_oid;         /* 32-bit user-assigned identifier           */
        long    it_rpar;        /* returned parameter (the 'normal' result)  */
};

Of note is that the asynchronous events were gathered via normal file descriptor read operations, and that such reading was itself capable of being made asynchronous. This had implications for semi-autonomous asynchronous event handling packages that could exist within a single process. Also of note is that any potentially blocking operation was capable of being issued asynchronously, so DNIX was well equipped to handle many clients with a single server process. A process was not restricted to having only one trap queue, so I/O requests could be grossly prioritized in this way.
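A single-process event loop built on these primitives might have been structured as in the sketch below; the struct itrq field names follow the reconstruction above, and the dispatch-through-a-function-pointer convention is the one described in the text.

/* Illustrative completion loop: reads trap-queue entries and dispatches them.
 * The dnix() call form is an assumption.                                     */
typedef void (*completion_fn)(short status, long result);

void event_loop(int tq)
{
    struct itrq ev;

    for (;;) {
        /* An ordinary (blocking) read of the trap queue yields one entry per
         * completed asynchronous request.                                    */
        if (dnix(F_READ, tq, &ev, (long)sizeof ev) <= 0)
            break;

        /* By convention the user-assigned identifier was a function pointer
         * to the routine handling this completion.                           */
        completion_fn fn = (completion_fn)ev.it_oid;
        if (fn)
            fn(ev.it_stat, ev.it_rpar);
    }
}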

Compatibility

In addition to the native dnix call, a complete set of 'standard' libc interface calls was available: open, close, read, write, and so on. Besides being useful for backwards compatibility, these were implemented in a binary-compatible manner with the NCR Tower computer, so that binaries compiled for it would run unchanged under DNIX. The DNIX kernel had two trap dispatchers internally, one for the DNIX method and one for the Unix method. Choice of dispatcher was up to the programmer, and using both interchangeably was acceptable; semantically they were identical wherever functionality overlapped. The two dispatchers were selected by the trap instruction used to enter the kernel: one trap vector served the Unix-compatible calls, and the trap #4 instruction served dnix. The two trap handlers were really quite similar, though the unix call held the function code in the processor's D0 register, whereas dnix passed it on the stack with the rest of the parameters.
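For example, the same file could be opened through either dispatcher with identical results. The dnix argument order below is an assumption; the equivalence of the two interfaces is the point.

/* Two semantically equivalent ways to open a file under DNIX.  The dnix()
 * argument order here is illustrative, not documented.                     */
void open_both_ways(void)
{
    int fd1 = open("/etc/passwd", 0);            /* Unix-style libc call         */
    int fd2 = dnix(F_OPEN, "/etc/passwd", 0);    /* native dnix call, same effect */
    (void)fd1;
    (void)fd2;
}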
DNIX 5.2 had no networking protocol stacks inside the kernel; all networking was conducted by reading and writing to handlers. Thus there was no native socket mechanism, but a libsocket library existed that used asynchronous I/O to talk to the TCP/IP handler. A typical Berkeley-derived networking program could be compiled and run unchanged, though it might not be as efficient as an equivalent program that used native asynchronous I/O.

Handlers

Under DNIX, a process could be used to handle I/O requests and to extend the filesystem. Such a process was called a handler, and was a major feature of the operating system. A handler was defined as a process that owned at least one request queue, a special file descriptor that was procured in one of two ways: with an F_ORQ or an F_MOUNT call. The former created an isolated request queue, one end of which was then typically handed down to a child process. The latter hooked into the filesystem so that file I/O requests could be adopted by handlers. Once mounted on a directory in the filesystem, the handler then received all I/O calls directed at that point.
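A sketch of the two ways a request queue might have been obtained; the argument orders and the mount path are illustrative assumptions.

/* Illustrative only: argument orders and the mount point are assumptions.   */
void obtain_request_queues(void)
{
    int rq_isolated = dnix(F_ORQ);                      /* private request queue,
                                                           e.g. handed to a child */
    int rq_mounted  = dnix(F_MOUNT, "/services/db", 0); /* adopt I/O aimed at this
                                                           directory              */
    (void)rq_isolated;
    (void)rq_mounted;
}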
A handler would then read small kernel-assigned request data structures from the request queue. The handler would do whatever each request required to be satisfied, often using the DNIX F_UREAD and F_UWRITE calls to read from and write into the request's data space, and would then terminate the request appropriately using F_TERMIN. A privileged handler could adopt the permissions of its client for individual requests to subordinate handlers via the F_T1REQ call, so it did not need to reproduce the subordinate's permission scheme. If a handler was unable to complete a request itself, the F_PASSRQ function could be used to pass I/O requests from one handler to another. A handler could perform part of the work requested before passing the rest on to another handler. It was very common for a handler to be state-machine oriented so that the requests it was fielding from a client were all handled asynchronously. This allowed a single handler to field requests from multiple clients simultaneously without them blocking each other unnecessarily. Part of the request structure was the requesting process's ID and its priority, so a handler could choose what to work on first based on this information; there was no requirement that work be performed in the order it was requested. To aid in this, it was possible to poll both request and trap queues to see if there was more work to be considered before buckling down to actually do it.

The request structure read from a request queue carried the information described above; reconstructed here with illustrative field names, it looked roughly like this:

struct ireq {                   /* incoming request, read from a request queue */
        short   ir_fc;          /* function code (F_READ, F_WRITE, ...)         */
        short   ir_rn;          /* kernel-assigned request number               */
        long    ir_bc;          /* byte count                                   */
        long    ir_upar;        /* user parameter                               */
        unsigned short ir_uid;  /* user ID of the requesting process            */
        unsigned short ir_gid;  /* group ID of the requesting process           */
        long    ir_pid;         /* process ID of the requester                  */
        short   ir_prio;        /* priority of the requester                    */
};
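Putting these pieces together, the main loop of a simple handler might have looked roughly like the sketch below. The dnix argument orders, the fallback_fd descriptor of a subordinate handler, and the use of the reconstructed struct ireq fields are all assumptions for illustration.

/* Illustrative handler main loop.  'rq' is a request-queue descriptor
 * obtained via F_ORQ or F_MOUNT; all argument orders are assumptions.       */
void handler_loop(int rq, int fallback_fd)
{
    struct ireq req;
    static const char msg[] = "hello from a handler\n";

    for (;;) {
        /* Read the next incoming request from the request queue. */
        if (dnix(F_READ, rq, &req, (long)sizeof req) <= 0)
            break;

        switch (req.ir_fc) {                    /* function code of the request */
        case F_READ:
            /* Copy data into the client's address space, then complete the
             * request with the number of bytes transferred.                  */
            dnix(F_UWRITE, rq, req.ir_rn, msg, (long)(sizeof msg - 1));
            dnix(F_TERMIN, rq, req.ir_rn, (long)(sizeof msg - 1));
            break;

        case F_WRITE:
            /* A real handler would F_UREAD the client's data here; this
             * sketch simply acknowledges it.                                 */
            dnix(F_TERMIN, rq, req.ir_rn, req.ir_bc);
            break;

        default:
            /* Anything this handler cannot satisfy is passed on to another
             * handler (here a hypothetical subordinate descriptor).          */
            dnix(F_PASSRQ, rq, req.ir_rn, fallback_fd);
            break;
        }
    }
}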

There was no particular restriction on the number of request queues a process could have. This was used to provide networking facilities to chroot jails, for example.

Examples

To give some appreciation of the utility of handlers, at ISC handlers existed for:
ISC purchased both the 5.2 and 5.3 versions of DNIX. At the time of purchase, DNIX 5.3 was still undergoing development at DIAB, so DNIX 5.2 was what was deployed. Over time, ISC's engineers incorporated most of the 5.3 kernel's features into 5.2, primarily shared memory and IPC, so there was some divergence of features between DIAB's and ISC's versions of DNIX. DIAB's 5.3 likely went on to contain more SVR3 features than ISC's 5.2 ended up with. DIAB also went on to DNIX 5.4, an SVR4-compatible OS.
At ISC, developers considerably extended their version of DNIX 5.2 based upon both their needs and the general trends of the Unix industry:
When DNIX development at ISC effectively ceased in 1997, a number of planned OS features were left on the table: