John E. Wulff studied Electrical Engineering at the University of New South Wales in Sydney, Australia, graduating in 1960. His first professional experience was in the telephone industry, developing switching circuits with electro-mechanical relays, but also with vacuum tubes, cold cathode tubes and, very soon, with the emerging transistors. In 1964 he spent six months in England, acquiring know-how on a new family of switching circuits using germanium diodes and transistors, which already supported clocked flip-flops. These had been developed at the BICC research laboratory near Hampton Court, where John Sparkes had invented the principle of clocking a few years earlier. With this experience, John Wulff became chief designer of a special-purpose computer with 100 kilobytes of magnetic drum memory, 1 million transistors, 2.5 million diodes for logic and 100,000 silicon controlled rectifiers for power output drivers, switching 24 Volt rotary solenoids drawing up to 5 Amps. This computer controlled a letter sorting system with 150 input consoles and a throughput of 5 million letters a day. The system worked reliably for 25 years at the Redfern Mail Exchange in Sydney.
Experience with logic design based on integrated circuits followed. The availability of minicomputers led to an interest in programming. A Master of Engineering Science degree in Information Science at the University of New South Wales provided a solid foundation for future work as a Software Engineer. The design and implementation of a real-time operating system (or Monitor, as it was then called) on a PDP-8, which provided a task context switch in 15 machine instructions, was the subject of his Master's thesis [Wulff72]; it later provided the basis for some very fast industrial machine control systems on Data General Nova minicomputers, whose instruction execution time was 6 microseconds.
In the mid-1980s John Wulff came in contact with PLCs. He was asked to help during the commissioning of a PLC system controlling a parcel sorting complex consisting of 100 standard conveyor systems and 4 high-speed conveyors (2 metres/second) which had mechanical gates along their length to divert parcels. The gates on these high-speed belts needed a control resolution of 15 milliseconds, in which time a parcel had moved 30 mm. Unfortunately the function blocks for the 100 standard conveyors, whose outputs hardly ever changed once they were started, had to be executed 100 times each cycle, once for each conveyor. Because PLCs execute all their instructions over and over, this brought the total cycle time to over 1 second! What to do? Fortunately the PLC had just enough interrupt inputs (8) to allow the implementation of an event-driven sub-system using the assembler instructions of the PLC. This saved the company a lot in liquidated damages.
That experience spawned the idea for an event-driven PLC, which resulted in the current iC system. The design was very much influenced by thinking about biological neural networks in the brain. How is it possible that such relatively slow components as neurons and synapses can process such vast amounts of information at the speed that they do? The algorithms at the heart of the iC system are modelled directly on synapses and biological neural networks – not for artificial learning, but simply to gain speed. For any reasonable application one can think of, iC is orders of magnitude faster than a PLC running at the same processor speed. I was programming the firmware of conventional PLCs for a manufacturer at the time (1989). The IEC-1131 standard with its new language ‘Structured Text’ had just been published. immediate C is simply not compatible with Structured Text, which relies on the cyclic model of PLCs, although iC is fully compatible with Ladder Logic. For that reason immediate C was never accepted by industry.
The iC compiler and run-time system is now available as an Open Source project on GitHub. I have always been very interested in Version Control and have used SCCS since 1982 and RCS later. I developed some scripts for RCS which allow the automatic maintenance of what I call Parts Lists, which record the names and version numbers of all the sources that go together into a generated binary. This is a feature which was always missing from Version Control Systems until Git. Unfortunately Git does not maintain individual version numbers for source files – it only collects each group of files committed together as an entity, whose label is an opaque SHA-1 hash rather than a meaningful version number. I have written and tested hooks for Git to support correct $Id keyword expansion, which allows the use of time-honoured engineering-style version numbers and hierarchical parts lists showing all the items in a full release, not just those that happened to be committed together. Tags are the recommended way in Git and older systems to group files, but they must be applied manually, which is error prone and mostly ignored. My system is automatic once it is set up. Each generated binary file contains a complete versioned list of all its sources.
John E. Wulff, BE, M EngSc – Bowen Mountain, Australia.