2007-12-24
With Christmas comes blinken-lights, so naturally a geek's thoughts turn to building flashing things. Many times I've shared my desires to build large arrays of RGB led modules with a bit of in-built brains that can have various patterns clocked into them in real-time, but last night it came to me that I might build decentralised nodes that communicate only with their neighbours.
Conway Life was an immediate thought, but I decided to start with something much simpler. Much like von Neumann and Ulam before me, I started doing the maths on the cost of building such automata and quickly decided to build it virtually, at least at first! One-dimensional Wolfram automata sounded much less ambitious than tiled arrangements and perhaps cheaper to implement should I actually go the whole hog and build it.
Each cell is quite simple, and easily realisable in logic. If all I wanted was a light show, it would be *much* cheaper to implement using a single microcontroller Charlieplexing a bunch of LEDs, but that isn't the point: I want de-centralised computation with each node essentially identical. I arrived at a basic architecture like this:
Each node could be built on a small PCB and chained together. It could be made as a closed loop, and the most immediate "application" I could think of is geeky Christmas lights, or perhaps an illuminated bracelet. The logic element could be hard-coded, or programmed via dip-switch or jumpers directly in Wolfram Notation. A push button could inject a 1 at some point to seed the system, perhaps at the node that also contains the system clock and power management.
The dip-programming version could be implemented using a 3-bit decoder and a standard 8-line dip-switch. This has some promise for the experimenter, as it allows different rules in each node, which, while not conventional for CA, might be quite useful. Hard-coded logic was investigated for the Rule-30 system: the gate count is fairly high even once minimised (and using /Q from the latch if available), so implementing the entire node in a tiny PLD or microcontroller might be the cheapest solution. Alternatively, Fairchild TinyLogic® is extremely cheap in large quantities and offers the latch and decoder elements required.
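Wolfram notation maps directly onto the dip-switch settings: the rule number's eight binary digits are the switch positions, MSB first. A small sketch of that conversion (a hypothetical helper, not part of the simulator described below):

```javascript
// Convert a Wolfram rule number (0-255) to its eight dip-switch
// settings as a string, MSB first: the first character is the next
// state for neighbourhood 111, the last for neighbourhood 000.
function ruleToSwitches(ruleNumber) {
  var bits = '';
  for (var n = 7; n >= 0; n--) {
    bits += (ruleNumber >> n) & 1;
  }
  return bits;
}
```

So Rule 30 comes out as '00011110', which is exactly the pattern you would set on the switches.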
The need to seed the system is a bit of a pain, so I began to wonder if the rule could be chosen to auto-start the automaton and still offer interesting patterns. I was also curious about the effect of closed or open loop topology, and of nodes with differing rules. The easiest way to study this is by simulation on a PC... So some code was written to simulate the proposed hardware. I chose ECMAScript so I could host the simulator in this page.
The input language is line-based, each line containing a single directive. A directive is a string label followed by one or more space separated arguments. The argument format is a function of the particular directive.
directive | description |
---|---|
cellCount [n] | The minimum is 3, there is no maximum, but extreme numbers will take a lot of processing power. Large cell counts also mean huge periods, which means massive amounts of RAM can be required to store the state history, or even just render it on screen. There is no default, a value must be specified. |
defaultRule [rule] | The rule assigned to all cells by default. The default value is '00011110' (i.e. Rule 30). May be over-ridden on a cell-by-cell basis by the cellRule directive. Must be 8 chars from {1, 0}. The rule is encoded MSB first, left-to-right: the first character is the next state for neighbourhood 111, the last for neighbourhood 000. |
closed [bool] | By default the cell array is a closed loop (i.e. cell 0's left neighbour is cell (cellCount - 1) and (cellCount - 1)'s right is cell 0.) This may be overridden by specifying 0 here instead of 1. The boundaries are then treated as having neighbour cells which are always in state 0. |
generationDelay [ms] | The delay between each step in the simulation in milliseconds. The minimum is 0, which will cause the simulation to run as fast as possible. The maximum is 10000. The default is 500, or a one-half second pause between generations. |
setCell [id] [bool] | Assigns a starting state to a cell. Cells are numbered 0 to (cellCount - 1). |
setCells [id] [states] | Assigns a starting state to a contiguous block of cells starting at cell id. |
cellRule [id] [rule] | Overrides the defaultRule for a specific cell. |
stopOnCycle [bool] | By default the simulation stops when a previous state re-appears. By assigning this as 0 you may cause the simulation to run indefinitely. |
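The defaultRule encoding above reduces to a simple lookup. A sketch of the next-state function (illustrative, not the simulator's actual code):

```javascript
// Next-state lookup matching the defaultRule encoding: the rule string
// is MSB first, so character 0 is the output for neighbourhood 111 and
// character 7 is the output for neighbourhood 000.
function nextState(rule, left, self, right) {
  var index = left * 4 + self * 2 + right; // neighbourhood as 0..7
  return +rule.charAt(7 - index);          // index from the MSB end
}
```

For Rule 30 ('00011110'), neighbourhood 100 gives 1 and neighbourhood 111 gives 0, as expected.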
Upon playing with the simulator it becomes obvious that closed rings are best; open ones tend to lock up or fall into short-period oscillations. Odd numbers of nodes seem to work well with Rule 30, and even fairly small numbers of cells work quite well. Because the next state is completely defined by the current state and the ruleset, it is easy to detect when repetition begins, so by default the simulator runs only until a cycle is detected.
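The cycle detection amounts to remembering every state seen so far and stopping on the first repeat. A self-contained sketch of the idea for a closed ring (mirroring the default stopOnCycle behaviour, but not the simulator's actual code):

```javascript
// Run a closed ring of cells under one rule until a previously seen
// state reappears, returning the history of distinct states.
function runToCycle(rule, cells) {
  function step(state) {
    var next = '';
    for (var i = 0; i < state.length; i++) {
      // closed loop: neighbours wrap around at both ends
      var l = +state.charAt((i + state.length - 1) % state.length);
      var s = +state.charAt(i);
      var r = +state.charAt((i + 1) % state.length);
      next += rule.charAt(7 - (l * 4 + s * 2 + r)); // MSB-first rule
    }
    return next;
  }
  var seen = {};
  var history = [];
  var state = cells;
  while (!seen[state]) {
    seen[state] = true;
    history.push(state);
    state = step(state);
  }
  return history;
}
```

A three-cell ring seeded with a single 1 under Rule 30 dies quickly (100 → 111 → 000, then 000 repeats), which is exactly the kind of behaviour that makes small even/odd and seeding experiments worthwhile.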
A hardware implementation of Rule 30 might be:
We need to use an edge-triggered latch to hold the state, because of the direct feedback in the logic module. I do something similar in the simulator code, using a temp variable and a two-phase stepping procedure. To minimise the transistor count you could use a dual-phase clock, but the extra gates are safer and simpler. Edge triggered D flip-flops are available in the one package in most logic families, including TinyLogic and 7400 series.
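The two-phase stepping the latch provides can be sketched as a double-buffered update: compute every next state from the old generation first, then commit them all at once (a sketch of the principle, not the simulator's actual code):

```javascript
// Two-phase step for a closed ring, Rule string MSB first.
// Phase 1 computes all next states into a buffer from the *old*
// generation; phase 2 "latches" them. Updating cells in place instead
// would feed a cell's new state into its right neighbour's calculation.
function stepBuffered(rule, cells) {
  var n = cells.length;
  var next = new Array(n);
  for (var i = 0; i < n; i++) {          // phase 1: compute
    var idx = cells[(i + n - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n];
    next[i] = +rule.charAt(7 - idx);
  }
  for (var j = 0; j < n; j++) {          // phase 2: latch
    cells[j] = next[j];
  }
  return cells;
}
```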
For 7400 series logic, a BCD decoder like the 7442 or a 3-to-8 demultiplexer like the 74138, plus a few diodes to implement a wired-OR, would be a budget "programmable" logic solution. Along with half a 7474 for the state memory, this gives a two-chip node that could implement any CA with just a few cents worth of diodes to change the programming (you could use 8 diodes and a dip-switch or jumpers too if you don't mind the price). (4000 series logic would be a 4028 decoder and a 4013 D flip-flop, with the advantage that the decoder output isn't inverted.) This prices each node in the $2 to $3 region considering the PCB cost and general stuffing around. That isn't too bad really; it might be worth making a few prototype nodes and, if they work OK, placing a PCB order.
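"Programming" the decoder-plus-diode node is just a matter of which decoder outputs get a diode into the wired-OR: one per neighbourhood whose next state is 1. A sketch (assuming decoder output line n corresponds to neighbourhood value n; active-low outputs would invert the selection):

```javascript
// List the decoder output lines that need a diode for a given rule
// string (MSB first). Line n corresponds to neighbourhood value n.
function diodeLines(rule) {
  var lines = [];
  for (var n = 0; n < 8; n++) {
    if (rule.charAt(7 - n) === '1') lines.push(n);
  }
  return lines;
}
```

Rule 30 needs four diodes, on lines 1 through 4 (neighbourhoods 001, 010, 011, 100).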
Naturally this makes one want to investigate other automata. Life and Lattice Gas would be cool, but the extra dimension steeply increases the cost and complexity. Individual life nodes have 8 neighbours which is a lot of wiring, or some extra smartness in each node to route state information orthogonally through a grid of connectivity (mandates a microcontroller unfortunately). Lattice Gas would be ideal with hexagonally tiled nodes.
Once you get to the need for a microcontroller the price per node tends to flatten off, and the programmability gives you dynamic features that hard-coded logic just can't do. You can amortise the cost per node somewhat by implementing several nodes per controller/board.
Some things I have at the back of my mind are wireless nodes. I've seen some people doing firefly replicants with LDRs as flash sensors. The natural progression of this would be RF interfaces, with one-transistor super-regenerative transceivers and a microcontroller per node. I did some back of the envelope sketching of the protocols needed, and it isn't too prohibitive. It would enable self-organising experiments, where you just scatter nodes around on the floor and they chat back and forward with something similar to Slotted ALOHA or Token Bus. I envisage something like a HAM radio QSO, with nodes calling CQ, then passing around the conversation, with some pseudo-randomness (or true-random noise from the RX) stirred into the mix, except with the extra complexity of negotiated call signs.
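The back-of-the-envelope case for a slotted scheme can itself be simulated: each node transmits in a slot with some probability, and a slot only succeeds when exactly one node speaks. A toy sketch (all parameters are illustrative guesses, and the seeded generator is just for repeatability):

```javascript
// Toy slotted-ALOHA round: count slots in which exactly one of the
// scattered nodes transmits. Uses a Park-Miller LCG so runs repeat.
function alohaSuccesses(nodes, slots, p, seed) {
  var state = seed;
  function rand() { // minimal LCG, nothing like cryptographic quality
    state = (state * 48271) % 2147483647;
    return state / 2147483647;
  }
  var successes = 0;
  for (var s = 0; s < slots; s++) {
    var transmitters = 0;
    for (var n = 0; n < nodes; n++) {
      if (rand() < p) transmitters++;
    }
    if (transmitters === 1) successes++; // any collision wastes the slot
  }
  return successes;
}
```

With 10 nodes each transmitting 10% of the time, roughly 39% of slots get through, which is in the right ballpark for deciding whether a chattier negotiated scheme is worth the extra firmware.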
This goes beyond automata (which have a global clock and no memory) and into autonomous agents.
Another thought is that connection topologies don't need to be regular, the RF system has the most dynamic connection topology, but simulations can have general graph connectivity, rather like neural nets. There is enormous range for experimentation (and perhaps practical applications) with such systems.
Non-digital hardware is something I've thought about also. Leaky integrators implemented with op-amps could function as delay lines, allowing wave-propagation (Lattice Gas-like) simulation, with analogue hardware rather than digital. This deserves some simulation too. Various parameters can be tuned, giving nodes of different refractive index, perfect reflectors, perfect absorbers, oscillators, etc. Reflection and diffraction should be visible with large enough systems. Time to write some Java Applets...
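Before reaching for op-amps, the behaviour can be previewed numerically: treat each node as a charge plus a velocity, couple it to its neighbours, and let a leak factor stand in for the integrator's losses. A numerical sketch, not a circuit model; fixed ends act as the "perfect reflector" case:

```javascript
// Damped 1D wave on a chain of nodes. u[i] is node i's "charge",
// v[i] its rate of change; coupling sets propagation speed and
// leak (0..1) models the leaky integrator's losses. Ends are fixed,
// i.e. perfect reflectors.
function propagate(u, steps, coupling, leak) {
  var v = [];
  for (var i = 0; i < u.length; i++) v.push(0);
  for (var t = 0; t < steps; t++) {
    for (var i = 1; i < u.length - 1; i++) {
      // accelerate toward the neighbours' average, then leak
      v[i] = leak * (v[i] + coupling * (u[i - 1] + u[i + 1] - 2 * u[i]));
    }
    for (var j = 1; j < u.length - 1; j++) u[j] += v[j];
  }
  return u;
}
```

Seeding a single node and stepping shows the pulse splitting and spreading toward both ends, which is the wave-propagation behaviour worth chasing in analogue hardware.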
title | type | size |
---|---|---|
1D Architecture Overview Source | application/postscript | 11.907 kbytes |
A Rule 30 Implementation Source | application/postscript | 13.895 kbytes |