I. Introduction: A Brief History

...O fill me
With strength against those who would freeze my
humanity, would dragoon me into a lethal automaton,
would make me a cog in a machine...

-- Louis MacNeice, 1944

In the early 1950's, the well-known mathematician John Von Neumann was trying to develop what he termed a self-replicating automaton; that is, a machine whose computer brain was capable of devising the instructions for constructing a copy of itself. Von Neumann never actually intended for the machine to be built. Rather, he was interested in arriving at rules by which a computer could be programmed such that it could fashion an exact replica of itself. He initially envisioned a robot wandering around a warehouse littered with spare parts, identifying the relevant pieces, and assembling the replica from them. [1]

As the legend goes, Von Neumann fooled around with various approaches for a while but was not satisfied with the results. The physical movement of pieces by the robot did not seem to fit the mathematical theory he desired, nor did the complexity of his various attempted solutions suit his liking. Von Neumann sought a solution that was simple, elegant, and general. [2]

Stanislaw Ulam, a fellow mathematician, suggested to Von Neumann a different approach. Consider a rectangular array of cells, much like a chessboard, in which each cell can exist in one of a finite number of states: 0, 1, 2, ... Time would progress discretely (i.e. in jumps, rather than continuously). Each increment of time would be a chance for cells to change their state. The rule governing the change of state for each cell would depend only upon the states of the cell's immediate neighbors and possibly upon the state of the cell itself. The rule would be the same for each cell, and all cells would change (or not change) according to the rule with each time step. All cells would initially be in the 0, or quiescent, state; to start the automaton, place some cells in nonzero states and start the clock. One could then watch the cells and see how they change according to their local--but common--rule. Ulam's grid was an example of a cellular automaton.

Von Neumann quickly saw that this system could fulfill his purpose in solving the self-replication problem. The grid of cells would be a computer; what is a computer, after all, if not a series of circuits coupled together such that the passage of current through them results in meaningful output? It is common knowledge that the fundamental language of a computer is just 1's and 0's. Interpreting these 1's and 0's allows people to interact with the computer. The cellular automaton computer would be programmed by placing some cells in nonzero states. The combination of initial condition--the states of the cells at time zero--and the rule for how each cell updates would be the means by which the computer operated. The resulting states of the cells at a later time would be interpreted as output.

This output could be used to instruct the movements of a mechanical arm. The mechanical arm could change a nearby grid of cells, all initially in the zero state and adjacent to the cells constituting the computer, so that this new grid would have the same initial conditions as the original grid. The computer would thus have replicated itself, and Von Neumann would have solved his problem. Von Neumann's solution required hundreds of thousands of cells, each of which could exist in one of 29 different states; [3] nevertheless, the fundamental design of the system was inherently simple and solved the replication problem elegantly. His result was published posthumously in 1966 as the Theory of Self-Reproducing Automata.

Cellular automata (CA) manifest one of the most intriguing ideas in mathematics: from simple rules and algorithms, complex patterns and behavior can result. Underlying this is the notion of scale. The rule for state-change of cells in Von Neumann's computer is local; each cell 'sees' only its immediate neighbors. Yet the combination of the right initial conditions and the right local rule produces a global pattern which, when interpreted correctly, can instruct the arm to construct a replica of the computer. Cellular automata, of which Von Neumann's self-replicating automaton is just one example, also transmit information in an interesting manner. There is no moving piece that carries data from one portion of the automaton to another. Cells convey information by referencing their neighbors; without movement, data is transmitted across the automaton.

This concept of transmitting information via only localized interaction and the remarkable complexity arising from simple, local rules are what I find most significant about cellular automata. This paper attempts to explain these concepts and to display some of the interesting results possible from cellular automata systems.

II. Sequences: Recursive and Explicit Definitions

For the rest of it, the last and greatest art is to limit and isolate oneself.

-- Johann Wolfgang von Goethe, 1825



The simplest mathematical example of the information-transmission property of cellular automata is a recursively defined sequence. In general, sequences are written as strings of numbers (termed elements) with commas between each number. The sequences discussed here will be infinite in length and will be generated by some specific rule; that is, there exists an algorithm for generating each number in the sequence. One rule might be "double the last number and add three to get the next number." A different sequence-generating algorithm might be "write down all the numbers that are one greater than the powers of two."

If a sequence is defined recursively, then each successive element is defined as a function of an element (or several elements) preceding it. This manner of defining sequences is akin to setting up a row of dominoes; each falling domino knocks over the next one in line. As with dominoes, one must have a place to start--there must be a domino to start the chain. With sequences this amounts to naming the first element (or first several elements). This starting number is called (unsurprisingly) the initial or first element.

A typical recursive rule for a sequence might look like
            S_n = 2S_(n-1)
            S_1 = 1.  [4]

The rule is interpreted to mean: "Start with the number 1 as the first element. The next element is generated by multiplying the current element by 2." Thus, starting with 1 as the first element, one would produce the second element by multiplying 1 by 2 to get 2. The third element would be found by taking the second element (2) and multiplying it by 2 to get 4. The first few elements of the sequence defined by the above rule look as follows:

S: 1, 2, 4, 8, 16, 32, ....

Note the similarity between generating a sequence in this manner and the dominoes discussed above: the value of each successive element is found by looking only at the value of the element preceding it; no other elements are considered. (Obviously, though, each element depends indirectly upon all of its predecessors.) The rule here is local.

An alternative means of defining a sequence is an explicit rule. In this rule, each element is defined in terms of its place in line, usually denoted n. The rule allows one to construct a sequence by specifying the value of the nth element as a function of n. An example might be

S_n = 2^(n-1).

This rule can be interpreted to mean: "To generate the nth term, raise 2 to the power n-1." To find the first term, one would substitute 1 for n, raise 2 to the power 0, and get 1 as a result. To find the third term, substitute 3 for n, raise 2 to the power 2, and arrive at 4. The beginning of the sequence defined by this rule would look as follows:

S: 1, 2, 4, 8, 16, 32, ....

Note that the two sequences are the same. This exhibits an important point: starting with very different conceptual ideas of how to generate a sequence, one can produce exactly the same sequence. The recursive formula specified a starting number and then gave a rule for how to arrive at the next element, given the value of the element before it. The explicit formula gave a means by which any element could be found, without looking at the value of any other elements around it. The two formulas represent very different ways of thinking about producing or transmitting information, the former in a local manner, the latter in a global manner.
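To make the contrast concrete, here is a minimal sketch in Python (written purely for illustration; it is not part of any software discussed in this paper) that generates the same sequence both ways:

    # A minimal sketch contrasting the two definitions of the sequence 1, 2, 4, 8, 16, 32, ...

    def recursive_sequence(length):
        """Local rule: each element is computed from the element just before it."""
        elements = [1]                          # S_1 = 1, the initial element
        while len(elements) < length:
            elements.append(2 * elements[-1])   # S_n = 2 * S_(n-1)
        return elements

    def explicit_element(n):
        """Global rule: the nth element is computed from its position n alone."""
        return 2 ** (n - 1)                     # S_n = 2^(n-1)

    print(recursive_sequence(6))                        # [1, 2, 4, 8, 16, 32]
    print([explicit_element(n) for n in range(1, 7)])   # [1, 2, 4, 8, 16, 32]

The recursive function must walk through every earlier element to reach the sixth one, while the explicit function can jump straight to it--the local/global distinction in miniature.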

The key point of the above discussion is that complex behavior (i.e. a sequence of numbers related to each other through some complicated pattern) can be generated via a simple rule that involves only localized pieces of the system. None of the numbers in a recursively-defined sequence 'know' what the overall rule is for the sequence. Each number knows only its relationship to one other number--the elements are in a sense isolated from all elements other than their immediate predecessor. Yet the pattern is coherent from an overall viewpoint. At the same time, the pattern could have been generated by an explicit rule, in which the guiding principle, in effect, resides in the overall system--not in each element therein. This distinction between local and global viewpoints is central to the thesis of this paper, and will be discussed further below.

III. One-Dimensional Cellular Automata

[Hercule Poirot] tapped his forehead. 'These little grey cells. It is "up to them".'

-- Dame Agatha Christie, 1920.

The simplest type of cellular automaton (CA) is the one-dimensional, two-state CA. It consists of two parts: a group of cells, all in either the 0-state or 1-state, and a rule specifying how each cell's state changes with time. The group of cells can be thought of as a long row of boxes, each containing the number 0 or 1 (hence the name two-state). The rule for how a cell changes (or does not change) is based entirely upon the states of that cell's immediate neighbors (and possibly the state of the cell itself).

Whereas each sequence element was a single number, for 1-D CA each element is a row of cells. One-dimensional CA elements are typically displayed as horizontal rows, with the first element at the top of the page or computer screen and subsequent elements stacked below it. The stacking allows the viewer to see the evolution of elements over time, since lower elements are later elements--elements resulting from applying the update rule to the cells of the row above. Cells are often displayed on a computer screen as pixels, with the 0 state shown as white and the 1 state shown as black.

Figure 1 shows an example of a typical one-dimensional, two-state CA. (The pattern formed by this rule/initial condition will be discussed below.) The initial condition is the first row. Each of the cells in the first row is placed in the 0- or 1-state--in this example there exists one cell in the 1-state and all other cells are in the 0-state. The states of the cells in the second element, after the application of the rule to the cells of the initial element, is shown in row two. The result of the application of the rule to the second row is shown in row three, and so on. The rule for how any given cell changes in each time step in this example is "If the cell's neighbor to the left is the same as the cell, and the cell's neighbor to the right is the same as the cell, the cell's state becomes 0; otherwise, the cell becomes a 1."

Mathematically, one can describe this rule in terms of the sum of a cell's state and the states of its left and right neighbors. [6] Given the sum for any given cell (which means adding the value of the cell and its two neighbors), one can write a rule describing the state of the cell after the next time step. For the CA pictured in Figure 1, the rule would be described in terms of sums as

if the sum is...                          0   1   2   3
...then the state of the cell becomes     0   1   1   0


The above rule, along with the state of the cells in the first row, is all one would need to know to generate an exact copy of this picture.
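For the reader who wishes to experiment, a short Python sketch of this sum rule (my own illustration, not the software cited in the notes) might look as follows; it seeds a row with a single 1 and uses the wraparound convention for the edges that is discussed below:

    # Sum rule from Figure 1: a sum of 0, 1, 2, or 3 maps to a new state of 0, 1, 1, or 0.
    RULE = {0: 0, 1: 1, 2: 1, 3: 0}

    def step(row):
        """Apply the rule to every cell at once; edges wrap around to the opposite side."""
        n = len(row)
        return [RULE[row[(i - 1) % n] + row[i] + row[(i + 1) % n]] for i in range(n)]

    row = [0] * 31          # the initial element: all cells quiescent...
    row[15] = 1             # ...except a single cell in the 1-state

    for _ in range(16):     # print sixteen successive elements, top to bottom
        print(''.join('#' if cell else '.' for cell in row))
        row = step(row)

Printing the rows one beneath the other reproduces the beginnings of the triangular pattern in Figure 1, with '#' standing in for the 1-state cells.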

Interestingly, Figure 1 is an example of a geometric shape known as a Sierpinski triangle. A Sierpinski triangle is a triangle with its middle triangle--the one formed by connecting the midpoints of the three sides--removed. Repeat this process (i.e. removing the middle triangle) on the three triangles remaining, and continue indefinitely on the smaller and smaller triangles left after each removal. The result, after infinitely many iterations, is a figure in which any black point is a branching fork. Figure 1, while limited in resolution because of the large size of each cell, depicts this shape with 1's as black and 0's as white.

The top picture in Figure 2 is an example of a Sierpinski triangle created in the same manner, but using software [8] for which each cell is a pixel, rather than a box containing 0 or 1. The picture is much clearer since the size of each cell (a pixel in Figure 2, rather than a box in Figure 1) is smaller, hence the resolution is greater. As in Figure 1, the first row in the top picture in Figure 2 was 'seeded' with one black pixel. Subsequent rows were generated with the same rule used in Figure 1.

The bottom picture in Figure 2 was constructed using the same rule and initial conditions as those of the top, but the process was allowed to run for a greater length of time. After colliding with each other, the Sierpinski triangles eventually form the netted, less ordered pattern shown. The bottom picture is an image from a later time in the life of the CA shown in the top picture; the netting pattern is the result of applying the rule repeatedly--of numerous iterations. This pattern results from the way the cells on the borders of the CA are defined, as explained below.

Since each successive state of a cell is defined for this example in terms of its left and right neighbors, special rules must apply at the left and right borders of each row. This forces a choice between two alternatives. First, one could modify the rule so that cells on the edge refer only to their single neighbor's state in deciding how or whether to change states; cells on the left border (column A in Figure 1) would change states by a rule depending only upon their right neighbors (column B), and the rule for these cells would refer to no other cell. This presents a problem: the rows can no longer be thought of as infinite, and the cells no longer all share the same rule. Edge effects due to these modified rules may interfere with the continuity of the pattern one would see in the absence of borders.

To solve these problems, there exists a second alternative. The rule for edge cells can refer to the opposite border cells--this is usually termed 'wraparound'. In Figure 1, cells in column A would see cells in column AN (the rightmost column) as their left neighbors, and vice versa. Since the finite nature of any machine or method of displaying cellular automata precludes the possibility of portraying an infinite-length row, wraparound is in some ways a good substitute, since all cells have a right and left neighbor. As shown in the bottom picture of Figure 2, however, the left and right borders of the image tend to collide after many iterations, and the figure loses the true pattern it would have had if a computer were truly able to handle an infinitely wide CA region. For many CA rules, though, wraparound does not interfere with the pattern formed by the cells.

An interesting question: how do the cells, each with its individual rule and each blind to the states of cells outside of its neighborhood, 'know' enough to cut out the middle one-fourth of every triangle--enough, that is, to generate a Sierpinski triangle? First we will consider the pattern along the vertical axis of symmetry, which runs through the top vertex of the triangle. Note that at certain rows there are horizontal runs of 1's spanning the full width of the Sierpinski triangle. (This stops being true because of the wraparound effect mentioned above--the triangles wrap around, collide with each other, and break the pattern.) Immediately below these rows of 1's there are rows of 0's: since the sum for cells (except those near the edge of the triangle) in the long row of 1's is 3, the rule states that the cells below should be in the 0 state.

Since these rows of 1's and 0's form part of the pattern we define as a Sierpinski triangle, the question of 'How do the cells know to form this pattern?' becomes 'When (i.e. at which rows) do these long rows occur?' Consider a row of all 1's running the width of the triangle and occurring at row n. This row will have 2n - 1 ones, and the row below it (row n+1) will contain 2(n+1) - 5 zeroes (since a long row of 0's contains 4 fewer 0's than the width of its row), or 2n - 3 zeroes. Any long row of 0's bounded by 1's at right and left 'contracts' in width downward at a rate of 2 zeroes per row, since the two cells immediately below the rightmost and leftmost zeroes must each become a 1 in the next generation. [10] A row of w zeroes will thus take (w+1)/2 rows to dissipate, or contract completely. [11]

Combining the formulae in the above paragraph, one gets the following result: after a row n of all 1's, the long row of zeroes beneath it will take ((2n - 3) + 1)/2 rows to dissipate. Simplifying gives n - 1 rows for the 0's to dissipate. The triangle formed from 0's just below the long row of 1's at row n will thus be gone by row n + (n - 1), and the next row of all 1's will appear at row n + n, or 2n. This result shows that long rows of 1's form at rows 1, 2, 4, 8, 16, ... Looked at another way, this means that triangles of smaller and smaller height, each one-half the height of its predecessor, will be removed (i.e. constructed out of 0's) as one nears the top vertex. It is through this process that the cells 'know' to remove the middle one-fourth of any triangle--since removing the middle one-fourth amounts to joining the midpoints of the triangle's sides.
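This claim is easy to check by simulation. The sketch below (my own code again, with a width chosen arbitrarily but large enough that wraparound never interferes) reports the rows at which the triangle consists entirely of 1's; it prints 1, 2, 4, 8, 16, and 32:

    # Check that rows consisting entirely of 1's occur at rows 1, 2, 4, 8, 16, 32, ...
    RULE = {0: 0, 1: 1, 2: 1, 3: 0}

    def step(row):
        n = len(row)
        return [RULE[row[(i - 1) % n] + row[i] + row[(i + 1) % n]] for i in range(n)]

    width = 201                 # wide enough that the edges are never reached
    row = [0] * width
    row[width // 2] = 1         # seed: one cell in the 1-state

    for generation in range(1, 33):
        # at row g the triangle spans 2g - 1 cells, centered on the seed
        left = width // 2 - (generation - 1)
        right = width // 2 + (generation - 1)
        if all(row[i] == 1 for i in range(left, right + 1)):
            print("row", generation, "is all 1's across the triangle")
        row = step(row)

The output lists exactly the powers of two, as the argument above predicts.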

A similar process occurs off the vertical axis through the top vertex, with smaller triangles of 0's forming and dissipating as above. In a 'real' Sierpinski triangle, the sequence ..., 8, 4, 2, 1 would be repeated into the fractions between 1 and 0: ..., 8, 4, 2, 1, 1/2, 1/4, 1/8, ... Since the cells forming the Sierpinski triangle in Figure 1 have a finite height and width, though, the process stops at 1. Furthermore, the wraparound effect ruins the pattern after row 19, since the left edge of the triangle wraps around, collides with the right side, and destroys the pattern. One does not see, for example, all 1's in row 32, as would be the case in a CA of infinite width.

Drawing a Sierpinski triangle using CA has important resonances in information theory. Consider the difference between describing the CA rule used to generate the triangle in Figure 1 or 2 and describing the picture itself. The former would be much more economical than specifying (as a computer must) the state of every pixel in a graphics image. When I made an informal test of the memory savings on my computer, I found that the text file containing the instructions required 26K, while the graphics file containing the Sierpinski triangle picture took up 39K worth of memory, a savings of 13K. The graphics image is larger, yet the instructions in the text file comprise all that one needs to know to reproduce the image exactly.

IV. Two-Dimensional Cellular Automata

The chessboard is the world; the pieces are the phenomena of the universe; the rules of the game are what we call the laws of Nature.

-- T. H. Huxley, 1870.

Where the elements of sequences were numbers and those of one-dimensional CA were rows of cells, the elements of 2-D CA are planes of cells. Each cell can, as in the 1-D case, exist in a finite number of states. Rules for how a cell changes in 2-D CA typically reference the north, east, south, and west neighbors of the cell, reference the eight cells in a box around the cell, or reference nearby cells in some other similar pattern.

Since the time evolution of a 2-D CA would have to be displayed as a stack of planes (each plane being one element of the automaton), this type of cellular automaton is typically shown in animated fashion. A grid of cells represents the initial states (usually by assigning each state a color), and each subsequent element is written over the original, showing how the cells change with each time step and giving the impression of movement as the cells change color. Figures 3 and 4 show two snapshots [12] of a 2-D CA. [13]

Figures 3 and 4

Figures 3 and 4 show examples of a 2-D CA rule named 'stepping stone.' Figure 3, showing the initial condition at time=0 (i.e. the first element), contains 256 colors distributed randomly across the rectangle. Figure 4 is the result of applying the rule repeatedly. The rule for each cell is as follows:

Choose a number between 0 and 1; this will be the update probability for all cells. For each cell in the array, generate a random number between 0 and 1 at every time step. If the random number generated for the given cell is higher than the update probability, the color of the cell changes to that of one of its neighbors selected uniformly at random. (Neighbor is defined as the four orthogonally adjacent cells: north, east, south, west.) [14]

Informally, this rule dictates that cells randomly 'eat' one of their neighbors. The image in Figure 4 has fewer than 256 colors, since some have been eliminated through iteration. As with the 1-D examples above, wraparound is again used, so that the top border cells see bottom border cells as neighbors, left border cells see right border cells as neighbors, etc.
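One way to implement this update in Python is sketched below; this is my reading of the rule as described above, not the WinCA code itself, and the update probability, grid size, and number of iterations are arbitrary choices:

    import random

    # One synchronous pass of the stepping stone rule, as described in the text above.
    def stepping_stone_step(grid, update_probability=0.5):
        rows, cols = len(grid), len(grid[0])
        new_grid = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if random.random() > update_probability:        # this cell 'eats' a neighbor
                    dr, dc = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
                    # wraparound: border cells see the opposite border as their neighbor
                    new_grid[r][c] = grid[(r + dr) % rows][(c + dc) % cols]
        return new_grid

    # Start, as in Figure 3, from 256 colors scattered at random; iterate and count survivors.
    grid = [[random.randrange(256) for _ in range(64)] for _ in range(64)]
    for _ in range(200):
        grid = stepping_stone_step(grid)
    print(len({color for row in grid for color in row}), "colors remain")

Iterated long enough, the count of surviving colors keeps falling until only one remains.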

The result of this rule is the formation of single-color regions of larger and larger size. The interior of any region of uniform color is stable under this rule, since a cell that eats a neighbor of its own color does not change. Random fluctuations, however, will enable some colors to win out over others. Colors compete for regions in the random initial state and then, once regions have formed (as in Figure 4), they compete at the edges of those regions. Interestingly, one color will always win in the end, taking over the entire rectangle. [15] Since the rule involves random numbers, two different colors may emerge as the eventual winner in two trials, even when starting with exactly the same initial condition.

One can think of the stepping stone rule as a model for competition between selectively neutral genetic types. In fact, the stepping stone model for population ecology goes back to the population geneticist Sewall Wright, who used it for exactly this purpose in the 1940's. [16] A similar use of cellular automata is that of John Conway, who developed the 'Game of Life'. Conway used 2-D cellular automata to model microorganism life. His 2-D, 2-state CA ran with rules such as "if a cell has 0 or 1 neighbors in the eight cells bordering it, then it dies of loneliness" and "a cell surrounded by 4 or more neighbors dies of overcrowding." [17] Guessing which initial conditions lead to stable, periodic, or vanishing future behavior in the Game of Life is a fascinating exercise. [18]
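The standard Life update is compact enough to sketch in a few lines of Python. The rules quoted above, together with survival on two or three neighbors and birth on exactly three, make up Conway's full rule set; the example below runs a 'blinker', a simple periodic pattern:

    def life_step(grid):
        """One generation of Conway's Game of Life (standard rules), with wraparound edges."""
        rows, cols = len(grid), len(grid[0])
        def live_neighbors(r, c):
            return sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))
        return [[1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
                      or (not grid[r][c] and live_neighbors(r, c) == 3) else 0
                 for c in range(cols)] for r in range(rows)]

    # A 'blinker': three live cells in a row oscillate between horizontal and vertical forever.
    grid = [[0] * 5 for _ in range(5)]
    grid[2][1] = grid[2][2] = grid[2][3] = 1
    for _ in range(3):
        print('\n'.join(''.join('#' if c else '.' for c in row) for row in grid), '\n')
        grid = life_step(grid)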

We now have three examples of cellular automata, with sequences included as the first. There is a very beautiful symmetry inherent in the transition from sequences to one-dimensional automata to two-dimensional automata. Sequences, which one could make a strong case for labeling 'zero-dimensional cellular automata', have single numbers, which can be thought of as points, as their constituent elements, and propagate to form strings of numbers--lines. In the typical representation of 1-D CA, each element is a horizontal line, and each successive iteration is another line: lines join to form planes. Similarly, each element in the two-dimensional automata is a plane; showing the time evolution of 2-D CA could involve stacked planes--a solid.
The various-dimension automata involve transition to increasingly higher Euclidean dimensions, from points to lines to planes to solids. Obviously, the process could be extrapolated infinitely, with the next case being solid (3-D) automata; each element would be a solid, comprising a set of cubical cells, and the changes over time would be presented as a hypersolid. This raises conceptual problems much greater than those of the examples above. (I, quite frankly, am up past my bedtime on this.) Luckily, there exists such a wealth of elegant and complex mathematics and behavior in the lower-dimensional cases that restricting the scope of this paper to these cases leaves a large body of material from which to draw.

V. Aspects of Cellular Automata in Art

My vegetable love should grow
Vaster than empires, and more slow....
But at my back I always hear
Time's wingèd chariot hurrying near.

-- Andrew Marvell, 1681.

At one point in his book Painting Methods of the Impressionists, Bernard Dunston relates Camille Pissarro's advice to a young painter; Dunston calls this "[p]robably the purest exposition of what Impressionism is all about." Since the themes addressed above appear so prominently in this passage, it is worth quoting at length.

Do not define too closely the outlines; it is the brushstroke of the right value and color which should produce the drawing ... Paint the essential character of things; try to convey it by any means whatever, without bothering about technique. When painting, make a choice of subject, see what is lying at right and left, then work on everything simultaneously ... [P]lace tones everywhere, with brushstrokes of the right color and value, while noticing what is alongside ... One must always have only one master--nature; she is the one always to be consulted. [19]

Pissarro maintains that it is the brushstroke--the detail--out of which the painting grows. The artist must work on all parts of the canvas at once, but concentrate on making the individual details true to the scene before him or her. The result will 'convey the essential character of things'. This idea is strikingly similar to the concepts underlying cellular automata: attention to individual details (the 'rules' established in the scene itself by nature) will bring about global results. Claude Monet expressed this idea even more explicitly: "... Merely think, here is a little square of blue, here is an oblong of pink, here is a streak of yellow, and paint it, just as it looks to you, the exact color and shape, until it gives your own naive impression of the scene." [20]

It is noteworthy that some of Pissarro's work is done in the Pointillist style. Dunston writes of a Pissarro painting from his Pointillist period that "broad color shapes are made up of separate, repetitious strokes, setting up a rhythm or a sense of contained movement throughout the painting." [21] Pointillist works such as those by Pissarro or Georges Seurat seem to express a cellular automata style of thinking about art, breaking a subject down into constituent blobs of color to render a scene or portrait. While Seurat and other Pointillists were intent upon creating subtle color gradations and shimmering color effects in their works, I cannot help but think that they painted from a sensibility related to cellular automata.

Figures 5 and 6 show two pictures of the actress Sharon Stone. The first is a scanned image, the second the result of repeatedly applying the stepping stone rule to the scanned image. (David Griffeath, who wrote the software I used here, included Sharon Stone's picture as a play on the words 'stepping stone.') As in Figures 3 and 4, iterating the stepping stone rule has the effect of fostering competition between colors, with larger and larger planes of color formed. Note the similarity between the latter image and an impressionist painting. (Despite the similarity, I suspect that Renoir would have been aghast at the colors in the left side of the face. [22]) In Figure 6 I especially like the way in which the light coming from the right side of the picture is expressed through the planes of highlight in the hair and background. Drawing too much of a conclusion from these two images is, I believe, stretching an analogy too far, but the reader will no doubt recognize some similarities between art of the late nineteenth century and the image presented.

Figures 5 and 6

The desire of Friedensreich Hundertwasser, the Austrian-born painter and graphic artist, to find a happy medium between rigidity and freedom in art also reflects similarities with concepts inherent in cellular automata. Dismayed by, as Pierre Restany writes in his book Hundertwasser, both "the rational conventionalism of the apparent geometric rigor" and "the uncontrolled license of Tachist automatism," Hundertwasser sought a middle ground. [23] For him, both extremes were artistic dead ends. As such, Hundertwasser founded the school of Transautomatism (with himself as its lone member).

Transautomatism stressed the "evolutionary slowness of vegetal-vegetative order." [24] Hundertwasser endeavored to think like a plant in producing his work; that is, to follow the biological rules (as he saw them) inherent in plant growth and life. [25] "I paint flat horizontally, without an easel," wrote Hundertwasser, "this is a vegetal, earthbound discipline. My colored lines are like the sap rings on trees, like sediments of nature, like organic growth." [26] Restany describes Transautomatism as "controlled automatism compatible with biological ... determinism." [27] Spirals play an important role in Hundertwasser's art; he saw them as "organic, biological, and vegetative" [28] and the antithesis of the straight line, the ultimate symbol of all he hated about geometric rigor in art and architecture. Interestingly, spirals often appear in 2-D cellular automata; they are a stable pattern under many 2-D rules.

VI. Creating Cellular Automata Art

Rules and models destroy genius and art.

-- William Hazlitt, 1839.

As part of this project, I wanted to create a work developed through a CA outlook, using some of the techniques and media from earlier class assignments. My goal was to establish the rules and initial conditions, bury myself in the interrelationships between 'cells' and in the details of the work, and emerge at the completion to discover how the piece as a whole turned out. There would be no thought of any overall considerations during the creation--I would not let my desire for balance or coherence override the local rules. It would be as if I set up an experiment in a lab, left for the night, and returned the next day to examine the results.

I chose printing as the medium, since the blocks I would use fit nicely into the CA notion of cells. The inspiration for the work was a mental picture of the Big Bang, the creation of the universe; postulating a limited number of particles and a rule for how those particles would propagate, I would start with a small square of four blocks in the middle of the paper and see how the surrounding blocks formed. This would be a 1-D CA with the initial row bent around into a square. Successive rows would be rings or 'concentric squares' around the first. I would represent time by changing the value of one color: the darkest value of blue in the center would denote the earliest particles, while increasingly lighter values would express the passage of later and later time.

I chose three different square block designs, one for each of the three states in this 1-D CA. Each design contained arcs of some sort and touched the middle of three or four sides of its block, ensuring that lines would meet up between most adjacent blocks regardless of the arrangement of the blocks on the paper; large patterns would thus form from the small designs. I then constructed several different rules and proceeded, on a computer, to test the results of each one.
The three blocks shown below correspond to states 0, 1, and 2.

   0              1              2

I started arbitrarily with four blocks formed in a square in the middle of the page; this would be the big bang itself. (Figure 7 shows the development of one sketch.) The rule for generating new blocks was as follows: add the states of a block on an edge and its interior neighbor; the block outside it takes its state according to the following table:


if the sum is...                           0   1   2   3   4
...then the state of the block becomes     0   1   2   0   1
The drawing below illustrates the use of this rule. In addition to using the rule to determine the state of each cell, I randomly rotated each block before placing it on the paper. I assigned each block an arbitrary 'up' direction and rotated it by 0, 90, 180, or 270 degrees according to a list of random numbers.

For example, to find the state of cell A, add 1+2 to get 3, and look up 3 on the table above to get 0. Randomly pick a rotation for the cell and the result is as shown in the picture above. Cell B's state is determined by the 0- and 1-state cells to B's left; add to get 1 and the table gives 1 as cell B's state. Cell B's orientation (how many times it is rotated by a quarter circle) is again determined randomly. Each ring, starting with the one surrounding the initial four blocks, was formed in this manner. Figure 8 shows the sketch I used to construct the actual print.
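A few lines of Python capture the bookkeeping involved; this is a reconstruction of the procedure for illustration, with the quarter-turn drawn at random here rather than taken from the list of random numbers used for the actual print:

    import random

    # Sum of an edge block's state and its interior neighbor's state -> state of the new block.
    NEXT_STATE = {0: 0, 1: 1, 2: 2, 3: 0, 4: 1}

    def new_block(edge_state, interior_state):
        """Return the state of the block to be placed outside, plus a random quarter-turn."""
        state = NEXT_STATE[edge_state + interior_state]
        rotation = random.choice([0, 90, 180, 270])   # degrees to rotate the block design
        return state, rotation

    # The worked example from the text: cell A sits outside blocks in states 1 and 2.
    print(new_block(1, 2))   # state 0, with a rotation that varies from run to run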


After drawing the sketches on the computer and selecting one I liked, I cut the three blocks and printed according to the sketch. One thing I especially liked about the print was the way large patterns and structures formed from the white lines of the block designs, almost as if particles were combining to form different types of matter. I was also pleased with how the designs on the blocks seemed to float in front of the blocks themselves, so that the viewer could disassociate the overall pattern of thin white lines from the 'digitalness' of the blocks. Though seemingly random, the patterns were formed with an underlying determinism.

VII. Conclusion

Everything's got a moral, if only you can find it.
-- Lewis Carroll, 1865.


CA systems can generate output of amazing complexity from the repeated iteration of simple rules. The Sierpinski triangle alone would, for me, be proof of this. 2-D CA exhibit this property even more strikingly. [29] The variety of applications for which cellular automata are used is great, from neural networks to modeling life to counting coal dust particles. [30] And the manner in which information is propagated through space without physical movement is intellectually (at least for me) intriguing.

The aspect of CA which I find most interesting, though, and an idea I had not considered before writing this paper, is the manner in which deterministic and stochastic processes are, in a sense, reconciled. Consider the stepping stone rule. The rule for each cell is well-defined, regions of color of greater and greater area will always form, and one color will always win out in the end. However, the random choice of which color a cell takes on means that the end result--which color will win out--may be different for every trial. Even in automata with rules that are completely deterministic, different initial conditions and the length of time for which one iterates the rule allow for great variety of output; witness the two images (Figure 2) from the Sierpinski rule.

One could perhaps even extrapolate that here is a case where mathematics suggests a reconciliation between moral absolutism and moral relativism: strict, independent rules governing phenomena, yet allowing leeway for considerable diversity. Deriving this from the mathematics discussed above may be a reach. Friedensreich Hundertwasser, however, certainly saw this in art. Walking the middle ground between rigidity and lack of any constraint did not have implications for art alone, but for how we live our lives:

So I venture to say that the line described by my feet as I go walking to the museum is more important than the lines one finds hanging on the walls inside. And I get enormous pleasure in seeing that the line is never straight, never confused, but has its reasons for being the way it is in every smallest part. Beware of the straight line and the drunken line, but especially of the straight one! The straight line leads to the loss of humanity. [31]


NOTES

[1] Perry, p. 182
[2] Preston and Duff, p. 3
[3] Perry, p. 182.
[4] n is a variable representing an ordinal number, thus starting at 1. The 'nth term' denotes the term occupying the nth spot in the sequence. The n=2 term would be the second term, for example.
[5] Insert the disk "B Hoke Final Project" and double-click on the "Begin Slideshow" icon to display Figure 1. Figure 1 is an image captured from an entire screen--you'll see Mac desktop items (arrow cursor, icons).
[6] A good treatment of writing rules in terms of sums can be found in Perry, pp. 181-182.
[7] Click once on the play button (the right-pointing arrow) to view Figure 2.
[8] "Macintosh Cellular Automata".
[9] Click on the back button (left-facing arrow) to display Figure 1.
[10] If this is confusing, look at Figure 1 and note that triangles formed of 1's get bigger as one moves down the rows, while triangles composed of 0's get smaller. 0-triangles 'lose' two 0's after each row.
[11] Strictly speaking, a row of zeroes of width w will take x rows to dissipate, where x is the least integer greater than or equal to w/2. Since the rows in discussion here all contain odd numbers of 0's and 1's, I simplified the formula to avoid confusion.
[12] Remember that each picture is one moment in time for the CA--thus the pictures are snapshots of the state of the CA at a given time.
[13] These pictures can be generated using sstone.xpt in "WinCA".
[14] "WinCA" sstone.xpt notes
[15] "WinCA" sstone.xpt notes
[16] www.math.wisc.edu/~griffeat/logos.html. The palette I used for Figures 3 and 4 contained mostly biodiverse shades; Figure 4 looks to me like competition in a deciduous forest in autumn.
[17] www.research.digital.com/nsl/projects/life/rules.html
[18] The World Wide Web offers many interactive pages by which the user can try out the Game of Life. One I found particularly good was at www.research.com/nsl/projects/life/cgi-bin/life
[19] pp. 12-13.
[20] Dunston, p. 46.
[21] p. 75.
[22] The poor color is partly due to the fact that I had to translate the images several times between two different platforms, four different applications, and three different file types. The original was a bit better.
[23] Restany, p. 14.
[24] Restany, p. 13.
[25] Dorothy Wallace, lecture.
[26] Restany, p. 17.
[27] pp. 13-14.
[28] Restany, p. 25.
[29] The reader wishing to browse some of the more interesting 2-D CA should delve into David Griffeath's WWW page, found at www.math.wisc.edu/~griffeat.
[30] Preston and Duff, pp. 3-4.
[31] Restany, p. 14.


WORKS CITED

Dunston, Bernard. Painting Methods of the Impressionists. New York: Watson-Guptill Publications, 1983.

Griffeath, David. Website. http://www.math.wisc.edu/~griffeat .

Perry, Kenneth E. "Abstract Mathematical Art." Byte December 1986: 181-190.

Preston, Kendall, Jr. and Michael J. B. Duff. Modern Cellular Automata: Theory and Applications . New York: Plenum Press, 1984.

Restany, Pierre. Hundertwasser . New York: Ballantine, 1975.

Wallace, Dorothy, lecture. MALS 300: Pattern. August 15, 1996.

Website. http://www.research.digital.com/nsl/projects/life .


SOFTWARE

ClarisWorks 4.0, 1991-1995 Claris Corporation.

Macintosh Cellular Automata, 1984 At Play Corp. & Thai Truon.

WinCA: A Cellular Automaton Modelling Environment for Windows, by Robert Frisch and David Griffeath.

Quotes are from Dartmouth College Information System, Oxford Dictionary of Quotations.