Source Discussion


The source code for the Alife simulation is very straightforward. Let's first walk through the structures for the simulation that describe not only the environment but also the agents and other objects within the environment.

Listing 7.1 shows the agent type. Most elements of this structure are self-explanatory: type defines the agent as herbivore or carnivore, energy is the energy available to the agent, age is the agent's age in time steps, and generation identifies the agent's place in the lineage of agents that reproduced.

Listing 7.1: Agent Types and Symbolics.
start example
typedef struct {
  short x;
  short y;
} locType;

typedef struct {
  short type;
  short energy;
  short parent;
  short age;
  short generation;
  locType location;
  unsigned short direction;
  short inputs[MAX_INPUTS];
  short weight_oi[MAX_INPUTS * MAX_OUTPUTS];
  short biaso[MAX_OUTPUTS];
  short actions[MAX_OUTPUTS];
} agentType;

#define TYPE_HERBIVORE   0
#define TYPE_CARNIVORE   1
#define TYPE_DEAD       -1
end example
 

The location of the agent (as defined by the locType type) specifies the x/y coordinate of the agent within the landscape. The inputs vector defines the values of the inputs to the neural network during the perception stage. The actions vector is the output layer of the neural network that defines the action to be taken. Finally, weight_oi (weight value from output to input) and biaso represent the weights and biases for the output layer of the network. We'll see later how the weights are structured within this single vector.

Each input combines an object and a zone (such as herbivore and front). To give an agent the ability to differentiate each of these combinations separately, each is given its own input into the neural network. The outputs are also well-defined, with each cell in the output vector representing a single action. Listing 7.2 provides the symbolic constant definitions for the input and output cells.

Listing 7.2: Sensor Input and Action Output Cell Definitions.
start example
#define HERB_FRONT       0
#define CARN_FRONT       1
#define PLANT_FRONT      2
#define HERB_LEFT        3
#define CARN_LEFT        4
#define PLANT_LEFT       5
#define HERB_RIGHT       6
#define CARN_RIGHT       7
#define PLANT_RIGHT      8
#define HERB_PROXIMITY   9
#define CARN_PROXIMITY  10
#define PLANT_PROXIMITY 11

#define MAX_INPUTS      12

#define ACTION_TURN_LEFT   0
#define ACTION_TURN_RIGHT  1
#define ACTION_MOVE        2
#define ACTION_EAT         3

#define MAX_OUTPUTS      4
end example
 

The agent environment is provided by a three-dimensional array. Three planes exist for the agents, with each plane occupied by a single type of object (plant, herbivore, or carnivore). The agent's world is still viewed as a two-dimensional grid, but the third dimension exists to more efficiently account for the objects present. Listing 7.3 provides the types and symbolics used to represent the landscape.

Listing 7.3: Agent Environment Types and Symbolics
start example
#define HERB_PLANE   0
#define CARN_PLANE   1
#define PLANT_PLANE  2

#define MAX_GRID    30

/* Agent environment in 3 dimensions (to have independent planes
 * for plant, herbivore and carnivore objects).
 */
int landscape[3][MAX_GRID][MAX_GRID];

#define MAX_AGENTS 36
#define MAX_PLANTS 30

agentType agents[MAX_AGENTS];
int agentCount = 0;

plantType plants[MAX_PLANTS];
int plantCount = 0;
end example
 

The size of the grid, number of agents that exist, and the amount of plant life are all parameters that can be manipulated for different experiments. The header file (common.h) includes a section for parameters that can be adjusted.

Finally, a set of macros is provided in the header file for commonly used functions associated with random number generation (see Listing 7.4).

Listing 7.4: Simulation Macro Functions.
start example
#define getSRand()      ((float)rand() / (float)RAND_MAX)
#define getRand(x)      (int)((x) * getSRand())
#define getWeight()     (getRand(9)-1)
end example
 

The getSRand macro returns a random floating-point number between 0 and 1, while getRand(x) returns an integer from 0 to x-1 (except in the rare case where rand() equals RAND_MAX, when it returns x itself). Finally, the getWeight macro returns a small integer weight used for the agent neural networks; the same macro also generates the biases used with the output cells.

On the CD  

We'll now walk through the source of the simulation. We begin with the main function, which has been simplified to remove the command-line processing and statistics collection. The unedited source can be found on the CD-ROM.

The main function sets up the simulation and then loops for the number of time steps allotted for the simulation within the header file (via the MAX_STEPS symbolic constant). The simulate function is the primary entry point of the simulation where life is brought to the agents and the environment (see Listing 7.5).

Listing 7.5: main Function for the Alife Simulation.
start example
int main(int argc, char *argv[])
{
  int i;

  /* Seed the random number generator */
  srand(time(NULL));

  /* Initialize the simulation */
  init();

  /* Main loop for the simulation. */
  for (i = 0 ; i < MAX_STEPS ; i++) {

    /* Simulate each agent for one time step */
    simulate();

  }

  return 0;
}
end example
 

The init function initializes both the environment and the objects within it (plants, herbivores, and carnivores). Note that when agents are initialized, the type of the agent is filled in first. This is done so that initAgent can be reused for other purposes within the simulation: with the type filled in, the initAgent function knows which kind of agent is being initialized and treats it accordingly (see Listing 7.6).

Listing 7.6: Function init to Initialize the Simulation.
start example
void init(void)
{
  /* Initialize the landscape */
  bzero((void *)landscape, sizeof(landscape));
  bzero((void *)bestAgent, sizeof(bestAgent));

  /* Initialize the plant plane */
  for (plantCount = 0 ; plantCount < MAX_PLANTS ; plantCount++) {
    growPlant(plantCount);
  }

  /* Randomly initialize the Agents */
  for (agentCount = 0 ; agentCount < MAX_AGENTS ; agentCount++) {

    if (agentCount < (MAX_AGENTS / 2)) {
      agents[agentCount].type = TYPE_HERBIVORE;
    } else {
      agents[agentCount].type = TYPE_CARNIVORE;
    }

    initAgent(&agents[agentCount]);

  }

}
end example
 

The plant plane is first initialized by creating the number of plants defined by the MAX_PLANTS symbolic constant; function growPlant provides this capability (see Listing 7.7). Next, the agents are initialized. Of the maximum number of agents allowed (defined by MAX_AGENTS), half of the slots are reserved for each type. We initialize the herbivores first and then the carnivores. The actual initialization of the agents is provided by the initAgent function (shown in Listing 7.8).

Listing 7.7: Function growPlant to Introduce Foliage into the Simulation.
start example
void growPlant(int i)
{
  int x,y;

  while (1) {

    /* Pick a random location in the environment */
    x = getRand(MAX_GRID); y = getRand(MAX_GRID);

    /* As long as a plant isn't already there */
    if (landscape[PLANT_PLANE][y][x] == 0) {

      /* Update the environment for the new plant */
      plants[i].location.x = x;
      plants[i].location.y = y;
      landscape[PLANT_PLANE][y][x]++;
      break;

    }

  }

  return;
}
end example
 
Listing 7.8: Function initAgent to Initialize the Agent Species.
start example
void initAgent(agentType *agent)
{
  int i;

  agent->energy = (MAX_ENERGY / 2);
  agent->age = 0;
  agent->generation = 1;

  agentTypeCounts[agent->type]++;

  findEmptySpot(agent);

  for (i = 0 ; i < (MAX_INPUTS * MAX_OUTPUTS) ; i++) {
    agent->weight_oi[i] = getWeight();
  }

  for (i = 0 ; i < MAX_OUTPUTS ; i++) {
    agent->biaso[i] = getWeight();
  }

  return;
}

void findEmptySpot(agentType *agent)
{
  agent->location.x = -1;
  agent->location.y = -1;

  while (1) {

    /* Pick a random location for the agent */
    agent->location.x = getRand(MAX_GRID);
    agent->location.y = getRand(MAX_GRID);

    /* If an agent isn't there already, break out of the loop */
    if (landscape[agent->type]
                 [agent->location.y][agent->location.x] == 0)
      break;

  }

  /* Pick a random direction for the agent, and update the map */
  agent->direction = getRand(MAX_DIRECTION);
  landscape[agent->type][agent->location.y][agent->location.x]++;

  return;
}
end example
 

The growPlant function simply finds an empty spot in the plant plane and places a plant in that position (see Listing 7.7). The function also ensures that no plant exists there already (so that we can control the number of unique plants available in the environment).

The agent planes are next initialized with the herbivore and carnivore species (see Listing 7.8). A pointer to the agent to initialize is passed in (as shown in Listing 7.6). Recall that the agent type has already been defined. We first initialize the energy of the agent to half of the maximum available. This is done because when an agent reaches a large percentage of its maximum allowable energy, it is permitted to reproduce; starting at half forces the agent to quickly find food within the environment before it can reproduce. The age and generation are also initialized for a new agent. We keep a count of the number of agents of each type within agentTypeCounts; this is used to maintain the 50/50 split between the two agent species within the simulation. Next, we find a home for the agent using the findEmptySpot function. This function, shown in Listing 7.8, finds an empty element in the given plane (defined by the agent type) and stores the coordinates within the agent structure. Finally, we initialize the weights and biases for the agent's neural network.

Note that in function findEmptySpot, the landscape is represented by counts: each cell records whether an object is present at those grid coordinates. As objects die, or are eaten, the corresponding count is decremented to record the removal of the object.

We've now completed our discussion of the initialization of the simulation; let's now continue with the actual simulation. Recall that our main function (shown in Listing 7.5) calls the simulate function to drive the simulation. The simulate function (shown in Listing 7.9) permits each of the agents to perform a single action within the environment. The herbivores are simulated first and then the carnivores. This gives a slight advantage to the herbivores, but since herbivores must contend with both starvation and a predator, it seemed like a reasonable way to level the playing field, if only slightly.

Listing 7.9: The simulate Function.
start example
void simulate(void)
{
  int i, type;

  /* Simulate the herbivores first, then the carnivores */
  for (type = TYPE_HERBIVORE ; type <= TYPE_CARNIVORE ; type++) {

    for (i = 0 ; i < MAX_AGENTS ; i++) {

      if (agents[i].type == type) {
        simulateAgent(&agents[i]);
      }

    }

  }

}
end example
 

The simulate function (Listing 7.9) calls the simulateAgent function to simulate a single step of the agent. This function can be split into four logical sections. These are perception, network propagation, action selection, and energy test.

The perception algorithm is likely the most complicated part of the simulation. Recall from Figure 7.4 that an agent's field of view is based upon its direction and is split into four separate zones (front, proximity, left, and right). For the agent to perceive the environment, it must first identify the grid coordinates that make up its field of view (based upon the direction of the agent) and then split these into the four separate zones. We can see how this is done in the switch statement of the simulateAgent function (Listing 7.11). The switch statement determines the direction in which the agent is facing. Each call to the percept function sums the objects for a particular zone. Note that each HERB_<zone> argument identifies the first of the three consecutive inputs for that zone (herbivore, then carnivore, then plant).

The percept call is made with the agent's current coordinates, an offset into the inputs array, a list of coordinate offsets, and a sign bias. Note that when the agent is facing north, the north<zone> offsets are passed, but when the agent is facing south, we again pass the north<zone> offsets, this time with a -1 bias. The same is done with the west<zone> offsets. The coordinate offsets from the agent to each of the zones are defined for a given direction, but can be negated to identify the coordinates in the opposite direction.

So what do we mean by this? Let's look at the coordinate offsets in Listing 7.10.

Listing 7.10: Coordinate Offsets to Sum Objects in the Field of View.
start example
const offsetPairType northFront[]=
      {{-2,-2}, {-2,-1}, {-2,0}, {-2,1}, {-2,2}, {9,9}};
const offsetPairType northLeft[]={{0,-2}, {-1,-2}, {9,9}};
const offsetPairType northRight[]={{0,2}, {-1,2}, {9,9}};
const offsetPairType northProx[]=
      {{0,-1}, {-1,-1}, {-1,0}, {-1,1}, {0,1}, {9,9}};

const offsetPairType westFront[]=
      {{2,-2}, {1,-2}, {0,-2}, {-1,-2}, {-2,-2}, {9,9}};
const offsetPairType westLeft[]={{2,0}, {2,-1}, {9,9}};
const offsetPairType westRight[]={{-2,0}, {-2,-1}, {9,9}};
const offsetPairType westProx[]=
      {{1,0}, {1,-1}, {0,-1}, {-1,-1}, {-1,0}, {9,9}};
end example
 

Two sets of coordinate offset vectors are provided, one for north and one for west. Let's take the northRight vector as an example. Say our agent is sitting at coordinates <7,9> within the environment (using a <y,x> coordinate system). Using the northRight vector as coordinate biases, we calculate two new coordinate pairs (since <9,9> marks the list end): <7,11> and <6,11>. These two coordinates represent the two locations of the right zone for an agent facing north. If the agent were facing south, we would negate the northRight offsets before adding them to our current position, resulting in <7,7> and <8,7>: the two locations of the right zone for an agent facing south.

Now that we've illustrated the coordinate offset pairs, let's continue with our discussion of the simulateAgent function (see Listing 7.11).

Listing 7.11: Function simulateAgent .
start example
void simulateAgent(agentType *agent)
{
  int x, y;
  int out, in;
  int largest, winner;

  /* Use shorter names */
  x = agent->location.x;
  y = agent->location.y;

  /* Determine inputs for the agent neural network */
  switch(agent->direction) {

    case NORTH:
      percept(x, y, &agent->inputs[HERB_FRONT], northFront, 1);
      percept(x, y, &agent->inputs[HERB_LEFT], northLeft, 1);
      percept(x, y, &agent->inputs[HERB_RIGHT], northRight, 1);
      percept(x, y, &agent->inputs[HERB_PROXIMITY], northProx, 1);
      break;

    case SOUTH:
      percept(x, y, &agent->inputs[HERB_FRONT], northFront, -1);
      percept(x, y, &agent->inputs[HERB_LEFT], northLeft, -1);
      percept(x, y, &agent->inputs[HERB_RIGHT], northRight, -1);
      percept(x, y, &agent->inputs[HERB_PROXIMITY], northProx, -1);
      break;

    case WEST:
      percept(x, y, &agent->inputs[HERB_FRONT], westFront, 1);
      percept(x, y, &agent->inputs[HERB_LEFT], westLeft, 1);
      percept(x, y, &agent->inputs[HERB_RIGHT], westRight, 1);
      percept(x, y, &agent->inputs[HERB_PROXIMITY], westProx, 1);
      break;

    case EAST:
      percept(x, y, &agent->inputs[HERB_FRONT], westFront, -1);
      percept(x, y, &agent->inputs[HERB_LEFT], westLeft, -1);
      percept(x, y, &agent->inputs[HERB_RIGHT], westRight, -1);
      percept(x, y, &agent->inputs[HERB_PROXIMITY], westProx, -1);
      break;

  }

  /* Forward propagate the inputs through the neural network */
  for (out = 0 ; out < MAX_OUTPUTS ; out++) {

    /* Initialize the output node with the bias */
    agent->actions[out] = agent->biaso[out];

    /* Multiply the inputs by the weights for this output node */
    for (in = 0 ; in < MAX_INPUTS ; in++) {
      agent->actions[out] +=
        (agent->inputs[in] *
          agent->weight_oi[(out * MAX_INPUTS)+in]);
    }

  }

  largest = -9;
  winner = -1;

  /* Select the largest node (winner-takes-all network) */
  for (out = 0 ; out < MAX_OUTPUTS ; out++) {
    if (agent->actions[out] >= largest) {
      largest = agent->actions[out];
      winner = out;
    }
  }

  /* Perform Action */
  switch(winner) {

    case ACTION_TURN_LEFT:
    case ACTION_TURN_RIGHT:
      turn(winner, agent);
      break;

    case ACTION_MOVE:
      move(agent);
      break;

    case ACTION_EAT:
      eat(agent);
      break;

  }

  /* Consume some amount of energy.
   * Herbivores, in this simulation, require more energy to
   * survive than carnivores.
   */
  if (agent->type == TYPE_HERBIVORE) {
    agent->energy -= 2;
  } else {
    agent->energy -= 1;
  }

  /* If energy falls to or below zero, the agent dies. Otherwise,
   * we check to see if the agent has lived longer than any other
   * agent of the particular type.
   */
  if (agent->energy <= 0) {

    killAgent(agent);

  } else {

    agent->age++;

    if (agent->age > agentMaxAge[agent->type]) {
      agentMaxAge[agent->type] = agent->age;
      agentMaxPtr[agent->type] = agent;
    }

  }

  return;
}
end example
 

Having discussed perception, we now continue with the remaining three stages of the simulateAgent function. The next step is to forward propagate the inputs collected in the perception stage to the output cells of the agent's neural network. This process is performed per Equation 7.1. The result is a set of output cells, each representing a value calculated from the input cells and the weights between the input and output cells. We then select the action to take based upon the highest output cell (in winner-takes-all fashion). A switch statement is used to call the particular action function. As shown in Listing 7.11, the available actions are ACTION_TURN_LEFT, ACTION_TURN_RIGHT, ACTION_MOVE, and ACTION_EAT.

The final stage of agent simulation is an energy test. At each step, an agent loses some amount of energy (the amount differs for carnivores and herbivores). If the agent's energy falls to or below zero, the agent dies of starvation and is removed from the simulation via the killAgent function; otherwise, its age is incremented.

We'll now walk through the functions referenced within the simulateAgent function, in the order in which they are called (percept, turn, move, eat, and killAgent).

While the earlier discussion of percept may have given the impression that a complicated function was required, Listing 7.12 shows that it's quite simple. This is because much of the functionality is encoded in the offset tables; the code simply walks the data structure to achieve the intended behavior.

Listing 7.12: percept Function.
start example
void percept(int x, int y, short *inputs,
             const offsetPairType *offsets, int neg)
{
  int plane, i;
  int xoff, yoff;

  /* Work through each of the planes in the environment */
  for (plane = HERB_PLANE ; plane <= PLANT_PLANE ; plane++) {

    /* Initialize the inputs */
    inputs[plane] = 0;

    i = 0;

    /* Continue until we've reached the end of the offsets */
    while (offsets[i].x_offset != 9) {

      /* Compute the actual x and y offsets for the current
       * position.
       */
      xoff = x + (offsets[i].x_offset * neg);
      yoff = y + (offsets[i].y_offset * neg);

      /* Clip the offsets (force the toroid as shown by Figure 7.2). */
      xoff = clip(xoff);
      yoff = clip(yoff);

      /* If something is in the plane, count it */
      if (landscape[plane][yoff][xoff] != 0) {
        inputs[plane]++;
      }

      i++;

    }

  }

  return;
}

int clip(int z)
{
  if (z > MAX_GRID-1) z = (z % MAX_GRID);
  else if (z < 0) z = (MAX_GRID + z);

  return z;
}
end example
 

Recall that each call to percept counts the objects in a given zone, for all three planes at once. Therefore, percept uses a for-loop to walk through each of the three planes, computing the sum for the plane of interest. For a given plane, we walk through each of the coordinate offset pairs (as defined by the offsets argument). Each coordinate offset pair defines a new set of coordinates based upon the current position. With these new coordinates, we increment the inputs element if anything exists at the location. This means that the agent is unaware of how many objects exist at the coordinate for the given plane, only that at least one object exists there.

Also shown in Listing 7.12 is the clip function. This function is used by percept to achieve a toroid (wrap) effect on the grid.

The turn function, shown in Listing 7.13, is very simple: given the agent's current direction and the direction to turn, a new direction results.

Listing 7.13: Function turn .
start example
void turn(int action, agentType *agent)
{
  /* Since our agent can turn only left or right, we determine
   * the new direction based upon the current direction and the
   * turn action.
   */
  switch(agent->direction) {

    case NORTH:
      if (action == ACTION_TURN_LEFT) agent->direction = WEST;
      else agent->direction = EAST;
      break;

    case SOUTH:
      if (action == ACTION_TURN_LEFT) agent->direction = EAST;
      else agent->direction = WEST;
      break;

    case EAST:
      if (action == ACTION_TURN_LEFT) agent->direction = NORTH;
      else agent->direction = SOUTH;
      break;

    case WEST:
      if (action == ACTION_TURN_LEFT) agent->direction = SOUTH;
      else agent->direction = NORTH;
      break;

  }

  return;
}
end example
 

The move function is just slightly more complicated. Using the direction to select a pair of offsets, and those offsets to adjust the current position, a new set of coordinates is calculated. Also shown in Listing 7.14 is the maintenance of the landscape. Prior to the agent's move, the landscape cell for the given plane (as defined by the agent type) is decremented to represent the agent leaving the location. Once the agent has moved, the landscape is updated again to show the agent now located at the new coordinates in the given plane.

Listing 7.14: Function move .
start example
void move(agentType *agent)
{
  /* Determine new position offset based upon current direction */
  const offsetPairType offsets[4]={{-1,0},{1,0},{0,1},{0,-1}};

  /* Remove the agent from the landscape. */
  landscape[agent->type][agent->location.y][agent->location.x]--;

  /* Update the agent's X,Y position (including clipping) */
  agent->location.x =
    clip(agent->location.x + offsets[agent->direction].x_offset);
  agent->location.y =
    clip(agent->location.y + offsets[agent->direction].y_offset);

  /* Add the agent back onto the landscape */
  landscape[agent->type][agent->location.y][agent->location.x]++;

  return;
}
end example
 

The eat function is split into two stages, locating an object to eat within the agent's proximity (if one exists) and then the record keeping required to document and remove the eaten item (see Listing 7.15).

Listing 7.15: Function eat .
start example
void eat(agentType *agent)
{
  int plane, ax, ay, ox, oy, ret=0;

  /* First, determine the plane that we'll eat from based upon
   * our agent type (carnivores eat herbivores, herbivores eat
   * plants).
   */
  if (agent->type == TYPE_CARNIVORE) plane = HERB_PLANE;
  else if (agent->type == TYPE_HERBIVORE) plane = PLANT_PLANE;

  /* Use shorter location names */
  ax = agent->location.x;
  ay = agent->location.y;

  /* Choose the object to consume based upon direction (in the
   * proximity of the agent).
   */
  switch(agent->direction) {

    case NORTH:
      ret = chooseObject(plane, ax, ay, northProx, 1, &ox, &oy);
      break;

    case SOUTH:
      ret = chooseObject(plane, ax, ay, northProx, -1, &ox, &oy);
      break;

    case WEST:
      ret = chooseObject(plane, ax, ay, westProx, 1, &ox, &oy);
      break;

    case EAST:
      ret = chooseObject(plane, ax, ay, westProx, -1, &ox, &oy);
      break;

  }

  /* Found an object -- eat it! */
  if (ret) {

    int i;

    if (plane == PLANT_PLANE) {

      /* Find the plant in the plant list (based upon position) */
      for (i = 0 ; i < MAX_PLANTS ; i++) {
        if ((plants[i].location.x == ox) &&
            (plants[i].location.y == oy))
          break;
      }

      /* If found, remove it and grow a new plant elsewhere */
      if (i < MAX_PLANTS) {
        agent->energy += MAX_FOOD_ENERGY;
        if (agent->energy > MAX_ENERGY) agent->energy = MAX_ENERGY;
        landscape[PLANT_PLANE][oy][ox]--;
        growPlant(i);
      }

    } else if (plane == HERB_PLANE) {

      /* Find the herbivore in the list of agents (based upon
       * position).
       */
      for (i = 0 ; i < MAX_AGENTS ; i++) {
        if ((agents[i].location.x == ox) &&
            (agents[i].location.y == oy))
          break;
      }

      /* If found, remove the agent from the simulation */
      if (i < MAX_AGENTS) {
        agent->energy += (MAX_FOOD_ENERGY*2);
        if (agent->energy > MAX_ENERGY) agent->energy = MAX_ENERGY;
        killAgent(&agents[i]);
      }

    }

    /* If our agent has reached the energy level to reproduce,
     * allow it to do so (as long as the simulation permits it).
     */
    if (agent->energy > (REPRODUCE_ENERGY * MAX_ENERGY)) {
      if (noRepro == 0) {
        reproduceAgent(agent);
        agentBirths[agent->type]++;
      }
    }

  }

  return;
}
end example
 

The first step is to identify the plane of interest, which is based upon the type of agent doing the consuming. If the agent is an herbivore, we look into the plant plane; otherwise, as a carnivore, we look into the herbivore plane.

Next, using the agent's direction, we call the chooseObject function (shown in Listing 7.16) to return the coordinates of an object of interest in the desired plane. Note that we use the coordinate offset pairs again (as used in Listing 7.11), but concentrate solely on the proximity zone per the agent's direction. If an object was found, the chooseObject function returns a nonzero value and fills the coordinates into the ox/oy coordinates as passed in by the eat function.

Listing 7.16: Function chooseObject .
start example
int chooseObject(int plane, int ax, int ay,
                 const offsetPairType *offsets,
                 int neg, int *ox, int *oy)
{
  int xoff, yoff, i=0;

  /* Work through each of the offset pairs */
  while (offsets[i].x_offset != 9) {

    /* Determine next x,y offset */
    xoff = ax + (offsets[i].x_offset * neg);
    yoff = ay + (offsets[i].y_offset * neg);

    xoff = clip(xoff);
    yoff = clip(yoff);

    /* If an object is found at the check position, return
     * the indices.
     */
    if (landscape[plane][yoff][xoff] != 0) {
      *ox = xoff; *oy = yoff;
      return 1;
    }

    /* Check the next offset */
    i++;

  }

  return 0;
}
end example
 

The chooseObject function is very similar to the percept function shown in Listing 7.12, except that instead of accumulating the objects found in the given plane in the given zone, it simply returns the coordinates of the first object found.

The next step is consuming the object. If an object was returned, we check the plane in which it was found. For the plant plane, we search through the plants array, remove the plant from the landscape, and then grow a new plant, which will be placed at a new random location. For the herbivore plane, we identify the particular herbivore within the agents array and then kill it using the killAgent function (shown in Listing 7.17). The current agent's energy is also increased, per its consumption of the object.

Listing 7.17: Function killAgent .
start example
void killAgent(agentType *agent)
{
  agentDeaths[agent->type]++;

  /* Death came to this agent (or it was eaten)... */
  landscape[agent->type][agent->location.y][agent->location.x]--;
  agentTypeCounts[agent->type]--;

  if (agent->age > bestAgent[agent->type].age) {
    memcpy((void *)&bestAgent[agent->type],
           (void *)agent, sizeof(agentType));
  }

  /* 50% of the agent spots are reserved for asexual reproduction.
   * If we fall under this, we create a new random agent.
   */
  if (agentTypeCounts[agent->type] < (MAX_AGENTS / 4)) {

    /* Create a new agent */
    initAgent(agent);

  } else {

    agent->location.x = -1;
    agent->location.y = -1;
    agent->type = TYPE_DEAD;

  }

  return;
}
end example
 

Finally, if the agent has reached the level of energy required for reproduction, the reproduceAgent function is called to permit the agent to asexually give birth to a new agent of the given type. The reproduceAgent function is shown in Listing 7.18.

Listing 7.18: Function reproduceAgent .
start example
void reproduceAgent(agentType *agent)
{
  agentType *child;
  int i;

  /* Don't allow an agent type to occupy more than half of
   * the available agent slots.
   */
  if (agentTypeCounts[agent->type] < (MAX_AGENTS / 2)) {

    /* Find an empty spot and copy the agent, mutating one of the
     * weights or biases.
     */
    for (i = 0 ; i < MAX_AGENTS ; i++) {
      if (agents[i].type == TYPE_DEAD) break;
    }

    if (i < MAX_AGENTS) {

      child = &agents[i];

      memcpy((void *)child, (void *)agent, sizeof(agentType));

      findEmptySpot(child);

      if (getSRand() <= 0.2) {
        child->weight_oi[getRand(TOTAL_WEIGHTS)] = getWeight();
      }

      child->generation = child->generation + 1;
      child->age = 0;

      if (agentMaxGen[child->type] < child->generation) {
        agentMaxGen[child->type] = child->generation;
      }

      /* Reproducing halves the parent's energy */
      child->energy = agent->energy = (MAX_ENERGY / 2);

      agentTypeCounts[child->type]++;
      agentTypeReproductions[child->type]++;

    }

  }

  return;
}
end example
 

Killing an agent is primarily a record-keeping task. We first remove the agent from the landscape and record some statistical data (number of deaths per agent type and number of agents of a given type ). Then we store the agent away if it is the oldest found for the given species.

Once record keeping is done, we decide whether we want to initialize a new random agent (of the given type) in its place. The decision point is the number of agents that exist for the given type. Since the evolutionary aspect of the simulation is the most interesting, we want to maintain a number of open agent slots so that when an agent does desire to reproduce, it can. Therefore, we allow a new random agent to fill the dead agent's place only when the population of this agent species falls below 25% of the overall population. This leaves the remaining 25% of the agent slots (for a given species) open for reproduction.

The final function within the simulation is the reproduceAgent function (provided in Listing 7.18). This is by far the most interesting function because it provides the Lamarckian learning aspect to the simulation. When an agent gives birth in the simulation, it does so by passing on its traits (its neural network) to its child. The child inherits the parent's traits with a slight probability of mutation in the weights of the neural network. This permits the evolutionary aspect of the simulation: a desired increasing level of competence for survival within the environment.

The first step is to identify whether space is available for the new child. For this test, we check to see if less than 50% of the total agent slots are filled for the given species. This provides an even distribution of agents within the environment. This may not be biologically correct, but we can simulate one species dominating another in a special playback mode (to be discussed later).

If an open slot is found for the child, we copy the parent's agent structure into the child's and then find an empty spot for the child to occupy. Next, we mutate a single weight within the child's neural network. Note that we use the TOTAL_WEIGHTS symbolic to pick the weight to modify; this encompasses not only the weights but also the biases (as they are contiguous within the agent structure). We then do a little record keeping and set both the parent's and the child's energy to half of the maximum, which requires each of them to find food within the environment before being permitted to reproduce again.



