Ant colony foraging
Argentine ants (L. humile) were collected from the Los Angeles, CA, area in July 2022. Nine replicate nests were established, each with one to several queens, 200-500 workers, and brood. The nests had continual access to water, but food was provided only in the foraging networks. Each network is composed of 12 cells in a 4x3 design, with the cells connected in one of three ways (Fig. 1; Supplement Fig. S1). In each network type, the left two columns are a mirror image of the right two.
Network designs
1. Linear.
- Natural analog. Food may be present in a series of sites, most of which are constrained to be searched in sequential order, such as when traveling around the edge of a pond or along some other impassable obstruction.
- Network arrangement. Eight cells have connections to only two other network cells, and four cells have three connections (in two of these, one connection is to the nest). All cells with three connections are located on the inner two columns. There are several possible traplines where a series of cells can be searched without any revisits. The shortest is four cells long. Other possible traplines consist of 8 cells, and one is a Hamiltonian cycle (Gould 1991) in which all 12 cells can be visited sequentially (see Fig. 1 for the trapline routes). A forager in each cell thus always faces a choice between returning to its previous cell and continuing further along a trapline.
- Distance. The network has three rows, with increasing distance from the nest. Cells on the inner columns are always one ‘step’ (a step = moving from one cell to another) closer to the nest than those in the same row on the outer columns. Therefore, depending on the row and column, a forager would be 1-4 steps away from returning to its nest.
2. Uniform.
- Natural analog. An open plain with minimal obstructions between adjacent cells.
- Network arrangement. As in the Linear network, there are two entry points to the network. Within the network, each cell is connected to every adjacent cell. Thus, there are 4 cells with two connections to other cells, 4 with three, and 4 with four (Fig. 1). The 4 most connected cells are located on the inner two columns. There are multiple possible trapline routes that can visit any even number of cells without any revisits, including the same Hamiltonian cycle as in the Linear network.
- Distance. The two network entry points and distribution of steps needed to return to the nest are the same as in the Linear network.
3. Modular.
- Natural analog. There are specific sites/paths that could be rewarding (e.g., sites with nectar-providing plants). However, the closest routes between sites involve returning and passing by the nest.
- Network arrangement. The 12 cells are in 4 ‘modules’ of 3 cells arranged in a straight line, with no direct connections between modules (Fig. 1). Thus, any forager that travels past the first cell would need to revisit cells in order to return to the nest. Foragers cannot visit multiple modules without returning to the nest area. In essence, the Modular network has no equivalents of the traplines in the other two networks, and no cell has more than two connections to other cells. No Hamiltonian cycle exists that sequentially visits all 12 cells in this network.
- Distance. Each module has a unique entry point and, therefore, there are four cells that are only one step away from the nest. Furthermore, unlike in the Linear and Uniform networks, no ant is ever more than three steps from its nest.
Survey protocols
Ant numbers and their cell locations were recorded by scan surveys when ants were searching for food (i.e., at a time when no food was present in the networks). Multiple scan surveys could occur in the same day, as long as they were separated by at least an hour. Every nest experienced all three networks in a randomized presentation order of three series (i.e., a given series had three nests foraging in each network design).
For these three series, nests were fed by semi-randomly placing food in one of the 12 cells, with the constraints that each cell had to experience approximately the same number of presentations and that a food item could not appear sequentially in the same cell more than twice. No scan surveys were taken when food was present or within two hours after its removal. The presented food varied and cycled between scrambled eggs with meat, sugar water, ant protein jelly, and honey water. The day after a food presentation, no food was present in the networks, to facilitate scan sampling.
After completion of the three series, each colony experienced a fourth series in which food continued to appear randomly, but only in two cells (locations 5a and 5b; Fig. 1) and never more than three times in a row in the same cell. Data were collected across 50-83 days per series, producing 51-68 location surveys per nest, except for one nest whose initial Uniform presentation was set up late and had 25 surveys across 15 days. Food encounter data were collected both in the series where food could appear in any of the cells (50 encounters per nest) and in the series where food could appear only in the two most distant cells (25 timed encounters per nest). Before the first presentation and between sequential ones, nests were allowed to acclimatize to the new network or pattern of food appearance for at least two weeks before data recording commenced.
Ant presence is calculated per cell as the total number of ants observed in a given series, divided by the number of observations.
Foraging success protocol
When food was presented, the nests were observed and the time to discovery by the first ant was recorded. If no ant found the food within 60 minutes, the observation was terminated and recorded as 60 minutes. The food could remain until the following day, as this was the only food available to the nests.
Simulation model
Network designs
Agents foraged in 12-cell networks, where the individual cells were connected to each other as described for the Linear, Uniform or Modular arrangements. Depending on the simulation, food could appear randomly in any of the 12 cells (with the restriction that it could not appear more than twice sequentially in the same cell), or randomly in one of two cells (cells 5a or 5b in Fig. 1; and no more than three times sequentially in the same cell). At any given time step in the simulation no more than one cell would contain food.
When food appeared, it would be present for 5-15 time steps (mean = 10). When removed, food had a 5% probability of reappearing in each subsequent time step. Thus, food had a 51% probability of having reappeared by the 14th time step and a 92% probability by the 50th.
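With an independent 5% reappearance chance per step, the cumulative probability of reappearance by step n is 1 − 0.95^n, which reproduces the 51% and 92% figures above. A minimal check (the function name is my own):

```python
def prob_reappeared_by(n, p=0.05):
    """Cumulative probability that food has reappeared by time step n,
    given an independent per-step reappearance probability p."""
    return 1.0 - (1.0 - p) ** n

# prob_reappeared_by(14) is about 0.51; prob_reappeared_by(50) is about 0.92
```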
Agents in the foraging network leave ‘footstep’ marks. Each agent adds a score of one to the cell in which it resides at the end of a time step. At the end of a time step, all mark totals are decremented by 0.5 (with the minimal mark value being 0.05 in a given cell). Thus, a cell remains positively marked for at least one time step after an agent passes through it. Mark intensity can accumulate in a cell by having multiple agents simultaneously pass through, or by agents entering the cell and then not moving for a period of time.
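The marking scheme above can be sketched as a per-step update. This is one reading of the rules (in particular, treating 0.05 as a floor that decayed marks cannot drop below is an assumption):

```python
def update_marks(marks, agent_cells, decay=0.5, floor=0.05):
    """One time step of 'footstep' marking.
    marks: dict mapping cell id -> current mark intensity.
    agent_cells: the cell occupied by each agent at the end of the step.
    Each agent adds 1 to its cell; all marked cells then decay by `decay`,
    never dropping below `floor` (an assumed interpretation)."""
    for cell in agent_cells:
        marks[cell] = marks.get(cell, 0.0) + 1.0
    for cell in marks:
        marks[cell] = max(marks[cell] - decay, floor)
    return marks
```

For example, two agents ending a step in the same cell leave a post-decay mark of 1.5, while a single agent leaves 0.5, so a cell stays positively marked for at least one further step.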
Foraging agent behavior
A maximum of 8 agents could forage in the networks. Each agent has three behavioral ‘loci’ that determine whether it moves or remains at its present location during a time step.
- Activate (a). The probability an agent in the home nest spontaneously switches to searching its food network.
- Search (s). The base probability that a searching agent in the network either remains in its present cell or chooses to move to a different location. If this intrinsic probability of moving is less than one, then the total probability that an agent moves in a time step is a combination of this locus’s value s plus (1 - s)d, as prorated by the distance locus (d; see below).
- Recruit (r). The probability that an agent, after encountering either a successful forager returning towards the nest or a pheromone trail left by a successful forager, becomes a ‘recruit’ and moves towards the food cell. An agent can be recruited either while in the home nest or while searching its food network.
The range of possible values for these loci is 0 to 1 (except for activate which is constrained to a minimum of 0.01).
Each agent also has three more ‘loci’ that can determine a preference in their movement as a searcher to a connected cell.
- Follow footsteps (f). Values between -1 and 1. The absolute value is the probability that the choice of the next cell will be affected by the location and movement of other foragers. Positive values bias choices towards the cells with the most marks, while a negative value biases movement towards cells with the fewest marks. This locus, therefore, can create a search bias that either clumps agents or disperses them.
- Distance (d). Values between -1 and 1. This locus affects both the propensity of an agent to move and whether that propensity is an increasing or decreasing function of distance from the nest. If the value is positive, it is multiplied by a correction factor equal to the number of steps the current cell is from the nest divided by the maximum possible steps. Positive values thus increase the propensity to move when further from the nest. For example, if an agent has a distance locus value of 0.5 and is four steps from the nest, its moving probability score is d = (1 – s) x 0.5 x 4/4 = 0.5 if s = 0. The same agent one step from the nest would have a score of d = 0.5 x 1/4 = 0.125. Therefore, agents with a positive distance locus would tend to move less and stay longer in the first row of cells. For negative locus values, the correction factors are inverted: greatest close to the nest and least at distance. Thus, if s is large, moves are frequent and relatively insensitive to distance. If s is smaller and the absolute value of d is large, movement rate can also be frequent but variable with distance.
- Backtrack (b). Values between 0 and 1. A value of one means that the cell choice is indifferent to past movement. Values less than one set a probability that the agent will not return (backtrack) to its previously occupied cell. For any move probabilistically determined not to backtrack, the agent’s previous cell is excluded from any calculations for the above choice preferences. Although immediate revisits can thus be prevented, agents may still make non-sequential revisits to cells. Also, agents searching the most distant cells in the Modular network are allowed to backtrack, as this is the only path available. Thus, a completely non-redundant agent in a Modular network would always have a search path that leads to the most distant cell in a module and then immediately back to the nest.
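The combined move probability from the search and distance loci can be sketched as follows. The positive branch reproduces the worked example above; the exact form of the inverted correction factor for negative d is not stated, so the one used here is an assumption:

```python
def move_probability(s, d, steps_from_nest, max_steps=4):
    """Total per-step probability that a searching agent moves.
    s: search locus (0-1); d: distance locus (-1 to 1);
    max_steps: 4 in the Linear/Uniform networks, 3 in the Modular."""
    if d >= 0:
        factor = steps_from_nest / max_steps
    else:
        # Inverted correction for negative d: greatest close to the nest,
        # least at distance (this particular form is an assumption).
        factor = (max_steps - steps_from_nest + 1) / max_steps
    return min(1.0, s + (1 - s) * abs(d) * factor)
```

With s = 0 and d = 0.5, an agent four steps out moves with probability 0.5, and one step out with probability 0.125, matching the worked example.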
At the beginning of each time step in the simulation, it is determined for each of the 8 agents whether it moves or remains where it is. Searchers may or may not move as determined by their total probability, which is a function of their search and distance loci and their position in the network. If they move, the choice of the next cell can be random or based on the presence of other agents or their footprints. Agents that have reached the food location are ‘laden’ and return by the shortest path to the nest. They mark each cell they traverse, which acts as a pheromone path to recruit and guide other recruits to the food cell. Searching agents that encounter either a laden agent or a pheromone trail immediately become recruits. Recruits always move one cell closer to the food cell in each time step by following the pheromone trail.
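A searcher's cell choice, combining the backtrack and follow-footsteps loci, can be sketched as below. The ordering of the two checks and the tie-breaking behavior are assumptions, not specified in the text:

```python
import random

def choose_next_cell(options, marks, f, b, previous_cell):
    """Choose a searching agent's next cell (a sketch).
    options: connected cells; marks: cell -> footstep mark intensity;
    f: follow-footsteps locus (-1..1); b: backtrack locus (0..1);
    previous_cell: the agent's last occupied cell."""
    cells = list(options)
    # With probability 1 - b, exclude the previously occupied cell,
    # unless it is the only option (e.g. the far cell of a module).
    if len(cells) > 1 and random.random() > b:
        cells = [c for c in cells if c != previous_cell]
    # With probability |f|, bias by marks: toward the most-marked cell
    # if f > 0, toward the least-marked cell if f < 0.
    if random.random() < abs(f):
        key = lambda c: marks.get(c, 0.0)
        return max(cells, key=key) if f > 0 else min(cells, key=key)
    return random.choice(cells)
```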
Foraging gains and costs
Every agent that reaches a cell where food is present brings back one unit of food to the nest. For each time step, every agent incurs a metabolic cost of 0.01, independent of where it is. If the agent moves to a new cell, it incurs an additional cost of 0.01. The net gain rate from foraging for a nest is the total number of food units brought to the nest, minus the total metabolic costs incurred by all agents, divided by the total number of time steps. Laden or recruited agents move differently from searching agents. This makes the TSP in the simulation an asymmetric problem (Odili et al. 2021), in which movement behavior, costs, and path choice differ before and after an event or move. Asymmetric TSPs add further complexity, and it is not known how well current TSP algorithms can predict optimal solutions when this asymmetry is present (Odili et al. 2021). Evolutionary simulations, however, are designed to find optimizing solutions in the simultaneous presence of all complicating factors.
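The cost accounting can be written compactly as follows (argument names are my own):

```python
def net_gain_rate(food_units, agent_steps, moves, n_steps):
    """Nest-level net gain rate from foraging.
    food_units: items brought back to the nest.
    agent_steps: total agent time steps (each costs 0.01).
    moves: steps on which an agent moved (an extra 0.01 each).
    n_steps: total number of time steps in the run."""
    cost = 0.01 * agent_steps + 0.01 * moves
    return (food_units - cost) / n_steps
```

For example, 8 agents over 100 time steps (800 agent-steps), moving on 400 of them and retrieving 10 food units, yields (10 − 8 − 4)/100 = −0.02 per step.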
Evolutionary simulations
Phase one: constructing agent libraries.
A simulation starts with one agent, assigned random values for each behavioral locus. The agent forages until 120 food items have been presented, after which its total net gain is calculated. Thereafter, one of its loci is randomly mutated and the agent once more forages for 120 food items. If the mutated behavioral repertoire creates a higher net gain, it is kept. If not, the previous repertoire is kept. This is repeated for 5000 mutations, at the end of which a more efficient forager has evolved.
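The single-agent stage above is an accept-if-better hill climb. A minimal sketch follows; the mutation operator (drawing a fresh uniform value for one locus) is an assumption, since the text only states that a locus is "randomly mutated," and `evaluate` stands in for a full foraging run of 120 food presentations returning net gain:

```python
import random

SIGNED = ("follow", "distance")            # loci ranging over [-1, 1]
UNIT = ("search", "recruit", "backtrack")  # loci ranging over [0, 1]

def random_agent():
    agent = {k: random.uniform(-1, 1) for k in SIGNED}
    agent.update({k: random.uniform(0, 1) for k in UNIT})
    agent["activate"] = random.uniform(0.01, 1)  # activate is floored at 0.01
    return agent

def mutate(agent):
    mutant = dict(agent)
    locus = random.choice(list(agent))
    lo = -1 if locus in SIGNED else (0.01 if locus == "activate" else 0)
    mutant[locus] = random.uniform(lo, 1)
    return mutant

def hill_climb(evaluate, n_mutations=5000):
    agent = random_agent()
    best = evaluate(agent)
    for _ in range(n_mutations):
        mutant = mutate(agent)
        gain = evaluate(mutant)
        if gain > best:  # keep the mutation only if net gain improves
            agent, best = mutant, gain
    return agent, best
```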
At this point, a second agent (with randomly chosen loci values) is added to the nest and the process is repeated, with mutations possible for both agents. Additional agents are similarly added until there are eight in total. This entire process is replicated 10 times for each combination of network arrangement (Linear, Uniform, or Modular) and food distribution (food appearing in any cell, or in only two). Thus, at the end of this phase, each network type and food distribution has a library of 80 evolved agents (8 agents from each of the 10 replicates) with evolved behavioral repertoires.
Phase two: optimizing foraging strategies.
A simulation starts with 8 agents drawn from the run with the highest net energy gain rate. As before, they forage until food has been presented 120 times and net gain is calculated. Afterwards, the agent population is modified in one of the following ways:
- A randomly chosen locus on a randomly selected agent is mutated or replaced by the locus value from another randomly chosen agent.
- The least productive agent of the eight is entirely replaced by another randomly drawn from the entire library.
If the modification results in a higher net gain, it is retained. Otherwise, it is discarded. This process is repeated 10,000 times, and then replicated ten times. Note that it is possible for combinations of behavioral repertoires to evolve that would bias agent distributions towards any given cell in Figure 1 – if that distribution produces increased net foraging gain.
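One phase-two modification step can be sketched as below; the helper structure is an assumption (per-agent productivity is passed in as `gains`, and the mutation range is simplified):

```python
import random

def modify_population(population, gains, library):
    """Produce one candidate population to evaluate against the current one.
    population: list of 8 agent dicts; gains: per-agent productivity;
    library: evolved agents from phase one."""
    candidate = [dict(a) for a in population]
    if random.random() < 0.5:
        # Mutate one locus, or copy its value from another random agent.
        i = random.randrange(len(candidate))
        locus = random.choice(list(candidate[i]))
        if random.random() < 0.5:
            candidate[i][locus] = random.uniform(-1, 1)  # simplified range
        else:
            candidate[i][locus] = random.choice(candidate)[locus]
    else:
        # Replace the least productive agent with a random library agent.
        worst = gains.index(min(gains))
        candidate[worst] = dict(random.choice(library))
    return candidate
```

The candidate is kept only if it yields a higher net gain; otherwise the current population is retained.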
At the end of each replicate, the evolved genotypes of the eight foraging agents are recorded (yielding 80 genotypes across the 10 replicates). From these optimized combinations of group-searching genotypes, the following data are recorded:
1. Agent distribution in the networks as calculated by the total number of agents observed in each of the 12 cells divided by the total number of time steps.
2. Foraging efficiency as the proportion of food presentations which are encountered before disappearing (‘hits’ versus ‘misses’), and the number of time steps to encounter for each found presentation. Food could appear in a cell where an agent is already present, yielding a time step value of zero.
3. Agent search decisions as to movement between cells and the number of cells visited on each search trip. When the decision is between multiple cells, the move is a “follow” if the chosen cell has the greatest signal of previous agent presence; an “avoid” if the cell has the least signal; or a “random” if otherwise. Note that if the agent chooses randomly, it could still be logged as a follow or avoid. Also, if only one cell choice is available or all the available choices are identical in signal strength, then the choice is recorded as being random, even if the agent is strongly biased in behavior. Thus, strongly biased agents could still make mostly random choices. Decisions and paths are tracked until each agent in the group of eight has made at least 1000 trips. With thousands of decisions recorded, any bias in choosing would be statistically revealed.
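The follow/avoid/random logging rules can be stated as a small classifier (the function name is my own):

```python
def classify_decision(chosen, options, marks):
    """Log a searcher's move as 'follow', 'avoid', or 'random'.
    A single available option, or all-equal signals, is logged as random
    regardless of the agent's actual bias."""
    signals = [marks.get(c, 0.0) for c in options]
    if len(options) < 2 or len(set(signals)) == 1:
        return "random"
    s = marks.get(chosen, 0.0)
    if s == max(signals):
        return "follow"
    if s == min(signals):
        return "avoid"
    return "random"
```

Note that, as in the text, a move chosen at random can still be logged as a follow or avoid if it happens to land on the extreme-signal cell.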
Agent distributions are counted only during time periods when food is either absent or not yet found; i.e., data are recorded in the simulations only while agents are searching, not while recruitment occurs. This matches when the observations were taken in the ant experiments.
Statistical Analyses
The results from both the simulation models and ant experiments are analyzed by ANOVA, with the independent variables being habitat characteristics that are appropriate to the relationship being tested such as: the type of network; the centrality of a cell (positioned on an inner or outer column); row number (first or closest to the nest to third and furthest away); and the pattern of food appearance (either equally likely in any cell or only appearing in two cells at the greatest distance: Fig. 1). Note that column and row are proxy measures for the effects of behavioral biases. It is not assumed that ants are cognizant of column alignment or row number, per se.
In each network setup there are six distinct cell categories, each defined by its column and row location. For distributions, the dependent variable is the number of agents or ants in a given cell divided by the number of times the cell was observed. For ant search efficiency, the dependent variables are the proportion of times a food item was encountered within an hour of its placement in the network, and the length of time until such an encounter. Agent search efficiency is the proportion of times a food item is encountered in the 5-15 time steps it is present, and the number of time steps until such an encounter occurs.
The results from the simulations are not intended to produce a quantitative match, given the unavoidable differences between a hypothetical agent and an actual ant. Instead, the intent of the statistical analyses is comparative: how similar are the statistical relationships that arise in foraging ant nests to those that evolve in an evolutionary simulation based on individual agents following simple decision rules?