From Bricks to Brains


The Embodied Cognitive Science of LEGO Robots

Michael R.W. Dawson

Brian Dupuis

Michael Wilson

Contents

Acknowledgements

CHAPTER 1
Mind Control — Internal or External?

1.0 Chapter Overview

1.1 Our Special Intelligence

1.2 Rodents that Engineer Wetlands

1.2.1 Castor canadensis

1.2.2 The Cognitive Beaver?

1.3 The Instincts of Insects

1.3.1 The Instinctive Wasp

1.3.2 Umwelt and Control

1.4 Paper Wasp Colonies and Their Nests

1.4.1 Colonies and Their Nests

1.4.2 Scaling Up

1.5 The Towers of Termites

1.5.1 Termite Mounds

1.5.2 The Thinking Termite?

1.6 The Rational Insect?

1.6.1 Computational Theory of Mind

1.6.2 Are Insects Representational?

1.7 Insect as Superorganism?

1.7.1 The Intelligent Whole

1.7.2 Colonial Intelligence

1.8 The Ultimate Democracy

1.8.1 Emerging Problems

1.8.2 From Whence Organization?

1.9 Programs for Nest Construction

1.9.1 An Inherited Program

1.9.2 Testing the Theory

1.10 The Environment as Program

1.10.1 A Complex Environment

1.10.2 Stigmergy

1.11 Stigmergy and the Synthetic Approach

1.11.1 The Synthetic Approach

1.11.2 Wasp Nest Examples

1.12 Stigmergy and the Parable of the Ant

1.12.1 Intelligence and Stigmergy

1.12.2 Are Mammals Stigmergic?

1.13 Embodiment and Posthumanism

1.13.1 Posthumanism

1.13.2 Embodiment

1.14 Stigmergy and Classical Cognition

1.14.1 Classical Control

1.14.2 Externalizing Control

CHAPTER 2
Classical Music and the Classical Mind

2.0 Chapter Overview

2.1 The Boolean Dream

2.1.1 Cognitive Science

2.1.2 Logicism

2.1.3 The Boolean Dream

2.2 Classical Cognitive Science

2.2.1 A Classical Device

2.2.2 Three Key Characteristics

2.3 Classical Views of Mind and Music

2.3.1 Mind and Music

2.3.2 A Classical Analogy

2.4 Musical Logicism

2.4.1 Musical Formalisms

2.4.2 Sonata-Allegro Form

2.5 A Harmonious Narrative

2.5.1 Representational Explanation

2.5.2 Musical Expressions

2.6 The Nature of Classical Composition

2.6.1 The Disembodied Mind

2.6.2 The Thoughtful Composer

2.7 Central Control of a Classical Performance

2.7.1 Central Control

2.7.2 Conductor as Central Controller

2.7.3 The Controlling Score

2.8 Disembodiment and the Classical Audience

2.8.1 Disembodiment

2.8.2 Audience and Composition

2.9 Classical Reactions

2.9.1 Reacting to Music

2.9.2 Classical Competitors

2.10 Modern Music

2.10.1 Out with the Old

2.10.2 In with the New

2.11 Dodecaphony

2.11.1 Tonality and Atonality

2.11.2 The Twelve-Tone Method

2.12 Reactions to Atonal Structure

2.12.1 From Structure to Structure

2.12.2 Reducing Central Control

2.13 Control and Emergence in Cage’s Music

2.13.1 Silence

2.13.2 Chance and Emergence

2.14 Emergence in Minimalist Music

2.14.1 Tape as Medium

2.14.2 It’s Gonna Rain

2.15 A Minimalist Score

2.15.1 In C

2.15.2 Minimalism and Stigmergy

2.16 Musical Stigmergy

2.16.1 Musical Swarms

2.16.2 The ReacTable

2.17 From Hot to Cool

2.17.1 The Conduit Metaphor

2.17.2 Audible Processes

2.18 The Shock of the New

2.18.1 Classical Value

2.18.2 A Tradition of Improvisation

2.19 Musical Methods and the Mind

2.19.1 Characteristic Questions

2.19.2 The Synthetic Approach

CHAPTER 3
Situated Cognition and Bricolage

3.0 Chapter Overview

3.1 Three Topics to Consider

3.1.1 Review to This Point

3.1.2 New Headings

3.2 Production Systems as Classical Architectures

3.2.1 The Production System

3.2.2 Classical Characteristics

3.3 Sense–Think–Act with Productions

3.3.1 An Early Production System

3.3.2 The Next ACT

3.4 Logic from Action

3.4.1 Productions and Logicism

3.4.2 Logic as Internalized Action

3.5 An EPIC Evolution

3.5.1 Productions, Sensing, and Action

3.5.2 The EPIC Architecture

3.6 Productions and Formal Operations

3.6.1 Sense–Think–Act

3.6.2 Formal Operations

3.7 Evidence for Sensing and Acting Without Thinking

3.7.1 Classical Modularity

3.7.2 Visuomotor Modules

3.8 Action without Representation?

3.8.1 Multiple Visual Pathways

3.8.2 Blindsight

3.9 A Need for Action

3.9.1 Incorporating Action

3.9.2 Advantages of Action

3.10 The External World and Computation

3.10.1 Worldly Support for Cognition

3.10.2 Scaffolding

3.11 Some Implications of Scaffolding

3.11.1 The Leaky Mind

3.11.2 Group Cognition

3.11.3 Specialized Cognition

3.12 Stigmergy of Thought

3.12.1 Environmental Import

3.13 Bricolage

3.13.1 Resource Allocation

3.13.2 Thought as Bricolage

3.14 The Power of Bricolage

3.14.1 The Savage Mind

3.14.2 Power from Non-linearity

3.15 The Society of Mind

3.15.1 Agents and Agencies

3.15.2 Explaining Mental Societies

3.16 Engineering a Society of Mind

3.16.1 Reverse Engineering

3.16.2 Forward Engineering

3.17 Synthesis in Action

3.17.1 Cricket Phonotaxis

3.17.2 Robot Phonotaxis

3.18 Verum-Factum

3.18.1 Synthetic Psychology

3.18.2 Vico’s Philosophy

3.19 Mind and Method

3.19.1 Mind

3.19.2 Method

3.20 Synthesis as Process, Not as Design

3.20.1 Synthesis Is Not Design

3.20.2 Synthesis as Process

3.21 Building Bricoleurs

3.21.1 Cartesian Alternatives

3.21.2 Students as Bricoleurs

CHAPTER 4
Braitenberg’s Vehicle 2

4.0 Chapter Overview

4.1 A Robot’s Parable

4.1.1 Path of a Robot

4.1.2 Analysis and Synthesis

4.2 Braitenberg’s Thought Experiments

4.2.1 A Thought Experiment

4.2.2 Goals

4.3 Foraging for Parts

4.3.1 Parts and Foraging

4.3.2 Robot Bricolage

4.4 Chassis Design (Steps 1 through 4)

4.4.1 General Design

4.4.2 Initial Chassis Construction

4.5 Constructing the Chassis (Steps 5 through 7)

4.5.1 General Design

4.6 The NXT Interactive Servo Motor

4.6.1 The Evolution of LEGO Motors

4.6.2 The NXT Servo Motor

4.7 Adding Motors to the Chassis (Steps 8 and 9)

4.8 Adding a Front Slider (Step 10)

4.8.1 Passive Front Support

4.8.2 Constructing the Front Slider

4.9 Constructing Rear Axles (Step 11)

4.9.1 Wheel Axle Design

4.9.2 Constructing the Wheel Axles

4.10 Attaching the NXT Brick (Step 12)

4.10.1 The NXT Brick

4.10.2 Attaching the Brick

4.11 Attaching Light Sensor Supports (Step 13)

4.11.1 Sensor Mount Design

4.12 Adding Light Sensors (Step 14)

4.12.1 Mounting Light Sensors

4.13 Wheels and Cable Considerations

4.13.1 Completing the Robot

4.14 Sensing, Acting, and the NXT Brick

4.14.1 The NXT Brick

4.15 NXT Light Sensor Properties

4.15.1 The LEGO Light Sensor

4.16 Programming the NXT Brick

4.16.1 Programming Steps

4.16.2 Programming Environment

4.17 A Simple Main Task

4.17.1 The Main Task

4.17.2 Defining Variable Names

4.17.3 Miscellaneous Syntax

4.18 Linking Light Sensors to Motors

4.18.1 Two More Tasks

4.19 A Complete Program

4.20 Exploring Vehicle 2 Behaviour

4.20.1 Three Test Environments

4.20.2 A Simple World

4.20.3 A More Complex World

4.20.4 Complexities via Embodiment

4.21 Further Avenues for Bricoleurs

4.21.1 Exploring Embodiment

4.21.2 Manipulating Environments

4.21.3 Modifying Code

4.21.4 Bricolage, Not Design

CHAPTER 5
Thoughtless Walkers

5.0 Chapter Overview

5.1 Analysis vs. Synthesis

5.1.1 Synthetic Methodology

5.1.2 Analytic Methodology

5.1.3 Complementary Methodologies

5.2 Biomimetics and Analysis

5.2.1 Natural Technology

5.2.2 Early Analysis of Locomotion

5.3 From Motion Analysis to Walking Robots

5.3.1 Modern Motion Analysis

5.3.2 Biologically Inspired Robots

5.4 Analysis That Constrains Synthesis

5.4.1 Passive Dynamic Walking

5.4.2 Search and Construct

5.5 A LEGO Passive Dynamic Walker

5.5.1 Synthesis after Analysis

5.5.2 Parts and Foraging

5.6 Building a Straight-Legged Hinge

5.6.1 Centre Post

5.6.2 Support Legs

5.7 Weighting the Walker

5.7.1 The Need for Weights

5.7.2 LEGO Weights

5.8 A Specialized Environment

5.8.1 The Need for Holes

5.8.2 Building a Ramp with Gaps

5.9 Raising the Ramp

5.9.1 Reinforced Ends

5.9.2 Elevating the Platform

5.10 From Talking the Talk to Walking the Walk

5.10.1 Passive Dynamic Walking

5.10.2 Implications

5.11 Synthesis in Aid of Analysis

5.11.1 The Opposite Direction

5.11.2 Analytic Intractability

5.12 Ashby’s Homeostat

5.12.1 Homeostat Design

5.12.2 Behaviour of the Homeostat

5.13 The Great Pretender

5.13.1 Synthesis and Scaling Up

5.13.2 Strandbeest

5.14 A LEGO Strandbeest

5.14.1 Alternative Material

5.15 Segmented Design

5.15.1 Parts and Foraging

5.16 From the Ground Up

5.16.1 Ankles and Feet

5.16.2 Feet vs. Wheels

5.17 A Strandbeest Leg

5.17.1 Precise Proportions

5.18 LEGO Legs and Holy Numbers

5.18.1 Completing a Leg

5.18.2 The Holy Numbers

5.19 Reinventing the Wheel

5.19.1 Pairing Legs into a Module

5.20 Quadruped

5.20.1 Mounting the Modules

5.20.2 Gait Exploration

5.21 Manipulating Quadruped Gaits

5.21.1 Quadruped Gaits

5.21.2 Exploring Strandbeest Gaits

5.22 An Octapedal Strandbeest

5.22.1 Additional Legs

5.22.2 Walking with Eight Legs

5.23 Strandbeests in Action

5.23.1 Observing Strandbeest Gaits

5.23.2 Exploiting Stronger Situation

5.24 Alternative Gaits and Robotic Snakes

5.24.1 Snake-like Movement

5.24.2 Analyzing Snake Locomotion

5.25 The Wormeostat: A Synthetic Snake or Worm

5.25.1 Feedback and Motion

5.25.2 Motion from Friction

5.26 Foraging For Wormeostat Parts

5.26.1 Building the Wormeostat

5.26.2 Parts and Modules

5.27 Motor and Tire Assemblies

5.27.1 Motor Modules

5.27.2 Tire Assemblies

5.28 Preparing Two NXT Bricks

5.28.1 Control and Friction

5.29 Front End Friction

5.29.1 Motor Friction

5.29.2 Brick Friction

5.30 A Second Front End Motor

5.30.1 Reflected Construction

5.31 Completing the Front Half

5.31.1 Connecting Three Components

5.32 Modules for the Rear Half

5.32.1 Replicating Components

5.33 Completing the Rear Half

5.33.1 A Second Chain

5.34 The Total Wormeostat

5.34.1 Linking the Halves

5.34.2 Programming Feedback

5.35 Wormeostat Code for Motor 1

5.35.1 Motor Behaviour

5.36 Wormeostat Code for Motor 2

5.36.1 Second Verse Same as First

5.37 Wormeostat Main Task

5.37.1 The Main Task

5.37.2 Modular Duplication

5.38 The Wormeostat’s Behaviour

5.38.1 Fireside Dogs

5.38.2 Wormeostat Movement

5.39 Implications

5.39.1 Two Cultures

5.39.2 Mending the Rift

CHAPTER 6
Machina Speculatrix

6.0 Chapter Overview

6.1 William Grey Walter

6.1.1 Biographical Highlights

6.1.2 A Very Public Robot

6.2 The Tortoise

6.2.1 Appearance

6.2.2 Behaviour

6.3 Speculation and Positive Tropisms

6.3.1 Exploration as Speculation

6.3.2 Phototropism

6.3.3 Inferring Internal Mechanisms

6.4 Not All Lights Are the Same

6.4.1 A Negative Phototropism

6.4.2 Analysis of Behaviour

6.5 Choice

6.5.1 Buridan’s Ass

6.5.2 Complicating the Environment

6.6 Additional Negative Tropisms

6.6.1 Avoiding Obstacles

6.6.2 Avoiding Slopes

6.7 Dynamic Tropisms

6.7.1 Toys vs. Tools

6.7.2 Changing Light Sensitivity

6.8 Self-Recognition

6.8.1 Self, Not Machine

6.8.2 The Mirror Dance

6.9 Mutual Recognition

6.9.1 The Relative World

6.9.2 Social Environments

6.10 Internal Stability

6.10.1 Feedback and Cybernetics

6.10.2 Cybernetics and Simulation

6.11 Parsimony

6.11.1 Two Approaches to Stability

6.11.2 A Simple Machine

6.12 A LEGO Tortoise

6.12.1 A New Generation

6.12.2 Variations of Design

6.13 Parts for a Modular Design

6.13.1 Sophistication from Tweaking

6.14 The “Spine” of the Chassis

6.14.1 Building a Spine

6.15 Mirrored Motor Assemblies

6.15.1 Two Motor Assemblies

6.16 Attaching Motors to the Chassis

6.16.1 Motors and Steering Gear

6.17 A Small Stick for Big Obstacles

6.17.1 Stick-In-Ring Detector

6.18 Adding a Drive Gear and Stick-In-Ring Switch

6.18.1 Front Wheel Drive

6.18.2 Stick In the Stick-In-Ring

6.19 A Vertical Front Axle

6.19.1 Front Axle Gears

6.20 Preparing the NXT Brick

6.20.1 Readying the Brick

6.20.2 Stick-In-Ring Detector

6.21 Supporting Rear Wheels

6.21.1 Rear Wheel Supports

6.21.2 Brick and Wheel Attachment

6.22 Front Wheel Assembly and Attachment

6.22.1 Front Wheel Gear Gang

6.23 Pilot Light Assembly

6.23.1 A LEGO Pilot Light

6.24 Attaching Pilot Lights and Connecting Wires

6.24.1 Pilot Light Wiring

6.25 A Periscope Mirror

6.25.1 A 360° Rotating Mirror

6.26 Sensing Light from the Periscope

6.26.1 Attaching the Periscope

6.26.2 Sensing Periscope Light

6.27 Adding More Cables

6.27.1 Periscope Wiring

6.27.2 Motor Wiring

6.28 A Surrounding Shell

6.28.1 Shell Design

6.29 Suspending the Shell

6.29.1 The Suspension System

6.30 Completing the Tortoise

6.30.1 Attaching the Shell

6.30.2 Next: Tortoise Programming

6.30.3 Embodiment Issues

CHAPTER 7
The Subsumption Architecture

7.0 Chapter Overview

7.1 A Sandwich of Vertical Modules

7.1.1 Cognitivism

7.1.2 The Classical Sandwich

7.2 The New Look and Its Problems

7.2.1 The New Look in Perception

7.2.2 Shakey Implications

7.3 Horizontal Layers in the Human Brain

7.3.1 Evidence from Action

7.3.2 Sandwich Alternative

7.4 Horizontal Links between Sense and Action

7.4.1 A Sandwich Alternative

7.5 The Subsumption Architecture

7.5.1 Modularity of Mind

7.5.2 Vertical Modules

7.6 Advantages of the Subsumption Architecture

7.6.1 Reasons for Revolution

7.6.2 Coping with Multiple Goals

7.6.3 Combining Multiple Sensors

7.6.4 Robustness

7.6.5 Speed with No Modeling

7.7 Concrete Examples

7.7.1 Walking Robots

7.7.2 The Tortoise

7.8 Level 0, Basic Movement

7.8.1 A Fundamental Function

7.9 Level 1, Steering

7.9.1 Exploration

7.10 Level 2, Sensing Ambient Light

7.10.1 Light Affects Lower Levels

7.11 Level 3, Obstacle Avoidance

7.11.1 Sophistication from Tweaking

7.12 The Main Task

7.12.1 Modular Design

7.13 Observing Tortoise Behaviour

7.13.1 Level 0

7.13.2 Level 0 + Level 1

7.13.3 Level 0 + Level 1 + Level 2

7.13.4 All Four Levels

7.14 The Total Tortoise

7.14.1 Repeating History

7.14.2 Search for an Optimum

7.14.3 Free Will

7.14.4 Discernment

7.14.5 Self-Recognition

7.15 Tortoise Implications

7.15.1 Grey Walter’s Legacy

7.15.2 The LEGO Tortoise

7.15.3 Degrees of Embodiment

CHAPTER 8
Embodiment, Stigmergy, and Swarm Intelligence

8.0 Chapter Overview

8.1 Travelling Salesmen

8.1.1 The Traveling Salesman Problem

8.1.2 Solving the TSP

8.2 Swarm Intelligence

8.2.1 Economical Ants

8.2.2 Emergent Intelligence

8.3 Collective Contributions

8.3.1 Swarm Advantages

8.3.2 Robot Collectives

8.4 Critical Numbers of Agents

8.4.1 When Is a Swarm Intelligent?

8.4.2 A Foraging Example

8.5 Coordination, Communication, and Cost

8.5.1 Costly Coordination

8.5.2 A Stigmergic Solution

8.6 Co-operative Transport

8.6.1 Robots that Push Boxes

8.6.2 Stigmergic Co-operation

8.7 Collective Sorting

8.7.1 Spatial Sorting by Ants

8.7.2 Stigmergic Sorting by Robots

8.8 Stigmergy and Degrees of Embodiment

8.8.1 Extending the Mind into the World

8.8.2 Degrees of Embodiment

8.9 The Lemming

8.9.1 Lemming Situation

8.9.2 Lemming Embodiment

8.10 Foraging for Robot Parts and World Parts

8.10.1 Robot Parts

8.10.2 Bricks to Move

8.11 Chassis and Rear Wheels

8.11.1 NXT Brick as Chassis

8.12 Mounting Motors

8.12.1 Motors and Cables

8.13 Upper Ultrasonic Sensor and Front Wheels

8.13.1 The Upper Ultrasonic

8.13.2 Front Wheel Drive

8.14 Mounting the Lower Ultrasonic Sensor

8.14.1 Angled Ultrasonics

8.15 Designing the Brick Catcher

8.15.1 Important Embodiment

8.16 Brick Catcher, Brick Processor

8.16.1 Embodiment and Situation

8.17 Completing the Lemming

8.17.1 Final Construction

8.18 Level 0: Drive and Calibrate

8.18.1 Driving

8.18.2 Calibration

8.19 Level 1: Dodge Obstacles

8.19.1 The Lemming’s Umwelt

8.19.2 Avoiding Obstacles

8.20 Level 2: Seek Bricks

8.20.1 Brick Attraction

8.20.2 Using the Lower Ultrasonic

8.21 Level 3: Process Brick Colours

8.21.1 Bricks and Behaviour

8.22 Level -1: Integrate Levels to Control Motors

8.22.1 Multiple Motor Influences

8.23 Putting All the Levels Together

8.23.1 The Main Task

8.24 The Lonely Lemming

8.24.1 Lemming Behaviour

8.25 Collective Collecting

8.25.1 Two Lemmings

8.25.2 Three Lemmings

8.26 Explaining Sorting Into Corners

8.26.1 Corner Analysis

8.26.2 Corners for Free

8.27 Do Lemmings Have Collective Intelligence?

8.27.1 “Speed” of Work

8.28 Explaining Collective Intelligence

8.28.1 Brick Dynamics

8.28.2 Interaction and the Middle

8.29 Implications and Future Directions

8.29.1 Implications

8.29.2 Future Directions

CHAPTER 9
Totems, Toys — Or Tools?

9.0 Chapter Overview

9.1 Are Our Robots More Than Totems?

9.1.1 Uncanny Machines

9.2 Are Our Robots More Than Toys?

9.2.1 The Tortoise as Toy

9.2.2 LEGO Is a Toy!

9.3 From Totems and Toys to Tools

9.3.1 Tortoise as Tool

9.3.2 Pedagogical and Scientific Tools

9.4 Animal Navigation and Representation

9.4.1 Navigational Organisms

9.4.2 Sense–Think–Navigate

9.5 Representation and Robot Navigation

9.5.1 Animals to Animats

9.5.2 SLAM and AntiSLAM

9.6 Spatial Behaviour and the Reorientation Task

9.6.1 Navigational Cues

9.6.2 The Reorientation Task

9.7 Basic Findings with the Reorientation Task

9.7.1 Rotational Error

9.7.2 Mandatory Geometry

9.8 Representational Theories of Reorientation

9.8.1 The Geometric Module

9.8.2 Geometry and Representation

9.9 Whither the Geometric Module?

9.9.1 Modifying Modularity

9.9.2 Non-modular Reorientation

9.10 Reactive Robots and Their Evolution

9.10.1 New Wave Robotics

9.10.2 Evolving Robots

9.11 Reactive Robots and Rotational Errors

9.11.1 Reactive Reorientation

9.11.2 Representative Reaction

9.12 Reorienting LEGO Robots

9.12.1 Motivating AntiSLAM

9.12.2 Ultrasonic Sensors

9.13 AntiSLAM Overview

9.13.1 Modifying Vehicle 2

9.14 From Vehicle 2 Onward

9.14.1 Foraging for Parts

9.15 A Spine for AntiSLAM

9.15.1 Creating a Chassis

9.16 Structure from Motors

9.16.1 Motors and Axles

9.17 Sensor Supports and Front Wheels

9.17.1 Creating Sensor Supports

9.17.2 Front Wheels

9.18 Sensor Arrays

9.18.1 Mounting Sensors

9.19 AntiSLAM’s Rear Wheels and Cables

9.19.1 Rear Wheels

9.19.2 Connecting Cables

9.20 AntiSLAM Level 0: Drive

9.20.1 Subsumption Architecture

9.21 Level 1: Escape

9.21.1 Importance of Escaping

9.22 Level 2: Following Walls

9.22.1 Biasing Lower-level Behaviour

9.23 Level 3: Using Light as a Local Feature

9.23.1 Local Feature Sensitivity

9.24 Level -1: Determining Motor Speeds

9.24.1 Finally, Level -1

9.25 The Main Task

9.25.1 Putting It Together

9.26 Primitive Behaviours

9.26.1 Levels -1 + 0

9.26.2 Levels -1 + 0 + 1

9.27 Bias and Reorientation

9.27.1 Levels -1 + 0 + 1 + 2

9.27.2 Rotational Error and AntiSLAM

9.28 Beyond Rotational Error

9.28.1 Nolfi and Beyond

9.28.2 Feature Sensitivity

9.29 Moving the Local Feature

9.29.1 Moving the Light

9.30 All Levels with No Local Feature

9.30.1 Turning Lights Off

9.31 Reorienting Reorientation

9.31.1 Building a Better Mouse

9.31.2 Different Views of Reorientation

9.32 Hard Fun and Hard Science

9.32.1 Hard Fun

9.32.2 Hard Science

References

Index

Acknowledgements

This book manuscript was created with the support of an NSERC Discovery Grant, a SSHRC Standard Research Grant (particularly Chapter 2), and a 2007–08 McCalla Professorship from the Faculty of Arts at the University of Alberta, all awarded to MRWD.

Accompanying this book is additional web support that provides pdf files of traditional, “wordless” LEGO instructions for building robots, downloadable programs for controlling the robots that we describe, and videos that demonstrate robot behaviour. This web support is available at https://www.bcp.psych.ualberta.ca/~mike/BricksToBrains/.

The instructional images that are provided in this book were created by first building a CAD model of the robot using the LDRAW family of programs. This is a set of programs available as freeware from https://www.ldraw.org/. The CAD files were then converted into the instructional images using the LPUB4 program, available as freeware from www.kclague.net/LPub4.htm. Resources are available that provide detailed instructions on how to use such software tools (Clague, Agullo, & Hassing, 2002). The NXC code for the various robots described in this book was created, and downloaded to the robot, using the BricxCC utility, available as freeware at https://bricxcc.sourceforge.net/. This utility provides an excellent help file to describe the components and the syntax of the NXC programming language.

All of the photographs used in Chapter 1 were taken by Nancy Digdon, and are used with her permission. The beaver in Section 1.2 was in Astotin Lake at Elk Island National Park near Edmonton, Alberta. The beaver dam in the same section is located at the nature trail in Miramichi, New Brunswick. The two images of the bald-faced hornet nest were also taken at Miramichi. The wasp nest under construction in Section 1.10 was found at Elk Lake Park, near Victoria, British Columbia.

Chapter 1
Mind Control — Internal or External?

1.0 CHAPTER OVERVIEW

Classical cognitive science adopts the representational theory of mind (Pylyshyn, 1984). According to this theory, cognition is the rule-governed manipulation of internal symbols or representations. This view has evolved from Cartesian philosophy (Descartes, 1637/1960), and has adopted many of Descartes’ tacit assumptions about the nature of the mind (Devlin, 1996). For example, it views the mind as a disembodied entity that can be studied independently of its relationship to the world. This view of the rational mind as distinct from the world has also been used to distinguish humans from other organisms. That is, rational humans are viewed as being controllers or creators of their environment, while irrational animals are completely under the environment’s control (Bertalanffy, 1967; Bronowski, 1973; Cottingham, 1978).

The purpose of this chapter is to explore this view of the mind, in preparation for considering alternative accounts of cognition that will be developed in more detail as the book proceeds. We begin by considering the representational theory of mind, and how it is typically used to distinguish man from other organisms. We consider examples of animals, such as beavers and social insects, that appear to challenge this view because they create sophisticated structures, and could be viewed to some degree as controllers or builders of their own environment. A variety of theories of how they build these structures are briefly considered. Some of these theories essentially treat these animals as being rational or representational. However, more modern theories are consistent with the notion that the construction of elaborate nests or other structures is predominantly under the control of environmental stimuli; one prominent concept in such theories is stigmergy. The chapter ends, though, by pointing out that such control is easily found in prototypical architectures that have been used to model human cognition. It raises the possibility that higher-order human cognition might be far less Cartesian than classical cognitive science assumes, a theme that will be developed in more detail in Chapter 2. The notion of stigmergy that is introduced in Chapter 1 will recur in later chapters, and will be particularly important in Chapter 8’s discussion of collective intelligence.

1.1 OUR SPECIAL INTELLIGENCE

We humans constantly attempt to identify our unique characteristics. For many, our special status comes from possessing a soul or consciousness. For Descartes, the essence of the soul was “only to think,” and the possession of the soul distinguished us from the animals (Descartes, 1637/1960). Because they lacked souls, animals could not be distinguished from machines: “If there were any machines which had the organs and appearance of a monkey or of some other unreasoning animal, we would have no way of telling that it was not of the same nature as these animals” (p. 41). This view resulted in Cartesian philosophy being condemned by modern animal rights activists (Cottingham, 1978).

More modern arguments hold that it is our intellect that separates us from animals and machines (Bronowski, 1973). “Man is distinguished from other animals by his imaginative gifts. He makes plans, inventions, new discoveries, by putting different talents together; and his discoveries become more subtle and penetrating, as he learns to combine his talents in more complex and intimate ways” (p. 20). Biologist Ludwig von Bertalanffy noted, “symbolism, if you will, is the divine spark distinguishing the poorest specimen of true man from the most perfectly adapted animal” (Bertalanffy, 1967, p. 36).

It has been argued that mind emerged from the natural selection of abilities to reason about the consequences of hypothetical actions (Popper, 1978). Rather than performing an action that would have fatal consequences, an organism can think the action through, evaluate it, and discard it before ever acting.

Popper’s position is central to much research in artificial intelligence and cognitive science. The fundamental hypothesis of such classical or symbolic research is that cognition is computation, that thinking is the rule-governed manipulation of symbols that represent the world. Thus the key role of cognition is planning: on the basis of perceptual information, the mind builds a model of the world, and uses this model to plan the next action to be taken. This has been called the sense–think–act cycle (Pfeifer & Scheier, 1999). Classical cognitive science has studied the thinking component of this cycle (What symbols are used to represent the world? What rules are used to manipulate these symbols? What methods are used to choose which rule to apply at a given time?), often at the expense of studying sensing and acting (Anderson et al., 2004; Newell, 1990).

A consequence of the sense–think–act cycle is diminished environmental control over humans. “Among the multitude of animals which scamper, fly, burrow and swim around us, man is the only one who is not locked into his environment. His imagination, his reason, his emotional subtlety and toughness, make it possible for him not to accept the environment, but to change it” (Bronowski, 1973, p. 19). In modern cognitivism, mind reigns over matter.

Ironically, cognitivism undermines the view that cognition makes humans special. If cognition is computation, then certain artifacts might be cognitive as well. The realization that digital computers are general purpose symbol manipulators implies the possibility of machine intelligence (Turing, 1950): “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted” (p. 442).

However, classical cognitivism is also subject to competing points of view. A growing number of researchers are concerned that the emphasis on planning using representations of the world is ultimately flawed. They argue that the mind is not a planner, but is instead a controller that links perceptions with actions without requiring planning, reasoning, or central control. They would like to replace the “sense–think–act” cycle with a “sense–act” cycle in which the world serves as a model of itself. Interestingly, this approach assumes that human intelligence is largely controlled by the environment, perhaps making us less special than we desire. The purpose of this book is to explore this alternative view of cognition.

1.2 RODENTS THAT ENGINEER WETLANDS

1.2.1 Castor canadensis

Is our intelligence special? Perhaps the divide between ourselves and the animals is much smaller than we believe. Consider, for example, the North American beaver, Castor canadensis. A large rodent, the beaver typically lives in a small colony of four to eight animals (Müller-Schwarze & Sun, 2003). Communication amongst animals in a colony is accomplished using scent marks, a variety of vocalizations, and the tail slap alarm signal (Figure 1-1).

[Figure 1-1]

In his classic study, Lewis Morgan noted that “in structural organization the beaver occupies a low position in the scale of mammalian forms” (Morgan, 1868/1986, p. 17). Nonetheless, the beaver is renowned for its artifacts. “Around him are the dam, the lodge, the burrow, the tree-cutting, and the artificial canal; each testifying to his handiwork, and affording us an opportunity to see the application as well as the results of his mental and physical powers” (p. 18). In short, beavers — like humans — construct their own environments.

[Figure 1-2]

To dam a stream (Figure 1-2), a colony of beavers will first prop sticks on both banks, pointing them roughly 30° upstream (Müller-Schwarze & Sun, 2003). Heavy stones are then moved to weigh these sticks down; grass is stuffed between these stones. Beavers complete the dam by ramming poles into the existing structure that sticks out from the bank. Poles are aligned with stream flow direction. The dam curves to resist stream flow; sharper-curved dams are used in faster streams. Beavers add mud to the upstream side of the dam to seal it. The dam is constantly maintained, reinforced and raised when water levels are low; it is made to leak more when water levels become too high (Frisch, 1974). Dams range in height from 20 cm to 3 m, and a large dam can be several hundred metres in length. A colony of beavers might construct more than a dozen dams to control water levels in their territory.

1.2.2 The Cognitive Beaver?

How does such a small, simple animal create these incredible structures? Morgan attributed intelligence, reasoning, and planning to the beaver. For instance, he argued that a beaver’s felling of a tree involved a complicated sequence of thought processes, including identifying the tree as a food source and determining whether the tree was near enough to the pond or a canal to be transported. Such thought sequences “involve as well as prove a series of reasoning processes indistinguishable from similar processes of reasoning performed by the human mind” (Morgan, 1868/1986, pp. 262-263). If this were true, then the division between man and beast would be blurred. Later, though, we will consider the possibility that even though the beaver is manipulating its environment, it is still completely governed by it. However, we must also explore the prospect that, in a similar fashion, the environment plays an enormous role in controlling human cognition.

1.3 THE INSTINCTS OF INSECTS

To begin to reflect on how thought or behaviour might be under environmental control, let us consider insects, organisms that are far simpler than beavers.

1.3.1 The Instinctive Wasp

Insects are generally viewed as “blind creatures of impulse.” For example, in his assessment of insect-like robots, Moravec (1999) notes that they, like insects, are intellectually damned: “The vast majority fail to complete their life cycles, often doomed, like moths trapped by a streetlight, by severe cognitive limitations.” These limitations suggest that insects are primarily controlled by instinct (Hingston, 1933), where instinct is “a force that is innate in the animal, and one performed with but little understanding” (p. 132).

This view of insects can be traced to French entomologist J.H. Fabre. He described a number of experiments involving digger wasps, whose nests are burrows dug into the soil (Fabre, 1915). A digger wasp paralyzes its prey, and drags it back to a nest. The prey is left outside as the wasp ventures inside the burrow for a brief inspection, after which the wasp drags the prey inside, lays a single egg upon it, leaves the burrow, and seals the entrance.

While the wasp was inspecting the burrow, Fabre moved its paralyzed prey to a different position outside the nest (Fabre, 1919). This caused the wasp to unnecessarily re-inspect the burrow. If Fabre moved the prey once more during the wasp’s second inspection, the wasp inspected the nest again!

In another investigation, Fabre (1915) completely removed the prey from the vicinity of the burrow. After conducting a vain search, the wasp turned and sealed the empty burrow as if the prey had already been deposited. “Instinct knows everything, in the undeviating paths marked out for it; it knows nothing, outside those paths” (Fabre, 1915, p. 211).

1.3.2 Umwelt and Control

The instincts uncovered by Fabre are not blind, because some adaptation to novel situations occurs. At different stages of the construction of a wasp’s nest, researchers have damaged the nest and observed the ensuing repairs. Repaired nests can deviate dramatically in appearance from the characteristic nest of the species (Smith, 1978). Indeed, environmental constraints cause a great deal of variation of nest structure amongst wasps of the same species (Wenzel, 1991). This would be impossible if wasp behaviour were completely inflexible.

However, this flexibility is controlled by the environment. That is, observed variability is not the result of modifying instincts themselves, but rather the result of how instincts interact with a variable environment. Instincts are elicited by stimuli in the sensory world, which ethologist Jakob von Uexküll called the umwelt. The umwelt is an “island of the senses”; agents can only experience the world in particular ways because of limits, or specializations, in their sensory apparatus (Uexküll, 2001). Different organisms can therefore live in the same environment and yet exist in different umwelten, because they experience that world in different ways. The notion of umwelt is similar to the notion of affordance in ecological theories of perception (Gibson, 1966, 1979).

Some have argued that the symbolic nature of human thought and language makes us the only species capable of creating our own umwelt (Bertalanffy, 1967). “Any part of the world, from galaxies inaccessible to direct perception and biologically irrelevant, down to equally inaccessible and biologically irrelevant atoms, can become an object of ‘interest’ to man. He invents accessory sense organs to explore them, and learns behavior to cope with them” (p. 21). While the animal umwelt restricts them to a physical universe, “man lives in a symbolic world of language, thought, social entities, money, science, religion, art” (p. 22).

1.4 PAPER WASP COLONIES AND THEIR NESTS

Experiments have revealed the instincts of solitary wasps (Fabre, 1915, 1919) and other insects. However, social insects can produce artifacts that may not be so easily rooted in instinct. This is because these artifacts are examples of collective intelligence (Goldstone & Janssen, 2005; Kube & Zhang, 1994; Sulis, 1997). Collective intelligence requires coordinating the activities of many agents; its creations cannot be produced by one agent working in isolation. Might paper nests show how social insects create and control their own environment?

1.4.1 Colonies and Their Nests

For example, the North American bald-faced hornet (Dolichovespula maculata, which is not a hornet but instead a wasp) houses its colony in an inverted, pear-shaped “paper” nest. A mature nest can be as large as a basketball; an example nest is illustrated in Figure 1-3.

[Figure 1-3]

[Figure 1-4]

Inside the outer paper envelope is a highly structured interior (Figure 1-4). There are several horizontal layers, each consisting of a number of hexagonal combs. A layer of combs is attached to the one directly above it, so that the layers hang as a group from the top of the nest. Each comb layer is roughly circular in shape, and its diameter and shape match the outer contours of the nest. The walls of each comb are elongated, some being longer than others.

“In the complexity and regularity of their nests and the diversity of their construction techniques, wasps equal or surpass many of the ants and bees” (Jeanne, 1996, p. 473). There is tremendous variability in the size, shape, and location of social wasp nests (Downing & Jeanne, 1986). A nest may range from having a few dozen cells to having in the order of a million; some wasps build nests that are as high as one metre (Theraulaz, Bonabeau, & Deneubourg, 1998). As well, nest construction can involve the coordination of specialized labour. For example, Polybia occidentalis constructs nests using builders, wood-pulp foragers, and water foragers (Jeanne, 1996).

1.4.2 Scaling Up

The large and intricate nests constructed by colonies of social wasps might challenge simple, instinctive explanations. Such nests are used and maintained by a small number of wasp generations for just a few months. Greater challenges to explaining nest construction emerge when we are confronted with other insect colonies, such as termites, whose mounds are vastly larger structures built over many years by millions of insects. Such nests “seem evidence of a master plan which controls the activities of the builders and is based on the requirements of the community. How this can come to pass within the enormous complex of millions of blind workers is something we do not know” (Frisch, 1974, p. 150). Let us now turn to considering termite nests, and ask whether these structures might offer evidence of cognitive processes that are qualitatively similar to our own.

1.5 THE TOWERS OF TERMITES

1.5.1 Termite Mounds

Termites are social insects that live in colonies that may contain as many as a million members. Though seemingly similar to bees and ants, they are actually more closely related to cockroaches. In arid savannahs termites are notable for housing the colony in distinctively shaped structures called mounds. One of the incredible properties of termite mounds is their size: they can tower over the landscape. While a typical termite mound is an impressive 2 metres in height, an exceptional one might be as high as 7 metres (Frisch, 1974)!

Termite mounds are remarkable for more than their size. One issue that is critical for the health of the colony is maintaining a consistent temperature in the elaborate network of tunnels and chambers within the mound. This is particularly true for some species that cultivate fungus within the mound as a source of food. Some termites regulate temperature by building a vertical ventilation system that enables hot air to rise and leave the structure via “chimneys” on the top of the mound.

Other termites adopt a different architectural solution to the problem of thermoregulation. The “compass” or “magnetic” termite Amitermes laurensis is found in Australia. Its mounds are wedge-shaped, much longer than they are wide. Amazingly, the mound of this termite is oriented so that its long walls face north and south, and its narrow walls face east and west.

It has been suggested that the shape and orientation of the mound built by Amitermes laurensis helps protect the colony from the heat of the sun. When the sun rises in the east, only a single narrow wall is in direct sunlight. The west wall is actually shaded, and is insulated by the core of the mound (which is solid for this species of termite). In the morning, colony members will tend to congregate on the western side of the mound. Later in the day, when the sun is in the west, it is the eastern wall that is shaded, and the colony congregates on that side of the mound. The mound’s wedge shape is such that at the hottest part of the day, when the sun is overhead, only its thin top edges are exposed to direct heat. The wider northern and southern walls are never in direct sunlight, and have been shown to be in the order of 8° cooler than the others. As well, because they constitute the greatest area of the mound’s exterior, they provide a means for heat to dissipate outward. In short, Amitermes laurensis designs its mound for maximal coolness in severely hot conditions.

The shape of a “magnetic mound” also provides a solution to the problem of maintaining air quality for the colony. A large number of insects within a colony consume high volumes of oxygen, and produce high volumes of carbon dioxide. As a result, there is a pressing need to replenish the air within the mound. This must be accomplished via pores in the mound’s outer wall. However, the effectiveness of these pores is reduced by moisture during the wet season. The shape of the magnetic mound results in a high ratio of wall surface area to mound volume. This increases the area over which air exchange is possible, helping the mound to “breathe,” even during the wet season.

1.5.2 The Thinking Termite?

How do such tiny, simple animals as termites coordinate their activities to produce such amazing structures? Do termites exhibit intelligence that is similar in kind to our own? “One of the challenges of insect sociobiology is to explain how such colony-level behavior emerges from the individual decisions of members of the colony” (Jeanne, 1996, p. 473). A wide variety of explanations has been proposed in the literature, ranging from rational insects, to nest construction governed by blind instinct, to colonies as intelligent superorganisms, to nest building controlled by the dynamic environment. In the following pages we will briefly consider a number of these different theories. We will see how environmental control may still be responsible for the construction of the elaborate nests of social insects. However, we must then consider whether a similar theory is applicable to human intelligence.

1.6 THE RATIONAL INSECT?

1.6.1 Computational Theory of Mind

The dominant perspective in cognitive science is the representational theory of mind (Fodor, 1975; Newell, 1980; Pylyshyn, 1984). According to this theory, external behaviour is guided or mediated by the contents of internal representations. Such representations are symbolic structures that have associated content, in the sense that they stand for states of affairs in the external world.

In the representational theory of mind, perceptual mechanisms are presumed to provide links between the external world and internal symbols. Thinking or cognition is the rule-governed manipulation of these internal representations in order to acquire new knowledge (e.g., by inference, by problem solving, by planning). The products of thinking are then responsible for producing behaviours, or actions upon the world. Thus, the computational theory of mind involves a continuous sense–think–act cycle (Pfeifer & Scheier, 1999). “Representation is an activity that individuals perform in extracting and deploying information that is used in their further actions” (Wilson, 2004, p. 183).

1.6.2 Are Insects Representational?

We have seen that social insects like termites and wasps are capable of monumental feats of engineering. What sort of intelligence guides the construction of such large, complex insect nests? “The problem before us is a very old one. Are the lower animals blind creatures of impulse or are they rational beings?” (Hingston, 1929). Can the computational theory of mind be applied to non-human agents? Can the nest of a colony of social insects be explained as the result of representational thought processes? Some accounts of insect behaviour, including nest construction, appeal to the notion of the rational insect.

Consider Major Richard Hingston, who was a doctor, a member of the 1924 expedition to Mount Everest, and an avid naturalist. He published accounts of his observations of insects, and of his experiments on their behaviour, including studies of nest building by solitary wasps (Hingston, 1929). While he was open to the notion that some insect behaviour was guided by instinct (Hingston, 1933), he also believed that insects were more rational or intelligent than many of us would expect: “So far as I can judge from the evidence given, we are not justified in making barriers between insect and human mentality. I mean we have no right to regard their minds as being totally different in kind” (p. 183).

Hingston’s work was a direct reaction against studies demonstrating that insects were governed by blind instinct (Fabre, 1915, 1919). Four decades later, Hingston’s naïve notion of the “rational insect” had evolved into one that was more sophisticated and representational.

For example, ethologist W.H. Thorpe reviewed studies of nest building in a variety of animals, including wasps, and proposed that nest construction behaviours were controlled by an ideal releaser (Thorpe, 1963). He did not describe the properties of ideal releasers in detail, but it is clear that to Thorpe they were representations of intended products. “The bird must have some ‘conception’ of what the completed nest should look like, and some sort of ‘conception’ that the addition of a piece of moss or lichen here and here will be a step towards the ‘ideal’ pattern, and that other pieces there and there would detract from it” (p. 22).

Thorpe’s (1963) notion of the ideal releaser is consistent with heuristics used in models of problem solving that were being published around the same time (Newell & Simon, 1961). For instance, Newell and Simon’s general problem solver (GPS) would maintain a representation of a goal state, compute differences between it and the current state of a problem, and then use these differences to solve the problem. Actions would reduce the differences between the current and goal states, and then differences would be recomputed until the problem was solved. The goal state in GPS served exactly the same role as the ideal releaser proposed by Thorpe.
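To make this difference-reduction scheme concrete, here is a minimal sketch of GPS-style means–ends analysis, written in Python for readability (the robots later in this book are programmed in NXC). The operators and feature names are invented for illustration; states are simply sets of features.

    # GPS in miniature: keep a representation of the goal state, compute
    # the differences between it and the current state, and apply an
    # operator that reduces a difference, recursing on any preconditions
    # the operator needs. All features and operators are hypothetical.

    OPERATORS = {
        # name: (preconditions, features the operator adds)
        "gather_moss":  (set(),            {"moss_at_site"}),
        "attach_moss":  ({"moss_at_site"}, {"moss_in_nest"}),
        "weave_lining": ({"moss_in_nest"}, {"nest_lined"}),
    }

    def achieve(state, goal, depth=0):
        """Remove the differences between state and goal, one feature at
        a time; Thorpe's 'ideal releaser' plays the role of the goal."""
        for feature in goal - state:
            for name, (pre, adds) in OPERATORS.items():
                if feature in adds:
                    state = achieve(state, pre, depth + 1)  # satisfy preconditions
                    state |= adds                           # perform the action
                    print("  " * depth + f"apply {name} -> {sorted(state)}")
                    break
        return state

    achieve(set(), {"nest_lined"})

Handing the same loop a different goal set yields a different construction sequence; in this scheme, recomputing and reducing differences is all that “planning” amounts to.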

1.7 INSECT AS SUPERORGANISM?

1.7.1 The Intelligent Whole

In spite of Hingston’s evidence, the view that insects were rational was not endorsed by many researchers. They were instead interested in explaining how simple, non-rational beings were capable of impressive feats (such as nest building) that appeared to be intelligent. One approach was to attribute intelligence to a colony as a whole, not to its individual members. In the modern literature, this has become known as swarm intelligence (Bonabeau & Meyer, 2001; Sharkey, 2006; Tarasewich & McMullen, 2002).

1.7.2 Colonial Intelligence

The roots of swarm intelligence can be found in early-twentieth-century entomology (Wheeler, 1911). William Morton Wheeler argued that biology had to explain how organisms coped with complex and unstable environments. For Wheeler, “an organism is a complex, definitely coordinated and therefore individualized system of activities, which are primarily directed to obtaining and assimilating substances from an environment, to producing other similar systems, known as offspring, and to protecting the system itself and usually also its offspring from disturbances emanating from the environment” (p. 308).

Wheeler used this rather broad definition of “organism” because he proceeded to propose an unusual idea: that a colony of ants, considered as a whole, could also be classified as an organism. “The animal colony is a true organism and not merely the analogue of the person” (Wheeler, 1911, p. 310). He then argued that insect colonies, considered as wholes, demonstrated each and every one of the properties listed in his definition of organism. These colonies became known as superorganisms.

Wheeler recognized that a superorganism’s properties emerged from the actions of its parts (Wheeler, 1926). However, Wheeler also argued that higher-order properties could not be reduced to properties of the superorganism’s components.

Wheeler defended the notion that higher-order regularities could not easily be reduced to lower-order properties by applying ideas that were also in vogue in Gestalt psychology (Koffka, 1935; Köhler, 1947). Gestalt psychologists realized that many perceptual experiences could not be captured by appealing to the properties of their components. Instead, they proposed a number of perceptual laws that applied to the whole, and attempted to explain these higher-order principles by appealing to the notion of an organized perceptual field. Wheeler’s case for a unique level of superorganismic properties closely paralleled the Gestalt position: “The unique qualitative character of organic wholes is due to the peculiar non-additive relations or interactions among their parts. In other words, the whole is not merely a sum, or resultant, but also an emergent novelty, or creative synthesis” (Wheeler, 1926, p. 433).

Many modern theories in a number of different disciplines exploit the notion of emergence (Holland, 1998; Johnson, 2001; Sawyer, 2002). Holland argues that such modern theories, in order to be scientific, must exhibit a number of different properties. First and foremost, the higher-order patterns that emerge must be recognizable and recurring. These patterns are persistent at higher-order levels of analysis, in the sense that the higher-order pattern can remain even when the components underlying the phenomenon change. They are usually found in systems that are dynamic (i.e., that change over time) and adaptive (i.e., that change in response to demands). Most importantly, emergent patterns can be explained by appealing to laws or rules that explain how they are supported by the characteristics of system components. As noted by Wheeler in the quote in the preceding paragraph, the laws that explain emergence make explicit “the peculiar non-additive relations or interactions” between parts, and are often expressed in some formalism that can be related to dynamical systems theory (Port & Van Gelder, 1995).

1.8 THE ULTIMATE DEMOCRACY

1.8.1 Emerging Problems

Wheeler’s notion of the superorganism, and the organizational principles of Gestalt psychology, are two examples of holism (Sawyer, 2002). Such theories recognize that the regularities governing a whole system cannot be easily reduced to a theory that appeals to the properties of the system’s parts. For example, Gestalt psychology attacked psychological behaviourism because of its reductionist approach to explaining psychological phenomena. Unfortunately, holism has had little success in being accepted as scientific. “Holism is an idea that has haunted biology and philosophy for nearly a century, without coming into clear focus” (Wilson & Lumsden, 1991, p. 401).

Gestalt psychology flourished in Germany from 1920 until just before World War II (Henle, 1977). By the end of the war, this school of thought had come to the end of its influence. Many of its students had been killed in the war, or had been displaced to a variety of different countries. Attempts to reignite Gestalt psychology in the United States failed because Gestalt ideas were in conflict with the then dominant school of behaviourism. One problem with Gestalt psychology was that it had difficulty being accepted as a form of emergentism. “Emergentism is a form of materialism which holds that some complex natural phenomena cannot be studied using reductionist methods” (Sawyer, 2002, p. 2). Gestalt psychologists had difficulty in providing materialist accounts of such concepts as perceptual fields, cortical currents, or isomorphic relations between objects in the world and objects in the mind (Henle, 1977). Ironically, Gestalt psychology was ahead of its time. Formal and algorithmic accounts that have appeared in some subdomains of modern cognitive science, such as connectionism (Bechtel & Abrahamsen, 2002; Dawson, 2004; Rumelhart & McClelland, 1986) and dynamical systems theory (Port & Van Gelder, 1995), appear to offer approaches that could have converted the holism of Gestalt psychology into a more causally grounded emergentism (e.g., Sawyer, 2002).

Wheeler’s notion of the superorganism has enjoyed an enduring popularity (Detrain & Deneubourg, 2006; Queller & Strassmann, 2002; Seeley, 1989; Wilson & Sober, 1989). However, in terms of biology, this idea suffered a fate similar to that of Gestalt psychology. The problem with the view that colonies are organisms is that it is very difficult to provide a scientific account of the laws that govern them. Where do the laws come from? How do laws governing the whole emerge from the actions of individual parts? Wheeler recognized that such questions posed “knotty problems,” but was ultimately unable to provide adequate solutions to them (Evans & Evans, 1970). The result was that entomologists rejected the notion of the superorganism (Wilson & Lumsden, 1991).

1.8.2 From Whence Organization?

Rejecting the superorganism, however, does not remove the need for explaining how complex structures such as nests could be constructed by social insects. If the colony itself was not intelligent, then what was the source of amazing structures like termite mounds?

The alternative was to claim that colonial intelligence could be reduced to the actions of individual colony members. This view was championed by French biologist Etienne Rabaud, who was a contemporary of Wheeler. “His entire work on insect societies was an attempt to demonstrate that each individual insect in a society behaves as if it were alone” (Theraulaz & Bonabeau, 1999). Biologist E. O. Wilson has adopted a similar position. “It is tempting to postulate some very complex force distinct from individual repertories and operating at the level of the colony. But a closer look shows that the superorganismic order is actually a straightforward summation of often surprisingly simple individual responses” (Wilson & Lumsden, 1991, p. 402). In short, swarm intelligence wasn’t real — it was just in the eye of the beholder. And the coordination of individuals might be accomplished via the environment, as is considered in the following pages.

1.9 PROGRAMS FOR NEST CONSTRUCTION

1.9.1 An Inherited Program

It has been proposed that wasps do not inherit an ideal releaser, but instead inherit a program for nest construction. One example of such a program is part of a general account of wasp behaviour (Evans, 1966; Evans & West-Eberhard, 1970). In this model, a hierarchy of internal drives serves to release behaviours. For instance, high-level drives might include mating, feeding, and brood rearing. Such drives set in motion lower-level sequences of behaviour, which in turn might activate even lower-level behavioural sequences. For example, a brood-rearing drive might activate a drive for capturing prey, which in turn activates a set of behaviours that produces a hunting flight. So, for Evans, a program is a set of behaviours that are produced in a particular sequence, where the sequence is dictated by the control of a hierarchical arrangement of drives.

However, these behaviours are also controlled by releasing stimuli that are external to the wasp. In particular, one behaviour in the sequence is presumed to produce an environmental signal that serves to initiate the next behaviour in the sequence. For instance, in Evans’ (1966) model, the digging behaviour of a wasp produces loosened soil, which serves as a signal for the wasp to initiate scraping behaviour. This behaviour in turn causes the burrow to be clogged, which serves as a signal for clearing behaviour. Having a sequence of behaviours under the control of both internal drives and external releasers provides a balance between rigidity and flexibility: the internal drives serve to provide a general behavioural goal, while variations in external releasers can produce variations in behaviours (e.g., resulting in an atypical nest structure when nest damage elicits a varied behavioural sequence). “Each element in the ‘reaction chain’ is dependent upon that preceding it as well as upon certain factors in the environment (often gestalts), and each act is capable of a certain latitude of execution” (Evans, 1966, p. 144).
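As a rough sketch of what distinguishes such a reaction chain from a fixed internal script, consider the following toy simulation (again in Python; the cues and behaviours are our own stand-ins for the digging example above). Each behaviour is released by an environmental cue, and performing it leaves behind the cue that releases the next behaviour:

    # A reaction chain in the spirit of Evans (1966): the internal drive
    # gates the whole chain, but the environment selects each behaviour.
    # The cue and behaviour names below are hypothetical.

    RELEASERS = {
        # environmental cue: (behaviour it releases, cue the act leaves behind)
        "bare_soil":      ("dig",    "loosened_soil"),
        "loosened_soil":  ("scrape", "clogged_burrow"),
        "clogged_burrow": ("clear",  "open_burrow"),
    }

    def run_chain(cue, drive_active=True):
        while drive_active and cue in RELEASERS:
            behaviour, cue = RELEASERS[cue]   # act, and thereby change the world
            print("released:", behaviour)
        return cue

    run_chain("bare_soil")              # dig -> scrape -> clear
    run_chain("clogged_burrow")         # damage re-enters the chain mid-sequence
    run_chain("bare_soil", False)       # no active drive, no behaviour at all

Note that a degree of flexibility comes for free: if the nest is damaged (the second call), behaviour resumes at whatever cue the environment presents, with no replanning and no memory of the original sequence.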

1.9.2 Testing the Theory

Smith (1978) has provided compelling evidence for one component of Evans’ model: the external control of specific behaviours used by wasps to construct nests. Smith examined a particular mud wasp that digs a hole in the ground, lines the hole with mud, and then builds an elaborate funnel on top of the hole to keep parasites out. The funnel is a long straight tube, to which is added a marked curve, to which is attached a large bell-shaped opening. The existence of a mud-lined hole appears to be the stimulus that causes the wasp to build the straight tube. Smith demonstrated this by creating a hole in the curve that the wasp had added to the straight tube. This caused the wasp to start creating a brand new tube out from the hole in the curve, resulting in a second funnel structure being built on top of the first.

Importantly, and consistent with Evans’ (1966) model, external stimuli are not the sole elicitors of behaviour (Baerends, 1959). Baerends studied digger wasps that provided for several nests at the same time. The nest is begun with a single egg and a single caterpillar as prey. Later, the wasp returns to the nest to inspect larval development. Depending upon the size of the larva, and upon the amount of food remaining in the nest, the wasp will hunt for more prey to be added to the nest. Baerends (1959) found that the state of the nest would affect behaviour only when the wasp made its first inspection. If he added or removed food after the inspection, the foraging behaviour of the wasp was not altered accordingly, even though the wasp was exposed to the new situation inside the nest when it returned to it with new prey. In other words, its foraging was not merely under the control of the nest-as-stimulus; foraging was also controlled by the internal state of the wasp during its first inspection. Nonetheless, in models like those of Evans (1966), the environment plays a key role in controlling the nest-building behaviour of insects.

1.10 THE ENVIRONMENT AS PROGRAM

1.10.1 A Complex Environment

Evans’ (1966) theory of nest construction by solitary insects can easily be extended to insect societies. It has long been recognized that an insect colony provides a much more complex environment (i.e., a much richer set of stimuli) than would be available to asocial insects. The social insect “must respond not only to all the stimuli to which it reacted in its presocial stage but also to a great number of additional stimuli emanating from the other members of the society in which it is living” (Wheeler, 1923, p. 503). One obvious sense in which this environment is more complex involves the signals used by one colony member to communicate with others. Such signals include movements, such as the dance that one honeybee performs to communicate the location of a food source to others (Frisch, 1966, 1967, 1974), as well as chemical signals (Queller & Strassmann, 2002). “The members of an insect society undoubtedly communicate with one another by means of peculiar movements of the body and antennæ, by shrill sounds (stridulation) and by odors” (Wheeler, 1923, p. 506).

However, there is another sense in which an insect colony provides its individuals with a complex and dynamic environment that affects their behaviour, one that holds even when there is no direct communication between colony members via actions, sounds, or scents.

Consider wasps adding to a nest. Much of this process is parallel, because more than one wasp works on the nest at the same time, as shown in Figure 1-5. Imagine an individual working on this nest, guided (as a working hypothesis) by a nest-building program (Evans, 1966). This wasp will perform some action governed by its internal state and by some triggering characteristic of the nest. At some point the wasp leaves to obtain new building materials. In its absence, the appearance of the nest will change because of the activities of other colony members. As a result, the behaviour performed by the returning wasp may be quite different from what it would have been had the nest remained unaltered in its absence. In short, different colony members can communicate indirectly with one another by changing the nest, and thereby changing the available releasing stimuli.

Figure 1-5. More than one wasp working on a nest at the same time. (Image not reproduced.)

1.10.2 Stigmergy

French zoologist Pierre-Paul Grassé explained the mound-building behaviour of termites by appealing to the notion of indirect communication by changing the environment (Theraulaz & Bonabeau, 1999). Grassé demonstrated that the termites themselves do not coordinate or regulate their building behaviour, but that this is instead controlled by the mound structure itself. The term stigmergy was coined for this type of behavioural control (Grassé, 1959). The word stigmergy comes from the Greek stigma, meaning sting, and ergon, meaning work, capturing the notion that the environment is a stimulus that causes particular work (behaviour) to occur. Researchers describe quantitative stigmergy as involving stimuli that differ in intensity, but not quality, such as pheromone fields (Deneubourg & Goss, 1989). These stimuli modify the probability of individual responses. In contrast, qualitative stigmergy involves control of a variety of behaviours using a set of qualitatively different environmental stimuli (Theraulaz & Bonabeau, 1995).
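The two flavours of stigmergy can be contrasted with a small sketch. In the following Python fragment, a quantitative rule modulates the probability of a single response with stimulus intensity, while a qualitative rule maps distinct stimulus configurations onto distinct behaviours. The stimulus names, the linear probability scaling, and the rule table are assumptions made for exposition, not details from the cited papers.

    import random

    # Quantitative stigmergy: one kind of stimulus (e.g., a pheromone)
    # varies in intensity, and intensity scales response probability.
    def quantitative_response(pheromone_level, saturation=10.0):
        """Return True if the agent responds to this stimulus intensity."""
        return random.random() < min(pheromone_level / saturation, 1.0)

    # Qualitative stigmergy: qualitatively different stimulus
    # configurations release qualitatively different behaviours.
    QUALITATIVE_RULES = {
        "soil_pillar": "deposit_soil_on_pillar",
        "two_pillars_nearby": "start_arch",
    }

    def qualitative_response(stimulus):
        """Return the behaviour released by this stimulus, if any."""
        return QUALITATIVE_RULES.get(stimulus)

In the quantitative case a stronger stimulus makes the same response more likely; in the qualitative case a different stimulus makes a different response occur.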

1.11 STIGMERGY AND THE SYNTHETIC APPROACH

1.11.1 The Synthetic Approach

Stigmergy appeals to an environmental control structure that coordinates the performances of a group of agents. Part of its appeal is that it explains how very simple agents create extremely complex products, particularly when the final product (e.g., a termite mound) is extended in space and time far beyond the life expectancy of the organisms that create it. As well, it accounts for the building of large, sophisticated nests without the need for a complete blueprint and without the need for direct communication amongst colony members (Bonabeau et al., 1998; Downing & Jeanne, 1988; Grassé, 1959; Karsai, 1999; Karsai & Penzes, 1998; Karsai & Wenzel, 2000; Theraulaz & Bonabeau, 1995). One reason that stigmergy can produce such complex products is that the relationship between the agents’ behaviours and the environmental stimuli that elicit them is highly non-linear. As a result, it is very difficult to take a finished product, such as a completed wasp nest, and reverse engineer it to decipher the specific order of operations that produced it. It is equally difficult to look at a simple set of rules, such as a nest program, and to predict with any accuracy the final product that these rules could create for a colony of insects.

For this reason, stigmergy is often studied using a synthetic methodology (Braitenberg, 1984; Dawson, 2004; Pfeifer & Scheier, 1999). That is, researchers propose a small group of rules that are under stigmergic control, set these rules in motion in a computer simulation, and observe the products that the simulation creates.

1.11.2 Wasp Nest Examples

As an example, consider how the synthetic approach has been used to study nest construction by social paper wasps. A nest for such wasps is a comb: a lattice of cells, each created from a hexagonal arrangement of walls. When a large nest is under construction, where will new cells be added? This is a key issue, because the building activities of a large number of wasps must be coordinated in some manner to prevent the nest from growing predominantly in one direction. Theraulaz and Bonabeau (e.g., 1999) used the synthetic approach to answer this question.

Theraulaz and Bonabeau (1999) proposed that an individual wasp’s decision about where to build a new cell wall is driven by the number of already completed walls that it can perceive. If three walls of a cell already exist at some location on the nest, this serves as a stimulus that causes a wasp to add another wall there with high probability. If only two walls exist, this too is a stimulus to add a wall, but one that produces the action with a much lower probability.
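This pair of rules can be rendered as a short Python function. The particular probability values below are illustrative assumptions; Theraulaz and Bonabeau’s simulations set their own parameters.

    import random

    P_THREE_WALLS = 0.9  # assumed high probability of building
    P_TWO_WALLS = 0.1    # assumed much lower probability

    def maybe_add_wall(walls_present):
        """Return True if a wasp adds a wall at a site surrounded by
        this many already completed walls."""
        if walls_present == 3:
            return random.random() < P_THREE_WALLS
        if walls_present == 2:
            return random.random() < P_TWO_WALLS
        return False  # other configurations release no building here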

The crucial characteristic of this approach is that it is stigmergic: when either of these rules results in a cell wall being added to the nest, the nest structure changes. This changes the appearance of the nest, which in turn changes the locations where walls will be added next. Theraulaz and Bonabeau (1999) created a nest-building simulation that used only these two rules, and demonstrated that it created simulated nests that were very similar in structure to real wasp nests.

In addition to adding cells laterally to the nest, wasps must also lengthen walls that already exist, in order to accommodate the growth of the larva that lives inside each cell. Karsai (1999) proposed another stigmergic model of this aspect of nest building. His rule involved an inspection of the relative difference between the longest and the shortest wall of a cell. If the difference was below a threshold value, the cell was left untouched. However, if the difference exceeded the threshold, this was a stimulus that caused a wasp to add material to the shortest wall. Karsai used a computer simulation to demonstrate that this simple stigmergic model provided an accurate account of the three-dimensional growth of a wasp nest over time.
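Karsai’s rule can be sketched in the same fashion. The threshold value and the particular definition of “relative difference” below are assumptions made for illustration.

    THRESHOLD = 0.2  # assumed relative-difference threshold

    def maybe_lengthen(wall_lengths, increment=1.0):
        """If the relative difference between the longest and shortest
        wall of a cell exceeds the threshold, add material to the
        shortest wall; otherwise leave the cell untouched."""
        longest, shortest = max(wall_lengths), min(wall_lengths)
        if longest > 0 and (longest - shortest) / longest > THRESHOLD:
            i = wall_lengths.index(shortest)
            wall_lengths[i] += increment  # build up the shortest wall
        return wall_lengths

As in the cell-addition rules, each application of this rule alters the stimulus (the profile of wall lengths) that will confront the next wasp to inspect the cell, so the nest itself coordinates the colony’s building.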

1.12 STIGMERGY AND THE PARABLE OF THE ANT

1.12.1 Intelligence and Stigmergy

Stigmergy may explain how insects can be master architects while still possessing a lower intelligence than humans. It has certainly become an important concept in cognitive science and robotics (Goldstone & Janssen, 2005; Holland & Melhuish, 1999; Kube & Zhang, 1994; Sulis, 1997). However, researchers in cognitive science have been reluctant to apply stigmergy to explain the behaviours of higher organisms, including man (Susi & Ziemke, 2001). This is an important oversight. Consider beaver dams. Morgan (1868/1986) tacitly explained dam characteristics by appealing to the thoughts of beavers. He ignored the possibility, raised by stigmergy, that dams themselves play a large role in guiding their own development and intricate structure.

The importance of the environment was a theme of early theoretical work in artificial intelligence (Simon, 1969). In Simon’s famous parable of the ant, observers recorded the path traveled by an ant along a beach. How might we account for the complicated twists and turns of the ant’s route? Cognitive scientists tend to explain complex behaviours by invoking complicated representational mechanisms (Braitenberg, 1984). In contrast, Simon noted the path might result from simple internal processes reacting to complex external forces — the various obstacles along the natural terrain of the beach. “Viewed as a geometric figure, the ant’s path is irregular, complex, hard to describe. But its complexity is really a complexity in the surface of the beach, not a complexity in the ant” (Simon, 1969, p. 24).

A similar point shows how robot building can inform cognitive science. Braitenberg (1984) has argued that when we observe interesting behaviour in a system, we tend to ignore the environment and explain all of the behaviour by appealing to internal structure. “When we analyze a mechanism, we tend to overestimate its complexity” (Braitenberg, 1984, p. 20). He suggested that an alternative approach, in which simple agents (such as robots) are built and then observed in environments of varying complexity, can provide cognitive science with more powerful, and much simpler, theories. Such synthetic theories take advantage of the fact that not all of the intelligence must be placed inside an agent.

1.12.2 Are Mammals Stigmergic?

Might stigmergy account for the intelligence of “higher” organisms? Consider the beaver. Morgan (1868/1986) recounts the story of a colony whose dam threatened a railway line running past the beaver pond. The track master had a hole cut through the middle of the dam to protect the track. “As this was no new experience to the beavers, who were accustomed to such rents, they immediately repaired the breach” (p. 102). The breaking and repairing of the dam went on repeatedly, 10 or 15 times, at this site. This story describes the tenacity of beavers, but it is perhaps more revealing when considered as a mammalian variant of Fabre’s (1919) experiments revealing the blind instincts of digger wasps.

Beavers might respond to releasing stimuli in a fashion consistent with the preceding accounts of insect behaviour. The sound of running water brings them immediately to repair the dam; as a result, trappers would lure their prey by cutting holes into existing dams (Morgan, 1868/1986). Water levels around the lodge are stimuli either to raise the dam (to conserve water) or to make it leakier, lowering water levels (Frisch, 1974). Researchers have had some success using environmental features to predict where dams will be constructed (Barnes & Mallik, 1997; Curtis & Jensen, 2004; Hartman & Tornlov, 2006). All of these observations suggest that it is plausible to hypothesize that stigmergy might have an important role in theories of the initiation, development, and final characteristics of beaver infrastructure.

If it is at least plausible that stigmergy guides some mammals, then is it possible that it might apply to theories of human cognition as well? A number of theories in modern cognitive science have opened the door to considering this idea.

1.13 EMBODIMENT AND POSTHUMANISM

1.13.1 Posthumanism

Our everyday experience of self-consciousness is inconsistent with the view of the mind held by modern cognitive scientists (Varela, Thompson, & Rosch, 1991). While we have a compelling sense of self, cognitive theories reject it. Many researchers believe that the mind is modular, incorporating a large number of independent machines that are isolated from consciousness (Fodor, 1983). We can be conscious of mental contents, but have no awareness of the mechanisms that represent them (Pylyshyn, 1981, 1984). Our sense of holistic consciousness is an illusion built from the activity of multiple, independent sources (Dennett, 1991, 2005). Entire theories of cognition begin with the foundational assumption that the mind is a society of simple, unconscious agents (Minsky, 1985, 2006).

These theoretical trends have resulted in a view that is known as posthumanism (Hayles, 1999). “The posthuman view configures human being so that it can be seamlessly articulated with intelligent machines. In the posthuman, there are no essential differences or absolute demarcations between bodily existence and computer simulation, cybernetic mechanism and biological organism, robot teleology and human goals” (p. 3). According to Hayles, posthumanism results when the content of information is more important than the physical medium in which it is represented, when consciousness is considered to be epiphenomenal, and when the human body is simply a prosthetic. Posthumanism is rooted in the pioneering work of cybernetics (Ashby, 1956, 1960; MacKay, 1969; Wiener, 1948), but it also flourishes in modern cognitivism.

The posthumanism that has developed from cybernetics and cognitive science denies our intelligence a special status. It proposes not only that our thinking cannot be differentiated from that of animals, but also that it cannot be differentiated from that of machines.

1.13.2 Embodiment

Interestingly, a nascent theme in posthumanism (e.g., Hayles, 1999, Chapter 8) is the blurring not only of the distinction between different types of intelligence, but also of the distinctions between mind, body, and world. However, part of this blurring is motivated by the realization that the language of information can be applied equally easily to states of the world and to mental states. Hayles notes that one of the major implications of posthumanism is the resulting “systematic devaluation of materiality and embodiment” (p. 48). One of Hayles’ goals is to resist the notion that “because we are essentially information, we can do away with the body” (p. 12).

One approach to achieving this goal is to explore the ideas in the new field of embodied cognitive science (Agre, 1997; Brooks, 1999; Clark, 1997, 1999; Gibbs, 2006; Pfeifer & Scheier, 1999). The theories of embodied cognitive science recognize that the individual can only be studied by considering his or her relationship to the environment, and that this relationship depends crucially upon embodiment (our physical structure) and situation (our sensing of the world). In other words, it places far more emphasis on the environment than has traditionally been found in the computational theories of modern cognitive science. It takes seriously the idea that Simon’s (1969) parable of the ant might also be applicable to human cognition. “A man, viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself” (Simon, 1969, p. 25).

This raises two different kinds of questions. The first, explored in Chapter 2, is the degree to which higher-order cognitive phenomena in humans might be explained by such notions as embodiment, situation, and stigmergy. The second concerns the research methodologies required to study human cognition from the perspective of embodied cognitive science.

1.14 STIGMERGY AND CLASSICAL COGNITION

1.14.1 Classical Control

It is important to recognize that endorsing new ideas, such as the stigmergic control of cognition, does not require the complete abandonment of the representational theory of mind or of classical cognitive science. Indeed, stigmergy has a long history in prototypical models of higher-order human cognition.

Classical cognitive science views cognition as information processing (Dawson, 1998). An explanation in classical cognitive science thus requires that researchers propose a cognitive architecture (Pylyshyn, 1984; VanLehn, 1991). A cognitive architecture requires that the basic nature of an information processor’s symbols or representations be detailed. The set of rules that can manipulate these symbols must also be stipulated.
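These first two components can be given a toy illustration in Python, in which the “symbols” are strings held in a working memory and the “rules” rewrite that memory. The particular symbols and rules below are invented purely for exposition.

    # Symbols: tokens held in a working memory.
    working_memory = {"goal_rear_brood", "no_prey"}

    # Rules: (condition symbols, symbols to add when the condition holds).
    RULES = [
        ({"goal_rear_brood", "no_prey"}, {"goal_hunt"}),
        ({"goal_hunt"}, {"prey_captured"}),
    ]

    # Repeatedly apply any rule whose condition matches working memory.
    changed = True
    while changed:
        changed = False
        for condition, additions in RULES:
            if condition <= working_memory and not additions <= working_memory:
                working_memory |= additions
                changed = True

    print(sorted(working_memory))

Notice that nothing in this sketch says which rule should fire first when several match at once; that gap points to the further component discussed next.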

However, these two components are not enough to complete the architecture.
