A modified version of the paper submitted to "Semiotica"

The origin and evolution of signs

Alexei Sharov
Department of Entomology, Virginia Tech, Blacksburg, Virginia 24061, sharov@vt.edu
1. Introduction

Signs are basic elements of human thought and communication. When we see a sign, it makes us think about something beyond the visual image of the sign itself. Signs are links between thoughts that integrate them into a system which we call mind or intelligence. To understand the nature and origin of mind, we need to analyze the origin and evolution of signs. But this evolution started long before humans; thus, we need to expand the definition of a sign beyond its human use. Peirce (1955) made the first attempt to build a non-anthropocentric theory of signs. He defined a sign as a triadic relationship between a sign vehicle (representamen), an object, and an interpretant, which is a representation of the object invoked by the sign vehicle (Fig. 1).

Figure 1. A Peircean sign is a triadic relationship between a sign vehicle (smoke), an object (fire), and an interpretant (the idea of fire in the head of the observer).

The major problem in non-anthropocentric semiotics is to define an observer (interpreter). A simple solution is to consider any physical interaction as interpretation. Craters on the moon can be viewed as signs of comets and asteroids that fell on the moon in the past. According to Deely (1992), a bone of a fossil animal is a sign vehicle that points to the original animal, and the rock formation in which the bone was fossilized is the interpreter. This point of view may be attractive because of its generality. But do we gain any understanding of human intelligence from expanding semiotics to the boundaries of physics? Physics does not consider the use of objects and thus cannot explain why signs are useful to interpreters. It describes the movement of a rock falling from a cliff in the same way as the movement of a rock thrown by a hunter to kill a deer. The movement of sound waves in the air is described by physics irrespective of the content of the sound message. Thus, physics does not help us understand the nature of signs.
There were several attempts to link semiotics with biology. Uexküll (1940) developed a theory of meaning which considered animals as interpreters of their environment. He called this subjectively interpreted environment ‘Umwelt’ (German for ‘environment’). Uexküll (1940) considered only living organisms to be interpreters. He pointed out that the ability of animals to interpret the world helps them perform their functions. However, his conception of usefulness (adaptation) was not based on the theory of natural selection, which he denied.
The theory of zoosemiotics (Sebeok 1972) contributed to a further integration of biology and semiotics. Signs used by animals (visual, acoustic, and chemical) are processed by their nervous systems in the same way as in humans. Thus, it was natural to extend semiotic notions from human semiotics to zoosemiotics (Sebeok 1972). Further studies indicated that interpretation of signs does not necessarily require a nervous system. Krampen (1981) suggested that plants are capable of interpreting signs although they have no nervous system. Individual cells in any organism use tiny receptors to recognize sign molecules in their environment (Hoffmeyer 1996). Finally, each cell has a genetic library in the form of DNA molecules that are copied and interpreted. It is obvious that interpretation of signs in the human mind differs from that in a bacterial cell. Human interpretation yields a concept or idea which can be true or false, whereas a cell simply responds to signs with actions that cannot be evaluated as true or false. To distinguish between these cases I will use the term ‘signal’ for those signs that are interpreted as actions, and ‘proper sign’ for those signs that are interpreted as concepts. The term ‘proper sign’ is already used for signs with an intermediate degree of motivation, but it is very hard to avoid term overlaps in semiotics because all possible adjectives to the word ‘sign’ are already taken. Eco (1976) considers the difference between signals and proper signs so important that he excludes signals, stimuli, and codes (e.g., the genetic code) from the category of signs. He suggests that semiotics should study proper signs only, and that the theory of information should study signals. However, in order to study the evolution of signs we need a word that embraces all phenomena that involve interpretation. Thus, I prefer to use the word ‘sign’ in a broad sense.
But understanding how simple signals could evolve into proper signs is a real problem, which is discussed below. The origin of signals is another problem. The central question is what can be considered a minimal interpreter. There is a subtle difference between the reaction of oxygen with sulphuretted hydrogen and the building of a protein according to an mRNA sequence. The former process is just a chemical reaction, whereas the latter is interpretation. It is not the complexity of the reaction that makes the difference, but the usefulness of protein synthesis for the cell. If the oxidation of sulphuretted hydrogen is performed by a bacterial cell, then this reaction becomes meaningful too. Pattee (1995) suggested that all signs are parts of a large self-referent organization which he calls ‘semantic closure’. Thus, interpretation of signs requires a self-referencing loop. But the functional definition of life also requires a self-referencing loop (Rosen 1991). This coincidence indicates a close relationship between life and signs. An interpreter should be alive in order to interpret signs, and it cannot be alive unless it interprets signs. This leads to the central idea of biosemiotics: that life is communication, or semiosis (Sharov 1992, Hoffmeyer 1992, 1996). Producing offspring is a form of communication with future generations, which was called ‘vertical semiosis’ (Hoffmeyer 1996). An organism is a message that carries information on how to survive in a specific environment as well as how to adapt to new environments. Communication among coexisting organisms (e.g., chemical, acoustic, behavioral) is horizontal semiosis. In this article I attempt to reconstruct the processes that may have led to the origin and evolution of signs. The evolution of signs is described as a continuous process that started from simple autocatalytic systems and ended with human language.
Although the details of this process apparently will never be determined, it is still possible to point out the major steps in the evolution of signs and suggest hypothetical mechanisms that could be involved in each step. I consider the following three steps in the evolution of signs: the origin of signs (vertical semiosis), the development of horizontal semiosis, and the emergence of proper signs.

2. The Origin of Signs

Without an interpreter, a sign is just a physical object. Thus, signs should have originated simultaneously with interpreters, i.e., living organisms. The origin of life is traditionally discussed from the physical and chemical points of view. But I would like to emphasize the semiotic aspect of the origin of life. The first step in the origin of life was the emergence of autocatalytic systems (Fig. 2A,B). It is likely that polymerization was the kind of autocatalysis that eventually developed into complex life forms. Polymer structure may be 1-, 2-, or 3-dimensional, which corresponds to a linear thread, a surface (membrane), or a solid crystal, respectively. Random disturbances (mechanical or chemical) may break a polymer into smaller portions that continue their growth. Although 3-dimensional crystals can be found in some living cells, it is unlikely that they were involved in the origin of life because it is very difficult to break a solid crystal. However, linear polymers and membranes can easily be broken into smaller pieces that continue growing at their ends or edges (Fig. 2A,B). Membranes may form spheres and grow by inserting monomers from the side (Morowitz 1992). The major argument in favor of this hypothesis is that all currently known living organisms are built from linear polymers and membranes.

Figure 2. Hypothetical scheme of the origin of life.
A and B: self-reproducing linear polymers and membranes; C: symbiosis of linear polymers and membranes; D: the number of components increases; E: complementary duplication, folding, and catalysis; F: regulated catalysis; G: coded catalysis (genetic code).

Autocatalytic systems are usually considered not alive because they do not have coded information about their structure (von Neumann 1966, Rosen 1991). The idea that coding is essential for biological evolution was suggested by von Neumann (1966). However, Kauffman (1995) showed that autocatalytic systems can evolve without coding. Thus, there is no reason to set the boundary of life at the moment when the genetic code appeared. The dynamics of autocatalytic systems resembles the population dynamics of any other living organisms. In both cases there is a life cycle, consumption of resources, and exponential (or logistic, if resources are limited) population growth. Any organism can be viewed as a giant autocatalytic molecule. Thus, I consider autocatalytic systems alive; but they represent the most primitive form of life. Autocatalytic systems transfer information about their structure and function to offspring systems, which is vertical semiosis, according to Hoffmeyer (1996). But these systems have no genetic code, and it may not be clear how information can be transferred without coding. I suggest the following explanation of heredity without coding. Each autocatalytic system can be viewed as an attractor in phase space. If the state of an offspring system is located within the basin of the parent's attractor, then it will eventually converge to the same attractor. Thus, offspring systems can inherit their attractor from the parent system (Fig. 3). A mutation, then, is a leap to another self-reproducing attractor.

Figure 3. Inheritance of the parent's attractor, and mutations, in systems without a genetic code.
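The idea of heredity without coding can be illustrated with a toy dynamical system (all parameters are hypothetical and chosen only for illustration): a double-well system with two attractors stands in for two self-reproducing regimes. Offspring start as noisy copies of the parent's state; most relax back to the parent's attractor (inheritance), while rare large deviations end up in the other attractor ('mutation').

```python
import random

def settle(x, steps=400, dt=0.05):
    """Relax state x under dx/dt = x - x**3 (stable attractors at +1 and -1)."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return round(x)          # -1 or +1: which attractor the state converged to

random.seed(1)
parent = 1.0                 # the parent system sits in the +1 attractor
offspring = [settle(parent + random.gauss(0, 0.6)) for _ in range(1000)]
inherited = offspring.count(1) / len(offspring)
mutated = offspring.count(-1) / len(offspring)
print(f"inherited parent's attractor: {inherited:.2f}; 'mutated': {mutated:.2f}")
```

The vast majority of offspring states fall inside the parent's basin and converge to the same attractor; only large perturbations cross the basin boundary at x = 0 and 'mutate'.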
An autocatalytic system is a sign vehicle and its interpreter simultaneously; thus, the semiotic process can be characterized as self-interpretation or semantic closure (Pattee 1995). One may argue that interpretation should always include the possibility of misinterpretation. In this case, an autocatalytic system faces the choice of remaining in the same attractor or leaving it, which means death or mutation. I think that death can be considered the most primitive misinterpretation. Some elements of self-interpretation can be found in contemporary living organisms. For example, the expression of a gene can be partially regulated by the same gene. Human consciousness is based on self-awareness, which is also a kind of self-interpretation. Perception of an object changes the state of the observer, and hence affects the way the observer perceives the object. Thus, any observation includes self-interpretation, and it is always partially subjective. Self-referent signs (with semantic closure) represent the highest level of semiotic systems because they are not interpreted by any external interpreter. Any organism viewed as a whole system is semantically closed. But some parts of an organism may be mostly signs (e.g., DNA molecules) and others may be mostly interpreters (e.g., ribosomes). The dynamics (development) of an organism is its self-interpretation. The interpretant of an organism is the same organism at the next stage of development. After a sequence of interpretations, the life cycle becomes closed, and the organism returns to the same stage but in multiple copies because reproduction takes place somewhere in the life cycle. Objects that interact with an autocatalytic system become signs (signals) to which the system responds with specific actions. For example, when a self-reproducing polymer encounters a monomer, it binds the monomer at its end (or edge). Incorporation of monomers is selective, which can be viewed as a form of recognition.
But in this case, recognition is not separated from the action. Systems with a higher organization are able to separate recognition from action. For example, living organisms usually examine their food before attempting to eat it. Any organism in a growing population is characterized by a reproductive value, which shows the average contribution of this organism to future generations (Fisher 1930). For example, eggs have a smaller reproductive value than adults because adults can easily produce multiple eggs, but it takes a long time for an egg to develop into an adult. Thus, a population that started from 10 eggs will be smaller in numbers than a population that started at the same time from 10 adults. Reproductive value is similar to the notion of present value in economics, which quantifies the ability of an object (e.g., a machine) to generate inflation-adjusted income (Clark 1976). An organism invests its energy into various kinds of production processes, trying to maximize the total value of all products. For example, there is a trade-off between producing more offspring and the longevity of the parent. When an insect female lays an egg, her own reproductive value decreases. To maximize the total value, egg production should stop when the decrease in the reproductive value of the parent exceeds the value of an egg (Roff 1992). Some insects invest all their energy into the production of eggs and then die; other insects lay few eggs at a time but continue laying over a long period. Organisms respond to components of the environment according to the contribution of these components to their reproductive value. An organism searches for resources that have a positive value and avoids dangerous objects that have a negative value. Values exist only in semantically closed (i.e., autocatalytic) systems. Thus, it is semantic closure that makes organisms interested in the outer world. A sign is everything that has significance for the system, i.e., value (either positive or negative).
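The stopping rule for egg laying (Roff 1992) can be put into numbers. In this sketch all values are hypothetical: each egg adds a fixed amount of value, while the cost to the parent's own reproductive value rises with each egg laid, and production stops exactly when the marginal cost exceeds the value of one more egg.

```python
# Toy illustration (all numbers hypothetical): a parent keeps laying eggs
# while one more egg is worth more than it costs the parent's own
# residual reproductive value.
egg_value = 1.0

def parental_cost(n):
    """Hypothetical cost of laying the n-th egg; rises with exhaustion."""
    return 0.2 * 1.3 ** n

n = 0
while egg_value > parental_cost(n + 1):   # stop when marginal cost exceeds egg value
    n += 1
print(f"optimal clutch size: {n} eggs")
```

With these made-up numbers the sixth egg still pays for itself (cost ≈ 0.97 < 1.0) while the seventh does not (cost ≈ 1.25), so laying stops at six.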
Recognition of a sign means that the system realizes the value of an object and uses it according to that value.

3. Development of Horizontal Semiosis

The next step in the evolution of signs is the cooperation of several simple autocatalytic systems, which results in the establishment of new hierarchical levels. Sub-systems become integrated with each other via communication, which is horizontal semiosis (Hoffmeyer 1996). The development of new hierarchical levels in self-reproducing systems was called a metasystem transition (Turchin 1977). There are two possible ways of cooperation that may result in the integration of subsystems into a larger system: homogeneous and heterogeneous (Fig. 4). Homogeneous cooperation is established between similar systems and is always symmetrical, at least at the beginning. For example, several identical organisms may form a colony. Homogeneous cooperation usually develops between related organisms (e.g., the progeny of one parent) because of kin selection. The symmetry of homogeneous cooperation may eventually be broken, and components differentiate. For example, cells in a multicellular organism eventually become differentiated and perform different functions. Termites are differentiated into several castes: a queen and king, workers, soldiers, etc.

Figure 4. Two possible mechanisms of metasystem transition.

Turchin (1977) thought that homogeneous cooperation is the only mechanism of metasystem transition. However, there is also heterogeneous cooperation, or symbiosis, in which several dissimilar systems become united. For example, fungi and algae may form lichens; eukaryotic cells resulted from a symbiosis of several types of prokaryotic cells; termites have symbiotic bacteria in their gut that help them digest wood; parasitic nematodes cooperate with bacteria that help to kill their host. Heterogeneous cooperation is always asymmetric, i.e., cooperating components perform different functions from the very beginning.
Apparently, symbiosis originates from commensalism, which is beneficial for one component and neutral for the other; it is unlikely that an interaction is beneficial for both components from the very beginning. But eventually this relationship evolves into a real symbiosis. Apparently, the first symbiosis occurred when linear self-reproducing polymers (they could have been polysaccharides or polypeptides) colonized membrane spheres (Fig. 2C). It is not clear whether they colonized the surface or the cavity of the membrane spheres. Linear polymers might improve their self-assembly on the membrane, but it is unlikely that membranes initially had any benefit from linear polymers. Eventually, however, some linear polymers became able to catalyze the production of molecules that can be inserted into membranes. At this point the cooperation circle became closed (semantic closure) and the metasystem transition was completed. The process results in the emergence of a cell with an autocatalytic network inside. When two autocatalytic systems cooperate, they produce resources for each other. Resources, as we have seen in the previous section, are primitive signs. Thus, cooperation is a semiotic relationship. Each component has a double interpretation in such a system. First, it is self-reproducing on its own (self-interpretation), and second, it produces signs (resources) that are interpreted by the other component. The first kind of interpretation is local because it occurs inside the component, but the second kind of interpretation is global because it integrates components into a larger system. Each component benefits from both local and global interpretation. Although global interpretation takes additional energy, the cost of communication (the reduction in the rate of direct self-reproduction) is compensated by additional benefits from global interpretation (facilitation of self-reproduction by other components).
The major obstacle to cooperation is its eventual evolutionary instability. Let us consider cooperating species A and B that produce resources for each other and both benefit from this cooperation (Fig. 5). Species A may mutate into a selfish species A1 which uses the resources produced by species B without providing help to species B in return. As a result, the positive loop of cooperation between species A and B breaks down. It is no longer beneficial for species B to produce resources for species A because these resources are intercepted by A1. Thus, species B stops producing resources for A, and the cooperation loop is broken.
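The collapse of the cooperation loop can be reproduced with a minimal replicator sketch (all parameters hypothetical): cooperators pay a cost to generate a shared benefit, the selfish mutant collects the benefit without paying, and the cooperator share of the population shrinks toward zero.

```python
# Minimal sketch of a cheater invasion (hypothetical parameters):
# the selfish mutant A1 reaps the shared benefit without paying the cost.
b, c = 3.0, 1.0                      # shared benefit and private cost of cooperating
coop, cheat = 0.99, 0.01             # population shares; a rare selfish mutant appears

for _ in range(50):                  # 50 generations of replicator dynamics
    shared = b * coop                # benefit produced by cooperators, enjoyed by all
    coop *= 1 + shared - c           # cooperators collect the benefit but pay the cost
    cheat *= 1 + shared              # the mutant only collects
    total = coop + cheat
    coop, cheat = coop / total, cheat / total

print(f"cooperator share after 50 generations: {coop:.4f}")
```

Because the cheater's growth factor always exceeds the cooperator's by exactly the cost c, the mutant takes over no matter how rare it starts, illustrating why unprotected cooperation is evolutionarily unstable.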
Figure 5. Evolutionary instability of cooperation between systems A and B.

If B-individuals that were produced with the help of A-individuals could be marked ‘For use by species A only’, then mutants like A1 would not be able to break the cooperation loop. Thus, cooperation is successful only if it includes specific constraints on the use of system components (e.g., property rights). The problem of constraints on the use of elements can be solved by encapsulation, i.e., by the physical connection of components. A hard connection (e.g., via covalent bonds) may considerably reduce the degrees of freedom of system components so that they can no longer perform their functions. Another mechanism of encapsulation is enclosure within a boundary without physical attachment, which is a soft connection (Fig. 6). A soft connection gives more freedom to components, so that they can perform the same functions as in their solitary state.

Figure 6. Encapsulation of components makes cooperation evolutionarily stable because of group selection.

Encapsulation provides security against parasitic mutations that are beneficial for an isolated component but detrimental for the entire system. If such a mutation appears in a non-encapsulated system, it will spread and may become destructive on a large scale. But if the system is encapsulated, this mutation cannot spread beyond the boundary of the system. If it is harmful for the system, it will kill the system or reduce its reproduction rate. This kind of group selection favors cooperation (Fig. 6). It is important that encapsulation last long enough for the mutation to disappear. For example, cells may coalesce if they come into contact; and if they coalesce too often, then a parasitic mutation has a chance to spread over the entire population. Another example of insufficient encapsulation was given by Maynard Smith (1964), who analyzed the evolution of altruistic behavior in groups of animals.
If groups are not sufficiently isolated, the evolution of altruism is not possible. Eigen and Schuster (1979) developed the model of a hypercycle, which is a system of several mutually cooperating self-reproducing components. Their model assumed that self-reproduction is based on digitally coded information similar to the genetic code. However, hypercycles that existed at the origin of life could not have had digitally coded information, simply because decoding requires very complicated systems that could not appear from a random fluctuation. It is much more likely that the components of the first hypercycles had no digital coding at all. For example, a membrane with an autocatalytic network of linear polymers inside can be viewed as a hypercycle. Obviously, hypercycles are formed via metasystem transition. Hypercycles (without coding) have several advantages compared with a simple (non-hierarchical) autocatalytic system. Small changes in a simple autocatalytic system are likely to be fatal because they draw the system away from the self-reproducing attractor. All parts of the system are tightly interconnected, and a change in one part causes malfunction of all other parts. However, in hierarchical systems, each component is relatively autonomous. Thus, small changes in one component have a limited effect on the function of other components. Also, because the relationships between components are cooperative, other components may even compensate for the loss of function in a mutant component. As a result, the fitness landscape becomes smoother, which increases the effectiveness of natural selection (Kauffman 1995). This is the first advantage of hypercycles. Second, hierarchical systems have a more effective way of gathering information. In simple self-reproducing systems, there are no internal communication networks, and natural selection is the only way of gathering information. Each bit of information literally costs lives.
In a cell with numerous components, the destruction of one component is not fatal for the entire cell. Moreover, it can be beneficial because it indicates specific environmental conditions, and the cell can use the loss of one component as a signal for adaptive changes in other components. Components that are easily modified by external or internal changes may become specialized as sensors. When destroyed, they can be recycled and restored within the system. As a result, cells become homeostatic systems that can withstand disturbances. Third, some components of a hierarchical system may gradually evolve towards increasing control over other components. This process eventually leads to the separation of the ‘genotype’ that controls the ‘phenotype’. Simple autocatalytic systems had to combine the function of a sign with the function of the interpreter. In a hypercycle, these functions can be separated. The genotype specializes in the sign function, and the phenotype specializes in interpretation. Initial genotypes were not digitally coded, and they probably controlled other components via energy regulation. The code developed much later, and its development required a long period of natural selection. A hypothetical mechanism of the evolution of the genetic code is shown in Fig. 2E-G. Waddington (1957) suggested that the role of information in biological systems is to switch between dynamic trajectories at unstable bifurcation points. He viewed genes as railroad switches that direct cell development to specific differentiation pathways. A signal causes small distortions of the epigenetic landscape that switch the trajectory between several attractors at bifurcation points (Sharov 1992). Small variations in the information component of the system (e.g., a point mutation) are amplified by the epigenetic landscape and may result in large changes of the phenotype.
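Waddington's railroad-switch metaphor can be sketched with a toy bistable system (the dynamics and parameters are hypothetical): a tiny bias h applied near the unstable point x = 0 is amplified by the 'landscape' into one of two very different stable end states, the analogue of two differentiation pathways.

```python
# A toy epigenetic switch: a small signal h applied at the unstable point
# x = 0 is amplified into one of two stable end states (x near +1 or -1).
def develop(h, x=0.0, dt=0.1, steps=500):
    """Integrate dx/dt = x - x**3 + h from the unstable point x = 0."""
    for _ in range(steps):
        x += dt * (x - x**3 + h)
    return x

print(f"bias +0.01 -> {develop(+0.01):+.2f}")   # settles near +1
print(f"bias -0.01 -> {develop(-0.01):+.2f}")   # settles near -1
```

A bias of 0.01 is amplified roughly a hundredfold in the final state, which is the sense in which a point mutation, acting at a bifurcation, can produce a large phenotypic change.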
Although this model explains well the effect of genes on the phenotype of organisms, it does not explain the process that led to the development of epigenetic landscapes with such specific properties. Waddington's conception is based on the assumption that the genotype and phenotype are separated. This assumption is the major obstacle to understanding the origin of life. Systems in which genotype and phenotype are not separated (e.g., simple autocatalytic systems) are not considered alive, but systems with a differentiated genotype and phenotype are too complex for their origin to be explained by a random aggregation of molecules. Thus, I believe we need to shift from Waddington's (1957) conception of biological information to Pattee's (1995) conception of semantic closure. Information is not necessarily a small signal that is amplified. Instead, a sign should be defined as an object that is involved in a semantic closure and therefore has significance (i.e., value) for some interpreter.

4. From Signals to Proper Signs

All signs considered in the previous two sections were signals because they were immediately interpreted as actions. In this section I discuss how signals could eventually evolve into proper signs that are interpreted as concepts. The major difference between actions and concepts is that concepts are evaluated as true or false, whereas actions are evaluated according to their usefulness. For example, our concept of snow allows us to check whether the object that we see is really snow. The statement ‘this is snow’ is true if the object is really snow. Otherwise the statement is false (e.g., if it is fake snow made of foam). But the attraction of moths to a pheromone is considered useful rather than true or false. The gap between actions and concepts can be filled if we use the pragmatic approach to the definition of truth that was initially developed by Peirce (1955) and James (1975).
According to these authors, a statement is true if a person trusts it and uses it for his or her needs. If two statements lead to the same practical consequences, then they have the same meaning and differ merely in their verbal form. Peirce (1955) wrote that we are not interested in a metaphysical absolute truth simply because nobody has access to it. This view seems to contradict mathematical logic, which assumes that truth values are objective and absolute. However, the contradiction can be avoided if we consider mathematical logic as a simplified model of human language. Mathematical logic can be applied only to very narrow areas of human language that satisfy the following conditions: the number of objects is finite and relatively small (otherwise, truth values cannot be determined effectively); all people agree on the names of objects and predicates, as well as on the procedures that determine truth values; and these procedures are simple, affordable, and do not change the properties of objects. These are ideal conditions that usually do not hold in language practice. Thus, the use of mathematical logic in real debates is very limited. Usually, opponents are familiar with different sets of objects, they use different procedures for testing predicates, and it is not feasible to test predicates on all objects. The reason why pragmatic logic may be useful for biosemiotics is that it can explain the transition from signals to proper signs. An action can be considered true if it is the best action available in the given situation. This means that the true action increases the value of an organism (its rate of self-replication) more than any other available action. For example, if an organism has two options, to enter diapause or to continue active development, then entering diapause is the true action in unfavorable conditions in which dormancy is more beneficial than active development.
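The diapause example can be made concrete as an expected-value calculation (all payoffs are hypothetical): the 'true' action is whichever option yields the higher expected reproductive value, given the probability of winter that a signal such as the photoperiod suggests.

```python
# Hypothetical payoffs to reproductive value for each (action, weather) pair.
payoff = {('diapause', 'winter'): 0.8, ('diapause', 'mild'): 0.4,
          ('develop',  'winter'): 0.0, ('develop',  'mild'): 1.0}

def true_action(p_winter):
    """Return the action with the highest expected payoff given P(winter)."""
    def expected(action):
        return (p_winter * payoff[(action, 'winter')]
                + (1 - p_winter) * payoff[(action, 'mild')])
    return max(('diapause', 'develop'), key=expected)

print(true_action(0.9))   # short days: winter is likely -> diapause
print(true_action(0.1))   # long days: winter is unlikely -> develop
```

The same machinery shows why organisms tolerate occasionally false signals: a signal only needs to shift the expected payoff toward the better action often enough, not in every single case.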
The more often an organism selects true actions, the more competitive it becomes under natural selection. Thus, selection favors those organisms that are able to classify the environment (or situations) correctly so that each time they select a true action. For this purpose, they can use signals that had no value before. By connecting these signals with specific actions, an organism creates values for these signals. For example, the photoperiod has no direct value for insects, but because a shortening day is correlated with seasonal changes, insects started using the photoperiod as a signal for entering diapause, which helps them survive the winter. A signal is true in a given situation if it is interpreted as a true action in that situation. Signals that are true in all possible situations are the most reliable because they guarantee maximum benefits for an organism in any environment. However, organisms often use signals that are not always true. For example, some plants start flowering in mild winters, as if they ‘think’ that it is already spring. An organism can tolerate false actions if they are not fatal and not frequent. In most cases, one signal is not sufficient to classify situations correctly. Thus, organisms often compare outputs from various sensors in order to select an action. In this case, recognition becomes separated from perception. Perception is simply getting output from various receptors, but recognition requires logic to integrate the information that comes from the receptors. For example, photoreceptors in an eye generate an image (a bitmap), which is then recognized by a neural network. The contribution of an individual photoreceptor to the result of recognition is relatively low. Because of this high redundancy, organisms can select true actions based on incomplete or ambiguous information. If the number of possible actions and their combinations is large, then it becomes extremely difficult to determine which kind of activity is optimal for an organism.
The search for the best solution may take more time than the life span of an organism. In this case, selection favors a strategy that is a trade-off between the speed and the quality of reaction. The simplest way to find a satisfactory solution is to plan activity hierarchically. First, an organism selects large blocks of activity; then it may specify components in each block and sub-components in each component. For example, if I go shopping, I can walk, drive a car to the shop, or take a bus. After I have selected the car, I need to select the route. Then I follow street signs, select pedals and buttons to push, select muscles that should contract, etc. Most of this activity is unconscious, but there is a hierarchical selection process. Actions are located at the very bottom of this hierarchy, and at each higher level we find a goal and a concept to which this goal is applied. In this example, I apply the top goal of buying food to the concept of myself. This concept includes my schedule for the coming week, which shows that the best time to buy food is now. Also, this concept includes the information that I have a car and that I live relatively far from the store. The best way of getting to the store then appears to be by car. Then I take the concept of the town (a map) and select the best route. The concept of a car helps me to select the best driving strategy, and so on. The decision tree unfolds by sequential analysis of concepts and goals. A parent goal is applied to the corresponding concept and yields a lower-level goal together with another concept. Often we cannot plan all the details of our activity beforehand. Then we use our senses to identify objects and situations that correspond to our concepts. For example, from the car I may see a red light. I recognize it as a traffic light that regulates car movement at the intersection. The traffic light itself does not determine my actions. But it determines which actions are allowed and which are not.
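The shopping example can be sketched as a toy goal hierarchy (the goals and their decomposition are hypothetical): each parent goal unfolds into sub-goals, with concrete actions toward the leaves of the tree.

```python
# A toy goal hierarchy for the shopping example; each node is
# (goal, list_of_subgoals), and leaves are close to concrete actions.
plan = ('buy food', [
    ('get to the store', [
        ('drive the car', [('follow the route', []),
                           ('obey traffic signs', [])]),
    ]),
    ('select groceries', [('use the shopping list', [])]),
])

def unfold(node, depth=0):
    """Flatten the goal tree into (depth, goal) pairs, parents first."""
    goal, subgoals = node
    rows = [(depth, goal)]
    for sub in subgoals:
        rows += unfold(sub, depth + 1)
    return rows

for depth, goal in unfold(plan):
    print('  ' * depth + goal)
```

The point of the hierarchy is that a satisfactory plan is found by committing to large blocks first and refining only the chosen branch, instead of searching the whole space of action combinations.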
My concept of the red light is true if it helps me avoid collisions (or fines) at the intersection. Obeying the traffic light does not guarantee that my trip will be useful, but at least I will not get into trouble at the intersection. Primitive concepts consist of a number of empirically developed rules associated with specific objects or situations. For example, the concept of a cow specifies how to keep, feed, and milk it. As we learn more about cows, our concept of a cow may include additional information, e.g., how to use cows in biotechnology. Advanced concepts may include dynamic models that can be used for optimization. The ability to use the same object or situation for various purposes can serve as a criterion of proper signs. Apparently, genetic information never reached the level of concepts. Only animals with well-developed nervous systems may be capable of using concepts. It is likely that the concept of ‘self’ was the first concept to appear in evolution. Its main function is to prioritize an organism’s actions. Higher animals use concepts for external objects, e.g., resources, enemies, refuges, etc. The higher the level of development of concepts, the more flexible the animal’s behavior and the more difficult it is to mislead the animal. A concept is true if it recommends the best course of action for each possible goal. For example, the concept of snow specifies that it is better to ski than to walk, and that it is better to plow snow away from your driveway before you drive your car. The snow itself does not imply any actions, but if you have goals, it helps to select a better way of reaching them. Various nations have different activities and traditions associated with snow; thus the concept of snow may differ between cultures. According to Martin (1986), the Eskimo language has more words for ‘snow’ than English, which indicates that snow plays an important role in Eskimo life.
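The claim that a primitive concept is "a number of empirically developed rules" suggests a minimal model: a concept as a table mapping goals to recommended actions, using the snow example from the text. The entries are invented; a culture with different activities would simply fill the table differently.

```python
# Hedged illustration of a primitive concept as a goal-to-action table
# (the snow example from the text). All entries are invented.
snow_concept = {
    "cross_the_field": "ski_rather_than_walk",
    "leave_by_car":    "plow_the_driveway_first",
}

def recommend(concept, goal):
    """A concept is true insofar as its recommended action serves the goal."""
    return concept.get(goal, "no_recommendation")

print(recommend(snow_concept, "cross_the_field"))   # ski_rather_than_walk
print(recommend(snow_concept, "build_an_igloo"))    # no_recommendation
```

Snow itself implies no actions; only when a goal is supplied does the concept yield a recommendation, which is the sense in which the concept, not the object, is true or false.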
The statement ‘snow is white’ is true if the concept of white is true every time the concept of snow is true. This statement may be helpful if we use indicators of snow to perform activities associated with color. For example, at night we cannot see the color of the ground cover well, but we can recognize it as snow by other characteristics (fluffiness, coldness, etc.). In this case we can predict that the surface of the ground will be white in the daytime. This may be important, for example, for selecting the color of soldiers’ clothing.

Statements are usually classified into analytic and synthetic (Brody 1973). The truth of synthetic statements is checked experimentally, whereas analytic statements may be proved without any experiments. However, this distinction depends on the logical structure of concepts. A person from the countryside may believe that snow is always white, so that whiteness is part of the definition of snow. For such a person, the statement ‘snow is white’ is analytic (a tautology). However, a person who lives in a big city knows that snow is often dark when it is dirty. For him, the statement ‘snow is white’ would be synthetic, indicating that the snow is fresh.

Concepts form a hierarchical classification tree. For example, a doctor first determines that a child is sick but does not yet know the kind of disease. Then he may narrow the diagnosis to viral diseases. Finally, he may conclude that the child has measles. At each level of classification, specific treatment actions can be recommended; but the best actions can be selected when the diagnosis is most specific. At the root of the classification of concepts we find the concept of an object. An object is a generic concept that can be applied to almost anything. However, the concept of an object is richer than we might expect: an object can be named, it has coordinates in space and time, and it may be composed of sub-units that are themselves objects and can be counted.
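The hierarchical classification of concepts, with the generic concept of an object at the root, can be sketched as a small tree using the diagnosis example from the text. The node names and tree shape are invented for illustration.

```python
# Illustrative sketch of a hierarchical classification tree of concepts, with
# the generic concept 'object' at the root (the doctor's diagnosis example).
# Node names and structure are invented for illustration.
TREE = {
    "object":        ["sick_child"],
    "sick_child":    ["viral_disease", "bacterial_disease"],
    "viral_disease": ["measles", "chickenpox"],
}

def path_to(concept, root="object"):
    """Return the chain of refinements from the root down to a concept."""
    if concept == root:
        return [root]
    for parent, children in TREE.items():
        if concept in children:
            return path_to(parent, root) + [concept]
    return []   # concept not present in the tree

print(path_to("measles"))
# ['object', 'sick_child', 'viral_disease', 'measles']
```

Each step down the chain narrows the diagnosis and so permits more specific recommended actions, which is the sense in which the most specific concept supports the best action.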
All other concepts are derived from the concept of an object. This approach is widely used in object-oriented programming, in which all classes of objects are built from an initial class called ‘object’. The semiotic theory of Peirce (1955) works at the level of proper signs. The sign vehicle (a footprint) is perceived and then recognized as a footprint. This means that it becomes associated in our mind with the concept of a footprint (the immediate object). However, the concept of a footprint is internally connected to the concept of a human. Thus, the footprint becomes automatically associated with the presence of a human.

5. Conclusions

The principal idea of biosemiotics is that life is communication, and the content of communication is how to live, i.e., how to communicate. The history of life is characterized by the increasing complexity of communication. I suggest that the most important characteristic of a sign is its value (or significance, usefulness), which exists only in semantically closed (i.e., living) systems. Even simple autocatalytic systems have semantic closure and can be considered alive. Their semiosis is characterized by the following primitive features: signs are not separated from interpreters, signs are interpreted as actions rather than concepts, the process of sign recognition is not separated from the use of signs, and there are no internal communication processes at lower levels. Cooperation of several autocatalytic systems resulted in the development of simple hypercycles integrated by internal communication. In a hypercycle, each component has a double interpretation: it is copied via local self-reproduction, and it is also interpreted globally by the other components. Differentiation of the genotype and phenotype could occur in hypercycles when some components gained control over other components. Apparently, initial genotypes had no digital coding, and the development of the genetic code required a long period of natural selection.
When the behavioral repertoire of organisms increased, it became necessary to separate the interpretation of signs from actions. As a result, signs became associated with concepts rather than with actions. I used a pragmatic approach to logic, initially outlined by Peirce (1955) and James (1975), to bridge the gap between actions and concepts. An action is true if it is beneficial for an organism. In the same way, a concept is true if it suggests the best way of reaching any goal that may be associated with it. Concepts help to optimize an organism’s activity when the number of possible actions is large. Specialists in human semiotics may be reluctant to accept biosemiotic ideas because the content and structure of human signs are much richer than the sign primitives that existed at the dawn of life. The question is: can we learn anything about human signs from studying the simple signs used by bacterial cells? At the beginning of the twentieth century, biologists did not expect to learn anything about higher animals from studying bacteria. However, molecular biology has proven that processes in complex organisms can be understood by studying simplified systems in bacteria. I believe that the same will be true in semiotics.

References
Alexei Sharov 11/23/99 |