Frontiers in Psychology (Front. Psychol.), ISSN 1664-1078, Frontiers Media S.A. doi: 10.3389/fpsyg.2023.1273470. Psychology, Hypothesis and Theory. What does it mean to be an agent? Naidoo Meshandren* School of Law, University of KwaZulu-Natal, Durban, South Africa

Edited by: Simisola Oluwatoyin Akintola, University of Ibadan, Nigeria

Reviewed by: Nathalie Gontier, University of Lisbon, Portugal; Opeyemi A. Gbadegesin, University of Ibadan, Nigeria

*Correspondence: Meshandren Naidoo 214549331@stu.ukzn.ac.za
Received: 08 August 2023; Accepted: 26 September 2023; Published: 17 October 2023. Front. Psychol. 14:1273470. Copyright © 2023 Naidoo.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Artificial intelligence (AI) has posed numerous legal–ethical challenges. These challenges are particularly acute when dealing with AI demonstrating substantial computational prowess, which is then correlated with agency or autonomy. A common response to this issue is to inquire whether an AI system is “conscious” or not; if it is, then it could constitute an agent, actor, or person. This framing is, however, unhelpful, since there are many unresolved questions about consciousness. Instead, a practical approach is proposed, which could be used to better regulate new AI technologies. The value of the practical approach in this study is that it (1) provides an empirically observable, testable framework that contains predictive value; (2) is derived from a data-science framework that uses semantic information as a marker; (3) relies on a self-referential logic that is fundamental to agency; (4) enables the “grading” or “ranking” of AI systems, which provides an alternative method (as opposed to current risk-tiering approaches) and measure to determine the suitability of an AI system within a specific domain (e.g., social or emotional domains); (5) presents consistent, coherent, and higher informational content compared with other approaches; (6) fits within the conception of what informational content “laws” are to contain and maintain; and (7) presents a viable methodology for ascribing “agency”, “agent”, and “personhood” status that is robust to current and future developments in AI technologies and society.

Keywords: agency, artificial intelligence, autonomy, explanations, personhood, semantics, complex system mechanics and dynamics. Funding: National Institute of Mental Health (10.13039/100000025); National Research Foundation (10.13039/501100001321). Section at acceptance: Theoretical and Philosophical Psychology


      1. Introduction and limitations

      This paper aims to establish a robust account of agency that can be applied to many kinds of systems, including AI systems. This raises further sub-questions: (1) what does it mean to be an agent; and (2) what markers can be used to identify an agent? An account of agency must answer these questions in a generally determinable manner. To build an explanatory account of agency, this study evaluates and uses the various logics underpinning “explanations”, using the ecological framing of biological organisms as agents of their own evolution. In this light, information-centric quantification tools such as statistical mechanics and bioinformatics are attractive sources for creating such an account. An important question is “what is an AI system?” This question is beyond the scope of this article but will be examined in future research. An additional limit is that this methodology describes an empirically testable account of agency but does not detail its preferability compared with existing approaches. It is assumed that the reader is familiar with existing approaches.

      Evolution is an ecological phenomenon arising from the purposive engagement of organisms with their conditions of existence. It is incorrect to separate evolutionary biology into processes of inheritance, development, selection, and mutation. Instead, the component processes of evolution are jointly caused by organismal agency and organisms' ecological relations with their affordances. Purposive action is understood as agents using features of their environments as affordances conducive to their goals. Furthermore, a Kantian approach (see Part B of Supplementary material) is used, which focuses on accounts of agency and personhood as the intrinsic purposiveness of the agent/person. A Kantian approach is preferable since it is the common framing for many legal constitutions and a dominant framing for questions of this kind. Thus, this research moves away from the erroneous “intentional” approach (Sapolsky, 2017).

      2. The nature of explanations and understanding

      2.1. Theories and mental models

      Explanations usually contain more than theories, in that they involve different bodies of knowledge (Keil, 2006). Explanations create trajectories and lead to understanding among people. They also tend to be more robust than theories. Explanations differ from mental models, which range from formal representations of logical patterns to image-like representations of the workings of systems. Mental models are often understood in spatial terms, and explanations are not the same as mental modeling. Explanations involve interpretations. The value of explanations in growing knowledge lies in their transactional status and their interpretation (Keil, 2006). Related to this is the question “what does it mean to understand?” When people are probed about their beliefs about the world, coherence often evaporates. Often only fragments of the workings of systems are known, and of these known fragments very few are coherent (Keil, 2006). People's beliefs also tend to contradict one another, and such contradictions go unnoticed until they are made explicit or are pointed out by someone else. This may be because of the limits of working memory: not all elements can be considered together at the same time, which would help identify inconsistencies.

      2.2. Synchronicity and the nature of oscillation

      A question that has long occupied humans is how do we come to an agreement on anything? In language, how do we agree on the meaning of words? In the behavioral sciences, how do we come to know behaviors? In physics, how do entangled particles “know” what the others are doing? Wiener (1948) published Cybernetics: Or Control and Communication in the Animal and the Machine, in which he discussed the problems of communication and control in systems. He used the example of crickets and how they synchronize their behavior so that their chirps follow the progression they do.

      The answer lies in oscillations or spin; we can observe this in neurons and in non-living things such as pendulums synchronizing with each other, which Christiaan Huygens wrote about in 1665 (Redish, 2019). Mathematics then captured the essence of synchronization. There are populations/families of oscillators. Oscillators are things that repeat themselves: a pendulum is a mechanical oscillator, a neuron firing in the brain is a cellular oscillator, and birds flying in unison are animal oscillators.

      2.2.1. Coupling

      What is next needed is a coupling mechanism between individuals in a population. Coupling (Stankovski et al., 2015) depends on the population of concern. For neurons, it is the connections between them. For animals, it is sight or sound. For particles, it is spin. One can then capture frequency/pulse. Couplings can be weak or strong: strong coupling means a stronger statistical tendency for the oscillation relationship/synchronization to take place. For coupling to take place, whether strong or weak, the individuals must have relatively similar innate frequencies and must (generally) be local to one another. Many different interlinking oscillators apply to humans and other creatures. The Yoshiki Kuramoto mathematical model (Strogatz, 2000) can explain complicated behaviors in complex systems, including perhaps even semantic information. Oscillation and coupling are key components of understanding (perhaps the key components). These components explain not only understanding but also relationality and non-verbal/verbal social communication.
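      The threshold behavior of the Kuramoto model can be illustrated with a short numerical sketch (the function name kuramoto_order, the Gaussian spread of innate frequencies, and all parameter values here are illustrative choices, not drawn from the sources cited above):

```python
import numpy as np

def kuramoto_order(K, N=50, steps=2000, dt=0.05, seed=0):
    """Simulate N Kuramoto phase oscillators with coupling strength K
    and return the final order parameter r (0 = incoherent, 1 = synchronized)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, N)          # innate (natural) frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)   # initial phases
    for _ in range(steps):
        # each oscillator is nudged toward the phases of all the others
        pull = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += (omega + pull) * dt
    return abs(np.exp(1j * theta).mean())

print(kuramoto_order(K=0.1))  # weak coupling: little synchrony
print(kuramoto_order(K=4.0))  # strong coupling: near-complete synchrony
```

As the sketch suggests, synchronization is not gradual everywhere: below a critical coupling strength the population stays incoherent, while above it collective order emerges, which is why the strength of coupling matters as much as its existence.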

      2.2.2. The brain

      Robert Moore, Victor Eichler, Friedrich Stephan, and Irving Zucker discovered the brain regions responsible for governing circadian rhythms. The key structure is the suprachiasmatic nucleus (SCN), which processes information about light and darkness from the retinas; damage to the SCN impairs animal rhythms. Oscillators are the tools used to interlink and relate to others like us; they define what constitutes an “us”. Examples of coupling mechanisms include heat, shape, direction, and vision (eyes, in particular, are a gateway for bonding) (Cornell University, 2022). Previously, the postulation was that mirror neurons enabled us to mimic the behaviors of others in our social group and thus coordinate social or group learning; however, this has not been confirmed (Dickerson et al., 2017). Oscillators and coupling are the modalities of world-building and of social organization and communication.

      More generally, there are other instances of “understanding” or knowing. These instances involve embodied ontogenetic knowledge: of time, place, circumstance, culture, bodily knowledge (such as sensory information), and the like. For John Vervaeke, these are the four modalities of knowing: (1) participatory knowing; (2) perspectival knowing; (3) procedural knowing; and (4) propositional knowing (Raninen, 2023). Therefore, notions such as “understanding” or “knowing” are related not to thought or mental representations but to natural and mechanical processes of relation. This enables a reframing of these concepts such that they need not be intimately linked to purely human mental representations.

      2.3. Patterns, stances, domains, and social/emotion

      We can distinguish different explanations by the causal patterns they employ, the stances they invoke, the domains of phenomena they explain, or whether they are value- or emotion-laden (Keil, 2006). Each of these has different trajectories and properties.

      2.3.1. Causal patterns

      The most common causal relations to which explanations refer are (1) common cause, (2) common effect, (3) linear causal chains, and (4) causal homeostasis (Keil, 2006). Common cause explanations cite a single cause as having a branching set of consequences. These are usually diagnosis-type explanations (such as a bacterial infection that causes many symptoms, or a computer virus). Common effect refers to instances where causes converge to create an event; these are common in historical narratives, where several causes are said to converge and create an event. Linear chains, on the other hand, are degenerate cases of common cause and effect: there is a unique serial chain from a single initial cause, through a series of steps, to a single effect (Keil, 2006). Causal homeostatic explanations are fundamental to natural-kind explanations. These explain why sets of things endure as stable sets of properties. This type does not explain how a cause progresses over time to create effect(s), but rather how an interlocking set of causes and effects results in a set of properties that endure in combination over time as a stable set. This stable set is then of a natural kind. Some explanations are easier to follow, while others are more difficult and hence “unnatural”. Furthermore, some explanations are often understood to be domain-based, although this is not necessarily the case (Keil, 2006).

      2.3.2. Stances

      One can frame explanations in terms of the stance that they take. Dan Dennett is known for drawing this distinction. Each stance speaks to a framing device for explanations. Each stance is general and non-predictive but does speak to certain relations, properties, and arguments that are fundamental to each (Keil, 2006). Dennett highlighted three different kinds of stances: (1) mechanical, (2) design, and (3) intentional. Mechanical stances consider only simple physical objects and their interactions. The design stance considers entities as having purposes and functions that occur beyond mechanical interactions. Some argue that teleology/functional explanation is part of this stance. There are also questions about whether an intentional designer is necessary for teleological explanations. The intentional stance sees entities as having beliefs, desires, and other mental contents/representations that govern their behaviors (Keil, 2006). These mental states then have causal consequences in terms of behavior. This has, however, often been criticized for being based on folk psychology (Woolman, 2013). Each stance describes different insights and distortions and explains different things. They need not exclude each other and can be complementary (see part G of the Supplementary material for more information on intentionality).

      2.4. Causation

      Causal explanations have been the most dominant form of explanation, especially in the sciences. However, they are not the only form; there are also non-causal explanations, called constitutive explanations (Salmon, 1984).

      2.4.1. Causal capacities as explananda (etiological)

      The object of a constitutive explanation is the causal capacity of a system. This capacity describes what a system would do under specified circumstances/conditions (under a certain trigger). Causal capacities speak to what would, could, or will happen under certain conditions, and they include notions such as ability, power, propensity, and tendency. Causal capacities speak to processes and events: when process (X) happens, event (Y) happens. They explain changes in the properties of a system, which is what an event is (Ylikoski, 2013). They focus on the origin, persistence, and changes in properties of (or in) a system.

      2.4.2. Counterfactuals and the Millian method of difference

      This is the “Millian Method of Difference” (Encyclopedia Britannica, 2023) or counterfactual approach. Counterfactual explanations (Mertes et al., 2022) are the “knockout” kinds (the gene as the unit of inheritance was established through this approach). Here, if you want to determine whether something (C) as a cause has an effect (E), you perform an experiment whereby you remove (C) and then observe the effects. This can be a literal removal or a conceptual removal. This is often used to explain why something happened, such as a decision, event, or outcome by reference to a particular thing or sequence.

      You can also change the values of (C) by making it stronger or weaker, and then observe what happens to (E). We use this to make inferences from the difference observed in effects where (C) is absent or different. Thus, we infer the causal role of (C) based on its presence versus its absence or its changes. This is effective for identifying discrete explanatory privileged causes (Walsh, 2015) (see Part A of Supplementary material for an example and information on its undesirability).
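      A minimal sketch of this knockout logic (the toy linear system run_system and its coefficients are hypothetical constructions for illustration only):

```python
import random

def run_system(c_strength, noise=0.1, n=1000, seed=1):
    """Toy system in which effect E depends linearly on cause C plus noise.
    Returns the mean observed effect over n trials."""
    rng = random.Random(seed)
    return sum(2.0 * c_strength + rng.gauss(0, noise) for _ in range(n)) / n

baseline = run_system(c_strength=1.0)   # C present
knockout = run_system(c_strength=0.0)   # C removed ("knocked out")
weakened = run_system(c_strength=0.5)   # C attenuated

# The difference in E across conditions is attributed to C:
print(baseline - knockout)   # ≈ 2.0
print(baseline - weakened)   # ≈ 1.0
```

The inference works here precisely because the toy system is linear and its causes are separable; it is exactly this assumption that fails in complex systems, as discussed in the next section.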

      2.4.3. Causation in complex systems

      Complex adaptive systems can maintain stable configurations despite perturbations because they can alter the causal relations between their parts. Each part affects, and is affected by, the others, and the overall effect is attributable, jointly and severally, to all the parts. The system is thus affected by itself, and these causes are non-separable. Causes are separable only when the effect of a change in one is independent of the effects of changes in the others. If we remove or interfere with one part, we also interfere with the others. Therefore, causal composition/decomposition fails on non-separability: the influence/control factor of each part is non-determinable (thus non-quantifiable), and we cannot attribute differences in effect to specific differences in the causal contributions of the parts. When reviewing a result, one cannot assume that the other factors are functioning as they did before the removal of a factor; they can be operating differently. Thus, we cannot decompose causes and differences in effect by reference to external versus internal influences. Changes in the dynamics of complex adaptive systems can be initiated endogenously, through internal perturbations, or exogenously, through changes in the environment. The system mounts a response to both, and the result of that response is attributable to both internal and external influences as a single cause. Feedback occurs where the internal dynamics and the environment both cause a change in the behavior of a system via signals. Thus, the environment is part of the system's dynamic structure. This is why it is difficult or impossible to attribute liability (whether for an action or for the composition of a product or artwork) to either an AI system or a human where there is a “commingling” between the two. Even distinguishing between “principal causes” and “initiating causes” does not offer an adequate solution.
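      Non-separability can be made concrete with a deliberately small sketch (system_output and its interaction term are hypothetical): once parts interact, the "effect of removing a part" depends on the state of the other parts, so no fixed causal contribution can be assigned to it.

```python
def system_output(a, b):
    """Toy system whose parts interact: the a*b term couples them."""
    return a + b + 4 * a * b

# The apparent causal contribution of part a depends on part b:
effect_of_a_with_b_on = system_output(1, 1) - system_output(0, 1)
effect_of_a_with_b_off = system_output(1, 0) - system_output(0, 0)

print(effect_of_a_with_b_on)   # 5
print(effect_of_a_with_b_off)  # 1
```

Because knocking out a yields a different answer depending on the rest of the system, the counterfactual method described earlier cannot quantify a's contribution here.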

      Accounts of complex adaptive systems tend to distinguish between “principal causes” and “initiating causes”. Principal causes are those to which we can attribute a large portion of the observable effect. Initiating causes start the causal process that ends with an effect. If two identical systems diverge in their outcomes, it is reasonable to assign principal causal responsibility for differences in effect to the factor that initiates the different trajectories (Walsh, 2015) (assuming that all other components contribute as before). In such a case, the principal causes would be initiating causes. However, this inference cannot hold for complex systems. There is logical discord between (1) the proposition that a change in the dynamics of a complex system is initiated by changes in exogenous conditions; and (2) the conclusion that the principal cause of the overall effect is that change in the exogenous conditions. All this means is that the usual modes of inference (cause and effect) do not work in complex dynamical systems.

      2.5. Constitution

      Constitution explains how things come to have the causal capacities they do, by reference to their parts and organization (Ylikoski, 2013). Constitutive explanations ask: “what was it about (X) that resulted in it having disposition (Y)? What is it about (X) that enables a causation event to happen?” They provide different information compared with causative explanations. Fundamentally, these explanations provide modal information about causal possibilities.

      To explicate constitutive explanations (Cummins, 1975, 1983, 2000; Craver, 2007a,b; Craver and Bechtel, 2007), their explananda must first be described. Constitutive explanations are not concerned with the behavior, reactions, or activities of a system; they explain the properties of a system themselves. The relata of causal and constitutive explanations thus differ: causal explanations deal with events, and constitutive explanations deal with properties. A constitutive explanation would say, for example, that system (S) has a causal capacity (C) in circumstances (E) because of its components (S1) and (S2) and their organization (O) (Ylikoski, 2013). Therefore, there is an ontological difference between causation and constitution. Both are relations of dependence (Rosen, 2010), but they are metaphysically different. Both, however, must account for explanatory relevance.

      Metaphysics posits that the parts, their causal capacities, and their organization constitute the causal capacities of a system/whole. Constitution is synchronous and thus atemporal (meaning that it is not based on time and can be instantaneous). This means that if there are changes in the basis, there is an instant change in the causal capacities of the system (hence constitution is process- and time-independent).

      Importantly, the constitutive relata are not independent existences. In causation, one can insist that the relata of cause and effect are distinct from each other but one cannot insist on the same within constitution relata. Specific causal capacities are direct functions of certain constitutions. Constitutions then do not have independent identities.

      Constitutive explanation is distinct from identity, in that identity is a reflexive and symmetric relation. First, one must distinguish between the constitution of all causal capacities of a system and the constitution of an individual capacity (Ylikoski, 2013). The former is the complete set of causal capacities of a particular system (at a time). We can identify the causal capacities and their causal basis (the organization). To have specific causal capacities, a specific causal basis (organization) is first necessary. Symmetries can be exact and allow for simplicity in explanations, but exact symmetry does not amount to the identity of a thing.

      We cannot identify individual causal capacities with, or as, their composite bases (alternative constitution). This is because different objects can have the same causal capacities despite having different compositions (Ylikoski, 2013). This is known as multiple realization (MR). MR implies that we cannot equate a specific property of an object (like fragility) with a specific structural element of the object (molecular structure), but we can attribute a specific property to the object because of a specific structure that it has. At the heart of scientific inquiry are questions about what makes causal powers possible and how changes in the organization of parts affect the total causative capacities of the system. Science largely involves studies of constitution (the study of relations of dependence). Constitution is therefore at the heart of causal inquiries. There is justification, then, for a constitution-based approach to explaining agent status or agency. This explanation offers a method for granting an AI system agent or person status through relations of internal dynamics and dependencies. This understanding of constitution is what Kant was alluding to in his oft-quoted notion that things are to be understood as ends in themselves and never merely as means to an end.
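      Multiple realization can be sketched as follows (the classes, structure labels, and threshold are hypothetical illustrations): two objects with different constitutions realize the same causal capacity, so the capacity cannot be identified with either structure.

```python
class GlassPane:
    """Fragility realized by one constitution."""
    structure = "amorphous silica network"
    def shatters_on_impact(self, force):
        return force > 10

class CeramicTile:
    """The same fragility realized by a different constitution."""
    structure = "polycrystalline alumina grains"
    def shatters_on_impact(self, force):
        return force > 10

pane, tile = GlassPane(), CeramicTile()
# Same causal capacity, different composite bases:
print(pane.shatters_on_impact(12), tile.shatters_on_impact(12))  # True True
print(pane.structure == tile.structure)                          # False
```

The shared capacity here is attributable to each object because of its structure, yet it cannot be equated with either structure, which is the MR point.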

      The necessary asymmetries are present: constitution explains causation, and constitution is composed of parts and the organization of those parts. Systems, then, are made of causal parts and their organization. The other asymmetry concerns existence: parts can exist independently of systems, while systems cannot exist without their parts (they can exist without some parts, but not all). The organization of parts is also fundamental for maintaining the status of a system (since systems are not reducible to their parts; they are greater than the sum of their parts). Organization therefore has explanatory relevance. Systems' causal capacities are not just the sum of their parts; they also depend on the organization of those parts. Organization's explanatory relevance stems from its contribution to the causal capacities of the system as a whole (change the organization and the causal capacities of the system change). Organization is also called contextual causation and is empirically observable. Contextual causation is similar to downward causation (below), except that it displaces the notion of “downward” and instead posits that parts can influence each other regardless of their relative placement (Ylikoski, 2013). Parts can be of different sizes, at different levels of abstraction, and situated at different levels. Causation is not limited to agency, nor to human agency; it can also include instances of manipulation/intervention.

      Constitution and causation are both explained in terms of their dependencies, which are a particular set of “objective” relations of dependent facts. These facts give explanations a direction and they are the basis for explanatory preferences (explanations must explain the systems' causal capacities in terms of their basis and not vice versa) (Ylikoski, 2013). Constitutive relations involve causal manipulation.

      2.6. Downward causation

      Downward causation provides an explanation for “emergence” which will also be necessary for an explanation of AI agency. However, downward causation has been criticized. For example, Kim (2006) argues:

      “[d]ownward causation is the raison d'être of emergence, but it may well turn out to be what in the end undermines it”.

      However, this argument assumes the causal inheritance principle, which stipulates that the causal powers of complex systems are inherited exclusively from the causal powers of their parts. This has two salient points: (a) If parts do not have causal capacities, then the system as a whole would not (the capacities of the whole counterfactually depend on the capacities of the parts); and (b) in complex entities, nothing other than their parts are relevant to the determination of their causal properties. This then requires the causal powers of an entity to be internal to it.

      Internal properties are context-insensitive, and an entity/system has all its internal properties (until there is an internal change) regardless of context. If causal powers are internal, it is only the internal constitution of a system that confers those causal powers. This, together with the assumption of internal causal properties, results in an ontological primacy being afforded to the capacities of the parts, as opposed to the capacities of the totality/aggregate. The idea is that complex entities inherit their causal powers from their parts, but the converse is not true: complex entities cannot confer on their parts causal powers that the parts did not have by their internal natures/capacities. Therefore, the properties of complex entities cannot explain why their parts have the causal powers they do (Walsh, 2015). This is Kim's argument against reflexive downward causation.

      The weakness in Kim's argument against emergence is the assumption of internal causal properties. This kind of thinking may have arisen from the notion that mass (as a fundamental causal power/property) is context-insensitive. The masses of macroscopic objects are not altered by the masses of other bodies; mass behaves in a context-insensitive manner with regard to forces. An object's mass allows the prediction of its behavior across different contexts where forces act on it, and it allows for the assumption that the effects of those forces are mutually independent and do not alter the mass itself.

      Context insensitivity of causal powers is present in the analytic method (Cartwright, 2007). The assumption is that, as contexts are altered, entities' causal powers remain unchanged because of the internal nature of those powers. However, context insensitivity does not equate to internality. Mass itself shows this: it is possible for a mass to be invariant across many contexts without being an internal property of a body. For example, it was recently discovered that the mass of a proton comes from a combination of the masses of its three constituent quarks, their movements, the strong force that ties them together (mediated by gluons), and the interactions of quarks and gluons (Thomas Jefferson National Accelerator Facility, 2023). Hence, the mass of the proton is emergent; mass is conferred on a proton by its relations to something else. Causal powers may therefore be invariant across contexts while being relational properties of things.

      If causal powers are non-internal properties conferred on things by contexts, then one can argue that parts of complex systems get their causal powers from the system as a whole (connubiality). The parts would not have those capacities if they were not parts of that complex entity. The whole system, in this way, is the context that confers causal powers on its parts. This holds true even if the causal powers of a whole system are completely inherited from its parts.

      Therefore, the property of the whole depends on the properties of the parts, and the converse is also true. This becomes easier to understand if properties are taken to be relational (not internal in the strictest sense) and context-sensitive. Reflexive downward causation can then be explained as follows: if causal powers are relational properties, complex systems have the causal powers they do because of the causal powers of their parts (as in causal inheritance), and it is also possible that parts have their causal powers because of the complex system they are part of.

      In causally cyclical systems, one can assume that the causal powers of the parts are context-dependent and are conferred by the system in which they are parts. Hence, emergence is a fact of complex systems which can transform their parts (Ganeri, 2011). By transform, I mean that they confer on their parts capacities that they counterfactually would not have. These capacities reciprocally fix the properties of the system. Therefore, emergence can arise based on the context. Systems can give their parts causal powers and causal powers of the parts can be explained through reference to the system as a whole and its properties. They are hence relational, and the more suitable framing of this would be “intrinsic” as opposed to “internal”. This is developed further at the end of the article.

      2.7. Fundamental and emergence

      “Fundamental” speaks to things that cannot be decomposed further into smaller resolutions, meaning that we cannot get a coherent theory if we do so. What is fundamental is thus contingent on knowledge and the era in which you find yourself. Previously, atoms were thought to be fundamental, until particle theory was established. Emergence, however, is different: it is not conceptually contingent in the way that fundamentality is. “Emergence” can explain many issues in physics; Schrödinger's (1944) order-from-disorder answer in his book What is Life? gives a hint of a theory that incorporates emergence into complex systems. Out-of-equilibrium systems, for example, spontaneously build structures that dissipate energy and, as they do so, become increasingly stable and more complex. They have their own intrinsic dynamics. The dynamics of these systems can yield predictions and explanations, not just about the activities of the whole system but also about the activities of the parts. The movement of information toward order is also an emergent property. Vopson and Lepadatu (2022) demonstrated that while the thermodynamic entropy of a system increases, its overall informational entropy decreases (or stays constant) (in the virus they studied). This is “the second law of infodynamics”. The law works in opposition to thermodynamic entropy and describes this movement as an emergent entropic force. Thus, we can now account for (1) informational emergence in complex systems that (2) are not considered to be “alive”.

      2.8. Variance

      2.8.1. Multiple possible variance and rarity

      The microstate of a system is its configuration (Hidalgo, 2015). Entropy is the logarithm of the fraction of all equivalent states. Entropy is lowest where the states have the least possible variance (order), and highest where there is the most possible variance (MPV). Rarity (Hidalgo, 2015) is a measure of the possibility of a particular arrangement occurring at random, without intervention. If an arrangement is rare, the probability of it occurring without intervention is low. Functionality and working conditions are indicators of rarity. The natural state of things is disorder, as opposed to order. States of disorder carry less information, and thus the destruction of physical order is also the “destruction” of information (informational content). Creating physical order is creating information (which is embedded in that order). The rareness of a state of order is measured against the number of possible states. One way to do this is by correlating the connections between states: there is a correlation if one can get from state A to state B with a simple transformation. Information-rich states that involve correlations give the word “information” its colloquial meaning. Most things are made up of information. “Order” is a statistical probability measure of occurrence. Sometimes the states of systems do not allow changes from A to B, or they impose limitations on the modes of transformation. The modes of achieving disorder outnumber the modes of achieving order.
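      These claims can be made concrete with a toy count over binary microstates (the function state_entropy is an illustrative construction; n and the macrostate labels are arbitrary):

```python
from math import comb, log2

def state_entropy(n, k):
    """Boltzmann-style entropy (in bits) of a macrostate with k 'up'
    units out of n: the log of the number of equivalent microstates."""
    return log2(comb(n, k))

n = 100
ordered = state_entropy(n, 0)          # all units aligned: one microstate
disordered = state_entropy(n, n // 2)  # half-and-half: very many microstates

print(ordered)     # 0.0 bits: a unique, maximally ordered, rare configuration
print(disordered)  # ≈ 96.3 bits: the modes of disorder vastly outnumber order
```

A randomly chosen configuration is overwhelmingly likely to fall in the high-entropy macrostate, which is the sense in which ordered, information-bearing states are rare.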

      2.8.2. Covariance, correlation, and mutual information

In statistics, correlation describes the degree of linear dependence, association, distance, or relation between two random variables in data. Correlations and standard deviations apply only in Mediocristan, non-scalable environments (Taleb, 2007) (Gaussian or bell-curve distributions) wherein magnitude does not matter. In other words, both have predictive or informational power only in that context (see part D of Supplementary material). They can only be used to draw qualitative inferences.

Importantly, while correlation applies only to linear relationships between variables, the information carried by this linear relationship (the signal) does not scale linearly. Correlation is not additive (Taleb et al., 2023) because correlation coefficients are non-linear functions of the magnitude of the relations between variables. They cannot be averaged for this reason: an average of correlation coefficients is not itself an average correlation. For example, a correlation coefficient of 0.7 conveys much less information than a coefficient of 0.9, while a coefficient of 0.3 conveys almost the same relationship as one of 0.5 (Salazar, 2022). Correlation cannot be used for non-linear relationships between variables, which are what characterize reality (or Extremistan, scalable environments). Using it there will result in an incorrect explanation of the relation between random variables. In short, correlation does not accurately reflect the informational distance between random variables (Taleb et al., 2023).
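One way to see this non-additivity is through the closed-form mutual information of a bivariate Gaussian, I(X;Y) = -(1/2) * log2(1 - r^2) bits. This is a standard information-theoretic result, not a formula from the cited sources; the short sketch below uses it to show that the informational gap between r = 0.7 and r = 0.9 dwarfs the gap between r = 0.3 and r = 0.5.

```python
from math import log2

def gaussian_mi_bits(r: float) -> float:
    # Mutual information of a bivariate Gaussian with correlation r, in bits:
    # I(X;Y) = -(1/2) * log2(1 - r^2)
    return -0.5 * log2(1.0 - r * r)

for r in (0.3, 0.5, 0.7, 0.9):
    print(f"r = {r}: {gaussian_mi_bits(r):.3f} bits")

# The signal gap 0.7 -> 0.9 carries far more information than 0.3 -> 0.5:
# correlation coefficients cannot be meaningfully averaged.
assert gaussian_mi_bits(0.9) - gaussian_mi_bits(0.7) > \
       gaussian_mi_bits(0.5) - gaussian_mi_bits(0.3)
```

Roughly, r = 0.9 corresponds to about 1.2 bits of shared information while r = 0.7 corresponds to about 0.49 bits, which illustrates why the coefficient scale is misleading when read linearly.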

Covariance is a linear measure of the strength of the correlation between two or more sets of random variables. The covariance of two random variables (X) and (Y), each with sample size (S), is defined by an expectation value, cov(X, Y) = E[(X − E[X])(Y − E[Y])] (Weisstein, 2023). Where the values are correlated, the covariance will be non-zero; where they are uncorrelated, it will be zero. The covariance can be directly proportional or inversely proportional. Covariance can be infinite, while correlation is always finite (Taleb, 2020). Covariance provides a method for construing features of contexts as “affordances”, since this would be a qualitative finding and one that is non-scalable, as described below.

The appropriate measurement function is mutual information (MI), which is not dissimilar to the Kelly criterion in finance and risk (see Supplementary material). In machine learning, this is framed in terms of relative entropy: MI is the Kullback–Leibler divergence (a measure of the difference between distributions) between the joint distribution and the product of the marginals (Taleb et al., 2023). Machine learning loss functions rely on entropy methods. Mutual information can be understood as a non-linear function of correlation: if mutual information increases, correlation itself increases, but non-linearly. Mutual information compares the probability of observing two random variables together with the probability of observing those same two variables independently (Prior and Geffet, 2003). In other words, an MI approach captures non-linear relationships and, importantly, it also scales in the presence of noise. The MI approach describes the amount of mutual dependence between two random variables; one gains information about a random variable by observing the value of another. It measures this dependence in information (bits) and is used for non-linear dependencies and discrete random variables. This is an entropy measure, and it is additive (Taleb et al., 2023). This understanding of how seemingly random variables are related, in terms of how the values or changes in one variable affect our understanding of the values or changes in another, is an important tool.
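For discrete variables, MI can be estimated directly from counts. The sketch below is a minimal plug-in estimator (a textbook construction, not the specific method of the cited sources), applied to an invented example where the dependence is perfectly deterministic but non-linear (y = x squared), so Pearson correlation is exactly zero while MI is large.

```python
from collections import Counter
from math import log2

def mutual_information_bits(xs, ys):
    # Plug-in estimate of I(X;Y) = sum over (x, y) of
    # p(x, y) * log2( p(x, y) / (p(x) * p(y)) )
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Deterministic but non-linear dependence: y = x**2.
# The linear correlation of xs and ys is zero, yet MI is substantial.
xs = [-1, 0, 1] * 100
ys = [x * x for x in xs]

mi = mutual_information_bits(xs, ys)
assert mi > 0.9  # about 0.918 bits: knowing x fully determines y
```

This is exactly the failure mode of correlation described above: a correlation-based analysis would report no relationship at all, while the entropy-based measure recovers the full dependence.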

Mutual information maps to the mutual dependence of random variables (how much can I rely on X if I know Y). Therefore, an MI approach would be most applicable to genetic distances (Taleb et al., 2023). Furthermore, an information metric is preferable and suitable for an account of agency or personhood, since DNA is understood as the basis of “life”. Mutual information then provides the proper tool for creating a methodology with proper scaling, proper explanatory value, minimal informational loss (Taleb et al., 2023), and avoidance of linear approaches (such as Cartesian methods or internal–external measures). Conditional mutual information (also known as transfer entropy) provides a suitable method for causality detection, since non-linear relationships (Mukherjee et al., 2019) in data associated with genetics and biological systems make linear generalizations of the data impossible. Transfer entropy provides a consistent method across different conditions.

      2.8.3. Intervention

Interventions usually involve notions of manipulations carried out on a variable (X) to determine whether changes in (X) are causally related to a variable (Y). However, any process qualifies as an intervention if it has the right causal characteristics, not just human activities (Woodward, 2000). Consider this example: there is an intervention (I) on variable (X), which is a causal process that changes (X) in an exogenous way. If a change in (Y) happens after this, the change occurs only because of the change in (X) and not because of another set of causal factors (Woodward, 2000). One must also define what intervention means: interventions involve exogenous changes that break or disrupt previously existing endogenous causal relationships between variables and system states. This understanding of intervention allows for an extrinsic manner of specifying intrinsic features. It allows us to distinguish between correlations and dependencies that reflect causal and explanatory relations and those that do not. Viewing intervention in this way also transparently allows for the epistemological designation of experimentation as the establisher of causal and explanatory relationships. This allows us to make claims about the role behavior plays in causality through the use of interventions (Woodward, 2000). This is a much clearer account of causation and explanation than the traditional doxa.
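Woodward's point can be illustrated with a toy structural model (the variables and numbers are entirely hypothetical, written here as a sketch): a common cause Z drives both X and Y, so X and Y covary observationally, but an exogenous intervention that sets X severs the Z-to-X link and leaves Y untouched, revealing that the dependence was not causal.

```python
import random

random.seed(0)

def simulate(n=20_000, do_x=None):
    """Toy structural model: Z -> X and Z -> Y (confounding); X does not cause Y.
    Passing do_x performs the intervention do(X = do_x), severing Z -> X."""
    rows = []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = do_x if do_x is not None else z + random.gauss(0, 0.1)
        y = z + random.gauss(0, 0.1)
        rows.append((x, y))
    return rows

def mean_y(rows):
    return sum(y for _, y in rows) / len(rows)

def cov_xy(rows):
    mx = sum(x for x, _ in rows) / len(rows)
    my = mean_y(rows)
    return sum((x - mx) * (y - my) for x, y in rows) / len(rows)

# Observationally, X and Y move together (through the common cause Z)...
assert cov_xy(simulate()) > 0.5
# ...but intervening on X leaves Y unchanged: the dependence was not causal.
assert abs(mean_y(simulate(do_x=3.0)) - mean_y(simulate(do_x=-3.0))) < 0.1
```

The intervention here is exogenous in precisely Woodward's sense: it breaks the pre-existing endogenous relationship between Z and X, which is what licenses the causal reading of the outcome.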

      2.8.4. Invariance, generalizations, and laws

      According to Woodward, generalizations can be used in explanations and depend on invariance rather than lawfulness (Woodward, 2000). A generalization describing a relationship between two or more variables is invariant if it is stable or robust after the occurrence of an intervention or change in various other conditions at an appropriate level of approximation (Woodward, 2000; Maher, 2006). Invariance comes in degrees, and it has other features that capture the characteristics of explanatory generalizations in the social sciences, in particular (Woodward, 2000). In other words, invariance does not appeal to laws for its usefulness in explanations. The set or range of changes over which a relationship of generalization is invariant is known as its domain of invariance.

There are two types of changes, and both are fundamental to explanatory power. The first is changes in background conditions (changes that affect variables other than those which are part of the generalization) (Woodward, 2000). The second is changes in variables present within the generalization itself [within the Newtonian equation F = ma, the change can occur to the mass (m) or the acceleration (a)].

For a methodology to constitute a law on personhood or agency, it must meet the conditions of laws (see part A of Supplementary material). This includes being a generalization with high invariance or wide applicability, and being confirmable, predictive, and integrable (not only with other laws, but also with the philosophical or jurisprudential axioms which may ground legal laws, such as Kantian philosophy). Laws can also replace older laws where they demonstrate that the older laws were unsuitable or provided less information.

      2.8.5. Explanations and invariance

      Good explanations require the use of invariant generalizations, which enable the specification of systemic patterns (of counterfactual dependence). This converts information into explanations since it can be used to answer a range of counterfactual circumstances about the explanandum. This allows for better predictive models. There are various kinds of counterfactual dependences, including active and passive ones; active is the type that is necessary for good explanations (Woodward, 2000). Invariance is thus necessary for reliance on counterfactuals and prediction (and to some degree also causal links). Invariance comes in degrees. There is also a connection between the range of invariance and explanatory depths; generalizations with more invariances constitute better explanations, especially for science. Generalizations that are not invariant under any conditions have no explanatory powers. Invariance is also important for building a purposive teleological account and countering the notion of “chance”.

      2.9. Theories of explanation: teleology and mechanism 2.9.1. What is teleology?

      Teleology explains the existence of a feature based on its purpose (Walsh, 2015). The understanding that biological organisms are self-building, self-organizing, or adaptive suggests that they are greater than the sum of their parts. Thus, we can argue that organisms are purposive things. Refer to Sommerhoff (1950) in part B of the Supplementary material for information on how capacities can serve as a criterion of purposiveness.

      2.9.2. Mechanism vs. teleology

      Mechanists argue that natural selection explains the fit and diversity of organic forms, thus making teleology or purpose explanations unnecessary. The mechanical view is that every event has a cause, with causes being able to fully explain events. But there are three main arguments against this approach: (1) non-actuality, (2) intentionality, and (3) normativity (Walsh, 2015).

      The non-actuality argument states that means come before ends (goals). However, in terms of teleology, ends explain their means. Therefore, teleology in this light is inferential: it is the process of positing one's own presuppositions to establish an end. When the means occur, the goal or ends are not yet realized (they are non-actual). How can a non-actual state affect or cause a means?

The intentionality argument states that non-actual states of affairs cannot cause anything, but mental representations of them can. One way to solve teleology's non-actuality dilemma is to propose mental states as representations of these goals (or ends). Thus, occurrences of actions or events are explained by intentions as mental states of agents. This intentional and mental state argument is the most common justification of teleology (Kant and Bernard, 1790). The issue is that organisms typically do not have intentional states. The earliest forms of teleology can be found in Plato's Timaeus and in the works of Thomas Aquinas; after all, any perceived form of order must presuppose a purpose or an intention. Aquinas argues that whatever lacks intelligence cannot move toward an end unless it is directed by knowledge, “as the arrow is shot to its mark by the archer.” Intentionality is the obvious paradigm for teleological framing. Kant (2000) notes that intentionality is our only model for understanding purpose.

      The normativity argument suggests that teleology has a normative value. Explaining an action as a consequence of intention is to argue that an agent was rationally required or permitted to act in a particular way to achieve certain goals. Rational actions are those which are required to attain a goal (or end). Thus, a teleological approach must account for an action being rational (Walsh, 2015).

Bedau (1991, 1998) argues that, because of the normativity of teleological explanations, goals can play their explanatory roles only if they have intrinsic normative properties. Namely, (c), construed as a means toward attaining a goal (e), could only be something that a system ought to produce if (e) is a state that the system ought to attain, and (e) could not be an “ought to attain” state unless (e) was intrinsically good. The issue is that natural facts are not intrinsically evaluable (Walsh, 2015). A proper account of teleology must answer all these arguments in making space for purpose. Furthermore, a proper teleological account must not be purely metaphysical; it must also operate within a scientific framework. Emergence is an important aspect of the account of agency. The dynamics of agents must be explained by their purposes and affordances. These are emergent properties that arise from the relation between agents and their contexts; they are not properties of the systems' parts themselves. Mechanistic explanations tend to exclude emergence since they treat the dynamics of complex systems as entirely explainable through the properties of their parts (Walsh, 2015). Parts are not emergent. However, before solving the emergence issue, I need to account for “purpose”.

      2.9.3. Teleology and purpose

      Teleology explains the existence of a feature based on its purpose (Walsh, 2015; Kampourakis, 2020). We can argue that organisms are purposive things because organisms or agents are self-building, self-organizing, and adaptive, which suggests that they are more than the sum of their parts.

      2.9.4. Chance and purpose

In biology, Jacques Monod considered the consequences of a non-purposive nature/biology. He identified a contradiction at the heart of evolutionary biology: the “paradox of invariance” (Monod, 1971). The paradox is that living creatures show two contradictory properties, invariance and purpose. Invariance is the ability to reproduce and transmit information, including ne varietur information, which relates to an organism's own structures and is transmitted from one generation to the next. The purposiveness of organisms is evident in the maintenance of their viability through response to environments and adaptation. However, many would argue that science does not recognize this kind of purpose because it seems to be a contingent truth instead of an objective one. To explain this, Monod suggested that purposiveness can be explained by the mechanism of molecular invariance (Walsh, 2015).

However, the invariance principle raises complications, as evolution is fundamentally about change. Adaptive evolution is a form of environmentally biased change. Thus, there should be a source of new variants and a process that is biased toward change. If we argue that new variants are biased in favor of goals and purposes, we may also be undermining science. For Monod (1971), the source of evolutionary novelties must come from unbiased chance. Monod argues that chance must have a requisite role in evolution, and that this role is methodological and not metaphysical. This is akin to Democritus, who argues that everything is a result of chance and necessity. With chance and necessity, there is no need for purpose (Walsh, 2015). However, chance is unsuitable for the account of purposiveness that I want to build.

Aristotle took issue with Democritus's explanation, since chance is, by its nature, not measurable. In Physics Book II, Aristotle discussed what an explanation should include. His arguments were developed to counter the atomists' arguments of his time, which are similar to the mechanists' arguments of cause and effect. He did not like explanations that did not account for something—and chance was unaccounted for. He illustrates this (Physics II.5) (Barnes, 1991) with the story of a man collecting money. The man meets a debtor at the market and collects money owed to him. This, for Aristotle, is a chance encounter, since the collector went to the market for a different purpose; he coincidentally also collected his money. This is a mechanistic explanation, and such explanations do not distinguish between occurrences that are regular/purposive and those that happen by chance. They therefore give incomplete information. Mechanistic explanations are still necessary, since every occurrence must have a mechanical cause, regardless of whether it occurred for a purpose or by chance (Walsh, 2015).

Purposive events are, however, robust (invariant) across a range of alternate initial conditions and mechanisms, whereas chance events are not (they have differing modal profiles). Good explanations must be able to distinguish these. Purposive encounters are those which are insensitive to initial conditions, including locations. Thus, in purposive occurrences, the means counterfactually depend on the ends. Chance occurrences are sensitive to initial conditions; if the initial conditions had been different, the event or ends would not have happened. Unlike chance occurrences, purposive occurrences are sensitive to goals: if the agent's goals were different, the event would not have occurred. If the collector had been elsewhere in the market, the encounter may have happened elsewhere, at a different time, and by different mechanisms.

      Given the counterfactual dependence of mechanisms and ends, events that happen because they serve a purpose can be explained in two ways: (1) the occurrence results from mechanical interactions and (2) the occurrence is conducive to the fulfillment of a goal. However, one thing is certain; one cannot simply disregard purposes. If purposes are ignored, it induces a “selective blindness” to a class of explainable occurrences, namely, those that are structured according to the counterfactual dependence of means on goals. This is not just an error of omission; it also risks misconstruing purposive occurrences as blind chance. To properly account for events, both teleology and mechanistic explanations are needed. I have now explained purposiveness as goals; these purposes can also explain their own means (Walsh, 2015).

      2.9.5. Goals

      Goal-directed processes are those that are conducive to stable end states and their maintenance. The end state itself is the goal. Thus, a goal is a state that the goal-directed process is aimed toward. Central to studies on natural goal-directed processes is an adaptive and autonomous system, which can achieve and maintain persistent and robust states through the implementation of compensatory changes (Di Paolo, 2005; Barandiaran et al., 2009). These systems can pursue goal states and sustain them in the presence of perturbations. They can effectively implement changes to component processes in ways that correct the effects of perturbations, which could otherwise result in the system not achieving its goal (Walsh, 2015). This will be necessary for an account of purpose and agency.

The architecture of the system underpins its goal-directed capacities and the goal states themselves. These systems are usually composed of modules: clusters of causally integrated processes decoupled from other modules. They also demonstrate the capacity to produce and maintain integrated activities across a range of perturbations or influences (robustness). Each module has regulatory influence, using positive and negative feedback, over a small number of other modules; each part effectively influences other parts in some way. This allows for robustness and plasticity: the system maintains stability in the presence of perturbations by enacting new adaptive changes. Robustness describes the property of producing novelty in response to novel circumstances. Biological organisms display this. What allows organisms or systems to do this is the modularity of their development (Schlosser, 2002).
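A minimal sketch of such a compensating, goal-directed system is a generic negative-feedback homeostat (a standard control-theory toy, not a model drawn from the cited literature): the state is buffeted by random perturbations, and a corrective change proportional to the distance from the goal is applied at each step, so the system attains and then maintains its goal state.

```python
import random

random.seed(1)

GOAL = 37.0  # the stable end state the system maintains (e.g., a temperature)

def step(state: float, gain: float = 0.5) -> float:
    perturbation = random.uniform(-2.0, 2.0)  # environmental disturbance
    correction = gain * (GOAL - state)        # compensatory negative feedback
    return state + correction + perturbation

state = 20.0  # start far from the goal
for _ in range(200):
    state = step(state)

# Despite constant perturbation, the system converges on and stays near
# its goal state: the hallmark of a goal-directed, adaptive process.
assert abs(state - GOAL) < 5.0
```

The goal here is not a mysterious added ingredient; it is simply the state the feedback architecture robustly achieves and maintains, which is exactly the sense of "goal" developed in this subsection.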

      Thus, goal-directed behavior is a causal consequence of the architecture of adaptive systems. Furthermore, it is an observable feature of systems dynamics. It is the capacity of systems as a whole to utilize the causal capacities of their parts, and the ability to direct them toward attaining a robust and stable end state. That end state or goal is not a mysterious something; it is a complex and relational property—the property of being in a state that a goal-directed process can achieve and maintain. Therefore, goals are natural and observable (Walsh, 2015). Goals are thus not “mental states” and instead are naturally derived from a system's intrinsic dynamics.

      But what about the content of teleological explanations? We can determine the conditions under which they apply as explanations, but we must also account for the content of the explanation. There is a fundamental difference. Conditions for teleology can be understood as causal occurrences; however, content cannot be described in causal terms. Teleology is not about explaining causes, it is about explaining goals to which events are conducive (Walsh, 2015). Thus, for agency, we no longer need to rule out an entity based on being “created” or “developed” by something or someone else. The focus is on the entity itself.

      2.9.6. Teleological explanations and invariance

      To describe a non-mechanistic account of goals, two questions must be answered: (1) How can an event be explained by citing the ends to which it is simply a means; and (2) Why does this explanation not need to be explained through mechanisms of cause and effect?

To address the first question, goals can explain their means of achieving those goals in a way that is similar to how mechanisms explain their effects: by using counterfactual invariance relations. Invariance here does not mean the transmission of stability of form across generations or lineages; it is Woodwardian invariance. We can answer the second question by demonstrating that teleological explanations appeal to different invariance relations than mechanistic explanations do (Walsh, 2015).

Mechanistic explanations demonstrate how the activities and characteristics of (X) produce (Y) as an effect, including the specific properties related to that effect. Activities produce effects, and the two are related through the notion of counterfactual dependence: effects counterfactually depend on their mechanisms. These activities can be expressed in terms such as “binding”, “opening”, and “bending”. Woodward (2003) called this “relation invariance”:

“[T]he sorts of counterfactuals that matter for purposes of causation and explanation are just such counterfactuals that describe how the value of one variable would change under interventions that change the value of another. Thus, as a rough approximation, a necessary and sufficient condition for X to cause Y or to figure in a causal explanation of Y is that the value of Y would change under some intervention on X in some background circumstances”.

Thus, we can use this to explain how events as means are related to their goals. If there is a goal (X), the system produces event (A), which is conducive to (X) under conditions (Q); under different conditions (V), it would produce event (B), as (B) would be more conducive toward (X). If the system had another goal (Z), it would produce event (C), should (C) be more conducive toward attaining (Z). This is an invariance relation. It is the obverse of the relation of cause and effect. In other words, causes explain their effects because when the cause occurs, so too does the effect; if the cause does not occur, neither does the effect. We can likewise reason that a goal explains its means: if a system has a goal, the means arise, and if there is no goal, the means do not arise.
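The invariance relation just described can be sketched as a toy selection rule. The conduciveness scores below are invented purely for illustration; the point is only that which means arise varies systematically with the goal and the conditions, exactly as the goal (X)/(Z), conditions (Q)/(V), events (A)/(B)/(C) schema states.

```python
def conduciveness(action, goal, conditions):
    # Hypothetical scoring table: how strongly each action conduces to each
    # goal under each condition (values are purely illustrative).
    table = {
        ("X", "Q"): {"A": 0.9, "B": 0.2, "C": 0.1},
        ("X", "V"): {"A": 0.3, "B": 0.8, "C": 0.1},
        ("Z", "Q"): {"A": 0.1, "B": 0.2, "C": 0.9},
    }
    return table[(goal, conditions)][action]

def choose_means(goal, conditions, repertoire):
    # The system enlists whichever means is most conducive to its goal
    # under the prevailing conditions.
    return max(repertoire, key=lambda act: conduciveness(act, goal, conditions))

repertoire = ["A", "B", "C"]
assert choose_means("X", "Q", repertoire) == "A"  # under Q, A conduces to X
assert choose_means("X", "V", repertoire) == "B"  # change conditions: new means
assert choose_means("Z", "Q", repertoire) == "C"  # change goal: new means
```

Hold the goal fixed and vary conditions, and the means change; hold conditions fixed and vary the goal, and the means change again. That counterfactual pattern is what makes the goal explanatory of its means.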

On its own, however, invariance is insufficient. Explanations are description-dependent, and good explanations enhance understanding. Mechanistic explanations do not simply speak to cause and effect (relations); they also speak to the appropriateness or accuracy of that relation. The relation itself only exists if it is appropriate. We use concept descriptions such as “push”, “pull”, and “attract” to describe productive relations. These speak to the nature of the relation, and sometimes also explain the effect.

      For teleology, we use the concept descriptor of “conduce/ive”. So, the modal relations are (1) causes produce effects; and (2) means are conducive to their ends. Conducing is not causation. A means is only considered conducive to its ends if it robustly and reliably brings about the end ceteris paribus across a range of counterfactual circumstances. Hence, if the goal is (A) and event (X) causes (A), this does not mean that (X) conduces to (A) (Davidson, 1980). Thus, producing and conducing are descriptions of events, and they have different informational content. Producing specifies an earlier event (time is important here), which is the mechanism for the later event. This describes how the later event arose. Conducing specifies the why of an event—that it is conducive to realizing or maintaining a goal.

      A singular event can be explained in terms of mechanistic (causal) and teleological (conducive) relations. The former explains how things happen, while the latter explains why they happen, and thus they co-exist. They are complementary and non-competing. They are also complete—they do not need each other to explain their own coherence—the how's explain the how's and the why's explain the why's, and we do not need the how's to explain the why's. They both explain different information about events. However, for the completeness or coherence of an explanation as a whole, one needs both types of sub-explanations. Without both, there is an explanatory loss. Thus, both mechanism and purpose are important for explanations but not for independent systems themselves. The non-actual claim, for example, is a conflation between causes and explanations. In terms of the intentionality counter, intentions can be understood as goal-directed activity instead of mental representations. Intentional states are mental representations and are unnecessary for teleology (Walsh, 2015).

      In terms of the normativity counter, the goal need not be described as “good” to explain why systems ought to act in certain ways, which result in conducing to that goal. Systems will do what it takes to achieve the goal; there is no specific modality to be followed. The modality need not be prescribed, singular, or of a specific nature (such as good or valuable). What matters is appropriateness. There is thus no need for an evaluative state of affairs. Aristotelian teleology is not intentional, transcendent, or causation-based. It comes about because of the activities of goal-directed entities which are observable and occur in the natural world. This can be used for both predictive power and explanatory power in the same way that we use other robust regularities (Walsh, 2015).

      2.10. Theories of explanation: agents and objects 2.10.1. Natural agents

Natural agents are obtained from the natural purpose explanation. Agency, like purposiveness, is an observable property of a system's gross behavior. The system can pursue goals and respond to conditions of its environment and its internal constitution in ways that promote the attainment and maintenance of its goal states. Agency is observable in the sense that we see agents negotiating situations using their dynamics; we can see a range of robust and regular responses to conditions. If we understand a system's goal, we can understand its behavior. Agency is ecological: a system copes with its context and achieves its goals by responding to affordances as affordances. An ecological definition of agency includes three inter-definable factors: (1) goals, (2) affordances, and (3) repertoire (Walsh, 2015). Affordances are opportunities for, or impediments to, a goal; only goal-directed systems can experience their conditions as affordances. Systems can experience affordances only if they have repertoires: sets of possible responses that systems can enlist in pursuit of goals (in response to the system's experienced conditions). For repertoires to constitute a response to affordances, repertoires must be biased; systems must be able to exploit behavioral repertoires in response to conditions in ways that are conducive to the attainment or maintenance of their goal. The goal of the system is the state that it moves toward attaining or maintaining by directing behavioral repertoires in response to affordances conducing to that state. Repertoires come in degrees, and some agents have richer repertoires than others. Systems with wide ranges of repertoires can respond to more affordances and can pursue a wider range of goals. Ecological agency is not all-or-nothing: it comes in degrees. There is a continuum from the most basic agents capable of pursuing a narrow range of goals to those possessing greater repertoires of responses.
Cognitive systems tend to have large repertoires, with thinking forming part of their repertoire (Walsh, 2015). This is a model by which we can “grade” or rank the agent status of a system: a system has a greater agent-status grading if it demonstrates a greater repertoire (as variable responses to affordances) for maintaining or improving conduciveness toward a goal (see Parts E and F of the Supplementary material).
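One crude way to operationalize such a grading is sketched below. The metric is hypothetical (it is not a scheme proposed in the cited sources): it scores an agent by the breadth of its repertoire, counting the alternative responses available for each affordance it can register, so that a richer repertoire yields a higher agent-status grade.

```python
def agency_grade(agent) -> int:
    # Hypothetical grading: total number of alternative responses the agent
    # can deploy, summed across the affordances it can respond to.
    return sum(len(responses) for responses in agent["repertoire"].values())

# Two illustrative agents: a minimal goal-directed system and a richer one.
thermostat = {"repertoire": {
    "too_cold": ["heat"],
    "too_hot":  ["cool"],
}}
forager = {"repertoire": {
    "food_near": ["approach", "cache"],
    "predator":  ["flee", "freeze", "hide"],
    "too_cold":  ["seek_shelter", "huddle"],
}}

# Agency comes in degrees: the richer repertoire earns the higher grade.
assert agency_grade(forager) > agency_grade(thermostat)
```

A real grading would also have to weight how conducive each response is to the goal, not merely count responses; the sketch captures only the continuum-of-repertoires idea.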

      2.10.2. Object and agent theories

There is a difference between object and agent theories. The object theories that we use today aim to describe and explain the dynamics of objects (Walsh, 2015). To construct these theories, we create a space of possible alternatives for those objects, known as a “state space”, and then look for principles that may account for the various possible trajectories through this state space. The objects in these domains are subject to forces, laws, and initial conditions. Lee Smolin dubs this the “Newtonian paradigm” (Smolin, 2013). It describes system dynamics by the answers to two questions: (1) What potential configurations does the system have? (2) In each configuration, what forces is the system subject to (Smolin, 2013)? In this paradigm, the laws, forces, and initial conditions are external to, and exist separately from, the objects. Object theories are transcendental, and they have an explanatory asymmetry. Transcendental means that the principles that govern the dynamics of the objects in the theory's domain are not part of the domain itself. They do not evolve as the system does; the laws of nature and the space of possibilities through which the objects move remain constant as the objects change (Walsh, 2015). This allows for the explanation of the changing state of a studied system by appealing to unchanging laws.

      2.10.3. Action theories

The Cartesian view holds that agents' thoughts, beliefs, and desires explain their actions only if they cause those actions (Davidson, 1963). On this reading, contemporary action theory implies that thoughts are mental entities realized as internal physiological mechanisms, and that these mechanisms combine with other internal mechanisms to effect actions by their intrinsic causal properties (Fodor, 1987). Actions are outputs, such as the results of an internal process of computation, arising from the mechanical interactions of the internal states of the agent. The purposes of agents and their dynamics do not appear in the explanations of actions. The Cartesian model (thought and action) posits that agents are akin to “middlemen” (Walsh, 2015), since they are the connection between the causal activities of their psychological states and the environmental demands that they experience. Haugeland (1998) described the notions of intimacy and commingling. His conception opposed the Cartesian view that the mind is entirely internal to the agent and the environment entirely external to the mind, with the two communicating through perception (environment to mind) and action (mind to environment). Haugeland (1998) argued that the mind plays an active role in constituting the environmental conditions to which it responds. Intimacy, in this explanation, describes the mind as embodied and embedded in the world. This is not just an interdependence but a “commingling” or integralness of mind, body, and world, which undermines any separation between them.

      2.10.4. The disappearing agent

      The standard action theory approach created the problem of the missing agent, a consequence of its underlying methodological commitments (Velleman, 1992; Hornsby, 1997), which arise from the Cartesian precepts already described. It ignores the fact that actions do not happen to agents: they are performed by them. Cartesian mechanisms of action miss this point: an action is something produced by an agent for a reason. A proper account of action explains the doings of agents by showing them to be reasonable or rationally justified in light of the agent's purposes. The agent's goals explain the appropriateness or conduciveness of the actions undertaken. Viewing actions merely as causal consequences of internal states misses the fact that actions are purposive activities directed at goals. The Cartesian object theory treats agents as objects whose actions are explained and caused by extraneous forces acting on them. It does not explain actions as products of agency, but as effects of extrinsic causes: external environments and internal computation and representation. It thus excludes an agency that is both real and natural. The same tension appears in the understanding of "rational action". Action theory is divided between two conceptions of humans: (1) as objects in the natural world, subject to external causal influences; and (2) as agents able to initiate actions that are guided by reasons (Walsh, 2015).

      Merleau-Ponty explains behavior as commencing with an active organismal agent that is problem-solving and goal-pursuing (Matthews, 2002). The agent responds to conditions as meaningful, as either obstacles or opportunities. The goals and capacities of the agent give the conditions their importance. Actions are thus responses initiated by agents to sets of affordances, and these affordances are largely of the agent's making. Agents also co-evolve with these affordances in line with their actions and goals. Agent theories of action view actions as events generated by agents in pursuit of their goals. These purposes explain and justify the actions, not the other way around. Adaptive evolution is thus a phenomenon of agency. An agent theory of this sort provides a proper conceptual underpinning for agent status and agency in combination with natural purpose and goals (see Part F of the Supplementary material).

      2.10.5. Autonomy

      Agents create degrees of freedom for themselves by constituting their affordances through self-maintaining and self-regulating activities. They determine which environmental conditions are important, and they exploit the opportunities that the environment presents. This is a stronger account of autonomy. The integral processes in autonomous systems (1) depend continually on one another for their formation and realization as a network; (2) constitute a unity (converge) in their domain of existence; and (3) govern areas of exchange with the environment (Thompson, 2007). Autonomous agents can "make sense" of circumstances. Making sense means detecting and using the features of one's context, which in turn also constitutes those features and that context. This is the capacity of the agent to mobilize its resources in a way that supports the pursuit of its goals, by exploiting opportunities or reducing impediments. Agents make features significant in the way they are detected and responded to in pursuit of their goals. In this way, autonomous agents construct and constitute the conditions to which they respond. There is a reciprocity of form and affordance: as form evolves, so do affordances (Walsh, 2015). As mentioned above, this is related to the repertoire of capabilities. Thus, systems as agents that demonstrate a greater ability to identify, interpret, utilize, and implement features as affordances in pursuit of their goals would be graded higher [see Part C of the Supplementary material for a supportive moral perspective on AI and agency and the supportive novel Technological Approach to Mind Everywhere (TAME) framing].

      3. Constructing the AI agent

      3.1. Write-re-write systems: semantic closure

      Semantic closure refers to a system's capacity to enclose meaning within itself. In biology, for example, the encoding mechanism between a string of DNA and messenger RNA (mRNA) has itself evolved, altering the meaning of DNA by rewriting the genetic code (Clark et al., 2017). The most important factors related to this concept in biology are the ribosome, transfer RNA (tRNA), DNA, and mRNA (Clark et al., 2017). The tRNA is involved in expression, which defines the meaning of DNA by mapping each triplet of DNA bases (a codon) to one amino acid. Changing that mapping means rewriting the genetic code; hence, the meaning of the genome can itself be altered (Clark et al., 2017). Rewriting, in biology, is the process of moving from one semantically closed state to another.
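      The rewriting idea can be made concrete with a toy sketch (my own Python illustration, not drawn from Clark et al., 2017; the three-codon table and genome string are invented): translating the same genome under a rewritten mapping changes its "meaning" without touching the genome itself.

```python
# Toy illustration: rewriting the codon table changes the "meaning"
# (the expressed protein) of an unchanged genome string.

CODON_TABLE = {"ATG": "Met", "GGC": "Gly", "TGC": "Cys"}

def translate(genome: str, table: dict) -> list:
    """Map each 3-base codon of the genome to an amino acid."""
    return [table[genome[i:i + 3]] for i in range(0, len(genome), 3)]

genome = "ATGGGCTGC"
original = translate(genome, CODON_TABLE)       # ['Met', 'Gly', 'Cys']

# Rewrite the code: remap GGC to Ala. The genome itself is untouched,
# yet its interpretation changes.
rewritten_table = dict(CODON_TABLE, GGC="Ala")
rewritten = translate(genome, rewritten_table)  # ['Met', 'Ala', 'Cys']
```

The point of the sketch is that meaning lives in the mapping, not in the string: the same genome is re-interpreted when the expression machinery changes.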

      It is important, then, to understand how meaning originated for translating proteins and how it has been altered through evolution. This is an ontogenetic or bottom-up approach (Clark et al., 2017). For this process of moving from one semantically closed state to another, there must be a supporting structure. Von Neumann was the first to describe what an artificial architecture enabling semantic closure would look like; his constructor theory birthed the modern form of universal constructor architecture (Clark et al., 2017). Some of these models have highlighted the necessity of redundancy for maintaining stability in the presence of mutations. In the proposed theorem of chemical construction theory, the authors also highlight the self-referential nature of the genome: it contains descriptions of all the other machines in the system, and hence it is its own description (Clark et al., 2017). In their experiments, the authors demonstrated how alterations in the expressors can lead to novel interpretations of the genome, which in turn give rise to pleiotropic effects. The meaning of the genome is thereby changed, and this new interpretation extends to other molecules, not just the expressor. They also demonstrated that it is not only the genetic material that evolves but also the mechanisms of copying. Each string can play different functions in many different relations or reactions. Control is in this way distributed throughout the system (there is no explicit or centralized control mechanism). The authors also postulated that the ribosomes may be the biological equivalent of any string that imposes meaning on the system (Clark et al., 2017).

      Finally, the authors proposed something interesting: there were emergent or transient changes that were expressed but did not appear in genetic records. These arise through inaccurate expression. Their results demonstrate that these "errors", while not reflected in the genome, are reflected in heritable changes in expression (they are covert). Errors in expression in biology are generally considered deleterious or non-heritable, since only genomic information is thought to be heritable (Clark et al., 2017). The authors also provide evidence for misreading errors of this nature, including the streptomycin-dependent phenotypes of E. coli; errors in the ribosomal interpretation of DNA have been demonstrated previously (Clark et al., 2017). In this way, such errors can change meaning. The authors stated that an expressor can make a consistent interpretation of a genome (meaning that it leads to its own expression). By interpreting its own genetic material, the expressor obtains meaning through self-reference. From this, we can use semantic information as the central measure for an account of personhood or agency. Importantly, it is not tied to a biological brain, and systems can themselves enclose and change their own semantics. Self-reference in this light provides another framing for personhood and agency. This study also provides backing for "emergence".

      3.2. A semantics model for personhood, agent, and agency

      3.2.1. Semantics

      Historically, semantic information was contrasted with syntactic information. Syntactic information quantifies the kinds of statistical correlations between two systems without giving meaning to those correlations (Kolchinsky and Wolpert, 2018). This is used predominantly with Shannon's information theory, which is a measure of the reduction of statistical uncertainty between two system states which can differ in time.
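      For concreteness, a minimal plug-in estimator of mutual information over paired samples (my own sketch, not from the cited study) shows that the measure is purely statistical: it registers correlation between two systems without any reference to what the correlation means.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits from paired samples. Purely syntactic: it
    quantifies statistical dependence, not meaning."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Perfectly correlated binary states share 1 bit of syntactic information,
# regardless of whether that bit matters to either system.
bits = mutual_information([0, 1, 0, 1], [0, 1, 0, 1])  # 1.0
```

Independent samples would score zero; nothing in the calculation distinguishes a life-or-death signal from noise, which is exactly the gap the semantic account targets.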

      A study (going forward, "the study" or "this study") has distinguished between syntactic and semantic information in systems (Kolchinsky and Wolpert, 2018). It attempted to create a formal definition of semantic information that applies to both "living" and "non-living" things (any physical system, such as a rock or a cell). Semantic information was defined as information that a physical system has about its environment which is causally necessary for the system to maintain its existence over time. The qualitative aspect of semantic information relates to the intrinsic dynamics of systems and their "environments". The quantitative tools used to calculate semantics are information theory and non-equilibrium statistical mechanics.

      Importantly, the study distinguished between "meaningful bits" and "meaningless bits". This also allowed for a differentiation between sub-concepts of semantic information such as "value of information", "semantic content", and "agency." Semantic information is then defined as information that enables systems to achieve their goals (maintaining a low entropy state). However, this is not an exogenous approach (one in which goals are derived from or measured by "external" sources). Any "meaning" obtained from exogenous approaches is meaningful (in terms of goals) from the perspective of the observer or scientist, not the system itself. The difference between this study and others is that the others offer standard teleo-semantic approaches in which goals are understood in terms of evolutionary successes such as fitness. These standard approaches suit systems that change under selection; they do not describe "non-living" or synthetic systems (Kolchinsky and Wolpert, 2018). They also tend to be etiological, in that they are based on the system's past history. The approach in this study instead grounds semantic information solely in the intrinsic dynamics of a system in an environment, without regard to its past or origin. It therefore presents an attractive model for an account of agency that includes AI systems. This is an autonomous-agent model, which requires that a non-equilibrium agent maintain its own existence in an environment. This is active self-maintenance: agents use information about their environments to achieve their goals, and hence this information is intrinsically meaningful for them (Kolchinsky and Wolpert, 2018). This perspective also applies to robots and "non-living" systems. The intrinsic goal is neither obtained from an exogenous source nor based on past histories or origins.
Importantly, semantic information is derived from the mutual information between the system and its environment (within the initial distribution, which is defined as stored semantic information).

      3.2.2. Viability and value

      The study coins the term "viability function". Viability functions statistically quantify a system's degree of existence at any given time (they are real-valued functions of the system's state). For this, negative Shannon entropy is used (it provides an upper bound on the probability that the system occupies any small set of viable states). Semantic information now means the information exchanged between the system and its environment which causally contributes to the system's continued existence, measured by the maintenance of the value of the viability function. To quantify causal contributions, the study used a counterfactual intervened distribution in which the syntactic information between the system and its environment was scrambled. The value of information was defined as the difference between the system's viability at a later time under the actual dynamics and under the intervention. A positive difference means that some syntactic information between the system and its environment plays a causal role in maintaining its existence. A negative difference means that the syntactic information decreases the system's ability to exist.
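      The counterfactual procedure can be sketched in a few lines (a toy of my own construction, not the study's actual formalism; the "safe site" dynamics and the four-state world are invented for illustration): viability is proxied by negative Shannon entropy, and the value of information is the viability gap between the actual and the scrambled run.

```python
import math
import random
from collections import Counter

random.seed(0)  # make the toy run deterministic

SAFE_SITE = 2  # hypothetical viable location the system must reach

def viability(states):
    """Viability proxy: negative Shannon entropy (bits) of the system's
    state distribution; concentration on few states = high viability."""
    n = len(states)
    return sum((c / n) * math.log2(c / n) for c in Counter(states).values())

def step(readings):
    """Toy dynamics: a correct environmental reading guides the system
    to the safe site; a wrong reading strands it at a random site."""
    return [r if r == SAFE_SITE else random.randrange(4) for r in readings]

env_signal = [SAFE_SITE] * 8                          # informative environment
scrambled = [random.randrange(4) for _ in range(8)]   # counterfactual intervention

v_actual = viability(step(env_signal))      # all mass on one state: 0.0
v_scrambled = viability(step(scrambled))    # spread over states: negative
value_of_information = v_actual - v_scrambled
```

A positive `value_of_information` indicates that the system-environment correlation was doing causal work for the system's persistence, mirroring the study's definition.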

      To describe the value of information, the study gives the example of a rock. A rock has very slow dynamics and can thus remain in a low entropy state for long periods. If information is scrambled by swapping rocks from their current environments into different ones, the intervention makes little difference to the rocks. Doing the same with a hurricane (my modified explanation) is different: a hurricane requires specific conditions for its maintenance and so has a greater set of parameters to satisfy. If those parameters are not met, it dissipates (its viability decreases), and thus it holds some important semantic information. Semantic information therefore matters more for hurricanes than for rocks. Likewise, an organism placed in a new environment may be unable to find its own food; hence organisms place a higher value still on information.

      3.2.3. Viability, syntactic, and semantic information

      Non-equilibrium systems are those in which the non-equilibrium status is maintained by the ongoing exchange of information between sub-systems. An example is the "feedback-control" process, in which one subsystem acquires information about another subsystem and then uses this information to apply controls that keep itself or the other system out of equilibrium (like Maxwell's demon). Information-powered non-equilibrium states differ from the traditional non-equilibrium systems considered in statistical physics, which are driven by work reservoirs with control protocols or are coupled to thermodynamic reservoirs (Kolchinsky and Wolpert, 2018). The reduction of entropy thus carries costs in the expenditure of energy as heat. Within the thermodynamics of information, Landauer's principle states that any process that reduces a system's entropy by some number of bits must release a corresponding amount of energy as heat. Heat generation is also necessary for the acquisition of syntactic information. Viability is connected to this reduction of entropy through semantic information acquisition. Semantic efficiency in the study quantifies how far the system is "tuned" to possess only the syntactic information relevant to maintaining its own existence. Semantic efficiency is related to the thermodynamic multiplier, which is the measure of the "bang-for-buck" of information (below). This simply asks, "what types of information would carry more benefit than other types?" Systems with positive values of information and higher semantic efficiency tend to have a larger thermodynamic multiplier (Kolchinsky and Wolpert, 2018). Stored semantic information is not that which is acquired during dynamic exchanges with environments. Rather, it is the mutual information between systems and environments that is also causally responsible for maintaining viability.
      It is important to note that maintaining a low entropy state is not the same as remaining within one specific set of viable states. Systems need not maintain the same "identity" over time to maintain a low entropy state: identities can change while low entropy states are maintained. Hence, a specific identity profile (such as a human one) is unnecessary for an account of agency.
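      The Landauer bound itself is easy to state numerically (standard physics, independent of the study; the room-temperature value of 300 K is an assumption of the example): erasing, or otherwise reducing entropy by, n bits dissipates at least n·k_B·T·ln 2 joules as heat.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (CODATA exact value)

def landauer_heat(bits: float, temperature_k: float = 300.0) -> float:
    """Minimum heat (joules) released when a system's entropy is
    reduced by `bits` bits at the given temperature."""
    return bits * K_B * temperature_k * math.log(2)

heat_per_bit = landauer_heat(1.0)  # ~2.87e-21 J at room temperature
```

Tiny per bit, but it is a strict floor: any agent that acquires and discards syntactic information pays this thermodynamic cost, which is why tuning toward only the meaningful bits is energetically favorable.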

      Observed semantic information in the study refers to information affected by the dynamic interventions that scramble the transfer of entropy from the environment to the agent. It identifies semantic information acquired through dynamic interactions between systems and environments (rather than mutual or stored information). The syntactic information in the study is scrambled to obtain the semantic information; this is how the meaningless and the meaningful are separated (the optimal intervention determines this). Any information that can be scrambled without affecting viability is meaningless, and that which must be preserved to preserve viability is meaningful. Both observed and stored information are necessary for preserving viability; however, observed information concerns the dynamic interactions between systems and their environments. The semantic efficiency ratio is the ratio of the stored semantic information to the overall syntactic information (Kolchinsky and Wolpert, 2018).
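      The separation of meaningful from meaningless bits, and the resulting efficiency ratio, can be shown schematically (the channels and intervention outcomes below are entirely hypothetical, invented to illustrate the bookkeeping rather than taken from the study):

```python
# Hypothetical setup: two environment channels each share 1 bit of
# syntactic information with the system, but only one is causally
# needed for viability.
syntactic_bits = {"food_location": 1.0, "background_noise": 1.0}

def viability_drop_when_scrambled(channel: str) -> float:
    """Assumed outcome of the counterfactual intervention: scrambling
    the food channel hurts viability; scrambling noise does nothing."""
    return 1.0 if channel == "food_location" else 0.0

# Meaningful bits are those whose scrambling degrades viability.
stored_semantic = sum(bits for ch, bits in syntactic_bits.items()
                      if viability_drop_when_scrambled(ch) > 0)
total_syntactic = sum(syntactic_bits.values())
semantic_efficiency = stored_semantic / total_syntactic  # 0.5
```

A perfectly tuned system would carry no "background noise" correlations at all and score an efficiency of 1.0.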

      Systems can have a non-unique optimal intervention, namely, multiple variable and redundant sources of semantic information which are used to maintain viability (like relating to different food sources, see Kolchinsky and Wolpert, 2018). This is important when considering the different dimensions of society in which systems are integrated. Relevant reservoirs depending on the system, its context, and its function can include sexual reservoirs, ethical/behavioral reservoirs, different knowledge domain reservoirs, socio-political reservoirs, and socio-emotional reservoirs. This presents a paradigm and mechanism to determine the status and inclusion of certain systems in certain contexts by assessing their suitability to participate adequately in that context. The thermodynamic multiplier provides a means to determine suitability.

      3.2.4. The thermodynamic multiplier

      The thermodynamic multiplier is the benefit-cost ratio of stored semantic (mutual) information: it provides a means of comparing the ability of different systems to use information to maintain their viability (Kolchinsky and Wolpert, 2018). Stored semantic information thus gains its status from its benefit outweighing its cost. If the value of information is positive, low semantic efficiency implies a low thermodynamic multiplier. Therefore, "paying attention to the right information", in the sense of semantic efficiency, is correlated with thermodynamic efficiency. The multiplier measures the thermodynamic cost of obtaining new mutual information against the viability benefit obtained from that acquisition.
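      As a numerical sketch (my own; the benefit figure is arbitrary and the Landauer cost stands in for the acquisition cost), the multiplier rewards getting the same viability benefit from fewer, more relevant bits:

```python
import math

K_B, T = 1.380649e-23, 300.0  # Boltzmann constant (J/K), assumed temperature

def thermodynamic_multiplier(viability_benefit_joules: float,
                             bits_acquired: float) -> float:
    """Benefit-to-cost ratio: viability benefit (expressed as free
    energy, in joules) over the Landauer cost of the acquired bits."""
    cost = bits_acquired * K_B * T * math.log(2)
    return viability_benefit_joules / cost

# Same benefit from 1 meaningful bit vs. 10 mostly-irrelevant bits:
# the semantically efficient acquisition has the larger multiplier.
efficient = thermodynamic_multiplier(1e-19, 1.0)
wasteful = thermodynamic_multiplier(1e-19, 10.0)
```

This is the "bang-for-buck" reading: systems that pay attention to the right information get more viability per joule of informational heat.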

      3.2.5. Transfer entropy and semantic efficiency

      Observed semantic information can be acquired in dynamic interactions and measured using transfer entropy, a widely used and well-understood measure of information flow. The transfer entropy from the environment to the system is not necessarily the same as the flow from the system to the environment. Observed semantic information describes dynamic actions and decisions in which scrambling the information flowing from the environment to the organism impacts viability. For example, suppose Jack and Jill go up the hill, with Jack leaving behind a trail of breadcrumbs to lead them back home. If at some point during their adventure a wind blows away those breadcrumbs, they will not know their way home, affecting their ability to survive or feed themselves. The transfer entropy here concerns the breadcrumbs, which carry observed semantic information because, as objects, they embody an informational interaction between a system (as agent) and the environment. The value of the transfer entropy is thus the viability value at a specific time before scrambling versus the viability value after scrambling; this value is then known as semantic efficiency (Kolchinsky and Wolpert, 2018).
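      A minimal plug-in estimator of transfer entropy (my own sketch, not the study's code) also exhibits the directional asymmetry just described: in a toy series where the system copies the environment with one step of lag, information flows from environment to system but not the reverse.

```python
import math
from collections import Counter

def transfer_entropy(source, target):
    """TE(source -> target) in bits: how much the source's current state
    improves prediction of the target's next state beyond the target's
    own history. The measure is directional (asymmetric)."""
    triples = list(zip(target[1:], target[:-1], source[:-1]))
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((y, z) for _, y, z in triples)
    p_xy = Counter((x, y) for x, y, _ in triples)
    p_y = Counter(y for _, y, _ in triples)
    return sum((c / n) * math.log2((c / p_yz[(y, z)]) / (p_xy[(x, y)] / p_y[y]))
               for (x, y, z), c in p_xyz.items())

# The system tracks the environment with a one-step lag (a crude
# "breadcrumb": the environment's state determines the system's next state).
env = [0, 1, 0, 1, 0, 1, 0, 1]
system = [0] + env[:-1]

te_forward = transfer_entropy(env, system)   # positive: env drives system
te_backward = transfer_entropy(system, env)  # zero: env ignores the system
```

Scrambling the environment series would destroy `te_forward`, which is the operation the study uses to expose observed semantic information.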

      3.2.6. The agent

      An autonomous agent (and autonomous agency) in this system would be a physical system that has a large measure of semantic information (Kolchinsky and Wolpert, 2018). One can identify autonomous agents by finding timescales and system/environment decompositions that maximize measures of semantic information. This, in turn, depends on the thermodynamic multipliers, the transfer entropy, and the amounts of semantic information. It is important to remember, however, that semantic information can have a negative viability value: mistaken or misrepresented information may be used in a way that harms the agent's viability. The study also highlights that semantic information requires an asymmetrical measure (unlike syntactic mutual information), because this information concerns the viability of the system, not of the environment. The framework also does not require decomposition into separate degrees of freedom (such as sensors, effectors, membranes, interior, exterior, brain, or body). It is not about internal representations but about the intrinsic dynamics of the system and its environment. This can also be used to create an account of "life".
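      Operationally, the identification step can be sketched as a search over candidate decompositions and timescales (the candidates and their scores below are entirely hypothetical; in practice each score would come from the scrambling procedure described above):

```python
# Hypothetical pre-computed semantic-information scores (bits) for
# candidate system/environment decompositions at several timescales.
candidates = {
    ("cell_vs_medium", "seconds"): 2.1,
    ("cell_vs_medium", "hours"): 3.4,
    ("rock_vs_ground", "seconds"): 0.0,
    ("rock_vs_ground", "hours"): 0.1,
}

# The agent is located where semantic information is maximized: here the
# cell at the hours timescale, not the rock at any timescale.
best = max(candidates, key=candidates.get)
```

The same search doubles as a grading mechanism: systems whose best decomposition scores higher would rank higher for agency in a given domain.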

      4. Conclusion

      My methodology, which relies on observable, testable results with predictive value, is more accurate and enables a method of grading or ranking systems as agents according to domain suitability. It relies on the use of semantic information and its relationship with viability. To summarize: viability (reducing or maintaining a low entropy state) is the ability of a system to continue to exist, and it is measured by the viability function. Changes in this function are determined through counterfactual dependences obtained by scrambling syntactic information. This enables identification of the more "valuable" semantic information as causally contributing to the system's viability. There are two kinds of semantic information, both of which affect the viability function: (1) stored and (2) observed. Stored semantic information is the mutual information between systems and environments, while observed semantic information is acquired through dynamic exchanges between systems and environments; one obtains it by scrambling the transfer entropy. Observed semantic information is necessary for determining actions and agency, since it describes dynamic "active" interactions. Furthermore, survival in this account is de-linked from "biological" systems and is measured by the maintenance of a system's viability based on its own intrinsic dynamics. This presents an attractive way to create a general and invariant account of personhood and agency. I also presented an account of what constitutes rarity, which provides a further attractive way to grade "emergent" informational content or properties.

      This account, rooted in Kantianism, recognizes the explanatory and informational problems in alternative accounts and provides a more accurate framework. Legal systems and ethics discourse should take note, as the usual ways in which these conversations are entertained are doomed: they tend to rely on poorly understood, ephemeral notions such as "consciousness". Instead, systems should be evaluated according to their own intrinsic properties. This enables a better approach to determining suitability (agency and personhood) because it considers agents within their own informational paradigm, not relative to another agent's informational paradigm. In this way, intrinsic bias becomes a strength when it is considered from the perspective of the system itself.

      Data availability statement

      Publicly available datasets were analyzed in this study. This data can be found here: https://osf.io/evna6.

      Author contributions

      MN: Conceptualization, Investigation, Methodology, Project administration, Resources, Visualization, Writing—original draft.

      Funding

      The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This article was supported by the U.S. National Institute of Mental Health and the U.S. National Institutes of Health (award number U01MH127690) under the Harnessing Data Science for Health Discovery and Innovation in Africa (DS-I Africa) program. In addition, support was also obtained from the National Research Foundation (NRF) under the Doctoral Innovation Scholarship (award MND190619448844).

      Acknowledgments

      The author would like to acknowledge Amy Gooden for her technical editing of this article and Donrich Thaldar for acting as a critical reader and for his helpful comments about earlier drafts of this article.

      Conflict of interest

      The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

      Publisher's note

      All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

      Author disclaimer

      The content of this article is solely my responsibility and does not necessarily represent the official views of the U.S. National Institute of Mental Health or the U.S. National Institutes of Health.

      Supplementary material

      The Supplementary Material for this article can be found online at: /articles/10.3389/fpsyg.2023.1273470/full#supplementary-material

      References

      Barandiaran X. E., Di Paolo E., Rohde M. (2009). Defining agency: individuality, normativity, asymmetry and spatio-temporality in action. J. Adapt. Behav. 17, 1–13. doi: 10.1177/1059712309343819
      Barnes J. (1991). The Complete Works of Aristotle. Princeton: Princeton University Press.
      Bedau M. (1991). Can biological teleology be naturalized? J. Philos. 88, 647–655. doi: 10.5840/jphil1991881111
      Bedau M. (1998). "Where's the good in teleology?" in Nature's Purposes: Analyses of Function and Design in Biology, eds. Allen C., Bekoff M., Lauder G. (Cambridge: MIT Press), 261–291.
      Cartwright N. (2007). Causal Powers: What Are They? Why Do We Need Them? What Can and Cannot be Done With Them? London: Contingency and Dissent in Science Project.
      Clark E. B., Hickinbotham S. J., Stepney S. (2017). Semantic closure demonstrated by the evolution of a universal constructor architecture in an artificial chemistry. J. R. Soc. Interface 14, 1–12. doi: 10.1098/rsif.2016.1033
      Cornell University (2022). "Eye expressions offer a glimpse into the evolution of emotion," in ScienceDaily. Available online at: https://www.sciencedaily.com/releases/2017/04/170417182822.htm (accessed March 6, 2023).
      Craver C. (2007a). Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience. Oxford: Oxford University Press.
      Craver C. (2007b). Constitutive explanatory relevance. J. Philos. Research 32, 3–20. doi: 10.5840/jpr20073241
      Craver C., Bechtel W. (2007). Top-down causation without top-down causes. Biol. Philos. 22, 547–563. doi: 10.1007/s10539-006-9028-8
      Cummins R. (1975). Functional analysis. J. Philos. 72, 741–765. doi: 10.2307/2024640
      Cummins R. (1983). The Nature of Psychological Explanation. Cambridge: MIT Press.
      Cummins R. (2000). "'How does it work?' versus 'what are the laws?': Two conceptions of psychological explanation," in Explanation and Cognition, eds. Keil F., Wilson R. (Cambridge: MIT Press), 117–144.
      Davidson D. (1963). Actions, reasons and causes. J. Philos. 60, 685–700. doi: 10.2307/2023177
      Davidson D. (1980). "Agency," in Essays on Actions and Events, ed. Davidson D. (Oxford: Clarendon Press), 43–62.
      Di Paolo E. (2005). Autopoiesis, adaptivity, teleology, agency. Phenom. Cogn. Sci. 4, 429–452. doi: 10.1007/s11097-005-9002-y
      Dickerson K., Gerhardstein P., Moser A. (2017). The role of the human mirror neuron system in supporting communication in a digital world. Front. Psychol. 8, 1–6. doi: 10.3389/fpsyg.2017.00698
      Encyclopedia Britannica (2023). Mill's Methods. Available online at: https://www.britannica.com/topic/Mills-methods (accessed January 6, 2023).
      Fodor J. (1987). Psychosemantics: The Problem of Meaning in Philosophy of Mind. Cambridge: MIT Press.
      Ganeri J. (2011). Emergentisms, ancient and modern. Mind 120, 671–703. doi: 10.1093/mind/fzr038
      Haugeland J. (1998). Having Thought: Essays in the Metaphysics of Mind. Cambridge: Harvard University Press.
      Hidalgo C. (2015). Why Information Grows: The Evolution of Order, From Atoms to Economies. New York: Basic Books.
      Hornsby J. (1997). Simple Mindedness: A Defense of Naive Naturalism in the Philosophy of Mind. Cambridge: Harvard University Press.
      Kampourakis K. (2020). Students' "teleological misconceptions" in evolution education: why the underlying design stance, not teleology per se, is the problem. Evo. Edu. Outreach 13, 1–12. doi: 10.1186/s12052-019-0116-z
      Kant I. (2000). Critique of the Power of Judgment, translated by Guyer P. and Matthews E. Cambridge: Cambridge University Press.
      Kant I., Bernard J. H. (ed). (1790). Critique of Judgment. New York, NY: Barnes & Noble.
      Keil F. C. (2006). Explanation and understanding. Annu. Rev. Psychol. 57, 227–254. doi: 10.1146/annurev.psych.57.102904.190100
      Kim J. (2006). Emergence: core ideas and issues. Synthese 151, 547–559. doi: 10.1007/s11229-006-9025-0
      Kolchinsky A., Wolpert D. H. (2018). Semantic information, autonomous agency, and nonequilibrium statistical physics. Interface Focus 8, 20180041. doi: 10.1098/rsfs.2018.0041
      Maher P. (2006). Invariance and Laws. Available online at: http://patrick.maher1.net/471/lectures/wood9.pdf (accessed April 4, 2023).
      Matthews E. (2002). The Philosophy of Merleau-Ponty. McLean, VA: Acumen.
      Mertes S., Huber T., Weitz K., Heimerl A., André E. (2022). GANterfactual–Counterfactual explanations for medical non-experts using generative adversarial learning. Front. Artif. Intell. 5, 1–19. doi: 10.3389/frai.2022.825565
      Monod J. (1971). Chance and Necessity: An Essay on the Metaphysics of Life. New York: Vintage Books.
      Mukherjee S., Asnani H., Kannan S. (2019). CCMI: Classifier based conditional mutual information estimation. arXiv [Preprint]. Available online at: https://arxiv.org/abs/1906.01824 (accessed October 6, 2023).
      Prior A., Geffet M. (2003). "Mutual information and semantic similarity as predictors of word association strength: Modulation by association type and semantic relation," in Proceedings of Eurocogsci 03, eds. Schmalhofer F., Young R. M., Katz G., Graham K. (New York: Routledge).
      Raninen M. (2023). "Four ways of knowing – a semiotic interpretation," in PhiloSign. Available online at: https://philosign.substack.com/p/four-ways-of-knowing-a-semiotic-interpretation (accessed April 30, 2023).
      Redish J. (2019). "Huygens' principle," in Nexus Physics. Available online at: https://www.compadre.org/nexusph/course/Huygens'_principle (accessed April 5, 2023).
      Rosen G. (2010). "Metaphysical dependence: Grounding and reduction," in Modality: Metaphysics, Logic, and Epistemology, eds. Hale B., Hoffmann A. (Oxford: Oxford University Press), 109–135.
      Salazar D. (2022). Correlation is not Correlation. Available online at: https://david-salazar.github.io/posts/fat-vs-thin-tails/2020-05-22-correlation-is-not-correlation.html (accessed April 28, 2023).
      Salmon W. (1984). Scientific Explanation and the Causal Structure of the World. New Jersey: Princeton University Press.
      Sapolsky R. M. (2017). Behave: The Biology of Humans at Our Best and Worst. New York: Penguin Books.
      Schlosser G. (2002). Modularity and the units of evolution. Theory Biosci. 121, 1–80. doi: 10.1078/1431-7613-00049
      Schrödinger E. (1944). What is Life? Cambridge: Cambridge University Press.
      Smolin L. (2013). Time Reborn: From the Crisis in Physics to the Future of the Universe. Boston: Houghton Mifflin Harcourt.
      Sommerhoff G. (1950). Systems Biology. Oxford: Oxford University Press.
      Stankovski T., Ticcinelli V., McClintock P. V. E., Stefanovska A. (2015). Coupling functions in networks of oscillators. New J. Phys. 17, 035002. doi: 10.1088/1367-2630/17/3/035002
      Strogatz S. H. (2000). From Kuramoto to Crawford: Exploring the onset of synchronization in populations of coupled oscillators. Phys. D: Nonlinear Phenom. 143, 1–20. doi: 10.1016/S0167-2789(00)00094-4
      Taleb N. N. (2007). The Black Swan: The Impact of the Highly Improbable. New York: Penguin Books.
      Taleb N. N. (2020). Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications (The Technical Incerto Collection). Pittsburgh: STEM Academic Press.
      Taleb N. N., Zalloua P., Elbassioni K., Henschel A., Platt D. (2023). Informational rescaling of PCA maps with application to genetic distance. arXiv [Preprint]. Available online at: https://arxiv.org/abs/2303.12654 (accessed October 6, 2023).
      Thomas Jefferson National Accelerator Facility (2023). "Charming experiment finds gluon mass in the proton: Experimental determination of the proton's gluonic gravitational form factors may have revealed part of proton's hidden mass," in ScienceDaily. Available online at: www.sciencedaily.com/releases/2023/03/230330102332.htm (accessed April 5, 2023).
      Thompson E. (2007). Mind in Life: Biology, Phenomenology and the Sciences of Mind. Cambridge: Harvard University Press.
      Velleman D. (1992). What happens when someone acts? Mind 101, 461–481. doi: 10.1093/mind/101.403.461
      Vopson M., Lepadatu S. (2022). Second law of information dynamics. AIP Adv. 12, 1–7. doi: 10.1063/5.0100358
      Walsh D. M. (2015). Organisms, Agency, and Evolution. Cambridge: Cambridge University Press. doi: 10.1017/CBO9781316402719
      Wiener N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. Hoboken: Wiley.
      Weisstein E. W. (2023). "Covariance," in MathWorld. Available online at: https://mathworld.wolfram.com/Covariance.html (accessed April 21, 2023).
      Woodward J. (2000). Explanation and invariance in the special sciences. Brit. J. Phil. Sci. 51, 197–254. doi: 10.1093/bjps/51.2.197
      Woodward J. (2003). Making Things Happen. Oxford: Oxford University Press. doi: 10.1093/0195155270.001.0001
      Woolman S. (2013). The Selfless Constitution: Experimentalism and Flourishing as Foundations of South Africa's Basic Law. Cape Town: Juta.
      Ylikoski P. (2013). Causal and constitutive explanation compared. Erkenntnis 78, 1–28. doi: 10.1007/s10670-013-9513-9