Edited by: Simisola Oluwatoyin Akintola, University of Ibadan, Nigeria
Reviewed by: Nathalie Gontier, University of Lisbon, Portugal; Opeyemi A. Gbadegesin, University of Ibadan, Nigeria
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Artificial intelligence (AI) has posed numerous legal–ethical challenges. These challenges are particularly acute when dealing with AI demonstrating substantial computational prowess, which is then correlated with agency or autonomy. A common response to this issue is to ask whether an AI system is “conscious” or not. If it is, then it could constitute an agent, actor, or person. This framing is, however, unhelpful since there are many unresolved questions about consciousness. Instead, a practical approach is proposed, which could be used to better regulate new AI technologies. The value of the practical approach in this study is that it (1) provides an empirically observable, testable framework that contains predictive value; (2) is derived from a data-science framework that uses semantic information as a marker; (3) relies on a self-referential logic which is fundamental to agency; (4) enables the “grading” or “ranking” of AI systems, which provides an alternative method (as opposed to current risk-tiering approaches) and measure to determine the suitability of an AI system within a specific domain (e.g., social or emotional domains); (5) presents consistent, coherent, and higher informational content than other approaches; (6) fits within the conception of what informational content “laws” are to contain and maintain; and (7) presents a viable methodology for determining “agency”, “agent”, and “personhood”, which is robust to current and future developments in AI technologies and society.
This paper aimed to establish a robust account of agency which can be applied to many kinds of systems, including AI systems. This raises further sub-questions, such as (1) what does it mean to be an agent; and (2) what markers are there to determine an agent? An account of agency must provide answers to those questions in a generally determinable manner. To build an explanatory account of agency, this study evaluates and uses the various logics underpinning “explanations”, using the ecological framing of biological organisms as agents of their own evolution. In this light, information-centric quantification tools such as statistical mechanics and bioinformatics are attractive sources for creating such an account. An important question would be “what is an AI system?” This question is beyond the scope of this article but will be examined in future research. An additional limit is that this methodology describes an empirically testable account of agency, but it will not describe in detail its preferability compared to existing approaches. It is assumed that the reader is familiar with existing approaches.
Evolution is an ecological phenomenon arising from the purposive engagement of organisms with their conditions of existence. It is incorrect to separate evolutionary biology into processes of inheritance, development, selection, and mutation. Instead, the component processes of evolution are jointly caused by organismal agency and organisms' ecological relations with their affordances. Purposive action is understood as agents using features of their environments as affordances conducive to their goals. Furthermore, a Kantian approach (see Part B of
Explanations usually contain more than theories, in that they involve different bodies of knowledge (Keil,
A question that has long plagued humans is: how do we come to agreement on anything? In language, how do we agree on the meanings of words? In the behavioral sciences, how do we come to know behaviors? In physics, how do entangled particles “know” what the others are doing? Wiener (
The answer is in oscillations or spin; we can observe this in neurons and non-living things such as pendulums synchronizing with each other, which Christiaan Huygens wrote about in 1665 (Redish,
What is next needed is a coupling mechanism between individuals in a population. Coupling (Stankovski et al.,
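To make coupling concrete, consider the Kuramoto model, a standard textbook model of coupled oscillators (the following sketch and its parameters are illustrative and not drawn from the cited sources): each oscillator adjusts its phase toward the others, and once the coupling strength is large enough, the population synchronizes.

```python
import numpy as np

# Minimal Kuramoto model: N oscillators with random natural frequencies
# synchronize once the coupling strength K exceeds a critical value.
rng = np.random.default_rng(42)
N, K, dt, steps = 100, 2.0, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)          # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)   # initial phases

for _ in range(steps):
    # Each oscillator's phase is pulled toward those of all the others.
    pull = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += (omega + pull) * dt

# Order parameter r: 0 = incoherent, 1 = fully synchronized.
r = abs(np.mean(np.exp(1j * theta)))
print(f"order parameter r = {r:.2f}")
```

Setting K near zero leaves the phases incoherent (r close to 0), which mirrors the point above: oscillation alone is not enough, and it is the coupling mechanism that turns a population of individuals into an “us”.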
Robert Moore, Victor Eichler, Friedrich Stephan, and Irving Zucker discovered the brain regions responsible for governing circadian rhythms. The key structure is the suprachiasmatic nucleus (SCN), which processes information about light and darkness from the retinas. Damaged SCNs impair animal rhythms. Oscillators are the tools used to interlink and relate to others like us. They define what constitutes an “us”. Examples of coupling mechanisms include things such as heat, shape, direction, and vision (eyes, in particular, are a gateway for bonding) (Cornell University,
More generally, there are other instances of “understanding” or knowing. These instances involve embodied ontogenetic knowledge: of time, place, circumstance, culture, bodily knowledge (such as sensory information), and the like. For John Vervaeke, these are the four modalities of knowing: (1) participatory knowing; (2) perspectival knowing; (3) procedural knowing; and (4) propositional knowing (Raninen,
We can distinguish different explanations by the causal patterns they employ, the stances they invoke, the domains of phenomena they explain, or whether they are value- or emotion-laden (Keil,
The most common causal relations to which explanations refer are (1) common cause, (2) common effect, (3) linear causal chains, and (4) causal homeostasis (Keil,
One can frame explanations in terms of the stance that they take. Dan Dennett is known for drawing this distinction. Each stance speaks to a framing device for explanations. Each stance is general and non-predictive but does speak to certain relations, properties, and arguments that are fundamental to each (Keil,
Causal explanations have been the dominant form of explanation, especially in the sciences. However, they are not the only form; there are also non-causal explanations, which are called constitutive explanations (Salmon,
The object of constitutive explanation is the causal capacity of a system. This capacity describes what a system would do under specified circumstances/conditions (under a certain trigger). Causal capacities speak to what would, could, or will happen under certain conditions, and they include notions such as ability, power, propensity, and tendency. Causal capacities speak to processes and events: when process (X) happens, event (Y) happens. These explain the changes in properties of a system—that is what an event is (Ylikoski,
This is the “Millian Method of Difference” (Encyclopedia Britannica,
You can also change the values of (C) by making it stronger or weaker, and then observe what happens to (E). We use this to make inferences from the difference observed in effects where (C) is absent or different. Thus, we infer the causal role of (C) based on its presence versus its absence or its changes. This is effective for identifying discrete explanatory privileged causes (Walsh,
Complex adaptive systems can maintain stable configurations despite perturbations because they can alter the causal relations that happen between their parts. Each part affects, and is affected by, others, and the overall effect is attributable, jointly and severally, to all the parts. The system is thus affected by itself, and these causes are non-separable. Causes are only separable when the effect of a change in one is independent of the effects of changes in others.
Analyses of complex adaptive systems tend to distinguish between “principal causes” and “initiating causes”. Principal causes are those to which we can attribute a large portion of the observable effect. Initiating causes start the causal process, which ends with an effect. If two identical systems diverge in their outcomes, it is reasonable to afford principal causal responsibility for differences in effect to a factor that initiates the different trajectories (Walsh,
The constitution explains how things have the causal capacities that they do by relying on their parts and organizations (Ylikoski,
To explicate constitutive explanations (Cummins,
Metaphysics posits that the parts, their causal capacities, and their organization constitute the causal capacities of a system/whole. Constitution is synchronic and thus atemporal (it is not based on time and can be instantaneous). This means that if there are changes in the basis, there is an instant change in the causal capacities of the system (hence constitution is process- and time-independent).
Importantly, the constitutive relata are not independent existences. In causation, one can insist that the relata of cause and effect are distinct from each other but one cannot insist on the same within constitution relata. Specific causal capacities are direct functions of certain constitutions. Constitutions then do not have independent identities.
Constitutive explanations distinguish themselves from identity, in that identity is a reflexive relation and is symmetric. First, one must distinguish between the constitution of all causal capacities of a system and the constitution of an individual capacity (Ylikoski,
We cannot identify individual causal capacities with or as their composite bases (alternative constitution). This is because different objects can have the same causal capacities despite having different compositions (Ylikoski,
The necessary asymmetries are present; the constitution explains causation, and the constitution is composed of parts and the organization of those parts. Systems then are made of causative parts and their organizations. The other asymmetry is existence. This asymmetry means that parts can exist independently of systems, while systems cannot exist without their parts (they can exist without some parts, but not all). The organization of parts is also fundamental for maintaining the status of a system (since systems are not reducible to their parts, they are greater than the sum of their parts). Organization therefore has explanatory relevance. Systems' causal capacities are not just the sum of their parts; they are also the organization of those parts. Organizations' explanatory relevance stems from their contribution to the causal capacities of the system as a whole (change the organization and the causal capacities of the system change). Organization is also called contextual causation and is empirically observable. Contextual causation is similar to downward causation (below), except that it displaces the notion of “downward” and instead posits that parts can influence each other regardless of their relative placement (Ylikoski,
Constitution and causation are both explained in terms of their dependencies, which are a particular set of “objective” relations of dependent facts. These facts give explanations a direction and they are the basis for explanatory preferences (explanations must explain the systems' causal capacities in terms of their basis and not vice versa) (Ylikoski,
Downward causation provides an explanation for “emergence” which will also be necessary for an explanation of AI agency. However, downward causation has been criticized. For example, Kim (
“[d]ownward causation is the raison d'etre of emergence, but it may well turn out to be what in the end undermines it”.
However, this argument assumes the causal inheritance principle, which stipulates that the causal powers of complex systems are inherited exclusively from the causal powers of their parts. This has two salient points: (a) If parts do not have causal capacities, then the system as a whole would not (the capacities of the whole counterfactually depend on the capacities of the parts); and (b) in complex entities, nothing other than their parts are relevant to the determination of their causal properties. This then requires the causal powers of an entity to be internal to it.
Internal properties are context-insensitive, and an entity/system has all its internal properties (until there is an internal change) regardless of the context. If causal powers are internal, it is only the internal constitution of a system that confers those causal powers. This, and the assumption of internal causal properties, results in an ontological primacy afforded to the capacities of the parts, as opposed to the capacities of the totality/aggregates. The idea is that complex entities inherit their causal powers from their parts, but the converse is not true. Complex entities cannot confer on their parts causal powers which the parts did not have by their internal natures/capacities. Therefore, the properties of complex entities cannot explain why their parts have their causal powers (Walsh,
Kim's argument against emergence rests on the assumption of internal causal properties. This kind of thinking may have arisen from the notion that mass (as a fundamental causal power/property) is context-insensitive. The masses of macroscopic objects are not altered by the masses of other bodies; mass behaves in a context-insensitive manner with regard to forces. An object's mass allows the prediction of its behavior across different contexts where forces act on it, and it allows for the assumption that the forces' effects are mutually independent and do not alter the mass itself.
Context insensitivity of causal powers is present in the analytic method (Cartwright,
If causal powers are non-internal properties conferred on things by contexts, then one can argue that parts of complex systems get their causal powers from the system as a whole (
Therefore, the property of the whole depends on the properties of the parts, and the converse is also true. If properties are understood to be relational (not internal in the strictest sense) and context-sensitive, it becomes easier to understand. Reflexive downward causation can be explained as follows: If they are relational properties, it means that complex systems have the causal powers that they do because of the causal powers of their parts (as in causal inheritance). It is also possible that parts have their causal powers because of the complex system they are part of.
In causally cyclical systems, one can assume that the causal powers of the parts are context-dependent and are conferred by the system in which they are parts. Hence,
“Fundamental” speaks to things that cannot be decomposed further into smaller resolutions, meaning that we cannot get a coherent theory if we do so. What is fundamental is thus contingent on the knowledge of the era that you find yourself in. Previously, atoms were thought to be fundamental until particle theory was established. Emergence, however, differs from the fundamental: it is not conceptually contingent in the same way. “Emergence” can explain many issues in physics, such as how Schrödinger's (
The microstate of a system is the configuration of the system (Hidalgo,
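For reference, the standard statistical-mechanical link between microstates and entropy (a textbook relation, not specific to the cited source) is Boltzmann's formula, together with its information-theoretic counterpart, the Shannon entropy:

\[
S = k_B \ln \Omega, \qquad H(X) = -\sum_{x} p(x) \log_2 p(x),
\]

where \(\Omega\) is the number of microstates compatible with a given macrostate, \(k_B\) is Boltzmann's constant, and \(p(x)\) is the probability of microstate \(x\).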
In statistics, correlation describes
Importantly, while correlation applies only to
Covariance speaks to the linear measure of the strength of the correlation between two or more sets of random variables. The covariance for two random variables (X) and (Y), each with sample size (S), is defined by an expectation value (Weisstein,
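For completeness, the standard expectation-value definition (consistent with the cited source) is:

\[
\operatorname{cov}(X, Y) = \mathbb{E}\big[(X - \mathbb{E}[X])(Y - \mathbb{E}[Y])\big] \approx \frac{1}{S} \sum_{i=1}^{S} (x_i - \bar{x})(y_i - \bar{y}),
\]

where \(\bar{x}\) and \(\bar{y}\) are the sample means over the sample of size \(S\).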
The appropriate measurement function is
Mutual information maps to the mutual dependence of random variables (how much knowing (Y) reduces uncertainty about (X)). Therefore, an MI approach would be most applicable to genetic distances (Taleb et al.,
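The standard definition of mutual information, which the discussion that follows presupposes, is:

\[
I(X; Y) = \sum_{x, y} p(x, y) \log_2 \frac{p(x, y)}{p(x)\,p(y)} = H(X) - H(X \mid Y),
\]

that is, the reduction in uncertainty about \(X\) gained by knowing \(Y\) (and vice versa, since mutual information is symmetric); unlike covariance, it also captures non-linear dependences.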
Interventions usually involve notions of manipulations carried out on a variable (X) to determine whether changes in (X) are causally related to a variable (Y). However, any process qualifies as an intervention if it has the right causal characteristics, and not just human activities (Woodward,
According to Woodward, generalizations can be used in explanations and depend on invariance rather than lawfulness (Woodward,
There are two types of changes, and both are fundamental to explanatory power. The first is changes in background conditions (changes that affect variables other than those that form part of the generalization) (Woodward,
For a methodology to constitute a law on personhood or agency, it must meet the conditions of laws (see part A of
Good explanations require the use of invariant generalizations, which enable the specification of systemic patterns (of counterfactual dependence). This converts information into explanations since it can be used to answer a range of counterfactual circumstances about the explanandum. This allows for better predictive models. There are various kinds of counterfactual dependences, including active and passive ones; active is the type that is necessary for good explanations (Woodward,
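To illustrate the interventionist idea in Woodward's sense (a toy sketch; the linear model, its coefficient, and the simulation itself are my own assumptions for demonstration), one can simulate setting a variable by fiat and then check whether the X → Y generalization remains invariant when background conditions change:

```python
import numpy as np

# Toy interventionist test: Y is generated as Y = 3X + B (B = background noise).
# We "intervene" by setting X directly, then check whether the effect of the
# intervention on E[Y] stays invariant when background conditions shift.
rng = np.random.default_rng(0)

def mean_outcome(x_value, background_shift, n=10_000):
    background = rng.normal(0.0, 0.5, n) + background_shift
    y = 3.0 * x_value + background
    return y.mean()

for b in (0.0, 5.0):  # two different background regimes
    effect = mean_outcome(2.0, b) - mean_outcome(1.0, b)
    print(f"background shift {b}: effect of intervening X=1 -> X=2 is {effect:.2f}")
# The intervention effect (~3.0) is invariant across background changes, which is
# what gives the generalization its explanatory power on this account.
```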
Teleology explains the existence of a feature based on its purpose (Walsh,
Mechanists argue that natural selection explains the fit and diversity of organic forms, thus making teleology or purpose explanations unnecessary. The mechanical view is that every event has a cause, with causes being able to fully explain events. But there are three main arguments against this approach: (1) non-actuality, (2) intentionality, and (3) normativity (Walsh,
The non-actuality argument states that means come before ends (goals). However, in terms of teleology, ends
The intentionality argument states that non-actual states of affairs cannot cause anything but mental representations of them can. One way to solve the teleology non-actual dilemma is to propose mental states as representations of these goals (or ends). Thus, occurrences of actions or events are explained by intentions as mental states of agents. The intentional and mental state argument is the most common justification of teleology (Kant and Bernard,
The normativity argument suggests that teleology has a normative value. Explaining an action as a consequence of intention is to argue that an agent was rationally required or permitted to act in a particular way to achieve certain goals. Rational actions are those which are required to attain a goal (or end). Thus, a teleological approach must account for an action being rational (Walsh,
Bedau (
To recap, teleology explains the existence of a feature based on its purpose (Walsh,
In biology, Jacques Monod considered the consequences of a non-purposive nature/biology. He identified a contradiction at the heart of evolutionary biology. This is the “paradox of invariance” (Monod,
However, the invariance principle raises complications, as evolution is fundamentally about change. Adaptive evolution is a form of environmentally driven, biased change. Thus, there should be a source of new variants and a process that is biased toward change. If we argue that new variants are biased in favor of goals and purposes, we may also be undermining science. For Monod (
Aristotle took issue with Democritus's explanation, since chance is, by its nature, not measurable. In
Purposive events are, however, robust (invariant) across a range of alternate initial conditions and mechanisms, whereas chance events are not (they have differing modal profiles). Good explanations must be able to distinguish these. Purposive encounters are those which are insensitive to initial conditions, including locations. Thus, in purposive occurrences, the means counterfactually depend on the ends. Chance occurrences are sensitive to initial conditions: if the initial conditions were different, the event or ends would not have happened. Unlike chance occurrences, purposive occurrences are sensitive to goals: if an agent's goals were different, the event would not have occurred. If the collector had been elsewhere in the market, then the encounter may have happened elsewhere, at a different time, and by different mechanisms.
Given the counterfactual dependence of mechanisms and ends, events that happen because they serve a purpose can be explained in two ways: (1) the occurrence results from mechanical interactions and (2) the occurrence is conducive to the fulfillment of a goal. However, one thing is certain: one cannot simply disregard purposes. If purposes are ignored, it induces a “selective blindness” to a class of explainable occurrences, namely, those that are structured according to the counterfactual dependence of means on goals. This is not just an error of omission; it also risks misconstruing purposive occurrences as blind chance. To properly account for events, both teleological and mechanistic explanations are needed. Purposiveness has now been explained in terms of goals; these purposes can also explain their own means (Walsh,
Goal-directed processes are those that are conducive to stable end states and their maintenance. The end state itself is the goal. Thus, a goal is a state that the goal-directed process is aimed toward. Central to studies on natural goal-directed processes is an adaptive and autonomous system, which can achieve and maintain persistent and robust states through the implementation of compensatory changes (Di Paolo,
The architecture of the system underpins the goal-directed capacities and the states of the goal itself. These systems are usually comprised of modules. These modules are clusters of causally integrated processes decoupled from other modules. They also demonstrate the capacity to produce and maintain integrated activities across a range of perturbations of influences (robustness). Each module has regulatory influence, using positive and negative feedback, over a small number of other modules. Each part effectively influences other parts in some way. This allows for robustness and plasticity by maintaining stability in the presence of perturbations and by enacting new adaptive changes. Plasticity describes the capacity to produce novelty in response to novel circumstances. Biological organisms display this. What allows organisms or systems to do this is the modularity of their development (Schlosser,
Thus, goal-directed behavior is a causal consequence of the architecture of adaptive systems. Furthermore, it is an observable feature of systems dynamics. It is the capacity of systems as a whole to utilize the causal capacities of their parts, and the ability to direct them toward attaining a robust and stable end state. That end state or goal is not a mysterious something; it is a complex and relational property—the property of being in a state that a goal-directed process can achieve and maintain. Therefore, goals are natural and observable (Walsh,
But what about the content of teleological explanations? We can determine the conditions under which they apply as explanations, but we must also account for the content of the explanation. There is a fundamental difference. Conditions for teleology can be understood as causal occurrences; however, content cannot be described in causal terms. Teleology is not about explaining causes, it is about explaining goals to which events are conducive (Walsh,
To describe a non-mechanistic account of goals, two questions must be answered: (1) How can an event be explained by citing the ends to which it is simply a means; and (2) Why does this explanation not need to be explained through mechanisms of cause and effect?
To address the first question, goals can explain their means of achieving those goals in a way similar to how mechanisms explain their effects, namely by using counterfactual invariance relations. Invariance here does not mean the transmission of stable form across generations or lineages. Here, it is
Mechanistic explanations demonstrate how activities and characteristics of (X) produce (Y) as the effect including the specific properties related to that effect. Activities
“[T]he sorts of counterfactuals that matter for purposes of causation and explanation are just such counterfactuals that describe how the value of one variable would change under interventions that change the value of another. Thus, as a rough approximation, a necessary and sufficient condition for
Thus, we can use this to explain how events as means are related to their goals. If there is a goal (X) and the system produces event (A) because (A) is conducive to (X) under conditions (Q), then under different conditions (V) it would produce event (B), as (B) would be more conducive to (X) under those conditions. If the system had another goal (Z), it would produce event (C), should (C) be more conducive to attaining (Z). This is an invariance relation; it is the obverse of the relation of cause and effect. In other words, causes explain their effects because when the cause occurs, then so too does the effect; if the cause does not occur, neither does the effect. We can likewise reason that a goal explains its means: if a system has a goal, then the means arise, and if there were no goal, the means would not arise.
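A minimal sketch of this invariance relation, using the text's own variable names (the conduciveness scores and the lookup table are hypothetical, purely for illustration):

```python
# A goal-directed system selects whichever available means is most conducive
# to its current goal under the prevailing conditions.
CONDUCIVENESS = {
    # (goal, conditions, means): how conducive the means is to the goal
    ("X", "Q", "A"): 0.9, ("X", "Q", "B"): 0.2,
    ("X", "V", "A"): 0.1, ("X", "V", "B"): 0.8,
    ("Z", "Q", "A"): 0.3, ("Z", "Q", "C"): 0.95,
}

def choose_means(goal, conditions, candidates):
    return max(candidates, key=lambda m: CONDUCIVENESS.get((goal, conditions, m), 0.0))

print(choose_means("X", "Q", ["A", "B"]))  # A: under Q, event A is conducive to X
print(choose_means("X", "V", ["A", "B"]))  # B: under V, event B is conducive to X
print(choose_means("Z", "Q", ["A", "C"]))  # C: a different goal selects different means
```

Varying the goal or the conditions varies which means arises, exhibiting the counterfactual dependence of means on ends described above.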
But, on its own, invariance is insufficient. Explanations are description-dependent, and good explanations enhance understanding. Mechanistic explanations do not simply speak to cause and effect (relations); they also speak to the appropriateness or accuracy of that relation. The relation itself only exists if it is appropriate. We use concept descriptions such as “push”, “pull”, and “attract” to describe productive relations. These speak to the nature of the relation, and sometimes also explain the effect.
For teleology, we use the concept descriptor of “conduce/ive”. So, the modal relations are (1) causes produce effects; and (2) means are conducive to their ends.
A singular event can be explained in terms of mechanistic (causal) and teleological (conducive) relations. The former explains how things happen, while the latter explains why they happen, and thus they co-exist. They are complementary and non-competing. They are also complete—they do not need each other for their own coherence—the how's explain the how's and the why's explain the why's, and we do not need the how's to explain the why's. They each convey different information about events. However, for the completeness or coherence of an explanation as a whole, one needs both types of sub-explanations. Without both, there is an explanatory loss. Thus, both mechanism and purpose are important for explanations, even though each is independent of the other. The non-actuality claim, for example, rests on a conflation between causes and explanations. In terms of the intentionality counter, intentions can be understood as goal-directed activity instead of mental representations. Intentional states are mental representations and are unnecessary for teleology (Walsh,
In terms of the normativity counter, the goal need not be described as “good” to explain why systems
Natural agents follow from the natural purpose explanation. Agency, like purposiveness, is an observable property of a system's gross behavior. The system can pursue goals and respond to conditions of its environment and its internal constitution in ways that promote the attainment and maintenance of its goal states. Agency is observable in the sense that we see agents negotiating situations using their dynamics. We can see a range of robust and regular responses to conditions. If we understand a system's goal, we can understand its behavior. Agency is ecological in that a system can cope with its context and achieve its goals by responding to
There is a difference between object and agent theories. Object theories that we use today aim to describe and explain the dynamics of objects (Walsh,
The Cartesian view holds that agents' thoughts, beliefs, and desires explain their actions only if they cause said actions (Davidson,
The standard action theory approach created the issue of the missing agent, which is a consequence of its underlying methodological commitments (Velleman,
Merleau-Ponty explains behavior as commencing with an active organismal agent that is problem-solving and goal-pursuing (Matthews,
Agents create degrees of freedom for themselves by constituting their affordances through self-maintaining and self-regulating activities. They determine which environmental conditions are important. They also enable the exploitation of opportunities that the environment presents. This is a stronger account of autonomy. The integral processes in autonomous systems are (1) continually dependent on one another in their formation and realization as a network; (2) make up a unity (converge) in their domain of existence; and (3) govern areas of exchanges with the environment (Thompson,
Semantic closure refers to a system's capacity to enclose meaning within itself. In biology, for example, the encoding mechanism between a string of DNA and messenger RNA (mRNA) has evolved, altering the meaning of DNA by rewriting the genetic code (Clark et al.,
It is important then to understand how meaning originated for translating proteins and how it has been altered through evolution. This is an ontogenetic or bottom-up approach (Clark et al.,
Finally, the authors proposed something interesting: there were
Historically, semantic information was contrasted with syntactic information. Syntactic information quantifies the kinds of statistical correlations between two systems without giving meaning to those correlations (Kolchinsky and Wolpert,
Some studies (hereafter referred to as “the study” or “this study”) have distinguished between syntactic and semantic information in systems (Kolchinsky and Wolpert,
Importantly, the study distinguishes between “meaningful bits” and “meaningless bits”. This also allows for a differentiation between sub-concepts of semantic information such as “value of information”, “semantic content”, and “agency”. Semantic information is then defined as information that enables systems to achieve their goals (maintaining a low entropy state). However, this is not an exogenous (goal derived from or measured from “external” sources) approach. Any “meaning” obtained from exogenous studies is meaningful (in terms of goals) from the
The study coins the term “viability function”. Viability functions are used to statistically quantify the system's degree of existence at any given time (hence, one can say that viability functions describe real-valued aspects of systems). For this, a negative Shannon entropy is used (it provides an upper bound on the probability that the system occupies any small set of viable states). Semantic information now means the information exchanged between the system and its environment which causally contributes to the system's existence. It is measured by the maintenance of the
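Formally, and as a minimal sketch consistent with the study's definition, the viability function at time \(t\) can be written as the negative Shannon entropy of the system's state distribution:

\[
V(t) = -H(X_t) = \sum_{x} p_{X_t}(x) \log_2 p_{X_t}(x),
\]

so that a system concentrated in a small set of viable states (low entropy) has high viability, while a system spread over many states has low viability.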
To describe the value of information, the study gives the example of a rock. A rock has very low dynamics and thus can remain in a low entropy state for long periods. If information is then scrambled by swapping the rock from its current environment into a different one, this intervention would not make much difference to the rock. However, doing the same with a hurricane (my modified explanation), which requires specific conditions for its maintenance, shows that the hurricane has a greater set of parameters for its maintenance. If those parameters are not met, it will dissipate (viability decreases), and thus it carries important semantic information. Therefore, semantic information matters more for hurricanes than for rocks. If you put an organism in a new environment, it may not be able to find its own food; hence, organisms place an even higher value on information.
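This contrast can be made concrete with a toy simulation (illustrative only; the dynamics, parameters, and binning are my own assumptions rather than the study's): a “tracker” that uses an environmental signal via feedback control loses viability when the signal is scrambled, while a “rock” that ignores the signal does not.

```python
import numpy as np

rng = np.random.default_rng(1)

def viability(uses_signal, scramble, steps=20_000):
    env = np.cumsum(rng.normal(0.0, 0.1, steps))        # drifting environment
    signal = rng.permutation(env) if scramble else env  # scrambling destroys correlations
    x, dev = 0.0, np.empty(steps)
    for t in range(steps):
        if uses_signal:
            x += 0.5 * (signal[t] - x)                  # feedback control toward the signal
        dev[t] = x - env[t]                             # deviation from the true environment
    # Viability = negative Shannon entropy of the binned deviation distribution.
    counts, _ = np.histogram(np.clip(dev, -50, 50), bins=np.linspace(-50, 50, 101))
    p = counts[counts > 0] / counts.sum()
    return float(np.sum(p * np.log2(p)))

for label, uses in (("rock (ignores signal)", False), ("tracker (uses signal)", True)):
    drop = viability(uses, scramble=False) - viability(uses, scramble=True)
    print(f"{label}: viability lost under scrambling = {drop:.2f} bits")
```

The rock's viability barely changes under scrambling, while the tracker's drops sharply: the tracker's correlations with its environment carry semantic information.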
Non-equilibrium systems are those in which the non-equilibrium status is maintained by the ongoing exchange of information by sub-systems. An example of this is the “feedback-control” process in which one subsystem acquires information about another subsystem and then uses this information to apply controls to keep itself or the other system out of equilibrium (like Maxwell's demon). Information-powered non-equilibrium states differ from the traditional non-equilibrium systems considered in statistical physics which are driven by work reservoirs with control protocols, or which are coupled to thermodynamic reservoirs (Kolchinsky and Wolpert,
Observed semantic information in the study speaks to that which is affected by dynamic interventions that scramble the transfer entropy from the environment to the agent. This kind of information identifies semantic information acquired through dynamic interactions between systems and environments (not mutual or stored information). The syntactic information in the study is scrambled to obtain the semantic information. This is how
Systems can have a non-unique optimal intervention, namely,
The thermodynamic multiplier is the stored semantic information (the benefit–cost ratio of mutual information) that provides a manner of comparison for the ability of different systems to use the information to maintain their viability (Kolchinsky and Wolpert,
Observed semantic information can be acquired in dynamic interactions through the use of transfer entropy. This is a measure of information flow and is widely used and understood. The transfer entropy movement from the environment to the system is not necessarily the same as the flow from the system to the environment. Observed semantic information describes dynamic actions and decisions where any information scrambling that comes from the environment to the organism
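Transfer entropy from environment \(E\) to system \(S\) is standardly defined in the information-theory literature as:

\[
T_{E \to S} = \sum p(s_{t+1}, s_t, e_t) \log_2 \frac{p(s_{t+1} \mid s_t, e_t)}{p(s_{t+1} \mid s_t)},
\]

that is, the reduction in uncertainty about the system's next state contributed by the environment's current state over and above the system's own past; the reverse flow \(T_{S \to E}\) is defined symmetrically, and the two need not be equal.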
An autonomous agent (and autonomous agency) in this system would be a physical system that has a large measure of semantic information (Kolchinsky and Wolpert,
My methodology, which uses successful, observable, predictable experiments that provide more information, is more accurate and enables a method of grading or ranking systems as agents according to domain suitability. This relies on the use of semantic information and its relationship with viability. To summarize, viability (reducing or maintaining a low entropy state) is the ability of a system to continue to exist, and it is measured in terms of the viability function. Changes in this viability function are determined by counterfactual dependences obtained through the scrambling of syntactic information. This enables the identification of the more “valuable” semantic information as causally contributing to the system's viability function. There are two kinds of semantic information, both of which affect the viability function: (1) stored and (2) observed. Stored semantic information is the
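To illustrate how such grading might work in practice (a purely hypothetical sketch; the candidate systems and viability numbers are invented for demonstration, not measured), one could rank systems by the viability they lose when their information channels are scrambled:

```python
# Hypothetical semantic-information scores: viability with intact vs. scrambled
# information channels (in bits). Greater loss = more semantic information = a
# stronger candidate for agency within the relevant domain.
candidates = {
    "thermostat":        {"v_intact": -2.0, "v_scrambled": -2.5},
    "trading_bot":       {"v_intact": -1.0, "v_scrambled": -4.0},
    "social_chat_agent": {"v_intact": -0.5, "v_scrambled": -6.0},
}

def semantic_score(v):
    return v["v_intact"] - v["v_scrambled"]

for name, v in sorted(candidates.items(), key=lambda kv: -semantic_score(kv[1])):
    print(f"{name}: semantic-information score = {semantic_score(v):.1f} bits")
```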
This account, rooted in Kantianism, recognizes the explanation and information issues in alternative accounts and provides a more accurate framework. Legal systems and ethics discourse should take note of this account, as the usual ways in which these conversations are entertained are doomed, since they tend to rely on poorly understood, ephemeral notions such as “consciousness”. Instead, systems should be evaluated according to their own intrinsic properties, which enables a better approach to determining suitability (agency and personhood) because it considers agents within their own informational paradigm and not relative to another agent's informational paradigm. In this way, intrinsic bias becomes a strength when considered from the perspective of the system itself.
Publicly available datasets were analyzed in this study. This data can be found here:
MN: Conceptualization, Investigation, Methodology, Project administration, Resources, Visualization, Writing—original draft.
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This article was supported by the U.S. National Institute of Mental Health and the U.S. National Institutes of Health (award number U01MH127690) under the Harnessing Data Science for Health Discovery and Innovation in Africa (DS-I Africa) program. In addition, support was also obtained from the National Research Foundation (NRF) under the Doctoral Innovation Scholarship (award MND190619448844).
The author would like to acknowledge Amy Gooden for her technical editing of this article and Donrich Thaldar for acting as a critical reader and for his helpful comments about earlier drafts of this article.
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
The content of this article is solely my responsibility and does not necessarily represent the official views of the U.S. National Institute of Mental Health or the U.S. National Institutes of Health.
The Supplementary Material for this article can be found online at: