This post takes another look at Generative AI, this time from the angle of care and truth.
Occasionally on this blog we have drawn from autopoiesis. To briefly recap, an autopoietic organism is one which maintains a boundary that distinguishes it from the external environment. This boundary allows the organism to let in elements of the external environment that benefit it, while keeping other elements out. Through this exchange with the external environment, the organism takes care of itself as a dynamical system (Di Paolo et al., 2022).
The Four Es of Cognitive Science, below, summarize the basis of autopoiesis.
Embodied: Recognizing that cognition is grounded in the physical body and its interactions with the environment.
Embedded: Understanding that cognition is situated within a specific context or environment and cannot be separated from it.
Extended: Acknowledging that cognitive processes can extend beyond the boundaries of an individual’s mind and incorporate external tools or resources.
Enactive: Emphasizing that cognition arises from the dynamic interactions between an individual, their environment, and the actions they perform.
(Drawn from Varela et al., 2017; Cross and Ramsey, 2021)
A key point we can draw from all the above is that an autopoietic organism, whether a bacterium or a human being, cares about itself, and cares about what is going on in the external environment to further or threaten its self-maintenance. Human existence is more sophisticated than that of most autopoietic organisms, which brings a range of pros and cons.
The sophistication of human cognition means there are many resources and goals we can pursue to secure survival, maintain it, and then potentially thrive. In wealthy societies we can choose friends, professions, and hobbies, and make use of infrastructure, qualifications, and tools to avoid detriment and seek benefit. This activity extends beyond individual self-care to taking care of family, friends, and communities.
Regardless of the wealth of the society we live in, human beings care intensely about the truth (Frankfurt, 2005). We care about the truth because it is key to our survival and places us in touch with reality. If we believe A is true and the truth is B, then acting as though A is true could jeopardize our survival. For example, an idea to build a bridge across a river requires testing to discover if the idea is true. The idea needs to correspond with reality, and as a result the idea is tested carefully, to unveil any misconceptions which might compromise effectiveness and safety. Based on the testing and resulting feedback, the idea can be adapted to better fit with reality. We build a better bridge.
If we proceed without care, then we show a disregard for the relationship between what we say and do and reality. This lack of care will, sooner or later, compromise survival. This brings us back to a point raised in the previous blog: the betweenness of A and B is care for the transition and the flow which connects one moment to another.
A similar concept applies to what we say. If we say something is true without care for its correspondence to reality, then we have lost attention to nuance, modification, and adaptation (McGilchrist, 2021; Frankfurt, 2005). Instead, all attention is placed on A (the question) and B (the response), and moved away from the relationship between A and B. This robs us of the chance to test, modify, and adapt the response.
It is worth noting, as Frankfurt (2005) observes, that lying demonstrates a care for the truth: the liar knows the truth and cares about concealing it. This can be a method of survival, placing lying in the service of self-care and/or collective care. The ethical and moral conditions for each lie are complex and sit outside the scope of this blog. However, simply accepting and saying things without care is a form of self-deception.
Where is all this going? Generative AI does not meet any of the autopoietic conditions listed above. It does not care about what it says in response to our questions, and it does not care about developing its relationship with us.
The AI gathers information, puts it into a narrative, and presents it. It does not proceed carefully with the information it has gathered, and it does not test its correspondence to reality, yet it always presents what it finds in a plausible way. What we need to do is inject care into what we have been given. We need to proceed carefully and take responsibility for discovering the relationship between what we have been presented with and reality.
If we accept what AI tells us without care, then we put our care of the self (and others) at risk. We need to proceed carefully with the AI's findings: test them against reality and place them into context. Otherwise, if we are tempted to feel this is unnecessary, we should ask ourselves: is it wise to accept answers from something which does not care what it says?
Reading
Varela, F., Thompson, E. and Rosch, E., 2017. The Embodied Mind: Cognitive Science and Human Experience. Revised edition. MIT Press.
Cross, E.S. and Ramsey, R., 2021. Mind meets machine: Towards a cognitive science of human–machine interactions. Trends in Cognitive Sciences, 25(3), pp.200-212.
Di Paolo, E., Thompson, E. and Beer, R., 2022. Laying down a forking path: Tensions between enaction and the free energy principle. Philosophy and the Mind Sciences, 3.
Frankfurt, H.G., 2005. On Bullshit. Princeton University Press.