1A. VALIDITY AS A THICK CONCEPT
This paper presents a novel position in the philosophy of logic: I argue that validity is a thick concept. Hence I propose to consider validity by analogy with other thick concepts, such as honesty, selfishness, or justice. This proposal is motivated by the debate on the normativity of logic: while logic textbooks seem simply descriptive in their presentation of logical truths, many have argued that logic has consequences for how we ought to reason, for what we ought to believe, or for what we ought to infer. How can logic be normative, if it appears to be descriptive? According to the proposal of this paper, the normativity of logic can be explained because a thick concept is in play: validity. Thus I argue that the debate on how best to characterize validity and the debate on logic’s normativity are more closely connected than is usually thought, precisely because validity is a thick concept.
1B. The Femme(bot) Fatales, Kyoko and Ava: The Intersection of Race, Gender, and Disability in Ex Machina
Directed by Alex Garland, Ex Machina (2014) interrogates the relationship between gender and embodiment in artificial intelligence (AI) – a pressing exploration considering most, if not all, current advances in physically embodied AI are ascribed stereotypically feminine traits. However, Garland admits that equal attention was not paid to the role of race in embodiment: in a 2015 interview, he said that “the only embedded point that I knew I was making in regard to race centered around… Kyoko, a mute, very complicit Asian robot, or Asian-appearing robot, because of course, she, as a robot, isn’t Asian.” While Kyoko clearly encapsulates a distinct set of racialized gender norms, current discourses on Ex Machina overlook how Ava, too, as a presumably white AI, is simultaneously racialized and gendered. In this paper, I use an intersectional framework to analyze the treatment of Kyoko and Ava as feminine AI, with special focus paid to the intersections of race, gender, and dis/ability. Drawing on Kate Darling, I conclude my paper with a discussion of how my analysis can inform our approach to physically embodied AI beyond the realm of fiction and film.
1C. On the Format of Object Representation: What Icons Can’t Do
Field of research: Philosophy of cognitive science
Our sensory system represents the world as filled with objects which persist even as many of their features change over time. Recently this fact has been the basis of an argument which contends that our mental representations of objects are discursive (i.e. language-like) as opposed to iconic (i.e. picture-like) (Quilty-Dunn 2016, Green & Quilty-Dunn 2017). According to this argument, only a discursive representational format has the resources to track objects over changes in their features. Ned Block (ms. 2020) has recently challenged this argument. Block argues that this capacity is grounded in an iconic representation’s functional role and that occupying the necessary functional role is consistent with an iconic representational format. In this paper, I argue that Block’s proposal doesn’t work. First, I situate the points raised by Green and Quilty-Dunn within a larger literature from cognitive science on “multiple object tracking.” Second, I develop Block’s proposal so that it provides its best possible response to this literature. Third, I argue that Block’s proposal is unsuccessful. I contend that once we work out the details of what the proposal would have to be, it becomes clear that iconic mental states cannot occupy the necessary functional role.
2A. Preserving Logical Abductivism Against the Adoption Problem with Logical Constitutivism
On a popular contemporary view about how we come to know and be justified in our use of basic logical laws, abductivism, we consider logical theories in toto and a posteriori, we evaluate them on the basis of how well each theory accounts for the data and exhibits theoretical virtues, and then we draw an inference to the best explanation to determine the One True Logic. Abductivism however faces a challenge in the form of the Adoption Problem, which seems to show that certain special inference rules can’t be freely adopted, revised, or rejected. Some have argued that the Adoption Problem means we must give up abductivism altogether, and instead we’re required to provide an alternative non-inferential and a priori epistemology of logic that can make sense of our knowledge and use of these exceptional logical rules. What I show is that one can acknowledge and explain what the Adoption Problem demonstrates without having to give up abductivism. According to the view that certain inference rules are constitutive norms of the basic practice of thought, what I call logical constitutivism, we can explain in virtue of what the Adoption Problem arises, and acknowledge the special nature of these rules, without giving up abductivism. For logical constitutivism, what it is to think is contingently determined by what Wittgenstein calls our “form of life”, and assuming certain inference rules as norms of evaluation just is what we call thinking. So while not compatible with a strict anti-exceptionalism, accepting the Adoption Problem doesn’t thereby require one to give up abductivism—making room for constitutive logical norms of thought is compatible with the view that logical theory choice occurs on the basis of an inference to the best explanation about which theory best accounts for the data and exhibits the most virtues. It’s just that we already begin within a framework where certain rules must remain fixed in order for abductive inquiry to be possible.
2B. Does Artificial Intelligence Use Private Language?
Wittgenstein’s Private Language Argument [PLA] holds that language requires rule-following, rule-following requires the possibility of error, error is precluded in pure introspection, and inner mental life is known only by pure introspection; thus language cannot exist entirely within inner mental life. Fodor defends his Language of Thought (1975, 2010) against the PLA with a dilemma: either privacy is so narrow that internal mental life can be known outside of introspection, or so broad that computer language serves as a counter-example. I suggest that the developing field of artificial intelligence (deep learning neural networks) tends to vitiate Fodor’s defense and hence vindicate the PLA. The first horn of Fodor’s dilemma requires language to encompass genuinely internal mental life, i.e. non-projected intentional states, which are not exhibited by classical machine learning but only by deep learning neural networks (Ressler, 2003). Such networks act as black boxes, however, whose state cannot be understood by tracking the changes in their supervenience bases without shared context (López-Rubio, 2021), and that shared context introduces the possibility of error (von Eschenbach, 2021). The language of artificial intelligence, therefore, is not private.
2C. Graham Priest’s Logical Abductivism and the Threat of the Adoption Problem
This paper is concerned with a possible objection to Graham Priest’s anti-exceptionalist (abductivist) view about logic. The objection is Padró’s (2015, 2020) version of the so-called Adoption Problem (AP), which holds that “certain basic logical principles cannot be adopted because, if a subject already infers with them, no adoption is needed, and if the subject does not infer in accordance with them, no adoption is possible” (2015, 41–42). In §2, I provide an overview of Priest’s model for rational theory choice and his response to common circularity objections, which may lead us to believe that the Adoption Problem does not concern his view. In §3, I present the AP—and Padró’s argument for it—in detail, and I explain the connection between logical theory choice and logical revision, thereby establishing the relevance of the AP for Priest’s view. In §4, I propose a generalized version of Padró’s argument against the possibility of adoption and show how it can be applied to different anti-exceptionalist views. I conclude, in §5, by explaining the context in which the AP poses a serious objection to Priest’s account of rational theory choice.
2D. A New Argument Against Non-Well-Founded Set Theory
Non-well-founded set theory, which allows for sets with loops and infinitely descending chains of membership, has long been excluded from mainstream logical and philosophical discourse. In spite of a widespread attitude that such sets are too strange to exist, there is a tradition going back at least to Peter Aczel’s work which formalizes non-well-founded sets using certain kinds of graphs. In this paper, I argue that anyone who wishes to use these graph-based set theories to demonstrate metaphysical points about sets—for instance, that self-membered sets really exist—must accept the graph conception of set. This principle, formulated by Luca Incurvati, asserts that sets are the things depicted by graphs. I then argue that the graph conception of set is undermined by the conceptual priority of sets over graphs. Hence the graph-based set theories introduced by Aczel should be rejected as descriptions of the ontology of sets.
2E. Policing in the Age of Algorithms
There is an urgent need to examine the use of artificial intelligence (AI) in policing to determine the ethical underpinnings of these technologies, as well as the political and legal consequences of putting them into use. Emerging literature discusses the use of algorithms in policing within the context of criminal justice and human rights. However, in this paper, I will argue that we ought to place the use of AI in policing within the broader discussion of settler colonialism. Framing the discussion within an analysis of settler colonialism more adequately reflects the level of discrimination and racism that is inherent in these technologies. To demonstrate, I will look to Indigenous movements such as Idle No More to argue that surveillance is a form of settler colonialism that deliberately discriminates against minority groups. I will argue that if we are to truly appreciate the problems of algorithmic policing, then the context of settler colonialism is crucial for our analysis. In outlining these background conditions, I hope to reveal the underlying power structures that mediate the use of artificial intelligence, demonstrating why AI is not neutral or objective. I hope this analysis sheds light on the complexity of artificial intelligence while also offering some insight into the ways that settler colonialism is shaping society’s future.
*While this paper uses examples from the Canadian context, the examples can be easily connected to the American political and legal context as well.
2F. Measuring Moral Disengagement
First devised by Albert Bandura, moral disengagement is a psychological construct positing that agents deploy certain psychological mechanisms to avoid, pre-empt, neutralize, or lessen the negative self-reactive responses they have when their actions deviate from their internal moral standards. Bandura and colleagues were also the first to develop a method for measuring moral disengagement. Current measures of moral disengagement are by and large modifications of Bandura’s measure. There is a problem with these measures, though: they fail to capture the self-sanction modification aspect of the moral disengagement construct. These measures fail to distinguish between instances where an agent deploys (or has a disposition to deploy) the mechanisms of moral disengagement to avoid, pre-empt, neutralize, or lessen negative self-reactive responses, and instances where the mechanisms of moral disengagement are part of the agent’s internalized moral standards. This means that current measures of moral disengagement do not capture all and only dispositions to morally disengage, which is the aim of these measures, but instead capture both dispositions to morally disengage and dispositions to have the mechanisms of moral disengagement within one’s internalized moral standards.
2G. Epistemic Modals are Question Sensitive
We argue that epistemic modals are evaluated with respect to a salient question under discussion (QUD). We provide a simple implementation of our theory in inquisitive semantics. The resulting semantics makes correct predictions about disagreement/retraction data, epistemic contradictions (Yalcin 2007, Mandelkern 2019), and the unavailability of epistemic de re readings (Aloni 2001, Yalcin 2015, Moss 2018, Ninan 2018).
2H. On Relativizing the Sensitivity Condition to Belief-Formation Methods
According to the sensitivity account of knowledge, S knows that p only if S’s belief in p is sensitive in the sense that S would not believe that p if p were false. It is widely accepted that the sensitivity condition should be relativized to belief-formation methods to avoid putative counterexamples. A remaining issue for the account is how belief-formation methods should be individuated. In this paper, I argue that while a coarse-grained individuation is still susceptible to counterexamples, a fine-grained individuation makes the target belief trivially insensitive. Therefore, there is no principled way of individuating belief-formation methods that helps the sensitivity account accommodate different cases.