
Luminous, Ergo Bright Beliefs

Published online by Cambridge University Press:  29 December 2025

Julio De Rizzo*
Affiliation:
Department of Philosophy, University of Vienna , Vienna, Austria

Abstract

A state is luminous if and only if, whenever one is in it, one is in a position to know that one is. A state is bright if and only if, whenever one is in it, one is in a position to believe that one is. Beliefs have long been regarded, both historically and in contemporary work, as luminous and bright. This paper evaluates Timothy Williamson’s influential anti-luminosity arguments as they apply to the luminosity and brightness of belief. While his margin-for-error argument may effectively challenge the luminosity of knowledge, it cannot be straightforwardly extended to undermine either the luminosity or the brightness of belief. Some authors have responded to the more general anti-luminosity argument by appeal to a constitutive connection between states and attitudes. An influential reply on Williamson’s behalf by Srinivasan, in terms of degrees of confidence, opens the way to the claim that there is a constitutive connection between confidence and belief. The more general argument, in the specific case of belief, can then be resisted by drawing on independently defensible views of how we form beliefs about our own beliefs.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1. Introduction

A state is luminous if and only if, whenever one is in that state, one is in a position to know that one is in it. Much of the debate sparked by Williamson’s influential anti-luminosity arguments has focused on whether knowledge is luminous. While some of Williamson’s points specifically target knowledge, his most discussed argument was explicitly intended to apply to other mental states, including belief.

The luminosity of belief holds particular historical significance. Descartes famously – and somewhat surprisingly – extends the certainty of the cogito, which initially appears restricted to the I’s existence, to one’s thoughts, including occurrent beliefs. If awareness (conscientia) is sufficient for knowledge, Descartes appears explicitly committed to the luminosity of thoughts – plausibly encompassing beliefs:Footnote 1

As to the fact that there can be nothing in the mind, in so far as it is a thinking thing, of which it is not aware [conscius], this seems to me to be self-evident. For there is nothing that we can understand to be in the mind, regarded in this way, that is not a thought or dependent on a thought. If it were not a thought or dependent on a thought it would not belong to the mind qua thinking thing; and we cannot have any thought of which we are not aware [conscius] at the very moment when it is in us. (1641, Replies to the fourth set of objections: 147; cf. Malebranche 1674: 218)

It hardly needs reminding that the echoes of the Cartesian tradition still resonate today. At times, one finds support for a related principle: whenever one believes that p, one also believes that one believes that p.

(…) that to the extent that a subject is rational, and possessed of the relevant concepts (most importantly, the concept of belief), believing that p brings with it the cognitive dispositions that an explicit belief that one has that belief would bring, and so brings with it the at least tacit belief that one has it. (Shoemaker 1996: 241; see also e.g. Stoljar 2019: 405; Smithies 2019: 174–176.)

Call a state bright if it is such that if one is in that state, one is in a position to believe that one is in it. Thus understood, the brightness of a state is strictly weaker than its luminosity, since the latter entails the former, but not vice versa (under the assumption that knowledge implies belief, and thus that being in a position to know entails being in a position to believe).

Of being in a position to know, Williamson writes:

To be in a position to know p, it is neither necessary to know p nor sufficient to be physically and psychologically capable of knowing p. No obstacle must block one’s path to knowing p. If one is in a position to know p, and one has done what one is in a position to do to decide whether p is true, then one does know p. The fact is open to one’s view, unhidden, even if one does not yet see it. (2000: 95)

In a similar vein, to be in a position to believe, for present purposes, it is neither necessary to believe p nor sufficient to be physically and psychologically capable of believing p. No obstacle must block one’s path to believing p. If one is in a position to believe p, and one has done what one is in a position to do to decide whether p is true, then one does believe p. The appearance of fact is open to one’s view, unhidden, even if one does not yet see it. In short: if one considers the matter, it seems to one as if p, which prompts the belief that p.Footnote 2 (Although this is in principle in tension with doxastic voluntarism – the view that, in some cases at least, one might choose to believe or not to believe that p – it is by no means uncommon to suppose that view simply false; at any rate, even doxastic voluntarists should make an exception for at least some beliefs, and the ensuing discussion might be restricted to these.) For readability, I use the simpler forms ‘knows’ and ‘believes’ throughout. The reader should note that these can be reformulated in terms of being in a position to know (or believe), which avoids commitment to idealized conditions and allows us to bypass fine-grained distinctions between various types of belief – occurrent, standing, implicit, and so on – while enabling a higher-level, more abstract consideration of luminosity and brightness as they apply to beliefs in general.

While the luminosity of belief has received considerable attention, its brightness has rarely been addressed explicitly. It is therefore worth briefly motivating why the brightness of belief merits assessment. The following considerations rest on contentious assumptions but are meant merely to illustrate the significance of brightness, not to defend those assumptions.

Arguably, the best explanation of the oddness of belief ascriptions using Moorean sentences of the form ‘She believes that (p and she does not believe that p)’ relies on brightness. Since the belief operator distributes over conjunction, she believes that p; if belief is bright, she then believes that she believes that p (or more plausibly: is in a position to believe that she believes that p). But by the second conjunct and distribution, she believes that she does not believe that p. Thus if the first ascription holds, she believes both that she believes p and that she does not believe that p. If she is a consistent believer, this cannot hold. (The point is essentially Hintikka’s 1962: §4.7; see also Smithies 2016.)Footnote 3
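Set out schematically, the derivation behind this explanation runs as follows (a sketch only, in standard doxastic-logic notation; ‘B’ is the subject’s belief operator, and step 4 uses brightness in its stronger, belief-entailing reading discussed in footnote 3):

```latex
\begin{align*}
1.\;& B(p \land \lnot Bp) && \text{assumption: the Moorean ascription holds} \\
2.\;& Bp                  && \text{from 1, distribution of } B \text{ over } \land \\
3.\;& B\lnot Bp           && \text{from 1, distribution of } B \text{ over } \land \\
4.\;& BBp                 && \text{from 2, brightness (strong reading)} \\
5.\;& BBp \land B\lnot Bp && \text{from 3, 4: an inconsistent pair of beliefs}
\end{align*}
```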

Another source of interest in brightness arises from the normativity of inferential reasoning. One recent influential account (Boghossian 2014) holds that a transition from premises P1, …, Pn to a conclusion C counts as an inference if and only if: (i) one comes to believe C because one believes each of P1, …, Pn; and (ii) one believes that P1, …, Pn support C. Naturally, this invites several questions – particularly about the explanatory role of ‘because’ – but we set these aside for now.

It would be implausible to require, in place of (ii), the more demanding condition that one believes that one’s beliefs in P1, …, Pn support one’s belief in C (Broome 2019: §7). Such a condition would likely exclude many unsophisticated but otherwise competent reasoners. Still, being aware of the normative force of inference – that one ought to believe C upon inferring from P1, …, Pn – may demand precisely this stronger requirement. This applies to practical reasoning as well, where the output is often an intention rather than a belief, but the input still includes beliefs. However, if belief is not bright, then the normative demands associated with inference may themselves be opaque. One might fail to know what one is required to believe (or intend), simply because one is unaware of the relevant beliefs one’s reasoning relies on (see Marcus 2021 for related discussion).

Both the luminosity and the brightness of belief are widely endorsed. How do they relate? Logically, belief may be bright without being luminous. Yet plausible assumptions cast doubt on the value of brightness absent luminosity. If knowledge is the most successful form of belief – as suggested by the knowledge norm of belief (Williamson 2000: 255–6) – then forming beliefs about one’s beliefs risks falling short of success across the range of cases targeted by the anti-luminosity argument. This is not to say that belief loses its value when it fails to amount to knowledge; it is often worth trying even without a guarantee of success. Still, in the case of second-order belief, the general practice of forming beliefs about one’s own beliefs appears unjustified if it cannot typically aim at knowledge – at least more often than Williamson’s argument permits. Moreover, brightness alone does not account for the felt authority we have over our own beliefs. Thus, even if belief remains bright in the absence of luminosity, the resulting illumination is dimmer – and markedly less rationally motivated – than many would have hoped.

A more careful examination of Williamson’s anti-luminosity arguments, particularly as they apply to belief, is also warranted by a tension that, to my knowledge, remains insufficiently addressed in the literature: between the broadly optimistic stance many take toward self-knowledge of belief and the implications of Williamson’s case against it. The vast majority of proponents of the optimistic view do not engage directly with Williamson’s challenge. One reason for this may be their caution: these authors often acknowledge the fallibility of self-knowledge. Yet even with such a concession, it remains overly optimistic to assume that the acknowledged fallibility aligns neatly with the kinds of cases Williamson targets. His argument purports to reveal a systematic and far-reaching failure – one that plausibly exceeds the scope of standard fallibility. While optimism about self-knowledge, qualified by a fallibility caveat, is logically consistent with the anti-luminosity thesis, a tension persists that calls for closer scrutiny.Footnote 4

This paper investigates Williamson’s anti-luminosity arguments insofar as they concern belief, analyzing both their challenge to luminosity and their implications for the brightness of belief. In §2, I consider Williamson’s margin-for-error argument against the luminosity of knowledge and show that it does not extend to challenge either the brightness or the luminosity of belief. §3 presents Williamson’s more general anti-luminosity argument. An influential reply draws on a constitutive connection between phenomenal states and cognitive attitudes about them (Leitgeb 2002; Weatherson 2004; Berker 2008; Ramachandran 2009). Srinivasan (2013), relying on Williamson’s original exposition, develops a response in terms of degrees of confidence, which I also present in that section. §4 addresses the application to belief. Once the details of the belief case are fully developed, the argument remains far from conclusive. The constitutive connection strategy re-emerges, grounded in the plausible view that belief is constituted by degrees of confidence – a perspective supported by well-established accounts of how rational agents form beliefs about their own beliefs. §5 concludes the discussion.

2. The argument from margins of error

Given our limited capacities of discrimination, human knowers need margins of error. Being a human knower, Mr. Magoo is no exception. By looking at a tree some distance off from his window, Mr. Magoo gains some knowledge about its height. No matter how good his glasses are, he cannot know that the tree is i inches tall, for any natural number i. However, he knows, for instance, that it is neither 60 nor 6000 inches tall. But for many natural numbers i, he does not know that the tree is not i inches tall. If the tree is i + 1 inches tall, he does not know that it is not i inches tall. Reflecting on his abilities, he knows the latter fact:

  1. Mr. Magoo knows that (if the tree is i + 1 inches tall, he does not know that it is not i inches tall).

About Magoo, we make three further assumptions. First, the uncontentious:

  2. Mr. Magoo knows that the tree is not 1 inch tall.

We further assume that knowledge is luminous to Mr. Magoo, stated in the following weakened form:

  3. KK. For any pertinent p, if Mr. Magoo knows that p, then he knows that he knows that p.

(As Williamson notes (2000: 115), the restriction to pertinent propositions avoids, for instance, the objection that Mr. Magoo knows every finite iteration of knowledge on what he knows. For even if p is pertinent, Kp need not be.)

Having taken some courses in logic, Mr. Magoo satisfies multi-premise closure under logical consequence (2000: 116):

  4. C. If p and all members of the set X are pertinent, p is a logical consequence of X, and Mr. Magoo knows each member of X, then he knows that p.

Finally, about the tree, we assume:

  5. The tree is 666 inches tall.

The argument proceeds as follows. The following is an instance of 1.:

  6. Mr. Magoo knows that (if the tree is 2 inches tall, he does not know that the tree is not 1 inch tall).

From KK and 2, we get:

  7. Mr. Magoo knows that he knows that the tree is not 1 inch tall.

By propositional logic, 2. and ‘if the tree is 2 inches tall, Mr. Magoo does not know that the tree is not 1 inch tall’ imply that the tree is not 2 inches tall. Given C, we have:

  8. Mr. Magoo knows that the tree is not 2 inches tall.

Repeating this reasoning a further 664 times, we conclude:

  9. Mr. Magoo knows that the tree is not 666 inches tall.

But this is false, since knowledge is factive, and the tree is in fact 666 inches tall.
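The step-by-step structure of the iteration can be made vivid in a few lines (a minimal illustrative sketch: Mr. Magoo’s knowledge is modelled as the set of heights he knows the tree not to be, and KK and closure C are applied mechanically, abstracting from everything epistemically substantive):

```python
# A minimal sketch of the margin-for-error iteration. Mr. Magoo's knowledge
# is modelled as the set of heights (in inches) he knows the tree NOT to be;
# KK and multi-premise closure C are applied as mechanical steps.

TREE_HEIGHT = 666            # premise 5: the tree's actual height
ruled_out = {1}              # premise 2: he knows the tree is not 1 inch tall

for i in range(1, TREE_HEIGHT):
    if i in ruled_out:
        # KK: he knows that he knows the tree is not i inches tall.
        # Premise 1 (instance): he knows that if the tree is i+1 inches
        # tall, he does not know it is not i inches tall.
        # Closure C: these jointly entail -- so he knows -- that the tree
        # is not i+1 inches tall.
        ruled_out.add(i + 1)

print(TREE_HEIGHT in ruled_out)   # True: he 'knows' the tree is not 666
                                  # inches tall, yet it is, and knowledge
                                  # is factive.
```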

Given the plausibility of the other premises, this argument puts pressure on either KK or C, the multi-premise closure attributed to Magoo. Since it would take us beyond the present scope, we need not discuss Williamson’s defense of C (2000: 117–19; see Stalnaker 2015 for critical discussion). We turn to the application to belief.

2.1. Inexact belief

Williamson’s argument in the previous section is directed specifically at the luminosity of knowledge. While it is not intended to apply more broadly, it is natural to ask whether the argument extends to belief, given the close connection between the two states.

Mr. Magoo is a reasonable average observer and cautious believer. For no natural number i does he believe that the tree is i inches tall; and for many natural numbers i, he does not believe that the tree is not i inches tall. Moreover, for no natural number i is the tree i + 1 inches tall while he believes that it is not i inches tall.Footnote 5 Reflecting on his beliefs, he believes this latter general fact:

  1. Mr. Magoo believes that (if the tree is i + 1 inches tall, he does not believe that it is not i inches tall).

About Magoo, we make three assumptions. First, the uncontentious:

  2. Mr. Magoo believes that the tree is not 1 inch tall.

We further assume that belief is bright to Mr. Magoo, stated in the following weakened form:

  3. BB. For any pertinent p, if Mr. Magoo believes that p, then he believes that he believes that p.

His courses on logic made him sharp enough to satisfy:

  4. C*. If p and all members of the set X are pertinent, p is a logical consequence of X, and Mr. Magoo believes each member of X, then he believes that p.Footnote 6

Again, we assume:

  5. The tree is 666 inches tall.

The argument is analogous, except for the last step. We end up with the conclusion:

  6. Mr. Magoo believes that the tree is not 666 inches tall.

However, since belief is not factive, this does not contradict the fact about the tree.

We may even grant:

  7. ∼∃i (Mr. Magoo believes that the tree is i inches tall)

At the same time, it remains highly plausible that:

  8. Mr. Magoo believes that ∃i (60 < i < 6000 and the tree is i inches tall).

These are perfectly consistent. One might surely believe that some fair lottery ticket is the winner while, for no particular ticket, believing that it is the winner.Footnote 7, Footnote 8

Similar considerations show that the argument does not bear on the luminosity of belief, provided that we do not assume KK. To show this, we make the same assumptions as in the former argument, with the exception of KK, which we replace with:

KB. For any pertinent p, if Mr. Magoo believes that p, then he knows that he believes that p.

  1. Mr. Magoo knows that (if the tree is i + 1 inches tall, he does not know that it is not i inches tall).

  2. Mr. Magoo knows that the tree is not 1 inch tall.

  3. C. If p and all members of the set X are pertinent, p is a logical consequence of X, and Mr. Magoo knows each member of X, then he knows that p.

  4. The tree is 666 inches tall.

We also assume:

  5. If Mr. Magoo knows that p, then he believes that p.

Unsurprisingly, the argument flounders where KK was employed. Applying 5. to 2., we conclude that

  6. Mr. Magoo believes that the tree is not 1 inch tall.

Now applying KB to 6., we arrive at:

  7. Mr. Magoo knows that he believes that the tree is not 1 inch tall.

Since 6. is logically compatible with Mr. Magoo not knowing that the tree is not 1 inch tall, we cannot apply C to conclude that Mr. Magoo knows that the tree is not 2 inches tall. Of course, adding plausible assumptions about what Mr. Magoo knows will suffice for this and reasonably close steps; but for some sufficiently high i, such assumptions cease to be plausible enough to restore the argument.
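The structural difference can be seen by modifying the earlier sketch: with KB in place of KK, the higher-order state produced at each step concerns belief rather than knowledge, so closure never receives the premises it needs (again a purely illustrative sketch):

```python
# The same sketch with KB in place of KK: knowledge and belief are tracked
# separately, and closure C operates only on what is KNOWN.

TREE_HEIGHT = 666
knows_not = {1}              # premise 2: he knows the tree is not 1 inch tall
believes_not = set()

for i in range(1, TREE_HEIGHT):
    if i in knows_not:
        believes_not.add(i)  # assumption 5: knowledge implies belief
        # KB yields only: he KNOWS that he BELIEVES the tree is not i
        # inches tall. Premise 1 conditions on his KNOWING it is not i
        # inches tall, so closure C never receives a set of known premises
        # entailing 'the tree is not i+1 inches tall'. The chain stalls.

print(TREE_HEIGHT in knows_not)   # False: the paradoxical conclusion
                                  # is never reached
```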

3. The general anti-luminosity argument

Williamson’s most discussed argument against luminosity is meant to be generalizable to all mental states, including belief. He illustrates the argument using the example of the condition of feeling cold.

Suppose α0 is a situation (a ‘case’ in Williamson’s terminology) at dawn where one feels cold. The temperature gradually and slowly rises, until in a later situation at noon, αn, one does not feel cold anymore. One’s feelings of coldness cannot discriminate between αi and αi+1. Throughout the process, one attentively considers how cold one feels. Williamson shows that the following three claims are inconsistent:

Luminosity of feeling cold (LUM). ∀αi ((at αi: one feels cold) → at αi: K(one feels cold))

Setup of the Case (CASE). At α0: one feels cold and at αn: one does not feel cold.

(For some sufficiently later n. All αi and αi+1, for 0 ≤ i ≤ n–1, are close possibilities to one another.)

Knowledge-Safety of feeling cold (K-SAFE).

∀αi (at αi: K(one feels cold) → at αi+1: one feels cold)

(where αi and αi+1 are close possibilities)Footnote 9

By CASE, at α0: one feels cold. By LUM and modus ponens, at α0: K(one feels cold). By K-SAFE, at α1: one feels cold. By repeating these steps n–1 times, we reach the conclusion that at αn: one feels cold, which contradicts CASE.
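The derivation can be rendered in a few lines (a toy sketch: situations are indexed 0..n, the number n = 100 is arbitrary, and LUM and K-SAFE are applied as mechanical inference steps; nothing turns on the modelling choices):

```python
# Toy rendering of the anti-luminosity derivation. Situations are indexed
# 0..n; LUM and K-SAFE are applied as mechanical inference steps.

n = 100                        # arbitrary number of steps from dawn to noon
feels_cold = {0}               # CASE: at alpha_0 one feels cold
knows_feels_cold = set()

for i in range(n):
    if i in feels_cold:
        knows_feels_cold.add(i)     # LUM: one feels cold -> K(one feels cold)
    if i in knows_feels_cold:
        feels_cold.add(i + 1)       # K-SAFE: K(cold) at a_i -> cold at a_(i+1)

print(n in feels_cold)   # True: 'one feels cold' at noon -- contradicting CASE
```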

K-SAFE is an instance of a more general scheme:

Knowledge-Safety. ∀αi (at αi: Kp → at αi+1: p)

(where αi and αi+1 are sufficiently similar situations.) Since CASE holds by the description of the case, LUM or K-SAFE must go. Williamson argues for keeping the latter.

3.1. Defending knowledge-safety: First pass

How could one argue for K-SAFE – and the more general scheme of which it is an instance, Knowledge-Safety?

A first natural step is to consider:

Belief-Safety. ∀α (at α: Kp → ∀β (β is similar to α → at β: (Bp → p)))

According to Belief-Safety, one knows that p in a situation only if, in every possible similar situation in which one believes that p, it is true that p. The principle arguably captures an intuitive application of the idea that knowledge requires a modal notion of safety: one’s belief qualifies as knowledge only if it is not hostage to falsity in any similar scenario (Pritchard 2005: 146f.).

Belief-Safety does not suffice for Knowledge-Safety, since it requires only that p be true in those similar scenarios in which one believes that p; it is silent about similar scenarios in which the belief is absent, whereas Knowledge-Safety requires the truth of p at αi+1 regardless. To derive K-SAFE, one might resort to Srinivasan (2013: 302):

Doxastic-Disposition for feeling cold

∀αi (at αi: B(one feels cold) → ∃βi+1 (βi+1 is a phenomenal duplicate of αi+1 & at βi+1: B(one feels cold))).

Accordingly, if one believes that one feels cold at a situation αi, there exists a possible situation βi+1 in which one’s cold feelings are exactly the same as in the next actual situation αi+1 and in which one believes that one feels cold. That one’s cold feelings are exactly the same in those scenarios – that αi+1 and βi+1 are ‘phenomenal duplicates’ – means: at αi+1 one feels cold if and only if at βi+1 one feels cold.

Doxastic-Disposition and Belief-Safety (where ‘one feels cold’ instantiates ‘p’) entail K-SAFE. Suppose K(one feels cold) at αi. Since knowledge implies belief, B(one feels cold) at αi. Modus ponens applied to Doxastic-Disposition implies that there is a possible scenario βi+1, sufficiently similar to αi, in which B(one feels cold). Thus by Belief-Safety, at βi+1: one feels cold. Since βi+1 and αi+1 are phenomenal duplicates, at αi+1: one feels cold. Conditionalization discharges the initial supposition, and we obtain K-SAFE.

Doxastic-Disposition is an instance of a more general principle:

Doxastic-Disposition (general). ∀α (at α: Bp → ∀β (β is similar to α → at β: one is disposed to Bp))

That is, if one believes that p at a scenario, then in every sufficiently similar scenario, one is disposed to believe that p. Although the similarity relation allows for a familiar degree of flexibility, the principle draws its plausibility from a natural requirement: that similar situations preserve the bases on which the relevant beliefs are formed – or would be formed. In the example above, this means, for instance, that one’s feelings of cold are preserved across the relevantly similar cases.

However, this derivation of K-SAFE falls prey to a reply raised by many in response to Williamson’s argument (Leitgeb 2002; Weatherson 2004; Berker 2008; Ramachandran 2009): namely, that it presupposes the falsity of:

Phenomenal-Belief Constitution (CON). ∀α (one attends to one’s cold feelings at α → (at α: one feels cold ↔ B(one feels cold)))

That is, in every situation where one attends to whether one feels cold, one’s beliefs about feeling cold track one’s cold feelings. Although the important point is the left-to-right implication across possible situations, this is usually assumed to be supported by a more intimate essential or constitutive relation between one’s feelings and the beliefs about them.

If CON holds, then Doxastic-Disposition is false with respect to the case at hand. For, as the case is described, there is an αi which is the last situation where one feels cold, and thus, by CON, the last where one believes that one feels cold. But then there cannot be a possible βi+1 which is a phenomenal duplicate of αi+1 – thus, one where one does not feel cold – and yet where one believes that one feels cold, since that contradicts CON.

3.2. Defending knowledge-safety: Second pass

Despite the numerous replies that rely on CON, Williamson’s defense of K-SAFE – and of Knowledge-Safety more broadly – does not proceed via Belief-Safety. Rather than invoking that principle, Williamson formulates a safety condition for knowledge in terms of degrees of confidence. To a first approximation, one’s degree of confidence in a proposition reflects the extent to which one is disposed to rely on it in practical reasoning – that is, to treat it as a premise in deliberation.

Degrees of confidence are not credences, or degrees of subjective probability reflecting betting behavior. Williamson describes the following case to illustrate the difference (see also Williamson 2000: 98–99):

Lottie knows that there are 10,000 tickets in a fair lottery and only one will win. She cautiously refrains from forming a belief either way as to whether her ticket will lose. Nevertheless, she knows and believes that its chance of losing is 0.9999; she makes bets on that basis. Thus, by an operational standard, her credence that her ticket will lose is 0.9999. In fact, her ticket wins. (Williamson 2020)

Lottie is not at fault for refraining from believing that the ticket will lose – even if she assigns it a high probability. The case is consistent with her having no inclination whatsoever to rely on that proposition in practical reasoning, that is, with her having confidence 0 in it.

Though one might be tempted to respond by equating outright belief with credence 1, the following case runs against the temptation:

Infinitely Many Tosses. Indira knows that there will be an ω-sequence (ordered like the natural numbers) of independent tosses of a fair coin. She cautiously refrains from forming a belief either way as to whether tails will come up at least once. Nevertheless, she knows and believes that the chance of tails coming up at least once is 1; she makes bets on that basis. Thus, by an operational standard, her credence that tails will come up at least once is 1. In fact, heads comes up every time. (Williamson 2020)

In cases involving infinitely many possible outcomes, assignments of probability 1 do not correspond to outright belief, since it is consistent with such an assignment that the claim does not hold (cf. Williamson 2007; see Haverkamp and Schulz 2012 for discussion).

The underlying idea of the principle from which K-SAFE might be derived is that knowledge requires absence of nearby misplaced confidence:

Confidence-Safety. If in case α one knows with confidence c that p, then in any sufficiently similar case α* in which one has an at-most-slightly-lower degree of confidence c* that p, it is true that p.

To derive K-SAFE from Confidence-Safety, we need an additional premise. Srinivasan (2013) suggests the following (she labels it ‘Conf*’):

Bridge Principle (‘Bridge’). If at αi one has degree of confidence c that one feels cold, there exists a sufficiently similar possible case βi+1 in which one’s cold-feelings are a phenomenal duplicate of one’s cold-feelings at αi+1 and in which one has an at-most-slightly-lower degree of confidence c* that one feels cold.

Bridge has two main elements: it asserts on the one hand that if one has degree of confidence c that one feels cold at αi, there is a possible scenario βi+1 such that one feels cold at αi+1 if and only if one feels cold at βi+1 (this is intended as a consequence of one’s cold feelings in the two scenarios being phenomenal duplicates of one another); and on the other hand that, at that same βi+1, one has an at-most-slightly-lower degree of confidence c* that one feels cold.

Suppose at αi one knows that one feels cold, and thus has a sufficiently high confidence c that one feels cold. By Bridge, there is a possible βi+1 such that at βi+1: one feels cold if and only if at αi+1: one feels cold; and at which one has an at-most-slightly-lower confidence c* that one feels cold. Since at αi one knows that one feels cold, by Confidence-Safety, at βi+1 one feels cold. By Bridge, at αi+1 one feels cold. Hence, K-SAFE.
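For reference, the derivation can be set out schematically (a sketch only; ‘cold’ abbreviates ‘one feels cold’):

```latex
\begin{align*}
1.\;& \text{at } \alpha_i:\ K(\text{cold}), \text{ with confidence } c
  && \text{supposition} \\
2.\;& \text{at } \beta_{i+1}:\ \text{confidence } c^* \text{ that cold, and }
  (\text{cold at } \beta_{i+1} \leftrightarrow \text{cold at } \alpha_{i+1})
  && \text{from 1, Bridge} \\
3.\;& \text{at } \beta_{i+1}:\ \text{cold}
  && \text{from 1, 2, Confidence-Safety} \\
4.\;& \text{at } \alpha_{i+1}:\ \text{cold}
  && \text{from 2, 3, phenomenal duplication} \\
5.\;& \text{at } \alpha_i:\ K(\text{cold}) \rightarrow \text{at } \alpha_{i+1}:\ \text{cold}
  && \text{1--4, discharging the supposition: K-SAFE}
\end{align*}
```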

Crucially, this defense of K-SAFE is independent of the falsity of CON above. If a constitutive relation between feelings of cold and beliefs about them holds, then at exactly the point where one ceases to feel cold, one stops believing that one feels cold. As we saw, if the safety requirement for knowledge amounted simply to the absence of nearby untrue belief, CON would block the argument. But Confidence-Safety amounts to more than this, namely the absence of nearby misplaced confidence (even where that confidence falls short of outright belief). As Srinivasan (2013) writes:

Suppose that in αi S truly believes that she feels cold, and that in αi+1 she is still quite confident that she feels cold, but insufficiently confident for outright belief. By CON, S feels cold in αi but does not feel cold in αi+1. But by Confidence-Safety, S does not know that she feels cold in αi. So even a constitutive connection between feeling cold and believing one feels cold is insufficient to vindicate luminosity. (310)

Thus, Confidence-Safety offers a robust defense of the broader argument against luminosity.Footnote 10 However, as we shall see, even if this response succeeds with respect to first-order attitudes, it leaves room for a luminist reply in the case of higher-order beliefs – particularly if a constitutive link can be established between belief and the underlying degrees of confidence.

4. Luminous beliefs

To inquire about the luminosity of belief, we may consider the same case, but now focusing on how one’s beliefs about feeling cold, and beliefs about these beliefs, unfold across the situations. The three corresponding theses are as follows:

Luminosity of B(feeling cold) (LUM*). ∀αi ((at αi: B(one feels cold)) → at αi: KB(one feels cold))

Setup of the Case (CASE*). At α0: B(one feels cold) & at αn: ∼B(one feels cold).

(For some sufficiently later n. All αi and αi+1, for 0 ≤ i ≤ n–1, are close possibilities to one another.)

Knowledge-Safety of B(feeling cold) (K-SAFE*).

∀αi (at αi: KB(one feels cold) → at αi+1: B(one feels cold))

(where αi and αi+1 are close possibilities)

The reasoning is the same as before. By CASE*, at α0: one believes one feels cold. By LUM* and modus ponens, at α0: KB(one feels cold). By K-SAFE*, at α1: B(one feels cold). By repeating these steps n–1 times, we reach the conclusion that at αn: B(one feels cold), which contradicts CASE*.

The principles required to derive K-SAFE* are now:

Bridge Principle*. If at αi one has degree of confidence c that one believes that one feels cold, there exists a sufficiently similar possible case βi+1 in which one’s first-order doxastic states about feeling cold are a duplicate of one’s first-order doxastic states about feeling cold at αi+1 and in which one has an at-most-slightly-lower degree of confidence c* that one believes that one feels cold.

Confidence-Safety*. If in case α one knows with confidence c that Bp, then in any sufficiently similar case α* in which one has an at-most-slightly-lower degree of confidence c* that Bp, it is true that Bp.

(‘Doxastic states’ covers the possibility of having no beliefs at all, that is, having confidence 0 in a proposition.)

4.1. Translucent minds

If the argument is to be resisted, we should turn to specific accounts of self-knowledge – those that seek to explain how we form beliefs about our own beliefs. The views considered in this section need not rely on luminist preconceptions. They are typically aimed at making sense, on the one hand, of the peculiar access one seems to have to one’s mental states – we seem to know what we believe in a way that differs markedly from how we know other things, including the mental states of others – and, on the other, of the privileged stance we appear to occupy with respect to our doxastic lives. The peculiarity as such is neutral on the epistemic status of higher-order beliefs. The privilege might fall short of luminosity, and should in at least some degree be recognized even by anti-luminists. Though we may not be luminous believers, we are far from fumbling in the dark: we are, by and large, impressively good at knowing what we believe. The principles we will examine shortly concern only how we form higher-order beliefs and are therefore independent of whether LUM* holds.

A recurring theme across otherwise divergent positions is the notion that first-order beliefs are, in some sense, necessarily linked to higher-order beliefs about them. Stated in such general terms, the idea lends itself to several distinct formulations.

Transparency theorists hold that, when deciding whether they believe that p, the typical way to proceed is by considering whether p is true. As Evans famously puts it – in a passage by now cited almost to exhaustion – ‘in making a self-ascription of belief, one’s eyes are, so to speak, or occasionally literally, directed outward – upon the world. (…) I get myself in a position to answer the question whether I believe that p by putting into operation whatever procedure I have for answering the question whether p’ (Evans 1982: 225). The core idea is that we come to believe that we believe p by way of considering p itself and taking a positive stance toward it. In self-ascribing belief, attention need not be directed toward the believer or their internal states, but only toward what the first-order belief is about.

The transparency idea can be implemented in different ways. A higher-order belief might be formed inferentially – for example, by following a rule such as: if p, then believe that you believe that p (Byrne 2018). Alternatively, it might be formed non-inferentially – for instance, if the higher-order belief arises from the same (not necessarily inferential) basis as the first-order belief that p (Fernández 2013).

Another related family of views connects higher-order beliefs to first-order beliefs in a different way. Peacocke (1996; 1999: ch. 5) develops an account of self-knowledge in which phenomenal conscious states and events serve as non-inferential justifiers of higher-order states and events that share part of their content – assuming the subject possesses the relevant concepts. On this view, for example, having the apparent memory that Williamson was born in Sweden can itself justify the judgment that Williamson was born in Sweden. One need not observe the memory, infer the content of the judgment from it, or locate the justification within the judgment itself.

For higher-order beliefs, the typical justifiers are judgments, which may themselves be non-inferentially grounded in further conscious states or events. For instance, one’s judgment that Williamson was born in Sweden can justify the self-ascription of the belief that Williamson was born in Sweden – itself a judgment, which in turn justifies the belief that one believes that Williamson was born in Sweden. The justification of a higher-order belief is thus described by a process that typically has four parts: first, one undergoes a conscious episode or state with the content that p; taking the episode at face value, one then judges that p on that basis; that judgment non-inferentially justifies the self-ascription of the belief that p; finally, that self-ascription justifies the belief that one believes that p.

Yet a last family of views holds that there is an even more intimate, constitutive, correlation between first-order and second-order beliefs. Wright, in earlier work, for instance, suggests that one’s (first-order) beliefs are constituted by what one takes oneself to believe – that is, by one’s higher-order beliefs – under appropriate, optimal conditions (Wright 1998; Khani 2020). These include having the concepts in question, not being subject to self-deception, and so on. Facts about belief, and mental contents more broadly, are accordingly response-dependent, analogously to the way being red might be taken to depend on how observers respond to objects in appropriate, optimal conditions. Since their response-dependence attaches to the nature of belief, the constitution is meant to hold necessarily.

A similar result is pursued by Shoemaker (1996), who, however, grounds the necessary correlation between first-order and higher-order beliefs in a necessary connection between the states themselves – more precisely, in the partial identity of their neural realizations and the overlap in their causal profiles.

It hardly needs saying that this brief overview does not do justice to the intricacies of the individual accounts; its aim is merely to illustrate a common tendency – namely, to posit a necessary connection, sometimes conditional on requirements we may realistically presume the subject in the argument satisfies, between second-order beliefs and the first-order states they are about.

Applied to degrees of confidence, the idea shared by those views might be spelled out in two main ways (we assume that degrees of confidence take values in the real interval [0, 1]):

Higher-Order Constitutivism (HCON). For some c, necessarily,

  i) one has confidence 1 in Bp if one has at least confidence c in p;

  ii) one has confidence 0 in Bp otherwise. (Cf. ‘Vindication’ in Goldstein and Waxman 2020: 9.)

‘c’ might vary with the range of substitutions of ‘p’. HCON draws on the fact that the accounts above are accounts of our knowledge of outright belief, marked by a threshold of confidence which one’s higher-order beliefs reliably track.Footnote 11

Alternatively, one might let degrees of confidence in Bp mirror the degrees of confidence in the embedded propositions, thus obtaining:

Higher-Order Constitutivism* (HCON*). Necessarily, for all c: one has confidence c in Bp if and only if one has confidence c in p. (Cf. ‘Calibration’ in Goldstein and Waxman 2020.)

According to HCON*, one’s confidence in one’s belief exactly matches one’s confidence in the proposition believed.
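For concreteness, the two profiles can be written out as functions from first-order confidence to higher-order confidence (a sketch only; the threshold value 0.8 is an arbitrary illustrative choice, since HCON allows c to vary with p):

```python
# The two higher-order confidence profiles, written as functions from one's
# confidence in p to one's confidence in Bp. The threshold c = 0.8 is an
# arbitrary illustrative choice.

def hcon(conf_p: float, c: float = 0.8) -> float:
    """HCON: full confidence in Bp at or above the belief threshold, none below."""
    return 1.0 if conf_p >= c else 0.0

def hcon_star(conf_p: float) -> float:
    """HCON*: confidence in Bp exactly mirrors confidence in p."""
    return conf_p

# As first-order confidence slides just past the threshold, HCON's
# higher-order confidence jumps from one extreme to the other -- never
# 'at most slightly lower' -- while HCON* declines only slightly.
for conf in (0.81, 0.80, 0.79):
    print(conf, hcon(conf), hcon_star(conf))
# 0.81 1.0 0.81
# 0.80 1.0 0.80   <- last point with outright belief
# 0.79 0.0 0.79   <- HCON drops from 1 to 0; HCON* barely moves
```

This jump is exactly the feature exploited against Bridge* below.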

HCON and HCON* are consistent with means of arriving at second-order beliefs that do not turn on first-order ones – by testimony of a trusted therapist, for instance. One might relativize them to a specific way of knowing to render this explicit. Importantly, the methods they describe plausibly align with how the person in the argument forms second-order beliefs.

What evidence is there that we conform to either HCON or HCON*? Any defense of the specific views on higher-order belief discussed earlier already lends support to this contention. Yet typical armchair philosophy – or psychology confined to the conceptual, foundational level – can hardly be expected to settle the issue. Still, taken together, these diverse views may provide cumulative evidence for either of the confidence profiles: the fact that multiple, well-developed accounts have been advanced by reasonable theorists, each underwriting either HCON or HCON*, constitutes considerable, if inconclusive, evidence in their favor.

Can HCON and HCON* be tested empirically? Recent research on metacognition has established a strong connection between confidence and metacognitive sensitivity (De Gardelle and Summerfield 2011; Spence et al. 2016; Boldt et al. 2017; Fleming 2024). In this context, metacognitive sensitivity denotes the ability to reliably monitor one’s own cognitive functions – for instance, perceptual discrimination – while confidence refers to the degree to which one believes oneself to be correct, typically concerning a particular decision or action (often termed ‘propositional confidence’ in this literature; Pouget et al. 2016; Fleming 2024).

There are, of course, notorious methodological challenges: metacognitive bias, task evaluation, and unconscious processing may all distort sensitivity in ways difficult to capture quantitatively (Fleming and Lau 2014). Still, under certain idealizing assumptions, an observable positive correlation between perceptual discrimination and metacognitive sensitivity – hence with confidence in the above sense, especially in visual discrimination tasks – may be regarded as indirect evidence for either HCON or HCON*. Specifically:

  i) if the rate of success in perceptual discrimination reflects one’s belief or degree of confidence in sensory input, and

  ii) if the assessment of the correctness of the ensuing action or decision reflects one’s higher-order attitude toward that belief or those degrees, then the observable correlation parallels, however imperfectly, the sort of relation predicted by HCON and HCON*.

Assumption (i) is plausible in light of Williamson’s idea that one’s degree of confidence tracks the extent to which one relies on the object of that confidence in action or reasoning. If agents tend to form beliefs and adjust their first-order confidence levels reliably – and if visual discrimination, being among the least contaminated and most practiced first-order perceptual functions, exemplifies this – then (i) is well motivated.

Assumption (ii) may be justified as follows. In such experiments, participants are asked to classify a given stimulus – for example, to press or refrain from pressing a button depending on how they categorize it. Their decision is thus tied to their belief (and degree of confidence) that the stimulus falls under a certain category. Consequently, their assessment of whether their action was correct can reasonably be taken as an assessment of the correctness of their belief.

To be clear, even if these assumptions hold, the observed correlation cannot conclusively establish either hypothesis – partly because of the methodological complications just noted. Nonetheless, current research on metacognition may be taken to provide defeasible empirical support for the idealized correlation predicted by HCON and HCON*.

Both principles offer responses to the anti-luminosity argument as applied to belief: HCON is inconsistent with Bridge*, while HCON* undermines the central motivation for Confidence-Safety*.

To see the first point, note that for some αi in the case, αi is the last situation where one has confidence c that one feels cold; that is, at αi+1, one has a lower confidence that one feels cold (for c a particular value of ‘c’ in HCON). Bridge* would have us conclude that there is a possible situation βi+1 which is a duplicate of one’s first-order doxastic states about feeling cold at αi+1, where one has an at-most-slightly-lower degree of confidence c* that one believes one feels cold. But by HCON, one’s confidence at βi+1 drops from 1 to 0 – that is, from one extreme to the other – so c* cannot be at-most-slightly-lower than c.

HCON describes a confidence profile for higher-order beliefs about beliefs that would be utterly implausible if applied to first-order beliefs in the original case. It seems far more plausible that one’s confidence in the proposition that one feels cold does not abruptly drop from 1 to 0 once the sensation of cold falls below a certain threshold. However, if our higher-order beliefs are constitutively based on our first-order ones, as the accounts behind HCON hold, this kind of profile is not reserved for highly idealized reasoners. Interestingly, this comes close to invoking the constitutive connection strategy in the context of higher-order belief. Even if Srinivasan’s (2013) ingenious reply succeeds for first-order mental states, it does so only by permitting the necessary connections to resurface in the specific case of higher-order belief.

To better probe HCON, it is instructive to consider a case where the first-order belief targets a non-gradable, or at least not so explicitly gradable, phenomenon.

Imagine looking out of a window in one of Oxford’s buildings and seeing, in the distance, an averagely tall, slender figure in a well-tailored suit. His silvery hair carries the scholarly disarray of someone caught between a paradox and a comb. As he draws nearer, at a slow but steady pace, the bright British accents of his colorful socks catch your eye – followed by his glasses: serious enough for logic, stylish enough for metaphysics. Then, as he steps fully into view, it suddenly strikes you: that’s Timothy Williamson.

The case involves a gradient from not recognizing Williamson to recognizing him – contrasting with the ‘feeling cold’ case. Our focus lies on the formation of the belief – formed through visual experience – that that person (pointing demonstratively) is Timothy Williamson, and on the formation of the accompanying higher-order belief that one believes that that is Timothy Williamson.

To visualize the confidence profile predicted by HCON, imagine a graph where the X-axis represents time as Williamson approaches, and the Y-axis represents one’s confidence in the proposition that that is Williamson. According to HCON, the graph remains flat at 0 for a stretch, followed by a sharp jump to 1 at the moment when the visual evidence becomes sufficient to form the belief. (Strictly speaking, this need not be a strict discontinuity, but it must be steep enough that there is no situation ‘in between’.) The jump marks the point at which one sees Williamson and thus comes to believe it is him. It contrasts with the original ‘feeling cold’ case, where confidence drops gradually in step with a slowly fading sensation; the corresponding graph would resemble that of a steadily decreasing linear function.
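The two graphs described here can be generated in a few lines (a sketch with made-up numbers chosen only to reproduce the shapes; the jump point 0.6 is arbitrary):

```python
# Sketch of the two confidence profiles described above, using hypothetical
# numbers chosen only to reproduce the shapes of the graphs.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 500)                 # time as Williamson approaches

recognition = np.where(t < 0.6, 0.0, 1.0)  # HCON profile: flat at 0, then a
                                           # (near-)jump to 1 where the visual
                                           # evidence becomes sufficient
fading_cold = 1.0 - t                      # 'feeling cold' case: confidence
                                           # declining steadily with the
                                           # slowly fading sensation

plt.plot(t, recognition, label="that is Williamson (HCON profile)")
plt.plot(t, fading_cold, label="one feels cold (gradual decline)")
plt.xlabel("time")
plt.ylabel("confidence")
plt.legend()
plt.show()
```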

The conflict between HCON* and Confidence-Safety* is more nuanced than a straightforward logical inconsistency. On the latter view, if an agent knows that p with confidence c in a given situation, then p must be true in all sufficiently similar situations where the agent retains a slightly lower level of confidence, c*. The standard rationale for safety requirements appeals to our limited discriminatory capacities: we are often unable to reliably distinguish between very similar situations, and our beliefs do not systematically vary in a way that tracks these fine-grained differences. As a result, for a belief to amount to knowledge, it must be appropriately insulated from error across nearby cases. The application of safety constraints thus reflects our status as cognitively limited agents who are prone to misplaced confidence. That such requirements apply to us is a consequence of our epistemic fallibility and is far from an idealization of our capacities (Williamson 2009; Srinivasan 2013: 314–315).

However, if our confidence profile regarding propositions about our own beliefs conforms to HCON*, then our discriminatory capacities appear far from limited. To appreciate the implausibility of imposing a safety condition on confidence in such cases – and analogous ones – consider the following example. Suppose we have a thermometer (T) inserted into a liquid that is being continuously warmed, and a secondary device (D) that registers what T measures. D reliably tracks T’s readings: at each moment αi, D displays that T measures the liquid at i degrees Celsius. Although D knows that T registers i degrees at αi, at the subsequent moment αi+1, when the temperature has increased slightly, T no longer measures i degrees. Yet D’s knowledge is not compromised by this, because it reliably tracks T’s evolving measurements – were T to register i, for any i, D would indicate as much. A safety condition on confidence, when applied here, rules out too much: it fails to accommodate cases in which the knower is reliably tracking change across a range of closely related situations. In such contexts, the knower’s discriminatory success undermines the rationale for imposing safety constraints.Footnote 12
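The tracking structure of the example can be made explicit in a short simulation (illustrative only; the temperature values are arbitrary, and the point is just that D’s display co-varies with T at every moment, so no nearby situation finds D confidently wrong):

```python
# Illustrative simulation of the thermometer case: T measures a liquid that
# is being continuously warmed, and device D tracks T's readings. What T
# registers changes from moment to moment, yet D is never confidently
# wrong, because its display moves with T throughout.

temps = [20.0 + 0.1 * i for i in range(50)]   # the warming liquid, moment by moment

previous_display = None
for liquid_temp in temps:
    t_reading = liquid_temp      # T measures the liquid
    d_display = t_reading        # D tracks T's reading
    # At each moment, D 'knows' what T registers; at the next moment T no
    # longer registers that value, but D's display has moved with it.
    assert d_display == t_reading              # tracking never fails
    if previous_display is not None:
        assert d_display != previous_display   # the tracked value keeps changing
    previous_display = d_display
```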

If HCON* holds, we are like the device tracking the thermometer’s measurements. Confidence Safety* does not apply to us. We are undoubtedly imperfect – but that does not mean certain domains of knowledge cannot offer glimpses of perfection.

5. Conclusion

Beliefs occupy a special place in our cognitive homes. Williamson’s margin-for-error argument against the luminosity of knowledge cannot be adapted to dim either the brightness or the luminosity of belief. Moreover, if we adopt specific, well-developed views on how we form beliefs about our beliefs, we can respond to his ingenious anti-luminosity argument in this context. While this may not secure illumination, it clears a major obstacle to our beliefs standing, once more, in the open light.

Acknowledgments

For helpful feedback on earlier drafts, I thank the Phlox Research Group and an anonymous referee for Episteme. This research was funded in whole or in part by the Austrian Science Fund (FWF) Grant-DOI 10.55776/P36713. I gratefully acknowledge their support.

Footnotes

1 We leave aside intricacies of Descartes’ account. In particular, we need not address what could depend on a thought for him in this context. The quote is merely illustrative.

2 ‘Considers the matter’ should cover reflection on possible distortions of evidence, so as to exclude, for instance, the objection that in a visual illusion, it seems to one that p, while one does not believe that p. In crude terms, the notion of seeming here is not only phenomenal seeming. I thank an anonymous reviewer for raising the issue.

3 As an anonymous reviewer aptly observes, the explanation may presuppose the stronger interpretation of Brightness in terms of belief, rather than merely being in a position to believe. For it might well be coherent to be in a position to believe that one believes that p, while at the same time believing that one does not believe that p. Part of the response, I think, turns on how exactly ‘being in a position to believe’ is understood – on my view, it is meant to be rationally binding in a sense that supports the inconsistency claim. Admittedly, this understanding may not withstand scrutiny; if so, Brightness would lose its motivation from explaining the peculiar character of Moorean cases. It would nonetheless remain of independent interest. I am grateful to the anonymous reviewer for this remark.

4 I assume the views on self-knowledge discussed here reject a distinction that would exempt self-knowledge from general epistemic requirements, pace Srinivasan (2013). Some make this explicit – see Byrne’s desideratum of ‘economy’ (2018: 14ff.).

5 This is not as easily generalizable to all reasoners in the case of belief as it is in the case of knowledge, since, by chance, one’s margins might deviate from it. Nevertheless, for the purposes of this argument, we may assume it with respect to Mr. Magoo.

6 The restriction to pertinent propositions allows one to block the extension of the reasoning that establishes 6. to higher values of i. The set of pertinent propositions need not be closed under logical consequence.

7 One should be careful to distinguish this from the reasoning generating the Lottery paradox. It is one thing to believe each premise of the form ‘i is not the winning ticket’ and then conclude ‘∀i (i is not the winning ticket)’; quite another to not believe each premise of the form ‘i is the winning ticket’, thus concluding ‘∼∃i (I believe that i is the winning ticket)’. The former is inconsistent with believing that there is a winning ticket, the latter is not.

8 As an anonymous referee notes, it is crucial in this context to distinguish the absence of belief from negative belief – that is, belief that p is not the case. To illustrate, one might think that the Mr. Magoo and lottery cases are disanalogous. For in the former, one could say that, for every i, Mr. Magoo believes that the tree is not i inches tall; whereas, in the latter, it would be implausible to claim that, for every ticket, one believes that it is not the winner. Yet there is no reason to adopt this assumption about Magoo, for the same reason the analogous assumption is implausible in the lottery case. For each i, Mr. Magoo does not believe that the tree is i inches tall; and for many i – those lying outside his rough estimate of the tree’s height – he indeed believes that the tree is not i inches tall. Since such an estimate is absent in the lottery case, the two differ in this respect. But this difference does not affect the main point: just as there is no evidence favoring or disfavoring any particular ticket, there is no evidence, for values of i within Magoo’s estimate, that sufficiently indicates the tree is or is not that many inches tall. In both cases, the available evidence does not warrant forming either a positive or a negative belief, making the absence of belief the appropriate attribution. I thank the referee for helpful discussion.

9 Instead of quantifying over the indices in the subscripts, I opted for quantifying over situations, leaving the quantifiers dependent on the first variable implicit. This serves simplicity only. As is clear from the discussion, the quantifiers range over all possible situations.

10 As an anonymous referee aptly observes, Srinivasan’s defense may be read as conceding that the argument applies only to states that are either phenomenal or grounded in the phenomenal. This restriction suffices as a reply to the constitutive strategy but may be narrower in scope than Williamson’s original attempt. For reasons of space, I cannot address the interesting question of how far the anti-luminosity argument extends once Srinivasan’s defense is accepted. I proceed, therefore, under the assumption – favorable to the anti-luminist – that such a generalization remains viable. I thank the referee for drawing attention to this point.

11 Williamson’s argument need not assume the existence of a final αi at which one outright believes one feels cold – though adopting Knowledge-Safety in a classical setting leads to that conclusion. Similarly, it need not presuppose a threshold degree of confidence for outright belief, just as there may be no precise threshold for feeling cold: there might be no value c such that one believes p outright if and only if one’s confidence in p is at or above c. HCON could be reformulated to reflect this. For reasons of space, I set aside the details.

12 Since there is no obvious analogue of D’s confidence in tracking T’s reading being short of full (<1), the thermometer case might be read as conforming to either of HCON or HCON*. I thank an anonymous referee for this observation.

References

Berker, S. (2008). ‘Luminosity Regained.’ Philosophers’ Imprint 8(2), 1–22.
Boghossian, P. (2014). ‘What is Inference?’ Philosophical Studies 169(1), 1–18.
Boldt, A., de Gardelle, V. and Yeung, N. (2017). ‘The Impact of Evidence Reliability on Sensitivity and Bias in Decision Confidence.’ Journal of Experimental Psychology: Human Perception and Performance 43(8), 1520–31.
Broome, J. (2019). ‘A Linking Belief is Not Essential for Reasoning.’ In M. Balcerak Jackson and B. Balcerak Jackson (eds), Reasoning: New Essays on Theoretical and Practical Thinking, pp. 32–43. Oxford: Oxford University Press.
Byrne, A. (2018). Transparency and Self-Knowledge. New York: Oxford University Press.
De Gardelle, V. and Summerfield, C. (2011). ‘Robust Averaging During Perceptual Judgment.’ Proceedings of the National Academy of Sciences 108(32), 13341–46.
Descartes, R. (1641). ‘Objections and Replies.’ In R. Ariew and D. Cress (eds), R. Descartes: Meditations, Objections, and Replies, 2006, pp. 51–179. Indianapolis: Hackett Publishing Company.
Evans, G. (1982). The Varieties of Reference. Edited by John McDowell. Oxford: Oxford University Press.
Fernández, J. (2013). Transparent Minds: A Study of Self-Knowledge. Oxford: Oxford University Press.
Fleming, S.M. (2024). ‘Metacognition and Confidence: A Review and Synthesis.’ Annual Review of Psychology 75, 241–268. https://doi.org/10.1146/annurev-psych-022423-032425
Fleming, S.M. and Lau, H.C. (2014). ‘How to Measure Metacognition.’ Frontiers in Human Neuroscience 8, 443. https://doi.org/10.3389/fnhum.2014.00443
Goldstein, S. and Waxman, D. (2020). ‘Losing Confidence in Luminosity.’ Noûs 55(4), 1–30.
Haverkamp, N. and Schulz, M. (2012). ‘A Note on Comparative Probability.’ Erkenntnis 76(3), 395–402.
Hintikka, J. (1962). Knowledge and Belief. Ithaca: Cornell University Press.
Khani, A.H. (2020). ‘Interpretationism and Judgement-Dependence.’ Synthese 198(10), 9639–59.
Leitgeb, H. (2002). ‘Review of Timothy Williamson, Knowledge and Its Limits.’ Grazer Philosophische Studien 65, 207–17.
Malebranche, N. (1674). ‘The Search After Truth.’ In T.M. Lennon and P.J. Olscamp (eds), Malebranche: The Search After Truth: With Elucidations of the Search After Truth, 1997, pp. 1–529. New York: Cambridge University Press.
Marcus, E. (2021). Belief, Inference, and the Self-Conscious Mind. Oxford: Oxford University Press.
Peacocke, C. (1996). ‘Entitlement, Self-Knowledge, and Conceptual Redeployment.’ Proceedings of the Aristotelian Society 96, 117–58.
Peacocke, C. (1999). Being Known. New York: Oxford University Press.
Pouget, A., Drugowitsch, J. and Kepecs, A. (2016). ‘Confidence and Certainty: Distinct Probabilistic Quantities for Different Goals.’ Nature Neuroscience 19(3), 366–74.
Pritchard, D. (2005). Epistemic Luck. Oxford: Clarendon Press.
Ramachandran, M. (2009). ‘Anti-Luminosity: Four Unsuccessful Strategies.’ Australasian Journal of Philosophy 87(4), 659–73.
Shoemaker, S. (1996). The First-Person Perspective and Other Essays. Cambridge: Cambridge University Press.
Smithies, D. (2016). ‘Belief and Self-Knowledge: Lessons from Moore’s Paradox.’ Philosophical Issues 26(1), 393–421.
Smithies, D. (2019). The Epistemic Role of Consciousness. New York: Oxford University Press.
Spence, M.L., Dux, P.E. and Arnold, D.H. (2016). ‘Computations Underlying Confidence in Visual Perception.’ Journal of Experimental Psychology: Human Perception and Performance 42(5), 671–82.
Srinivasan, A. (2013). ‘Are We Luminous?’ Philosophy and Phenomenological Research 90(2), 294–319.
Stalnaker, R. (2015). ‘Luminosity and the KK Thesis.’ In S. Goldberg (ed.), Externalism, Self-Knowledge, and Skepticism: New Essays, pp. 19–40. Cambridge: Cambridge University Press.
Stoljar, D. (2019). ‘Evans on Transparency: A Rationalist Account.’ Philosophical Studies 176(8), 2067–85.
Weatherson, B. (2004). ‘Luminous Margins.’ Australasian Journal of Philosophy 82(3), 373–83.
Williamson, T. (2000). Knowledge and Its Limits. Oxford: Oxford University Press.
Williamson, T. (2007). ‘How Probable is an Infinite Sequence of Heads?’ Analysis 67(3), 173–80.
Williamson, T. (2009). ‘Probability and Danger.’ The Amherst Lecture in Philosophy 4, 1–35.
Williamson, T. (2020). ‘Knowledge, Credence, and the Strength of Belief.’ In A. Flowerree and B. Reed (eds), Expansive Epistemology: Norms, Action, and the Social World, pp. 1–28. London: Routledge.
Wright, C. (1998). ‘Self-Knowledge: The Wittgensteinian Legacy.’ Royal Institute of Philosophy Supplement 43, 101–122.