Adam I. Gerard

Addressing Metalogical Skepticism

I wanted to address something I worked out a while back and think should be commented on.


Generally, skepticism (of the philosophical variety - e.g. Pyrrhonian Skepticism) comes in local or global versions. Global versions (like Descartes' Evil Demon Argument) apply against knowledge generally (all of it). Local versions attack specific domains of knowledge (a specific field of science, a subject, or some other subset of human inquiry).

Skeptical arguments typically take aim at some underlying, but necessary, presupposition and purport to demonstrate that such presumptions are false, or that conclusions derived from them are invalid, thereby knocking out some piece or region of knowledge.

For example, if you can’t figure out with absolute certainty whether you live in the Matrix, or whether you’re being deceived by an Evil Demon, or what have you, contends the philosophical skeptic, then you don’t really know anything. That delicious apple you just ate might be a fancy computer illusion, so you can’t conclude, with absolute certainty, that you did, in fact, eat an apple, enjoy it, etc. (since it, the apple, might not even exist).

Metalogical Skepticism

So, skepticism about logic (itself) is somewhat rare. If a skeptic refuses to accept logic at all, it’s difficult for them to then convincingly argue that one shouldn’t use logic (since, in doing so, one typically employs logical argument). Either that, or they repudiate logical inference entirely (and so are disposed to wanton irrationality, don’t care about logical argument whatsoever, and aren’t in the business of trying to convince you about logic or justification at all).

More narrowly construed attacks allow for some semblance of reasoning while attempting to draw out weaknesses in the scaffolding or architecture of logic (its systems, assumptions, dependencies, and so on - e.g. the nature of proof and judgment, theories of rationality and evidence, the nature of symbols and meaning, Truth, and so on).

I think the best-articulated one, and perhaps the most concerning one, in our present times involves challenges to logical justification by way of the metalanguage and object language distinction (near-universally employed throughout logic, mathematical logic, and mathematics):

  1. We justify a logic L (prove and characterize its properties: deduction theorem, soundness, completeness, consistency) within a metalogic L+.
  2. But, what then justifies L+? Supposedly, another metalogic L++ that has L+ as its object language.
  3. But, what then justifies L++? Supposedly, another metalogic L+++ that has L++ as its object language.
  4. And so on, ad infinitum.
  5. But, justification cannot transmit across infinite sequences.
  6. Therefore, all logics lack justification (are unjustified).
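The regress in steps 1-4 can even be mimicked as a toy program. Below is a minimal Python sketch (my own illustration, not part of the skeptic's argument): a logic at a given level counts as justified only if the metalogic one level up is justified, so the question is deferred forever and never returns a verdict.

```python
# Toy model of the skeptic's regress. A logic L(level) is justified
# only if its metalogic L(level + 1) is justified - so the call chain
# never bottoms out. The depth cap stands in for premise 5: justification
# cannot transmit across infinite sequences.
def justified(level: int, max_depth: int = 500) -> bool:
    """Defer justification of L(level) to its metalogic L(level + 1)."""
    if level >= max_depth:
        raise RecursionError("the regress never terminates")
    return justified(level + 1, max_depth)

try:
    justified(0)  # ask whether the base logic L is justified
except RecursionError as err:
    print(err)  # the base logic never receives an answer
```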

How does one address this intuitively persuasive argument?

Justifying Logics (A Sketch)

Suppose every logic were, in fact, isomorphic to every other. And, for the sake of simplicity, suppose each logic were classical. If we were able to prove that any one logic was sound, complete, and consistent, then we’d have proven as much for all of them - and regardless of their relationship to each other.

Now, a metalogic within which we can prove properties about some primitive (zero order) logic like classical logic must have additional linguistic expressiveness - must be equipped with additional linguistic machinery (to be able to talk about soundness, truth, completeness, and consistency), since these notions don’t exist in zero order logic by itself. So, the simple example given above can’t defuse the metalogical skeptical argument given previously (since it depicts a scenario where every logic is strictly isomorphic to the others). We also know that adding extra “stuff” to a formal system can transform a previously sound and complete system into one that’s not (take the Liar Paradox and the other notorious Semantic Paradoxes, or Hume’s Law).

So, we quickly arrive at two takeaways:

  1. If we’re going to address metalogical skepticism, we may need to break our argument into multiple stages: address primitive, base logics (classical sentential logic / propositional calculus / Boolean algebra) showing how each can be justified. We could use a superset of L, call it L+, to justify L. If L+ is semantically closed, L+ can justify itself (given enough linguistic machinery - reflection principles, idempotence, predicativity all come into play).
  2. The semantic paradoxes (including all of the alethic paradoxes: Curry’s paradox, Liar cycles, Liar sentence, Yablo sequences, etc.) block metalogical justification since their presence in any formal system L+ implodes it.

Can we arrive at a formal system capable of expressing proof-theoretic concepts, consistency (and hence, consistent Truth), and so on all within a language that’s semantically closed (or at least one that’s not necessarily not semantically closed to use Tarski’s vernacular)?

Yes: my previous proposal regarding Truth, Truth Grounding, and the Liar Paradox does this. It’s consistent and has sufficient machinery to capture consistent conceptions of Truth.

Proof-theoretic notions might be made consistent in the same way (using the procedure applied to Gödel's Provability Predicate - e.g. restricting either the predicate or unrestricted diagonalization, both of which play an essential role in the famed Second Incompleteness Theorem) and so might be added to it without much fanfare. If so, then classical zero order logic could be justified in FOLT++ (First Order Logic with Truth Grounding - FOLT+ - plus the rest of the proof-theoretic machinery, +), and FOLT++ can demonstrate its own justifiability.

Side Comment: The Liar Paradox in Programming

Programming languages are, by design, much more flexible than most mathematical languages (and formal languages in general). Programming languages have Try, Catch, and Error handling clauses (and keywords). So, logical paradoxes, when implemented, will typically just recurse indefinitely (throwing an infinite recursion error that gets caught in an available Try-Catch clause). Such implementations are admittedly loose: the Liar Sentence gets implemented as a non-core language expression in programming, whereas in logic the Liar Paradox is a feature of the logic itself, not a user's construction after the fact.

Consider this programming discussion thread on Reddit.

// Implementation from Reddit Thread
bool ThisStatementIs(bool x) {
    // Each call asserts the negation of its argument, so evaluation
    // never terminates (infinite recursion, ending in a stack overflow).
    return ThisStatementIs(!x);
}
The participants in that thread discuss how it's an imprecise but fairly close implementation that results in infinite recursion. I haven't thought too much about the programming language aspects of the Liar Paradox, and it's fun to consider some of these intersections.
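To make the snippet runnable, here's a Python analogue (my sketch; the function name is just a translation of the thread's C-style version), showing the Try-Catch behavior described above - the runtime's recursion guard turns the paradox into a catchable error:

```python
def this_statement_is(x: bool) -> bool:
    # "This statement is false": each call flips the truth-value it was
    # handed, so evaluation never bottoms out on its own.
    return this_statement_is(not x)

try:
    this_statement_is(True)
except RecursionError:
    # Python's recursion limit plays the role of the Try-Catch clause
    # described above: the paradox surfaces as a catchable error rather
    # than an unbreakable loop.
    print("caught: infinite recursion")
```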

The above has to do with the following properties:

  1. Propositional Depth - a quick sketch of Englebretsen, who argues that any acceptable sentence must have a determinate, finite propositional depth. One sentence can convey multiple propositions, either by being the conjunction of multiple other independent sentences ("It is raining today.", "I am happy.", "It is raining today and I am happy.") or by referring to other sentences ("Everything that person said is a lie."), each of which conveys a proposition.

i. Crassly, a sentence has finite propositional depth when we can reduce the contents of a top-level sentence into all of its atomic sentences in a finite sequence of steps. Each proposition-bearing sentence then ultimately conveys a finite number of propositions - a finite and determinate propositional depth, thereby. Sentences whose propositional depth is both n and n+1 have indeterminate propositional depth. Self-recursive expressions have this property (as do several other kinds of sentence that aren't self-recursive). This property of sentences aligns with the intuition that expressions in computer science and programming should not recurse infinitely (although there are some subtle differences between the two concepts).

ii. I think the propositional depth solution captures a real and interesting linguistic property but dispenses with many sentences that are completely valid expressions (valid even at compile time in almost all programming languages). Any self-reflexive sentence of any kind has indeterminate propositional depth and would be ruled out according to Englebretsen. This threatens to rid programming languages of concepts like idempotence, reflection, some metaprogramming, and so on. Statements made at a certain level of abstraction range over infinitely many expressions - do these, then, also lack meaning? As a result, despite finding the characterization of propositional depth intriguing, I disagree with the overall proposal (as a solution for the Liar Paradox).

There's the additional point that every sentence that has indeterminate propositional depth as a result of containing a Truth Predicate is captured by my proposal. Note that Englebretsen doesn't specify exactly how such sentences would be ruled out; arguably, the way to rule them out is just to restrict the T-Schema.
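The reduction in (i) can be sketched programmatically. Here's a toy Python model (entirely my construction - the sentence encoding and the lookup table are illustrative assumptions, not Englebretsen's formalism): depth is computed by reducing sentences to their atoms, and any referential cycle comes back as indeterminate.

```python
# Sentences are tuples: ("atom", text), ("and", [subsentences]),
# or ("ref", name) - a reference to a named sentence in `table`.
def depth(sentence, table, seen=frozenset()):
    """Return the propositional depth of `sentence`, or None if indeterminate."""
    kind, payload = sentence
    if kind == "atom":                  # atomic sentences bottom out at depth 0
        return 0
    if kind == "and":                   # conjunction: one step above deepest conjunct
        subs = [depth(s, table, seen) for s in payload]
        return None if None in subs else 1 + max(subs)
    if kind == "ref":                   # reference to another (named) sentence
        if payload in seen:             # cycle: depth would be both n and n+1
            return None
        sub = depth(table[payload], table, seen | {payload})
        return None if sub is None else 1 + sub

table = {
    "rain":  ("atom", "It is raining today"),
    "happy": ("atom", "I am happy"),
    "both":  ("and", [("ref", "rain"), ("ref", "happy")]),
    "liar":  ("ref", "liar"),           # a self-referential, Liar-style loop
}
print(depth(table["both"], table))  # 2 - finite, determinate depth
print(depth(table["liar"], table))  # None - indeterminate, ruled out
```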

  2. Infinite Recursion in computer science is a problem for at least two reasons:

(a) the practical outcome of an interpreter or virtual machine iterating indefinitely in an unbreakable loop is a locked thread (very bad in synchronous programming, and still bad in asynchronous or concurrent programming, since the thread never releases) and

(b) infinite recursion often represents mathematical properties like the above that, while not always mathematically ruled out (self-reflexive expressions, for instance), are often problematic for other reasons (like bad programming practice). Infinite recursion is usually sandwiched in Try-Catch clauses that ultimately throw an Error or Exception, terminating the thread or process (depending).

  3. Run Time and Compile Time - programming languages separate write time, compile time, and run time. Logical inconsistencies can be introduced at write time and are then caught at compile time (along with syntax or grammar violations). Other inconsistencies (even ones involving Booleans) can appear at run time (depending on one's programming language). Mathematical logic has no such separation of phases: semantics and syntax "occur simultaneously" (so to speak) and completely align in systems that are both sound (everything provable is true) and complete (everything true is provable). (Proof is usually taken to be syntactic, and when a proof system is both sound and complete, anything proven is also true within that system.)
  4. Programming and Programming Languages - programming allows for inconsistencies to a degree that mathematical languages do not (programming is in some sense mathematical - I mean the difference between, say, ZFC Set Theory and, say, Java). For example, return Types might be incompatible, some value that's supposed to be something isn't, or some asynchronous call doesn't return (in time, or at all), causing data to be missing.
  5. Booleans - while the Boolean Type exists in most programming languages, we observe that the Liar Paradox emerges only in languages with Predication (first or higher order, not zero order). In other words, the Liar Paradox cannot be formulated using Boolean objects alone.
  6. Undecidability - more precisely, the Liar Paradox (and other Semantic Paradoxes) is formally undecidable since it lacks stable fixed points (it oscillates between truth-values). As such, no determinate answer terminates the recursion chain (resulting in the infinite recursion). It's pretty cool to see this happen empirically and in real time (programmatically)!
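That oscillation is easy to observe empirically. A minimal Python sketch (my illustration): the Liar's revision rule maps a truth-value to its negation, which has no fixed point over the Booleans, so iterating it just flips forever.

```python
# The Liar's "revision rule": since the sentence says of itself that it
# is false, evaluating it flips whatever value we currently assign it.
def revise(v: bool) -> bool:
    return not v

# Iterate the rule a few times: the value oscillates and never settles.
values, v = [], True
for _ in range(6):
    v = revise(v)
    values.append(v)
print(values)  # [False, True, False, True, False, True]

# No Boolean is a fixed point of the revision rule - the formal core of
# the undecidability point above.
assert all(revise(v) != v for v in (True, False))
```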