Adam I. Gerard

On the Myth of the Given

This follows up on a previous post that addresses weaknesses in Carnap's Linguistic Frameworks approach. I wanted to suss out its last paragraphs a bit more fully.

The Myth of the Given

This is a famous argument made by Sellars that attacks the combination of Foundationalism and Empiricism. It was conceived during a time of major upheaval in philosophy (the decline of Logical Positivism, which had held sway over most of philosophy in the West for nearly five decades). The Myth of the Given accompanied various other criticisms (for example, those raised by Quine) that challenged almost all of the implicit or explicit assumptions that Empiricism made or required for its justification.

The gist of the argument is that Empiricists want to ground inferential knowledge and reasoning in sense percepts ("sense data"). The dilemma has two horns:

  1. If "sense data" is justified, then it can confer justification on such inferences. But such "sense data" cannot be primitive or basic (since being justified entails that something else justifies it). But then our sensations are not the ultimate source of knowledge and justification (contra Empiricism).
  2. If "sense data" isn't justified (if it's primitive or basic), then it seemingly can't confer justification on inferences (which is the whole point of Empiricism).

I read a great article on the subject. It summarizes The Myth of the Given like so:

"[T]he proponent of the given is caught in a fundamental and inescapable dilemma: if his intuitions or direct awarenesses or immediate apprehensions are construed as cognitive, at least quasi-judgmental (as seems clearly the more natural interpretation), then they will be both capable of providing justification for other cognitive states and in need of it themselves; but if they are construed as noncognitive, nonjudgmental, then while they will not themselves need justification, they will also be incapable of giving it. In either case, such states will be incapable of serving as an adequate foundation for knowledge. This, at bottom, is why empirical givenness is a myth. (BonJour, 1985, p. 69)"

A few possible lines of reply that have been considered:

  1. Can something be in cogito (self-justifying)? For example, take Descartes' "I think, therefore I am", which is considered to be just such a self-justifying primitive (this is almost universally rejected today for many reasons).
  2. Can something participate in justification (conveying) but lack it directly?
  3. How could something convey justification (or be a part of justification) but lack it itself?

Presentation, Kant, and SCO

I'm a big fan of the following intersection of ideas that I believe addresses points 2 and 3 (regarding how something can participate in conveying justification but perhaps lack it directly itself) above:

  1. Brading and Landry's notion of presentation: that empirical data is inherently patterned, in some sense, and it is the patterns that we engage with in model-building and theorizing.
  2. Kantian pre-structuring (by the mind). Thus, any empirical, phenomenal, or perceptual content already bears some footprint or pattern of the mind's work upon it. These patterns, although indirectly reflecting some noumenal thing, are what we engage with in natural science.
  3. My proposal (SCO), that fits in with these other two ideas and further fleshes them out.

Verbiage Clarification

Previously, I described the interaction of three ontological units or kinds as being the basis for SCO. Here, I'd like to tease out some of the differences a little more clearly:

  1. Phenomenal experiences: the aggregate unity of our visual, auditory, olfactory (smell), gustatory (taste), and tactile sensations. (E.g. - the "mind", "subjective experience", "phenomenology", etc. to use the vernacular of other thinkers.)
  2. Phenomenal structures: what is presented by our phenomenal experiences.
  3. Structures: what we reason about with our theories. We define structures to match the phenomenal structures presented by our experiences. (It is about structures that most substantive disagreements arise.)
  4. Theories: theories are about structures (in the sense above). This is broadly in line with the Semantic View of Scientific Theories, although it doesn't presuppose it (I point out several weaknesses with the Model-Theoretic View in the draft).

Justification and inference-making live in 4 (theories). We reason about things through formal edifices (though the degree of formalization may vary quite a bit).

We also challenge whether or not structures are best, suitable, or sufficient for representing phenomena. Because I maintain that theories and structures are both largely independent and embedded in a "milieu" (a flexible "sea" of possible combinations, as it were, rather than a "pyramid") that mirrors actual scientific and mathematical practice, the two can be detached and structures considered or evaluated from a meta level (e.g. - in a metalogic or a metalanguage). Such theorizing still occurs at a theory level (although one may have jumped from one theory into another, broader, more meta theory).

As such, I maintain that phenomenal experience can lack justification (to attribute justification to it is essentially a category error on my view) yet still be part of a justification-conveying system, cutting off The Myth of the Given as a viable critique of SCO.

Addressing the Myth

I wanted to take some time to more fully flesh out my previous post since it's still a bit opaque:

  1. There are weaker epistemic concepts that cognitive agents use to support or initiate cognitive projects: acceptance, for example. (That path isn’t necessary to further refine or substantiate my initial articulation, but it serves as a viable alternative route back to the same conclusion. I can retreat to the weaker claim that SCO fits into rationality broadly construed - which includes tacit acceptance and revisable postulates, literally posits that are modified or tweaked as one builds a model or theory - and thereby, I think, escape these considerations entirely.)
  2. If the unity of sensory experience is structured (patterned), then we may justifiably argue against the conception Sellars took a potshot at. In other words, sensory experiences are structured, and the structures are what we reason about, not the experiences themselves.
  3. So, things that lack justification can still stand in justification (conveying reasons or inferences) since a "facet" of them can be extracted - the pattern of the phenomena.
  4. This is why I refer to Sellars as taking a potshot: the conception of The Given he has in mind appears to be ontological atoms (sensory ones) - if our phenomenology is intrinsically structured (as Husserl, Brading, Landry, and Kant have all variously endorsed) then why would The Given be of concern in the first place?

So, to reprise a few other (jumbled) comments made elsewhere:

  1. I’m not a Foundationalist.
  2. Sensory experiences present patterns and such patterns are what our theories and inferences are about. (We reason indirectly about things and reason is itself an indirect pattern-constraining way of thinking.)
  3. I can retreat to a weaker variant of epistemic rationality that involves only tacit acceptance or assumptions to get off the ground (anyway).

Unjustified Things Exhibit Structure

An unjustified belief nevertheless has a structure to it. And we reason about the structure of such beliefs (though the beliefs themselves lack justification).

That’s just what we do when we parse an argument and assess its validity or soundness.
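A minimal sketch of that point in Python (the helper names and example argument forms are my own illustration, not from the original post): an argument's validity depends only on its structure, so it can be checked mechanically by enumerating truth assignments, without ever asking whether anyone is justified in believing the premises.

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

def is_valid(premises, conclusion, n_vars):
    """An argument is valid iff every truth assignment that makes
    all premises true also makes the conclusion true."""
    for vals in product([True, False], repeat=n_vars):
        if all(prem(*vals) for prem in premises) and not conclusion(*vals):
            return False  # counterexample assignment found
    return True

# Modus ponens: P, P -> Q |- Q  (structurally valid, regardless of
# whether the premise-beliefs themselves are justified)
print(is_valid([lambda p, q: p, lambda p, q: implies(p, q)],
               lambda p, q: q, 2))  # → True

# Affirming the consequent: Q, P -> Q |- P  (structurally invalid)
print(is_valid([lambda p, q: q, lambda p, q: implies(p, q)],
               lambda p, q: p, 2))  # → False
```

The check operates on the argument's structure alone: the same pattern verdict comes back whatever the premises happen to be about.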

More on Machine Learning

Machine Learning demonstrates how primitive inputs (structured to greater or lesser extents) can be organized into concepts by way of an intermediate classification scheme. Non-inferential primitives are put into structures and then reasoned about further (through function fitting and other statistical associations).

The appropriateness of a pattern's representation is often a subject of debate. So such patterns require justification, even though the structures presented within our experiences are just that - presented. But such patterns are debated using reasoning systems (theories) - "top-down", or at least not "bottom-up" alone.

Again, I’m also not a Foundationalist - I think we often revise “lower level” concepts in light of “higher level” ones (Information Theory being applied to Physics, for example). I dislike the “levels” metaphor altogether. I think this squares better with modern Machine Learning techniques, since classifications and function fit are evaluated iteratively through intermediate learning algorithms.

Machine Learning shows us that concept formation involves both "top-down" and "bottom-up" notions working simultaneously. Given training patterns and a learning algorithm, a function is recursively defined on inputs and expectations. Conceptual classifications - taxonomic or category assignments - are made through statistical associations. In other words, intermediate algorithms define concepts from primitive inputs. Justification occurs here through statistical likelihood.
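The loop just described can be sketched with a toy perceptron in Python (a deliberately minimal stand-in for real ML pipelines; the data and learning rule below are my own illustrative assumptions): the raw numeric inputs carry no concepts by themselves, and an intermediate, iteratively revised rule assigns them to classes.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Iteratively fit a linear decision rule to (input, label) pairs.
    Each misclassification revises the weights - the "top-down"
    expectation correcting the "bottom-up" fit."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # error drives the iterative revision
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    """Assign a primitive input to a concept (class 0 or 1)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "primitive inputs": points in the plane, labeled 1 iff x + y > 1.
samples = [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0), (0.9, 0.8), (0.1, 0.2), (1.2, 0.4)]
labels = [0, 0, 1, 1, 0, 1]
w, b = train_perceptron(samples, labels)
print(classify(w, b, (1.1, 0.9)))  # → 1 (assigned to the learned concept)
```

The fitted rule is an intermediate artifact: it is defined recursively on inputs and expected labels, and its support is statistical (how well it separates the training data), not a chain of justified premises running all the way down.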