The epistemology of truth

Ante omnia quaestio
November 14, 2019

After previously discussing whether truth exists (The metaphysics of truth), this article focuses on the question whether truth can be known. What can I know about my brown table? Under what conditions can we consider ‘true’ to be a predicate, or to express a relation between facts? How is truth created?

Ancients and Christians

The source of truth has long been considered to exist outside the visible world. A story tells us that Pythagoras, after the discovery of his famous theorem, sacrificed a hundred oxen to the gods to thank them for letting him in on their secrets. For Plato, truth existed outside the visible world, in the realm of the Forms. Consequently, humans could never discover truth through perception, but only by reasoning.

As Aristotle thought that universals existed in the perceivable world, ordinary truths could be seen, heard, smelled, tasted and felt by anyone. He developed a set of ten categories – essence, quantity, quality, place, time, and so on – to define the properties of things. Most of Aristotle’s works were not known in western Europe until the middle of the 12th century. When they became available, the Church at first saw them as a threat to its monopoly on truth. But Saint Thomas Aquinas, a lifelong admirer of Aristotle, separated philosophy and science from theology as sharply as he could. Roughly speaking, philosophers and scientists were free to study the phenomena that everybody could directly observe, as long as they left divine truth to the theologians. Aquinas made a famous statement on truth: “Truth is the correspondence between thing and concept.”

Empiricism and rationalism

This study of perceivable reality came with changes to theories of truth. An enormous conflict broke loose between the universalists and the nominalists. The nominalists didn’t believe that universal properties were the building blocks of reality. To them, only perceivable objects were real; the words we use for objects and properties are just names – nomen is Latin for name.


The text says: “This is brother Ockham.”

One of the champions of nominalism was William of Ockham (1288 – 1347). He found universals needlessly complex. His way of reasoning came to be called Ockham’s Razor: from explanations, we should shave away all unnecessary entities and suppositions, a process we nowadays call reduction. That does not mean, however, that the simplest explanation is always the best. Ockham denied that perfect mathematical shapes, like ideal right triangles, really existed as universal forms; such figures were rather models representing reality. Ockham was a forerunner of empiricism, holding experience and sense perception to be the primary source of knowledge.

However, can we trust our senses? René Descartes (1596 – 1650) thought we can’t. He searched for an undoubtable basis of knowledge, which he found only in the fact of his own thinking: “I think, therefore I exist.” Though Descartes firmly believed that God gave humans their rational capacities and guaranteed the truth of their insights, he nevertheless held that man was capable of finding truths only by his own individual thinking. Thus, he became the father of rationalism.

In the centuries to follow, empiricists like Locke, Berkeley and Hume fought battles of thought with rationalists like Spinoza and Leibniz, until in 1781 Immanuel Kant settled the issue in his book Critique of Pure Reason. Kant wrote that we should do away with prejudice and accept that we cannot know objects and relations as they are in themselves, but only as they appear to us. Space, time and causal relations do not exist outside our minds. The same goes for my table – and for truth.

Idealism and positivism

If Georg Wilhelm Friedrich Hegel (1770 – 1831) had taken a look at my table, he would have said that it once existed as a tree and, who knows, will at some time be burnt to ashes. To him reality was not being but becoming: we need to know the history of something to understand it. But Hegel did not focus on simple truths. To him, the world was an organic, self-perfecting system of ideas. His philosophy was called idealism. Consequently, the truth can never be fully known before perfection is reached.

Auguste Comte (1798 – 1859), however, strongly believed that the properties and relations of natural phenomena were in principle ‘positively’ – that is, ‘with certainty’ – knowable, hence the name positivism. The sciences began to break away from philosophy as independent disciplines. Comte believed that the methods of the natural sciences should also be applied to the social sciences. But he equally understood the enormous complexity of sociology, the science he himself founded.

The remainder of the 19th century saw an ongoing battle between thinkers striving to unify the methods and the language of the natural and social sciences and those holding that this could never be done. The first group thought that the world could be objectively analyzed and then explained; the latter thought it could only be subjectively understood and at best portrayed.


Ludwig (left) and his brother Paul Wittgenstein, who lost his right arm in the First World War and became a famous one-handed concert pianist.

Correspondence or coherence?

Truth became important again around the turn of the century. To refine scientific descriptions and give them a general meaning comparable to mathematics, logicians tried to replace language with formulas. But they discovered that this was far from easy. One of them was Ludwig Wittgenstein (1889 – 1951), who made a nasty discovery. An argument like “all apes are mammals; all mammals breastfeed; therefore, apes breastfeed” may also be written as a formula:

All x belong to the set of y’s, all y’s have the property z; therefore, all x have the property z.

Put this way, the argument might apply to many other cases as well. But according to Wittgenstein, such an argument was a tautology: though its validity – its correct logical structure – could be logically tested, its truth still depended on the definitions of ape, mammal and breastfeeding. This led Wittgenstein to say that the limits of his language were the limits of his world.
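Wittgenstein’s point can be made explicit in modern predicate-logic notation. The rendering below is my own sketch, not a quotation; read A as “is an ape”, M as “is a mammal”, B as “breastfeeds”:

```latex
% A valid syllogism: the conclusion follows for ANY predicates A, M, B.
\forall x\,(A(x) \rightarrow M(x)),\quad
\forall x\,(M(x) \rightarrow B(x))
\;\vdash\;
\forall x\,(A(x) \rightarrow B(x))
% Validity depends only on this form;
% truth depends on what A, M and B are defined to mean.
```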

Already in the 18th century, David Hume described the problem of induction. If I hold my pen above my table and ask my wife “What will happen if I let go of it?”, she might give me a strange look and then say: “Everybody knows it will fall.” But if I then ask her “How do you know that?”, things change. “Because of the law of gravitation”, she might say. Now we leap back in time, past Newton: we are in a prehistoric cave – no pen, no table – I’m holding a stone and ask my wife the same question. “Everybody knows it will fall”, she says. “How do you know that?” I ask. She answers: “It always goes like that.” Her contemporary answer we call deduction – going from a general law to a single fact; her prehistoric answer is induction – assuming a general law from a series of similar facts. But what are the truth criteria for drawing general laws from single facts?
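The two directions of inference can be put schematically. This is my own sketch, with F standing for “falls when released”:

```latex
% Deduction: from a general law to a single case -- truth-preserving.
\forall x\,F(x) \;\vdash\; F(\text{this stone})
% Induction: from finitely many observed cases to a conjectured law --
% NOT truth-preserving, which is exactly Hume's problem.
F(a_1),\, F(a_2),\, \ldots,\, F(a_n)
\;\Rightarrow\; \text{(conjecture)}\;\; \forall x\,F(x)
```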

The 1930s saw fierce discussions between philosophers seeing truths as unique relations between facts and corresponding propositions – the correspondence theory of truth – and those seeing truth as a logical whole – the coherence theory of truth. Both theories, belonging to logical positivism, searched for an undoubtable fundament for truth and wanted to remove every metaphysical smell from it. The adherents of the correspondence theory were also relationalists, never doubting that mental representations expressed in propositions directly relate to reality. Some saw the direct observation of phenomena as the means to verify fundamental truths – yes, plural. Seeing all knowledge as grounded in some fundamental truths, these philosophers also became known as foundationalists. To them, knowledge was a building, resting on fundamental facts. Their problem, however, was that continuous verification by repeating the same observations does not produce general laws of the type “All ravens are black.” The adherents of the coherence theory, on the other hand, considered new observations to be true if they reinforced already existing natural laws. To them, knowledge was a raft, held together by consistent theories. They too had a serious problem: their general, coherent truth – yes, singular – lacked an undoubtable cornerstone.

Karl Popper (1902 – 1994) rejected classical, empirical induction. Endless series of confirming observations – an endless number of black ravens – meant nothing compared to one single counterexample. Popper called this the falsification principle: only if a theory can in principle be shown to be false is it scientific; if not, it is pseudo-science at best. The falsification principle also meant that natural laws are, in principle, true by general agreement – but only until proven false. In his book The Structure of Scientific Revolutions (1962), Thomas S. Kuhn (1922 – 1996) took the coherence theory of truth a step further. Coherent or almost coherent knowledge systems he called paradigms, following one another throughout history. An example of such a paradigm was Aristotle’s idea of Earth as the center of the universe. Already in the Middle Ages, observations were made that just didn’t match that geocentric paradigm, but these were considered mistakes of the observers. Only when Copernicus (1473 – 1543) proposed that Earth itself might be moving did a scientific crisis begin to emerge, culminating in the empirical confirmation of Copernicus’ theory by the observations of Galileo Galilei more than half a century later.
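Logically, Popper’s asymmetry rests on modus tollens. In this sketch of mine, T stands for a theory and O for an observation it predicts:

```latex
% Falsification: valid for any T and O -- one failed prediction refutes T.
(T \rightarrow O),\;\; \neg O \;\vdash\; \neg T
% Verification: invalid (affirming the consequent) --
% no number of successful predictions proves T.
(T \rightarrow O),\;\; O \;\nvdash\; T
% A million black ravens never prove "all ravens are black";
% one white raven disproves it.
```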

Also read: The metaphysics of truth.

Until Kuhn, logical empiricism didn’t have much to offer outside the natural sciences, but his book became the single most widely cited source in the social sciences. These could now be seen as generally coherent knowledge systems with a truth value comparable to that of the natural sciences. Hans-Georg Gadamer (1900 – 2002) wrote about the nature of human understanding in his book Truth and Method (1960). He argued that truth and method were at odds with each other: the human sciences shouldn’t even try to model themselves on the natural sciences. Understanding comes not only from studying facts but from individuals being conscious of the historic stream of thought and of their own connection to it – a remote echo of Hegel’s theory of becoming. Unlike Kant, Gadamer thought that prejudice was inevitable and could even be a positive stimulus. He also emphasized the role of authority: after all, most of what we believe are not our own personal discoveries. We cannot but rely on what authorities say.

Modern philosophers like Donald Davidson hold that most of what we believe is true, if we depart from a principle of radical interpretation – interpreting a speaker or a source from scratch, charitably assuming that most of what they say is true and rational. This view is very close to coherence theories, since people are likely to interpret sources starting from general knowledge. Michael Dummett simply states that truth is the aim of assertion – as I already wrote in my previous article on The metaphysics of truth. Assertion should be seen as a practice, governed by rules of the game. What people say may, of course, be a lie or an error, but will then immediately be subject to criticism.

Truth is not a synonym of reality. It is an intersubjective body of beliefs that constantly changes. Discussions take place between people claiming to possess authority in some field of knowledge and others challenging their insights. Truth as the aim of assertion is too plain a definition. Seeing the recent discussions about fake news and the manipulation of public opinion, I would rather say: truth is a social event. However, social events are indeed governed by rules of the game. This implies that we still need another article on truth. Its title is not hard to guess: The ethics of truth.

