This is an elevator pitch of Critical Rationalism (CR) by Elliot Temple that I like.
I did not use the blog’s quote function. Instead, I put the entire quote in quotation marks, because the quote function removes italics and otherwise messes with the original text.
“CR solves the fundamental problems of epistemology, like how knowledge can be created, which induction failed to solve. It’s a very hard problem: the only solution ever devised is evolution (literally, not analogously – evolution is about replicators, not just genes). In terms of ideas, evolution takes the form of guesses and criticism. CR develops much better criticisms of induction than came before, which are decisive. CR challenges the conventional, infallibilist conception of knowledge – justified, true belief – and replaces it with a non-skeptical, non-authoritarian conception of knowledge: problem-solving information (information adapted to a purpose). Although we expect to learn better ideas in the future, that doesn’t prevent our knowledge from having value and solving problems in the current context. This epistemology is fully general purpose – it works with e.g. moral philosophy, aesthetics and explanations, not just science/observation/prediction. The underlying reason CR works to create knowledge is the same reason evolution works – it’s a process of error correction. Rather than trying to positively justify ideas, we must accept they are tentative guesses and work to correct errors to improve them.
This position should not be judged by how nice or strong it sounds; it logically works OK unlike every rival. Decisive issues for why something can’t work at all, like induction faces, have priority over how intuitive you find something or whether it does everything you’d like it to do (for example, CR is difficult to translate into computer code or math, which you may not like, but that doesn’t matter if no rival epistemology works at all).
I expect someone to bring up Solomonoff Induction so I’ll speak briefly to that. It attempts to answer the “infinite general patterns fit the data set” problem of induction (in other words, which idea should you induce from the many contradictory possibilities?) with a form of Occam’s Razor: favor the ideas with shorter computer code in some language. This doesn’t solve the problem of figuring out which ideas are good, it just gives an arbitrary answer (shorter doesn’t mean truer). Shorter ideas are often worse because you can get shortness by omitting explanation, reasoning, background knowledge, answers to critics, generality that isn’t necessary to the current issue, etc. This approach also, as with induction in general, ignores critical argument. And it’s focused on prediction and doesn’t address explanation. And, perhaps worst of all: how do you know Occam’s Razor is any good? With epistemology we’re trying to start at the beginning and address the foundations of thinking, so you can’t just assume common sense intuitions in our culture. If we learn by induction, then we have to learn and argue for Occam’s Razor itself by induction. But inductivists never argue with me by induction, they always write standard English explanatory arguments on philosophical topics like induction. So they need some prior epistemology to govern the use of the arguments for their epistemology, and then need to very carefully analyze what the prior epistemology is and how much of the work it’s doing. (Perhaps the prior epistemology is CR and is doing 100% of the work? Or perhaps not, but that needs to be specified instead of ignored.) CR, by contrast, is an epistemology suitable for discussing epistemology, and doesn’t need something else to get off the ground.”
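To make the “infinite patterns fit the data” point concrete, here is a toy sketch (my own illustration, not real Solomonoff Induction, which is uncomputable): several candidate rules all reproduce the same observations, a crude “shortest source wins” tiebreaker – using the character length of the rule’s source text as a stand-in for program length – picks one of them arbitrarily, yet the fitting rules disagree about the very next data point, so shortness alone can’t tell us which, if any, is true.

```python
# Toy illustration of "many hypotheses fit the data" plus an
# Occam-style length tiebreaker. NOT real Solomonoff Induction:
# source-string length stands in for program length.

observations = [2, 4, 6]

# Each hypothesis maps an index n (0-based) to a predicted value.
hypotheses = {
    "2*(n+1)": lambda n: 2 * (n + 1),
    "2*(n+1) if n < 3 else 100": lambda n: 2 * (n + 1) if n < 3 else 100,
    "[2,4,6,5,5][n]": lambda n: [2, 4, 6, 5, 5][n],
}

# All three hypotheses fit the observed data so far.
fitting = {src: f for src, f in hypotheses.items()
           if [f(n) for n in range(len(observations))] == observations}
assert len(fitting) == 3

# The Occam-style tiebreaker: prefer the shortest source string.
shortest = min(fitting, key=len)
print("chosen:", shortest)  # the 7-character rule wins

# But the fitting hypotheses diverge on the next observation (n=3):
print({src: f(3) for src, f in fitting.items()})
```

The divergence at n=3 (8 vs. 100 vs. 5) is the point: the length criterion selects among the survivors, but nothing about shortness connects the selection to truth.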