Tag Archives: Elliot Temple

Elliot Temple is a philosopher. He focuses on epistemology.

elliottemple.com & curi.us

Parts from ET’s “How People Get Socially Conditioned”

This is from an FI Email Discussion Group post by Elliot Temple, titled “How People Get Socially Conditioned”, that I liked.

Kate goes to school. Imagine around 1st grade (6-7 years old). People socialize. Kate finds when she behaves in some ways she’s mocked, embarrassed, harassed, disliked, not invited to play, left out, etc. When she behaves in other ways, people tell her secrets, seek her out, want her attention, look up to her, follow her lead, listen to her ideas, and so on. This is *social status* but she doesn’t know the term. She just knows some of her actions get good results and others get bad results. She sees the consequences.

Sometimes it’s pretty hard to connect an action – like wearing a particular piece of clothing – with a result like gaining or losing social status. Over the years, with many examples, Kate gets better at understanding how to behave when dealing with people, both in the more straightforward cases (like don’t say things that get immediate, overt negative reactions) and in much more subtle cases.

Kate forms habits. She doesn’t know, conceptually, what all the social rules are. Her concepts and explanations are vague, incomplete and inaccurate. She keeps trying out different behavior and doing more of what works. She does a lot of learning by trial and error. Lots of her knowledge is fragile: she knows X works and Y doesn’t work, but she doesn’t know why, so trying out Z is risky (it’s hard for her to predict if Z would work or fail). This leads to living conservatively: being risk averse, being bland and focusing on fitting in. Only a minority are skilled enough, or willing to risk downsides enough, that they can take the lead on new behaviors, innovate and be trend setters. Most people aren’t leaders because they don’t want to risk an error and they are having a hard enough time just trying to do OK and not suffer too much.

Kate doesn’t just learn from her own trials and errors. She spends a lot of her life observing other people and trying to understand what they are doing, and whether it gains or loses social status. It’s safer to be the second person to do something, after seeing if the first person gets viewed positively or negatively. It’s even safer to wait for 25-75% of people to do it before joining in. And note that the vast majority of all possible changes are errors.

So Kate ends up with habits based on rules of thumb and based on partial, vague explanations. And the years go by and she doesn’t remember most of the evidence she used to create her habits. It’s just how she lives now. It feels natural to her. It’s automatic and intuitive.

Her habits are highly adapted and hard to change. They’re social conditioning. They’re static memes. They’re entrenched. They’re irrational. She has very little ability to introspect about them *and doesn’t want to*. Introspecting about her habits is dangerous. During childhood she tried, thousands of times, to introspect and understand herself and improve herself. And she got punished for it most of the time. When she tried to use reason to improve things, the results were painful – over and over and over. She learned it’s better to just accept nonsense, and it’s harder to follow it if you try to rationally analyze it. It’s better if you have intuitive habits instead of second-guessing yourself. It’s better if you only have one voice in your head – the voice of social conditioning – which you follow enthusiastically, rather than if you have a second voice confusing you and giving contrary advice.

To deal with the pain of rejecting reason, child-Kate rationalized her worldview. She came up with excuses to help her feel OK with not questioning her habits and approach to life. This was a defense against suffering which merits sympathy. That’s not the only thing that was going on and she wasn’t just an innocent victim, but it’s a substantial part of what happens. Kids do have lots of innocence and are victims in big ways. Now she’s hostile to introspection, examining her life, and so on. She’s attached to her long-held reasons for avoiding that and has convinced herself that *not* thinking is more rational and makes more sense.

Remember, again, that this is the story of approximately everyone. And, btw, scarily, most of the exceptions are now called “autistic” or “diagnosed” with another “mental illness”, and experts (in conformity and conventionality, called psychiatrists and psychologists) are brought in to make them conform. If someone’s parents, peers, priests, teachers and culture (Facebook, TV, magazines, Instagram, Twitter, etc.) aren’t enough to make a child conform, the war against the individual child will usually be escalated. First the parents usually try to escalate by themselves: they get stricter, read books with advice, etc. If that doesn’t work, a *lot* of parents will now get “experts” involved. (And even if parents don’t want “experts” or drugs involved, often a teacher will push for it, often successfully.) And the support for “expert” meddling in the raising of children has been trending upward. It wasn’t that long ago that parents expected far more control over their children and teachers played a much smaller role, and now government teachers do a massive part of raising most children and psychiatrists and psychologists are doing more and more too.

[…]

This is everyone’s life and it’s so sad. Most people hide it more than Kate by means of staying away from people capable of seeing what’s going on and analyzing it. Kate has, for whatever unusual reasons, spent years giving more info and examples about her irrationality (by actually doing it, not by sharing examples) without leaving even though she hasn’t been making progress. (She made some progress early on which impressed her, but then she ran into some parts that were hard for her, got stuck, and has stayed stuck and become very dishonest about her situation. BTW it’s somewhat common for people to make some progress initially until they reach some part that is hard for them and then get stuck. That’s the standard pattern for people who make any progress at all. But people usually leave much sooner after getting stuck.)

I recommend reading the comments in the link as well (I am currently reading them).

When you do see something wrong with an expert view, but not with your own view, it’s irrational to do something you expect not to work, over something you expect to work. Of course if [you] use double standards for criticism of your own ideas, and other people’s, you will go wrong. But the solution to that isn’t deferring to experts, it’s improving your mind.

– Elliot Temple, [comment] “Bayesian Epistemology vs Popper”

Elevator Pitch of Critical Rationalism – Elliot Temple

This is an elevator pitch of Critical Rationalism (CR) by Elliot Temple that I like.

I did not use the blog’s quote function. Instead, I put the entire quote in quotation marks, because the quote function removes italics and messes with the original quote.

Elevator pitch:

“CR solves the fundamental problems of epistemology, like how knowledge can be created, which induction failed to solve. It’s a very hard problem: the only solution ever devised is evolution (literally, not analogously – evolution is about replicators, not just genes). In terms of ideas, evolution takes the form of guesses and criticism. CR develops much better criticisms of induction than came before, which are decisive. CR challenges the conventional, infallibilist conception of knowledge – justified, true belief – and replaces it with a non-skeptical, non-authoritarian conception of knowledge: problem-solving information (information adapted to a purpose). Although we expect to learn better ideas in the future, that doesn’t prevent our knowledge from having value and solving problems in the current context. This epistemology is fully general purpose – it works with e.g. moral philosophy, aesthetics and explanations, not just science/observation/prediction. The underlying reason CR works to create knowledge is the same reason evolution works – it’s a process of error correction. Rather than trying to positively justify ideas, we must accept they are tentative guesses and work to correct errors to improve them.

This position should not be judged by how nice or strong it sounds; it logically works OK unlike every rival. Decisive issues for why something can’t work at all, like induction faces, have priority over how intuitive you find something or whether it does everything you’d like it to do (for example, CR is difficult to translate into computer code or math, which you may not like, but that doesn’t matter if no rival epistemology works at all).

I expect someone to bring up Solomonoff Induction so I’ll speak briefly to that. It attempts to answer the “infinite general patterns fit the data set” problem of induction (in other words, which idea should you induce from the many contradictory possibilities?) with a form of Occam’s Razor: favor the ideas with shorter computer code in some language. This doesn’t solve the problem of figuring out which ideas are good, it just gives an arbitrary answer (shorter doesn’t mean truer). Shorter ideas are often worse because you can get shortness by omitting explanation, reasoning, background knowledge, answers to critics, generality that isn’t necessary to the current issue, etc. This approach also, as with induction in general, ignores critical argument. And it’s focused on prediction and doesn’t address explanation. And, perhaps worst of all: how do you know Occam’s Razor is any good? With epistemology we’re trying to start at the beginning and address the foundations of thinking, so you can’t just assume common sense intuitions in our culture. If we learn by induction, then we have to learn and argue for Occam’s Razor itself by induction. But inductivists never argue with me by induction, they always write standard English explanatory arguments on philosophical topics like induction. So they need some prior epistemology to govern the use of the arguments for their epistemology, and then need to very carefully analyze what the prior epistemology is and how much of the work it’s doing. (Perhaps the prior epistemology is CR and is doing 100% of the work? Or perhaps not, but that needs to be specified instead of ignored.) CR, by contrast, is an epistemology suitable for discussing epistemology, and doesn’t need something else to get off the ground.”


Rationality is about methods of thinking which allow for the correction of mistakes. It’s wise because irrational attitudes, if they are mistaken, stay mistaken. Mistakes in rational attitudes can be fixed. Can someone reject the premises of my argument, or refuse to listen to it if they don’t want to, or misunderstand it? Yes. And for all I know they can understand it and reject it — maybe I’m wrong. But none of this is a problem or bad thing. Progress doesn’t come from airtight arguments that force people to accept reason or anything else. It comes from voluntary action, people choosing to think and wanting to gain values by thinking, people having problems they want to improve on, people recognizing their mistakes and wanting a better life. Life presents problems which can inspire people to take some initiative in improving, we don’t have to worry about forcing passive people to live the way we deem correct (and we must not do that, because we might be mistaken; a tolerant society is the only rational society).

– Elliot Temple, [comment on] The Myth of the Closed Mind, 3

A Better Way to Brainstorm

I learned a better way of brainstorming from ET’s latest video stream.

The way I usually did brainstorming:

  1. Think of a topic
  2. Think of some topic categories
  3. Brainstorm about those topic categories until satisfied

The better way of doing brainstorming:

  1. Think of a topic
  2. Brainstorm freely
  3. Organise into categories after the brainstorming
  4. Repeat #2 & #3 until satisfied

The first method might get you stuck within the categories you first thought of. The second method does not limit your brainstorming in that way but lets you brainstorm freely, independent of category. When you are satisfied with your brainstorming, you can organise the whole thing into categories for a better overview.

Programming and Philosophy

I am learning programming. The programming language I chose to learn is Scheme.

Why learn Scheme, a symbolic programming language, of all things?

The reason is that I want to understand the conceptual thinking behind programming. Regarding programming, Curi (Elliot Temple) told me:

you need the big picture instead of to treat it like a bunch of math.

Scheme looks to have good resources for doing that. From Simply Scheme’s foreword:

It [Simply Scheme] emphasizes programming as a way to express ideas, rather than just a way to get computers to perform tasks.

This is the essence of how good philosophy works as well: learning to understand concepts, integrating them into the big picture, and avoiding contradictions in the process. Objectivism teaches this. In Return of the Primitive: The Anti-Industrial Revolution, Ayn Rand writes:

There are two different methods of learning: by memorizing and by understanding. The first belongs primarily to the perceptual level of a human consciousness, the second to the conceptual.
[…] The second method of learning—by a process of understanding—is possible only to man. To understand means to focus on the content of a given subject (as against the sensory—visual or auditory—form in which it is communicated), to isolate its essentials, to establish its relationship to the previously known, and to integrate it with the appropriate categories of other subjects. Integration is the essential part of understanding.

Rand, in Atlas Shrugged:

Contradictions do not exist. Whenever you think that you are facing a contradiction, check your premises. You will find that one of them is wrong.

Do you see the similarities between good philosophy and Scheme programming?

By learning programming I work on philosophy – and by learning philosophy I work on my programming.

I am using Simply Scheme to learn Scheme.
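
To give a flavour of what “expressing ideas” in code looks like, here is a minimal sketch of my own (it is not from Simply Scheme): the idea “the average of some numbers”, written directly as a procedure.

    ; A tiny sketch of my own, not from the book: the idea
    ; "the average of some numbers" expressed as a procedure.
    (define (average . numbers)
      (/ (apply + numbers) (length numbers)))

    (average 3 4 5)  ; => 4

The procedure reads almost like the idea itself: add the numbers up, then divide by how many there are.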

Discussion Tree: An Analysis of Discussions in Idea Tree Form

Discussion trees are a subcategory of idea trees. They can help you analyse discussions, see where they fall apart (if that happens), and track which questions have been answered and how.
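
Since I am learning Scheme, here is a rough sketch of the underlying data structure (my own illustration; the statements below are invented): a discussion tree can be written as nested lists, where each node is a statement followed by the replies to it.

    ; Each node is a list: (statement reply-1 reply-2 ...).
    ; The statements are invented for illustration.
    (define discussion
      '("Can people change?"
        ("Yes, people learn new skills all the time."
         ("Is learning a skill really a change in who you are?"))
        ("What do you mean by 'change'?")))

    (define (statement node) (car node))
    (define (replies node) (cdr node))

    (statement discussion)         ; => "Can people change?"
    (length (replies discussion))  ; => 2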

Below is a discussion tree of one discussion that I had with a friend on whether people can change.


You can see the discussion tree in bigger format:
nikluk’s discussion tree – “Can ppl change?”

Elliot Temple (curi) gives feedback and comments on my discussion tree in this stream (timestamped).

Coronavirus Strategy

Not all strategies to handle the coronavirus are equal. The Hammer and the Dance explains important stuff that everyone should be aware of. Plenty of the details in the article could be debated, but I think that the core message is correct. We need to act fast and hard, so that we can contain the spread and return to normal life as soon as possible, without needlessly killing off a lot of people.

Elliot Temple wrote a summary:

Absolutely don’t give up and intentionally let everyone get the disease. And we don’t need total lockdown for 18+ months to wait for a vaccine, either. Instead, we must immediately do roughly 4-6 weeks of lockdown to get the disease under control (every day counts against an exponential pandemic). Once it stops spreading exponentially, we can manage it using testing and contact tracing, and ongoing mild and cost-effective lockdown measures while awaiting a vaccine. Any time spent on half-measures right now is condemning people to die and hurting the economy without solving the main problem. If we don’t get this right, the hospital system will be overwhelmed and millions will die as hospitals turn them away. We’re already on course for disaster, in a matter of days, if we don’t make this policy change.

I agree with Elliot.
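
To get a feel for why “every day counts against an exponential pandemic”, here is a rough sketch with made-up numbers (a fixed doubling time of five days; this is not a real epidemiological model):

    ; Rough illustration only: cases doubling every 5 days,
    ; starting from an invented 1000 cases.
    (define (cases-after days initial doubling-time)
      (* initial (expt 2 (/ days doubling-time))))

    (cases-after 30 1000 5)  ; => 64000
    (cases-after 35 1000 5)  ; => 128000

Under these made-up assumptions, delaying a lockdown by just five days doubles the number of cases you then have to deal with.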