Sunday, May 14, 2017

Are you a Benvillian or a Marpian?

Imagine 5,000 years ago when the whole world was just a bunch of tribes.
Some tribes interacted with other tribes, and some didn’t try to work with other tribes at all. The tribes that worked together did better and made more progress than the tribes that didn’t.
Some tribes did worse than that. They considered all other tribes as enemies. They didn’t just not want to work with other tribes. They wanted to work against them.

Imagine a tribe of this first kind called Benville. Benville’s traditions are as follows:
  • it’s good to work together because that creates wealth. This applies to Benvillians dealing with each other, and it also applies to Benvillians dealing with people from other tribes.
  • it’s bad to work against each other because that destroys wealth. This applies to Benvillians dealing with each other, and also to Benvillians dealing with people from other tribes.
  • it’s a crime to use force to stop people from working together.
  • it’s good to have critical discussions about anything and everything, including Benville’s traditions. This is why Benvillian traditions improve over time — because the flaws in the traditions get fixed with critical discussion.
  • it’s good to accept people from other tribes as Benvillians. You don’t need to be born a Benvillian in order to be a Benvillian. You just adopt the major traditions stated above and bam, you’re a Benvillian.

Imagine another tribe called Marp that has the following traditions:
  • it’s a crime to tell the truth about Marp, if telling the truth would lead to Marp's reputation getting worse. Marpians don’t want a reduced reputation because they fear losing people to other tribes. The punishment for this crime is 50 beatings.
  • it’s a crime to criticize Marpian traditions. The punishment for this crime is death. Marpians don’t want the risk of some Marpians running around disobeying Marpian traditions. That would be bad for Marp in lots of ways, including lowering its reputation.
  • it’s a crime to leave Marp to become part of another tribe. The punishment for this crime is death. Marpians don’t want outsiders to know their secrets. So if you leave, Marpians will kill you so that the other tribes don’t learn their secrets.

Imagine a Benvillian named Paul and a Marpian named Manny meeting each other in a forest with no one else around. They talk to each other about their tribes for weeks and weeks. Manny likes the Benvillian traditions and decides that it’s better for him to live in Benville than to go back to Marp. So he resettles in Benville.
Later a Marpian finds out that Manny left Marp, went to Benville, adopted Benvillian traditions, and renounced Marpian traditions. In Marpian tradition, this means Manny committed the crime of leaving Marp for another tribe. So now Manny has a bounty on his head. According to Marpian tradition, Manny must be killed so that Manny doesn’t tell Marp’s secrets to Benville and other tribes.

Now imagine a Benvillian named Amy who decides that she will reject Benvillian traditions and accept Marpian traditions instead. What does the Benville leader do? Nothing. Benvillian tradition doesn’t consider this a crime. Benville doesn’t mind losing people to other tribes if those people don’t want to live as Benvillians.
So if you leave the Benville tribe, like if you say “I’m no longer a Benvillian,” nothing happens to you. But if you leave the Marp tribe, you are supposed to be killed.
If you criticize Benvillian traditions, nothing bad happens and instead what happens is that Benvillian traditions improve as the flaws in them get fixed. But if you criticize Marpian traditions, bad things happen, you get killed. And this prevents flaws in their traditions from getting fixed. 
So Benvillian traditions evolve over time, getting better and better and better without limit. And Marpian traditions don’t get better over time. Actually they get worse. Marpians don’t like losing people to other tribes. So what happens is that future Marp leaders invent new ways to force Marpians to stay Marpians.

Now fast forward a few thousand years. Benville has made tons and tons and tons of progress. Marp has made little to no progress. And Marpian traditions have changed to be way more controlling than they were thousands of years ago.
Here are examples of new Benvillian traditions:
  • new ideas about how to find the truth 
  • new ideas about how to live well
  • new ideas about how to operate a government in ways that are consistent with major Benvillian traditions.

Here are examples of new Marpian traditions:
  • new ideas about how to pressure people to avoid thinking critically about Marpian traditions. For example, a Marpian leader invented an idea that if you think critically about Marpian tradition, then you will burn in a fire for eternity while your flesh keeps getting remade so that you can keep getting burned.
  • new ideas about how to get non-Marpians to adopt Marpian tradition. For example, a Marpian leader invented the idea of using Benvillian communication technology to spread Marpian traditions layered with lies in order to make it more likely for people of other tribes to be tricked and adopt Marpian traditions.

Sunday, January 22, 2017

Saving ideas

People often try to save ideas.

Like when a person has an idea that he’s not fully satisfied with: he thinks it’s partly wrong, that it has some benefits but some flaws too.

For example, say you’re married with kids and the idea you’re trying to save is to stay married. You see pros with staying married but you also see some cons with it.

Trying to save an idea that you see pros and cons with is ok as long as you’re using good methods. Using bad methods is what makes things go badly and causes suffering.

Here’s a bad way to try to save an idea.

Convince yourself that the pros of the idea outweigh the cons. This is no good. It means sticking with what you already know to be wrong. It means settling. Sacrificing yourself. Notice that this isn’t about creating new options that could work to meet your standards. Instead it’s about trying to fool yourself into picking an existing option that you already know to be a bad one.

Using the married with kids example from above, it could be that the person cares about staying married because he thinks it’s best for his children. So he thinks that a pro of staying married is that his children get to have a better life than they would if the parents divorced. This actually doesn’t work. He’d be a bad role model for his kids, teaching them that life is about sacrifice for the “benefit” of others.

Here’s a good way to try to save an idea.

Try to create a new idea from the flawed one with some changes. Focus the changes on parts that are flawed. Aim for creating an idea that isn’t already criticized by criticisms you already know of.

Still using the married with kids example from above, the person could try to save the pro from the old idea (good life for his children) while changing some other parts. For example, instead of staying married, they could get divorced while trying to continuously improve their parenting relations so that they are better parents to their children. In this way, the parents have decreased their interactions, limiting them to just things about the kids. So they would have removed a huge source of fights while also shifting their time and effort into helping their kids.

Sunday, November 22, 2015

Are you thinking of quitting FI?

Some people quit FI discussion group. They unsubscribe, so they stop getting FI emails.

They do it for different reasons.

Some do it because they misinterpreted things and think that someone on FI was angry at them. And they are scared of people getting angry at them. So they quit FI so that they don’t have to deal with their fear of the perceived “anger”. Basically they mistook criticism for hostility, because their win/lose worldview interprets things that way.

Others quit FI because they’ve tried to implement FI ideas in their lives and failed and they feel that things got worse instead of better. But they didn't understand the ideas, so they shouldn't have put them into action yet. They were overreaching. FI advises against that.

Sometimes it’s a case where the person blames himself for not engaging with FI well.

Other times it’s a case where the person blames FI instead. They’ll say something like “I’m thinking of quitting fi because it caused more problems than it solved”.

But that’s a mistake. They are blaming fi for something that fi isn’t to blame for.

What’s to blame is the person's own ideas.

But there’s an upside. You can change your ideas!!

If you continue with fi, you'll have an infinitely better opportunity of changing your ideas because you'll get help from fi criticism, suggestions, and questions.

Rejecting fi means accepting a state of no progress on your problems. Accepting death. 
Guaranteeing failure.

Rejecting fi is not a solution. 

Rejecting fi is the same as rejecting hope. Embracing pessimism.

Friday, November 20, 2015

Bad "science" that claims to explain laziness

This article claims that science explains laziness.

> There’s a neurological reason for apathy and laziness, according to new research. Inefficient connections between certain areas of the brain may make it harder for some people to decide to act.

well, worse ideas do make it harder for people to decide to act. 

i'm already expecting that the researchers ignored the role of ideas in decision making.

> In each round of the game, the researcher offered the subject a reward in return for some effort. Participants had to decide whether to accept the offer, based on whether the reward was worth the effort. Predictably, the participants who had already been identified as apathetic were much less likely to accept offers that required effort, even if the reward was large - but when apathetic subjects did choose to accept an offer, their MRIs showed much more activity in the pre-motor cortex, an area of the brain involved in taking actions, than in more motivated participants.

> That was the opposite of what researchers expected. They had assumed that lazy people’s pre-motor cortices would show less activity when deciding to take action.

Why did they think that? What was their explanation that they drew their prediction from? The article doesn't say.

If the researchers had no such explanation while making their prediction, then they aren't doing science. See _The Beginning of Infinity_ for explanation on why explanation-less "science" is not science.

> “We thought that this might be because their brain structure is less efficient, so it’s more of an effort for apathetic people to turn decisions into actions,” said lead researcher Masud Husain, a professor of neurology and cognitive neuroscience at Oxford University, in a statement. After further investigation, it turned out that people who identified as apathetic had less efficient connections between the anterior cingulate cortex, a part of the brain involved in making decisions and anticipating rewards, and the supplementary motor area, a part of the brain that helps control movement.

> “The brain uses around a fifth of the energy you’re burning each day. If it takes more energy to plan an action, it becomes more costly for apathetic people to make actions,” explained Husain. “Their brains have to make more effort.” Husain and his team published their findings in the journal Cerebral Cortex.

Can having some ideas cause one to spend more energy to plan an action, as compared to having other ideas instead? Yes!

For example, if you have good ideas about how to make decisions, then you'll be more efficient at decision making. That means you'll be putting in less effort to make a decision, as compared to the same decision being made by somebody who has worse ideas about how to make decisions. Basically, having better ideas about decision making means running into less trouble when going through the process of making a decision.

What are good ideas about how to make decisions? To date, our best understanding of how to decide is the method known as Common Preference Finding. It's a method focused on resolving active conflicting ideas, such that only one non-refuted idea remains, which is the one to be acted on. A person who doesn't understand how to resolve conflicts will have lots of trouble making decisions, because he has conflicting options active in his mind and doesn't know how to rule out all but one of them.

People with bad ideas about how to decide do a lot of wasteful thinking. Consider an example. Say a person with bad ideas about how to decide has a decision to make. He knows of a couple of options he can act on, but he doesn't know which one he should act on. His method consists of choosing one of them, without ruling out the other. But how can he CHOOSE one over the other without a reason? Well, that's the point. He can't do it reasonably. He's doing it arbitrarily.

He says to himself: Do I choose option A or option B? Well, I don't know which is best, but I have to choose one anyway. I'll choose A. No I'll choose B. No I'll choose A. No I'll choose B. Ok I'll act on B now. NO WAIT! I want to do A instead. Ok I'll act on A now. NO!!!

And he does that indefinitely until he finally chooses, but it's not a confident choice. He's still thinking that option B is in play while acting on option A, or vice versa.

Now imagine another person who has better ideas about how to decide. He knows the method known as Common Preference Finding. 

He says to himself: Do I choose option A or option B? What other options can I brainstorm? And what criticisms can I brainstorm that will rule out all but one option? (And then he does some brainstorming for more options and more criticisms.) Ok I found another option C to consider. And I found a criticism that rules out options B and C, leaving option A as the lone non-refuted option. So I'll choose option A to act on. Problem solved. Next problem.

The guy doing CPF spends less energy on his decision as compared to the guy not doing CPF.
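The CPF process described above can be sketched as a tiny program. This is my own illustrative sketch, not part of the original post; the function name, the data shapes, and the example options and criticism are all invented for illustration. The point it shows is the mechanism: criticisms rule out options, and you only act once exactly one non-refuted option remains; otherwise you go back to brainstorming.

```python
# A minimal sketch of Common Preference Finding (CPF) as described above:
# brainstorm options, brainstorm criticisms, rule out every option that a
# criticism refutes, and act only when exactly one non-refuted option remains.
# All names here are hypothetical, invented for this illustration.

def cpf(options, criticisms):
    """Return the single non-refuted option, or None if more brainstorming is needed.

    options:    list of option names
    criticisms: list of (explanation, refuted_options) pairs
    """
    refuted = set()
    for explanation, targets in criticisms:
        refuted.update(targets)

    survivors = [opt for opt in options if opt not in refuted]
    if len(survivors) == 1:
        return survivors[0]  # the lone non-refuted option: act on it
    return None  # conflict unresolved: brainstorm more options and criticisms


# Round 1: two options, no criticisms yet, so no decision can be made.
assert cpf(["A", "B"], []) is None

# Round 2: a third option C was brainstormed, plus a criticism that
# rules out B and C, leaving A as the lone non-refuted option.
choice = cpf(["A", "B", "C"], [("costs too much time", ["B", "C"])])
assert choice == "A"
```

Note the contrast with the arbitrary chooser: the loop here terminates with a confident choice, because the refuted options are ruled out by an explanation rather than left "in play".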

Consider an analogy. People with bad ideas about how to decide are like people whose neuron-to-neuron connections are made of copper, while people with good ideas about how to decide are like people whose connections are made of superconducting material. The result? Copper has lots of imperfections that prevent most electrons from flowing smoothly through it — they “bump” into the imperfections (atoms that ideally shouldn’t be there), which causes resistance, slowing the flow of electricity and wasting lots of energy. Superconducting material, on the other hand, doesn't have those imperfections, so electrons are not blocked from flowing smoothly through the material. There's no resistance, and no dampening of the flow of electricity.

So, if you want a no-resistance brain, learn CPF. If you want a high-resistance brain, don’t learn CPF and just randomly and uncritically create your philosophy by picking up bits and pieces from your parents and society.

Sunday, November 15, 2015

You Can Change

It’s well known in Western culture that people can change. But some people have doubts. They doubt that some things about them are within their power to change, like their short temper or how intelligent they are.

But this is a mistake. Even these things are changeable. They aren’t static traits handed to us at birth. It’s well known that if you work at it, you can get better at learning/problem-solving, which is what intelligence is.

Some people know that they can get better at learning but they find it hard to do, and they feel bad about that.⁠1 So to avoid that bad feeling, some of them give up and accept that they can’t change. That gives them the feeling that they’re off the hook because if that’s true then it’s not their fault - not their responsibility. But the truth doesn’t depend on what they’d like the truth to be.

And it becomes a self-fulfilling prophecy. They fool themselves into believing that they can’t change. So then they don’t try to change. So then no change occurs. They use the fact that they didn’t change as evidence supporting the theory that they can’t change. But evidence doesn’t support theories - evidence only rules out theories. And the evidence that they didn’t change does not rule out the theory that they can change but didn’t because they didn’t try the right things to change.

Note that this book isn’t meant for them. I can’t persuade them of something while they’re making excuses about it. Nothing I could say would get through to them. Instead, I want to talk to the people that don’t want to make excuses.

I’m also only interested in people who are open to considering ideas different than their own. You should expect to find a bunch of cases where you disagree with me. And you shouldn’t assume that I’m wrong. Nor should you assume that you are wrong. You’ve got to come to your own conclusions according to your own independent judgment. 

Disagreements are good! They are opportunities to learn - for you to learn why you’re wrong, or for me to learn why I’m wrong, or both. This is how progress is made.

How Much Can We Change?

Now common sense says we can change, but it doesn’t explain to what extent we can change. To understand that, it helps to understand our best theory to date⁠2 explaining how the human mind works and explaining how that contrasts against how other animal minds work.⁠3

For a non-human animal, software is installed in its brain according to its genes, and then the animal acts according to its software (its mind). It’s not able to change that installed software, because that’s how the software is designed. This is analogous to how a computer chess program can’t learn checkers: the program cannot change the software it was given.

But for a human, software is installed in his brain according to his genes, and then he has the ability to change any part of his software. He has this ability because his software is designed to be able to change itself. This implies that a person can learn anything, solve any problem, create any knowledge. This is the faculty of reason.

The only thing we can’t do is break the laws of nature. Everything else is within our reach. The key is knowledge.⁠4

And why should we do it? Why should we examine ourselves and look for areas of improvement? What’s the point? Because as the philosopher Socrates said over 2,400 years ago, the unexamined life is not worth living!

1 To understand why people feel bad in these situations, it helps to understand shame and static traditions. See
2 What do I mean by “best theory to date”? I answer that in the FAQ at the end of this book.
3 For details see meme theory in the book The Beginning of Infinity, by David Deutsch.
4 What do I mean by knowledge? See

This essay is the first chapter of my book Anger: And How to Change. You can purchase the book on Amazon. If you like the book, and especially if you don't, please send me critical feedback or questions and I'll gladly review and reply. Thank you.

Wednesday, September 30, 2015

Parent punishes child for stealing by destroying his xbox

Check out this parent punishing his child for stealing. See my comments about it below.

Another day in my crib. If ya kids steal little shit now, fix it before it's too late.. I don't beat em no mo. That don't teach em shit...
Posted by Showboat Hogg Life MC on Sunday, September 27, 2015

why does child think he should steal?

parent didn't ask

parent doesn't realize that it's his fault that child thinks stealing is his best option.

parent is acting as if he has no fault in why child chose to steal.
maybe the child thinks stealing is his best option because he doesn't think parent will give him what he wants or listen to reason about it.

Monday, August 31, 2015

Reacting badly to being told what to do or that you’re wrong

Some people get angry when they are told to do things they don’t want to do. Or when they’re told they are wrong. Or when they are questioned.

It’s people who were raised by authoritarian parents who used anger, punishment, violence, consequences, pain, etc. to teach lessons. They were told to obey. And punished when they refused. Their questions and criticisms and protestations were ignored. Arguing their case didn’t help. Their parents didn’t listen to reason.

This happened so many times that they automatized the whole process. It became a trigger. The trigger fires when the person is contradicted in some way. The result is a bad feeling and possibly getting angry too.

Some of these people take it further by adopting an explicit philosophy to match. The problem they have is with people expecting obedience. And they correctly understand that obedience is implied by authority. But where they go wrong is believing that authority is implied by the existence of truth. 

So they reject authority and throw out truth with it because they think it’s a package deal. They think you can’t reject one without the other.

So they adopt a philosophy that rejects authority but also rejects that there is truth in morality. But if authority is something that should be rejected, then that implies that there exists truth, and that authority is a deviation from that truth. Why would you reject something unless it’s wrong, or not as good as some other better competing thing?

A person with this philosophy who hears someone say “you should do X” or “you are wrong about Y” will misinterpret that to mean that the speaker is presenting his view as the final complete truth and demanding obedience.

But that’s a mistake. The truth, as far as we know, is that people don’t have access to the final complete truth and instead what we do have is fallible knowledge about the truth. And to the best of our knowledge, it’s wrong to demand obedience to one’s moral views.

By that I mean that there is a better way that is known. And that is to ask for an audience so that your voice may be heard. Where your goal is to alert people of mistakes or rival theories that they didn’t know already, and where they would be glad if you stuck your neck out to tell them about it because they know that it could benefit them.

So by getting upset in these situations you’re ignoring a few possibilities. The person could be trying to help you (win/win) instead of hurt you (win/lose). He could be against authority in truth-seeking. He could be against demanding obedience. 

He could be against anyone even voluntarily accepting his views on his word. He could hold the belief that you should make your own independent judgement, rather than rely on his judgement. He could be wrong, so you should judge things for yourself to help catch mistakes in his ideas. And you can’t do that if you blindly accept what he says as the truth. 

And even if the person is right, and you blindly accept what he said as the truth, you could easily have misunderstood his idea. So without doing your own independent judgement, you’d be making tons of misunderstandings and believing all sorts of false things that you’d be falsely attributing to him.

So, when you’re told to do something you don’t want to do, or you’re told you’re wrong, and if you react badly to this, it could be because you are falsely assuming that the person thinks he can’t possibly be wrong and that he’s demanding obedience to his moral views. You’re seeing the world through the win/lose lens.

Weird business practices in the tv series Parenthood s2 episode "Meet the New Boss"

Parenthood s2 episode "Meet the New Boss" @12:10

The context is that a company gets a new owner. The main manager Adam just recently met his new boss. And he doesn't know what to expect.

Adam finds out that his boss isn't talking about his direction for the new company. Adam keeps asking about it but the boss doesn't talk about it.

Then Adam is talking with his wife about it and he says about his boss that "and now he runs TNS"... "i could be out of a job"... "maybe he'll come up with a good idea, if necessary, I'll give him some guidance".

this is interesting.

i don't think i'd ever be in the situation where i'm an owner of a business and i don't know what's going on in the company yet. i mean, i'd investigate things before buying the company.

but let's say that somehow i'm in that position. what would i do? i'd investigate. i'd say to Adam:

"i want to get up to speed on what's known about how to run this company. i want to learn the company's traditions and then work on evolving them, rejecting some traditions, evolving others, and starting new ones."

But Adam wants the owner to tell him the company's direction, rather than Adam explaining to the owner the company's existing traditions (like its current direction).

Discussion with a moral relativist about whether morality is objective

this is a discussion i had with somebody on fb about whether morality is objective. he called it "moral realism" so i continued to use that term with him. i've included just the most relevant parts of the discussion.

On Mon, Aug 28, 2015 somebody offlist:

> Sam Harris and Islam are the same stuff. Moral realism wrapped up in a hateful, polarizing package.

are you disagreeing with the moral realism part?

On Mon, Aug 28, 2015 somebody offlist:

> Of course. I am an atheist because there is no evidence for god(s). I am not a moral realist for the same reason - lack of evidence.
> Moral realism is an artifact of religiosity and theocracy. If one is an atheist and still clings to moral realism, all that tells me is one has stopped questioning much, much too soon. Religion comes with a lot of baggage, from moral realism to retributivism dressed up as justice. Saying one does not believe in god(s) is only scratching the surface.

why do you believe that more questioning of that idea would necessarily lead to realizing that revenge is not justice? it’s because you believe that the goal post is the truth, and questioning leads one closer to *the truth*. hence moral realism.

On Mon, Aug 30, 2015 somebody offlist:

> When were we objectively morally right about gender equality?

Maybe never. But the idea that men should have legal rights that women don't have, is wrong.

> And why so?

you mean, why do I believe we're right today and that people were wrong in the past? because we know of flaws in their theory that they didn't know about.

> The very least a moral realist *must* recognize is that we were right at some point or wrong at some point.

Which I've acknowledged.

> And for the record, just because I express a moral opinion (and that's what I recognize it is) does not mean I have become a moral realist, thinking my moral opinion is the absolute, objective truth.

?! so you're saying that having a moral opinion while being a moral realist means that the moral realist believes that his opinion is the absolute objective truth? why do you believe that? I'm a moral realist AND I have moral opinions AND I don't believe that any of my opinions are absolute objective truth. You think I'm wrong to hold these views but you haven't explained how I'm wrong. Can you explain that?

On Mon, Aug 30, 2015 somebody offlist:

> Interesting. So what is your support/evidence/justification for "...For example, the idea that men should have legal rights that women don't have, is wrong."

Searching for support/justification is flawed epistemology. The best epistemology known to date explains that we need to look for flaws/problems in our theories. And that a theory is treated as rejected only if it has known flaws/problems. And a theory is accepted if you can't find any flaws/problems with it.

Do you see any flaws or problems with the idea that women should have the same rights as men? I don't.

On Mon, Aug 30, 2015 somebody offlist:

> Alright, so how do we know that liberalism is the truth?

Find a flaw. Explain the flaw. Then submit that explanation to criticism. If the explanation survives criticism, then as far as we know it's correct, and we either reject the theory or change it to account for the flaw in the old version. And if we don’t find a flaw (in liberalism), then it’s our best knowledge about the truth.

Though we could be wrong about it. So we should be open to changing our minds. We should be open to new criticism and new rival theories.

On Mon, Aug 30, 2015 somebody offlist:

> That would make it sound *almost* like a scientific endeavor, but without the final arbiter (that empirical reality serves as in science).

Empirical evidence isn't the final arbiter. Evidence can be MISINTERPRETED. So a scientist (and anybody) must look out for misinterpreting the evidence. So the relevant question here is: How does one analyze evidence in order to catch one's misinterpretations of the evidence?

> Tell me, how do we know when we have found a flaw?

If you think you found a flaw, and you don't see any criticisms of it, then as far as you know the flaw is the best knowledge you have about the truth.

On Mon, Aug 30, 2015 somebody offlist:

> Honestly, though, if that is your view of morality, there doesn't seem to be much room for the confidence most moral realists in the past have craved when making their decrees. I mean does truth really amount to some variant of argumentum ad populum without any sort of arbiter?

Truth is not judged by popular vote. 

Truth is judged by critical discussion. If a theory survives criticism, then it is accepted as the best knowledge to date about the truth. If a theory doesn’t survive criticism, then it is rejected for having known flaws. Theories that survive criticism are non-refuted (no known flaws). Theories that didn’t survive criticism are refuted (has known flaws).

> That hardly seems like a process to truth... I guess truth won't really mean much any more, will it? Now "knowledge," and "truth" are watered down to nothing.
> So, you would never say you have the moral truth, but you hold that there is moral truth nonetheless, is that correct?

Yes. It's the same as in science. We have knowledge of the truth (of the physical world). But none of our current scientific theories are THE FINAL COMPLETE TRUTH. They are all flawed. But we don't reject an accepted theory UNTIL a flaw is made known.

On Mon, Aug 30, 2015 somebody offlist:

> Actually, in a very important way, empirical reality is the final arbiter for science. Yes, there is the possibility of misinterpretation, and there are deeper problems (like that different theories could conceivably be just as scientifically valid for the same phenomena), but in the end we measure them against empirical reality.

What would you do in this situation? Say you have 2 scientific theories competing to explain some aspect of physical reality and they both are consistent with all known evidence.

Then how do you choose between the 2? Empirical reality won’t help here unless you can find NEW evidence that contradicts 1 of these theories leaving the other untouched.

But even if that happened there’s nothing FINAL about it. Somebody could find new evidence. Or somebody could find new criticism of an existing explanation of evidence, thus refuting that empirical-explanation, thus saving some previously-refuted scientific theory. So FINAL arbiter doesn’t make sense.

or is there a reason you are using the “final” qualifier that i’m not aware of?

> What, if anything, is the final arbiter for your position? 

The tentative “final” arbiter is a simple test: has the theory survived criticism or not?

> It sure would be helpful to find it, because as it is we are just guessing between internally consistent but contradictory "theories." (Not to be confused with scientific theories.)

I don’t believe that your theory is internally consistent. And I think you haven't really given me the opportunity to explain what I understand about this to you.

On Mon, Aug 30, 2015 somebody offlist:

> To be honest, our perspectives seem similar in some ways.

I agree.

> I do not subscribe to what I would call the conceit of moral truth. Your position does not seem to mesh well with the desires of the "confused" moral realists to make and defend moral prescriptions. My perspective certainly doesn’t.

Can you tell me more of what you mean by “moral prescriptions”? Do you mean something like where people are supposed to obey these “prescriptions” in the sense that they have to do it even if they don’t agree with it? Like against their will?

Demanding obedience is morally wrong. By that I mean that there is a better way that is known. Instead of demanding obedience, you should request an audience so that your voice may be heard. Why? For the purpose of alerting people of mistakes or rival theories that (a) they didn’t already know and (b) they would be glad if you stuck your neck out to tell them about.

And when I say that you should do X, I'm including an implicit "but only if you wholeheartedly agree with me about you doing X". I will never demand that you do what I say on my authority, because I reject authority in truth-seeking. I also don't even want you to *voluntarily* accept what I say on my authority. Again, because I reject authority in truth-seeking.

You should exercise your own independent judgement, not rely on me. I could be wrong, so you should judge things for yourself to help catch the mistakes that I make. You can't catch the mistakes in my ideas if you blindly accept what I say as truth. And even if I'm right, and you blindly accept what I say as truth, you could easily misunderstand me. So without exercising your own independent judgement, you'd be making tons of misunderstandings and believing all sorts of false things that you would be falsely attributing to me.

> What, precisely, is the difference between holding that there is unreachable truth and not bothering with truth at all?

The difference is this:

Unreachable truth - I cannot reach perfection but I can do a good job of it. I can make progress continuously. Tomorrow will be better than today (on average). How? Because I’m finding and fixing flaws in my knowledge. That’s what progress is. It’s evolution.

Not bothering with truth at all - I cannot reach perfection so I’ll just stop trying. Stagnation is ok. I’ll just learn to deal with the suffering. That’s what everybody else does. And they seem happy.

> Especially when there isn't even any way to determine when there is or is not a flaw in the moral "theory?”

Sure there is. Whatever theory you are considering, you need to consider rival theories too. And you need to use criticism to rule out all but one. The theory that survives criticism is the one that is deemed non-refuted. The rival theories that are criticized are deemed refuted. And this is tentative since new criticism can be found in the future.

Now there are nuanced situations, like ‘what do I do when I have two rival theories that are not criticized?’. All the questions (AFAIK) that have been asked about this have been answered (AFAIK).
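The theory-selection test described above (tentatively keep whatever survives criticism, reject whatever has a known flaw) can be sketched as a toy filter. Everything here is an illustrative assumption of mine, not anything from the discussion: the theory names are made up, and "criticism" is crudely modeled as a list of unanswered objections attached to each theory.

```python
# Toy sketch of the refuted / non-refuted distinction.
# A "theory" is just a name mapped to the list of known, unanswered
# criticisms of it. An empty list means it has survived criticism so far.

def non_refuted(theories):
    """Return the names of theories with no known criticisms."""
    return [name for name, criticisms in theories.items() if not criticisms]

theories = {
    "theory A": [],                         # no known flaws: non-refuted
    "theory B": ["contradicts new evidence"],  # known flaw: refuted
}

print(non_refuted(theories))  # -> ['theory A']
```

The verdict is tentative in exactly the sense the post describes: adding a new criticism to a surviving theory, or answering the criticism of a refuted one, changes which theories the filter returns.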

> Is it a "point of the journey" moment? If our moral theory is always flawed, and we recognize it as such, from whence does any confidence in our moral prescriptions arise?

Hmm, I’m trying to use my interpretation of what you mean by “moral prescription” and it doesn’t seem to fit. I thought you meant “demanding obedience” to the moral prescription.

Can you clarify?

> To be fair, you asked me the same question in a different form earlier. I would respond "in the democratic process of negotiation." I'm not sure how you would respond.

What is the context? Do you mean a 2-person interaction? A 5-person interaction? A whole nation? Or do you mean all of these?

> A major problem for you is that most are not going to understand moral realism the way you do.

What kind of problem is it for me? Do you mean it’ll make communication with them more difficult, with more misunderstandings?

Or do you mean some other kind of problem?

> Rightly or wrongly, they are likely to dismiss it as not giving them the power to prescribe (although they might enjoy the elasticity of it).

Do you mean “the power to [demand obedience]”? Or do you mean something else?

> For most people morality is about having prescriptive power. Do you give it to them in some way I am not seeing?


Morality is not authoritarian. Nobody is infallible.

Knowledge is not authoritarian. Nobody is infallible.

Knowledge is created by people. People are fallible so the knowledge we create is fallible.

To clarify this, see:

-- Rami

Package deal. Moral realism and demanding obedience to one’s moral views.

Some people who drop their belief in god also drop the idea that there is objective truth in morality (aka moral realism).

Why do they do that? It’s because they believe that moral realism implies demanding obedience to one’s moral views. They treat moral realism and demanding obedience to one’s moral views as a package deal - as if you can’t have one without the other. And since they are against the idea that people should demand obedience to their moral views, they reject the whole package, instead of just rejecting the one idea they have an actual problem with.

The thing is, the best knowledge we have to date about the truth regarding morality is that it is wrong to demand obedience to one's moral views.

Saturday, August 29, 2015

Some good philosophy in the tv series Parenthood s2 episode "Put Yourself Out There"

Parenthood s2 episode “Put Yourself Out There" @25:00

Some good philosophy, arguing for taking action instead of being passive.

The context is that a college-bound high school senior is talking to a successful businesswoman. Like, to get advice and connections maybe.

The girl expressed that it must suck that people ask for her help.

The woman replied, “If you never ask for what you want, you’ll never know if the answer is gonna be yes or no. You gotta take the risk.”

But why would it suck? Sacrificial help would suck. But it could easily be mutually beneficial.

I guess the girl doesn’t realize that it could easily be mutually beneficial. So it doesn’t get accounted for in her reasoning.

Greco wrestler gets manhandled by a 132 lb guy

Check out this video of a 5'5" tall 132 lb guy physically controlling a wrestler who looks to be like 5'10" tall 240 lbs.

The smaller guy is better at controlling his balance. He reacts quicker to changes in his balance. This means that he's using his center of gravity better than the bigger guy is.

Note, though, that the smaller guy's center of gravity is lower than the bigger guy's, basically because he's much shorter.

The bigger guy is exerting a lot more force but the smaller guy absorbs that force effectively.

As the bigger guy gets tired, and starts doing more lunging, the smaller guy uses the bigger guy's momentum against him to the point of throwing him to the ground.

I wonder if it's fake though.

Chen Ziqiang - Chen style Tai Ji & Wrestling
Interesting video of Chen-style Tai Ji Quan lineage holder Chen Ziqiang demonstrating some throws on a wrestler. (He's the nephew of Chen Xiaowang, for those of you who know Tai Ji.) Chen Ziqiang is quite well known, as he often competes in San Shou matches (akin to kickboxing that allows grabbing and throws) and accepts challenge-style matches. The large guy in the video is a wrestler (Greco) and a Sambo player. In contrast, Chen Ziqiang is 132 lbs (60 kg) and 5'5" (165 cm).
Posted by Fighting HQ on Tuesday, February 24, 2015

In my high school football days, there was a game where I was badly beating this 6'5" 350 lb guy with just a 5'10" 220 lb frame. I was the offensive lineman pushing the defensive lineman 10 feet past the line of scrimmage while my ball handler ran through the gaping hole I had just created. He was mostly fat and mostly standing up instead of getting low to the ground, but the point is that size isn't an advantage if it's not used correctly.

Thursday, August 27, 2015

Strange passivity in the tv series Parenthood s2 episode “The Booth Job" @37:00

The father and mother of a boy are thinking about being married to each other. It’s happening during a romantic moment where they are already wearing wedding rings. They were using the rings to fake that they were married, so that they could have a better chance of getting into a school for their boy.

They each knew that the other was thinking about being married to each other.

He almost proposed. But then he didn’t.

And as soon as he backed out, she immediately reciprocated. She gave the signal that she’s out too.

So, he had an objection. But he kept it to himself. He could have discussed it with her. They could have learned that the objection was a mistake and he could have gracefully and happily gotten past it.

Also *she* could have asked him, "what’s holding you back?” And if he evaded, she could have persisted by asking various questions until he exposed the issue. Or until she decided that he wasn’t worth it.

What are they afraid of? What’s the worst that could happen?

So, one approach to conflict is to resolve it by critical discussion between all the people involved in the conflict. This is graceful. Peaceful. Happy. Learn how at the Fallible Ideas website and the Fallible Ideas discussion group.

The other approach is to try to avoid conflict as much as possible. This means that each person is dealing with his own problems and not getting any help from other people involved in the conflict. So when those times come where you’re forced to deal with the conflict, then you grit your teeth and push through the fighting. This can be very emotional. Very rocky. Very unpleasant. Very hurtful.

Sunday, August 23, 2015

Preferences for people aren't inherently problematic

This is a reply to an FI post:

On Aug 23, 2015, at 6:58 PM, Alisa Zinov'yevna Rosenbaum [fallible-ideas] <> wrote:

> On Aug 23, 2015, at 4:04 AM, Elliot Temple <> wrote:
>> At least preferences about nature and reality are good. But preferences about humans are dangerous.
> Great distinction. people are autonomous thinkers with their own preferences. having preferences about what they do doesn't make sense.
> reminds me of the part in atlas shrugged when the government wants people to treat their arbitrary edicts as facts of nature. the govt was trying to blur a similar distinction.
>> How do you maintain autonomy without giving up being selective and discerning? Or do you have preferences about people but then never ask anyone to meet them and just kinda passively hope?
> i hope that Elliot keeps participating in public philosophy discussions.

Because you think that's better (for you) than if Elliot stops participating in public discussions. Right?

> I don't think that counts as a preference about him because I would only want him to do that if he thinks it best.

As far as I know, having preferences for people is compatible with the preference that those people only interact together voluntarily. You seem to think otherwise but you're not explaining why.

-- Rami

The idea of teaching is confused

The idea of teaching is confused. It sets the goal post at what the teacher says instead of what the truth is.

Learning, on the other hand, sets the goal post at the truth. It’s a matter of creating knowledge about the truth as each person involved makes guesses about the truth and rules out guesses with criticism. The focus is on trying to create knowledge about the truth and finding and fixing flaws in our knowledge to get ever closer to the truth.

Teaching isn't about that. The focus is on the teacher trying to make the student learn what the teacher said. The possibility that the teacher can be wrong is ignored.

But there are exceptions. Like Richard Feynman. He was a professor of physics. And he was known as The Great Explainer. What he did was explain what he knows and how he knows it. And he tailored his explanations to the questions his students asked him. 

Sadly, most people who decide to go to college do it because they want to be taught. They don't want the responsibility of judging the ideas to figure out which ones are true and which ones are false. They want the responsibility to be on their teachers. What they want is to be able to accept on authority what their teachers tell them, without any critical thinking of their own.

Preferences are good; but only if you are open to changing them

This is a reply to an FI post:

On Sun, Aug 23, 2015 at 3:04 AM, Elliot Temple [fallible-ideas] <> wrote:

> Preferences are good. Liking things is good. It's about having some idea that things are better one way rather than another.
> Preferences don't need to be justified. You don't have to prove your wants are logical. Just look for and solve problems.
> At least preferences about nature and reality are good. But preferences about humans are dangerous.

In the abstract, I don't see the problem.

If you are willing to rethink your preferences as you get new information, where's the danger?

> If you have a preference about a person and they have a different preference about themselves then that can cause conflict. People can fight over their clashing preferences.

I think the fighting can only happen if the person is having a hard time rethinking (i.e. changing) his preferences. It could be that he doesn't want to rethink them. It could be that rethinking his preferences is frustrating for him. It could be that he doesn't know how to rethink his preferences. These are avoidable mistakes.

> How do you avoid fighting with people but also avoid giving up having preferences about people? People are a huge part of life so avoiding preferences about them makes a big difference.
> How do you maintain autonomy without giving up being selective and discerning? Or do you have preferences about people but then never ask anyone to meet them and just kinda passively hope?

I don't think that preferences for non-persons are that much different from preferences for persons.

I think people can hurt due to non-person-preferences not being met, just like people can hurt due to person-preferences not being met.

To clarify that, I'll explain something that happened to me a few years ago. I remember telling somebody about a new plan I had for doing something (it was a career-type plan). I was excited/happy. The person I told this to immediately got upset. I was confused about why he got upset. So I asked. I found out that he was upset because he fears that I'm going to get upset if my plan doesn't become reality. I think he assumes that about me because that's what happens with him. I asked, "so you think it's better to not make plans for fear that the plans don't become reality"?

That's ridiculous. I will make plans optimistically, and if my plans don't become reality, I'll change my plans accordingly, without having any negative emotions around the fact that my past expectations didn't get met.

It's fear of making mistakes. It's wanting something perfectly, or not wanting it at all. But both of those suck. One of them is impossible, and the other is worse than death.

The same thing works for person-preferences. If I make a plan with somebody to do something, say a long project, and then we start the project, but then later something comes up and then the project ends (seemingly permanently), that's ok. And it should be expected a lot. And feeling bad over it is a mistake.

So my point is that rigid preferences *for things* can hurt people like rigid preferences *for people* do.

I think it's the rigidity that is problematic. I don't think a preference for a person is problematic just because it's for a person.

Saturday, August 22, 2015

8th reply in the Morality Test discussion

This is a reply to an FI post:

On Sun, Aug 16, 2015 at 12:33 PM, Erin Minter <> wrote:

> On Aug 15, 2015, at 12:22 PM, Erin Minter [fallible-ideas] <> wrote:
>> This was sent to me offlist and I am forwarding it to FI with permission.
>> Begin forwarded message:
>> From: Rami Rustom <>
>> Subject: Re: [FI] Morality Test
>> Date: August 15, 2015 at 8:56:11 AM EDT
>> To: E Mint <>
>>> On Sat, Aug 15, 2015 at 12:04 AM, Erin Minter <> wrote:
>>>> On Aug 14, 2015, at 7:01 PM, Rami Rustom <> wrote:
>>>>> On Fri, Aug 14, 2015 at 5:36 PM, Erin Minter <> wrote:
>>>>>> On Aug 14, 2015, at 12:39 PM, Rami Rustom <> wrote:
>>>>>>> On Fri, Aug 14, 2015 at 9:35 AM, Erin Minter <> wrote:
>>>>>>>>> How do you know if something is morally good or not? What’s the check? What’s your test?
>>>>>>>>> Say 2 people are thinking about doing something together.
>>>>>>>>> Say one of them has an idea that is being considered as a common preference (cp). A cp is an idea about how to proceed that they both have no criticisms of.
>>>>>>>>> And say one of them has an objection to that idea. Then it’s not a cp. So it’s not morally ok to act on this idea.
>>>>>>>>> If nobody has any objections, then it’s a cp. So it is morally ok to act on this idea.
>>>>>>>> I think the "something" could still be immoral (objectively).  Even if they agreed on proceeding with the action, I don’t think that means the action itself is always moral (will enhance/further/promote their lives).
>>>>>>> I didn't mean immoral objectively.
>>>>>>> I don't think it makes sense to think of it as you are. Because nobody
>>>>>>> is omniscient. So there's no way to omnisciently check if something is
>>>>>>> morally ok or not.
>>>>>> Say the idea is that they both agree (they both *prefer*) to get married and each promise to devote the rest of their lives to each other.
>>>>>> It’s a cp,
>>>>> I don't think you demonstrated that it's a cp.
>>>>> Did they have objections that they didn't address and just ignored in
>>>>> favor of the idea?
>>>> lots of ppl get married because they prefer to get married.  both sides prefer it and want it, when they choose to get married.
>>>> it’s a preference, which they have in common.
>>> if they have objections when they do it, and ignore those objections,
>>> then it's not a cp.
> they don’t have objections, but they didn’t go searching for them either.  they slammed their minds shut to any glimpses of them.

Do they have doubts that they evaded?

>>>>>> but isn’t it immoral? Just because they both prefer it, it doesn’t mean they’ve passed a morality test and what they are doing is moral.
>>>>> But it's not clear to me that they don't have any objections.
>>>> i think lots of ppl really want to get married.  so much so that it bothers them to NOT be married.
>>> that seems off topic. being bothered to not be married doesn't say
>>> anything about other objections they have.
> they don’t have objections.  some ppl whole-heartedly prefer to get married and don’t want to even consider any criticisms of it.

How do you know they are whole-heartedly preferring it? I say more about this below.

>>>>> So let's say they didn't have any objections. So it's a cp. Is it immoral?
>>>>> Well what are you thinking makes it immoral?
>>>> it hinders one’s individuality / growth / learning / life / sense of self.
>>> i'm starting to think we should take a step back.
>>> the question that started this discussion was somebody asking me this:
>>>>>>>>> How do you know if something is morally good or not? What’s the check? What’s your test?
>>> What was meant by it is this:
>>>> If I have a choice to make, and I have an idea about what to choose. How do I know if that's the idea I should choose or not?
>>> So, what I'm focussed on is how to choose. more below.
> ok.  I’ve always thought of a common preference as just what is says - a preference 2+ ppl have in common.  I don’t think “cp" means you have to use 100% good methods, like specifically seeking external crit of your preferences, not evading or lying to yourself, etc.
> So I don’t see it as like THE test that you are making a moral choice.  It’s important.  And if something is not a cp (and one person coerces the other), then it’s (usually) immoral.
> But just because it is a cp, I don’t think that necessarily means its a moral choice.

I think it does. I explain below.


>>>> If it’s a CP, there would be an aspect of their method which is moral.  However:
>>>> - there could other aspects of their methods which are immoral.  Like how much have they really thought it thru and looked for flaws / crits with their plan.  Is it a whim-based preference?  A static meme based preference?
>>> Even if those things are the case, I think what's important is what
>>> knowledge the people interacting have.
>>> Like, if one of them has some knowledge about that marriage is bad.
>>> And if he ignores that and chooses marriage. Then it's not a cp. So
>>> choosing marriage in this case is immoral.
> ppl are really really good at evading and lying to themselves about stuff, tho.  They don’t find their wedding day TCS-coercive.  They get really good at convincing themselves that they whole-heartedly prefer certain things (even if they do have tiny doubts or fears or whatever in there.  they effectively ignore them to the point where they don’t exist).

The thing is, just because they are convinced that they are whole-heartedly preferring something, that doesn't mean that they actually are whole-heartedly preferring something.

> And they don’t SEEK external crit.  They don’t want to hear about ideas which would criticize what they think they want.  So without any criticism and lots of evasion, their preference remains the same.
> If you ask them if they have any doubts, they’d say “No”.  they’d say the prefer to get married.
> so what then?  if both ppl believe it’s their preference, isn’t that a cp?

I don't think that's a cp. I try to explain why below.

> yet, at the same time, seems immoral.  they’ve lied to themselves and evaded opportunities to get crit.
> it's hard for me to believe that someone is *moral* when they evade, just because they don’t know that evasion is bad or that they’ve evaded the fact that evasion is bad.

Maybe the original question is misleading. Here's the question I had in mind that began this discussion.

Question: Say 2 people are considering doing a joint project. And one of them has an idea for what to do. How do you check if an idea should not be acted on?

Answer: If either of them has any objections/doubts about acting on the idea, then that idea shouldn't be acted on. And if you're evading your doubts, to the point that you don't have any of your doubts conscious in your mind at the moment, then you're cheating. That doesn't pass the test. Evaded doubts are still doubts.

So about cps. Let's talk about 1 person finding a cp with himself. He has a conflict and he's resolving it. When he finds the resolution, that's a cp. But how could a resolution be found when there are evaded doubts? I mean, the conflict is still there. So it's not a resolution. So it's not a cp.

What do you think?