
Friday, November 20, 2015

Bad "science" that claims to explain laziness

This article claims that science explains laziness.

http://gizmodo.com/science-finally-explains-why-some-people-just-dont-care-1742654672

> There’s a neurological reason for apathy and laziness, according to new research. Inefficient connections between certain areas of the brain may make it harder for some people to decide to act.

well, worse ideas do make it harder for people to decide to act. 

i'm already expecting that the researchers ignored the role of ideas in decision making.


> In each round of the game, the researcher offered the subject a reward in return for some effort. Participants had to decide whether to accept the offer, based on whether the reward as worth the effort. Predictably, the participants who had already been identified as apathetic were much less likely to accept offers that required effort, even if the reward was large - but when apathetic subjects did choose to accept an offer, their MRIs showed much more activity in the pre-motor cortex, an area of the brain involved in taking actions, than in more motivated participants.

> That was the opposite of what researchers expected. They had assumed that lazy people’s pre-motor cortices would show less activity when deciding to take action.

Why did they think that? What explanation did they draw their prediction from? The article doesn't say.

If the researchers had no such explanation while making their prediction, then they aren't doing science. See _The Beginning of Infinity_ for an explanation of why explanation-less "science" is not science.

> “We thought that this might be because their brain structure is less efficient, so it’s more of an effort for apathetic people to turn decisions into actions,” said lead researcher Masud Husain, a professor of neurology and cognitive neuroscience at Oxford University, in a statement. After further investigation, it turned out that people who identified as apathetic had less efficient connections between the anterior cingulate cortex, a part of the brain involved in making decisions and anticipating rewards, and the supplementary motor area, a part of the brain that helps control movement.

> “The brain uses around a fifth of the energy you’re burning each day. If it takes more energy to plan an action, it becomes more costly for apathetic people to make actions,” explained Husain. “Their brains have to make more effort.” Husain and his team published their findings in the journal Cerebral Cortex.

Can having some ideas cause one to spend more energy to plan an action, as compared to having other ideas instead? Yes!

For example, if you have good ideas about how to make decisions, then you'll be more efficient at decision making. That means you'll put in less effort to make a given decision than somebody with worse ideas about how to make decisions would. Basically, having better ideas about decision making means running into less trouble while going through the process of making a decision.

What are good ideas about how to make decisions? To date, our best understanding of how to decide is the method known as Common Preference Finding. It's a method focussed on resolving active conflicting ideas so that only one non-refuted idea remains, which is the one to be acted on. A person who doesn't understand how to resolve conflicts will have lots of trouble making decisions: he has conflicting options active in his mind, but he doesn't know how to rule out all but one of them.

People with bad ideas about how to decide do a lot of wasteful thinking. Consider an example. Say a person with bad ideas about how to decide has a decision to make. He knows of a couple of options he can act on, but he doesn't know which one he should act on. His method consists of choosing one of them without ruling out the other. But how can he CHOOSE one over the other without a reason? Well, that's the point. He can't do it reasonably. He's doing it arbitrarily.

He says to himself: Do I choose option A or option B? Well, I don't know which is best, but I have to choose one anyway. I'll choose A. No I'll choose B. No I'll choose A. No I'll choose B. Ok I'll act on B now. NO WAIT! I want to do A instead. Ok I'll act on A now. NO!!!

And he does that indefinitely until he finally chooses, but it's not a confident choice. He's still thinking that option B is in play while acting on option A, or vice versa.

Now imagine another person who has better ideas about how to decide. He knows the method known as Common Preference Finding. 

He says to himself: Do I choose option A or option B? What other options can I brainstorm? And what criticisms can I brainstorm that will rule out all but one option? (And then he does some brainstorming for more options and more criticisms.) Ok I found another option C to consider. And I found a criticism that rules out options B and C, leaving option A as the lone non-refuted option. So I'll choose option A to act on. Problem solved. Next problem.

The guy doing CPF spends less energy on his decision than the guy not doing CPF.
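
To make that concrete, here's a minimal sketch of the CPF loop described above, written in Python. The names here (common_preference_finding, brainstorm_options, brainstorm_criticisms) are my hypothetical placeholders for the creative steps a person does in his head, and each criticism is modeled as a simple predicate that refutes an option. It's an illustration of the process, not an established implementation.

```python
# Hypothetical sketch of the Common Preference Finding (CPF) loop described above.
# brainstorm_options and brainstorm_criticisms are placeholder callables standing
# in for the creative steps; each criticism is a predicate that returns True when
# it refutes the given option.

def common_preference_finding(initial_options, brainstorm_options, brainstorm_criticisms):
    options = set(initial_options)
    while True:
        # Brainstorm more options (e.g. discovering option C).
        options |= set(brainstorm_options(options))
        # Brainstorm criticisms of the current options.
        criticisms = brainstorm_criticisms(options)
        # Rule out every option that some criticism refutes.
        survivors = {o for o in options if not any(c(o) for c in criticisms)}
        if len(survivors) == 1:
            return survivors.pop()  # the lone non-refuted option: act on it
        # Otherwise keep brainstorming until exactly one option survives.
        options = survivors or options

# Example: a criticism rules out options B and C, leaving A as the choice.
choice = common_preference_finding(
    {"A", "B"},
    brainstorm_options=lambda opts: {"C"},
    brainstorm_criticisms=lambda opts: [lambda o: o in {"B", "C"}],
)
print(choice)  # prints A
```

The point of the sketch is that the loop terminates by ruling options out with criticisms, instead of by arbitrarily flip-flopping between them.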


Consider an analogy. People with bad ideas about how to decide are like brains whose neuron-to-neuron connections are made of copper, while people with good ideas about how to decide are like brains whose connections are made of superconducting material. The result? Copper has lots of imperfections (atoms that ideally shouldn't be there) that electrons bump into as they flow, which causes resistance, slows the flow of electricity, and wastes lots of energy. Superconducting material, on the other hand, lets electrons flow smoothly without that kind of scattering, so there's no resistance and no dampening of the flow of electricity.

So, if you want a no-resistance brain, learn CPF. If you want a high-resistance brain, don’t learn CPF and just randomly and uncritically create your philosophy by picking up bits and pieces from your parents and society.

Wednesday, September 30, 2015

Parent punishes child for stealing by destroying his Xbox

Check out this parent punishing his child for stealing. See my comments about it below.





Another day in my crib. If ya kids steal little shit now, fix it before it's too late.. I don't beat em no mo. That don't teach em shit...For licensing / usage, please contact licensing@viralhog.com
Posted by Showboat Hogg Life MC on Sunday, September 27, 2015



why does child think he should steal?

parent didn't ask


parent doesn't realize that it's his fault that child thinks stealing is his best option.

parent is acting as if he has no fault in why child chose to steal.
maybe the child thinks stealing is his best option because he doesn't think parent will give him what he wants or listen to reason about it.

Monday, August 31, 2015

Reacting badly to being told what to do or that you’re wrong

Some people get angry when they are told to do things they don’t want to do. Or when they’re told they are wrong. Or when they are questioned.

It’s people who were raised by authoritarian parents who used anger, punishment, violence, consequences, pain, etc. to teach lessons. They were told to obey. And punished when they refused. Their questions and criticisms and protestations were ignored. Arguing their case didn’t help. Their parents didn’t listen to reason.

This happened so many times that they automatized the whole process. It became a trigger. The trigger fires when the person is contradicted in some way. The result is a bad feeling and possibly getting angry too.


Some of these people take it further by adopting an explicit philosophy to match. The problem they have is with people expecting obedience. And they correctly understand that obedience is implied by authority. But where they go wrong is believing that authority is implied by the existence of truth. 

So they reject authority and throw out truth with it because they think it’s a package deal. They think you can’t reject one without the other.

So they adopt a philosophy that rejects authority but also rejects that there is truth in morality. But if authority is something that should be rejected, then that implies that there exists truth, and that authority is a deviation from that truth. Why would you reject something unless it’s wrong, or not as good as some other better competing thing?

A person with this philosophy who hears someone say “you should do X” or “you are wrong about Y” will misinterpret that to mean that the speaker is presenting it as the final complete truth and demanding obedience.

But that’s a mistake. The truth, as far as we know, is that people don’t have access to the final complete truth and instead what we do have is fallible knowledge about the truth. And to the best of our knowledge, it’s wrong to demand obedience to one’s moral views.

By that I mean that there is a better way that is known. And that is to ask for an audience so that your voice may be heard. Where your goal is to alert people of mistakes or rival theories that they didn’t know already, and where they would be glad if you stuck your neck out to tell them about it because they know that it could benefit them.

So by getting upset in these situations you’re ignoring a few possibilities. The person could be trying to help you (win/win) instead of hurt you (win/lose). He could be against authority in truth-seeking. He could be against demanding obedience. 

He could be against anyone even voluntarily accepting his views on his word. He could hold the belief that you should make your own independent judgement, rather than rely on his judgement. He could be wrong, so you should judge things for yourself to help catch mistakes in his ideas. And you can’t do that if you blindly accept what he says as the truth. 

And even if the person is right, and you blindly accept what he said as the truth, you could easily have misunderstood his idea. So without judging things for yourself, you’d be making tons of misunderstandings and believing all sorts of false things that you’d be falsely attributing to him.

So, when you’re told to do something you don’t want to do, or you’re told you’re wrong, and if you react badly to this, it could be because you are falsely assuming that the person thinks he can’t possibly be wrong and that he’s demanding obedience to his moral views. You’re seeing the world through the win/lose lens.

Weird business practices in the tv series Parenthood s2 episode "Meet the New Boss"

Parenthood s2 episode "Meet the New Boss" @12:10

The context is that a company gets a new owner. The main manager Adam just recently met his new boss. And he doesn't know what to expect.

Adam finds out that his boss isn't talking about his direction for the new company. Adam keeps asking about it but the boss doesn't talk about it.

Then Adam is talking with his wife about it and he says about his boss that "and now he runs TNS"... "i could be out of a job"... "maybe he'll come up with a good idea, if necessary, I'll give him some guidance".


this is interesting.

i don't think i'd ever be in the situation where i'm an owner of a business and i don't know what's going on in the company yet. i mean, i'd investigate things before buying the company.

but let's say that somehow i'm in that position. what would i do? i'd investigate. i'd say to Adam:

"i want to get up to speed on what's known about how to run this company. i want to learn the company's traditions and then work on evolving them, rejecting some traditions, evolving others, and starting new ones."

But Adam wants the owner to tell him the company's direction, rather than Adam explaining to the owner the company's existing traditions (like its current direction).

Discussion with a moral relativist about whether morality is objective

this is a discussion i had with somebody on fb about whether morality is objective. he called it "moral realism" so i continued to use that term with him. i've included just the most relevant parts of the discussion.



On Mon, Aug 28, 2015 somebody offlist:

> Sam Harris and Islam are the same stuff. Moral realism wrapped up in a hateful, polarizing package.

are you disagreeing with the moral realism part?



On Mon, Aug 28, 2015 somebody offlist:

> Of course. I am an atheist because there is no evidence for god(s). I am not a moral realist for the same reason - lack of evidence.
>
> Moral realism is an artifact of religiosity and theocracy. If one is an atheist and still clings to moral realism, all that tells me is one has stopped questioning much, much too soon. Religion comes with a lot of baggage, from moral realism to retributivism dressed up as justice. Saying one does not believe in god(s) is only scratching the surface.

why do you believe that more questioning of that idea would necessarily lead to realizing that revenge is not justice? it’s because you believe that the goal post is the truth, and questioning leads one closer to *the truth*. hence moral realism.



On Mon, Aug 30, 2015 somebody offlist:

> When were we objectively morally right about gender equality?

Maybe never. But the idea that men should have legal rights that women don't have, is wrong.

> And why so?

you mean, why do I believe we're right today and that people were wrong in the past? because we know of flaws in their theory that they didn't know about.

> The very least a moral realist *must* recognize is that we were right at some point or wrong at some point.

Which I've acknowledged.

> And for the record, just because I express a moral opinion (and that's what I recognize it is) does not mean I am become a moral realism, thinking my moral opinion is the absolute, objective truth.

?! so you're saying that having a moral opinion while being a moral realist means that the moral realist believes that his opinion is the absolute objective truth? why do you believe that? I'm a moral realist AND I have moral opinions AND I don't believe that any of my opinions are absolute objective truth. You think I'm wrong to hold these views but you haven't explained how I'm wrong. Can you explain that?



On Mon, Aug 30, 2015 somebody offlist:

> Interesting. So what is your support/evidence/justification for "...For example, the idea that men should have legal rights that women don't have, is wrong."

Searching for support/justification is flawed epistemology. The best epistemology known to date explains that we need to look for flaws/problems in our theories. And that a theory is treated as rejected only if it has known flaws/problems. And a theory is accepted if you can't find any flaws/problems with it.

Do you see any flaws or problems with the idea that women should have the same rights as men? I don't.



On Mon, Aug 30, 2015 somebody offlist:

> Alright, so how do we know that liberalism is the truth?

Find a flaw. Explain the flaw. Then submit that explanation to criticism. If that explanation survives criticism, then as far as we know the explanation is correct, and we either reject the theory or change it to account for the flaw in the old version. And if we don’t find a flaw (in liberalism), then it’s our best knowledge about the truth.

Though we could be wrong about it. So we should be open to changing our minds. We should be open to new criticism and new rival theories.



On Mon, Aug 30, 2015 somebody offlist:

> That would make it sound *almost* like a scientific endeavor, but without the final arbiter (that empirical reality serves as in science).

Empirical evidence isn't the final arbiter. Evidence can be MISINTERPRETED. So a scientist (and anybody) must look out for misinterpreting the evidence. So the relevant question here is: How does one analyze evidence in order to catch one's misinterpretations of the evidence?

> Tell me, how do we know when we have found a flaw?

If you think you found a flaw, and you don't see any criticisms of the flaw you see, then as far as you know the flaw is the best knowledge you have about the truth.



On Mon, Aug 30, 2015 somebody offlist:

> Honestly, though, if that is your view of morality, there doesn't seem to be much room for the confidence most moral realists in the past have craved when making their decrees. I mean does truth really amount to some variant of argumentum ad populum without any sort of arbiter?

Truth is not judged by popular vote. 

Truth is judged by critical discussion. If a theory survives criticism, then it is accepted as the best knowledge to date about the truth. If a theory doesn’t survive criticism, then it is rejected for having known flaws. Theories that survive criticism are non-refuted (no known flaws). Theories that didn’t survive criticism are refuted (have known flaws).

> That hardly seems like a process to truth... I guess truth won't really mean much any more, will it? Now "knowledge," and "truth" are watered down to nothing.
>
> So, you would never say you have the moral truth, but you hold that there is moral truth nonetheless, is that correct?

Yes. It's the same as in science. We have knowledge of the truth (of the physical world). But none of our current scientific theories are THE FINAL COMPLETE TRUTH. They are all flawed. But we don't reject an accepted theory UNTIL a flaw is made known.



On Mon, Aug 30, 2015 somebody offlist:

> Actually, in a very important way, empirical reality is the final arbiter for science. Yes, there is the possibility of misinterpretation, and there are deeper problems (like that different theories could conceivably be just as scientifically valid for the same phenomena), but in the end we measure them against empirical reality.

What would you do in this situation? Say you have 2 scientific theories competing to explain some aspect of physical reality and they both are consistent with all known evidence.

Then how do you choose between the 2? Empirical reality won’t help here unless you can find NEW evidence that contradicts 1 of these theories leaving the other untouched.

But even if that happened there’s nothing FINAL about it. Somebody could find new evidence. Or somebody could find new criticism of an existing explanation of evidence, thus refuting that empirical-explanation, thus saving some previously-refuted scientific theory. So FINAL arbiter doesn’t make sense.

or is there a reason you are using the “final” qualifier that i’m not aware of?

> What, if anything, is the final arbiter for your position? 

The tentative “final” arbiter is a simple test: has the theory survived criticism or not?

> It sure would be helpful to find it, because as it is we are just guessing between internally consistent but contradictory "theories." (Not to be confused with scientific theories.)

I don’t believe that your theory is internally consistent. And I think you haven't really given me the opportunity to explain what i understand about this to you.



On Mon, Aug 30, 2015 somebody offlist:


> To be honest, our perspectives seem similar in some ways.

I agree.

> I do not subscribe to what I would call the conceit of moral truth. Your position does not seem to mesh well with the desires of the "confused" moral realists to make and defend moral prescriptions. My perspective certainly doesn’t.

Can you tell me more of what you mean by “moral prescriptions”? Do you mean something like where people are supposed to obey these “prescriptions” in the sense that they have to do it even if they don’t agree with it? Like against their will?

Demanding obedience is morally wrong. By that I mean that there is a better way that is known. Instead of demanding obedience, you should request an audience so that your voice may be heard. Why? For the purpose of alerting people of mistakes or rival theories that (a) they didn’t already know and (b) they would be glad if you stuck your neck out to tell them about.

And when I say that you should do X, I'm including an implicit "but only if you wholeheartedly agree with me about you doing X". I will never demand that you do what I say on my authority. Because i reject authority in truth-seeking. I also don't even want you to *voluntarily* accept what I say on my authority. again because I reject authority in truth-seeking. 

You should make your own independent judgement, not rely on me. I could be wrong, so you should judge things for yourself to help catch the mistakes that I make. You can't catch the mistakes in my ideas if you blindly accept what I say as truth. And even if I'm right, and you blindly accept what I say as truth, you could easily misunderstand me. So without judging things for yourself, you'd be making tons of misunderstandings and believing all sorts of false things that you would be falsely attributing to me.

> What, precisely, is the difference between holding that there is unreachable truth and not bothering with truth at all?

The difference is this:

Unreachable truth - I cannot reach perfection but I can do a good job of it. I can make progress continuously. Tomorrow will be better than today (on average). How? Because I’m finding and fixing flaws in my knowledge. That’s what progress is. It’s evolution.

Not bothering with truth at all - I cannot reach perfection so I’ll just stop trying. Stagnation is ok. I’ll just learn to deal with the suffering. That’s what everybody else does. And they seem happy.

> Especially when there isn't even any way to determine when there is or is not a flaw in the moral "theory?”

Sure there is. Whatever theory you are considering, you need to consider rival theories too. And you need to use criticism to rule out all but one. The theory that survives criticism is the one that is deemed non-refuted. The rival theories that are criticized are deemed refuted. And this is tentative since new criticism can be found in the future.

Now there are nuanced situations like ‘what do i do when i have two rival theories that are not criticized?’. All the questions that have been asked about this have been answered (AFAIK).

> Is it a "point of the journey" moment? If our moral theory is always flawed, and we recognize it as such, from whence does any confidence in our moral prescriptions arise?

hmm, i’m trying to use my interpretation of what you mean by “moral prescription” and it doesn’t seem to fit. i thought you meant “demanding obedience” to the moral prescription.

can you clarify?

> To be fair, you asked me the same question in a different form earlier. I would respond "in the democratic process of negotiation." I'm not sure how you would respond.

what is the context? do you mean 2-person interaction? 5-person interaction? a whole nation? or do you mean all of these?

> A major problem for you is that most are not going to understand moral realism the way you do.

what kind of problem is it for me? do you mean like, it’ll make communication with them more difficult? like with more misunderstandings?

or do you mean some other kind of problem?

> Rightly or wrongly, they are likely to dismiss it as not giving them the power to prescribe (although they might enjoy the elasticity of it).

do you mean “the power to [demand obedience]”? or do you mean something else?

> For most people morality is about having prescriptive power. Do you give it to them in some way I am not seeing?

No.

Morality is not authoritarian. Nobody is infallible.

Knowledge is not authoritarian. Nobody is infallible.

Knowledge is created by people. People are fallible so the knowledge we create is fallible.

To clarify this, see: http://fallibleideas.com/reason

-- Rami

Package deal. Moral realism and demanding obedience to one’s moral views.


Some people who drop their belief in god also drop the idea that there is objective truth in morality (aka moral realism).

Why do they do that? It’s because they believe that moral realism implies demanding obedience to one’s moral views. They treat moral realism and demanding obedience to one’s moral views as a package deal - as if you can’t have one without the other. And since they are against the idea that people should demand obedience to their moral views, they reject the whole package, instead of just rejecting the one idea they have an actual problem with.


The thing is, the best knowledge we have to date about the truth regarding morality is that it is wrong to demand obedience to one's moral views.

Saturday, August 29, 2015

Some good philosophy in the tv series Parenthood s2 episode "Put Yourself Out There"


Parenthood s2 episode “Put Yourself Out There" @25:00

Some good philosophy, arguing for taking action instead of being passive.

The context is that a college-bound high school senior is talking to a successful businesswoman. Like, to get advice and connections maybe.

The girl expressed that it must suck that people ask for her help.

The woman replied, “If you never ask for what you want, you’ll never know if the answer is gonna be yes or no. You gotta take the risk.”


But why would it suck? Sacrificial help would suck. But it could easily be mutually beneficial.

I guess the girl doesn’t realize that it could easily be mutually beneficial. So it doesn’t get accounted for in her reasoning.

Greco wrestler gets manhandled by a 132 lb guy

Check out this video of a 5'5" tall 132 lb guy physically controlling a wrestler who looks to be like 5'10" tall 240 lbs.

The smaller guy is better at controlling his balance. He reacts quicker to changes in his balance. This means that he's using his center of gravity better than the bigger guy is.

Note though that the smaller guy's center of gravity is lower than the bigger guy's, basically because he's much shorter.

The bigger guy is exerting a lot more force but the smaller guy absorbs that force effectively.

As the bigger guy gets tired, and starts doing more lunging, the smaller guy uses the bigger guy's momentum against him to the point of throwing him to the ground.

I wonder if it's fake though.

Chen Ziqiang - Chen style Tai Ji & Wrestling
Interesting video of Chen-style Tai Ji Quan lineage holder Chen Ziqiang demonstrating some throws on a wrestler. (He's the nephew of Chen Xiaowang, for those of you who know Tai Ji.) Chen Ziqiang is quite well known as he quite often competes in San Shou matches (akin to kickboxing that allows grabbing and throws) and accepts challenge-style matches. The large guy in the video is a wrestler (Greco) and a Sambo player. In contrast, Chen Ziqiang is 132 lbs (60 kg) and 5'5" (165 cm).
Posted by Fighting HQ on Tuesday, February 24, 2015


In my high school football days, there was a game where I was badly beating this 6'5" 350 lb guy with just a 5'10" 220 lb frame. I was the offensive lineman pushing the defensive lineman 10 feet past the line of scrimmage while my ball handler ran up the gaping hole I just created. He was mostly fat and mostly standing up instead of getting low to the ground, but the point is that size isn't an advantage if it's not used correctly.

Thursday, August 27, 2015

Strange passivity in the tv series Parenthood s2 episode “The Booth Job" @37:00

The father and mother of a boy are thinking about being married to each other. It’s happening during a romantic moment where they are already wearing wedding rings. They were using the rings to fake that they were married, so that they could have a better chance of getting into a school for their boy.

They each knew that the other was thinking about being married to each other.

He almost proposed. But then he didn’t.

And as soon as he backed out, she immediately reciprocated. She gave the signal that she’s out too.


So, he had an objection. But he kept it to himself. He could have discussed it with her. They could have learned that the objection was a mistake and he could have gracefully and happily gotten past it.


Also *she* could have asked him, “what’s holding you back?” And if he evaded, she could have persisted by asking various questions until he exposed the issue. Or until she decided that he wasn’t worth it.



What are they afraid of? What’s the worst that could happen?


So, one approach to conflict is to resolve it by critical discussion between all the people involved in the conflict. This is graceful. Peaceful. Happy. Learn how at the Fallible Ideas website and the Fallible Ideas discussion group.

The other approach is to try to avoid conflict as much as possible. This means that each person is dealing with his own problems and not getting any help from other people involved in the conflict. So when those times come where you’re forced to deal with the conflict, then you grit your teeth and push through the fighting. This can be very emotional. Very rocky. Very unpleasant. Very hurtful.

Sunday, August 23, 2015

Preferences for people aren't inherently problematic

This is a reply to an FI post:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/11100

On Aug 23, 2015, at 6:58 PM, Alisa Zinov'yevna Rosenbaum petrogradphilosopher@gmail.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

> On Aug 23, 2015, at 4:04 AM, Elliot Temple <curi@curi.us> wrote:
>
>> At least preferences about nature and reality are good. But preferences about humans are dangerous.
>
> Great distinction. people are autonomous thinkers with their own preferences. having preferences about what they do doesn't make sense.
>
> reminds me of the part in atlas shrugged when the government wants people to treat their arbitrary edicts as facts of nature. the govt was trying to blur a similar distinction.
>
>> How do you maintain autonomy without giving up being selective and discerning? Or do you have preferences about people but then never ask anyone to meet them and just kinda passively hope?
>
> i hope that Elliot keeps participating in public philosophy discussions.

Because you think that's better (for you) than if Elliot stops participating in public discussions. Right?


> I don't think that counts as a preference about him because I would only want him to do that if he thinks it best.

As far as I know, having preferences for people is compatible with the preference that those people only interact together voluntarily. You seem to think otherwise but you're not explaining why.

-- Rami

Preferences are good, but only if you are open to changing them

This is a reply to an FI post:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/11082


On Sun, Aug 23, 2015 at 3:04 AM, Elliot Temple curi@curi.us [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

> Preferences are good. Liking things is good. It's about having some idea that things are better one way rather than another.
>
> Preferences don't need to be justified. You don't have to prove your wants are logical. Just look for and solve problems.
>
> At least preferences about nature and reality are good. But preferences about humans are dangerous.

In the abstract, I don't see the problem.


If you are willing to rethink your preferences as you get new information, where's the danger?


> If you have a preference about a person and they have a different preference about themselves then that can cause conflict. People can fight over their clashing preferences.

I think the fighting can only happen if the person is having a hard time rethinking (i.e. changing) his preferences. It could be that he doesn't want to rethink them. It could be that rethinking his preferences is frustrating for him. It could be that he doesn't know how to rethink his preferences. These are avoidable mistakes.



> How do you avoid fighting with people but also avoid giving up having preferences about people? People are a huge part of life so avoiding preferences about them makes a big difference.
>
> How do you maintain autonomy without giving up being selective and discerning? Or do you have preferences about people but then never ask anyone to meet them and just kinda passively hope?


I don't think that preferences for non-persons are that much different from preferences for persons.

I think people can hurt due to non-person-preferences not being met like people can hurt due to person-preferences not being met.

To clarify that, I'll explain something that happened to me a few years ago. I remember telling somebody about a new plan I had for doing something (it was a career-type plan). I was excited/happy. The person I told this to immediately got upset. I was confused about why he got upset. So I asked. I found out that he was upset because he fears that I'm going to get upset if my plan doesn't become reality. I think he assumes that about me because that's what happens with him. I asked, "so you think it's better to not make plans for fear that the plans don't become reality"?

That's ridiculous. I will make plans optimistically, and if my plans don't become reality, I'll change my plans accordingly, without having any negative emotions around the fact that my past expectations didn't get met.

It's fear of making mistakes. It's wanting something perfectly, or not wanting it at all. But both of those suck. One of them is impossible, and the other is worse than death.


The same thing works for person-preferences. If I make a plan with somebody to do something, say a long project, and then we start the project, but then later something comes up and then the project ends (seemingly permanently), that's ok. And it should be expected a lot. And feeling bad over it is a mistake.


So my point is that rigid preferences *for things* can hurt people like rigid preferences *for people* do.

I think it's the rigidity that is problematic. I don't think a preference for a person is problematic just because it's for a person.

Saturday, August 22, 2015

8th reply in the Morality Test discussion

This is a reply to an FI post:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/10960



On Sun, Aug 16, 2015 at 12:33 PM, Erin Minter <erinminter@icloud.com> wrote:

> On Aug 15, 2015, at 12:22 PM, Erin Minter erinminter@icloud.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:
>
>> This was sent to me offlist and I am forwarding it to FI with permission.
>>
>> Begin forwarded message:
>>
>> From: Rami Rustom <rombomb@gmail.com>
>> Subject: Re: [FI] Morality Test
>> Date: August 15, 2015 at 8:56:11 AM EDT
>> To: E Mint <erinminter@icloud.com>
>>
>>> On Sat, Aug 15, 2015 at 12:04 AM, Erin Minter <erinminter@icloud.com> wrote:
>>>
>>>> On Aug 14, 2015, at 7:01 PM, Rami Rustom <rombomb@gmail.com> wrote:
>>>>
>>>>> On Fri, Aug 14, 2015 at 5:36 PM, Erin Minter <erinminter@icloud.com> wrote:
>>>>>
>>>>>> On Aug 14, 2015, at 12:39 PM, Rami Rustom <rombomb@gmail.com> wrote:
>>>>>>
>>>>>>> On Fri, Aug 14, 2015 at 9:35 AM, Erin Minter <erinminter@icloud.com> wrote:
>>>>>>>> http://ramirustom.blogspot.com/2015/08/how-do-you-know-if-something-is-morally.html
>>>>>>>>
>>>>>>>>> How do you know if something is morally good or not? What’s the check? What’s your test?
>>>>>>>>>
>>>>>>>>> Say 2 people are thinking about doing something together.
>>>>>>>>>
>>>>>>>>> Say one of them has an idea that is being considered as a common preference (cp). A cp is an idea about how to proceed that they both have no criticisms of.
>>>>>>>>>
>>>>>>>>> And say one of them has an objection to that idea. Then it’s not a cp. So it’s not morally ok to act on this idea.
>>>>>>>>>
>>>>>>>>> If nobody has any objections, then it’s a cp. So it is morally ok to act on this idea.
>>>>>>>>
>>>>>>>> I think the "something" could still be immoral (objectively).  Even if they agreed on proceeding with the action, I don’t think that means the action itself is always moral (will enhance/further/promote their lives).
>>>>>>>
>>>>>>> I didn't mean immoral objectively.
>>>>>>>
>>>>>>> I don't think it makes sense to think of it as you are. Because nobody
>>>>>>> is omniscient. So there's no way to omnisciently check if something is
>>>>>>> morally ok or not.
>>>>>>
>>>>>> Say the idea is that they both agree (they both *prefer*) to get married and each promise to devote the rest of their lives to each other.
>>>>>>
>>>>>> It’s a cp,
>>>>>
>>>>> I don't think you demonstrated that it's a cp.
>>>>>
>>>>> Did they have objections that they didn't address and just ignored in
>>>>> favor of the idea?
>>>>
>>>> lots of ppl get married because they prefer to get married.  both sides prefer it and want it, when they choose to get married.
>>>>
>>>> it’s a preference, which they have in common.
>>>
>>> if they have objections when they do it, and ignore those objections,
>>> then it's not a cp.
>
> they don’t have objections, but they didn’t go searching for them either.  they slammed their minds shut to any glimpses of them.


do they have doubts that they evaded?


>>>>>> but isn’t it immoral? Just because they both prefer it, it doesn’t mean they’ve passed a morality test and what they are doing is moral.
>>>>>
>>>>> But it's not clear to me that they don't have any objections.
>>>>
>>>> i think lots of ppl really want to get married.  so much so that it bothers them to NOT be married.
>>>
>>> that seems off topic. being bothered to not be married doesn't say
>>> anything about other objections they have.
>
> they don’t have objections.  some ppl whole-heartedly prefer to get married and don’t want to even consider any criticisms of it.


how do you know they are whole-heartedly preferring it? i say more
about this below.


>>>>> So let's say they didn't have any objections. So it's a cp. Is it immoral?
>>>>>
>>>>> Well what are you thinking makes it immoral?
>>>>
>>>> it hinders one’s individuality / growth / learning / life / sense of self.
>>>
>>> i'm starting to think we should take a step back.
>>>
>>> the question that started this discussion was somebody asking me this:
>>>
>>>>>>>>> How do you know if something is morally good or not? What’s the check? What’s your test?
>>>
>>> What was meant by it is this:
>>>
>>>> If I have a choice to make, and I have an idea about what to choose. How do I know if that's the idea I should choose or not?
>>>
>>> So, what I'm focussed on is how to choose. more below.
>
> ok.  I’ve always thought of a common preference as just what is says - a preference 2+ ppl have in common.  I don’t think “cp" means you have to use 100% good methods, like specifically seeking external crit of your preferences, not evading or lying to yourself, etc.
>
> So I don’t see it as like THE test that you are making a moral choice.  It’s important.  And if something is not a cp (and one person coerces the other), then it’s (usually) immoral.
>
> But just because it is a cp, I don’t think that necessarily means its a moral choice.


I think it does. I explain below.


//TRIM//

>>>> If it’s a CP, there would be an aspect of their method which is moral.  However:
>>>>
>>>> - there could other aspects of their methods which are immoral.  Like how much have they really thought it thru and looked for flaws / crits with their plan.  Is it a whim-based preference?  A static meme based preference?
>>>
>>> Even if those things are the case, I think what's important is what
>>> knowledge the people interacting have.
>>>
>>> Like, if one of them has some knowledge about that marriage is bad.
>>> And if he ignores that and chooses marriage. Then it's not a cp. So
>>> choosing marriage in this case is immoral.
>
> ppl are really really good at evading and lying to themselves about stuff, tho.  They don’t find their wedding day TCS-coercive.  They get really good at convincing themselves that they whole-heartedly prefer certain things (even if they do have tiny doubts or fears or whatever in there.  they effectively ignore them to the point where they don’t exist).


The thing is, just because they are convinced that they are
whole-heartedly preferring something, that doesn't mean they actually are.


> And they don’t SEEK external crit.  They don’t want to hear about ideas which would criticize what they think they want.  So without any criticism and lots of evasion, their preference remains the same.
>
> If you ask them if they have any doubts, they’d say “No”.  they’d say the prefer to get married.
>
> so what then?  if both ppl believe it’s their preference, isn’t that a cp?


i don't think that's a cp. i try to explain why below.


> yet, at the same time, seems immoral.  they’ve lied to themselves and evaded opportunities to get crit.
>
> it's hard for me to believe that someone is *moral* when they evade, just because they don’t know that evasion is bad or that they’ve evaded the fact that evasion is bad.


Maybe the original question is misleading. Here's the question I had
in mind that began this discussion.

Question: Say 2 people are considering doing a joint project. And one
of them has an idea for what to do. How do you check if an idea should
not be acted on?

Answer: If either of them has any objections/doubts about acting on
the idea, then that idea shouldn't be acted on. And if you're evading
your doubts, to the point that you don't have any of your doubts
conscious in your mind at the moment, then you're cheating. That
doesn't pass the test. Evaded doubts are still doubts.


So about cps. Let's talk about 1 person finding a cp with himself. He
has a conflict and he's resolving it. When he finds the resolution,
that's a cp. But how could a resolution be found when there are evaded
doubts? I mean, the conflict is still there. So it's not a resolution.
So it's not a cp.

What do you think?

Rational and Happy Eating

This is a reply to an FI email:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/topics/10776


On Sat, Aug 8, 2015 at 3:16 PM, Justin Mallone <justinceo@gmail.com> wrote:

> Here are some arguments related to food and the enjoyment of its non-nutritional aspects. Plz crit.
>
> Arg A:
> (Food should follow function)
> 1. Food is a value because it sustains human life.
> 2. The actual function of food is providing nutrition.
> 3. Just like ROARK said about buildings, the type of food you eat should follow its function. And so it should not have irrelevant stuff like prestige presentations or a bunch of effort put into making tasty variety of foods.



what is the parallel for the tasty quality in the buildings analogy?

colors and designs, paintings/posters? AFAIK these are objectively good.


> 4. Therefore, nutritionally optimized food is objectively the BEST.



Taste is function. Not prestige. (I give an argument for this below.)


> 5. So something like https://www.soylent.com/ is ftw QED.
>
> Arg B:
> (Taste is whim)
> 1. You need to live by reason in order to have a good rational life.
> 2. Living by reason involves having good thoughtful reasons for doing stuff and not indulging in WHIM.



this strikes me as saying:

> Living by reason involves acting on your justified ideas and suppressing your unjustified ideas.


[wrt epistemology:]

but ideas don't need justification. what they need is error-correction.

is the thing you're doing causing a problem?

if you think not, are you open to changing your mind about that? like when people give you criticism about the thing you're doing.


[wrt morality:]

suppressing your ideas means tcs-coercion/suffering. better to act on non-refuted ideas because thats the only way to prevent tcs-coercion/suffering.


> 3. There’s a good argument for eating food for nutritional reasons, which is that if u don’t u DIE, which is contrary to LIFE.
> 4. There’s no positive argument for valuing taste being a good/rational thing.



should we not listen to music too? is that the same kind of thing?

i think they're fine. i'm not aware of any conflicts between leading a good/rational life and listening to music or eating tasty food.

i mean, i'm not aware of any problems that listening to music or eating tasty food has for leading a good/rational life.


> 5. THEREFORE valuing taste is IRRATIONAL QED.



it doesn't make sense to talk of values or ideas being irrational.

rationality is about how one treats ideas.

it's ok to be wrong. what's not ok is acting like you can't be wrong.


> Arg C:
> (Don’t waste money)
> 1. One should use one’s wealth according to reason and not according to one’s whims.



this has the same problem i described above about suppressing your ideas.


> 2. Food that gives you the nutrition you need to survive is very cheap.
> 3. Any expenditure above what it takes to get nutrition you need to survive is a waste of money that could be better put to use for more worthwhile purposes.
> 4. Therefore buying say cheeseburgers over rice is typically irrational/immoral WHIM indulgence. Rice4Eva QED.



Eating can be mundane, but it's my best option for fueling my body.

I mean, maybe in the future we can press a button and nutrition is inserted into my blood. Or I eat a pill a few times a day and all the nutrition I need is in there. I think I could go for that. But right now I can't have that. So eating is my best option. So to make it less mundane, I spice it up. So I get a cheeseburger instead of rice.

Rationalizing and how to catch yourself doing it

This is a reply to an FI post:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/11052



On Thu, Aug 20, 2015 at 11:00 PM, Erin Minter <erinminter@icloud.com> wrote:

> On Aug 17, 2015, at 6:13 PM, Rami Rustom rombomb@gmail.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

(snip)


> other similar rationalizations could be:
>
> “i do romance sort of differently, so those romance flaws don’t apply to my approach.” (yet they don’t explain how that’s possible and expose their ideas to crit)

one reason they might not expose those ideas to crit is that they believe that doing so means saying some private stuff, which they don't want to do because that violates privacy. and this could be a rationalization because it's like saying "i'm justified in not exposing these ideas to crit because i'm protecting my privacy".

now a person in this situation could have learned from FI that people should develop the skill of depersonalizing the ideas so that they can talk about them without breaking privacy (making hypotheticals). but then they go months or years without ever really trying to learn how to do this. what rationalizations might someone in this situation come up with for why it was ok for him to do this?


> “FI ppl don’t know the full story about romance.  so i can ignore some of their ideas about this stuff.”

wow that one is pretty vague. it's such a big gaping hole. it lets the person ignore ANY/ALL ideas being said here on FI, or anywhere.

i mean, you could replace romance with literally anything, and that idea would "justify" it.

- FI ppl don't know the full story about liberalism (TRUE). so i can ignore some of their ideas about liberalism (MORALLY WRONG).

- FI ppl don't know the full story about parenting (TRUE). so i can ignore some of their ideas about parenting (MORALLY WRONG).

- FI ppl don't know the full story about how to play games (TRUE). so i can ignore some of their ideas about how to play games (MORALLY WRONG).

It's wrong to ignore ideas that you are engaging with. The correct approach is to judge an idea that you are considering and wholeheartedly accept or reject it after the judging process is finished.



> so these rationalizations make them feel better about the conflict.  while at the same time (and mb without even knowing it), they are giving up on learning.
>
> they are creating a piece of anti-reason (the rationalization) and placing it in their minds.  then as time passes, the implications of this idea get more and more worked out.  and interconnected to other stuff.  and entrenched.  and harder to change.

Yea the one that I referred to as leaving a big gaping hole above is especially bad. It's like justifying not doing X because X *could* be wrong.

I've seen this kind of tactic used with the following situation: somebody says "it's just a mistake so it's ok" right before he decides to do the mistake.

No that's not ok.

"X is just a mistake" isn't a criticism of not doing X. It's a criticism of feeling bad over having done X. So it's irrelevant to the question of whether or not he should do X.

This kind of thinking is trying to justify doing X (support of theories, which ignores criticism), rather than trying to find out whether X shouldn't be done (ruling out of theories, which doesn't ignore criticism).


(snip)

>> Say somebody has romance preferences, and he has preferences for
>> reason/FI too. And he has learned that FI says romance is bad/harmful.
>> So he has a conflict between his ideas.
>
> if he has preferences for reason, then has he learned that romance is bad?  That’s sort of how I tried to set mine up:  part of the person wants romance, while part wants no romance cuz they can sorta see how it’s bad.

well let's say that the person agrees with FI that there are some flaws with romance. so that means that part of the person wants no romance (of that kind that he thinks is flawed). but as you said, maybe the person believes that he can do better than traditional romance knowledge stuff. so this person who partly doesn't want romance of X-type is fine with wanting romance of NOT-X type. So this person thinks he has a way of doing romance better than the others.

And if this person doesn't expose his ideas to FI, then he can easily fool himself about it. Maybe this is just a justification/rationalization/excuse, rather than an actual good explanation (that survives crit) that he can do romance better than others.


oh i just thought of another issue. let's say somebody wants romance and thinks he has a better version of it than others. and he wants FI/reason. and then he exposes his ideas about romance to FI crit. and then an FI veteran explains some ideas criticizing this guy's version of romance. And then this guy comes up with the idea that maybe the FI veteran is justifying/rationalizing/excusing/lying about this. He says to himself, "Maybe this FI veteran is trying to reject my version of romance by falsely lumping it in with traditional romance knowledge stuff." And maybe he thinks to himself that it's also possible that this FI veteran doesn't even know that he's lying. And then let's say that this person doesn't know what to do about this. He doesn't know how to tell the FI veteran that he thinks he's lying and that he might not even know it. He thinks that he'll "lose" the debate to the FI veteran not because the FI veteran is on the correct side of the truth, but because the FI veteran is better at arguing than he is. So he thinks 'there's no point' in even trying to argue the point.


Thoughts on that?


(snip)

>> And then let's say this person comes up with this idea:
>>
>> "I currently want a gf/bf, and I also want reason/FI. And i know that
>> FI says romance is bad. So, maybe FI is wrong that romance is bad, or
>> maybe I'm wrong that romance is good, or maybe we're both wrong about
>> it. So, my current plan is to continue pursuing both. That means also
>> trying to resolve the conflict. Address FI's criticisms of romance.
>> Refute them, or concede and stop wanting romance. Now this might take
>> a while. Maybe months or years.
>
> if you are doing romance for years while at the same time thinking you are being rational and trying to “resolve the conflict”, then i think you are lying to yourself.
>
> and this sort of rationalization is what i’m talking about.
>
> like as you start out doing romance, you start to SEE the flaws (if you’ve read and understood some FI).  if you are rational, you will see them, criticize them, refute them, and be done.

i think a huge indication that somebody is lying to himself about that is whether or not he is exposing his ideas about it to crit. Like, if you've been doing romance and FI for years, and you think you're not rationalizing/excusing/lying/justifying, then at some point you would have learned from FI that you could be wrong about that. Maybe you are lying to yourself. You should expose your ideas to crit to CHECK IF THEY ARE LIES/EXCUSES/RATIONALIZATIONS/JUSTIFICATIONS.

so like if you've been doing romance and FI for years, and you haven't talked about romance at all on FI, then that's a huge sign. Why aren't you talking about romance? Why aren't you dealing with this big conflict? Why aren't you trying to explain to FI that FI is wrong about your version of romance?[1]  Why aren't you trying to find out if you're wrong by exposing your ideas about this to FI crit?


[1] Here's an idea someone could come up with. "It's not my responsibility to teach stuff to FI people. What do I gain by teaching them? Nothing. So I won't waste my time teaching stuff to FI."

What should be said about this?


> but if you are irrational, then you will try to accept them.  fit them in your life.  make excuses for them.  and lie to yourself about what you are doing.
>
> you either have to do one or the other (rational or irrational) once you start noticing these contradictions.  and if it’s a year later and you are still trying to make progress on romance, then best guess you chose the irrational path.  and all of those rationalizations are going to be *harder* to fix.

i want to consider some examples of "trying to make progress on romance".

Let's say somebody doesn't have any romance in his life right now. No gf/bf/spouse/etc currently. Let's say he chooses to spend some time going to social engagements as a means of FINDING somebody to romance with. If this person exposes this idea to FI crit, I imagine FIers would ask *why prioritize spending time and energy finding someone to do romance with over learning FI?*

What other things would FI say to this person? How would such a person reply? What are some irrational ways of replying, and some rational ways of replying?


Or let's say somebody currently has a many-years-running romance relationship. And let's say he has also been on FI for a year reading emails but not posting at all. And he fails to convince his partner to join FI. So the only FI learning the partner is doing is directly via discussion with this FI lurker.

So let's say this person decides to ask on FI *how do I decide how to spend my time? I want to spend some time doing FI but my romance partner feels sad when I'm busy doing FI while  I could be spending time with him/her. How should I think about this?*

-- Rami

Friday, August 21, 2015

Revenge/Justice

Sometimes when a person gets angry at a friend, coworker, or family member, he's acting as if he already knows the final complete truth about what he's angry over. And it means that he’s not even thinking about the fact that he could be wrong. He's not open to changing his mind about it.

But he could easily be wrong. It's super common. Even when he feels justified.

Their logic goes like this. Somebody hurt me. So I'm gonna hurt him back. It's revenge. It's justice. I have to hurt him back to make him do what I think is right. I'm justified.

But this is all wrong. You’re seeing the world through the lens of the win/lose mentality. You're not infallible. You're not omniscient. You don't have the final complete truth. You're not justified. Revenge is evil. What you have is a fallible guess about the truth. You could easily be wrong about it. So your actions should reflect that. And they don't. So there is a big contradiction between your ideas and reality.

You should be calm. There's no hurry. You need to figure out what the problem is and figure out a solution. Something that everybody involved agrees with wholeheartedly. So that means each person involved has no objections to acting on the idea they agreed on. A win/win.

To illustrate this mistaken thinking with a concrete example, consider a situation where two siblings Chris and Paul hit each other and are complaining to their parent about it.

Chris: Paul hit me!

Parent: uh, Paul why did you hit Chris?

Paul: because he hit me.

Parent: you mean revenge? … you want to do revenge? … you think that’s the right thing to do? [1]

Paul: [no reply]

Parent: so, Chris why did you hit Paul?

Chris: it was an accident. I was swinging my arms and he was walking by me and I didn’t see him.

Parent: ok, so let’s review. Chris hit Paul by accident, and Paul hit Chris back on purpose for revenge. Or, Paul, is it that you think he hit you on purpose?

Paul: I don’t know.

Parent: Paul, so you agree that Chris could have done it by accident?

Paul: yes.


Parent: ok so let’s review again. Paul hit Chris while Paul hadn’t ruled out that Chris did it by accident. So Chris didn’t do anything wrong and Paul was wrong to hit Chris.

-------------------------

[1] To clarify, this is a situation where Chris does not have a history of wanting revenge or otherwise being negligent with respect to trying to avoid hurting other people.