Monday, December 30, 2013

Is Morality Objective?


Lots of people disagree about morality. Many of them think that since we disagree about it, morality must be subjective rather than objective. But this reasoning doesn't hold, because it's analogous to saying that science is subjective since many scientists disagree about scientific theories. The reason we disagree is that none of us is infallible, and so people make mistakes. None of us has reached the truth, but individually we are making our way towards it -- at least, that is the case for the people who are genuinely trying to seek the truth.

Another reason some people think morality is subjective is that they think there is no objective way for people to agree. But this is not true either. To explain why, I'll explain what I mean by the idea that truth is objective.

Truth is objective. This means that every question has only one correct answer. This applies to moral truths just as it does to any other truth.

Truth is also contextual. So morality is also contextual. No two people are ever in the exact same situation, so no two contexts are exactly the same. One consequence of this is that what is right for me is not necessarily right for you, and vice versa.

The objectivity of morality refers not so much to moral conclusions, but rather to the standard by which moral conclusions are determined. Judges should come to their conclusions using a standard that is independent of any individual judge. Analogously, scientists should come to their conclusions using a standard that is independent of any individual scientist.

Often, judges and scientists draw the wrong conclusions, but the method by which they reach their conclusions ensures that those conclusions will be revised in the future when new evidence or criticism is found. In the US judicial system, all court cases are open to appeal. This means that the law treats court decisions as tentative, and so the law is set up to keep all decisions open to revision. The same goes for science. All scientific theories are treated as our best theories to date -- which means that any scientific theory might be flawed, and so it's important to keep all of them open to revision. And the same goes for moral conclusions. Everybody should treat their moral conclusions as fallible, just like court decisions and scientific theories -- which means that any moral idea might be flawed, and so it's important to keep them all open to revision. To clarify, even the standard by which we come to conclusions is fallible, and it too is open to revision.


So what is the standard and how does it work?

Let’s consider science first. The standard for science is this: a theory is scientific if and only if it can, in principle, be ruled out by empirical evidence. So if a theory is claimed to be scientific, but it cannot, in principle, be ruled out by empirical evidence, then it is not scientific; instead, it is what we call pseudo-science.

The standard for morality is similar, and it applies to other kinds of knowledge too, not just morality. The standard is this: An idea is objective if and only if it is intended to solve a problem. Note that this even applies to science. The problem that a scientific theory is intended to solve is explaining physical reality while making testable predictions about it, such that the predictions are consistent with all our existing empirical evidence.

An objective idea is one that can be found to be false. And one way to determine whether it's false is to determine whether the idea fails to solve the problem it’s intended to solve. So if a person judges that a moral idea fails to solve the problem it's intended to solve, then that is its flaw. And if the idea is flawed, then it’s false, and so it should not be acted on. To clarify, when we explain why the idea doesn’t solve the problem it’s intended to solve, that explanation constitutes a criticism of the idea. It refutes the idea. [2]

So if an idea is flawed, then it's refuted. And if it doesn't have a flaw, then it's unrefuted. Now I've made that sound pretty simple but it's a lot more complicated than that. For one thing, people are fallible, which means that any of our ideas may be mistaken, which means that even our criticism can be flawed. That's why it's important to keep all our ideas on the table, even our criticisms.

So to clarify how refutation works, if an idea has an unrefuted criticism, then the idea is tentatively refuted. And, the unrefuted status of the criticism is also tentative. So if somebody comes along with a criticism of that criticism, then the original idea is now unrefuted.
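To make the bookkeeping concrete, here's a minimal sketch in Python -- my own illustration, not anything from the essay -- of the rule that an idea is refuted if and only if it has at least one unrefuted criticism. Criticisms are ideas too, so the check is recursive. The `Idea` class and its method names are hypothetical.

```python
class Idea:
    """An idea, which may itself be a criticism of another idea."""

    def __init__(self, text):
        self.text = text
        self.criticisms = []  # criticisms are themselves Ideas

    def add_criticism(self, criticism):
        self.criticisms.append(criticism)

    def is_refuted(self):
        # Refuted iff at least one criticism of this idea is
        # itself currently unrefuted. All statuses are tentative:
        # adding a criticism anywhere can flip them.
        return any(not c.is_refuted() for c in self.criticisms)


# An idea starts out unrefuted ...
idea = Idea("Action X is right")
assert not idea.is_refuted()

# ... a criticism tentatively refutes it ...
crit = Idea("X fails to solve the intended problem")
idea.add_criticism(crit)
assert idea.is_refuted()

# ... and a criticism of that criticism un-refutes the original idea.
crit.add_criticism(Idea("That objection misreads the problem"))
assert not idea.is_refuted()
```

The point of the sketch is only that "refuted" is a derived, tentative status, not a permanent label on the idea.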

A second thing to consider is how criticism works. A criticism is an explanation of a flaw in an idea. Now some ideas are vague -- their purpose is not clear. In other words, the problem that the idea is intended to solve is not clear. This makes it hard to find a flaw in it. For this reason, the fact that the idea is vague is a useful criticism of the idea. In other words, if the idea's purpose is unclear, then it's refuted.

Now people often disagree about what things are unclear, but this is a soluble problem. One way to do it is to identify what problem the idea is intended to solve. The people discussing the idea might go back and forth a bunch of times before the problem is established, but once that is agreed on, then it's easier to figure out if there is a flaw in the idea. Since the idea is a proposed solution for the intended problem, if we can explain how the proposed solution fails to solve the problem, then we've found a devastating criticism of that idea.

As an example, consider the case where somebody claims that some event caused some other event. If the claim doesn't come with an explanation of the causal relationship, then that is a criticism of the claim -- that it's unexplained. It's a criticism because without an explanation, we can't find out whether its reasoning is wrong. So it's wrong for not having any reasoning.

So ideas that are intended to solve problems are objective. Those that aren't are subjective. And morality is about solving problems. A moral philosophy should be able to provide a method to answer questions like 'should I learn to read?', 'should I learn epistemology?', 'what and when should I eat?', and 'how should I raise my children?' The answers to these questions are all ideas that are intended to solve specific problems. And for this reason, it's possible to find out whether they fail to solve the problems they intend to solve.


------


[2] "... by an objective theory I mean a theory which is arguable, which can be exposed to rational criticism, preferably a theory which can be tested: one which does not merely appeal to our subjective intuitions." _Unended Quest_, Chapter 31, by Karl Popper.


----------------------------------------------[ Q & A ]--------------------------------------------------

Q: Morality is subjective because moral ideas are based in subjective premises.

A: No, the premises are fallible too. They can be shown to be wrong. And it's possible for people to agree on them. There is no law of nature preventing people from agreeing on the premises.


-------------[ Since it's about opinions, then it's subjective ]---------------

Q: Morals are merely opinions, all opinions are subjective. So morality is subjective.

A: Do you think it's wrong for a parent to murder his 2-year-old child for crying too much? Or do you believe that that opinion is not wrong? If you believe the opinion is wrong, that raises the question: by what standard did you judge the opinion wrong? If you have a standard, then that means you are treating morality objectively.


-------------[ Basing ideas on personal feelings and opinions ]--------------

Q: Morality can't be objective because people form their moral codes based on or influenced by personal feelings and opinions. There are plenty of examples of this.

A: Yes, lots of people justify (aka base) their ideas on other ideas/feelings rather than look for flaws in them. But justifying ideas is wrong. So any idea that rests on justification is also wrong. What you're saying is that there are a lot of examples of people coming to conclusions using a false method of reasoning (justification) -- but so what? Just because some people are wrong about justificationism doesn’t mean that morality is subjective. For one thing, they are able to learn that their method is wrong. There is no law of nature preventing them from learning it. [...] To clarify, coming to conclusions by working out which one feels right is not an objective method.


-------------[ Dismiss all personal experience as invalid ]---------------

Q: I think that you have it all entirely the wrong way round. We are our emotions, our sense impressions, our ideas, our experiences. Assuming there is something 'out there', we can only encounter it through the filter of our minds. We can't encounter the world objectively; we can only do so individually and then compare our experiences in order to find a satisfactory way of understanding it . If you dismiss all personal experience as invalid, then you dismiss even the possibility of knowledge.

A: I agree that we can only encounter "it" through the filter of our minds. But I did not dismiss all experience as invalid. That’s not what fallibility means. You’ve misinterpreted me. [...] That an idea/feeling/experience is fallible means that it COULD be wrong, which also means that it COULD be right. And you’ve missed this part. You said that my position is that all experience is invalid, meaning that all experience is wrong -- and that actually contradicts my position. [...] My point is that justifying one's ideas/feelings/experiences by other ideas/feelings/experiences is a wrong way to come to conclusions -- it means trying to prove one's ideas/feelings/experiences, which is a mistake because it means not looking for flaws in them, and that means keeping your ideas/feelings static, preventing them from evolving. The right way is to try to find flaws in one's ideas/feelings/experiences and fix them, which means evolving/improving your ideas/feelings.


-------------[ Absolutes in morality ]---------------

Q: I read your essay and it's clear to me that you don't understand the meaning of objective morality. If morality is objective, as you clearly believe it to be, then there are absolutes in terms of morality, which is incorrect. In order for there to be an absolute, it has to be based on some outside force; in the case of morality, theists argue that that source is god. If you believe morality to be objective, you need to prove where it comes from. What is the basis for morality?

A: So you believe your claim because it feels clear to you? That’s a non-argument. Your feelings are not an objective standard for knowledge. [...] Re your assertion that if morality is objective then there are absolutes in terms of morality -- that is an unexplained assertion, so it's wrong for being unexplained. [...] Re outside force -- I disagree that knowledge has a basis. I disagree that there needs to be some outside force (god) in order for morality to be objective. Knowledge is created by guesses and criticism, not by basing it on other knowledge. Ideas need not, and cannot, be proved. Ideas have flaws and what we can do is seek out and fix flaws thus evolving our ideas.


-------------[ Ideas can't be proved ]--------------

Q: Re your idea that ideas can't be proved -- Really? Gravity is an idea, Evolution is an idea!

A: Yes, really. There are many theories of gravity and evolution, and not all of them are correct. There is only one theory of gravity that is consistent with all our existing evidence, and only one theory of evolution that is consistent with all our existing evidence. All the other theories of gravity and evolution have been refuted by empirical evidence. So ideas can only be ruled out; they cannot be proved.


-------------[ Popular opinion ]-----------------

Q: Just because someone has a moral standard does not mean that morality is objective. Objective means based on facts; I can have a moral standard based on the popular opinion in my society!

A: Coming to conclusions by determining which ideas are most popular is not an objective method. That's analogous to a scientist who forms his opinion of a scientific theory by polling other scientists to find out which theory is most popular among them.


-------[ Morality is contextual as a means of dodging questions? ]--------

Q: Does the additional context just create a new question in order to dodge answering the initial question or render it irrelevant due to lacking context about the individuals involved?

A: A lot of moral relativists (aka subjectivists) use their philosophy as a means of dodging questions posed by moral objectivists. I think it’s bad to dodge questions. And I think it’s bad to adopt a philosophy as a means of dodging questions.


----[ Does morality is contextual mean that morality is subjective? ]----

Q: By allowing data to be seen in different contexts, are you not arguing morality is subjective to begin with? (Objectivity comes into question when subjective data is deemed accurate or not in certain contexts.)

A: No. I'll give an example to clarify. Let’s say that John is in a situation and he decides that action X is his best course of action. Let’s say that John seeks Paul’s help to figure out if there is something better than X. So John tells Paul all the details that John thinks Paul needs to know to make a moral determination. Now let’s say that after hearing all the details that John gave, Paul is not sure if X is right or wrong, and he has a question about another detail that John didn’t give. So Paul asks John the question, and John answers it, and the answer is a detail that wasn’t mentioned earlier. Now let’s say Paul has enough information, and he thinks that action X is wrong because action Y is better, for a reason that Paul explains to John. Now let’s say that John agrees with Paul’s reason, and John doesn’t have any new criticisms, so John agrees that action Y is better than action X and he chooses to do Y instead of X.

So let’s summarize what happened. John was in a situation and he thinks that the relevant contextual details are A, B, and C, and he decides that action X is his best course of action, but he wants other people’s help to make sure. John explains this context to Paul, and Paul thinks that the contextual details that John thought were enough, were not enough. So Paul asks John a question whose answer reveals contextual detail D. At this point, Paul and John agree that contextual detail D changes the context such that the best course of action is Y rather than X. 

Now back to your question. Does this mean that morality is subjective? No. It’s possible for John and Paul to agree that John’s understanding of the context is missing some details that are relevant in determining what is the best course of action.



In science, the method is this: create a falsifiable theory, and then test it in an effort to falsify it.

For everything else, we do not have access to empirical evidence, which means that we can't falsify our ideas. Instead we use criticism to refute them. A criticism is an explanation of a flaw in an idea. If an idea is flawed, that means it fails to solve the problem it's intended to solve.

Now this method is actually a generalization of the one used in science. The problem that a scientific theory (aka idea) is intended to solve is to explain reality such that the explanation makes testable predictions and is consistent with all empirical evidence.


Now the reason the scientific standard works is that scientific theories are the only kind of theories to which one can apply the scientific method: create a falsifiable theory, and then test it in an effort to falsify it.

And the reason the general standard works is that objective ideas are the only kind of ideas to which one can apply an objective method.

Friday, December 20, 2013

_Why atheists fail to persuade theists._


Most people have the wrong epistemology, theists and atheists alike. One's epistemology is how he determines what is true and what is false.

Karl Popper found this mistake and called it Justificationism. This is how justification is supposed to work. Say you have an idea. For this idea to be believed, i.e. considered knowledge, i.e. considered true, the idea must be justified.

The problem with this epistemology is that it cannot work — an idea can never be justified. Whatever justification one has for the idea is itself an idea, so by justificationism's own reasoning, the justification needs a justification so that it can be considered true. But then, in order to consider that justification true, we need another justification for it. And so on, forever. This is an infinite-regress problem that needs a solution.

So what’s the solution? Popper figured out that the solution is to break the cycle by not seeking justification at all.[1] Instead, he said that an idea is considered (tentatively) true if there are no criticisms of that idea [2] — a criticism is an explanation of a flaw in an idea.[3]

To be clear about how criticism works, we need to understand that flaws only make sense in the context of a problem. Ideas are solutions to problems. So if there is a criticism of an idea, it explains why the idea fails to solve the problem — that is its flaw.[3]

Now, even if an idea has a criticism, thus rendering the idea false, the criticism itself could be wrong. So if somebody comes along with a criticism of that criticism, then the idea is rendered true again. So this means that an idea can be true, but only tentatively, until a criticism of it is found. So instead of using the terms true and false, it’s better to use the terms unrefuted and refuted because it's more clear that these labels are always treated as tentative.

So let’s do an example. Isaac Newton created his theory of gravity.[4] No one refuted it for over 200 years. During that time, the theory was unrefuted (aka tentatively true). Then Einstein showed that Newton’s theory doesn’t work for anything going near the speed of light. For over 200 years scientists thought that Newton’s theory was true — would never be refuted — but they were wrong. Einstein refuted it. [To be clear, we still use Newton’s theory of gravity in situations where objects are not going near the speed of light, because it’s a pretty good approximation within those constraints.]
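The Newton example can be sketched in a few lines of Python. This is my own illustration, not anything from the essay; the `criticisms` mapping and the `refuted` function are hypothetical names. The rule is the one stated above: an idea is refuted if and only if at least one criticism of it is itself unrefuted, and criticisms are ideas too, so the rule applies recursively.

```python
# Map each idea to the criticisms aimed at it. Criticisms are
# ideas too, so they can appear as keys as well.
criticisms = {
    "Newton's gravity": ["fails near the speed of light"],
    "fails near the speed of light": [],  # Einstein's criticism, itself unrefuted
}

def refuted(idea):
    # Refuted iff some criticism of the idea is itself unrefuted.
    return any(not refuted(c) for c in criticisms.get(idea, []))

# Einstein's criticism has no criticisms of its own, so it is
# unrefuted, which makes Newton's theory (tentatively) refuted.
print(refuted("Newton's gravity"))  # True
print(refuted("fails near the speed of light"))  # False
```

If someone later produced an unrefuted criticism of Einstein's criticism, the same function would report Newton's theory as unrefuted again, which is exactly the tentativeness the essay describes.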

Now, what some atheists do is to say that the God idea is wrong because it’s not justified by evidence. And this is a mistake because justification is a mistake. And the theists can claim the same thing: that the idea that God doesn't exist is wrong because it’s not justified by evidence — and they are correct about that. So, by atheists using this false epistemology in their arguments, they give theists the same tool to use in their counter-arguments, thus creating a stalemate. But the only reason this is a stalemate is because the atheists are framing the problem like this: You don't have evidence to justify your claim.[5]

The Popperian way to think about this is to think about which idea is unrefuted and which is refuted. Now to be clear, a refutation does not require physical evidence gathered from experiments. A refutation can be an explanation of a flaw in an idea.

So let’s consider the God and No-God ideas. Neither the God idea nor the No-God idea has any evidence against it. But let’s ask why the God idea doesn’t have any evidence against it. It’s because the God idea doesn’t make any testable predictions.[6] And without testable predictions, we cannot design an experiment that could test it. So it's not a scientific theory. But that doesn’t mean we should consider it true. We should also look for criticisms of the idea that don't use physical evidence.

As for the No-God idea, it also doesn’t make any testable predictions. But that’s not really the rival idea to God. The competing idea is Evolution. And the Evolution theory does make testable predictions. And to date, we haven’t found any evidence that refutes the theory of Evolution. To be clear, that a theory makes testable predictions, means that it’s a scientific theory. So the Evolution theory is a scientific theory that hasn’t been refuted by evidence, while the God idea is not a scientific theory and thus cannot have any physical evidence against it because it doesn’t even make any testable predictions.

But as I said before, we don’t need physical evidence to refute ideas. We can do with criticism alone. If we have just one unrefuted criticism of an idea, then that idea is refuted.

So are there any criticisms of the God idea? Well in order to figure that out, let’s consider what problem the God idea is supposed to be solving. The most common problem that theists claim the God idea solves is this [7]:

"Where does 'apparent design' come from? Where does complexity come from? Where do adaptations come from? Where do useful or purposeful things come from? All of these questions are fundamentally asking roughly the same thing: Where does knowledge come from?" (source: http://fallibleideas.com/evolution-and-knowledge).

Now, everyone knows that people create knowledge, but where did people come from?

The God idea attempts to solve this problem by saying that God created people. But this doesn’t solve the problem — all it does is add a layer of indirection. God is a complex, intelligent being, like people. God contains knowledge, so where did God come from?

So that’s the flaw. The “solution” does not solve the problem because all it does is create a new problem of the same type: what created God?

The solution to this problem should address the question of knowledge, rather than sidestep it.

The solution to this problem is Evolution. For more on that, see _Evolution and Knowledge_, by Elliot Temple (link: http://fallibleideas.com/evolution-and-knowledge).


--------------[ 1 ]---------------

Q: But this clearly doesn't apply to mathematics because mathematics is only concerned with proof and justification. And without either of these two elements, you have nothing (as far as mathematics is concerned).

A: I disagree. When mathematicians created their math ideas, they did it by guessing ideas and criticizing their guesses. What you're calling a proof is an argument that argues for the math idea. When a mathematician creates an argument (aka proof), he goes through many iterations before he lands on something he's happy with, something that addresses all of his own criticisms. The point is that the argument is fallible, and the mathematician treats it that way.


--------------[ 2 ]---------------

Q: So as long as I say something is true, even if it isn't it's true until someone says "no it isnt" then what they say becomes the truth. I don't think we would get very far if this were the case.

A: That's how the US judicial system works. And I think it works pretty well. Somebody might go to jail because of a theory of his guilt, but the case can always be appealed, because the court knows that it's possible something was wrong with the evidence, or something else. So even the courts know that their decisions are tentative.

Q: I don't know if I agree. There is a burden of proof requirement in criminal cases in the US. It's a fallible system and innocent people do get convicted. That theory in the op has no burden of proof. It's simply states because "I say so"

A: No, you’ve misunderstood. Suppose “you say so” that I’m guilty of doing specific thing X, and “I say so” that I’m innocent because I did specific thing Y instead. If there is no evidence to refute either claim, then it’s a stalemate — neither claim is refuted — innocent until proven guilty.


--------------[ 3 ]---------------

Q: Rami, if I am correct, you are arguing that the default position for any argument should be that it is correct until it is criticized. I do not think this is reasonable. For one thing, it is difficult to define what a criticism is. How do we show something is a criticism?

A: My answer is long so I decided to blog it: How does criticism work?


--------------[ 4 ]---------------

Q: You're using the wrong definitions for theory, hypothesis, and law. So your essay is wrong.

A: No. Just because I use different words than you do, that doesn't mean that my ideas are different than yours. Two people can say the same sentences and mean different ideas by them. Also, two people can say different sentences and mean the same ideas by them. So the words theory, hypothesis, and law, in the way I used them, are all the same thing in that they are testable/falsifiable theories, which is what gives them scientific status. As Popper explained, a theory is scientific if and only if it can, in principle, be ruled out by physical evidence. This is known as the line of demarcation -- it separates science from non-science. Note that some people claim to be doing science but they don't create testable/falsifiable theories and test them, so they aren't doing science, since they aren't following the scientific method. So it's important to be selective about which theories get classified as scientific. Just because something is claimed to be scientific doesn't mean it is. Make your own judgment call. Don't trust it just because it's claimed to be scientific.


--------------[ 5 ]---------------

Q: If (physical) evidence isn't used as justification in support of a theory, then how is it used?

A: Evidence refutes theories. Evidence is criticism. To be clear, only scientific theories can be ruled out by evidence.




--------------[ 6 ]---------------

With the God idea I'm talking about the harder case where it doesn't make any testable predictions. I know that some people's idea of God does make testable predictions, but I didn't talk about that in my essay because that's easier to refute, since you can refute it with physical evidence.


--------------[ 7 ]---------------

There are some other common reasons people give for believing in God which I explain in another essay, _Is God real?_.


----[ What about morality? ]----

Q: If people become atheists, doesn't that mean they will become immoral?

A: Why would that happen? Morality is a body of knowledge about what is right and wrong. Humans can create any kind of knowledge, including moral knowledge. So just because a person leaves a religion, that doesn't mean he'll throw out his morality. For example, a Christian knows that murder is wrong. If he stops believing in God and Christianity, that doesn't mean he'll stop believing that murder is wrong.


----[ What about certainty? ]----

Q: If anyone is very sure about existence of God, either atheist or theist, how do you have that "certainty?" Would anyone like to share?

A: Humans cannot have certainty. All our knowledge is conjectural. We do not have infallible sources of knowledge. [...] I am an atheist. I believe that there is no being that created the universe. And I came to this conclusion by the same standard that I come to all my conclusions -- if an idea has an unrefuted criticism, then it's refuted, and alternatively, if an idea does not have any unrefuted criticisms, then it's unrefuted.


-----[ more on persuading ]-----

Q: How do I persuade a theist?

A: Ask him, "What question does your god-claim answer?" Or "What problem does your god-claim solve?" That puts the ball in his court. Don't take the ball back until he has given you a question to work with. Then you criticize it. Show how his god-claim doesn't solve his intended problem. ~~~ If you are arguing with a theist and you're not sure how to criticize his question, post it here and I'll help you. Or email me privately.


-----[ Clarification on Popper ]------

Q: Popper did not argue that anything non-scientific cannot be true, rather he criticized pseudo-science - ie claiming something is scientific when it is not. Popper recognized that some things cannot be tested.

A: I didn’t argue that either. I said that scientific theories can be refuted by empirical evidence and by criticism, and that non-scientific theories can be refuted by criticism. So any theory, scientific or not, that has an unrefuted criticism of it, is false. And any theory, scientific or not, that doesn’t have any unrefuted criticisms of it, is true. Now I use these labels true and false tentatively, since we can’t predict future criticisms, i.e. we can’t predict future knowledge creation.






Rami Rustom: A lot of arguments I hear from atheists go like this: "What evidence do you have of god?"

Rami Rustom: But that's a bad question because you can't get evidence of god.

Rami Rustom: And then the theist counters with: "What evidence do you have that god doesn't exist?"

Rami Rustom: And that's a bad question too, because you can't get evidence that god doesn't exist. (To be clear, I'm talking about the harder god claim, the one that doesn't make any testable predictions.)






Thursday, December 19, 2013

How does criticism work?



Question:

Rami, if I am correct, you are arguing that the default position for any argument should be that it is correct until it is criticized. I do not think this is reasonable. For one thing, it is difficult to define what a criticism is. How do we show something is a criticism?


Answer:

Criticism has no meaning outside the context of a problem.

A criticism is an explanation of a flaw in an idea.

An idea is a solution to a problem.

Once the problem is defined, then we can talk about whether or not the proposed solution actually solves the problem.

I'll do a couple of examples.


Let's say the idea is: "God exists."

Well, what problem does that idea solve?

Let's say the theist says that the problem is: "What created the universe?"

And the theist's proposed solution is: "God did it."

So here's a criticism: "The proposed solution doesn't work because all it does is create another problem of the same type: What created god?"

So his god idea is refuted, for failing to solve the problem it's intended to solve.


Here's another example. Let's say the idea is "Punishment works".

Well, what problem does that idea solve?

Let's say the punishment advocate says that the problem is: "How do I make someone change his bad behavior?"

And his proposed solution is: "The person who is doing wrong behavior must be punished so that next time that he thinks of doing that behavior, he'll avoid doing it in order to avoid the pain of the punishment."

So here's a criticism: "The proposed solution doesn't work because the person hasn't learned why the behavior is wrong, nor what behavior is better than the wrong behavior, so he'll continue doing what he knows. Changing one's behavior is a matter of learning, and learning requires explanations that the person doing the learning must agree with in order to be persuaded." For more on that, see my essay on Parenting.




-----[ more on criticism ]-----

Q: Re supernatural claims, when the claimant claims natural evidence warrants supernatural causes.. Theism does this. There are hundreds of reasons theists use to justify their position. I see no connection. Is this no connection an acceptable criticism? Or under Popper's idea, the claims persist due to no criticism existing?

A: If somebody claims a cause, and he doesn't give an explanation for his claim, then that's a criticism of his claim -- that it is unexplained. It's a criticism because without an explanation, we can't find out whether its reasoning is wrong. So it's wrong for not having any reasoning. A similar criticism applies to vague explanations. If you have an idea, and you tell me your explanation, and that explanation seems vague to me -- meaning that it could mean like 10 different things -- then my response will be 'Your explanation is vague', which is a criticism of your explanation. At this point you could clarify your explanation in an effort to make it less vague.

Thursday, November 14, 2013

Rethinking Higher Education


Americans grow up being told that earning a degree means getting higher-paying jobs and having more job options. I think this is a conclusion people draw from a statistic they hear repeated on TV: that college graduates on average earn $1,000,000 more than everybody else over their lifetime.


Oil painting by Ragod Rustom
The problem with this reasoning is that it treats all college students the same, when they aren't. Students who take advantage of opportunities do better than those who don't. And this is true both during and after school. More importantly, the people who seek out and take advantage of opportunities learn way more than those who don't -- and it's the lifelong learners that earn more than everybody else. So going to college isn't the thing that increases a person's chances of success, because college doesn't make you seek out and take advantage of opportunities.

So the relevant question should be, if you go to college, what are you going to do there? Are you going to wait for teachers and job placement counselors to give you opportunities and to tell you how to take advantage of them?



College-bound high schoolers?

Now some people choose to go to college because they got excellent grades in high school and they think that that means they will get into the best schools, which they think translates to being selected first for the best jobs. But this reasoning has the same problem as before -- it treats all jobs the same, when they aren't. It ignores that job availability depends on supply and demand. So if you place yourself in a pool of job applicants in an industry where there is way more supply than demand, for example attorneys (see this article for more), then you might be left with a job that doesn't require your degree, or worse, no job at all. When employers are choosing to fill positions, they would rather take their chances with experienced people over college grads with great grades.

But it's even worse than that. To illustrate how bad the situation is for so many college grads, let's consider two options that a person might have available to him coming out of high school: going to college, and getting a job.


The financial cost of college

The college option typically costs $80,000 in tuition for four years*. Now let's assume that you're still living with your parents as a means of saving money. Let's also say you chose not to work during that period so that you could put more of your effort into getting good grades without having to struggle with a job because that would consume some of your time and attention. So there's an opportunity cost on the college option since it's competing with the job option, and it amounts to whatever you would have earned had you been working a job for those four years. To make this conservative, let's say you worked as a full-time cashier at a fast food restaurant making $7.50 an hour -- that comes to $50,000 for the four years of take-home pay, taking into account taxes and social security and medicare. Going along with the theme of saving money, you would save all your take-home income by getting your parents to pay for your food and everything else, just like they would have done for you had you chosen the college option. So by the end of the four years, the difference between the college option and the job option is $130,000.

But that's not all. There are some variables that are harder to calculate. Had you been working for those four years, you would have been learning job skills that would have helped you get higher-paying jobs, because that's what employers want, job skills. To be clear, had you been working for four years, you would have been promoted a few times during that period, and with each promotion you would have gotten more pay and more learning opportunities -- while with the college option, you don't get that (see this article for more).

So not only does the college option set you back $130,000 compared to the job option, it also sets you back in time. That's four years that you could have been learning job skills that employers want -- so this is another type of opportunity cost. 

And to be clear, the $130,000 figure is conservative since it assumes that you didn't get any promotions during the four year period.
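The back-of-envelope comparison above can be sketched in a few lines of Python. The figures -- $80,000 total tuition, a $7.50/hr full-time job, and a roughly 20% effective rate for taxes, Social Security, and Medicare -- are the essay's own rough 2013 assumptions, not precise numbers:

```python
# Rough sketch of the college-vs-job comparison (2013 figures).
# All inputs are the essay's approximations.

TUITION_4_YEARS = 80_000      # total tuition for the college option
HOURLY_WAGE = 7.50            # full-time cashier wage
HOURS_PER_YEAR = 40 * 52      # full-time, year-round
EFFECTIVE_TAX_RATE = 0.20     # approximate combined withholding

gross_4_years = HOURLY_WAGE * HOURS_PER_YEAR * 4
take_home_4_years = gross_4_years * (1 - EFFECTIVE_TAX_RATE)

# Cost of the college option = tuition paid + wages not earned
difference = TUITION_4_YEARS + take_home_4_years

print(round(take_home_4_years))  # the essay rounds this to $50,000
print(round(difference))         # the essay rounds this to $130,000
```

Note that this still leaves out promotions and raises over the four years, which is why the $130,000 figure is conservative.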


College is an investment

Now a lot of people think of college as an investment. But if you're going to think of it as an investment, then you sure better treat it that way too. So consider the analog -- an investment of your hard-earned money into a new business that you are going to start up. Now, you know the business might not become profitable. So you know that you are risking your money -- you could lose your entire investment! And the prudent thing to do is to do everything in your power to make this business a success -- short of anything immoral that is. So that means that you should seek out and take advantage of opportunities that could make you successful, because just putting in the investment doesn't guarantee that the investment turns a profit. It's the same with college -- if you sit back and wait for opportunities to fall in your lap, then your investment won't turn a profit.


Does getting a specialized degree help?

Now you might think that you can get around this problem by getting a specialized higher degree so that you could get specialized jobs, like being an attorney, but that doesn't work for most people either. Some attorneys who are just starting out, even if they ended up getting a job with their specialized degree, don't even make enough money to make their loan payments and so they resort to making hard decisions like working for the government in order to get their debt erased (see this article for more), or moving to larger cities where the pay is higher. To be clear, lots of government attorney jobs are competitive and lots of attorneys wish they could get those jobs.

Ok so that's a bad situation, but maybe there are other specialized jobs that fare better? Sure. You could be a physician. At least that way you could make enough money to afford the loan payments. But now we're talking about 8 years of school beyond high school, and then you're only making $40,000 a year while you're in training in a residency program for at least three more years. And that isn't enough income to afford your loan payments unless you're still living at home -- which would raise the question: How long do you plan to be living at your parents' house?


So who should go to college?

If school is not for everybody, then who is it for? Well it depends on the situation. Let's say you've loved building things your whole life and you want to do experimental physics, then maybe a PhD in Physics is for you. But even someone like this might prefer to work as an engineer, which might only require a Bachelor's degree. Or maybe he'd prefer to work as a computer programmer in order to fund his love of building things at home using 3D printers and other CNC machines without being paid for it (see Make magazine for more) -- although it's possible he invents something that people would be willing to pay money for, CHA-CHING! Also, being a computer programmer allows you to automate actions into the things you make -- that's robotics! And that's not the only advantage. Computer programming doesn't require a degree (see this article for more), and it's something you can start learning as soon as your parents give you a computer.

When deciding whether or not to do higher education, some people have the opportunity to take advantage of scholarships for some or all of the cost of tuition, room, and board. For these people, the college option is better, but not by much. You still have the problem of not earning money, so by the end of four years you are net neutral on money instead of net positive, which is what you would have been had you gone to work instead of attending college. You still have the problem of losing four years of learning job skills. A full-ride scholarship does not solve these problems, so it's important to make sure not to go to college just because you have access to a full scholarship.

Most people who want to improve their job options would do well to seek a job that requires only a training program at a trade school, say for welding or hairstyling, which is something people can do while working full-time. Similarly, lots of higher-paying jobs like paralegal, medical assistant, and massage therapist only require an Associate's degree, and again, you can attend these schools while working full-time. Some of these programs can be done online, which makes it a lot easier for full-time workers and for those who have children.


Social pressure as a reason to go to college

Now there is another dynamic at play here. Lots of people feel pressured to go to school, from parents and friends. When asked the question, "why do you want to go to college?" they answer "so I can say to people that I went to college" or "my parents never approve of anything I do, so if I graduate from college then they would get off my back." So they seek prestige because prestige makes them feel better -- because having prestige makes other people react in certain ways that they prefer. But these people are wrong that society is pressuring them. What's happening is that these people pressure themselves because they care what other people think about them -- they crave social approval. These people would do better to learn how to live their lives by their own opinions, as opposed to living their lives by the opinions of others.


Social pressure as a reason to not take a McJob

Just like going to college to seek prestige, lots of people also choose not to take minimum wage jobs as a means of seeking prestige. These people think of these "McJobs" as low-class and only for dumb people. They think that if they take a "McJob," then they will lose prestige. But this doesn't make sense. The cost of acquiring prestige is more than the potential benefit, so what's the point of having it? In fact, it seems to work against earning more money -- since the alternative of getting a McJob would help you learn job skills that you could use towards getting higher-paying jobs later. For more on prestige, see The Fountainhead, by Ayn Rand.


Having prestige does not improve your life. You should make decisions that add value to your life, and having the ability to say to your friends "I'm a college graduate" and "I don't work at McDonalds" may add perceived value to your life but it does not add actual value.

So, when deciding whether or not to do higher education, make sure you are making a choice that is right for you, that takes into account your situation, your interests, your goals, and that doesn't depend on how other people perceive you.



------------------------------------------------------------------------------------------


* The $80,000 tuition figure is for 2013. Currently, tuition prices are rising about 10% annually.


crit


people do actually lose prestige/status in the eyes of various people if they work a McJob for a long time
you can try to persuade people that prestige and status aren't worth having
This prestige won't help you get a job or earn more money, so what's the point of having it?
playing prestige and status games DOES get lots of people jobs, though
it's a life strategy
i don't think it's a particularly good one
but you can't just pretend that it's totally ineffectual
When Rand wrote The Fountainhead, she didn't write Keating as totally failing at everything. She showed the miserable state of "success" by Keating's standards.

showing him just failing at everything would have been faking reality

Monday, November 11, 2013

Pulling the plug...

This is a post that I originally wrote on 1/10/2012 in the BOI discussion group here.

So a liberal view says that suicide is ok, because someone who is
experiencing great distress should have the choice to end that
distress in whichever way she chooses [so long as she does not
infringe upon another's rights]. Another reason is that a person did
not choose to come into existence, so she should have the choice to
reverse this decision that was made for her.

And this idea intersects with the idea of healthcare. And I think a
liberal view says that healthcare should not be paid for by government
at all [including for the old and young]. For now I'll hold off on the
question of the young until we've resolved the matter of the old.

The old are getting older and the healthcare costs are rising
dramatically. Keeping somebody alive at 100 y.o. costs considerably
more than at 90, which costs considerably more than at 80, and so on.
And the way it stands now, the middle-aged are paying for it. But the
ratio of non-working to working people is quickly getting larger. So this
seems like another spiral effect situation. And spiral effects don't
end well. So what is the solution?

Consider this thought experiment. A 90 year old has an accident and
almost dies; she slips into a coma. She is hooked up to machines that
keep her alive. Her family hopes that she recovers. Time passes. She's
still hooked up to machines because her family hopes that she
recovers. If she were conscious she might ask to be unplugged; but we
have no way of knowing. More time passes. Her family still hopes. More
time passes. They still hope. Where does this end? When her body
finally fails? Is this the right solution? Now let's add the idea that
the entire hospital stay was paid for by taxpayers. Is this
acceptable? What if this went on for 20 years? What if this scenario
happens 100 years from now when our technology is better and people
can be kept alive indefinitely, i.e. their body does not fail. Is it
right to keep her body alive for hundreds of years or for ever while
taxpayers are footing the bill?

Absolutely not. So the question is, where does the line get drawn? I
think it's simple. There is no line to be drawn. Either an old person
pays for her own healthcare to stay alive or she doesn't and dies. This
seems cold, but if you disagree with me, then consider the above
thought experiment; where would you draw the line? And if you choose a
position on this scale to draw this line, what will you do when the
scale changes [as it necessarily will as older technology gets cheaper
and new technology arises]? Will you try to move the line with the
scale? How would you choose that? What rational process would you use
to make such choices?

And I'm not suggesting that old people should die. Their children
could pay for them. And if they don't want to, why should I have to
pay for someone else's old parents? I have the choice to pay for my
parents when they are old. And I want to retain the option to not pay
for somebody else's old parents.

What do you think?

This is the healthcare debate that I mentioned above:
http://groups.google.com/group/beginning-of-infinity/browse_thread/thread/b5dd29e69970ac7e

-- Rami

Sunday, November 3, 2013

Why the gender gap on physics assessments?



Researchers are lost on the question of why women consistently score lower than men on assessments of conceptual understanding of physics. Previous research claimed to have found the "smoking gun" that would account for the differences, but a new synthesis of that past work has shown that there is no pattern to be found.

"These tests have been very important in the history of physics education reform," said Dr. McKagan, who co-authored the new analysis. Past studies have shown that students in classrooms using interactive techniques get significantly higher scores on these tests than do students in more traditional lecture settings; "these results have inspired a lot of people to change the way that they teach," said McKagan. But several studies had also reported that women's scores on these tests are typically lower than men's. Lead author Madsen said, "We set out to determine whether there is a gender gap on these concept inventories, and if so, what causes it."

But what problem are these researchers arguing over anyway? Are they thinking that all men and women on average should understand physics equally? Another question that this raises is: do they think that these tests accurately measure understanding of physics? Well, in order to keep this piece short I'll assume that the tests accurately measure what their creators claim they measure. So that still leaves the question: why should men and women understand physics equally?

To illustrate that this is the wrong question, consider two individuals, one that loves physics and doesn't particularly like art and another that doesn't particularly like physics and instead loves art. Should these two people be expected to learn physics equally? Should they be expected to learn how to draw equally? Of course not. Interest drives learning. Everybody knows this yet somehow researchers ignore it. When somebody is interested in a subject, they spend a lot of time thinking about it, enjoyably. And without interest, the person wouldn't think much on that subject. And trying to do so in spite of lack of interest is not enjoyable at all. It's painstaking. And that's precisely why lack of interest is a barrier to learning.

So what's going on? Why are these researchers thinking that lumping all women together and all men together is the correct way to figure out what's going on here? Do they think that interest in physics by women on average should be equal to interest in physics by men on average? Does that even make sense? I think these researchers are completely ignoring the concept that interest drives learning, and that differences in interest cause differences in learning. So then the question is: why are there differences in interest between men and women?

To answer that question, I'd like to consider a more primary question: why make this arbitrary division by gender? Why not divide by race? Should we expect that all races on average should have equal interest in physics? What about dividing by culture? I suspect that these researchers would think that dividing by race or culture wouldn't make sense because there are huge differences in background knowledge among the groups. But then that raises the question: why should we think that men and women in US schools, who theoretically receive the same education opportunities on average, have the same background knowledge? Well, men and women don't share the same background knowledge. Boys and girls are raised differently by their parents, and society treats them differently, so girls grow up with different background knowledge than boys. And it's these differences in background knowledge that result in differences in interest, which then result in differences in learning.

Just consider the two hypothetical individuals from before. One loves physics and the other loves art. The question is: why do they love different things? Is it that there are differences in genes between them that cause differences in interest? Or are the interests learned?

Well, even if genes are a factor, then isn't it possible for the X chromosome and the Y chromosome to contain some genes that affect interest in things? And if this is the case, then what would the researchers be looking for exactly? If it's possible that women's lesser interest in physics is due to a gene on the Y chromosome, which women do not have, then whatever those researchers are looking for could easily be drowned out by this gender-specific gene difference. So even if some data "emerged" from the analysis, there is no way to know whether that identified variable is the cause or whether the cause is actually a gender-specific gene. So why do the researchers think that the answer they are looking for would emerge from the analysis?

If it's a genetics issue, then the researchers are looking in the wrong places. And if it's not a genetics issue, well then we should be talking about the cultural differences between men and women as a factor in why they learn physics differently. And again, analogous to the genetic question, if it's a cultural issue, then researchers are looking in the wrong places.

So what's going on here? Why are these researchers looking in the wrong places? What are they doing wrong? Well this has already been answered decades ago by Karl Popper, a philosopher of science. He explained that many scientists do science wrong. The right way is to create a testable theory, and then to test that theory. The wrong way, which is what these researchers are doing, is to sift through data looking for theories, and never actually doing any tests that could possibly rule out a theory. It's a problem of scientific methodology.

Popper taught us that not all science is being done right. Kinda obvious huh? Well it's true. We need to be selective in figuring out what is good science and what just looks like science. And he created his Line of Demarcation to separate science from non-science. It goes like this: a theory is scientific if and only if it can (in principle) be ruled out by experiment. So that means that if a theory cannot be ruled out by experiment, then it's not scientific -- instead it is scientism, stuff that looks like science but isn't because there is no way to rule out the theories being hypothesized.

So the way to test whether or not a theory is scientific is to ask yourself, “what would it take to make this theory false?” If the answer is nothing, then it isn’t science.

So consider what these researchers are doing. They are assuming that there is a difference between men and women that should account for the differences in learning physics, and they are sifting through data hoping for the theory (the "smoking gun") to jump out at them. But this is backwards. Where is the part about creating an experiment that could rule out the theory? It's not there. They aren't even thinking about it. This is not science. It is scientism.

For more on the Line of Demarcation, see _Conjectures and Refutations_ (Chapter 11: The Demarcation Between Science and Metaphysics), by Karl Popper, or see the more recent and easier to read _The Beginning of Infinity_ (Chapter 1: The Reach of Explanations), by David Deutsch.

Author: Rami Rustom

---

Citation:

"The gender gap on concept inventories in physics: what is consistent, what is inconsistent, and what factors influence the gap?" A. Madsen, S. B. McKagan and E. C. Sayre, Physical Review Special Topics – Physics Education Research. 

Saturday, October 19, 2013

How to interpret

How to create knowledge

———

Start with a problem.

Create multiple proposals for solutions, and with each, explain how the proposal solves the problem.

Try to rule out all but one with criticism.

The solution will have (unrefuted) criticism of all of its rival proposals.



If 1 left, then that’s the (tentative) solution.

If more than 1 left…

If none left…

———

How to create knowledge (of a text, or body language, or any kind of action that a person does -- herein referred to as an “idea”).

———

Start with a problem. (What is the correct interpretation of the idea?)

Create multiple proposals for solutions, and with each, explain how the proposal solves the problem. (Create multiple interpretations, and with each, explain why the interpretation is correct.)

Try to rule out all but one (interpretation) with criticism.

The solution will have (unrefuted) criticism of all of its rival proposals.
  • State the problem the (interpreted) idea is intended to solve.
  • Consider the relevant context as a means of creating criticism.
  • Check understanding of words with a dictionary, possibly more than one.
  • Consider EVERY word as a means of creating criticism. (Don't approximate.)


If 1 left, then that’s the (tentative) solution.

If more than 1 left…

If none left…
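The elimination procedure above can be sketched as a toy program. The proposals and criticisms here are hypothetical placeholders standing in for explained arguments, not a real interpretation exercise:

```python
# Toy sketch of "try to rule out all but one proposal with criticism".
# criticisms maps a refuted proposal to the criticism that rules it out.

def surviving_solutions(proposals, criticisms):
    """Return the proposals not ruled out by any criticism."""
    return [p for p in proposals if p not in criticisms]

proposals = ["interpretation A", "interpretation B", "interpretation C"]
criticisms = {
    "interpretation B": "contradicts the surrounding context",
    "interpretation C": "relies on an approximate word meaning",
}

survivors = surviving_solutions(proposals, criticisms)

if len(survivors) == 1:
    print("tentative solution:", survivors[0])   # one left: tentative solution
elif len(survivors) > 1:
    print("more than one left -- need more criticism")
else:
    print("none left -- need new proposals")
```

The three branches at the end correspond to the "If 1 left / If more than 1 left / If none left" cases above.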

-----------------

Clarifications:
  • When dealing with anyone, the attitude "Maybe this other guy knows something I don't" should be your initial attitude. Actually this is true for any idea (intuition/emotion) even within one person.
  • If you produce inexplicit knowledge as an interpretation, e.g. anger at someone, or a gut feeling, or "love at first sight" -- it's dangerous to act on that knowledge because you have not done any explicit work of creating multiple interpretations and refuting all but one. 
  • After creating one or more interpretations, if it's practical, ask the author of the idea to confirm that your interpretation is correct. Do this BEFORE attempting to criticize the idea. Also, if feasible, ask the author of the idea for criticism of your interpretation of his idea BEFORE attempting to criticize the idea.
  • If you are interpreting something someone said, and there is available evidence, then check that evidence while you are explicitly interpreting, rather than trusting your memory of it from first impression.

Wednesday, October 9, 2013

What should American companies do in foreign nations?


Question:
Is it wrong for US companies to conduct business in foreign nations using business practices that would be illegal if done within our US borders?
Answer:
I am a follower of Objectivism's economic philosophy, which is borrowed from the Austrian school of economics. It advocates for free trade. It says that any law that restricts voluntary trade transactions by giving one group rights and not others will increase the share of the pie for that one group while necessarily decreasing the size of the whole pie.
President Woodrow Wilson understood this well. He advocated for free maritime trade in his Fourteen Points at the end of WWI. He understood that peace requires prosperity, and that prosperity requires free trade. 
Now, there is a counter-argument I've recently heard of but I'm unsure of its validity. It involves the situation where a foreign country subsidizes the production of a product -- case in point, China subsidizes anything made from iron. A subsidy like this acts as a sort of reverse tariff. And the problem is that this is unfair treatment for domestic companies competing with those foreign companies. And the counter-argument says that we should be able to put a tariff on that imported foreign product in order to offset that subsidy within our nation's borders -- and the thing is that tariffs are a restriction on voluntary trade transactions.
Is this feasible? What could go wrong? Will the foreign country stop subsidizing? Or will the foreign companies just decide to put their efforts into exporting to other nations that don't put up a tariff on those products? And what does this mean for us? In any case, I'm going to post this question to the Fallible Ideas email list, where I post my ideas to get quality external criticism.


Now an interesting case is Apple's Chinese workers. Their working conditions are below American standards, and Apple is heavily criticized for this by lots of Americans. The problem here is that those workers should not be compared to American workers; instead they should be compared to their Chinese counterparts. Compared to them, Apple's Chinese workers have much better working conditions. So Apple should be praised, not vilified.


I should mention that I disagree with some of our laws on labor -- they are anti-free-trade/anti-liberalism. Case in point, children are not allowed to work (except in certain cases). And this law was created in order to protect children from their parents who would force them to work. But this one-size-fits-all "solution" doesn't work. Sometimes a kid *wants* to work -- VOLUNTARILY!  So why shouldn't he be able to work?
I worked from age 12 in my dad's convenience stores until age 19 -- this is one of the exceptions, that working for family is ok. Had I not worked for those 7 years, I would not have been successful in my first company, which I started at age 19 while in college. And the thing is that so many children were not allowed to work like I was. They were stripped of their opportunity to learn. So this law acted as a discriminatory barrier for them -- it gave me rights that they were not given. So it increased my slice of the pie while necessarily decreasing the size of the whole pie. This is a parochial mistake. It's analogous to the situation where America championed equality under the law *for all*, while still supporting slavery. This is anti-liberalism. And it's time for us to expand this great tradition to the last group of people that have so far been excluded -- children!
-----

Liberalism - Liberalism in economics - Liberalism in parenting - CRC (international law on child labor)


Join the discussion group or email comments to rombomb@gmail.com

Saturday, October 5, 2013

Child custody battles


Why do people see child custody cases as "battles"?

It's because these parents are fighting each other for custody of their kids. But it doesn't have to be that way. In fact it shouldn't be that way. It's wrong on so many levels.

Oil painting by Ragod Rustom
For one, it's wrong in that it ignores the preferences of the children. Surely the children don't want their parents fighting -- especially not about them -- it's how kids learn to feel guilty about their parents' fighting and breakup. It's as if the parents are using their children to hurt each other, all while hurting their children as collateral damage.

Parents should be thinking about their children more than themselves, because the adults can take care of themselves for the most part, while the children are caught in the middle because they still need their parents' help -- and their parents are still responsible for helping them become independent.

Children should have a say in their lives

More importantly, the children should have a say in how they live, whom they live with, where they live, etc. If a child wants to live with his mother, and if mother wants that too, then they should live together. It doesn't matter what the father wants in this. The child chose. Done deal.

Now here's where things get hairy. Most people will think that I'm advocating whim-worship. But I'm not. Whim-worship is evil. Let me explain.

What is whim-worship?

Whim-worship is the antithesis of reason. It is evil because in conflicts of interest between people, if one person resorts to violence to resolve the conflict, it's because he acted by whim instead of by reason. Resolving conflicts rationally can only be done by reason. And as long as there are whim-worshippers in the world, there will be evil. When the last whim-worshipper learns the error of his ways and stops acting by whim, evil will have been eradicated.

Can children reason like adults?

Most people think that children can't reason like adults do. They think that a child can't do better than whims. But that doesn't make sense. Anybody can learn to improve his thinking. Anybody can learn the value of a reasoned preference over a whim. Anybody can learn to recognize the difference between a whim and a reasoned preference. Even children -- especially children.

Why did I say especially children? It's because children are more rational than adults in an important way. Adults have far more irrationalities than children do. And it's the irrationalities that cause people to ignore their problems. Some people think that having (or admitting that a person has) irrationalities gives that person a pass. But this is wrong. People are responsible for fixing their irrationalities. And they are responsible for their actions even if those actions were caused by their irrationalities. Blaming one's irrationalities does not absolve him of the crimes he commits because of them.

If you can't imagine a concrete example of what I mean by an irrationality, consider this one. Some adults feel attacked when their ideas are criticized -- they learned this from being raised by parents that shamed them to make them feel bad (dirty looks, insults, yelling, emotional or physical punishment). And they also learned to react to criticism by ignoring it, blocking it out of their mind, so that the bad feeling stops. So they've created a conditioned response -- a trigger. And young children haven't yet conditioned themselves to do this. It's because their parents, teachers, and friends haven't yet done the shame/punishment thing to them a lot.

You still think I'm wrong about children? Maybe you remember situations where you failed to persuade your child of something, and your child got upset. And you think this means that children are irrational. Well, you're wrong. What you did was resolve the conflict by coercing your child, which means it was resolved against his will. So you resolved the conflict with coercion -- like violence -- instead of reason. Why did I lump violence in with coercion? It's because if the child keeps disagreeing with his parent, at some point the conflict is resolved in one of two ways: either (1) they walk away from each other, or (2) the parent resorts to physical violence against his child. Now most parents don't go as far as violence, because their kid's will breaks before that. So violence is averted -- but not because of the benevolence of the parent; it's averted in spite of the evil of the parent. And what's worse is that it's these situations that cause children to develop irrationalities.

Walking away from each other is a rational way to resolve a conflict. Resorting to physical violence is an irrational way to resolve a conflict -- it's whim-worship. So what actually happened is that the parent was acting irrationally, not the child. The parent got it backwards, and it's because of his own irrationalities that he gets this wrong.

A common thinking mistake that leads to acting on whims goes like this: "I know way more than my child does, so in any given conflict between me and him, the rational approach is to side with my way, since I know more." But this doesn't work. It means judging ideas by authority instead of by merit. It means adjudicating between rival theories by working out which theory was created by the guy with the most knowledge. And this doesn't work, because the guy with more knowledge can make mistakes just like the guy with less knowledge -- everybody makes mistakes. So it's possible for the less-knowledgeable guy to be right even when the more-knowledgeable guy disagrees with him. Working out who knows more is not a rational way to resolve a conflict of interest. It's whim-worship. It's anti-reason.

All people are fallible

Still unsure about the idea that children can reason like adults? Well consider this. All people are fallible -- that means everybody makes mistakes. It means that each of us has flawed knowledge of:
(1) reality, which is about how the physical world works, 
(2) morality, which is about how people should act, and 
(3) epistemology, which is about how knowledge is created.
And acting rationally requires knowledge about how knowledge is created. The thing is, everyone applies such knowledge, not just adults. How do I know? Well, how do you think children learn a language? Do you think raw sensory data is physically written into their brains the way our computers physically write 0s and 1s onto their hard drives? It's not. What happens is that babies actively think about things, create concepts of them, and attach labels to those concepts, which we call "words". And as they create concepts, they create their own (mostly inexplicit) epistemology, which they then use to understand the world. And everybody has an imperfect epistemology, which means we all make mistakes when we create knowledge.

Further, because we're fallible, it's impossible to know in advance which of my ideas will be found to be wrong in the future. I can't predict future criticisms -- that would be predicting future knowledge-creation, which means knowing something before you've learned it: a contradiction, so it's impossible. It would mean prejudging a case before learning the facts of that case. No good judge does this, so no good parent should either. You should never, under any circumstances, prejudge a case before learning its facts. It's pure evil. It's what leads to violence in all its forms, including war.

How do I know if I'm wrong?

Just because I feel I'm right about a theory of mine, that doesn't mean I should act like I'm right in the face of an outstanding criticism of my theory. Having an outstanding criticism means that my theory is refuted -- i.e., treated as false until further notice. To be clear, an outstanding criticism is one that doesn't itself have an outstanding criticism.

So do you agree that children can reason like adults do? If not, then what is your counter-argument to my argument that they can? If you don't have a counter-argument, and you still disagree, then your disagreement is a whim. Which raises the question: Aren't you part of the problem instead of being part of the solution?

If you want to improve your skill at being part of the solution, start by learning better how to do conflict resolution with some advanced epistemology.

If you're still not sure, read this essay, which covers a lot more details about parenting.

-----

Check out my other essays on parenting related topics:
How should parents raise children? 
The Nature of Man
Telling people what to do doesn't work 
Why curious children become scared adults 
Relationship between psychopathy and Autism/Aspergers
How do children learn how to act? 
Parental respect 
Traditions vs New ideas 
Lying and responsibility 
Golden rule vs Platinum rule vs CPF 
Why should people collaborate?
Psychology 
------
Join the discussion group or email comments to rombomb@gmail.com

Epistemology: How learning works

Epistemology is about how learning works, how knowledge is created, how problems get solved. These all mean the same thing.

Most people don't pay much attention to the way they learn. This is partly because they went to school and accepted the school's model of learning without questioning it. Students are given material to learn and tests to take, and they are left to figure out on their own how to learn the material. Some kids figure out a simple way, which is to memorize the facts and regurgitate them on the test. And this "works" for many years, until at some point school becomes more difficult and memorizing doesn't "work" well anymore. The amount of material becomes overwhelming and the kid starts to get mixed up. Teachers also start putting problems on the tests that were not covered in class or in homework, and the student is left not knowing how to create solutions to the new problems.

To explain why this memorizing method doesn't work, consider what it means to memorize facts. A fact is a solution to a problem. On the test, the teacher gives the student a problem, and to solve it, the student plans to recall the memorized solution that goes with that problem. There are a few major flaws with this memorizing method:
(1) If you have only memorized a few sets of problems and solutions, then when you are presented with new problems, you won't know how to create a solution, since that's not how you were doing it before.
(2) How do you know you solved the problem correctly? In other words, how do you know that the solution you recalled is in fact the solution to the problem you've encountered? How will you check that you got it right? If you've only memorized facts, you won't know what to do.
(3) How do you know you understood the problem correctly? In other words, how do you know that your interpretation of the problem is the correct one? If you've only memorized facts, you won't know how to interpret the problem explicitly, and instead you'll be doing it by first impression.

The right way to learn is by reason, which is a creative and critical process. When you are thinking of learning something, ask why it matters, how it could be used, and what its purpose is. Ask yourself: Why should I learn this? What problem does this idea solve? Then get answers -- from other people, from internet searches, and by making guesses yourself. Then criticize those answers -- in other words, try to find out whether these answers are wrong, and why. Do your own criticism, and ask others for criticism too. None of this involves memorizing. All of it is a creative process.

To explain why this reasoning method works better than the memorizing method, consider what it means to reason. It's a creative process -- you create (new) solutions to (new) problems. And it's a critical process -- you try to find out if your solution is wrong. So, if you're taking a test that gives you a problem that you didn't encounter in class or in the homework, you're ok because you know how to create a solution since that's what you were doing in class and in the homework.

To be clear, you not only use criticism on your solution, but also on your interpretation of the problem. By that I mean that when you read a problem, you shouldn't assume that you understood it. People are fallible, which means that anything we do can be mistaken, so we should check for mistakes at every step, including the step of interpreting the meaning of the problem. And this interpretation step is a creative and critical process, just like the process of creating a solution to a problem. Actually, interpreting a problem is itself creating a solution to a problem -- namely, the problem of what the correct interpretation of this problem is.

Now let's take it a step further. The problem itself could be wrong. A person created it, and since people are fallible, anything they create could be wrong. So it's important to check if the problem is wrong. Now this gets tricky when you are criticizing a problem and criticizing your interpretation of the problem, so it takes some getting used to in order to keep from conflating the two. This gets even more tricky when you are discussing this with another person since you have the problem, your interpretation of the problem, and his interpretation of the problem, and you both are criticizing all three, so it's important to keep track of what you're criticizing.


----------------------------------------------------------------------------


Here are some essays that explain epistemology from different angles. These are by the philosopher Elliot Temple. Note: I've put these essays in a deliberate order, so I suggest reading them in that order.

Here are some of my essays on epistemology:

Here are some related essays: