By Rami Rustom
Webinar & Prezi presentation
- Webinar hosted by TOCICO. To be recorded soon.
- Prezi presentation used in the webinar: Improving TOC Using the Scientific Approach. Draft.
Preface
The purpose of this article is to explain my proposal to improve Theory of Constraints (TOC), founded by Eli Goldratt.
This article is a continuation of another article which gives more of the background history and is for a lay-person audience.
To be clear, I’m confident that if Eli read this article, he would say that all of the ideas fall into one of three categories: (1) things that he tried to communicate too but with different wording, (2) things he already knew explicitly but didn’t explain, and (3) things he knew intuitionally but not explicitly. And to clarify category 3, almost all of these ideas came from other thinkers, rather than from me.
In other words, this article is a reorganization of existing knowledge. But why do we need a reorganization? Here’s a summary of why (for details, see this reference material): As Albert Einstein said, “The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.” In my estimation, current business management practices, and more specifically the practices of people who know something about TOC, including the educational practices that teach TOC to the next generation, are not good enough. The current situation results in confusions about TOC that cause stagnation instead of progress, and the only way to improve the situation is to improve our thinking. This means that people’s current understanding of the scientific approach, and its application to business management, needs to be improved.
Note that this reorganization is not the end. People will find areas of potential improvement in this article, paving the way for a better reorganization, one that does better at causing progress.
Note also that this reorganization is partially dependent on the current environment, factoring in people’s current misconceptions. In the far future, our common misconceptions will be different from today’s, and so a proper reorganization of this knowledge will stress different aspects of the scientific approach than what I’ve chosen for this article.
Feedback and further error-correction
I recommend that anyone who has questions, criticisms, doubts, suggestions for improvements, or whatever, contact me so that we may learn from each other, thus improving my knowledge and yours. And since the goal should be that we all learn from each other, I prefer public discussion instead of private discussion so that others can contribute their ideas and also learn from our ideas. For this reason, I recommend that you post your ideas/questions to the r/TheoryOfConstraints subreddit, and I recommend that you tag me [link] so that I know that you’re asking me specifically. Or if your post is not related to TOC or business/organizations, then post to my subreddit r/LoveAndReason, and again, tag me please.
Table of Contents
What is the scientific approach?
1. Fallibility
2. Optimism
3. Non-contradiction / Harmony
4. Goals/Problems/Purpose
5. Conflict-resolution
6. Evolution
7. The role of criticism
8. The role of tradition
9. Modeling
10. Harmony between people
What is the scientific approach?
The scientific approach traces back to the pre-Socratics of Ancient Greece.
Over the past 2,400+ years, people improved this body of knowledge by building on past ideas, including correcting errors in them.
This body of knowledge leaked into practically all fields of human endeavor, while also improving within many of the fields. One such field was business management.
Henry Ford, founder of Ford Motor Company, used the scientific approach to model his business in order to drive progress. Taiichi Ohno, father of the Toyota Production System and creator of Lean manufacturing, also used the scientific approach (which included building on Ford’s ideas). And since then, Eli Goldratt, and a few others after him, continued this endeavor to advance the field of business management.
Below are 10 principles and their associated methods describing the scientific approach. I include many concrete/practical examples designed to illustrate how these ideas apply in business management.
Note that the principles and methods do not stand alone; they are all connected. Try to think of every part (each principle, each method) as being connected to the rest of the system (the scientific approach).
1. Fallibility
Fallibility was one of the earliest concepts that started the wave of ideas that became known as the scientific approach.
Fallibility says that we’re not perfect. Our knowledge has flaws, even when we’re not currently aware of those flaws. Socrates said, “I know that I know nothing.” What he meant was, “I [fallibly] know that I [infallibly] know nothing.”
This is what Eli Goldratt meant when he said “Never say you know.” What Eli meant was, “Never say you [infallibly] know.” In other words, never say that you can’t possibly be wrong. Never say that you’re absolutely sure or certain. Never say that your idea is guaranteed to work. Never say that your idea doesn’t have room for improvement. Never say that there can’t be another competing idea (that either exists now or doesn’t exist yet) that is better than your own.
All of our knowledge is an approximation of the perfect truth. There’s always a deviation, an error, between our theories and the reality that our theories are intended to approximate. Our pursuit is to iteratively close the gap between our theories and reality.
Since Ancient Greece, people have asked the question: How can I be absolutely certain that my idea is right? In other words, how can I be absolutely sure that my idea is right before I act on it? This question reveals the motivation behind it: the search for certainty. But the question is confused because, as Karl Popper explained, certainty is impossible [see his book In Search of a Better World: Lectures and Essays from Thirty Years]. Popper explained that we should instead be asking: How can I operate in a world where certainty is impossible? Popper’s view was that we should replace the search for certainty with the search for explanatory theories and error-correction. Popper was on the right track, but his framing is a bit misleading. Searching for explanatory theories and error-correction is a means to an end, not an end in itself. The end goal should be conclusivity. So we should replace the search for certainty with the search for conclusivity. This is an improvement made by Elliot Temple [link]. This understanding has not spread widely. People today still want certainty, so they don’t know how to operate in an uncertain world, and this of course has damaging effects.
Consider what happens when a person tries to communicate an idea without this fallibility concept in mind throughout the process. He will say something and assume that the other person understood it without the possibility of error in transmission. If he instead recognized that there’s error in transmission, he would not assume that the other person understood. He would instead do things like ask the other person to explain the idea in his own words so that the first person could check it for misunderstandings. And he would be ready to clarify for the other person, with no frustration whatsoever, if they indicated that they didn’t understand (e.g. they had a confused look on their face). This is how to deal with the inherent uncertainty in the transmission of an idea from one person’s mind to another. The goal is to reduce the error in the transmission process such that the outcome is good enough for our current purposes.
One of the TOC experts, Eli Schragenheim, explained this problem of people not accounting for uncertainty. In his article The special role of common and expected uncertainty for management, Eli explained: “The use of just ONE number forecasts in most management reports demonstrates how managers pretend ‘knowing’ what the future should be, ignoring the expected spread around the average. When the forecast is not achieved it is the fault of employees who failed, and this is many times a distortion of what truly happened. Once the employees learn the lesson they know to maneuver the forecast to secure their performance. The organization loses from that behavior.” In my understanding, the core of the problem is that these people do not understand one of the most fundamental aspects of the scientific approach. Fallibility is not yet thoroughly ingrained into every part of their mind – intellect and intuition/emotion. If it were, they would know how to operate with the inherent uncertainty of our knowledge. They would know to use forecasts with error estimates. And they would not rush to blame employees for not meeting forecasts and instead would use the scientific approach to discover the root causes of the events they’re experiencing. Usually the root causes are systemic phenomena related to the policies and culture of the entire company, not just the one employee, and usually it goes all the way up to the top, the CEO and the board of directors.
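The difference between a one-number forecast and a forecast with an error estimate can be sketched in a few lines of code (the sales history below is hypothetical, purely for illustration):

```python
import statistics

# Hypothetical monthly sales history (illustrative numbers, not from the article).
history = [95, 110, 102, 88, 120, 105, 98, 112]

mean = statistics.mean(history)     # 103.75
spread = statistics.stdev(history)  # ~10.2

# A one-number forecast pretends the expected spread doesn't exist:
one_number_forecast = round(mean)

# A forecast with an error estimate makes the expected spread explicit:
low, high = mean - 2 * spread, mean + 2 * spread

print(f"One-number forecast: {one_number_forecast}")
print(f"Forecast with spread: {low:.0f} to {high:.0f}")
```

A manager working from the range, rather than the single number, has no basis for blaming an employee when an outcome lands anywhere inside the expected spread.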
The concept of fallibility led to a few other things that spread pretty widely. Consider the concepts known as “benefit of the doubt” and “innocent until proven guilty”. Both of these ideas are commonly used in judicial systems, but they’re also commonly used in social situations, like among friends. But these concepts are not being implemented in the scenario quoted in the above paragraph. Management is not giving employees the benefit of the doubt and they’re not treating them as innocent until proven guilty. Guilt is just assumed.
Another idea that is used in judicial systems, and was born from the concept of fallibility, is the idea of allowing convicted criminals to appeal their convictions. This feature was implemented because people understood that any verdict by a judge/jury could have been incorrect, and a future judge/jury could correct that mistake.
More generally, the policy of allowing appeals is part of a broader thing called “rule of law” – as opposed to “rule of man”. The underlying logic is that citizens should be ruled by the law of the land rather than by specific people. We don’t want the particular biases and blindspots of a few individuals to negatively affect everyone. To avoid that outcome, we created government institutions designed to combat people’s biases by holding the law higher than any individual person. By the same logic, good businesses act the same way. The leaders try to set up rules that everyone in the company is expected to follow, including themselves. A good CEO will not implement new company policies while his team disagrees with those policies – instead, the company has a policy that says “we make company policies in a way where the whole team is onboard”. Notice how this policy does not rely on the ideas of the particular person currently sitting in the CEO seat. If a CEO tried to circumvent this policy, an effective board of directors would recognize the CEO’s grievous error and replace him with someone who respects the rule of law.
2. Optimism
Optimism says that we have the ability to create knowledge, to get closer to the truth, without ever reaching perfection.
It says that all problems are solvable, and all of us have the capability to solve any problem that any other person is capable of solving, the only limit being our current knowledge.
It says that we can always arrive at an idea that we judge to be better than our current idea. We can always make progress.
The laws of nature do not prevent us from making progress, and we can do anything, literally anything, except break the laws of nature (I mean the actual laws of nature, not our current flawed theories about nature). This was explained in the book The Beginning of Infinity, by David Deutsch.
An important aspect of this is that when a person is doing something wrong, it is caused by a lack of knowledge. Better knowledge would change their mind, and thus, their actions.
My explicit understanding of optimism comes mainly from David Deutsch, but many giants before him had similar ideas, namely Karl Popper [link]. Eli Goldratt called this idea Inherent Potential, known as the 4th TOC Pillar [link]. Notice that fallibility (the above section) and optimism (this section) are combined into one TOC pillar.
Consider the consequences of a lack of optimism in a person. He will not put in the effort to try to solve a problem that, in his view, he’s incapable of solving. This manifests as a lack of curiosity, and even a lack of honesty.
An optimistic person is someone who intuitively knows to think, act, and feel in all the ways that manifest as living optimistically. Optimism is developed by a long chain of successful learning experiences spanning one’s entire lifetime.
Karl Popper expressed it better than I can: “I think that there is only one way to science – or to philosophy, for that matter: to meet a problem, to see its beauty and fall in love with it; to get married to it, and to live with it happily, till death do you part - unless you should meet another and even more fascinating problem, or unless, indeed, you should obtain a solution. But even if you obtain a solution, you may then discover, to your delight, the existence of a whole family of enchanting though perhaps difficult problem children for whose welfare you may work, with a purpose, to the end of your days.” (Realism and the Aim of Science)
3. Non-contradiction / Harmony
One of the ideas that was fleshed out from the above line of thinking is that there are no contradictions in reality. Reality is harmonious with itself. No law of nature can contradict another law of nature. If there is a contradiction between our theories about nature, that implies a mistake in our theories, not a contradiction in nature. This is known as Inherent Consistency/Harmony, the 2nd TOC pillar [link].
This is why people came up with the idea that if there is a god, there must be only one god. If there were many gods, that implies that there could be contradictions between them, and that doesn’t make any sense. This is another way of saying that knowledge is objective – there’s only one truth, only one true answer for any sufficiently non-ambiguous question.
One way that this non-contradiction idea manifests itself is in how people in the hard-sciences judge empirical theories. An empirical theory is a theory that makes empirical predictions. If the predictions contradict reality, that implies that there’s a mistake we made, either in the theory, or in our interpretation of the empirical evidence, or somewhere else. Scientific experiments are designed to expose this kind of contradiction. This idea goes back to Ancient Greece. The idea was as follows: we should check our empirical theories to see that they agree with the reality that they supposedly represent, and reject the ones that don’t agree.
Another way that this non-contradiction idea manifests itself is in how people judge any kind of theory, empirical or non-empirical. We seek out theories that are consistent with themselves, meaning that they do not have any internal contradictions. A single contradiction in a theory implies that the theory is wrong. But we must always be aware of the ever-present possibility that our judgement that there is a contradiction in a theory could itself be wrong.
4. Goals/Problems/Purpose
Every idea has a purpose, a goal, a problem that it's intended to solve. And one of the ways to judge an idea is to check that it actually achieves that purpose/goal, in other words, that it actually solves the problem that it’s intended to solve.
In physics, the purpose of a physics theory is to explain a part of reality to the best of our knowledge. The part of reality that physics deals with is phenomena that do not factor in things like human emotions or decision-making. In business, the purpose of a business theory is to explain an entire organization and how it interacts with other organizations. This kind of theory necessitates factoring in things about the human mind. For example, we must have a working model of the human mind that explains the relationship between emotion and logic and their roles in decision-making.
Another difference between physics and business is that in physics we don’t usually have time constraints as part of the goal, while in business, time constraints are all over the place. One case where we did have time constraints in physics was when the US was creating the atomic bomb with the aim of ending the second world war.
But to be clear, physicists use time-saving methods all the time even when there isn’t a time constraint built into the overarching goal of the physics research. Physicists use heuristics/shortcuts/rules of thumb a lot. It’s done as a convenience. They would rather not do complicated math when they could instead use a shortcut that is good enough for their current purposes.
5. Conflict-resolution
The scientific approach is a process of resolving conflicts between ideas. We look for conflicts between our theories and reality, between our theories, and within our theories, and then we try to create new theories that do not have the conflicts that we saw in the earlier theories.
Any problem or goal can be expressed as a conflict between ideas. And the solution can be expressed as a new idea that resolves the conflict.
Eli Goldratt created the “Evaporating Cloud” method for resolving conflicts of ideas [link]. The method hinges on the idea that there is a hidden, and false, assumption underlying at least one of the conflicting ideas, and that revealing it (and putting it in the cloud diagram) helps us see that it’s mistaken, thus bringing us one step closer to finding a solution. Part of the process involves identifying the goals of the ideas in conflict, including the shared goals that all the involved parties have.
Consider an example of a toddler turning over a cereal box on the kitchen floor and his parent says “no you can’t do that”, then he takes the box away, leads the child out of the kitchen to his room to play with his toys, and cleans the mess up. Suppose the parent is trying to cook and needs the kitchen floor to not have cereal everywhere. This indicates a conflict. The parent’s general preference for not having a mess on the kitchen floor while cooking is reasonable, but their approach to dealing with the child in this specific instance is not reasonable. The parent is not being helpful and is not even trying to understand what the child is trying to accomplish. The parent is not trying to resolve the conflict. He’s not trying to create mutual understanding and mutual agreement. He’s not trying to find a solution that everyone involved is happy with. Now it could be that the child wants to see what happens to the cereal when he flips the box over - he’s trying to learn - and let’s suppose that the parent wants his child to learn too. So this is a goal that both the parent and child share. Now consider that if the parent had instead put some creativity toward understanding the child’s goal, he would have the opportunity to figure it out, and if he succeeded, he’d put some more creativity toward proposing a new way to solve the same problem (achieve the same goal), such that the parent and the child would be ok with the new proposal. For example, the parent could suggest flipping over the cereal box in a bathtub or box instead of the kitchen floor. Notice how the Evaporating Cloud method applies here; the parent tries to expose the child’s goal of flipping over the cereal box (to discover what happens when a cereal box is turned over), and the underlying goals that they both share (both parent and child want the child to learn), in order to find a solution that resolves the conflict.
6. Evolution
Karl Popper made a revolutionary discovery that corrected a ~2,300-year-old mistake made by Aristotle, a mistake that almost everybody after him had been misled by (some people also partially recreate it independently). Even many scientists have been misled. The body of knowledge that was built on top of Aristotle’s mistake became known as Justified True Belief (also known as foundationalism). In short, the idea is that we must provide positive support for our ideas in order to consider them good enough to act on – in order to consider them knowledge.
Popper’s discovery identified why this is wrong and he explained the correction. He figured this out while studying how scientists over the centuries had created their theories. He was trying to figure out the core difference between science and pseudo-science. He discovered that knowledge-creation is an evolutionary process of guesses and criticism with the same logic as genetic evolution [see his book Conjectures and Refutations]. We make guesses about the world and rule out the bad ones with criticism, leaving some guesses to survive, while also setting the stage for another round of new guesses.
In genetic evolution, gene variants are created, and the “unfit” genes are not replicated (or replicate less than their competitors), resulting in those genes eventually ceasing to exist in the gene pool. In idea evolution (or memetic evolution), meme variants are created, and the “unfit” memes are not replicated (or replicate less than their competitors), resulting in those memes eventually ceasing to exist in the meme pool.
Like with genetic evolution, the new memes (guesses) are descendants of older ones such that the descendants do not have the flaws pointed out by the criticisms of the ancestors.
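The guesses-and-criticism process described above has the same logic as an evolutionary loop, which can be sketched on a toy problem (the problem, finding an integer whose square is as close as possible to 2000, is hypothetical; the “criticism” here is the deviation of a guess’s prediction from reality):

```python
import random

random.seed(0)

# A minimal sketch of Popper's guesses-and-criticism as an evolutionary loop.
def error(guess):
    # The criticism: how far does the guess's prediction deviate from reality?
    return abs(guess * guess - 2000)

# Conjecture: an initial population of guesses.
population = [random.randint(0, 100) for _ in range(10)]

for generation in range(200):
    # Variation: each surviving guess spawns a slightly mutated descendant.
    variants = population + [max(0, g + random.choice([-2, -1, 1, 2]))
                             for g in population]
    # Criticism/selection: the worse guesses are eliminated; the rest survive.
    population = sorted(variants, key=error)[:10]

best = min(population, key=error)  # converges to 45 (45*45 = 2025, error 25)
```

The descendants that survive each round are exactly those that do not have the flaws (large errors) that eliminated their rivals, mirroring how new guesses should avoid the flaws pointed out by criticisms of their ancestors.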
Aristotle’s mistake manifests in many ways. Here are two examples:
There’s a super common thing people do in decision-making where they want something, usually based in emotion, and then they try to create “rational” arguments to provide positive support to their idea. But this is not rational. It’s pseudo-science. Notice how this method ignores whether or not a conclusive state has been reached. The correct way to arrive at a decision is to try to criticize our ideas in search of conclusivity, in search of ideas that we do not see any flaws in. It includes considering all known criticisms of the idea and all known competing ideas, and it includes searching for new criticisms and competing ideas.
Another common thing people do in decision-making is they try to use statistical methods to judge the probability that a theory is true, while ignoring whether or not a conclusive state has been reached – where one theory refutes all of its rivals. These people are trying to provide positive support to their ideas instead of criticizing them. Note that all the work on AGI, as far as I know, hinges on this false premise that Aristotle created, and so without the correction by Popper, the efforts to create AGI, in the sense of creating a software that replicates human intelligence, will fail [see David Deutsch’s article Creative Blocks].
7. The role of criticism
So the scientific approach is a series of guesses and criticisms.
One of the roles of criticism is that it helps us improve our guesses. In this sense, criticism is positive because it provides opportunities to improve. Another role of criticism is that it helps us reject our bad guesses. And when we combine both roles together, what we get is a two-in-one: (1) reject our bad guesses, and then (2) make better guesses that do not have the flaws pointed out by the criticisms of the earlier bad guesses.
But what exactly is criticism, and how can we identify it? Here are three equivalent definitions that focus on different aspects:
A criticism is an idea which explains a flaw in another idea.
A criticism is an idea which explains why another idea fails to solve the problem that it’s intended to solve.
A criticism is an idea which explains why another idea fails to achieve its goal.
Note that in any given situation where you have a criticism, you should be able to translate between these definitions.
There are some types of criticism that deserve attention:
Criticisms directed at one person’s understanding of another person’s ideas: We should criticize our own interpretations of other people’s ideas before attempting to criticize their ideas. We could also get help from the other person with our goal. This is the same as saying that we should understand an idea before criticizing it.
Criticisms of positive ideas/arguments/etc: A criticism of a positive idea or argument should explain how the positive idea/argument fails to achieve its goal.
Criticisms that are directed at other criticisms: One example is: Suppose someone says that an idea has a flaw. So I ask, does this flaw prevent the goal from being achieved? If they say ‘no’, then it’s not a flaw. Check the 3 definitions of criticism above. You have to be able to translate between them. A flaw in an idea implies that the idea cannot achieve its goal.
Criticisms that incorporate empirical data: A piece of empirical data alone cannot constitute a criticism. In other words, it cannot constitute evidence. The data must be interpreted in the light of a theory that explains how to interpret the empirical data. The criticism (or evidence) is the interpretation of the empirical data. Data without interpretation is meaningless.
Vagueness or ambiguity as a criticism
The quality of being vague or ambiguous can be used as a criticism:
Your sentence (idea) is vague. I can’t make an interpretation that I think represents your idea.
Your sentence (idea) is ambiguous. I can think of many possible interpretations that match your sentence and I don’t know which one you intended.
This scenario can be improved by the first person explaining his idea in more detail, enough to satisfy the person who found the initial idea vague or ambiguous. And of course the second person can prompt the first by asking clarifying questions to better understand the idea that he initially found to be vague/ambiguous.
Note that one of the unstated goals of the idea being criticized is that it should be understood by the other person. This is the case in most contexts. But in the case that this wasn’t one of the idea’s goals, then the quality of being vague or ambiguous isn’t a flaw, and the criticism is mistaken. We could criticize that criticism with: “No, vagueness/ambiguity isn’t a flaw because one of the goals of my idea was that people would not be able to understand it. So it’s a feature, not a bug.”
What if people complained that an idea needs more clarity without end? They’d be making a mistake because more clarity isn’t always better. As Eli Goldratt explained, more is better only at a bottleneck; more is worse when it’s at a non-bottleneck. If vagueness is a current bottleneck, then more clarity helps. If vagueness is not a current bottleneck, then putting more effort into clarity makes things worse (because you’re spending your time on things that won’t cause progress instead of spending your time on things that will cause progress).
In other words, if the current level of clarity of an idea is not enough to prevent the idea from achieving its goal, then there’s no reason to put more effort into making the idea clearer. We should only put in more effort if the current level of clarity of the idea is causing us to not achieve a goal.
So to reiterate, vagueness or ambiguity are only flaws if the vagueness or ambiguity of the idea is causing the idea to fail to achieve its goal.
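The bottleneck logic above can be sketched numerically (the stations and their capacities, in units per hour, are made up for illustration):

```python
# Goldratt's point: system throughput is set by the bottleneck, so extra
# capacity only helps when it is added at the bottleneck.

def throughput(capacities):
    # A serial production line can flow no faster than its slowest station.
    return min(capacities)

line = {"cutting": 50, "welding": 30, "painting": 45}  # welding is the bottleneck

base = throughput(line.values())  # 30 units/hour

# "More" at a non-bottleneck changes nothing (wasted effort):
assert throughput(dict(line, painting=60).values()) == base

# "More" at the bottleneck raises the whole system's throughput:
assert throughput(dict(line, welding=40).values()) == 40
```

The same applies to clarity: extra effort spent clarifying an idea that is already clear enough is capacity added at a non-bottleneck.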
Criticism breaks symmetry
Sometimes our criticisms are not clearly stated, but the facts (the “state of the debate”) imply one or more criticisms, which should then be stated and incorporated into the knowledge-creation process.
Consider the example where we have two rival theories (T1 and T2) and each of them fails to explain why it’s better than its rival. This implies two criticisms: (1) T1 fails to explain why it’s better than T2, and (2) T2 fails to explain why it’s better than T1.
In this scenario, there is symmetry between the rival theories, and what we need is asymmetry. So at this point both theories should be rejected. And the way to resolve the conflict is to find a way to differentiate them with a criticism that points out a flaw in one of them but not the other. This could come in the form of a new feature of one of the theories which the rival theory does not have. This means that we have a new theory, T3. Not having a crucial feature is a flaw, if the context is that another rival theory does have that feature. This breaks the symmetry between the initial theories and now we can adopt T3.
This raises the question of how to determine whether or not two theories are in fact rivals. The way to judge that is to investigate the goals of the rival theories in order to check that they’re intended to achieve the same goal. If so, they are rivals, otherwise, they are not rivals. And again, our judgement that two theories are rivals, or not, is a fallible judgement, meaning that we could be wrong about that judgement.
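The symmetry-breaking idea can be sketched in code (the theories, the “crucial feature”, and the criticism below are all hypothetical):

```python
# A minimal sketch of "criticism breaks symmetry": a theory is adopted only
# when criticism refutes every rival, leaving exactly one survivor.

def adopt(theories, criticisms):
    # A theory survives if no known criticism applies to it.
    survivors = [t for t in theories if not any(c(t) for c in criticisms)]
    # Zero or many survivors = symmetry = no conclusive choice yet.
    return survivors[0] if len(survivors) == 1 else None

t1 = {"name": "T1", "features": set()}
t2 = {"name": "T2", "features": set()}
t3 = {"name": "T3", "features": {"explains_the_anomaly"}}

# Criticism: lacking a crucial feature that a rival has is a flaw.
lacks_crucial_feature = lambda t: "explains_the_anomaly" not in t["features"]

assert adopt([t1, t2], [lacks_crucial_feature]) is None          # symmetric: reject both
assert adopt([t1, t2, t3], [lacks_crucial_feature])["name"] == "T3"  # asymmetry broken
```

Note how, with only T1 and T2 on the table, the function returns no winner at all, matching the text: symmetric rivals should both be rejected rather than "weighed" against each other.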
Criticism can be hidden in suggestions
People often encounter criticism in the form of suggestions without realizing that it is criticism. Consider the scenario where somebody suggests that you do something other than what you’re currently doing. That implies a criticism: he’s saying that your idea does not work and his suggestion does, or that his suggestion works better than your idea.
The suggestion may come with an explanation, but even if it doesn’t, the person may have the explanation ready to provide to you if you prompt him for it. Or if he doesn’t have that either, he may have an intuition underlying the suggestion, and he could convert that intuition into an explanation.
It’s your job, if you choose to accept it, to put effort toward understanding the criticism. That usually requires that you improve the criticism beyond the original understanding of the person who gave it to you. Since you know more about your situation than he does, you’re in a better position to incorporate his critical advice into your context.
A single criticism is enough to reject a theory – there’s no ‘weighing’ involved
You may have noticed that all of the explanations in this document imply that just one criticism is enough to reject a theory. There’s no “weighing” involved. We cannot differentiate between rival theories by seeing which one has fewer criticisms against it, or which one has more “positive support”, or some mix of the two. Doing so means accepting a contradiction. It is pseudo-science.
These confusions were born from the mistake Aristotle made (mentioned in the above section), and they have led a lot of people down an incorrect path where they think “weighing” theories is a reasonable way to differentiate between rival theories. They think that “weighing” theories can give us a degree of certainty, allowing us to select the theory with the highest degree of certainty.
More sophisticated, yet still wrong, approaches have been created from this false premise, where people think they can use probabilities to judge the likelihood of rival theories being true, and then select the one that is more likely. It’s all arbitrary nonsense. These people have misunderstood Bayes’ theorem [link], by Thomas Bayes. The theorem is for calculating the probabilities of events occurring given a particular theory, which means that the validity of the probability figures depends on the accuracy of the assumptions of the underlying theory used to calculate them. None of this helps us calculate the probability that a theory is true. These people are trying to pick a “winner” among rival ideas without resolving the conflict between them. Effectively, it is a way to maintain existing conflicts… to avoid conflict-resolution. It is pseudo-science.
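What Bayes’ theorem actually does can be sketched with a quality-control example (all numbers below are hypothetical): it computes the probability of an event given a theory’s assumed figures, and changing the assumptions changes the answer, while the math itself cannot tell us which theory is true.

```python
# Bayes' theorem computes probabilities of events *given* a theory's
# assumptions; change the assumptions and the numbers change with them.

def p_defect_given_positive(prior, true_pos, false_pos):
    # P(defect | positive) = P(positive | defect) * P(defect) / P(positive)
    p_positive = true_pos * prior + false_pos * (1 - prior)
    return true_pos * prior / p_positive

# Theory A assumes a 1% defect rate on the production line:
a = p_defect_given_positive(prior=0.01, true_pos=0.95, false_pos=0.05)  # ~0.16

# Theory B assumes a 10% defect rate; same test, very different figure:
b = p_defect_given_positive(prior=0.10, true_pos=0.95, false_pos=0.05)  # ~0.68

print(f"{a:.2f} vs {b:.2f}")
```

Both figures are internally valid, yet they disagree wildly, because each is conditional on its theory’s assumed defect rate. Nothing in the calculation tells us which assumption is right; that conflict has to be resolved by criticism.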
Reusable criticisms
As we evolve, on both an individual and a collective scale, we continue to create new types of criticism. The ones that work well should then be reused afterward. This means that a person’s set of reusable criticisms grows over time. The same thing is going on with our methods of creating models (see Section #9 Modeling): the methods are reusable, and the set of methods grows over time.
Consider that in judicial systems, namely the US judicial system, which is an extension of the English judicial system going back to the 13th century, judges have created “rules of evidence” that allow them to identify whether a piece of evidence should be admitted or rejected. These rules are reusable criticisms that judges have been cataloging and improving for 800+ years.
In TOC, Eli Goldratt coined the term Categories of Legitimate Reservations (CLRs). The concept is designed to help in the creation of models of cause-and-effect networks. It helps in the error-correction process. CLRs are reusable criticisms [link to TOC CLR page].
How people misunderstand the concept of criticism, and why they dislike it
So many people today dislike criticism because they do not thoroughly understand the role of criticism in all human endeavor. They don’t like being criticized by other people, and this often results in resisting criticism that they receive. It also causes them to resist giving criticism, for fear that it won’t be received well by the would-be receiver (oftentimes projecting their own bad psychology onto the other person). If they instead thoroughly understood the role of criticism in all human endeavor, they would love criticism for what it is. Criticism allows progress; lack of criticism causes stagnation.
One factor that causes people to develop a dislike for criticism is that their parents and the rest of society gave them criticism in a bad way, and they created coping mechanisms to deal with it, thus internalizing the external pressures. So many people criticize without understanding the goals of the ideas being criticized, and they often layer it with anger/shame/dirty looks/raising their voice, which they wouldn’t do if they thoroughly understood the role of criticism. These people are criticizing ideas that they don’t even understand. Criticism of an idea can only be effective if you first understand the idea that you’re criticizing. You can’t point out a flaw in an idea that you don’t even understand yet, but people try to do it all the time.
Consider the example of the toddler turning over a cereal box (details in section #5 Conflict-resolution). The parent is treating the child badly. He’s not trying to resolve the conflict. He’s not trying to find out what the child is trying to accomplish. Instead he just gives a criticism that does not factor in the child’s goal; the parent said, “no you can’t do that [because it gets in the way of my cooking]”. Years of this sort of treatment reliably result in children developing a hate for criticism.
People who dislike criticism tend to see themselves as static beings. They dislike criticism because they (in their view, whether it’s explicit or just intuition) can’t change themselves in order to fix the flaws explained by the criticism. They do not have a thorough understanding that all problems are solvable or that they’re capable of solving any problem. In contrast, the rest of us see ourselves as dynamic beings. We recognize that a person is a system of ideas, and any idea can be changed. We love criticism because we know that we can change ourselves in order to fix the flaws explained by the criticism. Note that for people who believe they can’t change, the belief effectively becomes a self-fulfilling prophecy. If you believe you can’t change, then you won’t put in the effort to change, and thus you won’t change. You may change anyway, but that’s because you inadvertently put in effort without it being part of your life plan.
Another factor causing people to dislike criticism is related to how people view rejection and failure. Consider a new salesperson who gets discouraged because his first attempts to make a sale failed. Sales managers try to correct this bad psychology by explaining that the learning that happens during and after a rejection is what causes progress in the big measurable goals. Consider that astrophysicists looking for earth-like planets do not get discouraged when they point their telescopes at a patch of sky and don’t find what they’re looking for. Consider also that when that happens, they’ve narrowed down the landscape a bit, so they have definitely made some progress with that “rejection/failure”. The point here is that failure is not bad. What is bad is not doing everything we know about how to learn from our failures.
Yet another factor causing people to dislike criticism is their narrow view about progress. When considering the example of the astrophysicist in the above paragraph, most people would not recognize a failure to find an earth-like planet as a unit of progress. They vaguely conceptualize progress as being huge. And they think of progress as only happening if there’s a visible success, like having found an earth-like planet. They don’t know how to recognize our baby steps as progress toward a goal. They don’t recognize that each baby step is an achieved goal that brings us closer to achieving a larger goal.
8. The role of tradition
Another crucial aspect of the scientific approach is tradition. A tradition is an idea (a guess) that has been criticized, and thus improved (because many flaws were fixed), by many people before you. The point of explaining this is that when we are comparing a brand new idea to a tradition, where they are rivals, the tradition has already been improved through a lot of criticism while the brand new idea has not yet been subjected to the same error-correction efforts. Could it be that the new idea is better than the tradition? Yes that’s possible, and this possibility is why knowledge can progress. But the point here is that the brand new idea should be subjected to the same criticisms that the tradition was subjected to, and it must survive them, and it must provide a new criticism of the tradition while the new idea must survive that criticism, all before concluding that the new idea is better than the tradition.
There is a common mindset in the West today that causes people to have a disrespect for tradition. If something is old, they assume it’s bad compared to something new. This is against the scientific approach. It’s pseudo-science. No scientist creates a successful scientific theory in a vacuum where he ignores the previous scientists that did work in the field. Nobody could create knowledge that way. We must stand on the shoulders of the giants that came before us in order to surpass them! We cannot surpass them otherwise. This is what Isaac Newton, and many before him, meant when they said, “If I have seen a little further, it is by standing on the shoulders of giants.”
TOC explains an aspect of this with the article Six Steps to Standing on the Shoulders of Giants, by Lisa Ann Ferguson. Note step 3: “… Gain the historical perspective - understand the giant's solution better than he did.” The giant’s solution is a tradition, and it’s our job to understand that tradition better than he did, so that we can create an idea that is better than the tradition.
This understanding has applicability far beyond just the case of dealing with a giant’s theory and our proposed replacement. Here are three examples:
A child rejects his parents’ ideas without trying his best to understand those ideas. It is arrogance. I fell victim to this. I’ve since corrected it, and now anytime my parents have something to teach me, I listen, and I ask clarifying questions until I finally understand their idea such that I’m able to explain it back to them in a way that they agree matches their version. I often disagree anyway because I see a flaw that they do not see, and other times I agree with them and adopt their view.
A business manager launches an initiative that affects everyone in the company, yet ignores the knowledge of the downline workers. Those workers have traditions related to their work that are not known by management, which is a natural outcome because management personnel are not working alongside the employees. And so the manager’s initiative might be contradicting the knowledge in those traditions, and if that’s the case, that means damaging effects – people working at cross purposes, and fewer goal units achieved.
Regarding conflicts between intuition and intellect, there are two common approaches: (1) side with intellect and ignore intuition, or (2) side with intuition and ignore intellect. The first one is a case of disrespecting tradition (your intuition). The second one is a case of blindly adopting tradition. Both of these are wrong. Note that in both cases, nobody is even attempting to resolve the conflict. Instead they are picking a “winner” despite the conflict not having been resolved. They are picking a side without knowing which side is the better one. They are picking arbitrarily. The proper approach is to integrate our intellect with our intuition so that the conflict vanishes. This could be because we improved our intellect, or improved our intuition, but usually it means that we improved both.
I created a visual to help explain the knowledge-creation process. Note the “library of criticism” (reusable criticism) and “library of solutions”. These are traditions. Note also the “library of refuted ideas”. These are ideas that failed in the past, which can then be reused (with changes).

9. Modeling
One crucial aspect of the scientific approach is modeling.
In all human endeavor, we’re always working with a model of reality, whether we know it or not. We’re never in a situation where we’re dealing directly with reality. We can only “see” reality through the lens of our models.
Either the model one is working with is explicit, which makes it easy to improve, or it is hidden, a set of intuitions incorporating unstated assumptions, which makes it difficult to improve. While human thinking can and does progress in this way, a much better method was provided to us by Isaac Newton.
Cause-and-effect logic