Most of the secular West has recently abandoned the concept of divine instruction, preferring the raw power of the human mind to the Judeo-Christian moral code and its inflexible (unalienable, some might even say) system of human rights. But if we're at all fair, we'll recognize that rationalism, meaning the reliance on mankind's rationality as the sole source of morality and social advancement, has some serious downsides, even by its own standards.
First, rationalism assumes that human beings will eventually reach a greatly advanced (or even perfect) state of social and moral evolution, an optimum harmony arrived at mostly through trial and error. But this stance assumes too much of humanity: namely, that human beings retain all relevant information, or that when they don't, they will at least recognize correct information when they see it. Information matters because it precedes successful decisions; its absence from the human thinking process threatens us not only with error but with social regression. Yet we know that humans aren't omniscient, which is one reason history tends to repeat itself. And beyond the fact that human beings aren't always working with a full toolbox, we can't assume that they will accept useful information even when it's right in front of them. After all, you can lead a horse to water, but you can't make it drink.
Second, while information gives us practical knowledge, wisdom, the understanding of how to apply knowledge in practice, comes almost entirely from experience or from a profound respect for the experienced (the latter not so popular these days). But even the combination of wisdom and knowledge isn't everything we need for rational behavior: rationalists forget that the application of wisdom depends entirely on personal character, because carrying wisdom to its fullest fruition requires that you be strong enough not only to recognize the proper course, but also to carry it out in spite of opposition and laziness. In this light, the benefits of rationalism rest not only on information and the knowledge of how to apply it, but also on human willpower.
So we can safely say that if rationalism is to guide us toward correct decisions and a more advanced civilization, a person must first possess all three traits (perfect knowledge, total wisdom, and rock-solid character) before we can assume he is making a socially progressive, rational decision. Lacking any one of these traits very likely leaves a person behaving irrationally, since a high IQ alone is no guarantee of useful morality or advancement. What we have to ask ourselves, then, is whether we actually believe we possess all these qualities at every given moment. And if we don't, then we have to wonder whether our assumption of human rationality is itself rational.
Third, if we completely abandon a fixed moral and behavioral standard, we must judge the value of behaviors solely by our own standards, and this must be done on a utilitarian/objectivist scale: painful or foolish behaviors that lessen happiness and stability are labeled "immoral," while behaviors that increase happiness and stability are labeled "moral." But rationalism makes the poor assumption that any moral "truths" we reach through the process of rationality will agree with one another, and that two different pain/pleasure perspectives will necessarily agree about the outcomes of a particular moral behavior.
For instance, the man who steals medicine for his wife gains a great benefit from stealing, but the druggist, and society as a whole, will feel threatened. Do we judge by the majority's pleasure, or by the minority's freedom from pain? And how do we know how they feel, anyway? As such, rationalistic utilitarianism is never the source of moral truths: it can only take unalienable rights and apply them to current situations; otherwise morality and progress would simply be matters of preference and persuasion, not things to be argued about or taken seriously.
In light of these necessities, rationalists have some very serious questions to answer:
First, if rationalism, owing to different pain/pleasure perspectives, doesn't necessarily yield agreeing moral standards from the same information, then we must acknowledge that it isn't necessarily taking us in any particular moral direction (what makes the difference between a thief and a Robin Hood?). But if rationalism isn't taking us in a particular moral direction, are we necessarily advancing?
Second, if rationality serves a moral purpose only within the confines of objective moral truths, and objective moral truth cannot be created by the human mind, then is worldview, a person's foundational beliefs about reality, more important than rationality itself, since it supplies the information and pathways within which reason works? And if worldview is a precondition of proper rationality, should we necessarily be interested in total cognitive liberty?
Third, is any economic or moral system that rewards ignorance (lack of knowledge), foolishness (lack of wisdom), and weakness (lack of character) through philanthropy ever morally acceptable to the rationalist, since removing social consequences would theoretically make society less likely to maintain these basic rational values?
Fourth, should “less rational” people be subject to the dictates of rationality, for the cause of advancement? And who determines what “advanced” is, anyway?
Fifth, if we ever do achieve a supremely beneficial value system, one that harmonizes human interaction to its maximum extent, but the process of rationality then supplies the human race with rational arguments against that system, wouldn't the process of rationality itself become dangerous?
Now, some people out there won't appreciate this article, and will consider my position dangerous and backward. If you disagree, please don't be angry. Consider that, like your anger, this article was brought to you by the process of reason.
Editor's note: article two of this series, which questions the rational value of secular idealism and contrasts its functional applicability with natural rights, can be found here.