The Need for Practical Wisdom in the Federal Bureaucracy

With the massive growth of the federal government comes growth in a complex bureaucratic structure that creates multiple layers of administration between government agencies and what they are designed to do. In the 1950s it was relatively easy to begin the interstate highway system–the government was run more simply, and the number of “checkers” was reasonable. These days it takes years, from conception to completion, to build a small limited-access route around a growing city. This is not only an issue of environmental regulation–it is an issue of paperwork, finding the right codes, administrator egos, and too many layers of management.

In addition, any bureaucracy operates on a system of strict rules. In the case of the federal government, these rules are said to be necessary to protect the public from fraud, from unsafe products, from incompetent health care, or from shoddy construction of buildings and roads. Rules are essential to any organization–it would be irrational to deny that. People, left to themselves, are not often an orderly lot, and efficient, competent operation requires rules. However, beyond the rules that are absolutely essential for safety or some other vital value, rules often get in the way of common sense. A needed highway may be delayed by the failure to fill out some obscure paperwork that very few people knew about at the time. People in a local area may realize that what they request is badly needed, but someone in the bureaucracy nixes the request. Often the requests of local people, who know the needs of the communities in which they live, are overridden by someone who has never set foot in a particular community.

The current trend in the federal government seems to be to follow the model of private business and focus on efficiency. Admittedly, the federal government could do a better job of being efficient. However, efficiency should not trump service, and federal workers, from upper management to “ordinary” employees, should be given enough discretion to use practical wisdom in responding to a particular situation. As Aristotle pointed out, practical wisdom has to do with the local and the particular rather than with an overarching universal. It is all too easy for federal officials to get caught up in their abstract language and multiple abbreviations and lose sight of the very people who pay their salaries and whom they are to serve in a caring, responsible way.

Discretion in spending should be broadened. Civil service should be reformed so that seniority does not shield from firing an incompetent person or someone abusing his authority. But there should also be room for dissent and for questioning the decisions of middle and upper management, as long as it is done in a respectful way. For example, suppose a federal employee lives in a community where a new bridge is supposed to be built. The employee knows that the road the bridge would carry could be re-routed so as to avoid the need for a bridge at all–at a cost savings to the community. Higher federal officials say, “Congress appropriated the money for a bridge, and a bridge you shall have.” What would be wrong with local federal employees who know the situation informing their managers, and with those managers going up the chain of command, so that Congress can allow the community to use the money appropriated for the bridge to re-route the road instead? It is not insubordination to question a ruling.
Not following a ruling after a final decision has been made would be wrong–but questioning, when there is good reason to question, should be a right of any American citizen, including one who works for the federal government.

Some government programs work well; most do not. Why not work with those that do not in order to improve them and, if they are not viable, eliminate them? Federal programs, like federal employees, seem to be self-perpetuating no matter how useless or incompetent they are. This demoralizes good employees and empowers the cynical.

Instead of focusing on “Which set of rules must we follow now?” focus on “What is the best thing to do in this particular situation?” The best course of action will depend on the particularities of the situation and will require practical wisdom, learned by experience, rather than a list of rules. This implies good observation and evaluation skills, as well as the skill to creatively find ways to stay within the rules while stretching them to fit the limits of a particular situation. Experienced local officials should be trusted, unless they have proven untrustworthy, to make prudent decisions. Normally, middle and upper management should, if sufficient funds are available, yield to the suggestions of the people who know an area and its problems best. Civil service, designed during the Chester Arthur administration to prevent political favoritism, should not be used to protect the incompetent, the arrogant, and those managers who harm others through laziness in performing their tasks. At the local level, conversations between different units in the workplace should be as open as possible, so that “the right hand knows what the left hand is doing.” Wise decisions are based on the most accurate and thorough information possible. Hopefully federal employees can then go beyond mere rule-following and exercise their discretion.

Accreditation and the Tyranny of the Social Sciences

College and university accreditation and re-accreditation have become a nightmare. Accreditation agencies demand “continuous quality improvement” documented by quantifiable data. Following a model that has wreaked havoc on teachers in the public school system, specific departments, and the university as a whole, must not only form a mission statement but also formulate a series of goals, along with objectives for meeting those goals. The objectives must be measurable in a quantitative way. Some departments require a list of goals and objectives not only for each course, but also for each week of the course. Standardized group final exams are becoming more common in certain fields, such as the physical and social sciences. The comprehensive portion of the final exam may repeat some of the same questions from year to year so that a department can track “improvement” in students’ ability to answer questions covering key goals of the course.

Such a social-science-oriented, quantitative approach to education works neither in the physical sciences nor in the humanities, and I doubt it works in some of the social sciences either. Science involves critical thinking, something that is more than a quantifiable measure and often involves “abduction,” an inference to the best explanation that is as much an art as a science. The “social science approach,” a fortiori, does not work in the humanities. Students must memorize some facts in the humanities, as in any other field, and they can be “objectively” tested over such facts. The humanities, however, are about critical thinking, forming a world view, interacting with the great events and texts of history, and reading Plato, Aristotle, and the other great philosophers who sought wisdom. Wisdom uses knowledge, but it refers to the practical wisdom (prudence, or what Aristotle called phronesis) needed to make the best decision about how to live the good life in a specific situation. A conception of the good life implies a world view, a vision of how all things fit together into a whole. World views are by nature qualitative, not quantitative. They demand weighing different and sometimes contradictory perspectives. That is why it is important in philosophy to allow faculty to use the textbooks and the approach they choose, rather than imposing a “cloned” approach to teaching a course. The trend toward conformity in academia has been accelerated by pressure from aggressive accrediting agencies.

There is a line of thought in the social sciences, also present among some scientists who work in the natural sciences, that nothing is real unless it is quantifiable, including knowledge (I doubt that this line of thought has room for “wisdom”). Many psychologists, especially, take a totally quantitative approach to what they study. As the most conservative of the sciences, psychology tends to fit better into nineteenth-century thought than into twentieth- and twenty-first-century thought. The situation seems like the revenge of Jeremy Bentham‘s often-criticized “hedonic calculus,” which tried to quantify an exact measure of pleasures and pains. The basic idea of quantifying everything has been broadened into the idea that one can operationally define any learning task and test to determine whether students have actually learned. Can Plato’s view of the Forms be operationally defined? What about the significance of World War I in the development of interwar continental philosophy? Can wisdom be operationally defined? What about truth, beauty, and goodness? The accrediting agencies are attempting to destroy what is most valuable about education–growing wiser, gaining a better ability to think critically and make judgments, being exposed to different world views, and having the privilege of discussing differing positions with a professor. To say that qualitative measures are allowed is disingenuous, since even those “must be measurable”–how? There must be a quantitative rating scale. Hopefully college and university faculty will encourage accreditation agencies to re-examine this trend toward a bad social-science model of evaluating educational quality.

Universal Terms and Reality

The problem of universals is one of the oldest problems in philosophy. From Plato and Aristotle, to Boethius, Abelard, and the other medievals, to the modern and contemporary periods, philosophers have discussed whether universal words such as “man,” “dog,” “oak tree,” or “water” name things that exist in themselves, are mere labels for objects we group together for our convenience, or refer not to things but to objective similarities between things. Extreme realism, such as that held by Plato, holds that universal terms such as “dog” refer to the Form “Dog,” which exists in a spaceless, timeless world separate from the empirical world and is known by reason, not by sense experience. The opposite view, extreme nominalism, associated with Foucault and Derrida (although whether this is their actual position can be debated), holds that “dog” refers to whatever animals a given society wishes to group together as “dogs.” There is no essence of dogness, no set of necessary and sufficient conditions for dogness to which the term “dog” refers. Finally, moderate realism, the position of Abelard (perhaps; I think so), St. Thomas Aquinas, and the Blessed Duns Scotus, asserts that universal terms refer to objective similarities between natural kinds of the same type: to a set of, say, necessary and sufficient conditions that make a dog a “dog.”

The trend since William of Occam has been toward conceptualism, such as Occam’s notion of universals as labels that refer to similarities between entities, similarities that have some basis in extramental reality. He sounds like a moderate realist; some interpreters call him a “realistic conceptualist.” In his own day he was interpreted as a nominalist, and whatever his position may have been, philosophers after Occam gravitated toward nominalism. This trend accelerated the split between faith and reason that ended the Medieval synthesis. The end stage of this process is found in Nietzsche’s work, which supported nominalism in the sense that all meanings are culturally constructed and have no objective basis in extramental reality. Contemporary English departments at many universities, especially in the United States, tend toward a radical nominalism and linguistic constructivism in which universal words refer to whatever fits a particular society’s interests.

Even though I agree that meaning is flexible, since I accept the medieval four-fold model of meaning in Biblical interpretation, there remain limits to the scope of meanings a word can have. Meaning occurs in context, and a particular context may increase the number of possible meanings of a term, but it can also narrow or eliminate other possibilities. “I am going to the bank” makes sense if the person saying it adds “to go fishing.” If he says, “I am going to the bank to withdraw money,” that eliminates the other meaning of “bank” as the bank of a river. Natural kind terms clearly refer to entities that are objectively similar. Sure, a beagle does not look like a Rottweiler, but both are carnivores, both bark, they can interbreed, and they have similar genetic codes and similar causal powers. There is no need to posit the existence of “Dogness” in any transcendent world independent of actual dogs. “Dogness” might exist in individual dogs in the sense that it refers to the set of necessary and jointly sufficient conditions that make a dog fall under the universal term “Dog” (or under another term, such as Canis, used in another language).
Thus my own sympathies are with Aristotle’s and Aquinas’ moderate realism: universal terms refer to objective similarities between things that are necessary and jointly sufficient for an entity to be the kind of thing it is. Universal terms may also hint at universal ideas or patterns in the mind of God through which He created the universe and the things in it. This view, dating back to Augustine, was picked up by Medieval philosophers such as Aquinas and did not fade until William of Occam denied it in the fourteenth century. Moderate realism evades the problem of arbitrariness found in postmodernism as well as the over-transcendence of Plato’s world of the Forms. I am hopeful that it will be adopted by philosophers outside the Thomist school, since, as Richard Weaver pointed out in his fine book Ideas Have Consequences, nominalism helped lead to the idea that nature, including human nature, is infinitely malleable by human ingenuity. Realism, whether extreme or moderate, helps to form a stable society in which human nature and nonhuman nature are both respected. Moderate realism avoids the problems of Plato’s doctrine of participation by placing the entity to which a universal term refers “in” the individual substance. Now substance, I believe, following Fr. Norris Clarke, is “substance-as-relation,” so that the intellectual content of the object observed would “seek” (metaphorically speaking) to communicate itself as far as possible, and the observer would strive to receive as much as possible. Through such a joining of information the mind becomes “intentionally one” with the object perceived, and thereby knows it–not exhaustively, but the information it receives is accurate to a degree. If this is the way communication between being and mind takes place, there is no need for transcendent Forms, but there is a need for “forms” with a small “f” to guarantee stable behavior patterns among natural kinds.

How Police Officers Should NOT Treat an Autistic Person

At http://dfw.cbslocal.com/2011/10/03/autistic-mans-acting-odd-lands-him-in-jail/ is a story that reveals how police officers should not treat an autistic person. Police officers have a tough job, and they deal with the worst people in American society. It is not surprising, then, that they are cynical. Sometimes such cynicism is necessary and can save their lives. However, sometimes officers become so cynical that they do not believe what anyone says, even an autistic person who informs them of his condition. If Mr. Blake did tell the officers that he had autism (and Mr. Blake was wearing a medical alert bracelet), then they should have realized that they were not dealing with the usual troublesome drunk. If an officer did call Mr. Blake a liar, and this would not surprise me, then this was unprofessional conduct worthy of disciplinary action. One would think that after the Ryan Moats incident the Dallas police would have more common sense, but apparently these officers did not learn from their disgraced former colleague’s mistakes. Do Dallas police get training in dealing with special classes of people, such as those who have autism, Tourette’s Syndrome, or other medical conditions that can cause behavioral problems? If so, the officers dealing with Mr. Blake apparently ignored their training, and they certainly lacked the virtue of prudence, the ability to adjust to particular circumstances in order to make the correct moral decision. Mr. Blake now sits holed up in his room, afraid a police officer will come after him.

Paul Craig Roberts has claimed that American police are frustrated with not being able to catch the real criminals, so they turn to intimidation and violence against law-abiding citizens or those who are weak and vulnerable. I do not believe that this is generally the case; officers do catch a significant number of criminals who end up being convicted and sentenced to prison. There may be some officers who fall into the class to which Mr. Roberts refers. Part of the problem may be lowered police recruiting standards due to a dearth of qualified applicants. It becomes more difficult to weed out the smart-aleck, power-hungry, adrenaline-seeking officers who cause many of the problems departments face. Funding difficulties may prevent courses on special-needs individuals from being taught to officers, even at large police departments. I am sorry if police who may be reading this think I am being overly harsh; since I have Asperger’s Syndrome, I have particularly strong feelings about such incidents. There are times I will talk to myself in public, usually when I am reasoning out some problem. Suppose someone complained about my behavior. Would I be dragged out of a place of business and arrested? I suppose incidents such as this one are understandable; Americans are overly rule-oriented and do not focus sufficiently on the ancient virtue of prudence, or practical reasoning (Aristotle’s phronesis). But a failure to recognize the unique nature of unique circumstances is a moral failure, not merely a technical failure, and that is what, in my judgment, occurred in the treatment of Mr. Blake.

Does Thomism Really Avoid the Lockean Epistemological Gap between Idea and Thing?

John Locke thought of himself as a realist (not in the Medieval sense of accepting the reality of universals, but in the modern sense of believing in a mind-independent world). Yet it seems that his philosophy leaves no room for any knowledge of that alleged world, as Berkeley and Hume pointed out. Locke believed that all knowledge comes by means of sense experience (thus he is an empiricist, as opposed to a rationalist such as Descartes; it is ironic that in his hierarchical classification of knowledge Locke lists intuitive knowledge first, demonstrative knowledge second, and sensory knowledge as the lowest form, barely to be called knowledge). Locke believes that knowledge arises by means of ideas in the mind. Whether these ideas are images or something else remains a subject of debate among Locke scholars. In any case, Locke believes that a quality is the power to produce an idea in the mind. Primary qualities are actually in the thing-in-itself, and our ideas of primary qualities are isomorphic with the actual structure of the physical substance we perceive. Primary qualities are measurable and include size, shape, and mass. Secondary qualities are not in the thing itself; our ideas of secondary qualities are not isomorphic with the actual structure of the material substance. However, the primary qualities interact with human sensory organs and with the human brain to produce ideas of particular colors, odors, sounds, and tastes. Thus secondary qualities have a partial basis in the thing-in-itself despite the lack of isomorphism between idea and thing.

The classic problem with this view is that Locke claims we are aware only of our own ideas. We do not have any direct access to the material substance, to the thing-in-itself. In fact, substance is just that which underlies the qualities, a “something-I-know-not-what.” But if we lack access to the thing-in-itself, there is no way to compare our ideas to the object allegedly causing those ideas in order to determine which qualities are primary and which are secondary. Access to knowledge of extramental reality seems impossible, and a trip down the phenomenalist brick road of Berkeley, Hume, and the sense-data theorists of the early twentieth century seems inevitable. Such an idealistic journey is not one Locke wanted to make. Idealism has serious difficulties: the source of the ideas (our own minds? the mind of God?) remains a mystery, and the orderly nature of the phenomena we experience is left unexplained unless a person takes the Berkeleian route of positing God to explain natural laws. Direct realism is another option; the label “naive realism” is a pejorative and a blatant attempt to beg the question regarding the truth or falsity of direct realism. As for the straw men that critics of direct realism try to knock down, no direct realist has denied the possibility of illusion. It is Berkeley and Hume’s phenomenalism that cannot distinguish between illusion and reality, except by taking Hume’s route of counting the more vivid ideas (which he calls impressions) as the most “real.”

Aristotle and St. Thomas Aquinas were both direct realists. Aquinas accepted the idea that knowledge comes through the “phantasm,” or sensory image, by which the mind extracts the intelligible content from a material substance. Thomists today often say that the difference from Locke’s view is that Locke believed we have access only to ideas, not to the thing-in-itself–it is the ideas that we know. In contrast, Aquinas believes that it is through the phantasm that a person gains some knowledge, albeit limited, of the thing-in-itself. But does this really avoid Locke’s problem, or does it evade it by a kind of word game?

After reading more of how contemporary Thomists deal with the epistemological gap, I must back away from my earlier position that Thomism does not avoid an epistemological gap between mind and thing. Contemporary Thomists believe that humans have evolved as part of their environment, not as creatures separate from their environment. Even though knowledge is of “external” things, there is a communication of intelligible content from object to subject–agent causation is not limited to human agents. The phantasm contains the information that human beings extract to help them live in the environment in which they are embedded, to the point that the person becomes “intentionally one” with the thing-in-itself. While Duns Scotus posited intuitive knowledge of an object in addition to a rather traditional Aristotelian account of knowledge, I am not sure that such intuitive knowledge is necessary for human beings to get by in the world. If such intuitive knowledge exists (perhaps in the form of psi), it could speed up our apprehension of a thing and help us determine whether or not it is dangerous. But if the mind is considered not as a container but as one way of an organism’s acting in the world, the Lockean gap between idea and thing seems to be eliminated. The phantasm becomes that “by which” a person apprehends some aspects of the being of a thing.

The REAL Reason Most College Students are Moral Relativists

The real reason most college and university students are moral relativists is that they want to get laid–not just once, but promiscuously. They also want to get drunk–not slightly, but thoroughly and often. I could add drug use, rudeness to professors and to each other, and the other problems students have in a decaying culture.

Other than sociopaths and psychopaths, people have consciences. They do not like to feel guilt. If they convince themselves that whatever they feel is right is “right for them,” then they can do bad things without the guilt. In the past, this tendency of the young to rebel was controlled by strong parenting and strong community standards. Even in the government schools, students were taught that some actions are right, not just for them but for everyone else, and that some actions are wrong, not just for them but for all people. In the 1970s, “values clarification” was used to teach relativism to students in K-12. Students at that age between childhood and adulthood who wanted to “go wild” then had an excuse: there are, they were taught, no cross-culturally valid moral standards. The government schools still teach such relativist garbage (the trend began in England before it began in the U.S.; read C. S. Lewis’ The Abolition of Man).

Part of adulthood is understanding one’s responsibilities in life and following basic standards of moral decency–avoiding excessive anger, jealousy, envy, murder, theft, sexual wrongs, and other actions that are harmful to human flourishing. Moral relativism is one of many forces that have pushed the effective age of adulthood for Americans to around twenty-six.

People without a firm moral compass will try to do everything they desire to do. It is no surprise that even white-collar professionals have been involved in scandal after scandal. There are causes for their behavior other than relativism, bad character for instance, but the relativism rampant in the school system does not help matters.

Man is a social animal, and for mankind to survive, certain moral rules are essential: do not murder (take innocent human life); do not steal (for the notion of property rights collapses otherwise); keep promises (necessary for contracts to have any meaning, as is general truthfulness); do not commit adultery (for the sake of a stable family). As far back as Aristotle, these were considered values required of a good human being who contributes to the community. Ancient thinkers from Aristotle to Confucius believed in a common moral code that, despite cultural differences in application, contained the same general list of virtues and vices. C. S. Lewis calls this code the “Tao.” A society that rejects the Tao will end up like the children in Lord of the Flies, committing murder and hunting with a stick sharpened at both ends (I am grateful to the late Louis Pojman for this point).

Some students will grow out of their relativism, especially after having children of their own. Others do not, however, and this contributes to the decay of the fundamental institution of society, the family, and of social institutions both public and private. Hopefully parents will counteract the influences facing their children these days, as difficult as that is. I am frankly tired of hearing students say that the evil deeds done by Lenin, Hitler, Stalin, Mao, and Pol Pot were “right for them.” They were not right, period, and young people ought to have enough sense to recognize that.

The French Revolution, Rationalism, and the Left

Since the time of René Descartes (1596-1650), French philosophy has been characterized by rationalism, the view that our knowledge comes through reason rather than through sense experience. Descartes began a trend toward rationalism in philosophy on the European continent as a whole (with the exception of the British Isles and, in the 20th and 21st centuries, Scandinavia). In politics it is dangerous to apply a rationalist approach, since that approach is often used by idealistic thinkers to set up an ideal political state arising from thought alone rather than from concrete human experience. In the ancient world, Plato is a good example: in his ideal state, babies are taken from their parents at birth and whisked off to state-run nurseries. There, children attain their “natural state” of being artisans or soldiers; later, from among the soldiers, philosopher-kings are chosen who rule with dictatorial power hidden by “noble lies” they tell the people. Aristotle rightly countered that government should start from below, with the actual historical development of a people, rather than being imposed via some idealistic rationalist framework.

The practical results of such a rationalist approach are seen in the tragedy of the French Revolution. What began as a series of grievances against King Louis XVI and his queen, Marie Antoinette, quickly degenerated into a rationalist framework imposed on the French people. A new “age of reason” was proclaimed, tradition was trashed, and the king and queen were executed. Later, after Robespierre’s rise to power, the new apparatus of the modern police state was used to round up those who opposed the people in power, or who simply opposed the attempt to eliminate Catholicism and replace it with a cult of reason. As a result, tens of thousands of heads rolled from the guillotine. The attempt to impose a government run by the “reasonable elite” led to tyranny. The threat became even more dangerous with Rousseau’s notion of the “general will,” which can be used as an excuse to label the most brutal tyranny “the will of the people.” The proper question to ask is “which people?” The answer is almost always the elites who run the state according to their rationalistic plan, no matter how many people are killed. The cause becomes higher than the individual.

This scenario was repeated by the followers of Karl Marx. Lenin murdered hundreds of thousands of those who “opposed the people.” Stalin, though more a psychopath and thug than an ideological Marxist, murdered millions in the gulags and in his forced relocations of millions of people.

Those who wield political power in the United States are more benign–people are generally not killed or imprisoned for beliefs that oppose the position of the state. But top-down management by elites takes place to the point that the United States often seems to have the form of a democratic republic without the content. Federal judges issue mandates and force them onto the people against their will–not because those mandates are really constitutional, but because the judge has a rationalistic vision of society that he or she wishes to impose on everyone else. Government bureaucrats do similar things, with their arrogant “we know best” attitude. This arrogance is supported by media elites who despise Middle America as a group of ignorant hicks and who believe that if their vision of society prevails, we will live in a utopia, a secularized heaven on earth. The American left eagerly supports their goals with an almost missionary zeal. When Middle America opposes the left’s goals, as in the Tea Party movement, the left resorts not to rational argument but to name-calling. This is ironic: if the left really believed that, say, redistribution of wealth, unlimited access to abortion, and affirmative action were rational, they would present arguments to support their positions. But for the most part they do not, and no empirical evidence against their methods, no matter how persuasive, will faze them; after all, if their form of government is so clearly proven by reason, their position, they hold, cannot be touched by the evidence of our senses.

The alternative is to realize that governments should arise from the bottom up, not from the top down. A government, as Aristotle recognized, reflects the geography, history, and traditions of a people. This does not mean that “anything goes”; in fact, Aristotle strongly condemns tyranny. As traditional conservatives such as Russell Kirk argued, it is best to respect a people’s traditions and not impose an artificial, rationalist ideology that seeks to remake society, including political governance, in its image.