Wikileaks: More Dangerous than Helpful

[Image: United States Department of State, Washington (photo by Rainer Ebert via Flickr)]

As an opponent of the Iraq and Afghanistan Wars, I could easily praise Wikileaks for revealing evidence of U. S. torture in those wars. Or I could argue that the recent leak of State Department cables was useful in revealing a chaotic and seemingly incoherent U. S. foreign policy. But Wikileaks is more dangerous than helpful, and leaking secret government documents borders on treason.

Lives may be at stake. After these leaks, State Department personnel overseas may find themselves threatened by angry citizens of the countries where they are stationed. Other governments will be angered by the information in the leaks, which may do irreparable harm to U. S. relations with those countries. Some secrecy is essential for diplomacy to take place and for the State Department to do its job. Every other country keeps similar secrets: private memos criticizing other governments, memos about espionage activities, and so forth. Most countries would charge someone who leaked such sensitive information with treason, and rightly so. Real people’s lives are at stake, as well as the ability of the United States–under any administration–to conduct diplomacy. If federal law enforcement does not do its job, I hope Congress will act to prevent such a serious breach of secret information in the future.

The New York Times made a poor decision in choosing to reveal the information from the leaks. Of course, if it had not done so, another major news organization would have; but that does not make revealing such information right. The idea that journalism must reveal every fact it knows is morally irresponsible. Perhaps most of the damage to U. S. foreign policy can be undone, and prayerfully all U. S. personnel in the foreign service will be safe. I pray that the United States does not find itself drawn into another foreign war due to blowback from these leaks. The risk to the United States is too great to allow another leak of this magnitude.

The Ethics of Psychedelic Research on Human Subjects

[Image: Stanislav Grof, psychologist and psychiatrist (via Wikipedia)]

Is it morally right to conduct research on human subjects using psychedelic drugs such as mescaline, LSD, or DMT? Much laboratory research has already been done using animals, and someone may argue that there is no need to study these dangerous substances in human beings. I will argue otherwise.

All three of these drugs, as well as other psychedelics, are widely abused, and that is one of the dangers of research: the press will find out about a study, disseminate information about it, and some readers will think, “Now that drug seems interesting–I think I’ll try it,” leaving us with more addicts than ever. But I would argue that this danger is exaggerated. Knowledge of mescaline and LSD has been public for many years, and DMT has become increasingly well known since the 1990s. Mushrooms have been used for centuries, and ketamine has been widely abused since the 1960s. I do not see how human trials could publicize these drugs any more than they already have been–and even if they do, dangerous side effects and bad trips will also be publicized, scaring many people away from trying them.

The main reason I support psychedelic research with select groups of human subjects is that some mental illness is intractable to current treatments. Some cases of schizophrenia, for example, are so severe that current therapy does little or no good. Some researchers, such as Stanislav Grof, have used LSD in the treatment of schizophrenics. Other conditions, such as depression, can be so severe that only electroconvulsive therapy does any good, and the good that it does is only temporary; ECT also carries the risk of brain damage. If some psychedelics could treat these intractable cases of schizophrenia and depression more effectively than current treatments, especially if such research is backed up by animal studies, why not try them with select subjects? Subjects able to give informed consent should be thoroughly warned about the risks of such studies. For subjects who are mentally incompetent, the family member or person with power of attorney should be given sufficient information to give or withhold consent based on his interpretation of the patient’s prior wishes. If the risks are thoroughly explained, the patient has not been helped by any other treatment, and informed consent is given, I see no ethical problem with attempting to determine whether a psychedelic drug can help the patient.

A critic may reply, “What about the risk of harm, both physical and psychological? What about the risk of future addiction caused by the study?” If current treatments, such as ECT, can harm the patient and give only a temporary reprieve from the illness, a study using psychedelics most likely would not do more harm than prior treatments–and it may help. As for the risk of addiction, that comes with the territory of any drug that helps a patient feel better. Should we stop research on painkillers because some patients become addicted to them?

The FDA has been very conservative in approving studies with psychedelics. Part of this caution is necessary to prevent harm to human subjects, and no one wants to go back to the days when the U. S. Army and CIA were secretly giving LSD to unwitting subjects–one of whom committed suicide. The FDA is right to leave no stone unturned; I would not want to be the FDA official who helped approve a study that ended up harming research subjects. But sometimes regulatory agencies hear the word “psychedelic” and are afraid to support any research involving such drugs, even if they have the potential to treat intractable mental illness. Hopefully some balance can be found between the absolute necessity of protecting research subjects and the desire to find new drugs to help those who cannot be helped by current therapies.

The U.S. Should Let South Korea Defend Herself

[Image: Map of the Korean peninsula with Jeju Island (via Wikipedia)]

The recent North Korean attack on a South Korean island was unjustified and typical of the behavior of North Korea’s hard-line Communist government. As much as the United States finds the attack reprehensible, this is a conflict in which the U. S. should not be involved. No national interest of the U. S. would be served by participating in any renewed Korean War. Involvement would only line the pockets of those in the military-industrial complex, strain the U. S. military to the breaking point, and cause perhaps irreparable damage to the U. S. economy. South Korea has solid, modernized military forces and could outlast North Korea in any extended conflict. If full-scale war breaks out and China becomes involved in support of North Korea, the situation would remain an Asian matter for Asian countries to resolve.

For years the U. S. forces in South Korea have been sitting ducks–any full-scale North Korean attack on U. S. bases would be devastating to the Americans. The number of American troops stationed there (over 28,000) is insufficient to make a significant tactical or strategic impact in any full-scale war. It is long past time to remove all U. S. troops from South Korea.

I can hear Neoconservatives whining now: “You are a wimp–you just want to run–you don’t want to defend our allies.” Such claims are ridiculous, and the Neocons know it. Because it fully supports the national interests of the United States, my position is more “patriotic” than that of the Neocons, who would sell out the good of the United States in foreign conflicts.

Wars are ultimately bad for both the economy and freedom. The U. S. cannot afford to get involved in another war. Let South Korea, which is perfectly able to take care of herself, do so.

Shop ’til You Drop…?

[Image: Shopping carts in ABC Tikkula (via Wikipedia)]

Thanksgiving Day is nearly upon us, but instead of focusing on thanking God for our blessings, television blares ads about the day after Thanksgiving. News outlets report the latest predictions about sales on Friday, the biggest shopping day of the season. And when Friday arrives, long lines will form around stores, with some people having camped out overnight in the cold to be the first to buy bargains.

Even apart from Thanksgiving or the second day of Christmas (December 26, which the secularized world calls the “day after Christmas”), consumption drives modern Western economies. Shopping becomes an obsession for some people, both men and women. Women look for the latest fashions, and men look for the latest gadgets. Both, underneath the happy surface of shopping days, are lost and miserable.

Shopping has become one way to run from death. If we’re out and about and busy, we don’t have to think about the finality of life. Maybe if we keep shopping, we’ll never drop and we’ll live forever! Perhaps late on some insomniac night a sane person will remember her inevitable doom and weep, but most people allow such thoughts to seep out of their minds, lost in the busyness of life. Now I’m not implying that everyone who enjoys shopping is running from death, but the extent of the bustle, the extent to which shopping becomes an end in itself, suggests that this is exactly what some people do. It is almost as if finding a bargain becomes a substitute for the transcendent–and a paltry substitute at that. No matter what a person buys, one day those clothes, appliances, home decor items, and so forth will travel either to the local trash heap or to the nearest thrift store, after the owner of those items lies beneath six feet of soil.

American culture is rapidly secularizing to the point that it may one day be as secular as Western Europe. In a world that does not believe in the truly transcendent, a God who loves mankind and calls us to repentance, all that remains is mindless bustle and lust for wealth, sexual fulfillment, fame, or the other false gods of the contemporary world. I suppose these gods are not as dangerous as the worship of the state, but while they may not take physical life, they will, if sought as ends in themselves, destroy the soul.

So shop if you enjoy it–but realize that shopping is not an end in itself. Otherwise, shopping becomes an idol that enslaves, like any other idol that is finite. As both Augustine and Kierkegaard recognized, only an infinite being can satisfy an infinite longing. Only God can fill an empty soul.

Teaching vs. Research among College and University Professors

[Image: Research, by Olin Warner (via Wikipedia)]

This topic may not be as exciting as full-body scanners, but it is a hot topic in academia and among critics of higher education in the U. S. The criticism runs something like this: “College and university professors spend so much time on research that they teach very few classes. In large universities, teaching assistants teach most of the introductory classes. Just because a professor is good at research does not mean he is good at teaching. Higher education should focus more on teaching than on research.”

There is some truth to that criticism. After the influx of federal money into the university system, especially in reaction to the Soviet launch of Sputnik in 1957, science programs emphasized research, and eventually the humanities followed the model of the sciences. There is competition for research grants, and at some research-oriented universities a tenure-track assistant professor without enough grants will not receive tenure. Beyond grants, there is an expectation of publication in one’s field, the extent of which varies from university to university. In the sciences, that expectation usually focuses on articles in peer-reviewed journals. In philosophy, my field, articles in peer-reviewed journals are good, but some places require a book (not a textbook, but a scholarly monograph). Some departments count only publications in specific philosophical journals, such as the Journal of Philosophy or The Monist. Other departments are not as picky. At my school, which is considered a teaching school, scholarly work is expected, but it is up to the individual departments to determine how much. When I was department chair, I told new faculty that I expected, by the sixth year, when application for tenure usually takes place, at least two peer-reviewed articles and a few conference presentations. A monograph, of course, would suffice. That is a very modest publication requirement, but given a 4-4 load with frequent required overloads, it is a fair one.

In my opinion, research is a good thing–and some researchers ought to confine themselves to that aspect of scholarly endeavor. Excellent researchers are sometimes given research positions, which I think is fine. But most college and university professors, even at the major schools, would do well to balance teaching and research. Both, I believe, are needed–I have found research to be of great value in my own teaching. Teaching and research should be in a symbiotic relationship, with teaching fueling ideas for research and research contributing to a teacher’s knowledge of his field and to his teaching. I refer to my own research when it is relevant, and if I believe one of my articles is relevant to a class, I will sometimes require students to read it. Professors who do not write in their fields ought at least to read the key journals to keep up with what is going on in them. When they can, they should apply their research to the content of their teaching. And teaching assistants, while necessary, should not teach all introductory courses; even full professors can learn something by teaching freshman courses. My school does not have TAs, so I teach introductory as well as upper-division courses, and I learn something new from my students almost every day (both positive and negative!). Now my own preference is research–I love the process of research and writing–but teaching forces me to deal with real people in the real world, with their own struggles and their own takes on the issues of philosophy. That is a good thing–and more professors ought to be learning in a similar way by focusing more on their teaching.

What is an ideal situation? Probably a 3-3 or 3-2 load with funds for research and travel–and a more modest publication requirement than many large universities have. This provides a good balance between teaching and research so that the professor has time for both.

Airport Scanners and American Aversion to Risk

[Image: “Represent,” by Jon Wiley via Flickr]

See Richard Forno’s article at http://www.alternet.org/story/148886/%27porno_scanner%27_scandal_shows_the_idiocy_of_america%27s_zero_risk_culture

Richard Forno’s article points not only to the absurdity of the TSA’s “porno scanners,” but also to the reason many Americans are willing to accept such an invasion of privacy: the myth of a “zero-risk culture.” Forno has hit the nail on the head. Why does one poll indicate that 80% of Americans support full-body scanners? Was the poll badly done? I hope so. I hope that Americans have not bought into the idea of a world without risk to the point that they would give up all their freedoms for the myth of an absolutely secure world.

When I was a child, I rode my bicycle quite a bit–without a helmet. I played in the back yard by myself. I rode in the back of my dad’s pickup truck on the freeway on the way to the mall. I sat on a bale of hay near the bottom of a stack as the loaded truck drove down the highway at 40 mph. Was there risk in those activities? You betcha. Should I have avoided doing those things? Hell, no. Risk is a part of life. There is a risk of terrorist attack–it would be naive to deny that. But the risks of dying of heart disease, stroke, cancer, pneumonia, the flu, auto accidents, gunshot wounds (with the perpetrators being Americans), or lightning are all greater than the risk of being killed in a terrorist attack–even for frequent air travelers. Many Americans want a society that controls all risk–controls which foods people eat, what they can drink, what they can smoke, whether they can sit in the back of a pickup, how they should ride their bicycles, and whether they can fly on an airplane without being virtually stripped naked. The sad thing is that we’re all going to die anyway. There is room for common-sense controls that minimize the risk of harm–but invading the most private aspects of daily life smacks of totalitarianism. The government becomes a nanny, or at worst an abusive parent, and the people become docile children. Hopefully enough pampered Americans will grow up before the country in which they were reared grows unrecognizable. The saddest thing is that so many would not care.

Is Asperger Syndrome a Real Condition?

[Image: Hans Asperger (via Wikipedia)]

In 2006 I was diagnosed with Asperger Syndrome, a condition on the autism spectrum, first identified by Hans Asperger, that combines social awkwardness, an intense focus on particular interests to the exclusion of others, and physical clumsiness. But does the term “Asperger Syndrome” refer to a real phenomenon in human beings, or is it a product of the modern tendency to attach labels to normal variations in personality?

My short answer is “Yes, Asperger Syndrome is a real set of behavior patterns in some individuals”–but there is more to be said. It is true that Americans, especially, love to label. They love to medicalize. Although Attention Deficit Hyperactivity Disorder is a real condition that truly handicaps some children, there is a temptation to apply the label to children simply being children, especially by tired teachers or by parents looking for an excuse for their child’s poor behavior. But misdiagnosis does not imply that ADHD is a myth; it means that diagnosis must be done with care and that conditions such as ADHD must be carefully defined. Otherwise, some children may be placed on Ritalin or some other drug whose long-term effects we know very little about.

Now Asperger Syndrome can be socially damaging, since individuals with Asperger’s do not read other people well. Thus the condition has negative effects with practical consequences, especially for social relationships, friendships, marriage, and job interviews. There is some evidence that, like full-blown autism, Asperger Syndrome is correlated with changes in the brain–but one must take care with such correlations. Correlation is not causation. In addition, there is the chicken-and-egg problem of whether changes in the brain affect behavior patterns or whether behavior patterns remold the brain. The truth may be a combination of both. Parents with autism or Asperger’s are more likely to have children with these conditions, so there seems to be a genetic link. A critic, however, may point out that if the parents who model behavior for a child have Asperger’s, the child may model his behavior after theirs and engage in Asperger-like behaviors. Science may not be able to answer every question about Asperger’s and heritability, but from my own experience, watching my maternal grandfather’s behavior, I believe that a tendency toward Asperger Syndrome can be inherited; whether science confirms this conclusively will depend on future research.

Asperger Syndrome is not a disease. In part, I think of the term as a pragmatic label that helps people “wrap their minds around” a certain combination of behavior patterns. Placing those patterns under the Asperger’s label helps scientists, individuals with Asperger’s, and their families place past, present, and future behaviors into a meaningful “pigeonhole.” Pigeonholes are not always bad things as long as people are not overly legalistic about them. If a pigeonhole helps someone make sense of his life or his loved one’s life, and if it points to an actual pattern of natural human behavior, then the pigeonhole is pragmatically useful.

But is the label “Asperger’s” true? Does it refer to a “natural kind” with stable properties? I lean toward that position. I realize the dangers of over-medicalizing normal variations (and I think Asperger’s is an extreme of normal personality) and of labeling. But from my own experience and from the experience of others, Asperger Syndrome does refer to a pattern that is ontologically real–“really real,” so to speak. Whether Asperger’s is overdiagnosed is an empirical question that can, in principle, be resolved by careful study, and more rigorous tests can make diagnosis more accurate. But those who are accurately diagnosed should take comfort that their condition is not just a label–and accept the strengths that arise from having Asperger’s while working on the weaknesses.
