Saturday, December 31, 2011

David Brooks on American Politics

David Brooks has an interesting perspective on American politics and public policy. An Op-Ed columnist for the New York Times, Brooks can be described as moderately conservative (a vanishing breed in contemporary American politics). His commentary is informed by a humanities and social sciences perspective that is rare among political analysts today. I blogged earlier about his criticism of the current American meritocracy. Today I want to talk about two of his recent columns that offer an interesting perspective on what’s wrong with our political thinking. While he doesn’t provide specific solutions, he discusses failings in both parties and argues that we need to change our thinking about what we want government to do. As we enter the 2012 election year, in which Americans will choose between the same tired and failed alternatives, it’s important to think about our political problems in a different way, as Brooks does.

His first article, entitled The Two Moons, is named after Samuel Lubell’s concept of a “political solar system.” Lubell, writing in the 1950s, proposed that at any moment there is a Sun Party (the majority party) and a Moon Party (the minority party). The Sun Party drives the agenda, and the Moon Party (which shines by reflecting the Sun’s rays) opposes it. During the New Deal, the Democrats were the Sun Party. During the Reagan Revolution, the Republicans were. Between 1996 and 2004, the two parties were tied. This was a transition period, from which, based on historical experience, a new Sun Party usually emerges. But no Sun Party emerged. Neither party was able to take the lead; both are now Moon Parties. Both Republicans and Democrats have record low approval ratings. “Neither party has been able to rally the country behind its vision of government.”

Having two simultaneous minority parties leads to strange outcomes. Both parties embrace minority mentalities. Democrats feel oppressed by big business and big finance, while Republicans feel oppressed by big government and the big liberal establishment (the media, academics, judges, artists, Hollywood, etc.). Neither side wants to compromise, feeling that the outside world, and especially the opposing party, is hostile. They instinctively protect their special interests (labor unions, the poor, the old, and minorities for Democrats; the rich, big business, big finance, and evangelicals for Republicans), and refuse to engage or try to convert outsiders. Neither party engages in serious internal debates, and moderates are no longer welcome.

The Two Moon era is volatile. Voters reject one party, then two years later reject the other. That happened in 2008, when voters swung toward the Democrats, and again in 2010, when they swung back toward the Republicans in a historically large shift. Usually, a string of election defeats prompts a minority party to modernize and revise its ideas. The Republicans’ defeats in 2006 and 2008 did promote internal change via the Tea Party. But rather than owning up to the failures of their neoconservative ideology and moving toward the center, Tea Party Republicans pushed the party toward an extreme libertarian agenda. The Republicans’ 2010 victory encouraged them to act like a Sun Party, trying to force through their Tea Party agenda, but they were stymied by the fact that they were not a true majority party and that Democrats controlled the Senate and the Presidency. The Democrats’ 2010 defeat, along with subsequent Republican intransigence, is pushing the Democrats toward their own extreme. Even if they win big in 2012, however, they do not have the broad-based support to become a Sun Party.

How will the Two Moon era be resolved? Brooks suggests either a third party or a crisis. A third party faces enormous obstacles in the American political context, as discussed in this article. Every third-party presidential run has lost, though some (e.g. Teddy Roosevelt in 1912, or Ross Perot in 1992) have spoiled an election. Even in the unlikely event that a third-party candidate wins the presidency, he’ll have to govern with a Congress in which almost all members are Republicans or Democrats. For a third party to make fundamental changes in government, it would need to control both the executive and legislative branches, which the American political system makes virtually impossible.

This leaves a crisis as a way to break the Two Moon deadlock. It’s possible that a financial panic and/or war will force some major political change. Exactly how the events will unfold and what the responses will be are impossible to determine, but the American political system will have to undergo a radical (and painful) transformation for the Two Moon deadlock to become resolved.

* * *

Brooks’ second article is entitled Midlife Crisis Economics. Brooks criticizes the Obama administration’s use of historical analogies. When the administration first came to power in 2009, it compared the current situation to the 1930s and promoted a second New Deal. Obamacare was a product of this thinking, a successor to ambitious social legislation like Social Security. The problem is that the public’s perception of government has radically changed since the New Deal. “Today, Americans are more likely to fear government than be reassured by it.”

The Obama Administration then switched historical analogies from the New Deal to the progressive era at the beginning of the 20th century (e.g. Obama’s “Roosevelt Speech” in Kansas). There are some similarities between today and a century ago. “Then, as now, we are seeing great concentrations of wealth, especially at the top.” But the differences outweigh the similarities. The economy back then was much different: a vibrant jobs machine at the height of the industrial era. The information age we live in today is not a jobs machine. Industrial decline and the elimination of jobs through outsourcing, offshoring, and automation were under way well before the current recession. Factories that used to employ 1,000 workers can now produce more while employing fewer than 100.

Inequality, while due partially to government policy, has a structural economic component that would be difficult to reverse (short of communism). Inequality is growing in all developed countries, including ones (e.g. Sweden and Germany) that have much more generous welfare states.

Brooks succinctly summarizes the essential differences between today and a century ago: “In the progressive era, the economy was in its adolescence and the task was to control it. Today the economy is middle-aged; the task is to rejuvenate it.” A century ago there were few or no government agencies, institutions, or regulations to protect workers, the sick, the disabled, the old, children, minorities, etc. Today, we have more than enough government; the challenge is to make government more effective. We need to reform existing programs to make them deliver more services at less cost.

Brooks also talks about the moral differences between today and the progressive era. The beginning of the twentieth century was still a time of Victorian culture, with its strict moral code. Individuals and the government taking on debt that they had no hope of repaying would have been considered sinful. Illegitimacy and divorce were culturally prohibited and very rare.

These moral differences have major practical consequences. It’s obvious today what harm the contemporary embrace of public and private debt has caused our economy. Subprime and other risky mortgages led directly to the 2008 financial crisis and subsequent recession. The deficit financing that the American federal government has practiced routinely for decades has built up an unsustainable debt, and it is preventing the government from responding in a way appropriate to the magnitude of the current crisis. The fact that 40% of American children are born out of wedlock leads to a number of social and mental health problems that were unknown a century ago. These problems, including crime, juvenile delinquency, drug abuse, dangerous neighborhoods, and failed schools, force the government to intervene in family life and child upbringing, and to spend billions of dollars that it otherwise would not have had to spend.

My favorite quote from this article is the following: “One hundred years ago, we had libertarian economics but conservative values. Today we have oligarchic economics and libertarian moral values — a bad combination.” It’s important to understand that the last (and only) time that libertarian economics was ever practiced, during the late nineteenth and early twentieth centuries in America, Britain, and (to a lesser extent) in other Western European countries, was a time of strict conservative Victorian morality. There is no historical example of a combination of libertarian economics and libertarian morality. There is no contemporary example of libertarian economics. Contemporary countries with strong, rapidly growing, export-oriented economies, e.g. China, are anything but libertarian.

Ron Paul’s fans need to understand that they are advocating an experimental system, something never before tried. They also need to understand that the rise of American government deficit spending, regulation, waste, inefficiency, and all the other things they complain about occurred during a time of increasingly libertarian morality. It was the 1960s embrace of individual social liberty, of personal and sexual freedom from authority, of licentiousness, that represented a cultural revolution, something unique in history. The trend toward individual social freedom and autonomy has only accelerated since the 1960s. Over the same period, government deficit spending, regulation, litigation, interference with the economy, waste, fraud, and incompetence have also increased. The two are clearly correlated, though it’s impossible to prove that libertarian morality caused the explosion of government spending and regulation.

Note that Brooks said that we have “oligarchic economics” in America today, not “socialist economics.” The distinction is important. It implies that libertarian ideas did in fact influence not only social and cultural life, but also the economy. Instead of moving toward the libertarian ideal of limited government, however, we moved toward a corrupt, oligarchic form of government, one in which the rich and powerful use a bloated government to become more rich and powerful. The most egregious example is the explosive growth of the financial sector, which occurred after libertarian-inspired deregulation. The investment banks won the freedom to gamble on risky mortgage-backed securities, a gamble that led to the 2008 crisis. Unlike in a truly free market, they felt secure enough to make these gambles knowing that the government would bail them out.

As usual, Brooks has no specific solutions to the problems he describes. “The job is to restore old disciplines, strip away decaying structures and reform the welfare state. The country needs a productive midlife crisis.” I’d agree with that, but how is it going to happen? How can we reform an oligarchic government while maintaining libertarian moral values? How can people of different socioeconomic status, intelligence, race, age, ethnicity, culture, sexual orientation, political belief, and religious belief come together and agree on anything, when libertarian morality tells them to pursue their own self-interest no matter the social cost? Will they be willing to make sacrifices for the common good, whether in the form of accepting fewer government benefits, more taxation, or a combination of the two? I doubt it. My view is that the “productive midlife crisis” our country needs must include a serious re-evaluation of the permissive morality that has been dominant since the 1960s.

Monday, November 7, 2011

Daylight Saving Time Change = Mood Change?

I’ve updated my Are You Sensitive page to include two additional tests of possible sensitivity to the Earth’s magnetic field. One is a mood change from the daylight saving time change. In the U.S., we set our clocks back an hour yesterday (Sunday, November 6). I noticed a mood change this morning. I also have a mood change when the spring-ahead time change happens in March. Why does the daylight saving time change cause a mood change in magnetoreceptive people? It has to do with circadian rhythms, internal rhythms that are approximately one day in length. I’ve found that circadian rhythm affects both my mood and my ability to locate my magnetic home. When we shift our clocks forward or backward, one of my circadian rhythms changes, and this change causes symptoms. I’ve found ways to compensate, including going to bed earlier in the fall and later in the spring.

The second additional test is the presence of seasonal changes in symptoms. Although the daylight saving time change is associated with the seasons, I have something different in mind here. Winter depression is an example of a seasonal change in symptoms. Unlike the daylight saving time change, which happens once in the fall and once in the late winter, seasonal depression and other seasonal symptoms are more gradual. They get progressively worse before the solstice and better after the solstice. Magnetoreceptive people can have seasonal symptoms when the location they’re living in is far enough north or south of where they grew up. During childhood, seasonal changes in day length are “programmed in”, and your body expects similar seasonal changes throughout your life. If you live far enough north or south of where you grew up, day length around the solstices can be different enough for your body to notice. For example, if you live in New York but grew up in Florida, you’ll have fewer hours of daylight in December than you did in Florida. Your body reacts to this difference, causing symptoms. I don’t think the symptoms are caused, as is commonly believed, by it simply being too dark in the winter. I had winter and summer seasonal symptoms when I lived in North Carolina, which is about 600 km (370 miles) south of where I grew up (New Jersey). These symptoms consisted of sleep problems, anxiety, tics, and agitation. I didn’t have seasonal symptoms when I lived in Salt Lake City, Utah, which, while far west of New Jersey, is close to it in north-south distance. The best way to compensate for seasonal symptoms is to move away from the place that is causing them. If that’s not possible, then temporarily increasing the dosage of medication to get through the tough times, or using a bright light box, can be helpful for some people.
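To get a feel for the size of that day-length difference, here is a rough back-of-the-envelope sketch using the standard sunrise equation (the latitudes are approximate, and the simple declination formula is only accurate to within several minutes):

    import math

    def day_length_hours(latitude_deg, day_of_year):
        """Approximate hours of daylight from the standard sunrise equation."""
        # Solar declination in degrees (simple cosine approximation)
        decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
        cos_hour_angle = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
        cos_hour_angle = max(-1.0, min(1.0, cos_hour_angle))  # clamp for polar day/night
        return 2.0 * math.degrees(math.acos(cos_hour_angle)) / 15.0

    # Winter solstice (~day 355): New York (~40.7 N) vs. central Florida (~28.5 N)
    for name, lat in [("New York", 40.7), ("Florida", 28.5)]:
        print(f"{name}: {day_length_hours(lat, 355):.1f} hours of daylight")

This works out to roughly an hour less December daylight in New York than in central Florida, a difference that seems large enough for a body to notice.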

Saturday, September 24, 2011

A Psychiatrist Criticizes His Own Profession: A Review of “Unhinged” by Daniel Carlat

It’s not news that the profession of psychiatry is in a state of crisis. I’ve talked about problems with psychiatry on this blog, and many books, articles, and blogs have been written criticizing the profession. There’s no shortage of things to criticize: there is little understanding of the neurobiology of psychopathology; there are no diagnostic tests for psychiatric disorders; drugs are widely prescribed with little evidence supporting their long-term efficacy; most psychiatrists have become exclusively pill pushers, eschewing the need to understand and connect with their patients beyond a 15-minute med check; and many psychiatrists have allowed themselves to be corrupted by drug money.

Daniel Carlat, in his book Unhinged: The Trouble with Psychiatry—A Doctor’s Revelations about a Profession in Crisis, provides an insider’s look at the problems with psychiatry. Carlat was motivated to pursue a career in psychiatry by his depressed mother’s suicide. He begins his book with an account of his medical training. Much of what he learned from the medical school curriculum and clinical rotations was useless in his later practice. Psychiatrists don’t perform surgery, don’t order lab tests (except to rule out rare physical problems), and usually don’t order brain scans. They diagnose based on symptoms, which is what all doctors did before the twentieth century, but which is only a minor part of medical diagnosis today. Psychiatrists, like other doctors, prescribe drugs, so they need to understand psychopharmacology; but, as Carlat describes in his book, their decisions about medications are usually subjective.

Carlat did his residency at Massachusetts General Hospital (MGH), beginning in 1992, after the introduction of second-generation antidepressants like Prozac. Although his supervisors were divided between therapy and drug advocates, Carlat admits that “the main thing you learn in a psychiatric residency, then or now, is how to write prescriptions” (p. 34). After completing his residency, Carlat, like most psychiatrists of his generation, specialized in psychopharmacology, i.e. prescribing drugs. He saw patients for 15- to 20-minute medication visits. The reason he and other psychiatrists did this is that it was more profitable. Seeing three patients an hour for med checks allowed him to make about $180 an hour minus expenses. Seeing one patient an hour for therapy brought in between $80 and $100 an hour, roughly half as much. Carlat blames managed care companies for forcing psychiatrists into becoming pill pushers. But managed care companies can’t force psychiatrists to do something they don’t believe in. The “key opinion leaders” in psychiatry, the leading academic psychiatrists who set the direction of the profession via their research and publications, are fanatical advocates of the biochemical/drug paradigm. This advocacy is quite profitable for them, as Carlat describes in his book.

Carlat talks about his experiences as a “hired gun,” someone who is paid (i.e. bought off) by drug companies. Carlat worked for Wyeth Pharmaceuticals to promote the antidepressant Effexor to primary care doctors. He made $750 per talk. Although he was officially an “expert consultant,” in reality he was a highly credentialed salesman. Sales reps attended his talks and communicated to him, via body language and other feedback, whether he was doing what they expected of him, i.e. promoting their product. When he tried to be more balanced and neutral, they criticized him. He eventually gave up his hired gun position.

Although Carlat was a minor figure among the drug company hired guns, he assumes that the same thought processes and corruption happened among the more famous and successful psychiatrists. He discusses the Senator Grassley investigations of several key opinion leaders in psychiatry, including:
  • Melissa DelBello, a researcher who was paid $180,000 by AstraZeneca to promote the antipsychotic drug Seroquel.
  • Joseph Biederman, Timothy Wilens, and Thomas Spencer of MGH, who were paid a combined $4.2 million by drug companies over a 7-year period. These men also received taxpayer money in the form of NIH research grants, whose rules explicitly disallow such large drug company income. Biederman and colleagues pioneered the diagnosis of bipolar disorder in toddlers, which has led to thousands of preschool children receiving drug cocktails that include powerful antipsychotics.
  • Alan Schatzberg of Stanford, president of the American Psychiatric Association, controlled more than $4.8 million of stock in Corcept Therapeutics, a company he cofounded to test a drug to treat psychotic depression. He was also the principal investigator of a huge NIH study to test the same drug. This combination represented a major conflict of interest.
  • Charles Nemeroff, chairman of the department of psychiatry at Emory University, earned $2.8 million from consulting arrangements, but failed to disclose at least $1.2 million. Some of this money was from the drug company GlaxoSmithKline, whose drugs Nemeroff was studying with $3.9 million of taxpayer money.
Carlat describes the insidious methods that drug companies use to market their products to psychiatrists, using the epilepsy drug Neurontin as an example. The evidence for Neurontin’s effectiveness in treating psychiatric disorders was poor and did not meet the FDA’s criteria for effectiveness. Warner-Lambert, the company that introduced the drug, decided to illegally market it off-label for various disorders, including bipolar disorder, pain, and anxiety. The company bribed doctors to prescribe Neurontin and hired marketing firms to ghost-write articles pushing the drug. It also paid doctors to allow drug reps to shadow them during their patient visits, and these reps persuaded some doctors to prescribe Neurontin off-label. The sleazy techniques paid off, earning Warner-Lambert $2.7 billion in Neurontin sales in 2003, almost all of it from off-label uses.

Carlat describes his own experiences with the pharmaceutical marketing machine. One was a lavish party put on by Janssen, the manufacturer of the antipsychotic drug Risperdal, during the 1999 American Psychiatric Association annual meeting. Janssen rented the entire Smithsonian Air and Space Museum, with a live jazz band, buffet food, and free drinks, all for the benefit of the psychiatrists it wanted prescribing Risperdal.

Another experience was with Valerie, his Ambien rep. Ambien, marketed by Sanofi-Aventis, was the first in a new category of non-benzodiazepine sleeping pills. Since Ambien was going off patent soon, Valerie tried to sell him Ambien CR (“Controlled Release”), a longer-acting version of the original drug. Carlat was skeptical of the science behind the new pill. Valerie knew that he wasn’t prescribing much Ambien CR; she had access to this information because pharmacies have been selling their prescribing data to drug companies since the 1990s. Valerie was persistent, offering a free medical textbook as a gift. Carlat prescribed Ambien CR to a patient, subconsciously or consciously reciprocating Valerie’s gift. The patient didn’t like the drug because of a hangover side effect. Carlat didn’t tell his patient that he had prescribed Ambien CR as a favor to a drug rep.

Carlat criticizes the chemical imbalance theory. This theory, i.e. that depression is caused by a deficiency of the neurotransmitters serotonin and norepinephrine, and that schizophrenia is caused by too much dopamine, came about from discoveries about the pharmacology of antidepressant and antipsychotic drugs. Antidepressant drugs block the reuptake of serotonin and/or norepinephrine (thus increasing the amount of these neurotransmitters in the synapse), while antipsychotic drugs block dopamine receptors. The chemical imbalance theory took it as a given that the drugs were effective, and that they were not just treating symptoms but treating the biochemical cause of mental disorders.

There has been no direct evidence in favor of the chemical imbalance theory. Part of the problem with verifying the theory is that scientists can only indirectly measure neurotransmitters via breakdown products in the blood, urine, or cerebrospinal fluid, or post-mortem exams. The studies utilizing these techniques have been inconclusive for both depression and schizophrenia. The result is that since virtually all biological psychiatric research in the past several decades has been based on the chemical imbalance theory, “the shadow of our ignorance [of psychiatric disorders] overwhelms the few dim lights of our knowledge” (p. 80). Carlat admits that with the lack of a scientific basis for drug choice, prescribing is more of an art than a science. “To a remarkable degree, our choice of medications is subjective, even random. Perhaps your psychiatrist is in a Lexapro mood this morning, because he was just visited by an attractive Lexapro drug rep” (p. 83).

Carlat talks about the overdiagnosis of psychiatric disorders, which is caused by a combination of scientific ignorance about mental disorders and greed. The DSM-IV, the “bible” of psychiatry, classifies mental disorders based on lists of symptoms. If you have five of the symptoms it mentions, you have depression. If you have only four, you don’t. Since this symptom-based diagnosis ultimately rests on subjective or arbitrary factors, there is no way to prevent the multiplication or redefinition of disorders. One criticism of the upcoming revision (entitled “DSM-V”) is that new definitions of disorders would make it too easy to diagnose patients. An example is the proposed “prepsychotic” category, which attempts to identify individuals who might in the future develop schizophrenia. In the words of Allen Frances, who was the chair of the DSM-IV committee, these broadened categories “would be a wholesale imperial medicalization of normality that will trivialize mental disorder and lead to a deluge of unneeded medication treatment” (p. 65).

The problems with the DSM also affect clinical practice. For example, bipolar disorder is a DSM diagnosis based on symptoms such as alternating manic and depressive episodes. The diagnosis was intended for adults, although sometimes older teenagers have these symptoms. Joseph Biederman and his colleagues at MGH decided to expand the diagnosis of bipolar disorder to toddlers. In 1996, Biederman published a paper reporting that nearly a quarter of the children he was treating for ADHD also met his criteria for bipolar disorder. Since preschool children don’t have mania or depression, how could they have bipolar disorder? Biederman decided that irritability was a defining attribute of mania in young children, even in the absence of euphoria or grandiosity. How could he make this diagnostic change? It wasn’t based on science, since the neurobiology of bipolar disorder in adults or children is unknown. Biederman was able to do this, and to get a lot of other psychiatrists to follow his lead, because he was a full professor at Harvard, next to God, in his own words, on the psychiatric prestige scale. His diagnostic change led to a forty-fold increase in the number of children and adolescents treated for bipolar disorder. These children have been prescribed powerful drug cocktails, including antipsychotic drugs, whose side effects, such as sedation and weight gain, are much more harmful to children than to adults. As I mentioned above, Biederman and colleagues received over $4 million from drug companies, which was certainly a powerful incentive to expand the diagnosis (and drug treatment) of bipolar disorder to children.

What is Carlat’s prescription for change in his broken profession? Carlat wants psychiatrists to go back to providing therapy, which can be balanced with medications. Fifteen-minute medication checks are not sufficient to get to know a patient, to know what makes him tick. Sometimes changes in symptoms are due not to medications but to life changes or stresses. Since most psychiatrists don’t have time to inquire about anything other than symptoms and medications, they are blind to what is going on in their patients’ lives. Carlat changed his own practice from exclusively 15-20 minute med checks to somewhat longer medication sessions (20-25 minutes), alternating with 45-minute therapy visits. He doesn’t use traditional psychodynamic therapy, but instead “a version of supportive therapy that I now try to weave into the fabric of all my sessions with patients, whether they are seeing me primarily for medications or for therapy” (p. 199).

Carlat also wants to see an alternative to medical school for training psychiatrists. He feels that medical school, in addition to being largely useless to future psychiatrists, indoctrinates students into an excessively biomedical viewpoint. It also makes them feel inferior to other doctors, since psychiatry is in such a primitive state compared to the rest of medicine, and superior to other mental health professionals, since psychiatrists have a medical degree. Carlat would like to see a new training program modeled on the “Doctorate in Mental Health” experiment in San Francisco in the 1970s and 1980s. This program combined two years of medical and psychological classes with three years of on-the-job training similar to a psychiatric residency. It shaved three years off the standard medical training, but unfortunately fizzled when psychiatrists successfully lobbied to prevent its graduates from being licensed. Such a program could be revived and could serve as a way to train new practitioners better able to integrate drugs and therapy.

Carlat assumes that drugs are effective. In his book, he gives a number of case examples of patients who he says were helped by the medications he prescribed. But how does he know that? His conclusions about drug effectiveness are based on his own clinical observations, which derive from 15- to 25-minute appointments. How can he, or any other psychiatrist, draw any conclusion about effectiveness from such short patient visits, in the absence of any objective lab tests? Many people with mental disorders are in complete denial about their condition. They are unable to recognize when they’re behaving strangely or irrationally. Family members and friends can usually notice changes, but they don’t always accompany a patient to a psychiatric visit. Even when they do, they may not speak up.

I have a personal example to support this. I have a relative who for years has been in denial about his depression, anxiety, paranoia, and OCD. When he went for his 10-minute medication visits (every month or every other month), he would tell his psychiatrist that everything was fine. His psychiatrist didn’t have time to ask probing questions or to make independent observations. My relative’s wife, who would drive him to the appointment but usually not go into the doctor’s office, was afraid to speak up: if she did, my relative would become extremely angry, paranoid, and hostile, and blame her for triggering his bad mood, which would sometimes persist for days. So in the absence of any contradicting information, the doctor would usually tell him that he was doing great and refill his prescriptions. This is pill pushing, not medicine. But it is the current standard of care in psychiatry.

I can give an example of a situation where a 15-minute appointment would represent quality medical care. Take a hypothetical patient, Mary, who has diabetes. She is in denial about her condition. When she goes to the doctor, she tells him that she feels fine. He looks at her lab tests, however, and sees that her blood sugar is elevated. He tells her that she’s not fine, discusses possible causes of the elevated blood sugar, and prescribes a treatment plan. The doctor has an independent, objective source of information to balance Mary’s account of how she feels.

A psychiatrist doesn’t have any objective lab tests to balance a patient’s testimony. He needs more time to make an accurate diagnosis of the patient’s current state of mind. I think an hour per patient is a bare minimum, even if the psychiatrist is only prescribing drugs and not doing any therapy. In an hour, the psychiatrist has more time to observe and listen to the patient and to ask questions. While it’s easy for a disturbed patient to put on an act for 15 minutes, it’s a lot harder to maintain it for 50 minutes. If a family member is present, an hour would also give the psychiatrist time to interview the family member away from the patient.

Psychiatrists, including Carlat, blame managed care companies for forcing them into such short patient visits. But it is psychiatrists who set the standard of care for their profession. If leading psychiatrists said that they needed to see patients for longer, that 15-minute visits represented poor patient care and shouldn’t be reimbursed at all, and if they lobbied the government, Medicare, and managed care companies to reimburse longer visits at a higher rate, then these changes would get made. Psychiatrists are not helpless pawns in the face of powerful entities; they have significant power themselves, and they can use that power to improve patient care.

Carlat doesn’t question the basic effectiveness of psychiatric drugs. His prescribing habits seem conventional, including prescribing drug cocktails. He gives an example of James, whom he calls a “typical success story of modern psychopharmacology” (p. 70). Carlat prescribed James five different medications: Celexa for depression, Ativan for anxiety, Ambien for insomnia, Provigil for fatigue (a side effect of Celexa), and Viagra for erectile dysfunction (another side effect of Celexa). Is this a success story, even if James reported feeling happier? Carlat compares James to an old pickup truck, held together with baling wire and duct tape. Is turning patients into fragile jalopies a good thing?

Drug cocktails make it impossible to identify which drug is responsible for which side effect or reaction. How does one know if the patient’s fatigue or agitation is due to Drug A, Drug B, Drug C, the interaction between Drugs A and B, the interaction between Drugs B and C, or the interaction between Drugs A and C? That is for only three drugs. If a patient is on five drugs, as James is in the case mentioned above, then there are ten possible pairwise interactions to consider. (In general, the number of pairwise drug interactions is n(n-1)/2, where n is the number of different drugs.)
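To make the combinatorics concrete, here is a small Python sketch, using the five drugs from James’s cocktail above, that enumerates every pair:

    from itertools import combinations

    # The five drugs Carlat prescribed James
    drugs = ["Celexa", "Ativan", "Ambien", "Provigil", "Viagra"]

    # Every unordered pair is a potential interaction: n(n-1)/2 of them
    pairs = list(combinations(drugs, 2))
    print(len(pairs))  # 10
    for a, b in pairs:
        print(f"{a} + {b}")

And pairs are only the beginning: with five drugs there are also ten three-drug combinations, five four-drug combinations, and the full five-drug combination to consider.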

Drug interactions aren’t the only thing to worry about. There is also the question of dosage. Maybe Drug A is causing problems because its dose is too high. But with the drug interactions and the effects of Drugs B and C, it’s not easy to identify Drug A’s dose as the problem. Also, drugs can have long-term effects that differ from their short-term effects. If a patient is on three different drugs and suddenly develops severe anxiety, how does one know whether the anxiety is a long-term effect of one of the drugs? More likely, the doctor will add another drug to the regimen, or increase the dose of the anti-anxiety pill. The better solution would be to reduce or eliminate the drug causing the reaction, but that drug is impossible to identify because the patient is on too many drugs.

I emphasize these problems with drug cocktails because of my own personal experience with psychiatric drugs. I avoided drug cocktails, usually taking no more than two drugs at a time. With two drugs, there is only one interaction to worry about. For a period of time, the two drugs I took were the antidepressants Anafranil and Zoloft. After two years on this combination, I had an attack of severe anxiety. Rather than allowing my psychiatrist to prescribe an anti-anxiety drug, which would have been the typical response to this problem, I discontinued the Zoloft. I had been on Anafranil for three years before I started Zoloft, and I didn’t think that Anafranil was the problem. The anxiety abated, and never returned to its previous level. If I had been taking three or more medications, it would have been harder to identify which drug was the problem.

Psychiatrists, including Carlat, have a knee-jerk reaction to add more drugs when there is a problem, rather than take drugs away. This is how drug cocktails come into being. Drug cocktails represent a type of off-label prescribing, because the individual drugs in the cocktail were never clinically tested or approved as part of a cocktail, only when taken individually. They are part of what’s wrong with psychiatry today, something that Carlat doesn’t acknowledge.

It’s unlikely that Carlat would accept Irving Kirsch’s thesis in his book The Emperor’s New Drugs (which I review here) that antidepressants are no better than placebos, or Robert Whitaker’s more radical thesis in his book Anatomy of an Epidemic (which I review here) that psychiatric drugs do more long-term harm than good. The reason is that Carlat has spent his entire career prescribing drugs. For him to acknowledge that they are placebos with dangerous side effects, or that they harm patients long-term, would be to admit that his life’s work was a failure. It’s like asking a district attorney who spent years prosecuting and convicting an innocent man to admit that he made a terrible error. Not many district attorneys can make that admission. Likewise, most psychiatrists like Carlat will not admit that they have made a terrible error and harmed patients by prescribing drugs. Thus Unhinged, while a call for change in psychiatry, doesn’t go far enough in questioning the efficacy of drugs. Such questioning usually comes from people outside the profession, such as a clinical psychologist (Irving Kirsch) who treats patients via psychotherapy, or a journalist (Robert Whitaker) who isn’t a mental health practitioner.

In conclusion, Unhinged is a well-written, honest account of systemic problems in psychiatry by someone with an insider’s perspective on the profession. Carlat does an excellent job of describing the drug-money corruption in psychiatry, the overmedicalized view of a phenomenon as complicated as mental illness, and the need for psychiatrists to know their patients better and provide some of them with therapy. He fails, however, to go far enough in questioning the efficacy of drugs and drug cocktails.

Monday, July 18, 2011

Are Tea Party Republicans Anarchists?

NY Times columnist Timothy Egan’s recent article describes radical Republicans as anarchists seeking to destroy the U.S. government. Their refusal to raise the debt limit, and their refusal to deal at all with Obama and the Democrats, threaten America with another financial crash and economic catastrophe. This “burn-it-all-down position” indicates that the Tea Party Republicans “didn’t go to Washington to find solutions; they went there to destroy the place.” Is Egan being fair in his description of radical Republicans?

Anarchy is the absence of government, a condition of complete freedom from government coercion. Anarchists are people who want to bring about this utopian condition. Tea Party Republicans claim to want to reduce the size of government, not eliminate it entirely. So by the strict dictionary definition, radical Republicans aren’t anarchists. They are extremists, however, and this extremism threatens the stability and health of our government and society. It also threatens to bring about a condition of anarchy.

Extremists share some common attributes, whether they are anarchists, libertarians, communists, Nazis, sixties radicals, or religious zealots:
  1. Promoting extreme change
  2. Wanting the extreme change now
  3. Being unwilling to compromise
  4. Promoting violence

I’ll go over these attributes and see whether the Tea Party Republicans fit them.
  1. Extreme change: The Tea Party Republicans claim to be conservative, wanting the federal government to return to its small pre-New Deal size. But since so many people have become dependent on government programs, and since we’ve had a large military, a progressive income tax, and widespread regulation of industry for decades, a return to the way things were 80 or more years ago is an extreme change from the status quo. Thus I conclude that radical Republicans promote extreme change.
  2. Wanting the extreme change now: Paul Ryan’s plan to overhaul Medicare doesn’t take effect until 2022, exempting people 55 and older from the change. Although it proposes a radical change to Medicare, it isn’t extremist in the sense of wanting the change now. The refusal to raise the debt limit in the absence of an immediate agreement on major cuts in government spending is, however, extremist: it threatens economic catastrophe unless changes are made now. While it’s true that changes are badly needed in Washington, threatening an economic crisis unless they happen immediately is extremist and very dangerous. So I conclude that, in the context of the debt crisis, radical Republicans want extreme change now.
  3. Being unwilling to compromise: Tea Party Republicans are refusing to compromise with Obama and the Democrats on the issue of tax increases. They will not accept any deal that involves tax increases, even if the spending cuts vastly outweigh the tax increases, as is the case in the Obama plan. This “my way or the highway” attitude is definitely extremist. Tea Party Republicans have exuded this type of extremism since they began their terms in office.
  4. Promoting violence: Tea Party advocates and representatives so far haven’t advocated violence or committed any violent acts. But the other three extremist characteristics that describe Tea Party Republicans make it highly likely that violence will eventually happen. If one of the two major American political parties wants extreme change, wants it now, and is unwilling to compromise, then the legislative process will grind to a halt. We can see that happening already. If legislators cannot accomplish anything, then the government won’t be able to do anything, even pay its bills. This will provoke a crisis.

While it’s possible that the crisis can be resolved peacefully, I wouldn’t bet on it. Historically, economic crises have sometimes led to violent revolutions in other countries. The French Revolution and the subsequent Reign of Terror were the product of a financial crisis. The Bolsheviks came to power amid the economic and political chaos of World War I.

A revolution can sometimes be a good thing, right? What about the American Revolution? The Tea Party Republicans may claim to be intellectual heirs of the American Revolution, but in reality they are far apart ideologically. Compared to other revolutions, the American Revolution was moderate. Before the revolution and after, Americans enjoyed the privileges of representative government, freedom of expression and religion, a capitalistic economic system, and the absence of any hereditary aristocracy. The biggest change was the withdrawal from the British Empire and the establishment of a new republican political system. While this was a substantial change, it didn’t affect the daily life of most people. There was none of the widespread expropriation of property or political murder that characterized the contemporaneous French Revolution.

The Tea Party Republicans may not provoke a revolution at all. Since their dominant ideology is anti-government, if they succeed in bringing the economy and the federal government to their knees, we may end up with a condition of anarchy. The best contemporary example is Iraq after the 2003 invasion. Like the Iraqi government, the American government may become so weakened and ineffective that terrorists and criminals will basically take over the country. Anarchy is an unstable condition, so it won’t last forever, but as we’ve seen in some countries (e.g. Somalia) this could last for years and possibly decades.

I think America has too much going for it (its wealth, history, political system, and social structures) to become another Iraq or Somalia. But there’s a real chance of some major political and economic disruption that will affect the lives of millions. If the Tea Party Republicans get their way, this disruption will happen soon.

Thursday, June 23, 2011

Genetically-Engineered Flies Prove That a Human Eye Protein Has Magnetoreceptive Characteristics

The journal Nature Communications published an article this week that supports the hypothesis of human magnetoreception. Steven Reppert and colleagues at the University of Massachusetts Medical School genetically engineered fruit flies (Drosophila) to produce a human eye protein called cryptochrome. This protein is known to be important in circadian clock regulation as well as in light-dependent magnetoreception. Normal Drosophila flies have a magnetoreceptive ability that can be behaviorally tested in the lab. Flies genetically engineered not to produce cryptochrome lose this ability. In this experiment, flies genetically engineered to produce the human (not fly) version of cryptochrome had magnetoreceptive ability similar to that of normal flies.

It’s important not to get too excited about this experiment. The study did not show that humans have magnetoreceptive ability. It only showed that human cryptochrome can confer magnetoreceptive ability on Drosophila. The fact that humans have this magnetoreceptive protein doesn’t imply that any people actually use it for navigational purposes. It could be a vestigial protein, an evolutionary leftover with no function, like the tailbone or appendix.

The authors of this study suggest that “[a]dditional research on magnetosensitivity in humans at the behavioural level...would be informative.” I agree, and offer my paper as something I hope will guide future research.

Here are some links to nontechnical articles that talk about this experiment:

http://www.bbc.co.uk/news/science-environment-13809144
http://www.redorbit.com/news/science/2068170/human_retina_can_sense_earths_magnetism/
http://www.geekosystem.com/flies-magnetic-fields-protein/
http://www.nytimes.com/2011/06/28/science/28magnet.html?_r=1

Thursday, June 16, 2011

Can Creative Genius Be Scientifically Studied? A Review of “Origins of Genius” by Dean Keith Simonton

It’s common to think of the genius of men like Shakespeare, Darwin, and Einstein as something impossible to understand or study, kind of like a miracle. But psychologists consider all facets of human behavior to be fair game for research, no matter how strange or unusual. The geniuses who wrote Hamlet, who painted the Sistine Chapel, and who discovered the theory of relativity are extreme examples of personality traits like creativity, intelligence, and literary/artistic/scientific ability that all people possess to some extent. Dean Keith Simonton is a psychologist who specializes in studying creativity and genius. In his book Origins of Genius: Darwinian Perspectives on Creativity, Simonton applies Darwin’s theory of evolution to the study of creative achievement.

Simonton begins by defining “genius”. A genius isn’t someone with a high IQ, although most people we consider geniuses were very intelligent. A genius is someone who accomplishes something that sets him apart from everyone else, i.e. someone who has achieved eminence. Beethoven is a musical genius because of his symphonies, piano concertos, and other works that continue to be performed almost two centuries after his death. If Beethoven hadn’t composed any music, he wouldn’t be considered a genius, no matter how much intelligence or ability he possessed. “The phrase unrecognized genius becomes an oxymoron” (p. 5).

A “creative genius” is someone who invents or discovers an original, adaptive idea or product. Originality is required because mere imitation isn’t creative. Adaptiveness is required because if the new mousetrap doesn’t work, it isn’t any better than what we have now. If the new theory doesn’t fit the facts, or has internal contradictions, it won’t help advance science. If no one wants to listen to the new symphony, then no matter how original it is, it won’t be considered a great musical composition.

Given these definitions, Simonton goes on to present the research, theories, and biographical information that have advanced our understanding of creative genius. He begins by examining the thought processes of geniuses. To show how Darwinism can be applied to the creative thought process, Simonton invokes Donald Campbell’s theory, which states that creative thinking involves three conditions:
  1. There exists some process that generates ideational variations, similar to genetic recombinations and mutations.
  2. These variations are subjected to a selection mechanism, similar to natural selection, but more cognitive or cultural in nature.
  3. There is some retention procedure that preserves and reproduces creative ideas, similar to genetic retention.
In simpler language, creative people come up with their ideas blindly (i.e. without knowing beforehand whether or not they will be successful or adaptive). They select the most promising ideas, and publish or otherwise share them with other people. The audience they share with further selects the creators’ ideas, rejecting some of them and accepting others. The ones that win out in the end, and become the acknowledged masterpieces, are a small fraction of the total number of ideas and products generated by the creators.

The more ideas, the greater the chance that one or more of them will become a masterpiece. The most successful geniuses are also the most prolific. Shakespeare, Beethoven, Newton, and others created (or discovered) many things that are now forgotten. The few things for which they are remembered are only a fraction of their total output.
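As a toy illustration of this variation-selection-retention logic (my own sketch, not Campbell’s or Simonton’s formalism), consider a simulation in which an idea’s quality is unknown until after it is generated, the creator publishes only the most promising fraction, and the audience canonizes only what clears a high bar:

    import random

    random.seed(42)

    def generate_ideas(n):
        """Blind variation: quality is random and unknown in advance."""
        return [random.gauss(0.0, 1.0) for _ in range(n)]

    def creator_selects(ideas, keep=0.2):
        """Selection, stage one: the creator publishes the best fraction."""
        return sorted(ideas, reverse=True)[: max(1, int(len(ideas) * keep))]

    def audience_retains(published, threshold=2.0):
        """Selection and retention, stage two: only works above a high bar survive."""
        return [q for q in published if q > threshold]

    for n in (10, 100, 1000):
        kept = audience_retains(creator_selects(generate_ideas(n)))
        print(f"{n:4d} ideas -> {len(kept)} retained masterpiece(s)")

The more ideas generated, the more likely at least one clears the audience’s bar, which is exactly the prolific-genius pattern described above.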

How do creative people think? Like other people, they use imagery, intuition, and insight, along with analytical thinking. The main difference between creative and ordinary people is that creative people utilize remote association and divergent thinking. Remote association is making connections between separate ideas. A historical example of remote association is Einstein’s connecting relative space and time with the constancy of the speed of light. Before Einstein, scientists had considered these separate subjects.

Divergent thinking is the ability to come up with original ideas, solutions, and responses to questions. It is distinguished from convergent thinking, which is coming up with the expected solution or response, e.g. the correct answer on a standardized test. Some fields require more divergent thinking than others. For example, artists, composers, and creative writers usually think more divergently than scientists.

Creative people also have different personalities from most other people. They have broad interests, they’re open to novel and ambiguous stimuli, they have trouble focusing on any one thing, they are cognitively and behaviorally flexible, they are more likely to be introverted, and they are nonconformists. While not all creative geniuses fit this personality profile exactly, most exhibit at least some of these traits.

Creative people are also more likely to have psychiatric problems than the average person. There are many examples of depressed, bipolar, alcoholic, suicidal, and psychotic creative geniuses. Famous names with severe disorders include Vincent Van Gogh, Robert Schumann, Ernest Hemingway, Peter Tchaikovsky, and Charles Darwin. It’s important to understand, however, that most creative geniuses don’t exhibit high levels of psychopathology, or they would never have been able to create anything. Their symptoms tend to be midway between normal and abnormal.

Not surprisingly, geniuses usually grow up in enriched family environments. Their parents tend to have higher-than-average levels of formal education, with at least one parent working in an intellectual profession. The parents value learning and provide their children with an ample supply of books, magazines, and games. Visits to museums, exhibits, galleries, libraries, and other places that stimulate intellectual development are common.

What’s unusual about geniuses’ childhood experiences is that they frequently face adversity. Geniuses often have physical or sensory disabilities, or chronic childhood illnesses. Sometimes their parents become bankrupt or impoverished. Many geniuses have experienced the death of one or both parents at an early age. These adverse experiences may set the young future genius on a developmental path different from most of his peers.

While geniuses are all highly intelligent, many of them lack advanced degrees or did poorly in school. Artistic creators generally have less schooling than scientific creators. Formal schooling suppresses creativity in favor of memorization and conformity, so it makes sense that many geniuses would dislike school. Since education and training are required to acquire expertise in one’s profession, creators who drop out of school are invariably autodidacts. Many geniuses have one or more mentors who help compensate for the lack of formal education.

Like all complex human behaviors, creative achievement is a product of both nature and nurture. On the nurture side, as I mentioned above, creative people tend to grow up in enriched home environments and to experience more childhood trauma than usual. But many people grow up in enriched environments and have adverse childhood experiences, and only a tiny fraction become geniuses. So there must be a unique genetic predisposition that interacts with these experiences to produce a genius. The technical term for this is “emergenesis”: a combination of multiple genetic components, each of which must be inherited for the trait to appear. An emergenic trait is shared by identical twins, but doesn’t run in families. Evidence that creative genius is emergenic comes from family studies of geniuses. Many of the greatest geniuses in history, including Newton, Shakespeare, Beethoven, and Michelangelo, have no relatives of distinction.

Unlike intelligence, which is normally distributed in the population (i.e. has a bell-shaped curve), creative achievement is highly skewed. Two mathematical laws describe this skewed distribution. The Price law states that if k is the number of active creators in a field, then √k of those creators contribute about half of the products in the field. For example, if there are 100 architects in total, then 10 of them are responsible for half of all building designs. The Lotka law states that the number of creators who contribute n products is inversely proportional to n². For example, the number of creators who contribute 10 products is c/100, where c is a constant. The number who contribute 20 products (twice as many) is c/400, only a quarter of the number who contribute 10 products. Go up to 50 products (five times the original number), and the count drops to c/2500, or 1/25 of the number who contribute 10 products.

These laws hold whether one considers all works created or just the works that have stood the test of time. For example, all of the works that make up the standard repertoire of classical music (i.e. works still performed today) were composed by about 250 composers. The square root of this number is about 16, which is the number of composers accounting for half of all the pieces performed today. Mozart alone composed 6.1% of the standard repertoire, slightly more than the combined total of the bottom 150 composers!
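Here is a quick sketch checking these numbers (the laws and the figures are from the book; the code is just my arithmetic check):

    import math

    def price_elite(k):
        """Price law: about sqrt(k) creators produce half of a field's output."""
        return round(math.sqrt(k))

    print(price_elite(100))   # the architects example -> 10
    print(price_elite(250))   # classical composers -> 16

    def lotka_count(n, c=1.0):
        """Lotka law: the number of creators contributing n products is c / n**2."""
        return c / n ** 2

    base = lotka_count(10)
    print(lotka_count(20) / base)   # 0.25 -> a quarter as many creators
    print(lotka_count(50) / base)   # 0.04 -> 1/25 as many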

The level of creative achievement varies across a creator’s career. At the beginning of his career, his output gradually increases. At some point he reaches a peak, after which his output declines. This trajectory is based on career age rather than chronological age. Late bloomers will have a later peak. Some disciplines, such as mathematics and poetry, have a much earlier peak than other disciplines, such as geology and philosophy.
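In his technical work, Simonton has modeled this rise-and-decline trajectory as a difference of two exponentials in career age. The sketch below uses that general shape; the constants are illustrative choices of mine, not numbers from the book:

    import math

    def creative_output(t, c=100.0, a=0.04, b=0.05):
        """Single-peak career trajectory: output rises, peaks, then declines.
        t is career age in years; a and b are illustrative rate constants."""
        return c * (math.exp(-a * t) - math.exp(-b * t))

    for t in range(0, 45, 5):
        print(f"career year {t:2d}: " + "#" * int(creative_output(t) * 2))

With these constants the peak falls around career year 20 to 25; raising both rate constants shifts the peak earlier, one way of capturing fields like mathematics and poetry that peak sooner.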

One of the most interesting and difficult-to-explain aspects of creative genius is that achievement isn’t constant over place and time. Genius tends to cluster in some places at certain times. When these places experience a burst of creative achievement, they are said to have experienced a “golden age”. Eventually, and for reasons not fully understood, these golden ages decline to silver ages, and finally to dark ages. Examples of golden ages of achievement include Ancient Greece, the European Renaissance and Scientific Revolution, and similar (but less golden) ages in China, Japan, and other places.

It’s easy to understand how the presence of genius in one’s time can help spur additional genius. This can occur by means of immediate predecessors and contemporaries. “[T]he number of eminent creators in one generation is a positive function of the number in the preceding generation who are active in the same or affiliated domains” (p. 206). These eminent predecessors serve both as role models and mentors. Charles Darwin had Alexander Monro and Adam Sedgwick as his teachers, and was influenced by Hutton, Lamarck, Sprengel, Malthus, Lyell, and others. Beethoven studied under Haydn, and was influenced by Mozart.

Eminent scientists and artists are more likely to emerge during eras in which they can form relationships with contemporaries in their fields. These contemporaries don’t have to be at the same level as the geniuses. “No matter what the domain of achievement, genius of the highest quality tends to be contemporaneous with genius of a lesser rank, and even with the more obscure also-rans and nonentities” (p. 208). Geniuses need audiences, professional contacts, and social networks to help motivate them to create. The idea of the lone-wolf genius, like Howard Roark in the novel The Fountainhead, doesn’t correspond to historical fact. While there were likely potential geniuses during the European Dark Ages, the barren cultural milieu prevented them from achieving anything.

Geniuses tend to live during eras of high intellectual receptiveness, ethnic diversity, and political openness. Some level of basic political freedom and economic prosperity is required for creative flourishing. It must be noted, however, that most creative achievement occurred under regimes which by today’s standards would be considered autocratic and unfree. Remember that Socrates, who lived during the Athenian creative golden age, was executed for his beliefs. Galileo, who lived during the European scientific revolution, was persecuted by the Catholic Church.

The division of labor is an important sociological factor that contributed to the emergence of creative genius. If every person in a society is engaged in physical labor such as hunting, gathering, homemaking, or farming, no one will have any time to create. With increased population comes specialization and division of labor, including jobs for creative people such as painters, sculptors, architects, musicians, and engineers. The European scientific revolution gave birth to another such profession, the scientist, offering men further opportunity for creative achievement. Population size is a necessary but not sufficient condition for the emergence of creative genius, as there were many large societies (e.g. Europe during the Dark Ages) that had very little creative achievement, and smaller societies (e.g. classical Athens) that had a great deal.

Origins of Genius is strong in its understanding and analysis of individual genius. Simonton does an excellent job of explaining creative thought processes, presenting the personality characteristics that distinguish geniuses from everyone else, showing how geniuses differ developmentally and genetically, and describing the mathematical laws that govern the skewed output of creative achievement.

The book’s main weaknesses have to do with group phenomena, including clustering of genius, and gender differences in genius. While Simonton can explain why having creative achievers in one’s own and preceding generations can help spur geniuses to accomplishment, he can’t explain what starts and ends this process. Since genius occurs in golden ages that come and go, something must occur to start the process of creative achievement, and something must end it. Simonton has little to offer to explain this.

Another problem is that Simonton completely ignores the cultural and creative dark ages we are currently living in. As I explain in an earlier blog post, dullness has become triumphant in our times. In the arts, modernism has completely destroyed all the great European traditions, and we have a reign of mediocrity. This isn’t just my personal opinion. I can demonstrate it by asking who the great contemporary artists, writers, and composers are. Various people will provide different responses, but none of the names mentioned will have the recognition that the great artists of the past have. As Simonton mentions in his book, an unrecognized genius is an oxymoron. A genius who is unrecognized in his own time will most likely remain unrecognized in future generations.

In the sciences, there is some low-level creative group achievement, but no individual geniuses on the level of Newton, Darwin, or Einstein. As proof, I’ll offer the same name recognition test that I did for artists. While every year a handful of scientists win Nobel Prizes, they are always people no one (outside their fields) has heard of. These people, while accomplished, are not geniuses.

One of the problems with both the arts and sciences is that both have become so highly professionalized and esoteric that people can gain distinction in a field, yet be unknown outside it. This is partly due to the disintegration and balkanization of our culture, which has various causes, including modernism (in the arts) and technology (e.g. the Internet). But it is also partly due to declining accomplishment. If someone accomplished something at the level of Einstein, Michelangelo, or Beethoven, most educated people would have heard of him. This is as true today as it was centuries ago.

One historical fact that Simonton tries unsuccessfully to explain on environmental grounds is the almost complete lack of female creative geniuses. “In Western civilization, for example, women make up only around 3% of the most illustrious figures of history. And many of these females [e.g. monarchs] entered the records in part by birthright or marriage” (p. 215). Almost all geniuses in history have been male. Some fields, such as mathematics, physics, and music composition, are completely dominated by men. The one field where women have a minor presence is literature.

Simonton rejects biological explanations such as lower IQ, less variable IQ, and biologically-based personality differences. He offers more conventional environmental explanations such as child-rearing practices, the costs of marriage and family to female achievement, sex discrimination, and philosophical/cultural hostility to female employment and achievement. He claims that female achievement in literature is due solely to the fact that creative writing requires little start-up cost or overhead. But he ignores that publication (including book marketing) does require overhead. Many women had access to musical instruments in their homes. Why didn’t they use these instruments to compose music, as men did? Why is the female lack of creative achievement universal across societies, including cultures that had little interaction with each other (e.g. Eastern and Western societies before the modern era)? Why, in contemporary America and Europe, where all these environmental hindrances to female creative achievement have been removed, do we still see so little female achievement?

In conclusion, Origins of Genius is an excellent summary of scientific knowledge about creative genius at the beginning of the new millennium. It provides a wealth of detailed facts and analyses that illuminate our understanding of this strange and wondrous phenomenon. Its main weakness is a lack of understanding of what environmental factors bring about creative golden ages and dark ages. This weakness makes it unhelpful in trying to engender new works of genius, i.e. helping bring about a new golden age of creativity.

Wednesday, May 4, 2011

How Greed and Ideology are Endangering America’s Financial and Mental Health

I just watched the excellent documentary Inside Job about the financial crisis. Its main point is that the 2008 crisis and subsequent Great Recession were a consequence of deregulation of the financial sector. This deregulation (primarily in the United States, but also in other countries like Iceland and Ireland) led to an explosion of trading in risky investments like CDO’s, which in turn led to the housing bubble and crash. The tragic thing about this crash is that if we had kept the regulations put in place after the 1929 crash and Great Depression, we would never have had the recent crisis. Protections such as the separation of savings and investment banks, restrictions on leverage, and enforcement of securities and fraud laws by the SEC and other regulatory agencies were stripped away in recent decades. Also, the financial services industry created new exotic “financial weapons of mass destruction” and successfully lobbied to keep them unregulated.

How did this unstable and risky situation develop? Inside Job argues that a combination of greed and free market ideology led to the housing bubble and bust. The greed came about from the enormous profits generated by the deregulated post-1980 financial services industry. This money flowed to traders in the form of multi-million dollar bonuses, to politicians in the form of lobbying and campaign contributions, and to academic economists in the form of consulting fees. The free market ideology, promoted by right-wing think tanks, politicians, and academic economists, provided a rationalization for the government to deregulate the financial services industry. This ideology argued that allowing an unfettered financial services industry to generate incredible amounts of wealth, and to make riskier and riskier investments, was good for the economy. This attitude is best summarized by Gordon Gekko’s memorable line in the first Wall Street movie: "Greed, for lack of a better word, is good."

While the free market ideology is seductive, and I admit that I was once an enthusiastic follower, it clearly failed as a rationale for financial services deregulation. While it’s easy to blame the government for everything bad that happens, and the government did promote home ownership for people who should have been renting, the leaders of the Wall Street investment banks were the main culprits. They knowingly and deliberately invested their clients’ savings in risky mortgage securities, sometimes while simultaneously betting against those securities (i.e. betting that their own clients would lose money). It was Wall Street money that bid up housing prices, that promoted subprime mortgages, that paid the ratings agencies to give ridiculously high ratings to what was basically junk, and that led to the worst financial crash and recession since World War II. Some foreign banks were also to blame, as they did the same things the Wall Street banks did; Ireland and Iceland are two egregious examples of investment banks destroying their economies. The executives in charge of the investment banks got greedy, and even after their companies lost billions and needed to be bailed out, they got to keep their multi-million dollar bonuses and salaries. The most disgusting thing is that not one of them has been prosecuted (in the U.S.) for fraud or securities violations.

When an ideology fails in the real world, people who believe in it have two options. One is to accept reality, and stop believing in the ideology. The other is to continue believing in the ideology, and abandon reality. Most Democrats, who in the past (especially in the Clinton administration) joined with Republicans in eagerly stripping away financial services regulation, realized after the crash that they had made a grave error. When they were in charge of Congress, they passed the Dodd-Frank bill to improve regulation of Wall Street. These regulations, while inadequate, were better than nothing. They are now law, but as Paul Krugman writes in an article, they are in danger of not being enforced. The reason is that Republicans have chosen to keep their free market ideology, while abandoning reality. Republicans are now in charge of the House, and Republicans don’t believe in regulation in any form. Since they control the funding, Republicans can prevent the SEC and other agencies from doing any regulating and investigating. Wall Street will then be free to create more bubbles and more crises.

Why do people continue believing in an ideology when it has clearly failed? To answer this, consider the historical example of Communism. Within a few years after the 1917 Bolshevik revolution, it was clear that Communism was failing in its ability to provide goods and services that people wanted. That’s when the scapegoating started. It wasn’t Communism’s fault, it was the fault of the capitalists (or the kulaks, or the imperialists, or the fascists, etc.). That scapegoating continued for decades, until finally no one believed the ideology any more, and all that was left was greed and corruption.

In the case of Republicans, the scapegoat is the government. Wall Street didn’t do anything wrong; it was government intervention in the economy that caused the crash and recession. The fact that financial services companies made many risky investments and bad decisions, not just in the U.S. but also overseas, is likewise blamed on U.S. government intervention: if the government hadn’t provided guarantees and bailouts, the story goes, the companies wouldn’t have made these risky investments.

In the context of a mixed economy, with an internationally-linked market, the government needs to regulate the financial markets. If Republicans get their way, and repeal or refuse to enforce the new regulations, another crash will happen. We’ll then be in the same situation in which the choice will be government bailouts or a second Great Depression. Blaming the government for all our problems is similar to the Communists blaming the capitalists for all their problems. The free market ideology has failed the reality test vis-à-vis financial regulation, and continuing to adhere to it is a sign of either delusion or greed. Regarding the latter motive, Wall Street continues to utilize its vast wealth to lobby politicians to weaken and gut regulations.

Now let me turn to mental health and psychiatry. While watching Inside Job, I was struck by the parallels between the financial services industry and the psychiatric-pharmaceutical industry. While the free market ideology is used by the financial services industry to justify deregulation, the chemical imbalance ideology is used by psychiatrists to justify prescribing drugs to patients. The chemical imbalance ideology states that schizophrenia is caused by too much of the neurotransmitter dopamine, and that depression is caused by too little serotonin. These imbalances can be treated by antipsychotic and antidepressant drugs, respectively. Patients who take their drugs supposedly restore balance in their brains, become symptom free, and function better than patients who are unwilling or unable to take drugs.

This chemical imbalance ideology doesn’t fit the facts. As Robert Whitaker describes in his book Anatomy of an Epidemic (which I reviewed in a previous blog post), after many millions of dollars of research money and enormous time and effort, the theory remains unverified. Scientists haven’t made much progress in understanding the cause or pathophysiology of mental illness. Diagnosis is still based on symptoms, due to a lack of any reliable or valid lab tests. Drugs that seem to help patients in the short term have much more problematic long-term outcomes. As newer drugs began to be prescribed more frequently, patients seemed to function worse than ever before, many of them becoming permanently disabled and unable to work.

Just as the free market true believers identify the government as an all-encompassing scapegoat, the chemical imbalance true believers come up with their own scapegoat: the disease. The reason why juvenile bipolar disorder prevalence skyrocketed, and why so many children were being diagnosed, treated with drugs, and becoming permanently disabled, wasn’t the drugs they were being prescribed. It was something else in the environment causing a psychiatric epidemic. It’s almost comical reading Whitaker’s description of how bipolar disorder was “discovered” in children. Before psychiatric drugs were widely used, children never got bipolar disorder. The fact that stimulant and antidepressant drugs induce manic episodes, and that many more children were taking stimulants and antidepressants than ever before, was conveniently ignored. Since many children taking antidepressant drugs became bipolar, some researchers argued that the drugs were an effective diagnostic tool, unmasking the bipolar disorder hidden underneath. Diagnostic boundaries for juvenile bipolar disorder were expanded to include irritable and antisocial children, and these children were given drug cocktails that led to functional impairment and serious physical and mental side effects.

Another similarity between the financial services industry and the psychiatric-pharmaceutical industry is greed. When the ideology has cracks, fill in the gaps with cash. Just as academic economists receive thousands of dollars from the financial services industry in consulting fees, academic psychiatrists receive thousands of dollars from the pharmaceutical industry. This money provides an incentive for economists to spout free-market/deregulation dogma, and psychiatrists to promote drugs.

This ideology-greed connection becomes self-perpetuating. Academic economists promote deregulation and free markets, which leads to more profits for Wall Street, which means more consulting fees, which means more economists promote deregulation, etc. Academic psychiatrists promote the chemical imbalance theory and drug treatments, which means more profits for the pharmaceutical industry, which means more consulting fees for the psychiatrists, which means more psychiatrists promote drugs, etc.

What can be done to emerge from these vicious circles of greed and failed ideology? Reform from within isn’t an option. Both the financial services and psychiatric-pharmaceutical industries are too hopelessly corrupt to reform or regulate themselves. The only institution powerful enough to take on the economic and ideological might of these two industries is the federal government. But the government is itself corrupted by Wall Street and Big Pharma money. The government must be spurred into action by grassroots advocacy groups.

Any advocacy group needs to have a clear, specific goal in mind. The American Tea Party had a vague goal of opposing big government and bailouts. It has now been hijacked by libertarian/free market extremists.

The group that takes on Wall Street should have these goals: Phase 1 should be to pressure the government to fully fund all the regulatory agencies involved in enforcing Dodd-Frank. In addition, the government should prosecute bank and financial executives who committed fraud and securities violations during the housing boom and bust. Phase 2 should be to make even more ambitious reforms to curb the power, influence, and money of the financial services industry.

The group that takes on Big Pharma should begin with pressuring for congressional hearings. Robert Whitaker and other people who criticize the psychiatric-pharmaceutical industry should be allowed to speak, along with academic psychiatrists, drug company representatives, and parents and patients. If it turns out that there’s not enough evidence to demonstrate that drugs do long-term good (the burden of proof is on the pro-drug people to show that they work), then Phase 2 should be a major overhaul of psychiatric drug regulation. This can include banning drug advertising, banning off-label prescribing, banning prescribing the drugs to children, and extending clinical trials to several years (from the current 6 weeks).

We Got Bin Laden (Finally)!

Congratulations to everyone responsible for taking down Osama bin Laden. After all the bad news we’ve been receiving, it’s nice to hear about things going right for a change. Praise is due to President Obama, his national security advisors, the CIA, the Navy SEALs who conducted the raid, and others involved in the operation. Obama made a smart decision not to delegate this to the corrupt and incompetent Pakistani government. Our President took a calculated risk, and it paid off.

Sunday, April 24, 2011

Did America’s Economy Implode Because the Rich Are Too Wealthy? A Review of “Aftershock” by Robert B. Reich

The Great Recession has forced Americans to confront political and economic issues that have been ignored for decades. Since around 1980, middle class real wages have stagnated, both personal and government debt have increased to unsustainable levels, infrastructure and education have declined—and both Democrats and Republicans are to blame. The optimism, pragmatism, “can-do” attitude, and belief in continual progress that characterized America for most of its history (with some notable exceptions, including the Civil War and Great Depression) have been shattered. Americans have grown increasingly pessimistic that their children and grandchildren can achieve the “American Dream.”

Robert B. Reich, former secretary of labor under Bill Clinton, writes about the causes of America’s economic problems in his book Aftershock: The Next Economy and America’s Future. His central message is that the concentration of wealth at the top of the income ladder results in insufficient domestic demand for products and services, which leads to economic stagnation, anemic recoveries, and deep recessions.

There’s no question that income has become more concentrated over the last 30 years. In the late 1970’s, the richest 1% of the country took in less than 9% of the nation’s total income. By 2007, the richest 1% took in 23.5% of the nation’s income, more than double their earlier share. Real wages of a typical American worker, however, stagnated during the same period. If the gains of the American economy had been more equally distributed during the last 30 years, a typical person would be making 60% more now than he did then.

The problem with the rich becoming too wealthy is that they don’t spend enough. They live too modestly compared to what they can afford. The overall demand for goods and services shrinks because the rich invest most of their income. If more of the nation’s wealth went to the middle class, who spend a greater percentage of their income than the wealthy do, demand would increase, businesses would expand and hire more, and the economy would grow.
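To make the mechanism concrete, here is a minimal back-of-envelope sketch in Python; the dollar amount and spending rates are made-up illustrations of the marginal-propensity-to-consume argument, not figures from Reich’s book:

```python
# Hypothetical illustration: shifting income toward people who spend a
# larger share of it raises total consumer demand.
shift = 100e9             # $100 billion of income shifted (assumed)
rich_spend_rate = 0.30    # fraction of income the rich spend (assumed)
middle_spend_rate = 0.90  # fraction the middle class spends (assumed)

extra_demand = shift * (middle_spend_rate - rich_spend_rate)
print(f"Extra annual consumer demand: ${extra_demand / 1e9:.0f} billion")
# -> Extra annual consumer demand: $60 billion
```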

Reich writes approvingly of the period of the “Great Prosperity”, the years 1947 through 1975, in which America implemented a “basic bargain”: workers made enough to buy what was produced, resulting in complementary mass production and mass consumption. Wages of lower-income Americans grew faster than those at or near the top, doubling over these years. Productivity also doubled, giving the lie to those who argued that large inequality was needed for economic growth. It’s true that high income and wealth are incentives for entrepreneurial and executive achievement, but how high do they have to be? Even though CEO’s made only 30 times the typical worker’s salary during the Great Prosperity (as opposed to 300 times today), they seemed motivated enough back then to do their jobs well.

The Great Prosperity also included powerful unions, generous health and pension benefits, minimum wage and overtime laws, unemployment insurance, Social Security, Medicare and Medicaid (in 1965), and interest deduction on mortgages. The G.I. Bill and expansion of the public university system helped make college affordable for the middle class. The interstate highway system became the most ambitious public works program in American history. The Cold War resulted in continued high military spending, which (via the “military-industrial complex”) led to the invention of the transistor, laser, computer, jet engine, and Internet. Unlike today, there was a societal consensus that this high level of government spending had to be paid for. The top marginal income tax rates in the 1950’s were 91%, and were 77% as late as 1969.

After the late 1970’s, the Great Prosperity ended and America began a period of increasing inequality. Reich recognizes that both globalization and automation led to the loss of high-paying manufacturing jobs. New jobs that were created, mainly in the service sector, didn’t pay as well as the jobs lost. At the same time that the middle and working classes had to accept lower wages, however, business executives and Wall Street traders saw their incomes skyrocket.

Government could have enforced the basic bargain, and reversed the trends leading to more wealth concentration at the top. For example, it could have given employees more bargaining power to get higher wages, especially in industries sheltered from foreign competition. It could have enlarged safety nets, financed Medicare for all, and forced industries laying off large numbers of workers at once to pay a year’s severance and retrain those workers for new jobs. It could have raised taxes on the rich and cut them for the poor. Instead, it did the opposite. Under the influence of libertarian, free-market economists, the government deregulated, privatized, cut taxes, and shredded the safety nets. It allowed companies to bust unions, slash jobs and wages, cut benefits, and move factories and jobs overseas. Government deregulated Wall Street while insuring it against major losses. This changed the finance industry from the servant to the master of American industry. Wall Street demanded short-term profits over long-term growth. “Between 1997 and 2007, the finance sector became the fastest-growing part of the U.S. economy” (p. 56). Finance and insurance companies’ share of American corporate profits increased from 10% to 40%.

It’s true that confidence in government began to decline toward the end of the Great Prosperity, with events such as the Vietnam War, Watergate, oil shortages and double-digit inflation. This confidence has continued to decline since. Reich attributes this decline to deterioration of government services and explosion of deficits, both of which were caused by the tax cuts. Also important was the increasing appeal of free-market dogma, which came through think tanks, books, media and ads that were largely financed by the rich and powerful.

Middle class Americans developed three “coping mechanisms” to help mitigate the effects of their declining economic status. These mechanisms included:
  1. Women moved into paid work. In 1966, 20% of mothers with young children had jobs outside the home. By the late 1990’s, this had risen to 60% of mothers. While women with college degrees were able to land high-paying jobs, most women worked low-wage jobs to try to prop up stagnant and declining family income.
  2. Everyone worked longer hours. By the 2000’s the typical American family worked 500 hours (about 12 weeks) more per year than it had in 1979.
  3. People saved less and borrowed more. During the Great Prosperity, the American middle class saved about 9% of their after-tax income each year, and their debt averaged 50 to 55% of after-tax income. The savings rate declined to 2.6% in 2008, while debt exploded to 138% of after-tax income. Reich sees this indebtedness not as a moral failing, but as a way for families to try to maintain their previous lifestyle. When the debt bubble burst during the Great Recession, people were unable to continue borrowing.

The wealth inequality today is similar to what it was in 1928, just before the Great Depression, and Reich draws parallels between the two eras. There were high levels of private debt, especially mortgage debt, in both the 1920’s and 2000’s. Richer Americans speculated on a limited range of assets, resulting in stock market and real estate bubbles. The 1920’s even witnessed a Florida real estate boom, much like the recent one.

After the Great Recession began in 2007, the U.S. government kept interest rates near zero, bailed out the banks, and printed a great deal of money. This helped avert a second Great Depression, but left the federal government with vastly increased debt and deficits. Unlike during the Great Depression, when the Roosevelt administration created a new economic order through its New Deal policies, the post-Great Recession Obama administration has done very little fundamental reform. The problem of widening inequality will likely continue. Reich predicts many years of high unemployment, middle class economic insecurity, and economic stagnation. From this Great Recession “aftershock”, we’ll see either a major political backlash against both big business and government, or large-scale reforms.

Some reforms that Reich would like to see implemented include:
  • A reverse income tax that supplements the wages of the middle class.
  • Higher marginal tax rates on the wealthy.
  • A carbon tax, to promote the development of “green” technologies.
  • A redesign of the unemployment system, making it a “reemployment system” that smooths the transition to a new job.
  • Replacing spending on public schools with vouchers based on family income, which would force wealthy suburban schools to take in lower and middle-income students.
  • Making tuition free in all public colleges and universities, financed by requiring all graduates to pay a fixed percentage of their taxable income for the first 10 years of full-time work.
  • An expansion of Medicare to all Americans (replacing the current private insurance system).
  • Expanding government spending on public goods such as transportation, parks, recreational facilities, museums, and libraries, making them free for all.
  • Strengthening campaign-finance laws, funding elections publicly, and limiting issue advertising.

Reich’s agenda for change makes sense in a nation in which there’s a consensus for government activism, high taxation, and social programs, in which people respect and admire politicians and bureaucrats, and in which people are confident in the government’s competence and ability to follow through on its promises. Such a nation has nothing in common with contemporary America. Reich’s laundry list for change has no chance of passage.

Reich’s views are similar to those of Nobel Prize-winning Princeton economist and New York Times columnist Paul Krugman. The Reich/Krugman viewpoint can be classified as “Liberal Democrat” and “Keynesian”. Unlike Thomas Friedman, who focuses on competitiveness and tries to appeal to both ends of the political spectrum (see my review of his book The World is Flat), Reich and Krugman focus exclusively on the demand side of the economy, and their appeal is mainly to liberals.

Was the Great Recession caused by the rich having too much wealth? The speculative real-estate boom that led to the recession was a world-wide phenomenon, and lax regulation certainly contributed to it. I’m not sure that American wealth inequality was a direct cause, although it may have contributed as well: the fact that the rich invest most of their money has led to more speculation and greater influence and power for Wall Street. Note that European countries have much less wealth inequality than America, yet they also suffered from the recession.

I don’t agree with Reich’s point that the economy would be doing better if there were less inequality. The American economy grew enormously during the (relatively) laissez-faire period of 1870-1928. This graph shows how real per capita GNP in the U.S. more than doubled. During that period, there were significant inventions and technological advances that improved the standard of living of many Americans, including electric power, electric lights, bicycles, telephones, automobiles, airplanes, widespread use of railroads and steamboats, medical advances, improved public sanitation, central heat, air conditioning, appliances, and radio. While there were few government social programs, and little military spending, America emerged from the ashes of its Civil War to become the world’s premier economic power. So I don’t agree that government programs are necessary for a robust, vibrant economy, or for a higher standard of living for the majority of people.

On the other hand, contrary to what many conservatives and libertarians say, a mixed economy with substantial military and social welfare spending, along with high marginal taxes, isn’t inimical to economic progress. Reich and Krugman are correct that the Great Prosperity (1947-1975) period in America was a time of enormous economic and technological progress, along with an improved standard of living for the majority of people.

Both liberals and conservatives agree that the last 30 years have been a period of economic decline in America. They’re in sharp disagreement, however, over the causes. Liberals such as Reich and Krugman argue that the decline was due to tax cuts and subsequently reduced government investment in social programs and infrastructure, along with deregulation, leading to wage stagnation among the middle class, concentration of wealth at the top, and increased deficits and debt. Conservatives argue that government social welfare and entitlement spending, regulation, and litigation exploded, leading to increased taxes, deficits, debt, and economic decline.

Both sides are right. Liberals are correct that the tax cuts have largely benefited the wealthy. Infrastructure has been neglected. Deregulation led to the housing boom and bust. Conservatives are correct that social welfare and entitlement spending has increased, along with some types of regulation and litigation (especially connected to employment, discrimination, and civil rights).

I think the problem isn’t the level of government spending or taxation, but fundamental disagreements between the two parties that have led to gridlock. Both periods of growth and progress, the laissez-faire one of 1870-1928 and the mixed economy one of 1947-1975, were times of general consensus. While there were people and groups who opposed the consensus, they were an insignificant minority. During both periods, there was a long-term vision, and large investments in infrastructure and education. The infrastructure and education investments during the laissez-faire period were largely private; during the mixed economy period they were mostly public and military. The private versus public nature of these investments is less important than the fact that they got done. They haven’t been getting done for the last 30 years—existing American infrastructure and education have not been well maintained, and new investment is virtually nonexistent.

The Great Depression and World War II eras (1929-1945) were a transition time from a laissez-faire to a mixed economy. Reich argues in this book that the Great Depression was caused by income inequality, leading to inadequate demand. Conservatives would counter that the Great Depression was caused by government meddling in the economy, both before (income tax, antitrust regulation, the Federal Reserve) and after (New Deal regulations and programs, increased taxation). I would argue that the laissez-faire consensus was lost during the Great Depression, and that this loss of consensus accounts for the length and severity of the depression. The crisis of World War II created a new consensus for big government, and led to the mixed economy Great Prosperity of 1947-1975. The combination of the transmutation of liberalism from a strictly economic philosophy to a combined economic-social philosophy in the 1960’s and 1970’s, and the religious and libertarian conservative backlash that began with the Reagan administration in the 1980’s, led to the demise of the Great Prosperity consensus.

As further evidence of the importance of consensus to economic progress, consider contemporary China. It’s difficult to characterize the political-economic situation there—part Communist, part mixed-economy, part laissez-faire—but it’s undeniable that whatever they are doing is working. The consensus there is produced partly by a dictatorship that tolerates no dissent, but this dictatorship, unlike the Maoist one, isn’t based solely on fear and ideology. The people in China are seeing their standard of living improve—millions have achieved middle class status, and the general progress there is perhaps the greatest economic miracle of our time. Compared to how other countries have fared in the Great Recession, especially America, China has weathered the downturn well.

This Chinese success story directly contradicts both the liberal and conservative perspectives. As Reich notes in his book, most of Chinese government spending is devoted to production, not consumption. Chinese people have virtually no safety net—spending on social services is about 6% of the economy, compared to an average of 25% in most developed nations. China isn’t following the Reich-Krugman vision of a demand-oriented, socially-generous nation. It also isn’t following the conservative/libertarian vision of “free minds and free markets.” The basic political system is a Communist dictatorship that arrests and jails dissidents, limits families to one child, and suppresses religious expression. Contemporary China bears little resemblance to past or present America. Yet its economy is one of the fastest-growing in the world. This chart shows China’s explosive GDP growth since 1990, 5 times America’s growth during the same period.

Since neither Reich’s liberal agenda, nor a libertarian/conservative agenda can be enacted in the current political climate of gridlock, what can be done to achieve a new consensus that will allow America to grow and prosper again? One possibility is for centrists like Thomas Friedman, David Brooks, Fareed Zakaria, and others to bring the two sides together and come up with a workable compromise. The problem is that the two sides have been trying to compromise for 30 years, and the compromises haven’t been working:
  • Conservatives cut taxes and increased defense spending, while liberals increased social welfare and entitlement spending. This has led to skyrocketing deficits and debt, along with declining infrastructure and education.
  • Conservatives deregulated the financial industry, while liberals used the government to insure against poor investments and losses. This led to the financial crash and bailout of 2008.
  • Conservatives tried to keep some private sector, free market elements in the American health care system, while liberals tried to expand the public sector options and give more people access to care. The conservative influence has produced a supposedly competitive private insurance sector that fails to control costs, and the liberal vision has produced a public sector that provides all the benefits of socialized medicine (free and unlimited health care for seniors and the disabled) without any of the cost controls (i.e. rationing). The result is the most expensive health care system in the world that provides mediocre results. We have the highest infant mortality among the world’s industrialized nations, and our life expectancy is shorter than that of 40 other nations.
  • Conservatives promoted school choice, especially private schools, while liberals maintained the status quo in the public schools. The result has been that most students today attend mediocre public schools, while a few lucky or rich kids attend charter or private schools.
  • Conservatives promoted oil drilling and other fossil fuel development, while liberals advocated for conservation and green technology. The result has been that we’ve had no energy policy, and we’re almost twice as dependent on foreign oil today as we were at the time of the first oil crisis in 1973.

These differences and failed compromises on economic issues are only part of the bad blood between conservatives and liberals. The other part is the profound differences over social issues. Since the 1970’s, liberals have become more secular and hostile to traditional religion, as exemplified in their use of the courts and legislation to legalize abortion, restrict public religious expression, and promote alternative lifestyles such as homosexuality. At the same time, conservatives have embraced fundamentalist Christianity, seething at the godless liberals and their secular-humanistic agenda. For many liberals and conservatives, these differences are more important than the economic ones.

The over 30-year marriage of modern liberals and conservatives has failed. The two sides have spent most of the time attacking and criticizing the other, neglecting their duty to promote the public good. It’s unlikely that centrist marriage counselors like Thomas Friedman can do anything constructive to bring the two sides together. It’s time for a divorce.

I propose that the country split into two, a predominantly liberal Coastal America, and a predominantly conservative Middle America. Coastal America would include most of the “Blue” states. Some states in the middle portion of the country would likely join, such as Illinois. Some states in the Southeast, such as South Carolina, would likely not join. Middle America would include most of the “Red” states. My guess is that the capital of Middle America would be in Texas, the most heavily populated of the “Red” states. Splitting the country this way would cause major logistical and transportation issues, since some state borders would become international borders. If, for example, Kansas joins Middle America, and Missouri joins Coastal America, many commuters in the Kansas City metro area would have to cross an international border to get to work. Also, the “marriage settlement agreement” (i.e. splitting the assets and debts between the two countries) would be a source of contention. Would both sides get nuclear weapons? There would be a possibility of military engagement between the two sides (perhaps leading to civil war, as it did 150 years ago).

While not an ideal situation, and one with many obstacles and pitfalls, it’s the best of several bad alternatives. One alternative is to continue the way we’ve been going. This will lead to continued economic decline, neglect of infrastructure and education, growing frustration at government, and the increased possibility of widespread political unrest or civil war. Another alternative is to wait for some crisis or external threat to bring the sides together. Even in the unlikely event that a unifying crisis happens, as with the 9/11 attacks, it won’t change the fundamental differences. Just as with the 9/11 attacks, after the crisis is over the status quo will resume. Another alternative is for one side to establish a dictatorship, imposing its ideas on the other. That’s the only way that either the liberal Reich agenda or the conservative/libertarian agenda can be achieved in an intact America. An American dictatorship would be impossible without first a civil war, a major economic disaster, or both.

Reich’s agenda has a good chance for passage in Coastal America. While there would be some conservatives there, they wouldn’t be numerous or influential enough to stop the liberals from passing the legislation, and a President Obama or some other liberal President from signing it. Reich, Krugman, Obama, Hillary Clinton and others can transform Coastal America into a European-style welfare state, complete with high taxes, universal Medicare, and, most importantly, public investment in infrastructure and education. Their model would be the Great Prosperity America of 1947-1975, along with contemporary European countries. One thing liberals need to understand is that the aging of America, along with foreign competition, forces them to be much less generous with social welfare and entitlement spending than they were in the past.

Rush Limbaugh, Glenn Beck, Sarah Palin, Ron/Rand Paul, Newt Gingrich and others can transform Middle America into a Libertarian-Christian utopia, using as their model the laissez-faire U.S. of 1870-1928. They can promote private investment and free markets in health care, infrastructure and education. Since America today is closer to the 1947-1975 model than the 1870-1928 model, and many people depend on social welfare and entitlement programs, the conservatives would have a harder time enacting their agenda. One thing that conservatives need to understand is that the America of 1870-1928 had very little defense spending (with the brief exception of World War I). If Rush, Glenn, Sarah and the others want to cut taxes to pre-1930 levels, or eliminate the income tax, they’ll need to radically cut defense spending.

The good thing about a split America is that neither the Coastal nor Middle parts would have any pretensions about being a superpower. There would be no need for (or ability to fund) the level of military spending we have now. We wouldn’t be able to send troops to a future Iraq or Afghanistan to try to perform nation building. We would have to sell off or destroy much of our military hardware, and send most of our troops back to private life. Our military would transform from being the world’s policeman to being a self-defense force.

As with West and East Germany between World War II and the fall of the Berlin Wall, Coastal and Middle America can be seen as a large-scale political-economic experiment. If one is vastly more successful than the other, as West Germany was compared to East Germany, then perhaps the failed one will eventually agree to adopt the system of the successful one. As with the reunification of Germany, America may then become united again. On the other hand, if both are successful, or neither is, then America may remain divided for a long time.

In conclusion, Aftershock has some good examples of how our country has become more economically unequal in the last 30 years. The middle class has struggled while the rich have gotten much richer. The coping mechanisms that those in the working and middle classes have used to try to maintain their standard of living, including having both spouses work, increasing their work hours, and increasing their level of debt relative to income, have been exhausted. The money going to the rich, unlike that going to the middle class, is usually invested, not spent, which helps out Wall Street but not Main Street.

I don’t agree with Reich’s argument that a massive increase in government social welfare spending and taxation is needed to grow America’s economy. While such a policy would lead to more equality, there are other ways to promote economic growth and an improved standard of living that rely more on stimulating production than consumption. These alternatives include the laissez-faire economics that America utilized in the late 19th and early 20th centuries, and the mix of Communist dictatorship and capitalism that contemporary China uses. Reich’s agenda can only be achieved in a Coastal America that results from splitting the country in two. Such a split is the best of a number of bad alternatives facing America today, the result of a failed marriage between modern liberals and conservatives.