Thursday, December 30, 2010

A Review of “Science and Change 1500-1700” by Hugh Kearney

We take modern science for granted, and assume that it has always been the way it is today, with professional scientists, empirically-based research, peer-reviewed journals, laboratories, and a common scientific method accepted by all. But what we know as science today evolved over time. In the Middle Ages there were monks in monasteries studying “natural philosophy,” and alchemists doing chemical experiments, but no scientists in the modern sense. During the early modern period, something happened to transform natural philosophy and alchemy into science. Hugh Kearney discusses this crucial transition period, aka “The Scientific Revolution,” in his book Science and Change 1500-1700.

This book, originally published in 1971, is an old college textbook of mine that I decided to reread recently. It includes many fascinating images, reproductions of paintings, drawings, and diagrams that are relevant to the text discussion. The prose is clear and straightforward, and it includes a wealth of historical examples. The book’s theme is that the Scientific Revolution can be understood as a mixture of three intellectual traditions: organic, magical, and mechanistic. Each tradition had roots in ancient Greek philosophy. During the Middle Ages, the ancient texts were rediscovered and merged with Christian theology, becoming a new establishment, known as “scholasticism”. The Scientific Revolution was largely a revolt against this establishment, favoring empirical research over reliance on ancient authorities.

The organic tradition was based on what we now call biology, with an emphasis on the study of living organisms. The Greek philosophers who most influenced men working in this tradition were Aristotle and Galen. Aristotle’s philosophy was wide ranging, but it was his empirical observations that most influenced people working in the organic tradition. Galen, who came after Aristotle, was best known for his anatomical and medical work.

The University of Padua was the capital of the organic tradition during the Scientific Revolution. Some famous names of faculty members there include Andreas Vesalius (1514-64), the “father of modern anatomy,” and Gabriele Fallopio (1523-62), who discovered the Fallopian tubes. William Harvey (1578-1657), who discovered the circulation of blood, was a student at Padua.

The magical tradition provided a scientific framework in which nature was seen as a work of art. God was thought of as a magician or artist, and mathematics was a tool to understand the Deity. The Greek philosopher who most influenced this movement was Plato. His ideas were taken up and modified by the Roman philosopher Plotinus, who founded a movement called “neo-Platonism.” Also influential in this movement were writings attributed to a legendary Egyptian named Hermes Trismegistus, along with the Jewish Cabbala.

Some famous men in the history of astronomy were influenced by the magical tradition. Nicholas Copernicus (1473-1543), who was influenced by neo-Platonism while living ten years in Italy, developed a mathematical heliocentric theory. Tycho Brahe (1546-1601), who was influenced by astrology, made a large number of astronomical observations. Johannes Kepler (1571-1630) made use of these observations to argue that planets move in elliptical orbits around the sun, and that the velocities of the planets are not uniform during the course of their orbits. Kepler utilized the magnetic theory of English physician William Gilbert (1544-1603) to explain planetary motion in terms of magnetic interactions. Kepler’s last book was a mass of neo-Platonic speculation. Isaac Newton (1642-1727), the discoverer of the universal law of gravitation, was in part a neo-Platonic mystic. He looked upon space and time as part of the Divine presence in the universe.

The mechanistic tradition viewed nature as a machine. Natural phenomena were regular and predictable, capable of being understood by mathematical laws. Greek philosophers in this tradition included Democritus (an early atomist) and Archimedes (a mathematician and engineer).

Some famous names in physics, chemistry and applied mathematics were part of the mechanistic tradition. Galileo (1564-1642) developed laws of motion based on experiment. He contradicted Aristotelian doctrine by showing that the speed of a falling body is not proportional to its weight. Mathematician and philosopher René Descartes (1596-1650) helped promote the mechanistic world view. Descartes’ philosophy conflicted at almost every point with Aristotelian principles, but his extensive use of deductive reasoning left him as exposed to experimental attack as any of the scholastics. Mathematician, experimenter, and philosopher Blaise Pascal (1623-62), who invented a primitive calculator, saw no trace of the Christian God in the world of nature. According to Pascal, God sometimes intervened miraculously, but didn’t interfere with the mechanical laws of nature. Pascal attacked the Aristotelian assumption that a vacuum cannot exist by doing a series of experiments. Robert Boyle (1627-91) applied mechanical philosophy to the world of chemistry. He formulated the law that, for a fixed quantity of gas at constant temperature, pressure is inversely proportional to volume. Isaac Newton was in part a mechanist, especially in his optical work, which showed that light behaved according to mechanical laws when passed through different media.

What tradition won out in the end? For science, the mechanistic tradition became dominant. Physics and chemistry became the models by which all sciences were evaluated. Modern astronomy, which started with the neo-Platonic speculations of Copernicus and Kepler, eventually shed its mystical/astrological side and became a sub-discipline of physics. Even biology, which began in the Aristotelian/organic tradition, over time became mechanical. Darwin’s theory of evolution did away with the need to postulate Aristotelian final causes to explain higher organisms. Natural selection, not God’s will, explains how humans evolved from primitive ancestors. Cells, tissues, and organs came to be viewed as biochemical factories, assembled from a genetic blueprint shaped by random variation and selection.

For society as a whole, the mechanistic philosophy became most influential in the late 17th and 18th centuries. The philosophies of Descartes, Spinoza, Hobbes, Locke, Voltaire, Adam Smith, and the American and French revolutionaries were squarely in the mechanistic tradition. The Scientific Revolution influenced the early stages of the Industrial Revolution. For example, James Watt, who radically improved the steam engine, was acquainted with scientist Joseph Black, who did work on latent heat. The neo-classical revival of the Age of Enlightenment was influenced by the mechanistic tradition. Clear and concise prose, such as that of John Locke, became a model for many others. Poetic satire such as Pope’s Dunciad replaced poetry based on magical and mystical sources of language and imagery. Neoclassical art, such as that of David, emphasized clarity and simplicity.

The Romantic movement that began in the late 18th century turned against the mechanistic/scientific viewpoint. It influenced philosophy, art, music, politics, and culture. Gradually, romanticism was replaced by realism and then modernism and post-modernism. While these subsequent movements revolted against previous movements, and had unique characteristics, none of them revived the mechanistic philosophy. This has led to a division between mechanistic scientists, engineers, technicians, doctors, lawyers, and other practical people, versus artistic, creative, philosophical, religious, spiritual, and other non-practical people, a divide that persists across the world today. The same division exists within most people, who have both a practical, mechanical side and a non-practical, spiritual side, with one side usually dominant.

One criticism of Kearney’s thesis of the different traditions is that he may have overemphasized the non-scientific views and philosophies of some of the early scientists. While Harvey was influenced by the organic philosophy of Aristotle and Galen, and Copernicus, Kepler, and Newton by neo-Platonism, are these influences essential to understanding their scientific achievements? These men are famous today because they made some discovery that influenced the future direction of science. Their discoveries came not from their philosophical and religious influences, but in spite of them. They applied the scientific method to refute some ancient authority or commonly-held belief. For example, is the fact that Kepler believed in neo-Platonism, but Galileo did not, reason enough to separate them into different traditions? They were both natural scientists who made important discoveries, discoveries that relied on observation, data, and mathematics. Are their religious or philosophical beliefs essential to understanding their contribution to science?

I recommend “Science and Change 1500-1700” to anyone interested in the history of science.

Monday, November 1, 2010

Thoughts on the One Year Anniversary of Harrymagnet.com

The Harrymagnet.com site went live on October 25, 2009. The content on the site represented a summary of 2 years of my own independent research. This research suggests that I have the ability to perceive the Earth’s magnetic field, and that this ability directly influences how I feel. My psychiatric symptoms (OCD + tics + mild chronic depression) are navigational tools, directing me toward magnetic home. I had observed magnetic home (aka “The Happy Zone”) in Utah and North Carolina. By recording GPS coordinates and observing how my magnetic home moved in response to things like changes in bedtime and bed angle, I had acquired a great deal of data. I wrote a research paper that presented the data with my analysis. Before going public with the data, I first contacted some researchers, sending them my abstract and asking if they wanted to read the full paper. I found it difficult to select researchers to query, as I wasn't aware of anyone doing research on the navigational aspect of human magnetoreception. (Robin Baker studied this in the 1970’s and 1980’s, but has since moved on to other things.) Only one (a British researcher who specialized in parapsychology research) asked to see the paper, and he didn’t provide any feedback.

Disappointed by the lack of interest in this initial query, I decided to adopt the pseudonym “Harry Magnet,” purchase a domain name, and put the research paper and some other information on the website. I had two goals in doing this:
  1. Since I had no idea who might be interested in studying human magnetoreception, instead of trying to find them, I’d let them find me.
  2. I wanted to find out if there were other people with magnetoreceptive abilities similar to mine.

I utilized Google AdWords to market the site, along with sending queries to selected additional researchers. After some time, my site became listed in the first 3 pages of relevant search results, such as “human magnetoreception” and “human magnetic sense.” Some people reached my Two Mysteries multimedia article through a Google image search.

After a year, I haven’t made much progress in either of my goals. Only one researcher has given me any feedback about my paper, an American psychologist named Jorge Conesa-Sevilla (whom I had queried). No one has expressed any interest in researching the human magnetoreception phenomena I describe in my paper. There may be some people conducting research without telling me, but I have no reason to believe that any such research is happening.

Perhaps more disappointing to me is the lack of feedback from people with similar magnetoreceptive abilities. I created an Are You Sensitive page, with some simple steps that people could take to verify whether or not they are magnetoreceptive. I provided several ways for people to give feedback, including Facebook, commenting anonymously on my blog, and contacting me directly. I heard back from only two people, who provided information suggesting that they may be magnetoreceptive, but I was unable to follow up with them.

I thought that my website might generate some online buzz, but there have been only two sites I’m aware of that have mentioned me. One, a bipolar discussion forum, talked about me soon after my site went live. Another, a mental health blog, mentioned one of my book reviews. While I’m grateful to these authors for mentioning me, I was hoping for a great deal more buzz after a year.

I’m not currently running Google AdWords. I get only a handful of daily hits to my site and blog. Otherwise, nothing. To what do I attribute my lack of success?

One observation I’ve made is that while it’s easier than ever to publish, it’s harder than ever to get anyone to read what you publish. The Web has allowed anyone with a computer and Internet connection to have a voice. While this democratization of publishing is a good thing, it has created a situation in which there is so much content that it’s very hard to distinguish the few gems from the mass of irrelevant, poorly written, and useless information. I imagine that the few people who have clicked on my site take a cursory look and then say to themselves, “Harry Magnet, who’s he? Why should I spend my time reading this stuff?” They then leave the site.

This is where a middleman can be very useful. A creative person cannot achieve eminence alone. He needs a middleman, an intermediary to tell people that his idea is important and worthy of notice. In the past, publishers and agents served this gatekeeping role, but today it is very hard for someone without a name to get published. I didn’t even try.

The problem with my project is that all my results are based on subjective experience. That’s why I’d never get published by a peer-reviewed journal, or book or magazine publisher. The middleman I need at present is a scientist willing to research human magnetoreception, to put to the test the claims I make in my research paper. Successful results, published in a peer-reviewed journal, would bring me attention and support.

As I said above, I’m not aware of anyone in the world studying the navigational aspects of human magnetoreception. That implies that if I want to find someone to research this, I’d need to convince him to go beyond his narrow subspecialty and begin a new field of research. It’s hard to find such a person, even though there are people studying related phenomena such as the human magnetic sense and bioelectromagnetics. Scientific research in general has become very conservative, with few people daring to study new things. Curiosity was the primary motivator of the great scientists of the past, and I need at least one scientist to become curious enough about my project to risk defying conventional wisdom.

Another potential middleman is a practitioner willing to work with me to try out my techniques on other people. I’ve made some changes to my sleeping behavior and environment that have helped me feel better. Others can make similar changes. I have no credentials and cannot be an independent practitioner, but I’d like to work with someone. This will likely be someone I meet locally, as it is hard to try these techniques from a distance.

My experience the past year has convinced me that it was a mistake to rely solely on online media to spread my message. I should have focused more on developing local connections. One reason for this is that although I have a presence on Facebook, I have little interest in acquiring virtual friends. I prefer face-to-face interaction, and have a difficult time trusting or befriending people I haven’t met in person. Due to my exclusive use of online media, I haven’t made any progress on answering these important questions:
  • Who else has magnetoreceptive abilities like mine?
  • What psychiatric disorders are connected to magnetoreception?
  • How different are others’ magnetoreceptive experiences compared to mine?

I’m going to be leaving North Carolina soon, and I’m not sure yet where I’ll end up living. Wherever it is, I’ll try to network with other people with psychiatric disorders, along with practitioners and researchers. I’ll keep the website, Facebook page, and blog up, and use them both as something to refer my local connections to and as a continuing online presence. Hopefully by the second anniversary of Harrymagnet.com, I’ll have more positive results to convey.

Thursday, October 14, 2010

Is Globalization Good For America? A Review of “The World is Flat” by Thomas Friedman

Globalization has its advantages and disadvantages. While American consumers can enjoy cheaper products and services thanks to manufacturing plants in China and call centers in India, many American workers have lost their jobs. What was an economic change affecting mainly manufacturing workers in the 1980’s and 1990’s has in the last decade, and especially since the Great Recession began, been transformed into a job-destroying mechanism for workers at all levels of ability, in many different fields. Secretaries, computer programmers, and even lawyers have seen their jobs outsourced to India and other countries. Does globalization benefit the country as a whole?

Thomas L. Friedman writes about globalization in his 2005 book The World is Flat: A Brief History of the Twenty-First Century. I decided to read this book because I have read many New York Times opinion columns by the author, and found him to have an interesting perspective on the economy, politics, global warming, and the Middle East. Friedman is a journalist, and traveled extensively to research this book, interviewing business executives from around the world, from Bill Gates to Vivek Paul (president of the Indian outsourcing company Wipro). The book contains no footnotes, endnotes, or bibliography. Most of the material comes from Friedman’s observations and interviews.

Being a computer programmer, I have seen firsthand that many IT jobs have been shipped to India. I was fascinated by Friedman’s account of the history of this outsourcing process. It began with India’s opening up of its previously socialist economy in the early 1990’s, a political change influenced by the fall of the Berlin Wall and the collapse of the former Soviet Union. The computer, telecommunication, and Internet revolutions helped create a platform for globalization of IT. During the late 1990’s technology bubble, companies overinvested in fiber optic networks. The dotcom bust in the early 2000’s bankrupted many companies, but brought down the price of data and phone transmission. These technological and economic changes made India “the luckiest country in the history of the late twentieth century” (p. 103). Finding themselves with cheap fiber optic networks just waiting to be used, Indian entrepreneurs utilized homegrown engineering, computer science, and software talent to outsource American jobs. The Y2K computer crisis that occurred at the turn of the twenty-first century gave Indian outsourcing companies a big project to work on, and programmers there did much of the Y2K upgrade work. E-commerce was another important technology outsourced to India, and pretty soon it would be hard to find a technology or software product in which Indian companies didn’t have a footprint.

The title of the book reflects how technological, business process, economic and political changes have “flattened the world,” i.e. removed barriers that prevented individuals or countries from competing. The above paragraph shows how India benefited from these changes. China, Russia, Eastern Europe, and some countries in Latin America and Asia also benefited. Certain countries, such as many in the Middle East and Africa, have not benefited. Friedman talks about the reasons why Arab-Muslim countries have been left behind in the global flattening. In his view, anger and frustration at the Israeli occupation of Palestine, at having to live under authoritarian governments, and at a decadent and promiscuous West have helped promote religious extremists such as members of al-Qaeda. He calls al-Qaeda terrorists “Islamo-Leninists.” Like Lenin, they have a utopian-totalitarian worldview, one in which their acts of terrorism against the West mobilize and energize the Muslim masses to rise up against their corrupt rulers. Unlike Lenin, however, they want to establish an Islamic state that spans the territory of the former Muslim empire. Desiring life and government to return to the way they were in the Middle Ages is not an attitude conducive to participating in the global economy. Friedman offers a counterexample in Arab entrepreneur Fadi Ghandour, cofounder and CEO of Aramex, a package delivery company and the only Arab company listed on the Nasdaq. If more Arabs desired to be like Ghandour, and fewer like bin Laden, the Middle East would be part of the flat world.

Friedman believes in free trade, the economic basis for globalization. Trying to protect certain jobs or industries, while helping one group of people, is not going to help the country as a whole. The wealth of examples he provides of how globally integrated our economy has become is good evidence in support of his view. An example is his description of how his Dell computer was assembled using a global supply chain. His order was emailed to a Dell notebook factory in Malaysia. The notebook was codesigned in Texas and Taiwan. The Intel microprocessor came from a factory either in the Philippines, Costa Rica, Malaysia, or China. The memory came from a factory in Korea, Taiwan, Germany, or Japan. The motherboard came from a factory in China or Taiwan. The hard drive came from a factory in Singapore, Thailand, or the Philippines. Note that the only thing from this list made in America was the notebook design, and it was only codesigned here. Most of the factories are in Asia.

While there may be transition phases in certain fields, in which wages go down and people lose their jobs, there’s no reason to believe that this downward trend will be permanent. Not everything that can be invented has been invented. New technologies and new types of jobs will replace the ones lost. A good example is the computer/IT/Internet revolution. Fifty years ago, who could have imagined how the computer revolution would take place, how many jobs would be created, and how our lives would change? New technologies that no one can imagine now will likely come into existence in the future. These technologies will produce new jobs and new companies, just as the computer revolution produced Microsoft, Apple, and Google.

Friedman is sensitive to the plight of workers who become unemployed due to globalization. Instead of lifetime employment, he would like to see lifetime employability. Government and business can both help in making a worker employable for his lifetime. Education is the key. Tertiary education should be government-subsidized for at least two years, whether at a state university, a community college, or a technical school. Employers should help train and cross-train employees so that if their jobs are outsourced, they have the skills necessary to get another job.

Friedman talks about social problems in America that will make the country less competitive down the road, something he calls “a quiet crisis.” He wrote the book before the Great Recession began. Now, this crisis is becoming noisier than ever. One problem is that fewer Americans are becoming scientists and engineers, at a time when the number of jobs requiring science and engineering training continues to grow. A National Science Board report in 2004 notes that science and engineering degrees represent 60% of all bachelor’s degrees in China, 33% in South Korea, and 41% in Taiwan, but only 31% in the United States. America has fallen to seventeenth in the world in the proportion of college-age students who receive science degrees, down from third 30 years ago. Immigrants have filled in some of this gap: 60% of the nation’s top science students and 65% of the top mathematics students are children of recent immigrants.

Another problem is that Americans have a sense of advantage and entitlement that makes them lazier than people who grow up in less affluent countries and homes. It takes self-discipline, delayed gratification, and long-term thinking to pursue higher education and achieve success in an intellectually-demanding career. Americans seem to have lost the ability to defer gratification and look to the long term. The best example is the debt-fueled consumption of recent decades, financed in large part by the disciplined and motivated Chinese. The Great Recession has forced many Americans to “wake up” to the new reality and redirect their lives.

Friedman doesn’t mention a related problem that seems to afflict Americans more than anyone else: the drug epidemic. Nothing represents a national failure to delay gratification more than widespread drug abuse and addiction. Illegal drugs like cocaine, heroin, pot, and ecstasy give the user an immediate high; he feels good for a short time, but soon feels miserable until the next fix. The legal psychotropic ones like antidepressants, antipsychotics, benzodiazepines, and analgesics help people suffering from a variety of disorders to temporarily feel better. But they don’t provide any long-term relief. As I report elsewhere, many of these supposedly therapeutic drugs make people worse over the long run, leaving them suffering from chronic addiction and physical and mental side effects. Antidepressants and antipsychotics also blunt motivation and emotion, something not conducive to career achievement. Doctors ignore these deleterious long-term effects, prescribing these drugs willy-nilly. A nation of drugged-up zombies and addicts can't compete in the global marketplace.

What should people do to keep from being outsourced? Friedman devotes a chapter to this. One suggestion is for people to work in a specialized field. Examples include specialized lawyers, brain surgeons, cutting-edge computer architects, and robot operators. This type of work cannot easily be digitized and transferred to a lower-wage location. The problem for most people is that they don’t have the skills or education necessary to work in these fields.

Another suggestion is to be anchored, i.e. to work in a service job that cannot easily be automated or outsourced. Robots or Indians can't serve you lunch or check out groceries, at least not yet. The problem with these jobs is that they are usually low wage.

Another suggestion is to be adaptable, to constantly acquire new skills, knowledge, and expertise. If you are a computer programmer and only know the C language, you’re not very marketable. The problem with this suggestion is that older and low-IQ people have difficulties being adaptable.

Friedman doesn’t mention that there is a large group of people who are being left behind in this new economy. They don’t have the intelligence, education, ambition, or skills necessary to work in a specialized field, or to be adaptable. The only thing left for them is to work in a low-wage service job. I have some friends, bright people, people who have worked in good, decent-paying, middle class jobs, who are unable to get a good job in the current economy. They are forced to take low-wage service or call center jobs. Eventually, as robots become more advanced and prevalent, even the service jobs will become automated. Then what will happen to people who can only work in these jobs? Will we have a large group of unemployable people who will have to rely on state assistance to live? How can society afford to support these people? What will being on chronic state assistance do to their inner dignity and self-worth? These are questions that Friedman and others who applaud globalization need to answer.

Thursday, July 29, 2010

Do Psychiatric Drugs Do More Harm Than Good? A Review of “Anatomy of an Epidemic” by Robert Whitaker

The idea that people with psychiatric disorders should “take their meds”, that these meds control their symptoms, and that they’d be better off on their drugs than off them, is an uncontroversial position today. Decades of advice from doctors, nurses, therapists, academics, and drug company spokesmen have pushed people with disorders such as schizophrenia, bipolar disorder, major depression, and anxiety to take prescribed medications. We think of someone who decides not to take medications as acting in an irrational manner. Patients who are hospitalized don’t have a choice—they must take their medications.

Is this virtually monolithic societal support of the use of psychiatric drugs a group delusion, similar to the delusions that (nonmedicated) schizophrenics experience? Robert Whitaker takes up this question in his book entitled Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America.

Whitaker has two main points in the book. One is that there has been an epidemic of mental illness in America in the recent past. The rate of disabled mentally ill in 2007 was twice that of 1987, and six times that of 1955. The trend is even more striking for children: the number of disabled mentally ill children increased thirty-five-fold between 1987 and 2007.

Whitaker bases these claims of an epidemic on SSI (Supplementary Security Income) and SSDI (Social Security Disability Insurance) statistics. He compares these figures in 2007 and 1987 to hospitalization figures in 1955 (most disabled mentally ill in the 1950’s were cared for in state and county mental hospitals). These SSI and SSDI figures are not true prevalence figures, of course, since there are people disabled by mental illness who are not on these programs, and people on these programs who aren’t truly disabled. Since diagnostic criteria for mental illness have changed significantly in the past half century, it’s impossible to arrive at a true prevalence figure. But his general point of a mental illness epidemic is hard to deny, considering that America spent $25 billion in 2007 on antidepressants and antipsychotics. If there were little or no mental illness, why would doctors be prescribing so many pills?

Whitaker’s second point is much more controversial. He says that this epidemic of mental illness is, in large part, caused by the very drugs that are being used to treat psychiatric disorders! Whitaker presents evidence in his book that psychiatric drugs do more harm than good when looked at from a long-term perspective. He acknowledges that these drugs make the patient better in the short term, such as the 6 week period in which clinical trials of new drugs occur. But, looked at over the period of years, not weeks, patients do better off drugs than on them.

Whitaker uses several pieces of evidence to craft his argument. One is that the epidemic of disabling mental illness occurred during the period in which psychiatric drugs were used. During the early phase of psychiatric drugs (mid 1950’s through the mid 1980’s), antipsychotic drugs were used primarily on people with schizophrenia. Anti-anxiety and antidepressant drugs were prescribed for people with anxiety and depression. Lithium was used as a mood stabilizer for people with bipolar disorder. Most of these early drugs (with the exception of the anti-anxiety drugs) had terrible side effects, such as tardive dyskinesia for the typical antipsychotics. These side effects limited the number of people willing to take these drugs. (The anti-anxiety drugs Miltown and Valium were very popular, especially among women, but also very addictive.) Studies showed that those on the drugs, including the anti-anxiety drugs, had worse long-term outcomes than those off the drugs. These studies were never widely publicized, and doctors continued to prescribe the drugs to adults (children were rarely prescribed drugs in this period, with the exception of Ritalin for ADHD).

The second period, beginning with the FDA approval of Prozac for depression in 1987, saw the release and aggressive promotion of “second-generation” psychiatric drugs, which included SSRI antidepressants (e.g. Prozac, Zoloft, and Paxil), and atypical antipsychotics (e.g. Risperdal, Zyprexa, and Seroquel). Drug companies heavily marketed these drugs both to the public and to doctors, with the help of academic leaders in psychiatry. This period also saw a major increase in the diagnosis and drug treatment of pediatric psychiatric disorders.

The second-generation drugs had fewer and less severe side effects than the older drugs, but were no more effective. They still had some serious side effects, however, such as increased risk of suicide for Prozac, and weight gain for Zyprexa. These adverse effects were generally swept under the rug as the pharmaceutical marketing machine (with the aid of drug-money-soaked academic psychiatrists) promoted them as miracle drugs.

The release of the psychiatry reference manual DSM-III in 1980 brought a major change in diagnostic criteria, leading to more reliable diagnoses based on symptoms. The DSM-III (followed by the DSM-IV in 1994) made psychiatry appear more scientific and objective, since it dispensed with vague Freudian notions like “neurosis.” The problem was that it wasn’t accompanied by any breakthroughs in the understanding of the biology of mental illness. The “chemical imbalance” theory, i.e. that schizophrenia was a result of too much dopamine, and depression a result of too little serotonin, was demolished by research findings. But drug companies and psychiatrists continued to tell people that this imbalance theory was true. Whitaker makes a good argument that the drugs actually cause chemical imbalances, not treat them. These iatrogenic (drug-caused) imbalances force people on the drugs to continue taking them, as they get terrible withdrawal symptoms if they try to stop. Getting children addicted to these drugs was a pharmaceutical marketing home run, as the kids would be hooked for life, a reliable long-term source of income.

It was during these last 25 years of widespread prescription of psychiatric drugs that the disability numbers soared. These drugs didn’t seem to be helping people over the long term, or else why would they be forced to go on disability? Large numbers of children started becoming disabled. According to Whitaker, the second-generation drugs had as dismal a long-term effect on patients as the first-generation drugs. The problem for society was that many times more people were taking them than took the earlier drugs. The few academic psychiatrists who questioned the efficacy of drugs were stripped of their positions and funding, and marginalized. Millions of dollars in drug money flowed to virtually all the academic leaders in psychiatry.

Whitaker weaves in anecdotes along with his research findings, enhancing the readability of his book. Some of the stories are haunting, such as the case of “Jasmine,” a girl prescribed an antidepressant for bed wetting in fifth grade. The agitation she got from this drug led psychiatrists to put her on a drug cocktail that included Zyprexa. She was hospitalized in high school, and the drug merry-go-round to which she was subjected fried her brain, leaving her perpetually psychotic and mute. Of course, Jasmine is an exception, and most children and adults on drugs have much more positive outcomes, but the reader is left to wonder, “What would have happened if she had never been put on drugs in the first place?”

Whitaker provides examples of how drugs used to treat one condition (e.g. Ritalin for ADHD) led to some children developing manic symptoms, which led them to be diagnosed with bipolar disorder, which led them to receive drug “cocktails” including powerful and harmful antipsychotics. They end up on a long-term trajectory of serious physical and mental problems, an outcome which probably would have been much more benign if they hadn’t been prescribed any drugs in the first place.

My own experience with psychiatric drugs was generally a positive one. I didn’t start on drugs until I was 27. I had many years of dysfunctional behavior, of cognitive and emotional problems. Virtually any intellectual activity at all (e.g. reading or writing) would trigger horrible “states” that would last for days. These “states” would include severe obsessions, difficulty concentrating, emotional suppression, and tics. Although I had a job, I was working well below my ability, and had little social life. It was under these conditions that I decided to try Anafranil, a drug used to treat OCD, and it did help improve my functioning for a while. I tried combining Anafranil with many other drugs that were less helpful, though I rarely took more than two at a time. Being very sensitive to side effects, I stuck to low or moderate dosages. After ten years of Anafranil, I felt that it was no longer needed, and convinced my psychiatrist to allow me to stop taking it. I followed up with 3 years of low doses of Xanax, then got off medication completely. I’ve been basically medication-free for 3 years. I think the late start on drugs, the small number and low doses of drugs, and eventually getting off them made me one of the few long-term medication success stories.

Whitaker is not anti-drug, and acknowledges that “there is a place for the drugs in psychiatry’s toolbox” (p. 333). But he wants psychiatrists to “think about the medications in a scientifically honest way and to speak honestly about them to the public” (p. 333). Certainly the vast majority of drug prescriptions are unnecessary and possibly harmful. After reading this book, (although Whitaker doesn’t suggest this) I think that a strong case can be made to outlaw prescribing psychiatric drugs for children. If they cause more harm than good in the long run, then it is children who have the most to lose by taking these drugs. Exceptions can be made for children already on drugs, and for the few children who are completely out of control, and whose behavior cannot be managed by any other means. Psychiatrists would howl about any such legislation, but let them (and their drug company masters) put forward studies that show children benefit long-term from drugs. If Whitaker is right, and there are no such studies, then let’s ban the prescribing of drugs for children.

I found this book to be one of the most powerful and thought-provoking books I’ve read in years. I recommend it to anyone concerned with the mental illness epidemic that plagues our country.

Thursday, June 17, 2010

Is Astrology Scientifically Verifiable? A Review of “Astrology: Science or Superstition” by Eysenck and Nias

Hans Eysenck and David Nias take on the contentious issue of whether or not astrological ideas can be scientifically verified in their book Astrology: Science or Superstition. This book, published in 1982, goes over a lot of research, both of traditional astrology and of a newer quasi-astrology called “cosmobiology.” While the book is almost 30 years old, I decided to read it because I admire and enjoy reading the works of Hans Eysenck.

Hans Eysenck (1916-1997) is my favorite 20th century psychologist. While not as well known or influential as Freud or Skinner, I think that Eysenck was on the right track in his research and theories. During the mid-twentieth century, when unproved psychoanalytical theories were predominant, Eysenck was a fervent critic. Eysenck did groundbreaking work in personality theory, backed by extensive empirical research. He’s most famous for his theory breaking personality down into three fundamental dimensions: extraversion-introversion, neuroticism, and psychoticism. He was a pioneer in the study of the biological basis of personality, intelligence, and mental health. Not one to shy away from controversy, he was once punched in the face during a talk over his view that racial differences in IQ are (partially) genetically determined.

Most scientists avoid studying astrology, considering research in the field to be a waste of time and resources, kind of like studying psychic phenomena or human magnetoreception. But what if there is something to it, something that we don’t understand very well at present, but may lead to a new understanding of human behavior? It must be remembered that not long ago, the idea that some psychiatric disorders have a genetic component would have been laughed off as ridiculous. The same with the idea that taking pills would treat symptoms of depression or schizophrenia. What about splitting the atom, landing a man on the moon, or video conferencing with someone halfway around the world? Things that are common knowledge or obvious now weren’t always so.

Eysenck and Nias begin their book by discussing how mainstream scientists are irrationally hostile to anything connected to astrology. They then describe astrology, and delve into the research on the subject. They don’t find much research support for traditional astrology, e.g. whether birth charts and sun signs have any connection to personality, occupation, or personal destiny. Any studies that seemed to show a positive result either had methodological or statistical problems, or failed in replication. While many astrologers are sincere and mean well, any advice they give people isn’t based on research or science.

The authors then turn to what they call “cosmobiology”, or the scientific study of how extraterrestrial factors influence living organisms. Much of the research in this field consists of initial studies that have not been replicated, and thus can’t be considered proven. I’ll omit these studies and focus on the few findings that have been replicated and that Eysenck and Nias consider solid. One is that eminent people are born more often between the winter solstice and spring equinox. (“Eminent people” are those who have accomplished enough to be listed in one or more encyclopedias.) One study showed 36 eminent people born per day during the peak in February, compared to 27 per day at the trough in June, a one-third excess. There’s a similar trend for schizophrenia. A 1998 study confirmed this excess of winter births for schizophrenia, and also for bipolar and unipolar depression. Eysenck’s book on creativity argues that psychosis and creativity are biologically linked, and the season-of-birth data support his hypothesis.

One possible explanation for the season of birth data is that the parents of eminent or mentally ill children have a greater tendency for conceiving a child in the spring than the average parent. Why this is the case isn’t clear. Another explanation for the greater prevalence of mentally ill winter births is that babies born during the winter are more susceptible to infections, and these infections cause their mental illness. There’s been no definitive link, however, between any known kind of infection and schizophrenia or mood disorder. This viral hypothesis doesn’t explain why eminent people should be born more often in the winter.

Eysenck and Nias take up the question of whether the sunspot cycle correlates with biological or historical cycles, but don’t find any solid evidence in favor of this. In his later book on creativity, however, Eysenck mentions Ertel’s research. Ertel found a correlation between the sunspot cycle and creative achievement in the arts and sciences. There’s more worldwide creative achievement at the nadir of the solar cycle than at the peak. There were also worldwide bursts of creativity during periods in which there were few or no sunspots, such as the Maunder Minimum (1645 – 1715). To my knowledge, Ertel’s research has not been replicated, so this isn’t solid evidence by Eysenck and Nias’s standards. Eysenck, however, thought it significant enough to devote 7 pages of his creativity book to explaining it.

The most solid evidence Eysenck and Nias come up with, to which they devote an entire chapter, is the research done by Michel Gauquelin (1928 – 1991), i.e. the Mars Effect. Gauquelin researched the connection between planetary position at time of birth and eminence. From the perspective of the rotating earth, planets rise and set, as do the sun and the moon. Gauquelin divided each planet’s path into 12 sectors, similar to, but not identical with, astrological houses. Gauquelin studied birth records of eminent doctors, artists, scientists, and athletes. He found that eminent scientists tend to be born more often just after the rise (i.e. when the planet first appears on the horizon) or upper culmination (i.e. the highest point) of Saturn, while eminent artists are less likely to be born during these times. He found that eminent doctors are more likely to be born with Mars or Saturn in the sectors following the rise or upper culmination. He found that for eminent military leaders and for “iron willed” athletes, Mars was more likely to be in the sector following the rise or upper culmination (remember that Mars is the ancient god of war).
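
To make the sector scheme concrete, here is a minimal Python sketch of one way such a division could be computed, interpolating in time between the four cardinal events of a planet’s daily circuit. This is not Gauquelin’s exact procedure (he worked from birth registers and published ephemerides), and the event times in the example are hypothetical.

```python
def gauquelin_sector(birth, rise, upper_culm, setting, lower_culm):
    """Return the sector (1-12) a planet occupies at the birth hour.

    Simplified scheme: the four arcs between rise, upper culmination,
    set, and lower culmination are each split into three equal time
    intervals, numbered 1-12 starting at the rise. Sectors 1 and 4
    (just after rise and just after upper culmination) are the "key"
    sectors of the Mars Effect. All arguments are hours since local
    midnight, in chronological order; events past the next midnight
    are expressed as values above 24 (e.g. 0.5 becomes 24.5).
    """
    events = [rise, upper_culm, setting, lower_culm]
    if birth < rise:
        birth += 24.0  # wrap the birth hour into the current cycle
    for i, start in enumerate(events):
        end = events[i + 1] if i + 1 < len(events) else rise + 24.0
        if start <= birth < end:
            frac = (birth - start) / (end - start)  # position within this arc
            return i * 3 + min(int(frac * 3), 2) + 1
    return None  # unreachable when the inputs are chronological

# Hypothetical times: Mars rises at 6.0h, culminates at 12.5h, sets at
# 19.0h, and reaches lower culmination at 0.5h the next day (24.5).
print(gauquelin_sector(7.3, 6.0, 12.5, 19.0, 24.5))   # -> 1 (key sector)
print(gauquelin_sector(13.0, 6.0, 12.5, 19.0, 24.5))  # -> 4 (key sector)
```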

Eysenck and Nias consider the Mars Effect to be successfully replicated. Since the book is 30 years old, I checked out more recent sources. A 2005 article describes the history of the controversy. To summarize, the jury is still out on whether this effect is a statistical artifact. The previously mentioned Ertel became involved in the controversy, claiming that the Mars Effect did exist in the replications, if one focuses only on eminent people. For example, only highly successful athletes show this effect, not average ones.

Notice how eminence is an important factor in the Mars Effect, as it is in Ertel’s sunspot cycle research, and as it is in the season of birth research. Perhaps eminent people have some type of capability that others don’t have. This capability may be related to the magnetic sense, as the geomagnetic field is known to be influenced by sunspots and other extraterrestrial factors.

I can see a similarity between the controversy about the Mars Effect and the controversy about Robin Baker’s human magnetoreception research. In both cases, one side argues that the replications were successful, and the other side argues that they weren’t. The key questions are methodological and statistical in nature. Mainstream science is not convinced, and the research project is abandoned.

My own opinion about the Mars Effect, echoing that of many scientists, is “so what?” Even if the Mars Effect exists, it can’t be used to predict anything. It’s an interesting correlation, showing that birth times may be connected to extraterrestrial events, but it doesn’t really tell us much about human psychology or human behavior. Eysenck would probably argue that it’s difficult in general to make predictions in the social sciences. There are so many influences and factors involved in human behavior that correlational research is often the only method that can be used. The correlations found in the Mars Effect research are as large as those in much mainstream social science research. So why dismiss the Mars Effect, solely on the grounds that we don’t understand how it can work?

Eysenck and Nias make the analogy to Newton’s Law of Gravity. Newton didn’t understand how gravity worked, but was able to construct a successful theory of gravitation. We don’t understand how cosmobiology works, but why can’t we take it seriously, do the necessary research, and come up with a theory? In their concluding chapter, the authors make the connection between cosmobiology and geomagnetism. It’s likely that any extraterrestrial effects are mediated by the geomagnetic field. Research in animal and human magnetoreception is directly relevant to cosmobiology.

I would like to see more research in cosmobiology, but would like to see us move away from the correlational in favor of the experimental. Robin Baker did experimental research on human magnetoreception, but the effect sizes were so small and difficult to reproduce that he was unable to convince other scientists. My human magnetoreception hypothesis points the way to another research paradigm that may have more successful results. I (and some others) have a limited-functionality GPS. While this GPS doesn’t provide a great deal of navigational information, the limited information it does provide can be used as the basis for experimental tests. I have the ability to distinguish between being north or south of magnetic home. This ability can be tested in a double-blind manner, by driving me around in a bus with a sunroof and covered or blocked windows. If my sleeping behavior is experimentally controlled as I describe in my research paper, I will be near-perfect in my ability to distinguish between being north or south of home. The effect won’t be masked by statistical noise, as it was in Baker’s experiments and in the various correlational research studies mentioned in this book.
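
To illustrate why a forced-choice design like this escapes the statistical-noise problem, here is a minimal Python sketch of how such a test could be scored. The 19-correct-out-of-20 outcome is hypothetical, chosen only to show what “near-perfect” discrimination would mean against a coin-flipping baseline.

```python
from math import comb

def binomial_p_value(successes, trials, p_chance=0.5):
    """One-sided p-value: probability of at least `successes` correct
    answers in `trials` two-choice trials by guessing alone."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical outcome: 19 correct north/south judgments in 20 blind
# bus trials. The chance of guessing that well is about 2 in 100,000,
# so a genuinely near-perfect subject would stand out immediately,
# whereas a weak effect (say 60% correct) would need far more trials.
print(binomial_p_value(19, 20))  # ~2.0e-05
print(binomial_p_value(12, 20))  # 60% correct: ~0.25, not significant
```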

In conclusion, I recommend this book as an accessible, well-written discussion of astrological research.

Monday, May 31, 2010

Major enhancement to Harry Magnet website

I've made my first major enhancement to my Harry Magnet website. I've replaced my nontechnical summary with an article entitled The Magnetic Sense and Psychiatric Disorders: Two Great Biological Mysteries of Our Time. This article has a number of pictures and embedded videos to help clarify my points. I've also updated my Are You Sensitive page to include some pictures and better descriptions. The rest of the site has a new, more exciting look. (Note that I have not updated my Research Paper.)

Thursday, May 13, 2010

Are Some Anxiety Disorders Connected to Magnetoreception? A Review of “Phobias: Fighting the Fear” by Helen Saul

I came across the book Phobias: Fighting the Fear after doing a Google search on “human magnetoreception.” This 306-page book is an excellent summary of the status of scientific knowledge of anxiety disorders at the beginning of the 21st century. Helen Saul (the author) is a science journalist, and writes for a general audience. The book is comprehensive in scope, as revealed by the chapter titles: “History,” “Evolution,” “Genetics,” “Neurophysiology,” “Behavior,” “Cognition,” “Personality and Temperament,” “Gender and Hormones,” “Light and Electromagnetism,” and “A Physical Problem.” Those who want to check her sources can consult the “Further Reading” section at the end of the book.

One thing I gleaned from this book is the wide diversity of anxiety disorders. It’s likely that syndromes that are combined into a single disorder (e.g. panic disorder) have different etiologies. Since we can only diagnose anxiety disorders based on symptoms, we are certainly combining different disorders into one “package.” It is similar to headaches: a headache is a symptom, but it can be caused by many different things (migraines, tension, stroke, brain tumor, etc.). The same likely holds for panic disorder and other anxiety disorders.

The main reason I read this book was for the information included in the “Light and Electromagnetism” chapter. Saul begins the chapter with a section called “Mary’s Story.” Mary Dwarka is an agoraphobic housewife who developed an original hypothesis about her disorder. (Agoraphobia is a fear of public places and open spaces, sometimes associated with panic attacks.) Dwarka believes that agoraphobia is basically a travel disorder, a disruption of our navigational ability. People suffering from this disorder get disoriented when trying to use modern transport systems. “It could be due to the magnetic fields set up by subway or electric train systems, the artificial lighting on almost all public transport, or even just the sheer speed and distances covered. Whatever the underlying reason, it is the disorientation, rather than anxiety, that triggers the panic reaction” (p. 228). Agoraphobics are also upset by the lack of an escape route. If you’re on a train, you can’t get off until it stops at the next station. If you’re traveling in a car on a highway, you have to wait until the next exit to get out. Our Stone Age ancestors wouldn’t have been so confined as they wandered through the forests and jungles. Dwarka attributes the fact that women are 2 to 3 times more likely than men to have agoraphobia to fluctuating hormones. Her symptoms, as those of some other women, went into remission during her pregnancy.

Saul then summarizes Robin Baker’s human magnetoreception research. Baker, working in the 1970's and 1980's, was a pioneer in the study of human magnetoreception. It must be remembered that Baker studied normal subjects, and wasn’t specifically interested in the application of magnetoreception to psychiatric disorders. Saul mentions his research as support for Mary Dwarka’s hypothesis.

Saul next turns to the effects of light on anxiety disorders. She mentions the research of Arnold Wilkins, who studied the effects of fluorescent lights on several medical conditions, including epilepsy. Some people are very sensitive to fluorescent lighting, TV’s, and strobe lights. This sensitivity can take the form of seizures.

Agoraphobics are sensitive to sunlight. The Greek physician Hippocrates wrote of sunlight sensitivity 2400 years ago. A third of agoraphobics wear sunglasses or prefer to go out in the dark. Many are particularly sensitive to fluorescent lighting. Wilkins found that strip lights, glare, bright sunshine, and sunlight broken by trees or railings are troublesome for agoraphobics.

Saul mentions some alternative magnetic treatments like Empulse and Transcranial Magnetic Stimulation (TMS). Both of them seem to help some people with anxiety disorders, but evidence is not conclusive. It must be remembered that they produce artificial magnetic fields that are many times more powerful than the geomagnetic field. They are as unnatural as drugs.

My human magnetoreception hypothesis combines the two threads in the “Light and Electromagnetism” chapter. Like Mary Dwarka, I believe that some psychiatric disorders are connected to navigation and magnetic fields. Some cases of agoraphobia + depression may be magnetoreceptive in origin. Agoraphobia is a positive symptom, and is a signal that you are south of magnetic home. Depression is a negative symptom, and is a signal that you are north of magnetic home. Dwarka is correct in saying that some agoraphobics become disoriented by modern transport systems. The speed and distance traveled, along with the artificial lighting, are disorienting. For people who are sensitive to the geomagnetic field, modern travel exposes them to far greater magnetic field differences than our Stone Age ancestors experienced. Evolution has not adapted humans to handle these differences.

I am sensitive to bright sunlight, as are many agoraphobics. I attribute this sensitivity to the fact that the sunlight activates my limited-functionality GPS. This internal GPS tells me if I’m north or south of magnetic home.

One problem with the book is that it is entirely text. Saul is an excellent writer, and I didn’t have a problem following it, but I have a background in psychology. Saul would have made the book accessible to a wider audience if she had included some pictures, charts, and tables.

I winced when I read the sentence: “I. P. Pavlov . . . was an early learning theorist in the 1940s whose dogs famously heard a bell before receiving food” (p. 139). Pavlov died in 1936.

I recommend this book as a readable introduction to the science and treatment of anxiety disorders. I commend Saul for her willingness to talk about subjects like light and electromagnetism, subjects that mainstream scientists avoid like the plague.

Friday, March 26, 2010

Are We Overmedicating Our Kids? A Review of "We’ve Got Issues" by Judith Warner

Judith Warner answers the controversial question of whether or not children are being overdiagnosed and overmedicated in her new book We’ve Got Issues: Children and Parents in the Age of Medication. One interesting thing about the author that motivated me to read this was that she changed her mind on this question while writing the book. At first, she held the view that kids were being overmedicated by their affluent, competitive, perfectionistic parents. Her research turned this view around 180 degrees. The book argues that mentally ill kids are in fact undertreated and undermedicated. Most parents are very reluctant to drug their children, and do so only as a last resort. Many children with mental health issues (about 70%) are not being treated, partly due to parental neglect or opposition, but mostly due to a lack of quality, affordable mental health providers and resources in many communities.

Warner first tackles the question of whether there is in fact an epidemic of pediatric mental illness. There’s no question that recent decades have seen a massive increase in both diagnosis and medication treatment. Since the early 1990’s, the number of children receiving diagnoses of mental health disorders has tripled. From 1991 to 2006, there was a 3,500 percent increase in autistic children participating in special-education programs. The prevalence of depression in children has increased from near zero in the early 1970’s to between 2 and 15 percent today. Before the mid-1990’s, virtually no children were diagnosed with bipolar disorder; today the prevalence is up to one percent. The prevalence of ADHD went from 1.5 to 2 percent in the mid-1970’s to about 8 percent today.

With increased diagnosis comes increased medication usage. Prior to the 1990’s, kids usually weren’t prescribed antidepressants or antipsychotics; now 1 to 2 percent of kids take antidepressants. Atypical antipsychotic drugs, which can cause serious metabolic changes and weight gain, were used by over half a million children in 2003. Use of Ritalin and other stimulants has skyrocketed along with the number of children diagnosed with ADHD.

Warner argues that much of the increase is due to changing diagnostic patterns, what she calls “increased visibility.” Many children who suffered from mental illness in the period from 1945 to 1980 were, depending on their symptoms, punished for misbehavior or bad conduct, kicked out of school, institutionalized, or labeled as retarded. They weren’t diagnosed with psychiatric disorders because there were no effective treatments (this was a time when psychoanalytic techniques were dominant). Quoting Edward Shorter, “Physicians prefer to diagnose conditions they can treat rather than those they can’t” (p. 46).

Warner tackles the contentious issue of the fortyfold increase in juvenile bipolar disorder diagnoses between 1994 and 2003. She presents evidence for skepticism that many of these children in fact have bipolar disorder: the connections between the doctors promoting the diagnosis and the pharmaceutical industry; the fact that mood disorders occur predominantly in women, while juvenile bipolar disorder is largely diagnosed in boys; and the finding that the extreme irritability these supposedly bipolar children display is not a valid predictor of adult bipolar disorder. On the other hand, children diagnosed with bipolar disorder do have serious problems, and probably need some type of medication.

Warner also presents research indicating that there has in fact been an increase in mental illness prevalence separate from changing diagnostic standards. For example, one study found that successive generations of Americans born after World War II seemed to have a greater incidence and earlier onset of depression. Anxiety also seems to be increasing: the number of teens between the ages of 14 and 16 who agreed with the statement “Life is a strain for me much of the time” quadrupled between the early 1950’s and 1989.

The truth is that there’s no way to know if actual prevalence of pediatric mental illness has increased. According to Warner, “The research that could provide solid answers—epidemiological studies conducted in a parallel manner over time, asking the same questions, looking for the same disorders, using consistent language and definitions—for the most part doesn’t exist” (pp. 56-57).

My view is that she’s right: we can’t be sure of this. But it does seem that something in the environment is making kids more depressed, anxious, and autistic than before. Warner focuses on the post-World War II period, in which the parents of today’s young children grew up. But what if you go further back in history? It must be remembered that psychiatry, clinical psychology, and the other mental health professions didn’t exist before 1900, nor did any mental health treatments. If there were so many mentally ill children (and adults) back then, wouldn’t someone have noticed? There was no safety net; people had to pull their own weight or starve. If there were as many children and adults with disorders back then, and no way to treat them, wouldn’t society have come to a standstill?

Another way to look at things is the flip side of mental illness: creativity. Many mentally ill people are also creative, and many creative achievers of the past had mental illness. While there is no consistent way mental illness has been diagnosed over time, creative achievement is, by definition, public and visible. Michelangelo, Beethoven, Shakespeare, Einstein, and others achieved things that make them household names today; there are no contemporary equivalents. As I’ve argued elsewhere, the rate of creative achievement in the arts and sciences declined precipitously in the 20th century, to near zero today. People who would have achieved in the past are not achieving. There are many explanations for this, but one is that they are acquiring severe, debilitating mental illness that prevents them from achieving. Another explanation, that they are taking emotion- and motivation-blunting drugs, is something I’ll discuss below.

Warner next makes her central argument, bolstered by a variety of anecdotes and quotes from parents, doctors, and researchers. She argues that most parents don’t want to medicate their children, that they usually try a variety of useless alternative treatments before reluctantly agreeing to medication, and that they are usually pleased by the results. While medications don’t work miracles, and some of them produce side effects, they result in improvement. Children who were unable to cope or function in school can now function at a tolerable level. We should stop blaming parents and doctors for overmedicating children, and instead focus on the large percentage of children who, for various reasons, are denied effective treatment for their mental illness.

I found it interesting that Warner didn’t include any examples of cases in which medications did more harm than good, or of parents who were eager to medicate their children. While she says that she couldn’t find such examples, a reviewer pointed out that this looks like selection bias. There’s no evidence in the book that she took a random sample of medicated children and then tracked down and interviewed the parents. She’s a popular writer, not a researcher, so she can be excused for this unscientific methodology, but the reader must be wary of any generalizations she draws.

It isn’t hard to find examples of unhappy results from pediatric use of medications. Stephany, the author of the blog Soulful Sepulcher, writes poignantly about her daughter, currently institutionalized, who is such an example. I would have preferred to have seen a more balanced presentation from Warner.

Of course, the fact that some children don’t do well on medication isn’t an argument that medication is ineffective. No treatment is successful 100% of the time; all drugs have side effects, and sometimes the side effects are worse than the treatment effect. Another reviewer is critical of the scientific evidence that Warner presents in favor of medication use. Warner says that there are treatments for kids that “actually work” (p. 211), treatments that allow children to “improve and live their lives to the fullest” (p. 210). But Warner bases these Pollyannaish assessments on short-term measures of success, especially in school. The reviewer is skeptical about how much we know about the long-term effects of drugs on children’s developing brains.

My view is that we can’t be sure of the long-term efficacy of these drugs unless we do longitudinal studies, comparing control and experimental (drug) groups, that extend to about age 50. The reason for such a long time frame is that by age 50, we should know whether or not someone has been successful in his or her career. What if children on drugs do well in school, but not so well in their adult lives? Since most of the drugs (except Ritalin) haven’t been used in children for more than 20 years, declaring success is premature.

I’m not saying that we shouldn’t prescribe these drugs to children, although I do think they should be a last resort, after every type of therapy has been tried. We also need to distinguish between the more serious disorders and the less serious ones. For less serious childhood disorders, the ones in which children can function in some contexts (e.g. home) but not in school, I think we need to consider changing the school environment. The typical public school classroom is OK for an average student, but children with special needs and/or abilities don’t thrive there. There are some special-education classes, but to get a child into them a parent has to have significant time to deal with public school bureaucracies, or money for a private school program. Warner mentions this in her book; these services are effectively denied to the lower middle and working classes. We need to expand these services, and also think about less expensive alternatives that focus on vocational or artistic training.

As I mentioned earlier, psychotropic drugs are one cause of the lack of creative achievement in the arts and sciences today. The legal, prescribed ones blunt emotion and motivation; the illegal ones can cause destructive addictions. Even if a child does better in school while taking drugs, will he contribute more to society than he would have otherwise? In another blog post, I argue that our society has a narrow (academic) definition of talent, one that lacks context and empathy. Both context and empathy are necessary for good decision making, and in recent decades America’s political and economic leadership has made some poor decisions. Some of these deficits are probably due to the effects of drugs. These effects are hard to quantify, and won’t show up in a typical study, but they are incredibly important for society.

For the more serious disorders, in which the child can’t function anywhere, medications are an unfortunate necessity. Warner admits in her book, echoing many parents, that medication is not going to make these kids successful; it may make them less disabled than they would otherwise have been. The most important thing for us to do is to find the genetic and environmental causes of serious psychiatric disorders, so that we can prevent them in the future.

On the environmental side, we’ve covered just about every base imaginable: diet, chemicals, vaccines, infections, etc. One thing we haven’t looked at is the effect of artificial magnetic fields, because scientists don’t think that humans can perceive magnetic fields. I present evidence in my research paper contradicting this. I found that I’m sensitive to artificial magnetic fields when sleeping, but not when I’m awake. If I’m sensitive now, as an adult, I was probably much more sensitive as a child. My hypothesis is that some children with serious psychiatric disorders are sensitive to artificial magnetic fields, especially when sleeping. This would explain the increased prevalence of serious pediatric psychiatric disorders in recent decades. When I was growing up, in the late 1960’s through the mid-1980’s, the only artificial magnetic fields I had to worry about when sleeping were the innerspring mattress, the steel bed frame, the steel building structure, and a fan that I used in the summer to keep cool. Today we have wireless Internet, home security systems, cordless phones, cell phones, baby monitors, computers, widescreen TV’s, iPods, central air, etc. Many of these devices and appliances emit strong magnetic fields, and are kept on while we sleep. It would be easy to test my hypothesis: start turning things off, change to a non-magnetic bed, and see if the disturbed child becomes a little less disturbed. See if he can sleep better, which will make him function better the next day.
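
Here is a minimal sketch, in Python, of how such a test might be logged and scored as a single-case reversal (ABA) design: record a nightly sleep-quality rating during alternating devices-on and devices-off phases, then compare phase averages. The phase lengths, the ratings, and the phase_mean helper are made-up placeholders for illustration, not real data or an established protocol.

# Minimal sketch of the proposed test: an ABA (reversal) design comparing
# nightly sleep-quality ratings with household devices on versus off.
# All ratings below are made-up placeholders, not real data.

from statistics import mean

nightly_log = [
    ("devices_on", 4), ("devices_on", 5), ("devices_on", 3), ("devices_on", 4),
    ("devices_off", 6), ("devices_off", 7), ("devices_off", 6), ("devices_off", 8),
    ("devices_on", 4), ("devices_on", 3),  # return to baseline completes the ABA
]

def phase_mean(log, phase):
    """Average sleep-quality rating (1-10 scale) for one phase."""
    return mean(score for p, score in log if p == phase)

print(f"devices on:  {phase_mean(nightly_log, 'devices_on'):.1f}")
print(f"devices off: {phase_mean(nightly_log, 'devices_off'):.1f}")
# A consistently higher devices-off average across repeated reversals would be
# weak, single-case evidence worth following up with a controlled study.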

In conclusion, I found Warner’s book to be well written, with a lot of useful information for those interested in pediatric mental health. Its major flaw was a lack of balance in presenting evidence in favor of medication use in children.

Friday, March 5, 2010

A Beautiful Mind

I just finished reading A Beautiful Mind, the 1998 biography of mathematician John Nash by Sylvia Nasar. It’s an interesting account of a highly intelligent and creative man who suffered from schizophrenia. I’ll discuss information I gleaned from the book in this post. I’m assuming the biography is accurate, although I haven’t done any independent checking.

Like many people, I first heard of Nash by watching the movie starring Russell Crowe. The movie is true to the basic outline of the biography, but omits or fudges many relevant details. For example, the movie didn’t mention Nash’s travels to Europe or his divorce. From the movie, one would think that Nash’s disorder began at graduate school at Princeton, since that is when he first “sees” his imaginary roommate. This imaginary roommate isn’t mentioned in the book.

Although eccentric and strange, Nash didn’t suffer his first breakdown until age 30, which is relatively late for schizophrenia. If Nash had developed this disorder when he was 20, no one would have heard of him, and there would be no biography to read. It’s important to emphasize this point. For every famous person like Nash, there are thousands of talented people who, due to mental illness, never get a chance to exercise their talents.

What precipitated Nash’s initial breakdown? It could have been his marriage that occurred two years before, along with his wife’s pregnancy. Stress is known to precipitate psychiatric symptoms, and major life changes like getting married and having a child are significant sources of stress. It could have been a European honeymoon trip that Nash and his wife went on about 6 months prior to his breakdown. There’s no way to know for sure.

After his breakdown and initial hospitalization, Nash gave up his tenured position at MIT and headed to Europe, where he spent the next nine months attempting to renounce his U.S. citizenship and become a “world citizen.” This behavior illustrates my view that the severe mental illnesses are primarily disorders of instability. Who in his right mind would give up a tenured faculty position at MIT? Most normal people crave stability, and there’s nothing more stable than a tenured position. Mentally ill people crave the opposite; for them, the stability of a tenured position is both frightening and undesirable. Nash’s travels are further indications of instability.

After Nash returned to the U.S., he had temporary episodes of sanity alternating with psychotic episodes. With some brief exceptions, he wasn’t able to resume his career until he had a remission in his fifties. This remission occurred after having lived a relatively quiet and stable life at Princeton for over ten years. Nash won the Nobel Prize in economics in 1994.

Nasar’s biography convinces me that schizophrenia is closer to bipolar disorder than to Alzheimer’s disease. Schizophrenia is an episodic disease, characterized by alternating periods of sanity and insanity, of rational thought and delusion. At least in Nash’s case, it doesn’t appear to be degenerative.

There’s no evidence from the biography supporting my hypothesis that schizophrenia is connected to the Earth’s magnetic field. Before he developed schizophrenia, Nash traveled within the U.S., including working for a time at RAND in Santa Monica, California. If Nash were sensitive to the geomagnetic field, he should have suffered a breakdown while living in California. The fact that his first breakdown occurred soon after a trip to Europe doesn’t imply that the different geomagnetic field in Europe precipitated his schizophrenia.

Whether or not schizophrenia is connected to the geomagnetic field, there’s no question that it is a severely disabling disorder, probably the most disabling psychiatric disorder. That Nash was able to accomplish what he did despite the disorder, and that he was able to achieve remission, is amazing.

Monday, March 1, 2010

Harry Magnet offering free consulting

For a limited time I'm offering free consulting. Check out the details by clicking here.

Sunday, February 21, 2010

How an Unstable Meritocracy Has Failed America

David Brooks’ interesting NY Times Op-Ed column points out that while American society has become more fair and open, the level of trust and respect of our institutions has plummeted. The male WASP, blue-blood establishment of the past has given way to a new establishment in which religion, family, race, and gender are no longer as important. People get into the top schools largely based on intelligence and academic achievement, and after they graduate they move into positions of power in government, business, and academia. This can be seen as a triumph of meritocracy, a system in which people are promoted and reach leadership positions based on ability. Isn’t that a good thing, something that generations of Americans have longed for, something that harks back to the anti-aristocratic heritage of the American Revolution?

Brooks points out that this meritocratic establishment is lacking in several respects. One is that it’s based on a narrow (academic) definition of talent. This definition lacks context and empathy, two important factors for good decision making. In recent years, members of the political and economic elite have made some poor decisions that have severely harmed American long-term interests.

Another problem with the current meritocracy (a point made by the authors of The Bell Curve) is that its members live almost entirely separate lives from everyone else. They go to elite schools, work in rarefied environments, live in gated communities, and marry and socialize with other elites. They couldn’t care less about ordinary people.

Another problem is that solidarity among elites is weaker. The socially connected, inbred WASP elite may have competed with one another, but they didn’t fight the all-out war that elites fight today. Is it in the best interests of the country that Democrats and Republicans can’t agree on anything?

A related problem is that our society is too transparent. No one knew at the time that JFK was having various love affairs, because certain topics were considered off-limits to reporters. Was it in the best interest of the country that Bill Clinton was impeached for sexual indiscretions?

The most important problem with the meritocracy is that it is unstable and based on short-term thinking. The WASP elite could trace their lineage back generations, and this family-centric perspective encouraged long-term thinking. The 1960’s revolution and its aftermath swept away the WASP elite but didn’t put any stable social structures in its place. The U.S. has been a very unstable country since that revolution. The instability runs through every level of society, from family breakdown, drug abuse, and crime in the inner cities to the reckless gambling of Wall Street elites that led to the current Great Recession. Schools and infrastructure have declined as our political leaders put special interests over the interests of the country as a whole.

Brooks doesn’t offer any solutions to this problem. While we can’t go back to the 1950’s, I think that we need to start rolling back some of the reforms that have led us to our present dire situation. We need to regain our appreciation for social stability and social structures, for family connections, and for long-term thinking. Creative public policy ideas that involve such an appreciation need to be formulated and implemented.

Monday, February 1, 2010

Antidepressants Are No Better Than Placebo

Newsweek’s cover story reinforces what researchers concluded in a recent article in JAMA—that for the vast majority of patients, antidepressants are no better than placebo. In fact, antidepressants are worse than placebo, because they have side effects. Only for patients with more severe forms of depression do these pills have any significant benefit.

Considering that 2008 antidepressant sales in the U.S. were $9.6 billion, substituting sugar pills for antidepressants would seem an effective way to control spiraling health care costs. If anything, we’re going in the opposite direction: the number of antidepressant users doubled in the ten years from 1996 to 2005. Aggressive marketing by Big Pharma to both doctors and consumers has expanded antidepressant use, even as the evidence of their effectiveness has been seriously questioned. As the Newsweek article states, “antidepressants are basically expensive Tic Tacs.”

Antidepressants are also used to control OCD. I feel that my OCD/tic disorder was helped by Anafranil, an older antidepressant. While this benefit could be due to the placebo effect, I doubt it. I think that from a biological perspective OCD is more similar to severe depression than to anxiety disorders, with which it is classified by the current DSM. Both OCD and severe depression are helped by antidepressants.

The distinction between antidepressant effectiveness in severe and light/moderate depression reinforces my view that a more individualized/case-by-case approach is needed for psychiatric disorders. As Dr. Klitzman says in a separate Newsweek article, paraphrasing Tolstoy, “every unhappy individual is unhappy in his or her own way.” It’s possible that light/moderate depression is a reaction to negative life events, while serious depression, OCD, and bipolar disorder are something completely different.

My human magnetoreception hypothesis connects these serious disorders with the Earth’s magnetic field. Since magnetoreception depends on childhood experience, i.e. where and when someone grew up, each magnetoreceptive person needs to be tested and evaluated individually. While the same drug may work in many people, the location of their magnetic home will vary with their childhood experience. Their sleeping behavior and environment are also likely to differ, so recommendations on changing that behavior and environment need to be tailored to individual circumstances. This is similar to psychotherapy, in that therapy needs to be cognizant of the individual’s background, interests, intelligence, and personality.

In conclusion, we need to stop prescribing antidepressants to people for whom they have no benefit over placebo. For those with more serious disorders like severe depression, OCD, and bipolar disorder, antidepressants and other medications are useful. In the future, if my human magnetoreception hypothesis is confirmed, there will likely be more effective treatments that combine behavioral changes and futuristic devices.

Friday, January 15, 2010

The Difference Between Magnetoreception and Bioelectromagnetics

Earlier this month, I did a search in PubMed on the term “magnetoreception”. An article came up describing an experiment on humans. The abstract ended with the following sentence: “Magnetoreception may be more common than presently thought.” The implication is that humans may have magnetoreceptive abilities.

The experiment studied how exposure to low-intensity, low-frequency magnetic fields can alter pain sensitivity in humans. Subjects were exposed to acute thermal pain while lying in an fMRI machine. Experimental subjects received the magnetic field exposure (fields different from those the fMRI machine normally produces), while control subjects did not. The fMRI machine imaged both groups’ brain activity, and significant differences were found in various brain regions in the experimental versus the control group.

I would classify this experiment as an example of a bioelectromagnetic phenomenon, but not as a magnetoreceptive phenomenon. Bioelectromagnetics is the study of how electromagnetic fields interact with living beings; magnetoreception (aka magnetoception) is the study of how animals utilize magnetic fields for orientation. There has been much more research on bioelectromagnetics than on magnetoreception. One familiar bioelectromagnetic research area is the possible increased cancer risk from cell phone radiation.

Magnetoreception involves the perception of magnetic fields for orientation, i.e. to alter behavior. Some animals (e.g. migratory birds) navigate partially based on information in the geomagnetic field. Magnetoreception is basically a “sixth sense,” similar to vision or hearing, in which sensory information is processed by the brain to produce a perception. In this case, the perception is some sense of where the animal is, where it needs to go, and how to get there (i.e. an internal GPS, possibly combined with an internal compass).

To my knowledge, there has been only one research project aimed at determining whether or not humans have magnetoreception. It was undertaken by Robin Baker at the University of Manchester in the 1970’s and 1980’s (summarized in his book Human Navigation and Magnetoreception). Baker did a variety of studies designed to ascertain whether humans have magnetoreceptive navigational abilities. I discuss his research in my scientific paper. It’s difficult to summarize his results; a fair assessment is that he was unable to convince the scientific community of the existence of human magnetoreception.

For two years, I studied what I believe to be my own magnetoreceptive abilities. Unlike Robin Baker, who studied a weak orientational ability (involving internal GPS and compass ability) in normal subjects, I focused only on a limited-functionality GPS ability that was directly connected with my psychiatric symptoms. My research revealed that my symptoms (negative versus positive) indicate whether I’m north or south of (magnetic) home. I claim no compass ability at all. I can’t tell whether I’m facing or moving north, south, east, or west. All I can do is feel whether I’m north or south of magnetic home. To get to magnetic home, I need to utilize external navigational aids (e.g. maps, roads, technical GPS, etc.).

The most important aspect of my research is not that it will provide an awesome new navigational tool for humans. The modern technical GPS is great as a navigational tool, and my internal GPS is no match for it. The significant part of my research is that it provides an explanation for the environmental cause of psychiatric disorders. Right now, this environmental cause is unknown, and as a consequence we’re unable to effectively treat mental illness, or to prevent new cases from happening. Drugs treat symptoms, and can help many people function better, but they are no cure. To find a cure (or way to prevent psychiatric disorders from happening), we must understand the environmental cause. My hypothesis is that psychiatric symptoms are navigational tools, the human equivalent of the animal instinctual response to being north or south of home. This hypothesis can be tested (I describe some experiments in my scientific paper), and if confirmed, can hopefully lead to a future in which schizophrenia and bipolar disorder are as rare as smallpox and polio are today.

Sunday, January 3, 2010

What $640 / month rent gets you in Tokyo

[Photo: a Tokyo cubicle “hotel”]

Times are tough in Japan right now. Japan’s export-oriented economy has been hit hard by the Great Recession. “Hotels” like the one pictured above, originally intended for salarymen who stayed out too late and missed the last train, now rent out a third of their units long term. Read more about life in a 6.5 x 5 ft cubicle here.