There is certainly
no shortage of attempts to explain the rising prevalence of autism. I
would like to present another. As we know, the rate of autism in 1997
was reported as 1.6 per 1,000. That rose sharply to 6.6 per 1,000 in
2007, and by 2014 it was reported as 14.7 per 1,000. By comparison,
the cost to make and market a movie was around $60 million in 1997,
rising to $100 million in 2007 and to more than $180 million today.
The cost has more than tripled in just 17 years, coinciding with the
sharp increase in the prevalence of autism.
It is likely that
there are other factors involved, as the correlation does not appear
to be linear. In addition, 1942's Casablanca
cost just over $1 million to produce. During that same year, the
reported rate of autism was effectively zero. This suggests that
there may be a safe level of movie production cost, below which
autism rates are not elevated.
It
is not known at this time exactly how these two might be related.
Some possibilities include more realistic special effects directing
children to focus too much on small details, or more aggressive
marketing strategies leading to obsessive interests. Whatever the
cause may be, I think further research may be merited.
Odds
are very good right now that either you're laughing hysterically or
you think I've gone completely nuts. In truth, I created a
deliberately ridiculous example to make a point about correlation and
causation. I doubt that any sane person would actually believe that
movie production costs have anything to do with autism. However,
there are many studies and theories with little more validity than
this, including
several that gain widespread media attention. Unfortunately,
most people are not adequately equipped to recognize a bogus or
unusable study when they see one. I'd like to help
by providing a quick guide to understanding the experimental process.
To
start, I feel I should point out one of the basic rules of logic:
correlation does not imply causation. Just because two things are
happening at the same time does not mean they have anything to do
with each other. They might be related, but further evidence will
always be needed to establish a connection. I hope the opening of
this piece makes that point.
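To see how easily a strong correlation can arise between two unrelated quantities, here is a short sketch in Python using the figures from the opening. (Three data points is of course far too small a sample for any real analysis, which is part of the joke.)

```python
# Pearson correlation between two unrelated but rising series:
# reported autism prevalence (per 1,000) and movie cost (millions of
# dollars) for 1997, 2007, and 2014, as quoted in the opening above.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

autism_rate = [1.6, 6.6, 14.7]   # per 1,000
movie_cost = [60, 100, 180]      # millions of dollars

r = pearson(autism_rate, movie_cost)
print(f"r = {r:.3f}")  # a near-perfect correlation, with no causation anywhere
```

Any two quantities that both happen to rise over the same period will correlate this way, which is exactly why correlation alone establishes nothing.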
Second,
it is important to always clearly define your terms. This is often
overlooked in autism research, and perhaps in other psychological
studies as well. As definitions
change over time and methods improve, researchers often neglect to
adjust previous findings to account for these changes, leading to
faulty results. If you look carefully at the above example, you may
spot an instance of this
fallacy. (Hint: Look up when Leo Kanner first published his autism
research.)
The
primary purpose of any experiment or study is to isolate one
particular variable as much as possible. If there are too many
variables, it can be difficult to say what caused the results.
There are multiple ways to isolate one variable. As many of them as
possible should be used.
The
first is to use a large sample group. Coincidences and unusual
phenomena happen. With a small enough sample group, it can be
difficult to tell if you're looking at a coincidence or an actual
result of the experiment. The sample group should be large enough
that ordinary coincidences are expected to appear. That way, it
becomes easier to tell whether a particular occurrence is happening
at a statistically significant rate.
Next,
we have to account for individual variation between different people.
The best way to control for this is to have a diverse and
representative sample. Many autism researchers look to particular
programs or classes for their research subjects. The problem with
this is that many of these programs only accept people within a
particular age range or level of intelligence, functioning, or
income. Most of them also draw largely from people within a certain
geographic area. This means that
autism is no longer the only variable involved in the study. The
findings become unusable outside the one type of group in the study.
The third way to
isolate a specific variable is to use a control group. A
control group is a second sample group, similar to the experimental
group, but without the variable being tested. The purpose is to
observe, alongside the experiment, what a baseline result looks
like.
One type of control
that's commonly used in treatment studies is a placebo. Often, a
person will react to a treatment just by virtue of expecting a
result. A placebo is designed to separate this phenomenon from actual
results. It works by giving the subject an inert equivalent of the
treatment while allowing them to believe it is the actual treatment.
Only differences between the treatment group and the placebo group
should be counted as real effects.
Another technique is
a blind or double-blind study. A blind study is one in which the
participants do not know whether they are in the experimental group
or the control group. Most studies are conducted this way. A
double-blind study is
when the researcher present with the participants also does not know
which group is which. This is done to prevent the researcher from
giving subconscious clues to the participants, or from subconsciously
biasing the results.
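As a sketch of how double-blinding can be arranged in practice (a hypothetical Python helper of my own, not any particular trial's protocol): participants are split at random into two coded arms, and the key mapping codes to treatments is held by a third party, so neither the subjects nor the researchers on site know which arm is which until the data are in.

```python
import random

def blind_assignment(participants, seed=None):
    """Split participants evenly into two arms labeled only 'A' and 'B'.

    Returns (roster, key). The roster, which everyone at the study site
    sees, shows only arm codes. The key mapping codes to 'treatment' or
    'placebo' is itself randomized and would be sealed away with a
    third party until the study ends.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    roster = {p: ("A" if i < half else "B") for i, p in enumerate(shuffled)}
    codes = ["A", "B"]
    rng.shuffle(codes)  # even the code-to-treatment mapping is random
    key = {codes[0]: "treatment", codes[1]: "placebo"}
    return roster, key

roster, key = blind_assignment([f"subject-{i}" for i in range(10)], seed=42)
print(roster)  # arm codes only -- no one on site can tell which arm is which
```

The point of the design is that the only document linking codes to treatments never enters the room where subjects are observed.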
To illustrate the
isolation of variables, I'd like to use what may be considered
an extreme example, Andrew Wakefield's research into vaccines. I
realize this is a controversial example outside the scientific and
medical communities. Allow me to explain why his research was never
fully accepted.
First, Wakefield
used a sample group of only twelve children. It is almost impossible
to distinguish actual results from a coincidence with such a small
group. Second, all of the children were drawn from his existing
gastroenterology work, meaning that they all likely had
gastrointestinal problems. That makes his results useless for anyone
who has autism without accompanying gastrointestinal problems. And
third, he used no control group against which to compare his
results. It is entirely possible that, had he used one or more
control groups, he would have noticed similar results in
unvaccinated autistic children, or different results in vaccinated
neurotypical children. Either one of these would have rendered his
observations irrelevant.
On a final note, you
will occasionally find a study where the researcher appears to have
decided on the conclusion before conducting the research. Usually,
this is unintentional. Researchers are, after all, human like the
rest of us. Sometimes, though, the data may be cherry picked to suit
the desired conclusion. If you look at my example of the costs of
movie productions, an astute reader may note that those costs have
risen more or less steadily over time, while the rate of autism
diagnoses appears to have risen sharply in the 1990s, shortly after
Hans Asperger's research was translated into English.
The check for this
is the principle that science must be repeatable. If other
researchers are unable to reproduce the same results, there is
probably something wrong with the study.
If I may return to
the Wakefield study into vaccines, there are two solid reasons to
suspect his results may have been influenced by his desired
conclusion. First, despite the many times his theory was put to the test, no
other researcher has ever been able to reproduce his results. Second,
it was later proven that at least five of his twelve subjects were
showing signs of autism prior to vaccination. This second point, in
particular, strongly suggests that this study may have been
fraudulent.
The major take-home
message from this should be: do not believe everything you read on the
internet. There are certainly plenty of valid studies out there. Just
be sure to check the methods and double check the data before you
believe it.