Some Old Poems....so you MFA poets can laugh laugh

Big Smiles on Cheshire Cats

That grin taunts Alice
flaunts logic (philosophical)
haunts my little girl dreams.
Which way ought I to go from here?

Steady pace it’s 7:16
Stairs winding first
through the kitchen
second doorknob, twist
stairs winding second, land with a thump
Twist, step-step-step, out.           out.
of options.
Firemen grazing on Menthols while
Slothful see-men lie.

It’s curious—a
grin without a cat.
Like a dream without a sleep.

Big Smiles in a Central Park

Wind flaps saffron flaps two focused faces beam
Her face is round, but round back in ’03 they called
her pretty gray eyes match a peacoat proper.

Rosy cheeks betray bravado     
Bundled up in nervous-wind-flapping saffron
Eyes hopeful,
Smiles fixed-posed-set-forever celluloid
Call back to my moment of wonderful awkward

Turned warm against the wind—
foreground, left—
heart skip-skipping when
mittened fingers brushed his cheek.
She was warm but
knows the cold is sweeping through the saffron

Chasing the Cold White

There—a cold white within her. A doctor who travels
by airplane and they talk about
New York City models,
she laughs.
Beauty in boniness and pallor.

There is a sickness inside, consumes halfway; makes
blood redder
skin whiter.

Suffering does not localize.

Wind, low rocks, icy water
breed familiarity and bacterium.
He worries about her children.

Later she will die and later he will too,
maybe he will cry. He will take off and
land and take off and land, following
the cold white as it grows.

Phillip Copper

Protruding plastic tubes
exit: left: elbow crook
over the river and through the woods to the blinking white ATM machine
Digital displays
A minute passes, as does 465 milliliters
“my body can’t handle 500”
Dirty blood—out. Clean blood—in.  
Beep beep. be-eee beeeeeeep.
She’s dozing—the loud talker,
under faded afghan
below Bob Barker.



Make sure to watch the whole thing.



I got the NPR internship next semester! Yayyyy Science Desk. And DC fun.


More Fun with Google

Another fun Google game, courtesy of Ryan James Wilson. To play, google your name with “is” and write down what comes up. Then google your name with “believes” and write down what comes up. Then combine them. Hehehheheehe soooo fun.

Ginny is not yet left to languish, therefore she believes in waiting for sex ‘till after marriage.

Ginny is a painted last strung doll, therefore she believes in old-fashioned service.

Ginny is hiding in front of the car, therefore she believes in giving back to her community at every level.

Ginny is very powerful magically, therefore she believes that looking good on the dance floor requires not so much a knowledge of memorized moves but instead a knowledge of movement.

Ginny is a dynamic workshop leader, therefore she believes in the “bloom where your planted” theory.

Einstein vs. The Nobel Committee

When Albert Einstein listed the most important honors of his life, he did not include the one with the highest profile and pay: the Nobel Prize. But perhaps this omission isn’t so surprising. The Nobel nod—17 years after Einstein published his special theory of relativity—came long after recognition by the rest of the world. Even more bizarre, the gold medallion was given to Einstein not for his relativity revolution, but for the relatively obscure discovery of the law of the photoelectric effect. Now, in the 100th-anniversary year of Einstein’s “annus mirabilis,” or miraculous year, one historian thinks he knows why. After years of sifting through letters and diaries in Scandinavian archives, Robert Marc Friedman of the University of Oslo says it was an intentional snub, fueled by the political atmosphere of post-war Europe.

In 1905, while working as a patent clerk in Switzerland, 26-year-old Albert Einstein published five seminal papers on the nature of space, light, and motion. Until that point, most physicists viewed the cosmos through Isaac-Newton-colored glasses: the planets and the stars obeyed the same rules as apples falling from a tree. Space was inflexible and could be described with Euclidean geometry; time ticked by at the same steady rate everywhere in the universe. But in one of his 1905 papers, the special theory of relativity, Einstein did away with the notion of absolute space and time—effectively turning the standard Newtonian model on its deterministic head.

In the next decade, Einstein built upon these ideas to include the concept of gravity. In 1915, his general theory of relativity proposed that gravity was not some mysterious force of attraction between bodies, but rather the result of the space distortions caused by massive bodies like the Sun and the planets. Moreover, it proposed that this curving of space affects not only particles, people, and planets—but light, too. Today, general relativity is celebrated as Einstein’s most impressive work. But Friedman says that in Germany after the First World War, Einstein was despised as a pacifist Jew who renounced his German citizenship, went to meetings of radical groups, and publicly supported socialism. His theories were dismissed as “world bluffing Jewish physics” by some prominent German physicists, who claimed to practice “true” German science based on observations of the natural world and hypotheses that could be tested in a laboratory.

Luckily for Einstein, British astronomer Arthur Stanley Eddington believed there was a way to test the general theory. If space was curved, as Einstein proposed, then light traveling through it should likewise follow a curved, rather than straight, path. Moreover, its path would be curved most around a very strong gravitational force like that of the Sun. This bending of light could be observed, it was thought, if photographs of stars that appeared very close to the Sun were compared to photographs of the same stars when they were not near the Sun. The only problem was that from an earthly vantage point, the Sun’s brightness drowned out that of nearby stars. But on May 29, 1919, six minutes of a total solar eclipse ironically provided enough obscurity for Eddington to measure the positions of the stars that appeared next to the eclipsed Sun. And sure enough, their apparent positions matched the predictions of Einstein’s general theory.
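A quick back-of-the-envelope for the curious (my addition; these numbers are standard general-relativity fare, not part of Friedman’s account): for a ray of starlight grazing the edge of the Sun, Einstein’s theory predicts a deflection angle of

\[
\theta = \frac{4GM_\odot}{c^{2}R_\odot} \approx 1.75'',
\]

where \(G\) is the gravitational constant and \(M_\odot\) and \(R_\odot\) are the Sun’s mass and radius. A purely Newtonian treatment of light as falling corpuscles gives exactly half that, about 0.87 arcseconds, so the eclipse plates could distinguish the two predictions; Eddington’s measurements came out near the full relativistic value.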

Almost overnight, Einstein became a household name throughout the world. Nominations for Einstein poured into the laps of the members of the Nobel Committee as they were reviewing candidates for the 1920 prize.  But the Committee’s objections to relativity went beyond its theoretical nature. According to Friedman, the Committee did not want a “political and intellectual radical, who—it was said—did not conduct experiments, crowned as the pinnacle of physics.” So the 1920 prize was given to the Swiss Charles-Edouard Guillaume for his ho-hum discovery of an inert nickel-steel alloy. When the announcement was made, Friedman says the previously-unknown Guillaume “was as surprised as the rest of the world.”

By the next year, what Friedman calls “Einstein-mania” was in full bloom. As his quirky personality (and untamed tresses) gained more popularity with the general public, his theory gained more credibility in the scientific community. In 1921, swarms of both theoreticians and experimentalists again nominated Einstein for his work on relativity. Reporters kept asking him, to his great annoyance: Would this finally be the year that he received a Nobel Prize?

But 1921 was not the year, thanks to one stubborn senior member of the Prize Committee: Allvar Gullstrand. Gullstrand was trained as an ophthalmologist, and his knowledge of theoretical physics left much to be desired; nevertheless, his arrogance, according to Friedman, led him to challenge Einstein’s theories. (As Friedman jokes, “In a small, isolated but locally prestigious academic environment, arrogance, like mold in a damp cellar, tends to thrive.”) In a 50-odd-page report, Gullstrand collected every published article—no matter how obscure—that even slightly doubted relativity, while omitting the far greater number that saluted it. One private remark by Gullstrand, which Friedman found buried in a diary, sums up his sour attitude: “Einstein must never receive a Nobel Prize, even if the whole world demands it.” Gullstrand’s arguments, however biased, convinced the rest of the committee. In 1921, no one was awarded the Nobel Prize in Physics.

Two prizes were thus available in 1922. By this time, Einstein’s popularity was so great that many members of the committee worried about their international reputation if they didn’t recognize him in some way. Like the previous two years, he received many nominations for relativity. But this year there was one nomination—from a certain Carl Wilhelm Oseen—not for relativity, but for the discovery of the law of the photoelectric effect.

The photoelectric effect was the subject of another of the five papers Einstein published in 1905, alongside the special theory of relativity. Until then, light was understood to travel as waves. But Einstein was the first to propose that light actually had a dual nature: it acted as both a wave and a particle. At first, this theory faced just as much controversy as special relativity. But there was a big difference: laboratory experiments conducted in 1916 showed the validity of the photoelectric effect. Friedman was the first historian to emphasize that Oseen wanted the Committee to recognize the photoelectric effect not as a theory, but as a fundamental law of nature. Why? Not because he cared about recognizing Einstein, but because he had another theoretical physicist in mind for that second available prize: Niels Bohr.
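For the record (my gloss; this is textbook physics rather than anything in Friedman’s research), the law itself fits on one line. Light of frequency \(\nu\) arrives in quanta of energy \(h\nu\), and the maximum kinetic energy of an electron it knocks out of a metal is

\[
E_{\max} = h\nu - W,
\]

where \(h\) is Planck’s constant and \(W\) is the metal’s work function. Plot \(E_{\max}\) against \(\nu\) and you get a straight line of slope \(h\), which is exactly what those 1916 laboratory experiments measured. A clean, testable law of nature was just the sort of thing the empirically-minded Committee could stomach.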

Niels Bohr had proposed a new quantum theory of the atom that Oseen felt was “the most beautiful of all the beautiful” in recent theoretical physics. In his report to the Committee, Oseen exaggerated the close bond between Einstein’s proven law of nature and Bohr’s new atom. “In one brilliant stroke,” Friedman explains, “he saw how to meet the objections against both Einstein and Bohr.”
After reviewing Friedman’s research, Bruce Hunt, an Einstein historian at the University of Texas at Austin, was especially intrigued by the way Oseen changed the emphasis from the theory of the photoelectric effect to the law of the photoelectric effect. “What Friedman brings out particularly well,” Hunt says, “is how deftly Oseen used the other Swedish physicists' worship of empirical results, and their denigration of ‘mere theory,’ to win them over.”

The Committee was indeed won over. On November 10, 1922, the Nobel Prize for Physics was given to Niels Bohr, and the delayed 1921 Prize awarded to Albert Einstein, “especially for his discovery of the law of the photoelectric effect.” Einstein, en route to Japan, wasn’t able to attend the official ceremony to accept his award. But according to Friedman, it wasn’t the medal that he cared about, anyway—it was the money. As the German mark decreased in value after the war, Einstein needed a hard foreign currency for alimony payments to his ex-wife. Moreover, the terms of his 1919 divorce proceedings dictated that she was already entitled to all the money “from an eventual Nobel Prize.” Hunt says calling attention to these financial arrangements “brings out the fact that Einstein was a much more worldly and savvy man than his later public image would suggest.”

Einstein gave his official Nobel lecture one year later, to a large and attentive audience that included the Swedish King, Gustaf V. Einstein was told to speak about the photoelectric effect. But the speech—which began: “If we consider that part of the theory of relativity which may nowadays be regarded as bona fide scientific knowledge…”—focused instead on the subject for which he clearly thought he should have won the prize.

Robert Marc Friedman’s story leaves another attentive audience with a new history lesson: “Einstein understood that the golden Nobel medallion is etched with human frailties…and like Einstein, perhaps we should also avoid the temptation of dancing around this modern golden calf.” Hunt says Friedman’s tale gave an oft-overlooked perspective on the way science was practiced and praised in the early 20th century.  “The decisions of the Nobel committees are often treated by the press and public as the voice of god,” Hunt says, but Friedman’s research brought to light “how political the deliberations of the Nobel committees sometimes were—and presumably still are.”


Google Game!

To play this game (which I stole from Ryan James Wilson!), just Google your name with the word "need." Then you'll find out what you've always expected is true: You do need that!

Ginny Needs:
1. a wig
2. three cute cover-ups to keep her warm!
3. some form of defense against her growing attraction to her new DADA teacher
4. the money to get the message out
5. to be pushed into the spotlight
6. a special home or office environment where she does not have access to soft material furniture such as a bed or couch
7. robes and a wand and everything
8. a day in the sun
9. another dimension

and finally….
10. to be an only pet


Clip Clip

So I finally got published…..even if there is absolutely no style in the piece. It’s on Science magazine’s online news site. Check it out at:


Finding New Bugs in an Old Broth

Charles Dickens gave it to Tiny Tim; Hippocrates described it as the most widespread disease of his day; paleontologists even found traces of it in 5,000-year-old Egyptian mummies. Tuberculosis is an old disease. And the diagnostic tests for TB, in contrast to the “cutting-edge” progression of most medical technologies, are similarly ancient. The majority of the world’s hospitals use a “sputum smear test” that has remained unchanged since its invention in 1881: your suspect phlegm is placed in a glorified Petri dish of nutrient broth, where the lung-eating bacteria can grow, though very slowly. After many weeks, when they’ve grown into visible clumps, a microscope can identify the killer bug. But how many people will you infect while waiting for test results?

Tuberculosis diagnosis “is as old-fashioned as it gets,” says Dr. Richard Chaisson, the founder of the Johns Hopkins Center for Tuberculosis Research. Faster, cheaper and more accurate diagnostic tools are desperately needed, Chaisson says, to curb the growing epidemic of TB—a curable disease that still kills 5,000 people every day. This summer, three biotech companies announced partnerships with FIND, the Foundation for Innovative New Diagnostics, to develop better TB-testing products. But a large-scale study is about to be released suggesting the most effective diagnostic method is not a product at all, or at least not a patentable one. It’s just a new way of looking at an old broth.

The global TB crisis made U.S. headlines on October 17, when pharmaceutical kingfish Bayer announced it will allow one of its best-selling antibiotics to be tested against tuberculosis. Chaisson, who was instrumental in the deal, says the drug will reduce treatment time from six months to four. Still, he has reservations about its effect on the epidemic’s spread through the population. “The individual cure rate is awfully good,” he says, “but the number of cases is still going through the roof.”  This is partly because of the increase in HIV infections; those with HIV have compromised immune systems and are thus more vulnerable to TB. But it also stems from the bug’s ability to adapt: strains have evolved that are resistant to every major antibiotic. Because TB is often spread more quickly than it is identified, Chaisson says the answer lies not in faster drugs, but faster diagnostics.

Today’s sputum smear test takes far too long. In Sub-Saharan Africa, where both TB and HIV run rampant, patients can expect to wait 12 to 16 weeks for test results, according to FIND. And the sputum smear has other problems, too. Making the broth requires electricity—unavailable in most clinics of the third world—for mixing and refrigeration. Moreover, it can’t reliably detect the presence of drug-resistant TB.

Since 2003, FIND’s mission has been to tackle these problems. This summer, three international biotech companies announced financial partnerships with FIND to develop new tests that use color-changing strips or simple test-tube reactions to detect proteins that are found in many strains of TB, getting results in hours or even minutes. One promising product is called “TK medium.” When the medium, a red substance, is mixed in a test tube with active TB bacteria, the color turns green. “Nobody knows yet why it works,” Chaisson says. “They’re about a buck each, and you could sell tens of millions of them a year.”

But no fancy new products are needed for what seems to be the best test of all. In the early 1990s, a lab tech in Peru noticed that TB bugs can be detected—using a common broth medium and a regular light microscope—weeks before the bugs grow into visible clumps. Chaisson finds it remarkable that no one had thought of the method—now called MODS—before. “The only drawback,” he says, “is that it’s not patentable.” So for now, FIND won’t fund MODS.

Compared to most bacteria, the growth of TB bugs is interminably slow. And according to Chaisson, slow too is the technological progression of its treatment and diagnosis. He describes, with obvious disdain, the conventional wisdom of most TB doctors: “My god, if it was good enough for my grandfather, then it’s good enough for me.” So perhaps MODS—using old tools and an old broth—is exactly what’s needed to unite the old and new medical philosophies, to keep the bug from staining future pages of human history.  


Can Science Save The Holy Wisdom? (And Does It Need Saving?)

So why has it been ages since Ginny has updated her blog? Well, a certain feature story (and its associated interviews and background research) about earthquakes, architecture, and the Byzantine Empire has taken most of my writing energies. Here’s the first paragraph…..perhaps more of the 4,000-word treatise will come later. Lots of love to all my bloggers! Mwa

Every year, thousands flock to Istanbul to see the church that scholars through the ages have called the most magnificent structure on earth: the Hagia Sophia. Its name is Greek for “Holy Wisdom,” and in 537 AD the 180-foot-tall domed basilica became the most visible symbol of Justinian’s new Byzantine Empire. For 1,500 years, in a land notorious for political instability, the Hagia Sophia has stood tall and resilient, even transforming from basilica to mosque when the Muslim Ottomans conquered the city in 1453. And sitting on top of a major fault line—one that has caused no fewer than three dozen major earthquakes to shake Sophia—the monument has also survived serious geophysical instability. International teams of civil engineers and earthquake scientists are using computer models of today’s church to figure out how it has already withstood such seismic stress. But after the most recent devastating quake in 1999, head researcher Ahmet Çakmak told the New York Times: "The fault that runs closer to Istanbul is still very dangerous…The newspapers are saying we survived the big earthquake, but that's silly. It's a big mistake. What we should do is learn from this one, expect a bigger one and be prepared." If Istanbul is hit with a quake of unprecedented size, the big question is whether the Holy Wisdom will need some 21st century technology to—literally—back it up.


Hitting the "Maleness" JAKpot

We all learned it in grade school: Boys have a Y chromosome, and girls don’t. A genetic switch turns on maleness or femaleness. But actually, it’s not so simple. A developing embryo’s search for its sexual destiny follows a long and winding road. “I don’t like the term sex determination,” said Mark Van Doren, an assistant professor of biology at Johns Hopkins University, because “it implies one moment. But it’s actually a very long process.”

The process starts with the germ cells that have the unique ability to create a new organism; male germ cells go on to produce sperm, while female germ cells produce eggs. But how is the sex of the germ cell determined? In Drosophila fruit fly experiments published in the July 28 issue of Nature, Van Doren and his colleagues found that when activated by neighboring tissue, a certain chemical pathway—JAK/STAT—develops male, but not female, germ cells.

By the time a young germ cell starts down the road to sexual identity, its gonad neighbors—called somatic cells—are already different in males and females (male somatic cells express a specific gene called doublesex). And previous studies had shown that these differentiated neighbors somehow influence the germ cell’s sexual destination. As Van Doren explained, “Germ cells can’t do it on their own...they need a specialized soma” to tell them how to develop.

To find out how exactly the germ cells are influenced by their somatic neighbors, Van Doren’s team first looked at fruit fly gonads with a male germ cell surrounded by male somatic cells. In these situations, the JAK/STAT chemical pathway was always activated—that is, a specific molecule set off a chain of reactions that ended in the expression of a protein called “STAT” in the germ cell. They knew STAT was expressed because they had added molecules with fluorescent tags to find and bind to STAT proteins, in effect “lighting them up” for anybody peeking through the microscope. For the next set of experiments, this fluorescent presence would indicate germ cell maleness.

Their next step was to see what would happen to the sex of the germ cell if they broke one of the links on the reaction chain. When they inhibited JAK/STAT in gonads with male germ cells surrounded by male somatic cells, the germ cells no longer expressed the STAT protein and thus, as Van Doren said, “Male germ cell identity was lost.”

Their most remarkable experimental manipulations, however, observed the function of JAK/STAT in female germ cells. They placed female germ cells in surrounding tissue that expressed the doublesex gene (and was therefore male). These male somatic neighbors triggered the JAK/STAT pathway, telling the female germ cells to express the STAT protein. “We took a female and made it look male,” Van Doren said, “And that’s really the wow factor.”  

Van Doren and his team have now shown that activation of JAK/STAT leads to the development of male germ cells and thus, a walk down the road to Spermville. But, he stressed, this doesn’t mean females take the path of least resistance. Soon the biologists will look for somatic signals that point to Eggdom instead. Whether flies or humans, though, these signals are just early signposts along the long and tortuous road to true sexual identity.


Meagan Got a Clip!!!

Check it out:



Dead Stars Captured by a Dying Hubble

On some nights, astronomer Andrew Fruchter is jolted from sleep by a Prokofiev theme blaring from his cell phone. The urgent message: a massive star died at the edge of the universe—several billion years ago. Unable to fight the force of gravity, the star collapsed into a black hole, setting off a huge supernova explosion and releasing a jet of high-energy light. And now, after traveling billions of light-years, the jet’s extraordinary blaze has been spotted by satellite telescopes. The star’s collapse was long over, but the astronomer’s work had just begun.

For four years, Fruchter and his colleagues at the Space Telescope Science Institute in Baltimore, MD, have worked to pinpoint the source of these stellar explosions, called long gamma ray bursts, or LGRBs. Packing enough energy to supply the world’s electric needs for a billion billion billion years, an LGRB in our galaxy could obliterate the planet. But Fruchter’s most recent research, soon to be published in Nature, radiates good news: LGRBs don’t occur in our kind of galaxy. Based on photographs taken by the aging Hubble telescope, his work might also give reason for its contested, expensive repair.

With photos taken by the Hubble Space Telescope between February 1997 and October 2004, Fruchter compared the luminosity of galaxies containing LGRBs with that of galaxies hosting core collapse supernovae, the celestial phenomena that spawn LGRBs. Core collapse supernovae are gigantic explosions that occur throughout the universe when a massive star runs out of nuclear fuel. But Fruchter’s photos indicate that the select few supernovae that produce LGRBs go off in specific types of galaxies—those that are small, non-spiral, and have clumps of very bright stars rather than an even distribution. They’re “scruffy little things,” Fruchter says, “not your normal, pretty galaxies.” In other words, they’re not like ours.

But why do LGRBs occur in these particular galaxies? Fruchter says LGRB galaxies are not very chemically evolved. That is, they’re made up of mostly hydrogen gas, while other galaxies—like ours—contain life-giving elements like carbon, oxygen, and nitrogen. Fruchter speculates that in evolved galaxies, these ions interact with the stream of protons and electrons that normally encircles a star, causing the star to lose a lot of mass. What’s more, he thinks the ions create a magnetic field that keeps the star from spinning as fast as it would in a galaxy made of only hydrogen. With less mass and slower spins, the stars in our galaxy are less likely to form the black holes that lead to LGRBs.

Astronomers detect an LGRB using the three telescopes of the “SWIFT” deep-space satellite. First, one of the telescopes detects the gamma rays of the LGRB. Within seconds, the other two “swiftly” look for the x-rays and visible light rays of the afterglow, to determine the LGRB’s exact position. The data relays to ground computers, which then pass it on to Fruchter’s cell phone. Prokofiev blares, and as Fruchter explains, “then I have to get ground-based telescopes to go follow these things.” And finally, the Hubble can take a picture. In recent months, NASA has been debating whether to give the 15-year-old telescope and its aging parts a $700 million tune-up. Fruchter is optimistic, speculating, “If they can get the shuttle working again, I think they’ll service Hubble.” If the Hubble is put to bed, Fruchter’s research will be too. But at least he’d get a good night’s sleep.


Adderall Abuse on College Campuses

Gavin balances the metal tray on his lap, completely focused on the task at hand. He grinds the three orange pills to powder—slowly, steadily—with the bottom of a heavy cocktail glass, while eight sets of eyes watch in eager anticipation. Sitting in the middle of the room, he is quite literally the center of attention—and loves it. He babbles excitedly without purpose or pause.

“Did you see that guy downstairs?”

“When are we leaving—wait, wait, where are you going?”

“Who sings this song?”

Someone tells a joke. Gavin giggles.

After about ten minutes of compulsive mincing, he uses the edge of a dollar bill to form thin, uniform lines of the white powder.

He says jokingly, “Look, I’m using dirty money like a cokehead.”

But it’s not cocaine that Gavin and the eight other Brown seniors are about to snort for a late-night energy rush. It’s what many doctors refer to as “kiddy coke,” and it is sweeping the dormitories and libraries of colleges throughout the nation.

It’s called Adderall, and can be bought for as little as $3 a pill. But Gavin gets his supply for free, from his best friend Ben.

Adderall is an amphetamine, a fast-acting stimulant in the same chemical family as cocaine and the infamous crystal meth. But unlike those narcotics, Adderall’s legal.

Prescribed in tablet form, it effectively curbs the restless tendencies of many hyperactive children. It makes it easier to function in social and educational situations, which is especially beneficial to kids who have trouble paying attention at school.

Before he started Adderall, no one ever told Ben he had a learning disorder.

“I was always a very good student, you know, always got my stuff done,” he said. “But, after I started taking the pill, I was smarter. I could read for a long time, and got really interested in what I read.”

Since the early 1960s, close to 200 studies involving more than 6,000 children have investigated the efficacy of stimulants like Adderall for the treatment of inattentive kids. And according to a 1998 review article in the Journal of the American Medical Association, the meds work: stimulants significantly reduced hyperactivity and increased focus in more than 70 percent of the children tested.

But why use a stimulant to decrease hyperactivity?

Dr. Ronald Cohen, a clinical neuroscientist who specializes in attention at Miriam Hospital, said it’s because of the precise workings of the brain’s frontal lobe, the seat of our attentional processes.

“It may seem paradoxical,” he said. “But really, there’s an inhibitory system in the frontal lobes that tells you: stop, look, listen. So the stimulant better activates this system, making kids less hyperactive.”

Today, Adderall is America’s most widely prescribed drug for Attention Deficit Hyperactivity Disorder (ADHD), having surpassed the better-known Ritalin in 2000.

ADHD is the most commonly diagnosed psychiatric disorder of childhood, according to the National Institute of Mental Health. The neurologically-based disability affects between three and five percent of school-age children—that’s three million kids nationwide—and it occurs three times more often in boys than in girls.

No one knows exactly what causes ADHD, but twin and cross-generational studies have shown that it does have a genetic component. In 1994, the American Psychiatric Association set specific criteria for the accurate diagnosis of ADHD, which include the persistent symptoms of inattention, hyperactivity, and impulsivity.

The diagnosis of the disability has been a controversial topic in the U.S. in the past few decades, mostly because there’s no blood chemical, no brain protein, no physical blemish of any kind that definitively marks an ADHD kid from any other squirmy elementary schooler.

Ben now believes that he was originally misdiagnosed with ADHD by a psychiatrist he saw for depression and anti-anxiety problems.

“He started asking me questions, like, ‘when you’re reading a long and boring book for school, do you put it down in the middle?’ and I’d say, ‘yeah, sometimes.’ And then suddenly I had Attention Deficit Disorder,” Ben recalled.

So Ben started taking Adderall. He now takes two orange pills, 20 mg each, as soon as he gets up in the morning. Because the drug is time-released, his body absorbs it slowly all day long. He usually feels it start to wear off by nightfall.

Since the early nineties, American doctors have diagnosed more and more cases of ADHD. According to a 2000 study from the Journal of the American Medical Association, the use of stimulants by 2- to 4-year-olds increased three-fold from 1991 to 1995.

Cohen partly attributes the increase to the pressures of modern society, where so much emphasis is placed on academic achievement.

“Parents are looking for reasons why their kid isn’t doing as well as the next-door neighbor’s kid,” he said. “There are many kids with mild attention problems, who are singled out because they are only getting B’s in school but may have the IQ potential to get A’s. So do you call it ADD?”

And because children diagnosed as toddlers often take the stimulants well into adolescence and adulthood, the number of college students who pick it up weekly at the local pharmacy is also on the rise, making the drug readily available on any college campus.

Attesting to this, the scientific journal Addiction published a study earlier this year that surveyed over 10,000 randomly-selected college students across the nation. The results were telling: 6.9 percent of respondents admitted to non-medical use of prescription stimulants like Adderall, and at some colleges this number reached as high as 25 percent.

Why do so many college students seek out Adderall?

“They call it the ‘academic steroid,’ you know,” Ben explained. “I don’t think it’s usually recreational. It’s just a shortcut, a way for them to get by without working as hard.”

The survey also found that non-medical use was higher among males, whites, fraternity and sorority members, and students who earned the lowest grade point averages. Students abused the drug most at colleges with competitive admissions standards in the northeast.

At Brown, especially in this season of lengthy term papers and tough exams, it’s Adderall’s ability to increase focus and motivation that appeals to students under a lot of academic pressure.

Gavin, a senior English major, has used Adderall dozens of times at Brown to write long academic papers.

“You take it and half an hour later, all you want to do is write,” Gavin said. “You pretty much finish the task in front of you. It lasts for like 12 hours, so you get done whatever you need to get done.”

Because it’s prescribed to children, many college students perceive Adderall as harmless. But this is far from accurate. For those with ADHD, the drug’s side effects may include decreased appetite, insomnia, increased anxiety, and irritability.

“I don’t eat when I’m on Adderall,” Ben said. “I can’t—it’s gross to see food, especially four or five hours into it. But then I get really hungry at night when it starts to wear off.”

And for those without ADHD, the drug can take an even harder toll. Dehydration, hot flashes, stomach pains, marked aggression and even personality changes are possible side effects, according to the National Institute of Mental Health.

What’s more, Cohen warned that users without prescriptions may have unknown pre-existing conditions that would make consuming the drug especially harmful.

Without a full medical history, he said, taking Adderall is like “playing Russian roulette.”

“If someone is relatively stable they might not have a reaction to it,” he explained. “But if you give it to someone who has a predisposition toward schizophrenia or manic episodes, there’s a big risk of going over the edge.”

With the recent rise in U.S. Adderall prescriptions, many experts worry that increased availability means increased potential for abuse.

And worry they should. The Addiction survey of 10,000 students found that 54 percent of undergraduates who were legally prescribed stimulants for the treatment of their ADHD had been approached to sell, trade, or give away their meds within the past year.

The same survey showed—unsurprisingly—that non-medical use of stimulants correlated highly with the use of other illegal drugs. Of those who admitted to non-medical use of stimulants, 84.6 percent also used marijuana, 51.7 percent used ecstasy, and 34.6 percent used cocaine in the past year.

The potency and dangers of Adderall’s non-medical use increase sharply when it is snorted or injected, because the drug enters the bloodstream directly. Snorting Adderall, like any stimulant, may cause damage to the nasal and sinus cavities, respiratory problems, irregular heartbeat, psychotic episodes, and even death by toxic shock.

Cohen explained, “Even though you’re taking the same amount of drug, more of it is getting into the blood, and quickly, so it’s as if you were taking a much higher dose.”

The term “kiddy coke” is an appropriate one, then, for Adderall is chemically very similar to cocaine. Both drugs flood the brain with dopamine (cocaine by blocking its reuptake, Adderall’s amphetamines by triggering its release), elevating mood and alertness.

Gavin snorts it when he’s feeling sluggish before a long night of partying.

“When I snort it,” he said, “it has more of an energy effect. You’re just wired and you want to talk and have a good time.”

Also like cocaine, the Adderall crash can be quite extreme.

“Coming down is the worst,” said Gavin. “You definitely can’t eat, can’t go to bed...I have to drink or smoke [pot] if I want any chance of sleeping. So you’ve got to make sure to do it only when you don’t have anything important the next day.”

With the large rise in both medical and recreational Adderall use, there is a growing fear that a new generation will come of age addicted to these “study drugs.”

Though he takes Adderall exactly as his doctor recommends, Ben still worries about what the drug does to him.

“It’s a drug I’m addicted to now, not just something that I’m being prescribed. They tell you you’re not changing yourself, that it makes you more like your normal self, but I’m not the same person I was.”

Gavin, too, recognizes the dangers of Adderall addiction.

“There is that risk, yeah. I mean, 20 mg every three weeks is very different from Ben’s 40 [mg] a day. But I realized that in the real world I won’t be able to just take an Adderall to get my work done. It won’t be as easy to get.”

Ben hopes for a time in the future when he will no longer feel dependent on Adderall.

“This summer I want to switch to a new drug—Strattera—it’s not a stimulant,” Ben said. “I figure it’s a good time to get off now that I’m done with school, and yeah, I wonder what it’ll be like to get off of it. But I think it’s the right thing to do.”

If a student is caught with non-prescribed Adderall at Brown, the administration would generally follow state mandates, just as if it were cocaine or marijuana.

But Brown policies on use and distribution would depend on individual situations, said Associate Dean of Campus Life Terry Addison.

“It is an illegal drug. The student would have a judicial involvement, an internal investigation,” he explained.

Addison also said that distribution of the drug is more egregious than mere consumption, and that the amount exchanged would be a factor.

“We would ask if it was just a couple of pills or a hundred,” he said. “Were they giving it to their whole floor or just one or two of their friends?”

The punishment would be still more severe, he said, if the student was uncooperative or combative when apprehended.

Regardless of how effective or fun the prescription drug may be, students should realize that when used without a prescription, Adderall is ultimately an illicit drug like any other.

Beyond the risk of disciplinary action, there are serious and long-term physical dangers that far outweigh any short-term academic advantage or single night of energy.

So this exam season, think of your health and try a Red Bull instead.


Risky Business

Imagine a woman with little financial savvy who, after saving some cash, is looking for a good investment. For a safe bet with small payoffs, she could buy government bonds. But if she’s willing to lose it all for a chance at fortune, she could instead buy shares of a volatile stock. Either way, her brain must somehow evaluate her economic circumstances and take action. As Daniel Bernoulli, the Swiss mathematician and physicist who is perhaps best known for his principles of fluid dynamics, explained in a 1738 paper on economic theory: “…there is no doubt that a gain of one thousand ducats is more significant to a pauper than to a rich man, though both gain the same amount.” The woman’s choice might depend upon her job security, for instance, or the number of children she has to support, or even the number of years she has left to enjoy those potential earnings.

New research in monkey brains is coming closer to decoding the brain signals that ultimately drive such decisions. In a study published in this month’s Nature Neuroscience, researchers at Duke University found patterns of neural signaling that reliably indicate the riskiness of a choice. Their findings may lead to mathematical models of human risk behaviors and a better understanding of why we sometimes make irrational and even harmful decisions.

How do researchers watch and manipulate monkey decisions? Most begin by teaching the animals to play a simple game on a computer screen, called a visual gambling task. Two circles appear on the screen, and the monkeys are trained to pick one of the two by looking directly at it. Eye-tracking sensors attached to their heads tell the researchers which circle they have chosen and connect to a feeding tube that instantly pays them a fruit juice reward. Earlier studies focused on the objective factors that influence their decision, like the size of the reward, because those are much easier to measure than subjective factors like how thirsty the monkey is or how much it enjoys the thrill of taking a risk. In 2003, for instance, neuroscientist Allison N. McCoy found that when monkeys were given the choice between a circle that gave a small juice reward and another that gave a large one, their brain responses were stronger when they chose larger rewards. These results are perhaps not so surprising—a $20 million jackpot is, after all, more exciting than a $20 scratch card.

But little had been done to test the more difficult subjective factors—until now. In the new Duke study, McCoy collaborated with Michael L. Platt to tackle the question of what choice would be made if they kept constant the objective value of the reward—i.e., the amount of juice the monkey would get after many trials—but varied the riskiness of its choice. By looking at one of the circles, the “certain target,” the monkey would always get a fixed amount of fruit juice. By looking at the other, “risky target,” though, it had a 50:50 chance of getting a shot of juice that was either smaller or larger than the certain target. In other words, to look at the risky target was to take a gamble.

And McCoy and Platt were able to systematically change the size of the gamble. Say the certain target paid out a 50-ml reward; because it’s certain, this was paid every single time the monkey looked at it. The risky target, remember, didn’t always give the same reward. On “high-risk” trials, a monkey choosing the risky target got either a 75-ml reward or a 25-ml reward. In contrast, on “low-risk” trials, looking at the risky target reaped either a 45-ml or a 55-ml dose. In all cases, the average pay-off over many trials was the same—using these hypothetical numbers, 50 ml. So you wouldn’t expect the monkeys to prefer one option over another. But here’s the kicker: McCoy and Platt’s monkeys overwhelmingly chose the risky target. What’s more, they chose it more often on the high-risk trials.
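For readers who like to see the arithmetic, the logic of the design can be sketched in a few lines of Python (the numbers are the article’s hypothetical values; the study’s actual task code is not public, so this is only an illustration of why no preference would be expected):

```python
import random

def expected_value(outcomes):
    """Average payoff over equally likely outcomes, in ml of juice."""
    return sum(outcomes) / len(outcomes)

certain = [50]        # certain target: always pays 50 ml
low_risk = [45, 55]   # low-risk target: 50:50 chance of 45 or 55 ml
high_risk = [25, 75]  # high-risk target: 50:50 chance of 25 or 75 ml

# All three options pay exactly the same on average...
assert expected_value(certain) == expected_value(low_risk) == expected_value(high_risk) == 50

# ...so over many simulated trials, gambling gains a monkey nothing on average,
# yet the real monkeys overwhelmingly chose the risky target anyway.
random.seed(0)
trials = 100_000
avg_risky = sum(random.choice(high_risk) for _ in range(trials)) / trials
print(round(avg_risky))
```

Because the averages are identical, any consistent preference for the risky target reflects the monkeys’ attitude toward risk itself, not the size of the payoff.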

In the next part of the study, the neuroscientists looked for patterns of neuron firing that correlated with these preferences. They inserted an electrode—a tiny needle that records the electrical signals produced by a single neuron—into a long and narrow brain region spanning from the top of the skull to the top of the ear, called the posterior cingulate cortex. This area was chosen for recordings because previous studies had shown its cells to be highly responsive during the activities involved in the visual gambling task: seeing shapes on a screen, moving the eyes quickly, receiving a reward. McCoy and Platt recorded the electrical patterns generated while the monkeys performed the visual gambling game. Not only did the neurons fire more when the monkeys took risks, but they also fired in a predictable code: as risk was systematically increased, so too was the firing frequency. In short, the cells were exquisitely sensitive to risk.

It would be easy to translate these primate results into human terms: monkeys prefer risk, therefore humans prefer risk. But human behavior is complex, and the temptation to assume that these findings will lead to the ability to predict human behaviors should be squelched. As Daeyeol Lee put it in a Nature commentary, “an individual might insure a car used to drive to the casino.” Moreover, even given identical circumstances, every brain has a slightly different inclination toward risk; not every casino patron becomes a pathological gambler, and not every teenager who picks up a cigarette goes on to smoke a pack a day. Like most of today’s neurological research, these results are remarkable mostly because they expose systematic and surprising patterns at a microscopic level, though the authors did hint that their study may serve “as an important model for probing the neural processes that underlie pathological risk taking in individuals with addictions to drugs, sex, food or gambling.” As for the woman looking to invest—nothing ventured, nothing gained?


Of Schizophrenic Mice and Men

He starts with a normal mouse, then changes a single gene to give it a “beautiful mind.” Though neuroscientist Akira Sawa will never know if the altered critters hear imaginary voices, he does know their brains look remarkably similar to those of the 1% of humans who have schizophrenia.  

Working at the Johns Hopkins University School of Medicine, Sawa has recently added a mutated gene—previously linked to schizophrenia—into the brains of mouse embryos. Remarkably, he found this stunted their development in the same way it does to many human schizophrenics. These modified mice could serve as models of the disease, to design and test new drugs. And further down the road, Sawa hopes to create more effective therapies using them in combination with ground-breaking stem-cell technology.  

Schizophrenia—a disease that causes hallucinations and emotional apathy—is usually diagnosed between ages 15 and 30. Many researchers had thus assumed that it’s caused by environmental influences after birth, such as viral infections or psychological stresses. But in recent years, research on identical twins and autopsied brains of schizophrenics has pointed to a genetic link.

Convinced of the genetic basis, Sawa looked for a gene or mutation that was reliably associated with the disease—a daunting task, as “more than 100 genes have been implicated in the disease but maybe 90 are junk.” He chose a gene called DISC-1 (Disrupted-in-Schizophrenia) for two reasons: “the other candidates were known genes with known functions…and only [DISC-1] has a clear disease-associated mutation.”  

To get mut-DISC1 into a developing mouse’s genome, Sawa’s latest study took advantage of the gene’s electrical properties. He placed a pregnant mouse in an electric field such that the negatively-charged mut-DISC1 was attracted to the positive end of the field, and thus forced into the embryo’s brain. Although Sawa said the technique only worked in one to five percent of cells, it was enough to dramatically change brain development.

In the mutated mice, the migration of neurons from the inside chambers of the brain to the outer layers of cortex was severely delayed. What’s more, an abnormal orientation of neurons in their brains matched that seen in the autopsied brains of human schizophrenics. Taken together, these results show that mut-DISC1 disrupts brain development in a pattern that mirrors the human disease.

But mut-DISC1 isn’t a gene for schizophrenia. If “a mutation is in the same gene, but in a different place [within the gene],” Sawa said, “clinical manifestations can be very different.” In identical twins, for instance, sometimes only one gets the disease. “Even if they have the exact genome,” he said, “they may have different gene expression or one may catch a viral infection or…have different social stresses.”

The mouse models can be used right away to test new drugs; Sawa admitted he was funded by several pharmaceutical companies. But he’s more excited about a new treatment that uses human stem cells. Starting with a biopsy through a patient’s nose, neuronal stem cells could be harvested and then manipulated in the lab to replace the damaged ones. They “mimic neurodevelopmental features, even in an adult,” he said. “Here,” he added, pointing to his nose, “they’re still young.” Though the therapy is still three to five years away, Sawa has high hopes that stem cells, used in combination with the mouse models, will ultimately “bridge bench to bed.”


Aliens Invade

Last month, five illegal aliens were caught in Queens, skulking in the brackish water of Meadow Lake. After positive identification, New York State authorities cut them with knives until they bled to death. But the government had no other choice. For unlike the illegal immigrants found working the fields of central California, these were alien fish, whose continued survival could lead to dire economic and ecological consequences.

The exterminated individuals were Northern snakehead fish (Channa argus argus), native to Asia. Also known as “Frankenfish,” the MO of these monsters comes straight from the annals of science fiction. Their heads, covered in the snake-like scales that give them their name, also hold a mouth full of sharp fangs. Their torpedo-shaped bodies grow as long as 40 inches and as heavy as 15 pounds. They can breathe air, and “walk” on land using their pectoral fins. They’ll survive for several months under iced-over waters or even buried in mud banks. Scarier still is an insatiable and indiscriminate appetite—they’ll feast on anything on land or sea, from fish and frogs to ducks and even small mammals. Secretary of the Interior Gale A. Norton aptly called them “something from a bad horror movie.”

Yet the public has taken little note of the Queens incident, and the media have described it with tongue in cheek. In an August 9th New York Times article, for instance, Anthony DePalma makes light of the problem, writing, “Lady Liberty…might have a hard time getting at all gushy about some of the most recent immigrants to the city.”

This mockery of the snakeheads is unfortunate, and surprising, because this isn’t the first time the fish have reared their ugly heads. Just three years ago, a tourist bought a snakehead at a fish market in Manhattan’s Chinatown. (Sold for about $9 a pound, steamed strips of the fish are often combined with ginger and scallions in sweet soup recipes. Cooks like them for their freshness, since they can survive up to four days out of water.) The man took his new pet back to Crofton, Maryland to raise it. But there was a problem: no matter how much he fed it—up to 12 goldfish per day—it wouldn’t stop growing. When he finally dumped the beast into a small pond behind a local shopping center, he had no idea he was breaking a state law, or that his act could lead to an ecological disaster for the entire Chesapeake Bay.

Sadly, most Americans are not now and have never been aware of the dangers caused by the introduction of invasive alien species. For hundreds of years, Europeans and Americans who ventured abroad have shipped slaves, gold and, yes, biological booty back and forth from their native lands. Take one romantic, if not odd, Eugene Schieffelin. In the late 19th century, this wealthy contributor to New York theatre was determined to give the New World all of the birds ever mentioned in the works of Shakespeare. Most couldn’t survive the American habitat. But he released 60 pairs of starlings (mentioned by Shakespeare just once, in Henry IV: “I'll have a starling shall be taught to speak…”) that would propagate to become our most abundant and annoying bird species, pushing out natives like bluebirds and martins.

Granted, creatures from afar are often romantic, novel, exotic—it’s why everybody loves the zoo. So an earnest public might ask, why not bring them here? What’s the big deal?

According to an extensive 1993 report by the Office of Technology Assessment (OTA), over 900 exotic, free-ranging species have caused ecological or economic harm in the United States. To understand why, consult evolutionary theory. Alien species, by definition, did not evolve in or for our specific ecosystem. Thus, when thrown into a new environment—with a new climate, new food sources, new competition, new prey—their behaviors are unpredictable. Unsurprisingly, many cannot survive at all. But others are superbly equipped to travel, thrive and even dominate their new surroundings.

Take that wretched Northern snakehead, for example. In its native Southeast Asia, it lives in irrigation ditches and rice paddies. After the rainy season, these trenches dry out and the fish must migrate to a wetter place. Hence, for thousands of years, the creatures with fins that allowed them to flop around on land are the ones that survived. A similar adaptation evolved in the snakehead’s respiratory system: an air sac that allows it to absorb oxygen when it’s out of water. The snakeheads, thus falling into the category of invasive foreign species, are a North American ecosystem’s worst nightmare.

The biggest ecological problem with invasive species is the loss of biodiversity. In the 1960s, a “walking” catfish from Sri Lanka was exported to a Florida fish farm. Within 10 years, the catfish had spread to 20 Florida counties. In the decades since, up to 90 percent—that’s 4,000 pounds per acre—of the area’s fish kill has consisted of the catfish. In Australia, a similar invasion has caused the rapid decline of native frog populations. There, a deadly virus was introduced to Queensland and transmitted to frogs by an imported fish. The telling part: the fish was not sold for eating, but for ornament.

Another negative effect of invasive species is the destruction of natural resources. The Siberian gypsy moth, an insect that strips the leaves from spruce, larch, and fir trees and in large numbers poses a great threat to coniferous forests, ranks among the top three of Russia’s biological pests. The OTA’s report divulged that thousands of raw logs imported to the United States from Siberia carried—guess what?—Siberian gypsy moths.

And those who scoff at the ecological consequences may sit up when they hear the situation affects their pocketbooks. Loss of biodiversity often translates into loss of money, and lots of it. The Australian brown tree snake, accidentally introduced to Guam in the early 1950s, now thrives there with up to 13,000 snakes per square mile. The snakes, by crawling on power lines, have caused more than 1,200 power outages since 1978. Since its arrival in 1892, the cotton boll weevil has cost the U.S. cotton industry $13 billion; zebra mussels—introduced to the Great Lakes from Caspian Sea water dumped by a transatlantic ship—have clogged American pipes since at least 1988, demanding utility repairs to the tune of $3 billion.

So whatever happened to those snakeheads dumped in the Maryland pond three years ago? Officials knew the fish had to be eradicated, for fear of their spread. Because they can “walk,” they could have left the pond, crawled a mere 75 feet to the Little Patuxent River, and from there invaded the Maryland river system and Chesapeake Bay. The pond could have been drained, except some of the fish would inevitably have buried themselves in the mud until they could make it to the Little Patuxent. Electroshock treatments didn’t work, due to dense vegetation in the pond. Traps were used with some success, but authorities still didn’t know for certain that all of the fish had been killed. A complete purge came finally when they poisoned the entire pond with a plant-derived toxin called rotenone.

If not public awareness, the incident in Maryland at least spawned legislation. Within a few months, a federal law was passed banning the importation of snakeheads. But for many working on the problem, the ban didn’t go far enough. California is one of 13 states where it is illegal to import, transport, or even possess a snakehead. The state’s Fish and Game Commission has argued that the federal ban lacks teeth, as fish are imported together in high volumes and the snakeheads are small enough to hide behind larger ones. As Miles Young, a lieutenant in the California Department of Fish and Game, told the Sacramento Bee: “It’s been an enforcement problem.”

Back in Queens, now a full month since the finding of the five new snakeheads, the problem rages still. Undoubtedly, kin of the five exterminated fish are lurking underwater, devouring what’s left of the carp, white perch, and pumpkinseed fish native to Meadow Lake. Now closed for fishing, activity on the lake consists solely of local biologists setting traps for more snakeheads. But so far, they’ve had no luck.


On the Sandy Shoulders of Giants

In the spring of 1960, the largest of all recorded earthquakes rocked an idyllic seascape on the coast of Chile. Hundreds of locals, frightened by the violent shaking of the land, sought refuge in small boats. They thought they were safe—until the sea roared with the 75-foot waves of the quake’s sister tsunami. All boats were lost.

Now, four decades later, an international team of seismologists has finally examined the stratigraphic tracks of the Chilean Giant and the devastating tsunami it caused. As revealed in last week’s Nature, their analysis of the layers of earth in the coastal region has shed light upon the long-misunderstood history of the 1960 monster.

Typically, the magnitude of an earthquake is directly related to the number of years since the last one in the area. So considering its enormity, at least 350 years should have elapsed between the Chilean quake and its immediate predecessor. And sure enough, Spanish conquistadors wrote of a large quake in 1575. But here’s the big mystery: other documents suggest the 1575 quake wasn’t the last one before 1960. Two others are documented: one in 1737 and another in 1837. This left those studying the 1960 event scratching their heads over how so much energy could have built up in just 123 years.

The 1960 quake was caused when one piece of the earth’s crust—a tectonic plate—slid beneath another. Bordering the Chilean coast, the Nazca plate pushed under the South American plate to its east. Though it moved just three inches per year, over several centuries great amounts of energy built up, until May 22, when the earth finally succumbed to the pressure. And just fifteen minutes after the resulting quake—much like the ripples a dropped stone makes on the surface of a pond—the tectonic grind created a tsunami whose waves would reach as far as Japan.

Chilean eyewitnesses in 1960 said that even five miles inland, the tsunami coated the land with sand. Seismologists Marco Cisternas and Brian F. Atwater took advantage of this fact to clear up the contradictory historical evidence of the region’s earthquake timeline. As revealed in the Nature study, they dug up the earth’s settled layers of rock and sand. They took each layer of sand as the footprint of a past tsunami, and thus a reliable indicator of a past earthquake. In this way, the team was able to piece together a 2,000-year record of seismic history. The results: the quake of 1575 appeared clearly in the stratigraphic record, while those of 1737 and 1837 did not.

So what about the documents of the latter two? The team suggests neither quake caused lasting ecological damage. After counting the trunk rings of 15 standing dead trees—presumed to have died in 1960—they found ten were alive in 1837 and two in 1737. The forest damage from the earlier quakes, then, could not have been as extensive as the havoc wreaked in 1960.

But this isn’t any too reassuring for those living on a fault line. The two smaller quakes, whose occurrences could not have been predicted based on the big one preceding them, were still devastating enough to go down in history—in ink, at least, if not in sand.


Another Poem from Meagan, the Birthday Girl

She Doesn't Wear a Jacket

Drugs lace their lingo.
Sex swims on their lips.
Smoke coats their convo;
She laughs at darted quips.

Crack bridges barriers
And booze opens doors.
Powder pushes pulses;
Inhibition hits the floor.

Inhibition is the jacket
That she rarely ever wears.
It doesn’t let her move enough.
Its color doesn’t dare.

And it’s autumn in the city,
Her steps are rushing-free.
She doesn’t don that jacket.
She’s asked: how can this be?

"My drug is in the sun’s shot.
My sex is in the grass.
That fog that coats my memory
Too shimmers on lake-glass."

You ask her if she’s crazy.
You query of her joy.
You seek to sweet-corrupt her.
You’re carved of natural boy.

She answers you with wind-words
Racing on the breeze...
She answers you with eyes-wide:
“I love life. This is me.”


Hey, Obese Americans: Sumo Wrestlers Say Eat Your Heart Out.

Like 60 million other Americans and a quarter of a million other Hawaiians, Fiamalu Penitani is obese. Severely so, in fact, with 520 pounds saddling a six-foot-four-inch frame. Fat all his life—in his Oahu high school, he was a 290-pound Greco-Roman wrestler and a nose guard for the football team—his girth quite literally gets in the way of many activities that smaller bodies take for granted. Flying is especially uncomfortable, for instance, because even spanning three seats, his back and hips get numb. Unlike the majority of other fat Americans, though, recent studies suggest there is little chance Penitani’s heart will ever succumb to disease or malfunction. He probably won’t die before his slimmer friends, either. So just what is it about this fat guy that gives him an edge in the fight against obesity-related health woes? Penitani—called Musashimaru in his professional life—was a grand champion sumo wrestler.  

In Japan—the land of painstakingly-pruned bonsai trees; where raw, lean fish prevails as the finest delicacy; where a popular hobby is the folding of paper squares into intricate birds and flowers; where the splendor of a geisha dance depends as much on the silks of her kimono as the lightness of her steps—the 1500-year-old, part-sport, part-religious ritual of sumo wrestling weighs in as the ultimate of cultural contradictions.  The Japanese are, by and (un)large, a lean people—the average man is 5’6” and just 143 pounds—and sumos, well, are not.

The rituals of a sumo match, too, are anything but dainty. Technically, a match is the one-on-one combat of two men, each trying to force the other to touch the ground or step outside an elevated clay ring. But as the typical brawl only lasts about 30 seconds, the charade that precedes it is the more fascinating element of the sport. The two loinclothed opponents start with elaborate gestures meant to intimidate each other. They slap their thighs; lift, flaunt and stomp the ox-like muscles in their tree-trunk legs; glare; groan; and even toss handfuls of salt into the ring. And though over 70 distinct moves—some so acrobatic they require the nimbleness of a ballerina—are officially recognized by the Japanese Sumo Association, the top wrestlers use only two or three of the most basic. Musashimaru’s favorite, for instance, was the brute-strength maneuver oshi-dashi: a frontal push to drive the opponent backward and out of the ring.

Japanese sumo training begins as early as age 15, when the boy (and only boys, for the Shinto tradition purports that women and their “impure” menstrual cycles would contaminate the sumo ring) moves into one of 54 “stable houses” to train under a prominent master.  For the next few years in this spartan setting, the young hopefuls learn complex rituals and endure extreme physical training. Eric Gasper, an American who trained as a young wrestler in the early nineties, described the experience—where students may sleep 20 to a room, share squat toilets, and do their own cooking and cleaning—as “boot camp, prison, and war…all at the same time.”  On a typical day, the students wake at 4:30 a.m. and immediately begin a four-hour session of strength training.

Because there are no weight classes in sumo, the largest wrestlers have a major competitive advantage. But with such intense exercise, how do they put on so much weight? They eat—a lot. After the morning workout, wrestlers feast on chanko-nabe, a protein-rich meal-in-a-pot. There are dozens of variations of this hearty stew; typical ingredients include any combination of miso, beef broth, tofu, white rice, noodles, fish, pork, chicken, fried eggs, and fried root vegetables. And of course, the athletes wash down the enormous quantities of nabe with comparable amounts of beer and sake. But the real secret to weight gain comes right after the meal. Musashimaru, when once asked how he grew from 290 to 520 pounds, explained that sumos sleep immediately after eating, to ensure slow digestion of their food and maximum retention of muscle mass.

A recent study from the Annals of Internal Medicine suggests yet another key element in the relationship between the sumos’ eating and sleeping schedules: hormones. Scientists have long known that the hormones ghrelin and leptin influence appetite; ghrelin secretion in the gastrointestinal tract stimulates appetite, while leptin production in fat cells suppresses it. But just last year, researchers at the University of Chicago reported a further connection between these chemicals and the sleep cycle. The experiment subjected 12 healthy men to two days of sleep deprivation followed by two days of extended sleep, all the while monitoring their hormone levels, appetite, and activity. When sleep was restricted, ghrelin levels increased, leptin levels decreased, and consequently, the men’s appetites went through the roof. Their cravings for high-calorie foods increased by 45%. This hormonal interplay could thus explain the insatiable appetites of sumos—who get just six hours of sleep over the course of a typical day.

In November 2003, Musashimaru retired after a brilliant sumo career. In just 14 years, he became the first sumo wrestler in history to post winning records in 55 consecutive tournaments, and the second American fighter to achieve the most coveted rank of yokozuna, or grand champion.

Perhaps the only thing more surprising than Musashimaru’s rapid climb to the top of the sumo rankings is that his heart’s still ticking. For most obese people, excessive weight carried over months and years steadily weakens the heart muscle. This makes obesity a major risk factor for all kinds of cardiovascular problems, including high blood pressure and fatal heart attacks. But for reasons cardiologists are just beginning to understand, the hearts of sumo wrestlers are largely immune to these ailments.

Sumo wrestlers are physiologically different from the average obese Joe in two ways. First, their weight is made mostly of muscle instead of fat. Musashimaru’s body fat percentage at his heaviest, for instance, was around 20%. Compare that to the typical 500-pound man with a 70” waist (if, indeed, one can fathom a “typical” man of this enormity), whose body fat is about 35% of his weight.
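The gap behind those percentages is easy to check with quick arithmetic (fat mass is just total weight times the body-fat fraction; the figures below are the ones quoted above):

```python
# Back-of-envelope comparison of fat and lean mass, using the figures quoted above.

def fat_mass(total_weight_lbs, body_fat_fraction):
    """Pounds of body fat, given total weight and body-fat fraction."""
    return total_weight_lbs * body_fat_fraction

# Musashimaru at his heaviest: 520 lbs at roughly 20% body fat
sumo_fat = fat_mass(520, 0.20)      # about 104 lbs of fat

# A "typical" 500-lb man at roughly 35% body fat
typical_fat = fat_mass(500, 0.35)   # about 175 lbs of fat

print(f"Sumo fat mass:    {sumo_fat:.0f} lbs (lean mass {520 - sumo_fat:.0f} lbs)")
print(f"Typical fat mass: {typical_fat:.0f} lbs (lean mass {500 - typical_fat:.0f} lbs)")
```

So despite weighing slightly more, the wrestler carries dozens of pounds less fat, and well over 400 pounds of lean mass.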

Sumos also differ from other obese people in the way their hearts handle the extra strain from their weight. Though the giants are some of the largest men in the world, the shape and function of their hearts are indistinguishable from those of the leanest of athletes.

Professional athletes are frequently diagnosed with a medical condition known as athletic heart syndrome. Characterized by an abnormally stretched heart muscle, the condition is basically the body’s way of adapting to strenuous physical activity. A bigger heart can pump a larger volume of blood with each beat (called the stroke volume), delivering oxygen faster to strained muscles and organs throughout the body. But it seems that the phenomenon is not limited to lean athletes. A 2003 study published in the American Journal of Cardiology measured the heart size of 331 Japanese professional sumo wrestlers and found that just over 85% exhibited the ventricle dimensions of an athletic heart. This surprised the researchers, because the “static,” or non-cardio, exercise done by sumos was not previously thought to strengthen the heart as much as the training of, say, a marathon runner.
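The physiology here boils down to one textbook formula: cardiac output = stroke volume × heart rate. A quick sketch with illustrative textbook-range numbers (my own round figures, not measurements from the 2003 study) shows why a stretched heart is an advantage:

```python
def cardiac_output_l_per_min(stroke_volume_ml, heart_rate_bpm):
    """Cardiac output (L/min) = stroke volume (mL/beat) x heart rate (beats/min)."""
    return stroke_volume_ml * heart_rate_bpm / 1000

# Illustrative round numbers only, not data from the study:
untrained = cardiac_output_l_per_min(70, 70)    # ~70 mL/beat at 70 bpm -> 4.9 L/min
athletic  = cardiac_output_l_per_min(100, 50)   # ~100 mL/beat at 50 bpm -> 5.0 L/min

print(f"Untrained heart at rest: {untrained:.1f} L/min")
print(f"Athletic heart at rest:  {athletic:.1f} L/min")
```

The enlarged heart delivers the same resting output at a far lower beat rate, leaving much more headroom when exertion demands that output triple or quadruple.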

After the press conference where he announced his retirement, Musashimaru was quoted in international newspapers for just one phrase: “sumo hurts.” But apparently the pressure, harassment, and physical training required for excellence in the sport bring supersized health payoffs. And as anyone who has sampled it will tell you—tourists, internationally acclaimed food critics, and especially the wrestlers themselves—chanko-nabe is delicious.


Drilling Time

It’s really fucking hard to install a shade into stubborn plaster walls without a drill.  


A Desperate Plea to the Fans of Nurture vs. Nature: Get out of the Ring! It’s a Draw.

Good evening ladies and gentlemen, and wellllllllcome to the Futile Philosophy Arena! This evening’s competition is sure to fascinate and entertain the billions out there in our global audience, as the outcome will undoubtedly influence the entire future of our species! In one corner, we have our hard-hitting rookie—the modern, the trendy, the politically-correct heavyweight adored by cultural anthropologists, ecologists, and coddling mothers everywhere: Nuurture!!! And facing Nurture, feast your pre-programmed eyes, ears, sperm and chromosomes on the molecular backbone of each and every one of our cells, the prized baby of The Human Genome Project, defeatists, racists, and clones alike: Naaaaature!!! Now let’s bring on the main event!

The truth about the nurture-vs.-nature debate is no secret. I first learned about it in 10th-grade biology class. The textbook read something like:

“An organism’s physical appearance, called its phenotype, depends on environment as well as genes. A single tree, locked into its inherited genes, has leaves that vary in size, shape, and greenness, depending on exposure to wind and sun…”

Indeed, over the next six years of studying biology, neuroscience and animal behavior, every science or social science or humanities teacher I met touted this same tenet again and again. And no student ever raised his hand to dispute the shared roles of nurture and nature, the shared credit for clever behaviors and the shared blame for diseases and societal ills. Outside of the academic sphere, too, any rational Joe Shmoe will readily admit to this fact of life. Because it just makes sense.

Which is why, when I oh-so-frequently come across published material that perpetuates this non-existent “debate”—in respected newspapers, television documentaries, flashy science magazines, and even scientific journals—I am confused, frustrated, and just plain angry. Why does the myth continue, when we can read its long history of folly and futility? When we can look to modern examples that refute the argument from both sides? The time and effort devoted to fueling the debate are wasted; they contribute nothing to a society fraught with so many other pressing scientific needs.

Please, you writers and scientists whom I detest (and you know who you are): slay the beast. Clear the way for more focused and effective research. Take off the boxing gloves and get back in the lab coats.  


I live with a poet.

Check out what the new roomie—the resident poet laureate!—wrote:

Science Writers—Ginny & Meg

There's a pulse and it's live and it's different,
And I sense that it beats in us both.
We know what intrigues us: cold science-
And warmly we write of its growth...
Its growth and its place in this vast world!
This earth of a spin science-spun,
Where daily technology marches
Under watch of the moon and the sun.
This moon and that sun, they are beauteous.
Their rays bring new light to dark times
Just as our vibrant words will embrace this-
This science that unites mankind.


The "Thrifty Gene" (It's not what you think)

I’ve spent the last couple of days reading about obese rats. There are several inbred strains with funny names, like the Zucker Diabetic Fatty Rat or the Otsuka Long-Evans Tokushima Fatty Rat. But most interesting of all is a real species (as opposed to one engineered by the white coats): the Israeli sand rat, Psammomys obesus. When observed in its natural, arid habitat—the deserts of North Africa and the Eastern Mediterranean—P. obesus eats low-calorie saltbush, if it manages to find any food at all. But if caged in a laboratory and given an abundant food supply, the rats become obese within four days. After just four weeks of this new-found gluttony, they require insulin shots to stay alive.

So why study all these fat rats? Well, many researchers speculate that the lil’ critters store fat and acquire diabetes in the same way as the large percentage of obese humans who live in the Pacific Islands.

On the Micronesian island of Kosrae, for example, more than 80% of its 7,600 residents are overweight or obese, and one in eight adults has diabetes. But this wasn’t always the case. Until about fifty years ago, Kosraeans were a lean bunch who ate what they had—fish, bananas, and coconuts. Then, after World War II, the U.S. gained control and what do you know? Alluva sudden these thin, healthy folks were subjected to the evils of canned and processed foods—a new-found gluttony. And, just like the rats, they blew up.

Most smart people who study these things attribute the sudden weight changes to something called a “thrifty gene.” Some thousands of years ago, in the days of hunter-gathering, the ancestors of the obese Kosraeans had a “thrifty gene” that allowed them to store fat well. In those days famine was frequent, and the people with the gene could better survive the food shortages and pass the gene on to the next generation. Sort of like a hibernating bear who lives on his fat reserves in the winter. But today, when the islanders are faced with an abundant, high-fat food supply, the gene does more harm than good.

Here’s what I don’t get: why didn’t the Anglo-Saxon ancestors have this thrifty gene, too? Weren’t all of the hunter-gatherers exposed to devastating famine? Stay tuned…


Hey! What ever happened to all of those romantic Brown legends?

For you Brown kids: Imagine Soldiers' Arch, at exactly midnight on a warm, clear night of senior year. You are standing in the most serene evening spot on campus, where the dim sidewalk lights of Lincoln Field flood the old brick walls of Metcalf, and the surrounding bushes muffle the ruckus of Thayer Street. You gaze into the blue-blue eyes of your sweetheart, the shy Texan with rosy cheeks who you met just three days into freshman year (ah, unit love). The entire green is yours to enjoy, with only Marcus Aurelius watching from a distance. You kiss. And with that, your future is secure. You are destined to marry the Texan, support him through the stresses of law/business/med school, and have beautiful babies who will one day walk the very campus where their parents fell in love.

Or so the story goes. As fondly as I now recall it, I cannot seem to remember the first time I heard this beloved Brown legend. It could have been the day I first set foot on College Hill, during the requisite campus tour, when I was also told to rub the nose of John Hay and avoid the Pembroke Seal. Or maybe one of the freshmen told me at ADOCH, sometime between the talent show and the awkward ice cream social. In any case, the story made a lasting impression, so that now I can’t help but blush when the idyllic image comes to mind.

The legend continues in the Brown Alumni Magazine. Every month, I resentfully peruse BAM for the quaint stories of those happy couples of the past who managed to find love on campus. There will be a grandchild update from Susie Smiles (’56), who had a crush on husband Tommy Touchdown (‘53) even before she joined the cheerleading squad; or an announcement of vow-renewal from Zoe Zero and Simon Sulk (’88 and ’88.5), who fell in love while reciting each other’s angst-ridden poems across the tables of the Ivy Room.

Sadly, those days exist no longer. The problem with the Soldiers' Arch legend is that it is a complete fallacy. A cruel joke. A sham. Truth be told, long-term or even casual dating barely exists at Brown today, let alone marriage proposals and happily-ever-afters. At Wheaton College in Illinois, there’s a giant bell for girls to ring when they get engaged. For sorority sisters at UVA, it’s a candle-lighting ceremony. Granted, most of us new alums shudder to think of where we'd be if we had run off to get hitched the day after graduation. Still, it took about four years of frat party dance floors, dorm room "dvd nights," shady late-night rendezvous, and awkward late-night rendezvous before finally finding that sole meaningful relationship at the end of senior year. I guess I wish I had more warm and fuzzy Brown memories--as opposed to embarrassing, make-you-wince-to-think-about memories--to take with me into the real world.

Thus, I propose a challenge to my fellow graduates: forget that I’m-too-intellectual-for-romance smugness. We're supposed to be adults now, so try, just once, to go on a date date. Just because we're not on campus doesn't mean we have to leave behind the Soldiers' Arch, ivy-league magic.


If you can't take the heat, get out of Camden Yards

I've always liked baseball. No, seriously. Thanks to my father and grandfather--who could very well be ranked in the top 5% of all of the baseball trivia aficionados in the world--I grew up on the stuff. Do I know the rules? Yes. Did I ever play? Yes. Do I know the strategies, the ins-and-outs? Pretty much. Do I know the names of the players? Naw. Do I watch it on television? No (but I don't watch other sports, either, in my attempts to avoid ESPN at all costs). Do I know the years and plays of famous world series games? No. However, I do appreciate the fun of America's pastime, you see. Who wouldn't be up for a three-hour excursion in a beautiful ball park, tummy content with a hot dog and overpriced draft, reveling in the camaraderie of the overweight, obnoxious-yet-friendly fans sitting next to you?

Or so I thought on Saturday afternoon, just before arriving at the Orioles game with the infamous Michael Laws. Admittedly, Camden Yards is gorgeous. With red brick facades over gleaming steel trusses, right in the heart of downtown Baltimore, I can see why my father described this one-time railroad station as one of the most stunning parks in the country. I liked the feel of the covered arcade, too, surrounding the park and bristling with happy fans and good-natured food vendors.

So just what happened to change this happy picture in my mind's eye? What so tainted my memories of baseball that I'm not sure I'll ever set foot in a park again? H-E-A-T. Or, more so, HUMIDITY. According to the meteorological experts at weather.com, Saturday was Baltimore's hottest day of the summer--at 100 degrees with a heat index of 107. Suddenly, the sight of a wilting hot dog made me want to boot. And my overpriced draft was equally disappointing after just ten minutes of sitting under that unbearable sunshine. I should have been all smiles and giggles, leaning back to relax in those hard plastic seats; instead, I was hopelessly attached to them by the buckets of sweat dripping through my once-crisp, once-clean sundress. How the hell do the players keep from passing out?

Yeah, they did win. Beat Toronto 1-0. Whoopdie fucking doo.