Archive for December, 2009

Basaveshvara Vachana by Ranjani Hebbar

December 31, 2009

During our recent vacation in Udupi, we happened to pick up a CD of devaranamas by Ranjani Hebbar called Ranganathana Noduva Banni. We decided to buy it primarily because it contains a vachana of Basaveshvara (which is very uncommon for a devaranama CD): a favourite of mine called Nudidara Muddina Haaradhanthirabeku, which I had heard sung only by Basavaraj Rajguru, and that too in a not-so-great recording. We are quite happy with this purchase; of all the CDs we bought this time, this is the one we like the most. I am going to be on the lookout for more of her music from now on.

Shape of a long leaf

December 30, 2009

An interesting one from H Liang and L Mahadevan:

Long leaves in terrestrial plants and their submarine counterparts, algal blades, have a typical, saddle-like midsurface and rippled edges. To understand the origin of these morphologies, we dissect leaves and differentially stretch foam ribbons to show that these shapes arise from a simple cause, the elastic relaxation via bending that follows either differential growth (in leaves) or differential stretching past the yield point (in ribbons). We quantify these different modalities in terms of a mathematical model for the shape of an initially flat elastic sheet with lateral gradients in longitudinal growth. By using a combination of scaling concepts, stability analysis, and numerical simulations, we map out the shape space for these growing ribbons and find that as the relative growth strain is increased, a long flat lamina deforms to a saddle shape and/or develops undulations that may lead to strongly localized ripples as the growth strain is localized to the edge of the leaf. Our theory delineates the geometric and growth control parameters that determine the shape space of finite laminae and thus allows for a comparative study of elongated leaf morphology.

Take a look!

HowTo: get your papers accepted

December 29, 2009

Matt Welsh at Volatile and Decentralized has some nice pointers:

When reviewing so many papers, it is amazing to me how many authors make simple mistakes that make it so much more difficult to review (let alone accept!) their papers. Keep in mind that when reviewing 25+ papers for a program committee, you have to do them fairly quickly and the easier it is for the reviewer to digest your paper and get to the core ideas, the more likely they are to look favorably on the paper. I tend to review papers while on the elliptical machine at the gym, which also helps to tamp down any physical aggression I might feel while reading them. (Of course, I have to go back and write up my comments later, but usually in a post-exercise state of unusual mental clarity.)

A few hints on getting papers accepted — or at least not pissing off reviewers too much.

Take a look!

Five Whys!

December 28, 2009

I wanted to go through the Five Whys exercise with Ryan and his staff. Five Whys is a problem-solving technique developed by Toyota after World War II to improve its manufacturing process. The idea is to ask “Why?” five times to get to the root of any failure, so you fix the core problem instead of the symptoms.

That is Joel Spolsky in his piece at Inc. Take a look!
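To make the technique concrete, here is a made-up illustration (my own, not from Spolsky's piece): the website went down. Why? The web server ran out of memory. Why? A recent deployment script leaked database connections. Why? The script was changed without being tested. Why? There is no review or test step for deployment scripts. Why? Nobody owns the deployment process. Fixing that last answer (assigning an owner and adding a review step) addresses the root cause, rather than merely restarting the server and waiting for the next outage.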

The dirty secret of science

December 23, 2009

From this must-read piece by Jonah Lehrer (via Abi):

Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) “The scientists had these elaborate theories about what was supposed to happen,” Dunbar says. “But the results kept contradicting their theories. It wasn’t uncommon for someone to spend a month on a project and then just discard all their data because the data didn’t make sense.” Perhaps they hoped to see a specific protein but it wasn’t there. Or maybe their DNA sample showed the presence of an aberrant gene. The details always changed, but the story remained the same: The scientists were looking for X, but they found Y.

Dunbar was fascinated by these statistics. The scientific process, after all, is supposed to be an orderly pursuit of the truth, full of elegant hypotheses and control variables. (Twentieth-century science philosopher Thomas Kuhn, for instance, defined normal science as the kind of research in which “everything but the most esoteric detail of the result is known in advance.”) However, when experiments were observed up close — and Dunbar interviewed the scientists about even the most trifling details — this idealized version of the lab fell apart, replaced by an endless supply of disappointing surprises. There were models that didn’t work and data that couldn’t be replicated and simple studies riddled with anomalies. “These weren’t sloppy people,” Dunbar says. “They were working in some of the finest labs in the world. But experiments rarely tell us what we think they’re going to tell us. That’s the dirty secret of science.”

But that is not really the surprising part of the piece. It is this:

How did the researchers cope with all this unexpected data? How did they deal with so much failure? Dunbar realized that the vast majority of people in the lab followed the same basic strategy. First, they would blame the method. The surprising finding was classified as a mere mistake; perhaps a machine malfunctioned or an enzyme had gone stale. “The scientists were trying to explain away what they didn’t understand,” Dunbar says. “It’s as if they didn’t want to believe it.”

The experiment would then be carefully repeated. Sometimes, the weird blip would disappear, in which case the problem was solved. But the weirdness usually remained, an anomaly that wouldn’t go away.

This is when things get interesting. According to Dunbar, even after scientists had generated their “error” multiple times — it was a consistent inconsistency — they might fail to follow it up. “Given the amount of unexpected data in science, it’s just not feasible to pursue everything,” Dunbar says. “People have to pick and choose what’s interesting and what’s not, but they often choose badly.” And so the result was tossed aside, filed in a quickly forgotten notebook. The scientists had discovered a new fact, but they called it a failure.

The reason we’re so resistant to anomalous information — the real reason researchers automatically assume that every unexpected result is a stupid mistake — is rooted in the way the human brain works. Over the past few decades, psychologists have dismantled the myth of objectivity. The fact is, we carefully edit our reality, searching for evidence that confirms what we already believe. Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered, especially when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail — it’s that most failures are ignored.

A must-read piece (if I haven’t said that already)!

Numbers in biology: estimated and measured

December 23, 2009

An interesting paper from the latest PNAS:

A feeling for the numbers in biology

Rob Phillips and Ron Milo

Although the quantitative description of biological systems has been going on for centuries, recent advances in the measurement of phenomena ranging from metabolism to gene expression to signal transduction have resulted in a new emphasis on biological numeracy. This article describes the confluence of two different approaches to biological numbers. First, an impressive array of quantitative measurements make it possible to develop intuition about biological numbers ranging from how many gigatons of atmospheric carbon are fixed every year in the process of photosynthesis to the number of membrane transporters needed to provide sugars to rapidly dividing Escherichia coli cells. As a result of the vast array of such quantitative data, the BioNumbers web site has recently been developed as a repository for biology by the numbers. Second, a complementary and powerful tradition of numerical estimates familiar from the physical sciences and canonized in the so-called “Fermi problems” calls for efforts to estimate key biological quantities on the basis of a few foundational facts and simple ideas from physics and chemistry. In this article, we describe these two approaches and illustrate their synergism in several particularly appealing case studies. These case studies reveal the impact that an emphasis on numbers can have on important biological questions.
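To give a flavour of the Fermi-style estimates the authors describe, here is a rough back-of-the-envelope sketch of the membrane-transporter example; all the numbers below are my own order-of-magnitude assumptions, not figures from the paper. A rapidly dividing E. coli cell has a dry mass of very roughly 10^-13 g, about half of which is carbon, which works out to something like 10^10 carbon atoms; at six carbons per glucose, that is of the order of 10^9 glucose molecules needed per division. Over a division time of 20 to 30 minutes (~10^3 s), the cell must therefore import on the order of 10^6 sugar molecules per second. If a single transporter turns over roughly 10^2 sugars per second, the cell needs on the order of 10^4 transporters in its membrane. Whatever the exact values, the point of such estimates is that a handful of foundational facts pins the answer down to within an order of magnitude.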

Science makes me feel stupid!

December 21, 2009

Martin Schwartz on the importance of stupidity in scientific research:

I recently saw an old friend for the first time in many years. We had been Ph.D. students at the same time, both studying science, although in different areas. She later dropped out of graduate school, went to Harvard Law School and is now a senior lawyer for a major environmental organization. At some point, the conversation turned to why she had left graduate school. To my utter astonishment, she said it was because it made her feel stupid. After a couple of years of feeling stupid every day, she was ready to do something else.

I had thought of her as one of the brightest people I knew and her subsequent career supports that view. What she said bothered me. I kept thinking about it; sometime the next day, it hit me. Science makes me feel stupid too. It’s just that I’ve gotten used to it. So used to it, in fact, that I actively seek out new opportunities to feel stupid. I wouldn’t know what to do without that feeling. I even think it’s supposed to be this way. Let me explain.

Link via Shencottah.

Bala: RIP

December 10, 2009

I have written about Bala in this blog here, here and here. His enthusiasm was infectious, and my interactions with him, both in person and through email, were always very elevating. Through this post at Scholars Without Borders, I understand that he is no more. He was a wonderful human being and an extremely versatile personality who will be missed.

Lessons from a scandal

December 10, 2009

From this must-read post by Dr. Free-Ride:

Trying to interfere with peer review is always a bad call.

If you don’t thoroughly document your code, no one but you will have a clear understanding of what it’s supposed to do.

It’s a good thing to keep your original data.

Proprietary data makes it much harder for other scientists to do quality control on your work.

Folks may judge you by your behavior, not just your data.

These are just some of the bulleted points. There are plenty of quotes and exposition on each, and much more, in Dr. Free-Ride’s post. Go take a look at it.

Eusocial insects

December 3, 2009

A piece by D Balasubramanian in The Hindu. I am disappointed, though, that there is no reference either to the work of Raghavendra Gadagkar or to his classic Survival Strategies.