Archive for the ‘Methodology’ Category

Sometimes it is not scaling that is key

July 29, 2013

So say Paul Graham and Atul Gawande, in two different contexts! When we discuss teaching methodologies and talk about scaling, I do feel that it is not scaling that is the key!

Trello

May 1, 2013

Just heard about Trello from Joel and signed up! Looks promising; maybe I can use it to manage things around the lab and office. Let us see.

A computational person’s nightmare

April 21, 2013

Anybody who writes code for her research knows the scary feeling that there might be a bug in the code; maybe the exciting new results are nothing but an artifact of the code; and, what is worse, maybe the bug is a silly, elementary mistake.

Of course, I have had my share of such nightmares too — not just the feeling: at least twice I have found actual mistakes in my code. Fortunately for me, (a) I noticed them myself; and (b) the mistakes were minor.

All this has taught me that, while it is almost impossible to avoid bugs in code, there are several ways in which one can catch them:

(i) Have some sharp-eyed colleagues go over your results (and, if you are as lucky as me, they might even be willing to go over your code) and spot any unphysical results.

(ii) Explain your code to a human; this is almost a fail-safe way of spotting mistakes. I think in the coding community this is close to the pair-programming concept.

(iii) Keep benchmarking your code and keep coming up with newer and newer tests. While benchmarking, I have also found that it is important to match numbers within the accuracy of your calculations; numbers not matching at the fourth or fifth decimal place have sometimes led me to bugs in the code (see the sketch below).

(iv) Make your code open source and share it with as many users as possible. Even if only a few of those users turn out to be developers, they will notice errors, if any.

(v) If possible, find another colleague who will implement the same code independently; matching results from two people increase the confidence in the code enormously. As my advisor used to say, the probability of two people making the same error goes down multiplicatively.
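As an aside on point (iii): here is a minimal sketch of what "matching within the accuracy of your calculations" can look like in practice. The function name and tolerances below are my own illustrative choices, not from any particular library or code; the point is simply that the comparison tolerance should be an explicit, justified number rather than an eyeballed "close enough".

```python
import numpy as np

def check_against_benchmark(computed, reference, rtol=1e-10, atol=1e-12):
    """Compare fresh results against a trusted benchmark, value by value.

    The tolerances here are purely illustrative; in practice they should
    reflect the accuracy of the calculation (discretization error, solver
    tolerance, and so on).
    """
    computed = np.asarray(computed, dtype=float)
    reference = np.asarray(reference, dtype=float)
    mismatch = ~np.isclose(computed, reference, rtol=rtol, atol=atol)
    if mismatch.any():
        first = tuple(np.argwhere(mismatch)[0])
        raise AssertionError(
            f"{int(mismatch.sum())} value(s) differ beyond tolerance; "
            f"first at index {first}: {computed[first]} vs {reference[first]}"
        )

# A discrepancy in the fourth decimal place should fail loudly,
# not be waved away as 'close enough':
check_against_benchmark([1.23456789, 2.0], [1.23456789, 2.0])  # passes silently
# check_against_benchmark([1.2346, 2.0], [1.23456789, 2.0])    # would raise
```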

All the above thoughts are triggered by this link list from Abi, and especially this report.

Of course, if you are at all lucky, you will also get to experience, once in a while, the following feeling during your research career:

“I almost didn’t believe my eyes when I saw just the basic spreadsheet error,” said Herndon, 28. “I was like, am I just looking at this wrong? There has to be some other explanation. So I asked my girlfriend, ‘Am I seeing this wrong?’”

I will save my thoughts on this for another post (and also my thoughts on the ease with which you can mess things up while using Excel: I remember the terrible mess I made the first time I graded a large class using an Excel spreadsheet). In the meanwhile, I recommend that you follow all the links in Abi’s post.

 

Two posts on managing

February 20, 2012

Both of which, at some level, are saying the same things.

Joel Spolsky:

Think about how a university department organizes itself. There are professors at various ranks, who pretty much just do whatever the heck they want. Then there’s a department chairperson who, more often than not, got suckered into the role. The chairperson of the department might call meetings and adjudicate who teaches what class, but she certainly doesn’t tell the other professors what research to do, or when to hold office hours, or what to write or think.

That’s the way it has to work in a knowledge organization. You don’t build a startup with one big gigantic brain on the top, and a bunch of lesser brains obeying orders down below. You try to get everyone to have a gigantic brain in their area, and you provide a minimum amount of administrative support to keep them humming along.

This is my view of management as administration—as a service corps that helps the talented individuals that build and sell products do their jobs better. Attempting to see management as the ultimate decision makers demotivates the smart people in the organization who, without the authority to do what they know is right, will grow frustrated and leave. And if this happens, you won’t notice it, but you’ll be left with a bunch of yes-men, who don’t particularly care (or know) how things should work, and the company will only have one brain – the CEO’s. See what I mean about “it doesn’t scale?”

And yes, you’re right, Steve Jobs didn’t manage this way. He was a dictatorial, autocratic asshole who ruled by fiat and fear. Maybe he made great products this way. But you? You are not Steve Jobs. You are not better at design than everyone in your company. You are not better at programming than every engineer in your company. You are not better at sales than every salesperson in the company.

It is not, as it turns out, necessary to be a micromanaging psychopath with narcissistic personality disorder (or even to pretend to be one) if you just hire smart people and give them real authority. The saddest thing about the Steve Jobs hagiography is all the young “incubator twerps” strutting around Mountain View deliberately cultivating their worst personality traits because they imagine that’s what made Steve Jobs a design genius. Cum hoc ergo propter hoc, young twerp. Maybe try wearing a black turtleneck too.

For every Steve Jobs, there are a thousand leaders who learned to hire smart people and let them build great things in a nurturing environment of empowerment and it was AWESOME. That doesn’t mean lowering your standards. It doesn’t mean letting people do bad work. It means hiring smart people who get things done—and then getting the hell out of the way.

T T Ram Mohan:

Can we have companies without managers, including top management? Sounds unthinkable. But one such company, Morning Star, is the subject of a cover story by Gary Hamel in a recent issue of HBR. The idea is not as crazy as it sounds. There has been a general movement toward flatter companies. Morning Star carries the process to its logical extreme. We have very flat organisations in academia; investment banks have a few layers; manufacturing tends to be more layered. Morning Star is a manufacturing company that has abolished layers. The company works entirely through self-directed mission statements and objectives and peer reviews. And it has delivered performance for several years now.

Can we extend this to larger organisations? Will it work for complex operations such as aircraft manufacturing? Hamel answers in the affirmative. My point would be that it is not necessary to replicate Morning Star in full. It is the underlying principle that is important: abandon the notion that decision-making is the privilege of a few at the top and involve more people in decision-making. A Brazilian firm, Semco, has done that. It does have managers but the managers are evaluated by their subordinates!

Have fun!

Technology, courage and fun

August 29, 2011

The basic personal start-up mechanism for research has to be curiosity. I find myself curious about how something works, or I observe something strange and begin to explore it. Because I am fond of symmetry, when I observe some simple symmetry, I am almost inexorably drawn into exploring it. For example, one day Don Oestreicher, who was then a graduate student, and I noticed that the number of random wires expected to cross the midsection of an N terminal printed circuit board is N/4 independent of whether the wires connect two or three terminals on the board. This comes about because although the probability of crossing is higher for wires connecting three terminals, 3/4 rather than 1/2, the number of wires is correspondingly reduced from N/2 to N/3. This simple observation led us to explore other wiring patterns, gather some data from real printed circuit boards, and eventually to publish a paper [4] called How Big Should a Printed Circuit Board Be? Follow your curiosity.

Beauty provides another form of personal encouragement for me. Some of the products of research are just pretty, although mathematicians prefer to use the word “elegant.” The simplicity of E=MC2, the elegance of information theory, and the power of an undecidability proof are examples. I got interested in asynchronous circuits by discovering a very simple form of first in first out (FIFO) storage that has rather complete symmetry [1,8]. It simply amazes me that my simple and symmetric circuit can “know” which way to pass data forward. The beauty itself piques my curiosity and flatters my pride.

Simplicity is to be valued in research results. Many students ask, “How long should my thesis be?” It would be better for them to ask, “How short can it be?” The best work is always simply expressed. If you find something simple to explore, do not turn it aside as trivial, especially if it appears to be new. In a very real sense, research is a form of play in which ideas are our toys and our objective is the creation of new castles from the old building block set. The courage to do research comes in part from our attraction to the simplicity and beauty of our creations.

I, for one, am and will always remain a practicing technologist. When denied my minimum daily adult dose of technology, I get grouchy. I believe that technology is fun, especially when computers are involved, a sort of grand game or puzzle with ever so neat parts to fit together. I have turned down several lucrative administrative jobs because they would deny me that fun. If the technology you do isn’t fun for you, you may wish to seek other employment. Without the fun, none of us would go on.

I tried to capture the spirit of research as a game in my paper about our walking robot [2]. Unfortunately, the editors removed from my paper all of the personal comments, the little poem about the robot by Claude Shannon, the pranks and jokes, and in short, the fun. The only fun they left was the title: Footprints in the Asphalt. All too often, technical reports are dull third person descriptions of something far away and impersonal. Technology is not far away and impersonal. It’s here, it’s intensely personal, and it’s great fun.

That is the last section of Ivan Sutherland’s Technology and Courage, a must-read piece. Link via Relevant History.
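Incidentally, the N/4 observation in the first paragraph of the quote is easy to check for yourself. Here is a minimal Monte Carlo sketch under my own toy assumptions (not Sutherland and Oestreicher's actual setup): terminals scattered uniformly along a unit-length board, wires grouped into two- or three-terminal nets, and a crossing counted whenever a wire's terminals straddle the midsection at x = 0.5.

```python
import random

def mean_crossings(n_terminals, wire_size, trials=20000):
    """Estimate the expected number of wires crossing the board's midsection.

    Toy model (my assumption): terminal positions are uniform on [0, 1],
    wires connect `wire_size` terminals each, and a wire crosses the
    midsection iff its terminals lie on both sides of x = 0.5.
    """
    wires_per_board = n_terminals // wire_size
    total = 0
    for _ in range(trials):
        for _ in range(wires_per_board):
            xs = [random.random() for _ in range(wire_size)]
            left = sum(x < 0.5 for x in xs)
            if 0 < left < wire_size:   # terminals on both sides -> crossing
                total += 1
    return total / trials

N = 120
print(mean_crossings(N, 2))  # ~ (1/2) * (N/2) = N/4 = 30
print(mean_crossings(N, 3))  # ~ (3/4) * (N/3) = N/4 = 30
```

Both runs should hover around N/4, which is just the arithmetic in the quote: (1/2)(N/2) = (3/4)(N/3) = N/4.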

Creativity and memory

March 8, 2011

Jim Holt at London Review of Books:

What do we really know about creativity? Very little. We know that creative genius is not the same thing as intelligence. In fact, beyond a certain minimum IQ threshold – about one standard deviation above average, or an IQ of 115 – there is no correlation at all between intelligence and creativity. We know that creativity is empirically correlated with mood-swing disorders. A couple of decades ago, Harvard researchers found that people showing ‘exceptional creativity’ – which they put at fewer than 1 per cent of the population – were more likely to suffer from manic-depression or to be near relatives of manic-depressives. As for the psychological mechanisms behind creative genius, those remain pretty much a mystery. About the only point generally agreed on is that, as Pinker put it, ‘Geniuses are wonks.’ They work hard; they immerse themselves in their genre.

Could this immersion have something to do with stocking the memory? As an instructive case of creative genius, consider the French mathematician Henri Poincaré, who died in 1912. Poincaré’s genius was distinctive in that it embraced nearly the whole of mathematics, from pure (number theory) to applied (celestial mechanics). Along with his German coeval David Hilbert, Poincaré was the last of the universalists. His powers of intuition enabled him to see deep connections between seemingly remote branches of mathematics. He virtually created the modern field of topology, framing the ‘Poincaré conjecture’ for future generations to grapple with, and he beat Einstein to the mathematics of special relativity. Unlike many geniuses, Poincaré was a man of great practical prowess; as a young engineer he conducted on-the-spot diagnoses of mining disasters. He was also a lovely prose stylist who wrote bestselling works on the philosophy of science; he is the only mathematician ever inducted into the literary section of the Institut de France. What makes Poincaré such a compelling case is that his breakthroughs tended to come in moments of sudden illumination. One of the most remarkable of these was described in his essay ‘Mathematical Creation’. Poincaré had been struggling for some weeks with a deep issue in pure mathematics when he was obliged, in his capacity as mine inspector, to make a geological excursion. ‘The changes of travel made me forget my mathematical work,’ he recounted.

Having reached Coutances, we entered an omnibus to go some place or other. At the moment I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’s sake, I verified the result at my leisure.

How to account for the full-blown epiphany that struck Poincaré in the instant that his foot touched the step of the bus? His own conjecture was that it had arisen from unconscious activity in his memory. ‘The role of this unconscious work in mathematical invention appears to me incontestable,’ he wrote. ‘These sudden inspirations … never happen except after some days of voluntary effort which has appeared absolutely fruitless.’ The seemingly fruitless effort fills the memory banks with mathematical ideas – ideas that then become ‘mobilised atoms’ in the unconscious, arranging and rearranging themselves in endless combinations, until finally the ‘most beautiful’ of them makes it through a ‘delicate sieve’ into full consciousness, where it will then be refined and proved.

Poincaré was a modest man, not least about his memory, which he called ‘not bad’ in the essay. In fact, it was prodigious. ‘In retention and recall he exceeded even the fabulous Euler,’ one biographer declared. (Euler, the most prolific mathematician of all – the constant e takes his initial – was reputedly able to recite the Aeneid from memory.) Poincaré read with incredible speed, and his spatial memory was such that he could remember the exact page and line of a book where any particular statement had been made. His auditory memory was just as well developed, perhaps owing to his poor eyesight. In school, he was able to sit back and absorb lectures without taking notes despite being unable to see the blackboard.

It is the connection between memory and creativity, perhaps, which should make us most wary of the web. ‘As our use of the web makes it harder for us to lock information into our biological memory, we’re forced to rely more and more on the net’s capacious and easily searchable artificial memory,’ Carr observes. But conscious manipulation of externally stored information is not enough to yield the deepest of creative breakthroughs: this is what the example of Poincaré suggests. Human memory, unlike machine memory, is dynamic. Through some process we only crudely understand – Poincaré himself saw it as the collision and locking together of ideas into stable combinations – novel patterns are unconsciously detected, novel analogies discovered. And this is the process that Google, by seducing us into using it as a memory prosthesis, threatens to subvert.

HowTo: present a paper

February 25, 2011

Leslie Lamport tells how; via.

How to: become a physicist and/or a hacker

February 8, 2011

Nice piece by ZapperZ on becoming a physicist; while we are at it, I have also drawn much inspiration from Eric Raymond’s how-to on becoming a hacker.

PS: By the way, one nice thing about being a computational materials scientist/engineer is that you get to explore (and try to develop expertise in) mathematics, physics/chemistry, as well as coding — which makes the process exciting and rich.

Money matters, administration, forgiveness, selfishness and all that

July 22, 2010

Paul Graham’s latest piece (as usual) has plenty of interesting thoughts:

I think most people have one top idea in their mind at any given time. That’s the idea their thoughts will drift toward when they’re allowed to drift freely. And this idea will thus tend to get all the benefit of that type of thinking, while others are starved of it. Which means it’s a disaster to let the wrong idea become the top one in your mind.

I hear similar complaints from friends who are professors. Professors nowadays seem to have become professional fundraisers who do a little research on the side. It may be time to fix that.

I’ve found there are two types of thoughts especially worth avoiding—thoughts like the Nile Perch in the way they push out more interesting ideas. One I’ve already mentioned: thoughts about money. Getting money is almost by definition an attention sink. The other is disputes. These too are engaging in the wrong way: they have the same velcro-like shape as genuinely interesting ideas, but without the substance. So avoid disputes if you want to get real work done.

Turning the other cheek turns out to have selfish advantages. Someone who does you an injury hurts you twice: first by the injury itself, and second by taking up your time afterward thinking about it. If you learn to ignore injuries you can at least avoid the second half. I’ve found I can to some extent avoid thinking about nasty things people have done to me by telling myself: this doesn’t deserve space in my head. I’m always delighted to find I’ve forgotten the details of disputes, because that means I hadn’t been thinking about them. My wife thinks I’m more forgiving than she is, but my motives are purely selfish.

A short, nice piece!

Blogging, hedgehogs, foxes and mathematics

July 5, 2010

A couple of interesting points from Felix Salmon’s piece (hat tip to Swarup for the pointer):

Horn’s point is that any organized attempt to look deeply at something risks being self-defeating: you can end up disappearing down all manner of silly dead ends, and understanding less than you would with a more-is-more approach.

This absolutely rings true to me. For reasons which today elude me, I decided when I was doing my A-levels in England to do what they call “double maths” — essentially taking two mathematics exams (Maths and Further Maths), in the same two years you’d normally spend studying for just one. As a result, we had a highly accelerated mathematics curriculum, and there was no time to circle back and make sure the class had understood something before moving on to the next thing. It was all rather sink-or-swim.

And at any given point in time, I was sinking — along, I think, with most of the rest of my class. I was pretty fuzzy about what we’d been taught in previous weeks, and I was very unlikely to understand what the teacher was trying to say at any given time. Maths class, for me, was a combination of panic and incomprehension, combined with a desperate attempt to bluff my way through as much as I could. (Needless to say, if you’re reduced to trying to bluff, mathematics is not the best subject to choose.)

Yet somehow my classmates and I all did very well, at the end of the two years, when it came time to taking the actual exams. As I recall, nearly everybody taking double maths wound up getting an A in their Maths A-level, and most of us got an A or a B in Further Maths as well. Somehow we had managed to gain a pretty good grasp of the subject by dint of sheer velocity: the mechanism, I think, was that a desperate attempt to understand a new concept had the effect of making earlier ideas drop into place. And that the best way of mastering the Maths curriculum was not so much to study it directly, but rather to try to study the Further Maths curriculum: even getting halfway there would bring you pretty much up to speed on the stuff that went before.

Something similar, I think, happens with blogging. Bloggers tend to be foxes, rather than hedgehogs; it’s pretty clear that Athreya is an archetypal hedgehog and has a deep-seated mistrust of foxes. We skip around a lot of different things, and much of the time we don’t really understand them. But somehow the accumulated effect of all that skipping around is to make connections and develop understandings which hedgehogs often lack. What’s more, we live, as Athreya admits, in a highly complex world — one in which there are serious limits to what economics can do on its own.

A good post!

