Archive for the ‘Materials Science’ Category
It was his lifelong ambition to bring to metallurgy, a field rich in observations but still mostly phenomenological in method, the quantitative and predictive rigor of recent developments in atomic-scale physics.
His work on dislocations was a first success in this endeavor. The dislocation had been introduced as a theoretical concept in 1934 to explain the ease of plastic flow in crystals: moving a line defect that localizes shear deformation requires a much lower stress than uniform shearing of the crystal planes. The concept was only gradually accepted. Even though dislocations provided key insights into mechanical behavior and crystal growth, it was only in 1956 that electron microscopy produced the first direct images of moving dislocations. In the late 1940s, Cottrell showed that dislocation theory could be used to make quantitative predictions.
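The size of the discrepancy that the dislocation concept resolved can be made concrete with Frenkel's classic estimate (a standard textbook calculation, not part of the article itself): rigidly sliding one crystal plane over an adjacent one should require a theoretical shear stress of roughly

```latex
\tau_{\mathrm{th}} \;\approx\; \frac{G b}{2\pi d} \;\approx\; \frac{G}{2\pi},
```

where $G$ is the shear modulus, $b$ the interatomic spacing in the slip direction, and $d$ the interplanar spacing. Measured yield stresses of pure metal single crystals are instead of order $10^{-4}$ to $10^{-5}\,G$, several orders of magnitude smaller; it is this gap that the motion of individual dislocation lines accounts for.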
A very nice piece!
Two interesting papers from PNAS:
Take a look!
When I started my PhD in the mid-nineties, nano was the in-thing. I remember a reputed electron microscopist who made a presentation in which he pointed to a particular feature in a transmission electron micrograph and said, “Previously, I used to call this a particle; nowadays, we call it a nanoparticle”. That pretty much summed up our attitude towards nano too. Many of us felt that nano was all hype and that there was nothing exciting or intellectually challenging in it (I do have a friend who calls his blog nono-science!).
A few years into my PhD, I was exposed to two new viewpoints on nano. One point of view, offered by a reputed researcher in the nano area, is that nano is a good pedagogical tool for training a new researcher in materials science and engineering in some of the field's fundamental concepts: many processes and properties of materials that are generally not considered important at typical length scales become important at the nano-scale, making it an interesting and useful area of study. The other was that nano has some uses in military and medical sciences, because these are two areas in which other considerations override cost; otherwise, in actual engineering practice, one might never use nanomaterials at all. Both these points of view are still true to some extent.
Reading Cyrus C M Mody’s Instrumental community: probe microscopy and the path to nanotechnology has given me a couple of other viewpoints on nano. The first is that nano was a response to “perceived declines in the disciplines”:
Over time, many surface scientists came to believe that nanotechnology provided the best way for them to revive or transform their discipline while retaining much of their knowledge base. As Jun Nagomi puts it:
strictly classical surface structure determination is dead as a field. Or extremely mature, and not very fundable. So what I do now I can honestly bill as being related to nanotechnology. But when you look at the actual kinds of materials I’m working with, I’m still working with metals and I’m still working with semiconductors.
The other is that the decline in industrial funding is the reason why nano became popular:
Usually, though, governments created new academic nanotechnology institutions to occupy the niche once held by the big corporate labs. As Jim Murdy puts it:
Bell Labs is just a shadow of what it once was, and IBM has had to scale back much of its operation as well. So they are not the dominant force they used to be globally across surface science or nano…. If they go away, we still have very good people, they just tend to be more in the universities than in an industrial lab. Universities have different strengths. They generally have a harder time getting good equipment. An industrial lab had stuff universities drool over…. That’s in some sense what IBM and Bell Labs did–they brought a bunch of very good people and put them in a central location at the same lab and equipped them well. To an extent that’s what the [National Nanotechnology] centers are meant to do at the universities.
Finally, Mody also makes a most interesting point about the identification of probe microscopy with nanotechnology and the reasons for it; the short answer: interdisciplinarity.
What is nice about Mody’s book is that he makes these points convincingly, in an extremely readable way. I thoroughly enjoyed it. If you like anthropology, the history of science, science and technology studies, or any combination thereof, this is the book to read. I strongly recommend it.
One more bonus from the book is the philosophy of Prof. Virgil Elings on the need for, and the teaching of, instrumentation. As the following quotes indicate, his views are quite provocative and interesting:
One lesson from the master’s program that Elings carried into DI was that “the areas that students had done undergraduate work in made little difference in their ability to design instruments. Any deficiency, except of knowledge of math, could be repaired by some reading and talking with other students. All those esoteric courses made little difference.”
The MSI program was clearly quite different from a traditional academic degree program. It was, for some students, “a rude awakening from the spoon-feeding of most undergraduate experiences.”
Elings became convinced that formal academic pedagogy was counterproductive: “[S]chools at all levels, practically down to kindergarten, do almost nothing to foster innovation and invention….[A]cademia can afford to spend some time on innovation since, in my opinion, a lot of what is done now is a waste of time.”
The talk is available here; have fun!!
A nice ten minute clip of a conversation with John Cahn at YouTube (especially, for the metallurgists and materials scientists among you!)
Undercooling, constitutional undercooling, and kinetic undercooling are some of the varieties I already knew of; today, I learnt that there is geometric undercooling too! Take a look.
Long-time readers of this blog know of a kitchen sink experiment I did nearly four years ago. Thanks to an email alert from my colleague and friend Shirish Waghulde, I learnt about another cool kitchen experiment today — the diagnostics of microwave ovens using appalams as sensors. You will soon see photos on this blog of the diagnostics of our kitchen’s microwave oven!
Inverse problems are generally known to be hard. Here is a commentary on a couple of papers published in PRL which discuss one such problem – namely, finding a potential that gives rise to a given type of lattice. One of the papers referred to in the commentary has this to say:
In general, proving that a certain configuration is the ground state of a given potential is a very hard problem. In fact, the exact nature of the ground state is not rigorously known even for simple interactions such as the Lennard-Jones potential. In this Letter we have described a direct method to design potentials for targeted self-assembly of lattices, a problem usually approached using iterative methods involving repeated relaxations of the system [2,3]. From our construction follows the somewhat counterintuitive observation that it is actually simpler to find a potential with a given configuration as a ground state than to determine the ground state(s) of a given potential.
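To get a feel for why even the forward direction — evaluating which lattice a given potential prefers — takes real computation, here is a minimal sketch of my own (not taken from the papers): a truncated lattice-sum comparison of the Lennard-Jones energy per particle for square and triangular packings at the same nearest-neighbour spacing.

```python
import math

def lj(r):
    """Lennard-Jones pair energy with epsilon = sigma = 1."""
    return 4.0 * (r**-12 - r**-6)

def energy_per_particle(basis, a, rcut=8.0):
    """Half the sum of pair energies from one site to all other lattice
    sites within a cutoff radius (a crude truncated lattice sum)."""
    v1, v2 = basis
    n = int(rcut / a) + 2  # enough cells to cover the cutoff disc
    e = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if i == 0 and j == 0:
                continue  # skip self-interaction
            x = a * (i * v1[0] + j * v2[0])
            y = a * (i * v1[1] + j * v2[1])
            r = math.hypot(x, y)
            if r <= rcut:
                e += lj(r)
    return 0.5 * e  # half: each pair is shared by two particles

# nearest-neighbour spacing chosen at the LJ pair-potential minimum
a = 2.0 ** (1.0 / 6.0)
square = ((1.0, 0.0), (0.0, 1.0))
triangular = ((1.0, 0.0), (0.5, math.sqrt(3.0) / 2.0))

e_sq = energy_per_particle(square, a)
e_tri = energy_per_particle(triangular, a)
```

With this spacing the triangular lattice comes out lower in energy than the square one, consistent with the usual expectation that close packing wins in two dimensions — but, as the quote stresses, a finite lattice sum like this is nowhere near a rigorous proof of what the true ground state is.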
Another interesting aspect of these papers is their use of the concept of reciprocal space. Maybe I can use them to show students the power of reciprocal-space-based techniques when I teach my mathematical methods course next semester.
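As a concrete starting point for such a lecture (my own sketch, not from the papers): in two dimensions the reciprocal lattice vectors follow from the direct ones via the biorthogonality condition b_i · a_j = 2π δ_ij.

```python
import math

def reciprocal_2d(a1, a2):
    """Reciprocal lattice vectors b1, b2 satisfying b_i . a_j = 2*pi*delta_ij."""
    det = a1[0] * a2[1] - a1[1] * a2[0]  # signed cell area
    f = 2.0 * math.pi / det
    b1 = (f * a2[1], -f * a2[0])   # perpendicular to a2
    b2 = (-f * a1[1], f * a1[0])   # perpendicular to a1
    return b1, b2

# direct-lattice vectors of a triangular lattice with unit spacing
a1 = (1.0, 0.0)
a2 = (0.5, math.sqrt(3.0) / 2.0)
b1, b2 = reciprocal_2d(a1, a2)
```

A nice fact to point out in class: |b1| = |b2| here, so the reciprocal of a triangular lattice is again triangular (rotated by 30 degrees) — and it is this reciprocal lattice, not the direct one, that diffraction experiments probe directly.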