Archive for the ‘Giant’s shoulders carnival’ Category

Giant’s Shoulders carnival: 15th Edition — September 2009.

September 16, 2009

Welcome to the 15th edition of Giant’s Shoulders!

[1] Chronology of the early days of the telescope — from the Renaissance Mathematicus

On 25th August, Google celebrated the 400th anniversary of Galileo’s first public presentation of his telescope, an anniversary that is also commented upon in the latest edition of the Guardian Weekly, a compendium of the English daily newspaper The Guardian for expatriates like myself. It’s kind of nice to see the world paying a bit of attention to the history of astronomy, but unfortunately they both got the date wrong! I suspect that both of them relied on the same news agency report and didn’t bother to check the facts. Well, for those that care, and even for those that don’t, I have put together a short chronology of the early days of the telescope.

[2] Why, after all, would the Earth moving be such a revolutionary idea? — from Starts with a bang

So, why was “eppur si muove” such a big deal? Didn’t we know about this second possibility? Didn’t people at least consider it?

The answer, shockingly, is no, not really. Why didn’t we consider it? Because looking up and noticing that the heavens rotate predated our idea that the Earth could be round. If the Earth is flat, then it wouldn’t make sense for it to rotate, would it?

The huge mistake is that once we discovered the Earth was round (in about 300 B.C., by the way), we didn’t reconsider this problem. We didn’t go back and say, “Hey, you know what? Instead of this celestial sphere idea, maybe it’s just us that’s spinning!” And so we held onto the idea of a fixed Earth for nearly 2,000 more years.

I can’t help but wonder: if we went back and critically re-examined our assumptions and conclusions based on what we know now, would we uncover some fantastic possibilities that could challenge our current scientific views of the Universe? Or is everything we have concluded to this point really the scientific best we can do with what we know?

By the way, the video of the rotation of the sky and the stars that this piece links to is a must-see!

[3] Which is the first modern disaster to be studied from a sociological and psychological perspective? — from Providentia

… the Halifax Explosion represented one of the first modern disasters to be studied from a sociological and psychological perspective. Several years after the explosion, Samuel Henry Prince completed his Ph.D. at Columbia University. His thesis, titled Catastrophe and Social Change: Based on a Sociological Study of the Halifax Disaster, represented the first systematic study of relief efforts following a major disaster. As a scholar, pastor, and social activist, Prince had considerable first-hand experience in dealing with disaster. Not only was he part of the rescue effort following the Titanic sinking five years earlier, but he narrowly escaped being injured in the Halifax explosion himself (despite the risk of flying glass, he arrived on the scene just minutes after the blast to help with rescue efforts). After the explosion, Prince helped mobilize international relief efforts and continued to work on behalf of survivors and their families for years afterward.

In his book, Prince described the “shock and disintegration” that followed the explosion as well as the massive relief effort needed and the complex social issues that were linked to aiding survivors. He also emphasized the long-term consequences of the tragedy by stating: “The Halifax Disaster will leave a permanent mark upon the city because so many of the living have been blinded or maimed for life. But it is possible that the disaster may leave a mark of another sort, for it is confidently believed by those who took part in the relief work during the first few weeks that Halifax will gain as well as lose. The sturdy quality of its citizens will bring ‘beauty out of ashes’.” Prince also stressed the role that disasters can play in bringing about meaningful social change to prevent future loss of life.

Although there had certainly been previous disasters and major relief efforts, Catastrophe and Social Change represents one of the first attempts at studying disasters from a social science perspective and is still considered a classic in the field. While emergency management would eventually become a well-researched topic after World War II, the lessons learned from the Halifax Explosion continue to be applied in rescue efforts around the world.

To download Catastrophe and Social Change

[4] The Osborn headaches — from Laelaps

Osborn was not suggesting that his “aristogenes” were macromutations that were then acted upon by natural selection. The new traits were already fine-tuned to what the creature needed to survive. Thus, for Osborn, “aristogenes” were by definition adaptive. Later in the same paper he wrote:

A new aristogene is readily distinguished from a new D. mutation [i.e. a sudden, large-scale mutation that gives rise to a new trait] by invariably obeying the eighteen principles of biomechanical adaptation; D. mutations on the contrary may or may not be adaptive.

This may seem straightforward now, but let me assure you it took a lot of careful reading to tease this much out. While Osborn did make many important contributions to paleontology, his ideas about evolution, especially in his later years, were often presented in a tangle of new terms and references to certain laws or principles. It seemed like he was trying to make his hypotheses more law-like by coining new terms and trying to boil them down to the workings of chemistry and physics, but it seems to me that this approach backfired.

[5] Is the selfish gene the bio-sciences equivalent of Newtonian gravity? — from The Primate Diaries

Over many generations, Wilson and Swenson continued to collect the bacteria-laden soil from under the largest and smallest plants and grow more plants of the same species. What they found was that plants grown in the two soil ecosystems grew the same as the previous (genetically unrelated) plants had. By selecting groups of bacteria, rather than the individual plants, Wilson and Swenson demonstrated how group selection could give rise to the most fit individuals. Since the network of soil bacteria thrives so long as its host plant thrives, the mutualistic relationship promotes each individual bacterium in the network; but they are only able to thrive because of the selection that occurred at the level of the group. This is how multilevel selection operates.

At this time, multilevel selection theory is in its infancy, and it is currently unknown how widespread this kind of selection is in the natural world. However, I believe the research shows great promise and deserves to be taken seriously. It is unfortunate that Dawkins has often been as stridently against this line of research as he has been against religious conservatives. Referring to this research as the “group delusion” and Wilson’s interest in it as a “weird infatuation” demeans his role as an advocate of cutting-edge scientific research. Multilevel and group selection theory is not ideologically driven, and its primary interest is to advance knowledge of evolutionary history, not condemn it. Unfortunately, Dawkins has made the issue personal when the focus should be on the merits of the research itself.

However, all of this is commonplace in the history of science. Dawkins, like Newton before him, is passionate about his topic and will defend his perspective vociferously. I don’t believe the life sciences equivalent of Einstein has entered the stage just yet, but I do think that such an arrival should be anticipated and welcomed as we continue to expand our knowledge about the natural world.

The following three pieces might be a bit of a stretch for Giant’s Shoulders; but, hey, they are interesting; so, here you go!

[A1] Arrival of Starlings in North America

[A2] The departure of the ice man — or, is it the arrival?

[A3] The Dispersal of Darwin’s Cambridge visit posts!

Have fun; and, the next edition will be published at Quiche Moraine.


Measurement of equilibrium vacancy concentrations

November 25, 2008

Vacancies as equilibrium defects

In a crystal, by definition, there is a regular arrangement of atoms; however, this regular arrangement is disrupted by defects of various kinds; point defects, more specifically vacancies, are one such kind; see this wiki page on crystallographic defects for some schematics.

Vacancies are equilibrium defects; by that we mean that thermodynamic equilibrium demands the existence of these defects (at any temperature above absolute zero) in crystals.

The necessity for the creation of vacancies at any (absolute) temperature T can be understood by the following argument [1]: Let us consider a vacancy; because of its presence in the lattice, some bonds, which would otherwise have been satisfied, are now broken; this costs the system some energy, resulting in an increase in the internal energy and hence the enthalpy of the system. This increase is directly proportional to the fraction of sites that are left vacant, that is, to the vacancy concentration: \Delta H \approx X_v \Delta H_v, where X_v is the mole fraction of vacancies, \Delta H_v is the increase in enthalpy per mole of vacancies created, and \Delta H is the total increase. On the other hand, the creation of vacancies leads to an increase in the entropy of the system; this increase comes in two ways: one is the configurational entropy, due to the fact that there are many different ways in which the vacancies can be distributed in the lattice; the other is the thermal entropy, due to the changes in the vibrational frequencies of atoms around the vacancies. Thus, the total change in entropy is \Delta S = X_v \Delta S_v - R [ X_v \ln{X_v} + (1-X_v) \ln{(1-X_v)}]. Hence, using these two expressions, the molar free energy change \Delta G = \Delta H - T \Delta S can be expressed in terms of X_v. In the specific case where X_v \ll 1, by differentiation one obtains the equilibrium concentration of vacancies, X_v^e = \exp{(\Delta S_v / R)} \exp{(-\Delta H_v / R T)}, for which the free energy is a minimum at any given temperature T. Schematically (following [1]), this optimization is depicted as below:

Graphical representation of minimization of free energy and equilibrium vacancy concentration
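To put some numbers on this minimization, here is a minimal sketch in Python. The exponential form is the standard result of setting d(\Delta G)/d X_v = 0 for X_v \ll 1; the values of \Delta H_v and \Delta S_v are illustrative order-of-magnitude choices for aluminium, not the values measured by Simmons and Balluffi.

```python
import numpy as np

# Equilibrium vacancy fraction from d(dG)/dX_v = 0, valid for X_v << 1:
#     X_v^e = exp(dS_v / R) * exp(-dH_v / (R * T))
# dH_v and dS_v are illustrative order-of-magnitude values for aluminium;
# they are NOT the values reported by Simmons and Balluffi.
R = 8.314        # gas constant, J/(mol K)
dH_v = 70e3      # enthalpy of vacancy formation, J/mol (~0.7 eV per vacancy)
dS_v = 2.0 * R   # thermal entropy of vacancy formation, J/(mol K)

def x_v_eq(T):
    """Equilibrium vacancy mole fraction at absolute temperature T (in K)."""
    return np.exp(dS_v / R) * np.exp(-dH_v / (R * T))

for T in (300.0, 600.0, 933.0):  # 933 K is roughly the melting point of Al
    print(f"T = {T:6.1f} K : X_v^e = {x_v_eq(T):.2e}")
```

Note how steeply the concentration rises with temperature, from about 10^{-12} at room temperature to about 10^{-3} near the melting point; this is why the measurements discussed below had to be made at high temperatures.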

The work of Simmons and Balluffi

Simmons and Balluffi, in a series of papers published in Physical Review between 1960 and 1963, measured and reported the equilibrium concentration of vacancies in aluminium, silver, gold and copper [2-5]. In this post, I would like to talk about these papers — more specifically, about the first one, in which the equilibrium concentration of vacancies in aluminium was reported.

As Cahn notes in his wonderful (and, for every materials scientist, must-read) The coming of materials science, (a) even in the 1920s it was known that vacancies are equilibrium defects that should exist in every crystal at any temperature above absolute zero, and (b) the experimental approach of Simmons and Balluffi (which came nearly four decades later) of measuring these quantities for metals at various temperatures, by comparing dilatometric measurements with precision measurements of the lattice parameter, is one of the most fruitful ones [6].



A side note: Cahn, while talking about point defects, notes why modern materials science is not physical metallurgy, and how

... the gradual clarification of the nature of point defects in crystals (...) came entirely from the concentrated study of ionic crystals, and the study of polymeric materials after the Second World War began to broaden from being an exclusively chemical pursuit to becoming one of the most fascinating topics of physics research.

In any case, the relevant sections of Cahn's book (3.2.3.1 on point defects, 4.2.2 on diffusion, and 5.1.3 on radiation damage) are very informative, readable, and enjoyable, and are a must-read; they also serve as a nice historical introduction to the development of the theoretical concepts and experimental methodologies.



The paper of Simmons and Balluffi begins with a statement of the state of things as it existed at that time:

The experimental determination of the predominant atomic defects present in thermal equilibrium in metals has proven to be a difficult problem. Even though the general thermodynamic theory of point defects is well developed, experiment has not yet established the nature and concentrations of the defects in a completely satisfactory way.



Another digression: one of the references at the end of the second sentence in the quote above is to the work of Feder and Nowick [7]; Cahn calls the approach of Feder and Nowick “ingenious”, and the plot in Figure 2 of Feder and Nowick is the same as that in Simmons and Balluffi’s Figure 3; in that sense, Simmons and Balluffi were following up on the work of Feder and Nowick. However, while Feder and Nowick wrote

It must therefore be concluded that, in the case of Pb and Al, an unexpected increase in a physical property at high temperatures must be explained in terms of the anharmonicity of the lattice vibrations, rather than in terms of large vacancy concentrations[.]

Simmons and Balluffi conclude

... it is concluded that they are predominantly lattice vacancies.

In fact, the number that Simmons and Balluffi quote for the concentration of vacancies at the melting point is of the same order as (though greater by a factor of 3 than) that quoted by Feder and Nowick.

Is it that Feder and Nowick tried the methodology and (a) got the right results but wrongly concluded that the method is not of great use, or (b) did not make accurate measurements and hence were led to the wrong conclusion? To me, it looks like the answer is (b) — one of the important conclusions of Simmons and Balluffi (in their own words) is

[In the experiments of Feder and Nowick] ... aluminium gave a rather questionable indication that vacancies are the predominant defect at elevated temperatures.

...

It appears that the method may be sufficiently precise to serve as a tool for investigating point defects in many substances.

However, this is a question on which I would love to hear from an historian of science.



The experimental approach used by Simmons and Balluffi is very special among the various methodologies used to measure equilibrium vacancy concentrations; while other methods (based on measurements of internal friction, electrical resistivity, specific heat, etc.) give only indirect information,

There is one type of experiment, however, that appears to be capable of giving direct information about the nature of predominant defects.

This method, as we noted above,

… consists of measuring the differences between the fractional lattice parameter change, \Delta a / a, as measured by x-ray diffraction, and the linear dilatation of the specimen, \Delta L/L, as defects are generated in a crystal containing a constant number of atoms.

Or, in other words, when a material is heated, its dimensions change, and this change is directly related to the change in the lattice parameter of the material. If, in addition, vacancies (that is, extra lattice sites) are created, there will be a discrepancy between the two measurements, since the dilatation also registers the change in length due to the creation of these extra lattice sites; this difference can then be used to calculate the vacancy concentration: for cubic crystals, X_v \approx 3 (\Delta L/L - \Delta a/a). The idea is as simple as that!
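As a toy illustration of this relation (the relation itself is the one Simmons and Balluffi use for cubic crystals; the input readings below are made up for illustration and are not data from the paper):

```python
def vacancy_fraction(dL_over_L: float, da_over_a: float) -> float:
    """Simmons-Balluffi relation for cubic crystals: the fraction of lattice
    sites added as vacancies is X_v ~= 3 * (dL/L - da/a)."""
    return 3.0 * (dL_over_L - da_over_a)

# Illustrative (made-up) readings near the melting point:
print(vacancy_fraction(1.80e-2, 1.77e-2))  # ~9e-4, the order reported for Al
```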

Of course, one of the things I like a lot about Simmons and Balluffi’s experiment is that the theoretical basis for their actual calculation was provided by Eshelby, in a theorem:

Eshelby has shown that a uniform random distribution of cubically symmetric point centers of dilatation (point defects) in an elastic material with cubic elastic constants will produce a uniform elastic strain of the crystal without change of shape. When uniform straining without change of shape occurs, the reciprocal lattice undergoes a uniform strain equal and opposite to the uniform strain of the crystal lattice; and the fractional change of lattice constant is equal to the fractional change in linear dimensions of the crystal.

From Simmons and Balluffi’s paper, I see that Eshelby’s theorem was also tested experimentally; those papers, along with Eshelby’s own, probably deserve a post of their own, and I might do that some time!

The key aspects of Simmons and Balluffi’s paper, as far as experimental technique is concerned, are

[a] that the lattice parameter and dilatational measurements were carried out at the same temperature, and as a function of temperature, and

[b] that the measurements were highly accurate — to within 1 in 100,000 for both quantities.

And, to satisfy the above conditions, Simmons and Balluffi built a special apparatus such that

… both \Delta L/L and \Delta a/a could be measured simultaneously on the same specimen and where all temperatures were measured with the same thermocouple.

The details of the apparatus, specimen, furnace, and the precautions taken during the experiment and the measurement make fascinating reading — to give a few examples:

… Recrystallisation and grain growth occurred, and a final bamboo-type grain structure was produced where single large grains, several cm in size, occupied the full width of the bar.

… The paramount considerations in the design of the present measurements of the change in length were that (1) the specimen be quite free of external constraint during expansion and contraction, (2) the influence of creep at the highest temperatures be minimized, and (3) the necessary sensitivity be obtained with very direct means, avoiding any apparatus which would be in a temperature gradient and therefore subject to possible unknown error.

… Characteristics required of the x-ray lattice expansion measurement included (1) high accuracy, (2) short measurement time, (3) a specimen having properties identical to those of the length change measurement specimen, and (4) careful design so that the temperature distribution would be negligibly perturbed by the necessary access port for the x-rays.

The results of Simmons and Balluffi are, in some sense, not unexpected; they did find vacancies to be the predominant defects; as they note in their abstract, theirs

is the first direct measurement of formation entropy; the value is near that expected from theoretical considerations.

This success led to their carrying out similar experiments on copper, silver and gold; and, as the review by Kraftmakher [8] points out, of the many important papers on point defects in metals, those of Simmons and Balluffi continue to be of relevance nearly four-and-a-half decades after their publication.

Current state

Simmons and Balluffi’s paper is a classic that shows how far you can go with careful experimentation (itself based on strong theoretical foundations); the values they reported were regarded as the most reliable for nearly three decades after the paper was published [8]; even now, though there are some discrepancies between the more modern results and those reported by Simmons and Balluffi, the issue is far from settled (against Simmons and Balluffi, that is). Further, as Kraftmakher notes in his review [8], some of the original ideas and suggestions in Simmons and Balluffi (like the ones about quenched-in vacancies and the lattice anharmonicity contributions at high temperatures) remain very much relevant and continue to be active areas of study.

As that great poem The dance of the solids by John Updike notes,

Textbooks and Heaven only are ideal;
Solidity is an imperfect state.

And, Simmons and Balluffi’s series of papers are nearly perfect on the imperfectness of the solid state of metals (noble and otherwise) and are worth reading and pondering on — Have fun!

References:

  1. D A Porter and K E Easterling, Phase transformations in metals and alloys, Chapman & Hall, Second Edition, pp. 43-44, 1992.
  2. R O Simmons and R W Balluffi, Measurements of equilibrium vacancy concentrations in aluminium, Physical Review, 117, 52-61, 1960.
  3. R O Simmons and R W Balluffi, Measurement of the equilibrium concentration of lattice vacancies in silver near the melting point, Physical Review, 119, 600-605, 1960.
  4. R O Simmons and R W Balluffi, Measurement of the equilibrium concentrations of lattice vacancies in gold, Physical Review, 125, 862-872, 1962.
  5. R O Simmons and R W Balluffi, Measurement of the equilibrium concentrations of vacancies in copper, Physical Review, 129, 1533-1544, 1963.
  6. R W Cahn, The coming of materials science, Pergamon Materials Series, Pergamon, First Edition, pp. 105-109, 2001.
  7. R Feder and A S Nowick, Use of thermal expansion measurements to detect lattice vacancies near the melting point of pure lead and aluminium, Physical Review, 109, 1959-1963, 1958.
  8. Y Kraftmakher, Equilibrium vacancies and thermophysical properties of metals, Physics Reports, 299, pp. 79-188, 1998.

On Griffith’s criterion for brittle fracture

November 10, 2008

In a paper published in 1920 [1], A A Griffith laid the foundations of fracture mechanics with his criterion for brittle fracture; the wiki page on Griffith, as well as these two short notes by Zhigang Suo, have more information on the man and his achievements. In this blog post, which is part of my monthly Classics in Materials Science series, I would like to discuss Griffith’s paper (with quotes from the paper itself — as usual); by the way, the wiki page on fracture mechanics also explains the idea behind Griffith’s work in a lucid manner, and links to several wonderful online resources.

The idea

Griffith noticed that polished surfaces and materials with smaller cracks show enhanced strength against rupture as opposed to materials with scratched surfaces and larger cracks. These observations were not satisfactorily explained by the then-existing hypotheses of rupture in materials. So, at the very beginning, Griffith introduces his alternative criterion, which is based solely on energy minimization:

According to the well-known “theorem of minimum energy,” the equilibrium state of an elastic body, deformed by specified surface forces, is such that the potential energy of the whole system is a minimum. The new criterion of rupture is obtained by adding to this theorem the statement that the equilibrium position, if equilibrium is possible, must be one in which rupture of the solid has occurred, if the system can pass from the unbroken to the broken condition by a process involving a continuous decrease in potential energy.

Thus, Griffith’s criterion is nothing but a necessary, thermodynamic criterion. In modern parlance, if the elastic energy due to the presence of a crack can be relieved by opening the crack up, the system will do so — provided the cost paid to open it up (in terms of the creation of newer surfaces) is more than compensated by the elastic energy gain.

This idea is rather general, and, as Griffith himself notes,

Up to this point the theory is quite general, no assumption having been introduced regarding the isotropy or homogeneity of the substance, or the linearity of its stress strain relations.

Griffith goes on to consider two specific cases — the case of a cracked plate and that of the strength of thin fibres.

The cracked plate and thin fibres — theory and experiments

The first illustration that Griffith considers is a lens-shaped crack that runs through the thickness (thanks, Rajesh, for the careful reading!) of the specimen; the plate is under a tensile load such that the tensile axis is perpendicular to the crack surface — as shown in the figure below.

[Figure: griffith-schematic]

Given this scenario, it is just a question of calculating the strain energy and the surface energy. Griffith does both and obtains the well-known expression for the stress \sigma_f at which this material would rupture: \sigma_f = \sqrt{\frac{2 \gamma Y}{\pi c}}, where \gamma is the surface energy, Y is the Young’s modulus, and c is the half-length of the crack. [Note that the elastic energy expressions in the paper are incorrect — as Griffith himself notes at the end of the paper.]
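For the record, here is a back-of-the-envelope version of the energy balance behind this expression (a sketch under the standard assumptions: plane stress, a through-crack of total length 2c, energies per unit plate thickness). The potential energy change on introducing the crack is \Delta U = - \pi c^2 \sigma^2 / Y + 4 c \gamma, and setting d(\Delta U)/dc = 0 recovers \sigma_f = \sqrt{2 \gamma Y / (\pi c)}. A quick numerical check, with illustrative property values of the order of those for glass (these are not Griffith’s measured values):

```python
import math

# Griffith stress: sigma_f = sqrt(2 * gamma * Y / (pi * c)).
# Property values are illustrative, roughly of the order of those for glass;
# they are not the values Griffith measured.
gamma = 0.5   # surface energy, J/m^2
Y = 70e9      # Young's modulus, Pa
c = 1e-6      # crack half-length, m (a micron-scale flaw)

sigma_f = math.sqrt(2.0 * gamma * Y / (math.pi * c))
print(f"sigma_f ~ {sigma_f / 1e6:.0f} MPa")
```

A micron-scale flaw thus already pulls the strength down to a small fraction of the theoretical, flaw-free strength, which is precisely the point Griffith makes in the discussion quoted below.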

Griffith went on to do experiments on thin tubes and spherical bulbs of hard English glass to verify his expression; he measured the surface tension of these glasses, and then, after introducing scratches on them, ruptured them by the application of stress. Knowing the elastic modulus of the glasses and the scratch size, it was then easy to calculate \sigma_f and compare it with the experimentally observed values. The match was quite good — save for a systematic error; Griffith attributes the error to the stress concentration at the crack tips, using the following observation, which I liked a lot (the emphasis and the parenthetical remarks are mine):

… It must be regarded as improbable that the error in the estimated surface tension is large enough to account for this difference [10% below the theoretical value], as this view would render necessary a somewhat unlikely deviation from the linear law [of variation of surface tension of glass with temperature].

A more probable explanation is to be obtained from an estimate of the maximum stress in the cracks. An upper limit to the magnitude of the radius of curvature at the ends of the crack was obtained by inspection of the interference colours shown there. Near the ends a faint brownish tint was observed, and this gradually died out as the end was approached, until finally nothing at all was visible. It was inferred that the width of the cracks at the ends was not greater than one-quarter of the shortest wavelength of visible light, or about 4 \times 10^{-6} inch. Hence, \rho [radius of the crack tip] could not be greater than 2 \times 10^{-6} inch.

Griffith ends the section on cracked plates with a discussion in which he succinctly sums up:

The general conclusion may be drawn that the weakness of isotropic solids, as ordinarily met with, is due to the presence of discontinuities, or flaws as they may be more correctly called, whose ruling dimensions are large compared with molecular distances. The effective strength of technical materials might be increased 10 or 20 times at least if these flaws could be eliminated.

Having noted that the elimination of flaws would lead to larger strengths, it was only natural for Griffith to test the idea and add support to the validity of his hypothesis; this he did with his experiments on thin fibres:

… very small solids of given form, e.g., wires or fibers, might be expected to be stronger than large ones, as there must in such cases be some additional restriction on the size of the flaws. In the limit, in fact, a fibre consisting of a single line of molecules must possess the theoretical molecular tensile strength. …

This conclusion has been verified experimentally for the glass used in the previous tests, strengths of the same order as the theoretical tenacity having been observed.

The rest

Apart from these discussions, which have become part and parcel of every materials scientist’s basic knowledge base, Griffith goes on to discuss many other things in the later sections of the paper — molecular theory, the limitations of elastic theory, and the application of his theory to liquids. They are as interesting as they are speculative (at least for their time). For example, here is Griffith on the validity of the continuum assumption for small-scale systems, which anybody who works on nanomaterials today will appreciate:

It is a fundamental assumption of the mathematical theory that it is legitimate to replace summation of the molecular forces by integration. In general this can only be true if the smallest material dimension, involved in the calculations, is large compared with the unit of structure of the substance.

The impact of the paper

The scientific study of fracture began with Griffith’s work on cracks in brittle solids like glass …

— J D Eshelby [2]

In real life, crystals deform at stresses much lower than those predicted by theory; this discrepancy is what gave birth to the theory, and later the experimental observation and confirmation, of dislocations. Similarly, in real life, materials rupture at stresses much lower than those predicted by theory; by introducing the idea of flaws, Griffith removed this inconsistency and showed a way of quantifying things. In doing so, Griffith gave rise to the field of study now known as fracture mechanics. His elegant and simple arguments have become part of every basic materials engineering course, and more detailed and varied studies based on his work continue to be pursued (like this study on the beginning of earthquakes, for example) — which makes his paper a must-read. Finally, the last, and probably the best, reason to read this classic is its clarity and the pleasure it gives on perusal. Have fun!

Note:

  • Thanks to my colleague Rajesh (discussions with whom prompted me to pick Griffith for this post), I understand that a volume of Transactions of the American Society for Metals has reprinted Griffith’s paper — probably with annotations [3]; unfortunately, I am not able to locate the volume, but it might be well worth the effort; by the way, if you manage to locate it, send me a note or leave a comment below.
  • As an historical aside, G I Taylor, one of the proponents of the dislocation theory, is also the one who communicated Griffith’s paper; in addition, Taylor and Griffith also worked together on stress estimation. However, Griffith’s work on fracture predates that of Taylor on dislocations by more than a decade; on the other hand, while Taylor’s work led to vigorous activity in the study of dislocations, Griffith’s was not followed up for quite some time — till after the Second World War or so.

References:

[1] A A Griffith, The phenomena of rupture and flow in solids, Philosophical Transactions of the Royal Society of London, Series A, 221,  163-198, 1920.

[2] J D Eshelby, Fracture Mechanics, Science Progress, 59, 161-179, 1971.

[3] Metallurgical classics, Transactions of the American Society for Metals, 61, 871-906 (1968).

Update — Nov 14, 2008: Though I vaguely remembered the story about a fire accident involving the experiments of Griffith with glass rods, I could not locate a reference earlier. Today, I found it in Gordon’s The New Science of Strong Materials (the entire section on Griffith and glass fracture is well worth your time):

I never knew Griffith himself but Sir Ben Lockspeiser, who acted as Griffith’s assistant at this time, told me something about the circumstances under which the work was done. In those days research workers were expected to earn their money by being practical, and in the case of materials they were expected to confine their experiments to proper engineering materials like wood and steel. Griffith wanted a much simpler experimental material than wood or steel and one which would have an uncomplicated brittle fracture; for these reasons he chose glass as what is now called a ‘model’ material. In those days models were all very well in the wind tunnel for aerodynamic experiments but, damn it, who ever heard of a model material?

These things being so, Griffith and Lockspeiser took care not to bring the details of their experiments too much to the notice of the authorities. The experiments, however, involved drawing fibres and blowing bubbles of molten glass and one day, after the work had been going on for some months, Lockspeiser went home leaving the gas torch used for melting the glass still burning. After the enquiry into the resulting fire, Griffith and Lockspeiser were commanded to cease wasting their time. Griffith was transferred to other work and became a very famous engine designer. The feeling about glass died hard. Many years later, about 1943, I introduced a distinguished Air Marshal to one of the first of the airborne glass-fibre radomes, a biggish thing intended to be hung under a Lancaster bomber. ‘What’s it made of?’ ‘Glass sir.’ ‘GLASS! — GLASS! I won’t have you putting glass on any of my bloody aeroplanes, blast you!’ The turnover of the fibreglass industry passed the £100,000,000 mark about 1959 I believe.

Morphological instabilities during growth: linear stability analyses

October 9, 2008

Morphological instabilities

Typically, when a liquid alloy solidifies, as heat is extracted and the solid nucleates and grows, even if the initial solid-liquid interface is planar, it pretty soon breaks up — resulting in cellular and dendritic structures; see this page for some samples of such structures and videos (both experimental and simulated). A similar break-up of planar interfaces can also happen when a solid grows into another that is supersaturated, purely by diffusion, under isothermal conditions.

A rigorous mathematical study of these kinds of instabilities of interfaces during growth was pioneered by Mullins and Sekerka in a couple of classic papers [1,2]; as the quote below shows, this work is considered one of the key steps in the general area of study known as pattern formation:

A determination of the stability of simple solutions to moving-boundary equations with respect to shape perturbations is an important step in the investigation of a wide range of pattern-formation processes. The pioneering work of Mullins and Sekerka on the stability of the growth of solidification fronts and of Saffman and Taylor on moving fluid-fluid interfaces were major advances. The basic approach is to analyze the initial, short time, growth and/or decay of an infinitesimally small perturbation as a function of the characteristic length scale or wavelength of the perturbation. Although the linear stability approach, exemplified by this work, is not always sufficient, it is a basic tool in theoretical morphogenesis.

— Paul Meakin in Appendix A of his Fractals, scaling and growth far from equilibrium

In this blog post, I will talk about the papers of Mullins and Sekerka; the work of Saffman and Taylor, as well as the insufficiency of linear stability analyses mentioned in the quote above, deserve their own posts, and maybe someday I will write them.

Point effect of diffusion

Here is a schematic showing growth during a phase transformation regulated by (a) flow of heat and (b) diffusion. In case (a), heat is extracted from the left-hand side, resulting in the growth of the solid into the liquid. In case (b), it is the diffusion of solute from the supersaturated solid on the right-hand side that results in the growth of the solid on the left-hand side. In both cases, the interface is shown by the dotted lines in the schematic; though the interface is drawn as planar, as we see below, in most cases it would not remain so.

In the second case, wherein one solid (say, 1) is growing into a supersaturated solid (say, 2), the schematic composition profile will look as shown here:

With the above composition profile, it is easier to see why the interface, shown as planar in Fig. 1(b) above, cannot be expected to remain planar: suppose there is a small protuberance on the planar interface; the sharper the disturbance, the larger the area (or volume) of material ahead of it from which, by diffusion, material can be ferried to the interface, resulting in faster growth. This is known as the point effect of diffusion.

The figure below explains the point effect of diffusion:

As opposed to a planar interface, which can draw material only from a half-circle of radius r_D, where r_D is the diffusion distance, a protuberance with a sharp end can ferry material from an (almost full) circular region of radius r_D. Thus, it is favourable for the interface to break up into a large number of such jagged edges purely from the point of view of growth; however, such jagged interfaces lead to higher interfacial areas and hence higher interfacial energies. Thus, the actual shape of the interface is determined by these two opposing factors — namely, interfacial energy considerations and the point effect of diffusion.
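A toy quantification of this geometric argument (a sketch only; r_D is set to unity for illustration):

```python
import math

# Catchment regions feeding the interface by diffusion: a flat interface
# element draws from (roughly) a half-circle of radius r_D, while a sharp
# tip draws from an almost full circle of the same radius, so the tip is
# supplied about twice as fast.
r_D = 1.0  # diffusion distance, illustrative units
flat_catchment = 0.5 * math.pi * r_D**2
tip_catchment = math.pi * r_D**2
print(tip_catchment / flat_catchment)  # -> 2.0
```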

Even though the above explanation was in terms of diffusion, a similar effect can be shown to operate in the case of heat extraction also. In fact, the generic way of looking at both is to consider the gradients in these fields — be it composition or temperature. In Chapter 9 of the book Introduction to Nonlinear Physics (edited by Lui Lam), L M Sander explains the mechanism behind the Mullins-Sekerka instability using a schematic of equipotential lines ahead of a bump — since they are bunched up ahead of a protuberance, it grows (p. 200, Fig. 9.4). Of course, this explanation is a visual version of what Mullins and Sekerka have to say in their paper [1]:

The isoconcentrates are then bunched together above the protuberances and are rarefied above the depressions of the perturbation. The corresponding focussing of diffusion flux away from the depressions onto the protuberances increases the amplitude of the perturbation; we may view the process as an incipience of the so-called point effect of diffusion.

The analysis of Mullins and Sekerka

The mathematical analysis of Mullins and Sekerka [1] is aimed at understanding the morphology of the interface; as they themselves explain:

The purpose of this paper is to study the stability of the shape of a phase boundary enclosing a particle whose growth during a phase transformation is regulated by the diffusion of the material or the flow of heat. … The question of stability is studied by introducing a perturbation in the original shape and determining whether this perturbation will grow or decay.

Of course, in the case of a dilute alloy, during solidification, the solid-liquid interface is known to break up, and this break-up is more complicated since it involves simultaneous heat flow and diffusion; in another paper published shortly afterwards [2], Mullins and Sekerka analyse the stability of such an interface:

The purpose of this paper is to develop a rigorous theory of the stability of the planar interface by calculating the time dependence of the amplitude of a sinusoidal perturbation of infinitesimal initial amplitude introduced into the shape of the plane; … the interface is unstable if any sinusoidal wave grows and is stable if none grow.

Both these papers are models of clarity in exposition; Mullins and Sekerka are very careful to discuss the assumptions they make and the validity of the same; they also show how these assumptions are physical in most cases of interest.

As noted above, the actual break-up of an interface is determined by two competing forces — the capillary forces, which oppose the break-up, and the point effect, which promotes it; what Mullins and Sekerka achieve through their analysis is to obtain the exact mathematical expressions (albeit under the given assumptions and approximations) for these two competing terms.
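The flavour of their result can be captured in a schematic dispersion relation of the Mullins-Sekerka type; this is a caricature, not the full expression from the papers: the destabilizing (point effect) term grows like k, the stabilizing (capillary) term like -k^3, and a fastest-growing wavelength emerges in between. The coefficients A and B below are illustrative.

```python
import numpy as np

# Schematic Mullins-Sekerka-type dispersion relation for the growth rate
# sigma(k) of a sinusoidal perturbation of wavenumber k:
#   destabilizing point effect ~ +A*k, stabilizing capillarity ~ -B*k**3.
# A and B are illustrative coefficients, not values from the papers.
A, B = 1.0, 0.01

def growth_rate(k):
    return A * k - B * k**3

k_star = np.sqrt(A / (3.0 * B))  # fastest-growing wavenumber (d sigma/dk = 0)
k_c = np.sqrt(A / B)             # neutral wavenumber: sigma(k_c) = 0
print(f"fastest-growing k* = {k_star:.2f}, neutral k_c = {k_c:.2f}")
print(f"sigma(k*) = {growth_rate(k_star):.2f}")
```

Perturbations with k < k_c grow, those with k > k_c are damped by capillarity, and the emerging microstructure is set, at least initially, by the fastest-growing wavelength 2\pi/k^*.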

The continuing relevance

There are several limitations associated with the Mullins-Sekerka analysis; it is a linear stability analysis; it assumes isotropic interfacial energies; it neglects elastic stresses, if there be any.

Of course, there are many studies which try to rectify some of these limitations; for example, we ourselves have carried out Mullins-Sekerka type instability analysis for stressed thin films. Numerical studies and nonlinear analyses which look at morphological stability overcome the problems associated with the assumption of linearity that forms the basis of Mullins-Sekerka analysis.

But what is more important is that in addition to being the basis for these other studies, Mullins-Sekerka analysis, by itself, also continues to be of relevance — both from a point of view of our fundamental understanding of some of these natural processes and from a point of view of practical applications of industrial importance. I can do no better than to quote from this (albeit a bit old) news report:

Scientists in the 1940s and ’50s were well aware of instabilities and knew they played a role in formation of dendrites. But until Mullins and Sekerka published their first paper in 1963, no one had ever been able to explain the mechanisms that accounted for instabilities.

The Mullins-Sekerka theory provided a method that scientists and engineers could use to quantify all sorts of instabilities, said Jorge Vinals, an associate professor of computational science and information technology at Florida State University and a former post-doctoral fellow who studied under Sekerka.

Understanding instabilities is the first step in controlling them, so this methodology is important for engineers who need to make industrial processes as stable as possible, Vinals said. Physicists, on the other hand, find that interesting things happen when systems become unstable and so have an entirely different sort of interest in the theory. Mathematicians, for their part, have launched entire fields, such as non-linear dynamics and bifurcation theory, that explore the underlying mathematical descriptions of instabilities.

One example of how the theory has been put to use is in the semiconductor field, where computer chips are made out of large, single crystals of silicon that are sliced into thin wafers. In the early years, these single crystals measured just an inch in diameter; today, 12-inch diameter crystals are produced, resulting in wafers that each can yield hundreds of fingernail-size computer chips.

“You don’t just walk into the lab and build a bigger [silicon crystal] machine because in a bigger machine these instabilities can eat you alive,” Sekerka said. But by understanding the instabilities that occur as liquid silicon crystallizes, engineers have found ways to greatly reduce the formation of dendrites.

Sekerka, a Wilkinsburg native who earned his doctorate in physics from Harvard University, said he and Mullins weren’t thinking about such applications 40 years ago. Though working in a metallurgy department during Pittsburgh’s steel and aluminum heyday, they weren’t especially inspired by the needs of the metals industry, either.

“We were driven by intellectual curiosity more than the need to solve any particular problem,” he said. “Some of the greatest discoveries come from following intellectual curiosity.”

I will end this post with links to the obituary of W W Mullins (by R F Sekerka and H Paxton) and that of his wife June Mullins — to give an idea of the person behind these works.

References:

[1] W W Mullins and R F Sekerka, Morphological stability of a particle growing by diffusion or heat flow, Journal of Applied Physics, 34, 323-329, 1963.

[2] W W Mullins and R F Sekerka, Stability of a planar interface during solidification of a dilute binary alloy, Journal of Applied Physics, 35, 444-451, 1964.

The Giant’s Shoulders: third edition

September 16, 2008

This is, by far, the smallest Giant’s Shoulders blog carnival; and, I hope it will remain so in future too. The next edition of the carnival will be hosted by Doctor Silence at second order approximation on October 15, 2008. So, have fun with this one and be ready with your entries for the next one!

[1] 350 BC Aristotle on the mayfly

John Wilkins at Evolving thoughts decides to read a bit of Aristotle and evaluate his writings, with specific reference to the mayfly:

At any rate, the more I read Aristotle, and the more I understand both where things stood at his time and what he actually said, I find him to be an amazing natural historian, a good observer, and generally not a bad theoretician. Sure, his theories are wrong, and his overall philosophy of teleology in biological cases (not, I hasten to add, in his physics) is unnecessary now we have teleosemantic explanations (i.e., natural selection), but he is not the moron of popular history of biology; far from it.

Funnily enough, the EMBOR author of the canard, Katrin Weigmann, is trying to make the case that science is not infallible, while ignoring the very real actual achievements of the people she denigrates. Nobody thought science was infallible anyway, but trying to make out that errors were made where they weren’t doesn’t give one much confidence in any subsequent argument. And of course this is another case of scientists doing bad history for [scientific] political reasons.

[2] 1817 Defining Parkinson’s disease

Scicurious at Neurotopia writes about a classic monograph by Parkinson on the shaking palsy:

I definitely recommend Parkinson’s monograph, partially because it’s always interesting to read the old lit, and also because his case descriptions are incredibly vivid and empathetic. Although his methods of treatment probably brought little real cure, he was a compassionate physician and a brilliant man of his time, who put together all the dots to define what we now call Parkinson’s Disease.

[3] 1958/59 Modelling spinodal decomposition

I write about a series of papers that laid the foundation for the Cahn-Hilliard equation for the study of microstructural evolution:

In a series of papers published in The Journal of Chemical Physics, Cahn and Hilliard (and Cahn, by himself) provide the context as well as the formulation of the CH equation: the first of these papers, published by Cahn and Hilliard in 1958 [1], discusses the formulation of the free energy which takes into account the interfacial energies that result from composition gradients; the second, published by Cahn in 1959 [2], discusses the thermodynamic basis behind the free energy formulation; in the third, published by Cahn and Hilliard in 1959 [3], the formulated free energy is used to study phase separation in immiscible liquid mixtures. In a paper published in Acta Metallurgica in 1961 [4], Cahn discusses the study of spinodal decomposition in solids (including the elastic stress effects due to the lattice parameter differences between the two phases). These four papers (sometimes along with another by Cahn and Allen [5] — which is a classic by itself and deserves a separate post for one of the future Giant’s Shoulders carnivals) form the theoretical basis for almost all the diffuse interface studies on microstructural evolution in the metallurgical and materials science literature.

[4] 1963 Interference between different photons

Skullsinthestars writes about a clever and simple experiment that proved one of the most famous statements concerning the quantum mechanics of photons wrong:

One of the most famous statements concerning quantum mechanics, as it relates to the light particles known as photons, was made by the brilliant scientist Paul Dirac in his Quantum Mechanics book:

“each photon then interferes only with itself.  Interference between different photons never occurs.”

This statement is bold and unambiguous: in Dirac’s view, a photon only creates interference patterns by virtue of its own wave function, and wave functions of different photons do not interact.

The statement is bold, unambiguous, often quoted — and wrong!  In 1963, Leonard Mandel and G. Magyar of Imperial College disproved this statement with a clever and simple experiment and a two-page paper in Nature.

[5] 1973 Beginnings of Genetic Engineering

Evilutionary biologist writes about a classic which started the field of genetic engineering:

Cohen had previously determined how to make E. coli take in foreign DNA (a citation classic worthy feat in itself) when he transformed E. coli with a plasmid known as pSC101, that conferred resistance to the antibiotic tetracycline.

Boyer, on the other hand, had discovered EcoRI, a restriction enzyme that could snip open pSC101 while leaving “sticky ends”.

Like chocolate and peanut butter, the combination was unbeatable. Cohen and Boyer realized they could combine their techniques to create a new plasmid containing foreign DNA.

[6] 1977 Categorizing fundamental types of living beings

Epicanis at the Big room (and the little things in it) writes about a paper that forms the basis of the modern classification of fundamental types of living beings — the three groups in the phylogenetic tree of life:

The “plant” and “animal” distinction is pretty classic – until comparatively recently, bacteria were assumed to be “plants”, just as fungi (“plants” that lacked chlorophyll) were. Non-photosynthetic bacteria were referred to as “schizomycetes” (literally “fission” [splitting in two] fungi, because they reproduce by splitting from one cell into two rather than forming spores), while bacteria with chlorophyll (cyanobacteria or “blue-green” algae, and possibly the “green sulfur bacteria”) were designated “schizophyta” (“fission plants”).

Within the last fifty years or so, though, it’s become obvious that bacteria were a different type of life from fungi, chlorophyll-containing plants, or animals. The latter critters have cells that in turn contain “organelles”, which are more or less very specialized “mini-cells” within themselves. The nucleus, for example, is a compartment within the cell where the cell’s DNA is kept and processed. Bacteria, it turned out, don’t have any of these organelles (in fact there’s good evidence that at least some if not all organelles used to be bacteria, but this post’s long enough already so I won’t go into that), and life was re-organized into the bacterial “prokaryotes” (“before nucleus”) and the “eukaryotes” (having a “true nucleus” – i.e. everything that isn’t bacteria).

From this paper we get the modern fundamental three groups we still use today: Eukaryotes, Eubacteria [“true” bacteria], and the Archaea (or “Archaebacteria” as this paper names them). The name comes from the idea that the environment in which methanogens thrive resembles what has often been assumed to be the atmosphere of the very early Earth.

Happy reading!

The Cahn-Hilliard equation

September 13, 2008

Diffusion and microstructural evolution

The Cahn-Hilliard (CH) equation describes microstructural evolution in cases where the microstructure is completely described by the composition field variable:

\frac{\partial c}{\partial t} = \nabla \cdot M \nabla \mu \;\;\;\;\; \dots (1),

where the composition c(\mathbf{r}, t) is a function of position \mathbf{r} and time t, \mu is the chemical potential, and M is the mobility; the operator \nabla is the vector differential operator.

The classical equation that describes microstructural evolution through diffusion is Fick’s second law, which has the following form:

\frac{\partial c}{\partial t} = \nabla \cdot D \nabla c \;\;\;\;\; \dots (2),

where, D is the diffusion coefficient.

This strong resemblance between Fick’s second law (Eq. 2) and the CH equation (Eq. 1) is not accidental; the CH equation can indeed be considered a generalisation of Fick’s second law; in fact, it is the result of a study of spinodal decomposition — a phenomenon which also brought out the limitations of Fick’s second law. In this post, I will discuss the CH equation in the context of spinodal decomposition.

Spinodal decomposition

Let us consider a phase diagram of a binary alloy of the type shown below, and an alloy of composition c_0 that is quenched from a high temperature T_1 in the single-phase region to a temperature T_2 deep inside the two-phase region.

This phase diagram indicates that the corresponding free energy, as a function of composition at the temperature T_2, looks, schematically, as shown below.


I have marked five points on the free energy curve; the points A, B and C correspond to the extrema — A and C correspond to minima, while B corresponds to a maximum; in other words, the second derivative of the free energy G with respect to composition, \partial^{2} G/\partial c^{2}, is positive at A and C and negative at B; at the points marked D and E, the second derivative is zero, i.e., \partial^{2} G/\partial c^{2} = 0.

Suppose the chosen alloy composition c_0 falls between the points D and E. In such a system, even a small composition fluctuation will grow spontaneously. What is more, if Fick’s second law is used to describe the system, the movement of atoms will be such that diffusion runs against the concentration gradient — making it necessary to assume that the diffusion coefficient D is negative in this region. This apparent discrepancy is overcome by realising that diffusion takes place in such a way as to decrease inhomogeneities in the chemical potential \mu (and thus to decrease the total free energy of the system — the chemical potential being \mu = \partial G/\partial c, up to a constant). Thus, the first modification that needs to be made to Fick’s law is to replace composition gradients by gradients in chemical potential (and to calculate the chemical potential from the free energy) — which results in the CH equation.
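To make the sign argument concrete, here is a minimal sketch using a regular-solution free energy; the model and the parameter values are illustrative choices of mine, not anything from the CH papers. The effective Fickian diffusivity is proportional to \partial^{2} G/\partial c^{2}, which changes sign inside the spinodal (between D and E above).

```python
# Effective diffusivity D_eff ~ M * d2G/dc2 for a regular solution,
#   G = R*T*[c ln c + (1-c) ln(1-c)] + Omega*c*(1-c),
# so d2G/dc2 = R*T / (c*(1-c)) - 2*Omega.  Parameters are illustrative.
R, T, Omega = 8.314, 800.0, 20e3  # J/(mol K), K, J/mol

def d2G_dc2(c):
    return R * T / (c * (1.0 - c)) - 2.0 * Omega

for c in (0.05, 0.50, 0.95):
    print(f"c = {c:.2f} : d2G/dc2 = {d2G_dc2(c):+.0f} J/mol")
# Negative at c = 0.5 (inside the spinodal: "uphill" diffusion), positive
# near the pure ends (ordinary downhill diffusion).
```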

Diffuse interface approach

Replacing \nabla c by \nabla \mu, and thus eliminating the necessity of assuming negative diffusion coefficients in the study of spinodal decomposition, is but one aspect of the CH equation. The second and more fundamental aspect of the CH equation is related to the fact that the widths of the interfaces in the system are not, a priori, fixed to some value — usually zero. Thus, in contrast to the usual models (like Fick’s law), which arbitrarily fix the interface width to zero (and hence are called sharp interface models), the CH equation, by virtue of allowing the system to choose its own interface width, belongs to another class of models called diffuse interface models.

In addition to this fundamental correctness (in that the equation does not impose arbitrary restrictions on the system) and consistency (in that it is posed in terms of a continuously decreasing free energy), the CH equation is also ideal for a numerical study of microstructural evolution, since it does not require that the interfaces be tracked at all times; what would be topological singularities (like the formation of new interfaces or the disappearance of existing ones in the microstructure) in the corresponding sharp interface model are naturally accounted for in this model.

The CH series of papers

In a series of papers published in The Journal of Chemical Physics, Cahn and Hilliard (and Cahn, by himself) provide the context as well as the formulation of the CH equation: the first of these papers, published by Cahn and Hilliard in 1958 [1], discusses the formulation of the free energy which takes into account the interfacial energies that result from composition gradients; the second, published by Cahn in 1959 [2], discusses the thermodynamic basis behind the free energy formulation; in the third, published by Cahn and Hilliard in 1959 [3], the formulated free energy is used to study phase separation in immiscible liquid mixtures. In a paper published in Acta Metallurgica in 1961 [4], Cahn discusses the study of spinodal decomposition in solids (including the elastic stress effects due to the lattice parameter differences between the two phases). These four papers (sometimes along with another by Cahn and Allen [5] — which is a classic by itself and deserves a separate post for one of the future Giant’s Shoulders carnivals) form the theoretical basis for almost all the diffuse interface studies on microstructural evolution in the metallurgical and materials science literature. A more pedagogical exposition of the CH equation can be found in the extremely readable accounts by Cahn [6] and Hilliard [7].

Numerical simulations and online resources

Cahn, in his 1961 Acta Metallurgica paper [4], traces some of the ideas of spinodal decomposition in solids to Gibbs, and in his Institute of Metals lecture in 1968 [6] traces the name and the concept of the spinodal to van der Waals, a link to whose paper is available here. It is also well known that the work of Cahn and Hilliard is but a continuum version of that of Hillert (whose Ph.D. thesis, discussing spinodal decomposition, was apparently given to Cahn by Hilliard [8]). However, what makes the CH equation as formulated by Cahn and Hilliard very different from the preceding works is (a) the use of the diffuse interface approach, (b) the exposition of the formulation in variational terms using thermodynamic principles, and (c) the ease with which the CH equation can be extended and numerically simulated. Thus, it is no wonder that, with the extensive use of numerical simulations in the metallurgical and materials literature, the CH equation has become a necessary tool in the arsenal of every theoretical materials scientist.

For those of you who might be interested in solving the CH equation numerically, a C code that uses spectral methods (and FFTW) to simulate CH spinodal decomposition is available for download (under a GPL license) from here. For some more material on the CH equation, diffuse interface modelling, and references to recent reviews, go here. Here is a page where you can see a simulation of spinodal decomposition using the CH equation.
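For completeness, here is a minimal Python sketch of a semi-implicit spectral scheme of the same flavour; this is an illustrative implementation of mine, not the C code linked above. The free energy is the standard double well g(c) = c^2 (1-c)^2, so that \mu = g'(c) - \kappa \nabla^2 c, and the stiff k^4 capillary term is treated implicitly for stability.

```python
import numpy as np

# Semi-implicit spectral solver for the Cahn-Hilliard equation,
#   dc/dt = M * lap( g'(c) - kappa * lap(c) ),  g(c) = c^2 (1-c)^2,
# where lap() denotes the Laplacian. Parameters and initial condition are
# illustrative, in nondimensional units.
N, dx, dt = 128, 1.0, 0.1
M, kappa = 1.0, 1.0

k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2

rng = np.random.default_rng(0)
c = 0.5 + 0.01 * (rng.random((N, N)) - 0.5)  # c0 = 0.5, inside the spinodal

for step in range(5000):
    dg = 2.0 * c * (1.0 - c) * (1.0 - 2.0 * c)  # g'(c)
    c_hat = np.fft.fft2(c)
    dg_hat = np.fft.fft2(dg)
    # explicit in the nonlinear term, implicit in the k^4 (capillary) term:
    c_hat = (c_hat - dt * M * k2 * dg_hat) / (1.0 + dt * M * kappa * k2**2)
    c = np.fft.ifft2(c_hat).real

print("composition range after decomposition:", c.min(), c.max())
```

Starting from small random fluctuations about c_0 = 0.5, the composition field spontaneously separates towards the two wells, giving the classic interconnected spinodal microstructure.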

References

[1] J W Cahn and J E Hilliard, Free energy of a nonuniform system. I. Interfacial free energy, The Journal of Chemical Physics, 28, 2, pp. 258-267, 1958.

[2] J W Cahn, Free energy of a nonuniform system. II. Thermodynamic basis, The Journal of Chemical Physics, 30, 5, pp. 1121-1124, 1959.

[3] J W Cahn and J E Hilliard, Free energy of a nonuniform system. III. Nucleation in a two-component incompressible fluid, The Journal of Chemical Physics, 31, 3, pp. 688-699, 1959.

[4] J W Cahn, On spinodal decomposition, Acta Metallurgica, 9, pp. 795-801, 1961.

[5] J W Cahn and S M Allen, A microscopic theory of domain wall motion and its experimental verification in Fe-Al alloy domain growth kinetics, Journal de Physique, 38, pp. C7-51, 1977.

[6] J W Cahn, Spinodal decomposition, The 1967 Institute of Metals Lecture, TMS AIME, 242, pp. 166-180, 1968.

[7] J E Hilliard, Spinodal decomposition, in Phase transformations, American Society for Metals publication, pp. 497-560, 1970.

[8] J W Cahn, Reflections on diffuse interfaces and spinodal decompositions, in The selected works of John W. Cahn, edited by W. Craig Carter and William C. Johnson, TMS, Warrendale, PA, pp. 1-8, 1998.

The third Giant’s Shoulders: five days to go

September 10, 2008

As noted earlier, the third Giant’s Shoulders carnival will be hosted here on the 15th of this month; so, send your entries to me at this email address (preferably on or before the 14th of September 2008):

gs dot carnival dot guru at gmail dot com

I am looking forward to a carnival with a huge number of entries, which will keep the readers occupied for at least a month. So, send as many entries as you can, on as varied subjects as you can.