In general, LDA calculations underestimate the band gap by 50 percent or more, while Hartree-Fock calculations typically overestimate gaps compared with experiment. The reason for these discrepancies is that exchange-correlation self-energy effects can significantly modify the properties of the excited electrons from those of an independent-particle picture.
Optical transition energies, even at the simplest level, need to be properly computed as transitions between quasi-particle states. First-principles calculation of the quasi-particle energies in real materials became possible with the development of methods based on the GW approximation for calculating the electron self-energy. This approach is based on an evaluation of the self-energy operator expanded to first order in the dynamically screened Coulomb interaction (with local fields) and the electron Green's function. The method has been applied to a range of systems, including semiconductors and insulators, simple metals, surfaces, interfaces, clusters, materials under pressure, and materials as complex as the C60 fullerites.
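As a concrete illustration of the gap problem, a crude "scissor" correction (a rigid upward shift of the conduction bands) is often used to mimic the quasi-particle gap opening that a full GW calculation would produce. The sketch below uses invented eigenvalues purely for illustration; a real GW calculation computes state-dependent self-energy corrections.

```python
# Illustrative "scissor operator": rigidly shift the unoccupied (conduction)
# eigenvalues upward to mimic the quasi-particle gap opening that a full
# GW self-energy calculation would give. All numbers here are made up.

def scissor_correct(eigenvalues_eV, n_occupied, shift_eV):
    """Shift every conduction-band eigenvalue up by shift_eV."""
    corrected = list(eigenvalues_eV[:n_occupied])
    corrected += [e + shift_eV for e in eigenvalues_eV[n_occupied:]]
    return corrected

# Hypothetical LDA eigenvalues at one k-point (eV), with 4 occupied states.
lda = [-11.9, -1.2, -0.8, 0.0, 0.6, 2.1]
qp = scissor_correct(lda, n_occupied=4, shift_eV=0.7)

lda_gap = lda[4] - lda[3]   # the too-small LDA gap
qp_gap = qp[4] - qp[3]      # gap after the scissor shift
print(lda_gap, qp_gap)
```

A real quasi-particle calculation replaces the single fitted shift with a state- and material-dependent self-energy correction, which is why the scissor rule is only a rough stand-in.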
Historically, new nonlinear optical materials have been discovered largely by empirical search rather than by theoretical prediction.
At the moment, most levels of theory can provide only rough estimates of the properties of any proposed nonlinear optical material. There are no truly first-principles approaches; indeed, most are semiempirical. Such methods are good at identifying systematics among related materials, that is, at interpolation, but they are poor at extrapolating to new classes of materials. The few nearly first-principles calculations can reasonably treat only a few simple materials, and even then simplifying approximations are made.
With computational effort typically several times that of an LDA band calculation, the GW quasi-particle approach at present remains the only successful ab initio method for excited-state energies. Researchers are pushing forward in several different directions. One direction is continuing the applications of the full method and extending it to other systems.
For example, this approach is just being refined for application to transition metals and transition-metal oxides. Another direction is developing algorithms and codes to reduce the computational effort of the calculations, such as reformulating the present k-space formalism for the self-energy operator into a real-space formalism. At present, ab initio quasi-particle calculations are practically limited to systems containing no more than a modest number of atoms. Yet another direction is developing less accurate but simpler methods for approximate self-energy calculations.
Calculation of the intensity of linear optical spectra actually requires computation of the two-particle Green's function, with the proper vertex function included to take into account the electron-hole interaction (excitonic effects). This kind of calculation has yet to be carried out from first principles for real materials.
Previous theoretical investigations on this subject were mainly restricted to model tight-binding Hamiltonian studies. For semiconductors such as silicon, standard random phase approximation calculations for the dielectric function yield band-edge optical absorption strengths that are typically off by more than a factor of two.
Clearly, this is an important issue and needs to be addressed before quantitative prediction of optical absorption can be achieved. Of course, the higher-order optical response of solids is even more difficult to calculate. The results are highly sensitive to the electronic excitation spectrum because of the multiple energy denominators. However, several groups are now working toward better treatment of these quantities. In the future we can expect a much wider class of nonlinear optical materials to be produced, including, for example, polymers and metallorganic materials.
Nanostructures also are a source of nonlinear optical activity. The challenge to theorists will be to devise effective means to explore the parameter space of these materials. Meeting this challenge will require not only greatly enhanced computing capability but also improved understanding of the effect of electron-electron interactions for self-energy and excitonic effects. At the heart of essentially all modern-day critical technologies is the need to describe how atomically and chemically dissimilar materials bond to each other to form solid interfaces.
The central importance of understanding the behavior of solid surfaces and interfaces can be readily demonstrated from a consideration of key industrial segments of the economy: metal-semiconductor, semiconductor-semiconductor, and semiconductor-oxide interfaces form the cornerstone of the microelectronics industry.
Adhesion, friction, and wear (tribological phenomena between solid surfaces) are ubiquitous in all manufacturing processes. The microstructural evolution of grain boundaries is central to materials performance in metallurgy. An understanding of the chemical processes occurring at surfaces is key to the development of cost-effective semiconductor etching and catalysis processes. An understanding of the structure of surfaces and how they function as catalysts offers opportunities for the design of completely new types of catalytic systems leading to revolutionary applications.
While the behavior of surfaces and interfaces often determines the performance of existing materials, one cannot overestimate the scientific and technological impact of designing new materials based on our ability to synthesize atomically structured materials. In all synthetic, atomically modulated materials, the interfaces can dramatically affect properties. It is the emergence of advanced crystal growth techniques that has allowed the atom-by-atom synthesis of novel materials exhibiting unique and unexpected properties. It is this capability to atomically engineer the properties of synthetically modulated materials that has led to a revolution in modern materials science by qualitatively altering our approach to materials design at the atomic scale.
Surfaces and interfaces are complex, heterogeneous, low-symmetry systems. Their description by accurate quantum-mechanical methods is a challenging task because the reduced symmetry of these systems requires that large unit cells be used. Moreover, solid surfaces exhibit a much larger variety of atomic structures than their bulk counterparts. In fact, because of the different possible crystallographic orientations and the numerous metastable structural phases for a given orientation, the number of possible atomic structures is essentially infinite.
The study of solid interfaces and chemisorbed species presents an even greater challenge with regard to the possible number of systems. Nevertheless, the close synergy between experimental and theoretical activities that is characteristic of the field of surface science has allowed rapid progress in the development of general physics-based guiding principles to predict the atomic geometry and electronic structure of solid surfaces and interfaces.
The advent of ultrahigh vacuum technology and the accelerating theoretical developments in electronic structure calculations, coupled with the emergence of high-performance computing environments, have greatly enhanced our understanding of chemical bonding at solid surfaces and interfaces. Below is a discussion of critical issues germane to the theoretical description of surfaces and interfaces.
In recent decades there have been tremendous advances in our ability to describe the ground-state properties and phase transformations of bulk materials using ab initio methods. These same ab initio electronic structure methods have now been used to determine the atomic geometry and electronic structure of clean and adsorbate-covered surfaces.
Modern surface science has greatly benefited from the continuous development of powerful methods, such as the density functional method, which, together with the efficient implementation of pseudopotential and all-electron formalisms, has enabled very accurate calculations of the ground-state properties of surfaces and interfaces.
Moreover, significant conceptual developments in electronic structure theory have enabled dramatic increases in our ability to perform ab initio calculations of the static and dynamic properties of large-scale materials systems. Prominent among these developments are the advances made by Car and Parrinello in calculating structural and dynamical properties. Ab initio quasi-particle calculations based on the GW approximation have permitted the determination of the electronic excitation spectrum at surfaces, in excellent agreement with experimental spectroscopic observations.
Also to be emphasized is the critical role played by empirical quantum mechanical methods, such as the tight-binding method, in the early determination of the atomic and electronic structure of surfaces. The experimental validation of these early theoretical predictions demonstrated the predictive power of quantum mechanical electronic structure methods and laid the foundation of modern theoretical surface science. In surface and interface science the role of computational materials physics is to complement experimental work by addressing critical issues that cannot be measured directly and to provide physical insight into observed phenomena.
Foremost among these critical issues are the following:

- The nature and origin of atomic reconstructions at surfaces, interfaces, and grain boundaries;
- The electronic structure (surface and interface bound states and resonances, chemisorption-induced surface states, and so on) of clean and adsorbate-covered surfaces and interfaces;
- The attachment sites and binding energies of chemisorbed atoms and molecules on reconstructed surfaces;
- The effects of steps, defects, and impurities on the physical properties of surfaces and interfaces;
- The determination of the rectifying potential (Schottky barrier) at metal-semiconductor contacts; and
- The prediction of novel electronic, optical, magnetic, and mechanical properties of semiconductor superlattices and metallic multilayer materials.
As in the case of most areas germane to the field of theoretical and computational materials physics, the advent of high-performance computing environments will have a significant impact on the field of surface science. In particular, the size and complexity of the system that can be described will increase with computer power and memory.
As massively parallel processor (MPP) computing environments mature from the development phase to the production phase and become available to a wider user base, it is likely that similar large-scale calculations will be performed on a routine basis, bringing tremendous benefits to the field. Another aspect of surface and interface science that will greatly benefit from the wide availability of high-performance computing environments is the bridging of the length-scale gap by physics-based multiscale modeling, from the atomistic level (atomic geometry and electronic structure) to the continuum (elasticity, plastic deformation, and so on).
Much theoretical work remains to be done in this area. As an illustration of the impact of multiscale modeling, consider the task of predicting the deformation under load and the eventual failure of a polycrystalline metal with incorporated atomic impurities that give rise to a wide spectrum of grain boundary strengths. Current continuum-based, finite-element methods cannot be used to perform simulations unless they are augmented to include microscopic processes involving dislocation movement.
Consequently, significant effort should be expended in extracting atomic-level parameters from ab initio quantum mechanical calculations in order to augment constitutive models used in continuum-like simulations. For the particular case of predicting the microstructural evolution of grain boundaries in polycrystalline metals, essential atomic-level parameters are grain-boundary potentials, which are total-energy curves obtained from tensile and shear deformation of the boundary.
The next section, on thin-film growth, describes strategies to integrate modeling at different length and time scales as they relate to growth. Recent technological advances have led to the development of electronic and photonic devices of smaller and smaller sizes. The fabrication of these structures requires high material uniformity and interfacial homogeneity at the atomic level, currently achieved by using such techniques as molecular beam epitaxy (MBE) and chemical vapor deposition (CVD).
In addition, scanning tunneling microscopy (STM) has made it possible to observe the formation and shapes of small clusters of material at the initial stages of deposition, as well as the layer-by-layer growth of a crystal that often takes place at sharply defined steps between adjacent layers of the material. These processes are governed by physics at the atomic scale and cannot be explained by traditional nucleation theory or phenomenological continuum approaches.
The most useful theoretical approaches are computational in nature and involve ab initio calculations of interatomic potentials, molecular dynamics to probe the short-time dynamics, and larger-scale kinetic Monte Carlo (KMC) calculations, which are relevant to growth processes involving the deposition of many atoms. In some cases, such as silicon, surface reconstruction can produce inequivalent steps that are alternately rough and smooth. The KMC simulations take as input specified rates for atomic deposition and diffusion and then use those rates to simulate the kinetic processes in which atoms are added and are moved about on the growing surface (see Figure 2).
However, the rates, currently determined from a combination of experimental and theoretical information, are not well established. While KMC has shown great promise, its predictions are only as good as these input rates. The STM also makes possible a direct comparison of experiment with theoretical and computational modeling of the kinetic processes governing the growth of these structures. The use of this experimental technique along with advances in computational capabilities will improve our understanding of the kinetics of growth and our ability to fabricate smaller and better technological devices in the next decade.
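To make the structure of such a simulation concrete, the toy one-dimensional KMC model below deposits adatoms and lets isolated ones hop, drawing Gillespie-style waiting times from the total event rate. All rates and the crude "stick when a neighbor is occupied" nucleation rule are illustrative assumptions, not established values.

```python
import random

def kmc_submonolayer(n_sites=100, n_atoms=20, F=1.0, D=50.0, seed=2):
    """Toy 1-D KMC sketch: deposit n_atoms with rate F per site; free
    adatoms hop with rate D; an adatom with an occupied neighbor sticks
    (a crude nucleation rule). Returns the occupation list and the time."""
    rng = random.Random(seed)
    occ = [0] * n_sites
    deposited = 0
    t = 0.0
    while deposited < n_atoms:
        # Adatoms free to hop: occupied sites with both neighbors empty.
        movers = [i for i in range(n_sites)
                  if occ[i] and not occ[(i - 1) % n_sites]
                  and not occ[(i + 1) % n_sites]]
        r_dep = F * n_sites          # total deposition rate
        r_hop = D * len(movers)      # total hop rate
        total = r_dep + r_hop
        t += rng.expovariate(total)  # Gillespie waiting time to next event
        if rng.random() < r_dep / total:
            site = rng.randrange(n_sites)
            if not occ[site]:        # deposition attempt on an empty site
                occ[site] = 1
                deposited += 1
        else:
            i = rng.choice(movers)   # random hop, left or right
            j = (i + rng.choice((-1, 1))) % n_sites
            occ[i], occ[j] = 0, 1
    return occ, t

occ, t = kmc_submonolayer()
print(sum(occ), t)
```

A production KMC code differs mainly in bookkeeping (event lists updated incrementally, two-dimensional lattices, many event types), not in this basic rejection-free time-stepping structure.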
Many of the problems that arise in modeling growth are similar to those encountered in the study of other nonequilibrium processes. The most interesting phenomena observed experimentally often involve behaviors that occur over a broad range of spatial and temporal scales. Realistic parameter regimes are impossible to achieve with the computers that are currently available, and, for systems that stretch the limits of the current capacity, it is difficult to run a sufficient number of realizations to ensure the statistical significance of the results.
This makes it difficult to relate numerical results to observed behavior or, more ambitiously, to use simulations to predict future directions for experimental study. Anticipated improvements in machine performance and parallelized codes should allow these problems to begin to be addressed. Some specific issues of interest, starting with those associated with the smallest scales, are discussed below. Integrating the modeling done at different length and time scales is the foremost challenge to a comprehensive understanding of the process of growth. Modeling nonequilibrium systems accurately thus depends crucially on the use of accurate empirical potentials within KMC algorithms.
The KMC simulations employ these potentials to set the relative rates for the possible atomic moves. To understand the basic trends and general principles, a detailed understanding of the potentials is somewhat less important, although care must be taken to properly identify the essential properties. Furthermore, to improve the predictive power of KMC and its ability to provide quantitative results, much work needs to be done to improve our ability to extract these potentials from ab initio calculations or from techniques such as effective-medium theory. In the low-density limit, KMC can be used to model the self-assembly of small clusters of atoms deposited on a surface.
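The connection between potentials and KMC rates is usually made through Arrhenius expressions, with hopping barriers estimated, for example, from a bond-counting model. The attempt frequency and barrier energies below are illustrative assumptions, not fitted values for any particular material.

```python
import math

# Arrhenius hop rates of the kind fed into KMC simulations. The
# bond-counting barrier model (E = e_surf + n_neighbors * e_bond) and all
# numerical parameters here are illustrative assumptions.

K_B = 8.617e-5           # Boltzmann constant, eV/K
ATTEMPT_FREQ = 1.0e13    # typical lattice attempt frequency, 1/s

def hop_rate(n_neighbors, T, e_surf=0.6, e_bond=0.3):
    """Rate for an adatom hop with n_neighbors lateral bonds to break."""
    barrier = e_surf + n_neighbors * e_bond       # eV
    return ATTEMPT_FREQ * math.exp(-barrier / (K_B * T))

free = hop_rate(0, T=500.0)      # isolated adatom
bound = hop_rate(2, T=500.0)     # adatom bound to two lateral neighbors
print(free, bound, free / bound)
```

The strong suppression of the bound-atom rate (a factor of about a million here) is what makes island edges effectively immobile in submonolayer growth, and it shows why errors of even a tenth of an electron volt in ab initio barriers translate into order-of-magnitude errors in KMC rates.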
Because the system sizes considered are relatively small (on the order of hundreds or thousands of atoms), it is in this intermediate regime that interaction potentials can be incorporated in the greatest detail and the quantitative predictive power of KMC may be greatest. Recent experimental progress on growth techniques for quantum wires and quantum dots should yield the next generation of smaller and faster devices.
Thus, results obtained by modeling the self-assembly of small structures will have increasing technological relevance. Realistic simulations of the growth of many layers of atoms on a substrate pose difficult if not impossible challenges for the current generation of computers. However, with the development of faster, more powerful computers, there will be new opportunities for numerical simulations to play a role in the development of new growth techniques, where, to date, modeling results have lagged significantly behind experimental work.
While it is known experimentally that the long-time behavior is important in setting growth parameters and that the growth morphology changes as a system evolves from a few to many layers, with today's generation of computers we are still limited to crystals on the order of 10^7 atoms. As crystalline layers are deposited using MBE, step-like interfaces exist between regions with different numbers of layers. In some cases, growth that occurs at step edges is highly uniform, making such steps desirable. In fact, substrates are often cleaved at an angle to the crystalline axis in order to initiate growth on a stepped surface.
However, in other situations, steps may aggregate on the surface or individual steps may become nonuniform, leading to defect trapping, grain boundaries, and inhomogeneous growth. A better understanding of how to manipulate the desirable properties of step growth and prevent the undesirable ones is of fundamental technological importance, and a better understanding of step-edge barriers and surface reconstruction will be central to achieving it.
The misfit in lattice constant between the substrate and adlayer in growing films cannot be accounted for without the inclusion of elastic forces. Consideration of models that do not constrain atoms to periodic lattice positions will be essential for the study of strain relief and the formation or elimination of dislocations. To understand real heteroepitaxial film growth and the dependence of these effects on the thickness of the film, the interplay between elastic and relaxation effects must be determined. A new technique for controlling growth involves the use of surfactants to aid in the growth process.
Here the key discovery is that in many cases a small amount of a particular growth surfactant can lead to improved homogeneity at interfaces between similar or dissimilar materials. For example, a small amount of As, a surfactant used in the deposition of Ge on Si, causes the Ge to deposit uniformly on the Si surface, whereas without the As, the Ge on Si tends to clump, in effect not wetting the surface. While the use of surfactants shows much promise experimentally, at this initial stage very little is known about the microscopic kinetic processes that make this method effective.
Computational advances will enhance our ability to predict effective surfactants for specific growth processes and methods of improving the homogeneity of surfactant-aided deposition. Currently, one of the major difficulties lies in the fact that effective surfactants operate at low densities relative to the underlying crystal, and with today's generation of computers it is difficult to incorporate a sufficient number of atoms of the growing crystal into the simulations to obtain a realistic contrast in densities.
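The lattice misfit that drives such clumping in heteroepitaxy is easy to quantify. The sketch below uses standard room-temperature lattice constants for Si and Ge; the helper function is introduced here only for illustration.

```python
# Misfit strain in heteroepitaxial growth, illustrated for Ge on Si
# (the surfactant-mediated system discussed above). Lattice constants
# are standard room-temperature values in angstroms.

def misfit(a_film, a_substrate):
    """Fractional lattice mismatch the growing film must accommodate."""
    return (a_film - a_substrate) / a_substrate

A_SI = 5.431   # silicon lattice constant, angstroms
A_GE = 5.658   # germanium lattice constant, angstroms

f = misfit(A_GE, A_SI)
print(f"Ge on Si misfit: {100 * f:.1f}%")
```

A misfit of about 4 percent is large by epitaxial standards; it is the elastic energy stored by this strain that favors island formation over layer-by-layer wetting, which is why models that constrain atoms to rigid lattice positions miss the essential physics.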
In industrial applications, materials are much more commonly grown using CVD, which takes place at much higher densities and higher deposition rates than MBE. To date, numerical modeling has focused on the comparatively simpler problem of MBE. However, with substantially faster computers it may be possible to address issues encountered with CVD techniques, such as the inclusion of complex chemistry that is not readily accessible to experimental study.
A complete set of reaction rates for silane deposition of silicon has been determined by quantum chemical calculations. This necessitated performing accurate calculations for the very extensive number of different reactions possible in the Si-H system. The modeling of other important systems remains to be done. With two orders of magnitude more computational power, it would be possible to deal with ternary systems and somewhat heavier atoms. However, the quantum chemical calculations scale with such a high power of the number of electrons that raw computing power alone will not solve the problem of heavier atoms.
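The consequence of such high-power scaling is easy to see numerically. Assuming, purely for illustration, a method whose cost grows as N**p in the number of electrons N (p = 7 is a common rough characterization of highly correlated methods such as CCSD(T)):

```python
# Why raw computing power alone cannot reach much heavier atoms: if a
# quantum chemical method costs N**p in the number of electrons N, then a
# machine that is 100x faster enlarges the tractable N only modestly.
# The exponents below are illustrative, not tied to a specific code.

def size_gain(power_factor, p):
    """Factor by which the tractable N grows given extra computing power."""
    return power_factor ** (1.0 / p)

for p in (3, 4, 7):
    print(f"N**{p} scaling: 100x power -> {size_gain(100.0, p):.2f}x larger N")
```

With seventh-power scaling, two orders of magnitude in hardware buys less than a factor of two in system size, which is why algorithmic advances (improved density functionals, pseudopotentials, frozen cores) matter more than faster machines.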
Either a significantly enhanced density functional theory or elimination of the core electrons using frozen orbitals or pseudopotentials will be needed to deal with that problem. The ubiquitous interface between solid and fluid materials governs much of the desired and undesired behavior of technologically important systems and processes. At nanoscale size and time dimensions, collective atomistic dynamics dictates the complex interfacial performance. For nanotechnology it is this performance that must be engineered to some specific function. While the process of designing static interfacial functions is a well-developed engineering art, our ability to control interfacial dynamics is relatively primitive.
It is easy to name some pressing examples. We need to engineer the material properties defining solid-solid and solid-liquid interfaces to control friction and wear; to optimize dry and wet lubrication; to reduce degradation from impact, fracture, and cavitation; and to design repetitive dynamical contact and separation processes associated with specific technological needs. For example, the multibillion-dollar magnetic recording industry expects to increase areal recording densities in hard disk drives by about two orders of magnitude over the next decade by flying the recording heads substantially closer to the spinning disk than today's nanometer flying heights.
They may even be in continuous contact with disk surfaces. This will place extreme demands on the tribology of the slider-disk interface. If future improvements in recording densities are to be achieved, novel thin-film recording media and lubricants will have to be developed to meet the increasingly severe tribological demands of this slider-disk interface. Key to these developments is a molecular-level understanding of contact wear, bonding failure of thin films due to static and dynamical loading, lubrication using a few monolayers of molecules, and capillary stiction created when breaking or making the lubricating contact.
A major consequence of this lack of understanding is that in most practical applications today the materials design of the recording media, disk-head substrates, and lubricants is done mainly by trial and error. Many other emerging high-technology industries will depend on rugged, reliable, and long-lived miniaturized devices incorporating micromechanical moving parts. The success of these components, and the resulting technologies, will likely depend critically on the development of superior molecular thin films for coating and lubricating the tiny parts. Classical continuum physics has historically provided engineers and technologists with most of the needed theoretical and computational tools, such as fluid dynamics and thermodynamics.
As a technology enters the realm of nanoscale physics, however, this continuum approach is no longer valid. The nanophysics of materials presently belongs to the domain of condensed-matter physics, statistical mechanics, quantum chemistry, and, in the broadest sense, the science of nonlinear complex behavior. With the advent of scalable, massively parallel computers, the computational aspects of these disciplines now have the potential to provide very powerful tools and immediately useful solutions to this emerging field of nanoengineering technology.
One area of need in computational materials science is the development of general computational tools for studying the dynamics of condensed phase interfaces, with the goal of making them useful and available to research design engineers and scientists working in a broad spectrum of activities related to materials. This capability should grow with increasing applications through collaborative interactions with industrial researchers. Specific applications should be chosen so as to hasten tools development and to demonstrate the power and versatility of this computational approach to nanoengineering.
Initial applications could include (1) the dynamical fatigue and fracture of solids and of thin films bonded to solid surfaces under load, (2) solid-surface damage due to cavitation of a contacting fluid, and (3) thin liquid-film lubrication between sliding solid surfaces. These applications are discussed in the paragraphs below. Although much work on fracture in brittle materials and at interfaces has been done over some 70 years, the mechanisms that govern the structure and dynamics of cracks are not well understood.
An obvious difficulty is that we do not yet understand why cracks attain a limiting velocity that is about one half the velocity predicted by linear elasticity theory. In a typical fracture sequence an initially smooth and mirror-like fracture surface begins to appear misty and then evolves into a rough hackled region. In some brittle materials the crack pattern can also exhibit a wiggle of characteristic wavelength. All of these features are unexplained by continuum elasticity theory. For truly fundamental understanding we must go to the complex microscopic level, and molecular simulation could give us this capability.
Owing to experimental difficulties, few studies have examined the effects of cavitation in thin films, such as thin disk-lubricant films under the high shear forces typical of disk-drive operation. Thus, the intuition gained from hydrofoil cavitation experiments in high-speed water tunnels may not apply to such microtribological situations. Recently, a surface-force apparatus technique was used to observe cavitation in thin liquid films bounded by moving solid surfaces.
The experiments indicate that under certain conditions the formation of a cavitating bubble is a much more violent and destructive event than its eventual collapse. This finding contradicts the currently held belief that cavitation damage is due solely to the extremely large implosive pressure generated at the moment the bubble collapses, a conclusion based on Rayleigh's classic paper.
Because the thin liquid films are only about 10 nanometers thick, a molecular-level description is probably required for an accurate physical understanding. The friction and wear that occur between two rubbing surfaces can be greatly reduced by separating the surfaces with a film of lubricating molecules.
The key properties that enable a molecular film to provide good lubrication are low shear strength and resistance to penetrating asperities. Nevertheless, a molecular picture of how molecules lubricate has yet to be developed. Molecular dynamics simulations could study the mechanics of adsorbed molecules.
Surface forces would be calculated as a function of surface coverage to see how the lubricant's molecular mechanics change as the molecules go from being isolated on the surface to being packed together in a complete monolayer. Atomic-force microscope and scanning tunneling microscope experiments are being developed to study lubricant molecules on well-characterized surfaces. By combining these experimental results with simulations, we could hope to develop a detailed knowledge of how lubricant molecules can be designed to be effective.
These studies should also provide fundamental insights into the nature of the chemical bonding of lubricant molecules to surfaces. This type of information should provide the necessary scientific underpinning needed to develop novel lubricant systems. In the Born-Oppenheimer approximation the electronic and nuclear motions are treated as decoupled. The electronic motion problem is solved for different arrangements of the nuclei, and the resulting electronic energies, as a function of nuclear position, define a potential energy surface that governs the nuclear motion.
Armed with a potential energy surface, it is possible in principle to calculate the rates of dynamical processes, such as the rates of chemical reactions or of conformational changes. The stationary points on potential energy surfaces have special significance: minima correspond to stable chemical species, while first-order saddle points (stationary points at which the Hessian has one direction of negative curvature) correspond, naively, to the barriers on pathways of chemical reaction. Applications to problems that involve infinite or just very large finite systems require some extension.
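The stationary-point picture can be made concrete on a toy two-dimensional surface. The double-well potential below is a hypothetical stand-in for a real ab initio surface: Newton iteration on the gradient locates stationary points, and the Hessian eigenvalues classify them as minima (stable species) or first-order saddles (reaction barriers).

```python
import math

def V(x, y):
    """Toy double-well potential energy surface (illustrative only)."""
    return (x * x - 1.0) ** 2 + y * y

def grad(x, y, h=1e-6):
    """Central-difference gradient of V."""
    return ((V(x + h, y) - V(x - h, y)) / (2 * h),
            (V(x, y + h) - V(x, y - h)) / (2 * h))

def hess(x, y, h=1e-4):
    """Central-difference Hessian elements (a, b, c) = (Vxx, Vxy, Vyy)."""
    gxp, gyp = grad(x + h, y)
    gxm, gym = grad(x - h, y)
    _, gyu = grad(x, y + h)
    _, gyd = grad(x, y - h)
    return ((gxp - gxm) / (2 * h),
            (gyp - gym) / (2 * h),
            (gyu - gyd) / (2 * h))

def newton_stationary(x, y, steps=40):
    """Newton iteration on the gradient: converges to a stationary point."""
    for _ in range(steps):
        gx, gy = grad(x, y)
        a, b, c = hess(x, y)
        det = a * c - b * b
        x += (-c * gx + b * gy) / det   # solve H d = -g for the 2x2 case
        y += (-a * gy + b * gx) / det
    return x, y

def classify(x, y):
    """Classify via the eigenvalues of the symmetric 2x2 Hessian."""
    a, b, c = hess(x, y)
    mean = (a + c) / 2.0
    r = math.hypot((a - c) / 2.0, b)
    if mean - r > 0:
        return "minimum"
    if mean + r > 0 > mean - r:
        return "first-order saddle"
    return "maximum or degenerate"

minimum = newton_stationary(0.9, 0.3)   # converges to the well at (1, 0)
saddle = newton_stationary(0.2, 0.1)    # converges to the barrier at (0, 0)
print(classify(*minimum), minimum)
print(classify(*saddle), saddle)
```

On a real surface the gradient and Hessian come from quantum chemical calculations rather than finite differences of a closed-form function, and dedicated saddle-search algorithms replace the plain Newton step, but the classification by Hessian curvature is exactly the one described above.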
For periodic systems it is possible to generalize the methods of molecular quantum mechanics to deal with translational symmetry, but this is ineffective or very inefficient in situations such as low-density chemisorption. The study of heterogeneous catalysis then involves investigating the reaction path and energetics of the substrate molecule and a cluster representation of the catalyst.
Sometimes attempts are made to saturate dangling bonds at the periphery of the cluster. A more elaborate approach is to use a cluster model for an explicit representation of part of the bulk and then to embed this finite description in a bulk model. The issues of the accuracy of the bulk description and of continuity between the cluster and bulk descriptions are key elements of embedding techniques. The calculation of electronic wave functions and properties for molecules and finite clusters is the province of quantum chemistry. Quantum-chemical methods form a spectrum, from ab initio methods (in which there are, in principle, no adjustable parameters) to purely empirical methods (in which a potential energy surface is built up entirely from functional forms with fitted or estimated parameters).
Between these extremes are semiempirical methods, in which the electronic motion is described using the same or approximately the same equations as the ab initio methods, but parametrized using data fitted to experiment or otherwise estimated. Ab initio methods comprise two distinct classes. In one class, which characterizes the more traditional approach (in chemistry, though not in physics), an independent-particle model (the Hartree-Fock equations) is first solved for the electronic motion.
Refinement of this model involves the incorporation of electron correlation, omitted at the Hartree-Fock level, by perturbational or variational methods. The second class comprises density functional methods, in which the electronic structure treatment is based on considering the electron density rather than the wave function.
The simplest density functional method assumes a local density model for the exchange-correlation potential (the local density approximation, LDA). Broadly speaking, elaborate density-functional-based calculations for molecules are similar in computational effort to Hartree-Fock calculations but commonly yield better results. Traditional methods of accounting for electron correlation in molecules yield higher accuracy but are more expensive. Local density model calculations can be formulated very efficiently by using pseudopotential methods and plane-wave basis sets, at least for some elements.
Armed with a potential energy surface for a given system, it is possible to compute reaction rates for chemical reactions or for inelastic scattering (such as changes of internal quantum state in the colliding systems, or scattering of a molecule from a surface). For semiquantitative accuracy it is possible to solve the scattering equations within the framework of classical mechanics, in which the reacting system traces out trajectories in phase space.
For small systems it is also possible to solve the scattering equations quantum mechanically, but this becomes very demanding computationally even for systems with three atoms. A potential energy surface can also be used in following the time evolution of a system by molecular dynamics (MD), most commonly Newtonian dynamics. In traditional MD calculations the potential is specified empirically in terms of a generalized force field.
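As a concrete and deliberately minimal illustration of traditional MD with an empirical potential, the sketch below integrates a single Lennard-Jones "bond" with the velocity-Verlet scheme. All parameters are illustrative reduced units, not values for any system discussed here.

```python
import math

# Minimal MD sketch: one Lennard-Jones pair, velocity-Verlet integration.
# EPS, SIGMA, MASS, DT are illustrative reduced units (assumptions, not data).
EPS, SIGMA, MASS = 1.0, 1.0, 1.0
DT = 0.005  # a small timestep, analogous to the femtosecond steps of real MD

def lj_pot(r):
    """Lennard-Jones potential V(r) = 4*eps*((s/r)^12 - (s/r)^6)."""
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (sr6 * sr6 - sr6)

def lj_force(r):
    """Force along the bond, F = -dV/dr."""
    sr6 = (SIGMA / r) ** 6
    return 24.0 * EPS * (2.0 * sr6 * sr6 - sr6) / r

def velocity_verlet(r, v, steps):
    """Advance the bond length r and velocity v by the velocity-Verlet scheme."""
    a = lj_force(r) / MASS
    for _ in range(steps):
        r += v * DT + 0.5 * a * DT * DT
        a_new = lj_force(r) / MASS
        v += 0.5 * (a + a_new) * DT
        a = a_new
    return r, v

# Start slightly stretched from the minimum at r = 2**(1/6); the bond oscillates.
r_final, v_final = velocity_verlet(1.3, 0.0, 2000)
```

The timestep must stay well below the vibrational period for the total energy to remain nearly constant over thousands of steps, which is precisely why long simulated times are so expensive.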
However, this is inadequate for representing more complicated phenomena such as the forming or breaking of chemical bonds. One difficulty is that because small (typically femtosecond) timesteps are used, simulating a process for more than a few picoseconds is very demanding, and nanoseconds are at the limit of what can be achieved.
Since many chemical phenomena take place on a time scale of microseconds or milliseconds, there is a very considerable gap (as much as six orders of magnitude or more) between what can be achieved and what is desired. In addition to this problem of time scale, there are problems associated with multiple minima on potential energy surfaces.
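The time-scale gap can be made concrete with a line of arithmetic: at a femtosecond per step, even a nanosecond of simulated time costs a million force evaluations, and a millisecond lies a further six orders of magnitude beyond that.

```python
# Back-of-the-envelope illustration of the MD time-scale gap described above.
fs = 1.0e-15   # one femtosecond: a typical MD timestep
ns = 1.0e-9    # nanoseconds: roughly the limit of routine simulation
ms = 1.0e-3    # milliseconds: time scale of many chemical phenomena

steps_per_ns = ns / fs   # about a million steps just to reach one nanosecond
gap = ms / ns            # shortfall between achievable and desired: ~10^6
```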
It should be noted that MD simulations of liquids and macromolecules are widespread and very successful, especially in a biological context. One obvious approach to avoiding the limitations that empirical potentials impose on MD is to incorporate some sort of electronic structure calculation (ideally nonempirical) into the MD calculation, so that the potential energy and forces are calculated explicitly at each geometry required. Such an approach, sometimes called quantum molecular dynamics (QMD), is limited to relatively fast methods of electronic structure calculation (such as the local density model or semiempirical methods) for MD, although the same strategy can be employed with more elaborate methods for classical scattering.
The advantage of this approach is that it is unbiased (apart from the choice of electronic structure methodology), but it suffers from the same difficulties with multiple minima and time scale as conventional molecular dynamics. With current computing resources, calculations sampling only one or a few geometries can treat systems on the order of 50 atoms using traditional correlated electronic structure methods or density functional methods, and larger systems using semiempirical methods. MD simulations using empirical potentials can be performed for millions of atoms.
There are many major unsolved problems in the development and application of computational chemistry methods. One of the most important is the extension of existing methods to much larger systems. This would not only allow treatment of larger molecules and clusters but would alleviate some of the difficulties in embedding approaches, since the abstracted system would be larger and the errors in the embedding procedure could be expected to be less significant.
Most of the computational methods in use scale as N^3 or worse (up to N^7 for the most elaborate traditional methods), at least in principle. Some effort has begun on improving this situation, with a focus on methods scaling as order N. For large periodic systems, fast multipole methods are beginning to see some use. Of course, the ability to treat larger systems magnifies other problems, such as dealing with multiple minima.
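The practical meaning of these exponents is easy to quantify; the toy calculation below (illustrative numbers only) shows how the work grows when the system size merely doubles.

```python
# Cost growth with system size N for the polynomial scalings quoted above:
# doubling N multiplies the work by 2**p for an N^p method.
def cost_ratio(n_old, n_new, p):
    """Relative cost of going from n_old to n_new atoms under N^p scaling."""
    return (n_new / n_old) ** p

linear = cost_ratio(100, 200, 1)   # order-N methods: twice the work
cubic = cost_ratio(100, 200, 3)    # N^3 methods: 8 times the work
worst = cost_ratio(100, 200, 7)    # N^7 methods: 128 times the work
```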
Hence, attempts to extend methods to treat larger systems should go hand in hand with methods for handling the global optimization problem. A different methodological problem relates to the accuracy of methods. In addition to increasing the size of the systems we can treat, it is important to be able to improve the accuracy of existing methods. If theory is to substitute for experiment in the estimation of molecular properties, significant improvement in accuracy (a factor of at least three) will be required. At the interface of theoretical methodology and computer science is the development of algorithms and the implementation of different methods.
While there is a long and extremely successful history of computational chemistry on scalar and vector computers, the use of parallel machines is still in its infancy. Some of the challenges of parallel computational chemistry can be met by suitable reprogramming of parts of existing codes, but there is little doubt that much of the underlying theoretical formulation has been devised exclusively with serial computation in mind. The most successful parallel computational chemistry methods will require at the least new algorithms and possibly different formulations.
Any scientific discipline can generate a list of applications that represent important challenges for computational scientists. In the context of materials science, several areas seem ripe for investigation using the techniques of computational chemistry to obtain a microscopic understanding. Perhaps the most important is chemistry at solid surfaces, under which heading the panel includes oxidation and corrosion as well as catalysis.
If catalysts or corrosion protection agents are to be designed from first principles, it will be necessary to have a detailed microscopic understanding of the reaction mechanism being catalyzed or of the stepwise mechanism of corrosion. Since there are no empirical potentials that provide an accurate model of chemical reactions, it is difficult to see how such a microscopic understanding can be obtained without reliable first-principles calculations.
Another area with significant demands on reliable calculations is the determination of cluster structures. The reliability of such results depends entirely on the ability of LDA to describe the breaking and forming of chemical bonds. The correlation effects that arise when bonds are formed or broken are multiconfigurational in nature, and it seems unlikely that LDA can describe these quantitatively. A different coupling of chemical reactions to bulk material is the propagation through materials of shock waves generated by a reaction. Of particular importance are shock waves generated by rapid energetic processes, such as detonation of an explosive, as described in the next subsection.
For a qualitative understanding it may be possible to approximate the chemical kinetics by simple models, but for investigation of real materials a more reliable description of the elementary kinetics will be required. Typically, in any complex reaction scheme only a few if any of the rate constants can be measured or deduced experimentally, so it is possible for theoretically computed numbers to have considerable impact on combustion or explosion modeling.
Molecular properties other than structure and energetics can be calculated theoretically. For example, the nonlinear optical properties of molecules are determined by electric hyperpolarizabilities, related to the response of a molecule to a static or time-varying applied electric field. The calculation of such quantities can thus play a role in the design or improvement of nonlinear optical materials.
Hyperpolarizabilities are strongly influenced by electron correlation and require very elaborate electronic structure calculations. Hartree-Fock values are quite inadequate, and density functional methods do not perform as well for these quantities as they do for other properties. The implications of increases in computer power discussed in this section assume an increase of two to three orders of magnitude in real performance, resulting largely from scalable parallel architectures.
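One standard numerical route to such response properties is the finite-field method: expand the energy in powers of the applied field, E(F) = E0 - mu*F - (1/2)*alpha*F^2 - (1/6)*beta*F^3 - ..., and extract the (hyper)polarizabilities by numerical differentiation. The sketch below applies this to a model energy expression standing in for a real electronic structure calculation; the coefficients are invented for illustration.

```python
# Finite-field sketch: recover the first hyperpolarizability beta from
# numerical third derivatives of a model energy E(F). The values of
# MU, ALPHA, BETA are illustrative assumptions, not data for any molecule.
MU, ALPHA, BETA = 0.5, 10.0, 120.0

def energy(F):
    """Model field-dependent energy, standing in for a real calculation."""
    return -MU * F - 0.5 * ALPHA * F ** 2 - (1.0 / 6.0) * BETA * F ** 3

def third_derivative(f, x, h):
    """Central-difference estimate of f'''(x) (exact for cubic polynomials)."""
    return (f(x + 2 * h) - 2 * f(x + h) + 2 * f(x - h) - f(x - 2 * h)) / (2 * h ** 3)

# beta = -d^3 E / dF^3 evaluated at zero field
beta_est = -third_derivative(energy, 0.0, 0.01)
```

In practice each evaluation of `energy` would be a full correlated electronic structure calculation at a finite applied field, which is why correlated hyperpolarizabilities are so expensive.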
The pressing need for any research group wishing to take advantage of increased computer power is to adapt its codes to these parallel architectures in ways that can properly exploit scalability. Parallelization of molecular electronic structure codes, whether traditional or based on density functional methods, is often difficult because the programs are very large and were commonly developed by somebody other than the current user(s). Nevertheless, considerable efforts have been made to parallelize most approaches to molecular electronic structure calculations, and there is considerable expertise available in this area.
Scalability has so far proven only fair; most of the obvious parallelization strategies scale well only for a limited number of processors. In addition, the memory demands of some parallel quantum chemical algorithms can become excessive. The parallelization of reaction dynamics was explored in some depth in the earliest days of parallel computing. Classical trajectory methods can be treated rather easily, since individual trajectories are independent. Quantum scattering methods are considerably more complicated but again can be parallelized very effectively.
The scalability of both approaches to reaction dynamics is good. Parallel approaches to MD also have received considerable attention.
Parallelization is rather straightforward. Nevertheless, the scalability of traditional MD based on empirical potentials is only fair; performance begins to fall off noticeably at large processor counts. QMD using LDA and plane-wave basis sets has been parallelized by several groups, performing well and showing good scalability. Assuming that an increase of two to three orders of magnitude in computer power can be realized for these various methods, we can estimate the consequences for the chemistry that can be attacked. We can also identify other problems that will only be exacerbated by the increase in power.
From the perspective of electronic structure, the scaling of methods is at best N^2 to N^3, so our hypothesized increase in computer power would allow systems with perhaps 10 times as many atoms to be treated. While this scaling applies to QMD as well, it is likely that QMD calculations on 1,000-atom systems would run into severe problems with multiple minima. The other consequence of increased computer power is higher accuracy for existing systems. It is not clear how increased computer power will influence the accuracy of density functional calculations, since this will also depend on the development of better functionals.
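Under polynomial scaling, the reachable system size grows only as a fractional power of the speedup; a quick sketch with assumed numbers:

```python
# If cost scales as N^p, a speedup S stretches the reachable system
# size by a factor of S**(1/p). The speedup of 1000x is an assumption
# matching the two-to-three-orders-of-magnitude estimate in the text.
def size_gain(speedup, p):
    """Multiplier on the number of atoms treatable under N^p scaling."""
    return speedup ** (1.0 / p)

gain_n3 = size_gain(1000.0, 3)   # about 10x more atoms at N^3 scaling
gain_n2 = size_gain(1000.0, 2)   # about 32x more atoms at N^2 scaling
```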
The impact of such an increase in computer power on reaction dynamics may be more limited. Quantum scattering methods are limited to only a few degrees of freedom, so the increased power may be best used to examine better potential functions or for the inclusion of smaller terms, such as nonadiabatic Born-Oppenheimer breakdown effects or molecular fine structure.
Classical trajectory methods may benefit from more extensive sampling. While classical MD will also benefit from an increase in computer power, the ability to handle much larger systems will be counterbalanced to some extent by problems of global optimization. The second major challenge in MD, the time-scale problem, will not be significantly affected by an increase of two to three orders of magnitude in computer power, since a brute-force approach will still fall short by three to four orders of magnitude.
This problem will require alternative physics approaches to make it tractable. The field of chemical detonations is at a crossroads. There are two vastly different pictures emerging of the way in which chemistry proceeds behind the initial shock wave. Both are consistent with the classical theory of detonations by Zel'dovich, von Neumann, and Doering (ZND): following shock compression the explosive molecules react and the product molecules expand. This is represented by a pressure profile whose principal features are an almost instantaneous shock rise; a von Neumann spike, where reactants are heated and compressed; a reaction zone, where reactions occur accompanied by decreasing density and pressure; and a Taylor wave of the expanding product gases.
The experimental picture is still shrouded in some mystery, since these rapid events at the shock front are very difficult to resolve on the subnanosecond time scale, though picosecond spectroscopy shows promise of shedding some light on these features in the next few years. It is currently the theory that is most unsettled. There are two prevailing pictures of the reaction process. The first and most commonly held view is that the directed kinetic energy of the shock rise in density cannot be used immediately to cause chemical reactions, but rather must be fed up a chain of gradually increasing frequencies.
It is well known that phonon modes whose frequencies differ significantly take a long time to come to equilibrium through anharmonic coupling. By analogy, proponents of this first picture argue that energy moves up a ladder of frequencies determined by translations, rotations, molecular torsions, and bond-bending modes, followed by bond vibrations—first weak, then intermediate, and finally the highest-frequency ones. The second picture, associated with Dremin, holds instead that shock compression transfers energy directly to the strong bonds, out of equilibrium, producing quick initial reactions followed by gradual decomposition into products. The difference could not be more dramatic: gradual equilibration leading to chemical reaction versus nonequilibrium energy transfer and nearly immediate reaction.
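The frequency-mismatch argument behind the ladder picture can be illustrated with a toy model of two weakly coupled harmonic modes: resonant modes exchange energy completely, while strongly detuned modes exchange almost none. This is a schematic analogue only, with invented parameters, not a model of any explosive.

```python
def simulate(w1, w2, c, t_max=150.0, dt=0.02):
    """Two harmonic modes with weak bilinear coupling c; all the energy
    starts in mode 1. Returns the largest fraction of the initial energy
    ever found in mode 2 (a proxy for how readily energy transfers)."""
    x1, v1, x2, v2 = 1.0, 0.0, 0.0, 0.0
    e0 = 0.5 * w1 * w1  # initial energy, entirely in mode 1

    def acc(x1, x2):
        return (-w1 * w1 * x1 + c * x2, -w2 * w2 * x2 + c * x1)

    a1, a2 = acc(x1, x2)
    max_frac = 0.0
    for _ in range(int(t_max / dt)):
        # velocity-Verlet step, adequate for this linear system
        x1 += v1 * dt + 0.5 * a1 * dt * dt
        x2 += v2 * dt + 0.5 * a2 * dt * dt
        na1, na2 = acc(x1, x2)
        v1 += 0.5 * (a1 + na1) * dt
        v2 += 0.5 * (a2 + na2) * dt
        a1, a2 = na1, na2
        e2 = 0.5 * v2 * v2 + 0.5 * w2 * w2 * x2 * x2
        max_frac = max(max_frac, e2 / e0)
    return max_frac

resonant = simulate(1.0, 1.0, 0.05)   # matched frequencies: nearly full transfer
detuned = simulate(1.0, 3.0, 0.05)    # mismatched: only a tiny fraction moves
```

The contrast between the two cases is the mechanism the ladder picture invokes: energy crosses a large frequency gap only slowly, so it must climb through intermediate modes.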
Dremin's view is supported by evidence that there is a marked difference in the chemistry of molecules at an equilibrium state achieved by static rather than dynamic means; that is, under static compression and heating in a diamond cell apparatus, benzene for example can be chemically unaffected, while under shock compression to the same final state, decomposition can occur, with the degree of polymerization dependent on the duration of the shock pulse. Thus, shock chemistry is a direct consequence of the nonequilibrium nature of shock compression and is therefore distinct from equilibrium chemistry.
Standing between these two theoretical views of detonation, and at the same time providing an atomistic window on both, are molecular dynamics simulations. These simulations use reactive empirical bond order (REBO) potentials, from which chemically plausible potential surfaces are obtained by modifying the short-range attractive part of a given atom's interaction with another, depending on its local environment.
If it has no neighbor, a bond can be formed; if it is already bonded, the third party is repelled. When a crystal of these AB molecules is shocked, all three features of the classical ZND theory of detonation are observed—a sharp shock rise leading to a von Neumann spike, followed by a Taylor wave expansion. In contrast to laboratory experiments, molecular dynamics simulations are carried out at the right time and distance scales to resolve these rapid events.
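The flavor of such a bond-order potential can be conveyed by a toy pair interaction in which the attractive (bonding) part of a Morse-like well is screened by the number of bonds a partner already has. The functional form and parameters below are invented for illustration and are far simpler than a real REBO potential.

```python
import math

# Toy bond-order pair potential in the spirit described above: the attraction
# is screened by the partner's existing coordination. D, A, R0 are illustrative.
D, A, R0 = 1.0, 2.0, 1.0   # well depth, stiffness, equilibrium separation

def bond_order(n_other_bonds):
    """Screening factor: 1 for an unbonded partner, smaller as coordination grows."""
    return 1.0 / (1.0 + n_other_bonds)

def pair_energy(r, n_other_bonds):
    """Morse-like pair energy with the attractive term scaled by the bond order."""
    repulsive = D * math.exp(-2.0 * A * (r - R0))
    attractive = 2.0 * D * math.exp(-A * (r - R0))
    return repulsive - bond_order(n_other_bonds) * attractive

free = pair_energy(R0, 0)    # unbonded partner: the full well, energy -D
busy = pair_energy(R0, 1)    # partner already bonded once: much weaker binding
third = pair_energy(R0, 2)   # twice-bonded partner: net repulsive at R0
```

The last line mirrors the rule in the text: an atom that is already bonded repels a third party, while an unbonded atom can form a full-strength bond.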
The results shed new light on the detonation process, supporting a significant feature of Dremin's picture. In particular, the diatomic bond almost immediately loses its integrity upon shock compression in the overdriven, full-detonation case, and the bond vibrational energy is not in thermal equilibrium with the translational and rotational degrees of freedom before chemical reaction occurs.
Exothermic chemistry can happen very quickly, without recourse to doorway modes or energy ladders of the more conventional picture. Of course, the model is diatomic, so there are no intermediate frequencies to warm up. However, a new triatomic model resembling ozone has been tested with similar results. It is very likely that the inclusion of even more atoms in the explosive molecule will widen the reaction zone, in agreement with some experimental conclusions. One of the hallmarks of classical hydrodynamic detonation theory, confirmed experimentally, is the existence of a failure diameter below which a detonation will not propagate.
The failure diameter is closely related to the length of the reaction zone in the material. The length of the reaction zone, and hence the failure diameter, can vary tremendously depending on the particular explosive. For example, the reaction zone length in RDX (cyclotrimethylenetrinitramine) is several millimeters and in nitromethane is tens of micrometers, while in PETN (pentaerythritol tetranitrate) it is so small that it has yet to be resolved experimentally using state-of-the-art nanosecond probes. Recent results demonstrate a failure diameter in the model AB solid of about 10 nanometers and indicate that the detonation velocity in the model varies with the radius of the explosive in a manner consistent with the classic theory of reactive flows.
This result again confirms the ability of MD to study detonations at atomic resolution while treating enough atoms for long enough periods of time to link the results of the simulations to continuum theory. These results suggest that materials such as condensed phase ozone and nitric oxide have failure diameters as small as several nanometers.
The role of inhomogeneities in explosives can also be studied by molecular hydrodynamics. Studies have been done on the effect of so-called hot spots on the detonation process. Results indicate that the passage of a shock wave over a large void tends to heat the system locally, by spallation on one side and impact on the other. The role of crystal structure (or its absence, in the case of fluids) is also important in the question of explosive sensitivity, where there is experimental evidence of shear motion exciting chemical reaction in certain favorable packing directions.
Closely related to this phenomenon is tribochemistry, where rubbing two surfaces together can cause chemical reactions to occur. Recently, tribochemical reactions have been observed in molecular dynamics simulations of friction between two diamond surfaces. While these are not energetic materials, it is clear that this kind of simulation can shed light on the issue of structural causes of explosive sensitivity. Challenges in the future involve both molecular complexity and hydrodynamic effects, which will impose larger scales in time and distance upon detonation simulations.
For example, in order to see if the most energetic and stiffest bond breaks first upon shock compression, as it seems to for the simplest AB and O3 molecular models, an REBO potential should be developed for a molecule with more vibrational degrees of freedom. Studies of shock waves in systems of large but unreactive molecules have so far been manageable; it is reasonable to imagine that inclusion of chemistry will change all that. If the reaction zone and therefore the failure diameter increase as expected, it may become necessary to use scalable parallel computing resources.
As more complex molecules are used in these simulations, there will be even greater challenges and opportunities in the area of designing realistic potentials, as well as the need for increased computer power. Another ambitious undertaking would be to see if the cellular nonplanar structures observed experimentally will also appear in detonation simulations. These can arise from defects in solids and density fluctuations in fluids and from wave interactions due to edge effects.
In real systems the distance scales, as in the case of reaction zones, cover many orders of magnitude.
There are two natural spinoffs from this work in detonations that have strong implications for the Navy in the safety of explosives: (1) tribochemistry, that is, reactions at surfaces initiated by friction (work to date has been only exploratory in nature), and (2) fracture chemistry induced at crack tips. Both fields of study will push computer resources to the limit, especially for realistic molecular models.
In summary, our understanding of energetic materials is on the threshold of a revolution, in no small way stimulated by computer chemistry experiments at the molecular scale. As these simulations are carried out on even larger and faster computers, more and more realism can be incorporated into them, both in the interaction potentials for modeling more sophisticated molecular species and in the ability to treat more complex flows such as detonation failure and cellular structures by expanding the accessible time and distance scales of the simulations.
Stainless steel is not used to bring city utilities, water, or natural gas into houses. Economics requires that the cheapest material that will perform adequately and with a reasonable amortizable lifetime be used. Structural materials must be sufficiently strong and stable, which requires understanding metallurgical problems such as fracture, fatigue, creep, oxidation, corrosion, and embrittlement, to name a few. All these phenomena are exacerbated by enhanced mass transport at elevated temperatures, leading to phase changes, particle diffusion, ablation, and even chemical reaction.
The motivation for the use of these materials at high temperatures arises from the greater energy efficiency associated with the higher temperature in the thermodynamic Carnot cycle. In a more mundane sense, material applications in turbines, aircraft jet engines, and nuclear reactors all require high-temperature materials. Even boilers, pressure vessels, and pipes may be included at their temperature extremes.
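The Carnot argument is a one-line calculation: with efficiency eta = 1 - T_cold/T_hot, raising the hot-side temperature directly raises the ceiling on achievable efficiency. The temperatures below are illustrative round numbers, not data for any particular engine.

```python
# Carnot efficiency: the thermodynamic payoff from higher operating temperatures.
# Temperatures are in kelvin and are illustrative assumptions.
def carnot_efficiency(t_hot, t_cold):
    """Upper bound on heat-engine efficiency between two reservoirs."""
    return 1.0 - t_cold / t_hot

eta_low = carnot_efficiency(1200.0, 300.0)    # 0.75 at a 1200 K hot side
eta_high = carnot_efficiency(1600.0, 300.0)   # 0.8125 at 1600 K
```

Real turbines fall well short of the Carnot bound, but the trend is the same: the materials that survive a hotter inlet buy efficiency, which is the motivation for high-temperature alloys.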
Clearly, all these systems are of primary importance to the Navy. Metallurgists address these problems by developing alloys having desirable properties that avoid or subvert failure. Alloys are composed of combinations of several metallic and nonmetallic elements, each adding some desirable aspect or preventing some deleterious phenomenon. Mechanical failure, or fracture, is controlled by the motion, or rather the lack of motion, of dislocations. The ability of microscopic crystalline grains to slide over one another leads to ductility (resistance to crack propagation), which is governed by dislocation movement; strength is added by preventing or pinning dislocation movement through the incorporation of foreign atoms or particles into the alloy.
Superalloys often include over a dozen constituents to achieve this goal. Other systems include the myriad materials called stainless steels and refractory alloys based on the 4d and 5d group V and group VI transition elements. The second important issue for high-temperature materials is stability. In the operating environments of these materials, such as gas turbines, jet engines, or nuclear reactors, there are often trace elements such as sulfur, oxygen, sodium, carbon, and hydrogen.
At elevated operating temperatures, chemical attack can readily occur, leading to oxidation, hot corrosion, and embrittlement.