Modelling the Localized to Itinerant Electronic Transition in the Heavy Fermion System CeIr
Geometric Modeling

Geometric modeling is an essential aspect of computer graphics and design, allowing for the creation of realistic and detailed virtual objects and environments. It involves representing physical objects and spaces using mathematical equations and algorithms to simulate their appearance and behavior in a digital environment. Geometric modeling plays a crucial role in various industries, including animation, architecture, engineering, and manufacturing, where precise and accurate representations of objects are required for design, analysis, and visualization purposes.

One of the primary purposes of geometric modeling is to create 3D models of objects and scenes that can be manipulated and viewed from different angles and perspectives. This allows designers and artists to visualize their ideas more effectively and make changes to the design quickly and easily. By using geometric modeling techniques, designers can create complex shapes, textures, and lighting effects that mimic real-world objects and environments, enhancing the realism and visual appeal of their creations.

In addition to its applications in design and visualization, geometric modeling is also used in computer-aided design (CAD) and computer-aided manufacturing (CAM) systems to create precise and accurate representations of physical objects for engineering and manufacturing purposes. Engineers and manufacturers use geometric modeling software to design and analyze products, simulate their performance under various conditions, and generate instructions for machining and production processes. By using geometric modeling tools, they can ensure that their designs meet the required specifications and standards and optimize the manufacturing process for efficiency and cost-effectiveness.

Geometric modeling techniques can be classified into two main categories: constructive solid geometry (CSG) and boundary representation (B-rep). CSG involves creating complex shapes by combining simple geometric primitives, such as cubes, cylinders, and spheres, using Boolean operations such as union, intersection, and difference. B-rep, on the other hand, represents objects as a collection of surfaces or boundaries defined by their vertices, edges, and faces. Both techniques have their advantages and limitations, depending on the specific requirements of the application.

The advancement of geometric modeling technology has led to the development of more sophisticated and powerful software tools that enable designers and engineers to create highly detailed and realistic 3D models with greater ease and efficiency. These tools offer a wide range of features and functionalities, such as parametric modeling, surface modeling, solid modeling, and mesh modeling, to support different design tasks and workflows. They also support various file formats for interoperability and collaboration between different software applications and systems.

Overall, geometric modeling plays a crucial role in modern computer graphics and design, enabling designers, artists, engineers, and manufacturers to create realistic and accurate representations of physical objects and spaces for various purposes. By leveraging the power of geometric modeling techniques and software tools, professionals can bring their creative ideas to life, optimize their designs for performance and manufacturability, and ultimately improve the quality and efficiency of their work.
As technology continues to evolve, geometric modeling will continue to advance, opening up new possibilities and opportunities for innovation and creativity in the digital world.
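To make the CSG idea above concrete, the short sketch below represents primitives as signed distance functions and implements union, intersection, and difference as pointwise min/max operations on them. This is an illustrative example of the concept, not the API of any particular CAD package; all function names, the primitives, and the grid resolution are our own choices.

import numpy as np

# Signed distance functions (SDFs): negative inside the shape, positive outside.
def sphere_sdf(p, center, radius):
    return np.linalg.norm(p - center, axis=-1) - radius

def box_sdf(p, center, half_size):
    q = np.abs(p - center) - half_size
    return np.linalg.norm(np.maximum(q, 0.0), axis=-1) + np.minimum(q.max(axis=-1), 0.0)

# Boolean operations on SDFs: the CSG union, intersection, and difference.
def union(a, b):        return np.minimum(a, b)
def intersection(a, b): return np.maximum(a, b)
def difference(a, b):   return np.maximum(a, -b)

# Evaluate "cube minus sphere" on a regular grid and estimate the resulting volume.
n = 100
axis = np.linspace(-1.5, 1.5, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)

cube = box_sdf(grid, center=np.zeros(3), half_size=np.ones(3))
sphere = sphere_sdf(grid, center=np.zeros(3), radius=1.2)
model = difference(cube, sphere)

cell_volume = (axis[1] - axis[0]) ** 3
print("approximate volume:", np.count_nonzero(model < 0.0) * cell_volume)

A B-rep system would instead store the explicit vertices, edges, and faces of the resulting solid; the signed-distance route trades that exactness for very simple Boolean logic.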
Soft X-ray emission spectroscopy study of Mn-doped ZnO thin films

JIN Jing, ZHANG Xin-Yi, ZHOU Ying-Xue

Abstract: The local electronic structures of Zn0.97Mn0.03O and Zn0.67Mn0.33O thin films grown at 200℃ by molecular beam epitaxy (MBE) were investigated with the soft X-ray fluorescence spectrometer of beamline 8.0.1 of the Advanced Light Source (ALS), with particular attention to the relationship between the electronic structure of Mn and the magnetic properties of the samples. Analysis of the integrated intensity ratio of the Mn L2 to Mn L3 emission lines, I(L2)/I(L3), obtained from resonant and nonresonant Mn L2,3 X-ray emission spectra (XES) indicates that the ferromagnetism (FM) is related to the free d charge carriers in the film. In the ferromagnetic Zn0.97Mn0.03O sample, the majority of Mn atoms are incorporated at Zn substitutional sites and the film shows strong Coster-Kronig (C-K) transitions owing to the large number of free charge carriers available around the Mn atoms. Both the non-localized d charge carriers acting as itinerant electrons and the 4s electrons from interstitial Mn obtained from Ruderman-Kittel-Kasuya-Yosida (RKKY) calculations can mediate the ferromagnetic exchange interaction. The disappearance of FM in the Zn0.67Mn0.33O sample can be explained by the presence of MnO clusters, which reduce the number of free charge carriers.

【Journal】Journal of Inorganic Materials (无机材料学报)
【Year (Volume), Issue】2012, 27(3)
【Pages】296-300 (5 pages)
【Keywords】ZnO; X-ray emission spectroscopy; Coster-Kronig transition; free d charge carriers
【Authors】JIN Jing; ZHANG Xin-Yi; ZHOU Ying-Xue
【Affiliations】School of Materials Science and Engineering, Shanghai University, Shanghai 200072; State Key Laboratory of Applied Surface Physics, Fudan University, Shanghai 200433
【Language of original】Chinese
【CLC number】O484

Diluted magnetic semiconductors (DMSs) exploit both the charge and the spin of electrons and are at present the most representative materials in the field of spintronics. These new materials can substantially improve the manipulation of electron spin in semiconductor devices [1-3]. Theoretical studies have predicted that the Curie temperature (TC) of Mn-doped p-type ZnO DMSs can reach room temperature, and a rich variety of magnetic behavior has indeed been observed in (Zn,Mn)O experimentally [4-6]. This diversity indicates that the magnetism of the samples is very sensitive to the preparation method and growth conditions [7-8]. However, whether the ferromagnetism is carrier-induced or originates from Mn-related magnetic secondary phases remains highly controversial. In addition, differences in carrier type and concentration, as well as the presence of defects, make the origin of the magnetism even more complicated.

Because the magnetism of Mn-doped ZnO is governed by the electronic exchange interactions between the Mn atoms, their neighboring atoms, and the surrounding carriers, the local environment and electronic structure of the Mn atoms provide important information for understanding the magnetism of (Zn,Mn)O. In our group's recent work [9], soft X-ray spectroscopy combined with first-principles calculations showed that the ferromagnetism of (Zn,Mn)O originates mainly from the RKKY exchange interaction between substitutional and interstitial Mn, and that the RKKY interaction depends on the site occupancy of the magnetic ions and the number of itinerant electrons. In the present work, Mn L-edge soft X-ray emission spectra (XES) under resonant and nonresonant excitation, together with absorption spectra (XAS), are used to study the relationship between the magnetism of the samples and the free d charge carriers and to discuss the origin of the magnetism.

1 Experimental

Zn0.97Mn0.03O and Zn0.67Mn0.33O thin-film samples were prepared by molecular beam epitaxy (MBE). The growth conditions and the magnetic and structural characterization are reported in ref. [10]. Earlier work showed that at low Mn concentration the Mn atoms occupy mainly substitutional sites, whereas at high Mn concentration (>20%) MnO cluster structures form in the film; as the Mn concentration increases, the magnetism of the samples changes from ferromagnetic (TC = 45 K) to antiferromagnetic. The soft X-ray spectroscopy experiments on the (Zn,Mn)O samples were performed with the soft X-ray fluorescence (SXF) spectrometer at beamline 8.0.1 (BL8.0.1) of the Advanced Light Source (ALS), Lawrence Berkeley National Laboratory (LBNL). The Mn L-edge XAS was recorded in total electron yield (TEY) mode by measuring the sample drain current; its purpose was to select suitable resonant excitation energies for the Mn L-edge XES. For the resonant and nonresonant Mn L-edge XES measurements, the entrance slit of the spectrometer was 50 μm, corresponding to a spectral resolution of 0.6-0.8 eV. The incident photons struck the sample surface at 60°, and the angle between the emitted and incident photons was kept at 90°. The spectra were normalized to the total number of photons falling on the sample, monitored with a highly transparent gold mesh. All measurements were carried out at room temperature.

2 Principles

2.1 Principles of soft X-ray spectroscopy

The excitation and de-excitation processes that occur when X-rays interact with matter are shown in Fig. 1. X-rays penetrating a material are absorbed inside it. When an X-ray excites a valence or core electron into a photoelectron, the process is called photoelectron emission (PES). When a core-level electron absorbs X-ray energy and is promoted to the conduction band, the resulting core-excited state is unstable and decays either radiatively (XES) or by Auger electron emission. In the X-ray emission process, if the core electron is resonantly excited by the incident X-rays to the vicinity of an absorption threshold, the emission spectrum produced in the subsequent de-excitation depends strongly on the incident photon energy; such spectra are called resonant X-ray emission spectra (RXES). If the incident photon energy is far above the absorption edge, the core electron is excited into the conduction-band continuum, and the resulting spectrum is a normal X-ray emission spectrum (NXES).

2.2 Coster-Kronig (C-K) transitions

Mn L2,3 X-ray emission arises from transitions from the occupied 3d4s valence-band states to the 2p3/2 and 2p1/2 core holes, respectively. For a free atom with fully occupied d orbitals, the ratio of the integrated intensities of the Mn L2 and Mn L3 emission peaks, I(L2)/I(L3), is determined only by the statistical population of the 2p1/2 and 2p3/2 levels; in the absence of non-radiative transitions it should equal 1/2. In solids, however, the electrostatic interaction between the 2p core hole and the unoccupied 3d electrons causes the ratio to deviate from 1/2 [11]. In both cases, I(L2)/I(L3) provides information on the distribution of d-symmetry electrons in the valence band.
I(L2)/I(L3) can be evaluated within a multiple-decay-channel picture. In this model, a core hole created by X-ray absorption can decay in several ways: radiatively, by non-radiative Auger processes, or by Coster-Kronig (C-K) processes. If the electron that fills the hole comes from the same principal shell as the ionized hole, the transition is called a C-K transition. For Mn-doped ZnO, the non-radiative L2L3M4,5 C-K transition can affect the I(L2)/I(L3) integrated intensity ratio in the XES. The probability of C-K processes is small in free atoms but enhanced in condensed systems, mainly because screening by interatomic electronic interactions in solids reduces the energy difference between the initial and final states of the non-radiative C-K process; this effect is even stronger in metals.

Fig. 1 X-ray absorption, photoelectron emission and fluorescence emission processes

3 Results and discussion

Figure 2 shows the Mn 2p XAS of Zn0.97Mn0.03O and Zn0.67Mn0.33O. The main peaks near 640 and 651 eV arise from the spin-orbit splitting of the Mn 2p level, and the multiplet structure of the spectra originates from the Coulomb and exchange interactions between the 2p5 core hole and the 3d6 electrons [12]. The spectral features show that the Mn ions in both samples are divalent. In Zn0.97Mn0.03O, Mn occupies substitutional sites and retains tetrahedral symmetry (Td), whereas the Mn 2p3/2 and 2p1/2 absorption peaks of Zn0.67Mn0.33O broaden and develop new features (indicated by arrows). This is mainly due to the formation of octahedrally coordinated (Oh) MnO clusters in the highly doped sample, where MnO has become the dominant form of Mn, consistent with the results of ref. [10].

To obtain spectroscopic information on the occupied Mn 3d states, the Mn L2,3 XES of both samples were measured at different excitation energies, as shown in Fig. 3. All spectral features depend strongly on the excitation energy and the Mn concentration. Feature 1 is the elastic peak. The double-peak structure of features 2 and 3 originates from intra-atomic dd excitations between the Mn 3d levels, typical of transitions to the 3d5 multiplet. The broad feature 4 corresponds to charge-transfer (CT) transitions between the Mn atoms and the non-local coordinating O in (Zn,Mn)O [13]. The CT peak of the low-Mn sample is sharper than that of the high-Mn sample, mainly because of the different interaction between Mn2+ in different configurations and the crystal field. At high excitation energies the L3 and L2 fluorescence features appear, and their emission energies do not shift with the incident energy.

Fig. 2 Mn 2p XAS of Zn0.97Mn0.03O and Zn0.67Mn0.33O

Fig. 3 Mn L2,3 XES of Zn0.97Mn0.03O (a) and Zn0.67Mn0.33O (b) at different excitation energies

The Mn L2 RXES and Mn L2,3 NXES of the two Mn-doped ZnO samples are shown in Fig. 4. Figure 4(a) shows that the Mn L2 emission peak of the highly doped sample lies at lower energy than that of the low-Mn sample. Similarly, under nonresonant excitation (Fig. 4(b)) the Mn L3 emission band splits into two sub-bands: the low-energy sub-band (637 eV) is enhanced in the high-Mn sample, while the high-energy sub-band (640 eV) is more pronounced in the low-Mn sample. The high-energy L3 sub-band corresponds to the strongest absorption peak in the Mn 2p XAS, so strong self-absorption greatly reduces its intensity in the high-Mn sample [14]. From the Mn 2p XAS and Mn L2,3 XES results it can be concluded that in Zn0.67Mn0.33O the low-energy sub-band is related to the Mn 3d-O 2p interaction, arising mainly from the MnO secondary phase formed in the sample, whereas in Zn0.97Mn0.03O the high-energy sub-band originates mainly from Mn 3d states and should be associated with the Mn configuration, i.e., substitutional or interstitial Mn.

Fig. 4 Mn L2 RXES (a) and Mn L2,3 NXES (b) of Zn0.97Mn0.03O and Zn0.67Mn0.33O

The effect of the Mn doping concentration on the interaction between Mn and its neighboring atoms in (Zn,Mn)O can also be illustrated by comparing the relative integrated intensity ratio of the Mn L2 and Mn L3 emission peaks, I(L2)/I(L3), for excitation energies above the L2 absorption threshold. We express I(L2)/I(L3) using the formula of ref. [15], in which f2,3 is the C-K transition probability and μ3/μ2 is the ratio of the photoabsorption coefficients at the L3 and L2 edges at the excitation energy. When the excitation energy is far above the L2 threshold, μ3/μ2 equals 2; I(L2)/I(L3) is then determined by f2,3 alone, and f2,3 increases with the number of free d charge carriers available in the element [16]. Under resonant excitation, μ3/μ2 varies with the excitation energy; when the excitation is at the L2 threshold, the strong polarization field causes it to decrease rapidly. I(L2)/I(L3) under resonant excitation is therefore usually larger than under nonresonant excitation.

The calculated I(L2)/I(L3) values are listed in Table 1. Under both excitation modes, I(L2)/I(L3) of Zn0.97Mn0.03O is smaller than that of Zn0.67Mn0.33O, indicating that the C-K transitions are enhanced in the low-Mn sample and suppressed in the high-Mn sample. Under nonresonant excitation I(L2)/I(L3) is determined mainly by f2,3. We therefore conclude that the low-Mn sample contains a large number of 3d conduction electrons, which contribute greatly to the number of free charge carriers and thus enhance the probability of C-K transitions. Under resonant excitation, the decrease of μ3/μ2 causes I(L2)/I(L3) of both samples to increase rapidly.

A study of the magnetism of Mn-doped ZnO films by Singhal et al. [5] showed that the ferromagnetism arises from a carrier-induced mechanism, but did not identify the contribution of the non-localized free charge carriers. We recently used theoretical calculations within the RKKY model to study how the exchange interaction of different Mn configurations depends on the Mn doping concentration [9]; those calculations indicate that in the low-Mn sample the exchange interaction between substitutional and interstitial Mn atoms induces the ferromagnetism. In the present work, further analysis of I(L2)/I(L3) in the Mn L2,3 XES shows that a large number of free d charge carriers exist in the low-Mn sample. In Zn0.97Mn0.03O, an appropriate Mn concentration makes the Mn-Mn distance satisfy the separation required for ferromagnetic exchange between neighboring Mn atoms, and both the non-localized free d carriers and the 4s electrons provided by interstitial Mn exist as itinerant electrons that can mediate the ferromagnetic exchange interaction. In the RKKY model, the exchange interaction between the localized magnetic electrons and the itinerant electrons spin-polarizes the itinerant electrons; this spin polarization decays in an oscillatory manner with distance from the localized electrons and produces an indirect superexchange interaction between two neighboring magnetic ions. The TC of the ferromagnetic sample is nevertheless low, only 45 K, mainly because the low itinerant-electron concentration produces only a weak RKKY exchange interaction.

Table 1  I(L2)/I(L3) intensity ratio of Zn0.97Mn0.03O and Zn0.67Mn0.33O for resonant and nonresonant excitation

Sample           I(L2)/I(L3) RXES    I(L2)/I(L3) NXES
Zn0.97Mn0.03O    1.324               0.515
Zn0.67Mn0.33O    2.055               0.706

For Zn0.67Mn0.33O, the ferromagnetism begins to change into antiferromagnetism, which is associated with the appearance of antiferromagnetically coupled MnO clusters. On the one hand, MnO is an insulator with a very low free-carrier concentration, which limits the interaction between Mn2+ and the carriers and ultimately hinders the ferromagnetic exchange between neighboring Mn2+ ions; on the other hand, MnO is itself a typical antiferromagnet. Controlling the Mn doping concentration therefore changes the number of free d electrons and thereby the magnetism of (Zn,Mn)O.

4 Conclusions

The electronic structure and magnetism of Zn0.97Mn0.03O and Zn0.67Mn0.33O were studied mainly by Mn L-edge soft X-ray absorption and emission spectroscopy. The spectra show that the Mn doping concentration plays a decisive role in the site occupancy of Mn in the ZnO lattice and in the magnetism. At low Mn concentration the Mn atoms occupy mainly substitutional sites and the sample is ferromagnetic; at high Mn concentration MnO clusters become the dominant phase and the sample shows strong antiferromagnetic behavior. Analysis of I(L2)/I(L3) under resonant and nonresonant excitation in the Mn L2,3 XES shows that a large number of free d charge carriers are present in the ferromagnetic sample.
These non-localized d carriers, together with the 4s electrons provided by interstitial Mn atoms as obtained from the RKKY calculations, can act as the itinerant electrons that mediate the ferromagnetic exchange. Within the RKKY model, the indirect exchange between neighboring localized Mn 3d electrons via these itinerant electrons is the main origin of the ferromagnetic coupling. Soft X-ray spectroscopy thus provides important information on the electronic structure of (Zn,Mn)O thin films and offers a useful basis for analyzing the magnetism of DMSs.

References:
[1] Das Sarma S. A new class of device based on electron spin, rather than on charge, may yield the next generation of microelectronics. Am. Sci., 2001, 89(6): 516-523.
[2] Bratkovsky A M. Spintronic effects in metallic, semiconductor, metal-oxide and metal-semiconductor heterostructures. Rep. Prog. Phys., 2008, 71(2): 026502.
[3] Manyala N, DiTusa J F, Aeppli G, et al. Doping a semiconductor to create an unconventional metal. Nature, 2008, 454: 976-980.
[4] Thakur P, Gautam S, Chae K H, et al. X-ray absorption and emission studies of Mn-doped ZnO thin films. Journal of the Korean Physical Society, 2009, 55(1): 177-182.
[5] Singhal R K, Dhawan M S, Gaur S K, et al. Room temperature ferromagnetism in Mn-doped dilute ZnO semiconductor: an electronic structure study using X-ray photoemission. J. Alloys Compd., 2009, 477(1/2): 379-385.
[6] Kolesnik S, Dabrowski B. Absence of room temperature ferromagnetism in bulk Mn-doped ZnO. J. Appl. Phys., 2004, 96(9): 5379-5381.
[7] Zhang J, Skomski R, Sellmyer D J. Sample preparation and annealing effects on the ferromagnetism in Mn-doped ZnO. J. Appl. Phys., 2005, 97(10): 10D303.
[8] Wu Y, Rao K V, Voit W, et al. Room temperature ferromagnetism and fast ultraviolet photoresponse of inkjet-printed Mn-doped ZnO thin films. IEEE Trans. Magn., 2010, 46(6): 2152-2155.
[9] Jin J, Chang G S, Boukhvalov D W, et al. Element-specific electronic structure of Mn dopants and ferromagnetism of (Zn,Mn)O thin films. Thin Solid Films, 2010, 518(10): 2825-2829.
[10] Xu W, Zhou Y X, Zhang X Y, et al. Local structures of Mn in dilute magnetic semiconductor ZnMnO. Solid State Commun., 2007, 141(7): 374-377.
[11] Chang G S, Kurmaev E Z, Boukhvalov D W, et al. Clustering of impurity atoms in Co-doped anatase TiO2 thin films probed with soft X-ray fluorescence. J. Phys.: Condens. Matter, 2006, 18(17): 4243-4251.
[12] Fromme B, Brunokowski U, Kisker E, et al. d-d excitations and interband transitions in MnO: a spin-polarized electron energy-loss study. Phys. Rev. B, 1998, 58(15): 9783-9792.
[13] Butorin S M, Guo J H, Magnuson M, et al. Low-energy d-d excitations in MnO studied by resonant X-ray fluorescence spectroscopy. Phys. Rev. B, 1996, 54(7): 4405-4408.
[14] Bartkowski S, Neumann M, Kurmaev E Z, et al. Electronic structure of titanium monoxide. Phys. Rev. B, 1997, 56(16): 10656-10667.
[15] Kurmaev E Z, Ankudinov A L, Rehr J J, et al. The L2:L3 intensity ratio in soft X-ray emission spectra of 3d-metals. J. Electr. Spectr. Relat. Phenom., 2005, 148(1): 1-4.
[16] Grebennikov V I. Surface Investigations: X-ray, Synchrotron and Neutron Techniques 11. 2002: 41.
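To illustrate how an integrated intensity ratio such as I(L2)/I(L3) can be extracted from a measured emission spectrum, the sketch below subtracts a simple linear background and integrates the counts over two energy windows. The synthetic spectrum, the background model, and the window boundaries are illustrative assumptions of ours and are not taken from the work above.

import numpy as np

def integrated_ratio(energy, counts, l3_window, l2_window):
    """Integrate counts in two energy windows after removing a linear background."""
    # Linear background estimated from the spectrum end points (a simplifying assumption).
    slope = (counts[-1] - counts[0]) / (energy[-1] - energy[0])
    background = counts[0] + slope * (energy - energy[0])
    signal = counts - background

    def window_area(window):
        lo, hi = window
        mask = (energy >= lo) & (energy <= hi)
        return np.trapz(signal[mask], energy[mask])

    return window_area(l2_window) / window_area(l3_window)

# Synthetic example: two Gaussian emission peaks standing in for L3 and L2.
energy = np.linspace(630.0, 660.0, 1200)
counts = (1.0 * np.exp(-0.5 * ((energy - 640.0) / 1.5) ** 2)    # "L3" peak
          + 0.5 * np.exp(-0.5 * ((energy - 651.0) / 1.5) ** 2)  # "L2" peak
          + 0.05)                                               # flat background
print(integrated_ratio(energy, counts, l3_window=(635.0, 645.0), l2_window=(647.0, 656.0)))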
Chapter 6 Magnetism of Matter

The history of magnetism dates back to earlier than 600 B.C., but it is only in the twentieth century that scientists began to understand it and to develop technologies based on this understanding. Magnetism was most probably first observed in a form of the mineral magnetite called lodestone, which consists of iron oxide, a chemical compound of iron and oxygen. The ancient Greeks were the first known to have used this mineral, which they called a magnet because of its ability to attract other pieces of the same material and iron.

The Englishman William Gilbert (1540-1603) was the first to investigate the phenomenon of magnetism systematically using scientific methods. He also discovered that Earth is itself a weak magnet. Early theoretical investigations into the nature of Earth's magnetism were carried out by the German Carl Friedrich Gauss (1777-1855). Quantitative studies of magnetic phenomena were initiated in the eighteenth century by the Frenchman Charles Coulomb (1736-1806), who established the inverse square law of force, which states that the attractive force between two magnetized objects is directly proportional to the product of their individual fields and inversely proportional to the square of the distance between them.

The Danish physicist Hans Christian Oersted (1777-1851) first suggested a link between electricity and magnetism. Experiments involving the effects of magnetic and electric fields on one another were then conducted by the Frenchman Andre Marie Ampere (1775-1836) and the Englishman Michael Faraday (1791-1869), but it was the Scotsman James Clerk Maxwell (1831-1879) who provided the theoretical foundation for the physics of electromagnetism in the nineteenth century by showing that electricity and magnetism represent different aspects of the same fundamental force field. Then, in the late 1960s, the American Steven Weinberg (1933-) and the Pakistani Abdus Salam (1926-96) performed yet another act of theoretical synthesis of the fundamental forces by showing that electromagnetism is one part of the electroweak force.

The modern understanding of magnetic phenomena in condensed matter originates from the work of two Frenchmen: Pierre Curie (1859-1906), the husband and scientific collaborator of Madame Marie Curie (1867-1934), and Pierre Weiss (1865-1940). Curie examined the effect of temperature on magnetic materials and observed that magnetism disappeared suddenly above a certain critical temperature in materials like iron. Weiss proposed a theory of magnetism based on an internal molecular field, proportional to the average magnetization, that spontaneously aligns the electronic micromagnets in magnetic matter. The present-day understanding of magnetism, based on the theory of the motion and interactions of electrons in atoms (called quantum electrodynamics), stems from the work and theoretical models of two Germans, Ernst Ising and Werner Heisenberg (1901-1976). Werner Heisenberg was also one of the founding fathers of modern quantum mechanics.

Magnetic Compass

The magnetic compass is an old Chinese invention, probably first made in China during the Qin dynasty (221-206 B.C.). Chinese fortune tellers used lodestones to construct their fortune telling boards.

Magnetized Needles

Magnetized needles used as direction pointers instead of the spoon-shaped lodestones appeared in the 8th century AD, again in China, and between 850 and 1050 they seem to have become common as navigational devices on ships.
Compass as a Navigational Aid

The first person recorded to have used the compass as a navigational aid was Zheng He (1371-1435), from the Yunnan province in China, who made seven ocean voyages between 1405 and 1433.

The basic concepts and laws of the magnetism of solids began to be established during the development of electromagnetism in the last century.
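To put Weiss's molecular-field idea in quantitative form, the short derivation below is a standard mean-field argument rather than material from the text above: an internal field proportional to the magnetization turns the Curie law into the Curie-Weiss law and produces a finite ordering temperature of the kind Curie observed.

\begin{aligned}
B_{\mathrm{eff}} &= B + \lambda M && \text{(Weiss molecular field, constant } \lambda > 0\text{)}\\
M &= \frac{C}{T}\,B_{\mathrm{eff}} = \frac{C}{T}\,(B + \lambda M) && \text{(Curie-law response to the effective field)}\\
\chi &= \frac{M}{B} = \frac{C}{T - C\lambda} = \frac{C}{T - T_{C}}, \qquad T_{C} = C\lambda .
\end{aligned}

Above T_C the material behaves as a paramagnet with an enhanced susceptibility; at T = T_C the susceptibility diverges and a spontaneous magnetization can appear even without an applied field, which is the abrupt appearance or loss of magnetization at a critical temperature described above.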
The Section Assignments of a Model

The section assignments of a model refer to the specific tasks or responsibilities assigned to different parts or components of the model. These assignments help to ensure the organized and efficient functioning of the model. This article provides a step-by-step explanation of each section assignment, elaborating on its importance and how it contributes to the overall operation.

1. Data collection and preprocessing: The first assignment deals with collecting relevant data for the model and preparing it for analysis. This involves identifying the sources of data, ensuring its quality and reliability, and converting it into a suitable format for further analysis. Proper data collection and preprocessing are crucial for the accuracy and effectiveness of the model.

2. Feature selection and engineering: In this assignment, the focus is on selecting the most relevant features or variables for the model and engineering new features if required. Feature selection helps in reducing the dimensionality of the data and improving the model's efficiency. Feature engineering involves creating new features by combining or transforming existing ones to capture additional information or patterns.

3. Model building and training: The next assignment involves building the actual model using the selected features and training it on the prepared dataset. This step includes selecting the appropriate algorithms and techniques based on the problem at hand and the available data. The model is trained using labeled data to learn the underlying patterns and relationships.

4. Model evaluation and validation: Once the model is trained, it needs to be evaluated to assess its performance and validity. This assignment involves various metrics and techniques to evaluate the model's accuracy, precision, recall, and other relevant parameters. Cross-validation techniques are often used to validate the model's generalizability and robustness.

5. Model optimization and tuning: In this assignment, the focus is on improving the model's performance by optimizing its parameters and tuning the algorithms used. Different optimization techniques such as grid search or Bayesian optimization can be employed to identify the optimal set of hyperparameters for the model. This step involves experimentation and fine-tuning to achieve the best possible results.

6. Model deployment and integration: The penultimate assignment deals with deploying the trained model into a production environment where it can be used for real-time predictions or decision-making. This step involves integrating the model with existing systems, creating relevant APIs or interfaces, and ensuring its compatibility and scalability. Continuous monitoring and maintenance are also essential to ensure the model's ongoing performance and accuracy.

7. Model interpretation and communication: The final assignment focuses on interpreting and communicating the model's results and findings to stakeholders and decision-makers. This step involves translating complex technical jargon into easily understandable insights and recommendations. Visualization techniques and storytelling methods can be employed to effectively communicate the model's outcomes and implications.

In conclusion, the section assignments of a model encompass a series of steps that collectively form a comprehensive approach to data analysis and modeling. Each assignment plays a crucial role in ensuring the model's accuracy, efficiency, and usability.
By following these assignments in a systematic manner, organizations can harness the power of data and make informed decisions that drive growth and success.
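As a small illustration of how assignments 2 through 5 can be chained in practice, the sketch below uses scikit-learn with a synthetic dataset. The dataset, the choice of a logistic-regression classifier, and the parameter grid are illustrative assumptions rather than anything prescribed by the text above.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# 1-2. Data collection/preprocessing and feature selection (synthetic data stands in here).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# 3. Model building: scaling, feature selection and the classifier chained in one pipeline.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(score_func=f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# 5. Optimization and tuning: grid search with cross-validation over the pipeline parameters.
param_grid = {"select__k": [5, 10, 20], "clf__C": [0.1, 1.0, 10.0]}
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)

# 4. Evaluation and validation on held-out data.
print("best parameters:", search.best_params_)
print(classification_report(y_test, search.predict(X_test)))

Deployment and interpretation (assignments 6 and 7) would then wrap the fitted search.best_estimator_ behind an interface and report the selected features and coefficients to stakeholders.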
Modeling the Spatial Dynamics of Regional Land Use: The CLUE-S Model

PETER H. VERBURG*
Department of Environmental Sciences, Wageningen University, P.O. Box 37, 6700 AA Wageningen, The Netherlands
and
Faculty of Geographical Sciences, Utrecht University, P.O. Box 80115, 3508 TC Utrecht, The Netherlands

WELMOED SOEPBOER
A. VELDKAMP
Department of Environmental Sciences, Wageningen University, P.O. Box 37, 6700 AA Wageningen, The Netherlands

RAMIL LIMPIADA
VICTORIA ESPALDON
School of Environmental Science and Management, University of the Philippines Los Baños, College, Laguna 4031, Philippines

SHARIFAH S. A. MASTURA
Department of Geography, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia

ABSTRACT / Land-use change models are important tools for integrated environmental management. Through scenario analysis they can help to identify near-future critical locations in the face of environmental change. A dynamic, spatially explicit, land-use change model is presented for the regional scale: CLUE-S. The model is specifically developed for the analysis of land use in small regions (e.g., a watershed or province) at a fine spatial resolution. The model structure is based on systems theory to allow the integrated analysis of land-use change in relation to socio-economic and biophysical driving factors. The model explicitly addresses the hierarchical organization of land use systems, spatial connectivity between locations and stability. Stability is incorporated by a set of variables that define the relative elasticity of the actual land-use type to conversion. The user can specify these settings based on expert knowledge or survey data. Two applications of the model in the Philippines and Malaysia are used to illustrate the functioning of the model and its validation.

KEY WORDS: Land-use change; Modeling; Systems approach; Scenario analysis; Natural resources management

*Author to whom correspondence should be addressed; email: pverburg@gissrv.iend.wau.nl

Environmental Management Vol. 30, No. 3, pp. 391-405. DOI: 10.1007/s00267-002-2630-x. © 2002 Springer-Verlag New York Inc.

Land-use change is central to environmental management through its influence on biodiversity, water and radiation budgets, trace gas emissions, carbon cycling, and livelihoods (Lambin and others 2000a, Turner 1994). Land-use planning attempts to influence the land-use change dynamics so that land-use configurations are achieved that balance environmental and stakeholder needs. Environmental management and land-use planning therefore need information about the dynamics of land use. Models can help to understand these dynamics and project near-future land-use trajectories in order to target management decisions (Schoonenboom 1995).

Environmental management, and land-use planning specifically, take place at different spatial and organisational levels, often corresponding with either eco-regional or administrative units, such as the national or provincial level. The information needed and the management decisions made are different for the different levels of analysis. At the national level it is often sufficient to identify regions that qualify as "hot-spots" of land-use change, i.e., areas that are likely to be faced with rapid land use conversions. Once these hot-spots are identified a more detailed land use change analysis is often needed at the regional level.

At the regional level, the effects of land-use change on natural resources can be determined by a combination of land use change analysis and specific models to assess the impact on natural resources. Examples of this type of model are water balance models (Schulze 2000), nutrient balance models (Priess and Koning 2001, Smaling and Fresco 1993) and erosion/sedimentation models (Schoorl and Veldkamp 2000).
Most often these models need high-resolution data for land use to appropriately simulate the processes involved.

Land-Use Change Models

The rising awareness of the need for spatially-explicit land-use models within the Land-Use and Land-Cover Change research community (LUCC; Lambin and others 2000a, Turner and others 1995) has led to the development of a wide range of land-use change models. Whereas most models were originally developed for deforestation (reviews by Kaimowitz and Angelsen 1998, Lambin 1997) more recent efforts also address other land use conversions such as urbanization and agricultural intensification (Brown and others 2000, Engelen and others 1995, Hilferink and Rietveld 1999, Lambin and others 2000b). Spatially explicit approaches are often based on cellular automata that simulate land use change as a function of land use in the neighborhood and a set of user-specified relations with driving factors (Balzter and others 1998, Candau 2000, Engelen and others 1995, Wu 1998). The specification of the neighborhood functions and transition rules is done either based on the user's expert knowledge, which can be a problematic process due to a lack of quantitative understanding, or on empirical relations between land use and driving factors (e.g., Pijanowski and others 2000, Pontius and others 2000). A probability surface, based on either logistic regression or neural network analysis of historic conversions, is made for future conversions. Projections of change are based on applying a cut-off value to this probability surface. Although appropriate for short-term projections, if the trend in land-use change continues, this methodology is incapable of projecting changes when the demands for different land-use types change, leading to a discontinuation of the trends. Moreover, these models are usually capable of simulating the conversion of one land-use type only (e.g. deforestation) because they do not address competition between land-use types explicitly.

The CLUE Modeling Framework

The Conversion of Land Use and its Effects (CLUE) modeling framework (Veldkamp and Fresco 1996, Verburg and others 1999a) was developed to simulate land-use change using empirically quantified relations between land use and its driving factors in combination with dynamic modeling. In contrast to most empirical models, it is possible to simulate multiple land-use types simultaneously through the dynamic simulation of competition between land-use types.

This model was developed for the national and continental level; applications are available for Central America (Kok and Winograd 2001), Ecuador (de Koning and others 1999), China (Verburg and others 2000), and Java, Indonesia (Verburg and others 1999b). For study areas with such a large extent the spatial resolution of analysis was coarse (pixel size varying between 7 × 7 and 32 × 32 km). This is a consequence of the impossibility to acquire data for land use and all driving factors at finer spatial resolutions. A coarse spatial resolution requires a different data representation than the common representation for data with a fine spatial resolution. In fine resolution grid-based approaches land use is defined by the most dominant land-use type within the pixel. However, such a data representation would lead to large biases in the land-use distribution as some class proportions will diminish and others will increase with scale depending on the spatial and probability
distributions of the cover types (Moody and Woodcock 1994). In the applications of the CLUE model at the national or continental level we have, therefore, represented land use by designating the relative cover of each land-use type in each pixel, e.g. a pixel can contain 30% cultivated land, 40% grassland, and 30% forest. This data representation is directly related to the information contained in the census data that underlie the applications. For each administrative unit, census data denote the number of hectares devoted to different land-use types. When studying areas with a relatively small spatial extent, we often base our land-use data on land-use maps or remote sensing images that denote land-use types respectively by homogeneous polygons or classified pixels. When converted to a raster format this results in only one, dominant, land-use type occupying one unit of analysis. The validity of this data representation depends on the patchiness of the landscape and the pixel size chosen. Most sub-national land use studies use this representation of land use with pixel sizes varying between a few meters up to about 1 × 1 km. The two different data representations are shown in Figure 1.

Because of the differences in data representation and other features that are typical for regional applications, the CLUE model can not directly be applied at the regional scale. This paper describes the modified modeling approach for regional applications of the model, now called CLUE-S (the Conversion of Land Use and its Effects at Small regional extent). The next section describes the theories underlying the development of the model, after which it is described how these concepts are incorporated in the simulation model. The functioning of the model is illustrated for two case-studies and is followed by a general discussion.

Characteristics of Land-Use Systems

This section lists the main concepts and theories that are prevalent for describing the dynamics of land-use change and that are relevant for the development of land-use change models. Land-use systems are complex and operate at the interface of multiple social and ecological systems. The similarities between land use, social, and ecological systems allow us to use concepts that have proven to be useful for studying and simulating ecological systems in our analysis of land-use change (Loucks 1977, Adger 1999, Holling and Sanderson 1996). Among those concepts, connectivity is important. The concept of connectivity acknowledges that locations that are at a certain distance are related to each other (Green 1994). Connectivity can be a direct result of biophysical processes, e.g., sedimentation in the lowlands is a direct result of erosion in the uplands, but more often it is due to the movement of species or humans through the landscape. Land degradation at a certain location will trigger farmers to clear land at a new location. Thus, changes in land use at this new location are related to the land-use conditions in the other location. In other instances more complex relations exist that are rooted in the social and economic organization of the system. The hierarchical structure of social organization causes some lower level processes to be constrained by higher level dynamics, e.g., the establishment of a new fruit-tree plantation in an area near to the market might influence prices in such a way that it is no longer profitable for farmers to produce fruits in more distant areas. For studying this situation another concept from ecology, hierarchy theory, is useful (Allen and Starr
1982, O'Neill and others 1986). This theory states that higher level processes constrain lower level processes whereas the higher level processes might emerge from lower level dynamics. This makes the analysis of the land-use system at different levels of analysis necessary. Connectivity implies that we cannot understand land use at a certain location by solely studying the site characteristics of that location. The situation at neighboring or even more distant locations can be as important as the conditions at the location itself.

Figure 1. Data representation and land-use model used for case-studies with a national/continental extent and a local/regional extent, respectively.

Land-use and land-cover change are the result of many interacting processes. Each of these processes operates over a range of scales in space and time. These processes are driven by one or more of these variables that influence the actions of the agents of land-use and cover change involved. These variables are often referred to as underlying driving forces which underpin the proximate causes of land-use change, such as wood extraction or agricultural expansion (Geist and Lambin 2001). These driving factors include demographic factors (e.g., population pressure), economic factors (e.g., economic growth), technological factors, policy and institutional factors, cultural factors, and biophysical factors (Turner and others 1995, Kaimowitz and Angelsen 1998). These factors influence land-use change in different ways. Some of these factors directly influence the rate and quantity of land-use change, e.g. the amount of forest cleared by new incoming migrants. Other factors determine the location of land-use change, e.g. the suitability of the soils for agricultural land use. Especially the biophysical factors do pose constraints to land-use change at certain locations, leading to spatially differentiated pathways of change. It is not possible to classify all factors in groups that either influence the rate or location of land-use change. In some cases the same driving factor has both an influence on the quantity of land-use change as well as on the location of land-use change. Population pressure is often an important driving factor of land-use conversions (Rudel and Roper 1997). At the same time it is the relative population pressure that determines which land-use changes are taking place at a certain location. Intensively cultivated arable lands are commonly situated at a limited distance from the villages while more extensively managed grasslands are often found at a larger distance from population concentrations, a relation that can be explained by labor intensity, transport costs, and the quality of the products (Von Thünen 1966).

The determination of the driving factors of land use changes is often problematic and an issue of discussion (Lambin and others 2001). There is no unifying theory that includes all processes relevant to land-use change. Reviews of case studies show that it is not possible to simply relate land-use change to population growth, poverty, and infrastructure. Rather, the interplay of several proximate as well as underlying factors drive land-use change in a synergetic way with large variations caused by location specific conditions (Lambin and others 2001, Geist and Lambin 2001). In regional modeling we often need to rely on poor data describing this complexity. Instead of using the underlying driving factors it is necessary to use proximate variables that can represent the underlying driving factors.
Especially for factors that are important in determining the location of change it is essential that the factor can be mapped quantitatively, representing its spatial variation. The causality between the underlying driving factors and the (proximate) factors used in modeling (in this paper also referred to as "driving factors") should be certified.

Other system properties that are relevant for land-use systems are stability and resilience, concepts often used to describe ecological systems and, to some extent, social systems (Adger 2000, Holling 1973, Levin and others 1998). Resilience refers to the buffer capacity or the ability of the ecosystem or society to absorb perturbations, or the magnitude of disturbance that can be absorbed before a system changes its structure by changing the variables and processes that control behavior (Holling 1992). Stability and resilience are concepts that can also be used to describe the dynamics of land-use systems, which inherit these characteristics from both ecological and social systems. Due to the stability and resilience of the system, disturbances and external influences will mostly not directly change the landscape structure (Conway 1985). After a natural disaster lands might be abandoned and the population might temporarily migrate. However, people will in most cases return after some time and continue land-use management practices as before, recovering the land-use structure (Kok and others 2002). Stability in the land-use structure is also a result of the social, economic, and institutional structure. Instead of a direct change in the land-use structure upon a fall in prices of a certain product, farmers will wait a few years, depending on the investments made, before they change their cropping system.

These characteristics of land-use systems provide a number of requirements for the modelling of land-use change that have been used in the development of the CLUE-S model, including:

● Models should not analyze land use at a single scale, but rather include multiple, interconnected spatial scales because of the hierarchical organization of land-use systems.
● Special attention should be given to the driving factors of land-use change, distinguishing drivers that determine the quantity of change from drivers of the location of change.
● Sudden changes in driving factors should not directly change the structure of the land-use system as a consequence of the resilience and stability of the land-use system.
need to specify,on a yearly basis,the area covered by the different land-use types,which is a direct input for the allocation module.The rest of this paper focuses on the procedure to allocate these demands to land-use conversions at speci fic locations within the study area.The allocation is based upon a combination of em-pirical,spatial analysis,and dynamic modelling.Figure 3gives an overview of the procedure.The empirical analysis unravels the relations between the spatial dis-tribution of land use and a series of factors that are drivers and constraints of land use.The results of this empirical analysis are used within the model when sim-ulating the competition between land-use types for a speci fic location.In addition,a set of decision rules is speci fied by the user to restrict the conversions that can take place based on the actual land-use pattern.The different components of the procedure are now dis-cussed in more detail.Spatial AnalysisThe pattern of land use,as it can be observed from an airplane window or through remotely sensed im-ages,reveals the spatial organization of land use in relation to the underlying biophysical andsocio-eco-Figure 2.Overview of the modelingprocedure.Figure 3.Schematic represen-tation of the procedure to allo-cate changes in land use to a raster based map.Modeling Regional Land-Use Change395nomic conditions.These observations can be formal-ized by overlaying this land-use pattern with maps de-picting the variability in biophysical and socio-economic conditions.Geographical Information Systems(GIS)are used to process all spatial data and convert these into a regular grid.Apart from land use, data are gathered that represent the assumed driving forces of land use in the study area.The list of assumed driving forces is based on prevalent theories on driving factors of land-use change(Lambin and others2001, Kaimowitz and Angelsen1998,Turner and others 1993)and knowledge of the conditions in the study area.Data can originate from remote sensing(e.g., land use),secondary statistics(e.g.,population distri-bution),maps(e.g.,soil),and other sources.To allow a straightforward analysis,the data are converted into a grid based system with a cell size that depends on the resolution of the available data.This often involves the aggregation of one or more layers of thematic data,e.g. 
it does not make sense to use a 30-m resolution if that is available for land-use data only, while the digital elevation model has a resolution of 500 m. Therefore, all data are aggregated to the same resolution that best represents the quality and resolution of the data.

The relations between land use and its driving factors are thereafter evaluated using stepwise logistic regression. Logistic regression is an often used methodology in land-use change research (Geoghegan and others 2001, Serneels and Lambin 2001). In this study we use logistic regression to indicate the probability of a certain grid cell to be devoted to a land-use type given a set of driving factors following:

\log\left(\frac{P_i}{1 - P_i}\right) = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \ldots + \beta_n X_{n,i}

where P_i is the probability of a grid cell for the occurrence of the considered land-use type and the X's are the driving factors. The stepwise procedure is used to help us select the relevant driving factors from a larger set of factors that are assumed to influence the land-use pattern. Variables that have no significant contribution to the explanation of the land-use pattern are excluded from the final regression equation.

Whereas in ordinary least squares regression the R2 gives a measure of model fit, there is no equivalent for logistic regression. Instead, the goodness of fit can be evaluated with the ROC method (Pontius and Schneider 2000, Swets 1986) which evaluates the predicted probabilities by comparing them with the observed values over the whole domain of predicted probabilities instead of only evaluating the percentage of correctly classified observations at a fixed cut-off value. This is an appropriate methodology for our application, because we will use a wide range of probabilities within the model calculations.

The influence of spatial autocorrelation on the regression results can be minimized by only performing the regression on a random sample of pixels at a certain minimum distance from one another. Such a selection method is adopted in order to maximize the distance between the selected pixels to attenuate the problem associated with spatial autocorrelation. For case-studies where autocorrelation has an important influence on the land-use structure it is possible to further exploit it by incorporating an autoregressive term in the regression equation (Overmars and others 2002).

Based upon the regression results a probability map can be calculated for each land-use type. A new probability map is calculated every year with updated values for the driving factors that are projected to change in time, such as the population distribution or accessibility.

Decision Rules

Land-use type or location specific decision rules can be specified by the user. Location specific decision rules include the delineation of protected areas such as nature reserves. If a protected area is specified, no changes are allowed within this area. For each land-use type, decision rules determine the conditions under which the land-use type is allowed to change in the next time step. These decision rules are implemented to give certain land-use types a certain resistance to change in order to generate the stability in the land-use structure that is typical for many landscapes. Three different situations can be distinguished and for each land-use type the user should specify which situation is most relevant for that land-use type:

1. For some land-use types it is very unlikely that they are converted into another land-use type after their first conversion; as soon as an agricultural area is urbanized it is not expected to return
to agriculture or to be converted into forest cover. Unless a decrease in area demand for this land-use type occurs, the locations covered by this land use are no longer evaluated for potential land-use changes. If this situation is selected it also holds that if the demand for this land-use type decreases, there is no possibility for expansion in other areas. In other words, when this setting is applied to forest cover and deforestation needs to be allocated, it is impossible to reforest other areas at the same time.

2. Other land-use types are converted more easily. A swidden agriculture system is most likely to be converted into another land-use type soon after its initial conversion. When this situation is selected for a land-use type no restrictions to change are considered in the allocation module.

3. There is also a number of land-use types that operate in between these two extremes. Permanent agriculture and plantations require an investment for their establishment. It is therefore not very likely that they will be converted very soon after into another land-use type. However, in the end, when another land-use type becomes more profitable, a conversion is possible. This situation is dealt with by defining the relative elasticity for change (ELAS_u) for the land-use type into any other land use type. The relative elasticity ranges between 0 (similar to Situation 2) and 1 (similar to Situation 1). The higher the defined elasticity, the more difficult it gets to convert this land-use type. The elasticity should be defined based on the user's knowledge of the situation, but can also be tuned during the calibration of the model.

Competition and Actual Allocation of Change

Allocation of land-use change is made in an iterative procedure given the probability maps, the decision rules in combination with the actual land-use map, and the demand for the different land-use types (Figure 4). The following steps are followed in the calculation:

1. The first step includes the determination of all grid cells that are allowed to change. Grid cells that are either part of a protected area or under a land-use type that is not allowed to change (Situation 1, above) are excluded from further calculation.

2. For each grid cell i the total probability (TPROP_{i,u}) is calculated for each of the land-use types u according to:

\mathrm{TPROP}_{i,u} = P_{i,u} + \mathrm{ELAS}_u + \mathrm{ITER}_u,

where ITER_u is an iteration variable that is specific to the land use. ELAS_u is the relative elasticity for change specified in the decision rules (Situation 3 described above) and is only given a value if grid-cell i is already under land use type u in the year considered. ELAS_u equals zero if all changes are allowed (Situation 2).

3. A preliminary allocation is made with an equal value of the iteration variable (ITER_u) for all land-use types by allocating the land-use type with the highest total probability for the considered grid cell. This will cause a number of grid cells to change land use.

4. The total allocated area of each land use is now compared to the demand. For land-use types where the allocated area is smaller than the demanded area the value of the iteration variable is increased. For land-use types for which too much is allocated the value is decreased.

5. Steps 2 to 4 are repeated as long as the demands are not correctly allocated. When allocation equals demand the final map is saved and the calculations can continue for the next yearly timestep.

Figure 5 shows the development of the iteration parameter ITER_u for different land-use types during
a simulation.

Figure 4. Representation of the iterative procedure for land-use change allocation.

Figure 5. Change in the iteration parameter (ITER_u) during the simulation within one time-step. The different lines represent the iteration parameter for different land-use types. The parameter is changed for all land-use types synchronously until the allocated land use equals the demand.

Multi-Scale Characteristics

One of the requirements for land-use change models is multi-scale characteristics. The above described model structure incorporates different types of scale interactions. Within the iterative procedure there is a continuous interaction between macro-scale demands and local land-use suitability as determined by the regression equations. When the demand changes, the iterative procedure will cause the land-use types for which demand increased to have a higher competitive capacity (higher value for ITER_u) to ensure enough allocation of this land-use type. Instead of only being determined by the local conditions, captured by the logistic regressions, it is also the regional demand that affects the actually allocated changes. This allows the model to "overrule" the local suitability: it is not always the land-use type with the highest probability according to the logistic regression equation (P_{i,u}) that the grid cell is allocated to.

Apart from these two distinct levels of analysis there are also driving forces that operate over a certain distance instead of being locally important. Applying a neighborhood function that is able to represent the regional influence of the data incorporates this type of variable. Population pressure is an example of such a variable: often the influence of population acts over a certain distance. Therefore, it is not the exact location of people's houses that determines the land-use pattern. The average population density over a larger area is often a more appropriate variable. Such a population density surface can be created by a neighborhood function using detailed spatial data. The data generated this way can be included in the spatial analysis as another independent factor. In the application of the model in the Philippines, described hereafter, we applied a 5 × 5 focal filter to the population map to generate a map representing the general population pressure. Instead of using these variables, generated by neighborhood analysis, it is also possible to use the more advanced technique of multi-level statistics (Goldstein 1995), which enables a model to include higher-level variables in a straightforward manner within the regression equation (Polsky and Easterling 2001).

Application of the Model

In this paper, two examples of applications of the model are provided to illustrate its function. These

Table 1. Land-use classes and driving factors evaluated for Sibuyan Island

Land-use classes                              Driving factors (location)
Forest                                        Altitude (m)
Grassland                                     Slope
Coconut plantation                            Aspect
Rice fields                                   Distance to town
Others (incl. mangrove and settlements)       Distance to stream
                                              Distance to road
                                              Distance to coast
                                              Distance to port
                                              Erosion vulnerability
                                              Geology
                                              Population density (neighborhood 5 × 5)

Figure 6. Location of the case-study areas.
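To illustrate the iterative allocation procedure described in steps 1-5 above, the following sketch is our own simplified reading of that loop rather than the authors' code: it allocates two land-use types on a small grid from given suitability maps P, elasticities ELAS, and area demands, adjusting the iteration variables ITER until the allocated areas match the demands. The grid size, probabilities, elasticities, and demands are made-up illustrative values.

import numpy as np

def allocate(prob, elas, current, demand, step=0.001, max_iter=20000):
    """Iteratively allocate land-use types to grid cells (simplified CLUE-S-style allocation).

    prob    : array (n_types, rows, cols) of suitability probabilities P[u]
    elas    : array (n_types,) of conversion elasticities ELAS[u]
    current : array (rows, cols) of current land-use type indices
    demand  : array (n_types,) of demanded number of cells per type
    """
    n_types = prob.shape[0]
    iter_var = np.zeros(n_types)
    for _ in range(max_iter):
        # Total probability TPROP = P + ELAS (only where the cell already has type u) + ITER.
        tprop = prob + iter_var[:, None, None]
        tprop += np.where(current[None, :, :] == np.arange(n_types)[:, None, None],
                          elas[:, None, None], 0.0)
        allocation = tprop.argmax(axis=0)
        allocated = np.bincount(allocation.ravel(), minlength=n_types)
        if np.array_equal(allocated, demand):
            return allocation
        # Raise ITER for under-allocated types, lower it for over-allocated ones.
        iter_var += step * np.sign(demand - allocated)
    return allocation  # best effort if the demands were not matched exactly

# Tiny example with two land-use types on a 10 x 10 grid.
rng = np.random.default_rng(0)
prob = rng.random((2, 10, 10))
current = (prob[1] > prob[0]).astype(int)
allocation = allocate(prob, elas=np.array([0.3, 0.1]), current=current,
                      demand=np.array([60, 40]))
print(np.bincount(allocation.ravel(), minlength=2))

Protected areas and the "no conversion" rule of Situation 1 are omitted here; they would simply be a mask excluding some cells before the argmax step.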
REVIEW ARTICLE

Advances in discrete element modelling of underground excavations

Carlos Labra · Jerzy Rojek · Eugenio Oñate · Francisco Zarate

Received: 5 November 2007 / Accepted: 6 May 2008 / Published online: 17 July 2008
© Springer-Verlag 2008

Acta Geotechnica (2008) 3:317-322. DOI 10.1007/s11440-008-0071-2

Abstract  The paper presents advances in the discrete element modelling of underground excavation processes, extending modelling possibilities as well as increasing computational efficiency. Efficient numerical models have been obtained using techniques of parallel computing and coupling the discrete element method with the finite element method. The discrete element algorithm has been applied to the simulation of different excavation processes, using different tools, TBMs and roadheaders. Numerical examples of the tunnelling process are included in the paper, showing results in the form of rock failure, damage in the material, cutting forces and tool wear. Efficiency of the code for solving large scale geomechanical problems is also shown.

Keywords  Coupling · Discrete element method · Finite element method · Parallel computation · Tunnelling

C. Labra · E. Oñate · F. Zarate
International Center for Numerical Methods in Engineering, Technical University of Catalonia, Gran Capitan s/n, 08034 Barcelona, Spain
e-mail: clabra@
E. Oñate, e-mail: onate@
F. Zarate, e-mail: zarate@

J. Rojek (&)
Institute of Fundamental Technological Research, Polish Academy of Sciences, Swietokrzyska 21, 00049 Warsaw, Poland
e-mail: jrojek@.pl

1 Introduction

A discrete element algorithm is a numerical technique which solves engineering problems that are modelled as a large system of distinct interacting bodies or particles that are subject to gross motion. The discrete element method (DEM) is widely recognized as a suitable tool to model geomaterials [1, 2, 4, 8]. The method presents important advantages in the simulation of strong discontinuities such as rock fracturing during an underground excavation or rock failure induced by a tunnel excavation. It is difficult to solve such problems using conventional continuum-based procedures such as the finite element method (FEM). The DEM makes possible the simulation of different excavation processes [5, 7], allowing the determination of the damage of the rock or soil, or the evaluation of cutting forces in rock excavation with roadheaders or TBMs. Different possibilities of DEM applications in the simulation of the tunnelling process are shown in the paper. Examples include new developments like the evaluation of tool wear in rock cutting processes.

The main problem in a wider use of this method is the high computational cost required by the simulations, first of all due to the large number of discrete elements usually required. Different strategies are possible in addressing this problem. This paper will present two approaches: parallelization and coupling the DEM and FEM.

Parallelization techniques are useful for the simulation of large-scale problems, where the number of particles involved does not allow the use of a single processor, or where the single processor calculation would require an extremely long time. A shared memory parallelization of the DEM algorithm is presented in the paper. A high performance code for the simulation of tunnel construction problems is described and examples of the efficiency of the code for solving large-scale geomechanical problems are shown in the paper.

In many cases discontinuous material failure is localized in a portion of the domain, while the rest of it can be treated as continuum. Continuous material is usually modelled more efficiently using the FEM. In
such problems coupling of the discrete element method with the FEM can provide an optimum solution. Discrete elements are used only in a portion of the analysed domain where material fracture occurs, while outside the DEM subdomain finite elements can be used. Combining these two methods in one model of rock cutting allows us to take advantage of each method. The paper presents a coupled discrete/finite element technique to model underground excavation employing the theoretical formulation initiated in [5] and further developed in [6].

2 Discrete element method formulation

The discrete element model assumes that material can be represented by an assembly of distinct particles or bodies interacting among themselves. Generally, discrete elements can have arbitrary shape. In this work the formulation employing cylindrical (in 2D) or spherical (in 3D) rigid particles is used. The basic formulation of the discrete element method using spherical or cylindrical particles was first proposed by Cundall and Strack [1]. A similar formulation has been developed by the authors [5, 7] and implemented in the explicit dynamic code Simpact. The code has a lot of original features like modelling of tool wear in rock cutting, thermomechanical coupling and other capabilities not present in commercial discrete element codes.

Translational and rotational motion of rigid spherical or cylindrical elements is described by means of the Newton-Euler equations of rigid body dynamics:

M_D \ddot{r}_D = F_D, \qquad J_D \dot{\Omega}_D = T_D    (1)

where r_D is the position vector of the element centroid in a fixed (inertial) coordinate frame, \Omega_D is the angular velocity, M_D is the diagonal matrix with the element mass on the diagonal, J_D is the diagonal matrix with the element moment of inertia on the diagonal, F_D is the vector of resultant forces, and T_D is the vector of resultant moments about the element central axes. Vectors F_D and T_D are sums of all forces and moments applied to the element due to external load, contact interactions with neighbouring spheres and other obstacles, as well as forces resulting from damping in the system. Equations of motion (1) are integrated in time using the central difference scheme.

The overall behaviour of the system is determined by the cohesive/frictional contact laws assumed for the interaction between contacting rigid spheres (or discs in 2D). The contact law can be seen as the formulation of the material model on the microscopic level. Modelling of rock or cohesive zones requires contact models with cohesion allowing a tensile interaction force between particles. In the present work the simplest of the cohesive models, the elastic perfectly brittle model, is used. This model is characterized by linear elastic behaviour when cohesive bonds are active:

\sigma = k_n u_n, \qquad \tau = k_t u_t    (2)

where \sigma and \tau are the normal and tangential contact force, respectively, k_n and k_t are the interface stiffness in the normal and tangential directions and u_n and u_t the normal and tangential relative displacements, respectively.
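The following is a minimal sketch of the explicit DEM integration described above: two circular particles interacting through a linear normal contact spring like that of Eq. (2), advanced with a central difference (leapfrog) step. It is an illustration under our own assumptions (arbitrary stiffness, masses and time step); rotations, tangential forces, damping and the cohesive bond breakage discussed next are omitted, and it is not code from the Simpact program.

import numpy as np

# Two discs approaching each other along the x axis (2D, translations only).
radius = np.array([0.05, 0.05])            # m
mass = np.array([1.0, 1.0])                # kg
pos = np.array([[0.0, 0.0], [0.12, 0.0]])  # m
vel = np.array([[0.5, 0.0], [-0.5, 0.0]])  # m/s
k_n = 1.0e5                                # N/m, normal contact stiffness (illustrative)
dt = 1.0e-5                                # s, time step

def contact_forces(pos):
    """Linear elastic normal contact force between the two particles (no cohesion)."""
    forces = np.zeros_like(pos)
    d = pos[1] - pos[0]
    dist = np.linalg.norm(d)
    overlap = radius[0] + radius[1] - dist
    if overlap > 0.0:                      # particles are in contact
        normal = d / dist
        f = k_n * overlap * normal         # repulsive force acting on particle 1
        forces[0] -= f
        forces[1] += f
    return forces

# Central difference (leapfrog) integration of M * r'' = F.
for step in range(4000):
    acc = contact_forces(pos) / mass[:, None]
    vel += acc * dt
    pos += vel * dt

print("final x velocities:", vel[:, 0])    # the discs should have bounced apart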
3 Coupling the DEM and FEM

In the present work the so-called explicit dynamic formulation of the FEM is used. The explicit FEM is based on the solution of the discretized equations of motion written in the current configuration in the following form:

M_F r̈_F = F_F^ext − F_F^int   (4)

where M_F is the mass matrix, r_F is the vector of nodal displacements, and F_F^ext and F_F^int are the vectors of external loads and internal forces, respectively. Similarly to the DEM algorithm, the central difference scheme is used for the time integration of (4).

It is assumed that the DEM and FEM can be applied in different subdomains of the same body. The DEM and FEM subdomains, however, do not need to be disjoint; they can overlap each other. The common part of the subdomains is the part where both discretization types are used, with a gradually varying contribution of each modelling method. This idea follows that used for coupling molecular dynamics with a continuous model in [9].

The coupling of the DEM and FEM subdomains is provided by additional kinematical constraints. Interface discrete elements are constrained by the displacement field of the overlapping interface finite elements. Making use of the split of the global vector of displacements of discrete elements, r_D, into the unconstrained part, r_DU, and the constrained one, r_DC, r_D = {r_DU, r_DC}^T, the additional kinematic relationships can be written jointly in matrix notation as follows:

v = r_DC − N r_F = 0   (5)

where N is the matrix containing the adequate shape functions. The additional kinematic constraints (5) can be imposed by the Lagrange multiplier or penalty method. The set of equations of motion for the coupled DEM/FEM system with the penalty coupling is as follows:

[ M̄_F   0      0      0    ] { r̈_F  }   { F̄_F^ext − F̄_F^int + N^T k_DF v }
[ 0      M̄_DU  0      0    ] { r̈_DU }   { F̄_DU                           }
[ 0      0      M̄_DC  0    ] { r̈_DC } = { F̄_DC − k_DF v                  }   (6)
[ 0      0      0      J̄_D ] { Ω̇_D  }   { T̄_D                            }

where k_DF is the diagonal matrix containing on its diagonal the values of the discrete penalty function, and the global matrices M̄_F, M̄_DU, M̄_DC and J̄_D and global vectors F̄_F^int, F̄_F^ext, F̄_DU, F̄_DC and T̄_D are obtained by aggregation of the adequate elemental matrices and vectors, taking into account the appropriate contributions from the discrete and finite element parts. Equation (6) can be integrated in time using the standard central difference scheme.
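At each time step, the penalty variant of the constraint (5) amounts to adding equal and opposite forces, proportional to the constraint violation, to the finite element and constrained discrete element right-hand sides in (6). The sketch below shows only that force exchange; the shape-function matrix N, the penalty values k_DF and the array shapes are illustrative placeholders rather than the data structures of the actual coupled code of [5, 6].

```python
import numpy as np

# Penalty coupling of Eqs. (5)-(6): constrained discrete elements are tied to the
# displacement field interpolated from the overlapping finite elements; the
# violation v of the constraint generates opposite penalty forces on both sides.

def coupling_forces(r_F, r_DC, N, k_DF):
    """Return the penalty contributions added to the FEM and DEM force vectors."""
    v = r_DC - N @ r_F            # constraint violation, Eq. (5)
    f_F = N.T @ (k_DF * v)        # added to the FEM right-hand side in Eq. (6)
    f_DC = -k_DF * v              # added to the constrained particles in Eq. (6)
    return f_F, f_DC

# toy usage: two FEM nodes, three interface particles interpolated between them
N = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])        # illustrative shape-function matrix
r_F = np.array([0.00, 0.10])      # FEM nodal displacements
r_DC = np.array([0.01, 0.04, 0.12])
k_DF = 1.0e6                      # penalty coefficient (diagonal, here a scalar)
print(coupling_forces(r_F, r_DC, N, k_DF))
```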
4 Application of DEM to simulation of tunnelling process

Fracture of rock or soil, as well as the interaction between a tunnelling machine and rock during an excavation process, can be simulated by means of the DEM. This kind of analysis enables the comparison of the excavation process under different conditions.

4.1 Simulation of tunnelling with a TBM

Simplified models of the tunnelling process must be used due to the high computational cost of a full-scale simulation in this case. We assume that the TBM is modelled as a cylinder, and a special contact model for the tunnel face is adopted. Figure 1 presents a simplified tunnelling process. The rock sample, with a diameter of 10 m and a length of 7 m, is discretized with 40,988 randomly generated and densely compacted spheres. Discretization of the TBM geometry employs 1,193 rigid triangular elements. The tunnelling process has been carried out with a prescribed horizontal velocity of 5 m/h and a rotational velocity of 10 rev/min. Rock properties of granite are used, and the microscopic DEM parameters corresponding to the macroscopic granite properties are obtained using the methodology described in [10].

A special condition is adopted to eliminate the spherical particles at the face of the tunnel. Each particle which is in contact with the TBM and lacks cohesive contacts with other particles is removed from the model. Thus, the advance of the TBM and the absorption of the material in the shield of the TBM are modelled. Figure 1a, c presents the displacement of the TBM and the elimination of the rock material. The area affected by the loss of cohesive contacts, resulting in material failure, is shown in Fig. 2. This loss of cohesion can be considered as damage, because it produces a change of the equivalent Young modulus.

Fig. 1  Simulation of TBM excavation: evolution and elimination of material
Fig. 2  Simulation of TBM excavation: damage over the tunnel surface

4.2 Simulation of linear cutting test of single disc cutter

A simulation of the linear cutting test was performed. A rock sample with dimensions of 13591095 cm is represented by an assembly of 40,449 randomly generated and densely compacted spherical elements with radii ranging from 0.08 to 0.60 cm. The granite properties are assumed in the simulation and the appropriate DEM parameters are evaluated. The disc cutter is treated as a rigid body and the parameters describing its interaction with the rock are as follows: contact stiffness modulus k_n = 10 GPa, Coulomb friction coefficient μ = 0.8. The velocity of the disc cutter is assumed to be 10 m/s.

Figure 3a shows the discretization of the disc cutter. Only the area of the cutter ring in direct interaction with the rock is discretized with discrete elements, for computational cost reasons. The whole model is presented in Fig. 3b.

Fig. 3  Linear cutting test simulation: a cutter ring with partial discretization; b full discretized model

The evolution of the normal cutting force during the process is depicted in Fig. 4a. The values of the forces should be validated, because the boundary condition can affect the results. The evolution of the wear, using the formulation presented in [5], can be seen in Fig. 4b. The elimination of the discrete elements where the wear exceeds the prescribed limit permits the modification of the disc cutter shape, which leads to a change of the interaction forces. In the present case, a low value of the wear constant is considered, in order to maintain the initial tool shape. Accumulated wear indicates the areas where the removal of the tool material is most intensive. An acceleration of the wear process, using higher values of the wear constant, is required in order to obtain, within the short time considered in the analysis, an amount of wear equivalent to the real working time.
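Both element-elimination rules used above, the removal of fully decohesed particles at the tunnel face in Sect. 4.1 and the removal of tool discrete elements whose accumulated wear exceeds the prescribed limit in Sect. 4.2, reduce to simple set tests over the current contact and wear data. The sketch below expresses them directly; the containers and the wear bookkeeping are illustrative assumptions, not the data structures of the code.

```python
# Sketch of the two element-elimination rules described above. 'tbm_contacts',
# 'cohesive_bonds' and 'wear' are assumed to be maintained by the contact search
# and the wear model; their layout here is illustrative only.

def particles_to_remove(tbm_contacts, cohesive_bonds):
    """Rule of Sect. 4.1: a particle in contact with the TBM that has lost all
    of its cohesive bonds is absorbed, i.e. removed from the model."""
    bonded = set()
    for i, j in cohesive_bonds:
        bonded.update((i, j))
    return {p for p in tbm_contacts if p not in bonded}

def tool_elements_to_remove(wear, wear_limit):
    """Rule of Sect. 4.2: tool discrete elements whose accumulated wear exceeds
    the prescribed limit are eliminated, which updates the cutter shape."""
    return {e for e, w in wear.items() if w > wear_limit}

# toy usage
print(particles_to_remove({3, 7, 9}, [(1, 3), (2, 5)]))   # -> {7, 9} (order arbitrary)
print(tool_elements_to_remove({10: 0.2, 11: 0.9}, 0.5))   # -> {11}
```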
5 High performance simulations

One of the main problems with DEM simulation is the computational cost. The contact search, the force calculation for each contact, and the large number of elements necessary to resolve a real-life problem require a high computational effort. High performance computation and a parallel implementation can be necessary to run simulations with a large number of time steps. The advances in computer capabilities during recent years and the use of multiprocessor techniques enable the use of parallel computing methods for the discrete element analysis of large-scale real problems.

A shared memory parallel version of the code is tested. The main idea is to make a partition of the mesh of particles and use each processor for the contact calculation in a different part of the mesh. The partition process is performed using a specialized library [3]. The calculation of the cohesive contacts requires most of the computational cost. A special structure for the database and dynamic load balancing are used in order to obtain a good performance of the simulations.

Two different structures for the contact data are used in order to have a good management of the information. The first data structure is created for the initial cohesive contacts, where a static array can be used. The other data structure is designed for the dynamic contacts, occurring in the process of rock fragmentation and in the interaction between different bodies. The management of this kind of contact is completely dynamic, and it is not necessary to store variables with the history information.

Table 1 presents the times of parallel simulations of the tunnelling process described earlier. The main computational cost is due to the evaluation of the cohesive contacts. The results shown in the table confirm that a good speed-up has been achieved.

Table 1  Times for different numbers of processors

                               1 processor   2 processors   4 processors
Total (s)                      404.31        272.93         156.85
Static contacts (s per step)   0.1279        0.0692         0.0351
Dynamic contacts (s per step)  0.0059        0.0057         0.0055
Time integration (s per step)  0.0426        0.0357         0.0344
Speed-up                       1.00          1.84           2.58
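A minimal sketch of the shared-memory strategy described above: the set of cohesive contacts is split into partitions, each worker evaluates the bond forces of its own partition, and the partial force vectors are then assembled. The index-based partitioning below stands in for the graph partitioner of [3], the process pool stands in for the shared-memory threads of the actual code, and the linear bond force is the same simplification as in the earlier sketch.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Sketch of parallel evaluation of the (dominant) cohesive-contact forces:
# partition the bond list, evaluate each partition in a separate worker and
# assemble the global force vector. Illustrative only; the real code uses a
# graph partitioner [3], shared memory and dynamic load balancing.

def partition_forces(args):
    pos, k_n, bonds = args
    f = np.zeros_like(pos)
    for i, j, L0 in bonds:                      # cohesive contacts of this partition
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d)
        fn = k_n * (dist - L0) * d / dist       # linear normal bond force
        f[i] += fn
        f[j] -= fn
    return f

def parallel_forces(pos, bonds, k_n, n_workers=4):
    chunks = [bonds[w::n_workers] for w in range(n_workers)]   # naive partition
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        partials = ex.map(partition_forces, [(pos, k_n, c) for c in chunks])
    return sum(partials)                        # assemble the global force vector

if __name__ == "__main__":
    pos = np.random.rand(100, 3)
    bonds = [(i, i + 1, 0.01) for i in range(99)]
    print(parallel_forces(pos, bonds, k_n=1e4).shape)
```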
6 DEM and DEM/FEM simulation of rock cutting

A process of rock cutting with a single pick of a roadheader cutter-head has been simulated using discrete and hybrid discrete/finite element models. In the hybrid DEM/FEM model, discrete elements have been used in the part of the rock mass subjected to fracture, while the other part has been discretized with finite elements. In both models the tool is considered rigid, assuming that the elasticity of the tool is irrelevant for the purpose of modelling rock fracture.

Figure 5 presents the results of the DEM and DEM/FEM simulations. Both models produce similar failure of the rock during cutting. The cutting forces obtained using the two models are compared in Fig. 6. Both curves show oscillations typical for the cutting of brittle rock. In both cases similar values of the amplitudes are observed. The mean values of the cutting forces agree very well. This shows that the combined DEM/FEM simulation gives results similar to a DEM analysis, while being more efficient numerically: the computation time has been reduced by half.

Fig. 5  Simulation of rock cutting: a DEM model, b DEM/FEM model

7 Conclusions

• The discrete element method using spherical or cylindrical rigid particles is a suitable tool for modelling underground excavation processes.
• Use of the model in a particular case requires calibration of the discrete element model using available experimental results.
• Discrete element simulations of real engineering problems require large computation times and memory resources.
• The efficiency of discrete element computation can be improved using parallel computation techniques. Parallelization makes possible the simulation of large problems.
• The combination of discrete and finite elements is an effective approach for the simulation of underground rock excavation.

Acknowledgments  The work has been sponsored by the EU project TUNCONSTRUCT (contract no. IP 011817-2) coordinated by Prof. G. Beer (TU Graz, Austria).

References

1. Cundall PA, Strack ODL (1979) A discrete numerical model for granular assemblies. Geotechnique 29:47–65
2. Campbell CS (1990) Rapid granular flows. Annu Rev Fluid Mech 22:57–92
3. Karypis G, Kumar V (1998) A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J Sci Comput 20:359–392
4. Mustoe G (ed) (1992) Eng Comput 9(2). Special issue
5. Oñate E, Rojek J (2004) Combination of discrete element and finite element methods for dynamic analysis of geomechanics problems. Comput Methods Appl Mech Eng 193:3087–3128
6. Rojek J (2007) Modelling and simulation of complex problems of nonlinear mechanics using the finite and discrete element methods (in Polish). Habilitation Thesis, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw
7. Rojek J, Oñate E, Zarate F, Miquel J (2001) Modelling of rock, soil and granular materials using spherical elements. In: 2nd European conference on computational mechanics ECCM-2001, Cracow, 26–29 June
8. Williams JR, O'Connor R (1999) Discrete element simulation and the contact problem. Arch Comput Methods Eng 6(4):279–304
9. Xiao SP, Belytschko T (2004) A bridging domain method for coupling continua with molecular dynamics. Comput Methods Appl Mech Eng 193:1645–1669
10. Zarate F, Rojek J, Oñate E, Labra C (2007) A methodology to determine the particle properties in 2D and 3D DEM simulations. In: ECCOMAS thematic conference on computational methods in tunnelling EURO:TUN-2007, Vienna, Austria, 27–29 August
Abstract submitted for the MAR15 Meeting of The American Physical Society

Carrier Mediated Ferromagnetism in Fe-doped SrTiO3 [1]

CHUN-LAN MA, School of Mathematics and Physics, Suzhou University of Science and Technology, Suzhou 215009, China; ROCIO CONTRERAS-GUERRERO, RAVI DROOPAD, Ingram School of Engineering, Texas State University, San Marcos, TX 78666, USA; BYOUNGHAK LEE, Department of Physics, Texas State University, San Marcos, TX 78666, USA

The discovery of III-V dilute magnetic semiconductors (DMSs) and the subsequent unsuccessful search for room temperature ferromagnetism in DMSs have motivated research on alternative dilute magnetic systems. Recent progress in thin film growth techniques for perovskite oxides suggests that dilute magnetic oxides (DMOs) can be viable candidates to improve on the magnetic properties of DMSs. In this talk we present an ab initio study of Fe-doped SrTiO3. We find that ferromagnetic ordering among localized Fe t2g spins is mediated by itinerant Fe eg electrons. The exchange interaction between t2g and eg electrons depends on the crystal field splitting, the on-site electron-electron interaction, and the relative energy of the Fe d-orbitals with respect to the oxygen p-orbitals. The exchange coupling and the majority-minority spin splitting decrease with decreasing carrier concentration, confirming that itinerant carriers mediate the ferromagnetism.

[1] C. Ma is supported by NSF of China (Grant Nos. 11247023 and 11304218), Jiangsu Qing Lan Project, and Jiangsu Overseas Research & Training Program. R.C.-G., R.D., and B.L. are supported by AFOSR, award number FA9550-10-1-0133.

Chun-Lan Ma
School of Mathematics and Physics, Suzhou University of Science and Technology, Suzhou 215009, China

Date submitted: 15 Nov 2014    Electronic form version 1.4
Progress in Physics, Vol. 24, No. 4, December 2004
Article ID: 1000-0542(2004)04-0381-17

Half-Metallic Magnetic Materials

Ren Shangkun (1,2), Zhang Fengming (1), Du Youwei (1)
(1. National Laboratory of Solid State Microstructures, Nanjing University, Nanjing 210093; Jiangsu Provincial Key Laboratory for Nanotechnology, Nanjing 210093; 2. Zhoukou Normal University, Zhoukou 466000, Henan)

Abstract: A defining feature of half-metallic materials is a conduction-electron spin polarization that can reach 100%. Half-metallic magnetic materials are spintronic materials with great application potential. This paper classifies half-metallic materials systematically from several viewpoints: the origin of the half-metallicity, the crystal structure of the materials, the electronic states of the half-metal, and the electromagnetic properties. The basic properties and atomic structure characteristics of the half-metallic materials discovered so far are reviewed, and five methods for measuring the conduction-electron spin polarization are analyzed and discussed.

Keywords: half-metal; ferromagnetism; spin polarization; spintronics
Chinese Library Classification: TM27; O44; TQ58    Document code: A

0 Introduction

In recent years spintronics, an emerging discipline with great application and commercial potential, has attracted widespread attention [1]. Spintronics exploits two information carriers, charge and spin, and, combined with modern microelectronic technology, will have a major impact on the next generation of electronic materials and products. As early as the 1980s, de Groot and co-workers [2] at the University of Nijmegen in the Netherlands, while performing calculations on ternary compounds such as NiMnSb and PtMnSb, discovered a new type of band structure and named this class of compounds half-metallic magnets. These compounds are a new type of functional material. Their novelty lies in having two distinct spin sub-bands: the band structure of electrons with one spin orientation (taken to be spin-up) is metallic, i.e., the Fermi level lies within the conduction band and these electrons behave as in a metal, whereas electrons with the other spin orientation (spin-down) show insulating or semiconducting behavior. Half-metallic materials are therefore functional materials characterized by the different behavior, metallic versus non-metallic, of the two spin channels.
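The 100% figure quoted in the abstract corresponds to the textbook definition of the spin polarization in terms of the spin-resolved densities of states at the Fermi level. The definition and the numbers below are a standard illustration, not values taken from this review.

```python
# Textbook definition of the conduction-electron spin polarization,
#   P = (N_up(E_F) - N_down(E_F)) / (N_up(E_F) + N_down(E_F)),
# evaluated for illustrative densities of states at the Fermi level.

def spin_polarization(n_up, n_down):
    return (n_up - n_down) / (n_up + n_down)

print(spin_polarization(1.2, 0.4))   # ordinary ferromagnet: P = 0.5
print(spin_polarization(1.2, 0.0))   # half-metal: the gapped spin channel has no
                                     # states at E_F, so P = 1, i.e. 100%
```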
A key technological problem currently facing semiconductor spintronics is how to inject spin-polarized electrons into a semiconductor with high efficiency.
arXiv:0801.0412v1 [cond-mat.str-el] 2 Jan 2008

Modelling the Localized to Itinerant Electronic Transition in the Heavy Fermion System CeIrIn5

J. H. Shim, K. Haule and G. Kotliar
Center for Materials Theory, Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854

We address the fundamental question of the crossover from the localized to the itinerant state of a paradigmatic heavy fermion material, CeIrIn5. The temperature evolution of the one-electron spectra and the optical conductivity is predicted from first principles calculations. The buildup of coherence in the form of a dispersive many body feature is followed in detail, and its effects on the conduction electrons of the material are revealed. We find multiple hybridization gaps and link them to the crystal structure of the material. Our theoretical approach explains the multiple peak structures observed in optical experiments and the sensitivity of CeIrIn5 to substitutions of the transition metal element, and may provide a microscopic basis for the more phenomenological descriptions currently used to interpret experiments in heavy fermion systems.

Heavy fermion materials have unusual properties arising from the presence of a partially filled shell of f-orbitals and a very broad band of conduction electrons. At high temperatures, the f-electrons behave as atomic local moments. As the temperature is reduced, the moments combine with the conduction electrons to form a fluid of very heavy quasiparticles, with masses which are two to three orders of magnitude larger than the mass of the electron (1, 2). These heavy quasiparticles can undergo superconducting or magnetic transitions at much lower temperatures. Understanding how the itinerant low energy excitations emerge from the localized moments of the f shell is one of the central challenges of condensed matter physics. It requires understanding how the dual, atomic particle-like and itinerant wave-like, character of the electron manifests itself in the different physical properties of a material.

CeIrIn5 (3) has a layered tetragonal crystal structure (4, 5) (Fig. 1A) in which layers of CeIn3 (shown as red and gray spheres) are stacked between layers of IrIn2 (yellow and gray spheres). Each Ce atom is surrounded by four In atoms in the same plane and eight In atoms out of plane. To describe the electronic structure of this class of materials, one needs to go beyond the traditional concepts of bands and atomic states, and focus on the concept of a spectral function A(k,ω)_LL, which describes the quantum mechanical probability of removing or adding an electron with angular momentum and atomic character L = (l, m, a), momentum k and energy ω. It is measured directly in angle resolved photoemission and inverse photoemission experiments.

To evaluate the spectral function we use Dynamical Mean Field Theory (DMFT) (6) in combination with the Local Density Approximation (LDA+DMFT) (7), which can treat the realistic band structure, the atomic multiplet splitting and the Kondo screening on the same footing. The spectral function is computed from the corresponding one-electron Green's function, A(k,ω) = (G†(k,ω) − G(k,ω))/(2πi), where the latter takes the standard LDA+DMFT lattice form (sketched below); the results were further crosschecked against a continuous time quantum Monte Carlo method (9, 10). The Slater integrals F2, F4, F6 were computed by the atomic physics program of Ref. 11, and F0 was estimated by constrained LDA to be 5 eV (12). The localized Ce-4f orbital was constructed from the non-orthogonal Linear Muffin-Tin Orbitals in a particular way to maximize its f character, as explained elsewhere (13).
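A minimal sketch of how A(k,ω) follows from the lattice Green's function. The explicit expression used here, G(k,ω) = [(ω + μ)I − H_LDA(k) − Σ(ω)]⁻¹ with the double-counting correction absorbed into Σ, is the generic LDA+DMFT form and is an assumption about details that are not legible in this copy; the two-orbital Hamiltonian and self-energy are toy stand-ins, not the CeIrIn5 ones.

```python
import numpy as np

# Sketch of A(k,w) = (G†(k,w) - G(k,w)) / (2*pi*i).
# Assumed generic LDA+DMFT form: G(k,w) = [(w + mu) I - H_LDA(k) - Sigma(w)]^-1,
# with the double counting absorbed into Sigma. H_k and Sigma_w below are toy
# two-orbital stand-ins (one broad spd-like level, one correlated f-like level).

def spectral_function(w, mu, H_k, Sigma_w):
    G = np.linalg.inv((w + mu) * np.eye(H_k.shape[0]) - H_k - Sigma_w)
    return (G.conj().T - G) / (2j * np.pi)      # Hermitian spectral matrix

H_k = np.array([[0.5, 0.1],
                [0.1, 0.0]])                    # toy hybridized two-level H(k), in eV
Sigma_w = np.diag([0.0, -0.05j])                # only the f-like orbital is damped
A = spectral_function(w=0.0, mu=0.0, H_k=H_k, Sigma_w=Sigma_w)
print(np.trace(A).real)                         # total spectral weight at this (k, w)
```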
The spectral function of f electron materials has been known to exhibit remarkable many body effects. To set the stage for their theoretical description, Fig. 1B displays the Ce-4f local spectral function, i.e., A(ω) = Σ_k A(k,ω), which is measured in angle integrated photoemission experiments. At room temperature, there is very little spectral weight at the Fermi level, as the f electrons are tightly bound and localized on the Ce atom, giving rise to a broad spectrum concentrated mainly in the lower and upper Hubbard bands at −2.5 eV and +3 eV, respectively.

As the temperature is decreased, a narrow peak appears near the Fermi level (see Fig. 1B). The states forming this peak have a small but finite dispersion, and therefore the area of the peak can be interpreted as the degree of f electron delocalization. This quantity, as well as the scattering rate of the Ce-4f states (Im Σ(ω = 0)), exhibits a clear crossover at a temperature scale T* of the order of 50 K (Fig. 1C). Our results are consistent with the angle integrated photoemission measurements (14), in which the onset of states with f character at the Fermi level was observed. But the experimental resolution has to be improved by one order of magnitude to resolve the narrow peak predicted by the theory.
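As described in the caption of Fig. 1C, the coherence scale is read off from the temperature derivative of the zero-frequency scattering rate, which peaks at T*. The sketch below only illustrates that extraction on a synthetic, smooth crossover placed near 50 K; the numbers are placeholders, not the computed CeIrIn5 data.

```python
import numpy as np

# Illustration of how T* is identified in Fig. 1C: the derivative of the
# zero-frequency scattering rate Im Sigma(0) with respect to temperature is
# peaked at the coherence scale. The curve below is a synthetic placeholder
# (a smooth crossover near 50 K), not the LDA+DMFT result.

T = np.linspace(5.0, 300.0, 60)                       # temperature grid (K)
im_sigma0 = 0.1 / (1.0 + np.exp(-(T - 50.0) / 15.0))  # toy scattering rate (eV)
T_star = T[np.argmax(np.gradient(im_sigma0, T))]
print(f"estimated coherence scale T* ~ {T_star:.0f} K")
```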
We now turn to the total (traced over all orbitals) momentum resolved spectral function Tr[A(k,ω)], plotted along symmetry directions in the Brillouin zone. In a band theory description, it would be sharply peaked on a series of bands ε_n(k), and the weight of those peaks would be unity. It is worthwhile comparing the high intensity features of the LDA+DMFT spectra (color coded) with the LDA bands (ε_n(k), drawn in blue) (Fig. 2A). In the region below −1 eV, there is a good correspondence between them. Notice, however, the systematic downward shift (indicated by a green arrow) of the LDA+DMFT features relative to the LDA bands (which have mainly In-5p and Ir-5d character). Surprisingly, a similar trend is seen in the angle resolved photoemission experiments (ARPES) of Ref. 15, which we redraw in Fig. 2B. The position of these bands is weakly temperature dependent: if warmed to room temperature, an almost rigid upward shift of 5 meV was identified in our theoretical treatment. Experimentally, it was not possible to resolve the momentum in the z direction, therefore the same experimental data (which can be thought of as the average of the two paths Γ-X and Z-R) is repeated in the two directions.

Near the Fermi level (between −0.5 eV and +1 eV), there are significant discrepancies between the LDA bands (which in this region have significant f character) and the LDA+DMFT features. The correlations treated by LDA+DMFT substantially modify the spectral function features with f content, transferring spectral weight into the upper Hubbard band located around +3 eV (white region in Fig. 2A). Hubbard bands are excitations localized in real space, without a well defined momentum, and therefore they show up as a blurred region of spectral weight in the momentum plot of Fig. 2A. There is also a lower Hubbard band around −2.5 eV, which is hardly detectable in this figure. The reason is that it carries a very small spectral weight, which is redistributed over a broad frequency region, as shown in Fig. 1B. It is also useful to compare the LDA+DMFT Hubbard bands with those obtained with the more familiar LDA+U method. The latter method inserts a sharp non-dispersive band around −2.5 eV and significantly twists the rest of the conduction bands. For the purpose of describing the set of bands below −1 eV, the LDA+DMFT method is therefore closer to an LDA type of calculation with the f bands removed from the valence band.

To obtain further insight into the nature of the low energy spectra, we show in Figs. 2C and D the momentum resolved f electron spectral function of Fig. 1B. The two plots correspond to the low (10 K) and high (300 K) temperature spectra, respectively. At room temperature a set of broad and dispersive bands is seen, which should be interpreted as the spd bands leaving an imprint in the f electron spectral function due to hybridization. At low temperature, a narrow stripe of spectral weight appears at zero frequency, which cuts the conduction bands and splits them into two separate pieces, divided by a new hybridization gap. The two straight non-dispersive bands at −0.3 eV and +0.3 eV can also be identified in Fig. 2C and are due to the spin-orbit coupling (16). The same splitting of the coherence peak can be identified in the local spectra plotted in Fig. 1B and was recently observed in an ARPES study (14). A detailed analysis of the zero energy stripe of spectra in Fig. 2C reveals that the low energy features correspond to three very narrow bands (the dispersion is of the order of 3 meV) crossing the Fermi level. This is the origin of the large effective mass and large specific heat of the material at low temperatures. The low energy band structure and its temperature dependence are theoretical predictions which can be verified experimentally in future ARPES studies.

Optical conductivity is a very sensitive probe of the electronic structure and has been applied to numerous heavy fermion materials (17). It is a technique which is largely complementary to photoemission, on two counts: it probes the bulk and not the surface, and it is most sensitive to the itinerant spd electrons rather than to the f electrons. A prototypical heavy fermion at high temperatures has an optical conductivity characterized by a very broad Drude peak. At low temperatures, optical data are usually modeled in terms of transitions between two renormalized bands, separated by a hybridization gap. These two bands give rise to a very narrow Drude peak of small weight and an optical absorption feature above the hybridization gap, termed the mid-infrared peak. This picture qualitatively describes the experimental data of CeIrIn5 (18, 19), which we reproduce in Fig. 3B. However, this simplified two band model fails to account for some aspects of the data. For example, at 10 K there is a clear structure in the mid-infrared peak. In addition to the broad shoulder around 0.07 eV, a second peak around 0.03 eV can be identified, which was previously interpreted as absorption on the bosonic mode that might bind electrons in the unconventional superconducting state (20, 18). Also, the hybridization gap in simplified theories gives rise to a sharp drop of the conductivity below the energy of the gap, while broader features are seen experimentally.
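The "two renormalized bands separated by a hybridization gap" picture referred to above is the standard hybridized-band (periodic Anderson, mean-field) dispersion E±(k) = ½[ε_k + ε_f ± √((ε_k − ε_f)² + 4V²)]. The sketch below evaluates that textbook model with illustrative parameters to show how the direct gap 2V and the much smaller indirect gap arise; it is not the LDA+DMFT band structure of Fig. 2.

```python
import numpy as np

# Textbook two-band hybridization model often used to fit heavy fermion optics:
# a broad conduction band eps_k hybridizes with a flat renormalized f level eps_f
# through an effective matrix element V. All parameters are illustrative.

k = np.linspace(-np.pi, np.pi, 401)
eps_k = -np.cos(k)                         # conduction band, half-width 1 eV
eps_f = 0.0                                # renormalized f level at the Fermi energy
V = 0.05                                   # effective hybridization (eV)

root = np.sqrt((eps_k - eps_f) ** 2 + 4.0 * V ** 2)
E_plus = 0.5 * (eps_k + eps_f + root)      # upper hybridized band
E_minus = 0.5 * (eps_k + eps_f - root)     # lower hybridized band

direct_gap = (E_plus - E_minus).min()      # = 2V, sets the mid-infrared absorption
indirect_gap = E_plus.min() - E_minus.max()
print(f"direct gap = {direct_gap:.3f} eV, indirect gap = {indirect_gap:.4f} eV")
```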
Optical conductivity within LDA+DMFT was recently implemented (21) and we show the results in Fig. 3A. They bear a strong resemblance to the experimental data of Fig. 3B: for example, a broad Drude peak at high temperature and a very clear splitting of the mid-infrared peak at low temperature.

To understand the physical origin of these multiple peaks, we plot in Figs. 3C and 3D the momentum resolved conduction electron (non-Ce-4f) spectral function along a representative high-symmetry line at 10 K and 300 K, respectively. Notice the dramatic difference between the two temperatures. At high temperature, we see two bands in this momentum direction, one very broad and one narrower. The dispersion of the left band in Fig. 3D is due to electron-electron scattering, which broadens the band by approximately 100 meV. The character of both bands is primarily In-5p, with an important difference: the left band comes mostly from the In atoms in the IrIn2 layer, while the right band is mostly due to In in the CeIn3 layer. The latter In atoms will be called in-plane (each Ce has 4 neighbors of this type) and the former out-of-plane (there are 8 such nearest neighbors to a Ce atom).

As the temperature is lowered, the two In bands hybridize in very different ways with the Ce 4f moment. It is very surprising that the in-plane In atoms hybridize less with the Ce moment, leading to a small hybridization gap of magnitude 30 meV (blue arrow in Fig. 3A). The out-of-plane In atoms are more strongly coupled to the Ce moment, which leads to a larger hybridization gap of the order of 70 meV (green arrow in Fig. 3A). The existence of multiple hybridization gaps results in the splitting of the mid-infrared peak in the optical conductivity shown in Fig. 3A. The remarkable fact that the Ce moment is more strongly coupled to the out-of-plane In than to the in-plane In provides a natural explanation for why these materials are sensitive to the substitution of the transition metal ion Ir with Co or Rh. Namely, the out-of-plane In are strongly coupled not only to Ce but also to the transition metal ion in their immediate neighborhood, while the in-plane In are insensitive to the substitution.
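With two hybridization gaps, the low-temperature σ(ω) of Fig. 3 is naturally parametrized as a narrow, low-weight Drude term plus two mid-infrared oscillators near the 0.03 eV and 0.07 eV features quoted above. The decomposition below is a generic Drude-Lorentz sketch with illustrative weights and widths; it is not a fit to the data of Refs. 18, 19 or to the LDA+DMFT curves.

```python
import numpy as np

# Generic Drude-Lorentz parametrization of a low-temperature heavy fermion
# optical conductivity: a narrow Drude peak of small weight plus two
# mid-infrared oscillators placed at the two hybridization-gap scales discussed
# in the text (~0.03 eV and ~0.07 eV). Weights and widths are illustrative.

def drude(w, wp2, gamma):
    return (wp2 / (4.0 * np.pi)) * gamma / (w ** 2 + gamma ** 2)

def lorentz(w, s, w0, gamma):
    return (s / (4.0 * np.pi)) * gamma * w ** 2 / ((w ** 2 - w0 ** 2) ** 2 + (gamma * w) ** 2)

w = np.linspace(1e-3, 0.2, 500)                        # photon energy (eV)
sigma1 = (drude(w, wp2=0.002, gamma=0.002)             # narrow, low-weight Drude
          + lorentz(w, s=0.02, w0=0.03, gamma=0.02)    # lower (in-plane In) gap
          + lorentz(w, s=0.05, w0=0.07, gamma=0.04))   # upper (out-of-plane In) gap
mid_ir = w > 0.02
print(f"mid-infrared maximum near {w[mid_ir][np.argmax(sigma1[mid_ir])]:.3f} eV")
```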
Some of the results of the microscopic theory, such as the momentum dependent hybridization (19) and the slow buildup of coherence (22), were foreshadowed by earlier phenomenological approaches. The first principles DMFT treatment places these ideas within a microscopic framework.

In investigating the formation of the heavy fermion state with temperature in CeIrIn5, we have shown that incorporating local correlations on the f site only allows for a coherent description of the evolution of the one electron spectra and the optical conductivity with temperature. The approach provides a natural explanation for many surprising features observed in this material, and makes a number of quantitative predictions for the evolution of the spectra as a function of temperature which can be tested by ARPES measurements currently under way. While the single site DMFT description is sufficient in a broad region of temperatures and parameters, cluster extensions of DMFT will be necessary to address the quantum criticality that takes place as Ir is replaced by Rh and Co, and the possible instabilities towards unconventional superconductivity. While model cluster DMFT studies seem very promising, the implementation of these methods in conjunction with realistic electronic structure remains a challenge for the future. Furthermore, to treat other compounds of the same class (Ir substituted by Co or Rh), the correlations on the 3d or 4d transition metal will require GW to treat the electronic structure.

References and Notes
1. G. R. Stewart, Rev. Mod. Phys. 56, 755 (1984).
2. J. W. Allen, J. Phys. Soc. Japan 74, 34 (2005).
3. C. Petrovic et al., Europhys. Lett. 53 (2001).
4. Y. N. Grin, Y. P. Yarmolyuk and E. I. Gladyshevskii, Sov. Phys. Crystallogr. 24, 137 (1979).
5. E. G. Moshopoulou, Z. Fisk, J. L. Sarrao, J. D. Thompson, J. Solid State Chem. 158, 25 (2001).
6. G. Kotliar and D. Vollhardt, Physics Today 57, 53 (2004).
7. G. Kotliar et al., Rev. Mod. Phys. 78, 865 (2006).
8. S. Y. Savrasov, Phys. Rev. B 54, 16470 (1996).
9. P. Werner, A. Comanac, L. de' Medici, M. Troyer, A. J. Millis, Phys. Rev. Lett. 97, 076405 (2006).
10. K. Haule, Phys. Rev. B 75, 155113 (2007).
11. R. D. Cowan, The Theory of Atomic Structure and Spectra (Univ. California Press, Berkeley, 1981).
12. A. K. McMahan, C. Huscroft, R. T. Scalettar, E. L. Pollock, J. Comput.-Aided Mater. Des. 5, 131 (1998).
13. A. Toropova, C. A. Marianetti, K. Haule, G. Kotliar, cond-mat/0708.1181.
14. S. I. Fujimori et al., Phys. Rev. B 73, 224517 (2006).
15. S. I. Fujimori et al., Phys. Rev. B 67, 144507 (2003).
16. A. Sekiyama et al., Nature 403, 396 (2000).
17. L. Degiorgi, Rev. Mod. Phys. 71, 687 (1999).
18. F. P. Mena, D. van der Marel, J. L. Sarrao, Phys. Rev. B 72, 045119 (2005).
19. K. S. Burch et al., Phys. Rev. B 75, 054523 (2007).
20. E. J. Singley, D. N. Basov, E. D. Bauer, M. B. Maple, Phys. Rev. B 65, 161101(R) (2002).
21. K. Haule, V. Oudovenko, S. Y. Savrasov, G. Kotliar, Phys. Rev. Lett. 94, 036401 (2005).
22. S. Nakatsuji, D. Pines, Z. Fisk, Two fluid description of the Kondo lattice, Phys. Rev. Lett. 92, 016401 (2004).
23. We are grateful to Shin-ichi Fujimori and Atsushi Fujimori for their published ARPES data on CeIrIn5, to F. P. Mena, D. van der Marel and J. L. Sarrao for their published optics data on CeIrIn5, and to Jim Allen for unpublished data on other heavy fermion compounds. Work supported by the NSF Division of Material Research (grant 0528969).

Figure 1: (A) Crystal structure of CeIrIn5. Red, gold, and gray spheres correspond to Ce, Ir, and In atoms, respectively. (B) Ce 4f local density of states calculated by LDA+DMFT at 10 K and 300 K. (C) The quasiparticle peak height versus temperature (blue), the imaginary part of the Ce 4f5/2 self-energy Σf(ω = 0) (red) and its temperature derivative (green). The buildup of coherence is very slow and gradual. Around T* ~ 50 K the coherence first sets in, manifesting itself in a fast increase of the quasiparticle peak (blue line) and a crossover in the scattering rate (the derivative of the scattering rate, green line, is peaked at T*). The quasiparticle peak weight saturates at a much lower temperature, below 5 K, and drops to zero at very high temperature, displaying a very long logarithmic tail.
Figure 2: (A) Momentum resolved total spectral function calculated by the LDA+DMFT method at 10 K, shown by the color scheme. The LDA bands are drawn as blue lines. (B) The color plot shows experimental ARPES data reproduced from Ref. 15. Blue lines follow the LDA bands. (C) Momentum resolved Ce-4f spectral function at 10 K. (D) Same as (C) but at 300 K.

Figure 3: The optical conductivity at several temperatures (A) obtained by LDA+DMFT and (B) measured experimentally and reproduced from Ref. 18. The momentum resolved non-f spectral function (A_total − A_Ce-4f) at (C) 10 K and (D) 300 K.