Quasicrystals: An Explanation of the Term

A quasicrystal is a solid material whose structure exhibits long-range order without periodicity. Its discovery overturned the traditional crystallographic doctrine that long-range order requires a periodic lattice, opening new research directions and posing new challenges for materials science. The distinctive structure and properties of quasicrystals have made them a focus of scientific attention and point to potential applications in several fields.
1. The Discovery of Quasicrystals

Quasicrystals were first observed in 1982 by the Israeli materials scientist Dan Shechtman. While studying rapidly cooled aluminium-manganese alloys, he unexpectedly recorded an electron diffraction pattern with five-fold rotational symmetry. According to traditional crystallography, such non-periodic order should be impossible, so the result was met with skepticism and outright opposition from the scientific community. Further experiments and theoretical analysis, however, confirmed the existence of this new type of structure, and Shechtman was awarded the 2011 Nobel Prize in Chemistry. The discovery generated widespread excitement, triggered a wave of research activity, and laid the foundation for the study of quasicrystals.
2. Structural Features of Quasicrystals

The structure of a quasicrystal exhibits rotational symmetries forbidden to periodic crystals, typically five-fold, ten-fold, or icosahedral, rather than the obvious translational periodicity of an ordinary crystal. The atoms or molecules in a quasicrystal are arranged according to definite rules, but these rules never repeat exactly the way a periodic lattice does. The structure contains repeating motifs on several different length scales, which gives quasicrystals a diverse range of physical and chemical properties.
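A standard one-dimensional toy model of this "ordered but never exactly repeating" arrangement (not mentioned in the article itself; added here purely for illustration) is the Fibonacci chain, generated by the substitution A → AB, B → A. The sequence is deterministic and self-similar, yet no finite block repeats periodically:

```python
def fibonacci_word(iterations):
    """Generate the Fibonacci chain by the substitution A -> AB, B -> A,
    a standard 1D model of quasiperiodic (ordered, non-periodic) structure."""
    word = "A"
    for _ in range(iterations):
        word = "".join("AB" if ch == "A" else "A" for ch in word)
    return word

w = fibonacci_word(10)
print(len(w), w.count("A"), w.count("B"))  # 144 89 55
```

The word lengths follow the Fibonacci numbers, and the ratio of A's to B's tends to the golden ratio, the same irrational number that governs the spacings in real quasicrystals.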
3. Physical Properties of Quasicrystals

The non-periodic structure of quasicrystals gives them unusual physical properties. Their electrical transport behavior is highly atypical for metallic alloys, which makes them of interest for electronic devices. In optics, quasiperiodic structures scatter light over a wide range of wavelengths and can be used to build photonic devices operating at different wavelengths. Quasicrystals also show distinctive thermal-transport, mechanical, and magnetic behavior, opening new research directions for materials science and engineering.
4. Application Prospects of Quasicrystals

Because of their special structure and properties, quasicrystals show potential applications in several fields. In materials science they can be used in high-performance metal alloys and ceramic materials with excellent mechanical performance and heat resistance. In the energy field, their unusual thermal-transport behavior is being explored for heat conversion and thermal management. Quasicrystals have also shown distinctive potential in catalysis, drug delivery, and biomaterials.
Sharing the MODULARITY Solution with Customers
Author: Anonymous
Journal: Modern Plastics (现代塑料)
Year (volume), issue: 2005(000)006
Abstract: China is one of Arburg's most important markets. This was our first time exhibiting at the Chinaplas show in the southern Chinese city of Guangzhou. Over the past few years, our Chinaplas exhibits in Shanghai produced very good results; by coming to Guangzhou we hope to reach more customers in southern China and put our products and technology to work solving their practical production problems.
Pages: 1 (p. 68)
Language: Chinese
CLC number: F767.6
Ubiquitinomics: four sample English passages, provided for the reader's reference.

Sample 1: The development of high-throughput mass spectrometry techniques has greatly facilitated ubiquitinomics research. Mass spectrometry allows for the rapid identification and quantification of thousands of proteins in a single experiment. By combining mass spectrometry with specific ubiquitin-affinity purification methods, researchers can isolate ubiquitinated proteins from a cell lysate and analyze their ubiquitin modification sites.

Sample 2: One of the key techniques used in ubiquitinomics is mass spectrometry, a powerful analytical tool that allows for the identification and quantification of proteins in complex samples. By coupling mass spectrometry with advanced proteomics methodologies, researchers can uncover the intricacies of ubiquitin signaling pathways and elucidate the functional consequences of protein ubiquitination.

Sample 3: In cancer research, ubiquitin proteomics has been used to identify novel biomarkers for early detection and prognosis of cancer. By profiling the ubiquitinome of cancer cells, researchers have been able to identify specific ubiquitinated proteins that are dysregulated in cancer and may serve as potential targets for therapy. In neurodegenerative diseases, ubiquitin proteomics has shed light on the role of aberrant protein aggregation and clearance mechanisms in disease progression, offering new insights into potential therapeutic strategies.

Sample 4: The study of ubiquitinomics is challenging due to the dynamic and reversible nature of ubiquitination. Ubiquitin is rapidly added and removed from target proteins in response to various stimuli, making it difficult to capture the full landscape of ubiquitinated proteins in a cell. Additionally, ubiquitination can occur on multiple lysine residues within a protein, leading to a complex pattern of ubiquitin modifications that can be difficult to analyze.
(0,2) 插值||(0,2) interpolation
0#||zero-sharp; pronounced "零井" or "零开" in Chinese.
0+||zero-dagger; pronounced "零正" in Chinese.
1-因子||1-factor
3-流形||3-manifold; also called 三维流形 ("three-dimensional manifold").
AIC准则||AIC criterion, Akaike information criterion
Ap权||Ap-weight
A稳定性||A-stability, absolute stability
A最优设计||A-optimal design
BCH码||BCH code, Bose-Chaudhuri-Hocquenghem code
BIC准则||BIC criterion, Bayesian modification of the AIC
BMOA函数||analytic function of bounded mean oscillation; full Chinese name 有界平均振动解析函数.
BMO鞅||BMO martingale
BSD猜想||Birch and Swinnerton-Dyer conjecture; full Chinese name 伯奇与斯温纳顿-戴尔猜想.
B样条||B-spline
C*代数||C*-algebra; read as "C-star algebra".
C0类函数||function of class C0; also called "class of continuous functions".
CAT准则||CAT criterion, criterion for autoregressive
CM域||CM field
CN群||CN-group
CW复形的同调||homology of CW complex
CW复形||CW complex
CW复形的同伦群||homotopy group of CW complexes
CW剖分||CW decomposition
Cn类函数||function of class Cn; also called "class of n-times continuously differentiable functions".
Cp统计量||Cp-statistic
Several Examples of Finitely Generated Groups in the Concept Teaching of Groups

HUO Lijun (School of Science, Chongqing University of Technology, Chongqing 400054, China)

Abstract: The concept of a group is one of the most basic concepts in abstract algebra. Integrating some interesting examples of groups into the teaching of an abstract algebra course, and using these more concrete examples to explain the abstract theory, plays a positive role in stimulating students' interest in learning and in training their mathematical thinking. This paper introduces an example of a finitely generated free group constructed from the English alphabet under certain rules, namely the homophonic quotient of a free group, briefly called the English homophonic group. In addition, combining this with matrix theory from linear algebra, we give the example of the finitely generated group SL2(Z) and of several finitely generated special projective linear groups.

Key words: finitely generated group; English homophonic group; general linear group; special projective linear group

CLC number: O151.2    Document code: A    Article ID: 1672-3791(2022)03(b)-0165-04

1 Introduction and Preliminaries

Groups are among the most fundamental algebraic structures. The concept of a group has a long history, originating in the early 19th century in the study of algebraic equations: it grew out of the work of celebrated mathematicians such as Abel and Galois on whether algebraic equations of higher degree admit solution formulas, and group theory is recognized as one of the most outstanding mathematical achievements of the 19th century [1-2].
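The SL2(Z) example can be made concrete in code. The sketch below is an illustration added here, not code from the paper; it checks the standard fact that SL2(Z) is generated by the matrices S and T below, verifying the relations S^4 = I and (ST)^6 = I with integer arithmetic.

```python
import numpy as np

# Standard generators of SL2(Z): S (order 4) and T (infinite order).
S = np.array([[0, -1],
              [1,  0]])
T = np.array([[1, 1],
              [0, 1]])

I = np.eye(2, dtype=int)

# S^2 = -I, so S has order 4 in SL2(Z).
assert np.array_equal(np.linalg.matrix_power(S, 2), -I)
assert np.array_equal(np.linalg.matrix_power(S, 4), I)

# (S T)^3 = -I, so S T has order 6.
ST = S @ T
assert np.array_equal(np.linalg.matrix_power(ST, 3), -I)
assert np.array_equal(np.linalg.matrix_power(ST, 6), I)

# Both generators have determinant 1, as elements of SL2(Z) must.
assert round(np.linalg.det(S)) == 1 and round(np.linalg.det(T)) == 1
print("relations verified")
```

In the projective quotient PSL2(Z), where -I is identified with I, these relations say that the images of S and ST have orders 2 and 3, which is the starting point for the finitely generated projective examples discussed in the paper.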
Data mining and knowledge discovery in databases have been attracting a significant amount of research, industry, and media attention of late. What is all the excitement about? This article provides an overview of this emerging field, clarifying how data mining and knowledge discovery in databases are related both to each other and to related fields, such as machine learning, statistics, and databases. The article mentions particular real-world applications, specific data-mining techniques, challenges involved in real-world applications of knowledge discovery, and current and future research directions in the field.

Across a wide variety of fields, data are being collected and accumulated at a dramatic pace. There is an urgent need for a new generation of computational theories and tools to assist humans in extracting useful information (knowledge) from the rapidly growing volumes of digital data. These theories and tools are the subject of the emerging field of knowledge discovery in databases (KDD).

At an abstract level, the KDD field is concerned with the development of methods and techniques for making sense of data. The basic problem addressed by the KDD process is one of mapping low-level data (which are typically too voluminous to understand and digest easily) into other forms that might be more compact (for example, a short report), more abstract (for example, a descriptive approximation or model of the process that generated the data), or more useful (for example, a predictive model for estimating the value of future cases). At the core of the process is the application of specific data-mining methods for pattern discovery and extraction.

This article begins by discussing the historical context of KDD and data mining and their intersection with other related fields. A brief summary of recent KDD real-world applications is provided. Definitions of KDD and data mining are provided, and the general multistep KDD process is outlined.
(From Data Mining to Knowledge Discovery in Databases, by Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth. AI Magazine, Fall 1996. Copyright © 1996, American Association for Artificial Intelligence.)

This multistep process has the application of data-mining algorithms as one particular step in the process. The data-mining step is discussed in more detail in the context of specific data-mining algorithms and their application. Real-world practical application issues are also outlined. Finally, the article enumerates challenges for future research and development and in particular discusses potential opportunities for AI technology in KDD systems.

Why Do We Need KDD?

The traditional method of turning data into knowledge relies on manual analysis and interpretation. For example, in the health-care industry, it is common for specialists to periodically analyze current trends and changes in health-care data, say, on a quarterly basis. The specialists then provide a report detailing the analysis to the sponsoring health-care organization; this report becomes the basis for future decision making and planning for health-care management. In a totally different type of application, planetary geologists sift through remotely sensed images of planets and asteroids, carefully locating and cataloging such geologic objects of interest as impact craters. Be it science, marketing, finance, health care, retail, or any other field, the classical approach to data analysis relies fundamentally on one or more analysts becoming intimately familiar with the data and serving as an interface between the data and the users and products.

For these (and many other) applications, this form of manual probing of a data set is slow, expensive, and highly subjective. In fact, as data volumes grow dramatically, this type of manual data analysis is becoming completely impractical in many domains. Databases are increasing in size in two ways: (1) the number N of records or objects in the database and (2) the number d of fields or attributes to an object. Databases containing on the order of N = 10^9 objects are becoming increasingly common, for example, in the astronomical sciences. Similarly, the number of fields d can easily be on the order of 10^2 or even 10^3, for example, in medical diagnostic applications. Who could be expected to digest millions of records, each having tens or hundreds of fields? We believe that this job is certainly not one for humans; hence, analysis work needs to be automated, at least partially.

The need to scale up human analysis capabilities to handling the large number of bytes that we can collect is both economic and scientific. Businesses use data to gain competitive advantage, increase efficiency, and provide more valuable services to customers. Data we capture about our environment are the basic evidence we use to build theories and models of the universe we live in. Because computers have enabled humans to gather more data than we can digest, it is only natural to turn to computational techniques to help us unearth meaningful patterns and structures from the massive volumes of data. Hence, KDD is an attempt to address a problem that the digital information era made a fact of life for all of us: data overload.

Data Mining and Knowledge Discovery in the Real World

A large degree of the current interest in KDD is the result of the media interest surrounding successful KDD applications, for example, the focus articles within the last two years in Business Week, Newsweek, Byte, PC Week, and other large-circulation periodicals. Unfortunately, it is not always easy to separate fact from media hype. Nonetheless, several well-documented examples of successful systems can rightly be referred to as KDD applications and have been deployed in operational use on large-scale real-world problems in science and in business.

In science, one of the primary application areas is astronomy. Here, a notable success was achieved by SKICAT, a system used by astronomers to perform image analysis, classification, and cataloging of sky objects from sky-survey images (Fayyad, Djorgovski, and Weir 1996). In its first application, the system was used to process the 3 terabytes (10^12 bytes) of image data resulting from the Second Palomar Observatory Sky Survey, where it is estimated that on the order of 10^9 sky objects are detectable. SKICAT can outperform humans and traditional computational techniques in classifying faint sky objects. See Fayyad, Haussler, and Stolorz (1996) for a survey of scientific applications.

In business, main KDD application areas include marketing, finance (especially investment), fraud detection, manufacturing, telecommunications, and Internet agents.

Marketing: In marketing, the primary application is database marketing systems, which analyze customer databases to identify different customer groups and forecast their behavior. Business Week (Berry 1994) estimated that over half of all retailers are using or planning to use database marketing, and those who do use it have good results; for example, American Express reports a 10- to 15-percent increase in credit-card use. Another notable marketing application is market-basket analysis (Agrawal et al. 1996) systems, which find patterns such as, "If customer bought X, he/she is also likely to buy Y and Z." Such patterns are valuable to retailers.

Investment: Numerous companies use data mining for investment, but most do not describe their systems. One exception is LBS Capital Management. Its system uses expert systems, neural nets, and genetic algorithms to manage portfolios totaling $600 million; since its start in 1993, the system has outperformed the broad stock market (Hall, Mani, and Barr 1996).

Fraud detection: HNC Falcon and Nestor PRISM systems are used for monitoring credit-card fraud, watching over millions of accounts. The FAIS system (Senator et al. 1995), from the U.S. Treasury Financial Crimes Enforcement Network, is used to identify financial transactions that might indicate money-laundering activity.

Manufacturing: The CASSIOPEE troubleshooting system, developed as part of a joint venture between General Electric and SNECMA, was applied by three major European airlines to diagnose and predict problems for the Boeing 737. To derive families of faults, clustering methods are used. CASSIOPEE received the European first prize for innovative applications (Manago and Auriol 1996).

Telecommunications: The telecommunications alarm-sequence analyzer (TASA) was built in cooperation with a manufacturer of telecommunications equipment and three telephone networks (Mannila, Toivonen, and Verkamo 1995). The system uses a novel framework for locating frequently occurring alarm episodes from the alarm stream and presenting them as rules. Large sets of discovered rules can be explored with flexible information-retrieval tools supporting interactivity and iteration. In this way, TASA offers pruning, grouping, and ordering tools to refine the results of a basic brute-force search for rules.

Data cleaning: The MERGE-PURGE system was applied to the identification of duplicate welfare claims (Hernandez and Stolfo 1995). It was used successfully on data from the Welfare Department of the State of Washington.

In other areas, a well-publicized system is IBM's ADVANCED SCOUT, a specialized data-mining system that helps National Basketball Association (NBA) coaches organize and interpret data from NBA games (U.S. News 1995). ADVANCED SCOUT was used by several of the NBA teams in 1996, including the Seattle Supersonics, which reached the NBA finals.

Finally, a novel and increasingly important type of discovery is one based on the use of intelligent agents to navigate through an information-rich environment. Although the idea of active triggers has long been analyzed in the database field, really successful applications of this idea appeared only with the advent of the Internet. These systems ask the user to specify a profile of interest and search for related information among a wide variety of public-domain and proprietary sources. For example, FIREFLY is a personal music-recommendation agent: It asks a user his/her opinion of several music pieces and then suggests other music that the user might like. CRAYON allows users to create their own free newspaper (supported by ads); NEWSHOUND from the San Jose Mercury News and FARCAST automatically search information from a wide variety of sources, including newspapers and wire services, and e-mail relevant documents directly to the user.

These are just a few of the numerous such systems that use KDD techniques to automatically produce useful information from large masses of raw data. See Piatetsky-Shapiro et al.
(1996) for an overview of issues in developing industrial KDD applications.

Data Mining and KDD

Historically, the notion of finding useful patterns in data has been given a variety of names, including data mining, knowledge extraction, information discovery, information harvesting, data archaeology, and data pattern processing. The term data mining has mostly been used by statisticians, data analysts, and the management information systems (MIS) communities. It has also gained popularity in the database field. The phrase knowledge discovery in databases was coined at the first KDD workshop in 1989 (Piatetsky-Shapiro 1991) to emphasize that knowledge is the end product of a data-driven discovery. It has been popularized in the AI and machine-learning fields.

In our view, KDD refers to the overall process of discovering useful knowledge from data, and data mining refers to a particular step in this process. Data mining is the application of specific algorithms for extracting patterns from data. The distinction between the KDD process and the data-mining step (within the process) is a central point of this article. The additional steps in the KDD process, such as data preparation, data selection, data cleaning, incorporation of appropriate prior knowledge, and proper interpretation of the results of mining, are essential to ensure that useful knowledge is derived from the data. Blind application of data-mining methods (rightly criticized as data dredging in the statistical literature) can be a dangerous activity, easily leading to the discovery of meaningless and invalid patterns.

The Interdisciplinary Nature of KDD

KDD has evolved, and continues to evolve, from the intersection of research fields such as machine learning, pattern recognition, databases, statistics, AI, knowledge acquisition for expert systems, data visualization, and high-performance computing.
The unifying goal is extracting high-level knowledge from low-level data in the context of large data sets.

The data-mining component of KDD currently relies heavily on known techniques from machine learning, pattern recognition, and statistics to find patterns from data in the data-mining step of the KDD process. A natural question is, How is KDD different from pattern recognition or machine learning (and related fields)? The answer is that these fields provide some of the data-mining methods that are used in the data-mining step of the KDD process. KDD focuses on the overall process of knowledge discovery from data, including how the data are stored and accessed, how algorithms can be scaled to massive data sets and still run efficiently, how results can be interpreted and visualized, and how the overall man-machine interaction can usefully be modeled and supported. The KDD process can be viewed as a multidisciplinary activity that encompasses techniques beyond the scope of any one particular discipline such as machine learning. In this context, there are clear opportunities for other fields of AI (besides machine learning) to contribute to KDD. KDD places a special emphasis on finding understandable patterns that can be interpreted as useful or interesting knowledge. Thus, for example, neural networks, although a powerful modeling tool, are relatively difficult to understand compared to decision trees. KDD also emphasizes scaling and robustness properties of modeling algorithms for large noisy data sets.

Related AI research fields include machine discovery, which targets the discovery of empirical laws from observation and experimentation (Shrager and Langley 1990) (see Kloesgen and Zytkow [1996] for a glossary of terms common to KDD and machine discovery), and causal modeling for the inference of causal models from data (Spirtes, Glymour, and Scheines 1993). Statistics in particular has much in common with KDD (see Elder and Pregibon [1996] and Glymour et al. [1996] for a more detailed discussion of this synergy). Knowledge discovery from data is fundamentally a statistical endeavor. Statistics provides a language and framework for quantifying the uncertainty that results when one tries to infer general patterns from a particular sample of an overall population. As mentioned earlier, the term data mining has had negative connotations in statistics since the 1960s when computer-based data analysis techniques were first introduced. The concern arose because if one searches long enough in any data set (even randomly generated data), one can find patterns that appear to be statistically significant but, in fact, are not. Clearly, this issue is of fundamental importance to KDD. Substantial progress has been made in recent years in understanding such issues in statistics. Much of this work is of direct relevance to KDD. Thus, data mining is a legitimate activity as long as one understands how to do it correctly; data mining carried out poorly (without regard to the statistical aspects of the problem) is to be avoided. KDD can also be viewed as encompassing a broader view of modeling than statistics. KDD aims to provide tools to automate (to the degree possible) the entire process of data analysis and the statistician's "art" of hypothesis selection.

A driving force behind KDD is the database field (the second D in KDD). Indeed, the problem of effective data manipulation when data cannot fit in the main memory is of fundamental importance to KDD. Database techniques for gaining efficient data access, grouping and ordering operations when accessing data, and optimizing queries constitute the basics for scaling algorithms to larger data sets. Most data-mining algorithms from statistics, pattern recognition, and machine learning assume data are in the main memory and pay no attention to how the algorithm breaks down if only limited views of the data are possible.

A related field evolving from databases is data warehousing, which refers to the popular business trend of collecting and cleaning transactional data to make them available for online analysis and decision support. Data warehousing helps set the stage for KDD in two important ways: (1) data cleaning and (2) data access.

Data cleaning: As organizations are forced to think about a unified logical view of the wide variety of data and databases they possess, they have to address the issues of mapping data to a single naming convention, uniformly representing and handling missing data, and handling noise and errors when possible.

Data access: Uniform and well-defined methods must be created for accessing the data and providing access paths to data that were historically difficult to get to (for example, stored offline).

Once organizations and individuals have solved the problem of how to store and access their data, the natural next step is the question, What else do we do with all the data? This is where opportunities for KDD naturally arise. A popular approach for analysis of data warehouses is called online analytical processing (OLAP), named for a set of principles proposed by Codd (1993). OLAP tools focus on providing multidimensional data analysis, which is superior to SQL in computing summaries and breakdowns along many dimensions. OLAP tools are targeted toward simplifying and supporting interactive data analysis, but the goal of KDD tools is to automate as much of the process as possible. Thus, KDD is a step beyond what is currently supported by most standard database systems.

Basic Definitions

KDD is the nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data (Fayyad, Piatetsky-Shapiro, and Smyth 1996). Here, data are a set of facts (for example, cases in a database), and pattern is an expression in some language describing a subset of the data or a model applicable to the subset. Hence, in our usage here, extracting a pattern also designates fitting a model to data; finding structure from data; or, in general, making any high-level description of a set of data. The term process implies that KDD comprises many steps, which involve data preparation, search for patterns, knowledge evaluation, and refinement, all repeated in multiple iterations. By nontrivial, we mean that some search or inference is involved; that is, it is not a straightforward computation of predefined quantities like computing the average value of a set of numbers.

The discovered patterns should be valid on new data with some degree of certainty. We also want patterns to be novel (at least to the system and preferably to the user) and potentially useful, that is, lead to some benefit to the user or task. Finally, the patterns should be understandable, if not immediately then after some postprocessing.

The previous discussion implies that we can define quantitative measures for evaluating extracted patterns. In many cases, it is possible to define measures of certainty (for example, estimated prediction accuracy on new data) or utility (for example, gain, perhaps in dollars saved because of better predictions or speedup in response time of a system). Notions such as novelty and understandability are much more subjective.
In certain contexts, understandability can be estimated by simplicity (for example, the number of bits to describe a pattern). An important notion, called interestingness (for example, see Silberschatz and Tuzhilin [1995] and Piatetsky-Shapiro and Matheus [1994]), is usually taken as an overall measure of pattern value, combining validity, novelty, usefulness, and simplicity. Interestingness functions can be defined explicitly or can be manifested implicitly through an ordering placed by the KDD system on the discovered patterns or models.

Given these notions, we can consider a pattern to be knowledge if it exceeds some interestingness threshold, which is by no means an attempt to define knowledge in the philosophical or even the popular view. As a matter of fact, knowledge in this definition is purely user oriented and domain specific and is determined by whatever functions and thresholds the user chooses.

Data mining is a step in the KDD process that consists of applying data analysis and discovery algorithms that, under acceptable computational efficiency limitations, produce a particular enumeration of patterns (or models) over the data. Note that the space of patterns is often infinite, and the enumeration of patterns involves some form of search in this space. Practical computational constraints place severe limits on the subspace that can be explored by a data-mining algorithm.

The KDD process involves using the database along with any required selection, preprocessing, subsampling, and transformations of it; applying data-mining methods (algorithms) to enumerate patterns from it; and evaluating the products of data mining to identify the subset of the enumerated patterns deemed knowledge. The data-mining component of the KDD process is concerned with the algorithmic means by which patterns are extracted and enumerated from data. The overall KDD process (figure 1) includes the evaluation and possible interpretation of the mined patterns to determine which patterns can be considered new knowledge. The KDD process also includes all the additional steps described in the next section.

The notion of an overall user-driven process is not unique to KDD: analogous proposals have been put forward both in statistics (Hand 1994) and in machine learning (Brodley and Smyth 1996).

The KDD Process

The KDD process is interactive and iterative, involving numerous steps with many decisions made by the user. Brachman and Anand (1996) give a practical view of the KDD process, emphasizing the interactive nature of the process. Here, we broadly outline some of its basic steps:

First is developing an understanding of the application domain and the relevant prior knowledge and identifying the goal of the KDD process from the customer's viewpoint.

Second is creating a target data set: selecting a data set, or focusing on a subset of variables or data samples, on which discovery is to be performed.

Third is data cleaning and preprocessing. Basic operations include removing noise if appropriate, collecting the necessary information to model or account for noise, deciding on strategies for handling missing data fields, and accounting for time-sequence information and known changes.

Fourth is data reduction and projection: finding useful features to represent the data depending on the goal of the task. With dimensionality reduction or transformation methods, the effective number of variables under consideration can be reduced, or invariant representations for the data can be found.

Fifth is matching the goals of the KDD process (step 1) to a particular data-mining method. For example, summarization, classification, regression, clustering, and so on, are described later as well as in Fayyad, Piatetsky-Shapiro, and Smyth (1996).

Sixth is exploratory analysis and model and hypothesis selection: choosing the data-mining algorithm(s) and selecting method(s) to be used for searching for data patterns. This process includes deciding which models and parameters might be appropriate (for example, models of categorical data are different than models of vectors over the reals) and matching a particular data-mining method with the overall criteria of the KDD process (for example, the end user might be more interested in understanding the model than its predictive capabilities).

Seventh is data mining: searching for patterns of interest in a particular representational form or a set of such representations, including classification rules or trees, regression, and clustering. The user can significantly aid the data-mining method by correctly performing the preceding steps.

Eighth is interpreting mined patterns, possibly returning to any of steps 1 through 7 for further iteration. This step can also involve visualization of the extracted patterns and models or visualization of the data given the extracted models.

Ninth is acting on the discovered knowledge: using the knowledge directly, incorporating the knowledge into another system for further action, or simply documenting it and reporting it to interested parties. This process also includes checking for and resolving potential conflicts with previously believed (or extracted) knowledge.

The KDD process can involve significant iteration and can contain loops between any two steps. The basic flow of steps (although not the potential multitude of iterations and loops) is illustrated in figure 1.

Figure 1. An Overview of the Steps That Compose the KDD Process.

Most previous work on KDD has focused on step 7, the data mining. However, the other steps are as important (and probably more so) for the successful application of KDD in practice. Having defined the basic notions and introduced the KDD process, we now focus on the data-mining component, which has, by far, received the most attention in the literature.

The Data-Mining Step of the KDD Process

The data-mining component of the KDD process often involves repeated iterative application of particular data-mining methods. This section presents an overview of the primary goals of data mining, a description of the methods used to address these goals, and a brief description of the data-mining algorithms that incorporate these methods.

The knowledge discovery goals are defined by the intended use of the system. We can distinguish two types of goals: (1) verification and (2) discovery. With verification, the system is limited to verifying the user's hypothesis. With discovery, the system autonomously finds new patterns. We further subdivide the discovery goal into prediction, where the system finds patterns for predicting the future behavior of some entities, and description, where the system finds patterns for presentation to a user in a human-understandable form. In this article, we are primarily concerned with discovery-oriented data mining.

Data mining involves fitting models to, or determining patterns from, observed data.
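The multistep KDD process outlined in this article can be sketched as a minimal, runnable pipeline. Everything below (the toy records, the function names, and the variance-style "pattern" criterion) is illustrative scaffolding added for this edit, not code from the article:

```python
# A minimal sketch of the multistep KDD process: selection, cleaning,
# transformation, a trivial "data mining" step, and evaluation.
# All names and the toy pattern criterion are illustrative, not from the article.

def select(records, fields):
    """Create a target data set: focus on a subset of variables."""
    return [{f: r[f] for f in fields} for r in records]

def clean(records):
    """Preprocessing: drop records with missing fields."""
    return [r for r in records if all(v is not None for v in r.values())]

def transform(records, field):
    """Reduction/projection: project each record onto one numeric feature."""
    return [r[field] for r in records]

def mine(values):
    """Data mining (toy): 'discover' the mean as a candidate summary pattern."""
    return sum(values) / len(values)

def evaluate(pattern, values, tolerance):
    """Interpretation (toy): keep the pattern only if the data cluster near it."""
    return all(abs(v - pattern) <= tolerance for v in values)

raw = [
    {"income": 50, "debt": 10, "age": 30},
    {"income": 60, "debt": None, "age": 40},  # missing field, cleaned away
    {"income": 70, "debt": 20, "age": 35},
]
subset = select(raw, ["income", "debt"])
usable = clean(subset)
feature = transform(usable, "income")
pattern = mine(feature)
print(pattern, evaluate(pattern, feature, tolerance=15))  # 60.0 True
```

The point of the sketch is structural: the mining call is one line in a pipeline whose other stages (selection, cleaning, evaluation) do most of the work, mirroring the article's emphasis that data mining is just one step of the overall process.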
The fitted models play the role of inferred knowledge: whether the models reflect useful or interesting knowledge is part of the overall, interactive KDD process, where subjective human judgment is typically required. Two primary mathematical formalisms are used in model fitting: (1) statistical and (2) logical. The statistical approach allows for nondeterministic effects in the model, whereas a logical model is purely deterministic. We focus primarily on the statistical approach to data mining, which tends to be the most widely used basis for practical data-mining applications given the typical presence of uncertainty in real-world data-generating processes.

Most data-mining methods are based on tried and tested techniques from machine learning, pattern recognition, and statistics: classification, clustering, regression, and so on. The array of different algorithms under each of these headings can often be bewildering to both the novice and the experienced data analyst. It should be emphasized that of the many data-mining methods advertised in the literature, there are really only a few fundamental techniques. The actual underlying model representation being used by a particular method typically comes from a composition of a small number of well-known options: polynomials, splines, kernel and basis functions, threshold-Boolean functions, and so on. Thus, algorithms tend to differ primarily in the goodness-of-fit criterion used to evaluate model fit or in the search method used to find a good fit.

In our brief overview of data-mining methods, we try in particular to convey the notion that most (if not all) methods can be viewed as extensions or hybrids of a few basic techniques and principles. We first discuss the primary methods of data mining and then show that the data-mining methods can be viewed as consisting of three primary algorithmic components: (1) model representation, (2) model evaluation, and (3) search.
In the discussion of KDD and data-mining methods, we use a simple example to make some of the notions more concrete. Figure 2 shows a simple two-dimensional artificial data set consisting of 23 cases. Each point on the graph represents a person who has been given a loan by a particular bank at some time in the past. The horizontal axis represents the income of the person; the vertical axis represents the total personal debt of the person (mortgage, car payments, and so on). The data have been classified into two classes: (1) the x's represent persons who have defaulted on their loans, and (2) the o's represent persons whose loans are in good status with the bank. Thus, this simple artificial data set could represent a historical data set that can contain useful knowledge from the point of view of the bank making the loans. Note that in actual KDD applications, there are typically many more dimensions (as many as several hundred) and many more data points (many thousands or even millions).

Figure 2. A Simple Data Set with Two Classes Used for Illustrative Purposes.
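The three algorithmic components named above can be made concrete on a toy stand-in for the loan data. The sketch below is illustrative only; the numbers are invented, not read off figure 2. The model representation is a single income threshold, model evaluation is the misclassification count, and search is exhaustive over candidate thresholds.

```python
# (income, debt, status): 'x' = defaulted, 'o' = loan in good status
# (hypothetical data invented for this example)
loans = [
    (15, 60, "x"), (20, 75, "x"), (25, 80, "x"), (30, 90, "x"),
    (55, 40, "o"), (60, 30, "o"), (70, 55, "o"), (80, 20, "o"),
]

def errors(threshold):
    """Model evaluation: count misclassifications of the rule
    'predict good status iff income > threshold'."""
    predicted = ["o" if income > threshold else "x" for income, _, _ in loans]
    return sum(p != actual for p, (_, _, actual) in zip(predicted, loans))

# Search: candidate thresholds are midpoints between consecutive incomes.
incomes = sorted(income for income, _, _ in loans)
candidates = [(lo + hi) / 2 for lo, hi in zip(incomes, incomes[1:])]
best = min(candidates, key=errors)
print(f"income > {best} separates the classes with {errors(best)} errors")
```

Swapping in a different representation (say, a linear boundary over income and debt) or a different evaluation criterion changes the method while leaving this three-part skeleton intact.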
arXiv:math/0101142v4 [math.RA] 19 Dec 2001

Wreath Products in the Unit Group of Modular Group Algebras of 2-Groups of Maximal Class

Alexander B. Konovalov

Abstract. We study the unit group of the modular group algebra KG, where G is a 2-group of maximal class. We prove that the unit group of KG possesses a section isomorphic to the wreath product of a group of order two with the commutator subgroup of the group G.

MSC2000: Primary 16S34, 20C05; Secondary 16U60.
Keywords: group algebras, unit groups, nilpotency class, wreath products, 2-groups of maximal class.

1 Introduction

Let p be a prime number, let G be a finite p-group, and let K be a field of characteristic p. Denote by Δ = Δ_K(G) the augmentation ideal of the modular group algebra KG. The group of normalized units U(G) = U(KG) consists of all elements of the form 1 + x, where x ∈ Δ. Our further notation follows [20].

Define the Lie powers KG^[n] and KG^(n) in KG: KG^[n] is the two-sided ideal generated by all (left-normed) Lie products [x_1, x_2, ..., x_n], x_i ∈ KG, and KG^(n) is defined inductively: KG^(1) = KG, and KG^(n+1) is the associative ideal generated by [KG^(n), KG]. Clearly KG^(n) ⊇ KG^[n] for every n, but equality need not hold. For modular group algebras of finite p-groups, KG^(|G'|+1) = 0 [24]. Then in our case the finite lower and upper Lie nilpotency indices are defined:

t_L(G) = min{n : KG^[n] = 0},    t^L(G) = min{n : KG^(n) = 0}.

It is known that t_L(G) = t^L(G) for group algebras over a field of characteristic zero [19], and for the case of characteristic p > 3 their coincidence was proved by A. Bhandari and I. B. S. Passi [3].

Consider the following normal series in U(G):

U(G) = 1 + Δ ⊇ 1 + Δ(G') ⊇ 1 + Δ²(G') ⊇ ··· ⊇ 1 + Δ^{t(G')}(G'),

where t(G') is the nilpotency index of the augmentation ideal of KG'. An obvious question is whether this normal series has a refinement.
There were two conjectures relevant to the question above. The first one, as stated in [20], is attributed to A. A. Bovdi and asserts the equality cl U(G) = t(G'), i.e. that this normal series has no refinement. In particular, C. Baginski [1] proved that cl U(G) = p if |G'| = p (in the case of a cyclic commutator subgroup, t(G') = |G'|). A. Mann and A. Shalev proved that cl U(G) ≤ t(G') for groups of class two [16].

The second conjecture was suggested by S. A. Jennings [13] in a more general context, and in our case it means that cl U(G) = t_L(G) − 1. Here N. Gupta and F. Levin proved the inequality ≤ in [11].

The first conjecture was more attractive and challenging, since methods for the systematic computation of the nilpotency index t(G') of the augmentation ideal were better known than methods for calculating the lower Lie nilpotency index (for key facts see, for example, [12], [15], [18], [21], [24]). Moreover, A. Shalev [20] proved that the two conjectures are incompatible in the general case, although t(G') = t_L(G) − 1 for some particular families of groups, including 2-groups of maximal class. Later, using a computer, Coleman managed to find a counterexample to Bovdi's conjecture (cf. [25]), and the final step in this direction was made by X. Du [10] in his proof of the Jennings conjecture.

The study of the structure of the unit group of a group algebra and of its nilpotency class raised a number of questions of independent interest, in particular about the occurrence of various wreath products in the unit group (as a subgroup or as a section). In [9] D. Coleman and D. Passman proved that for a non-abelian finite p-group G a wreath product of two groups of order p is involved in U(KG). Later this result was generalized by A. Bovdi in [4]. Among other related results it is worth mentioning [16], [17], [23].

It is also an interesting question whether U(KG) possesses a given wreath product as a subgroup or only as a section, i.e. as a factor group of a certain subgroup of U(KG). Baginski in [1] described all p-groups for which U(KG) does not contain a subgroup
isomorphic to the wreath product of two groups of order p for the case of odd p; the case p = 2 was investigated in [7].

The question whether U(G) possesses a section isomorphic to the wreath product of a cyclic group C_p of order p and the commutator subgroup of G was posed by A. Shalev in [20]. Since the nilpotency class of the wreath product C_p ≀ H is equal to t(H), the nilpotency index of the augmentation ideal of KH [8], this question was very useful for the investigation of the first conjecture. In [22] a positive answer was given by A. Shalev for the case of odd p and a cyclic commutator subgroup of G.

The present paper aims to extend the last result to 2-groups of maximal class, proving that if G is such a group then the unit group of KG possesses a section isomorphic to the wreath product of a group of order two with the commutator subgroup of G. We prove the following main result.

Theorem 1. Let K be a field of characteristic two, and let G be a 2-group of maximal class. Then the wreath product C_2 ≀ G' of a cyclic group of order two and the commutator subgroup of G is involved in U(KG).

2 Preliminaries

We consider 2-groups of maximal class, namely the dihedral, semidihedral and generalized quaternion groups, which we denote by D_n, S_n and Q_n respectively. They are given by the following presentations [2]:

D_n = ⟨a, b | a^{2^{n-1}} = 1, b² = 1, b⁻¹ab = a⁻¹⟩,
S_n = ⟨a, b | a^{2^{n-1}} = 1, b² = 1, b⁻¹ab = a^{-1+2^{n-2}}⟩,
Q_n = ⟨a, b | a^{2^{n-1}} = 1, b² = a^{2^{n-2}}, b⁻¹ab = a⁻¹⟩,

where n ≥ 3 (we shall consider D_3 and S_3 as identical groups).

We may assume that K is the field of two elements, since in the case of an arbitrary field of characteristic two we may consider its prime subfield and the corresponding subalgebra in KG, where G is one of the groups D_n, S_n or Q_n.
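As an illustrative aside (not part of the paper), the three presentations above can be machine-checked by realizing each group concretely on pairs (i, j) standing for a^i b^j. The exponent t in the sketch encodes the relation b⁻¹ab = a^t in each family, and all names in the code are invented for this example.

```python
def make_group(n, family):
    """Return (m, t, mul): m = |<a>|, t with b^{-1} a b = a^t, and the product."""
    m = 2 ** (n - 1)
    t = {"D": -1, "S": -1 + 2 ** (n - 2), "Q": -1}[family]
    bsq = 2 ** (n - 2) if family == "Q" else 0    # b^2 = a^{bsq}
    def mul(x, y):
        (i, j), (k, l) = x, y
        i2 = (i + k * t ** j) % m                 # move a^k past b^j
        if j and l:                               # b * b contributes a^{bsq}
            i2 = (i2 + bsq) % m
        return (i2, (j + l) % 2)
    return m, t, mul

def power(mul, x, e):
    r = (0, 0)
    for _ in range(e):
        r = mul(r, x)
    return r

for family in "DSQ":
    n = 4
    m, t, mul = make_group(n, family)
    a, b = (1, 0), (0, 1)
    # a^{2^{n-1}} = 1, and b^2 = 1 (or a^{2^{n-2}} for Q_n)
    assert power(mul, a, m) == (0, 0)
    assert power(mul, b, 2) == ((2 ** (n - 2), 0) if family == "Q" else (0, 0))
    # b^{-1} a b = a^t  (b^{-1} = b^3 in Q_n, b itself in D_n and S_n)
    binv = power(mul, b, 3) if family == "Q" else b
    assert mul(mul(binv, a), b) == (t % m, 0)
    # the group generated by a and b has order 2^n
    elems, frontier = {(0, 0)}, [(0, 0)]
    while frontier:
        x = frontier.pop()
        for g in (a, b):
            y = mul(x, g)
            if y not in elems:
                elems.add(y)
                frontier.append(y)
    assert len(elems) == 2 ** n
print("relations and orders check out for D_4, S_4, Q_4")
```

The same realization is reused below when checking computations in the group algebra over the field of two elements.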
For x = Σ_{g∈G} α_g · g, α_g ∈ K, denote by Supp x the set {g ∈ G | α_g ≠ 0}. Since K is the field of two elements, x ∈ 1 + Δ ⟺ |Supp x| = 2k + 1, k ∈ N.

Next, every element g of G has a unique representation of the form a^i b^j, where 0 ≤ i < 2^{n-1}, 0 ≤ j < 2. Then every x ∈ KG has a unique representation of the form x = x_1 + x_2 b, where x_i = a^{n_1} + ... + a^{n_{k_i}}. We call x_1, x_2 the components of x. Clearly, x_1 + x_2 b = y_1 + y_2 b ⟺ x_i = y_i, i = 1, 2.

The mapping x → x̄ = b⁻¹xb, which we call conjugation, is an automorphism of order 2 of the group algebra KG. An element z such that z = z̄ will be called self-conjugated.

Using these notions, it is easy to obtain the rule for multiplying elements of KG, which is formulated in the next lemma.

Lemma 1. Let f_1 + f_2 b, h_1 + h_2 b ∈ KG. Then

(f_1 + f_2 b)(h_1 + h_2 b) = (f_1 h_1 + f_2 h̄_2 α) + (f_2 h̄_1 + f_1 h_2) b,

where α = 1 for D_n and S_n, and α = b² for Q_n.

We proceed with a pair of technical results.

Lemma 2. An element z ∈ KG commutes with b ∈ G if and only if z is self-conjugated.

Lemma 3. If x and y are self-conjugated, then xy = yx.

In the next lemma we find the inverse of an element of U(KG).

Lemma 4. Let f = f_1 + f_2 b ∈ U(KG). Then f⁻¹ = (f̄_1 + f_2 b) R⁻¹, where R = f_1 f̄_1 + f_2 f̄_2 α, and α = 1 for D_n, S_n, α = b² for Q_n.

Proof. Clearly, R is a self-conjugated element of K⟨a⟩ of augmentation 1; hence R is a central unit in KG. The lemma then follows, since

(f_1 + f_2 b)(f̄_1 + f_2 b) = (f̄_1 + f_2 b)(f_1 + f_2 b) = R.

Now we formulate another technical lemma, which is easy to prove by straightforward calculations using the previous lemma.

Lemma 5. Let f, h ∈ U(G), f = f_1 + f_2 b, h = h_1 + h_2 b, and let h = h̄ be self-conjugated. Let R and α be as in Lemma 4. Then f⁻¹hf = t_1 + t_2 b, where

t_1 = h_1 + h_2 (f_1 f_2 + f̄_1 f̄_2) α R⁻¹,    t_2 = h_2 (f̄_1² + f_2² α) R⁻¹.

Consider the mapping φ(x_1 + x_2 b) = x_1 x̄_1 + x_2 x̄_2 α, where α was defined above. It is easy to verify that this mapping is a homomorphism from U(KG) to U(K⟨a⟩) and that, clearly, for every x its image φ(x) is self-conjugated.
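The component rule of Lemma 1 is easy to machine-check in the dihedral case, where α = 1. The sketch below is an illustrative aside, not part of the paper: it represents an element of the group algebra of D_3 (of order 8, with a⁴ = 1) over the field of two elements by the supports of its two components, and compares the formula of Lemma 1 against the brute-force convolution product.

```python
# GF(2)[D_3]: an algebra element is a frozenset of group elements (i, j),
# meaning a^i b^j; addition over GF(2) is symmetric difference.
import random
from itertools import product

N = 4  # order of <a> in D_3 (2^{n-1} with n = 3)

def gmul(x, y):
    """Multiply group elements (i, j), (k, l) in D_3: b^{-1} a b = a^{-1}."""
    (i, j), (k, l) = x, y
    return ((i + (k if j == 0 else -k)) % N, (j + l) % 2)

def amul(u, v):
    """Brute-force product in the group algebra over GF(2) (convolution)."""
    out = set()
    for g, h in product(u, v):
        out ^= {gmul(g, h)}   # toggling membership = GF(2) addition
    return frozenset(out)

def components(u):
    """Split u = u1 + u2*b into the two exponent sets."""
    u1 = frozenset(i for (i, j) in u if j == 0)
    u2 = frozenset(i for (i, j) in u if j == 1)
    return u1, u2

def conj(p):
    """Conjugation by b on a component: a^i -> a^{-i}."""
    return frozenset((-i) % N for i in p)

def cmul(p, q):
    """Product of exponent sets: (sum a^i)(sum a^k) over GF(2)."""
    out = set()
    for i, k in product(p, q):
        out ^= {(i + k) % N}
    return frozenset(out)

def lemma1(u, v):
    """Lemma 1 with alpha = 1: (f1+f2b)(h1+h2b) = (f1h1+f2 conj(h2)) + (f2 conj(h1)+f1h2)b."""
    f1, f2 = components(u)
    h1, h2 = components(v)
    t1 = cmul(f1, h1) ^ cmul(f2, conj(h2))
    t2 = cmul(f2, conj(h1)) ^ cmul(f1, h2)
    return frozenset((i, 0) for i in t1) | frozenset((i, 1) for i in t2)

random.seed(0)
elems = [(i, j) for i in range(N) for j in range(2)]
for _ in range(500):
    u = frozenset(g for g in elems if random.random() < 0.5)
    v = frozenset(g for g in elems if random.random() < 0.5)
    assert amul(u, v) == lemma1(u, v)
print("Lemma 1 component formula agrees with brute force on 500 random pairs")
```

The same representation also lets one confirm small cases of the later lemmas, e.g. that A = a + (1+a)b has order 2^{n-1} in this group.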
This mapping φ : U(KG) → U(K⟨a⟩) will be called the norm. We also say that the norm of an element x is equal to φ(x).

3 Dihedral and Semidihedral Groups

Now let G be the dihedral or semidihedral group. Note that in Lemma 5 the first component of f⁻¹hf is always self-conjugated. In general, the second one need not have the same property, but it is self-conjugated when f_1 + f_2 = 1, where f = f_1 + f_2 b. It is easy to check that the set

H(KG) = {h_1 + h_2 b ∈ U(KG) | h_1 + h_2 = 1}

is a subgroup of U(KG). Now we define the mapping ψ : H(KG) → U(K⟨a⟩) as the restriction of φ(x) to H(KG). For convenience we also call it the norm.

Lemma 6. ker ψ = C_H(b) = {h ∈ H(KG) | hb = bh}.

The proof follows from Lemma 2 and the equality ψ(h) = 1 + h_1 + h̄_1 for h ∈ H(KG).

Lemma 7. C_H(b) is an elementary abelian group.

Proof. C_H(b) is abelian by Lemma 3. Now, for h = h_1 + h_2 b ∈ C_H(b) we have h² = h_1² + h_2² + (h_1 h_2 + h_1 h_2)b = h_1² + h_2² = h_1² + (1 + h_1)² = 1, and we are done.

Note that for elements f, h ∈ C_H(b) we may obtain a simpler rule for their multiplication: (f_1 + f_2 b)(h_1 + h_2 b) = (1 + f_1 + h_1) + (f_1 + h_1) b.

Now consider the subgroup of U(KG) generated by b ∈ G and A = a + (1+a)b. Note that A ∈ H(KG). First we calculate the norm of A: ψ(A) = 1 + a + ā. Since a^{2^{n-1}} = 1, we have ψ(A)^{2^{n-2}} = 1. Then A^{2^{n-2}} commutes with b, and A^{2^{n-1}} = 1 by Lemma 7. Smaller powers of A have non-trivial norm, so they do not commute with b. Clearly, the order of A is equal to 2^{n-2} or 2^{n-1}. In the following lemma we show that in fact only the second case is possible.

Lemma 8. Let A = a + (1+a)b. Then the order of A is equal to 2^{n-1}.

Proof. We show that the case of 2^{n-2} is impossible, since Supp A^{2^{n-2}} ≠ {1}.
We use formula (8) from [6], which describes the 2^k-th power of an element x ∈ U(G):

x^{2^k} = x_1^{2^k} + (x_2 x̄_2)^{2^{k-1}} b^{2^k} + Σ_{i=1}^{k-1} (x_2 x̄_2)^{2^{i-1}} (x_1 + x̄_1)^{2^k − 2^i} b^{2^i} + x_2 (x_1 + x̄_1)^{2^k − 1} b.

Let us show that the second component of A^{2^{n-2}} is non-trivial; denote it by t_2(A^{2^{n-2}}) = t_2. By the formula cited above,

t_2 = (1 + a)(a + ā)^{2^{n-2} − 1},

where ā = a⁻¹ for the dihedral group and ā = a^{-1+2^{n-2}} for the semidihedral group.

Consider first the case of the dihedral group. We have

t_2 = (1 + a)(a + a⁻¹)^{2^{n-2}−1} = (1 + a)(a⁻¹(1 + a²))^{2^{n-2}−1} = (1 + a) a^{2^{n-2}+1} (1 + a²)^{2^{n-2}−1} = (a^{2^{n-2}+1} + a^{2^{n-2}+2})(1 + a²)^{2^{n-2}−1}.

Note that if X = ⟨x⟩ with x^{2^m} = 1, then (1 + x)^{2^m − 1} = Σ_{y∈X} y = X̂, the sum of all elements of X [2]. Thus we have

(1 + a²)^{2^{n-2}−1} = 1 + a² + a⁴ + ... + a^{2^{n-1}−2} = ⟨a²⟩̂.

Then

t_2 = (a^{2^{n-2}+1} + a^{2^{n-2}+2}) ⟨a²⟩̂ = a⟨a²⟩̂ + ⟨a²⟩̂ = ⟨a⟩̂ ≠ 0,

and for the case of the dihedral group the lemma is proved.

Now consider the semidihedral group. We have

t_2 = (1 + a)(a + a^{2^{n-2}−1})^{2^{n-2}−1} = (1 + a)(a(1 + a^{2^{n-2}−2}))^{2^{n-2}−1} = (a^{2^{n-2}−1} + a^{2^{n-2}})(1 + a^{2^{n-2}−2})^{2^{n-2}−1},

and the rest of the proof is similar. Note that t_2(A^{2^{n-2}}) = ⟨a⟩̂ and hence t_1(A^{2^{n-2}}) = 1 + ⟨a⟩̂, since A ∈ H(KG).

To construct a section isomorphic to the desired wreath product, we first take the elements b, b^A, b^{A²}, ..., b^{A^{2^{n-2}−1}}. For every k we have (b^{A^k})² = 1. By Lemma 5 all elements b^{A^k} are self-conjugated, since A ∈ H(KG), and they commute with each other by Lemma 3. So we get the next lemma.

Lemma 9. ⟨b, b^A, b^{A²}, ..., b^{A^{2^{n-2}−1}}⟩ is an elementary abelian subgroup.

Now we can compute the elements b^{A^k}, using Lemma 5.

Lemma 10. Let A = a + (1+a)b and R = ψ(A) = 1 + a + ā. Then

b^{A^k} = 1 + R^k + R^k b,    k = 1, 2, ..., 2^{n-2}.

Proof. First we obtain b^A by Lemma 5 with h_1 = 0, h_2 = 1, f_1 = a, f_2 = 1 + a. We get

b^A = (a(1+a) + ā(1+ā)) R⁻¹ + (ā² + (1+a)²) R⁻¹ b = (a + ā + a² + ā²) R⁻¹ + (1 + a² + ā²) R⁻¹ b = (R + R²) R⁻¹ + R² R⁻¹ b = 1 + R + Rb.
Now let b^{A^k} = 1 + R^k + R^k b. Using the same method for h_1 = 1 + R^k, h_2 = R^k, we get b^{A^{k+1}} = 1 + R^{k+1} + R^{k+1} b, as required.

Lemma 11. There is the following direct decomposition:

⟨b, b^A, b^{A²}, ..., b^{A^{2^{n-2}−1}}⟩ = ⟨b⟩ × ⟨b^A⟩ × ⟨b^{A²}⟩ × ··· × ⟨b^{A^{2^{n-2}−1}}⟩.

Proof. We need to verify that a product of the form b^{i_0} (b^A)^{i_1} ··· (b^{A^k})^{i_k}, where k = 2^{n-2} − 1, i_m ∈ {0, 1} and not all i_m are equal to zero, is not equal to 1 ∈ G. Clearly, multiplication by b only permutes components, so we may consider only products without b and prove that they are not equal to 1 or b.

Note that the b^{A^k} are self-conjugated and lie in H(KG). From this follows the rule of their multiplication:

(1 + R^k + R^k b)(1 + R^m + R^m b) = 1 + R^k + R^m + (R^k + R^m) b.

A product of more than two elements is calculated in the same way:

(b^A)^{i_1} (b^{A²})^{i_2} ··· (b^{A^k})^{i_k} = 1 + i_1 R + i_2 R² + ··· + i_k R^k + (i_1 R + i_2 R² + ··· + i_k R^k) b.

Put γ = i_1 R + i_2 R² + ··· + i_k R^k and R = 1 + r, where r = a + ā. Then γ can be written in the form γ = μ + r^{j_1} + ··· + r^{j_s}, where μ ∈ {0, 1} and 0 < j_1 < j_2 < ··· < j_s = max{m : i_m ≠ 0} ≤ k. Since (a + ā)^{2^{n-2}} = 0, r is nilpotent and its lower powers are linearly independent, so r^{j_1} + ··· + r^{j_s} ≠ 0. On the other hand, it is easy to see that the support of r^{j} does not contain 1, so r^{j_1} + ··· + r^{j_s} ≠ 1.
Hence γ ∉ {0, 1}, and the support of the product (b^A)^{i_1} (b^{A²})^{i_2} ··· (b^{A^k})^{i_k} contains elements different from 1 and b, which proves the lemma.

Now we are ready to finish the proof of Theorem 1 for the dihedral and semidihedral groups. It was shown that U(KG) contains the semidirect product F of ⟨b⟩ × ⟨b^A⟩ × ⟨b^{A²}⟩ × ··· × ⟨b^{A^{2^{n-2}−1}}⟩ and ⟨A⟩. As proved above, the order of A is 2^{n-1} and its 2^{n-2}-th power commutes with b. It follows that the factor group F/⟨A^{2^{n-2}}⟩ is isomorphic to C_2 ≀ G', as required.

4 Generalized Quaternion Group

Now let G be the generalized quaternion group. First we need to calculate cl U(G). In fact, we only need to know t_L(G), since cl U(G) = t_L(G) − 1 [10]. Note that Theorem 2 is already known (see Theorem 4.3 in [5]), but we provide an independent proof for the generalized quaternion group.

Theorem 2. Let G be the generalized quaternion group. Then cl U(G) = |G'|.

Proof. First, cl U(G) ≤ |G'| by [26]. Now we prove that t_L(G) ≥ |G'| + 1. To do this, we construct a non-vanishing Lie product of length 2^{n-2} = |G'|. Consider the Lie product [b, a, ..., a] with k copies of a, which we denote by [b, k·a]. Clearly, [b, a] = (a + a⁻¹)b, and a + a⁻¹ is central in KG. It is easy to prove by induction that [b, k·a] = (a + a⁻¹)^k b; therefore the commutator

[b, (2^{n-2} − 1)·a] = a^{2^{n-2}+1}(1 + a²)^{2^{n-2}−1} b,

in which (1 + a²)^{2^{n-2}−1} is the sum of all elements of ⟨a²⟩, does not vanish.

From Theorem 2 it follows that t_L(G) = t^L(G), since

cl U(G) = t_L(G) − 1 ≤ t^L(G) − 1 ≤ |G'| = cl U(G),

confirming the conjecture about the equality of the lower and upper Lie nilpotency indices (cf. [3]). From this we conclude that G and U(KG) have the same exponent, using the theorem from [24] about the coincidence of their exponents in the case when t_L(G) ≤ 1 + (p − 1)p^{e−1}, where p^e = exp G and p is the characteristic of the field K. Note that these two statements regarding the Lie nilpotency indices and the exponent are also true for all 2-groups of maximal class. Using the technique described here we may also show that modular group algebras of 2-groups of maximal class are Lie centrally metabelian.

For a unit A of KG we denote by (b, k·A) the group commutator (b, A, ..., A) with k copies of A. We now need a pair of technical lemmas.

Lemma 12. Let A ∈ U(KG), A^{2^{n-1}} = 1, b ∈ G, and b^{A^i} b^{A^j} = b^{A^j} b^{A^i} for all i, j, where b^{A^i} = A^{-i} b A^i. Then (b, k·A)² = 1 for every k ∈ N.

Proof. We use induction on k. By a straightforward calculation, (b, A)² = 1. Now let (b, k·A) = X with X² = 1. Then (X, A) = X X^A. Since the elements b^{A^i}, i ∈ N, commute with each other, X and X^A also commute, and (X X^A)² = 1.

Lemma 13. Let A ∈ U(KG), A^{2^{n-1}} = 1, b ∈ G, and b^{A^i} b^{A^j} = b^{A^j} b^{A^i} for all i, j, where b^{A^i} = A^{-i} b A^i. Then for all k, m ∈ N

(b, k·A, A^{2^m}) = (b, (k + 2^m)·A).

Proof. We use induction on m. First, (b, k·A, A) = (b, (k+1)·A). Suppose the statement holds for some m. Consider the commutator

(b, k·A, A^{2^{m+1}}) = (b, k·A, A^{2^m})² (b, k·A, A^{2^m}, A^{2^m}),

since (x, yz) = (x, y)(x, z)(x, y, z). By Lemma 12 the square of the first commutator is 1, while the second is equal to (b, (k + 2^{m+1})·A).

This makes it possible to prove the next property of U(KG).

Lemma 14. Let A ∈ U(KG), A^{2^{n-1}} = 1, b ∈ G, and b^{A^i} b^{A^j} = b^{A^j} b^{A^i} for all i, j, where b^{A^i} = A^{-i} b A^i. Then A^{2^{n-2}} commutes with b.

Proof. We show by induction on m that the group commutator (b, A^{2^m}) = (b, 2^m·A); hence (b, A^{2^{n-2}}) = (b, 2^{n-2}·A) = 1, since cl U(G) = 2^{n-2}. First, (b, A²) = (b, A)²(b, A, A) = (b, A, A) by Lemma 12. Let (b, A^{2^m}) = (b, 2^m·A). Then (b, A^{2^{m+1}}) = (b, A^{2^m})² (b, A^{2^m}, A^{2^m}) = (b, A^{2^m}, A^{2^m}) by Lemma 12. Using the induction hypothesis and Lemma 13, we get

(b, A^{2^m}, A^{2^m}) = (b, 2^m·A, A^{2^m}) = (b, 2^{m+1}·A).

Let us take the element A = a^{2^{n-3}+1} + (1 + a)b, where a^{2^{n-1}} = 1. Calculating A⁻¹hA for self-conjugated h by Lemma 5, we again get a self-conjugated element. The norm of A is φ(A) = 1 + a^{2^{n-2}+1} + a^{2^{n-2}−1}, so the order of φ(A) is 2^{n-2}, and from this we conclude that the order of A is at least 2^{n-2}. On the other hand, it is not greater than 2^{n-1}, since G and U(KG) have the same exponent. Moreover, even if A^{2^{n-2}} ≠ 1, A^{2^{n-2}} commutes with b by Lemma 14, and it is necessary to know whether its lower powers commute with b. As in
the previous section, in the following lemma we calculate the order of A exactly.

Lemma 15. Let A = a^{2^{n-3}+1} + (1 + a)b. Then the order of A is equal to 2^{n-1}.

Proof. The proof is similar to that of Lemma 8. We show that Supp A^{2^{n-2}} ≠ {1}, calculating the second component t_2(A^{2^{n-2}}) = t_2. Using the same formula from [6], we have:

t_2 = (1 + a)(a^{2^{n-3}+1} + a^{−2^{n-3}−1})^{2^{n-2}−1} = (1 + a)(a^{−2^{n-3}−1}(1 + a^{2^{n-2}+2}))^{2^{n-2}−1} = (1 + a)(a^{−2^{n-3}−1})^{2^{n-2}−1}(1 + a^{2^{n-2}+2})^{2^{n-2}−1} = (1 + a) a^{−2^{n-3}+1}(1 + a^{2^{n-2}+2})^{2^{n-2}−1} = (a^{−2^{n-3}+1} + a^{−2^{n-3}+2})(1 + a^{2^{n-2}+2})^{2^{n-2}−1}.

Then (1 + a^{2^{n-2}+2})^{2^{n-2}−1} is the sum of all elements of ⟨a²⟩. From this

t_2 = (a^{−2^{n-3}+1} + a^{−2^{n-3}+2}) · Σ_{g∈⟨a²⟩} g = Σ_{g∈⟨a⟩} g ≠ 0,

and the lemma is proved.

Now we calculate the elements b^{A^k}, k = 1, 2, ..., 2^{n-2}, using Lemma 5.

Lemma 16. Let A = a^{2^{n-3}+1} + (1 + a)b and R = φ(A) = 1 + a^{2^{n-2}+1} + a^{2^{n-2}−1}. Then

b^{A^k} = β Σ_{i=−1}^{k−2} (b²R)^i + (b²R)^k b,    k = 1, 2, ..., 2^{n-2},

where β = a^{2^{n-3}+1} + a^{−2^{n-3}−1} + a^{2^{n-3}+2} + a^{−2^{n-3}−2}.

Proof. Recall that for Q_n in Lemma 5 we have f⁻¹hf = t_1 + t_2 b, where t_1 = h_1 + h_2(f_1 f_2 + f̄_1 f̄_2)b²R⁻¹ and t_2 = h_2(f̄_1² + f_2² b²)R⁻¹.

First we obtain the second component. For A = f_1 + f_2 b we have f̄_1² + f_2² b² = (a^{−2^{n-3}−1})² + (1 + a)² a^{2^{n-2}} = b²(1 + a² + a⁻²) = b²R². Thus the second component of b^A is b²R²R⁻¹ = b²R. Now it is easy to prove by induction that the second component of b^{A^k} is (b²)^k R^k = (b²R)^k. From this it follows immediately that A^k, k < 2^{n-2}, does not commute with b, since ord R = 2^{n-2}.

Now we calculate the first component. First, for the element A the expression f_1 f_2 + f̄_1 f̄_2 is equal to a^{2^{n-3}+1}(1 + a) + a^{−2^{n-3}−1}(1 + a⁻¹) = a^{2^{n-3}+1} + a^{−2^{n-3}−1} + a^{2^{n-3}+2} + a^{−2^{n-3}−2}, which we denote by β. Using the formula at the beginning of the proof with h_1 = 0, h_2 = 1, we conclude that the first component of b^A is equal to βb²R⁻¹ = β(b²R)⁻¹ (recall that b⁴ = 1). Now suppose the first component of b^{A^k}, where k < 2^{n-2} − 1, is equal to β Σ_{i=−1}^{k−2}(b²R)^i. Taking into account its previously calculated second component, we obtain that the first component of b^{A^{k+1}} is equal to

β Σ_{i=−1}^{k−2}(b²R)^i + (b²R)^k βb²R⁻¹ = β Σ_{i=−1}^{k−1}(b²R)^i,

again using b⁴ = 1.

Now we are ready to construct the subgroup whose factor group is isomorphic to the desired wreath product. Consider the subgroup F_1 of U(G):

F_1 = ⟨b, b^A, b^{A²}, ..., b^{A^{2^{n-2}−1}}⟩⟨A⟩,

where b ∈ G, A = a^{2^{n-3}+1} + (1 + a)b, and A^{2^{n-2}} is the minimal power of A which commutes with b. Further, the subgroup ⟨b, b^A, b^{A²}, ..., b^{A^{2^{n-2}−1}}⟩ is abelian, and the intersection of the subgroups ⟨b⟩, ⟨b^A⟩, ..., ⟨b^{A^{2^{n-2}−1}}⟩ is ⟨b²⟩. Moreover, the order of A is 2^{n-1}. Let us take F_2 to be the following factor group of F_1:

F_2 = F_1 / ⟨b²⟩⟨A^{2^{n-2}}⟩.

It is clear that cl F_2 ≤ 2^{n-2} = cl U(G). If we show that in fact equality cl F_2 = 2^{n-2} = cl(C_2 ≀ G') holds, then it follows that F_2 ≅ C_2 ≀ G'.

Indeed, let M = C_2 ≀ G' and cl F_2 = 2^{n-2} = cl M, and assume that F_2 is not isomorphic to M. Then there exists a normal subgroup N ⊲ M such that M/N ≅ F_2, since there exists a homomorphism M → F_2 induced by mapping the generators of M into F_2. Since |Z(M)| = 2 and N ⊲ M is non-trivial, N ∩ Z(M) ≠ 1, so Z(M) ⊆ N. In this case the nilpotency class cl F_2 would be less than cl M, a contradiction.

To obtain the lower bound for the nilpotency class cl F_2 we show that the commutator (b, A, ..., A^{2^{n-2}−1}) in U(G) does not belong to the subgroup ⟨b²⟩⟨A^{2^{n-2}}⟩, so its image in F_2 is non-trivial. By Lemma 13,

(b, A, ..., A^{2^{n-2}−1}) = (b, A, ..., A^{2^{n-3}−1}, A^{2^{n-3}}) = (b, A, ..., A^{2^{n-4}−1}, A^{2^{n-4}}, A^{2^{n-3}}) = ··· = (b, A, A², A⁴, ..., A^{2^{n-4}}, A^{2^{n-3}}),

and we obtain a simpler commutator of length n − 1. Further, (b, A) = b⁻¹b^A = bb^A; then (b, A, A²) = bb^A b^{A²} b^{A³}, and, by induction,

(b, A, A², ..., A^{2^{n-3}}) = bb^A b^{A²} ··· b^{A^{2^{n-2}−1}} = (bA⁻¹)^{2^{n-2}} A^{2^{n-2}}.

It remains to show that (bA⁻¹)^{2^{n-2}} is not contained in ⟨b²⟩⟨A^{2^{n-2}}⟩. Note that (bA⁻¹)⁻¹ = Ab³ and (Ab³)^{2^{n-2}} = (Ab)^{2^{n-2}}. By Lemma 15 the second component of A^{2^{n-2}} is equal to Σ_{g∈⟨a⟩} g.

References

[1] Baginski, C., Groups of units of modular group algebras, Proc. Amer. Math. Soc. 101 (1987), 619-624.
[2] Baginski, C., Modular group algebras of 2-groups of maximal class, Comm. Algebra 20 (1992), 1229-1241.
[3] Bhandari, A. K. and Passi, I. B. S., Lie-nilpotency indices of group algebras, Bull. London Math. Soc. 24 (1992), 68-70.
[4] Bovdi, A. A., Construction of a multiplicative group of a group algebra with finiteness conditions, Mat. Issled. 56 (1980), 14-27, 158.
[5] Bovdi, A. A. and
Kurdics, J., Lie properties of the group algebra and the nilpotency class of the group of units, J. Algebra 212 (1999), 28-64.
[6] Bovdi, A. and Lakatos, P., On the exponent of the group of normalized units of a modular group algebra, Publ. Math. Debrecen 42 (1993), 409-415.
[7] Bovdi, V. and Dokuchaev, M., Group algebras whose involutory units commute, electronic preprint (/abs/math.RA/0009003).
[8] Buckley, J. T., Polynomial functions and wreath products, Ill. J. Math. 14 (1970), 274-282.
[9] Coleman, D. B. and Passman, D. S., Units in modular group rings, Proc. Amer. Math. Soc. 25 (1970), 510-512.
[10] Du, X., The centers of a radical ring, Canad. Math. Bull. 35 (1992), 174-179.
[11] Gupta, N. D. and Levin, F., On the Lie ideals of a ring, J. Algebra 81 (1983), 225-231.
[12] Jennings, S. A., The structure of the group ring of a p-group over a modular field, Trans. Amer. Math. Soc. 50 (1941), 175-185.
[13] Jennings, S. A., Radical rings with nilpotent associated groups, Trans. Roy. Soc. Canada 49 (1955), 31-38.
[14] Konovalov, A. B., On the nilpotency class of a multiplicative group of a modular group algebra of a dihedral 2-group, Ukrainian Math. J. 47 (1995), 42-49.
[15] Koshitani, S., On the nilpotency indices of the radicals of group algebras of p-groups which have cyclic subgroups of index p, Tsukuba J. Math. 1 (1977), 137-148.
[16] Mann, A. and Shalev, A., The nilpotency class of the unit group of a modular group algebra II, Isr. J. Math. 70 (1990), 267-277.
[17] Mann, A., Wreath products in modular group rings, Bull. Lond. Math. Soc. 23 (1991), 443-444.
[18] Motose, K. and Ninomiya, Y., On the nilpotency index of the radical of a group algebra, Hokkaido Math. J. 4 (1975), 261-264.
[19] Passi, I. B. S., Passman, D. S. and Sehgal, S. K., Lie solvable group rings, Canad. J. Math. 25 (1973), 748-757.
[20] Shalev, A., On some conjectures concerning units in p-group algebras, Proceedings of the Second International Group Theory Conference (Bressanone, 1989), Rend. Circ. Mat. Palermo (2) 23 (1990), 279-288.
[21] Shalev, A., Dimension subgroups, nilpotency indices, and number of generators of ideals in p-group algebras, J. Algebra 129 (1990), 412-438.
[22] Shalev, A., The nilpotency class of the unit group of a modular group algebra I, Isr. J. Math. 70 (1990), 257-266.
[23] Shalev, A., Large wreath products in modular group rings, Bull. London Math. Soc. 23 (1991), 46-52.
[24] Shalev, A., Lie dimension subgroups, Lie nilpotency indices, and the exponent of the group of normalized units, J. London Math. Soc. 43 (1991), 23-36.
[25] Shalev, A., The nilpotency class of the unit group of a modular group algebra III, Arch. Math. (Basel) 60 (1993), 136-145.
[26] Sharma, R. K. and Bist, V., A note on Lie nilpotent group rings, Bull. Austral. Math. Soc. 45 (1992), 503-506.

Department of Mathematics and Economic Cybernetics, Zaporozhye State University, Zaporozhye, Ukraine
Mailing address: P.O. Box 1317, Central Post Office, Zaporozhye, 69000, Ukraine
E-mail: konovalov@
School of Mathematics and Statistics: Course Descriptions for Master's Degree Programs

Foundational Courses

-------------------- Functional Analysis --------------------

Course number: 1. Course category: foundational course. Course title: Functional Analysis (泛函分析). Hours: 60. Credits: 3. Semester offered: 1. Format: lectures. Assessment: closed-book examination. Applicable disciplines: pure mathematics, applied mathematics, operations research and control theory, curriculum and teaching theory. Teaching unit and staff: faculty of the Department of Pure Mathematics, School of Mathematics and Statistics.

Course outline: This course covers compact operators and Fredholm operators, an introduction to abstract functions, basic facts about Banach algebras, C*-algebras, normal operators on Hilbert space, the spectral decomposition of unbounded normal operators, self-adjoint extensions, convergence of sequences of unbounded operators, operator semigroups, and ordinary differential equations in abstract spaces.

Primary textbook: Zhang Gongqing and Guo Maozheng, Lectures on Functional Analysis (Volume II), Peking University Press, 1990.

References:
1. Ding Guanggui, An Introduction to Banach Spaces, Science Press, 1984.
2. M. Reed and B. Simon, Methods of Modern Mathematical Physics I: Functional Analysis, 1972.
3. K. Yosida, Functional Analysis, sixth edition, 1980.
4. Zhang Gongqing and Lin Yuanqu, Lectures on Functional Analysis (Volume I), Peking University Press, 1987.
5. V. Barbu, Nonlinear Semigroups and Differential Equations in Banach Spaces, 1976.
6. A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, 1983.

-------------------- Nonlinear Functional Analysis --------------------

Course number: 2. Course category: foundational course. Course title: Nonlinear Functional Analysis (非线性泛函分析). Hours: 60. Credits: 3. Semester offered: 2. Format: lectures. Assessment: closed-book examination. Applicable disciplines: applied mathematics, pure mathematics, operations research and control theory. Teaching unit and staff: faculty of the Department of Applied Mathematics, School of Mathematics and Statistics.