Journal of Clinical Medicine in Practice (实用临床医药杂志), 2020, Vol. 24, No. 10

Research progress of isokinetic technique and elastic-band resistance exercise in rehabilitation of cerebral stroke

LYU Junliang, WANG Zhiqiang, YU Lili, ZHANG Ping
(Rehabilitation Medical Center, Shengjing Hospital Affiliated to China Medical University, Shenyang, Liaoning, 110006)

ABSTRACT: The isokinetic technique offers high safety, repeatability and objectivity in the field of rehabilitation medicine and has been widely applied in clinical practice. This paper reviews recent research progress in applying the isokinetic technique to the rehabilitation assessment and rehabilitation training of stroke patients, summarises the effects of elastic-band resistance training on muscle strength, endurance and other motor functions in stroke patients with hemiplegia, compares the advantages and disadvantages of the two training methods, and discusses the possibility of elastic-band resistance training replacing isokinetic muscle strength training as equipment for hemiplegic strength training.

KEY WORDS: isokinetic technique; elastic band; resistance exercise; stroke; hemiplegia; rehabilitation medicine
CLC number: R743  Document code: A  Article ID: 1672-2353(2020)10-024-05  DOI: 10.7619/jcmp.202010007
Examiners' Report / Principal Examiner Feedback
Summer 2014

Pearson Edexcel GCE in Design & Technology
6FT03 Paper 01: Food Products, Nutrition and Product Development

Summer 2014
Publications Code UA038510
All the material in this publication is copyright © Pearson Education Ltd 2014

UNIT 6FT03: Food Products, Nutrition and Product Development

The focus of the 6FT03 paper is to examine students on the knowledge they have developed of a range of food commodities, aspects of nutrition, product development and food innovation. Students are required to have a comprehensive knowledge of the main food commodities, their composition, basic processing and typical spoilage patterns.

A sound knowledge of nutrition and its influence on the diet, contemporary lifestyle issues and new product development is particularly important for food technologists. Similarly, consumer behaviour, demographics, modern lifestyles, cultural changes and sustainability issues have an influence on new product development. It is also important for students to be aware of the influence of new technologies and materials on the development of new food products.

The coverage of this paper effectively tested the students' knowledge and understanding of the topic areas. The 'ramped' nature of the exam paper and the variety of question styles and command words made it accessible to students of all ability levels. Progression and application of knowledge and understanding within the subject area were evident, providing stretch and challenge opportunities for higher-ability students. Marks were scored evenly across all areas of the paper, with effective differentiation throughout.

Question 1(a)
In Q1(a) students were required to identify the component proteins which form muscle. Most students correctly identified actin and myosin. The paper is ramped, with the challenge increasing as the paper progresses; this question elicited a very good response from students, with nearly all able to achieve full marks.

Question 1(b)
This question focused on the breakdown of fish tissues after catching and the development of odours. Good responses identified that little glycogen is present in the fish after catching, because it is used up as the fish struggle during capture. As a result little lactic acid is produced, leaving a relatively high pH, which leads to the rapid bacterial spoilage of fish. Good responses used technical terms frequently, e.g. 'trimethylamine oxide changes to trimethylamine'.
This seems to be a popular topic with students; many did well, demonstrating a good understanding of the changes which occur in fish after catching. Where students did less well, they attempted the question but showed little understanding of the changes which occur after death. Superficial knowledge was shown; weaker students would write about anything to do with fish rather than addressing the question. A focus on 'slimy' skin and change in colour, rather than more technical changes, was apparent in middling answers. Many students wrote about changes occurring in meat rather than the specific changes in fish. This was a common issue where students had learnt answers to past questions but applied this knowledge incorrectly, thus showing little understanding.

Question 2(a)
This question required students to identify the enzymes needed to enable digestion of the macro-nutrients. It proved to be a very good differentiating question, with top students able to identify these enzymes correctly. It was a recall question with no explanations or discussions required; as such, those who had learned the process of digestion were able to achieve well.

Question 2(b)
Students were required to identify the final components the macro-nutrients are broken down into during digestion in order for absorption to occur. Some students revealed high-level knowledge of the process, achieving full marks. Such knowledge is fundamental to a good understanding of the digestive process. Many students scored poorly on this question simply through lack of knowledge rather than lack of understanding. Very few achieved highly; students seemed to list a number of hopeful, but incorrect, answers. 'Fatty acids' alone was the most common incorrect answer for fat, while starch or glycerol were the most common incorrect responses for carbohydrates.

Question 2(c)
This question focused on the role of bile as a key component in the digestive system. Good responses focused on the role of bile as an emulsifier of fat, enabling it to be broken down and thus facilitating the action of lipase. It was pleasing to see many students achieve well with this question.

Question 3(a)
This question focused on the characteristic composition and behaviour of climacteric and non-climacteric fruits during storage and ripening. Many students showed a good understanding, being able to describe differences and relate the behaviour to the storage and/or shelf life of fruit.

Question 3(b)
This also focused on changes during storage and ripening. It steps up in difficulty, as students are now expected to 'explain' the ripening process. It is not enough in an 'explain' question simply to state the changes; to achieve full marks students needed to explain why colour, texture, sweetness etc. change as a fruit ripens. For example, stating that 'starch breaks down to simple sugars so the fruit becomes sweeter', or that 'colour changes because chlorophyll pigments break down to reveal other pigments', and then naming these pigments, e.g. carotenoids or anthocyanins, ensured students were awarded high marks.
At A2 level it is expected that students will have developed this knowledge and understanding; it is not enough simply to state that the changes occur.

Question 4(a)
There have been many cultural changes impacting new product development; students who achieved well identified and described these, often giving appropriate reasons for such changes, and went on to give specific examples of relevant new products.

Question 4(b)(i) and (ii)
Several students were able to identify correctly the stages of the product life cycle; fewer were able to apply the correct marketing plan to the identified stage. Where this was done well, it was very pleasing to see excellent knowledge and understanding demonstrated. The students who scored highly on this question tended to write about the introduction and decline stages, although good responses were also sometimes seen for the maturity and growth stages. The most common reason for not achieving marks was to identify the 'growth' stage but write about marketing plans implemented in the 'introduction' stage, thus confusing the two.

Question 5(a)
This question focused on the role of iron in the diet; students were expected to give the functions of iron in the diet and state good food sources. Higher-achieving students could frequently give two functions, with formation of haemoglobin and prevention of anaemia being the most common correct answers. It would have been good to see more students mention myoglobin or iron's role in enzyme systems. Although the majority of students were able to identify sources of both haem iron and non-haem iron, many failed to achieve a mark by simply writing 'meat' as a good food source of haem iron. 'Meat' alone is not specific enough at A2 level; to achieve a mark, students should have written 'red meat' or given a named red meat.

Question 5(b)
Students were required to discuss the role of dietary fibre in the diet. Dietary fibre is a very important component of the diet, and Food Technology students should be able to demonstrate high-level knowledge of this component and discuss its role. However, there seemed to be much confusion, with several students unable to discuss its role beyond preventing constipation. Good responses identified a variety of correct functions, as well as discussing the implications of too much fibre in the diet. It would have been good to see more development of answers, e.g. how dietary fibre can be useful in the control of obesity or how it may help prevent type II diabetes, rather than simply stating that it does these things.

Questions 6(a) and 6(b)
Many students could identify the uses of energy, but it would have been good to see more technical terms used, e.g. 'basal metabolism' rather than 'for bodily functions'. Good responses were frequently given for Q6(b), with many students able to explain clearly the concept of negative energy balance. Some students unfortunately confused negative energy balance with positive energy balance, thus not gaining any marks.
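For reference, the concept examined in Q6(b) can be stated as a simple relationship (a sketch of the standard definition, not wording taken from the mark scheme):

    \[
    E_{\text{balance}} = E_{\text{intake}} - E_{\text{expenditure}},
    \qquad E_{\text{balance}} < 0 \;\Rightarrow\; \text{body energy stores are drawn down (weight loss over time)}
    \]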
Question 6(c)
Obesity is a diet-related disease, and Food Technology students should have a clear awareness of the implications of this illness, not just for the individual but also for society as a whole. The question was generally very well answered, with most students identifying obesity-related diseases as well as commenting on the burden on the NHS or the cost to the economy. Many students suggested ways health professionals could promote healthy lifestyles.

Question 6(d)
This question linked to the previous one, but it specifically required students to apply their knowledge of new materials to new food products, particularly those aimed at reduced-calorie diets. All the materials studied in the specification (modified starches, encapsulated materials, and meat analogues or novel proteins) have been used to provide the consumer with reduced-calorie products. An understanding of the function of these materials and how they can be applied is expected at this level. There was a very good range of answers, with all these new materials covered. Most students referred to the low-fat/high-protein nature of meat analogues. Some wrote about modified starches, especially their role as fat replacers in low-fat meals to give a creamy mouthfeel. There were a few good answers focusing on the role of encapsulated materials, although it would have been good to see these responses developed further, especially as encapsulation is one of the growing sectors in new food materials. Some good responses focused on artificial sweeteners, which, although not on the 6FT03 specification, are still valid examples of new food materials widely used in reduced-calorie food products. Q6(d) was either answered very well, or students missed the point of the question and, instead of writing about a new food material and how it can be used in products aimed at those on a reduced-calorie diet, wrote about low-fat products or slimming diets in general.

Question 7(a)
For this question students were required to state the three main parts of a cereal grain, and most could give three correct answers.

Question 7(b)
Students were required to relate their knowledge of wheat and its different types, whether hard or soft, and then apply this knowledge to the processes involved in bread making, biscuit making and pasta making. There were many excellent responses, and it is good to see that centres are very evidently using practical work to teach theory successfully. Good responses showed an excellent understanding of the different types of wheat and their properties, revealing how these properties are put to use in the processes of making these important products. It seems that this question was frequently misread. Students who did not achieve well answered by writing about different types of flour used in these processes rather than focusing on the different types of wheat. For instance, there was much description of white flour as opposed to wholemeal, plain or even self-raising. '00' flour was frequently mentioned for pasta making: correct, but not relevant to the question. Some students spent time writing about different cereal grains such as rye, not even focusing on wheat. It is important to understand the materials used in order to understand the processes involved in making bread, biscuits and pasta.

It was very pleasing to see the depth of detail included in questions which required explanation and discussion. Successful students were able to demonstrate high-level knowledge and understanding in their responses to the questions.
It is very evident that centres are teaching the specification well and training students to recognise and use appropriately the command words which are used to differentiate questions. Less successful students frequently had difficulty achieving marks in the questions which required explanations. It is not sufficient simply to provide descriptions; underlying explanations also need to be provided at this level. Less successful students also appeared at times to misread questions, for instance Question 7(b).

Centres need to be aware of the necessity to prepare students for this exam by ensuring that they have a full understanding of the requirements of the different question types: name, state, give, describe, outline, compare, contrast, discuss, evaluate and explain.

Centres must ensure full coverage of the specification, as any area could be tested. It would be useful for all centres to ensure the 'Subject Content Guide' for 6FT03 is referred to by both teachers and students. This can be accessed on the Pearson Edexcel website, on the GCE Food Technology page, under Teacher Support Materials.

Grade Boundaries
Grade boundaries for this, and all other papers, can be found on the website at this link: /iwantto/Pages/grade-boundaries.aspx

Pearson Education Limited. Registered company number 872828, with its registered office at Edinburgh Gate, Harlow, Essex CM20 2JE.
GLOBAL CERTIFICATION FORUM (GCF) Ltd
Work Item Description
Field Trial requirements for GSM/GPRS/EGPRS

Reference: GCF WI-108
Version: v3.1.0
Date: 29.01.2010
Document Type: Technical

1 Scope
The scope of this work item covers the renewal of Field Trial requirements for GSM/(E)GPRS, including SIM/USIM.

2 Description
This Work Item description has been created to handle the renewal of GSM/(E)GPRS (including SIM/USIM) Field Trial requirements.

3 Justification
Field Trials are an integral part of the GCF scheme and are therefore required to evolve in conjunction with the implementation of the associated mobile technology.

4 Supporting companies
CSR, Ericsson Mobile Platforms, Motorola, NEC, Nokia, O2 UK, Orange France, RIM, Sony Ericsson, Vodafone Group, Broadcom, TIM, TeliaSonera

5 Rapporteur
Marc Ouwehand, Nokia Corporation
Telephone: +358 40828 0908
E-mail: marc.ouwehand@

6 Affected bands
Note: GSM 850 and 1900 are outside the GCF certification scheme.

7 Core Specifications

8 Test Specifications
Note: The operator expectations on each identified Field Trial Requirement can be derived from the GSMA PRD DG.11 'DG Field and Lab Trial Guidelines'. It is emphasised that DG.11 is only a guideline and that manufacturers may use their own test procedures.

9 Work Item Certification Entry
9.1 Work Item Certification Entry Criteria (CEC)
N/A
9.2 Target Work Item Certification Entry Date / GCF-CC version
N/A

10 Work Item Freeze and Completion Criteria
During the next PRP review this WI should be set to 'Completed'.

11 Conformance Test Requirements
N/A

12 IOP Test Requirements
N/A

13 Field Trial Requirements
For BSS/MSC network dependent Field Trial Requirements (BM)
For GPRS network dependent Field Trial Requirements (GPRS)
For SIM/UICC dependent Field Trial Requirements (2GSIM)
For SMSC dependent Field Trial Requirements (SMS)
For Network/SIM/UICC/Client independent Field Trial Requirements (NI)

14 Periodic Review Points
The next PRP review for this WI will be held at the FT AG meeting during Q3 2010.

15 Other comments
The information below comes from the two WIs which were merged into this WI during the FT AG #15 meeting.

Former WI-028 comments:
- During the SG 25 meeting a concern was raised about the approval of this Work Item, as well as the CRs attached to it, via the 10-day-rule process. The SG made clear that this approval process does not conform to the official GCF rules: the SG itself is supposed to approve this Work Item and its CRs. During the SG 25 meeting, document S-06-053 was created and approved by the SG to give a mandate for approval via the 10-day rule. This mandate applies only to CRs concerning this Work Item.
- Due to the introduction of this Work Item and Work Item 27 (HSDPA), GCF operators are required to re-declare their status as GCF Qualified Operator by using the new Annex B, which should be available in the PRD GCF-OP released in April 2006. The CR for this renewed Annex B is part of this Work Item and will be uploaded as CR FT-06-022r4. During the teleconference of 20.3.2006 it was agreed that an agenda point would be made for FT AG #04 (3-4 May 2006) concerning re-declaration.
- During the teleconference of 20.3.2006 it was noticed that the mandate for this Work Item did not include the EGPRS feature. It was therefore agreed that the EGPRS topic would be put on the agenda for FT AG #04. Mr. M. Ouwehand (Nokia) will make sure that a discussion/input document is available for FT AG #04.
When WI-028 was activated, it was agreed to put a 'transition period' in place, because at that time not enough FTQ Annex B documents were available. During FT AG #08 (2-3 May 2007) it was agreed that the 'transition period' ended with the release of GCF-CC 3.26.0.

Former WI-048 comments:
It was suggested during the FT AG discussions of this WI that the most effective method of administering and executing the EDGE-classified test requirements, while maintaining confidence in GCF Field Trials for both GPRS and EDGE networks, is to:
a) introduce a new classification called EDGE;
b) copy all existing GPRS requirements to the EDGE requirements;
c) exempt FT on GPRS NW configurations from performing the EDGE-classified test requirements;
d) exempt FT on EDGE NW configurations from performing the GPRS-classified test requirements.
When this WI meets the CEC and is therefore activated, the FT AG should consider merging the EDGE Field Trial requirements table into the existing GPRS Field Trial requirements table, as was done with the PS, HSDPA and EUL requirements table merge.
The CRs to GCF-CC related to this Work Item need to be submitted at the same time as the CR to activate this Work Item.

16 Document Change Record
EUROPEAN COMMISSION
Directorate General Health and Consumer Protection

SANCO/825/00 rev. 8.1
16/11/2010

Guidance document on pesticide residue analytical methods

[Revision 8 is the version of this guidance document that is currently valid. It is, however, under continuous review and will be updated when necessary. The document is aimed at manufacturers seeking pesticides authorisations and parties applying for setting or modification of an MRL. It gives requirements for methods that would be used in post-registration monitoring and control by the competent authorities in Member States in the event that authorisations are granted. For authorities involved in post-registration control and monitoring, the document may be considered as being complementary to the documents: Method Validation and Quality Control Procedures for Pesticide Residues Analysis in Food and Feed (for the valid revision visit http://ec.europa.eu/food/plant/protection/resources/publications_en.htm) and the OECD document "Guidance Document on pesticide residue analytical methods", 2007 (ENV/JM/MONO(2007)17).]

Contents
1 Preamble
2 General
2.1 Good Laboratory Practice
2.2 Selection of analytes for which methods are required
2.3 Description of an analytical method and its validation results
2.4 Hazardous reagents
2.5 Acceptable analytical techniques considered commonly available
2.6 Multi-residue methods
2.7 Single methods and common moiety methods
2.8 Single methods using derivatisation
2.9 Method validation
2.9.1 Calibration
2.9.2 Recovery and Repeatability
2.9.3 Selectivity
2.10 Confirmation
2.10.1 Confirmation simultaneous to primary detection
2.10.2 Confirmation by an independent analytical technique
2.11 Independent laboratory validation (ILV)
2.12 Availability of standards
2.13 Extraction Efficiency
3 Analytical methods for residues in plants, plant products, foodstuff (of plant origin), feedingstuff (of plant origin) (Annex IIA Point 4.2.1 of Directive 91/414/EEC; Annex Point IIA, Point 4.3 of OECD)
3.1 Purpose
3.2 Selection of analytes
3.3 Commodities and Matrix Groups
3.4 Limit of quantification
3.5 Independent laboratory validation (ILV)
4 Analytical methods for residues in foodstuff (of animal origin) (Annex IIA Point 4.2.1 of Directive 91/414/EEC; Annex Point IIA, Point 4.3 of OECD)
4.1 Purpose
4.2 Selection of analytes
4.3 Commodities
4.4 Limit of quantification
4.5 Independent laboratory validation (ILV)
5 Analytical methods for residues in soil (Annex IIA, Point 4.2.2 of Directive 91/414/EEC; Annex Point IIA, Point 4.4 of OECD)
5.1 Purpose
5.2 Selection of analytes
5.3 Samples
5.4 Limit of quantification
6 Analytical methods for residues in water (Annex IIA, Point 4.2.3 of Directive 91/414/EEC; Annex Point IIA, Point 4.5 of OECD)
6.1 Purpose
6.2 Selection of analytes
6.3 Samples
6.4 Limit of quantification
6.5 Direct injection
7 Analytical methods for residues in air (Annex IIA, Point 4.2.4 of Directive 91/414/EEC; Annex Point IIA, Point 4.7 of OECD)
7.1 Purpose
7.2 Selection of analytes
7.3 Samples
7.4 Limit of quantification
7.5 Sorbent characteristics
7.6 Further validation data
7.7 Confirmatory methods
8 Analytical methods for residues in body fluids and tissues (Annex IIA, Point 4.2.5 of Directive 91/414/EEC; Annex Point IIA, Point 4.8 of OECD)
8.1 Purpose
8.2 Selection of analytes
8.3 Samples
8.4 Sample set
8.5 Limit of quantification
9 Summary - List of methods required
10 Abbreviations
11 References

1 Preamble
This document provides guidance to applicants, Member States and EFSA on the data requirements and assessment for residue analytical methods for post-registration control and monitoring purposes. It is not intended for biological agents such as bacteria or viruses. It recommends possible interpretations of the provisions of section 3.5.2 of Annex II of Regulation (EC) No 1107/2009 [1] and of the provisions of section 4, part A of Annex II and section 5, part A of Annex III of Council Directive 91/414/EEC [2]. It also applies to applications for setting or modification of an MRL within the scope of Regulation (EC) No 396/2005 [3]. It has been elaborated in consideration of the 'Guidance Document on pesticide residue analytical methods' of the OECD [4] and SANCO/10684/2009 "Method validation and quality control procedures for pesticide residue analysis in food and feed" [5].

This document has been conceived as an opinion of the Commission Services and elaborated in co-operation with the Member States. It does not, however, intend to produce legally binding effects and by its nature does not prejudice any measure taken by a Member State nor any case law developed with regard to this provision. This document also does not preclude the possibility that the European Court of Justice may give one or another provision direct effect in Member States.

This guidance document must be amended at the latest if new data requirements as referred to in Article 8(1)(b) and 8(1)(c) of Regulation (EC) No 1107/2009 have been established in accordance with the regulatory procedure with scrutiny referred to in Article 79(4).

2 General

2.1 Good Laboratory Practice
According to Guidance Document 7109/VI/94-Rev. 6.c1 (Applicability of Good Laboratory Practice to Data Requirements according to Annexes II, Part A, and III, Part A, of Council Directive 91/414/EEC) [6], the development and validation of an analytical method for monitoring purposes and post-registration control is not subject to GLP. However, where the method is used to generate data for registration purposes, for example residue data, these studies must be conducted to GLP.

2.2 Selection of analytes for which methods are required
The definition of the residues relevant for monitoring in feed and food as well as in environmental matrices and air is not the subject matter of this document. Criteria for the selection of analytes in case no legally binding definition is available are given in the respective sections 3 - 8. In addition, sections 5.2, 6.2, 7.2 and 8.2 clarify under which circumstances analytical methods for residues may not be necessary.

2.3 Description of an analytical method and its validation results
Full descriptions of validated methods shall be provided.
The submitted studies must include the following points:
• Itemisation of the fortified compounds and the analytes which are quantified
• Description of the analytical method
• Validation data as described in more detail below
• Description of calibration, including calibration data
• Recovery and repeatability
• Data proving the selectivity of the method
• Confirmatory data, if not presented in a separate study
• References (if needed)

The following information should be offered in the description of the analytical method:
• An introduction, including the scope of the method
• Outline/summary of the method, including validated matrices, limit of quantification (LOQ), range of recoveries, fortification levels and number of fortifications per level
• Apparatus and reagents
• Instrument parameters used, as an example, if appropriate
• Description of the analytical method, including extraction, clean-up, derivatisation (if appropriate), chromatographic conditions (if appropriate) and quantification technique
• Hazards or precautions required
• Time required for one sample set
• Schematic diagram of the analytical method
• Stages where an interruption of the method is possible
• Result tables (if results are not presented in separate studies)
• Procedure for the calculation of results from raw data
• Extraction efficiency of solvents used
• Important points and special remarks (e.g. volatility of analyte or its stability with regard to pH)
• Information on stability of fortified/incurred samples, extracts and standard solutions (if the recoveries in the fortified samples are within the acceptable range of 70-120 %, stability is sufficiently proven)

Sometimes it may be necessary for other information to be presented, particularly where special methods are considered.

2.4 Hazardous reagents
Hazardous reagents (carcinogens category I and II [7]) shall not be used. Among these compounds are diazomethane, chromium (VI) salts, chloroform and benzene.

2.5 Acceptable analytical techniques considered commonly available
Analytical methods shall use instrumentation regarded as "commonly available":
• GC detectors: FPD, NPD, ECD, FID, MS, MSn (incl. ion traps and MS/MS), HRMS
• GC columns: capillary columns
• HPLC detectors: MS, MS/MS, HRMS, FLD, UV, DAD
• HPLC columns: reversed phase, ion-exchange, normal phase
• AAS, ICP-MS, ICP-OES
Other techniques can be powerful tools in residue analysis; therefore the acceptance of additional techniques as part of enforcement methods should be discussed at appropriate intervals. Whilst it is recognised that analytical methodology is constantly developing, some time elapses before new techniques become generally accepted and available.

2.6 Multi-residue methods
Multi-residue methods that cover a large number of analytes and that are based on GC-MS and/or HPLC-MS/MS are routinely used in enforcement laboratories for the analysis of plant matrices. Therefore, validated residue methods submitted for food of plants, plant products and foodstuff of plant origin (Section 3) should be multi-residue methods published by an international official standardisation body such as the European Committee for Standardisation (CEN) (e.g. [8 - 12]) or the AOAC International (e.g. [13]).
Single residue methods should only be provided if data show, and are reported, that multi-residue methods involving GC as well as HPLC techniques cannot be used.

If validation data for the residue analytical method of an analyte in at least one of the commodities of the respective matrix group have been provided by an international official standardisation body, and if these data have been generated in more than one laboratory with the required LOQ and acceptable recovery and RSD data (see Section 2.9.2), no additional validation by an independent laboratory is required.

2.7 Single methods and common moiety methods
Where a pesticide residue cannot be determined using a multi-residue method, one or, where appropriate, more alternative method(s) must be proposed. The method(s) should be suitable for the determination of all compounds included in the residue definition. If this is not possible and an excessive number of methods for individual compounds would be needed, a common moiety method may be acceptable, provided that it is in compliance with the residue definition. However, common moiety methods shall be avoided whenever possible.

2.8 Single methods using derivatisation
For the analysis of some compounds by GC, such as those of high polarity or with poor chromatographic properties, or for the detection of some compounds in HPLC, derivatisation may be required. These derivatives may be prepared prior to chromatographic analysis or as part of the chromatographic procedure, either pre- or post-column. Where a derivatisation method is used, this must be justified.

If the derivatisation is not part of the chromatographic procedure, the derivative must be sufficiently stable and should be formed with high reproducibility and without influence of matrix components on yield. The efficiency and precision of the derivatisation step should be demonstrated with analyte in sample matrix against pure derivative. The storage stability of the derivative should be checked and reported. For details concerning calibration refer to Section 2.9.1.

The analytical method is considered to remain specific to the analyte of interest if the derivatised species is specific to that analyte. However, where (in case of pre-column derivatisation) the derivative formed is a common derivative of two or more active substances or their metabolites, or is classed as another active substance, the method should be considered non-specific and may be deemed unacceptable.

2.9 Method validation
Validation data must be submitted for all analytes included in the residue definition for all representative sample matrices to be analysed at adequate concentration levels.

Basic validation data are:
• Calibration data
• Concentration of analyte(s) found in blank samples
• Concentration level(s) of fortification experiments
• Concentration and recovery of analyte(s) found in fortified samples
• Number of fortification experiments for each matrix/level combination
• Mean recovery for each matrix/level combination
• Relative standard deviation (RSD) of recovery, separate for each matrix/level combination
• Limit of quantification (LOQ), corresponding to the lowest validated level
• Representative, clearly labelled chromatograms
• Data on matrix effects, e.g. on the response of the analyte in matrix as compared to pure standards

Further data may be required in certain cases, depending on the analytical method used and the residue definition to be covered.

2.9.1 Calibration
The calibration of the detection system shall be adequately demonstrated at a minimum of 3 concentration levels in duplicate or (preferably) 5 concentration levels with single determination. Calibration should be generated using standards prepared in blank matrix extracts (matrix-matched standards) for all sample materials included in the corresponding validation study (Sections 3 - 8). Only if experiments clearly demonstrate that matrix effects are not significant (i.e. < 20 %) may calibration with standards in solvent be used. Calibration with standards in solvent is also acceptable for methods to detect residues in air (Section 7). In case aqueous samples are analysed by direct injection HPLC-MS/MS, calibration shall be performed with standards in aqueous solution.

The analytical calibration must extend to at least the range which is suitable for the determination of recoveries and for assessment of the level of interferences in control samples. For that purpose a concentration range shall be covered from 30 % of the LOQ to 20 % above the highest level (Section 2.9.2).

All individual calibration data shall be presented together with the equation of the calibration. Concentration data should refer to both the mass fraction in the original sample (e.g. mg/kg) and the concentration in the extract (e.g. µg/L). A calibration plot should be submitted, in which the calibration points are clearly visible. A plot showing the response factor¹ versus the concentration for all calibration points is preferred over a plot of the signal versus the concentration.
(¹ The response factor is calculated by dividing the signal area by the respective analyte concentration.)

Linear calibrations are preferred if shown to be acceptable over an appropriate concentration range. Other continuous, monotonically increasing functions (e.g. exponential/power, logarithmic) may be applied where this can be fully justified based on the detection system used.

When quantification is based on the determination of a derivative, the calibration shall be conducted using standard solutions of the pure derivative generated by weighing, unless the derivatisation step is an integral part of the detection system. If the derivative is not available as a reference standard, it should be generated within the analytical set by using the same derivatisation procedure as that applied for the samples. Under these circumstances, a full justification should be given.
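To illustrate the calibration requirements of Section 2.9.1, a minimal R sketch follows. The concentrations, peak areas, object names and the solvent-standard comparison are hypothetical, invented for illustration; only the 5-level design, the response factor definition (footnote 1) and the 20 % matrix-effect criterion come from the text above.

    # Matrix-matched standards: 5 concentration levels, single determination each,
    # spanning 30 % of the LOQ (0.01 mg/kg) to 20 % above the highest level (0.1 mg/kg)
    conc   <- c(0.003, 0.01, 0.05, 0.1, 0.12)   # mg/kg
    signal <- c(410, 1350, 6800, 13500, 16100)  # hypothetical peak areas

    fit <- lm(signal ~ conc)   # linear calibration function
    coef(fit)                  # equation of the calibration

    # Response factor = signal area / analyte concentration (footnote 1);
    # plotting it against concentration is the preferred calibration plot
    rf <- signal / conc
    plot(conc, rf, xlab = "Concentration (mg/kg)", ylab = "Response factor")

    # Matrix-effect check: compare the matrix-matched slope with a solvent-standard
    # slope; solvent calibration is acceptable only if the difference is < 20 %
    signal_solvent <- c(430, 1420, 7200, 14300, 17000)  # hypothetical
    me <- 100 * (coef(lm(signal ~ conc))[2] - coef(lm(signal_solvent ~ conc))[2]) /
          coef(lm(signal_solvent ~ conc))[2]
    abs(me) < 20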
2.9.2 Recovery and Repeatability
Recovery and precision data must be reported for the following fortification levels, except for body fluids and body tissues (Section 8):
• LOQ: 5 samples
• 10 times LOQ, or the MRL (set or proposed), or another relevant level (≥ 5 x LOQ): 5 samples
Additionally, residue levels must be reported for unfortified samples:
• blank matrix: 2 samples

According to the residue definition, the LOQ of chiral analytes usually applies to the sum of the two enantiomers. In this case it is not necessary to determine the enantiomers separately. Enantioselective methods would only be required if a single enantiomer is included in the residue definition.

In cases of complex residue definitions (e.g. a residue definition which contains more than one compound) the validation results shall be reported for the single parts of the full residue definition, unless the single elements cannot be analysed separately.

The mean recovery at each fortification level and for each sample matrix should be in the range of 70 % - 120 %. In certain justified cases mean recoveries outside of this range will be accepted. For plants, plant products, foodstuff (of plant and animal origin) and feeding stuff, recovery may deviate from this rule as specified in Table 1.²
(² According to Annex IIA 4.2 of Directive 91/414/EEC the mean recovery should normally be 70 % - 110 % and the RSD should preferably be ≤ 20 %.)

Table 1: Mean recovery and precision criteria for plant matrices and animal matrices [4]

Concentration level            | Range of mean recovery (%) | Precision, RSD (%)
> 1 µg/kg to ≤ 0.01 mg/kg      | 60 - 120                   | 30
> 0.01 mg/kg to ≤ 0.1 mg/kg    | 70 - 120                   | 20
> 0.1 mg/kg to ≤ 1.0 mg/kg     | 70 - 110                   | 15
> 1 mg/kg                      | 70 - 110                   | 10

If blank values are unavoidable, recoveries shall be corrected and reported together with the uncorrected recoveries.

The precision of a method shall be reported as the relative standard deviation (RSD) of recovery at each fortification level. For plants, plant products, foodstuff (of plant and animal origin) and feeding stuff, the RSD should comply with the values specified in Table 1. In other cases the RSD should be ≤ 20 % per level. In certain justified cases, e.g. determination of residues in soil lower than 0.01 mg/kg, higher variability may be accepted.

When outliers have been identified using appropriate statistical methods (e.g. Grubbs or Dixons test), they may be excluded. Their number must not exceed 1/5 of the results at each fortification level. The exclusion should be justified and the statistical significance must be clearly indicated. In that case all individual recovery data (including those excluded) shall be reported.
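The acceptance checks of Section 2.9.2 reduce to simple statistics per matrix/level combination. The following R sketch uses recoveries invented for illustration; the Table 1 limits shown are those for the > 1 µg/kg to ≤ 0.01 mg/kg band.

    # Hypothetical recoveries (%) of 5 samples fortified at the LOQ (0.01 mg/kg)
    rec <- c(84, 91, 78, 88, 95)

    mean_rec <- mean(rec)                  # mean recovery for this matrix/level
    rsd      <- 100 * sd(rec) / mean_rec   # relative standard deviation of recovery

    # Table 1 criteria for this concentration band: mean 60-120 %, RSD <= 30 %
    mean_rec >= 60 && mean_rec <= 120
    rsd <= 30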
2.9.3 Selectivity
Representative, clearly labelled chromatograms of standard(s) at the lowest calibrated level, matrix blanks and samples fortified at the lowest fortification level for each analyte/matrix combination must be provided to prove the selectivity of the method. Labelling should include sample description, chromatographic scale and identification of all relevant components in the chromatogram.

When mass spectrometry is used for detection, a mass spectrum (in case of MS/MS: product ion spectrum) should be provided to justify the selection of ions used for determination.

Blank values (non-fortified samples) must be determined from the matrices used in fortification experiments and should not be higher than 30 % of the LOQ. If this is exceeded, detailed justification should be provided.

2.10 Confirmation
Confirmatory methods are required to demonstrate the selectivity of the primary method for all representative sample matrices (Sections 3 - 8). It has to be confirmed that the primary method detects the right analyte (analyte identity) and that the analyte signal of the primary method is quantitatively correct and not affected by any other compound.

2.10.1 Confirmation simultaneous to primary detection
A confirmation simultaneous to the primary detection, using one fragment ion in GC-MS and HPLC-MS or one transition in HPLC-MS/MS, may be accomplished by one of the following approaches:
• in GC-MS or HPLC-MS, by monitoring at least 2 additional fragment ions (preferably m/z > 100) for low resolution systems, and at least 1 additional fragment ion for high resolution/accurate mass systems
• in GC-MSn (incl. ion traps and MS/MS) or HPLC-MS/MS, by monitoring at least 1 additional SRM transition

The following validation data are required for the additional fragment ions (MS and HRMS) or the additional SRM transition (MSn and MS/MS): calibration data (Section 2.9.1), recovery and precision data according to Section 2.9.2 for samples fortified at the respective LOQ (n = 5) and for 2 blank samples.

For all mass spectrometric techniques a mass spectrum (in case of single MS) or a product ion spectrum (in case of MSn) should be provided to justify the selection of the additional ions.

2.10.2 Confirmation by an independent analytical technique
Confirmation can also be achieved by an independent analytical method. The following are considered sufficiently independent confirmatory techniques:
• a chromatographic principle different from the original method, e.g. HPLC instead of GC
• a different stationary phase and/or mobile phase with significantly different selectivity; the following are not considered significantly different:
  • in GC: stationary phases of 100 % dimethylsiloxane and of 95 % dimethylsiloxane + 5 % phenylpolysiloxane
  • in HPLC: C18 and C8 phases
• an alternative detector, e.g. GC-MS vs. GC-ECD, HPLC-MS vs. HPLC-UV/DAD
• derivatisation, if it was not the first-choice method
• high resolution/accurate mass MS
• in mass spectrometry, an ionisation technique that leads to primary ions with a different m/z ratio than the primary method (e.g. ESI negative ions vs. positive ions)

It is preferred that confirmation data are generated with the same samples and extracts used for validation of the primary method.

The following validation data are required: calibration data (Section 2.9.1), recovery and precision data (Section 2.9.2) for samples fortified at the respective LOQ (n ≥ 3) and of a blank sample, and proof of selectivity (Section 2.9.3).

2.11 Independent laboratory validation (ILV)
A validation of the primary method in an independent laboratory (ILV) must be submitted for methods used for the determination of residues in plants, plant products, foodstuff (of plant and animal origin) and in feeding stuff. The ILV shall confirm the LOQ of the primary method, but at least the lowest action level (MRL).

The extent of independent validation required is given in detail in Sections 3 and 4.

In order to ensure independence, the laboratory chosen to conduct the ILV trials must not have been involved in the method development or in its subsequent use. In the case of multi-residue methods it is accepted if the ILV is performed in a laboratory that already has experience with the respective method.

The laboratory may be in the applicant's organisation, but should not be in the same location.
In the exceptional case that the laboratory chosen to conduct the ILV is in the same location, evidence must be provided that different personnel, as well as different instrumentation and stocks of chemicals etc., have been used.

Any additions or modifications to the original method must be reported and justified. If the chosen laboratory requires communication with the developers of the method to carry out the analysis, this should be reported.

2.12 Availability of standards
All analytical standard materials used in an analytical method must be commonly available. This applies to metabolites, derivatives (if preparation of derivatives is not a part of the method description), stable isotope labelled compounds or other internal standards.

If a standard is not commercially available, the standard should be made generally available by the applicant and contact details provided.

2.13 Extraction Efficiency
The extraction procedures used in residue analytical methods for the determination of residues in plants, plant products, foodstuff (of plant and animal origin) and in feeding stuff should be verified for all matrix groups for which residues ≥ LOQ are expected, using samples with incurred residues from radio-labelled analytes.

Data or suitable samples may be available from pre-registration metabolism studies or rotational crop studies, or from feeding studies. In cases where such samples are no longer available to validate an extraction procedure, it is possible to "bridge" between two solvent systems (details in [4]). The same applies if new matrices are to be included.

3 Analytical methods for residues in plants, plant products, foodstuff (of plant origin), feedingstuff (of plant origin)
(Annex IIA Point 4.2.1 of Directive 91/414/EEC; Annex Point IIA, Point 4.3 of OECD)

3.1 Purpose
Analysis of plants and plant products, and of foodstuff and feeding stuff of plant origin, for compliance with MRLs [3].

3.2 Selection of analytes
The selection of analytes for which methods for food and feed are required depends upon the definition of the residue for which a maximum residue level (MRL) is set or applied for according to Regulation (EC) No 396/2005.

3.3 Commodities and Matrix Groups
Methods validated according to Sections 2.9 and 2.10 must be submitted for representative commodities (also called "matrices" by analytical chemists) of all four matrix groups in Table 2.

Table 2: Matrix groups and typical commodities

Matrix group                                        | Examples of commodities
dry commodities (high protein/high starch content)  | barley, rice, rye, wheat, dry legume vegetables
commodities with high water content                 | apples, bananas, cabbage, cherries, lettuce, peaches, peppers, tomatoes
commodities with high oil content                   | avocados, linseed, nuts, olives, rape seed
commodities with high acid content                  | grapefruits, grapes, lemons, oranges

Important note: this list of commodities is not a comprehensive list of commodities/matrices. Applicants may consult regulatory authorities for advice on the use of other commodities.

If samples with high water content are extracted at a controlled pH, a particular method or validation for commodities with high acid content is not required.

Where a previously validated method has been adapted to a new matrix group, validation data must be submitted for representative matrices of this group.
Package‘rSAFE’October14,2022Title Surrogate-Assisted Feature ExtractionVersion0.1.4DescriptionProvides a model agnostic tool for white-box model trained on features extracted from a black-box model.For more information see:Gosiewska et al.(2020)<doi:10.1016/j.dss.2021.113556>. Depends R(>=3.5.0)License GPL-3Encoding UTF-8LazyData trueRoxygenNote7.2.1Imports DALEX,dendextend,ggplot2,ggpubr,grDevices,ingredients,sets,statsSuggests gbm,knitr,pander,randomForest,rmarkdown,spelling,testthat,vdiffrVignetteBuilder knitrURL https:///ModelOriented/rSAFEBugReports https:///ModelOriented/rSAFE/issuesLanguage en-USNeedsCompilation noAuthor Alicja Gosiewska[aut,cre],Anna Gierlak[aut],Przemyslaw Biecek[aut,ths],Michal Burdukiewicz[ctb](<https:///0000-0001-8926-582X>)Maintainer Alicja Gosiewska<*************************>Repository CRANDate/Publication2022-08-1313:20:02UTC12apartments R topics documented:apartments (2)HR_data (3)plot.safe_extractor (3)print.safe_extractor (4)safely_detect_changepoints (5)safely_detect_interactions (6)safely_select_variables (7)safely_transform_categorical (8)safely_transform_continuous (9)safely_transform_data (11)safe_extraction (12)Index14 apartments Apartments dataDescriptionDatasets apartments and apartmentsTest are artificial,generated from the same model.Structure of the dataset is copied from real dataset from PBImisc package,but they were generated in a way to mimic effect of Anscombe quartet for complex black box models.Usagedata(apartments)Formata data frame with1000rows and6columnsDetails•m2.price-price per square meter•surface-apartment area in square meters•no.rooms-number of rooms(correlated with surface)•district-district in which apartment is located,factor with10levels(Bemowo,Bielany,Moko-tow,Ochota,Praga,Srodmiescie,Ursus,Ursynow,Wola,Zoliborz)•floor-floor•construction.year-construction yearHR_data3HR_data Why are our best and most experienced employees leaving prema-turely?DescriptionA dataset from Kaggle competition Human Resources Analytics.https:/// FormatA data frame with14999rows and10variablesDetails•satisfaction_level Level of satisfaction(0-1)•last_evaluation Time since last performance evaluation(in Years)•number_project Number of projects completed while at work•average_monthly_hours Average monthly hours at workplace•time_spend_company Number of years spent in the company•work_accident Whether the employee had a workplace accident•left Whether the employee left the workplace or not(1or0)Factor•promotion_last_5years Whether the employee was promoted in the lastfive years•sales Department in which they work for•salary Relative level of salary(high)SourceDataset HR-analytics from https://plot.safe_extractor Plotting Transformations of the SAFE Extractor ObjectDescriptionPlotting Transformations of the SAFE Extractor ObjectUsage##S3method for class safe_extractorplot(x,...,variable=NULL)4print.safe_extractor Argumentsx safe_extractor object containing information about variables transformations cre-ated with safe_extraction()function...other parametersvariable character,name of the variable to be plottedValuea plot objectprint.safe_extractor Printing Summary of the SAFE Extractor ObjectDescriptionPrinting Summary of the SAFE Extractor ObjectUsage##S3method for class safe_extractorprint(x,...,variable=NULL)Argumentsx safe_extractor object containing information about variables transformations cre-ated with safe_extraction()function...other parametersvariable character,name of the variable to be plotted.If this argument is not specified then 
transformations for all variables are printedValueNo return value,prints the structure of the objectsafely_detect_changepoints5 safely_detect_changepointsIdentifying Changes in a Series Using PELT AlgorithmDescriptionThe safely_detect_changepoints()function calculates the optimal positioning and number of change-points for given data and penalty.It uses a PELT algorithm with a nonparametric cost function based on the empirical distribution.The implementation is inspired by the code available on https:///rkillick/changepoint.Usagesafely_detect_changepoints(data,penalty="MBIC",nquantiles=10)Argumentsdata a vector within which you wish tofind changepointspenalty penalty for introducing another changepoint,one of"AIC","BIC","SIC","MBIC", "Hannan-Quinn"or numeric non-negative valuenquantiles the number of quantiles used in integral approximationValuea vector of optimal changepoint positions(last observations of each segment)See Alsosafely_transform_continuousExampleslibrary(rSAFE)data<-rep(c(2,7),each=4)safely_detect_changepoints(data)set.seed(123)data<-c(rnorm(15,0),rnorm(20,2),rnorm(30,8))safely_detect_changepoints(data)safely_detect_changepoints(data,penalty=25)6safely_detect_interactions safely_detect_interactionsDetecting Interactions via Permutation ApproachDescriptionThe safely_detect_interactions()function detects second-order interactions based on predictions made by a surrogate model.For each pair of features it performs values permutation in order to evaluate their non_additive effect.Usagesafely_detect_interactions(explainer,inter_param=0.5,inter_threshold=0.5,verbose=TRUE)Argumentsexplainer DALEX explainer created with explain()functioninter_param numeric,a positive value indicating which of single observation non-additive effects are to be regarded as significant,the higher value the higher non-additiveeffect has to be to be taken into accountinter_thresholdnumeric,a value from[0,1]interval indicating which interactions should be re-turned as significant.It corresponds to the percentage of observations for whichinteraction measure is greater than inter_param-if this percentage is less thaninter_threshold then interaction effect is ignored.verbose logical,if progress bar is to be printedValuedataframe object containing interactions effects greater than or equal to the specified inter_threshold See Alsosafe_extractionExampleslibrary(DALEX)library(randomForest)library(rSAFE)safely_select_variables7 data<-apartments[1:500,]set.seed(111)model_rf<-randomForest(m2.price~construction.year+surface+floor+no.rooms+district,data=data)explainer_rf<-explain(model_rf,data=data[,2:6],y=data[,1])safely_detect_interactions(explainer_rf,inter_param=0.25,inter_threshold=0.2,verbose=TRUE)safely_select_variablesPerforming Feature Selection on the Dataset with Transformed Vari-ablesDescriptionThe safely_select_variables()function selects variables from dataset returned by safely_transform_data() function.For each original variable exactly one variable is chosen•either original one or transformed one.The choice is based on the AIC value for linear model(regression)or logistic regression(classification).Usagesafely_select_variables(safe_extractor,data,y=NULL,which_y=NULL,class_pred=NULL,verbose=TRUE)Argumentssafe_extractor object containing information about variables transformations created with safe_extraction() functiondata data,original dataset or the one returned by safely_transform_data()function.Ifdata do not contain transformed variables then transformation is done inside thisfunction 
using’safe_extractor’argument.Data may contain response variable ornot-if it does then’which_y’argument must be given,otherwise’y’argumentshould be provided.y vector of responses,must be given if data does not contain itwhich_y numeric or character(optional),must be given if data contains response valuesclass_pred numeric or character,used only in multi-classification problems.If responsevector has more than two levels,then’class_pred’should indicate the class ofinterest which will denote failure-all other classes will stand for success.verbose logical,if progress bar is to be printed8safely_transform_categorical Valuevector of variables names,selected based on AIC valuesSee Alsosafely_transform_dataExampleslibrary(DALEX)library(randomForest)library(rSAFE)data<-apartments[1:500,]set.seed(111)model_rf<-randomForest(m2.price~construction.year+surface+floor+no.rooms+district,data=data)explainer_rf<-explain(model_rf,data=data[,2:6],y=data[,1])safe_extractor<-safe_extraction(explainer_rf,verbose=FALSE)safely_select_variables(safe_extractor,data,which_y="m2.price",verbose=FALSE)safely_transform_categoricalCalculating a Transformation of Categorical Feature Using Hierar-chical ClusteringDescriptionThe safely_transform_categorical()function calculates a transformation function for the categorical variable using predictions obtained from black box model and hierarchical clustering.The gap statistic criterion is used to determine the optimal number of clusters.Usagesafely_transform_categorical(explainer,variable,method="complete",B=500,collapse="_")Argumentsexplainer DALEX explainer created with explain()functionvariable a feature for which the transformation function is to be computedmethod the agglomeration method to be used in hierarchical clustering,one of:"ward.D", "ward.D2","single","complete","average","mcquitty","median","centroid"B number of reference datasets used to calculate gap statisticscollapse a character string to separate original levels while combining them to the new oneValuelist of information on the transformation of given variableSee Alsosafe_extractionExampleslibrary(DALEX)library(randomForest)library(rSAFE)data<-apartments[1:500,]set.seed(111)model_rf<-randomForest(m2.price~construction.year+surface+floor+no.rooms+district,data=data)explainer_rf<-explain(model_rf,data=data[,2:6],y=data[,1])safely_transform_categorical(explainer_rf,"district")safely_transform_continuousCalculating a Transformation of a Continuous Feature UsingPDP/ALE PlotDescriptionThe safely_transform_continuous()function calculates a transformation function for the continuous variable using a PD/ALE plot obtained from black box model.Usagesafely_transform_continuous(explainer,variable,response_type="ale",grid_points=50,N=200,penalty="MBIC",nquantiles=10,no_segments=2)Argumentsexplainer DALEX explainer created with explain()functionvariable a feature for which the transformation function is to be computedresponse_type character,type of response to be calculated,one of:"pdp","ale".If features are uncorrelated,one can use"pdp"type-otherwise"ale"is strongly recommended.grid_points number of points on x-axis used for creating the PD/ALE plot,default50N number of observations from the dataset used for creating the PD/ALE plot, default200penalty penalty for introducing another changepoint,one of"AIC","BIC","SIC","MBIC", "Hannan-Quinn"or numeric non-negative valuenquantiles the number of quantiles used in integral approximationno_segments numeric,a number of segments variable is to be divided into in case of 
Value
    List of information on the transformation of the given variable.

See Also
    safe_extraction, safely_detect_changepoints

Examples
    library(DALEX)
    library(randomForest)
    library(rSAFE)

    data <- apartments[1:500, ]
    set.seed(111)
    model_rf <- randomForest(m2.price ~ construction.year + surface + floor +
                               no.rooms + district, data = data)
    explainer_rf <- explain(model_rf, data = data[, 2:6], y = data[, 1])
    safely_transform_continuous(explainer_rf, "construction.year")

safely_transform_data    Performing Transformations on All Features in the Dataset

Description
    The safely_transform_data() function creates new variables in a dataset using a safe_extractor
    object.

Usage
    safely_transform_data(safe_extractor, data, verbose = TRUE)

Arguments
    safe_extractor  object containing information about variable transformations, created with the
                    safe_extraction() function
    data            data for which features are to be transformed
    verbose         logical, whether a progress bar is to be printed

Value
    Data with extra columns containing the newly created variables.

See Also
    safe_extraction, safely_select_variables

Examples
    library(DALEX)
    library(randomForest)
    library(rSAFE)

    data <- apartments[1:500, ]
    set.seed(111)
    model_rf <- randomForest(m2.price ~ construction.year + surface + floor +
                               no.rooms + district, data = data)
    explainer_rf <- explain(model_rf, data = data[, 2:6], y = data[, 1])
    safe_extractor <- safe_extraction(explainer_rf, verbose = FALSE)
    safely_transform_data(safe_extractor, data, verbose = FALSE)

safe_extraction    Creating a SAFE Extractor - an Object Used for Surrogate-Assisted Feature Extraction

Description
    The safe_extraction() function creates a SAFE-extractor object which may be used later for
    surrogate feature extraction.

Usage
    safe_extraction(explainer, response_type = "ale", grid_points = 50, N = 200,
                    penalty = "MBIC", nquantiles = 10, no_segments = 2,
                    method = "complete", B = 500, collapse = "_",
                    interactions = FALSE, inter_param = 0.25,
                    inter_threshold = 0.25, verbose = TRUE)

Arguments
    explainer        DALEX explainer created with the explain() function
    response_type    character, type of response to be calculated, one of: "pdp", "ale". If
                     features are uncorrelated, one can use the "pdp" type; otherwise "ale" is
                     strongly recommended.
    grid_points      number of points on the x-axis used for creating the PD/ALE plot, default 50
    N                number of observations from the dataset used for creating the PD/ALE plot,
                     default 200
    penalty          penalty for introducing another changepoint, one of "AIC", "BIC", "SIC",
                     "MBIC", "Hannan-Quinn" or a numeric non-negative value
    nquantiles       the number of quantiles used in integral approximation
    no_segments      numeric, the number of segments the variable is to be divided into in case no
                     changepoints are found
    method           the agglomeration method to be used in hierarchical clustering, one of:
                     "ward.D", "ward.D2", "single", "complete", "average", "mcquitty", "median",
                     "centroid"
    B                number of reference datasets used to calculate gap statistics
    collapse         a character string used to separate original levels while combining them into
                     a new one
    interactions     logical, whether interactions between variables are to be taken into account
    inter_param      numeric, a positive value indicating which single-observation non-additive
                     effects are to be regarded as significant; the higher the value, the higher a
                     non-additive effect has to be to be taken into account
    inter_threshold  numeric, a value from the [0, 1] interval indicating which interactions should
                     be returned as significant. It corresponds to the percentage of observations
                     for which the interaction measure is greater than inter_param; if this
                     percentage is less than inter_threshold then the interaction effect is
                     ignored.
    verbose          logical, whether a progress bar is to be printed

Value
    safe_extractor object containing information
    about variable transformations.

See Also
    safely_transform_categorical, safely_transform_continuous, safely_detect_interactions,
    safely_transform_data

Examples
    library(DALEX)
    library(randomForest)
    library(rSAFE)

    data <- apartments[1:500, ]
    set.seed(111)
    model_rf <- randomForest(m2.price ~ construction.year + surface + floor +
                               no.rooms + district, data = data)
    explainer_rf <- explain(model_rf, data = data[, 2:6], y = data[, 1], verbose = FALSE)
    safe_extractor <- safe_extraction(explainer_rf, grid_points = 30, N = 100, verbose = FALSE)
    print(safe_extractor)
    plot(safe_extractor, variable = "construction.year")
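Putting the documented pieces together, here is a minimal end-to-end sketch of the surrogate-assisted pipeline, assembled only from the functions and the apartments examples shown above (the final column-selection line is our addition, not part of the package manual):

    library(DALEX)
    library(randomForest)
    library(rSAFE)

    # fit a black-box model and wrap it in a DALEX explainer
    data <- apartments[1:500, ]
    set.seed(111)
    model_rf <- randomForest(m2.price ~ construction.year + surface + floor +
                               no.rooms + district, data = data)
    explainer_rf <- explain(model_rf, data = data[, 2:6], y = data[, 1],
                            verbose = FALSE)

    # extract transformations, apply them, then keep one variant per variable
    safe_extractor <- safe_extraction(explainer_rf, verbose = FALSE)
    data_transformed <- safely_transform_data(safe_extractor, data, verbose = FALSE)
    selected <- safely_select_variables(safe_extractor, data_transformed,
                                        which_y = "m2.price", verbose = FALSE)

    # dataset with the response plus the AIC-selected features
    data_final <- data_transformed[, c("m2.price", selected)]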
With the aims of improving the competitiveness of our robot systems and differentiating our robot products from those of our competitors, we are developing various applications based on robot simulators. This paper presents the new robot simulator K-ROSET and describes the applications that have been built to extend it.

K-ROSET robot simulator for facilitating robot introduction into complex work environments

Preface

As the range of applications for robot systems has increased, various complicated issues have arisen, such as coordination between robots and their peripheral equipment and the installation of robots with multiple applications on the same line. Additionally, there is a demand for simple creation of advanced robot operation programs. In order to resolve these issues, the various companies that make robots are working to improve and add functionality to their own application examination simulators.

In 2011, we developed K-ROSET, a new robot application examination simulator. In addition to the basic functions that are demanded of a robot application examination simulator, K-ROSET provides an environment for developing and testing robot operation programs on a computer. K-ROSET's functions can also be expanded through the addition of the necessary applications. In this paper, we will provide an overview of K-ROSET and examples of how its functions can be expanded.

1 Overview of K-ROSET

In order to improve the efficiency of robot teaching, it is necessary to make use of offline tools such as robot simulators. We have developed the K-ROSET robot simulator and the KCONG automatic teaching data generator as offline tools to simplify the introduction of robots, and we provide our users with optimally configured robot systems that make use of the tools in different ways according to the purpose and use.

K-ROSET is a tool that simulates the operations of actual robots on a computer. It enables operating robots using the same methods, and executing operation plans using the same logic, as with the actual robots. Furthermore, by adding the necessary applications, it is possible to automate the actual work of robot teaching, eliminating the teaching work based on experience and trial and error that used to be performed by humans.

The main functions of K-ROSET are shown in Table 1, while its operation screen is shown in Fig. 1.

Fig. 1 Operation screen of K-ROSET
Table 1 Main functions of K-ROSET

(1) Structure
With K-ROSET, we have improved operability by adopting a software structure that integrates 3D rendering software with high processing speed and low memory requirements, complete with an operating interface that is conveniently laid out around it. By placing the robots, workpieces, teaching points, etc. on the screen, the operator can intuitively generate an operation program for the robot and simulate an actual system on the computer.

(2) Applications
Actual robot systems can be used for a wide variety of tasks that include handling, arc welding and painting, and on K-ROSET, simulations can be performed separately by application (Fig. 2). It is also possible to simulate robot systems in which robots with different applications (such as arc welding and handling, or handling and sealing) are installed simultaneously (Fig. 3).

Fig. 2 Simulation examples of applicable targets: (a) arc welding; (b) spot welding
Fig. 3 Simulation example of multiple applications (handling robot and sealing robot)
2 Characteristics of K-ROSET

With complex robot systems that include multiple robots or things like external axes, conveyors and peripheral equipment, it is vital to be able to study the operation without using the actual robots and equipment. When doing so, making use of the following robot simulator functions can be expected to have the benefits shown in Table 2 during the various steps of introducing manufacturing equipment.
① Layout examination
② Creation and verification of robot operation programs
③ Cycle-time verification

Table 2 Merits of robot simulators

The parts of K-ROSET that compute robot operations make use of the same operation software that is used in robot controllers. Additionally, because its simulation speed is several times faster than the operation speed of actual robots, it can carry out high-precision and high-speed computation of cycle time.

Making use of K-ROSET's functions eliminates the trouble of guiding the robot into a proper position through manual operation, making it possible to reduce teaching time. For example, it is possible to click on a workpiece on the screen to create a teaching point in that location, and to drag and drop that teaching point into the program area (the edit screen area) to create an operation instruction.

Fig. 4 shows an example in which teaching points have been created for a workpiece, while an example of operation based on the teaching points created is shown in Fig. 5. The operation trajectory of the robot tool tip is also shown in Fig. 5.

Fig. 4 Simulation example of teaching point creation (showing the coordinate system of each teaching point)
Fig. 5 Simulation example of a real application (display of painting operations)

3 Examples of customization

With K-ROSET, users can create their own operation interfaces, expand functionality and otherwise customize the program (using plugins). In addition to using K-ROSET's main simulation function, it is possible to use new functions and custom functions along with K-ROSET. Actual examples of additional applications that have been developed using the customization functions are given below.

(i) CS-Configurator (Fig. 6)
Parameters for the safety monitoring unit can be set easily based on a visual representation. For example, a 3D display enables intuitive configuration of the monitoring space.

(ii) K-SPARC (Fig. 7)
Palletization patterns are automatically generated by K-SPARC, and K-ROSET is used to arrange robots and equipment. Additionally, the operation program can be run to confirm the loading operation.

(iii) Interference prediction function (Fig. 8)
When changing programs after robot installation, connecting to this function online makes it possible to predict interference between robots, workpieces and surrounding equipment during operation, and to easily check the locations of predicted interference using a 3D display, preventing interference before it occurs.

Fig. 6 Example of CS-Configurator setting screen
Fig. 7 Example of K-SPARC setting screen
Fig. 8 Example of interference prediction function

(iv) Electrical consumption simulation function (Fig. 9)
This function can be used to run a robot operation program on K-ROSET, estimate the current and power used during operation, and display the results in tabular format.
(v) Picking robot simulation (K-PET)
In recent years, the use of robots in consumer products industries such as food, drugs and cosmetics has expanded rapidly, and it is particularly common to use them in combination with vision systems for the high-speed transfer of small-item workpieces. Quick verification of a robot's transfer ability is one of the keys to expansion into these markets. Because of this, we are working to develop systems that are specialized for this kind of application and can carry out setup and simulation in a more simplified manner. K-PET, a specialized tool for the computer simulation of picKstar, a high-speed picking robot developed by Kawasaki, is shown in Fig. 10. K-PET features a menu that can be used to easily set up feed and discharge conveyors, feeding and discharge methods for the workpiece in question, etc. Additionally, it makes it easy to determine how multiple picKstar units will be arranged.

Fig. 9 Example of power consumption simulation
Fig. 10 Example of K-PET setting screen

4 Linkage with other applications

(1) Linkage with vision systems
Linking K-ROSET with other applications makes it possible to carry out more advanced application verifications. Development is now underway for a simulation function that combines K-ROSET with K-VFinder, a 2D visual recognition system that is used with products such as picKstar. Doing so will make it possible to simultaneously carry out studies of vision system installation on a computer and operation verification of robots that are combined with vision systems.

An example of a linkage with a vision system is shown in Fig. 11. The workpiece information generated by K-ROSET on the left side of the screen is sent to K-VFinder on the right side, and a simulation is carried out as if the workpiece had been recognized with an actual camera.

Fig. 11 Example of K-ROSET and K-VFinder linkage

(2) Linkage with automatic teaching systems
The KCONG software for automatic teaching data generation comes with a built-in 3D CAD program, and K-ROSET uses the same 3D CAD program so that it can be linked with KCONG. We have thus enabled linking data between the two systems to merge the application study function (including peripheral equipment) of K-ROSET with KCONG's function for automatically generating teaching data based on 3D workpiece data.

Figure 12 shows this linkage. KCONG automatically generates teaching points based on the data for the system layout created using K-ROSET. Additionally, the data created is given to K-ROSET for operation verification.

Fig. 12 Example of K-ROSET and KCONG linkage

Concluding remarks

We do not simply develop tools for robot application study and simulation.
We are also working to make use of robot simulation technology as a tool to differentiate our robot systems. We intend to continue to differentiate ourselves from other companies through the development of offline study systems and a range of other applications, in order to provide our customers with more desirable and effective robot systems.

Shogo Hasegawa: FA System Department, FA and Clean Group, Robot Division, Precision Machinery Company
Masayuki Watanabe: FA System Department, FA and Clean Group, Robot Division, Precision Machinery Company
Takayuki Yoshimura: FA System Department, FA and Clean Group, Robot Division, Precision Machinery Company
Hiroki Kinoshita: Control System Department, System Technology Development Center, Corporate Technology Division; Professional Engineer (Information Engineering)
Fumihiro Honda: New Energy and Industrial Technology Development Organization
Hironobu Urabe: IT System Department, System Development Division, Kawasaki Technology Co., Ltd.
2D-Series, DXP, DXR & DXSDXP TropicalizedAluminumDXR Composite Resin("S" SiliconeO-Rings only;Stainless steelconduit entries)(Area Classification"0" only available withATEX/IECEx approv-als)DXS316Stainless steel (Onlyavailable with R & Mshaft options) Bus NetworkAS AS-Interface w(2) SPDT GOSwitches(Area class cannot be 0)*FF FoundationFieldbus w/ 0 -10K Pot*FL Foundation Fieldbus w/(2)SPDT GO Switches*FP Foundation Fieldbus w/(2)SPDT GO Switches and0-10K PotDN DeviceNet w(2) SPDTGO Switches(Area class cannot be 0)PB Profibus w/(2) SPDT GOSwitches (Area class cannotbe 0)Partial Stroke TestES ESD/PSTModule w/GO Switch(Area class cannot be 0)GO SwitchesL2 (2)GO SwitchesSPDT hermetic sealL4 (4)GO SwitchesSPDT hermetic seal(not available with pilot)Z2 (2)GO SwitchesDPDT hermetic sealZ4 (4)GO SwitchesDPDT hermetic seal, (notavailable with pilot)Mechanical Switches(Area class cannot be 2, DXR withG approval not available with pilot)M2 (2)Mech SPDTM4 (4)Mech SPDTM6 (6)Mech SPDTT2 (2)Mech DPDTK2 (2)Mech SPDTgold contactsK4 (4)Mech SPDTgold contactsProximity SwitchesR2 (2) SPDT Prox switchesR4 (4) SPDT Prox switches(R2 & R4 only available withDXR and Ex me certification)Inductive SensorsE2 (2) p+f NJ2-V3-Ninductive NAMURE4 (4) p+f NJ2-V3-Ninductive NAMURAnalog Output(Available with 2-switchoptions only for L,Z,M,K,E,T)_X 4-20mAtransmitter_H 4-20mAtransmitter with HART(Not available with switchoption T; LH not availablew/pilot valve) (LH, ZH Notavailable with DXR)Example:LH =(2) GO Switcheswith HART™ transmitter* *FF, *FL and *FP with AreaClassification "0" has an ibprotection0Intrinsically safe(Bus/sensor cannot beAS, DN, ES, PB, or _X;Requires appropriateI.S. barrier)- North AmericaClass I Div 1&2Groups A, B, C, DType 4, 4X- ATEX/IECExZone 0II2GD, T6/T4Ex ia IICEx tb IIIC, IP66/67(Foundation Fieldbus)Zone 1, Ex ib IIC T4, IP671 Explosion proof /Flame proof (DXP/S only)- North AmericaClass I Div 1Groups C, D;Class I Div 2,Groups A, B, C, D.(Groups A & B must behermetically sealed)Type 4, 4X,- ATEX/IECExZone 1II2G, II2GD, T6/T4/T3Ex d IIB+H2Ex tb IIIC IP66/67(O-Rings must be S forDUST certification)2Non-incendive(Bus/sensor must beL, Z, P, E, AS, FF,_X, _H, _Eor DN)- North AmericaClass I Div 2Groups A, B, C, D;Class II Div 2Groups F,G- ATEX (DXP/S only)II3G Ex nA nC tb, IP66/67(O-Rings must be S forDUST certification)G General Purpose Type 4, 4X(not available with DXR withmechanical switches)C Flameproof(DXS not available withvalve; Conduit entries mustbe E or M) ATEX/IECExII2G, II2GD, T6/T4/T3Ex d IICEx tb IIIC IP66/67M Flameproof (only availablewith R2 and R4 sensoroptions) (DXR only)ATEX/IECEx Zone 1, II2GDEx e mb IIC T4, Ex tb IIICT66 IP67 (Not available w/pilot)W No approvals; Type 4, 4XIP66/683 way, 180OT Port 3 positionOrdering Guide Fill in the boxes to create your 'ordering number.'3BlankNo Spool Valve A AluminumHard coat anodized(IP65)6 316 Stainless steel (IP65)BlankNo Spool Valve 2 .86 Cv (1/4" NPT Ports)3 3.7 Cv (1/2" NPT Ports) (For manual override consult factory) (Spool Valve A)(Spool Valve 6)BlankNo override1 Single Pushbutton Momentary/Latching2 Dual Pushbutton Momentary/LatchingT Partial stroke test button with lockable cover (Sensor ES only) (Not avail w/ Area Class C) (DXP/S - Conduit Entries 4 or 3 only. 
Pneumatic Accessories
- AL-M20: flow control, 1/8" NPT (1 per kit)
- AL-M21: flow control, 1/4" NPT (1 per kit)
- AL-M22: flow control, 1/2" NPT (1 per kit)
- AL-M32: exhaust protectors, 1/4" NPT (2 per kit)
- AL-M33: exhaust protectors, 1/2" NPT (2 per kit)

Miscellaneous
- ATK2: TopWorx demo case (includes Valvetop DXP, TXP, TVA, cutaway GO Switch models 20, 35 & 70, and accessories)
- ATK-ESD: ESD demo case (includes working DXP-ES1GN4B1A2T mounted to a rack and pinion actuator with control box)

Regional Certification
- Blank: no regional cert
- B: InMetro (area class 0, 1 and C only)
- N: NEPSI
- F: FISCO (bus/sensor must be FF; area class must be 0)
- K: KOSHA (DXP/S only) (area class 1 or C)
- R: EAC (DXP/S only) (O-rings must be B or S; B = gas approved, S = gas/dust approved)
- A: ANZEx, Ex d IIC, Ex d IIB+H2 (DXP/S only)
- P: PESO (India) (gas approval only)
Procedure & Checklist – Preparing HiFi SMRTbell®Libraries using the SMRTbell Express Template Prep Kit 2.0This procedure describes the construction of HiFi SMRTbell libraries for de novo assembly and variant detection applications using the SMRTbell Express Template Prep Kit 2.0 and recommended HiFi sequencing conditions using PacBio’s new Sequel® II Binding Kit 2.2. A minimum input amount of 5 µg of high-molecular weight genomic DNA is recommended for generating HiFi library yields sufficient for running multiple SMRT®Cells on the Sequel II or Sequel IIe System (Sequel II Systems). Note that final HiFi library construction yields will be dependent on the specific size-selection method employed.We recommend fragmenting the gDNA so that the target size distribution mode is between 15 kb - 18 kb. To reduce the presence of fragments >30 kb, PacBio recommends a 2-cycle shearing method on the Megaruptor 3 system. Generally, a narrower fragment size distribution results in more uniform and higher-quality HiFi data. Details regarding DNA shearing conditions (e.g., buffers and DNA sample concentration) are described in the “DNA Requirements for Shearing” section.RequiredEquipment Vendor Throughput Run TimeFemto Pulse AgilentTechnologies Process up to 11 samples per runBatch process up to 88 samples 85 minsMegaruptor 3 Diagenode Shear up to 8 samples at a time40 mins(for 1 cycle of shearing)PippinHT Sage Science Maximum of 20 samples per instrument run 2 hrsBluePippin Sage Science Maximum of 4 samples per instrument run 4.5 hrsSageELF Sage Science Maximum of 2 samples per instrument run 4.5 hrs Table 1: Recommended equipment for HiFi SMRTbell library construction for de novo assembly and variant detection applications.Required MaterialsDNA SizingFemto Pulse Agilent Technologies, Inc. 
DNA Sizing
- Femto Pulse: Agilent Technologies, Inc., P-0003-0817

DNA Quantitation
- Qubit™ Fluorometer: ThermoFisher Scientific, Q33238
- Qubit 1X dsDNA HS Assay Kit: ThermoFisher Scientific, Q33230

DNA Shearing
- Megaruptor 3 System: Diagenode, B06010003
- Megaruptor 3 Shearing Kit: Diagenode, E07010003

SMRTbell Library Preparation
- SMRTbell® Express Template Prep Kit 2.0: PacBio, 100-938-900
- AMPure® PB Beads: PacBio, 100-265-900
- SMRTbell® Enzyme Clean Up Kit 2.0 (new*): PacBio, 101-932-600
- Sequencing Primer v5 (new*): PacBio, 102-067-400
- 100% ethanol, molecular biology grade: any MLS
- Wide orifice tips (Tips LTS W-O 200UL Fltr RT-L200WFLR): Rainin, 30389241
- Lo-Bind 0.2 mL tube strips: USA Scientific, TempAssure 1402-4708
- Multi-channel pipette: Rainin, 17013810
- Magnetic separation rack: V&P Scientific, Inc., VP 772F4-1
- Thermal cycler that is 100 µL and 8-tube-strip compatible: any MLS

Size Selection (one of the following systems)
- PippinHT System: Sage Science, HTP0001
- 0.75% Agarose Gel Cassettes, Marker 75E: Sage Science, HPE7510
- BluePippin System: Sage Science, BLU0001
- 0.75% Agarose Cassettes, Marker S1: Sage Science, BLF7510
- SageELF System: Sage Science, ELF0001
- 0.75% Agarose Cassettes: Sage Science, ELD7510

Sequencing
- Sequel® II Binding Kit 2.2 (new*): PacBio, 101-894-200
- Sequel® II Sequencing Kit 2.0: PacBio, 101-820-200
- SMRT® Cell 8M Tray: PacBio, 101-389-001

* To obtain a copy of the previous version of this Procedure & Checklist that specifies use of SMRTbell Enzyme Clean Up Kit (PN 101-746-400) and Sequencing Primer v2 (PN 101-847-900), contact ****************.

HiFi Library Construction Workflow
PacBio recommends that gDNA samples be resuspended in an appropriate buffer (e.g., Qiagen Elution Buffer) before proceeding with DNA shearing.

Figure 1: Workflow for preparing HiFi libraries using the SMRTbell Express Template Prep Kit 2.0.

Reagent Handling
Several reagents in the SMRTbell Express Template Prep Kit 2.0 (shown in Table 2 below) are sensitive to temperature and vortexing. We recommend the following:
• Never leave reagents at room temperature.
• Always work on ice when preparing master mixes.
• Finger-tap followed by a quick spin prior to use.

Table 2: Temperature-sensitive reagents (reagent: where used).
- DNA Prep Additive: remove single-strand overhangs
- DNA Prep Enzyme: remove single-strand overhangs
- DNA Damage Repair Mix v2: DNA damage repair
- End Prep Mix: end-repair/A-tailing
- Overhang Adapter v3: ligation
- Ligation Mix: ligation
- Ligation Additive: ligation
- Ligation Enhancer: ligation
- SMRTbell Enzyme Clean Up Mix: nuclease treatment
- SMRTbell Enzyme Cleanup Buffer 2.0: nuclease treatment

Genomic DNA (gDNA) Quality Evaluation
This procedure requires high-quality, high-molecular-weight input gDNA with a majority of the DNA fragments >50 kb, as determined by pulsed-field gel or capillary electrophoresis. Any of the three commercially available systems listed in Table 3 below may be used to evaluate gDNA quality, but the Femto Pulse system is highly recommended for high-throughput library construction due to its ability to rapidly process multiple samples in a single run using very low amounts (<1 ng) of DNA per sample. Links to recommended procedures for each system are also provided in the table. Examples of gDNA quality assessment using Bio-Rad's CHEF Mapper (2A) and Agilent Technologies' Femto Pulse (2B) are shown in Figure 2. Lanes A3 and B1 correspond to high-quality gDNA samples that are suitable for HiFi library construction using this procedure.
Lanes A4 and B2 show degraded gDNA samples that are not suitable for use in this procedure.

Table 3: gDNA quality evaluation methods and procedures.
- Femto Pulse: Agilent Technologies, Inc.
- Bio-Rad CHEF Mapper XA Pulsed Field Electrophoresis System: Procedure & Checklist - Using the BIO-RAD® CHEF Mapper® XA Pulsed Field Electrophoresis System
- Sage Science Pippin Pulse: Procedure & Checklist - Using the Sage Science PippinPulse Electrophoresis Power Supply System

Figure 2: Evaluation of high-molecular-weight gDNA quality using two DNA sizing analysis systems. A) Bio-Rad CHEF Mapper (lane 1: 8 kb - 48 kb ladder (Bio-Rad); lane 2: 5 kb ladder (Bio-Rad); lane 3: HMW gDNA; lane 4: degraded gDNA). B) Agilent Technologies' Femto Pulse (lane 1: HMW gDNA; lane 2: degraded gDNA; lane 3: 165 kb ladder).

DNA Requirements for Shearing
Before shearing, ensure that the genomic DNA is in an appropriate buffer (e.g., Qiagen Elution Buffer, 10 mM Tris-Cl, pH 8.5, or PacBio EB buffer). If you are unsure of the buffer composition, or if the gDNA is not in Elution Buffer, perform a 1X AMPure PB bead purification followed by elution with Elution Buffer or an equivalent low-salt buffer (i.e., 10 mM Tris-Cl, pH 8.5 - 9.0).

PacBio highly recommends Diagenode's Megaruptor 3 system for shearing gDNA. The Megaruptor 3 system allows up to 8 gDNA samples to be processed simultaneously with a consistent fragment size distribution across multiple hydropore-syringes. Furthermore, the Megaruptor 3 system generates a narrower size distribution than the g-TUBE device (Covaris).

Shearing Using Diagenode's Megaruptor 3 System
To maximize HiFi yield per SMRT Cell, PacBio recommends fragmenting the gDNA to a size distribution mode between 15 kb and 18 kb for human whole genome sequencing. Libraries with a size distribution mode larger than 20 kb are not recommended for HiFi sequencing. Recommended library insert size distributions for different WGS applications are summarized in Table 4 below.

Table 4: Library size recommendations for human variant detection and de novo assembly.
- Human variant detection: 15 - 18 kb
- Human de novo: 15 - 18 kb
- Plant/animal de novo: 15 - 20 kb

To shear gDNA on the Megaruptor 3 system, use a two-cycle shear method, which requires running a second round of shearing immediately following the first fragmentation step in the same hydropore-syringe. The recommended concentration is 83.3 ng/µL (5 µg of input DNA in 60 µL Elution Buffer).

The DNA shearing guidelines below have been tested by PacBio on the Megaruptor 3 system only. The response of individual gDNA samples to the shearing recommendations described below may differ; therefore, performing a small-scale test shear is highly recommended, even on the Megaruptor 3 system.

For the Megaruptor and Megaruptor 2 systems, shearing optimization is necessary before proceeding with this Procedure & Checklist. The shearing procedure described in the "Shearing Using Diagenode's Megaruptor 3 System" section is not compatible with the Megaruptor or Megaruptor 2 systems; for those systems, follow Diagenode's DNA shearing recommendations described in their manual. For additional guidance, contact Technical Support or your local FAS.
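As a quick check of the recommended shearing concentration above (simple arithmetic on the stated figures, added here for clarity):

$$\frac{5\ \mu\text{g}}{60\ \mu\text{L}} = \frac{5000\ \text{ng}}{60\ \mu\text{L}} \approx 83.3\ \text{ng}/\mu\text{L}.$$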
The g-TUBE device generates a broader DNA fragment size distribution compared to the Megaruptor 3 system. Note that HiFi read quality and overall HiFi data yield may be reduced due to the residual presence of large DNA fragments generated by g-TUBEs. For additional guidance, contact Technical Support or your local FAS.

Figure 3: Examples of human genomic DNA samples sheared to a target 15 kb - 18 kb size distribution mode using a 2-cycle shear method on the Megaruptor 3 system.

Prepare SMRTbell Libraries
Always work on ice throughout the library construction process. To process multiple samples at a time, the following equipment is required:
• Lo-Bind tube strips
• Multi-channel pipette
• Wide-bore tips
• Magnetic rack compatible with tube strips
• Thermocycler compatible with tube strips

Remove Single-Strand Overhangs
The sample volume recovered from the Megaruptor 3 system after shearing is used directly in the single-strand overhang digestion step. Before proceeding, ensure that the sheared DNA is in Elution Buffer or an equivalent low-salt buffer (i.e., 10 mM Tris-Cl, pH 8.5 - 9.0). In this step, DNA Prep Additive is diluted first, followed by digestion. Scale up the reaction volumes for digestion if working with multiple samples.
1. Prepare the DNA Prep Additive. The DNA Prep Additive is diluted with Enzyme Dilution Buffer to a total volume of 5 µL. This amount is sufficient for processing 1 to 4 samples. The volume may not be sufficient for 5 samples due to pipetting errors. We recommend scaling up the dilution volume based on the number of samples to be processed (example: prepare 2X volume for 8 samples and 4X volume for 16 samples). Note: The diluted DNA Prep Additive should be used immediately and should not be stored.
2. Prepare the digestion by following the reaction table below. For multiple samples, prepare a master mix, followed by addition of 10.0 µL of master mix to each sheared DNA sample.
3. Add 10.0 µL of the above master mix to the tube strips containing 45.0 µL - 53.0 µL of sheared DNA. The total volume in this step is 55.0 µL - 63.0 µL.
4. Using a multi-channel pipette, mix the reaction wells by pipetting up and down 10 times with wide-orifice pipette tips.
5. Spin down the contents of the tube strips with a quick spin in a microfuge.
6. Incubate at 37°C for 15 minutes, then return the reaction to 4°C.
7. Proceed to the next step.

Repair DNA Damage
To each Reaction Mix 1, add 2.0 µL of DNA Damage Repair Mix v2.
1. Mix the reaction well by pipetting up and down 10 times with wide-orifice pipette tips.
2. Spin down the contents of the tube strips with a quick spin in a microfuge.
3. Incubate at 37°C for 30 minutes, then return the reaction to 4°C.
4. Proceed to the next step.

End-Repair/A-Tailing
To each Reaction Mix 2, add 3.0 µL of End Prep Mix.
1. Mix the reaction well by pipetting up and down 10 times with wide-orifice pipette tips.
2. Spin down the contents of the tube strips with a quick spin in a microfuge.
3. Incubate at 20°C for 10 minutes.
4. Incubate at 65°C for 30 minutes, then return the reaction to 4°C.
5. Proceed to the next step.

Adapter Ligation
In this step, 5.0 µL of Overhang Adapter is added to each Reaction Mix 3 (from the previous step). Then, 32.0 µL of the ligase master mix is added to each Reaction Mix 3/Adapter Mix for incubation. Always work on ice.
1. To each Reaction Mix 3, add 5.0 µL of Overhang Adapter.
2. Mix the reaction well by pipetting up and down 10 times with wide-orifice pipette tips. Leave the tube strips on ice.
3. Prepare a master mix containing Ligation Enhancer, Ligation Additive and Ligation Mix using the table below.
4. Mix the reaction well by pipetting up and down 10 times with wide-orifice pipette tips. It is important to mix well.
5. To the Reaction Mix 3/Adapter Mix, add 32.0 µL of the Ligase Master Mix. The total volume in this step is 97.0 µL - 105.0 µL.
6. Mix the reaction well by pipetting up and down 10 times with wide-orifice pipette tips. It is important to mix well.
7. Incubate at 20°C for 1 hour. Optional: the ligation reaction may also be left at 20°C overnight.
8. Proceed to the next step.
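As a consistency check on the stated reaction volumes (arithmetic on the steps above, added for clarity): starting from 45.0 - 53.0 µL of sheared DNA, the running total is

$$(45\ \text{to}\ 53) + 10 + 2 + 3 + 5 + 32 = 97\ \text{to}\ 105\ \mu\text{L},$$

which matches the 97.0 µL - 105.0 µL total quoted in step 5 of the adapter ligation.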
Purify SMRTbell Library Using 1.0X AMPure® PB Beads

Nuclease Treatment of SMRTbell Library
To each library sample, add the nuclease mix to remove damaged SMRTbell templates.
1. Prepare a master mix of the Enzyme Cleanup Mix and Buffer.
2. Mix the reaction well by pipetting up and down 10 times with wide-orifice pipette tips. It is important to mix well.
3. Spin down the contents of the tube strips with a quick spin in a microfuge.
4. To each 15.0 µL of sample, add 55.0 µL of Nuclease Master Mix. The total reaction volume at this step is 70.0 µL.
5. Mix the reaction well by pipetting up and down 10 times with wide-orifice pipette tips. It is important to mix well.
6. Incubate at 37°C for 30 minutes and store on ice immediately.
7. Spin down the contents of the tube strips with a quick spin in a microfuge.
8. Proceed directly and immediately to the AMPure PB bead purification step below. Do not store samples at this stage. Do not let samples sit for long periods of time. Always work on ice.

Purify SMRTbell Library Using 1.0X AMPure® PB Beads

Size Selection of SMRTbell Libraries
For high-throughput whole genome sequencing applications, PacBio highly recommends the PippinHT system (Sage Science) for size selection of SMRTbell libraries for HiFi sequencing. Typical recovery yields are 35% - 50% and are highly dependent on the size distribution of the starting SMRTbell library.

Size Selection Using the PippinHT System
Verify that your PippinHT system software is up to date and follow the procedure below to remove SMRTbell templates <10 kb.

Size Selection Using the BluePippin System
Sage Science's BluePippin system may also be used for size selection of HiFi SMRTbell libraries. Verify that your BluePippin system software is up to date and follow the procedure below to remove SMRTbell templates <10 kb using the BluePippin system. Typical recovery yields are highly dependent on the size distribution of the starting SMRTbell library. For the latest BluePippin system User Manual and guidance on size-selection protocols, contact Sage Science ().

Size Selection Using the SageELF System
Sage Science's SageELF system may also be used to fractionate SMRTbell libraries for HiFi whole genome sequencing applications. Verify that your SageELF system software is up to date and follow the size selection procedure below.
For the latest SageELF User Manual and guidance on size-selection protocols, contact Sage Science ().

6. Set up the run protocol:
   - In the "Protocol Editor" tab, click on the "New Protocol" button.
   - Select "0.75% 1-18kb v2" in the cassette definition menu.
   - Select "size-based" for the separation mode.
   - Enter 3450 in the "Target Value" field and move the bar slider to select well #12.
   - Save as a new protocol.
   - On the main screen, clear previous run data, select the cassette description, cassette definition and protocol, and enter the sample ID(s).
   - In the Nest Selector, select the cartridge that will be run.
7. Start the run.
8. Once the run is complete (approximately 4.5 hours), collect 30 µL of the respective fractions from the elution wells. Fractions of interest are typically ~11 kb, ~13 kb, ~15 kb and ~17 kb.
9. Check the sizes of all 12 fractions by loading them on a Femto Pulse. To determine the average library size, perform a smear analysis by selecting the region of interest, defining the start and end points of the fractions.
10. Pool together fractions that have an average library size of 10 - 20 kb.
11. Proceed to the AMPure PB bead purification step.

Purify Size-Selected HiFi Library Fractions with 1.0X AMPure® PB Beads

Sequencing Preparation
See Quick Reference Card - Loading and Pre-Extension Recommendations for Sequel II/IIe Systems.

For Research Use Only. Not for use in diagnostic procedures. © Copyright 2020 - 2021, Pacific Biosciences of California, Inc. All rights reserved. Information in this document is subject to change without notice. Pacific Biosciences assumes no responsibility for any errors or omissions in this document. Certain notices, terms, conditions and/or use restrictions may pertain to your use of Pacific Biosciences products and/or third-party products. Please refer to the applicable Pacific Biosciences Terms and Conditions of Sale and to the applicable license terms at /licenses.html. Pacific Biosciences, the Pacific Biosciences logo, PacBio, SMRT, SMRTbell, Iso-Seq and Sequel are trademarks of Pacific Biosciences. Femto Pulse and Fragment Analyzer are trademarks of Agilent Technologies. All other trademarks are the sole property of their respective owners.

Revision History
- Version 01 (September 2019): Initial release.
- Version 02 (December 2019): Internal revision with no content change (not uploaded to website).
- Version 03 (January 2020): On page 1, changed "HiFi reads" to just "Reads". On page 12, under Repair DNA Damage, corrected "remove single strand overhangs" to "repair DNA damage". On page 13, corrected "remove single strand overhangs" to "adapter ligation".
- Version 04 (April 2021): Updated for SMRTbell Enzyme Clean Up Kit 2.0 and Sequencing Primer v5.
- Version 05 (August 2021): Removed SMRT Link Sample Setup and Run Design tables. Added reference to QRC.
A Method for Recognizing Women's-Wear Sleeve Types Based on Improved YOLOv5 and ResNet50
Authors: Cao Hanying, Tuo Jiying
Source: Advanced Textile Technology, 2024, Issue 1

Abstract: To address problems such as the wide variety of women's-wear sleeve types, the difficulty of feature recognition, and unsatisfactory detection results, an automatic recognition method for women's-wear sleeve shapes is proposed, drawing on the association information among different sleeve types and combining a YOLOv5 object detection network improved with an attention mechanism and a ResNet50 residual network.
First, garment sample images were collected from e-commerce platforms, the sleeve types were categorized and labeled by major classes of length and minor classes of shape, and a sleeve-type dataset containing 3,600 images was built. Second, a women's-wear sleeve-type recognition method was designed by combining the attention-improved YOLOv5 object detection network with the ResNet50 residual network. Finally, the models were trained on the sleeve-type dataset, and the sleeve-type recognition performance was verified experimentally.
The results show that the deep learning method combining the improved YOLOv5 and ResNet50 can effectively recognize women's-wear sleeve types, with an overall recognition accuracy of about 93.3%.
The proposed sleeve-type recognition method is accurate and convenient; it enables fast classification and detection of large numbers of garment styles, improves the efficiency of garment design, promotes the application of artificial intelligence technology in the field of garment design, and supports the development of intelligent manufacturing and e-commerce in China.
Keywords: women's-wear sleeve type; deep learning; YOLOv5; attention mechanism; ResNet50
CLC number: TS941.26; Document code: A; Article ID: 1009-265X(2024)01-0045-09

Sleeves are an important component of a garment and have a significant influence on both the overall styling and the pattern design of clothing.
With the rapid development of digital garment design, intelligent manufacturing and e-commerce, garment enterprises and e-commerce platforms have accumulated tens of thousands of garment style images [1-3].
How to make effective use of these sleeve data, so that garment designers can quickly obtain design elements, improve design and development efficiency, and efficiently connect design with production, and so that consumers can more quickly receive personalized recommendations and carry out self-directed design, is one of the important development directions in the field of garment design [4-5].
Applying artificial intelligence technology to the automatic recognition of garment sleeve types therefore has practical significance and broad application value for garment design, intelligent manufacturing and e-commerce.
In recent years, research on the detection and recognition of garment images has focused mainly on recognizing the style of the garment as a whole.
For example, to classify suit images on e-commerce platforms accurately and quickly, Liu Zhengdong et al. [6] carried out suit-recognition research using an SSD method based on size segmentation and negative-sample augmentation, achieving a recognition accuracy of over 90%.
Information Kit for Conversions from ProSystem fx Engagement to Workpapers CS

This document provides information about the data converted from ProSystem fx® Engagement to Workpapers CS™.

Contents
- What to expect from the data conversion
- Conversion considerations and recommendations
- Installing the conversion program
- Converting the client data before import into Workpapers CS
- Data transferred during conversion (Chart of Accounts and balances; Grouping schedules; Transactions)
- Engagement-related data transferred during conversion (Engagement information; Folder information; Workpaper information; ProSystem fx Engagement Excel and Word demographic formulas; ProSystem fx Engagement Excel and Word link formulas)
- Conversion notes and exceptions
- Items not converted
- Data Conversion Report
- Getting help (Help & How-To Center; Product support)

What to expect from the data conversion
The overall objective of the data conversion from ProSystem fx Engagement is to provide accurate, comprehensive Workpapers CS data to help you move forward with Workpapers CS.
Important! Due to differences between applications, some data must be modified during the conversion process and some data cannot be converted. Additions and/or modifications may be required to exactly duplicate engagement and workpaper information in Workpapers CS after the conversion.

Conversion considerations and recommendations
Please review the following before beginning the conversion process.
▪ We recommend that you convert a smaller, easy-to-process client first. This will help you become familiar with the conversion options in Workpapers CS.
▪ Some data items from ProSystem fx Engagement are not converted because there is no exact equivalent in Workpapers CS.

Installing the conversion program
Click this link to download a ZIP file and install the ProSystem fx Engagement to Workpapers CS conversion utility.

Converting the client data before import into Workpapers CS
Important! When you convert large or complex sets of engagement files for a ProSystem fx Engagement client, you should allow a significant amount of time for the conversion and import. Please wait for the process to be completed before converting another client.
After installing the ProSystem fx Engagement to Workpapers CS conversion program, use the following steps to create the converted data files for import into Workpapers CS.
The conversion process does not modify existing client data in ProSystem fx Engagement. However, we strongly recommend that you create a backup of the original client before you process any clients.
1. Verify that the ProSystem fx Engagement binder has been synchronized with the Local File Room and that any instances of this binder or its workpapers are closed on your workstation.
2. To start the conversion program, right-click the CS Data Conversions icon on your desktop and choose Run as Administrator. If you did not install the shortcut, click Start on the Windows taskbar and then choose All Programs > CS Professional Suite > CS Data Conversions.
3. In the Select competitor field, select ProSystem fx from the drop-down list.
4. In the Select the export location for the converted files field, click the Browse button to browse to the location where the import files should be placed until imported into Workpapers CS.
5. Click Start conversion.
6. If prompted to close all open sessions of Word® and Excel®, close those sessions.
7. If prompted to select your user, select the login for the Local File Room for which you want to convert data.
This dialog will open only if multiple logins exist on the workstation.
8. In the Processing Type dialog, click either Single or Multiple. Single converts just one client at a time and allows for greater customization. Multiple allows for multiple clients, but the application makes more assumptions about the clients during the conversion.
9. In the Select a Client dialog or Select client(s) dialog, select the client(s) you want to convert, and click Continue. Note: This dialog lists all available clients for conversion from ProSystem fx Engagement. Your selection of Single or Multiple in step 8 determines whether you can select one or multiple clients.
10. In the Engagement selection dialog, which lists all of the binders / engagements for the selected client(s), select an engagement type for each binder you want to convert, and then click Continue.
11. If you selected a single conversion, an optional Trial Balance selection dialog may open if multiple Trial Balances existed in ProSystem fx Engagement. Workpapers CS supports only one Trial Balance per engagement. Select the desired Trial Balance. If you selected multiple conversions, the last accessed Trial Balance will be used.
12. Click Continue.
13. In the Account classification selection dialog, select the ProSystem fx Engagement group that contains the account classification you want to use in Workpapers CS. Note: If you click Skip, the program will not convert any Account Classifications for the Trial Balance Accounts.
14. In the Tax group selection dialog, select the set of tax codes to convert for your Trial Balance. If you selected multiple conversions, the last tax year will be used.
15. Click Continue to begin the data conversion process, and then follow the prompts that appear on the screen.
16. At the prompt indicating the conversion process is complete, click OK to begin importing the converted data into Workpapers CS.
17. In Accounting CS, choose File > Import > ProSystem fx Engagement to open the ProSystem fx Engagement conversion wizard.
18. Source Data: Select the location where your ProSystem fx Engagement export files are stored and click Next.
19. Source Data - Clients: Mark the checkbox next to the ProSystem fx Engagement client (or multiple clients) that you want to convert, and then click Next.
20. Staff: Select the Accounting CS Workpapers staff member (or multiple staff members) to map to each corresponding ProSystem fx Engagement staff member in the list. Note: If the appropriate staff member is not available from the drop-down list in the Accounting CS Staff column, exit the wizard and add that staff member in the Setup > Firm Information > Staff screen, and then restart the conversion process.
21. Click Finish to complete the conversion.

Data transferred during conversion
The following tables detail the ProSystem fx Engagement data that converts to Workpapers CS.

Chart of Accounts and balances
(ProSystem fx Engagement: Trial Balance > Chart of Accounts. Workpapers CS: Actions > Enter Trial Balance.)
- Account # converts to Account number
- Description converts to Description
- Report converts to Report
- Budget converts to Budget
- Proposed converts to Potential
- UNADJ converts to Unadjusted (the unadjusted balance is converted only for the current period)
- ADJ converts to Adjusted (prior-year and prior-period balances only)
- FTAX converts to Tax (prior-year and prior-period balances only)
- OBAL1 converts to Other (prior-year and prior-period balances only)

(ProSystem fx Engagement: Trial Balance > Account Groupings > Tax Code. Workpapers CS: Account Grouping.)
- Tax Code converts to Tax Code (tax codes and tax code assignments for clients with a year end of 2012 or later)

(ProSystem fx Engagement: Trial Balance > Account Groupings > Group Account Grouping > Advanced.)
- AccClass converts to Classification code
- RatioClass converts to Classification subcode
(In ProSystem fx Engagement, classification and ratios are set on a group-by-group basis. During the conversion process, you will be asked from which group the classification and ratios should be pulled.)

Grouping schedules
(ProSystem fx Engagement: Trial Balance > Account Groupings > Group Account Grouping. Workpapers CS: Enter Trial Balance > Account Groupings.)
- Account group name converts to Grouping
- Code converts to Code
- Code Description converts to Code Description
- Subcode converts to Subcode
- Subcode Description converts to Subcode Description

Transactions
(ProSystem fx Engagement: Trial Balance > Journal Entry Summary. Workpapers CS: Actions > Enter Transactions.)
- JE# converts to Reference
- Description converts to Description
- Distributions (Account, Amount, Description) convert to Distributions (Account, Account Description, Amount)
- Reversing journal entry converts to Auto-reverse next period
- Type (Adjusting Journal Entries, Reclassifying Journal Entries, Federal Tax Journal Entries, Other Journal Entries, Proposed Journal Entry) converts to Type (Adjusting, Reclassifying, Tax, Other, Potential). Journal entries are posted to the specific engagement for which the entries were intended.

Engagement-related data transferred during conversion
The following information is provided to identify ProSystem fx Engagement items that are automatically converted to Workpapers CS items.

Engagement information
(ProSystem fx Engagement: Binder Properties. Workpapers CS: Engagement Properties.)
- Name converts to Engagement binder name (if you convert multiple binders for a single client, the binders must have unique names)
- Type converts to Type (you must select the new Workpapers CS type during the conversion)
- Entity: in the ProSystem fx Engagement conversion, Entity is determined by the selection on the Tax Group selection screen. This data is mapped to the Client > Accounting Information tab.
- Period sequence converts to Period Frequency
(ProSystem fx Engagement: Binder Index View. Workpapers CS: Binder Tree.)
- Engagement tree structure converts to Engagement tree structure

Folder information
(ProSystem fx Engagement: Tab Properties. Workpapers CS: Engagement tree structure.)
- Index # and Name convert to Folder name (Index # and Name are combined to comprise the Workpapers CS folder name)

Workpaper information
(ProSystem fx Engagement: Workpaper Properties. Workpapers CS: Workpaper Properties.)
- Name converts to Name
- Index # converts to Reference
- Sign Off: Preparers converts to Preparer (you can rename the Preparer, Reviewer, and Reviewer 2 names in Workpapers CS, if desired)
- Sign Off: 1st Reviewers converts to Reviewer (you can rename the Preparer, Reviewer, and Reviewer 2 names in Workpapers CS, if desired)
- Sign Off: 2nd Reviewers: conversion depends on staff mapping and whether signoffs are set up in Workpapers CS beforehand
- Sign Off Initials converts to Sign Off Initials (initials and date will display under the signed-off heading in Workpapers CS)

ProSystem fx Engagement Excel and Word demographic formulas
The following list gives the applicable ProSystem fx Engagement name functions and the equivalent formula variables in Workpapers CS.
- Binder Name converts to Engagement Name
- Binder Due Date converts to Completion Date
- Binder Type converts to Engagement Type
- Binder Report Release Date converts to Report Release Date
- Workpaper Name converts to Workpaper Name
- Workpaper Index converts to Workpaper Reference
- Current Period End converts to Current Period Date

ProSystem fx Engagement Excel and Word link formulas
Excel® and Word® link formulas do not convert.

Conversion notes and exceptions
This section details conversion notes and exceptions.
▪ Engagement tree structure: The order and appearance of the engagement tree structure in Workpapers CS after the conversion may differ from the ProSystem fx Engagement binder. Please note that all items are converted to the correct folder locations within the engagement. To provide maximum flexibility, Workpapers CS does not automatically sort folders and workpapers.
▪ Excel and Word workpaper add-ins, macros, and links: Excel workpapers are modified during the conversion process to remove add-ins, macros, and/or links.
▪ Manual workpapers: Manual workpapers are converted as text documents.
▪ Tax Codes: If UltraTax CS® tax codes are desired during the conversion, a translation of ProSystem fx Engagement tax codes to UltraTax CS is available. Only one Tax Code Group will convert.
▪ Workpaper references: Workpapers CS requires that workpapers have reference values. Workpapers without an index value are assigned a reference value during the conversion. You may rename the workpaper reference, if desired. If a duplicate reference exists in a folder, the duplicate references will be renamed.
▪ Finalized binders: Finalized binders will be converted as active, unfinalized binders. We recommend that you convert only active / unfinalized binders.

Items not converted
This section details items not converted.
▪ Workpapers not in a standard binder folder: This includes workpapers within the Unfiled Workpapers, Conflicts, Incompatible Workpapers, Published Workpapers, and Trash folders.
To convert these workpapers, you must move the workpapers into a standard binder folder before the conversion process.
▪ Trial Balance: consolidated trial balances
▪ Firm information
▪ Client information
▪ Engagement and workpaper password information
▪ Engagement and workpaper history
▪ Workpaper notes and templates
▪ Staff
▪ M3 tax codes

Data Conversion Report
The data conversion from ProSystem fx Engagement to Workpapers CS creates a report for each engagement converted. The Data Conversion Report lists certain modifications made during the conversion process, such as truncations, abbreviations, and so on. Most items in the report are informational and do not require immediate attention.
To access the report, locate and open YYYYYY.html, where YYYYYY is the binder name. The report is placed in the user's Documents folder.

Getting help
Help & How-To Center
For answers to questions on using Workpapers CS, access the Help & How-To Center by clicking the Help link on the toolbar. For more information, including sample searches, see Finding answers in the Help & How-To Center.

Product support
From the Support Contact Information page on our website, you can complete a form to send a question to our Support team. For additional product support, visit the Support section of our website. You can also access our Support website from Workpapers CS by choosing Help > Additional Resources > General Support Information.
Package 'voson.tcn'
October 12, 2022

Title: Twitter Conversation Networks and Analysis
Version: 0.5.0
Description: Collects tweets and metadata for threaded conversations and generates networks.
Type: Package
Imports: dplyr, httr, jsonlite, openssl, progress, rlang, stringr, tibble, tidyr
Depends: R (>= 4.1)
Encoding: UTF-8
Maintainer: Bryan Gertzel <*********************.au>
License: GPL (>= 3)
RoxygenNote: 7.2.1
NeedsCompilation: no
URL: https:///vosonlab/voson.tcn
BugReports: https:///vosonlab/voson.tcn/issues
Author: Bryan Gertzel [aut, cre], Robert Ackland [ctb] (<https:///0000-0002-0008-1766>), Francisca Borquez [ctb]
Repository: CRAN
Date/Publication: 2022-08-30 13:50:02 UTC

R topics documented: tcn_counts, tcn_network, tcn_threads, tcn_token, tcn_tweets

tcn_counts    Get conversation tweet counts

Description
    Return the tweet count per day, hour or minute for conversation ids.

Usage
    tcn_counts(ids = NULL, token = NULL, endpoint = "recent", start_time = NULL,
               end_time = NULL, granularity = "day", retry_on_limit = TRUE,
               verbose = TRUE)

Arguments
    ids             List. Conversation ids.
    token           List. Twitter API tokens.
    endpoint        Character string. Twitter API v2 search endpoint. Can be either "recent" for
                    the last 7 days or "all" if the user's app has access to historical
                    "full-archive" tweets. Default is "recent".
    start_time      Character string. Earliest tweet timestamp to return (UTC in ISO 8601 format).
                    If NULL, the API will default to 30 days before end_time. Default is NULL.
    end_time        Character string. Latest tweet timestamp to return (UTC in ISO 8601 format).
                    If NULL, the API will default to now - 30 seconds. Default is NULL.
    granularity     Character string. Granularity or period for tweet counts. Can be "day",
                    "minute" or "hour". Default is "day".
    retry_on_limit  Logical. When the API v2 rate-limit has been reached, wait for the reset time.
                    Default is TRUE.
    verbose         Logical. Output additional information. Default is TRUE.

Value
    A dataframe of conversation ids and counts.

Note
    A rate-limit of 300 requests per 15-minute window applies. If a conversation count request
    contains more than 31 days' worth of results, it will use more than one request, as API
    results are paginated.

Examples
    ## Not run:
    # get tweet count for conversation thread over approximately 7 days
    counts <- tcn_counts(
      ids = c("xxxxxx", "xxxxxx"),
      token = token,
      endpoint = "all",
      start_time = "2020-09-30T01:00:00Z",
      end_time = "2020-10-07T01:00:00Z",
      granularity = "day")

    # total tweets per conversation id for period
    counts$counts |> dplyr::count(conversation_id, wt = tweet_count)
    ## End(Not run)

tcn_network    Generate network from conversation tweets

Description
    Creates the nodes and edges for a Twitter conversation network. An "activity" type network
    with tweets as nodes, or an "actor" type with users as nodes, can be created.

Usage
    tcn_network(data = NULL, type = "actor")

Arguments
    data  Named list. Dataframes of threaded conversation tweets and users collected by the
          tcn_threads function.
    type  Character string. Type of network to generate, either "actor" or "activity". Default is
          "actor".

Value
    Named list of dataframes for network nodes and edges.

Examples
    ## Not run:
    # generate twitter conversation network
    network <- tcn_network(tweets, "activity")

    # network nodes and edges
    network$nodes
    network$edges
    ## End(Not run)
tcn_threads    Get threaded conversation tweets

Description
    Collects tweets that share the same Twitter conversation ID as the supplied tweets.

Usage
    tcn_threads(tweet_ids = NULL, token = NULL, endpoint = "recent",
                start_time = NULL, end_time = NULL, annotations = FALSE,
                max_results = NULL, max_total = NULL, retry_on_limit = TRUE,
                skip_list = NULL, verbose = TRUE)

Arguments
    tweet_ids       List. Tweet ids of any tweets that are part of the threaded conversations of
                    interest. Also accepts a list of tweet URLs or a mixed list.
    token           List. Twitter API tokens.
    endpoint        Character string. Twitter API v2 search endpoint. Can be either "recent" for
                    the last 7 days or "all" if the user's app has access to historical
                    "full-archive" tweets. Default is "recent".
    start_time      Character string. Earliest tweet timestamp to return (UTC in ISO 8601 format).
                    If NULL, the API will default to 30 days before end_time. Default is NULL.
    end_time        Character string. Latest tweet timestamp to return (UTC in ISO 8601 format).
                    If NULL, the API will default to now - 30 seconds. Default is NULL.
    annotations     Logical. Include the tweet context annotation field in results. If TRUE,
                    Twitter imposes a max_results limit of 100. Default is FALSE.
    max_results     Numeric. Set the maximum number of tweets to collect per API v2 request. Up to
                    100 tweets for standard or 500 tweets for academic projects can be collected
                    per request. Default is NULL to set the maximum allowed.
    max_total       Numeric. Set a maximum total number of tweets to collect as a cap-limit
                    precaution. Will only be accurate to within one search request count (being
                    max_results, or 100 tweets for standard or 500 tweets for academic projects).
                    This will not be ideal for most cases, as an API search generally retrieves
                    the most recent tweets first (reverse-chronological order); therefore the
                    beginning part of the last conversation thread may be absent. Default is NULL.
    retry_on_limit  Logical. When the API v2 rate-limit has been reached, wait for the reset time.
                    Default is TRUE.
    skip_list       Character vector. List of tweet conversation IDs to skip searching if found.
    verbose         Logical. Output additional information. Default is TRUE.

Value
    A named list. Dataframes of tweets, users, errors and request metadata.

Examples
    ## Not run:
    # get twitter conversation threads by tweet ids or urls
    tweet_ids <- c("xxxxxxxx", "https:///xxxxxxxx/status/xxxxxxxx")
    tweets <- tcn_threads(tweet_ids, token, endpoint = "recent")

    # get twitter conversation threads by tweet ids or urls using the historical
    # endpoint, starting from May 01, 2021
    tweet_ids <- c("xxxxxxxx", "https:///xxxxxxxx/status/xxxxxxxx")
    tweets <- tcn_threads(tweet_ids,
                          token = token,
                          endpoint = "all",
                          start_time = "2021-05-01T00:00:00Z")
    ## End(Not run)

tcn_token    Get a Twitter API access token

Description
    Assigns a bearer token to the token object, or retrieves a bearer token from the Twitter API
    using a Twitter app's consumer keys.

Usage
    tcn_token(bearer = NULL, consumer_key = NULL, consumer_secret = NULL)

Arguments
    bearer           Character string. App bearer token.
    consumer_key     Character string. App consumer key.
    consumer_secret  Character string. App consumer secret.

Value
    Named list containing the token.

Examples
    ## Not run:
    # assign bearer token
    token <- tcn_token(bearer = "xxxxxxxx")

    # retrieve twitter app bearer token
    token <- tcn_token(consumer_key = "xxxxxxxx",
                       consumer_secret = "xxxxxxxx")
    ## End(Not run)

tcn_tweets    Get tweets

Description
    Collects tweets for a list of tweet ids.

Usage
    tcn_tweets(tweet_ids = NULL, token = NULL, referenced_tweets = FALSE,
               retry_on_limit = TRUE, clean = TRUE, verbose = TRUE)

Arguments
    tweet_ids          List. Tweet ids or tweet URLs.
    token              List. Twitter API tokens.
    referenced_tweets  Logical. Also retrieve tweets referenced by requested tweets. Default is
                       FALSE.
    retry_on_limit     Logical. When the API v2 rate-limit has been reached, wait for the reset
                       time. Default is TRUE.
    clean              Logical. Clean results.
    verbose            Logical. Output additional information. Default is TRUE.

Value
    A named list. Dataframes of tweets, users, errors and request metadata.

Examples
    ## Not run:
    # get tweets by tweet ids or urls
    tweet_ids <- c("xxxxxxxx", "https:///xxxxxxxx/status/xxxxxxxx")
    tweets <- tcn_tweets(tweet_ids, token)
    ## End(Not run)
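Taken together, the functions above form a simple pipeline: authenticate, collect a conversation thread, then build a network. The sketch below strings the documented calls together as a minimal illustration; it is not part of the package manual, and the bearer token and tweet URL are placeholders you must replace with your own.

# minimal sketch of a voson.tcn workflow using only the functions documented above
library(voson.tcn)

# authenticate with an app bearer token (placeholder value)
token <- tcn_token(bearer = "xxxxxxxx")

# collect the threaded conversation for one seed tweet (placeholder URL)
tweets <- tcn_threads("https://twitter.com/xxxxxxxx/status/xxxxxxxx",
                      token = token, endpoint = "recent")

# build an actor network (users as nodes) and inspect it
network <- tcn_network(tweets, type = "actor")
head(network$nodes)
head(network$edges)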
SAMPLERS

Portfolio Sampler
  Description: Use for a customer's first introduction to the program. Includes: Program Guide, System 44 Research Foundation Paper, SPI Assessment Paper, Product Tour CD, and Print Component Sampler.
  Item #: 159280. Availability at JC: 9/1/08. Availability on the Portal: N/A. Unit cost: $12.50*

Single Copy Sampler (Box)
  Description: Use at conferences, events and with customers who have already been introduced to the program and are serious about purchasing. Includes the Program Guide, System 44 Research Foundation Paper, SPI Assessment Paper, Product Tour CD, Component Map, Poster, 44Book, Decodable Digest, Teaching Resources Guides, Screening, Assessment and Reporting Guide and an assortment of Student Library Books.
  Item #: 159279. Availability at JC: 9/1/08. Availability on the Portal: N/A. Unit cost: $34.00*

PROMOTIONAL MATERIALS

Program Guide
  Description: Comprehensive overview of the program. Great introduction for customers—perfect leave behind. Total pages: 36.
  Item #: 153045. Availability at JC: Now! Availability on the Portal: Now!

Product Tour CD
  Description: Content currently featured on the System 44 Web site. This CD allows customers to get an overview of the program, and hear from Marilyn Adams and Ted Hasselbring. (This is a Flash® file—NO Internet connection needed.) Tabs have been added for easier navigation.
  Item #: 159589. Availability at JC: Now! Availability on the Portal: N/A

Simulator CD
  Description: Use this CD to better familiarize yourself with the software. You will be able to access all of the zones—it is representative of a student experience. May also be used with customers for demonstration purposes, but do not hand out the CD. We created a limited quantity for internal use only. This is NOT in JC.
  Item #: N/A. Availability at JC: N/A. Availability on the Portal: N/A

Component Map
  Description: Visual overview of the program including teacher and student materials, and software applications. Each edition is highlighted.
  Item #: 0-545-11901-4. Availability at JC: Now! Availability on the Portal: Now!

Teaching Guide Sampler (Book) "Tiny TE"
  Description: Includes several lessons from the Teaching Guide. Provides customers with an overview of the entire book and highlights several lessons. We are not providing copies of the complete TG; use this item for any customers who want to know what's covered in the TG. Total pages: 60.
  Item #: 159292. Availability at JC: Now! Availability on the Portal: N/A

*Unit cost based on initial quote. Final cost may differ slightly.

RESEARCH

System 44 Research Foundation Paper
  Description: Thorough explanation of the research foundations behind System 44.
  Item #: 159586. Availability at JC: Now! Availability on the Portal: Now!

SPI Assessment Paper
  Description: Underscores the importance of an assessment like SPI and its effectiveness.
  Item #: 159328. Availability at JC: Now! Availability on the Portal: Now!

SPI Technical Manual
  Description: Details how SPI was validated, including the study that correlates SPI to the TOWRE and Woodcock-Johnson.
  Item #: 0-545-11679-1. Availability at JC: 8/1/08. Availability on the Portal: Now!

System 44 Formative Research Paper
  Description: Details the results of studies conducted during the development of System 44.
  Item #: TK. Availability at JC: 9/15/08. Availability on the Portal: 9/1/08.

PROMOTIONAL MATERIALS (continued)

Print Component Sampler (Book)
  Description: Includes selections from the Teaching Guide, Decodable Digest, 44Book, Student Libraries and Teaching Resources Guide. Also features selections from the Screening, Assessment and Reporting Guide, highlighting the System 44 and SPI reports. Total pages: 76.
  Item #: 159293. Availability at JC: Now! Availability on the Portal: N/A

The System of Sounds & Spelling Poster
  Description: Features a wonderful visual of the sounds and spellings that are taught in the program. Great for hanging during an event or presentation.
  Item #: 155626. Availability at JC: Now! Availability on the Portal: N/A
ONLY ON THE SALES PORTAL!

Order Forms: For READ 180 and non-READ 180 customers. (Portal: Now!)
FAQs: Compiled list of questions since the launch of System 44. Updated weekly and posted on the portal. (Portal: Now!)
Competitive Overview: PowerPoint® slides outlining the strengths and weaknesses of technology and print competitors. (Portal: Now!)
Book List: The list of books in the Upper Elementary and Secondary Student Libraries with their corresponding Lexile® levels and genre. (Portal: Now!)
Sales Proposals: For READ 180 and non-READ 180 customers. (Portal: Now!)
Cost Proposals: For READ 180 and non-READ 180 customers. (Portal: Now!)
Scope & Sequence: A detailed program scope and sequence. (Portal: Now!)
Technical Specifications: Up-to-date technical specifications. (Portal: Now!)
Sales Presentation: NEW! Includes PowerPoint® presentation with video and script that was presented at the summer sales meeting. (Portal: Now! And distributed on CD)
Conference Exhibit Loop: An animation of the 44 reasons poster that runs on a loop for conferences and other events. (Portal: N/A)
Software Guide (软件导刊), Vol. 22, No. 6, June 2023

Application of the Online and Offline Blended Teaching Mode in Programming Courses

WANG Xiaofang, JING Shan, WU Peng, QIAO Shanping, ZHAO Yan, HUANG Yimei
(School of Information Science and Engineering, University of Jinan, Jinan 250022, China)

Abstract: To combine the advantages of online networked teaching with those of traditional offline classroom teaching, this paper explores a blended online and offline teaching mode based on SPOC. Taking the JSP application development course as an example, it describes the course reform and its practice from five aspects: teaching content, teaching platform, teaching resources, teaching design, and teaching evaluation.
The mode retains the teacher's leading role while giving full play to students' initiative and creativity; it meets the personalized needs of students at different levels and creates favorable conditions for making the course more advanced, innovative, and challenging.
The teaching mode has been recognized by students and has achieved good teaching results.
Keywords: SPOC; blended online and offline teaching mode; programming courses; curriculum-based ideological and political education; reform plan
DOI: 10.11907/rjdk.221904
CLC number: G434  Document code: A  Article ID: 1672-7800(2023)006-0085-06

0 Introduction

At present, traditional classroom teaching and online MOOCs are the two main modes of instruction in universities.
Package 'deepdep' — February 21, 2023

Title: Visualise and Explore the Deep Dependencies of R Packages
Version: 0.4.2
Description: Provides tools for exploration of R package dependencies. The main deepdep() function allows to acquire deep dependencies of any package and plot them in an elegant way. It also adds some popularity measures for the packages, e.g. in the form of download count through the 'cranlogs' package. Uses the CRAN metadata database and Bioconductor metadata. Other data acquisition functions are: get_dependencies(), get_downloads() and get_description(). The deepdep_shiny() function runs a shiny application that helps to produce a nice 'deepdep' plot.
License: GPL-3
Encoding: UTF-8
RoxygenNote: 7.2.3
Depends: R (>= 3.2.0)
Imports: cranlogs, httr, jsonlite
Suggests: BiocManager, covr, devtools, ggplot2, ggraph, graphlayouts, igraph, knitr, miniCRAN, plyr, rmarkdown, scales, shiny, shinycssloaders, spelling, stringi, testthat (>= 2.1.0), vcr
VignetteBuilder: knitr
URL: https://dominikrafacz.github.io/deepdep/, https://github.com/DominikRafacz/deepdep
BugReports: https://github.com/DominikRafacz/deepdep/issues
Language: en-GB
NeedsCompilation: no
Author: Dominik Rafacz [aut, cre] (<https://orcid.org/0000-0003-0925-1909>), Hubert Baniecki [aut], Szymon Maksymiuk [aut], Laura Bakala [aut], Dirk Eddelbuettel [ctb]
Maintainer: Dominik Rafacz <***********************>
Repository: CRAN
Date/Publication: 2023-02-21 00:10:05 UTC

R topics documented: deepdep, deepdep_shiny, get_available_packages, get_dependencies, get_description, get_downloads, plot_dependencies, plot_downloads, print.available_packages, print.deepdep, print.package_dependencies, print.package_description, print.package_downloads

deepdep — Acquire the dependencies of the package on any depth level

Description

This function is an ultimate wrapper for get_dependencies. It inherits all of the arguments and allows to recursively search for the dependencies at the higher level of depth.

Usage

deepdep(
  package,
  depth = 1,
  downloads = FALSE,
  bioc = FALSE,
  local = FALSE,
  dependency_type = "strong"
)

Arguments

package: A character string, the name of a package that is on CRAN, in the Bioconductor repository, or locally installed. See the bioc and local arguments.
depth: An integer. Maximum depth level of the dependency. By default it's 1.
downloads: A logical. If TRUE add dependency downloads data. By default it's FALSE.
bioc: A logical value. If TRUE the Bioconductor dependencies data will be taken from the Bioconductor repository. For this option to work properly, the BiocManager package needs to be installed.
local: A logical value. If TRUE only data of locally installed packages will be used (without API usage).
dependency_type: A character vector. Types of the dependencies that should be sought, a subset of c("Imports", "Depends", "LinkingTo", "Suggests", "Enhances"). Other possibilities are: character string "all", a shorthand for the whole vector; character string "most" for the same vector without "Enhances"; character string "strong" (default) for the first three elements of that vector. Works analogously to package_dependencies.

Value

An object of deepdep class.

See Also

get_dependencies

Examples

library(deepdep)

dd_downloads <- deepdep("ggplot2")
head(dd_downloads)

dd_2 <- deepdep("ggplot2", depth = 2, downloads = TRUE)
plot_dependencies(dd_2, "circular")

dd_local <- deepdep("deepdep", local = TRUE)
plot_dependencies(dd_local)

deepdep_shiny — Run Shiny app

Description

This function runs a shiny app that helps to produce a nice deepdep plot.

Usage

deepdep_shiny()

get_available_packages — Get the list of available packages

Description

Get names of packages that you have locally installed or that are available to be
installed.

Usage

get_available_packages(bioc = FALSE, local = FALSE, reset_cache = FALSE)

Arguments

bioc: A logical value. If TRUE the Bioconductor dependencies data will be taken from the Bioconductor repository. For this option to work properly, the BiocManager package needs to be installed.
local: A logical value. If TRUE only data of locally installed packages will be used (without API usage).
reset_cache: A logical value. If TRUE the cache will be cleared before obtaining the list of packages.

Details

The function uses caching — only the first usage scrapes information from servers. Those objects are then saved locally in a temporary file and further usages load the needed data from that file. Arguments bioc and local cannot be TRUE simultaneously. If neither local nor bioc are TRUE, the vector contains all packages available currently on CRAN. If bioc is TRUE, the vector contains all packages available currently on CRAN and via Bioconductor. If local is TRUE, the vector contains all of the packages that are currently installed.

Value

A character vector.

Examples

library(deepdep)
av <- get_available_packages()
head(av)

get_dependencies — Acquire the dependencies of the package

Description

This function uses get_description and get_downloads to acquire the dependencies of the package (with their downloads).

Usage

get_dependencies(
  package,
  downloads = TRUE,
  bioc = FALSE,
  local = FALSE,
  dependency_type = "strong"
)

Arguments

package: A character string, the name of a package that is on CRAN, in the Bioconductor repository, or locally installed. See the bioc and local arguments.
downloads: A logical. If TRUE add package downloads data. By default it's TRUE.
bioc: A logical value. If TRUE the Bioconductor dependencies data will be taken from the Bioconductor repository. For this option to work properly, the BiocManager package needs to be installed.
local: A logical value. If TRUE only data of locally installed packages will be used (without API usage).
dependency_type: A character vector. Types of the dependencies that should be sought, a subset of c("Imports", "Depends", "LinkingTo", "Suggests", "Enhances"). Other possibilities are: character string "all", a shorthand for the whole vector; character string "most" for the same vector without "Enhances"; character string "strong" (default) for the first three elements of that vector. Works analogously to package_dependencies.

Value

An object of package_dependencies class.

See Also

get_description, get_downloads

Examples

library(deepdep)

dependencies <- get_dependencies("htmltools", downloads = FALSE)
dependencies

dependencies_local <- get_dependencies("deepdep", downloads = FALSE, local = TRUE)
dependencies_local

get_description — Scrape the DESCRIPTION file and CRAN metadata of the package

Description

This function uses the API of the CRAN Data Base to scrape the DESCRIPTION file and CRAN metadata of the package. It caches the results to speed up the computation process.

Usage

get_description(package, bioc = FALSE, local = FALSE, reset_cache = FALSE)

Arguments

package: A character string, the name of a package that is on CRAN, in the Bioconductor repository, or locally installed. See the bioc and local arguments.
bioc: A logical value. If TRUE the Bioconductor dependencies data will be taken from the Bioconductor repository. For this option to work properly, the BiocManager package needs to be installed.
local: A logical value. If TRUE only data of locally installed packages will be used (without API usage).
reset_cache: A logical value. If TRUE the cache will be cleared before obtaining the list of packages.

Value

An object of package_description class.
Examples

library(deepdep)

description <- get_description("ggplot2")
description

description_local <- get_description("deepdep", local = TRUE)
description_local

get_downloads — Scrape the download data of the package

Description

This function uses the API of CRAN Logs to scrape the download logs of the package.

Usage

get_downloads(package)

Arguments

package: A character string, the name of a package that is on CRAN.

Value

An object of package_downloads class.

Examples

library(deepdep)

downloads <- get_downloads("ggplot2")
downloads

plot_dependencies — Main plot function for a deepdep object

Description

Visualize dependency data from a deepdep object using the ggplot2 and ggraph packages. Several tree-like layouts are available.

Usage

plot_dependencies(
  x,
  type = "circular",
  same_level = FALSE,
  reverse = FALSE,
  label_percentage = 1,
  show_version = FALSE,
  show_downloads = FALSE,
  show_stamp = TRUE,
  declutter = FALSE,
  ...
)

## The default S3 method and the S3 methods for classes 'character' and
## 'deepdep' take the same arguments.

Arguments

x: A deepdep object or a character package name.
type: A character. Possible values are "circular" and "tree".
same_level: A logical. If TRUE links between dependencies on the same level will be added. By default it's FALSE.
reverse: A logical. If TRUE links between dependencies pointing from the deeper level to the more shallow level will be added. By default it's FALSE.
label_percentage: A numeric value between 0 and 1. A fraction of labels to be displayed. By default it's 1 (all labels displayed).
show_version: A logical. If TRUE the required version of a package will be displayed below the package name. Defaults to FALSE.
show_downloads: A logical. If TRUE the total number of downloads of packages will be displayed below package names. Defaults to FALSE.
show_stamp: A logical. If TRUE (the default) the package version and plot creation time will be added.
declutter: A logical. If TRUE then all layers beyond the first one ignore non-strong dependencies (i.e. "Suggests" and "Enhances"). This visualizes the so-called "hard costs of weak suggests".
...: Other arguments passed to the deepdep function.

Value

A ggplot2, gg, ggraph, deepdep_plot class object.

Examples

library(deepdep)

# use local packages
plot_dependencies("deepdep", depth = 2, local = TRUE)

dd <- deepdep("ggplot2")
plot_dependencies(dd, "tree")

dd2 <- deepdep("ggplot2", depth = 2)
plot_dependencies(dd2, "circular")

# show grand_total download count
plot_dependencies("shiny", show_downloads = TRUE)

plot_downloads — Plot download count of CRAN packages

Description

This function uses the API of CRAN Logs to scrape the download logs of the packages and then plots the data. It works on objects of class character (vector), deepdep, package_dependencies and package_downloads.

Usage

plot_downloads(x, ...)

## The S3 methods for classes 'deepdep', 'package_dependencies',
## 'package_downloads' and 'character' share the signature:
plot_downloads(x, from = Sys.Date() - 365, to = Sys.Date(), ...)

Arguments

x: A character vector of names of the packages
that are on CRAN.
...: Ignored.
from: A Date class object. From which date to plot the data. By default it's one year back.
to: A Date class object. To which date to plot the data. By default it's now.

Value

A ggplot2 class object.

Examples

library(deepdep)

plot_downloads("htmltools")

dd <- deepdep("ggplot2")
plot_downloads(dd)

print.available_packages — Print function for an object of available_packages class

Usage

## S3 method for class 'available_packages'
print(x, ...)

Arguments

x: An object of available_packages class.
...: other parameters

Examples

library(deepdep)
av <- get_available_packages()
head(av)

print.deepdep — Print function for an object of deepdep class

Usage

## S3 method for class 'deepdep'
print(x, ...)

Arguments

x: An object of deepdep class.
...: other parameters

Examples

library(deepdep)
dd <- deepdep("stringr")
dd

print.package_dependencies — Print function for an object of package_dependencies class

Usage

## S3 method for class 'package_dependencies'
print(x, ...)

Arguments

x: An object of package_dependencies class.
...: other parameters

Examples

library(deepdep)
get_dependencies("htmltools", downloads = TRUE)

print.package_description — Print function for an object of package_description class

Usage

## S3 method for class 'package_description'
print(x, ...)

Arguments

x: An object of package_description class.
...: other parameters

Examples

library(deepdep)
description <- get_description("ggplot2")
description

print.package_downloads — Print function for an object of package_downloads class

Usage

## S3 method for class 'package_downloads'
print(x, ...)

Arguments

x: An object of package_downloads class.
...: other parameters

Examples

library(deepdep)
desc <- get_downloads("stringr")
desc
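The functions above compose naturally: describe a package, pull its direct dependencies, then build and plot the deep dependency graph. The sketch below is a minimal illustration of that pipeline using only the documented API; it assumes an internet connection, and 'ggplot2' is just an example package name.

library(deepdep)

desc <- get_description("ggplot2")                      # DESCRIPTION file and metadata
deps <- get_dependencies("ggplot2", downloads = FALSE)  # direct dependencies

dd <- deepdep("ggplot2", depth = 2, downloads = TRUE)   # dependencies two levels deep
plot_dependencies(dd, type = "tree", show_downloads = TRUE)

plot_downloads("ggplot2")                               # download history, last year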
Journal of Shenyang University (Natural Science), Vol. 36, No. 2, April 2024
Article ID: 2095-5456(2024)02-0112-10

Fabric Surface Defect Detection Technology Based on Improved YOLOv7

REN Jingqi, ZHANG Tuanshan*, ZHAO Haoming
(School of Mechanical and Electrical Engineering, Xi'an Polytechnic University, Xi'an 710600, China)

Received: 2023-04-20. Supported by the National Natural Science Foundation of China (51735010) and the Xi'an Key Laboratory of Modern Intelligent Textile Equipment (2019220614SYS021CG043). About the authors: REN Jingqi (b. 1997, Huaibei, Anhui), master's student; corresponding author ZHANG Tuanshan (b. 1969, Suizhou, Hubei), professor, PhD, e-mail: zhangtuanshan@xpu.edu.cn.

Abstract: In view of the limitations of current fabric defect detection technology in the textile industry, an improved YOLOv7 algorithm for automatic detection of fabric defects is proposed. First, an SPD-Conv module is introduced into the neck network; it retains the defect-related discriminative feature information during strided convolutional downsampling and remedies the original network's insufficient learning of small-target features. Second, a coordinate attention (CA) mechanism is added to the YOLOv7 backbone, which takes positional information into account as well as channel attention, so that defects are identified more effectively. Finally, WIoU is used as the bounding-box loss function, making training focus more on anchor boxes of ordinary quality and enhancing the generalization ability of YOLOv7. Experimental comparison shows that the improved algorithm reaches an mAP of 92.28% and a precision of 95.65%, which are 2.64 and 4.12 percentage points higher than the original YOLOv7 respectively, and can meet the textile industry's requirements for defect detection.

Keywords: defect detection; YOLOv7; SPD-Conv module; WIoU; CA attention mechanism
CLC number: TP312; TS107  Document code: A
Fabric is a class of material composed mainly of textile fibres and used in everyday life. In textile production, defect detection is a key step of quality assessment [1]. Yarn problems, machinery faults and substandard raw materials all leave flaws on the fabric surface, and fabric quality is largely judged by these flaws [2]. Researchers at home and abroad have developed many fabric defect detection methods, chiefly based on digital image processing and on deep learning [3].

Digital image processing methods first denoise and enhance the defect image with preprocessing algorithms and then extract features to detect the defect [4]. Liu et al. [5] discretised gradient directions to obtain angle histograms, concatenated them into gradient-orientation feature vectors, and classified fabric defects with a nearest-neighbour classifier. Feature extraction in such methods is easily disturbed by many factors, so defects cannot be detected reliably [6].

Deep learning detectors are usually based on convolutional neural networks, which extract features of defect images directly by convolution and represent the target better [7]. Single-stage detectors include RetinaNet [8] and SSD [9]; two-stage detectors include SPP-Net [10], R-FCN [11] and Faster R-CNN [12]. Li et al. [13] developed a defect detection algorithm combining a GAN with Faster R-CNN: the GAN is first trained to enlarge the dataset and defect detection follows, but the method is slow. Wang et al. [14] improved YOLOv3 with the lightweight GhostNet backbone, which helps raise detection speed, yet accuracy drops sharply when such methods are used on woven fabrics with complex backgrounds [15].

To address these problems, this paper proposes an improved YOLOv7 algorithm that detects quickly while keeping good accuracy.

1 Structure of YOLOv7

YOLOv7 [16] is the latest algorithm in the official YOLO series. It has three base versions: YOLOv7, YOLOv7-tiny and YOLOv7-W6. YOLOv7-tiny targets lightweight GPUs and has the simplest structure; YOLOv7-W6 targets high-end GPUs and takes longest to train; YOLOv7 is designed for ordinary GPUs and balances detection speed with accuracy, so this paper uses YOLOv7 for fabric defect detection. YOLOv7 consists of four parts: the input stage, the backbone, the neck and the detection head.

1.1 Input stage

The input stage mainly applies mosaic data augmentation, automatic anchor computation and uniform image resizing. Mosaic augmentation stitches together four randomly chosen images from the dataset, increasing the number of defects. For automatic anchor computation, YOLOv7 runs anchor-computation code at every training run, finding the anchors best suited to the kind of dataset used. Uniform resizing scales the images to the same size before they are fed to the backbone.

1.2 Backbone

The backbone is built mainly from ELAN, MP and CBS modules. A CBS module consists of a convolution layer (Conv), a batch normalisation (BN) layer and a SiLU activation function. The ELAN module is an efficient layer-aggregation architecture with strong learning ability and fast inference. The MP module has two branches: the upper branch halves the image height and width by max pooling and then halves the channels with a CBS module; the lower branch halves the channels with a first CBS module and halves the height and width with a second, after which a cat operation merges the two branches, so the channel count is ultimately halved. In the MP modules of the neck the channel count stays unchanged.

1.3 Neck

The neck adopts the same path-aggregation feature pyramid network (PAN + FPN) structure as YOLOv5 [17]. The structure introduces a bottom-up path that passes low-level feature information smoothly to the top layers, fusing top-level and bottom-level features, so that feature maps at every scale carry both positional and semantic information about the defects and defect images are predicted accurately. An SPPCSPC module added in the neck enlarges the receptive field and makes large and small targets easier to distinguish.

1.4 Detection head

The detection head uses REP modules to adjust the channel counts of the differently sized feature maps output by the PAFPN, and then completes defect regression, classification and confidence prediction with 1×1 convolutions.

2 Improvements to YOLOv7

2.1 SPD-Conv module

In both training and inference, YOLOv7 detects higher-resolution medium and large objects well. The defects studied here, however, occupy few pixels and carry limited feature information. Moreover, YOLOv7 uses several convolution or pooling operations with stride greater than 1, which lose detail and leave important features under-learned. This paper therefore introduces a convolutional neural network (CNN) module to remedy the situation.

SPD-Conv consists of a space-to-depth (SPD) layer and a convolution (Conv) layer with stride 1. Let $C_1$ be the depth of a feature map and $S$ its height and width; Fig. 1 shows the SPD-Conv structure for scale $t=2$.

Fig. 1 SPD-Conv module

The SPD layer cuts the original map $X$ of size $S \times S \times C_1$ at scale $(t, t)$, producing $t^2$ sub-feature maps $f$, as in Eqs. (1)–(3):

$f_{0,0}=X[0{:}S{:}t,\,0{:}S{:}t],\; f_{1,0}=X[1{:}S{:}t,\,0{:}S{:}t],\;\dots,\; f_{t-1,0}=X[t{-}1{:}S{:}t,\,0{:}S{:}t];$  (1)
$f_{0,1}=X[0{:}S{:}t,\,1{:}S{:}t],\; f_{1,1}=X[1{:}S{:}t,\,1{:}S{:}t],\;\dots,\; f_{t-1,1}=X[t{-}1{:}S{:}t,\,1{:}S{:}t];$  (2)
$f_{0,t-1}=X[0{:}S{:}t,\,t{-}1{:}S{:}t],\;\dots,\; f_{t-1,t-1}=X[t{-}1{:}S{:}t,\,t{-}1{:}S{:}t].$  (3)

The resulting sub-feature maps are downsampled versions of $X$: as Fig. 1 shows for $t=2$, four sub-maps $f_{0,0}$, $f_{1,0}$, $f_{0,1}$, $f_{1,1}$ are produced, each of size $(S/2, S/2, C_1)$, equivalent to downsampling $X$ by a factor of 2. The sub-maps are then concatenated along the channel direction into a feature map $X'$ whose spatial dimensions are halved and whose channel dimension is multiplied by $t^2$ (quadrupled for $t=2$). SPD thus turns $X(S, S, C_1)$ into an intermediate feature map $X'(S/t, S/t, t^2 C_1)$ that retains the discriminative information of the target. Here, discriminative information means the geometric features of a defect, such as its perimeter and aspect ratio.

After the SPD layer, a stride-1 convolution layer with $C_2$ filters is added so that the discriminative information contained in $X'$ is preserved, where $C_2 < t^2 C_1$; the $C_2$ filters map $X'(S/t, S/t, t^2 C_1)$ to the final output map $X''(S/t, S/t, C_2)$. This paper adds SPD-Conv after the last ELAN-W module of the neck network to raise defect detection accuracy.
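To make the space-to-depth operation concrete, the short base-R sketch below rearranges an S × S × C1 array into an (S/t) × (S/t) × t²C1 array exactly as in Eqs. (1)–(3). It is an illustrative sketch only, not the authors' implementation (which sits inside a PyTorch network); the function name spd and the toy input are invented for the example.

# space-to-depth rearrangement of Eqs. (1)-(3), illustrative sketch in base R
spd <- function(x, t = 2) {
  S <- dim(x)[1]                     # height = width = S
  C <- dim(x)[3]                     # channel count C1
  stopifnot(dim(x)[2] == S, S %% t == 0)
  out <- array(NA_real_, dim = c(S / t, S / t, t * t * C))
  k <- 0
  for (j in seq_len(t)) {            # column offset of the sub-map
    for (i in seq_len(t)) {          # row offset of the sub-map
      # f_{i-1, j-1} = X[i-1 : S : t, j-1 : S : t] in the paper's notation
      out[, , (k * C + 1):((k + 1) * C)] <-
        x[seq(i, S, by = t), seq(j, S, by = t), , drop = FALSE]
      k <- k + 1
    }
  }
  out
}

x <- array(seq_len(4 * 4 * 3), dim = c(4, 4, 3))  # toy 4 x 4 x 3 input
dim(spd(x, t = 2))  # 2 2 12: spatial size halved, channels multiplied by t^2

No pixel is discarded by this rearrangement, which is why a stride-1 convolution placed after it can still learn from the fine detail that a strided convolution would have thrown away.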
2.2 CA attention mechanism

The defects studied here occupy only a small part of the whole fabric image; the rest is irrelevant background information. When the original YOLOv7 extracts features by convolution it often misses some defect feature information, so this paper introduces a coordinate attention (CA) mechanism, whose steps are shown in Fig. 2.

CA is implemented as follows. For an input feature map $X$ of size $c \times h \times w$, where $c$ is the number of channels, $h$ the height and $w$ the width, global average pooling is first split into two directions, along the width and along the height, to avoid losing positional information. This yields a feature map $z^h$ of size $c \times h \times 1$ and a feature map $z^w$ of size $c \times 1 \times w$, as in Eqs. (4) and (5):

$z_c^h(h)=\dfrac{1}{w}\sum_{0\le i<w} x_c(h,i),$  (4)
$z_c^w(w)=\dfrac{1}{h}\sum_{0\le j<h} x_c(j,w).$  (5)

The two maps are then transposed and concatenated along the spatial dimension. A $1\times1$ convolution $F_1$ reduces the dimension of the concatenated map, which is normalised and passed through a non-linear activation to give the intermediate feature map $f \in \mathbb{R}^{c/r \times 1 \times (w+h)}$, as in Eq. (6):

$f=\delta\big(F_1([z^h, z^w])\big),$  (6)

where $\delta$ is the non-linear activation function. Next, $f$ is split along the spatial direction into two separate maps $f^h$ and $f^w$, whose dimensions are increased by $1\times1$ convolutions $F_h$ and $F_w$; sigmoid activations then give the attention weights $g^h$ and $g^w$, as in Eqs. (7) and (8):

$g^h=\sigma\big(F_h(f^h)\big),$  (7)
$g^w=\sigma\big(F_w(f^w)\big),$  (8)

where $\sigma$ is the sigmoid function. Finally, $g^h$, $g^w$ and the original map $X$ are combined to give the final feature map $y$ carrying the CA attention weights, Eq. (9):

$y_c(i,j)=x_c(i,j)\,g_c^h(i)\,g_c^w(j).$  (9)

Fig. 2 CA attention mechanism

This paper adds the CA mechanism to the YOLOv7 backbone so that the algorithm locates and recognises defects more accurately.

2.3 Improvement of the bounding-box loss

The loss function of the original YOLOv7 has three parts: confidence loss, classification loss and bounding-box loss. Confidence and classification use binary cross-entropy; bounding-box regression is predicted with the CIoU loss. CIoU, however, does not solve the problem of balancing box regression between high-quality and low-quality samples in the training data. Zhang et al. [18] proposed Focal-EIoU v1 with a non-monotonic focusing mechanism (FM) to handle this difficulty, but because its FM is static and fixes the boundary value for anchor boxes — an anchor box gains the largest gradient when its IoU loss equals the boundary value — the power of a non-monotonic FM is not fully exploited. This paper therefore introduces WIoU, a bounding-box regression loss with a dynamic FM, to replace CIoU. WIoU has three versions: WIoU v1, WIoU v2 and WIoU v3. Fig. 3 illustrates the geometry.

Fig. 3 Schematic diagram of the intersection-over-union

In Fig. 3, anchor box A has area $S_A$ and centre $(x, y)$; target box B has area $S_B$ and centre $(x_{gt}, y_{gt})$, where $x_{gt}$ and $y_{gt}$ are the horizontal and vertical coordinates of the centre of B. The IoU loss $L_{IoU}$ is given by Eq. (12):

$S = S_A + S_B - W_i H_i,$  (10)
$r_{IoU} = \dfrac{W_i H_i}{S},$  (11)
$L_{IoU} = 1 - r_{IoU} = 1 - \dfrac{W_i H_i}{S},$  (12)

where $W_i$ and $H_i$ are the width and height of the intersection of the anchor and target boxes, $S$ is the area of their union, and $r_{IoU}$ is the intersection-over-union.

To weaken the influence of geometric factors on the loss when anchor and target already overlap well, a distance attention $R_{WIoU}$ is constructed, giving WIoU v1 with two layers of attention, Eq. (13):

$L_{WIoUv1} = R_{WIoU}\, L_{IoU}, \qquad R_{WIoU} = \exp\!\left(\dfrac{(x-x_{gt})^2+(y-y_{gt})^2}{\big(W_g^2+H_g^2\big)^{*}}\right),$  (13)

where $W_g$ and $H_g$ are the width and height of the smallest enclosing box. With $R_{WIoU} \in [1, e)$, the $L_{IoU}$ of an ordinary-quality anchor box is clearly enlarged; with $L_{IoU} \in [0,1]$ when anchor and target coincide, the $R_{WIoU}$ of a high-quality anchor box is greatly reduced and attention concentrates on the distance between the centre points. The superscript * in Eq. (13) indicates that $W_g$ and $H_g$ are detached (treated as constants), which removes factors that hinder convergence, such as the aspect ratio.

WIoU v3 describes anchor-box quality with the outlier degree $\beta$; the outlier degree and the WIoU v3 loss are given by Eqs. (14) and (15):

$\beta = \dfrac{L_{IoU}}{\bar{L}_{IoU}},$  (14)
$L_{WIoUv3} = r\, L_{WIoUv1}, \qquad r = \dfrac{\beta}{\delta\,\alpha^{\beta-\delta}},$  (15)

where $\bar{L}_{IoU}$ is a moving average, $r$ is the gradient gain, and $\alpha$ and $\delta$ are hyper-parameters that control the mapping from the outlier degree $\beta$ to the gradient gain $r$; when $\beta = \delta$, $r = 1$. The dynamic FM is characterised by estimating the size of the outlier degree: a smaller $\beta$ means a better anchor box. The mechanism assigns smaller gradient gains to high-quality anchor boxes so that box regression concentrates on ordinary-quality anchors, and it also assigns small gradient gains to low-quality anchors to limit the harm that low-quality samples do to box regression. When an anchor's outlier degree is constant its gradient gain is maximal; and because $\bar{L}_{IoU}$ keeps changing, the criterion for classifying anchor quality is also dynamic, so WIoU v3 can at any moment adopt the gradient-gain allocation that suits the current situation.

This paper therefore adopts WIoU v3 as the bounding-box loss function; its reasonable gradient-gain allocation concentrates more on ordinary-quality anchor boxes and strengthens the localisation ability of YOLOv7. Fig. 4 shows the improved YOLOv7 structure.

Fig. 4 Improved YOLOv7 structure

3 Experiments and analysis

3.1 Experimental environment

Experiments ran on Windows 11 with an Intel Core i7-13700K CPU (3.4 GHz) and an NVIDIA GeForce RTX 3090 GPU with 24 GB of memory, using the PyTorch 1.12.1 deep-learning framework, Python 3.8 and CUDA 11.6.

3.2 Dataset collection and augmentation

A self-built dataset was used: 1350 fabric images containing defects were first captured with a CCD camera, then expanded to 12150 images by random rotation, cropping and adjustments of brightness and saturation.

3.3 Annotation

Defects were annotated with the labelImg tool and the annotations were converted to txt format. The expanded dataset was split 8:2:1 into training, test and validation sets. It contains five defect types: frayed edge, ink drop, abrasion hole, hole and yellow stain. Table 1 gives the counts per type.

Table 1 Classification of fabric defects
Defect type:  frayed edge | ink drop | abrasion hole | hole | yellow stain
Count:        273         | 305      | 346           | 245  | 181

3.4 Training

The same hyper-parameters were used before and after the improvements: input images of 640×640, the official YOLOv7.pt file as pre-trained weights, the SGD optimiser for weight updates with an initial learning rate of 0.01, momentum 0.937 and weight decay 0.005, batch size 16, and 300 training epochs. Fig. 5 shows the loss curves of YOLOv7 before and after improvement (figure on the inside back cover). The improved model's box loss converges faster from epoch 15 onwards and stays below the original throughout, showing that defects are localised more accurately. Its classification loss falls steeply in the first 50 epochs and then flattens, and is also lower than the original, showing better defect classification. Fig. 6 (inside back cover) shows that the improved model's mean average precision is clearly higher than the original's, i.e. it performs better on the defect dataset.

3.5 Evaluation metrics

The performance of the improved YOLOv7 is evaluated mainly with the precision $P$, recall $R$, mean average precision mAP and detection speed, Eqs. (16)–(19):

$P = \dfrac{TP}{TP + FP},$  (16)
$R = \dfrac{TP}{TP + FN},$  (17)
$S_{AP} = \int_0^1 P(R)\, \mathrm{d}R,$  (18)
$\text{mAP} = \dfrac{1}{k}\sum_{i=1}^{k} S_{AP}^{\,i},$  (19)

where $TP$ denotes correctly assigned positive samples, $FP$ wrongly assigned positive samples and $FN$ wrongly assigned negative samples. $P$ indicates how often the algorithm falsely detects defects; $R$ reflects missed detections; $S_{AP}$ is the area under the curve of precision against recall; and mAP is the mean of $S_{AP}$ over all defect classes in the dataset.
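As a worked illustration of Eqs. (16)–(19), the base-R sketch below computes precision and recall from detection counts, approximates the area under a precision-recall curve with a simple rectangle rule, and averages per-class APs into mAP. All counts and AP values are invented for the example; this is not the authors' evaluation code.

# worked illustration of Eqs. (16)-(19), invented numbers
precision <- function(TP, FP) TP / (TP + FP)   # Eq. (16)
recall    <- function(TP, FN) TP / (TP + FN)   # Eq. (17)

# Eq. (18): AP as the area under the precision-recall curve, approximated
# here from (recall, precision) points sorted by increasing recall
ap <- function(r, p) sum(diff(c(0, r)) * p)

precision(TP = 90, FP = 10)                    # 0.9
recall(TP = 90, FN = 30)                       # 0.75
ap(r = c(0.2, 0.5, 1.0), p = c(1.0, 0.9, 0.7)) # AP of one defect class
mean(c(0.92, 0.88, 0.95, 0.90, 0.89))          # Eq. (19): mAP over 5 classes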
3.6 Comparative analysis

To better demonstrate the advantages of the improved model, SSD, YOLOv5s, YOLOv7 and the proposed algorithm were compared, all trained and tested on the same dataset. Table 2 gives the results.

Table 2 Comparison of different algorithms
Model    | P/%   | Speed/(frame·s⁻¹) | mAP/% | R/%
SSD      | 77.42 | 46                | 76.13 | 79.23
YOLOv5s  | 88.62 | 121               | 87.81 | 91.41
YOLOv7   | 91.53 | 128               | 89.64 | 93.68
Ours     | 95.65 | 94                | 92.28 | 96.71

Table 2 shows that, for defect detection, YOLOv7 outperforms SSD and YOLOv5s overall. The improved YOLOv7 raises mAP by 2.64 percentage points over the original algorithm and recall by 3.03 points. Adding the SPD-Conv module and the CA attention mechanism lowers the detection speed by 34 frames to 94 frame/s, which is still sufficient for intelligent inspection in textile mills. Detection precision rises substantially, by 4.12 points. Overall, the improved YOLOv7 is better suited to routine fabric defect inspection than the other three algorithms.

3.7 Ablation study

Several comparison runs were designed to verify whether each improved module raises defect-detection performance. Table 3 gives the results.

Table 3 Ablation experiment results (unit: %)
Algorithm      | SPD-Conv | CA | WIoU v3 | mAP   | P
YOLOv7         | no       | no | no      | 89.64 | 91.53
Improvement 1  | yes      | no | no      | 90.68 | 92.78
Improvement 2  | no       | yes| no      | 90.51 | 93.65
Improvement 3  | no       | no | yes     | 91.01 | 94.92
Ours           | yes      | yes| yes     | 92.28 | 95.65

Table 3 shows that the three improvements lift YOLOv7 to different degrees. Adding SPD-Conv and replacing the box loss help most, raising mAP by 1.04 and 1.37 percentage points respectively, while adding the CA attention raises it by 0.87 points. Replacing the box loss gives the largest precision gain, 3.39 points. Acting together, the SPD-Conv module, CA attention and WIoU v3 loss effectively improve YOLOv7's fabric defect detection performance.

Some images were selected from the test results for comparison (Fig. 7).

Fig. 7 Detection results — panels (a)–(c): samples detected before improvement; panels (d)–(f): samples detected after improvement

Fig. 7 shows that when several defect types coexist, the improved YOLOv7 recognises both the ink drop and the abrasion hole, whereas the original algorithm recognises only the ink drop; using WIoU v3 as the bounding-box loss thus reduces missed detections to some extent. In addition, the original YOLOv7's confidence scores for the hole and yellow-stain defects are 0.86 and 0.79, which the improved model raises to 0.89 and 0.88; for the frayed edge, the improved model scores 0.85 against the original 0.75. Through the SPD-Conv module and the CA attention, the improved YOLOv7 therefore extracts more effective feature information, localises defects more accurately, and raises the confidence scores considerably.

4 Conclusion

Given the importance of fabric defect detection to the textile industry, this study proposes an improved YOLOv7 algorithm that detects defects in fabric accurately. First, the SPD-Conv module introduced into the original network preserves the discriminative defect feature information during strided convolutional downsampling. Second, adding the CA attention mechanism to the YOLOv7 backbone strengthens the network's feature extraction for defect information. Finally, optimising the bounding-box loss function effectively improves the algorithm's generalisation. The ablation results show that the proposed defect detection algorithm identifies defects quickly and precisely, with an mAP of 92.28%. In follow-up research we will aim to reduce the number of model parameters and to raise average precision with a more lightweight model.

References

[1] ZHANG J W. Research on fabric defect detection method based on machine vision technology [D]. Wuxi: Jiangnan University, 2021.
[2] CHENG Z. Research on fabric defect detection algorithm based on deep convolutional neural network [D]. Shanghai: Donghua University, 2021.
[3] LI F L. Research of fabric defect detection based on deep learning [D]. Wuhan: Wuhan Textile University, 2021.
[4] YANG B C, ZHANG T S. Cloth color constancy algorithm based on improved ShuffleNetV2 [J]. Journal of Shenyang University (Natural Science), 2023, 35(3): 216-223.
[5] LIU H J, SHAN W F, YUAN J, et al. Grey fabric defect detection algorithm based on histogram of oriented gradient [J]. Wool Textile Journal, 2018, 46(1): 69-72.
[6] WANG B, LI M, LEI C L, et al. Research progress in fabric defect detection based on deep learning [J]. Journal of Textile Research, 2023, 44(1): 219-227.
[7] ZHENG Y T, WANG C Q, CHEN L L, et al. Research progress of fabric image processing methods based on convolutional neural network [J]. Advanced Textile Technology, 2022, 30(5): 1-11.
[8] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection [C]// 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017: 2999-3007.
[9] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector [M]// Computer Vision - ECCV 2016. Cham: Springer International Publishing, 2016: 21-37.
[10] HE K M, ZHANG X Y, REN S Q, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1904-1916.
[11] DAI J F, LI Y, HE K M, et al. R-FCN: object detection via region-based fully convolutional networks [C]// Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: ACM, 2016: 379-387.
[12] HE Z W, ZHANG L. Domain adaptive object detection via asymmetric tri-way Faster-RCNN [C]// VEDALDI A, BISCHOF H, BROX T, et al. European Conference on Computer Vision. Cham: Springer, 2020: 309-324.
[13] LI M, JING J F, LI P F. Yarn-dyed fabric defect detection based on GAN and Faster R-CNN [J]. Journal of Xi'an Polytechnic University, 2018, 32(6): 663-669.
[14] WANG H N, CAO J T, HAN J Y, et al. Fabric defect detection based on improved YOLOv3 algorithm [J]. Journal of Shenyang Aerospace University, 2022, 39(1): 54-60.
[15] TAO X, HOU W, XU D. A survey of surface defect detection methods based on deep learning [J]. Acta Automatica Sinica, 2021, 47(5): 1017-1034.
[16] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors [EB/OL]. 2022: arXiv:2207.02696. http://arxiv.org/abs/2207.02696.pdf
[17] QI X X, TAO J M, CHEN J, et al. Design and application of pier defect detection ROV based on improved YOLOv5s [J]. Journal of Shenyang University (Natural Science), 2023, 35(6): 503-510.
[18] ZHANG Y F, REN W Q, ZHANG Z, et al. Focal and efficient IOU loss for accurate bounding box regression [J]. Neurocomputing, 2022, 506: 146-157.
Package 'winfapReader' — October 12, 2022

Type: Package
Title: Interact with Peak Flow Data in the United Kingdom
Version: 0.1-5
Maintainer: Ilaria Prosdocimi <***************************>
URL: https://ilapros.github.io/winfapReader/
BugReports: https://github.com/ilapros/winfapReader/issues
Description: Obtain information on peak flow data from the National River Flow Archive (NRFA) in the United Kingdom, either from the Peak Flow Dataset files <https://nrfa.ceh.ac.uk/peak-flow-dataset> once these have been downloaded to the user's computer or using the NRFA's API. These files are in a format suitable for direct use in the 'WINFAP' software, hence the name of the package.
License: GPL-3
Imports: lubridate
Depends: utils, R (>= 3.5.0)
Suggests: testthat, utf8, rnrfa, httr, jsonlite, curl, knitr, rmarkdown, zoo
LazyData: true
RoxygenNote: 7.2.1
VignetteBuilder: knitr
Encoding: UTF-8
Language: en-GB
NeedsCompilation: no
Author: Ilaria Prosdocimi [aut, cre] (<https://orcid.org/0000-0001-8565-094X>), Luke Shaw [aut] (Luke developed the code to handle the missing and gap periods for Peaks over threshold records.)
Repository: CRAN
Date/Publication: 2022-09-08 11:00:02 UTC

R topics documented: get_amax, get_cd, get_pot, known_Oct1, read_amax, read_cd3, read_pot, water_year

get_amax — A function to obtain annual maxima (AMAX) data using the NRFA API

Description

The function queries the NRFA API for the .AM file similar to the WINFAP file for a given station. It then processes the file in a fashion similar to read_amax.

Usage

get_amax(station)

Arguments

station: the NRFA station number for which the annual maxima records should be obtained. Can also be a vector of station numbers.

Value

a data.frame with information on the annual maxima for the station with the following columns:

Station: NRFA station number (can be a vector of station numbers)
WaterYear: the correct water year for the peak flow
Date: date of maximum flow
Flow: the maximum flow in m3/s
Stage: the stage (height) reached by the river - this information is used to derive the flow via a rating curve
Rejected: logical, if TRUE the water year has been flagged as rejected by the NRFA

See Also

read_amax. Information on river flow gauging in the UK and the annual maxima can be found at the National River Flow Archive website https://nrfa.ceh.ac.uk

Examples

a40003 <- get_amax(40003) # the Medway at Teston / East Farleigh
multipleStations <- get_amax(c(40003, 42003))
names(multipleStations)
summary(multipleStations$`42003`)

get_cd — A function to obtain information on the station and on the catchment upstream of the station using the NRFA API

Description

The function queries the NRFA API for information on a given station. Unlike get_amax and get_pot, the output of this function is not exactly the same as the output of the read_cd3 function, due to differences in the information made available by the NRFA API.

Usage

get_cd(station, fields = "feh")

Arguments

station: the NRFA station(s) number for which the information is required
fields: the type of information which is required. Can be "feh" (default), which outputs a subset of information typically used when applying the flood estimation handbook methods, or "all", which outputs all information made available in the NRFA API.

Value

a data.frame of one row with different columns depending on whether fields = "all" or fields = "feh" was selected.

See Also

read_cd3. Information on catchment descriptors and river flow gauging in the UK can be found at the National River Flow Archive website https://nrfa.ceh.ac.uk

Examples

cdMult <- get_cd(c(40003, 42003), fields = "all")
### lots of information on the catchment/station
### including information on rejected annual maxima
cdMult$`40003`$`peak-flow-rejected-amax-years` ## no rejections
cdMult$`42003`$`peak-flow-rejected-amax-years` ## several rejections
cd40003 <- get_cd(40003, fields = "feh")
# less information, mostly the FEH descriptors
dim(cd40003)
sapply(cdMult, ncol)

get_pot — A function to obtain Peaks-Over-Threshold (POT) data using the NRFA API

Description

The function queries the NRFA API for the .PT file similar to the WINFAP file for a given station. It then processes the file in a fashion similar to read_pot.

Usage

get_pot(station, getAmax = FALSE)

Arguments

station: the NRFA station number for which peaks over threshold information should be obtained. It can also be a vector of station numbers
getAmax: logical. If TRUE information on the annual maxima values will be retrieved and attached to the WaterYearInfo table

Value

Like read_pot, a list of three objects: tablePOT, WaterYearInfo and dateRange.

tablePOT contains a table with all the peaks above the threshold present in the record.

WaterYearInfo is a table containing the information on the percentage of missing values in any water year for which some data is available in the POT record. This is useful to assess whether the lack of exceedances is genuine or the result of missing data, and to assess whether the threshold exceedances present in tablePOT can be deemed to be representative of the whole year.

dateRange is a vector with the first and last date of recording for the POT record as provided in the [POT Details] field. Note that this period might be different than the period for which annual maxima records are available.

See Also

read_pot. Information on the peaks over threshold records and river flow gauging in the UK can be found at the National River Flow Archive website https://nrfa.ceh.ac.uk

Examples

### the example takes longer than 5 seconds to run
p40003 <- get_pot(40003) # the Medway at Teston / East Farleigh
p40003$tablePOT[p40003$tablePOT$WaterYear > 1969 &
                p40003$tablePOT$WaterYear < 1977, ]
### no events in 1971 nor 1975
p40003$WaterYearInfo[p40003$WaterYearInfo$WaterYear > 1969 &
                     p40003$WaterYearInfo$WaterYear < 1977, ]
# in 1971 all records are valid,
# in 1975 no exceedances
# might be due to the fact that almost no valid records are available
p40003 <- get_pot(40003, getAmax = TRUE)
p40003$WaterYearInfo[p40003$WaterYearInfo$WaterYear > 1969 &
                     p40003$WaterYearInfo$WaterYear < 1977, ]
# the annual maximum in 1971 and 1975 was below the threshold
# no events exceeded the threshold

known_Oct1 — Known events which happened on October 1st before 9am

Description

The Water Year in the UK runs from 9am of the 1st October of a given year to 8:59am of the 1st October of the next year. Since the WINFAP files contain information only on the date of the annual maximum (and not time) it is possible that an event is mis-classified when using the water_year function. This dataset lists the events which are known to have happened on October 1st before 9am. This is used to correct the WaterYear information in these known cases in the read_amax and get_amax functions. For some stations events on October 1st have been deemed as annual maxima only in some winfap releases. They are maintained in the dataset in the event that somebody read old winfap files.

Usage

known_Oct1

Format

A data frame with 36 rows and 3 variables:

Station: NRFA station number
Date: date of maximum flow (always the 1st October)
WaterYear: the correct water year for the peak flow

Source

Derived manually by identifying events which happened on Oct. 1st and comparing them with information on https://nrfa.ceh.ac.uk

read_amax — A function to read .AM files

Description

The function reads .AM files once these are in a local folder: these files contain information on annual
maxima (AMAX) records extracted from the instantaneous river flow measurements. The function checks for the presence of any [AM Rejected] information and includes it in the output.

Usage

read_amax(station, loc_WinFapFiles = getwd())

Arguments

station: NRFA station number(s) for which the .AM file (named station.AM) should be read.
loc_WinFapFiles: the file.path of the WINFAP files, i.e. the location in which the station.AM file can be found. Default is the working directory

Value

a data.frame with information on the annual maxima for the station, with the same columns as described for get_amax (Station, WaterYear, Date, Flow, Stage, Rejected).

See Also

Information on the .AM files and river flow gauging in the UK can be found at the National River Flow Archive website https://nrfa.ceh.ac.uk

read_cd3 — A function to read .CD3 files

Description

The function reads .CD3 files once these are in a local folder: these files contain information on the gauging station and on the catchment upstream of the station.

Usage

read_cd3(station, loc_WinFapFiles = getwd())

Arguments

station: the NRFA station number(s) for which the .CD3 file (named station.CD3) should be read
loc_WinFapFiles: the file.path of the WINFAP files, i.e. the location in which the station.CD3 file can be found. Default is the working directory

Value

a data.frame with information on the catchment descriptors for the station

See Also

Information on the .CD3 files and river flow gauging in the UK can be found at the National River Flow Archive website https://nrfa.ceh.ac.uk. Specific information on the catchment descriptors can be found at https://nrfa.ceh.ac.uk/feh-catchment-descriptors

read_pot — A function to read .PT files

Description

The function reads .PT files once these are in a local folder: these files contain information on Peaks-Over-Threshold (POT) records from the instantaneous river flow measurements. The function checks for the presence of any [POT GAPS] and [POT REJECTED] periods. If these are present, they are merged and information on the proportion of days with missing records in each water year is provided.

Usage

read_pot(station, loc_WinFapFiles = getwd(), getAmax = FALSE)

Arguments

station: NRFA station number(s) for which the .PT file (named station.PT) should be read.
loc_WinFapFiles: the file.path of the WINFAP files, i.e. the location in which the station.PT file can be found. Default is the working directory
getAmax: logical. If TRUE the annual maxima values (extracted from a station.AM file) will be attached to the WaterYearInfo table

Value

a list of three objects, tablePOT, WaterYearInfo and dateRange, as described for get_pot.

See Also

Information on the .PT files and river flow gauging in the UK can be found at the National River Flow Archive website
https://nrfa.ceh.ac.uk

water_year — Derive water year value for a date

Description

Derive water year value for a date.

Usage

water_year(date, start_month = 10)

Arguments

date: the (vector of) dates for which the water year will be calculated
start_month: the month in which the water year starts, default is October

Value

The water year value

Examples

water_year(as.Date(c("2010-11-03", "2013-02-03")))
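Because the UK water year starts at 9am on 1 October, dates in late September and early October fall into different water years, and the known_Oct1 dataset exists precisely because the date alone cannot resolve events on the morning of 1 October. The lines below are a small illustration with invented dates; the start_month argument is the one documented above.

water_year(as.Date(c("2013-09-30", "2013-10-02")))
# with the default October start, 2013-09-30 belongs to the water year that
# began in October 2012, while 2013-10-02 belongs to the one beginning in
# October 2013
water_year(as.Date("2013-09-30"), start_month = 1)
# with a January start the same date simply falls in calendar year 2013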
Package 'cohorttools' — November 23, 2022

Type: Package
Title: Cohort Data Analyses
Version: 0.1.6
Author: Jari Haukka [aut, cre]
Maintainer: Jari Haukka <***********************>
Depends: R (>= 3.6), Epi, cmprsk, ggplot2
Imports: stats, survival, DiagrammeR, DiagrammeRsvg, rsvg, mgcv
Suggests: knitr, rmarkdown, lattice, mstate, testthat
Description: Functions to make lifetables and to calculate hazard function estimate using Poisson regression model with splines. Includes function to draw simple flowchart of cohort study. Function boxesLx() makes boxes of transition rates between states. It utilizes 'Epi' package 'Lexis' data.
License: GPL-2
Encoding: UTF-8
RoxygenNote: 7.2.1
NeedsCompilation: no
Repository: CRAN
Date/Publication: 2022-11-23 12:00:04 UTC

R topics documented: boxesLx, estim.hazard, gv2image, mkflowchart, mkratetable, plotcuminc, plotratetable

boxesLx — Boxes plot summarizing Lexis object

Description

Creates boxes graph describing Lexis

Usage

boxesLx(
  x,
  layout = "circo",
  prop.penwidth = FALSE,
  scale.Y = 1,
  rankdir = "TB",
  node.attr = "shape=box",
  edge.attr = "minlen=1",
  show.loop = FALSE,
  show.persons = FALSE,
  fontsizeN = 14,
  fontsizeL = 8,
  show.gr = TRUE
)

Arguments

x: Lexis object
layout: Graphviz layout "circo", "dot", "twopi" or "neato". It determines the general layout of the graph.
prop.penwidth: use line width relative to incidence. If TRUE linewidths of the transition rates shown between states are relative to the log of the rate.
scale.Y: scale for incidence. Scale factor for rates, default is 1.
rankdir: for graph, default is TB. NOTE! this works best with layout "dot"
node.attr: general node attributes. Attributes like shape, color, fillcolor, etc. for nodes. Consult the Graphviz documentation for details: https://www.graphviz.org/doc/info/attrs.html.
edge.attr: general edge (line) attributes. Attributes like color, arrowhead, fontcolor etc. for edges. Consult the Graphviz documentation for details: https://www.graphviz.org/doc/info/attrs.html
show.loop: should a loop (staying in the same state) be shown, default FALSE
show.persons: should the number of persons be shown (entry -> exit), default FALSE
fontsizeN: font size for nodes
fontsizeL: font size for edges
show.gr: should the graph be shown. If TRUE, the function DiagrammeR::grViz is used to show the graph.

Value

Character vector containing the Graphviz script. This may be used to create a graph with the DiagrammeR::grViz function.

Author(s)

Jari Haukka jari.haukka@helsinki.fi

See Also

grViz

Examples

library(DiagrammeR)
library(survival)
library(Epi)
library(mstate)
data(ebmt3)
bmt <- Lexis(exit = list(tft = rfstime/365.25),
             exit.status = factor(rfsstat, labels = c("Tx", "RD")),
             data = ebmt3)
bmtr <- cutLexis(bmt, cut = bmt$prtime/365.25,
                 precursor.states = "Tx", new.state = "PR")
summary(bmtr)
kk <- boxesLx(bmtr)
## Not run:
# Graph to file
gv2image(kk, file = "k1", type = "pdf")

## End(Not run)
boxesLx(bmtr, layout = "dot", rankdir = "LR", show.loop = FALSE, show.persons = TRUE)
boxesLx(bmtr,
        node.attr = 'shape=hexagon color=navy style=filled fillcolor=lightblue',
        edge.attr = 'color=steelblue arrowhead=vee fontcolor="#8801d7"',
        layout = "circo", prop.penwidth = TRUE)

estim.hazard — Estimates hazard function using Poisson model

Description

Estimates hazard function using Poisson model

Usage

estim.hazard(
  formula,
  data,
  time,
  status,
  breaks,
  knots,
  time.eval = breaks,
  alpha = 0.05,
  use.GAM = FALSE,
  print.GAM.summary = FALSE,
  ...
)

Arguments

formula: formula with Surv in LHS, NOTE! only one variable in RHS
data: data used by formula
time: time variable
status: status indicator. Lowest value used as censoring. If only one unique value is detected, all are assumed events
breaks: time is split with these values
knots: knots for natural splines used in estimation of hazard function
time.eval: time points at which the hazard function is evaluated
alpha: significance level for confidence intervals
use.GAM: logical determining if a generalized additive model (GAM) is used
print.GAM.summary: logical determining if a summary of the GAM is printed
...: parameters for glm

Value

Returns a data frame with time and hazard function values, with attribute 'estim.hazard.param' containing the estimation parameters (breaks and knots)

Author(s)

Jari Haukka <***********************>

Examples

library(survival)
tmp.hz <- estim.hazard(time = lung$time, status = lung$status)
head(tmp.hz, 2)
attributes(tmp.hz)$estim.hazard.param # estimation parameters

tmp.hz2 <- estim.hazard(formula = Surv(time, status) ~ sex, data = lung)
head(tmp.hz2, 2)

gv2image — Function makes image from graphviz code

Description

Function makes image from graphviz code

Usage

gv2image(gv, file = "gv", type = "png", engine = "dot", ...)

Arguments

gv: character string containing graphviz code
file: file name for image, character string
type: type of image ('pdf', 'png', 'ps', 'raw', 'svg', 'webp') as character string
engine: grViz engine, default is 'dot'
...: parameters for rsvg_

Value

Invisible name of file created.

Author(s)

Jari Haukka <***********************>

mkflowchart — Function makes flowchart in graphviz

Description

Function makes flowchart in graphviz

Usage

mkflowchart(N, text.M, text.P, type = 1)

Arguments

N: Population sizes
text.M: Text for exclusions, length one less than N
text.P: Text for main boxes, must be same length with N
type: flowchart type (1 or 2)

Value

Character string, graphviz language

Author(s)

Jari Haukka <***********************>

Examples

DiagrammeR::grViz(mkflowchart(N = c(743, 32, 20),
  text.M = c("Excluded", "Excluded\n other with reasons"),
  text.P = c("Studies", "Relevant studies", "Included in final review"),
  type = 1))

mkratetable — Function makes rate table with confidence intervals for crude incidences (rates)

Description

Function makes rate table with confidence intervals for crude incidences (rates)

Usage

mkratetable(formula, data, alpha = 0.05, add.RR = FALSE, lowest.N = 0, ...)
Arguments

formula: where the Surv object is on the lhs and the marginal variable(s) on the rhs. Marginal variables should usually be factors
data: data.frame to be used
alpha: confidence level, default is 0.05
add.RR: should rate ratio (RR) be added
lowest.N: lowest frequency to be shown
...: additional parameters for function survival::pyears

Value

table with columns named after marginal variables and n, event, incidence, se, exact.lower95ci and exact.upper95ci variables

Note

package survival is utilized. Frequencies lower than lowest.N are replaced by 999999. Person-years are scaled by default with 365.25

Author(s)

Jari Haukka <***********************>

See Also

survival pyears

Examples

library(survival)
tmp.lt1 <- mkratetable(Surv(time, status) ~ sex, data = lung)
tmp.lt2 <- mkratetable(Surv(time, status) ~ sex + ph.ecog, data = lung,
                       add.RR = TRUE, lowest.N = 10)

plotcuminc — Plots cumulative incidence rates

Description

Plots cumulative incidence rates

Usage

plotcuminc(ftime, fstatus, cencode, pop.length = 50, group, ...)

Arguments

ftime: failure time variable
fstatus: variable with distinct codes for different causes of failure and also a distinct code for censored observations
cencode: value of fstatus variable which indicates the failure time is censored.
pop.length: number of population sizes shown
group: plots will be made for each group. If missing then treated as all one group
...: additional parameters

Value

if group is missing, a ggplot2 object; if group is given, a named list of ggplot2 objects

Note

packages cmprsk and ggplot2 are utilized

Author(s)

Jari Haukka <***********************>

See Also

survival pyears

Examples

set.seed(2)
ss <- rexp(100)
gg <- factor(sample(1:3, 100, replace = TRUE), 1:3, c('a', 'b', 'c'))
cc <- sample(0:2, 100, replace = TRUE)
print(plotcuminc(ftime = ss, fstatus = cc, cencode = 0))
print(plotcuminc(ftime = ss, fstatus = cc, cencode = 0, group = gg))

plotratetable — Function makes plot(s) from ratetable

Description

Function makes plot(s) from ratetable

Usage

plotratetable(rt, RR = FALSE)

Arguments

rt: Rate table produced by function mkratetable
RR: Boolean, if TRUE rate ratios are plotted

Value

ggplot object, or a list if multiple variables in rate table

Examples

library(ggplot2)
library(survival)
tmp.lt1 <- mkratetable(Surv(time, status) ~ ph.ecog, data = lung, add.RR = FALSE)
plotratetable(tmp.lt1)
tmp.lt2 <- mkratetable(Surv(time, status) ~ sex + ph.ecog + cut(age, 4),
                       data = lung, add.RR = TRUE, lowest.N = 1)
plotratetable(tmp.lt2, TRUE)
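A typical use of estim.hazard is to inspect and then plot the estimated hazard. The sketch below is illustrative only: the returned data frame is documented above to hold time and hazard values, but its exact column names are not spelled out there, so the plot indexes columns by position after inspecting them with head().

library(survival)
library(cohorttools)

hz <- estim.hazard(time = lung$time, status = lung$status)
head(hz)  # inspect the returned columns before plotting
# plot evaluation time (assumed first column) against the hazard estimate
# (assumed second column); verify against the head() output above
plot(hz[[1]], hz[[2]], type = "l", xlab = "time", ylab = "estimated hazard")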
Wear 258 (2005) 1148–1155

Railway noise and the effect of top of rail liquid friction modifiers: changes in sound and vibration spectral distributions in curves

Donald T. Eadie a,∗, Marco Santoro a, Joe Kalousek b

a Kelsan Technologies Corp., 1140 West 15th St., North Vancouver, BC, Canada V7P 1M9
b National Research Council of Canada, Centre for Surface Transportation, 3250 East Mall, Vancouver, BC, Canada V6T 1W5

Received 13 June 2003; received in revised form 28 November 2003; accepted 1 March 2004. Available online 11 November 2004.

∗ Corresponding author. Tel.: +1 604 984 6100; fax: +1 604 984 3419. E-mail addresses: deadie@ (D.T. Eadie), msantoro@ (M. Santoro), joe.kalousek@nrc.ca (J. Kalousek).

Abstract

For railway noise in curves, both flanging and squeal noise can be environmentally significant. Rolling noise is dominant in tangent track. This paper examines the spectral sound distribution in curves for different wheel/rail system types, and compares spectra after the top of rail friction level is controlled with a special friction modifier. The friction modifier controls top of rail (TOR) friction at an intermediate level, and imparts "positive friction" attributes to the interfacial layer. A significant range of spectral characteristics was noted for the different wheel/rail system types. In all cases the friction modifier significantly reduced the sound levels at the frequencies associated with top of rail squeal, and also at the frequency bands related to flange contact noise. For some Metro systems a noticeable reduction was also recorded at the lower frequencies associated with rolling noise, possibly due to a reduction in "graunching". There was some suggestion of reduction in low frequency vibration (frequencies down to 30 Hz) as well as in the high stick-slip oscillation frequencies.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Railway noise; Metro systems; Liquid friction modifiers

0043-1648/$ – see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.wear.2004.03.061

1. Background

Railway noise is an area of continuing research interest. Regulatory and public concern continues to drive technological initiatives to understand and define sources and means of mitigating the different types of noise emanating from railway systems. Vibration from trains is also an area of concern, especially for Metro systems with nearby office or residential buildings.

This paper is concerned with wheel squeal, flanging noise, rolling noise, and vibration. Squeal and flanging noise are associated with curves, particularly sharp curves (R < 500 m), whereas rolling noise is generally associated with tangent track. Spectral sound distribution levels are also considered for different types of trains. In particular we examine how noise and spectral sound distribution are altered by control of friction on the top of rail (TOR). Control of friction is achieved with a special friction modifier that does not impact traction or braking.

1.1. Frequency range of different noise types

Table 1 summarizes the frequency range for the different types of railway noise of concern in this paper. These are either the ranges generally accepted within the industry, or those established from our own experience with sound measurements on many different systems. The identification of flanging noise as high frequency (5000–10000 Hz) will be addressed in more detail later in the paper.

Table 1
Frequency range for different types of railway noise

Noise type                 Frequency range (Hz)
Rolling                    30–5000
Flat spots                 50–250 (speed dependent)
Ground borne vibrations    4–80
Structure-borne noise      30–200
Top of rail squeal         1000–5000
Flanging noise             5000–10000

1.2. Mechanism of noise generation

The sources and reduction possibilities of railway noise have recently been reviewed by Talotte et al. [1].
Rolling noise is established as originating from structural vibrations of the wheel, rail and sleepers resulting from the combined surface roughness of the wheel and rail running surfaces. Roughness on wheels can be induced by factors such as the use of tread brakes, especially those made from cast iron.

Ground borne vibrations and structure-borne noise mainly occur at low frequencies (<50 Hz). Frequencies above this are attenuated increasingly rapidly [2]. Vibration disturbance is usually caused by the large vertical dynamic forces between wheels and rails. These forces fluctuate in response to wheel and rail roughness over a wide range of frequencies.

Wheel squeal originates from frictional instability between the wheel and rail in curves. Stick-slip oscillations (more accurately referred to as roll-slip) excite a wheel resonance; the wheel vibration radiates noise efficiently. The accepted model involves TOR frictional instability under lateral creep conditions leading to excitation of out-of-plane wheel bending oscillations. These are radiated and heard as squeal. The most recent developments involve more rigorous mathematical modeling by Heckl [3]. The starting point for squeal is the lateral creep forces that occur as a bogie goes through a curve and the wheel/rail contact patch becomes saturated with slip (creep saturation). A critical component in all the modeling work is the requirement that, beyond the point of creep saturation, further increases in creep level lead to a lower coefficient of friction. This is known as negative friction, referring to the slope of the friction–creep curve at saturated creep conditions. In more general tribological terms, this would be equated to changes in sliding velocity, rather than the railroad term creep. This leads to roll-slip oscillations between the wheel and the rail which excite a wheel resonance, and the wheel web radiates the noise. Most recently, De Beer et al. have developed a frequency domain model of squeal noise in terms of lateral contact position, and supported the results with laboratory measurements [4].
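The "negative friction" requirement just described can be stated compactly (our notation, not the paper's): with μ the wheel/rail friction coefficient and γ the lateral creep,

$$\left.\frac{\partial \mu}{\partial \gamma}\right|_{\gamma \ge \gamma_{\mathrm{sat}}} < 0$$

is the condition for roll-slip excitation beyond creep saturation γ_sat; a friction modifier that makes this slope non-negative ("positive friction") removes the driver for the oscillation.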
"Graunching" is another form of noise associated with curving. Unfortunately little is known in the literature about this phenomenon. It is a low frequency phenomenon, of perhaps several hundred hertz, probably associated with stick-slip of a different form than that of top of rail squeal. It may possibly be due to flange rubbing.

2. Field experimentation

2.1. Wheel/rail system types and characteristics

For this purpose the wheel/rail systems examined have been classified into three general types:
1. Trams (Europe and Japan) and light rail (North America), with axle loads typically 9–11 t.
2. Metro (Europe) and heavy rail (North America). These are characterized by moderate axle loads and heavier rail sections than in trams (11–15 t).
3. Heavy haul freight (axle loads 20–35 t).

Each test is further characterized according to the following parameters:
1. Curve radius.
2. Speed.
3. Number of axles per car.
4. Number of cars.
5. Track type.
6. Brake type.
7. Gauge face lubrication practice.
8. Friction modifier application method.

2.2. Sound measurements

Sound level measurements were carried out using a Bruel & Kjaer 2260 Sound Level Meter. The microphone was fitted with a foam windscreen and handheld on the outside (high rail side) of the test curve. The sound meter was held 7.5 m from the center of the track, with the microphone 1.2 m above the height of the rail. The maximum sound level range was preset to between 65 and 130 dB for train passes (35–100 dB for ambient). Sound measurements were made in the center of the curve. The sound level meter was programmed for event recording, enabling the instrument to automatically measure and store the event data. The logged data was downloaded via a serial interface to a laptop computer. In all but one case the data reported is the averaged L_Leq value for each of several trains under the particular conditions evaluated. L_Leq is the equivalent average noise level with linear (unweighted) frequency weighting. Measurements were made for the residence time of the train in the curve. For trams, the residence time was approximately 15 s, for Metros about 30 s, and for freight trains about 2 min. Five or more trains were measured, and the data in this paper is always the average for all the trains measured. For System 8, L_Amax was measured with a Rion NA-24 meter set for A-weighting, which measured only at certain selected frequencies.
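For reference, the L_Leq quoted throughout is the standard equivalent continuous level with linear weighting; the paper uses the values without spelling out the definition, which in conventional form is

$$L_{\mathrm{Leq}} = 10\,\log_{10}\!\left(\frac{1}{T}\int_0^T \frac{p^2(t)}{p_0^2}\,\mathrm{d}t\right)\ \mathrm{dB},\qquad p_0 = 2\times10^{-5}\ \mathrm{Pa},$$

with T the residence time of the train in the curve (about 15 s for trams, 30 s for Metros, and 2 min for freight).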
2.3. Vibration measurements

Vibration measurements were carried out in a track building located about 10 m from the trackside friction modifier delivery system, near the beginning of the test curve. This location was less than ideal, but access to local office buildings was not available. A Norsonics Model 121 analyzer was used to collect the data. The analyzer was set to collect 1/3 octave band spectra for 1 s periods and was operated continuously during the testing. The vibration results were converted from acceleration to velocity, and all the vibration results in this paper are in terms of velocity decibels (dB re 2.5 × 10⁻⁸ m/s, i.e. 1 µin./s).
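The velocity decibel convention used above is, in standard form,

$$L_v = 20\,\log_{10}\!\left(\frac{v}{v_{\mathrm{ref}}}\right)\ \mathrm{dB},\qquad v_{\mathrm{ref}} = 2.5\times10^{-8}\ \mathrm{m/s}\ (\approx 1\ \mu\mathrm{in./s});$$

for a 1/3 octave band centred at frequency f, the acceleration-to-velocity conversion is v = a/(2πf), assuming sinusoidal band-limited motion (our gloss; the paper states only that the conversion was performed).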
2.4. Friction modifier characteristics

The top of rail friction modifier used in this work is a water-based liquid material known as KELTRACK® [5–7]. After water removal, a thin dry film remains that provides an intermediate coefficient of friction in the range 0.30–0.35 as measured with a push tribometer. This friction level does not compromise braking or traction. The differences between a true friction modifier and a lubricant have been reviewed [8]. Friction levels have not been specifically reported in this paper, but the reader is referred to a previous publication for typical friction values [11]. Typical top of rail friction levels will be 0.5–0.6, and after application of friction modifier will be controlled in the range 0.35 ± 0.04.

The friction modifier is a suspension of engineered solids that provide the required frictional properties. Other components are present in order to:
(1) Control application performance in the delivery mechanism.
(2) Maximize film strength and endurance, a parameter known as "retentivity".

The positive friction characteristics of this friction modifier thin film have been established in two roller rig studies in Japan [9] and in other studies [6]. The frictional properties of the thin film are the net result of the friction modifier and the other Third Body components. (The Third Body refers to all the components of the layer between wheel and rail.) The friction modifier has been designed for optimal interaction with the iron oxide wear components that dominate the Third Body under normal conditions. The film characteristics have also been developed to provide the appropriate durability, so the film lasts for as many axles as possible.

2.5. Friction modifier application

The friction modifier can be applied by a number of alternative methods, such as spray from on-board locomotives or transit cars [10], using a trackside applicator, or from a track maintenance vehicle. In this work the water-based liquid friction modifier was applied to the top of rail either by hand application using paint rollers, or automatically using a special applicator from Portec Rail, the Protector Top of Rail system [7,11]. For hand application, a target application rate of 0.3 g/m per rail was used. The friction modifier was applied to the tangent immediately before the curve and through the body of the curve. In all cases except for System 8, friction modifier was applied to the top of both rails (inner and outer). Tests on System 8 examined the difference in sound reduction for application to the top of the low (inner) rail only versus both rails. Fig. 1 shows the automatic trackside application system. Two specially designed wiping bars are mounted on the field side of each rail, and these are shown in Fig. 2.

Fig. 1. Typical trackside top of rail applicator installation.
Fig. 2. Close-up of top of rail application bar.

3. Test results

3.1. Spectral sound patterns across different rail systems

Spectral sound patterns in curves were first compared without friction modifier between the three general types of wheel/rail systems (Fig. 3). This data is the same as the "baseline" spectra for each separate system shown in Figs. 4–8, 10 and 11. Corresponding information on curve radius, train speed, etc. can be found in Tables 2–4. The spectral sound patterns for the three systems are rather different. The tram systems have the lowest overall sound levels. Nevertheless there are distinct maxima for the trams within the wheel squeal range, e.g. at 1250 and 2500 Hz. This is consistent with the human experience at these sites, where there was a substantial amount of annoying wheel squeal in all cases. The Metro systems show a generally higher overall sound level and less distinction between rolling and squeal noise contributions. (All the Metro sites are underground.) The freight systems generally show a larger contribution from the flanging frequency range than the other systems, presumably because of the larger lateral and flanging forces associated with the high axle loads.

Fig. 3. Sound spectrum: different wheel/rail system types.
Fig. 4. System 1, site A, tram.
Fig. 5. System 1, site B, tram.
Fig. 6. System 2, tram.
Fig. 7. System 3, Metro.
Fig. 8. System 4, Metro.

3.2. Effect of friction modifier: tram systems

The effect of friction modifier on the spectral sound distribution was examined on two tram systems. Measurements were taken at two curves on System 1 and one curve on System 2. Characteristics of the tram systems are shown in Table 2 below, which also provides average sound levels with and without the friction modifier. Spectral sound characteristics with and without friction modifier are shown in Figs. 4–6.

Table 2
Test site characterization, tram systems

                                             System 1, site A   System 1, site B   System 2
System type                                  Tram               Tram               Tram
Curve radius (m)                             19                 19                 35
Speed (kph)                                  16                 16                 16
Axles per car                                8                  8                  6
Cars per train                               1                  1                  1
Track type                                   Imbedded U-rail    Imbedded U-rail    Imbedded U-rail
Brake type                                   Disc               Disc               Disc
Gauge face lubrication                       No                 Yes                No
FM application method                        Manual             Manual             Manual
Average L_Leq baseline, dB re 2 × 10⁻⁵ Pa     83.3               92.4               83.3
Average L_Leq friction mod, dB re 2 × 10⁻⁵ Pa 71.2               80.6               72.5

Figs. 5 and 6 indicate that noise in the wheel squeal and flanging parts of the spectra has been significantly reduced with the friction modifier. At frequencies below the squeal range there is little change in the spectra.
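To put the Table 2 reductions in perspective (a worked interpretation, not a claim made in the paper): a level difference ΔL corresponds to a sound energy ratio of 10^{ΔL/10}, so for System 1, site A,

$$\Delta L = 83.3 - 71.2 = 12.1\ \mathrm{dB}\quad\Rightarrow\quad 10^{12.1/10}\approx 16,$$

i.e. roughly a sixteen-fold reduction in the pass-averaged sound energy.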
3.3. Effect of friction modifier – Metro systems

Table 3 details the system characteristics for the two Metro systems selected for reporting, and Figs. 7 and 8 show the effect of friction modifier on the spectral sound distribution for these systems.

Table 3
Test site characterization, Metro systems

                                             System 3              System 4
System type                                  Metro (underground)   Metro (underground)
Curve radius (m)                             97                    90
Speed (kph)                                  8                     32
Axles per car                                4                     4
Cars per train                               10                    10
Track type                                   Wooden ties/concrete  Wooden ties/concrete
Brake type                                   Tread                 Tread
Gauge face lubrication                       Yes                   Yes
FM application method                        Manual                Trackside applicator
Average L_Leq baseline, dB re 2 × 10⁻⁵ Pa     101.5                 105.9
Average L_Leq friction mod, dB re 2 × 10⁻⁵ Pa 91.9                  98.6

The average sound levels in the Metro systems are rather high across the entire spectrum. Although there are no specific peaks in the part of the spectrum assigned to squeal, to the human ear at both Metro sites the presence of squeal was very apparent. Based on this, although the spectra are relatively featureless, there must be a considerable broad-band contribution which may be due to rolling noise, squeal, or flanging noise. It may be that the relatively high overall sound levels are obscuring the spectral maxima apparent in the tram case. Use of narrow-band spectra rather than 1/3 octave would help clarify this point, and future studies will use this methodology where appropriate. Application of friction modifier to the top of rail in these cases affects a broader range of the spectrum than in the tram case. One possible explanation is that there is a significant contribution from "graunching" noise at low frequency, and that this has a stick-slip oscillation mechanism. If so, the positive friction characteristics of the friction modifier might be expected to mitigate this noise source. Reductions in sound levels in these cases are observed right down to 30 Hz.

3.4. Effect of friction modifier on vibrations on System 4 (Metro)

During the sound testing described for System 4, vibration was also measured in the curve. Vibration levels before and after friction modifier application are shown in Fig. 9. Although the test site was less than ideal, the measurements suggest some reduction in vibration levels in the 30–60 Hz frequency range. The measurements at frequencies below 30 Hz are of limited value, as the trains were too close to the measurement site. The test results also show vibration reduction at the high frequencies (>1500 Hz), where the friction modifier is effective as a result of controlling stick-slip interaction at the wheel/rail interface. Further investigation of this area is clearly warranted.

Fig. 9. Average vibration spectra, System 4, Metro.

3.5. Effect of friction modifier – heavy haul freight

Table 4 outlines system characteristics for three different heavy haul freight systems.

Table 4
Test site characterization, freight systems

                                             System 5               System 6               System 7
System type                                  Heavy haul freight     Heavy haul freight     Heavy haul freight
Curve radius (m)                             291                    200                    148.5
Speed (kph)                                  32                     32                     32
Axles per car                                4                      4                      4
Cars per train                               100                    60                     80
Track type                                   Concrete ties/ballast  Concrete ties/ballast  Wooden ties/ballast
Brake type                                   Tread                  Tread                  Tread
Gauge face lubrication                       No                     Yes                    Variable
FM application                               Trackside applicator   Trackside applicator   Trackside applicator
Average L_Leq baseline, dB re 2 × 10⁻⁵ Pa     90.6                   102.4                  –
Average L_Leq friction mod, dB re 2 × 10⁻⁵ Pa 81.8                   86.9                   –
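Section 2.2 notes that each reported level is an average over five or more train passes; the averaging rule is not stated, and the sketch below assumes the usual energy (not arithmetic) averaging of per-pass L_Leq values, with hypothetical pass levels for illustration:

# Energy-based averaging of per-pass Leq values (illustrative numbers only).
leq_passes <- c(90.2, 91.0, 90.8, 90.1, 90.9)      # dB re 2e-5 Pa, hypothetical
leq_avg <- 10 * log10(mean(10^(leq_passes / 10)))  # average on the energy scale
round(leq_avg, 1)                                  # ~90.6 dB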
Fig. 10. System 5, heavy haul freight.
Fig. 11. System 6, heavy haul freight.

Results before and after friction modifier application by the automated trackside delivery system are shown in Figs. 10 and 11 for Systems 5 and 6, respectively. The results show a distinct reduction in squeal and higher frequency flanging noise. Changes in rolling noise are marginal at best.

3.6. Identification of flanging noise spectral pattern

Because flanging noise sounds to the human ear like a grinding or hissing noise, it is sometimes assumed that this is associated with lower spectral frequencies. The following examples illustrate the distinction between top of rail squeal and flange contact sound spectral patterns.

The first example is from a North American freight rail sharp curve which has been monitored for several years (System 7, Table 4). In this curve trackside gauge face grease lubrication is used to control wear, and the friction modifier, applied by an atomized spray to the top of rail [11], controls the wheel squeal noise. On one particular occasion, local resident complaints prompted us to carry out additional sound level monitoring. To the trained ear, it was clear that the sound (which was indeed quite irritating) was the hissing sound typical of flange contact. It turned out that the trackside grease lubrication units had become non-functional. When these were repaired, the hissing flange noise disappeared. The spectral sound patterns are shown in Fig. 12. This figure shows (A) trains with some TOR squeal, (B) TOR application with no gauge face lubrication (flanging noise), and (C) full control of train noise with both TOR friction control and gauge face lubrication. This result does not, however, address the question of the spectral frequency of "graunching" or grinding, which may also be associated with flange contact. Further work is needed in this area.

Fig. 12. (A) Some squeal, (B) TOR only, (C) TOR plus gauge face lubrication.

The results on Metros and trams, although limited, suggest that in some cases top of rail friction control alone can eliminate both top of rail squeal and higher frequency flanging noise. (This conclusion is undoubtedly site specific.) However, in heavy haul freight applications, the higher lateral and flanging forces in sharp curves require gauge face/flange interface lubrication in addition to top of rail friction control to fully control noise. This conclusion is also probably site specific, and further testing is needed to explore the limits and track conditions that affect these conclusions. The dual top of rail and gauge face application may only be required in sites with specific conditions of curvature, track superelevation, axle load, bogie type, etc.

The second example is from a tram system in Japan. System details are shown in Table 5. Fig. 13 shows that with application of friction modifier to the top of the low rail only, there is a reduction in sound levels in the range from 1000 to 4000 Hz, but little change at frequencies above 4000 Hz. In this case the figure shows the spectra of the 1 s highest A-weighted level. However, with application of friction modifier to the top of both rails, the high frequency component is further reduced. The friction levels on the top of the low rail are primarily responsible for reducing gauge-spreading forces, regardless of top of high rail friction levels. However, the friction levels on both rails affect the reduction in flanging force [12]. Hence providing controlled reduced friction on the high rail will further reduce flanging forces, and consequently flanging noise.

Table 5
Test site characterization, System 8

                                   System 8
System type                        Tram
Curve radius (m)                   160
Speed (kph)                        24
Axles per car                      4
Cars per train                     1
Track type                         Wooden ties/ballast
Brake type                         Tread
Gauge face lubrication             No
FM application                     Trackside applicator
Average L_Amax baseline (dBA)      90.8
Average L_Amax friction mod (dBA)  85.7
Application of top of rail friction modifier reduced these noise sources in all systems considered.The reduction inflanging noise in most systems tested in transits(trams and Metros)is interesting.In systems where conventional gauge face lubrication is used,there is still a reduction inflanging noise with the application of friction modifier(see Figs.5,7and8).In cases where there is no gauge lubrication,the friction modifier reducesflanging noise to an even lower level(see Figs.4and6).Reduction inflanging noise is because the friction modifier reduces TOR friction from an average“dry”state of0.45–0.6 down to an optimum level around0.35as measured with a push tribometer.The result is a corresponding reduction in lateral forces andflanging forces.It is well known that the presence of gauge face lubrication increases the angle of attack and the magnitude of lateral forces.The top of rail friction modifier is effective enough to mitigate the increase in lateral forces caused by gauge face lubrication[12].The other outcome is,in some cases,elimination of high rail gauge face contact with associated dramatic reduction in rail wear[11].The reduction in rolling noise in some cases but not in others is of interest.The mechanism for this noise reduction is not yet entirely clear.Future work will investigate the possible role of rail roughness changes with friction modifiers and how this might be related to the results reported in this paper. 5.ConclusionsThis paper has shown that friction modifiers can reduce overall noise in curves across a wide range of wheel/rail sys-tems,as shown in Fig.14.Fig.14.Summary of average sound level reductions.This work also shows that in practical railways there is a large variation in absolute sound levels and spectral pat-terns.These have been characterized across trams,Metro, and heavy haul freight.The results show that:•Friction modifiers reduce squeal noise across all systems considered.•Friction modifiers reduceflanging noise in all transit sys-tems tested,but not necessarily in freight,where effective gauge face lubrication may also be required because of the higher lateral andflanging forces,especially in sharper curves.•For systems with highest overall noise levels,the noise tends to be reduced across a broader part of the spectrum with friction modifiers.•In one case,some reduction in low frequency vibration has been observed with friction modifier application.Ongoing work is examining in more detail the mechanisms by which friction modifiers reduce low frequency noise and vibrations,as well as the effects on wheel and rail rough-ness.Future work should also consider the potential impact of friction modifier on roughness levels on tangent track.The controlled intermediate friction is expected to reduce the rate of roughness increase because of reduced longitudinal and friction forces.AcknowledgmentsThe following individuals are acknowledged for their helpful contributions:Mr.David Elvidge for the freightflang-ing noise measurements(System7),ATS Consulting(Dr. 
ATS Consulting (Dr. Hugh Saurenman) for the vibration measurements, Dr. Y. Oka for the sound measurements on System 8, and one of the referees for suggesting that "graunching" was responsible for some of the low frequency noise.

References

[1] C. Talotte, P.E. Gautier, D.J. Thompson, C. Hanson, Identification, modeling and reduction potential of railway noise sources, in: Proceedings of the Seventh International Workshop on Railway Noise, Portland, Maine, October 2001.
[2] C. Esveld, Modern Railway Track, 2nd ed., TU Delft, Netherlands, 2001, ISBN 90-800324-3-3.
[3] M.A. Heckl, Curve squeal of train wheels. Part 3. Active control, J. Sound Vibrat. 229 (3) (2000) 709–735, and references therein.
[4] F.G. De Beer, M.H.A. Janssens, P.P. Kooijman, Squeal noise of rail bound vehicles influenced by lateral contact position, in: Proceedings of the Seventh International Workshop on Railway Noise, Portland, Maine, October 2001.
[5] K.S. Chiddick, Solid lubricants and friction modifiers for heavy loads and rail application, US Patent 6136757 (October 2000).
[6] D.T. Eadie, J. Kalousek, K.S. Chiddick, The role of high positive friction (HPF) modifier in the control of short pitch corrugations and related phenomena, Wear 253 (1–2) (2002) 185–192.
[7] D.T. Eadie, M. Santoro, W. Poll, Local control of noise and vibration with KELTRACK friction modifier and Protector trackside application: an integrated solution, in: Proceedings of the Seventh International Workshop on Railway Noise, Portland, Maine, October 2001.
[8] D.T. Eadie, J. Kalousek, Railway Age (2001) 48.
[9] A. Matsumoto, Y. Sato, H. Ono, Y. Wang, Y. Yamamoto, M. Tanimoto, Y. Oka, Creep force characteristics between rail and wheel on scaled model, Wear 253 (1–2) (2002) 199–203.
[10] M. Tomeoka, N. Kabe, M. Tanimoto, E. Miyauchi, M. Nakata, Friction control between wheel and rail by means of on-board lubrication, Wear 253 (1–2) (2002) 124–129.
[11] D.T. Eadie, N. Hooper, B. Vidler, T. Makowsky, Top of rail friction control: lateral force and rail wear reduction in a freight application, in: Proceedings of the International Heavy Haul Conference, Dallas, May 4–8, 2003.
[12] J. Kalousek, Rolling radius difference: do you appreciate its significance? National Research Council of Canada, Centre for Surface Transportation, Report CSTT-VTD-54-AAR, June 2001.