A Point Cloud-Vision Hybrid Approach for 3D Location Tracking of Mobile Construction Assets
With Azurion, performance and superior care become one.

Azurion 7 Image Guided Therapy System
17% reduction of procedure time with Philips Azurion at St. Antonius Hospital.1 The ability to treat one more patient per day today, or in the future.

Azurion enables you to provide superior care. Azurion helps you optimize your lab performance. An easy-to-use platform supports you in quickly and easily performing diverse procedures.

This is exemplified by our Image Guided Therapy System - Azurion 7. It allows you to easily and confidently perform a wide range of routine and complex procedures with a unique user experience, helping you optimize your lab performance and provide superior care. Azurion is powered by ConnectOS, a real-time multi-workspot technology designed specifically for the Azurion image guided therapy platform. As the interventional space evolves, we continue to integrate essential lab systems and tools onto the Azurion platform for a better user experience. The Azurion integrated lab offers a seamless user experience that gives you control of all compatible applications from a single touch screen at table side, to help make fast and informed decisions without breaking sterility.

With Azurion's industry-leading image guided therapy platform, we reinforce our commitment to you and your patients. Our goal is to help you effectively meet today's challenges so that you are ready for the future.

Treating patients. It's what you do.
You strive every day to provide the best patient care, quickly and reliably, no matter which procedure you are performing. So try to imagine an increased number of procedures, for more patients, carried out consistently and efficiently with fewer preparation errors. Workflow can be optimized and performed on an intuitive platform designed to make your day a lot easier.

"It (Azurion) profited from the experience provided by the users and it's for sure something which is incredibly easy to use."
Prof. Laurent Spelle, Hospital Bicetre AP-HP, Paris, France

Full control at table side to enhance decision making
You can now control all compatible applications in the interventional lab via the central touch screen module and FlexVision Pro. Not only does this improve workflow within the exam room, it helps reduce the need for team members to leave the sterile area and walk to the control room during procedures. This can save time and help avoid delays.

Gain advanced physiologic guidance to help improve treatment outcomes
You can access IntraSight, a comprehensive suite of clinically proven2-6 imaging, physiology and co-registration7 tools, via the central touch screen module. These tools allow you to go beyond the angiogram and complete your view of the target vessel, to help you make fast, informed clinical decisions.

Azurion with FlexArm - more independent control for physicians
The FlexArm option further evolves Azurion's table side control with the intuitive Axsys controller to make procedures flow naturally and easily. When changes or complications occur, the physician can quickly and easily take action. This can also reduce the need to move in and out of the sterile field during a procedure.

Designed around you and your procedure
All Azurion systems and interventional tools use the same standardized user interface to support training. Use has been further simplified through a sophisticated help function. You can access digital user guides with one click for on-the-spot assistance.
Clear and simple to use
On screen, information clearly stands out against the distinctive black background where active applications are highlighted. Backlit icons and distinctly shaped buttons on the Control Module promote intuitive operation. All controls are designed for easy cleaning to meet stringent sterility requirements.

Less clutter and faster workflow
With the Azurion integrated lab, controlling all compatible applications at the touch screen module can reduce extra interfaces and controls table side. The FlexSpot works according to the same principle. It gives you access to all compatible applications in one compact, customizable workplace that can be placed in the control room or exam room where needed. Save time by setting the display to re-arrange and re-size as applications are opened and closed.

At Philips, we are guided by you. With Azurion, we've brought the user experience and simplicity of touch screen controls right where it's needed to make a difference to lab workflow.

Outstanding user experience: FlexSpot, Touch screen module Pro, FlexVision Pro

"The new integrated system saves us time, because we can control all applications via one user interface."
Dr. med. Peter Ong, Robert Bosch Hospital, Stuttgart, Germany

With Azurion we help you to optimize your lab performance
Azurion's integrated approach can help you achieve measurable improvements in throughput, cost reduction and staff satisfaction.

Do more at table side
With our enhanced touch screen module, you will experience simpler, smoother procedures, based on familiar tablet interactions. For example, you can now easily mark relevant details on 2D images on the touch screen with your fingertip.

Azurion allows you to run an entire case without breaking sterility
The touch screen module offers total control within the sterile field. Run an entire case table side as you quickly diagnose, navigate, annotate and measure to your exact specifications, even when wearing gloves and under a sterile drape. Table side control saves you from having to go to the control room to access applications.

Save time through Instant Parallel Working
The Azurion 7 image guided therapy system has been specifically designed to save time by enabling interventional team members to do two tasks at the same time in the exam room and control room - without interrupting each other. As an example, while fluoroscopy/exposure is taking place, a technologist in the control room can instantly review previous images from the same patient, prepare the next exam or finish reporting on another patient. This leads to higher throughput and faster exam turnover without compromising quality of care.

Simplify workflow
Enter patient information once and it is automatically transferred to connected applications to reduce data entry errors. To save time, IntelliSpace Cardiovascular8 and IntelliSpace Portal launch automatically with the specific patient on the exam room monitor. Azurion's full system automatic position control (APC) gives you more flexibility to recall the stored position of the C-arm, table and other parameters for a particular image to simplify positioning.

Imagine an easier work day
You can combine different user-centric workspots (FlexVision Pro, FlexSpot and touch screen modules) to view, control and run applications where and when needed. At these workspots you can co-register9 iFR or IVUS data with the angiogram, so you have the tools in hand to manage procedure quality and patient care.
Together these flexible workspots allow you to customize your workflow to boost efficiency.

Safeguard clinical performance and enhance lab security over time with the Windows 10 platform
The standard Windows 10 platform can help support compliance with the latest security requirements and standards to protect patient data. It can also accommodate new software options to extend your system's clinical relevance over time.

Clinical suites
Clinical demands are getting more specific. So are we. Our clinical suites are tailored to meet your specific challenges, while offering you the flexibility to carry out procedures in the easiest, most efficient way. We have a flexible portfolio of integrated technologies and services to support the full interventional spectrum. We also offer Hybrid OR solutions that create an innovative care environment for performing open and minimally invasive surgical procedures.

Simplified set-up and operation
The Azurion 7 uses a range of ProcedureCards to help optimize and standardize system set-up for all your cases. The system will automatically select the appropriate ProcedureCard(s) based on the (CIS/RIS/HIS) code of the scheduled procedure from the information system. ProcedureCards can increase the consistency of exams by offering presets (e.g. most-frequently used, default protocols and user-specified settings) at the procedure, physician or department level. In addition, hospital checklists and/or protocols can be uploaded into the ProcedureCards to help safeguard the consistency of interventional procedures and reduce preparation errors.

Enhance patient care with continuous monitoring
The Philips Interventional Hemodynamic System is integrated with the IntelliVue X3 patient monitor, allowing continuous patient monitoring throughout procedures in the interventional workflow. There is no need to change cables, minimizing disruption to vulnerable patients and giving you more time to focus on them. Continuous patient monitoring also results in a gap-free patient record.

Increase clinical confidence with 3D imaging
The clinical application software SmartCT enriches our exceptional 3D interventional tools for interventional procedures with step-by-step guidance that is designed to remove the barriers to acquiring 3D images in the interventional lab. Easily control advanced 3D visualization and measurements at table side on the touch screen module. Studies have shown that 3D CT-like imaging can enhance diagnostic accuracy9,10,11 and support improved patient outcomes.

Azurion enables you to provide superior care
As patient volumes rise and procedures become more complex, how do you maintain high standards of quality and safety in your healthcare facility?

"We always stay in the angio suite because the ease of use has really improved and now we can control everything from the side of the patient in the angio lab itself."
Prof. Vincent Costalat, Centre Hospitalier Universitaire de Montpellier, France

High quality images at low X-ray dose
Our ClarityIQ imaging technology provides significantly lower dose across clinical areas, patients, and operators.12 In routine coronary procedures,13 ClarityIQ technology reduces patient dose by 67% without affecting procedural performance while maintaining equivalent image quality, compared to a system without ClarityIQ.14,15 In interventional neuro procedures, ClarityIQ technology reduces patient dose by 65%, compared to a system without ClarityIQ.16

Managing dose efficiently
DoseWise is integrated across the Philips image-guided therapy Azurion portfolio.
DoseWise consists of a comprehensive range of radiation dose management tools, training, and integrated product technologies that aim to help you take control over patient care, staff safety, and regulatory compliance. Another feature is the Zero Dose Positioning function. It lets you pan the table, change table height or field-of-view on your Last Image Hold (LIH) image. This enables positioning on a previously recorded last image without the use of radiation.

Managing dose across your organization
Philips DoseAware provides real-time feedback in the exam room. It displays the invisible nature of radiation in real time, so that you and your staff can see it promptly, easily and simply - and rapidly understand the effect of behavior changes and work patterns. DoseAware Xtend is a dedicated solution for treatment rooms that builds on the capabilities of DoseAware and interfaces seamlessly with the Azurion image-guided therapy system. Thanks to this seamless integration, DoseAware Xtend can provide live individual dose rates (live screen) during procedures, and summarized procedure doses (review screen). It also reminds staff to better protect themselves by providing a warning symbol when the lead protection screen is not being used properly.

Perform standardized quality assurance verifications in just 5 minutes17
To make it easier for you to routinely perform consistent verification tests of radiation dose and image quality, only Philips offers the User Quality Control Mode (UQCM) tool on its Azurion system. With this option, you can independently verify and audit the radiation and image quality related factors of your Azurion system in a standardized way in just 5 minutes, as well as carry out a range of validation and quality assurance tests.

High standards of safety and low radiation exposure
As you look for new radiation dose management strategies to continue to enhance patient and staff safety, while maintaining and enhancing your level of care, we can support you in meeting your goals.

Azurion - a comprehensive image guided therapy platform
The Azurion 7 integrated lab brings together a range of sophisticated interventional tools, including clinically proven2-6 imaging and physiology tools, advanced hemodynamic measurements and cardiac informatics to support clinical excellence during procedures.

Azurion 7 C/F12
With its 12" Flat Detector, the 7 Series provides high-resolution imaging over a large field-of-view with flexible projection capabilities, making it ideal for cardiac interventions. The entire coronary tree can be visualized in a single view with minimal table panning.

Azurion 7 C/F20
Enhance visibility for diverse cardiac and vascular procedures with the excellent image quality and broad coverage of the next generation 20" Flat Detector. This system supports head-to-toe imaging and patient access from all sides.

Azurion 7 C20 with FlexArm
Create a Hybrid OR that provides unlimited imaging flexibility for diverse procedures and exceptional positioning freedom for medical teams with the Azurion 7 and the next generation 20" Flat Detector, combined with the ceiling-mounted FlexArm option. You get a highly cost-effective environment that is ready for the procedures of the future.

Azurion 7 C20 with FlexMove
Move to a Hybrid OR with confidence, with the Azurion 7 and the next generation 20" Flat Detector, combined with the ceiling-mounted FlexMove option.
FlexMove offers exceptional workflow flexibility to perform open and minimally invasive procedures in the same room.

Azurion 7 B12/12
The Azurion 7 biplane system with two 12" Flat Detectors provides high-resolution imaging and positioning flexibility to reveal critical anatomical information during congenital heart and electrophysiology procedures.

Azurion 7 B20/12
The Azurion 7 biplane system with a 20" and 12" Flat Detector provides exceptional clarity of detail and navigational precision to support a wide range of challenging cardiac and vascular interventions.

Azurion 7 B20/15
Enhance insight and certainty during neuro interventions with the Azurion 7 biplane system. It pairs a 20" frontal with a 15" lateral detector.

For more information, please contact your sales representative or send an email to: ***************************

High productivity combined with low cost of ownership
With Philips, you get the best service performance, which enables you to treat more patients, and professional support to help you deliver cost-efficient care.

Best service performance18 enables you to treat more patients19
Staying on top of today's complex healthcare environment is challenging enough without the constant concern of keeping your systems up and running smoothly. With Philips, your operations are protected by the best overall service engineer performance for imaging systems according to IMV ServiceTrak for 5 years in a row. Philips remotely connected systems provide 135 more hours of operational availability per year, enabling you to treat more patients.

Professional support helps you deliver cost-efficient care
To help you fully leverage your financial, technological and staffing resources and realize a high return on your investment, we offer professional support through our experienced network of over 7,000 field service engineers, as well as a flexible service offering that includes:
• Innovative financing solutions tailored to meet the needs of healthcare organizations
• A broad range of healthcare consulting programs to help your organization further enhance the efficiency and efficacy of your care delivery process
• Philips Healthcare Education, which can help unlock the full potential of your staff, technology and organization to meet new challenges through innovative, meaningful and evidence-based healthcare education.

Cost-effectively manage future upgrades with the Technology Maximizer program
Technology Maximizer is a program that runs in tandem with your Philips Service Agreement.20 When you opt into the program, you receive the latest available software and hardware21 technology releases for a fraction of the cost of purchasing them individually. Technology Maximizer Plus allows you to further tailor upgrades to reduce costs. No need to wait for budget approval. No need to buy individual upgrades. Just a cost-effective way to manage ongoing technology upgrades through your operational budget.

Doing business responsibly and sustainably
When you choose Philips, you are choosing a partner committed to meeting sustainability and circular economy ambitions.
As a leading health technology company, our purpose is to improve people's health and well-being through meaningful innovation, positively impacting 2.5 billion lives per year by 2030. The Azurion is the result of our EcoDesign process and offers significant environmental improvements:
• 100% product take-back after customers' acceptance of our trade-in offer
• 100% repurposing of the equipment that is returned to Philips
• Up to 90% of material weight is reused during refurbishing, depending on type and age of product
• At least 10% lower energy consumption over total product life usage22

Read more about our Environmental, Social and Corporate Governance (ESG) commitments here: https:///a-w/about/sustainability.html

Philips remotely connected systems provide 135 more hours of operational availability on average, per year, enabling you to treat more patients.19
Graduation Design Report: English Literature and Chinese Translation
Student name:        Student ID:
School of Computer and Control Engineering        Major:
Supervisor:
June 2017

English Literature

Cloud Computing

1. Cloud Computing at a Higher Level
In many ways, cloud computing is simply a metaphor for the Internet, the increasing movement of compute and data resources onto the Web. But there's a difference: cloud computing represents a new tipping point for the value of network computing. It delivers higher efficiency, massive scalability, and faster, easier software development. It's about new programming models, new IT infrastructure, and the enabling of new business models.
For those developers and enterprises who want to embrace cloud computing, Sun is developing critical technologies to deliver enterprise scale and systemic qualities to this new paradigm:
(1) Interoperability: while most current clouds offer closed platforms and vendor lock-in, developers clamor for interoperability.
DATASHEET
BloxOne™ Threat Defense Advanced
Strengthen and Optimize Your Security Posture from the Foundation

The Need for Foundational Security at Scale
The traditional security model is inadequate in today's world of digital transformations.
• The perimeter has shifted, and your users directly access cloud-based applications from everywhere.
• SD-WAN drives network transformation, and branch offices directly connect to the Internet with no ability to replicate the full HQ security stack.
• IoT leads to an explosion of devices that do not accept traditional endpoint technologies for protection.
• Most security systems are complex and do not easily scale to the level needed to protect these dynamic environments.
Moreover, security operations teams are chronically short-staffed (there is a shortage of 2.93 million security operations personnel worldwide according to a recent ISC2 report), use siloed tools and manual processes to gather information, and must deal with hundreds to thousands of alerts every day.
What organizations need is a scalable, simple and automated security solution that protects the entire network without the need to deploy or manage additional infrastructure.

Infoblox Provides a Scalable Platform That Maximizes Your Existing Threat Defense Investment
Infoblox BloxOne Threat Defense strengthens and optimizes your security posture from the foundation up. It maximizes brand protection by securing your existing networks as well as digital imperatives like SD-WAN, IoT and the cloud. It uses a hybrid architecture for pervasive, inside-out protection, powers security orchestration, automation and response (SOAR) solutions by providing rich network and threat context, optimizes the performance of the entire security ecosystem and reduces your total cost of enterprise threat defense.

Figure 2: BloxOne Threat Defense (ecosystem integrations shown: Advanced Threat Detection, SOAR, Network Access Control (NAC), Next-Gen Endpoint Security)

Maximize Security Operation Center Efficiency

Reduce Incident Response Time
• Automatically block malicious activity and provide the threat data to the rest of your security ecosystem for investigation, quarantine and remediation
• Optimize your SOAR solution using contextual network and threat intelligence data, and Infoblox ecosystem integrations (a critical enabler of SOAR) to reduce threat response time and OPEX
• Reduce the number of alerts to review and the noise from your firewalls

Unify Security Policy with Threat Intel Portability
• Collect and manage curated threat intelligence data from internal and external sources and distribute it to existing security systems
• Reduce the cost of threat feeds while improving the effectiveness of threat intel across your entire security portfolio

Faster Threat Investigation and Hunting
• Make your threat analyst team 3x more productive by empowering security analysts with automated threat investigation, insights into related threats and additional research perspectives from expert cyber sources to make quick, accurate decisions on threats
• Reduce the human analytical capital needed

Figure 1: Infoblox hybrid architecture enables protection everywhere and deployment anywhere

Infoblox is leading the way to next-level DDI with its Secure Cloud-Managed Network Services. Infoblox brings next-level security, reliability and automation to on-premises, cloud and hybrid networks, setting customers on a path to a single pane of glass for network management. Infoblox is a recognized leader with 50 percent market share comprised of 8,000 customers, including 350 of the Fortune 500.
Hybrid Approach Protects Wherever You Are Deployed

Analytics in the Cloud
• Leverage the greater processing capabilities of the cloud to detect a wider range of threats, including data exfiltration, domain generation algorithm (DGA), fast flux, fileless malware, Dictionary DGA and more, using machine learning based analytics
• Detect threats in the cloud and enforce anywhere to protect HQ, datacenter, remote offices or roaming devices

Threat Intelligence Scaling
• Apply comprehensive intelligence from Infoblox research and third-party providers to enforce policies on-premises or in the cloud, and automatically distribute it to the rest of the security infrastructure
• Apply more threat intelligence in the cloud without huge investments into more security appliances for every site

Powerful integrations with your security ecosystem
• Enables full integration with on-premises Infoblox and third-party security technologies, enabling network-wide remediation and improving ROI of those technologies

Remote survivability/resiliency
• If there is ever a disruption in your Internet connectivity, the on-premises Infoblox can continue to secure the network

To learn more about the ways that BloxOne Threat Defense secures your data and infrastructure, please visit: https:///products/bloxone-threat-defense

"In this day and age there is way too much ransomware, spyware, and adware coming in over links opened by Internet users. The Infoblox cloud security solution helps block users from redirects that take them to bad sites, keeps machines from becoming infected, and keeps users safer."
Senior System Administrator and Network Engineer, City University of Seattle
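The machine-learning analytics mentioned above (DGA detection and the like) are only described at a high level in this datasheet. Purely as an illustration of the general idea, and not Infoblox's actual method, a naive first pass at flagging algorithmically generated domains can score the leftmost DNS label by length and character entropy; the threshold values below are arbitrary placeholders.

import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_dga(domain: str, entropy_threshold: float = 3.5, min_label_len: int = 12) -> bool:
    """Very rough heuristic: long, high-entropy leftmost labels are suspicious."""
    label = domain.lower().split(".")[0]   # leftmost DNS label
    return len(label) >= min_label_len and shannon_entropy(label) >= entropy_threshold

# Example: a readable hostname vs. a random-looking one (both made up)
for d in ["mail.example.com", "xj4kqp9zt2vbl7.example.net"]:
    print(d, looks_like_dga(d))

Production systems combine many more signals (lexical features, query patterns, reputation feeds) and a trained classifier rather than a single fixed threshold.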
Computer Science and Application 计算机科学与应用, 2023, 13(7), 1343-1351
Published Online July 2023 in Hans. https:///journal/csa
https:///10.12677/csa.2023.137132

Research Review of Multimodal Fusion of Point Cloud and Image in Autonomous Driving
Yue Meng, Shixin Li*, Fankai Chen, Chen Liu, Xiaohan Cong
College of Electronic Engineering, Tianjin University of Technology and Education, Tianjin
Received: June 6, 2023; accepted: July 5, 2023; published: July 12, 2023

Abstract
In view of the complex and changeable road environment, and drawing on the current state of research at home and abroad, this paper discusses the formats of network inputs in autonomous driving from the perspectives of LiDAR and cameras. Taking the fusion of these two sensors as an example, it summarizes the classification of multimodal sensor fusion methods used in the environment-perception tasks of autonomous vehicles. On this basis, a further classification is derived from the perspective of the fusion stage, which simplifies the categorization and understanding of fusion methods and emphasizes both the differences in the degree of fusion and the integrity of each fusion approach; this classification has innovative value for advancing research and development of fusion methods. Finally, the remaining problems of sensor fusion are analyzed and future development trends are predicted.

Keywords
LiDAR, Camera, Multimodal, Sensor Fusion

*Corresponding author.
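The review classifies fusion approaches by the stage at which LiDAR and camera data are combined. Purely as an illustrative sketch, and not a method taken from the paper, the snippet below shows the geometric step that early-fusion ("point decoration") pipelines typically share: projecting LiDAR points into the image plane so each 3D point can be paired with the pixel it falls on. The intrinsic matrix, extrinsic transform and sample points are made-up placeholders.

import numpy as np

def project_points_to_image(points_lidar, T_cam_from_lidar, K, image_shape):
    """Project Nx3 LiDAR points into the image; return pixel coords and a validity mask."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])      # homogeneous coordinates
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]          # transform into the camera frame
    in_front = pts_cam[:, 2] > 0                             # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                              # perspective division
    h, w = image_shape[:2]
    in_image = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv, in_front & in_image

# Hypothetical calibration (illustration only): identity extrinsics, simple pinhole intrinsics
K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
points = np.array([[2.0, 0.5, 10.0], [1.0, -0.2, 5.0], [0.0, 0.0, -3.0]])   # x, y, z in the LiDAR frame
image = np.zeros((720, 1280, 3), dtype=np.uint8)                            # stand-in camera image

uv, valid = project_points_to_image(points, T, K, image.shape)
# "Decorate" each valid point with the RGB value under its projection (fusion of the raw inputs)
rows, cols = uv[valid, 1].astype(int), uv[valid, 0].astype(int)
decorated = np.hstack([points[valid], image[rows, cols]])
print(decorated.shape)   # each row: x, y, z, r, g, b

Feature-level and decision-level fusion differ mainly in what is combined (learned feature maps or per-sensor detections) rather than in this projection step itself.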
【关键字】精品1 艳日辉Aeonium decorum f variegata까라솔/일월금2 爱染锦Aeonium domesticum fa Variegata애염금3 日本小松Aeonium sedifolius소인제4 唐扇Aloinopsis schooneesii알로이놉시스5 吹雪之松锦Anacampseros rufescens Sunrise 취설송6 乒乓福娘Cotyledon orbiculata cv방울복랑7 达摩福娘Cotyledon pendens펜덴스8 熊童子Cotyledon tomentosa웅동자9 黄熊Cotyledon tomentosa ssptomentosa웅동자금10 玉稚儿Crassula arta 옥치아11 火星兔子Crassula ausensis v titanopsis티타놉시스12 半球乙女心Crassula Brevifolia브레비폴리아13 克拉夫Crassula clavata 크라바타14 纪之川Crassula cvMoonglow기천15 白鹭Crassula deltoidea백로16 "精灵豆" Crassula elegans엘레강스17 绒针Crassula mesembryanthoides은전18 小天狗Crassula nudicaulis애성19 筒叶花月Crassula obliqua Gollum우주목20 新花月锦Crassula obliqua f variegata신화월금21 红缘心水晶Crassula pellucida ssp marginalis f rubra 홍연심수정22 十字星锦Crassula perforata var23 钱串Crassula perforata고주성/루페스트리24 梦椿Crassula pubescens푸베센스/파브센스25 若歌诗Crassula rogersii로게르시26 小米星Crassula rupestris TOM THUMB 희성27 彩色蜡笔Crassula Rupestris variegata "pastel" 희성금28 雨心Crassula volkensii볼켄시29Crassula rupestris 원종애성30 妖玉Dintheranthus puberulus31 仙女杯DUDLEYA pulverulenta두둘레야32 黑紫色东云Echeveria agavoides Rubra 루브라33 昂斯洛Echeveria cv Onslow34 乙女梦Echeveria gibbiflora v carunculata 을녀몽35Echeveria Lamillette f cristata 라밀레떼철화36 利比亚Echeveria Liberia리베리아37Echeveria Lime & Chili 라임앤칠리38Echeveria Pansy 팬지39 舞会红裙Echeveria Party Dress파티드레스몬스터40 杜里万Echeveria tolimanensis토리마넨시스41 纸风车Echeveria tuxpan 턱시판42 胜者骑兵Echeveria Victor Reiter빅터레이터43 晚霞Echeveria afterglow에프터글로우44 魅惑之宵Echeveria agavoides45 罗密欧Echeveria agavoides Romeo로미오46 圣路易斯Echeveria agavoides santa lewis47 圣诞Echeveria Agavoides Christmas크리스마스48 圣诞东云Echeveria agavoides Christmas크리스마스이브49 红洛Echeveria agavoides hyb톰스텝50 玉点Echeveria agavoides Jade Point제이드포인트51 玛利亚Echeveria agavoides Maria블랙마리아52 东云赤星Echeveria agavoides Martins Hybrid 긴잎적성53 紫乌木Echeveria agavoides purple ebony퍼플에보니54 月影杂交Echeveria agavoides sp라즈아가55 玉杯Echeveria agavoides sp Gilva56 白蜡Echeveria agavoides Wax왁스57 星影Echeveria albicans성영58 阿米西达Echeveria amistar아미스타59 花乃井Echeveria amoena 원종아모에나60 亚特兰蒂斯Echeveria Atlantis피치스앤크림61 狂野男爵Echeveria Baron Bold바론볼드62 苯巴蒂斯Echeveria Ben Badis벤바디스63 大红Echeveria Big Red빅레드64 黑王子Echeveria black prince블랙프린스65 蓝精灵Echeveria blue apple크리시엔라이언미스테리66 日本蓝鸟Echeveria blue bird67 蓝云Echeveria blue cloud블루크라우드68 蓝之天使Echeveria blue elf블루엘프69 蓝灵Echeveria Blue Fairy문페어리70 蓝色苍鹭Echeveria Blue heron 블루헤론71 若桃Echeveria blue minima블루미니마72 天空蓝蓝Echeveria Blue Sky블루스카이73 红糖Echeveria BROWN SUGAR 라벤다힐74 织锦Echeveria Californica Queen 정야금75 广寒宫Echeveria cante 칸테76 银明色Echeveria carnicolor고사옹철화77 灰姑娘Echeveria Cinderella78 克拉拉Echeveria clara 클라라79 蒙恰卡/魔爪Echeveriacuspidata Menchaca 멘챠카80 绿爪Echeveria cuspidata var zaragozae 흑자라고사81 白冠闪Echeveria cv Bombycina백섬관82 白凤Echeveria cv HAKUHOU백봉83 初恋Echeveria cv Huthspinke초연84 丹尼尔Echeveria cv Joan Daniel조안다니엘85 芙蓉雪莲Echeveria cv Laulindsayana먼로86 女雏Echeveria cv Mebina메비나87 桃太郎Echeveria cv Momotarou모모타로88 紫珍珠Echeveria cv Peale von Nurnberg 뉴헨의진주89 紫心/粉色回忆Echeveriacv Rezry 원종프리티90 高砂之翁Echeveria cv Takasagono-okina 홍공작91 红化妆Echeveria cv Victor홍화장92 玫瑰莲Echeveria cv Derosa데로사93 女王花笠Echeveria cv Meridian여왕화립94 静夜Echeveria Derenbergii정야95 戴伦Echeveria Deren-Oliver데렌올리버96 多罗莎Echeveria Derosa 워필드원더97 蒂凡尼Echeveria diffractens파르메소라98 多明戈Echeveria domingo도밍고99 多多Echeveria Dondo 돈도100 月影Echeveria elegans월영编号图片中文名称拉丁学名韩文名称101 法比奥拉Echeveria Fabiola파비올라102 菲奥娜Echeveria Fiona 피오나/피어리스103 飞云Echeveria flying cloud플라잉클라우드104 范女王Echeveria fun queen펀퀸105 德克拉Echeveria gibbiflora Decora데코라106 银武源Echeveria GINBUGEN은무원107 银红莲Echeveria GINKOUREN은홍련108 红豆Echeveria globuliflora빅토리아109 小红衣Echeveria globulosa글로블로사110 金辉Echeveria GOLDEN GLOW111 
蓝色天使Echeveria Graptoveria Fanfare 팡빠레112 绿翡翠Echeveria Green Emerald그린에메랄드113 群月冠Echeveria GUNGEKKAN군월관114 月影系Echeveria Hanatsukiyo화월야115 花车锦Echeveria holwayi호베이116 花车Echeveria hoveyi 호베이117 月影之宵Echeveria hughmillus휴밀러스118 冰河世纪Echeveria Ice Age아이스에이지119 伊利亚Echeveria Iria 이리아120 乔斯林的喜悦Echeveria Jocelyns Joy 조슬린조이121 卡托斯Echeveria kaltose122 迈达斯国王Echeveria King Midas123 雪莲Echeveria laui 라우이124 薰衣草Echeveria lavender hill라벤더힐125 白兔耳Echeveria leucotricha백토이126 丽娜莲Echeveria lilacina릴리시나127 露娜莲Echeveria lola 로라128 罗西马Echeveria longissima롱기시마129 罗西马杂交Echeveria longissima롱기시마교배종130 露西Echeveria Lucy 루시131 红稚莲Echeveria macdougallii홍치아132 巧克力方砖Echeveria Melaco 멜라코133 记忆Echeveria Memory 론에반스134 野玫瑰之精Echeveria mexensis ZALAGOSA애기들135 红爪Echeveria mexensis ZALAGOSA 검은가시자라고사136 黑爪Echeveria Mexensis Zaragosa흑자라고사137 墨西哥巨人Echeveria Mexican Giant자이언트138 墨西哥雪球Echeveria Mexican Snowball 크리스탈139 米纳斯Echeveria minas 미나스140 美尼月迫Echeveria Minima hyb미니케세르141minima/姬莲EcheveriaMinima 블루미니마/미니마142 麒麟座Echeveria Monocerotis모노케로티스143 梦露EcheveriaMonroe144 月亮仙精灵Echeveria moonfairy 문페어리145 月光女神Echeveria MoonGoddess 카르포르니카퀸146 摩氏石莲花Echeveria moranii 모라니147 妮可莎娜Echeverianicksana 니크셔나148 红司Echeverianodulosa 환엽홍사149 蜡牡丹Echeveria nuda누다군생150 猎户座EcheveriaOrion 오리온151 紫焰Echeveria paintedfrills 샤비홍152 皮氏蓝石莲Echeveria peacockii desmetiana 데스메치153 墨西哥蓝鸟Echeveria Pecockii 블루피코키154 彼得EcheveriaPeter 피터155 红粉佳人Echeveria Pretty in Pink 핑크프리티156 子持白莲Echeveriaprolifica 리틀장미157 Pachyphytodies群Echeveria Pulidonis 폴덴시스158 花月夜Echeveria pulidonis 황홀한연꽃159 雪锦星Echeveria pulvinataFrosty 프로스티160 大和锦Echeveria purpusorum 대화금161 彩虹Echeveriarainbow 레인보우162 彩虹锦Echeveriarainbow 웨스트레인보우163 拉姆雷特Echeveria ramillete 라밀레트철화164 冰莓群Echeveria Rasberry Ice 라즈베리아이스165 里加Echeveria Riga리가166 如贝拉EcheveriaRubella 루벨라167 红粉台阁Echeveria runyonii cv 카시즈168 特玉莲Echeveria runyoniiTopsy Turvy 흡엽옥접169 桑彻斯(月影) Echeveria Sanchez-mejoradae 산체스메조라데170 祝之松Echeveria scaphophylla축송171 玉蝶Echeveriasecunda 칠복수172 维纳斯Echeveria secundaforma vyrnesii 세쿤다바이네시173 锦司晃Echeveria setosaHybrid 금사황174 小蓝衣Echeveria setosa vdeminuta 룬데리175 青渚莲Echeveria setosa varminor 세토사176 抵园之舞Echeveriashaviana 샤비아나177 沙维娜Echeveriashaviana 핑크프릴178 沙漠之星Echeveria shavianaDesert Star 사막의179 晚霞之舞Echeveria shaviana Madre del Sur 마드레델슈180 雪莱EcheveriaShelley 쉘리181 爱斯诺EcheveriaSierra 씨에라182 银后Echeveria SilverQueen 실버183 恩西诺Echeveria sp ElEncino 엔시노184 霜之朝Echeveria sp SIMONOASA 서리의아침185Echeveria sp El Encino Hidalgo 엘엔시노186 贝飞达Echeveria sp Vista Hermosa,bifida 비피다187 立田锦X PachyveriaAlbocarinata 입전금188 剑司诺瓦스트릭티플로라189 蓝宝石섭코림보사/에쿠스190 钢叶莲서브리기다191 蜜糖슈가드192 酥皮鸭구미리193 TP티피194 艾格尼斯玫瑰트라문타나195 大和峰투루기다196 紫罗兰皇后바이올렛퀸197 白线화이트라인198 白王子화이트프린스199 白玫瑰화이트로즈200 大合美尼대화련201 杨金Echeveria yangjin 양진202 摩莎Echeveria Racemosa 라세모사203 纯乌木Echeveria agavoides ebony red 레드에보니204 卡罗拉Echeveria colorata 콜로라타205 芙蓉雪莲Echeveria cv. 'Laulindsayana' 먼로206 Pinky Echeveria cv. 
Pinky 핑키207 Echeveria decairn 디케인208 船长甘草Echeveria derenbergii Captain Hay 캡틴헤이209月影Echeveria elegans Potosina포토시나210星影(月影某种)Echeveria elegans potosina크리스탈211Echeveria puli-lindsayana프리린제212蜜糖EcheveriaGraessnerisyn/Nivalis니바리스213五十铃玉Fenestraria aurantiaca오십령옥214月美人Graptopetalum amethystinum아메치스\월미인215紫雾Graptopetalum Purple Haze홍용월216姬秋丽Graptopetalum mendozae멘도사217蓝豆Graptopetalum pachyphyllum"Bluebean"블루빈218银天女Graptopetalum rusbyi드워프219格林Graptoveria A GrimmOne에그린원220红葡萄GraptoveriaAmetum홍포도221厚叶旭鹤GraptoveriaBAINESII연봉222紫梦Graptoveria cv Purple Dream퍼플드림223黛比Graptoveria Debbie데비224姬胧月Graptoveria Gilva프리티225奥普琳娜Graptoveria Opalina오팔리나226山地玫瑰Greenoviadiplocycla-Agando그라노비아227日高Hylotelephium siboldii골든글로우228福兔耳Kalanchoe eriophylla백토이229白银之舞백은무230扇雀Kalanchoe rhombopilosa231黑兔耳Kalanchoe tomentosaf nigromarginatas흑토이232瑞典摩南群生Monanthes polyphylla모난데스233弗雷费尔Pachyphtum cv Frevel후레뉴234球美人Pachyphy Amazoness아마조네스235粉球美人Pachyphy Amazoness아마조네스236糖美人Pachyphytum Oviride오비리데237千代田之松Pachyphytum compactum천대전송철화238稻田姬Pachyphytumglutinicaule도전희239郡雀Pachyphytum hookeri군작240京美人Pachyphytumlongifolium경미인241东美人Pachyphytumpachyphtooides동미인242桃美人Pachyphytumcv mombuin아마조네스243千代田之松Pachypodium compactum천대전송244蓝黛莲Pachyveria glauca그리니245Pachyveria Elaine Reinelt.'FREVEL일레인246三日月美人Pchyphytum oviferum mikadukibijin247华丽风车pentandrum - v superbum펜탄드럼248红背椒草Peperomia claveolens레드페페249密叶莲Sedeveria Darley Dale달리데일250达利Sedeveria Darley Dale데일리데일251柳叶莲华Sedeveria Hummellii스노우제이드252蒂亚Sedeveria Letizia레티지아253马库斯Sedeveria Markus마커스254红宝石Sedeveria pink rubby핑클루비255黄丽Sedum adolphii황려256春萌Sedum Alice Evans춘맹257新玉坠Sedum burrito청옥/세둠부리토258凝脂莲Sedum clavatum라울259劳尔/天使之霖Sedum Clavatum라울260大姬星美人Sedum dasyphyllum cv Lilac Mound희성미인261绿龟之卵Sedum hernandezii녹귀란262丸叶松绿Sedum lucidum Obesum환엽송록263球松Sedum multiceps소송록264虹之玉Sedum rubrotinctum오로라265红霜Sedum spathulifolium은설266白霜sedum spathulifolium백설267珊瑚珠Sedum stahlii스탈리268天使之泪Sedum torereasei부사269春之奇迹Sedum versadense스프링윈더270蛛丝卷绢SempervivumArachnoideum거미줄바위솔271蓝月亮Senecio antandroi미공모272银月Senecio haworthii은월273蓝松Senecio serpens만보274立田凤Sinocrassula densirosulata입전봉275因地卡Sinocrassula indica인디카276滇石莲Sinocrassula yunnanensis사마로277立田锦X Pachyveria Albocarinata입전금278锦晃星Echeveria pulvinata279金钱木Portulaca molokiniensis280银星Graptoveria‘Silver Star’此文档是由网络收集并进行重新排版整理.word可编辑版本!。
DATASHEET
Trimble Indoor Mobile Mapping Solution (TIMMS)
THE OPTIMAL FUSION OF TECHNOLOGIES FOR CAPTURING SPATIAL DATA OF INDOOR AND GNSS-DENIED SPACES

TIMMS is a manually operated push-cart designed to accurately model interior spaces without accessing GPS. It consists of 3 core elements: LiDAR and camera systems engineered to work indoors in mobile mode, computers and electronics for completing data acquisition, and a data processing workflow for producing final 2D/3D maps and models. The models are "geo-located", meaning the real-world position of each area is known.

With TIMMS, a walk-through of an interior space delivers full 360 degree coverage. The spatial data is captured and georeferenced in real-time. Thousands of square feet are mapped in minutes, entire buildings in a single day.

TIMMS is ideal for applications such as situational awareness, emergency response, and creating accurate floor plans. All types of infrastructure can be mapped, even those extending over several city blocks:
• Plant and factory facilities
• High-rise office, residential, and government buildings
• Airports, train stations and other transportation facilities
• Music halls, theatres, auditoriums and other public event spaces
• Covered pedestrian concourses (above and below ground) with platforms, corridors, stair locations and ramps
• Underground mines and tunnels

YOUR BENEFITS
• High efficiency, accuracy and speed
• Lower data acquisition cost for as-builts
• Reduced infringement on operations
► No need for GNSS
► Little or no LiDAR shadowing
► Long-range LiDAR
► Self-contained
► Simple workflow
► Fully customizable
► Use survey control for precise georeferencing

PERFORMANCE
Onboard power: up to 4 hours without charge or swap; hot swappable for unlimited operational time
Data storage: 1 TB SSD
Operations: nominal data collection speed of 1 meter per second; maximum distance between position fixes of 100 meters
Typical field metrics: LiDAR point clouds - 1 cm relative-to-position accuracy*; productivity in excess of 250,000 square feet per day

PHYSICAL DIMENSIONS
Height with mast low: 173 cm
Height with mast high: 221 cm
Distance to wheel with mast low (front to back): 80 cm
Distance to wheel with mast high (front to back): 88 cm
Distance between wheels (outside surface of wheels): 51 cm
Weight: 109 lb or 49.5 kg

* rms derived by comparison of TIMMS with a static laser scan; results may vary according to building configuration and trajectory chosen.
* System performance may vary with scanner type and firmware version.
Published values are based on the X-130.

TIMMS COMPONENTS
Mobile Unit & Mast
TIMMS acquisition system:
• Inertial Measurement Unit (IMU)
• POS Computer System (PCS)
• LiDAR Control Systems (LCS)
One LiDAR; supported scanners include:
• Trimble TX-5
• FARO Focus X-130, X-330, S-70-A, S-150-A, S-350-A
One spherical camera (6-camera configuration):
• Field of View (FOV) >80% of full sphere
• 2 MegaPixel (MP) per camera
• Six (6) 3.3 mm focal length lenses
• 1 meter/second (up to 4 FPS)
One operator and logging computer
16 batteries (8 + 8 spare)
2 battery chargers

SOFTWARE COMPONENT
Realtime monitoring and control GUI
Post-processing suite

SYSTEM DELIVERABLE
Georeferenced trajectory in SBET format
Georeferenced point cloud in ASPRS LAS format
Georeferenced spherical imagery in JPEG format
Georeferenced raster 2D floorplan

USER SUPPLIED EQUIPMENT
PC for post processing:
• Windows 7 64-bit OS
• Minimum of 300 GB of disk
• 32 gigabytes of RAM required (64 recommended)

USER SUPPLIED SOFTWARE
Basic LiDAR processing tools; recommended functionality:
• LAS import compatible
• Visualization
• Clipping
• Raster to Vector tools (manual and/or automated)

Trimble Indoor Mobile Mapping Solution (TIMMS)
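Because the point cloud deliverable is in the ASPRS LAS format, it can be inspected with any LAS-capable tool. Below is a minimal sketch assuming the open-source laspy package and a hypothetical file name; the datasheet itself does not prescribe any particular software.

import laspy  # pip install laspy
import numpy as np

# Hypothetical deliverable file name; substitute the LAS file produced by the TIMMS workflow
las = laspy.read("timms_building_scan.las")

# x/y/z are already scaled to georeferenced coordinates (per the LAS header's scale and offset)
points = np.column_stack([las.x, las.y, las.z])
print("points:", points.shape)
print("extent (m):", points.max(axis=0) - points.min(axis=0))

The same array can then be handed to whichever visualization or raster-to-vector tooling the user supplies.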
"Among IT professionals surveyed...the most commonly-cited first-choice top priority for 2019 is 'delivering improved experiences for customers through new technology deployment.'"
- Econsultancy 2020 Digital Trends

Things IT will want to know.

What are the key benefits of Experience Manager from a cloud perspective?
Experience Manager offers midsize and enterprise companies all the benefits of a cloud offering, such as agility and scalability. Experience Manager simplifies and reduces the cost of updates, proactively monitors mission-critical service level availability and performance, minimises security threats and downtime, and uses auto-scaling to deliver optimal performance. With it, IT can take advantage of a fully configurable and extensible CI/CD pipeline to automate functional and load testing, code quality checks and the promotion of custom code from lower non-production environments (dev/stage) all the way to the production environment. Experience Manager API connectivity complements existing systems and external notification channels. Using Adobe's engineering best practices, Experience Manager conducts automated code inspection, testing and security validation in order to speed releases without compromising quality.

How does Experience Manager support cloud security?
Experience Manager operations are supported by a monitoring, reporting and alerting infrastructure that allows Adobe to proactively keep the service healthy. Various elements of the architecture are equipped with liveliness health checks and, if found to be unhealthy, are silently removed from the service and replaced with new ones. We have preconfigured the Experience Manager cloud service with security rules based on enterprise-tested best practices and security frameworks. If interested, your IT team can also review our whitepaper on Experience Manager security,1 as well as other documents on our security best practices2 and security checklists.3

How does Experience Manager ensure availability?
Through proactive monitoring and robust defense mechanisms, we guarantee high service level availability. Our customers face minimal to no interruption of service, even during maintenance operations such as patches, security updates or upgrades. These operations are fully automated in a way that doesn't result in interruption of service for practitioners, for either content management or cross-channel content delivery capabilities.

Will Experience Manager integrate with the different third-party services and tools we currently use?
Yes. Experience Manager has been designed and built as an open system to integrate with many third-party systems or services you might employ. We and our rich partner ecosystem provide several prebuilt connectors that make it simple to integrate with a wide variety of services, such as AWS, Salesforce, Facebook, Twitter, YouTube and others. Our integration framework, open APIs and SDKs further extend your ability to easily and effectively integrate with different CRMs, back-end databases, e-commerce engines and much more.

What programming or scripting languages will our developers need to know in order to build our site with Experience Manager?
Experience Manager is a Java-based environment. However, in terms of simply creating, editing and pushing out content, very little, if any, coding will be required. Typically, you won't need to code any on-page customisations since the Experience Manager interface can usually handle that.
For customisations that go beyond the interface or to build integrations that require coding, your developers will need to use Java and Java-based standards. With that in mind, Experience Manager doesn't put any limits on the look and feel of your site. You can build your page layouts however you choose, whether through the Experience Manager interface or the coding customisations that you need or prefer. More and more, IT uses JavaScript frameworks like Angular and React to develop high-performing web experiences. This process often leaves marketers unable to edit any content or experiences without asking IT. But with the Experience Manager single-page application editor, marketers can make edits on their own within the governance guardrails IT puts in place.

How do we migrate our existing content onto Experience Manager?
No migration of any kind is ever cut and dried, but the Experience Manager team has performed numerous successful migrations with customers all over the globe. Adobe has the experience, best practices, tools and service professionals to help make your migration as seamless and simple as possible. We will review the current setup and content structure of your environment and work with you to develop a migration plan that will work best for your circumstances. Your IT team can also review the Experience Manager Assets Migration Guide to get some insights on the migration process itself.4

Will Experience Manager work with our current authentication system?
IT will want to know if they have to re-create usernames and passwords for users of Experience Manager or if they can use the single sign-on authentication system they already have in place. The answer to this question is two-fold. First, content authors are required to use an Adobe ID, so that they can use single sign-on across the entire Adobe ecosystem. These activities will not work with your current authentication system. For everything other than content authoring, Experience Manager allows IT to leverage an authentication system of their choice through a variety of methods. When using a non-Adobe identity provider, Experience Manager supports SAML 2.0, LDAP, SSO, OAuth 1.0a and OAuth 2.0. While Experience Manager doesn't directly support OpenID, support for that is provided through various community projects. In addition, if they choose to use Adobe as the identity provider, Experience Manager supports basic authentication, forms-based authentication and token-based authentication.

How can we get up and running quickly with Experience Manager?
We've invested in optimising time to value for our customers at every stage, from implementation to onboarding to driving ROI. With our Blueprint framework, you can accelerate your time to go-live - with a functioning site that tracks relevant KPIs and runs personalisation activities from day one. This framework is based on best practices learned over years of successful Adobe implementations and sets you up with a strong and future-proof foundation. In addition, a streamlined onboarding experience, followed by a step-by-step guided learning journey, will help you and your team start off with the skills you need to get up and running fast.

Points marketing and IT should consider together.

How does Experience Manager fit into a headless CMS strategy?
In today's omnichannel world, discussions have heated up about the benefits of decoupling content management systems (CMS) from the front end in what is called a headless architecture.
One of its primary benefits is to make it easier for developers to deliver content to any desired channel. With that in mind, IT might suggest that since it has developed or plans to develop a headless CMS system, you don't need Experience Manager. But since headless environments make it more difficult for marketing to make needed updates, let alone manage the user experience, IT and marketing need to ask a few questions. How does marketing make changes when it needs to? How easy or fast is it for marketing to make small edits? Does IT want to rewrite or recompile code every time someone in marketing wants to add a new product offering or customer reference onto the site?

Experience Manager provides the balance you need by acting as a hybrid CMS. While Experience Manager gives you the choice to use a traditional, hybrid headless or pure headless approach, it's best used with a hybrid approach that supports channel-centric content management and provides headless CMS functionality at the same time. If you choose the headless-only option within Experience Manager, you can integrate single page applications (SPAs) and allow both marketers and developers to continue to work the way they prefer. Developers can choose to code in React, Angular or any other framework, and they can access the content repository directly with the Experience Manager Assets HTTP API to pull out content below the page level as JSON. And marketers can author and edit SPA content in the Experience Manager SPA editor. This means everyone can work on SPA content at the same time, which increases collaboration, reduces IT requests and speeds content creation and delivery. Additionally, marketers can author and edit content destined for headless delivery with Experience Manager Content Fragments and Experience Fragments. Content Fragments are channel agnostic and can be pulled using the Assets HTTP API. They allow marketers to self-serve edits for content on any channel - including IoT, in-venue screens, voice, chat and more. A minimal example of this kind of headless retrieval is sketched at the end of this section.

What content governance processes does Experience Manager have in place?
Experience Manager helps IT manage business processes and ensure content integrity with two types of enterprise-level workflow governance: out-of-the-box and customisable. So marketers can author and edit channel-agnostic content within the guardrails of governance policies that IT sets using the Workflow Editor. Out-of-the-box workflows include reviews, approvals, publishing and more. Programmatic workflows come out-of-the-box and include several automatic steps like asset tagging. IT can designate which content can be edited and which user groups (such as marketing, developers, legal and more) can access content.

How do we handle our need to scale?
The system constantly monitors the service, detects the need for extra capacity and scales accordingly. Irrespective of whether the traffic is expected or unpredictable, you can be assured that performance will be optimal for the end consumers.

1, 2 Adobe Experience Manager 6.4 Sites Security Development Best Practices
3 Adobe Experience Manager 6.5 Sites Security Checklist
4 Adobe Experience Manager 6.4 Assets Migration Guide
5 "The Business Impact of Investing in Experience," a commissioned study conducted by Forrester Consulting on behalf of Adobe, February 2018

Copyright © 2020 Adobe. All rights reserved. Adobe and the Adobe logo are either registered trademarks or trademarks of Adobe Inc. in the United States and/or other countries.
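As referenced in the headless discussion above, the sketch below shows one way a Content Fragment could be pulled as JSON through the Assets HTTP API. The host, fragment path and credentials are hypothetical placeholders, and the exact JSON structure varies by Experience Manager version and fragment model, so treat this as an outline rather than a definitive integration.

import requests

# Hypothetical author instance and fragment path; replace with values from your deployment
AEM_HOST = "https://author.example.com"
FRAGMENT_PATH = "/content/dam/my-site/offers/summer-sale"   # a Content Fragment stored under /content/dam

# The Assets HTTP API exposes assets (including Content Fragments) under /api/assets,
# mirroring the /content/dam tree and returning JSON representations.
url = f"{AEM_HOST}/api/assets{FRAGMENT_PATH.removeprefix('/content/dam')}.json"

resp = requests.get(url, auth=("api-user", "api-password"), timeout=10)   # placeholder credentials
resp.raise_for_status()
fragment = resp.json()

# For Content Fragments, element values are typically found under properties -> elements
# (the shape can differ between versions, so inspect the response for your instance).
elements = fragment.get("properties", {}).get("elements", {})
for name, element in elements.items():
    print(name, "=", element.get("value"))

A front end built in React, Angular or another framework would consume the same JSON, while marketers continue to edit the fragment in the Experience Manager authoring UI.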
Innovation in English Translation

Introduction
Translation plays a vital role in bridging the gap between languages and cultures. It enables communication and facilitates the exchange of ideas, knowledge, and information across different linguistic communities. As the world becomes increasingly globalized, the demand for quality translation is growing rapidly. In response to this demand, translation services have evolved, and innovative approaches to English translation have emerged. This document aims to explore the concept of innovation in English translation and highlight some of the innovative practices being adopted in the field.

Understanding Innovation in Translation
Innovation can be defined as the introduction of new ideas, methods, or practices that result in significant improvements or advancements. In the context of translation, innovation refers to the creative and novel approaches used to overcome linguistic and cultural barriers, enhance translation quality, and improve overall efficiency.

Innovation in English Translation Techniques
1. Neural Machine Translation (NMT):
Neural Machine Translation is a cutting-edge technique that uses artificial intelligence and deep learning models to produce high-quality translations. Unlike traditional rule-based or statistical approaches, NMT leverages neural networks to learn the patterns and structures of languages, resulting in more accurate and nuanced translations. This innovation has revolutionized the translation industry by significantly reducing errors and improving fluency in translated texts.

2. Crowdsourcing Translation:
Crowdsourcing is a collaborative approach that involves engaging a large group of people, typically through online platforms, to contribute to the translation process. This method harnesses the collective intelligence of the crowd, allowing for faster translation turnaround times and access to a larger pool of translators with varied expertise. Crowdsourcing promotes transparency, scalability, and cost-effectiveness, making it an innovative solution in English translation.

3. Post-Editing Machine Translation (PEMT):
Post-editing machine translation is a hybrid approach that combines the speed and efficiency of machine translation with the human touch of expert editors. With PEMT, machine-generated translations are initially produced and then edited by professional translators, ensuring the accuracy and fluency of the final output. This technique speeds up the translation process while maintaining high quality, making it an innovative solution in the translation industry.

4. Natural Language Processing (NLP):
Natural Language Processing is a subfield of artificial intelligence that focuses on the interaction between computers and human language. By using NLP techniques, translators can automate certain parts of the translation process, such as terminology extraction, text alignment, and quality evaluation. This automation not only saves time but also enhances consistency and accuracy in translation, making NLP a valuable innovation in English translation.

Innovation in Translation Tools
1. Translation Memory (TM) Software:
Translation memory software is an innovative tool that stores previously translated sentences, phrases, and segments in a database. When a similar sentence appears in a new translation project, the software suggests previously approved translations, allowing for consistent terminology and faster translation.
TM software enhances efficiency, reduces costs, and improves the overall quality of translations; a minimal sketch of such a lookup appears at the end of this document.

2. Computer-Assisted Translation (CAT) Tools:
CAT tools are computer software designed to assist translators by automating repetitive tasks and providing features like translation memory, terminology management, and glossary creation. These tools enhance translation productivity, consistency, and accuracy, ultimately resulting in higher-quality translations. Continual advancements in CAT tool technology contribute to innovation within the translation industry.

3. Cloud-Based Translation Platforms:
Cloud-based translation platforms allow for seamless collaboration among translators, project managers, and clients. These platforms provide a centralized space for file sharing, real-time collaboration, and project management, making the translation process more efficient and transparent. Cloud-based translation platforms have revolutionized the way translation projects are managed, ensuring greater accuracy and faster delivery of translations.

Conclusion
Innovation in English translation is essential for meeting the growing demands of a globalized world. The use of advanced techniques such as Neural Machine Translation, crowdsourcing, and post-editing machine translation, paired with innovative translation tools like translation memory software, CAT tools, and cloud-based translation platforms, has significantly improved the quality, efficiency, and accessibility of translated content. As technology advances, it is expected that further innovations will continue to shape the field of English translation, enabling effective communication and understanding across cultural and linguistic boundaries.
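To make the translation memory mechanism described above concrete, here is a minimal illustrative sketch (not any particular product's implementation): the memory stores previously approved segment pairs, and a new source segment is matched against it by string similarity so the closest earlier translation can be suggested. The sample segments and thresholds are made up for illustration.

from difflib import SequenceMatcher

# A tiny in-memory "translation memory": source segment -> approved translation
memory = {
    "The system requires a restart after installation.": "Das System muss nach der Installation neu gestartet werden.",
    "Click Save to apply your changes.": "Klicken Sie auf Speichern, um Ihre Änderungen zu übernehmen.",
}

def suggest(segment: str, tm: dict, threshold: float = 0.75):
    """Return (match score, stored source, stored translation) for the best fuzzy match, or None."""
    best = max(
        ((SequenceMatcher(None, segment.lower(), src.lower()).ratio(), src, tgt) for src, tgt in tm.items()),
        default=None,
    )
    return best if best and best[0] >= threshold else None

hit = suggest("Click Save to apply the changes.", memory)
if hit:
    score, source, translation = hit
    print(f"{score:.0%} match with: {source!r}")
    print("Suggested translation:", translation)

Commercial CAT tools refine this idea with segment alignment, terminology constraints and weighted match scoring, but the lookup principle is the same.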
VxBlock 1000 Converged Infrastructure
© 2019 Dell Inc. or its subsidiaries.

VxBlock 1000 Converged Infrastructure: Foundation for Mission-Critical Cloud

ESSENTIALS
o Consolidate, optimize and protect high-value workloads - benefiting from improved application performance and reduced downtime
o Deliver cloud operations - with a system designed to help you automate and proactively manage daily IT tasks
o Simplify life cycle management - with a turnkey system that dramatically reduces the time, effort and risk involved in patching and upgrades

Simplify IT with a turnkey system for workload consolidation
VxBlock is the proven leader in converged infrastructure, providing enterprises worldwide the amazing simplicity of a turnkey engineered system experience that allows them to focus on innovating rather than spending time on maintenance. The VxBlock System 1000 combines industry-leading technologies - including powerful Dell EMC storage and data protection options, Cisco UCS blade and rack servers, Cisco LAN and SAN networking, and VMware virtualization - into one fully integrated system. It's the converged infrastructure foundation for the Dell Technologies Cloud, leveraging its deep VMware integration to simplify automation of everything from daily infrastructure provisioning tasks to delivery of IaaS and SaaS. Why is that important? VxBlock 1000 takes the complexities out of component integration. It simplifies upgrades and daily operations, comes with converged management and a simplified path to a cloud operating model. All with single-call support. VxBlock business outcomes really shine with high-value applications like SAP, Oracle, Microsoft SQL and Epic, where "good enough" will not get the job done. With VxBlock, you can rely on the rich data services, high availability and data efficiency absolutely required to keep your mission-critical business up, running and protected at any scale.

Delivering real business results
Enterprises using Dell EMC VxBlock Systems report significantly better business outcomes, including more efficient IT operations, dramatically less unplanned downtime, and much faster upgrades and patches than with a DIY approach.

VxBlock 1000 Data Sheet and Specifications

VxBlock 1000 Overview
VxBlock 1000 redefines the converged infrastructure market, enabling you to free up resources, focus on innovation and accelerate IT transformation. Traditional converged infrastructure systems often require you to choose different systems for different applications' performance, capacity and data services needs. VxBlock 1000 breaks those boundaries by offering the industry's first converged system designed for all workloads in the modern data center with:
• Unprecedented choice to mix, share and adapt pools of market-leading storage, data protection and compute resources for all workloads to maximize performance and utilization.
• Converged management and automation that simplifies daily operations thanks to Dell EMC VxBlock Central software and its enriched integration with VMware vRealize Operations (vROps) and vRealize Orchestrator (vRO), providing a simplified path to a cloud operating model.
• Simplified life cycle management with a turnkey infrastructure that is created, managed, supported and sustained as one - including ongoing certified code upgrades.
• A future-ready design to ensure that your system is able to support next-generation technologies to address extreme performance, scalability and ongoing simplicity requirements.
\ In 2009, Dell EMC created the converged infrastructure (CI) market when we introduced the first Vblock System. Dell EMC continues to innovate in the CI market with the next-generation VxBlock System, the VxBlock 1000. The VxBlock 1000 is not a reference architecture and it’s not a bill-of-materials (BOM); it’s a fully integrated system that brings together leading technologies, including • Dell EMC Unity, Unity XT, XtremIO, PowerMax and Isilon storage options that you can mix and match • Dell EMC Avamar, Data Domain, NetWorker, RecoverPoint and VPLEX protection options • Cisco UCS B-Series and C-Series server options • Cisco Nexus LAN and MDS SAN switches • VMware (including vRealize, vSphere, NSX and vCenter)Delivering RealBusiness Results“Installing the converged VxBlock Systemallowed us to take a 10-year step forward intechnology with a single purchase. By usingthe VxBlock System to transform our datacenter, we now have the ability to deploysolutions in hours as opposed to weeks.”— Ryan Deppe, Network Operations Supervisor,Cianbro Corporation“With VxBlock 1000, we get one source ofsupport and always know we’re running thelatest gold-standard code stack. Dell EMCremoves so much of the time and risk ofmanaging IT. We love it.”— Darell Schueneman, Team Lead, CloudOperations, Plex Systems“The VxBlock has enabled us to maketremendous positive changes. Our users arethrilled with the IT team’s responsivenessand with the accelerated applicationperformance. It’s also changed therelationship between IT and the college.Business units now engage us to consultearly in their decision process. We’vebecome an integral partner in collegebusiness.”— Mark Wiseley, Senior Director of IT, PalmerCollege“We’re now 100 percent responsive to thebusiness.”— Michael Tomkins, Chief Technology Officer, FoxSports AustraliaUsing the Logical Configuration Survey (LCS) to customize integration and deployment options, all system elements are pre-integrated, pre-configured, and then tested and validated before shipping. Turnkey integration allows you to operate and manage your system as a single engineered product, rather than as individual, siloed components. Ongoing, component-level testing and qualification result in a drastically simplified update process. Converged Management and Automation VxBlock Central software provides a single unified interface and access point for converged infrastructure operations. It dramatically simplifies daily administration by providing enhanced system-level awareness, automation and analytics. VxBlock Central includes Dell EMC launch points to VMware vRealize Orchestrator (vRO) with workflows for automating daily operational tasks and to vRealize Operations (vROPs) for deep VxBlock analytics and simplified capacity management. Integrated Data Protection Dell EMC Data Protection for Converged Infrastructure simplifies backup, recovery and failover of your VxBlock 1000. Dell EMC offers the most advanced data deduplication, replication and data protection technologies for achieving your Recover Point Objective (RPO) and Recover Time Objective (RTO) requirements.Simplified Life Cycle Management Ongoing life cycle management, including interoperability test, security / patch management and component updates, is one of the cornerstones of VxBlock. This is accomplished through our unique Release Certification Matrix (RCM), where we validate interoperability to ensure your system’s health is optimized. 
We’ve invested thousands of hours in testing, validation and certification so you don’t have to. You get the peace of mind of knowing that Dell EMC delivers full life cycle assurance with every VxBlock 1000.

Support and Services
Dell EMC delivers fully integrated, 24/7 support with single-call support. There’s never any finger-pointing between vendors, and you can always rely on our fully cross-trained team for a fast resolution to any problem. Dell EMC’s portfolio of services (including deployment services, migration services and residency services) accelerates speed of deployment and integration into your IT environment and minimizes downtime by ensuring your software and hardware remain up to date throughout the product life cycle.

Unprecedented Choice of All-Flash Storage
XtremIO: performance with in-line, all-the-time data reduction. PowerMax: mission-critical AFA with NVMe, best performance and lowest latency. Isilon Gen 6: unstructured data with linear scalability. Unity and Unity XT: midrange unified storage with proven reliability, as AFA or a cost-effective hybrid option.

VxBlock 1000 Support Specification Summary
Component | Details
COMPUTE (mixing blade servers and rack servers in one system is supported) | Chassis: Cisco UCS 5108. Cisco UCS B-Series blade servers: B200 M5, B480 M5. Cisco UCS C-Series rack servers: C220 M5, C240 M5, C480 M5. Cisco Fabric Extenders (FEX) and IOM: Nexus 2232PP, Nexus 2348UPQ, UCS 2204XP, UCS 2208XP, UCS 2304XP. Cisco Fabric Interconnect (FI): Cisco UCS 6454, 6332-16UP. Cisco UCS Virtual Interface Card (VIC): 1340, 1380, 1385, 1387, 1440, 1480, 1455, 1457, 1495, 1497
MAXIMUM NUMBER OF SERVERS PER SYSTEM | Cisco chassis: 88. Cisco blade servers: up to 616. Cisco rack mount servers: up to 1,120
NETWORKING | LAN: Cisco Nexus 9336C-FX2. SAN: Cisco MDS 9148T, 9396T, 9148S, 9396S, 9706, 9710. Management connectivity: Cisco Nexus 31108TC-V, Nexus 9336C-FX2
STORAGE (mixing multiple storage types in one system is supported) | Dell EMC Unity All Flash 350F, 450F, 550F, 650F; Unity Hybrid 300, 400, 500, 600; Unity XT 380/380F, 480/480F, 680/680F, 880/880F. PowerMax 2000 and 8000. VMAX All Flash 250F, 950F. XtremIO X2-S, X2-R. Isilon All Flash, Hybrid and Archival: F800, H600, H500, H400, A200, A2000
VIRTUALIZATION | VMware vSphere Enterprise Plus (includes VDS), NSX, ESXi, vCenter Server. Note: bare-metal deployments are also supported
DATA PROTECTION | Integrated Backup, Integrated Replication, Integrated Business Continuity. Dell EMC: Avamar, NetWorker, Data Protection Search, Data Protection Advisor, Data Protection Central, CloudBoost, RecoverPoint and RP4VM, Data Domain, Data Domain Virtual Edition, Data Domain Cloud Disaster Recovery, Cyber Recovery, VPLEX. VMware: Site Recovery Manager
SYSTEM MANAGEMENT | Compute: AMP-VX for multi-system management includes 4–8 PowerEdge servers, VMware vSAN and Log Insight, and Avamar Virtual Edition integrated with Data Domain 6300 for protection of the AMP-VX; AMP-3S for single-system management includes 2–16 Cisco rack-mount C220 M5 servers and a Dell EMC Unity 300 hybrid storage array. Software: core management software includes VMware vSphere and vCenter, Dell EMC VxBlock Central, Dell EMC Secure Remote Support (SRS), Dell EMC PowerPath and element managers
CABINET | Intelligent Physical Cabinet Solution from Dell EMC
VoxelNet:End-to-End Learning for Point Cloud Based3D Object DetectionYin ZhouApple Inc****************Oncel TuzelApple Inc****************AbstractAccurate detection of objects in3D point clouds is a central problem in many applications,such as autonomous navigation,housekeeping robots,and augmented/virtual re-ality.To interface a highly sparse LiDAR point cloud with a region proposal network(RPN),most existing efforts have focused on hand-crafted feature representations,for exam-ple,a bird’s eye view projection.In this work,we remove the need of manual feature engineering for3D point clouds and propose VoxelNet,a generic3D detection network that unifies feature extraction and bounding box prediction into a single stage,end-to-end trainable deep network.Specifi-cally,VoxelNet divides a point cloud into equally spaced3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly in-troduced voxel feature encoding(VFE)layer.In this way, the point cloud is encoded as a descriptive volumetric rep-resentation,which is then connected to a RPN to generate detections.Experiments on the KITTI car detection bench-mark show that VoxelNet outperforms the state-of-the-art LiDAR based3D detection methods by a large margin.Fur-thermore,our network learns an effective discriminative representation of objects with various geometries,leading to encouraging results in3D detection of pedestrians and cyclists,based on only LiDAR.1.IntroductionPoint cloud based3D object detection is an important component of a variety of real-world applications,such as autonomous navigation[11,14],housekeeping robots[26], and augmented/virtual reality[27].Compared to image-based detection,LiDAR provides reliable depth informa-tion that can be used to accurately localize objects and characterize their shapes[21,5].However,unlike im-ages,LiDAR point clouds are sparse and have highly vari-able point density,due to factors such as non-uniform sampling of the3D space,effective range of the sensors, occlusion,and the relative pose.To handle these chal-lenges,many approaches manually crafted featurerepresen-Figure1.V oxelNet directly operates on the raw point cloud(no need for feature engineering)and produces the3D detection re-sults using a single end-to-end trainable network.tations for point clouds that are tuned for3D object detec-tion.Several methods project point clouds into a perspec-tive view and apply image-based feature extraction tech-niques[28,15,22].Other approaches rasterize point clouds into a3D voxel grid and encode each voxel with hand-crafted features[41,9,37,38,21,5].However,these man-ual design choices introduce an information bottleneck that prevents these approaches from effectively exploiting3D shape information and the required invariances for the de-tection task.A major breakthrough in recognition[20]and detection[13]tasks on images was due to moving from hand-crafted features to machine-learned features.Recently,Qi et al.[29]proposed PointNet,an end-to-end deep neural network that learns point-wise features di-rectly from point clouds.This approach demonstrated im-pressive results on3D object recognition,3D object part segmentation,and point-wise semantic segmentation tasks.In[30],an improved version of PointNet was introduced which enabled the network to learn local structures at dif-ferent scales.To achieve satisfactory results,these two ap-proaches trained feature transformer networks on all input points(∼1k points).Since typical point clouds obtained 
using LiDARs contain∼100k points,training the architec-1Figure2.V oxelNet architecture.The feature learning network takes a raw point cloud as input,partitions the space into voxels,and transforms points within each voxel to a vector representation characterizing the shape information.The space is represented as a sparse 4D tensor.The convolutional middle layers processes the4D tensor to aggregate spatial context.Finally,a RPN generates the3D detection.tures as in[29,30]results in high computational and mem-ory requirements.Scaling up3D feature learning networks to orders of magnitude more points and to3D detection tasks are the main challenges that we address in this paper.Region proposal network(RPN)[32]is a highly opti-mized algorithm for efficient object detection[17,5,31, 24].However,this approach requires data to be dense and organized in a tensor structure(e.g.image,video)which is not the case for typical LiDAR point clouds.In this pa-per,we close the gap between point set feature learning and RPN for3D detection task.We present V oxelNet,a generic3D detection framework that simultaneously learns a discriminative feature represen-tation from point clouds and predicts accurate3D bounding boxes,in an end-to-end fashion,as shown in Figure2.We design a novel voxel feature encoding(VFE)layer,which enables inter-point interaction within a voxel,by combin-ing point-wise features with a locally aggregated feature. Stacking multiple VFE layers allows learning complex fea-tures for characterizing local3D shape information.Specif-ically,V oxelNet divides the point cloud into equally spaced 3D voxels,encodes each voxel via stacked VFE layers,and then3D convolution further aggregates local voxel features, transforming the point cloud into a high-dimensional volu-metric representation.Finally,a RPN consumes the vol-umetric representation and yields the detection result.This efficient algorithm benefits both from the sparse point struc-ture and efficient parallel processing on the voxel grid.We evaluate V oxelNet on the bird’s eye view detection and the full3D detection tasks,provided by the KITTI benchmark[11].Experimental results show that V oxelNet outperforms the state-of-the-art LiDAR based3D detection methods by a large margin.We also demonstrate that V oxel-Net achieves highly encouraging results in detecting pedes-trians and cyclists from LiDAR point cloud.1.1.Related WorkRapid development of3D sensor technology has moti-vated researchers to develop efficient representations to de-tect and localize objects in point clouds.Some of the earlier methods for feature representation are[39,8,7,19,40,33, 6,25,1,34,2].These hand-crafted features yield satisfac-tory results when rich and detailed3D shape information is available.However their inability to adapt to more complex shapes and scenes,and learn required invariances from data resulted in limited success for uncontrolled scenarios such as autonomous navigation.Given that images provide detailed texture information, many algorithms infered the3D bounding boxes from2D images[4,3,42,43,44,36].However,the accuracy of image-based3D detection approaches are bounded by the accuracy of the depth estimation.Several LIDAR based3D object detection techniques utilize a voxel grid representation.[41,9]encode each nonempty voxel with6statistical quantities that are de-rived from all the points contained within the voxel.[37] fuses multiple local statistics to represent each voxel.[38] computes the truncated signed distance on the voxel grid.[21]uses 
binary encoding for the 3D voxel grid. [5] introduces a multi-view representation for a LiDAR point cloud by computing a multi-channel feature map in the bird's eye view and the cylindrical coordinates in the frontal view. Several other studies project point clouds onto a perspective view and then use image-based feature encoding schemes [28, 15, 22].

There are also several multi-modal fusion methods that combine images and LiDAR to improve detection accuracy [10, 16, 5]. These methods provide improved performance compared to LiDAR-only 3D detection, particularly for small objects (pedestrians, cyclists) or when the objects are far, since cameras provide an order of magnitude more measurements than LiDAR. However, the need for an additional camera that is time synchronized and calibrated with the LiDAR restricts their use and makes the solution more sensitive to sensor failure modes. In this work we focus on LiDAR-only detection.

1.2. Contributions
• We propose a novel end-to-end trainable deep architecture for point-cloud-based 3D detection, VoxelNet, that directly operates on sparse 3D points and avoids information bottlenecks introduced by manual feature engineering.
• We present an efficient method to implement VoxelNet which benefits both from the sparse point structure and efficient parallel processing on the voxel grid.
• We conduct experiments on the KITTI benchmark and show that VoxelNet produces state-of-the-art results in LiDAR-based car, pedestrian, and cyclist detection benchmarks.

2. VoxelNet
In this section we explain the architecture of VoxelNet, the loss function used for training, and an efficient algorithm to implement the network.

2.1. VoxelNet Architecture
The proposed VoxelNet consists of three functional blocks: (1) feature learning network, (2) convolutional middle layers, and (3) region proposal network [32], as illustrated in Figure 2. We provide a detailed introduction of VoxelNet in the following sections.

2.1.1 Feature Learning Network
Voxel Partition. Given a point cloud, we subdivide the 3D space into equally spaced voxels as shown in Figure 2. Suppose the point cloud encompasses 3D space with range D, H, W along the Z, Y, X axes respectively. We define each voxel of size vD, vH, and vW accordingly. The resulting 3D voxel grid is of size D' = D/vD, H' = H/vH, W' = W/vW. Here, for simplicity, we assume D, H, W are multiples of vD, vH, vW.

Grouping. We group the points according to the voxel they reside in. Due to factors such as distance, occlusion, the object's relative pose, and non-uniform sampling, the LiDAR point cloud is sparse and has highly variable point density throughout the space. Therefore, after grouping, a voxel will contain a variable number of points. An illustration is shown in Figure 2, where Voxel-1 has significantly more points than Voxel-2 and Voxel-4, while Voxel-3 contains no point.

(Figure 3. Voxel feature encoding layer: point-wise input → fully connected neural net → point-wise features → element-wise max-pool → locally aggregated feature → point-wise concatenated features.)

Random Sampling. Typically a high-definition LiDAR point cloud is composed of ~100k points. Directly processing all the points not only imposes increased memory/efficiency burdens on the computing platform, but the highly variable point density throughout the space might also bias the detection. To this end, we randomly sample a fixed number, T, of points from those voxels containing more than T points. This sampling strategy has two purposes: (1) computational savings (see Section 2.3 for details); and (2) it decreases the imbalance of points between the voxels, which
reduces the sampling bias,and adds more variation to training.Stacked Voxel Feature Encoding The key innovation is the chain of VFE layers.For simplicity,Figure2illustrates the hierarchical feature encoding process for one voxel. Without loss of generality,we use VFE Layer-1to describe the details in the following paragraph.Figure3shows the architecture for VFE Layer-1.Denote V={p i=[x i,y i,z i,r i]T∈R4}i=1...t as a non-empty voxel containing t≤T LiDAR points,where p i contains XYZ coordinates for the i-th point and r i is the received reflectance.Wefirst compute the local mean as the centroid of all the points in V,denoted as(v x,v y,v z). Then we augment each point p i with the relative offset w.r.t. the centroid and obtain the input feature set V in={ˆp i= [x i,y i,z i,r i,x i−v x,y i−v y,z i−v z]T∈R7}i=1...t.Next, eachˆp i is transformed through the fully connected network (FCN)into a feature space,where we can aggregate in-formation from the point features f i∈R m to encode the shape of the surface contained within the voxel.The FCN is composed of a linear layer,a batch normalization(BN) layer,and a rectified linear unit(ReLU)layer.After obtain-ing point-wise feature representations,we use element-wise MaxPooling across all f i associated to V to get the locally aggregated feature˜f∈R m for V.Finally,we augmenteach f i with˜f to form the point-wise concatenated featureas f outi =[f T i,˜f T]T∈R2m.Thus we obtain the outputfeature set V out={f outi }i...t.All non-empty voxels areencoded in the same way and they share the same set of parameters in FCN.We use VFE-i(c in,c out)to represent the i-th VFE layer that transforms input features of dimension c in into output features of dimension c out.The linear layer learns a ma-trix of size c in×(c out/2),and the point-wise concatenation yields the output of dimension c out.Because the output feature combines both point-wise features and locally aggregated feature,stacking VFE lay-ers encodes point interactions within a voxel and enables thefinal feature representation to learn descriptive shape information.The voxel-wise feature is obtained by trans-forming the output of VFE-n into R C via FCN and apply-ing element-wise Maxpool where C is the dimension of the voxel-wise feature,as shown in Figure2.Sparse Tensor Representation By processing only the non-empty voxels,we obtain a list of voxel features,each uniquely associated to the spatial coordinates of a particu-lar non-empty voxel.The obtained list of voxel-wise fea-tures can be represented as a sparse4D tensor,of size C×D ×H ×W as shown in Figure2.Although the point cloud contains∼100k points,more than90%of vox-els typically are empty.Representing non-empty voxel fea-tures as a sparse tensor greatly reduces the memory usage and computation cost during backpropagation,and it is a critical step in our efficient implementation.2.1.2Convolutional Middle LayersWe use Conv M D(c in,c out,k,s,p)to represent an M-dimensional convolution operator where c in and c out are the number of input and output channels,k,s,and p are the M-dimensional vectors corresponding to kernel size,stride size and padding size respectively.When the size across the M-dimensions are the same,we use a scalar to represent the size e.g.k for k=(k,k,k).Each convolutional middle layer applies3D convolution,BN layer,and ReLU layer sequentially.The convolutional middle layers aggregate voxel-wise features within a pro-gressively expanding receptivefield,adding more context to the shape description.The detailed sizes of 
the filters in the convolutional middle layers are explained in Section 3.

2.1.3 Region Proposal Network
Recently, region proposal networks [32] have become an important building block of top-performing object detection frameworks [38, 5, 23]. In this work, we make several key modifications to the RPN architecture proposed in [32], and combine it with the feature learning network and convolutional middle layers to form an end-to-end trainable pipeline.

The input to our RPN is the feature map provided by the convolutional middle layers. The architecture of this network is illustrated in Figure 4. The network has three blocks of fully convolutional layers. The first layer of each block downsamples the feature map by half via a convolution with a stride size of 2, followed by a sequence of convolutions of stride 1 (×q means q applications of the filter). After each convolution layer, BN and ReLU operations are applied. We then upsample the output of every block to a fixed size and concatenate to construct the high resolution feature map. Finally, this feature map is mapped to the desired learning targets: (1) a probability score map and (2) a regression map.

2.2. Loss Function
Let $\{a_i^{\mathrm{pos}}\}_{i=1\dots N_{\mathrm{pos}}}$ be the set of $N_{\mathrm{pos}}$ positive anchors and $\{a_j^{\mathrm{neg}}\}_{j=1\dots N_{\mathrm{neg}}}$ be the set of $N_{\mathrm{neg}}$ negative anchors. We parameterize a 3D ground truth box as $(x_c^g, y_c^g, z_c^g, l^g, w^g, h^g, \theta^g)$, where $x_c^g, y_c^g, z_c^g$ represent the center location, $l^g, w^g, h^g$ are the length, width, and height of the box, and $\theta^g$ is the yaw rotation around the Z-axis. To retrieve the ground truth box from a matching positive anchor parameterized as $(x_c^a, y_c^a, z_c^a, l^a, w^a, h^a, \theta^a)$, we define the residual vector $u^* \in \mathbb{R}^7$ containing the 7 regression targets corresponding to the center location $\Delta x, \Delta y, \Delta z$, the three dimensions $\Delta l, \Delta w, \Delta h$, and the rotation $\Delta\theta$, which are computed as:

$$\Delta x = \frac{x_c^g - x_c^a}{d^a}, \quad \Delta y = \frac{y_c^g - y_c^a}{d^a}, \quad \Delta z = \frac{z_c^g - z_c^a}{h^a},$$
$$\Delta l = \log\!\left(\frac{l^g}{l^a}\right), \quad \Delta w = \log\!\left(\frac{w^g}{w^a}\right), \quad \Delta h = \log\!\left(\frac{h^g}{h^a}\right), \quad \Delta\theta = \theta^g - \theta^a \qquad (1)$$

where $d^a = \sqrt{(l^a)^2 + (w^a)^2}$ is the diagonal of the base of the anchor box. Here, we aim to directly estimate the oriented 3D box and normalize $\Delta x$ and $\Delta y$ homogeneously with the diagonal $d^a$, which is different from [32, 38, 22, 21, 4, 3, 5]. We define the loss function as follows:

$$L = \alpha \frac{1}{N_{\mathrm{pos}}} \sum_i L_{\mathrm{cls}}(p_i^{\mathrm{pos}}, 1) + \beta \frac{1}{N_{\mathrm{neg}}} \sum_j L_{\mathrm{cls}}(p_j^{\mathrm{neg}}, 0) + \frac{1}{N_{\mathrm{pos}}} \sum_i L_{\mathrm{reg}}(u_i, u_i^*) \qquad (2)$$

where $p_i^{\mathrm{pos}}$ and $p_j^{\mathrm{neg}}$ represent the softmax output for positive anchor $a_i^{\mathrm{pos}}$ and negative anchor $a_j^{\mathrm{neg}}$ respectively, while $u_i \in \mathbb{R}^7$ and $u_i^* \in \mathbb{R}^7$ are the regression output and ground truth for positive anchor $a_i^{\mathrm{pos}}$. The first two terms are the normalized classification loss for $\{a_i^{\mathrm{pos}}\}$ and $\{a_j^{\mathrm{neg}}\}$, where $L_{\mathrm{cls}}$ stands for the binary cross entropy loss and $\alpha, \beta$ are positive constants balancing the relative importance. The last term $L_{\mathrm{reg}}$ is the regression loss, where we use the SmoothL1 function [12, 32].

2.3. Efficient Implementation
GPUs are optimized for processing dense tensor structures. The problem with working directly with the point cloud is that the points are sparsely distributed across space and each voxel has a variable number of points. We devised a method that converts the point cloud into a dense tensor structure where stacked VFE operations can be processed in parallel across points and voxels.

(Figure 5. Illustration of the efficient implementation: a voxel input feature buffer (K × T × 7) and a voxel coordinate buffer are built in one pass over the point cloud; stacked VFE layers produce K × C voxel-wise features, which are copied back to a sparse tensor using the stored coordinates.)

The method is summarized in Figure 5. We initialize a K × T × 7 dimensional tensor structure to store the voxel input feature buffer
where K is the maximum number of non-empty voxels,T is the maximum number of points per voxel,and 7is the input encoding dimension for each point.The points are randomized before processing.For each point in the point cloud,we check if the corresponding voxel already exists.This lookup operation is done effi-ciently in O (1)using a hash table where the voxel coordi-nate is used as the hash key.If the voxel is already initial-ized we insert the point to the voxel location if there are less than T points,otherwise the point is ignored.If the voxel is not initialized,we initialize a new voxel,store its coordi-nate in the voxel coordinate buffer,and insert the point to this voxel location.The voxel input feature and coordinate buffers can be constructed via a single pass over the point list,therefore its complexity is O (n ).To further improve the memory/compute efficiency it is possible to only store a limited number of voxels (K )and ignore points coming from voxels with few points.After the voxel input buffer is constructed,the stacked VFE only involves point level and voxel level dense oper-ations which can be computed on a GPU in parallel.Note that,after concatenation operations in VFE,we reset the features corresponding to empty points to zero such that they do not affect the computed voxel features.Finally,using the stored coordinate buffer we reorganize the com-puted sparse voxel-wise structures to the dense voxel grid.The following convolutional middle layers and RPN oper-ations work on a dense voxel grid which can be efficiently implemented on a GPU.3.Training DetailsIn this section,we explain the implementation details of the V oxelNet and the training procedure.work DetailsOur experimental setup is based on the LiDAR specifi-cations of the KITTI dataset [11].Car Detection For this task,we consider point clouds within the range of [−3,1]×[−40,40]×[0,70.4]meters along Z,Y ,X axis respectively.Points that are projected outside of image boundaries are removed [5].We choose a voxel size of v D =0.4,v H =0.2,v W =0.2meters,which leads to D =10,H =400,W =352.We set T =35as the maximum number of randomly sam-pled points in each non-empty voxel.We use two VFE layers VFE-1(7,32)and VFE-2(32,128).The final FCN maps VFE-2output to R 128.Thus our feature learning net generates a sparse tensor of shape 128×10×400×352.To aggregate voxel-wise features,we employ three convo-lution middle layers sequentially as Conv3D(128,64,3,(2,1,1),(1,1,1)),Conv3D(64,64,3,(1,1,1),(0,1,1)),andConv3D(64,64,3,(2,1,1),(1,1,1)),which yields a4D ten-sor of size64×2×400×352.After reshaping,the input to RPN is a feature map of size128×400×352,where the dimensions correspond to channel,height,and width of the3D tensor.Figure4illustrates the detailed network ar-chitecture for this task.Unlike[5],we use only one anchor size,l a=3.9,w a=1.6,h a=1.56meters,centered at z a c=−1.0meters with two rotations,0and90degrees. 
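The stacked VFE encoding described above, and the VFE-1(7, 32) / VFE-2(32, 128) configuration used for car detection, can be sketched in a few lines of PyTorch. This is a minimal illustration written for this document, not the authors' released code; the tensor layout (a K × T × c_in voxel buffer with a padding mask) and all shapes are assumptions of the sketch.

```python
# Sketch of one voxel feature encoding (VFE) layer: per-point FCN (linear + BN + ReLU),
# element-wise max-pooling over the points of each voxel, and concatenation of the
# pooled feature back onto every point of that voxel.
import torch
import torch.nn as nn

class VFELayer(nn.Module):
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.units = c_out // 2            # linear layer maps to c_out/2 so that concat gives c_out
        self.linear = nn.Linear(c_in, self.units)
        self.bn = nn.BatchNorm1d(self.units)

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x: (K, T, c_in) voxel input buffer; mask: (K, T), 1 for real points, 0 for padding
        k, t, _ = x.shape
        pointwise = torch.relu(self.bn(self.linear(x.view(k * t, -1))).view(k, t, self.units))
        pointwise = pointwise * mask.unsqueeze(-1)          # zero out padded points
        aggregated, _ = pointwise.max(dim=1, keepdim=True)  # (K, 1, units) locally aggregated feature
        out = torch.cat([pointwise, aggregated.expand(-1, t, -1)], dim=-1)
        return out * mask.unsqueeze(-1)                     # (K, T, c_out)

# Example configuration from the car-detection setup described above.
vfe1, vfe2 = VFELayer(7, 32), VFELayer(32, 128)
x = torch.randn(4, 35, 7)                       # K=4 voxels, T=35 points, 7 input features
mask = (torch.rand(4, 35) > 0.3).float()
features = vfe2(vfe1(x, mask), mask)            # (4, 35, 128)
```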
Our anchor matching criteria is as follows:An anchor is considered as positive if it has the highest Intersection over Union(IoU)with a ground truth or its IoU with ground truth is above0.6(in bird’s eye view).An anchor is considered as negative if the IoU between it and all ground truth boxes is less than0.45.We treat anchors as don’t care if they have 0.45≤IoU≤0.6with any ground truth.We setα=1.5 andβ=1in Eqn.2.Pedestrian and Cyclist Detection The input range1is [−3,1]×[−20,20]×[0,48]meters along Z,Y,X axis re-spectively.We use the same voxel size as for car detection, which yields D=10,H=200,W=240.We set T=45 in order to obtain more LiDAR points for better capturing shape information.The feature learning network and con-volutional middle layers are identical to the networks used in the car detection task.For the RPN,we make one mod-ification to block1in Figure4by changing the stride size in thefirst2D convolution from2to1.This allowsfiner resolution in anchor matching,which is necessary for de-tecting pedestrians and cyclists.We use anchor size l a= 0.8,w a=0.6,h a=1.73meters centered at z a c=−0.6 meters with0and90degrees rotation for pedestrian detec-tion and use anchor size l a=1.76,w a=0.6,h a=1.73 meters centered at z a c=−0.6with0and90degrees rota-tion for cyclist detection.The specific anchor matching cri-teria is as follows:We assign an anchor as postive if it has the highest IoU with a ground truth,or its IoU with ground truth is above0.5.An anchor is considered as negative if its IoU with every ground truth is less than0.35.For anchors having0.35≤IoU≤0.5with any ground truth,we treat them as don’t care.During training,we use stochastic gradient descent (SGD)with learning rate0.01for thefirst150epochs and decrease the learning rate to0.001for the last10epochs. We use a batchsize of16point clouds.3.2.Data AugmentationWith less than4000training point clouds,training our network from scratch will inevitably suffer from overfitting. To reduce this issue,we introduce three different forms of data augmentation.The augmented training data are gener-ated on-the-fly without the need to be stored on disk[20].1Our empirical observation suggests that beyond this range,LiDAR returns from pedestrians and cyclists become very sparse and therefore detection results will be unreliable.Define set M={p i=[x i,y i,z i,r i]T∈R4}i=1,...,N as the whole point cloud,consisting of N points.We parame-terize a3D bouding box b i as(x c,y c,z c,l,w,h,θ),where x c,y c,z c are center locations,l,w,h are length,width, height,andθis the yaw rotation around Z-axis.We de-fineΩi={p|x∈[x c−l/2,x c+l/2],y∈[y c−w/2,y c+ w/2],z∈[z c−h/2,z c+h/2],p∈M}as the set con-taining all LiDAR points within b i,where p=[x,y,z,r] denotes a particular LiDAR point in the whole set M.Thefirst form of data augmentation applies perturbation independently to each ground truth3D bounding box to-gether with those LiDAR points within the box.Specifi-cally,around Z-axis we rotate b i and the associatedΩi with respect to(x c,y c,z c)by a uniformally distributed random variable∆θ∈[−π/10,+π/10].Then we add a translation (∆x,∆y,∆z)to the XYZ components of b i and to each point inΩi,where∆x,∆y,∆z are drawn independently from a Gaussian distribution with mean zero and standard deviation1.0.To avoid physically impossible outcomes,we perform a collision test between any two boxes after the per-turbation and revert to the original if a collision is detected. 
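As an illustration of the first augmentation form described just above, the sketch below perturbs one ground-truth box together with its points. It is not the authors' implementation; the box layout [xc, yc, zc, l, w, h, theta] is an assumption of this sketch, and the bird's-eye-view collision test that would revert an overlapping perturbation is deliberately omitted.

```python
# Per-box augmentation sketch: rotate the box and its points around the Z-axis about
# the box centre by a uniform angle in [-pi/10, pi/10], then translate both by a
# Gaussian offset with standard deviation 1.0 (as in the text above).
import numpy as np

def rotate_z(points: np.ndarray, angle: float, center: np.ndarray) -> np.ndarray:
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (points - center) @ rot.T + center

def perturb_box(points_in_box: np.ndarray, box: np.ndarray):
    # box = [xc, yc, zc, l, w, h, theta]; points_in_box has shape (N, 3)
    angle = np.random.uniform(-np.pi / 10, np.pi / 10)
    shift = np.random.normal(0.0, 1.0, size=3)
    center = box[:3].copy()
    new_points = rotate_z(points_in_box, angle, center) + shift
    new_box = box.copy()
    new_box[:3] += shift
    new_box[6] += angle
    # A full pipeline would run a collision test against the other boxes here and
    # revert to the original box/points if an overlap is detected.
    return new_points, new_box
```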
Since the perturbation is applied to each ground truth box and the associated LiDAR points independently, the network is able to learn from substantially more variations than from the original training data.

Secondly, we apply global scaling to all ground truth boxes b_i and to the whole point cloud M. Specifically, we multiply the XYZ coordinates and the three dimensions of each b_i, and the XYZ coordinates of all points in M, with a random variable drawn from the uniform distribution [0.95, 1.05]. Introducing global scale augmentation improves robustness of the network for detecting objects with various sizes and distances, as shown in image-based classification [35, 18] and detection tasks [12, 17].

Finally, we apply global rotation to all ground truth boxes b_i and to the whole point cloud M. The rotation is applied along the Z-axis and around (0, 0, 0). The global rotation offset is determined by sampling from the uniform distribution [−π/4, +π/4]. By rotating the entire point cloud, we simulate the vehicle making a turn.

4. Experiments
We evaluate VoxelNet on the KITTI 3D object detection benchmark [11], which contains 7,481 training images/point clouds and 7,518 test images/point clouds, covering three categories: Car, Pedestrian, and Cyclist. For each class, detection outcomes are evaluated based on three difficulty levels: easy, moderate, and hard, which are determined according to the object size, occlusion state, and truncation level. Since the ground truth for the test set is not available and access to the test server is limited, we conduct comprehensive evaluation using the protocol described in [4, 3, 5] and subdivide the training data into a training set and a validation set, which results in 3,712 data samples for training and 3,769 data samples for validation. The split avoids samples from the same sequence being included in both the training and the validation set [3]. Finally, we also present the test results using the KITTI server.

Table 1. Performance comparison in bird's eye view detection: average precision (in %) on the KITTI validation set.
Method | Modality | Car (Easy / Moderate / Hard) | Pedestrian (Easy / Moderate / Hard) | Cyclist (Easy / Moderate / Hard)
Mono3D [3] | Mono | 5.22 / 5.19 / 4.13 | N/A | N/A
3DOP [4] | Stereo | 12.63 / 9.49 / 7.59 | N/A | N/A
VeloFCN [22] | LiDAR | 40.14 / 32.08 / 30.47 | N/A | N/A
MV (BV+FV) [5] | LiDAR | 86.18 / 77.32 / 76.33 | N/A | N/A
MV (BV+FV+RGB) [5] | LiDAR+Mono | 86.55 / 78.10 / 76.67 | N/A | N/A
HC-baseline | LiDAR | 88.26 / 78.42 / 77.66 | 58.96 / 53.79 / 51.47 | 63.63 / 42.75 / 41.06
VoxelNet | LiDAR | 89.60 / 84.81 / 78.57 | 65.95 / 61.05 / 56.98 | 74.41 / 52.18 / 50.49

Table 2. Performance comparison in 3D detection: average precision (in %) on the KITTI validation set.
Method | Modality | Car (Easy / Moderate / Hard) | Pedestrian (Easy / Moderate / Hard) | Cyclist (Easy / Moderate / Hard)
Mono3D [3] | Mono | 2.53 / 2.31 / 2.31 | N/A | N/A
3DOP [4] | Stereo | 6.55 / 5.07 / 4.10 | N/A | N/A
VeloFCN [22] | LiDAR | 15.20 / 13.66 / 15.98 | N/A | N/A
MV (BV+FV) [5] | LiDAR | 71.19 / 56.60 / 55.30 | N/A | N/A
MV (BV+FV+RGB) [5] | LiDAR+Mono | 71.29 / 62.68 / 56.56 | N/A | N/A
HC-baseline | LiDAR | 71.73 / 59.75 / 55.69 | 43.95 / 40.18 / 37.48 | 55.35 / 36.07 / 34.15
VoxelNet | LiDAR | 81.97 / 65.46 / 62.85 | 57.86 / 53.42 / 48.87 | 67.17 / 47.65 / 45.11

For the Car category, we compare the proposed method with several top-performing algorithms, including image-based approaches: Mono3D [3] and 3DOP [4]; LiDAR-based approaches: VeloFCN [22] and 3D-FCN [21]; and a multi-modal approach, MV [5]. Mono3D [3], 3DOP [4] and MV [5] use a pre-trained model for initialization, whereas we train VoxelNet from scratch using only the LiDAR data provided in KITTI. To analyze the importance of end-to-end learning, we implement a
strong baseline that is derived from the V ox-elNet architecture but uses hand-crafted features instead of the proposed feature learning network.We call this model the hand-crafted baseline(HC-baseline).HC-baseline uses the bird’s eye view features described in[5]which are computed at0.1m resolution.Different from[5],we in-crease the number of height channels from4to16to cap-ture more detailed shape information–further increasing the number of height channels did not lead to performance improvement.We replace the convolutional middle lay-ers of V oxelNet with similar size2D convolutional layers, which are Conv2D(16,32,3,1,1),Conv2D(32,64,3,2, 1),Conv2D(64,128,3,1,1).Finally RPN is identical in V oxelNet and HC-baseline.The total number of parame-ters in HC-baseline and V oxelNet are very similar.We train the HC-baseline using the same training procedure and data augmentation described in Section3.4.1.Evaluation on KITTI Validation SetMetrics We follow the official KITTI evaluation protocol, where the IoU threshold is0.7for class Car and is0.5for class Pedestrian and Cyclist.The IoU threshold is the same for both bird’s eye view and full3D evaluation.We compare the methods using the average precision(AP)metric. Evaluation in Bird’s Eye View The evaluation result is presented in Table1.V oxelNet consistently outperforms all the competing approaches across all three difficulty levels. HC-baseline also achieves satisfactory performance com-pared to the state-of-the-art[5],which shows that our base region proposal network(RPN)is effective.For Pedestrian and Cyclist detection tasks in bird’s eye view,we compare the proposed V oxelNet with HC-baseline.V oxelNet yields substantially higher AP than the HC-baseline for these more challenging categories,which shows that end-to-end learn-ing is essential for point-cloud based detection.We would like to note that[21]reported88.9%,77.3%, and72.7%for easy,moderate,and hard levels respectively, but these results are obtained based on a different split of 6,000training frames and∼1,500validation frames,and they are not directly comparable with algorithms in Table1. Therefore,we do not include these results in the table. Evaluation in3D Compared to the bird’s eye view de-tection,which requires only accurate localization of ob-jects in the2D plane,3D detection is a more challeng-ing task as it requiresfiner localization of shapes in3D space.Table2summarizes the comparison.For the class Car,V oxelNet significantly outperforms all other ap-proaches in AP across all difficulty levels.Specifically, using only LiDAR,V oxelNet significantly outperforms the。
云计算英文术语Cloud Computing English TerminologyCloud computing has become an integral part of our modern technological landscape. It offers numerous benefits, including scalability, cost-efficiency, and enhanced accessibility. As the popularity of cloud computing continues to rise, it is essential to familiarize ourselves with the relevant English terminology associated with this field. In this article, we will explore and explain some of the key cloud computing English terms that are commonly used.1. Cloud ComputingCloud computing refers to the delivery of computing resources, such as servers, storage, databases, and software applications, over the internet. It eliminates the need for local infrastructure and provides users with on-demand access to a shared pool of resources.2. Infrastructure as a Service (IaaS)IaaS is a cloud computing service model that provides virtualized computing resources over the internet. It allows users to rent virtual machines, storage, and networks on a pay-as-you-go basis, enabling businesses to scale their infrastructure without large upfront investments.3. Platform as a Service (PaaS)PaaS is a cloud computing service model that provides a platform for developing, testing, and deploying applications. It offers a complete environment for application development, including operating systems,databases, and programming languages, without the need to worry about underlying infrastructure management.4. Software as a Service (SaaS)SaaS is a cloud computing service model that delivers software applications over the internet. Users can access applications through web browsers or dedicated client applications without the need for installation or maintenance. Popular examples of SaaS include customer relationship management (CRM) systems, office productivity suites, and collaboration tools.5. Hybrid CloudA hybrid cloud is a combination of public and private cloud infrastructure. It allows organizations to take advantage of the benefits of both cloud environments, providing flexibility, scalability, and control over sensitive data. Hybrid clouds enable seamless integration between on-premises infrastructure and public cloud services.6. Public CloudPublic cloud refers to computing resources that are shared and accessed over the internet by multiple users. It is owned and operated by third-party service providers and offers scalability, cost-effectiveness, and ease of use. Public cloud services are available to the public on a pay-as-you-go basis.7. Private CloudPrivate cloud is a cloud infrastructure dedicated to a single organization. It can be hosted on-premises or by a third-party service provider and offersenhanced security, control, and customization compared to public cloud environments. Private clouds are suitable for organizations with strict regulatory compliance requirements or specific resource needs.8. VirtualizationVirtualization is a technology that enables the creation of virtual instances of computing resources, such as servers, storage, and networks. It allows the efficient utilization of hardware resources by running multiple virtual machines or operating systems on a single physical server.9. ContainerizationContainerization is a lightweight form of virtualization that encapsulates an application and its dependencies into a container. Containers provide a consistent and portable execution environment, enabling applications to run reliably across different computing environments without the need for complete virtualization.10. 
Data Security and Privacy
Data security and privacy are crucial aspects of cloud computing. Organizations must ensure that their data is protected against unauthorized access, data breaches, and other security threats. Encryption, access control, and data anonymization techniques are commonly used to safeguard sensitive information in the cloud.

Conclusion
As cloud computing continues to revolutionize the IT industry, understanding and familiarizing ourselves with the English terminology associated with this field becomes increasingly important. This article has provided an overview of some of the key cloud computing English terms, ranging from cloud service models like IaaS, PaaS, and SaaS, to concepts like hybrid cloud, virtualization, and data security. By becoming familiar with these terms, individuals and organizations can better navigate the cloud computing landscape and leverage its benefits effectively.
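As a small illustration of the encryption point made in the Data Security and Privacy section above, the sketch below encrypts a record on the client before it is uploaded to a cloud store. It assumes the open-source Python `cryptography` package; the record contents and the simplified key handling are purely illustrative.

```python
# Client-side (envelope-style) encryption sketch: data is encrypted locally so that
# only ciphertext ever reaches the cloud provider. In practice the key would live in
# a key-management service, not alongside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # store this key in a key-management service
cipher = Fernet(key)

record = b'{"customer_id": 123, "notes": "sensitive"}'
token = cipher.encrypt(record)         # ciphertext that is safe to upload
assert cipher.decrypt(token) == record # round-trip check before relying on the flow
```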
Article ID: 2097-1842(2023)04-0802-14    doi: 10.37188/CO.2022-0221
Chinese Optics, Vol. 16, No. 4, Jul. 2023

High-precision structured light scanning viewpoint planning for aircraft blade morphology
LI Mao-yue*, CAI Dong-chen, ZHAO Wei-xiang, XIAO Gui-feng
(Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China)
*Corresponding author, E-mail: lmy0500@

Abstract: The machining quality and detection accuracy of aero-engine blades have a very important influence on the service life of the blades. To improve the accuracy of blade detection, a high-precision scanning viewpoint planning method based on structured light is proposed in this paper. Firstly, coarse model data are obtained by a coarse scan of the overall blade, and the field of view is determined according to the camera resolution and acquisition accuracy. Secondly, an improved Angle Criterion algorithm is used to extract the boundary, and the boundary segmentation points are determined from the boundary coordinates and the field of view; the coarse model is sliced by the section-line method for a surface, and the internal segmentation points are determined from the slice results to complete the uniform segmentation of the point cloud. Then, an oriented bounding box is established for each segmented point cloud patch to obtain the coordinates of its center point, and the normal vectors are analyzed statistically to determine the main normal direction, generating the viewpoint coordinates for high-precision scanning. Finally, the surface morphology of the blade is measured and verified. The experimental results show that, compared with viewpoint acquisition based on supervoxel segmentation, the average standard deviation of the proposed method is reduced by 0.0054 mm and the number of acquisition viewpoints is reduced by one third. The proposed viewpoint planning method has good application prospects for on-machine inspection of thin-walled blades.

Key words: viewpoint planning; boundary extraction; point cloud segmentation; normal vector statistics; thin-walled blade
CLC number: TH741; document code: A

Received: 2022-10-25; revised: 2022-12-12. Supported by the National Natural Science Foundation of China (No. 51975169) and the Natural Science Foundation of Heilongjiang Province of China (No. LH2022E085).

1 Introduction
Complex thin-walled curved parts are used more and more widely in modern manufacturing, and improving the machining and measurement accuracy of such parts is essential [1].
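To make the viewpoint-generation step summarized in the abstract above more concrete, the sketch below computes, for one already-segmented patch, the oriented-bounding-box centre and a dominant normal direction, and places a camera viewpoint along that direction. It assumes the open-source Open3D package and is only an illustration of the idea, not the authors' implementation; the stand-off distance and neighbour count are invented parameters.

```python
# Viewpoint sketch for one segmented point cloud patch: OBB centre + averaged normal
# direction give a candidate camera position and viewing direction.
import numpy as np
import open3d as o3d

def viewpoint_for_patch(patch: o3d.geometry.PointCloud, standoff: float = 0.3):
    center = patch.get_oriented_bounding_box().center
    patch.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=20))
    normals = np.asarray(patch.normals)
    # crude "normal statistics": flip all normals to one hemisphere, then average
    normals[normals @ normals[0] < 0] *= -1
    main_dir = normals.mean(axis=0)
    main_dir /= np.linalg.norm(main_dir)
    camera_position = center + standoff * main_dir
    return camera_position, -main_dir      # position and viewing direction
```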
Forest Engineering, Vol. 40, No. 1, Jan. 2024
doi: 10.3969/j.issn.1006-8023.2024.01.015

3D Reconstruction of Single Wood Skeleton Based on Laser Point Cloud Data
ZHAO Yonghui, LIU Xueyan, LYU Yong, WAN Xiaoyu, DOU Huyuan, LIU Shuyu*
(College of Computer and Control Engineering, Northeast Forestry University, Harbin 150040, China)

Abstract: In response to the slow processing speed and low reconstruction accuracy encountered during the 3D reconstruction of trees, a method for 3D reconstruction of single-tree skeletons using laser point cloud data is proposed. Firstly, a combination filtering method is determined based on the point cloud data type to remove outliers and ground points. Secondly, a hybrid registration algorithm based on ISS (Intrinsic Shape Signatures) and CPD (Coherent Point Drift), called IS-CPD, is employed to obtain complete point cloud data for individual trees. Finally, a method combining Laplacian contraction of the point set and topological refinement is used to obtain the skeleton, and branch models are constructed using cylinders to achieve 3D skeleton reconstruction. Experimental results show that, compared with the traditional CPD algorithm, the proposed registration scheme improves accuracy and execution speed by 50% and 95.8% respectively, and the final reconstruction error is no more than 2.48%. The research demonstrates effective reconstruction of the 3D skeleton of individual trees, with results close to the original trees, providing a reference for building digital-twin forest environments and for forestry resource management.

Keywords: LiDAR; tree point cloud; key point extraction; tree skeleton; geometry model

Received: 2023-02-10. Supported by the National Natural Science Foundation of China (31700643). First author: ZHAO Yonghui, M.S., engineer; research interests: Internet of Things and artificial intelligence (E-mail: hero9968@). Corresponding author: LIU Shuyu, M.S., lecturer; research interests: communication and signal processing (E-mail: 1000002605@).
Citation: ZHAO Y H, LIU X Y, LYU Y, et al. 3D reconstruction of single wood skeleton based on laser point cloud data[J]. Forest Engineering, 2024, 40(1): 128-134.

0 Introduction
LiDAR can acquire dense point clouds of a target and is an important means of realizing autonomous driving and 3D reconstruction. Airborne or ground-based LiDAR can capture quantitative information such as tree height, diameter at breast height (DBH), and canopy structure for 3D tree reconstruction, providing a basis for inferring ecological structure parameters and carbon-stock inversion, and supplying data for forestry digital twins. Mainstream point cloud denoising methods fall into three categories: density-based, clustering-based, and statistics-based [1]. Separating ground points from non-ground points is the first step of point cloud processing, and many algorithms have been proposed for it; even the most advanced filtering algorithms, however, require many complex parameters. Zhang et al. [2] proposed the novel Cloth Simulation Filter (CSF), which filters ground points after adjusting only a few parameters, but it is very sensitive to point cloud noise. For registration, the classical algorithm is the Iterative Closest Point (ICP) method of Besl et al. [3], but it is prone to local optima, which limits its application. Many researchers have therefore adopted probabilistic registration, most typically the Coherent Point Drift (CPD) algorithm [4-5], which suffers from long run times and high computational complexity. Shi et al. [6] combined curvature features with CPD to obtain a fast registration method, greatly improving speed at the cost of detail accuracy. Lu et al. [7], Xia [8], and Shi et al. [9] studied point cloud registration based on key-point feature matching in depth. 3D tree geometric reconstruction has evolved from traditional rule-, sketch-, and image-based approaches to LiDAR-based methods that can recover topologically correct tree geometry. Zhai et al. [10] reconstructed trees from point cloud data but, constrained by the LiDAR field of view, could not capture the crown structure and only reconstructed the trunk. Lin et al. [11] and You et al. [12] addressed point cloud skeleton extraction and built tree geometry and topology, but the reconstructed models lack realism. Cao et al. [13] used Laplacian-based modeling to extract the geometry of the main branches with correct topological connectivity while retaining some fine twigs. Cao et al. [14] reviewed the development and prospects of point-cloud tree modeling, but noted that work combining point clouds, skeleton extraction, and reconstruction remains limited.

This study proposes a skeleton-based method that accurately reconstructs a 3D model of a single tree from its point cloud. The raw point cloud is filtered by a combination of the CSF algorithm and Kd-Tree nearest-neighbor search to extract accurate single-tree data. A hybrid registration algorithm based on tree feature points (IS-CPD) then significantly improves registration efficiency. Finally, skeleton points of the single tree are extracted, connectivity is constructed, and cylinders are fitted to the branches to complete the 3D model of the tree.
1 Data Acquisition and Preprocessing
1.1 Data acquisition
The data were collected from a ginkgo tree about 8.5 m tall and about 20 years old in the botanical garden of Kuiwen District, Weifang, Shandong Province. A RoboSense LiDAR was used to acquire point clouds from two different angles, with the sensor 1.5 m above the ground at a horizontal distance of about 10 m from the tree. Two point cloud sets were collected from due east and due north of the tree, as shown in Figure 1.

(Figure 1. Initial scan results of the two point cloud sets: (a) angle 1, due east; (b) angle 2, due north.)

1.2 Point cloud preprocessing
To improve the accuracy and efficiency of subsequent processing, the data must be preprocessed. First, the CSF filtering algorithm is used to remove redundant ground background information; the algorithm has few parameters and separates ground points quickly. By simulating a cloth falling under gravity to obtain a physical representation of the terrain, the single-tree point cloud can be separated. Because of the scanning environment and LiDAR hardware errors, outliers may appear, so a Kd-Tree algorithm is used to denoise the extracted point cloud and improve the accuracy of the single-tree data for the later stages. For each point $p_i(x_i, y_i, z_i)$ of the point cloud to be filtered, its spatial neighbors $p_j(x_j, y_j, z_j)$ are searched, and the mean neighbor distance $d_i$, the global mean $\mu$, and the standard deviation $\sigma$ are computed. Points satisfying $\mu - \alpha\sigma \le d_i \le \mu + \alpha\sigma$ are kept and outliers are filtered out ($\alpha$ is a parameter governing the spatial distribution of the point cloud). $d_i$, $\mu$, and $\sigma$ are computed as

$$d_i = \frac{1}{k}\sum_{j=1}^{k} \lVert p_i - p_j \rVert, \qquad \mu = \frac{1}{n}\sum_{i=1}^{n} d_i, \qquad \sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(d_i - \mu)^2} \qquad (1)$$

where $k$ is a parameter determining the point cloud density and $n$ is the number of points. Experiments showed that $k = 20$ and $\alpha = 1.2$ give the best results; the denoising results are shown in Figure 2, where outlier noise points and ground points are essentially removed while the outline of the point cloud model is preserved.

(Figure 2. Filtering and denoising results of the two point cloud sets.)

2 Single-Tree Skeleton Reconstruction Method
The single-tree skeleton reconstruction method consists of the following main steps, as shown in Figure 3. First, features are extracted from the two preprocessed point cloud sets and the sets are precisely registered. Second, the point cloud is geometrically contracted to obtain a zero-volume point set, which is thinned into a one-dimensional curve by topological refinement, yielding a skeleton line that closely matches the point cloud model. Finally, cylinders are fitted to the branches based on the skeleton line to build a 3D model of the branches.

(Figure 3. Process diagram of the single-tree skeleton reconstruction method.)

2.1 3D point cloud registration
CPD registration is a probabilistic point-set registration algorithm. One point set serves as the centroids of a Gaussian mixture model (GMM), with template coordinates $X_{M\times D} = (y_1, y_2, \dots, y_M)^T$; the other set is treated as the data generated by the mixture, with target coordinates $X_{N\times D} = (x_1, x_2, \dots, x_N)^T$. Here $N$ and $M$ are the numbers of points in the two sets, $D$ is the dimension, and $T$ denotes the matrix transpose. The correspondence between the two sets is obtained from the maximum posterior probability of the GMM, whose probability density function is

$$p(x) = \omega\frac{1}{N} + (1-\omega)\sum_{m=1}^{M}\frac{1}{M}\,p(x\mid m) \qquad (2)$$

where $p(x\mid m) = \frac{1}{(2\pi\sigma^2)^{D/2}}\exp\!\left(-\frac{\lVert x - y_m\rVert^2}{2\sigma^2}\right)$, $p(x)$ is the probability density, $\omega$ ($0 \le \omega \le 1$) is the weight of the outlier term, and $m$ is any index from 1 to $M$. The positions of the GMM centroids are changed by adjusting the transformation parameters $\theta$, whose values are obtained by minimizing the negative log-likelihood

$$E(\theta, \sigma^2) = -\sum_{n=1}^{N}\log\sum_{m=1}^{M} p(m)\,p(x_n\mid m) \qquad (3)$$

where the correspondence between $x_n$ and $y_m$ is defined by the posterior probability of the GMM centroids, $p(m\mid x_n) = p(m)\,p(x_n\mid m)$. The expectation-maximization algorithm iterates to optimize the maximum-likelihood estimate and stops at convergence. Once $\theta$ and $\sigma^2$ are obtained, the registration of the template point set to the target point set is complete.

The point clouds collected by the scanner are usually very large, so not all points are useful for registration; moreover, the CPD algorithm has high computational complexity and matches slowly. This study therefore uses the ISS algorithm [15] to extract key points and reduce the number of points whose geometric information is insignificant. Registering these feature points precisely improves the efficiency of point cloud registration. Figure 4 shows the IS-CPD registration process.

(Figure 4. Registration process based on feature point extraction.)

The IS-CPD registration procedure is as follows.
(1) Select the overlapping region of the two point clouds.
(2) Extract the feature point sets with the ISS algorithm. Suppose the point cloud has $n$ points $(x_i, y_i, z_i)$, $i = 0, 1, \dots, n-1$, and write $P_i = (x_i, y_i, z_i)$.
 ① For each point of the input cloud, build a spherical neighborhood of radius $r$ and compute the weight of each point according to Eq. (4):
$$W_{ij} = \frac{1}{\lVert p_i - p_j\rVert}, \quad \lvert p_i - p_j\rvert < r \qquad (4)$$
 ② Compute the covariance matrix of each point and its eigenvalues $\{\lambda_i^1, \lambda_i^2, \lambda_i^3\}$ according to Eq. (5), sorted in ascending order:
$$\mathrm{cov}(p_i) = \frac{\sum_{\lvert p_i - p_j\rvert < r} w_{ij}(P_i - P_j)(P_i - P_j)^T}{\sum_{\lvert P_i - P_j\rvert < r} w_{ij}} \qquad (5)$$
 ③ Set thresholds $\varepsilon_1$ and $\varepsilon_2$; points satisfying $\lambda_i^1/\lambda_i^2 \le \varepsilon_1$ and $\lambda_i^2/\lambda_i^3 \le \varepsilon_2$ are taken as key points.
(3) Initialize the CPD algorithm parameters.
(4) Compute the correspondence probability matrix and the posterior probability $p(m\mid x_n)$.
(5) Solve for the parameters by minimizing the negative log-likelihood function.
(6) Check the convergence of $p$; if it has not converged, repeat step (4) until convergence.
(7) Apply the resulting transformation matrix to the point set data to complete the registration.
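A minimal sketch of the IS-CPD idea, assuming the open-source Open3D and pycpd packages are available: ISS keypoints are extracted from both clouds, a rigid CPD registration is estimated on the keypoints, and the resulting transform is applied to the full source cloud. The radii and other parameter values are placeholders, not the settings used in the paper.

```python
# IS-CPD sketch: ISS keypoint extraction (Open3D) followed by rigid CPD (pycpd) on
# the keypoints only, which is what makes the registration fast.
import numpy as np
import open3d as o3d
from pycpd import RigidRegistration

def iscpd_register(source_pcd, target_pcd, radius=0.05):
    src_kp = o3d.geometry.keypoint.compute_iss_keypoints(
        source_pcd, salient_radius=radius, non_max_radius=radius)
    tgt_kp = o3d.geometry.keypoint.compute_iss_keypoints(
        target_pcd, salient_radius=radius, non_max_radius=radius)
    reg = RigidRegistration(X=np.asarray(tgt_kp.points), Y=np.asarray(src_kp.points))
    _, (scale, rotation, translation) = reg.register()    # EM iterations of CPD
    # pycpd uses a row-vector convention: y' = s * y @ R + t
    pts = np.asarray(source_pcd.points)
    aligned = scale * pts @ rotation + translation
    return aligned
```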
2.2 Point cloud branch reconstruction
The traditional approach builds the branches directly on the point cloud surface, which produces many distorted structures. In this study the skeleton line of the single tree is extracted first, and the geometric model is then built by fitting cylinders. Figure 5 shows the process of skeleton extraction and branch reconstruction.

(Figure 5. Flow chart for building the branch model.)

To extract the trunk and branches precisely, Laplacian contraction is used to obtain the skeleton. First, the vertex neighborhoods of the point cloud model are triangulated to obtain the one-ring neighborhood relations of each vertex. The corresponding cotangent-weighted Laplacian matrix is then computed and used to contract the point cloud until the model shrinks to 1% of its initial volume, after which topological refinement thins the point set into a one-dimensional curve. The contracted points are sampled with farthest-point sampling spheres, the sampled points are connected into an initial skeleton using one-ring neighborhood connectivity, and unnecessary edges are collapsed until no triangles remain, giving a skeleton line that closely matches the point cloud model.

To model the branch geometry accurately, cylinder fitting is used. In the tree-base region, an optimization method is used to obtain the geometry of the main trunk [16]. Because the point cloud of small branches near the crown and branch tips is cluttered, tree allometry theory is used to control the branch radii. Finally, cylinders are fitted to obtain the 3D geometric model of the tree point cloud [17]; the principle is shown in Figure 6. Circular cross-sections with radius R are generated at the upper end point M and the lower end point N, and the circle points are connected along the skeleton line to draw the cylinder representing each branch, until the branches of the whole tree are drawn.

(Figure 6. Principle of drawing the trunk geometry: (a) example of a cylinder model; (b) example of drawing a partial branch.)

3 Experimental Results and Analysis
3.1 Point cloud registration results and analysis
To verify the effectiveness of the IS-CPD registration algorithm, experiments were carried out on the filtered point clouds, comparing its running time and root mean square error (RMSE) with the original CPD algorithm and the method of Shi et al. [6] on the same data. The RMSE is given by Eq. (6); a smaller value indicates a more accurate registration. Figure 7 and Table 1 give the comparison of the three registration algorithms.

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \hat{x}_i)^2}{n}} \qquad (6)$$

where $n$ is the number of points, and $x_i$ and $\hat{x}_i$ are the Euclidean distances between corresponding points before and after registration.

(Figure 7. Visual comparison of point cloud registration.)

Table 1. Comparison of point cloud registration results (total points: 37,956 for angle 1 and 37,647 for angle 2; the time required by IS-CPD to extract key points is negligible)
Registration algorithm | Time /s | RMSE /m
CPD | 261.74 | 8.3×10⁻³
Shi et al. [6] | 86.58 | 1.6×10⁻²
Proposed algorithm | 10.77 | 4.1×10⁻³

From the registration results in Figure 7 and Table 1, the algorithm of Shi et al. [6] improves the registration speed, but its detail accuracy drops and the registration result is poor. In contrast, both CPD and IS-CPD successfully fuse the two point clouds acquired from different angles with millimeter-level accuracy, and the two can be regarded as nearly equivalent in effect, while the time complexity of the proposed algorithm is much lower. In addition, Table 1 shows that the registration time is reduced to 10.77 s and the average registration accuracy is improved by about 50% relative to CPD.

3.2 Point cloud branch reconstruction results and analysis
In the geometric reconstruction stage (Figure 8), the Laplacian-contraction skeleton extraction needs fewer than five iterations to contract the points to good positions, as shown in Figure 8(b). Topological refinement of the contracted zero-volume point set yields a skeleton line that closely matches the point cloud model, as shown in Figure 8(c). Cylinders are then fitted to the branches, completing the reconstruction of the tree point cloud; Figure 8(d) shows the final result of the geometric reconstruction of the tree skeleton.

(Figure 8. Single-tree geometric reconstruction process: (a) input point cloud; (b) point cloud contraction; (c) connected skeleton lines; (d) geometric model of the tree point cloud.)

Tree height and DBH of the single tree are used as accuracy metrics for the reconstructed model. First, a cylinder is fitted to the trunk points and the point cloud is projected onto the cylinder axis; the tree height is obtained from the maximum and minimum of this axial projection. Meanwhile, following the method of Pitkanen et al. [18], the trunk point cloud is sliced into layers, the layered points are projected onto a 2D plane, and a more accurate DBH is obtained by circle fitting. To verify the accuracy of the reconstructed model, 20 trials were carried out and the results were compared with the reconstruction method of Nurunnabi et al. [16]. Table 2 lists the mean tree height and DBH obtained by the two methods together with the measured values. The results show that the proposed algorithm is more accurate than the method of Nurunnabi et al. [16], with an average DBH error of only 2.48% and an average height error of only 1.64%.

Table 2. Tree reconstruction accuracy analysis
Evaluation method | DBH /m | Height /m | Average DBH error /% | Average height error /%
Nurunnabi et al. [16] | 2.13×10⁻¹ | 8.26 | 5.97 | 3.17
Proposed algorithm | 1.96×10⁻¹ | 8.39 | 2.48 | 1.64
Measured value | 2.01×10⁻¹ | 8.53 | - | -
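The DBH estimate described above (slicing the trunk near breast height and fitting a circle to the projected points) can be illustrated with a short NumPy sketch. The breast height, slice thickness, and the algebraic least-squares circle fit are assumptions of this example, not the exact procedure of the paper.

```python
# DBH sketch: take a thin horizontal slice of trunk points around 1.3 m, project it to
# the XY plane, and fit a circle by linear least squares.
import numpy as np

def estimate_dbh(trunk_points: np.ndarray, breast_height=1.3, slice_thickness=0.05) -> float:
    z = trunk_points[:, 2]
    ring = trunk_points[np.abs(z - breast_height) < slice_thickness / 2][:, :2]
    x, y = ring[:, 0], ring[:, 1]
    # Algebraic circle fit: x^2 + y^2 = 2a*x + 2b*y + c, with c = r^2 - a^2 - b^2
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    radius = np.sqrt(c + a**2 + b**2)
    return 2.0 * radius        # diameter at breast height
```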
4 Conclusion
This study discussed the workflow of reconstructing a single tree with LiDAR and analyzed and improved the key problems. The advantages of CSF filtering and the Kd-Tree algorithm are fully exploited to separate the single-tree data precisely and increase processing speed. The proposed IS-CPD registration algorithm improves the efficiency of point cloud registration by about 95.8%. From the precisely registered point cloud, the skeleton tree is successfully extracted and the final reconstruction error is kept within 2.48%. The experimental results show that the method is feasible for tree point cloud filtering, registration, and skeleton extraction, that the branch structure is reconstructed well, and that the reconstructed model can support the assessment of agricultural and forestry crops and of forest ecological structure and health.

References
[1] LU D D, ZOU J G. Comparative research on denoising algorithms of 3D laser point cloud[J]. Survey and Mapping Bulletin, 2019(S2): 102-105.
[2] ZHANG W, QI J, WAN P, et al. An easy-to-use airborne LiDAR data filtering method based on cloth simulation[J]. Remote Sensing, 2016, 8(6): 501.
[3] BESL P J, MCKAY H D. A method for registration of 3-D shapes[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 239-256.
[4] MYRONENKO A, SONG X. Point set registration: coherent point drift[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(12): 2262-2275.
[5] WANG A L, ZHANG Y X, WU H B, et al. LiDAR data classification based on ensembled convolutional neural networks[J]. Journal of Harbin University of Science and Technology, 2021, 26(4): 138-145.
[6] SHI X, REN J, REN X K, et al. Drift registration based on curvature characteristics[J]. Laser & Optoelectronics Progress, 2018, 55(8): 248-254.
[7] LU J, SHAO H X, WANG W, et al. Point cloud registration method based on key point extraction with small overlap[J]. Transactions of Beijing Institute of Technology, 2020, 40(4): 409-415.
[8] XIA K Q. Research on point cloud algorithm based on ISS feature points and improved descriptor[J]. Software Engineering, 2022, 25(1): 1-5.
[9] SHI F B, CAO Q, WEI J. Surface point cloud registration method based on feature points[J]. Beijing Surveying and Mapping, 2022, 36(10): 1345-1349.
[10] ZHAI X X, SHAO J, ZHANG W M, et al. Three-dimensional reconstruction of trees using mobile laser scanning point cloud[J]. China Agricultural Information, 2019, 31(5): 84-89.
[11] LIN G, TANG Y, ZOU X, et al. Three-dimensional reconstruction of guava fruits and branches using instance segmentation and geometry analysis[J]. Computers and Electronics in Agriculture, 2021, 184: 106107.
[12] YOU A, GRIMM C, SILWAL A, et al. Semantics-guided skeletonization of upright fruiting offshoot trees for robotic pruning[J]. Computers and Electronics in Agriculture, 2022, 192: 106622.
[13] CAO J J, TAGLIASACCHI A, OLSON M, et al. Point cloud skeletons via Laplacian based contraction[C]//Proceedings of the Shape Modeling International Conference. Los Alamitos: IEEE Computer Society Press, 2010: 187-197.
[14] CAO W, CHEN D, SHI Y F, et al. Progress and prospect of LiDAR point clouds to 3D tree models[J]. Geomatics and Information Science of Wuhan University, 2021, 46(2): 203-220.
[15] YU Z. Intrinsic shape signatures: a shape descriptor for 3D object recognition[C]//IEEE International Conference on Computer Vision Workshops. IEEE, 2010.
[16] NURUNNABI A, SADAHIRO Y, LINDENBERGH R, et al. Robust cylinder fitting in laser scanning point cloud data[J]. Measurement, 2019, 138: 632-651.
[17] GUO J W, XU S B, YAN D M, et al. Realistic procedural plant modeling from multiple view images[J]. IEEE Transactions on Visualization and Computer Graphics, 2020, 26(2): 1372-1384.
[18] PITKANEN T P, RAUMONEN P, KANGAS A. Measuring stem diameters with TLS in boreal forests by complementary fitting procedure[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 147: 294-306.
Data SheetFortiNDR™ CloudFortinet’s SaaS-based FortiNDR Cloud leverages artificial intelligence (AI) and machine learning (ML), behavioral, and human analysis to inspect network traffic to detect malicious behavior early while reducing false positives. FortiNDR Cloud provides unified network traffic visibility across multi-cloud and hybrid environments as well as distributed workforces and constrained, mission-critical environments. FortiNDR Cloud automatically identifies anomalous and malicious behavior, provides risk scores, and shares relevant threat intelligence to assist security teams in prioritizing response efforts.As the world’s only Guided-SaaS NDR, FortiNDR Cloud provides in-person Technical Success Manager (TSM) support. TSMs act as trusted advisors who share findings, tune configurations, and helporganizations optimize NDR deployments.Network Detection and ResponseHighlights•365-day historical deep network traffic visibility and analytics •Curated threat intelligence, powered by FortiGuard Labs, for reduced false positives •Fortinet Security Fabric and third-party integrations •Leverage AI, expert analysis, and cloud compute for threat detection•Coverage for over 90% of MITRE ATT&CK techniquesHighlightsKey Features•Guided SaaS with trusted advisors•365-day data retention for retrospective analysis and threat hunting•Hunt adversaries with Guided Playbooks•Automatic and manual response for quarantine and control•Orchestrated response with integrations with Fortinet and third party tools including CrowdStrike, FortiEDR, Splunk, Cortex, FortiSIEM, and FortiSOAR•Global crowdsourced threat intelligence from numerous third-party feeds and proprietary sensors Basic CompetenciesImproved Visibility of ThreatsReal-time, automated investigation of network security incidents and extended historical network visibility enable a faster, more comprehensive response to threats. Because the impact of an intrusion increases over time, real-time response is the best way to minimize damage.Get Expertise on DemandFortiNDR Cloud helps security teams overcome the skills gap challenge by providing in-person Technical Success Manager (TSM) support. TSMs act as trusted advisors who share findings, tune configurations, and help organizations optimize NDR deployments.Fewer Distractions from False Positives and Detection TuningWith threat analysis and detection tuning provided in real-time, organizations are less vulnerable while awaiting a vendor’s application patch or anti-malware signature.365-day Data Retention for Retrospective Analysis and Threat HuntingFortiNDR Cloud retains rich network metadata for 365 days, enabling a comprehensive investigation. This data ensures newly discovered tools, tactics, and procedures can be retroactively investigated to discover if and when threats may have infiltrated the customer’snetwork.FortiNDR™ Cloud DeploymentFortiNDR Cloud Sensor Specificationsx 482 mm (w/ handle) x 757.75 mm (w/ bezel)42.8mmx 434 mm (w/o handle) x 743.91 mm (w/o Bezel)x 482 mm (w/ handle) x 757.75 mm (w/ bezel)42.8mmx 434 mm (w/o handle) x 743.91 mm (w/o Bezel)UL/cUL, CB UL/cUL, CB—Ordering InformationFortinet Corporate Social Responsibility PolicyFortinet is committed to driving progress and sustainability for all through cybersecurity, with respect for human rights and ethical business practices, making possible a digital world you can always trust. 
You represent and warrant to Fortinet that you will not use Fortinet’s products and services to engage in, or support in any way, violations or abuses of human rights, including those involving illegal censorship, surveillance, detention, or excessive use of force. Users of Fortinet products are required to comply with the Fortinet EULA and report any suspected violations of the EULA via the procedures outlined in the Fortinet Whistleblower Policy.Reports at 1 Gbps of metered ussage. Includes FortiCare premium. Does not include physical sensors.True Up Usage NDRC-TRUEUP-1MTH Throughput True-up SKU for traffic overages in FortiNDR Cloud for 1 Gbps of metered usage.FortiNDRCloud-500FFNRC-500FFortiNDRCloud 500F (small) physical sensor to deliver data to FortiNDR Cloud SaaS Platform. Hardware Only. 1U with 2 x Copper / 2 x Fiber SFP+. Must purchase support. Ship with 2 x 10G multimode transceivers.Small Sensor (500F) Licence and SuppportFC-10-NDR5F-247-02-DD Annual license for support for FNRC-500F (small) sensor and forwarding traffic to the FortiNDR Cloud SaaS Platform, includes FortiCare premium.FortiNDRCloud-900FFNRC-900FFortiNDRCloud 900F (large) physical sensor to deliver data to FortiNDR Cloud SaaS Platform. Hardware Only. 1U with 2 x Copper / 2 x Fiber SFP+. Must purchase support. Ship with 4 x 10G multimode transceiversLarge Sensor (900F) Licence and SuppportFC-10-NDR9F-247-02-DDAnnual license for support for FNRC-900F (large) sensor and forwarding traffic to the FortiNDR Cloud SaaS Platform, includes FortiCare premium. Copyright © 2023 Fortinet, Inc. All rights reserved. Fortinet, FortiGate, FortiCare and FortiGuard, and certain other marks are registered trademarks of Fortinet, Inc., and other Fortinet names herein may also be registered and/or common law trademarks of Fortinet. All other product or company names may be trademarks of their respective owners. Performance and other metrics contained herein were attained in internal lab tests under ideal conditions, and actual performance and other results may vary. Network variables, different network environments and other conditions may affect performance results. Nothing herein represents any binding commitment by Fortinet, and Fortinet disclaims all warranties, whether express or implied, except to the extent Fortinet enters a binding written contract, signed by Fortinet’s General Counsel, with a purchaser that expressly warrants that the identified product will perform according to certain expressly-identified performance metrics and, in such event, only the specific performance metrics expressly identified in such binding written contract shall be binding on Fortinet. For absolute clarity, any such warranty will be limited to performance in the same ideal conditions as in Fortinet’s internal lab tests. Fortinet disclaims in full any covenants, representations, and guarantees pursuant hereto, whether express or implied. Fortinet reserves the right to change, modify, transfer, or otherwise revise this publication without notice, and the most current version of the publication shall be applicable.。
English essay: start a business or take a job? Answer (in English):

To start a business or to get a job is a momentous decision that weighs heavily on many individuals as they navigate their career paths. There is no one-size-fits-all solution, and the optimal choice hinges upon an array of factors unique to each person's circumstances, aspirations, and risk tolerance.

Embarking on an Entrepreneurial Journey
Becoming an entrepreneur entails venturing into the uncharted waters of self-employment. This path offers the allure of independence, the potential for high returns, and the satisfaction of building something from scratch. However, it also comes with its fair share of challenges, including financial risks, long hours, and the weight of responsibility.

Individuals suited to entrepreneurship tend to possess a strong entrepreneurial spirit, an unwavering belief in their ideas, and the resilience to overcome setbacks. They are often comfortable with uncertainty and thrive in the fast-paced, ever-evolving business landscape.

Examples of Entrepreneurial Success
FortiOS Is the Foundation of the Fortinet Security FabricExecutive SummaryFortiOS, Fortinet’s operating system, is the foundation of the Fortinet SecurityFabric. The Security Fabric is the industry’s highest-performing and most expansivecybersecurity platform, organically built on a common management and securityframework. FortiOS ties all of the Fabric’s security and networking componentstogether to ensure seamless integration. This enables the convergence ofnetworking and security functions to deliver a consistent user experience andresilient security posture across all manner of environments. On-premises, cloud,hybrid, and converging IT/OT/IoT infrastructure are included.FortiOS 7.4 is packed with powerful new features that give IT leadersunprecedented visibility and enforcement across even the most complex hybridenvironments. Updates include:"FortiOS … improves operational efficiency and provides consistent security no matter where users or applications are distributed.”1SOLUTION BRIEF n Industry-first unified networking and security architecture for OT, IoT, and IT devicesn Industry-first unified management and analytics capabilities across Fortinet’s entire secure networking portfolio through FortiAnalyzern Greater automation and real-time response capabilities for SOC teams to protect against and reduce time to resolution for sophisticated attacks such as weaponized AI attacks, targeted ransomware, and criminal-sponsored APTsn Enhancements to reduce alert triage and incident investigation across early detection solutions including FortiEDR,FortiXDR, FortiRecon, and FortiDeceptorn New features to reduce risk across converging OT/IT/IoT environmentsFortiOS and the Fortinet Security Fabric Enable Broad, Integrated, and Automated SecurityFigure 1: The Fortinet Security FabricHaving one unifying operating system that spans the entire distributed Security Fabric ensures:n Consistent, centralized management and orchestration of security policy and configurationsn Broad reach and control across the expanded attack surface and at every step of the attack cyclen High-performance enforcement of context-aware security policyn Artificial intelligence (AI)-based threat detection and recommendationsn AI-based data correlation for analysis and reporting across a unified Fabric-level datasetn Automated, multipronged response in real time to cyberattacks across the attack surface and throughout the attack cyclen Improved threat response and reduced risk through enhanced security orchestration, automation, and response (SOAR) capabilitiesFortiOS 7.4 Delivers New CapabilitiesFortiOS uniquely empowers organizations to run their businesses without compromising performance, protection, or puttingthe brakes on innovation. 
A few of the key FortiOS 7.4 and Security Fabric enhancements designed to address today’s unique challenges are listed below.Secure networking and managementNew innovations to Fortinet’s Secure Networking Portfolio and FortiOS 7.4 span FortiManager, hybrid mesh firewall, Secure SD-WAN, single-vendor SASE, Universal zero-trust network access (ZTNA), and secure WLAN/LAN.Unified management and analytics across hybrid networksFortiManager provides IT leaders with unprecedented visibility and enforcement across all secure networking elements, including hybrid mesh firewall, single-vendor SASE, Universal ZTNA, Secure SD-WAN, and secure WLAN/LAN.Hybrid mesh firewall for data center and cloudFortiGate 7080F is a new series of next-generation firewalls (NGFWs) that eliminates point products, reduces complexity, and delivers higher performance through purpose-built ASIC technology and AI/ML-powered advanced security.FortiFlex is a points-based consumption program with support for hybrid mesh firewall deployments and a variety of products, such as virtual machines, FortiGate appliances, and SaaS-based services, among others.Secure SD-WAN for branch officesFortinet Secure SD-WAN enables consistent security and superior user experience for business-critical applications, whether in the cloud or on-premises, and supports a seamless transition to single-vendor SASE. New enhancements include automation in overlay orchestration to accelerate site deployments and a redesign of the monitoring map view to provide global WAN status for each.Single-vendor SASE for remote users and branch officesFortiSASE converges cloud-delivered security and networking to simplify operations across hybrid networks. FortiSASEnow integrates with FortiManager, allowing unified policy management for Secure SD-WAN and SASE along with unmatched visibility across on-premises and remote users.Universal ZTNA for remote users and campus locationsFortinet Universal ZTNA provides the industry’s most flexible zero-trust application access control no matter where the user or application is located. Universal ZTNA now delivers user-based risk scoring as part of our continuous checks for ongoing application access.“Via the power of the FortiOS operating system, FortiGate delivers one of the top secure SD-WAN solutions, includes a powerful LAN edge controller, enables the industry’s only Universal ZTNA application gateway, and facilitates the convergence of NOC and SOC.”2WLAN/LAN for branch offices and campus locationsFortiAP secure WLAN access points now integrate with FortiSASE, marking theindustry’s first AP integration with SASE. This enables secure micro-brancheswhere an AP is deployed to send traffic to a FortiSASE solution and ensurecomprehensive security of all devices at the site.Prevention, early detection, and real-time responseFortinet has added new real-time response and automation capabilities acrossthe Security Fabric to enable SOC teams to protect against and reduce time toresolution for sophisticated attacks such as weaponized AI attacks, targetedransomware, and criminal-sponsored APTs. New solutions and enhancementsacross five key areas include:Endpoint security and early responseFortiEDR and FortiXDR now provide additional interactive incident visualizationwith enriched contextual incident data using multiple threat intelligence feeds toenable customers to simplify and expedite investigations.FortiNDR Cloud combines robust artificial intelligence, complemented by pragmatic analysis and breach protectiontechnology. 
The solution provides 365-day retention and visibility into network data, built-in playbooks, and threat hunting capabilities to detect anomalous and malicious behavior on the network. Choose from a self-contained, on-premisesdeployment powered by the Fortinet Virtual Security Analyst, or a new guided SaaS offering maintained by advanced threat experts from FortiGuard Labs.FortiRecon , supported by threat experts from FortiGuard Labs, now delivers enhanced proactive threat intelligence into critical risks associated with supply chain vendors and partners, including external exposed assets, leaked data, and ransomware attack intelligence.FortiDeceptor now offers vulnerability outbreak defense. When a vulnerability is reported by FortiGuard Labs, it is automatically pushed as a feed to the outbreak decoy to redirect attackers to fake assets and quarantine the attack early in the kill chain.Further, a SOAR playbook can automatically initiate the creation of and strategically place deception assets to gather granular intel and stop suspicious activities. FortiDeceptor also now offers a new attack exchange program that allows FortiDeceptor users to anonymously exchange valuable intel on the most current attacks and take proactive steps to avoid a breach.SOC automation and augmentationFortiAnalyzer enables more sophisticated event correlation across different types of log sources using a new intuitive rules editor that can be mapped to MITRE ATT&CK use cases.FortiSOAR now offers a turnkey SaaS subscription option, inline playbook recommendations driven by machine learning, extensive OT security features and playbooks, and unique no/low-code playbook creation enhancements.FortiSIEM now includes new link graph technology that allows for easy visualization of relationships between users, devices, and incidents. The solution is also now powered by an advanced machine learning framework, which enhances protection by detecting anomalies and outliers that may be missed by traditional methods.FortiGuard SOC-as-a-Service now offers AI-assisted incident triage as well as new SOC operations readiness andcompromise assessment services from FortiGuard Labs.AI-powered threat intelligenceFortiGuard Industrial Security Service significantly reduces time to protection with enhanced automated virtual patching for both OT and IT devices based on global threat intelligence, zero-day research, and Common Vulnerabilities and Exposures (CVE) query service.FortiGuard IoT Service enhances granular OT security at the industry level with Industrial-Internet-of-Things (IIoT) and Internet-of-Medical-Things (IoMT) device convergence.FortiSIEM unified security analytics dashboards now incorporate mapping of industrial devices and communication paths to the Purdue model hierarchy, include new OT-specific playbooks for threat remediation, and use of the ICS MITRE ATT&CK matrix for OT threat analysis.Identity and accessFortiPAM privileged account management provides remote access for IT and OT networks. It now includes ZTNAcontrols when users try to access critical assets. The ZTNA tags can be applied to check device posture continuously for vulnerabilities, updated AV signatures, location, and machine groups.Application securityFortiDevSec provides comprehensive application security testing for application code and runtime applications. The solution incorporates SAST, DAST, and SCA, for early vulnerability and misconfigurations detection, and protection including secret discovery. 
Risk reduction for cyber-physical and industrial control systemsFortinet’s portfolio of solutions and our Security Fabric for OT are designed specifically for cyber-physical security. New enhancements include:FortiGate 70F Rugged Next-Generation Firewall (NGFW) is the latest addition to Fortinet’s rugged portfolio designed for harsh environments. It features a new compact design with converged networking and security capabilities on a single processor. FortiDeceptor Rugged 100G is now available as an industrially hardened rugged appliance, ideal for harsh industrial environments. FortiPAM offers enterprise-grade privileged access management for both IT and OT ecosystems.FortiSIEM unified security analytics dashboards now include event correlation and mapping of security events to the Purdue model. FortiSOAR now offers features to reduce alert fatigue and enable security automation and orchestration across IT andOT environments.FortiGuard Industrial Security Service now includes more than 2,000 application control signatures for OT applications and protocols that support deep packet inspection.Fortinet Cyber Threat Assessment Program (CTAP) for OT validates OT network security effectiveness, application flows, and includes expert guidance.OT tabletop exercises for OT security teams are led by FortiGuard Incident Response team facilitators with expertise in threat analysis, mitigation, and incident response.FortiOS and the Fortinet Security Fabric Address Current and Emerging Security Challenges FortiOS 7.4 provides features and enhancements to support today’s fast-changing hybrid networking and security needs. FortiOS is continually updated to ensure organizations stay ahead of today’s ever-evolving threat landscape. With an expansive Fortinet Security Fabric solution in place, organizations of any size can be assured that they have the tools they need to address all their security and networking challenges, no matter how broadly their users and networks are distributed, today and in the future.1 “Ken Xie Q&A: Growth, Differentiators, and FortiSP5,” Fortinet, February 13, 2023.2 John Maddison, “Setting the Record Straight on Competitor Misinformation,” Fortinet, November 11, 2022. Copyright © 2023 Fortinet, Inc. All rights reserved. Fortinet, FortiGate, FortiCare and FortiGuard, and certain other marks are registered trademarks of Fortinet, Inc., and other Fortinet names herein may also be registered and/or common law trademarks of Fortinet. All other product or company names may be trademarks of their respective owners. Performance and other metrics contained herein were attained in internal lab tests under ideal conditions, and actual performance and other results may vary. Network variables, different network environments and other conditions may affect performance results. Nothing herein represents any binding commitment by Fortinet, and Fortinet disclaims all warranties, whether express or implied, except to the extent Fortinet enters a binding written contract, signed by Fortinet’s General Counsel, with a purchaser that expressly warrants that the identified product will perform according to certain expressly-identified performance metrics and, in such event, only the specific performance metrics expressly identified in such binding written contract shall be binding on Fortinet. For absolute clarity, any such warranty will be limited to performance in the same ideal conditions as in Fortinet’s internal lab tests. 
Fortinet disclaims in full any covenants, representations, and guarantees pursuant hereto, whether express or implied. Fortinet reserves the right to change, modify, transfer, or otherwise revise this publication without notice, and the most current version of the publication shall be applicable.
DS-A81024S/24024-slot Single Controller StorageIntroductionThe DS-A81 series is an economical and reliable Hybrid SAN (Storage Area Network) product. Hikvision Hybrid SAN product creates a network with access to data storage servers, and it inte grates Hikvision’s unique video direct streaming technology as well as supports IPSAN. Hybrid SAN system supports third-party cameras such as BOSCH, AXIS, SONY, Samsung, etc., and supports RTSP and ONVIF protocols, and extended data retention. With no recording server needed, Hikvision’s Hybrid SAN systems truly make applications simple, flexible, and budget-friendly.Key FeatureEconomical and Stable Hardware Platform●64-bit multi-core processor.● 4 to 64 GB high-speed cache.●PCI-E3.0 high-speed transmission channel,support 4×GbE or 2 ×10GbE is extendable●Five 10M/100M/1000M network interfaces.●Redundant design for key modules.●4U chassis model supports 24 HDDs.HDD Detection and Repair/RAID Optimization●Detection before operation.●Detection in operation.●Fault repair.●RAID 0, 1, 3, 5, 6, 10, 50.●Global and local hot spare.●Quick RAID building.●Capability of cascading upAdvanced Data Protection●Synchronous backup of key information in systemdisk and HDD.●RAID-based tamper-proof data technology.●Auto data synchronization between devices. Energy Efficient●CPU load balance.●Auto fan tuning. User-Friendly UI●One touch configuration.●Various alarm management.●Supports SADP protocol.●Supports SNMP.Video Security-Specialized Stream Media Direct Storage (Direct Streaming Mode)●Space pre-allocation strategy.●Direct storage for video and picture streams.●Supports direct IP camera/DVR/NVR streaming andrecording.●Support H264, H264+, H265, H265+.●Support N+1.●Supports camera access through iSCSI, RTSP, ONVIF,and PSIA protocol.●Supports alarm/scheduled/manual recording.●Automatic network replenishment (ANR), timelyuploading, and video loss alarm.●Lock key videos.●Supports both Direct Streaming Mode and IPSANMode.●Search, play, and download videos by video type orvideo time.SpecificationNetworkProtocol iSCSI, RTSP, ONVIF, PSIAExternal interfaceUSB 2*USB3.0,1*USB2.0VGA SupportedData network interface 4, 1000M Ethernet interface(4 × GbE or 2 × 10GbE is extendable)MiniSAS interface 1COM 1Management network interface 1, 1000M Ethernet interfaceStorageHDD slot 24HDD information 24*10TBInterface/capacity SATA/1 TB, 2 TB, 3 TB, 4 TB, 6 TB, 8 TB, 10 TB, 12 TB, 14 TB, 16 TBHot-swapping SupportedHDD support RAID(Enterprise Hard Disk)RAID RAID 0, 1, 3, 5, 6, 10, 50,JBOD, Hot-SparePerformanceDirect Streaming Mode: video (2Mbps) + picture512-chHardwareStructure Controller StructureProcessor 64-bit multi-core processorCache 4 GB (extendable to 64GB)Storage ManagementDisk management Disk inspection, alarm and repairLogical volume management iSCSI volume, video volume managementRecording ManagementRecording mode Timing recording, manual recording, alarm recordingVideo protection Lock key video, N+1 service protection, ANR, video loss detection and alarm Searching mode Search by time and eventDownloading mode Quick download, batch download, download by segment, download by merging Device MaintenanceManagement mode Web-based GUI, serial port CLI, centralized management of multiple devices Alarm mode Sound, light, email, message, web pageLog downloading Download on web pageHDDHDD Model Hikvision,HK7216AH,16T; Hikvision,HK7210AH,10T; Hikvision,HK728TAH,8T; Hikvision,HK726TAH,6T; Hikvision,HK724TAH,4T;GeneralPower supply Redundant 550W Consumption(with HDDs) Working: ≤ 480Environment 
temperature: Working: +5 °C to +40 °C (41 °F to 104 °F); Storing: -20 °C to +70 °C (-4 °F to +158 °F)
Humidity: Working: 20% to 80% RH (non-condensing/frozen); Storing: 5% to 90% RH (non-condensing/frozen)
Chassis: 4U (19-inch height)
Dimensions (W × D × H): 484 x 684 x 174 mm (19.1 x 26.9 x 6.9 in)
Weight (without HDDs): 29 kg

Typical Application

Physical Interface
No. | Description
1 | Power module
2 | VGA interface
3 | USB interface
4 | USB interface
5 | LAN interface 1 to 4
6 | SAS interface
7 | COM
8 | FN button
9 | Power switch
10 | Management LAN interface
11 | RS-232 interface

Available Model
DS-A81024S/240

Dimension
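As a rough, generic illustration of how the supported RAID levels trade raw capacity for redundancy on a fully populated 24-bay unit (standard RAID arithmetic only, not a Hikvision-published figure; hot spares, filesystem overhead, and TB/TiB rounding are ignored):

```python
def usable_tb(n_disks, disk_tb, raid, raid50_groups=4):
    """Approximate usable capacity (TB) for common RAID levels."""
    if raid == 0:
        return n_disks * disk_tb                    # striping only, no redundancy
    if raid in (1, 10):
        return n_disks * disk_tb / 2                # mirrored capacity
    if raid in (3, 5):
        return (n_disks - 1) * disk_tb              # one disk's worth of parity
    if raid == 6:
        return (n_disks - 2) * disk_tb              # two disks' worth of parity
    if raid == 50:
        return (n_disks - raid50_groups) * disk_tb  # one parity disk per RAID 5 group
    raise ValueError("unsupported RAID level")

for level in (0, 5, 6, 10, 50):
    print(f"RAID {level:>2}: ~{usable_tb(24, 10, level):.0f} TB usable of 240 TB raw")
```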
Hybrid Cloud ManagementMicro Focus® Hybrid Cloud Management (HCM) is a unified automation framework which allows IT to aggregate cloud services; design, deploy, manage and govern hybrid resources, orchestrate IT processes and provide cloud and cost governance.Hybrid Cloud Management for the Digital EnterpriseMicro Focus Hybrid Cloud Management (HCM) is a unified solution for enterprise multi-cloud management. HCM allows IT to quickly aggre-gate and broker a select set of cloud services for users. HCM enables IT to design, deploy, manage and govern the full range of hybrid re-source services, from simple images through architected, tiered environments. HCM flexibly automates the deployment of production-ready deployments, along with Day Two life-cycle actions. HCM enables IT to maximize efficiency by orchestrating repetitive IT pro-cesses via integrations and a massive content library. HCM helps bring visibility and gover-nance to public cloud spending across large organizations. Finally HCM helps automate the Operations side of DevOps, providing on-demand access to resources.Aggregate public cloud services or use VM w are templates as building-block components for service designs. Create complex service de-signs to run on any cloud with the drag and drop designer using components for containers, VMs, databases, networking, and middleware. Orchestrate any process or automation tool with the industry’s most powerful orchestrator and content library. Design ‘drag-and-drop’ or ‘infrastructure as code’ orchestration flows to orchestrate automation tools, integrate with any vendor technology, or automate any task in the datacenter on applications and infra-structure. Use the integrated CI/CD Application Release Au t o m ation pipeline to continuously deliver applications and infrastructure with customizable stage gate actions such as ap-provals, security scans, execution of scripts, or deployment of infrastructure. Publish any service design to the multi-tenant consumerData SheetIT Operations ManagementAcross any technologyCloud assessmentand migrationITOM Platform deployment optionsPhysicalVirtualCloudContainerService delivery &orchestrationCloud brokering and governanceEnd-to-end IT process orchestrationAdd-OnFigure 1. Orchestrate IT processes to design, deliver and manage hybrid IT servicesFigure 2. Customizable resource dashboard shows deployment and subscription information across the hybrid-cloud infrastructureData SheetHybrid Cloud Managementmarket place portal or select them as deploy-ment stage gate actions in the ARA pipeline. Key FeaturesAdaptive Service DesignsDesign hybrid-cloud service designs with the drag and drop designer. Deploy and manage applications, and infrastructure on any plat-form—public or private. Create designs from simple infrastructure offerings to complex, hybrid multi-tier designs with on-premise, cloud, and container components. Services can be designed once and deployed to any cloud. These designs can be used as part of the Application Release Orchestration pipe-line or published to multi-tenant organization consumer catalogs. As part of the service design, Administrators are able to define con-sumer modifiable properties, such as size of instances or deployment location.Service Aggregation, Brokeringwith Cost GovernanceAggregate services from public cloud pro-viders such as Amazon or Azure—or use the industry’s first and only solution to aggregate VMware image templates. Configure these providers and use the brokering feature to browse all available services. 
Compare prices by region, or provider. Select, create offerings, and publish best-fit services to multi-tenant organization consumer catalogs or use in the ARO pipeline. Administrators are able to track and manage subscription usage, resource consumption and public cloud spend with governance policies.Powerful Self-Service PortalAggregated public and private cloud services, or services designed with the service designer are published to catalogs which are assigned to organizations. Organizations can be config-ured to integrate with LDAP services. Users across your organization can browse and subscribe to the catalog services administra-tors have published. At checkout, consumers select configuration options based on theservice design properties Administratorshave made available. Once a subscription ismade, consumers are able to manage theirown subscriptions, access consoles, or viewresource performance statistics for servicesin the stack. Administrators have visibility intoall subscriptions, and resources consumedwith key features like cloud spend reporting,predictive capacity modeling, resource con-sumption with right-sizing recommendations,and subscription owner information—acrossall organizations.Built-In Application Release OrchestrationEnable DevOps and continuous delivery withbuilt-in, fully customizable, automated stagegates with customized conditional gate ac-tions. Empower development and testingteams to subscribe to required platform ser-vices as needed—straight from the releasepipeline. T rack service usage and costs acrossapplications in development, testing and pro-duction environments. Integrate the applica-tion release pipeline with Fortify Static CodeAna l yzer to identify security vulnerabilities inyour source code early in the software devel-opment lifecycle. The HCM ARA pipeline canbe integrated with Serena release control. Planlarge scale releases with Serena and use HCMARA to perform the CICD actions.Workload and Cost AnalyticsOptimize workload placement, and continu-ously improve your cloud service deliverythrough the use of cloud analytics, capacityplanning and showback reporting.Master-Level OrchestratorOrchestrate complete IT actions and pro-cesses across silos including the direction ofthird party automation and orchestration tools.Automate IT processes easily with the intui-tive workflow designer and execution engine.Accelerate development and enable infra-structure as code with text authoring.Out-of-Box Integrations and Open APIsLeverage the extensive content library of over8000 out-of-box operations and workflows.Access the “app store style” library to consumethe latest content packs. Use wizards and openAPIs to quickly create custom integrations.Database and Middlware AutomationProvide DBaaS (database as a service), PaaS(platform as a service), and XaaS (anything asa service). Out-of-box content packs provideworkflows and operations that you can includein your service designs and publish in your cat-alog to automatically provision and configuredatabases and middleware. This built-in intel-ligence is based on industry standards, vendorbest practices, and real-world experience.Automation for SAP HANASAP-focused content accelerates service de-livery and orchestration in support of SAP in-stal l ations. 
Au t o m ate key SAP administration,maintenance, provisioning, and daily processes.Modern Cloud-Native ArchitectureMinimize implementation and upgrade effortswith pre-integrated, containerized compo-nents based on open-source Docker contain-ers and Kubernetes technologies. Deploy theHCM suite quickly, and easily scale out as nec-essary. Get access to new features frequentlywith quarterly updates that are easy to apply.Add PlateSpin® for Workload MigrationSafely migrate complex workloads from any-where-to-anywhere with least amount of riskand cost. Automate testing to ensure a suc-cessful migration with near-zero downtime atcutover. A highly scalable solution—migratebetween multiple physical, virtual, and cloudservers rapidly and reliably.Key BenefitsAccelerate Time to MarketAccelerate delivery of hybrid IT services byreducing manual, error-prone tasks. Improvespeed and agility by orchestrating processes across domains, systems, and teams. Services that used to take days and weeks to deliver can now be available in hours or minutes which will ultimately accelerate your release process. Improve Efficiency and Productivity Leverage unified management of multiple clouds, environments and technologies for faster, more efficient delivery of infrastructure and platform services. Orchestrate IT proces-ses across IT silos to reduce errors and in-crease productivity.Increase Investment in Innovation Allocate more budget and resources to in-novation. Developers can spend more time writing code and less time requesting, waiting for, or configuring environments and trouble-shooting deployment issues. QA teams canspend more time testing and less time tryingto find and configure test environments. AndIT teams can focus on innovation rather than troubleshooting.Flexible Resource Automationfrom Adaptive Service Designsand a Master OrchestratorStreamline user interaction with IT with acentralized, self-service portal designed toenhance the user experience. Create flexible,attribute-based catalog offerings that accom-modate variations in a single catalog entrywhich decreases the number of services inthe catalog and simplifies both the user andadministrator experience.Learn more at/hybridcloudFigure 3.Adaptive multi-tier applicationservice designshown in theservice designerFigure 4. AggregatedAWS, Azure, and VMwaretemplates shown in thecloud brokering screenFigure 5. Applicationshown in the ApplicationRelease OrchestrationCI/CD pipelineHCM supports Vertica version 9.0.1 for report-ing and analytics.The Vertica version included with HCM is quali-fied with the following operating systems:■Red Hat Enterprise Linux 7.3 ■CentOS 7.3Like what you read? Share it.C ontinuous Deployment Y esOperating System Version PlatformR ed Hat EnterpriseLinux7.2, 7.3, 7.4 x86-64 C entOS7.2, 7.3, 7.4 x86-64 O racle Linux7.3 x86-64M asternodesRAM24 GB 32 GB Processor16 cores 16 cores Free disk space 150 GB (not including space forthe NFS server) 150 GB (not including space forthe NFS server) W orkernodesRAM32 GB32 GBProcessor 16 cores16 coresF ree disk space150 GB 150 GBDatabase VersionM icrosoft SQL Database 2012, 2012 Cluster, 2014, 2016O racle Database12c R1 Standard Edition, 12c R1 Enterprise Edi-tion, 12c R1 RAC, 12c R2 RAC E xternal PostgreSQLDatabaseA dd-OnItem R ecommendedRequirements R AM 16 GB P rocessor 8 cores F ree disk space 150 GBFigure 6. Platform hardware sizingFigure 7. Hybrid Cloud Management editions support key use casesFigure 8. Supported operating systemsFigure 9. Supported databasesFigure 10. 
NFS server sizing。
A Point Cloud-Vision Hybrid Approach for 3D LocationTracking of Mobile Construction AssetsY. Fang a, J. Chen b, Y. K. Cho a, and P. Zhang ca School of Civil and Environmental Engineering, Georgia Institute of Technologyb School of Electrical and Computer Engineering, Robotics Institute, Georgia Institute of Technologyc Department of Construction Management, Tsinghua UniversityE-mail: yihaifang@, jchen490@, yong.cho@, zhangpy14@Abstract –Modeling as-is site condition and tracking the three-dimensional (3D) location of mobile assets (e.g., worker, equipment, material) are essential for various construction applications such as progress monitoring, quality control and safety management. Many efforts have been dedicated to vision-based technologies due to their merits in cost-effectiveness and light infrastructure compared to real-time location systems (RTLS). However, a major challenge of vision-based tracking is that it lacks 3D information and thus the results are sensitive to occlusion, illumination conditions and scale variation. To address this problem, this study presents a point cloud-vision hybrid approach to reconstruct and update the area of interest in 3D for scene updating and mobile asset tracking. Baseline 3D geometry information in point cloud is obtained at the start by Structure from Motion (SfM) using Unmanned Aerial Vehicle (UAV), given which mobile and static assets present in the scene are recognized and labeled. Based on 2D aerial isometric images capture by the UAV, labeled assets are automatically recognized and their locations are updated. The proposed approach was implemented in a field test and the results demonstrate that it was able to reconstruct the site and update the location of mobile assets accurately and reliably. Findings in this study indicate the proposed hybrid approach effectively augments the state-of-the-art in site modeling and asset tracking in construction.Keywords –Point cloud-vision hybrid approach; Mobile asset tracking; Structure from Motion (SfM); Unmanned Aerial Vehicle (UAV)1IntroductionThree-dimensional (3D) location information of construction assets are of great interest to various engineering and management applications including progress monitoring, quality control, operation analysis, safety monitoring and occupational health assessments. Although much research efforts have been dedicated to investigating the merits of Real-time Location System (RTLS) and vision-based methods, limitations in cost-effectiveness, ease of use, and robustness greatly hinder their field deployment. This study focuses on addressing the challenges in obtaining 3D location data of construction assets through mobile camera systems.Traditional vision-based tracking methods recognize and track the objects of interest from images or video streams captured by cameras at known locations and angles. Such settings can be easily achieved by installing cameras at multiple locations on a construction site or taking advantage of the existing surveillance camera systems. However, cameras at a fixed location inevitably suffer from massive occlusions introduced by ever-changing site conditions such as structure elements, temporary structures, and equipment. In addition, although 3D location can be computed based on the 2D image captured by two cameras setup at known locations [1], this method requires the tracked objects to be present on both images and not fully occluded. 
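The two-camera setup discussed above can be made concrete with a short sketch (not part of the original paper; OpenCV-based, with made-up camera parameters): given the projection matrices of two fixed, calibrated cameras and the pixel coordinates of the same object in both views, its 3D position follows from linear triangulation.

```python
import numpy as np
import cv2

def triangulate(P1, P2, uv1, uv2):
    """Triangulate one 3D point from its pixel coordinates in two calibrated views.

    P1, P2   -- 3x4 projection matrices K[R|t] of the two fixed cameras.
    uv1, uv2 -- (u, v) pixel coordinates of the same object in each image.
    Returns the 3D point in the world frame.
    """
    pts1 = np.asarray(uv1, dtype=float).reshape(2, 1)
    pts2 = np.asarray(uv2, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4x1 homogeneous coordinates
    return (X_h[:3] / X_h[3]).ravel()                # de-homogenise -> (X, Y, Z)

# Hypothetical example: two identical cameras, the second shifted 5 m along X
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])
print(triangulate(P1, P2, (1010.0, 560.0), (730.0, 560.0)))
```

The sketch also makes the occlusion constraint explicit: without a pixel measurement of the object in both images, there are no rays to intersect.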
Therefore, continuously tracking the 3D location of construction assets is not always practical at many construction sites with cameras at fixed locations. A mobile camera system on an aerial platform such as an Unmanned Aerial Vehicles (UAV) is considered a promising alternative. Recently, UAV technology has drawn much attention from the construction industry for its potential in various applications including maintenance inspection, construction survey, and safety management. Nevertheless, 3D location tracking using a mobile camera faces several major challenges, including estimating the position and orientation of the on-board camera, and transforming the pixel coordinates of recognized objects from the camera frame to the global frame.To address these challenges, this paper proposes a point cloud-vision hybrid approach for 3D site reconstruction and mobile asset tracking. This paper first reviews current practices in construction asset tracking and the state-of-the-art Structure from Motion (SfM)technology. Then, the point cloud-vision hybrid approach is introduced by a flowchart and the details of the techniques and algorithms used in each step. Results from a case study implementing the proposed method forvehicle location tracking are presented followed by discussion and conclusions. 2 Related Work2.1 State-of-the-art Construction Asset Tracking Methods The location information of construction assets such as workers, equipment, and materials is of interest to various construction applications including progress monitoring, quality control, operation analysis, safety monitoring and occupational health assessments. Much efforts have been dedicated to real-time location systems (RTLS) such as Global Positioning System (GPS) [2], Radio Frequency Identification (RFID) [3], and Ultra-wide Band (UWB) [4]. Although varying in tracking accuracy (e.g., meters for GPS and centimeters for UWB), the RTLS technologies provide direct measurement of the 3D location of the tracked objects. However, most RTLS systems require tagging the objects to be tracked,which increases the complexity in applications. In addition to the tags, high-accuracy RTLS such as UWB requires heavy infrastructure deployment, for example the installation of a series of antennas around the tracking site. This results in huge investment in time and cost ($140/m2).Another research direction for construction asset tracking focuses on computer vision technologies that recognize and track the objects of interest from 2D images or video streams captured by cameras. Compared to RTLS-based tracking methods, vision-based tracking does not require additional sensors and tags and thus has the advantages of simple deployment and low cost [5]. Various vision-based tracking algorithms have been studied and tested in construction scenarios. Generally, tracking algorithms can be categorized into kernel-based [6], contour-based [7], and point-based [8] methods. Different in the means to represent object, contour-based methods use contours or silhouettes that enclose theobject region, kernel-based methods use the responses of the object region to selected kernels, and point-basedmethods use a set of feature points detected in the objectregion [9]. Compared to the other two methods, the point-based method is more robust to illumination variation andocclusions, which commonly occur in outdoorconstruction environment. 
Although the images from asingle camera provide only 2D pixel coordinates, imagesfrom multiple cameras at multiple known and fixedlocations provide 3D metric coordinates through camera calibration, pose estimation, and triangulation [1]. 2.2 Obtaining 3D Information by Photogrammetry and Structure fromMotion (SfM)Photogrammetry is an image-based technology thatreconstructs 3D objects from 2D photographs. This technology extracts 2D input data from photographs and maps them onto a 3D space. Since constructing a 3Dmodel only requires taking images from different angles,using photogrammetry for 3D data acquisition is flexible, cost-effective, and non-invasive to the survey objects. Structure from Motion (SfM) photogrammetry is an emerging technique that was built upon but fundamentally differs from traditional photogrammetry. In SfM approach, the critical parameters such as camera location/orientation and scene geometry are automatically computed without the need of a series of targets with known locations [10]. Instead, these parameters are computed simultaneously using a highly redundant, iterative bundle adjustment procedure, based on a database of features automatically extracted from a set of multiple overlapping images [11].The results of SfM or traditional photogrammetry are represented by a dense point cloud comprised of millions of points, each of which contains 3D position (XYZ) and color (RGB) data. The point cloud data is useful in various construction applications such as acquiring as-is geometry data for building component modeling [12],construction progress monitoring and control [13] [14], and construction documentation especially for historical structures [15]. Although the point cloud contains comprehensive 3D geometric data of the objects in the scene, it has been a challenge to track objects using a SfM-generated point cloud. This is mainly because in SfM technique, generating a point cloud from a large amount of images takes time, as it requires massive computation capability. Therefore, noticeable delay between the actual and tracked location makes it impractical to track dynamic objects on construction sites using SfM method alone. 2.3 Alternative Monitoring Method usingUnmanned Aerial Vehicles (UAV) Many vision-based tracking methods are based on theassumption of using the images captured by cameras atfixed and known locations. This is convenient sincemany construction sites are equipped with surveillancecamera systems. However, construction sites are usuallyvery congested, which makes it impossible for staticcameras to constantly maintain a clear line-of-sight to theobjects to be tracked without the occlusions from structure elements, equipment, and materials present on the site. Recent development of Unmanned Aerial Vehicles (UAV) offers a low-cost alternative for construction monitoring applications such as bridge and road assessment [16] [17], earthwork surveying [18], and safety inspection [19] . Compared to fixed site cameras, the onboard camera on a UAV is more flexible in image capture angles and thus less prone to occlusions. It should be noticed that a major limitation of UAV-based monitoring is the limited time for a single flight (around 30 minutes) due to the battery life.3Point cloud-vision Hybrid Approach To address the challenges in vision-based tracking using mobile cameras, this study proposes a point cloud-vision hybrid approach for tracking of mobile construction assets using SfM and UAV technologies. 
The flowchart for the proposed hybrid approach is shown in Figure 1.Figure 1: Flowchart for the proposed point cloud-vision hybrid approachThe first step involves collecting aerial images of a construction site from a UAV at multiple viewpoints. The construction site images are highly-overlapping and encircle the site in order to cover the full 3D structure of construction-related entities. Next, the 3D point cloud of the construction site (see Figure 2a) is generated based on the image data using a Structure from Motion (SfM) algorithm adopted from [11]. This algorithm detects common features across each camera frames using Scale Invariant Feature Transform (SIFT). The process works by finding point correspondences between images and solving for point coordinates and camera poses in a bundle adjustment procedure. This establishes a baseline3D model of the construction site which includes both background elements and the mobile assets. The mobile assets located in the point cloud are separated out using a segmentation and clustering routine. Ground segmentation is first applied to filter out points belongingto the ground which is considered as background. Next, individual point cloud clusters are separated based on neighboring Euclidean distance. Point cloud clusters for objects to be tracked are further identified through a supervised procedure where the user selects a set of clusters corresponding to the interested objects. Bounding boxes are calculated for each identified point cloud cluster which acts as a compact representation of the object.A series of images is then collected across time from the UAV to specifically track targeted mobile assets. The image data can be either in the form of a video feed or discrete images taken at specific timestamps. Comparedto fixed camera setups for object tracking, here it is necessary to solve for the image-specific position and orientation of the camera since the UAV is moving from frame to frame. This is formulated as a perspective-n-point (PnP) problem to recover the complete 6 degree-of-freedom motion (i.e., x, y, z coordinates and yaw, pitch, roll rotations) of the UAV based on the input images. First, an image is synthetically generated from the point cloud projected onto a two-dimensional plane (Figure 2a). Feature points can be calculated from the synthetic image that can be matched to feature points derived from the UAV images. Then a depth buffer is created based on the3D point cloud data collected from the photogrammetry step. The depth buffer is shown in Figure 2b, where bright points indicate points that are close to the camera while darker points indicate points that are further away from the camera. This enables us to calculate the 3D position of each feature point on the synthetic image. The UAV images are then matched to the synthetic image to obtain a corresponding set of 3D point features. Finally,a camera pose estimate is calculated for each UAV image which minimizes the least-squares re-projection error of3D point features in the image.Figure 2: (a) synthetic image and (b) depth buffer generated from point cloudThe next step in the processing pipeline is object detection. For each image taken from the UAV in the previous step, the pixel coordinates of objects to be tracked are identified through a point-based method, namely matching of SIFT feature points. This process is semi-automated by the user specifying interested objects in the reference image. 
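As an illustration of the pose-recovery step just described (a sketch under assumed variable names, not the authors' implementation), the fragment below matches SIFT features between a UAV frame and the synthetic image rendered from the point cloud, looks up the 3D world coordinates of the matched synthetic pixels through the depth buffer, and solves the resulting perspective-n-point problem robustly with RANSAC.

```python
import numpy as np
import cv2

def estimate_uav_pose(uav_img, synth_img, synth_xyz, K):
    """Estimate the UAV camera pose for one frame.

    uav_img   -- grayscale UAV image.
    synth_img -- grayscale image rendered from the site point cloud.
    synth_xyz -- HxWx3 array with the 3D world point behind each synthetic
                 pixel (built from the depth buffer); NaN where empty.
    K         -- 3x3 intrinsic matrix of the UAV camera.
    Returns (R, t) of the world-to-camera transform.
    """
    sift = cv2.SIFT_create()
    kp_u, des_u = sift.detectAndCompute(uav_img, None)
    kp_s, des_s = sift.detectAndCompute(synth_img, None)

    # 2-NN matching with Lowe's ratio test to keep only distinctive matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [p[0] for p in matcher.knnMatch(des_u, des_s, k=2)
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

    img_pts, obj_pts = [], []
    for m in good:
        u, v = kp_s[m.trainIdx].pt                   # pixel in the synthetic image
        X = synth_xyz[int(round(v)), int(round(u))]  # 3D point behind that pixel
        if not np.any(np.isnan(X)):
            img_pts.append(kp_u[m.queryIdx].pt)      # matching pixel in the UAV image
            obj_pts.append(X)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(obj_pts, dtype=np.float32),
        np.asarray(img_pts, dtype=np.float32),
        K, None, reprojectionError=3.0)
    if not ok:
        raise RuntimeError("pose estimation failed: too few consistent matches")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```

A PnP solution needs at least four well-spread 2D-3D correspondences; in practice many more are collected so that RANSAC can reject mismatched features through the re-projection error threshold.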
As shown in Figure 3, bounding boxes are drawn around the two vehicles to be tracked in the reference image (left image), while the same objects are detected in the tracking image through feature point matching (right image). The process can potentially be fully automated by having a database of possible objects to be tracked or by training an object detection classifier.

In the last step, the location of each detected object in global coordinates is calculated based on the recovered camera pose and its image coordinates. A ray casting method is used, where the object location is determined by the intersection of the point cloud surface with a line formed by an image projection vector originating from the camera. For each detected object in the image, a corresponding projection vector is determined based on its pixel coordinates and camera parameters such as focal length and image size. Figure 4 shows the projection of detected objects from image coordinates to 3D space. Successfully matched objects have their bounding boxes updated in the point cloud based on the estimated 3D location.

Figure 3: Object detection using feature point matching with a reference image, (a) objects of interest, (b) feature points identified

Figure 4: Projection of image coordinates into 3D space from camera origin

4 Case Study and Preliminary Results

An ongoing construction project was selected as a case study of the proposed method. Two vehicles (i.e., a concrete mixer truck and a minivan) were chosen as the targeted mobile assets to be tracked. Both vehicles moved in random patterns in an area of 40 m by 20 m. This case study employed an 8-axis UAV (octocopter) equipped with a mirrorless digital camera. In total, 169 images at a resolution of 4912 x 3264 were used to generate the 3D point cloud of the site. Table 1 shows the computation time involved in each step of the proposed pipeline. The step of SfM computation and point cloud generation takes the longest amount of time but only needs to be carried out once at the start of the experiment. On the other hand, the tracking and location updating step involves a trade-off between accuracy and computation time. Using high-resolution images for tracking will potentially improve the tracking accuracy since more feature points can be detected, but this will also increase the computation time.

Figure 5: Results of 3D location tracking of two vehicles: image captured from UAV (left) and updated 3D site model (right)

Table 1: Preparation and computation time
Activities | Time
Preparation | ~20 min
UAV flight | ~3.5 min
SfM computation/Point cloud generation | ~150 min
Tracking image by UAV | Up to 30 min
3D location computation | 3 min

Figure 5 shows the results obtained from tracking the two vehicles over time using images captured from the UAV. The left column shows the time at which each image is taken, whereas the middle column shows the image captured by the camera. The images are annotated with feature points for each detected object. The right column shows the updated 3D site model (point cloud) corresponding to each captured image. The 3D site model is generated using a top-down view of the site 3D point cloud with bounding boxes formed around each tracked object.

Results from the case study indicate that the proposed method dynamically updates the 3D location of two vehicles on a construction site by using the images captured from a UAV and matching them to a 3D site model in the form of a point cloud.
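To make the localization step of Section 3 reproducible in simplified form, the sketch below back-projects a detected object's pixel coordinates along the camera ray. The paper intersects this ray with the reconstructed point cloud surface, whereas this sketch approximates that surface with a horizontal ground plane at a known elevation, so it only illustrates the geometry; all numeric values are made up.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t, z0=0.0):
    """Back-project pixel (u, v) of a detected object to world coordinates.

    K    -- 3x3 camera intrinsics.
    R, t -- world-to-camera rotation and translation (e.g. from solvePnP),
            so the camera centre in the world frame is C = -R.T @ t.
    z0   -- assumed ground elevation where the ray is intersected.
    """
    t = np.asarray(t, dtype=float).ravel()
    C = -R.T @ t                                          # camera centre in world frame
    d = R.T @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # ray direction in world frame
    s = (z0 - C[2]) / d[2]                                # scale so the ray reaches z = z0
    return C + s * d                                      # 3D location of the object

# Hypothetical example: nadir-looking camera hovering 30 m above the site origin
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])          # world z-up mapped onto the camera's -z axis
t = -R @ np.array([0.0, 0.0, 30.0])
print(pixel_to_ground(1100.0, 600.0, K, R, t))   # roughly [3.5, -1.5, 0.0]
```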
The first four and the last images (time: 0 s to 44 s, and 144 s) show the cases of successful tracking, where the bounding boxes for the two vehicles are shown in green. The results at time 73 s and 99 s show a sequence in which the matching algorithm lost track of a vehicle due to insufficient feature points. The corresponding bounding box is drawn based on the previous location estimate but is highlighted in red to indicate uncertainty in location.

5 Discussion

In this study, the tracked objects are limited to large objects exhibiting smooth linear motion, such as vehicles. Smaller objects that are prone to occlusion cause difficulty in the feature point matching stage. The method is also limited in terms of automation, in the sense that not all dynamic assets are automatically tracked and updated in the 3D point cloud. Instead, the user manually specifies a fixed number of objects to be tracked in the 3D scene update.

In terms of accuracy in location estimation, the method relies on accurately calculating the UAV camera position and orientation for each captured image and correctly identifying the tracked object in each image. The camera pose is derived through a least-squares estimation scheme and is negatively affected by the presence of outliers. In the matching stage with a reference image, outlier points have to be rejected by threshold elimination considering the re-projection error in the image coordinate frame. In the object detection stage, the pixel coordinates of tracked objects can also be incorrectly identified when there exist feature points that are incorrectly matched. Thus, the matching algorithm needs to ensure that a sufficient number of feature points can be identified for each object and that there is geometric consistency among the matched feature points for each object from one frame to the next.

6 CONCLUSION

To address the challenges in 3D location tracking using a mobile camera, this paper proposes a point cloud-vision hybrid approach for 3D site reconstruction and mobile asset tracking. The method involves a processing pipeline with the steps of point cloud generation, camera pose estimation, object detection, and object localization. The method updates the baseline 3D scene in the form of a point cloud with dynamic bounding boxes for each tracked vehicle, which can be further utilized in site management applications. Preliminary results from a case study show that the proposed method was able to successfully track the 3D location of two vehicles on a construction site by using images captured from a UAV and matching them to a 3D site model in the form of a point cloud. Despite the limitations of the semi-automated processing pipeline and the limited UAV flight time, the proposed point cloud-vision hybrid approach enabled by SfM and UAV technology shows advantages in flexibility and robustness to occlusions over traditional methods using fixed cameras. Findings in this study indicate great potential of the proposed method for 3D location tracking of mobile construction assets in congested construction environments.

The proposed method involves limitations in terms of the number and size of objects that can be tracked. Having a large number of tracked objects or having target objects that are too small will complicate the feature point matching process and reduce the localization accuracy.
For future work, the authors would like to experiment with alternative vision tracking methods such as kernel-based and contour-based methods to investigate whether those methods will improve the localization accuracy.7ACKNOWLEDGEMENTThis material is based upon work supported by the National Science Foundation (Award #: CMMI- 1358176). Any opinions, findings, and conclusions or recommendations expressed on this material are those of the authors and do not necessarily reflect the views of the NSF. References[1] M.-W. Park, C. Koch, and I. Brilakis, “Three-Dimensional Tracking of Construction ResourcesUsing an On-Site Camera System,” J. Comput. Civ.Eng., vol. 26, no. 4, pp. 541–549, 2012.[2] N. Pradhananga and J. Teizer, “Automatic spatio-temporal analysis of construction site equipmentoperations using GPS data,” Autom. Constr., vol. 29,pp. 107–122, 2013.[3] Y. Fang, Y. K. Cho, S. Zhang, and E. Perez, “CaseStudy of BIM and Cloud–Enabled Real-Time RFIDIndoor Localization for Construction ManagementApplications,” J. Constr. Eng. Manag., pp. 1–12,2016.[4] Y. K. Cho, J. H. Youn, and D. Martinez, “Errormodeling for an untethered ultra-wideband systemfor construction indoor asset tracking,” Autom.Constr., vol. 19, no. 1, pp. 43–54, 2010.[5] J. Yang, M.-W. Park, P. a. Vela, and M. Golparvar-Fard, “Construction performance monitoring via stillimages, time-lapse photos, and video streams: Now,tomorrow, and the future,” Adv. Eng. Informatics,vol. 29, no. 2, pp. 211–224, 2015.[6] E. Maggio and A. Cavallaro, “Accurate appearance-based Bayesian tracking for maneuvering targets,”Comput. Vis. Image Underst., vol. 113, no. 4, pp.544–555, 2009.[7] M. Yokoyama and T. Poggio, “A contour-basedmoving object detection and tracking,” 2005 IEEEInt. Work. Vis. Surveill. Perform. Eval. Track.Surveill., no. 1, pp. 271–276, 2005.[8] T. Mathes and J. Piater, “Robust non-rigid objecttracking using point distribution manifolds,” PatternRecognit., pp. 515–524, 2006.[9] I. Brilakis, M.-W. Park, and G. Jog, “Automatedvision tracking of project related entities,” Adv. Eng.Informatics, vol. 25, no. 4, pp. 713–724, Oct. 2011. [10] M. J. Westoby, J. Brasington, N. F. Glasser, M. J.Hambrey, and J. M. Reynolds, “‘Structure-from-Motion’ photogrammetry: A low-cost, effective toolfor geoscience applications,” Geomorphology, vol.179, pp. 300–314, 2012.[11] N. Snavely, S. M. Seitz, and R. Szeliski, “Modelingthe world from Internet photo collections,” Int. J.Comput. Vis., vol. 80, no. 2, pp. 189–210, 2008. [12] F. Dai and M. Lu, “Assessing the Accuracy ofApplying Photogrammetry to Take Geometric Measurements on Building Products,” J. Constr. Eng.Manag., vol. 136, no. February, pp. 242–250, 2010. [13] S. El-Omari and O. Moselhi, “Integrating 3D laserscanning and photogrammetry for progress measurement of construction work,” Autom. Constr.,vol. 18, no. 1, pp. 1–9, 2008.[14] M. Golparvar-Fard, J. Bohn, J. Teizer, S. Savarese,and F. Peña-Mora, “Evaluation of image-basedmodeling and laser scanning accuracy for emergingautomated performance monitoring techniques,”Autom. Constr., vol. 20, no. 8, pp. 1143–1155, 2011.[15] N. Yastikli, “Documentation of cultural heritageusing digital photogrammetry and laser scanning,” J.Cult. Herit., vol. 8, no. 4, pp. 423–427, 2007. [16] N. Metni and T. Hamel, “A UAV for bridgeinspection: Visual servoing control law with orientation limits,” Autom. Constr., vol. 17, no. 1, pp.3–10, 2007.[17] S. Rathinam, Z. W. Kim, and R. 
Sengupta, “Vision-Based Monitoring of Locally Linear Structures Using an Unmanned Aerial Vehicle,” J. Infrastruct. Syst., vol. 14, no. 1, pp. 52–63, 2008.
[18] S. Siebert and J. Teizer, “Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system,” Autom. Constr., vol. 41, pp. 1–14, 2014.
[19] J. Irizarry, M. Gheisari, and B. N. Walker, “Usability assessment of drone technology as safety inspection tools,” Electron. J. Inf. Technol. Constr., vol. 17, no. September, pp. 194–212, 2012.