Vision Based Object Identification and Tracking for Mobile Robot
- Format: PDF
- Size: 670.98 KB
- Pages: 5
International Journal of Automation and Computing (English edition). Contents:

1. Improved Exponential Stability Criteria for Uncertain Neutral System with Nonlinear Parameter Perturbations (Fang Qiu, Ban-Tong Cui)
2. Robust Active Suspension Design Subject to Vehicle Inertial Parameter Variations (Hai-Ping Du, Nong Zhang)
3. Delay-dependent Non-fragile H∞ Filtering for Uncertain Fuzzy Systems Based on Switching Fuzzy Model and Piecewise Lyapunov Function (Zhi-Le Xia, Jun-Min Li, Jiang-Rong Li)
4. Observer-based Adaptive Iterative Learning Control for Nonlinear Systems with Time-varying Delays (Wei-Sheng Chen, Rui-Hong Li, Jing Li)
5. H∞ Output Feedback Control for Stochastic Systems with Mode-dependent Time-varying Delays and Markovian Jump Parameters (Xu-Dong Zhao, Qing-Shuang Zeng)
6. Delay and Its Time-derivative Dependent Robust Stability of Uncertain Neutral Systems with Saturating Actuators (Fatima El Haoussi, El Houssaine Tissir)
7. Parallel Fuzzy P + Fuzzy I + Fuzzy D Controller: Design and Performance Evaluation (Vineet Kumar, A. P. Mittal)
8. Observers for Descriptor Systems with Slope-restricted Nonlinearities (Lin-Na Zhou, Chun-Yu Yang, Qing-Ling Zhang)
9. Parameterized Solution to a Class of Sylvester Matrix Equations (Yu-Peng Qiao, Hong-Sheng Qi, Dai-Zhan Cheng)
10. Indirect Adaptive Fuzzy and Impulsive Control of Nonlinear Systems (Hai-Bo Jiang)
11. Robust Fuzzy Tracking Control for Nonlinear Networked Control Systems with Integral Quadratic Constraints (Zhi-Sheng Chen, Yong He, Min Wu)
12. A Power- and Coverage-aware Clustering Scheme for Wireless Sensor Networks (Liang Xue, Xin-Ping Guan, Zhi-Xin Liu, Qing-Chao Zheng)
13. Guaranteed Cost Active Fault-tolerant Control of Networked Control System with Packet Dropout and Transmission Delay (Xiao-Yuan Luo, Mei-Jie Shang, Cai-Lian Chen, Xin-Ping Guan)
14. Comparison of Two Novel MRAS Based Strategies for Identifying Parameters in Permanent Magnet Synchronous Motors (Kan Liu, Qiao Zhang, Zi-Qiang Zhu, Jing Zhang, An-Wen Shen, Paul Stewart)
15. Modeling and Analysis of Scheduling for Distributed Real-time Embedded Systems (Hai-Tao Zhang, Gui-Fang Wu)
16. Passive Steganalysis Based on Higher Order Image Statistics of Curvelet Transform (S. Geetha, Siva S. Sivatha Sindhu, N. Kamaraj)
17. Movement Invariants-based Algorithm for Medical Image Tilt Correction (Mei-Sen Pan, Jing-Tian Tang, Xiao-Li Yang)
18. Target Tracking and Obstacle Avoidance for Multi-agent Systems (Jing Yan, Xin-Ping Guan, Fu-Xiao Tan)
19. Automatic Generation of Optimally Rigid Formations Using Decentralized Methods (Rui Ren, Yu-Yan Zhang, Xiao-Yuan Luo, Shao-Bao Li)
20. Semi-blind Adaptive Beamforming for High-throughput Quadrature Amplitude Modulation Systems (Sheng Chen, Wang Yao, Lajos Hanzo)
21. Throughput Analysis of IEEE 802.11 Multirate WLANs with Collision Aware Rate Adaptation Algorithm (Dhanasekaran Senthilkumar, A. Krishnan)
22. Innovative Product Design Based on Customer Requirement Weight Calculation Model (Chen-Guang Guo, Yong-Xian Liu, Shou-Ming Hou, Wei Wang)
23. A Service Composition Approach Based on Sequence Mining for Migrating E-learning Legacy System to SOA (Zhuo Zhang, Dong-Dai Zhou, Hong-Ji Yang, Shao-Chun Zhong)
24. Modeling of Agile Intelligent Manufacturing-oriented Production Scheduling System (Zhong-Qi Sheng, Chang-Ping Tang, Ci-Xing Lv)
25. Estimation of Reliability and Cost Relationship for Architecture-based Software (Hui Guan, Wei-Ru Chen, Ning Huang, Hong-Ji Yang)

1. A Computer-aided Design System for Framed-mould in Autoclave Processing (Tian-Guo Jin, Feng-Yang Bi)
2. Wear State Recognition of Drills Based on K-means Cluster and Radial Basis Function Neural Network (Xu Yang)
3. The Knee Joint Design and Control of Above-knee Intelligent Bionic Leg Based on Magneto-rheological Damper (Hua-Long Xie, Ze-Zhong Liang, Fei Li, Li-Xin Guo)
4. Modeling of Pneumatic Muscle with Shape Memory Alloy and Braided Sleeve (Bin-Rui Wang, Ying-Lian Jin, Dong Wei)
5. Extended Object Model for Product Configuration Design (Zhi-Wei Xu, Ze-Zhong Liang, Zhong-Qi Sheng)
6. Analysis of Sheet Metal Extrusion Process Using Finite Element Method (Xin-Cun Zhuang, Hua Xiang, Zhen Zhao)
7. Implementation of Enterprises' Interoperation Based on Ontology (Xiao-Feng Di, Yu-Shun Fan)
8. Path Planning Approach in Unknown Environment (Ting-Kai Wang, Quan Dang, Pei-Yuan Pan)
9. Sliding Mode Variable Structure Control for Visual Servoing System (Fei Li, Hua-Long Xie)
10. Correlation of Direct Piezoelectric Effect on EAPap under Ambient Factors (Li-Jie Zhao, Chang-Ping Tang, Peng Gong)
11. XML-based Data Processing in Network Supported Collaborative Design (Qi Wang, Zhong-Wei Ren, Zhong-Feng Guo)
12. Production Management Modelling Based on MAS (Li He, Zheng-Hao Wang, Ke-Long Zhang)
13. Experimental Tests of Autonomous Ground Vehicles with Preview (Cunjia Liu, Wen-Hua Chen, John Andrews)
14. Modelling and Remote Control of an Excavator (Yang Liu, Mohammad Shahidul Hasan, Hong-Nian Yu)
15. TOPSIS with Belief Structure for Group Belief Multiple Criteria Decision Making (Jiang Jiang, Ying-Wu Chen, Da-Wei Tang, Yu-Wang Chen)
16. Video Analysis Based on Volumetric Event Detection (Jing Wang, Zhi-Jie Xu)
17. Improving Decision Tree Performance by Exception Handling (Appavu Alias Balamurugan Subramanian, S. Pramala, B. Rajalakshmi, Ramasamy Rajaram)
18. Robustness Analysis of Discrete-time Indirect Model Reference Adaptive Control with Normalized Adaptive Laws (Qing-Zheng Gao, Xue-Jun Xie)
19. A Novel Lifecycle Model for Web-based Application Development in Small and Medium Enterprises (Wei Huang, Ru Li, Carsten Maple, Hong-Ji Yang, David Foskett, Vince Cleaver)
20. Design of a Two-dimensional Recursive Filter Using the Bees Algorithm (D. T. Pham, Ebubekir Koç)
21. Designing Genetic Regulatory Networks Using Fuzzy Petri Nets Approach (Raed I. Hamed, Syed I. Ahson, Rafat Parveen)

1. State of the Art and Emerging Trends in Operations and Maintenance of Offshore Oil and Gas Production Facilities: Some Experiences and Observations (Jayantha P. Liyanage)
2. Statistical Safety Analysis of Maintenance Management Process of Excavator Units (Ljubisa Papic, Milorad Pantelic, Joseph Aronov, Ajit Kumar Verma)
3. Improving Energy and Power Efficiency Using NComputing and Approaches for Predicting Reliability of Complex Computing Systems (Hoang Pham, Hoang Pham Jr.)
4. Running Temperature and Mechanical Stability of Grease as Maintenance Parameters of Railway Bearings (Jan Lundberg, Aditya Parida, Peter Söderholm)
5. Subsea Maintenance Service Delivery: Mapping Factors Influencing Scheduled Service Duration (Efosa Emmanuel Uyiomendo, Tore Markeset)
6. A Systemic Approach to Integrated E-maintenance of Large Engineering Plants (Ajit Kumar Verma, A. Srividya, P. G. Ramesh)
7. Authentication and Access Control in RFID Based Logistics-customs Clearance Service Platform (Hui-Fang Deng, Wen Deng, Han Li, Hong-Ji Yang)
8. Evolutionary Trajectory Planning for an Industrial Robot (R. Saravanan, S. Ramabalan, C. Balamurugan, A. Subash)
9. Improved Exponential Stability Criteria for Recurrent Neural Networks with Time-varying Discrete and Distributed Delays (Yuan-Yuan Wu, Tao Li, Yu-Qiang Wu)
10. An Improved Approach to Delay-dependent Robust Stabilization for Uncertain Singular Time-delay Systems (Xin Sun, Qing-Ling Zhang, Chun-Yu Yang, Zhan Su, Yong-Yun Shao)
11. Robust Stability of Nonlinear Plants with a Non-symmetric Prandtl-Ishlinskii Hysteresis Model (Chang-An Jiang, Ming-Cong Deng, Akira Inoue)
12. Stability Analysis of Discrete-time Systems with Additive Time-varying Delays (Xian-Ming Tang, Jin-Shou Yu)
13. Delay-dependent Stability Analysis for Markovian Jump Systems with Interval Time-varying Delays (Xu-Dong Zhao, Qing-Shuang Zeng)
14. H∞ Synchronization of Chaotic Systems via Delayed Feedback Control (Li Sheng, Hui-Zhong Yang)
15. Adaptive Fuzzy Observer Backstepping Control for a Class of Uncertain Nonlinear Systems with Unknown Time-delay (Shao-Cheng Tong, Ning Sheng)
16. Simulation-based Optimal Design of α-β-γ-δ Filter (Chun-Mu Wu, Paul P. Lin, Zhen-Yu Han, Shu-Rong Li)
17. Independent Cycle Time Assignment for Min-max Systems (Wen-De Chen, Yue-Gang Tao, Hong-Nian Yu)

1. An Assessment Tool for Land Reuse with Artificial Intelligence Method (Dieter D. Genske, Dongbin Huang, Ariane Ruff)
2. Interpolation of Images Using Discrete Wavelet Transform to Simulate Image Resizing as in Human Vision (Rohini S. Asamwar, Kishor M. Bhurchandi, Abhay S. Gandhi)
3. Watermarking of Digital Images in Frequency Domain (Sami E. I. Baba, Lala Z. Krikor, Thawar Arif, Zyad Shaaban)
4. An Effective Image Retrieval Mechanism Using Family-based Spatial Consistency Filtration with Object Region (Jing Sun, Ying-Jie Xing)
5. Robust Object Tracking under Appearance Change Conditions (Qi-Cong Wang, Yuan-Hao Gong, Chen-Hui Yang, Cui-Hua Li)
6. A Visual Attention Model for Robot Object Tracking (Jin-Kui Chu, Rong-Hua Li, Qing-Ying Li, Hong-Qing Wang)
7. SVM-based Identification and Un-calibrated Visual Servoing for Micro-manipulation (Xin-Han Huang, Xiang-Jin Zeng, Min Wang)
8. Action Control of Soccer Robots Based on Simulated Human Intelligence (Tie-Jun Li, Gui-Qiang Chen, Gui-Fang Shao)
9. Emotional Gait Generation for a Humanoid Robot (Lun Xie, Zhi-Liang Wang, Wei Wang, Guo-Chen Yu)
10. Cultural Algorithm for Minimization of Binary Decision Diagram and Its Application in Crosstalk Fault Detection (Zhong-Liang Pan, Ling Chen, Guang-Zhao Zhang)
11. A Novel Fuzzy Direct Torque Control System for Three-level Inverter-fed Induction Machine (Shu-Xi Liu, Ming-Yu Wang, Yu-Guang Chen, Shan Li)
12. Statistic Learning-based Defect Detection for Twill Fabrics (Li-Wei Han, De Xu)
13. Nonsaturation Throughput Enhancement of IEEE 802.11b Distributed Coordination Function for Heterogeneous Traffic under Noisy Environment (Dhanasekaran Senthilkumar, A. Krishnan)
14. Structure and Dynamics of Artificial Regulatory Networks Evolved by Segmental Duplication and Divergence Model (Xiang-Hong Lin, Tian-Wen Zhang)
15. Random Fuzzy Chance-constrained Programming Based on Adaptive Chaos Quantum Honey Bee Algorithm and Robustness Analysis (Han Xue, Xun Li, Hong-Xu Ma)
16. A Bit-level Text Compression Scheme Based on the ACW Algorithm (Hussein Al-Bahadili, Shakir M. Hussain)
17. A Note on an Economic Lot-sizing Problem with Perishable Inventory and Economies of Scale Costs: Approximation Solutions and Worst Case Analysis (Qing-Guo Bai, Yu-Zhong Zhang, Guang-Long Dong)

1. Virtual Reality: A State-of-the-Art Survey (Ning-Ning Zhou, Yu-Long Deng)
2. Real-time Virtual Environment Signal Extraction and Denoising Using Programmable Graphics Hardware (Yang Su, Zhi-Jie Xu, Xiang-Qian Jiang)
3. Effective Virtual Reality Based Building Navigation Using Dynamic Loading and Path Optimization (Qing-Jin Peng, Xiu-Mei Kang, Ting-Ting Zhao)
4. The Skin Deformation of a 3D Virtual Human (Xiao-Jing Zhou, Zheng-Xu Zhao)
5. Technology for Simulating Crowd Evacuation Behaviors (Wen-Hu Qin, Guo-Hui Su, Xiao-Na Li)
6. Research on Modelling Digital Paper-cut Preservation (Xiao-Fen Wang, Ying-Rui Liu, Wen-Sheng Zhang)
7. On Problems of Multicomponent System Maintenance Modelling (Tomasz Nowakowski, Sylwia Werbinka)
8. Soft Sensing Modelling Based on Optimal Selection of Secondary Variables and Its Application (Qi Li, Cheng Shao)
9. Adaptive Fuzzy Dynamic Surface Control for Uncertain Nonlinear Systems (Xiao-Yuan Luo, Zhi-Hao Zhu, Xin-Ping Guan)
10. Output Feedback for Stochastic Nonlinear Systems with Unmeasurable Inverse Dynamics (Xin Yu, Na Duan)
11. Kalman Filtering with Partial Markovian Packet Losses (Bao-Feng Wang, Ge Guo)
12. A Modified Projection Method for Linear Feasibility Problems (Yi-Ju Wang, Hong-Yu Zhang)
13. A Neuro-genetic Based Short-term Forecasting Framework for Network Intrusion Prediction System (Siva S. Sivatha Sindhu, S. Geetha, M. Marikannan, A. Kannan)
14. New Delay-dependent Global Asymptotic Stability Condition for Hopfield Neural Networks with Time-varying Delays (Guang-Deng Zong, Jia Liu)
15. Crosscumulants Based Approaches for the Structure Identification of Volterra Models (Houda Mathlouthi, Kamel Abederrahim, Faouzi Msahli, Gerard Favier)

1. Coalition Formation in Weighted Simple-majority Games under Proportional Payoff Allocation Rules (Zhi-Gang Cao, Xiao-Guang Yang)
2. Stability Analysis for Recurrent Neural Networks with Time-varying Delay (Yuan-Yuan Wu, Yu-Qiang Wu)
3. A New Type of Solution Method for the Generalized Linear Complementarity Problem over a Polyhedral Cone (Hong-Chun Sun, Yan-Liang Dong)
4. An Improved Control Algorithm for High-order Nonlinear Systems with Unmodelled Dynamics (Na Duan, Fu-Nian Hu, Xin Yu)
5. Controller Design of High Order Nonholonomic System with Nonlinear Drifts (Xiu-Yun Zheng, Yu-Qiang Wu)
6. Directional Filter for SAR Images Based on Nonsubsampled Contourlet Transform and Immune Clonal Selection (Xiao-Hui Yang, Li-Cheng Jiao, Deng-Feng Li)
7. Text Extraction and Enhancement of Binary Images Using Cellular Automata (G. Sahoo, Tapas Kumar, B. L. Rains, C. M. Bhatia)
8. GH2 Control for Uncertain Discrete-time-delay Fuzzy Systems Based on a Switching Fuzzy Model and Piecewise Lyapunov Function (Zhi-Le Xia, Jun-Min Li)
9. A New Energy Optimal Control Scheme for a Separately Excited DC Motor Based Incremental Motion Drive (Milan A. Sheta, Vivek Agarwal, Paluri S. V. Nataraj)
10. Nonlinear Backstepping Ship Course Controller (Anna Witkowska, Roman Smierzchalski)
11. A New Method of Embedded Fourth Order with Four Stages to Study Raster CNN Simulation (R. Ponalagusamy, S. Senthilkumar)
12. A Minimum-energy Path-preserving Topology Control Algorithm for Wireless Sensor Networks (Jin-Zhao Lin, Xian Zhou, Yun Li)
13. Synchronization and Exponential Estimates of Complex Networks with Mixed Time-varying Coupling Delays (Yang Dai, YunZe Cai, Xiao-Ming Xu)
14. Step-coordination Algorithm of Traffic Control Based on Multi-agent System (Hai-Tao Zhang, Fang Yu, Wen Li)
15. A Research of the Employment Problem on Common Job-seekers and Graduates (Bai-Da Qu)
National College English Competition 2023, Class C Exam Paper

Part I: Listening Comprehension (30 points)

Section A
Directions: In this section, you will hear 10 short conversations. At the end of each conversation, a question will be asked about what was said. Both the conversation and the question will be spoken only once. After each question, there will be a pause. During the pause, you must read the four choices marked A), B), C) and D), and decide which is the best answer. Then mark the corresponding letter on Answer Sheet 1 with a single line through the center.

1. A) At 3:00. B) At 2:30. C) At 1:30. D) At 2:00.
2. A) She has trouble finding the key. B) She lost her key. C) She didn't take the key. D) She forgot the key.
3. A) Sue will make the hotel reservation. B) The woman should book her room herself. C) The hotel has no empty rooms. D) The woman should confirm her reservation.
4. A) The woman should have a look at the clock. B) The man's watch is changing. C) The woman's watch is fast. D) The man will buy a new watch.
5. A) The man's interest in cooking. B) A culinary event in town. C) The types of dishes served. D) The enigmatic nature of cooking.
6. A) His studies require much reading. B) The woman knows about Mr. Smith. C) He is not going to the library now. D) There are several libraries on campus.
7. A) Call Jack later. B) Get the man's number from Mary. C) Wait for the signal to call. D) Talk to the man for Jack.
8. A) It was rushed and incomplete. B) He spent all his money on the trip. C) He couldn't take the vacation he planned. D) He enjoyed it even though he had to pay for it.
9. A) The man is a good cook. B) The man likes the city. C) The man wants the woman's recipes. D) The man enjoys the woman's cooking.
10. A) The weather in Miami. B) The man's interest in swimming. C) How the man is doing in Miami. D) The weekend plans in Miami.

Section B
Directions: In this section, you will hear 3 short passages. At the end of each passage, you will hear some questions. Both the passages and the questions will be read twice. When you hear a question, you must choose the best answer from the four choices marked A), B), C) and D). Then mark the corresponding letter on Answer Sheet 1 with a single line through the center.

Passage One
Questions 11 to 13 are based on the passage you have just heard.
11. A) They think they will get something for Christmas. B) Neither of them believes in Christmas. C) They have a lot in common. D) They like talking about Christmas.
12. A) The Christmas sales. B) Christmas presents. C) Christmas trees. D) Christmas dinner.
13. A) The St. Nick's Charity. B) The Christmas party. C) The summer party. D) The summer season.

Passage Two
Questions 14 to 16 are based on the passage you have just heard.
14. A) The brother. B) The brother's friend. C) The radio. D) The television.
15. A) Cars changing direction. B) Different-colored cars. C) The number of cars. D) The size of the cars.
16. A) He was stuck in traffic. B) He liked the music. C) He wanted to find out if there was a traffic report. D) He wanted to get the weather.

Passage Three
Questions 17 to 20 are based on the passage you have just heard.
17. A) A large amount of money. B) A fighting championship in Tokyo. C) The fall wrestling season. D) A college championship.
18. A) Who might win the fight. B) When the fight will take place. C) The trainer's prediction. D) Where the fight will take place.
19. A) He is confident in his victory. B) He is unbeatable. C) He is popular among his peers. D) He is a newcomer.
20. A) Hesitantly. B) Bitterly. C) Confidently. D) Cautiously.

Part II: Reading Comprehension (40 points)
Directions: There are 4 passages in this part. Each passage is followed by some questions or unfinished statements. For each of them there are four choices marked A), B), C) and D). You should decide on the best choice and mark the corresponding letter on Answer Sheet 1 with a single line through the center.

Passage One
Questions 21 to 25 are based on the following passage.

Do you use the word "like" too much? That question has become a topic of debate among linguists, writers, and speakers of English. "Like" has shown remarkable versatility since the 1980s, when it began to show up in new, unexpected places and in a broader range of uses. It's not just teenagers who are using "like" this way, either.

Some linguists view "like" as a super-common hedge (a softening expression) used to soften speech, express uncertainty, or clarify the intended meaning. Christopher Snyder, a linguist at Texas A&M University, puts it this way: "Under this view, you are seeking to leave an escape route in case your listener might disagree with or ridicule what you are saying." Bruce Fraser, a sociolinguist at Boston University, adds that "like" can also serve to convey solidarity, especially when speakers fall back on it as a common conversational strategy.

Despite its prevalence, not everyone is on board with using "like" as a hedge. Some style manuals and experienced writers discourage its frequent use as an oversimplification and a sign of laziness, especially in more formal writing genres. Furthermore, the use of "like" as a filler word is often controversial in public speech forums, such as interviews or speeches.

21. According to some linguists, the word "like" is used mainly as a hedge to _______. A) stress intentions B) soften speech C) express certainty D) criticize others
22. According to the passage, who could possibly object to the use of "like" in speech? A) Teenagers. B) Experienced writers. C) Speakers of English. D) Linguists.
23. According to Bruce Fraser, "like" indicates solidarity when _______. A) it is used among teenagers B) it is overused by adults C) speakers want to secure their meanings D) speakers want to show agreement
24. The word "hedge" in the passage most likely means _______. A) insecurity B) limitation C) way out D) clarification
25. The author suggests that using "like" in public speeches may be deemed _____. A) informal B) formal C) controversial D) convincing

Passage Two
Questions 26 to 30 are based on the following passage.

Dyslexia is a common reading disorder that hinders the ability to read. It should be noted that dyslexia does not result from vision problems. People with dyslexia have trouble reading accurately and fluently. They may also have difficulty understanding what they read. Dyslexia is not related to intelligence, yet it often creates challenges for students in school. Learning disabilities such as dyslexia result from neurobiological differences in the brain, not differences in intelligence.

It is estimated that one in ten people has dyslexia, with varying degrees of severity. While dyslexia is lifelong, individuals can learn to read and write by mastering a variety of learning methods. Early identification and treatment are essential for people with dyslexia to achieve success in school and later in life.

Reading difficulties are often evident in early childhood, showing up as trouble learning nursery rhymes or playing word games. In school, children with dyslexia may struggle to spell, read aloud, or learn new words. As they grow older, students with dyslexia may have difficulty with more complex language skills such as grammar, understanding textbooks, and writing essays.

26. According to the passage, dyslexia primarily affects a person's _____. A) vision B) intelligence C) reading ability D) listening skill
27. The passage suggests that _____ can help people with dyslexia overcome reading challenges. A) intensive exercises B) vision training C) early identification and treatment D) studying grammar
28. It is pointed out in the passage that dyslexia _______. A) is caused by brain injuries B) is related to insufficient intelligence C) results from neurobiological differences D) can be cured through eye surgery
29. Early signs of dyslexia often involve issues with _____. A) mathematics B) writing C) memorization D) language
30. According to the passage, students with dyslexia may find it hard to _____ as they grow older. A) play sports B) do well in exams C) memorize poems D) understand textbooks

Passage Three
Questions 31 to 35 are based on the following passage.

Office workers forced to sit for hours in a poorly ventilated room could soon have a solution to their problems in a new smart air conditioning system. Designed to fight the lag in concentration caused by stuffiness, rising temperatures, and sudden spikes in humidity, the system monitors and adjusts the environment without the need for human intervention.

Developed by a team at the German Fraunhofer Research Group, one of the sensors in the system measures the rate of carbon dioxide in the air, a key indicator of poor ventilation. If the carbon dioxide levels become excessive, the air conditioning unit adjusts the airflow without human operators having to do anything.

Ilias Tsagaris, head of the research program, commented on the extensive application of the system, saying, "Our feedback system can be used in any workplace, home, or vehicle, where it alters the environment independently of human interference."

In addition to controlling carbon dioxide levels, the smart system can adjust temperature and humidity according to pre-set criteria. Ehsan Mohamed, one of the team members, mentioned, "Our objective in developing this system was to create a comfortable and productive environment for workers. The greatest advantage of our system is the degree of autonomy it offers, minimizing the need for environmental management."

31. The smart air conditioning system is meant to address _____ in office settings. A) the lack of proper seating B) temperature and ventilation issues C) the shortage of office supplies D) stress related to too much interaction
32. The system operates _____ without human intervention. A) in silence B) flawlessly C) electrically D) automatically
33. According to the passage, the new system measures _____ to track poor ventilation. A) room temperature B) carbon dioxide levels C) humidity changes D) the presence of mold
34. The smart system can help create a comfortable workplace by _____. A) responding to workers' preferences B) maintaining a stable work environment C) monitoring workers' levels of productivity D) reducing reliance on trained personnel
35. Mohamed believes that the new system _____. A) relies on manual control B) meets home cooling needs C) speeds up work performance D) ensures environmental comfort

Passage Four
Questions 36 to 40 are based on the following passage.

Popular culture often portrays Western eating habits as unhealthy and fast-paced, emphasizing convenience and instant gratification over healthy choices and traditional cooking methods. While this portrayal is somewhat accurate, it fails to consider the diversity of Western diets and the growing interest in healthy eating that has emerged in recent years.

A typical Western diet includes a variety of foods such as meat, dairy products, grains, fruits, and vegetables. Fast food is a significant part of many Westerners' diets, as it provides quick and easy meals for those with busy lifestyles. However, modern health campaigns encourage people to make more nutritious choices by consuming less processed food and more fruits and vegetables.

The rise of organic supermarkets and health-conscious eateries reflects a shift in Western eating habits towards more sustainable and mindful choices. This trend is driven by concerns about environmental sustainability, animal welfare, and personal health. People are increasingly interested in knowing where their food comes from and how it is produced, leading to a greater demand for locally sourced, organic, and ethically sourced food products.

Despite ongoing challenges in promoting healthy eating, Western attitudes towards food are changing as people become more aware of the impact of their dietary choices on their health and the environment. With the rise of organic and plant-based food options, healthy eating is becoming more accessible and appealing to a broader audience.

36. According to the passage, Western eating habits are often portrayed as _____ in popular culture. A) slow-paced B) traditional C) unhealthy D) diverse
37. The passage suggests fast food is popular among Westerners because it _____. A) is nutritious and affordable B) is rich in vitamins and minerals C) saves time and energy D) encourages healthy eating habits
38. People are turning to organic supermarkets because they want to _____. A) support local farmers B) reduce their food expenses C) follow popular trends D) make sustainable food choices
39. The passage indicates people's growing interest in knowing _____. A) where to find cheap food B) how to cook traditional dishes C) the nutritional content of their food D) where food is sourced and produced
40. The author suggests that the popularity of organic and plant-based foods _____. A) has made healthy eating less appealing B) represents a challenge in promoting healthy eating C) responds to a growing demand for sustainable and mindful choices D) has not affected Western attitudes towards food

Part III: Writing (30 points)
Directions: For this part, you are allowed 30 minutes to write a short essay. You should start your essay with a brief introduction that captures the relevance of the topic. You should then analyze the advantages and disadvantages of technological advancements in education. Consider the impact on students, teachers, administrators, and the overall learning environment. You should write at least 250 words but no more than 300 words.

Sample Writing:

With the rapid advancement of technology, its impact on education has been significant in recent years. On the one hand, technology has brought numerous benefits to the educational sector, enhancing the learning experience for students and providing more efficient tools for teachers and administrators. However, there are also drawbacks that need to be considered when evaluating the role of technology in education.

One advantage of technological advancements in education is the increased accessibility to information. With the internet and digital resources, students can access a wealth of knowledge on various subjects anytime and anywhere. This has revolutionized the way students learn and research, making education more personalized and interactive. Additionally, technological tools such as online platforms and virtual classrooms have enabled teachers to create engaging lessons and collaborate with students across borders.

On the other hand, overreliance on technology in education can have negative consequences. For instance, the proliferation of digital devices in classrooms could lead to distractions and reduced attention spans among students. Moreover, technological advancements may widen the education gap between students with access to the latest tools and those without, creating inequalities in learning opportunities.

In conclusion, while technology has revolutionized education in many positive ways, its impact is not without challenges. It is important for educators and policymakers to strike a balance between leveraging technology for educational benefits and addressing its potential drawbacks. By harnessing the power of technology responsibly, the educational sector can continue to innovate and improve learning outcomes for all students.
Vision-Based Localization

Vision-based localization is a fundamental problem in the field of computer vision and robotics, with numerous practical applications, such as autonomous driving, augmented reality, and indoor navigation. The ability to accurately determine the position and orientation of a camera or a robot within a known environment is crucial for these applications, as it enables precise interaction with the surrounding world.

One of the key challenges in vision-based localization is dealing with the inherent uncertainty and variability present in real-world environments. Factors such as changes in lighting conditions, occlusions, and dynamic objects can significantly affect the accuracy and reliability of the localization process. To address these challenges, researchers have developed various techniques and algorithms that leverage the power of computer vision and machine learning.

One common approach to vision-based localization is the use of feature-based methods. These techniques rely on the identification and matching of salient visual features, such as corners, edges, or distinctive texture patterns, between the current camera image and a pre-existing map or database of the environment. By matching these features, the system can estimate the position and orientation of the camera relative to the known environment. This approach has been widely used in various applications, including simultaneous localization and mapping (SLAM) systems, where the robot or camera simultaneously builds a map of the environment and localizes itself within that map.

Another approach to vision-based localization is the use of direct methods, which operate directly on the pixel values of the camera image, without the need for explicit feature extraction. These methods often employ optimization techniques to align the current camera image with a reference image or a predicted image based on a known 3D model of the environment.
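As a toy illustration of this photometric-alignment idea (not any specific published method), the sketch below recovers a one-dimensional integer shift between two synthetic "images" by brute-force minimization of the sum-of-squared-differences error over pixel intensities; the signal, search range, and 4-pixel shift are all invented for the example:

```python
import numpy as np

def photometric_shift(ref, cur, max_shift=10):
    """Brute-force direct alignment: return the integer shift that best maps
    the current image back onto the reference, by minimizing the
    sum-of-squared-differences (SSD) photometric error."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        # Undo a candidate shift and compare raw pixel values (no features)
        err = np.sum((np.roll(cur, -s) - ref) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# A synthetic 1-D "image" and the same image displaced by 4 pixels
x = np.linspace(0.0, 4.0 * np.pi, 200)
ref = np.sin(x) + 0.5 * np.sin(3.0 * x)
cur = np.roll(ref, 4)                 # simulated camera motion
shift = photometric_shift(ref, cur)
print(shift)                          # the 4-pixel displacement is recovered
```

A real direct method aligns 2-D images under a full 6-DoF pose and uses gradient-based optimization (e.g., Gauss-Newton on the photometric cost) rather than exhaustive search, but the quantity being minimized is the same kind of pixel-intensity difference.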
Direct methods can be more robust to changes in lighting and texture patterns, as they do not rely on the stability of specific visual features.

In recent years, the rise of deep learning has revolutionized the field of vision-based localization. Convolutional neural networks (CNNs) have shown remarkable success in tasks such as image classification, object detection, and semantic segmentation, which can be leveraged for localization. Deep learning-based methods can learn end-to-end mapping functions that directly relate camera images to the corresponding pose information, without the need for explicit feature extraction or matching. These techniques have demonstrated impressive performance, particularly in challenging environments with significant perceptual aliasing or dynamic changes.

One of the key advantages of vision-based localization is its versatility. Unlike other localization methods that rely on dedicated hardware, such as GPS or radio-based systems, vision-based techniques can leverage the ubiquity of cameras in modern devices, from smartphones to autonomous robots. This allows for the deployment of localization solutions in a wide range of environments, including indoor spaces, where other localization approaches may be less effective.

However, vision-based localization also faces several challenges that need to be addressed. For example, the accuracy and reliability of the system can be affected by the quality and resolution of the camera, as well as the complexity and dynamics of the environment. Additionally, the computational requirements of the localization algorithms can be significant, particularly when dealing with high-resolution images or complex 3D models.

To address these challenges, researchers are actively exploring various techniques to improve the performance and efficiency of vision-based localization systems.
These include the development of more robust and adaptive algorithms, the use of multiple sensors (e.g., combining vision with inertial measurement units), and the optimization of computational resources through techniques like hardware acceleration or distributed processing.

As computer vision and robotics continue to evolve, the importance of accurate and reliable vision-based localization will only grow. With increasing demand for autonomous systems, augmented reality applications, and seamless indoor and outdoor navigation, the development of advanced vision-based localization techniques will play a crucial role in shaping the future of these technologies.
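As a concrete illustration of the feature-based pipeline described above, the sketch below matches descriptors from the current image against a pre-built map using nearest-neighbor search with Lowe's ratio test. This is a minimal, library-free sketch: the function name `match_features`, the 0.8 ratio threshold, and the synthetic descriptors in the usage example are illustrative assumptions, not part of the original text.

```python
import numpy as np

def match_features(desc_map, desc_query, ratio=0.8):
    """Match query descriptors to map descriptors with a ratio test.

    desc_map:   (N, D) array of descriptors from the pre-built map
    desc_query: (M, D) array of descriptors from the current image
    Returns a list of (query_idx, map_idx) pairs that pass the test.
    """
    matches = []
    for qi, d in enumerate(desc_query):
        # Euclidean distance from this query descriptor to every map descriptor
        dists = np.linalg.norm(desc_map - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        # Accept only if the best match is clearly better than the runner-up
        if dists[nearest] < ratio * dists[second]:
            matches.append((qi, nearest))
    return matches
```

In a real localizer, the matched 2D-3D correspondences would then feed a pose solver such as PnP; this sketch stops at the matching step.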
[Master's thesis, Beijing University of Posts and Telecommunications: The Telecom Corporate Identity System]

Abstract: The Corporate Identity System (CIS) is a systems-engineering approach that integrates corporate identity, corporate image, corporate identification marks, and image communication. The current image of telecom enterprises is worrying, and CIS, as an important means of image management, has much to offer in shaping corporate image; under modern market-economy conditions, CIS theory must therefore be used to remold the image of telecom enterprises. A telecom enterprise should project six images: a sunrise (growth) image, a competitive image, a scientific image, a civilized image, an approachable image, and an honest image. The Telecom Corporate Identity System (TCIS) comprises three parts: mind identity, behavior identity, and visual identity. The mind identity of a telecom enterprise should reflect the characteristics of the business, and its philosophy should be positioned around high-quality service and market orientation. The basic elements, application elements, and environmental elements of visual identity should all embody this philosophy and convey the six images described above. In the network age, telecom enterprises should also pay attention to network identity and to the use of multimedia in visual identity. Behavior identity mainly covers five aspects: organizational structure, rules and regulations, employee motivation, service quality, and public relations. In light of enterprise realities and modern management trends, we argue for a new market-oriented, customer-centered organizational structure. For rules and regulations, we suggest borrowing the object-oriented approach from computer software engineering to build a classification of process-oriented and object-oriented rules, as well as object-oriented position regulations.

Keywords: telecom enterprise, corporate mind identity, corporate behavior identity, corporate visual identity

Preface. The market economy has entered the 21st century.
Patent title: VIEW BASED OBJECT DETECTION IN IMAGES
Inventor: SHARMA, Pratik
Application No.: IB2018/051592
Filing date: 2018-03-11
Publication No.: WO2019/175620A1
Publication date: 2019-09-19
Abstract: The first step in segmenting the image into different objects is producing a sharpened version of the image. The image is blurred slightly, and then the original image and the blurred version are compared one pixel at a time. If an original pixel is brighter than the corresponding pixel of the blurred image, it is further brightened; if it is darker, it is further darkened. The result is a sharpened version of the original image with thick edges, used to segment it into different objects. Each object has different salient features in different views, so based on the salient features detected for an object we can also narrow down the view of the detected object, which helps in performing object recognition.
Applicant: SHARMA, Pratik
Address: Kailashpuri, Bunglow No 2, Govind Nagar, Malad East Mumbai 400097 IN
Nationality: IN
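The sharpening step described in the abstract can be sketched as a simple unsharp mask: blur slightly, compare each pixel with its blurred counterpart, and push it further in the direction of the difference. This is a minimal numpy sketch under stated assumptions (a 3x3 box blur and a `sharpen` helper of my own naming); the patent does not specify the blur kernel or the amount of brightening/darkening.

```python
import numpy as np

def sharpen(img, amount=1.0):
    """Unsharp-mask style sharpening as described in the abstract.

    Blur with a 3x3 box filter, then push each pixel away from its
    blurred value: brighter pixels get brighter, darker pixels darker.
    """
    img = img.astype(float)
    # 3x3 box blur via padded neighborhood averaging
    p = np.pad(img, 1, mode='edge')
    blurred = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    # (img - blurred) is positive where the original is brighter than the
    # blur and negative where it is darker, giving the described behavior
    return np.clip(img + amount * (img - blurred), 0, 255)
```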
Cog Level-1 and Level-2 Function Classification

I. Cog Level-1 Functions

1. Natural Language Processing (NLP)
- Text Recognition and Parsing: recognize and parse input text and extract the key information it contains.
- Text Generation and Synthesis: generate grammatical, clearly meaningful text according to the given requirements and conditions.
- Semantic Understanding and Reasoning: understand the semantics of text and perform inference and logical analysis.

2. Computer Vision
- Image Recognition and Classification: recognize and classify input images, identifying the objects, scenes, or features they contain.
- Object Detection and Tracking: detect and track targets in images or video, annotating their positions and trajectories.
- Image Generation and Synthesis: generate new images from given conditions and requirements, with a degree of creativity.

3. Machine Learning and Deep Learning
- Model Training and Tuning: train models on a given dataset and improve their performance through tuning.
- Feature Extraction and Dimensionality Reduction: extract useful features from raw data and reduce its dimensionality.
- Model Evaluation and Prediction: evaluate model performance and make predictions on new data, with associated probabilities or confidence scores.

4. Automation and Control
- Process Monitoring and Control: monitor and control the state and behavior of a system or process to achieve automated control and optimization.
I.J. Image, Graphics and Signal Processing, 2015, 12, 31-38
Published Online November 2015 in MECS (/)
DOI: 10.5815/ijigsp.2015.12.05

Recognition and Classification of Human Behavior in Intelligent Surveillance Systems using Hidden Markov Model

Adeleh Farzad
Islamic Azad University of Rasht, Department of Electrical Engineering, Rasht, Iran
Email: farzad.adeleh@

Rahebeh Niaraki Asli
University of Guilan, Department of Electrical Engineering, Rasht, Iran
Email: niaraki@guilan.ac.ir

Abstract—Nowadays, the analysis of human behavior by computer vision techniques has become an interesting issue for researchers. Automatic recognition of actions in video allows the automation of many otherwise manually intensive tasks, such as video surveillance. Video surveillance systems, especially for elderly care and behavior analysis, play an important role in caring for aged, in-patient, or bedridden persons. In this paper, we propose a high-accuracy human action classification and recognition method using a hidden Markov model classifier. In our approach, we first use the star skeleton feature extraction method to extract the extremities of the human body silhouette and produce feature vectors as inputs to the hidden Markov model classifier. The hidden Markov model, which is learned and used in our proposed surveillance system, then classifies the investigated behaviors and detects abnormal actions with higher accuracy than other abnormal-detection approaches reported in previous works. An accuracy of about 94%, derived from the confusion matrix, confirms the efficiency of the proposed method when compared with its counterparts for abnormal action detection.

Index Terms—Video surveillance, human action recognition, star skeleton method, feature extraction, hidden Markov model.

I. INTRODUCTION

Human action recognition and classification methods have many different applications useful in human life.
Video surveillance is one of the most attractive of these, applied in intelligent supervision systems in banks, parking lots, and smart buildings [1, 2]. Interaction between human and machine for commands and communication is another important issue, addressed by techniques such as speech recognition [3] and hand gesture classification [4]. Processing the video frames coming from security cameras, with the aim of monitoring and recognizing abnormal behaviors, creates an automatic care-monitoring system that acts as a human action recognizer. On the other hand, the number of elderly and sick people who live alone and need continuous monitoring is increasing, so intelligent systems are useful and necessary for permanent monitoring of the elderly. Several factors are vital to the efficiency of an action recognition system, such as detection time, the background of the location, the abnormal conditions, and the number of people in the environment of interest. The significance of each factor in the object of study, and the type of action or behavior, determine the type of recognition and classification. For instance, for some behaviors only the upper part of the body is used, as in hand gesture recognition [5]. Human behavior analysis from captured video requires a pre-processing step that includes foreground and background detection and tracking of individuals in consecutive frames. The other major steps are feature extraction, selection of a suitable classifier or model, and finally classification, identification, and authentication based on the extracted features. The first step in detecting an object's behavior is identifying the movement of the object in the image and segmenting it. The most common strategy for moving-object detection is background subtraction [6]. A simple approach to background subtraction compares each frame of the video with a static background.
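The background-subtraction step just described, comparing each frame with a static background, can be sketched in a few lines; the `subtract_background` name and the threshold of 25 grey levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

def subtract_background(frame, background, threshold=25):
    """Foreground mask by comparing a frame against a static background.

    Pixels whose absolute difference from the background exceeds the
    threshold are marked foreground (True).
    """
    # cast to int so uint8 subtraction cannot wrap around
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold
```

The resulting boolean mask would then be cleaned up (e.g. with morphological operations) before silhouette extraction.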
As noted above, after the pre-processing step an automatic recognition system includes two fundamental stages: extracting features from the input frame, and classifying actions [7]. One of the most important steps in the behavior analysis process is feature extraction and the creation of a suitable feature vector, which provides the primitive data for the classifier. There is a wide selection of feature extraction methods in human action recognition, such as the blob method [2] and edge-based methods [8]. Furthermore, the extremities of the human contour relative to its centroid are one of the conceptual features extracted by the star skeleton method [9]. Low computational complexity and low sensitivity to resizing are among the advantages of the star skeleton method. In a recognition system, a sequence of images represents the action; independent of the feature extraction method, the system produces a feature vector and converts it to a symbol detectable by a classification method [10]. In previous studies, human action classification follows the feature extraction step using different strategies. K-nearest neighbors (KNN) is a simple and useful classifier with high adaptability, requiring no hypothesis on the data [11]. A drawback of KNN classification is the high computational time of the learning procedure; a good choice of the k value is another problem, which has to be set by repeated simulation. In Ref. [12], the K-means algorithm extracts features and KNN classifies the different actions. The support vector machine (SVM) is another classification method [13] that has performed well in recent years compared with older methods. Although SVM is generally used for two-class problems, it can solve multi-class problems using one-versus-one and one-versus-all strategies [14]. The hidden Markov model (HMM) presented in [15] is a high-precision classifier, at the cost of extra computational load.
In this paper, we use an HMM as a high-accuracy classification method in our proposed surveillance system. The remainder of the paper is organized as follows: Section 2 overviews related work. Section 3 is a brief review of the hidden Markov model. Section 4 describes the principles of our proposed surveillance system. Simulation results and comparisons are presented in Section 5, and the paper is concluded in Section 6.

II. RELATED WORK

Many different approaches to action recognition have been proposed over the past two decades [16]. These studies have different applications according to the variety of behaviors. Sensors and cameras are widely used for surveillance applications. In some studies, acceleration obtained from sensors is used for human action recognition, such as elderly care in smart homes [17, 18]. The main disadvantage of acceleration-based methods is that a person must wear a particular sensor or device, or it must be placed in a particular location. The other method is video surveillance, in which one or more cameras are used in different locations for human behavior recognition. This type of supervision has been used to recognize a range of human behaviors, such as care-related behavior of elderly people and abnormal or criminal behaviors indoors or outdoors. Ref. [19] presents a particle-video-based abnormal behavior detection method in which a hidden Markov model is used to detect abnormal behavior in small groups. An automated video surveillance system for crime scene detection using statistical characteristics is presented in [20]. If the scene shows a peculiar situation such as purse snatching, kidnapping, or fighting on the street, the surveillance system recognizes the situation and automatically reports it to an agency. Another application of video surveillance is analyzing the behavior of elderly people in emergencies. Ref. [21] studies the recognition of abnormal human activities such as falling, chest pain, fainting, vomiting, and headache.
The proposed system model presents a novel combination of the R transform and principal component analysis (PCA) for abnormal activity recognition, and a hidden Markov model (HMM) is applied to the extracted features for training and activity recognition. Ref. [22] presents a method for human fall detection based on a combination of the eigenspace technique and integrated time motion images (ITMI). The eigenspace technique is applied to ITMI to extract eigen-motion, and a multi-class SVM classifies and determines fall events. Ref. [23] proposes a method to detect falls based on a combination of motion history and human shape variation. Ref. [24] presents an HMM classifier for behavior understanding from video streams in a nursing center. To extract an activity from a video stream, it is necessary to detect the foreground objects and extract image features. Based on the extracted foreground pixels, a posture is represented by a pair of histogram projections, horizontal and vertical. Motion computed from the motion history map (MHS) is also used as a feature in determining the activity, and a duration-like HMM is adopted for activity feature extraction. Ref. [25] presents a novel method to detect various posture-based events in a typical elderly-monitoring application in a home surveillance scenario. The combination of a best-fit approximated ellipse around the human body, the horizontal and vertical velocities of movement, and the temporal changes of the centroid point provides useful cues for detecting different behaviors. The extracted feature vectors are finally fed to a fuzzy multi-class support vector machine for precise classification of motions and determination of fall events.

In this paper, our focus is the classification and recognition of abnormal human behaviors. For this purpose, the extremities are identified with sufficient accuracy in the feature extraction step, according to the center of gravity and the position of the body, and the final feature vector is produced.
The HMM then classifies according to the extracted features, and finally abnormal behaviors are detected.

III. A BRIEF REVIEW OF HIDDEN MARKOV MODELS

The hidden Markov model is a powerful model for recognizing random events and dynamic processes [15]. Training is one of the most important capabilities of an HMM: in the training process, we apply a set of sequential data to the HMM and estimate its primary parameters. In this paper, we use a discrete HMM for classifying and recognizing human behavior.

A discrete HMM consists of a number of states, each of which is assigned a probability of transition to another state; the state the system occupies at a particular time t is denoted q_t (t = 1, 2, ...). As time passes, state transitions occur stochastically. As in Markov models, the state at any time depends only on the state at the preceding time. In a discrete HMM, one symbol is emitted from one of the HMM states according to the probabilities assigned to the states. The HMM states are not directly visible and can be observed only through a sequence of observed symbols [26]. To describe a discrete HMM, the following notation is defined [15]:

N = number of states in the model
V = {v_1, v_2, ..., v_M}: set of possible output symbols
M = number of observation symbols
Q = {q_1, q_2, ..., q_t}: set of states

The state transition probability matrix A = {a_ij} is given by equation (1):

a_{ij} = P[q_{t+1} = j \mid q_t = i], \quad 1 \le i, j \le N \qquad (1)

where a_ij is the probability of a transition from state i to state j.

The symbol output probability matrix B = {b_j(k)} is given by equation (2):

b_j(k) = P[o_t = v_k \mid q_t = j], \quad 1 \le k \le M \qquad (2)

where O = (o_1, o_2, ..., o_T) is the sequence of observations, o_t is the output at time t, and T is the number of observations.

The initial state probability vector π = {π_i} is given by equation (3):

\pi_i = P[q_1 = i], \quad 1 \le i \le N \qquad (3)

The model is defined completely once each of the above parameters has a value, so an HMM λ can be written as the triple of equation (4):

\lambda = (A, B, \pi) \qquad (4)

A. Recognition and Training Using HMMs

To identify observed symbol sequences, we build one HMM for each category. For a classifier of C categories, we choose the model that best matches the observations from the C HMMs λ_i = (A_i, B_i, π_i), i = 1, ..., C. For a sequence of unknown category, we calculate P(O | λ_i) for each HMM λ_i (by Bayes' rule this is equivalent to comparing P(λ_i | O) under equal priors) and select

c^* = \arg\max_i P(O \mid \lambda_i) \qquad (5)

Given the observation sequence O = (o_1, o_2, ..., o_T) and the HMM λ_i, we must evaluate P(O | λ_i), the probability that the sequence was produced by HMM λ_i. This probability is calculated using the forward algorithm [27]. The forward variable is defined as

\alpha_t(i) = P(o_1, o_2, \ldots, o_t, q_t = i \mid \lambda) \qquad (6)

and is calculated recursively as follows:

\alpha_1(i) = \pi_i \, b_i(o_1), \quad 1 \le i \le N \qquad (7)

\alpha_{t+1}(j) = \Big[\sum_{i=1}^{N} \alpha_t(i) \, a_{ij}\Big] b_j(o_{t+1}), \quad 1 \le t \le T-1, \; 1 \le j \le N \qquad (8)

P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i) \qquad (9)

We calculate the likelihood of each HMM using these equations and select the most likely HMM as the recognition result.
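The forward recursion of equations (7)-(9) translates almost directly into code. The sketch below is a minimal numpy implementation for computing P(O | λ); the function name and the toy two-state model in the usage are my own illustrative choices, and a production implementation would also scale the forward variables to avoid underflow on long sequences.

```python
import numpy as np

def forward_likelihood(A, B, pi, obs):
    """P(O | lambda) for a discrete HMM via the forward algorithm.

    A:  (N, N) state transition matrix
    B:  (N, M) symbol output matrix
    pi: (N,) initial state distribution
    obs: sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]          # eq. (7): alpha_1(i)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # eq. (8): induction step
    return alpha.sum()                 # eq. (9): sum over final states
```

For recognition, this likelihood is computed once per trained model and the class with the largest value is selected, as in equation (5).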
For the learning stage, each HMM must be trained so that it is most likely to produce the symbol patterns of its category. Training an HMM means optimizing the parameters λ = (A, B, π) of the model to maximize the probability P(O | λ) of the observation sequence. The Baum-Welch algorithm is used for this estimation. We first define a number of variables. The backward variable is

\beta_t(i) = P(o_{t+1}, o_{t+2}, \ldots, o_T \mid q_t = i, \lambda) \qquad (10)

and can be solved inductively, in a manner similar to that used for the forward variable α_t(i):

\beta_T(i) = 1, \quad 1 \le i \le N \qquad (11)

\beta_t(i) = \sum_{j=1}^{N} a_{ij} \, b_j(o_{t+1}) \, \beta_{t+1}(j), \quad t = T-1, \ldots, 1, \; 1 \le i \le N \qquad (12)

P(O \mid \lambda) = \sum_{i=1}^{N} \pi_i \, b_i(o_1) \, \beta_1(i) \qquad (13)

To determine the optimal sequence of states we define the variable γ:

\gamma_t(i) = P(q_t = i \mid O, \lambda) = \frac{P(O, q_t = i \mid \lambda)}{\sum_{i=1}^{N} P(O, q_t = i \mid \lambda)} \qquad (14)

which can be written as

\gamma_t(i) = \frac{\alpha_t(i) \, \beta_t(i)}{\sum_{i=1}^{N} \alpha_t(i) \, \beta_t(i)} \qquad (15)

Finally, for the Baum-Welch algorithm we define

\xi_t(i, j) = P(q_t = i, q_{t+1} = j \mid O, \lambda) = \frac{\alpha_t(i) \, a_{ij} \, b_j(o_{t+1}) \, \beta_{t+1}(j)}{P(O \mid \lambda)} \qquad (16)

Using these quantities, the HMM parameters λ can be improved to \bar{\lambda}. The re-estimation equations from λ = (A, B, π) to \bar{\lambda} = (\bar{A}, \bar{B}, \bar{\pi}) are:

\bar{\pi}_i = \gamma_1(i) \qquad (17)

\bar{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)} \qquad (18)

\bar{b}_j(k) = \frac{\sum_{t=1, \, o_t = v_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)} \qquad (19)

Although the Baum-Welch algorithm does not always find the global maximum, it finds a local maximum of P(O | λ).

IV. THE PRINCIPLES OF OUR PROPOSED SURVEILLANCE SYSTEM

Overall, our proposed surveillance system includes several steps, some of which are pre-processing steps and others main steps. Fig. 1 shows the procedures of our proposed surveillance system for human action detection. As shown in the figure, we first apply a background subtraction algorithm to the input data to extract the silhouette of the body by foreground and background detection. Then, we extract the extremities by the star skeleton method.
For this purpose, we calculate the centroid of the contour and compute the distance from the centroid to each point on the contour in counter-clockwise order. From this distance sequence we find the local maxima of the external points, and we analyze the distance diagram of the contour to extract the important points, or extremities, of the human contour. To produce the feature vector, we describe the extremities in a polar coordinate system: we place the origin of the polar coordinate system at the centroid of the contour and build the feature vector from the positions of the points in each division, as shown in Fig. 2.

Fig. 1. Human Action Recognition Procedures

Fig. 2. The Extremity Points of the Human Body Silhouette in the Star Skeleton Method

We choose eight angular and three radial divisions, so we have a feature vector of length 24. The number of points counted in the angular and radial divisions produces the final feature vector of the body contour. Overall, we obtain one feature vector per video frame, and a time sequence by converting these feature vectors to discrete symbols. As noted, the HMM is an effective method for analyzing such sequences. We apply the leave-one-out method in the training step: in each run, one of the video samples is chosen as the test sample and the others are used for training; each time, the test sample changes and the remaining samples are used for training. As described, the number of important extremity points is counted and saved in a feature vector; this information is then used as the input to the training stage, where the HMM parameters are trained. To obtain the HMM, we use the Baum-Welch algorithm and behavioral classification to fit suitable HMM parameters from the feature vectors. After training the HMM parameters for each class, in each stage of the leave-one-out method we test a new behavior sample.
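The 8 x 3 polar division described above (eight angular by three radial bins, giving a 24-element feature vector) can be sketched as follows. The function name, the bin ordering, and the `r_max` normalization radius are illustrative assumptions, since the paper does not specify how the radial divisions are scaled.

```python
import numpy as np

def polar_feature_vector(points, centroid, r_max, n_angle=8, n_radius=3):
    """24-bin polar histogram of contour extremity points.

    Counts points in each (angle, radius) cell of a polar grid centered
    on the body centroid: 8 angular x 3 radial divisions by default.
    """
    pts = np.asarray(points, dtype=float) - np.asarray(centroid, dtype=float)
    ang = np.arctan2(pts[:, 1], pts[:, 0]) % (2 * np.pi)
    rad = np.hypot(pts[:, 0], pts[:, 1])
    # clamp to the last bin so points exactly at the boundary stay in range
    a_bin = np.minimum((ang / (2 * np.pi) * n_angle).astype(int), n_angle - 1)
    r_bin = np.minimum((rad / r_max * n_radius).astype(int), n_radius - 1)
    feat = np.zeros(n_angle * n_radius, dtype=int)
    for a, r in zip(a_bin, r_bin):
        feat[a * n_radius + r] += 1
    return feat
```

One such vector per frame, quantized to a discrete symbol, yields the observation sequence fed to the HMM.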
For this purpose, we program a function that takes new videos as inputs, compares the features extracted from their frames against the trained HMM of each class or action, and obtains a probability for each class. Finally, the system selects the class that best matches the desired action, and this class is labeled as the classification result for that action.

V. SIMULATION RESULTS OF CLASSIFICATION FOR BEHAVIORAL SURVEILLANCE DETECTION

In this paper, we focus on behaviors that are useful for elderly care. To examine our method, we collected the dataset shown in Fig. 3. We consider a set of actions comprising falling from the bed, falling from the chair, collapsing, sitting, and bending, performed by several persons. This collection consists of five different actions. For every action, we examine seven different samples, so altogether we use 35 different video sequences. As noted, our surveillance system finally selects the class that best matches the desired action and labels it as the result. The simulations were carried out on a computer system with Windows 7, x64, Core i5, 2.13 GHz, and 4 GB RAM.

Fig. 4 shows our system's recognition accuracy for the different samples of each action. As shown in Fig. 4, the system recognizes actions 1, 4, and 5 completely correctly, but makes mistakes in recognizing actions 2 and 3, because they are somewhat similar to each other.

Fig. 3. Our Dataset of Care-related Behaviors, Comprising Five Actions

Fig. 4. Our Proposed System's Recognition Accuracy for Different Samples of Each Action

Table 1 shows the accuracy of our surveillance system as a confusion matrix. In addition, Fig. 5 exhibits a color-coded bar chart of the correct and incorrect detection accuracy for each action, derived from the confusion matrix. The summation of results shows 94% accuracy in correct action detection. The proposed method works as a surveillance system.
When the input sequence is checked, if its features are similar, with high probability, to an abnormal behavior such as falling from the bed, falling from the chair, or collapsing, the action is labeled as abnormal behavior and an alarm is activated.

To show the efficiency of the proposed approach, we have compared our surveillance method, with its 94% accuracy, to its counterparts [24, 25], which are briefly introduced in the related work section. Ref. [24] is based on a duration-like HMM classification approach, and its test stimuli are similar to ours; for abnormal detection, that approach reported 90% accuracy. The multi-class SVM of [25] reported 88.8% accuracy for abnormal behavior detection.

Table 1. The Confusion Matrix of the Dataset

Fig. 5. Correct and Incorrect Detection Accuracy for Each Action

VI. CONCLUSIONS

In this paper, we propose a high-accuracy behavioral surveillance system for elderly care. Abnormal behavior detection in our proposed system works on the basis of HMM classification. In our system, after the pre-processing steps, the star skeleton method is used to extract body features and extremities. Then, suitable feature vectors are generated in a polar coordinate system. Finally, this information is applied to the input of the HMM classifier, which is able to detect and label each input action. The simulation results show the efficiency of our method in correctly detecting five different actions as well as in abnormal detection. The accuracy of our method in elderly-care surveillance is 94%, an improvement over the previous similar works presented in Refs. [24] and [25], with 90% and 88.8% accuracy respectively.

REFERENCES
[1] Teddy Ko, A Survey on Behavior Analysis in Video Surveillance for Homeland Security Applications, 37th IEEE Applied Imagery Pattern Recognition Workshop, pp. 1-8, April 2008.
[2] S. Mhatre, S. Varma, R.
Nikhare, Visual Surveillance Using Absolute Difference Motion Detection, International Conference on Technologies for Sustainable Development (ICTSD), pp. 1-5, 2015.
[3] Hajer Rahali, Zied Hajaiej, Noureddine Ellouze, Robust Features for Speech Recognition using Temporal Filtering Technique in the Presence of Impulsive Noise, International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol. 6, No. 11, pp. 17-24, October 2014.
[4] S. Rautaray, A. Agrawal, Real Time Multiple Hand Gesture Recognition System for Human Computer Interaction, International Journal of Intelligent Systems and Applications (IJISA), Vol. 4, No. 5, pp. 56-64, May 2012.
[5] J. Huang, S. Hsu and C. Huang, Human Upper Body Posture Recognition and Upper Limbs Motion Parameters Estimation, IEEE Signal and Information Processing Association Annual Summit and Conference, pp. 1-9, 2013.
[6] Shahrizat Shaik Mohamed, Nooritawati Md Tahir, Ramli Adnan, Background Modeling and Background Subtraction Performance for Object Detection, 6th International Colloquium on Signal Processing and Its Applications (CSPA), pp. 1-6, 2010.
[7] Al Mansur, Yasushi Makihara and Yasushi Yagi, Action Recognition using Dynamics Features, International Conference on Robotics and Automation, pp. 4020-4025, 2011.
[8] Chun-Hua Hu, Song-Lin Wo, An efficient method of human behavior recognition in smart environments, International Conference on Computer Application and System Modeling (ICCASM), Vol. 12, pp. 690-693, 2010.
[9] Xin Yuan, Xubo Yang, A Robust Human Action Recognition System using Single Camera, International Conference on Computational Intelligence and Software Engineering, pp. 1-4, 2009.
[10] Chih-Chiang Chen, Jun-Wei Hsieh, Yung-Tai Hsu, Chuan-Yu Huang, Segmentation of Human Body Parts Using Deformable Triangulation, 18th International Conference on Pattern Recognition (ICPR'06), Vol. 1, pp. 355-358, 2006.
[11] M.A. Wajeed, T.
Adilakshami, Semi-supervised text classification using enhanced KNN algorithm, World Congress on Information and Communication Technologies (WICT), pp. 138-142, 2011.
[12] Sarvesh Vishwakarma, Anupam Agrawal, Framework for Human Action Recognition using Spatial Temporal based Cuboids, International Conference on Image Information Processing (ICIIP), pp. 1-6, 2011.
[13] Chen Junli, Jiao Licheng, Classification Mechanism of Support Vector Machines, 5th International Conference on Signal Processing Proceedings (WCCC-ICSP), Vol. 3, pp. 1556-1559, 2000.
[14] Megha D Bengalur, Human Activity Recognition using Body Pose Features and Support Vector Machine, International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 1970-1975, 2013.
[15] Lawrence R. Rabiner, A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, Proceedings of the IEEE, Vol. 77, pp. 257-286, 1989.
[16] Zia Moghaddam and Massimo Piccardi, Training Initialization of Hidden Markov Models in Human Action Recognition, IEEE Trans. on Automation Science and Engineering, Vol. 11, pp. 394-508, 2014.
[17] N.K. Suryadevara, S.C. Mukhopadhyay, R. Wang, R.K. Rayudu, Forecasting the behavior of an elderly using wireless sensors data in a smart home, Engineering Applications of Artificial Intelligence (Elsevier), Vol. 26, pp. 2641-2652, November 2013.
[18] N. Noury, T. Hadidi, Computer simulation of the activity of the elderly person living independently in a Health Smart Home, Computer Methods and Programs in Biomedicine (Elsevier), Vol. 108, pp. 1216-1228, December 2012.
[19] Dongping Zhang, Jiao Xu, Yafei Lu, Huailiang
Peng, Dynamic Model Behavior Analysis of Small Groups, IEEE Conference on Wireless Communications & Signal Processing (WCSP), pp. 1-6, 2013.
[20] Koichiro Goya, Xiaoxue Zhang, Kouki Kitayama, A Method for Automatic Detection of Crimes for Public Security by Using Motion Analysis, IEEE Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 736-741, 2009.
[21] Zafar Ali Khan, Won Sohn, Feature Extraction and Dimensions Reduction using R transform and Principal Component Analysis for Abnormal Human Activity Recognition, 6th International Conference on Advanced Information Management and Service (IMS), pp. 253-258, 2010.
[22] Homa Foroughi, Hadi Sadoghi Yazdi, Hamidreza Pourreza, Malihe Javidi, An Eigenspace-Based Approach for Human Fall Detection Using Integrated Time Motion Image and Multi-class Support Vector Machine, 4th International Conference on Intelligent Computer Communication and Processing (ICCP), pp. 83-90, 2008.
[23] C. Rougier, J. Meunier, A. St-Arnaud, J. Rousseau, Fall Detection from Human Shape and Motion History using Video Surveillance, 21st International Conference on Advanced Information Networking and Applications Workshops, Vol. 2, pp. 875-880, 2007.
[24] Pau-Choo Chung, Chin-De Liu, A Daily Behavior Enabled Hidden Markov Model for Human Behavior Understanding, Pattern Recognition (Elsevier), Vol. 41, pp. 1572-1580, May 2008.
[25] Homa Foroughi, Mohamad Alishahi, Hamidreza Pourreza, Maryam Shahinfar, Distinguishing Fall Activities using Human Shape Characteristics, Technological Developments in Education and Automation (Springer), pp. 523-528, 2010.
[26] Junji Yamato, Jun Ohya, Kenichiro Ishii, Recognizing Human Action in Time Sequential Images using Hidden Markov Model, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1992.
[27] X.D. Huang, Y. Ariki, and M.A. Jack, "Hidden Markov Models for Speech Recognition", Edinburgh Univ. Press, 1990.

Authors' Profiles

Adeleh Farzad received her B.S.
degree in Electronic Engineering from the Islamic Azad University of Lahijan, Iran, in 2012. She was admitted to the M.S. program in Electronic Engineering at the Islamic Azad University of Rasht, Iran, in 2013. Her current research interests include human behavior analysis and classification in intelligent surveillance systems and abnormal action detection.

Rahebeh Niaraki Asli received her B.S. and M.S. degrees in Electronic Engineering from the University of Guilan, Rasht, Iran, in 1995 and 2000, respectively. She received her Ph.D. degree in Electrical Engineering from the Iran University of Science and Technology, Tehran, Iran, in 2006. From 1995 to 2002 she worked in the electronics laboratories of the Department of Electrical Engineering at the University of Guilan. From 2002 to 2006, she was with the circuit design research group at the Iran University of Science and Technology Electronic Research Center (ERC) and the CAD research group of Tehran University. Since 2006, she has been an Assistant Professor in the Department of Electrical Engineering, Engineering Faculty, University of Guilan. Her current research interests include reliable and testable VLSI design, object tracking, and machine vision systems.

How to cite this paper: Adeleh Farzad, Rahebeh Niaraki Asli, "Recognition and Classification of Human Behavior in Intelligent Surveillance Systems using Hidden Markov Model", IJIGSP, vol. 7, no. 12, pp. 31-38, 2015. DOI: 10.5815/ijigsp.2015.12.05
Analysis of the Development and Difficult Problems of Visual Tracking Technology

ZHANG Jin (School of Information and Electrical Engineering, Shandong Jianzhu University, 250010)

Abstract: This paper introduces visual tracking, an emerging technology in the field of computer vision. It mainly describes the emergence and development of visual tracking technology, and also discusses the difficult problems in tracking and approaches to solving them.

Keywords: visual tracking, object detection, object identification, object tracking

In today's information society, with the development of computer networks, communications, and microelectronics, computer images have become widely favored for their intuitive, content-rich nature. In many application fields, however, relying entirely on human vision to obtain information also demands arduous labor. What is needed is an intelligent computer system that imitates the human eye to acquire images of the outside world and imitates the human brain to analyze and understand the visual information, so as to respond accordingly. Research on such systems has attracted growing attention from scholars and experts; this is the visual tracking technology introduced here. Visual tracking has a wide range of uses and has already been applied in many areas of computer vision, such as video surveillance, visual user interfaces, virtual reality, intelligent buildings, and tracking-based video compression.
Methods for V-Shaped Business Object Recognition

Business object recognition is essential in various industries to automate processes, improve efficiency, and provide better customer service. V-shaped objects are common in many business settings, such as product packaging, machinery parts, and architectural structures. Recognizing these objects accurately helps streamline operations and ensures precision in tasks.

There are various methods for V-shaped object recognition, including computer vision technology, machine learning algorithms, and image processing techniques. Computer vision technology analyzes images or videos to identify and understand objects and patterns. Machine learning algorithms can be trained on labeled data to recognize V-shaped objects based on specific features or characteristics.
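As an informal illustration of the machine-learning route, the following is a minimal sketch of training a nearest-centroid classifier to separate V-shaped from non-V-shaped objects. The features (opening angle, symmetry score) and the labeled data are entirely hypothetical stand-ins, not from any real dataset:

```python
import numpy as np

# Hypothetical labeled training data: each row is (opening angle in degrees,
# symmetry score); label 1 = V-shaped, label 0 = not V-shaped.
X = np.array([[60.0, 0.9], [75.0, 0.8], [55.0, 0.95],    # V-shaped examples
              [170.0, 0.2], [150.0, 0.3], [160.0, 0.1]])  # non-V examples
y = np.array([1, 1, 1, 0, 0, 0])

# "Training": compute one centroid per class in feature space.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(sample):
    """Assign the class whose centroid is nearest to the sample."""
    s = np.asarray(sample, dtype=float)
    return min(centroids, key=lambda c: np.linalg.norm(s - centroids[c]))

print(predict([65.0, 0.85]))   # narrow, symmetric: classified as V-shaped (1)
print(predict([155.0, 0.25]))  # wide, asymmetric: classified as non-V (0)
```

A real system would of course use richer features extracted from images, but the train-on-labeled-data, predict-on-new-samples structure is the same.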
Abstract—A key problem of an Image Based Visual Servo (IBVS) system is how to identify and track objects in a series of images. In this paper, a scale-invariant image feature detector and descriptor, the Scale-Invariant Feature Transform (SIFT), is utilized to achieve object tracking that is robust to rotation, scaling, and changes of illumination. To the best of our knowledge, this paper represents the first work to apply the SIFT algorithm to visual servoing for robust mobile robot tracking. First, the SIFT method is used to generate the feature points of an object template, and a series of images is acquired while the robot is moving. Second, a feature matching method is applied to match the features between the template and the images. Finally, based on the locations of the matched feature points, the location of the object is approximated in the images of the camera views. This object identification and tracking algorithm is applied in an Image-Based Visual Servo (IBVS) system to provide the location of the object in the feedback loop. In particular, the IBVS controller determines the desired wheel speeds $\omega_1$ and $\omega_2$ of a wheeled mobile robot and commands the low-level controller of the robot accordingly. The IBVS controller thus drives the robot toward a target object until the location of the object reaches the desired location in the image. The IBVS system is implemented and tested on a mobile robot with an on-board camera in our laboratory. The results demonstrate satisfactory performance of the object identification and tracking algorithm. Furthermore, a MATLAB simulation is used to confirm the stability and convergence of the IBVS controller.

I. INTRODUCTION

Visual Servoing (VS), also known as Vision Based Robot Control, is a technique which utilizes vision information in feedback to control the motion of a robot.
Figure 1 shows the block diagram of a typical visual servo system where vision (through a camera) is a part of the control system. In the control loop, a vision system is used for robot control by providing feedback information about the state of the environment to the controller. Vision is a powerful sensor, as it can mimic the human sense of vision and allow non-contact measurement of the working environment. Accordingly, much attention of the research community has been directed at applying vision as a feedback sensor in industrial control applications. Among visual servoing projects, quite well known is the "DARPA Urban Challenge," which involves teams building autonomous vehicles capable of driving in traffic and performing complex maneuvers such as merging, passing, parking, and negotiating intersections in an urban environment. In these vehicles, the camera is the main sensor for providing feedback from the vehicle environment to the vehicle control system.

Manuscript received October 30, 2009. This work has been supported by research grants from the Canada Research Chairs Program, the Natural Sciences and Engineering Research Council (NSERC) of Canada, the Canada Foundation for Innovation (CFI), and the British Columbia Knowledge Development Fund (BCKDF). Haoxiang Lang is with the Mechanical Engineering Department, the University of British Columbia, Vancouver, BC, V6T 1Z4, Canada (phone: 604-822-4850; fax: 604-827-3524; e-mail: hxlang@mech.ubc.ca). Ying Wang is now with the Division of Engineering, Southern Polytechnic State University, Marietta, GA 30060 USA (e-mail: ywang8@). Clarence W. de Silva is with the Mechanical Engineering Department, the University of British Columbia, Vancouver, BC, V6T 1Z4, Canada (e-mail: desilva@mech.ubc.ca).

II. RELATED RESEARCH

Vision-based automated object detection has been playing a significant role in industrial and service applications.
Studies have focused on detecting objects efficiently by using features such as color, shape, size, and texture. However, a number of problems arise when using these methods to process real-world images under different conditions and environments. Most recent machine vision algorithms do not necessarily possess adequate performance for common practical use.

Seelen et al. [1] have used symmetry analysis and model matching to detect the rear, front, and side views of a group of object types by measuring their inherent vertical symmetric structure. In their paper, the authors mention that the method has to be robust against changes in illumination and slight differences between the right and left parts of an object. It follows that the symmetry-based method is challenged under real operating conditions.

As is well known, color is a very useful feature in the object detection field. However, few existing detection and tracking applications have used color for object recognition, because color-based recognition is complicated and the existing color machine vision techniques have not been shown to be effective. Buluswar and Draper [2] have presented a technique for achieving effective real-time color recognition in outdoor scenes. It is claimed that this method has been successfully tested in several domains, such as autonomous highway navigation, off-road navigation, and target detection for unmanned military vehicles.

Bertozzi et al. [3] have proposed a corner-based method to hypothesize vehicle locations. The system presented in their paper was composed of a pipeline of two different engines: PAPRICA, a massively parallel architecture for efficient execution of low-level image processing tasks, improved by the integration of a specific feature for direct data I/O; and a traditional serial architecture running medium-level tasks aimed at detecting the vehicle position in the sequence.
A preliminary version of the system was reported and demonstrated on the MOB-LAB land vehicle.

The use of constellations of vertical and horizontal edges has been shown to be a strong cue for hypothesizing objects in some cases. In an effort to find pronounced vertical structure in an image, Matthews et al. [4] used edge detection to find strong vertical edges. To localize the left and right positions of a vehicle, they computed the vertical profile of the edge image, followed by smoothing with a triangular filter. By finding the local maximum peaks of the vertical profile, they claimed that they could find the left and the right positions of a vehicle.

Template-based methods use a predefined pattern of the object class and perform correlation between the image and the template. Handmann et al. [5] proposed a template based on the observation that the rear/frontal view of a vehicle has a "U" shape. During verification, they considered a vehicle to be present in the image if they could find the "U" shape. Ito et al. [6] used a very loose template to recognize pronounced vertical/horizontal edges and existing symmetry. Due to the simplicity of the template, these two papers did not seek very accurate results.

Appearance-based methods learn the characteristics of object appearance from a set of training images which capture the variability in the object class. Compared to the approaches discussed above, they are the most accurate and reliable. In particular, Lowe [7, 8] proposed an algorithm for object recognition and tracking, called the Scale Invariant Feature Transform (SIFT), which uses a class of local image features. In his algorithm, the detected features are invariant to changes in illumination, noise, rotation, and scaling; it has been shown that this approach has high robustness and reliability.
In the present paper, the SIFT algorithm is utilized to enable a mobile robot to track an object in the camera view and feed back the environment information, in order to control the robot to a goal location. To the best of our knowledge, this paper is the first work to apply the SIFT algorithm to visual servoing for robust mobile robot tracking.

III. OBJECT IDENTIFICATION AND TRACKING

The overall procedure of the tracking algorithm is shown in Figure 2. First, the original images captured from the camera are converted into grayscale images, which are represented by a matrix of unsigned 8-bit values. For the convenience of mathematical operations, each pixel is further converted into a double-precision floating-point value in the range 0.0 to 1.0 (0.0: black; 1.0: white).

Fig. 2. Flowchart of object identification.

The second step of the tracking algorithm requires a robust feature detector with high repeatability with respect to rotation, illumination, and scaling. Repeatability of a feature detector, which evaluates the geometric stability under different transformations of images, is an important criterion in choosing a good detector. It is the percentage of the detected features that are found again in a second image obtained by transforming the first one. For example, if the features detected in the first image can also be detected in the second image of a video stream by using the same detector, it can be concluded that the feature detector has high repeatability. In [9], Schmid et al. reviewed popular detectors and found that the improved Gaussian derivative version with the Harris operator had the best performance. Mikolajczyk [10] also found that the Laplacian of Gaussian (LoG) function, $\sigma^2 \nabla^2 G$, generated the most stable image features. However, Lowe proposed a Difference of Gaussian (DoG) detector [7, 8] which provides an approximation to the LoG at much lower computational expense.

As is well known, the scales of objects in different images are different and unknown.
Therefore, the features are required to be stable across different scales. For this purpose, a DoG pyramid is generated for the image at different scales, as shown in Figure 3.

Fig. 3. Difference of Gaussian (DoG) pyramid.

The general idea of generating the DoG pyramid is presented next. Suppose that the bottom-left image is the original. A Gaussian-blurred image is generated for the second layer by using the convolution

$L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$   (1)

where $\sigma = 0.5$ and

$G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}} = \left(\frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}}\right) \left(\frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{y^2}{2\sigma^2}}\right)$   (2)

The third, fourth, and fifth images are generated by Gaussian blurring of the previous image, and the DoG images on the right of Figure 3 are given by

$D(x, y, \sigma) = L(x, y, k\sigma) - L(x, y, \sigma)$   (3)

The feature points of an image are determined by finding the local extrema across scales. Figure 4 shows three DoG images at neighboring scales. Each pixel in a DoG image is compared with its 8 neighboring pixels in the same image and the 9 corresponding pixels in each of the two neighboring scales (9×2). If the checked pixel has the maximum or minimum value among these 27 pixels, it is selected as one of the interest points (features).

Fig. 4. Local extrema.

For each interest point, the gradient magnitude and orientation are calculated using

$g(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}$   (4)

$\theta(x, y) = \mathrm{atan2}\big(L(x, y+1) - L(x, y-1),\; L(x+1, y) - L(x-1, y)\big)$   (5)

After eliminating key points with low contrast and those along edges, a set of feature points is generated. Each key point has three attributes: location, gradient magnitude, and orientation.

Figure 5 shows the manner in which the SIFT feature descriptors are generated (feature representation). The red circle in the center represents an interest point. For illustration, the figure shows an 8×8 pixel window around the key point, divided into 2×2 sub-windows of 4×4 pixels each; in Lowe's implementation [8], a 16×16 window is divided into a 4×4 grid of sub-windows. For each sub-window, a gradient histogram over 8 orientation directions is calculated.
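As a concrete illustration of the histogram step, the following is a minimal numpy sketch (not the authors' implementation). It takes hypothetical gradient magnitudes and orientations over a 16×16 window, splits the window into a 4×4 grid of sub-windows, and accumulates an 8-bin orientation histogram per sub-window, weighted by gradient magnitude:

```python
import numpy as np

def sift_descriptor(mag, ori):
    """Build a 128-element SIFT-style descriptor from a 16x16 window.

    mag: (16, 16) gradient magnitudes around the keypoint
    ori: (16, 16) gradient orientations in radians, range [-pi, pi)
    """
    bins = 8
    hists = []
    for r in range(0, 16, 4):            # 4x4 grid of sub-windows
        for c in range(0, 16, 4):
            sub_mag = mag[r:r+4, c:c+4].ravel()
            sub_ori = ori[r:r+4, c:c+4].ravel()
            # Map each orientation to one of 8 bins covering [-pi, pi)
            idx = ((sub_ori + np.pi) / (2 * np.pi) * bins).astype(int) % bins
            hists.append(np.bincount(idx, weights=sub_mag, minlength=bins))
    vec = np.concatenate(hists)          # 16 sub-windows x 8 bins = 128
    norm = np.linalg.norm(vec)
    # Normalize so the descriptor is invariant to illumination changes
    return vec / norm if norm > 0 else vec

# Hypothetical gradient data for one keypoint window
rng = np.random.default_rng(0)
mag = rng.random((16, 16))
ori = rng.uniform(-np.pi, np.pi, (16, 16))
d = sift_descriptor(mag, ori)
print(d.shape)  # (128,)
```

The gradient magnitudes and orientations fed in here would come from equations (4) and (5) in practice; Lowe's full implementation additionally weights the window by a Gaussian and interpolates across bins, which this sketch omits.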
It follows that a SIFT feature vector contains 128 elements (4×4×8). Finally, the vector is normalized in order to be invariant to changes in illumination.

Fig. 5. SIFT feature descriptor.

In order to find the location of the object in an image, the SIFT features of both the current image from the camera view and the template image are generated. The matching points between the current image and the template are determined by searching for the minimum Euclidean distance.

IV. EXPERIMENT

The experimental equipment used in this paper is a Pioneer DX3™ mobile robot with an on-board camera. The objective of the experiment is to move the robot from its original location to the grasping location (goal location) using the camera feedback information. Figure 6 shows the block diagram of the Image Based Visual Servo (IBVS) system.

Fig. 6. Block diagram of the IBVS control system.

First, a kinematic model of the mobile robot (Pioneer robot) is generated. Then, a state-space model is determined using the robot kinematic model and the camera projection model, which represents the relationship between the robot velocity in the environment and the pixel velocity of the feature points in the camera views. The system drives the robot to the goal location by eliminating the pixel errors between the current location and the goal location. There are two requirements for achieving this objective:

1) A robust object tracking algorithm which can provide the locations of the object in the images in terms of pixel coordinates.

2) An IBVS control strategy which eliminates the pixel error between the current location and the goal location of the robot.

Figures 7 (b) and (c) show two views from the on-board camera. Figure 8 (a) shows the camera view when the mobile robot is away from the object (a book), and Figure 8 (b) shows the camera view when the robot arrives at the goal location (grasping location).
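The minimum-Euclidean-distance matching described in Section III, together with the centroid approximation of the object location used in the feedback loop, can be sketched as follows. The descriptors and keypoint coordinates here are hypothetical stand-ins, not outputs of the actual system:

```python
import numpy as np

def match_features(template_desc, image_desc):
    """For each template descriptor, find the image descriptor with the
    smallest Euclidean distance. Returns indices into image_desc."""
    # Pairwise distance matrix of shape (n_template, n_image)
    diff = template_desc[:, None, :] - image_desc[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    return np.argmin(dist, axis=1)

def object_location(matched_points):
    """Approximate the object location as the centroid (mean) of the
    matched feature-point pixel locations."""
    pts = np.asarray(matched_points, dtype=float)
    return pts.mean(axis=0)

# Hypothetical data: 4 template descriptors, 6 image descriptors (8-D here
# for brevity; real SIFT descriptors are 128-D) and 6 pixel locations.
rng = np.random.default_rng(1)
template_desc = rng.random((4, 8))
image_desc = rng.random((6, 8))
image_pts = rng.random((6, 2)) * 320   # (x, y) pixel coordinates

idx = match_features(template_desc, image_desc)
loc = object_location(image_pts[idx])
print(loc)  # (x, y) centroid of the matched points, fed back to the controller
```

In each control iteration, `loc` would play the role of the object location sent to the IBVS controller as feedback.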
First, the SIFT features of the object are extracted by using the algorithm discussed earlier. The matched feature points between the camera view at the goal location and the template are shown in Figure 8 (b). The approximate object location in the image is determined by calculating

$\left(\frac{\sum_{i=1}^{n} x_i}{n},\; \frac{\sum_{i=1}^{n} y_i}{n}\right)$   (6)

where $n$ is the number of matched feature points and $(x_i, y_i)$ are the locations of the matched feature points. In each iteration of the control loop, the on-board camera captures one image shot.

(a) SIFT keys of the object. (b) SIFT keys of the camera view at the starting point. (c) SIFT keys of the camera view at the goal location.
Fig. 7. Object identification and tracking.

(a) Feature matching between the current camera view and the template. (b) Feature matching between the camera view of the goal location and the template.
Fig. 8. Feature matching.

The SIFT features of the image are generated by using the feature detector and feature descriptor discussed before. The locations of the matched feature points in the current image are determined by matching the feature points in the current camera view with the feature points of the object template. The location of the object in the image can then be found by using (6). This information is sent to the IBVS controller as feedback, as shown in Figure 6.

Apart from the experimental verification of the vision-based control system developed in the present work, a MATLAB simulation was carried out. The system has two inputs (the linear velocity and angular velocity of the robot) and two outputs (the position of the feature point in the image frame). Typical simulation results are shown in Figure 9, which verify that the system is stable and converges to the desired values.

Fig. 9. Simulation results.

V. CONCLUSIONS

In this paper, a scale-invariant image feature detector and descriptor, called the Scale-Invariant Feature Transform (SIFT), was presented.
A feature matching method was applied to identify an object present in a series of camera images (a video stream). This algorithm of object identification and tracking was implemented in an Image Based Visual Servoing (IBVS) system to determine the location of a target object for position feedback control. The IBVS was utilized to drive a wheeled mobile robot toward a target object until the object reached the desired location in the image. Apart from the experimental verification, a MATLAB simulation was carried out to show that the system was stable and would converge.

REFERENCES

[1] W. V. Seelen, C. Curio, J. Gayko, U. Handmann, and T. Kalinke, "Scene analysis and organization of behavior in driver assistance systems," Proceedings of the IEEE International Conference on Image Processing, pp. 524-527, 2000.
[2] S. D. Buluswar and B. A. Draper, "Color machine vision for autonomous vehicles," J. System Architecture, pp. 317-325, 1997.
[3] M. Bertozzi, A. Broggi, and S. Castelluccio, "A real-time oriented system for vehicle detection," J. System Architecture, pp. 317-325, 1997.
[4] N. Matthews, P. An, D. Charnley, and C. Harris, "Vehicle detection and recognition in greyscale imagery," Control Engineering Practice, Vol. 4, pp. 473-479, 1996.
[5] U. Handmann, T. Kalinke, C. Tzomakas, M. Werner, and W. V. Seelen, "An image processing system for driver assistance," Image and Vision Computing, Vol. 18, pp. 367-376, 2000.
[6] T. Ito, K. Yamada, and K. Nishioka, "Understanding driving situations using a network model," Intelligent Vehicles, pp. 48-53, 1995.
[7] D. G. Lowe, "Object recognition from local scale-invariant features," International Conference on Computer Vision, Corfu, Greece, pp. 1150-1157, September 1999.
[8] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, 2004.
[9] C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of interest point detectors," International Journal of Computer Vision, Vol.
37, No. 2, pp. 151-172, 2000.
[10] K. Mikolajczyk and C. Schmid, "An affine invariant interest point detector," European Conference on Computer Vision, Copenhagen, Denmark, pp. 128-142, 2002.