Computer Vision/Computer Graphics Collaboration Techniques: 5th International Conference, MIRAGE 2011
Recommended tools for computer vision
Computer vision refers to acquiring, processing, and analysing the information contained in images or videos with a computer, so that the computer can understand and interpret visual information. With the rapid development of computer vision, many practical tools and libraries have appeared, which greatly help developers and researchers in the field. In this article I recommend several commonly used computer vision tools to help you speed up development and improve productivity.
1. OpenCV
OpenCV (Open Source Computer Vision) is one of the most widely used open-source libraries in computer vision. It provides a rich set of image-processing and computer-vision algorithms covering image processing, feature extraction, object detection, face recognition, motion tracking, and more. OpenCV supports several programming languages, including C++, Python, and Java, which makes it easy to use and to integrate into different development environments. It also works well alongside other vision libraries and tools.
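To make this concrete, here is a minimal, hedged sketch of calling OpenCV from Python, assuming the opencv-python package is installed; the file name "input.jpg" is just an illustrative placeholder.

```python
import cv2

# Load an image from disk (placeholder path for illustration).
img = cv2.imread("input.jpg")
if img is None:
    raise FileNotFoundError("input.jpg not found")

# Convert to grayscale and run the Canny edge detector
# (the two thresholds control hysteresis edge linking).
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)

# Save the result; cv2.imshow could be used instead in a desktop session.
cv2.imwrite("edges.jpg", edges)
```

The same calls exist under very similar names in the C++ and Java bindings, which is part of what makes OpenCV easy to integrate across environments.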
2. TensorFlow
TensorFlow is an open-source machine-learning framework that is also widely used for computer vision tasks. It provides computation graphs, a large collection of deep-learning building blocks, and efficient computation and optimization tools, which make training and deploying vision models easier. With TensorFlow you can quickly build and train models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for image classification, object detection, semantic segmentation, and other tasks.
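As an illustration, the following sketch defines and compiles a small image-classification CNN with the Keras API bundled in TensorFlow; the input size (32x32 RGB) and the number of classes (10) are arbitrary choices for the example, not values implied by the text above.

```python
import tensorflow as tf

# A small convolutional network for 10-class image classification.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then be a single call on real data, e.g.:
# model.fit(train_images, train_labels, epochs=5, validation_split=0.1)
```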
3. PyTorch
PyTorch is another popular open-source deep-learning framework that is widely used in computer vision. Compared with TensorFlow, PyTorch emphasizes flexibility and ease of use. Its dynamic computation graphs and powerful automatic differentiation let developers build and debug models more intuitively. PyTorch also ships with many pretrained models and companion tools, such as TorchVision, for image classification, object detection, image generation, and related tasks.
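For example, here is a hedged sketch of image classification with a pretrained TorchVision model; "photo.jpg" is a placeholder path, and newer TorchVision releases replace the pretrained=True flag with a weights argument.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a ResNet-18 pretrained on ImageNet (newer releases: weights=...).
model = models.resnet18(pretrained=True)
model.eval()

# Standard ImageNet preprocessing: resize, crop, convert, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    logits = model(img)
print(logits.argmax(dim=1))  # index of the most likely ImageNet class
```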
4. CUDA
CUDA is NVIDIA's parallel-computing platform and programming interface, and it can greatly speed up computer vision workloads. By exploiting the parallel computing power of the GPU, CUDA significantly improves the performance of computer vision algorithms; most deep-learning frameworks, including TensorFlow and PyTorch, use it as their GPU back end.
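In day-to-day computer vision work, CUDA is usually reached through a framework rather than written by hand; a minimal sketch using PyTorch's torch.cuda interface:

```python
import torch

# Use the GPU if a CUDA-capable device and driver are available,
# otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("running on:", device)

# A large matrix multiplication; on a GPU this executes as parallel CUDA kernels.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b

if device.type == "cuda":
    torch.cuda.synchronize()  # kernels launch asynchronously; wait for completion
print(c.shape)
```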
A third field which plays an important role is neurobiology, specifically the study of the biological vision system. Over the last century, there has been extensive study of eyes, neurons, and the brain structures devoted to the processing of visual stimuli in both humans and various animals. This has led to a coarse, yet complicated, description of how "real" vision systems operate in order to solve certain vision-related tasks. These results have led to a subfield within computer vision where artificial systems are designed to mimic the processing and behaviour of biological systems, at different levels of complexity. Also, some of the learning-based methods developed within computer vision have their background in biology.

Yet another field related to computer vision is signal processing. Many methods for processing one-variable signals, typically temporal signals, can be extended in a natural way to the processing of two-variable or multi-variable signals in computer vision. However, because of the specific nature of images, many methods developed within computer vision have no counterpart in the processing of one-variable signals. A distinct character of these methods is that they are non-linear, which, together with the multi-dimensionality of the signal, defines a subfield of signal processing as a part of computer vision.

Beside the views on computer vision mentioned above, many of the related research topics can also be studied from a purely mathematical point of view. For example, many methods in computer vision are based on statistics, optimization, or geometry. Finally, a significant part of the field is devoted to the implementation aspect of computer vision: how existing methods can be realized in various combinations of software and hardware, or how these methods can be modified in order to gain processing speed without losing too much performance.

The fields most closely related to computer vision are image processing, image analysis, robot vision, and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques used and developed in these fields are more or less identical, which can be interpreted as meaning that there is only one field with different names. On the other hand, research groups, scientific journals, conferences, and companies find it necessary to present or market themselves as belonging specifically to one of these fields, and hence various characterizations which distinguish each field from the others have been presented.

The following characterizations appear relevant but should not be taken as universally accepted:

Image processing and image analysis tend to focus on 2D images and on how to transform one image into another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither requires assumptions about, nor produces interpretations of, the image content.

Computer vision tends to focus on the 3D scene projected onto one or several images, e.g., on how to reconstruct structure or other information about the 3D scene from one or several images.
Computer vision often relies on more or less complex assumptions about the scene depicted in an image.

Machine vision tends to focus on applications, mainly in industry, e.g., vision-based autonomous robots and systems for vision-based inspection or measurement. This implies that image sensor technologies and control theory are often integrated with the processing of image data to control a robot, and that real-time processing is emphasized by means of efficient implementations in hardware and software. It also implies that external conditions such as lighting can be, and often are, more controlled in machine vision than in general computer vision, which can enable the use of different algorithms.

There is also a field called imaging which primarily focuses on the process of producing images.

Among the application areas of computer vision, one of the newer ones is autonomous vehicles; space exploration already makes use of autonomous vehicles that rely on computer vision, e.g., NASA's Mars Exploration Rover.

Other application areas include:
- Support of visual effects creation for cinema and broadcast, e.g., camera tracking (matchmoving).
- Surveillance.

Typical tasks of computer vision

Each of the application areas described above employs a range of computer vision tasks: more or less well-defined measurement or processing problems which can be solved using a variety of methods. Some examples of typical computer vision tasks are presented below.

Recognition

The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. This task can normally be solved robustly and without effort by a human, but is still not satisfactorily solved in computer vision for the general case: arbitrary objects in arbitrary situations. The existing methods can at best solve it only for specific objects, such as simple geometric objects (e.g., polyhedra), human faces, printed or hand-written characters, or vehicles, and in specific situations, typically described in terms of well-defined illumination, background, and pose of the object relative to the camera.

Different varieties of the recognition problem are described in the literature:
- Recognition: one or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene.
- Identification: an individual instance of an object is recognized. Examples: identification of a specific person's face or fingerprint, or identification of a specific vehicle.
- Detection: the image data are scanned for a specific condition. Examples: detection of possible abnormal cells or tissues in medical images, or detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used to find smaller regions of interesting image data, which can then be analysed by more computationally demanding techniques to produce a correct interpretation.

Several specialized tasks based on recognition exist, such as:
- Content-based image retrieval: finding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative to a target image (give me all images similar to image X), or in terms of high-level search criteria given as text input (give me all images which contain many houses, are taken during winter, and have no cars in them).
- Pose estimation: estimating the position or orientation of a specific object relative to the camera. An example application would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly-line situation.
- Optical character recognition (OCR): identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g., ASCII).

Motion

Several tasks relate to motion estimation, in which an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene. Examples of such tasks are:
- Egomotion: determining the 3D rigid motion of the camera.
- Tracking: following the movements of objects (e.g., vehicles or humans).

Scene reconstruction

Given one or (typically) more images of a scene, or a video, scene reconstruction aims at computing a 3D model of the scene. In the simplest case the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model.

Image restoration

The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest approaches to noise removal are various types of filters, such as low-pass filters or median filters. More sophisticated methods assume a model of what the local image structures look like, a model which distinguishes them from the noise. By first analysing the image data in terms of local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained than with the simpler approaches.
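As an illustration of the point above, the sketch below compares a linear Gaussian low-pass filter with a non-linear median filter in OpenCV; "noisy.png" is a hypothetical input degraded by salt-and-pepper noise.

```python
import cv2

# Placeholder path: an image degraded by impulse ("salt-and-pepper") noise.
noisy = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)
if noisy is None:
    raise FileNotFoundError("noisy.png not found")

# Linear low-pass filtering blurs noise and edges alike; the non-linear
# median filter removes impulse noise while preserving edges better.
smoothed = cv2.GaussianBlur(noisy, (5, 5), 1.5)
denoised = cv2.medianBlur(noisy, 5)

cv2.imwrite("gaussian.png", smoothed)
cv2.imwrite("median.png", denoised)
```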
Computer vision systems

The organization of a computer vision system is highly application dependent. Some systems are stand-alone applications which solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or whether some part of it can be learned or modified during operation. There are, however, typical functions found in many computer vision systems, illustrated by the sketch after this list:

- Image acquisition: a digital image is produced by one or several image sensors which, besides various types of light-sensitive cameras, include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands (grey or colour images), but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or nuclear magnetic resonance.
- Pre-processing: before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data so that it satisfies certain assumptions implied by the method. Examples are re-sampling to ensure that the image coordinate system is correct, noise reduction to ensure that sensor noise does not introduce false information, contrast enhancement to ensure that relevant information can be detected, and scale-space representation to enhance image structures at locally appropriate scales.
- Feature extraction: image features at various levels of complexity are extracted from the image data. Typical examples of such features are lines, edges, and ridges, and localized interest points such as corners or blobs. More complex features may be related to texture, shape, or motion.
- Detection/segmentation: at some point in the processing a decision is made about which image points or regions of the image are relevant for further processing. Examples are the selection of a specific set of interest points, or the segmentation of one or multiple image regions which contain a specific object of interest.
- High-level processing: at this step the input is typically a small set of data, for example a set of points or an image region which is assumed to contain a specific object. The remaining processing deals with, for example, verification that the data satisfy model-based and application-specific assumptions, estimation of application-specific parameters such as object pose or object size, and classification of a detected object into different categories.
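The following sketch strings together the first three stages of such a pipeline (acquisition, pre-processing, feature extraction) with OpenCV; the camera index and parameter values are arbitrary illustrative choices, and the later stages are only indicated in comments.

```python
import cv2

# Image acquisition: grab one frame from the default camera
# (a video file path could be passed to VideoCapture instead).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("no frame captured")

# Pre-processing: grayscale conversion and mild Gaussian smoothing.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (3, 3), 0)

# Feature extraction: ORB interest points (corner-like points with descriptors).
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(gray, None)
print(f"{len(keypoints)} interest points detected")

# Detection/segmentation and high-level processing would then operate on the
# keypoints and descriptors, e.g. matching them against a model of a known object.
```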
Fundamentals of computer graphics: the concepts of graphics, images, primitives, pixels, and bitmaps

I. Computer graphics (Computer Graphics)

1. What is computer graphics?
Computer graphics is the discipline that studies the principles, methods, and techniques for displaying, generating, and processing graphics with a computer. The IEEE definition: "Computer graphics is the art or science of producing graphical images with the aid of a computer."

2. Research topics in computer graphics
The research topics of computer graphics are very broad: graphics hardware, graphics standards, interactive techniques, raster graphics generation algorithms, curve and surface modeling, solid modeling, photorealistic rendering and display algorithms, non-photorealistic rendering, as well as scientific visualization, computer animation, natural scene simulation, virtual reality, and so on. Put simply, the main subject of computer graphics is how to represent graphics in a computer, together with the principles and algorithms for computing, processing, and displaying graphics with a computer.
A graphic is usually composed of geometric elements such as points, lines, surfaces, and solids, together with non-geometric attributes such as grey level, colour, line style, and line width. From the point of view of processing techniques, graphics fall into two main categories: those represented by line information, such as engineering drawings, contour maps, and wireframe renderings of surfaces; and shaded images, i.e., what are usually called realistic (photorealistic) images.
The main goal of computer graphics is to produce realistic images with the computer. To do this, one must build a geometric representation of the scene to be depicted and apply some illumination model to compute the lighting effect under assumed light sources, textures, and material properties. Computer graphics is therefore closely related to computer-aided geometric design. Curve/surface modeling and solid modeling techniques, which can represent geometric scenes, are also among its main research topics.
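As a small, self-contained illustration of the illumination-model step mentioned above, the sketch below computes Lambertian (diffuse) shading for a single surface point under one directional light; all vectors and material values are made-up numbers for the example.

```python
import numpy as np

def lambert_shade(normal, light_dir, light_color, albedo):
    """Diffuse (Lambertian) shading: reflected colour ~ albedo * light * max(0, N.L)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    n_dot_l = max(0.0, float(np.dot(n, l)))
    return albedo * light_color * n_dot_l

# Illustrative values: an upward-facing surface lit from above and to one side.
normal = np.array([0.0, 1.0, 0.0])
light_dir = np.array([0.5, 1.0, 0.3])     # direction towards the light source
light_color = np.array([1.0, 1.0, 1.0])   # white light
albedo = np.array([0.8, 0.2, 0.2])        # reddish diffuse material

print(lambert_shade(normal, light_dir, light_color, albedo))
```

Real renderers evaluate terms like this per light and per pixel, combined with textures and more elaborate reflection models.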
At the same time, since the results of realistic image synthesis are delivered as digital images, computer graphics is also closely connected with image processing.

3. Main application areas of computer graphics
1) Computer-aided design and manufacturing (Computer Aided Design / Computer Aided Manufacture): the design of mechanical structures, parts, civil engineering and architecture, integrated circuits, and so on. Using computer graphics not only raises design efficiency, shortens the design cycle, improves design quality, and lowers design cost, but also builds the databases needed for subsequent computer-aided manufacturing, laying the foundation for integrated CAD/CAM and automated production.
Computer vision interview questions
Computer vision is an important branch of computer science and engineering that studies how to make computers "see", understand, and analyse images and videos. In computer vision interviews, interviewers usually ask questions related to image processing, image recognition, object detection, and similar topics. This article presents some common computer vision interview questions to help readers prepare for interviews and deepen their knowledge of the field.

1. Image processing
Image processing is fundamental knowledge in computer vision.
The interviewer may ask questions related to image processing, such as the following (a short sketch of two of these operations follows the list):
- What is an image filter? Give examples of different types of image filters.
- What is edge detection? Describe a commonly used edge-detection algorithm.
- What is histogram equalization? What is it used for?
- What is image segmentation? Describe a commonly used image-segmentation algorithm.
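A hedged sketch of two of these operations with OpenCV (histogram equalization and a simple Sobel-based edge detector); "scene.png" and the threshold are illustrative placeholders.

```python
import cv2
import numpy as np

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
if gray is None:
    raise FileNotFoundError("scene.png not found")

# Histogram equalization: spreads the intensity histogram to improve contrast.
equalized = cv2.equalizeHist(gray)

# A simple edge detector: Sobel gradients followed by a magnitude threshold.
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)
edges = (magnitude > 100).astype(np.uint8) * 255      # hand-picked threshold

cv2.imwrite("equalized.png", equalized)
cv2.imwrite("edges.png", edges)
```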
2. Feature extraction and description
Feature extraction and description are important tasks in computer vision, used to represent the key information of an image. The interviewer may ask questions such as the following (a matching sketch follows the list):
- What is a feature point (keypoint)? Describe a commonly used keypoint-detection algorithm.
- What is an image descriptor? Describe a commonly used descriptor algorithm.
- What is the scale-invariant feature transform (SIFT)? What are its applications?
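A hedged sketch of keypoint detection and descriptor matching with OpenCV; ORB is used here as a freely available alternative to SIFT (cv2.SIFT_create() could be substituted), and the two file names are placeholders for two views of the same scene.

```python
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors with ORB.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance; cross-checking keeps
# only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance = {matches[0].distance}")
```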
3. Object detection and recognition
Object detection and recognition are core tasks in computer vision, used to find and identify specific objects in images or videos. The interviewer may ask questions such as the following (a sliding-window sketch follows the list):
- What is a sliding window? Explain the basic principle of sliding-window detection.
- What is a convolutional neural network (CNN)? Explain how it is used in object detection.
- What is a region proposal network (RPN)? How does it relate to object detection?
- What is semantic segmentation? Describe a commonly used semantic-segmentation algorithm.
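A minimal sketch of the sliding-window idea from the first question above; the stand-in scoring function is a placeholder for a trained classifier (for example an SVM on HOG features, or a CNN).

```python
import numpy as np

def sliding_windows(image, window=(64, 64), step=32):
    """Yield (x, y, patch) for every window position over a 2D image."""
    h, w = image.shape[:2]
    win_h, win_w = window
    for y in range(0, h - win_h + 1, step):
        for x in range(0, w - win_w + 1, step):
            yield x, y, image[y:y + win_h, x:x + win_w]

def looks_like_object(patch):
    # Stand-in score; a real detector would run a trained model here.
    return patch.mean() > 128

image = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)  # dummy image
detections = [(x, y) for x, y, patch in sliding_windows(image)
              if looks_like_object(patch)]
print(f"{len(detections)} candidate windows")
```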
4. Image generation and synthesis
Image generation and synthesis are active research frontiers in computer vision, used to generate photorealistic images or to combine several images into one. The interviewer may ask questions such as the following (a minimal GAN skeleton follows the list):
- What is a generative adversarial network (GAN)? Explain how it is used for image generation.
- What is image style transfer? Describe a commonly used style-transfer algorithm.
- What is image inpainting (completion)? Describe a commonly used inpainting algorithm.
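A minimal GAN skeleton in PyTorch, just to show the two networks involved; the layer sizes are arbitrary, the data is random, and the alternating training loop is omitted.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28   # e.g. flattened 28x28 grayscale images

# Generator: maps a random latent code to a fake image in [-1, 1].
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that its input is a real image.
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(16, latent_dim)        # a batch of random latent codes
fake_images = generator(z)
scores = discriminator(fake_images)
print(fake_images.shape, scores.shape)
```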
Summary: computer vision is a fast-developing field with broad prospects, and interviewers often touch on the topics above.
Some top conferences and journals in computer science (reposted)
IEEE TRANSACTIONS ON COMPUTERS

Computer Vision
Conf.:
Best: ICCV, Inter. Conf. on Computer Vision
CVPR, Inter. Conf. on Computer Vision and Pattern Recognition
Good: ECCV, Euro. Conf. on Comp. Vision
ICIP, Inter. Conf. on Image Processing
ICPR, Inter. Conf. on Pattern Recognition
ACCV, Asia Conf. on Comp. Vision
Jour.:
Best: PAMI, IEEE Trans. on Patt. Analysis and Machine Intelligence
IJCV, Inter. Jour. on Comp. Vision
Good: CVIU, Computer Vision and Image Understanding
PR, Pattern Reco.

Network
Conf.:
ACM/SigCOMM, ACM Special Interest Group on Communication
ACM/SigMetrics: also has quite a lot on the systems side
InfoCom: a conference with several hundred attendees, not as selective as the ACM/SIG ones
GlobeCom: rather ordinary, though new ideas are sometimes proposed there
Jour.:
ToN (ACM/IEEE Transactions on Networking)

A.I.
Conf.:
AAAI: American Association for Artificial Intelligence
ACM/SigIR: this one covers IR; DB and AI people probably all attend
IJCAI: International Joint Conference on Artificial Intelligence
NIPS: Neural Information Processing Systems
ICML: International Conference on Machine Learning
Jour.:
Machine Learning
NEURAL COMPUTATION: the highest impact factor in AI, 1.921 in 2000
ARTIFICIAL INTELLIGENCE: 1.683 (2000 figures, likewise below)
PAMI: 1.668
IEEE TRANSACTIONS ON FUZZY SYSTEMS: 1.597
IEEE TRANSACTIONS ON NEURAL NETWORKS: 1.395
AI MAGAZINE: 1.044
NEURAL NETWORKS: 1.019
PATTERN RECOGNITION: 0.781
IMAGE AND VISION COMPUTING: 0.616
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING: 0.465
APPLIED INTELLIGENCE: 0.268

OS / Systems
Conf.:
SOSP: The ACM Symposium on Operating Systems Principles (held every two years; getting a paper accepted is very hard)
OSDI: USENIX Symposium on Operating Systems Design and Implementation

Database
Conf.:
ACM SIGMOD
VLDB: International Conference on Very Large Data Bases
ICDE: International Conference on Data Engineering
// these three are known as the three top database conferences

Security
Conf.:
IEEE Security and Privacy
CCS: ACM Computer and Communications Security
NDSS (Network and Distributed Systems Security)

Web
Conf.:
WWW (International World Wide Web Conference)

Theory
Conf.:
STOC
FOCS

EDA
Conf.:
Best: DAC: IEEE/ACM Design Automation Conference
ICCAD: IEEE International Conference on Computer Aided Design
Good: ISCAS: IEEE International Symposium on Circuits And Systems
ISPD: IEEE International Symposium on Physical Design
ICCD: IEEE International Conference on Computer Design
ASP-DAC: Asia and South Pacific Design Automation Conference
E-DAC: European Design Automation Conference
Note: there are many regional x-DAC conferences, each the top DAC-style venue in its region; the two above have the widest influence.
A brief history of computer vision
The term computer vision dates from the 1960s and refers to the use of computers to recognize, understand, and analyse image content; it is closely intertwined with deep learning and machine learning. Today, computer vision is widely applied in medical image diagnosis, security surveillance, face detection, autonomous driving, and other fields.
The past is new. The history of computer vision can be traced back to ancient Greece, where the philosopher Epicurus put forward the idea of "visual recognition", which laid an early foundation for the field. In the 1950s, research groups at IBM, Carnegie Mellon University, and the MIT laboratories began applying mathematics such as linear algebra and probability theory to computer vision, building a basic framework for image recognition.
In the early 1960s, the work Machine Perception of Three-Dimensional Solids (Lawrence Roberts, 1963) appeared; it is often regarded as the first publication devoted specifically to computer vision and helped make the concept widely known.
In the 1970s, as digital processing and digital image processing continued to develop, computer vision grew as well. In 1979, R. M. Fisher, a researcher at the IBM laboratory, proposed a clustering analysis method that could be used effectively to identify specific objects in images. In the 1980s, computer graphics research advanced rapidly and a large number of graphical analysis algorithms were proposed, giving computer vision substantial support and allowing its range of applications to expand further.
In the 1990s, with the further development of digital image processing, computer vision continued to mature; in particular, the deep-learning algorithms that were then being proposed made major contributions to its development. In the early 21st century, with the rapid development of cloud computing, big data, and machine learning, computer vision made a qualitative leap: its combination with deep learning and machine learning gave it much stronger analytical capabilities and led to wide use in medical image diagnosis, security surveillance, face detection, autonomous driving, and other areas.
From the Greek philosopher Epicurus, to R. M. Fisher, to deep-learning algorithms, the history of computer vision shows how the technology has been tested over the centuries and has made great strides.
The relationship between OpenCV and OpenGL
OpenCV is the Open Source Computer Vision Library.
OpenGL is the Open Graphics Library.
OpenCV mainly provides the basic algorithms for image and video processing, and also includes some machine-learning algorithms. For example, video denoising, tracking moving objects, and recognizing targets (such as faces) all belong to the CV domain.
OpenGL, on the other hand, focuses on graphics: 3D rendering.
The difference between the two is really the difference between the disciplines of computer vision and computer graphics: the former is about extracting information from captured visual images, that is, using a machine to understand images; the latter is about using a machine to draw suitable visual images for people to look at.
At first glance they seem to have nothing to do with each other!
(The full answer is in the column article "Why did OpenCV hook up with OpenGL?" - Hackers and Painters - Zhihu column.)
One is the most widely used open-source computer vision library.
The other is the industry standard for 3D graphics.
Originally the two had nothing to do with each other.
However, since version 2.3, OpenCV's highgui module has supported OpenGL rendering.
In addition, augmented reality (AR) applications may use both OpenCV and OpenGL.
As graphics cards have become more powerful, OpenCV has taken on a new form: a large part of the computation now lives on the GPU.
Computation goes through CUDA or OpenCL.
Rendering goes through OpenGL.
Together, these two points mean that apart from user-interface interaction and file I/O (the highgui module), OpenCV can gradually move off the CPU.
That is the real relationship between OpenCV and OpenGL, or rather between OpenCV and the graphics card.
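A hedged sketch of where the pieces meet in the Python bindings; both parts depend entirely on how OpenCV was built (CUDA modules compiled in, OpenGL window support enabled), so the calls are wrapped in checks.

```python
import cv2
import numpy as np

# CUDA: only available if OpenCV was built with the CUDA modules.
try:
    n_gpus = cv2.cuda.getCudaEnabledDeviceCount()
    print("CUDA-enabled devices visible to OpenCV:", n_gpus)
except (AttributeError, cv2.error):
    print("this OpenCV build exposes no usable CUDA module")

# OpenGL: WINDOW_OPENGL only works if OpenCV was built with OpenGL support.
img = np.zeros((240, 320, 3), dtype=np.uint8)   # stand-in frame
try:
    cv2.namedWindow("preview", cv2.WINDOW_OPENGL)
    cv2.imshow("preview", img)
    cv2.waitKey(0)
except cv2.error:
    print("this OpenCV build was compiled without OpenGL support")
finally:
    cv2.destroyAllWindows()
```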
Computer Vision/Computer Graphics Collaboration Techniques: 5th International Conference, MIRAGE 2011, Rocquencourt, France, October 10-11, 2011
Lecture Notes in Computer Science 6930
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, TU Dortmund University, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max Planck Institute for Informatics, Saarbruecken, Germany
André Gagalowicz, Wilfried Philips (Eds.)

Computer Vision/Computer Graphics Collaboration Techniques
5th International Conference, MIRAGE 2011
Rocquencourt, France, October 10-11, 2011
Proceedings

Volume Editors
André Gagalowicz, INRIA Rocquencourt, Domaine de Voluceau, 78153 Le Chesnay, France
E-mail: andre.gagalowicz@inria.fr
Wilfried Philips, Ghent University, TELIN, St.-Pietersnieuwstraat 41, 9000 Ghent, Belgium
E-mail: philips@telin.ugent.be

ISSN 0302-9743, e-ISSN 1611-3349
ISBN 978-3-642-24135-2, e-ISBN 978-3-642-24136-9
DOI 10.1007/978-3-642-24136-9
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: Applied for
CR Subject Classification (1998): I.3, H.5.2, I.4-5, I.2, J.3, I.2.10
LNCS Sublibrary: SL 6 (Image Processing, Computer Vision, Pattern Recognition, and Graphics)

© Springer-Verlag Berlin Heidelberg 2011
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any ...