Video Surveillance: A Distributed Approach to Protect Privacy (updated 2007-09-12)
Format: PDF · Size: 192.42 KB · Pages: 11
The Bosch NDC-265-P 720p IP dome camera is a ready-to-use, complete network video surveillance system inside a compact camera. This camera brings Bosch's high-performance technology into the realm of small office and retail businesses by offering a cost-effective solution for a broad range of applications.

To use the camera as a stand-alone video surveillance system, simply take it out of the box, assemble it according to the quick installation guide, start it, and walk away – it is already recording. In this mode, no additional equipment is required.

For medium-to-large-scale or growing systems, the camera integrates easily with the Bosch Divar 700 Series recorder. It uses H.264 compression technology to deliver clear images while reducing bandwidth and storage requirements by up to 30%.

Functions

Best-in-class HD 720p image performance
The camera delivers the clearest HD 720p images and the most accurate colors within its class. Progressive scan ensures that moving objects are always sharp.

Tri-streaming video
Tri-streaming allows a data stream to be encoded simultaneously according to different, customized settings. The two stream types can serve different purposes: for example, use the H.264 streams for local recording and viewing, and the M-JPEG stream for compatibility with legacy DVRs.

Efficient recording
A removable microSD/SDHC card provides edge recording inside the IP dome camera. This card not only saves network bandwidth but also reduces the need for a high-capacity hard disk or DVR. When used with a microSD/SDHC card, the camera is a complete, self-contained surveillance system with no additional equipment required. The camera can also store long-term recordings on an iSCSI server connected via the network.

16-channel PC surveillance software
The bundled PC surveillance software offers a user-friendly interface to support easy installation and configuration. A wizard allows multiple cameras to be configured simultaneously using automatic device detection.
Multiple cameras can be monitored on one screen, and video clips on microSD/SDHC cards can be archived, searched, and exported in a single application.

NDC-265-P HD 720p IP Dome Camera
▶ Complete network video surveillance system inside a dome camera
▶ HD 720p progressive scan for sharp images of moving objects
▶ Tri-streaming: dual H.264 and M-JPEG simultaneously
▶ Removable microSD/SDHC card offers days of storage inside the camera
▶ PC surveillance software supports multiple camera monitoring
▶ Two-way audio and audio alarm
▶ Power over Ethernet (IEEE 802.3af compliant)
▶ ONVIF conformant

ONVIF 1.0 conformant
The latest Open Network Video Interface Forum (ONVIF) standard ensures compatibility with other surveillance products, which helps reduce future upgrade or migration costs.

Two-way audio and audio alarm
Two-way audio allows remote users to listen in on an area and communicate messages to visitors or intruders via an external loudspeaker (not included). Audio detection can be used to generate an alarm if needed.

Tamper and motion detection
A wide range of configuration options is available for alarms signaling camera tampering. A built-in algorithm for detecting movement in the video can also be used for alarm signaling.

Power-over-Ethernet
Power for the camera can be supplied via a Power-over-Ethernet (IEEE 802.3af) compliant network cable connection. With this configuration, only a single cable connection is required to view, power, and control the camera.

Certifications and Approvals
Safety:
  EU: EN 60950-1:2006 (reference IEC 60950-1:2005)
  US: UL 60950-1, 1st edition, dated October 31, 2007
  Canada: CAN/CSA-C22.2 No. 60950-1-03
EMC:
  EN 50130-4:1995 + A1:1998 + A2:2003
  FCC Part 15 Subpart B, Class B
  EMC directive 2004/108/EC
  EN 55022 Class B
  EN 61000-3-2:2006
  EN 61000-3-3:1995 + A1:2001 + A2:2005
  EN 55024
  AS/NZS CISPR 22 (equal to CISPR 22)
  ICES-003 Class B
  EN 50121-4:2006
Product certifications: CE, FCC, UL, cUL, C-tick, CB, VCCI
Power supply: CE, UL, cUL, PSE, CCC

Installation/Configuration Notes

Connections
1 I/O
2 Power 12 VDC
3 Ethernet RJ45
4 Audio line-out
5 Audio line-in

[Dimensional drawing: mm (in)]

Parts Included
Quantity | Component
1 | NDC-265-P 720p IP dome camera
1 | Quick Installation Guide
1 | Installation paper sticker
1 | Torx screwdriver for dome cover
1 | CD-ROM
1 | Camera fixing screw kit
1 | MicroSDHC card (warranty provided by card manufacturer)
1 | Universal power supply with US, EU, and UK plugs

Technical Specifications

Power
Input voltage: +12 VDC or Power-over-Ethernet
Power consumption: 4.2 W (max.)

Video
Sensor type: 1/4-inch CMOS
Sensor pixels: 1280 x 800
Sensitivity: 1.0 lx
Video resolution: 720p, 4CIF/D1, VGA, CIF, QVGA
Video compression: H.264 MP (Main Profile); H.264 BP+ (Baseline Profile Plus); M-JPEG
Max. frame rate: 30 fps (M-JPEG frame rate can vary depending on system loading)

Lens
Lens type: Varifocal 2.7 to 9 mm, DC iris, F1.2 to close
Lens mount: Board mounted

Connection
Analog video out: 2.5 mm jack, for installation only
Alarm input: Short or 5 VDC activation
Relay out: Input rating maximum 1 A, 24 VAC/VDC

Audio
Audio input: Built-in microphone; line-in jack connector
Audio output: Line-out jack connector
Audio communication: Two-way, full duplex
Audio compression: G.711, L16 (live and recording)

Local Storage
Memory card slot: Supports up to 32 GB microSD/SDHC card (an SD card of Class 4 or higher is recommended for HD recording)
Recording: Continuous, ring, and alarm/event/schedule recording

Software Control
Unit configuration: Via web browser or PC surveillance software

Network
Protocols: HTTP, HTTPS, SSL, TCP, UDP, ICMP, RTSP, RTP, Telnet, IGMPv2/v3, SMTP, SNTP, FTP, DHCP client, ARP, DNS, DDNS, NTP, SNMP, UPnP, 802.1X, iSCSI
Ethernet: 10/100 Base-T, auto-sensing, half/full duplex, RJ45
PoE: IEEE 802.3af compliant

Mechanical
Dimensions: Diameter 135 mm (5.32 in); height 102 mm (4 in)
Weight: 568 g (1.25 lb) approx.

Environmental
Operating temperature (camera): -10 °C to +50 °C (14 °F to +122 °F)
Operating temperature (universal power supply unit): 0 °C to +40 °C (+32 °F to +104 °F)
Storage temperature: -20 °C to +70 °C (-4 °F to +158 °F)
Humidity: 10% to 80% relative humidity (non-condensing)

Ordering Information
NDC-265-P 720p IP Dome Camera: HD 720p IP dome camera system including varifocal lens and power supply

Americas: Bosch Security Systems, Inc., 130 Perinton Parkway, Fairport, New York 14450, USA. Phone: +1 800 289 0096; Fax: +1 585 223 9180; ***********************.com
Europe, Middle East, Africa: Bosch Security Systems B.V., P.O. Box 80002, 5600 JB Eindhoven, The Netherlands. Phone: +31 40 2577 284; Fax: +31 40 2577 330; ******************************
Asia-Pacific: Robert Bosch (SEA) Pte Ltd, Security Systems, 11 Bishan Street 21, Singapore 573943. Phone: +65 6258 5511; Fax: +65 6571 2698; *****************************
Represented by:

© Bosch Security Systems Inc. | Data subject to change without notice | T7233796747 | en-US, V2, 5 Jan 2011
Synology Surveillance Memory Usage

Contents
1. Introduction to Synology Surveillance
2. Factors that affect memory usage
3. Ways to reduce excessive memory usage
4. Conclusion

1. Introduction to Synology Surveillance
Synology Surveillance is network video surveillance software for homes and offices; users can view their camera feeds anytime, anywhere from a smartphone or tablet. The software supports a wide range of devices and operating systems, including Windows, macOS, iOS, and Android. With Synology Surveillance, users can easily set up surveillance cameras, view live video, play back recordings, and more.
2. Factors that affect memory usage
Memory usage is something to keep an eye on when running Synology Surveillance. It depends on several factors:
(1) Number of cameras: the more cameras, the more memory is needed to process and store the live video streams.
(2) Video quality: higher-quality streams require more memory to store and process; users can lower the video quality to reduce memory usage.
(3) Storage method: Synology Surveillance supports both local and network storage. Local storage needs more memory for video files, while network storage relieves the memory burden.
(4) Other factors: the operating system, hardware configuration, and the like can also affect memory usage.
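As a back-of-the-envelope illustration of how camera count and per-stream buffers drive memory use, the sketch below combines a base footprint with a per-camera buffer. All figures and helper names are illustrative assumptions, not Synology's actual memory model:

```python
# Back-of-the-envelope memory estimate for a video surveillance server.
# The base footprint and per-stream buffer sizes are illustrative
# assumptions, not Surveillance's actual memory behavior.

def estimate_memory_mb(cameras: int,
                       mb_per_stream: float = 40.0,
                       base_mb: float = 300.0) -> float:
    """Base process footprint plus one decode/record buffer per camera."""
    return base_mb + cameras * mb_per_stream

# Example: 8 cameras under the default assumptions.
print(estimate_memory_mb(8))  # 620.0
```

Lowering stream quality or camera count shrinks the per-stream term, which is exactly the lever the recommendations below pull on.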
3. Ways to reduce excessive memory usage
To keep high memory usage from crashing the system or degrading performance, users can:
(1) Optimize the number of cameras: remove unnecessary cameras to lower memory usage.
(2) Adjust video quality: lower the video quality in the settings to suit their needs.
(3) Choose a suitable storage method: if memory is limited, network storage is recommended to relieve the memory burden.
(4) Upgrade the hardware: increase memory capacity and processor performance to handle higher memory demands.

4. Conclusion
Synology Surveillance offers convenient network video surveillance, but users should keep an eye on its memory usage.
WHITE PAPER
Video Surveillance in Focus: Selecting the right system to meet your organization's needs

Video surveillance systems are helping municipalities, businesses, and public safety organizations across the globe protect their citizens, protect their property, and put criminals behind bars. But given the wide variety of technology choices for video surveillance, selecting the right system can seem like a daunting task. Fortunately, it doesn't have to be. Organizations that wish to install, add to, or upgrade a video surveillance system can ask just three powerful questions to guide them in selecting the optimal video system.

Planning with the Future in Mind
The first question organizations should ask is: How will this organization use a video surveillance system, and what might it use the system for in the future? The question sounds obvious, but surprisingly, many organizations do not consider the second half of it before making a purchase decision. The second part of the question is the most critical, because an organization's use of its video surveillance system can change significantly over time.

For instance, when video surveillance systems first became popular, many municipalities simply installed cameras as a deterrent to crime. But today, even if an organization installs a video system for a specific purpose – such as simply monitoring activity in a high-crime area – that same organization might find itself using that video for much more. It might be used in a courtroom to help prosecute a crime. Or the video could be sent to first responders on the way to a crime scene so they are better prepared to handle the situation when they arrive. Today, many private and public safety organizations are also sharing video from their systems with other organizations in emergency situations.
This means that video from many disparate systems needs to be piped to and accessed by many different command centers, such as the police department's command center, the fire department's command center, or the Department of Transportation's (DOT's) command center.

QUESTION 1: How will this organization use a video surveillance system, and what might it use the system for in the future?

The Easiest Way to Share Video
Getting the different organizations' video and command systems – which often use very different technology – to talk to one another can be a challenge. Fortunately, this challenge can be solved using software called physical security information management (PSIM). This software neutralizes the incompatibilities between different video systems and allows them to communicate with each other easily. Using PSIM software, an organization can easily access and manage the video from disparate surveillance systems – even with no idea what camera, compression, or transmission technology those systems use. Organizations can even track movement across many different types of cameras and video systems owned by many organizations. For instance, PSIM software can facilitate the tracking of suspects as they move from a private parking lot onto the street. Or the police dispatch center can quickly tap into and view video from the Department of Transportation's cameras, no matter what technology the DOT system uses.

Hassle-free Video System Management
In addition, if an organization finds that the video surveillance system it installed several years ago does not meet its current needs and decides to upgrade just part of that system, PSIM software can help the organization easily manage both the old and the new system. For instance, this software can be used to neutralize the differences between an "old" analog and a "new" IP-based video system.
This allows organizations to operate and manage the two different video networks as one integrated system using one user interface, reducing training costs as well as video storage and retrieval headaches.

Quality, Frame Rate, Cost and Bandwidth: Finding the Right Balance
The second question that organizations need to ask is: What level of video quality is needed – and how does this affect bandwidth needs? First, an organization must determine how much detail it needs in its video recordings. For instance, security guards might use a video surveillance system simply to track the movements of someone who has entered a restricted area, with no need to recognize the identity of that individual. But in a courtroom, jurors must be capable of recognizing a face or reading a license plate from the video.

Picture quality is determined by resolution, and the primary measurement for resolution is pixels per foot (PPF). As a rule of thumb, the minimum pixel measurement needed to recognize a face is 40 by 40, or 1,600 pixels. To read a license plate, a minimum of 80 by 80, or 6,400 pixels, is required.

Another feature to consider is frame rate. Generally, video being viewed live should support frame rates of 24 to 30 frames per second (fps) – the lowest speeds that the eye perceives as actual motion – to avoid eye strain. Depending on how the video is supported, however, some security operations may find 10 to 15 fps acceptable today for basic video capture. In contrast, for video reviewed after it is recorded, a higher resolution is often more important than a high frame rate: the higher resolution allows individuals to be recognized and license plates to be read more easily.

It is critical to realize that video systems that support better resolution and faster frame rates require higher bandwidth capabilities and thus cost more.
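The resolution rules of thumb above can be turned into a quick feasibility check. The helper name and the scene figures below are illustrative, not part of the whitepaper:

```python
# Rule-of-thumb pixel checks from the text: roughly 40x40 px to
# recognize a face, 80x80 px to read a license plate. The scene
# geometry here is a made-up example.

FACE_MIN_PX = 40    # pixels across a face
PLATE_MIN_PX = 80   # pixels across a license plate

def pixels_on_target(sensor_width_px: int, scene_width_ft: float,
                     target_width_ft: float) -> float:
    """Horizontal pixels landing on a target of the given width."""
    ppf = sensor_width_px / scene_width_ft  # pixels per foot of scene
    return ppf * target_width_ft

# A 1280-px-wide camera covering a 20 ft doorway, viewing a
# roughly 0.75 ft wide face:
face_px = pixels_on_target(1280, 20, 0.75)
print(round(face_px))            # 48 px: enough to recognize a face
print(face_px >= PLATE_MIN_PX)   # False: too few pixels for a plate
```

The same check, run against a wider scene or a smaller target, quickly shows when a higher-resolution camera (or a narrower field of view) is needed.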
Fortunately, using compression techniques, one can maintain a higher image quality while reducing bandwidth needs – and cost. But here again, organizations must be willing to make choices. For instance, using a compression technique that is too aggressive results in anomalies that can damage the video's credibility in court. Once again, an organization must first determine how it wants to use its video system, both now and in the future, to make the right choices in this area.

QUESTION 2: What level of video quality is needed – and how does this affect bandwidth needs?

The Importance of Scalability
The third question to consider is: How much storage do I need for my video system? Unfortunately, video storage is often just an afterthought with many organizations, and this can lead to problems in the future. In fact, even when an organization starts with a small video system, it is critical to select a system that can easily scale, because the storage requirements for video surveillance systems are extremely large – and expand very quickly.

For example, just one megapixel camera (1280 x 1024 pixels) will generate about 80 kilobytes of data per frame at 30 frames per second. In total, it will generate 207 gigabytes of data each 24-hour day. Even if motion analytics is used to record only when motion is detected, the camera will generate 20.7 gigabytes of data per day (assuming motion 10 percent of the time). Even recording only on motion, one year's worth of video for just two cameras would generate 15 terabytes of data. And a law enforcement organization using that data as evidence in a trial might be required to store it for at least five years.

The Cost of Storing Video
Once an organization's storage needs reach 10 terabytes of data or higher, storing video using traditional systems – where data is backed up via tapes or hard drives – can become very costly. Why? As the amount of stored data grows, more back-up copies of that data are needed.
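The storage arithmetic above can be reproduced directly. (Note that the daily totals only work out if the per-frame figure is read as kilobytes: 80 KB x 30 fps x 86,400 s is about 207 GB.)

```python
# Reproduces the storage arithmetic from the text: a 1-megapixel
# camera at ~80 KB per frame and 30 fps, recorded continuously vs.
# only on motion (assumed ~10% of the time, as in the text).

KB_PER_FRAME = 80
FPS = 30
SECONDS_PER_DAY = 24 * 60 * 60

def gb_per_day(motion_fraction: float = 1.0) -> float:
    """Gigabytes per day, using decimal units as the text does."""
    kb = KB_PER_FRAME * FPS * SECONDS_PER_DAY * motion_fraction
    return kb / 1_000_000  # KB -> GB

continuous = gb_per_day()                      # ~207 GB/day
motion_only = gb_per_day(0.1)                  # ~20.7 GB/day
two_cams_year_tb = motion_only * 2 * 365 / 1000
print(round(continuous, 1))      # 207.4
print(round(motion_only, 2))     # 20.74
print(round(two_cams_year_tb, 1))  # 15.1
```

This matches the text's figures of roughly 207 GB/day continuous, 20.7 GB/day motion-only, and about 15 TB/year for two motion-only cameras.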
Needing more back-up copies not only leads to exponential increases in costs; it also leads to security concerns, because the data is now located in many different facilities. Even when multiple back-up copies of data are made, the traditional data backup system is not that reliable, given that between one and five percent of hard drives fail every year and between 10 and 20 percent of tapes fail.

Fortunately, there is an alternative way to store data using a technique called Information Dispersal. This technique stores slices of information on a network of servers – and then reassembles it when it is needed. In addition, a file can be reassembled even if all the slices are not available, so if one server fails, the data can still be retrieved intact. The cost of a dispersed system is significantly less as well. In a large-scale deployment, where a company might need three back-up copies of the data under a traditional storage model, a dispersed storage system can cost one-fifth to one-half as much. Plus, the dispersed back-up system will consume one-eighth the power and require much lower bandwidth.

The bottom line is that once an organization understands how it wants to use its video surveillance system – and what it might use that system for in the future – it can more easily choose the right video system. By asking just three key questions, an organization will be well on its way to selecting a video surveillance system that meets its unique needs – both today and in the future.

Investigate Further
As with any technology solution, wireless video surveillance networks are evolving and improving rapidly. That's why it's important to partner with a provider that knows wireless technology inside and out. For nearly 80 years, Motorola has been recognized as the leading provider of wireless communications, networks, devices and services.
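The Information Dispersal technique described above is, in essence, erasure coding. The minimal sketch below uses a single XOR parity slice so that any one lost slice can be rebuilt; real dispersed-storage systems use Reed-Solomon-style codes that tolerate multiple lost slices, and the helper names here are illustrative:

```python
# Minimal information-dispersal sketch: split data into n_data slices
# plus one XOR parity slice; any single missing slice can be rebuilt.
# Length/padding metadata is omitted for brevity.
from functools import reduce

def disperse(data: bytes, n_data: int = 3) -> list:
    """Split data into n_data equal slices plus one XOR parity slice."""
    pad = (-len(data)) % n_data
    padded = data + b"\x00" * pad
    size = len(padded) // n_data
    slices = [padded[i * size:(i + 1) * size] for i in range(n_data)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*slices))
    return slices + [parity]

def reassemble(slices: list) -> bytes:
    """Rebuild the payload; tolerates exactly one missing (None) slice."""
    missing = [i for i, s in enumerate(slices) if s is None]
    if len(missing) > 1:
        raise ValueError("single parity tolerates only one lost slice")
    if missing:
        present = [s for s in slices if s is not None]
        slices = list(slices)
        # XOR of all surviving slices reconstructs the missing one.
        slices[missing[0]] = bytes(
            reduce(lambda a, b: a ^ b, col) for col in zip(*present))
    return b"".join(slices[:-1])  # drop the parity slice

frame = b"surveillance-frame-data!"   # length divisible by 3
parts = disperse(frame)
parts[1] = None                        # simulate one failed server
assert reassemble(parts) == frame      # data retrieved intact
```

Because only one parity slice is added here, the storage overhead is 1/3 of the payload – far less than keeping three full back-up copies, which is the cost advantage the text describes.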
To learn more about how Motorola can help government and public safety agencies develop and deploy intelligent wireless video solutions that will provide immediate benefits and position your organization to take fast advantage of future innovation, please visit us at /videosurveillance.

QUESTION 3: How much storage do I need for my video system?

Motorola, Inc. 1301 E. Algonquin Road, Schaumburg, Illinois 60196 U.S.A. /videosurveillance
MOTOROLA and the stylized M Logo are registered in the U.S. Patent and Trademark Office. All other product or service names are the property of their registered owners. © Motorola, Inc. 2009
The HR-NR5000-2U series NVRs are designed for the dynamic needs of the emergent digital video surveillance market. With high image resolution, seamlessly integrated mass storage, and alarm recording, the NVRs deliver flexible and powerful performance to security professionals for distributed enterprise architectures and centralized management applications.

The NVRs are a complete network-based video recording solution that supports the various video compression formats of IP cameras (H.264, MPEG-4, and MJPEG) and image resolutions from CIF/D1 and VGA up to multiple megapixels. Integrating with megapixel IP cameras, MatriVideo™ provides a breakthrough platform for viewing high-resolution video from your surveillance system. Simply plug into the network infrastructure and the NVR can display the video image from any IP camera source over Ethernet. The IP cameras can be installed without cumbersome cabling, meaning that setting up a new IP-based security system has never been easier or more cost-effective.

Based on an embedded Linux OS, the NVR_On_Chip technology, combined with dual power supplies and RAID-5 and RAID-6 enhancement, provides customers with a stable and secure system. The embedded Linux OS increases overall system availability by reducing system downtime caused by power supply and hard drive failure. It can keep a complete six-month audit trail database of user activities. MatriVideo™ NVRs allow different recording settings – continuous, motion detected, event triggered, and scheduled recording. This increases data storage utilization, giving more space to store video and saving money.

The NVRs provide intelligent and comprehensive video search tools: users can retrieve the desired video images with a few simple clicks.
Their graphical display illustrates the recorded video history, with quick indicators for alarm instances (video loss, motion detection, real-time video analytics, sensor or digital inputs, even POS and Access Control System events) and manual recording instances (special events triggered by security personnel onsite).

• A high-performance, enterprise-class, network-based solution for distributed architecture
• Data protection (optional)

Specifications

Function: Network video recorder
Operating system: Embedded Linux
Image control: Contrast / Brightness / Saturation / Hue
Dual watchdog: Yes
Video input: 16 / 32 / 48 / 64 channels
Audio input: 16 / 32 / 48 / 64 channels
Supported resolution: All resolutions supported
Supported format: MJPEG / MPEG4 / H.264
Max. frame rate**: 30 fps (NTSC) / 25 fps (PAL)
Recording modes: Continuous / Scheduled / Motion detected / Intelligent video detected / Event triggered / Manual / API
Pre/post alarm recording: 1 ~ 60 seconds
Internal storage: Up to 8 swappable HDDs
Internal RAID5 and RAID6: Optional
External storage: Archive Server / IP SAN / SAS
Searching method: Date / Time / Camera / Alarm list
Monitoring environment: Command Center (Lite/Dual), nCCTV
Speaker out: 1
Audio line in: 1
Serial port: 2 x RS-232
Keyboard: PS/2 and USB
Ethernet: 2 x RJ-45, 10/100/1000 Mbps
USB: 6 x USB 2.0
VGA output: 15-pin female D-SUB
I/O controls: Max. 128 inputs / 64 outputs using Remote I/O Modules (optional)
Optional remote I/O modules: ID-IO2000-1600 (16 DI), ID-IO2000-0008 (8 DO), ID-IO2000-0808 (8 DI / 8 DO), C232-485i (RS-232/485 converter)
Hardware accessories: 1 x power cable
Video servers and IP cameras: 3S, ACTi, Appro, Arecont Vision, Arlotto, AXIS, Basler, Brickcom, CNB Technology, Hikvision, Hi-Sharp, Hitron, Hunt, Instek Digital, ITX, IQinVision, MatriCam, Messoa, Mobotix, Pixord, Pelco, Probe, Samsung, Secubest, SANYO, SONY, Vivotek
ACS: Cardax, CEM, Tyco
Electric power source: AC 100 ~ 240 V
Dual power supplies: Optional
Power output: 400 W
Operating temperature: 0 ~ 40 °C
Humidity: Max. 90%, non-condensing
Dimension (mm): 575 (L) x 485 (W) x 95 (H), w/o box
Weight (kg): 14.1 kg, w/o box
Model: HR-NR5416-2U / HR-NR5432-2U / HR-NR5448-2U / HR-NR5464-2U

[Front view / Rear view: power socket, exhaust fan, COM, VGA, speaker out / audio in, USB, DVI, PS/2, Ethernet]

a) The actual video display performance may vary according to the type of camera(s) and lighting conditions.
b) Product specifications and availability are subject to change without notice.
VES Evaluation

VES (Video Evaluation Standard) is an objective metric system for assessing video quality, used mainly to evaluate video encoding and transmission quality. A VES evaluation covers the following aspects: resolution, bit rate, frame rate, encoder efficiency, video detail, motion estimation, rate-distortion performance, scene changes, distortion, encoding format, and audio/video synchronization.

First, resolution is a key measure of video clarity. It is usually expressed as a pixel count: the higher the resolution, the richer the detail and the clearer the video. A VES evaluation checks the video's resolution to confirm that it meets the expected clarity requirements.
Second, bit rate is the amount of data transmitted per second during video encoding. A higher bit rate generally means better video quality, but it also increases transmission load. A VES evaluation checks the bit rate to confirm that the video meets the expected transmission quality.

Frame rate is the number of image frames displayed per second. The higher the frame rate, the smoother the motion appears. In a VES evaluation, frame rate is an important reference for judging a video's smoothness and motion rendering.
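The bit-rate and frame-rate checks described above reduce to simple arithmetic over basic stream statistics. The following sketch is illustrative (the input figures are made up, and these helpers are not part of any official VES tooling):

```python
# Average bit rate and frame rate from basic stream statistics,
# as a VES-style check would measure them. Input figures are
# illustrative examples only.

def avg_bitrate_kbps(total_bytes: int, duration_s: float) -> float:
    """Average bitrate in kilobits per second."""
    return total_bytes * 8 / duration_s / 1000

def avg_frame_rate(frame_count: int, duration_s: float) -> float:
    """Average frames per second over the clip."""
    return frame_count / duration_s

# A hypothetical 10-second clip of 3,750,000 bytes and 300 frames:
print(avg_bitrate_kbps(3_750_000, 10))  # 3000.0 kbps
print(avg_frame_rate(300, 10))          # 30.0 fps
```

Comparing these measured values against the target bit rate and frame rate is exactly the confirmation step the evaluation describes.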
Encoder efficiency refers to how efficiently and accurately an encoder compresses video. In a VES evaluation, different encoders are tested to determine their compression performance.

Video detail refers to the video's ability to render fine detail. A VES evaluation assesses detail to determine the video's clarity and detail-rendering capability.

Motion estimation refers to detecting and estimating motion within the video. A VES evaluation assesses the accuracy of motion estimation and its impact on video quality.

Rate-distortion performance is the degree of quality loss at a given bit rate. A VES evaluation assesses video quality at different bit rates to characterize this behavior.

Scene change refers to how scenes vary within the video. A VES evaluation detects scene changes to determine their impact on video quality.

Distortion is the degree to which video quality is degraded. A VES evaluation measures distortion to determine its severity and its impact on the viewing experience.
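Since the evaluation described here is objective, a standard objective metric such as PSNR (peak signal-to-noise ratio) illustrates how "distortion" is typically quantified. This is a generic sketch, not part of any official VES tooling, and it treats a frame as a flat list of 8-bit samples:

```python
# PSNR: a standard objective distortion metric. Higher is better;
# identical frames give infinity. Frames are flat lists of 8-bit
# pixel samples for simplicity.
import math

def psnr(reference, degraded, max_value: int = 255) -> float:
    """Peak signal-to-noise ratio between two equal-length sample lists."""
    if len(reference) != len(degraded):
        raise ValueError("frames must have the same number of samples")
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10 * math.log10(max_value ** 2 / mse)

# Every sample off by exactly 1 gives MSE = 1, i.e. PSNR ~ 48.1 dB.
print(round(psnr([255, 255], [254, 254]), 1))  # 48.1
```

Running such a metric at several bit rates yields the rate-distortion curve discussed above.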
Encoding format is the format used when the video is encoded. A VES evaluation tests different encoding formats to determine their impact on video quality.
Patent title: VIDEO SURVEILLANCE
Inventor: ROBAATO SHII ORIBAA JIYUNIAA
Application number: JP30044488
Filing date: 1988-11-28
Publication number: JPH02166986A
Publication date: 1990-06-27
(Patent content provided by the Intellectual Property Publishing House.)

Abstract:
PURPOSE: To provide an inexpensive and flexible system by multiplexedly transmitting modulated signals from plural video cameras, applying signals to plural video screens and tuners in parallel by a signal splitter, and controlling the tuners by a computer.
CONSTITUTION: Respective video modulators 2 modulate signals outputted from cameras 1, and plural combiners 3 connected in series apply the signals from all the modulators 2 to the signal splitter 4 through a video signal transmission line 5. The splitter 4 supplies the received signals to plural video display devices 6, each of which tunes the supplied signal to a channel selected by a tuner 7, and a display screen 8 displays a picture included within the visual field of the camera 1. The computer controls the tuners 7 and executes sequence control so as to display pictures observed by a series of cameras 1 on the screen 8. Consequently, a flexible system reducing wiring cost can be obtained.

Applicant: ROBAATO SHII ORIBAA JIYUNIAA
System Overview
Experience the superior clarity of Dahua's Ultra 4K HDCVI cameras for vast coverage and superior image details. As the world's first 4K-over-coax cameras, the Dahua Ultra 4K HDCVI series leverages existing coax infrastructures to deliver forensic-level images seamlessly and more cost-effectively. The outstanding performance, easy installation, and rugged design make the camera an ideal choice for mid- to large-size businesses and for applications in large, open spaces (city centers, parks, campuses) where highly reliable surveillance and construction flexibility are valued.

Functions

Micro Four Thirds System
This Dahua camera takes advantage of the Micro Four Thirds System, an interchangeable-lens photography system that offers superior picture quality from a smaller lens. This system offers nine times the surface area and pixel size of the standard 1/3-in. sensor commonly used in video surveillance applications. The larger surface area and pixel size increase the light-gathering capability of the sensor to deliver color images in ultra-low-light conditions in the 8 MP camera class when other cameras have switched to monochrome.

4K Resolution
4K resolution is a revolutionary breakthrough in image-processing technology. 4K delivers four times the resolution of standard HDTV 1080p devices and offers superior picture quality and image details. 4K resolution improves the clarity of a magnified scene to view or record crisp forensic video from large areas.

Starlight+ Technology
For challenging low-light applications, Dahua's Starlight Ultra-low Light Technology offers best-in-class light sensitivity, capturing color details in low light down to 0.001 lux.
The camera uses a large pixel sensor, smart imaging algorithms, and a set of optical features to balance light throughout the scene, resulting in clear images in dark environments.

Ultra Series | A83AA9

M43 Lens Interface
The lens is a key component of any security camera and one that directly affects the quality of the image. The Dahua 4K HDCVI box camera features the M43 lens interface, typically found on most SLR cameras. This camera is compatible with all types of M43 lenses, including fixed and vari-focal lenses.

Four Signals over One Coaxial Cable
HDCVI technology simultaneously transmits video, power, data, and audio over a single coaxial cable. Dual-way data transmission allows the HDCVI camera to communicate with an HCVR to send control signals or to trigger alarms. HDCVI along with PoC technology delivers power¹ to devices at the edge, simplifying installation.

Long Distance Transmission
HDCVI technology guarantees real-time transmission over long distances without loss of video quality.³ HDCVI cameras provide the same resolution as most IP network camera systems using existing RG-59, RG-6, or CAT 6 UTP cabling.

Simplicity
HDCVI technology seamlessly integrates traditional analog surveillance systems with upgraded, high-quality HD video, making it the best choice to protect security investments. The plug-and-play approach enables full HD video surveillance without the hassles of configuring a network.

Multiple Outputs
The camera supports HDCVI and CVBS signal outputs simultaneously with one HDMI connector and one BNC connector. These output signals allow the camera to integrate with various devices, including analog matrix systems and monitors.

Key features:
• Micro Four Thirds 8 MP progressive-scan CMOS sensor
• M43 lens interface
• 3840 x 2160 at 30 fps (HDMI) and 3840 x 2160 at 15 fps (BNC)²
• Simultaneous HDMI and HDCVI/CVBS video output
• Starlight+ Technology for ultra-low-light sensitivity
• Ultra wide dynamic range (140 dB) and 2D and 3D noise reduction
• Built-in microphone
• Software upgrade via USB
• Five-year warranty*

* Warranty applies to products sold through an authorized Dahua Dealer.
1. Requires PoC transceivers for each channel and an external power supply for each transceiver.
2. 4K at 15 fps resolution achievable with the following Dahua HDCVI DVRs: C52A1N/C52A2N/C52A3N (Channel 1) and X58A3S (Channels 1 and 9).
3. Transmission distance results verified by real-scene testing in Dahua's test laboratory. Actual transmission distances may vary due to external influences, cable quality, and wiring structures.

Accessories: PFB110W wall/ceiling mount bracket; PFB121W wall mount bracket; PFM800-E passive HDCVI balun; PFM810 PoC transceiver; DH-PFM320D-015 power adapter; DH-PFM321D-US power adapter; DH-PFL085-J12M 12 MP fixed lens; DH-PFL2070-J12M 12 MP vari-focal lens.

Rev 001.010 · © 2018 Dahua. All rights reserved. Design and specifications are subject to change without notice.
OVERVIEW
Cisco Meraki's MV family of security cameras is exceptionally simple to deploy and configure. Their integration into the Meraki dashboard, ease of deployment, and use of cloud-augmented edge storage eliminate the cost and complexity required by traditional security camera solutions.

Like all Meraki products, MV cameras provide zero-touch deployment. Using just serial numbers, an administrator can add devices to the Meraki dashboard and begin configuration before the hardware even arrives on site. In the Meraki dashboard, users can easily stream video and create video walls for monitoring key areas across multiple locations without ever configuring an IP or installing a plugin.

INTRODUCING MV
With an unobtrusive industrial design suitable for any setting – and available in indoor (MV21) and outdoor (MV71) models – the MV family simplifies and streamlines the unnecessarily complex world of security cameras. By eliminating servers and video recorders, MV frees administrators to spend less time on deployment and maintenance, and more time on meeting business needs.

High-endurance solid-state on-camera storage eliminates the concern of excessive upload bandwidth use and provides robust failover protection. As long as the camera has power it will continue to record, even without network connectivity. Historical video can be quickly searched and viewed using motion-based indexing, and advanced export tools allow evidence to be shared with security staff or law enforcement easily.

Because the cameras are connected to Meraki's cloud infrastructure, security updates and new software are pushed to customers automatically.
This system provides administrators with the peace of mind that the infrastructure is not only secure, but that it will continue to meet future needs. Simply put, the MV brings Meraki magic to the security camera world.

Product Highlights
• Meraki dashboard simplifies operation
• Cloud-augmented edge storage eliminates infrastructure
• Suitable for deployments of all sizes: 1 camera or 1000+
• Intelligent motion indexing with search engine
• Built-in video analytics tools
• Secure encrypted control architecture
• No special software or browser plugins required
• Granular user access controls

MV21 & MV71 Cloud Managed Security Cameras
Datasheet | MV Series

CUTTING EDGE ARCHITECTURE
Meraki's expertise in distributed computing has come to the security camera world. With cloud-augmented edge storage, MV cameras provide groundbreaking ease of deployment, configuration, and operation. Completely eliminating the network video recorder (NVR) not only reduces equipment CAPEX, but the simplified architecture also decreases OPEX costs. Each MV camera comes with integrated, ultra-reliable, industrial-grade storage. This cutting-edge technology allows the system to efficiently scale to any size because the storage expands with the addition of each camera. Plus, administrators can rest easy knowing that even if the network connection cuts out, the cameras will continue to record footage.

[Diagram: scene being recorded → on-device storage → local video access / Meraki cloud → remote video access]

OPTIMIZED RETENTION
MV takes a unique approach to handling motion data by analyzing video on the camera itself, but indexing motion in the cloud. This hybrid motion-based retention strategy, plus scheduled recording, gives users the ability to define the video retention method that works best for every deployment. The motion-based retention tool allows users to pick the video bit rate and frame rate to find the perfect balance between storage length and image quality.
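The bit-rate-versus-retention trade-off described above can be sketched as a back-of-the-envelope calculation. This is not Meraki's actual dashboard estimator; the figures and the motion-fraction assumption are illustrative:

```python
# Rough on-camera retention estimate: how long footage fits in a given
# amount of storage at a given bit rate. Illustrative only, not the
# Meraki dashboard's actual calculation.

def retention_days(storage_gb: float, bitrate_mbps: float,
                   motion_fraction: float = 1.0) -> float:
    """Days of footage that fit in storage at the given average bit rate."""
    gb_per_day = bitrate_mbps / 8 * 86_400 * motion_fraction / 1000  # MB -> GB
    return storage_gb / gb_per_day

# Hypothetical example: 256 GB of on-camera storage at 2 Mbps,
# recording on motion roughly 30% of the time.
print(round(retention_days(256, 2.0, 0.3), 1))  # 39.5 days
```

Halving the bit rate doubles the estimated retention, which is exactly the storage-length/image-quality balance the retention tool exposes.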
All cameras retain continuous footage as a safety net for the last 72 hours before intelligently trimming stored video that contains no motion, adding one more layer of security.

Determine when cameras are recording, and when they're not, with scheduled recording. Create schedule templates for groups of cameras and store only what's needed, nothing more. Turn off recording altogether and view only live footage for selective privacy.

Best of all, the dashboard provides a real-time retention estimate for each camera, removing the guesswork.

EASY TO ACCESS, EASY TO CONTROL
There is often a need to give different users access, but with tailored controls appropriate for their particular roles. For example, a receptionist who needs to see who is at the front door probably does not need full camera configuration privileges.

The Meraki dashboard has a set of granular controls for defining what a user can or cannot do. Prevent security staff from changing network settings, limit views to only selected cameras, or restrict the export of video: you decide what is possible.

With the Meraki cloud authentication architecture, these controls scale for any organization and support Security Assertion Markup Language (SAML) integration.

ISOLATE EVENTS, INTELLIGENTLY
Meraki MV cameras use intelligent motion search to quickly find important segments of video among hours of recordings. Optimized to eliminate noise and false positives, this allows users to retrospectively zero in on relevant events with minimal effort.

MV's motion indexing offers an intuitive search interface. Select the elements of the scene that are of interest, and the dashboard will retrieve all of the activity that occurred in that area. Laptop go missing? Drag the mouse over where it was last seen and quickly find out when it happened and who was responsible.

ANALYTICS, BUILT RIGHT IN
MV's built-in analytics take the average deployment far beyond just security. 
Make the most of an MV camera by utilizing it as a sensor to optimize business performance, enhance public safety, or streamline operational objectives.

Use motion heat maps to analyze customer behavior patterns or identify where students are congregating during class breaks. Hourly or daily levels of granularity allow users to quickly tailor the tool to specific use cases.

All of MV's video analytics tools are built right into the dashboard for quick access. Plus, the standard MV license covers all of these tools, with no additional licensing or costs required.

SIMPLY CLOUD-MANAGED
Meraki's innovative GUI-based dashboard management tool has revolutionized networks around the world, and brings the same benefits to networked video surveillance. Zero-touch configuration, remote troubleshooting, and the ability to manage distributed sites through a single pane of glass eliminate many of the headaches security administrators have dealt with for decades. Best of all, dashboard functionality is built into every Meraki product, meaning additional video management software (VMS) is now a thing of the past.

Additionally, features like the powerful drag-and-drop video wall help to streamline remote device management and monitoring, whether cameras are deployed at one site or across the globe.

SECURE AND ALWAYS UP-TO-DATE
Centralized cloud management offers one of the most secure platforms available for camera operation. All access to the camera is encrypted with a public key infrastructure (PKI) that includes individual camera certificates. Integrated two-factor authentication provides strong access controls. Local video is also encrypted by default, adding a final layer of security that can't be turned off.

All software updates are managed automatically for the delivery of new features and to enable rapid security updates. 
Scheduled maintenance windows ensure the MV family continues to address users' needs with the delivery of new features as part of the all-inclusive licensed service.

Camera Specifications

Camera
1/3.2" 5MP (2560x1920) progressive CMOS image sensor
128GB high endurance solid state storage
Full disk encryption
3 - 10mm vari-focal lens with variable aperture f/1.3 - f/2.5
Variable field of view: 28° - 82° (horizontal), 21° - 61° (vertical), 37° - 107° (diagonal)
Automatic iris control with P-iris for optimal image quality
1/5 sec. to 1/32,000 sec. shutter speed
******************************* (color) ************* (B&W)
S/N ratio exceeding 62dB; dynamic range 69dB
Hardware-based light meter for smart scene detection
Built-in IR illuminators, effective up to 30 meters (98 feet)
Integrated heating elements for low-temperature outdoor operation (MV71 only)

Video
720p HD video recording (1280x720) with H.264 encoding
Cloud-augmented edge storage (video at the edge, metadata in the cloud)
Up to 20 days of video storage per camera*
Direct live streaming with no client software (native browser playback)**
Stream video anywhere with automatic cloud proxy

Networking
1x 10/100 Base-T Ethernet (RJ45)
Compatible with Meraki wireless mesh backhaul (separate WLAN AP required)
DSCP traffic marking

Features
Cloud managed with complete integration into the Meraki dashboard
Plug-and-play deployment with self-configuration
Remote adjustment of focus, zoom, and aperture
Dynamic day-to-night transition with IR illumination
Noise-optimized motion indexing engine with historical search
Shared video wall with individual layouts supporting multiple cameras
Selective export capability with cloud proxy
Highly granular view, review, and export user permissions with SAML integration
Motion heat maps for relative hourly or day-by-day motion overview
Motion alerts

Power
Power consumption (MV21): 10.94W maximum via 802.3af PoE
Power consumption (MV71): 21.95W maximum via 802.3at PoE+

Environment
Starting temperature (MV21): -10°C to 40°C (14°F to 104°F)
Starting temperature (MV71): -40°C to 40°C (-40°F to 104°F)
Working temperature (MV21): -20°C to 40°C (-4°F to 104°F)
Working temperature (MV71): -50°C to 40°C (-58°F to 104°F)

In the box
Quick start & installation guide
MV camera hardware
Wall mounting kit, drop ceiling T-rail mounting hardware

Physical characteristics
Dimensions (MV21): 166mm x 116.5mm (diameter x height)
Dimensions (MV71): 173.3mm x 115mm (diameter x height)
Weather-proof IP66-rated housing (MV71 only)
Vandal-proof IK10-rated housing (MV71 only)
Lens adjustment range: 65° tilt, 350° rotation, 350° pan
Weight (MV21): 1.028kg (including mounting plate)
Weight (MV71): 1.482kg (including mounting plate)
Female RJ45 Ethernet connector
Supports Ethernet cable diameters between 5mm and 8mm
Status LED
Reset button

Warranty
Warranty (MV21): 3-year hardware warranty with advanced replacement
Warranty (MV71): 3-year hardware warranty with advanced replacement

Ordering Information
MV21-HW: Meraki MV21 Cloud Managed Indoor Camera
MV71-HW: Meraki MV71 Cloud Managed Outdoor Camera
LIC-MV-XYR: Meraki MV Enterprise License (X = 1, 3, 5, 7, 10 years)
MA-INJ-4-XX: Meraki 802.3at Power over Ethernet injector (XX = US, EU, UK, or AU)
Note: Each Meraki camera requires a license to operate.
* Storage duration dependent on encoding settings.
** Browser support for H.264 decoding required.

Mounting Accessories Specifications

Meraki Wall Mount Arm
Wall mount for attaching camera perpendicular to mounting surface
Includes pendant cap
Supported models: MV21, MV71
Dimensions (wall arm): 140mm x 244mm x 225.4mm
Dimensions (pendant cap): 179.9mm x 49.9mm (diameter x height)
Combined weight: 1.64kg

Meraki Pole Mount
Pole mount for poles with diameter between 40mm - 145mm (1.57in - 5.71in)
Can be combined with MA-MNT-MV-1: Meraki Wall Mount Arm
Supported models: MV71
Dimensions: 156.7mm x 240mm x 68.9mm
Weight: 1.106kg

Meraki L-Shape Wall Mount Bracket
Compact wall mount for attaching camera perpendicular to mounting surface
Supported models: MV21, MV71
Dimensions: 206mm x 182mm x 110mm
Weight: 0.917kg

Ordering Information
MA-MNT-MV-1: Meraki Wall Mount Arm for MV21 and MV71
MA-MNT-MV-2: Meraki Pole Mount for MV71
MA-MNT-MV-3: Meraki L-Shape Wall Mount Bracket for MV21 and MV71
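The "video at the edge, metadata in the cloud" architecture behind motion search can be sketched in a few lines: the camera keeps footage locally and uploads only small motion events, and a region-of-interest query then filters that cloud-side index. The data model below is hypothetical, chosen for illustration; it is not Meraki's actual API.

```python
# Sketch of cloud-side motion indexing (hypothetical data model,
# not Meraki's actual API): the camera stores video locally and
# uploads only motion events; a region search filters the index.
from dataclasses import dataclass

@dataclass
class MotionEvent:
    timestamp: float   # seconds since epoch, points back into edge storage
    x: float           # normalized 0..1 centroid of detected motion
    y: float

def search(events, x0, y0, x1, y1, t_start=0.0, t_end=float("inf")):
    """Return events whose motion centroid falls in the selected region."""
    return [e for e in events
            if x0 <= e.x <= x1 and y0 <= e.y <= y1
            and t_start <= e.timestamp <= t_end]

index = [MotionEvent(100.0, 0.2, 0.8),   # motion near bottom-left
         MotionEvent(200.0, 0.9, 0.1)]   # motion near top-right
hits = search(index, 0.0, 0.5, 0.5, 1.0)  # "drag over" the bottom-left area
print([e.timestamp for e in hits])        # [100.0]
```

Because only the lightweight events travel to the cloud, a search touches no video at all; matching timestamps are then used to pull the relevant clips from the camera's on-device storage.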
The information in this document is subject to change without notice and does not represent a commitment on the part of Native Instruments GmbH. The software described by this document is subject to a License Agreement and may not be copied to other media. No part of this publication may be copied, reproduced or otherwise transmitted or recorded, for any purpose, without prior written permission by Native Instruments GmbH, hereinafter referred to as Native Instruments.

"Native Instruments", "NI" and associated logos are (registered) trademarks of Native Instruments GmbH.

ASIO, VST, HALion and Cubase are registered trademarks of Steinberg Media Technologies GmbH.

All other product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them.

Document authored by: David Gover and Nico Sidi.
Software version: 2.8 (02/2019)
Hardware version: MASCHINE MIKRO MK3

Special thanks to the Beta Test Team, who were invaluable not just in tracking down bugs, but in making this a better product.

NATIVE INSTRUMENTS GmbH, Schlesische Str. 29-30, D-10997 Berlin, Germany, www.native-instruments.de
NATIVE INSTRUMENTS North America, Inc., 6725 Sunset Boulevard, 5th Floor, Los Angeles, CA 90028, USA
NATIVE INSTRUMENTS K.K., YO Building 3F, Jingumae 6-7-15, Shibuya-ku, Tokyo 150-0001, Japan, www.native-instruments.co.jp
NATIVE INSTRUMENTS UK Limited, 18 Phipp Street, London EC2A 4NU, UK
NATIVE INSTRUMENTS FRANCE SARL, 113 Rue Saint-Maur, 75011 Paris, France
SHENZHEN NATIVE INSTRUMENTS COMPANY Limited, 5F, Shenzhen Zimao Center, 111 Taizi Road, Nanshan District, Shenzhen, Guangdong, China

© NATIVE INSTRUMENTS GmbH, 2019. 
All rights reserved.Table of Contents1Welcome to MASCHINE (23)1.1MASCHINE Documentation (24)1.2Document Conventions (25)1.3New Features in MASCHINE 2.8 (26)1.4New Features in MASCHINE 2.7.10 (28)1.5New Features in MASCHINE 2.7.8 (29)1.6New Features in MASCHINE 2.7.7 (29)1.7New Features in MASCHINE 2.7.4 (31)1.8New Features in MASCHINE 2.7.3 (33)2Quick Reference (35)2.1MASCHINE Project Overview (35)2.1.1Sound Content (35)2.1.2Arrangement (37)2.2MASCHINE Hardware Overview (40)2.2.1MASCHINE MIKRO Hardware Overview (40)2.2.1.1Browser Section (41)2.2.1.2Edit Section (42)2.2.1.3Performance Section (43)2.2.1.4Transport Section (45)2.2.1.5Pad Section (46)2.2.1.6Rear Panel (50)2.3MASCHINE Software Overview (51)2.3.1Header (52)2.3.2Browser (54)2.3.3Arranger (56)2.3.4Control Area (59)2.3.5Pattern Editor (60)3Basic Concepts (62)3.1Important Names and Concepts (62)3.2Adjusting the MASCHINE User Interface (65)3.2.1Adjusting the Size of the Interface (65)3.2.2Switching between Ideas View and Song View (66)3.2.3Showing/Hiding the Browser (67)3.2.4Showing/Hiding the Control Lane (67)3.3Common Operations (68)3.3.1Adjusting Volume, Swing, and Tempo (68)3.3.2Undo/Redo (71)3.3.3Focusing on a Group or a Sound (73)3.3.4Switching Between the Master, Group, and Sound Level (77)3.3.5Navigating Channel Properties, Plug-ins, and Parameter Pages in the Control Area.773.3.6Navigating the Software Using the Controller (82)3.3.7Using Two or More Hardware Controllers (82)3.3.8Loading a Recent Project from the Controller (84)3.4Native Kontrol Standard (85)3.5Stand-Alone and Plug-in Mode (86)3.5.1Differences between Stand-Alone and Plug-in Mode (86)3.5.2Switching Instances (88)3.6Preferences (88)3.6.1Preferences – General Page (89)3.6.2Preferences – Audio Page (93)3.6.3Preferences – MIDI Page (95)3.6.4Preferences – Default Page (97)3.6.5Preferences – Library Page (101)3.6.6Preferences – Plug-ins Page (109)3.6.7Preferences – Hardware Page (114)3.6.8Preferences – Colors Page (114)3.7Integrating 
MASCHINE into a MIDI Setup (117)3.7.1Connecting External MIDI Equipment (117)3.7.2Sync to External MIDI Clock (117)3.7.3Send MIDI Clock (118)3.7.4Using MIDI Mode (119)3.8Syncing MASCHINE using Ableton Link (120)3.8.1Connecting to a Network (121)3.8.2Joining and Leaving a Link Session (121)4Browser (123)4.1Browser Basics (123)4.1.1The MASCHINE Library (123)4.1.2Browsing the Library vs. Browsing Your Hard Disks (124)4.2Searching and Loading Files from the Library (125)4.2.1Overview of the Library Pane (125)4.2.2Selecting or Loading a Product and Selecting a Bank from the Browser (128)4.2.3Selecting a Product Category, a Product, a Bank, and a Sub-Bank (133)4.2.3.1Selecting a Product Category, a Product, a Bank, and a Sub-Bank on theController (137)4.2.4Selecting a File Type (137)4.2.5Choosing Between Factory and User Content (138)4.2.6Selecting Type and Character Tags (138)4.2.7Performing a Text Search (142)4.2.8Loading a File from the Result List (143)4.3Additional Browsing Tools (148)4.3.1Loading the Selected Files Automatically (148)4.3.2Auditioning Instrument Presets (149)4.3.3Auditioning Samples (150)4.3.4Loading Groups with Patterns (150)4.3.5Loading Groups with Routing (151)4.3.6Displaying File Information (151)4.4Using Favorites in the Browser (152)4.5Editing the Files’ Tags and Properties (155)4.5.1Attribute Editor Basics (155)4.5.2The Bank Page (157)4.5.3The Types and Characters Pages (157)4.5.4The Properties Page (160)4.6Loading and Importing Files from Your File System (161)4.6.1Overview of the FILES Pane (161)4.6.2Using Favorites (163)4.6.3Using the Location Bar (164)4.6.4Navigating to Recent Locations (165)4.6.5Using the Result List (166)4.6.6Importing Files to the MASCHINE Library (169)4.7Locating Missing Samples (171)4.8Using Quick Browse (173)5Managing Sounds, Groups, and Your Project (175)5.1Overview of the Sounds, Groups, and Master (175)5.1.1The Sound, Group, and Master Channels (176)5.1.2Similarities and Differences in Handling Sounds and Groups 
(177)5.1.3Selecting Multiple Sounds or Groups (178)5.2Managing Sounds (181)5.2.1Loading Sounds (183)5.2.2Pre-listening to Sounds (184)5.2.3Renaming Sound Slots (185)5.2.4Changing the Sound’s Color (186)5.2.5Saving Sounds (187)5.2.6Copying and Pasting Sounds (189)5.2.7Moving Sounds (192)5.2.8Resetting Sound Slots (193)5.3Managing Groups (194)5.3.1Creating Groups (196)5.3.2Loading Groups (197)5.3.3Renaming Groups (198)5.3.4Changing the Group’s Color (199)5.3.5Saving Groups (200)5.3.6Copying and Pasting Groups (202)5.3.7Reordering Groups (206)5.3.8Deleting Groups (207)5.4Exporting MASCHINE Objects and Audio (208)5.4.1Saving a Group with its Samples (208)5.4.2Saving a Project with its Samples (210)5.4.3Exporting Audio (212)5.5Importing Third-Party File Formats (218)5.5.1Loading REX Files into Sound Slots (218)5.5.2Importing MPC Programs to Groups (219)6Playing on the Controller (223)6.1Adjusting the Pads (223)6.1.1The Pad View in the Software (223)6.1.2Choosing a Pad Input Mode (225)6.1.3Adjusting the Base Key (226)6.2Adjusting the Key, Choke, and Link Parameters for Multiple Sounds (227)6.3Playing Tools (229)6.3.1Mute and Solo (229)6.3.2Choke All Notes (233)6.3.3Groove (233)6.3.4Level, Tempo, Tune, and Groove Shortcuts on Your Controller (235)6.3.5Tap Tempo (235)6.4Performance Features (236)6.4.1Overview of the Perform Features (236)6.4.2Selecting a Scale and Creating Chords (239)6.4.3Scale and Chord Parameters (240)6.4.4Creating Arpeggios and Repeated Notes (253)6.4.5Swing on Note Repeat / Arp Output (257)6.5Using Lock Snapshots (257)6.5.1Creating a Lock Snapshot (257)7Working with Plug-ins (259)7.1Plug-in Overview (259)7.1.1Plug-in Basics (259)7.1.2First Plug-in Slot of Sounds: Choosing the Sound’s Role (263)7.1.3Loading, Removing, and Replacing a Plug-in (264)7.1.4Adjusting the Plug-in Parameters (270)7.1.5Bypassing Plug-in Slots (270)7.1.6Using Side-Chain (272)7.1.7Moving Plug-ins (272)7.1.8Alternative: the Plug-in Strip (273)7.1.9Saving and Recalling Plug-in 
Presets (273)7.1.9.1Saving Plug-in Presets (274)7.1.9.2Recalling Plug-in Presets (275)7.1.9.3Removing a Default Plug-in Preset (276)7.2The Sampler Plug-in (277)7.2.1Page 1: Voice Settings / Engine (279)7.2.2Page 2: Pitch / Envelope (281)7.2.3Page 3: FX / Filter (283)7.2.4Page 4: Modulation (285)7.2.5Page 5: LFO (286)7.2.6Page 6: Velocity / Modwheel (288)7.3Using Native Instruments and External Plug-ins (289)7.3.1Opening/Closing Plug-in Windows (289)7.3.2Using the VST/AU Plug-in Parameters (292)7.3.3Setting Up Your Own Parameter Pages (293)7.3.4Using VST/AU Plug-in Presets (298)7.3.5Multiple-Output Plug-ins and Multitimbral Plug-ins (300)8Using the Audio Plug-in (302)8.1Loading a Loop into the Audio Plug-in (306)8.2Editing Audio in the Audio Plug-in (307)8.3Using Loop Mode (308)8.4Using Gate Mode (310)9Using the Drumsynths (312)9.1Drumsynths – General Handling (313)9.1.1Engines: Many Different Drums per Drumsynth (313)9.1.2Common Parameter Organization (313)9.1.3Shared Parameters (316)9.1.4Various Velocity Responses (316)9.1.5Pitch Range, Tuning, and MIDI Notes (316)9.2The Kicks (317)9.2.1Kick – Sub (319)9.2.2Kick – Tronic (321)9.2.3Kick – Dusty (324)9.2.4Kick – Grit (325)9.2.5Kick – Rasper (328)9.2.6Kick – Snappy (329)9.2.7Kick – Bold (331)9.2.8Kick – Maple (333)9.2.9Kick – Push (334)9.3The Snares (336)9.3.1Snare – Volt (338)9.3.2Snare – Bit (340)9.3.3Snare – Pow (342)9.3.4Snare – Sharp (343)9.3.5Snare – Airy (345)9.3.6Snare – Vintage (347)9.3.7Snare – Chrome (349)9.3.8Snare – Iron (351)9.3.9Snare – Clap (353)9.3.10Snare – Breaker (355)9.4The Hi-hats (357)9.4.1Hi-hat – Silver (358)9.4.2Hi-hat – Circuit (360)9.4.3Hi-hat – Memory (362)9.4.4Hi-hat – Hybrid (364)9.4.5Creating a Pattern with Closed and Open Hi-hats (366)9.5The Toms (367)9.5.1Tom – Tronic (369)9.5.2Tom – Fractal (371)9.5.3Tom – Floor (375)9.5.4Tom – High (377)9.6The Percussions (378)9.6.1Percussion – Fractal (380)9.6.2Percussion – Kettle (383)9.6.3Percussion – Shaker (385)9.7The Cymbals (389)9.7.1Cymbal 
– Crash (391)9.7.2Cymbal – Ride (393)10Using the Bass Synth (396)10.1Bass Synth – General Handling (397)10.1.1Parameter Organization (397)10.1.2Bass Synth Parameters (399)11Working with Patterns (401)11.1Pattern Basics (401)11.1.1Pattern Editor Overview (402)11.1.2Navigating the Event Area (404)11.1.3Following the Playback Position in the Pattern (406)11.1.4Jumping to Another Playback Position in the Pattern (407)11.1.5Group View and Keyboard View (408)11.1.6Adjusting the Arrange Grid and the Pattern Length (410)11.1.7Adjusting the Step Grid and the Nudge Grid (413)11.2Recording Patterns in Real Time (416)11.2.1Recording Your Patterns Live (417)11.2.2Using the Metronome (419)11.2.3Recording with Count-in (420)11.3Recording Patterns with the Step Sequencer (422)11.3.1Step Mode Basics (422)11.3.2Editing Events in Step Mode (424)11.4Editing Events (425)11.4.1Editing Events with the Mouse: an Overview (425)11.4.2Creating Events/Notes (428)11.4.3Selecting Events/Notes (429)11.4.4Editing Selected Events/Notes (431)11.4.5Deleting Events/Notes (434)11.4.6Cut, Copy, and Paste Events/Notes (436)11.4.7Quantizing Events/Notes (439)11.4.8Quantization While Playing (441)11.4.9Doubling a Pattern (442)11.4.10Adding Variation to Patterns (442)11.5Recording and Editing Modulation (443)11.5.1Which Parameters Are Modulatable? 
(444)11.5.2Recording Modulation (446)11.5.3Creating and Editing Modulation in the Control Lane (447)11.6Creating MIDI Tracks from Scratch in MASCHINE (452)11.7Managing Patterns (454)11.7.1The Pattern Manager and Pattern Mode (455)11.7.2Selecting Patterns and Pattern Banks (456)11.7.3Creating Patterns (459)11.7.4Deleting Patterns (460)11.7.5Creating and Deleting Pattern Banks (461)11.7.6Naming Patterns (463)11.7.7Changing the Pattern’s Color (465)11.7.8Duplicating, Copying, and Pasting Patterns (466)11.7.9Moving Patterns (469)11.8Importing/Exporting Audio and MIDI to/from Patterns (470)11.8.1Exporting Audio from Patterns (470)11.8.2Exporting MIDI from Patterns (472)11.8.3Importing MIDI to Patterns (474)12Audio Routing, Remote Control, and Macro Controls (483)12.1Audio Routing in MASCHINE (484)12.1.1Sending External Audio to Sounds (485)12.1.2Configuring the Main Output of Sounds and Groups (489)12.1.3Setting Up Auxiliary Outputs for Sounds and Groups (494)12.1.4Configuring the Master and Cue Outputs of MASCHINE (497)12.1.5Mono Audio Inputs (502)12.1.5.1Configuring External Inputs for Sounds in Mix View (503)12.2Using MIDI Control and Host Automation (506)12.2.1Triggering Sounds via MIDI Notes (507)12.2.2Triggering Scenes via MIDI (513)12.2.3Controlling Parameters via MIDI and Host Automation (514)12.2.4Selecting VST/AU Plug-in Presets via MIDI Program Change (522)12.2.5Sending MIDI from Sounds (523)12.3Creating Custom Sets of Parameters with the Macro Controls (527)12.3.1Macro Control Overview (527)12.3.2Assigning Macro Controls Using the Software (528)13Controlling Your Mix (535)13.1Mix View Basics (535)13.1.1Switching between Arrange View and Mix View (535)13.1.2Mix View Elements (536)13.2The Mixer (537)13.2.1Displaying Groups vs. 
Displaying Sounds (539)13.2.2Adjusting the Mixer Layout (541)13.2.3Selecting Channel Strips (542)13.2.4Managing Your Channels in the Mixer (543)13.2.5Adjusting Settings in the Channel Strips (545)13.2.6Using the Cue Bus (549)13.3The Plug-in Chain (551)13.4The Plug-in Strip (552)13.4.1The Plug-in Header (554)13.4.2Panels for Drumsynths and Internal Effects (556)13.4.3Panel for the Sampler (557)13.4.4Custom Panels for Native Instruments Plug-ins (560)13.4.5Undocking a Plug-in Panel (Native Instruments and External Plug-ins Only) (564)14Using Effects (567)14.1Applying Effects to a Sound, a Group or the Master (567)14.1.1Adding an Effect (567)14.1.2Other Operations on Effects (574)14.1.3Using the Side-Chain Input (575)14.2Applying Effects to External Audio (578)14.2.1Step 1: Configure MASCHINE Audio Inputs (578)14.2.2Step 2: Set up a Sound to Receive the External Input (579)14.2.3Step 3: Load an Effect to Process an Input (579)14.3Creating a Send Effect (580)14.3.1Step 1: Set Up a Sound or Group as Send Effect (581)14.3.2Step 2: Route Audio to the Send Effect (583)14.3.3 A Few Notes on Send Effects (583)14.4Creating Multi-Effects (584)15Effect Reference (587)15.1Dynamics (588)15.1.1Compressor (588)15.1.2Gate (591)15.1.3Transient Master (594)15.1.4Limiter (596)15.1.5Maximizer (600)15.2Filtering Effects (603)15.2.1EQ (603)15.2.2Filter (605)15.2.3Cabinet (609)15.3Modulation Effects (611)15.3.1Chorus (611)15.3.2Flanger (612)15.3.3FM (613)15.3.4Freq Shifter (615)15.3.5Phaser (616)15.4Spatial and Reverb Effects (617)15.4.1Ice (617)15.4.2Metaverb (619)15.4.3Reflex (620)15.4.4Reverb (Legacy) (621)15.4.5Reverb (623)15.4.5.1Reverb Room (623)15.4.5.2Reverb Hall (626)15.4.5.3Plate Reverb (629)15.5Delays (630)15.5.1Beat Delay (630)15.5.2Grain Delay (632)15.5.3Grain Stretch (634)15.5.4Resochord (636)15.6Distortion Effects (638)15.6.1Distortion (638)15.6.2Lofi (640)15.6.3Saturator (641)15.7Perform FX (645)15.7.1Filter (646)15.7.2Flanger (648)15.7.3Burst Echo (650)15.7.4Reso Echo 
(653)15.7.5Ring (656)15.7.6Stutter (658)15.7.7Tremolo (661)15.7.8Scratcher (664)16Working with the Arranger (667)16.1Arranger Basics (667)16.1.1Navigating Song View (670)16.1.2Following the Playback Position in Your Project (672)16.1.3Performing with Scenes and Sections using the Pads (673)16.2Using Ideas View (677)16.2.1Scene Overview (677)16.2.2Creating Scenes (679)16.2.3Assigning and Removing Patterns (679)16.2.4Selecting Scenes (682)16.2.5Deleting Scenes (684)16.2.6Creating and Deleting Scene Banks (685)16.2.7Clearing Scenes (685)16.2.8Duplicating Scenes (685)16.2.9Reordering Scenes (687)16.2.10Making Scenes Unique (688)16.2.11Appending Scenes to Arrangement (689)16.2.12Naming Scenes (689)16.2.13Changing the Color of a Scene (690)16.3Using Song View (692)16.3.1Section Management Overview (692)16.3.2Creating Sections (694)16.3.3Assigning a Scene to a Section (695)16.3.4Selecting Sections and Section Banks (696)16.3.5Reorganizing Sections (700)16.3.6Adjusting the Length of a Section (702)16.3.6.1Adjusting the Length of a Section Using the Software (703)16.3.6.2Adjusting the Length of a Section Using the Controller (705)16.3.7Clearing a Pattern in Song View (705)16.3.8Duplicating Sections (705)16.3.8.1Making Sections Unique (707)16.3.9Removing Sections (707)16.3.10Renaming Scenes (708)16.3.11Clearing Sections (710)16.3.12Creating and Deleting Section Banks (710)16.3.13Working with Patterns in Song view (710)16.3.13.1Creating a Pattern in Song View (711)16.3.13.2Selecting a Pattern in Song View (711)16.3.13.3Clearing a Pattern in Song View (711)16.3.13.4Renaming a Pattern in Song View (711)16.3.13.5Coloring a Pattern in Song View (712)16.3.13.6Removing a Pattern in Song View (712)16.3.13.7Duplicating a Pattern in Song View (712)16.3.14Enabling Auto Length (713)16.3.15Looping (714)16.3.15.1Setting the Loop Range in the Software (714)16.3.15.2Activating or Deactivating a Loop Using the Controller (715)16.4Playing with Sections (715)16.4.1Jumping to another Playback 
Position in Your Project (716)16.5Triggering Sections or Scenes via MIDI (717)16.6The Arrange Grid (719)16.7Quick Grid (720)17Sampling and Sample Mapping (722)17.1Opening the Sample Editor (722)17.2Recording Audio (724)17.2.1Opening the Record Page (724)17.2.2Selecting the Source and the Recording Mode (725)17.2.3Arming, Starting, and Stopping the Recording (729)17.2.5Checking Your Recordings (731)17.2.6Location and Name of Your Recorded Samples (734)17.3Editing a Sample (735)17.3.1Using the Edit Page (735)17.3.2Audio Editing Functions (739)17.4Slicing a Sample (743)17.4.1Opening the Slice Page (743)17.4.2Adjusting the Slicing Settings (744)17.4.3Manually Adjusting Your Slices (746)17.4.4Applying the Slicing (750)17.5Mapping Samples to Zones (754)17.5.1Opening the Zone Page (754)17.5.2Zone Page Overview (755)17.5.3Selecting and Managing Zones in the Zone List (756)17.5.4Selecting and Editing Zones in the Map View (761)17.5.5Editing Zones in the Sample View (765)17.5.6Adjusting the Zone Settings (767)17.5.7Adding Samples to the Sample Map (770)18Appendix: Tips for Playing Live (772)18.1Preparations (772)18.1.1Focus on the Hardware (772)18.1.2Customize the Pads of the Hardware (772)18.1.3Check Your CPU Power Before Playing (772)18.1.4Name and Color Your Groups, Patterns, Sounds and Scenes (773)18.1.5Consider Using a Limiter on Your Master (773)18.1.6Hook Up Your Other Gear and Sync It with MIDI Clock (773)18.1.7Improvise (773)18.2Basic Techniques (773)18.2.1Use Mute and Solo (773)18.2.2Create Variations of Your Drum Patterns in the Step Sequencer (774)18.2.3Use Note Repeat (774)18.2.4Set Up Your Own Multi-effect Groups and Automate Them (774)18.3Special Tricks (774)18.3.1Changing Pattern Length for Variation (774)18.3.2Using Loops to Cycle Through Samples (775)18.3.3Load Long Audio Files and Play with the Start Point (775)19Troubleshooting (776)19.1Knowledge Base (776)19.2Technical Support (776)19.3Registration Support (777)19.4User Forum (777)20Glossary (778)Index 
(786)

1 Welcome to MASCHINE

Thank you for buying MASCHINE!

MASCHINE is a groove production studio that implements the familiar working style of classical groove boxes along with the advantages of a computer-based system. MASCHINE is ideal for making music live, as well as in the studio. It's the hands-on aspect of a dedicated instrument, the MASCHINE hardware controller, united with the advanced editing features of the MASCHINE software.

Creating beats is often not very intuitive with a computer, but using the MASCHINE hardware controller to do it makes it easy and fun. You can tap in freely with the pads or use Note Repeat to jam along. Alternatively, build your beats using the step sequencer just as in classic drum machines.

Patterns can be intuitively combined and rearranged on the fly to form larger ideas. You can try out several different versions of a song without ever having to stop the music.

Since you can integrate it into any sequencer that supports VST, AU, or AAX plug-ins, you can reap the benefits in almost any software setup, or use it as a stand-alone application. You can sample your own material, slice loops, and rearrange them easily.

However, MASCHINE is a lot more than an ordinary groovebox or sampler: it comes with an inspiring 7-gigabyte library, and a sophisticated yet easy-to-use tag-based Browser to give you instant access to the sounds you are looking for.

What's more, MASCHINE provides lots of options for manipulating your sounds via internal effects and other sound-shaping possibilities. You can also control external MIDI hardware and 3rd-party software with the MASCHINE hardware controller, while customizing the functions of the pads, knobs and buttons according to your needs using the included Controller Editor application. We hope you enjoy this fantastic instrument as much as we do. 
Now let's get going!

The MASCHINE team at Native Instruments.

1.1 MASCHINE Documentation

Native Instruments provides many information sources regarding MASCHINE. The main documents should be read in the following sequence:

1. MASCHINE MIKRO Quick Start Guide: This animated online guide provides a practical approach to help you learn the basics of MASCHINE MIKRO. The guide is available from the Native Instruments website: https:///maschine-mikro-quick-start/
2. MASCHINE Manual (this document): The MASCHINE Manual provides you with a comprehensive description of all MASCHINE software and hardware features.

Additional documentation sources provide you with details on more specific topics:

► Online Support Videos: You can find a number of support videos on The Official Native Instruments Support Channel under the following URL: https:///NIsupport-EN. We recommend that you follow along with these instructions while the respective application is running on your computer.

Other Online Resources:
If you are experiencing problems related to your Native Instruments product that the supplied documentation does not cover, there are several ways of getting help:
▪ Knowledge Base
▪ User Forum
▪ Technical Support
▪ Registration Support
You will find more information on these subjects in the chapter Troubleshooting.

1.2 Document Conventions

This section introduces you to the signage and text highlighting used in this manual. This manual uses particular formatting to point out special facts and to warn you of potential issues. The icons introducing these notes let you see what kind of information is to be expected.

Furthermore, the following formatting is used:
▪ Text appearing in (drop-down) menus (such as Open…, Save as… etc.) 
in the software and paths to locations on your hard disk or other storage devices is printed in italics.
▪ Text appearing elsewhere (labels of buttons, controls, text next to checkboxes etc.) in the software is printed in blue. Whenever you see this formatting applied, you will find the same text appearing somewhere on the screen.
▪ Text appearing on the displays of the controller is printed in light grey. Whenever you see this formatting applied, you will find the same text on a controller display.
▪ Text appearing on labels of the hardware controller is printed in orange. Whenever you see this formatting applied, you will find the same text on the controller.
▪ Important names and concepts are printed in bold.
▪ References to keys on your computer's keyboard are put in square brackets (e.g., "Press [Shift] + [Enter]").
► Single instructions are introduced by this play button type arrow.
→ Results of actions are introduced by this smaller arrow.

Naming Convention
Throughout the documentation we will refer to the MASCHINE controller (or just controller) as the hardware controller and MASCHINE software as the software installed on your computer.
The term "effect" will sometimes be abbreviated as "FX" when referring to elements in the MASCHINE software and hardware. These terms have the same meaning.

Button Combinations and Shortcuts on Your Controller
Most instructions will use the "+" sign to indicate buttons (or buttons and pads) that must be pressed simultaneously, starting with the button indicated first. 
E.g., an instruction such as: "Press SHIFT + PLAY" means:

1. Press and hold SHIFT.
2. While holding SHIFT, press PLAY and release it.
3. Release SHIFT.

1.3 New Features in MASCHINE 2.8

The following new features have been added to MASCHINE:

Integration
▪ Browse on , create your own collections of loops and one-shots and send them directly to the MASCHINE browser.

Improvements to the Browser
▪ Samples are now cataloged in separate Loops and One-shots tabs in the Browser.
▪ Previews of loops selected in the Browser are played in sync with the current project. When a loop is selected with Prehear turned on, it will begin playing immediately, in sync with the project, if the transport is running. If a loop preview starts part-way through the loop, the loop will play once more for its full length to ensure you get to hear the entire loop once in context with your project.
▪ Filters and product selections are remembered when switching between content types and Factory/User Libraries in the Browser.
▪ Browser content synchronization between multiple running instances. When running multiple instances of MASCHINE, either standalone and/or as a plug-in, updates to the Library are synced across the instances. For example, if you delete a sample from your User Library in one instance, the sample will no longer be present in the other instances. Similarly, if you save a preset in one instance, that preset will then be available in the other instances, too.
▪ Edits made to samples in the Factory Libraries are saved to the Standard User Directory.

For more information on these new features, refer to the following chapter ↑4, Browser.

Improvements to the MASCHINE MIKRO MK3 Controller
▪ You can now set sample Start and End points using the controller.
For more information refer to ↑17.3.1, Using the Edit Page.

Improved Support for A-Series Keyboards
▪ When browsing with A-Series keyboards, you can now jump quickly to the results list by holding SHIFT and pushing right on the 4D Encoder.
▪ When browsing with A-Series keyboards, you can fast-scroll through the Browser results list by holding SHIFT and twisting the 4D Encoder.
▪ Mute and solo Sounds and Groups from A-Series keyboards. Sounds are muted in TRACK mode while Groups are muted in IDEAS.
VideoPoet Explained

VideoPoet is a large language model (LLM) designed primarily for zero-shot video generation. It can perform a variety of video generation tasks, including text-to-video, image-to-video, video stylization, video inpainting and outpainting, and video-to-audio conversion. The model leverages the capabilities of language models to integrate multiple video generation abilities into a single model, rather than relying on components trained separately for each task.

VideoPoet adopts a decoder-only Transformer architecture and can process multimodal inputs, including images, video, text, and audio. This design lets it handle a wide range of inputs and tasks more flexibly, improving the quality and diversity of the generated videos.

In addition, VideoPoet offers further capabilities such as long-video generation, interactive video editing, and image-to-video control. These capabilities broaden its application scenarios beyond simple text- or image-conditioned generation to more complex video editing and creation.

Overall, VideoPoet is a powerful and versatile video generation model that opens up new possibilities for video creation and editing.
Towards the Application of Scalable Video Coding in Intelligent Transportation Systems

Bin Hu (1), Lipeng Sun (2), Jun Sun (3)
(1. Shenzhen E-traffic Technology Co., Ltd., Shenzhen 518040; 2. Shenzhen Bellsent Intelligent System Co., Ltd., Shenzhen 518057; 3. Institute of Computer Science and Technology, Peking University, Beijing 100871)

Abstract: Video surveillance plays an important role in Intelligent Transportation Systems (ITS). However, state-of-the-art video coding technologies cannot adapt well to drastically growing, heterogeneous practical transportation systems. To handle this problem effectively, we propose a new solution that uses scalable video coding (SVC) technologies for video surveillance in ITS. In this paper, the history, features, and trends of SVC technologies are briefly introduced, followed by the application of SVC in ITS from two aspects: 1) the storage and management of surveillance videos, and 2) video transmission and distribution. Practical application cases clearly exhibit the advantages of SVC in video surveillance systems.

Keywords: Scalable video coding (SVC); Intelligent transportation; Video storage; Video distribution

1 Current Applications of Video Technology in Intelligent Transportation Systems

Functionally, the subsystems of an intelligent transportation system fall into four major parts: vehicle control, traffic surveillance, vehicle operations management, and traveler information systems. Video technology plays a key supporting role in all of these systems.
Deep Learning NVR

Highlights
• Facial Recognition: Provide enhanced tracking and greater security by categorizing and then identifying personnel in real time
• People & Vehicle Detection: Build situational awareness by detecting people or vehicles in off-limit areas
• People Counting: Measure marketing effectiveness and optimize rental prices by monitoring foot traffic
• Intrusion Detection: Set up perimeter lines for automatic notifications when breaches occur
• Easy and Flexible Integration: Complete surveillance deployments in no time with support for over 7,600 IP cameras, H.264/H.265, and 4K resolution 1
• Cross-Platform Support: Access from anywhere and anytime with Windows®, macOS®, iOS™, Android®, and web browser support
• Scalable Storage: Expand with up to two DX517 expansion units for an additional 10 drive bays 2

Deep Learning Video Analytics

Synology NVR DVA3221 is an on-premises 4-bay desktop NVR solution that integrates Synology's deep learning-based algorithms to provide a fast, smart, and accurate video surveillance solution. Built-in support for push notifications keeps users informed of possible intrusions, while the automated video surveillance system helps safeguard properties by detecting people, vehicles, and anonymous objects in designated areas.

Next-generation Surveillance Solution with Deep Learning Technology
• Event surveillance: Create a custom watchlist by importing photos of attendees into a database and set rules to notify system administrators when personnel or a VIP client enters or leaves the compound.
• Suspicious tracking: Capture and register, or manually submit, photos of an unknown person into a database to search for footage of them, or to be alerted when they re-appear on camera.
• Covered face detection: Detect when a person is or is not wearing a mask and be notified.
• Flexible storage and camera channels: Expand storage to 10 extra drive bays using two Synology DX517 expansion units.
Install up to 32 IP camera channels using 6 real-time video analysis tasks 3.

Generate Actionable Insights

Advanced deep learning-based algorithms built into the DVA3221 convert vast amounts of unstructured video data into business intelligence and security insights. This helps significantly reduce workloads while enhancing overall security.
• Facial recognition: Instantly identify key personnel in a custom database with an accuracy of 97.04% 4.
• People and vehicle detection: Notify administrators when an anonymous person or vehicle is loitering in a pre-defined area such as a no-parking zone or in front of a shop.
• People counting: Count the number of people entering and exiting an area using bidirectional counters to help owners gauge sales and marketing effectiveness, or adjust rental rates.
• Intrusion detection: Detect when a person or vehicle crosses a perimeter line, alerting administrators to potential intrusions.

Safeguarding Your Assets

Surveillance Station delivers intelligent monitoring and video management tools to protect your property and assets.

Smarter On-premises Security

Synology DVA and Surveillance Station elevate video surveillance into a real-time security solution, providing instant feedback and notifications.

Powerful VMS Platform

Powered by Synology Surveillance Station running on Synology DSM, the DVA3221 combines powerful video analytics with robust storage and data management capabilities.
• Complete data ownership with zero subscription fees: Unlike security cameras that require monthly subscriptions and store data on the cloud, all DVA3221 recordings are privately stored on the unit itself, free of charge.
• Works with more than 7,600 cameras: No lock-ins.
Choose from a wider range of camera options to suit your requirements with ONVIF support.
• Remote monitoring and control: Access your surveillance system anytime, anywhere, from a web browser, desktop client, or the DS cam mobile app.
• Automation & notification: Set up custom actions when predefined events occur, such as automatically taking snapshots when motion is detected. Get notified via e-mail, SMS, or push notifications so that you can respond in a timely manner.
• Data protection and CMS: Comprehensive data protection and backup tools are included for on-site, remote, or cloud-based backup targets. Set up and manage multi-site surveillance systems easily, with optional N+M failover for maximum service availability. 5

Surveillance On-the-Go

With DS cam, you can watch surveillance video feeds, take snapshots, zoom in and adjust camera position with PTZ control, browse through recorded events, and receive instant notifications on the go.

Technical Specifications

Hardware
• CPU: Intel® Atom C3538 quad-core 2.1 GHz
• Hardware encryption engine: Yes (AES-NI)
• GPU: NVIDIA® GeForce® GTX 1650 6
• Memory: 8 GB DDR4 non-ECC SO-DIMM (4 GB x 2; expandable up to 32 GB with 16 GB ECC SO-DIMM x 2) 7
• Compatible drive type: 4 x 3.5" or 2.5" SATA HDD/SSD (drives not included)
• Hot-swappable drives: Yes
• External ports: 3 x USB 3.2 Gen 1, 2 x eSATA
• Hardware overview (panel callouts): drive tray and status indicator, status indicator, alert indicator, LAN indicator, drive tray lock, power button and indicator, USB 3.2 Gen 1 port, system fan, power port, Kensington Security Slot, console port, reset button, eSATA port, 1GbE RJ-45 port

General DSM Specifications
• Networking protocols: SMB, AFP, NFS, FTP, WebDAV, CalDAV, iSCSI, Telnet, SSH, SNMP, VPN (PPTP, OpenVPN™, L2TP)
• File systems: Internal: Btrfs, ext4; External: Btrfs, ext4, ext3, FAT, NTFS, HFS+, exFAT 8
• Supported RAID types: Synology Hybrid RAID (SHR), Basic, JBOD, RAID 0, RAID 1, RAID 5, RAID 6, RAID 10
• Storage management: Maximum single volume size: 108 TB; Maximum system snapshots:
65,536 9; Maximum internal volumes: 64
• SSD cache: Read/write cache support
• File sharing capability: Maximum local user accounts: 2,048; Maximum local groups: 256; Maximum shared folders: 512; Maximum concurrent SMB/NFS/AFP/FTP connections: 2,000
• Privileges: Windows Access Control List (ACL), application privileges
• Directory service: Windows® AD integration (domain users log in via SMB/NFS/AFP/FTP/File Station), LDAP integration
• Security: Firewall, shared folder encryption, SMB encryption, FTP over SSL/TLS, SFTP, rsync over SSH, login auto-block, Let's Encrypt support, HTTPS (customizable cipher suite)
• Supported clients: Windows® 7 onwards, macOS® 10.12 onwards
• Supported browsers: Chrome®, Firefox®, Edge®, Internet Explorer® 10 onwards, Safari® 10 onwards, Safari (iOS 10 onwards), Chrome (Android™ 6.0 onwards) on tablets
• Interface languages: English, Deutsch, Français, Italiano, Español, Dansk, Norsk, Svensk, Nederlands, Русский, Polski, Magyar, Português do Brasil, Português Europeu, Türkçe, Český

Surveillance Station
• Maximum IP cameras: 32 channels (total of 960 FPS at 720p, H.264), including a maximum of 6 real-time Deep Video Analytics tasks; 8 free camera licenses included; additional cameras require the purchase of additional licenses
• Deep video analytics features: Face recognition; People counting; People and vehicle detection; Intrusion detection
• IP camera support: Video codecs: MJPEG, MPEG-4, H.264, H.265, and MxPEG (not supported by Deep Video Analytics tasks); Audio codecs: PCM, AAC, AMR, G.711, and G.726; Camera bit-rate fine-tuning via constant and variable bit-rate control; ONVIF™ 2.6, Profile S certified, Profile G for edge recording
• Live View: Pan-Tilt-Zoom (PTZ) support with configurable PTZ speed; On-screen camera controls: camera zoom, focus, iris adjustment, auto pan, and auto object tracking; E-Map, Snapshot, and Snapshot Editor supported for instant editing after taking a snapshot; Video quality settings: bit-rate control, image quality, resolution, and FPS; Joystick
support for easy navigation; Alert panel for a quick display of the most recently triggered events; Customizable layout and sequential camera view support
• Recording: Supported recording modes: manual, continuous, motion detection, I/O alarm, action rule, and customized; Recording format: MP4; The customized recording mode can be configured as a combination of conditional events, including motion detection, audio detection, tampering detection, and alarm input; Edge recording on the SD card of certain supported cameras; Configurable pre-recording and post-recording time
• Playback: Customizable layouts for playback in Timeline; Playback controls include pause, stop, previous recording, next recording, fast forward, slow motion, frame-by-frame, and digital zoom in/out; Embedded watermark for evidence integrity; Image enhancements including brightness, contrast, saturation, and sharpness (web client only)
• Management: Create and manage different user privileges; Rotate recorded videos by archived days or storage size; Event notification via SMS, e-mail, and mobile devices via DS cam; Recordings can be backed up to external storage or a remote server; Built-in NTP server; Supported tunnels: MPEG-4, H.264 via RTSP over TCP, UDP, and HTTP
• iOS/Android™ applications: Synology LiveCam, DS cam, DS file, DS finder
• Surveillance desktop client support: Windows® 7 onwards, macOS® 10.12 onwards

Environment and Packaging
• Environment safety: RoHS compliant
• Package contents: 1 x DVA3221 main unit; 1 x Quick Installation Guide; 1 x Accessory pack; 1 x AC power cord; 2 x RJ-45 LAN cables
• Optional accessories: 16 GB DDR4 ECC SO-DIMM (D4ECSO-2666-16G); Synology Expansion Unit DX517; VisualStation VS360HD, VS960HD; Surveillance Device License Pack
• Warranty: 3 years 10

*Model specifications are subject to change without notice. Please refer to for the latest information.

1. Check our compatibility list for all supported cameras.
2.
DVA3221 supports up to two Synology Expansion Units DX517, sold separately.
3. The DVA3221 unit comes with 8 camera licenses. Support for more cameras requires additional licenses.
4. Testing was done by the National Institute of Standards and Technology (NIST) under their Face Recognition Vendor Test (FRVT), WILD dataset.
5. Face Recognition is not supported on servers running as CMS recording servers or failover servers. DVA features other than task, detection result, and archive settings are not supported on servers running as CMS recording servers.
6. Display output is not supported.
7. The DVA3221 comes with 8 GB DDR4 non-ECC SO-DIMM (2 x 4 GB) pre-installed. The non-ECC memory modules must be removed before the optional ECC memory modules can be installed.
8. exFAT Access can be purchased separately in Package Center.
9. System snapshots include snapshots taken by iSCSI Manager, Snapshot Replication, and Virtual Machine Manager. Availability of these packages varies by model.
10. The warranty period starts from the purchase date as stated on your receipt of purchase. Learn more about our limited product warranty policy.

SYNOLOGY INC.
Copyright © 2020, Synology Inc. All rights reserved. Synology and the Synology logo are trademarks or registered trademarks of Synology Inc. Other product and company names mentioned herein may be trademarks of their respective companies. Synology may make changes to specifications and product descriptions at any time, without notice.
DVA3221-2020-ENU-REV001
Adaptive background mixture models for real-time tracking

Chris Stauffer, W.E.L. Grimson
The Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139

Abstract

A common method for real-time segmentation of moving regions in image sequences involves "background subtraction," or thresholding the error between an estimate of the image without moving objects and the current image. The numerous approaches to this problem differ in the type of background model used and the procedure used to update the model. This paper discusses modeling each pixel as a mixture of Gaussians and using an on-line approximation to update the model. The Gaussian distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process. Each pixel is classified based on whether the Gaussian distribution which represents it most effectively is considered part of the background model. This results in a stable, real-time outdoor tracker which reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. This system has been run almost continuously for 16 months, 24 hours a day, through rain and snow.

1 Introduction

In the past, computational barriers have limited the complexity of real-time video processing applications. As a consequence, most systems were either too slow to be practical, or succeeded by restricting themselves to very controlled situations. Recently, faster computers have enabled researchers to consider more complex, robust models for real-time analysis of streaming data.
These new methods allow researchers to begin modeling real-world processes under varying conditions.

Consider the problem of video surveillance and monitoring. A robust system should not depend on careful placement of cameras. It should also be robust to whatever is in its visual field or whatever lighting effects occur. It should be capable of dealing with movement through cluttered areas, objects overlapping in the visual field, shadows, lighting changes, effects of moving elements of the scene (e.g., swaying trees), slow-moving objects, and objects being introduced or removed from the scene. Traditional approaches based on backgrounding methods typically fail in these general situations. Our goal is to create a robust, adaptive tracking system that is flexible enough to handle variations in lighting, moving scene clutter, multiple moving objects, and other arbitrary changes to the observed scene. The resulting tracker is primarily geared towards scene-level video surveillance applications.

1.1 Previous work and current shortcomings

Most researchers have abandoned non-adaptive methods of backgrounding because of the need for manual initialization. Without re-initialization, errors in the background accumulate over time, making this method useful only in highly supervised, short-term tracking applications without significant changes in the scene.

A standard method of adaptive backgrounding is averaging the images over time, creating a background approximation which is similar to the current static scene except where motion occurs. While this is effective in situations where objects move continuously and the background is visible a significant portion of the time, it is not robust to scenes with many moving objects, particularly if they move slowly. It also cannot handle bimodal backgrounds, recovers slowly when the background is uncovered, and has a single, predetermined threshold for the entire scene.

Changes in scene lighting can cause problems for many backgrounding methods. Ridder et
al. [5] modeled each pixel with a Kalman filter, which made their system more robust to lighting changes in the scene. While this method does have a pixel-wise automatic threshold, it still recovers slowly and does not handle bimodal backgrounds well. Koller et al. [4] have successfully integrated this method in an automatic traffic monitoring application.

Pfinder [7] uses a multi-class statistical model for the tracked objects, but the background model is a single Gaussian per pixel. After an initialization period where the room is empty, the system reports good results. There have been no reports on the success of this tracker in outdoor scenes.

Friedman and Russell [2] have recently implemented a pixel-wise EM framework for detection of vehicles that bears the most similarity to our work. Their method attempts to explicitly classify the pixel values into three separate, predetermined distributions corresponding to the road color, the shadow color, and colors corresponding to vehicles. Their attempt to mediate the effect of shadows appears to be somewhat successful, but it is not clear what behavior their system would exhibit for pixels which did not contain these three distributions. For example, pixels may present a single background color or multiple background colors resulting from repetitive motions, shadows, or reflectances.

1.2 Our approach

Rather than explicitly modeling the values of all the pixels as one particular type of distribution, we simply model the values of a particular pixel as a mixture of Gaussians. Based on the persistence and the variance of each of the Gaussians of the mixture, we determine which Gaussians may correspond to background colors. Pixel values that do not fit the background distributions are considered foreground until there is a Gaussian that includes them with sufficient, consistent evidence supporting it.

Our system adapts to deal robustly with lighting changes, repetitive motions of scene elements, tracking through cluttered regions, slow-moving
objects, and introducing or removing objects from the scene. Slowly moving objects take longer to be incorporated into the background, because their color has a larger variance than the background. Also, repetitive variations are learned, and a model for the background distribution is generally maintained even if it is temporarily replaced by another distribution, which leads to faster recovery when objects are removed.

Our backgrounding method contains two significant parameters: α, the learning constant, and T, the proportion of the data that should be accounted for by the background. Without needing to alter parameters, our system has been used in an indoor human-computer interface application and, for the past 16 months, has been continuously monitoring outdoor scenes.

2 The method

If each pixel resulted from a particular surface under particular lighting, a single Gaussian would be sufficient to model the pixel value while accounting for acquisition noise. If only lighting changed over time, a single, adaptive Gaussian per pixel would be sufficient.

[Figure 1: The execution of the program. (a) the current image, (b) an image composed of the means of the most probable Gaussians in the background model, (c) the foreground pixels, (d) the current image with tracking information superimposed. Note: while the shadows are foreground in this case, if the surface was covered by shadows a significant amount of the time, a Gaussian representing those pixel values may be significant enough to be considered background.]
In practice, multiple surfaces often appear in the view frustum of a particular pixel and the lighting conditions change. Thus, multiple, adaptive Gaussians are necessary. We use a mixture of adaptive Gaussians to approximate this process.

Each time the parameters of the Gaussians are updated, the Gaussians are evaluated using a simple heuristic to hypothesize which are most likely to be part of the "background process." Pixel values that do not match one of the pixel's "background" Gaussians are grouped using connected components. Finally, the connected components are tracked from frame to frame using a multiple hypothesis tracker. The process is illustrated in Figure 1.

2.1 Online mixture model

We consider the values of a particular pixel over time as a "pixel process". The "pixel process" is a time series of pixel values, e.g. scalars for grayvalues or vectors for color images. At any time, $t$, what is known about a particular pixel, $\{x_0, y_0\}$, is its history

$$\{X_1, \ldots, X_t\} = \{I(x_0, y_0, i) : 1 \le i \le t\} \qquad (1)$$

where $I$ is the image sequence. Some "pixel processes" are shown by the (R,G) scatter plots in Figure 2(a)-(c), which illustrate the need for adaptive systems with automatic thresholds. Figure 2(b) and (c) also highlight a need for a multi-modal representation.

[Figure 2: This figure contains images and scatter plots of the red and green values of a single pixel from the image over time. It illustrates some of the difficulties involved in real environments. (a) shows two scatter plots from the same pixel taken 2 minutes apart. This would require two thresholds. (b) shows a bi-modal distribution of pixel values resulting from specularities on the surface of water. (c) shows another bi-modality resulting from monitor flicker.]

The value of each pixel represents a measurement of the radiance in the direction of the sensor of the first object intersected by the pixel's optical ray. With a static background and static lighting, that value would be relatively constant. If we assume that independent, Gaussian noise is incurred
in the sampling process, its density could be described by a single Gaussian distribution centered at the mean pixel value. Unfortunately, the most interesting video sequences involve lighting changes, scene changes, and moving objects.

If lighting changes occurred in a static scene, it would be necessary for the Gaussian to track those changes. If a static object was added to the scene and was not incorporated into the background until it had been there longer than the previous object, the corresponding pixels could be considered foreground for arbitrarily long periods. This would lead to accumulated errors in the foreground estimation, resulting in poor tracking behavior. These factors suggest that more recent observations may be more important in determining the Gaussian parameter estimates.

An additional aspect of variation occurs if moving objects are present in the scene. Even a relatively consistently colored moving object is generally expected to produce more variance than a "static" object. Also, in general, there should be more data supporting the background distributions because they are repeated, whereas pixel values for different objects are often not the same color.

These are the guiding factors in our choice of model and update procedure. The recent history of each pixel, $\{X_1, \ldots, X_t\}$, is modeled by a mixture of $K$ Gaussian distributions. The probability of observing the current pixel value is

$$P(X_t) = \sum_{i=1}^{K} \omega_{i,t} \cdot \eta(X_t, \mu_{i,t}, \Sigma_{i,t}) \qquad (2)$$

where $K$ is the number of distributions, $\omega_{i,t}$ is an estimate of the weight (what portion of the data is accounted for by this Gaussian) of the $i$th Gaussian in the mixture at time $t$, $\mu_{i,t}$ is the mean value of the $i$th Gaussian in the mixture at time $t$, $\Sigma_{i,t}$ is the covariance matrix of the $i$th Gaussian in the mixture at time $t$, and where $\eta$ is a Gaussian probability density function

$$\eta(X_t, \mu, \Sigma) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(X_t - \mu_t)^T \Sigma^{-1} (X_t - \mu_t)} \qquad (3)$$

$K$ is determined by the available memory and computational power. Currently, from 3 to 5 are used. Also, for
computational reasons, the covariance matrix is assumed to be of the form:

$$\Sigma_{k,t} = \sigma_k^2 I \qquad (4)$$

This assumes that the red, green, and blue pixel values are independent and have the same variances. While this is certainly not the case, the assumption allows us to avoid a costly matrix inversion at the expense of some accuracy.

Thus, the distribution of recently observed values of each pixel in the scene is characterized by a mixture of Gaussians. A new pixel value will, in general, be represented by one of the major components of the mixture model and used to update the model.

If the pixel process could be considered a stationary process, a standard method for maximizing the likelihood of the observed data is expectation maximization [1]. Unfortunately, each pixel process varies over time as the state of the world changes, so we use an approximate method which essentially treats each new observation as a sample set of size 1 and uses standard learning rules to integrate the new data. Because there is a mixture model for every pixel in the image, implementing an exact EM algorithm on a window of recent data would be costly. Instead, we implement an on-line K-means approximation. Every new pixel value, $X_t$, is checked against the existing $K$ Gaussian distributions, until a match is found. A match is defined as a pixel value within 2.5 standard deviations of a distribution.¹ This threshold can be perturbed with little effect on performance. This is effectively a per-pixel/per-distribution threshold. This is extremely useful when different regions have different lighting (see Figure 2(a)), because objects which appear in shaded regions do not generally exhibit as much noise as objects in lighted regions. A uniform threshold often results in objects disappearing when they enter shaded regions.

If none of the $K$ distributions match the current pixel value, the least probable distribution is replaced with a distribution with the current value as its mean value, an initially high variance, and low prior weight.

The prior
weights of the $K$ distributions at time $t$, $\omega_{k,t}$, are adjusted as follows

$$\omega_{k,t} = (1 - \alpha)\,\omega_{k,t-1} + \alpha\,(M_{k,t}) \qquad (5)$$

where $\alpha$ is the learning rate² and $M_{k,t}$ is 1 for the model which matched and 0 for the remaining models. After this approximation, the weights are renormalized. $1/\alpha$ defines the time constant which determines the speed at which the distribution's parameters change. $\omega_{k,t}$ is effectively a causal low-pass filtered average of the (thresholded) posterior probability that pixel values have matched model $k$ given observations from time 1 through $t$. This is equivalent to the expectation of this value with an exponential window on the past values.

The $\mu$ and $\sigma$ parameters for unmatched distributions remain the same. The parameters of the distribution which matches the new observation are updated as follows

$$\mu_t = (1 - \rho)\,\mu_{t-1} + \rho\,X_t \qquad (6)$$

$$\sigma_t^2 = (1 - \rho)\,\sigma_{t-1}^2 + \rho\,(X_t - \mu_t)^T (X_t - \mu_t) \qquad (7)$$

where

$$\rho = \alpha\,\eta(X_t \mid \mu_k, \sigma_k) \qquad (8)$$

which is effectively the same type of causal low-pass filter as mentioned above, except that only the data which matches the model is included in the estimation.

¹ Depending on the kurtosis of the noise, some percentage of the data points "generated" by a Gaussian will not "match". The resulting random noise is easily ignored by neglecting connected components containing only 1 or 2 pixels.
² While this rule is easily interpreted as an interpolation between two points, it is often shown in the equivalent form: $\omega_{k,t} = \omega_{k,t-1} + \alpha(M_{k,t} - \omega_{k,t-1})$.

One of the significant advantages of this method is that when something is allowed to become part of the background, it doesn't destroy the existing model of the background. The original background color remains in the mixture until it becomes the $K$th most probable and a new color is observed. Therefore, if an object is stationary just long enough to become part of the background and then it moves, the distribution describing the previous background still exists with the same $\mu$ and $\sigma^2$, but a lower $\omega$, and will be quickly re-incorporated into the background.

2.2 Background Model Estimation

As the parameters of
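The per-pixel update rules in equations (5)-(8) can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: it assumes grayscale (scalar) pixels, and the function name and default values for the learning rate, the initial variance of a new distribution, and its low prior weight are placeholders chosen for demonstration.

```python
import numpy as np

def update_pixel_model(x, mu, sigma2, w, alpha=0.01, init_var=900.0, low_w=0.05):
    """One on-line update of a single pixel's K-Gaussian mixture (grayscale).

    mu, sigma2, w are length-K arrays of means, variances, and weights.
    Returns the updated arrays and whether x matched an existing Gaussian.
    """
    K = len(w)
    # A match is a value within 2.5 standard deviations of a distribution.
    matches = (x - mu) ** 2 <= (2.5 ** 2) * sigma2
    if matches.any():
        k = int(np.argmax(matches))              # first matching Gaussian
        M = np.zeros(K)
        M[k] = 1.0
        w = (1.0 - alpha) * w + alpha * M        # eq. (5)
        # eq. (8): rho = alpha * eta(x | mu_k, sigma_k), 1-D Gaussian density
        eta = np.exp(-0.5 * (x - mu[k]) ** 2 / sigma2[k]) / np.sqrt(2 * np.pi * sigma2[k])
        rho = alpha * eta
        mu[k] = (1.0 - rho) * mu[k] + rho * x                         # eq. (6)
        sigma2[k] = (1.0 - rho) * sigma2[k] + rho * (x - mu[k]) ** 2  # eq. (7)
        matched = True
    else:
        # Replace the least probable distribution with one centred on x,
        # with an initially high variance and low prior weight.
        k = int(np.argmin(w))
        mu[k], sigma2[k], w[k] = x, init_var, low_w
        matched = False
    w = w / w.sum()                              # re-normalize the weights
    return mu, sigma2, w, matched
```

Note that only the matched component's mean and variance move, which is exactly why an object that briefly joins the background leaves the old background Gaussian intact with its original parameters.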
the mixture model of each pixel change, we would like to determine which of the Gaussians of the mixture are most likely produced by background processes. Heuristically, we are interested in the Gaussian distributions which have the most supporting evidence and the least variance.

To understand this choice, consider the accumulation of supporting evidence and the relatively low variance for the "background" distributions when a static, persistent object is visible. In contrast, when a new object occludes the background object, it will not, in general, match one of the existing distributions, which will result in either the creation of a new distribution or the increase in the variance of an existing distribution. Also, the variance of the moving object is expected to remain larger than a background pixel until the moving object stops. To model this, we need a method for deciding what portion of the mixture model best represents background processes.

First, the Gaussians are ordered by the value of $\omega/\sigma$. This value increases both as a distribution gains more evidence and as the variance decreases. After re-estimating the parameters of the mixture, it is sufficient to sort from the matched distribution towards the most probable background distribution, because only the matched model's relative value will have changed. This ordering of the model is effectively an ordered, open-ended list, where the most likely background distributions remain on top and the less probable transient background distributions gravitate towards the bottom and are eventually replaced by new distributions.

Then the first $B$ distributions are chosen as the background model, where

$$B = \operatorname{argmin}_b \left( \sum_{k=1}^{b} \omega_k > T \right) \qquad (9)$$

where $T$ is a measure of the minimum portion of the data that should be accounted for by the background.
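The selection rule in equation (9) can be sketched as follows. This is an illustrative sketch, not the paper's code; the function name and the default $T$ are placeholders for demonstration.

```python
import numpy as np

def background_components(w, sigma2, T=0.7):
    """Return indices of the background Gaussians per equation (9).

    Components are ordered by omega/sigma (descending), and the first B of
    them whose cumulative weight strictly exceeds T form the background model.
    """
    order = np.argsort(-(w / np.sqrt(sigma2)))   # sort by omega/sigma, best first
    cum = np.cumsum(w[order])
    B = int(np.searchsorted(cum, T, side="right")) + 1  # smallest b with sum > T
    return order[:B]
```

With a small $T$ this typically returns a single component (a unimodal background); a larger $T$ lets a second color (e.g., a swaying branch) into the background model, matching the transparency effect described below.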
This takes the "best" distributions until a certain portion, T, of the recent data has been accounted for. If a small value for T is chosen, the background model is usually unimodal. If this is the case, using only the most probable distribution will save processing.

If T is higher, a multi-modal distribution caused by a repetitive background motion (e.g., leaves on a tree, a flag in the wind, a construction flasher, etc.) could result in more than one color being included in the background model. This results in a transparency effect which allows the background to accept two or more separate colors.

2.3 Connected Components

The method described above allows us to identify foreground pixels in each new frame while updating the description of each pixel's process. These labeled foreground pixels can then be segmented into regions by a two-pass, connected components algorithm [3].

Because this procedure is effective in determining the whole moving object, moving regions can be characterized not only by their position, but by size, moments, and other shape information. Not only can these characteristics be useful for later processing and classification, but they can aid in the tracking process.

2.4 Multiple Hypothesis Tracking

While this section is not essential to the understanding of the background subtraction method, it will allow one to better understand and evaluate the results in the following sections.

Establishing correspondence of connected components between frames is accomplished using a linearly predictive multiple hypotheses tracking algorithm which incorporates both position and size. We have implemented an online method for seeding and maintaining sets of Kalman filters.

At each frame, we have an available pool of Kalman models and a new available pool of connected components that they could explain. First, the models are probabilistically matched to the connected regions that they could explain. Second, the connected regions which could not be sufficiently explained are checked to find new Kalman models. Finally, models whose fitness (as determined by the inverse of the variance of its prediction error) falls below a threshold are removed.

Matching the models to the connected components involves checking each existing model against the available pool of connected components which are larger than a pixel or two. All matches are used to update the corresponding model. If the updated model has sufficient fitness, it will be used in the following frame. If no match is found, a "null" match can be hypothesized, which propagates the model as expected and decreases its fitness by a constant factor.

The unmatched models from the current frame and the previous two frames are then used to hypothesize new models. Using pairs of unmatched connected components from the previous two frames, a model is hypothesized. If the current frame contains a match with sufficient fitness, the updated model is added to the existing models. To avoid possible combinatorial explosions in noisy situations, it may be desirable to limit the maximum number of existing models by removing the least probable models when excessive models exist. In noisy situations (e.g., cameras in low-light conditions), it is often useful to remove the short tracks that may result from random correspondences.

Further details of this method can be found at /projects/vsam/.

3 Results

On an SGI O2 with an R10000 processor, this method can process 11 to 13 frames a second (frame size 160x120 pixels). The variation in the frame rate is due to variation in the amount of foreground present. Our tracking system has been effectively storing tracking information for five scenes for over 16 months [6].
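Returning to Section 2.3: the two-pass connected-components labeling cited from Horn [3] can be sketched as below. This is a minimal 4-connectivity version using union-find to record label equivalences, an illustration under those assumptions rather than the authors' implementation.

```python
def connected_components(mask):
    """Two-pass, 4-connected labeling of a binary foreground mask
    (list of rows of 0/1). Returns the label image and region sizes."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # union-find parents; index 0 is unused (background)

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    next_label = 1
    # Pass 1: assign provisional labels and record equivalences.
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                a, b = find(up), find(left)
                labels[y][x] = min(a, b)
                parent[max(a, b)] = min(a, b)  # merge equivalence classes
            elif up or left:
                labels[y][x] = up or left
            else:
                labels[y][x] = next_label
                parent.append(next_label)
                next_label += 1
    # Pass 2: resolve equivalences and count region sizes.
    sizes = {}
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                r = find(labels[y][x])
                labels[y][x] = r
                sizes[r] = sizes.get(r, 0) + 1
    return labels, sizes
```

Following footnote 1 of the paper, regions whose size is only one or two pixels can then be dropped as noise before tracking.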
Figure 3 shows accumulated tracks in one scene over the period of a day. While quick changes in cloud cover (relative to α, the learning rate) can sometimes necessitate a new set of background distributions, it will stabilize within 10-20 seconds and tracking will continue unhindered.

Because of the stability and completeness of the representation, it is possible to do some simple classification. Figure 4 shows the classification of objects which appeared in a scene over a 10 minute period using a simple binary threshold on the time-averaged aspect ratio of the object. Tracks lasting less than a second were removed.

Every object which entered this scene (in total, 33 cars and 34 people) was tracked. It successfully classified every car except in one case, where it classified two cars as the same object because one car entered the scene simultaneously with another car leaving at the same point. It found only one person in two cases where two people were walking in physical contact. It also double counted 2 objects because their tracks were not matched properly.

Figure 3: This figure shows consecutive hours of tracking from 6am to 9am and 3pm to 7pm. (a) shows the image at the time the template was stored and (b) shows the accumulated tracks of the objects over that time. Color encodes direction and intensity encodes size. The consistency of the colors within particular regions reflects the consistency of the speed, direction, and size parameters which have been acquired.

Figure 4: This figure shows which objects in the scene were classified as people or cars using simple heuristics on the aspect ratio of the observed object. Its accuracy reflects the consistency of the connected regions which are being tracked.

4 Applicability

When deciding on a tracker to implement, the most important information to a researcher is where the tracker is applicable. This section will endeavor to pass on some of the knowledge we have gained through our experiences with this tracker.

The tracking system has the most difficulty with scenes containing high occurrences of objects that visually overlap. The multiple hypothesis tracker is not extremely sophisticated about reliably disambiguating objects which cross. This problem can be compounded by long shadows, but for our applications it was much more desirable to track an object and its shadow and avoid cropping or missing dark objects than it was to attempt to remove shadows. In our experience, on bright days when the shadows are the most significant, both shadowed regions and shady sides of dark objects are black (not dark green, not dark red, etc.).

The good news is that the tracker was relatively robust to all but relatively fast lighting changes (e.g., flood lights turning on, and partly cloudy, windy days). It successfully tracked outdoor scenes in rain, snow, sleet, hail, overcast, and sunny days. It has also been used to track birds at a feeder, mice at night using Sony NightShot, fish in a tank, people entering a lab, and objects in outdoor scenes. In these environments, it reduces the impact of repetitive motions from swaying branches, rippling water, specularities, slow moving objects, and camera and acquisition noise. The system has proven robust to day/night cycles and long-term scene changes. More recent results and project updates are available at /projects/vsam/.

5 Future Work

As computers improve and parallel architectures are investigated, this algorithm can be run faster, on larger images, and using a larger number of Gaussians in the mixture model. All of these factors will increase performance. A full covariance matrix would further improve performance. Adding prediction to each Gaussian (e.g., the Kalman filter approach) may also lead to more robust tracking of lighting changes.

Beyond these obvious improvements, we are investigating modeling some of the inter-dependencies of the pixel processes. Relative values of neighboring pixels and correlations with neighboring pixels' distributions may be useful in this regard. This would allow the system to model changes in occluded pixels by observations of some of its neighbors.

Our method has been used on grayscale, RGB, HSV, and local linear filter responses. But this method should be capable of modeling any streamed input source in which our assumptions and heuristics are generally valid. We are investigating use of this method with frame-rate stereo, IR cameras, and including depth as a fourth channel (R, G, B, D). Depth is an example where multi-modal distributions are useful, because while disparity estimates are noisy due to false correspondences, those noisy values are often relatively predictable when they result from false correspondences in the background.

In the past, we were often forced to deal with relatively small amounts of data, but with this system we can collect images of moving objects and tracking data robustly on real-time streaming video for weeks at a time. This ability is allowing us to investigate future directions that were not available to us in the past. We are working on activity classification and object classification using literally millions of examples [6].
6 Conclusions

This paper has shown a novel, probabilistic method for background subtraction. It involves modeling each pixel as a separate mixture model. We implemented a real-time approximate method which is stable and robust. The method requires only two parameters, α and T. These two parameters are robust to different cameras and different scenes.

This method deals with slow lighting changes by slowly adapting the values of the Gaussians. It also deals with multi-modal distributions caused by shadows, specularities, swaying branches, computer monitors, and other troublesome features of the real world which are not often mentioned in computer vision. It recovers quickly when background reappears and has an automatic pixel-wise threshold. All these factors have made this tracker an essential part of our activity and object classification research.

This system has been successfully used to track people in indoor environments, people and cars in outdoor environments, fish in a tank, ants on a floor, and remote control vehicles in a lab setting. All these situations involved different cameras, different lighting, and different objects being tracked. This system achieves our goals of real-time performance over extended periods of time without human intervention.
Acknowledgments

This research is supported in part by a grant from DARPA under contract N00014-97-1-0363 administered by ONR and in part by a grant jointly administered by DARPA and ONR under contract N00014-95-1-0600.

References

[1] A. Dempster, N. Laird, and D. Rubin. "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, 39 (Series B): 1-38, 1977.

[2] Nir Friedman and Stuart Russell. "Image segmentation in video sequences: A probabilistic approach," In Proc. of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI), Aug. 1-3, 1997.

[3] B.K.P. Horn. Robot Vision, pp. 66-69, 299-333. The MIT Press, 1986.

[4] D. Koller, J. Weber, T. Huang, J. Malik, G. Ogasawara, B. Rao, and S. Russell. "Towards robust automatic traffic scene analysis in real-time." In Proc. of the International Conference on Pattern Recognition, Israel, November 1994.

[5] Christof Ridder, Olaf Munkelt, and Harald Kirchner. "Adaptive Background Estimation and Foreground Detection using Kalman-Filtering," Proceedings of the International Conference on Recent Advances in Mechatronics, ICRAM'95, UNESCO Chair on Mechatronics, 193-199, 1995.

[6] W.E.L. Grimson, Chris Stauffer, Raquel Romano, and Lily Lee. "Using adaptive tracking to classify and monitor activities in a site," In Computer Vision and Pattern Recognition 1998 (CVPR 98), Santa Barbara, CA, June 1998.

[7] Wren, Christopher R., Ali Azarbayejani, Trevor Darrell, and Alex Pentland. "Pfinder: Real-Time Tracking of the Human Body," In IEEE Transactions on Pattern Analysis and Machine Intelligence, July 1997, vol. 19, no. 7, pp. 780-785.
IP Camera Surveillance Software: IP Surveillance — Recording and Playback

After installing IP Surveillance and successfully adding cameras, you can use IP Surveillance to record, and afterwards play the recordings back.

Recording

Step 1: Set the recording parameters.
1. Open the IP Surveillance main interface and click Config.
2. Enter the system settings page.
3. Set the recording storage path.
In the storage settings window, double-click the storage path, then click the button at the end of the path field to change it, as shown in the figure below. (Recordings are stored on drive C by default; if drive C has enough space, you can leave this unchanged.)
Keep recorded data: the retention period for recordings; recordings older than the retention period are deleted.
Automatic disk-space cycling: the cycle range is the span of recording data deleted each time; whenever a recording reaches its retention limit, one cycle range of expired recordings is automatically deleted.
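The retention and loop-deletion behavior described above can be illustrated with a small sketch. The 30-day retention period, the one-hour cycle range, and the flat-directory layout are assumptions chosen for illustration; the actual IP Surveillance implementation is not documented here.

```python
import os
import time

RETENTION_DAYS = 30   # "keep recorded data" period -- assumed value
CYCLE_HOURS = 1       # "cycle range" deleted per pass -- assumed value

def expired_batch(mtimes, now):
    """Given file modification times (seconds), return the indices of the
    recordings to delete this pass: once the oldest file has passed the
    retention period, one cycle range of expired files, oldest first."""
    cutoff = now - RETENTION_DAYS * 86400
    order = sorted(range(len(mtimes)), key=lambda i: mtimes[i])
    if not order or mtimes[order[0]] >= cutoff:
        return []                        # nothing has expired yet
    window_end = mtimes[order[0]] + CYCLE_HOURS * 3600
    limit = min(cutoff, window_end)      # never delete unexpired files
    return [i for i in order if mtimes[i] < limit]

def cleanup(record_dir):
    """Apply one deletion pass to a recording directory."""
    names = sorted(os.listdir(record_dir))
    paths = [os.path.join(record_dir, n) for n in names]
    for i in expired_batch([os.path.getmtime(p) for p in paths], time.time()):
        os.remove(paths[i])
```

The point of deleting one cycle range at a time, rather than everything past the cutoff, is to spread disk activity across passes while keeping free space roughly constant.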
Step 2: Set up the recording schedule.
After the recording parameters are set, return to the main interface and click Schedule.
1. Select a camera.
2. Edit the recording time bar; the red line at the upper right indicates the recording time, and the default is all-day recording.
3. You can also click Settings to configure the recording schedule in more detail.
Note: the recording mode defaults to the basic mode (preset state); you can also choose other modes as needed.
Load: select an application mode for a common scenario; a time bar is generated according to the chosen mode.
Changing the recording time: drag the start or end point of the time bar, or double-click the time bar or click Settings for the corresponding entry.
4. After setting one camera's recording schedule, click Copy To at the upper left to copy the settings to other cameras' recording schedules.
5. Click OK at the lower right to return to the main interface.

Step 3: Start the scheduled recording system.
With the recording schedule in place, click Start on the main interface and select Start Scheduled Recording to begin recording.
While recording, a red recording indicator appears at the upper left of the image in the main interface.

Playback

Step 1: On the IP Surveillance main interface, click Playback to open the playback window.
Step 2: Click Open Record at the upper left to open the historical recording data. (This covers playback of recordings on the local machine; remote playback will be covered in a later document.)
Step 3: Select the recording data to play back.
⚫ Built-in 30X HD integrated camera module
⚫ Progressive-scan CMOS sensor
⚫ ICR infrared filter with automatic switching for true day/night surveillance
⚫ Starlight-level ultra-low illumination
⚫ Supports multi-frame composite wide dynamic range, with a maximum dynamic range of 132 dB
⚫ Built-in efficient infrared lamps (wavelength 850 nm) to ensure stable long-term use and reduce maintenance cost
⚫ Night vision distance up to 200 meters
⚫ Flexible ways of turning on the infrared lamps to meet diverse surveillance environment demands
⚫ Infrared power adjusted automatically based on dome drive zooming, or manually, to optimize lighting effects at night
⚫ HD network video output: 1920x1080 @ 60 fps
⚫ Smart functions for border protection (wire cross, intrusion)
⚫ Supports intelligent tracking
⚫ H.265+M-JPEG or H.264+M-JPEG video encoding, three streams
⚫ Supports dual SD cards
⚫ Supports alarm recording and alarm snapshots
⚫ Bi-directional audio; G.711-A/G.711-U/G.726/AAC standards optional
⚫ Two alarm inputs and one alarm output
⚫ Supports motion detection, with 4 configurable detection areas
⚫ Supports multiple ways to trigger alarms, such as I/O input, motion detection, intelligent analysis, SD card removal, network disconnection, heartbeat loss, etc.; supports flexible alarm-associated configurations, such as I/O output, email, FTP uploading, audio, SD card snapshot, and SD card recording
⚫ Supports Region of Interest (ROI), with 8 configurable regions
⚫ Allows multiple users to perform live access and parameter settings via Web server
⚫ Supports preset, autopan, pattern, autoscan, time tour, normal tour, power-on return, etc.
⚫ Compatible with Infinova digital video surveillance software and convenient to integrate with other video surveillance software
⚫ Supports ONVIF Profile Q, S, G & T standards
⚫ Standard SDK, easy to integrate with other digital systems
⚫ Supports RS485 control and analog video output for easy debugging
⚫ IP67 protection rating; built-in heater and air circulation system to avoid icing
⚫ With window wiper
⚫ Supports water-proof breather valve
⚫ IK10
⚫ Supports remote network upgrade

VT231-B2 series HD IR IP dome cameras provide high-quality video at resolutions up to 1920x1080 @ 60 fps, adopt H.265/H.264/M-JPEG compression formats, and offer excellent resolution and color reproduction, ensuring richer and more accurate detail and more effective, accurate intelligent analysis.

This product adopts high-power LED infrared lamps with an infrared wavelength of 850 nm, long night vision distance up to 200 m, and strong illumination. The IR lamps can turn on or off automatically based on environmental lighting conditions or can be adjusted manually. The IR illumination allows flexible adjustment, reducing IR lamp heat and extending its service life.

The user-friendly GUI design allows users to perform dome PTZ control easily via the network and to configure detailed camera parameter settings. At the Web interface, users can perform dome camera settings and operations with a mouse, which is more convenient than traditional keyboard control. It also supports area zoom and image PTZ functions.

VT231-B2 series dome cameras also feature general dome functions such as preset, pattern, autopan, autoscan, time tour, and normal tour.

VT231-B230-C061: HD IR IP dome camera, 2.0 Mpx, 30X, 1/2" CMOS, day/night, H.265/H.264/MJPEG, with audio alarm, with window wiper, outdoor, bracket mount, PoE/24VDC/24VAC
VT231-B230-D061: HD IR IP dome camera, 2.0 Mpx, 30X, 1/2.8" CMOS, day/night, H.265/H.264/MJPEG, with audio alarm, with window wiper, outdoor, bracket mount, PoE/24VDC/24VAC

Accessories
V1761K: Wall Mount, bayonet, 10 inches
V1762K: Corner Mount, bayonet, 10 inches
V1763K: Pole-side Mount, bayonet, 10 inches
LAS60-57CN-RJ45-F: PoE power sourcing equipment, 100-240VAC input, 60W output

Dimensions (unit: inch; values in parentheses are mm). Mounting options: pole mounting, wall mounting, corner mounting.
Video Tape

Video tape is a magnetic storage medium made of magnetic tape; it can be used to record video, audio, and other data.
Video tape is usually wound on a reel, and its length can reach several hundred meters.
It contains a series of small magnetic regions whose surfaces are coated with iron particles, which record changes in electrical current and thereby record the data.
When current passes through the tape, it leaves a magnetic image on the tape surface; this image can be replayed in a player, which means video tape can be used to store and play back video information.
Video tape recording was first developed in 1956 by a team at Ampex led by Charles Ginsburg; the invention made much video recording work simple and fast.
With the development of electronic video technology, video tape has also changed enormously.
Today, tape can record much more than video alone; it can also store data in other formats, such as images and text.
Recording media can be divided into two formats: magnetic tape and optical disc.
Tape can be used to store video information, but its storage capacity is much smaller than an optical disc's, and tape playback is slower than disc playback.
Optical discs, by contrast, can store large amounts of data and play back faster than tape.
Video tape has a very wide range of applications: it can be used to record television programs, films, concerts, and other events, and it can also be used for data storage, such as recording the steps of computer operating procedures.
Video tape remains a popular way to record, store, and play back media, although its range of use has shrunk now that more storage media, such as CD-ROMs and DVDs, are available.
Even so, tape is still a preferred medium for recording video information, because it can record and play back large amounts of information in a short time, store complex data, and offer good protection and reliability.
Key Features

Live View and Playback
● Up to 256 channels of simultaneous live view
● Custom window division configurable
● Viewing maps and real-time events during live view and playback
● Adding tags during playback and playing tagged video
● Transcoded playback, frame-extracting playback, and stream type self-adaptive
● Fisheye dewarping
● Visual tracking

Recording and Storage
● Recording schedule for continuous recording, event recording, and command recording
● Storing videos on encoding devices, Hybrid SANs, cloud storage servers, pStors, or in pStor cluster service
● Providing main storage and auxiliary storage
● Providing video copy-back
● Storing alarm pictures on NVRs, Hybrid SANs, cloud storage servers, pStors, or the HikCentral server

Event Management
● Camera linkage, alarm pop-up window, and multiple linkage actions
● Multiple events for video surveillance, access control, resource group, resource maintenance, etc.

Person and Visitor Management
● Getting person information from added devices
● Provides multiple types of credentials, including card number, face, and fingerprint, for composite authentications
● Visitor registration and check-out

Access Control, Elevator Control, and Video Intercom
● Setting schedules for free-access status and access-forbidden status of doors or floors
● Supports multiple access modes for both card reader authentication and person authentication
● Setting access groups to relate persons, templates, and access points, which defines the access levels of different persons
● Supports advanced functions such as multi-factor authentication, anti-passback, and multi-door interlocking
● Controlling door or floor status in real time
● Calling an indoor station from the Control Client
● Calling the platform from a door station or indoor station, and answering the call on the Control Client

HikCentral Professional is a flexible, scalable, reliable, and powerful central surveillance system. It can be delivered pre-installed on a server. HikCentral Professional provides central management, information sharing, convenient connection, and multi-service cooperation. It is capable of adding devices for management, live view, storage and playback of video files, alarm linkage, access control, time and attendance, facial identification, and so on.

Time and Attendance
● Setting different attendance rules for various scenarios, such as one-shift and man-hour shift
● Customizing overtime levels and setting corresponding work-hour rates
● Supports flexible and quick settings of timetables and shift schedules
● Supports multiple types of reports according to different needs, and sending reports to specified emails regularly
● Sending the original attendance data to a third-party database, so the client can access third-party T&A and payment systems

Supported Database Types and Versions
● Microsoft® SQL Server: 2008 R2 and above
● PostgreSQL: 9.6.2 and above
● MySQL: 8.0.11 and above
● Oracle: 12.2.0.1 and above

Security Control
● Real-time alarm management for added security control panels
● Adding a zone as a hot spot on the E-map and viewing the video of the linked camera
● Event and alarm linkage with added cameras, including pop-up live view and captured pictures
● Subscribing to the events that the Control Client can display in real time
● Acknowledging received alarms on the Control Client

Entrance and Exit Control
● Managing parking lots, entrances and exits, and lanes; supports linking an LED screen with a lane for information display
● Setting entry and exit rules for vehicles in the vehicle lists as well as vehicles not in any vehicle list
● Entrance and exit control based on license plate recognition, card, or video intercom
● Viewing real-time and history vehicle information and controlling the barrier gate manually on the Control Client

Temperature Screening
● Displaying the skin temperature and mask-wearing status of recognized persons in real time
● Triggering events and alarms when abnormal temperature or no mask worn is detected
● Viewing reports about skin-surface temperature and mask-wearing

Face and Body Recognition
● Displaying the information of recognized persons in real time
● Searching history records of recognized persons, including searching in captured pictures, searching matched persons, searching by person features, and searching frequently appeared persons

Intelligent Analysis
● Supports setting resource groups and analyzing data by different groups
● Supports intelligent analysis reports including people counting, people density analysis, queue analysis, heat analysis, pathway analysis, person feature analysis, temperature analysis, and vehicle analysis
● Displaying the number of people in specified regions in real time

Network Management
● Managing network transmission devices such as switches, displaying the network connection and hierarchical relationship of the managed resources in a topology
● Viewing the network details between device nodes in the topology, such as downstream and upstream rate and port information, and checking the connection path
● Exporting the topology and abnormal data to check device connection status and health status

Software Specification

The following table shows the maximum performance of the HikCentral Professional server.
For other detailed data and performance, refer to Software Requirements & Hardware Performance.

Devices and Resources
- Cameras: Centralized Deployment: 3,000①; Distributed Deployment: 10,000②; Central System (RSM): 100,000③
- Managed Device IP Addresses (including encoding devices, access control devices, elevator control devices, security control devices, and Remote Sites): Centralized Deployment: 1,024①; Distributed Deployment: 2,048②
- Video Intercom Devices: 1,024
- Alarm Inputs (including zones of security control devices): 3,000
- Alarm Outputs: 3,000
- Dock Stations: 1,500
- Security Radars and Radar PTZ Cameras: 30
- Alarm Inputs of Security Control Devices: 2,048
- DS-5600 Series Face Recognition Terminals When Applied with Hikvision Turnstiles: 32
- Recording Servers: 64
- Streaming Servers: 64
- Security Audit Servers: 8
- DeepinMind Servers: 64
- ANPR Cameras: 3,000
- People Counting Cameras: recommended 300
- Heat Map Cameras: recommended 70
- Thermal Cameras: recommended 20④
- Queue Management Cameras: recommended 300
- Areas: 3,000
- Cameras per Area: 256
- Alarm Inputs per Area: 256
- Alarm Outputs per Area: 256
- Resource Groups: 1,000
- Resources in One Resource Group: 64

Recording
- Recording Schedules: 10,000
- Recording Schedule Templates: 200

Event & Alarm
- Event and Alarm Rules: Centralized Deployment: 3,000; Distributed Deployment: 10,000; Central System (RSM): 10,000
- Storage of Events or Alarms without Pictures: Centralized Deployment: 100/s; Distributed Deployment: 1,000/s
- Events or Alarms Sent to Clients (Control Clients and Mobile Clients): 120/s; 100 clients/s
- Notification Schedule Templates: 200

Picture
- Picture Storage (including event/alarm pictures, face pictures, and vehicle pictures): 20/s (stored on SYS Server); 120/s (stored on Recording Server)

Reports
- Regular Report Rules: 100
- Event or Alarm Rules in One Event/Alarm Report Rule: 32
- Records in One Sent Report: 10,000 or 10 MB
- Resources Selected in One Report: 20

Records
- People Counting: 5 million
- Heat Map: 0.25 million
- ANPR: 60 million
- Events: 60 million
- Alarms: 60 million
- Access Records: 1.4 billion
- Attendance Records: 55 million
- Visitor Records: 10 million
- Operation Logs: 5 million
- Service Information Logs: 5 million
- Service Error Logs: 5 million
- Recording Tags: 60 million

Users and Roles
- Concurrent Accesses via Web Clients, Control Clients, and OpenAPI Clients: 100
- Concurrent Accesses via Mobile Clients and OpenAPI Clients: 100
- Users: 3,000
- Roles: 3,000

Vehicle (ANPR)
- Vehicle Lists: 100
- Vehicles per Vehicle List: 5,000
- Under Vehicle Surveillance Systems: 4
- Vehicle Undercarriage Pictures: 3,000

Entrance & Exit
- Lanes: 8
- Cards Linked with Vehicles: 250,000
- Vehicle Passing Frequency in Each Lane: 1 vehicle/s

Face Comparison
- Persons with Profiles for Face Comparison: 1,000,000
- Face Comparison Groups: 64
- Persons in One Face Comparison Group: 1,000,000

Access Control
- Persons with Credentials for Access Control: 50,000
- Visitors: 10,000
- Total Credentials (Card + Fingerprint): 250,000
- Cards: 250,000
- Fingerprints: 200,000
- Profiles: 50,000
- Access Points (Doors + Floors): 1,024
- Access Groups: 512
- Persons in One Access Group: 50,000
- Access Levels: 512
- Access Schedules: 32

Time and Attendance
- Persons for Time and Attendance: 10,000
- Attendance Groups: 256
- Persons in One Attendance Group: 10,000
- Shift Schedules: 128
- Major Leave Types: 64
- Minor Leave Types of One Major Type: 128

Smart Wall
- Decoding Devices: 32
- Smart Walls: 32
- Views: 1,000
- View Groups: 100
- Views in One View Group: 10
- Cameras in One View: 150
- Views Auto-Switched Simultaneously: 32

Streaming Server's Maximum Performance

Notes:
①: For one site, the maximum number of the added encoding devices, access control devices, security control devices, and video intercom devices in total is 1,024. If the number of manageable cameras (including cameras directly added to the site and cameras connected to these added devices) exceeds 3,000, the exceeded cameras cannot be imported to the areas.
②: For one site with the Application Data Server deployed independently, the maximum number of the added encoding devices, access control devices, and security control devices in total is 2,048. If the number of manageable cameras (including cameras directly added to the system and cameras connected to these added devices) exceeds 10,000, the exceeded cameras cannot be imported to the areas.
③: For one site, if the number of manageable cameras (including cameras managed on the current site and cameras from the Remote Sites) in the Central System exceeds 100,000, the exceeded cameras cannot be managed in the Central System.
④: This recommended value refers to the number of thermal cameras connected to the system directly. It depends on the maximum performance (data processing and storage) when the managed thermal cameras upload temperature data to the system. For thermal cameras connected to the system via an NVR, there is no such limitation.

Hardware Specification
- Processor: Intel® Xeon® E-2124
- Memory: 16 GB; DDR4 DIMM slots, supports UDIMM up to 2666 MT/s, 64 GB max.; supports registered ECC
- Storage Controllers: internal controller: SAS_H330; software RAID: PERC S140; external HBAs: 12 Gbps SAS HBA (non-RAID); boot-optimized storage subsystem: 2x M.2 240 GB (RAID 1 or no RAID), 1x M.2 240 GB (no RAID only)
- Drive Bays: 1 TB 7.2K SATA x2
- Power Supply: single 250 W (Bronze) power supply
- Dimensions: form factor: rack (1U); chassis width: 434.00 mm (17.08 in); chassis depth: 595.63 mm (23.45 in) (3.5" HDD). Note: these dimensions do not include the bezel or a redundant PSU
- Dimensions with Package (W x D x H): 750 mm x 614 mm x 259 mm (29.53" x 24.17" x 10.2")
- Net Weight: 12.2 kg; Weight with Package: 18.5 kg
- Embedded NIC: 2x 1 GbE LOM Network Interface Controller (NIC) ports
- Device Access: front ports: 1x USB 2.0, 1x iDRAC micro USB 2.0 management port; rear ports: 2x USB 3.0, VGA, serial connector
- Embedded Management: iDRAC9 with Lifecycle Controller, iDRAC Direct, iDRAC RESTful API with Redfish
- Integrations: Microsoft® System Center, VMware® vCenter™, BMC TrueSight (available from BMC), Red Hat Ansible
- Connections: Nagios Core & Nagios XI, Micro Focus Operations Manager i (OMi), IBM Tivoli Netcool/OMNIbus
- Operating System: Microsoft Windows Server® with Hyper-V

System Requirements

* For high stability and good performance, the following system requirements must be met.
- OS for HikCentral Professional Server: Microsoft® Windows 7 SP1 (64-bit); Windows 8.1 (64-bit); Windows 10 (64-bit); Windows Server 2008 R2 SP1 (64-bit); Windows Server 2012 (64-bit); Windows Server 2012 R2 (64-bit); Windows Server 2016 (64-bit); Windows Server 2019 (64-bit). *For Windows 8.1 and Windows Server 2012 R2, make sure the rollup (KB2919355) updated in April 2014 is installed.
- OS for Control Client: Microsoft® Windows 7 SP1 (32/64-bit); Windows 8.1 (32/64-bit); Windows 10 (64-bit); Windows Server 2008 R2 SP1 (64-bit); Windows Server 2012 (64-bit); Windows Server 2012 R2 (64-bit); Windows Server 2016 (64-bit); Windows Server 2019 (64-bit). *For Windows 8.1 and Windows Server 2012 R2, make sure the rollup (KB2919355) updated in April 2014 is installed.
- OS for Visitor Terminal: Android 7.1 and later
- Browser Version: Internet Explorer 10/11 and above; Chrome 61 and above; Firefox 57 and above; Safari 11 and above (running on Mac OS X 10.3/10.4)
- Database: PostgreSQL V9.6.13
- OS for Smartphone: iOS 10.0 and later; Android OS 5.0 or later with a dual-core 1.5 GHz CPU or above and at least 2 GB RAM
- OS for Tablet: iOS 10.0 and later; Android tablets with Android OS 5.0 and later
- Virtual Machine: VMware® ESXi™ 6.x; Microsoft® Hyper-V with Windows Server 2012/2012 R2/2016 (64-bit). *The Streaming Server and Control Client cannot run on a virtual machine. *Virtual server migration is not supported.

Typical Application
IP Camera Tester IPC-8600 Series (VII Generation)

7-inch capacitive touch screen / ONVIF / SDI camera / analog camera / CVI camera / TVI camera / AHD camera test / WiFi / 12V 2A power output / 5V 2A power bank / DC48V PoE power supply / HDMI output / video level meter / IP camera viewer / PTZ control / ping test / link test / cable tracer / IP discovery (auto-search of the whole network segment) / Rapid ONVIF (auto-login and display of the camera image, quick activation of Hikvision cameras). Supports testing more than 80 IP camera brands (Hikvision, Dahua, Honeywell, Samsung, etc.).

The 7-inch touch screen IP camera tester is for maintenance and installation of IP cameras and analog cameras. It displays HD camera and analog camera images, provides PTZ control, and is easy to use and operate. Built-in network testing tools (IP address search, ping, etc.) quickly check IP camera problems. Cable scan and TDR testing make it easy to check network cable and BNC cable problems, while the optical power meter and visual fault locator functions effectively solve optical fiber transmission problems.

The new IP camera tester works for up to 16 hours, has built-in WiFi, and tests wireless IP, analog, and SDI cameras. It provides 12V 2A power output for cameras, DC48V PoE power supply, 5V 2A power output (as a 13000 mAh power bank), HDMI output, audio input and output, and network bandwidth testing. The OSD menu can be rotated 180 degrees by manual setting, which is very convenient when the unit is turned upside down to connect the LAN cable during testing.

Application
- CCTV system installation and maintenance
- Network cabling project installation and maintenance
- Dome camera and IP camera testing
- Video transmission channel testing
- PTZ controller

Features
1. 7-inch touch screen, 1024x600 resolution (NEW)
2. Capacitive touch screen, OSD menu
3. Touch screen and key operation
4. Built-in WiFi, wireless network camera test (NEW)
5. AHD/CVI/TVI camera test: snapshot, video record and playback, zoom (NEW) (*optional)
6. IP discovery: no need to know the first two digits of the camera's IP address; the tester auto-scans the whole network segment and auto-modifies its own IP address
7. Rapid ONVIF: searches cameras quickly, auto-logs in and displays the image from the camera, activates Hikvision cameras
8. Supports 1080P cameras
9. SDI camera test (*optional): SDI digital video surveillance testing, SDI input; supported resolutions: 1280x720P 25 Hz / 30 Hz / 50 Hz / 60 Hz, 1920x1080P 25 Hz / 60 Hz, 1920x1080I 50 Hz / 60 Hz
10. Digital camera image test and video image zoom, record, screen snapshot, photo viewer and playback
11. Currently supports ONVIF and works with more than 80 camera brands, such as Dahua, Hikvision, KEDA, Samsung, and Tiandy
12. If the IP camera manufacturer offers video management software compatible with a mobile phone or tablet PC, installing that software on the tester enables it to display IP camera images via the IP camera viewer
13. OSD menu rotates 180 degrees (NEW)
14. Displays the real resolution of IP camera images
15. Video screen snapshot and video recording; files are named and saved, so they are easy to find
16. Supports network PTZ control (ONVIF)
17. Supports 1080P HD video files and MKV/MP4 media file playback
18. PTZ control, photograph, video record, record playback
19. Video level meter: video peak level, sync level, and color burst measurement (NEW)
20. Optical power meter: measures the optical power value and fiber loss (*optional)
21. Visual fault locator: tests fiber bending and breakage (*optional)
22. TDR cable test: tests cable length and short circuits (*optional)
23. Digital multimeter: voltage, current, resistance and capacitance measurement, continuity testing, diode testing (*optional)
24. Color bar generator: analog video color bar and test image output (NEW)
25. PoE voltage measurement, ping test, IP address scan, port flashing, etc.
26. PoE DC48V power output, max power 24W (NEW)
27. 12V 2A power output (NEW)
28. 5V 2A power output, usable as a power bank (NEW)
29. HDMI output, 1920x1080P resolution (NEW)
30. Audio in/out
31. LED lamp, calculator, music player, and other application tools
32. Supports user self-update of the software
33. Lithium-ion polymer battery; working time lasts 16 hours

"*" marks optional functions.

Specifications
- Model: IPC-8600 (items marked * are optional)
- Display: 7-inch capacitive touch screen, resolution 1024 (RGB) x 600
- Network port: 10/100M auto-adapting, RJ45
- SDI camera test (*optional): 1-channel SDI BNC input; supports 720p 60 fps / 1080p 60 fps / 1080i 60 fps
- TVI camera test (*optional): supports 720p 25/30/50/60 fps and 1080p 25/30 fps; camera OSD menu control over coaxial cable
- CVI camera test (*optional): supports 720p 25/30/50/60 fps and 1080p 25/30 fps; camera OSD menu control over coaxial cable
- AHD camera test (*optional): supports AHD 2.0; 1080p 25/30 fps and 720p 25/30 fps; camera OSD menu control over coaxial cable
- WiFi: built-in WiFi, speed 150M, displays wireless camera images
- IP camera types: ONVIF; ACTi, Dahua IPC-HFW2100P, Hikvision DS-2CD864-E13, Samsung SNZ-5200, Tiandy TD-NC9200S2, Kodak IPC120L, Honeywell HICC-2300T, Aipu-waton IP5000-BC-13MP/IRS06-13MP, fine-Tida IPC, FSJ BY-1080Q, WEISKY IPC cameras, etc. Customization welcome
- Video input/output: 1-channel BNC input and 1-channel BNC looped output, NTSC/PAL (auto-adapting)
- Video level test: video signals measured in IRE or mV
- Video level meter: peak video signal level, sync signal level, and color bar chroma level measurement
- Zoom image: supports analog camera and IP camera image zoom/move
- Snapshot, video record and playback: image screen snapshot, record, save, view, and playback
- HDMI output: 1-channel HDMI output, supports 1920x1080P
- 12V/2A power output: outputs DC12V/2A power for cameras
- 5V power output: 5V 2A power output, usable as a power bank
- PoE power output: 48V PoE power output, max power 24W
- Audio test: 1-channel audio signal input to test whether sound is normal; 1-channel audio output to connect headphones
- PTZ control: supports RS232/RS485 control, baud rate 600-115200 bps; compatible with more than 30 protocols such as Pelco-D/P, Samsung, Panasonic, Lilin, Yaan, etc.
- Color bar generator: outputs one channel of PAL/NTSC color bar video signal (red, green, blue, white, and black) for testing monitors or video cables
- UTP cable tester: tests UTP cable connection status and displays the result on the screen; read the number on the screen
- Data monitor: captures and analyzes the command data from the controlling device; can also send hexadecimal data
- Network test: IP address scan, link test, ping test; quickly finds the IP addresses of connected IP cameras and other devices
- Cable scan: searches for the cable by audio signal
- PoE test: measures PoE switch or PSE power supply voltage and cable connection status
- Digital multimeter (*optional): AC/DC voltage, AC/DC current, resistance, capacitance, data hold, relative measurement, continuity testing
Testing speed: 3 times/ seconds,Data range -6600~+6600.Optical power meter (Optional)* Calibrated Wavelength(nm) :850/1300/1310/1490/1550/1625nm Power range(dBm) :-70~+10dBmVisual fault locator(optional)*Test fiber’s bending and breakage ( SM and MM fiber)TDR cable test(optional) *Cable length and short circuit measurement(BNC cable, Coaxial cable, Cat5/6, telephone cable) POWERExternal powersupplyDC 12V 2ABattery Built-in 7.4V Lithium polymer battery ,6500mAhRechargeable After charging 7~8 hours, normal working time 16 hoursParameterOperation setting Capacitive touch screen, OSD menu,Chinese/EnglishAuto off 1-30 (mins)GeneralWorkingTemperature-10℃---+50℃Working Humidity 30%-90%Dimension/Weight 231mm x 172mm x 52mm / 1.26Kg。
Video Surveillance: A Distributed Approach to Protect Privacy (Update: 2007/09/12)

Martin Schaffer and Peter Schartner
University of Klagenfurt, Austria
Computer Science - System Security
{m.schaffer, p.schartner}@syssec.at

Abstract. The topmost concern of users who are kept under surveillance by a CCTV system is the loss of their privacy. To gain high acceptance by the monitored users, we have to assure that the recorded video material is only available to a subset of authorized users under exactly previously defined circumstances. In this paper we propose a CCTV video surveillance system providing privacy in a distributed way using threshold multi-party computation. Due to the flexibility of the access structure, we can handle the problem of losing private-key shares that are necessary for reconstructing video material, as well as adding new users to the system. If a pre-defined threshold is reached, a shared update of the master secret and the corresponding re-encryption of previously stored ciphertext without revealing the plaintext is provided.

1 Introduction

The major concern of users monitored by a CCTV video surveillance system is the loss of their privacy. It is obvious that encrypting the recorded material raises acceptance by the monitored users. But what about unauthorized decryption? In this paper we propose several mechanisms concerning the setup, the recording and the retrieval of videos, and the key management during all these phases. Some of the mechanisms involve multi-party computation (MPC, cf. [18,6,8]), so that we can enforce dual control. A trusted third party (TTP) may also be used to enforce dual control. But if this TTP is compromised, a single unauthorized person may be able to decrypt the whole video material.
The main requirements for our system include:
- Privacy protection of the monitored users.
- Shared generation and update of keys and key components.
- Tree-based access structure to provide a mechanism for substitution.
- Dual control (4-eyes principle) within the video retrieval process.
- Minimal access of authorized people to monitored information.

Several papers about video surveillance exist, most of which focus on the ability to detect and identify moving targets. Only a few discuss the privacy protection of recorded material. The authors in [9], for example, describe a cooperative, multi-sensor video surveillance system that provides continuous coverage over battlefield areas. In [10] the emphasis lies on face recognition; a solution to protect privacy by de-identifying facial images is given. A similar approach can be found in [2], concentrating on videos in general. The most general solution to cover privacy with respect to monitored targets seems to be presented in [16]. However, the system in [16] uses a privacy-preserving video console providing access control lists. Once the console has been compromised, videos may be decrypted without any restrictions.

In our paper we focus on video surveillance where real-time reactions are not necessary, as in private organisations where staff has to be monitored. In case of criminal behaviour, recorded video material can be decrypted if sufficiently many instances agree; this, e.g., is not provided in [10]. Our approach can certainly be combined with the general solution proposed in [16], but it can also be used for other applications such as key escrow as proposed in [15].

Fig. 1. System Architecture

Figure 1 shows the system architecture of the proposed system, including all components and interactions within the setup phase, recording phase, retrieval phase and key management. Computations within dotted boxes represent MPCs, whereas undotted boxes are performed by a single instance. Labelled arrows show which instance(s) deliver(s) or receive(s) which value(s). The proposed system employs
the following hardware components and users:

Video Cameras. According to the monitored processes, video cameras record either single pictures or videos. Since we want to guarantee the privacy of the monitored users, we have to encrypt the video material. After encrypting the video, it is sent to the video server S_V, whereas the key used for this encryption is (encrypted and) sent to the key server S_K.

Video Server S_V. Here the received encrypted video material is stored in a suitable database so that it can be easily retrieved if required.

Key Server S_K. Keys that are used for encrypting videos are chosen interval-wise (so we provide minimal access to videos). Therefore, we call them interval-keys (ikeys) and store them encrypted at S_K.

Users. In this paper, the term user always means an instance retrieving videos, not a person monitored by the system. Within strict regulations (e.g. dual control), these users are authorized to decrypt the stored videos. Due to the fact that enterprises are often hierarchically structured, we have to provide easy deputy mechanisms. If a user is not available, he may simply be replaced by a qualified set of users of the next lower level in the hierarchy.

Smartcards. In order to enforce the cooperation of several users in the decryption process, the corresponding private key d is shared among a group of users. Hence, each user holds a share of the private key, which is stored in a PIN-protected smartcard. Note that the private key d is never available in the system, nor present in intermediate results of the decryption process.

The proposed system consists of the following procedures:

Setup Phase. To initialize the system, the users perform an MPC which provides each user with a random share of the private key d in a fair way. Additionally, the users generate shares of the corresponding public key. These shares are sent to the video cameras, which reconstruct the public key e.
Recording Phase. For performance reasons, the recorded video material is encrypted by use of a hybrid cryptosystem. Hence, each video camera has to hold the public key e. The ikey is then encrypted by use of an asymmetric scheme (employing the public key e) and is finally sent to S_K, whereas the symmetrically encrypted video is sent to S_V.

Retrieval Phase. Within the retrieval phase, the authorized users perform an MPC which provides them with the ikey of the specific interval, without direct usage of the private key d (which corresponds to the public key e).

Key Management. In order to take part in the system, a new user has to retrieve a share of the private key d. To achieve this, the users already enrolled in the system perform an MPC which finally provides the new user with his share. Since the decryption process involves threshold cryptography, some smartcards (and the shares stored there) may be lost without any danger for the privacy of the stored video material. Additionally, we employ a mechanism which regularly updates the remaining shares (without changing the shared private key) and hence makes the shares on the lost (or stolen) smartcards useless. Finally, we propose a mechanism to perform a shared update of d and the corresponding public key e. It is obvious that in this case all encrypted ikeys have to be re-encrypted, whereas the encrypted video material remains unchanged, since the ikeys have not been compromised.

In the remainder of the paper we will give a more formal description of the processes briefly discussed by now. Note that within the proposed mechanisms we will only care about passive adversaries (see [6,8]) from inside the system, and we will assume that there exist pair-wise protected links between the individual parties participating in the surveillance process.

2 Preliminaries

Throughout this work let $G_q$ be a group of prime order q, in which the Discrete Logarithm Problem and closely related problems are believed to be hard. Moreover, let g be a generator of $G_q$. For computations in $\mathbb{Z}_q$ we omit writing "mod q", since this will be clear from the context. The direct successors of the root of the tree-based access structure are called first-level users and are united in the set U. To reduce complexity, we will only consider one video camera, called VC.

2.1 Shamir's Secret Sharing

To share a secret $s \in \mathbb{Z}_q$ among n users, resulting in the shares $s_1,\dots,s_n$ (denoted by $s \to (s_1,\dots,s_n)$), we use the following t-degree polynomial according to [17]:

$s_i = g(i), \qquad g(x) = s + \sum_{j=1}^{t} r_j x^j, \qquad r_j \in_R \mathbb{Z}_q^*$   (1)

In order to reconstruct the secret s (denoted by $(s_1,\dots,s_n) \to s$) we need at least t+1 shares, because there are t+1 unknown values in a t-degree polynomial. For efficiency reasons we use the interpolation formula of Lagrange (see e.g. [12]):

$s = g(0), \qquad g(x) = \sum_{i=1}^{n} s_i \lambda^s_{x,i}, \qquad \lambda^s_{x,i} = \prod_{j=1,\, j \neq i}^{n} (x-j)(i-j)^{-1}$   (2)

Several computations in the upcoming sections use the following transformation:

$z = y^s \overset{(2)}{=} y^{\sum_{i=1}^{n} s_i \lambda^s_{0,i}} = \prod_{i=1}^{n} y^{s_i \lambda^s_{0,i}}$   (3)

2.2 Symmetric Cryptosystem

Recording videos produces a lot of data. Hence, we apply a symmetric algorithm (e.g. AES, see [1]) to encrypt the video material. We simply define the encryption function $E_S(m,k) = c$ and the decryption function $D_S(c,k) = m$.

2.3 ElGamal Cryptosystem

We suppose that the reader is familiar with the basic ElGamal cryptosystem [4]. Assuming the key generation has already taken place, resulting in the public key $e \in G_q$ and the private key $d \in \mathbb{Z}_q$, the encryption E and decryption D are:

$E(m,e) = (g^\alpha, m e^\alpha) = (c_1, c_2), \qquad e = g^d, \qquad \alpha \in_R \mathbb{Z}_q$   (4)

$D((c_1,c_2), d) = c_2 (c_1^d)^{-1} = m$   (5)

3 ElGamal Threshold Decryption and Re-encryption

A public-key cryptosystem can be shared in several ways. The plaintext, the public key, the ciphertext as well as the private key can be used in a distributed way. For a video surveillance system, sharing the encryption process does not make sense. However, sharing the decryption process enables us to realize dual control. To increase security it is very useful to share the ciphertext as well.
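The sharing and reconstruction of (1) and (2) can be illustrated in a few lines of Python. This is a minimal sketch, not the authors' implementation: the prime Q is a toy modulus chosen for readability, and all function names are illustrative.

```python
import random

# Toy prime modulus for illustration only; a real deployment would use a
# large prime q as in the paper's group setup (assumption, not from the paper).
Q = 2089

def share(s, t, n):
    """Shamir-share secret s with a random t-degree polynomial among n users (eq. 1)."""
    coeffs = [s] + [random.randrange(1, Q) for _ in range(t)]
    poly = lambda x: sum(c * pow(x, j, Q) for j, c in enumerate(coeffs)) % Q
    return {i: poly(i) for i in range(1, n + 1)}

def lagrange_at_zero(points):
    """Reconstruct g(0) = s from at least t+1 points via Lagrange interpolation (eq. 2)."""
    s = 0
    for i, si in points.items():
        lam = 1
        for j in points:
            if j != i:
                lam = lam * (0 - j) * pow(i - j, -1, Q) % Q
        s = (s + si * lam) % Q
    return s

secret = 1234
shares = share(secret, t=2, n=5)
subset = {i: shares[i] for i in (1, 3, 5)}   # any t+1 = 3 shares suffice
assert lagrange_at_zero(subset) == secret
```

Note that any t or fewer shares reveal nothing about s, which is exactly the property the paper relies on when smartcards are lost.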
For lack of space we decided not to describe this variation. Instead we focus on how to share the decryption process, emphasizing selected aspects of the corresponding management of key shares. Due to its simplicity we use ElGamal threshold decryption, first proposed in [3]. The basic ElGamal decryption can be divided into two parts so that its computation only uses shares of the private key d. Therefore d has to be shared using a t-degree polynomial: $d \to (d_1,\dots,d_n)$. The ElGamal decryption function can be modified by replacing d with its Lagrange representation over the shares:

$D((c_1,c_2),d) = c_2 (c_1^d)^{-1} \overset{(3)}{=} c_2 \Big(\prod_{i=1}^{n} c_{1i}^{\lambda^d_{0,i}}\Big)^{-1} \overset{(5)}{=} m, \qquad c_{1i} = c_1^{d_i}$   (6)

Now we can divide this computation into the following two sub-functions:

Decryption Step 1. This step has to be done by at least t+1 share owners.

$D_1(c_1, d_i) = c_1^{d_i} \overset{(6)}{=} c_{1i}$

Decryption Step 2. To compute m, at least t+1 outputs of $D_1$ are required.

$D_2((c_{11},\dots,c_{1n}), c_2) = c_2 \Big(\prod_{i=1}^{n} c_{1i}^{\lambda^d_{0,i}}\Big)^{-1} \overset{(6)}{=} m$

If the private key d has been compromised, we have to provide an update of d and a re-encryption of the corresponding ciphertext without revealing the plaintext. In [19] an approach based on distributed blinding is given. There, a ciphertext is first blinded by a randomly chosen and encrypted value. After the blinded ciphertext has been decrypted in a particular way, the resulting blinded plaintext is encrypted with the new public key and finally unblinded. The advantage of this approach is that the instances that blind the ciphertext do not know anything about the private key. This is useful for transferring a ciphertext from one instance to another one (with different keys). In our case we need a mechanism that provides an update of the private key and the corresponding ciphertext.
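The two-step threshold decryption $D_1$/$D_2$ above can be sketched as follows. This is an illustrative sketch under assumed toy parameters (a small safe prime p = 2q+1 and generator G of the order-q subgroup), not the paper's implementation; all names are illustrative.

```python
import random

# Toy parameters (assumptions, not from the paper): p = 2q + 1 with q prime,
# G generates the order-q subgroup of Z_p*.
P, Q, G = 2039, 1019, 4

def share(s, t, n):
    """Shamir-share s in Z_q with a t-degree polynomial (section 2.1)."""
    coeffs = [s] + [random.randrange(1, Q) for _ in range(t)]
    return {i: sum(c * pow(i, j, Q) for j, c in enumerate(coeffs)) % Q
            for i in range(1, n + 1)}

def lam(i, ids):
    """Lagrange coefficient lambda_{0,i} over the participating ids."""
    out = 1
    for j in ids:
        if j != i:
            out = out * (0 - j) * pow(i - j, -1, Q) % Q
    return out

def encrypt(m, e):
    """Basic ElGamal encryption, eq. (4)."""
    a = random.randrange(1, Q)
    return pow(G, a, P), m * pow(e, a, P) % P

def D1(c1, d_i):
    """Decryption step 1: each share owner exponentiates c1 with his share d_i."""
    return pow(c1, d_i, P)

def D2(partials, c2):
    """Decryption step 2: combine t+1 partial results into m, eq. (6)."""
    c1d = 1
    for i, c1i in partials.items():
        c1d = c1d * pow(c1i, lam(i, partials.keys()), P) % P
    return c2 * pow(c1d, -1, P) % P

d = random.randrange(1, Q)          # private key; never reconstructed below
e = pow(G, d, P)                    # public key
shares_d = share(d, t=2, n=5)

m = pow(G, 42, P)                   # an ikey encoded as a group element
c1, c2 = encrypt(m, e)
partials = {i: D1(c1, shares_d[i]) for i in (1, 2, 4)}   # any 3 of 5 users
assert D2(partials, c2) == m
```

The key point is that d itself appears nowhere during decryption; only the shares $d_i$ are used, mirroring the paper's smartcard-based design.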
In our scenario the solution in [19] would require a distributed blinding, a distributed decryption and a distributed unblinding. As a consequence we propose a different variation based on the distance δ between the old private key d and the new private key d'. The advantage of our re-encryption is that we only modify the old ciphertext and do not perform decryptions or encryptions, respectively.

Theorem 1. Assume $(c_1,c_2)$ is a ciphertext computed over m and e. Then a ciphertext based on $e' = e g^\delta$ and decryptable by $d' = d + \delta$ can be computed by the following randomized transformation of $(c_1,c_2)$, without intermediately revealing the corresponding plaintext m:

$RE((c_1,c_2),\delta,e') = (c_1 g^\beta,\ c_2 c_1^\delta e'^\beta) = (c'_1, c'_2), \qquad \beta \in_R \mathbb{Z}_q$   (7)

$d' = d + \delta, \qquad e' = e g^\delta$   (8)

Proof. Let $(c'_1,c'_2)$ be a transformed ciphertext according to (7). Then the basic ElGamal decryption with the new private key d' results in m because:

$D((c'_1,c'_2),d') \overset{(5)}{=} c'_2 (c_1'^{d'})^{-1} \overset{(7)}{=} c_2 c_1^\delta e'^\beta ((c_1 g^\beta)^{d'})^{-1} \overset{(8)}{=} c_2 c_1^\delta (e g^\delta)^\beta ((c_1 g^\beta)^{d+\delta})^{-1} \overset{(4)}{=} m e^\alpha g^{\alpha\delta} (g^d g^\delta)^\beta ((g^\alpha g^\beta)^{d+\delta})^{-1} = m g^{\alpha(d+\delta)} g^{\beta(d+\delta)} (g^{(\alpha+\beta)(d+\delta)})^{-1} = m$  ⊓⊔

The re-encryption process in (7) can also be divided into two sub-functions, so that it can be performed in a distributed way (assuming δ and β are shared):

Re-encryption Step 1. The first step is done locally by every user $P_i$.

$RE_1(c_1,\delta_i,e',\beta_i) = (g^{\beta_i},\ c_1^{\delta_i},\ e'^{\beta_i}) = (\tilde{c}_{1i},\ c_{1i},\ e'_i)$

Re-encryption Step 2. The second step uses all outputs of $RE_1$ and $(c_1,c_2)$.

$RE_2(c_1,(\tilde{c}_{11},\dots,\tilde{c}_{1n}),(c_{11},\dots,c_{1n}),(e'_1,\dots,e'_n),c_2) = (c'_1,c'_2)$

$c'_1 = c_1 \prod_{i=1}^{n} \tilde{c}_{1i}^{\lambda^\beta_{0,i}}, \qquad c'_2 = c_2 \prod_{i=1}^{n} c_{1i}^{\lambda^\delta_{0,i}} \prod_{i=1}^{n} e_i'^{\lambda^\beta_{0,i}}$

If there is no need to mask the correspondence between the old and the new ciphertext, the modification of the original randomness α by use of β can be removed.

4 Video Surveillance

4.1 Setup Phase

During the initialization of the system, a key pair (e,d) for the ElGamal cryptosystem has to be generated in a shared way. To achieve this, all users cooperatively generate shares of the private key d without reconstructing it. Then they compute e and send it to the video camera. The distributed key generation proposed in [11] is very useful to generate a
private key without reconstructing it. A more secure version is proposed in [5]. However, we need a fair tree-structured generation of the private key. Based on this fact, we modify the original protocol in order to be able to build such a tree. A detailed description of a tree-shared generation of secret values can be found in [14]; we refer to it for lack of space.

4.2 Recording Phase

Within this phase VC uses local hybrid encryption. First of all, VC generates an interval-key k at random and encrypts the interval video using the symmetric encryption described in section 2.2: $E_S(video, k) = c$. For the encryption of k the camera uses the asymmetric encryption described in section 2.3 with public key e: $E(k,e) = (c_1,c_2)$. Within each interval the camera sends the encrypted video to S_V and its corresponding encrypted ikey to S_K. Both servers store the ciphertext in a particular database.

4.3 Retrieval Phase

The retrieval of a particular video can be done in two steps:

Decryption of ikey. S_K has to send $(c_1,c_2)$ to every user in U who agrees to reconstruct the video. Then each user $P_i$ performs $D_1(c_1,d_i) = c_{1i}$ and broadcasts the result within U. Finally, every user $P_i$ decrypts the ikey k by computing $D_2((c_{11},\dots,c_{1n}),c_2) = k$.

Decryption of Video. S_V has to send the encrypted video c (corresponding to k) to every user $P_i$, who decrypts it by performing $D_S(c,k) = video$.

5 Managing Private-Key Shares

Generally, an access structure has to be very flexible within an organization. The more users exist, the sooner it might occur that a user leaves or joins the system.

5.1 Registration of a New User

When registering a new user $P_{n+1}$ we have to distinguish users of the first level, who do not have a predecessor, and users of lower levels, who have predecessors.

New First-Level User. Every existing first-level user $P_i$ shares his share $d_i \to (d_{i1},\dots,d_{i\,n+1})$ among $U' = U \cup \{P_{n+1}\}$. Then every user $P_j$ in U' interpolates the received shares $(d_{1j},\dots,d_{nj}) \to d_j$. Due to the fact that every share changes, an update of successor-shares has to be
performed.

Others. Every user $P_i$ of a lower level always has a predecessor P, who is responsible for registering his new successor $P_{n+1}$. If P does not know the shares of his existing successors, they have to send him their shares. Owning at least t+1 shares of his successors enables P to generate a share $d_{n+1} = \sum_{i=1}^{n} d_i \lambda^d_{n+1,i}$ for $P_{n+1}$ without causing a recursive update of successor-shares. After importing $d_{n+1}$ to $P_{n+1}$'s smartcard, P removes $d_1,\dots,d_n$ from his smartcard.

The generation of new shares can be done in several ways. An important fact is to keep side effects minimal, which we cannot guarantee with the solution described above when registering a first-level user. For more efficient but also somewhat more complex variations we refer to our technical report [13].

5.2 Loss of Smartcards

If a user collects at least t+1 previously lost smartcards, he might be able to compromise the private key d. Regular updates of the shares without changing d make the collector's shares unusable. Such updates can be very time-consuming, because all users of the access structure have to participate in the update process at the same time (except if centralized updates are used). If a user loses his smartcard, his share can be reconstructed using the computations in section 5.1.
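The share update without changing d, which makes a collector's old shares unusable, can be sketched as follows: each user contributes a fresh t-degree sharing of 0, and everyone adds the received values to his share. This is a simplified proactive-refresh sketch in the spirit of [7], with a toy modulus and illustrative names, not the paper's protocol.

```python
import random

Q = 1019   # toy prime modulus (assumption; the paper works in Z_q)

def share(s, t, n):
    """Shamir-share s with a t-degree polynomial (section 2.1)."""
    coeffs = [s] + [random.randrange(1, Q) for _ in range(t)]
    return {i: sum(c * pow(i, j, Q) for j, c in enumerate(coeffs)) % Q
            for i in range(1, n + 1)}

def reconstruct(points):
    """Lagrange interpolation at 0 (eq. 2)."""
    out = 0
    for i, si in points.items():
        lam = 1
        for j in points:
            if j != i:
                lam = lam * (0 - j) * pow(i - j, -1, Q) % Q
        out = (out + si * lam) % Q
    return out

def refresh(shares, t):
    """Proactive refresh in the spirit of [7]: every user adds a random
    t-degree sharing of 0, so d is unchanged but the old shares no longer
    combine with the refreshed ones (useless to a smartcard collector)."""
    ids = list(shares)
    new = dict(shares)
    for _ in ids:                       # one zero-sharing per contributing user
        coeffs = [0] + [random.randrange(1, Q) for _ in range(t)]
        for i in ids:
            new[i] = (new[i] + sum(c * pow(i, j, Q)
                                   for j, c in enumerate(coeffs))) % Q
    return new

d = 777
old = share(d, t=2, n=5)
new = refresh(old, t=2)
assert reconstruct(new) == d   # the shared private key is preserved
```

Mixing old and refreshed shares does not reconstruct d, so previously lost smartcards lose their value after each refresh.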
We always have to consider the worst case, which is that another user of the access structure finds the smartcard. Then the threshold is decreased, which we want to avoid. Due to this fact we propose to run an update protocol first and then generate a new share for the user who lost his smartcard.

5.3 Proactive Behaviour

Collecting lost smartcards can be used to decrease the threshold. So we have to update the private-key shares without changing the private key (as proposed in [7]). This should be done in case of losing a smartcard, but it can also be performed proactively in intervals. Using short intervals can be very time-consuming if updates are done in a distributed way, because users have to be online at the same time. In this case the update could be initiated by a central trusted authority. A big advantage of this variation is that updates could be run in batch mode.

What happens if the threshold t is violated within one interval? In this case we propose to update the private key in a shared way in sufficient time, which forces a re-encryption of the ciphertext that corresponds to the compromised private key (see section 6). Until the re-encryption process has been finished, S_K has to be protected against availability-compromising attacks. To handle this problem we propose to share the ciphertext pairs $(c_1,c_2)$ among several servers. This would lead to several modifications of the basic system which we do not describe here.

5.4 De-registration of Users

If a user leaves the organisation, his smartcard (holding the share) should be securely destroyed. If a new user takes over his tasks, the protocol described in section 5.1 has to be run.

6 Update of Private Key and Corresponding Ciphertext

First of all, (e,d) has to be updated by all cameras and all share owners of d.
Before destroying the update values, a re-encryption of every ciphertext $(c_1,c_2)$ generated using e has to be done.

Shared Generation of Update Values. All the users in U run the tree-based key generation mentioned in section 4.1 to get shares $\delta_1,\dots,\delta_n$ of a private-key update δ and shares $\beta_1,\dots,\beta_n$ of a randomness update β.

Update of Private-Key Shares. Every user $P_i$ computes $d'_i = d_i + \delta_i$, which is a share of the private key $d' = d + \delta$.

Shared Update of Public Key. All users compute the updated public key $e' = e \prod_{i=1}^{n} g^{\delta_i \lambda^\delta_{0,i}}$ in a distributed way and send e' to VC.

Re-encryption of Encrypted ikeys. S_K has to send the old ciphertext part $c_1$ to every user $P_i$ participating in the re-encryption process. Then each $P_i$ has to perform $RE_1(c_1,\delta_i,e',\beta_i) = (\tilde{c}_{1i}, c_{1i}, e'_i)$. Finally, each output of $RE_1$ has to be sent to S_K, which then replaces the old ciphertext $(c_1,c_2)$ by the output of $RE_2(c_1,(\tilde{c}_{11},\dots,\tilde{c}_{1n}),(c_{11},\dots,c_{1n}),(e'_1,\dots,e'_n),c_2) = (c'_1,c'_2)$.

7 Security Analysis

We now briefly analyse the power of each instance of the system to retrieve any secret information. However, we do not consider the tree structure; the analysis can be interpreted recursively.

As long as the video cameras are not able to solve the Discrete Logarithm Problem and do not compromise at least t+1 first-level users, they are not able to get any information about the private key d, the update values δ and β, or any shares of the users. To decrypt video material, S_V needs the corresponding ikey. But to get access to it, he has to compromise at least t+1 first-level users and S_K. A first-level user needs at least t other shares to reconstruct d or the update values δ and β. Moreover, he has to compromise S_K and S_V to be able to decrypt videos. Up to t smartcards of first-level users can be stolen and compromised without revealing any information about d. Regular updates of the shares increase the security of the private key. Moreover, the smartcards are secured by a Personal Identification Number. To preserve resistance against active malicious behaviour (e.g.
sending wrong intermediate results), extensions according to secure multi-party computation with active adversaries are required (see [6,8]).

8 Conclusion & Future Research

Considering the requirements stated in section 1, it can be seen that all of them have been fulfilled. Privacy protection of the monitored users is provided by the encryption of video material and interval-keys. The 4-eyes principle (dual control) is provided by a tree-based access structure. Minimal access of authorized people to monitored information is guaranteed by scaling the monitored intervals to a minimum, so that many ikeys are generated. Keys and key components are generated tree-based in a fair distributed way according to [14]. The update of keys and key components is realized by tree-based generation of update values and threshold re-encryption. Tree-based secret sharing provides the possibility to simulate any user by his successors.

The discussed distributed version of ElGamal has been well known since [3] and is only one out of many. Discussing how public-key cryptosystems can be distributed can lead to many more applications than access structures to monitored information. When sharing functions, the management of key shares appears to be much more difficult than "normal" key management. Future research could emphasize managing keys in distributed public-key cryptosystems while keeping the number of local shares minimal, not limited to the ElGamal cryptosystem.
References

1. Advanced Encryption Standard (AES). FIPS PUB 197, NIST, 2001.
2. M. Boyle, C. Edwards, S. Greenberg. The Effects of Filtered Video on Awareness and Privacy. CSCW '00: Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, Philadelphia, PA, USA, 2000, pp. 1-10.
3. Y. Desmedt, Y. Frankel. Threshold Cryptosystems. Advances in Cryptology: Proceedings of CRYPTO '89, Springer-Verlag, pp. 307-315, 1990.
4. T. ElGamal. A Public-Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms. Advances in Cryptology: CRYPTO '84, Springer-Verlag, pp. 10-18, 1985.
5. R. Gennaro et al. Secure Distributed Key Generation for Discrete-Log Based Cryptosystems. Proceedings of EUROCRYPT '99, Springer LNCS 1592, pp. 295-310.
6. O. Goldreich et al. How to play any mental game: a completeness theorem for protocols with honest majority. Proceedings of the 19th ACM STOC, pp. 218-229, 1987.
7. A. Herzberg et al. Proactive secret sharing or: how to cope with perpetual leakage. Advances in Cryptology: CRYPTO '95, vol. 963 of LNCS, Springer-Verlag, pp. 339-352.
8. M. Hirt. Multi-Party Computation: Efficient Protocols, General Adversaries, and Voting. Ph.D. thesis, ETH Zurich, 2001. Reprint as vol. 3 of ETH Series in Information Security and Cryptography, Hartung-Gorre Verlag, Konstanz, 2001.
9. R. T. Collins et al. A System for Video Surveillance and Monitoring. Proceedings of the American Nuclear Society (ANS) Eighth International Topical Meeting on Robotics and Remote Systems, 1999.
10. E. Newton, L. Sweeney, B. Malin. Preserving Privacy by De-identifying Facial Images. IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 2, pp. 232-243, February 2005.
11. T. Pedersen. A threshold cryptosystem without a trusted party. Advances in Cryptology: EUROCRYPT '91, vol. 547 of LNCS, Springer-Verlag, pp. 522-526, 1991.
12. J. Pieprzyk, T. Hardjono, J. Seberry. Fundamentals of Computer Security. Springer-Verlag, 2003.
13. M. Schaffer. Managing Key-Shares in Distributed Public-Key Cryptosystems. Technical Report TR-syssec-05-04, University of Klagenfurt, Austria, August 2005.
14. M. Schaffer. Tree-shared Generation of a Secret Value. Technical Report TR-syssec-05-01, University of Klagenfurt, Austria, June 2005.
15. M. Schaffer, P. Schartner. Hierarchical Key Escrow with Passive Adversaries. Technical Report TR-syssec-05-02, University of Klagenfurt, Austria, June 2005.
16. A. Senior et al. Blinkering Surveillance: Enabling Video Privacy through Computer Vision. IBM Research Report RC22886 (W0308-109), August 28, 2003.
17. A. Shamir. How to share a secret. Communications of the ACM, vol. 22, no. 11, pp. 612-613, 1979.
18. A. C. Yao. Protocols for secure computation. Proceedings of the 23rd IEEE Symposium on Foundations of Computer Science, 1982.
19. L. Zhou et al. Distributed Blinding for ElGamal Re-encryption. Proceedings of the 25th IEEE International Conference on Distributed Computing Systems, Ohio, June 2005, pp. 815-824.