Density-based multifeature background subtraction with support vector machine
NuMicro® Family
Arm® ARM926EJ-S Based
NuMaker-HMI-N9H30 User Manual
Evaluation Board for NuMicro® N9H30 Series

The information described in this document is the exclusive intellectual property of Nuvoton Technology Corporation and shall not be reproduced without permission from Nuvoton. Nuvoton is providing this document only for reference purposes of NuMicro microcontroller and microprocessor based system design. Nuvoton assumes no responsibility for errors or omissions. All data and specifications are subject to change without notice. For additional information or questions, please contact: Nuvoton Technology Corporation.

Table of Contents
1 OVERVIEW
  1.1 Features
    1.1.1 NuMaker-N9H30 Main Board Features
    1.1.2 NuDesign-TFT-LCD7 Extension Board Features
  1.2 Supporting Resources
2 NUMAKER-HMI-N9H30 HARDWARE CONFIGURATION
  2.1 NuMaker-N9H30 Board - Front View
  2.2 NuMaker-N9H30 Board - Rear View
  2.3 NuDesign-TFT-LCD7 - Front View
  2.4 NuDesign-TFT-LCD7 - Rear View
  2.5 NuMaker-N9H30 and NuDesign-TFT-LCD7 PCB Placement
3 NUMAKER-N9H30 AND NUDESIGN-TFT-LCD7 SCHEMATICS
  3.1 NuMaker-N9H30 - GPIO List Circuit
  3.2 NuMaker-N9H30 - System Block Circuit
  3.3 NuMaker-N9H30 - Power Circuit
  3.4 NuMaker-N9H30 - N9H30F61IEC Circuit
  3.5 NuMaker-N9H30 - Setting, ICE, RS-232_0, Key Circuit
  3.6 NuMaker-N9H30 - Memory Circuit
  3.7 NuMaker-N9H30 - I2S, I2C_0, RS-485_6 Circuit
  3.8 NuMaker-N9H30 - RS-232_2 Circuit
  3.9 NuMaker-N9H30 - LCD Circuit
  3.10 NuMaker-N9H30 - CMOS Sensor, I2C_1, CAN_0 Circuit
  3.11 NuMaker-N9H30 - RMII_0_PF Circuit
  3.12 NuMaker-N9H30 - RMII_1_PE Circuit
  3.13 NuMaker-N9H30 - USB Circuit
  3.14 NuDesign-TFT-LCD7 - TFT-LCD7 Circuit
4 REVISION HISTORY

List of Figures
Figure 1-1 Front View of NuMaker-HMI-N9H30 Evaluation Board
Figure 1-2 Rear View of NuMaker-HMI-N9H30 Evaluation Board
Figure 2-1 Front View of NuMaker-N9H30 Board
Figure 2-2 Rear View of NuMaker-N9H30 Board
Figure 2-3 Front View of NuDesign-TFT-LCD7 Board
Figure 2-4 Rear View of NuDesign-TFT-LCD7 Board
Figure 2-5 Front View of NuMaker-N9H30 PCB Placement
Figure 2-6 Rear View of NuMaker-N9H30 PCB Placement
Figure 2-7 Front View of NuDesign-TFT-LCD7 PCB Placement
Figure 2-8 Rear View of NuDesign-TFT-LCD7 PCB Placement
Figure 3-1 GPIO List Circuit
Figure 3-2 System Block Circuit
Figure 3-3 Power Circuit
Figure 3-4 N9H30F61IEC Circuit
Figure 3-5 Setting, ICE, RS-232_0, Key Circuit
Figure 3-6 Memory Circuit
Figure 3-7 I2S, I2C_0, RS-485_6 Circuit
Figure 3-8 RS-232_2 Circuit
Figure 3-9 LCD Circuit
Figure 3-10 CMOS Sensor, I2C_1, CAN_0 Circuit
Figure 3-11 RMII_0_PF Circuit
Figure 3-12 RMII_1_PE Circuit
Figure 3-13 USB Circuit
Figure 3-14 TFT-LCD7 Circuit

List of Tables
Table 2-1 LCD Panel Combination Connector (CON8) Pin Function
Table 2-2 Three Sets of Indication LED Functions
Table 2-3 Six Sets of User SW, Key Matrix Functions
Table 2-4 CMOS Sensor Connector (CON10) Function
Table 2-5 JTAG ICE Interface (J2) Function
Table 2-6 Expand Port (CON7) Function
Table 2-7 UART0 (J3) Function
Table 2-8 UART2 (J6) Function
Table 2-9 RS-485_6 (SW6~8) Function
Table 2-10 Power on Setting (SW4) Function
Table 2-11 Power on Setting (S2) Function
Table 2-12 Power on Setting (S3) Function
Table 2-13 Power on Setting (S4) Function
Table 2-14 Power on Setting (S5) Function
Table 2-15 Power on Setting (S7/S6) Function
Table 2-16 Power on Setting (S9/S8) Function
Table 2-17 CMOS Sensor Connector (CON9) Function
Table 2-18 CAN_0 (SW9~10) Function

1 OVERVIEW

The NuMaker-HMI-N9H30 is an evaluation board for GUI application development. It consists of two parts: a NuMaker-N9H30 main board and a NuDesign-TFT-LCD7 extension board.
The NuMaker-HMI-N9H30 is designed for project evaluation, prototype development, and validation of HMI (Human Machine Interface) functions. It integrates a touchscreen display, voice input/output, rich serial-port services, and I/O interfaces, and provides multiple external storage options. The NuDesign-TFT-LCD7 plugs into the main board via the DIN_32x2 extension connector. It carries one 7" LCD with a resolution of 800x480, a 24-bit RGB interface, and an embedded 4-wire resistive touch panel.

Figure 1-1 Front View of NuMaker-HMI-N9H30 Evaluation Board
Figure 1-2 Rear View of NuMaker-HMI-N9H30 Evaluation Board

1.1 Features

1.1.1 NuMaker-N9H30 Main Board Features
● N9H30F61IEC chip: LQFP216-pin MCP package with 64 MB DDR
● SPI Flash: W25Q256JVEQ (32 MB), for quad-mode booting or storage memory
● NAND Flash: W29N01HVSINA (128 MB), for booting or storage memory
● One Micro-SD/TF card slot, usable either as an SD memory card for data storage or as an SDIO (Wi-Fi) device
● Two sets of COM ports:
  – One DB9 RS-232 port on UART_0 with a 75C3232E transceiver chip, usable for function debug and system development
  – One DB9 RS-232 port on UART_2 with a 75C3232E transceiver chip, for user applications
● 22 GPIO expansion ports, including seven sets of UART functions
● JTAG interface for software development
● Microphone input and earphone/speaker output through a 24-bit stereo audio codec (NAU88C22) on the I2S interface
● Six sets of user-configurable push-button keys
● Three sets of LEDs for status indication
● SN65HVD230 transceiver chip for CAN bus communication
● MAX3485 transceiver chip for RS-485 device connection
● One buzzer device for program applications
● Two sets of RJ45 ports with 10/100 Mbps Ethernet MAC, using IP101GR PHY chips
● USB_0 usable as Device/Host and USB_1 usable as Host; supports pen drives, keyboards, mice, and printers
● Over-voltage and over-current protection using an APL3211A chip
● RTC battery socket for a CR2032 cell; the battery voltage can be monitored via ADC0
● System power supplied by a DC 5 V adaptor or USB VBUS

1.1.2 NuDesign-TFT-LCD7 Extension Board Features
● 7" 800x480 panel with a 4-wire resistive touch screen and a 24-bit RGB888 interface
● DIN_32x2 extension connector

1.2 Supporting Resources
For sample code and an introduction to the NuMaker-N9H30, please refer to the N9H30 BSP:
https:///products/gui-solution/gui-platform/numaker-hmi-n9h30/?group=Software&tab=2
Visit NuForum for further discussion about the NuMaker-HMI-N9H30:
/viewforum.php?f=31

2 NUMAKER-HMI-N9H30 HARDWARE CONFIGURATION

2.1 NuMaker-N9H30 Board - Front View

Figure 2-1 Front View of NuMaker-N9H30 Board (combination connector CON8; six user switches K1~6; three indication LEDs LED1~3; power supply switch SW_POWER1; audio codec U10; microphone M1; NAND Flash U9; RS-232 transceivers U6, U12; RS-485 transceiver U11; CAN transceiver U13)

Figure 2-1 shows the main components and connectors from the front side of the NuMaker-N9H30 board. The following lists the components and connectors from the front view:
● NuMaker-N9H30 board and NuDesign-TFT-LCD7 board combination connector (CON8).
This panel connector supports a 4-/5-wire resistive or capacitive touch panel and a 24-bit RGB888 interface.

Connector     GPIO pin of N9H30   Function
CON8.1~2      -                   Power 3.3V
CON8.3        GPD7                LCD_CS
CON8.4        GPH3                LCD_BLEN
CON8.5        GPG9                LCD_DEN
CON8.7        GPG7                LCD_HSYNC
CON8.8        GPG6                LCD_CLK
CON8.9        GPD15               LCD_D23 (R7)
CON8.10       GPD14               LCD_D22 (R6)
CON8.11       GPD13               LCD_D21 (R5)
CON8.12       GPD12               LCD_D20 (R4)
CON8.13       GPD11               LCD_D19 (R3)
CON8.14       GPD10               LCD_D18 (R2)
CON8.15       GPD9                LCD_D17 (R1)
CON8.16       GPD8                LCD_D16 (R0)
CON8.17       GPA15               LCD_D15 (G7)
CON8.18       GPA14               LCD_D14 (G6)
CON8.19       GPA13               LCD_D13 (G5)
CON8.20       GPA12               LCD_D12 (G4)
CON8.21       GPA11               LCD_D11 (G3)
CON8.22       GPA10               LCD_D10 (G2)
CON8.23       GPA9                LCD_D9 (G1)
CON8.24       GPA8                LCD_D8 (G0)
CON8.25       GPA7                LCD_D7 (B7)
CON8.26       GPA6                LCD_D6 (B6)
CON8.27       GPA5                LCD_D5 (B5)
CON8.28       GPA4                LCD_D4 (B4)
CON8.29       GPA3                LCD_D3 (B3)
CON8.30       GPA2                LCD_D2 (B2)
CON8.31       GPA1                LCD_D1 (B1)
CON8.32       GPA0                LCD_D0 (B0)
CON8.33~36    -                   -
CON8.37       GPB2                LCD_PWM
CON8.39~40    -                   VSS
CON8.41       ADC7                XP
CON8.42       ADC3                Vsen
CON8.43       ADC6                XM
CON8.44       ADC4                YM
CON8.45       -                   -
CON8.46       ADC5                YP
CON8.47~48    -                   VSS
CON8.49       GPG0                I2C0_C
CON8.50       GPG1                I2C0_D
CON8.51       GPG5                TOUCH_INT
CON8.52~58    -                   -
CON8.59~60    -                   VSS
CON8.61~62    -                   -
CON8.63~64    -                   Power 5V
Table 2-1 LCD Panel Combination Connector (CON8) Pin Function

● Power supply switch (SW_POWER1): the system powers on when the SW_POWER1 button is pressed.
● Three sets of indication LEDs:

LED    Color   Description
LED1   Red     System power is cut off and LED1 lights when the input voltage exceeds 5.7 V or the current exceeds 2 A.
LED2   Green   Power normal state.
LED3   Green   Controlled by the GPH2 pin.
Table 2-2 Three Sets of Indication LED Functions

● Six sets of user SW, key matrix for user definition:

Key   GPIO pins of N9H30
K1    GPF10 (Row0), GPB4 (Col0)
K2    GPF10 (Row0), GPB5 (Col1)
K3    GPE15 (Row1), GPB4 (Col0)
K4    GPE15 (Row1), GPB5 (Col1)
K5    GPE14 (Row2), GPB4 (Col0)
K6    GPE14 (Row2), GPB5 (Col1)
Table 2-3 Six Sets of User SW, Key Matrix Functions

● NAND Flash (128 MB): Winbond W29N01HVSINA (U9)
● Microphone (M1): sound input through the Nuvoton NAU88C22 chip
● Audio codec chip (U10): Nuvoton NAU88C22, connected to the N9H30 over the I2S interface
  – SW6/SW7/SW8: 1-2 short selects the RS-485_6 function, connected to the 2P terminals (CON5 and J5)
  – SW6/SW7/SW8: 2-3 short selects the I2S function, connected to the NAU88C22 (U10)
● CMOS sensor connector (CON10, SW9~10)
  – SW9~10: 1-2 short selects the CAN_0 function, connected to the 2P terminal (CON11)
  – SW9~10: 2-3 short selects the CMOS sensor function, connected to the CMOS sensor connector (CON10)

Connector     GPIO pin of N9H30   Function
CON10.1~2     -                   VSS
CON10.3~4     -                   Power 3.3V
CON10.5~6     -                   -
CON10.7       GPI4                S_PCLK
CON10.8       GPI3                S_CLK
CON10.9       GPI8                S_D0
CON10.10      GPI9                S_D1
CON10.11      GPI10               S_D2
CON10.12      GPI11               S_D3
CON10.13      GPI12               S_D4
CON10.14      GPI13               S_D5
CON10.15      GPI14               S_D6
CON10.16      GPI15               S_D7
CON10.17      GPI6                S_VSYNC
CON10.18      GPI5                S_HSYNC
CON10.19      GPI0                S_PWDN
CON10.20      GPI7                S_nRST
CON10.21      GPG2                I2C1_C
CON10.22      GPG3                I2C1_D
CON10.23~24   -                   VSS
Table 2-4 CMOS Sensor Connector (CON10) Function

2.2 NuMaker-N9H30 Board - Rear View

Figure 2-2 Rear View of NuMaker-N9H30 Board (5V in CON1; RS-232 DB9 CON2, CON6; expand port CON7; speaker output J4; earphone output CON4; buzzer BZ1; system reset SW5; SPI Flash U7, U8; JTAG ICE J2; power protection IC U1; N9H30F61IEC U5; Micro SD slot CON3; RJ45 CON12, CON13; USB1 host CON15; USB0 device/host CON14; CAN_0 terminal CON11; CMOS sensor connector CON9; power-on setting SW4, S2~S9; RS-485_6 terminal CON5; RTC battery BT1; RMII PHY U14, U16)

Figure 2-2 shows the main components and connectors from the rear side of the NuMaker-N9H30 board.
The following lists the components and connectors from the rear view:
● +5V In (CON1): 5 V power-adaptor input
● JTAG ICE interface (J2):

Connector   GPIO pin of N9H30   Function
J2.1        -                   Power 3.3V
J2.2        GPJ4                nTRST
J2.3        GPJ2                TDI
J2.4        GPJ1                TMS
J2.5        GPJ0                TCK
J2.6        -                   VSS
J2.7        GPJ3                TDO
J2.8        -                   RESET
Table 2-5 JTAG ICE Interface (J2) Function

● SPI Flash (32 MB): Winbond W25Q256JVEQ (U7); only one SPI Flash (U7 or U8) can be used
● System reset (SW5): the system resets when the SW5 button is pressed
● Buzzer (BZ1): controlled by the GPB3 pin of the N9H30
● Speaker output (J4): sound output through the NAU88C22 chip
● Earphone output (CON4): sound output through the NAU88C22 chip
● Expand port for user use (CON7):

Connector    GPIO pin of N9H30   Function
CON7.1~2     -                   Power 3.3V
CON7.3       GPE12               UART3_TXD
CON7.4       GPH4                UART1_TXD
CON7.5       GPE13               UART3_RXD
CON7.6       GPH5                UART1_RXD
CON7.7       GPB0                UART5_TXD
CON7.8       GPH6                UART1_RTS
CON7.9       GPB1                UART5_RXD
CON7.10      GPH7                UART1_CTS
CON7.11      GPI1                UART7_TXD
CON7.12      GPH8                UART4_TXD
CON7.13      GPI2                UART7_RXD
CON7.14      GPH9                UART4_RXD
CON7.15      -                   -
CON7.16      GPH10               UART4_RTS
CON7.17      -                   -
CON7.18      GPH11               UART4_CTS
CON7.19~20   -                   VSS
CON7.21      GPB12               UART10_TXD
CON7.22      GPH12               UART8_TXD
CON7.23      GPB13               UART10_RXD
CON7.24      GPH13               UART8_RXD
CON7.25      GPB14               UART10_RTS
CON7.26      GPH14               UART8_RTS
CON7.27      GPB15               UART10_CTS
CON7.28      GPH15               UART8_CTS
CON7.29~30   -                   Power 5V
Table 2-6 Expand Port (CON7) Function

● UART0 selection (CON2, J3):
  – RS-232_0 function, connected to a DB9 female connector (CON2) for debug message output
  – GPE0/GPE1 connected to a 2P terminal (J3)

J3.1   GPE1   UART0_RXD
J3.2   GPE0   UART0_TXD
Table 2-7 UART0 (J3) Function

● UART2 selection (CON6, J6):
  – RS-232_2 function, connected to a DB9 female connector (CON6) for debug message output
  – GPF11~14 connected to a 4P terminal (J6)

J6.1   GPF11   UART2_TXD
J6.2   GPF12   UART2_RXD
J6.3   GPF13   UART2_RTS
J6.4   GPF14   UART2_CTS
Table 2-8 UART2 (J6) Function

● RS-485_6 selection (CON5, J5, SW6~8):
  – SW6~8: 1-2 short selects the RS-485_6 function, connected to the 2P terminals (CON5 and J5)
  – SW6~8: 2-3 short selects the I2S function, connected to the NAU88C22 (U10)

SW6: 1-2 short   GPG11   RS-485_6_DI    SW6: 2-3 short   I2S_DO
SW7: 1-2 short   GPG12   RS-485_6_RO    SW7: 2-3 short   I2S_DI
SW8: 1-2 short   GPG13   RS-485_6_ENB   SW8: 2-3 short   I2S_BCLK
Table 2-9 RS-485_6 (SW6~8) Function

● Power-on setting (SW4, S2~9):

SW4.2/SW4.1   ON/ON     Boot from USB
SW4.2/SW4.1   ON/OFF    Boot from eMMC
SW4.2/SW4.1   OFF/ON    Boot from NAND Flash
SW4.2/SW4.1   OFF/OFF   Boot from SPI Flash
Table 2-10 Power on Setting (SW4) Function

S2   Short   System clock from 12 MHz crystal
S2   Open    System clock from UPLL output
Table 2-11 Power on Setting (S2) Function

S3   Short   Watchdog Timer OFF
S3   Open    Watchdog Timer ON
Table 2-12 Power on Setting (S3) Function

S4   Short   GPJ[4:0] used as GPIO pins
S4   Open    GPJ[4:0] used as the JTAG ICE interface
Table 2-13 Power on Setting (S4) Function

S5   Short   UART0 debug message ON
S5   Open    UART0 debug message OFF
Table 2-14 Power on Setting (S5) Function

S7/S6   Short/Short   NAND Flash page size 2 KB
S7/S6   Short/Open    NAND Flash page size 4 KB
S7/S6   Open/Short    NAND Flash page size 8 KB
S7/S6   Open/Open     Ignore
Table 2-15 Power on Setting (S7/S6) Function

S9/S8   Short/Short   NAND Flash ECC type BCH T12
S9/S8   Short/Open    NAND Flash ECC type BCH T15
S9/S8   Open/Short    NAND Flash ECC type BCH T24
S9/S8   Open/Open     Ignore
Table 2-16 Power on Setting (S9/S8) Function

● CMOS sensor connector (CON9, SW9~10):
  – SW9~10: 1-2 short selects the CAN_0 function, connected to the 2P terminal (CON11)
  – SW9~10: 2-3 short selects the CMOS sensor function, connected to the CMOS sensor connector (CON9)

Connector    GPIO pin of N9H30   Function
CON9.1~2     -                   VSS
CON9.3~4     -                   Power 3.3V
CON9.5~6     -                   -
CON9.7       GPI4                S_PCLK
CON9.8       GPI3                S_CLK
CON9.9       GPI8                S_D0
CON9.10      GPI9                S_D1
CON9.11      GPI10               S_D2
CON9.12      GPI11               S_D3
CON9.13      GPI12               S_D4
CON9.14      GPI13               S_D5
CON9.15      GPI14               S_D6
CON9.16      GPI15               S_D7
CON9.17      GPI6                S_VSYNC
CON9.18      GPI5                S_HSYNC
CON9.19      GPI0                S_PWDN
CON9.20      GPI7                S_nRST
CON9.21      GPG2                I2C1_C
CON9.22      GPG3                I2C1_D
CON9.23~24   -                   VSS
Table 2-17 CMOS Sensor Connector (CON9) Function

● CAN_0 selection (CON11, SW9~10):
  – SW9~10: 1-2 short selects the CAN_0 function, connected to the 2P terminal (CON11)
  – SW9~10: 2-3 short selects the CMOS sensor function, connected to the CMOS sensor connectors (CON9, CON10)

SW9: 1-2 short    GPI3   CAN_0_RXD    SW9: 2-3 short    S_CLK
SW10: 1-2 short   GPI4   CAN_0_TXD    SW10: 2-3 short   S_PCLK
Table 2-18 CAN_0 (SW9~10) Function

● USB0 Device/Host Micro-AB connector (CON14); CON14 pin 4 ID = 1 selects Device, ID = 0 selects Host
● USB1 USB Host Type-A connector (CON15)
● RJ45_0 connector with LED indicators (CON12), RMII PHY IP101GR (U14)
● RJ45_1 connector with LED indicators (CON13), RMII PHY IP101GR (U16)
● Micro-SD/TF card slot (CON3)
● SoC CPU: Nuvoton N9H30F61IEC (U5)
● RTC battery, 3.3 V powered (BT1, J1); the battery voltage can be measured via ADC0. The RTC power has three sources:
  – Shared with the 3.3 V I/O power
  – Battery socket for a CR2032 cell (BT1)
  – External connector (J1)
● Board version 2.1

2.3 NuDesign-TFT-LCD7 - Front View

Figure 2-3 Front View of NuDesign-TFT-LCD7 Board

Figure 2-3 shows the main components and connectors from the front side of the NuDesign-TFT-LCD7 board: a 7" 800x480 panel with a 4-wire resistive touch screen and a 24-bit RGB888 interface.

2.4 NuDesign-TFT-LCD7 - Rear View

Figure 2-4 Rear View of NuDesign-TFT-LCD7 Board

Figure 2-4 shows the main components and connectors from the rear side of the NuDesign-TFT-LCD7 board: the NuMaker-N9H30 and NuDesign-TFT-LCD7 combination connector (CON1).

2.5 NuMaker-N9H30 and NuDesign-TFT-LCD7 PCB Placement

Figure 2-5 Front View of NuMaker-N9H30 PCB Placement
Figure 2-6 Rear View of NuMaker-N9H30 PCB Placement
Figure 2-7 Front View of NuDesign-TFT-LCD7 PCB Placement
Figure 2-8 Rear View of NuDesign-TFT-LCD7 PCB Placement

3 NUMAKER-N9H30 AND NUDESIGN-TFT-LCD7 SCHEMATICS

3.1 NuMaker-N9H30 - GPIO List Circuit
Figure 3-1 shows the N9H30F61IEC GPIO list circuit.

3.2 NuMaker-N9H30 - System Block Circuit
Figure 3-2 shows the system block circuit.

3.3 NuMaker-N9H30 - Power Circuit
Figure 3-3 shows the power circuit.

3.4 NuMaker-N9H30 - N9H30F61IEC Circuit
Figure 3-4 shows the N9H30F61IEC circuit.

3.5 NuMaker-N9H30 - Setting, ICE, RS-232_0, Key Circuit
Figure 3-5 shows the setting, ICE, RS-232_0, key circuit.

3.6 NuMaker-N9H30 - Memory Circuit
Figure 3-6 shows the memory circuit.

3.7 NuMaker-N9H30 - I2S, I2C_0, RS-485_6 Circuit
Figure 3-7 shows the I2S, I2C_0, RS-485_6 circuit.

3.8 NuMaker-N9H30 - RS-232_2 Circuit
Figure 3-8 shows the RS-232_2 circuit.

3.9 NuMaker-N9H30 - LCD Circuit
Figure 3-9 shows the LCD circuit.

3.10 NuMaker-N9H30 - CMOS Sensor, I2C_1, CAN_0 Circuit
Figure 3-10 shows the CMOS sensor, I2C_1, CAN_0 circuit.

3.11 NuMaker-N9H30 - RMII_0_PF Circuit
Figure 3-11 shows the RMII_0_PF circuit.

3.12 NuMaker-N9H30 - RMII_1_PE Circuit
Figure 3-12 shows the RMII_1_PE circuit.

3.13 NuMaker-N9H30 - USB Circuit
Figure 3-13 shows the USB circuit.

3.14 NuDesign-TFT-LCD7 - TFT-LCD7 Circuit
Figure 3-14 shows the TFT-LCD7 circuit.

4 REVISION HISTORY

Date         Revision   Description
2022.03.24   1.00       Initial version

Important Notice
Nuvoton products are neither intended nor warranted for usage in systems or equipment, any malfunction or failure of which may cause loss of human life, bodily injury or severe property damage. Such applications are deemed "Insecure Usage".
Insecure Usage includes, but is not limited to: equipment for surgical implementation, atomic energy control instruments, airplane or spaceship instruments, the control or operation of dynamic, brake or safety systems designed for vehicular use, traffic signal instruments, all types of safety devices, and other applications intended to support or sustain life.
All Insecure Usage shall be made at the customer's risk, and in the event that third parties lay claims to Nuvoton as a result of the customer's Insecure Usage, the customer shall indemnify the damages and liabilities thus incurred by Nuvoton.
Density Schemes

Introduction

In computer graphics and image processing, density is a key concept, frequently used to describe pixel density, line-segment density, sampling density, and so on. Designing an appropriate density scheme for each application scenario can effectively improve image clarity and quality. This article introduces the definition and classification of density and, drawing on practical application scenarios, proposes a density scheme suitable for different devices.
1. Definitions of Density

1.1 Pixel Density
Pixel density is usually measured as the number of pixels per unit length, commonly expressed as DPI (dots per inch) or PPI (pixels per inch). It describes the clarity of an image: the higher the pixel density, the sharper the image.
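To make the DPI/PPI definition above concrete, here is a small arithmetic sketch (the function name and the example panel are ours, not from the text):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by the diagonal size in inches."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

# A 1920x1080 panel with a 5.5-inch diagonal:
print(round(ppi(1920, 1080, 5.5)))  # 401
```

The same resolution on a larger diagonal yields a lower pixel density, which is why a desktop monitor looks coarser up close than a phone screen with the same pixel count.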
1.2 Line-Segment Density
Line-segment density is the number of pixels used to represent a line segment per unit length. When drawing curves or edges, a higher line-segment density improves the smoothness and fineness of the image.
1.3 Sampling Density
Sampling density is the frequency at which samples are taken when discretizing data. In signal processing and image reconstruction, the sampling density determines the fidelity of the signal and how much detail can be recovered.
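A quick way to see the fidelity point is aliasing: sampled below the Nyquist rate, a high-frequency sine becomes indistinguishable from a low-frequency one. A minimal sketch (our own illustration, not from the text):

```python
import math

def sample(freq_hz: float, fs_hz: float, n: int):
    """Sample sin(2*pi*freq*t) at sampling rate fs, returning n samples."""
    return [math.sin(2 * math.pi * freq_hz * k / fs_hz) for k in range(n)]

# A 9 Hz sine sampled at 10 Hz (below its 18 Hz Nyquist rate) collapses onto
# a sign-flipped 1 Hz sine sampled at the same rate: the samples coincide.
nine = sample(9.0, 10.0, 20)
one = sample(1.0, 10.0, 20)
assert all(abs(a + b) < 1e-9 for a, b in zip(nine, one))
```

Once two different signals produce identical samples, no reconstruction step can tell them apart, which is exactly the loss of fidelity the text describes.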
2. Classification of Density
Depending on the application scenario and the target of use, density falls into three categories: pixel density, line-segment density, and sampling density.
2.1 Pixel Density
Pixel density applies to display devices such as computer monitors and mobile-device screens. Different devices have different pixel densities; setting the pixel density appropriately keeps images at a consistent size and sharpness across devices. On mobile devices, where screen sizes and resolutions vary widely, pixel density is used to ensure that an application renders consistently across devices. When designing mobile applications, density-independent pixel units (dp or dip) can be used to abstract away per-device pixel-density differences and achieve adaptive layouts.
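The dp/dip mechanism mentioned above can be sketched as a conversion against a fixed baseline density (Android's convention, where 1 dp equals 1 physical pixel at 160 dpi; the helper names are ours):

```python
BASELINE_DPI = 160  # reference density: 1 dp == 1 px at 160 dpi

def dp_to_px(dp: float, screen_dpi: float) -> int:
    """Convert density-independent pixels to physical pixels for a given screen."""
    return round(dp * screen_dpi / BASELINE_DPI)

def px_to_dp(px: float, screen_dpi: float) -> float:
    """Convert physical pixels back to density-independent pixels."""
    return px * BASELINE_DPI / screen_dpi

# A 48 dp touch target is 48 px on a 160 dpi screen and 144 px on a 480 dpi one,
# so it occupies the same physical size on both:
print(dp_to_px(48, 160), dp_to_px(48, 480))  # 48 144
```

Laying out in dp and converting to pixels only at draw time is what makes the "自适应" (adaptive) layout work: the physical size of UI elements stays constant while the pixel count scales with the screen's density.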
2.2 Line-Segment Density
Line-segment density applies to drawing, graphics rendering, and related fields. When drawing smooth curves, a higher line-segment density reduces jagged edges and improves image quality. Applications that require high-precision output, such as printing and digital painting, also need to account for line-segment density to guarantee the visual quality of the image.
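One simple way to act on a line-segment density in practice is to choose the segment count when flattening a curve into a polyline from a target density in segments per unit of arc length. A sketch with hypothetical names (our own illustration):

```python
import math

def flatten_circle(radius: float, segments_per_unit: float):
    """Approximate a circle by a polyline whose vertex count follows a target
    line-segment density (segments per unit of circumference)."""
    circumference = 2 * math.pi * radius
    n = max(3, math.ceil(circumference * segments_per_unit))
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

# Higher density -> more vertices -> smoother outline:
coarse = flatten_circle(10.0, 0.2)  # 13 segments
fine = flatten_circle(10.0, 2.0)    # 126 segments
print(len(coarse), len(fine))
```

Tying the segment count to arc length (rather than using a fixed count) keeps the on-screen smoothness roughly constant as the shape is scaled, which is what high-precision output such as printing needs.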
2.3 Sampling Density
Sampling density applies to signal processing, image processing, and related fields. In digital signal processing, an adequate sampling density guarantees that the signal can be reconstructed accurately from its discrete form. In image processing, the sampling density determines how much detail is preserved and how faithfully color is reproduced.
3. Density Schemes for Different Devices
Based on the requirements of different devices and application scenarios, we propose the following schemes for handling density.
Raman Wafer LT Substrate Material Interfaces

The interface of a Raman wafer LT substrate material generally refers to the boundary between the substrate surface and the material grown on it. Because the material on a Raman wafer LT substrate is usually grown on the substrate by chemical vapor deposition, molecular beam epitaxy, or similar techniques, the substrate-material interface plays a crucial role in the performance and quality of the Raman wafer.

At the interface of a Raman wafer LT substrate material, problems such as stress, lattice mismatch, and defects are common. These problems cause lattice distortion in the material and the formation of structural impurities, degrading the optoelectronic performance of the Raman wafer. To address them, interface-control and stress-tuning techniques can be used to optimize the interface structure and performance of the wafer.

Interface-control techniques include substrate pretreatment, surface cleaning, and related methods, which improve the surface quality and interface structure of the Raman wafer LT substrate material. Stress-tuning techniques control the stress distribution in the material during substrate growth and relieve lattice mismatch, thereby reducing interface stress and defect formation.

Together, interface control and stress tuning improve the interface structure and quality of Raman wafer LT substrate materials, enhancing the optoelectronic performance and stability of the wafer. This is important for applications in optoelectronic devices, semiconductor devices, and related fields.
Shirt Customization Design Based on Deep Learning

Authors: JIA Qun-xi, ZHANG Wei-min, LUO Yi-ping
(Luoyang Institute of Science and Technology, Luoyang 471000, China)
Source: Computer Knowledge and Technology, 2021, No. 31
CLC number: TP18    Document code: A    Article ID: 1009-3044(2021)31-0109-03

Abstract: In order to better apply deep learning in shirt customization design and solve the problems of inaccurate human parameter identification, high price, and inconsistent standards in shirt customization, a design method of shirt customization based on deep learning is proposed. The DeepLab V3 image segmentation neural network is used to segment the human model and background on the basis of an independently established dataset. The fitting of human size is carried out by combining it with a human feature-point method. At the same time, the personalized customization of shirts is considered, and a real-life try-on test is carried out. The comprehensive score of the experimental results is above 6.0, indicating that the method performs well in the personalized customization of shirts and has certain application value.

Key words: shirt; customization; DeepLab V3; feature point extraction; body size fitting

At present, as living standards rise, consumers' demands for personalized and differentiated clothing are gradually increasing as well.
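The measurement step of the pipeline described above (segmentation mask to body sizes) can be illustrated in miniature. The sketch below is our own simplification, not the authors' code: given a binary person mask (as a segmentation network would output) and the subject's known height, it calibrates pixels per centimeter from the mask's vertical extent and reads off a width at a chosen feature row.

```python
def row_widths(mask):
    """Width (in pixels) of the foreground region on each row of a binary mask."""
    widths = []
    for row in mask:
        cols = [i for i, v in enumerate(row) if v]
        widths.append(cols[-1] - cols[0] + 1 if cols else 0)
    return widths

def estimate_width_cm(mask, person_height_cm, row_fraction):
    """Estimate body width at a fraction of body height (0.0 = top of the figure).

    Assumes the mask spans the full body, so the pixel scale can be
    calibrated from the subject's known height."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    top, bottom = rows[0], rows[-1]
    px_per_cm = (bottom - top + 1) / person_height_cm
    row = top + round(row_fraction * (bottom - top))
    return row_widths(mask)[row] / px_per_cm

# Toy 6x6 "mask" (not realistic proportions): 4 px wide at the top row, 2 px mid-body.
mask = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
print(round(estimate_width_cm(mask, person_height_cm=160.0, row_fraction=0.0), 1))
```

A real system would take the mask from the segmentation network, locate anatomical feature points (shoulders, waist, hips) rather than fixed row fractions, and fit measurements from several views.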
Chinese Journal of Liquid Crystals and Displays, Vol. 37, No. 4, Apr. 2022
Article ID: 1007-2780(2022)04-0467-08    doi: 10.37188/CJLCD.2021-0284
Received 2021-11-11; revised 2022-02-17.

Influence of SiNx film on off-angle color shift of white screen and reddish improvement

YE Cheng-zhi*, CAO Bin-bin, LYU Yan-ming, MA Li, AN Hui, PENG Jun-lin, FENG Yao-yao, YANG Zeng-qian, LI Fang-fang, LU Xiang-wan, LI Heng-bin
(Hefei Xinsheng Optoelectronics Technology Co., Ltd., Hefei 230001, China)
*Corresponding author, E-mail: yechengzhi@

Abstract: Research on the relationship of the SiNx film to the off-angle color shift on a white screen is carried out. The research in this paper fills the gap in research on the correlation between TFT film layers and the off-angle white color shift of TFT-LCDs, and provides a useful reference for the improvement of similar optical chromaticity defects in other products. Through experimental measurement and software simulation, the influence of the thickness of the silicon nitride film on the color-shift coordinates is explored. Experimental results show that the silicon nitride film layer is a key factor affecting both the amount and the direction of the color shift: the gate insulator (GI) and the first passivation insulator layer (PVX1) have a greater impact on the color shift, while the second passivation insulator layer (PVX2) has little impact, and the thickness of the SiNx is nonlinearly related to the off-angle color shift on a white screen. Taking an HADS product as an example, its SiNx film layers include the gate insulating layer (320 nm), the first passivation insulator layer (100 nm), and the second passivation insulator layer (200 nm). Based on single-factor experimental and simulation results, and comprehensively considering the electrical characteristics and production capacity, the reddish tint on a white screen at off angles is improved effectively when PVX1 = 140 nm is used as the condition: no reddish tint is observed macroscopically, and the measured data meet the product specifications.

Key words: thin film transistor liquid crystal display; SiNx film; optical chromaticity; off-angle color shift; white display
CLC numbers: TN141.9; O439

1 Introduction
Thin Film Transistor Liquid Crystal Display (TFT-LCD) technology is currently one of the mainstream display technologies in the flat-panel display market [1].
MCT SERIES
THICK FILM PRECISION CHIP RESISTORS, MILITARY GRADE

Precision performance to 0.1%, 50 ppm/°C!
RCD series MCT chip resistors are designed to meet applicable performance requirements of MIL-R-55342 [1]. The variety of sizes and termination materials allows wide design flexibility. Palladium-silver terminations are recommended for use with conductive epoxy, gold is recommended for wire bonding, and tin-plated nickel for soldering.

- Choice of termination materials: 100% tin-plated nickel barrier is standard (palladium silver, gold, or tin-lead solder is available)
- Wraparound termination is standard; single-sided termination is available (Option 'S')
- Wide resistance range: 0.1 Ω to 100 MΩ
- 100% Group A screening per MIL-R-55342 available
- Wide design flexibility for use with hybrid or SMD circuitry
- Custom sizes, high-surge, and untrimmed chips available
- Backside metalization available

[Package drawings: "Wraparound Termination (standard)" and "Single Face Termination (Opt. 'S')", with dimension callouts L, W, T, t.]

SPECIFICATIONS (dimensions in inches [mm]; L and W tolerance ±.008 [.2])

Type      Wattage**  Rating Voltage  Resistance Range  L            W            T (2)                 t (2)
MCT0402   50 mW      30 V            10 Ω - 1 MΩ       .039 [1.0]   .020 [.51]   .012±.005 [.3±.13]    .014±.006 [.35±.15]
MCT0505   100 mW     50 V            1 Ω - 10 MΩ       .050 [1.27]  .050 [1.27]  .014±.007 [.35±.18]   .018±.006 [.45±.15]
MCT0603   100 mW     40 V            0.1 Ω - 100 MΩ    .060 [1.53]  .030 [.76]   .012±.006 [.3±.15]    .018±.006 [.45±.15]
MCT0805   150 mW     70 V            0.1 Ω - 100 MΩ    .075 [1.91]  .050 [1.27]  .016±.008 [.4±.2]     .018±.006 [.50±.15]
MCT1005   200 mW     100 V           1 Ω - 20 MΩ       .100 [2.54]  .050 [1.27]  .018±.008 [.45±.2]    .020±.006 [.50±.15]
MCT1206   250 mW     125 V           0.1 Ω - 100 MΩ    .124 [3.15]  .061 [1.55]  .020±.008 [.5±.2]     .020±.006 [.50±.15]
MCT1505   250 mW     125 V           1 Ω - 10 MΩ       .150 [3.81]  .050 [1.27]  .020±.010 [.5±.25]    .020±.006 [.50±.15]
MCT1210   500 mW     150 V           0.1 Ω - 100 MΩ    .124 [3.15]  .100 [2.54]  .020±.010 [.5±.25]    .020±.006 [.50±.15]
MCT2010   800 mW     150 V           0.1 Ω - 100 MΩ    .196 [5.0]   .100 [2.54]  .020±.010 [.5±.25]    .020±.006 [.50±.15]
MCT2512   1000 mW    200 V           0.1 Ω - 100 MΩ    .248 [6.3]   .125 [3.18]  .020±.010 [.5±.25]    .020±.006 [.50±.15]

* Dimension given is for single-face termination. Add .004" [.1 mm] for wraparound termination.
** Wattage based on mounting to a ceramic substrate; derate by 25% if mounted to a glass-epoxy PCB.
1 Military p/n's are given for reference only and do not imply qualification or exact interchangeability.
2 Dimension "t" (underside termination width) applies only to termination type W.

TYPICAL PERFORMANCE CHARACTERISTICS (maximum ΔR)

Characteristic         10 Ω to 1 MΩ                                      <10 Ω, >1 MΩ
Temp. Coefficient      ±300 ppm (std)  ±100 ppm (opt.)  ±50 ppm (opt.)   ±400 ppm (std)
Thermal shock          0.5%            0.5%             0.5%             0.75%
Short-time overload    0.5%            0.25%            0.25%            1%
High-temp. exposure    0.5%            0.5%             0.5%             0.75%
Moisture resistance    0.5%            0.5%             0.5%             1.5%
Load life              1.0%            0.5%             0.5%             1.5%

[Derating graph: % of rated power vs. ambient temperature (°C); full rated power up to +70 °C.]

PART NUMBER DESIGNATION (example: MCT0805 - 1001 - F T 101 W):
- RCD type: e.g., MCT0805
- Resistance value: 4-digit code if tolerance is 0.1% to 1% (R100 = 0.1 Ω, 1R00 = 1 Ω, 10R0 = 10 Ω, 1000 = 100 Ω, 1001 = 1 kΩ, 1002 = 10 kΩ, 1003 = 100 kΩ, 1004 = 1 MΩ); 3-digit code if ≥2% (R10 = 0.1 Ω, 1R0 = 1 Ω, 100 = 10 Ω, 101 = 100 Ω, 102 = 1 kΩ, 103 = 10 kΩ, 104 = 100 kΩ, 105 = 1 MΩ)
- Tolerance code: B = 0.1%, C = 0.25%, D = 0.5%, F = 1%, G = 2%, J = 5%, K = 10%, M = 20%
- Packaging: B = bulk, T = tape & reel, W = waffle tray
- Optional TC characteristic: 50 = 50 ppm, 101 = 100 ppm, 201 = 200 ppm, etc. (leave blank if standard)
- Termination material: standard = W (lead-free 100% tin with nickel barrier); optional = Q (tin-lead solder with Ni barrier), P = untinned palladium silver, G = gold
- Options: S = single-sided termination (leave blank for wraparound termination)

FA045B Sale of this product is in accordance with GF-061. Specifications subject to change without notice.
RCD Components Inc, 520 E. Industrial Park Dr, Manchester, NH, USA 03109  Tel: 603-669-0054  Fax: 603-669-5455  Email: ***********************
Short Papers___________________________________________________________________________________________________Density-Based Multifeature Background Subtraction with Support Vector Machine Bohyung Han,Member,IEEE,andLarry S.Davis,Fellow,IEEEAbstract—Background modeling and subtraction is a natural technique for object detection in videos captured by a static camera,and also a critical preprocessing step in various high-level computer vision applications.However,there have not been many studies concerning useful features and binary segmentation algorithms for this problem.We propose a pixelwise background modeling and subtraction technique using multiple features,where generative and discriminative techniques are combined for classification.In our algorithm,color,gradient,and Haar-like features are integrated to handle spatio-temporal variations for each pixel.A pixelwise generative background model is obtained for each feature efficiently and effectively by Kernel Density Approximation(KDA).Background subtraction is performed in a discriminative manner using a Support Vector Machine(SVM)over background likelihood vectors for a set of features.The proposed algorithm is robust to shadow,illumination changes,spatial variations of background.We compare the performance of the algorithm with other density-based methods using several different feature combinations and modeling techniques,both quantitatively and qualitatively.Index Terms—Background modeling and subtraction,Haar-like features,support vector machine,kernel density approximation.Ç1I NTRODUCTIONT HE identification of regions of interest is typically the first step in many computer vision applications,including event detection, visual surveillance,and robotics.A general object detection algorithm may be desirable,but it is extremely difficult to properly handle unknown objects or objects with significant variations in color,shape,and texture.Therefore,many practical computer vision systems assume a fixed camera 
environment,which makes the object detection process much more straightforward;a back-ground model is trained with data obtained from empty scenes and foreground regions are identified using the dissimilarity between the trained model and new observations.This procedure is called background subtraction.Various background modeling and subtraction algorithms have been proposed[1],[2],[3],[4],[5]which are mostly focused on modeling methodologies,but potential visual features for effective modeling have received relatively little attention.The study of new features for background modeling may overcome or reduce the limitations of typically used features,and the combination of several heterogeneous features can improve performance,espe-cially when they are complementary and uncorrelated.There have been several studies for using texture for background modeling to handle spatial variations in the scenes;they employ filter responses,whose computation is typically very costly.Instead of complex filters,we select efficient Haar-like features[6]and gradient features to alleviate potential errors in background subtraction caused by shadow,illumination changes,and spatial and structural variations.Model-based approaches involving probability density function are common in background modeling and subtraction,and we employ Kernel Density Approximation(KDA)[3],[7],where a density function is represented with a compact weighted sum of Gaussians whose number,weights,means,and covariances are determined automatically based on mean-shift mode-finding algorithm.In our framework,each visual feature is modeled by KDA independently and every density function is1D.By utilizing the properties of the1D mean-shift mode-finding procedure,the KDA can be implemented efficiently because we need to compute the convergence locations for only a small subset of data.When the background is modeled with probability density functions,the probabilities of foreground and background pixels should be 
discriminative, but this is not always true. Specifically, the background probabilities between features may be inconsistent due to illumination changes, shadow, and foreground objects similar in features to the background. Also, some features are highly correlated, e.g., RGB color features. We therefore employ a Support Vector Machine (SVM) for nonlinear classification, which mitigates the inconsistency and correlation problems among features. The final classification between foreground and background is based on the outputs of the SVM.

There are three important aspects of our algorithm: integration of multiple features, efficient 1D density estimation by KDA, and foreground/background classification by SVM. These are coordinated tightly to improve background subtraction performance. An earlier version of this research appeared in [8]; the current paper includes a more comprehensive analysis of the feature sets and additional experiments.

2 PREVIOUS WORK

The main objective of background subtraction is to obtain an effective and efficient background model for foreground object detection. In the early years, simple statistics, such as frame differences and median filtering, were used to detect foreground objects. Some techniques utilized a combination of local statistics [9] or vector quantization [10] based on intensity or color at each pixel.

More advanced background modeling methods are density-based, where the background model for each pixel is defined by a probability density function based on the visual features observed at the pixel during a training period. Wren et al. [11] modeled the background in YUV color space using a Gaussian distribution for each pixel. However, this method cannot handle multimodal density functions, so it is not robust in dynamic environments. A mixture of Gaussians is another popular density-based method, designed to deal with multiple backgrounds.
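As a rough illustration of this family of methods, one online update step of an adaptive per-pixel Gaussian mixture can be sketched as follows. This is a simplified, hedged sketch in the spirit of the adaptive-mixture approaches cited above (scalar intensity, a fixed learning rate, and illustrative thresholds); it is not the exact update rule of any cited paper:

```python
import numpy as np

def gmm_update(mus, sigmas, weights, x, lr=0.05, match_thresh=2.5):
    """One online update of a per-pixel Gaussian mixture for a scalar
    observation x. Returns the updated parameters and whether x matched
    an existing (background) mode."""
    d = np.abs(x - mus) / sigmas              # normalized distance to each mode
    k = int(np.argmin(d))
    matched = bool(d[k] < match_thresh)
    weights = (1.0 - lr) * weights            # decay all component weights
    if matched:
        weights[k] += lr                      # reward the matched component
        mus[k] += lr * (x - mus[k])           # pull its mean toward x
        sigmas[k] = np.sqrt((1 - lr) * sigmas[k] ** 2 + lr * (x - mus[k]) ** 2)
    else:
        w = int(np.argmin(weights))           # replace the weakest component
        mus[w], sigmas[w], weights[w] = x, 10.0, lr
    weights /= weights.sum()                  # renormalize the mixture
    return mus, sigmas, weights, matched
```

A pixel is then labeled background when its observation matches one of the dominant modes; the fixed component count illustrates exactly the limitation discussed next.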
Three Gaussian components representing the road, shadow, and vehicle are employed to model the background in traffic scenes in [12]. An adaptive Gaussian mixture model is proposed in [2] and [13], where a maximum of K Gaussian components for each pixel is allowed but the number of Gaussians is determined dynamically. Also, variants of incremental EM algorithms have been employed to deal with the real-time adaptation constraints of background modeling [14], [15]. However, a major challenge in the mixture model is the absence or weakness of strategies to determine the number of components; it is also generally difficult to add or remove components to/from the mixture [16]. Recently, more elaborate and recursive update techniques have been discussed in [4] and [5]. However, none of the Gaussian mixture models have any principled way to determine the number of Gaussians. Therefore, most real-time applications rely on models with a fixed number of components or apply ad hoc strategies to adapt the number of mixtures over time [2], [4], [5], [13], [17].

________________________________________________________________________________
B. Han is with the Department of Computer Science and Engineering, POSTECH, Pohang 790-784, Korea. E-mail: bhhan@postech.ac.kr.
L.S. Davis is with the Department of Computer Science, University of Maryland, College Park, MD 20742. E-mail: lsd@.
Manuscript received 28 Sept. 2009; revised 19 Dec. 2010; accepted 27 Aug. 2011; published online 8 Dec. 2011.
Recommended for acceptance by G.D. Hager.
For information on obtaining reprints of this article, please send e-mail to: tpami@, and reference IEEECS Log Number TPAMI-2009-09-0649.
Digital Object Identifier no. 10.1109/TPAMI.2011.243.
0162-8828/12/$31.00 © 2012 IEEE. Published by the IEEE Computer Society.
________________________________________________________________________________

Kernel density estimation is a nonparametric density estimation technique that has been successfully applied to background subtraction [1], [18]. Although it is a powerful representation for general density functions, it requires many samples for accurate estimation of the underlying density functions and is computationally expensive, so it is not appropriate for real-time applications, especially when high-dimensional features are involved.

Most background subtraction algorithms are based on pixelwise processing, but multilayer approaches have also been introduced in [19] and [20], where background models are constructed at the pixel, region, and frame levels and information from each layer is combined for discriminating foreground and background. In [21], several modalities based on different features and algorithms are integrated and a Markov Random Field (MRF) is employed for the inference procedure. The co-occurrence of visual features within neighborhood pixels is used for robust background subtraction in dynamic scenes in [22].

Some research on background subtraction has focused more on features than on the algorithm itself. Various visual features may be used to model backgrounds, including intensity, color, gradient, motion, texture, and other general filter responses. Color and intensity are probably the most popular features for background modeling, but several attempts have been made to integrate other features to overcome their limitations. There are a few algorithms based on motion cues [18], [23]. Texture and gradients have also been successfully integrated for
background modeling [24], [25] since they are relatively invariant to local variations and illumination changes. In [26], spectral, spatial, and temporal features are combined, and background/foreground classification is performed based on the statistics of the most significant and frequent features. Recently, a feature selection technique was proposed for background subtraction [27].

3 BACKGROUND MODELING AND SUBTRACTION ALGORITHM

This section describes our background modeling and subtraction method based on the 1D KDA using multiple features. KDA is a flexible and compact density estimation technique [7], and we present a faster method to implement KDA for 1D data. For background subtraction, we employ the SVM, which takes a vector of probabilities obtained from multiple density functions as an input.

3.1 Multiple Feature Combination

The most popular features for background modeling and subtraction are probably pixelwise color (or intensity) since they are directly available from images and reasonably discriminative. Although it is natural to monitor color variations at each pixel for background modeling, they have several significant limitations:

- They are not invariant to illumination changes and shadows.
- Multidimensional color features are typically correlated, and joint probability modeling may not be advantageous in practice.
- They rely on local information only and cannot handle structural variations in neighborhoods.

We integrate color, gradient, and Haar-like features together to alleviate the disadvantages of pixelwise color modeling. The gradient features are more robust to illumination variations than color or intensity features and are able to model local statistics effectively, so gradient features are occasionally used in background modeling problems [26], [27]. The strength of Haar-like features lies in their simplicity and their ability to capture neighborhood information. Each Haar-like feature is weak by itself, but the collection of weak features has significant
classification power [6], [28]. The integration of these features is expected to improve the accuracy of background subtraction.

We have 11 features altogether: RGB color, two gradient features (horizontal and vertical), and six Haar-like features. The Haar-like features employed in our implementation are illustrated in Fig. 1. The Haar-like features are extracted from 9x9 rectangular regions at each location in the image, while the gradient features are extracted with 3x3 Sobel operators. The fourth and fifth Haar-like features are similar to the gradient features, but differ in filter design, especially scale.

Fig. 1. Haar-like features for our background modeling.

3.2 Background Modeling by KDA

The background probability of each pixel for each feature is modeled with a Gaussian mixture density function.(1) There are various methods to implement this idea, and we adopt KDA, where the density function for each pixel is represented with a compact and flexible mixture of Gaussians. KDA is a density approximation technique based on mixture models, where mode locations (local maxima) are detected automatically by the mean-shift algorithm and a single Gaussian component is assigned to each detected mode. The covariance for each Gaussian is computed by curvature fitting around the associated mode. More details about KDA are found in [7]; here we summarize how to build a density function by KDA for the 1D case.

(1) Histogram-based modeling is appropriate for 1D data. However, the construction of a histogram at each pixel requires significant memory, and quantization is not straightforward due to the wide range of Haar-like feature values. Also, a histogram is typically less stable than continuous density functions when only a small number of samples are available.

Suppose that the training data for a sequence are composed of n frames and x_{F,i} (i = 1, ..., n) is the ith value for feature F at a certain location. Note that the index for the pixel location is omitted for simplicity. For each feature, we first construct a 1D density function at each pixel by kernel density estimation based on a Gaussian kernel as follows:

$$ \hat f_F(x) = \frac{1}{\sqrt{2\pi}} \sum_{i=1}^{n} \frac{\kappa_{F,i}}{\sigma_{F,i}} \exp\!\left( -\frac{(x - x_{F,i})^2}{2\sigma_{F,i}^2} \right), \qquad (1) $$

where $\sigma_{F,i}$ and $\kappa_{F,i}$ are the bandwidth and weight of the ith kernel, respectively. As described above, KDA finds local maxima in the underlying density function (1), and a mode-based representation of the density is obtained by estimating all the parameters for a compact Gaussian mixture. The original density function is simplified by KDA as

$$ \tilde f_F(x) = \frac{1}{\sqrt{2\pi}} \sum_{i=1}^{m_F} \frac{\tilde\kappa_{F,i}}{\tilde\sigma_{F,i}} \exp\!\left( -\frac{(x - \tilde x_{F,i})^2}{2\tilde\sigma_{F,i}^2} \right), \qquad (2) $$

where $\tilde x_{F,i}$ is a convergence location of the mean-shift mode-finding procedure, and $\tilde\sigma_{F,i}$ and $\tilde\kappa_{F,i}$ are the estimated standard deviation and weight of the Gaussian assigned to each mode, respectively. Note that the number of components in (2), $m_F$, is generally much smaller than n.

One remaining issue is bandwidth selection (initialization of $\sigma_{F,i}$) to create the original density function in (1). Although there is no optimal solution for bandwidth selection in nonparametric density estimation, we initialize the kernel bandwidth based on the median absolute deviation over the temporal neighborhood of the corresponding pixel, which is almost identical to the method proposed in [1]. Specifically, the initial kernel bandwidth of the ith sample for feature F is given by

$$ \sigma_{F,i} = \max\!\left( \sigma_{\min},\ \operatorname{med}_{t = i - t_w, \dots, i + t_w} \left| x_{F,t} - x_{F,t-1} \right| \right), \qquad (3) $$

where $t_w$ is the temporal window size and $\sigma_{\min}$ is a predefined minimum kernel bandwidth to avoid too narrow a kernel. The motivation for this strategy is that multiple objects may be projected onto the same pixel, but feature values from the same object would be observed for a period of time; the absolute difference between two consecutive frames, therefore, does not jump too frequently.

Although KDA handles multimodal density functions for each feature, it is still not sufficient to handle long-term background variations. We must update background models periodically or incrementally, which is done by Sequential
Kernel Density Approximation (SKDA) [7]. The density function at time step t+1 is obtained as the weighted average of the density function at the previous time step and a new observation, which is given by

$$ \hat f_F^{t+1}(x) = (1 - r)\,\hat f_F^t(x) + r\,\frac{\tilde\kappa_{F,i}^{t+1}}{\sqrt{2\pi}\,\tilde\sigma_{F,i}^{t+1}} \exp\!\left( -\frac{\left(x - \tilde x_{F,i}^{t+1}\right)^2}{2\left(\tilde\sigma_{F,i}^{t+1}\right)^2} \right), \qquad (4) $$

where r is a learning rate (forgetting factor). To prevent the number of components from increasing indefinitely by this procedure, KDA is applied at each time step. The detailed technique and analysis are described in [7]. Since we are dealing with only 1D vectors, the sequential modeling is also much simpler and similar to the one introduced in Section 3.3; it is sufficient to compute the new convergence points of the mode locations immediately before and after the new data in the density function at each time step.(2) The computation time of SKDA depends principally on the number of iterations for the data points in (4), and can vary at each time step.

3.3 Optimization in One Dimension

We are only interested in 1D density functions, and the convergence of each sample point can be obtained much faster than in the general case. Given 1D samples $x_i$ and $x_j$ (i, j = 1, ..., n), the density function created by kernel density estimation has the following property:

$$ (x_i \le x_j \le \hat x_i) \ \lor\ (x_i \ge x_j \ge \hat x_i) \ \Rightarrow\ \hat x_i = \hat x_j, \qquad (5) $$

where $\hat x_i$ and $\hat x_j$ are the convergence locations of $x_i$ and $x_j$, respectively. In other words, all the samples located between a sample and its associated mode converge to the same mode location. So, every convergence location in the underlying density function can be found without running the actual mode-finding procedure for all the sample points, which is the most expensive part of the computation of KDA. This section describes a simple method to find all the convergence points efficiently by a single linear scan of the samples using the above property.

We sort the sample points in ascending order, and start performing mean-shift mode finding from the smallest sample.
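A minimal sketch of this sorted-scan idea follows. It is simplified relative to the full procedure: it exploits only the forward direction of property (5) and uses a single fixed bandwidth rather than the per-sample bandwidths of (1):

```python
import numpy as np

def mean_shift_1d(x, samples, bw, iters=200, tol=1e-6):
    """Iterate the 1D mean shift with a Gaussian kernel to a local mode."""
    for _ in range(iters):
        w = np.exp(-0.5 * ((samples - x) / bw) ** 2)
        x_new = float(np.dot(w, samples) / w.sum())
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

def modes_by_sorted_scan(samples, bw):
    """Assign every sample to a mode of its 1D KDE with one left-to-right
    scan: by property (5), a sample lying between an earlier sample and
    that sample's mode inherits the mode without running mean shift."""
    samples = np.sort(np.asarray(samples, dtype=float))
    modes = np.empty_like(samples)
    last_mode = -np.inf
    for i, x in enumerate(samples):
        if x <= last_mode:
            modes[i] = last_mode          # shortcut: shares the previous mode
        else:
            last_mode = mean_shift_1d(x, samples, bw)
            modes[i] = last_mode
    return samples, modes
```

With per-sample weights and bandwidths, the kernel sum inside `mean_shift_1d` changes, but the scan logic stays the same.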
When the current sample moves in the gradient-ascent direction by the mean-shift algorithm in the underlying density function and passes another sample's location during the iterative procedure, we note that the convergence point of the current sample must be the same as the convergence location of the sample just passed, terminate the current sample's mean-shift process, and move on to the next smallest sample, where we begin the mean-shift process again.(3) If a mode is found during the mean-shift iterations, its location is stored and the next sample is considered. After finishing the scan of all samples, each sample is associated with a mode and the mode-finding procedure is complete. This strategy is simple, but it improves the speed of mean-shift mode finding significantly, especially when many samples are involved. Fig. 2 illustrates the impact of the 1D optimization on the mode-finding procedure, where a huge difference in computation time between the two algorithms is evident.

3.4 Foreground and Background Classification

After background modeling, each pixel is associated with k 1D Gaussian mixtures, where k is the number of features integrated.
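Evaluating these k per-pixel mixtures at a new observation yields the background probability vector used for classification. A minimal sketch (the feature names and model layout here are illustrative, not the paper's implementation):

```python
import math

def mixture_density(x, weights, means, sigmas):
    """Evaluate a compact 1D Gaussian mixture, in the form of (2),
    at a scalar feature value x."""
    return sum(w / (math.sqrt(2.0 * math.pi) * s)
               * math.exp(-(x - m) ** 2 / (2.0 * s ** 2))
               for w, m, s in zip(weights, means, sigmas))

def background_probability_vector(pixel_features, pixel_models):
    """Stack the per-feature background likelihoods of one pixel into
    the k-dimensional vector fed to the classifier. pixel_models[f] is
    the (weights, means, sigmas) triple of the mixture for feature f."""
    return [mixture_density(x, *pixel_models[f])
            for f, x in sorted(pixel_features.items())]
```

Each pixel thus contributes one k-dimensional likelihood vector per frame, regardless of how many mixture components each feature's model holds.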
Background/foreground classification for a new frame is performed using these distributions. The background probability of a feature value is computed by (2), and k probability values are obtained for each pixel, which are represented by a k-dimensional vector. Such k-dimensional vectors are collected from annotated foreground and background pixels, and we denote them by $y_j$ (j = 1, ..., N), where N is the number of data points.

In most density-based background subtraction algorithms, the probabilities associated with each pixel are combined in a straightforward way, either by computing the average probability or by voting for the classification. However, such simple methods may not work well in many real-world situations due to feature dependency and nonlinearity. For example, pixels in shadow may have a low background probability in color modeling unless shadows are explicitly modeled as transformations of color variables, but a high background probability in texture modeling. Also, the foreground color of a pixel can look similar to the corresponding background model, which makes the background probability high although the texture probability is probably low.
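This motivates training a discriminative classifier on the likelihood vectors rather than combining them by averaging or voting. A toy sketch of that stage, using scikit-learn's RBF-kernel SVC as a stand-in and synthetic likelihood vectors in place of the annotated training data (labels: 1 = background, 0 = foreground):

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for the annotated training set: each row is a k-dimensional
# vector of per-feature background likelihoods.
rng = np.random.default_rng(0)
k = 11
bg = rng.uniform(0.5, 1.0, size=(300, k))   # background pixels: high likelihoods
fg = rng.uniform(0.0, 0.5, size=(300, k))   # foreground pixels: low likelihoods
X = np.vstack([bg, fg])
y = np.array([1] * 300 + [0] * 300)

# One universal RBF-kernel SVM over likelihood vectors, applied to every pixel.
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)

new_pixels = np.array([[0.9] * k, [0.1] * k])
pred = clf.predict(new_pixels)
```

Because the classifier sees probability vectors rather than raw feature values, the same trained model can be reused across pixels and sequences, as the text argues below.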
Such inconsistency among features is aggravated when many features are integrated and the data are high dimensional, so we train a classifier over the background probability vectors for the feature set, {y_j}_{1:N}. Another advantage of integrating a classifier for foreground/background segmentation is that it selects discriminative features and reduces the feature dependency problem; otherwise, highly correlated nondiscriminative features may dominate the classification process regardless of the states of other features.

An SVM is employed for the classification of background and foreground. We used a radial basis function kernel to handle nonlinear input data, and used approximately 40K data points (10K for foreground and 30K for background) for training. Note that a single universal SVM is used in our method for all sequences, not a separate SVM for each pixel or for each sequence; this is possible because we learn the classifier based on probability vectors rather than feature vectors directly. A similar approach is introduced in [29], where the magnitude of optical flow and the interframe image difference are used as features for classification. However, these 2D features have limitations in handling dynamic and multimodal scenes properly, e.g., a stationary foreground with a moving background. Other background subtraction methods using SVMs require per-pixel (or per-block) classifiers and separate training for each sequence [30], [31], since they apply the SVM to the original feature space; they therefore need many frames for training and significant memory space to store many SVM classifiers.

Fig. 2. Computation time comparison between the original KDA and KDA with 1D optimization.

(2) Strictly speaking, it may sometimes be necessary to compute the convergences at more locations due to cascaded merges.
(3) This can be implemented using a reference variable. If a sample is moving backward, its convergence location is the same as the previous one. If a sample is moving forward, its convergence location can be set when the next mode is found.

Fig. 3 summarizes the training procedure for background subtraction: the density-based background model is constructed for each feature, and the SVM is trained using background probability vectors and binary labels annotated from validation frames. For testing, a background probability vector for each pixel is computed based on the modeled backgrounds, and the trained SVM is applied to the vector for classification. We emphasize that the sequences used for training are separate from the testing sequences and that the SVM does not need to be retrained to be applied to a new input sequence.

4 EXPERIMENTS

We present the performance of our background modeling and subtraction algorithm using real videos. Each sequence involves challenges such as significant pixelwise noise (subway), the dynamic background of a water fountain (fountain), and reflections and shadow over a wide area (caviar [32]).

4.1 Feature Analysis

We describe the characteristics of individual features and the performance of multiple feature integration. Fig. 4 illustrates the correlation between every pair of features. RGB colors and three Haar-like features are significantly correlated, and the fourth and fifth Haar-like features have nontrivial correlation with the vertical and horizontal gradient features, respectively, as one would expect. However, note that the correlations between the background likelihoods of even highly correlated features are not so strong, which explains why there are still benefits to integrating highly correlated features.

Fig. 3. Diagram of the training procedure. Note that we use pixelwise density functions by KDA but train a single common SVM for all pixels and all sequences.

Fig. 4. Correlations between features. The upper triangular matrix illustrates the correlation coefficients between feature values, while the lower triangular matrix represents the correlation coefficients between the background likelihoods of the features.

Fig. 5. Feature performance for classification. The histograms of background probabilities for foreground and background pixels are presented for each feature. Note that, in the ideal case, foreground pixels should have low background probabilities only and background pixels should have high background probabilities only.

Fig. 6. Improvement of discriminativeness by adding heterogeneous features (gradient or Haar-like features). Note that the combination of R and G may not improve classification performance compared with R only (see also Fig. 5a). However, when the vertical gradient or the first Haar-like feature is combined with the R feature, the characteristics of foreground pixels become quite different from those of background pixels.

Fig. 7. PR curves for different density estimations.

Fig. 5 illustrates the discriminativeness of the features for foreground/background classification. The histograms of background probabilities for foreground and background pixels are presented for three different features: a representative feature each for color, gradient, and Haar-like features. According to Fig. 5, the color feature has substantially poorer quality compared with the other two features; the background probabilities of background pixels vary significantly, which makes it difficult to classify them correctly.
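The contrast summarized in Fig. 4 between strongly correlated color channels and a largely independent texture response can be reproduced in miniature with synthetic per-pixel time series (all names and numbers below are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
R = rng.normal(120.0, 15.0, n)        # red channel over n frames at one pixel
G = R + rng.normal(0.0, 3.0, n)       # green channel, nearly a copy of R
tex = rng.normal(0.0, 1.0, n)         # texture response, independent of color

corr = np.corrcoef(np.vstack([R, G, tex]))  # 3x3 correlation matrix
```

The R-G entry comes out near 1 while the color-texture entries stay near 0, mirroring the upper triangle of Fig. 4.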
Also, the background probabilities of the gradient features for foreground pixels are widespread.

The combination of heterogeneous features improves background/foreground classification performance. Recall that the R feature has mediocre quality for background subtraction. The inclusion of the G feature does not help much (Fig. 6a) because the background probabilities of pixels in these two feature bands are highly correlated. However, this problem is alleviated when gradient or Haar-like features are integrated; foreground and background pixels are then more separable, so classification is more straightforward (Figs. 6b and 6c).

4.2 Evaluation of Background Subtraction

The performance of our background subtraction algorithm is evaluated in various ways. For quantitative evaluation, Precision-Recall (PR) curves are employed, where the precision and recall are defined as

$$ \text{precision} = \frac{n(B_T \cap B)}{n(B_T)}, \qquad \text{recall} = \frac{n(F_T \cap F)}{n(F_T)}, $$

respectively, where $F_T$ and $B_T$ are the ground-truth sets of foreground and background pixels, and F and B denote the sets of foreground and background pixels obtained by a background subtraction algorithm.

We first compared our algorithm with other density-based techniques such as the Gaussian Mixture Model (GMM) [5] and Kernel Density Estimation (KDE) [1] based on the RGB color feature only. For the GMM method, we downloaded the code for [5] and tested two versions, with and without shadow detection. In our experiments, our algorithm with SVM (KDA+RGB+Grad+Haar) is noticeably better than KDE and GMM with/without shadow detection, as shown in Fig. 7. Additionally, we implemented KDE with all 11 features and SVM (KDE+RGB+Grad+Haar) and obtained accuracy comparable to our technique, which suggests the performance of KDA with respect to KDE.(4) Note that the range of 0.9 to 0.95 in both precision and recall is critical to actual performance in background subtraction.

The performance of several different feature sets is compared and illustrated in Fig. 8. Our algorithm with all three feature types (RGB+Grad+Haar) has the best performance for
all three sequences. RGB+Grad shows better results than RGB only, and the performance of RGB+Haar is comparable to RGB+Grad+Haar; the integration of the gradient features is not very useful due to their correlation and similarity with the related Haar-like features. However, the combination of the correlated features still improves the classification performance nontrivially in the subway and fountain sequences, since the SVM implicitly plays a role in selecting the proper features. We also tested our algorithm using four independent Haar-like features based on the result in Fig. 4, where the RGB features are excluded since they are significantly correlated with some of the selected Haar-like features. The accuracy based on the independent Haar-like features is significantly lower than that of the full feature combination.

Fig. 8. PR curves of our algorithms with several different feature sets.

Fig. 9. PR curves for testing the contribution of the SVM.

(4) The performances of KDA and GMM are similar in RGB space with almost identical implementations of both algorithms except for density estimation, which is shown by the pink line in Fig. 7 and the red line in Fig. 9b.
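A minimal sketch of the precision/recall computation of Section 4.2 from boolean foreground masks, assuming the denominators are the sizes of the ground-truth sets:

```python
import numpy as np

def precision_recall(pred_fg, gt_fg):
    """Precision and recall per the Section 4.2 definitions, computed
    from boolean foreground masks (True = foreground)."""
    F, F_T = pred_fg.astype(bool), gt_fg.astype(bool)
    B, B_T = ~F, ~F_T
    precision = np.logical_and(B_T, B).sum() / B_T.sum()
    recall = np.logical_and(F_T, F).sum() / F_T.sum()
    return float(precision), float(recall)
```

Sweeping the classifier's decision threshold and recording one (precision, recall) pair per setting traces out a PR curve like those in Figs. 7, 8, and 9.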