A grayscale reader for camera images of Xerox DataGlyphs
- Format: PDF
- Size: 556.58 KB
- Pages: 10
HALCON gamma_image Function Explanation

Function Overview: The gamma_image function is a powerful tool in the HALCON software library. It allows users to apply gamma correction to an image. Gamma correction is a technique used to alter the brightness levels of an image in a non-linear fashion, and the function provides several parameters that can be adjusted to achieve the desired effect.

Step 1: Understanding Gamma Correction
Gamma correction modifies the brightness levels of an image to match the characteristics of a display device. It is needed because human perception of brightness is not linear; our eyes perceive changes in brightness roughly logarithmically. Gamma correction compensates for the non-linear response of the display to produce a more visually pleasing image.

Step 2: Analyzing the Function Parameters
The gamma_image function has several parameters that should be understood before using it effectively: the input image, the gamma factor, the minimum intensity, and the maximum intensity.
- Input Image: The image to which gamma correction will be applied. It can be a grayscale or color image, held in memory or loaded from a file.
- Gamma Factor: The most important parameter. It determines the amount of gamma correction applied to the image: a value less than 1 reduces the brightness, while a value greater than 1 increases it. The gamma factor directly influences the overall contrast of the image.
- Minimum Intensity: Sets the minimum intensity level of the output image. Any pixel with an intensity below this value is adjusted to the minimum, which can be useful for excluding dark areas from gamma correction.
- Maximum Intensity: Sets the maximum intensity level of the output image. Any pixel with an intensity above this value is adjusted to the maximum, which can be useful for excluding bright areas from gamma correction.

Step 3: Applying Gamma Correction
To use the gamma_image function, follow these steps (see the code sketch after this article):
1. Load an input image, using the appropriate HALCON operator to read an image from file or capture it from a camera.
2. Set the desired gamma factor. Decide whether you want to increase or decrease the image's brightness, and experiment with different gamma values to achieve the desired effect.
3. Adjust the minimum and maximum intensity values if needed, to set the range of intensities excluded from gamma correction. This helps retain detail in the darkest and brightest regions of the image.
4. Call the gamma_image function, passing the input image, gamma factor, minimum intensity, and maximum intensity as parameters. It performs the gamma correction and produces the output image.
5. Display or save the output image, using the appropriate HALCON operator to show the adjusted image on screen for visual inspection or to write it to disk for further analysis and processing.

Step 4: Additional Considerations
Keep the following points in mind while using the gamma_image function:
- Gamma correction can affect image detail: depending on the chosen gamma value, detail in dark or bright regions may be lost or exaggerated. Aim for a value that preserves the important details.
- Image quality: applying gamma correction can introduce artifacts or lose information. It is advisable to work with high-quality images and to consider post-processing techniques that alleviate these issues.
- Performance: the gamma_image function can be computationally intensive for large images. Consider optimized programming techniques or applying the function to smaller image patches to improve processing speed.

In conclusion, the gamma_image function in HALCON is a versatile tool for adjusting the gamma of images. By understanding the function parameters and following the step-by-step process above, users can apply gamma correction effectively to enhance image brightness and contrast, and to adapt images to display devices with non-linear brightness characteristics.
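The mapping described above can also be sketched outside of HALCON. The following is a minimal NumPy illustration of gamma correction with intensity clamping, assuming an 8-bit image; it is not the HALCON operator itself, and the function and parameter names are chosen only for illustration. It follows the article's convention that a gamma factor greater than 1 brightens the image.

```python
import numpy as np

def gamma_correct(image, gamma, min_intensity=0, max_intensity=255):
    """Apply gamma correction to an 8-bit image and clamp the
    output to [min_intensity, max_intensity]."""
    # Normalize to [0, 1], apply the power-law mapping, rescale to 8 bits.
    normalized = image.astype(np.float64) / 255.0
    corrected = np.power(normalized, 1.0 / gamma) * 255.0
    # Clamp to the requested intensity range and return an 8-bit image.
    return np.clip(corrected, min_intensity, max_intensity).astype(np.uint8)

# Example: brighten a synthetic gradient with gamma = 1.5.
ramp = np.tile(np.arange(256, dtype=np.uint8), (32, 1))
brightened = gamma_correct(ramp, gamma=1.5)
```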
DS-D4212FI-108H (All-in-one) LED Multi-Media Display

⏹ The LED multi-media display is a standardized display product featuring a self-illuminating display, comprehensive information release, remote management and control, miracast, and more. With its fashionable and simple design, the terminal performs well in many respects. It offers a wide view without light reflection, even in strong light. The integrated management system lets you remotely control terminals, manage programs, and perform miracast from a PC or mobile phone; operation through a remote control is also supported. The LED multi-media display is equipped with Hikvision's professional image processing technology, including Pix Code (an algorithm for optimizing the details of low-grayscale images), Pix Master (an algorithm for image enhancement), Color Master (an algorithm for color fidelity), HDR display, 3D display, and other research and development achievements. With clear and vivid images, the device suits scenes requiring a 100 to 200 inch wide view, such as large-scale shopping malls, business buildings, financial halls, airports, high-speed railway stations, government halls, news release halls, and exhibition halls.
⏹ Appearance: Ultra-thin bezel and cabinet, with all modules fast-assembled into a whole display. The thickness of a wall-mounted display is less than 5 cm.
⏹ Integrated Design: A sliding button operation box lets you start up and shut down, switch signal sources, and adjust brightness and volume from the front panel.
⏹ Built-in Speaker: A dual-track audio amplification system supports playing audio and video simultaneously and playing audio through an external speaker.
⏹ Built-in Bluetooth: Supports Bluetooth and playing audio through a Bluetooth speaker.
⏹ Wireless Screen Mirroring: Supports wireless screen mirroring from PC, Android, and iOS devices, up to 4 devices at the same time.
⏹ Convenient Control: Supports remote control; you can start up and shut down the device, adjust volume, and more via the remote control.
⏹ Dual Systems: Built-in Windows and Android operating systems, together with a USB interface, cater to user preferences.
⏹ Scene Preset: Scene modes include Standard, Soft, Cinema, and Conference, with one-key switching of scene modes.
⏹ Color Processing: Empowered by self-developed color management technology and ultra-grayscale image control technology, it presents outstanding dynamic colors and delicate images with soft color.
⏹ Energy Saving: An energy-efficient design with low power usage keeps the standby consumption of the display as low as 0.5 W.
⏹ Sufficient Protection: The surface of the lamp board adopts HCOB grouting protection technology and is waterproof, dustproof, and anti-bumping.
The display is suitable for use at close range, and the image presented is soft.
⏹ Application Scenarios: Classrooms, offices, small and medium-sized government smart centers, exhibition halls, commercial chain retail, etc.

Specification
Model: DS-D4212FI-108H (All-in-one)
Product Model: DS-D4212FI-108H

Cabinet
- Pixel Configuration: SMD Triad LED
- Pixel Pitch Category: P1.2
- Pixel Pitch: 1.25 mm
- Pixel Density: 640000 dots/m²
- Maintenance Method: Front maintenance
- Power Specification: 4.2 V/40 A (cabinet power supply), 19 V/8 A (control power supply)

Display
- White Balance Brightness: 600 cd/m²
- Viewing Angle: 178° (H)/178° (V)
- Bezel Width: 4 mm
- Contrast Ratio: 6000:1
- Surface Treatment: HOB dispensing technology and surface sealing with strong protection
- Display Ratio: 16:9
- Optimal View Distance: 3 m

Built-in System
- Operating System: Android 8.0.0 / Windows (i7)
- Processor: Android: max. 1.5 GHz, Cortex-A73 × 2 + A53 × 2; Windows: max. 1.60 GHz, max. turbo 3.40 GHz
- Memory: Android: 3 GB; Windows: 4 GB
- Built-in Storage: Android: 16 GB; Windows: 256 GB

Interface
- Video & Audio Input: HDMI × 1
- Video & Audio Output: LINE OUT × 1, SPDIF × 1, 15 W speaker × 2
- Data Transmission Interface: USB 3.0 × 4
- Control Interface: Debugging network interface × 1, wired internet compatible

Processing Performance
- Frame Frequency: 30 Hz/60 Hz
- Refresh Rate: 1920 Hz
- Grey Level: 16 bit
- Display Color: 16777216
- Processing Depth: 8 bit

Power
- Standby Consumption: ≤ 0.5 W
- Max. Consumption: ≤ 1500 W
- Average Consumption: ≤ 500 W
- Input Voltage: 200 to 240 VAC

Working Environment
- Working Temperature: -10 to 40 °C (14 to 104 °F)
- Working Humidity: 20% to 80% RH (non-condensing)
- Storage Temperature: -10 to 50 °C (14 to 122 °F)
- Storage Humidity: 10% to 90% RH (non-condensing)

General
- Mounting Type: Wall mounting and mobile bracket mounting
- Packing Method: Wooden case package
- Net Weight: 85 kg (187.4 lb.)
- Packing List: Backup lamp board × 1, backup backboard × 1, quick start guide × 1, accessories (lamps, power cords, network cables, screws, etc.)

Hardware
- Dimensions (W × H × D): 2400 × 1350 × 25.1 mm (94.5 × 53.1 × 1.0 inch)
- Screen Area: 3.24 m²
- Screen Resolution: 1920 × 1080
- Material: Aluminum alloy

Panel
- Indicator: Red: the device is in standby. Blue: the device is powered on.
- Button: On/Off/Sleep button × 1

⏹ Physical Interface
1: 200 to 240 VAC Power Input
2: AC Power Switch
3: SPDIF Audio Output
4: Audio Output
5: HDMI Digital Signal Input
6/8/9: USB 3.0 Interface
7: USB-B 3.0 Interface
10: Network Interface
11: Increase Brightness
12: Decrease Brightness
13: Increase Volume
14: Decrease Volume
15: Switch Signals

⏹ Available Model: DS-D4212FI-108H (All-in-one)
⏹ Dimension
⏹ Accessory
⏹ Optional: DS-DLAIO108-LM, DS-DLAIO108-WM
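As a quick sanity check, the geometry figures in the specification above are mutually consistent: the pixel pitch, cabinet dimensions, screen resolution, pixel density, and screen area all agree. A few lines of Python (illustrative only, using the values quoted above) confirm the arithmetic.

```python
# Values taken from the specification above.
pitch_mm = 1.25
width_mm, height_mm = 2400, 1350

print(width_mm / pitch_mm, height_mm / pitch_mm)   # 1920.0 1080.0 -> screen resolution
print((width_mm / 1000) * (height_mm / 1000))      # 3.24 -> screen area in m^2
print(int((1000 / pitch_mm) ** 2))                 # 640000 -> pixel density in dots/m^2
```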
Spyder Checkr USER GUIDE
Spyder Checkr / Spyder Checkr 24 / Spyder Checkr Photo

Contents: Introduction; Challenges and Solutions; What's Included and Operating Requirements; Targets; Patches; Getting Started; Setup Hardware; Adjust the Target Shot; Save the Target and Apply; Creating Multiple Light Source Calibrations; In-Camera Color Balancing; Changing Between Spyder Checkr Targets

Introduction
Spyder Checkr provides a fast, reliable method of color calibrating camera, lens, and sensor combinations. It also facilitates harmonizing color between different cameras. It allows photographers and videographers to obtain more consistent, accurate color within their normal post-production workflow in common editing software.

Challenges and Solutions
Every combination of lens, camera, and sensor has a unique color signature; this may change in different lighting conditions. And, of course, these devices don't perceive or record color the way the human eye does.
Color control and consistency require a reference tool to help the user adapt to these variations. Adding color management at the capture stage of the digital workflow assures consistency and accuracy from day to day as well as from camera to camera.
Spyder Checkr enables the user to create custom camera calibrations that compensate for the characteristics of the optics and sensors, producing more accurate color reproduction in edited images. The workflow is simple: photograph the Spyder Checkr, import the image into a supported image editor for basic adjustments, open the image in the Spyder Checkr software, and export the preset. You can apply this preset during image import or editing.
The Spyder Checkr products work with easy-to-use calibration software that makes post-production quicker by getting consistent, predictable color right from the start.

What's Included and Operating Requirements
Spyder Checkr
• Spyder Checkr 48 Target Color Cards and Gray Cards
• Spyder Checkr Case
• Download link and Software Serial Number
Spyder Checkr 24
• Spyder Checkr 24 Target Color Card and Gray Card
• Spyder Checkr 24 Sleeve
• Download link and Software Serial Number
Spyder Checkr Photo
• 4 Interchangeable Target Color Cards
• Spyder Checkr Photo Case
• Lanyard
• Download link and Software Serial Number
Supported Languages: English, Spanish, French, German, Italian, Russian, Traditional Chinese, Simplified Chinese, Korean, Japanese
Operating Requirements:
Windows 7 32/64, Windows 8.0, 8.1 32/64, Windows 10 32/64
Mac OS X 10.7 or higher
Monitor resolution 1280x768 or greater
16-bit video card (24-bit recommended)
1 GB of available RAM
500 MB of available hard disk space
Internet access for product activation

Targets
Spyder Checkr (SCK100)
The 48-patch Spyder Checkr target closes for safe storage, opens like a book, and stays securely in its fully open position. Each half of the Spyder Checkr has a frame that holds a color chip chart in place. You can open these frames and invert the color target sheets to display their gray face. This exposes the Spyder Checkr Gray Target for visual comparisons, or for tasks like In-Camera Custom White Balance.
The Spyder Checkr can be used for visual color comparisons, as well as with a number of third-party software packages for camera profiling and other tasks. But most commonly, it is used with the Spyder Checkr software in creating camera calibrations.
The FadeCheckr patch is provided to assist users in determining how much light exposure their Spyder Checkr target has had.
It is designed to fade from red to yellow after the equivalent of 30 days of full summer sun. This degree of exposure is sufficient to justify replacing your Spyder Checkr Target Sheets. New sets are available for purchase from the Datacolor website.

Spyder Checkr 24 (SCK200)
The Spyder Checkr 24 has 24 patches on one side for color calibration, and a gray face target on the other for visual comparisons or tasks like in-camera custom white balance. The target slips easily into the provided sleeve for protection from the elements when not in use.
The Spyder Checkr 24 can be used for visual color comparisons, as well as with a number of third-party software packages for camera profiling and other tasks. But most commonly, it is used with the Spyder Checkr software in creating camera calibrations.

Spyder Checkr Photo (SCK300/SCK310)
The Spyder Checkr Photo case closes for safe target storage, opens by pressing the button opposite the spine to reveal pages holding 4 targets, and stays securely open in multiple positions. Each page has one large and one small mounting tab to hold the target in place. You can remove the cards and replace or rearrange them as needed.
To remove a card, lift its right corner from the small mounting tab side and twist to the left to release it safely from the mount.
To insert a card, tuck its large indent into the raised large mounting tab, then slide the opposite end under the raised small mounting tab.
The Spyder Checkr Photo can be used for visual color comparisons, as well as with a number of third-party software packages for camera profiling and other tasks. But most commonly, it is used with the Spyder Checkr software in creating camera calibrations. Additionally, the gray targets can be used for tasks like In-Camera Custom White Balance.

Patches
A Spyder Checkr used moderately and stored correctly will last approximately 2 years. Replacement cards are available for purchase from the Datacolor website for circumstances involving heavy use or in cases where the patches become scratched, worn, or soiled over time. Please avoid touching the patches of your Spyder Checkr, as oils from the skin can affect the patch color and texture.
The color patches on the Spyder Checkr 24, and on the right half of the Spyder Checkr and Spyder Checkr Photo, represent the standard 24 colors used in a variety of color products. These patches are near or within the sRGB color gamut to avoid gamut clipping and to assure usability with a wide range of capture, display, and output devices.
Note: the order in which various software programs read these standard 24 colors varies, but the Spyder Checkr provides them in a serpentine pattern, such that reading down one column and up the next will produce the usual measurement order.
The color patches in the left half of the Spyder Checkr and Spyder Checkr Photo represent additional colors targeting several uses. There are six additional skin tones for a total of eight. There are six medium-saturation color patches, in the red, green, blue, cyan, magenta, and yellow zones, to improve coverage of the inside of the color gamut.
There are three near-white tints and three near-black tones, for checking color tints and tones at both ends of the dynamic range.
And, the gray ramp has been increased from the standard 20% steps, to 10% steps, plus extra samples at 5% and 95% to provide a more detailed gray ramp.Spyder CheckrPlease note the reference numbers across the left side, and the reference letters across the top of the SpyderCHECKR frame.Color Patches•Med Saturation, RGBCM&Y: A1-A6•Near White Tints: B1-B3•Near Black Tones: B4-B6•Skin Tones: C1-C5•Full saturation patches: Columns F, G, and HGray Patches•Grays proceed from white (E1) down to black (E6) in 20% steps• A zigzag path across columns E and D (not including D1) will cover the 10% gray ramp in order•An additional 95% patch is located at D1•An additional 5% patch is located at C6•The same 10% ramp patches are repeated on the back of the chart, with a large 50% gray patch, for uses requiring neutrals-only to be visible.Spyder Checkr 24•Grays proceed from white down to black in 20% steps•Full saturation•The same 20% ramp patches are repeated on the back of the chart, with a large 50% gray patch, for uses requiring neutrals-only to be visibleSpyder Checkr PhotoColor Patches•Med Saturation RGBCM&Y• 3 Near White Tints• 3 Near Black Tones• 5 Skin Tones•Full saturationGray Patches•Grayscale on full saturation card proceed from white down to black in 20% steps•Grayscale on additional color card include 95% patch and supplemental patches to create 10% gray ramp in order with other grayscale column •An additional 5% patch•Large 50% gray patch with 20% step grayscale for uses requiring neutrals-only to be visible•Large 18% gray patch with 20% step grayscale for uses requiring neutrals-only to be visibleGetting StartedDownload and Install SoftwareDownload and install the Spyder Checkr software for your platform from the Datacolor website. Before starting, make sure that you are logged onto your computer as an administrator. Run the installer. Once this is completed, run the Spyder Checkr application.Serialization and ActivationYour Spyder Checkr comes with a unique serial number in the packaging for software activation. Our software uses a web-based activation process so it’s simplest to install and activate our software from an Internet connected computer.Software UpdatesThe software update option in the Spyder Checkr application (accessible in “Preferences”) is on by default. When Datacolor posts a new build of the software it will tell you on next launch that there is an update available and offer to take you to the Datacolor website to download the latest version.Setup HardwareMounting OptionsThe Spyder Checkr, Spyder Checkr 24, and Spyder Checkr Photo can be held or placed on a table or shelf.The Spyder Checkr and Spyder Checkr Photo can stand upright in stable locations.The Spyder Checkr has a standard tripod mount (1/4 inch 20 UNC thread) on the base for mounting at any height or angle desired. There is also a standard tripod stud at the top of the Spyder Checkr spine to mount other objects like a Spyder Cube.LightingLight the targets from a 45- degree angle. The ideal manner is to use a single light source from an extended distance with no reflector or diffuser. Generally, you would want the sweet spot in the center of the light field to completely overlap the edges of the target. This assures that all portions of the target will have the same amount and color of light. This reduces light fall-off and color variation across the width of a target.CapturingDo not fill the viewfinder frame with the target. The sweet spot of the lens is near the center and away from the corners. 
It’s best to shoot the target with a generous amount of border and crop it down later.Use a tripod to support the camera whenever possible. Make sure the camera is directly in front of the target. The central axis of the lens should be in line with the center of the Spyder Checkr.The Spyder Checkr and the camera sensor should be parallel.Focus on the chart. It is important to shoot in the camera's RAW format, if it offers one. Take a few frames at different settings, if you are unsure of any of the variables. These are your target shots.You can now remove the Spyder Checkr and continue to shoot.Adjust the Target ShotTo make the best color profile for your camera and lens combination, there are several adjustments that need to be made to the image of the Spyder Checkr target image.OpenDownload your target shot and open in your preferred editing software.Crop and StraightenUse the crop tool to select the outside black edge of the patch area with no background showing beyond the target. Use the rotate function of the crop tool to straighten the target image. Activating a lens profile will reduce lens distortion in your target shot, but this is rarely necessary.White BalanceAny of the light or medium gray patches can be used to gray balance/white balance the target image. The 20% gray patch is recommended. Use the white balance eyedropper tool and click on the desired gray patch.Exposure AdjustmentLook at RGB values or Percentages of the white patch. Adjust the exposure slider until the white patch lists as approximately 90%, or about RGB 230, 230, 230.Next check the black patch. The black/shadow adjustment is used to set the black value to 4%, or about RGB 10, 10, 10. If the value is below this level, it may be best to leave it as shot or reshoot with brighter illumination.Once these adjustments are made, the target image is ready to be used to create a color profile for your camera and lens combination.Save the Target and ApplySave the target image as an uncompressed Tiff.Launch the Spyder Checkr software and open the target image by dragging-and-dropping the file, or select “Open” from the Spyder Checkr menu to navigate to and select your file.Processing your Target ShotIf you captured your image and cropped appropriately the sampling squares should already be placed within the correct patches of your target image. If not, you can drag on any edge or corner of the image area to adjust the fit. You can left click on the grid and move the entire target grid if needed. The colors inside the sampling squares should be a somewhat less saturated version of the patch colors. If the patch and sample colors are of different colors, check that the target image is not upside down, sideways, or inverted.Once you have reviewed and adjusted the target grid on the target image, choose your Rendering Mode and where you would like to save the calibration.Rendering ModesIn the right pane of the Spyder Checkr application you will find a drop down list of mode choices. The three modes are described below:Colorimetric Mode – provides the most literal results. Best whenattempting to reproduce artwork and color critical work.Saturation Mode - provides a boost in Saturation. 
Offers results that are generally more pleasing for many types of images.
Portrait Mode - selectively reduces the color saturation of skin tone components to make portrait processing easier.
The effects of these modes can be subtle; the amount of change to your image depends on the color accuracy of your camera's sensor and the color neutrality of a particular lens.

Saving the Profile
Once you have selected the rendering mode you would like to use, choose where you would like Spyder Checkr to save the calibration.
Click the "Save Calibration" button. This launches a window that allows you to name the calibration.
We suggest a naming system that includes the camera model, lens, calibration mode, and Spyder Checkr chart (48 for Spyder Checkr, 24 for Spyder Checkr 24, Photo for Spyder Checkr Photo). For example, if you calibrate a Nikon D810 with an 85mm lens in Portrait Mode with the Spyder Checkr 24, a file name such as D810_85_portrait_24 would be optimal. This way you can easily identify the preset for the correct camera and lens combination.
Note: Once you save your calibration, you must restart your editing software for your calibration to appear and be usable.

Using Your New Calibration
After re-launching your editing software (your calibration data will NOT be available until you quit and restart your editing software), open an individual image or a group of images by camera and lens, then find the software's preset menu and apply the preset.

Creating Multiple Light Source Calibrations
Spyder Checkr's Tools menu contains commands for creating multiple calibrations from any two existing calibration presets. Choose any two presets built for the same camera, and a series of three new calibration presets will be created, offering increased precision for light sources between the original sources. This function is mainly of use for advanced processes such as museum photography.

In-Camera Color Balancing
Shoot the Gray Target face of the Spyder Checkr in your camera's White Balance or Gray Balance mode to produce an in-camera color balance for the lighting conditions under which you have shot the Spyder Checkr. This assures that your initial view of images, quick exports to JPG, or images downloaded directly to a mobile/tablet device will have the intended color balance, and it helps assure that you do not overexpose as you shoot. The mix of several levels of gray in the Gray Target provides a more global balance than shooting just one density of gray. Shooting the center section of the Gray Target will further enhance this multi-level function.

Changing Between Spyder Checkr Targets
The Spyder Checkr software supports multiple model versions of the Spyder Checkr targets and will automatically launch configured for the target type you have purchased. If you use multiple targets, the appropriate target can be chosen in the Spyder Checkr preferences. Select the target type you want to use and, when the dialog box appears telling you that the application must restart to switch, select OK to auto-quit the application. Re-launch and it will now use the other target size.

For more information on Spyder Checkr or our other Spyder products, visit Datacolor. © Copyright 2023 Datacolor. All rights reserved.
How to Use Filter Masks

Using filters on photos can greatly enhance the overall look and feel of an image. One important feature of filters is the ability to apply them selectively through the use of masks. Masks allow you to control which areas of the image are affected by the filter, giving you more control over the final result.
When using masks with filters, it's important to understand how they work and how to use them effectively. Masks are essentially grayscale images that determine which areas of the image are affected by the filter. White areas of the mask allow the filter to be applied, while black areas prevent the filter from affecting those areas. Gray areas allow for partial application of the filter, giving you even more control over the intensity of the effect.
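To make the white-passes / black-blocks behavior concrete, here is a minimal NumPy sketch (an illustration only, not any particular editor's implementation) that blends a filtered image with the original through a grayscale mask; intermediate gray values give partial application.

```python
import numpy as np

def apply_filter_with_mask(original, filtered, mask):
    """Blend a filtered image with the original through a grayscale mask.

    original, filtered: float arrays in [0, 1] with the same shape.
    mask: float array in [0, 1]; 1.0 (white) applies the filter fully,
    0.0 (black) keeps the original, and grays blend the two partially.
    """
    if original.ndim == 3 and mask.ndim == 2:
        mask = mask[..., None]  # broadcast the mask over color channels
    return mask * filtered + (1.0 - mask) * original

# Tiny synthetic example: apply the filter only to the right half of an image.
original = np.random.rand(4, 4, 3)
filtered = 1.0 - original              # stand-in for any filter effect
mask = np.zeros((4, 4))
mask[:, 2:] = 1.0
result = apply_filter_with_mask(original, filtered, mask)
```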
For check processing speed and reliability... MagTek delivers the solutions you need.

Retailers and financial institutions rely on MagTek. Millions of check transactions every day are processed with MagTek check readers and scanners. From some of the industry's biggest retailers and financial institutions to the smallest businesses, MagTek check readers and scanners deliver flexibility, accuracy, and value. With solutions for both auto- and single-feed operation and advanced features including integrated, secure card reader authenticators and color scanning, MagTek check readers and scanners fit the needs of a wide range of check transaction applications. For check processing that's easy and secure, put your trust in MagTek.

Small Document Scanners
Excella™: A 45+ document per minute (DPM), multifeed check reader and scanner with a capacity of up to 70 documents in its hopper.
ImageSafe: A multi-purpose, compact check reader and dual-sided scanner with an integrated MagneSafe™ secure card reader authenticator.
Mini MICR: A small, single-feed MICR reader ideal for applications where fast and accurate MICR reading is required.
Excella™ STX: A full-featured, single-feed check reader and scanner with an integrated MagneSafe™ secure card reader authenticator.
MICRSafe: Optimized to reduce transaction time and manual entry errors, the MICRSafe is a single-feed MICR reader built for reliability and security with its integrated MagneSafe™ secure card reader authenticator.
MICRImage™: An easy-to-use, single-sided small document scanner featuring a sleek, small-footprint design.

Conventional Products

Excella™
Excella is the ideal desktop check reading device for early image capture in high-volume electronic check applications, including BOC, Check 21, and remote deposit capture. With a small footprint and striking modern design, Excella is an auto-feed check reader which captures both the front and back image of checks in a single pass at 45+ DPM. The automatic feeder has a capacity of up to 70 documents to easily support even the most active check processing applications.

Benefits
When high-performance check processing is an integral part of your application, Excella delivers both the speed and efficiency you need. The Excella reads both E13B and CMC7 MICR fonts and features a programmable endorsement message. Excella captures check images at 200 dpi resolution and offers CCITT G4 or JPEG image compression to optimize your application's storage use. The easy-maintenance, high-speed unit is API and protocol compatible with MagTek's Excella STX single-feed scanner for optimum application flexibility.

Features
• Reads E13B and CMC7 MICR fonts
• Captures front and back images of checks in a single pass
• Single-line programmable printer for rear-side endorsement printing
• Black/white and grayscale images are standard
• Image resolution: 200 dpi (scaling to 100 dpi)
• Image compression: CCITT G4 or JPEG
• Image files: TIFF 6.0 (other formats can be made available)
• 45+ DPM automatic feeder with capacity for up to 70 documents and alternate input tray to feed single checks
• USB 2.0 (USB 1.1 compatible) and Ethernet 100Base-T
• Easy access to check path and scan bars for maintenance and cleaning
• API and protocol compatible with MagTek's Excella STX (single-feed, dual-side scanner)

Auto-Feed. Fast. Efficient.

ImageSafe
The multi-purpose ImageSafe is a compact check reader and dual-sided scanner that offers a cost-effective alternative for implementing PC-based electronic check applications.
Ideal for use with Check 21, BOC, and remote deposit capture applications, ImageSafe also enables secure card-based payment transactions with its integrated MagneSafe™ secure card reader authenticator that encrypts card data at the point of swipe. The flexible device can also be used with an ID card for strong two-factor authentication during online financial transactions.

Benefits
The ImageSafe is a full-featured device in a compact footprint. It enables the double-sided capture of complete check images in a single pass and supports E13B and CMC7 fonts. Using the MagneSafe secure card reader authenticator, users can easily secure credit, debit, and gift card transactions in addition to capturing check images. ImageSafe connects to PCs and terminals through USB 2.0 or 1.1 interfaces and features an easy-to-read LED indicator for visibility into device status. It's everything retailers need in a secure transaction device, and more.

Features
• Duplex, dual-sided scanning for complete image capture of both sides of the check in one pass
• MagneSafe secure card reader authenticator
• Front and back side virtual endorsement
• USB 2.0 or 1.1 interface
• LED indicator to provide device status
• Cover swings open for easy access to check path and scan bars for maintenance and cleaning
• MICR reading: E13B and CMC7
• Automatic parsing of MICR fields: transit, account, etc.
• Black/white and grayscale image rendition and 200 dpi image resolution
• Image files: TIFF 6.0, JFIF with EXIF tags, BMP
• Image compression: CCITT G4 or JPEG
• Ability to capture up to four images per check
• SHA1 digital signature for image file authentication
• Small footprint

Compact. Secure. High-Value.

Excella™ STX
Excella STX is check reading made easy, for merchants and financial institutions that need an affordable method for rapid image capture at the earliest entry point when processing BOC, Check 21, and remote deposit capture transactions. The Excella STX leads the industry in reliability and ease of use. Featuring a front and back printer for franking and endorsing, Excella STX scans front and back images in a quick, single pass. The small-footprint unit also offers single-side scanning for standard ID cards and an optional color scanning feature.

Benefits
The full-featured Excella STX is a single-feed check reader and scanner designed to meet the dynamic requirements of many retail and financial applications. The duplex reader offers MICR reading for E13B and CMC7 fonts and connects to standard PCs and terminals through a USB or Ethernet 100Base-T interface. The unit also features easy access to the check path and scan bars to simplify maintenance and cleaning. With an integrated MagneSafe™ secure card reader authenticator, this device may be everything your application needs for secure payment processing.

Features
• Single-feed check operation
• MICR reading: E13B and CMC7
• Scans front and back images of a check in a single pass
• Single-side scanning of standard ID cards
• MagneSafe secure card reader authenticator
• Optional front and back printers for franking and endorsing
• Optional color scanning available
• Interfaces: USB and Ethernet 100Base-T
• Small footprint
• Easy access to path and scan bars for maintenance and cleaning
• API and protocol compatible with MagTek's Excella (auto-feed scanner)

Secure. Reliable. Flexible.

MICRSafe
Optimized to reduce transaction time and manual entry errors, the MICRSafe is a single-feed MICR reader built for reliability and security with its integrated MagneSafe™ secure card reader authenticator.
Ideal for applications where fast and accurate MICR reading is required, the MICRSafe offers a range of interface options, including connectivity to PCs and the most popular POS terminals. With a simple drop-and-push check feed movement, the MICRSafe significantly speeds check verification and conversion.

Benefits
The MICRSafe enables users to easily format the MICR data to match any application's input requirements. In a single pass, the MICRSafe reads E13B and CMC7 MICR fonts and connects to POS terminals through a USB, RS-232, or PC keyboard wedge interface. Designed for multi-use environments, the MICRSafe also offers an integrated 3-track MagneSafe secure card reader authenticator to read ISO and AAMVA standard credit and debit cards and ID cards. This highly dependable device will deliver superior performance throughout years of daily use.

Features
• Reads E13B and CMC7 MICR fonts
• Reads MICR characters on checks and other MICR-encoded documents
• Small footprint with high accuracy and dependability
• Interfaces include USB, RS-232, and PC keyboard wedge
• Triple DES encryption
• DUKPT key management
• MagnePrint® card authentication
• Tokenization
• Card and data authentication
• Device and host mutual authentication

Single feed. Compact design. Flexible.

Mini MICR
Optimized to reduce transaction time and manual entry errors, the Mini MICR is a single-feed MICR reader with a reliable and durable design. Ideal for retail applications where fast and accurate MICR reading is required, the Mini MICR offers a range of interface options, including connectivity to the most popular POS terminals. With a simple drop-and-push check feed movement, the Mini MICR significantly speeds check verification and conversion.

Benefits
The Mini MICR enables users to easily format the MICR data to match any application's input requirements. In a single pass, the Mini MICR reads E13B and CMC7 MICR fonts and connects to POS terminals through a USB, RS-232, or PC keyboard wedge interface. Designed for multi-use environments, the Mini MICR also offers an optional 3-track magnetic stripe reader to read ISO standard credit and debit cards and ID cards. This highly dependable device will deliver superior performance throughout years of daily use.

Features
• Reads E13B and CMC7 MICR fonts
• Reads MICR characters on checks and other MICR-encoded documents
• Small footprint with high accuracy and dependability
• Optional 3-track MSR (magnetic stripe reader) for ISO and AAMVA cards
• Interfaces include USB, RS-232, and PC keyboard wedge
• Other interfaces available

Durable. Accurate. Compact.

MICRImage™
Designed to interface with PC applications and the most popular POS terminals in the market, the MICRImage is the ideal solution for retail check conversion and ACH applications where both size and ease of use are critical factors. With a small footprint and an advanced ergonomic design, the MICRImage delivers the accuracy and compatibility retailers of all sizes require for electronic check transactions. Reading MICR data and scanning a check's front image in a single pass, MICRImage can store up to 100 black and white check images using standard CCITT or G4 compression.

Benefits
MICRImage provides the highest MICR read accuracy available to ensure successful electronic check transactions. The efficient device combines design, function, and value to provide the best solution for electronic check conversion. The ergonomic design features visual indicators for ease of use and rapid training, while multiple interface ports ensure simple connectivity.
The MICRImage is the ideal check reader and scanner for electronic check applications of all types, including NACHA's POP, ARC, and RCK.

Features
• Reads E13B and CMC7 fonts
• Image resolution: 200 dpi
• Image compression: CCITT or G4
• Image files: TIFF 6.0 (other formats can be made available)
• Image storage capacity for up to 100 black & white images
• Each model includes two interface ports: RS-232/RS-232, RS-232/Ethernet, or RS-232/Modem
• USB connectivity is also available
• Easy-access feature allows convenient cleaning of scan bar and check path (no tools required)
• Optional 3-track MSR (magnetic stripe reader)
• Also available: MICR ER (enhanced reading) feature used for automated confirmation of MICR data

Sleek. Powerful. Easy-to-Use.
"Camera Raw color grading is like having a magic wand for your photos, allowing you to sprinkle them with a burst of vibrant colors or a touch of moody drama. It's like being a wizard of colors, where you can fine-tune, tweak, and perfect the shades and tones in your images without any fear of messing it up. Whether you want your landscape to scream with bold and vivid hues or give your portrait a mysterious and captivating vibe, this tool gives you the power to play with the colors like never before. With Camera Raw color grading, your photos will go from meh to marvelous in just a few clicks, and you'll feel like a color maestro in no time!""Camera Raw 彩色分级就像有魔杖为你的照片,允许你喷洒充满活力的色彩或触摸喜悦的戏剧。
这就像一个颜色的巫师,在那里,你可以微调,微调,并完善你的图像中的遮荫和音调,而不用担心弄乱它。
无论你想用大胆和生动的花蕾尖叫还是给你的肖像一种神秘和吸引人的气息这个工具给了你以前所未有的颜色演奏的力量用摄像机原始的彩色分级,你的照片会在短短几下就从meh变成神奇的,你很快就会感觉像一个彩色大师!"。
MicroscopesSoftwareDigital CamerasNikon offers total software solution covering image capture, archiving, and analysisWhy NIS-Elements?NIS-Elements is an integrated software imaging platform developed by Nikon to achieve comprehensive microscope control, image capture,documentation, data management and analysis.NIS-Elements handles multidimensional imaging tasks flawlessly with support for capture, display, peripheral device control, and datamanagement & analysis of images of up to six dimensions. The system also contributes to experiment efficiency with a database building feature developed to handle archiving, searching, and analysis of large numbers of multidimensional image files.Unified control of the entire imaging system offers significant benefits to users for cutting-edge research, such as live cell imaging.The most sophisticated of the three packages, NIS-Elements AR is optimized for advanced research applications. It features fully automated acquisition and device control through full 6D (X, Y, Z, Lambda (Wavelength), Time,Multipoint) image acquisition and analysis.NIS-Elements BR is suited for standard research applications,such as analysis and photodocumentation of fluorescent imaging. It features acquisition and device control through 4D (up to four dimensions can be selected from X, Y, Z,Lambda (Wavelength), Time, Multipoint) acquisition.NIS-Elements D supports color documentation requirements in bioresearch, clinical and industrial applications, with basic measuring and reporting capabilities.•Highest Quality Optical PerformanceThe world-renowned Nikon CFI60 infinity optical system effectively set a new standard for optical quality by providing longer working distances, higher numerical apertures, and the widest magnification range and documentation field sizes.As a leader in digital imaging technology, Nikon recognized the importance of adapting its optics to optimize the digital image. Nikon’s new objectives and accessories are specifically engineered for digital imaging, with exclusive features, such as the Hi S/N System, which eliminates stray light and provides unprecedented signal-to-noise ratios.Because what you see depends greatly on the quality of your microscope, we strive to power our microscope systems with optical technologies that are nothing but state-of-the-art.•Diverse Line of Powerful Digital CamerasImage capture has become a high priority in microscopy and the demand for products that deliver high quality and versatile functionality has grown considerably in recent years. In accordance, Nikon offers a full line of digital cameras, addressing the varied needs of microscopists in multiple disciplines. Each Nikon digital camera is designed to work seamlessly with Nikon microscopes, peripherals, and software. With Nikon Digital Sight (DS) series cameras, even novice users can take beautiful and accurate microscopic images. For the advanced researcher, hi-resolution image capture and versatile camera control is fast and simple. Together with Nikon’s new software solutions, image processing and analysis have reached new levels of ease-of-use and sophistication.•Intelligent Software SolutionsDesigned to serve the needs of advanced bioresearch, clinical, industrial and documentation professionals, NIS-Elements provides a totally integrated solution for users of Nikon and other manufacturers’ accessories by delivering automated intelligence to microscopes, cameras, and peripheral components. 
The software optimizes the imaging process and workflow and provides the critical element of information management for system based microscopy.AnnotationsBinary ColorThe NIS-Elements suite is available in three packages scaled to address specific application requirements.Total Imaging SolutionIn designing and bringing to market the mosttechnologically advanced optical systems, Nikon has worked very hard to provide a “total imagingsolution” that meets the ever-evolving demands of the microscope user.As a leading microscope manufacturer, Nikon realizes the importance of providing its customers with system-based solutions to free the user to focus on the work and not the complexities of the microscope. NIS-Elements was designed with this in mind. Never before has a software package achieved such comprehensive control of microscope image capturing and document data management.Optical ConfigurationMicroscope parameters, such as fluorescence filter and shutter combinations, can be saved and displayed as icons in the tool bar,allowing one-click setup. Setting up a CCD camera, applying shading compensation to each objective lens, and saving calibration data is also possible.Multichannel ImageImages using defined filters can be captured to view in various light wavelengths. Simply define the color of channels and the opticalconfiguration that is to be used for capturing the set of images.Z-seriesImages at different Z-axis planes can be captured with a motorized Z-Focus control. NIS-Elements supports two methods of Z-axis capture: Absolute Positioning and Relative Positioning.View SynchronizerThe View Synchronizer allows for the comparison of two or more multidimensional image documents. It automatically synchronizes the views of all documents added.Confocal Image ImportImages acquired with Nikon confocal microscopes C1si and C1plus can be imported.ProcessVarious image views can be selected to study captured data.Time LapseThe sophisticated but user-friendly time-lapse process enables the staggering of image capture simply by defining interval, duration, and frequency of capture.Large Image StitchingThis tool allows composition of large-area images with high magnification. Ultra high-resolution images can be stitched automatically from multiple frames through use of a motorized stage. NIS-Elements uses special algorithms to assure maximal accuracy during stitching. The user can also capture and stitch frames by moving the microscope stage manually.Multipoint ExperimentsWith the motorized stage installed, it is possible to automatically capture images at different XY and Z locations.ViewnD Viewer (Multidimensional image display)Easy-to-use parameters for multidimensional image operation are located on the frame of the screen.T: Time LapseXY: MultipointZ: Z-series (slices)Wavelength: MultichannelVolume renderingMultidimensional image 0 sec.15 sec.30 sec.Image acquisition screenOrthogonal imageRealizing a smooth flow fromimage capture to process and measurementImage AcquisitionDiverse Dimensional Acquisition12312341234Report GeneratorReport Generator enables the user to create customized reports containing images, database descriptions, measured data, user texts, and graphics. PDF files can be created directly from NIS-Elements.Time MeasurementTime Measurement records the average pixel intensities within defined probes during a time interval and can be performed on live or captured data sets. 
Time measurement also allows for real-time ratios between two channels.Interactive MeasurementNIS-Elements offers all necessary measurement parameters, such as taxonomy, counts, length, semiaxes, area and angle profile.Measurements can be made by drawing the objects directly on the image. All output results can be exported to any spreadsheet editor.Automatic MeasurementNIS-Elements enables automatic measurement by creating a binary image. It can automatically measure length, area, density and colorimetry parameters sets, etc. About 90different object and field features can be measured automatically.ProfileFive possible interactive line profile measurements provide consecutive intensity of a sourced image along an arbitrary path (free line, two-point line, horizontal line, vertical line and polyline).ClassifierClassifier allows segmentation of the image pixels according todifferent user-defined classes, and is based on different pixel features such as intensity values, RGB values, HSI values, or RGB valuesignoring intensity. The classifier enables data to be saved in separatefiles.RAM CapturingRAM Capturing enables the recording of very quick sequences to capture the most rapid biological events by streaming datadirectly to the computer’s video memory.MeasurementImage ProcessingColor Adjustmentcontrast/background subtraction/component mixNIS-Elements is suitable for hue adjustment, independently for each color, and converts the color image to an RGB or HSI component.Filterssmoothing/sharpness/edge detectionNIS-Elements contains intelligent masking filters for image smoothing,sharpness, edge detection, etc. These filters not only filter noise, but also are effective in retaining the image’s sharpness and detail.MorphologyNIS-Elements offers a rich spectrum of mathematical morphology filters for object classification. Morphology filters can be used to segment binary and grayscale images for measurement analysis purposes. Various morphometric parameters mean image processing is easier than ever.•Basic morphology (erosion, dilatation, open, close)•Homotopic transformations (clean, fill holes, contour, smooth)•Skeleton functions (medial axis, skeletonize, pruning) •Morphologic separation and othersMerge ChannelsMultiple single channel images (captured with different optical filters or under different camera settings) can be merged together simply by dragging from one image to another. In addition, the combined images can be stored to a file while maintaining their original bit depths or, optionally, can be converted into an RGB image.Image ArithmeticA+B/A-B/Max/MinNIS-Elements performs arithmetic operations on color images.OriginalBefore using the edge detection filter After using the edge detection filterContourThresholdZones of Influence + SourceThe real-time 2D deconvolution module (from AutoQuant ®) allows the user to observe live specimens with less out-of-focus blur. It allows faint biological processes to be observed that mayotherwise be missed and increases observed signal-to-noise ratio.NIS-Elements can combine X, Y, Z, Lambda (wavelength), Time and Multi-Stage points within one integrated platform formultidimensional imaging. All combinations of multidimensional images can be combined together in one ND2 file sequence using an efficient workflow and intuitive GUI. The user can easily choose the proper parameters for each dimension and the software and hardware will work seamlessly together to provide high quality results. 
Results may be exported into other supported image and video file formats.The haze and blur of the image that can occur when capturing a thick specimen or a fluorescence image can be eliminated from thecaptured 3D image. Images acquired with Nikon confocal microscopes C1si and C1plus can be imported to NIS-Elements.2D Real-time DeconvolutionMultidimensional Acquisition (4D/6D)3D DeconvolutionNIS-Elements has a powerful image database module that supports image and meta data. Various databases & tables can easily be created and images can be saved to the database via one simple mouse-click. Filtering, sorting and multiple grouping are also available according to the database field given for each image.DatabaseVirtual 3D imageFocused image created from a sequence of Z-stack imagesStereovision image T, XY, Z, λsimultaneous acquisitionConvert sequential images to ND2 fileND documentation exportationMultidimensional image croppingAVI generationBefore deconvolutionAfter deconvolutionVarious convenient plug-ins for advanced imaging and analysis capabilitiesEDF: Extended Depth of FocusExtended Depth of Focus (EDF) is an additional software plug-in for NIS-Elements. Thanks to the EDF function, images that have been captured in a different Z-axis can be combined to create an all-in-focus image. Also, it is possible to create stereovision image & 3D surface image for a virtual 3D image.System Configuration Examples123123123EnFeaturesNIS-Elements AR NIS-Elements BR NIS-Elements DCapture RAM Capture ●Time Lapse ●●●Z-stack●●●Multichannel Fluorescence ●●Multi-position ●●●4D ●6D●Display and process GUI Multi-Window Multi-Window S ingle-WindowAnnotation ●●●Reference ●●●ND Viewer ●●●Filter, Morphology●●Capture, display and multifunction Large Image ●●●EDF●●●3D Deconvolution ●2D RT Deconvolution ●Live Compare ●●●Macro Macro●●●Advanced Interpreter ●●●Measurement Segmentation ●●Time Measurement ●●Auto Measurement ●●●(Available from version 2.3)Report Report Generator ●●●Management Database ●●●Vector Layer●●●Multidimensional File Format●●●Printed in Japan (0605-10.5)T Code No. 2CE-MRPH-2This brochure is printed on recycled paper made from 40% used material.Specifications and equipment are subject to change without any notice or obligation on the part of the manufacturer. 
May 2006©2006 NIKON CORPORATION* Monitor images are simulated.Company names and product names appearing in this brochure are their registered trademarks or trademarks.NIS-Elements Supported DevicesNikon Devices & CamerasMicroscopes and Accessories Eclipse TE2000-ETE2000-PFS (Perfect Focus System) Eclipse 90iDIH-M/E Digital Imaging Head Eclipse LV100AMotorized Universal Epi-illuminator & Motorized Nosepiece Nikon Motorized Z-Focus Accessory (RFA): optional CamerasDigital Sight 5M/2M Series DS-1QM DQC-FSDXM1200 SeriesOther Cameras & DevicesCamerasRoper cameras: CoolSnap Series, Cascade Series PixelLink camerasHamamatsu ORCA series camerasOptionalPrior (Stage): ProScan, OptiScan Prior Filter wheelsUniblitz (Shutter/Filter wheel) : Uniblitz Shutter Through TE2000-E Hub Sutter Lambda 10-2, 10-3, DG4/5 Z-Focus module: Nikon RFA, Prior Piezo PI E-665Camera Emission Splitter: Optical Insights Dual View and Quad View EXFO (Fiber Illuminator): EXFO X-Cite 120 seriesOperating EnvironmentMinimum PC Requirements: CPU Pentium IV 3.2 GHz or higher RAM 1GB or higher OS Windows XP Professional SP2 English Version Hard Disk 600MB or more required for installation Video 1280X1024 dots, True Color mode●yes ●optionalNIKON INSTRUMENTS (SHANGHAI) CO., LTD.CHINA phone: +86-21-5836-0050 fax: +86-21-5836-0030(Beijing office)phone: +86-10-5869-2255 fax: +86-10-5869-2277(Guangzhou office)phone: +86-20-3882-0552 fax: +86-20-3882-0580NIKON SINGAPORE PTE LTDSINGAPORE phone: +65-6559-3618 fax: +65-6559-3668NIKON MALAYSIA SDN. BHD.MALAYSIA phone: +60-3-78763887 fax: +60-3-78763387NIKON INSTRUMENTS KOREA CO., LTD.KOREA phone: +82-2-2186-8400 fax: +82-2-555-4415NIKON INSTRUMENTS EUROPE B.V.P.O. Box 222, 1170 AE Badhoevedorp, The Netherlands phone: +31-20-44-96-222 fax: +31-20-44-96-298/NIKON FRANCE S.A.S.FRANCE phone: +33-1-45-16-45-16 fax: +33-1-45-16-00-33NIKON GMBHGERMANY phone: +49-211-9414-0 fax: +49-211-9414-322NIKON INSTRUMENTS S.p.A.ITALY phone: + 39-55-3009601 fax: + 39-55-300993NIKON AGSWITZERLAND phone: +41-43-277-2860 fax: +41-43-277-2861NIKON UK LTD.UNITED KINGDOM phone: +44-20-8541-4440 fax: +44-20-8541-4584NIKON INSTRUMENTS INC.1300 Walt Whitman Road, Melville, N.Y. 11747-3064, U.S.A.phone: +1-631-547-8500; +1-800-52-NIKON (within the U.S.A.only) fax: +1-631-547-0306/NIKON CANADA INC.CANADA phone: +1-905-625-9910 fax: +1-905-625-0103NIKON CORPORATIONParale Mitsui Bldg.,8, Higashida-cho, Kawasaki-ku,Kawasaki, Kanagawa 210-0005, Japanphone: +81-44-223-2167 fax: +81-44-223-2182 http://www.nikon-instruments.jp/eng/NIS-Elements is compatible with all common file formats, such as JP2, JPG, TIFF, BMP, GIF, PNG, ND2, JFF, JTF, AVI, ICS/IDS. ND2 is a special format for NIS-Elements.ND2 allows storing sequences of images acquired during nD experiments. It contains information about the hardware settings and the experiment conditions and settings.。
Usage of transforms.grayscale

Title: Understanding the Usage of transforms.grayscale: A Comprehensive Guide

Introduction:
In the field of image processing, transforming an image to grayscale is often a fundamental step in many applications. The transforms.grayscale function is a powerful tool that allows us to convert colored images into monochrome representations. In this article, we will delve into the details of the transforms.grayscale function, discussing its purpose, parameters, and usage. By the end, you will have a comprehensive understanding of this tool and how it can be applied in various image processing tasks.

Section 1: What is transforms.grayscale?
The transforms.grayscale function is a method commonly used in image processing libraries to convert images from color to grayscale. It takes an input image as its argument and applies an algorithm that calculates the grayscale value for each pixel based on its RGB values. This process effectively eliminates color information, resulting in a monochrome image where each pixel has a single intensity value ranging from black to white.

Section 2: Parameters of transforms.grayscale
The transforms.grayscale function typically accepts only one parameter, which is the input image to be converted. However, depending on the image processing library, additional optional parameters may be available to fine-tune the grayscale conversion process. These parameters may include brightness adjustment, contrast enhancement, or color channel customization for specialized applications.

Section 3: Implementation of transforms.grayscale
To utilize the transforms.grayscale function, the following steps can be followed (see the code sketch at the end of this article):

Step 1: Import the necessary libraries
Begin by importing the required image processing libraries such as OpenCV, PIL, or scikit-image. These libraries provide the tools and functions needed to perform image operations effectively.

Step 2: Load the image
Next, load the desired image using the appropriate function provided by the chosen library. This step should yield an image object or matrix, ready for subsequent processing.

Step 3: Apply transforms.grayscale
Invoke the transforms.grayscale function on the loaded image, passing the image object as the parameter. This initiates the grayscale conversion process.

Step 4: Display or save the grayscale image
After the grayscale conversion is complete, either display the resulting image using the library's built-in display functions or save it as a new image file for further analysis or use.

Section 4: Practical Applications of transforms.grayscale
The transforms.grayscale function finds extensive use in various fields, including but not limited to:

1. Computer Vision: Grayscale images provide a simplified representation that is easier to process for many computer vision algorithms. Image analysis tasks such as object detection, feature extraction, and image segmentation can benefit from the grayscale conversion offered by transforms.grayscale.

2. Digital Photography: While modern digital cameras offer a grayscale mode, transforms.grayscale is a useful tool for converting color images into black-and-white monochromatic representations, creating vintage aesthetics or emphasizing particular elements.

3. Medical Imaging: Grayscale representations are commonly used in medical imaging for easier analysis and diagnosis.
By converting color medical images into grayscale, healthcare professionals can focus on specific elements, such as bone density in X-rays or tissue changes in MRI scans.Section 5: ConclusionIn conclusion, the transforms.grayscale function is an essential tool in image processing, providing the ability to convert colored images into grayscale representations. By eliminating color information, this function allows for simplified processing and analysis in various applications. Understanding the parameters and implementing the function correctly opens up opportunities for enhanced computer vision, digital photography, and medicalimaging. With this comprehensive guide, you are equipped with the knowledge to utilize the transforms.grayscale function effectively in your image processing endeavors.。
A GRAYSCALE READER FOR CAMERAIMAGES OF XEROX DATAGLYPHSK.L.C.MoravecXerox Research Centre Europe6chemin de Maupertuis38240Meylan,Francekimberly.moravec@AbstractXerox DataGlyph technology is a2-D matrix barcode encoding which com-bines relatively high data density with an error correction scheme and anunobtrusive format.Some of the more exciting applications proposed forthis technology involve the decoding of unconstrained camera images ofDataGlyphs.This paper details a set of grayscale methods including steeredgradientfilters and the Radon transform which can decode many uncon-strained camera images.This method has been successfully tested on nearly camera images with very good results.1IntroductionMachine readable tags such as barcodes have been essential for inventory management, package tracking and process control in many industries such as manufacturing and re-tail for decades.Recently,researchers in the area of ubiquitous computing have used hand-held barcode readers as an inexpensive and robust way to bridge the gap between virtual and real worlds[9,11,14].Hand-held digital cameras and webcams coupled with barcodes are also used in augmented reality desktops[10,16].These intriguing new applications for barcodes motivated the development of a camera-based reader for DataGlyphs.The DataGlyph,a type of2-D matrix barcode,was developed at Xerox’s Palo Alto Research Center[6,7].It can encode more data than a1-D barcode and has an extremely high data density for a2-D barcode.It appears to a human as a grey halftone that is easily integrated into a document’s design.It has been used in products such as Xerox Flowport™and is an active area of development1.This paper describes robust methods of decoding DataGlyphs from images taken by a hand-held camera or webcam,extending the range of applications for which DataGlyphs can be used.This introduction explains in detail the DataGlyph format and the unique challenges of decoding camera images of DataGlyphs.The paper then describes the proposed decoder and the methods used in it.Results of the new decoder are compared with a binarization and correlation-based method,followed by the paper’s conclusions.1For current information,see .698Figure1:A300dpi DataGlyph containing the text of the abstract(left)and a diagram showing the detailed structure of the DataGlyph(right).The glyph consists of individual diagonal marks,forward slashes represent a binary‘1’and reverse slashed representa binary‘0’.is the size of the glyph mark,and is the size of the inter-mark spacing.1.1The DataGlyph FormatA DataGlyph represents binary data on paper as a grid of small lines known as glyph marks(shown in Figure1).By printing at higher resolutions,the grid,known as a glyph block,appears to the human eye as an evenly textured gray rectangle,much like a halftone photograph.An example of this is shown in the left half of Figure1.To encode the data in a robust manner,the data isfirst converted using a Reed-Solomon code with two CCIR checkbits.It is then interlaced(to deal with burst errors, resulting from damage such as a paper fold or a mark on a glyph)andfinally it is en-meshed in a synchronization frame.Typical decoding of a DataGlyph consists of an image processing stage and a decoding stage.First,the skew and the scale of the glyph are determined using profiles at different angles(an accepted technique forfinding skew in Document images[2,13]).The decoder then binarizes the image using a locally adaptive threshold and performs correlation on the marks using a template 
generated from the scale and angle information.Building the block begins byfinding the highest correlation mark and then‘walking’outwards,in mark-sized steps,to locate its neighbors.Applied recursively,this step eventually locates most of the marks in the block.Uncertainties are passed on to the decoder.The decoding stage takes the binary data matrix found by the image processing stage and attempts to decode it byfinding the synchronization frame,de-interlacing the data and then decoding the Reed-Solomon code.Because of the error coding,an incorrectly decoded message is almost impossible.If the image processing has failed tofind a cer-tain percentage of the marks correctly,the error coding spots this and returns a failure. For faxed and scanned images,this rarely happens:the combined steps are extremely successful and would be difficult to improve upon.699Figure2:A camera image of the DataGlyph in Figure1.The image contains lighting variations,blur and perspective warping.1.2Camera ImagesWhen hand-held cameras or hand-positioned webcams are used to take images of Data-Glyphs,the resulting images are different from scanned or faxed images of DataGlyphs, as illustrated in Figure2.In scanned or faxed images,where the paper is afixed distance from the CCD,it can be assumed that the glyph marks of a glyph printed at a particular resolution(say600or300dpi)will all have the same scale in pixels,whereas for camera images,the scale of the marks can vary over the image depending on the distance of the camera from the glyph.There are also often other features of camera images which make them difficult to decode:there is usually some camera noise,some blur,and there is likely to be some skew and perspective warping in the image.In addition,any decoding method must be able to decode images with uneven background lighting,a phenomenon which often occurs because of the shadow of the camera or a directed light source.2Decoding MethodsThe camera decoder proposed in this paper uses methods which are fairly robust to un-known scale,lighting variations,noise,blur,skew and perspective.There is no bina-rization step;the proposed decoder uses the full grayscale information to perform its processing and produce a labelled glyph matrix in a specified format,which can then be decoded using standard algorithms.The proposed decoder consists of four main subrou-tines:Mark Estimation,where the overall orientation of the glyph marks is determined; Block Estimation,where the locations of marks in the glyph grid are found;Integration, which labels each mark location based on local angle information and Decoding,which uses standard algorithms to decode the glyph.A detailed description of each method is given below.700Figure 3:The angles of the glyph image (left)and the angle histogram (right)showing four peaks,two pairs each corresponding to ‘0’and ’1’marks.Figure 4:The final glyph images filtered for ‘0’marks (left)and ‘1’marks (right).2.1Mark EstimationAn orientation filter in the proposed decoder finds the rough orientation of the glyph for the Radon step and finds the glyph mark centers without the need to make assumptions about the scale or overall orientation of the image.For our purposes,a fast calculation ofthe gradient was desired.The filters chosen for the proposed decoder aregradient operators [1].The filters,and ,find the gradient in the vertical and horizontal directions respectively.These particular gradient operators give a consistent response over all angles.Givenand ,the images filtered by and ,the 
orientation of each pixel can be derived by finding the maximum response of the filters [5]using the following equation:(1)(2)is shown in the left half of Figure 3.From these images,a histogram of angles can be701derived(as shown in the right half of Figure3),consisting of two pairs of peaks separated by radians.One pair corresponds to‘0’marks at angle and the other corresponds to ‘1’marks at angle.Steering thefiltered images to each angle produces images with strong responses at that angle:(3)(4)The original image is thenfiltered a second time at each angle to produce two image with peaks at the centers of‘0’marks and‘1’marks(shown in Figure4).This corresponds to taking a directional second derivative which has local maxima at the centers of the glyph marks.The local maxima of bothfilters are combined into a single image(shown on the left of Figure6).This image of the glyph centers is passed on to the next stage, Block Estimation.2.2Block EstimationThe proposed decoder assumes that the image of the glyph centers is a noisy grid under perspective.A grid under perspective can be modelled as the intersections of two pencils of lines[15]which can easily be found using the Radon or Hough transform[8].The Radon transform,,is given by(5) where is a pixel of the image at column along a line characterized by the slope and-intercept.Here the transform is parameterized by slope rather than angle[4].A Radon transform in the-direction gives a sharp point response for each row in the image.Since the image of the glyph block,,is a projective transform of the origi-nal block image,,the two are related by the parameters,and as follows[12]:(6)(7) Noting that the original glyph rows can be related to the image glyph rows by(8) it is shown in[3]that(9) which is a linear relation between and for all.Therefore,the responses for each row in the-pencil will always be colinear in the Radon transform of the glyph centers image,as shown in the top left of Figure5.A second Radon transform determines this line(shown in the top right of Figure5),and a profile of the line(Figure5,bottom)is retrieved for further analysis.702Figure5:Block Estimation.The results of thefirst Radon transform(top right),showing the pencil of glyph marks as a line of peaks down the center of the image.The results of the second Radon transform(top left),showing the single peak corresponding to the line in the top right image.The peaks of the profile along that line(bottom),each peak corresponding to a row of glyph marks in the image.The local maxima along the line correspond to rows of the glyph,but this line is often noisy due to stray marks and poorly located glyph centers.The valid local maxima are found by modelling the inter–glyph mark spacing by a Poisson distribution and mark magnitude by a Gaussian distribution;outliers are deleted.The same method forfinding the rows of the glyph block is used tofind the columns of the glyph block.The intersections of the row and column pencils are the new,estimated centers of the glyph marks.The results of block estimation are shown in Figure6.Finding the glyph mark centers is an essential step of the proposed decoder.Although the Radon transform can be applied to the original grayscale image,the most distinct sets of linear features in these original images are the spaces between the marks and the marks themselves.The rows and columns of the mark centers are only apparent when the stronger line structure in the images has been eliminated703Figure6:Estimated glyph mark centers before and after Block 
Estimation(left and right respectively).2.3Integration and DecodingGiven the corrected glyph mark centers,a matrix of the correct dimensions is formed. The major orientation of the steered images at the corrected centers becomes the label of the matrix.In cases where there is ambiguous information from the steered images,the matrix entry is labelled‘2’,corresponding to the DataGlyph format label for uncertainty. The labelled matrix block of‘0’s,‘1’s and‘2’s is decoded as per standard DataGlyph decoding.3ResultsThe proposed camera-image decoder described in Section2was tested against the current DataGlyph decoder described in Section1.1.Although other camera-based decoders have been written in the past,the current DataGlyph decoder is the only method for which documentation or code is still available.The two methods were tested on a database of DataGlyph camera images.Ten glyphs of URLs were created for the database.A specialized stand was made which was at a height such that both the current and proposed decoders decoded glyphs reliably.For images with extreme noise and blur,twelve images of each glyph were taken and com-bined into an average image to remove camera noise.The blur images were then created from these noiseless images byfiltering with a Gaussian kernel.Additive Gaussian noise was added to the noiseless images to create the noisy image database.For the resolution and perspective images,the stand was altered by placing mm thick shims under the stand base at the sides and corners.Results are summarised in Tables1and2.Allfigures are for decodes of ten images, with the exception of skew,wherefigures are for three images.Significant improvement to the decode rate for the current decoding methods can be seen when the proposed meth-ods are employed.Within the parameters that it operates,the proposed camera decoder operates reliably,while the current decoder has variable performance even for small im-age degradations.An unoptimized C prototype,running on a733MHz Pentium III processor,has a processing time of seconds for the largest glyph tested(marks).704Synthetic Degradation:no.decoded out of10imgs.Blur Noise Current New Current NewDecoder Decoder Decoder Decoder 0.56105910 141010910 1.50815610 Perspective Warping:no.decoded out of10imgs.X Pencil Y Pencil#Current New Current New shims Decoder Decoder Decoder Decoder16101010 2510410 319210Resolution:no.decoded out of10imgs. 
#Current New shims Decoder Decoder 01010141025934104210Skew:no.decoded out of3imgs.Current NewDecoder Decoder 03353310231533201325033003Table1:Tables showing the number of images decoded using the new decoder proposed in this paper versus the current decoder generally in use.Results are given for camera images corrupted by synthetic degradation,perspective warping,resolution and skew.4ConclusionsThe novel aspects of the proposed decoder lie in the choice of methods for decoding camera images.The guiding principle was to choose methods which were invariant to camera image degradations as opposed to calculating those degradations explicitly and correcting for them.By using an orientationfiltering approach,marks can be located and classified without the need for binarization,or the need to specify a scale or template ing a steerable approach avoids the need to know skew a ing this perspecitve grid-modelling method allows hand-held and hand-positioned cameras to reliably decode DataGlyphs,increasing the range of applications for which Dataglyph technology can be used.5AcknowledgementsThe author would like to thank Jeff Breidenbach and Jindong Chen of Palo Alto Research Center for the code they provided,their support and helpful comments.The author would also like to thank Christopher Dance for his extensive help and suggestions.705Database Total Range%DecodedNo.of Current NewImages Decoder DecoderBlur30–3393Noise30–80100X&Y Pencil60–shims4798Skew21–52100Resolution50–shims5098Overall191–5298Table2:Table showing the overall performance of the new proposed decoder versus the current decoder for camera images.References[1]S.Ando.Consistent gradient operators.IEEE Trans.Pattern Analysis and MachineIntelligence,22(3):252–265,March2000.[2]H.S.Baird.The skew angle of printed documents.Society of Scientific PhotographicEngineering,40:21–24,1987.[3]C.R.Dance.Perspective estimation for document images.Proceedings SPIE Docu-ment Recognition,V olume4670,2001.[4]R.D.Duda and e of the Hough transform to detect lines and curves inmunications of the ACM,15:11–15,1972.[5]W.T.Freeman and E.H.Adelson.The design and use of steerablefilters.IEEETrans.Pattern Analysis and Machine Intelligence,13(9):891–906,September1991.[6]D.L.Hecht.Embedded Data Glyph technology for hardcopy digital documents.In Proceedings Color Imaging:Devide-Independent Color,Color Hardcopy and Graphic Arts III,volume2171,pages341–352,1994.[7]D.L.Hecht.Printed embedded data graphical user interfaces.IEEE Computer,pages47–55,March2001.[8]J.Illingworth and J.Kittler.A survey of the Hough puter Vision,Graphics and Image Processing,44:87–116,1988.[9]T.Kindberg.Implementing physical hyperlinks using ubiquitous identifier resolu-tion.Technical Report HPL-2001-95,HP Laboratories Palo Alto,March2002. 
[10]M.Kobayishi and H.Koike.EnhancedDesk:integrating paper documents and dig-ital documents.In3rd Asian Pacific Conference on Computer Human Interaction, pages57–62,1998.[11]P Ljungstrand and L.E.Holmquist.Webstickers:Using physical objects as WWWbookmarks.In Extended Abstracts of the CHI99,pages332–333,1999.[12]J.Mundy and A.Zisserman.Geometrical Invariance in Computer Vision.MITPress,1992.706[13]W.Postl.Detection of linear oblique structures and skew scan in digitized doc-uments.In Proceedings International Conference on Pattern Recognition,pages 687–689,1986.[14]M.Spasojevic and T.Kindberg.A study of an augmented museum experience.Technical Report HPL-2001-178,HP Laboratories Palo Alto,July2001.[15]T.Tuytelaars,M Proesmans,and L.Van Gool.The cascaded Hough transform.InInternational Conference on Image Processing,page II:736,1997.[16]P.Wellner.Interacting with paper on the Digital munications of the ACM,36(7):86–89,1993.707。