Liu and El Gamal, "Synthesis of High Dynamic Range Motion Blur Free Image From Multiple Captures"
Numerical Calculation of the High-Energy γγ→γγ Scattering Cross Section

Since the 1930s, researchers have recognized γγ→γγ scattering (light-by-light scattering) as an important phenomenon, relevant to the interpretation of photon-dominated high-energy environments such as those that produce synchrotron radiation and X-rays. In γγ→γγ scattering, two gamma rays interact at a point to produce a new pair of gamma rays. Because this interaction proceeds only through virtual charged-particle loops rather than through any direct photon-photon coupling, its cross section is generally among the smallest quantities on the high-energy γγ energy spectrum.

γγ→γγ scattering is a purely quantum scattering phenomenon. Because the process is of fourth order in the electromagnetic coupling, its cross section is small. Since the participating photons are influenced jointly through the loop interaction, the scattering cross section is far smaller than that of single-particle scattering at the same energy. The γγ→γγ cross section therefore depends on the initial photon energy, the final photon energy, and the interaction mechanisms among the particles involved.

Numerical calculation of the γγ→γγ scattering cross section is an important research area, particularly for studying the propagation of gamma rays. Such work helps explain a range of gamma-ray-related phenomena in high-energy astronomy, high-energy physics, and X-ray applications.
The first step is to determine the physical mechanism of γγ→γγ scattering and its strength at different energies. The usual approach is to construct a numerical model from basic many-body theory and compute the cross section from it. Most importantly, a set of governing equations is used to describe the γγ→γγ process; quantum electrodynamics (QED), the relativistic quantum theory of light and matter, is normally taken as the underlying model, although other frameworks, such as strong-coupling theory, can also be used.
The second step is to model the process numerically, based on the mechanism defined above, and compute the γγ→γγ cross section. In practice the Monte Carlo method is commonly used: photon-scattering events are randomly sampled to simulate the scattering process and thereby estimate the γγ→γγ cross section. In addition, Vlasov-type methods based on many-body theory have been used to compute quantum scattering cross sections while accounting for quantum effects in the γγ→γγ cross section.
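As a toy illustration of the Monte Carlo step, the sketch below integrates a differential cross section over the full sphere by uniform sampling. The angular distribution `toy_dsigma` is a made-up normalized example for testing, not the QED light-by-light result:

```python
import math
import random

def mc_total_cross_section(dsigma_domega, n_samples=200_000, seed=42):
    """Estimate sigma = integral of dsigma/dOmega over the sphere
    by uniform Monte Carlo sampling in (cos(theta), phi)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        cos_t = rng.uniform(-1.0, 1.0)           # uniform in cos(theta)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        acc += dsigma_domega(cos_t, phi)
    # mean of the integrand times the total solid angle (4*pi)
    return 4.0 * math.pi * acc / n_samples

# Hypothetical angular distribution, normalized so the exact integral is 1.
def toy_dsigma(cos_t, phi):
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_t * cos_t)
```

With the fixed seed, `mc_total_cross_section(toy_dsigma)` returns a value close to the exact integral of 1; the same estimator applies unchanged to any differential cross section supplied by the physics model.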
Optimal Design of CMAC and Its Algorithm: GA-Based Optimization of the CMAC Offset-Vector Distribution
周旭东; 王国栋
[Journal] Acta Automatica Sinica (《自动化学报》)
[Year (Volume), Issue] 1998, 24(5)
[Abstract] A genetic algorithm (GA) is used to realize the optimal design and algorithm of the CMAC (Cerebellar Model Articulation Controller). The method solves the problem of jointly optimizing the CMAC and the object it learns, and has both theoretical and practical value. Simulation results show that the method is successful and effective. For different target objects (such as spatial surfaces), the GA can find the optimal internal representation of the CMAC (the offset-vector distribution), achieving accuracy that an ordinary CMAC cannot reach. The method improves on the learning performance of both Albus's CMAC and the CMAC of Parks et al. to varying degrees, and is suitable for applications requiring high-accuracy learning. A CMAC algorithm for arbitrary offset-vector distributions is also given.
[Pages] 6 (593-598)
[Keywords] CMAC; optimal design; genetic algorithm; offset-vector distribution; neural network
[Authors] 周旭东; 王国栋
[Affiliation] State Key Laboratory of Rolling and Automation, Northeastern University
[Language] Chinese
[CLC] TP18
Patent title: HIGH DYNAMIC RANGE IMAGE GENERATION AND RENDERING
Inventor: SUN, SHIJUN
Application no.: EP11740168; filed 2011-01-16
Publication no.: EP2531978A4; published 2014-11-19
Applicant: MICROSOFT CORPORATION
Abstract: Techniques and tools for high dynamic range (HDR) image rendering and generation. An HDR image generating system performs motion analysis on a set of lower dynamic range (LDR) images and derives relative exposure levels for the images based on information obtained in the motion analysis. These relative exposure levels are used when integrating the LDR images to form an HDR image. An HDR image rendering system tone maps sample values in an HDR image to a respective lower dynamic range value, and calculates local contrast values. Residual signals are derived based on local contrast, and sample values for an LDR image are calculated based on the tone-mapped sample values and the residual signals. User preference information can be used during various stages of HDR image generation or rendering.
Research Proposal: Synthesis and Visualization of High Dynamic Range Images

1. Background
The synthesis and visualization of high dynamic range (HDR) images is an important research topic in computer graphics and computer vision. With continuing advances in technology, HDR imaging has found wide use in digital image processing and computer graphics applications. In digital photography, however, camera limitations mean that photographs taken under different exposure settings suffer from visible color distortion and under- or over-exposure, so that much of the scene detail is lost and cannot be recovered from any single shot. HDR synthesis and visualization techniques can effectively solve this problem.

2. Objectives
This work aims to analyze existing HDR synthesis and visualization techniques in depth, investigate methods for synthesizing and visualizing HDR images, propose a practical HDR synthesis and visualization scheme, and evaluate the scheme experimentally.
3. Research Content
1. The concept and characteristics of high dynamic range images.
2. The current state of HDR synthesis and visualization research, including panorama construction, exposure fusion of photographs, and surface reconstruction based on HDR image fusion.
3. A proposed HDR visualization scheme based on HDR synthesis and tone mapping, including an HDR synthesis algorithm for panoramic capture, an energy-based exposure-fusion technique, and an energy-minimization method for HDR image adjustment.
4. Experiments and analysis of the proposed scheme to verify its feasibility and practicality.
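The exposure-fusion step above can be sketched with a simple well-exposedness weighting. This is a minimal greyscale stand-in, not the proposal's energy-based method; the mid-grey target 0.5 and the width `sigma` are illustrative assumptions:

```python
import math

def fuse_exposures(images, sigma=0.2):
    """Naive exposure fusion: weight each pixel by how well exposed it is
    (values near mid-grey 0.5 get high weight), then blend the exposures.
    `images` is a list of greyscale frames, each a flat list of floats in [0, 1].
    """
    n = len(images[0])
    fused = []
    for i in range(n):
        # Gaussian well-exposedness weight per exposure at this pixel
        ws = [math.exp(-((img[i] - 0.5) ** 2) / (2 * sigma ** 2)) for img in images]
        total = sum(ws) or 1.0
        fused.append(sum(w * img[i] for w, img in zip(ws, images)) / total)
    return fused
```

Given an under-, mid-, and over-exposed value for one pixel, the blend is pulled toward the well-exposed sample, which is the basic behavior an exposure-fusion scheme needs.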
4. Significance
This work will investigate HDR synthesis and visualization in depth, propose a practical solution, and evaluate it experimentally, providing a theoretical and practical basis for related research in digital image processing and computer graphics. The scheme can also offer a practical solution for digital photography and support research in photo restoration and image enhancement.
Hyperspectral Data Visualization in Python

Introduction: Hyperspectral data contain a large number of contiguous spectral bands and have broad application value in fields such as agriculture, environmental monitoring, and geological exploration. Visualizing these data helps us understand and analyze them and discover the patterns and information hidden within. This article shows how to visualize hyperspectral data with Python, walking through the implementation step by step.

Step 1: Obtain hyperspectral data. First we need a set of hyperspectral data, either from a public dataset or from our own acquisition. Such datasets usually contain multiple bands, stored per pixel. Here we use a hyperspectral remote-sensing image dataset with 8 bands as the example.
Step 2: Import the required libraries and the dataset. In Python, several libraries are available for processing and visualizing hyperspectral data; NumPy, Pandas, Matplotlib, and Seaborn are commonly used. First make sure these libraries are installed locally, then import them and load the dataset:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Load the hyperspectral dataset
dataset = pd.read_csv('spectral_data.csv')
```

Step 3: Data exploration and preprocessing. Before visualizing, we need to explore and preprocess the data.
This includes inspecting the basic information, checking for missing or abnormal values, and applying any processing the data needs:

```python
# First few rows
print(dataset.head())
# Shape of the data
print(dataset.shape)
# Missing values per column
print(dataset.isnull().sum())
# Statistical summary
print(dataset.describe())
```

Based on what the exploration shows, we can clean the data and handle missing or abnormal values as needed.
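As a simple example of the visualization step itself, the sketch below plots the mean value per band across all pixels. It assumes the CSV stores one column per band (the column names `band_1` … `band_8` are hypothetical):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen so the script also runs headless
import matplotlib.pyplot as plt

def plot_mean_spectrum(dataset, band_cols, out_path="mean_spectrum.png"):
    """Plot the mean value of each spectral band across all pixels."""
    means = dataset[band_cols].mean()
    fig, ax = plt.subplots()
    ax.plot(range(1, len(band_cols) + 1), means.values, marker="o")
    ax.set_xlabel("Band index")
    ax.set_ylabel("Mean value")
    fig.savefig(out_path)
    plt.close(fig)
    return means
```

Calling `plot_mean_spectrum(dataset, [f"band_{i}" for i in range(1, 9)])` would produce a mean-spectrum curve for the 8-band example dataset.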
An Improved Niblack Infrared Image Segmentation Algorithm Combined with Maximum Entropy
李云红; 刘畅; 李传真; 周小计; 苏雪平; 任劼; 高子明
[Journal] Journal of Northwest University (Natural Science Edition) (《西北大学学报:自然科学版》)
[Year (Volume), Issue] 2022, 52(2)
[Abstract] Infrared images are prone to over-segmentation and broken edges during segmentation. To address this, the paper proposes an improved Niblack infrared image segmentation algorithm combined with maximum entropy. First, the neighborhood window is determined from the image's pixel matrix, and a correction coefficient is selected using global and local gray-level information, remedying the weakness of the traditional Niblack parameter-selection method. Then, a background factor is determined from local neighborhood entropy to classify the image background. Finally, the maximum-entropy method and the improved Niblack method are applied to segment the different image classes. Experiments show that, compared with the Niblack, OTSU, maximum-entropy, and watershed methods, the proposed algorithm achieves a mean intersection-over-union (IoU) of 0.8335, higher than all of the compared algorithms, with a mean misclassification rate of only 0.0164.
[Pages] 6 (256-261)
[Authors] 李云红; 刘畅; 李传真; 周小计; 苏雪平; 任劼; 高子明
[Affiliation] School of Electronics and Information, Xi'an Polytechnic University
[Language] Chinese
[CLC] TP391.4
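The paper's modified method is only summarized above; the classic Niblack rule it improves on computes a local threshold T = m + k·s from the mean m and standard deviation s of a window around each pixel. A minimal sketch (window size and k are the usual illustrative defaults):

```python
def niblack_threshold(img, i, j, w=2, k=-0.2):
    """Classic Niblack local threshold T = m + k*s over a
    (2w+1)x(2w+1) window centred at pixel (i, j).
    `img` is a 2-D list of gray values; window is clipped at the borders."""
    rows, cols = len(img), len(img[0])
    vals = [img[r][c]
            for r in range(max(0, i - w), min(rows, i + w + 1))
            for c in range(max(0, j - w), min(cols, j + w + 1))]
    m = sum(vals) / len(vals)
    s = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
    return m + k * s
```

A pixel is labeled foreground if its value exceeds the local threshold; with negative k the threshold sits below the local mean, which is what makes plain Niblack over-segment flat backgrounds, the failure mode the paper's background factor is designed to fix.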
An Improved Stable Algorithm for Reduction to the Pole at Low Latitudes
张琪; 张英堂; 李志宁; 范红波
[Journal] Oil Geophysical Prospecting (《石油地球物理勘探》)
[Year (Volume), Issue] 2018, 53(3)
[Abstract] The conventional frequency-domain reduction-to-the-pole (RTP) operator is unstable at low latitudes. To address this, an improved stable low-latitude RTP algorithm is proposed. The conventional RTP operator is reshaped using two classes of methods, local suppression and global suppression, which suppress the high frequencies in the amplified region. The introduced parameters are then optimized by computing the correlation coefficient between the RTP result and the normalized magnetic source strength, yielding the RTP result under the optimal parameters and improving its accuracy. Finally, because the parameter-optimal RTP operator still exhibits high-frequency amplification, Tikhonov regularization is applied to stabilize it. Tests on theoretical models show that the improved algorithm effectively suppresses high-frequency noise in the original signal, raises the signal-to-noise ratio of the RTP result, and improves the accuracy and stability of low-latitude RTP.
[Pages] 11 (606-616)
[Authors] 张琪; 张英堂; 李志宁; 范红波
[Affiliation] Department of Vehicle and Electrical Engineering, Shijiazhuang Campus, Army Engineering University, Shijiazhuang 050003, Hebei; Unit 94019, Hotan 848000, Xinjiang
[Language] Chinese
[CLC] P631
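The stabilization idea can be illustrated generically: an amplifying spectral gain h is replaced by a damped version whose response is capped instead of growing without bound. The damping form below is a common Tikhonov-style choice used for illustration, not necessarily the paper's exact regularizer:

```python
def damped_gain(h, alpha=0.01):
    """Tikhonov-style damping of an amplifying operator gain h:
    h / (1 + alpha * h**2) tracks h for small h but is capped at
    1 / (2 * sqrt(alpha)) instead of diverging at high wavenumbers."""
    return h / (1.0 + alpha * h * h)
```

With alpha = 0.01 the damped gain peaks at h = 10 with value 5 and then falls off, so the high-frequency amplification that destabilizes low-latitude RTP is suppressed while low-wavenumber gains pass through almost unchanged.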
Image Authentication Based on Compressed Sensing and Wavelet-Domain Singular Value Decomposition
高志荣; 吕进
[Journal] Journal of South-Central University for Nationalities (Natural Science Edition) (《中南民族大学学报(自然科学版)》)
[Year (Volume), Issue] 2010, 29(4)
[Abstract] A new image-hash authentication method based on compressed sensing and wavelet-domain singular value decomposition (SVD) is proposed. The method first obtains the low-frequency subband of the image by wavelet transform, then applies block-based SVD to that subband, and finally applies a compressed-sensing random projection to the resulting array of largest singular values to produce the hash, which is transmitted to the receiver along with the original image. The receiver authenticates the image by comparing hashes; it can also use the transmitted hash to reconstruct the original image's singular values and map the singular-value differences back to the pixel domain, thereby identifying and localizing tampering. Extensive experiments demonstrate the effectiveness of the method.
[Pages] 5 (89-93)
[Authors] 高志荣; 吕进
[Affiliation] College of Computer Science, South-Central University for Nationalities, Wuhan 430074
[Language] Chinese
[CLC] TP309.2
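The random-projection stage can be sketched as follows. This is a generic seeded Gaussian projection; the feature vector stands in for the paper's array of largest singular values, and the hash length m is an illustrative choice:

```python
import math
import random

def cs_hash(features, m=16, seed=7):
    """Random-projection hash: project the feature vector onto m random
    Gaussian directions (the 'measurement matrix' role in compressed
    sensing).  A fixed seed makes sender and receiver use the same matrix."""
    rng = random.Random(seed)
    return [sum(rng.gauss(0.0, 1.0) * x for x in features) / math.sqrt(m)
            for _ in range(m)]

def hash_distance(h1, h2):
    """Euclidean distance between two hashes; small distance => authentic."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))
```

Because the projection is linear and the matrix is fixed by the seed, identical images produce identical hashes, small perturbations produce proportionally small hash distances, and the receiver can threshold the distance to decide authenticity.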
Research Proposal: A Study of Single-Pulse Gamma-Ray Burst Light Curves

Background: Gamma-ray bursts (GRBs) are the most powerful explosive events in the universe; short bursts are conventionally defined as those lasting less than two seconds. The peak luminosity of a GRB is comparable to the combined luminosity of billions of stars, and its rate of energy release exceeds the Sun's by many orders of magnitude. Because of their intensity and brevity, GRBs are usually difficult to observe and study. In recent years, however, with the development of advanced astronomical instruments and techniques, GRB research has made significant progress. Beyond cataloguing, classification, and searches, the study of GRB light curves has gradually become one of the main research focuses.

GRB light curves fall into two types, single-pulse and multi-pulse: a single-pulse light curve shows only one peak, while a multi-pulse light curve shows several. Single-pulse GRBs are characterized by short time scales, high energy release, and complex spectral and polarization behavior, which gives the study of single-pulse light curves great practical significance.
Research Content: This study will analyze a large sample of GRB light-curve data. The main tasks are:
1. Statistical characterization: analyze the statistical properties of the sample to explore regularities in the spatial distribution, energy release, and luminosity of single-pulse GRBs.
2. Peak analysis: analyze the peaks of the light curves, examining their time scales, amplitudes, and shapes.
3. Light-curve evolution: analyze how the light curves evolve, characterize single-pulse GRB variability, and explore possible physical mechanisms.
4. Simulation and comparison: compare numerical simulations against the observations to investigate candidate mechanisms such as internal and external shocks.

Significance: Studying single-pulse GRB light curves can inform the study of other explosive astrophysical phenomena. It also helps constrain the progenitors, energy-release mechanisms, and physical evolution of GRBs, providing theoretical support for related open questions and contributing to a complete framework for GRB research.

Research Plan: 1. Collect and screen representative single-pulse GRB light-curve data. 2. Process the data and perform the statistical analysis of spatial distribution and energy release, producing the relevant plots and tables. 3. Analyze the peaks of the light curves and their time scales, amplitudes, and shapes. 4. Analyze light-curve evolution, characterize single-pulse variability, and explore possible physical mechanisms. 5. Compare numerical simulations with the observations to study the candidate mechanisms.
Color Image Segmentation by Anisotropic Diffusion of the Color and Luminance Channels
黄敦; 游志胜
[Journal] Computer Engineering (《计算机工程》)
[Year (Volume), Issue] 2002, 28(6)
[Abstract] A color image segmentation algorithm using anisotropic diffusion and mean-shift clustering is proposed. The main idea is to split a color image into color channels and a luminance channel, apply anisotropic diffusion to each separately, segment each by mean-shift clustering, and merge the two results to obtain the final segmentation. In this algorithm, anisotropic diffusion performs denoising, enhancement, and coarse segmentation, making the color information in the image more homogeneous; mean shift then performs the final segmentation on the diffused image. Segmenting in the CIE L*u*v* color space effectively separates color regions on the chromatic components, while regions with weak color contrast can be recovered by segmenting the luminance information. Experimental data show the algorithm to be effective, stable, and robust.
[Pages] 4 (166-169)
[Authors] 黄敦; 游志胜
[Affiliation] Department of Computer Science, Sichuan University, Chengdu 610064
[Language] Chinese
[CLC] TP391
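The anisotropic diffusion used here is in the spirit of Perona–Malik: diffuse strongly where gradients are small (denoising, homogenizing) and weakly across large gradients (preserving edges). A 1-D sketch, with illustrative parameter values:

```python
import math

def perona_malik_1d(signal, iters=20, kappa=2.0, lam=0.2):
    """1-D Perona-Malik anisotropic diffusion: each step moves a sample
    toward its neighbours, scaled by an edge-stopping conductance
    exp(-(grad/kappa)^2) that shuts diffusion off at strong edges."""
    u = list(signal)
    for _ in range(iters):
        new = u[:]
        for i in range(1, len(u) - 1):
            grad_r = u[i + 1] - u[i]
            grad_l = u[i - 1] - u[i]
            c_r = math.exp(-(grad_r / kappa) ** 2)   # conductance to the right
            c_l = math.exp(-(grad_l / kappa) ** 2)   # conductance to the left
            new[i] = u[i] + lam * (c_r * grad_r + c_l * grad_l)
        u = new
    return u
```

Run on a step edge, the flat regions are smoothed while the jump itself survives, which is exactly the "make color information more homogeneous without blurring region boundaries" behavior the abstract relies on before mean-shift clustering.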
The Principle Behind Generating Response Spectrum Curves

Question: What are the basic principle and procedure by which the program generates response spectrum curves?

Answer: For any (seismic) acceleration time history, the programs (SAP2000, CSiBridge, ETABS, etc.) can generate the corresponding response spectrum curves: spectral displacement, spectral velocity or pseudo-spectral velocity, and spectral acceleration or pseudo-spectral acceleration. The basic principle and procedure are as follows.

First, the program runs dynamic time-history analyses of single-degree-of-freedom (SDOF) systems over a range of frequencies (or periods) and damping values. For an SDOF system under arbitrary loading, the theoretically rigorous solution is the Duhamel integral. Although the Duhamel integral can be evaluated numerically via convolution, Fourier transform, or direct integration, it is not very practical for numerical computation overall. Programs therefore more commonly integrate using linear interpolation of the acceleration, which also matches the way ground-motion acceleration records are digitized (acceleration values at adjacent discrete instants are connected by straight lines). For the detailed solution procedure, see the companion article in this knowledge base, "Dynamic Time-History Analysis of SDOF Systems"; it is not repeated here.
Next, for each time-history analysis (fixed frequency and damping) the program records the peak acceleration, peak velocity, and peak displacement. Note that peak displacement and peak velocity refer to the maxima of the relative displacement and relative velocity, while peak acceleration refers to the maximum of the absolute acceleration (absolute acceleration = relative acceleration + input acceleration). The pseudo-spectral velocity and pseudo-spectral acceleration are, respectively, the spectral displacement multiplied by the circular frequency ω and by its square ω².

Finally, the program plots the response spectrum curves from these peaks and the corresponding frequencies and damping values. The horizontal axis is usually period or frequency; the vertical axis can be any one of spectral displacement, spectral velocity, spectral acceleration, pseudo-spectral velocity, or pseudo-spectral acceleration; and the curves obtained at different damping ratios form a family of response spectra. Evidently, one dynamic time-history analysis of one particular SDOF system (fixed period and damping) determines only a single point on one curve of the family.
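The procedure above can be sketched in code. For simplicity this sketch uses the Newmark average-acceleration method in place of the exact piecewise-linear-acceleration solution the article describes; the equation of motion is in per-unit-mass form with 5% damping by default:

```python
import math

def sdof_peaks(ag, dt, period, zeta=0.05):
    """Integrate u'' + 2*zeta*w*u' + w^2*u = -ag(t) (per unit mass) with
    Newmark average acceleration; return peak |relative displacement|,
    peak |relative velocity|, peak |absolute acceleration|."""
    w = 2.0 * math.pi / period
    k, c = w * w, 2.0 * zeta * w
    beta, gamma = 0.25, 0.5                       # average acceleration
    u = v = 0.0
    a = -ag[0]                                    # initial equilibrium
    keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt * dt)
    pd = pv = pa = 0.0
    for g in ag[1:]:
        p = (-g
             + u / (beta * dt * dt) + v / (beta * dt) + (0.5 / beta - 1.0) * a
             + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                    + dt * (0.5 * gamma / beta - 1.0) * a))
        un = p / keff
        vn = (gamma * (un - u) / (beta * dt) + (1.0 - gamma / beta) * v
              + dt * (1.0 - 0.5 * gamma / beta) * a)
        an = (un - u) / (beta * dt * dt) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        u, v, a = un, vn, an
        pd, pv = max(pd, abs(u)), max(pv, abs(v))
        pa = max(pa, abs(a + g))                  # absolute = relative + input
    return pd, pv, pa

def spectrum_point(ag, dt, period, zeta=0.05):
    """One point of the response-spectrum family, including the
    pseudo-spectral quantities PSv = w*Sd and PSa = w^2*Sd."""
    w = 2.0 * math.pi / period
    sd, sv, sa = sdof_peaks(ag, dt, period, zeta)
    return {"Sd": sd, "Sv": sv, "Sa": sa, "PSv": w * sd, "PSa": w * w * sd}
```

Sweeping `period` over a grid and repeating for several damping ratios produces the full curve family; each call to `spectrum_point` yields exactly one point on one curve, as the article notes.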
For the specific steps to display response spectrum curves, see the companion article in this knowledge base, "Steps for Displaying Response Spectrum Curves".
Merck Performance Materials — Technical Datasheet
AZ® Organic Developers: Metal Ion Free (TMAH) Photoresist Developers

APPLICATION
AZ MIF developers are high contrast, ultra-high purity tetramethylammonium hydroxide (TMAH) based photoresist developers formulated for a wide range of advanced IC and thick photoresist applications.
• Surfactant enhanced and surfactant free options
• Industry leading normality control
• Wide range of normality available
• High purity, low particulate formulations
• Multiple bulk and non-bulk packaging options

PROCESSING
GENERAL PROCESSING GUIDELINES
AZ MIF developers should be used at room temperature in puddle, spray, or batch immersion processing mode. Variations in develop time, developer temperature, and substrate temperature will result in inconsistent develop uniformity and will affect process repeatability/reproducibility. It is important to monitor and control these variables.
When processed in batch immersion mode, MIF developer bath life will be limited by the volume of dissolved photoresist in solution and by carbonate uptake from the fab environment. Bath change-out frequency should be specified by the number of substrates processed and by elapsed time since the last bath change. The maximum number of substrates that may be processed through a given bath will depend upon the photoresist thickness, the % of substrate surface covered, and the volume of the developer tank.
When not in use, developer tanks should be covered to minimize evaporation and the rate of carbonate uptake. Inert gas blankets (dry N2, for example) may also be used to isolate developer tanks from the fab environment.
In general, immersion tanks should be changed at least every 24 hours (or sooner if the maximum number of substrates processed is reached).

BATH AGITATION
Mild agitation of immersion developer tanks may improve wafer-to-wafer develop uniformity and photo speed when batch processing substrates.

PUDDLE DEVELOPING
Due to their lower surface tension, surfactant enhanced developers improve substrate wetting and facilitate puddle formation using lower dispense volumes than typical surfactant free developers. Complete development of patterns in thick photoresist films (> 3.0 µm) may require multiple developer puddles. Increased normality developers and/or aggressive surfactants can improve dissolution rates and reduce develop time for thick photoresist films (see the applications guide section of this publication).

RINSING
Use de-ionized water only to rinse wafers post develop and to "quench" the developer activity. Spray pressure or bath agitation during rinsing may reduce post-develop defect density by minimizing redeposited surface particles.

DEVELOPER APPLICATIONS GUIDE
0.26N (2.38%) TMAH DEVELOPERS
0.26N TMAH developers are the industry standard for advanced integrated circuit (IC) production and general lithography.

AZ 300MIF Developer
AZ 300MIF is an ultra-high purity, general purpose, surfactant free 0.26N TMAH developer featuring class-leading normality control and ppb-level metals content. Recommended for puddle, spray, and immersion applications.

AZ 726MIF Developer
AZ 726MIF is a surfactant enhanced 0.26N TMAH developer optimized for puddle develop processes.

AZ 917MIF Developer
AZ 917MIF is a surfactant enhanced 0.26N developer formulated to improve photo speed in puddle or immersion develop processes with no loss of contrast or selectivity. Improves photo speed by 10-20% vs. AZ 726MIF.

AZ 2026 MIF Developer
AZ 2026 MIF developer contains different surfactants which also affect the dissolution rate of photoresist. Dark erosion is higher than with AZ 726 MIF; however, this helps to avoid scumming, which is mainly observed when the photoresist is processed on steppers without applying a post-exposure bake (PEB).

CUSTOM NORMALITY TMAH DEVELOPERS
Custom normality developers may be desirable in cases where the develop rate or selectivity provided by 0.26N materials is inadequate. Reduced normality developers can improve selectivity to unexposed resist, and increased normality developers will reduce the required exposure dose and/or develop time for thick resist processing.

AZ 422 MIF Developer
AZ 422 MIF developer is a reduced normality (0.215N), surfactant free developer engineered to maximize dissolution selectivity and process control.

AZ 435MIF Developer
AZ 435 MIF developer is a surfactant free, increased normality (0.35N) TMAH developer optimized to improve photo speed for medium-thick photoresist processing (5-10 µm thick) while maintaining good process control. Recommended for use with AZ 9200 and AZ P4000 series photoresists.

AZ 405 MIF Developer
AZ 405 MIF developer is an aggressive, surfactant enhanced, high normality (0.405N) developer designed for thick photoresist processing (>15 µm thick). This developer provides a metal-ion-free alternative to the sodium- or potassium-based developers typically employed in thick resist processing.
Recommended for use with AZ 9260, AZ 50XT, and AZ P4620 photoresists.

AZ 2033 MIF Developer
AZ 2033 MIF developer contains high TMAH (3.0% TMAH) and is specially designed for improved compatibility with the AZ 8100 Series Photoresist.

Developer               Normality   Surfactant
AZ 300 MIF developer    0.26N       No
AZ 726 MIF developer    0.26N       Yes
AZ 927 MIF developer    0.26N       Yes
AZ 2026 MIF developer   0.26N       Yes
AZ 2033 MIF developer   0.33N       Yes
AZ 422 MIF developer    0.215N      No
AZ 435 MIF developer    0.35N       No
AZ 405 MIF developer    0.405N      Yes
AZ 732c MIF developer   0.30N       Yes

Products are warranted to meet the specifications set forth on their label/packaging and/or certificate of analysis at the time of shipment or for the expressly stated duration. EMD MAKES NO REPRESENTATION OR WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING MERCHANTABILITY OR FITNESS FOR A PARTICULAR USE REGARDING OUR PRODUCTS OR ANY INFORMATION PROVIDED IN CONNECTION THEREWITH. Customer is responsible for and must independently determine suitability of EMD's products for customer's products, intended use and processes, including the non-infringement of any third parties' intellectual property rights. EMD shall not in any event be liable for incidental, consequential, indirect, exemplary or special damages of any kind resulting from any use or failure of the products. All sales are subject to EMD's complete Terms and Conditions of Sale. Prices are subject to change without notice. EMD reserves the right to discontinue products without prior notice.
EMD, EMD Performance Materials, AZ, the AZ logo, and the vibrant M are trademarks of Merck KGaA, Darmstadt, Germany.

North America: EMD Performance Materials, 70 Meister Avenue, Somerville, NJ USA 08876, (908) 429-3500
Germany: Merck Performance Materials (Germany) GmbH, Wiesbaden, Germany, +49 611 962 4031
Korea: Merck Performance Materials (Korea) Ltd., Seoul, Korea, +82 2 2056 1316
Singapore: Merck Performance Materials Pte. Ltd., Jurong East, Singapore, +65 68900629
Taiwan: Merck Performance Materials Co. Ltd., Hsinchu, Taiwan, +886 3 5970885 #375
Japan: Merck Performance Materials G.K., Tokyo, Japan, +81 3 5453 5062
China: Merck Electronic Materials, Shanghai, China, +86 (21) 2083 2362

MATERIALS COMPATIBILITY AND HANDLING
TMAH-containing developers are compatible with all standard semiconductor processing equipment designed to handle high-pH aqueous solutions.
Note: Contaminating inorganic developer baths or lines with tetramethylammonium hydroxide (TMAH) based metal-ion-free developers, even at the parts-per-million level, will neutralize the dissolution activity of the inorganic developer process. Use extreme caution when changing developing equipment from a metal-ion-free to an inorganic process.
TMAH-containing developers should be avoided in cases where slight etching of an aluminum layer cannot be tolerated. 0.26N TMAH developers will etch typical deposited aluminum substrate layers at ~100 Å/min.
Recommended personal protective gear during handling includes eye protection, an apron, and caustic-resistant gloves. Refer to the current version of the SDS for information on exposure hazards.

STORAGE
Store AZ MIF developers in a cool, dry location in sealed original containers, away from sunlight and incompatibles. Do not expose to excessive temperatures or moisture. Recommended storage temperature is >0 °C. Do not freeze. Empty containers may contain harmful residue.

DISPOSAL
AZ MIF developers are compatible with typical facility acid/base drain lines and materials. For disposal other than via facility solvent drains, refer to the current product SDS and to local regulations.
Remote Sensing Image Enhancement Based on an Adaptive Gamma Transformation Function
刘攀; 侯晓荣
[Journal] Computer Knowledge and Technology (《电脑知识与技术》)
[Year (Volume), Issue] 2013, (3)
[Abstract] Traditional remote sensing image enhancement methods often give unsatisfactory visual results. This paper proposes a contrast-enhancement method for remote sensing images based on the stationary wavelet transform and a piecewise adaptive gamma gray-level transformation function. First, the remote sensing image is decomposed by the stationary wavelet transform into high-frequency detail and low-frequency approximation components at various resolutions. The low-frequency subband image at the largest scale is then partitioned into low-, middle-, and high-intensity regions using a kernel-based sample-weighted fuzzy C-means clustering algorithm, and each region is enhanced with a different adaptive gamma gray-level transformation function according to its characteristics. Next, the gain coefficients of the high-frequency subbands are computed by combining Bayesian shrinkage thresholding with a nonlinear adaptive gain function, yielding the enhanced high-frequency subband images. Finally, the enhanced image is reconstructed from the low- and high-frequency subbands. Experiments show that the method improves the global contrast of remote sensing images, improves the visual effect, enhances image detail while suppressing noise, and achieves a good overall enhancement result.
[Pages] 5 (575-579)
[Authors] 刘攀; 侯晓荣
[Affiliation] School of Energy Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan
[Language] Chinese
[CLC] TP311
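The piecewise-gamma idea can be illustrated on 8-bit gray values: dark regions get gamma < 1 (brightening), mid-tones stay nearly linear, and highlights get gamma > 1 (compression). The region boundaries (85/170) and gamma values (0.6/1.0/1.4) below are illustrative choices, not the paper's fitted parameters:

```python
def adaptive_gamma(pixels, low=85, high=170):
    """Piecewise gamma correction on 8-bit gray values: brighten dark
    pixels, leave mid-tones nearly unchanged, compress highlights."""
    out = []
    for p in pixels:
        v = p / 255.0
        if p < low:
            g = 0.6          # dark region: gamma < 1 brightens
        elif p <= high:
            g = 1.0          # mid-tones: identity
        else:
            g = 1.4          # bright region: gamma > 1 compresses
        out.append(round(255 * v ** g))
    return out
```

In the paper's method the regions come from fuzzy C-means clustering of the low-frequency subband rather than fixed thresholds, but the per-region transformation has this same shape.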
Large-Area Uniform OMVPE Growth for GaAs/AlGaAs Quantum-Well Laser Diodes with Controllable Lasing Wavelength
Aang, CA; 李学颜
[Journal] Semiconductor Information (《半导体情报》)
[Year (Volume), Issue] 1994, 31(5)
[Abstract] GaAs and AlGaAs grown in a vertical rotating circular OMVPE reactor at a reduced pressure of 0.2 atm show very uniform properties. With the substrate rotating at 500 r/min, thick epitaxial layers are uniform to within ±1%, while 3-10 nm quantum wells are uniform in thickness to within ±2%. The variation coefficient of the aluminum composition is 1.8×10⁻³ or less. Broad-area GRIN-SCH diode lasers containing a single-quantum-well active layer show very uniform threshold current density and differential quantum efficiency. The lasing wavelength can be precisely controlled by adjusting the thickness and composition of the active layer. For 175 devices distributed over a 16 cm² wafer containing a 10 nm thick Al0.07Ga0.93As active layer, the total variation in lasing wavelength was 3.0 nm. All tested devices taken from nine other epitaxial wafers lased in the range 803.5 to 807.4 nm.
[Pages] 7 (17-22, 31)
[Authors] Aang, CA; 李学颜
[Affiliation] Not given
[Language] Chinese
[CLC] TN365
Synthesis of High Dynamic Range Motion Blur Free Image From Multiple Captures
Xinqiao (Chiao) Liu, Member, IEEE, and Abbas El Gamal, Fellow, IEEE

Manuscript received March 5, 2002; revised November 14, 2002. This work was supported in part by Agilent, in part by Canon, in part by Hewlett-Packard, in part by Interval Research, and in part by Kodak, all under the Programmable Digital Camera Program. This paper was presented in part at the 2001 SPIE Electronic Imaging conference, San Jose, CA, January 2001 [23] and at the IEEE International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, UT, May 2001 [24]. This paper was recommended by Associate Editor B. E. Shi. X. Liu was with the Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA 94304. He is now with Canesta Inc., San Jose, CA 95134 USA (e-mail: chiao@). A. El Gamal is with the Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (e-mail: abbas@). Digital Object Identifier 10.1109/TCSI.2003.809815

Abstract—Advances in CMOS image sensors enable high-speed image readout, which makes it possible to capture multiple images within a normal exposure time. Earlier work has demonstrated the use of this capability to enhance sensor dynamic range. This paper presents an algorithm for synthesizing a high dynamic range, motion blur free, still image from multiple captures. The algorithm consists of two main procedures, photocurrent estimation and saturation and motion detection. Estimation is used to reduce read noise, and, thus, to enhance dynamic range at the low illumination end. Saturation detection is used to enhance dynamic range at the high illumination end as previously proposed, while motion blur detection ensures that the estimation is not corrupted by motion. Motion blur detection also makes it possible to extend exposure time and to capture more images, which can be used to further enhance dynamic range at the low illumination end. Our algorithm operates completely locally; each pixel's final value is computed using only its captured values, and recursively, requiring the storage of only a constant number of values per pixel independent of the number of images captured. Simulation and experimental results demonstrate the enhanced signal-to-noise ratio (SNR), dynamic range, and the motion blur prevention achieved using the algorithm.

Index Terms—CMOS image sensor, dynamic range extension, motion blur restoration, motion detection, photocurrent estimation, saturation detection.

I. INTRODUCTION
Most of today's video and digital cameras use charge-coupled-device (CCD) image sensors [1], where the charge collected by the photodetectors during exposure time is serially read out, resulting in slow readout speed and high power consumption. Also, CCDs are fabricated in a nonstandard technology, and as a result, other analog and digital camera functions such as A/D conversion, image processing and compression, control, and storage cannot be integrated with the sensor on the same chip. Recently developed CMOS image sensors [2], [3], by comparison, are read out nondestructively and in a manner similar to a digital memory, and can thus be operated continuously at very high frame rates [4]-[6]. A CMOS image sensor can also be integrated with other camera functions on the same chip, ultimately leading to a single-chip digital camera with very small size, low power consumption, and additional functionality [7]-[10]. In [11], it is argued that the high frame-rate capability of CMOS image sensors, coupled with the integration of processing with capture, can enable efficient implementations of many still and standard video imaging applications that can benefit from high frame rates, most notably dynamic range extension.
CMOS image sensors generally suffer from lower dynamic range than CCDs due to their high readout noise and nonuniformity. To address this problem, several methods have been proposed for extending CMOS image sensor dynamic range. These include
well-capacity adjusting [12], multiple capture [13], [14], [15], time to saturation [17], [18], spatially-varying exposure [16], logarithmic sensors [19], [20], and local adaptation [21]. With the exception of multiple capture, all other methods can only extend dynamic range at the high illumination end. Multiple capture also produces a linear sensor response, which makes it possible to use correlated double sampling (CDS) for fixed pattern noise (FPN) and reset noise suppression, and to perform conventional color processing. Implementing multiple capture, however, requires very high frame-rate nondestructive readout, which has only recently become possible using digital pixel sensors (DPS) [6].
The idea behind the multiple-capture scheme is to acquire several images at different times within the exposure time: shorter-exposure-time images capture the brighter areas of the scene, while longer-exposure-time images capture the darker areas of the scene. A high dynamic-range image can then be synthesized from the multiple captures by appropriately scaling each pixel's last sample before saturation (LSBS). In [22], it was shown that this scheme achieves higher signal-to-noise ratio (SNR) than other dynamic range-extension schemes. However, the LSBS algorithm does not take full advantage of the captured images.
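To see why averaging the captures can beat LSBS at low light, consider the minimal sketch below. It is a simplified stand-in for the paper's optimal MMSE estimators of Section IV: the inverse-variance weights assume read noise only, but it shows the key property the paper exploits, a recursive update storing only a constant amount of state per pixel:

```python
def recursive_photocurrent(samples, dt, read_var=1.0):
    """Recursively average per-capture current estimates Q_k / (k*dt),
    weighting later (higher-SNR) samples more via inverse read-noise
    variance.  Only the running estimate and the weight sum are kept,
    independent of how many captures arrive."""
    i_hat, w_sum = 0.0, 0.0
    for k, q in enumerate(samples, start=1):
        t = k * dt
        i_k = q / t                  # current implied by this capture
        w = t * t / read_var         # read noise adds var/t^2 to i_k
        w_sum += w
        i_hat += (w / w_sum) * (i_k - i_hat)   # recursive update
    return i_hat
```

For noiseless samples of a constant current the recursion returns that current exactly; with read noise, later samples dominate the average, mirroring the weight distributions the paper reports for its estimators.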
Since read noise is not reduced, dynamic range is only extended at the high illumination end. Dynamic range can be extended at the low illumination end by increasing exposure time. However, extending exposure time may result in unacceptable blur due to motion or change of illumination.
In this paper, we describe an algorithm for synthesizing a high dynamic range image from multiple captures while avoiding motion blur. The algorithm consists of two main procedures, photocurrent estimation and motion/saturation detection. Estimation is used to reduce read noise, and, thus, enhance dynamic range at the low-illumination end. Saturation detection is used to enhance dynamic range at the high-illumination end as previously discussed, while motion blur detection ensures that the estimation is not corrupted by motion. Motion blur detection also makes it possible to extend exposure time and to capture more images, which can be used to further enhance dynamic range at the low illumination end. Our algorithm operates completely locally; each pixel's final value is computed using only its captured values, and recursively, requiring the storage of only a constant number of values per pixel independent of the number of images captured. We present three estimation algorithms.
• An optimal recursive algorithm when reset noise and offset FPN are ignored. In this case, only the latest estimate and the new sample are needed to update the pixel photocurrent estimate.
• An optimal nonrecursive algorithm when reset noise and FPN are considered.
• A suboptimal recursive estimator for the second case, which is shown to yield mean-square error close to the nonrecursive algorithm without the need to store all the samples.
The latter recursive algorithm is attractive since it requires the storage of only a constant number of values per pixel.

Fig. 1. CMOS image-sensor pixel diagram.

The motion-detection algorithm we describe in this paper detects change in each pixel's signal due to motion or
change in illumination. The decision to stop estimating after motion is detected is made locally and is independent of other pixels' signals.
The rest of the paper is organized as follows. In Section II, we describe the image-sensor signal and noise model we assume throughout the paper. In Section III, we describe our high-dynamic-range image-synthesis algorithm. In Section IV, we present the three estimation algorithms. In Section V, we present our motion-detection algorithm. Experimental results are presented in Section VI.

II. IMAGE-SENSOR MODEL
In this section, we describe the CMOS image-sensor operation and the signal-and-noise model we use in the development and analysis of our synthesis algorithm. We use the model to define sensor SNR and dynamic range.
The image sensor used in an analog or digital camera consists of a 2-D array of pixels. In a typical CMOS image sensor [3], each pixel consists of a photodiode, a reset transistor, and several other readout transistors (see Fig. 1). The photodiode is reset before the beginning of capture. During exposure, the photodiode converts incident light into photocurrent for 0 ≤ t ≤ T, where T is the exposure time. This process is quite linear, and, thus, is a good measure of incident light intensity. Since the photocurrent is too small to measure directly, it is integrated onto the photodiode parasitic capacitor […], where q is the electron charge.
• Reset noise (including offset FPN) […]
• the saturation charge […], also referred to as well capacity.
If the photocurrent is constant over exposure time, SNR is given by […]. Thus, it is always preferred to have the longest possible exposure time. Saturation and change in photocurrent due to motion, however, make it impractical to make exposure time too long.
Dynamic range is a critical figure of merit for image sensors.
It is defined as the ratio of the largest nonsaturating photocurrent to the smallest detectable photocurrent, typically defined as the standard deviation of the noise under dark conditions. Using the sensor model, dynamic range can be expressed as DR = […]; to enhance it, one can increase the well capacity and/or decrease read noise.

III. HIGH-DYNAMIC-RANGE IMAGE SYNTHESIS
We first illustrate the effect of saturation and motion on image capture using the examples in Figs. 2 and 3. The first plot in Fig. 2 represents the case of a constant low light, where photocurrent can be well estimated from […].
superscript(k)to represent the number ofcaptures used and use subscript as the index of the coefficients for each capture.minimizes.Even though this assumption is not realistic for CMOS sensors,it is reasonable for high-end CCDs using very high-resolutionA/D converters.As we shall see,the optimal estimate in thiscase can be cast in a recursive form,which is not the case whenreset noise is considered.To derive the best estimate,define the pixel photocurrentsamplesas,i.e.,weights(6)This is a convex optimization problem with a linear constraintas in(5).To solve it,we define theLagrangian(7)whereat t=0,therefore,weights startwith aand we get[26](11)whereis known.C.Estimation Considering Reset Noise and FPNFrom(3)and(4),to minimize the MSE of the bestestimatorwhichgives(13)where(16)suchthat,asgiven in(14).The pixel current estimate given thefirst...requires the storage ofthevectorD.Recursive AlgorithmNow,we restrict ourselves to recursive estimates,i.e.,esti-mates of theformwhereagainand(20)The MSEof(21)To minimize the MSE,we requirethatwhichgivesFig.4.Distribution of estimation weights among total32samples used in thenonrecursive and recursivealgorithms.e-Fig.4plots the weights for the nonrecursive and recursive al-gorithms in Sections IV-C and D,respectively.Note that with atypical readout noise rms(60e)if sensor read noise is zero.On theother extreme,if shot noise can be ignored,then,the best es-timate is averaging(i.e.,).Also notethat weights for the nonrecursive algorithm can be negative.ItFig. 
Fig. 5. Simulated equivalent readout noise rms value versus number of samples k.

Fig. 6. Estimation enhances the SNR and dynamic range.

It is preferred to weight the later samples higher since they have higher SNR, and this can be achieved by using negative weights for some of the earlier samples under the unbiased estimate constraint (sum of the weights equals one). Fig. 5 compares the equivalent readout noise rms at a low illumination level, corresponding to a photocurrent of a few fA, as a function of the number of samples with that of conventional sensor operation, where only the last sample is used.

V. MOTION/SATURATION DETECTION

Estimation assumes that the photocurrent is constant and that saturation does not occur before the end of exposure; the detection step checks for violations of this assumption due to motion or saturation before the new image is used to update the photocurrent estimate. Since the statistics of the noise are not completely known and no motion model is specified, it is not possible to derive an optimal detection algorithm. Our algorithm is, therefore, based on heuristics. By performing the detection step prior to each estimation step we form a blur-free high dynamic range image. At the kth capture, we form the best MSE linear estimate of the photocurrent as in (23) and the best predictor of the next sample; if the new sample deviates from its prediction by more than a threshold, we decide that the illumination has changed between captures and stop updating the estimate. The threshold constant is chosen to achieve the desired tradeoff between SNR and motion blur: the higher the threshold, the more samples are accepted and the higher the SNR, but the greater the risk of motion blur.

Fig. 7. Six of the 65 images of the high dynamic range scene captured nondestructively at 1000 frames/s. (a) t = 0 ms. (b) t = 10 ms. (c) t = 20 ms. (d) t = 30 ms. (e) t = 40 ms. (f) t = 50 ms.

One potential problem with this "hard" decision rule is that gradual drift in illumination may never exceed the threshold at any single step while accumulating a large error over many captures. To guard against this, per-pixel counters track small deviations of consistent sign, and the estimate update is stopped once they persist.

Fig. 10. Readout values (marked by "+") and estimated values (solid lines) for (a) pixel in the dark area, (b) pixel in bright area, and (c) pixel with varying illumination due to motion.

VII. CONCLUSION

The high frame-rate capability of CMOS image sensors makes it possible to nondestructively capture several images within a normal exposure time. The captured images provide additional information that can be used to enhance the performance of many still and standard video imaging applications [11].
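The per-pixel freeze-or-update rule of Section V can be sketched as follows. The simple predictor (current estimate times the sample index), the plain Q_k / k update, and the threshold constant m are illustrative stand-ins for the paper's MMSE predictor and estimator, not its exact equations:

```python
def detect_and_update(estimate, k, q_k, read_sigma, m=4.0, q_sat=100000.0):
    """One per-pixel detection-plus-update step at the kth capture.

    estimate: current photocurrent estimate (electrons per sampling interval)
    q_k: kth nondestructive charge sample (electrons); q_sat: well capacity.
    Returns (estimate, frozen); frozen=True means motion or saturation was
    detected and the estimate is no longer updated.
    """
    if q_k >= q_sat:                 # saturation: keep the last good estimate
        return estimate, True
    predicted = estimate * k         # predicted charge if illumination is constant
    sigma = (max(predicted, 0.0) + read_sigma ** 2) ** 0.5  # shot + read noise std
    if abs(q_k - predicted) > m * sigma:
        return estimate, True        # illumination changed between captures: freeze
    return q_k / k, False            # accept the sample and refresh the estimate

# A pixel under constant light whose illumination jumps at the fifth capture:
est, frozen = 10.0, False
for k, q in enumerate([10.0, 20.0, 30.0, 40.0, 550.0], start=1):
    if not frozen:
        est, frozen = detect_and_update(est, k, q, read_sigma=5.0)
```

In this trace the first four samples are consistent with the running estimate, while the jump at the fifth capture exceeds m standard deviations of the predicted sample, so the pixel is frozen at its last blur-free value, which is the behavior illustrated in Fig. 10(c).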
The paper describes an algorithm for synthesizing a high dynamic range, motion blur free image from multiple captures. The algorithm consists of two main procedures, photocurrent estimation and motion/saturation detection. Estimation is used to reduce read noise and, thus, enhance dynamic range at the low illumination end. Saturation detection is used to enhance dynamic range at the high illumination end, while motion blur detection ensures that the estimation is not corrupted by motion. Motion blur detection also makes it possible to extend exposure time and to capture more images, which can be used to further enhance dynamic range at the low illumination end. Experimental results demonstrate that this algorithm achieves increased SNR, enhanced dynamic range, and motion blur prevention.

ACKNOWLEDGMENT

The authors would like to thank T. Chen, H. Eltoukhy, A. Ercan, S. Lim, and K. Salama for their feedback.

REFERENCES

[1] A. J. Theuwissen, Solid-State Imaging With Charge-Coupled Devices. Norwell, MA: Kluwer, May 1995.
[2] E. R. Fossum, "Active pixel sensors: Are CCDs dinosaurs?," Proc. SPIE, vol. 1900, pp. 2-14, Feb. 1993.
[3] E. R. Fossum, "CMOS image sensors: Electronic camera-on-chip," IEEE Trans. Electron Devices, vol. 44, pp. 1689-1698, Oct. 1997.
[4] A. Krymski, D. Van Blerkom, A. Andersson, N. Block, B. Mansoorian, and E. R. Fossum, "A high speed, 500 frames/s, 1024 x 1024 CMOS active pixel sensor," in Proc. 1999 Symp. VLSI Circuits, June 1999, pp. 137-138.
[5] N. Stevanovic, M. Hillegrand, B. J. Hostica, and A. Teuner, "A CMOS image sensor for high speed imaging," in Dig. Tech. Papers 2000 IEEE Int. Solid-State Circuits Conf., Feb. 2000, pp. 104-105.
[6] S. Kleinfelder, S. H. Lim, X. Liu, and A. El Gamal, "A 10000 frames/s CMOS digital pixel sensor," IEEE J. Solid-State Circuits, vol. 36, pp. 2049-2059, Dec. 2001.
[7] M. Loinaz, K. Singh, A. Blanksby, D. Inglis, K. Azadet, and B. Ackland, "A 200 mW 3.3 V CMOS color camera IC producing 352 x 288 24-b video at 30 frames/s," in Dig. Tech. Papers 1998 IEEE Int. Solid-State Circuits Conf., Feb. 1998, pp. 168-169.
[8] S. Smith, J. Hurwitz, M. Torrie, D. Baxter, A. Holmes, M. Panaghiston, R. Henderson, A. Murray, S. Anderson, and P. Denyer, "A single-chip 306 x 244-pixel CMOS NTSC video camera," in Dig. Tech. Papers 1998 IEEE Int. Solid-State Circuits Conf., Feb. 1998, pp. 170-171.
[9] S. Yoshimura, T. Sugiyama, K. Yonemoto, and K. Ueda, "A 48 kframes/s CMOS image sensor for real-time 3-D sensing and motion detection," in Dig. Tech. Papers 2001 IEEE Int. Solid-State Circuits Conf., Feb. 2001, pp. 94-95.
[10] T. Sugiyama, S. Yoshimura, R. Suzuki, and H. Sumi, "A 1/4-inch QVGA color imaging and 3-D sensing CMOS sensor with analog frame memory," in Dig. Tech. Papers 2002 IEEE Int. Solid-State Circuits Conf., Feb. 2002, pp. 434-435.
[11] S. H. Lim and A. El Gamal, "Integration of image capture and processing: Beyond single chip digital camera," Proc. SPIE, vol. 4306, pp. 219-226, Mar. 2001.
[12] S. J. Decker, R. D. McGrath, K. Brehmer, and C. G. Sodini, "A 256 x 256 CMOS imaging array with wide dynamic range pixels and column-parallel digital output," IEEE J. Solid-State Circuits, vol. 33, pp. 2081-2091, Dec. 1998.
[13] O. Yadid-Pecht and E. Fossum, "Wide intrascene dynamic range CMOS APS using dual sampling," IEEE Trans. Electron Devices, vol. 44, pp. 1721-1723, Oct. 1997.
[14] D. Yang, A. El Gamal, B. Fowler, and H. Tian, "A 640 x 512 CMOS image sensor with ultra-wide dynamic range floating-point pixel-level ADC," IEEE J. Solid-State Circuits, vol. 34, pp. 1821-1834, Dec. 1999.
[15] O. Yadid-Pecht and A. Belenky, "Autoscaling CMOS APS with customized increase of dynamic range," in Dig. Tech. Papers 2001 IEEE Int. Solid-State Circuits Conf., Feb. 2001, pp. 100-101.
[16] M. Aggarwal and N. Ahuja, "High dynamic range panoramic imaging," in Proc. 8th IEEE Int. Conf. Computer Vision, vol. 1, 2001, pp. 2-9.
[17] W. Yang, "A wide-dynamic-range, low-power photosensor array," in Dig. Tech. Papers 1994 IEEE Int. Solid-State Circuits Conf., Feb. 1994, pp. 230-231.
[18] E. Culurciello, R. Etienne-Cummings, and K. Boahen, "Arbitrated address event representation digital image sensor," in Dig. Tech. Papers 2001 IEEE Int. Solid-State Circuits Conf., Feb. 2001, pp. 92-93.
[19] M. Loose, K. Meier, and J. Schemmel, "A self-calibrating single-chip CMOS camera with logarithmic response," IEEE J. Solid-State Circuits, vol. 36, pp. 586-596, Apr. 2001.
[20] S. Kavadias, B. Dierickx, D. Scheffer, A. Alaerts, D. Uwaerts, and J. Bogaerts, "A logarithmic response CMOS image sensor with on-chip calibration," IEEE J. Solid-State Circuits, vol. 35, pp. 1146-1152, Aug. 2000.
[21] T. Delbruck and C. A. Mead, "Analog VLSI phototransduction," California Institute of Technology, Pasadena, CNS Memo No. 30, May 11, 1994.
[22] D. Yang and A. El Gamal, "Comparative analysis of SNR for image sensors with enhanced dynamic range," Proc. SPIE, vol. 3649, pp. 197-211, 1999.
[23] X. Liu and A. El Gamal, "Photocurrent estimation from multiple nondestructive samples in a CMOS image sensor," Proc. SPIE, vol. 4306, pp. 450-458, 2001.
[24] X. Liu and A. El Gamal, "Simultaneous image formation and motion blur restoration via multiple capture," in Proc. ICASSP 2001, vol. 3, Salt Lake City, UT, May 2001, pp. 1841-1844.
[25] H. Sorenson, Parameter Estimation: Principles and Problems. New York: Marcel Dekker, 1980.
[26] X. Liu, "CMOS image sensors dynamic range and SNR enhancement via statistical signal processing," Ph.D. dissertation, Stanford Univ., Stanford, CA, 2002.
[27] A. Ercan, F. Xiao, X. Liu, S. H. Lim, A. El Gamal, and B. Wandell, "Experimental high speed CMOS image sensor system and applications," in Proc. IEEE Sensors 2002, Orlando, FL, June 2002, pp. 15-20.

Xinqiao (Chiao) Liu (S'97-M'02) received the B.S. degree in physics from the University of Science and Technology of China, Anhui, China, in 1993, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, in 1997 and 2002, respectively.
In the summer of 1998, he worked as a Research Intern at Interval Research Inc., Palo Alto, CA, on image sensor characterization and novel imaging system design. He is currently with Canesta Inc., San Jose, CA, developing 3-D image sensors. At Stanford, his research was focused on CMOS image sensor dynamic range and SNR enhancement via innovative circuit design and statistical signal processing algorithms.
Abbas El Gamal (S'71-M'73-SM'83-F'00) received the B.S. degree in electrical engineering from Cairo University, Cairo, Egypt, in 1972, and the M.S. degree in statistics and the Ph.D. degree in electrical engineering from Stanford University, Stanford, CA, in 1977 and 1978, respectively.
From 1978 to 1980, he was an Assistant Professor of Electrical Engineering at the University of Southern California, Los Angeles. He joined the Stanford faculty in 1981, where he is currently a Professor of Electrical Engineering. From 1984 to 1988, while on leave from Stanford, he was Director of the LSI Logic Research Lab, Sunnyvale, CA, and, later, Cofounder and Chief Scientist of Actel Corporation, Sunnyvale, CA. From 1990 to 1995, he was a Cofounder and Chief Technical Officer of Silicon Architects, Mountain View, CA, which was acquired by Synopsys. He is currently a Principal Investigator on the Stanford Programmable Digital Camera project. His research interests include digital imaging and image processing, network information theory, and electrically configurable VLSI design and CAD. He has authored or coauthored over 125 papers and 25 patents in these areas.
Dr. El Gamal serves on the board of directors and advisory boards of several IC and CAD companies. He is a member of the ISSCC Technical Program Committee.