Stochastic Deconvolution
Formulas for convolution and deconvolution
I. Convolution formulas
(1) Discrete convolution (one dimension). For discrete sequences x[n] and h[n], their convolution y[n] is defined as
y[n] = Σ_{m=-∞}^{∞} x[m] h[n−m].
(2) Discrete convolution (two dimensions). For two-dimensional discrete signals x[m,n] and h[m,n], the convolution y[m,n] is
y[m,n] = Σ_{k=-∞}^{∞} Σ_{l=-∞}^{∞} x[k,l] h[m−k, n−l].
(3) Continuous convolution (one dimension). For continuous functions x(t) and h(t), their convolution y(t) is defined as
y(t) = ∫_{-∞}^{∞} x(τ) h(t−τ) dτ.
II. Deconvolution formulas
Deconvolution (also called inverse convolution) is the inverse operation of convolution.
In the discrete case, given the convolution result y[n] and the kernel h[n], the unknown x[n] can (under certain conditions) be found by solving
y[n] = Σ_{m=-∞}^{∞} x[m] h[n−m].
1. Frequency-domain method (discrete case)
- Take the discrete Fourier transform (DFT) of y[n] and h[n] to obtain Y[k] and H[k].
- By the convolution theorem Y[k] = X[k] H[k], so X[k] = Y[k] / H[k] (assuming H[k] ≠ 0).
- Apply the inverse DFT (IDFT) to X[k] to obtain x[n].
2. Iterative method (discrete case)
- A simple iterative scheme starts from the initial estimate x^(0)[n] = y[n] / h[0] (when h[0] ≠ 0).
- The estimate is then refined with the update
x^(i+1)[n] = x^(i)[n] + ( y[n] − Σ_{m=-∞}^{∞} x^(i)[m] h[n−m] ) / ( Σ_{m=-∞}^{∞} h[m] h[n−m] ),
which gradually approaches the true x[n]; here i is the iteration index.
In the continuous case, deconvolution is harder to carry out. The frequency-domain approach can again be used: transform the problem to the frequency domain, use Y(ω) = X(ω) H(ω) to obtain X(ω) = Y(ω) / H(ω) (assuming H(ω) ≠ 0), and apply the inverse Fourier transform to recover x(t). In practice, however, the properties of the functions, convergence and other issues must be taken into account.
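A minimal sketch of the frequency-domain method above in Python, assuming a noise-free, circular (periodic) convolution so that plain spectral division works; the signal and kernel are made up for illustration:

```python
import numpy as np

# Blur a 1-D signal by circular convolution, then recover it by spectral
# division (naive inverse filtering; no noise handling).
x = np.zeros(64)
x[[10, 25, 40]] = [1.0, 2.0, 1.5]      # "true" signal
h = np.zeros(64)
h[:3] = [0.25, 0.5, 0.25]              # blur kernel, zero-padded to signal length

H = np.fft.fft(h)
y = np.real(np.fft.ifft(np.fft.fft(x) * H))      # y = x (*) h  (circular convolution)

eps = 1e-12                            # guards against H[k] == 0
x_rec = np.real(np.fft.ifft(np.fft.fft(y) / (H + eps)))
print(np.allclose(x, x_rec, atol=1e-6))          # True in this noise-free case
```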
MATLAB deconvolution function
In MATLAB, deconvolution of sequences is usually performed with the deconv function. deconv deconvolves one-dimensional vectors (it is implemented as polynomial long division); for two-dimensional images, the Image Processing Toolbox provides dedicated functions such as deconvwnr, deconvreg and deconvlucy.
Syntax: [result, remainder] = deconv(signal, kernel), where signal is the input signal, kernel is the convolution kernel (i.e. the system's impulse response), result is the deconvolution result and remainder is the remainder.
Example:

```matlab
% Create an input signal
input_signal = [1, 2, 3, 4, 5];
% Create a convolution kernel (impulse response)
conv_kernel = [0.5, 1, 0.5];
% Convolve
conv_result = conv(input_signal, conv_kernel, 'full');
% Deconvolve
[deconv_result, remainder] = deconv(conv_result, conv_kernel);
% Display the results
disp('Original signal:');       disp(input_signal);
disp('Convolution result:');    disp(conv_result);
disp('Deconvolution result:');  disp(deconv_result);
disp('Remainder:');             disp(remainder);
```

Note that deconvolution results can be affected by noise and by boundary effects, so in practice the signal may need additional processing to reduce these effects. For detailed usage and any additional options of deconv, see the MATLAB documentation.
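For readers working in Python rather than MATLAB, scipy.signal.deconvolve behaves analogously (it also returns a quotient and a remainder); a minimal sketch mirroring the example above:

```python
import numpy as np
from scipy.signal import convolve, deconvolve

input_signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([0.5, 1.0, 0.5])

blurred = convolve(input_signal, kernel, mode='full')
recovered, remainder = deconvolve(blurred, kernel)

print(recovered)                      # [1. 2. 3. 4. 5.]
print(np.max(np.abs(remainder)))      # ~0 in this noise-free case
```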
OpenCV deconvolution
In computer vision and image processing, deconvolution is an important technique for recovering the original, sharp information from a blurred image.
With OpenCV (the open-source computer vision library), deconvolution can be implemented in several ways; a common one is based on the Wiener filter.
The Wiener filter is a minimum mean-square-error optimal estimator that can effectively reduce image noise while enhancing image detail.
OpenCV does not ship a ready-made Wiener deconvolution function (there is no cv2.wiener2); instead, Wiener deconvolution is implemented with OpenCV's frequency-domain tools (cv2.dft / cv2.idft) or with NumPy's FFT. The quantities involved are:
src: the blurred input image, normally a single-channel grayscale image converted to floating point.
PSF: the point spread function (blur kernel) that describes how the image was blurred.
NSR: an estimate of the noise-to-signal power ratio, which controls how strongly noise is suppressed.
The basic steps of Wiener-filter deconvolution are as follows:
Read the blurred image and convert it to a floating-point grayscale image.
Build the PSF, pad it to the image size and compute its spectrum H.
Form the Wiener filter W = conj(H) / (|H|^2 + NSR).
Multiply the image spectrum by W and apply the inverse transform.
Display or save the restored, sharper image.
Note that deconvolution can amplify the noise in an image, so before deconvolving it can help to pre-process the image, for example with smoothing or median filtering, to reduce the influence of noise.
In addition, the deconvolved result may differ somewhat from the original image, because deconvolution is an estimation process affected by many factors, such as the degree of blur, the noise level and the image content.
In short, deconvolution built on OpenCV's tools can help recover a clearer result from a blurred image, but in practice the method and its parameters must be chosen according to the specific situation.
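A minimal sketch of such a frequency-domain Wiener deconvolution in Python, using OpenCV only for I/O and kernel construction and NumPy for the FFT; the file names, PSF and noise-to-signal ratio are illustrative assumptions:

```python
import cv2
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution of a single-channel float image."""
    psf_pad = np.zeros_like(blurred)
    kh, kw = psf.shape
    psf_pad[:kh, :kw] = psf
    # Centre the kernel so the restored image is not shifted
    psf_pad = np.roll(psf_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)          # Wiener filter
    return np.real(np.fft.ifft2(G * W))

# Illustrative usage; 'blurred.png' is a placeholder file name
img = cv2.imread('blurred.png', cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0
psf = cv2.getGaussianKernel(9, 2.0) @ cv2.getGaussianKernel(9, 2.0).T
restored = wiener_deconvolve(img, psf, nsr=0.01)
cv2.imwrite('restored.png', np.clip(restored * 255, 0, 255).astype(np.uint8))
```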
An operation assessment method for power systems with a high proportion of renewable energy based on exchange convolution
Wu Shaolei; Feng Yu; Wu Kai; Lu Wei; Zhao Cheng; Luo Chen
[Abstract] With the large-scale construction of renewable energy, assessing and determining the operating mode of power systems with a high proportion of renewables has become a pressing problem. Since existing assessment methods do not fully account for the peak-regulation potential of generating units, an exchange-convolution method is proposed. The method builds on the widely used stochastic production simulation based on the equivalent energy function method and the equivalent energy frequency method. First, the load duration curve and the equivalent load frequency curve are established and a preliminary stochastic production simulation is run; next, units with a high start-up frequency are identified from the preliminary simulation results; finally, the exchange convolution is carried out for those units. The method is applied to a real provincial grid case, which verifies its effectiveness. Compared with conventional methods, it makes fuller use of the system's peak-regulation potential and improves the effectiveness of the assessment.
[Journal] Electric Power (China)
[Year (Volume), Issue] 2019, 52(4)
[Pages] 7 pages (pp. 25-31)
[Keywords] stochastic production simulation; exchange convolution; renewable energy; power system; operation assessment
[Authors] Wu Shaolei; Feng Yu; Wu Kai; Lu Wei; Zhao Cheng; Luo Chen
[Affiliation] Electric Power Research Institute, State Grid Anhui Electric Power Co., Ltd., Hefei 230601, Anhui, China (all authors)
[Language] Chinese
[CLC number] TM71
0 Introduction
The large-scale development of renewable energy has eased China's heavy dependence on coal-fired power, improved the country's energy mix, and contributed greatly to the energy transition and to sustainable energy, economic and social development.
However, renewable generation is highly random and variable, and its large-scale grid integration poses great challenges to the secure and stable operation of the power grid.
Secure and stable grid operation and renewable energy accommodation have therefore become major concerns for grid operators and power researchers, and assessing power systems with a high proportion of renewables is of great significance.
A large amount of productive work has already been carried out on assessing power systems with a high proportion of renewable energy.
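The stochastic production simulation that the abstract builds on repeatedly convolves each unit's forced-outage behaviour into the equivalent load duration curve. The sketch below shows that single convolution step in Python; the data, step size and function names are illustrative assumptions and are not taken from the paper:

```python
import numpy as np

def convolve_unit(eldc, capacity_mw, forced_outage_rate, step_mw=10):
    """One step of equivalent-load-duration-curve production simulation:
    F_new(x) = (1 - q) * F(x) + q * F(x - C), where q is the unit's forced
    outage rate and C its capacity (expressed as a shift in curve samples)."""
    shift = int(round(capacity_mw / step_mw))
    shifted = np.concatenate([np.full(shift, eldc[0]), eldc[:-shift]]) if shift else eldc
    return (1.0 - forced_outage_rate) * eldc + forced_outage_rate * shifted

# Illustrative data: probability that equivalent load exceeds x (x in 10 MW steps)
eldc = np.clip(np.linspace(1.2, -0.2, 200), 0.0, 1.0)
eldc_after = convolve_unit(eldc, capacity_mw=300, forced_outage_rate=0.05)
print(eldc_after[:5])
```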
Explaining the principle and uses of deconvolution
Deconvolution is a basic technique in signal processing whose principle can be understood as the inverse of the convolution operation.
However, deconvolution generally cannot recover the original input signal exactly, because convolution destroys some information irreversibly.
In the deep-learning usage of the term, "deconvolution" (more precisely, transposed convolution) convolves a transposed version of the kernel with the convolved output.
Although the original input cannot be recovered exactly, deconvolution can restore it to a useful degree, recovering as much of the partially lost information as possible.
Deconvolution has a wide range of uses, including but not limited to the following:
1. Channel equalization: in communication systems, signals suffer interference and distortion during transmission; deconvolution can be used to estimate and compensate for this distortion and thereby recover the original signal.
2. Image restoration: in image processing, images may be degraded by noise, blur or other distortions; deconvolution can be used to estimate and restore the original image.
3. Seismology: the characteristics of seismic wave propagation can be estimated by deconvolution, which helps in studying the Earth's internal structure and the patterns of seismic activity.
4. Non-destructive testing: deconvolution can be used to extract information about material properties from the detected signals.
5. Unknown-input estimation and fault identification: in some settings, unknown inputs or faults in a system can be estimated and identified through deconvolution.
In short, deconvolution is an important signal-processing technique that is widely applied across many fields.
Deconvolution explained in detail
Deconvolution, in this context more accurately called transposed convolution, is an image-processing operation. In convolutional neural networks it is applied to the output of convolutional layers to reverse their spatial effect and generate enlarged feature maps. Like a convolutional layer, a transposed-convolution layer has its own set of trainable kernels; the size of its output is determined by the kernel size, stride and padding together with the size of the input being upsampled.
Its purpose is to undo the shrinking of feature maps that convolutional layers usually cause. For example, a 28x28 image may be reduced to 14x14, 7x7 or even smaller after feature extraction by convolutional layers. If we want to visualise these features or process them further at the original resolution, transposed convolution can be used to restore the feature maps.
Transposed convolution is usually realised either by spatially interpolating (upsampling) the feature map and then applying a convolution to refine it, or directly as a strided transposed convolution. In this process the kernel size, stride, padding and related parameters must be chosen so that the resulting feature map has the same size as the original input.
In short, transposed convolution is an indispensable part of convolutional neural networks. It allows the features extracted by the network to be visualised and processed further, which is very convenient for computer-vision applications.
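To make the size relationship concrete, here is a small sketch using PyTorch's nn.ConvTranspose2d; the layer sizes are assumptions chosen purely for illustration:

```python
import torch
import torch.nn as nn

# A 7x7 feature map with 16 channels, as might come out of an encoder
x = torch.randn(1, 16, 7, 7)

# Transposed convolution that upsamples 7x7 -> 14x14:
# out = (in - 1) * stride - 2 * padding + kernel_size + output_padding
#     = (7 - 1) * 2   - 2 * 1       + 4           + 0 = 14
up = nn.ConvTranspose2d(in_channels=16, out_channels=8,
                        kernel_size=4, stride=2, padding=1)
y = up(x)
print(y.shape)   # torch.Size([1, 8, 14, 14])
```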
Deconvolution, deconvolution after low-pass filtering, and Wiener filtering: Python implementations
Deconvolution is an image-processing technique used to recover image quality that has been degraded by blurring or other distortion processes.
Deconvolution after low-pass filtering and Wiener filtering are two commonly used deblurring approaches.
The Python implementations of these three methods are introduced below.
1. Deconvolution
The basic idea of deconvolution is to invert the blurring: given the blurred image and the blur kernel, estimate the original image, for example by dividing their spectra in the frequency domain (inverse filtering).
OpenCV's Python bindings do not provide a ready-made deconvolve function for images, so a simple implementation works directly in the frequency domain with NumPy:

```python
import cv2
import numpy as np

# Read the blurred image and define the blur kernel
blurred_image = cv2.imread('blurred_image.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0
blur_kernel = np.array([[1, 4, 6, 4, 1],
                        [4, 16, 24, 16, 4],
                        [6, 24, 36, 24, 6],
                        [4, 16, 24, 16, 4],
                        [1, 4, 6, 4, 1]], dtype=np.float64) / 256.0

# Naive inverse filtering in the frequency domain
kernel_pad = np.zeros_like(blurred_image)
kernel_pad[:5, :5] = blur_kernel
kernel_pad = np.roll(kernel_pad, (-2, -2), axis=(0, 1))   # centre the kernel
H = np.fft.fft2(kernel_pad)
G = np.fft.fft2(blurred_image)
deconv_image = np.real(np.fft.ifft2(G / (H + 1e-6)))      # epsilon avoids division by zero

# Show the blurred input and the deconvolved result
cv2.imshow('Blurred Image', blurred_image)
cv2.imshow('Deconvolved Image', np.clip(deconv_image, 0, 1))
cv2.waitKey(0)
cv2.destroyAllWindows()
```

2. Deconvolution after low-pass filtering applies a low-pass filter to the blurred image first and then performs the deconvolution on the filtered image.
Here a Gaussian filter is used as the low-pass filter.
In Python this can be done with OpenCV's GaussianBlur for the low-pass step, followed by the same frequency-domain deconvolution as above.
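The text above breaks off before the Wiener-filter item. As a hedged illustration of what such an implementation typically looks like, scikit-image's restoration module (assumed available; argument names may differ slightly between versions) provides ready-made Wiener and Richardson-Lucy deconvolution:

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import restoration

# Illustrative data: blur a random test image with a small uniform PSF
rng = np.random.default_rng(0)
image = rng.random((128, 128))
psf = np.ones((5, 5)) / 25.0
blurred = convolve2d(image, psf, mode='same', boundary='wrap')

# 3. Wiener deconvolution (the 'balance' parameter trades noise vs. ringing)
deblurred_wiener = restoration.wiener(blurred, psf, balance=0.1)

# Richardson-Lucy is a common iterative alternative
deblurred_rl = restoration.richardson_lucy(blurred, psf, num_iter=30)
```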
Interview questions and answers for a senior deep-learning framework R&D engineer
1. Please briefly introduce your background and experience.
Answer: I hold a master's degree in computer science and have spent the past five years working on deep-learning framework development. I took part in building a TensorFlow-based natural language processing library that implemented text classification, named-entity recognition and other features, and I also optimized model training efficiency.
2. Please share a specific challenge you encountered in framework development and describe how you solved it.
Answer: While optimizing computation-graph construction, I ran into repeated redundant computation that hurt performance.
I applied graph-pruning techniques to identify the duplicated nodes and share them, which reduced the amount of computation and improved the framework's efficiency.
3. Please explain in detail the difference between dynamic and static computation graphs, and how they are used in deep-learning frameworks.
Answer: A dynamic graph is rebuilt on every execution, which suits iteration and debugging during development.
A static graph is built before execution and is used in the optimization and deployment stages.
For example, PyTorch's dynamic graphs make it easy to try out new ideas quickly, while TensorFlow's static graphs are more efficient in production environments.
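As a hedged illustration of the two styles, the snippet below uses PyTorch for both, with torch.jit.script standing in for a statically compiled graph; the function is made up for illustration:

```python
import torch

def f(x):
    return torch.relu(x) * 2.0

# Eager / dynamic: the graph is effectively rebuilt on every call
y_eager = f(torch.randn(3))

# Scripted / static-style: the graph is compiled once and then reused
f_scripted = torch.jit.script(f)
y_static = f_scripted(torch.randn(3))
print(torch.allclose(f(torch.ones(3)), f_scripted(torch.ones(3))))   # True
```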
4. When you need to add a new optimizer or loss function to the framework, how would you design and implement it? Answer: First, I analyze the characteristics and mathematical formulation of the optimizer or loss function.
Then I create the corresponding class or module in the framework and implement the gradient computation used in backpropagation.
I make sure the new component plugs into the existing framework interfaces seamlessly and write unit tests to verify its correctness.
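As a hedged sketch of that workflow, here is a custom loss implemented as a small PyTorch module; the class name and formula are illustrative, not from the answer above:

```python
import torch
import torch.nn as nn

class HuberLikeLoss(nn.Module):
    """Illustrative custom loss: quadratic near zero, linear for large errors."""
    def __init__(self, delta: float = 1.0):
        super().__init__()
        self.delta = delta

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        err = (pred - target).abs()
        quad = 0.5 * err ** 2
        lin = self.delta * (err - 0.5 * self.delta)
        # Autograd derives the backward pass, so no manual gradient code is needed
        return torch.where(err <= self.delta, quad, lin).mean()

# Minimal unit test of the new component
loss_fn = HuberLikeLoss(delta=1.0)
pred = torch.tensor([0.0, 2.0], requires_grad=True)
target = torch.tensor([0.5, 0.0])
loss = loss_fn(pred, target)
loss.backward()                      # gradients flow into `pred`
print(loss.item(), pred.grad)
```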
5. In distributed training, how do you handle data parallelism and model parallelism? Give a concrete example.
Answer: Data parallelism means different devices process different data samples; model parallelism means different devices handle different parts of the model.
For example, in distributed training each device can train on its own batch of data while different layers of the model are assigned to different devices, which speeds up training.
6. Explain what automatic differentiation is and what role it plays in deep learning.
Answer: Automatic differentiation is a technique for computing derivatives automatically, including derivatives of composite and parameterized functions.
In deep learning it is what makes backpropagation practical: the derivatives of the loss with respect to the model parameters are computed automatically and then used to update and optimize the parameters.
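A minimal illustration using PyTorch's autograd, chosen here only as a familiar example of reverse-mode automatic differentiation:

```python
import torch

# d/dx of f(x) = x**3 + 2*x at x = 2 should be 3*x**2 + 2 = 14
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3 + 2 * x
y.backward()          # reverse-mode automatic differentiation
print(x.grad)         # tensor(14.)
```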
7. In a deep-learning framework, what is weight sharing? Give an application scenario.
Answer: Weight sharing means different parts of a network reuse the same weight parameters. Typical applications include convolutional layers, where one kernel's weights are reused at every spatial position, and Siamese networks, where two branches share one encoder to compare a pair of inputs.
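A minimal sketch of weight sharing in PyTorch, where one encoder instance is reused for two inputs (all names are illustrative):

```python
import torch
import torch.nn as nn

class SiameseScorer(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder instance -> one set of weights shared by both branches
        self.encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        za, zb = self.encoder(a), self.encoder(b)   # same weights used twice
        return torch.cosine_similarity(za, zb, dim=-1)

model = SiameseScorer()
a, b = torch.randn(4, 32), torch.randn(4, 32)
print(model(a, b).shape)    # torch.Size([4])
```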
I. The Richardson-Lucy algorithm
The R-L algorithm is one of the most widely used image restoration techniques; it is an iterative method.
MATLAB's deconvlucy() function can also be used within a variety of more complex image reconstruction algorithms, all of which are based on the Lucy-Richardson maximum-likelihood approach.
The R-L algorithm is an iterative nonlinear restoration algorithm derived from a maximum-likelihood formulation in which the image is modelled with Poisson statistics.
When the following iteration converges, the maximum likelihood of the model yields a satisfactory equation:
f̂_{k+1}(x,y) = f̂_k(x,y) [ h(x,y) ⊕ ( g(x,y) / ( h(x,y) * f̂_k(x,y) ) ) ]
where * denotes convolution, ⊕ denotes correlation, f̂ denotes the estimate of the undegraded image, and g and h are defined as before.
In the Image Processing Toolbox (IPT), the L-R algorithm is implemented by the function deconvlucy.
Its calling syntax is J = deconvlucy(I, PSF, NUMIT, DAMPAR, WEIGHT).
Here I is the input image and PSF is the point spread function.
The remaining parameters are optional: NUMIT is the number of iterations (default 10); DAMPAR is a scalar specifying the deviation threshold between the resulting image and the input image I below which damping occurs (default 0, i.e. no damping); WEIGHT is an array the same size as I that assigns each pixel a weight reflecting its recording quality (by default all pixels are weighted equally).
Image restoration source code:

```matlab
%% Deblurring Gray Images Using the Lucy-Richardson Algorithm
clc
clear
close all
I = imread('E:\');                 % path truncated in the source; the colour image is 512x512
I1 = rgb2gray(I);                  % grayscale image, 512x512
% figure,imshow(I),title('Original color image');
% figure,imshow(I1),title('Original gray image');
I2 = I1(1:2:end, 1:2:end);         % downsample the image to 256x256
figure,imshow(I2),title('Gray Image 256*256');
PSF = fspecial('gaussian', 5, 5);  % point spread function
Blurred = imfilter(I2, PSF, 'symmetric', 'conv');
figure; imshow(Blurred); title('Gaussian Blurred');
V = 0.002;                         % noise variance; value missing in the source, 0.002 is an assumed example
BlurredNoisy = imnoise(Blurred, 'gaussian', 0, V);
figure; imshow(BlurredNoisy); title('Blurred & Noisy');
K = size(I2);
WT = zeros(K);
WT(5:end-4, 5:end-4) = 1;
J1 = deconvlucy(BlurredNoisy, PSF);
% H1 = deconvlucy(BlurredNoisy, PSF, 5);            % 5 iterations
% H1_cell = deconvlucy({BlurredNoisy}, PSF, 5);
% H2_cell = deconvlucy(H1_cell, PSF);
% H2 = im2uint8(H2_cell{2});
J2 = deconvlucy(BlurredNoisy, PSF, 5,  im2uint8(3*sqrt(V)));      % 5 iterations
J3 = deconvlucy(BlurredNoisy, PSF, 15, im2uint8(3*sqrt(V)));      % 15 iterations
J4 = deconvlucy(BlurredNoisy, PSF, 25, im2uint8(3*sqrt(V)));      % 25 iterations
J5 = deconvlucy(BlurredNoisy, PSF, 40, im2uint8(3*sqrt(V)));      % 40 iterations
J6 = deconvlucy(BlurredNoisy, PSF, 20, im2uint8(3*sqrt(V)), WT);  % 20 iterations, with WEIGHT
J7 = deconvlucy(BlurredNoisy, PSF, 40, im2uint8(3*sqrt(V)), WT);  % 40 iterations, with WEIGHT
% figure, imshow(J1); title('J1:deconvlucy(A,PSF)');
% figure, imshow(H1); title('H1:Restored Image NUMIT=5');
% figure, imshow(H2), title('H2:Restored Image NUMIT=15');
figure, imshow(J2); title('J2:deconvlucy(A,PSF,NUMIT=5,DAMPAR)');
figure, imshow(J3); title('J3:deconvlucy(A,PSF,NUMIT=15,DAMPAR)');
figure, imshow(J4); title('J4:deconvlucy(A,PSF,NUMIT=25,DAMPAR)');
figure, imshow(J5); title('J5:deconvlucy(A,PSF,NUMIT=40,DAMPAR)');
figure, imshow(J6), title('J6:deconvlucy(A,PSF,NUMIT=20,DAMPAR,WEIGHT)');
figure, imshow(J7), title('J7:deconvlucy(A,PSF,NUMIT=40,DAMPAR,WEIGHT)');
```

II. Wiener filtering
The Wiener filtering method was first proposed by Wiener. In the field of image restoration it has been widely applied and developed because it is computationally cheap and gives good restoration results.
MATLAB deconvolution
Outline: 1. Overview of MATLAB deconvolution 2. The principle of MATLAB deconvolution 3. How to perform deconvolution in MATLAB 4. An application example of MATLAB deconvolution 5. Summary
Main text:
I. Overview of MATLAB deconvolution
MATLAB deconvolution is an image-processing technique implemented in MATLAB whose main purpose is to recover the image as it was before a convolution operation.
Convolution is widely used in image processing, for example in edge detection and filtering.
In some situations, however, the convolved image must be deconvolved to recover the original image.
MATLAB's deconvolution functionality exists to meet exactly this need.
II. The principle of MATLAB deconvolution
MATLAB deconvolution is based on inverting the convolution operation.
Convolution can be viewed as a mapping from the original image to the convolved image, and deconvolution as the inverse mapping from the convolved image back to the original.
Because convolution is a linear operation, it can in principle be written as a matrix-vector product y = K x, with K the convolution matrix built from the kernel, and x recovered by solving this linear system.
In practice the inversion is usually done in the frequency domain instead: by the convolution theorem, Y(u,v) = X(u,v) H(u,v), where X, Y and H are the Fourier transforms of the original image, the convolved image and the kernel, so X(u,v) = Y(u,v) / H(u,v) wherever H(u,v) ≠ 0, and the original image is obtained by an inverse transform. Because H often has near-zero values and real measurements contain noise, practical deconvolution adds regularization, as in Wiener filtering.
III. How to perform deconvolution in MATLAB
MATLAB provides built-in deconvolution functions that can be used directly. For one-dimensional sequences there is deconv; for images, the Image Processing Toolbox provides deconvwnr (Wiener deconvolution), deconvreg (regularized deconvolution), deconvlucy (Lucy-Richardson) and deconvblind (blind deconvolution).
1. deconvwnr: J = deconvwnr(I, PSF, NSR) deblurs image I using the point spread function PSF and an estimate NSR of the noise-to-signal power ratio; no explicit inverse matrix has to be formed.
2. deconvlucy: J = deconvlucy(I, PSF, NUMIT) deblurs I iteratively with PSF for NUMIT iterations.
(Note that conv2 performs two-dimensional convolution, i.e. the forward blurring operation, not deconvolution.)
IV. An application example of MATLAB deconvolution
Suppose an image is blurred with a Gaussian point spread function. The following snippet blurs MATLAB's bundled cameraman.tif test image and then restores it with Wiener deconvolution:

```matlab
X = im2double(imread('cameraman.tif'));    % original image
PSF = fspecial('gaussian', 7, 2);          % blur kernel (PSF)
Y = imfilter(X, PSF, 'conv', 'circular');  % blurred image: X convolved with the PSF
X_recovered = deconvwnr(Y, PSF, 0);        % Wiener deconvolution; NSR = 0 in the noise-free case
```

The recovered image X_recovered is very close to the original X, which shows that MATLAB's deconvolution functionality is effective.
Stochastic Deconvolution

James Gregson, Felix Heide, Matthias Hullin, Mushfiqur Rouf, Wolfgang Heidrich
The University of British Columbia

Abstract

We present a novel stochastic framework for non-blind deconvolution based on point samples obtained from random walks. Unlike previous methods that must be tailored to specific regularization strategies, the new Stochastic Deconvolution method allows arbitrary priors, including non-convex and data-dependent regularizers, to be introduced and tested with little effort. Stochastic Deconvolution is straightforward to implement, produces state-of-the-art results and directly leads to a natural boundary condition for image boundaries and saturated pixels.

1. Introduction

Image deconvolution or deblurring has applications in astronomy, microscopy, GIS and photography among other disciplines. As such it has seen considerable research in graphics and vision.

This paper presents Stochastic Deconvolution, a new framework for non-blind image deconvolution based on stochastic random walks. Stochastic Deconvolution is based on an adaptation of a recent stochastic optimization method for solving computed tomography problems [6] to the problem of deconvolution. The resulting algorithm amounts to a variant of coordinate-descent optimization, where the descent direction is chosen using a random walk that utilizes spatial coherence. By solving the image deblurring problem in this fashion, the Stochastic Deconvolution framework directly addresses several issues inherent in developing deconvolution algorithms:

• Ease of Implementation. Both the basic algorithm and its regularized variants are very straightforward to implement, and are based on only two very simple operations: splatting of the point spread function (PSF) and point-evaluation of the regularization term.
• Regularization Research. Because of the simplicity of implementing new regularizers, Stochastic Deconvolution enables research into new regularization terms and image priors for deconvolution through rapid experimentation. We demonstrate that the method works for a large array of regularizers, including ones that are smooth, non-smooth but convex, non-convex, discontinuous, and even data-dependent.
• Boundary Conditions. When capturing blurred images, information is propagated into the captured image where no data is captured. Deblurring in these regions requires some condition on scene content outside the captured region. An additional benefit of Stochastic Deconvolution is that it naturally handles these boundary conditions and can use a near-identical process to deal with saturated regions.
• Shift-variant Kernels. Finally, Stochastic Deconvolution generalizes naturally to deblurring problems with spatially varying kernels such as the synthetic camera shake example depicted in Figure 1.

The remainder of this paper is structured as follows: in the next section we discuss related work while providing an introduction to the deconvolution problem. We then introduce Stochastic Deconvolution in Section 3. Results are presented in Section 4, after which we conclude with a discussion of future research directions in Section 5.

2. Background and Related Work

In this section, we introduce the notation for the deconvolution problem and summarize the optimization framework from Stochastic Tomography [6], which we modify to solve deconvolution problems.

2.1. Image Deconvolution

Image deconvolution attempts to remove the blurs introduced when images are captured with real optical systems, including motion blur (e.g. [4, 15, 21, 21, 8, 7]) and depth-of-field blur (e.g. [12, 9, 3]). These artifacts are effectively captured by a point-spread function (PSF) k that measures the projection of a point light source on the captured image for a fixed set of camera parameters. In general, the PSF is a function of the projected coordinate of the source x, the distance of the source from the camera d, and the chromaticity of the image point (i.e. k = k(x, d, λ)). However, in many scenarios the PSF is assumed to be spatially invariant (i.e. independent of image position). The captured image q is then represented as the intrinsic (deblurred) image p convolved with the PSF:

Figure 1. Left: Snapshot of the algorithm in progress showing stochastic random walks that form the basis of Stochastic Deconvolution. Green points represent energy added while blue correspond to energy subtracted from the reconstruction. The algorithm automatically focuses sampling effort in regions where the largest improvements to the system energy are obtained. Right: Example of deblurring with a spatially varying (per-pixel) PSF simulating strong motion blur. Input image (center-left) and a sampling of per-pixel PSFs at full scale (center-right). Deblurred result using the Stochastic Deconvolution algorithm (right).

q = k ⊗ p.    (1)

The goal of deconvolution is to invert Equation 1 to obtain an estimate of the intrinsic image. In this paper, we are focusing on the non-blind version of this problem, where the PSF is assumed to be provided either by calibration or some form of PSF estimation (e.g. [4, 10]).

Traditional methods for solving deconvolution problems include Fourier-space division, the Wiener filter [18], as well as iterative methods such as Richardson-Lucy [16, 14]. All these methods produce significant artifacts in cases where certain image frequencies are completely eliminated by the blur, which is common especially in defocus blur.

Although the results can be improved significantly with variations of the original Richardson-Lucy algorithm (e.g. [21, 22, 5]), most state-of-the-art deconvolution methods take a slightly different approach. The basic problem from Equation 1 can be seen as a linear inverse problem that is usually ill-posed, since the PSF filters out some frequency components. General deconvolution methods define a quadratic fitting energy (either in the Fourier or image domain) that is minimized when the solution estimate convolved by the PSF equals the captured image, e.g. when defined in the image domain:

F_fit = ||q − k ⊗ p||₂²    (2)

Since the system is ill-posed, infinitely many solutions weakly minimize the fit energy (Equation 2). To address this, a prior or regularizer Γ(p) is typically added, weighted by λ, to give the system energy, Equation 3:

F = F_fit + λ Γ(p)    (3)

The regularizer penalizes solutions that do not conform to prior expectations on the solution, such as smoothness or sparsity. Good regularizers suppress ringing and noise without introducing other undesirable artifacts.

However, a problem arises because the regularizer typically changes the mathematical structure of the problem. In particular, priors favoring piecewise smooth solutions cannot be expressed as linear systems, making it necessary to develop highly specialized, regularizer-specific solvers (e.g. [12, 11, 19]). Developing such solvers is a demanding task, complicated further by problems with millions of unknowns.

The goal of our work is to design a simple, reasonably efficient, general-purpose deconvolution algorithm capable of handling effectively arbitrary priors. To do so, we adapt the random walk optimization strategy from Stochastic Tomography [6] and modify it to solve deconvolution problems. The result is a straightforward method for image deconvolution that allows the use of arbitrary priors with no change to the underlying algorithm. Another benefit of our method is natural handling of boundary conditions and saturated pixels.

2.2. Review of Stochastic Tomography

Recently, Gregson et al. [6] presented a stochastic random walk algorithm for solving tomographic reconstruction problems. The method minimizes a convex objective function F by continuously placing discrete point samples in a volume that each improve the objective. The change to the objective can be evaluated efficiently due to the small support of each sample. A local sample mutation strategy inspired by Metropolis-Hastings then focuses the sampling efforts in regions with high payoff, i.e. regions where samples have recently been placed successfully, leading to a method that makes many (10^7-10^9) low-cost incremental solution updates. However, as their work pointed out, the method deviates from Metropolis-Hastings in a number of key ways, including the fact that the random walk depends on the full history of the sampling process and thus does not represent a Markov Chain, but rather a stochastic coordinate-descent method that employs a Metropolis-Hastings style heuristic for picking the next coordinate axis to descend along. The final result of the tomographic reconstruction is given by the volume density of the placed samples. Algorithm 1 reproduces the full method for completeness' sake.

Algorithm 1: Stochastic Optimization Algorithm, from [6]

```
x_0 ← random()
for k = 1 to N do
    // New sample from x_{k-1} using transition PDF t(x_k | x_{k-1})
    x_k ← sample(x_{k-1}, t(x_k | x_{k-1}))
    a ← ΔF(x_k) / ΔF(x_{k-1})
    if ΔF(x_{k-1}) < 0 or random() ≤ a then
        // Record only samples that reduce the objective fn.
        if ΔF(x_k) > 0 then
            // Incorporate the sample into the output
            record(x_k)
        end if
    else
        // Keep exploring space from previous sample
        x_k = x_{k-1}
    end if
end for
```

A key advantage of Stochastic Tomography over other tomographic solvers is that the objective function F may contain arbitrary convex regularizers without a change in the fundamental algorithm, allowing for easy experimentation and testing of new priors and regularizers. Using L1 regularizers on several captured and synthetic examples, Gregson et al. demonstrated that Stochastic Tomography can be an effective method for regularized tomographic reconstruction.

One of the contributions of our work is to recognize that this framework for stochastic optimization with a random walk is in fact more general, and can be adapted to inverse problems other than tomography. This is significant since frequency content in measured quantities can differ significantly between deblurring and tomography, leading to more aggressive, often non-convex priors that are more difficult to optimize. The key features required of problems are i) to have a strong geometric structure in which many degrees of freedom can be explored by walks in a low-dimensional space, and ii) to have small stencils, so local updates can be performed efficiently.

Deconvolution fits this definition nicely since the PSF links the intrinsic and captured images geometrically in 2D and has relatively compact support, allowing efficient local updates. To apply this random walk framework, we only need to derive problem-specific functions for sample mutation, i.e. a transition probability t(x_k | x_{k-1}) for choosing sample x_k based on the previous sample location x_{k-1}, a method for keeping track of the change ΔF(x_k) of the objective function when placing a new sample x_k, and finally a method for accepting and recording a new sample record(x_k). The next section describes how to derive methods for these tasks in the case of deconvolution problems.

3. Stochastic Deconvolution

Stochastic Deconvolution begins from Equation 2, which is used as the data-fitting term of the system objective function (Equation 3), and from an initial estimate p(0) of the intrinsic image. We create a random walk of pixel locations x_k at which we add or remove an energy quantum e_d, thus generating a sequence p(k) of estimates of the intrinsic image:

p(0) = q    (4)
p(k) = p(k−1) ± e_d · δ_{x_k},    (5)

where δ_{x_k} is the characteristic function (Kronecker delta) for pixel x_k. Both positive and negative energies are tested for each sample location x_k in Algorithm 1, but only the sign causing the greatest improvement is kept.

Evaluating ΔF. The quantity ΔF(x_k) measures the change in the objective function if a given sample x_k with value ±e_d were to be accepted and added to the solution. In order to efficiently compute ΔF(x_k), we also keep track of a second sequence of images q(k) = k ⊗ p(k), which represents the observed image we would expect if p(k) was the intrinsic image. q(k) can be efficiently updated during the random walk:

q(0) = k ⊗ p(0) = k ⊗ q    (6)
q(k) = q(k−1) ± e_d (k ⊗ δ_{x_k})    (7)

In other words, q(k) can be updated by splatting k ⊗ δ_{x_k}, a shifted and mirrored copy of the PSF, at the sample location x_k. With this second image, the change in the data term F_fit can be computed efficiently through local updates. The change in the regularization energy is evaluated in an analogous manner, but is specific to the chosen regularizer. For example, if the total-variation (TV) function is used, then the change in λΓ is simply the sum of differences of affected gradient magnitudes, scaled by λ.

Mutation Strategy. The mutation function generates a new sample x_k from the previously accepted sample x_{k−1}. We use a simple, symmetric strategy where new samples are generated by a Gauss-distributed offset from the most recently accepted sample. The width of the Gaussian is a user parameter typically set to 50-100% of the PSF width, with the choice not being critical. Using a Gaussian distribution ensures ergodicity; this ensures the sampling process does not erroneously 'miss' areas.

We also add a Russian-roulette chain-terminating mutation where the sample is simply moved anywhere in the image domain with uniform probability. This mutation is applied with 1% probability, leading to sample chains with an expected length of 100. We have found this helps to overcome any start-up bias while also contributing to ergodicity.

Convergence. In each iteration, the Stochastic Deconvolution framework picks a single pixel in the image and checks if the objective can be improved by depositing energy in this pixel. This corresponds to picking a single degree of freedom and descending along that axis, making it a form of Coordinate Descent. What distinguishes Stochastic Deconvolution from other Coordinate Descent methods [13, 17] is that we use the random walk process to exploit spatial coherence in the deconvolution problem, and focus the computational effort on regions with sharp edges, where most work is to be done in deconvolution.

Coordinate Descent methods provably converge for smooth objective functions for a fixed step length so long as all possible descent directions (i.e. pixels) are examined with a finite probability. In our framework, this condition is met by the ergodicity of the sampling process in the limit of the number of samples.

For general, non-smooth objectives, no proof of convergence is available for Coordinate Descent, although convergence has been shown for specific, separable L1-regularized problems such as basis pursuit [17, 13]. In this paper, we show empirical evidence of the convergence of Stochastic Deconvolution for convex objectives, in particular a total variation (TV) regularized deconvolution problem (Section 4).

As with other optimization strategies, no theoretical results are available for the use of non-smooth, non-convex objectives with Coordinate Descent. Our results in Section 4 empirically show that Stochastic Deconvolution is competitive for such regularizers and even for a simple discontinuous and data-dependent prior.

Boundary Conditions and Saturated Pixels. The issue of boundary handling is difficult in deconvolution algorithms, since the process of capturing an image necessarily cuts off some of the data needed to deconvolve at the image boundaries. Stochastic Deconvolution naturally handles this situation by padding the input image by the PSF width and creating a mask that indicates which pixels are from the captured region versus from the boundary region. During the sampling process, samples are allowed to be placed anywhere within the image or padded regions, but when evaluating the change in system energy due to a sample, only samples flagged as interior have their data-fitting term F_fit evaluated, since the saturated and padding pixels have no valid captured value.

The same strategy can be used for other pixels where the measurements in the observed image are invalid, for example excessively bright pixels where the image sensor has been saturated. Ignoring the data term for these regions while enforcing the regularization term causes the method to perform a simple form of inpainting in the padded and saturated regions to improve the fit to the valid measurements.

Choosing e_d. To choose the deposition energy e_d, an initially large value is assigned, e.g. e_d = 0.05 (assuming pixel values in [0, 1]). An outer iteration of the sampling procedure from Listing 1 is then started with a total of one mutation per pixel, and the percentage of accepted samples computed. If this value falls below 7.5%, e_d is scaled by 0.75 before starting the next outer iteration. Outer iterations are continued in this manner until a set number of iterations is exhausted or convergence stalls. This simple adaptive choice for e_d works well in practice and frees the user from specifying a specific value.

Comparison with Stochastic Tomography. While Stochastic Deconvolution uses the same basic random walk as Stochastic Tomography [6], there are also a number of differences that are worth pointing out. First, adapting the method to deblurring requires very specific modifications to handle boundaries and saturation, while switching from continuously placed samples to discrete pixel locations. Perhaps more significantly, deblurring can be thought of as redistributing the energy from the blurred image to form the sharp intrinsic image. This makes the need for negative energy samples obvious, since both negative and positive samples are needed near edges. For Stochastic Tomography, such samples were only needed to prevent the algorithm from stalling due to start-up bias.

3.1. Regularization

We have implemented a host of different regularization strategies in the Stochastic Deconvolution framework but summarize here several that highlight the flexibility of the method.

Total Variation. Total variation (TV) regularizers correspond to an assumption of sparse gradients, that is, of piecewise-smooth solutions with occasional step discontinuities. This is incorporated by adding one of the following regularization energies at each pixel:

Γ_TV(x) = ||∇p(x)||₂    (8)
Γ_ATV(x) = ||∇p(x)||₁    (9)
Γ_MTV(x) = ( Σ_{i=1}^{3} ||∇p^(i)(x)||₂² )^{1/2}    (10)

where Equation 8 is the standard TV, Equation 9 is a simple anisotropic variant, and Equation 10 is an anisotropic adaptation to color images [20]. The gradient terms are evaluated with first-order finite differences. TV regularizers are simple and generally effective regularizers that have the benefit of being convex.

Sparse 1st and 2nd Order Derivatives. We have also implemented a version of the regularizer introduced by Levin et al. [12], which uses a fractional (0.8) norm to enforce a heavy-tailed distribution for first and second order derivatives. We refer to that paper and the code posted on the corresponding project page for details.

Gamma-corrected Sum of Absolute Differences. Finally, we introduce a new regularizer that is designed to better deal with dark image regions. A standard problem with deconvolution algorithms is that the deconvolution has to be performed in linear intensity space, but the results have to be gamma corrected for viewing. The gamma curve, however, stretches the low-intensity regions of the image disproportionately, thus amplifying noise in the solution. Together with the already low signal values, this results in poor signal-to-noise ratio in dark image regions.

Our approach is to introduce a regularizer that minimizes the data term in linear space, but ensures sparse gradients in the gamma-corrected image. To achieve this, we apply a gamma curve to the signal before evaluating a sum of absolute differences (SAD) regularizer in a 3×3 window W centered at x:

Γ(x) = Σ_{i ∈ W(x)} | p(x_i)^(1/γ) − p(x)^(1/γ) |,    (11)

with γ ≈ 2 to simulate a regular display gamma. This regularizer is non-convex and would be non-trivial to design and implement a custom solver for, but is easily added to the Stochastic Deconvolution framework.

Discontinuous and Data-Dependent Regularizers. In Section 4 we demonstrate the flexibility of the Stochastic Deconvolution framework by experimenting with a data-dependent regularizer.

4. Results

The following sections present results comparing different regularization strategies and objective functions, as well as comparing to several existing methods. Runtimes vary based on PSF and image size but are typically only a few minutes. As an example, a 0.7 megapixel monochrome image with a 21x21 PSF took 126 seconds with our unoptimized implementation.

Comparison with Existing Methods. Figures 2 and 3 show comparisons with the Coded Exposure Photography method of Raskar et al. [15]. With the addition of priors, Stochastic Deconvolution produces results with less noise and chromatic artifacts. However, we note that this is expected given that their method is effectively unregularized.

To illustrate the effect of different regularizers we show results for an enlarged area of the train image using the convex total-variation (TV) prior, the prior from Levin et al. [12], as well as the Gamma prior described in Section 3.1. All three priors reduce the noise and chromatic artifacts present in the original results; however, the two non-convex priors (Figures 2(d) and 2(e)) provide the smoothest results. We note that our Gamma prior accomplishes its intended aim of reducing noise levels in darker regions, as can be seen by zooming in on the window and roof regions of Figures 2(d) and 2(e). We stress that it was straightforward to implement all of these priors in our common framework, while developing specialized solvers for each method would have taken significantly more effort.

Figure 2. Comparison of Raskar et al. (left) vs. Stochastic Deconvolution (right) using the regularizer of Levin et al. Incorporation of the regularizer significantly reduces the noise in the reconstructed image while preserving image detail. Panels: (a) Stochastic Deconvolution, Gamma prior; (b) Raskar et al.; (c) SD, TV prior; (d) SD, Levin prior; (e) SD, Gamma prior.

Figure 3 shows a comparison between Raskar et al. and Stochastic Deconvolution for the white-car image. We use the Gamma prior, which reduces the noise and chromatic artifacts in dark regions such as the wheels and windows, while slightly improving the legibility of the text on the cab. We conclude that the Gamma prior is effective for preserving details and improving overall image quality.

Figure 3. Comparison of Coded Exposure Photography (Raskar et al.) (top) to Stochastic Deconvolution (bottom). Addition of a prior helps to suppress noise and chromatic artifacts present in the original results, while improving the legibility of the text. Panels: (a) Raskar et al.; (b) SD, Gamma prior.

Figure 4 shows a comparison of deconvolution results using the method of Fergus et al. [4] with Stochastic Deconvolution. Stochastic Deconvolution produces sharper results with reduced ringing. Stochastic Deconvolution is also able to reconstruct the entire image right up to the image boundary through the use of the stochastic boundary condition.

Finally, we show a comparison of deconvolution results between the relatively recent method for large-blur removal of Xu and Jia [19] with Stochastic Deconvolution using Levin et al.'s prior. Our results are very comparable for this challenging dataset; both methods show minor artifacts throughout the image, however the results are very similar in terms of overall quality. Figure 6 highlights the effect of the stochastic boundary condition for inpainting plausible content in boundary regions, including additional windows and staircase details.

Figure 4. Comparison with the method of Fergus et al. (top). The Stochastic Deconvolution result (bottom) shows substantially reduced ringing as well as much-improved handling of image boundaries due to the use of the stochastic boundary condition. Panels: (a) Fergus et al.; (b) Stochastic Deconvolution.

Figure 5. Non-blind deconvolution comparison with Xu and Jia (using kernels estimated by Xu and Jia) for the Roma image. Panels: (a) input; (b) Xu and Jia; (c) method of Levin et al.; (d) SD, Levin prior.

Defocus Blur and Lens Aberrations. We have also applied Stochastic Deconvolution to remove defocus blurs and lens aberrations in images taken with standard SLR cameras. Results comparing Stochastic Deconvolution using the Levin prior to the method of Levin et al. are shown in Figure 7. As expected, the results are very similar.

Figure 8 shows a color image blurred by a synthetic, wavelength-dependent PSF. Deblurring using the MTV regularizer results in a slightly sharper image with reduced chromatic artifacts. Optimizing for such priors has been the focus of several papers, e.g. [20, 1]; however, they are easily implemented within our framework.

Figure 6. Top row: inpainted details from the stochastic boundary condition; windows are added to a building on the boundary (red outline) and staircase details outside the image are introduced (yellow outline). Zoom in to the top-left figure for additional features. Bottom row: the method of Levin et al. rings for highly saturated pixels, while masking these from the reconstruction produces considerably smaller artifacts. Panels: (a) boundary inpainting; (b) detail; (c) method of Levin et al.; (d) SD, Levin prior.

Figure 7. Comparison of the method of Levin et al. with Stochastic Deconvolution for defocus blur from a standard SLR. Panels: (a) blurred input; (b) SD, Levin prior; (c) SD, Levin prior.

Figure 8. Comparison of per-channel TV (top) with the multichannel MTV prior (bottom) for a blur kernel with chromatic aberration. Image sharpness is slightly improved and color artifacts are reduced around the tree branches. Panels: (a) SD, TV prior; (b) SD, MTV prior.

Spatially Varying PSFs. Due to the local nature of Stochastic Deconvolution, the deblurring problem can be relaxed from deconvolution to deblurring with spatially varying kernels. While many other deconvolution methods require subdividing the image into tiles with approximately constant PSF, in Stochastic Deconvolution every pixel can have its own distinct PSF. Figure 1 shows a synthetic example of deblurring results for a strong, spatially varying motion blur with rotational components about the optical axis. For real motion blur, one could obtain spatially varying PSFs either using estimation methods such as the one by Hirsch et al. [7], or using IMU sensors that are becoming increasingly available in cellphones and cameras [8].

Data-Dependent Regularizers. We now provide a simple example of a discontinuous data-dependent regularizer. An image known to consist of only five colors is blurred and corrupted with noise. Deblurring with a TV regularizer yields an optimal peak PSNR value of 31.91 dB among all prior weights tested; however, by clustering the image colors periodically and adding the L1 distance to the nearest cluster, this can be improved by 0.7 dB while simultaneously reducing the weight on the TV term by an order of magnitude. Although something of a contrived example, many applications can exploit similar domain-specific knowledge, an example being magnetic resonance imaging (MRI), where given tissue types and machine settings produce gray values that are known a priori. Exploiting this knowledge can reduce reliance on heuristic priors, e.g. sparsity of gradients, and, as illustrated above and in Figure 9, quantitatively improves reconstruction quality. However, such discontinuous, discrete-choice regularizers are problematic to implement effectively in conventional, gradient-based solvers.

Figure 9. Simple data-dependent TV regularizer. Adding the L1 RGB distance to the nearest of five RGB clusters (computed by K-means) to a standard TV regularizer improves the best PSNR values by 0.7 dB over all parameter values. Panels: (a) blurred input; (b) TV, PSNR: 32.0 dB; (c) DD TV, PSNR: 32.7 dB.

Empirical Convergence. As a variant of coordinate descent, our method has no theoretical convergence guarantees for general, non-smooth objectives. However, we have performed empirical convergence tests for the anisotropic TV regularizer and compared final objective values to the provably convergent method of Chambolle and Pock [2]. For the blurred image in Figure 10(a), the objective value computed by Stochastic Deconvolution after 300 × N_pixels mutations was 26.42, while the objective value by the primal-dual method was 26.69. We attribute the minor discrepancy to differences in boundary handling and termination criteria between the two methods. The objective function history is shown in Figure 10(c), showing a fast initial convergence rate that gradually flattens, as might be expected from a stochastic sub-gradient method. With that said, visual convergence is in practice very quick; by iteration 50 the gray values are being adjusted by only 0.08% of the maximum range and are indistinguishable from the results at 300 iterations without careful, pixel-level examination.

5. Conclusions and Future Work

In this paper we have presented Stochastic Deconvolution, a new, general-purpose method for the deconvolution problem based on stochastic random walks. Stochastic Deconvolution is straightforward to implement, easily incorporates state-of-the-art priors and produces high-quality results. The performance of our unoptimized implementation is currently comparable to other recent methods such as the