Face Recognition Program Source Code (Revised Draft)
Face Recognition Code

1. Loading the image data

from os import listdir
from os.path import isdir
from PIL import Image
from matplotlib import pyplot
from numpy import savez_compressed
from numpy import asarray
from mtcnn.mtcnn import MTCNN

# extract a single face from a photograph with MTCNN
def extract_face(filename, required_size=(160, 160)):
    image = Image.open(filename)
    image = image.convert('RGB')
    pixels = asarray(image)
    detector = MTCNN()
    results = detector.detect_faces(pixels)
    x1, y1, width, height = results[0]['box']
    x1, y1 = abs(x1), abs(y1)
    x2, y2 = x1 + width, y1 + height
    face = pixels[y1:y2, x1:x2]
    image = Image.fromarray(face)
    image = image.resize(required_size)
    face_array = asarray(image)
    return face_array

# load all face images found in one directory
def load_faces(directory):
    faces = list()
    for filename in listdir(directory):
        path = directory + filename
        face = extract_face(path)
        faces.append(face)
    return faces

# load a dataset that contains one subdirectory per class
def load_dataset(directory):
    X, y = list(), list()
    for subdir in listdir(directory):
        path = directory + subdir + '/'
        if not isdir(path):
            continue
        faces = load_faces(path)
        labels = [subdir for _ in range(len(faces))]
        print('>loaded %d examples for class: %s' % (len(faces), subdir))
        X.extend(faces)
        y.extend(labels)
    return asarray(X), asarray(y)

trainX, trainy = load_dataset('5-celebrity-faces-dataset/train/')
print(trainX.shape, trainy.shape)
testX, testy = load_dataset('5-celebrity-faces-dataset/val/')
savez_compressed('5-celebrity-faces-dataset.npz', trainX, trainy, testX, testy)

2. Extracting the face features (embeddings)

from numpy import load
from numpy import expand_dims
from numpy import asarray
from numpy import savez_compressed
from keras.models import load_model

# get the FaceNet embedding for one standardized face
def get_embedding(model, face_pixels):
    face_pixels = face_pixels.astype('float32')
    mean, std = face_pixels.mean(), face_pixels.std()
    face_pixels = (face_pixels - mean) / std
    samples = expand_dims(face_pixels, axis=0)
    yhat = model.predict(samples)
    return yhat[0]

data = load('5-celebrity-faces-dataset.npz')
trainX, trainy, testX, testy = data['arr_0'], data['arr_1'], data['arr_2'], data['arr_3']
print('Loaded: ', trainX.shape, trainy.shape, testX.shape, testy.shape)
model = load_model('facenet_keras.h5')
print('Loaded Model')
newTrainX = list()
for face_pixels in trainX:
    embedding = get_embedding(model, face_pixels)
    newTrainX.append(embedding)
newTrainX = asarray(newTrainX)
print(newTrainX.shape)
newTestX = list()
for face_pixels in testX:
    embedding = get_embedding(model, face_pixels)
    newTestX.append(embedding)
newTestX = asarray(newTestX)
print(newTestX.shape)
# save arrays to one file in compressed format
savez_compressed('5-celebrity-faces-embeddings.npz', newTrainX, trainy, newTestX, testy)

3. Recognition

from numpy import load
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVC

data = load('5-celebrity-faces-embeddings.npz')
trainX, trainy, testX, testy = data['arr_0'], data['arr_1'], data['arr_2'], data['arr_3']
print('Dataset: train=%d, test=%d' % (trainX.shape[0], testX.shape[0]))
# normalize input vectors
in_encoder = Normalizer(norm='l2')
trainX = in_encoder.transform(trainX)
testX = in_encoder.transform(testX)
# label encode targets
out_encoder = LabelEncoder()
out_encoder.fit(trainy)
trainy = out_encoder.transform(trainy)
testy = out_encoder.transform(testy)
# fit model
model = SVC(kernel='linear', probability=True)
model.fit(trainX, trainy)
# predict
yhat_train = model.predict(trainX)
yhat_test = model.predict(testX)
# score
score_train = accuracy_score(trainy, yhat_train)
score_test = accuracy_score(testy, yhat_test)
# summarize
print('Accuracy: train=%.3f, test=%.3f' % (score_train*100, score_test*100))

# test the model on a random example from the test dataset
from random import choice
from numpy import load
from numpy import expand_dims
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVC
from matplotlib import pyplot

data = load('5-celebrity-faces-dataset.npz')
testX_faces = data['arr_2']
data = load('5-celebrity-faces-embeddings.npz')
trainX, trainy, testX, testy = data['arr_0'], data['arr_1'], data['arr_2'], data['arr_3']
in_encoder = Normalizer(norm='l2')
trainX = in_encoder.transform(trainX)
testX = in_encoder.transform(testX)
out_encoder = LabelEncoder()
out_encoder.fit(trainy)
trainy = out_encoder.transform(trainy)
testy = out_encoder.transform(testy)
model = SVC(kernel='linear', probability=True)
model.fit(trainX, trainy)
# pick a random face from the test set
selection = choice([i for i in range(testX.shape[0])])
random_face_pixels = testX_faces[selection]
random_face_emb = testX[selection]
random_face_class = testy[selection]
random_face_name = out_encoder.inverse_transform([random_face_class])
# predict its class and class probability
samples = expand_dims(random_face_emb, axis=0)
yhat_class = model.predict(samples)
yhat_prob = model.predict_proba(samples)
class_index = yhat_class[0]
class_probability = yhat_prob[0, class_index] * 100
predict_names = out_encoder.inverse_transform(yhat_class)
print('Predicted: %s (%.3f)' % (predict_names[0], class_probability))
print('Expected: %s' % random_face_name[0])
# plot the face with the predicted name as the title
pyplot.imshow(random_face_pixels)
title = '%s (%.3f)' % (predict_names[0], class_probability)
pyplot.title(title)
pyplot.show()
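The three scripts above stop at evaluating the classifier and spot-checking one random test face. As a hedged convenience sketch (not part of the original tutorial), the pieces can be combined into one helper that labels a single new photo; it assumes the files saved above ('5-celebrity-faces-embeddings.npz', 'facenet_keras.h5') plus the extract_face() and get_embedding() functions defined earlier, and 'some_new_photo.jpg' is a placeholder name:

from numpy import load, expand_dims
from keras.models import load_model
from sklearn.preprocessing import LabelEncoder, Normalizer
from sklearn.svm import SVC

def recognize(filename):
    # refit the SVM from the saved embeddings
    data = load('5-celebrity-faces-embeddings.npz')
    trainX, trainy = data['arr_0'], data['arr_1']
    in_encoder = Normalizer(norm='l2')
    trainX = in_encoder.transform(trainX)
    out_encoder = LabelEncoder()
    trainy_enc = out_encoder.fit_transform(trainy)
    clf = SVC(kernel='linear', probability=True)
    clf.fit(trainX, trainy_enc)
    # embed the new face and classify it
    facenet = load_model('facenet_keras.h5')
    face = extract_face(filename)           # MTCNN crop, resized to 160x160
    emb = get_embedding(facenet, face)      # FaceNet embedding vector
    emb = in_encoder.transform(expand_dims(emb, axis=0))
    pred = clf.predict(emb)[0]
    prob = clf.predict_proba(emb)[0, pred] * 100
    return out_encoder.inverse_transform([pred])[0], prob

# name, confidence = recognize('some_new_photo.jpg')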
Face recognition source code packages:

※ Face detection (article + program), with very complete technical documentation and code: 『人脸检测(文章+程序).rar (1.27 MB)』

※ Complete source code of a face detection and recognition system under Matlab: 『Face-Recognition-Detection.rar (393.19 KB)』 Note: this face detection and recognition system was developed under Matlab 7.0.1 and is well worth studying.

※ Matlab implementation of face and eye detection, localization and recognition based on colour segmentation: 『Face-Eye-Detection.part1.rar (1.91 MB), Face-Eye-Detection.part2.rar (152.54 KB)』 Note: this is a Matlab program that detects and localizes faces and eyes. The algorithm used is colour segmentation based on skin colour. The files in the attachment include eyematch.m, eyematch2.m, face.m, findeye.m, skin.m, k001.JPG, and so on.

※ Complete C++ face detection source code, including action recognition: 『FaceDetection.rar (875.84 KB)』 The purpose of this package is to provide an SSE-optimized C++ library for face detection, developed by the author, which you can drop straight into your video surveillance system. The techniques involved include wavelet analysis, dimensionality-reduction models (PCA, LDA, ICA), artificial neural networks (ANN), support vector machines (SVM), SSE programming, image processing, histogram equalization, image filtering, C++ programming, and some other background knowledge on face detection.

※ Source code of a face detection system based on Gabor feature extraction and artificial intelligence: 『fdp5final.rar (185.56 KB)』 Usage: 1. Copy all files into the MATLAB working directory (make sure the Image Processing Toolbox and the neural network toolbox are installed). 2. Find the file "main.m". 3. Run it from the command line. 4. Click "Train Network" and wait for the program to finish training on the samples. 5. Click "Test on Photos", choose a .jpg image, and run the recognition.
Sample code for face detection in Android (static and dynamic)

(1) Background. In August 2006 Google acquired Neven Vision, a company holding more than ten image-recognition patents for mobile devices, and the image-recognition technology obtained this way was incorporated into Android. Android's face detection uses the native library android/external/neven/ at the bottom layer; the framework layer is frameworks/base/media/java/android/media/FaceDetector.java. Limitations of the Java-level interface: A. it only accepts data in Bitmap format; B. it can only detect faces whose eye distance is greater than 20 pixels (this can, of course, be changed in the framework layer); C. it can only detect the position of a face (the midpoint between the eyes and the eye distance); it cannot match a face against a given template (i.e., look up a specific person). Applications of this technology: A. adding face detection to the Camera app, so that the viewfinder can mark the face region and, if the hardware supports it, focus on the face; B. adding face-based indexing to the gallery app, so that albums can be indexed, grouped, and searched by face.

(2) The main methods the Neven library exposes to the upper layers: A. android.media.FaceDetector.FaceDetector(int width, int height, int maxFaces): Creates a FaceDetector, configured with the size of the images to be analysed and the maximum number of faces that can be detected. These parameters cannot be changed once the object is constructed. B. int android.media.FaceDetector.findFaces(Bitmap bitmap, Face[] faces): Finds all the faces found in a given Bitmap. The supplied array is populated with a FaceDetector.Face for each face found. The bitmap must be in 565 format (for now).

(3) Static image processing example: the bitmap is processed, the faces in it are captured, and each detected face is marked with a green frame; multiple faces produce multiple green frames.
1. Face detection with OpenCV

The face detection program performs three tasks: loading the classifier, loading the image to be examined, and detecting and marking the faces. This program uses the object-detection classifier stored in OpenCV's "haarcascade_frontalface_alt.xml" file; it is loaded with the cvLoad function and then cast to the appropriate type. The OpenCV function for detecting objects in an image is cvHaarDetectObjects: using a cascade classifier trained for a particular object (such as faces), it finds the rectangular regions of the image that contain the object and returns those regions as a sequence of rectangles. After use, the classifier must be released explicitly with cvReleaseHaarClassifierCascade. See the OpenCV manual for the prototypes of these functions.
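The section above describes the legacy OpenCV C API used by the MFC program. As an illustration only (not the program's own code), the same load-classifier / load-image / detect-and-mark flow looks like this with OpenCV's Python bindings; 'photo.jpg' is a placeholder file name:

import cv2

# load the trained classifier (the same XML file named in the text)
cascade = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
# load the image to be examined
img = cv2.imread('photo.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# detect and mark: each hit is returned as a rectangle (x, y, w, h)
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3):
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow('faces', img)
cv2.waitKey(0)
# unlike the C API, no explicit release call is needed in Python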
2. Implementation

1) Create a new Visual C++ MFC project named "FaceDetection" and choose "single document" as the application type. Remove the unneeded menu items and add a "人脸检测" (face detection) item with the ID "ID_FaceDetected", then generate the message-map handler for this menu item.

2) Add the following code (shown on a gray background in the original document) to the "FaceDetectionView.h" header file:

// Nanjing Forest Police College, Jiang Linsheng
// FaceDetectionView.h : interface of the CFaceDetectionView class
#pragma once
#include "cv.h"
#include "highgui.h"

class CFaceDetectionView : public CView
{
protected:  // create from serialization only
    CFaceDetectionView();
    DECLARE_DYNCREATE(CFaceDetectionView)
public:
    CFaceDetectionDoc* GetDocument() const;
    CvHaarClassifierCascade* cascade;  // the cascade classifier
    CvMemStorage* storage;
    void detect_and_draw(IplImage* img);
    IplImage* src;  // the loaded image

3) ...
How to implement face recognition in Python (with source code)

Tools and libraries: Python 3.x, OpenCV (cv2) 4.5.2, NumPy 1.20.3, face_recognition 1.3.0. To install these packages, use the following commands:

pip install numpy opencv-python

To install face_recognition, first install the dlib package:

pip install dlib

Now install the face recognition module with:

pip install face_recognition

Download the face recognition Python code: please download the source code of the Python face recognition project. Project dataset: we can use our own dataset to complete this face recognition project. For this project, let's take the popular American TV series "Friends" as the dataset. The dataset is included with the face recognition project code you downloaded in the previous section.

Steps for building the face recognition model. Before going further, let's be clear about what face recognition and face detection mean. Face recognition is the process of identifying or verifying a person's face from photos and video frames. Face detection is the process of locating and extracting faces (position and size) in an image so that they can be used by the recognition algorithm; detection methods are used to locate the uniquely identifying features of a face in an image. In most cases the face images have already been extracted, cropped, scaled, and converted to grayscale. Face recognition involves three steps: face detection, feature extraction, and face recognition. OpenCV is an open-source library written in C++; it contains implementations of various algorithms for computer vision tasks as well as of deep neural networks.
1. Prepare the dataset. Create two directories, train and test. For each actor, pick one image from the internet and download it into our "train" directory. Make sure the chosen images show the facial features clearly so that the classifier can be trained well. To test the model, take a picture containing the entire cast and put it in our "test" directory. For convenience, the training and test data are included with the project code.

2. Train the model. First import the necessary modules:

import face_recognition as fr
import cv2
import numpy as np
import os

The face_recognition library contains implementations of various utilities that help with the face recognition process.
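The rest of the tutorial's training code is not included in this extract. As a hedged sketch of the typical next steps with the face_recognition API (the directory names follow the train/test layout described above; 'group_photo.jpg' and the 0.6 distance threshold are assumptions, not the project's exact code):

import os
import face_recognition as fr
import cv2
import numpy as np

# encode one known face per actor from the images in the train directory
known_encodings, known_names = [], []
for filename in os.listdir('train'):
    image = fr.load_image_file(os.path.join('train', filename))
    encodings = fr.face_encodings(image)
    if encodings:                                   # skip files where no face was found
        known_encodings.append(encodings[0])
        known_names.append(os.path.splitext(filename)[0])

# locate every face in the test picture and label it with the closest known face
test_image = fr.load_image_file(os.path.join('test', 'group_photo.jpg'))
locations = fr.face_locations(test_image)
encodings = fr.face_encodings(test_image, locations)
for (top, right, bottom, left), encoding in zip(locations, encodings):
    distances = fr.face_distance(known_encodings, encoding)
    best = int(np.argmin(distances))
    name = known_names[best] if distances[best] < 0.6 else 'Unknown'
    cv2.rectangle(test_image, (left, top), (right, bottom), (0, 255, 0), 2)
    cv2.putText(test_image, name, (left, top - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)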
function varargout = FR_Processed_histogram(varargin)
% This algorithm is based on processed-histogram features.
% The histogram of the image is calculated and then bin formation is done on the
% basis of the mean of successive gray-level frequencies. The training is done on
% the odd images of 40 subjects (200 images out of 400 images).
% The recognition rate of the implemented algorithm is 99.75% (recognition fails
% on image number 4 of subject 17).
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @FR_Processed_histogram_OpeningFcn, ...
                   'gui_OutputFcn',  @FR_Processed_histogram_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

%--------------------------------------------------------------------------
% --- Executes just before FR_Processed_histogram is made visible.
function FR_Processed_histogram_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to FR_Processed_histogram (see VARARGIN)

% Choose default command line output for FR_Processed_histogram
handles.output = hObject;
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes FR_Processed_histogram wait for user response (see UIRESUME)
% uiwait(handles.figure1);

global total_sub train_img sub_img max_hist_level bin_num form_bin_num;
total_sub = 40;
train_img = 200;
sub_img = 10;
max_hist_level = 256;
bin_num = 9;
form_bin_num = 29;

%--------------------------------------------------------------------------
% --- Outputs from this function are returned to the command line.
function varargout = FR_Processed_histogram_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

%--------------------------------------------------------------------------
% --- Executes on button press in train_button.
function train_button_Callback(hObject, eventdata, handles)
% hObject    handle to train_button (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global train_processed_bin;
global total_sub train_img sub_img max_hist_level bin_num form_bin_num;

train_processed_bin(form_bin_num, train_img) = 0;
K = 1;
train_hist_img = zeros(max_hist_level, train_img);
for Z = 1:1:total_sub
    for X = 1:2:sub_img   % train on the odd-numbered images of each subject
        I = imread( strcat('ORL\S', int2str(Z), '\', int2str(X), '.bmp') );
        [rows cols] = size(I);
        for i = 1:1:rows
            for j = 1:1:cols
                if( I(i,j) == 0 )
                    train_hist_img(max_hist_level, K) = train_hist_img(max_hist_level, K) + 1;
                else
                    train_hist_img(I(i,j), K) = train_hist_img(I(i,j), K) + 1;
                end
            end
        end
        K = K + 1;
    end
end
[r c] = size(train_hist_img);
sum = 0;
for i = 1:1:c
    K = 1;
    for j = 1:1:r
        if( (mod(j, bin_num)) == 0 )
            sum = sum + train_hist_img(j,i);
            train_processed_bin(K,i) = sum/bin_num;
            K = K + 1;
            sum = 0;
        else
            sum = sum + train_hist_img(j,i);
        end
    end
    train_processed_bin(K,i) = sum/bin_num;
end
display ('Training Done')
save 'train' train_processed_bin;

%--------------------------------------------------------------------------
% --- Executes on button press in Testing_button.
function Testing_button_Callback(hObject, eventdata, handles)
% hObject    handle to Testing_button (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global train_img max_hist_level bin_num form_bin_num;
global train_processed_bin;
global filename pathname I
load 'train'

test_hist_img(max_hist_level) = 0;
test_processed_bin(form_bin_num) = 0;
[rows cols] = size(I);
for i = 1:1:rows
    for j = 1:1:cols
        if( I(i,j) == 0 )
            test_hist_img(max_hist_level) = test_hist_img(max_hist_level) + 1;
        else
            test_hist_img(I(i,j)) = test_hist_img(I(i,j)) + 1;
        end
    end
end
[r c] = size(test_hist_img);
sum = 0;
K = 1;
for j = 1:1:c
    if( (mod(j, bin_num)) == 0 )
        sum = sum + test_hist_img(j);
        test_processed_bin(K) = sum/bin_num;
        K = K + 1;
        sum = 0;
    else
        sum = sum + test_hist_img(j);
    end
end
test_processed_bin(K) = sum/bin_num;
sum = 0;
K = 1;
for y = 1:1:train_img
    for z = 1:1:form_bin_num
        sum = sum + abs( test_processed_bin(z) - train_processed_bin(z,y) );
    end
    img_bin_hist_sum(K,1) = sum;
    sum = 0;
    K = K + 1;
end
[temp M] = min(img_bin_hist_sum);
M = ceil(M/5);
getString_start = strfind(pathname,'S');
getString_start = getString_start(end)+1;
getString_end = strfind(pathname,'\');
getString_end = getString_end(end)-1;
subjectindex = str2num(pathname(getString_start:getString_end));
if (subjectindex == M)
    axes (handles.axes3)
    % image no: 5 is shown for visualization purpose
    imshow(imread(STRCAT('ORL\S', num2str(M), '\5.bmp')))
    msgbox('Correctly Recognized');
else
    display ([ 'Error==> Testing Image of Subject >> ' num2str(subjectindex) ' matches with the image of subject >> ' num2str(M)])
    axes (handles.axes3)
    % image no: 5 is shown for visualization purpose
    imshow(imread(STRCAT('ORL\S', num2str(M), '\5.bmp')))
    msgbox('Incorrectly Recognized');
end
display('Testing Done')

%--------------------------------------------------------------------------
function box_Callback(hObject, eventdata, handles)
% hObject    handle to box (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of box as text
%        str2double(get(hObject,'String')) returns contents of box as a double

%--------------------------------------------------------------------------
% --- Executes during object creation, after setting all properties.
function box_CreateFcn(hObject, eventdata, handles)
% hObject    handle to box (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

%--------------------------------------------------------------------------
% --- Executes on button press in Input_Image_button.
function Input_Image_button_Callback(hObject, eventdata, handles)
% hObject    handle to Input_Image_button (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global filename pathname I
[filename, pathname] = uigetfile('*.bmp', 'Test Image');
axes(handles.axes1)
imgpath = STRCAT(pathname, filename);
I = imread(imgpath);
imshow(I)

%--------------------------------------------------------------------------
% --- Executes during object creation, after setting all properties.
function axes3_CreateFcn(hObject, eventdata, handles)
% hObject    handle to axes3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called
% Hint: place code in OpeningFcn to populate axes3

% Programmed by Usman Qayyum
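Most of the file above is GUIDE boilerplate; the algorithm itself is small: the 256-level gray histogram is averaged in runs of 9 successive levels to give a 29-value feature, and a test image is assigned to the training image with the smallest L1 feature distance. A rough numpy sketch of just that idea (assuming, as in the code, 9 levels per bin and 5 training images per subject, and ignoring the code's quirk of counting gray level 0 in the last histogram slot):

import numpy as np

def processed_hist(gray_img, bin_size=9, levels=256):
    # 256-bin gray-level histogram, then average each run of bin_size successive
    # level counts (the final, shorter run included), giving a 29-value feature
    hist = np.bincount(gray_img.ravel(), minlength=levels)[:levels].astype(float)
    groups = [hist[i:i + bin_size] for i in range(0, levels, bin_size)]
    return np.array([g.sum() / bin_size for g in groups])

def nearest_subject(test_feat, train_feats, images_per_subject=5):
    # L1 distance to every training feature; the index of the best match is
    # mapped back to a subject number, mirroring M = ceil(M/5) in the MATLAB code
    dists = np.abs(train_feats - test_feat).sum(axis=1)
    return int(np.argmin(dists)) // images_per_subject + 1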
A detailed walkthrough of face recognition code

As technology advances, face recognition has gradually been applied in many areas of daily life, such as security surveillance, face-based payment, and smart door access; it has become an indispensable technology. So how is face recognition actually implemented in these applications? That comes down to how the code is written.

1. Code overview. Implementing face recognition relies on computer vision and machine learning; there are several ways to do it, and one of the most common is to program it with the OpenCV library and the Python language. This article uses OpenCV and Python as the example. OpenCV is an open-source computer vision library that provides many functions for image processing and computer vision, including face detection, face recognition, object tracking, and image segmentation. Python is a high-level programming language that is easy to learn and use and has become one of the most popular languages in the computer vision field.
2. Implementation steps

1. Import the required libraries and modules. First import the OpenCV library and the related modules such as cv2 and numpy:

import cv2
import numpy as np

2. Face detection. Next, detect the faces in the input image. OpenCV provides several face detection methods; the most commonly used is the cascade classifier based on Haar features. This method scans the input image with a trained classifier; wherever a region with the expected features is found, that region is taken to contain a face. The following code implements Haar cascade face detection:

faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
img = cv2.imread('test.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

Here faceCascade is the pre-trained cascade classifier model; calling detectMultiScale runs face detection on the input image, where scaleFactor controls how much the image is scaled between detection passes, minNeighbors controls how many neighbouring detections are needed to keep a face, and minSize is the minimum face size to detect.
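To mark and display the detections, the snippet above is typically finished like this (an illustrative completion, not part of the original article); it reuses the img and faces variables defined above:

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imshow('Detected faces', img)
cv2.waitKey(0)
cv2.destroyAllWindows()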
Analysis of the OpenCV LBPH face recognition algorithm

1. Background and theoretical basis. Face recognition means matching a face to be identified against a particular face in a face database (much like fingerprint identification); its purpose is identification. The term should be distinguished from face detection, which locates the faces within a picture and is therefore a search task. Starting with OpenCV 2.4, a new class, FaceRecognizer, was added; it is dedicated to face recognition and makes it easy to run recognition experiments.

The original LBP operator is defined on a 3x3 window: taking the centre pixel of the window as the threshold, the gray values of its 8 neighbouring pixels are compared with it; if a neighbour's value is greater than or equal to the centre value, that position is marked 1, otherwise 0. In this way the comparison of the 8 points in the 3x3 neighbourhood produces an 8-bit binary number (usually converted to a decimal number, the LBP code, with 256 possible values), which becomes the LBP value of the window's centre pixel and is used to describe the texture of that region. A small numpy sketch of this basic operator is given below.
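A minimal numpy sketch of the basic 3x3 operator described above (the neighbour ordering is one arbitrary but fixed choice; any fixed order yields a valid 8-bit code):

import numpy as np

def lbp_basic(gray):
    # basic 3x3 LBP: compare the 8 neighbours of every interior pixel with the
    # centre pixel and pack the results into an 8-bit code (values 0..255)
    g = gray.astype(np.int32)
    h, w = g.shape
    centre = g[1:-1, 1:-1]
    # one fixed clockwise neighbour order starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= centre).astype(np.int32) << bit
    return code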
After the original LBP operator was proposed, researchers kept improving and optimizing it.

1.1 The circular LBP operator. The biggest drawback of the basic LBP operator is that it only covers a small area within a fixed radius, which clearly cannot meet the needs of textures of different sizes and frequencies. To adapt to texture features at different scales, Ojala et al. improved the operator by extending the 3x3 neighbourhood to an arbitrary neighbourhood and replacing the square neighbourhood with a circular one. The improved operator allows any number of sampling points within a circular neighbourhood of radius R, giving, for example, an LBP operator with P sampling points in a circular region of radius R; this circular LBP operator is exactly what OpenCV uses.

1.2 Rotation-invariant patterns. From the definition it can be seen that the LBP operator is invariant to gray-level changes but not to rotation: rotating the image produces different LBP values. Maenpaa et al. extended the LBP operator further and proposed a rotation-invariant version: the circular neighbourhood is rotated repeatedly, producing a series of LBP values, and the minimum of them is taken as the LBP value of the neighbourhood. For example, the 8 rotated versions of one such pattern all reduce, after this rotation-invariant processing, to the same LBP value of 15, as the sketch below illustrates.
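A small sketch of the rotation-invariant value of a single 8-bit LBP code (pure-Python illustration of the minimum-over-rotations rule described above):

def lbp_rotation_invariant(code, bits=8):
    # circularly rotate the bit pattern and keep the minimum of all rotations
    best = code
    for k in range(1, bits):
        rotated = ((code >> k) | (code << (bits - k))) & ((1 << bits) - 1)
        best = min(best, rotated)
    return best

# example: the pattern 11110000b and all of its rotations map to 00001111b = 15
# print(lbp_rotation_invariant(0b11110000))   # -> 15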
1. Colour space conversion

function [r,g] = rgb_RGB(Ori_Face)
R = Ori_Face(:,:,1);
G = Ori_Face(:,:,2);
B = Ori_Face(:,:,3);
R1 = im2double(R);  % convert uint8 to double
G1 = im2double(G);
B1 = im2double(B);
RGB = R1 + G1 + B1;
row = size(Ori_Face,1);     % number of rows
column = size(Ori_Face,2);  % number of columns
for i = 1:row
    for j = 1:column
        rr(i,j) = R1(i,j)/RGB(i,j);
        gg(i,j) = G1(i,j)/RGB(i,j);
    end
end
rrr = mean(rr);
r = mean(rrr);
ggg = mean(gg);
g = mean(ggg);

2. Mean and covariance

t1 = imread('D:\matlab\皮肤库\1.jpg');  [r1,g1] = rgb_RGB(t1);
t2 = imread('D:\matlab\皮肤库\2.jpg');  [r2,g2] = rgb_RGB(t2);
t3 = imread('D:\matlab\皮肤库\3.jpg');  [r3,g3] = rgb_RGB(t3);
t4 = imread('D:\matlab\皮肤库\4.jpg');  [r4,g4] = rgb_RGB(t4);
t5 = imread('D:\matlab\皮肤库\5.jpg');  [r5,g5] = rgb_RGB(t5);
t6 = imread('D:\matlab\皮肤库\6.jpg');  [r6,g6] = rgb_RGB(t6);
t7 = imread('D:\matlab\皮肤库\7.jpg');  [r7,g7] = rgb_RGB(t7);
t8 = imread('D:\matlab\皮肤库\8.jpg');  [r8,g8] = rgb_RGB(t8);
t9 = imread('D:\matlab\皮肤库\9.jpg');  [r9,g9] = rgb_RGB(t9);
t10 = imread('D:\matlab\皮肤库\10.jpg');  [r10,g10] = rgb_RGB(t10);
t11 = imread('D:\matlab\皮肤库\11.jpg');  [r11,g11] = rgb_RGB(t11);
t12 = imread('D:\matlab\皮肤库\12.jpg');  [r12,g12] = rgb_RGB(t12);
t13 = imread('D:\matlab\皮肤库\13.jpg');  [r13,g13] = rgb_RGB(t13);
t14 = imread('D:\matlab\皮肤库\14.jpg');  [r14,g14] = rgb_RGB(t14);
t15 = imread('D:\matlab\皮肤库\15.jpg');  [r15,g15] = rgb_RGB(t15);
t16 = imread('D:\matlab\皮肤库\16.jpg');  [r16,g16] = rgb_RGB(t16);
t17 = imread('D:\matlab\皮肤库\17.jpg');  [r17,g17] = rgb_RGB(t17);
t18 = imread('D:\matlab\皮肤库\18.jpg');  [r18,g18] = rgb_RGB(t18);
t19 = imread('D:\matlab\皮肤库\19.jpg');  [r19,g19] = rgb_RGB(t19);
t20 = imread('D:\matlab\皮肤库\20.jpg');  [r20,g20] = rgb_RGB(t20);
t21 = imread('D:\matlab\皮肤库\21.jpg');  [r21,g21] = rgb_RGB(t21);
t22 = imread('D:\matlab\皮肤库\22.jpg');  [r22,g22] = rgb_RGB(t22);
t23 = imread('D:\matlab\皮肤库\23.jpg');  [r23,g23] = rgb_RGB(t23);
t24 = imread('D:\matlab\皮肤库\24.jpg');  [r24,g24] = rgb_RGB(t24);
t25 = imread('D:\matlab\皮肤库\25.jpg');  [r25,g25] = rgb_RGB(t25);
t26 = imread('D:\matlab\皮肤库\26.jpg');  [r26,g26] = rgb_RGB(t26);
t27 = imread('D:\matlab\皮肤库\27.jpg');  [r27,g27] = rgb_RGB(t27);
r = cat(1,r1,r2,r3,r4,r5,r6,r7,r8,r9,r10,r11,r12,r13,r14,r15,r16,r17,r18,r19,r20,r21,r22,r23,r24,r25,r26,r27);
g = cat(1,g1,g2,g3,g4,g5,g6,g7,g8,g9,g10,g11,g12,g13,g14,g15,g16,g17,g18,g19,g20,g21,g22,g23,g24,g25,g26,g27);
m = mean([r,g])
n = cov([r,g])

3. Computing the centroid

function [xmean, ymean] = center(bw)
bw = bwfill(bw,'holes');
area = bwarea(bw);
[m n] = size(bw);
bw = double(bw);
xmean = 0; ymean = 0;
for i = 1:m,
    for j = 1:n,
        xmean = xmean + j*bw(i,j);
        ymean = ymean + i*bw(i,j);
    end;
end;
if (area == 0)
    xmean = 0;
    ymean = 0;
else
    xmean = xmean/area;
    ymean = ymean/area;
    xmean = round(xmean);
    ymean = round(ymean);
end

4. Computing the rotation angle

function [theta] = orient(bw, xmean, ymean)
[m n] = size(bw);
bw = double(bw);
a = 0;
b = 0;
c = 0;
for i = 1:m,
    for j = 1:n,
        a = a + (j - xmean)^2 * bw(i,j);
        b = b + (j - xmean) * (i - ymean) * bw(i,j);
        c = c + (i - ymean)^2 * bw(i,j);
    end;
end;
b = 2 * b;
theta = atan(b/(a-c))/2;
theta = theta*(180/pi);  % convert from radians to degrees

5. Finding region boundaries

function [left, right, up, down] = bianjie(A)
[m n] = size(A);
left = -1;
right = -1;
up = -1;
down = -1;
for j = 1:n,
    for i = 1:m,
        if (A(i,j) ~= 0)
            left = j;
            break;
        end;
    end;
    if (left ~= -1) break; end;
end;
for j = n:-1:1,
    for i = 1:m,
        if (A(i,j) ~= 0)
            right = j;
            break;
        end;
    end;
    if (right ~= -1) break; end;
end;
for i = 1:m,
    for j = 1:n,
        if (A(i,j) ~= 0)
            up = i;
            break;
        end;
    end;
    if (up ~= -1) break; end;
end;
for i = m:-1:1,
    for j = 1:n,
        if (A(i,j) ~= 0)
            down = i;
            break;
        end;
    end;
    if (down ~= -1) break; end;
end;

6. Clamping the start coordinates

function newcoord = checklimit(coord, maxval)
newcoord = coord;
if (newcoord < 1)
    newcoord = 1;
end;
if (newcoord > maxval)
    newcoord = maxval;
end;

7. Template matching

function [ccorr, mfit, RectCoord] = mobanpipei(mult, frontalmodel, ly, wx, cx, cy, angle)
frontalmodel = rgb2gray(frontalmodel);
model_rot = imresize(frontalmodel, [ly wx], 'bilinear');  % resize the template
model_rot = imrotate(model_rot, angle, 'bilinear');       % rotate the template
[l,r,u,d] = bianjie(model_rot);                           % boundary coordinates
bwmodel_rot = imcrop(model_rot, [l u (r-l) (d-u)]);       % crop the face region of the template
[modx, mody] = center(bwmodel_rot);                       % centroid
[morig, norig] = size(bwmodel_rot);
% build a gray image that carries the face template
mfit = zeros(size(mult));
mfitbw = zeros(size(mult));
[limy, limx] = size(mfit);
% coordinates of the face template inside the original image
startx = cx - modx;
starty = cy - mody;
endx = startx + norig - 1;
endy = starty + morig - 1;
startx = checklimit(startx, limx);
starty = checklimit(starty, limy);
endx = checklimit(endx, limx);
endy = checklimit(endy, limy);
for i = starty:endy,
    for j = startx:endx,
        mfit(i,j) = model_rot(i-starty+1, j-startx+1);
    end;
end;
ccorr = corr2(mfit, mult)  % correlation score
[l,r,u,d] = bianjie(bwmodel_rot);
sx = startx + l;
sy = starty + u;
RectCoord = [sx sy (r-l) (d-u)];  % rectangle coordinates

8. Main program

clear;
[fname, pname] = uigetfile({'*.jpg';'*.bmp';'*.tif';'*.gif'}, 'Please choose a color picture...');
% returns the name and path of the opened picture
[u,v] = size(fname);
y = fname(v);  % value indicating the picture format
switch y
case 0
    errordlg('You Should Load Image File First...','Warning...');
case {'g';'G';'p';'P';'f';'F'};  % open only JPG/jpg, BMP/bmp, TIF/tif or GIF/gif pictures
    I = cat(2, pname, fname);
    Ori_Face = imread(I);
    subplot(2,3,1), imshow(Ori_Face);
otherwise
    errordlg('You Should Load Image File First...','Warning...');
end
R = Ori_Face(:,:,1);
G = Ori_Face(:,:,2);
B = Ori_Face(:,:,3);
R1 = im2double(R);  % convert uint8 to double
G1 = im2double(G);
B1 = im2double(B);
RGB = R1 + G1 + B1;
m = [0.4144, 0.3174];                    % mean
n = [0.0031, -0.0004; -0.0004, 0.0003];  % covariance
row = size(Ori_Face,1);     % number of rows
column = size(Ori_Face,2);  % number of columns
for i = 1:row
    for j = 1:column
        if RGB(i,j) == 0
            rr(i,j) = 0;
            gg(i,j) = 0;
        else
            rr(i,j) = R1(i,j)/RGB(i,j);  % normalized rgb
            gg(i,j) = G1(i,j)/RGB(i,j);
            x = [rr(i,j), gg(i,j)];
            p(i,j) = exp((-0.5)*(x-m)*inv(n)*(x-m)');  % skin probability under a Gaussian model
        end
    end
end
subplot(2,3,2); imshow(p);  % skin-likelihood gray image
low_pass = 1/9*ones(3);
image_low = filter2(low_pass, p);  % low-pass filtering to suppress noise
subplot(2,3,3); imshow(image_low);
% adaptive threshold
previousSkin2 = zeros(i,j);
changelist = [];
for threshold = 0.55:-0.1:0.05
    two_value = zeros(i,j);
    two_value(find(image_low > threshold)) = 1;
    change = sum(sum(two_value - previousSkin2));
    changelist = [changelist change];
    previousSkin2 = two_value;
end
[C, I] = min(changelist);
optimalThreshold = (7-I)*0.1
two_value = zeros(i,j);
two_value(find(image_low > optimalThreshold)) = 1;  % binarization
subplot(2,3,4); imshow(two_value);  % binary image
frontalmodel = imread('E:\我的照片\人脸模板.jpg');  % read the frontal face template
FaceCoord = [];
imsourcegray = rgb2gray(Ori_Face);  % gray version of the original picture
[L,N] = bwlabel(two_value, 8);  % label connected components; L is the label matrix, N the number of components
for i = 1:N,
    [x,y] = find(bwlabel(two_value) == i);     % rows and columns carrying label i
    bwsegment = bwselect(two_value, y, x, 8);  % select the i-th component
    numholes = 1 - bweuler(bwsegment, 4);      % number of holes in this region
    if (numholes >= 1)  % keep the region only if it contains at least one hole
        RectCoord = -1;
        [m n] = size(bwsegment);
        [cx,cy] = center(bwsegment);           % centroid of the region
        bwnohole = bwfill(bwsegment,'holes');  % fill the holes (set them to 1)
        justface = uint8(double(bwnohole) .* double(imsourcegray));
        % keep only this candidate region of the gray image
        angle = orient(bwsegment, cx, cy);     % rotation angle of the region
        bw = imrotate(bwsegment, angle, 'bilinear');
        bw = bwfill(bw,'holes');
        [l,r,u,d] = bianjie(bw);
        wx = (r - l + 1);  % width
        ly = (d - u + 1);  % height
        wratio = ly/wx     % height/width ratio
        if ((0.8 <= wratio) & (wratio <= 2))
            % proceed only if the height/width ratio is between 0.8 and 2.0
            S = ly*wx;              % area of the bounding rectangle
            A = bwarea(bwsegment);  % area of the region
            if (A/S > 0.35)
                [ccorr, mfit, RectCoord] = mobanpipei(justface, frontalmodel, ly, wx, cx, cy, angle);
            end
            if (ccorr >= 0.6)
                mfitbw = (mfit >= 1);
                invbw = xor(mfitbw, ones(size(mfitbw)));
                source_with_hole = uint8(double(invbw) .* double(imsourcegray));
                final_image = uint8(double(source_with_hole) + double(mfit));
                subplot(2,3,5); imshow(final_image);  % gray image with the template face overlaid
                imsourcegray = final_image;
                subplot(2,3,6); imshow(Ori_Face);     % detection result
            end;
            if (RectCoord ~= -1)
                FaceCoord = [FaceCoord; RectCoord];
            end
        end
    end
end
% draw rectangles around the regions considered to be faces
[numfaces x] = size(FaceCoord);
for i = 1:numfaces,
    hd = rectangle('Position', FaceCoord(i,:));
    set(hd, 'edgecolor', 'y');
end
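The heart of the main program above is the per-pixel skin probability computed from the normalized r,g chromaticities under a 2-D Gaussian model. A small numpy sketch of that single step (assuming the input array is in RGB channel order and reusing the mean and covariance hard-coded above):

import numpy as np

def skin_probability(rgb_image,
                     m=(0.4144, 0.3174),
                     n=((0.0031, -0.0004), (-0.0004, 0.0003))):
    # per-pixel skin likelihood exp(-0.5 * d' * inv(n) * d) with d = [r,g] - m,
    # where r and g are the normalized chromaticities R/(R+G+B) and G/(R+G+B)
    img = rgb_image.astype(np.float64)
    s = img.sum(axis=2)
    s[s == 0] = 1.0                                # avoid division by zero
    r = img[:, :, 0] / s
    g = img[:, :, 1] / s
    d = np.dstack([r - m[0], g - m[1]])            # H x W x 2 deviations from the mean
    inv_n = np.linalg.inv(np.array(n))
    md = np.einsum('...i,ij,...j->...', d, inv_n, d)
    return np.exp(-0.5 * md)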
1. Face detection with OpenCV

The face detection program performs three tasks: loading the classifier, loading the image to be examined, and detecting and marking the faces. This program uses the object-detection classifier stored in OpenCV's "haarcascade_frontalface_alt.xml" file; it is loaded with the cvLoad function and then cast to the appropriate type. The OpenCV function for detecting objects in an image is cvHaarDetectObjects: using a cascade classifier trained for a particular object (such as faces), it finds the rectangular regions of the image that contain the object and returns those regions as a sequence of rectangles. After use, the classifier must be released explicitly with cvReleaseHaarClassifierCascade. See the OpenCV manual for the prototypes of these functions.

2. Implementation

1) Create a new Visual C++ MFC project named "FaceDetection" and choose "single document" as the application type. Remove the unneeded menu items and add a "人脸检测" (face detection) item with the ID "ID_FaceDetected", then generate the message-map handler for this menu item.

2) Add the code shown on the gray background in the original document to the "FaceDetectionView.h" header file; the detection-and-drawing routine ends as follows:
}
cvShowImage( "人脸检测", img );
cvReleaseImage( &gray );
cvReleaseImage( &small_img );
}
Note that when the program is run, the classifier file must be placed in the program's directory; if the generated EXE file is run, the classifier file should be placed in the same directory as that EXE.

3. Program results

Run the program and choose the face detection menu item; a file-open dialog appears. Select the image file to be examined, and the program marks the detected faces with circles, as shown in Figure 3. The program detects most faces successfully, but because of lighting, occlusion and tilt, some faces are not detected correctly; in addition, some non-face regions that happen to share certain facial features are mistaken for faces. These are aspects the program still needs to improve.