Face Recognition Thesis
Abstract

Face recognition is a form of biometric verification that plays an increasingly important role in identity authentication and has very broad application prospects. Face detection and localization is the first problem a face recognition system must solve, and the detection algorithm is one of the key components of an efficient, automatic recognition system. Face detection has become an active research area and new algorithms are constantly proposed, but because of the complexity of the human face no perfect algorithm exists yet, and different application environments call for different algorithms. This thesis studies face detection for video images; its main work is as follows.

The thesis first states the requirements of the face recognition task and its difficulties and, taking the implementation difficulty into account, proposes a comparatively fast skin-color-based algorithm for detecting faces in images. For generality the RGB color space is used, and the three-dimensional RGB space is mapped to a two-dimensional space of approximately independently distributed components. In this two-dimensional space skin color clusters well and approximately follows a Gaussian distribution, so a manually chosen threshold separates skin from non-skin regions and yields a binary image.

Some non-skin regions may still have colors close to skin color, so the thresholded image still contains false skin regions. After morphological processing and an aspect-ratio constraint the face region is obtained. Once the face region is determined it is marked on the original image by locating a few key points and drawing a rectangular box around the face.

Keywords: face recognition, skin-color detection, color space, Gaussian distribution, face detection

Contents

Abstract
Contents
Chapter 1  Introduction
Chapter 2  Requirements Analysis
  2.1 Task overview
    2.1.1 Main goals of face recognition
  2.2 Requirements
    2.2.1 Functional requirements
Chapter 3  Skin-Color-Based Face Recognition and Its Implementation
  3.1 Modeling facial skin color
    3.1.1 Color spaces
    3.1.2 Image correction
    3.1.3 The facial skin-color model
    3.1.4 Determining facial features
  3.2 Implementation of the functional modules
    3.2.1 Image-loading module
    3.2.2 Face and hair classification module
    3.2.3 Marking module
    3.2.4 Feature extraction module
Appendix
Concluding Remarks
References
Acknowledgements
Chapter 1  Introduction

With the progress of science and technology and the development of society, the need to identify people quickly and reliably has become more and more pressing. Traditional identification relies on two means: identifying objects and identifying knowledge. Typical identifying objects are keys, certificates and other tokens; identifying knowledge includes user names and passwords. As is well known, tokens such as keys and certificates are easily lost or forged, and knowledge is easily forgotten or misremembered. More seriously, a traditional identification system usually cannot distinguish the legitimate owner of a token from an impostor who has obtained it: once someone else acquires the token, he acquires the same rights. Even combining the two means does not eliminate the problem; an ATM, for example, requires both a bank card and a password, yet money is still stolen from accounts. Traditional identification therefore no longer meets society's needs, and biometric recognition offers an important safeguard. Biometrics covers physical features and behavioral features; the physical features include the face, fingerprints, hand shape, genes, body odor and so on. The face is the principal facial feature: compared with other physical features a face cannot easily be forgotten, lost or stolen, and face image acquisition is contactless, non-invasive and non-coercive, so it is readily accepted.

Over several decades face recognition has been studied intensively. The main approaches are:

1. Statistical pattern recognition, such as PCA (principal component analysis), LDA and the Hotelling transform.
2. Template matching, which extracts facial features, mainly the eyes, mouth, nose and ears, into a feature vector for matching.
3. Neural networks, which are robust to noise but converge slowly.
4. Knowledge-based matching, using contour, skin color, texture and similar cues.

This thesis uses knowledge-based skin-color detection to detect faces and to mark feature parts such as the mouth, nose and eyes. Skin-color detection has limitations: skin color is strongly affected by illumination and is easily disturbed by skin-like background colors. However, skin color is persistent and stable and is not affected by viewing angle or pose, so skin-color-based face detection has attracted more and more attention. With this approach the candidate face regions are first separated from the background and then examined further, which greatly improves both the speed and the accuracy of detection.

Chapter 2  Requirements Analysis

2.1 Task overview

2.1.1 Main goals of face recognition

After entering the system the user selects an image. The system then detects the face and hair regions, draws their histograms, and finally marks the face. Here we define r = R/(R+G+B) and b = B/(R+G+B), which reduces the three-dimensional RGB space to two dimensions; in this two-dimensional plane the skin-color region is relatively concentrated. The use of this plane is described later.

2.2 Requirements

2.2.1 Functional requirements

1. Image loading: the user loads an image into the program; the program reads its pixel width and height and resets the corresponding flags.
2. Face and hair classification module: the original image is first partitioned into face, hair and background regions; general rules and the corresponding mathematical model then allow the face to be identified quickly.
3. Histogram module: in face detection we usually first extract the skin regions of the image to narrow the search range. Obtaining the skin color range normally requires defining and repeatedly adjusting thresholds, and in practice there are too many parameters to control easily, so histogram back-projection can be considered here.
4. Marking module: detect the face in the processed image and mark it.
5. Feature extraction module: extract features by template matching to verify the accuracy of recognition, and improve the program according to the feedback.

Chapter 3  Skin-Color-Based Face Recognition and Its Implementation

3.1 Modeling facial skin color

3.1.1 Color spaces

Skin-color features are mainly described by a skin-color model, and building such a model is closely tied to the choice of color space [1]. A color space is judged mainly on two points: whether the distribution of "skin" in that space can be described by the chosen model, and how much the "skin" and "non-skin" regions overlap in it [2][3][4]. Three color spaces are commonly used in face detection:

(1) The RGB color space

By the basic law of color mixing, any color in nature can be produced by mixing the three primaries red, green and blue, i.e. the components R, G and B. The three axes of the space correspond to the three primaries, the origin corresponds to black and the vertex farthest from the origin to white, while all other colors lie inside the color cube spanned by the three primaries.

Figure 3-1  The RGB color space

(2) The HSV color space

The HSV model describes color by hue (H), saturation (S) and value (V). The HSV space is a hexagonal cone: hue varies around the circumference of the cone, value varies along its axis (the apex of the cone has value 0), and the radius within each cross-section represents saturation.

Figure 3-2  The HSV color space

(3) The YCrCb color space

YCrCb is essentially YUV. "Y" is the luminance (luma), that is, the gray level, built from the RGB input signal by summing weighted parts of R, G and B. "U" and "V" (here Cr and Cb) carry the chrominance, which describes hue and saturation and specifies the pixel's color: Cr is the difference between the red component of the RGB input and the luminance, and Cb is the difference between the blue component and the luminance.

Figure 3-3  The UV (CrCb) plane at Y = 0.5

3.1.2 Image correction

Images captured from a camera suffer from various problems such as heavy noise, uneven illumination and even geometric distortion, so it is necessary to apply corrections such as denoising and illumination compensation before recognition, in order to obtain a good source image and more accurate recognition results.
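As a concrete illustration of the conversions discussed above, the standalone sketch below (not part of the thesis code) maps an 8-bit RGB pixel to the luma Y used later (Y = 0.30R + 0.59G + 0.11B), to the normalized chromaticity pair (r, g) of Section 3.1.3, and to Cr/Cb using the standard BT.601 analog coefficients; the thesis only describes Cr and Cb qualitatively, so those coefficients are an assumption.

#include <cstdio>

struct RGB8 { unsigned char r, g, b; };

// Luma as defined in Section 3.1.3: Y = 0.30R + 0.59G + 0.11B.
double luma(RGB8 p) { return 0.30 * p.r + 0.59 * p.g + 0.11 * p.b; }

// Normalized chromaticity r = R/(R+G+B), g = G/(R+G+B); a black pixel maps to (1/3, 1/3).
void normalizedRG(RGB8 p, double& r, double& g)
{
    double s = p.r + p.g + p.b;
    if (s == 0.0) { r = g = 1.0 / 3.0; return; }
    r = p.r / s;
    g = p.g / s;
}

// Chrominance with standard BT.601 analog scale factors (an assumption, not from the thesis).
void crcb(RGB8 p, double& cr, double& cb)
{
    double y = luma(p);
    cr = 0.713 * (p.r - y);
    cb = 0.564 * (p.b - y);
}

int main()
{
    RGB8 skin = {220, 170, 140};   // a typical skin-like pixel
    double r, g, cr, cb;
    normalizedRG(skin, r, g);
    crcb(skin, cr, cb);
    std::printf("Y=%.1f r=%.3f g=%.3f Cr=%.1f Cb=%.1f\n", luma(skin), r, g, cr, cb);
    return 0;
}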
3.1.3 The facial skin-color model

We define the normalized chromaticities r = R/(R+G+B) and g = G/(R+G+B), together with the luma Y = 0.30R + 0.59G + 0.11B. This reduces the three-dimensional RGB space to a two-dimensional plane in which skin color is relatively concentrated, and its distribution can be described by a Gaussian, as shown below.

Figure 3-4  Gaussian distribution of skin color in the (r, g) plane

A pixel whose chromaticities satisfy 0.333 < r < 0.664 and 0.246 < g < 0.398 is a candidate skin pixel; a dark pixel with Y < 40 whose r and g satisfy the corresponding constraints (involving r, g and the line g = 0.5 - 0.5r) is a candidate hair pixel. The image can then be partitioned into three regions: skin, hair and background. A region that has hair above, skin below, and an area larger than a given threshold is very likely a face. The main classification code is the following handler, which labels every pixel as skin, hair or background and recolors the result image accordingly:

void CFaceDetectDlg::OnBtnFacehair()
{
    m_bShowFace = false;
    SetCursor(LoadCursor(NULL, IDC_WAIT));
    method2->MarkHairFace();          // classify every pixel: 0 = skin, 1 = hair, 2 = background
    for (int i = 0; i < m_nWndHeight; i++)
        for (int j = 0; j < m_nWndWidth; j++)
        {
            switch (method2->m_pBinaryArray[i][j])
            {
            case 0:   // skin, shown in red
                m_tResPixelArray[i][j].rgbBlue = m_tResPixelArray[i][j].rgbGreen = 0;
                m_tResPixelArray[i][j].rgbRed = 255;
                break;
            case 1:   // hair, shown in blue
                m_tResPixelArray[i][j].rgbBlue = 255;
                m_tResPixelArray[i][j].rgbGreen = m_tResPixelArray[i][j].rgbRed = 0;
                break;
            case 2:   // background, shown in black
                m_tResPixelArray[i][j].rgbBlue = m_tResPixelArray[i][j].rgbGreen =
                m_tResPixelArray[i][j].rgbRed = 0;
                break;
            }
        }
    MakeBitMap();
    SetCursor(LoadCursor(NULL, IDC_ARROW));
}

The code that then locates and marks the face region from this classification is listed in full in Section 3.2.3.
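The thesis describes the skin cluster in the (r, g) plane with a Gaussian but does not list code for evaluating it, so the following standalone sketch shows how such a model could be used. The mean and covariance values are illustrative placeholders, not values from the thesis; in practice they would be estimated from labeled skin pixels.

#include <cmath>
#include <cstdio>

// Evaluate a 2-D Gaussian skin likelihood at a point (r, g) of the normalized plane.
double skinLikelihood(double r, double g)
{
    const double PI = 3.14159265358979;
    const double mr = 0.44, mg = 0.31;                       // assumed cluster center
    const double srr = 0.0025, sgg = 0.0012, srg = 0.0006;   // assumed covariance entries
    double det = srr * sgg - srg * srg;
    double dr = r - mr, dg = g - mg;
    // Mahalanobis distance using the inverse of the 2x2 covariance matrix.
    double d2 = (sgg * dr * dr - 2.0 * srg * dr * dg + srr * dg * dg) / det;
    return std::exp(-0.5 * d2) / (2.0 * PI * std::sqrt(det));
}

int main()
{
    std::printf("likelihood at (0.45, 0.31): %g\n", skinLikelihood(0.45, 0.31));
    std::printf("likelihood at (0.33, 0.33): %g\n", skinLikelihood(0.33, 0.33));
    return 0;
}

A pixel would be accepted as skin when its likelihood exceeds a manually chosen threshold, which corresponds to the manual thresholding described in the abstract.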
3.1.4 Determining facial features

Both the eyes and the mouth lie in non-skin areas of the face, which already eliminates many candidates.

(1) Locating the eyes. The filtered candidate regions are paired two by two and each pair is matched against a PCA template; the matching pair is taken as the true eyes. PCA verification sometimes has difficulty telling eyebrows from eyes, so further geometric constraints are applied [5]: assuming a frontal, upright face, the eyes must lie in the upper half of the face, the line joining them must be roughly horizontal, and the midpoint of the eye line lies below the midpoint of the eyebrow line.

(2) Locating the mouth. Once the eyes are found, candidate feature regions are searched below them [6]; a non-skin region of relatively large area, positioned roughly symmetrically with respect to the eyes, is taken as the mouth, whose size is then adjusted using prior knowledge.

Because the code is long, it is given in the appendix.

3.2 Implementation of the functional modules

3.2.1 Image-loading module

The user selects a BMP image through a standard file dialog; the program records its size and resets all detection flags and feature points.

Figure 3-5  The open-image dialog

The image-loading code:

void CFaceDetectDlg::OnBtnOpenfile()
{
    CAddSampleDlg FileDlg(TRUE, "", NULL,
        OFN_HIDEREADONLY | OFN_OVERWRITEPROMPT | OFN_ALLOWMULTISELECT,
        "BMP face images (*.bmp)|*.bmp|All files (*.*)|*.*||", AfxGetMainWnd());
    CString strFile;
    if (FileDlg.DoModal() != IDOK) return;
    POSITION pos = FileDlg.GetStartPosition();
    strFile = FileDlg.GetNextPathName(pos);

    // Load the bitmap and record its size.
    m_pMainDib->Open(strFile);
    m_nWndWidth  = m_pMainDib->GetWidth();
    m_nWndHeight = m_pMainDib->GetHeight();
    m_sFileName  = strFile;

    // Reset the face region and all feature flags and points.
    m_rFaceRegion.left = m_rFaceRegion.right = m_rFaceRegion.top = m_rFaceRegion.bottom = 0;
    m_bLeftEyeOK = m_bRightEyeOK = m_bLeftNostrilOK = m_bRightNostrilOK =
    m_bLeftEyeLeftCornerOK = m_bLeftEyeRightCornerOK =
    m_bRightEyeLeftCornerOK = m_bRightEyeRightCornerOK =
    m_bLeftMouthCornerOK = m_bRightMouthCornerOK = false;
    m_bMidMouthOK = m_bMidNoseOK = false;
    m_LeftEye = m_RightEye = m_LeftEyeLeftCorner = m_LeftEyeRightCorner =
    m_LeftNostril = m_RightNostril = m_RightEyeLeftCorner = m_RightEyeRightCorner =
    m_LeftMouthCorner = m_RightMouthCorner = m_MidMouth = m_MidNose = CPoint(-1, -1);

    SetCursor(LoadCursor(NULL, IDC_WAIT));
    LoadOriPixel(m_pMainDib);
    MakeBitMap();
    SetCursor(LoadCursor(NULL, IDC_ARROW));
}

3.2.2 Face and hair classification module

Figure 3-6  The image before and after classification

This module is implemented by the OnBtnFacehair handler listed in Section 3.1.3: MarkHairFace() classifies every pixel as skin, hair or background, and the result image is recolored red, blue or black accordingly.
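The abstract mentions that morphological processing is applied to the binary skin mask before the aspect-ratio test, but the thesis does not list that code. The standalone sketch below shows one plausible form of it, a 3x3 erosion followed by a 3x3 dilation (an opening) that removes isolated false-skin pixels; it is an assumption about the kind of cleanup intended, not the thesis implementation.

#include <cstdio>
#include <vector>

using Mask = std::vector<std::vector<int>>;   // 1 = skin, 0 = non-skin

// One pass of a 3x3 morphological operation: erode keeps a pixel only if its whole
// neighborhood is skin, dilate sets a pixel if any neighbor is skin.
Mask morph3x3(const Mask& in, bool erode)
{
    int h = (int)in.size(), w = (int)in[0].size();
    Mask out(h, std::vector<int>(w, 0));
    for (int i = 1; i < h - 1; ++i)
        for (int j = 1; j < w - 1; ++j)
        {
            int hits = 0;
            for (int di = -1; di <= 1; ++di)
                for (int dj = -1; dj <= 1; ++dj)
                    hits += in[i + di][j + dj];
            out[i][j] = erode ? (hits == 9) : (hits > 0);
        }
    return out;
}

int main()
{
    Mask m(8, std::vector<int>(8, 0));
    for (int i = 3; i <= 5; ++i)
        for (int j = 3; j <= 5; ++j) m[i][j] = 1;   // a small skin blob, kept by the opening
    m[1][6] = 1;                                    // an isolated false-skin pixel, removed
    Mask opened = morph3x3(morph3x3(m, true), false);
    for (const auto& row : opened) { for (int v : row) std::printf("%d", v); std::printf("\n"); }
    return 0;
}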
3.2.3 Marking module

Figure 3-7  The marked face

The marking handler scans the classified image, finds the left and right face boundaries from the column projections of the skin and hair pixels, finds the top of the face from the rows, fixes the bottom with a height-to-width ratio of 1.5, and draws a blue rectangle:

void CFaceDetectDlg::OnBtnMarkFace2()
{
    m_bShowFace = true;
    SetCursor(LoadCursor(NULL, IDC_WAIT));

    int *numR, *numB, i, j, left, right, top, bottom;
    int maxnumR = 0, maxnumB = 0;
    numR = new int[m_nWndWidth];    // per-column count of skin pixels (class 0)
    numB = new int[m_nWndWidth];    // per-column count of hair pixels (class 1)
    for (j = 0; j < m_nWndWidth; j++)
    {
        int countR = 0, countB = 0;
        for (i = 0; i < m_nWndHeight; i++)
        {
            if (method2->m_pBinaryArray[i][j] == 0) countR++;
            if (method2->m_pBinaryArray[i][j] == 1) countB++;
        }
        if (countR > maxnumR) maxnumR = countR;
        numR[j] = countR;
        if (countB > maxnumB) maxnumB = countB;
        numB[j] = countB;
    }

    // Collect column intervals whose skin count exceeds half the maximum (candidate face
    // columns) and whose hair count exceeds a fifth of the maximum (candidate hair columns);
    // hair intervals closer than 20 columns to the previous one are merged.
    CList<CPoint, CPoint&> myListR(sizeof(CPoint));
    CList<CPoint, CPoint&> myListB(sizeof(CPoint));
    CPoint tempR, tempB;
    int flagR = 0, flagB = 0;
    for (j = 0; j < m_nWndWidth; j++)
    {
        if (flagR == 0) {
            if (numR[j] > maxnumR / 2) { flagR = 1; tempR.x = j; }
        } else {
            if (numR[j] <= maxnumR / 2 || j == m_nWndWidth - 1) {
                flagR = 0; tempR.y = j; myListR.AddTail(tempR);
            }
        }
        if (flagB == 0) {
            if (numB[j] > maxnumB / 5) { flagB = 1; tempB.x = j; }
        } else {
            if (numB[j] <= maxnumB / 5 || j == m_nWndWidth - 1) {
                flagB = 0; tempB.y = j;
                if (myListB.GetCount() > 1 && (tempB.x - myListB.GetTail().y) < 20)
                    myListB.SetAt(myListB.GetTailPosition(), CPoint(myListB.GetTail().x, j));
                else
                    myListB.AddTail(tempB);
            }
        }
    }
    if (numR != NULL) delete [] numR;
    if (numB != NULL) delete [] numB;

    // Mark the columns covered by sufficiently wide hair intervals; then, in each sufficiently
    // wide skin interval, find the leftmost and rightmost columns from which a run of
    // hair-marked columns (one fifth of the interval width) starts. These give the left and
    // right face boundaries (hair above, skin below).
    int *hairmark, k;
    hairmark = new int[m_nWndWidth];
    for (j = 0; j < m_nWndWidth; j++) hairmark[j] = 0;
    for (k = 0; k < myListB.GetCount(); k++)
    {
        CPoint temp = myListB.GetAt(myListB.FindIndex(k));
        if (temp.y - temp.x > m_nWndWidth / 10)
            for (int t = temp.x; t <= temp.y; t++) hairmark[t] = 1;
    }
    for (k = 0; k < myListR.GetCount(); k++)
    {
        int templeft = -1, tempright = -1;
        CPoint temp = myListR.GetAt(myListR.FindIndex(k));
        if (temp.y - temp.x > m_nWndWidth / 10)
        {
            for (int t = temp.x; t <= temp.y; t++)
                if (hairmark[t] == 1)
                {
                    int endpos = t + (temp.y - temp.x) / 5;
                    if (endpos > temp.y) endpos = temp.y;
                    int yes = 1;
                    for (int q = t; q <= endpos; q++)
                        if (hairmark[q] == 0) yes = 0;
                    if (yes == 1) { templeft = t; break; }
                }
            for (int p = temp.y; p >= temp.x; p--)
                if (hairmark[p] == 1)
                {
                    int beginpos = p - (temp.y - temp.x) / 5;
                    if (beginpos < temp.x) beginpos = temp.x;
                    int yes = 1;
                    for (int q = p; q >= beginpos; q--)
                        if (hairmark[q] == 0) yes = 0;
                    if (yes == 1) { tempright = p; break; }
                }
        }
        if (templeft != -1 && tempright != -1) { left = templeft; right = tempright; break; }
    }
    if (hairmark != NULL) delete [] hairmark;
    myListR.RemoveAll();
    myListB.RemoveAll();

    // Widen the horizontal bounds slightly, take as the top the first row in which at least
    // half of the columns between left and right are skin, and fix the bottom from a
    // height-to-width ratio of 1.5.
    if (left - m_nWndWidth / 50 > 0) left -= (int)m_nWndWidth / 50; else left = 0;
    if (right + m_nWndWidth / 40 > m_nWndWidth) right = m_nWndWidth - 1; else right += m_nWndWidth / 40;
    for (i = 0; i < m_nWndHeight; i++)
    {
        int count = 0;
        for (int l = left; l <= right; l++)
            if (method2->m_pBinaryArray[i][l] == 0) count++;
        if (count >= (right - left) * 0.5) { top = i; break; }
    }
    bottom = (int)(top + (right - left) * 1.5) >= m_nWndHeight ?
             m_nWndHeight - 1 : (int)(top + (right - left) * 1.5);

    // Draw the blue bounding rectangle on a fresh copy of the original image.
    CopyBitMap(m_tResPixelArray, m_tOriPixelArray);
    for (i = top; i <= bottom; i++)
    {
        m_tResPixelArray[i][left].rgbBlue = 255;
        m_tResPixelArray[i][left].rgbGreen = m_tResPixelArray[i][left].rgbRed = 0;
        m_tResPixelArray[i][right].rgbBlue = 255;
        m_tResPixelArray[i][right].rgbGreen = m_tResPixelArray[i][right].rgbRed = 0;
    }
    for (j = left; j <= right; j++)
    {
        m_tResPixelArray[top][j].rgbBlue = 255;
        m_tResPixelArray[top][j].rgbGreen = m_tResPixelArray[top][j].rgbRed = 0;
        m_tResPixelArray[bottom][j].rgbBlue = 255;
        m_tResPixelArray[bottom][j].rgbGreen = m_tResPixelArray[bottom][j].rgbRed = 0;
    }
    MakeBitMap();
    SetCursor(LoadCursor(NULL, IDC_ARROW));

    // Store the face rectangle, asking for confirmation if one was already stored.
    if (m_bFaceOK)
    {
        ReplaceDlg dlg;
        if (dlg.DoModal() == IDOK)
        {
            CopyBitMap(m_tResPixelArray, m_tOriPixelArray);
            CRect rect(left, top, right, bottom);
            m_rFaceRegion = rect;
            MakeBitMap();
        }
    }
    else
    {
        m_bFaceOK = true;
        CopyBitMap(m_tResPixelArray, m_tOriPixelArray);
        CRect rect(left, top, right, bottom);
        m_rFaceRegion = rect;
        MakeBitMap();
    }
}
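The abstract also mentions an aspect-ratio constraint on candidate regions; the marking code above fixes the bottom of the box at 1.5 times the width. The standalone sketch below shows the kind of validation this implies. The accepted ratio range is an illustrative assumption built around that 1.5 value, not a range stated in the thesis.

#include <cstdio>

struct Box { int left, top, right, bottom; };

// Accept a candidate region only if its height-to-width ratio is plausible for a face.
// The [1.0, 2.0] range is an illustrative assumption around the 1.5 ratio used above.
bool plausibleFaceBox(const Box& b)
{
    int w = b.right - b.left;
    int h = b.bottom - b.top;
    if (w <= 0 || h <= 0) return false;
    double ratio = (double)h / (double)w;
    return ratio >= 1.0 && ratio <= 2.0;
}

int main()
{
    Box face  = {40, 30, 140, 180};   // height 150, width 100, ratio 1.5: accepted
    Box strip = {10, 10, 200, 40};    // wide flat region: rejected
    std::printf("face: %d, strip: %d\n", plausibleFaceBox(face), plausibleFaceBox(strip));
    return 0;
}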
3.2.4 Feature extraction module

Figure 3-8  The face after feature extraction

The eyes are located first: edge detection is run inside the face rectangle, sliding-window counts of edge pixels give the horizontal extent of the two eyes, and their vertical extent, corners and centers are then confirmed. The full eye-localization function is listed in the appendix. Once both eyes are known, the mouth and the nose are searched for below them.

Marking the mouth:

void CFaceDetectDlg::OnBtnMarkMouse()
{
    int i, j;
    if (!(m_bLeftEyeOK && m_bRightEyeOK))
    {
        AfxMessageBox("Please locate the eyes first");
        return;
    }

    // Work on the face rectangle expanded by 5 pixels on each side, clipped to the image.
    int nLeft, nRight, nTop, nBottom;
    nLeft   = m_rFaceRegion.left - 5 > 0 ? m_rFaceRegion.left - 5 : 0;
    nRight  = m_rFaceRegion.right + 5 < m_nWndWidth ? m_rFaceRegion.right + 5 : m_nWndWidth - 1;
    nTop    = m_rFaceRegion.top - 5 > 0 ? m_rFaceRegion.top - 5 : 0;
    nBottom = m_rFaceRegion.bottom + 5 < m_nWndHeight ? m_rFaceRegion.bottom + 5 : m_nWndHeight - 1;

    // Mark lip-colored pixels: the hue angle computed from RGB is small for reddish pixels.
    SetPixelArray(m_tResPixelArray, 0);
    for (i = nTop; i <= nBottom; i++)
        for (j = nLeft; j <= nRight; j++)
        {
            BYTE R, G, B;
            double temp, dlta;
            R = m_tOriPixelArray[i][j].rgbRed;
            G = m_tOriPixelArray[i][j].rgbGreen;
            B = m_tOriPixelArray[i][j].rgbBlue;
            if ((R == G) && (G == B))
                temp = 0;
            else
                temp = 0.5 * (2 * R - G - B) /
                       sqrt((double)(R - G) * (double)(R - G) + (double)(R - B) * (double)(G - B));
            dlta = acos(temp);
            if (dlta < 0.2)
                m_tResPixelArray[i][j].rgbRed = 255;
            else
                m_tResPixelArray[i][j].rgbRed = 0;
        }
    MakeBitMap();
    AfxMessageBox("Lip-color test for the mouth");

    // Slope of the eye line, distance between the eyes and their mean height.
    double tanThta;
    if (m_RightEye.y == m_LeftEye.y)
        tanThta = 0;
    else
        tanThta = (double)(m_RightEye.y - m_LeftEye.y) / (m_RightEye.x - m_LeftEye.x);
    int EyesDis = (m_RightEye.x - m_LeftEye.x) * (m_RightEye.x - m_LeftEye.x);
    EyesDis += (m_RightEye.y - m_LeftEye.y) * (m_RightEye.y - m_LeftEye.y);
    EyesDis = (int)sqrt((double)EyesDis);
    int EyeV = (m_RightEye.y + m_LeftEye.y) / 2;

    // The mouth is searched for between 1.0 and 1.5 eye-distances below the eye line.
    int MouthUp   = (EyeV + 1.0 * EyesDis) > nBottom ? nBottom : (int)(EyeV + 1.0 * EyesDis);
    int MouthDown = (EyeV + 1.5 * EyesDis) > nBottom ? nBottom : (int)(EyeV + 1.5 * EyesDis);
    int *Y_Arry = new int[MouthDown - MouthUp];
    for (i = 0; i < MouthDown - MouthUp; i++) Y_Arry[i] = 0;
    int *X_Arry = new int[EyesDis];
    for (i = 0; i < EyesDis; i++) X_Arry[i] = 0;
    for (i = MouthUp; i < MouthDown; i++)
        for (j = m_LeftEye.x; j < m_RightEye.x; j++)
            if (m_tResPixelArray[i][j].rgbRed == 255)
            {
                Y_Arry[i - MouthUp]++;        // row projection of lip pixels
                X_Arry[j - m_LeftEye.x]++;    // column projection of lip pixels
            }

    // The row with the most lip pixels gives the vertical mouth center; the first and last
    // non-empty columns give the mouth corners, tilted with the eye line.
    int maxY = 0;
    for (i = 0; i < MouthDown - MouthUp; i++)
        if (Y_Arry[i] > maxY)
        {
            maxY = Y_Arry[i];
            m_MidMouth.y = i + MouthUp - (MouthDown - MouthUp) / 10;
        }
    m_LeftMouthCorner.y  = (int)(m_MidMouth.y - tanThta * EyesDis / 2);
    m_RightMouthCorner.y = (int)(m_MidMouth.y + tanThta * EyesDis / 2);
    for (i = 0; i < EyesDis; i++)
        if (X_Arry[i] > 0) { m_LeftMouthCorner.x = i + m_LeftEye.x; break; }
    for (i = EyesDis - 1; i >= 0; i--)
        if (X_Arry[i] > 0) { m_RightMouthCorner.x = m_LeftEye.x + i; break; }

    // The middle of the lips is thin: the thinnest column near the center gives the mouth midpoint.
    int min = 1000000;
    for (i = (int)(3 * EyesDis / 7 + 0.5); i <= (int)(4 * EyesDis / 7 + 0.5); i++)
        if (X_Arry[i] < min) { min = X_Arry[i]; m_MidMouth.x = m_LeftEye.x + i; }
    delete [] Y_Arry;
    delete [] X_Arry;
}

Marking the nose follows the same pattern; instead of lip color, dark pixels (luma below 100) are marked and the nostrils are searched for between 0.5 and 0.8 eye-distances below the eye line. The body of the handler:

    int i, j;
    int nLeft, nRight, nTop, nBottom;
    nLeft   = m_rFaceRegion.left - 5 > 0 ? m_rFaceRegion.left - 5 : 0;
    nRight  = m_rFaceRegion.right + 5 < m_nWndWidth ? m_rFaceRegion.right + 5 : m_nWndWidth - 1;
    nTop    = m_rFaceRegion.top - 5 > 0 ? m_rFaceRegion.top - 5 : 0;
    nBottom = m_rFaceRegion.bottom + 5 < m_nWndHeight ? m_rFaceRegion.bottom + 5 : m_nWndHeight - 1;

    // Mark dark pixels (nostrils): luma below 100.
    SetPixelArray(m_tResPixelArray, 0);
    for (i = nTop; i <= nBottom; i++)
        for (j = nLeft; j <= nRight; j++)
        {
            double Y;
            Y = 0.30 * m_tOriPixelArray[i][j].rgbRed + 0.59 * m_tOriPixelArray[i][j].rgbGreen
              + 0.11 * m_tOriPixelArray[i][j].rgbBlue;
            if (Y < 100)
                m_tResPixelArray[i][j].rgbRed = 255;
            else
                m_tResPixelArray[i][j].rgbRed = 0;
        }
    MakeBitMap();
    AfxMessageBox("Darkness test for the nose");

    // Eye-line slope, eye distance and mean eye height, as for the mouth.
    double tanThta;
    if (m_RightEye.y == m_LeftEye.y) tanThta = 0;
    else tanThta = (double)(m_RightEye.y - m_LeftEye.y) / (m_RightEye.x - m_LeftEye.x);
    int EyesDis = (m_RightEye.x - m_LeftEye.x) * (m_RightEye.x - m_LeftEye.x);
    EyesDis += (m_RightEye.y - m_LeftEye.y) * (m_RightEye.y - m_LeftEye.y);
    EyesDis = (int)sqrt((double)EyesDis);
    int EyeV = (m_RightEye.y + m_LeftEye.y) / 2;

    // The nose is searched for between 0.5 and 0.8 eye-distances below the eye line,
    // restricted horizontally to the middle three fifths between the eyes.
    int NoseUp   = (EyeV + 0.5 * EyesDis) > nBottom ? nBottom : (int)(EyeV + 0.5 * EyesDis);
    int NoseDown = (EyeV + 0.8 * EyesDis) > nBottom ? nBottom : (int)(EyeV + 0.8 * EyesDis);
    int *Y_Arry = new int[NoseDown - NoseUp];
    for (i = 0; i < NoseDown - NoseUp; i++) Y_Arry[i] = 0;
    int *X_Arry = new int[EyesDis];
    for (i = 0; i < EyesDis; i++) X_Arry[i] = 0;
    for (i = NoseUp; i < NoseDown; i++)
        for (j = m_LeftEye.x + EyesDis / 5; j < m_RightEye.x - EyesDis / 5; j++)
            if (m_tResPixelArray[i][j].rgbRed == 255)
            {
                Y_Arry[i - NoseUp]++;
                X_Arry[j - m_LeftEye.x]++;
            }

    // The row with the most dark pixels gives the vertical nose center; the first and last
    // non-empty columns give the nostrils, tilted with the eye line.
    int maxY = 0;
    for (i = 0; i < NoseDown - NoseUp; i++)
        if (Y_Arry[i] > maxY) { maxY = Y_Arry[i]; m_MidNose.y = i + NoseUp; }
    m_LeftNostril.y  = (int)(m_MidNose.y - tanThta * EyesDis / 2);
    m_RightNostril.y = (int)(m_MidNose.y + tanThta * EyesDis / 2);
    for (i = 0; i < EyesDis; i++)
        if (X_Arry[i] > 0) { m_LeftNostril.x = i + m_LeftEye.x; break; }
    for (i = EyesDis - 1; i >= 0; i--)
        if (X_Arry[i] > 0) { m_RightNostril.x = i + m_LeftEye.x; break; }

    // The thinnest column near the center gives the middle of the nose.
    int min = 1000000;
    for (i = (int)(EyesDis / 3 + 0.5); i <= (int)(2 * EyesDis / 3 + 0.5); i++)
        if (X_Arry[i] < min) { min = X_Arry[i]; m_MidNose.x = m_LeftEye.x + i; }
    delete [] Y_Arry;
    delete [] X_Arry;
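Section 3.1.4 states the geometric constraints used to reject eyebrow/eye confusions: the eyes must lie in the upper half of the face and the line joining them must be roughly horizontal. The standalone sketch below expresses that kind of check; the 0.2 slope tolerance is an illustrative assumption, not a value from the thesis.

#include <cstdio>
#include <cstdlib>

struct Pt { int x, y; };

// Reject an eye-pair candidate whose joining line is far from horizontal or which does not
// lie in the upper half of the face box [faceTop, faceBottom].
bool plausibleEyePair(Pt leftEye, Pt rightEye, int faceTop, int faceBottom)
{
    int dx = rightEye.x - leftEye.x;
    if (dx <= 0) return false;                              // left eye must be left of right eye
    double slope = (double)std::abs(rightEye.y - leftEye.y) / dx;
    int faceMid = (faceTop + faceBottom) / 2;
    return slope < 0.2 && leftEye.y < faceMid && rightEye.y < faceMid;
}

int main()
{
    Pt l = {60, 80}, r = {120, 84};
    std::printf("%d\n", plausibleEyePair(l, r, 50, 230));   // 1: nearly horizontal, upper half
    return 0;
}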
Appendix

Eye localization (the OnBtnMarkEye handler). Edge detection is run inside the face rectangle, a sliding window marks positions rich in edge pixels, the resulting column intervals give the horizontal extent of the two eyes, and within each horizontal interval the vertical extent, the eye corners and the eye centers are then determined.

void CFaceDetectDlg::OnBtnMarkEye()
{
    int i, j;
    if (!m_bFaceOK)
    {
        AfxMessageBox("Please locate the face region first");
        return;
    }

    // Horizontal and vertical extents of the two eyes, still unknown.
    CPoint LeftEyeAreaH(-1, -1), RightEyeAreaH(-1, -1);
    CPoint LeftEyeAreaV(-1, -1), RightEyeAreaV(-1, -1);

    // Work on the face rectangle expanded by 5 pixels, clipped to the image.
    int nLeft, nRight, nTop, nBottom;
    nLeft   = m_rFaceRegion.left - 5 > 0 ? m_rFaceRegion.left - 5 : 0;
    nRight  = m_rFaceRegion.right + 5 < m_nWndWidth ? m_rFaceRegion.right + 5 : m_nWndWidth - 1;
    nTop    = m_rFaceRegion.top - 5 > 0 ? m_rFaceRegion.top - 5 : 0;
    nBottom = m_rFaceRegion.bottom + 5 < m_nWndHeight ? m_rFaceRegion.bottom + 5 : m_nWndHeight - 1;

    // Edge detection (Laplacian of Gaussian) inside the expanded face rectangle.
    DoLOG(nLeft, nRight, nTop, nBottom, m_tOriPixelArray, m_tResPixelArray);

    // Horizontal extent of the eyes: slide a window over the upper half of the face and,
    // wherever it contains enough edge pixels, mark the window center and count its column.
    int nSlidWinWidth  = (m_rFaceRegion.right - m_rFaceRegion.left) / 6 / 2;
    int nSlidWinHeight = (m_rFaceRegion.bottom - m_rFaceRegion.top) / 15 / 2;
    int nMidFaceH = (m_rFaceRegion.right + m_rFaceRegion.left) / 2;
    int nMidFaceV = (m_rFaceRegion.bottom + m_rFaceRegion.top) / 2;
    const int minVal = nSlidWinWidth;        // minimum run length for a candidate interval
    int *tempArray = new int[m_nWndWidth];
    for (i = 0; i < m_nWndWidth; i++) tempArray[i] = 0;
    for (i = nMidFaceV - nSlidWinHeight; i > m_rFaceRegion.top + 6 * nSlidWinHeight; i--)
        for (j = m_rFaceRegion.left + nSlidWinWidth; j < m_rFaceRegion.right - nSlidWinWidth; j++)
        {
            int count = 0;                   // edge pixels inside the window centered at (i, j)
            for (int wi = i - nSlidWinHeight; wi <= i + nSlidWinHeight; wi++)
                for (int wj = j - nSlidWinWidth; wj <= j + nSlidWinWidth; wj++)
                    if (m_tResPixelArray[wi][wj].rgbGreen == 0) count++;
            if (count >= nSlidWinWidth * nSlidWinHeight / 3)
            {
                m_tResPixelArray[i][j].rgbRed = 255;
                tempArray[j]++;
            }
        }
    MakeBitMap();
    AfxMessageBox("Eye region identification");

    // Collect the column runs with non-zero counts, discard runs shorter than minVal, and
    // merge runs that are separated by only a small gap; the survivors are eye candidates.
    CList<CPoint, CPoint&> myList2(sizeof(CPoint));
    int flag = 0;
    CPoint tPoint(-1, -1);
    for (i = 0; i < m_nWndWidth; i++)
    {
        if (tempArray[i] > 0 && flag == 0) { tPoint.x = i; flag = 1; }
        if (tempArray[i] == 0 && flag == 1)
        {
            tPoint.y = i;
            if (tPoint.y - tPoint.x >= minVal) myList2.AddTail(tPoint);
            flag = 0;
        }
    }
    delete [] tempArray;

    // Assign the candidates to the two eyes.
    if (myList2.GetCount() == 1)
    {
        // One wide run: split it, the left third is the left eye, the right third the right eye.
        CPoint t = myList2.GetHead();
        if (t.y - t.x > (m_rFaceRegion.right - m_rFaceRegion.left) / 2)
        {
            LeftEyeAreaH.x = t.x;                 LeftEyeAreaH.y = t.x + (t.y - t.x) / 3;
            RightEyeAreaH.x = t.y - (t.y - t.x) / 3; RightEyeAreaH.y = t.y;
        }
        else
        {
            AfxMessageBox("Eye localization failed, please mark the eyes manually");
            return;
        }
    }
    else if (myList2.GetCount() == 2)
    {
        LeftEyeAreaH = myList2.GetHead();
        RightEyeAreaH = myList2.GetTail();
    }
    else
    {
        // More than two runs: on each side of the face midline keep the run closest to it.
        int ldis = -100000, rdis = 100000;
        for (i = 0; i < myList2.GetCount(); i++)
        {
            CPoint temp = myList2.GetAt(myList2.FindIndex(i));
            int mid = (temp.x + temp.y) / 2;
            if (mid > nMidFaceH) { if (mid - nMidFaceH < rdis) { rdis = mid - nMidFaceH; RightEyeAreaH = temp; } }
            else                 { if (mid - nMidFaceH > ldis) { ldis = mid - nMidFaceH; LeftEyeAreaH  = temp; } }
        }
    }
    myList2.RemoveAll();

    // Vertical extent of the left eye: count the marked pixels per row within the left eye's
    // columns, collect and filter the row runs, and take the first surviving run.
    if (LeftEyeAreaH != CPoint(-1, -1))
    {
        int *tArray = new int[m_nWndHeight];
        for (i = 0; i < m_nWndHeight; i++) tArray[i] = 0;
        for (i = nMidFaceV - nSlidWinHeight; i > m_rFaceRegion.top + 6 * nSlidWinHeight; i--)
            for (j = LeftEyeAreaH.x; j <= LeftEyeAreaH.y; j++)
                if (m_tResPixelArray[i][j].rgbRed == 255 && m_tResPixelArray[i][j].rgbGreen == 0)
                    tArray[i]++;
        CList<CPoint, CPoint&> myListB(sizeof(CPoint));
        flag = 0; tPoint = CPoint(-1, -1);
        for (i = nMidFaceV - nSlidWinHeight; i > m_rFaceRegion.top + 6 * nSlidWinHeight; i--)
        {
            if (tArray[i] > 0 && flag == 0) { tPoint.x = i; flag = 1; }
            if (tArray[i] == 0 && flag == 1)
            {
                tPoint.y = i;
                if (tPoint.x - tPoint.y >= minVal) myListB.AddTail(tPoint);
                flag = 0;
            }
        }
        delete [] tArray;
        if (myListB.GetCount() > 0)
        {
            LeftEyeAreaV = myListB.GetHead();

            // The centroid of the marked pixels gives the eye center; the leftmost and
            // rightmost marked pixels give the eye corners.
            double sumX = 0.0, sumY = 0.0;
            int sum = 0;
            m_LeftEyeLeftCorner.x = 100000;
            m_LeftEyeRightCorner.x = -1;
            for (i = LeftEyeAreaV.x; i >= LeftEyeAreaV.y; i--)
                for (j = LeftEyeAreaH.x; j <= LeftEyeAreaH.y; j++)
                    if (m_tResPixelArray[i][j].rgbGreen == 0)
                    {
                        if (j < m_LeftEyeLeftCorner.x)  { m_LeftEyeLeftCorner.x = j;  m_LeftEyeLeftCorner.y = i; }
                        if (j > m_LeftEyeRightCorner.x) { m_LeftEyeRightCorner.x = j; m_LeftEyeRightCorner.y = i; }
                        sumX += j; sumY += i; sum++;
                    }
            m_LeftEye.x = (int)(sumX / sum);
            m_LeftEye.y = (int)(sumY / sum);
            m_bLeftEyeOK = TRUE;
            m_bLeftEyeLeftCornerOK = TRUE;
            m_bLeftEyeRightCornerOK = TRUE;
        }
        myListB.RemoveAll();
    }

    // The right eye is processed in exactly the same way over RightEyeAreaH, except that the
    // chosen row run is stepped forward until it does not lie above the left eye's run; its
    // centroid and extreme marked pixels then give m_RightEye, m_RightEyeLeftCorner and
    // m_RightEyeRightCorner, and the corresponding OK flags are set.

    CopyBitMap(m_tResPixelArray, m_tOriPixelArray);
    MakeBitMap();
}

Concluding Remarks

I have taken a great deal away from this work. It was a test of how well we have mastered what we learned; it shows how we studied during the semester and helps us improve in later study. Through this practical exercise I discovered that there were things I simply did not understand, which I can only blame on not being attentive enough in ordinary study, not clearing up what I did not understand in time, and not spending enough time on it after class. In this project I not only learned a great deal about software design but also consolidated earlier knowledge and saw the weaknesses in my previous study.

In future study I will take the specialized courses seriously and no longer muddle through; I will spend more time after class on programming problems, clear up what I do not understand promptly, and write more programs myself. I thank the school for providing this valuable opportunity to apply the theory learned in class to practice and make the material truly our own; it both consolidated our knowledge and revealed our shortcomings. I especially thank our supervisor, without whose help we could not have completed this work. I hope there will be more such opportunities to practice and improve ourselves.

References

1. 徐慧 et al. Selected Practical Engineering Cases of Digital Image Processing in Visual C++ [M]. Beijing: Posts and Telecom Press, 2004: 330-331.
2. 高守传 et al. Visual C++ Practice and Improvement: Digital Image Processing and Engineering Applications [M]. Beijing: China Railway Publishing House, 2006.
3. R. Brunelli, T. Poggio. Face Recognition: Features versus Templates. IEEE Trans. on Pattern Analysis and Machine Intelligence, 1993, 15(10): 1042-1052.
4. N. Roeder, X. Li. Accuracy Analysis for Facial Feature Detection. Pattern Recognition, 1996, 29(1): 143-157.
5. 曾朝. Research on a Face Recognition System Based on BP Neural Networks. Hunan University, 2007.
6. 黄福. Research on a Face Recognition System Based on 3D Modeling. North University of China, 2010.

Acknowledgements

During this project the teachers gave us a great deal of help. They not only enabled us to complete the design of the system within the allotted time but also taught us much useful knowledge and gave us valuable experience, for which I express my most heartfelt thanks. Thanks to their guidance we made full use of software engineering and Visual Studio to develop a successful project, and the development process made me appreciate the importance of an engineering development method and of the waterfall model. I also thank all the classmates who took part in the development: the success of this graduation project is inseparable from their help and support, and they taught me the importance of teamwork.