Face Pose Estimation (Computing Euler Angles)

1. What is the face pose estimation problem
Face pose estimation is mainly about obtaining the orientation of the face. The orientation can be represented by a rotation matrix, a rotation vector, a quaternion, or Euler angles (these four representations can be converted into one another). In general, Euler angles are the most readable and the most widely used, so in this article the face pose is expressed as three Euler angles (pitch, yaw, roll).
(Figure: animated illustration of the three Euler angles)
pitch: the pitch angle, rotation about the x-axis
yaw: the yaw angle, rotation about the y-axis
roll: the roll angle, rotation about the z-axis
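As noted above, the four representations can be converted into one another. A convenient way to move between them is scipy.spatial.transform.Rotation; the snippet below is only an illustrative sketch (the later sections of this article instead do the conversions by hand with OpenCV and NumPy):
python
import numpy as np
from scipy.spatial.transform import Rotation

# A rotation vector in axis-angle form, e.g. as returned by cv2.solvePnP
rvec = np.array([0.1, -0.4, 0.05])

rot = Rotation.from_rotvec(rvec)
print(rot.as_matrix())                    # 3x3 rotation matrix
print(rot.as_quat())                      # quaternion in (x, y, z, w) order
print(rot.as_euler('xyz', degrees=True))  # Euler angles; the axis/order convention must match your pitch/yaw/roll definition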
2. Computation steps
1) First define a 3D face model with n keypoints; n can be chosen according to how much accuracy you need. The example below defines a 3D face model with 6 keypoints (left eye corner, right eye corner, nose tip, left mouth corner, right mouth corner, chin);
2) Run face detection and facial landmark detection to obtain the 2D face keypoints corresponding to the 3D model above;
3) Use OpenCV's solvePnP function to solve for the rotation vector;
4) Convert the rotation vector to Euler angles.
3. Define the 6-keypoint 3D model
C++
// 3D model points.
std::vector<cv::Point3d> model_points;
model_points.push_back(cv::Point3d(0.0f, 0.0f, 0.0f));              // Nose tip
model_points.push_back(cv::Point3d(0.0f, -330.0f, -65.0f));          // Chin
model_points.push_back(cv::Point3d(-225.0f, 170.0f, -135.0f));      // Left eye left corner
model_points.push_back(cv::Point3d(225.0f, 170.0f, -135.0f));        // Right eye right corner
model_points.push_back(cv::Point3d(-150.0f, -150.0f, -125.0f));      // Left Mouth corner
model_points.push_back(cv::Point3d(150.0f, -150.0f, -125.0f));      // Right mouth corner
python
import numpy as np

# 3D model points.
model_points = np.array([
    (0.0, 0.0, 0.0),             # Nose tip
    (0.0, -330.0, -65.0),        # Chin
    (-225.0, 170.0, -135.0),     # Left eye left corner
    (225.0, 170.0, -135.0),      # Right eye right corner
    (-150.0, -150.0, -125.0),    # Left Mouth corner
    (150.0, -150.0, -125.0)      # Right mouth corner
])
4. Keypoint detection
Use a facial landmark detection algorithm to locate the keypoints. The most common model detects 68 landmarks in a fixed order; the indices of the 6 keypoints used here are listed below (a short extraction sketch follows the list):
Chin: 8
Nose tip: 30
Left eye corner: 36
Right eye corner: 45
Left mouth corner: 48
Right mouth corner: 54
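As a sketch of how these indices can be pulled out of a 68-point detection (assuming dlib with its 68-point predictor file shape_predictor_68_face_landmarks.dat and a hypothetical input image face.jpg, none of which the original text specifies):
python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

im = cv2.imread("face.jpg")
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
rect = detector(gray, 0)[0]                 # first detected face
shape = predictor(gray, rect)               # 68 landmarks

# Indices of the 6 keypoints, in the same order as model_points above.
idx = [30, 8, 36, 45, 48, 54]               # nose tip, chin, left eye, right eye, left mouth, right mouth
image_points = np.array([(shape.part(i).x, shape.part(i).y) for i in idx], dtype="double")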
C++
// 2D image points. If you change the image, you need to change the vector
std::vector<cv::Point2d> image_points;
image_points.push_back( cv::Point2d(359, 391) );    // Nose tip
image_points.push_back( cv::Point2d(399, 561) );    // Chin
image_points.push_back( cv::Point2d(337, 297) );    // Left eye left corner
image_points.push_back( cv::Point2d(513, 301) );    // Right eye right corner
image_points.push_back( cv::Point2d(345, 465) );    // Left Mouth corner
image_points.push_back( cv::Point2d(453, 469) );    // Right mouth corner
python
# 2D image points. If you change the image, you need to change the vector
image_points = np.array([
    (359, 391),     # Nose tip
    (399, 561),     # Chin
    (337, 297),     # Left eye left corner
    (513, 301),     # Right eye right corner
    (345, 465),     # Left Mouth corner
    (453, 469)      # Right mouth corner
], dtype="double")
5. Solve for the rotation vector with OpenCV's solvePnP
Both solvePnP and solvePnPRansac in OpenCV can be used to estimate the pose. solvePnPRansac applies the idea of Random Sample Consensus (RANSAC); although the pose it computes is more accurate, it is slower and not suitable for a real-time system, so here we only look at solvePnP. See the OpenCV documentation for the detailed parameters.
solvePnP implements several algorithms for pose estimation which can be selected using the parameter flag. By default it uses the flag SOLVEPNP_ITERATIVE which is essentially the DLT solution followed by Levenberg-Marquardt optimization. SOLVEPNP_P3P uses only 3 points for calculating the pose and it should be used only when using solvePnPRansac.
C++: bool solvePnP(InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess=false, int flags=SOLVEPNP_ITERATIVE)
Determining the pose means determining the transformation that maps the 3D model onto the face in the image, which contains both rotation and translation information. The output of solvePnP consists of a rotation vector and a translation vector. Here we only care about the orientation, so we will mainly work with the rotation vector.
Before calling solvePnP you need to initialize cameraMatrix, i.e. the camera intrinsics, and then call solvePnP:
C++
// Camera internals
double focal_length = im.cols; // Approximate focal length.
cv::Point2d center = cv::Point2d(im.cols / 2, im.rows / 2);
cv::Mat camera_matrix = (cv::Mat_<double>(3, 3) << focal_length, 0, center.x, 0, focal_length, center.y, 0, 0, 1);
cv::Mat dist_coeffs = cv::Mat::zeros(4, 1, cv::DataType<double>::type); // Assuming no lens distortion
cv::Mat rotation_vector; // Rotation in axis-angle form
cv::Mat translation_vector;
// Solve for pose
cv::solvePnP(model_points, image_points, camera_matrix, dist_coeffs, rotation_vector, translation_vector);
python
# Camera internals (size is the shape of the input image, e.g. size = im.shape)
focal_length = size[1]
center = (size[1]/2, size[0]/2)
camera_matrix = np.array(
[[focal_length, 0, center[0]],
[0, focal_length, center[1]],
[0, 0, 1]], dtype = "double"
)
print "Camera Matrix :\n {0}".format(camera_matrix)
dist_coeffs = np.zeros((4,1)) # Assuming no lens distortion
(success, rotation_vector, translation_vector) = cv2.solvePnP(model_points, image_points, camera_matrix, dist_coeffs, flags=cv2.CV_ITERATIVE)
print "Rotation Vector:\n {0}".format(rotation_vector)
print "Translation Vector:\n {0}".format(translation_vector)脚误
6. Convert the rotation vector to Euler angles
1) Rotation vector -> rotation matrix -> Euler angles
Rotation vector to rotation matrix (Rodrigues' formula):
theta = np.linalg.norm(rvec)             # rotation angle = norm of the rotation vector
r = rvec / theta                         # unit rotation axis
R_ = np.array([[0, -r[2][0], r[1][0]],
               [r[2][0], 0, -r[0][0]],
               [-r[1][0], r[0][0], 0]])  # skew-symmetric matrix of the axis
R = np.cos(theta) * np.eye(3) + (1 - np.cos(theta)) * r * r.T + np.sin(theta) * R_  # Rodrigues' formula
print('Rotation matrix')
print(R)
Rotation matrix to Euler angles:
import math
import numpy as np

def isRotationMatrix(R):
    Rt = np.transpose(R)                       # transpose of R
    shouldBeIdentity = np.dot(Rt, R)           # R^T * R, should be the identity if R is orthogonal
    I = np.identity(3, dtype=R.dtype)          # 3x3 identity matrix
    n = np.linalg.norm(I - shouldBeIdentity)   # Frobenius norm of the difference
    return n < 1e-6                            # a rotation matrix must be orthogonal, so this should be ~0
def rotationMatrixToAngles(R):
    assert (isRotationMatrix(R))               # check that R really is a rotation matrix (orthogonality)
    sy = math.sqrt(R[0, 0] * R[0, 0] + R[1, 0] * R[1, 0])  # sy = sqrt(r11^2 + r21^2) = |cos(beta)| (indices start at 0)
    singular = sy < 1e-6                       # is beta close to +/-90 degrees?
    if not singular:                           # beta is not +/-90 degrees
        x = math.atan2(R[2, 1], R[2, 2])
        y = math.atan2(-R[2, 0], sy)
        z = math.atan2(R[1, 0], R[0, 0])
    else:                                      # beta is +/-90 degrees
        x = math.atan2(-R[1, 2], R[1, 1])
        y = math.atan2(-R[2, 0], sy)           # this formula also holds when z = 0
        z = 0
    # convert from radians to degrees
    x = x * 180.0 / math.pi
    y = y * 180.0 / math.pi
    z = z * 180.0 / math.pi
    return np.array([x, y, z])
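Alternatively, OpenCV's cv2.Rodrigues can do the vector-to-matrix step directly. A minimal sketch of chaining the two conversions (rotation_vector and rotationMatrixToAngles are the names used above; the pitch/yaw/roll assignment follows this article's convention of pitch about x, yaw about y and roll about z):
python
import cv2

R, _ = cv2.Rodrigues(rotation_vector)            # rotation vector -> rotation matrix
pitch, yaw, roll = rotationMatrixToAngles(R)     # rotation matrix -> Euler angles in degrees
print("pitch: {0}, yaw: {1}, roll: {2}".format(pitch, yaw, roll))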
2) Rotation vector -> quaternion -> Euler angles
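The original article does not expand on this route. As one possible sketch, the rotation vector can first be written as a unit quaternion (half-angle form) and the quaternion then converted to Euler angles; rotationVectorToEulerAngles is a hypothetical helper name, and the formulas assume the same axis convention as above (pitch about x, yaw about y, roll about z):
python
import math
import numpy as np

def rotationVectorToEulerAngles(rvec):
    theta = np.linalg.norm(rvec)                             # rotation angle (assumed > 0)
    # rotation vector -> quaternion (w, x, y, z)
    w = math.cos(theta / 2)
    x, y, z = math.sin(theta / 2) * np.asarray(rvec).flatten() / theta
    # quaternion -> Euler angles, in degrees
    pitch = math.degrees(math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y)))
    yaw   = math.degrees(math.asin(2 * (w * y - z * x)))
    roll  = math.degrees(math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z)))
    return pitch, yaw, roll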
