Head Pose Estimation in Computer Vision: A Survey
Erik Murphy-Chutorian, Student Member, IEEE, and Mohan Manubhai Trivedi, Senior Member, IEEE

Abstract—The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments.

Index Terms—Head Pose Estimation, Human Computer Interfaces, Gesture Analysis, Facial Landmarks, Face Analysis

I. INTRODUCTION

FROM an early age, people display the ability to quickly and effortlessly interpret the orientation and movement of a human head, thereby allowing one to infer the intentions of others who are nearby and to comprehend an important nonverbal form of communication. The ease with which one accomplishes this task belies the difficulty of a problem that has challenged computational systems for decades. In a computer vision context, head pose estimation is the process of inferring the orientation of a human head from digital imagery. It requires a series of processing steps to transform a pixel-based representation of a head into a high-level concept of direction. Like other facial vision processing steps, an ideal head pose estimator must demonstrate invariance to a variety of image-changing factors. These factors include physical phenomena like camera distortion, projective geometry, and multi-source non-Lambertian lighting, as well as biological appearance, facial expression, and the presence of accessories like glasses and hats.

Although it might seem like an explicit specification of a vision task, head pose estimation has a variety of interpretations. At the coarsest level, head pose estimation applies to algorithms that identify a head in one of a few discrete orientations, e.g., a frontal versus left/right profile view. At the fine (i.e., granular) level, a head pose estimate might be a continuous angular measurement across multiple degrees of freedom (DOF). A system that estimates only a single DOF, perhaps the left-to-right movement of a head, is still a head pose estimator, as is the more complex approach that estimates a full 3D orientation and position of a head while incorporating additional degrees of freedom, including the movement of the facial muscles and jaw.

Manuscript received June 5, 2007; revised November 29, 2007 and February 28, 2008; accepted March 24, 2008.

E. Murphy-Chutorian and M. M. Trivedi are with the Computer Vision and Robotics Research Laboratory, Department of Electrical and Computer Engineering, University of California, San Diego. Email: {erikmc, mtrivedi}@ucsd.edu

In the context of computer vision, head pose estimation is most commonly interpreted as the ability to infer the orientation of a person's head relative to the view of a camera. More rigorously, head pose estimation is the ability to infer the orientation of a head relative to a global coordinate system, but this subtle difference requires knowledge of the intrinsic camera parameters to undo the perceptual bias from perspective distortion. The range of head motion for an average adult male encompasses a sagittal flexion and extension (i.e., forward to backward movement of the neck) from -60.4° to 69.6°, a frontal lateral bending (i.e., right to left bending of the neck) from -40.9° to 36.3°, and a horizontal axial rotation (i.e., right to left rotation of the head) from -79.8° to 75.3° [26]. The combination of muscular rotation and relative orientation is an often overlooked ambiguity (e.g., a profile view of a head does not look exactly the same when a camera is viewing from the side as when the camera is viewing from the front and the head is turned sideways). Despite this problem, it is often assumed that the human head can be modeled as a disembodied rigid object. Under this assumption, the human head is limited to 3 degrees of freedom (DOF) in pose, which can be characterized by the pitch, roll, and yaw angles pictured in Fig. 1.
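As an aside, the rigid-head assumption means the three angles define a rotation matrix. The axis assignment and multiplication order below are our own choice for illustration, since the surveyed papers do not share a single convention:

```latex
% One common Euler-angle convention (an assumption; papers differ):
% pitch \alpha about the x-axis, yaw \beta about the y-axis,
% roll \gamma about the camera (z) axis, composed as
R(\alpha,\beta,\gamma) = R_z(\gamma)\, R_y(\beta)\, R_x(\alpha),
\qquad
R_y(\beta) =
\begin{pmatrix}
\cos\beta & 0 & \sin\beta\\
0 & 1 & 0\\
-\sin\beta & 0 & \cos\beta
\end{pmatrix}
```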

Head pose estimation is intrinsically linked with visual gaze estimation, i.e., the ability to characterize the direction and focus of a person's eyes. By itself, head pose provides a coarse indication of gaze that can be estimated in situations where the eyes of a person are not visible (as with low-resolution imagery, or in the presence of eye-occluding objects like sunglasses). When the eyes are visible, head pose becomes a requirement to accurately predict gaze direction. Physiological investigations have demonstrated that a person's prediction of gaze comes from a combination of both head pose and eye direction [59]. By digitally compositing images of specific eye directions on different head orientations, the authors established that an observer's interpretation of gaze is skewed in the direction of the target's head.

A graphic example of this effect is the nineteenth-century drawing shown in Fig. 2 [134]. In this sketch, two views of a head are presented at different orientations, but the eyes are drawn in an identical configuration in both. Glancing at this image, it is quite clear that the perceived direction of gaze is highly influenced by the pose of the head. If the head is removed entirely and only the eyes remain, the perceived direction is similar to that in which the head is in a frontal configuration. Based on this observation and our belief that the human gaze estimation ability is appropriately processing the visual information, we hypothesize that a passive camera sensor with no prior knowledge of the lighting conditions has insufficient information to accurately estimate the orientation of an eye without knowledge of the head orientation as well. To support this statement, consider the proportion of visible sclera (i.e., the white area) surrounding the eye. The high contrast between the sclera and the iris is discernible from a distance and might have evolved to facilitate gaze perception [54].


Fig. 1. The three degrees-of-freedom of a human head can be described by the egocentric rotation angles pitch, roll, and yaw.

An eye direction model that uses this scleral iris cue would need a head pose estimate to interpret the gaze direction, since any head movement introduces a gaze shift that would not affect the visible sclera. Consequently, to computationally estimate human gaze in any configuration, an eye tracker should be supplemented with a head pose estimation system.

This paper presents a survey of the head pose estimation methods and systems that have been published over the past 14 years. This work is organized by common themes and trends, paired with a discussion of the advantages and disadvantages inherent in each approach. Previous literature surveys have considered general human motion [76,77], face detection [41,143], face recognition [149], and affect recognition [25]. In this paper, we present a similar treatment for head pose estimation.

The remainder of the paper is structured as follows: Section II describes the motivation for head pose estimation approaches; Section III contains an organized survey of head pose estimation approaches; Section IV discusses the ground truth tools and datasets that are available for evaluation and compares the systems described in our survey based on published results and general applicability; Section V presents a summary and concluding remarks.

II. MOTIVATION

People use the orientation of their heads to convey rich, interpersonal information. For example, a person will direct his head toward someone to indicate the intended target of a conversation. Similarly, in a dialogue, head direction is a nonverbal communiqué that cues a listener when to switch roles and begin speaking. There is important meaning in the movement of the head as a form of gesturing in a conversation. People nod to indicate that they understand what is being said, and they use additional gestures to indicate dissent, confusion, consideration, and agreement. Exaggerated head movements are synonymous with pointing a finger, and they are a conventional way of directing someone to observe a particular location.

In addition to the information that is implied by deliberate head gestures, there is much that can be inferred by observing a person's head. For instance, quick head movements may be a sign of surprise or alarm. In people, these commonly trigger reflexive responses from an observer that are very difficult to ignore, even in the presence of contradicting auditory stimuli [58].

Fig. 2. Wollaston Illusion: Although the eyes are the same in both images, the perceived gaze direction is dictated by the orientation of the head [134].

Other important observations can be made by establishing the visual focus of attention from a head pose estimate. If two people focus their visual attention on each other, sometimes referred to as mutual gaze, this is often a sign that they are engaged in a discussion. Mutual gaze can also be used as a sign of awareness, e.g., a pedestrian will wait for a stopped automobile driver to look at him before stepping into a crosswalk. Observing a person's head direction can also provide information about the environment. If a person shifts his head towards a specific direction, there is a high likelihood that it is in the direction of an object of interest. Children as young as six months exploit this property, known as gaze following, by looking toward the line of sight of a caregiver as a saliency filter for the environment [79].

Like speech recognition, which has already become entwined in many widely available technologies, head pose estimation will likely become an off-the-shelf tool to bridge the gap between humans and computers.

III. HEAD POSE ESTIMATION METHODS

It is both a challenge and our aspiration to organize all of the varied methods for head pose estimation into a single ubiquitous taxonomy. One approach we had considered was a functional taxonomy that organized each method by its operating domain. This approach would have separated methods that require stereo depth information from systems that require only monocular video. Similarly, it would have segregated approaches that require a near-field view of a person's head from those that can adapt to the low resolution of a far-field view. Another important consideration is the degree of automation that is provided by each system. Some systems estimate head pose automatically, while others assume challenging prerequisites, such as locations of facial features that must be known in advance. It is not always clear whether these requirements can be satisfied precisely with the vision algorithms that are available today.

Instead of a functional taxonomy, we have arranged each system by the fundamental approach that underlies its implementation. This organization allows us to discuss the evolution of different techniques, and it allows us to avoid ambiguities that arise when approaches are extended beyond their original functional boundaries. Our evolutionary taxonomy consists of the following eight categories, which describe the conceptual approaches that have been used to estimate head pose:

TABLE I
TAXONOMY OF HEAD POSE ESTIMATION APPROACHES

Appearance Template Methods
  - Image Comparison: Mean Squared Error [91], Normalized Cross-Correlation [7]
  - Filtered Image Comparison: Gabor-wavelets [111]
Detector Arrays
  - Machine Learning: SVM [47], Adaboost Cascades [53]
  - Neural Networks: Router Networks [103]
Nonlinear Regression Methods
  - Regression Tools: SVR [65,86], Feature-SVR [78]
  - Neural Networks: MLP [108,115,130], Convolutional Network [96], LLM [100], Feature-MLP [33]
Manifold Embedding Methods
  - Linear Subspaces: PCA [111], Pose-eigenspaces [114], LEA [27]
  - Kernelized Subspaces: KPCA [136], KLDA [136]
  - Nonlinear Subspaces: Isomap [45,101], LE [6], LLE [102], Biased Manifold Embedding [4], SSE [142]
Flexible Models
  - Feature Descriptors: Elastic Graph Matching [55]
  - Active Appearance Models: ASM [60], AAM [18,140]
Geometric Methods
  - Facial Features: Planar & 3D Methods [30], Projective Geometry [42], Vanishing Point [132]
Tracking Methods
  - Feature Tracking: RANSAC [31,147], Weighted Least Squares [145], Physiological Constraint [144]
  - Model Tracking: Parameter Search [98,107,138], Least-squares [14], Dynamic Templates [139]
  - Affine Transformation: Brightness and Depth Constancy [80]
  - Appearance-based Particle Filters: Adaptive Diffusion [94], Dual-linear State Model [85]
Hybrid Methods
  - Geometric & Tracking: Local Feature configuration + SSD Tracking [43,52]
  - Appearance Templ. & Tracking: AVAM with Keyframes [82], Template Matching w/ Particle Filtering [1]
  - Nonlinear Regression & Tracking: SVR + 3D Particle Filter [85]
  - Manifold Embedding & Tracking: PCA Comparison + Continuous Density HMM [49]
  - Other Combinations: Nonlinear regression + Geometric + Particle Filter [109], KLDA + EGM [136]

- Appearance Template Methods compare a new image of a head to a set of exemplars (each labeled with a discrete pose) in order to find the most similar view.
- Detector Array Methods train a series of head detectors, each attuned to a specific pose, and assign a discrete pose to the detector with the greatest support.
- Nonlinear Regression Methods use nonlinear regression tools to develop a functional mapping from the image or feature data to a head pose measurement.
- Manifold Embedding Methods seek low-dimensional manifolds that model the continuous variation in head pose. New images can be embedded into these manifolds and then used for embedded template matching or regression.
- Flexible Models fit a non-rigid model to the facial structure of each individual in the image plane. Head pose is estimated from feature-level comparisons or from the instantiation of the model parameters.
- Geometric Methods use the location of features such as the eyes, mouth, and nose tip to determine pose from their relative configuration.
- Tracking Methods recover the global pose change of the head from the observed movement between video frames.
- Hybrid Methods combine one or more of these aforementioned methods to overcome the limitations inherent in any single approach.

Table I provides a list of representative systems for each of these categories. In this section, each category is described in detail. Comments are provided on the functional requirements of each approach and the advantages and disadvantages of each design choice.

A. Appearance Template Methods

Appearance template methods use image-based comparison metrics to match a view of a person's head to a set of exemplars with corresponding pose labels. In the simplest implementation, the queried image is given the same pose that is assigned to the most similar of these templates. An illustration is presented in Fig. 3. Some characteristic examples include the use of normalized cross-correlation at multiple image resolutions [7] and mean squared error (MSE) over a sliding window [91].

Appearance templates have some advantages over more complicated methods. The templates can be expanded to a larger set at any time, allowing systems to adapt to changing conditions. Furthermore, appearance templates do not require negative training examples or facial feature points. Creating a corpus of training data requires only cropping head images and providing head pose annotations. Appearance templates are also well suited for both high- and low-resolution imagery.

There are many disadvantages to appearance templates. Without the use of some interpolation method, they are only capable of estimating discrete pose locations. They typically assume that the head region has already been detected and localized, and localization error can degrade the accuracy of the head pose estimate. They can also suffer from efficiency concerns, since as more templates are added to the exemplar set, more computationally expensive image comparisons need to be computed. One proposed solution to these last two problems was to train a set of Support Vector Machines (SVMs) to detect and localize the face, and subsequently use the support vectors as appearance templates to estimate head pose [88,89].

Despite those limitations, the most significant problem with appearance templates is that they operate under the faulty assumption that pairwise similarity in the image space can be equated to similarity in pose.

Fig. 3. Appearance template methods compare a new head view to a set of training examples (each labeled with a discrete pose) and find the most similar view.

Consider two images of the same person at slightly different poses, and two images of different people at the same pose. In this scenario, the effect of identity can cause more dissimilarity in the image than a change in pose, and template matching would improperly associate the image with the incorrect pose. Although this effect would likely lessen for widely varying poses, there is still no guarantee that pairwise similarity corresponds to similarity in the pose domain (e.g., a right-profile image of a face may be more similar to a left-profile image than to a frontal view). Thus, even with a homogeneous set of discrete poses for every individual, errors in template comparison can lead to highly erroneous pose estimates.

To decrease the effect of the pairwise similarity problem, many approaches have experimented with various distance metrics and image transformations that reduce the head pose estimation error. For example, the images can be convolved with a Laplacian-of-Gaussian filter [32] to emphasize some of the more common facial contours while removing some of the identity-specific texture variation across different individuals. Similarly, the images can be convolved with a complex Gabor wavelet to emphasize directed features, such as the vertical lines of the nose and the horizontal orientation of the mouth [110,111]. The magnitude of this complex convolution also provides some invariance to shift, which can conceivably reduce the appearance errors that arise from the variance in facial feature locations between people.
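As a concrete illustration, the following minimal Python sketch combines the two ideas above: Laplacian-of-Gaussian preprocessing followed by normalized cross-correlation against a labeled exemplar set. The function names and the choice of sigma are our own, not taken from any surveyed system:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def estimate_pose(query, templates, labels, sigma=2.0):
    """Return the pose label of the exemplar most similar to `query`.
    `templates` is a list of cropped head images, `labels` the
    corresponding discrete pose annotations (e.g., yaw in degrees)."""
    q = gaussian_laplace(query.astype(float), sigma)  # LoG preprocessing
    scores = [ncc(q, gaussian_laplace(t.astype(float), sigma))
              for t in templates]
    return labels[int(np.argmax(scores))]
```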

B. Detector Arrays

Many methods have been introduced in the last decade for frontal face detection [97,104,126]. Given the success of these approaches, it seems like a natural extension to estimate head pose by training multiple face detectors, each specific to a different discrete pose. Fig. 4 illustrates this approach. For arrays of binary classifiers, successfully detecting the face will specify the pose of the head, assuming that no two classifiers are in disagreement. For detectors with continuous output, pose can be estimated by the detector with the greatest support. Detector arrays are similar to appearance templates in that they operate directly on an image patch. Instead of comparing an image to a large set of individual templates, however, the image is evaluated by a detector trained on many images with a supervised learning algorithm.

An early example of a detector array used three SVMs for three discrete yaws [47]. A more recent system trained five FloatBoost classifiers operating in a far-field, multi-camera setting [146].

Fig. 4. Detector arrays comprise a series of head detectors, each attuned to a specific pose, and a discrete pose is assigned to the detector with the greatest support.

An advantage of detector array methods is that a separate head detection and localization step is not required, since each detector is also capable of making the distinction between head and non-head. Simultaneous detection and pose estimation can be performed by applying the detector to many subregions in the image. Another improvement is that, unlike appearance templates, detector arrays employ training algorithms that learn to ignore the appearance variation that does not correspond to pose change. Detector arrays are also well suited for high- and low-resolution images.

Disadvantages to detector array methods also exist. It is burdensome to train many detectors for each discrete pose. For a detector array to function as a head detector and pose estimator, it must also be trained on many negative non-face examples, which requires substantially more training data. In addition, there may be systematic problems that arise as the number of detectors is increased. If two detectors are attuned to very similar poses, the images that are positive training examples for one must be negative training examples for the other. It is not clear whether prominent detection approaches can learn a successful model when the positive and negative examples are very similar in appearance. Indeed, in practice these systems have been limited to one degree of freedom and fewer than 12 detectors. Furthermore, since the majority of the detectors have binary outputs, there is no way to derive a reliable continuous estimate from the result, allowing only coarse head pose estimates and creating ambiguities when multiple detectors simultaneously classify a positive image. Finally, the computational requirements increase linearly with the number of detectors, making it difficult to implement a real-time system with a large array. As a solution to this last problem, it has been suggested that a router classifier can be used to pick a single subsequent detector for pose estimation [103]. In this case the router effectively determines pose (i.e., assuming this is a face, what is its pose?), and the subsequent detector confirms the choice (i.e., is this a face at the pose specified by the router?). Although this technique sounds promising in theory, it should be noted that it has not been demonstrated for pitch or yaw change, but rather only for rotation in the camera plane, first using neural network-based face detectors [103] and later with cascaded AdaBoost detectors [53].
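To make the basic construction concrete, here is a deliberately simplified Python sketch of a pose-detector array built from one-vs-rest SVMs; a real array would also include negative non-face training examples, which this sketch omits:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical setup: X is an (n_samples, n_features) array of
# vectorized head crops; yaw_labels holds one discrete yaw per sample.
def train_detector_array(X, yaw_labels, poses):
    """Train one binary detector per discrete pose (one-vs-rest)."""
    detectors = {}
    for pose in poses:
        y = (yaw_labels == pose).astype(int)  # positives: this pose only
        detectors[pose] = SVC(kernel="linear").fit(X, y)
    return detectors

def estimate_pose(detectors, x):
    """Assign the pose whose detector gives the greatest support
    (here, the largest signed distance from the SVM margin)."""
    scores = {p: clf.decision_function(x.reshape(1, -1))[0]
              for p, clf in detectors.items()}
    return max(scores, key=scores.get)
```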

Fig. 5. Nonlinear regression provides a functional mapping from the image or feature data to a head pose measurement.

C. Nonlinear Regression Methods

Nonlinear regression methods estimate pose by learning a nonlinear functional mapping from the image space to one or more pose directions. An illustration is provided in Fig. 5. The allure of these approaches is that, with a set of labeled training data, a model can be built that will provide a discrete or continuous pose estimate for any new data sample. The caveat is that it is not clear how well a specific regression tool will be able to learn the proper mapping.

The high dimensionality of an image presents a challenge for some regression tools. Success has been demonstrated using Support Vector Regressors (SVRs) when the dimensionality of the data can be reduced, for example with Principal Component Analysis (PCA) [64,65] or with localized gradient orientation histograms [86], the latter giving better accuracy for head pose estimation. Alternatively, if the locations of facial features are known in advance, regression tools can be used on the relatively low-dimensional feature data extracted at these points [70,78].

Of the nonlinear regression tools used for head pose estimation, neural networks have been the most widely used in the literature. An example is the multi-layer perceptron (MLP), consisting of many feed-forward cells defined in multiple layers (i.e., the output of the cells in one layer comprises the input for the subsequent layer) [8,23]. The perceptron can be trained by backpropagation, a supervised learning procedure that propagates the error backwards through each of the hidden layers in the network to update the weights and biases. Head pose can be estimated from cropped images of a head using an MLP in different configurations. For instance, the output nodes can correspond to discretized poses, in which case a Gaussian kernel can be used to smooth the training poses to account for the similarity of nearby poses [10,106,148]. Although this approach works, it is similar to detector arrays and appearance templates in that it provides only a coarse estimate of pose at discrete locations.

An MLP can also be trained for fine head pose estimation over a continuous pose range. In this configuration, the network has one output for each DOF, and the activation of the output is proportional to its corresponding orientation [108,115,117,129,130]. Alternatively, a set of MLP networks with a single output node can be trained individually for each DOF. This approach has been used for heads viewed from multiple far-field cameras in indoor environments, using background subtraction or a color filter to detect the facial region and Bayesian filtering to fuse and smooth the estimates of each individual camera [120,128–130].
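A minimal sketch of the continuous-output configuration, assuming vectorized head crops and per-image angle labels (the network size and activation below are illustrative choices, not those of any surveyed system):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: X holds vectorized, cropped head images scaled to
# [0, 1]; Y holds continuous pose angles, one column per DOF
# (e.g., [yaw, pitch] in degrees).
def train_fine_pose_mlp(X, Y):
    """Fit an MLP whose outputs are proportional to the pose angles,
    one output per degree of freedom."""
    net = MLPRegressor(hidden_layer_sizes=(60,),
                       activation="tanh",
                       max_iter=2000)
    return net.fit(X, Y)

# Usage: pose = train_fine_pose_mlp(X, Y).predict(x_new.reshape(1, -1))
```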

Fig. 6. Manifold embedding methods try to directly project a processed image onto the head pose manifold using linear and nonlinear subspace techniques.

A locally-linear map (LLM) is another popular neural network, consisting of many linear maps [100]. To build the network, the input data is compared to a centroid sample for each map and used to learn a weight matrix. Head pose estimation requires a nearest-neighbor search for the closest centroid, followed by linear regression with the corresponding map. This approach can be extended with difference vectors and dimensionality reduction [11], as well as decomposition with Gabor wavelets [56].
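The following sketch is our own simplification of the LLM idea, not the exact construction of [100]: a least-squares linear map from features to pose is fit around each centroid, and prediction uses the nearest centroid's map.

```python
import numpy as np

def fit_llm(X, y, centroids, k=50):
    """Fit one local least-squares map per centroid using the k
    training samples nearest to that centroid."""
    maps = []
    for c in centroids:
        idx = np.argsort(((X - c) ** 2).sum(axis=1))[:k]
        A = np.hstack([X[idx], np.ones((len(idx), 1))])  # affine term
        w, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        maps.append(w)
    return maps

def predict_llm(x, centroids, maps):
    i = int(np.argmin(((centroids - x) ** 2).sum(axis=1)))  # nearest map
    return np.append(x, 1.0) @ maps[i]
```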

As was mentioned earlier with SVR, it is possible to train a neural network using data from facial feature locations. This approach has been evaluated with associative neural networks [33,34].

The advantages of neural network approaches are numerous. These systems are very fast, require only cropped, labeled faces for training, work well in near-field and far-field imagery, and give some of the most accurate head pose estimates in practice (see Section IV).

The main disadvantage of these methods is that they are prone to error from poor head localization. As a suggested solution, a convolutional network [62] that extends the MLP by explicitly modeling some shift, scale, and distortion invariance can be used to reduce this source of error [95,96].

D. Manifold Embedding Methods

Although an image of a head can be considered a data sample in a high-dimensional space, there are inherently many fewer dimensions in which pose can vary. With a rigid model of the head, this can be as few as three dimensions for orientation and three for position. Therefore, it is possible to consider that each high-dimensional image sample lies on a low-dimensional continuous manifold constrained by the allowable pose variations. For head pose estimation, the manifold must be modeled, and an embedding technique is required to project a new sample into the manifold. This low-dimensional embedding can then be used for head pose estimation with techniques such as regression in the embedded space or embedded template matching. Any dimensionality reduction algorithm can be considered an attempt at manifold embedding, but the challenge lies in creating an algorithm that successfully recovers head pose while ignoring other sources of image variation.

Two of the most popular dimensionality reduction techniques, principal component analysis (PCA) and its nonlinear kernelized version, KPCA, discover the primary modes of variation from a set of data samples [23]. Head pose can be estimated with PCA, e.g., by projecting an image into a PCA subspace and comparing the result to a set of embedded templates [75]. It has been shown that similarity in this low-dimensional space is more likely to correlate with pose similarity than appearance template matching with Gabor-wavelet preprocessing [110,111]. Nevertheless, PCA and KPCA are inferior techniques for head pose estimation [136]. Besides the linear limitations of standard PCA, which cannot adequately represent the nonlinear image variations caused by pose change, these approaches are unsupervised techniques that do not incorporate the pose labels that are usually available during training. As a result, there is no guarantee that the primary components will relate to pose variation rather than to appearance variation. Most likely, they will correspond to both.
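A minimal sketch of PCA-based embedded template matching, with our own hypothetical function names: every labeled head crop is projected into a low-dimensional subspace, and a query receives the pose of its nearest embedded template.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_embedding(X_train, pose_labels, n_components=20):
    """Learn a PCA subspace and embed the labeled training crops."""
    pca = PCA(n_components=n_components).fit(X_train)
    return pca, pca.transform(X_train), np.asarray(pose_labels)

def estimate_pose(pca, embedded, labels, x_query):
    """Nearest-neighbor pose lookup in the embedded space."""
    z = pca.transform(x_query.reshape(1, -1))
    dists = np.linalg.norm(embedded - z, axis=1)
    return labels[int(np.argmin(dists))]
```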

To mitigate these problems, the appearance information can be decoupled from the pose by splitting the training data into groups that each share the same discrete head pose. Then, PCA and KPCA can be applied to generate a separate projection matrix for each group. These pose-specific eigenspaces, or pose-eigenspaces, each represent the primary modes of appearance variation and provide a decomposition that is independent of the pose variation. Head pose can be estimated by normalizing the image and projecting it into each of the pose-eigenspaces, thus finding the pose with the highest projection energy [114]. Alternatively, the embedded samples can be used as the input to a set of classifiers, such as multi-class SVMs [63]. As testament to the limitations of KPCA, however, it has been shown that by skipping the KPCA projection altogether and using local Gabor binary patterns, one can greatly improve pose estimation with a set of multi-class SVMs [69]. Pose-eigenspaces have an unfortunate side effect: the ability to estimate fine head pose is lost since, like detector arrays, the estimate is derived from a discrete set of measurements. If only coarse head pose estimation is desired, it is better to use multi-class linear discriminant analysis (LDA) or its kernelized version, KLDA [23], since these techniques can be used to find the modes of variation in the data that best account for the differences between discrete pose classes [15,136].

Other manifold embedding approaches have shown more promise for head pose estimation. These include Isometric feature mapping (Isomap) [101,119], Locally Linear Embedding (LLE) [102], and Laplacian Eigenmaps (LE) [6]. To estimate head pose with any of these techniques, there must be a procedure to embed a new data sample into an existing manifold. Raytchev et al. [101] described such a procedure for an Isomap manifold, but for out-of-sample embedding in an LLE or LE manifold there is no explicit solution. For these approaches, a new sample must be embedded with an approximate technique, such as a Generalized Regression Neural Network [4]. Alternatively, LLE and LE can be replaced by their linear approximations, locally embedded analysis (LEA) [27] and locality preserving projections (LPP) [39].

There are still some remaining weaknesses in the manifold embedding approaches mentioned thus far. With the exception of LDA and KLDA, each of these techniques operates in an unsupervised fashion, ignoring the pose labels that might be available during training. As a result, they have a tendency to build manifolds for identity as well as pose [4]. As one solution to this problem, identity can be separated from pose by creating a separate manifold for each subject and aligning them together. For example, a high-dimensional ellipse can be fit to the data in a set of Isomap manifolds and then used to normalize the manifolds [45]. To map from the feature space to the embedded space, nonlinear interpolation can be performed with radial basis functions. Nevertheless, even this approach has its weaknesses, because appearance variation can be due to factors other than identity and pose, such as lighting. For a more general solution, instead of making separate manifolds for each variation, a single manifold can be created that uses a distance metric biased towards samples with smaller pose differences [4]. This change was shown to improve the head pose estimation performance of Isomap, LLE, and LE.

Another difficulty to consider is the heterogeneity of the training data that is common in many real-world training scenarios. To model identity, multiple people are needed to train a manifold, but it is often impossible to obtain a regular sampling of poses from each individual. Instead, the training images comprise a disjoint set of poses for each person, sampled from some continuous measurement device. A proposed remedy to this problem is to create individualized submanifolds for each subject and use them to render virtual reconstructions of the discrete poses that are missing between subjects [142]. This work introduced Synchronized Submanifold Embedding (SSE), a linear embedding that creates a projection matrix that minimizes the distances between each sample and its nearest reconstructed neighbors (based on the pose label), while maximizing the distances between samples from the same subject.

All of the manifold embedding techniques described in this section are linear or nonlinear approaches. The linear techniques have the advantage that embedding can be performed by matrix multiplication, but they lack the representational ability of the nonlinear techniques. As a middle ground between these approaches, the global head pose manifold can be approximated by a set of localized linear manifolds. This has been demonstrated for head pose estimation with PCA, LDA, and LPP [66].

E. Flexible Models

The previous methods have considered head pose estimation as a signal detection problem, mapping a rectangular region of image pixels to a specific pose orientation. Flexible models take a different approach. With these techniques, a non-rigid model is fit to the image such that it conforms to the facial structure of each individual. In addition to pose labels, these methods require training data with annotated facial features, but this enables them to make comparisons at the feature level rather than at the global appearance level. A conceptual illustration is presented in Fig. 7.

Recall the appearance template methods from Section III-A. To estimate pose, the view of a new head is overlaid on each template and a pixel-based metric is used to compare the images. Even with perfect registration, however, the images of two different people will not line up exactly, since the locations of facial features vary between people. Now, consider a template based on a deformable graph of local feature points (eye corners, nose, mouth corners, etc.). To train this system, facial feature locations are manually labeled in each training image, and local feature descriptors, such as Gabor jets, can be extracted at each location. These features can be extracted from views of multiple people, and extra invariance can be achieved by storing a bunch of descriptors at each node.

Fig. 7. Flexible models are fit to the facial structure of the individual in the image plane. Head pose is estimated from feature-level comparisons or from the instantiation of the model parameters.

This representation has been called an Elastic Bunch Graph [57] and possesses the ability to represent non-rigid, or deformable, objects. To compare a bunch graph to a new face image, the graph is placed over the image and exhaustively or iteratively deformed to find the minimum distance between the features at every graph node location. This process is called Elastic Graph Matching (EGM). For head pose estimation, a different bunch graph is created for every discrete pose, and each of these is compared to a new view of the head. The bunch graph with the maximum similarity assigns a discrete pose to the head [55,136]. Because EGM uses features located at specific facial points, there is significantly less inter-subject variability than with unaligned points. This makes it much more likely that similarity between the models will equate to similarity in pose. A disadvantage of this method is that the pose estimate is discrete, requiring many bunch graphs to obtain fine head pose estimates. Unfortunately, comparing many bunch graphs, each with many deformations, is computationally expensive in comparison to most other head pose estimation techniques.

Another flexible model that has evolved for head pose estimation is the Active Appearance Model (AAM) [19], which learns the primary modes of variation in facial shape and texture from a 2D perspective. Consider a set of M specific facial points (perhaps the corners of the eyes, ear tips, nostrils, chin, and mouth). Each point has a 2D coordinate in an image, and these points can be ordered by facial feature and concatenated into a vector of length 2M. If these feature vectors are computed for many faces, spanning the different individuals and poses in which all of the features can be found, they can be used to find the variation in facial shape. Using a dimensionality reduction technique such as PCA on this data results in an Active Shape Model (ASM) [17], capable of representing the primary modes of shape variation. Simply by looking at the largest principal components, one can find the directions in the data that correspond to variations in pitch and yaw [60,61]. If the locations of the facial features were known in a new image, pose could be estimated by projecting the feature locations into the shape subspace and evaluating the components responsible for pose. This can be accomplished by augmenting the ASM with texture information and performing an iterative search to fit the shape to a new image of a face. Early work extracted local grayscale profiles at each feature point and employed a greedy search to match the feature points [60,61]. Later, the joint shape and texture AAM was introduced [19].
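The shape-model step described above reduces to a PCA over concatenated landmark vectors; a minimal sketch with hypothetical data names follows:

```python
import numpy as np
from sklearn.decomposition import PCA

# `landmarks` has shape (n_faces, M, 2): M annotated facial points per
# face. Each face becomes a length-2M vector, and PCA recovers the
# primary modes of shape variation (an Active Shape Model basis).
def build_shape_model(landmarks, n_modes=10):
    X = landmarks.reshape(len(landmarks), -1)  # (n_faces, 2M)
    pca = PCA(n_components=n_modes).fit(X)
    return pca  # pca.mean_ is the mean shape; pca.components_ the modes

# A new shape's coordinates in the subspace (its shape parameters):
# b = build_shape_model(landmarks).transform(new_shape.reshape(1, -1))
```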

To build an AAM, first an ASM must be generated from a set of training data. Next, the face images must be warped such that the feature points match those of the mean shape. The warped images should be normalized and then used to build a shape-free texture model (originally a texture-based PCA subspace). Finally, the correlations between shape and texture are learned and used to generate a combined appearance (shape and texture) model [24]. Given a rough initialization of face shape, the AAM can be fit to a new facial image by iteratively comparing the rendered appearance model to the observed image and adjusting the model parameters to minimize a distance measure between these two images. Once the model has converged to the feature locations, an estimate of head pose can be obtained by mapping the appearance parameters to a pose estimate, a simple example being yaw estimation with linear regression [18].

AAMs have come a long way since their original inception. Fitting methods based on the inverse compositional image alignment algorithm overcome the linear assumption of how appearance error relates to the gradient descent search and allow for more accurate, real-time convergence [72]. A tracked AAM over a video sequence can also be used to estimate the 3D shape modes, which can subsequently be reintroduced to constrain the 2D AAM fitting process [140]. Once the 3D constraint is learned, the AAM can be used to directly estimate the 3D orientation of the head. Alternatively, since the AAM shape points have a one-to-one correspondence, Structure From Motion (SFM) algorithms can be used to estimate the 3D shape of the face, as well as the relative pose difference between two video frames [35]. Further work with AAMs has introduced modifications that expand their utility to driver head pose estimation [3] and multiple cameras [44].

AAMs have good invariance to head localization error, since they adapt to the image and find the exact locations of the facial features. This allows for precise and accurate head pose estimation. The main limitation of AAMs is that all of the facial features are required to be located in each image frame. In practice, these approaches are limited to head pose orientations from which the outer corners of both eyes are visible. It is also not evident that AAM fitting algorithms could successfully operate for far-field head pose estimation with low-resolution facial images.

F. Geometric Methods

There is a great divide between most computer vision pose estimation approaches and the results of psychophysical experiments. While the former have focused predominantly on appearance-based solutions, the latter consider the human perception of head pose to rely on cues such as the deviation of the nose angle and the deviation of the head from bilateral symmetry [133]. These effects and other factors, such as the location of the face in relation to the contour of the head, strongly influence the human perception of head pose, suggesting that these are extremely salient cues regarding the orientation of the head. Geometric approaches to head pose estimation use head shape and the precise configuration of local features to estimate pose, as depicted in Fig. 8. These methods are particularly interesting because they can directly exploit properties that are known to influence human head pose estimation.

Fig. 8. Geometric methods attempt to find local features such as the eyes, mouth, and nose tip and determine pose from the relative configuration of these features.

Early approaches focused on estimating the head pose from a set of facial feature locations. It is assumed that these features are already known and that pose can be estimated directly from the configuration of these points.

The configuration of facial features can be exploited in many ways to estimate pose. Using five facial points (the outside corners of each eye, the outside corners of the mouth, and the tip of the nose), the facial symmetry axis is found by connecting a line between the midpoint of the eyes and the midpoint of the mouth [30]. Assuming a fixed ratio between these facial points and a fixed length of the nose, the facial direction can be determined under weak-perspective geometry from the 3D angle of the nose. Alternatively, the same five points can be used to determine the head pose from the normal to the plane, which can be found from planar skew-symmetry and a coarse estimate of the nose position [30]. Another estimate of pose can be obtained using a different set of five points (the inner and outer corners of each eye, and the tip of the nose) [42]. Under the assumption that all four eye points are coplanar, yaw can be determined from the observable difference in size between the left and right eye due to projective distortion from the known camera parameters. Roll can be found simply from the angle of the eye-line from the horizon. Pitch is determined by comparing the distance between the nose tip and the eye-line to an anthropometric model. Unlike the previous two approaches, however, this technique does not present a solution to improve pose estimation for near-frontal views. These configurations are called degenerative angles, since they require very high precision to accurately estimate head pose with this model. Another method was recently proposed using the inner and outer corners of each eye and the corners of the mouth, which are automatically detected in the image [132]. The observation is that the three lines between the outer eye corners, the inner eye corners, and the mouth corners are parallel. Any observed deviation from parallel in the image plane is a result of perspective distortion. The vanishing point (i.e., where these lines would intersect in the image plane) can be calculated using least squares to minimize the over-determined solution for the three lines. This point can be used to estimate the 3D orientation of the parallel lines if the ratio of their lengths is known, and it can be used to estimate the absolute 3D position of each feature point if the actual line length is known. As this information varies across identity, the EM algorithm with a Gaussian mixture model can adapt the facial parameters for each person to minimize the backprojection error. The downside to this approach is that pose can only be estimated if the pose is near enough to a frontal view to see all of the facial lines.

Fig. 9. Tracking methods find the relative movement between video frames to estimate the global movement of a head.

These geometric methods are fast and simple. With only a few facial features, a decent estimate of head pose can be obtained. The obvious difficulty lies in detecting the features with high precision and accuracy, but the more subtle challenges stem from handling outlying or missing feature detections. Far-field imagery, in this context, is problematic, since the low resolution can make it difficult or impossible to precisely determine the feature locations. Also, situations often arise that can permanently obscure facial landmarks, such as when a person wears glasses that obscure his eye corners. Considering that geometric approaches depend on the accurate detection of facial points, they are typically more sensitive to occlusion than appearance-based methods that use information from the entire facial region.

It is worth noting that even very simple geometric cues can be used to estimate head pose. Fitting an ellipse to the gradient contour and color of a face provides a coarse estimate of pose for one degree of freedom [20]. For near-frontal faces, the yaw of a face can be reliably estimated by creating a triangle between the eyes and the mouth and finding its deviation from a pure isosceles triangle [90]. With multiple cameras surrounding the head, yaw can be estimated as the orientation with the most skin color [12] or with a skin-color template [13]. Likewise, the location of the face can be estimated by effectively registering the location of the face relative to the segmented facial region [141].
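Two of the simplest cues above can be written down directly. The sketch below is our own simplification: real systems such as [42] use calibrated cameras and anthropometric models, and the yaw proxy here is uncalibrated.

```python
import numpy as np

# Points are (x, y) pixel coordinates of detected facial landmarks.
def roll_from_eye_line(left_eye, right_eye):
    """Roll is the angle of the line between the eyes and the horizon."""
    dx, dy = np.subtract(right_eye, left_eye)
    return np.degrees(np.arctan2(dy, dx))

def coarse_yaw_from_eye_widths(l_outer, l_inner, r_inner, r_outer):
    """Coarse yaw cue: under projective distortion, the nearer eye
    appears wider; the normalized width difference is a rough proxy."""
    wl = np.linalg.norm(np.subtract(l_inner, l_outer))
    wr = np.linalg.norm(np.subtract(r_outer, r_inner))
    return (wr - wl) / (wr + wl)  # in [-1, 1], not calibrated degrees
```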

G. Tracking Methods

Tracking methods operate by following the relative movement of the head between consecutive frames of a video sequence, as depicted in Fig. 9. Temporal continuity and smooth motion constraints are utilized to provide a visually appealing estimate of pose over time. These systems typically demonstrate a high level of accuracy (see Section IV), but initialization from a known head position is requisite. Typically, the subject must maintain a frontal pose before the system begins and must be reinitialized whenever the track is lost. As a result, approaches often rely on manual initialization, or on a camera view such that the subject's neutral head pose is forward-looking and easily reinitialized with a frontal face detector.

Tracking methods can operate in a bottom-up manner, following low-level facial landmarks from frame to frame. Early work considered six feature points (tracked using correlation windows) and determined the head movement from weak-perspective geometry [31]. A more sophisticated approach is to assume the human face is a planar surface in an orthographic space. In this case, two degrees of freedom can be recovered by using weighted least squares to determine the best affine transformation between any two frames. The problem is reduced to a rotational ambiguity, which is capable of providing the head direction [145]. Earlier approaches used traditional least squares to fit automatically selected facial points under affine geometry [73] and weak-perspective geometry [121]. A global SSD tracker coarsely followed the entire face, while local features were tracked from within this region. More recently, these methods have evolved into more complex techniques that match feature points with robust SIFT [68] descriptors and use prior knowledge of 3D face shape [93,144], or stereo and RANSAC-based matching [147], to recover the pose change under full perspective projection.

Tracking can alternatively employ a model-based approach, by finding the transformation of a model that best accounts for the observed movement of the head. For head pose estimation, it is common to use a rigid 3D model of the head. To estimate head pose, one simply needs to find the rotation and translation of the model that best fits each new image-based observation. This can be accomplished by texture-mapping the image of the head onto a 3D model. In the simplest implementation, this can be done manually, and then head pose can be estimated by searching through a discrete set of transformations to find the one that minimizes the difference in appearance between the new frame and the model [98]. This can be improved to continuous pose measurement using a gradient descent search [107] and further refined with optical flow to guide the optimization [71]. Also, since a global appearance model can suffer when dynamic lighting introduces effects such as partial shadowing, a similarity metric can be used that averages over a set of localized regions [138]. Often, reasonable accuracy can be obtained with affine transformations (e.g., with a stereo camera rig, one can find the relative pose change as the translation and rotation that minimize the error in a least-squares sense for both grayscale intensity and depth [37,80]). A similar approach has been presented with a time-of-flight sensor using constancy of 2D motion fields and depth from optical flow [150].
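The least-squares step used by the feature-tracking methods above is compact enough to sketch; the helper below is our own illustration, not taken from any single surveyed paper:

```python
import numpy as np

def fit_affine(pts_prev, pts_curr):
    """Recover the affine transform between matched 2D landmarks.
    pts_prev, pts_curr: (n, 2) arrays of matched points, n >= 3."""
    A = np.hstack([pts_prev, np.ones((len(pts_prev), 1))])  # [x y 1]
    # Solve A @ M ~= pts_curr for M (3x2) in the least-squares sense.
    M, *_ = np.linalg.lstsq(A, pts_curr, rcond=None)
    affine, translation = M[:2].T, M[2]  # p_curr ~= affine @ p_prev + t
    return affine, translation
```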

The primary advantage of tracking approaches is their ability to track the head with high accuracy by discovering the small pose shifts between video frames. In this tracking configuration, these methods consistently outperform other head pose estimation approaches (see Section IV). An additional advantage of model-based tracking is the ability to dynamically construct an individualized archetype of a person's head. This allows these approaches to avoid the detrimental effects of appearance variation.

The difficulty with tracking methods is the requisite of an accurate initialization of position and pose to generate a new model or adapt an existing model. Without a separate localization and head pose estimation step, these approaches can only be used to discover the relative transformation between frames. In this mode of operation, these methods are not estimating head pose in the absolute sense, but rather tracking the movement of the head. Nevertheless, for some applications, only relative motion is required. Some examples include tracking the head with a manually initialized cylindrical model and recursive least-squares optimization [14], or tracking with a deformable anthropomorphic 3D model [21]. Tracking approaches can be initialized automatically, using dynamic templates to recreate the model whenever the head pose estimate is near the original view [139].

Fig. 10. Hybrid methods combine one or more approaches to estimate pose. This image is an example of an appearance template method combined with a point tracking system.

These model-tracking approaches can be improved with appearance-based particle filtering [38,51] to incorporate prior information about the dynamics of the head. In a classical particle filter, an observation of the object's state is obtained at every time-step and assumed to be noisy; the optimal track can be found by maximizing the posterior probability of the movement given the observation, using a simulated set of samples. For appearance-based particle filtering, instead of observing a noisy sample of the head's absolute position and orientation, an image of the head is obtained at every time-step. The observation noise is negligible, and the difficulty instead lies in inferring the object's state from the image pixels. This appearance-based filtering problem can be solved with a similar construction. A set of pose samples is generated with a prior dynamic model and used to render views of the model with different transformations. Each virtual image can be directly compared to the observed image, and these comparisons can be used to update the particle filtering weights. This technique allows for accurate real-time head pose tracking in a variety of environments, including near-field video [22], low-resolution video with adaptive PCA subspaces [124], near-field stereo with affine approximations [94], and daytime and nighttime driving video with a dual-linear state model [85].
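A minimal sketch of one appearance-based particle filter step, using our own illustrative construction: `render_head(pose)` is a hypothetical function that texture-maps a head model at a candidate pose and returns an image, and the Gaussian motion model and exponential weighting are simplifying assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, frame, render_head,
                         motion_std=2.0):
    """particles: (n, 3) pose samples (pitch, yaw, roll in degrees);
    weights: (n,) normalized particle weights; frame: observed image."""
    n = len(particles)
    # 1. Resample according to the current weights.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    # 2. Diffuse with the prior dynamic model (here, Gaussian motion).
    particles = particles + np.random.normal(0, motion_std, particles.shape)
    # 3. Re-weight by appearance similarity to the observed frame.
    errs = np.array([np.mean((render_head(p) - frame) ** 2)
                     for p in particles])
    weights = np.exp(-errs / errs.mean())
    weights /= weights.sum()
    # Pose estimate: the weighted mean of the particles.
    return particles, weights, weights @ particles
```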

H. Hybrid Methods

Hybrid approaches combine one or more of the aforementioned methods to estimate pose, as illustrated by example in Fig. 10. These systems can be designed to overcome the limitations of any one specific head pose category. A common embodiment is to supplement a static head pose estimation approach with a tracking system. The static system is responsible for initialization, and the tracking system is responsible for maintaining pose estimates over time. If the tracker begins to drift, the static system can reinitialize the track. This method yields the high accuracy of pure tracking approaches without the initialization and drift limitations. Many successful combinations have been presented by mixing an automatic geometric method with point tracking [40,43,46,52,87], PCA embedded template matching with optical flow [151], PCA embedded template matching with a continuous density hidden Markov model [49], PCA embedded template keyframe matching with stereo tracking by grayscale and depth constancy [81], and color and texture appearance templates with image-based particle filtering [1].

Some notable extensions to these works have also been presented. The work by Morency et al. [81] was extended with pose-based eigenspaces to synthetically generate new templates from a single initialization [82]. Ba and Odobez [1] later extended and refined their technique for multiple cameras and far-field imagery [2].

Hybrid systems can also use two or more independent techniques and fuse the estimates from each system into a single result. In this case, the system gains information from multiple cues that together increase the estimation accuracy. Specific examples include appearance template matching with geometric cues (also with a particle filter) [109] and manifold embedding estimation refined by Elastic Graph Matching [136,137].

IV. HEAD POSE ESTIMATION COMPARISONS

A. Ground Truth Datasets

To evaluate and compare head pose estimation systems, an accurate method is requisite to measure the ground truth for a set of evaluative data. Typically, ground truth is essential to train any head pose estimation approach. The following list describes the most common methods for capturing this ground truth data, in approximate order from least accurate (coarse) to most accurate (fine).

Directional Suggestion – A set of markers is placed at discrete locations around a room, and each human subject is asked to direct his head towards each of the locations while a camera captures discrete images for every location [33]. This method is a poor source of ground truth. Firstly, it assumes that each subject's head is in the exact same physical location in 3D space, such that the same head directions correspond to the same pose orientations. Secondly, and most importantly, it assumes that a person has the ability to accurately direct his head towards an object. Unfortunately, this is a subjective task that people tend to perform rather poorly. For instance, subjective errors can be seen in the widely used Pointing'04 dataset [125].

Directional Suggestion with Laser Pointer – This approach is identical to directional suggestion, but a laser pointer is affixed to the subject's head [100]. This allows the subject to pinpoint the discrete locations in the room with much higher accuracy from visual feedback, but it still assumes that the subjects' heads are located at the same point in space, such that direction is equivalent to head pose. In practice, this is difficult to ensure, since people have a tendency to shift their head position during data capture.

Manual Annotation – Images of human faces are viewed by a person who assigns a pose label based on his own perception of the pose. For a coarse set of poses in 1 DOF this may be sufficient, but it is inappropriate for fine head pose estimation.

Camera Arrays – In this approach, multiple cameras at known positions simultaneously capture images of a person's face from different angles. If care is taken to ensure that each subject's head is in the same location during capture, this method provides a highly accurate set of ground truth. The disadvantage is that it is only applicable for near-field images and cannot be applied to fine poses or real-world video.

Magnetic Sensors – Magnetic sensors, such as the Polhemus FastTrak or Ascension Flock of Birds, work by emitting and measuring a magnetic field. The sensor can be affixed to a subject's head and used to determine the position and orientation angles of the head [89]. Since these objective estimates of pose are relatively affordable, they have been the most widely used source of objective ground truth. These products offer a theoretical accuracy of less than 1°, but from our personal experience we have found them to be highly susceptible to noise and the presence of even the smallest amounts of metal in the environment. The environments in which data can be collected are severely restricted, and certain applications, such as automotive head pose estimation, are therefore impossible with these sensors.

Inertial Sensors – Inertial sensors utilize accelerometers, gyroscopes, or other motion-sensing devices, often coupled with a Kalman filter to reduce noise. The least expensive inertial sensors, such as the Mindflux InertiaCube2, do not measure position, but only orientation in 3 DOF. The advantage over magnetic sensors is that there is no metallic interference, while similar theoretical accuracy is attained. For head pose estimation, the sensor can be affixed to a subject's head during data capture [81].

Optical Motion Capture Systems – Optical motion capture systems are robust, expensive deployments that are most often used for professional cinematic capture of articulated body movement. Typically, an array of calibrated near-infrared cameras uses multi-view stereo and software algorithms to follow reflective or active markers attached to a person. For head pose estimation, these markers can be affixed to the back of a subject's head [86] and used to track the absolute position and orientation. Some examples of optical motion capture systems include the Vicon MX and the Phoenix Technologies Visualeyez.

Using these techniques, a variety of datasets have been collected that range in scope, accuracy, availability, and popularity. Table V contains a description of prominent collections.

B. Comparison of Published Results

The mean absolute angular error for pitch, roll, and yaw is a common and informative metric for evaluating a head pose estimation system. This metric can be used to evaluate a system on datasets with coarse or fine pose labels, and it provides a single statistic that gives insight into the accuracy of competing methods. Many of the papers discussed here have used this metric for evaluation; reported results on coarse and fine datasets are described in Table II and Table III, respectively. For coarse head pose estimation, it is also usual to evaluate an approach by classification error, i.e., how often an image at a specific discrete pose angle is assigned the correct pose label. Although such results demonstrate the validity of a system, they depend on the number of discrete poses (more discrete poses make for a more challenging dataset). Moreover, this representation gives little information about the character of each misclassification (was a nearby pose selected, or was the misclassification a widely incorrect estimate?). Regardless of these limitations, classification error has been frequently used to evaluate head pose estimation approaches; the reported errors, along with the number of discrete poses in each dataset, are also given in Table II.
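To make the two metrics concrete, the short sketch below is our own illustration rather than code from any surveyed system; the helper names are hypothetical. It computes the per-DOF mean absolute angular error, wrapping differences so that, for example, 179° and -179° are treated as 2° apart, and a coarse classification accuracy that can optionally count neighboring pose classes as correct, matching the convention flagged in some Table II entries.

    import numpy as np

    def mean_absolute_angular_error(pred_deg, true_deg):
        # MAE in degrees for one DOF (yaw, pitch, or roll), with
        # differences wrapped into the interval [-180, 180).
        diff = (np.asarray(pred_deg, dtype=float)
                - np.asarray(true_deg, dtype=float) + 180.0) % 360.0 - 180.0
        return float(np.abs(diff).mean())

    def coarse_classification_accuracy(pred_labels, true_labels, neighbor_tolerance=0):
        # Fraction of discrete-pose classifications that are correct.
        # With neighbor_tolerance=1, an estimate one class away from
        # the truth also counts as correct.
        pred = np.asarray(pred_labels)
        true = np.asarray(true_labels)
        return float((np.abs(pred - true) <= neighbor_tolerance).mean())

    # Toy usage with three yaw estimates; note the wraparound in the last pair.
    print(mean_absolute_angular_error([12.0, -88.0, 179.0], [10.0, -90.0, -179.0]))  # 2.0
    print(coarse_classification_accuracy([0, 3, 5], [0, 4, 8], neighbor_tolerance=1))  # ~0.67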

TABLE II
MEAN ABSOLUTE ERROR AND CLASSIFICATION ERROR OF COARSE HEAD POSE ESTIMATION

Publication | Yaw MAE | Pitch MAE | Classification Accuracy | # of Discrete Poses | Notes

Pointing'04:
J. Wu [137] | - | - | [90%]† | 93 | -
Stiefelhagen [117] | 9.5° | 9.7° | {52.0%, 66.3%}‡ | {13, 9}‡ | 1
Human Performance [34] | 11.8° | 9.4° | {40.7%, 59.0%}‡ | {13, 9}‡ | 2
Gourier (Associative Memories) [34] | 10.1° | 15.9° | {50.0%, 43.9%}‡ | {13, 9}‡ | 3
Tu (High-order SVD) [125] | 12.9° | 17.97° | {49.25%, 54.84%}‡ | {13, 9}‡ | 4
Tu (PCA) [125] | 14.11° | 14.98° | {55.20%, 57.99%}‡ | {13, 9}‡ | 4
Tu (LEA) [125] | 15.88° | 17.44° | {45.16%, 50.61%}‡ | {13, 9}‡ | 4
Voit [129] | 12.3° | 12.77° | - | 93 | -
Zhu (PCA) [66] | 26.9° | 35.1° | - | 93 | 5
Zhu (LDA) [66] | 25.8° | 26.9° | - | 93 | 5
Zhu (LPP) [66] | 24.7° | 22.6° | - | 93 | 5
Zhu (Local-PCA) [66] | 24.5° | 37.6° | - | 93 | 5
Zhu (Local-LPP) [66] | 29.2° | 40.2° | - | 93 | 5
Zhu (Local-LDA) [66] | 19.1° | 30.7° | - | 93 | 5

CHIL-CLEAR06:
Voit [128] | - | - | 39.40% | 8 | -
Voit [129] | 49.2° | - | 34.90% | 8 | -
Canton-Ferrer [12] | 73.63° | - | 19.67% | 8 | -
Zhang [146] | 33.56° | - | [87%]† | 5 | -

CMU-PIE:
Brown (Probabilistic Method) [10] | 3.6° | - | - | 9 | -
Brown (Neural Network) [10] | 4.6° | - | - | 9 | -
Brown [10] | - | - | 91% | 9 | 3
Ba [1] | - | - | 71.20% | 9 | -
Tian [120] | - | - | 89% | 9 | 6

Softopia HOIP:
Raytchev (PCA) [101] | 15.9° | 12.4° | - | 91 | 7
Raytchev (LPP) [101] | 15.9° | 12.8° | - | 91 | 7
Raytchev (Isomap) [101] | 11.5° | 11.9° | - | 91 | 8
Y. Ma (SVR) [70] | - | - | 81.50% | 453 | 4
Y. Ma (SBL) [70] | - | - | 81.70% | 453 | 4

CVRR-86:
J. Wu (PCA) [136] | - | - | 36.40% | 86 | -
J. Wu (LDA) [136] | - | - | 40.10% | 86 | -
J. Wu (KPCA) [136] | - | - | 42% | 86 | -
J. Wu (KLDA) [136] | - | - | 50.30% | 86 | -
J. Wu (KLDA+EGM) [136] | - | - | 75.40% | 86 | -

CAS-PEAL:
B. Ma [69] | - | - | 97.14% | 7 | 3

Other Datasets:
Niyogi [91] | - | - | 48% | 15 | 9
N. Krüger [55] | - | - | 92% | 5 | 10
S. Li [63] | - | - | [96.8%]† | 10 | -
Tian [120] | - | - | 84.30% | 12 | 11

Notes:
† Any neighboring poses considered correct classification as well
‡ DOF classified separately {yaw, pitch}
1. Used 80% of Pointing'04 images for training and 10% for evaluation
2. Human performance with training
3. Best results over different reported methods
4. Better results have been obtained with manual localization
5. Results for 32-dim embedding
6. Best single-camera results
7. Results for 100-dim embedding
8. Results for 8-dim embedding
9. Dataset: 15 images for each of 11 subjects
10. Dataset: 413 images of varying people
11. Dataset: far-field images with wide-baseline stereo

TABLE III
MEAN ABSOLUTE ERROR OF FINE HEAD POSE ESTIMATION

Publication | Yaw MAE | Pitch MAE | Roll MAE | Notes

CHIL-CLEAR07:
Canton-Ferrer [13] | 20.48° | - | - | -
Voit [130] | 8.5° | 12.5° | - | -
Yan [142] | 6.72° | 8.87° | 4.03° | -
Ba [2] | 15° | 10° | 5° | 1

IDIAP Head Pose:
Ba [2] | 8.8° | 9.4° | 9.89° | -
Voit [130] | 14.0° | 9.2° | - | -

CVRR-363:
Appearance Templates (Cross. Corr.) | 21.25° | 5.19° | - | -
Replica of Y. Li [64] system [86] | 13.46° | 5.72° | - | 2
Murphy-Chutorian [86] | 6.4° | 5.58° | - | -

USF HumanID:
Fu [27] | 1.7° | - | - | -

BU Face Tracking:
La Cascia [14] | 3.3° | 6.1° | 9.8° | 3
Xiao [139] | 3.8° | 3.2° | 1.4° | -

CVRR LISAP-14:
Appearance Templates (Cross. Corr.) | 15.69° | 7.72° | - | 4, 5
Replica of Y. Li [64] approach [86] | 12.8° | 4.66° | - | 2, 4, 5
Murphy-Chutorian [86] | 8.25° | 4.78° | - | 4, 5
Murphy-Chutorian [85] | 3.39° | 4.67° | 2.38° | 4

FacePix:
Balasubramanian (Isomap) [4] | 10.41° | - | - | 6
Balasubramanian (LLE) [4] | 3.27° | - | - | 6
Balasubramanian (LE) [4] | 3.93° | - | - | 6
Balasubramanian (Biased Isomap) [4] | 5.02° | - | - | 6
Balasubramanian (Biased LLE) [4] | 2.11° | - | - | 6
Balasubramanian (Biased LE) [4] | 1.44° | - | - | 6

Other Datasets:
Y. Li [64] | 9.0° | 8.6° | - | 7, 8
Y. Wu [138] | 34.6° | 14.3° | - | 7, 9
Sherrah [109] | 11.2° | 6.4° | - | 3, 10
L. Zhao [148] | [9.3°]† | [9.0°]† | - | 7
Ng [88] | 11.2° | 8.0° | - | 10, 11
Stiefelhagen [115] | 9.5° | 9.1° | - | 7, 12
Morency [81] | 3.5° | 2.4° | 2.6° | 13, 14, 18
Morency [82] | 3.2° | 1.6° | 1.3° | 13, 15, 18
K. Huang [49] | 12° | - | - | 16, 17, 18
Seemann [108] | 7.5° | 6.7° | - | 16, 18
Osadchy [95], [96] | 12.6° | - | [3.6°]‡ | 3, 19
Hu [46] | 6.5° | 5.4° | - | 20
Oka [94] | 2.2° | 2.0° | 0.7° | 15, 18
Tu [124] | - | [4°]⋆ | - | 16, 21
G. Zhao [147] | [2.44°]∘ | [2.76°]∘ | [2.86°]∘ | 22
Wang [132] | 1.67° | 2.56° | 3.54° | 13, 23

Notes:
† Only one DOF allowed to vary at a time
‡ In-plane rotation instead of roll
∘ Root Mean Square (RMS) error
⋆ Error combined over pitch, roll, and yaw
1. Best results over different reported methods
2. Results for 50-dim embedding
3. Error estimated from plots
4. Averaged results for monocular nighttime and daytime sequences
5. Dataset: uniformly sampled 5558 pose images from driving sequences
6. Results for 100-dim embedding
7. Data collected with a magnetic sensor
8. Dataset: 1283 images covering 12 people
9. Dataset: 16 video sequences covering 11 people
10. Dataset: two 200-frame video sequences
11. Results for test subjects not included in the training set
12. Dataset: 760 images covering 2 people
13. Data collected with an inertial sensor
14. Dataset: 4 video sequences
15. Dataset: 2 video sequences
16. Dataset: 1 video sequence
17. Best results over different reported methods
18. Uses stereo imagery
19. Dataset: 388 images
20. Dataset: 5 sequences, each with 20 views and limited pitch variation
21. Error averaged over results from four resolutions
22. Dataset: 3 video sequences
23. Dataset: 400 frames for each of 8 subjects

From these tables, a series of observations can be made. On the Pointing'04 dataset, nonlinear regression with MLP neural networks [117] had the lowest reported mean angular error (MAE), demonstrating the powerful representational ability of this nonlinear regression approach not only to estimate head pose, but also to learn a mapping that can tolerate the systematic errors in the training data discussed in Table V. In comparison, people do not learn invariance to this error and demonstrate notably worse performance when asked to perform the similar task of yaw estimation [34]. On the multi-camera CHIL-CLEAR07 evaluation dataset, manifold embedding with SSE provided the most accurate results. This suggests that manifold embedding approaches can provide superior representational ability, although of all the techniques presented in this paper, only the linear SSE embedding has been evaluated with inhomogeneous training data.

A series of comparisons have been made among different manifold embedding approaches. On the Pointing'04 dataset, Local-LDA produced better yaw estimation than PCA, LDA, LPP, Local-PCA, and Local-LPP [66]; for pitch estimation, however, standard LDA provided better results than these other techniques. That the localized versions of these embeddings do not uniformly improve pose estimation may stem from the difficulty of choosing the correct local projection for each new sample. On the CVRR-86 dataset, KLDA was shown to outperform PCA, LDA, and KPCA, clearly demonstrating that the kernelized versions provide a better embedding for pose estimation [136]. On the Softopia HOIP dataset, it has been shown that a projection into 8 dimensions with Isomap is enough to obtain superior results over PCA and LPP subspaces with 100 dimensions [101]. This should motivate continued investigation of nonlinear embedding methods, as the increase in representational ability can lead to large improvements in pose estimation. A minimal sketch of this embed-then-map pipeline follows.
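The following sketch makes the embedding pipeline concrete. It is our illustration under stated assumptions: scikit-learn supplies the projection and regressor, and random arrays stand in for real training images. A linear PCA is used purely for brevity; the nonlinear embeddings compared above (Isomap, LLE, KLDA, and so on) would replace the projection step.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    X_train = rng.random((500, 32 * 32))       # stand-in for flattened, cropped head images
    yaw_train = rng.uniform(-90.0, 90.0, 500)  # stand-in for ground-truth yaw in degrees

    # Step 1: learn a low-dimensional embedding of facial appearance.
    embedding = PCA(n_components=8).fit(X_train)
    Z_train = embedding.transform(X_train)

    # Step 2: learn a mapping from embedded coordinates to pose.
    regressor = KNeighborsRegressor(n_neighbors=5).fit(Z_train, yaw_train)

    # A new image is estimated by projecting it and applying the mapping.
    x_new = rng.random((1, 32 * 32))
    print(float(regressor.predict(embedding.transform(x_new))[0]))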

The methods that track the head in video sequences using flexible models and tracking algorithms [14, 81, 82, 94, 124, 140, 147] report significantly lower error than systems that estimate pose in individual images. Although these systems use different datasets that cannot be directly compared, in our experience, visual tracking approaches provide substantially less error than systems that estimate head pose from individual video frames and temporally filter the results, as the sketch below illustrates.
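By contrast, the frame-wise-then-filter strategy can be sketched in a few lines; this is purely illustrative, and the gain and input values are arbitrary. Each frame's independent estimate is smoothed with a constant-gain first-order filter, which suppresses jitter but cannot exploit the inter-frame appearance constraints that model-based tracking uses.

    def temporally_filter(frame_estimates, gain=0.3):
        # Exponentially smooth a sequence of per-frame pose estimates (degrees).
        filtered, state = [], None
        for z in frame_estimates:
            state = z if state is None else state + gain * (z - state)
            filtered.append(round(state, 2))
        return filtered

    # Toy usage: noisy yaw estimates from a hypothetical per-frame estimator.
    print(temporally_filter([10.0, 14.0, 9.0, 12.0, 30.0, 28.0]))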

C. Comparison of Real-World Applicability

For a head pose estimation system to be of general use, it should be invariant to identity, have a sufficient range of allowed motion, require no manual intervention, and be easily deployed on conventional hardware. Although some systems address all of these concerns, they often assume one or more conditions that simplify the pose estimation problem at the expense of general applicability. We have identified the following set of assumptions that have been commonly used in the literature:

A. Continuous video assumption. There is a continuous video stream with only small pose changes between frames, so that head pose can be estimated by incremental relative shifts between each frame.

B. Initialization assumption. The head pose of the subject is known when the pose estimation process begins. In practice, the subject is typically told to assume a frontal pose until the system has begun, or the system waits until a frontal face detector discovers a frontal pose.

C. Anti-drift assumption. Head pose will only be calculated for a short time, during which there will be no significant abnormalities in the visual information. If this assumption were violated, the pose estimation system would be subject to drift, and continued estimates of pose would have large errors.

D. Stereo vision assumption. The subject is visible to two or more cameras at a small enough distance to discriminate depth information across the face. Alternatively, depth information can be obtained by other specialized means, such as a time-of-flight sensor [150].

E. Frontal view assumption. Pose variation is restricted to a range that contains all of the facial features seen from a frontal view.

F. Single degree of freedom assumption. The head is only allowed to rotate around one axis.

G. Feature location assumption. The locations of facial features are provided to the system. This typically implies that facial features are manually labeled in the testing data.

H. Familiar-identity assumption. The system needs to estimate pose only for the person or set of persons on which it has been trained.

I. Synthetic data assumption. The system only operates on synthetic images that do not contain the appearance variation found in real-world imagery.

These assumptions limit the applicability of any system, even one shown to be quite successful in a constrained environment. Regardless of estimation accuracy, it is important to identify the systems that are applicable for head pose estimation in real-world environments such as the automobile and intelligent spaces. Such systems should provide an identity-invariant estimate of head pose in at least two degrees of freedom without any manual intervention. For systems with discrete pose estimates, the number of fixed poses must be large enough to sufficiently sample the continuous pose space. These systems are indicated in bold type in Table IV, which contains a comprehensive list of all the papers covered in this survey.

V. SUMMARY AND CONCLUDING REMARKS

Head pose estimation is a natural step toward bridging the information gap between people and computers. This fundamental human ability provides rich information about the intent, motivation, and attention of people in the world. By simulating this skill, systems can be created that better interact with people. The majority of head pose estimation approaches assume the perspective of a rigid model, which has inherent limitations. The difficulty in creating head pose estimation systems stems from the immense variability in individual appearance, coupled with differences in lighting, background, and camera geometry.

Viewing the progress in head pose estimation chronologically, we have noticed some encouraging trends in the field. In recent years, people have become much more aware of the need for comparison metrics that emphasize pose variation rather than image variation. This trend has manifested itself in the demise of appearance templates and the explosion of nonlinear manifold embedding methods. Coarse head pose estimation is also disappearing, as most recent work focuses on fine estimation across multiple degrees of freedom. As a result, new datasets have been introduced in the past few years that allow for more accurate evaluation in challenging environments. We feel there is still much room for continued improvement. Model-based tracking algorithms have shown great promise, but they will require more thorough evaluations on standard datasets to understand their potential. Geometric methods have not reached their full potential; modern approaches can automatically and reliably detect facial feature locations, and these approaches should continue to evolve.

TABLE IV
HEAD POSE ESTIMATION PUBLICATIONS

Year | Publication | Approach | Assumptions | DOF | Domain | View | Automatic
1994 | Gee [30] | Geometric | G | 2 | continuous | near | no
1994 | Beymer [7] | Appearance Templ. | E | 2 | 15 poses | near | yes
1995 | Lanitis [60, 61] | Manifold Embed. | E | 2 | continuous | near | no
1995 | Schiele [106] | Nonlinear Regress. | F | 1 | 15 poses | near | yes
1996 | Maurer [73] | Tracking | A, B, C | 3 | continuous | near | yes
1996 | Horprasert [42] | Geometric | G | 3 | continuous | near | no
1996 | Niyogi [91] | Appearance Templ. | - | 2 | 15 poses | near | no
1996 | Gee [31] | Tracking | A, B, C, G | 3 | continuous | near | no
1997 | Horprasert [43] | Hybrid | A, B, C, G | 3 | continuous | near | no
1997 | Jebara [52] | Hybrid | A, C | 3 | continuous | near | yes
1997 | N. Krüger [55] | Flexible Models | E, F | 1 | 5 poses | near | yes
1998 | Rowley [103] | Detector Arrays | F | 1 | continuous | far | yes
1998 | Rae [100] | Nonlinear Regress. | - | 2 | 35 poses | near | yes
1998 | Schödl [107] | Tracking | A, B | 3 | continuous | near | yes
1998 | Bruske [11] | Nonlinear Regress. | H, I | 2 | continuous | near | no
1998 | McKenna [75] | Manifold Embed. | H | 2 | 49 poses | near | yes
1998 | Pappu [98] | Tracking | - | 2 | 187 poses | near | yes
1998 | Toyama [121] | Tracking | - | 3 | continuous | near | yes
1998 | J. Huang [47] | Detector Arrays | E, F | 1 | 3 poses | near | yes
1999 | Ng [88, 89] | Appearance Templ. | - | 2 | continuous | near | yes
1999 | Sherrah [110, 111] | Manifold Embed. | - | 1† | 19 or 7 poses | near | no
2000 | DeCarlo [21] | Tracking | A, B, G | 3 | continuous | near | yes
2000 | Y. Li [64, 65] | Nonlinear Regress. | - | 2 | continuous | near | yes
2000 | Malciu [71] | Tracking | A, B, C | 3 | continuous | near | no
2000 | Cootes [18] | Flexible Models | F | 1 | continuous | near | no
2000 | Y. Wu [138] | Tracking | - | 3 | 1024 poses | far | yes
2000 | La Cascia [14] | Tracking | A, B, C | 3 | continuous | near | yes
2000 | Nikolaidis [90] | Geometric | E, F | 1 | continuous | near | yes
2001 | Cordea [20] | Geometric | F | 1 | continuous | near | yes
2001 | Yao [145] | Tracking | A, B, C, G | 2 | continuous | near | yes
2001 | S. Li [63] | Detector Arrays | F | 1 | 10 poses | far | yes
2001 | Sherrah [109] | Hybrid | A, B | 2 | continuous | near | yes
2002 | V. Krüger [56] | Manifold Embed. | E, H, I | 2 | continuous | near | no
2002 | L. Zhao [148] | Nonlinear Regress. | F | 1† | 19 poses | near | no
2002 | Morency [80] | Tracking | A, B, C, D | 3 | continuous | near | yes
2002 | Brown [10] | Nonlinear Regress. | F | 1 | 9 poses | near | no
2002 | Yang [144] | Tracking | A, B, C, D, G | 3 | continuous | near | no
2002 | Srinivasan [114] | Manifold Embed. | F | 1 | 7 poses | near | no
2002 | Stiefelhagen [115, 117] | Nonlinear Regress. | - | 2 | continuous | both | yes
2003 | Morency [81, 82] | Hybrid | A, B, D | 3 | continuous | near | yes
2003 | Zhu [150] | Tracking | A, B, C, D | 3 | continuous | near | yes
2003 | Xiao [139] | Tracking | A, B, C | 3 | continuous | near | yes
2003 | Chen [15] | Detector Arrays | E, F, H, I | 1 | continuous | near | yes
2003 | Tian [120] | Nonlinear Regress. | D, F | 1 | up to 12 poses | far | yes
2003 | Jones [53] | Detector Arrays | F | 1 | 12 poses | far | yes
2004 | Osadchy [95, 96] | Nonlinear Regress. | - | 2 | continuous | far | yes
2004 | Raytchev [101] | Manifold Embed. | - | 2 | continuous | near | no
2004 | K. Huang [49] | Hybrid | A, F | 1 | 38 poses | near | yes
2004 | Seemann [108] | Nonlinear Regress. | D | 2 | continuous | far | yes
2004 | Ba [1, 2] | Hybrid | - | 3 | continuous | far | yes
2004 | Gourier [33, 34] | Nonlinear Regress. | - | 2 | 13 yaws, 9 pitches | near | yes
2004 | Xiao [140] | Flexible Models | E | 3 | continuous | near | yes
2004 | Y. Hu [46] | Hybrid | A, C | 2 | continuous | near | yes
2004 | Dornaika [22] | Tracking | A, B | 3 | continuous | near | yes
2004 | Baker [3] | Flexible Models | E | 3 | continuous | near | yes
2004 | Moon [78] | Nonlinear Regress. | G | 2 | continuous | near | no
2004 | Zhu [151] | Hybrid | A | 3 | continuous | near | yes
2004 | C. Hu [44] | Flexible Models | D, E | 3 | continuous | near | yes
2005 | Oka [94] | Tracking | A, B, C, D | 3 | continuous | near | yes
2005 | Xiong [141] | Geometric | A, D | 2 | continuous | near | yes
2005 | N. Hu [45] | Manifold Embed. | - | 1 | continuous | near | no
2005 | J. Wu [136, 137] | Hybrid | - | 2 | 86 poses | near | yes
2006 | Fu [27] | Manifold Embed. | F | 1 | continuous | near | no
2006 | Voit [128–130] | Nonlinear Regress. | - | 2 | continuous | both | yes
2006 | B. Ma [69] | Detector Arrays | - | 1 | 7 poses | near | yes
2006 | Y. Ma [70] | Nonlinear Regress. | - | 2 | continuous | near | yes
2006 | Ohayon [93] | Tracking | A, G | 3 | continuous | near | no
2006 | Gui [35] | Flexible Models | A, E | 3 | continuous | near | yes
2006 | Tu [124] | Tracking | A, B, C | 3 | continuous | far | yes
2006 | Canton-Ferrer [12, 13] | Geometric | D, F | 1 | continuous | far | yes
2006 | Tu [125] | Tracking | - | 2 | 13 yaws, 9 pitches | near | no
2006 | Zhang [146] | Detector Arrays | F | 1 | 5 poses | far | yes
2007 | Murphy-Chutorian [86] | Nonlinear Regress. | - | 2 | continuous | near | yes
2007 | Yan [142] | Manifold Embed. | - | 2 | continuous | far | no
2007 | Balasubramanian [4] | Manifold Embed. | F | 1 | continuous | near | no
2007 | G. Zhao [147] | Tracking | A, D | 3 | continuous | far | yes
2007 | Zhu [66] | Manifold Embed. | - | 2 | 93 poses | near | no
2007 | Wang [132] | Geometric | E, G | 3 | continuous | near | yes
2008 | Murphy-Chutorian [85] | Hybrid | A | 3 | continuous | near | yes

(Systems in bold face provide fully automatic, fine, identity-invariant head pose estimation in at least 2 DOF.)
† Only one degree of freedom allowed to vary at a time.

Another important trend is an increase in the number of head pose publications over the last few years. This might be a sign that more people have become interested in the field, suggesting a more rapid development cycle for novel approaches.

Although head pose estimation will continue to be an exciting field with much room for improvement, people desire off-the-shelf, universal head pose estimation programs that can be put to use in any new application. To satisfy the majority of applications, we propose the following design criteria as a guide for future development.

• Accurate: The system should provide a reasonable estimate of pose, with a mean absolute error of 5° or less.

• Monocular: The system should be able to estimate head pose from a single camera. Although accuracy might be improved by stereo or multi-view imagery, this should not be a requirement for the system to operate.

• Autonomous: There should be no expectation of manual initialization, detection, or localization, precluding the use of pure tracking approaches that measure the relative head pose with respect to some initial configuration and of shape/geometric approaches that assume facial feature locations are already known.

• Multi-Person: The system should be able to estimate the pose of multiple people in one image.

• Identity & Lighting Invariant: The system must work across all identities with the dynamic lighting found in many environments.

• Resolution Independent: The system should apply to near-field and far-field images with both high and low resolution.

• Full Range of Head Motion: The system should provide a smooth, continuous estimate of pitch, yaw, and roll, even when the face is pointed away from the camera.

• Real-time: The system should estimate a continuous range of head orientation with fast (30 fps or faster) operation.

Although no single system has met all of these criteria, it does seem that a solution is near at hand. It is our opinion that by using today's methods in a suitable hybrid approach (perhaps a combination of manifold embedding, regression, or geometric methods combined with a model-based tracking system), one could meet these criteria.

For future work, we would expect to see an evaluation of nonlinear manifold embedding techniques on challenging far-field imagery, demonstrating that these approaches provide continued improvement in the presence of clutter or imperfect localization. We would like to see extensions of geometric and tracking approaches that adapt the models to each subject's personal facial geometry for more accurate model fitting. For flexible models, an important improvement will be the ability to selectively ignore parts of the model that are self-occluded, overcoming a fundamental limitation in an otherwise very promising category. In closing, we describe a few application domains in which head pose estimation has had, and will continue to have, a profound impact.

Head pose estimation systems will play a key role in the creation of intelligent environments. Already there has been a huge interest in smart rooms that monitor the occupants and use head pose to measure their activities and visual focus of attention [5, 50, 74, 92, 99, 116, 122–124, 128, 131]. Head pose endows these systems with the ability to determine who is speaking to whom and to provide the information needed to analyze the non-verbal gestures of the meeting participants. These types of high-level semantic cues can be transcribed along with the conversations, intentions, and interpersonal interactions of the meeting participants to provide easily searchable indexes for future reference.

Head pose estimation could enable breakthrough interfaces for computing. Some existing examples include systems that allow a user to control a computer mouse with his head pose movements [28], respond to pop-up dialog boxes with head nods and shakes [84], or use head gestures to interact with embodied agents [83]. It seems only a matter of time until similar estimation algorithms are integrated into entertainment devices with mass appeal.

Head pose estimation will have a profound impact on the future of automotive safety. An automobile driver is fundamentally limited by the field of view that one can observe at any one time. When one fails to notice a change to his environment, there is an increased potential for a life-threatening collision that could be mitigated if the driver were alerted to an unseen hazard. As evidence to this effect, a recent comprehensive survey on automotive collisions demonstrated that a driver was 31% less likely to cause an injury-related collision when there was one or more passengers [105]. Consequently, there is great interest in driver assistance systems that act as virtual passengers, using the driver's head pose as a cue to visual focus of attention and mental state [3, 16, 36, 48, 85, 86, 98, 135, 151]. Although the rapidly shifting lighting conditions in a vehicle make for one of the most difficult visual environments, the most recent of these systems demonstrated a fully automatic, real-time hybrid approach that can estimate the pose and track the driver's head in daytime or nighttime driving [85].

We believe that ubiquitous head pose estimation is just beyond the grasp of our current systems, and we call on researchers to improve and extend the techniques described in this paper to allow life-changing advances in systems for human interaction and safety.

ACKNOWLEDGMENT

Support for this research was provided by the University of California Discovery Program and the Volkswagen Electronics Research Laboratory. We sincerely thank our reviewers for their constructive and insightful suggestions that have helped improve the quality of this paper. In addition, we thank our colleagues in the Computer Vision and Robotics Research Laboratory for their valuable assistance.

REFERENCES

[1] S. Ba and J.-M. Odobez, "A probabilistic framework for joint head tracking and pose estimation," in Proc. Int'l. Conf. Pattern Recognition, 2004, pp. 264–267.
[2] ——, "From camera head pose to 3d global room head pose using multiple camera views," in Proc. Int'l. Workshop Classification of Events Activities and Relationships, 2007.
[3] S. Baker, I. Matthews, J. Xiao, R. Gross, T. Kanade, and T. Ishikawa, "Real-time non-rigid driver head tracking for driver mental state estimation," in Proc. 11th World Congress Intelligent Transportation Systems, 2004.
[4] V. Balasubramanian, J. Ye, and S. Panchanathan, "Biased manifold embedding: A framework for person-independent head pose estimation," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2007.
[5] S. Basu, T. Choudhury, B. Clarkson, and A. Pentland, "Towards measuring human interactions in conversational settings," in Proc. IEEE Int'l. Workshop Cues in Communication, 2001.
[6] M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Computation, vol. 15, no. 6, pp. 1373–1396, 2003.
[7] D. Beymer, "Face recognition under varying pose," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1994, pp. 756–761.
[8] C. Bishop, Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[9] K. Bowyer and S. Sarkar, "USF HumanID 3D face dataset," 2001.
[10] L. Brown and Y.-L. Tian, "Comparative study of coarse head pose estimation," in Proc. Workshop Motion and Video Computing, 2002, pp. 125–130.
[11] J. Bruske, E. Abraham-Mumm, J. Pauli, and G. Sommer, "Head-pose estimation from facial images with subspace neural networks," in Proc. Int'l. Neural Network and Brain Conference, 1998, pp. 528–531.
[12] C. Canton-Ferrer, J. Casas, and M. Pardàs, "Head pose detection based on fusion of multiple viewpoint information," in Multimodal Technologies for Perception of Humans, Int'l. Workshop Classification of Events Activities and Relationships, CLEAR 2006, ser. Lecture Notes in Computer Science, R. Stiefelhagen and J. Garofolo, Eds., vol. 4122, 2007, pp. 305–310.
[13] ——, "Head orientation estimation using particle filtering in multiview scenarios," in Proc. Int'l. Workshop Classification of Events Activities and Relationships, 2007.
[14] M. La Cascia, S. Sclaroff, and V. Athitsos, "Fast, reliable head tracking under varying illumination: An approach based on registration of texture-mapped 3d models," IEEE Trans. Pattern Anal. Machine Intell., vol. 22, no. 4, pp. 322–336, 2000.
[15] I. Chen, L. Zhang, Y. Hu, M. Li, and H. Zhang, "Head pose estimation using Fisher Manifold learning," in Proc. IEEE Int'l. Workshop Analysis and Modeling of Faces and Gestures, 2003, pp. 203–207.
[16] S. Cheng, S. Park, and M. Trivedi, "Multi-spectral and multi-perspective video arrays for driver body tracking and activity analysis," Computer Vision and Image Understanding, vol. 106, no. 2-3, 2006.
[17] T. Cootes, C. Taylor, D. Cooper, and J. Graham, "Active shape models – their training and application," Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38–59, 1995.
[18] T. Cootes, K. Walker, and C. Taylor, "View-based active appearance models," in Proc. Int'l. Conf. Automatic Face and Gesture Recognition, 2000, pp. 227–232.
[19] T. Cootes, G. Edwards, and C. Taylor, "Active appearance models," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681–685, 2001.
[20] M. Cordea, E. Petriu, N. Georganas, D. Petriu, and T. Whalen, "Real-time 2(1/2)-D head pose recovery for model-based video-coding," IEEE Trans. Instrum. Meas., vol. 50, no. 4, pp. 1007–1013, 2001.
[21] D. DeCarlo and D. Metaxas, "Optical flow constraints on deformable models with applications to face tracking," Int'l. J. Computer Vision, vol. 38, no. 2, pp. 231–238, 2000.
[22] F. Dornaika and F. Davoine, "Head and facial animation tracking using appearance-adaptive models and particle filters," in Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshop, 2004, pp. 153–162.
[23] R. Duda, P. Hart, and D. Stork, Pattern Classification, 2nd ed. John Wiley & Sons, Inc., 2001.
[24] G. J. Edwards, A. Lanitis, C. J. Taylor, and T. F. Cootes, "Statistical models of face images – improving specificity," Image Vision Computing, vol. 16, no. 3, pp. 203–211, 1998.
[25] B. Fasel and J. Luettin, "Automatic facial expression analysis: A survey," Pattern Recognition, vol. 36, no. 1, pp. 259–275, 2003.
[26] V. F. Ferrario, C. Sforza, G. Serrao, G. Grassi, and E. Mossi, "Active range of motion of the head and cervical spine: a three-dimensional investigation in healthy young adults," J. Orthopaedic Research, vol. 20, no. 1, pp. 122–129, 2002.
[27] Y. Fu and T. Huang, "Graph embedded analysis for head pose estimation," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 2006, pp. 3–8.
[28] Y. Fu and T. S. Huang, "hMouse: Head tracking driven virtual computer mouse," in Proc. IEEE Workshop Applications of Computer Vision, 2007, pp. 30–35.
[29] W. Gao, B. Cao, S. Shan, X. Zhang, and D. Zhou, "The CAS-PEAL large-scale Chinese face database and baseline evaluations," Joint Research & Development Laboratory, Tech. Rep. JDL-TR-04-FR-001, 2004.
[30] A. Gee and R. Cipolla, "Determining the gaze of faces in images," Image and Vision Computing, vol. 12, no. 10, pp. 639–647, 1994.
[31] ——, "Fast visual tracking by temporal consensus," Image and Vision Computing, vol. 14, no. 2, pp. 105–114, 1996.
[32] R. Gonzalez and R. Woods, Digital Image Processing, 2nd ed. Prentice-Hall, Inc., 2002, pp. 582–584.
[33] N. Gourier, D. Hall, and J. Crowley, "Estimating face orientation from robust detection of salient facial structures," in Proc. Pointing 2004 Workshop: Visual Observation of Deictic Gestures, 2004, pp. 17–25.
[34] N. Gourier, J. Maisonnasse, D. Hall, and J. Crowley, "Head pose estimation on low resolution images," in Multimodal Technologies for Perception of Humans, Int'l. Workshop Classification of Events Activities and Relationships, CLEAR 2006, ser. Lecture Notes in Computer Science, R. Stiefelhagen and J. Garofolo, Eds., vol. 4122, 2007, pp. 270–280.
[35] Z. Gui and C. Zhang, "3D head pose estimation using non-rigid structure-from-motion and point correspondence," in Proc. IEEE Region 10 Conf., 2006, pp. 1–3.
[36] Z. Guo, H. Liu, Q. Wang, and J. Yang, "A fast algorithm of face detection and head pose estimation for driver assistant system," in Proc. Int'l. Conf. Signal Processing, vol. 3, 2006.
[37] M. Harville, A. Rahimi, T. Darrell, G. Gordon, and J. Woodfill, "3D pose tracking with linear depth and brightness constraints," in Proc. IEEE Int'l. Conf. Computer Vision, 1999, pp. 206–213.
[38] S. Haykin, Adaptive Filter Theory, 4th ed. Prentice-Hall, Inc., 2002.
[39] X. He, S. Yan, Y. Hu, and H. J. Zhang, "Learning a locality preserving subspace for visual recognition," in Proc. IEEE Int'l. Conf. Computer Vision, 2003, pp. 385–392.
[40] J. Heinzmann and A. Zelinsky, "3-D facial pose and gaze point estimation using a robust real-time tracking paradigm," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 1998, pp. 142–147.
[41] E. Hjelmås and B. Low, "Face detection: A survey," Computer Vision and Image Understanding, vol. 83, no. 3, pp. 236–274, 2001.
[42] T. Horprasert, Y. Yacoob, and L. Davis, "Computing 3-d head orientation from a monocular image sequence," in Proc. Int'l. Conf. Automatic Face and Gesture Recognition, 1996, pp. 242–247.
[43] ——, "An anthropometric shape model for estimating head orientation," in Proc. Int'l. Workshop Visual Form, 1997, pp. 247–256.
[44] C. Hu, J. Xiao, I. Matthews, S. Baker, J. Cohn, and T. Kanade, "Fitting a single active appearance model simultaneously to multiple images," in Proc. British Machine Vision Conference, 2004, pp. 437–446.
[45] N. Hu, W. Huang, and S. Ranganath, "Head pose estimation by non-linear embedding and mapping," in Proc. IEEE Int'l. Conf. Image Processing, vol. 2, 2005, pp. 342–345.
[46] Y. Hu, L. Chen, Y. Zhou, and H. Zhang, "Estimating face pose by facial asymmetry and geometry," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 2004, pp. 651–656.
[47] J. Huang, X. Shao, and H. Wechsler, "Face pose discrimination using support vector machines (SVM)," in Proc. Int'l. Conf. Pattern Recognition, 1998, pp. 154–156.
[48] K. Huang and M. Trivedi, "Video arrays for real-time tracking of person, head, and face in an intelligent room," Machine Vision and Applications, vol. 14, no. 2, pp. 103–111, 2003.
[49] ——, "Robust real-time detection, tracking, and pose estimation of faces in video streams," in Proc. Int'l. Conf. Pattern Recognition, 2004, pp. 965–968.
[50] K. Huang, M. Trivedi, and T. Gandhi, "Driver's view and vehicle surround estimation using omnidirectional video stream," in Proc. IEEE Intelligent Vehicles Symposium, 2003, pp. 444–449.
[51] M. Isard and A. Blake, "CONDENSATION – conditional density propagation for visual tracking," Int'l. J. Computer Vision, vol. 29, no. 1, pp. 5–28, 1998.
[52] T. Jebara and A. Pentland, "Parametrized structure from motion for 3d adaptive feedback tracking of faces," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1997, pp. 144–150.
[53] M. Jones and P. Viola, "Fast multi-view face detection," Mitsubishi Electric Research Laboratories, Tech. Rep. 096, 2003.
[54] H. Kobayasi and S. Kohshima, "Unique morphology of the human eye," Nature, vol. 387, no. 6635, pp. 767–768, 1997.
[55] N. Krüger, M. Pötzsch, and C. von der Malsburg, "Determination of face position and pose with a learned representation based on labeled graphs," Image and Vision Computing, vol. 15, no. 8, pp. 665–673, 1997.
[56] V. Krüger and G. Sommer, "Gabor wavelet networks for efficient head pose estimation," Image and Vision Computing, vol. 20, no. 9-10, pp. 665–672, 2002.
[57] M. Lades, J. C. Vorbrüggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Würtz, and W. Konen, "Distortion invariant object recognition in the dynamic link architecture," IEEE Trans. Comput., vol. 42, pp. 300–311, 1993.
[58] S. Langton and V. Bruce, "You must see the point: Automatic processing of cues to the direction of social attention," J. Experimental Psychology: Human Perception and Performance, vol. 26, no. 2, pp. 747–757, 2000.
[59] S. Langton, H. Honeyman, and E. Tessler, "The influence of head contour and nose angle on the perception of eye-gaze direction," Perception and Psychophysics, vol. 66, no. 5, pp. 752–771, 2004.
[60] A. Lanitis, C. Taylor, and T. Cootes, "Automatic interpretation of human faces and hand gestures using flexible models," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 1995, pp. 98–103.
[61] ——, "Automatic interpretation and coding of face images using flexible models," IEEE Trans. Pattern Anal. Machine Intell., vol. 19, no. 7, pp. 743–756, 1997.
[62] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[63] S. Li, Q. Fu, L. Gu, B. Scholkopf, Y. Cheng, and H. Zhang, "Kernel machine based learning for multi-view face detection and pose estimation," in Proc. IEEE Int'l. Conf. Computer Vision, 2001, pp. 674–679.
[64] Y. Li, S. Gong, and H. Liddell, "Support vector regression and classification based multi-view face detection and recognition," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 2000, pp. 300–305.
[65] Y. Li, S. Gong, J. Sherrah, and H. Liddell, "Support vector machine based multi-view face detection and recognition," Image and Vision Computing, vol. 22, no. 5-6, 2004.
[66] Z. Li, Y. Fu, J. Yuan, T. Huang, and Y. Wu, "Query driven localized linear discriminant models for head pose estimation," in Proc. IEEE Int'l. Conf. Multimedia and Expo, 2007, pp. 1810–1813.
[67] D. Little, S. Krishna, J. Black, and S. Panchanathan, "A methodology for evaluating robustness of face recognition algorithms with respect to variations in pose angle and illumination angle," in Proc. IEEE Int'l. Conf. Acoustics, Speech, and Signal Processing, vol. 2, 2005, pp. 89–92.
[68] D. Lowe, "Distinctive image features from scale-invariant keypoints," Int'l. J. Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[69] B. Ma, W. Zhang, S. Shan, X. Chen, and W. Gao, "Robust head pose estimation using LGBP," in Proc. IEEE Int'l. Conf. Pattern Recognition, 2006, pp. 512–515.
[70] Y. Ma, Y. Konishi, K. Kinoshita, S. Lao, and M. Kawade, "Sparse Bayesian regression for head pose estimation," in Proc. IEEE Int'l. Conf. Pattern Recognition, 2006, pp. 507–510.
[71] M. Malciu and F. Preteux, "A robust model-based approach for 3D head tracking in video sequences," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 2000, pp. 169–174.
[72] I. Matthews and S. Baker, "Active appearance models revisited," Int'l. J. Computer Vision, vol. 60, no. 2, pp. 135–164, 2004.
[73] T. Maurer and C. von der Malsburg, "Tracking and learning graphs and pose on image sequences of faces," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 1996, pp. 176–181.
[74] I. McCowan, D. Gatica-Perez, S. Bengio, G. Lathoud, M. Barnard, and D. Zhang, "Automatic analysis of multimodal group actions in meetings," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 305–317, 2005.
[75] S. McKenna and S. Gong, "Real-time face pose estimation," Real-Time Imaging, vol. 4, no. 5, pp. 333–347, 1998.
[76] T. Moeslund, A. Hilton, and V. Krüger, "A survey of computer vision-based human motion capture," Computer Vision and Image Understanding, vol. 81, no. 3, pp. 231–268, 2001.
[77] ——, "A survey of advances in vision-based human motion capture and analysis," Computer Vision and Image Understanding, vol. 104, no. 2, pp. 90–126, 2006.
[78] H. Moon and M. Miller, "Estimating facial pose from a sparse representation," in Proc. Int'l. Conf. Image Processing, 2004, pp. 75–78.
[79] M. Morales, P. Mundy, C. Delgado, M. Yale, R. Neal, and H. Schwartz, "Gaze following, temperament, and language development in 6-month-olds: A replication and extension," Infant Behavior and Development, vol. 23, no. 2, pp. 231–236, 2000.
[80] L.-P. Morency, A. Rahimi, N. Checka, and T. Darrell, "Fast stereo-based head tracking for interactive environments," in Proc. Int'l. Conf. Automatic Face and Gesture Recognition, 2002, pp. 375–380.
[81] L.-P. Morency, A. Rahimi, and T. Darrell, "Adaptive view-based appearance models," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2003, pp. 803–810.
[82] L.-P. Morency, P. Sundberg, and T. Darrell, "Pose estimation using 3d view-based eigenspaces," in Proc. IEEE Int'l. Workshop Analysis and Modeling of Faces and Gestures, 2003, pp. 45–52.
[83] L.-P. Morency, C. Christoudias, and T. Darrell, "Recognizing gaze aversion gestures in embodied conversational discourse," in Proc. Int'l. Conf. Multimodal Interfaces, 2006, pp. 287–294.
[84] L.-P. Morency, C. Sidner, C. Lee, and T. Darrell, "Head gestures for perceptual interfaces: The role of context in improving recognition," Artificial Intelligence, vol. 171, no. 8-9, pp. 568–585, 2007.
[85] E. Murphy-Chutorian and M. Trivedi, "Hybrid head orientation and position estimation (HyHOPE): A system and evaluation for driver support," in Proc. IEEE Intelligent Vehicles Symposium, 2008.
[86] ——, "Head pose estimation for driver assistance systems: A robust algorithm and experimental evaluation," in Proc. IEEE Conf. Intelligent Transportation Systems, 2007, pp. 709–714.
[87] R. Newman, Y. Matsumoto, S. Rougeaux, and A. Zelinsky, "Real-time stereo tracking for head pose and gaze estimation," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 2000, pp. 122–128.
[88] J. Ng and S. Gong, "Composite support vector machines for detection of faces across views and pose estimation," Image and Vision Computing, vol. 20, no. 5-6, pp. 359–368, 2002.
[89] ——, "Multi-view face detection and pose estimation using a composite support vector machine across the view sphere," in Proc. IEEE Int'l. Workshop Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, 1999, pp. 14–21.
[90] A. Nikolaidis and I. Pitas, "Facial feature extraction and pose determination," Pattern Recognition, vol. 33, no. 11, pp. 1783–1791, 2000.
[91] S. Niyogi and W. Freeman, "Example-based head tracking," in Proc. Int'l. Conf. Automatic Face and Gesture Recognition, 1996, pp. 374–378.
[92] J.-M. Odobez and S. Ba, "A cognitive and unsupervised MAP adaptation approach to the recognition of the focus of attention from head pose," in Proc. IEEE Int'l. Conf. Multimedia and Expo, 2007, pp. 1379–1382.
[93] S. Ohayon and E. Rivlin, "Robust 3D head tracking using camera pose estimation," in Proc. IEEE Int'l. Conf. Pattern Recognition, 2006, pp. 1063–1066.
[94] K. Oka, Y. Sato, Y. Nakanishi, and H. Koike, "Head pose estimation system based on particle filtering with adaptive diffusion control," in Proc. IAPR Conf. Machine Vision Applications, 2005, pp. 586–589.
[95] R. Osadchy, M. Miller, and Y. LeCun, "Synergistic face detection and pose estimation with energy-based model," in Proc. Advances in Neural Information Processing Systems, 2004, pp. 1017–1024.
[96] ——, "Synergistic face detection and pose estimation with energy-based models," J. Machine Learning Research, vol. 8, pp. 1197–1215, 2007.
[97] E. Osuna, R. Freund, and F. Girosi, "Training support vector machines: An application to face detection," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1997, pp. 130–136.
[98] R. Pappu and P. Beardsley, "A qualitative approach to classifying gaze direction," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 1998, pp. 160–165.
[99] A. Pentland and T. Choudhury, "Face recognition for smart environments," Computer, vol. 33, no. 2, pp. 50–55, 2000.
[100] R. Rae and H. Ritter, "Recognition of human head orientation based on artificial neural networks," IEEE Trans. Neural Networks, vol. 9, no. 2, pp. 257–265, 1998.
[101] B. Raytchev, I. Yoda, and K. Sakaue, "Head pose estimation by nonlinear manifold learning," in Proc. Int'l. Conf. Pattern Recognition, 2004, pp. 462–466.
[102] S. Roweis and L. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323–2326, 2000.
[103] H. Rowley, S. Baluja, and T. Kanade, "Rotation invariant neural network-based face detection," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1998, pp. 38–44.
[104] ——, "Neural network-based face detection," IEEE Trans. Pattern Anal. Machine Intell., vol. 20, no. 1, pp. 23–38, 1998.
[105] T. Rueda-Domingo, P. Lardelli-Claret, J. L. del Castillo, J. Jiménez-Moleón, M. García-Martín, and A. Bueno-Cavanillas, "The influence of passengers on the risk of the driver causing a car collision in Spain," Accident Analysis & Prevention, vol. 36, no. 3, pp. 481–489, 2004.
[106] B. Schiele and A. Waibel, "Gaze tracking based on face-color," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 1995, pp. 344–349.
[107] A. Schödl, A. Haro, and I. Essa, "Head tracking using a textured polygonal model," in Proc. Workshop Perceptual User Interfaces, 1998.
[108] E. Seemann, K. Nickel, and R. Stiefelhagen, "Head pose estimation using stereo vision for human-robot interaction," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 2004, pp. 626–631.
[109] J. Sherrah and S. Gong, "Fusion of perceptual cues for robust tracking of head pose and position," Pattern Recognition, vol. 34, no. 8, pp. 1565–1572, 2001.
[110] J. Sherrah, S. Gong, and E.-J. Ong, "Understanding pose discrimination in similarity space," in Proc. British Machine Vision Conference, 1999, pp. 523–532.
[111] ——, "Face distributions in similarity space under varying head pose," Image and Vision Computing, vol. 19, no. 12, pp. 807–819, 2001.
[112] T. Sim, S. Baker, and M. Bsat, "The CMU pose, illumination, and expression database," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1615–1618, 2003.
[113] Softopia, "HOIP face database." [Online]. Available: http://www.softopia.or.jp/en/rd/facedb.html
[114] S. Srinivasan and K. Boyer, "Head pose estimation using view based eigenspaces," in Proc. Int'l. Conf. Pattern Recognition, 2002, pp. 302–305.
[115] R. Stiefelhagen, J. Yang, and A. Waibel, "Modeling focus of attention for meeting indexing based on multiple cues," IEEE Trans. Neural Networks, vol. 13, no. 4, pp. 928–938, 2002.
[116] R. Stiefelhagen, "Tracking focus of attention in meetings," in Proc. Int'l. Conf. Multimodal Interfaces, 2002, pp. 273–280.
[117] ——, "Estimating head pose with neural networks – results on the Pointing04 ICPR workshop evaluation data," in Proc. Pointing 2004 Workshop: Visual Observation of Deictic Gestures, 2004.
[118] R. Stiefelhagen, K. Bernardin, R. Bowers, J. Garofolo, D. Mostefa, and P. Soundararajan, "The CLEAR 2006 evaluation," in Multimodal Technologies for Perception of Humans, Int'l. Workshop Classification of Events Activities and Relationships, CLEAR 2006, ser. Lecture Notes in Computer Science, R. Stiefelhagen and J. Garofolo, Eds., vol. 4122, 2007, pp. 1–44.
[119] J. Tenenbaum, V. de Silva, and J. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, pp. 2319–2323, 2000.
[120] Y.-L. Tian, L. Brown, J. Connell, S. Pankanti, A. Hampapur, A. Senior, and R. Bolle, "Absolute head pose estimation from overhead wide-angle cameras," in Proc. IEEE Int'l. Workshop Analysis and Modeling of Faces and Gestures, 2003, pp. 92–99.
[121] K. Toyama, "'Look, Ma – no hands!' Hands-free cursor control with real-time 3d face tracking," in Proc. Workshop Perceptual User Interfaces, 1998, pp. 49–54.
[122] M. Trivedi, "Human movement capture and analysis in intelligent environments," Machine Vision and Applications, vol. 14, no. 4, pp. 215–217, 2003.
[123] M. Trivedi, K. Huang, and I. Mikić, "Dynamic context capture and distributed video arrays for intelligent spaces," IEEE Trans. Syst., Man, Cybern. A, vol. 35, no. 1, pp. 145–163, 2005.
[124] J. Tu, T. Huang, and H. Tao, "Accurate head pose tracking in low resolution video," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 2006, pp. 573–578.
[125] J. Tu, Y. Fu, Y. Hu, and T. Huang, "Evaluation of head pose estimation for studio data," in Multimodal Technologies for Perception of Humans, Int'l. Workshop Classification of Events Activities and Relationships, CLEAR 2006, ser. Lecture Notes in Computer Science, R. Stiefelhagen and J. Garofolo, Eds., vol. 4122, 2007, pp. 281–290.
[126] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2001, pp. 511–518.
[127] M. Voit, "CLEAR 2007 evaluation plan: Head pose estimation," 2007.
[128] M. Voit, K. Nickel, and R. Stiefelhagen, "A Bayesian approach for multi-view head pose estimation," in Proc. IEEE Int'l. Conf. Multisensor Fusion and Integration for Intelligent Systems, 2006, pp. 31–34.
[129] ——, "Neural network-based head pose estimation and multi-view fusion," in Multimodal Technologies for Perception of Humans, Int'l. Workshop Classification of Events Activities and Relationships, CLEAR 2006, ser. Lecture Notes in Computer Science, R. Stiefelhagen and J. Garofolo, Eds., vol. 4122, 2007, pp. 291–298.
[130] ——, "Head pose estimation in single- and multi-view environments – results on the CLEAR'07 benchmarks," in Proc. Int'l. Workshop Classification of Events Activities and Relationships, 2007.
[131] A. Waibel, T. Schultz, M. Bett, M. Denecke, R. Malkin, I. Rogina, R. Stiefelhagen, and J. Yang, "SMaRT: the smart meeting room task at ISL," in Proc. IEEE Int'l. Conf. Acoustics, Speech, and Signal Processing, vol. 4, 2003, pp. 752–755.
[132] J.-G. Wang and E. Sung, "EM enhancement of 3D head pose estimated by point at infinity," Image and Vision Computing, vol. 25, no. 12, pp. 1864–1874, 2007.
[133] H. Wilson, F. Wilkinson, L. Lin, and M. Castillo, "Perception of head orientation," Vision Research, vol. 40, no. 5, pp. 459–472, 2000.
[134] W. H. Wollaston, "On the apparent direction of eyes in a portrait," Phil. Trans. Royal Society of London, vol. 114, pp. 247–256, 1824.
[135] J.-W. Wu and M. Trivedi, "Visual modules for head gesture analysis in intelligent vehicle systems," in Proc. IEEE Intelligent Vehicles Symposium, 2006, pp. 13–18.
[136] J. Wu and M. Trivedi, "A two-stage head pose estimation framework and evaluation," Pattern Recognition, vol. 41, no. 3, pp. 1138–1158, 2008.
[137] J. Wu, J. Pedersen, D. Putthividhya, D. Norgaard, and M. M. Trivedi, "A two-level pose estimation framework using majority voting of Gabor wavelets and bunch graph analysis," in Proc. Pointing 2004 Workshop: Visual Observation of Deictic Gestures, 2004.
[138] Y. Wu and K. Toyama, "Wide-range, person- and illumination-insensitive head orientation estimation," in Proc. IEEE Int'l. Conf. Automatic Face and Gesture Recognition, 2000, pp. 183–188.
[139] J. Xiao, T. Moriyama, T. Kanade, and J. Cohn, "Robust full-motion recovery of head by dynamic templates and re-registration techniques," Int'l. J. Imaging Systems and Technology, vol. 13, no. 1, pp. 85–94, 2003.
[140] J. Xiao, S. Baker, I. Matthews, and T. Kanade, "Real-time combined 2D+3D active appearance models," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, 2004, pp. 535–542.
[141] Y. Xiong and F. Quek, "Meeting room configuration and multiple cameras calibration in meeting analysis," in Proc. Int'l. Conf. Multimodal Interfaces, 2005, pp. 1–8.
[142] S. Yan, Z. Zhang, Y. Fu, Y. Hu, J. Tu, and T. Huang, "Learning a person-independent representation for precise 3d pose estimation," in Proc. Int'l. Workshop Classification of Events Activities and Relationships, 2007.
[143] M.-H. Yang, D. Kriegman, and N. Ahuja, "Detecting faces in images: a survey," IEEE Trans. Pattern Anal. Machine Intell., vol. 24, no. 1, pp. 34–58, 2002.
[144] R. Yang and Z. Zhang, "Model-based head pose tracking with stereovision," in Proc. Int'l. Conf. Automatic Face and Gesture Recognition, 2002, pp. 242–247.
[145] P. Yao, G. Evans, and A. Calway, "Using affine correspondence to estimate 3-d facial pose," in Proc. Int'l. Conf. Image Processing, 2001, pp. 919–922.
[146] Z. Zhang, Y. Hu, M. Liu, and T. Huang, "Head pose estimation in seminar room using multi view face detectors," in Multimodal Technologies for Perception of Humans, Int'l. Workshop Classification of Events Activities and Relationships, CLEAR 2006, ser. Lecture Notes in Computer Science, R. Stiefelhagen and J. Garofolo, Eds., vol. 4122, 2007, pp. 299–304.
[147] G. Zhao, L. Chen, J. Song, and G. Chen, "Large head movement tracking using SIFT-based registration," in Proc. Int'l. Conf. Multimedia, 2007, pp. 807–810.
[148] L. Zhao, G. Pingali, and I. Carlbom, "Real-time head orientation estimation using neural networks," in Proc. Int'l. Conf. Image Processing, 2002, pp. 297–300.
[149] W. Zhao, R. Chellappa, P. Phillips, and A. Rosenfeld, "Face recognition: A literature survey," ACM Computing Surveys, vol. 35, no. 4, pp. 399–458, 2003.
[150] Y. Zhu and K. Fujimura, "3D head pose estimation with optical flow and depth constraints," in Proc. IEEE Int'l. Conf. 3-D Digital Imaging and Modeling, 2003, pp. 211–216.
[151] ——, "Head pose estimation for driver monitoring," in Proc. IEEE Intelligent Vehicles Symposium, 2004, pp. 501–506.

Erik Murphy-Chutorian received the B.A. degree in Engineering Physics from Dartmouth College in 2002. He received the M.S. degree in Electrical and Computer Engineering from the University of California, San Diego in 2005, and he is a candidate for the Ph.D. degree, anticipating completion in 2008. At the University of California, San Diego, he has tackled problems in computer vision, including object recognition, invariant region detection, visual tracking, and head pose estimation. He has designed and implemented numerous real-time systems for human-computer interaction, intelligent environments, and driver assistance. Currently, he is a Software Engineer at Google, Inc. in Mountain View, CA.

Mohan M. Trivedi received the B.E. degree (with honors) from the Birla Institute of Technology and Science in India in 1974 and the Ph.D. degree from Utah State University, Logan, in 1979. He is a Professor of electrical and computer engineering and the founding Director of the Computer Vision and Robotics Research Laboratory, University of California at San Diego (UCSD), La Jolla, CA. He has a broad range of research interests in the areas of intelligent systems, computer vision, intelligent ("smart") environments, intelligent vehicles and transportation systems, and human-machine interfaces. He has published nearly 300 technical articles and has edited over a dozen volumes, including books, special issues, video presentations, and conference proceedings.

Dr. Trivedi serves on the executive committee of the California Institute for Telecommunications and Information Technology [Cal-IT2] as the Leader of the Intelligent Transportation and Telematics Layer at UCSD. He serves regularly as a Consultant to industry and government agencies in the USA and abroad. He was the Editor-in-Chief of the Machine Vision and Applications journal. He served as the Chairman of the Robotics Technical Committee of the IEEE Computer Society and on the ADCOM of the IEEE SMC Society. He received the Distinguished Alumnus Award from Utah State University, the Pioneer Award (Technical Activities), and the Meritorious Service Award from the IEEE Computer Society.

TABLE V

S TANDARD D ATASETS FOR H EAD P OSE E STIMATION

Pointing’04–The Pointing’04corpus[33]was included as part of the Pointing2004Workshop on Visual Observation of Deictic Gestures to allow for the uniform evaluation of head pose estimation systems.It was also used as one of two datasets to evaluate head pose estimators in the2006 International Workshop on Classi?cation of Events Activities and Relationships(CLEAR’06)[118].The Pointing’04consists of15sets of near-?eld images,with each set containing2series of93images of the same person at93discrete poses.The discrete poses span both pitch and yaw,ranging from?90?to90?in both DOF.The subjects range in age from20to40years old,?ve possessing facial hair and seven wearing glasses.Each subject was photographed against a uniform background,and the heads were manually cropped.Head pose ground truth was obtained by directional suggestion.This led to poor uniformity between subjects,and as a result,the Pointing’04dataset contains substantial errors.This has been noted in evaluations[125].The data is publicly available at http://www-prima.inrialpes.fr/Pointing04.

CHIL-CLEAR06–The CHIL-CLEAR06dataset was collected for the CLEAR’06workshop and contains multi-view video recordings of seminars given by students and lecturers[118].The recordings consist of12training sequences and14test sequences recorded in a seminar room equipped with four synchronized cameras in each of the corners of the room.The length of the sequences range from18to68minutes each,and provide a far-?eld, low-resolution image of the head in natural indoor lighting.Manual annotations are made in every tenth video frame,providing a bounding box around the location of the presenter’s head,along with a coarse pose orientation label from one of8discrete yaws at45?intervals.More information about the dataset can be found at https://www.doczj.com/doc/124477537.html,.

CHIL-CLEAR07–The CHIL-CLEAR07dataset was created for the CLEAR’07workshop and similar to CHIL-CLEAR06contains multi-view video recordings of a lecturer in a seminar room[127].In this iteration,however,head orientation was provided by a magnetic sensor,providing?ne head pose estimates relative to a global coordinate system.The dataset consists of15videos with four-synchronized cameras captured at15fps.In addition to the pose information from the magnetic sensor,which was capture for every frame,a manual annotation of the bounding box around each presenter’s head was provided for every5th frame.More information about the dataset can be found at https://www.doczj.com/doc/124477537.html,.

IDIAP Head Pose–The IDIAP Head Pose database consists of eight meeting sequences recorded from a single camera,each approximately one minute in duration[1].It was provided as another source of evaluation for head pose estimators at the CLEAR’07workshop,and from this context is often referred to as the AMI dataset.It consists of eight meeting sequences recorded from a single camera,each approximately one minute in duration.In each sequences,two subjects whom are always visible were continuously annotated using a magnetic sensor to measure head location and orientation.The head pose information is provided as?ne measures of pitch and yaw relative to the camera,ranging within?90?and90?for both DOF.The dataset is available on request,details can be found at http://www.idiap.ch/HeadPoseDatabase.

CMU PIE–The CMU Pose,Illumination,and Expression(PIE)dataset contains68facial images of different people across13poses,under43 different illumination conditions,and with4different expressions[112].The pose ground truth was obtained with a13cameras array,each positioned to provide a speci?c relative pose angle.This consisted of9cameras at approximately22.5?intervals across yaw,one camera above the center,one camera below the center,and one in each corner of the https://www.doczj.com/doc/124477537.html,rmation regarding obtaining the dataset can be found at http://www.ri.cmu. edu/projects/project418.html.

Softopia HOIP – The Softopia HOIP dataset consists of two sets of near-field face images of 300 people (150 men and 150 women) against a uniform background [113]. The first set consists of 168 discrete poses (24 yaws and 7 pitches at 15° intervals), spanning frontal and rear views of each head. The second set contains finer yaw increments, capturing 511 discrete poses (73 yaws at 5° intervals and 7 pitches at 15° intervals). In both sets, the images were captured with high precision, using camera arrays and rotating platforms. The dataset is available to Japanese academic institutions at http://www.softopia.or.jp/rd/facedb.html.

CVRR-86 – The CVRR-86 dataset contains 3894 near-field images of 28 subjects at 86 discrete poses against a uniform background [136]. Subjects were asked to move their heads while pose information was recorded with a magnetic sensor. Pitch and yaw were quantized into 15° intervals ranging from -45° to 60° and -90° to 90°, respectively. Not every pose has samples from each person, and some poses have more than one sample from each person. The dataset is not available online at this time, but more information can be found at http://cvrr.ucsd.edu.
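Reproducing this quantization from raw sensor readings amounts to rounding each angle to the nearest 15° grid value and clamping it to the stated range. A minimal sketch; the exact rounding rule used by the authors is an assumption:

```python
def quantize(angle_deg, lo, hi, step=15):
    """Round an angle to the nearest multiple of `step` degrees,
    clamped to the closed interval [lo, hi]."""
    snapped = round(angle_deg / step) * step
    return max(lo, min(hi, snapped))

def quantize_pose(pitch_deg, yaw_deg):
    # Pitch spans -45..60 and yaw spans -90..90, both at 15-degree steps.
    return quantize(pitch_deg, -45, 60), quantize(yaw_deg, -90, 90)

assert quantize_pose(-38.2, 83.0) == (-45, 90)
```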

CVRR-363 – The CVRR-363 dataset contains near-field images of 10 people against a uniform background [86]. 363 discrete poses, ranging from -30° to 20° in pitch and -80° to 80° in yaw at 5° intervals, were recorded with an optical motion capture system. To ensure that a single image was captured for each person at every discrete pose, the subjects were provided with visual feedback on multiple projection screens, and software ensured that an image was captured whenever the subject was within 0.5° of the desired pose. The dataset is not available online at this time, but more information can be found at http://cvrr.ucsd.edu.
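This capture protocol reduces to a proximity test against an 11 × 33 grid of target poses (11 pitches from -30° to 20° and 33 yaws from -80° to 80°, both at 5° steps, which accounts for the 363 poses). A minimal sketch of such trigger logic; the interface and bookkeeping are a hypothetical reconstruction, not the authors’ capture software:

```python
# Target grid: 11 pitches x 33 yaws = 363 discrete poses at 5-degree steps.
TARGETS = [(p, y) for p in range(-30, 25, 5) for y in range(-80, 85, 5)]
assert len(TARGETS) == 363

def pose_to_capture(pitch, yaw, captured, tol=0.5):
    """Return an as-yet-uncaptured target pose within `tol` degrees of the
    current motion-capture reading (pitch, yaw), or None if there is none."""
    for target in TARGETS:
        if target in captured:
            continue
        tp, ty = target
        if abs(pitch - tp) <= tol and abs(yaw - ty) <= tol:
            return target
    return None
```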

USF HumanID – The USF HumanID Face dataset contains near-field color face images of 10 people against a uniform background [9]. There are 181 images of each person, varying across yaw from -90° to 90° in 1° increments. The dataset is not publicly available online at this time.

BU Face Tracking – The BU face tracking dataset contains two sets of 30 fps video sequences with head pose recorded by a magnetic sensor [14]. The first set consists of 9 video sequences for each of five subjects taken under uniform illumination, and the second set consists of 9 video sequences for each of three subjects with time-varying illumination. Subjects were asked to perform free head motion, which includes translations and rotations. The dataset is available online at http://www.cs.bu.edu/groups/ivc/HeadTracking.

CVRR LISAP-14 – The CVRR LISAP-14 dataset contains 14 video sequences of an automobile driver while driving in daytime and nighttime lighting conditions (eight sequences during the day, four sequences at night with near-IR illumination) [85]. Each sequence is approximately 8 minutes in length, and includes head position and orientation as recorded by an optical motion capture system. The dataset is not available online at this time, but more information can be found at http://cvrr.ucsd.edu.

CAS-PEAL – The CAS-PEAL dataset contains 99,594 images of 1040 people with varying pose, expression, accessory, and lighting [29]. For each person, 9 yaws were simultaneously captured with a camera array at 3 pitches obtained by directional suggestion, providing a total of 27 discrete poses. A subset of the data is available on request – more information can be found at http://www.jdl.ac.cn/peal.

FacePix – The FacePix dataset contains 181 images for each of 30 subjects, spanning -90° to 90° in yaw at 1° intervals [67]. The images were captured using a camera on a rotating platform, and following the capture they were cropped and scaled manually to ensure that the eyes and face appear at the same vertical position in every view. The dataset is available online at http://cubic.asu.edu/resources/index.php.
