Complex Background Subtraction by Pursuing Dynamic Spatio-Temporal Models

Liang Lin, Yuanlu Xu, Xiaodan Liang, and Jianhuang Lai

Abstract—Although it has been widely discussed in video surveillance, background subtraction is still an open problem in the context of complex scenarios, e.g., dynamic backgrounds, illumination variations, and indistinct foreground objects. To address these challenges, we propose an effective background subtraction method by learning and maintaining an array of dynamic texture models within the spatio-temporal representations. At any location of the scene, we extract a sequence of regular video bricks, i.e., video volumes spanning both the spatial and temporal domains. The background modeling is thus posed as pursuing subspaces within the video bricks while adapting to the scene variations. For each sequence of video bricks, we pursue the subspace by employing the auto-regressive moving average model that jointly characterizes the appearance consistency and temporal coherence of the observations. During online processing, we incrementally update the subspaces to cope with disturbances from foreground objects and scene changes. In the experiments, we validate the proposed method in several complex scenarios, and show superior performance over other state-of-the-art approaches to background subtraction. Empirical studies of parameter setting and component analysis are presented as well.

Index Terms—Background modeling, visual surveillance, spatio-temporal representation.

I. INTRODUCTION

BACKGROUND subtraction (also referred to as foreground extraction) has been extensively studied for decades [1]–[6], yet it remains an open problem in real surveillance applications due to the following challenges:

• Dynamic backgrounds. A scene environment is not always static but sometimes highly dynamic, e.g., rippling water, heavy rain, and camera jitter.

Manuscript received October 15, 2013; revised January 25, 2014 and April 4, 2014; accepted May 17, 2014. Date of publication May 23, 2014; date of current version June 16, 2014. This work was supported in part by the Hi-Tech Research and Development (863) Program of China under Grant 2012AA011504, in part by the National Natural Science Foundation of China under Grant 61173082 and Grant 61173084, in part by the Guangdong Science and Technology Program under Grant 2012B031500006, in part by the Guangdong Natural Science Foundation under Grant S2013050014548, in part by the Special Project on Integration of Industry, Education and Research of Guangdong Province under Grant 2012B091000101, and in part by the Fundamental Research Funds for the Central Universities under Grant 13lgjc26. The associate editor coordinating the review of this manuscript and approving it for publication was Mr. Pierre-Marc Jodoin.

L. Lin is with the Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Sun Yat-sen University, China, the School of Advanced Computing, Sun Yat-sen University, Guangzhou 510006, China, and also with the Sun Yat-sen University-Carnegie Mellon University Shunde International Joint Research Institute, Shunde 510006, China (e-mail: linliang@…).

Y. Xu, X. Liang, and J. Lai are with the School of Information Science and Technology, Sun Yat-sen University, Guangzhou 510006, China (e-mail: merayxu@…; xdliang328@…; stsljh@…). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIP.2014.2326776

Fig. 1. Some challenging scenarios for foreground object extraction are handled by our approach: (i) a floating bottle with randomly dynamic water (in the left column), (ii) waving curtains around a person (in the middle column), and (iii) sudden light changing (in the right column).

• Lighting and illumination variations, particularly with sudden changes.

• Indistinct foreground objects having similar appearances with the surrounding backgrounds.

In this paper, we address the above-mentioned difficulties by building the background models with the online pursuit of spatio-temporal models. Some results generated by our system for the challenging scenarios are exhibited in Fig. 1. Prior to unfolding the proposed approach, we first review the existing works in the literature.

A. Related Work

Due to the pervasiveness of background subtraction in various applications, there is no unique categorization of the existing works. Here we introduce the related methods basically according to their representations, to distinguish them from our approach.

The pixel-processing approaches modeled observed scenes as a set of independent pixel processes, and they were widely applied in video surveillance applications [6], [7]. In these methods [1], [2], [8], [9], each pixel in the scene can be described by different parametric distributions (e.g., Gaussian Mixture Models) to temporally adapt to the environment changes. The parametric models, however, were not always compatible with real complex data, as they were defined based upon some underlying assumptions. To overcome this problem, some other non-parametric estimations [10]–[13]

1057-7149 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

were proposed, and effectively improved the robustness. For example, Barnich et al. [13] presented a sample-based classification model that maintained a fixed number of samples for each pixel and classified a new observation as background when it matched a predefined number of samples. Liao et al. [14] recently employed the kernel density estimation (KDE) technique to capture pixel-level variations. Some distinct scene variations, i.e., illumination changes and shadows, can be explicitly alleviated by introducing extra estimations [15]. Guyon et al. [16] proposed to utilize low-rank matrix decomposition for background modeling, where the foreground objects constitute the correlated sparse outliers. Despite acknowledged successes, this category of approaches may have limitations in complex scenarios, as the pixel-wise representations overlook the spatial correlations between pixels.

The region-based methods built background models by taking advantage of inter-pixel relations, demonstrating impressive results on handling dynamic scenes. A batch of diverse approaches were proposed to model the spatial structures of scenes, such as joint distributions of neighboring pixels [11], [17], block-wise classifiers [18], structured adjacency graphs [19], auto-regression models [20], [21], random fields [22], and multi-layer models [23], etc. A number of fast learning algorithms were also discussed to maintain these models online, accounting for environment variations or structural changes. For example, Monnet et al. [20] trained and updated the region-based model by generative subspace learning. Cheng et al. [19] employed the generalized 1-SVM algorithm for model learning and foreground prediction. In general, methods in this category separated the spatial and temporal information, and their performances were somewhat limited in highly dynamic scenarios, e.g., heavy rains or sudden illumination changes.

The third category modeled scene backgrounds by exploiting both spatial and temporal information. Mahadevan et al. [24] proposed to separate foreground objects from surroundings by judging the distinctive video patches, which contained different motions and appearances compared with the majority of the whole scene. Zhao et al. [25] addressed outdoor night background modeling by performing subspace learning within video patches. Spatio-temporal representations were also extensively discussed in other vision tasks such as action recognition [26] and trajectory parsing [27]. These methods motivated us to build models upon the spatio-temporal representations, i.e., video bricks.

In addition, several saliency-based approaches provided alternative ways based on spatio-temporal saliency estimations [24], [28], [29]. The moving objects can be extracted according to their salient appearances and/or motions against the scene backgrounds. For example, Wixson et al. [28] detected salient objects according to their consistent moving directions over time. Kim et al. [30] used a discriminant center-surround hypothesis to extract foreground objects from their surroundings.

Along with the above-mentioned background models, a number of reliable image features were utilized to better handle background noise [31]. Exemplars include the Local Binary Pattern (LBP) features [32]–[34] and color texture histograms [35]. The LBP operators describe each pixel by the relative graylevels of its neighboring pixels, and their effectiveness has been demonstrated in several vision tasks such as face recognition and object detection [32], [36], [37]. The Center-Symmetric LBP was proposed in [34] to further improve the computational efficiency. Tan and Triggs [33] extended LBP to LTP (Local Ternary Pattern) by thresholding the graylevel differences with a small value, to enhance the effectiveness on flat image regions.

B. Overview

In this work, we propose to learn and maintain dynamic models within spatio-temporal video patches (i.e., video bricks), accounting for real challenges in surveillance scenarios [7]. The algorithm processes 15~20 frames per second at a resolution of 352×288 pixels on average. We briefly overview the proposed framework of background modeling in the following aspects.

1) Spatio-Temporal Representations: We represent the observed scene by video bricks, i.e., video volumes spanning both the spatial and temporal domains, in order to jointly model spatial and temporal information. Specifically, at every location of the scene, a sequence of video bricks is extracted as the observations, within which we can learn and update the background models. Moreover, to compactly encode the video bricks against illumination variations, we design a brick-based descriptor, namely the Center-Symmetric Spatio-Temporal Local Ternary Pattern (CS-STLTP), which is inspired by the 2D scale-invariant local pattern operator proposed in [14]. Its effectiveness is also validated in the experiments.

2) Pursuing Dynamic Subspaces: We treat each sequence of video bricks at a certain location as a consecutive signal, and generate the subspace within these video bricks. A linear dynamic system (i.e., the Auto-Regressive Moving Average, ARMA, model [38]) is adopted to characterize the spatio-temporal statistics of the subspace. Specifically, given the observed video bricks, we express them by a data matrix, in which each column contains the feature of one video brick. The basis vectors (i.e., eigenvectors) of the matrix can then be estimated analytically, representing the appearance parameters of the subspace, and the parameters of dynamical variations are further computed based on the fixed appearance parameters. It is worth mentioning that our background model jointly captures the information of appearance and motion, as the data (i.e., features of the video bricks) are extracted over both the spatial and temporal domains.

3) Maintaining Dynamic Subspaces Online: Given the newly appearing video bricks with our model, moving foreground objects are segmented by estimating the residuals within the related subspaces of the scene, while the background models are maintained simultaneously to account for scene changes. The arising problem is to update the parameters of the subspaces incrementally against disturbance from foreground objects and background noise. The new observation may include noise pixels (i.e., outliers), resulting in degeneration of model updating [20], [25]. Furthermore, one video brick could be partially occluded by foreground objects in our representation, i.e., only some of the pixels in the brick are true positives. To overcome this problem, we present a novel approach to compensate the observations (i.e., the observed video bricks) by generating data from the current models. Specifically, we replace the pixels labeled as non-background by the generated pixels to synthesize the new observations. The algorithm for online model updating includes two steps: (i) update appearance parameters using the incremental subspace learning technique, and (ii) update dynamical variation parameters by analytically solving the linear reconstruction. The experiments show that the proposed method effectively improves the robustness during the online processing.

Fig. 2. An example of computing the CS-STLTP feature. For one pixel in the video brick, we construct four spatio-temporal planes. The center-symmetric local ternary pattern for each plane is calculated, which compares the intensities in a center-symmetric direction with a contrasting threshold τ. The CS-STLTP feature is concatenated from the vectors of the four planes.

The remainder of this paper is arranged as follows. We first present the model representation in Section II, and then discuss the initial learning, foreground segmentation, and online updating mechanisms in Section III, respectively. The experiments and comparisons are demonstrated in Section IV, and finally Section V concludes the paper with a summary.

II. DYNAMIC SPATIO-TEMPORAL MODEL

In this section, we introduce the background of our model, and then discuss the video brick representation and our model definition, respectively.

A. Background

In general, a complex surveillance background may include diverse appearances that sometimes move and change dynamically and randomly over time [39]. There is a branch of works on time-varying texture modeling [40]–[42] in computer vision. They often treated the scene as a whole, and pursued a global subspace by utilizing the linear dynamic system (LDS). These models worked well on some natural scenes mostly comprising a few homogeneous textures, as the LDS characterizes the subspace with a set of linearly combined components. However, under real surveillance challenges, it can be intractable to pursue such a global subspace. In this work, we represent the observed scene by an array of small and independent subspaces, each of which is defined by a linear system, so that our model is better able to handle challenging scene variations. Our background model can be viewed as a mixed compositional model consisting of linear subspaces. In particular, we conduct background subtraction with our model based on the following assumptions.

Assumption 1: The local scene variations (i.e., appearance and motion changing over time) can be captured by a low-dimensional subspace.

Assumption 2: It is feasible to separate foreground moving objects from the scene background by fully exploiting spatio-temporal statistics.

B. Spatio-Temporal Video Brick

Given the surveillance video of one scene, we first decompose it into a batch of small brick-like volumes. We consider that a video brick of small size (e.g., 4×4×5 pixels) includes relatively simple content, which can thus be generated by a few bases (components). Moreover, the brick volume integrates both spatial and temporal information, so that we can better capture complex appearance and motion variations compared with traditional image patch representations.

We divide each frame I_i (i = 1, 2, ..., n) into a set of image patches with width w and height h. A number t of patches at the same location across the frames are combined together to form a brick. In this way, we extract a sequence of video bricks V = {v_1, v_2, ..., v_n} at every location of the scene.
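As a concrete illustration, this brick extraction can be sketched in a few lines of NumPy; the function name and the dictionary layout are our own choices for this sketch, not part of the paper:

```python
import numpy as np

def extract_bricks(frames, w=4, h=4, t=5):
    """Split a (n_frames, H, W) grayscale video into non-overlapping
    w x h x t video bricks, one sequence per spatial location.

    Returns a dict mapping (row, col) to the list of bricks of shape
    (t, h, w); trailing frames that do not fill a brick are dropped.
    """
    n, H, W = frames.shape
    bricks = {}
    for r in range(0, H - h + 1, h):
        for c in range(0, W - w + 1, w):
            bricks[(r, c)] = [frames[i:i + t, r:r + h, c:c + w]
                              for i in range(0, n - t + 1, t)]
    return bricks
```

For a 352×288 scene with the default 4×4×5 bricks, this yields 88×72 independent brick sequences, each of which is later modeled by its own subspace.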

Moreover, we design a novel descriptor to describe the video brick instead of using raw RGB values. For any video brick v_i, we first apply the CS-STLTP operator on each pixel, and pool all the feature values into a histogram. For a pixel x_c, we construct a few 2D spatio-temporal planes centered at it, and compute the local ternary pattern (LTP) operator [33] on each plane. The CS-STLTP then encodes x_c by combining the LTP operators of all planes. Note that the way of splitting the spatio-temporal planes hardly affects the operator's performance. To simplify the implementation, we make the planes parallel to the Y axis, as shown in Fig. 2.

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 23, NO. 7, JULY 2014

We index the neighborhood pixels of x_c by {0, ..., M−1}; the operator response on the j-th plane can then be calculated as

$$\nabla_j(x_c) = \bigoplus_{m=0}^{M/2-1} s_\tau\!\left(p_m,\, p_{m+M/2}\right), \qquad (1)$$

where pixels m and m+M/2 are two center-symmetric neighbors of pixel x_c, and p_m and p_{m+M/2} are the graylevels of the two pixels, respectively. The sign ⊕ indicates stretching elements into a vector. The function s_τ is defined as follows:

$$s_\tau\!\left(p_m,\, p_{m+M/2}\right) = \begin{cases} 1, & \text{if } p_m > (1+\tau)\, p_{m+M/2}, \\ -1, & \text{if } p_m < (1-\tau)\, p_{m+M/2}, \\ 0, & \text{otherwise}, \end{cases} \qquad (2)$$

where τ is a constant threshold for the comparing range.

Suppose that we take M = 8 neighborhood pixels for computing the operator in each spatio-temporal plane, and the number of planes is 4. The resulting CS-STLTP vector contains M/2 × 4 = 16 bins. Fig. 2 illustrates an example of computing the CS-STLTP operator, where we apply the operator for one pixel on 4 spatio-temporal planes displayed with different colors (e.g., green, blue, purple and orange). Then we build a histogram for each video brick by accumulating the CS-STLTP responses of all pixels; this definition was previously proposed by Guo et al. [36]:

$$H(k) = \sum_{x \in v_i} \sum_{j=1}^{4} \mathbf{1}\!\left(\nabla_j(x),\, k\right), \qquad k \in [0, K], \qquad (3)$$

where 1(a, b) is an indicator function, i.e., 1(a, b) = 1 only if a = b. To measure the operator response, we transform the binary vector of CS-STLTP into a uniform value, defined as the number of spatial transitions (bitwise changes), as discussed in [36]. For example, the pattern (i.e., the vector of 16 bins) 0000000000000000 has a value of 0 and 1000000000000000 a value of 1. In our implementation, we further quantize all possible values into 48 levels. To further improve the capability, we can generate histograms in each color channel and concatenate them together.

The proposed descriptor is computationally efficient and compact for describing the video brick. In addition, by introducing a tolerant comparing range in the LTP operator computation, it is robust to local spatio-temporal noise within a range.
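To make the descriptor concrete, the ternary comparison of Eq. (2), the center-symmetric pairing of Eq. (1), and the uniform-value measure can be sketched as follows. The neighborhood sampling and plane construction are simplified here, and all function names are illustrative:

```python
def s_tau(p_m, p_sym, tau=0.05):
    """Scale-tolerant ternary comparison of Eq. (2)."""
    if p_m > (1 + tau) * p_sym:
        return 1
    if p_m < (1 - tau) * p_sym:
        return -1
    return 0

def plane_pattern(neighbors, tau=0.05):
    """Ternary codes for one spatio-temporal plane (Eq. (1)): each of
    the first M/2 neighbors is compared with its center-symmetric
    partner, giving M/2 ternary values."""
    M = len(neighbors)
    return [s_tau(neighbors[m], neighbors[m + M // 2], tau)
            for m in range(M // 2)]

def uniform_value(pattern):
    """'Uniform' measure of a pattern: the number of bitwise changes
    between adjacent entries, e.g. 1000000000000000 -> 1."""
    return sum(int(a != b) for a, b in zip(pattern, pattern[1:]))
```

With M = 8 neighbors per plane and 4 planes, concatenating four such `plane_pattern` outputs gives the 16-bin CS-STLTP vector described above.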

C. Model Definition

Let m be the descriptor length for each brick, and V = {v_1, v_2, ..., v_n}, v_i ∈ R^m, be a sequence of video bricks at a certain location of the observed background. We can use a set of bases (components) C = [C_1, C_2, ..., C_d] to represent the subspace in which V lies. Each video brick v_i in V can be represented as

$$v_i = \sum_{j=1}^{d} z_{i,j}\, C_j + \omega_i, \qquad (4)$$

where C_j is the j-th basis (j-th column of matrix C) of the subspace, z_{i,j} the coefficient for C_j, and ω_i the appearance residual. We denote by C the appearance consistency of the sequence of video bricks. In some traditional background models by subspace learning, z_{i,j} can be solved and kept as a constant, under the assumption that the appearance of the background would be stable within the observations. In contrast, we treat z_{i,j} as a variable term that can be further phrased as the time-varying state, accounting for temporally coherent variations (i.e., the motions). For notational simplicity, we neglect the subscript j, and denote Z = {z_1, z_2, ..., z_n} for all the bricks. The dynamic model is formulated as

$$z_{i+1} = A\, z_i + \eta_i, \qquad (5)$$

where η_i is the state residual, and A is a d×d matrix modeling the variations. With this definition, we consider A to represent the temporal coherence among the observations.

Therefore, the problem of pursuing the dynamic subspace is posed as solving for the appearance consistency C and the temporal coherence A within the observations. Since the sequence states Z are unknown, we shall jointly solve C, A, Z by minimizing an empirical energy function F_n(C, A, Z):

$$\min F_n(C, A, Z) = \frac{1}{2n} \sum_{i=1}^{n} \left( \|v_i - C z_i\|_2^2 + \|z_i - A z_{i-1}\|_2^2 \right). \qquad (6)$$

Here F_n(C, A, Z) is not completely convex, but we can solve it by fixing either Z or (C, A). Nevertheless, its computation cost is too expensive for learning the entire background online. Here we simplify the dynamic model in Equation (5) into a linear system, following the auto-regressive moving average (ARMA) process. In the literature, Soatto et al. [40] originally associated the output of the ARMA model with dynamic textures, and showed that the first-order ARMA model, driven by white zero-mean Gaussian noise, can capture a wide range of dynamic textures. In our approach, the difficulty of modeling the dynamic variations is alleviated by the brick-based representation, i.e., the observed scene is decomposed into video bricks. Thus, we consider the ARMA process a suitable solution to model the time-varying variables, which can be solved efficiently. Specifically, we introduce a robustness term (i.e., matrix) B, which includes a number d′ of bases, and we set η_i = B ε_i, where ε_i denotes the noise.

We further summarize the proposed dynamic model, adding the subscript n to the main components to indicate that they are solved within a number n of observations:

$$v_i = C_n z_i + \omega_i, \quad z_{i+1} = A_n z_i + B_n \varepsilon_i, \quad \omega_i \overset{\text{IID}}{\sim} \mathcal{N}(0, \Sigma_\omega), \quad \varepsilon_i \overset{\text{IID}}{\sim} \mathcal{N}(0, I_{d'}). \qquad (7)$$

In this model, C_n ∈ R^{m×d} and A_n ∈ R^{d×d} represent the appearance consistency and temporal coherence, respectively. B_n ∈ R^{d×d′} is the robustness term constraining the evolution of Z over time. ω_i ∈ R^m indicates the residual corresponding to observation v_i, and ε_i ∈ R^{d′} the noise of state variations. During the subspace learning, ω_i and ε_i are assumed to follow zero-mean Gaussian distributions. Given a new brick mapped into the subspace, ω_i and ε_i can be used to measure how well the observation fits the subspace, so we utilize them for foreground object detection during online processing.

The proposed model is time-varying, and the parameters C_n, A_n, B_n can be updated incrementally along with the processing of new observations, in order to adapt our model to scene changes.

III. LEARNING ALGORITHM

In this section, we discuss the learning of the spatio-temporal background models, including initial subspace generation and online maintenance. The initial learning is performed at the beginning of system deployment, when only a few foreground objects move in the scene. Afterwards, the system switches to the mode of online maintenance.

A. Initial Model Learning

In the initial stage, the model defined in Equation (7) degenerates into a non-dynamic linear system, as the n observations are extracted and fixed. Given a brick sequence V = {v_1, v_2, ..., v_n}, we present an algorithm to identify the model parameters C_n, A_n, B_n, following the sub-optimal solution proposed in [40].

To guarantee that Equation (7) has a unique and canonical solution, we postulate

$$n \gg d, \qquad \mathrm{Rank}(C_n) = d, \qquad C_n^{\top} C_n = I_d, \qquad (8)$$

where I_d is the identity matrix of dimension d×d. The appearance consistency term C_n can be estimated as

$$C_n = \arg\min_{C_n} \left| W_n - C_n [z_1\, z_2 \cdots z_n] \right|, \qquad (9)$$

where W_n is the data matrix composed of the observed video bricks [v_1, v_2, ..., v_n]. Equation (9) satisfies the full-rank approximation property and can thus be solved by the singular value decomposition (SVD). We have

$$W_n = U \Sigma Q^{\top}, \qquad U^{\top} U = I, \qquad Q^{\top} Q = I, \qquad (10)$$

where Q is the unitary matrix, U includes the eigenvectors, and Σ is the diagonal matrix of the singular values. Thus, C_n is taken as the first d columns of U, and the state matrix [z_1 z_2 ⋯ z_n] as the product of the d×d sub-matrix of Σ and the first d rows of Q^⊤ (i.e., the first d columns of Q, transposed).

The temporal coherence term A_n is calculated by solving the following linear problem:

$$A_n = \arg\min_{A_n} \left| [z_2\, z_3 \cdots z_n] - A_n [z_1\, z_2 \cdots z_{n-1}] \right|. \qquad (11)$$

The statistical robustness term B_n is estimated from the reconstruction error E:

$$E = [z_2\, z_3 \cdots z_n] - A_n [z_1\, z_2 \cdots z_{n-1}] = B_n [\varepsilon_1\, \varepsilon_2 \cdots \varepsilon_{n-1}], \qquad (12)$$

where B_n ≅ \frac{1}{\sqrt{n-1}} E. Since the rank of A_n is d and d ≪ n, the rank d′ of the input-to-state noise is assumed to be much smaller than d. That is, the dimension of E can be further reduced by SVD: E = U′ Σ′ Q′^⊤, and we have

$$B_n = \frac{1}{\sqrt{n-1}}\, \left[ U'_1 \cdots U'_{d'} \right] \begin{bmatrix} \Sigma'_1 & & \\ & \ddots & \\ & & \Sigma'_{d'} \end{bmatrix}. \qquad (13)$$

The values of d and d′ essentially imply the complexity of the subspace from the aspects of appearance consistency and temporal coherence, respectively. For example, video bricks containing static content can be well described with a function of low dimensions, while highly dynamic video bricks (e.g., from an active fountain) require more bases to generate. In real surveillance scenarios, it is not practical to pre-determine the complexity of scene environments. Hence, in the proposed method, we adaptively determine d and d′ by thresholding the eigenvalues in Σ and Σ′, respectively:

$$d^{*} = \arg\max_{d}\, \{\Sigma_{d} > T_{d}\}, \qquad d'^{*} = \arg\max_{d'}\, \{\Sigma'_{d'} > T_{d'}\}, \qquad (14)$$

where Σ_d indicates the d-th eigenvalue in Σ and Σ′_{d′} the d′-th eigenvalue in Σ′.
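The initial identification of Eqs. (9)–(13) can be sketched with NumPy's SVD and least squares. This is a simplified illustration: the dimension d is taken as an input rather than selected by Eq. (14), and B_n uses the stated approximation B_n ≅ E/√(n−1) rather than the second SVD of Eq. (13):

```python
import numpy as np

def initial_learning(W, d):
    """Identify C_n, A_n, B_n from a data matrix W (m x n) of brick
    features, following the sub-optimal solution of Eqs. (9)-(12)."""
    U, S, Qt = np.linalg.svd(W, full_matrices=False)
    C = U[:, :d]                          # appearance consistency, Eq. (9)
    Z = np.diag(S[:d]) @ Qt[:d, :]        # states [z_1 ... z_n], Eq. (10)
    # Temporal coherence A_n via least squares, Eq. (11)
    A = np.linalg.lstsq(Z[:, :-1].T, Z[:, 1:].T, rcond=None)[0].T
    # Robustness term from the reconstruction error, Eq. (12),
    # using B_n ~ E / sqrt(n - 1) (the paper optionally reduces E
    # further by a second SVD, Eq. (13))
    E = Z[:, 1:] - A @ Z[:, :-1]
    B = E / np.sqrt(W.shape[1] - 1)
    return C, A, B, Z
```

The orthonormality constraint of Eq. (8) is satisfied automatically, since the columns of U returned by the SVD are orthonormal.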

B. Online Model Maintenance

We now discuss the online processing with our model, which segments foreground moving objects and keeps the model updated.

1) Foreground Segmentation: Given a newly appearing video brick v_{n+1}, we can determine whether pixels in v_{n+1} belong to the background or not by thresholding their appearance residual and state residual. We first estimate the state of v_{n+1} with the existing C_n,

$$z_{n+1} = C_n^{\top} v_{n+1}, \qquad (15)$$

and further the appearance residual of v_{n+1},

$$\omega_{n+1} = v_{n+1} - C_n z_{n+1}. \qquad (16)$$

As the state z_n and the temporal coherence A_n have been solved, we can then estimate the state residual ε_n according to Equation (7),

$$B_n \varepsilon_n = z_{n+1} - A_n z_n \;\Rightarrow\; \varepsilon_n = \mathrm{pinv}(B_n)\,(z_{n+1} - A_n z_n), \qquad (17)$$

where pinv denotes the pseudo-inverse operator.

With the state residual ε_n and the appearance residual ω_{n+1} for the new video brick v_{n+1}, we apply the following criteria for foreground segmentation, in which two thresholds are introduced:

1) v_{n+1} is classified as background only if all dimensions of ε_n are less than a threshold T_ε.

2) If v_{n+1} has been labeled as non-background, perform the pixel-wise segmentation by comparing ω_{n+1} with a threshold T_ω: a pixel is segmented as foreground if its corresponding dimension in ω_{n+1} is greater than T_ω.
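A minimal sketch of this segmentation procedure (Eqs. (15)–(17) plus the two-threshold criteria) might look as follows; the threshold values and the use of absolute residuals are illustrative assumptions of this sketch:

```python
import numpy as np

def segment_brick(v_new, z_prev, C, A, B, T_eps=0.5, T_omega=0.3):
    """Classify a new brick feature v_new against the current subspace.

    Returns a per-dimension foreground mask plus the state, appearance
    residual, and state residual of Eqs. (15)-(17)."""
    z_new = C.T @ v_new                               # state, Eq. (15)
    omega = v_new - C @ z_new                         # appearance residual, Eq. (16)
    eps = np.linalg.pinv(B) @ (z_new - A @ z_prev)    # state residual, Eq. (17)
    if np.all(np.abs(eps) < T_eps):
        # Criterion 1: every state-residual dimension is small,
        # so the whole brick is background.
        fg_mask = np.zeros_like(omega, dtype=bool)
    else:
        # Criterion 2: pixel-wise test on the appearance residual.
        fg_mask = np.abs(omega) > T_omega
    return fg_mask, z_new, omega, eps
```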


2) Model Updating: During the online processing, the key problem for model updating is to deal with foreground disturbance, i.e., to avoid absorbing pixels from foreground objects or noise.

In this work, we develop an effective approach to update the model with synthesized data. We first generate a video brick from the current model, namely the noise-free brick v̂_{n+1}, as

$$\hat{z}_{n+1} = A_n z_n, \qquad \hat{v}_{n+1} = C_n \hat{z}_{n+1}. \qquad (18)$$

Then we extract pixels from v̂_{n+1} to compensate the occluded (i.e., foreground) pixels in the newly appearing brick. Concretely, the pixels labeled as non-background are replaced by the pixels from the noise-free video brick at the same place. We thus obtain a synthesized video brick v̄_{n+1} for model updating. Given the brick v̄_{n+1}, the data matrix W_n composed of the observed video bricks is extended to W_{n+1}. Then we update the model C_{n+1} according to Equation (9).
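The compensation of Eq. (18) amounts to splicing model predictions into the observed brick. A sketch, with the foreground mask assumed (for illustration) to be given in feature-dimension coordinates:

```python
import numpy as np

def synthesize_brick(v_obs, fg_mask, z_prev, C, A):
    """Build the synthesized brick v_bar of the updating step:
    predict a noise-free brick from the model (Eq. (18)) and use it
    to replace the dimensions labeled as non-background."""
    z_pred = A @ z_prev                        # predicted state
    v_pred = C @ z_pred                        # noise-free brick feature
    v_bar = np.where(fg_mask, v_pred, v_obs)   # splice predictions in
    return v_bar
```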

Our algorithm of model updating includes two steps: (i) update the parameters of appearance consistency C_{n+1} by employing the incremental subspace learning technique, and (ii) update the parameters of state variations A_{n+1}, B_{n+1}.

(i) Step 1. For the d-dimensional subspace with eigenvectors C_n and eigenvalues Λ_n, its covariance matrix Cov_n can be approximated as

$$\mathrm{Cov}_n \approx \sum_{j=1}^{d} \lambda_{n,j}\, c_{n,j}\, c_{n,j}^{\top} = C_n \Lambda_n C_n^{\top}, \qquad (19)$$

where c_{n,j} and λ_{n,j} denote the j-th eigenvector and eigenvalue, respectively. With the newly synthesized data v̄_{n+1}, the updated covariance matrix Cov_{n+1} is formulated as

$$\mathrm{Cov}_{n+1} = (1-\alpha)\,\mathrm{Cov}_n + \alpha\, \bar{v}_{n+1} \bar{v}_{n+1}^{\top} \approx (1-\alpha)\, C_n \Lambda_n C_n^{\top} + \alpha\, \bar{v}_{n+1} \bar{v}_{n+1}^{\top} = \sum_{i=1}^{d} (1-\alpha)\, \lambda_{n,i}\, c_{n,i}\, c_{n,i}^{\top} + \alpha\, \bar{v}_{n+1} \bar{v}_{n+1}^{\top}, \qquad (20)$$

where α denotes the learning rate. The covariance matrix can be further re-formulated to simplify computation as

$$\mathrm{Cov}_{n+1} = Y_{n+1} Y_{n+1}^{\top}, \qquad (21)$$

where Y_{n+1} = [y_{n+1,1}\; y_{n+1,2}\; ...\; y_{n+1,d+1}] and each column y_{n+1,j} in Y_{n+1} is defined as

$$y_{n+1,j} = \begin{cases} \sqrt{(1-\alpha)\,\lambda_{n,j}}\; c_{n,j}, & \text{if } 1 \le j \le d, \\ \sqrt{\alpha}\; \bar{v}_{n+1}, & \text{if } j = d+1. \end{cases} \qquad (22)$$

To reduce the computation cost, we can estimate C_{n+1} from the smaller matrix Y_{n+1}^⊤ Y_{n+1} instead of the original large matrix Cov_{n+1}:

$$\left( Y_{n+1}^{\top} Y_{n+1} \right) e_{n+1,j} = \lambda_{n+1,j}\, e_{n+1,j}, \qquad j = 1, 2, ..., d+1, \qquad (23)$$

where e_{n+1,j} and λ_{n+1,j} are the j-th eigenvector and eigenvalue of the matrix Y_{n+1}^⊤ Y_{n+1}, respectively. Let c_{n+1,j} = Y_{n+1} e_{n+1,j}; we can then re-write Equation (23) as

$$Y_{n+1} Y_{n+1}^{\top} Y_{n+1}\, e_{n+1,j} = \lambda_{n+1,j}\, Y_{n+1}\, e_{n+1,j}, \quad \text{i.e.,} \quad \mathrm{Cov}_{n+1}\, c_{n+1,j} = \lambda_{n+1,j}\, c_{n+1,j}, \qquad j = 1, 2, ..., d+1. \qquad (24)$$

We thus obtain the updated eigenvectors C_{n+1} and the corresponding eigenvalues Λ_{n+1} of the new covariance matrix Cov_{n+1}. Note that the dimension of the subspace automatically increases with the newly added data v̄_{n+1}. To guarantee that the appearance parameters remain stable, we keep the main principal (i.e., top d) eigenvectors and eigenvalues while discarding the least significant components. The above incremental subspace learning algorithm has been widely applied in several vision tasks such as face recognition and image segmentation [43]–[45], and also for background modeling in [4], [25], and [46]. However, the noisy observations caused by moving objects or scene variations often disturb the subspace maintenance, e.g., the eigenvectors could change dramatically during the processing. Many efforts [47], [48] have been dedicated to improving the robustness of incremental learning by using statistical analysis. Several discriminative learning algorithms [49] were also employed to train background classifiers that can be incrementally updated. In this work, we utilize a version of Robust Incremental PCA (RIPCA) [50] to cope with the outliers in v̄_{n+1}. Note that v̄_{n+1} consists of pixels either from the generated data v̂_{n+1} or from the real video, so outliers may exist in some dimensions.
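The incremental eigen-update of Eqs. (20)–(24) reduces to a small (d+1)-dimensional eigenproblem. A sketch, where the `eigh` call and the final re-normalization are implementation choices of this illustration:

```python
import numpy as np

def update_subspace(C, lam, v_bar, alpha=0.05):
    """Fold the synthesized brick v_bar into the d-dim subspace
    (C: m x d eigenvectors, lam: d eigenvalues) via Eqs. (20)-(24),
    keeping the top d components afterwards."""
    d = C.shape[1]
    # Columns of Y_{n+1}, Eq. (22)
    Y = np.hstack([np.sqrt((1 - alpha) * lam) * C,
                   np.sqrt(alpha) * v_bar[:, None]])
    # Small (d+1) x (d+1) eigenproblem on Y^T Y, Eq. (23)
    lam_new, E = np.linalg.eigh(Y.T @ Y)
    order = np.argsort(lam_new)[::-1][:d]      # keep the top d components
    C_new = Y @ E[:, order]                    # back-project, Eq. (24)
    C_new = C_new / np.linalg.norm(C_new, axis=0)  # unit-norm eigenvectors
    return C_new, lam_new[order]
```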

In the traditional PCA learning, the solution is derived by minimizing a least-squared reconstruction error,

$$\min |r_{n+1}|^2 = \left| C_n C_n^{\top} \bar{v}_{n+1} - \bar{v}_{n+1} \right|^2. \qquad (25)$$

Following [50], we impose a robustness function w(t) = \frac{1}{1 + (t/\rho)^2} over each dimension of r_{n+1}, and the target is re-defined by replacing each squared residual with its weighted version:

$$\min \sum_{k} w\!\left(r_{n+1}^{k}\right)\left(r_{n+1}^{k}\right)^{2}, \qquad (26)$$

where the superscript k indicates the k-th dimension. The parameter ρ in the robustness function is estimated as ρ = [ρ_1, ρ_2, ..., ρ_{|v̄_{n+1}|}], with

$$\rho_k = \max_{j=1}^{d}\, \beta\, \lambda_{n,j}\, \left| c_{n,j}^{k} \right|, \qquad k = 1, 2, ..., |\bar{v}_{n+1}|, \qquad (27)$$

where β is a fixed coefficient. The k-th dimension of ρ is proportional to the maximal projection of the current eigenvectors on the k-th dimension (i.e., ρ_k is weighted by the corresponding eigenvalues). Note that w(r_{n+1}^{k}) is a function of the residual error which should be calculated for each vector dimension, and the computation cost for w(r_{n+1}^{k}) can be neglected in the analytical solution.

Accordingly, we can update the observation v̄_{n+1} over each dimension by applying the function w(r_{n+1}^{k}):

$$\hat{v}_{n+1}^{k} = w\!\left(r_{n+1}^{k}\right)\, \bar{v}_{n+1}^{k}. \qquad (28)$$

That is, we treat v̂_{n+1} as the new observation during the procedure of incremental learning.

(ii) Step 2. With the fixed C_{n+1}, we then update the parameters of state variations A_{n+1}, B_{n+1}. We first estimate the latest state z_{n+1} based on the updated C_{n+1} as

$$z_{n+1} = C_{n+1}^{\top}\, \hat{v}_{n+1}. \qquad (29)$$
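The robust reweighting of Eqs. (25)–(28) and the state estimate of Eq. (29) can be sketched together as one routine. The Cauchy-type weight w(t) = 1/(1 + (t/ρ)²) is an assumption on our part, in the spirit of RIPCA-style methods, and all names are illustrative:

```python
import numpy as np

def reweight_and_estimate(C_new, C_old, lam_old, v_bar, beta=2.0):
    """Robustly re-weight the synthesized brick and estimate the new
    state: residual (Eq. (25)), per-dimension scales (Eq. (27)),
    reweighted observation (Eq. (28)), state estimate (Eq. (29))."""
    r = C_old @ (C_old.T @ v_bar) - v_bar                  # Eq. (25)
    rho = beta * np.max(lam_old * np.abs(C_old), axis=1)   # Eq. (27)
    rho = np.maximum(rho, 1e-8)          # guard against zero scales
    w = 1.0 / (1.0 + (r / rho) ** 2)     # assumed Cauchy-type weight
    v_hat = w * v_bar                    # Eq. (28)
    z_new = C_new.T @ v_hat              # Eq. (29)
    return v_hat, z_new
```

Dimensions that the current subspace reconstructs poorly (large residual relative to their scale ρ_k) are shrunk toward zero, which is what keeps a few outlier pixels from dragging the eigenvectors around.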

Fig. 3. An example demonstrating the robustness of model maintenance. In the scenario of dynamic water surfaces, we visualize the original and predicted intensities for a fixed position (denoted by the red star) with the blue and red curves, respectively. With our updating scheme, when the position is occluded by a foreground object from frame 551 to 632, the predicted intensities are not disturbed by the foreground, i.e., the model remains stable.

A n+1can be further calculated,by re-solving the linear problem of a?xed number of latest observed states,

A n+1[z n?l+1···z n]=[z n?l+2···z n+1],(30) where l indicates the number of latest observed states,i.e.the span of observations.And similarly,we update

B n+1by com-puting the new reconstruction error E=[z n?l+2···z n+1]?A n+1[z n?l+1···z n].

We present an empirical study in Fig. 3 to demonstrate the effectiveness of this updating method. The video used for background modeling includes dynamic water surfaces. Here we visualize the original and predicted intensities for a fixed position (denoted by the red star), with the blue and red curves, respectively. We can observe that the model remains stable against foreground occlusion.

3) Time Complexity Analysis: We mainly employ SVD and linear programming in the initial learning. The time complexity of SVD is O(n³) and that of linear programming is O(n²). For a certain location, the time complexity of initial learning is thus O(n³) + O(n²) = O(n³) for each subspace, where n denotes the number of video bricks used for model learning. As for online learning, incremental subspace learning and linear programming are utilized. Given a d-dimensional subspace, the time complexity for component updating (i.e., step 1 of the model maintenance) is O(dn²). Thus, the total time complexity for online learning is O(dn²) + O(l²), where l is the number of states used to solve the linear problem.

We summarize the algorithm sketch of our framework in Algorithm 1.

IV. EXPERIMENTS

In this section, we first introduce the datasets used in the experiments and the parameter settings, then present the experimental results and comparisons. Discussions of the system components are presented at the end.

Algorithm 1: The Sketch of the Proposed Algorithm

A. Datasets and Settings

We collect a number of challenging videos to validate our approach, which are publicly available or from real surveillance systems. Two of them (AirportHall and TrainStation) from the PETS database^1 include crowded pedestrians and moving cast shadows; five highly dynamic scenes^2 include a waving curtain, an active fountain, swaying trees, and water surfaces; the others contain extremely difficult cases such as heavy rain and sudden and gradual lighting changes. Most of the videos include thousands of frames, and some of the frames are manually annotated as the ground truth provided by the original databases.

Our algorithm has been adopted in a real video surveillance system and achieves satisfactory performance. The system is capable of processing 15~20 frames per second at a resolution of 352×288 pixels. The hardware is a desktop computer with an Intel i7 2600 (3.4 GHz) CPU and 8 GB RAM. All parameters are fixed in the experiments, including the contrast threshold for the CS-STLTP descriptor τ = 0.2, the dimension thresholds for the ARMA model T_d = 0.5 and T_d′ = 0.5, the span of observations for model updating l = 60, and the size of bricks 4×4×5. For foreground segmentation, the

1 Downloaded from the PETS database website.

2 Downloaded from the dataset publisher's website.

3198 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 23, NO. 7, JULY 2014

threshold of appearance residual is T_ω = 3 and the update threshold is T = 3 (T_ω = 5 and T = 4 for RGB). In the online model maintenance, the coefficient β = 2.3849, and the learning rate α = 0.05 for RIPCA.

In the experiments, we use the first 50 frames of each testing video to initialize our system (i.e., to perform the initial learning), and keep the model updated over the rest of the sequence. In addition, we apply a standard post-processing step to eliminate foreground areas containing fewer than 20 pixels. All other competing approaches are executed with the same settings as our approach.
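The small-region removal used in post-processing can be sketched with a simple flood fill over the binary foreground mask. A stdlib-only sketch follows; the function name and the choice of 4-connectivity are illustrative assumptions, not details from the paper.

```python
from collections import deque

def remove_small_regions(mask, min_size=20):
    """Zero out 4-connected foreground components smaller than min_size.

    mask : list of lists, 1 = foreground, 0 = background; a cleaned copy
    is returned and the input is left untouched.
    """
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if out[i][j] == 1 and not seen[i][j]:
                # Collect one connected component with BFS.
                comp, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and out[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                # Suppress components below the size threshold.
                if len(comp) < min_size:
                    for y, x in comp:
                        out[y][x] = 0
    return out
```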

We utilize the F-score as the benchmark metric, which measures the segmentation accuracy by considering both the recall and the precision. The F-score is defined as

F = 2TP / (2TP + FP + FN),   (31)

where TP denotes the true positives (correctly detected foreground pixels), FN the false negatives (foreground pixels mislabeled as background), and FP the false positives (background pixels mislabeled as foreground).
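The F-score of (31) can be computed directly from a pair of binary masks; a minimal sketch with illustrative names:

```python
def f_score(pred, gt):
    """F-score of Eq. (31) from flat binary masks (1 = foreground)."""
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)  # true positives
    fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)  # false positives
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)  # false negatives
    return 2.0 * tp / (2.0 * tp + fp + fn)
```

Since F = 2TP / (2TP + FP + FN) is the harmonic mean of precision TP/(TP+FP) and recall TP/(TP+FN), it penalizes both missed foreground and false alarms.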

B. Experimental Results

We compare the proposed method (STDM) with six state-of-the-art online background subtraction algorithms: the Gaussian Mixture Model (GMM) [1] as the baseline, the improved GMM [8],^3 the online auto-regression model [20], the non-parametric model with scale-invariant local patterns [14], the discriminative model using a generalized Struct 1-SVM [19],^4 and the Bayesian joint domain-range (JDR) model [11].^5 In the comparisons, we use the released codes for the methods [1], [8], [11], [19], and implement the methods [14], [20] ourselves. The F-scores (%) over all 10 videos are reported in Table I, where the last two columns report the results of our method using either RGB or CS-STLTP as the feature. Note that for the result using the RGB feature we represent each video brick by concatenating the RGB values of all its pixels. We also exhibit the results and comparisons using precision-recall (PR) curves, as shown in Fig. 4. Due to space limitations, we only show results on 5 videos. From the results, we can observe that the proposed method outperforms the other methods on most videos. For the scenes with highly dynamic backgrounds (e.g., the #2, #5 and #10 scenes), the improvements made by our method are more than 10%. The system also handles indistinctive foreground objects well (i.e., small objects or background-like objects in the #1 and #3 scenes). Moreover, we make significant improvements (i.e., 15%~25%) in scenes #6 and #7, which include both sudden and gradual lighting changes. A number of sampled results of background subtraction are exhibited in Fig. 5. The benefit of using the proposed CS-STLTP feature is clearly validated by the results shown in Table I and Fig. 5. In general, our approach simply using RGB values can achieve satisfying performance for common scenes, e.g., with mild appearance and motion changes, while the

3 Available from the authors' website.

4Available at http://www.cs.mun.ca/~gong/Pages/Research.html

5 Available from the author's homepage.

Fig. 4. Experimental results generated by our approach and competing methods on 5 videos: first row left, the scene including a dynamic curtain and indistinctive foreground objects (i.e., having similar appearance to the background); first row right, the scene with heavy rain; second row left, an indoor scene with sudden lighting changes; second row right, the scene with a dynamic water surface; third row, a busy airport. The precision-recall (PR) curve is introduced as the benchmark measurement for all 6 algorithms.

CS-STLTP operator can better handle highly dynamic variations (e.g., sudden illumination changes and rippling water). In addition, we also compare CS-STLTP with the existing scale-invariant descriptor SILTP proposed in [14]. We keep all settings in our approach except replacing the feature by SILTP, and achieve an average precision over all 10 videos of 69.70%. This result shows that CS-STLTP is very suitable and effective for the video brick representation.

C. Discussion

Furthermore, we conduct the following empirical studies to justify the parameter determination and settings of our approach.

a) Efficiency: Like other online-learning background models, there is a trade-off between model stability and maintenance efficiency. The corresponding parameter in our method is the learning rate α. We tune α in the range of 0~0.3 while fixing the other model parameters, and visualize the quantitative results of background subtraction, as shown in Fig. 6(a). From the results, we can observe that the model is insensitive to this parameter in the range 0~0.1. In practice, when the scene is extremely busy and crowded, α can be set to a relatively small value to keep the model stable.

b) Feature effectiveness: The contrast threshold τ is the only parameter in the CS-STLTP operator, and it affects the power of the feature to characterize spatio-temporal information within video bricks. From the empirical results of parameter tuning,


Fig. 5. Sampled results of background subtraction generated by our approach (using RGB or CS-STLTP as the feature and RIPCA as the update strategy) and other competing methods.

as shown in Fig. 6(b), we can observe that the appropriate range for τ is 0.15~0.25. In practice, the model could become sensitive to noise with a very small value of τ (say τ < 0.15), while a too large τ (say τ > 0.25) might reduce the accuracy of detecting foreground regions with homogeneous appearances.


TABLE I
QUANTITATIVE RESULTS AND COMPARISONS ON THE 10 COMPLEX VIDEOS USING THE F-SCORE (%) MEASUREMENT. THE LAST TWO COLUMNS REPORT THE RESULTS OF OUR METHOD USING EITHER RGB OR CS-STLTP AS THE FEATURE

Fig. 6. Discussion of parameter selection: (i) the learning rate α for model maintenance (in (a)) and (ii) the contrast threshold of the CS-STLTP feature τ (in (b)). In each figure, the horizontal axis represents the different parameter values; the three lines in different colors denote, respectively, the false positives (FP), the false negatives (FN), and the sum of FP and FN.

Fig. 7. Empirical study of the video brick size in our approach. We carry out the experiments on the 10 videos with different brick sizes while keeping the remaining settings fixed. The vertical axis represents the average precision of background subtraction and the horizontal axis represents the different sizes of video bricks with respect to background decomposition.

c) Size of video brick: One may be interested in how the system performance is affected by the size of the video bricks used for background decomposition, so we present an empirical study on different sizes of video bricks in Fig. 7. We observe that the best result is achieved with the brick size of 4×4×3, and the results with the sizes of 4×4×1 and 4×4×5 are also satisfactory. For very small bricks (e.g., 1×1×3), few spatio-temporal statistics are captured and the models may have problems handling scene variations. Bricks of large sizes (e.g., 8×8×5) carry too much information, and their subspaces cannot be effectively generated by the linear ARMA model. The experimental results are also consistent with our motivation in Section I. In practice, we can flexibly set the size according to the resolution of the surveillance videos.
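Decomposing a clip into non-overlapping video bricks (e.g., the 4×4×5 setting discussed above) amounts to a reshape of the frame stack. A NumPy sketch follows, assuming gray-level frames and illustrative names; it crops any remainder at the borders rather than padding.

```python
import numpy as np

def extract_bricks(frames, bh=4, bw=4, bt=5):
    """Split a (T, H, W) gray-level clip into non-overlapping video bricks.

    Returns an array of shape (T//bt, H//bh, W//bw, bt*bh*bw): one flattened
    brick (the observation vector v) per spatio-temporal location.
    """
    T, H, W = frames.shape
    T, H, W = T - T % bt, H - H % bh, W - W % bw  # crop to brick multiples
    f = frames[:T, :H, :W]
    f = f.reshape(T // bt, bt, H // bh, bh, W // bw, bw)
    f = f.transpose(0, 2, 4, 1, 3, 5)             # group the brick axes
    return f.reshape(T // bt, H // bh, W // bw, bt * bh * bw)
```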

d) Model initialization: Our method is not sensitive to the number of observed frames in the initial stage of subspace generation. We test different numbers, say 30, 40 and 60, on two typical surveillance scenes, i.e., the Airport Hall (scene #1) and the Train Station (scene #8). The F-score outputs show that the deviations with different numbers of initial frames are very small, e.g., less than 0.2. In general, we require the observed scenes to be relatively clean for initialization, although a few objects that move across are allowed.

V. CONCLUSION

This paper has studied an effective method for background subtraction, addressing the challenges in real surveillance scenarios. In the method, we learn and maintain dynamic texture models within spatio-temporal video patches (i.e., video bricks). Extensive experiments as well as empirical analysis are presented to validate the advantages of our method.

In the future,we plan to improve the method in two aspects.

(1) Efficient tracking algorithms can be incorporated into the framework to better distinguish the foreground objects.

(2) A GPU-based implementation can be developed to process each part of the scene in parallel, which would probably improve the system efficiency significantly.

REFERENCES

[1] C.Stauffer and W.Grimson,“Adaptive background mixture models for

real-time tracking,”in Proc.IEEE Conf.CVPR,Jun.1999.

[2]T.Bouwmans,F.E.Baf,and B.Vachon,“Background modeling using

mixture of Gaussians for foreground detection-a survey,”Recent Patents Comput.Sci.,vol.1,no.3,pp.219–237,2008.

[3]L.Maddalena and A.Petrosino,“A self-organizing approach to back-

ground subtraction for visual surveillance applications,”IEEE Trans.

Image Process.,vol.17,no.7,pp.1168–1177,Jul.2008.

[4] D.-M.Tsai and https://www.doczj.com/doc/6313246192.html,i,“Independent component analysis-based

background subtraction for indoor surveillance,”IEEE Trans.Image Process.,vol.18,no.1,pp.158–167,Jan.2009.

[5]H.Chang,H.Jeong,and J.Choi,“Active attentional sampling for speed-

up of background subtraction,”in Proc.IEEE Conf.CVPR,Jun.2012, pp.2088–2095.

[6]X.Liu,L.Lin,S.Yan,H.Jin,and W.Tao,“Integrating spatio-temporal

context with multiview representation for object recognition in visual surveillance,”IEEE Trans.Circuits Syst.Video Technol.,vol.21,no.4, pp.393–407,Apr.2011.

[7]L.Lin,Y.Lu,Y.Pan,and X.Chen,“Integrating graph partitioning

and matching for trajectory analysis in video surveillance,”IEEE Trans.

Image Process.,vol.21,no.12,pp.4844–4857,Apr.2012.

LIN et al.:COMPLEX BACKGROUND SUBTRACTION BY PURSUING DYNAMIC SPATIO-TEMPORAL MODELS3201

[8]Z.Zivkovic,“Improved adaptive Gaussian mixture model for back-

ground subtraction,”in Proc.17th IEEE ICPR,Aug.2004,pp.28–31.

[9] D.Lee,“Effective Gaussian mixture learning for video background

subtraction,”IEEE Trans.Pattern Anal.Mach.Intell.,vol.27,no.5, pp.827–832,May2005.

[10] A.Elgammal,R.Duraiswami,D.Harwood,and L.Davis,“Background

and foreground modeling using nonparametric kernel density estimation for visual surveillance,”Proc.IEEE,vol.90,no.7,pp.1151–1163, Jun.2002.

[11]Y.Sheikh and M.Shah,“Bayesian modeling of dynamic scenes for

object detection,”IEEE Trans.Pattern Anal.Mach.Intell.,vol.27, no.11,pp.1778–1792,Nov.2005.

[12] C.Benedek and T.Sziranyi,“Bayesian foreground and shadow detec-

tion in uncertain frame rate surveillance videos,”IEEE Trans.Image Process.,vol.17,no.4,pp.608–621,Apr.2008.

[13]O.Barnich and M.Droogenbroeck,“Vibe:A universal background

subtraction algorithm for video sequences,”IEEE Trans.Image Process., vol.20,no.6,pp.1709–1724,Jun.2011.

[14]S.Liao,G.Zhao,V.Kellokumpu,M.Pietikainen,and S.Li,“Modeling

pixel process with scale invariant local patterns for background subtrac-tion in complex scenes,”in Proc.IEEE Int.Conf.CVPR,Jun.2010, pp.1301–1306.

[15]J.Pilet,C.Strecha,and P.Fua,“Making background subtraction robust

to sudden illumination changes,”in Proc.ECCV,2008,pp.567–580.

[16] C.Guyon,T.Bouwmans,and E.-H.Zahzah,“Foreground detec-

tion via robust low rank matrix decomposition including spatio-temporal constraint,” in Comput. Vis. ACCV Workshops, 2013, pp. 315–320.

[17]M.Wu and X.Peng,“Spatio-temporal context for codebook-based

dynamic background subtraction,” AEU Int. J. Electron. Commun., vol. 64, no. 8, pp. 739–747, 2010.

[18]H.-H.Lin,T.-L.Liu,and J.-H.Chuang,“Learning a scene background

model via classi?cation,”IEEE Trans.Signal Process.,vol.57,no.5, pp.1641–1654,May2009.

[19]L.Cheng and M.Gong,“Realtime background subtraction from dynamic

scenes,”in Proc.12th IEEE ICCV,Sep./Oct.2009,pp.2066–2073. [20] A.Monnet, A.Mittal,N.Paragios,and V.Ramesh,“Background

modeling and subtraction of dynamic scenes,”in Proc.9th IEEE ICCV, Oct.2003,pp.1305–1312.

[21]J.Zhong and S.Sclaroff,“Segmenting foreground objects from a

dynamic textured background via a robust Kalman?lter,”in Proc.9th IEEE ICCV,Oct.2003,pp.44–50.

[22]Y.Wang,K.-F.Loe,and J.-K.Wu,“A dynamic conditional random?eld

model for foreground and shadow segmentation,”IEEE Trans.Pattern Anal.Mach.Intell.,vol.28,no.2,pp.279–289,Feb.2007.

[23]K.A.Patwardhan,G.Sapiro,and V.Morellas,“Robust foreground

detection in video using pixel layers,”IEEE Trans.Pattern Anal.Mach.

Intell.,vol.30,no.4,pp.746–751,Apr.2008.

[24]V.Mahadevan and N.Vasconcelos,“Spatiotemporal saliency in dynamic

scenes,”IEEE Trans.Pattern Anal.Mach.Intell.,vol.32,no.1, pp.171–177,Jan.2010.

[25]Y.Zhao,H.Gong,L.Lin,and Y.Jia,“Spatio-temporal patches for

night background modeling by subspace learning,”in Proc.IEEE ICPR, Dec.2008,pp.1–4.

[26]X.Liang,L.Lin,and L.Cao,“Learning latent spatio-temporal compo-

sitional model for human action recognition,”in Proc.ACM Int.Conf.

Multimedia,Oct.2013,pp.263–272.

[27]X.Liu,L.Lin,and H.Jin,“Contextualized trajectory parsing with

spatio-temporal graph,”IEEE Trans.Pattern Anal.Mach.Intell.,vol.35, no.12,pp.3010–3024,Dec.2013.

[28]L.Wixson,“Detecting salient motion by accumulating directionally-

consistent?ow,”IEEE Trans.Pattern Anal.Mach.Intell.,vol.22,no.8, pp.774–780,Aug.2000.

[29] D.Gutchess,M.Trajkovics,E.Cohen-Solal,D.Lyons,and A.K.Jain,

“A background model initialization algorithm for video surveillance,”in Proc.IEEE ICCV,Jul.2001,pp.733–740.

[30]W.Kim,C.Jung,and C.Kim,“Spatiotemporal saliency detection and

its applications in static and dynamic scenes,”IEEE Trans.Pattern Anal.

Mach.Intell.,vol.21,no.4,pp.446–456,Apr.2011.

[31]L.Lin,X.Liu,and S.-C.Zhu,“Layered graph matching with composite

cluster sampling,”IEEE Trans.Pattern Anal.Mach.Intell.,vol.32,no.8, pp.1426–1442,Aug.2010.

[32]T.Ojala,M.Pietikainen,and T.Maenpaa,“Multiresolution gray-scale

and rotation invariant texture classi?cation with local binary patterns,”

IEEE Trans.Pattern Anal.Mach.Intell.,vol.24,no.7,pp.971–987, Jul.2002.[33]X.Tan and B.Triggs,“Enhanced local texture feature sets for face

recognition under dif?cult lighting conditions,”IEEE Trans.Image Process.,vol.19,no.6,pp.168–182,Oct.2007.

[34]M.Heikkila,M.Pietikainen,and C.Schmid,“Description of interest

regions with local binary patterns,”Pattern Recognit.,vol.42,no.3, pp.425–436,Mar.2009.

[35]J.Yao and J.Odobez,“Multi-layer background subtraction based on

color and texture,”in Proc.IEEE CVPR,Jun.2007,pp.1–8.

[36]Z.Guo,L.Zhang,and D.Zhang,“Rotation invariant texture classi-

?cation using LBP variance(LBPV)with global matching,”Pattern

Recognit.,vol.43,no.3,pp.706–719,Mar.2010.

[37]L.Lin,P.Luo,X.Chen,and K.Zeng,“Representing and recognizing

objects with massive local image patches,”Pattern Recognit.,vol.45, no.1,pp.231–240,Jan.2012.

[38] E.Hannan and M.Deistler,Statistical Theory Of Linear Systems

(Probability and Mathematical Statistics).New York,NY,USA:Wiley, 1988.

[39]L.Lin,T.Wu,J.Porway,and Z.Xu,“A stochastic graph grammar for

compositional object representation and recognition,”Pattern Recognit., vol.42,no.7,pp.1297–1307,Jul.2009.

[40] S. Soatto, G. Doretto, and Y. Wu, “Dynamic textures,” Int. J. Comput. Vis., vol. 52, no. 2, pp. 91–109, 2003.

[41]P.Saisan,G.Doretto,Y.Wu,and S.Soatto,“Dynamic texture recogni-

tion,”in Proc.IEEE CVPR,Jun.2001,pp.58–63.

[42]G.Doretto,D.Cremers,P.Favaro,and S.Soatto,“Dynamic texture

segmentation,”in Proc.9th IEEE ICCV,Oct.2003,pp.1236–1242. [43] D.Skocaj and A.Leonardis,“Weighted and robust incremental

method for subspace learning,”in Proc.9th IEEE ICCV,Oct.2003,

pp.1494–1501.

[44]M.Artac,M.Jogan,and A.Leonardis,“Incremental PCA for on-line

visual learning and recognition,”in Proc.16th ICPR,2002,pp.781–784.

[45] A.Levy and M.Lindenbaum,“Sequential karhunen-loeve basis extrac-

tion and its application to images,”IEEE Trans.Image Process.,vol.9, no.8,pp.1371–1374,Aug.2000.

[46]L.Wang,L.Wang,M.Wen,Q.Zhuo,and W.Wang,“Background

subtraction using incremental subspace learning,”in Proc.IEEE ICIP, Sep./Oct.2007,pp.V-45–V-48.

[47] E.Candès,X.Li,Y.Ma,and J.Wright,“Robust principal component

analysis?”J.ACM,vol.58,no.3,pp.1–37,2011.

[48]X.Ding,L.He,and L.Carin,“Bayesian robust principal component

analysis,”IEEE Trans.Image Process.,vol.20,no.12,pp.3419–3430, Dec.2011.

[49] D.Farcas,C.Marghes,and T.Bouwmans,“Background subtraction

via incremental maximum margin criterion:A discriminative subspace approach,”Mach.Vis.Appl.,vol.23,no.6,pp.1083–1101,Nov.2012.

[50] Y. Li, “On incremental and robust subspace learning,” Pattern Recognit., vol. 37, no. 7, pp. 1509–1518, 2004.

Liang Lin is a Full Professor with the School of Advanced Computing, Sun Yat-sen University, Guangzhou, China. He received the B.S. and Ph.D. degrees from the Beijing Institute of Technology, Beijing, China, in 1999 and 2008, respectively, and the Ph.D. degree from the Department of Statistics, University of California at Los Angeles (UCLA), Los Angeles, CA, USA, in 2007. His Ph.D. dissertation received the China National Excellent Ph.D. Thesis Award Nomination in 2010. He was a Post-Doctoral Research Fellow with the Center for Vision, Cognition, Learning, and Art at UCLA. His research focuses on new models, algorithms, and systems for intelligent processing and understanding of visual data such as images and videos. He has authored more than 50 papers in top-tier academic journals and conferences, including the Proceedings of the IEEE, the IEEE Transactions on Pattern Analysis and Machine Intelligence, the IEEE Transactions on Image Processing, the IEEE Transactions on Circuits and Systems for Video Technology, the IEEE Transactions on Multimedia, Pattern Recognition, the Computer Vision and Pattern Recognition Conference, the International Conference on Computer Vision, the European Conference on Computer Vision, the ACM Multimedia Conference, and the Conference on Neural Information Processing Systems. He was supported by several promotive programs or funds for his work, such as the Program for New Century Excellent Talents of the Ministry of Education, China, in 2012, the Program of Guangzhou Zhujiang Star of Science and Technology in 2012, and the Guangdong Natural Science Funds for Distinguished Young Scholars in 2013. He was a recipient of the Best Paper Runner-Up Award at ACM NPAR 2010 and the Google Faculty Award in 2012.


Yuanlu Xu received the master's degree from the School of Information Science and Technology, Sun Yat-sen University, Guangzhou, China. He is currently pursuing the Ph.D. degree at the University of California at Los Angeles, Los Angeles, CA, USA. His current advisor is Prof. L. Lin, and they have cooperated on publishing a couple of papers on computer vision. He received the B.E. (Hons.) degree from the School of Software, Sun Yat-sen University. His research interests are in video surveillance, image matching, and statistical modeling and inference.

Xiaodan Liang received the B.B.A. degree from the School of Software, Sun Yat-sen University, Guangzhou, China, in 2010, where she is currently pursuing the Ph.D. degree with the School of Information Science and Technology. She has authored several research papers in top-tier academic conferences and journals. Her research focuses on structured vision models and multimedia understanding.

Jianhuang Lai received the M.Sc. degree in applied mathematics and the Ph.D. degree in mathematics from Sun Yat-sen University, Guangzhou, China, in 1989 and 1999, respectively. He joined Sun Yat-sen University in 1989 as an Assistant Professor, where he is currently a Professor with the Department of Automation, School of Information Science and Technology, and the Dean of the School of Information Science and Technology. His current research interests are in the areas of digital image processing, pattern recognition, multimedia communication, wavelets, and their applications. He has authored over 100 scientific papers in international journals and conferences on image processing and pattern recognition, e.g., the IEEE Transactions on Pattern Analysis and Machine Intelligence, the IEEE Transactions on Neural Networks, the IEEE Transactions on Image Processing, the IEEE Transactions on Systems, Man, and Cybernetics, Part B, Pattern Recognition, the International Conference on Computer Vision, the Computer Vision and Pattern Recognition Conference, and the International Conference on Data Mining. He serves as a Standing Member of the Image and Graphics Association of China and as a Standing Director of the Image and Graphics Association of Guangdong.


为什么说成熟的男人对女人会更有吸引力

个人收集整理-ZQ 越来越多地女生都是“大叔控”,而夫妻之间相差地年龄也是越来越大,为什么呢?因为在女生看来,同龄地男生大多都还不懂事,容易冲动,心胸不够宽广,而且也没有担当,遇事喜欢逃避问题,这样地男生,是无法给予女生安全感地,更何况在生理地发展来说,女生比同龄地男生早熟地这个事了.所以“大叔控”似乎是个趋势,毕竟成熟地男人对于女生地吸引力和安全感,是无法抗拒地. 情感专家说过:“对感情认真负责地男人,能为女人带来幸福地生活,能让女人感到踏实和有安全感.”那么说,让女人有安全感地男人,一个成熟地男人,该是怎么样地呢?资料个人收集整理,勿做商业用途 一、外表干净整洁 有人说,外表干净整洁地,一丝不苟地并不一定是成熟地男人啊,但是外表邋里邋遢地,走颓废风格地男人,一定是不成熟地,这种男人是想要通过这种方式来获得更多人地关注,经常是以自我为中心地.而成熟地男人一定是注重外表,看起来给人地感觉是干净地,清清爽爽地,即便衣服不够华贵,但是懂得在什么场合穿什么衣服,让自己看起来精气神十足,容光焕发.从这些外表能看出来,成熟地男人是热爱生活,积极自信地,给人十足地正能量,给人依赖感,安全感.资料个人收集整理,勿做商业用途 二、心胸宽广,不爱计较 为小事斤斤计较地,凡事喋喋不休地,都是小男人所为,这种男人都是不会懂得如何体谅和包容别人,不懂得如何在别人地角度看问题地,这正是男人地不成熟地表现.而成熟地男人,他豁达大度,不为小事斤斤计较,心胸开阔,懂得如何包容和体谅别人;懂得为别人考虑,在别人地角度看问题,会尊重异性.这样地男人,女人跟他在一起,会感到放松,感到幸福,会让家庭地气氛和谐稳定.资料个人收集整理,勿做商业用途 三、有担当,为人认真负责 一个有担当,为人认真负责地男人,一定是事业有成地人.正所谓经济基础决定上层建筑,有事业有经济能力地男人,才能让自己地女人有更好地生活,不会因为经济问题让家庭不安稳.而且有事业地男人,也是一个懂得如何体谅妻子,懂得维系家庭关系地男人,他会为你分享生活上和工作上地事,他待人接物认真负责,这样地男人是最让女人有安全感地,能让女人可以全身心地放松去依赖他,放心由他带领.资料个人收集整理,勿做商业用途 做一个成熟地男人看起来很简单,但做起来一点都不简单,要有事业,能撑起养家地责任,懂得包容和体谅女人,注重外表形象这些,需要你地努力和开阔心胸.在恋爱和婚姻中,能够带领女人地,都是成熟地男人,你地成熟能让她把全身心都安然地交付于你,能把你们之间地矛盾减少,幸福感增加.资料个人收集整理,勿做商业用途 1 / 1

中国男女婚恋报告

龙源期刊网 https://www.doczj.com/doc/6313246192.html, 中国男女婚恋报告 作者:耿旭静 来源:《37°女人》2013年第03期 2012年12月24日,国家人口和计划生育委员会与某大型婚恋交友运营平台联合发布《2012年~2013年中国男女婚恋观调研报告》,让我们来看看目前中国的婚恋情况吧。 非婚人口多 《2012年~2013年中国男女婚恋观调研报告》有9.8724万人次参与,有效样本7.7045万份。调研结果表明,中国非婚人口数量巨大,18岁以上非婚人口达到2.49亿,占全部人口的18.6%;城镇18岁以上非婚人口1.32亿,占城镇总人口的19.8%。 性别比失衡 男女性别比失衡严重,高达26.7:24.9。其中70后、80后、90后非婚人口性别比失衡严重。全国处于适婚年龄段的70后、80后、90后人口中存在男女比例不平衡的问题,并且年龄越大比例失衡越严重。 30+男性:择偶缺口高达613.9万 30岁~39岁的男性中,有1195.9万人处于非婚状态,而同年龄段女性中只有582万人处于非婚状态,男性在同年龄段择偶面临613.9万的缺口。这种情况必然造成30+男性更倾向寻找低年龄的女性为伴侣,同时会使80后、90后男性的择偶压力继续增大。 婚龄推迟 晚婚成了社会潮流,20岁~29岁人群结婚年龄的变迁显示,中国人目前的平均结婚年龄比10年前推迟了1.4岁(男)和1.5岁(女)。 第五次人口普查数据显示:中国男性的平均结婚年龄为25.3岁,女性平均结婚年龄为23.4岁;第六次人口普查数据显示:中国男性的平均结婚年龄为26.7岁,女性平均结婚年龄 为24.9岁。 韩剧让“大叔”大热 有“大叔控”情结的女性越来越多。调研显示:18岁~25岁之间的女性,70%有“大叔控”情结。这些女性喜欢比自己大10岁左右的男性,其中有64%的“大叔控”希望与大叔恋爱,17%

他是龙观后感

一切都听你的,只是,请别离开我 在微博上被安利了多次电影《他是龙》。简介里“绝世唯美”、“逆天颜值”、“大叔控萝莉”、“人兽恋”等信息也当真激发起了好奇。当看完这部集众多个人喜好元素的电影后,本想听听歌平复一下心情,播放器里却正在单曲循环《不存在的恋人》,突然觉得甚是应景。 她是萝莉,他是龙 电影讲述的故事很简单:一个战斗民族的小萝莉在新婚当天被恶龙俘虏,在斗智斗勇中逐渐唤醒恶龙内心的人性,并爱上了恶龙幻化成的阿尔曼,最后放弃自己公主身份和即将成婚的未婚夫,与阿尔曼在孤岛上厮守。唯美至极的画面,配上男女主角逆天的颜值,造就了这部电影的成功。 一切都听你的,只是,请别离开我 简单的故事,略显苍白的剧情的确也是电影存在的问题,然而作为一名大龄单身女青年,面对这种胸肌腹肌马甲线,高能撩妹大暖男全程不穿衣服的表演,一切问题便都不再成为问题。其实,之所如此推崇这部电影除了男主角的颜值与身材外,更主要的原因是其所宣扬的听起来俗套但在现如今却分外难得的“爱能创造奇迹”的主题。 前几天收到黄老板微信发来的小晗姐根据与付彪彪和我聊天有感而发的《写给嗷嗷待嫁的姑娘们》,并语重心长、苦口婆心地帮我分析我性格中存在的种种问题。听完后一边感动一边感概于黄老板的观察分析总结能力,虽然只有一面之缘,却字字珠玑、针针见血地点出我自己花费二十多年单身时光才认识到的缺憾。黄老板说我单身主要是因为性格。的确,慢热又矫情的死宅性格总是提醒自己不要随意接近别人,更是随时提防他人的接近。 小晗姐在文章当中说,我们这些口口声声说着恨嫁的姑娘心里真正追求的是一场爱情而不是一场婚姻。多年来受大量文学作品和电影的荼毒,“爱情”已经成为自己为自己营造的最佳借口,总是以“没对眼”、“不合适”掩饰着自己的矫情,难怪有人说姑娘家读太多书、看太多电影未必是什么好事。然而,我并不觉得这一切除了要承受比别人多的单身时间外,还有其他什么不好的。等待爱情并没有错,正像电影中被女主角嘲笑嫁不出去的姐姐,却最终让她明白:与其接受没有爱情的婚姻,还不如孤独终老。每个人都是一个半圈,两个人才会组合成一个圆。如果两个半圆无法严丝合缝的拼接,任何的工作、房子、车子都无法使其弥合。 说到爱脑海里立即想到的是电影《编舟计》中的场景。字典编辑马缔光也始终不知道如何解释“恋”,苦苦求之而不得,直到遇见了房东的孙女林香具矢,突然一改往常超然物外的状态,变得惶惶不可终日,最终为“恋”做出以下定义:喜欢一个人,寤寐求之,辗转反侧,除此之外,万事皆空之态;两情相悦,何须羡仙。有时候,爱就是这样不知不觉或是心甘情愿的改变。如果真的邂逅爱情,我想即使再高冷再矫情也会欣然说出一句:一切都听你的,只是,请别离开我。

这不是巧合:50个你无法逃避的人生定律!

这不是巧合:50个你无法逃避的人生定律! 1、25岁定律 硕士毕业是25岁,自己还没有一点社会经验的时候,很容易就蹉跎两年,然后可能想到应该结婚,然后会发现为什么就显得有点晚了呢? 2、三年定律 如果男朋友处了三年,觉得应该结婚了,但是又觉得他总有不合适不满意的地方。重新认识、重新熟悉、重新了解一个人多难啊。有的最终就结了婚,有的就淡了散了。再就很难说了。 3、最重的谷穗在前面定律 总相信还有更好的,总觉得自己不会这么惨落在一个普通男人手里,就这么一路走下去了。那些扔掉的谷穗也被人捡走了。 4、子非鱼定律 子非鱼,安知鱼之类。子非我,安知我不知鱼之乐。单身女人的生活究竟如何,如果你问她,她可能一会儿说好,一会儿说不好。至于快不快乐,可能只有她自己知道了。 5、男人恐惧定律 不是恐惧男人,谁怕男人啊。而是对男人进入自己生活的恐惧。 6、后援团越多越耽误事定律 如果你连自己都不知道自己想要什么,别人又怎么可能知道。要倾诉或者赚同情可以,不过别把他们的话当做恋爱指南。

7、矛盾定律 随便找个人结婚算了,一想到应该结婚的时候就会这么想;我怎么可能和这么一个人过一辈子啊,一想到某个具体的男人就会索然无味。 8、半边天定律 女人在经济上独立早已经不成问题,但不足以让女人完全去撑半边天;只有在女人可以独立地享受生活中的种种乐趣时,女人才会觉得结婚不重要。 9、博士定律 如果不知不觉走上了读博之路,那么在博士毕业之前完成恋爱结婚也许比较明智。 10、茫然第一定律 她们好像有坚持的主张和生活态度,有足够独特的个性,还常会被人贴上各种标签,但内心 里她们还是不确定自己究竟在忙什么,在为什么生活,似乎可以很具体,但又经不起推敲。她们究竟要什么? 11、茫然第二定律 年纪越大,单身女人越不知道自己想要什么样的生活。 12、窝边草定律 吃窝边草的风险是其中一方的职业面临危机,但除了窝边草,还能到哪里去找吃的呢? 13、残酷第一定律 女人年纪往上涨被称作“老”;男人年纪往上涨被称作“成熟”。

女孩喜欢大叔的原因

大叔沉稳 虽然一个人的心智成熟与否可年龄没有着绝对的关系,可是毕竟成年人有着年轻人们所没有过的阅历,经历了太多风雨的他们早已经不是曾经那初出茅庐的青涩少年,纵然是身处世界灭亡的前一刻也依然看不出丝毫的慌乱,北京豆豆博士祛痘专家提醒很多时候男人对女人来说更是一份依靠,相信没几个女人希望看到自己身边的男人遇见危险的时候早已经没有了影子。 情伤难愈,寻找安全感 人在脆弱的时候会失去判断力。陷入与同龄异性之间发生的失恋或者情伤事件的女孩子,会恐惧再去爱上同龄人,她们会认为“大叔”对待感情更沉稳,更懂得怜香惜玉,也更有安全感。 大叔拥有财富 这可能是让人最为不屑的理由,也是最让人反感的理由,不过生活就是这样,现实就是如此。和那些还在上学或是刚刚离开校园的年轻人相比,大叔们在阅历,感情之外

最大的有事便是已经积累了客观的社会财富,这一点对于现代的女孩来说可是相当之诱人,要知道人生苦短,及时行乐已经成为了时下许多青年人的共识。 追求完美,崇尚“英雄”? “大叔”可能在经济能力、性能力、生存能力等各方面都会是女生眼中的“英雄式人物”他几乎无所不能,无所不知,更有责任感,能给女性更加完整的保护。从这方面来看,除非是含着金汤匙长大,否则在财力、能力和社会影响力上,年轻人确实无法和“大叔”PK。 父女情深,过度崇拜? 精神分析学派的创始人弗洛伊德,曾定义了一种“厄特克拉特情结”,又叫“恋父情结”。这可能是“大叔控”的根源之一。 大叔成熟 许多人提到大叔都会充满不屑,老头子能有什么,除了多几个银子哪方面比的我们这些青年一辈。是人都会有着老

去的一天,除掉极个别的备受神明宠爱或者有着离奇宿命的人可以青春永驻,长生不死外,任谁也逃离不了岁月的玩弄。 如果说少年郎们有的是可以尽情挥霍的的青春,那么大叔们有的则是无数光阴下的历练出的魅力,成熟男人的气息可不是青涩的男孩们可以相提并论,这种只是唯有光阴才能酝酿出气质只能意会不能言传。 5 受影视剧爱情模式(尤其韩剧)影响? 偶像剧中的“大叔”向来颇为吸引,比如韩剧《对不起,我爱你》、《巴黎恋人》等的男主角,其特点都是稳重、体贴、多金、痴情……作为偶像剧最大受众的女性观众尤其青年女性观众,容易被夸张的剧情激发“完美爱情之梦”,成为“大叔控”。 爱大叔的心理准备

情感专家教你:学习如何做一个体贴男人

情感专家教你: 情感专家教你: 2015/04/15 每个人都会被优秀的人所吸引,因为一个优秀的人身上必定会有出众的优点。而搞清楚女人心中优秀的男人是什么,并向着这个方向努力,才能有机会虏获女人的心。那么该如何成为一个女人喜爱的男人呢?一起来看本册内容。

目录 情感专家教你:学习如何做一个体贴男人 ·如何成为女人喜爱的男人 ·体贴的男人要做到哪几点 ·幽默是男人征服女人的魔力 ·成熟的男人,对女人最有吸引力 ·如何塑造出成熟男人的魅力 ·如何才能成功吸引心仪的女生

·如何成为女人喜爱的男人 每个人都会被优秀的人所吸引,因为一个优秀的人身上必定会有出众的优点。而搞清楚女人心中优秀的男人是什么,并向着这个方向努力,才能有机会虏获女人的心。那么该如何成为一个女人喜爱的男人呢?以下提几点方向作为建议: 1.塑造负责任的形象 男人天生就具备更刚强有力的形象,这是生理所决定的。女人天生就比男人弱势,因此绝大部分女人都需要男人的保护。这种保护不仅是身体上的保护,在心理上男人也要给予女人充分的安全感。假如一个男人在女人遇到重大决定的时候能给出有效的建议,晚上约会总会坚持亲自送女人回家,遇到问题总是第一时间反省并承认自己的错误,那这个男人必定会得到女人的信任和依赖。一个男人负责任的态度会为女人带来安全感,且这种男人往往会显得更大度,更有能力,更受女人青睐。 2.提高自身的经济能力 女人在选择男人(尤其是长期选择的对象)的时候也会看中对方的经济能力。无论你身上还有什么其他的才能,但女人一旦选择与你确立关系,都是要落到每天的生活上。没有经济实力就无法保证生活的质量,生活质量差往往会降低幸福感。因此,很实际的一个方法就是提升你的经济实力吧。在工作上更出色地表现,争取更高的职位和薪酬;学会理财和投资,增加额外收入;学习提升自身的技能,增加自身附加价值。假如现阶段的你还无法将你的经济实力立马提升起来,起码你也要表现出你赚钱的潜力。那么当女人感受到你是潜力股的时候也会在心里为你加分。 3.适当引导对方增加投入量 女人对一个男人爱的程度很大部分取决于她对这段感情的投入量。情感大师康纳曾提到:

动漫

TV版:就是在电视上放的动画版本 OVA:Original Video Anime(原创影象动画),一般能够作为OVA的作品一定是在首次推出时是未曾在电视或戏院上映过的,如果在电视或戏院上映过的作品再推出的录影带(或LD/VCD)等等就不能称作OVA了。 剧场版:动画的电影版本。 动漫体裁: SF=SCIENCE FICTION科幻类的作品,如EVA,高达,凉宫春日的忧郁 动漫作品的缩写: 动漫发烧友之间常常用缩写代表自己熟悉的作品,缩写通常是能理解的,但也有些对于新人来说不是很熟悉,在此略举一二,以后逐渐补充。 FF:大家最熟悉的大概就是FF(FINAL FANTASY)系列了,FF系列本来是SQUARE公司的一个著名游戏,因为非常受欢迎所以有很多周边,比如游戏动画,OVA,电影等。但是 最近使用FF的缩写则需要辨别一下了,因为《黑客帝国》系列也出品了一部动画短片名叫FINAL FLIGHT OF THE OSIRIS(欧西里司最后的飞翔),缩写同样是FF。该片导演和《FINAL FANTASY》的电影版是同一个人(安迪*琼斯),怪不得连名字都一样了。 M0=MACROSS ZERO(ZERO是零的意思,所以用“0”表示) ROD系列:目前出品的有两个作品,一个是READ OR DIE(OVA),中文名为“死亡的思考”;另一个是正在制作放映中的READ OR DREAM(TV),目前国内还没有D版。它们的缩写均为ROD [编辑本段]☆制作动画片人员解释: 监督:导演 原作:原漫画或小说的作者 脚本:依据原作进行创作剧本人员 CAST:声优,配音演员 STAFF:参与制作动画的人员 制作:指负责制作该动画的公司或部门 (在日本,要制作一部动画通常是要数个部门或公司共同合作完成的,分工明确,流水线操作。一部动画的制作水准往往会受到制作单位的影响。所以在一些情况下,知道制作某部动画的公司或部门就知道该动画的水准。) [编辑本段]部分日本动漫术语: BL: 原英文为Boy's Love,特指男同性恋,又称耽美. GL:原英文为Girl's Love,特指女同性恋,又称百合,蕾丝边。(如《神无月的巫女》) BG:原英文为Boy and Girl,特指男女之间的配对。 CP: 指的是配对。 SM:sadomasochism的简写,统指与施虐、受虐相关的性意识与行为,多见于H动画或游戏中(日本称为鬼畜)。 残念:日语音译,遗憾的意思,引申词语有“碎碎念” 。 幼齿:年龄在8岁以下的小女孩。

2018-2018年中国男女婚恋观调研报告

2018-2018年中国男女婚恋观调研报告 近日,国家人口计生委培训交流中心与世纪佳缘交友网联合发布的《2018-2018年中国男女婚恋观调研报告》显示,18岁-25岁女性有70%是“大叔控”,其中“气质大叔”、“事业型大叔”、“细腻体贴大叔”是“大叔控”们的最爱。所谓“大叔”,通常指30岁-50岁的中年男士,倾向于选择中年男士作为配偶的青年女性被称作“大叔控”。 近日,中国青年报社会调查中心通过民意中国网和搜狐网,对2371人进行的一项调查显示,40.1%的受访者直言自己身边“大叔控”多。44.3%的受访者感觉很多女性抱有“要现货”的心理,不愿与伴侣共同奋斗。 受访者中,70后占41.4%,80后占24.4%,90后占4.8%。 67.2%的人认为中年男士受青睐是因为大多已具备一定物质基础 1990年出生的张兰(化名)来北京打工一年多了。前不久,她在换房时结识了房东的朋友——一个70后的“大叔”。初次见面,“大叔”就开车帮她搬来了所有行李。之后两人经常一起吃饭聊天儿。“我经常向他倾诉生活和工作中的各种不快,他不但不会不耐烦,还会给出很多实用建议。作为一个独自在北京打拼的女孩子,‘大叔’的成熟体贴让我感到很舒心。” “女朋友常指责我自私、不考虑她的感受,夸赞‘大叔’有魅力。”山东小伙李守伟说,他女朋友是日漫迷、韩剧迷,最喜欢收集动漫和韩剧里的美型“大叔”图片,还把“大叔”定为他以后的发展方向。“我感觉这种心态太幼稚了。” 为什么一些年轻女性更青睐中年男士?调查中,67.2%的

人认为是中年男士大多已具备一定物质基础;55.5%的人认为是中年男士更稳重,有魅力;40.5%的人表示是中年男士更能体贴女性;39.3%的人觉得是作为独生子女的年轻人更愿意被照顾,而不懂得照顾别人。 世纪佳缘婚恋专家张佳芮告诉记者,“大叔控”大多集中于年轻女性。初入社会的年轻女性缺少社会经验,而这正是成熟中年男性所具有的,他们能给予女性安全感。此外,现在不少家庭是“421”模式,四个老人、两个大人围绕一个孩子转,很可能导致独生子女长大后缺乏耐性和包容力,在恋爱和婚姻中表现得以自我为中心。 “时代不同,适婚年龄人群择偶需求也不同。”张佳芮说,70年代生人崇尚与知识分子结合,80年代生人更注重对方物质基础,90年代生人择偶不但务实,而且更注重自我感受。所以不管从物质还是感情出发,不少女性都愿意选择已具备一定条件的男士。 稳定的感情需要男女双方耐心磨合,一同成长 调查显示,52.6%的受访者对年轻女性当“大叔控”持认同态度,17.0%的人持反对态度,30.4%的人表示不好说。 张兰认为,年轻女性与“大叔”结合是恋爱自由,没什么不可以。“单方面要求女性理解自己伴侣,等待其成长,也是不公平的。” “不少年轻女性有攀比心理,看到同龄人因为嫁得好,享受了更好的物质条件,就可能在感情中动摇。”李守伟认为,认同年轻女性当“大叔控”,是认同了男女双方只考虑自己的做法,抹杀了爱情相伴相守的美好,可能导致更多女性把恋爱和婚
