RESEARCH ARTICLE

COMPETING FOR ATTENTION: AN EMPIRICAL STUDY OF ONLINE REVIEWERS' STRATEGIC BEHAVIOR¹

Wenqi Shen

Virginia Polytechnic Institute and State University, Blacksburg, VA 24061 U.S.A.

Yu Jeffrey Hu

Scheller College of Business, Georgia Institute of Technology, Atlanta, GA 30308 U.S.A.

Jackie Rees Ulmer

Krannert School of Management, Purdue University, West Lafayette, IN 47907 U.S.A.

Top online reviewers who reliably gain consumers’ attention stand to make significant financial gains and monetize the amount of attention and reputation they have earned. This study explores how online reviewers strategically choose the right product to review and the right rating to post so that they can gain attention and enhance reputation. Using book reviews from Amazon and Barnes & Noble (BN), we find that reviewers on Amazon, where a reviewer ranking system quantifies reviewers’ online reputations, are sensitive to the competition among existing reviews and thus tend to avoid crowded review segments. However, on BN, which does not include such a ranking mechanism, reviewers do not respond to the competition effect. In addition, reviewers on Amazon post more differentiated ratings compared with reviewers on BN since the competition for attention on Amazon is more intense than on BN. Overall, reviewers on Amazon behave more strategically than reviewers on BN. This study yields important managerial implications for companies to improve their design of online review systems and enhance their understanding of reviewers’ strategic behaviors.

Keywords: Online attention, scarcity of attention, competing for attention, online product reviews, user-generated content

“Attention is the hard currency of cyberspace.”

–Thomas Mandel and Gerard Van der Leun (1996)

Introduction

Consumers increasingly rely on online opinions and experiences shared by fellow consumers when deciding what products to purchase. Deloitte (2007), for instance, finds that almost two thirds (62%) of consumers read consumer-written online product reviews; among consumers who read online reviews, 82 percent say their purchase decisions have been directly influenced by online reviews. Consumers also increasingly expect to find product reviews on retailer websites or manufacturer websites.² Therefore, it is very important for companies developing or maintaining online review systems to understand the incentives for online reviewers to write product reviews and the effect of those incentives on the content of product reviews.

¹ Rajiv Kohli was the accepting senior editor for this paper. Paul Pavlou served as the associate editor.

² Consumers expect to find product reviews on shopping websites (72%), while around half (47%) seek them out on company websites and 43% on price comparison sites, according to a study by Lightspeed Research (Leggatt 2011).

MIS Quarterly Vol. 39 No. 3, pp. 683-696/September 2015


There is a large body of literature on online product reviews, but most of it has overlooked the question of how online reviewers are incentivized to write reviews. Instead, the literature has focused on the numerical aspects of reviews, such as volume, valence, or variance, and on the impact of reviews on consumers' purchases (e.g., Basuroy et al. 2003; Dellarocas et al. 2004; Liu 2006). Several recent studies investigate the development and evolution of review ratings rather than their influence on sales (e.g., Godes and Silva 2012; Li and Hitt 2008; Moe and Trusov 2011). Other studies investigate various factors that could influence online reviewers' behaviors, such as online reviewers' characteristics (Forman et al. 2008), product types (Mudambi and Schuff 2010), previous review ratings (Moe and Trusov 2011), and product prices (Li and Hitt 2010). The literature seems to assume that online reviewers are motivated to write product reviews simply because of altruism, product involvement, and self-enhancement, which are the major motives for offline consumers to provide word-of-mouth (e.g., Dichter 1966). Less attention has been paid to the fact that reviewers, by writing product reviews, can receive benefits such as online reputation and attention from other consumers. Our study is part of this emerging research stream, empirically investigating how online reputation and attention from other consumers affect reviewers' contributions to online review systems.

Users' voluntary contributions have been studied in other online contexts such as open source software (Lerner and Tirole 2002), firm-hosted online forums (Jeppesen and Frederiksen 2006), and blogs (Faw 2012). Researchers have found that, in the absence of direct monetary incentives, rewards such as peer recognition and online reputation are important drivers of voluntary contribution (e.g., Lerner and Tirole 2002). Roberts et al. (2006) explore the intrinsic and extrinsic motivations of voluntary developers in the open source software community and report that multiple types of motivation shape both the decision to participate in the community and the extent of participation. In the context of firm-hosted online forums, firm recognition of users' contributions is also reported as valuable to the users (Jeppesen and Frederiksen 2006). Positive reputation and peer recognition can motivate online users to continue contributing voluntarily (Pavlou and Gefen 2004; Resnick et al. 2000).

For online reviewers, gaining online reputation and attention from other consumers is an important motivation for contributing to review systems.³ For example, Peter Durward Harris, a former top 10 and hall of fame Amazon reviewer, mentioned that reviewers tend to use the helpfulness vote as a proxy for the attention gained by their reviews, and reported that "every reviewer cares at least a little about votes," which "provide reviewers with reassurance that people are reading their reviews and assessing them" (Harris 2010). Top online reviewers who reliably gain consumers' attention stand to make significant financial gains by monetizing the attention and reputation they have earned. For instance, Forbes reported that a top Amazon reviewer made thousands of dollars and frequently received free dinner and travel offers; this reviewer also received up to 40 free books a day, communicated regularly with authors and his fans, and eventually launched a writing career (Coster 2006). The launch of Amazon's Vine Program in 2007 (http://www.amazon.com/gp/vine/help) has made it easier for top reviewers to receive products from manufacturers and publishers. Similarly, top product bloggers who consistently attract consumer attention can make money, earn free products, and even build a career and become a celebrity through blogging (Faw 2012).

Although online reviewers may desire attention, gaining it is not a trivial task, since attention is arguably the most valuable and scarce resource on the Internet (Dahlberg 2005; Davenport and Beck 2001; Goldhaber 1997). The ever-increasing amount of user-generated content creates a processing problem for users seeking relevant and useful information and leads to a competition for attention (Hansen and Haas 2001; Hunt and Newman 1997; Ocasio 1997; Reuters 1998). Given the scarcity and value of attention, online reviewers are likely to compete for attention when contributing voluntarily to review systems.

Drawing upon the aforementioned theories on online reputation and consumer attention, this paper extends the literature on online product reviews by empirically investigating how incentives such as reputation and attention affect online reviewers' behaviors. We argue that reviewers may write their reviews in ways that attract the attention of other consumers. In particular, we study reviewers' decisions on two levels. First, at the product level, given the status of the current review environment, we investigate the factors that affect a reviewer's decision on whether to write a review for a particular product. Second, at the review level, we investigate the factors that affect a reviewer's decision on whether to post a rating that differentiates from the current average rating, given the status of his or her online reputation.

We use a rich data set of online reviews of books and electronics collected from Amazon and Barnes & Noble (BN). The data was collected on a daily basis, which allows us to replicate the review environment at the moment reviewers made their review decisions. In addition, unlike most prior studies, which focus on one product category and one website, our data set allows us to compare across product categories and across different review systems.

³ Goldhaber (1997) has proposed an "attention economy" theory, stating that "obtaining attention is obtaining a kind of enduring wealth, a form of wealth that puts you in a preferred position to get anything this new economy offers."

Our results indicate that reviewers' review decisions are affected by the existence of a reputation system that amplifies the effect of reputation and consumer attention. Our comparison across two different review systems confirms that reviewers behave more strategically in providing reviews when there exists a reviewer ranking system that makes each reviewer's reputation quantifiable and visible. We find that reviewers on Amazon, where a reviewer ranking system exists, become sensitive to the competition among existing reviews and tend to avoid crowded review segments. In direct contrast, on the BN website, which does not include such a reviewer ranking system, reviewers do not respond to the competition effect. In addition, reviewers on Amazon post more differentiated ratings compared with reviewers on the BN website, presumably because Amazon's reviewer ranking system makes each reviewer's reputation quantifiable and visible and intensifies the competition for attention. Our findings yield interesting managerial implications for companies interested in encouraging online reviewers' contributions and in managing review activities on their websites. We discuss these details in the "Discussion and Conclusion" section and provide guidance for managers to improve the design of their review systems to fulfill different business needs and goals.

The rest of the paper is organized as follows. We first present the data and the empirical model used to address our research question. The following section reports our empirical findings and is followed by additional analyses of Amazon reviewers' rating strategies. Finally, we discuss the managerial implications in the last section.

Empirical Methodology

Data

This study uses book reviews on Amazon.com and BN.com to study reviewers' behaviors. We selected Amazon and BN as they are the two largest online book retailers and, most importantly, have two different review environments. Amazon offers a reviewer ranking system that ranks reviewers based on their contributions. This system allows top reviewers to build their online reputation and to consistently gain future attention. In contrast, BN does not offer such a reviewer ranking system, and thus does not allow reviewers to build reputation and consistently gain future attention. Therefore, reviewers should behave more strategically within Amazon's environment than within BN's. Our sample includes all books released in September and October 2010, resulting in a sample of 1,751 books. At the end of the data collection period, 690 books on Amazon and 460 books on BN had more than 2 reviews. We have a total of 10,195 reviews in the data set.

The data in our sample includes daily information on books, reviews, and reviewers. For books, we collect the book's release date and its daily sales rank, which is used as a proxy for its popularity. For reviews, we collect the date the review was posted, the reviewer's user name (which could be a real name or a pen name), the review rating, and the total helpfulness vote (used as a proxy for the amount of attention a review has captured). The votes are collected daily. We then obtain the reviewer rank from each reviewer's online profile on Amazon (Amazon ranks reviewers based on the number and the quality of their reviews⁴). One unique feature of our sample is that we collect all review information starting from the release date of each book. Therefore, we are able to observe the dynamics of reviewers' strategies over the time period of the sample. The data spans a three-month period from September 2010 through November 2010.

Finally, we randomly selected about 500 electronic products on Amazon, including laptops, netbooks, tablets, Blu-ray players, GPS devices, TVs, and digital cameras. This data set is used for cross-category comparison in order to generalize our results from books to other product categories. The data collected for the electronic products is similar to that of the book data set. The results are reported in the section "Additional Analyses on Amazon Reviewers."

Empirical Model

The problem for consumers of finding the most useful reviews could be solved if consumers allocated sufficient time and effort to read through all of the reviews. However, given the limited time and attention each consumer is able to spend, it is unlikely that a consumer will be able to process all of the available reviews before purchase, as some items may have literally hundreds of reviews (Chen et al. 2007; Forman et al. 2008). Consumers have to use heuristics to select a subset of the reviews to read rather than processing all of the reviews systematically (Forman et al. 2008). This constraint on consumers' limited attention has implications for reviewers' decisions on creating and posting reviews. Reviewers have to adopt the right strategy to compete for this scarce attention from consumers. The competition effect can be magnified when reviewers are able to quantify their online reputation through a reviewer ranking system. As mentioned above, with such a reputation system, reviewers can potentially monetize their online reputation and the attention they gain by receiving payments, free products, free travel or dinner offers, and even career opportunities (Coster 2006). Therefore, the competition for attention becomes more intense when there exists a reviewer ranking system to quantify a reviewer's online reputation. The reviewer ranking system builds up online reputation for reviewers, allowing top reviewers to consistently gain future attention. In contrast, without a reviewer ranking system, reviewers are not able to build up reputation and consistently gain future attention. As a result, reviewers may behave more strategically to gain attention and enhance reputation when there is a mechanism to quantify their online reputation.

⁴ Amazon uses the ratio of (helpful votes)/(total votes) to measure the quality of a review. In addition, Amazon claims to consider the relative magnitude of the total vote count at the same time.

We study reviewers' behaviors under two review mechanisms at two levels. At the product level, we study how two factors, popularity and crowdedness, affect a reviewer's decision on whether to write a review for a product. At the review rating level, we study how reputation status affects reviewers' decisions on whether to differentiate from the current consensus.

Product Choice

The intention of this section is to study how two factors, popularity and crowdedness, affect reviewers' decisions on choosing a product to review in two different review systems. In order to compete effectively for attention, reviewers may have to choose an appropriate product for which to post a review. This involves balancing the popularity of the product against the crowdedness of the review segment for that product. The popularity of a product is determined by its sales volume. Since sales volume indicates product awareness in the market (Godes and Mayzlin 2004; Liu 2006), it provides a good measure of the number of potential consumers who may pay attention to the reviews. Therefore, popularity indicates the total attention a product may attract. The crowdedness of a review segment is measured by the number of preexisting reviews for the product. This measures the level of competition for attention in the review segment of that product.

The data is unbalanced panel data for a three-month period and is grouped at the book level. We construct a dependent variable, DailyReviewNumber, which is the count of the daily number of new reviews for each book. This variable directly measures how many reviewers choose to review a book each day, knowing the popularity and the crowdedness of that book on that day. By understanding how these two factors affect the arrival process of reviews, we can infer how reviewers make decisions based on these two factors under different mechanisms.
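As a concrete illustration, a dependent variable of this kind can be built from raw review data with a simple group-and-count. The sketch below is ours, not the authors' code, and the column names (book_id, date) are hypothetical.

```python
# Minimal sketch (not the authors' code): build DailyReviewNumber by
# counting the new reviews each book receives on each calendar day.
import pandas as pd

reviews = pd.DataFrame({
    "book_id": [1, 1, 1, 2, 2],
    "date": pd.to_datetime(
        ["2010-09-01", "2010-09-01", "2010-09-02", "2010-09-01", "2010-09-03"]
    ),
})

daily_review_number = (
    reviews.groupby(["book_id", "date"])
           .size()
           .rename("DailyReviewNumber")
           .reset_index()
)
print(daily_review_number)
```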

Since our dependent variable is a count variable, we cannot use the traditional ordinary least squares model, as the assumptions of homoscedasticity and normally distributed errors are violated. A common model that accounts for the discrete and nonnegative nature of count data is the Poisson model, which has been widely used in the marketing literature to study consumers' purchasing behaviors (e.g., Dillon and Gupta 1996; Gupta 1988; Schmittlein et al. 1987; Wagner and Taudes 1986). We assume the arrival of reviews (i.e., the DailyReviewNumber) follows a Poisson distribution. Note that the Poisson model assumes equal mean and variance. Since we find over-dispersion in our data, we estimate a negative binomial model that allows for over-dispersion of the count variable (Hausman et al. 1984).
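To make the model choice concrete, the following sketch fits both count models with Python's statsmodels package. It is an illustration under simulated data with hypothetical variable names, not the authors' estimation code.

```python
# Illustrative sketch: a Poisson fit, then a negative binomial fit that
# relaxes the mean-equals-variance restriction when counts are over-dispersed.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
panel = pd.DataFrame({
    "DailyReviewNumber": rng.negative_binomial(1, 0.5, 500),  # over-dispersed counts
    "SalesRankLag": rng.normal(size=500),      # log sales rank at t-1 (hypothetical)
    "ReviewNumberLag": rng.normal(size=500),   # log review count at t-1 (hypothetical)
})

poisson = smf.glm("DailyReviewNumber ~ SalesRankLag + ReviewNumberLag",
                  data=panel, family=sm.families.Poisson()).fit()

negbin = smf.glm("DailyReviewNumber ~ SalesRankLag + ReviewNumberLag",
                 data=panel, family=sm.families.NegativeBinomial()).fit()
print(negbin.summary())
```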

We performed the Hausman (1978) test to decide between a fixed effects model and a random effects model. The Hausman test checks for violations of the assumption that the random effects specification is uncorrelated with the independent variables. Our result rejects the null hypothesis at the 1% significance level, favoring a fixed effects model. Using a fixed effects model allows the error term to be correlated with the explanatory variables, making the estimation more robust. Moreover, it controls for the time-invariant unobserved characteristics associated with each book, such as book quality or author name effects, which may affect the arrival rate of reviews.
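For reference, the Hausman statistic compares the fixed and random effects coefficient vectors, weighting their difference by the difference of the covariance matrices. A minimal implementation sketch (not tied to the authors' software) follows.

```python
# Hausman (1978) specification test: H = d' (V_fe - V_re)^{-1} d, where d is
# the difference between fixed- and random-effects estimates. Under the null
# (RE is consistent and efficient), H is chi-squared with df = len(d).
import numpy as np
from scipy import stats

def hausman_test(b_fe, b_re, cov_fe, cov_re):
    diff = np.asarray(b_fe) - np.asarray(b_re)
    cov_diff = np.asarray(cov_fe) - np.asarray(cov_re)
    h_stat = float(diff @ np.linalg.inv(cov_diff) @ diff)
    df = diff.size
    return h_stat, df, stats.chi2.sf(h_stat, df)
```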

The independent variables include the natural log of book i's sales rank on the previous day t-1 (SalesRank_i(t-1)) and the natural log of the number of existing reviews on the previous day t-1 (ReviewNumber_i(t-1)). SalesRank measures the popularity of the book and ReviewNumber measures the crowdedness of the review segment for that book. We use the previous day's SalesRank and ReviewNumber to directly estimate the impact of existing popularity and crowdedness on reviewers' product choice decisions in the current period t. Amazon is a dummy variable that equals 1 if the data is from Amazon and 0 if it is from BN. AmazonReviewNumber and AmazonSalesRank are interaction terms that measure the difference in the main effects between the two mechanisms.


Our model is different from the models used in the online review literature that try to assess the impact of online reviews on product sales. First, the dependent variable in our model is the daily count of new reviews rather than the cumulative number of reviews. This variable does not have the cumulative effect of reviews that could potentially drive product sales. Second, we use the previous period's sales rank rather than the current or the following period's sales rank. Even if the daily count of reviews potentially affects sales, it cannot affect the previous period's sales. Therefore, our model does not contradict previous findings that the cumulative number of reviews can drive product sales. In addition, we calculate the variance inflation factor (VIF) for every variable in all models to check for potential multicollinearity. All of the VIF values are below 10, suggesting no serious multicollinearity in our results (Hair et al. 1995; Marquardt 1970; Mason et al. 1989).

One may still argue that the number of daily reviews could be affected by the cumulative sales volume. That is, the number of potential reviewers is proportional to the existing adopters who have already purchased the product. High cumulative sales volume indicates more potential reviewers and could lead to additional future daily reviews. To control for this effect, we construct a variable, PotentialReviewers, to account for the possibility that the increasing number of daily reviews is simply due to the increasing number of potential reviewers over time. Since the SalesRank is a good proxy for the sales volume, the cumulative SalesRank can be used as a reasonable proxy for the existing adopters, which indicates the number of potential reviewers. However, the sales rank is negatively correlated with the sales volume (i.e., the smaller the sales rank, the higher the sales volume), so we use the inverse of the SalesRank to account for this effect. PotentialReviewers_i(t-1) is defined as the sum of the inverse of SalesRank from book i's release date to the previous day t-1:

PotentialReviewers_{i(t-1)} = \sum_{\iota=t_0}^{t-1} \frac{1}{SalesRank_{i\iota}}    (1)

where t_0 is the release date of book i and t is the current date. We use PotentialReviewers_i(t-1) to control for the size effect of potential reviewers on the daily number of reviews.

Finally, reviewers may lose interest in writing reviews for products that have been released for some time. Their enthusiasm for writing reviews may decay over time, which can affect the number of daily reviews in different time periods. We construct a variable, DaysElapsed_it, the number of days since book i's release date, to control for this time effect. We include the interaction terms AmazonPotentialReviewers and AmazonDaysElapsed in the model as well.
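Computationally, equation (1) above is a per-book running sum of inverse sales ranks, lagged one day. The pandas sketch below (hypothetical column names, not the authors' code) illustrates this construction.

```python
# Equation (1) as a per-book cumulative sum of 1/SalesRank through day t-1.
# "raw_sales_rank" is the untransformed rank; column names are hypothetical.
import pandas as pd

def add_potential_reviewers(df: pd.DataFrame) -> pd.DataFrame:
    df = df.sort_values(["book_id", "date"]).copy()
    inv_rank = 1.0 / df["raw_sales_rank"]
    df["PotentialReviewers"] = (
        inv_rank.groupby(df["book_id"]).cumsum()
                .groupby(df["book_id"]).shift(1)   # sum runs only through t-1
    )
    return df
```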

Figure 1 summarizes the key variables in the model.

Review Rating Choice

The purpose of studying a reviewer's rating choice is to examine how reviewers rate differently under the two review systems. Reviewers may post more differentiated ratings when there is a reviewer ranking system to quantify their online reputation than when there is no such system. We use RatingDeviation_ij(t-1) as the dependent variable to investigate reviewers' rating behaviors. It is the squared difference between the average rating for book i at time t-1 and reviewer j's rating; that is, RatingDeviation_ij(t-1) = (AvgRating_i(t-1) - Rating_ij)². This variable measures the distance from the focal rating to the average rating and therefore indicates how differentiated the rating is. A large value of RatingDeviation implies that the reviewer is trying to differentiate from others, while a value close to zero indicates that the reviewer is following the mass opinion. We estimate the following model using data from the day before the review is posted (i.e., time t-1), which allows us to replicate the environment in which reviewers make rating decisions. β1, the coefficient of the dummy variable Amazon_ij, is expected to be positive, as reviewers on Amazon would post more differentiated ratings than those on BN. Meanwhile, we control for the effects of popularity and crowdedness at the book level.
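As a small illustration (hypothetical names, not the authors' code), the deviation measure is a one-line transformation once the prior-day average rating is known:

```python
# RatingDeviation: squared distance between the book's average rating on
# the day before posting and the focal review's rating.
def rating_deviation(avg_rating_prev_day: float, rating: float) -> float:
    return (avg_rating_prev_day - rating) ** 2

# Example: a 1-star review against a 4.2-star consensus is strongly
# differentiated: (4.2 - 1) ** 2 = 10.24.
print(rating_deviation(4.2, 1))
```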

RatingDeviation_{ij(t-1)} = \beta_0 + \beta_1 Amazon_{ij} + \beta_2 SalesRank_{i(t-1)} + \beta_3 ReviewNumber_{i(t-1)} + \epsilon_{ijt}    (2)

Results

We estimate our empirical models using unbalanced panel data over a period of three months. The negative binomial models are estimated via maximum likelihood. Table 1 provides descriptions of all variables and Table 2 summarizes the descriptive statistics for the relevant variables. Note that Rating and RealName are time-invariant variables; all of the other variables change every day.

Table 3 presents the fixed effects panel data estimation results for the model of choosing a book to review. The coefficients of SalesRank are negative and significant at the 1% level for both Amazon and BN, as shown in columns (1) and (2). The more popular the book (i.e., the better its sales rank), the more potential buyers are aware of the product, so an individual review can share in a larger pool of total attention regardless of whether a reviewer ranking system exists. As a result, more reviewers will review the book (i.e., the coefficient for SalesRank is negative).


Figure 1. Summary of Key Variables for Product Choice Model

Table 1. Description of Variables

Variables regarding books:
DailyReviewNumber_it: The number of reviews that book i receives at day t
ReviewNumber_it: The natural log of the cumulative number of reviews of book i from its release date to day t
SalesRank_it: The natural log of the sales rank of book i at day t
Amazon_i: A dummy variable indicating whether a book is on Amazon or BN (1 = book i is on Amazon; 0 = book i is on BN)
PotentialReviewers_it: The sum of the inverse of SalesRank from book i's release date to day t
DaysElapsed_it: The number of days from book i's release date to day t

Variables regarding reviews:
RatingDeviation_ijt: The square of the difference between the rating of review j of book i and the average rating at day t
Rating_ij†: The rating of review j of book i (one-to-five scale)

Variables for Amazon only:
DailyTotalVote_ijt: The number of votes that review j of book i receives at day t
DailyUnhelpfulVote_ijt: The number of unhelpful votes that review j of book i receives at day t
TotalVote_ijt: The cumulative number of votes that review j of book i receives from the day it was posted to day t
TotalUnhelpfulVote_ijt: The cumulative number of unhelpful votes that review j of book i receives from the day it was posted to day t
ReviewerRank_jt: The natural log of the reviewer rank of the reviewer of review j of book i at day t
RealName_j†: A dummy variable measuring the real-name identity of the reviewer of review j of book i (1 = the reviewer uses a real name; 0 = the reviewer uses a pen name)

† Time-invariant variables.


Table 2. Descriptive Statistics of Variables

Variable             N         Mean    Std. Dev.  Min    Max
DailyReviewNumber    83,644    .179    1.293      0      231
ReviewNumber         83,644    1.419   1.126      0      6.059
SalesRank            83,644    9.558   2.384      .693   15.438
PotentialReviewers   83,644    3.656   2.761      .066   38.934
Amazon               83,644    .485    .500       0      1
RatingDeviation      461,004   1.024   1.833      0      15.793
DailyTotalVote       461,004   .083    .727       0      248
DailyUnhelpfulVote   461,004   .032    .536       0      236
TotalVote            461,004   6.154   20.817     0      925
TotalUnhelpfulVote   461,004   2.225   13.102     0      885

Table 3. Fixed Effects Estimation on Book Choice (DailyReviewNumber_it)

Variable                           (1) Amazon       (2) BN           (3) Amazon & BN
SalesRank_i(t-1)†                  -.460** (.023)   -.668** (.051)   -.668** (.051)
ReviewNumber_i(t-1)†               -.073** (.014)   .473** (.031)    .473** (.031)
PotentialReviewers_i(t-1)†         .165** (.023)    .129** (.036)    .129** (.036)
DaysElapsed_it                     -.018** (.001)   -.036** (.002)   -.036** (.002)
AmazonSalesRank_i(t-1)†                                              .207** (.056)
AmazonReviewNumber_i(t-1)†                                           -.547** (.035)
AmazonPotentialReviewers_i(t-1)†                                     .035 (.043)
AmazonDaysElapsed_i(t-1)                                             .018** (.003)
Amazon_i                                                             1.786** (.143)
N                                  40,568           26,533           67,101
Log likelihood                     -21627.664       -4981.676        -26609.34

Note: **p < 0.01, *p < 0.05
† Variables are normalized as (Variable - Mean)/Std. Dev. to allow comparison across the two sites.

Note that after controlling for the effect of the number of potential reviewers, we still observe a significant effect of popularity.

Next, a higher number of preexisting reviews indicates a more crowded segment and hence more severe competition for attention. Consequently, when there is a reviewer ranking system, we find that fewer reviewers choose to review a book as its review segment becomes crowded (i.e., the coefficient of ReviewNumber is negative for Amazon). This indicates that reviewers tend to avoid crowded review segments so as to reduce the competition for attention. Note that after controlling for the time decay effect, we still observe a significant negative impact of crowdedness on a reviewer's book choice for Amazon. On the other hand, when there is no reviewer ranking system to intensify the competition effect, reviewers do not have the same incentive to avoid crowded review segments, but rather simply follow the "buzz." As a result, the coefficient of ReviewNumber is positive for BN.

Interestingly, these results are consistent with the strategies proposed by Harris. In an article he shared on Amazon, he suggested that, based on his experience, the ideal products to review on Amazon were "products that attracted a reasonable amount of traffic but not many reviews" (2009, p. 1). Our results indicate that reviewers seem to follow these rational rules in order to strategically gain attention quickly without much competition when a reviewer ranking system exists.

Next, we analyze reviewers' decisions on choosing an appropriate rating. We use a panel data set that includes only the data from the day before a review is posted. This allows us to reproduce the environment in which reviewers make decisions and examine how they choose a strategy given the information available at that time. Table 4 presents the results for reviewers' rating choice. The coefficient for the dummy variable Amazon is positive and significant at the 1% level, indicating that Amazon reviewers tend to provide more differentiated ratings than BN reviewers. With a reviewer ranking system, Amazon reviewers behave more strategically than BN reviewers and post more differentiated ratings to capture attention and improve their online reputation. Without such a ranking mechanism, BN reviewers do not have a strong incentive to gain attention by posting deviated ratings.

Our results indicate that, at the product level, online reviewers do not randomly select books to review but tend to choose popular books. In addition, when there is a reviewer ranking system, reviewers tend to avoid products with crowded review segments so as to reduce competition for attention. At the review level, reviewers post more differentiated ratings when there is a reviewer ranking system that intensifies the competition for attention among reviewers.

Additional Analyses on Amazon Reviewers

To further understand online reviewers' behaviors, we conducted several additional analyses focusing on Amazon reviewers. First, we investigate whether posting a differentiated rating can bring reputational benefits to reviewers, which helps to explain our main results on reviewers' rating choice. Second, we test our models using data from electronic products to provide a cross-category robustness check. Third, we study the sentiment of review text to provide additional insight into reviewers' review choice.

Amazon Reviewers' Rating Strategies

Our main findings suggest that Amazon reviewers generally tend to provide more differentiated ratings than BN reviewers. As discussed earlier, the different behaviors of reviewers in the two review systems could be due to the introduction of a reviewer ranking system on Amazon, which offers reviewers the opportunity to gain attention and enhance online reputation. In this section, we focus specifically on Amazon reviewers' behaviors to further investigate whether providing differentiated ratings can indeed bring more attention and how reviewers behave differently at different levels of online reputation.

First, we examine the trade-off reviewers face when differentiating from the community consensus. The trade-off has two parts: the benefit and the cost that a differentiated rating brings. We first examine whether a differentiated rating can attract more attention, to confirm that reviewers can benefit from deviating. We then test whether a differentiated rating brings more negative feedback, which represents the cost of deviating.

The data is unbalanced panel data from Amazon, grouped at the review level. We use daily total votes (DailyTotalVote_ijt) and daily unhelpful votes (DailyUnhelpfulVote_ijt) as two dependent variables to measure the level of attention and negative feedback for each review on each day. These two variables are count variables. Therefore, as discussed above, we assume the daily total votes and daily unhelpful votes follow a Poisson distribution. Due to over-dispersion in our data, we use the negative binomial model to test the benefit and cost effects.

The independent variable of interest is RatingDeviation_ij(t-1). A positive coefficient on RatingDeviation in both models would show that differentiated ratings bring more attention (i.e., more total votes) and more negative feedback (i.e., more unhelpful votes) at the same time.

The control variables include characteristics of the book and the reviewer that could affect the level of attention and negative feedback a review receives. For the characteristics of books, we control for the popularity of the book in the previous period (SalesRank_i(t-1)) and the crowdedness of the review segment (ReviewNumber_i(t-1)). The more popular the book, the more likely the review will get a vote; the more crowded the review segment, the less likely the review will get a vote.


Table 4. Estimations on Rating Strategies (RatingDeviation_ij(t-1))

Amazon_ij             .217** (.065)
ReviewNumber_i(t-1)   .213** (.015)
SalesRank_i(t-1)      -.059** (.006)
N                     8,887
R²                    0.03

Note: **p < 0.01, *p < 0.05

Table 5. Fixed Effects Estimation for Cost and Benefit of Differentiated Ratings

Variable                     Gaining Attention:            Receiving Negative Feedback:
                             DailyTotalVote_ijt (Benefit)  DailyUnhelpfulVote_ijt (Cost)
RatingDeviation_ij(t-1)      .093** (.006)                 .114** (.009)
SalesRank_i(t-1)             -.245** (.006)                -.351** (.011)
ReviewNumber_i(t-1)          -.853** (.013)                -1.103** (.022)
ReviewerRank_ij(t-1)         -.037** (.002)                -.035** (.003)
RealName_ij                  -.194** (.044)                .326** (.084)
DaysElapsed_ijt              -.014** (.0004)               -.011** (.001)
TotalVote_ij(t-1)            -.002** (.0002)
TotalUnhelpfulVote_ij(t-1)                                 -.002** (.0003)
N                            344,205                       228,298
Log Likelihood               -76281.890                    -30459.596

Note: **p < 0.01, *p < 0.05

For the characteristics of reviewers, we control for the reviewer's reputation in the previous period (ReviewerRank_ij(t-1)) and the identity disclosure status (RealName_ij). ReviewerRank_ij(t-1) is the natural log of the reviewer's rank for review j of book i at time t-1. RealName is a dummy variable that is 1 if the reviewer discloses his or her real-world name and 0 otherwise.

In addition, we use DaysElapsed_ijt to control for the time decay effect on the daily number of votes. We control for the previous total level of attention, TotalVote_ij(t-1), when estimating the benefit from a differentiated rating, and the previous total negative feedback, TotalUnhelpfulVote_ij(t-1), when estimating the cost. These two variables control for a possible herding effect whereby online users' attention or opinions may simply follow those of previous users.

Table 5 reports the fixed effects panel data estimation results for the benefit and cost of differentiated ratings. The column "Gaining Attention" shows the benefit of being differentiated; this model uses DailyTotalVote as the dependent variable. The column "Receiving Negative Feedback" shows the cost of deviating from the consensus; this model uses DailyUnhelpfulVote as the dependent variable.


The coefficient of RatingDeviation is positive and significant at the 1% level in both models. This suggests that the more differentiated the rating, the more attention a review gains, but also the more unhelpful votes it receives. Therefore, reviewers face a trade-off when deviating from the mass opinion and must balance the cost against the benefit when making their decision. In other words, reviewers with different reputation costs may behave differently so as to obtain more net benefits.

In addition, the negative signs of the coefficients of SalesRank and ReviewNumber support our main findings on reviewers' book choices. The more popular the book, the more votes a review receives; thus, more reviewers would review the book. However, the more crowded the review segment, the fewer votes a review receives; as a result, fewer reviewers would review the book.

Next, we examine whether reviewers with different reputation costs behave differently in terms of deviating from the community consensus. We use RatingDeviation_ij(t-1) as the dependent variable to investigate reviewers' rating choices. We estimate the following model using data from the day before the review is posted, which allows us to replicate the environment in which reviewers make rating decisions.

RatingDeviation_{ij(t-1)} = \beta_0 + \beta_1 ReviewerRank_{ij(t-1)} + \beta_2 RealName_{ij} + \beta_3 SalesRank_{i(t-1)} + \beta_4 ReviewNumber_{i(t-1)} + \epsilon_{ijt}    (3)

The results are reported in Table 6. The coefficient for ReviewerRank is positive and significant. Because ReviewerRank is the log of the rank (larger values correspond to lower-ranked reviewers), this implies that top ranking reviewers, whose reputation costs are relatively high, are more likely to offer a less differentiated rating than low ranking reviewers. On the other hand, low ranking reviewers, whose reputation costs are relatively low, tend to offer more differentiated ratings. In other words, the benefits of gaining attention seem to outweigh the costs of losing reputation for low ranking reviewers.

Different Product Categories: Books Versus Electronics

In our main analysis, we used data from the book category since Amazon and BN are the market leaders in the retail book industry. One may argue that there could be a potential sample selection bias in that reviewers may behave differently when reviewing different types of products. To address this concern, we collected reviews from the electronic products category on Amazon during the same data collection period. We ran the same fixed effects analysis using the electronic products data set, and the results are shown in Table 7.

The results from the electronic products category are consistent with our previous results for books on Amazon. The negative coefficients of SalesRank and ReviewNumber confirm our previous findings in the book category. Amazon reviewers' strategic decisions on choosing products to review do not exhibit significant differences between product categories: when reviewing electronic products, reviewers still consider both the popularity and crowdedness effects when selecting a product to review. Thus, our main results appear robust.

Review Content: Rating Versus Text

Although most of the existing studies on online reviews focus on review ratings rather than review text, the sentiment of the review text may also play a role in reviewers' review decisions in addition to the numerical ratings. To measure the sentiment of review text, we use the negative word list in the Harvard Psychosociological Dictionary, specifically the Harvard IV dictionary. The Harvard General Inquirer (details available at http://www.wjh.harvard.edu/~inquirer/) classifies words into 182 categories such as positive, negative, strong, and weak. The negative word category has been commonly used in the finance and accounting literature to examine the tone and sentiment of corporate 10-K reports (e.g., Loughran and McDonald 2011) and Wall Street Journal news stories (e.g., Tetlock 2007; Tetlock et al. 2008).

Following prior research, we use the frequency of negative words to measure the sentiment of review text. PerNegWords_ij is defined as the percentage of negative words in review j of book i. AvgPerNegWords_it is the mean of PerNegWords across all existing reviews of book i at time t. To understand reviewers' strategies on text tone, we construct a variable, NegTextDeviation, analogous to RatingDeviation in the above analysis: NegTextDeviation_ij(t-1) = (AvgPerNegWords_i(t-1) - PerNegWords_ij)². Next, we replace the dependent variable in equation (3) with NegTextDeviation_ij(t-1) and control for the total number of words in each review. Length_ij is the natural log of the total number of words in review j of book i. Since equations (3) and (4) are interlinked and the error terms are correlated, we estimate the two models simultaneously through seemingly unrelated regression (SUR).
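As an illustration of the text measure (a sketch, not the authors' code; the actual word list must be obtained from the General Inquirer), PerNegWords can be computed by simple tokenization and lookup:

```python
# Percentage of a review's words that appear in a negative word list,
# e.g., the Harvard IV negative category (list contents not shown here).
import re

def per_neg_words(text: str, negative_words: set) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    n_negative = sum(token in negative_words for token in tokens)
    return 100.0 * n_negative / len(tokens)

# Toy example with a tiny stand-in word list.
print(per_neg_words("A dull, boring plot", {"dull", "boring"}))  # 50.0
```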

NegTextDeviation_{ij(t-1)} = \beta_0 + \beta_1 ReviewerRank_{ij(t-1)} + \beta_2 RealName_{ij} + \beta_3 SalesRank_{i(t-1)} + \beta_4 ReviewNumber_{i(t-1)} + \beta_5 Length_{ij} + \epsilon_{ijt}    (4)


Table 6. OLS Estimation on Amazon Reviewer Rating Choice (RatingDeviation_ij(t-1))

ReviewerRank_ij(t-1)   .010** (.004)
RealName_ij            -.120** (.046)
SalesRank_i(t-1)       -.054** (.008)
ReviewNumber_i(t-1)    .316** (.020)
N                      8,125
R²                     0.05

Note: **p < 0.01, *p < 0.05

Table 7. Fixed Effects Estimation on Electronic Product Choice (DailyReviewNumber_it)

Variable                    (1)              (2)              (3)
SalesRank_i(t-1)            -.205** (.019)                    -.247** (.020)
ReviewNumber_i(t-1)                          -.175** (.042)   -.301** (.043)
PotentialReviewers_i(t-1)   .013** (.003)    .027** (.004)    .019** (.004)
DaysElapsed_it              -.002* (.001)    -.004** (.001)   .001 (.001)
N                           20,532           20,532           20,532
Log Likelihood              -6461.962        -6506.139        -6436.897

Note: **p < 0.01, *p < 0.05

Table 8. SUR Estimation on Choosing Rating/Text Strategies

Variable               RatingDeviation_ij(t-1)   NegTextDeviation_ij(t-1)
ReviewerRank_ij(t-1)   .010** (.004)             .000005** (.000002)
RealName_ij            -.120** (.046)            .000003 (.00002)
ReviewNumber_i(t-1)    .316** (.020)             .00005** (.000009)
SalesRank_i(t-1)       -.054** (.008)            -.000001 (.000004)
Length_ij                                        -.0002** (.00001)
N                      8,125                     8,125
R²                     0.05                      0.07

Note: **p < 0.01, *p < 0.05


Table 8 reports the estimation results from the SUR. The positive coefficients of ReviewerRank in both models indicate that high ranking reviewers post less deviated reviews, both in terms of ratings and in terms of the percentage of negative words. In contrast, low ranking reviewers tend to offer more differentiated and more deviated reviews on both dimensions.

Discussion and Conclusion

This paper empirically examines how online reviewers compete for attention and gain reputation under two different review systems. To the best of our knowledge, this is the first attempt to empirically test how online reviewers' behaviors are driven by the desire to gain attention and online reputation. Our study adds to the literature by providing empirical evidence on how gaining attention and reputation may affect reviewers' behaviors. The majority of the prior literature focuses on understanding the consequences of web content or online information, such as the impact of online reviews on product sales. Different from previous studies, we investigate the antecedents of web content: online reviewers seem to behave differently when they have strong incentives to gain attention and enhance their online reputation. More importantly, our comparison between the two review systems on Amazon and BN suggests that reviewers become more strategic when a reviewer ranking system exists to quantify their online reputation. Reviewers are able to build up their online reputation and consistently gain attention through a reviewer ranking system. In addition, reviewers have the opportunity to monetize their online reputation and the amount of attention they gain by receiving free products, travel invitations, and even job offers (Coster 2006). As a result, the competition for attention becomes more intense when there is a reviewer ranking system.

Our results suggest that reviewers generally tend to post more reviews for popular products. However, when there is a reviewer ranking system to quantify their online reputation, reviewers respond strategically to the competition effect and post fewer reviews as a review segment becomes crowded. Moreover, reviewers tend to post more differentiated ratings when there is a reviewer ranking system than when there is no such system. The introduction of a reviewer ranking system helps reviewers to quantify their online reputation and magnifies the competition for attention among reviewers. Since attention is usually virtual in nature and difficult to quantify, this paper offers a unique way to empirically measure attention and to study how online users compete for it.

This study yields several interesting managerial implications. First, it offers insights to companies that are interested in designing and developing online review systems. We show that reviewers respond strategically to incentives, such as quantified online reputation, that are introduced by a reviewer ranking system. Companies may incorporate such incentive mechanisms into their designs to give reviewers additional incentives to contribute consistently. Such mechanisms help reviewers to quantify their social and other benefits and thus motivate them to contribute consistently. For example, some review sites grant badges to reviewers to indicate different levels of contribution.

Second, our results can benefit designers of websites that feature reviews. We find that reviewers respond differently to popularity and crowdedness when there is a reviewer ranking system to magnify the competition for attention. Given that our findings indicate that reviewers are more likely to write a review for popular but uncrowded products when there is a reviewer ranking system, this provides opportunities for companies to increase the review volume for niche products. Companies may send invitation e-mails to niche product buyers and emphasize the small number of existing reviews, or increase the benefits for reviewing niche products, such as giving more weight to reviews of niche products when calculating reviewer rank. We also suggest that a website may consider emphasizing these two dimensions differently on the product review page. For example, for niche products with few existing reviews, the website may highlight the small number of existing reviews in order to entice more reviews, or continue the practice of sending "Be the first to review this product"-type e-mails to purchasers.

Third, our additional analysis of Amazon reviewers' rating and text strategies suggests that reviewers with different levels of online reputation tend to provide different ratings and use different tones in their review text. Retailers can use this information to ensure high information quality in the reviews appearing on their websites. If retailers are aware that, under certain incentives, some reviewers write reviews that are not entirely indicative of product quality, retailers can implement strategies to mitigate such behavior while maintaining the positive effects of such incentive systems. Depending on their specific goals, companies may develop different algorithms for selecting certain groups of reviewers to receive review invitations rather than sending invitations to every buyer, currently the most common practice. We note that Amazon's recent Vine program (http://www.amazon.com/gp/vine/help) accomplishes this by providing the contact information of selected top reviewers to participating suppliers, in order to generate early reviews of products by top reviewers.


Companies may also signal to consumers which reviewers offer consistently highly differentiated reviews, to help consumers decide which reviews are most useful to read. While a single review that is contrary to prevailing opinion likely reflects a diversity of thought or opinion, consistently contrary or outlying reviews may indicate gaming behavior by the reviewer and may not serve the true purpose of a review, which is to signal product quality. Companies may flag or label these reviewers so that consumers can easily identify outlying reviewers, in addition to using reviewer rank as an identifier. Amazon and several other sites currently feature a "helpfulness" button for reviews, which review consumers can use to flag "unhelpful" reviews. Companies might also revisit the algorithms used to compute reviewer rank, in order to determine whether more or less weight should be placed on a "helpfulness" or "usefulness" vote by review consumers.

Acknowledgments

We thank the senior editor, the anonymous associate editor, and the reviewers for their constructive feedback, which helped improve this paper considerably. We also thank the participants at the Conference on Information Systems and Technology 2009 and the INFORMS Annual Meeting 2009 for their valuable comments and suggestions.

References

Basuroy, S., Chatterjee, S., and Ravid, S. A. 2003. “How Critical are Critical Reviews? The Box Office Effects of Film Critics Star-Power, and Budgets,” Journal of Marketing (67:4), pp.

103-117.

Chen, P., Dhanasobhon, S., and Smith, M. D. 2007. “All Reviews Are Not Created Equal: The Disaggregate Impact of Reviews and Reviewers at https://www.doczj.com/doc/6f9955508.html,,” in Proceedings of the 28th International Conference on Information Systems, Montreal, December 9-12.

Coster, H. 2006. “The Secret Life of An Online Book Reviewer,”

Forbes, December 1.

Dahlberg, L. 2005. “The Corporate Colonization for Online Atten-tion and the Marginalization of Critical Communication?,”

Journal of Communication Inquiry (29:2), pp. 160-180. Davenport, T. H., and Beck, J. C. 2001. The Attention Economy: Understanding the New Currency of Business, Boston: Harvard Business School Press.

Dellarocas, C., Awad, N. F., and Zhang, X. 2004. “Exploring the Value of Online Product Ratings in Revenue Forecasting: The Case of Motion Pictures,” in Proceedings of the 25th Inter-national Conference on Information Systems, R. Agarwal, L.

Kirsch, and J. I. DeGross (eds.), Washington, DC, December 12-15, pp. 379-386.Deloitte. 2007. “Most Consumers Read and Rely on Online Reviews; Companies Must Adjust,” Deloitte & Touche USA LLP.

Dichter, E. 1966. “How Word-of-Mouth Advertising Works,”

Harvard Business Review (44), pp. 147-166.

Dillon, W. R., and Gupta, S. 1996. “A Segment-level Model of Category Volume and Brand Choice,” Marketing Science (15:1), pp. 38-59.

Faw, L. 2012. “Is Blogging Really a Way for Women to Earn a Living?,” Forbes, April 25.

Forman, C., Ghose, A., and Wiesenfeld, B. 2008. “Examining the Relationship Between Reviews and Sales: The Role of Reviewer Identity Disclosure in Electronic Markets,” Information Systems Research (19:3), pp. 291-313.

Godes, D., and Mayzlin, D. 2004. “Using Online Conversations to Study Word-of-Mouth Communication,” Marketing Science (23:4), pp. 545-560.

Godes, D., and Silva, J. 2012. “Sequential and Temporal Dynamics of Online Opinion,” Marketing Science (31:3), pp. 448-473. Goldhaber, M. H. 1997. “The Attention Economy: The Natural Economy and the Net,” First Monday (2:4) (available online at https://www.doczj.com/doc/6f9955508.html,/article/view/519/440; retrieved October 21, 2014).

Gupta, S. 1988. “Impact of Sales Promotions on When, What, and How Much to Buy,” Journal of Marketing Research (25:4), pp.

342-355.

Hair, Jr., J. F., Anderson, R. E., Tatham, R. L., and Black, W. C.

1995. Multivariate Data Analysis (3rd ed.), New York: Macmillan.

Hansen, M. T., and Haas, M. R. 2001. “Competing for Attention in Knowledge Markets: Electronic Document Dissemination in a Management Consulting Company,” Administrative Science Quarterly (46:1), pp. 1-28.

Hausman, J. 1978. “Specification Tests in Econometrics,” Econo-metrica (46:6), pp. 1251-1271.

Hausman, J., Hall, B. H., and Griliches, Z. 1984. “Econometric Models for Count Data with an Application to the Patents-R&D Relationship,” Econometrica (52:4), pp. 909-937.

Harris, P. D. 2009. “Learn About Amazon’s Two Reviewer Ranking Systems” (available online at https://www.doczj.com/doc/6f9955508.html,/ gp/richpub/syltguides/fullview/L1KFFD7OYD YV; last retrieved January 5, 2010).

Harris, P. D. 2010. “Write Amazon Reviews, Lists, and Guides”

(available online at https://www.doczj.com/doc/6f9955508.html,/gp/richpub/ syltguides/fullview/R35VR0WLR2RO7G/ref=cm_sylt_byauth or_title_full_10; retrieved January 5, 2010).

Hunt, R. E., and Newman, R. G. 1997. “Medical Knowledge Overload: A Disturbing Trend for Physicians,” Health Care Management Review (22:1), pp. 70-75.

Jeppesen, L. B., and Frederiksen, L. 2006. “Why Do Users Contribute to Firm-Hosted User Communities?,” Organization Science (17:1), pp. 45-64.

Leggatt, H. 2011. “24% of Consumers Turned off after Two Negative Online Reviews,” BizReport, April 13 (available online at https://www.doczj.com/doc/6f9955508.html,/2011/04/27-of-consumers-turned-off-by-just-two-negative-online-revie.html; retrieved July 25, 2011).

MIS Quarterly Vol. 39 No. 3/September 2015695

Shen et al./Online Reviewers’ Strategic Behaviors

Lerner, J., and Tirole, J. 2002. “Some Simple Economics of Open Source,” The Journal of Industrial Economics (50:2), pp.

197-234.

Li, X., and Hitt, L. 2008. “Self Selection and Information Role of Online Product Reviews,” Information Systems Research (19:4), pp. 456-474.

Li, X., and Hitt, L. 2010. “Price Effects in Online Product Reviews: An Analytical Model and Empirical Analysis,” MIS Quarterly (34:4), pp. 809-831.

Liu, Y. 2006. “Word of Mouth for Movies: Its Dynamics and Impact on Box Office Revenue,” Journal of Marketing (70:3), pp. 74-89.

Loughran, T., and McDonald, B. 2011. “When is a Liability not a Liability? Textual Analysis, Dictionaries, and 10-Ks,” Journal of Finance (66), pp. 35-65.

Mandel, T., and Van der Leun, G. 1996. Rules of the Net, New York: Hyperion.

Marquardt, D. W. 1970. “Generalized Inverses, Ridge Regression, Biased Linear Estimation, and Nonlinear Estimation,” Techno-metrics (12), pp. 591-256.

Mason, R. L., Gunst, R. F., and Hess, J. L. 1989. Statistical Design and Analysis of Experiments: Applications to Engineering and Science, New York: Wiley.

Moe, W. W., and Trusov, M. 2011. “The Value of Social Dynamics in Online Product Ratings Forums,” Journal of Marketing Research (48:3), pp. 444-456.

Mudambi, S. M.,and Schuff, D. 2010. “What Makes a Helpful Online Review? A Study of Customer Reviews on https://www.doczj.com/doc/6f9955508.html,,” MIS Quarterly (34:1), pp. 185-200.

Ocasio, W. 1997. “Towards an Attention-Based View of the Firm,”

Strategic Management Journal, (18:3), pp. 187-206. Pavlou, P., and Gefen, D. 2004. “Building Effective Online Marketplaces with Institution-Based Trust,” Information Systems Research (15:1), pp. 37-59.

Resnick, P., Zeckhauser, R., Friedman, E., and Kuwabara, K. 2000.

“Reputation Systems,” Communications of the ACM (43:12), pp.

45-48.

Reuters. 1998. Out of the Abyss: Surviving the Information Age, London: Reuters Limited.

Roberts, J. A., Hann, I., and Slaughter, S. A. 2006. “Understanding the Motivations, Participation, and Performance of Open Source Software Developers: A Longitudinal Study of the Apache Projects,” Management Science (52:7), pp. 984-999.

Schmittlein, D. C., Morrison, D. G., and Colombo, R. 1987. “Counting Your Customers: Who Are They and What Will They Do Next?,” Management Science (33:1), pp. 1-24.

Tetlock, P. C. 2007. “Giving Content to Investor Sentiment: The Role of Media in the Stock Market,” Journal of Finance (62:3), pp. 1139-1168.

Tetlock, P. C., Saar-Tsechansky, M., and Macskassy, S. 2008. “More than Words: Quantifying Language to Measure Firms’ Fundamentals,” Journal of Finance (63:3), pp. 1437-1467.

Wagner, U., and Taudes, A. 1986. “A Multivariate Polya Model of Brand Choice and Purchase Incidence,” Marketing Science (5:3), pp. 219-244.

About the Authors

Wenqi Shen is currently an E-commerce Operations Manager at Virginia Polytechnic Institute and State University. She received her Ph.D. in management information systems from the Krannert Graduate School of Management at Purdue University. Her research interests include e-commerce, online user communities, social media, and firm information security.

Yu Jeffrey Hu is an associate professor and a codirector of the Business Analytics Center at the Scheller College of Business at the Georgia Institute of Technology. He received his Ph.D. from MIT’s Sloan School of Management. His research has been published in top journals such as Management Science, Information Systems Research, and Review of Financial Studies. He has won several research awards including the inaugural Management Science Best Paper Award in Information Systems.

Jackie Rees Ulmer is currently an associate professor of Management Information Systems in the Krannert Graduate School of Management at Purdue University. She earned her Ph.D. in Decision and Information Sciences from the Warrington College of Business at the University of Florida in 1998. Her research interests include text mining, machine learning, information security risk management, privacy, and evolutionary computation.


