
PEVQ_OemLibManual4.0: PEVQ Evaluation Edition User Manual


User Manual

PEVQ® OEM Library

Version 4.0

August 2010

3SQM®, PEAQ®, PEVQ®, PEDQ®, POLQA® and the OPTICOM logo are registered trademarks of OPTICOM GmbH; PESQ® is a registered trademark of OPTICOM GmbH and Psytechnics Ltd.; the ‘Single-sided Speech Quality Measure’ and ‘The Perceptual Quality Experts’ are trademarks of OPTICOM GmbH. This information may be subject to change. All other brand and product names are trademarks and/or registered trademarks of their respective owners.

Contents

1 PREFACE
1.1 What's New in PEVQ V4.0
1.2 Abbreviations
1.3 Organization of this Manual
2 LEGAL NOTES
3 PRINCIPLES AND DEFINITIONS
3.1 Full Reference, Reduced Reference and No Reference Metrics
3.2 Intrusive versus Non-intrusive
3.3 Active versus Passive
3.4 Perceptual versus Non-Perceptual
3.5 Subjective versus Objective
3.6 MOS Scales
3.7 Quality
3.7.1 Definition and Difference between Subjective and Objective Quality Scores
3.7.2 What is good Quality?
3.7.3 How can Quality be related to Non-Perceptual Values?
3.8 Context Dependency of MOS Scales
3.8.1 Relation between Objective and Subjective Scores
4 VIDEO QUALITY METRICS
4.1 What Metrics to Choose - FR or NR?
4.2 What Application to Target - MM, SD or HD?
4.3 International Standardization
4.3.1 ITU-T J.144
4.3.2 ITU-T J.247
4.3.3 ITU-T J.246
4.4 Independent Validation of the Accuracy of the Results for Multimedia Data
4.5 Independent Validation of the Accuracy of the Results for HDTV Data
5 THE PEVQ ALGORITHM
5.1 Scope of PEVQ
5.2 Features of PEVQ
5.3 Limitations of PEVQ
5.4 Test Conditions for which PEVQ may be used with Limited Accuracy
5.5 Test Conditions for which PEVQ is Unsuitable
5.6 Structure of the PEVQ Algorithm
5.7 Input Signals
5.8 Results Obtained from PEVQ
5.8.1 Perceptual KPIs
5.8.1.1 MOS / PEVQ Score
5.8.1.2 Temporal Distortion Indicator
5.8.1.3 Chrominance Distortion Indicator
5.8.1.4 Luminance Distortion Indicator
5.8.2 Non-Perceptual KPIs Derived from the Comparison of two Files
5.8.2.1 PSNR
5.8.2.2 Blockiness
5.8.2.3 Jerkiness
5.8.2.4 Blur
5.8.2.5 Estimated Effective Frame Rate
5.8.2.6 Frame Freeze
5.8.2.7 Frame Skip
5.8.2.8 Delay
5.8.3 KPIs Characterizing a Single Video Sequence
5.8.3.1 Temporal Activity
5.8.3.2 Spatial Complexity
5.8.3.3 Brightness
5.8.3.4 Contrast
5.8.3.5 Other KPIs
6 RECOMMENDED RESULT REPORTING
6.1 Reporting Accuracy for MOS Scores
6.2 Monitoring Scenario
6.3 Lab Scenario
6.4 Field Test Scenario
6.5 Drive / Walk Test Scenario
7 RECOMMENDED MEASUREMENT PRACTICE
8 COMPARING PEVQ TO OTHER METRICS
8.1 Defining Requirements
9 THE OPTICOM PEVQ LIBRARY
9.1 Contents of the PEVQ Software Package
9.1.1.1 Windows Version
9.1.1.2 Linux Version
9.2 Hard- and Software Requirements
9.3 Processing Speed of PEVQ
9.4 License Types
9.4.1 The Small Volume Business Model in Detail
9.5 Installation of the PEVQ Software Package
9.5.1 Installation of the Dongle and Dongle Driver
9.5.1.1 Windows
9.5.1.2 Linux
9.5.2 Installation of the OEM Module
9.5.2.1 Windows Systems
9.5.2.2 Linux Systems
9.6 The Demo Program
9.6.1 Getting started with the Demo Program
9.6.2 Possible Problems and Solutions
9.6.3 Compiling the Demo Program
9.6.3.1 Windows Systems
9.6.3.2 Linux Systems
9.7 The PEVQ OEM Library
9.7.1 File Structure of the OEM Library
9.7.1.1 Windows Systems
9.7.1.2 Linux Systems
9.7.2 Explanation of the API
9.7.3 The API Functions in Detail
9.7.3.1 PevqInit()
9.7.3.2 PevqLibRun()
9.7.3.3 PevqLibGetResultsPointer()
9.7.3.4 PevqLibFree()
9.7.3.5 PevqCreateLicenseKeyInfo()
9.7.3.6 PEVQ_ERRORCODE - Error Codes
9.7.3.7 PEVQ_RESULT_DATA - Results Returned by PEVQ
9.7.3.8 PEVQ_SETUP_DATA - Initialization Structure of PEVQ (optional)
9.7.4 Using PEVQ in your own Programs
9.7.4.1 Windows
9.7.4.2 Linux
10 REFERENCES
11 ABOUT OPTICOM
12 ANNEX
12.1 Version History
12.2 Hardcopy of the Demo Program - PevqOEMMain.c
12.3 Contact Information

1 Preface

3G and Triple-play networks (Audio + Video + Data) become more widespread every day. Their new applications and services include Video Telephony, Video Conferencing, Video Streaming and Video Podcasts (e.g. YouTube). IPTV, Mobile TV, and others address new fixed and mobile applications. The current networks have never been as powerful and reliable as they are today. A major key factor for their increasing success is that they satisfy their customers' high expectations while keeping costs down. Operators and service providers achieve this by employing powerful new technology for their setups as well as new measurement tools that help keep up a certain Quality-of-Experience (QoE) level.

One of the major use cases in next-generation networks is simulcast streaming (or broadcasting) of identical contents in various formats for different application scenarios. In this so-called "Triple Screen" scenario, a video content will typically be transmitted in high quality over cable or satellite networks in HDTV, a medium quality will be streamed to PC clients and laptops over the Internet, and the lowest quality will be presented on mobile multimedia devices, such as mobile phones, smartphones and tablet PCs (e.g. iPad). Triple Screen scenarios imply many steps of signal post-processing, including re-formatting (e.g. 16:9 vs. 4:3), re-scaling (e.g. from HD to VGA, or CIF), re-framing (e.g. from 50 fps to 25 fps), transcoding, and re-transmission over IP-based networks. The challenge for the test engineer is to maintain an acceptable Quality-of-Experience across the various formats, while achieving the best compromise in quality, considering the inherent limitations of each format.

With OPTICOM's Perceptual Evaluation of Video Quality (PEVQ) measurement tool you now have the possibility to identify and control your network's quality status. PEVQ evaluates the quality of video files based on perceptual measurement: reliably, objectively and fast. Find out where your network's bottlenecks are and fix sudden problems quickly. PEVQ estimates the video quality degradation caused by a transmission by analyzing the degraded video signal output from the network in relation to the quality of the original source. This approach is based on modeling the behavior of the human visual system and detecting abnormalities in the video signal, quantified by a variety of Key Performance Indicators (KPIs).

PEVQ has been developed as a universal video quality metric that can handle any video frame size, from low mobile quality up to High Definition contents. Following the Video Quality Experts Group (VQEG), we distinguish between PEVQ-MM for multimedia qualities, like QCIF, CIF and VGA resolutions, and PEVQ-HD for standard definition (SD) and HDTV applications. As a unique proposition, the very same PEVQ library can cover this wide quality range, thus offering a tool that helps to measure and compare video QoE for simulcast contents.

Due to the common testing layout, PEVQ can be well combined with PESQ, OPTICOM's well-established industry standard for telephony-band voice quality testing according to ITU-T Rec. P.862. In combination, the delay between video and audio, a typical cause of lip-sync issues, can be analyzed. For high-quality SD and HD video, PEVQ can be combined with PEAQ, the proven industry standard for CD-quality audio testing according to ITU-R Rec. BS.1387. This A/V combination allows for lip-sync analysis according to the EBU recommendations for TV broadcast.

PEVQ was developed by OPTICOM, the leading provider of signal-based perceptual measurement technology for voice, audio and video. PEVQ is partially based on the earlier PVQM technology developed by KPN Research (now part of TNO). PEVQ was independently benchmarked by VQEG, the Video Quality Experts Group. Based on the reported accuracy, PEVQ was standardized by the International Telecommunication Union in 2008 as part of the new ITU-T Rec. J.247 for multimedia applications. PEVQ was also validated by VQEG for HDTV in 2010.

NOTE: This manual covers the use of OPTICOM's OEM PEVQ library for developers who are looking for a video QoE software core to embed into a T&M product or system. In case you are not interested in an OEM module, but are actually looking for a ready-to-go analysis tool set, please feel free to inquire about OPTICOM's PEXQ Software Suite for Windows for voice, audio and video analysis.

1.1 What's New in PEVQ V4.0

The new release of PEVQ contains important changes compared to its predecessor, version 3.2:

- Improved HD and SD measurement accuracy
- New mapping for HDTV 720 and HDTV 1080
- Support for interlaced sequences
- Optional disabling of time alignment
- Extended range of FourCC (four-character code) input formats

1.2 Abbreviations

This manual uses the following abbreviations:

API Application Programming Interface

AVI Audio Video Interleave; multimedia container format

CIF Common Intermediate Format (Frame size = 352x288 pixels)

DMOS Difference Mean Opinion Score (Difference between SMOS and OMOS)

FourCC Four Character Code (4CC)

fps Frames per second

FR Full Reference (Metrics)

HDTV High Definition Television

ITU-T International Telecommunication Union, Standardization Sector

KPIs Key Performance Indicators

MOS Mean Opinion Score

NR No Reference (Metrics)

OMOS Objective Mean Opinion Score

PEAQ Perceptual Evaluation of Audio Quality

PESQ Perceptual Evaluation of Speech Quality

PEVQ Perceptual Evaluation of Video Quality

PSNR Peak Signal to Noise Ratio

PVQM Perceptual Video Quality Measure

QCIF Quarter CIF (Frame size = 176x144 Pixels)

RGB24 Definition of the color space Red, Green, Blue with 24-bit resolution

RR Reduced Reference (Metrics)

SDTV Standard Definition Television

SMOS Subjective Mean Opinion Score

VGA Video Graphics Array (Frame size = 640x480 Pixels)

YUVxxx Definition of a color space consisting of a luminance component (Y) and two chrominance components (U and V). xxx represents the number of samples per macro block for each color component.
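As a rough illustration of how the color format determines the raw frame size, the following sketch (the helper name and format identifiers are ours for illustration, not part of the PEVQ API) computes bytes per uncompressed frame for a few common pixel layouts; with 4:2:0 sampling, each chrominance plane is subsampled by a factor of two both horizontally and vertically:

```python
def frame_bytes(width: int, height: int, fmt: str) -> int:
    """Bytes per uncompressed frame for a few common pixel formats."""
    if fmt == "RGB24":      # packed RGB, 3 bytes per pixel
        return width * height * 3
    if fmt == "I420":       # planar YUV 4:2:0: full-size Y plane + quarter-size U and V planes
        return width * height * 3 // 2
    if fmt == "UYVY":       # packed YUV 4:2:2: 2 bytes per pixel
        return width * height * 2
    raise ValueError(f"unsupported format: {fmt}")

# Frame sizes for QCIF, CIF and VGA in the I420 (YUV 4:2:0) layout:
for name, (w, h) in {"QCIF": (176, 144), "CIF": (352, 288), "VGA": (640, 480)}.items():
    print(f"{name}: {frame_bytes(w, h, 'I420')} bytes/frame")
```

For the exact set of FourCC codes accepted by the library, refer to the API description in chapter 9.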

1.3 Organization of this Manual

Chapter 1 gives an overview of OPTICOM's Perceptual Evaluation of Video Quality (PEVQ) measurement tool.

Chapter 2 contains important legal notes regarding the usage of the PEVQ software.

Chapter 3 contains important definitions of terms which are used throughout this manual in later chapters. Some of these terms may have ambiguous meanings, and this chapter defines their intended use in this manual. Chapter 3 also explains some important concepts behind subjective scales.

Chapter 4 describes some video quality metrics, the recommendations which stand behind these metrics, and the validation procedure of PEVQ regarding these recommendations.

Chapter 5 describes PEVQ in detail. The main focus is on the structure of PEVQ. The KPIs derived from PEVQ are explained.

Chapters 6 and 7 give advice regarding the measurement methodology and the reporting of results.

Chapter 8 briefly outlines the procedure for selecting a specific algorithm for the evaluation of video quality.

Chapter 9 focuses on the OPTICOM PEVQ OEM library. It explains the installation as well as the Application Programming Interface (API) usage. Detailed technical information for implementers is provided here.

2 Legal Notes

THIS SOFTWARE IS LICENSED, NOT SOLD. ANY COMMERCIAL USE OF THIS SOFTWARE IS SUBJECT TO THE EXECUTION OF A PEVQ OEM LICENSE AGREEMENT WITH OPTICOM GMBH, BASED ON TERMS AND CONDITIONS TO BE AGREED. The following statement(s) shall be incorporated by LICENSEE into the product as defined in the PEVQ License Agreement:

Perceptual Evaluation of Video Quality (PEVQ®) measurement technology included in this product is protected by copyright and by European, US and other patents and is provided under license from

OPTICOM Dipl.-Ing. M. Keyhl GmbH, Erlangen, Germany - www.opticom.de


PEVQ is a registered trademark of OPTICOM (in selected countries)

Further statements shall be incorporated to

a. prohibit additional copying of the PEVQ software in whole or in part, other than is essential for the proper operation of the PEVQ software or for normal security back-up purposes;

b. prevent the End-User from modifying, translating, reverse-engineering or decompiling the PEVQ software except to the extent permitted by law;

c. require that the acknowledgement of the rights in the PEVQ software shall not be removed from the PEVQ software or any installation of it;

d. Portions of this software are copyright © 2006 The FreeType Project (www.freetype.org). All rights reserved.

Third party credits:

Portions of this software are copyright © 2006 The FreeType Project (www.freetype.org). All rights reserved.

3 Principles and Definitions

This chapter contains some general definitions of terms which are used throughout this manual. Most readers will be familiar with these terms, but since some of them are often used synonymously although they describe different aspects, we prefer to define our usage here.

The terms “Intrusive”, “Active” and “Full Reference” measurement, as well as their opposites, are very often used synonymously. However, they all have their distinct meanings, and the ITU, based on OPTICOM's comments as described in the following, is currently working on the corresponding clarification: “ITU-T G.100/P.10 Amendment 2, new definitions for inclusion in Recommendation P.10/G.100”

3.1 Full Reference, Reduced Reference and No Reference Metrics

Full Reference / No Reference (FR/NR) describe the type of measurement algorithm applied. An FR algorithm requires access to the undistorted reference signal as well as to the degraded signal; the measurement is based on an analysis of differences. A No Reference algorithm, in turn, needs access to the degraded signal only. As a consequence, the characteristics of the original signal are mostly unknown, so the measurement is based on the assumption that detected artifacts result solely from the network under test and do not represent characteristics of the original source signal.

PEVQ represents an FR algorithm and as such offers the highest possible accuracy and reliability, due to an analysis of the quality differences compared to the original reference. An example of an NR measure is ITU-T P.563 for voice quality assessment. NR methods are also often referred to as "single-ended" measures. NR measures may provide flexibility for the testing setup, but at the cost of significantly lower accuracy and reliability.

NOTE: A Reduced Reference (RR) algorithm is a special type of Full Reference measure that has been tailored to work at a significantly reduced bit-rate compared to the video signal bandwidth of the original reference signal. The design idea is to extract information useful for a comparison measurement from the source signal, without having to transmit the source signal to the measurement site. A typical application is e.g. on-line monitoring of a video transmission network. RR metrics are proposed in ITU-T Rec. J.246. Due to the need to embed the reduced reference signal in the transmitted video stream, the applicability is very much restricted to closed, embedded monitoring applications. As a common criticism it could be noted that in most network monitoring environments the full reference signal would be accessible anyway, so in light of the reduced accuracy of RR versus FR there is little benefit to justify the additional network complexity.

3.2 Intrusive versus Non-intrusive

Intrusive and non-intrusive, in turn, describe the way in which the data acquisition is performed. Intrusive means that a known signal is injected into the system under test, while non-intrusive means that measurements are performed with whatever content can be detected on e.g. a network. Please note that FR measurement algorithms can be used non-intrusively as well, if the reference signal is recorded at the input of the system under test while the degraded signal is recorded at its output.

3.3 Active versus Passive

Last but not least, the terms active versus passive describe the way in which the test system establishes the connection to the system under test in order to perform the measurement. A network sniffer, for example, is a passive system, since it simply spies on the network and measures whatever data comes along. The same is true for a voice quality test system which performs tests on arbitrary calls that are carried along the line. An active tester, for example, would explicitly set up a connection to another party (e.g. dial a number) before starting the data acquisition.

NOTE: Many combinations of the three pairs listed under 3.1, 3.2 and 3.3 are possible, but not every combination makes sense. One of the most common observations is that users are sometimes under the impression that they need a No Reference metric, because they cannot inject the reference signal into the network. While an NR metric might of course be one option, an FR metric applied passively - i.e. by recording a test sequence and comparing it to a known reference sequence stored on hard disk - might still be the superior approach. This is especially true because one would benefit from the higher degree of accuracy and repeatability of an FR metric.

3.4 Perceptual versus Non-Perceptual

Perceptual metrics like PEAQ, PESQ or PEVQ assess certain effects similarly to the way human beings would. Typically, such measurement algorithms try to model human perception. Perceptual modeling is not only used for the assessment of quality; other well-known perceptual algorithms are e.g. MP3 and AAC, which use perceptual models for the compression of music. Non-perceptual metrics are general physical or technical measures such as levels or PSNR. As with many physical measures, non-perceptual metrics may or may not correlate well with subjectively perceived quality of experience.
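For orientation, PSNR, the most common non-perceptual metric, can be sketched as follows. This is a minimal illustration for 8-bit luminance frames, not the KPI implementation used inside PEVQ; the frame data is invented:

```python
import numpy as np

def psnr_db(reference: np.ndarray, degraded: np.ndarray) -> float:
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames: no noise at all
    return 10.0 * np.log10(255.0 ** 2 / mse)  # 255 is the peak value for 8-bit samples

# Toy data: a flat gray QCIF-sized luminance plane vs. a copy with one distorted pixel.
ref = np.full((144, 176), 128, dtype=np.uint8)
deg = ref.copy()
deg[0, 0] = 138
print(f"PSNR = {psnr_db(ref, deg):.2f} dB")
```

Note that a high PSNR does not guarantee good perceived quality, which is exactly why perceptual metrics such as PEVQ exist.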

3.5 Subjective versus Objective

The terms subjective and objective are used to distinguish between the assessment of quality by human beings (subjective) on the one hand, and by a technical model of human perception, e.g. a computer program (objective), on the other. The persons scoring voice or video quality in a subjective test are often called subjects, while the algorithm performing a similar assessment in the objective domain is frequently called an objective model or method.

3.6 MOS Scales

In all fields of subjective testing and perceptual measurement, Mean Opinion Score (MOS) scales are widely used to report the results. A Mean Opinion Score is the result of averaging the individual Opinion Scores (OS) of a test population. Opinion scales are defined in the ITU recommendations P.800, BS.1116, BS.1534 and others. An OS scale simply defines a well-specified range of numbers, typically from 1 to 5, that describes the quality as it is assessed by human subjects. A common misconception, however, is that these scales are absolute and context-free. In reality, quite the opposite is the case, as explained in the following.
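As a minimal numerical illustration (the scores are invented), averaging individual Opinion Scores into a MOS, together with a simple 95% confidence interval, can be sketched as:

```python
from statistics import mean, stdev

# Hypothetical Opinion Scores of eight subjects on the usual 1..5 scale.
opinion_scores = [4, 3, 4, 5, 3, 4, 4, 2]

mos = mean(opinion_scores)  # the Mean Opinion Score
# Rough 95% confidence interval, assuming approximately normally distributed scores.
ci95 = 1.96 * stdev(opinion_scores) / len(opinion_scores) ** 0.5

print(f"MOS = {mos:.2f} +/- {ci95:.2f}")
```

Real subjective tests use far more subjects; the wide confidence interval here shows why a single person's opinion score is a poor estimate of the population MOS.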

NOTE: There is no such thing as an absolute MOS scale. MOS values are only valid within the context of the experiment that was conducted to generate them.

NOTE: A measured MOS score of e.g. 3.81 must be related to subjective Mean Opinion Scores derived over the whole test population, and not to the Opinion Scores of an individual person. While there is commonly a good correlation of measured results with the individual subjective scores of experienced engineers, deviating subjective opinion scores of individuals should never be taken as an indication of a measurement inaccuracy. Such discrepancies might simply point to the very limited statistical basis.

3.7 Quality

3.7.1 Definition and Difference between Subjective and Objective Quality Scores

Many definitions of quality have been published in the past, but the one that we like most is:

“Quality is the judgment of the difference between what we perceive and what we expect.” [JEKO00]

This statement clearly defines the limits of subjective experiments and objective methods. Subjective experiments are always biased by the expectations of the subjects. This is unavoidable and may already be caused by simple things, like e.g. the shape of the handset used for listening: subjects will expect more distortions from a handset which looks like a mobile phone than from a handset which looks like a traditional ISDN phone. Objective methods, on the other hand, naturally have no expectations at all. They are not sensitive to the context and will produce the same score for the same degradations, independent of the shape of the phone. In other words, what a perceptual metric measures as quality is only the perceived difference between two signals, and thus only a subset of the above definition.

Which quality statement is correct, the subjective or the objective one, is a philosophical question. For engineers and technicians the answer is given by the objective of the experiment and must be decided on a case-by-case basis. However, it is very important that the person performing the assessment is aware of this difference.

3.7.2 What is good Quality?

As outlined in Section 3.7.1, the perceived quality depends highly on the expectations of the subjects. In real life, this expectation depends on a multitude of factors which cannot all be foreseen here. One such factor could, for example, be the price of a service. For a free service, the expectations of the users may be low, and thus a relatively poor (objective) quality may be sufficient. For an expensive premium service, however, the same persons will most likely expect a much higher objective quality. The answer to the question of what we define as good quality is therefore not a simple one, and definitely not one to be answered by an R&D department, and even less by a measurement tool vendor.

One possible scenario for technical teams to find the right answer could be to make recordings of different objective quality and play them to the responsible persons from e.g. marketing. It is then up to the marketing department to decide what quality is really required. Of course, they should have listening examples of competitive products available too. Once the marketing department has made that decision, the technical teams can define the objective quality of the finally chosen file versions as the target quality for the service. In any case, one should be warned against using the verbal labels of the MOS scale scores as a reference. These are just anchor points for subjects in a subjective test and are themselves subject to the user's expectations. Also, one must never expect to achieve in real life the ideal results published for a specific codec. Typically, those scores were determined for an isolated codec without a real network.

3.7.3 How can Quality be related to Non-Perceptual Values?

Very often people try to relate a poor MOS score to other, non-perceptual values like e.g. delay, packet loss or attenuation. This may sometimes work, but in general it is not possible, since human perception works in entirely different dimensions than those rather technical parameters. Very often different artifacts in a test sample emphasize or mask each other; by looking at only one parameter it is impossible to model this behavior.

3.8 Context Dependency of MOS Scales

As outlined in 3.7, subjective quality scores depend on many different factors apart from human perception. Therefore it is generally not possible to reproduce subjective test results with absolute accuracy. This is not only true for two subjective tests conducted by different labs, but also for identical tests conducted by the same lab. The cause for this lies in the multitude of factors that affect our expectations (see also 3.7). Also, human beings tend to behave slightly differently at different times and on different days. An easy-to-understand example is when the ranges of qualities presented in two different experiments cover different ranges of degradations: a first experiment containing 90% very high quality samples and only 10% poor quality samples, and a second experiment containing 10% very high quality and 90% poor quality samples.

Suppose a small common set of identical test conditions is contained in both experiments. In this case, the scores for the common set achieved in the first experiment will most likely be worse than those achieved in the second experiment, although the subjects were given exactly the same instructions and thought that they were using the same MOS scale for their judgment of quality. The reason for this effect is that the range of qualities in a subjective experiment also defines the scale applied by the subjects.

NOTE: Naturally, an objective quality measure does not show such a behavior, since it is context-free by definition.

3.8.1 Relation between Objective and Subjective Scores

Before comparing subjective and objective scores, it is necessary to compensate for systematic differences between subjective experiments caused by differing expectations of the subjects or by context dependencies of the MOS scales. One method to do this is to apply, for each subjective experiment, a third-order monotonic polynomial mapping to the objective scores which minimizes the root mean square error (RMSE) between the two data sets. Doing this in the exact and correct way requires some special tools which are not easily available. For the general case, however, it is sufficient to draw a scatter plot, e.g. in MS Excel, with the subjective scores on one axis and the objective scores on the other, add a third-order regression line, and read off the correlation given for the regression line. The biggest risk in this case is that the regression line is not monotonic, but this can be checked visually. Using third-order polynomial mappings per experiment is also the recommended practice of VQEG and the ITU-T, and it is applicable to other types of measurements as well, such as voice quality assessment.
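The mapping procedure above can be sketched as follows. This is a simplified illustration with invented data points, not VQEG's exact evaluation tooling: fit a third-order polynomial from objective to subjective scores, verify that it is monotonic over the score range, and then report RMSE and correlation on the mapped scores:

```python
import numpy as np

# Hypothetical per-condition scores from one subjective experiment (1..5 MOS scale).
objective = np.array([1.2, 1.8, 2.3, 2.9, 3.4, 3.9, 4.3, 4.6])
subjective = np.array([1.0, 1.6, 2.4, 3.1, 3.5, 4.0, 4.2, 4.5])

# Third-order least-squares polynomial mapping objective -> subjective scores.
coeffs = np.polyfit(objective, subjective, deg=3)
mapped = np.polyval(coeffs, objective)

# Monotonicity check: the derivative must not change sign over the score range.
grid = np.linspace(objective.min(), objective.max(), 200)
slope = np.polyval(np.polyder(coeffs), grid)
monotonic = bool(np.all(slope >= 0) or np.all(slope <= 0))

rmse = float(np.sqrt(np.mean((mapped - subjective) ** 2)))
correlation = float(np.corrcoef(mapped, subjective)[0, 1])

print(f"monotonic={monotonic}, RMSE={rmse:.3f}, correlation={correlation:.3f}")
```

The RMSE and correlation after this per-experiment mapping, rather than the raw scores, are what should be compared across experiments.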

4 Video Quality Metrics

4.1 What Metrics to Choose - FR or NR?

Depending on the application, the difference in scope and the level of detail at which quality needs to be assessed determine whether an FR metric is the preferred choice, or whether a less accurate NR (or NR/IP) type of assessment can be considered a working alternative. As shown in Fig. 1, compared to an NR/IP metric, an FR metric like PEVQ will fully analyze the decoded video stream in a pixel-by-pixel comparison with the reference signal. This type of analysis provides the highest possible accuracy among all metrics proposed by the industry. Fig. 1 also indicates what level of detail will not be taken into account by IP-based estimates, namely the integrity of the video stream for the decoder, the quality of the decoder implementation including any error concealment, and, even more importantly, the presence of a valid video signal and its basic video quality starting from the source (see 3.1). Designed as an end-to-end comparison of the received video signal with the original source signal, only a perceptual FR metric like PEVQ will be able to adequately score the quality of the video signal as perceived by the subscriber.

Figure 1: Compared to an NR/IP metric, an FR metric like PEVQ will fully analyze the decoded video stream in a pixel-by-pixel comparison with the reference signal. This type of analysis provides the highest possible accuracy among all metrics proposed by the industry. [Source: Triple-Play Service Deployment, JDS Uniphase Corporation, 2007]

4.2 What Application to Target - MM, SD or HD?

Of course, the target application to be tested will define the screen size (= frame size), or resolution. The Video Quality Experts Group has distinguished frame sizes during their test series in the following way:

SD - Legacy-type TV resolutions, typically referred to as Standard Definition (SDTV), were tested in a first VQEG phase as part of J.144 (see 4.3.1 below).

MM - Multimedia applications include typical computer image formats like VGA, CIF and QCIF, and were the subject of the VQEG Multimedia test phase that finally led to J.247 (see 4.3.2) and J.246 (see 4.3.3).

HD - The latest VQEG phase focused on HDTV. Frame sizes of 720p and 1080i were picked as representative resolutions for this test series (see 4.5).

One of the major use cases with next-generation networks is simulcast streaming (or broadcasting) of identical contents for various application scenarios. Also referred to as a "Triple Screen" scenario, typically a video content will be transmitted in high quality over cable or satellite networks in HDTV, a medium quality will be presented for streaming to PC clients and Laptops over the Internet and the lowest quality will be presented on mobile multimedia devices, such as Mobiles, smartphones and Tablet-PCs (e.g. iPad).

For the test engineer, the challenge is to find a metric that can equally be applied to these various resolutions while correctly maintaining the individual quality ranges. The various VQEG findings and the resulting recommendations are reported in the next chapters. As summarized in Fig. 2, you will find that OPTICOM's PEVQ is the only video quality metric that has been validated by VQEG for both Multimedia and HDTV. The other technology providers reported in the VQEG recommendations could only handle a few isolated resolutions. It is obvious that when comparing identical video contents represented in different resolutions - i.e. quality ranges - one wants to apply the very same perceptual model for the assessment. Applying different metrics to the different resolutions would simply lead to incomparable results.

Figure 2: Matrix of recommendations resulting from VQEG benchmarks. OPTICOM's PEVQ is the only perceptual FR metric that was validated by VQEG for Triple Screen scenarios covering frame sizes from QCIF up to HD.

4.3 International Standardization

Standardization of metrics for video quality assessment has most recently been performed by the ITU in cooperation with the Video Quality Experts Group (VQEG). VQEG gathered leading experts in the field of video signal processing and benchmarked the different proposed methods. The results were forwarded to the ITU as well as to other standards bodies, and the ITU largely bases its decisions on which methods to recommend on these benchmarks. A joint rapporteur's group is the official link between VQEG and the ITU.

4.3.1 ITU-T J.144

In a first phase (1997-2000), VQEG investigated the performance of FR metrics suitable for assessing the quality of cable TV applications of standard definition (525/60 and 625/50). The findings were inconclusive and not suitable for a recommendation. Therefore, from 2001 to 2003, VQEG performed a second test, FR-TV Phase II. In 2003 VQEG concluded the report for Phase II: four QoE models were found to be suitable for the assessment of video signals of broadcast quality. These models were adopted in ITU-T Rec. J.144, titled “Objective perceptual video quality measurement techniques for digital cable television in the presence of a full reference”, released in 2004. J.144 covers plain compression using MPEG-2 and H.263 codecs at high bitrates. The video format is limited to NTSC and PAL resolutions at 60 Hz and 50 Hz interlaced, respectively.

Due to evolving IP-based transmission technology and the development of video codecs like H.264, J.144 was soon considered outdated. The recommended metrics are unsuitable for multimedia applications, in particular when IP-based transmission errors such as packet loss are involved. J.144 only allows for very small and static delays between the signals; in fact, it would fail completely in the case of frozen or skipped frames. Other image sizes, frame rates and distortions are also out of the scope of J.144. In consequence, J.144 was not well adopted by the industry.

4.3.2 ITU-T J.247

Driven by market demand along with the introduction of video services in 3G and IPTV networks, VQEG worked on a new project focussing on Multimedia applications rather than “legacy” TV applications. From 2006 to 2008, VQEG conducted a series of benchmarks in which over four dozen video experts from all over the world participated. Apart from Acreo, CRC, IRCCyN, France Telecom, FUB, Nortel, NTIA, and Verizon, who were engaged as independent testing laboratories, the industry consortium included KDDI, NTT, OPTICOM, Psytechnics, SwissQual, Symmetricom and Yonsei University. A total of 13 organizations performed subjective tests, leading to 41 subjective experiments.

This time the experiments included video sequences with a wide range of quality, and both compression and transmission errors were present in the test conditions. The experiments comprised 346 source video sequences and 5320 processed video sequences, and the clips were evaluated by 984 viewers. This benchmark series represents the largest video quality test ever conducted, which is why VQEG believes the results, beyond the Recommendations approved so far, will be beneficial in advancing the state of the art in video and multimedia quality assessment.

The new ITU-T Recommendation J.247, titled “Objective perceptual multimedia video quality measurement in the presence of a full reference”, is the most important outcome of the Multimedia benchmark and was officially released in August 2008. J.247 recommends four different models, but only two of them are suitable without limitations for all tested resolutions (QCIF, CIF and VGA). The two best performing models (one of them being PEVQ) behave statistically equivalently as far as best-case performance is concerned. Looking at worst-case performance (e.g. outliers), however, PEVQ is by far the best model and outperforms all others.
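The difference between best-case and worst-case performance can be made concrete with an outlier statistic: the fraction of clips whose predicted score misses the subjective MOS by more than a tolerance. The sketch below uses invented scores and a fixed tolerance for illustration only; VQEG's actual outlier definition ties the tolerance to the confidence interval of each subjective score.

```python
# Simplified illustration of best-case vs. worst-case model comparison.
# All score values are invented; VQEG derives the tolerance per clip from
# the confidence interval of the subjective rating, not from a constant.

def outlier_ratio(subjective, predicted, tolerance=0.5):
    """Fraction of clips mispredicted by more than `tolerance` MOS units."""
    outliers = sum(1 for s, p in zip(subjective, predicted)
                   if abs(s - p) > tolerance)
    return outliers / len(subjective)

mos     = [1.2, 2.0, 2.8, 3.5, 4.1, 4.6]   # subjective scores
model_a = [1.3, 2.1, 2.6, 3.6, 4.0, 4.5]   # small errors everywhere
model_b = [1.2, 2.0, 2.8, 3.5, 4.1, 2.9]   # perfect except one gross outlier

print(outlier_ratio(mos, model_a))  # -> 0.0, no gross mispredictions
print(outlier_ratio(mos, model_b))  # -> 1/6, one severe outlier
```

Two models can thus show near-identical average correlation while differing sharply in worst-case behavior, which is the comparison made in the paragraph above.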

NOTE: PEVQ has been further advanced since the code freeze for the standardization benchmark. Starting with version 3.0 of OPTICOM’s PEVQ OEM DLL, an optional ITU-T J.247 conforming operating mode can be selected during the initialization of the PEVQ OEM DLL. This strictly conforming mode forces the program to run in a fully ITU conforming version, while ignoring more recent advancements. However, unless mandatory for reasons of backward compatibility, OPTICOM recommends using the advanced mode (= default, see chapter 9.7.3.1), which will mostly maintain or even exceed the benchmarked accuracy.

4.3.3 ITU-T J.246

For completeness, it should be mentioned that ITU-T Recommendation J.246, titled “Perceptual audiovisual quality measurement techniques for multimedia services over digital cable television networks in the presence of a reduced bandwidth reference”, is another outcome of the Multimedia benchmark. A Reduced Reference algorithm (see section 3.1) is a special type of Full Reference measure that has been tailored to work with a significantly reduced bit-rate compared to the video signal bandwidth of the original reference signal. The design idea is to extract information useful for a comparison measurement from the source signal, without having to transmit the source signal to the sink. A typical application is the online monitoring of a digital cable TV network. Due to the need to embed the reduced reference signal in the transmitted video stream, the applicability is very much restricted to closed, embedded monitoring applications in consistently digital networks. As a common criticism it could be noted that in most network monitoring or control centers the full reference signal would be accessible anyway, so in light of the reduced accuracy of RR versus FR there is little benefit to justify the additional network complexity. OPTICOM’s PEVQ does not support an RR mode.
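The Reduced Reference idea described above can be sketched in a few lines: instead of transmitting the full reference video, only a compact feature stream is sent to the measurement point and compared against the same features extracted from the received video. The per-frame mean luminance used below is a deliberately naive stand-in for a real RR feature set, chosen only to show the data flow.

```python
# Toy Reduced Reference sketch: the "feature" is just per-frame mean luminance.
# Real RR metrics use far richer (but still low-bitrate) feature sets.

def extract_features(frames):
    """Reduce each frame (a list of pixel values) to a single number."""
    return [sum(f) / len(f) for f in frames]

def rr_distortion(ref_features, deg_frames):
    """Compare the transmitted reference features with the features of the
    degraded video that is available at the measurement point."""
    deg_features = extract_features(deg_frames)
    return sum(abs(r - d) for r, d in zip(ref_features, deg_features)) / len(ref_features)

reference = [[100, 110, 120], [130, 140, 150]]   # two tiny "frames"
degraded  = [[ 98, 111, 118], [125, 137, 149]]

sent = extract_features(reference)   # only these few numbers cross the network
print(rr_distortion(sent, degraded))  # -> 2.0 (mean absolute feature error)
```

The bandwidth saving is the whole point: `sent` is two numbers instead of six pixel values per frame, at the cost of the reduced accuracy discussed above.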

4.4 Independent Validation of the Accuracy of the Results for Multimedia Data

The accuracy of PEVQ for multimedia data has been validated independently by VQEG. The following text is a citation from “SYNOPSIS OF THE VIDEO QUALITY EXPERTS GROUP ON THE VALIDATION OF OBJECTIVE MODELS OF MULTIMEDIA QUALITY ASSESSMENT, PHASE I”, which is available online from VQEG. It describes the general test procedure; the details may be looked up in the full VQEG report.

“Forty one subjective experiments provided data against which model validation was performed. The experiments were divided between the three video resolutions and two frame rates (25fps and 30fps). A common set of carefully chosen video sequences were inserted identically into each experiment at a given resolution, to anchor the video experiments to one another and assist in comparisons between the subjective experiments. The subjective experiments included processed video sequences with a wide range of quality, and both compression and transmission errors were present in the test conditions. These forty one subjective experiments included 346 source video sequences and 5320 processed video sequences. These video clips were evaluated by 984 viewers.

Objective models were submitted prior to scene selection, PVS generation, and subjective testing, to ensure none of the models could be trained on the test material. 31 models were submitted, 6 were withdrawn, and 25 are presented in this report. A model is considered in this context to be a model type (i.e. FR or RR or NR) for a specified resolution (i.e. VGA or CIF or QCIF).”

The summary results from this benchmark, as far as PEVQ is concerned, are shown in Table 1. The table gives the averaged results only; individual results score up to 0.939.

Resolution   Correlation
VGA          0.825
CIF          0.808
QCIF         0.841

Table 1: Averaged correlation of PEVQ with subjective tests as conducted by VQEG. The averaged correlation is an average across 14 subjective experiments per resolution.
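The correlation figures in Table 1 (and in the following tables) are Pearson correlation coefficients between the objective scores predicted by the model and the subjective MOS values collected in each experiment. The sketch below shows the bare computation with invented scores; the actual VQEG evaluation additionally applies a monotonic fitting function per experiment before correlating.

```python
# Pearson correlation between subjective MOS and objective (predicted) MOS.
# The score values below are invented for illustration only.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length score series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov  = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / sqrt(varx * vary)

subjective_mos = [1.5, 2.2, 3.0, 3.6, 4.4]   # viewer ratings per clip
objective_mos  = [1.7, 2.1, 3.2, 3.4, 4.3]   # model predictions per clip

print(round(pearson(subjective_mos, objective_mos), 3))  # -> 0.989
```

A correlation of 1.0 would mean the model ranks and scales all clips exactly as the viewers did; the values in Table 1 are averages of such per-experiment coefficients.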

VQEG also conducted a secondary analysis which averaged all scores given to all files sent through each test setup (HRC). This is equivalent to performing several measurements on the same device/network under test, using different reference files. The results obtained by PEVQ are shown in Table 2.

Resolution   Correlation
VGA          0.914
CIF          0.919
QCIF         0.937

Table 2: Averaged correlation of PEVQ with subjective tests as conducted by VQEG. The averaged correlation is an average across 14 subjective experiments per resolution after averaging the data per HRC.
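The per-HRC analysis behind Table 2 averages both the subjective and the objective scores over all clips processed by the same test setup before correlating, which suppresses content-dependent prediction noise. A minimal sketch with invented data:

```python
# Per-HRC analysis: average subjective and objective scores over all clips
# that went through the same test setup (HRC), then correlate the averages.
# All score values are invented purely for illustration.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

# hrc_id -> list of (subjective MOS, objective MOS) pairs, one per source clip
scores = {
    "hrc1": [(4.2, 4.0), (4.4, 4.5), (4.0, 4.1)],
    "hrc2": [(3.1, 3.3), (2.9, 2.8), (3.0, 3.2)],
    "hrc3": [(1.8, 1.6), (2.2, 2.1), (2.0, 2.2)],
}

subj_avg = [sum(s for s, _ in v) / len(v) for v in scores.values()]
obj_avg  = [sum(o for _, o in v) / len(v) for v in scores.values()]

# Content-dependent errors largely cancel out in the per-HRC averages, so
# this correlation is typically higher than the per-clip correlation.
print(round(pearson(subj_avg, obj_avg), 3))
```

This mirrors the practical use case described in the text: repeated measurements on the same device or network under test, averaged before comparison.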

4.5 Independent Validation of the Accuracy of the Results for HDTV Data

An independent validation of the accuracy of PEVQ for HDTV data was conducted in 2010 by VQEG. The test addressed four video formats: 1080p at 25 and 29.97 frames per second, and 1080i at 50 and 59.94 fields per second. Six subjective experiments provided data against which model validation was performed. The subjective experiments included processed video sequences with a wide range of quality. The impairments examined were restricted to MPEG-2 and H.264, both coding only and coding plus transmission errors. A total of 14 models were submitted to this test; 6 models were withdrawn prior to publication of the results. Details may be looked up in the report “Report on the validation of Video Quality Models for High Definition Video Content”, available from VQEG.

The summary result from this benchmark as far as the submitted PEVQ model is concerned is shown in Table 3.

Resolution          Correlation
HDTV (PEVQ V 3.4)   0.65

Table 3: Averaged correlation of PEVQ V 3.4 with subjective tests as conducted by VQEG.

The model with the highest correlation in this test achieved a correlation of 0.86. It should be noted that, due to the limited number of subjective HD databases, PEVQ V 3.4, which was tested by VQEG, was mostly trained on much smaller resolutions than required for the HD test and was therefore certainly performing far below its potential. The real potential of the algorithm can be seen in the results of the VQEG Multimedia project, which led to the standardization of PEVQ in ITU-T Rec. J.247. Consequently, now that more HD databases have become available, OPTICOM has further improved PEVQ. By applying some very minor modifications to the algorithm, the prediction accuracy could be increased significantly for HD resolutions. The prediction accuracy achieved by PEVQ V 4.0 and above is now equal to the performance of the best model in the VQEG test. Since these results were achieved after the validation databases became available, they could of course no longer be taken into account within the VQEG HD benchmark.

The summary result for PEVQ V 4.0 on the VQEG HD databases is shown in Table 4.

Resolution          Correlation
HDTV (PEVQ V 4.0)   0.89

Table 4: Averaged correlation of PEVQ V 4.0 with subjective tests as conducted by VQEG.
