
Rate-Distortion Optimized Embedding


Jin Li and Shawmin Lei

Sharp Laboratories of America, Digital Video Division

5750 NW Pacific Rim Blvd., Camas, WA 98607, USA.

Tel. (360) 817-8486, Fax. (360) 817-8436, Email: lijin@...

ABSTRACT: In this paper, we improve the performance of the embedded coder by reorganizing its output bitstream in the rate-distortion (R-D) sense. In the proposed rate-distortion optimized embedding (RDE), coding bits are allocated first to the coefficient with the steepest R-D slope, i.e. the biggest distortion decrease per coding bit. To avoid transmitting the position of the coded coefficient, RDE uses the expected R-D slope, which can be calculated from the coded bits that have already been transmitted to the decoder. RDE also takes advantage of the probability estimation table of the QM-coder, so that the calculation of the R-D slope is just a lookup table operation. Extensive experimental results show that RDE significantly improves the coding efficiency.

1. INTRODUCTION

Embedded image coding has received great attention recently. Representative works on embedding include the embedded zerotree wavelet coding (EZW) proposed by Shapiro [1], the set partitioning in hierarchical trees (SPIHT) proposed by Said and Pearlman [2], and the layered zero coding (LZC) proposed by Taubman and Zakhor [3]. In addition to providing very good rate-distortion (R-D) performance, an embedded coder has the desirable property that the bitstream can be truncated at any point and still decode to an image of reasonable quality. This embedding property is ideal for a number of applications such as progressive image transmission, rate control and scalable coding. EZW and its derivatives achieve embedding by organizing the quantization bit-plane by bit-plane and entropy encoding the significance status or refinement bits. However, there is no guarantee that the rate-distortion performance is optimized at the truncation point.

It is well known that coding achieves optimality if the rate-distortion slopes of all coded coefficients are equal. Xiong and Ramchandran [4] fixed the rate-distortion slope and applied tree pruning to space-frequency quantization so that each coefficient is encoded with exactly the same rate-distortion slope. Although it achieves good coding efficiency, the scheme is very complex, and it loses the bitstream embedding property that is so useful in many applications. Li and Kuo [5] showed that the R-D slopes of significance identification and refinement coding are different, and that by placing significance identification before refinement coding in each layer, the coding efficiency can be improved. However, the improvement of [5] is fairly limited.

In this research, we propose rate-distortion optimized embedding (RDE), which allocates the available coding bits first to the coefficient with the steepest R-D slope, i.e. the biggest distortion decrease per coding bit. To keep the encoder and the decoder synchronized, the actual optimization is based on the expected R-D slope, which can be calculated on the decoder side. We take advantage of the probability estimation table of the QM-coder [6][7] to simplify the calculation of the R-D slope and speed up the algorithm. The algorithm significantly improves the coding efficiency.

The paper is organized as follows. The motivation for rate-distortion optimization is introduced in Section 2. The framework and implementation details of RDE are investigated in Section 3; we focus primarily on the two key steps of RDE, i.e., the R-D slope calculation and the coefficient selection. Extensive experimental results are presented in Section 4 to compare the performance of RDE with various other algorithms. Concluding remarks are given in Section 5.

2. CODING OPTIMIZATION BASED ON THE RATE-DISTORTION SLOPE

For convenience, let us assume that the image has already been converted into the transform domain. The transform used in embedded coding is usually a wavelet decomposition, but it can be the DCT as well, as in [10]. Let the index position in the image be denoted as i = (x, y), and let the coefficient at index position i be denoted as w_i. Suppose the coefficients have been normalized so that the absolute maximum value is 1. Because w_i lies between -1 and +1, it can be represented by a binary bit stream as:

±0.b_1 b_2 b_3 ... b_j    (1)

where b_j is the j-th most significant bit (MSB) of coefficient w_i. Traditional coding first determines the bit depth n (or quantization precision) and then encodes the coefficients one by one. For each coefficient w_i, the most significant n bits are bundled together with the sign,

±0.b_1 b_2 b_3 ... b_n    (2)

and encoded by the entropy coder. The quantization precision is predetermined before coding; if the coding bitstream is truncated, the coefficients coded after the truncation point are lost entirely. Embedded coding is distinct from traditional coding in that the image is coded bit-plane by bit-plane rather than coefficient by coefficient. It first encodes bit b_1 of coefficient w_1, then bit b_1 of w_2, then bit b_1 of w_3, and so on. After bit plane b_1 of all coefficients has been encoded, it moves on to bit plane b_2, and then to bit plane b_3. In embedded coding, each bit is an independent coding unit. Whenever a coefficient w_i becomes nonzero, its sign is encoded immediately afterwards. The embedded coding bitstream can be truncated at any point with graceful quality degradation, since at least part of each coefficient has been coded.
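As a concrete illustration of the two scan orders just described, the short Python sketch below (a hypothetical example with made-up coefficient values, not part of the original coder) extracts the sign and most significant bits of a few normalized coefficients and lists the order in which a coefficient-oriented coder and a bit-plane-oriented embedded coder would visit those bits.

```python
# Bit-plane scan order of embedded coding, with made-up coefficient values.

def to_bits(w, depth):
    """Sign and the `depth` most significant fractional bits of w, with |w| < 1."""
    sign, mag, bits = ('-' if w < 0 else '+'), abs(w), []
    for _ in range(depth):
        mag *= 2
        bit = int(mag)        # next fractional bit
        bits.append(bit)
        mag -= bit
    return sign, bits

coeffs = [0.8125, -0.3750, 0.0625]                 # already normalized to (-1, 1)
print([to_bits(w, 4) for w in coeffs])             # [('+',[1,1,0,1]), ('-',[0,1,1,0]), ('+',[0,0,0,1])]

# Coefficient-by-coefficient order (traditional coding) vs. bit-plane order (embedded coding):
traditional = [(i, j) for i in range(len(coeffs)) for j in range(4)]
embedded    = [(i, j) for j in range(4) for i in range(len(coeffs))]
print(embedded[:6])                                # all of bit plane b1 first, then b2, ...
```

Truncating the embedded order keeps the leading bit planes of every coefficient, which is exactly the graceful degradation described above.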

Although bit-plane oriented embedded coding is better than coefficient oriented coding, it is still not optimal in the rate-distortion (R-D) sense. We illustrate the motivation in Figure 1. Suppose there are five symbols a, b, c, d and e that can be coded independently. Coding each symbol requires a certain number of bits and yields a certain decrease in coding distortion. Coding all symbols sequentially gives the R-D curve shown as the solid line in Figure 1. If the coding order is reorganized so that the symbol with the steepest R-D slope is encoded first, we obtain the R-D curve shown as the dashed line in Figure 1. Though both curves reach the same final R-D point, the algorithm that follows the dashed curve performs better when the coding is truncated at an intermediate bit rate. The motivation of rate-distortion optimized embedding (RDE) is therefore to allocate the available coding bits first to the coefficient with the steepest R-D slope, i.e., the coefficient with the biggest distortion decrease per coding bit. Note that when all bits have been coded (i.e., the coding becomes lossless), RDE has exactly the same performance as its counterpart without R-D optimization. Over a practical range of bit rates, however, RDE outperforms traditional embedding significantly.
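The benefit of the reordering in Figure 1 can be checked with a toy numerical example. The rates and distortion decreases below are hypothetical and chosen only for illustration; the sketch compares the cumulative R-D curve of the natural coding order with the curve obtained by sorting the symbols by R-D slope.

```python
# Toy version of Figure 1: code the symbol with the steepest R-D slope first.
# The rate and distortion numbers are hypothetical.

symbols = {            # name: (rate in bits, distortion decrease)
    'a': (4.0, 2.0),
    'b': (2.0, 6.0),
    'c': (3.0, 1.5),
    'd': (1.0, 5.0),
    'e': (5.0, 2.5),
}

def rd_curve(order):
    """Cumulative (rate, remaining distortion) points when coding symbols in the given order."""
    remaining = sum(d for _, d in symbols.values())
    rate, curve = 0.0, [(0.0, remaining)]
    for name in order:
        r, d = symbols[name]
        rate, remaining = rate + r, remaining - d
        curve.append((rate, remaining))
    return curve

natural = rd_curve('abcde')
by_slope = rd_curve(sorted(symbols, key=lambda s: symbols[s][1] / symbols[s][0], reverse=True))

print(natural)    # distortion falls slowly at first
print(by_slope)   # same end point, lower distortion at every intermediate rate
```

Both orders spend the same total rate and remove the same total distortion, but the slope-sorted order dominates at every intermediate truncation point.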

3. IMPLEMENTATION OF RATE-DISTORTION OPTIMIZED EMBEDDING (RDE)

The framework of RDE is shown in Figure 2. In general, RDE calculates the R-D slope, i.e., the distortion decrease divided by the coding bits, for the candidate bit of every coefficient. It then encodes the coefficient w_i that has the steepest R-D slope, i.e., the biggest distortion decrease per coding bit. This coding strategy achieves embedding with optimal R-D performance: if the coding bit stream is truncated, the coding performance at that bit rate is optimal. Since the actual R-D slope is not available at the decoder, RDE uses the expected R-D slope, which can be calculated from the already coded bits. By doing so, the decoder can derive the same R-D slope and track the coefficient to be transmitted next exactly as the encoder does. Therefore, the location information of w_i does not need to be transmitted.

3.1 Calculation of Rate-Distortion Slope

The two key steps of RDE are coefficient selection and R-D slope calculation. In this subsection, we develop a very efficient algorithm that calculates the R-D slope with just a lookup table operation.

For coefficient w_i, assume the most significant n_i - 1 bits have already been coded, and the n_i-th bit is the next bit to be processed. We call n_i the current coding layer, and the n_i-th bit of w_i the candidate bit. The situation is shown in Figure 3, where the coded bits are marked with slashes and the candidate bits are marked with horizontal or vertical bars. As in traditional embedded coding, RDE classifies the coding of candidate bits into two categories – significance identification and refinement coding. For coefficient w_i, if all previously coded bits b_j are '0' for j = 1, ..., n_i - 1, the significance identification mode is used for the n_i-th bit; if any of the previously coded bits is '1', the refinement mode is used. An example is shown in Figure 3, where the bits undergoing significance identification are marked by vertical bars, and the bits undergoing refinement coding are marked by horizontal bars. The R-D slopes of the two modes are very different.
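Because the mode depends only on bits that both sides have already seen, the encoder and decoder can classify each candidate bit identically without any side information. A minimal sketch of this classification (hypothetical helper, not the authors' code):

```python
def coding_mode(coded_bits):
    """Mode of the next (candidate) bit of a coefficient.

    coded_bits: the already coded magnitude bits b_1 ... b_{n_i - 1}.
    Significance identification applies while every coded bit is still 0;
    once any coded bit is 1 (the coefficient is significant), refinement applies.
    """
    return 'refinement' if any(coded_bits) else 'significance identification'

print(coding_mode([0, 0, 0]))   # significance identification
print(coding_mode([0, 1, 0]))   # refinement
```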

Significance identification classifies coefficient w_i from the interval (-2T, 2T) into the interval (-2T, -T] of negative significance, (-T, T) of non-significance, or [T, 2T) of positive significance, where T = 2^(-n_i) is the width of the coding interval, determined directly by the current coding layer n_i. From the coding interval, we can easily derive the decoding reconstruction before significance identification as:

r_b = 0    (3)

and the decoding reconstructions for negative significance, non-significance and positive significance as:

r_{s-,a} = -1.5T,  r_{i,a} = 0,  r_{s+,a} = +1.5T    (4)

respectively. In significance identification, the coded symbol is highly biased towards non-significance. We encode the result of significance identification with a QM-coder, which estimates the probability of significance p_i with a state machine and then arithmetic-encodes the result. As shown in Figure 4, the QM-coder uses a context that is a 7-bit string, with each bit representing the significance status of one spatial neighbor coefficient or of the coefficient at the same spatial position in the parent band of the current coefficient w_i. By monitoring the pattern of past 0s ('insignificant') and 1s ('significant') under the same context (i.e., the same neighborhood configuration), the QM-coder estimates the probability of significance p_i of the current coding symbol. The probability estimation in the QM-coder is very simple, as it is just a state table transition. For details of the QM-coder and its probability estimation, we refer to [6][7]. Assuming the QM-coder performs close to the Shannon bound when coding the result of significance identification, we may calculate the expected coding rate as:

E[ΔR] = (1 - p_i) ΔR_insig + p_i ΔR_sig
      = (1 - p_i) [-log2(1 - p_i)] + p_i (-log2 p_i + 1)
      = p_i + H(p_i)    (5)

where H(p_i) is the entropy of a binary symbol whose probability of 1 equals p_i. Note that in (5), when the symbol becomes significant, one additional bit is needed to encode its sign. With the known probability of significance p_i, the expected distortion decrease of significance identification can be calculated as:

E[ΔD] = (1 - p_i) ΔD_insig + p_i ΔD_sig    (6)

where ΔD_sig and ΔD_insig can be further calculated by taking the probability-weighted average of the coding distortion decrease:

ΔD_insig = ∫_{-T}^{T} [(x - r_b)² - (x - r_{i,a})²] p(x) dx    (7)

ΔD_sig = ∫_{-2T}^{-T} [(x - r_b)² - (x - r_{s-,a})²] p(x) dx + ∫_{T}^{2T} [(x - r_b)² - (x - r_{s+,a})²] p(x) dx    (8)

By substituting (3) and (4) into (7), it becomes:

ΔD_insig = 0    (9)

There is no coding distortion decrease if w_i remains insignificant. To calculate ΔD_sig, we need the a priori probability distribution of coefficient w_i within the significance intervals (-2T, -T] and [T, 2T). Note that the probability distribution of w_i within the interval (-T, T) is irrelevant to the calculation of the distortion decrease. Assuming that the a priori probability distribution within the significance intervals is uniform, we can derive:

p(x) = 1/(2T), for T < |x| < 2T    (10)

Based on (3), (4), (8) and (10), we conclude that

ΔD_sig = 2.25T²    (11)

The expected distortion decrease for significance identification is therefore:

E[ΔD] = p_i · 2.25T²    (12)
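Equation (11), and hence (12), is easy to verify numerically: with r_b = 0, r_{s±,a} = ±1.5T and the uniform density 1/(2T) of (10), the integral in (8) evaluates to 2.25T² for any T. A small illustrative check:

```python
def delta_d_sig(T, n=20000):
    """Numerically evaluate eq. (8) under the uniform model of eq. (10)."""
    h = T / n                                    # step over the positive interval [T, 2T)
    p = 1.0 / (2 * T)                            # uniform density over T < |x| < 2T
    total = 0.0
    for k in range(n):
        x = T + (k + 0.5) * h                    # midpoint sample
        gain = x * x - (x - 1.5 * T) ** 2        # (x - r_b)^2 - (x - r_{s+,a})^2 with r_b = 0
        total += gain * p * h
    return 2 * total                             # the negative interval contributes the same by symmetry

for T in (0.5, 0.25, 0.125):
    print(delta_d_sig(T), 2.25 * T * T)          # the two columns agree up to numerical error
```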

It is straightforward to derive the R-D slope of significance identification for w_i as:

λ_sig = E[ΔD] / E[ΔR] = 2.25T² / (1 + H(p_i)/p_i)    (13)

The function

f_s(p_i) = 1 / (1 + H(p_i)/p_i)    (14)

is plotted in Figure 5. It is apparent that a symbol with a higher probability of significance has a larger rate-distortion slope and is thus favored to be encoded first.
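In code, (13) and (14) reduce to a one-line evaluation per probability value. The probabilities used below are illustrative, not the actual QM-coder state probabilities:

```python
import math

def binary_entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def slope_sig(p, T):
    """Expected R-D slope of significance identification, eq. (13)."""
    f_s = 1.0 / (1.0 + binary_entropy(p) / p)    # eq. (14)
    return 2.25 * T * T * f_s

for p in (0.05, 0.2, 0.5, 0.9):
    print(p, slope_sig(p, T=0.25))               # the slope grows with the probability of significance
```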

We may similarly calculate the R-D slope of the refinement coding mode. Refinement coding classifies coefficient w_i from the interval [S, S+2T) into the interval [S, S+T) or [S+T, S+2T), where T = 2^(-n_i) is again determined by the coding layer n_i, and S is the start of the refinement interval, i.e., the value indicated by the already coded bits of the coefficient. The decoding reconstruction before refinement coding is:

r_b = S + T    (15)

Depending on the coding result, the decoding reconstruction after refinement becomes:

r_{0,a} = S + 0.5T,  r_{1,a} = S + 1.5T    (16)

Because refinement coding is nearly equiprobable between '0' and '1', the expected coding rate is close to one bit:

E[ΔR] = 1.0    (17)

Similar to (6), the expected distortion decrease can be calculated as:

E[ΔD] = ∫_{S}^{S+T} [(x - r_b)² - (x - r_{0,a})²] p(x) dx + ∫_{S+T}^{S+2T} [(x - r_b)² - (x - r_{1,a})²] p(x) dx    (18)

Assuming that the a priori probability distribution within the interval [S, S+2T) is uniform, i.e., p(x) = 1/(2T), we conclude for refinement coding:

E[ΔD] = 0.25T²    (19)

The R-D slope of refinement coding is thus:

λ_ref = E[ΔD] / E[ΔR] = 0.25T²    (20)

Comparing (13) and (20), it is apparent that for the same coding layer n_i, the R-D slope of refinement coding is smaller than that of significance identification whenever the significance probability p_i is above 0.01. Thus, within one coding layer, significance identification should in general be placed before refinement coding, as indicated in [5].
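The 0.01 figure follows from comparing (13) and (20) at the same layer: λ_sig exceeds λ_ref whenever 2.25/(1 + H(p_i)/p_i) > 0.25, i.e. whenever H(p_i)/p_i < 8. A short numeric check of the crossover:

```python
import math

def binary_entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def sig_beats_ref(p):
    """True when significance identification has the larger R-D slope at the same layer."""
    return 2.25 / (1.0 + binary_entropy(p) / p) > 0.25

print(sig_beats_ref(0.009))   # False: refinement coding would have the steeper slope
print(sig_beats_ref(0.011))   # True: significance identification has the steeper slope
```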

We have also modeled the a priori probability distribution of coefficient w_i as Laplacian. In that case, the R-D slopes for significance identification and refinement coding become:

λ_sig = [2.25T² / (1 + H(p_i)/p_i)] · g_sig(σ, T)    (21)

λ_ref = 0.25T² · g_ref(σ, T)    (22)

where σ is the variance of the Laplacian distribution, which is again estimated from the already coded bits, and g_sig(σ, T) and g_ref(σ, T) are Laplacian modification factors of the form:

g_sig(σ, T) = (1/2.25) [0.75 + 3σ/T - 3e^(-T/σ) / (1 - e^(-T/σ))]    (23)

g_ref(σ, T) = 4 [0.75 + (2(σ/T) e^(-T/σ) - (σ/T)(1 + e^(-2T/σ))) / (1 - e^(-2T/σ))]    (24)

However, experiments show that the additional performance improvement provided by the Laplacian model is minor. Since the uniform probability distribution model is much simpler to implement, it is used throughout the experiments.

Because the probability of significance p_i determined by the QM-coder state is discrete, and the width of the interval T determined by the coding layer n_i is discrete, both R-D slopes (13) and (20) take on a discrete number of values. For fast calculation, (13) and (20) may be pre-computed and stored in a table indexed by the coding layer n_i and the probability state of the QM-coder. Computing the R-D slope is then only a table lookup. The number of entries M of the lookup table is:

M = 2 × N × L + N    (25)

where N is the maximum number of coding layers and L is the number of states in the QM-coder. In the current implementation of RDE, there are a total of 113 states in the QM-coder and a maximum of 20 coding layers, which gives a lookup table of size 4540.
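A sketch of the table pre-computation is given below. The per-state probabilities of the QM-coder are not listed in the paper, so qm_state_probs is a hypothetical placeholder, and the assumption that each state contributes two significance entries (one for each assignment of the more probable symbol) is our reading of the factor 2 in (25), not something stated explicitly in the text.

```python
import math

def binary_entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def build_slope_table(qm_state_probs, max_layers=20):
    """Pre-compute every possible R-D slope value of eq. (13) and (20).

    qm_state_probs: one probability of significance per QM-coder state
    (hypothetical placeholder values below; the real coder has 113 states).
    With 2 entries per (layer, state) for significance identification plus one
    refinement entry per layer, the table size is 2*N*L + N, as in eq. (25).
    """
    table = {}
    for n in range(1, max_layers + 1):
        T = 2.0 ** (-n)                              # interval width of coding layer n
        table[(n, None, 'ref')] = 0.25 * T * T       # eq. (20), independent of the QM state
        for s, p in enumerate(qm_state_probs):
            for mps, q in enumerate((p, 1.0 - p)):   # both assignments of the more probable symbol
                table[(n, (s, mps), 'sig')] = 2.25 * T * T / (1.0 + binary_entropy(q) / q)
    return table

# Illustrative use with made-up state probabilities:
table = build_slope_table([0.05, 0.2, 0.35, 0.5])
print(len(table))   # 2 * 20 * 4 + 20 = 180 entries
```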

3.2 Coefficient Selection and Flowchart of Rate-Distortion Optimization

The second key step in RDE is selecting the coefficient with the maximum R-D slope. Since an exhaustive search or a full sort over all image coefficients is computationally expensive, a threshold-based selection approach is introduced in this subsection. We first set an R-D slope threshold λ. The whole image is scanned, and the coefficients with an R-D slope greater than λ are encoded. The R-D slope threshold λ is then reduced by a certain factor, and the whole image is scanned again.

The coding operation of RDE is shown in Figure 6. It can be described step by step as follows; a code sketch of the complete loop is given after the list.

1) The image is decomposed by the wavelet transform.

2) The initial R-D slope threshold λ is set to λ_0, with:

λ_0 = 0.25 × 0.5² = 0.03125    (26)

3) The image is scanned and coded. The scan goes from the coarse scale to the fine scale and follows raster line order within each subband.

4) For each coefficient, its expected R-D slope is calculated. Depending on whether the coefficient is already significant, the R-D slope is calculated by (13) or (20), respectively. Note that the calculation of the R-D slope is only a table lookup indexed by the QM-coder state and the coding layer n_i.

5) The calculated R-D slope is compared with λ. If the R-D slope is smaller than λ, the coding proceeds to the next coefficient; only those coefficients with an R-D slope greater than λ are encoded.

6) The coefficient is encoded with significance identification or refinement coding. The bit of significance identification is encoded by the QM-coder with the context designated in Figure 4. The sign bit and the refinement bit are encoded by the QM-coder with a fixed probability of 0.5.

7) The coder checks whether the assigned coding rate has been reached. If not, it repeats from step 4) until the entire image has been scanned.

8) The R-D slope threshold is reduced by a factor of α:

λ ← λ/α    (27)

In our current implementation, α is set to 1.25. The coder then goes back to step 3) and scans the entire image again.
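Putting steps 2)-8) together, the following compact sketch shows the threshold-driven scanning loop. The helpers slope_of, encode_sig and encode_ref, and the coefficient objects, are hypothetical stand-ins for the wavelet coefficients and the QM-coder calls; this is an illustration of the control flow, not the authors' implementation.

```python
def rde_encode(coefficients, slope_of, encode_sig, encode_ref, target_bits,
               lam0=0.03125, alpha=1.25):
    """Threshold-driven RDE scanning loop, following steps 2)-8) above.

    coefficients: wavelet coefficients in scan order (coarse to fine, raster order within subbands)
    slope_of(c):  table lookup of the expected R-D slope of c's candidate bit, eq. (13)/(20)
    encode_sig / encode_ref: hypothetical QM-coder entry points, returning the bits spent
    """
    lam, bits_spent = lam0, 0
    while bits_spent < target_bits and lam > 1e-12:
        for c in coefficients:                      # step 3): scan the whole image
            if slope_of(c) < lam:                   # step 5): skip shallow-slope candidates
                continue
            if c.is_significant():                  # step 6): refinement coding
                bits_spent += encode_ref(c)
            else:                                   # step 6): significance identification
                bits_spent += encode_sig(c)
            if bits_spent >= target_bits:           # step 7): stop at the assigned rate
                return bits_spent
        lam /= alpha                                # step 8): lower the threshold and rescan
    return bits_spent
```

Because both sides compute the same expected slopes from the already decoded bits, the decoder can replay this loop and knows which coefficient each received bit belongs to.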

4. EXPERIMENTAL RESULTS

Extensive experiments are performed to compare RDE with two existing algorithms. The first is predictive embedded zerotree wavelet (PEZW) coding, an improved version of the original EZW that is currently in the MPEG-4 verification model (VM) 6.0; we use PEZW as a reference for state-of-the-art wavelet coding. The second is the layered zero coding (LZC) proposed by Taubman and Zakhor. RDE encodes the significance identification bits and the refinement bits in a way very similar to LZC; in essence, RDE shuffles the embedded bitstream of LZC to improve its performance. We therefore compare RDE with LZC to show the advantage of the rate-distortion optimization. Two test images are used. One is the well-known Lena image of size 512x512; the other is the first frame of Akiyo, an MPEG-4 test sequence of size 176x144 (QCIF) or 352x288 (CIF). The Lena image is decomposed by a 5-level 9-7 tap biorthogonal Daubechies filter [8], as in most of the coding literature. The Akiyo image is decomposed according to the specification in MPEG-4 VM 6.0, with a 9-3 tap biorthogonal Daubechies filter [8] of 4 levels for QCIF and 5 levels for CIF.

The performance of RDE versus LZC on Lena over the bit rate range 0.15 bpp to 1.0 bpp is shown in Figure 7. The R-D performance curve of LZC is plotted with a dashed line and that of RDE with a solid line. For clarity, the result is split into three subgraphs covering the bit rate ranges 0.15-0.30 bpp, 0.40-0.60 bpp and 0.70-1.00 bpp. RDE outperforms LZC by 0.2-0.4 dB over the entire bit rate range, and the performance advantage becomes larger at higher bit rates. The R-D performance curve of RDE is also much smoother than that of LZC. In Table 1, we compare PEZW, LZC and RDE on the Akiyo image at 10k, 20k and 30k bits for QCIF and at 25k, 50k and 70k bits for CIF. The PEZW results are taken from document [9]. The performance gap between LZC and RDE becomes larger for a low-detail image such as Akiyo: in general, RDE outperforms LZC by 0.4-0.8 dB and PEZW by 0.3-1.2 dB on Akiyo over a range of bit rates.

5. CONCLUSIONS

In this paper, we propose the rate-distortion optimized embedding (RDE) algorithm. By reordering the embedded bitstream so that the available coding bits are allocated first to the coefficient with the steepest R-D slope, i.e. the biggest distortion decrease per coding bit, RDE substantially improves the performance of embedded coding. To keep the encoder and the decoder synchronized, RDE uses the expected R-D slope, which can be calculated at the decoder side from the already coded bits. RDE also takes advantage of the probability estimation table of the QM-coder, so that the calculation of the R-D slope is just one lookup table operation. Extensive experimental results confirm that RDE significantly improves the coding efficiency.

6. REFERENCES

[1] J. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients", IEEE Trans. on Signal Processing, vol. 41, pp. 3445-3462, Dec. 1993.

[2] A. Said and W. Pearlman, "A new, fast and efficient image codec based on set partitioning in hierarchical trees", IEEE Trans. on Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 243-250, Jun. 1996.

[3] D. Taubman and A. Zakhor, "Multirate 3-D subband coding of video", IEEE Trans. on Image Processing, vol. 3, no. 5, pp. 572-588, Sept. 1994.

[4] Z. Xiong and K. Ramchandran, "Wavelet packet-based image coding using joint space-frequency quantization", First IEEE International Conference on Image Processing, Austin, Texas, Nov. 13-16, 1994.

[5] J. Li, P. Cheng and C.-C. J. Kuo, "On the improvements of embedded zerotree wavelet (EZW) coding", SPIE: Visual Communication and Image Processing, vol. 2601, Taipei, Taiwan, May 1995, pp. 1490-1501.

[6] W. Pennebaker and J. Mitchell (IBM), "Probability adaptation for arithmetic coders", US Patent 5,099,440, Mar. 24, 1992.

[7] D. Duttweiler and C. Chamzas, "Probability estimation in arithmetic and adaptive Huffman entropy coders", IEEE Trans. on Image Processing, vol. 4, no. 3, pp. 237-246, Mar. 1995.

[8] M. Antonini, M. Barlaud, P. Mathieu and I. Daubechies, "Image coding using wavelet transform", IEEE Trans. on Image Processing, vol. 1, pp. 205-230, Apr. 1992.

[9] J. Liang, "Results Report on Core Experiment T1: Wavelet Coding of I Pictures", ISO/IEC JTC1/SC29/WG11 MPEG97/M1796, Feb. 1997.

[10] J. Li, J. Li and C.-C. J. Kuo, "Layered DCT still image compression", IEEE Trans. on Circuits and Systems for Video Technology, vol. 7, no. 2, pp. 440-443, Apr. 1997.


Figure 1 Motivation for rate-distortion optimization.


Figure 2 The framework of rate-distortion optimized embedding.


Figure 3 Elements of rate-distortion optimized embedding. (The already coded bits are marked with slashes, the candidate bits of significance identification with vertical bars, and the candidate bits of refinement coding with horizontal bars.)

Figure 4 Context for the QM-coder (the current coding coefficient w_i and its context coefficients).


Figure 5 The rate-distortion slope modification factor for significance identification.

Figure 6 The flowchart of rate-distortion optimized embedding.

Figure 7 Comparison between RDE (solid line) and LZC (dashed line) for the Lena image over the bit rate ranges (a) 0.15-0.3 bpp, (b) 0.4-0.6 bpp, (c) 0.7-1.0 bpp.

Table 1 Experimental results for the Akiyo image.

Size  LZC rate (bits)  LZC PSNR Y/U/V (dB)   PEZW rate (bits)  PEZW PSNR Y/U/V (dB)  RDE rate (bits)  RDE PSNR Y/U/V (dB)
QCIF  10000            32.38/34.67/37.12     10256             32.3/34.2/36.9        10000            33.13/35.02/37.90
QCIF  20000            37.30/39.18/41.18     20816             37.5/39.1/41.0        20000            37.97/40.03/42.17
QCIF  30000            41.40/43.56/44.00     29240             40.8/41.5/42.6        30000            42.01/44.23/44.75
CIF   25000            34.69/38.25/40.38     25112             34.7/37.7/40.1        25000            35.03/38.23/41.03
CIF   50000            39.23/41.72/44.24     49016             39.3/41.3/43.6        50000            40.04/42.46/44.73
CIF   70000            42.35/44.58/46.12     70448             42.2/43.2/45.2        70000            43.03/44.35/46.39
