

* Thanks to Charles Engel for his insights on the issues presented herein and to Vinay Datar for motivating me to write the paper. Updates to the paper can be found at the author's homepage.

Cointegration and Forward and Spot Exchange Rate Regressions

by

Eric Zivot *

Department of Economics University of Washington

Box 353330

Seattle, WA 98195-3330
ezivot@u.washington.edu

June 12, 1997

Last Revision: September 29, 1998

Abstract

In this paper we investigate in detail the relationship between models of cointegration between the current spot exchange rate, s_t, and the current forward rate, f_t, and models of cointegration between the future spot rate, s_{t+1}, and f_t, and the implications of this relationship for tests of the forward rate unbiasedness hypothesis (FRUH). We argue that simple models of cointegration between s_t and f_t more easily capture the stylized facts of typical exchange rate data than simple models of cointegration between s_{t+1} and f_t and so serve as a natural starting point for the analysis of exchange rate behavior. We show that simple models of cointegration between s_t and f_t imply rather complicated models of cointegration between s_{t+1} and f_t. As a result, standard methods are often not appropriate for modeling the cointegrated behavior of (s_{t+1}, f_t)′ and we show that the use of such methods can lead to erroneous inferences regarding the FRUH.


1. Introduction

There is an enormous literature on testing if the forward exchange rate is an unbiased predictor of future spot exchange rates. Engel (1996) provides the most recent review. The earliest studies, e.g. Cornell (1977), Levich (1979) and Frenkel (1980), were based on the regression of the log of the future spot rate, s_{t+1}, on the log of the current forward rate, f_t. The results of these studies generally support the forward rate unbiasedness hypothesis (FRUH). Due to the unit root behavior of exchange rates and the concern about the spurious regression phenomenon illustrated by Granger and Newbold (1974), later studies, e.g. Bilson (1981), Fama (1984) and Froot and Frankel (1989), concentrated on the regression of the change in the log spot rate, Δs_{t+1}, on the forward premium, f_t - s_t. Overwhelmingly, the results of these studies reject the FRUH. The most recent studies, e.g., Hakkio and Rush (1989), Barnhart and Szakmary (1991), Naka and Whitney (1995), Hai, Mark and Wu (1997), Norrbin and Reffett (1996), Newbold et al. (1996), Clarida and Taylor (1997), Barnhart, McNown and Wallace (1998) and Luintel and Paudyal (1998), have focused on the relationship between cointegration and tests of the FRUH. The results of these studies are mixed and depend on how cointegration is modeled.

Since the results of Hakkio and Rush (1989), it is well recognized that the FRUH requires that s_{t+1} and f_t be cointegrated and that the cointegrating vector be (1,-1), and much of the recent literature has utilized models of cointegration between s_{t+1} and f_t. It is also true that the FRUH requires s_t and f_t to be cointegrated with cointegrating vector (1,-1), and only a few authors have based their analysis on models of cointegration between s_t and f_t. In this paper we investigate in detail the relationship between models of cointegration between s_t and f_t and models of cointegration between s_{t+1} and f_t and the implications of this relationship for tests of the FRUH. We argue that simple models of cointegration between s_t and f_t more easily capture the stylized facts of typical exchange rate data than simple models of cointegration between s_{t+1} and f_t and so serve as a natural starting point for the analysis of exchange rate behavior. Simple models of cointegration between s_t and f_t imply rather complicated models of cointegration between s_{t+1} and f_t. In particular, starting with a first order bivariate vector error correction model for (s_t, f_t)′ we show that the implied cointegrated model for (s_{t+1}, f_t)′ is nonstandard and does not have a finite VAR representation. As a result, standard VAR methods are not appropriate for modeling (s_{t+1}, f_t)′ and we show that the use of such methods can lead to erroneous inferences regarding the FRUH. In particular, we show that tests of the null of no-cointegration based on common cointegrated models for (s_{t+1}, f_t)′ are likely to be severely size distorted. In addition, using the implied triangular cointegrated representation for (s_{t+1}, f_t)′ we can explicitly characterize the OLS bias in the levels regression of s_{t+1} on f_t. Based on this representation, we can show that the OLS estimate of the coefficient on f_t in the levels regression is downward biased (away from one) even if the FRUH is true. Finally, we show that the results of Naka and Whitney (1995) and Norrbin and Reffett (1996) supporting the FRUH are driven by the specification of the cointegration model for (s_{t+1}, f_t)′.

The plan of the paper is as follows. In section 2, we discuss the relationship between cointegration and tests of the forward rate unbiasedness hypothesis. In section 3 we present some stylized facts of exchange rate data typically used in investigations of the FRUH. In section 4, we discuss some simple models of cointegration between s_t and f_t that capture the basic stylized facts about the data and we show the restrictions that the FRUH places on these models. In section 5, we consider models of cointegration between s_{t+1} and f_t that are implied by models of cointegration between s_t and f_t. In section 6, we use our results to reinterpret some recent findings concerning the FRUH reported by Naka and Whitney (1995) and Norrbin and Reffett (1996). Our concluding remarks are given in section 7.

2. Cointegration and the Forward Rate Unbiasedness Hypothesis: An Overview

The relationship between cointegration and the forward rate unbiasedness hypothesis has been discussed by several authors starting with Hakkio and Rush (1987). Engel (1996) provides a comprehensive review of this literature and serves as a starting point for the analysis in this paper. Following Engel (1996), the forward exchange rate unbiasedness hypothesis (FRUH) under rational expectations and risk neutrality is given by

E_t[s_{t+1}] = f_t,

(1)

where E_t[·] denotes expectation conditional on information available at time t. Using the terminology of Baillie (1989), the FRUH is an example of the "observable expectations" hypothesis. The FRUH is usually expressed as the levels relationship


s_{t+1} = f_t + ξ_{t+1},

(2)

where ξ_{t+1} is a random variable (rational expectations forecast error) with E_t[ξ_{t+1}] = 0. It should be kept in mind that rejection of the FRUH can be interpreted as a rejection of the model underlying E_t[·] or a rejection of the equality in (1) itself.

Two different regression equations have generally been used to test the FRUH. The first is the “levels regression”

s_{t+1} = μ + β_f f_t + u_{t+1}

(3)

and the null hypothesis that FRUH is true imposes the restrictions μ = 0, β_f = 1 and E_t[u_{t+1}] = 0. In early empirical applications authors generally focused on testing the first two conditions and either ignored the latter condition or informally tested it using the Durbin-Watson statistic. Most studies using (3) found estimates of β_f very close to 1 and hence supported the FRUH. Some authors, e.g. Barnhart and Szakmary (1991), Liu and Maddala (1992), Naka and Whitney (1995) and Hai, Mark and Wu (1997), refer to testing μ = 0, β_f = 1 as testing the forward rate unbiasedness condition (FRUC). Testing the orthogonality condition E_t[u_{t+1}] = 0, conditional on not rejecting FRUC, is then referred to as testing forward market efficiency under rational expectations and risk neutrality. Assuming s_t and f_t have unit roots, i.e., s_t, f_t ~ I(1) (see, for example, Meese and Singleton (1982), Baillie and Bollerslev (1989), Mark (1990), Liu and Maddala (1992), Crowder (1994), or Clarida and Taylor (1997) for empirical evidence), then the FRUH requires that s_{t+1} and f_t be cointegrated with cointegrating vector (1, -1) and that the stationary, i.e., I(0), cointegrating residual, u_{t+1}, satisfy E_t[u_{t+1}] = 0. Notice that testing FRUC is then equivalent to testing for cointegration between s_{t+1} and f_t with cointegrating vector (1,-1), and testing forward market efficiency is equivalent to testing that the forecast error, s_{t+1} - f_t, has conditional mean zero.

The FRUH assumes rational expectations and risk neutrality. Under rational expectations,

if agents are risk averse then a stationary time-varying risk premium exists and the relationship between s_{t+1} and f_t becomes

s_{t+1} = f_t - rp_t^re + ξ_{t+1},

(4)

where rp_t^re = f_t - E_t[s_{t+1}] represents the stationary rational expectations risk premium. As long as rp_t^re is stationary, s_{t+1} and f_t will be cointegrated with cointegrating vector (1, -1), but f_t will be a biased predictor of s_{t+1} provided rp_t^re is predictable using information available at time t. In this regard, the FRUC is a misleading acronym since, if the FRUC is true, forward rates are not necessarily unbiased predictors of future spot rates. Since the FRUC is equivalent to cointegration between s_{t+1} and f_t with cointegrating vector (1,-1), which implies that s_{t+1} and f_t trend together in the long run but may deviate in the short run, it is more appropriate to call this the long-run forward rate unbiasedness condition (LRFRUC).

Several authors, e.g. Meese and Singleton (1982), Meese (1989) and Isard (1995), have stated that since s_t and f_t have unit roots the levels regression (3) is not a valid regression equation because of the spurious regression problem described in Granger and Newbold (1974). However, this is not true if s_{t+1} and f_t are cointegrated. What is true is that if s_{t+1} and f_t are cointegrated with cointegrating vector (1, -1), which allows for the possibility of a stationary time varying risk premium so that u_{t+1} in (3) is I(0), then the OLS estimates from (3) will be super consistent (converge at rate T instead of rate T^{1/2}) for the true value β_f = 1 but generally not efficient and biased away from 1 in finite samples, so that the asymptotic distributions of t-tests and F-tests on μ and β_f will follow non-standard distributions; see Corbae, Lim and Ouliaris (1992). Hence, even if the FRUH is not true due to the existence of a stationary risk premium so that (3) is a misspecified regression, OLS on (3) still gives a consistent estimate of β_f = 1. More importantly, there are simple modifications to OLS that yield asymptotically unbiased and efficient estimates of the parameters of (3) in the presence of general serial correlation and heteroskedasticity, and these modifications should be used to make inferences about the parameters in the levels regression. In this sense, the levels regression (3) under cointegration is asymptotically immune to the omission of a stationary risk premium. Hai, Mark and Wu (1997) use Stock and Watson's (1993) dynamic OLS estimator on the levels regression (3) and provide evidence that s_{t+1} and f_t are cointegrated with cointegrating vector (1, -1).

The second regression equation used to test the FRUH is the “differences equation”

Δs_{t+1} = μ* + α_s(f_t - s_t) + u*_{t+1}

(5)

and the null hypothesis that FRUH is true imposes the restrictions μ* = 0, α_s = 1 and E_t[u*_{t+1}] = 0. Empirical results based on (5), surveyed in Engel (1996), overwhelmingly reject the FRUH. In fact, typical estimates of α_s across a wide range of currencies and sampling frequencies are significantly negative. This result is often referred to as the forward discount anomaly, forward discount bias or forward discount puzzle and seems to contradict the results based on the levels regression (3). Given that s_t, f_t ~ I(1), for (5) to be a "balanced regression" (i.e., all variables in the regression are integrated of the same order) the forward premium, f_t - s_t, must be I(0) or, equivalently, f_t and s_t must be cointegrated with cointegrating vector (1,-1). Assuming that covered interest rate parity holds, the forward premium is simply the interest rate differential between the respective countries and there are good economic reasons to believe that such differentials do not contain a unit root. Hence, tests of the FRUH based on (5) implicitly assume that the forward premium is I(0) and so such tests are conditional on f_t and s_t being cointegrated with cointegrating vector (1,-1). In this respect, (5) can be thought of as one equation in a particular vector error correction model (VECM) for (f_t, s_t)′. Horvath and Watson (1995) and Clarida and Taylor (1997) use VECM-based tests and provide evidence that f_t and s_t are cointegrated with cointegrating vector (1,-1).
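As a concrete illustration of how the differences regression (5) is estimated in practice, the sketch below simulates a persistent forward premium and an FRUH-consistent spot rate and then runs the OLS regression of Δs_{t+1} on f_t - s_t. The series and parameter values are illustrative assumptions, not the data used in this paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 300
prem = np.zeros(T)                    # forward premium f_t - s_t, persistent AR(1)
spot = np.zeros(T)                    # log spot rate
for t in range(1, T):
    prem[t] = 0.9 * prem[t - 1] + 0.001 * rng.standard_normal()
    # FRUH-consistent DGP: s_{t+1} = f_t + noise = s_t + (f_t - s_t) + noise
    spot[t] = spot[t - 1] + prem[t - 1] + 0.035 * rng.standard_normal()
forward = spot + prem                 # log forward rate

ds_lead = np.diff(spot)               # Delta s_{t+1}
X = sm.add_constant(prem[:-1])        # regressor: f_t - s_t
fit = sm.OLS(ds_lead, X).fit()
print(fit.params)                     # [mu*, alpha_s]; FRUH imposes mu* = 0, alpha_s = 1
print(fit.tvalues[1])                 # t-statistic on alpha_s
```

With actual monthly data the estimated slope is typically significantly negative, which is the forward discount anomaly discussed above.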

As noted by Fama (1984), the negative estimates of α_s are consistent with rational expectations and market efficiency and imply certain restrictions on the risk premium. To see this, note that under rational expectations we may write

Δs_{t+1} = (f_t - s_t) - rp_t^re + ξ_{t+1},

(6)

so that the difference regression (5) is misspecified if risk neutrality fails. Since all variables in (6) are I(0), if the risk premium is correlated with the forward premium then the OLS estimate of α_s in the standard differences regression (5), which omits the risk premium, will be biased away from the true value of 1. Hence the negative estimates of α_s from (5) can be interpreted as resulting from omitted variables bias. As discussed in Fama (1984), for omitted variables bias to account for negative estimates of α_s it must be true that cov(E_t[s_{t+1}] - s_t, rp_t^re) < 0 and var(rp_t^re) > var(E_t[s_{t+1}] - s_t). Hence, as Engel (1996) notes, models of the foreign exchange risk premium should be consistent with these two inequalities.

The tests of the FRUH based on (3) and (5) involve cointegration either between s_{t+1} and f_t or between f_t and s_t. As Engel (1996) points out, since

s_{t+1} - f_t = Δs_{t+1} - (f_t - s_t),

it is trivial to see under the assumption that f_t and s_t are I(1) that (i) if s_t and f_t are cointegrated with cointegrating vector (1,-1) then s_{t+1} and f_t must be cointegrated with cointegrating vector (1,-1); and (ii) if s_{t+1} and f_t are cointegrated with cointegrating vector (1,-1) then s_t and f_t must be cointegrated with cointegrating vector (1,-1). Cointegration models for (s_t, f_t)′ and (s_{t+1}, f_t)′ can both be used to describe the data and test the FRUH, but the form of the models used can have a profound impact on the resulting inferences. For example, we show that a simple first order vector error correction model for (s_t, f_t)′ describes monthly data well and leads naturally to the differences regression (5), from which the FRUH is easily rejected. In contrast, we show that some simple first order vector error correction models for (s_{t+1}, f_t)′, which are used in the empirical studies of Naka and Whitney (1995) and Norrbin and Reffett (1996), miss some important dynamics in monthly data and as a result indicate that the FRUH appears to hold. Hence, misspecification of the cointegration model for (s_{t+1}, f_t)′ can explain some of the puzzling empirical results concerning tests of the FRUH.

In the next section, we describe some stylized facts of monthly exchange rate data that are typical in the analysis of the FRUH. In the remaining sections we use these facts to motivate certain models of cointegration for (s_t, f_t)′ and (s_{t+1}, f_t)′ to support our claims regarding misspecification and tests of the FRUH.

3. Some Stylized Facts of Typical Exchange Rate Data

Let f_t denote the log of the forward exchange rate in month t and s_t denote the log of the spot exchange rate. We focus on monthly data for which the maturity date of the forward contract is the same as the sampling interval to avoid modeling complications created by overlapping data. For our empirical examples, we consider forward and spot rate data (all relative to the U.S. dollar) on the pound, yen and Canadian dollar taken from Datastream. Figure 1 shows time plots of s_{t+1}, f_t - s_t (forward premium), and s_{t+1} - f_t (forecast error) for the three currencies and Table 1 gives some summary statistics of the data. Spot and forward rates behave very similarly and exhibit random walk type behavior. The forward premiums are all highly autocorrelated but the forecast errors show very little autocorrelation. The variances of Δs_{t+1} and Δf_{t+1} are roughly ten times larger than the variance of f_t - s_t and are similar to the variance of s_{t+1} - f_t. For all currencies, Δs_{t+1}, Δf_{t+1} and s_{t+1} - f_t are negatively correlated with f_t - s_t. Any model of cointegration with cointegrating vector (1, -1) for (s_t, f_t)′ or (s_{t+1}, f_t)′ should capture these basic stylized facts.
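The following sketch computes the statistics behind these stylized facts for any pair of log spot and forward rate series; the inputs are placeholders for whatever monthly series the reader has at hand, not the Datastream data used here.

```python
import numpy as np

def acf1(x):
    """First-order sample autocorrelation."""
    x = x - x.mean()
    return np.dot(x[1:], x[:-1]) / np.dot(x, x)

def stylized_facts(spot, forward):
    prem = forward - spot                    # forward premium f_t - s_t
    ds = np.diff(spot)                       # Delta s_{t+1}
    fe = spot[1:] - forward[:-1]             # forecast error s_{t+1} - f_t
    return {
        "acf1 premium": acf1(prem),          # highly autocorrelated in typical data
        "acf1 forecast error": acf1(fe),     # close to zero in typical data
        "var(ds)/var(prem)": ds.var() / prem.var(),                 # roughly 10 in typical data
        "corr(ds, lagged prem)": np.corrcoef(ds, prem[:-1])[0, 1],  # negative in typical data
    }
```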

4. Models of Cointegration between f_t and s_t

4.1 Vector error correction representation


The stylized facts of the monthly exchange rate data reported in the previous section can be captured by a simple cointegrated VAR(1) model for y_t = (f_t, s_t)′. This simple model has also recently been used by Godbout and van Norden (1996). Before presenting the empirical results, we begin this section with a review of the properties of such a model. The general bivariate VAR(1) model for y_t is

y_t = μ + Φy_{t-1} + ε_t,

where ε_t ~ iid(0, Σ) and Σ has elements σ_ij (i,j = f,s), and can be reparameterized as

Δy_t = μ + Πy_{t-1} + ε_t,    (7)

where Π = Φ - I. Under the assumption of cointegration, Π has rank 1 and there exist 2 × 1 vectors α and β such that Π = αβ′. Using the normalization β = (1, -β_s)′, (7) becomes the vector error correction model (VECM)

Δf_t = μ_f + α_f(f_{t-1} - β_s s_{t-1}) + ε_ft,    (8a)
Δs_t = μ_s + α_s(f_{t-1} - β_s s_{t-1}) + ε_st.    (8b)

Since spot and forward rates often do not exhibit a systematic tendency to drift up or down, it may be more appropriate to restrict the intercepts in (8) to the error correction term. That is, μ_f = -α_fμ_c and μ_s = -α_sμ_c. Under this restriction s_t and f_t are I(1) without drift and the cointegrating residual, f_t - β_s s_t, is allowed to have a nonzero mean μ_c.

With the intercepts in (8) restricted to the error correction term, the VECM can be solved to give a simple AR(1) model for the cointegrating residual β′y_t - μ_c = f_t - β_s s_t - μ_c. Premultiplying (7) by β′ and rearranging gives

f_t - β_s s_t - μ_c = φ(f_{t-1} - β_s s_{t-1} - μ_c) + η_t,    (9)

where φ = 1 + β′α = 1 + (α_f - β_sα_s) and η_t = β′ε_t = ε_ft - β_sε_st. Since (9) is simply an AR(1) model, the cointegrating residual is stable and stationary if |φ| = |1 + (α_f - β_sα_s)| < 1. Notice that if α_f = β_sα_s then the cointegrating residual is I(1) and f_t and s_t are not cointegrated.
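To make the mapping from the VECM (8) to the AR(1) representation (9) concrete, the sketch below simulates (8) with the restricted intercept for illustrative parameter values (assumptions, not estimates from our data) and checks that the cointegrating residual has first-order autocorrelation close to φ = 1 + (α_f - β_sα_s).

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
alpha_f, alpha_s, beta_s, mu_c = -0.10, -0.02, 1.0, 0.002   # illustrative values
sigma, rho_fs = 0.035, 0.99
cov = sigma**2 * np.array([[1.0, rho_fs], [rho_fs, 1.0]])

f = np.zeros(T)
s = np.zeros(T)
for t in range(1, T):
    ec = f[t - 1] - beta_s * s[t - 1] - mu_c              # error correction term
    e_f, e_s = rng.multivariate_normal([0.0, 0.0], cov)
    f[t] = f[t - 1] + alpha_f * ec + e_f                  # (8a) with restricted intercept
    s[t] = s[t - 1] + alpha_s * ec + e_s                  # (8b) with restricted intercept

z = f - beta_s * s - mu_c                                 # cointegrating residual
phi_implied = 1 + (alpha_f - beta_s * alpha_s)
zc = z - z.mean()
phi_sample = np.dot(zc[1:], zc[:-1]) / np.dot(zc[:-1], zc[:-1])
print(phi_implied, phi_sample)                            # should be close for large T
```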

The exogeneity status of spot and forward rates with regard to the cointegrating parameters α and β was the focus of attention in Norrbin and Reffett (1996), so it is appropriate to discuss this issue in some detail. Exogeneity issues in error correction models are discussed at length in Johansen (1992, 1995), Banerjee et al. (1993), Urbain (1993), Ericsson and Irons (1994) and Zivot (1998). For our purposes, weak exogeneity of spot or forward rates for the cointegrating parameters α and β places restrictions on the parameters of the VECM (8). In particular, if f_t is weakly exogenous with respect to (α_s, β_s)′ then α_f = 0 and efficient estimation of the cointegrating parameters can be made from the single equation conditional error correction model

Δs_t = μ_s + α_s(f_{t-1} - β_s s_{t-1}) + γ_sΔf_t + ν_st,    (10a)

where γ_s = σ_fs/σ_ff and ν_st is the corresponding conditional error. Similarly, if s_t is weakly exogenous with respect to (α_f, β_s)′ then α_s = 0 and estimation can be based on

Δf_t = μ_f + α_f(f_{t-1} - β_s s_{t-1}) + γ_fΔs_t + ν_ft,    (10b)

where γ_f = σ_fs/σ_ss.

If β_s = 1 then the forward premium is I(0) and follows an AR(1) process, and the VECM (8) becomes

Δf_t = μ_f + α_f(f_{t-1} - s_{t-1}) + ε_ft,    (11a)
Δs_t = μ_s + α_s(f_{t-1} - s_{t-1}) + ε_st.    (11b)

Notice that (11b) is simply the standard differences regression (5) used to test the FRUH. Further, if α_f and α_s are of the same sign and magnitude then the implied value of φ in (9) is close to 1, and this corresponds to the stylized fact that the forward premium is stationary but very highly autocorrelated. Also, the implied innovation variance of the forward premium from (9) is σ_00 = σ_ff + σ_ss - 2ρ_fs(σ_ffσ_ss)^{1/2}, which will be very small relative to the variances of Δf_t and Δs_t given the stylized facts that σ_ff ≈ σ_ss and ρ_fs ≈ 1.
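A quick numeric check of these two implications, using magnitudes of the same order as the stylized facts (the particular numbers are illustrative assumptions):

```python
import numpy as np

alpha_f, alpha_s = -0.9, -0.8          # same sign and similar magnitude (illustrative)
sigma_ff = sigma_ss = 0.035**2         # innovation variances of f and s
rho_fs = 0.99                          # near-unit correlation between the innovations

phi = 1 + (alpha_f - alpha_s)          # implied AR(1) coefficient of the premium (beta_s = 1)
sigma_00 = sigma_ff + sigma_ss - 2 * rho_fs * np.sqrt(sigma_ff * sigma_ss)

print(phi)                    # 0.9: premium is stationary but highly autocorrelated
print(sigma_00 / sigma_ff)    # 0.02: premium innovation variance is tiny relative to var(eps_f)
```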

The FRUH places testable restrictions on the VECM (8). Necessary conditions for the FRUH to hold are (i) s_t and f_t are cointegrated, (ii) β_s = 1 and (iii) μ_c = 0. In addition, the FRUH requires that α_s = 1 in order for the forecast error in (2) to have conditional mean zero. It is important to stress that, together, these restrictions limit both the long-run and short-run behavior of spot and forward rates. Applying these restrictions, (8), led one period, becomes

Δf_{t+1} = α_f(f_t - s_t) + ε_{f,t+1},    (12a)
Δs_{t+1} = (f_t - s_t) + ε_{s,t+1}.    (12b)

Notice that the FRUH requires that the expected change in the spot rate is equal to the forward premium or, equivalently, that the adjustment to long-run equilibrium occurs in one period. The change in the forward rate, on the other hand, is directly related to the persistence of interest rate differentials, now measured by α_f since φ = 1 + (α_f - 1) = α_f. Stability of the VECM under the FRUH requires that |α_f| < 1. Thus, the FRUH is consistent with a highly persistent forward premium.

The representation in (12) shows that weak exogeneity of spot rates with respect to the cointegrating parameters is inconsistent with the FRUH, because if spot rates are weakly exogenous then α_s = 0 and the FRUH cannot hold. In addition, if the FRUH is true and forward rates are weakly exogenous then (12) cannot capture the dynamics of typical data. To see this, suppose that forward rates are weakly exogenous so that α_f = 0. Since σ_ss ≈ σ_ff ≈ σ_sf = σ², it follows that γ_s ≈ γ_f ≈ 1. If μ_s = 0, α_s = 1 and β_s = 1, then (10a) becomes

s_t = f_{t-1} + Δf_t + ν_st,

which simply states that the current spot rate is equal to the current forward rate plus a white noise error. This result is clearly inconsistent with the data since it implies that the forward premium is serially uncorrelated.

4.2 Phillips' triangular representation

Another useful representation of a cointegrated system is Phillips' (1991) triangular representation, which is similar to the triangular representation of a limited information simultaneous equations model. This representation is most useful for studying the asymptotic properties of cointegrating regressions, and Baillie (1989) has advocated its use for testing rational expectations restrictions in cointegrated VAR models. This representation is also used by Naka and Whitney (1995) to test the FRUH. The general form of the triangular representation for y_t is

f_t = μ_c + β_s s_t + u_ft,    (13a)
s_t = s_{t-1} + u_st,    (13b)

where the vector of errors u_t = (u_ft, u_st)′ = (f_t - β_s s_t - μ_c, Δs_t)′ has the stationary moving average representation u_t = Ψ(L)e_t, where Ψ(L) = Σ_{k=0}^∞ Ψ_k L^k with Σ_{k=0}^∞ k‖Ψ_k‖ < ∞, and e_t is i.i.d. with mean zero and covariance matrix V. Equation (13a) models the (structural) cointegrating relationship and (13b) is a reduced form relationship describing the stochastic trend in the spot rate. For a given VECM representation, the triangular representation is simply a reparameterization. For the VECM (8) with the restricted constant, the derived triangular representation is given by (13a)-(13b) with

u_ft = φu_{f,t-1} + η_t,    (13c)
u_st = α_s u_{f,t-1} + ε_st,    (13d)

and φ, α_s, η_t and ε_st are as previously defined. Equation (13c) models the disequilibrium error (which equals the forward premium if β_s = 1) as an AR(1) process and (13d) allows the lagged error to affect the change in the spot rate. Let e_t = (η_t, ε_st)′. Then the vector u_t = (u_ft, u_st)′ = (f_t - β_s s_t - μ_c, Δs_t)′ has the VAR(1) representation u_t = Cu_{t-1} + e_t, where

C = [[φ, 0], [α_s, 0]],    V = [[σ_00, σ_0s], [σ_0s, σ_ss]].

Hence, Ψ(L) = (I - CL)^{-1}. As noted previously, σ_00 is very small relative to σ_ss and σ_0s = σ_fs - σ_ss.

Phillips and Loretan (1991) and Phillips (1991) show how the triangular representation of a cointegrated system can be used to derive the asymptotic properties of the OLS estimates of the cointegration parameters. For our purposes, the most important result is that the OLS estimate of β_s from (13a) is asymptotically unbiased and efficient only if u_ft and u_st are contemporaneously uncorrelated and there is no feedback between u_ft and u_st (i.e., s_t is weakly exogenous for β_s and u_ft does not Granger cause u_st and vice versa). These correlation and feedback effects can be expressed in terms of specific components of the long-run covariance matrix of u_t. The long-run covariance matrix of u_t is defined as

Ω = Σ_{k=-∞}^{∞} E[u_0 u_k′] = Ψ(1)VΨ(1)′ = (I - C)^{-1}V(I - C)^{-1}′,

and this matrix can be decomposed into Ω = Δ + Γ′, where Δ = Γ_0 + Γ, Γ_0 = E[u_0 u_0′] and Γ = Σ_{k=1}^{∞} E[u_0 u_k′]. Let Ω and Δ have elements ω_ij and δ_ij (i,j = 0,s), respectively. Using these matrices it can be shown that the elements that contribute to the bias and nonnormality of the OLS estimates are the quantities θ = ω_{s0}/ω_{ss} and δ_{s0}, which measure the long-run correlation and endogeneity between u_ft and u_st. If these elements are zero then the OLS estimates are asymptotically (mixed) normal, unbiased and efficient. Using the triangular system (13), some tedious calculations show (see Zivot (1995))

θ = (α_s σ_00/(1 - φ) + σ_0s/(1 - φ)) · (α_s² σ_00/(1 - φ) + 2α_s σ_0s/(1 - φ) + σ_ss)^{-1},
δ_{s0} = α_s σ_00 φ[(1 - φ)(1 - φ²)]^{-1} + σ_0s/(1 - φ),

and these quantities are zero if α_s = 0 (spot rates are weakly exogenous for β_s) and σ_0s = 0. In typical exchange rate data, however, σ_0s ≈ 0 and σ_00 is much smaller than σ_ss, which implies that θ ≈ 0 and δ_{s0} ≈ 0, so the OLS bias is expected to be very small.
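The sketch below plugs representative magnitudes into the expressions for θ and δ_{s0} above; the parameter values are illustrative assumptions chosen to match the stylized facts, not estimates.

```python
import numpy as np

alpha_s = -0.8          # error correction coefficient for the spot equation (illustrative)
phi = 0.9               # AR(1) coefficient of the cointegrating residual
sigma_00 = 0.001**2     # var of eta_t (tiny, as in the data)
sigma_0s = 0.0          # cov(eta_t, eps_st), approximately zero in the data
sigma_ss = 0.035**2     # var of eps_st

theta = (alpha_s * sigma_00 / (1 - phi) + sigma_0s / (1 - phi)) / (
    alpha_s**2 * sigma_00 / (1 - phi) + 2 * alpha_s * sigma_0s / (1 - phi) + sigma_ss)
delta_s0 = alpha_s * sigma_00 * phi / ((1 - phi) * (1 - phi**2)) + sigma_0s / (1 - phi)

print(theta, delta_s0)  # both are very close to zero, so the OLS bias in (13a) is negligible
```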

To illustrate the expected magnitude of the OLS bias we conducted a simple Monte Carlo experiment where data were generated by (13) with β_s = 1, α_s = 1, -3, φ = 0.9, σ_ss = (0.035)², σ_00 = (0.001)², and ρ_0s = 0. When α_s = 1 the FRUH is true and when α_s = -3 it is not. In both cases the forward premium is highly autocorrelated. Table 2 gives the results of OLS applied to the levels regression (13a) for samples of size T = 100 and T = 250. In both cases the magnitude of the OLS bias is negligible. The OLS standard errors, however, are biased, which causes the size distortions in the nominal 5% t-tests of the hypothesis that β_s = 1.
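A minimal sketch of a Monte Carlo experiment of this type is given below. It generates data from the triangular system (13) with the stated design and reports the mean OLS estimate of β_s from the levels regression and the rejection rate of the nominal 5% t-test of β_s = 1. It is an illustrative reimplementation, not the code used to produce Table 2.

```python
import numpy as np
import statsmodels.api as sm

def simulate(T, alpha_s, rng, phi=0.9, sd_eta=0.001, sd_eps=0.035, mu_c=0.0):
    """Generate (f_t, s_t) from the triangular system (13) with beta_s = 1 and rho_0s = 0."""
    u_f = np.zeros(T)
    s = np.zeros(T)
    for t in range(1, T):
        u_f[t] = phi * u_f[t - 1] + sd_eta * rng.standard_normal()              # (13c)
        s[t] = s[t - 1] + alpha_s * u_f[t - 1] + sd_eps * rng.standard_normal() # (13b) + (13d)
    f = mu_c + s + u_f                                                           # (13a), beta_s = 1
    return f, s

def monte_carlo(T, alpha_s, nrep=1000, seed=0):
    rng = np.random.default_rng(seed)
    estimates, rejections = [], 0
    for _ in range(nrep):
        f, s = simulate(T, alpha_s, rng)
        fit = sm.OLS(f, sm.add_constant(s)).fit()        # levels regression (13a)
        estimates.append(fit.params[1])
        if abs((fit.params[1] - 1.0) / fit.bse[1]) > 1.96:
            rejections += 1
    return np.mean(estimates), rejections / nrep

print(monte_carlo(T=100, alpha_s=1.0))    # FRUH true (alpha_s = 1)
print(monte_carlo(T=100, alpha_s=-3.0))   # FRUH false (alpha_s = -3)
```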

4.3 Empirical Example

Table 3 presents estimation results for the VAR model (7) and Table 4 gives the results for the triangular model (13) imposing β_s = 1 for the pound, yen and Canadian dollar monthly exchange rate series. The VAR(1) model was selected for all currencies by likelihood ratio tests for lag lengths and standard model selection criteria. For all currencies, f_t and s_t behave very similarly: the estimated intercepts and error variances are nearly identical and the estimated correlation between ε_ft and ε_st, ρ_fs, is 0.99. The estimated coefficients from the triangular model essentially mimic the corresponding coefficients from the VAR(1). Table 5 gives the results of Johansen's likelihood ratio test for the number of cointegrating vectors for the three exchange rate series based on the estimation of (7). If the intercepts are restricted to the error correction term then the Johansen rank test finds one cointegrating vector in all cases. If the intercept is unrestricted then the rank test finds that spot and forward rates for the pound and Canadian dollar are I(0). However, for each series, the likelihood ratio statistic does not reject the hypothesis that the intercepts be restricted to the error correction term. Table 5 also reports the results of the Engle-Granger (1987) two-step residual based ADF t-test for no cointegration based on estimating β_s by OLS. For long lag lengths the null of no-cointegration between forward and spot rates is not rejected at the 10% level but for short lag lengths the null is rejected at the 5% level. Table 6 reports estimates of β_s using OLS, Stock and Watson's (1993) dynamic OLS (DOLS) and dynamic GLS (DGLS) lead-lag estimator and Johansen's (1995) reduced rank MLE. The latter three estimators are asymptotically efficient and yield asymptotically valid standard errors. Notice that all of the estimates of β_s are extremely close to 1 and the hypothesis that β_s = 1 cannot be rejected using the asymptotic t-tests based on DOLS/DGLS and MLE. Table 7 shows the MLEs of the parameters of the VECM (8) where the intercepts are restricted to the error correction term. Notice also that the estimates of α_f and α_s are both significantly negative and of about the same magnitude, indicating that the error correction term, which is essentially the forward premium since β̂_s ≈ 1, is very highly autocorrelated. Further, since the estimates of α_f and α_s are significantly different from zero, neither spot nor forward rates appear to be weakly exogenous with respect to the cointegrating parameters.

The above empirical results are based on a two-step procedure of first testing for cointegration between f_t and s_t and then testing if the cointegrating vector is (1,-1). Alternatively, one can use a one-step procedure to test the null of no-cointegration against the joint hypothesis of cointegration with a prespecified cointegrating vector. The advantage of using tests that impose a prespecified cointegrating vector is that if the cointegrating vector is true then the test can have substantially higher power than tests that implicitly estimate the cointegrating vector. The most commonly used one-step procedure to test the joint hypothesis of cointegration between f_t and s_t with the prespecified cointegrating vector (1,-1) is to run either a unit root test (e.g. ADF t-test) or a stationarity test (e.g. KPSS test) on the forward premium f_t - s_t. Table 5 also reports the ADF unit root tests and KPSS stationarity tests on the forward premia for the three exchange rate series. For short lag lengths the ADF tests indicate that the forward premia are I(0) whereas for long lags the series appear I(1). The KPSS tests give mixed results.
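A sketch of these one-step tests using statsmodels is given below; the forward and spot series are hypothetical placeholders and the two lag lengths simply illustrate the short-lag versus long-lag sensitivity noted above.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

def premium_tests(forward, spot, short_lag=1, long_lag=12):
    fp = forward - spot                               # forward premium f_t - s_t
    out = {}
    for lag in (short_lag, long_lag):
        stat, pval, *_ = adfuller(fp, maxlag=lag, autolag=None, regression="c")
        out[f"ADF(lag={lag})"] = (stat, pval)         # H0: unit root (no cointegration)
    stat, pval, *_ = kpss(fp, regression="c", nlags=long_lag)
    out["KPSS"] = (stat, pval)                        # H0: stationarity
    return out
```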

Use of the ADF test as a test for no-cointegration against the alternative of cointegration with a prespecified cointegrating vector, however, has been criticized by Kremers, Ericsson and Dolado (1992) and Zivot (1998) as having low power since the ADF test places unrealistic parametric restrictions on the short-run dynamics of the data. They show that a single equation conditional ECM test, based on models similar to (10a) and (10b), can have substantially higher power than the ADF test. A limitation of the conditional ECM test, however, is that it assumes weak exogeneity of one variable with respect to the cointegration parameters. In the context of testing the stationarity of the forward premium it is hard to argue a priori that either forward rates or spot rates are weakly exogenous, and the empirical results of Table 5 indicate they are not, so it appears that the conditional ECM test is not appropriate. Horvath and Watson (1995), however, develop a multivariate procedure to test for cointegration with a prespecified cointegrating vector within a VECM that does not require any exogeneity assumptions. The Horvath and Watson test statistic in the present context is simply the Wald statistic for testing the joint hypothesis α_f = α_s = 0 where the parameters are estimated by OLS from the VECM (11). Under the null of no-cointegration the Wald test has a nonstandard asymptotic distribution and Horvath and Watson (1995) supply the appropriate critical values. They show that their test can have considerably higher power than Johansen's rank test, which is based on implicitly estimating the cointegrating vector. Additionally, they show that their test has good power even if the cointegrating vector is moderately misspecified. The estimates of the VECMs imposing the cointegrating vector (1,-1) are presented in Table 8 and the Horvath-Watson Wald statistics are reported in Table 5. Using the Horvath-Watson Wald test, the null of no-cointegration is rejected at the 5% level in favor of the alternative of cointegration with cointegrating vector (1,-1) for all three exchange rates.
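The following sketch computes the Horvath-Watson statistic in this setting: OLS on the two equations of the VECM (11), which share the regressors (1, f_{t-1} - s_{t-1}), followed by a Wald statistic for α_f = α_s = 0 built from the cross-equation residual covariance. The function returns only the statistic; critical values are nonstandard and must be taken from Horvath and Watson (1995). The series arguments are hypothetical placeholders.

```python
import numpy as np

def horvath_watson_wald(forward, spot):
    """Wald statistic for alpha_f = alpha_s = 0 in VECM (11) with cointegrating vector (1,-1)."""
    df = np.diff(forward)                 # Delta f_t
    ds = np.diff(spot)                    # Delta s_t
    ec = (forward - spot)[:-1]            # lagged forward premium f_{t-1} - s_{t-1}
    X = np.column_stack([np.ones_like(ec), ec])
    XtX_inv = np.linalg.inv(X.T @ X)

    coefs, resids = [], []
    for y in (df, ds):                    # equation-by-equation OLS (identical regressors)
        b = XtX_inv @ X.T @ y
        coefs.append(b[1])                # error correction coefficient
        resids.append(y - X @ b)
    Sigma = np.cov(np.vstack(resids))     # 2x2 residual covariance across the two equations

    a = np.array(coefs)                   # (alpha_f hat, alpha_s hat)
    V = Sigma * XtX_inv[1, 1]             # covariance of the two EC coefficients (SUR, same X)
    return float(a @ np.linalg.inv(V) @ a)  # compare with Horvath-Watson critical values
```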

Based on the results from Table 8, the FRUH is clearly rejected for all three exchange rate series since the null hypothesis that α_s = 1 can be rejected at any reasonable level of significance using an asymptotic t-test. As discussed in Engel (1996), a rejection of the FRUH is usually interpreted as evidence for the existence of a time varying risk premium or a peso problem. Using (11b), it is easy to see that the risk premium implied by the VECM (11) is rp_t^re = (1 - α_s)(f_t - s_t) and so the forward premium is perfectly correlated with the risk premium.

5. Models of Cointegration between s_{t+1} and f_t implied by cointegration between f_t and s_t

As discussed earlier, cointegration between f_t and s_t with cointegrating vector (1,-1) implies cointegration between s_{t+1} and f_t with cointegrating vector (1,-1). However, the implied VECM and triangular representations for s_{t+1} and f_t based on the simple models of cointegration between f_t and s_t presented in the previous section are somewhat nonstandard. To see this, consider first the derivation of the VECM for Δf_t and Δs_{t+1}. By adding and subtracting α_s f_{t-1} from the right hand side of (11b), led one period, and adding and subtracting α_f s_t from the right hand side of (11a), we get the VECM for (Δf_t, Δs_{t+1})′

Δf_t = μ_f - α_f(s_t - f_{t-1}) + α_fΔs_t + ε_ft,    (14a)
Δs_{t+1} = μ_s - α_s(s_t - f_{t-1}) + α_sΔf_t + ε_{s,t+1}.    (14b)

Notice that in (14) the error correction term is now the lagged forecast error, s_t - f_{t-1}, and not the lagged forward premium, and that the error terms are separated by one time period and are thus contemporaneously uncorrelated. The errors, however, are not independent due to the correlation between ε_ft and ε_st. Consequently, the representation in (14) is not a VECM that can be derived from a finite order VAR model for (s_{t+1}, f_t)′. In addition, since the error correction term enters both equations, neither the forward rate nor the future spot rate is weakly exogenous for the cointegration parameters.

Although Δs_t is on the right hand side of (14a) and Δf_t is on the right hand side of (14b), these models should not be interpreted as conditional models since they are derived by simple algebraic manipulation of (11). In particular, since the time index is shifted by one period between the two equations it is more appropriate to interpret Δf_t in (14b) as a predetermined variable. Estimation of (14b) by OLS will yield consistent but not necessarily efficient estimates of α_s. However, estimation of (14a) by OLS is problematic since both s_t - f_{t-1} and Δs_t are correlated with ε_ft.

Next consider the derivation of the triangular representation. Using (8b) and f_t - s_t - μ_c = u_ft, the triangular model for s_{t+1} and f_t becomes

s_{t+1} = μ_c + f_t + v_{s,t+1},    (15a)
f_t = f_{t-1} + v_ft,    (15b)

where v_{t+1} = (v_{s,t+1}, v_ft)′ is a stationary error vector.

Now suppose that s_{t+1} and f_t are cointegrated and (15a) is the correct representation given that β_f = 1. Then the OLS estimate of β_f from the levels regression (3) will be consistent but asymptotically biased and inefficient due to dynamic behavior and feedback between the elements of v_{t+1} = (v_{s,t+1}, v_ft)′. Since the VECM (14) cannot be derived from a finite order cointegrated VAR model for (s_{t+1}, f_t)′, testing for cointegration and estimating the cointegrating vector using standard VAR techniques is problematic. In particular, since s_t - f_{t-1} is correlated with ε_ft, testing and estimation methods based on naive VECMs for Δs_{t+1} and Δf_t, like the Horvath-Watson Wald test and the Johansen MLE, are likely to suffer from biases. The dynamic OLS/GLS estimator of Stock and Watson (1993), however, should work well since it is designed to pick up feedback effects through the inclusion of leads and lags of Δf_t in the levels regression (3). The results of Table 10 show that this is indeed the case for the pound, yen and Canadian dollar. For all series, the OLS estimates of β_f are downward biased but the Stock-Watson DOLS and DGLS estimates are nearly one.
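A minimal sketch of the Stock-Watson dynamic OLS estimator for the levels regression (3) is given below: regress s_{t+1} on a constant, f_t, and p leads and lags of Δf_t, and read off the coefficient on f_t. The choice of p is left to the user and the standard errors would additionally require a serial-correlation (HAC) correction, which the sketch omits; the input series are placeholders.

```python
import numpy as np
import statsmodels.api as sm

def dols_beta(forward, spot, p=2):
    """DOLS estimate of beta_f in s_{t+1} = mu + beta_f f_t + leads/lags of Delta f_t."""
    T = len(spot)
    df = np.diff(forward)                      # df[k] = f_{k+1} - f_k
    rows, ys = [], []
    for t in range(p + 1, T - p - 1):          # calendar t with a full lead/lag window
        leads_lags = df[t - p - 1: t + p]      # Delta f_{t-p}, ..., Delta f_{t+p}
        rows.append(np.concatenate(([forward[t]], leads_lags)))
        ys.append(spot[t + 1])                 # dependent variable s_{t+1}
    X = sm.add_constant(np.array(rows))
    fit = sm.OLS(np.array(ys), X).fit()
    return fit.params[1]                       # coefficient on f_t
```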

Somewhat surprisingly, the triangular representation (15) shows that the OLS estimate of β_f will be biased even if α_s = 1 (the FRUH is true). This result is due to the fact that the long-run covariance matrix of v_{t+1} is not diagonal. To illustrate, let the FRUH be true and suppose that α_f = 0 so that the forward premium is not autocorrelated (this assumption greatly simplifies the calculations but does not qualitatively affect the end result). Then the triangular representation (15) simplifies to

s_{t+1} = f_t + ε_{s,t+1},
f_t = f_{t-1} + ε_ft,

and v_{t+1} = e_{t+1} = (ε_{s,t+1}, ε_ft)′. Then by straightforward calculations the long-run covariance matrix of e_{t+1} and its components are

Ω = [[σ_ss, σ_sf], [σ_sf, σ_ff]],   Γ_0 = [[σ_ss, 0], [0, σ_ff]],   Γ = [[0, σ_sf], [0, 0]],   Δ = Γ_0 + Γ = [[σ_ss, σ_sf], [0, σ_ff]],

so that θ = σ_sf/σ_ff and δ_fs = 0. Further, since σ_ss ≈ σ_ff and ρ_sf ≈ 1, it follows that θ ≈ 1 and so OLS on the levels regression (3) will suffer from bias even if the FRUH is true. To illustrate the magnitude of the bias, Table 2(c) reports OLS estimates of the levels regression (3) when data are generated from (12). The OLS estimate of β_f is biased downward, to a similar degree observed in empirical results, and the finite sample distribution is heavily left-skewed. The t-statistic for testing β_f = 1 is centered around -1.5 and a nominal 5% test rejects the null that β_f = 1 about 30% of the time when the null is true. Table 2 also reports results for the Stock-Watson DOLS estimator. In all cases, the Stock-Watson estimator is essentially equal to the true value of unity and the t-statistic for testing β_f = 1 is roughly symmetric and centered around zero. However, there is moderate size distortion in the nominal 5% t-tests of β_f = 1 for T = 100 but the distortion dissipates as T increases.

6. A Reinterpretation of Some Recent Results Regarding the FRUH.

The results of the previous sections can be used to reinterpret the results of Norrbin and Reffett (1996), hereafter NR, and Naka and Whitney (1995), hereafter NW, who use particular cointegrated models for (s_{t+1}, f_t)′ and find support for the FRUH.

6.1 Norrbin and Reffett's model

NR based their analysis on the following VECM for (s_{t+1}, f_t)′


Δs_{t+1} = μ_s + δ_s(s_t - β_f f_{t-1}) + η_{s,t+1},    (16a)
Δf_t = μ_f + δ_f(s_t - β_f f_{t-1}) + η_ft,    (16b)

which is based on a cointegrated VAR(1) model for (s_{t+1}, f_t)′. NR are primarily interested in directly testing the LRFRUC, i.e., that (s_{t+1}, f_t) are cointegrated with cointegrating vector (1, -1), and not the FRUH. Their approach is to impose β_f = 1, estimate (16) by OLS and test the significance of the error correction coefficients δ_s and δ_f. Table 11 presents the estimation results for (16) applied to our data. They find that estimates of δ_s are not statistically different from zero, estimates of δ_f are not statistically different from 1, the R²s from (16a) and (16b) are close to zero and one, respectively, and the error term from (16b) is highly serially correlated. Our results are very similar. From these results they conclude that s_{t+1} and f_t are cointegrated with cointegrating vector (1,-1) (since δ_f ≠ 0) and that spot rates are weakly exogenous for the cointegrating parameters (since δ_s = 0). Based on their finding that spot rates are weakly exogenous, they argue that tests of the LRFRUC constructed from an error correction equation for Δs_{t+1} are bound to lead one to mistakenly reject the LRFRUC and, therefore, reject the FRUH.

Using the cointegrated model for s_{t+1} and f_t implied by the cointegrated model for s_t and f_t presented in section 5, we can give alternative interpretations of NR's results. Most importantly, our results show that NR's claim that spot rates are weakly exogenous is inconsistent with the FRUH. To see how NR arrived at their results, observe that (16) is a restricted version of (14) since Δf_t is omitted from (16a) and Δs_t is omitted from (16b). Now, NR's finding that estimates of δ_s are close to zero can be explained by omitted variables bias. For example, if (14) is the true model, with μ_f = μ_s = 0, then straightforward calculations based on the stylized facts of the data show

plim T^{-1} Σ_{t=1}^{T} (s_t - f_{t-1})Δf_t ≈ σ²,    plim T^{-1} Σ_{t=1}^{T} (s_t - f_{t-1})² ≈ σ²,

and so plim δ̂_s ≈ 0. Also, as mentioned in the last section, estimation of (16b) by OLS is problematic due to the correlation between δ_f(s_t - β_f f_{t-1}) and η_ft. Furthermore, the finding that δ̂_f = 1 with β_f = 1 in (16b) implies that Δf_t = s_t - f_{t-1} + η̂_ft or, equivalently, that f_t = s_t + η̂_ft. This, combined with the result that R² = 1 and η̂_ft is highly autocorrelated, simply shows that the forward premium is highly autocorrelated and does not provide evidence one way or another about the FRUH. Finally, consider NR's Table 2 which gives the results for the estimation of the error correction model

Δs_{t+1} = μ + α(s_t - f_{t-1}) + δΔf_t + πΔf_{t-1} + γΔs_t + u_{t+1}    (17)

which mimics the ECM estimated by Hakkio and Rush (1989). NR claim that this regression is misspecified since it mistakenly assumes that forward rates are weakly exogenous (presumably due to the presence of Δf_t). However, (17) is in the form of (14b), which is not a conditional model and does not make any assumptions about the weak exogeneity of forward rates, so NR's claim is not true.

6.2 Naka and Whitney's model

NW are interested in testing the LRFRUC and the FRUH simultaneously using the following cointegrated triangular representation for (s_{t+1}, f_t)′

s_{t+1} = μ + β_f f_t + v_{s,t+1},    (18a)
Δf_t = v_ft,    (18b)

where

v_{s,t+1} = ρv_st + w_{s,t+1},    (18c)
v_ft = w_ft,    (18d)

and w_{s,t+1}, w_ft are i.i.d. error terms. Notice that (18a) allows for serial correlation in the "levels regression" but the restriction that f_t is strictly exogenous is imposed in (18b). The VECM derived from (18) is

Δs_{t+1} = (1 - ρ)μ - (1 - ρ)(s_t - β_f f_{t-1}) + β_fΔf_t + w_{s,t+1},    (19a)
Δf_t = w_ft.    (19b)

In (19a) the speed of adjustment coefficient is directly related to the correlation in the forecast error, and the long-run impact of forward rates on future spot rates (the coefficient on f_t) is restricted to be equal to the short-run effect (the coefficient on Δf_t). In (19), the FRUH imposes the restrictions μ = 0, ρ = 0 and β_f = 1. NW estimate (19a) by nonlinear least squares (NLS) and report estimates of μ and ρ close to zero and estimates of β_f close to one. Table 12 replicates NW's analysis using our data and we find very similar results. Based on these results, NW cannot reject the FRUH.
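One way to estimate (19a) by NLS is sketched below using scipy; the parameter vector is (μ, ρ, β_f) and the FRUH corresponds to (0, 0, 1). This is a generic implementation under the stated model, not necessarily the procedure NW used, and the input series are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def nls_19a(forward, spot):
    """NLS for (19a): Delta s_{t+1} = (1-rho)mu - (1-rho)(s_t - beta_f f_{t-1}) + beta_f Delta f_t + w."""
    ds_lead = spot[2:] - spot[1:-1]          # Delta s_{t+1}
    s_t = spot[1:-1]                         # s_t
    f_lag = forward[:-2]                     # f_{t-1}
    df_t = forward[1:-1] - forward[:-2]      # Delta f_t

    def resid(theta):
        mu, rho, beta_f = theta
        fitted = (1 - rho) * mu - (1 - rho) * (s_t - beta_f * f_lag) + beta_f * df_t
        return ds_lead - fitted

    res = least_squares(resid, x0=np.array([0.0, 0.0, 1.0]))
    return res.x                             # (mu, rho, beta_f); FRUH: mu = 0, rho = 0, beta_f = 1
```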

The triangular representation (18) used by NW is very similar to the triangular model (15) but with some important differences. In particular, since f_t is assumed to be strictly exogenous and w_{s,t+1} and w_ft are assumed to be independent, NW's model does not allow for feedback between Δs_t and Δf_t or for contemporaneous correlation between v_{s,t+1} and v_ft, so that straightforward estimation of (18a) can be made. These assumptions, however, place unrealistic restrictions on the dynamics of spot and forward rates. For example, suppose (18) is the correct model and that the FRUH is true, so μ = 0, β_f = 1 and ρ = 0. Then the derived VECM for (f_t, s_t)′ is

Δf_t = w_ft,
Δs_t = (f_{t-1} - s_{t-1}) + w_st,

where w_ft and w_st are independent. This model implies that f_{t-1} - s_{t-1} = w_{f,t-1} - w_{s,t-1}, which is a white noise process, and that Δf_t and Δs_t are uncorrelated. Clearly these results are at odds with the observation that the forward premium is highly autocorrelated and that Δf_t and Δs_t are highly contemporaneously correlated. In addition, NW fail to recognize that, under the assumptions that f_t is strictly exogenous and w_st and w_ft are independent, the results of Phillips and Loretan (1991) and Phillips (1991) show that OLS on (18a) yields asymptotically efficient estimates of β_f, and so there is no efficiency gain in estimating the nonlinear error correction model (19a). Indeed, the results of Tables 10 and 12 show that the OLS and NLS estimates of β_f are almost identical. Finally, since NW's estimates of β_f are essentially unity, their tests for the significance of ρ in (19a) are roughly equivalent to tests for serial correlation in the forecast error s_{t+1} - f_t. Given the remarks in section 5, we know that this test of the FRUH is bound to have low power. In sum, by starting with a simple cointegrated model for s_{t+1} and f_t, NW fail to capture some important dynamics between s_t and f_t that provide information about the validity of the FRUH.

7. Conclusion

In this paper we illustrate some potential pitfalls in modeling the cointegrated behavior of spot and forward exchange rates and we are able to give explanations for some puzzling results that commonly occur in exchange rate regressions used to test the FRUH. We find that a simple first order VECM for s_t and f_t captures the important stylized facts of typical monthly exchange rate data and serves as a natural statistical model for explaining exchange rate behavior. We show that the cointegrated model for s_{t+1} and f_t derived from the VECM for s_t and f_t is not a simple finite order VECM and that estimating a first order VECM for s_{t+1} and f_t can lead to mistaken inferences concerning the exogeneity of spot rates and the unbiasedness of forward rates.

遥感卫星影像镶嵌的基本原则

北京揽宇方圆信息技术有限公司 遥感卫星影像镶嵌的基本原则 遥感卫星影像镶嵌是指对一幅或若干幅图像通过几何镶嵌、色调调整、去重叠等处理,镶嵌到一幅大的背景图像中的影像处理方法。 基本原则 镶嵌时应对多景影像数据的重叠带进行严格配准,镶嵌误差不低于配准误差,镶嵌区应保证有10-15个像素的重叠带。影像镶嵌时除了要满足在镶嵌线上相邻影像几何特征一致性,还要求相邻影像的色调保持一致。镶嵌影像应保证色调均匀、反差适中,如果两幅或多幅相邻影像时相不同使得影像光谱特征反差较大时,应在保证影像上地物不失真的前提下进行匀色,尽量保证镶嵌区域相关影像色彩过渡自然平滑。 1、原则上,镶嵌只针对采样间隔相同影像。需在相邻数据重叠区域进行如下处理:首先,在相邻数据重叠区勾绘镶嵌线,镶嵌线勾绘尽量靠近采样间隔较小影像的外边缘,以保证其数据使用率最大化。然后对镶嵌线两侧影像进行裁切,裁掉重叠区域影像,为避免因坐标系转换导致接边处出现漏缝,对于采样间隔小的影像严格沿镶嵌线裁切,采样间隔大的影像应适当外扩一定范围,原则上不超过10个像素进行裁切。 2、镶嵌前进行重叠检查。景与景间重叠限差应符合要求。重叠误差超限时应立即查明原因,并进行必要的返工,使其符合规定的接边要求。采用

“拉窗帘”方式目视检查相邻影像间重叠区域的精度,若同名地物出现“抖动”或“错位”现象,则量测该处同名点误差,两者接边精度不超过1个像素。 3、镶嵌时应尽可能保留分辨率高、时相新、云雾量少、质量好的影像。 4、选取镶嵌线对DOM进行镶嵌,镶嵌处无地物错位、模糊、重影和晕边现象。 5、时相相同或相近的镶嵌影像纹理、色彩自然过渡;时相差距较大、地物特征差异明显的镶嵌影像,允许存在光谱差异,但同一地块内光谱特征尽量一致。 重叠精度检查 叠加相邻纠正单元,采用“拉窗帘”方式逐屏幕目视检查相邻纠正单元间重叠区域的精度,若同名地物出现“抖动”或“错位”现象,则量测该处同名点误差,两者相对精度应满足下表要求。 相邻影像采样间隔≤1米时,其相对误差限差满足表中规定。 相对误差限差表 地形类别 平地、丘陵(采样间 隔) 山地、高山地(采样间 隔) 相对误 差 2.0倍8.0倍 基础底图采样间隔>1米时,其相对误差限差满足表中规定。 相对误差限差表 地形类别 平地、丘陵(采 样间隔) 山地、高山地(采 样间隔) 相对误差 2.0倍 4.0倍 注:相对误差因侧视角超限、基础底图和高程数据等控制资料精度不足引起,且无法改正的特殊地区除外,但该区域周边不超限。 镶嵌步骤 1、镶嵌线选取

creo2.0_MXXX破解的破解文件及详细安装说明

1. 制作许可证文件:解压下载回来的破解文件,在目录下找到ptc_licfile.dat 文件,复制到D:\Program Files\PTC(没有的话创建,创建后不能删除),然后右键,选择打开方式为用记事本打开,打开后在“编辑”菜单里点“替换”,如下图所示,查找内容00-00-00-00-00-00,替换为你的主机MAC地址(不知道如何查询MAC地址的百度,笔记本有好几个MAC,慢慢研究),输入完成后点全部替换然后保存ptc_licfile.dat文件; 2. 运行setup.exe 开始安装,界面如下图所示,选择“安装新软件”-“下一步”;

3. 选中“我接受”,接受许可协议,然后点“下一步”; 4. 在许可证汇总中输入你的许可证的位置D:\Program Files\PTC\ptc_licfile.dat, 软件会自动检测许可证是否可以,提示“可用的”,单击“下一步”,继续安装;

5. 选择软件的安装位置和需要安装的应用程序,如果需要详细配置应用程序的功能,单击“自定义”,选择需要的应用程序功能,选择完成后,单击“安装”,开始安装程序;

6. 程序安装中 7. 软件安装完成,单击“完成”,退出安装; 8. 破解Creo Parametric:

1.双击运行:辅助论坛Creo破解补丁.exe 2.指定目录如:PTC\Creo 1.0(2.0)\Common Files\Mxxx,然后破解 3.完成后会出现如下提示,点击[OK],破解成功 26.破解Distributed Services: 1. 在右侧指定..\Creo 1.0( 2.0)\Creo Distributed Services Manager 后再次点击【look For】

常见国产卫星遥感影像数据的简介

北京揽宇方圆信息技术有限公司 常见国产卫星遥感影像数据的简介 本文介绍了常见国产卫星数据的简介、数据时间、传感器类型、分辨率等情况。 中国资源卫星应用中心产品级别说明 ◆1A级和1C级产品均为相对辐射校正产品,只是不同卫星选用的生产参数不同。 ◆2级,2A级和2C级产品均为系统几何校正产品,只是不同卫星选用的生产参数不同。 其中: ■GF-1卫星和ZY3卫星归档产品为1A级,ZY1-02C卫星数据归档产品级别为1C级,其他卫星归档级别为2级! ◆归档产品是指:该类产品已经存在于系统中,仅需要从存储系统中迁移出来.即可供用户下载的数据。 ◆生产产品是指:该类产品不是已经存在的产品,需要对原始数据产品进行生产,然后再提供给用户下载的数据。

■当用户需要的产品级别是上述归档的级别,直接选择相应的产品级别,然后查询即可! ■当用户需要的产品级别不是上述归档的级别,就需要进行生产.本系统提供GF-1卫星和ZY3卫星2A级的生产产品,ZY1-02C卫星2C级的生产产品,在选择需要的级别查询后,无论有没有数据,在查询结果页上方有一个“查询0级景”按钮,点击此按钮后,进行数据查询,如果有数据,选择需要的产品直接订购,即可选择需要的产品级别。 国产卫星 一、GF-3(高分3号) 1.简介 2016年8月10日6时55分,高分三号卫星在太原卫星发射中心用长征四号丙运载火箭成功发射升空。 高分三号卫星是中国高分专项工程的一颗遥感卫星,为1米分辨率雷达遥感卫星,也是中国首颗分辨率达到1米的C频段多极化合成孔径雷达(SAR)成像卫星,由中国航天科技集团公司研制。 2.数据时间 2016年8月10日-现在 3.传感器 SAR:1米 二、ZY3-02(资源三号02星) 1.简介 资源三号02星(ZY3-02)于2016年5月30日11时17分,在我国在太原卫星发射中心用长征四号乙运载火箭成功将资源三号02星发射升空。这将是我国首次实现自主民用立体测绘双星组网运行,形成业务观测星座,

G 试验和 GM 试验检测原理和临床应用

G 试验和 GM 试验检测原理和临床应用 首都医科大学附属北京安贞医院左大鹏 G 试验和 GM 试验的检测原理和临床应用。目前我们国际合国内都对侵袭性真菌病的诊断制订了标准,其中有权威的是欧洲癌症研究和治疗组织暨侵袭性真菌病感染协作组,我们简称叫欧洲标准所制订的一个标准。这个标准它对侵袭性真菌病它分了三个不同的诊断层次。一个叫确诊,一个叫临床诊断,一个叫疑诊。 我们国家的很多的专业委员会也都根据欧洲标准就结合我们国家和专业的特点制订了我们国家各个专业的诊断标准。比如说中国侵袭性真菌感染工作组在 2010 年第 3 次修订的血液病和恶性肿瘤患者侵袭性真菌感染的诊断,诊断标准和治疗原则,这第 3 次修订,第一次是 2006 ,第二次 2008 , 2010 年是第 3 次修订了。这应该是血液里的肿瘤病人所用的一个标准。中华医学会器官移植学分会制订的实体器官移植患者侵袭性真菌感染的诊断和治疗指南。这可能是实体器官移植的这个方面的诊断标准。中华医学会儿科学会,中华医学会呼吸病学会,中华医学会急诊协会,中华妇产科学会也都根据自己的专业制订了的相应的侵袭性真菌感染的诊断和治疗指南。这作为我们从事这些方面的医务工作者所遵循的一个标准。 那我们看不管是国内和国际现在都采用这样一种方式。首先要看这个病人有没有宿主因素,它是不是容易发生侵袭性真菌,真菌感染的高危人群,你要是这样的,你要一个健康人这就不具备了,起码有一些原因它容易得侵袭性真菌感染要具备这样一个高危因素。第二它有临床表现,当然包括临床症状和影像学,比如说胸片, CT ,颅片等等。还有有微声学的检查,还有一个病理学检查,如果你只有宿主因素有临床表现我们叫做拟诊,不叫疑诊,疑似诊断。如果说你在这两个基础上又多了一条有微生物学证据,或者是培养的,或者是非培养的技术凡是证明他有真菌性感染的这种可能性就要临床诊断,临床诊断。如果再加上从肺的组织,从脏器的组织取出病理来确定,那叫确诊。所以侵袭性真菌感染根据宿主因素临床表现微生物学真菌,组织病理的结果,可以把它分成为拟诊,临床诊断跟确诊三个层次。确诊了当然就已经是确定了,治疗起来更有针对性。临床诊断基本是确定了,大方向也没有问题。拟诊是不能,不叫,就是说我们只能是一种预防性的治疗,或者是一种抢先的有干预的治疗,还不能说它一定有真菌感染。比如说我们高度怀疑,高度怀疑。 不管我们前面讲了,就是你要做到临床诊断你就必须有微生物学的真菌。在欧洲标准和我们国家各个专业把脑脊液能够找到隐球菌抗原这个阳性结果作为真菌性脑膜炎或者叫播散性真菌隐球菌病的一个确诊的依据。这时候确诊就不需要组织学了,它只要脑脊液里面找到隐球菌的抗原就确诊。血清的 1 、 3 β D 葡聚糖的检测我们称为 G 试验和这个曲霉菌的半乳甘露聚糖的 GM 试验,那就这个 G 试验和 GM 试验作为临床诊断侵袭性真菌病的依据。当然它不是确诊,它是临床诊断。 我们来看这是中华内科杂志 2007 年中国,中华这个重症协会所制定的关于重症患者侵袭性真菌感染的诊断语言,它在这个微生物学检查里面它要求所有的标本应该是新鲜的,

cadence软件安装步骤说明

Cadence软件安装破解步骤 文档目录 1、安装准备工作 (2) 2、软件安装 (2) 3、软件破解 (4) 4、关于license (4) 5、环境配置 (6) 6、环境配置示例 (7)

Cadence公司软件安装步骤大同小异,这里就归类到一起,安装其所有软件均适用。 1、安装准备工作: 图形安装工具:iscape.04.11-p004 所要安装的软件包:如IC615等(几乎所有cadence软件的图形安装步骤都一样)。 破解文件:破解文件包括两个文件,以为patch文件,以为pfk 文件。 License:Cadence的license比较好找,也好制作。网上很多license,也可以自己制作。 2、软件安装: 1)、进入iscape.04.11-p004/bin/,运行iscape.sh进入软件安装图形界面,如下图所示。 说明:在选择软件安装路径是须注意,如果解压后有多个CDROM

文件夹,在该处选择到CDROM1下即可,其他CDROM包会自动加载。 2)、继续到以下界面,选中所要安装的软件,然后继续下一步: 3)、点击下一步到一下安装界面,进行配置。

点击“Start”开始安装。 4)、安装到一定完成后会弹出一些关于软件的配置,如OA库位置的设置等,若没有特殊要求更改的可一直回车。配置完成后可关闭图形安装窗口。 3、软件破解: 将破解文件复制到软件的安装目录下,运行patch文件跑完即可。但是需要注意的是32bit/64bit的软件破解文件有可能不是同一个patch文件,出现破解不完全。若是这样,会出现只能运行32bit或者64bit的软件,运行另一版本会提示license的错误。在找patch文件的时候需注意patch所适用的软件及版本。 4、关于License: 在网上能找到很多license可用,特别是eetop。也可以根据自己

抗凝血酶Ⅲ测定试剂盒(发色底物法)产品技术要求meichuang

抗凝血酶Ⅲ测定试剂盒(发色底物法) 适用范围:本产品用于体外定量测定人血浆中抗凝血酶III的活性。 1.1产品型号:产品为冻干型和液体型,试剂规格如下: 2.性能指标 2.1外观 .产品外包装应完整,无破损,标识、标签清晰; .冻干型:凝血酶试剂、发色底物为白色冻干品,复溶后为清晰无色液体,缓冲液为无色透明液体。 .液体型:凝血酶试剂、发色底物为无色液体,缓冲液为无色透明液体。 2.2装量(液体) 液体试剂的装量应不低于产品标示量。 2.3残留水分(冻干型) 凝血酶试剂、发色底物的含水量应≤3%。 2.4准确性

用试剂盒测试定值血浆,测量结果与定值血浆标示值相对偏差应≤±10%。 2.5 重复性 用正常值血浆重复测定的结果变异系数(CV%)均应≤6%。 用异常值血浆重复测定的结果变异系数(CV%)均应≤8%。 2.6批间差 用3个不同批号的试剂测试正常值血浆,所得结果的批间差应≤10%。 2.7瓶间差(冻干型) 用正常值血浆测试的瓶间变异系数(CV)应≤8%。 2.8 线性 线性范围为30%~150%,相关系数r≥0.98 2.9稳定性 a)冻干型制剂复溶稳定性:复溶后样品在2-8℃条件下,保存24小时,取该样品检测外观、准确性、重复性应符合2.1、2.4、2.5的要求。 b)液体制剂效期稳定性:在2-8℃条件有效期为12个月。取到效期后的样品检测外观、准确性、重复性,线性应符合2.1、2.4、2.5、2.8的要求。 c)冻干型制剂效期稳定性:在2-8℃条件有效期为12个月。取到效期后的样品检测外观、残留水分、准确性、重复性,瓶间差、线性应符合2.1、2.3、2.4、2.5、2.7、2.8的要求。

SPOT卫星遥感影像数据基本参数

SPOT5遥感卫星基本参数 北京揽宇方圆信息技术有限公司 前言: 遥感传感器是获取遥感数据的关键设备,由于设计和获取数据的特点不同,传感器的种类也就繁多,就其基本结构原理来看,目前遥感中使用的传感器大体上可分为如下一些类型:(1)摄影类型的传感器; (2)扫描成像类型的传感器; (3)雷达成像类型的传感器; (4)非图像类型的传感器。 无论哪种类型遥感传感器,它们都由如下图所示的基本部分组成: 1、收集器:收集地物辐射来的能量。具体的元件如透镜组、反射镜组、天线等。 2、探测器:将收集的辐射能转变成化学能或电能。具体的无器件如感光胶片、光电管、光敏和热敏探测元件、共振腔谐振器等。 3、处理器:对收集的信号进行处理。如显影、定影、信号放大、变换、校正和编码等。具体的处理器类型有摄影处理装置和电子处理装置。 4、输出器:输出获取的数据。输出器类型有扫描晒像仪、阴极射线管、电视显像管、磁带记录仪、XY彩色喷笔记录仪等等。 虽然不同卫星的基本组成部分是相同的,但是由于,各个组成部分的具体构造的精细度又是不同的,的,所以不同的卫星具有不同的分辨率。 一、法国SPOT卫星 法国SPOT-4卫星轨道参数: 轨道高度:832公里 轨道倾角:98.721o 轨道周期:101.469分/圈 重复周期:369圈/26天 降交点时间:上午10:30分 扫描带宽度:60 公里 两侧侧视:+/-27o 扫描带宽:950公里 波谱范围: 多光谱XI B1 0.50 – 0.59um 20米分辨率B2 0.61 – 0.68um B3 0.78 – 0.89um SWIR 1.58 – 1.75um

安装bt5到u盘方法与步骤

安装bt5到u盘方法与步骤 先弄个BackTrack的Live版ISO文件,官网上有。我选的是BackTrack5R2KDE64位(文档上介绍的GNOME版) 运行虚拟机,从ISO文件启动,BackTrack就跑起来了。用startx命令切换到图形界面。 安装过程需要从互联网下载安装软件,所以先检查互联网连接,可用nslookup https://www.doczj.com/doc/633332328.html, 如果域名解析成功,互联网连接就没问题了。不行的话用ifconfig检查接口状态,用/etc/init.d/networking stop关闭网络接口,用/etc/init.d/networking start启动网络接口 在U盘上安装先要在系统中找到U盘,即找到它的路径,可以用dmesg|egrep hd.\|sd.命令,一般U盘的路径是/dev/sdb,不过不同环境不一样,例如,接了不止一个U盘的话,就不一定是这个路径了。 找到U盘。用fisk/dev/sdb对它做分区,分区步骤如下 1)建一个主分区(primary),大小500M左右,把它toggle为83,设为active(这个区后面是用作/boot分区的,路经是/dev/sdb1) 2)建一个扩展分区(extend),大小是剩下的空间(就是直接敲回车就行了) 3)建一个逻辑分区(logical),大小跟2)的一样(也是直接敲回车就行了,这个后面是用作/分区,路经是/dev/sdb5) 4)别忘了敲w命令哦,保存分区表 后面的安装需要一些软件和工具,所以要升级一下BackTrack apt-get update apt-get install hashalot 升级成功后,要对U盘上的分区启用加密 cryptsetup-y--cipher aes-xts-plain--key-size512luksFormat/dev/sdb5 这里会要求建立加密口令的

高分辨率遥感卫星介绍

北京揽宇方圆信息技术有限公司 高分辨率遥感卫星有哪些 高分辨率遥感可以以米级甚至亚米级空间分辨率精细观测地球,所获取的高空间分辨率遥感影像可以清楚地表达地物目标的空间结构与表层纹理特征,分辨出地物内部更为精细的组成,地物边缘信息也更加清晰,为有效的地学解译分析提供了条件和基础。随着高分辨率遥感影像资源日益丰富,高分辨率遥感在测绘制图、城市规划、交通、水利、农业、林业、环境资源监测等领域得到了飞速发展。 北京揽宇方圆信息技术有限公司是国内的领先遥感卫星数据机构,而且是整合全球的遥感卫星数据资源,分发不同性能、技术应用上可以互补的多种卫星影像,包括光学、雷达卫星影像、历史遥感影像等各种卫星数据服务,各种专业应用目的的图像处理、解译、顾问服务以及基于卫星影像的各种解决方案等。遥感卫星影像数据贯穿中国1960年至今的所有卫星影像数据,是中国遥感卫星数据资源最多的专业遥感卫星数据服务机构,提供多尺度、多分辨率、全覆盖的遥感卫星影像数据服务,最大限度的保证了遥感影像数据获取的及时性和完整性。 一、卫星类型 (1)光学卫星:worldview1、worldview2、worldview3、worldview4、quickbird、geoeye、ikonos、pleiades、deimos、spot1、kompsat系例、spot2、spot3、spot4、spot5、spot6、spot7、landsat5(tm)、Sentinel-卫星、landsat(etm)、rapideye、alos、kompsat系例卫星、planet卫星、北京二号、高景一号、资源三号、高分一号、高分二号、环境卫星。 (2)雷达卫星:terrasar-x、radarsat-2、alos雷达卫星、高分三号卫星、哨兵卫星 (3)侦查卫星:美国锁眼卫星全系例(1960-1980) 二、卫星分辨率 (1)0.3米:worldview3、worldview4 (2)0.4米:worldview3、worldview2、geoeye、kompsat-3A (3)0.5米:worldview3、worldview2、geoeye、worldview1、pleiades

血、尿常规检测原理

一、血常规 1.网织红细胞仪器测定法原理:目前国内外使用仪器法测定网织红细胞,一般采用流式细胞术。流式细胞术法是将红细胞经特殊荧光染料染色后,使含RNA的网织红细胞被计数,进而得出网织红细胞的百分率和绝对值。 2.血细胞分析仪的原理 (—)细胞计数及体积测定原理 流式细胞术加光学测定原理:利用流式细胞术使单个细胞随着流体动力聚集的鞘流液在通过激光照射的检测区时,使光束发生折射、散射和衍射,散射光由光检测器接收后产生脉冲,脉冲大小与产生的细胞大小成正比,脉冲的数量代表了被照细胞的数量。 (二)白细胞分类的原理 光散射与细胞化学技术联合白细胞分类技术此类仪器联合利用激光散射和过氧化酶染色技术进行血细胞分类计数。嗜酸性粒细胞有很强的过氧化酶活性,依次为中性粒细胞、单核细胞,而淋巴细胞和嗜碱性粒细胞无此酶。如果待测物质中含有过氧化物酶,能催化一种供氢体,通常是苯胺或酚等脱氢,当其脱氢后,供氢体分子结构发生了变化,从而出现了色基显色,即可使4氯仿-苯酚显色并沉淀后定位于酶反应部位。利用酶反应强度不同(阴性、弱阳性、强阳性)和细胞体积不同,激光光束射到细胞上所产生的前向角和散射角不同,以X轴为吸光率(酶反应强度),Y轴为光散射(细胞大小),每个细胞产生两个信号并结合定位在细胞图上。仪器每分钟可测定上千个细胞,经计算机处理得出细胞分类结果。 (三)红细胞测试原理 红细胞(RBC)和血细胞比容(HCT)原理同(一) 血红蛋白测定(Hb)细胞悬液加入溶血剂后,红细胞溶解释放出Hb,并与溶血剂中的某些成分形成Hb衍生物,在540nm波长的下比色,吸光度与Hb含量成正比,可直接反应Hb的浓度。 各项红细胞指数检测原理红细胞平均体积(MCV)、红细胞平均血红蛋白含量(MCH)、平均红细胞血红蛋白的浓度?(MCHC)及红细胞体积分布宽度(RDW)均根据仪器检测的RBC、HCT和Hb的实验数据,经仪器内存电脑换算出来。 (四)血小板分析原理 血小板随着红细胞一起在一个系统内进行检测,根据阈值不同,分别技术血小板与红细胞数。血小板储存于64个通道内,根据所测血小板体积大小自动计算出血小板的平均体积(MPV)。血小板直方图也是反应血小板体积的,横坐标表示体积,范围一般为2-30fl,纵坐标表示不同体积血小板出现的相对频数。但要注意不同的仪器血小板直方图范围存在差异,为了使血小板技术更准确,有些意气专门设置了增加血小板准确性的技术,如鞘流技术浮动界标复合曲线等。 3.全自动五分类连接网织红细胞仪 二、血凝仪原理:凝固法--磁珠法+光学法 免疫法--乳胶颗粒法,免疫浊度分析 发色底物法--比色法 在血栓/止血检验中最常用的凝血酶原时间(PT)、活化部分凝血活酶时间(APTT)、纤维蛋白原(FIB)、凝血酶时间(TT)、内源凝血因子、外源凝血因子、高分子量肝素、低分子量肝素、蛋白C、蛋白S等均可用凝固法测量。所以目前半自动血凝仪基本上都是以凝固法测量为主,而在全自动血凝仪中也一定有凝固法测量。

遥感卫星影像数据采购知识要素

北京揽宇方圆信息技术有限公司 (一)遥感卫星数据类型有哪些? 北京揽宇方圆卫星公司可提供多种遥感数据类型供用户选择,目前来说是国内遥感数据最多的遥感数据中心,分辨率从0.3米到30米的光学卫星影像,还有各种极化方式的雷达卫星影像,高光谱卫星影像,还有解密的1960年至1980年的锁眼卫星影像,根据自己的情况来定,也可以把自己的卫星数据需求告诉我们,给您推荐合适的卫星数据类型。如果您想获取高程信息DEM、DLG等信息,需要购买的就是卫星影像立体像对数据,并不是所有卫星都有立体像对哦。 (二)遥感卫星数据影像有哪些级别? 卫星公司北京揽宇方圆销售的都是1A级别原始卫星影像,光学卫星影像原始数据都是以全色+多光谱捆绑形式提供,卫星影像一般可以经过一定的处理,形成各级别的影像数据,不同的级别可以针对不同的用户需求,在订购时需特别注意。 *名词(全色就是黑白数据,多光谱是指红绿蓝近红外) (三)遥感卫星数据影有没有最小数量起订的说法? 北京揽宇方圆提醒您在购买卫星影像时,都要确认购买面积大小或景数。对于高分辨率影像来说,一般是按面积大小来计算,单位为平方公里。但是往往有个最小购买面积,例如,WorldView影像的存档数据最低起购面积为25平方公里,且需要满足四边形两边相距大于等于5公里;而中低分辨率影像则往往按景数来计算,景是一幅卫星影像的通俗讲法,例如,一景高分一号卫星影像,范围大小为32.5×32.5公里。 (四)遥感卫星存档数据是指什么? 北京揽宇方圆详解遥感卫星存档数据:是指先前卫星已经拍摄过的某区域的影像数据,已存档在数据库中,是现成品。该种影像的购买价格相对较低,订购时间较快。但是订购前需要对既定需求区域做出确认,即确认所需区域是否有卫星影像数据存档、卫星影像存档数据的拍摄时间、拍摄质量(包含了云量、拍摄倾角等因素)等。 (五)遥感卫星编程数据是什么意思? 北京揽宇方圆遥感公司对遥感卫星编程数据的解释是指地面编程控制卫星对需求区域拍摄最新的影像,可以让用户得到需求区域最新的影像。但是编程影像的拍摄周期通常较长,订购初期需要先向卫星运营公司申请拍摄区域的拍摄周期,然后由卫星公司反馈计划拍摄周期。在这个拍摄周期中,并不能够保证拍摄成功,这与所拍摄地的天气情况、拍摄数据的优先级权重以及需求数据范围有关。 (六)遥感卫星影像数据价格如何一般是多少? 目前市面上的商业遥感卫星数量较多,北京揽宇方圆是国内遥感数据资源最多的公司,不同的行业根据自己的遥感项目业务要求,对各卫星影像的分辨率、波段数量、质量以及影像拍摄的时间要求各异,而卫星
