

Helsinki University of Technology, Laboratory of Computational Engineering publications. Report B. ISSN 1457-1404

Bayesian Input Variable Selection Using Posterior Probabilities and Expected Utilities

Research report B31. ISBN 951-22-6229-0.

Aki Vehtari and Jouko Lampinen

Laboratory of Computational Engineering

Helsinki University of Technology

P.O. Box 9203, FIN-02015 HUT, Finland

{Aki.Vehtari,Jouko.Lampinen}@hut.fi

May 28, 2002

Revised December 20, 2002

Abstract

We consider the input variable selection in complex Bayesian hierarchical models. Our goal is to find a model with the smallest number of input variables having statistically or practically at least the same expected utility as the full model with all the available inputs. A good estimate for the expected utility can be computed using cross-validation predictive densities. In the case of input selection and a large number of input combinations, the computation of the cross-validation predictive densities for each model easily becomes computationally prohibitive. We propose to use the posterior probabilities obtained via variable dimension MCMC methods to find out potentially useful input combinations, for which the final model choice and assessment is done using the expected utilities. Variable dimension MCMC methods visit the models according to their posterior probabilities. As models with negligible probability are probably not visited in a finite time, the computational savings can be considerable compared to going through all possible models. If there is a problem of obtaining enough samples in reasonable time to estimate the probabilities of the models well, we propose to use the marginal posterior probabilities of the inputs to estimate their relevance. As illustrative examples we use MLP neural networks and Gaussian processes in one toy problem and in two challenging real world problems. Results show that using posterior probability estimates computed with variable dimension MCMC helps in finding useful models. Furthermore, the benefits of using expected utilities for input variable selection are that it is less sensitive to prior choices and it provides useful model assessment.

Keywords: Bayesian model choice; input variable selection; expected utility; cross-validation; variable dimension Markov chain Monte Carlo; MLP neural networks; Gaussian processes; Automatic Relevance Determination

1 Introduction

In practical problems, it is often possible to measure many variables, but it is not necessarily known which of them are relevant and required to solve the problem. In Bayesian hierarchical models, it is usually feasible to use a large number of potentially relevant input variables by using suitable priors with hyperparameters controlling the effect of the inputs in the model (see, e.g., Lampinen & Vehtari, 2001). Although such models may have good predictive performance, it may be difficult to analyse them, or costly to make measurements or computations. To make the model more explainable (easier to gain scientific insights) or to reduce the measurement cost or the computation time, it may be useful to select a smaller set of input variables. In addition, if the assumptions of the model and prior do not match the properties of the data well, reducing the number of input variables may even improve the performance of the model.

In prediction and decision problems, it is natural to assess the predictive ability of the model by estimating the expected utilities, as the principle of rational decisions is based on maximizing the expected utility (Good, 1952; Bernardo & Smith, 1994) and the maximization of expected likelihood maximizes the information gained (Bernardo, 1979). In the machine learning community the expected utility is sometimes called the generalization error. Following the simplicity postulate (Jeffreys, 1961), it is useful to start from simpler models and then test if a more complex model would give significantly better results. Combining the principle of rational decisions and the simplicity postulate, our goal is to find a model with the smallest number of input variables having statistically or practically at least the same predictive ability as the full model with all the available inputs. An additional advantage of comparing the expected utilities is that it takes into account the knowledge of how the model predictions are going to be used, and further, it may reveal that even the best model selected from some collection of models may be inadequate or not practically better than the previously used models.

Vehtari and Lampinen (2002, 2003) present, with Bayesian justification, how to obtain distributions of expected utility estimates of complex Bayesian hierarchical models using cross-validation predictive densities. The distribution of the expected utility estimate describes the uncertainty in the estimate and can also be used to compare models, for example, by computing the probability of one model having a better expected utility than some other model. In the case of K inputs, there are 2^K input combinations, and computing the expected utilities for each model easily becomes computationally prohibitive. To use the expected utility approach we need a way to find a smaller number of potentially useful input combinations. One approach would be to use global optimization algorithms to search for the input combination maximizing the expected utility (e.g., Draper & Fouskakis, 2000). Potential problems with this approach are that it may be slow, requiring hundreds or thousands of cross-validation evaluations, and that the results depend on the search heuristics.

The current trend in Bayesian model selection (including input variable selection) is to estimate the posterior probabilities of the models using Markov chain Monte Carlo (MCMC) methods and especially variable dimension MCMC methods (Green, 1995; Stephens, 2000). The variable dimension MCMC visits models according to their posterior probabilities, and thus models with negligible probability are probably not visited in a finite time. The posterior probabilities of the models are then estimated based on the number of visits to each model, and the models with the highest posterior probabilities are investigated further.

Marginal likelihoods and posterior probabilities of the input combinations have been used directly for input selection, for example, by Brown, Vannucci, and Fearn (1998), Ntzoufras (1999), Han and Carlin (2000), Sykacek (2000), Kohn, Smith, and Chan (2001), and Chipman, George, and McCulloch (2001). Although this kind of approach has produced good results, it may be sensitive to prior choices, as discussed in section 2.4, and it does not necessarily provide the model with the best expected utility, as demonstrated in section 3. Furthermore, Spiegelhalter (1995) and Bernardo and Smith (1994) argue that when selecting a single model from some family of models instead of integrating over the discrete model choices (e.g., input combinations), it is better to compare the consequences (e.g., utilities) of the models instead of their posterior probabilities.

We propose to use the posterior and marginal posterior probabilities obtained via variable dimension MCMC methods to find potentially useful input combinations, and to do the final model choice and assessment using the expected utilities (with any desired utility) computed by using the cross-validation predictive densities. A similar approach has been used before at least by Vannucci, Brown, and Fearn (2001), and Brown, Vannucci, and Fearn (2002), but they considered only general linear models and squared loss, and they did not estimate the uncertainty in the expected loss estimates.

As illustrative examples, we use MLP networks (MLP) and Gaussian processes (GP) (Neal, 1996, 1999; Lampinen & Vehtari, 2001) in one toy problem and two real world problems: concrete quality estimation and forest scene classification. MLPs and GPs are non-parametric non-linear models, where the interactions between the input variables are handled implicitly in the model and thus there is no need to specify them explicitly. This alleviates the input variable selection, as we only need to select whether an input variable should be included in the model or not, and if it is included, the model automatically handles possible interactions. For MLPs and GPs it is common to use hierarchical prior structures which produce continuous input variable selection, that is, there are hyperparameters which control how strong an effect each input variable may have in the model. Consequently, model averaging (averaging over different input variable combinations) usually does not produce different results than the model with all the input variables (the full model), and thus we may use the full model as the baseline to which we can compare models with fewer input variables.

We have tried to follow the notation of Gelman, Carlin, Stern, and Rubin (1995) and we assume that the reader has basic knowledge of Bayesian methods (see, e.g., the short introduction in Lampinen & Vehtari, 2001) and of Bayesian model assessment and comparison based on expected utilities (Vehtari & Lampinen, 2002, 2003). Knowledge of MCMC, MLP or GP methods is helpful but not necessary.

2 Methods

We first discuss the relationship of the posterior probabilities of models to the expected utilities (section 2.1). Then we discuss variable dimension MCMC methods, which can be used to obtain posterior probability estimates for a large number of models in a time comparable to computing the cross-validation predictive densities for a single model (section 2.2). Finally we discuss prior issues specific to input selection (sections 2.3 and 2.4).

2.1 Expected utilities and posterior probabilities

Given models M_l, l = 1, ..., L, we would like to find the model with the smallest number of input variables having at least the same expected utility as the full model with all the available inputs. One way would be to estimate the expected utility \bar{u}_{M_l} for each model, but this may be computationally prohibitive.

Distributions of expected utility estimates of complex Bayesian hierarchical models can be obtained using cross-validation predictive densities (Vehtari & Lampinen, 2002, 2003). Using the distributions it is easy to compute the probability of one model having a better expected utility than some other model, p(\bar{u}_{M_1 - M_2} > 0). In real world problems it is useful to use application-specific utilities for model assessment. For example, Draper and Fouskakis (2000) discuss an example in which a monetary utility is used for the data collection costs and the accuracy of predicting mortality rate in a health policy problem. However, for model comparison likelihood-based utilities are useful as they measure how well the model estimates the whole predictive distribution. Expected predictive likelihoods are also related to posterior probabilities.
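The pairwise comparison p(\bar{u}_{M_1 - M_2} > 0) can be approximated directly from the per-case cross-validation log predictive densities of the two models. The following sketch is our illustration, not the authors' code: it uses a plain nonparametric bootstrap of the paired differences (Vehtari and Lampinen work with the distribution of the expected utility estimate itself), and the two arrays of log densities are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-case CV log predictive densities for two models
# (in practice these come from Equation 3, one value per left-out case).
lpd_m1 = rng.normal(-1.00, 0.5, size=200)
lpd_m2 = rng.normal(-1.05, 0.5, size=200)

diff = lpd_m1 - lpd_m2               # paired differences, one per case
n = diff.size

# Bootstrap the mean difference to approximate the distribution of the
# expected utility difference estimate.
boot = np.array([diff[rng.integers(0, n, n)].mean() for _ in range(4000)])

print("estimated expected utility difference:", diff.mean())
print("p(u_M1 - u_M2 > 0) approx:", (boot > 0).mean())
```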

The posterior predictive distribution of a not yet observed y^{(n+1)} given the training data D = {y^{(i)}; i = 1, 2, ..., n}, where the y^{(i)} are i.i.d., is obtained by integrating the predictions of the model with respect to the posterior distribution of the model

p(y^{(n+1)} \mid D, M_l, I) = \int p(y^{(n+1)} \mid \theta, D, M_l, I) \, p(\theta \mid D, M_l, I) \, d\theta, \quad (1)

where θ denotes all the model parameters and hyperparameters and I denotes the assumptions about the model space discussed in section 2.3. For convenience we consider the expected predictive log-likelihood, which is given by

\mathrm{E}_{y^{(n+1)}} \left[ \log p(y^{(n+1)} \mid D, M_l, I) \right]. \quad (2)

Assuming that the distribution of y^{(n+1)} is the same as the distribution of the y^{(i)}, Equation 2 can be approximated using the cross-validation predictive densities as

\mathrm{E}_i \left[ \log p(y^{(i)} \mid D^{(\setminus i)}, M_l, I) \right], \quad (3)

where D^{(\setminus i)} denotes all the elements of D except y^{(i)}. Samples from the cross-validation predictive densities are easily obtained with importance-sampling leave-one-out CV or k-fold-CV.
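As a concrete illustration of Equation 3, the sketch below (ours, not the authors' code) computes a k-fold-CV estimate of the expected predictive log-likelihood for a conjugate Gaussian model, for which the cross-validation predictive density is available in closed form; the data and hyperparameter values are made up:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
y = rng.normal(0.7, 1.0, size=120)   # observed data
sigma2, tau2 = 1.0, 10.0             # known noise variance, prior variance of the mean

def log_pred(y_train, y_test):
    # Posterior of the mean under the conjugate N(0, tau2) prior.
    prec = 1.0 / tau2 + len(y_train) / sigma2
    mean = (y_train.sum() / sigma2) / prec
    # Predictive density for held-out points: N(mean, sigma2 + 1/prec).
    return norm.logpdf(y_test, mean, np.sqrt(sigma2 + 1.0 / prec))

# k-fold-CV estimate of Equation 3.
k = 10
folds = np.array_split(rng.permutation(len(y)), k)
lpd = np.concatenate([log_pred(np.delete(y, idx), y[idx]) for idx in folds])
print("expected predictive log-likelihood (Eq. 3):", lpd.mean())
```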

A commonly used Bayesian model selection method is the Bayes factor (Kass & Raftery, 1995)

B_{jk} = \frac{p(D \mid M_j, I)}{p(D \mid M_k, I)}, \quad (4)

where

p(D \mid M_l, I) = \prod_i p(y^{(i)} \mid y^{(s_i)}, M_l, I), \quad (5)

where the right hand side is obtained using the chain rule, and y^{(s_i)} is a set of data points such that y^{(s_1)} = ∅, y^{(s_2)} = y^{(1)}, and y^{(s_i)} = y^{(1,...,i-1)}; i = 3, ..., n. p(D | M_l, I) is called the marginal probability of the data, or the prior predictive likelihood. Taking the logarithm of the right hand side of Equation (5) and dividing by n we get

\mathrm{E}_i \left[ \log p(y^{(i)} \mid y^{(s_i)}, M) \right], \quad (6)

which is similar to the cross-validation estimate of the expected predictive likelihood (Equation 3). In the expected utility sense this is an average of predictive log-likelihoods with the number of data points used for fitting ranging from 0 to n−1. As the learning curve is usually steeper with a smaller number of data points, this is less than the expected predictive log-likelihood with n/2 data points. As there are terms which are conditioned on none or very few data points, Equation 6 is sensitive to prior changes. With more vague priors and more flexible models the few first terms dominate the expectation unless n is very large.
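The chain-rule identity in Equation 5 and the prior sensitivity of its first terms are easy to verify numerically for the same conjugate Gaussian model as above. A sketch under the same assumptions:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(2)
y = rng.normal(0.7, 1.0, size=50)
sigma2, tau2 = 1.0, 10.0          # known noise variance, prior variance of the mean

def log_pred_next(y_seen, y_next):
    # p(y_next | y_seen, M): conjugate posterior predictive, as before.
    prec = 1.0 / tau2 + len(y_seen) / sigma2
    mean = (y_seen.sum() / sigma2) / prec
    return norm.logpdf(y_next, mean, np.sqrt(sigma2 + 1.0 / prec))

# Chain rule (Equation 5): log p(D|M) = sum_i log p(y_i | y_1..i-1, M).
terms = np.array([log_pred_next(y[:i], y[i]) for i in range(len(y))])

# Direct log marginal likelihood: y ~ N(0, sigma2*I + tau2*J).
cov = sigma2 * np.eye(len(y)) + tau2
direct = multivariate_normal.logpdf(y, np.zeros(len(y)), cov)

print("chain rule sum:", terms.sum(), " direct:", direct)
print("first term (conditioned on no data, prior-dominated):", terms[0])
print("average term (Equation 6):", terms.mean())
```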

This comparison shows that the prior predictive likelihood of the model can be used as a lower bound of the expected predictive likelihood (favoring less flexible models). The predictive likelihood is a good utility for model comparison, but naturally the final model choice and assessment can be done using the application specific utilities. The prior predictive likelihoods can also be combined with the prior probabilities of the models to obtain the posterior probabilities of the models (with a uniform prior on models the prior predictive likelihoods are proportional to the posterior probabilities). It is possible to use the prior predictive likelihoods or the posterior probabilities depending on whether it is believed that using a non-uniform prior on the model space helps us to find more useful models. The model space prior affects only which models are selected for further study; it does not affect the final model comparison based on expected utilities. In our examples we have used a uniform prior or a modest prior favoring smaller models (see the discussion in section 2.3 and the examples in section 3).

Computation of the prior predictive likelihood p(D | M_l, I) for complex hierarchical models is usually very hard (Kass & Raftery, 1995). In the next section we discuss variable dimension MCMC methods, which can be used to estimate the posterior probabilities p(M_l | D, I), and as

p(M_l \mid D, I) = p(D \mid M_l, I) \, p(M_l \mid I) / p(D \mid I), \quad (7)

it is also possible to obtain relative prior predictive likelihood values by ignoring p(D | I), which is constant for all models M_l given I.

If there are many correlated inputs, it is probable that there are also many high-probability input combinations, and thus it may be hard to estimate the probabilities of the input combinations well. In this case, we propose to use the marginal probabilities of the inputs, which are easier to estimate, to indicate potentially useful inputs. This is illustrated in sections 3.4 and 3.6.
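Given the saved between-model states from a variable dimension MCMC run, both the posterior probabilities of the input combinations and the marginal probabilities of the inputs reduce to simple counting. A sketch with a hypothetical matrix of sampled γ vectors standing in for the RJMCMC output:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

# Hypothetical RJMCMC output: each row is a sampled gamma vector
# (one saved between-model state), K = 6 inputs here.
gammas = (rng.random((4000, 6)) < [0.95, 0.9, 0.6, 0.55, 0.1, 0.05]).astype(int)

# Posterior probability estimate of each visited input combination:
# the fraction of saved states that visited it.
counts = Counter(map(tuple, gammas))
for combo, c in counts.most_common(3):
    print(combo, c / len(gammas))

# Marginal posterior probability of each input: column means.
print("marginal p(gamma_k = 1 | D):", gammas.mean(axis=0))

# Joint probabilities of two inputs (cf. Table 1 in section 3.4).
joint = np.histogram2d(gammas[:, 2], gammas[:, 3], bins=2)[0] / len(gammas)
print("joint table for two inputs:\n", joint)
```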

In addition to input selection, the marginal probabilities of the inputs can be used to estimate the relevance of the inputs, which has great importance in analyzing the final model. For MLP networks, MacKay (1994) and Neal (1996) proposed the "automatic relevance determination" (ARD) prior as an automatic method for determining the relevance of the inputs in the MLP. In section 3.4 we discuss and illustrate the benefit of the marginal probabilities of the inputs over the ARD values for relevance estimation in the MLP.

2.2 Variable dimension Markov chain Monte Carlo

For simple models it may be possible to estimate the prior predictive likelihood p(D | M_l, I) analytically, but even in this case, if there is a large number of models, it may be computationally prohibitive to do it for all models. In the case of complex hierarchical models we usually do not have analytic solutions or good analytical approximations (e.g., variational approximations; Jordan, Ghahramani, Jaakkola, & Saul, 1998), and we need to resort to stochastic approximations such as the Monte Carlo methods (Gilks, Richardson, & Spiegelhalter, 1996; Robert & Casella, 1999; Chen, Shao, & Ibrahim, 2000). In MCMC, samples from the posterior distribution are generated using a Markov chain that has the desired posterior distribution as its stationary distribution.

In the case of input selection, models will generally have different numbers of parameters if they have different numbers of inputs. The variable dimension MCMC methods (Green, 1995; Stephens, 2000; Ntzoufras, 1999; Han & Carlin, 2000) allow jumps between models with different dimensional parameter spaces, and thus we can get samples from the posterior distribution of the input combinations. The variable dimension MCMC methods visit models (one visit is one sample) according to their posterior probabilities, and thus models with negligible probability are probably not visited in a finite time. Consequently, only the most probable models are investigated, and the computational savings can be considerable compared to going through all possible models. The speed and accuracy of variable dimension MCMC methods may in some cases be further increased by analytically marginalizing over as many parameters as possible.

We have used the reversible jump Markov chain Monte Carlo (RJMCMC; Green, 1995), which is one of the simplest to implement and one of the fastest on big problems. The RJMCMC is an extension of the Metropolis-Hastings method allowing jumps between models with different dimensional parameter spaces. In the case of input selection, models have different numbers of parameters as they have different numbers of inputs. When adding or removing inputs, the corresponding parameters are added or removed,

respectively. If the current state of the Markov chain is (M_1, θ_{M_1}), the jump to state (M_2, θ_{M_2}) is accepted with probability

\alpha = \min\left\{1,\ \frac{p(D \mid \theta_{M_2}, M_2)\, p(\theta_{M_2} \mid M_2)\, p(M_2 \mid I)\, j(M_2, M_1)\, q(u_2 \mid \theta_{M_2}, M_2, M_1)}{p(D \mid \theta_{M_1}, M_1)\, p(\theta_{M_1} \mid M_1)\, p(M_1 \mid I)\, j(M_1, M_2)\, q(u_1 \mid \theta_{M_1}, M_1, M_2)} \left|\frac{\partial h_{M_1,M_2}(\theta_{M_1}, u_1)}{\partial(\theta_{M_1}, u_1)}\right|\right\}, \quad (8)

where I denotes the assumptions about the model space, j is the probability of jumping from one model to another, q is the proposal distribution for u, and h_{M_1,M_2} is an invertible function defining the mapping (\theta_{M_2}, u_2) = h_{M_1,M_2}(\theta_{M_1}, u_1).

With a suitable proposal distribution, the acceptance probability term can be greatly simplified. When adding a new input, we set h_{M_1,M_2} to the identity, that is (\theta_{M_2}) = (\theta_{M_1}, u_1), and then use the conditional prior of the new parameters as the proposal distribution (see sections 3.1 and 3.2). Now the Jacobian is 1, the prior terms for the parameters common to both models cancel out, and the prior and the proposal distribution for the new parameters cancel out. Moreover, as we set j(M_1, M_2) = j(M_2, M_1), Equation 8 simplifies to

\alpha = \min\left\{1,\ \frac{p(D \mid \theta_{M_2}, M_2)\, p(M_2 \mid I)}{p(D \mid \theta_{M_1}, M_1)\, p(M_1 \mid I)}\right\}. \quad (9)

We use hierarchical priors for the parameters specific to inputs, and so the conditional prior of the new parameters is a natural proposal distribution with a reasonable acceptance rate and mixing behaviour. In our case studies the time needed for obtaining posterior probability estimates for all input combinations (or marginal probability estimates for the inputs) with RJMCMC was comparable to the time used for computing the expected utilities with the k-fold-CV predictive densities for a single model.
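To make the mechanics concrete, here is a minimal self-contained sketch of a birth/death sampler of this type for a toy Bayesian linear model, where the prior of a new coefficient is used as its proposal so that the acceptance probability reduces to Equation 9 (uniform prior on models, symmetric jump probabilities). It illustrates the scheme only; it is not the MLP/GP samplers of section 3:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: only the first two of four inputs matter.
n, K = 100, 4
X = rng.normal(size=(n, K))
y = X[:, 0] * 1.0 + X[:, 1] * 0.5 + rng.normal(0, 0.5, n)

sigma2, tau2 = 0.25, 1.0      # known noise variance, prior variance of coefficients

def loglik(gamma, beta):
    mu = X[:, gamma == 1] @ beta
    return -0.5 * np.sum((y - mu) ** 2) / sigma2

gamma = np.ones(K, dtype=int)
beta = rng.normal(0, np.sqrt(tau2), K)[gamma == 1]
visits = []

for it in range(20000):
    # Between-model move: propose flipping one input, chosen uniformly.
    k = rng.integers(K)
    g2 = gamma.copy(); g2[k] ^= 1
    pos = g2[:k].sum()                       # position of input k among included
    if g2[k]:                                # birth: draw new beta_k from its prior
        b2 = np.insert(beta, pos, rng.normal(0, np.sqrt(tau2)))
    else:                                    # death: drop beta_k
        b2 = np.delete(beta, pos)
    # Equation 9: prior-as-proposal cancels everything but the likelihood ratio.
    if np.log(rng.random()) < loglik(g2, b2) - loglik(gamma, beta):
        gamma, beta = g2, b2
    # Within-model move: random-walk Metropolis on the included coefficients.
    if beta.size:
        b2 = beta + rng.normal(0, 0.05, beta.size)
        logr = (loglik(gamma, b2) - loglik(gamma, beta)
                - 0.5 * (b2 @ b2 - beta @ beta) / tau2)
        if np.log(rng.random()) < logr:
            beta = b2
    if it % 10 == 0:
        visits.append(gamma.copy())

print("marginal inclusion probabilities:", np.mean(visits, axis=0))
```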

2.3 Priors on model space

A model space prior can be used to favor certain models, and even if we would like to use a uniform prior on models, it may be useful to use a non-uniform prior for computational reasons. As discussed below, it is possible that when using a uniform prior on models the implicit prior on the number of inputs may cause the variable dimension MCMC methods to miss some input combinations. Using a non-uniform prior we may be able to improve the sampling, and to obtain posterior probabilities based on the uniform prior, the appropriate prior correction can be made afterwards.

We are interested in input variable selection for MLPs and GPs, where the interactions between the input variables are handled automatically in the model, and thus we do not need to consider model space priors which consider the interactions. For example, Chipman et al. (2001) discuss some choices for model space priors in the case of explicit interactions in the model.

If we have K input variables, there are L = 2^K possible different input combinations (models). A simple and popular choice is the uniform prior on models

p(M_l) \equiv 1/L, \quad (10)

which is noninformative in the sense of favoring all models equally, but as seen below, will typically not be noninformative with respect to the model size.

It will be convenient to index each of the 2^K possible input combinations with the vector

\gamma = (\gamma_1, \ldots, \gamma_K)^T, \quad (11)

where \gamma_k is 1 or 0 according to whether the input k is included in the model or not, respectively. We get equal probability for all the input combinations (models) by setting

p(\gamma) = (1/2)^K. \quad (12)

From this we can see that the implicit prior for the number of inputs k is the Binomial

p(k) = \mathrm{Bin}(K, 1/2), \quad (13)

which clearly is not noninformative, as E[k] = 0.5K and Var[k] = 0.25K. For example, if K = 27, then k lies in the range 7 to 20 with prior probability close to 1, and thus it is possible that variable dimension MCMC methods will not sample models with less than 7 inputs (see also the examples in section 3).

To favor smaller models various priors on the number of inputs (or other components) have been used; for example, geometric (Rios Insua & Müller, 1998), truncated Poisson (Phillips & Smith, 1996; Denison et al., 1998; Sykacek, 2000), and truncated Poisson with a vague Gamma hyperprior for λ (Andrieu, de Freitas, & Doucet, 2000). A problem with these approaches is that the implicit Binomial prior still is there, producing the combined prior

p(k) = \mathrm{Bin}(K, 1/2) \, h(k), \quad (14)

where h(k) is the additional prior on the number of inputs. Although it is possible to move the mass of the prior to favor a smaller number of inputs with the additional prior, the Binomial prior effectively restricts k a priori to lie in a short range.

Instead of an additional prior on the number of inputs, we could set the probability π of a single input being in the model to the desired value and get

p(\gamma) = \pi^k (1 - \pi)^{K - k} \quad (15)

and correspondingly

p(k) = \mathrm{Bin}(K, \pi). \quad (16)

In this case, E(k | π) = Kπ and Var(k | π) = Kπ(1 − π). Although giving more control, this would still effectively restrict k a priori to lie in a short range.

A more flexible approach is to place a hyperprior on π. Following Kohn et al. (2001) and Chipman et al. (2001), we use a Beta prior

p(\pi) = \mathrm{Beta}(\alpha, \beta), \quad (17)

which is convenient, as then the prior for k is the Beta-binomial

p(k) = \mathrm{Beta\text{-}bin}(K, \alpha, \beta). \quad (18)

In this case, E[k | α, β] = Kα/(α + β) and Var[k | α, β] = Kαβ(α + β + K) / ((α + β)^2 (α + β + 1)), and thus the values for α and β are easy to solve after setting E[k] and Var[k] to the desired values. As the Beta-binomial is often nonsymmetric, it may be easier to choose the values for α and β by plotting the distribution with different values of α and β, as we did in the examples in section 3. If α = 1 and β = 1 then the prior on k is the uniform distribution on (0, K), but now the models are not equally probable, as the models with few or many inputs have higher probability than the models with about K/2 inputs. For example, for models with more than K/2 inputs, the model with one extra input is a priori K times more probable. Consequently, it is not possible to be uninformative in input selection, and some care should be taken when choosing priors, as efforts to be uninformative in one respect will force one to be informative in another respect. Even if there is some prior belief about the number of inputs, it may be hard to present it in mathematical form, or there may be computational problems, as in the example in section 3.6.
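The implied priors on the number of inputs k discussed above are easy to compute and compare directly. The sketch below (our illustration) evaluates p(k) for K = 27 under the uniform-on-models Binomial (Equation 13) and under Beta-bin(27, 5, 10) (Equation 18), confirming, for example, how little mass the uniform model prior leaves below k = 7:

```python
import numpy as np
from math import lgamma

K = 27
k = np.arange(K + 1)

def log_binom(n, r):
    return lgamma(n + 1) - lgamma(r + 1) - lgamma(n - r + 1)

# Equation 13: uniform prior on models => Bin(K, 1/2) on the number of inputs.
p_bin = np.exp([log_binom(K, i) - K * np.log(2) for i in k])

# Equation 18: Beta-bin(K, alpha, beta) pmf via log-gamma functions.
def beta_bin(K, a, b):
    def log_beta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return np.exp([log_binom(K, i) + log_beta(i + a, K - i + b) - log_beta(a, b)
                   for i in k])

p_bb = beta_bin(K, 5, 10)   # the Beta-bin(27, 5, 10) prior used in section 3

print("P(k < 7) under uniform model prior:", p_bin[:7].sum())
print("P(k < 7) under Beta-bin(27,5,10):  ", p_bb[:7].sum())
print("E[k] under Beta-bin(27,5,10):      ", (k * p_bb).sum())
```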

Above we have assumed that each input has equal probability. This assumption could be relaxed by using, for example, a prior of the form

p(\gamma) = \prod_k \pi_k^{\gamma_k} (1 - \pi_k)^{1 - \gamma_k}, \quad (19)

where π_k is the probability of input k being in the model. This kind of prior could be further combined with a hierarchical prior on the π_k to gain more flexibility. It seems that prior information about the relative probabilities of the inputs is rarely available, as such priors are seldom used.

In some cases there might be information about dependencies between input combinations that could be used. For example, dependency priors in the case of related input variables are discussed by Chipman (1996). Although we know that the inputs in our case problems are not independent, we do not know a priori what dependencies there might be, so we use the independence prior. Additionally, as one of our goals is to get more easily explainable models, it is desirable that inputs that are as independent as possible are selected.

2.4 Priors on input specific parameters

As discussed and illustrated, for example, by Richardson and Green (1997), Dellaportas and Forster (1999), and Ntzoufras (1999), the prior on the parameters p(θ_{M_l} | M_l) greatly affects the prior predictive likelihood (and thus the posterior probability) of the model M_l having an extra parameter θ^+_{M_l}. If the prior on the extra parameters p(θ^+_{M_l} | M_l) is too tight, the extra parameters might not reach a useful range in the posterior, thus making the model less probable. On the other hand, if the prior is too vague, the probability of any value for the extra parameter gets low, and correspondingly, the probability of the model gets low.

Often it is recommended to test different priors, but there is no formal guidance on what to do if the different priors produce different results. Some methods for controlling the effects of the prior in linear models are discussed by Ntzoufras (1999), but these methods may be difficult to generalize to other models. Using hierarchical priors seems to partly alleviate the problem, as discussed by Richardson and Green (1997) and illustrated in section 3. Furthermore, since the effect of the model space prior is considerable and its selection usually quite arbitrary, there is probably no need to excessively fine tune the priors of the parameters in question. Naturally, the prior sensitivity is an even lesser problem when the final model choice is based on the expected utilities.

3 Illustrative examples

As illustrative examples, we use MLP networks and Gaussian processes with Markov chain Monte Carlo sampling (Neal, 1996, 1997, 1999; Lampinen & Vehtari, 2001; Vehtari, 2001) in one toy problem (section 3.4) and two real world problems: concrete quality estimation (section 3.5) and forest scene classification (section 3.6). We first briefly describe the models and algorithms used (sections 3.1, 3.2, and 3.3).

3.1 MLP neural networks

We used a one-hidden-layer MLP with tanh hidden units, which in matrix format can be written as

f(x, \theta_w) = b_2 + w_2 \tanh(b_1 + w_1 x),

where θ_w denotes all the parameters w_1, b_1, w_2, b_2, which are the hidden layer weights and biases, and the output layer weights and biases, respectively. We used Gaussian priors on the weights and the Automatic Relevance Determination (ARD) prior on the input weights:

w_{1,kj} ~ N(0, α_{w_1,k})
α_{w_1,k} ~ Inv-gamma(α_{w_1,ave}, ν_{w_1,α})
α_{w_1,ave} ~ Inv-gamma(α_{w_1,0}, ν_{w_1,α,ave})
ν_{w_1,α} = V[i], i ~ U_d(1, K)
V[1:K] = [0.4, 0.45, 0.5:0.1:1.0, 1.2:0.2:2, 2.3, 2.6, 3, 3.5, 4]
b_1 ~ N(0, α_{b_1}), α_{b_1} ~ Inv-gamma(α_{b_1,0}, ν_{α,b_1})
w_2 ~ N(0, α_{w_2}), α_{w_2} ~ Inv-gamma(α_{w_2,0}, ν_{α,w_2})
b_2 ~ N(0, α_{b_2})

where the α's are the variance hyperparameters (the α_{w_1,k}'s are also called ARD parameters) and the ν's are the numbers of degrees of freedom in the inverse-Gamma distributions. To allow easier sampling with the Gibbs method, discretized values of ν were used, so that [a:s:b] denotes the set of values from a to b with step s, and U_d(a, b) is a uniform distribution of integer values between a and b.

Since the ARD prior allows less relevant inputs to have a smaller effect in the model, it produces an effect similar to continuous input variable selection, and thus discrete input variable selection is not always necessary. To be exact, the ARD prior controls the nonlinearity of the input, instead of the predictive importance or causal importance (Lampinen & Vehtari, 2001), but since "no effect" is a linear effect, it will also allow irrelevant inputs to have a smaller effect in the model. This is illustrated in section 3.4.

In the MLP, the weights w_{1,kj} are connected to input k. For a new input k, the w_{1,kj} and α_{w_1,k} were generated from the proposal distribution (see Equation 8), which was the same as their respective conditional prior distributions. As a hierarchical prior structure was used, the conditional priors were also adapting to the data, and so useful acceptance rates were obtained. We also tested the conditional maximization and the auxiliary variable methods (Brooks, Giudici, & Roberts, 2003). Finding the conditional maximum was too slow and unstable, while the auxiliary variable method easily got stuck, despite tuning attempts.

3.2 Gaussian processes

The Gaussian process is a non-parametric regression method, with priors imposed directly on the covariance function of the resulting approximation. Given the training inputs x^{(1)}, ..., x^{(n)} and a new input x^{(n+1)}, a covariance function can be used to compute the (n+1) by (n+1) covariance matrix of the associated targets y^{(1)}, ..., y^{(n)}, y^{(n+1)}. The predictive distribution for y^{(n+1)} is obtained by conditioning on the known targets, giving a Gaussian distribution with the mean and the variance given by

\mathrm{E}_y[y \mid x^{(n+1)}, \theta, D] = k^T C^{-1} y^{(1,\ldots,n)}
\mathrm{Var}_y[y \mid x^{(n+1)}, \theta, D] = V - k^T C^{-1} k,

where C is the n by n covariance matrix of the observed targets, y^{(1,...,n)} is the vector of known values for these targets, k is the vector of covariances between y^{(n+1)} and the n known targets, and V is the prior variance of y^{(n+1)}. For regression, we used a simple covariance function producing smooth functions

C_{ij} = \eta^2 \exp\left(-\sum_{u=1}^{p} \rho_u^2 \left(x_u^{(i)} - x_u^{(j)}\right)^2\right) + \delta_{ij} J^2 + \delta_{ij} \sigma_e^2.

The first term of this covariance function expresses that cases with nearby inputs should have highly correlated outputs. The η parameter gives the overall scale of the local correlations. The ρ_u parameters are multiplied by the coordinate-wise distances in input space and thus allow for different distance measures for each input dimension. The second term is the jitter term, where δ_{ij} = 1 when i = j. It is used to improve the matrix computations by adding a constant term to the diagonal. The third term is the residual model.
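The predictive equations above translate directly into a few lines of linear algebra. A minimal sketch with made-up data and hyperparameter values (using a linear solver rather than an explicit inverse for numerical stability):

```python
import numpy as np

rng = np.random.default_rng(6)

def cov(XA, XB, eta2, rho, jitter2=1e-6, sigma2_e=0.01):
    # C_ij = eta^2 exp(-sum_u rho_u^2 (x_u^i - x_u^j)^2), plus jitter and
    # residual variance on the diagonal when XA and XB are the same set.
    d2 = ((XA[:, None, :] - XB[None, :, :]) ** 2 * rho ** 2).sum(-1)
    C = eta2 * np.exp(-d2)
    if XA is XB:
        C += (jitter2 + sigma2_e) * np.eye(len(XA))
    return C

# Toy 1-D data and hypothetical hyperparameters.
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 30)
eta2, rho = 1.0, np.array([1.0])

C = cov(X, X, eta2, rho)
Xs = np.array([[0.5]])                    # new input x^(n+1)
k = cov(Xs, X, eta2, rho)[0]              # covariances to the n known targets
V = eta2 + 1e-6 + 0.01                    # prior variance of y^(n+1)

mean = k @ np.linalg.solve(C, y)          # E[y | x, theta, D] = k^T C^-1 y
var = V - k @ np.linalg.solve(C, k)       # Var[y | ...] = V - k^T C^-1 k
print(mean, var)
```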

We used an Inverse-Gamma prior on η^2 and a hierarchical Inverse-Gamma prior (producing an ARD-like prior) on the ρ_u:

η^2 ~ Inv-gamma(η_0^2, ν_{η^2})
ρ_u ~ Inv-gamma(ρ_ave, ν_ρ)
ρ_ave ~ Inv-gamma(ρ_0, ν_0)
ν_ρ ~ Inv-gamma(ν_{ρ,0}, ν_{ν_ρ,0})

Similar to the MLP, in the GP the "ARD" parameters ρ_u measure the nonlinearity of the inputs, as ρ_u defines the characteristic length scale of the function for a given input direction. However, this prior produces an effect similar to continuous input variable selection, and thus discrete input variable selection is not always necessary.

For a new input, the corresponding ρ_u was generated from its conditional prior. As a hierarchical prior structure was used, the conditional prior was also adapting, and so useful acceptance rates were obtained. The acceptance rates for the GP were naturally higher than for the MLP, as the proposal distribution was univariate compared to the (J+1)-dimensional proposal distribution for the MLP (J is the number of hidden units). We also tested the auxiliary variable method (Brooks et al., 2003), but it did not improve the acceptance rates, despite some tuning attempts.

3.3 Algorithms used

For integration over the model parameters and hyperparameters, the in-model sampling was made using Metropolis-Hastings sampling, Gibbs sampling, and hybrid Monte Carlo (HMC) as described in the references (Neal, 1996, 1997, 1999; Vehtari & Lampinen, 2001; Vehtari, 2001), and for the estimation of the posterior probabilities of the input combinations, the between-model sampling was made using the RJMCMC as described in the references (Green, 1995; Vehtari, 2001). A between-model jump consisted of proposing adding or removing a single input or switching an included input with a not-included input, and the proposal distribution for the new parameters was the conditional prior of the new parameters. As hierarchical priors for the parameters specific to inputs were used, the conditional priors are adapting to the data, and thus the conditional prior is a natural proposal distribution with a reasonable acceptance rate and mixing behavior. The MCMC sampling was done with the FBM¹ software and Matlab code partly derived from the FBM and the Netlab² toolbox.

To make convergence diagnostics and the estimation of credible intervals (CI) easier, ten independent RJMCMC chains (with different starting points) were run for each case. For convergence diagnostics, we used visual inspection of trends, the potential scale reduction method (Gelman, 1996) and the Kolmogorov-Smirnov test (Robert & Casella, 1999). For between-model convergence diagnostics, we used the chi-squared and Kolmogorov-Smirnov tests proposed by Brooks, Giudici, and Philippe (2002), which also utilize several independent chains. As the number of visits to each model was typically very low, we mainly analysed the visits to each subpopulation having an equal number of inputs. Other convergence assessment methods for the RJMCMC are discussed, for example, by Brooks and Giudici (1999), and Brooks and Giudici (2000). Sequential correlations of the MCMC samples were estimated using autocorrelations (Neal, 1993; Chen et al., 2000) and the chains were thinned to get less dependent samples. Depending on the case, every 400th–1600th meta-iteration sample was saved, and a total of 4000 samples were saved from the ten independent chains.

¹ http://www.cs.toronto.edu/~radford/fbm.software.html
² http://www.ncrg.aston.ac.uk/netlab/

Figure 1: The target function is an additive function of four inputs. The predictive importance of every input is equal, in RMSE terms, as the latent functions are scaled to equal variance over the uniform input distribution U(−3, 3).
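As an aside, the potential scale reduction diagnostic used above is straightforward to compute from the parallel chains. A sketch of the basic Gelman form for a scalar quantity, with simulated chains standing in for the real output:

```python
import numpy as np

def psrf(chains):
    # chains: (m, n) array, m parallel chains of length n.
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()          # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)        # between-chain variance
    var_hat = (n - 1) / n * W + B / n              # pooled variance estimate
    return np.sqrt(var_hat / W)                    # close to 1 => converged

rng = np.random.default_rng(7)
chains = rng.normal(0, 1, size=(10, 400))          # ten well-mixed chains
print("R-hat:", psrf(chains))
```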

The distributions of the expected utilities of the models were estimated using the cross-validation predictive densities obtained using k-fold-CV, as described by Vehtari and Lampinen (2002).

3.4 Toy problem

With suitable priors it is possible to have a large number of input variables in Bayesian models, as less relevant inputs can have a smaller effect in the model. For example, in MLP it is useful to use the so called "automatic relevance determination" prior (ARD; MacKay, 1994; Neal, 1996). In ARD each group of weights connected to the same input has common variance hyperparameters, while the weight groups can have different hyperparameters. In many experiments ARD has been shown to be useful in allowing many input variables in the MLP (Lampinen & Vehtari, 2001). Such models may have good predictive performance, but it may be difficult to analyse them, or costly to make measurements or computations, and thus input selection may be desirable.

In this toy problem we compare ARD, the posterior probabilities of inputs, and the expected utility based approach for input relevance determination. The target function is an additive function of four inputs (Figure 1), with equal predictive importance for every input. We used 10-hidden-unit MLPs with similar priors as described in (Lampinen & Vehtari, 2001; Vehtari, 2001) and a uniform prior on all input combinations.

Figure 2 shows the predictive importance, the posterior probability, the mean absolute values of the first and second order derivatives of the output with respect to each input, and the relevance estimates from the ARD. Note that the ARD coefficients are closer to the second derivatives than to the first derivatives (local causal importance) or to the error due to leaving an input out (predictive importance).

Figure 3 shows the predictive importance, the relevance estimates from the ARD, and the posterior probabilities of inputs 1 (almost linear) and 4 (very nonlinear) when the weighting of input 4 is varied from 1 to 1/16. As the inputs are independent, the leave-input-out error measures well the predictive importance. Based on the ARD values, it is not possible to know which one of the inputs is more important. It is also hard to distinguish the irrelevant input from the almost linear input. The marginal probability of an input indicates well whether the input has any predictive importance.

Figure 4 shows the predictive importance, the posterior probabilities, and the relevance estimates from the ARD in the case where input 5 is a duplicate of input 4.

Figure 2: Different measures of importance of the inputs for the test function in Figure 1. The ARD coefficients are closer to the second derivatives than to the first derivatives (local causal importance) or to the error due to leaving an input out (predictive importance).

Figure 3: The top plot shows the leave-input-out error. The middle plot shows that although the weight of input 4 is reduced, the ARD value stays at a constant level measuring the nonlinearity of the input, until the weight is so small that the information from the input is swamped by the noise. The bottom plot shows that the probability of the input indicates whether the input has any predictive importance.

Figure 4: Input 5 is a duplicate of input 4. The low leave-input-out error for inputs 4 and 5 does not mean that both inputs could be left out. The posterior probabilities and ARD values for both inputs 4 and 5 are smaller than in the model without input 5 (see Figure 2), but the correlation between the inputs can be diagnosed (see text).

(a) Estimated with RJMCMC
                 Input 4
                  0     1
  Input 5   0    0.0   0.3   | 0.3
            1    0.3   0.4   | 0.7

(b) Assuming independence
                 Input 4
                  0     1
  Input 5   0    0.1   0.2   | 0.3
            1    0.2   0.5   | 0.7

(c) Relative difference
                 Input 4
                  0     1
  Input 5   0    0.0   1.4
            1    1.4   0.8

Table 1: Joint and marginal posterior probabilities of inputs 4 and 5. 0 and 1 indicate whether the input is in the model. Note that at least one of the inputs has to be in the model, but preferably only one of them.

Leave-input-out error indicates that either of the inputs 4 or 5 could be left out, but in order to know whether both could be left out, backward elimination should be used. The marginal posterior probabilities and ARD values for the single inputs are lower than in the case without the 5th input, but now it is possible to examine the joint probabilities of the inputs and the correlations of the ARD values. The ARD values of inputs 4 and 5 have a correlation coefficient of −0.68, which indicates that maybe only one of the inputs is necessary. Tables 1a, b, and c illustrate the effect of the correlation on the posterior probabilities and indicate well that it is necessary to have at least one of the inputs in the model, and preferably only one of them.

When the inputs are independent, the leave-input-out error measures well the predictive importance. The ARD does not measure the predictive importance and thus it does not distinguish an irrelevant input from an almost linear input. The marginal probability of an input indicates well whether the input has any predictive importance. When the inputs are dependent, the leave-input-out error does not measure the dependence between the inputs. To handle dependencies it could be replaced with computationally heavy backward elimination. Although the ARD does not measure the predictive importance, the correlations between inputs can be discovered by examining the correlations between the ARD values. Joint predictive importances and dependencies of the inputs can be easily diagnosed by examining the joint probabilities of the inputs.

3.5 Real world problem I: Concrete quality estimation

In this section, we present results from the real world problem of predicting the quality properties of concrete. The goal of the project was to develop a model for predicting the quality properties of concrete, as a part of a large quality control program of the industrial partner of the project. The quality variables included, for example, compressive strengths and densities for 1, 28 and 91 days after casting, and bleeding (water extraction), flow value, slump and air-%, which measure the properties of fresh concrete. These quality measurements depend on the properties of the stone material (natural or crushed, size and shape distributions of the grains, mineralogical composition), additives, and the amount of cement and water. In the study, we had 27 explanatory variables selected by the concrete expert (listed, e.g., in Figure 7) and 215 samples designed to cover the practical range of the variables, collected by the concrete manufacturing company. In the following, we report results for the air-%. Similar results were achieved for the other target variables (Vehtari, 2001). See the details of the problem, the descriptions of the variables and the conclusions made by the concrete expert in Reference (Järvenpää, 2001).

The aim of the study was to identify which properties of the stone material are important, and additionally, to examine the effects that the properties of the stone material have on the concrete. It was desirable to get both an estimate of the relevance of all available input variables and to select a minimal set required to get a model with statistically the same predictive capability as the full model. A smaller model is easier to analyze, and there is no need to make possibly costly or toxic measurements in the future for properties having negligible effect. Note that as the cost of the toxic measurements was difficult to estimate, we did not include the cost directly in the utility function. Instead we just tested which toxic measurements could be left out without a statistically significant drop in prediction accuracy. The problem is complicated because there are strong cross-effects, and the inputs measuring similar properties have strong dependencies.

For the models used in (Järvenpää, 2001), we had made the input selection using the deviance information criterion (DIC) (Spiegelhalter, Best, Carlin, & van der Linde, 2002) and heuristic backward selection. DIC estimates the expected utility using a plug-in predictive distribution and an asymptotic approximation (Vehtari, 2002; Vehtari & Lampinen, 2003). Although this approach produced reasonable results, it required a full model fitting for each model investigated, contained some ad hoc choices to speed up the heuristic backward selection, and lacked an estimate of the associated uncertainty and clear results for the relevance of the different inputs.

Below we present results using the RJMCMC and the expected utilities computed by using the cross-validation predictive densities. With this approach, we were able to get more insight about the problem, smaller models, and improved reliability of the results. We used Gaussian process models with a quadratic covariance function and an ARD type hierarchical prior for the input specific parameters. The residual model used was an input dependent Student's t_ν with unknown degrees of freedom ν. As the size of the residual variance varied depending on three inputs, which were zero/one variables indicating the use of additives, the parameters of the Student's t_ν were made dependent on these three inputs with a common hyperprior.

Of the about 10^8 possible input combinations, the 4000 saved states included about 3500 and 2500 different input combinations with the uniform and Beta priors, respectively. The few most probable models were visited by all ten independent chains, and, for example, the ten most probable models were visited by at least eight chains. Thus, useful credible intervals could be computed for the model probabilities.
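With ten independent chains, a simple way to attach uncertainty to such visit-based probability estimates is to compute the visit frequency separately per chain and summarize across chains. A sketch with simulated visit counts (the normal-approximation interval is our simplification, not necessarily the interval used in the paper):

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical: per-chain visit counts for one model, out of 400 saved
# states per chain.
visits = rng.binomial(400, 0.05, size=10)
p_hat = visits / 400                    # per-chain probability estimates

est = p_hat.mean()
se = p_hat.std(ddof=1) / np.sqrt(len(p_hat))
print(f"p(M|D) ~ {est:.3f}, approx 95% CI: [{est-2*se:.3f}, {est+2*se:.3f}]")
```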

Figures 5 and 6 show the posterior probabilities of the number of inputs with an equal prior probability for all the models and with a Beta-bin(27,5,10) prior on the number of inputs, respectively. With equal prior probability for all models, the prior probability for the number of inputs being less than eight is so low that it is unlikely that the RJMCMC will visit such models. The parameters for the Beta-binomial prior were selected to better reflect our prior information, that is, we thought it might be possible to have

Figure 5: Concrete quality estimation example, predicting the air-% with GP: The posterior probabilities of the number of inputs with an "uninformative" prior, i.e., equal prior probability for all models.

Figure 6: Concrete quality estimation example, predicting the air-% with GP: The posterior probabilities of the number of inputs with a Beta-bin(27,5,10) prior on the number of inputs.

a low number of inputs, most probably about 6–12 inputs, while not excluding the possibility of a larger number of inputs. Note that the Beta-binomial prior used is in fact more vague about the number of inputs than the "uninformative" prior. The posterior distribution of the number of inputs is quite widespread, which is natural as the inputs are dependent and the ARD type prior allows the use of many inputs.

Figure 7 shows the marginal posterior probabilities of the inputs with a Beta-bin(27,5,10) prior on the number of inputs. The nine most probable inputs are clearly more probable than the others, and the other inputs have posterior probability approximately equal to or less than the mean prior probability of an input (1/3).

Figure 8 shows the ARD values of the inputs for the full model. Eight of the nine most probable inputs also have a larger ARD value than the other inputs, but they cannot be clearly distinguished from the other inputs. Moreover, the input "BET" (measuring the specific surface area of the fines) is ranked much lower by the ARD than by the probability (compare to Figure 7). Further investigation revealed that "BET" was relevant, but had a near-linear effect. Figure 9 shows the posterior probabilities of the ten most probable input combinations with a Beta-bin(27,5,10) prior on the number of inputs. All the ten models are very similar, only minor changes are present in a few inputs, and the changed inputs are known to correlate strongly. In this case, two models are significantly more probable than the others, but between

Figure 7: Concrete quality estimation example, predicting the air-% with GP: The marginal posterior probabilities of the inputs with a Beta-bin(27,5,10) prior on the number of inputs. The inputs in the most probable model are in boldface (see Figure 9).

Figure 8: Concrete quality estimation example, predicting the air-% with GP: The ARD values of the inputs of the full model. The nine most probable inputs are in boldface. Compare to Figure 7.

Figure 9: Concrete quality estimation example, predicting the air-% with GP: The probabilities of the ten most probable models with a Beta-bin(27,5,10) prior on the number of inputs. The top part shows the probabilities of the models, the middle part shows which inputs are in the model, and the bottom part shows the number of inputs in the model.

them, there is no clear difference. As the other probable models are similar to the two most probable models, it is likely that the probability mass has been spread over many equally good models.

For the final model choice, we computed the distributions of the expected utilities for the most probable models. The differences between the most probable models and the full model were small, and so there was no big danger of choosing a bad model. To verify that by conditioning on a single model we do not underestimate the uncertainty about the structure of the model (see, e.g., Draper, 1995; Kass & Raftery, 1995), we also computed the expected utility for the model in which we integrated over all the possible input combinations. Such integration can readily be approximated using the previously obtained RJMCMC samples. There was no significant difference in the expected predictive likelihoods.

To illustrate the differences between the posterior probabilities and the expected predictive likelihoods, Figure 10 shows the expected utilities computed using the cross-validation predictive densities for the full model and the models having the k (k = 5, ..., 15) most probable inputs. Note that the expected predictive likelihoods are similar for the models having at least about the eight most probable inputs, while the posterior probabilities are very different for models with different numbers of inputs. For example, the posterior probability of the full model is vanishingly small compared to the most probable models, but its expected predictive likelihood is similar to the most probable models. The performance of the full model is similar to the smaller models, as the ARD type prior allows many inputs without reduced predictive performance. Note that if a point estimate (e.g., the mean) of the expected utility were used for model selection, larger models would be selected than when selecting the smallest model with statistically the same utility as the best model.

Figure 10: Concrete quality estimation example, predicting the air-% with GP: The expected utilities (mean predictive likelihoods) of the models having the k most probable inputs (see Figure 7). After about nine inputs, adding more inputs does not improve the model performance significantly. To give an impression of the differences in pairwise comparison, there is, for example, about 90% probability that the nine-input model has a higher predictive likelihood than the eight-input model.

Figure 11: Concrete quality estimation example, predicting the air-% with MLP: The posterior probabilities of the number of inputs with a Beta-bin(27,5,10) prior on the number of inputs. Compare to the results for the GP in Figure 6.


To illustrate the effect of the prior on the approximating functions, we also report results for input selection with MLP. The results for the MLP were not sensitive to changes in the hyperparameter values, so the difference in the results is probably caused mainly by the difference in the form of the covariance function realized by the GP and MLP models.

Figure 11 shows the posterior probabilities of the number of inputs with a Beta-bin(27,5,10) prior on the number of inputs. In the case of the MLP, a larger number of inputs is more probable than in the case of the GP (compare to Figure 6). Figure 12 shows the marginal posterior probabilities of the inputs with a Beta-bin(27,5,10) prior on the number of inputs. Most of the inputs have higher posterior probabilities than the mean prior probability (1/3). There is no clear division between more probable and less probable inputs. The nine most probable inputs are the same as in the GP case (compare to Figure 7), except that "SC - pore area >300 Å" has replaced the very similar input "SC - pore area >300–900 Å". Figure 13 shows the ARD values of the inputs for the full model. The order of the inputs based on the ARD

Figure 12: Concrete quality estimation example, predicting the air-% with MLP: The marginal posterior probabilities of the inputs with a Beta-bin(27,5,10) prior on the number of inputs. The nine most probable inputs in the GP case are in boldface (see Figure 7).

Figure 13: Concrete quality estimation example, predicting the air-% with MLP: The ARD values of the inputs of the full model. Compare to Figure 12.

Figure 14: Concrete quality estimation example, predicting the air-% with MLP: The probabilities of the ten most probable models with a Beta-bin(27,5,10) prior on the number of inputs. The top part shows the probabilities of the models, the middle part shows which inputs are in the model, and the bottom part shows the number of inputs in the model. Compare to the results for the GP in Figure 9.

values is clearly different from the order of the inputs based on the marginal posterior probabilities (compare to Figure 12). Figure 14 shows the posterior probabilities of the ten most probable input combinations with a Beta-bin(27,5,10) prior on the number of inputs. There is more variation in the input combinations than in the case of the GP, and no model is significantly more probable than the others (compare to Figure 9).

Although different inputs would be selected in the case of the MLP than in the case of the GP, the predictive performance (measured with the expected predictive likelihood) was similar for both model types. Figure 15 shows the expected utilities for the full model and the models having the k (k = 5, ..., 15) most probable inputs. The expected predictive likelihoods are similar for the models having at least about the eight most probable inputs, and similar to the expected predictive likelihoods of the GP models (Figure 10).

For the bleeding, Figure 16 shows the marginal posterior probabilities of the number of inputs and Figure 17 shows the posterior probabilities of the ten most probable models. About half of the inputs have a higher posterior probability than the mean prior probability (1/3). The probability mass has been spread over many inputs and many similar models, due to the many correlated inputs. It is less clear than in the case of the air-% which inputs and input combinations are the most probable. However, the most probable models had indistinguishable expected utilities, and thus there was no danger of selecting a bad model. Note how the input "SC-Qnty 0.8/1.0", which is included in the most probable model, has

uml期末深刻复习

第一章 1、UML(Unified Modeling Langeage)是一种可视化的建模语言,提供了一种标准的、易于理解的方式描述系统的实现过程,从而实现了用户与设计者之间的有效交流。 2、定义系统的物理元素,用于描述事物的静态特征,包括类、接口、协作、用例、主动类、组件和节点。 3、行为建模元素包括哪些? 反映事物之间的交互过程和状态变化,包括交互图和状态图。 4、组织建模元素包括哪些? 子系统、模型、包、框架等。 5、关系元素包括哪些? 关联、泛化、组成、实现、聚集、依赖、约束 6、对于UML的描述,错误的是(A、C)。 A:UML是一种面向对象的设计工具。 B:UML不是一种程序设计语言,而是一种建模语言。 C:UML不是一种建模语言规格说明,而是一种表示的标准。 D:UML不是过程,也不是方法,但允许任何过程和方法使用它。 7、从系统外部用户角度看,用于描述系统功能集合的UML图是用例视图。 8、对如下的用例图的功能进行简单描述。 Buy Goods 8、在UML中,描述父类与子类之间关系的是泛化关系。 9、“交通工具”类与“汽车”类之间的关系属于(D)。 A:关联关系 B:聚集关系

C:依赖关系 D:泛化关系 第二章 1、从软件工程的角度,软件开发可分为:需求分析、系统分析、设计、实现、测试5个阶段。 2、用UML进行建模时会涉及9种图,Rose 2003只支持其中的8种,还有一种图只能用别的图来代替。这个不能在Rose中直接表示的图是(C)。 A:顺序图 B:用例图 C:对象图 D:构件图 3、应用题:Rose分别用哪些图描述系统的静态和动态方面? 静态:用例图、类图、构件图、部署图; 动态:状态图、协作图、顺序图、活动图。 4、默认情况下,Rose模型文件的扩展名为(A)。 A:.mdl B:.ptl C:.cat D:.sub 5、关于浏览窗口的描述,正确的是(A、B、C、D)。 A:可视化地显示模型中所有元素的层次结构 B:具有托放功能,通过模型元素的托放操作可以方便地改变一个模型的特征 C:在浏览器中的模型元素发生变化时,可以自动更新模型中的相关元素 D:只有在浏览窗口中才能把模型元素从模型中永久删除 6、Rose是什么的缩写?

ic笔试常见试题

1.setup和holdup时间区别. Answer: 建立时间:触发器在时钟沿来到前,其数据输入端的数据必须保持不变的时间 保持时间:触发器在时钟沿来到后,其数据输入端的数据必须保持不变的时间 2.多时域设计中,如何处理信号跨时域 Answer: 情况比较多,如果简单回答的话就是:跨时域的信号要经过同步器同步,防止亚稳态传播。例如:时钟域1中的一个信号,要送到时钟域2,那么在这个信号送到时钟域2之前,要先经过时钟域2的同步器同步后,才能进入时钟域2。这个同步器就是两级d触发器,其时钟为时钟域2的时钟。这样做是怕时钟域1中的这个信号,可能不满足时钟域2中触发器的建立保持时间,而产生亚稳态,因为它们之间没有必然关系,是异步的。这样做只能防止亚稳态传播,但不能保证采进来的数据的正确性。所以通常只同步很少位数的信号。比如控制信号,或地址。当同步的是地址时,一般该地址应采用格雷码,因为格雷码每次只变一位,相当于每次只有一个同步器在起作用,这样可以降低出错概率,象异步FIFO的设计中,比较读写地址的大小时,就是用这种方法。 如果两个时钟域之间传送大量的数据,可以用异步FIFO来解决问题。 https://www.doczj.com/doc/f214965476.html,tch与register的区别,为什么现在多用register.行为级描述中latch如何产生的 区别不多说。为什么避免使用latch,因为设计中用latch会使设计后期的静态时序分析变的困难(必须用的地方当然另当别论)。 行为级描述中latch产生的原因:多由于构造组合逻辑电路时,使用if或case语句,没有把所有的条件给足,导致没有提到的条件,其输出未知。或者是每个条件分支中,没有给出所有输出的值,这就会产生latch。所以构造组合逻辑电路时,其always语句中的敏感信号必须包括所有的输入端,每个条件分支必须把所有的输出端的值都给出来。 4.BLOCKING NONBLOCKING 赋值的区别 Answer: 这个问题可参考的资料很多,讲的都很透彻,可以找一下。基本用法就是常说的“组合逻辑用BL OCKING,时序逻辑用NONBLOCKIN G”。 5.MOORE 与MEELEY状态机的特征 Answer: 6.IC设计中同步复位与异步复位的区别 Answer: 如果光说概念的话:同步复位在时钟沿采复位信号,完成复位动作。 异步复位不管时钟,只要复位信号满足条件,就完成复位动作。 象芯片的上电复位就是异步复位,因为这时时钟振荡器不一定起振了,可能还没有时钟脉冲。异步复位很容易受到复位端信号毛刺的影响,比如复位端信号由组合逻辑组成,那组合逻辑输出产生的冒险,就会使触发器错误的复位。 7.实现N位Johnson Counter,N= 8.用FSM实现101101的序列检测模块 9. 集成电路设计前端流程及工具。 10. FPGA和ASIC的概念,他们的区别 11. LATCH和DFF的概念和区别 Answer: LATC是H锁存器,DFF是触发器,其电路形式完全不同。 12. 用DFF实现二分频。

Linux内核Input子系统初始化驱动架构

Input子系统初始化驱动架构 目标:分析input子系统初始化驱动架构 本文要点: 1.input子系统重要数据结构及关系图 2.input子系统初始化驱动架构 input子系统结构图: Input子系统构建过程结构关系图 注1:开发人员在初始化设备时,需初始化相应的输入设备类型evbit[],按键值keybit[],相对位移relbit[]等。 如:evbit[0] = BIT(EV_KEY),表示此设备支持按键;keybit[BIT_WORD(BTN_0)] = BIT_MASK(BTN_0),表示支持的上报值BTN_0; relbit[0] = BIT(REL_X) | BIT(REL_Y),用于表示相对坐标值,如鼠标的位移。

主要数据结构间的关系:

struct input_id { __u16 bustype; //总线类型,如#define BUS_HOST 0x19 __u16 vendor; //制造商 __u16 product; //产品id __u16 version; //版本 }; struct timer_list { …… void (*function)(unsigned long); unsigned long data; }; #define EV_MAX 0x1f #define EV_CNT (EV_MAX+1) #define KEY_MAX 0x2ff #define KEY_CNT (KEY_MAX+1) #define ABS_MAX 0x3f #define REP_MAX 0x01 #define BITS_PER_BYTE 8 #define DIV_ROUND_UP(x,y) (((x) + ((y) - 1)) / (y)) #define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long)) struct input_dev { //input子系统中唯一一个需要驱动开发人员填充的数据结构…… struct input_id id;//和具体的handler匹配时,会用到 unsigned long evbit[BITS_TO_LONGS(EV_CNT)]; //需要开发人员填充 unsigned long keybit[BITS_TO_LONGS(KEY_CNT)]; //需要开发人员填充 //timer.data = (long) dev //timer.function = input_repeat_key; struct timer_list timer;//用于处理按键按下的时候重复上报键值 int abs[ABS_MAX + 1]; int absmin[ABS_MAX + 1]; //#define REP_DELAY 0x00 ;rep[REP_DELAY] = 250 int rep[REP_MAX + 1];// #define REP_PERIOD 0x01;rep[REP_PERIOD] = 33; struct list_head h_list;//list_add_tail_rcu(&handle->d_node, & dev->h_list); struct list_head node;//把自己链入到input_dev_list

实验六 Simulink子系统封装

实验六Simulink子系统封装 The electronic circuit in the following figure consists of a voltage source, a resistor, and an inductor in the form of a tightly wound coil. An iron ball beneath the inductor experiences a gravitational force as well as an induced magnetic force (from the inductor) that opposes the gravitational force. A differential equation for the force balance on the ball is given by where M is the mass of the ball, h is the height (position) of the ball, g is the acceleration due to gravity, i is the current, and βis a constant related to the magnetic force experienced by the ball. This equation describes the height, h, of the ball due to the unbalanced forces acting upon it. The current in the circuit also varies with time and is given by the following differential equation where L is the inductance of the coil, V is the voltage in the circuit, and R is the resistance of the circuit. The system of equations has three states The system also has one input (V), and one output (h). It is a nonlinear system due to the term in the equation involving the square of i and the inverse of h. Values of the parameters are given as: M=0.1 kg, g=9.81 m/s2, R=2 Ohm, L=0.02 H, β=0.001 1、在Simulink建立以上描述的系统的框图模型,并将其封装为子系统(模块),如

The Input Subsystem

1 Input subsystem architecture overview

The architecture of the input subsystem is shown in the figure below. It consists of three parts: the input subsystem core (Input Core), the driver layer, and the event-handler layer (Event Handler). An input event, such as a mouse movement, a keyboard key press, or a joystick motion, travels along the path

    Driver -> Input Core -> Event Handler -> user space

to reach the application. The Input Core, also called the Input Layer, is implemented by driver/input/input.c and the related header files; downward it provides the interface for device drivers, and upward it provides the programming interface for the Event Handler layer.

1.1 Main data structures
(Table 1: main data structures of the input subsystem; the table body is not preserved in the source.)

1.2 Example architecture diagram
(Figure 2: input subsystem architecture example diagram.)

2 Building the input path

Because the input subsystem is layered, the input path of a device splits into two independent halves: driver to Input Core, and Input Core to Event Handler. The interfaces of these two halves are therefore created independently.

2.1 Registering a hardware device

The driver layer talks to the underlying hardware and converts the hardware's response to user input into standard input events before sending them up to the Input Core. A driver registers and unregisters an input device with input_register_device() and input_unregister_device(). Both take an input_dev structure, defined in driver/input/input.h; the driver must fill in some of its fields before calling input_register_device():

    #include <linux/module.h>   /* the include names were stripped in the source; */
    #include <linux/init.h>     /* these are the usual ones for this classic example */
    #include <linux/input.h>

    MODULE_LICENSE("GPL");

    struct input_dev ex1_dev;

    static int __init ex1_init(void)
    {
        /* extra safe initialization */
        memset(&ex1_dev, 0, sizeof(struct input_dev));
        init_input_dev(&ex1_dev);

        /* set up descriptive labels */
        ex1_dev.name = "Example 1 device";
        /* phys is unique on a running system */
        ex1_dev.phys = "A/Fake/Path";

        ex1_dev.id.bustype = BUS_HOST;
        ex1_dev.id.vendor  = 0x0001;
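The listing breaks off after the vendor id. Judging by the classic example this is taken from, the setup would plausibly continue as follows; treat it as a hedged completion rather than text recovered from this document (the product/version values in particular are invented):

        ex1_dev.id.product = 0x0002;   /* illustrative */
        ex1_dev.id.version = 0x0100;   /* illustrative */

        /* declare what the device can report */
        set_bit(EV_KEY, ex1_dev.evbit);
        set_bit(BTN_0, ex1_dev.keybit);

        /* hand the device to the Input Core */
        input_register_device(&ex1_dev);
        return 0;
    }

    static void __exit ex1_exit(void)
    {
        input_unregister_device(&ex1_dev);
    }

    module_init(ex1_init);
    module_exit(ex1_exit);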

Course Design: A Combination Lock in Verilog

Course Design Report
Topic: a 4-bit serial digital combination lock
Student ID: 201420130326   Name: Xie Yuanliang   Major: Communication Engineering   Class: 1421302   Supervisor: Zhong Kai
January 5, 2017

1. Abstract

With the advance of technology, digital-circuit products have found wide application. Traditional mechanical locks are structurally simple and offer little security; electronic combination locks, with their secrecy, flexibility, safety, and convenience, are the trend for the future. This design uses an EDA flow, which makes the design process highly automated and provides powerful design, test, simulation, analysis, and management functions; the system is assembled and simulated within the EDA environment. A hardware description language makes it quick and flexible to design a combination lock meeting varied requirements. This design implements the lock in Verilog HDL: the design requirements and the overall design idea are introduced first, then the modules used (the keyboard module, the connection module, and the control module) are presented in turn, with each module's main code and a functional simulation of each module.

Keywords: combination lock, Verilog HDL

2. Design task

Design a 4-bit digital combination-lock subsystem. Requirements:
1) The unlock code is 4 binary bits; the lock opens only when the entered code matches the code stored in the lock, otherwise the design enters an "error" state and raises an alarm signal.
2) The code stored in the lock is adjustable.
3) The serial lock's alarm persists until the reset switch is pressed, after which the lock automatically waits for the next unlock attempt.

3. System design

In this design the FPGA system is written in Verilog in a modular style, and every module is coded and simulated with ModelSim.

3.1 Keyboard module

(Figure: ideal keyboard-circuit interface, with output signal flag.)

Design principle: this module uses a 2x2 scanned-keypad circuit to sample the input signals. Its main function is described next:

Each key press makes flag produce one rectangular pulse, which serves as the trigger signal for the connection module; at the same time key_value carries the code of the pressed key and is passed into the connection module together with flag.

(Figures: the actual interface of the design; simulation of the keyboard module.)

As the simulation shows, when kevalue outputs the value 10, flag produces a rectangular pulse; when kevalue outputs 11, flag shows another rising edge. The testbench behind the figure actually contains a small error: when the scan signal a reaches its third value (01), b should be 01 in the real circuit rather than 11; by the code, flag should then be set to 1, but since flag is already 1 at that moment no fault results. In practice the clock runs so fast, and a human key press lasts long enough, that flag should only fall after the key has been released; otherwise many useless pulses would appear and the device would be unusable.

3.2 Connection module
(Figure: connection-module interface.)

Linux input Subsystem: Study Summary

Linux input Subsystem: Study Summary

Contents
1. Basic framework
2. Registration flow
3. Event-delivery flow

Abstract: a short study summary of the Linux input subsystem that concentrates on a few key objects (handle, handler, device node, client) and how they fit together, to simplify understanding.

1 Basic framework

The layering of input-device drivers in Linux is shown below (Figure 1: layering of input devices in Linux, from the web). For the underlying principles, and why such a subsystem was designed at all, see other references.

2 Registration flow (see Figure 2)

At initialization the input subsystem first calls register_chrdev(INPUT_MAJOR, "input", &input_fops); registering 256 character-device minors for itself (which also means at most 256 input devices can exist in the system). These 256 minors are divided into 8 classes, corresponding to the 8 handlers stored in the array input_table[8]:

    static struct input_handler *input_table[8];   /* kernel/input/input.c */

The first handler manages minor numbers 0-31, the second 32-63, and so on. Each handler can implement one class of event driver (and each class can manage at most 32 devices). For example, the devices behind /dev/input/eventX and those behind /dev/input/mouseX use the two most common event drivers. Taking /dev/input/eventX as an example: they all share the same event driver, i.e. the same handler from an object point of view, and the devices managed by that handler occupy minor numbers 64-95, each minor corresponding to one handle.

When are the directories and files under /sys/devices/virtual/input/ created? In mtk_tpd.c, the call (tpd->dev = input_allocate_device()) leads to their creation via the input_dev_attr_groups array.
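Since each handler owns a block of 32 minors, the core can find the right handler just by shifting the minor number. A simplified sketch of that dispatch, modeled on the logic of input_open_file() in kernel/input/input.c of this era rather than the exact kernel code:

    /* 32 minors per handler: 0-31 -> slot 0, 64-95 -> slot 2 (evdev), ... */
    struct input_handler;
    static struct input_handler *input_table[8];

    static struct input_handler *handler_for_minor(unsigned int minor)
    {
        return input_table[minor >> 5];
    }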

Saturation

Saturation: limits the range of a signal.
Library: Discontinuities.

The Saturation block imposes an upper and a lower bound on its input signal. It accepts real signals of the following data types: floating point, built-in integer, fixed point. See Data Types Supported by Simulink.

The Main pane of the Saturation block dialog box is shown below, followed by the dialog's Signal Attributes pane. (The two dialog screenshots are not preserved in the source.)

Show data type assistant
Displays the Data Type Assistant.
Settings: the Data Type Assistant helps you set the Output data type parameter. See Specify Block Output Data Types.
Command-line information: see Block-Specific Parameters.

Upper limit
Specifies the upper bound on the input signal.
Settings: default 0.5.

Minimum: value from the Output minimum parameter. Maximum: value from the Output maximum parameter.
Tips:
- When the input signal to the Saturation block is above this value, the block output is clipped to this value.
- The Upper limit parameter is converted to the output data type offline using round-to-nearest with saturation.
Command-line information: see Block-Specific Parameters.

Lower limit
Specifies the lower bound on the input signal.
Settings: default -0.5. Minimum: value from the Output minimum parameter. Maximum: value from the Output maximum parameter.
Tips:
- When the input signal is below this value, the block output is clipped to this value.
- The Lower limit parameter is converted to the output data type offline using round-to-nearest with saturation.
Command-line information: see Block-Specific Parameters.

Treat as gain when linearizing
Selecting this parameter causes linearization commands to treat the block's gain as 1.
Settings: ...
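The clipping behavior described under Upper limit and Lower limit is a plain clamp. A minimal C sketch, using the default limits quoted above:

    #include <stdio.h>

    /* clamp u to [lower, upper], as the Saturation block does */
    static double saturate(double u, double lower, double upper)
    {
        if (u > upper) return upper;   /* clipped to Upper limit */
        if (u < lower) return lower;   /* clipped to Lower limit */
        return u;
    }

    int main(void)
    {
        printf("%g %g %g\n",
               saturate(0.7,  -0.5, 0.5),   /* -> 0.5  */
               saturate(-0.9, -0.5, 0.5),   /* -> -0.5 */
               saturate(0.1,  -0.5, 0.5));  /* -> 0.1  */
        return 0;
    }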

The Linux input Subsystem

Linux provides the input subsystem: keys, touchscreens, mice, and other input devices can all implement their drivers through the input interface functions. The subsystem has four layers:

a. the device-driver layer: the concrete device level, which handles hardware interrupts, reads input events, and controls the hardware;
b. the input subsystem core, which manages input devices and the event interface;
c. the input handlers, which manage how device events are processed;
d. the application layer, which provides the user-facing interface.

This is shown in the figure below. The kernel documentation (linux-2.6.32/Documentation/input/input-programming.txt) contains a simple example:

    #include <linux/input.h>    /* the include names were stripped in the source; */
    #include <linux/module.h>   /* these are the ones the kernel example uses */
    #include <linux/init.h>
    #include <asm/irq.h>
    #include <asm/io.h>

    static struct input_dev *button_dev;   /* the input device structure */

    static irqreturn_t button_interrupt(int irq, void *dummy)
    {
        /* report the key event to the input subsystem */
        input_report_key(button_dev, BTN_0, inb(BUTTON_PORT) & 1);
        input_sync(button_dev);   /* tell consumers the report is complete */
        return IRQ_HANDLED;
    }

    static int __init button_init(void)
    {
        int error;

        if (request_irq(BUTTON_IRQ, button_interrupt, 0, "button", NULL)) {
            printk(KERN_ERR "button.c: Can't allocate irq %d\n", button_irq);
            return -EBUSY;   /* could not claim the interrupt line */
        }

        button_dev = input_allocate_device();   /* allocate an input_dev */
        if (!button_dev) {                      /* allocation failed */
            printk(KERN_ERR "button.c: Not enough memory\n");
            error = -ENOMEM;
            goto err_free_irq;
        }

        /* the source is cut off above; the kernel example continues roughly so: */
        button_dev->evbit[0] = BIT_MASK(EV_KEY);
        button_dev->keybit[BIT_WORD(BTN_0)] = BIT_MASK(BTN_0);

        error = input_register_device(button_dev);
        if (error) {
            printk(KERN_ERR "button.c: Failed to register device\n");
            goto err_free_dev;
        }
        return 0;

    err_free_dev:
        input_free_device(button_dev);
    err_free_irq:
        free_irq(BUTTON_IRQ, button_interrupt);
        return error;
    }
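The kernel example pairs the init routine with an exit routine that undoes the registration in reverse order; for completeness, a sketch along the lines of the same documentation:

    static void __exit button_exit(void)
    {
        input_unregister_device(button_dev);     /* detach from the input core */
        free_irq(BUTTON_IRQ, button_interrupt);  /* release the interrupt line */
    }

    module_init(button_init);
    module_exit(button_exit);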

The Input Subsystem

The input subsystem is really a character-device driver framework specialized for one class of devices; later come the bus-device-driver framework and other per-class character-driver frameworks. The application path starts at input.c. When writing basic character drivers, our test programs call open() from main() on a device node under /dev; but to dovetail seamlessly with application libraries (for example, when developing a Qt program we generally never see the device node at all), Linux introduces per-class driver frameworks such as the input subsystem.

A plain character driver registers a file_operations structure, creates the device node automatically through a class, maps some register addresses, and finally fills in the functions of the file_operations structure. input.c works similarly. The key idea is layering and separation (not elaborated here): input.c is the core layer, with a device layer (device) below and an event-handler layer (handler) above. Normally an event reaches the user-space application along Driver -> Input Core -> Event Handler -> user space; conceptually it is a two-into-one relationship, since both drivers and event handlers register with input.c and are matched against each other through input_register_handler() and input_register_device(). A device reports an event by calling input_event(), which dispatches it to the attached handlers; the handler is responsible for processing the event:

    list_for_each_entry(handle, &dev->h_list, d_node)
        if (handle->open)
            handle->handler->event(handle, type, code, value);

Starting from input.c: the application goes through a series of system-call interfaces and ends up in input.c. Its entry function registers the class and the fops structure. In input_open_file() an input_handler and the input_table[] array (indexed by minor number) are used to fetch the handler's fops, which replaces the file's fops (file->f_op = new_fops;) so that the application layer calls straight into the handler's fops; then the new fops' open is invoked: err = new_fops->open(inode, file).

The input_table[] array seen in input_open_file() is filled in input_register_handler(); as said above, both handlers and devices must register with input.c, through input_register_handler() and input_register_device() respectively. Besides filling input_table[], input_register_handler() adds handler->node to the input_handler_list and then walks every entry in input_dev_list, calling input_attach_handler(dev, handler). What happens inside? The handler's id_table is matched against the device with input_match_device(handler->id_table, dev) and, on success, the two are connected with handler->connect(handler, dev, id). input_register_device() works symmetrically, except that it adds to the device list (input_dev_list) and walks the handler list, doing the same matching and connecting.

In practice, with this input subsystem all we have to write is the device-layer driver; the kernel already contains the mechanism, i.e. the event-handler layer already exists. Take evdev.c as a concrete event-handler driver; its entry function registers an input_handler:

    static struct input_handler evdev_handler = {
        .event      = evdev_event,
        .connect    = evdev_connect,
        .disconnect = evdev_disconnect,
        .fops       = &evdev_fops,
        .minor      = EVDEV_MINOR_BASE,   /* minor base 64 */
        .name       = "evdev",
        .id_table   = evdev_ids,
    };

Given this structure and the discussion above, input_register_handler() fills input_table[64 >> 5] (i.e. slot 2), then matches against evdev_ids; on a successful match evdev_connect() is executed.
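A hedged sketch of the matching step described above, i.e. how input_attach_handler() compares a handler's id_table entries against a device. The field and flag names mirror the kernel's struct input_device_id and its INPUT_DEVICE_ID_MATCH_* idea, but the code is simplified for illustration:

    /* simplified illustration of id_table matching (not the kernel's exact code) */
    #define MATCH_BUS    1
    #define MATCH_VENDOR 2

    struct id_entry {
        unsigned int flags;       /* which fields below must match */
        unsigned short bustype;
        unsigned short vendor;
    };

    static int match_one(const struct id_entry *id,
                         unsigned short bustype, unsigned short vendor)
    {
        if ((id->flags & MATCH_BUS) && id->bustype != bustype)
            return 0;
        if ((id->flags & MATCH_VENDOR) && id->vendor != vendor)
            return 0;
        return 1;   /* every requested field matched: connect handler and device */
    }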

input Subsystem Study Notes

input Subsystem Study Notes, Part 1: Preface

This series of study notes covers the first Linux driver I had to work through (and could not avoid) in a real project, so it is worth writing down. Since I had never touched the input subsystem before, part of the material is drawn from the web; the series combines my own notes with the best of what others have written. (Note: this is my first post on the OurUnix blog; the format of the series follows the blogger's "mmc subsystem study notes".)

The series covers:

- Input subsystem theory
  - introduction to the input subsystem
  - input subsystem structure diagram
  - layering of input-device drivers in Linux
  - how the device-driver layer of the input subsystem works
- Software design flow
  - the related APIs
  - allocating an input device
  - registering an input device
  - driver implementation: declaring supported events
  - driver implementation: reporting events
  - releasing and unregistering a device
- A worked example (a key driver)
  - analysis of the key functions: input_allocate_device(), the registration function input_register_device(), and input_report_key() for reporting events to the subsystem
  - handler registration analysis: the key data structures, registering an input_handler, registering an input_handle
- The subsystem proper
  - the subsystem initialization function input_init()
  - analysis of the evdev input-event driver and its initialization

input Subsystem Study Notes, Part 2: Introduction and Structure Diagram

Introduction. Input devices (keys, keyboards, touchscreens, mice, buzzers, and so on) are typical character devices. Their usual working model: the hardware raises an interrupt when a key press or touch occurs (or the driver polls via a timer); the CPU then reads the key code, coordinates, and other data over SPI, I2C, or the external memory bus and places them in a buffer; the character-device driver manages that buffer, and the driver's read() interface lets user space read the key codes, coordinates, and other data.
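The "reporting events" step listed above is the heart of a key driver's interrupt path: report the code, then close the event packet. A minimal hedged sketch (the helper name and the key code chosen are arbitrary):

    /* report one key transition to the input core (illustrative helper) */
    static void report_key_state(struct input_dev *dev, int pressed)
    {
        input_report_key(dev, KEY_ENTER, pressed);  /* 1 = pressed, 0 = released */
        input_sync(dev);                            /* end of this event packet */
    }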

Design of an Embedded I2C Touchscreen Driver

Linux/Android Multi-Touch Support (2011-10-10)

Multi-touch support was added to the kernel starting with 2.6.30. ENAC (http://lii-enac.fr/en/architecture/linux-input/) provides multi-touch examples supporting Ubuntu 10.10, Fedora 14, and Android, and it even supports Windows 7's HID multi-touch protocol directly: after loading the relevant driver, as long as the VID/PID match, a screen that does multi-touch under Windows 7 can do multi-touch under Linux as well.

The whole processing path is: touch screen -> drivers -> input subsystem -> evdev -> user space. Because touchscreens differ, making Linux support multi-touch still requires a matching or newly developed driver; the driver's main job is to receive the data and then push it up to the input subsystem according to the multi-touch protocol.

The source's excerpt of the multi-touch protocol loop is cut off after "for (i=0; i".
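Based on the type-A multi-touch protocol of that kernel generation (Documentation/input/multi-touch-protocol.txt), the truncated loop would plausibly look like the following; the variable names (input, touch[], n_contacts) are invented for illustration:

    /* type-A MT reporting: one frame containing n_contacts contacts */
    for (i = 0; i < n_contacts; i++) {
        input_report_abs(input, ABS_MT_TOUCH_MAJOR, touch[i].major);
        input_report_abs(input, ABS_MT_POSITION_X,  touch[i].x);
        input_report_abs(input, ABS_MT_POSITION_Y,  touch[i].y);
        input_mt_sync(input);   /* terminates one contact's data */
    }
    input_sync(input);          /* terminates the whole frame */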

EDA / Verilog HDL Exam Questions (for reference)

1. Fill in the blanks (10 points, 1 each)
1. The end goal of designing an electronic system with EDA techniques is to complete the design and implementation of an ASIC.
2. Programmable devices fall into FPGAs and CPLDs.
3. As EDA technology matures, the top-down design method is increasingly used in Verilog HDL design.
4. The larger international PLD manufacturers include Altera and Xilinx.
5. A complete conditional statement produces combinational logic; an incomplete conditional statement produces sequential logic.
6. The blocking assignment operator is =; the non-blocking assignment operator is <=.

2. Multiple choice (10 points, 2 each)
1. Large programmable devices are mainly FPGAs and CPLDs. Which of the following statements about FPGA structure and operation is correct? Answer: C.
A. FPGA stands for complex programmable logic device;
B. an FPGA is a product-term-based programmable logic device;
C. an SRAM-based FPGA must be configured after every power-up;
D. among Altera's devices, the MAX7000 family is FPGA-structured.
2. The EDA-based FPGA/CPLD design flow is: schematic/HDL text entry -> synthesis -> ___ -> ___ -> fitting -> programming/download -> hardware test. Answer: B.
(1) functional simulation (2) timing simulation (3) logic synthesis (4) configuration (5) pin assignment
A. (3)(1)   B. (1)(5)   C. (4)(5)   D. (4)(2)
3. Subsystem design optimization mainly aims at better resource utilization and lower power (area optimization), or at higher running speed (speed optimization). Which of the following are area optimizations? Answer: B.
(1) pipelining (2) resource sharing (3) logic optimization (4) serialization (5) register balancing (6) critical-path method
A. (1)(3)(5)   B. (2)(3)(4)   C. (2)(5)(6)   D. (1)(4)(6)
4. Which of the following identifiers is illegal? Answer: A.
A. 9moon   B. State0   C. Not_Ack_0   D. signall
5. Which of the following statements is not a concurrent statement? Answer: D.

Course Design Specification: A Network Chat Room (with source code)

Operating Systems Course Design Report

Contents
1. Design requirements and goals
2. Background
3. Client design
4. Main client code
5. Design retrospective

Abstract
With the arrival of the networked information age, people are increasingly used to obtaining and exchanging information on the Internet. Surveys show that over 80% of people who go online open a chat tool, and almost every young person chats online; online chat has become a new way for young people to socialize, and chat rooms are especially suited to free conversation between strangers. Drawing on what we learned about operating systems, databases, and MFC, we decided to build a simple chat system.

1. Design requirements and goals
The task is to implement an instant-messaging system exercising: (1) inter-process communication, concurrency (synchronization/mutual exclusion), and file reading and writing; (2) memory management, DLLs, the Windows message mechanism, and I/O.

Task breakdown: the client subsystem. The client's functionality divides into the following parts:
[1] Input of process information: the system takes the information a user process wants to send from the keyboard or reads it from a file. This is the basic part of the client subsystem and the foundation of all the features that follow; the system must also be able to share part of this information with the other subsystems.

[2] Storage of process information: store the process's information in the client system, and save sent messages to a file for later confirmation and lookup of inter-process communication.
[3] Transfer and reception of communication data: send the user's messages from the client over the network to the server, receive the messages the server relays back from the other party, and store them.

(Figure: data flow between user, client, and server: user setup, connection setup, sending, message content, receiving, user handling, exit, server log, communication data, connect/save/process/monitor.)

2. Background

SOCKET. A socket can be seen as an endpoint of a communication link between two programs; it is the bridge between the application and the network driver. A socket is created in the application and associated with the network driver by binding. Thereafter, data the application hands to the socket is sent out over the network by the driver; when the machine receives data associated with the socket's bound IP address and port number, the driver hands it to the socket, from which the application can read it.
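To make the socket description concrete, here is a minimal hedged sketch of a client creating a socket, connecting, and sending one message. It uses Berkeley sockets in C with an illustrative address and port; the report itself targets MFC/Winsock, whose calls differ slightly:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* create the endpoint: the "bridge" between application and driver */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv = { 0 };
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(5000);                  /* illustrative port */
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
            perror("connect");
            return 1;
        }
        const char *msg = "hello from the chat client";
        send(fd, msg, strlen(msg), 0);   /* handed to the socket, sent by the driver */
        close(fd);
        return 0;
    }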

DSP Glossary (Part 2)

DSP terminology

A
Absolute Lister - absolute lister
ACC - accumulator
AD - Analog Devices, Inc.
ADC - analog-to-digital converter
ADTR - asynchronous data transmit and receive register
All-pipeline branching - fully pipelined branch execution
ALU - arithmetic logic unit
AMBA - Advanced Microcontroller Bus Architecture (the on-chip bus of ARM processors)
ANSI - American National Standards Institute
AP - application processor
API - application programming interface
ARAU - auxiliary register arithmetic unit
ARSR - asynchronous serial port receive shift register
ARP - auxiliary register pointer / Address Resolution Protocol
Archiver Utility - archiver utility
ASIC - application-specific integrated circuit
ASP - audio interface / Active Server Pages
ASK - amplitude-shift keying
ASPCR - asynchronous serial port control register
AXSR - asynchronous serial port transmit shift register
ATM - asynchronous transfer mode

B
B0, B1 - dual-access RAM (DARAM) blocks B0 and B1
BDM - background debug mode
Bluetooth - Bluetooth
BEGIER - debug interrupt enable register
BOPS - billions of operations per second
Boot Loader - boot loader

C
C Compiler - C compiler
CALU - central arithmetic logic unit
CAN - Controller Area Network
CCS - Code Composer Studio (code debugging/design suite)
CDMA - code-division multiple access
cDSP - configurable or customizable digital signal processor
Code Size - code size

Data Structures: A Stack Subsystem

    /*
     * Problems:
     *  - design a linked stack of characters, and write routines to push,
     *    pop, and display all elements of the stack;
     *  - write an application that converts a decimal integer to binary;
     *  - write an application that converts an infix expression to postfix
     *    (reverse Polish) form;
     *  - drive the operations above from a selection menu:
     *
     *            Stack subsystem
     *    *******************************
     *    *    1 ------ push            *
     *    *    2 ------ pop             *
     *    *    3 ------ display         *
     *    *    4 ------ base conversion *
     *    *    5 ------ reverse Polish  *
     *    *    0 ------ return          *
     *    *******************************
     *    Select a menu item (0-5):
     */
    #include <stdio.h>    /* header names were stripped in the source ("#include <>"); */
    #include <stdlib.h>   /* stdio.h and stdlib.h are the obvious candidates */

    #define STACKMAX 50

    typedef struct sta {            /* storage structure of the stack */
        int data;
        struct sta *next;
    } stackNode;

    typedef struct {                /* pointer to the top of the stack */
        stackNode *top;
    } linkStack;

    void conversion(int n);
    void push(linkStack *p, int x);
    int pop(linkStack *p);
    void showStack(linkStack *p);
    void sufflx();

    /*************************************************
    Function: main()
    Description: driver routine
    Calls: push() pop()
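The preview breaks off before the function bodies. As a hedged illustration of how the declared push and pop would typically be implemented for this linked-stack layout (my sketch, not the original author's code):

    /* push x: allocate a node and hook it in at the top */
    void push(linkStack *p, int x)
    {
        stackNode *n = (stackNode *)malloc(sizeof(stackNode));
        if (!n) return;          /* allocation failed: stack unchanged */
        n->data = x;
        n->next = p->top;
        p->top  = n;
    }

    /* pop the top element; here -1 signals an empty stack */
    int pop(linkStack *p)
    {
        if (!p->top) return -1;
        stackNode *n = p->top;
        int x = n->data;
        p->top = n->next;
        free(n);
        return x;
    }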

BIOS Explained

BIOS is an abbreviation of "Basic Input Output System". It is a set of programs burned into a ROM chip on the computer's motherboard, holding the machine's most fundamental input/output routines, the system configuration data, the power-on self-test, and the system bootstrap. Its main function is to give the computer its lowest-level, most direct hardware configuration and control.

Introduction

"Bios" is also the abbreviated plural of the English word biography (biography - biographies - bios), generally read /'baious/.

The BIOS setup program is stored in the BIOS chip and can only be entered at boot time. The CMOS stores the parameters and data that the BIOS setup program configures, while the setup program itself manages and configures the computer's basic input/output system so that the machine runs in its best state; it can also be used to troubleshoot or diagnose the system. Some people reason that since the BIOS is a "program" it must be software, like the familiar Word or Excel; many others disagree, because it differs from ordinary software and is bound extremely tightly to the hardware. Figuratively, the BIOS is the "bridge" connecting software to hardware devices, responsible for meeting the hardware's immediate demands.

The BIOS chip may well be the only labelled chip on the motherboard; it is typically a 32-pin dual-in-line IC printed with the word "BIOS". Before the 586 era the BIOS was usually a rewritable EPROM, whose label protected its contents (ultraviolet light erases an EPROM) and should not be peeled off casually. From the 586 on, BIOS ROMs are mostly EEPROMs (electrically erasable ROMs), which can be rewritten via a jumper switch and the driver disk shipped with the system, making BIOS upgrades convenient. Every computer user deals with the BIOS in the course of using a machine, and it plays a very important role in the system; how good a motherboard is depends to a large extent on how advanced the BIOS management functions on it are. The BIOS chip is a rectangular or square chip on the motherboard that mainly stores: ...

Simulation Lab Manual (VC Edition)

Simulation Lab Manual

Experiment 1: UWB (Ultra-Wideband) Signal and Modulation Simulation

(1) Purpose
Through the simulation of a UWB system, gain a deeper understanding of the properties of the PPM-UWB signal model and a firmer grasp of UWB modulation and demodulation techniques.

(2) Tools
C++/VC.

(3) Procedure

1. Become familiar with the C++/VC development platform, the generation of UWB signals, and the UWB modulation/demodulation process.

2. The w(t) making up a UWB signal is generally a sequence of so-called monocycle-shape pulses, whose chief characteristic is very steep rising and falling edges. Common monocycle shapes for UWB signals include the Gaussian pulse, the Gaussian first-derivative pulse, Scholtz's pulse, the Manchester pulse, the RZ Manchester pulse, the sine pulse, and the rectangular pulse. Because a UWB signal is transmitted directly at baseband, the chosen pulse shape directly affects system performance.

3. Here the Gaussian first-derivative pulse is taken as the UWB pulse. Its formula is garbled in the source; a plausible reading, consistent with the stated parameters, is

    w(t) = A * (sqrt(2e)/T_au) * (t - T_c) * exp(-2((t - T_c)/T_au)^2)        (1)

where A is the amplitude, T_c is the time shift, and T_au is the pulse-width parameter. (Figure 1: the simulated UWB pulse, amplitude versus time in ns.)

4. On top of the pulse simulation, carry out the UWB modulation simulation. In the time domain, a pulse-position-modulated UWB signal can be written (reconstructed from the garbled source in the standard time-hopping PPM form) as

    S_tr(t) = sum over j from -infinity to +infinity of w( t - j*T_f - c_j*T_c - delta*d_{floor(j/N_s)} )        (2)

Complete the signal-modulation simulation in C++/VC; the simulation flow is shown in Figure 2.
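A small C sketch that samples the pulse of the reconstructed equation (1) can serve as a starting point for step 3; all parameter values are illustrative, and the formula is only as reliable as the reconstruction above:

    #include <stdio.h>
    #include <math.h>

    /* Gaussian first-derivative monocycle per the reconstructed equation (1) */
    static double monocycle(double t, double A, double Tc, double Tau)
    {
        double u = (t - Tc) / Tau;
        return A * (sqrt(2.0 * exp(1.0)) / Tau) * (t - Tc) * exp(-2.0 * u * u);
    }

    int main(void)
    {
        double A = 1.0, Tc = 1e-9, Tau = 0.25e-9;   /* illustrative values */
        for (int k = 0; k < 100; k++) {             /* 2 ns span, 100 samples */
            double t = k * 2e-9 / 100.0;
            printf("%e %e\n", t, monocycle(t, A, Tc, Tau));
        }
        return 0;
    }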
