Feature Extraction by Neural Network Nonlinear Mapping

for Pattern Classification

B. Lerner, H. Guterman, M. Aladjem, and I. Dinstein

Department of Electrical and Computer Engineering

Ben-Gurion University of the Negev

Beer-Sheva 84105, Israel

Abstract

Feature extraction has traditionally been studied separately for exploratory data projection and for classification. Feature extraction for exploratory data projection aims at data visualization by projecting a high-dimensional space onto a two- or three-dimensional space, while feature extraction for classification generally requires more than two or three features. Therefore, feature extraction paradigms for exploratory data projection are not commonly employed for classification, and vice versa. We study the extraction of more than three features, using a neural network (NN) implementation of Sammon’s nonlinear mapping, to be applied to classification. Comparative classification experiments reveal that Sammon’s method, which is primarily an exploratory data projection technique, has a remarkable classification capability. The classification performance of the (unsupervised) Sammon’s mapping is highly comparable with that of the principal component analysis (PCA) based feature extractor and is only slightly inferior to that of the (supervised) multilayer perceptron (MLP) feature extractor. The paper thoroughly investigates random and non-random initializations of Sammon’s mapping. Only one experiment of Sammon’s mapping is required when the eigenvectors corresponding to the largest eigenvalues of the sample covariance matrix are used to initialize the projection. This approach tremendously reduces the computational load and substantially raises the classification performance of Sammon’s mapping using only very few eigenvectors.

_________________________

The 13th International Conference on Pattern Recognition, ICPR13, Vienna, vol. 4, 320-324, 1996. Corresponding author: Boaz Lerner, University of Cambridge Computer Laboratory, New Museums Site, Cambridge CB2 3QG, UK.

1. Introduction

Feature extraction is the process of mapping the original features (measurements) into fewer features that retain the main information of the data structure. A large variety of feature extraction methods based on statistical pattern recognition or on artificial neural networks appears in the literature [1]-[9]. In all the methods, a mapping f transforms a pattern y of a d-dimensional feature space to a pattern x of an m-dimensional projected space, m < d,

x = f(y),   (1)

such that a criterion J is optimized. The mapping f(y) is determined, among all transformations g(y), as the one that satisfies [9]

J{f(y)} = max_g J{g(y)}.   (2)

The mappings differ in the functional form of g(y) and in the criteria they optimize.

Feature extraction methods can be grouped into four categories [4] based on the a priori knowledge used for the computation of J: supervised versus unsupervised, and by the functional form of g(y): linear versus nonlinear. In cases where the target class of the patterns is unknown, unsupervised methods are the only way to perform feature extraction. In other cases, supervised paradigms are preferable. Linear methods are simpler and are often based on an analytical solution, but they are inferior to nonlinear methods when the classification task requires complex hypersurfaces. Widespread unsupervised methods for feature extraction are PCA [3], [9] (a linear mapping) and Sammon’s nonlinear mapping [6]. The PCA attempts to preserve the variance of the projected data, whereas Sammon’s mapping tries to preserve the interpattern distances. The MLP, when acting as a feature extractor, provides a supervised nonlinear mapping of the input space into its hidden layer(s).

Feature extraction for exploratory data projection enables high-dimensional data visualization for better understanding of the data structure and for cluster analysis. In feature extraction for classification, it is desirable to extract highly discriminative reduced-dimensionality features which reduce the computational requirements of classification. However, feature extraction criteria for exploratory data projection typically aim to minimize an error function, such as the mean square error or the interpattern distance difference, whereas feature extraction criteria for classification aim to increase class separability as much as possible. Hence, the optimum extracted features (with respect to a specific criterion) calculated for exploratory data projection are not necessarily the optimum features with respect to class separability, and vice versa. In particular, two or more classes may have principal features that are similar. Moreover, feature extraction for exploratory data projection is used for two- or three-dimensional data visualization, whereas classification usually needs more than two or three features. Consequently, feature extraction paradigms for exploratory data projection are not generally used for classification, and vice versa.

This paper studies the application of feature extraction paradigms for exploratory data projection to classification as well. It uses Sammon’s nonlinear mapping, which is primarily an exploratory data projection technique. The classification accuracy of a NN implementation of Sammon’s mapping for more than three features is compared with the accuracy of the PCA based and the MLP feature extractors, which are usually employed for classification. In addition, the paper extensively investigates and compares the trade-offs between random and non-random initializations of Sammon’s mapping.

2. Paradigms of feature extraction for exploratory data projection and classification

Sammon [6] proposed a feature extraction method for exploratory data projection. The method is an unsupervised nonlinear paradigm that attempts to preserve all the interpattern distances as well as possible. In this study we extend the domain of the method to classification purposes. The classification capability of Sammon’s mapping is compared to that of two well-known feature extraction paradigms for classification: the PCA, which is an unsupervised linear paradigm, and the MLP feature extractor, which is a supervised nonlinear paradigm. The outline of the experiments is shown in Fig. 1.

A. Sammon’s mapping

The criterion to minimize in Sammon’s mapping is Sammon's stress (error), defined as:

E = \frac{1}{\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} d^*(i,j)} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{\left[ d^*(i,j) - d(i,j) \right]^2}{d^*(i,j)}   (3)

where d*(i,j) and d(i,j) are the distances between pattern i and pattern j in the input space and in the projected space, respectively. The Euclidean distance is frequently used. Sammon’s stress is a measure of how well the interpattern distances are preserved when the patterns are projected from a high-dimensional space to a lower-dimensional space. The minimum of Sammon’s stress is sought by carrying out a steepest-descent procedure. As in other steepest-descent based approaches, local minima in the error surface are often unavoidable. This implies that a number of repeated experiments with different random initializations have to be performed before the initialization with the lowest stress is found. However, several methods which make use of some knowledge of the feature data may be more effective. For example, the initialization could be based on the first norms of the feature vectors [2] or on the projections of the data onto the space spanned by the principal axes of the data [2], [4]. The second drawback of Sammon’s mapping is its computational load, which is O(n^2). In each iteration n(n-1)/2 distances, along with the error derivatives, must be calculated. As the number of vectors (n) increases, the computational requirements (time and storage) grow quadratically.
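As a concrete reference, the stress of Eq. (3) can be evaluated directly from the two sets of pairwise distances. The following NumPy sketch is ours, not the authors’ code; the function and variable names are illustrative:

```python
import numpy as np

def sammon_stress(X_high, X_low):
    """Sammon's stress (Eq. 3): the normalized sum of squared differences
    between inter-pattern distances in the original and projected spaces."""
    n = X_high.shape[0]
    iu = np.triu_indices(n, k=1)  # indices of all pairs i < j
    d_star = np.sqrt(((X_high[:, None] - X_high[None]) ** 2).sum(-1))[iu]
    d = np.sqrt(((X_low[:, None] - X_low[None]) ** 2).sum(-1))[iu]
    return ((d_star - d) ** 2 / d_star).sum() / d_star.sum()
```

A distance-preserving projection yields a stress of 0; uniformly doubling every projected distance yields a stress of 1, since each pair then contributes d*(i,j) to the numerator sum. Note that the n(n-1)/2 pairwise distances computed here are exactly the source of the O(n^2) cost mentioned above.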

Fig. 1. The experiments’ layout.

Mao and Jain [4] have suggested a NN implementation of Sammon’s mapping. Fig. 2 shows the NN architecture they used in their paper. It is a two-layer feedforward network in which the number of input units is set to the feature space dimension, d, and the number of output units is set to the extracted feature space dimension, m. No rule for determining the number of hidden layers and the number of hidden units in each hidden layer is suggested. They derived a weight updating rule for the multilayer feedforward network that minimizes Sammon’s stress based on the gradient descent method. The general updating rule for all the hidden layers, l = 1, ..., L-1, and for the output layer (l = L) is:

\Delta \omega_{jk}^{(l)} = -\eta \frac{\partial E}{\partial \omega_{jk}^{(l)}} = -\eta \left[ \Delta_{jk}^{(l)}(\mu)\, y_j^{(l-1)}(\mu) - \Delta_{jk}^{(l)}(\nu)\, y_j^{(l-1)}(\nu) \right]   (4)

where ω_jk^(l) is the weight between unit j in layer l-1 and unit k in layer l, η is the learning rate, y_j^(l) is the output of the j-th unit in layer l, and μ and ν are two patterns. The Δ_jk^(l)’s are the errors accumulated in each layer and backpropagated to the preceding layer, similarly to standard backpropagation. However, in the NN implementation of Sammon’s mapping the errors in the output layer are functions of the interpattern distances.


Fig. 2. A two-layer perceptron NN for Mao and Jain’s implementation of Sammon’s mapping and for the MLP feature extractor.

In Mao and Jain’s implementation the network is able to project new patterns after training, a property Sammon’s mapping does not have. Mao and Jain suggested using the data projections along the PCA axes as an initialization of Sammon’s mapping. They employed a two-stage training phase, using the standard backpropagation algorithm for the first stage and their modified unsupervised backpropagation algorithm for refinement in the second stage. Our study of Sammon’s mapping has been stimulated by Mao and Jain’s research. The NN based implementation of Sammon’s mapping we use is similar to the one suggested by Mao and Jain, but it is simpler: only one training stage, using Mao and Jain’s unsupervised backpropagation algorithm (their second stage), is used. In addition, Mao and Jain employed a PCA based initialization for Sammon’s mapping, whereas we employ and compare both random and PCA based initializations.
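To make the unsupervised pair-based training concrete, the sketch below (our own minimal NumPy illustration under stated assumptions, not Mao and Jain’s code) computes, for one pattern pair, the analytic gradients of that pair’s stress term (d* - d)^2 / d* with respect to the weights of a small network with one sigmoid hidden layer and a linear output layer; biases and the momentum term are omitted for brevity:

```python
import numpy as np

def forward(W1, W2, y):
    """Two-layer net: sigmoid hidden layer, linear output (the projection)."""
    h = 1.0 / (1.0 + np.exp(-(y @ W1)))
    return h @ W2, h

def pair_term(W1, W2, ya, yb, dstar):
    """One pair's contribution to Sammon's stress: (d* - d)^2 / d*."""
    xa, _ = forward(W1, W2, ya)
    xb, _ = forward(W1, W2, yb)
    d = np.linalg.norm(xa - xb)
    return (dstar - d) ** 2 / dstar

def pair_grads(W1, W2, ya, yb, dstar):
    """Analytic gradients of the pair term w.r.t. both weight matrices."""
    xa, ha = forward(W1, W2, ya)
    xb, hb = forward(W1, W2, yb)
    diff = xa - xb
    d = np.linalg.norm(diff) + 1e-12
    # dE/dxa for this pair; dE/dxb is its negative
    g_out = -2.0 * (dstar - d) / (dstar * d) * diff
    gW1 = np.zeros_like(W1)
    gW2 = np.zeros_like(W2)
    for y, h, s in ((ya, ha, 1.0), (yb, hb, -1.0)):
        gW2 += np.outer(h, s * g_out)                 # output-layer gradient
        gh = (s * g_out) @ W2.T * h * (1.0 - h)       # backprop through sigmoid
        gW1 += np.outer(y, gh)
    return gW1, gW2
```

A training sweep would loop over pattern pairs and subtract η times these gradients, which is the role Eq. (4) plays in the layered form.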

B. The PCA based feature extractor

Among the unsupervised linear projection methods, the PCA is probably the most widely used. The PCA, also known as the Karhunen-Loève expansion, attempts to reduce the dimensionality of the feature space by creating new features that are linear combinations of the original features. The procedure begins with a rotation of the original data space, followed by ranking the transformed features and picking out a few projected features. This procedure finds the subspace in which the original sample vectors may be approximated with the least mean square error for a given dimensionality.

Let x = f(y) be a linear mapping of a random feature vector y, y ∈ R^d, x ∈ R^m, m < d. The approximation ŷ,

\hat{y} = \sum_{j=1}^{m} x_j u_j   (5)

with the minimum mean square error,

\varepsilon = E\left\{ (y - \hat{y})^t (y - \hat{y}) \right\}   (6)

is obtained when the u_j (j = 1, ..., m) are the eigenvectors associated with the m largest eigenvalues λ_j of the covariance matrix Ψ of the mixture density (λ_1 ≥ λ_2 ≥ ... ≥ λ_m ≥ ... ≥ λ_d). The expansion coefficient x_j associated with u_j is the j-th PCA feature of x,

x_j = u_j^t y.   (7)
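Equations (5)-(7) translate directly into a few lines of NumPy. The sketch below is illustrative (the function name and interface are ours): it extracts the m PCA features by projecting the centered patterns onto the leading eigenvectors of the sample covariance matrix.

```python
import numpy as np

def pca_extract(Y, m):
    """Rows of Y are d-dimensional patterns; returns their m PCA features
    (Eq. 7) and the d x m matrix of leading eigenvectors (Eqs. 5-6)."""
    mean = Y.mean(axis=0)
    cov = np.cov(Y - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    U = evecs[:, np.argsort(evals)[::-1][:m]]   # the m leading eigenvectors
    return (Y - mean) @ U, U
```

With m = d the mapping is an orthogonal rotation, so X @ U.T recovers the centered data exactly; for m < d the reconstruction of Eq. (5) is the least-mean-square-error approximation.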

C. The MLP feature extractor

When the MLP acts as a classifier, its hidden unit outputs can be used as an implementation of a nonlinear projection of the high-dimensional input (feature) space to a much simpler (abstract) feature space [10]. Patterns represented in this space are more easily separated by the network output layer. Furthermore, visualization of the last hidden layer’s internal representations may supply an insight into the data structure, and hence serve as a means of data projection. Using this approach, the classifier ideally acts both as a feature extractor and as an exploratory data projector. Although the MLP feature extractor does not act as a classifier, its training is based on class label information; hence it is supervised. The number of input units (Fig. 2) is set to the number of features and the number of output units to the number of pattern classes. The hidden layer dimension is set according to the task, either exploratory data projection or classification.
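This scheme can be sketched as follows (a minimal NumPy illustration under our own assumptions, not the configuration used in the paper: plain batch backpropagation on a mean-square error, sigmoid units, no biases or momentum). The perceptron is trained on class labels, and its hidden activations are then read out as the extracted features:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(Y, T, n_hidden, eta=0.5, epochs=200, seed=0):
    """Backpropagation on mean-square error for a two-layer perceptron.
    Y: patterns (rows), T: one-hot class targets."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(Y.shape[1], n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, T.shape[1]))
    for _ in range(epochs):
        H = sigmoid(Y @ W1)                  # hidden activations
        O = sigmoid(H @ W2)                  # class outputs
        dO = (O - T) * O * (1 - O)           # output-layer error signal
        dH = dO @ W2.T * H * (1 - H)         # backpropagated hidden error
        W2 -= eta * H.T @ dO / len(Y)
        W1 -= eta * Y.T @ dH / len(Y)
    return W1, W2

def hidden_features(Y, W1):
    """The extracted features: the outputs of the hidden layer."""
    return sigmoid(Y @ W1)
```

After training, `hidden_features` yields the supervised nonlinear projection; setting `n_hidden` to 2 gives a two-dimensional map of the kind discussed above.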

3. The experiments

A. The data set

The data set was derived from chromosome images which were gathered in the Soroka Medical Center, Beer-Sheva, Israel. The chromosome images were acquired and segmented in a process described elsewhere [11]. The experiments were conducted with 300 patterns from three types of chromosomes (types "13", "19" and "x"), 100 patterns from each type. The chromosome patterns were represented in feature space by 64 density profile (d.p.) features (integral intensities along sections perpendicular to the medial axis of the chromosome) [11].

B. The classifier

A two-layer feedforward NN trained by the standard backpropagation learning algorithm was chosen as the classifier. The number m of input units was set by the projected space dimension, and the number of output units was determined by the number of classes (three in our case). More complex architectures were not considered as candidates for the classifier because only low-dimensional extracted features were employed as the classifier input. The classifier parameters, which were adapted to the chromosome data in a previous investigation [12], were: a learning rate of 0.1, a momentum constant of 0.95, 10 hidden units and a training period of 500 epochs. Each experiment with the classifier was repeated ten times with different randomly chosen initial weight matrices, and the results were averaged. Although a single experiment with the classifier is sufficient to compare the feature extraction paradigms, averaging over several classifier initializations yields more objective results. Exactly the same ten classifier initializations were used for examining all the feature extraction paradigms.

C. The methodology

C1. General

The general scheme of the experiments was outlined in Fig. 1. As Fig. 1 indicates, the paradigms extract features from the 64-dimensional chromosome patterns. The outputs of the three feature extraction paradigms are used to project the samples into two-dimensional maps and to train and test the MLP classifier. The two-dimensional projection maps are visually analyzed and compared to two-dimensional scatter plots of two of the original features. The probability of correct classification of the test set is evaluated for one to seven extracted features and compared to the probabilities obtained with the first 10 and with all 64 d.p. features. The first 10 d.p. features, which are extracted from the upper tip of the chromosome, provide the cytotechnician with an enhanced discriminative capability. In addition, they were ranked by a feature selection algorithm among the best d.p. features [11].

Twenty-one randomly chosen training and test sets were derived from the entire chromosome data set for the classification experiments. Each training set contained a randomly selected 90% of the data set, while the remaining patterns were reserved for the test (the holdout method [3]). Each feature extraction paradigm was applied to these data sets. Classification results were averaged over the twenty-one data sets and the ten classifier initializations (see Sec. 3B).
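The resampling scheme above is straightforward to reproduce; a minimal sketch (our own, with an arbitrary seed) generating the 21 random 90%/10% holdout splits:

```python
import numpy as np

def holdout_splits(n, n_splits=21, train_frac=0.9, seed=0):
    """Yield (train_idx, test_idx) pairs: random train_frac / (1 - train_frac)
    partitions of n patterns (the holdout method)."""
    rng = np.random.default_rng(seed)
    n_train = int(round(train_frac * n))
    for _ in range(n_splits):
        perm = rng.permutation(n)       # a fresh random ordering per split
        yield perm[:n_train], perm[n_train:]
```

For the 300 chromosome patterns, each split yields 270 training and 30 test patterns, and each paradigm is trained and evaluated once per split before averaging.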

C2. Sammon’s mapping

In a preliminary study, ten random generator seeds were tested for initializing Sammon’s mapping. The seed responsible for the highest classification performance was chosen to initialize the weight matrices of the random initialization. The second initialization of Sammon’s mapping was based on all the eigenvectors of the sample covariance matrix estimated from the training data set. In the exploratory data analysis experiments, the two Sammon’s projections were obtained by setting the network output dimension to 2. For the classification experiments the network output dimension was varied in the [1,7] range. Sammon’s mapping parameters for both initializations were: a learning rate of 1, a momentum constant of 0.5, 20 hidden units and a training period of 40 epochs. This set of parameters yielded the best performance in a preliminary study.

C3. The PCA based feature extractor

In this study we use the classical implementation of the PCA. The eigenfeatures of the training set were sorted in descending order of eigenvalue magnitude. The first one to seven eigenfeatures were used for classification, and the first two eigenfeatures were used to plot the two-dimensional projection map.

C4. The MLP feature extractor

A two-layer perceptron NN trained by the backpropagation algorithm was employed as a feature extractor. The input layer was 64-dimensional and the output was 3-dimensional (3 classes). The number of hidden layer units was set to 2 in the exploratory data projection experiments and was varied from 1 to 7 in the classification experiments.

D. Results

D1. Projection maps

A comparison is made between the two-dimensional projection maps of the chromosome feature set projected by the three feature extraction paradigms. The evaluation of the projection maps is based only on visual judgment which is, in our opinion, the most appropriate qualitative way to evaluate these maps, short of complex psychophysical experiments. Furthermore, a quantitative evaluation of the feature extraction paradigms for projection purposes appears to be inherently biased toward one of the paradigms. For example, Sammon’s stress, when used to evaluate projection methods, ranked Sammon’s mapping as the best projection method [4]. To our knowledge, there is no criterion for judging the projection methods objectively.

In Fig. 3, the two-dimensional projection maps of the three paradigms are given. For comparison, a scatter plot of the first two original d.p. features is given in Fig. 3a. These two


Fig. 3. The two-dimensional projection maps of: (a) two d.p. features (the 1st and the 2nd), (b) Sammon’s mapping, (c) the PCA based feature extractor and (d) the MLP feature extractor (o, * and x for chromosome types 13, 19 and x, respectively).

features are amongst the most discriminative d.p. features [11]. The second projection map (Fig. 3b) is formed by Sammon’s mapping onto a two-dimensional space. The random initialization is preferred in the Sammon’s mapping experiments mainly because the PCA based initialization frequently yields maps very similar to those of the PCA based feature extractor [4]. The third projection map (Fig. 3c) is produced by data projection along the two principal components. The fourth projection map (Fig. 3d) is produced by the two hidden units of the two-layer perceptron feature extractor. All the maps are based on the test set and on the network parameters previously specified (Sections 3C2-3C4). The maps were obtained in an experiment in which 50% of the data set was used for training. Producing the same maps for the case used in the classification experiments (90% of the data set used for training) is of less interest because only ten test patterns per class were available. The figure reveals the difference in the way the three feature extraction paradigms project data. Visually analyzed, the maps of the PCA based and the MLP NN feature extractors are easier to perceive than the map of Sammon’s mapping, and the pattern spread is more evident. Moreover, the ratio of between-cluster scatter to within-cluster scatter in these two maps is larger. It should not be forgotten, however, that projecting along the axes with the largest and second largest data variances, as the PCA does, is the easiest way to interpret projection maps. Considering discriminative power, the map of the MLP feature extractor is superior. It is important to mention, however, that the MLP is a supervised feature extraction paradigm whereas the other two are unsupervised. Moreover, the MLP severely distorts the structure of the data and the interpattern distances, while the PCA based feature extractor and Sammon’s mapping preserve them very well. Another interesting point is the way the MLP shrinks each class of patterns to almost one point (or line), a quality which eases the classification process. These shrunken clusters are (almost) concentrated in three of the four map corners, corresponding to the extreme values of the hidden unit activation function (sigmoid). All the projection maps, especially those of the PCA based and the MLP paradigms, reveal that the projected features are less correlated with one another than the original features (Fig. 3a).

D2. Classification

We have used the MLP NN probability of correct classification of the test set as the criterion for evaluating the classification performance of the three feature extraction paradigms. The comparison of the probabilities of correct classification using the three feature extraction paradigms is given in Fig. 4 for 1 to 7 extracted features. Each point in the graph is an average over 210 experiments (see Sec. 3C1). For comparison, the probabilities of correct classification using the original first 10 and all 64 d.p. features are 86.6% and 83.7%, respectively.

As is shown in Fig. 4, the MLP feature extractor achieves the best probability of correct classification. Sammon’s mapping and the PCA based feature extractor

Fig. 4. The probability of correct classification using the three paradigms for increasing number of extracted features (. for the PCA, -. for Sammon’s mapping (random initialization) and solid line for the two layer perceptron).

Fig. 5. The probability of correct classification based on two initializations of Sammon’s mapping for increasing number of projections (-. for the random initialization and -- for the PCA based initialization).

lead to similar results, which are inferior to those of the MLP feature extractor. Only three extracted features are needed, using each of the paradigms, to achieve classification performance superior to that achieved by the first 10 or all 64 d.p. features. In Fig. 5 the random and the PCA based initializations of Sammon’s mapping are compared. The experiments were held over the same range of projections as in Fig. 4. The superiority of the PCA based initialization over the random initialization is apparent. Moreover, the reduced randomness allows the PCA based initialization to achieve more stable classification results than the random initialization.

Fig. 6. The probability of correct classification using 2 Sammon’s mapping projections. The average (dashed line) and the standard deviation (dashdot line) of 10 random initializations are compared to the PCA based initialization (solid line) for increasing number of eigenvectors.

Fig. 6 presents the results of another experiment comparing the two Sammon’s mapping initializations for only two extracted features. Ten random initializations of Sammon’s mapping were examined, and the average and the standard deviation of the probability of correct classification over the 10 experiments are plotted. The average and the standard deviation are compared to the probability of correct classification using the PCA based initialization as additional eigenvectors are appended to the input-hidden initial weight matrix. Fig. 6 shows that only a few (six or more) eigenvectors are sufficient for the PCA based initialization to outperform the average performance of the random initialization. Furthermore, the substantial advantage of the PCA based initialization over the random initialization is that only one experiment of Sammon’s mapping is required. The random initialization requires several experiments with different random generator seeds before the best (or the averaged) initialization can be selected. Given that the computational complexity of Sammon’s mapping is O(n^2) for n patterns, this advantage is crucial.
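One way to realize such an initialization can be sketched as follows. This is our own assumption-laden illustration: the paper does not specify exactly how the eigenvectors are placed in the weight matrix, so here the first n_eig columns of the input-to-hidden matrix are simply set to the leading eigenvectors and the remaining columns stay random:

```python
import numpy as np

def pca_init_weights(Y, n_hidden, n_eig, seed=0):
    """Input-to-hidden weight matrix whose first n_eig columns are the
    leading eigenvectors of the sample covariance of Y (rows = patterns)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(Y.shape[1], n_hidden))
    cov = np.cov(Y - Y.mean(axis=0), rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # overwrite the first n_eig columns with the leading principal directions
    W1[:, :n_eig] = evecs[:, np.argsort(evals)[::-1][:n_eig]]
    return W1
```

Because the mapping is then deterministic given the training data, a single run of Sammon’s mapping suffices, which is the computational advantage discussed above.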

4. Discussion

We study the classification capabilities of the well-known Sammon’s mapping, which was originally devised for exploratory data projection. A comparison of the classification performance of a NN implementation of Sammon’s mapping [4] with the PCA based and the MLP feature extractors is made. The three paradigms are evaluated using a chromosome feature set.

Although originally aimed at and used for exploratory data projection, Sammon’s mapping has an admirable classification capability. Only one experiment of Sammon’s mapping is required when the eigenvectors of the sample covariance matrix corresponding to the largest eigenvalues are used to initialize the algorithm. This fact has an enormous computational impact on the feature extraction process. In addition, the improved initial directions the PCA provides enable a classification performance improvement based on only very few eigenvectors. A combination of a nonlinear feature extraction paradigm and class information improves the discriminative capability. The MLP feature extractor, which is a supervised nonlinear paradigm, is found in this study to be the best feature extraction paradigm for both exploratory data projection and classification.

References

1. H. Bourlard and Y. Kamp, “Auto-association by multilayer perceptrons and singular value decomposition,” Biol. Cybern., vol. 59, pp. 291-294, 1988.

2. Y. Chien, Interactive Pattern Recognition. New York: Marcel Dekker, 1978.

3. K. Fukunaga, Introduction to Statistical Pattern Recognition (2nd ed.). New York: Academic Press, 1990.

4. J. Mao and A. K. Jain, “Artificial neural networks for feature extraction and multivariate data projection,” IEEE Trans. Neural Networks, vol. 6, pp. 296-317, 1995.

5. E. Oja, “Principal components, minor components and linear neural networks,” Neural Networks, vol. 5, pp. 927-935, 1992.

6. J. W. Sammon Jr., “A nonlinear mapping for data structure analysis,” IEEE Trans. Comput., vol. 18, pp. 401-409, 1969.

7. T. D. Sanger, “Optimal unsupervised learning in a single-layer linear feedforward neural network,” Neural Networks, vol. 2, pp. 459-473, 1989.

8. T. Kohonen, “The self-organizing map,” Proc. IEEE, vol. 78, pp. 1464-1480, 1990.

9. P. A. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach. Englewood Cliffs, NJ: Prentice-Hall, 1982.

10. R. P. Gorman and T. J. Sejnowski, “Analysis of hidden units in a layered network trained to classify sonar targets,” Neural Networks, vol. 1, pp. 75-89, 1988.

11. B. Lerner, H. Guterman, I. Dinstein, and Y. Romem, “Medial axis transform based features and a neural network for human chromosome classification,” Pattern Recognition, vol. 28, pp. 1673-1683, 1995.

12. B. Lerner, H. Guterman, I. Dinstein, and Y. Romem, “Human chromosome classification using multilayer perceptron neural network,” Int. J. Neural Systems, vol. 6, pp. 359-370, 1995.

万和热水器e4解决方法

热水器在我们生活中起到了非常大的作用,给我们的生活带来了很大的优越感。特别是到了冬季北方天气寒冷,平时生活时刻离不开热水器,不管是洗脸刷牙洗菜做饭还是洗澡都要用热水。所以我们最怕的就是冬季家里的热水器出现故障,。因为热水器使用久了出现一些小的故障是避免不了的。为了让万和用户能更好的了解热水器一些常见故障,今天就给大家讲讲万和热水器显示E4故障代码怎么解决? 万和恒温热水器出现故障的时候,在显示屏上面都会出现一个故障代码,一个故障代码代表一种含义。万和E4故障在冬季发生的很多,E4代码的意思是风机霍尔传感器故障,当我们开启热水器龙头后,热水器风机开始运转,可以风机霍尔传感器出现故障后,检测不到风机运转的型号,当电脑板接收不到风机工作的信号时就会报警显示E4故障代码。

这时我们可以把风机拆卸下来,去热水器配件市场购买一个风机霍尔传感器更换上就可以了,万和热水器售后每年维修大量的万和E4故障,可以说百分九十九都是风机霍尔故障,但是也有主板出现故障的,但是这种几率非常小。 1.E4是因为热水器水箱内部管温出现故障。 2.故障代码意义 接口故障(E0):温度传感器开路或短路;点火系统故障(E1):点火结束后,还未检测到火焰;燃气不足或意外熄火(E2):正常燃烧后,检测不到火焰。出水温度过高(E3):出水温度超过80℃或干烧。应更换电池(E4):电池电压低于2V。 3.原因分析及解决方法: A、热水器的水温过高造成的。可以适当调低水温。 B、热水器风压力开关发生故障,需要进行更换。 C、热水器的感温探头损坏,需要进行更换。 D、水流出现异常,应该是堵塞造成的,需要把异物清除。 E、热水器主板故障,需要找专业人员进行检测或者更换热水器主板。 F、热水器主要元件故障,需要进行检测,这种故障只能进行元件的更换。 以上就是一些主要的解决方法,希望能够帮到大家!

加密狗使用说明

Ikey使用说明 用户需要将ikey(加密狗)插入电脑的usb接口后才能使用云南省房地产估价管理系统。使用加密狗之前需要在电脑上先安装ikey的驱动程序。 用户可以在https://www.doczj.com/doc/6417186774.html,的登录页面下载到驱动程序,驱动程序根据用户使用的操作系统的不同,分为: 请用户根据自己的操作系统选择相应的驱动程序。 在安装驱动程序过程中,需要注意: a)下载下来的驱动程序,路径名请确保没有中文。 b)如果杀毒软件弹出安全警告,请点击放过或允许。 c)在安装驱动程序前,请确保加密狗没有插在电脑上。 下面介绍一下,驱动程序的安装: 1.在Windows2000或Windows2003或WindowsXP上安装加密狗驱动程序 1)驱动程序下载下来后,图标为 2)双击ikeyAll.exe,看到如下界面: 3)点击两个Next,进入如下界面:

4)点击“是“,就会看到如下界面(如果这过程中杀毒软件弹出安全警告,请点击允 许或放过): 5)到这个界面,表示安装时成功的,如果这过程中杀毒软件弹出安全警告,请点击允 许或放过。 6)将加密狗插入电脑的usb接口,过一会,就会自动跳到以下的界面: 7)单击“Finish“,驱动程序就安装完成。 8)打开https://www.doczj.com/doc/6417186774.html,/Appraisal/index.jsp,如果浏览器出现下列提示: 9)则右键点击提示,如下图所示:

10)点击“运行加载项”,会出现下列提示: 11)点击“运行”,然后刷新网页(按F5),安装完成。 2.在WindowsVista系统上安装加密狗驱动程序 1)下载下来的是一个压缩包,图标为 2)将IKEYforVista.rar解压缩。 3)进入IKEYforVista\IKEYDRVR-32bit-4.0.0.1017,看到如下文件夹: 4)双击setup.exe安装,看到如下界面: 5)点击“Next”,看到如下界面:

ET工具命令(CAD)

从AutoCADR14开始Autodesk公司就在AutoCAD的基础上补充了一个程序——《扩展工具》。AutoCADR14里面叫做《附赠程序》,英文版为《BONUS》;AutoCAD2002及以后的版本里都叫做《ET扩展工具》,英文版为《ExpressTools》。操作AutoCAD时,利用AutoCAD固有的命令,可以进行一系列的绘图操作。但有时,总觉得有的命令在使用时,须按要求进行多次选择后,才开始执行,一次、二次没什么,命令用多了就觉得很麻烦,对提高绘图速度也是个障碍。由此,许多CAD 的高手们就按照自己绘图的要求,环境,习惯等,对CAD进行二次开发,做了很多“插件”、“工具”、“外挂”之类的实用扩展工具。其实,AutoCAD本身就带有这种实用的扩展工具《ET扩展工具》,里面有不少好用的工具程序,可协助你做图层管理、文字书写、像素编修、绘图辅助、超级填充图案、尺寸样式等,经过安装后,它们会正规地出现在你的AutoCAD菜单中。各种版本的《ET扩展工具》功能数量不一定相同,但大同小异,只要略加熟悉,能熟练运用后,你将有如虎添翼的感觉,大幅提高你的绘图速度,将指日可待。 《ET扩展工具》的主要命令及其功能介绍 LMAN:将图层内容写入文件或加载文件 LAYMCH:单击参考对象作图层更换 LAYCUR:单击对象将图层更换至当前层(更换到所选中的图层中) LAYISO:单击对象显示该图层其余关闭 LAYOFF:单击对象关闭该图层 LAYLCK:单击对象锁定该图层 LAYULK:单击对象解锁该图层 LAYON:将所有被关闭的图层打开 LAYFRZ:单击对象冻结该图层 LAYTHW:将所有被冻结的图层解冻 TEXTFIT:将文字作FIT(填入)重新调整 TEXTMASK:将文字的重叠对象处作暂时消除(打印可见?取消消隐?) CHT:修改多个文字的特性(无此命令) TXTEXP:将文字分解为Line对象 ARCTEXT:将文字写于单击的弧段上(变为块) FIND:寻找与更换文字的内容 BURST:将Block字转换成一般文字 GATTE:修改Block属性文字内容 EXCHPROP:修改对象的特性 MSTRETCH:多重选框作Stretch(拉伸) MOCORO:同步移动复制旋转比例调整 EXTRIM:单击封闭对象作一次剪切 CLIPLT:单击任意对象作局部截取 MPEDIT:多重对象单击,作Edit编辑 XPLODE:分解对象并指定分解后的特性 NCOPY:连续单一单击对象作复制 BTRIM:可单击Block为边界作剪切 BEXTEND:可单击BIock为边界作延伸 WIPEOUT:建立橡皮擦 REVCLOUD:依指正的弧长徒手绘样条线 GQLATTACH:连结引线与Mtext建立关联 QLDETACHSET:分离引线与文字的关联性 PACK:复制图文件与所使用的相关文件 GETSEL:计算图层中或对象类别的数量

密码锁使用说明书(终版修正)

LT-5000-PW 密码锁 韩式风格设计,外观精致大方,时尚典雅,采用先进的微波检查技术,即时卡片放在黑色钱包里也能读取,并杜绝红外检测易受外界光线影响的问题,读写卡距离远可达20~50mm 。 功能操作 密码、M1卡设置 目的步骤现象(操作成功时) 设置管理密码 ⑴进入编程:按“*#”输入出厂初使管理员密码 “123456”再按“#” 键盘灯亮,数码管显示“00” ⑵按键“8#”蜂鸣器“嘀~” ⑶输入新管理密码(6~12位任意数字),再按“#”蜂鸣器“嘀~” ⑷再输入新管理密码,按“#”蜂鸣器“嘀~”长鸣。 增加用户密码 ⑴进入编程:按“*#”输入管理员密码,按“#”键盘灯亮,数码管显示“00” ⑵按键“7#”蜂鸣器“嘀~” ⑶输入新的编号(一组密码对应一个编号,按顺序 编排不可重复,01~99两位数字),再按“#” 数码管显示“当前设置的编 号”,蜂鸣器“嘀~” ⑷输入新用户密码(6~12位数任意组合),按“#”键蜂鸣器“嘀~” ⑸再次输入密码,“#”确认蓝灯亮,蜂鸣器“嘀~”长鸣。 (6)如需再添加密码按“7#”→输入新编号→输入 新密码→再次输入密码→按“#”确认 删除用户密码 ⑴进入编程:按“*#”输入管理员密码,按“#”键盘灯亮,数码管显示“00” ⑵按键“5#”蜂鸣器“嘀~” ⑶输入用户编号(要删除密码的对应编号),按“#”蜂鸣器“嘀~” ⑷再输入用户编号,按“#”确认删除蓝灯亮,蜂鸣器“嘀~”长鸣。 ⑸如需继续删除密码按“5#”→输入编号→再输编 号→按“#”确认删除 增加M1卡 ⑴进入编程:按“*#”输入管理员密码,按“#”键盘灯亮,数码管显示“00” ⑵按键“6#”蜂鸣器“嘀~” ⑶输入用户编号(一张卡对应一个编码,按顺序编 排不可重复00~99两位数字),按“#” 数码管显示“当前设置的编 号”,蜂鸣器“嘀~” 3

万和燃气热水器打不着火不打火点不着火的原因分析

万和燃气热水器打不着火不打火点不着火的原因分析 导读:万和热水器打不着火的原因一:燃气通路故障;燃气热水器不打火的原因二:水路故障;燃气热水器打不着火的原因三:电路系统故障;万和热水器打不着火的原因四:热水器机械故障。 万和燃气热水器打不着火的原因:燃气通路故障 燃气通路是首先要检查的第一步:万和热水器中电磁阀在不通电时是将燃气通路关闭的,它是靠热水器脉冲控制器将阀门打开通气后才能使热水器点着火,这个位置的故障率极高,但通常不是电磁阀本身故障造成的,多数是由于它未能得到供电而导致开不了阀,燃气不能到达燃烧器导致万和热水器打不着火; 万和燃气热水器打不着火的原因:水路故障 万和燃气热水器水路故障主要有进水口过滤网堵塞、水阀结垢、水箱铜管变形堵塞、水压低等。它们造成燃气热水器打不着火的共同点是热水器出水量很小(燃气热水器的正常启动水压是0.02MPa,如果水压过低热水器就无法启动)。老式强排热水器这个现象很明显,所以排除是否因为水路故障造成燃气热水器点不着火只需要观察出水量或水压是否过低。 万和燃气热水器打不着火的原因:电路系统故障 热水器电路部分涉及的配件较多,是燃气热水器打不着火故障原因查找的核心部分,包括漏电保插头、热水器电源控制盒、脉冲点火器、电磁阀、风机启动电容、风机、风压检测开关、微动开关或水流传感器开关、点火针、感应针、冷热水开关。 燃气热水器脉冲点火器发出工作信号传送至热水器电源控制盒,通过一条连接到风压检测开关的负压管,使风压检测开关工作,此返回到脉冲点火器开始连续放电和延时2秒吸开电磁阀通气即可点着火,感应针感应到火焰正常后即通过脉冲给出电磁阀持续的维持电压,使电磁阀保持在开启状态,燃烧器便可持续正常工作。任何一个环节出现故障都会导致燃气热水器点不着火。 万和燃气热水器打不着火的原因:热水器机械故障 水气联动装置主要由水阀内部水压提供动力,推动联动杆,同时打开电路部分和其中一级燃气密封通道,若水气联动装置出现故障整个电路部分都没办法工作,导致燃气热水器打不着火;风机部分若因风叶卡死或电机转速慢达不到风压检测开关的启动压力,脉冲点火器无法得风压检测开关的信号,也会导致燃气热水器点不着火。 检查总结:燃气热水器打不着火可以分为无点火声不点火和有点火声打不着火,常见原因就是电源问题、电磁阀故障、电点火器故障、点火针或感应瓷针故障等这几个方面。

KEY (keypad) application guide, revised edition


The KEY (keypad) application guide
Have you ever hit this problem in a design: the product needs dozens of keys, the processor has very few IO pins, the PCB area is tightly constrained, or cost control rules out an expander chip such as the MAX7219? How do you solve it?
Here is one approach. Everyone has seen a remote control: it carries dozens of keys and plenty of functionality, with debouncing and simultaneous-key handling built in, and most importantly the controller chip is tiny, cheap (one or two yuan) and needs few external parts. It is slightly fiddly to use, though: these chips generally output a PPM signal, and for longer remote range the key code is modulated onto a roughly 38 kHz carrier. So we have to spend one more yuan on a low-pass filter built from a cheap op-amp; after the carrier is filtered out, the signal goes to an IO pin of the microcontroller. Because the two frequencies are far apart, the filter is not hard to build. I used an LM324: two low-pass stages and one comparator.
When a train of friendly few-hundred-hertz square waves appears on your oscilloscope, the hardware is done. Since a single IO line now carries dozens of keys, the software naturally does more work. Frames from this class of chip consist of a leader, a sync part, a user code, a key code and so on, more than thirty bits in total; see the sc6121 datasheet for details. Below is a complete receiving program for the sc6121 on an 89C51.
/* ib_KeyCode[0]: user code low byte; ib_KeyCode[1]: user code high byte;
   ib_KeyCode[2]: key code; ib_KeyCode[3]: inverted key code.
   The meaning of 'SysKBMsg' is listed below.
   Program design: LiBaizhan   Ver:   Date: 2003-3-16 */
#include <reg51.h>   /* the original include names were lost in extraction; reg51.h is the usual 89C51 header */
sbit Key1 = P1^0;
sbit Key2 = P1^1;
/* Some system pre-definitions; the crystal frequency value was lost in the source */
#define TIME_2MS     0X74
#define TIME_20MS    0X043B
#define KB_INTERNAL  3      /* key double-click detection interval */
/* SysKBMsg holds the keyboard message (single click or double click).
   Bits 6~0 record the key code and bit 7 records double click (1) or single click (0).
   It can record and encode up to 127 (2^7-1) keys; 0 means no key is pressed.
   This scheme does not handle modifier keys such as Ctrl, Alt or Shift. */
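The frame validation implied by the comment above (user code low/high, key code, inverted key code) can be sketched in Python. The byte layout follows the ib_KeyCode description in this document, not the sc6121 datasheet itself, so treat the layout as an assumption.

```python
def validate_frame(frame: bytes):
    """Validate a 4-byte remote frame laid out like ib_KeyCode:
    [user_lo, user_hi, key, ~key]. Returns (user_code, key) or None."""
    if len(frame) != 4:
        return None
    user_lo, user_hi, key, key_inv = frame
    # The fourth byte must be the bitwise complement of the key code,
    # which is how this frame format detects corrupted receptions.
    if key ^ key_inv != 0xFF:
        return None
    return (user_hi << 8) | user_lo, key

# A valid frame: key 0x15, inverted 0xEA
assert validate_frame(bytes([0x34, 0x12, 0x15, 0xEA])) == (0x1234, 0x15)
# A corrupted frame fails the complement check
assert validate_frame(bytes([0x34, 0x12, 0x15, 0xEB])) is None
```

On the 8051 the same complement check is typically the last step of the interrupt-driven bit-timing decoder before the key code is handed to the main loop.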

Electronic combination lock: user instructions

A simple 8051-based electronic combination lock: user instructions
1. Functions:
1) A 6-digit password is entered on the keypad; if it is correct, the lock opens.
2) The password can be changed by the user (6-digit passwords only), and only after the lock has been opened. Before changing it, the current password must be entered again, and the new password must be entered twice to guard against slips.
3) Alarm and keypad lockout. A wrong password produces an error prompt on the display; after more than 3 wrong entries, the buzzer sounds and the keypad is locked.
4) The password is stored in an AT24C02, so it survives reset and power loss.
2. Key definitions (keypad layout shown below)

As shown, input uses a 4x4 keypad with the following legends:
1 2 3 A
4 5 6 B
7 8 9 C
* 0 # D
Keys 0~9 are digits for entering the password;
the * key cancels the current operation;
the # key confirms;
the D key changes the password;
the other keys have no function or definition.
3. Operation:
The initial password is 000000.
1) Unlocking: after power-up the program loads the initial password. Enter 000000 and press # (confirm); the lock opens and the display shows "open". (If you soldered the board yourself, on first use enter 131420 to initialize the password; when the display shows "initpassword", initialization is complete and the password is 000000.)
2) Exit and relock: press * (cancel); the lock closes and all input is cleared.
3) Changing the password: with the lock open, enter the correct password again and press # (confirm). After two prompt tones, enter the new 6-digit password and press D (reset), then enter the new password once more and press D; two prompt tones indicate the reset succeeded, and the new password is stored internally and written to the AT24C02. (If the two entries of the new password differ, the reset fails.)
4) Alarm and keypad lockout: after a wrong password, the unit alarms and locks the keypad for 3 seconds; any key press within those 3 seconds restarts the 3-second lockout.
5) When a new password is set, it is saved in the AT24C02 EEPROM.
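The unlock, alarm and lockout behaviour described above can be sketched as a small state machine. The 3-attempt limit and the restarting 3-second lockout follow the manual; the class and method names are illustrative, not the firmware's.

```python
import time

class CodeLock:
    """Sketch of the manual's behaviour: 6-digit code, alarm and a
    3-second keypad lockout after 3 consecutive wrong entries."""
    MAX_TRIES = 3
    LOCKOUT_S = 3.0

    def __init__(self, code="000000", now=time.monotonic):
        self.code, self.fails, self.now = code, 0, now
        self.locked_until = 0.0

    def enter(self, attempt: str) -> str:
        t = self.now()
        if t < self.locked_until:
            # Any key press during lockout restarts the 3-second window.
            self.locked_until = t + self.LOCKOUT_S
            return "locked"
        if attempt == self.code:
            self.fails = 0
            return "open"
        self.fails += 1
        if self.fails >= self.MAX_TRIES:
            self.locked_until = t + self.LOCKOUT_S
            return "alarm"
        return "error"

clock = [0.0]                              # injectable clock for testing
lock = CodeLock(now=lambda: clock[0])
assert lock.enter("111111") == "error"
assert lock.enter("222222") == "error"
assert lock.enter("333333") == "alarm"     # third failure: alarm + lockout
assert lock.enter("000000") == "locked"    # still within the 3 s window
clock[0] = 10.0
assert lock.enter("000000") == "open"      # lockout expired, correct code
```

Injecting the clock keeps the sketch testable; the real firmware would use a hardware timer instead.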


Vanward water heater repair: fixing an ignition failure

A Vanward heater showing e3 generally means excessive draft resistance or a blocked flue, but the flue itself rarely blocks; more often the burner area is clogged, which raises the draft resistance, opens the air-pressure switch and triggers the e3 code. I have just fixed the e3 fault on my own heater, so here is the method.
The second possibility is that the air-pressure switch has aged and trips too easily. I replaced the switch and turned its trip pressure up slightly: the knob on top of the switch increases the pressure when turned inward. Some people online say you must not touch this adjustment, but you can judge it yourself: adjust until e3 no longer appears, back off to note where e3 reappears, then turn clockwise again until it stays clear. Do not overdo it, or the switch will fail to open when a real blockage occurs. To verify the protection after adjusting: shut off the gas, open the water to trigger ignition, then immediately block the heater's flue outlet (remove the flue first; covering the outlet with your hand also works). If e3 appears at once, the protection works.
That fixed the e3 fault. In practice the flue itself almost never blocks, unless a rodent has crawled in and nested there; usually the heat-exchanger fins above the burner age and restrict airflow, which raises the draft resistance.
If you do not know which part is the air-pressure switch, search for "风压开关" (air-pressure switch); detailed explanations of its principle are available. As for the remark that the more distant outlet runs hotter: that means the heater is not a true constant-temperature model, or its thermostat no longer works. Mine is the opposite: the nearby tap is hotter, and the farther bathroom feels about one degree below the indicated temperature because the water cools in transit. If smaller flow gives consistently higher temperature, the heater is a fixed-flame model: with fixed heat input, less flow means hotter water.
I also adjusted the blue potentiometer shown in the picture below; from the board markings I guess it is the flow adjustment of the second electromagnetic gas proportional valve. Turning it down slightly lowers the draft pressure a little, since the gas/air ratio is program-controlled and reducing the gas slightly changes the pressure. This is only my own guesswork, for reference; I do not know the manufacturer's actual design, so there is no authoritative basis, and a wrong adjustment can cause an e1 flame-out fault, so be careful.

Special-operations system encryption lock (dongle): instructions

Application procedure for the special-operations system encryption lock
The scenarios for accessing the special-operations system with a dongle are as follows.
1. You already have a "three-type personnel" (三类人员) dongle and want to use it for the special-operations system. Such enterprises must complete New User Registration in the special-operations system. Registration: click "New User Registration" on the page shown above.

1) On the page above, select your city's special-examination group, enter the enterprise name (in full) and click Query. If a prompt box like the one shown pops up, check whether the name you entered exactly matches the enterprise name registered for three-type personnel, then re-enter it and retry.
2) Once the three-type personnel enterprise information is found, fill in the remaining fields and click Register.
After these steps, wait about 15 minutes and you can log in to the special-operations system directly with the three-type personnel dongle.
2. You have a three-type personnel dongle but want to buy a separate special-operations dongle (one lock per system). Such enterprises first complete New User Registration in the special-operations system:
1) Fill in every field marked with a red star.
2) Click Register. After the "registration successful" prompt, print the dongle application form; if printing fails, download a blank form at the bottom of the page, fill it in and print it. Finally, contact Lifang Software via the contact details at the bottom of the registration page about purchasing the lock.
3. You have no three-type personnel dongle but want one lock that works in both the three-type personnel system and the special-operations system:
1) Contact Lifang Software to purchase a three-type personnel dongle.
2) After receiving the lock, enter the three-type personnel system, go to Information Reporting → Enterprise Basic Information, complete the information and click Save/Submit; the status changes to Under Review.
3) Complete New User Registration in the special-operations system as described in item 1 above.
After these steps, the dongle can be used in both systems.
4. You only want a special-operations dongle. Such enterprises first complete New User Registration in the special-operations system:
1) Fill in every field marked with a red star.
2) Click Register. After the "registration successful" prompt, print the dongle application form; if printing fails, download a blank form at the bottom of the page, fill it in and print it. Finally, contact Lifang Software via the contact details at the bottom of the registration page about purchasing the lock.

Dongle (professional network edition): installation instructions

Installation instructions for the professional network-edition dongle
1. Operating environment: the LAN must be fully connected. Operating system: Windows XP, Simplified Chinese; memory: 1 GB or more; disk space: 600 MB or more; CPU: Pentium IV or above; sound output: sound card, speakers or other playback hardware required. Optional peripherals: braille display, braille embosser, printer.
2. Designate one computer on the LAN as the server; the server program and the software dongle are installed on it. The server dongle should be managed by a designated person and kept out of casual reach, since a lost dongle stops the Sunshine screen-reader software from running. If the server runs Windows XP SP2, either add the Sunshine software to the built-in firewall's exceptions list or turn the firewall off. To turn off the firewall: Start → All Programs → Control Panel → Windows Firewall.

Double-click Windows Firewall, choose "Off", and click OK.
3. Install the server program: insert the installation disc, open the "Professional Network Edition Server" folder and double-click Setup.
4. The installation screen appears:

If your computer has no parallel port, the system shows the following prompt; choose OK.
5. Plug in the dongle.
6. The completion screen appears:

7. Check the network dongle service program at the lower right of the screen.
8. Right-click the R icon.
10. Open the service manager.
11. If the area in the red circle on the right of the figure below is red, the dongle is not working.
12. Refresh the hardware dongle: click File and choose Refresh Hardware Dongle:

13. If the circled area on the right of the figure below is yellow, the dongle is working normally. If it is still red after refreshing, uninstall the server program and reinstall it.
If, after following the instructions above, the client still shows the prompt, the server and client IP addresses may not be in the same subnet. First check the server's IP address as follows: right-click "My Network Places" and choose Properties, as shown below; in the window that opens, view the Properties of "Local Area Connection"; then select the item boxed in red below:

Vanward water heaters: common faults and causes

Quite a few users report that Vanward water heaters develop all sorts of faults after a period of use and conclude that they bought a bad unit. That is usually not so: a heater that was installed incorrectly or is used improperly will misbehave. This article analyses the common faults of Vanward heaters, their causes and remedies, for reference.
Symptom 1: water not hot
Cause 1: low gas pressure; blocked gas pipe or low pressure in the gas line.
Remedy: check the gas pipe for blockage yourself (run a hose from the gas meter directly to the heater's gas inlet; if the heater then works normally, the pipe between the meter and the heater inlet is blocked), and ask the gas company to check the line pressure.
Cause 2: water flow too high; line pressure too high or pipe bore too large.
Remedy: close down the inlet valve and raise the heater's set temperature.

Cause 3: wrong setting; the heater should be on the high or winter setting but is on medium or spring/autumn.
Remedy: set it correctly as described in the manual.
Symptom 2: water too hot
Cause 1: low water pressure, low flow, or blocked water pipes.
Remedy: a. change the water pipe to an adequate bore; b. fit a booster pump; c. clean the heater's inlet filter screen; d. turn the heater's water-volume control up; e. change the shower head, using one with the largest practical holes.
Cause 2: gas pressure too high, seen mainly with LPG.
Remedy: replace the LPG regulator (it must be a Laodong-brand unit) and set the gas pressure to the specified value.
Cause 3: wrong setting, mainly the heater being set in the high-temperature range.
Remedy: set it correctly as described in the manual.
Symptom 3: a slug of cold water after ignition, and another after reopening the tap.
Cause: this is a characteristic of forced-exhaust heaters, not a fault. For safety they purge the combustion chamber before ignition and again after flame-out, expelling residual flue gas, which is why reopening the tap gives another slug of cold water. In addition, the pipe run from the heater to the tap, plus the time needed to heat cold water, produces a cold slug after ignition. You can shorten it by first setting the heater to a high-temperature setting and lowering it to the desired temperature once hot water arrives.
Symptom 4: common fault codes of the smart constant-temperature series.
A display of 01 is not a fault code; it means the heater has been running for 30 minutes and reminds the user about safe use. Displays such as 11, 12, 14, 16, 31, 32, 72, 73 or 90 are genuine fault codes, and all of them require warranty service.
Symptom 5: fan running noise
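The display-code table under Symptom 4 amounts to a small lookup, which can be sketched as follows; the function name and the "unknown" branch are illustrative, not Vanward's diagnostic logic.

```python
def classify_display(code: str) -> str:
    """Sketch of the manual's code table: '01' is a 30-minute usage
    reminder; the listed numbers are fault codes needing service."""
    fault_codes = {"11", "12", "14", "16", "31", "32", "72", "73", "90"}
    if code == "01":
        return "reminder"   # heater has run 30 minutes; not a fault
    if code in fault_codes:
        return "fault"      # requires warranty service
    return "unknown"        # not covered by this manual's table

assert classify_display("01") == "reminder"
assert classify_display("31") == "fault"
assert classify_display("99") == "unknown"
```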

How to make a Windows boot dongle from a USB drive: illustrated tutorial

Besides storing files and installing systems, a USB drive or other removable device can be turned into a dongle. In this article, "dongle" means only a Windows boot dongle. Its purpose is to stop others from starting your computer and browsing your private data: beyond setting a personal password, a simple setup makes the computer usable only when your own USB drive is inserted; otherwise it shuts itself down right after booting.
Making such a dongle is not a complicated process and, contrary to what one reader assumed, needs no English skills. Following the steps below, anyone can make one in under a minute.
Steps:
1. Insert your USB drive or other removable storage device.
2. Create a file of any type and name on the drive. For ease of explanation, I use an empty text file named "加密" with the extension ".txt", placed in the root of the drive, whose drive letter here is J:.
3. Create a text file anywhere on the computer and type into it:
if not exist J:\加密.txt shutdown -s -t 10 -c "你无法使用该计算机"
This means: if the file 加密.txt does not exist on drive J:, shut the computer down after 10 seconds and display the message "你无法使用该计算机" ("you cannot use this computer"). You may change the 10 to another value such as 100, and change the message to something else, as long as the overall form of the command stays intact. When done, save the file under any name with the extension ".bat": in the save dialog, choose "All files" as the type and append ".bat" to the name by hand. Here it is named 26.bat.
4. Click Run on the Windows Start menu; it sits just to the right of All Programs, and you will see it as soon as you open the Start menu.
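The batch one-liner above boils down to "check a key file, otherwise schedule a shutdown". The same check can be sketched in Python; the drive letter and file name mirror the tutorial's examples, and the actual shutdown call is left commented out so the sketch is safe to run anywhere.

```python
import os

KEY_FILE = r"J:\加密.txt"   # the key file from the tutorial; path is illustrative

def usb_key_present(path: str = KEY_FILE) -> bool:
    """Return True when the key file on the USB drive exists."""
    return os.path.exists(path)

def enforce(path: str = KEY_FILE, delay_s: int = 10) -> str:
    """Mirror the .bat logic: do nothing if the key is present,
    otherwise request a delayed shutdown."""
    if usb_key_present(path):
        return "ok"
    # Equivalent of: shutdown -s -t 10 -c "你无法使用该计算机"
    # import subprocess
    # subprocess.run(["shutdown", "-s", "-t", str(delay_s), "-c", "你无法使用该计算机"])
    return "shutdown"
```

Unlike the .bat placed in the Startup folder, this version only reports its decision, which makes the logic easy to test before arming it.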

How bidding enterprises in Heilongjiang Province use the encryption lock

Heilongjiang Province Construction Project Bidding Supervision System
Bidding Enterprise User Manual
Information Center, Heilongjiang Provincial Construction Department
April 2012

Contents
1 Logging in and out of the system
  Logging in
  Logging out
2 Bidder information management
3 Bid management
  Bidding
  3.1.1 Bidding on openly tendered projects
  3.1.2 Bidding on invited-tender projects
  Modifying a bid
  Cancelling a bid
  Viewing a bid
  Querying bids
4 Project member information management
1 Logging in and out of the system
Logging in
Open Internet Explorer, type the address of the Heilongjiang engineering bidding website into the address bar and press Enter to open the site, as shown:

Figure 1.1-1: Heilongjiang engineering bidding website home page
Click "Construction Project Bidding Supervision System" on the right side of the site to open the login page of the Heilongjiang Province Construction Project Bidding Supervision System, as shown:
Figure 1.1-2: Heilongjiang engineering bidding website home page

Figure 1.1-3: login page of the Heilongjiang Province Construction Project Bidding Supervision System. Insert the bidding enterprise's identity-authentication lock and click the Login button to log in to the system, whose interface is shown below:
Figure 1.1-4: interface of the Heilongjiang Province Construction Project Bidding Supervision System

The interface has three areas: the top records the current user's login information, the left holds the system function menus, and the right is the work panel showing the information for the selected menu item; see the annotations in the figure above.
Logging out
Click the Exit button at the upper right of the interface to leave the system.
Figure 1.2-1: interface of the Heilongjiang Province Construction Project Bidding Supervision System
2 Bidder information management
This module maintains bidder enterprise information.
Click the "Bidder Basic Information" menu to open the enterprise basic-information editing page, as shown:

Figure: bidder basic-information editing page
Fill in the enterprise's basic information and click Save to store it.
If the corresponding qualification database already holds the enterprise's information, click OK to synchronize the data, as shown:
Figure 2: bidder basic-information editing page
Note: enterprise information added manually (not synchronized from the qualification system), qualification information and project member information must pass provincial review before the enterprise can bid.

Vanward water heater manual


How good are Vanward water heaters?
With countless water heaters on the market, choosing the one you like requires some knowledge of them. Vanward is a big brand that many people choose, yet many do not know it well. Below I answer: how is Vanward's quality, how is the brand, how are its heaters categorized, and how are they priced?
1. The Vanward brand
Vanward was founded in 1993 in Shunde, Guangdong. After nineteen years of development it is now the largest gas-appliance R&D and manufacturing enterprise in China, and the first chair of the gas-appliance branch of the China National Hardware Association. Its R&D in energy saving and environmental protection has been praised by Premier Wen Jiabao of the State Council.
Positioning itself as "the gas-appliance expert", Vanward makes gas water heaters, gas cookers, gas wall-hung boilers, gas barbecue stoves, gas air conditioners and gas space heaters, along with matching electric water heaters, disinfection cabinets and range hoods, plus new-energy products such as solar and heat-pump water heaters and integrated hot-water systems combining air-source, solar or electric energy with gas.
2. Vanward heater categories
1) Vanward gas water heaters
Vanward gas water heaters are a brand of Guangdong Vanward New Electric Co., Ltd., which has specialized in gas appliances for many years. Vanward pioneered the fully automatic gas water heater, won a provincial third prize for scientific and technological progress, earned the China Top Brand title in 2002 and has held it for years; its technology is among the best in China.
Price: Vanward gas heaters are reasonably priced, from a few hundred to a few thousand yuan to meet different needs, with assured quality and good repair and after-sales service; they have long been trusted products.
Products: outdoor models w24a and w16a; balanced-flue models g12v1 condensing constant-temperature and 2c ultra-thin smart; forced-exhaust model q24bv107; flue model d8c.
2) Vanward electric water heaters
Electric water heaters are an important part of the Vanward group's core business. Vanward entered the field in 1996; after more than a decade of steady growth, its R&D, production, quality management, marketing and service are all mature, its safety, energy-saving, tank and smart-control technologies have reached advanced international levels, and it is one of the drafting organizations of the "Technical Requirements for Energy-Saving Product Certification of Household Storage Electric Water Heaters". Vanward electric water heaters have now reached an annual output of over two million units; since 2001 their market share has placed them in the top tier of electric water heaters, sales have grown by more than 50% a year, and the brand stands out as one of the few China Top Brands in the electric water heater industry.

Dongle encryption and decryption methods: technical white paper

Dongle encryption and decryption methods
Dongle encryption method
1. Open the EZCAD software package and find the executable "JczShareLock3.exe".
2. Double-click it; the "Select parameter" dialog appears, as in Figure 1. As the figure shows, two levels of password can be set, and they are fully independent of each other; once usage under either one reaches its configured limit, the dongle restricts the board's use. If you set both levels, give them different permissions, that is, different time limits and scopes. The software defaults to the level-1 password; to choose the level-2 password, simply click it.
Figure 1: Select Parameter
3. After choosing the password level, click OK; the "JczShareLock" dialog appears. Figure 2 shows the default release-edition interface; from the drop-down menu you can switch to shared-edition mode, as in Figure 3.
Figure 2: release-edition interface

Figure 3: shared-edition interface
The encryption methods for the release edition and the shared edition are described below.
4. Release-edition mode, Figure 2. It has no count, day or time settings, only a password setting, and is mainly used to protect your own mode settings against changes by others. Click the "写入/Write In" button to enter the password-writing screen, Figure 4. For a first-time setup, tick the "change password" option, type 4 digits into the first field under "new password" and 4 digits into the second field, completing the initial entry; then repeat the same password in the field under "confirm password" and click OK to finish setting it. To change an existing password, first type the old password correctly into the field under "enter password" at the top of the screen, then tick "change password" and enter the new one; otherwise the change fails with a "wrong password" message.
Figure 4: password-writing screen
5. Shared-edition password setup; Figure 3 shows the shared-edition interface. First set the usage limits: count, days and time, typed directly into the corresponding fields. Note that the configured days and times are measured against the internal clock of the computer running the software, so before writing, make sure that computer's clock is accurate. Then click "写入/Write In" to enter the password-writing screen.
The shared edition's password-writing screen is identical to the release edition's and is operated the same way; follow step 4.
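The shared-edition limits described in step 5 (a use count plus a day deadline judged against the local clock) can be sketched as follows. The class and field names are illustrative, not EZCAD's actual on-dongle format.

```python
from datetime import datetime, timedelta

class ShareLimit:
    """Sketch: a license valid for a limited number of uses and days,
    judged against the local clock as the white paper describes."""
    def __init__(self, max_uses: int, valid_days: int, start: datetime):
        self.uses_left = max_uses
        self.deadline = start + timedelta(days=valid_days)

    def check_out(self, now: datetime) -> bool:
        """Consume one use; return False once any limit is exhausted."""
        if now > self.deadline or self.uses_left <= 0:
            return False
        self.uses_left -= 1
        return True

t0 = datetime(2024, 1, 1)
lic = ShareLimit(max_uses=2, valid_days=30, start=t0)
assert lic.check_out(t0) is True
assert lic.check_out(t0 + timedelta(days=10)) is True
assert lic.check_out(t0 + timedelta(days=11)) is False   # use count exhausted
assert ShareLimit(1, 30, t0).check_out(t0 + timedelta(days=31)) is False  # expired
```

Because the check trusts the local clock, winding the system time back would extend the license, which is exactly why the white paper warns you to verify the clock before writing the limits.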


Dongle installation notes (must read)

Do not use 360 or Kaspersky antivirus; both interfere heavily with the system. If step 6 of the installation triggers a virus warning, rest assured there is no problem: close the antivirus and install, and after everything is installed, add the Glodon installation directory to the antivirus's exclusion list or whitelist.
Who this driver is for:
1. B-lock customers only (dongles bought at 145 yuan for one region or 185 yuan for two regions).
2. Customers who paid after 25 May 2010, or who returned the dongle to us for upgrading after 25 May 2010 as instructed.
3. Install all the Glodon software normally first, then install driver 6 last. Note: for the genuine driver, always use version 154 (download any copy from the Glodon service portal; it is not region-specific). Versions 156 and 157 are not recommended; current software has no need for either.
With this driver, the step-6 driver on the CD supplied with the dongle is no longer needed; replace it entirely with the current one. We update the copy-dongle driver from time to time, so watch for updates.
If, after installing the new driver or during normal use, the software reports a string of English errors or cannot find the dongle, resolve it as follows:
1. Delete the Grandsoft Shared directory under C:\Program Files\Common Files.
2. Delete the Grandsoft Installation Information directory under C:\Program Files.
3. Delete the Glodon installation directory.
4. Reinstall the software.
If that still fails, the only remedy is to format drive C and reinstall XP from scratch; an in-place reinstall over the old XP will not help.
"Problems" here means prompts consisting of English error text or "dongle not found". Prompts about a missing norm database (定额库) or missing rules are your own installation issues; check those yourself.
To see which software your dongle supports, open Start → Programs → Glodon dongle programs → View purchased products; everything listed there is supported by your dongle, and the list is rich enough to surprise you.
Points to note:
(1) A copy dongle is by no means a one-off purchase needing no further service; like the genuine article it must be upgraded, which covers two things. The first is that, with the dongle itself unchanged, the dongle driver (that is, the driver 6 mentioned above

ET 2009 new features

New features of the second-generation ET products
The ET intelligent apparel CAD system received a full upgrade in 2009, releasing automation features held in reserve for two years. Highlights:
1. A quick-access display control toolbar; revised look and feel; adjusted toolbars and additional common icons.
2. Custom buttonhole shapes: a buttonhole shape you draw yourself can now be stored via the accessory library, then chosen freely whenever you make buttonholes.
3. A complete intelligent linked-editing scheme: base drawing and pieces linked, body and sleeve linked, grading linked, piece extraction linked to the mother pattern.
4. Pieces can be extracted with the desired seam-allowance width in one step; when the style changes, edits to the mother pattern propagate to the extracted pieces (no more re-extracting pieces at every style revision).
5. Zip-stitching, spiral and other common process tools added to the operation tool panel.
6. One-piece, two-piece and raglan sleeves generated automatically from adjustable parameters. Raglan sleeves are handled automatically: generated from the mother pattern's sleeve shape and your requirements, keeping the sleeve shape while the sleeve width and cap height are adjusted.
7. Two-part (tailored) collars: adjust the break point, neckline size, roll line, collar width, stand amount, collar shape and so on, and the tailored collar is generated automatically. This removes the difficulty of drafting collars by hand, allows intelligent linked editing of the collar shape, and gives free control over collar fit; it is a feature that works hand in hand with design.
8. Combined sleeve adjustment: the sleeve cap is joined and compared against the armhole with linked editing, with zip-stitching available at the same time, so shape and ease are controlled numerically while adjusting (better control of the pattern during revisions and a more satisfying result).
9. A complete shrinkage scheme: whole-pattern shrinkage, per-piece shrinkage, separate shrinkage on the bust and waist lines, and dynamic local shrinkage on a piece for areas that shrink differently (handy when a spec sheet demands it).
10. Deformed stitching: cut a pattern open along the curve of one edge and complete the pattern.
11. ET opens DXF, Gerber and Lectra pattern files, and ET files can be saved in DXF, Gerber or Lectra formats. ET can digitize directly from Gerber and Lectra digitizer boards into ET files, and can directly output to and drive Gerber and Lectra plotters.
12. Measurement mechanism: every measured dimension can be named, removing repeated measuring of the same segments, and the measurements interoperate with the size table. A comparison mode shows the differences between the pattern dimensions and the spec-sheet dimensions, for checking patterns; the resulting table can also be plotted onto the pattern (no more drawing tables by hand).

