
Multi-Class Latency-Bounded Web Services

Vikram Kanodia and Edward W. Knightly

Department of Electrical and Computer Engineering

Rice University

In Proceedings of IEEE/IFIP IWQoS 2000

Abstract—Two recent advances have resulted in significant improvements in web server quality-of-service. First, both centralized and distributed web servers can provide isolation among service classes by fairly distributing system resources. Second, session admission control can protect classes from performance degradation due to overload. The goal of this work is to design a general "front-end" algorithm that uses these two building blocks to support a new web service model, namely, multi-class services which control response latencies to within pre-specified targets. Our key technique is to devise a general service abstraction to adaptively control not only the latency of a particular class, but also to assess the inter-class relationships. In this way, we capture the extent to which classes are isolated or share system resources (as determined by the server architecture and system internals) and hence their effects on each other's QoS. For example, if the server provides class isolation (i.e., a minimum fraction of system resources independent of other classes), yet also allows a class to utilize unused resources from other classes, the algorithm infers and exploits this behavior, without an explicit low-level model of the server. Thus, as new functionalities are incorporated into web servers, the approach naturally exploits their properties to efficiently satisfy the classes' performance targets. We validate the scheme with trace-driven simulations.

I. INTRODUCTION

An increasingly dominant factor in poor end-to-end performance of web traffic is excessive latencies due to overloaded web servers. Consequently, reducing and controlling server latencies is a key challenge for delivering end-to-end quality-of-service.

Towards this end, two key mechanisms have been introduced to improve web QoS. First, admission control has been proposed as a mechanism to prevent web servers from entering overload situations [1], [2], [3]. Specifically, by admitting new sessions only if the measured load is below a pre-specified threshold, admission control can prevent the server from entering a regime in which latencies are excessive, or session throughput collapses due to dropped requests and aborted sessions.

Second, web servers can now provide performance isolation and differentiation among the different service classes hosted by the site. In particular, a server may support a number of service classes which may represent different classes of users or different applications (news, email, static documents, dynamic content, etc.). Whether such classes are supported in a single-node server or a distributed cluster, mechanisms devised in [4], [5], [6] and [7] respectively can ensure that each service class receives a certain share of system resources (disk, CPU, memory, etc.). Moreover, by appropriately weighting the share of system resources, differentiation among service classes is achieved. Similarly, as delays are also incurred in the system's request queues, prioritization of incoming requests can further differentiate the performance among classes [1], [8].

This research is supported in part by Nokia Corporation. Additional funding is provided by NSF CAREER Award ANI-9733610, NSF Grant ANI-9730104, and Texas Instruments.

Thus, differentiation and isolation can be achieved by prioritized scheduling of system resources, and protection from overload can be achieved by admission control. However, even if taken together, these two mechanisms cannot ensure that a request's targeted delay will be satisfied. Consequently, because end-to-end latency is a key component of user-perceived quality-of-service, new mechanisms are needed to ensure that the service class' request delays are limited to within the targeted value.

In this paper, we introduce a new framework for multi-class web server control which can satisfy per-class latency constraints, and devise an algorithm termed Latency-targeted Multi-class Admission Control (LMAC). Our key technique is to design a scheme within a general framework of request and service envelopes. Such envelopes statistically describe the server's request load and service capacity as a function of interval length, resulting in a high-level service abstraction which circumvents the need to model or measure the components of a request's delay. For example, a request incurs delays in the request queue, CPU processing, memory, disk in the case of cache misses, and so on: individually controlling the latency in each subsystem is simply an intractable and impractical task in a modern server. Instead, we utilize the envelopes as a simple tool for controlling class quality-of-service while maximizing utilization of system resources. Our approach has three key distinctions. First, it enables web servers to support a strong service model with class latencies bounded to a pre-specified target, i.e., a minimum fraction of accepted requests will be serviced within the class delay target.

Second, it provides a mechanism to characterize and control the inter-class relationships. For example, suppose server resources are allocated to classes in a weighted-fair manner so that classes have performance isolation, yet a class is able to utilize unused resources of other classes. In general, the extent to which an increased load in one class affects the performance of another class is a complex function of the total system load, the particular resource scheduling algorithm, and the low-level interactions among the server's resources. Building on the results of [9], we use the envelopes as a way

Fig. 1. System Model (front end and back-end nodes)

to characterize the high-level isolation/sharing relationships among classes and design a general multi-class algorithm to exploit these effects.

Finally, by decoupling access control and resource allocation from the internals of the server, we obtain a general solution that applies to a broad class of servers, including single-node and distributed servers, and servers with varying levels of quality-of-service support. Consequently, as the server is enhanced with functionalities such as weighted-fair resource allocation, the algorithm naturally exploits these features to better utilize the available resources and support an increased number of sessions per service class.

To evaluate our scheme, we perform a broad set of trace-driven simulation experiments. We first compare our scheme with an uncontrolled system and illustrate that the algorithm is able to prevent performance degradation due to overload. Next, comparing the delays obtained in simulations with the class QoS objectives, we find that in many cases, latencies can be controlled to within several percent of the targeted value. Moreover, in the single-class case, we compare with a simple queuing theoretic approach, and find that envelopes control the system to a significantly higher degree of accuracy. Finally, in the multi-class case, simulations indicate that substantial inter-class resource sharing gains are available. Here, we find that the approach is able to extract these gains, and efficiently utilize system resources while satisfying each class' delay targets.

The remainder of this paper is organized as follows. In Section II, we describe the server architecture and the system abstraction used for QoS management. Next, in Section III, we describe a simple single-class queuing theoretic approach to serve as a benchmark for performance analysis, and illustration of the key problems in meeting delay targets. In Section IV, we introduce the request and service envelopes and develop an access control algorithm based on the properties of these envelopes. We next describe the simulation scenario

and present experimental results in Sections V and VI respectively. Finally, in Section VII, we conclude.

II. SYSTEM ARCHITECTURE

In this section we describe the basic system architecture of a scalable QoS web server. We are not proposing a new architecture, as all of the mechanisms described below have been introduced previously. Rather, our goal is to consider a general system model for admission control which can exploit various QoS server mechanisms to efficiently satisfy targeted class latency objectives.

Figure 1 depicts the general system model that we consider. The system consists of a state-of-the-art web server augmented with admission control capabilities as in [1], [2], [3]. All incoming requests, which can be sessions (as in [2]) or individual "page" requests, are classified into different quality-of-service classes. There are a number of possible classification criteria including the address of the server (in the case of web hosting applications), the identity of the user issuing the request, or the particular application or data type. The goal of our admission control algorithm is to determine whether admission of a new request in a particular service class can be supported while meeting the latency targets of all classes. If it is not possible, the request should be rejected outright, or redirected to a lower priority class or a different server.

As shown in the figure, incoming requests are first queued onto the listen queue or dropped if the listen queue is full. The admission controller dequeues requests from the listen queue and determines if they will be admitted or rejected. Notice that the admission control unit is part of the front-end and monitors all of the server's arrivals and departures. As depicted in Figure 2, the admission control unit performs observation-based control of the server using measured request and service rates of each class. Further, notice from Figure 2(b) that our class-based admission control will incorporate effects of

Fig. 2. Admission Control Model (per-class measured request arrivals and service times)

inter-class resource sharing by also considering the effects of a new admission on other service classes.

If admitted, requests are then scheduled according to the server's request scheduling algorithm which can be first-come-first-serve or class based [1], [8]. Finally, requests are submitted to the back-end nodes in a distributed server [7], or in the case of a single node server are simply processed by the node itself.

A key point is that the admission controller applies to a general system model including single-node and distributed servers, FCFS and class-based scheduling, and standard as well as QoS-enhanced operating systems. When QoS mechanisms are present in the server (such as class-based rather than FCFS request scheduling), the admission controller will measure the corresponding performance improvements and exploit the QoS functionality by admitting more requests per class, thereby increasing the overall system efficiency. For example, consider a server farm where the front-end does sophisticated load balancing to achieve better overall throughput by exploiting locality information at the back-end [10]. In this case, the admission controller will measure the decreased service latencies and be able to admit an increased number of sessions into various classes, thereby exploiting the efficiency gain of load balancing. Finally, notice from Figure 2 that the admission controller does not measure or model resources at the operating system level, such as disk, memory, or CPU. Instead, we abstract all low-level resources into a virtual server which allows us to design an admission controller that is applicable to a broad class of web server architectures and applications.

III. BASELINE SCHEME

In this section, we sketch a simple queuing theoretic algorithm devised to satisfy a delay target. The goal here is three-fold. First, we illustrate an abstraction of the server resources into a simple queuing model. Second, we highlight key issues for managing multi-class web services. Finally, we use the approach as a baseline for experimental comparisons and, by highlighting its limitations, we further motivate the LMAC scheme.

A. Problem Formulation

Consider a single class with quality-of-service targets given by a delay bound of 0.5 seconds to be met by 99% of requests. Further consider a stationary and homogeneous arrival of sessions and requests within sessions, so that there exists some maximum number of requests per second which can be serviced so that this QoS requirement is met. If the overall arrival rate of requests to the server is greater than this maximum, the difference should be blocked (or redirected) by the access controller to prevent an overload situation.

The key question is how to determine which load level is the maximum one that can support the service. Specifically, if the current load is below this maximum, then the current 99th percentile delay will be below the target. However, when a new session requests access to the server, the new 99th percentile delay of this class and others is in general a complex function of the system workload, and the low-level interactions among the many resources consumed such as disk, bandwidth, memory, and CPU. Below, we sketch a baseline approach for assessing the impact of new requests and sessions on the delay target via a simple queuing theoretic abstraction.

B. Sketch Algorithm

Here, we approximate class j's service by an M/M/1 queue with an unknown service rate. In particular, as described above, a request's service latency includes delays from session queues, disk access, etc. The M/M/1 model abstracts these resources into a single virtual server with independent and exponential requests and services as follows.

Over the last T seconds from the current time t, let

x̄_j = mean service time of a request in class j,
λ̂_j = mean arrival rate of admitted class j requests,
d_j = class j target delay,

together with the variance of class j requests over durations and the variance of class j latency for concurrent requests. The service rate is estimated from the serviced requests as

μ̂_j = Σ_k 1{t−T ≤ s_k ≤ t} / Σ_k x_k 1{t−T ≤ s_k ≤ t}, (1)

where 1{·} is an indicator function and s_k and x_k denote the completion time and service time of request k, and the mean delay is

D̄_j = 1/(μ̂_j − λ̂_j), (3)

so that the delay violation probability under an increased load λ̂′_j will be

P(D_j > d_j) = exp(−(μ̂_j − λ̂′_j) d_j). (4)

Thus, the increased load due to the new session should be admitted if the estimated P(D_j > d_j) is less than the class' target. Consequently, under the particular assumptions of the M/M/1 model, the above scheme limits the class' latency to within the target for the specified fraction of requests.
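As a sanity check, the baseline admission condition can be sketched in a few lines of Python (a toy illustration of the M/M/1 test, not the paper's implementation; the function name and the example's measurement inputs are hypothetical):

```python
import math

def mm1_admission_test(mean_service_time, arrival_rate, new_arrival_rate,
                       delay_target, violation_target):
    """Baseline M/M/1 admission test: admit the increased load only if the
    estimated probability that a request's delay exceeds the target stays
    below the allowed violation fraction. For an M/M/1 queue with service
    rate mu and arrival rate lam, P(delay > d) = exp(-(mu - lam) * d)."""
    mu = 1.0 / mean_service_time            # estimated service rate (req/sec)
    if new_arrival_rate >= mu:
        return False                         # unstable queue: reject outright
    p_violation = math.exp(-(mu - new_arrival_rate) * delay_target)
    return p_violation <= violation_target

# Example: measured 5 ms mean service time, load rising from 180 to
# 190 req/sec, target: 99% of requests within 0.5 seconds.
print(mm1_admission_test(0.005, 180.0, 190.0, 0.5, 0.01))  # → True
```

Raising the proposed load further (e.g., to 195 req/sec in this example) pushes the estimated violation probability past 1% and the test rejects.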

C. Limitations of the Baseline Scheme

While server access control based on Equations (1)-(4) does provide the ability to meet a class' latency objectives with a high-level abstraction of system resources, it encounters several key problems which preclude its practical application to realistic web servers.

First, it offers no support for multiple service classes. That is, by treating each class independently, the impact of a new session on other classes is ignored. Second, the assumption that inter-request times are independent and exponentially distributed conflicts with measurement studies [11]. Third, the assumption of independent and exponentially distributed service times cannot account for the highly variable service times of requests and ignores the strong effects of caching, namely, that consecutive requests for the same document can result in highly correlated, as well as highly variable, service times.
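The cost of the exponential assumptions can be seen numerically: a heavy-tailed (e.g., Pareto) service-time distribution with the same mean places vastly more probability mass beyond a delay target than an exponential (a toy comparison with illustrative parameters, not drawn from the paper's traces):

```python
import math

def exp_tail(x, mean):
    """P(X > x) for an exponential distribution with the given mean."""
    return math.exp(-x / mean)

def pareto_tail(x, mean, alpha=1.5):
    """P(X > x) for a Pareto distribution with shape alpha > 1 and the
    given mean; the scale xm is chosen so the mean matches:
    mean = alpha * xm / (alpha - 1)."""
    xm = mean * (alpha - 1) / alpha
    return 1.0 if x <= xm else (xm / x) ** alpha

# Same 10 ms mean service time; probability of exceeding a 1-second target:
print(exp_tail(1.0, 0.01))     # exponential predicts essentially never (~1e-44)
print(pareto_tail(1.0, 0.01))  # heavy tail predicts roughly 2 in 10,000
```

An admission controller calibrated to the exponential tail would therefore drastically underestimate delay violations under realistic, heavy-tailed workloads.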

In Section VI, we experimentally quantify the impact of these limitations in a realistic scenario.

IV. MULTI-CLASS ADMISSION CONTROL

In this section, we build on the previous admission control model and introduce the Latency-targeted Multi-class Admission Control (LMAC) algorithm. The goal is to provide a strong service model for web classes that controls statistical latency targets of multiple service classes. The LMAC algorithm has two key distinctions from the baseline scheme. First, we introduce use of envelopes as a general yet accurate way of describing a class' service and demand. As for the baseline scheme, this is a high-level workload and service characterization, yet, unlike the baseline scheme, envelopes capture effects of temporal correlation and high variability in requests and service latencies. Second, exploiting the inter-class theory of [9], we show how the performance effects of one class on another can be incorporated into admission control decisions.

A. Envelopes: A General Service and Demand Abstraction

Deterministic [12] and statistical [9], [13] traffic envelopes have been developed to manage network QoS. Moreover, deterministic [12] and statistical [9] service envelopes have provided a foundation for multi-class network QoS, in [9], also incorporating inter-class resource sharing. Below, we extend these techniques to the scenario of measurement-based web services exploiting two key properties of envelopes: characterization of temporal correlation and variance, and simple online measurement via jumping windows.

Denoting the number of class j requests in the interval [s, s+t] by A_j[s, s+t], (5)

the mean number of requests in an interval of length t is

ā_j(t) = E[A_j[s, s+t]],

and the variance is given by

σ_j²(t) = var(A_j[s, s+t]) (8)

and likewise for the service variance. Notice that ā_j(t)/t approaches the mean request rate λ̂_j as in Equation (2).

Fig. 3. 95th Percentile Request Envelopes ((a) cumulative envelope: number of requests vs. interval length (sec))
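The jumping-window measurement of a request envelope's mean and variance can be sketched as follows (a simplified illustration; the function name and example inputs are hypothetical, not the paper's measurement code):

```python
def request_envelope(arrival_times, interval_length, horizon):
    """Estimate the mean and variance of the number of arrivals falling in
    non-overlapping (jumping) windows of the given interval length, over a
    measurement horizon starting at time 0."""
    n_windows = int(horizon // interval_length)
    counts = [0] * n_windows
    for t in arrival_times:
        w = int(t // interval_length)      # index of the jumping window
        if w < n_windows:
            counts[w] += 1
    mean = sum(counts) / n_windows
    var = sum((c - mean) ** 2 for c in counts) / n_windows
    return mean, var

# Example: one request every 0.1 s over 10 s, measured in 1-second windows.
arrivals = [0.1 * k for k in range(100)]
print(request_envelope(arrivals, 1.0, 10.0))  # deterministic: (10.0, 0.0)
```

Repeating the measurement across a range of interval lengths traces out the envelope as a function of t, as in Figure 3.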

Figure 4 depicts an example service envelope from the simulation experiments of Section VI. Analogous to Figure 3, it depicts the number of concurrent requests serviced as a function of the latency incurred. The key property of the figure is its convexity so that, for example, the time required to service n requests is far less than n times the time required to service 1 request. This is due to the effects of caching (requests

Note that if the n'th request enters the system after the (n−1)'th request is serviced, then this duration reflects the two requests' inter-arrival times rather than the time to service two requests.

Fig. 4. Service Envelope (number of requests vs. interval length (sec))

for the same document within close temporal proximity experience significantly smaller latencies) and more generally, the server's ability to efficiently service concurrent requests.

C. Sketch LMAC Algorithm

The LMAC test is invoked upon arrival of a new session or request in class j which will increase the request rate from its current value λ̂_j to λ̂_j + Δλ_j. The LMAC test consists of two parts: the first ensures class j's delay target is satisfied and the second ensures that other classes will not suffer QoS violations due to the increased workload of class j. We illustrate the test pictorially using Figures 3 and 4.

For class j itself, with a statistical characterization of both requests and service, maintaining a maximum horizontal distance of d_j between the two curves ensures that the delay target is satisfied with the specified probability (see [9], [14] for further details). With an increase in λ̂_j, class j itself increases its request rate yet retains its previous service level. Hence, the latency target is satisfied if the two curves remain d_j apart after an increase of Δλ_j in the request envelope.

For classes k ≠ j, we must also consider the performance effects of class j on class k's delay targets. Our approach is to bound the effects of the incremental load. Specifically, by an upper bound on class k's new latency, we ensure that class j does not force class k into QoS violations. Moreover, by applying this bound only to the incremental load, the performance penalty for this worst case approach is mitigated. In other words, class k's current service measurements incorporate the effects of class j's load λ̂_j, so that only Δλ_j is included in the bound. The bound is obtained by considering that the incremental requests have strict priority over class k sessions. Under this worst-case scenario, class k's workload remains the same yet its service over intervals of length t is decreased by Δλ_j·t. Hence, the new request can be admitted if each class k can satisfy its delay target d_k, even under a reduction in service by Δλ_j·t.

We make three observations about the LMAC test. First, each class' service envelope captures the gains from inter-class resource sharing. For example, if class j can exploit unused capacity from an idle or lower priority class k, class j measures a correspondingly larger service envelope. Similarly, if the server has complete isolation of classes (e.g., via separate back-end nodes in the extreme case), then no such gains will be available and the algorithm will correctly limit admissions to reflect this. Second, the algorithm also ensures class performance isolation, i.e., that admissions in one class do not cause violations in another class, by incorporating the effects of inter-class interference. Finally, we note that while LMAC attempts to maximize the utilization of the web server subject to the QoS constraints independent of the server architecture, QoS functionality in the server itself remains critical. For example, if a web server provides no QoS support and no class differentiation, LMAC infers that only a single service can truly be provided, and restricts admissions to satisfy the most stringent class requirements. Alternatively, when QoS mechanisms are deployed in the web server [4], [7], the resulting efficiency gains are in turn exploited by the LMAC algorithm, which increases the number of admitted sessions in each class and hence the overall system utilization.

V. SIMULATION SCENARIO

Our simulation scenario consists of a prototype implementation of the LMAC algorithm built into the simulator described in [10], which was developed to approximate the behavior of OS management for CPU, memory and caching/disk storage. The front end node has a listen queue in which all incoming requests are queued before being serviced. Each back-end node consists of a CPU and locally attached disk(s) with separate queues for each. In addition, each node maintains its own main memory cache of configurable size and replacement policy.

Upon arrival, each request is queued onto the listen queue or dropped if the listen queue is full. Processing a request requires the following steps: dequeuing from the listen queue, connection establishment, disk reads (if needed), data transmission and finally connection tear down. The processing occurs as a sequence of CPU and I/O bursts. The CPU and I/O bursts of different requests can be overlapped but the individual processing steps for each request must be performed in sequence. Also, data transmission immediately follows disk read for each block.

We have used the same costs for basic request processing as in [10]. The numbers were derived by performing measurements on a 300 MHz Pentium II machine running FreeBSD 2.2.5 and an aggressive experimental server.

Connection establishment and tear down costs are set to 145 µs of CPU time each. Transmit processing costs are 40 µs of CPU time per 512 bytes. Reading a file from the disk requires a latency of 28 ms for 2 seeks + rotational latency followed by a transfer time of 410 µs per 4 KByte (resulting in a peak transfer rate of 10 MBytes/sec). A file larger than 44 KBytes is charged an additional latency of 14 ms (1 seek + rotational latency) for every 44 KByte block length in excess of 44 KBytes. The cache replacement policy is Greedy-Dual-Size [15]. To incorporate cache behavior, we deliberately set the cache size in our simulation to be 32 MB. The small cache size effectively compensates for the relatively small data set of our traces.
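The per-request cost model can be expressed as a small calculator (a sketch using the stated constants; the function and its cache-hit flag are illustrative, not taken from the simulator):

```python
def request_cost_seconds(file_size_bytes, cache_hit):
    """Estimate the processing cost of one request under the cost model:
    145 us each for connection setup and teardown, 40 us of CPU per
    512 bytes transmitted, and, on a cache miss, 28 ms (2 seeks +
    rotational latency) plus 410 us per 4 KByte transferred, with an
    extra 14 ms per additional 44 KByte block beyond the first."""
    cost = 2 * 145e-6                                   # connect + teardown
    cost += 40e-6 * ((file_size_bytes + 511) // 512)    # transmit CPU
    if not cache_hit:
        cost += 28e-3                                   # initial disk latency
        cost += 410e-6 * ((file_size_bytes + 4095) // 4096)  # disk transfer
        extra_blocks = max(0, (file_size_bytes - 1) // (44 * 1024))
        cost += 14e-3 * extra_blocks                    # long-file seeks
    return cost

# Example: a 4 KB file served from cache vs. from disk.
print(request_cost_seconds(4096, cache_hit=True))    # ~0.61 ms
print(request_cost_seconds(4096, cache_hit=False))   # ~29 ms
```

The roughly 50-fold gap between hit and miss costs is what makes the small 32 MB cache, and hence the correlation structure of the service times, so visible in the measured service envelopes.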

The input to the simulator is a set of streams of tokenized requests, one stream for each user class. Requests within a user class arrive with a user-defined mean rate. Each request represents a file (and the corresponding file size in bytes). We generate the arrival stream from logs collected from real web servers.

One of the traces used in our simulations is generated from the CS departmental server log at Rice University. Although we do have the request arrival times embedded in the logs, they do not represent the workload of an overloaded web server. For simplicity we simulate inter-arrival times as exponential. (As this particular assumption is unrealistic, we plan to collect additional traces which also include access times.) The latency experienced by a request is the delay from the time the request arrives at the listen queue until the time when that connection is torn down. The time taken to make admission control decisions is assumed to be negligible. This does not imply that we completely ignore the effect of the server time spent in processing eventually rejected requests. In fact, as can be seen from Figure 1, all admission control decisions are made after the server dequeues a request from the listen queue. Thus any rejected request has used up some of the resources of the server (namely the time spent in the finite sized listen queue). We ignore the actual processing time spent while making the admission control decision.

VI. EXPERIMENTAL INVESTIGATIONS

In this section, we describe the experiments performed to investigate the performance of the LMAC algorithm. The first experiment was performed to demonstrate LMAC's capability to actually protect the server from overload. In particular, without admission control, as the load offered to a server is increased beyond the server's capacity, the request latencies become excessive. Admission control provides protection from overload by monitoring the utilization of the server and blocking requests which will yield unacceptable performance.

A. Overload Protection

To demonstrate the overload protection capabilities of the LMAC algorithm, we simulate various offered loads to the web server, keeping the targeted request latencies the same. We compare the performance of LMAC with a web server without admission control capabilities and measure the 95th percentile delay in both cases. Figure 5 shows the results for a targeted delay of 1 second. As depicted in the figure, latencies in the unmodified server increase without bound as the load is increased. On the other hand, the LMAC algorithm

Fig. 5. 95th Percentile Latency vs. Load

blocks requests to meet the delay constraint and thus protects the web server from overload. In addition to overload protection, the figure indicates that LMAC controls latencies to within a small error of the target.

B. Accuracy

For the second experiment, we compare the performance of LMAC with the queuing theoretic baseline approach of Section III. (We do not compare with admission control schemes from the literature as none have latency targets.) Figure 6 shows the measured utilization versus 95th percentile delay for an offered load of 200 requests/sec. The server is a stand-alone server with all incoming requests belonging to a single user class. The Baseline and the LMAC curves depict the throughput obtained when targeting 95th percentile delay values of 0.25, 0.5, 1, 1.25, 1.5, 1.75, 2 and 2.5 seconds. The Simulation curve depicts the measured value of 95th percentile delay and measured value of throughput when targeting the

Fig. 6. Throughput vs. 95th Percentile Latency

[Table: per-class throughput (reqs/sec) for the multi-class case with and without inter-class sharing]

our approach can be applied to off-the-shelf servers enhanced with monitoring and admission control. Moreover, as QoS and performance functionalities are added to servers, e.g., class-based request scheduling, operating systems enhanced with quality-of-service mechanisms, or locality-aware load balancing, we have shown that the LMAC algorithm exploits these features and realizes a corresponding increase in utilization to various service classes.

In future work, we plan to address dynamic content, experiment with further traces, and implement the algorithm on a prototype server.

ACKNOWLEDGMENT

We are grateful to Mohit Aron, Peter Druschel, Vivek Pai and Willy Zwaenepoel for helpful discussions and assistance with the simulator and traces.

REFERENCES

[1] N. Bhatti and R. Friedrich. Web server support for tiered services. IEEE Network, 13(5):64–71, September 1999.
[2] L. Cherkasova and P. Phaal. Session-based admission control: a mechanism for improving performance of commercial web servers. In Proceedings of IEEE/IFIP IWQoS '99, London, UK, June 1999.
[3] K. Li and S. Jamin. A measurement-based admission controlled web server. In Proceedings of IEEE INFOCOM 2000, Tel Aviv, Israel, March 2000.
[4] G. Banga, P. Druschel, and J. Mogul. Resource containers: A new facility for resource management in server systems. In Proceedings of the 3rd USENIX Symposium on Operating Systems Design and Implementation, February 1999.
[5] J. Bruno, J. Brustoloni, E. Gabber, B. Ozden, and A. Silberschatz. Retrofitting quality of service into a time-sharing operating system. In Proceedings of the 1999 USENIX Annual Technical Conference, Monterey, California, 1999.
[6] J. Bruno, E. Gabber, B. Ozden, and A. Silberschatz. The Eclipse operating system: Providing quality of service via reservation domains. In Proceedings of the 1998 USENIX Annual Technical Conference, New Orleans, Louisiana, 1998.
[7] M. Aron, P. Druschel, and W. Zwaenepoel. Cluster reserves: A mechanism for resource management in cluster-based network servers. In Proceedings of ACM SIGMETRICS 2000, June 2000.
[8] M. Crovella, R. Frangioso, and M. Harchol-Balter. Connection scheduling in web servers. In Proceedings of the 1999 USENIX Symposium on Internet Technologies and Systems (USITS '99), October 1999.
[9] J. Qiu and E. Knightly. Inter-class resource sharing using statistical service envelopes. In Proceedings of IEEE INFOCOM '99, New York, NY, March 1999.
[10] V. Pai, M. Aron, G. Banga, M. Svendsen, P. Druschel, W. Zwaenepoel, and E. Nahum. Locality-aware request distribution in cluster-based network servers. In Proceedings of the 8th Conference on Architectural Support for Programming Languages and Operating Systems, San Jose, CA, October 1998.
[11] M. Crovella and A. Bestavros. Self-similarity in World Wide Web traffic: Evidence and possible causes. IEEE/ACM Transactions on Networking, 5(6):835–846, December 1997.
[12] R. Cruz. Quality of service guarantees in virtual circuit switched networks. IEEE Journal on Selected Areas in Communications, 13(6):1048–1056, August 1995.
[13] R. Boorstyn, A. Burchard, J. Liebeherr, and C. Oottamakorn. Effective envelopes: Statistical bounds on multiplexed traffic in packet networks. In Proceedings of IEEE INFOCOM 2000, Tel Aviv, Israel, March 2000.
[14] E. Knightly and N. Shroff. Admission control for statistical QoS: Theory and practice. IEEE Network, 13(2):20–29, March 1999.
[15] P. Cao and S. Irani. Cost-aware WWW proxy caching algorithms. In Proceedings of the USENIX Symposium on Internet Technologies and Systems (USITS), December 1997.

DF12型手扶拖拉机变速驱动系统设计

本科毕业设计 题目:DF12型手扶拖拉机变速驱动系统设计 学院: 姓名: 学号: 专业:机械设计制造及其自动化 年级: 指导教师:

摘要 随着变型运输拖拉机和农用运输车的发展,原来依靠农村运输业发展起来的小四轮拖拉机逐步转向田间地头。然而,目前小四轮拖拉机田间作业能力差,又没有很多配套的农业机械,农忙季节短,致使大量小四轮拖拉机一年中作业时间短,被迫长期闲置着。这影响了农村专业户的作业效益,也造成了不应该有的资源浪费。针对这些情况,我们在原有小四轮拖拉机的基础上稍微作些改动,使它的功能延伸。譬如可在原来小四轮拖拉机的基础上,改变座位、方向盘、离合、油门、刹车的方位,把拖拉机变成倒开式,在变速箱后安装挖掘装置、铲运装置或装载装置而成。 本次毕业设计是对机械专业学生在毕业前的一次全面训练,目的在于巩固和扩大学生在校期间所学的基础知识和专业知识,训练学生综合运用所学知识分析和解决问题的能力。是培养、锻炼学生独立工作能力和创新精神之最佳手段。毕业设计要求每个学生在工作过程中,要独立思考,刻苦钻研,有所创造的分析、解决技术问题。通过毕业设计,使学生掌握改造方案的拟定、比较、分析及进行必要的计算。 关键字:齿轮操纵机构轴轴承锁定机构

DESING of a CERTAIN TYPE of TRACTOR GEARBOX With variant transport tractors and agricultural the development of carriage car, depends on rural transportation industry to develop the small four-wheel tractor to field edge of a field. However, at present, small four-wheel the tractor field work ability is poor, and not many ancillary agricultural machinery, busy season is short, resulting in a large number of small four-wheel tractor year short operation time, forced long-term idle. The influence of rural specialist work benefit, also cause should not be some waste of resources. In light of these circumstances, we in the original small four-wheel tractor based on slightly to make some changes, make it functional extension. For example, in the original small four-wheel tractor based on, change seats, steering wheel, clutch, throttle, brake position, the tractor into inverted open, in a gearbox installed after digging device, lifting device or a loading device. The graduation design is about mechanical speciality students before graduation and a comprehensive training, purpose is to consolidate and expand the students learn the basic knowledge and professional knowledge, training students' comprehensive use of the knowledge the ability to analyze and solve problems. Is training, training students the ability to work independently and the spirit of innovation is the best means. Graduation design requirements of each student in the course of the work, we need to think independently, study assiduously, create somewhat analysis, solving technical problems. Through the graduation project, so that students master the transformation plan formulation, comparison, analysis and necessary calculation. Key words:Gear Manipulation of body Axis Bearing Locking mechanism

DAQ Data Acquisition Card Quick Start Guide

First of all, thank you for purchasing an NI DAQ product. The following briefly describes how to install and start using a DAQ card.

Before installing the DAQ hardware, confirm that the DAQ driver software is installed. Your computer needs Measurement And Automation Explorer (MAX) to manage all of your NI devices, and you must install the NI-DAQ software; we currently recommend the latest version (install it from the CD, or download the newest driver from ni.com/support and click Drivers and Updates). The new driver supports most NI DAQ cards, including the S, E, and M series and the USB products.

After NI-DAQ is installed you will find the MAX application on the desktop. At this point shut down the computer, install the hardware by inserting the PCI or PCMCIA DAQ card, and restart. On boot the operating system will detect the device and install the driver automatically; follow the dialogs to complete the installation.

When installation finishes, open MAX. Under Devices and Interfaces there are two categories, Traditional DAQ and DAQmx, according to which API your card model supports. In general, E-series cards support both, M-series cards support only DAQmx, and S-series cards vary. Find your DAQ card model under the corresponding Traditional DAQ or DAQmx category; we then recommend calibrating and testing it first.

You can confirm which versions support your hardware at ni.com/support/daq/versions.

How to calibrate and test the hardware:

To calibrate the hardware: in MAX, right-click the model of the card you installed and choose Self-Calibration; the system will calibrate the DAQ card once at the current temperature.

To test the hardware: in MAX, right-click the model of the card you installed and choose Test Panels, pick the item you want to test, and connect the signals according to the pinout diagram. We recommend testing AI, AO, DI, and the Counter separately.

Pinout diagram: under DAQmx in MAX, find the model of the card you installed, right-click it, and choose Device Pinout; you can then wire up your measurement according to the pinout diagram.

Input Configuration: the wiring modes are generally Differential, RSE, and NRSE, of which Differential is the most accurate, but this mode uses up two channels at a time. In general, choosing Differential

Three Ways to Drive a Data Acquisition Card in LabVIEW

Author: EEFOCUS  Source: EDN China

I. Introduction

In recent years, instrument-oriented software development platforms such as NI's LabVIEW have matured and become commercial products. On a personal computer equipped with dedicated or general-purpose plug-in hardware and such a development platform, users can design and build all kinds of test, analysis, and measurement-and-control systems to their own requirements. Because LabVIEW offers a graphical programming language that matches the way engineers think, with a rich graphical interface and a large library of built-in analysis subroutines, it is very convenient to use, and personal instrumentation has advanced to a stage where users themselves can design and develop instruments.

Since it is engineers themselves who write and call software to implement instrument functions, the software becomes the key part of the instrument. Such personal instruments are therefore called virtual instruments, and the technology by which users mainly design and build instruments themselves is called Virtual Instrumentation Technology. Virtual instruments have short development cycles, low cost, friendly interfaces, convenient operation, and high reliability; they can give measurement instruments a degree of intelligence and can share the rich software and hardware resources of the PC, making them an important direction in today's instrument industry.

The typical form of a virtual instrument inserts data acquisition cards into the expansion slots of a desktop PC's motherboard and connects them to the measured signals or instruments outside the PC to form a test and control system. However, the cards sold by NI that directly support LabVIEW are very expensive, which severely limits the use of LabVIEW to develop virtual instrument systems. How to drive other, lower-priced data acquisition cards from LabVIEW has therefore become a key problem for many domestic users.

II. Three Ways to Use Domestic Data Acquisition Cards in LabVIEW

The author summarizes below three methods, drawn from engineering practice in recent years, for driving general-purpose data acquisition cards from LabVIEW. The discussion uses a commercially available 8-channel, 12-bit A/D card as an example. Let the card's base address be base = 0x100. In C, the instruction that selects signal channel ch is _outp(base, ch), the instruction that starts the A/D conversion is _inp(base), and after sampling and quantization the high 4 bits of the 12-bit result are stored at base+2 and the low 8 bits at base+3.

1. Programming directly with LabVIEW's In Port and Out Port icons

The In Port and Out Port icons under Advanced \ Memory on LabVIEW's Functions palette have the same function as _inp and _outp, so they can be used to draw the block diagram and write the driver for this A/D card. Figure 1 shows the LabVIEW block diagram for scanning N channels and acquiring n data points from each; the LabVIEW timing icon controls the scan rate.

Continuously Variable Transmission Drive System, Shift Control System, and Method for Electric Vehicles: Design Scheme

A continuously variable transmission (CVT) drive system, shift control system, and control method in the field of electric vehicle technology, comprising: a first drive motor, a planetary gear train, a second drive motor, and a third output mechanism. The planetary gear train has a ring gear, a planet carrier, a sun gear, and several planet gears; the planet gears are evenly distributed circumferentially on the planet carrier; one of the planet carrier and the ring gear meshes with the output shaft of the second drive motor while the other meshes with the third output mechanism; the ring gear meshes with the planet gears; and the sun gear is connected to the output of the first drive motor and meshes with the planet gears. The design uses a planetary reduction mechanism: by having the second drive motor drive the planet carrier or the ring gear, the required reduction ratio is obtained, giving the electric vehicle continuously variable speed and automatic shifting.

Claims

1. A continuously variable transmission drive system for electric vehicles, characterized in that it comprises: a first drive motor, a planetary gear train, a second drive motor, and a third output mechanism, wherein the planetary gear train has a ring gear, a planet carrier, a sun gear, and several planet gears; the planet gears are evenly distributed circumferentially on the planet carrier; one of the planet carrier and the ring gear meshes with the output shaft of the second drive motor while the other meshes with the third output mechanism; the ring gear meshes with the planet gears; and the sun gear is connected to the output of the first drive motor and meshes with the planet gears.

2. The CVT drive system of claim 1, wherein the ring gear, planet carrier, and sun gear are all sleeved on the main shaft.

3. The CVT drive system of claim 1, wherein the third output mechanism has a wheel axle fixedly connected to the drive shaft of the electric vehicle.

4. The CVT drive system of claim 2, wherein the first drive motor comprises a rotor assembly and a wound stator, the rotor assembly having an inner rotor sleeve with the sun gear fixed at one end.

5. The CVT drive system of claim 2, wherein the output of the first drive motor is connected to the sun gear through a reduction mechanism, the reduction mechanism being any one of a belt, chain, or gear transmission.

6. A shift control system for the CVT drive system of any preceding claim, characterized in that it comprises: a vehicle computer, a brake sensor, a shift sensor, a first drive motor control unit, a second drive motor control unit, a first speed sensor, and a second speed sensor, wherein:
the brake sensor is connected to the vehicle computer and outputs a braking signal;
the shift sensor is connected to the vehicle computer and outputs an acceleration or deceleration signal;
the vehicle computer is connected to the first and second drive motor control units and transmits the control information of the two drive motors;
the first speed sensor is connected to the vehicle computer and outputs the speed of the first drive motor;
the second speed sensor is connected to the vehicle computer and outputs the speed of the second drive motor.

7. The shift control system of claim 6, further comprising: a first current sensor, a second current sensor, a first voltage sensor, a second voltage sensor, a first torque sensor, and a second torque sensor, wherein:
the first torque sensor is connected to the vehicle computer and outputs the torque of the first drive motor;
the second torque sensor is connected to the vehicle computer and outputs the torque of the second drive motor;
the first current sensor and first voltage sensor are connected to the first drive motor control unit and output, respectively, the input current and input voltage of the first drive motor;
the second current sensor and second voltage sensor are connected to the second drive motor control unit and output, respectively, the input current and input voltage of the second drive motor.

8. A shift control method for electric vehicles based on the control system of claim 6 or 7, characterized by the following steps:
S1: first determine whether the input signal is a braking signal; if so, stop the first and second drive motors and bring the vehicle to a halt; otherwise treat it as a shift signal;

Installing and Using the Advantech USB Data Acquisition Card

Installing and Using an Advantech Data Acquisition Card with LabVIEW

Card model: USB-4704. The goal is to acquire data from the Advantech card in LabVIEW.

Section 1: Installing the Advantech device manager (DAQNavi SDK)

Preparation: install LabVIEW first, then proceed with the steps below.

Step 1: Install Advantech's DAQ device-management package, the DAQNavi SDK.
1. Double-click the "" file; an installation dialog appears. Select the first item, "Update and DAQNavi", and click "Next". Click "Next":

Check the options as shown at the upper left and click "Next". Click "Next" again; a dialog like the one below appears, indicating that installation is in progress. Please wait patiently.

Wait patiently for the installation to finish. Afterwards, open "Programs" in the operating system; the program list should contain an "Advantech Automation" entry, and expanding it should show "DAQNavi", as in the figure below. Click the "Advantech Navigator" entry in the figure to open Advantech's device manager dialog, shown below. The "Device" pane on the left lists all acquisition cards connected to this machine; you can manage and test them here. For details on testing, see the help documentation.

Step 2: Installing the USB-4704 card driver
1. Double-click "" to install;
2. After installation, connect the card to the PC (plug one end of the USB cable into the card and the other end into a USB port on the computer); the system will install the card's driver automatically and recognize the card.
3. Check whether the card installed successfully: first, see whether the LED on the card plugged into the PC is green; second, open "DAQNavi" as shown below and check whether "USB-4704" appears in the device list.

Step 3: Adding a simulated card (Demo Device) to the Advantech device list
If you do not have a physical card, you can add a demo device for simulated testing and data-acquisition programming practice. How do you add one? As shown below, click "Advantech Automation" -> DAQNavi -> Add Demo Device.

PCI-8344A Data Acquisition Card Driver Notes

Notes on the PCI-8344A Driver, Version 1.2

I. Scope
1. Works on Windows 98, 2000, and XP.
2. Programming works with VC, VB, Delphi, and most other programming languages.

II. Differences from the previous driver version
1. Some error codes were added.
2. Function names now all carry the prefix "ZT8344A".
3. Passing parameters through structs has been abandoned.

III. Driver function parameters
The authoritative reference is the PCI8344A.h file shipped with this driver version. PCI8344A.h is a plain-text file and can be opened with WordPad or Word. Tip: opening it in VC or UltraEdit colors the comments and keywords, which helps readability.

IV. Programming approach for continuous A/D acquisition
1. During program initialization, call ZT8344A_OpenDevice to open the device; call it only once.
2. Call ZT8344A_DisableAD to disable the A/D;
   call ZT8344A_ClearHFifo to clear the hardware buffer (HFIFO);
   call ZT8344A_ClearSFifo to clear the software buffer (SFIFO);
   call ZT8344A_OpenIRQ to enable the HFIFO half-full interrupt;
   call ZT8344A_AIinit to do the A/D initialization.
3. In a loop, keep calling ZT8344A_GetSFifoDataCount to get the number of samples in the SFIFO; allocate an array and pass it to ZT8344A_AISFifo to receive the data, then save the data to a file or display it directly. Note: the SFIFO's default size is 819200. You must read continuously so the SFIFO has room for new data arriving from the HFIFO; if the number of valid samples in the SFIFO approaches 819200, the whole A/D process stops. To restart acquisition you must repeat steps 2-3.
4. Call ZT8344A_CloseIRQ to stop the acquisition process.
5. Call ZT8344A_CloseDevice before the program exits.

Tips:
1. In this driver version, card numbering starts at 1.
2. If a function returns -1, call ZT8344A_ClearLastErr to get the error code, then look up its meaning in PCI8344A.h.
3. Once the error code is nonzero, you must call ZT8344A_ClearLastErr to clear it before the functions will work normally again.
