Model-Based Design of Runtime Adaptation Strategies
An explanation of Model-Based Decentralized Policy Optimization: Model-Based Decentralized Policy Optimization (MPO) is a powerful technique in the field of artificial intelligence and machine learning that aims to optimize policies in a decentralized manner using models. In this article, we provide a comprehensive explanation of MPO, its advantages, and its applications.

1. Introduction to MPO
MPO is an algorithmic framework that leverages the power of models to optimize policies. It is designed to tackle complex decision-making problems that involve multiple agents or actors, where each agent's decision affects the overall outcome. MPO allows these agents to learn and improve their policies simultaneously through the use of models.

2. Key Components of MPO
MPO consists of several key components:
a. Model Learning: Agents learn a model that captures the dynamics of the environment and the interactions among different agents.
b. Policy Optimization: Agents optimize their policies based on the learned models. This optimization can be done using various techniques such as reinforcement learning or evolutionary algorithms.
c. Decentralized Execution: Agents execute their optimized policies in a decentralized manner, making decisions based on their local observations and the learned models.

3. Advantages of MPO
MPO offers several advantages over other policy optimization techniques:
a. Scalability: MPO can handle problems with a large number of agents. Its decentralized nature allows agents to learn and optimize their policies independently, leading to efficient computation.
b. Robustness: The use of models gives agents a better understanding of the environment and of the interactions among agents. This leads to more robust policies that can adapt to changes in the environment or in the behavior of other agents.
c. Adaptability: MPO allows agents to adapt their policies based on new information or changes in the environment. The use of models enables agents to predict the consequences of their actions and adjust their policies accordingly.

4. Applications of MPO
MPO has found applications in various domains, including robotics, multi-agent systems, and autonomous vehicles. Some specific applications include:
a. Cooperative Control: MPO can optimize the behavior of a group of cooperative robots or agents pursuing a common goal. Each agent can learn its own model and optimize its policy independently, leading to efficient cooperation.
b. Traffic Management: MPO can optimize traffic signal timings in a decentralized manner. Each traffic signal can learn its model and optimize its policy based on local traffic conditions, resulting in smoother traffic flow.
c. Resource Allocation: MPO can optimize the allocation of resources in a decentralized system, such as distributing electricity across a power grid or allocating bandwidth in a communication network.

5. Conclusion
Model-Based Decentralized Policy Optimization is a powerful technique for optimizing policies in decentralized systems. By leveraging models, MPO allows agents to learn, optimize, and execute their policies independently, leading to scalable, robust, and adaptable decision-making. Its applications span a wide range of domains, making it a valuable tool in artificial intelligence and machine learning.
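The model-learning / policy-optimization / decentralized-execution loop described above can be sketched in a toy setting. Everything here (the scalar environment, the three-action set, the moving-average model update) is invented purely for illustration; it is not a published MPO algorithm, just the shape of the loop: each agent fits a local model of its own transitions and then acts greedily on that model, with no central coordinator.

```python
class Agent:
    """Toy agent for a model-based decentralized loop (illustrative only;
    the environment, action set, and update rule are invented for this sketch)."""

    def __init__(self, actions, lr=0.5):
        self.actions = actions
        self.model = {a: 0.0 for a in actions}  # learned state change per action
        self.lr = lr
        self.state = 0.0

    def update_model(self, action, observed_delta):
        # Model learning: exponential moving average of observed transitions.
        self.model[action] += self.lr * (observed_delta - self.model[action])

    def act(self, target):
        # Policy optimization: greedily pick the action whose *predicted*
        # next state is closest to the target, using only the local model.
        return min(self.actions,
                   key=lambda a: abs(self.state + self.model[a] - target))


def true_dynamics(action):
    return action  # hidden from the agents; they only see its outcomes


agents = [Agent(actions=[-1.0, 0.0, 1.0]) for _ in range(3)]

# Each agent first probes every action once to seed its model ...
for ag in agents:
    for a in ag.actions:
        ag.update_model(a, true_dynamics(a))

# ... then all agents run decentralized execution toward a shared target.
target = 5.0
for _ in range(50):
    for ag in agents:
        a = ag.act(target)
        delta = true_dynamics(a)
        ag.update_model(a, delta)
        ag.state += delta

print([round(ag.state) for ag in agents])  # → [5, 5, 5]
```

Each agent converges to the target independently, which is the scalability argument made above: no agent ever inspects another agent's model or policy.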
Model-Based Inversion of Dynamic Range Compression
Stanislaw Gorlow, Student Member, IEEE, and Joshua D. Reiss, Member, IEEE

Abstract—In this work it is shown how a dynamic nonlinear time-variant operator, such as a dynamic range compressor, can be inverted using an explicit signal model. By knowing the model parameters that were used for compression, one is able to recover the original uncompressed signal from a "broadcast" signal with high numerical accuracy and very low computational complexity. A compressor-decompressor scheme is worked out and described in detail. The approach is evaluated on real-world audio material with great success.

Index Terms—Dynamic range compression, inversion, model-based, reverse audio engineering.

I. INTRODUCTION

Sound or audio engineering is an established discipline employed in many areas that are part of our everyday life without us taking notice of it. But not many know how the audio was produced. If we take sound recording and reproduction or broadcasting as an example, we may imagine that a prerecorded signal from an acoustic source is altered by an audio engineer in such a way that it corresponds to certain criteria when played back. The number of these criteria may be large and usually depends on the context. In general, the said alteration of the input signal is a sequence of numerous forward transformations, the reversibility of which is of little or no interest. But what if one wished to do exactly this, that is, to reverse the transformation chain, and what is more, in a systematic and repeatable manner? The research objective of reverse audio engineering is twofold: to identify the transformation parameters given the input and the output signals, as in [1], and to regain the input signal that goes with the output signal given the transformation parameters. In both cases, an explicit signal model is mandatory. The latter case might seem trivial, but only if the applied transformation is linear and orthogonal and as such perfectly invertible. Yet the forward
transform is often neither linear nor invertible. This is the case for dynamic range compression (DRC), which is commonly described by a dynamic nonlinear time-variant system.

Manuscript received December 05, 2012; revised February 28, 2013; accepted February 28, 2013. Date of publication March 15, 2013; date of current version March 29, 2013. This work was supported in part by the "Agence Nationale de la Recherche" within the scope of the DReaM project (ANR-09-CORD-006) as well as by the laboratory with which the first author is affiliated as part of the "mobilité juniors" program. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Woon-Seng Gan. S. Gorlow is with the Computer Science Research Laboratory of Bordeaux (LaBRI), CNRS, Bordeaux 1 University, 33405 Talence Cedex, France (e-mail: stanislaw.gorlow@labri.fr). J. D. Reiss is with the Centre for Digital Music (C4DM), Queen Mary, University of London, London E1 4NS, U.K. (e-mail: josh.reiss@). Digital Object Identifier 10.1109/TASL.2013.2253099

The classical linear time-invariant (LTI) system theory does not apply here, so a tailored solution to the problem at hand must be found instead. At this point, we also like to highlight the fact that neither Volterra nor Wiener model approaches [2]–[4] offer a solution, and neither do describing functions [5], [6]. These are useful tools when identifying a time-invariant or a slowly varying nonlinear system, or when analyzing the limit cycle behavior of a feedback system with a static nonlinearity. A method to invert dynamics compression is described in [7], but it requires an instantaneous gain value to be transmitted for each sample of the compressed signal. To provide a means to control the data rate, the gain signal is subsampled and also entropy coded. This approach is highly inefficient, as it does not rely on a gain model and is extremely generic. On the other hand, transmitting the uncompressed signal in conjunction with a few typical compression parameters like
threshold, ratio, attack, and release would require a much smaller capacity and yield the best possible signal quality with regard to any conceivable measure. A more realistic scenario is when the uncompressed signal is not available on the consumer side. This is usually the case for studio music recordings and broadcast material, where the listener is offered a signal that is meant to sound "good" to everyone. However, the loudness war [8] has resulted in over-compressed audio material. Over-compression makes a song lose its artistic features, like excitingness or liveliness, and desensitizes the ear due to the louder volume. There is a need to restore the original signal's dynamic range and to experience audio free of compression.

In addition to the normalization of the program's loudness level, the Dolby solution [9], [10] also includes dynamic range expansion. The expansion parameters that help reproduce the original program's dynamic range are tuned on the broadcaster side and transmitted as metadata together with the broadcast signal. This is a very convenient solution for broadcasters, not least because the metadata is quite compact. Dynamic range expansion is yet another forward transformation, however, rather than a true inversion.

Evidently, none of the previous approaches satisfy the reverse engineering objective of this work. The goal of the present work, hence, is to invert dynamic range compression, which is a vital element not only in broadcasting but also in mastering. The paper is organized as follows. Section II provides a brief introduction to dynamic range compression and presents the compressor model upon which our considerations are based.
The data model, the formulation of the problem, and the pursued approach are described next in Section III. The inversion is discussed in detail in Section IV. Section V illustrates how an integral step of the inversion procedure, namely the search for the zero-crossing of a nonlinear function, can be solved in an iterative manner by means of linearization. Some other compressor features are discussed in Section VI. The complete algorithm is given in the form of pseudocode in Section VII, and its performance is evaluated for different compressor settings in Section VIII. Conclusions are drawn in Section IX, where some directions for future work are mentioned.

Fig. 1. Basic broadband compressor model (feed forward).

II. DYNAMIC RANGE COMPRESSION

Dynamic range compression, or simply "compression," is a sound processing technique that attenuates loud sounds and/or amplifies quiet sounds, which in consequence leads to a reduction of an audio signal's dynamic range. The latter is defined as the difference between the loudest and quietest sound, measured in decibels. In the following, we will use the word "compression" with "downward" compression in mind, though the discussed approach is likewise applicable to "upward" compression. Downward compression means attenuating sounds above a certain threshold while leaving sounds below the threshold unchanged. A sound engineer might use a compressor to reduce the dynamic range of source material for purposes of aesthetics, intelligibility, or recording or broadcast limitations.

Fig. 1 illustrates the basic compressor model from [11, ch. 2], amended by a switchable RMS/peak detector in the side chain, making it compatible with the compressor/limiter model from [12, p. 106]. We will hereafter restrict our considerations to this basic model, as the purpose of the present work is to demonstrate a general approach rather than a solution to a specific problem. First, the input signal is split and a copy is sent to the side chain. The detector then
calculates the magnitude or level of the side-chain signal, using the root mean square (RMS) or peak as a measure of how loud a sound is [12, p. 107]. The detector's temporal behavior is controlled by the attack and release parameters. The sound level is compared with the threshold level and, in case it exceeds the threshold, a scale factor is calculated which corresponds to the ratio of input level to output level. The knee parameter determines how quickly the compression ratio is reached. At the end of the side chain, the scale factor is fed to a smoothing filter that yields the gain. The response of the filter is controlled by another set of attack and release parameters. Finally, the gain control applies the smoothed gain to the input signal and adds a fixed amount of makeup gain to bring the output signal to a desired level. Such a broadband compressor operates on the input signal's full bandwidth, treating all frequencies from zero through the highest frequency equally. A detailed overview of all side-chain controls of a basic gain computer is given in [11, ch. 3].

III. DATA MODEL, PROBLEM FORMULATION, AND PROPOSED SOLUTION

A. Data Model and Problem Formulation

The employed data model is based on the compressor from Fig. 1. The following simplifications are additionally made: the knee parameter ("hard" knee) and the makeup gain (fixed at 0 dB) are ignored. The compressor is defined as a single-input single-output (SISO) system; that is, both the input and the output are single-channel signals. What follows is a description of each block by means of a dedicated function. The RMS/peak detector as well as the gain computer build upon a first-order (one-pole) lowpass filter. The sound level or envelope of the input signal is obtained by (1), where one detector setting yields an RMS detector and the other a peak detector. The non-zero smoothing factor may take on different values depending on whether the detector is in the attack or release phase. The condition for the level detector to enter the attack phase and to choose
the attack smoothing factor over the release one is (2). A formula that converts a time constant into a smoothing factor, as a function of the sampling frequency, is given in [12, p. 109]. The static nonlinearity in the gain computer is usually modeled in the logarithmic domain as a continuous piecewise-linear function (3), parameterized by the slope and the threshold in decibels. The slope is further derived from the desired compression ratio according to (4). Equation (3) is equivalently expressed in the linear domain as (5), which yields the linear scale factor before filtering. The smoothed gain is then calculated as the exponentially-weighted moving average (6), where the decision for the gain computer to choose the attack smoothing factor instead of the release one is subject to (7). The output signal is finally obtained by multiplying the above gain with the input signal, as in (8). Due to the fact that the gain is strictly positive, it follows that the output preserves the sign of the input (9). In consequence, it is convenient to factorize the input signal as a product of the sign and the modulus according to (10).

The problem at hand is formulated in the following manner: given the compressed signal and the model parameters, recover the modulus of the original signal. For a more intuitive use, the smoothing factors may be replaced by the corresponding time constants. The meaning of each parameter is listed below:
- the threshold in dB,
- the compression ratio (dB:dB),
- the detector type (RMS or peak),
- the attack time of the envelope filter in ms,
- the release time of the envelope filter in ms,
- the attack time of the gain filter in ms,
- the release time of the gain filter in ms.

B. Proposed Solution

The output of the side chain, that is the gain, may be written as (11), where a nonlinear dynamic operator maps the modulus of the input signal onto a sequence of instantaneous gain values according to the compressor model. Using (11), (8) can be solved for the input, subject to invertibility of the operator. In order to solve the above equation one requires the knowledge
of the uncompressed signal, which is unavailable. However, since the gain is a function of the input, we can express the output as a function of one independent variable, and in that manner we obtain an equation with a single unknown, (12), where the composite operator represents the entire compressor. If this operator is invertible, i.e., bijective, the input can be obtained from the output by (13). And yet, since the input is unknown, the condition for applying decompression must be predicted from the available quantities, and hence one needs the condition for toggling between the attack and release phases. Depending on the quality of the prediction, the recovered modulus may differ somewhat from the original modulus at transition points, as expressed in (14). In the next section it is shown how such an inverse compressor, or decompressor, is derived.

IV. INVERSION OF DYNAMIC RANGE COMPRESSION

A. Characteristic Function

For simplicity, we choose the instantaneous envelope value instead of the input modulus as the independent variable in (12). The relation between the two is given by (1). Combining (6) and (8) when compression is active yields (15) and (16). From (1) we obtain (17), or equivalently (18). Moreover, (18) has a unique solution if the involved mappings are themselves invertible. Moving the expression on the left-hand side over to the right-hand side, we may define (19), which shall be termed the characteristic function. The root or zero-crossing of this function hence represents the sought-after envelope value. Once it is found (see Section V), the current values of the state variables are updated as per (20), and the decompressed sample is then calculated as (21).

B. Attack-Release Phase Toggle

1) Envelope Smoothing: In case a peak detector is in use, the smoothing factor takes on two different values. The condition for the attack phase is then given by (2) and is equivalent to (22). Assuming that the past value of the envelope is known at the current time instant, what needs to be done is to express the unknown current value in terms of known quantities such that the above equation still holds true. If the attack smoothing factor is rather small, or equivalently if the attack time is sufficiently large (on the order of milliseconds at 44.1-kHz sampling), one term in (15) is negligible, which approximates (15) as (23). Solving (23) and plugging the result into (22), we obtain (24).
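The bodies of equations (1)–(8) did not survive extraction, but the surrounding text fully describes the forward model: a one-pole envelope detector with separate attack/release smoothing, a hard-knee piecewise-linear static curve in the log domain with slope derived from the ratio, one-pole gain smoothing, and a final multiply. The sketch below is a minimal textbook rendering of that chain, not the paper's exact code: the parameter names, the defaults, and the time-constant-to-smoothing-factor mapping (a standard exponential form) are our assumptions.

```python
import math

def compress(x, fs, threshold_db=-32.0, ratio=4.0,
             t_att=0.005, t_rel=0.050, detector="peak"):
    """Feed-forward broadband compressor sketch (hard knee, no makeup gain).

    Mirrors the simplified model of Section III: envelope smoothing (1) with
    an attack/release toggle (2), static curve (3)-(5) with slope 1 - 1/R (4),
    gain smoothing (6) with toggle (7), and output (8). All names/defaults
    here are illustrative assumptions, not the paper's notation.
    """
    def alpha(t):
        # Standard time-constant -> one-pole smoothing-factor conversion.
        return 1.0 - math.exp(-1.0 / (t * fs))

    a_att, a_rel = alpha(t_att), alpha(t_rel)
    slope = 1.0 - 1.0 / ratio
    env, gain, y = 0.0, 1.0, []
    for s in x:
        level = s * s if detector == "rms" else abs(s)
        a = a_att if level > env else a_rel          # detector toggle (2)
        env += a * (level - env)                     # envelope smoothing (1)
        meas = math.sqrt(env) if detector == "rms" else env
        level_db = 20.0 * math.log10(max(meas, 1e-12))
        over = level_db - threshold_db
        target_db = -slope * over if over > 0 else 0.0   # static curve (3)
        target = 10.0 ** (target_db / 20.0)              # linear domain (5)
        a = a_att if target < gain else a_rel        # gain-filter toggle (7)
        gain += a * (target - gain)                  # gain smoothing (6)
        y.append(gain * s)                           # output (8)
    return y
```

A constant full-scale input ends up attenuated toward the steady-state gain implied by the static curve, while a signal well below threshold passes through unchanged, which is exactly the downward-compression behavior described in Section II.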
If (24) holds true, the detector is assumed to be in the attack phase.

2) Gain Smoothing: Just like the peak detector, the gain smoothing filter may be in either the attack or release phase. The necessary condition for the attack phase in (7) may also be formulated as (25). But since the current envelope value is unknown, we need to substitute it in the above inequality by something that is known. With this in mind, (15) is rewritten as (26). Provided that the smoothing factor is small, and due to boundedness, the expression in square brackets in (26) is smaller than one, and thus (27) holds during attack. Substituting by means of (20) and solving (27) results in (28). If the unknown in (25) is substituted by the expression on the right-hand side of (28), (25) still holds true, so the following sufficient condition is used to predict the attack phase of the gain filter: (29). Note that the values of all variables are known whenever (29) is evaluated.

C. Envelope Predictor

An instantaneous estimate of the envelope value is required not only to predict when compression is active, formally according to (5), but also to initialize the iterative search algorithm in Section V. Resorting once more to (15), it can be noted that in the opposite case the remaining term dominates, and so (30) follows. The sound level of the input signal is therefore given by (31), which must be greater than the threshold for compression to set in, whereas the smoothing factors are selected based on (24) and (29), respectively.

D. Error Analysis

Consider the envelope being estimated according to (32). The normalized error is then given by (33) and (34). The attack and release smoothing factors bound this error during the attack and release phases, respectively. The instantaneous gain can also be expressed as (35), where the exponent reflects the runtime in samples. Using (35) in (34), the magnitude of the error is given by (36) and (37). In one limiting case, (36) reduces to (38), whereas in the other, (37) converges to infinity, as in (39). So, the error is smaller for a large smoothing factor or a short time constant. The smallest possible error then again depends on the current and the previous value of the gain. The error accumulates over successive compressed samples.

Fig. 2. Graphical illustration of the iterative search for the zero-crossing.

The difference between
consecutive gain values is signal dependent. The signal envelope fluctuates less, and is thus smoother, for a smaller smoothing factor or a longer time constant. The gain is also more stable when the compression ratio is low; for a ratio of 1:1 it is perfectly constant. The threshold has a negative impact on error propagation: the lower the threshold, the more the error depends on the gain trajectory, since more samples are compressed with different gain values. The RMS detector stabilizes the envelope more than the peak detector, which also reduces the error. Furthermore, since the attack time is usually shorter than the release time, the error due to the envelope filter is smaller during release, whereas the error due to the gain filter is smaller during attack. Finally, the error is expected to be larger at transition points between quiet and loud signal passages.

The above error may cause a decision in favor of a wrong smoothing factor in (24), e.g., attack instead of release. The decision error from (24) then propagates to (29). The error due to (32) is accentuated by (24), with the consequence that (29) is less reliable than (24); the total error in (29) scales accordingly. In regard to (31), reliability of the envelope's estimate is subject to validity of (24) and (29). A better estimate is obtained when the sound level detector and the gain filter are both in either the attack or the release phase.

V. NUMERICAL SOLUTION OF THE CHARACTERISTIC FUNCTION

An approximate solution to the characteristic function can be found, e.g., by means of linearization. The estimate from (31) may moreover serve as a starting point for an iterative search for an optimum. The criterion for optimality is chosen as the deviation of the characteristic function from zero, initialized as in (40). Thereupon, (19) may be approximated at a given point using the equation of a straight line, defined by its slope and intercept. The zero-crossing is characterized by (41), as shown in Fig. 2. The new estimate of the optimal envelope value is found as (42). If the new estimate is less optimal than the previous one, the iteration is stopped and the previous value is the final estimate. The iteration is also stopped if the deviation is smaller
than some small tolerance. In the latter case, the estimate has the optimal value with respect to the chosen criterion. Otherwise, the current point replaces the previous one after every step, and the procedure is repeated until the estimate has converged to a more optimal value. The proposed method is a special form of the secant method with a single initial value.

VI. GENERAL REMARKS

A. Stereo Linking

When dealing with stereo signals, one might want to apply the same amount of gain reduction to both channels to prevent image shifting. This is achieved through stereo linking. One way is to calculate the required amount of gain reduction for each channel independently and then apply the larger amount to both channels. The question which arises in this context is which of the two channels the gain was derived from. To resolve this ambiguity, one solution would be to signal which of the channels carries the applied gain. One could then decompress the marked sample and use its gain for the other channel. Although very simple to implement, this approach provokes an additional data rate of 44.1 kbps at 44.1-kHz sampling. A rate-efficient alternative that comes with a higher computational cost is realized in the following way.
First, one decompresses both the left and the right channel independently and in so doing obtains two estimates, one per channel, with subscripts denoting the left and the right channel, respectively. In a second step, one calculates the compressed values of the two estimates and selects the channel for which the recompressed value matches the observed sample. In a final step, one updates the remaining variables using the gain of the selected channel.

B. Lookahead

A compressor with a look-ahead function, i.e., with a delay in the main signal path as in [12, p. 106], uses past input samples as weighted output samples. Since some future input samples would then be required to invert the process, and these are unavailable, the inversion is rendered impossible. Input and output must thus be in sync for the approach to be applied.

C. Clipping and Limiting

Another point worth mentioning is that "hard" clipping and "brick-wall" limiting are special cases of compression with the attack time set to zero and the compression ratio set to infinity. The static nonlinearity in that particular case is a one-to-many mapping, which by definition is noninvertible.

VII. THE ALGORITHM

The complete algorithm is divided into three parts, each of them given as pseudocode below. Algorithm 1 outlines the compressor that corresponds to the model from Sections II–III. Algorithm 2 illustrates the decompressor described in Section IV, and the iterative search from Section V is finally summarized in Algorithm 3. One parameter represents the sampling frequency in kHz.

Algorithm 1. The compressor.

VIII. PERFORMANCE EVALUATION

A. Performance Metrics

To evaluate the inverse approach, the following quantities are measured: the root-mean-square error (RMSE) (43), given in decibels relative to full scale (dBFS); the perceptual similarity between the original and decompressed signal; and the execution time of the decompressor relative to real time (RT).
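Section V describes the search of Algorithm 3 as a special form of the secant method started from a single initial value, namely the envelope predictor's estimate (31). A generic sketch of such a linearized zero-crossing search is given below; the finite-difference seed `h`, the tolerance, and the iteration cap are our own choices, not values from the paper.

```python
def find_zero(f, x0, tol=1e-9, max_iter=50, h=1e-6):
    """Secant-style search for the zero-crossing of a characteristic function.

    Approximates f by the straight line through the two most recent points
    and steps to that line's root (cf. (41)-(42)); stops when |f| falls
    below tol or stops improving, as in the stopping rules of Section V.
    """
    x, fx = x0, f(x0)
    x_prev, f_prev = x0 + h, f(x0 + h)  # seed for the first secant slope
    for _ in range(max_iter):
        if abs(fx) < tol:
            break
        denom = fx - f_prev
        if denom == 0.0:
            break
        x_new = x - fx * (x - x_prev) / denom     # root of the secant line
        f_new = f(x_new)
        if abs(f_new) >= abs(fx):                 # no improvement: keep x
            break
        x_prev, f_prev, x, fx = x, fx, x_new, f_new
    return x
```

For instance, `find_zero(lambda x: x * x - 2.0, 1.0)` converges to the positive root of x² − 2 within a handful of iterations, illustrating the superlinear convergence that makes the per-sample search cheap.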
Furthermore, we present the percentage of compressed samples, the mean number of iterations until convergence per compressed sample, the error rate of the attack-release toggle for the gain smoothing filter, and finally the error rate of the envelope predictor. The perceptual similarity is assessed by PEMO-Q [13], [14]. The simulations are run in MATLAB on an Intel Core i5-520M CPU.

Algorithm 2. The decompressor.

Algorithm 3. The iterative search for the zero-crossing.

B. Computational Results

Fig. 3 shows the inverse output signal for a synthetic input signal using an RMS detector. The inverse signal is obtained from the compressed signal with a very small error in dBFS; it is visually indistinguishable from the original signal. Due to the fact that the signal envelope is constant most of the time, the error is noticeable only around transition points, which are few. The decompressor's performance is further evaluated for some commercial compressor presets. The used audio material consists of 12 items covering speech, sung voice, music, and jingles. All items are normalized to a common loudness level in LKFS [15]. The tolerance in the break condition of Algorithm 3 is set to a small constant. A detailed overview of compressor settings and performance figures is given in Tables I–II. The presented results suggest that the decompressed signal is perceptually indistinguishable from the original; the similarity value is flawless.
This was also confirmed by the authors through informal listening tests. As can be seen from Table II, the largest inversion error is associated with setting E and the smallest with setting B. For all five settings, the error is larger when an RMS detector is in use. This is partly due to the fact that the RMS characteristic has a stronger curvature in comparison to the peak characteristic. By redefining the distance in (40), it is possible to attain a smaller error for an RMS detector at the cost of a slightly longer runtime. In most cases, the envelope predictor works more reliably than the toggle switch between attack and release. It can also be observed that the choice of time constants seems to have little impact on the decompressor's accuracy. The major parameters that affect the decompressor's performance are the compression ratio and the detector type, while the threshold is evidently the predominant one: the RMSE strongly correlates with the threshold level.

Figs. 4–5 show the inversion error as a function of various time constants. These are in the range of typical attack and release times for a limiter (peak) or compressor (RMS) [12, pp. 109–110]. It can be observed that the inversion accuracy depends on the release time of the peak detector and not so much on its attack time, for both the envelope and the gain filter; see Figs. 4 and 5(b). For the envelope filter, all error curves exhibit a local dip around a release time of 0.5 s. The error increases steeply below that bound but moderately with larger values. In the proximity of 5 s, the error converges to a constant level in dBFS. With regard to the gain filter, the error behaves in a reverse manner: the curves in Fig. 5(b) exhibit a local peak around 0.5 s with a value of −180 dBFS. It can further be observed in Fig. 4(a) that one curve has a dip where the attack time is close to 1 ms, i.e., where the gap between attack and release times is minimal. This is also true for Fig. 4(c) and (d): the lowest error is where the attack and release times are identical. As a general rule, the error that is due to the attack-release switch is smaller for the gain filter in Fig. 5.
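All error figures in this section are RMSE values expressed in dB relative to full scale, per the metric (43) defined in Section VIII-A. A direct rendering of that metric (the function name is ours) is:

```python
import math

def rmse_dbfs(x, x_hat):
    """RMSE between the original and the decompressed signal, in dBFS.

    Implements RMSE = sqrt(mean((x - x_hat)^2)) converted to decibels
    relative to full scale via 20*log10; a perfect reconstruction maps
    to -inf dBFS.
    """
    mse = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    return 20.0 * math.log10(math.sqrt(mse)) if mse > 0 else float("-inf")
```

As a sanity check, a one-tenth-full-scale error on half the samples of a two-sample signal yields roughly −23 dBFS, and the value drops (improves) as the reconstruction error shrinks.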
Looking at Fig. 6 one can see that the error decreases with the threshold and increases with the compression ratio. At a ratio of 10:1 and beyond, the RMSE scales almost exclusively with the threshold. The lower the threshold, the stronger the error propagates between decompressed samples, which leads to a larger RMSE value. The RMS detector further augments the error because it stabilizes the envelope more than the peak detector. Clearly, the threshold level has the highest impact on the decompressor's accuracy.

Fig. 3. An illustrative example using an RMS amplitude detector with the detector time constant set to 5 ms, a threshold of −32 dBFS (dashed line in the upper right corner), a compression ratio of 4:1, and attack and release times of 1.6 ms and 17 ms, respectively.

TABLE I. Selected compressor settings.

TABLE II. Performance figures obtained for various audio material (12 items).

Fig. 4. RMSE as a function of typical attack and release times using a peak (upper row) or an RMS amplitude detector (lower row). In the left column, the attack time of the envelope filter is varied while the release time is held constant. The right column shows the reverse case. The time constants of the gain filter are fixed at zero. In all four cases, threshold and ratio are fixed at −32 dBFS and 4:1, respectively.

Fig. 5. RMSE as a function of typical attack and release times using a peak (upper row) or an RMS amplitude detector (lower row). In the left column, the attack time of the gain filter is varied while the release time is held constant. The right column shows the reverse case. The time constants of the envelope filter are fixed at zero. In all four cases, threshold and ratio are fixed at −32 dBFS and 4:1, respectively.

Fig. 6. RMSE as a function of threshold relative to the signal's average loudness level (left column) and compression ratio (right column) using a peak (upper row) or an RMS amplitude detector (lower row). The time constants are held fixed.

IX. CONCLUSION AND OUTLOOK

This work examines the problem of finding an inverse to a nonlinear dynamic operator such as a digital compressor. The proposed approach is characterized by the fact that it uses an explicit signal model to solve the problem. To find the "dry" or uncompressed signal with high accuracy, it is sufficient to know the model parameters. The parameters can, e.g., be sent together with the "wet" or compressed signal in the form of metadata, as is the case with Dolby Volume and ReplayGain [16]. A new bit-stream format is not mandatory, since many digital audio standards, like WAV or MP3, provide means to tag the audio content with "ancillary" data. With the help of the metadata, one can then reverse the compression applied after mixing or before broadcast. This allows the end user to have control over the amount of compression, which may be preferred because the sound engineer has no control over the playback environment or the listener's individual taste.

When the compressor parameters are unavailable, they can possibly be estimated from the compressed signal. This may thus be a direction for future work. Another direction would be to apply the approach to more sophisticated models that include a "soft" knee, parallel and multiband compression, or gain smoothing in the logarithmic domain; see [11], [12], [17], [18] and references therein. In conclusion, we want to draw the reader's attention to the fact that the presented figures suggest that the decompressor is realtime capable, which can pave the way for exciting new applications. One such application could be the restoration of dynamics in over-compressed audio, or else the accentuation of transient components, see [19]–[21], by an adaptively tuned decompressor that has no prior knowledge of the compressor parameters.

ACKNOWLEDGMENT

This work was carried out in part at the Centre for Digital Music (C4DM), Queen Mary, University of London.

REFERENCES

[1] D. Barchiesi and J. Reiss, "Reverse engineering of a mix," J. Audio Eng. Soc., vol. 58, pp. 563–576, 2010.
[2] T. Ogunfunmi, Adaptive Nonlinear System Identification: The
Volterra and Wiener Model Approaches. New York, NY, USA: Springer Science+Business Media, 2007, ch. 3.
[3] Y. Avargel and I. Cohen, "Adaptive nonlinear system identification in the short-time Fourier transform domain," IEEE Trans. Signal Process., vol. 57, no. 10, pp. 3891–3904, Oct. 2009.
[4] Y. Avargel and I. Cohen, "Modeling and identification of nonlinear systems in the short-time Fourier transform domain," IEEE Trans. Signal Process., vol. 58, no. 1, pp. 291–304, Jan. 2010.
[5] A. Gelb and W. E. Vander Velde, Multiple-Input Describing Functions and Nonlinear System Design. New York, NY, USA: McGraw-Hill, 1968, ch. 1.
[6] P. W. J. M. Nuij, O. H. Bosgra, and M. Steinbuch, "Higher-order sinusoidal input describing functions for the analysis of non-linear systems with harmonic responses," Mech. Syst. Signal Process., vol. 20, pp. 1883–1904, 2006.
[7] Lachaise and L. Daudet, "Inverting dynamics compression with minimal side information," in Proc. DAFx, 2008, pp. 1–6.
[8] E. Vickers, "The loudness war: Background, speculation and recommendations," in Proc. AES Conv. 129, Nov. 2010.
[9] Dolby Digital and Dolby Volume Provide a Comprehensive Loudness Solution, Dolby Laboratories, 2007.
[10] Broadcast Loudness Issues: The Comprehensive Dolby Approach, Dolby Laboratories, 2011.
[11] R. Jeffs, S. Holden, and D. Bohn, Dynamics Processors—Technology & Application Tips, Rane Corporation, 2005.
[12] U. Zölzer, DAFX: Digital Audio Effects, 2nd ed. Chichester, West Sussex, U.K.: Wiley, 2011, ch. 4.
[13] R. Huber and B. Kollmeier, "PEMO-Q—A new method for objective audio quality assessment using a model of auditory perception," IEEE Trans. Audio Speech Lang. Process., vol. 14, no. 6, pp. 1902–1911, Nov. 2006.
[14] HörTech gGmbH, PEMO-Q, version 1.3 [Online]. Available: http://www.hoertech.de/web_en/produkte/pemo-q.shtml
[15] ITU-R, Algorithms to Measure Audio Programme Loudness and True-Peak Audio Level, Mar. 2011, Rec. ITU-R BS.1770-2.
[16] Hydrogenaudio, ReplayGain [Online]. Available: http://wiki.hydrogenaudio.org/index.php?title=ReplayGain, Feb. 2013.
[17] J. C. Schmidt and J. C. Rutledge, "Multichannel dynamic range
com-pression for music signals,”in Proc.IEEE ICASSP,1996,vol.2,pp.1013–1016.[18]D.Giannoulis,M.Massberg,and J.D.Reiss,“Digital dynamic rangecompressor design—A tutorial and analysis,”J.Audio Eng.Soc.,vol.60,pp.399–408,2012.[19]M.M.Goodwin and C.Avendano,“Frequency-domain algorithms foraudio signal enhancement based on transient modification,”J.Audio Eng.Soc.,vol.54,pp.827–840,2006.[20]M.Walsh,E.Stein,and J.-M.Jot,“Adaptive dynamics enhancement,”in Proc.AES Conv.130,May2011.[21]M.Zaunschirm,J.D.Reiss,and A.Klapuri,“A sub-band approachto modification of musical transients,”Comput.Music J.,vol.36,pp.23–36,2012.。
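The paper above works with the standard feed-forward compressor model: the detected signal level is mapped through a static threshold/ratio curve, and the resulting gain trajectory is smoothed with separate attack and release time constants. A minimal sketch of that gain computer follows; the hard knee, log-domain levels, a −32 dBFS threshold with a 4:1 ratio (mirroring the figure settings), and the one-pole coefficients are illustrative assumptions, not the paper's exact values.

```python
def static_gain_db(level_db, threshold_db=-32.0, ratio=4.0):
    """Hard-knee static curve: above threshold the output level rises at
    slope 1/ratio, so the gain (in dB) is (threshold - level) * (1 - 1/ratio)."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: unity gain
    return (threshold_db - level_db) * (1.0 - 1.0 / ratio)

def smooth_gain(gains_db, attack_coef=0.9, release_coef=0.999):
    """One-pole smoothing of the gain trajectory with separate attack
    (gain reduction increasing) and release (gain recovering) coefficients;
    a coefficient closer to 1 means a slower time constant."""
    out, state = [], 0.0
    for g in gains_db:
        coef = attack_coef if g < state else release_coef
        state = coef * state + (1.0 - coef) * g
        out.append(state)
    return out
```

A 4:1 ratio on a signal 12 dB above threshold yields 9 dB of gain reduction, which the smoothed trajectory approaches at the attack rate.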
A BERT-base model for medical text pretraining — overview and explanation

1. Introduction

1.1 Overview
Medical text pretraining is a research area that has emerged in recent years. It uses deep learning models — in particular neural natural language processing models — to pretrain on large-scale medical text corpora and thereby extract knowledge and patterns specific to the medical domain. Within this area, the BERT-base model is a widely used pretrained model with strong text representation and generalization capabilities, and it can be applied effectively to the processing and analysis of medical text. This article focuses on the principles and applications of medical text pretraining and of the BERT-base model, discusses their potential uses and development prospects in the medical domain, and aims to give readers a comprehensive reference for understanding and mastering this technology.

1.2 Article structure
This section lays out the organization and content of the whole article. The introduction first surveys the background and importance of medical text pretraining and then presents the basic principles and characteristics of the BERT-base model. Next, we examine the applications of medical text pretraining and the BERT-base model in the medical domain. In the conclusion, we summarize the article and recapitulate the key characteristics and practical value of medical text pretraining and the BERT-base model. We also look ahead to their future development in medicine, and close by emphasizing their importance and potential contributions to the field.

1.3 Purpose
This article explores the application of medical text pretraining in natural language processing, focusing on pretraining methods built on the BERT-base model. Pretraining on medical-domain text improves model performance on tasks such as medical text understanding and medical knowledge-graph construction, supports the application and mining of medical big data, and advances medical artificial intelligence. Through this introduction and analysis, readers can appreciate the importance and prospects of medical text pretraining, providing theoretical support and practical guidance for further exploration of medical natural language processing.

2. Main body

2.1 Medical text pretraining
Medical text pretraining refers to pretraining carried out specifically on medical-domain text data. In traditional natural language processing, pretrained models such as BERT perform well on general text, but their performance when applied to the medical domain is often unsatisfactory.
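Continued pretraining of BERT-base on medical corpora uses the same masked-language-modeling objective as general-domain BERT: a fraction of tokens is selected, and of those, 80% are replaced with a mask token, 10% with a random token, and 10% left unchanged. A minimal sketch of that masking rule follows; the token list and `VOCAB` are illustrative placeholders, not a real medical vocabulary.

```python
import random

MASK = "[MASK]"
VOCAB = ["a", "b", "c", "d"]  # stand-in vocabulary for the random-replacement case

def mlm_mask(tokens, p=0.15, seed=0):
    """BERT-style masking: select each token with probability p; of the
    selected tokens, 80% become [MASK], 10% become a random vocab token,
    and 10% stay unchanged. Returns (masked input, labels); labels[i] is
    None for tokens the loss should ignore."""
    rng = random.Random(seed)
    inp, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < p:
            labels[i] = tok                 # the model must predict the original here
            r = rng.random()
            if r < 0.8:
                inp[i] = MASK
            elif r < 0.9:
                inp[i] = rng.choice(VOCAB)  # random replacement
            # else: keep the original token (but still predict it)
    return inp, labels
```

The 10% unchanged case is what forces the model to build useful representations even for unmasked positions.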
Vol. 39 No. 12, Dec. 2005 — Journal of Zhejiang University (Engineering Science)
Received: 2004-10-08. Journal website: www.journals.z /eng
About the author: GUO Si-yu (b. 1975), male, from Changsha, Hunan; Ph.D., associate professor; research interests: data mining and image processing. E-mail: syguo75@

Model-based adaptive thresholding algorithm

GUO Si-yu1, ZHANG Xu-fang2
(1. Institute of Intelligent Systems and Decision Making, Zhejiang University, Hangzhou 310027, China; 2. Women's Hospital, School of Medicine, Zhejiang University, Hangzhou 310006, China)

Abstract: To reduce computation overhead in exhaustive thresholding methods, a structure and construction algorithm of connected component tree (CCTree) was proposed. Through the labelling of the newly emerged regions and their merging with the existing components available as the result of previous thresholdings, the thresholding and connected component labelling under a new threshold was constructed incrementally, and the computation overhead was avoided. A method for model matching region detection through tree searching on CCTree was proposed. Experimental results on a real image database show that the CCTree-based algorithm is superior to the ETL-based algorithm while achieving the same model matching region detection effect, that the CCTree-based algorithm can effectively filter out overlapping regions and obtain better matching regions, and that the algorithm can achieve an accurate and fast thresholding outcome based on model matching.

Key words: image segmentation; adaptive thresholding; connected component analysis
CLC number: TP391; Document code: A; Article ID: 1008-973X(2005)12-1950-04

Thresholding is a basic and widely used segmentation method. Otsu [1] proposed threshold selection based on the maximum between-class variance criterion, and other methods based on one- or two-dimensional histograms followed [2–4]. In many applications, the thresholding step is immediately followed by connected component labeling of the resulting binary image, the computation of features of each component (such as area, circularity, and moments), and further object recognition based on these statistics. Connected component labeling has mature solutions [5–7], though more recent work continues [8, 9]. There is as yet no universal way to evaluate the quality of a thresholding result, but if thresholding is combined with the subsequent component feature statistics, the object to be recognized can be modeled in terms of component features. Thresholding then serves as a preprocessing step whose goal is to report, as candidate regions, as many components matching the object model as possible, to be filtered in later steps. In this sense, the effectiveness of thresholding at screening candidate regions can serve as its quality criterion, leading to a thresholding that is "optimal" with respect to the object model. Following this idea, this paper proposes a tree structure whose search quickly yields, over all possible thresholds, every component conforming to the object model; the resulting segmentation is adaptive to the object, i.e., for different objects the most appropriate threshold is found.

1 Basic idea

When thresholding is used as a preprocessing step, it should report as correctly as possible the candidate regions matching the object model. Object models commonly use region features such as area, filled area, the major and minor axes and eccentricity of the filled region, and moments. Existing thresholding methods generally produce segmentations that look right to a human observer, but for models that use features such as filled area or moments of the filled region, they may miss a few critical pixels and thereby lose candidate regions; Fig. 1 illustrates this. If filled area is one of the model features, pixel p in Fig. 1(a) alone decides whether the region shown becomes a candidate, yet existing methods may well lose p and wrongly reject the region, as in Fig. 1(b).

[Fig. 1. Critical point in thresholding]

To meet the goal of model-based thresholding, one can enumerate every possible threshold, segment at each, label the connected components of each binary image, compute their features, and so obtain the candidate regions. We call this family of methods exhaustive thresholding and connected component labeling (ETL). Existing single-threshold methods separate objects from the background fairly well, so the exhaustive search can also be restricted to thresholds near the value such a method provides. This no longer guarantees that all candidate regions are found, but the running time drops substantially, trading time against screening effectiveness.

A closer look at ETL reveals a large amount of repeated labeling work. Let I_{M×N} = {I(i, j) | 1 ≤ i ≤ M, 1 ≤ j ≤ N} be an M × N gray image with 0 ≤ I(i, j) ≤ G, where G is the maximum gray level. For a threshold t, the thresholded image is

B_t(i, j) = 1 if I(i, j) ≤ t, and 0 if I(i, j) > t.

Consequently, if pixels p1 and p2 lie in the same connected component of the segmentation at threshold t*, then for every t ≥ t* they still lie in the same component (proof omitted). Thus in ETL, as the threshold sweeps the G + 1 gray levels 0, 1, …, G in increasing order, the components of the lower-threshold segmentation are preserved; new components appear at the new threshold, and some existing components may merge, through the new ones, into a larger component. At each new threshold, only the labeling of newly appearing components and the merging of components need to be handled; any relabeling of existing components is redundant. This observation leads to the connected component tree (CCTree) structure and construction algorithm below.

2 CCTree structure and algorithm

A CCTree consists of tree nodes (CCNodes), each corresponding to a connected component under some threshold. CCNodes are of two kinds: 1) early in the construction, the gray image yields components C in which all points have the same gray value while every point adjacent to C has a different gray value; these components form a first group of CCNodes called primary regions (PRs), which are the leaves of the tree. A PR records the coordinates of all of its points and its adjacency to other PRs. 2) On top of the PRs, merging produces new components, whose CCNodes form the internal nodes of the tree. In the actual implementation the PRs are folded into the internal nodes, but keeping the two kinds apart conceptually aids understanding.

The CCNode structure is shown in Fig. 2; the fields pNBs, pPRs and pCurRoot are used only during construction, and pPnts holds actual data only in PR nodes. pCurRoot is a pointer to a CCNode pointer whose content is, at any stage of construction, the CCNode of the component the PR currently belongs to; this information could also be obtained by following pParent upward level by level, but pCurRoot speeds the lookup up. For every CCNode, the CCTree further requires that its maximum gray value nGray be greater than the nGray of any of its children. The algorithm proceeds in three steps.

1) Obtaining the PRs. Connected component labeling is first performed on the original gray image — here with the method of [8] — and the resulting components are the PRs. After labeling, each PR's point list and list of neighboring PRs are collected. Only neighboring PRs (NPRs) whose gray value is smaller than the current PR's gray value are put on the list: when the threshold grows and a PR appears as a new binary component to be merged with existing components, the maximum gray value of any existing component must, by the definition of a PR, be below the current PR's gray value, so this adjacency information suffices for the merging task.

2) Sorting the PRs. The PRs are sorted by gray value in increasing order and then traversed to build the tree. This is justified by the fact that the threshold sweep in ETL also proceeds from small to large, and it guarantees that when a PR is processed, all of its NPRs have already been labeled and merged, since an NPR's gray value is required to be smaller than the current PR's.

3) Traversing the PRs and merging components. The sorted PRs are traversed; each in turn is taken as the current PR (CPR), and its pNBs field is traversed to visit all of its NPRs. Depending on the NPR, one of two operations is applied.

(1) Node docking. If the CCNode pointer stored in the NPR's pCurRoot — the root of the subtree the NPR currently belongs to, the neighboring CCNode (NCC) — has an nGray smaller than the CPR's nGray, the component the NPR currently belongs to must be merged in as a child of the CPR. With the CPR as the CCNode of the merged component, the NCC is appended to the CPR's pChildren list, the NCC's pPRs list is merged with the CPR's pPRs list into one larger list, and the pCurRoot of all of the NCC's PRs is set to the CPR. Fig. 3 gives an example: Fig. 3(a) shows a situation requiring docking, with dark squares denoting already-processed CCNodes, light squares the CCNode currently being processed, and the number in a square the gray value of that component.

[Fig. 3. Docking operation]

(2) Node fusion. If the NCC's nGray equals the CPR's nGray, there are two cases: (i) if the NCC is the CPR itself, the NPR has already joined the CPR's component indirectly through other PRs, and no further processing is needed; (ii) otherwise, the component the NPR currently belongs to must be fused with the CPR into a single node. With the CPR as the CCNode of the merged component, the NCC's pChildren list is merged with the CPR's pChildren list, the NCC's pPRs list with the CPR's pPRs list, and the pCurRoot of all of the NCC's PRs is set to the CPR. Fig. 4 gives an example.

[Fig. 4. Merging operation]

4) Cleanup. Once the tree has been built, the nodes can be tidied up by removing the pNBs, pPRs and pCurRoot fields to save space.

Retrieving the components of the segmentation at some threshold t from the CCTree is a tree search: starting from the root, if the visited node's nGray is greater than t, its children are visited; otherwise the node corresponds to a component of the threshold-t segmentation, its point list is returned, and its children are not searched further. The nGray of a returned node is the adaptive segmentation threshold of that component. It can be proved that the steps above indeed produce a tree, and that the search indeed yields all components of the segmentation at any threshold; the proofs are omitted for space.

3 Experimental results and analysis

The algorithm was validated on a real medical image database of 39 urinary-sediment microscopy photographs. Cells in the images are identified and counted with an active contour model, and thresholding supplies an initial contour. As a prerequisite for the proposed method, a simple coarse cell-screening model consisting of bounding-box size and filled region area was built from earlier cell data. To make the initial contour enclose the cell as completely as possible, the "largest" model-matching regions must be found, i.e., the components of the model-matching nodes closest to the CCTree root. The images have 256 gray levels and a size of 288 × 384 pixels. The compared methods are:
1) full CCTree construction and search (FCCT);
2) full ETL over thresholds 0–255 (FETL);
3) "lucky" thresholding and labeling with the method of [1] (LTL);
4) computing the threshold t with the method of [1] and running ETL within a range around it (t ± 10 and t ± 20 in the experiments), called lucky ETL (LETL);
5) computing the threshold t with the method of [1], keeping the gray values within a range around it while setting the values outside the range to 0 and 255 respectively, and on that basis building and searching a lucky CCTree (LCCT).
The labeling algorithm of [8] is used in all methods. Table 1 lists the running times in seconds; Table 2 lists the number of optimal regions (the largest model-matching regions) and feasible regions (model-matching, but not necessarily largest) each method finds, together with their shares of the total optimal and feasible counts.

Table 1. Running time of algorithms (s)
Method      Mean   Max    Min
FCCT        1.61   2.31   0.98
FETL        5.13   5.33   4.98
LTL         0.01   0.02   0.00
LETL(10)    0.22   0.31   0.17
LCCT(10)    0.08   0.17   0.06
LETL(20)    0.41   0.53   0.36
LCCT(20)    0.11   0.28   0.06

Table 2. Regions found by algorithms
Method      Optimal (detection rate)   Feasible (detection rate)
FCCT        2578 (100%)                2578 (8.6%)
FETL        2578 (100%)                29925 (100%)
LTL         4 (0.2%)                   358 (1.2%)
LETL(10)    146 (5.7%)                 7365 (24.6%)
LCCT(10)    146 (5.7%)                 468 (1.6%)
LETL(20)    279 (10.8%)                13832 (46.2%)
LCCT(20)    279 (10.8%)                587 (2.0%)

Comparing Tables 1 and 2 shows that, for the same optimal-region detection performance, the CCTree-based methods outrun the ETL-based ones, confirming that the CCTree algorithm does remove the repeated computation of the ETL process. Table 2 further shows that the CCTree-based methods report far fewer feasible regions than the ETL-based ones, i.e., the CCTree effectively discards overlapping segmentation regions and greatly lightens the subsequent region processing. The CCTree therefore clearly outperforms ETL in both running time and detection effect.

4 Conclusion

Given an object model for binary connected regions, the proposed CCTree method completes the thresholding task quickly, and the segmentation threshold is adaptive to the object; the experimental results confirm this. The current CCTree method still targets simple single-threshold segmentation and does not apply to multi-threshold segmentation of one-dimensional histograms or to segmentation methods based on two- or higher-dimensional histograms; this is a direction for future research.

References:
[1] OTSU N. A threshold selection method from gray-level histograms [J]. IEEE Transactions on Systems, Man, and Cybernetics, 1979, 9(1): 62-66.
[2] SAHOO P K, SLAAF D W, ALBERT T A. Threshold selection using a minimal histogram entropy difference [J]. Optical Engineering, 1997, 36(7): 1976-1981.
[3] BRINK A D. Thresholding of digital images using two-dimensional entropies [J]. Pattern Recognition, 1992, 25(8): 803-808.
[4] ABUTALEB A S. Automatic thresholding of gray-level pictures using two-dimensional entropy [J]. Computer Vision, Graphics, and Image Processing, 1989, 47: 22-32.
[5] LUMIA R, SHAPIRO L, ZUNIGA O. A new connected components algorithm for virtual memory computers [J]. Computer Vision, Graphics, and Image Processing, 1983, 22: 287-300.
[6] CHOUDHARY A, THAKUR R. Connected component labeling on coarse grain parallel computers: an experimental study [J]. Journal of Parallel and Distributed Computing, 1994, 20(1): 78-83.
[7] BULGARELLI A, STEFANO L D. A simple and efficient connected component labeling algorithm [A]. WERNER B, ed. Proceedings of the 10th Image Analysis and Processing [C]. Venice: IEEE, 1999: 322-327.
[8] KONG Bing. A fast connected component analysis algorithm and its implementation [J]. Pattern Recognition and Artificial Intelligence, 2003, 16(1): 110-115. (in Chinese)
[9] PANG Qun, SU Jia, DUAN Hui-long. Object-oriented image segmentation method based on cross entropy and quad-tree [J]. Journal of Zhejiang University: Engineering Science, 2004, 38(12): 1615-1618. (in Chinese)
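The incremental idea behind CCTree — components found at a lower threshold persist as the threshold rises, and only new components and merges need handling — can be sketched with a union-find pass over pixels sorted by gray value. This is an illustrative reconstruction of that idea, not the authors' implementation: it omits the PR/CCNode bookkeeping and the model-matching search, and only counts components per threshold.

```python
def components_per_threshold(img, thresholds):
    """For each threshold t in `thresholds` (ascending), count the connected
    components of the binary image {pixel : img[y][x] <= t} under
    4-connectivity, in a single pass over pixels sorted by gray value."""
    h, w = len(img), len(img[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    # Activate pixels in increasing gray order; earlier components persist
    # and can only merge, mirroring the ETL/CCTree observation.
    order = sorted((img[y][x], y, x) for y in range(h) for x in range(w))
    counts, ti, active = {}, 0, []
    for v, y, x in order + [(float("inf"), -1, -1)]:  # sentinel flushes counts
        while ti < len(thresholds) and v > thresholds[ti]:
            counts[thresholds[ti]] = len({find(p) for p in active})
            ti += 1
        if y < 0:
            break
        p = (y, x)
        parent[p] = p
        active.append(p)
        for q in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if q in parent:                  # neighbour already active: merge
                ra, rb = find(p), find(q)
                if ra != rb:
                    parent[ra] = rb
    return counts
```

Each pixel is unioned with its already-active neighbours exactly once, so the whole threshold sweep costs nearly linear time instead of one labeling pass per threshold.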
Vol. 28 No. 6, Dec. 2009 — Journal of Spacecraft TT&C Technology

A Generation Method of GPS IF Signal under Carrier Rotating Conditions

LIU Xu-dong, ZHAO Jun-xiang
(Beijing Institute of Tracking and Telecommunication Technology, Beijing 100094)

Abstract: The GPS IF signal is the key aspect for research on the characteristics of the GPS signal and the design of acquisition and tracking loops. It is difficult to obtain the GPS IF signal in a highly dynamic rotating situation, so simulation techniques would be adopted to generate the signal. Based on a comprehensive analysis of the impact of satellite elevation, GPS antenna gain, and carrier rotation on GPS satellite visibility and on the GPS IF signal, a model of the GPS IF signal in a highly dynamic rotating situation was developed, a generation method of the GPS IF signal was proposed, and the IF signal was simulated on the Matlab platform. The method may be valuable in theoretical research and in engineering applications for verifying trajectory determination methods for highly dynamic rotating carriers.

Keywords: GPS; Rotating Carrier; IF Signal
CLC number: P22814; Document code: A; Article ID: 1674-5620(2009)06-0090-05

0 Introduction

At present, GPS measurement technology has been widely applied in missile and spaceflight tracking, telemetry and control, and has become one of the main exterior trajectory measurement means at test ranges [1].
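An IF-signal model of the kind described in the abstract above can be prototyped along the following lines. The paper works in Matlab; this Python sketch stands in for it, and it simplifies heavily: the random ±1 chip sequence is a placeholder for a real C/A code, and the antenna-gain and rotation-induced modulation terms are collapsed into a fixed amplitude and a single constant Doppler offset.

```python
import math
import random

def gps_if_samples(n, fs, f_if, f_dopp, code_rate=1.023e6, amp=1.0, seed=1):
    """Generate n samples of a simplified GPS IF signal:
        s[i] = amp * c(t_i) * cos(2*pi*(f_if + f_dopp)*t_i),
    where c(t) is a +/-1 spreading code. The 1023-chip sequence below is a
    random placeholder PRN, not a real C/A code."""
    rng = random.Random(seed)
    chips = [rng.choice((-1.0, 1.0)) for _ in range(1023)]   # one code epoch
    out = []
    for i in range(n):
        t = i / fs
        chip = chips[int(t * code_rate) % 1023]              # code modulation
        out.append(amp * chip * math.cos(2 * math.pi * (f_if + f_dopp) * t))
    return out
```

In a fuller model, `amp` would follow the antenna gain pattern as the carrier spins and `f_dopp` would vary with both trajectory and rotation rate.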
Geometric Modeling

Geometric modeling is a crucial aspect of computer-aided design and computer graphics. It involves the creation of digital representations of physical objects and shapes using mathematical equations and algorithms. Geometric modeling plays a significant role in various industries, including engineering, architecture, animation, and manufacturing. It allows designers and engineers to visualize and analyze complex structures, simulate real-world scenarios, and create realistic visualizations of their designs. However, it also presents several challenges and limitations that need to be addressed to improve its effectiveness and efficiency. One of the primary challenges in geometric modeling is the complexity of representing real-world objects and shapes accurately. While simple geometric shapes like cubes, spheres, and cylinders can be easily defined using basic mathematical equations, more complex and irregular shapes, such as organic forms or intricate architectural designs, require advanced modeling techniques. Creating accurate and detailed digital representations of these shapes often involves a significant amount of time and computational resources. Moreover, the process of converting physical objects into digital models can be prone to errors and inaccuracies, leading to discrepancies between the digital and physical representations. Another critical issue in geometric modeling is the trade-off between accuracy and efficiency. As the complexity of the modeled object increases, the computational resources and time required to generate and manipulate the model also escalate. This can hinder the real-time visualization and interaction with the model, especially in applications like virtual reality and simulation. 
Balancing the need for high-fidelity models with the computational constraints is a constant struggle for geometric modelers, requiring them to develop innovative algorithms and optimization techniques to improve the efficiency of modeling processes without sacrificing accuracy. Furthermore, geometric modeling often involves the integration of different modeling techniques and tools, such as parametric modeling, surface modeling, and solid modeling. Each technique has its strengths and weaknesses, and the choice of the most suitable approach depends on the specific requirements of the modeling task. However, interoperability and compatibility issues between different modeling techniques can arise, leading to inefficiencies and inconsistencies in the modeling workflow. Ensuring seamless integration and data exchange between diverse modeling tools is essential for enhancing the overall effectiveness of geometric modeling in various applications. In addition to technical challenges, geometric modeling also raises ethical and legal considerations, particularly in the context of intellectual property rights and digital ownership. With the increasing use of 3D scanning and modeling technologies, the ability to create digital replicas of physical objects has become more accessible. This raises concerns about copyright infringement, unauthorized reproduction of patented designs, and the protection of digital assets. As a result, the development of robust mechanisms for verifying the authenticity and ownership of digital models, as well as the establishment of clear regulations and standards for digital rights management, is crucial to ensure the ethical and legal use of geometric modeling technology. Despite these challenges, the continuous advancements in geometric modeling techniques and technologies offer promising opportunities for innovation and improvement. 
The integration of artificial intelligence and machine learning algorithms in geometric modeling has the potential to automate and optimize various aspects of the modeling process, reducing the manual effort and time required to create and manipulate complex models. Additionally, the emergence of collaborative and cloud-based modeling platforms enables distributed teams to work together on shared models, fostering greater creativity and productivity in design and engineering projects. In conclusion, geometric modeling is a multifaceted discipline that presents various technical, ethical, and legal challenges. While the complexity and accuracy of digital representations, the trade-off between accuracy and efficiency, and the interoperability of modeling techniques are significant concerns, the continuous advancements in technology and the evolving landscape of modeling applications offer opportunities for addressing these challenges. By embracing innovation, collaboration, and ethical considerations, the field of geometric modeling can continue to drive advancements in design, engineering, and digital creativity, shaping the future of various industries and applications.
Base models and instruction models

1. Introduction

1.1 Overview
Amid the wave of informatization and intelligent systems, artificial intelligence is increasingly a focus of attention across fields. In building and training AI models, the two concepts of "Base Models" and "Instruction Models" have become crucial. This article examines and compares these two kinds of models and analyzes their strengths and weaknesses in practical use.

1.2 Article structure
The article has five main parts: besides the introduction, they are "Base Models", "Instruction Models", a comparison and analysis, and conclusions and outlook. First, Part 2 introduces and describes base models: we discuss their definition and background, elaborate their characteristics and advantages, and present application areas and case studies built on base models. Next, Part 3 explains the concept of instruction models and sets out the principles and methodology of their construction; we then go deeper into the steps for implementing such models and illustrate their use through case analyses. Then, in Part 4, we compare and analyze the two kinds of models: we point out their similarities and differences, discuss their applicability and limiting factors in different scenarios, and give an overall evaluation and effectiveness comparison, offering the reader a complete picture. Finally, Part 5 summarizes the main points and findings, notes the limitations of the study and suggests directions for improvement, and looks at possible future directions, discussing the potential impact and challenges that AI technology brings to various fields.

1.3 Purpose
This article aims to give a complete introduction to base models and instruction models and to compare their characteristics and application scenarios. Through in-depth analysis and evaluation of the two kinds of models, we can better understand their roles and limitations in practice, and provide a reference for researchers and practitioners in related fields.
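The practical difference between the two kinds of models shows up most clearly in how a prompt is assembled: a base model is handed raw text to continue, while an instruction model expects its input wrapped in a chat template. A toy sketch follows; the `<|...|>` markup is a made-up template for illustration, since every real instruction model defines its own.

```python
def build_prompt(user_msg, kind, system="You are a helpful assistant."):
    """Format user input for a base model (raw continuation) or an
    instruction model (hypothetical chat template)."""
    if kind == "base":
        return user_msg  # the model simply continues this text
    if kind == "instruct":
        # system message, user turn, then an open assistant turn to complete
        return f"<|system|>{system}\n<|user|>{user_msg}\n<|assistant|>"
    raise ValueError(f"unknown model kind: {kind}")
```

Sending a bare question to a base model often yields more questions rather than an answer; the template is what steers an instruction-tuned model into the assistant role.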
ATI RAID Installation Guide

1. ATI BIOS RAID Installation Guide
   1.1 Introduction to RAID
   1.2 RAID Configurations Precautions
   1.3 Create Disk Array
2. ATI Windows RAID Installation Guide
   2.1 Components of WebPAM Installation Software
   2.2 Browser Support
   2.3 Installing WebPAM
   2.4 Log-in to WebPAM
   2.5 Create RAID in WebPAM

1. ATI BIOS RAID Installation Guide

ATI BIOS RAID Installation Guide is an instruction for you to configure RAID functions by using the onboard FastBuild BIOS utility under the BIOS environment. After you make a SATA / SATAII driver diskette, press <F2> to enter BIOS setup to set the option to RAID mode by following the detailed instruction of the “User Manual” in our support CD or “Quick Installation Guide”; then you can start to use the onboard FastBuild BIOS utility to configure RAID.

1.1 Introduction to RAID

The term “RAID” stands for “Redundant Array of Independent Disks”, which is a method combining two or more hard disk drives into one logical unit. For optimal performance, please install identical drives of the same model and capacity when creating a RAID set.

RAID 0 (Data Striping)
RAID 0 is called data striping, which optimizes two identical hard disk drives to read and write data in parallel, interleaved stacks. It will improve data access and storage since it will double the data transfer rate of a single disk alone, while the two hard disks perform the same work as a single drive but at a sustained data transfer rate.

WARNING!! Although the RAID 0 function can improve the access performance, it does not provide any fault tolerance. Hot-plugging any HDD of the RAID 0 disk set will cause data damage or data loss.

RAID 1 (Data Mirroring)
RAID 1 is called data mirroring, which copies and maintains an identical image of data from one drive to a second drive.
It provides data protection and increases fault tolerance to the entire system, since the disk array management software will direct all applications to the surviving drive — which contains a complete copy of the data in the other drive — if one drive fails.

RAID 10 (Stripe Mirroring)
RAID 0 drives can be mirrored using RAID 1 techniques, resulting in a RAID 10 solution for improved performance plus resiliency. The controller combines the performance of data striping (RAID 0) and the fault tolerance of disk mirroring (RAID 1). Data is striped across multiple drives and duplicated on another set of drives.

1.2 RAID Configurations Precautions

1. Please use two new drives if you are creating a RAID 0 (striping) array for performance. It is recommended to use two SATA drives of the same size. If you use two drives of different sizes, the smaller-capacity hard disk will be the base storage size for each drive. For example, if one hard disk has an 80GB storage capacity and the other hard disk has 60GB, the maximum storage capacity for the 80GB drive becomes 60GB, and the total storage capacity for this RAID 0 set is 120GB.
2. You may use two new drives, or use an existing drive and a new drive, to create a RAID 1 (mirroring) array for data protection (the new drive must be of the same size or larger than the existing drive). If you use two drives of different sizes, the smaller-capacity hard disk will be the base storage size. For example, if one hard disk has an 80GB storage capacity and the other hard disk has 60GB, the maximum storage capacity for the RAID 1 set is 60GB.
3. Please verify the status of your hard disks before you set up your new RAID array.

WARNING!! Please back up your data first before you create RAID functions. In the process of creating RAID, the system will ask if you want to “Clear Disk Data” or not. It is recommended to select “Yes”, and then your future data building will operate under a clean environment.

1.3 Create Disk Array

Power on your system.
If this is the first time you have booted with the disk drives installed, the ATI onboard BIOS will display the following screen.

Press the <Ctrl+F> keys; the FastBuild Utility Main Menu appears.

Press 2 on the Main Menu screen to display the Define LD Menu.

Press the arrow keys to highlight a logical drive number you want to define and press <Enter> to select it. The Define LD Menu for the logical drive number you selected will appear next.

Choose the RAID level you want. In the Define LD Menu section, press the spacebar to cycle through logical drive types, including RAID 0, RAID 1, and RAID 10.

WARNING!! While you are allowed to use any available RAID level for your bootable logical drive, it is recommended to use RAID 1 for most applications.

Press the arrow key to move to Disk Assignments. Press the spacebar to toggle between N and Y for each available drive. Y means this disk drive will be assigned to the logical drive. Assign the appropriate number of disk drives to your logical drive. Then press <Ctrl-Y> to save your logical drive configuration. You have the option of using all of the disk drive capacity for one logical drive or allocating a portion to a second logical drive. Choose one of the following actions:
1. Use the full capacity of the disk drives for a single logical drive: please read “One Logical Drive” below.
2. Split the disk drives among two logical drives: please read “Two Logical Drives” below.

One Logical Drive
After selecting the logical drive in Disk Assignments as in the above-mentioned procedures, press any key (except for <Ctrl-Y>) to use the full portion of the logical drive for one logical drive. Then please follow the steps below:
1. Press <Esc> to exit to the Main Menu.
2. Press <Esc> again to exit the Utility.
3. Press <Y> to restart your computer.
You have successfully created a new RAID logical drive.
Please install the operating system on your computer by following the detailed instruction of the “User Manual” in our support CD or “Quick Installation Guide”.

Two Logical Drives
After selecting the logical drive in Disk Assignments as in the above-mentioned procedures, press <Ctrl-Y> to allocate a portion of the disk drives to the first logical drive. Then please follow the steps below.
1. Enter the desired capacity (MB) for the first logical drive and press <Enter>. The Define LD Menu displays again.
2. Press the up and down arrow keys to select an available logical drive number and press <Enter>.
3. Choose the RAID level and options for the second logical drive. Note that the disk drives in Channels 1 and 2 reflect smaller capacities because a portion of their capacity belongs to the first logical drive. In this example the disk drives in Channels 3 and 4 are not assigned to a logical drive.
4. Press <Ctrl-Y> to save your logical drive configuration.
5. Press <Esc> to exit to the Main Menu. Press <Esc> again to exit the Utility.
6. Press <Y> to restart the computer.
You have successfully created a new RAID logical drive. Please install the operating system on your computer by following the detailed instruction of the “User Manual” in our support CD or “Quick Installation Guide”.

2. ATI Windows RAID Installation Guide

ATI Windows RAID Installation Guide is an instruction for you to configure RAID functions by using the WebPAM RAID management software under the Windows environment. The WebPAM (Web-Based Promise Array Management) software offers local and remote management and monitoring of all ATI SB600 SATA logical drives that exist anywhere on a network. Its browser-based GUI provides email notification of all major events/alarms, memory cache management, drive event logging, logical drive maintenance, rebuild, and access to all components in the RAID configuration (server, controller, logical drives, physical drives, and enclosure).
WebPAM is designed to work with ATI SB600 SATA RAID controllers. Other brands of RAID controllers are not supported. Please read this guide carefully and follow the instructions below to configure and manage RAID functions.

2.1 Components of WebPAM Installation Software

The WebPAM installation software will install two major components on your system:
1. WebPAM RAID management software: The WebPAM software installs on the PC with the ATI SB600 SATA RAID Controller (the “Host PC”).
2. Java Runtime Environment (in a private folder): The WebPAM installation program installs a private JRE in folder _jvm under the same directory where WebPAM is installed. WebPAM uses this private JRE to avoid incompatibility issues with any other JREs that may be present on your system.

2.2 Browser Support

On the Host PC with the ATI SB600 Controller, where you install WebPAM, you must have one of the following browsers: Internet Explorer 6.0, Mozilla Suite 1.7, Mozilla Firefox 1.0, or Netscape Navigator 7.1. If you do not have one of the above browsers, install the browser first and make it the default browser. Then install WebPAM. You must use one of the browsers listed above on your networked PC in order to access WebPAM over the network.

2.3 Installing WebPAM

Follow these steps to install WebPAM on your Windows-based PC or server.
1. Boot up the PC/server and launch Windows. If the computer is already running, exit all programs.
2. Insert the software CD into your CD-ROM drive.
3. Double-click on the Install CD’s icon to open it.
4. Double-click on the Installer icon to launch it. The first WebPAM installation dialog box appears.
5. Follow the prompts in the installation dialog box. The first WebPAM installation dialog box appears as shown below.
6. Select an installer language from the dropdown menu and click the OK button.
7. Click the Next button when the Introduction screen appears.
8. Click on the “I accept the terms of the license agreement” option to proceed with installation when the License Agreement screen appears. If you select the “I do not accept the terms of the license” option, the installation will quit. Click the Next button when you are finished.
9. When the Choose Install Folder screen appears, select a folder for the WebPAM applications you are installing. For example, the Windows default folder is C:\Program Files\ATI\WebPAM. If you want a different folder, type its location or click the Choose... button and select a new location. If you change your mind and want the default location, click on the Previous button, then the Next button. Click the Next button when you are finished.
10. When the Check HTTP SSL screen appears, you can choose External Security. An explanation follows. External SSL Security applies security to all connections involving the Internet or outside your company firewall; security options are invisible to authorized users. ATI provides a default certificate for the server as well as for internal data communication. However, in some cases it is better to install and verify your own certificate for the web server and, if possible, to have the certificate verified by a certificate authority such as VeriSign or Thawte. See your MIS administrator for guidance. Click the Next button when you have made your choice.
11. Review your choices when the Pre-Installation Summary screen appears. Click the Previous button to make changes or click the Installation button to continue.
12. When the Install Complete screen appears, click the Done button. This completes the WebPAM installation. Then you can start to log in to WebPAM. Please read the instructions below for details.

2.4 Log-in to WebPAM

Double-click on the WebPAM icon on your Windows desktop, or launch your browser and type the entry in the browser address field. If you did not choose the External Security option during WebPAM installation, use the Regular connection.
If you chose the External Security option during WebPAM installation, use the Secure connection.

Regular connection: http://127.0.0.1:8080/ati or http://localhost:8080/ati
Secure connection: https://127.0.0.1:8443/ati or https://localhost:8443/ati

Please note that the IP address shown above applies to a log-in at the Host PC. When you log in over a network, enter the Host PC’s actual IP address or hostname. When the opening screen appears:
1. Type admin in the Login ID field.
2. Type admin in the Password field.
3. Click the Sign in button. This is the default login for the Administrator. The Login ID and Password are case sensitive. Click the WebPAM online help for instructions on adding users and changing passwords.
4. After you successfully log in to WebPAM, you are allowed to click the buttons on the top, such as “Language”, “Help”, or “Logout”, for other requirements.

2.5 Create RAID in WebPAM

After you log in to WebPAM, you can click the buttons on the left. Click 127.0.0.1, ATI SB600 SATA Controller, and then Controller 1 to view the controller information.

Click Logical Drive View.

Click the Create button to create a RAID array. Then you can start to select the RAID level. After selecting the RAID level that you wish, click Next for the next page. Here we take RAID 1 for example.

You can select the drive group. Please select free drive(s) for one logical drive that has free space. Click Next.

Select drives. You can choose to use the maximum capacity or key in the logical drive size in GB. Then select the drives on which you plan to create RAID. Click Next for the next page.

Assign a name to the logical drive. The logical drive name that you assign is supposed to contain 1 to 32 characters. After that, please click Next.

In the Final Settings page, please confirm your choices in the following list. Or you may make any changes here.
If you have confirmed the information in the list, click Finish.Finally, in Logical Drive Overview page, you can see the RAID configuration you just made on your system, including Assigned Name, RAID Level, Status, Background Activity, and Capacity.In the future, if you plan to configure other RAID functions, you may click the Delete or Synchronization Schedule on the top to meet your RAID requirement.。
Model-Based Design of Runtime Adaptation Strategies

Joseph P. Loyall¹, Richard Shapiro¹, Sandeep Neema², Sherif Abdelwahed², Richard Schantz¹, Nagabhushan Mahadevan²

¹BBN Technologies, Cambridge, Massachusetts
²Vanderbilt University, Nashville, Tennessee

1 Problem Description

Designing and implementing embedded computing systems is more challenging than non-embedded systems because of the need to interact closely with a changeable physical universe that operates in real time, and because of the extra constraints imposed by the packaging and environmental concerns for many embedded systems. In addition, the challenges posed by distributed systems architectures and composite implementations of stand-alone components (which are already prevalent in non-embedded environments) have begun to become commonplace consequences of the requirements for new embedded solutions. There are especially challenging issues surrounding the design and implementation of distributed embedded systems with managed quality of service (QoS) properties when those systems must adapt to changes in both the computational and physical universes within which they operate.

To date, research has concentrated on constructing runtime system abstractions and mechanisms that can adaptively meet the design requirements. However, using these abstractions and mechanisms requires highly skilled individuals with strong intuitions about both the needs of the domain and the ways to manipulate the various dimensions contributing to the managed QoS behavior. There are few design paradigms to guide the design or construction of this sort of dynamic, adaptive, managed runtime behavior.
There is a compelling need for applying design-time methodologies to develop and control these runtime adaptations systematically, and along the way to establish appropriate interchanges and interfaces between the design-time tools and their runtime counterparts.

Under the auspices of the DARPA MoBIES program, we are investigating the application of model integrated computing (MIC) to the problems of designing, customizing, parameterizing, and managing the adaptive runtime characteristics of distributed real-time embedded (DRE) applications. In particular, we are seeking to utilize and augment the Generic Modeling Environment (GME) MIC tool to design runtime control capabilities as provided by the Quality Objects (QuO) adaptive QoS middleware framework. GME uses domain models and advanced user/designer interfaces to manipulate elements in the domain space that are intelligently interpreted by the models to produce effective designs and code elements representing that design [4]. QuO provides a set of extensions to existing off-the-shelf middleware (including CORBA and Java RMI), specification languages, and a runtime system to support QoS awareness and management of, and adaptation to, dynamic conditions [18].

This work is sponsored by the DARPA/IXO Model-Based Integration of Embedded Software program, under contract F33615-02-C-4037 with the Air Force Research Laboratory Information Directorate, Wright Patterson Air Force Base.

2 Runtime Adaptive Behavior

Our approach is to develop a semantically rich, domain-specific modeling language supporting the high-level design and representation of adaptation strategies and concerns. We are designing and implementing mappings and code generators that produce middleware constructs from the high-level representation.
We plan to link the design-time modeling tool and runtime middleware using a feedback loop, so that information gathered at runtime feeds back into refinement of the design-time model.

The first step in our technical approach is to identify the relevant runtime QoS characteristics of DRE systems suitable for design-time modeling. The QoS aspects define a space, illustrated in Figure 1, in which the application has unacceptable quality of service if the measure of any dimension of interest falls below a minimal operating threshold. Above a maximum operating threshold, improvements in QoS make no difference to application operation. Between the minimum and maximum thresholds is a space in which tradeoffs and adaptations can be made in order to maintain as high a level of acceptable quality of service as feasible. While Figure 1 illustrates three dimensions (amount of data, fidelity, and latency), the space can consist of any number of dimensions, 1, 2, ..., N, corresponding to the QoS dimensions relevant to the application.

While the minimum and maximum thresholds define an acceptable operating space, it is usually not the case that every point in the acceptable space is equivalent, nor is it always possible to move smoothly from one point in the space to another. The underlying system, mechanisms, and resource managers provide knobs to control the level of QoS (e.g., through resource allocation, rate or priority adjustment, or data shaping), and the granularity at which QoS can be adapted depends on these knobs. Furthermore, adaptation along one QoS dimension will frequently affect other QoS dimensions (e.g., increasing the amount of data will use more bandwidth). Finally, the application's requirements will often mean that some tradeoffs and adaptations are preferred over others.
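The threshold test above can be made concrete with a small sketch. This is purely illustrative: the dimension names and threshold values below are assumptions for the example, not constructs from GME or QuO.

```python
# Hypothetical sketch of the N-dimensional acceptable operating space.
# Dimension names and threshold values are illustrative, not from the paper.

def classify(qos, thresholds):
    """Classify a measured QoS point against per-dimension thresholds.

    qos        -- dict mapping dimension name -> measured value
    thresholds -- dict mapping dimension name -> (minimum, maximum)
    """
    if any(qos[d] < lo for d, (lo, hi) in thresholds.items()):
        return "unacceptable"   # some dimension fell below its minimal threshold
    if all(qos[d] >= hi for d, (lo, hi) in thresholds.items()):
        return "saturated"      # further QoS improvement makes no difference
    return "acceptable"         # the region where tradeoffs and adaptation apply

thresholds = {"data_rate": (10, 100), "fidelity": (0.5, 1.0), "latency_score": (0.2, 0.9)}

print(classify({"data_rate": 50, "fidelity": 0.7, "latency_score": 0.4}, thresholds))  # acceptable
print(classify({"data_rate": 5, "fidelity": 0.7, "latency_score": 0.4}, thresholds))   # unacceptable
```

Note that "unacceptable" is triggered by any single dimension falling below its minimum, matching the paper's description that the space is unacceptable if the measure of any dimension of interest drops below its threshold.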
The main goal of our research is modeling Adaptation Strategy Preferences, which indicate how adaptation moves the application through the acceptable operating space as system, functional, and mission conditions change. These preferences specify which adaptation behaviors and mechanisms are employed, the manner and order in which they are employed, the tradeoffs to be made, and the conditions that trigger adaptation. To this end, our modeling language supports modeling the following components of a QoS adaptive runtime system:

• The application's structure – Primarily the functional structure of the application, including data and control flow and points for inserting adaptive decisions.
• Mission requirements – The functional and QoS goals that must be met by the application. These help determine the relative merit of possible adaptations and points in the adaptation space.
• Observable parameters – The system conditions that need to be monitored at runtime in order to determine the system's QoS state and drive adaptation. Examples include latency, throughput, and bandwidth between a pair of hosts, as well as reflective information about application execution, such as the nature of data content or the speed of operation. These parameters determine the application's current position in the adaptation space.
• Controllable parameters and adaptation behaviors – The knobs available to the application for QoS control and adaptation. These can be in the form of interfaces to mechanisms, managers, or resources, or in the form of packaged adaptations, such as those provided by QuO's Qosket encapsulation capability [13].
• The system dynamics – The interactions between observable and controllable parameters and adaptation behaviors.
These help define the set of possible trajectories that an application can take through the N-dimensional QoS parameter space.
• Minimum and maximum acceptable ranges of operation – The lower bound on the level of acceptable QoS below which the system is unacceptable for a given mission, and the upper bound above which additional resources lead to no improvement in system QoS.
• Adaptation strategies (controller model) – The adaptation strategy specifies the adaptations employed and the tradeoffs made in response to dynamic system conditions in order to maintain an acceptable mission posture.

We are using the graphical meta-modeling environment of GME [4] to define the modeling language. Our initial implementation builds upon ongoing efforts using GME to model the program's functional structure graphically, using icons to represent program components and connections to represent control and data flow between application components. We model controllers as state machines and as difference/differential equations. Observable and controllable parameters are atoms in GME, representing interfaces to system primitives, managers, mechanisms, and so forth. System dynamics, mission requirements, and ranges of operation are expected to be functions defined on observable and controllable parameters.

In our current implementation concept, application structure maps to functional specifications of the program's structure, such as CORBA IDL descriptions, control flow, and dataflow. Observable and controllable parameters map to QuO system condition objects, which provide interfaces to mechanisms and managers. QuO has a growing library of qoskets, encapsulated and reusable behaviors, which will serve the role of adaptation behaviors. We are still exploring how to map mission requirements, system dynamics, and minimum and maximum acceptable ranges of operation.

One of the key components of our high-level modeling is specifying the adaptation strategy.
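To illustrate the kinds of model elements enumerated above, the components could be captured as simple records. The type and field names below are hypothetical, invented for this sketch; they do not reflect actual GME atoms or QuO system condition object APIs.

```python
# Hypothetical record types for the modeled components; names and fields
# are illustrative only and do not reflect actual GME/QuO constructs.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class ObservableParam:           # monitored at runtime (e.g., latency)
    name: str
    read: Callable[[], float]    # interface to a system primitive or manager

@dataclass
class ControllableParam:         # a "knob" (e.g., rate, queue length)
    name: str
    write: Callable[[float], None]

@dataclass
class AdaptationModel:
    observables: Dict[str, ObservableParam]
    controllables: Dict[str, ControllableParam]
    # per-dimension (minimum, maximum) acceptable operating thresholds
    ranges: Dict[str, Tuple[float, float]]
    # mission requirements expressed as a utility over observed values
    utility: Callable[[Dict[str, float]], float]

# Toy instantiation: one observable, one knob, a single-dimension range.
state = {"rate": 0.0}
model = AdaptationModel(
    observables={"latency": ObservableParam("latency", lambda: 0.12)},
    controllables={"rate": ControllableParam("rate", lambda v: state.update(rate=v))},
    ranges={"latency": (0.0, 0.5)},
    utility=lambda obs: -obs["latency"],
)
model.controllables["rate"].write(30.0)
print(state["rate"])  # 30.0
```

The point of the sketch is the separation the paper describes: observables position the application in the QoS space, controllables move it, and the utility function stands in for mission requirements when ranking adaptations.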
We are approaching it, in part, from a control theory point of view. We plan a hierarchy of controllers, the outermost of which tries to maximize a measure of utility computed from the observable and controllable parameters in a manner defined by the mission requirements and system dynamics. We have identified and defined two types of controllers that we expect to be useful for modeling controlled behavior and generating predictable QoS adaptive middleware software: supervisory controllers and classical compensators. Supervisory controllers are represented by finite state machines, readily modeled in GME and then translated into QuO contracts [8], the QuO construct for specifying, controlling, and negotiating QoS adaptation [5]. We will model classical compensators as difference/differential equations, from which native language functions, contained in qoskets and referenced by contracts, will be generated.

Figure 2 illustrates an example controller that we have modeled and simulated in our first prototype development of these concepts. It implements an adaptive strategy for a signal processing application based on a measure of utility. We use two observable/controllable parameters: queue length and confidence of the signal analysis. Queue length is representative of the signal latency, indicating how rapidly the application is processing signals compared to the rate at which they are coming in; the queue will grow if signals are processed slower than the rate at which they are appearing and will shrink (or stay stable) if processing is faster than the incoming rate. Confidence is a value provided by the signal analyzer based upon the amount of signal data it has processed and the amount of processing it has performed on the data. In general, the signal analyzer will have more confidence in its conclusions when it has processed a larger window of signal data and/or performed more feature extraction operations on the data.
However, the increase in confidence is not a linear function: after processing a certain amount of data and extracting a certain number of features, more data and processing provide little, if any, increase in confidence. The utility function combines these two observable parameters, using weights assigned to each. The weights are provided by the mission requirements, which determine the relative importance to the application of processing more signals versus processing signals more accurately.

The modeled controller, realized at runtime by a QuO contract, tries to maintain the proper controlled balance of high confidence and low latency by adjusting the queue length and processing characteristics. At each step of signal analysis, the controller contract decides whether the signal analyzer gets a chunk of data from the buffer (processing another piece of the current signal), thereby attempting to increase confidence in the analysis, or the next signal from the queue, thereby attempting to reduce latency. A more detailed discussion of this controller, the utility function, and the application can be found in [1].

3 Status of the Research and Ongoing Work

So far, we have developed the paradigm for modeling adaptive behaviors described above and evaluated its utility in modeling a simple, adaptive signal analysis application. A screen shot of the adaptive signal analysis model is shown in Figure 3. We have simulated this application using MATLAB and are in the process of creating a runtime instantiation for it using the QuO middleware, while developing code generators for much of the QuO middleware aspect constructs needed to generate the runtime system [6].
We will be building on this by modeling and generating more capable adaptive DRE systems, using the signal analysis domain and our existing UAV and avionics applications [7] as examples, in order to extend, enhance, and evaluate our adaptation modeling language, code generation capabilities, and runtime middleware support.

4 Comparison with Other Work

There are a number of related efforts in the areas of QoS adaptive middleware and model integrated computing. Many of the QoS adaptive middleware efforts, such as ACE/TAO [14, 15], CIAO [17], RTCORBA [10], and FTCORBA [9], focus on distribution or infrastructure middleware and services that are complementary to our efforts. QuO works with these to provide a higher level, more dynamic policy and adaptation layer for combining and controlling the available mechanisms and services for end-to-end, dynamic solutions. Other research efforts focusing on QoS adaptive middleware, such as the University of Illinois's Agilos middleware project, are pursuing similar goals and are building upon each other's advances.

For modeling environments, there are commercially available toolsets, such as Simulink, which we are using as part of our total solution. In addition, there are a few other meta-programmable tools available and emerging. MetaEdit+, a meta-programmable tool from MetaCase, has capabilities for constructing domain-specific graphical languages based on the classical attributed entity-relationship concept. However, because it uses a relatively simple, proprietary reporting definition language to which only read access is provided, it is difficult to integrate MetaEdit+ into a tool chain. Honeywell's Dome is another meta-programmable tool with capabilities similar to MetaEdit+ in being able to construct and instantiate domain-specific graphical languages and create modeling environments.
The meta-programming environment of Dome is much more mature and powerful than that of MetaEdit+ in that it has a graphical modeling environment instantiated within Dome itself.

Figure 3: GME model of the signal analysis application. SA is the application structure (shown). QoSParams are the observable/controllable parameters. The other elements are the controller, mission requirements, and system dynamics.

Other research efforts have investigated the modeling of control-based QoS. For example, Sha et al. also investigate an approach to modeling controllers as differential and difference equations [16], including systems with highly variable workload. Fischer et al. present a Petri net approach to modeling QoS management systems [2]. There is also an effort within the OMG to standardize a UML approach for modeling QoS characteristics of systems [3, 11, 12].

5 Conclusions

We reported on ongoing work in the area of modeling adaptive behaviors and adaptation strategies and generating QoS adaptive middleware code for realizing them at runtime. Our work builds on the GME meta-modeling environment from Vanderbilt University and BBN's QuO QoS adaptive middleware. Our approach is to provide an environment for capturing an application's structure, operational requirements, minimum and maximum QoS thresholds, observable and controllable parameters, system dynamics, and adaptation control strategy. The model capturing these attributes would then synthesize the middleware code needed to add the described QoS awareness, control, and adaptation to the application. This work is ongoing, but our initial progress in describing our QoS adaptation modeling language and applying it to the area of signal analysis is promising.

References

[1] S. Abdelwahed, S. Neema, N. Mahadevan, J. Loyall, R. Shapiro. "Online Hybrid Control Design for QoS Management," submitted to the 42nd IEEE Conference on Decision and Control, December 2003, Maui, Hawaii.
[2] S. Fischer, H. de Meer.
"QoS Management: A Model-Based Approach," Proceedings of MASCOTS '98, IEEE Computer Society.
[3] I-Logix, THALES, and Tri-Pacific. "UML Profile for QoS and FT Characteristics and Mechanisms, Joint Initial Submission," OMG document number realtime/02-09-01.
[4] A. Ledeczi, M. Maroti, A. Bakay, G. Karsai, J. Garrett, C. Thomason, G. Nordstrom, J. Sprinkle, P. Volgyesi. "The Generic Modeling Environment," WISP 2001, May 2001, Budapest, Hungary.
[5] J. Loyall, P. Rubel, M. Atighetchi, R. Schantz, J. Zinky. "Emerging Patterns in Adaptive, Distributed Real-Time, Embedded Middleware," OOPSLA 2002 Workshop on Patterns in Distributed Real-time and Embedded Systems, November 2002, Seattle, Washington.
[6] J. Loyall, D. Bakken, R. Schantz, J. Zinky, D. Karr, R. Vanegas, K. Anderson. "QoS Aspect Languages and Their Runtime Integration," Lecture Notes in Computer Science, Vol. 1511, Springer-Verlag, 1998.
[7] J. Loyall, J. Gossett, C. Gill, R. Schantz, J. Zinky, P. Pal, R. Shapiro, C. Rodrigues, M. Atighetchi, D. Karr. "Comparing and Contrasting Adaptive Middleware Support in Wide-Area and Embedded Distributed Object Applications," 21st IEEE International Conference on Distributed Computing Systems (ICDCS-21), April 2001, Phoenix, AZ.
[8] S. Neema, T. Bapty, J. Gray, A. Gokhale. "Generators for Synthesis of QoS Adaptation in Distributed Real-time Embedded Systems," ACM SIGPLAN/SIGSOFT Conference, GPCE 2002, Pittsburgh, PA, October 2002.
[9] Object Management Group. Fault Tolerant CORBA. OMG Technical Committee Document formal/2001-09-29, September 2001.
[10] Object Management Group. Real-time CORBA. OMG Technical Committee Document formal/2001-09-28, September 2001.
[11] Object Management Group. "UML Profile for Modeling Quality of Service and Fault Tolerance Characteristics and Mechanisms, Request for Proposal," OMG Document ad/2002-01-07.
[12] Open-IT.
"Response to the OMG RFP for Modeling Quality of Service and Fault Tolerance Characteristics and Mechanisms," Document Version 1.0, OMG document number realtime/2002-09-02.
[13] R. Schantz, J. Loyall, M. Atighetchi, P. Pal. "Packaging Quality of Service Control Behaviors for Reuse," ISORC 2002, The 5th IEEE International Symposium on Object-Oriented Real-time Distributed Computing, April 29 – May 1, 2002, Washington, DC.
[14] D. Schmidt. "The Adaptive Communication Environment: Object-Oriented Network Programming Components for Developing Client/Server Applications," 12th Annual Sun Users Group Conference, June 1994, San Francisco, CA.
[15] D. Schmidt, D. Levine, S. Mungee. "The Design and Performance of the TAO Real-Time Object Request Broker," Computer Communications, Special Issue on Building Quality of Service into Distributed Systems, 21(4), pp. 294–324, 1998.
[16] L. Sha, X. Liu, Y. Lu, T. Abdelzaher. "Queueing Model Based Network Server Performance Control," Proceedings of the 23rd IEEE Real-Time Systems Symposium (RTSS '02).
[17] N. Wang, D. Schmidt, A. Gokhale, C. Gill, B. Natarajan, C. Rodrigues, J. Loyall, R. Schantz. "Total Quality of Service Provisioning in Middleware and Applications," Microprocessors and Microsystems, Special Issue on Middleware Solutions for QoS-enabled Multimedia Provisioning over the Internet, 2003.
[18] J. Zinky, D. Bakken, R. Schantz. "Architectural Support for Quality of Service for CORBA Objects," Theory and Practice of Object Systems, 3(1), 1997.