The Implementation of ObjectMath - a High-Level Programming Environment for Scientific Computing
Design and Implementation of a Bionic Robotic Hand with Multimodal Perception Based on Model Predictive Control

Abstract—This paper presents a modular bionic robotic hand system based on Model Predictive Control (MPC). The system's main controller is a six-degree-of-freedom STM32 servo control board, which employs the Newton-Euler method for a detailed analysis of the kinematic equations of the bionic robotic hand, facilitating the calculation of both forward and inverse kinematics. Additionally, MPC strategies are implemented to achieve precise control of the robotic hand and efficient execution of complex tasks. To enhance the environmental perception capabilities of the robotic hand, the system integrates various sensors, including a sound sensor, infrared sensor, ultrasonic distance sensor, OLED display module, digital tilt sensor, Bluetooth module, and PS2 wireless remote control module. These sensors enable the robotic hand to perceive and respond to environmental changes in real time, thereby improving operational flexibility and precision. Experimental results indicate that the bionic robotic hand system possesses flexible control capabilities, good synchronization performance, and broad application prospects.

Keywords: bionic robotic hand; Model Predictive Control (MPC); kinematic analysis; modular design

I. INTRODUCTION

With the rapid development of robotics technology, the importance of bionic systems in industrial and research fields has grown significantly. This study presents a bionic robotic hand, which mimics the structure of the human hand and integrates an STM32 microcontroller along with various sensors to achieve precise and flexible control. Traditional control methods for robotic hands often face issues such as slow response times, insufficient control accuracy, and poor adaptability to complex environments. To address these challenges, this paper employs the Newton-Euler method to establish a dynamic model and introduces Model Predictive Control (MPC) strategies, significantly enhancing the control precision and task execution efficiency of the robotic hand.

The robotic hand is capable of simulating basic human arm movements and achieves precise control over each joint through a motion-sensing glove, enabling it to perform complex and delicate operations. The integration of sensors provides the robotic hand with biological-like "tactile," "auditory," and "visual" capabilities, significantly enhancing its interactivity and level of automation.

In terms of applications, the bionic robotic hand not only excels in industrial automation but also extends its use to scientific exploration and daily life. For instance, it demonstrates high reliability and precision in extreme environments, such as simulating extraterrestrial terrain and studying the possibility of life.

II. SYSTEM DESIGN

The structure of the bionic robotic hand consists primarily of fingers with multiple joint degrees of freedom, where each joint can be controlled independently. The STM32 servo control board acts as the main controller, receiving data from sensors positioned at appropriate locations on the robotic hand and controlling its movements by adjusting the joint angles.
To enhance the control of the robotic hand's motion, this paper employs the Newton-Euler method to establish a dynamic model, conducts kinematic analysis, and integrates Model Predictive Control (MPC) strategies to improve operational performance in complex environments.

In terms of control methods, the system not only utilizes a motion-sensing glove for controlling the bionic robotic hand but also integrates a PS2 controller and a Bluetooth module, achieving a fusion of multiple control modalities.

(A figure of the overall system structure is to be inserted here.)

III. HARDWARE SELECTION AND DESIGN

Choosing hardware modules that meet the functional requirements of the system while effectively controlling costs and ensuring appropriate performance is a critical consideration prior to system design. The hardware components of the system mainly consist of the bionic robotic hand, a servo controller system, a sound module, an infrared module, an ultrasonic distance measurement module, and a Bluetooth module. The main sections are described below.

A. Bionic Mechanical Structure

The robotic hand consists of a rotating base and five articulated fingers, forming a six-degree-of-freedom motion structure. The six degrees of freedom enable the system to meet complex motion requirements while maintaining high efficiency and response speed. The workflow primarily involves outputting different PWM signals from the microcontroller so that each of the six degrees of freedom of the robotic hand can be controlled independently, joint by joint.

B. Controller and Servo System

The control system requires a variety of serial interfaces. To achieve efficient control, a combination of the STM32 microcontroller and an Arduino control board is utilized, leveraging the advantages of both. The STM32 microcontroller serves as the servo controller, while the Arduino control board provides extensive interfaces and sensor support, facilitating simplified programming and application processes. This integration ensures rapid and precise control of the robotic hand and promotes efficient development.

C. Bluetooth Module

The HC-05 Bluetooth module supports full-duplex serial communication at distances of up to 10 meters and offers various operational modes. In the automatic connection mode, the module transmits data according to a preset program. Additionally, it can receive AT commands in command-response mode, allowing users to configure control parameters or issue control commands. The level control of external pins enables dynamic state transitions, making the module suitable for a variety of control scenarios.

D. Ultrasonic Distance Measurement Module

The US-016 ultrasonic distance measurement module provides non-contact distance measurement of up to 3 meters and supports various operating modes. In continuous measurement mode, the module continuously emits ultrasonic waves and receives the reflected signals to calculate the distance to an object in real time. Additionally, the module can adjust the measurement range or sensitivity through a configuration response mode, allowing users to set distance measurement parameters or modify the measurement frequency as needed. The output signal can dynamically reflect the measurement results via level control of external pins, making it suitable for a variety of distance sensing and automatic control applications.
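The paper states that each joint is driven by a PWM signal from the microcontroller but does not show the firmware. As a minimal, hedged sketch (written here in C++ for illustration; the real controller would write the result to an STM32/Arduino timer register), the code below maps a desired joint angle to a servo pulse width. The 500-2500 microsecond pulse range and the example joint angles are assumptions typical of hobby servos, not values taken from the paper.

```cpp
#include <algorithm>
#include <cstdio>

// Map a joint angle in [0, 180] degrees to a servo pulse width in microseconds.
// Assumes a conventional hobby servo driven at 50 Hz with 500-2500 us pulses.
int angleToPulseUs(double angleDeg,
                   double minUs = 500.0, double maxUs = 2500.0) {
    angleDeg = std::clamp(angleDeg, 0.0, 180.0);   // respect mechanical limits
    double us = minUs + (maxUs - minUs) * (angleDeg / 180.0);
    return static_cast<int>(us + 0.5);             // round to nearest microsecond
}

int main() {
    // Example: command the six degrees of freedom to an (assumed) grasp posture.
    const double jointAngles[6] = {90.0, 45.0, 30.0, 30.0, 25.0, 60.0};
    for (int i = 0; i < 6; ++i) {
        int pulse = angleToPulseUs(jointAngles[i]);
        // On the real controller this value would be written to a timer
        // compare register; here we only print it.
        std::printf("joint %d: %6.1f deg -> %4d us pulse\n", i, jointAngles[i], pulse);
    }
    return 0;
}
```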
IV. DESIGN AND IMPLEMENTATION OF SYSTEM SOFTWARE

A. Kinematic Analysis and MPC Strategies

The control research of the robotic hand is primarily based on a mathematical model, and a reliable mathematical model is essential for studying the controllability of the system. The Denavit-Hartenberg (D-H) method is employed to model the kinematics of the bionic robotic hand, assigning a local coordinate system to each joint. The Z-axis is aligned with the joint's rotation axis, while the X-axis is defined along the common normal (the shortest distance) between adjacent Z-axes, thereby establishing the coordinate system for the robotic hand.

By determining the D-H parameters for each joint, including joint angles, link offsets, link lengths, and twist angles, the transformation matrix for each joint is derived, and the overall transformation matrix from the base to the fingertip is computed. This matrix encapsulates the positional and orientational information of the fingers in space, enabling precise forward and inverse kinematic analyses. The accuracy of the model is validated through simulations, confirming the correct positioning of the fingertip actuator. Additionally, Model Predictive Control (MPC) strategies are introduced to efficiently control the robotic hand and achieve trajectory tracking by predicting system states and optimizing control inputs.

Taking the index finger as an example, the D-H parameter table is established; the data are shown in Table I.

TABLE I. DATA SHEET

From the D-H parameters of the joints, both the forward kinematic solution and the inverse kinematic solution are derived, resulting in the kinematic model of the index finger. Using the same approach, the kinematic models for all other fingers can be obtained. The movement space of the index fingertip is shown in Fig. 1.

Fig. 1. The movement space at the end of the index finger.
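To make the D-H procedure above concrete, the following C++ sketch builds the standard 4x4 homogeneous transform for one joint from its parameters (theta, d, a, alpha) and chains the per-joint transforms from the base to the fingertip. The three parameter rows used in main() are placeholders, since the entries of Table I are not available in the extracted text; only the transform construction itself follows the standard D-H convention described in the paper.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Standard D-H homogeneous transform for one joint: rotate theta about z,
// translate d along z, translate a along x, rotate alpha about x.
Mat4 dhTransform(double theta, double d, double a, double alpha) {
    const double ct = std::cos(theta), st = std::sin(theta);
    const double ca = std::cos(alpha), sa = std::sin(alpha);
    return {{{ct, -st * ca,  st * sa, a * ct},
             {st,  ct * ca, -ct * sa, a * st},
             {0.0,      sa,       ca,      d},
             {0.0,     0.0,      0.0,    1.0}}};
}

Mat4 multiply(const Mat4& A, const Mat4& B) {
    Mat4 C{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

int main() {
    // Placeholder D-H rows (theta, d, a, alpha) for a three-joint finger;
    // the real values would come from Table I of the paper.
    const double dh[3][4] = {
        {0.3, 0.0, 0.045, 0.0},
        {0.5, 0.0, 0.030, 0.0},
        {0.4, 0.0, 0.020, 0.0},
    };

    // Chain the per-joint transforms: T = A1 * A2 * A3.
    Mat4 T = dhTransform(dh[0][0], dh[0][1], dh[0][2], dh[0][3]);
    for (int i = 1; i < 3; ++i)
        T = multiply(T, dhTransform(dh[i][0], dh[i][1], dh[i][2], dh[i][3]));

    // The last column holds the fingertip position in the base frame.
    std::printf("fingertip position: x=%.4f  y=%.4f  z=%.4f\n",
                T[0][3], T[1][3], T[2][3]);
    return 0;
}
```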
B. Mathematical Model of the Bionic Robotic Hand Based on the Newton-Euler Method

According to the design, each joint of the bionic robotic hand has a specified degree of freedom. For each joint $i$, the angle is defined as $\theta_i$, the angular velocity as $\dot{\theta}_i$, and the angular acceleration as $\ddot{\theta}_i$. The dynamics equation for each joint can be expressed as

$$\tau_i = I_i \ddot{\theta}_i + \omega_i \times (I_i \omega_i)$$

where $\tau_i$ is the joint torque, $I_i$ is the joint inertia matrix, and $\omega_i$ and $\ddot{\theta}_i$ are the joint angular velocity and angular acceleration, respectively.

The control input is generated by the motor driver (servo), with the output being torque. Assuming the motor input for joint $i$ is $u_i$, the joint torque $\tau_i$ is mapped through the motor's torque constant $k_\tau$ as

$$\tau_i = k_\tau \, u_i$$

The system dynamics equation can then be described as

$$I_i \ddot{\theta}_i + b_i \dot{\theta}_i + c_i \theta_i = \tau_i - \tau_{\mathrm{ext},i}$$

where $b_i$ is the damping coefficient, $c_i$ is the spring constant (accounting for joint elasticity), and $\tau_{\mathrm{ext},i}$ represents external torques acting on joint $i$, such as gravity and friction.

The primary control objective is to ensure that the end-effector of the robotic hand (e.g., the fingertip) accurately tracks a predefined trajectory. Let the desired trajectory be denoted as $y_d(t)$ and the actual trajectory as $y(t)$. The tracking error is

$$e(t) = y_d(t) - y(t)$$

The goal of MPC is to minimize the cumulative tracking error, typically through the objective function

$$J = \sum_{k=0}^{N-1} e(k)^{T} Q_e \, e(k)$$

where $Q_e$ is the error weight matrix and $N$ is the prediction horizon length. Mechanical constraints require that the joint angles and velocities remain within the physically permissible range: if the admissible angle range of joint $i$ is $[\theta_i^{\min}, \theta_i^{\max}]$ and the admissible velocity range is $[\dot{\theta}_i^{\min}, \dot{\theta}_i^{\max}]$, the optimization is carried out subject to $\theta_i^{\min} \le \theta_i \le \theta_i^{\max}$ and $\dot{\theta}_i^{\min} \le \dot{\theta}_i \le \dot{\theta}_i^{\max}$.
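The cost function and constraints above can be illustrated with a small numerical sketch. The C++ code below is not the paper's controller; it is a simplified single-joint illustration under assumed parameter values. It discretizes the joint model with explicit Euler integration, rolls the state forward over a prediction horizon for one candidate input sequence, saturates the predicted angle and velocity at the stated limits (a full MPC would instead impose these as optimization constraints), and accumulates the quadratic tracking cost with a scalar weight Q_e.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Assumed single-joint parameters (illustrative only, not from the paper).
struct JointModel {
    double I = 0.01;   // inertia
    double b = 0.05;   // damping
    double c = 0.20;   // joint elasticity (spring constant)
    double kTau = 0.8; // motor torque constant: tau = kTau * u
};

// Predict the joint trajectory over N steps for a candidate input sequence
// u[0..N-1] and return the cumulative quadratic tracking cost
// J = sum_k Qe * e(k)^2 against a desired angle thetaDes.
double predictCost(const JointModel& m, double theta0, double omega0,
                   const std::vector<double>& u, double thetaDes,
                   double dt, double Qe,
                   double thetaMin, double thetaMax,
                   double omegaMin, double omegaMax) {
    double theta = theta0, omega = omega0, J = 0.0;
    for (double uk : u) {
        const double tau = m.kTau * uk;
        // I*thetaDDot + b*thetaDot + c*theta = tau  (external torque ignored)
        const double alpha = (tau - m.b * omega - m.c * theta) / m.I;
        omega = std::clamp(omega + alpha * dt, omegaMin, omegaMax);
        theta = std::clamp(theta + omega * dt, thetaMin, thetaMax);
        const double e = thetaDes - theta;   // tracking error e(k)
        J += Qe * e * e;
    }
    return J;
}

int main() {
    JointModel m;
    const std::vector<double> u(20, 0.25);   // constant candidate input, N = 20
    const double J = predictCost(m, 0.0, 0.0, u, /*thetaDes=*/0.8,
                                 /*dt=*/0.02, /*Qe=*/1.0,
                                 /*thetaMin=*/0.0, /*thetaMax=*/1.6,
                                 /*omegaMin=*/-3.0, /*omegaMax=*/3.0);
    std::printf("predicted tracking cost J = %.4f\n", J);
    // A full MPC controller would search over u to minimize J (e.g. with a
    // QP solver), apply only the first input, and repeat at the next step.
    return 0;
}
```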
Journal of VLSI Signal Processing39,295–311,2005c 2005Springer Science+Business Media,Inc.Manufactured in The Netherlands.Parallel-Beam Backprojection:An FPGA Implementation Optimizedfor Medical ImagingMIRIAM LEESER,SRDJAN CORIC,ERIC MILLER AND HAIQIAN YU Department of Electrical and Computer Engineering,Northeastern University,Boston,MA02115,USAMARC TREPANIERMercury Computer Systems,Inc.,Chelmsford,MA01824,USAReceived September2,2003;Revised March23,2004;Accepted May7,2004Abstract.Medical image processing in general and computerized tomography(CT)in particular can benefit greatly from hardware acceleration.This application domain is marked by computationally intensive algorithms requiring the rapid processing of large amounts of data.To date,reconfigurable hardware has not been applied to the important area of image reconstruction.For efficient implementation and maximum speedup,fixed-point implementations are required.The associated quantization errors must be carefully balanced against the requirements of the medical community.Specifically,care must be taken so that very little error is introduced compared tofloating-point implementations and the visual quality of the images is not compromised.In this paper,we present an FPGA implementation of the parallel-beam backprojection algorithm used in CT for which all of these requirements are met.We explore a number of quantization issues arising in backprojection and concentrate on minimizing error while maximizing efficiency.Our implementation shows approximately100times speedup over software versions of the same algorithm running on a1GHz Pentium,and is moreflexible than an ASIC implementation.Our FPGA implementation can easily be adapted to both medical sensors with different dynamic ranges as well as tomographic scanners employed in a wider range of application areas including nondestructive evaluation and baggage inspection in airport terminals.Keywords:backprojection,medical imaging,tomography,FPGA,fixed point arithmetic1.IntroductionReconfigurable hardware offers significant potentialfor the efficient implementation of a wide range ofcomputationally intensive signal and image process-ing algorithms.The advantages of utilizing Field Pro-grammable Gate Arrays(FPGAs)instead of DSPsinclude reductions in the size,weight,performanceand power required to implement the computationalplatform.FPGA implementations are also preferredover ASIC implementations because FPGAs have moreflexibility and lower cost.To date,the full utility ofthis class of hardware has gone largely unexploredand unexploited for many mainstream applications.In this paper,we consider a detailed implementa-tion and comprehensive analysis of one of the mostfundamental tomographic image reconstruction steps,backprojection,on reconfigurable hardware.While weconcentrate our analysis on issues arising in the useof backprojection for medical imaging applications,both the implementation and the analysis we providecan be applied directly or easily extended to a widerange of otherfields where this task needs to be per-formed.This includes remote sensing and surveillanceusing synthetic aperture radar and non-destructiveevaluation.296Leeser et al.Tomography refers to the process that generates a cross-sectional or volumetric image of an object from a series of projections collected by scanning the ob-ject from many different directions[1].Projection data acquisition can utilize X-rays,magnetic resonance,ra-dioisotopes,or ultrasound.The discussion presented here pertains to the case of 
two-dimensional X-ray ab-sorption tomography.In this type of tomography,pro-jections are obtained by a number of sensors that mea-sure the intensity of X-rays travelling through a slice of the scanned object.The radiation source and the sen-sor array rotate around the object in small increments. One projection is taken for each rotational angle.The image reconstruction process uses these projections to calculate the average X-ray attenuation coefficient in cross-sections of a scanned slice.If different structures inside the object induce different levels of X-ray atten-uation,they are discernible in the reconstructed image. The most commonly used approach for image recon-struction from dense projection data(many projections, many samples per projection)isfiltered backprojection (FBP).Depending on the type of X-ray source,FBP comes in parallel-beam and fan-beam variations[1].In this paper,we focus on parallel-beam backprojection, but methods and results presented here can be extended to the fan-beam case with modifications.FBP is a computationally intensive process.For an image of size n×n being reconstructed with n projec-tions,the complexity of the backprojection algorithm is O(n3).Image reconstruction through backprojection is a highly parallelizable process.Such applications are good candidates for implementation in Field Pro-grammable Gate Array(FPGA)devices since they pro-videfine-grained parallelism and the ability to be cus-tomized to the needs of a particular implementation. We have implemented backprojection by making use of these principles and shown approximately100times speedup over a software implementation on a1GHz Pentium.Our architecture can easily be expanded to newer and larger FPGA devices,further accelerating image generation by extracting more data parallelism.A difficulty of implementing FBP is that producing high-resolution images with good resemblance to in-ternal characteristics of the scanned object requires that both the density of each projection and their total num-ber be large.This represents a considerable challenge for hardware implementations,which attempt to maxi-mize the parallelism in the implementation.Therefore, it can be beneficial to usefixed-point implementations and to optimize the bit-width of a projection sample to the specific needs of the targeted application domain. 
We show this for medical imaging,which exhibits distinctive properties in terms of requiredfixed-point precision.In addition,medical imaging requires high precision reconstructions since visual quality of images must not be compromised.We have paid special attention to this requirement by carefully analyzing the effects of quan-tization on the quality of reconstructed images.We have found that afixed-point implementation with properly chosen bit-widths can give high quality reconstructions and,at the same time,make hardware implementation fast and area efficient.Our quantization analysis inves-tigates algorithm specific and also general data quanti-zation issues that pertain to input data.Algorithm spe-cific quantization deals with the precision of spatial ad-dress generation including the interpolation factor,and also investigates bit reduction of intermediate results for different rounding schemes.In this paper,we focus on both FPGA implemen-tation performance and medical image quality.In pre-vious work in the area of hardware implementations of tomographic processing algorithms,Wu[2]gives a brief overview of all major subsystems in a com-puted tomography(CT)scanner and proposes loca-tions where ASICs and FPGAs can be utilized.Ac-cording to the author,semi-custom digital ASICs were the most appropriate due to the level of sophistica-tion that FPGA technology had in1991.Agi et al.[3]present thefirst description of a hardware solu-tion for computerized tomography of which we are aware.It is a unified architecture that implements for-ward Radon transform,parallel-and fan-beam back-projection in an ASIC based multi-processor system. Our FPGA implementation focuses on backprojection. Agi et al.[4]present a similar investigation of quanti-zation effects;however their results do not demonstrate the suitability of their implementation for medical ap-plications.Although theirfiltered sinogram data are quantized with12-bit precision,extensive bit trunca-tion on functional unit outputs and low accuracy of the interpolation factor(absolute error of up to2)ren-der this implementation significantly less accurate than ours,which is based on9-bit projections and the max-imal interpolation factor absolute error of2−4.An al-ternative to using specially designed processors for the implementation offiltered backprojection(FBP)is pre-sented in[5].In this work,a fast and direct FBP al-gorithm is implemented using texture-mapping hard-ware.It can perform parallel-beam backprojection of aParallel-Beam Backprojection 297512-by-512-pixel image from 804projections in 2.1sec,while our implementation takes 0.25sec for 1024projections.Luiz et al.[6]investigated residue number systems (RNS)for the implementation of con-volution based backprojection to speedup the process-ing.Unfortunately,extra binary-to-RNS and RNS-to-binary conversions are introduced.Other approaches to accelerating the backprojection algorithm have been investigated [7,8].One approach [7]presents an order O (n 2log n )and merits further study.The suitability to medical image quality and hardware implementation of these approaches[7,8]needs to be demonstrated.There are also a lot of interests in the area of fan-beam and cone-beam reconstruction using hardware implementa-tion.An FPGA-based fan-beam reconstruction module [9]is proposed and simulated using MAX +PLUS2,version 9.1,but no actual FPGA implementation is mentioned.Moreover,the authors did not explore the potential parallelism for different projections as we do,which is essential for 
speed-up.More data and com-putation is needed for 3D cone-beam FBP.Yu’s PC based system [10]can reconstruct the 5123data from 288∗5122projections in 15.03min,which is not suit-able for real-time.The embedded system described in [11]can do 3D reconstruction in 38.7sec with the fastest time reported in the literature.However,itisFigure 1.(a)Illustration of the coordinate system used in parallel-beam backprojection,and (b)geometric explanation of the incremental spatial address calculation.based on a Mercury RACE ++AdapDev 1120devel-opment workstation and need many modifications for a different platform.Bins et al.[12]have investigated precision vs.error in JPEG compression.The goals of this research are very similar to ours:to implement de-signs in fixed-point in order to maximize parallelism and area utilization.However,JPEG compression is an application that can tolerate a great deal more error than medical imaging.In the next section,we present the backprojection algorithm in more detail.In Section 3we present our quantization studies and analysis of error introduced.Section 4presents the hardware implementation in de-tail.Finally we present results and discuss future di-rections.An earlier version of this research was pre-sented [13].This paper provides a fuller discussion of the project and updated results.2.Parallel-Beam Filtered BackprojectionA parallel-beam CT scanning system uses an array of equally spaced unidirectional sources of focused X-ray beams.Generated radiation not absorbed by the object’s internal structure reaches a collinear array of detectors (Fig.1(a)).Spatial variation of the absorbed298Leeser et al.energy in the two-dimensional plane through the ob-ject is expressed by the attenuation coefficient µ(x ,y ).The logarithm of the measured radiation intensity is proportional to the integral of the attenuation coef-ficient along the straight line traversed by the X-ray beam.A set of values given by all detectors in the array comprises a one-dimensional projection of the attenu-ation coefficient,P (t ,θ),where t is the detector dis-tance from the origin of the array,and θis the angle at which the measurement is taken.A collection of pro-jections for different angles over 180◦can be visualized in the form of an image in which one axis is position t and the other is angle θ.This is called a sinogram or Radon transform of the two-dimensional function µ,and it contains information needed for the reconstruc-tion of an image µ(x ,y ).The Radon transform can be formulated aslog e I 0I d= µ(x ,y )δ(x cos θ+y sin θ−t )dx dy≡P (t ,θ)(1)where I 0is the source intensity,I d is the detected inten-sity,and δ(·)is the Dirac delta function.Equation (1)is actually a line integral along the path of the X-ray beam,which is perpendicular to the t axis (see Fig.1(a))at location t =x cos θ+y sin θ.The Radon transform represents an operator that maps an image µ(x ,y )to a sinogram P (t ,θ).Its inverse mapping,the inverse Radon transform,when applied to a sinogram results in an image.The filtered backprojection (FBP)algo-rithm performs this mapping [1].FBP begins by high-pass filtering all projections be-fore they are fed to hardware using the Ram-Lak or ramp filter,whose frequency response is |f |.The dis-crete formulation of backprojection isµ(x ,y )=πK Ki =1 θi(x cos θi +y sin θi ),(2)where θ(t )is a filtered projection at angle θ,and K is the number of projections taken during CT scanning at angles θi over a 180◦range.The number of val-ues in θ(t )depends on the image size.In the case of n ×n 
pixel images,N =√n D detectors are re-quired.The ratio D =d /τ,where d is the distance between adjacent pixels and τis the detector spac-ing,is a critical factor for the quality of the recon-structed image and it obviously should satisfy D >1.In our implementation,we utilize values of D ≈1.4and N =1024,which are typical for real systems.Higher values do not significantly increase the image quality.Algorithmically,Eq.(2)is implemented as a triple nested “for”loop.The outermost loop is over pro-jection angle,θ.For each θ,we update every pixel in the image in raster-scan order:starting in the up-per left corner and looping first over columns,c ,and next over rows,r .Thus,from (2),the pixel at loca-tion (r ,c )is incremented by the value of θ(t )where t is a function of r and c .The issue here is that the X-ray going through the currently reconstructed pixel,in general,intersects the detector array between detec-tors.This is solved by linear interpolation.The point of intersection is calculated as an address correspond-ing to detectors numbered from 0to 1023.The frac-tional part of this address is the interpolation factor.The equation that performs linear interpolation is given byint θ(i )=[ θ(i +1)− θ(i )]·I F + θ(i ),(3)where IF denotes the interpolation factor, θ(t )is the 1024element array containing filtered projection data at angle θ,and i is the integer part of the calculated address.The interpolation can be performed before-hand in software,or it can be a part of the backpro-jection hardware itself.We implement interpolation in hardware because it substantially reduces the amount of data that must be transmitted to the reconfigurable hardware board.The key to an efficient implementation of Eq.(2)is shown in Fig.1(b).It shows how a distance d between square areas that correspond to adjacent pixels can be converted to a distance t between locations where X-ray beams that go through the centers of these areas hit the detector array.This is also derived from the equa-tion t =x cos θ+y sin θ.Assuming that pixels are pro-cessed in raster-scan fashion,then t =d cos θfor two adjacent pixels in the same row (x 2=x 1+d )and sim-ilarly t =d sin θfor two adjacent pixels in the same column (y 2=y 1−d ).Our implementation is based on pre-computing and storing these deltas in look-up tables(LUTs).Three LUTs are used corresponding to the nested “for”loop structure of the backprojection algorithm.LUT 1stores the initial address along the detector axis (i.e.along t )for a given θrequired to update the pixel at row 1,column 1.LUT 2stores the increment in t required as we increment across a row.LUT 3stores the increment for columns.Parallel-Beam Backprojection299Figure2.Major simulation steps.3.QuantizationMapping the algorithm directly to hardware will not produce an efficient implementation.Several modifica-tions must be made to obtain a good hardware realiza-tion.The most significant modification is usingfixed-point arithmetic.For hardware implementation,narrow bit widths are preferred for more parallelism which translates to higher overall processing speed.How-ever,medical imaging requires high precision which may require wider bit widths.We did extensive analy-sis to optimize this tradeoff.We quantize all data and all calculations to increase the speed and decrease the re-sources required for implementation.Determining al-lowable quantization is based on a software simulation of the tomographic process.Figure2shows the major blocks of the simulation. 
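Before moving on to quantization, a compact software model of the incremental address scheme just described may be helpful. The C++ sketch below is a plain floating-point reference, not the FPGA datapath from the paper: for each projection angle it takes a starting detector address (the role of LUT 1), adds the row and column increments (LUTs 2 and 3) while scanning the image in raster order, splits the address into an integer detector index and a fractional interpolation factor, and applies the linear interpolation of Eq. (3). The centering convention and sign choices in the start-address computation are assumptions made for the example.

```cpp
#include <cmath>
#include <vector>

// Floating-point reference model of LUT-based incremental backprojection.
// img is n x n (row-major); proj[k] holds N filtered samples for angle k.
void backproject(std::vector<double>& img, int n,
                 const std::vector<std::vector<double>>& proj,
                 int N, double D /* detector spacing ratio d/tau */) {
    const double pi = std::acos(-1.0);
    const int K = static_cast<int>(proj.size());
    const double center = N / 2.0;                 // assumed detector-array origin

    for (int k = 0; k < K; ++k) {
        const double theta = pi * k / K;
        // Per-angle LUT contents: start address and per-step increments.
        const double colInc = D * std::cos(theta); // LUT 2: step along a row
        const double rowInc = D * std::sin(theta); // LUT 3: step down a column
        // LUT 1: address for the pixel at row 0, column 0 (assumed centering).
        const double rowStart0 =
            center - (n / 2.0) * colInc + (n / 2.0) * rowInc;

        double rowStart = rowStart0;
        for (int r = 0; r < n; ++r) {
            double addr = rowStart;
            for (int c = 0; c < n; ++c) {
                const int    i  = static_cast<int>(std::floor(addr)); // detector index
                const double IF = addr - i;                           // interpolation factor
                if (i >= 0 && i + 1 < N) {
                    // Eq. (3): linear interpolation between adjacent detectors.
                    const double v =
                        (proj[k][i + 1] - proj[k][i]) * IF + proj[k][i];
                    img[r * n + c] += (pi / K) * v;                   // Eq. (2) scaling
                }
                addr += colInc;   // next pixel in the row: t increases by d*cos(theta)
            }
            rowStart -= rowInc;   // next row: y decreases by d, so t drops by d*sin(theta)
        }
    }
}
```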
An input image isfirst fed to the software implementa-tion of the Radon transform,also known as reprojection [14],which generates the sinogram of1024projections and1024samples per projection.Thefiltering block convolves sinogram data with the impulse response of the rampfilter generating afiltered sinogram,which is then backprojected to give a reconstructed image.All values in the backprojection algorithm are real numbers.These can be implemented as eitherfloating-point orfixed-point values.Floating-point represen-tation gives increased dynamic range,but is signifi-cantly more expensive to implement in reconfigurable hardware,both in terms of area and speed.For these reasons we have chosen to usefixed-point arithmetic. An important issue,especially in medical imaging,is how much numerical accuracy is sacrificed whenfixed-point values are used.Here,we present the methods used tofind appropriate bit-widths for maintaining suf-ficient numerical accuracy.In addition,we investigate possibilities for bit reduction on the outputs of certain functional units in the datapath for different rounding schemes,and what influence that has on the error intro-duced in reconstructed images.Our analysis shows that medical images display distinctive properties with re-spect to how different quantization choices affect their reconstruction.We exploit this and customize quan-tization to bestfit medical images.We compute the quantization error by comparing afixed-point image reconstruction with afloating-point one.Fixed-point variables in our design use a general slope/bias-encoding,meaning that they are represented asV≈V a=SQ+B,(4) where V is an arbitrary real number,V a is itsfixed-point approximation,Q is an integer that encodes V,S is the slope,and B is the bias.Fixed-point versions of the sinogram and thefiltered sinogram use slope/bias scaling where the slope and bias are calculated to give maximal precision.The quantization of these two vari-ables is calculated as:S=max(V)−min(V)max(Q)−min(Q)=max(V)−min(V)2,(5) B=max(V)−S·max(Q)orB=min(V)−S·min(Q),(6) Q=roundV−BS,(7)where ws is the word size in bits of integer Q.Here, max(V)and min(V)are the maximum and mini-mum values that V will take,respectively.max(V) was determined based on analysis of data.Since sinogram data are unsigned numbers,in this case min(V)=min(Q)=B=0.The interpolation factor is an unsigned fractional number and uses radix point-only scaling.Thus,the quantized interpolation factor is calculated as in Eq.(7),with saturation on overflow, with S=2−E where E is the number of fractional bits, and with B=0.For a given sinogram,S and B are constants and they do not show up in the hardware—only the quan-tized value Q is part of the hardware implementation. 
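The slope/bias encoding described above is easy to prototype in software. The following C++ sketch is an illustrative reference, not the quantizer used by the authors: for a given word size it derives the maximal-precision slope S and bias B from the data range (Eqs. (5)-(6), assuming max(Q)-min(Q) = 2^ws - 1 for an unsigned ws-bit code), quantizes values with rounding and saturation (Eq. (7)), and reconstructs the fixed-point approximation V ~ S*Q + B (Eq. (4)). The sample values and the 12-bit word size in main() are only examples.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Maximal-precision slope/bias quantizer for an unsigned ws-bit integer code.
struct SlopeBias {
    double S;   // slope
    double B;   // bias
    int    ws;  // word size in bits
};

SlopeBias deriveScaling(double vMin, double vMax, int ws) {
    const double qMax = std::pow(2.0, ws) - 1.0;   // max(Q) for unsigned ws bits
    const double S = (vMax - vMin) / qMax;         // Eq. (5)
    const double B = vMin;                         // Eq. (6) with min(Q) = 0
    return {S, B, ws};
}

// Eq. (7): Q = round((V - B) / S), saturated to the representable range.
std::uint32_t quantize(double v, const SlopeBias& sb) {
    const double qMax = std::pow(2.0, sb.ws) - 1.0;
    double q = std::round((v - sb.B) / sb.S);
    q = std::clamp(q, 0.0, qMax);
    return static_cast<std::uint32_t>(q);
}

// Eq. (4): fixed-point approximation V_a = S*Q + B.
double dequantize(std::uint32_t q, const SlopeBias& sb) {
    return sb.S * q + sb.B;
}

int main() {
    // Toy "sinogram" samples; unsigned data, so min(V) = B = 0 as in the text.
    const std::vector<double> samples = {0.0, 13.7, 255.9, 511.2, 1023.0};
    const SlopeBias sb = deriveScaling(0.0, 1023.0, /*ws=*/12);

    for (double v : samples) {
        const std::uint32_t q = quantize(v, sb);
        const double va = dequantize(q, sb);
        std::printf("V=%8.3f  Q=%4u  V_a=%8.3f  |err|=%.4f\n",
                    v, static_cast<unsigned>(q), va, std::fabs(v - va));
    }
    return 0;
}
```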
Note that in Eq.(3),two data samples are subtracted from each other before multiplication with the inter-polation factor takes place.Thus,in general,the bias B is eliminated from the multiplication,which makes quantization offiltered sinogram data with maximal precision scaling easily implementable in hardware.300Leeser etal.Figure 3.Some of the images used as inputs to the simulation process.The next important issue is the metric used for evalu-ating of the error introduced by quantization.Our goal was to find a metric that would accurately describe vi-sual differences between compared images regardless of their dynamic range.If 8-bit and 16-bit versions of a single image are reconstructed so that there is no vis-ible difference between the original and reconstructed images,the proper metric should give a comparable estimate of the error for both bit-widths.The proper metric should also be insensitive to the shift of pixel value range that can emerge for different quantization and rounding schemes.Absolute values of single pix-els do not effect visual image quality as long as their relative value is preserved,because pixel values are mapped to a set of grayscale values.The error metric we use that meets these criteria is the Relative Error (RE):RE = M i =1 (x i −¯x )− y F P i−¯y F P 2M i =1 y F P i−¯y F P ,(8)Here,M is the total number of pixels,x i and y F Pi are the values of the i -th pixel in the quantized and floating-point reconstructions respectively,and ¯x,¯y FP are their means.The mean value is sub-tracted because we only care about the relative pixel values.Figure 3shows some characteristic images from a larger set of 512-by-512-pixel images used as inputs to the simulation process.All images are monochrome 8-bit images,but 16-bit versions are also used in simu-lations.Each image was chosen for a certain reason.For example,the Shepp-Logan phantom is well known and widely used in testing the ability of algorithms to accu-rately reconstruct cross sections of the human head.It is believed that cross-sectional images of the human head are the most sensitive to numerical inaccuracies and the presence of artifacts induced by a reconstruction algo-rithm [1].Other medical images were Female,Head,and Heart obtained from the visible human web site [15].The Random image (a white noise image)should result in the upper bound on bit-widths required for a precise reconstruction.The Artificial image is unique because it contains all values in the 8-bit grayscale range.This image also contains straight edges of rect-angles,which induce more artifacts in the reconstructed image.This is also characteristic of the Head image,which contains a rectangular border around the head slice.Figure 4shows the detailed flowchart of the simu-lated CT process.In addition to the major blocks des-ignated as Reproject,Filter and Backproject,Fig.4also includes the different quantization steps that we have investigated.Each path in this flowchart rep-resents a separate simulation cycle.Cycle 1gives aParallel-Beam Backprojection301Figure 4.Detailed flowchart of the simulation process.floating-point (FP)reconstruction of an input image.All other cycles perform one or more type of quan-tization and their resulting images are compared to the corresponding FP reconstruction by computing the Relative Error.The first quantization step converts FP projection data obtained by the reprojection step to a fixed-point representation.Simulation cycle 2is used to determine how different bit-widths for quantized sino-gram data 
affect the quality of a reconstructed image.Our research was based on a prototype system that used 12-bit accurate detectors for the acquisition of sino-gram data.Simulations showed that this bit-width is a good choice since worst case introduced error amounts to 0.001%.The second quantization step performsthe Figure 5.Simulation results for the quantization of filtered sinogram data.conversion of filtered sinogram data from FP to fixed-point representation.Simulation cycle 3is used to find the appropriate bit-width of the words representing a filtered sinogram.Figure 5shows the results for this cycle.Since we use linear interpolation of projection values corresponding to adjacent detectors,the interpo-lation factor in Eq.(3)also has to be quantized.Figure 6summarizes results obtained from simulation cycle 4,which is used to evaluate the error induced by this quantization.Figures 5and 6show the Relative Error metric for different word length values and for different simula-tion cycles for a number of input images.Some input images were used in both 8-bit and 16-bit versions.302Leeser etal.Figure 6.Simulation results for the quantization of the interpolation factor.Figure 5corresponds to the quantization of filtered sinogram data (path 3in Fig.4).The conclusion here is that 9-bit quantization is the best choice since it gives considerably smaller error than 8-bit quantiza-tion,which for some images induces visible artifacts.At the same time,10-bit quantization does not give vis-ible improvement.The exceptions are images 2and 3,which require 13bits.From Fig.6(path 4in Fig.4),we conclude that 3bits for the interpolation factor (mean-ing the maximum error for the spatial address is 2−4)Figure 7.Relative error between fixed-point and floating-point reconstruction.is sufficiently accurate.As expected,image 1is more sensitive to the precision of the linear interpolation be-cause of its randomness.Figure 7shows that combining these quantization schemes results in a very small error for image “Head”in Fig.3.We also investigated whether it is feasible to discard some of the least significant bits (LSBs)on outputs of functional units (FUs)in the datapath and still not introduce any visible artifacts.The goal is for the re-constructed pixel values to have the smallest possibleParallel-Beam Backprojection 303bit-widths.This is based on the intuition that bit re-duction done further down the datapath will introduce a smaller amount of error in the result.If the same bit-width were obtained by simply quantizing filtered projection data with fewer bits,the error would be mag-nified by the operations performed in the datapath,es-pecially by the multiplication.Path number 5in Fig.4depicts the simulation cycles that investigates bit reduc-tion at the outputs of three of the FUs.These FUs imple-ment subtraction,multiplication and addition that are all part of the linear interpolation from Eq.(3).When some LSBs are discarded,the remaining part of a binary word can be rounded in different ways.We investigate two different rounding schemes,specifically rounding to nearest and truncation (or rounding to floor).Round-ing to nearest is expected to introduce the smallest er-ror,but requires additional logic resources.Truncation has no resource requirements,but introduces a nega-tive shift of values representing reconstructed pixels.Bit reduction effectively optimizes bit-widths of FUs that are downstream in the data flow.Figure 8shows tradeoffs of bit reduction and the two rounding schemes after multiplication for 
medi-cal images.It should be noted that sinogram data are quantized to 12bits,filtered sinogram to 9bits,and the interpolation factor is quantized to 3bits (2−4pre-cision).Similar studies were done for the subtraction and addition operations and on a broader set of im-ages.It was determined that medical images suffer the least amount of error introduced by combining quanti-zations and bit reduction.For medical images,in case of rounding to nearest,there is very little difference inthe Figure 8.Bit reduction on the output of the interpolation multiplier.introduced error between 1and 3discarded bits after multiplication and addition.This difference is higher in the case of bit reduction after addition because the multiplication that follows magnifies the error.For all three FUs,when only medical images are considered,there is a fixed relationship between rounding to near-est and truncation.Two least-significant bits discarded with rounding to nearest introduce an error that is lower than or close to the error of 1bit discarded with trun-cation.Although rounding to nearest requires logic re-sources,even when only one LSB is discarded with rounding to nearest after each of three FUs,the overall resource consumption is reduced because of savings provided by smaller FUs and pipeline registers (see Figs.11and 12).Figure 9shows that discarding LSBs introduces additional error on medical images for this combination of quantizations.In our case there was no need for using bit reduction to achieve smaller resource consumption because the targeted FPGA chip (Xilinx Virtex1000)provided sufficient logic resources.There is one more quantization issue we considered.It pertains to data needed for the generation of the ad-dress into a projection array (spatial address addr )and to the interpolation factor.As described in the intro-duction,there are three different sets of data stored in look-up tables (LUTs)that can be quantized.Since pixels are being processed in raster-scan order,the spa-tial address addr is generated by accumulating entries from LUTs 2and 3to the corresponding entry in LUT 1.The 10-bit integer part of the address addr is the index into the projection array θ(·),while its fractional part is the interpolation factor.By using radix point-only。
面向对象编程英语Object-oriented programming (OOP) is a programming paradigm based on the concept of "objects". It is widely used today for developing software applications, and it has become an essential skill for software developers. In this article, we will discuss the basics of OOP and the importance of mastering it.1. What is OOP?OOP is a programming concept that revolves around the idea of "objects". An object is an instance of a class, which is a blueprint for creating objects. Classes define the attributes and behavior of an object, and objects caninteract with each other through their methods. OOP is built on three main principles: encapsulation, inheritance, and polymorphism.2. EncapsulationEncapsulation is the process of hiding the implementation details of an object from the outside world. It protects the object's data and methods from being accessed or modified by unauthorized code. Encapsulation helps to improve the robustness and maintainability of the code by reducing the side-effects of modifying an object's data.3. InheritanceInheritance is a mechanism that allows a class toinherit attributes and behavior from a parent or base class. It simplifies code by allowing developers to reuse code without duplication. Inheritance enables developers to build complex systems by arranging classes in a hierarchicalstructure.4. PolymorphismPolymorphism is the ability of objects to take on different forms. It allows developers to create code that can work with multiple types of objects at once, without needing to know the exact class or type of each object. Polymorphism is achieved through methods that can accept parameters of different types.5. Why is OOP important?OOP has become a fundamental skill for software developers because of its many benefits. OOP allows developers to write more organized, modular, and reusable code. It improves code readability and makes it easier to understand, maintain, and test. OOP also enables teamwork by dividing work into smaller components that can be developed independently. Additionally, OOP is widely used in modern software development frameworks and technologies, such as Java, Python, and .NET.ConclusionIn conclusion, OOP is a powerful programming paradigm that has become essential for software developers. Its fundamental principles of encapsulation, inheritance, and polymorphism enable developers to write more organized, modular, and reusable code. Mastery of OOP is crucial for building complex software systems, improving team productivity, and staying up-to-date with modern software development technologies.。
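Since the article above discusses encapsulation, inheritance, and polymorphism in the abstract, a short concrete example may help. The C++ sketch below is a generic illustration of the three principles, not code from any framework mentioned in the article; the class names are invented for the example.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Encapsulation: the balance is private and can only change through deposit().
class Account {
public:
    explicit Account(std::string owner) : owner_(std::move(owner)) {}
    virtual ~Account() = default;

    void deposit(double amount) {
        if (amount > 0) balance_ += amount;   // invalid updates are rejected
    }
    double balance() const { return balance_; }
    const std::string& owner() const { return owner_; }

    // Polymorphism: derived classes supply their own interest rule.
    virtual double monthlyInterest() const { return 0.0; }

private:
    std::string owner_;
    double balance_ = 0.0;
};

// Inheritance: SavingsAccount reuses Account and overrides one behaviour.
class SavingsAccount : public Account {
public:
    SavingsAccount(std::string owner, double rate)
        : Account(std::move(owner)), rate_(rate) {}
    double monthlyInterest() const override { return balance() * rate_ / 12.0; }

private:
    double rate_;
};

int main() {
    std::vector<std::unique_ptr<Account>> accounts;
    accounts.push_back(std::make_unique<Account>("Ada"));
    accounts.push_back(std::make_unique<SavingsAccount>("Grace", 0.03));

    for (auto& a : accounts) a->deposit(1000.0);

    // The same call site works for both types: the override is chosen at run time.
    for (const auto& a : accounts)
        std::cout << a->owner() << ": interest " << a->monthlyInterest() << "\n";
    return 0;
}
```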
《计算机英语(第2版)》参考答案注:这里仅给出《计算机英语(第2版)》新增或变化课文的答案,其他未改动课文答案参见《计算机英语(第1版)》原来的答案。
Unit OneSection CPDA Prizefight: Palm vs. Pocket PCI. Fill in the blanks with the information given in the text:1. With DataViz’s Documents To Go, you can view and edit desktop documents on your PDA without converting them first to a PDA-specific ________. (format)2. Both Palm OS and Windows Mobile PDAs can offer e-mail via ________ so that new messages received on your desktop system are transferred to the PDA for on-the-go reading. (synchronization)3. The Windows Mobile keyboard, Block Recognizer, and Letter Recognizer are all ________ input areas, meaning they appear and disappear as needed. (virtual)4. Generally speaking, Windows Mobile performs better in entering information and playing ________ files while Palm OS offers easier operation, more ________ programs, better desktop compatibility, and a stronger e-mail application. (multimedia; third-party)II. Translate the following terms or phrases from English into Chinese and vice versa:1. data field数据字段2. learning curve学习曲线3. third-party solution第三方解决方案4. Windows Media Player Windows媒体播放器5. 开始按钮Start button6. 指定输入区designated input area7. 手写体识别系统handwriting-recognition system8. 字符集character setUnit ThreeSection BLonghorn:The Next Version of WindowsI. Fill in the blanks with the information given in the text:1. NGSCB, the new security architecture Microsoft is developing for Longhorn, splits the OS into two parts: a standard mode and a(n) ________ mode. (secure)2. It is reported that Longhorn will provide different levels of operation that disable the more intensive Aero effects to boost ________ on less capable PCs. (performance)3. With Longhorn’s new graphics and presentation engine, we can create and display Tiles on the desktop, which remind us of the old Active Desktop but are based on ________ instead of ________. (XML; HTML)4. The most talked-about feature in Longhorn so far is its new storage system, WinFS, whichworks like a(n) ________ database. (relational)II. Translate the following terms or phrases from English into Chinese and vice versa:1. search box搜索框2. built-in firewall内置防火墙3. standalone application独立应用程序4. active desktop 活动桌面5. mobile device移动设备6. 专有软件proprietary software7. 快速加载键quick-launch key8. 图形加速器graphics accelerator9. 虚拟文件夹virtual folder10. 三维界面three-dimensional interfaceUnit FourSection CArraysI. Fill in the blanks with the information given in the text:1. Given the array called object with 20 elements, if you see the term object10, you know the array is in ________ form; if you see the term object[10], you know the array is in ________ form. (subscript; index)2. In most programming languages, an array is a static data structure. When you define an array, the size is ________. (fixed)3. A(n) ________ is a pictorial representation of a frequency array. (histogram)4. An array that consists of just rows and columns is probably a(n) ________ array. (two-dimensional)II. Translate the following terms or phrases from English into Chinese and vice versa:1. bar chart条形图2. frequency array频率数组3. graphical representation图形表示4. multidimensional array多维数组5. 用户视图user(’s) view6. 下标形式subscript form7. 一维数组one-dimensional array8. 编程结构programming constructUnit FiveSection BMicrosoft .NET vs. J2EEI. Fill in the blanks with the information given in the text:1. One of the differences between C# and Java is that Java runs on any platform with a Java Virtual ________ while C# only runs in Windows for the foreseeable future. (Machine)2. 
With .NET, Microsoft is opening up a channel both to ________ in other programming languages and to ________. (developers; components)3. J2EE is a single-language platform; calls from/to objects in other languages are possiblethrough ________, but this kind of support is not a ubiquitous part of the platform. (CORBA)4. One important element of the .NET platform is a common language ________, which runs bytecodes in an Internal Language format. (runtime)II. Translate the following terms or phrases from English into Chinese and vice versa:1. messaging model消息收发模型2. common language runtime通用语言运行时刻(环境)3. hierarchical namespace分等级层次的名称空间4. development community开发社区5. CORBA公用对象请求代理(程序)体系结构6. 基本组件base component7. 元数据标记metadata tag8. 虚拟机virtual machine9. 集成开发环境IDE(integrated development environment)10. 简单对象访问协议SOAP(Simple Object Access Protocol)Unit SixSection ASoftware Life CycleI. Fill in the blanks with the information given in the text:1. The development process in the software life cycle involves four phases: analysis, design, implementation, and ________. (testing)2. In the system development process, the system analyst defines the user, needs, requirements and methods in the ________ phase. (analysis)3. In the system development process, the code is written in the ________ phase. (implementation)4. In the system development process, modularity is a very well-established principle used in the ________ phase. (design)5. The most commonly used tool in the design phase is the ________. (structure chart)6. In the system development process, ________ and pseudocode are tools used by programmers in the implementation phase. (flowcharts)7. Pseudocode is part English and part program ________. (logic)8. While black box testing is done by the system test engineer and the ________, white box testing is done by the ________. (user; programmer)II. Translate the following terms or phrases from English into Chinese and vice versa:1. standard graphical symbol标准图形符号2. logical flow of data标准图形符号3. test case测试用例4. program validation程序验证5. white box testing白盒测试6. student registration system学生注册系统7. customized banking package定制的金融软件包8. software life cycle软件生命周期9. user working environment用户工作环境10. implementation phase实现阶段11. 测试数据test data12. 结构图structure chart13. 系统开发阶段system development phase14. 软件工程software engineering15. 系统分析员system(s) analyst16. 测试工程师test engineer17. 系统生命周期system life cycle18. 设计阶段design phase19. 黑盒测试black box testing20. 会计软件包accounting packageIII. Fill in each of the blanks with one of the words given in the following list, making changes if necessary:development; testing; programmer; chart; engineer; attend; interfacessystem; software; small; userdevelop; changes; quality; board; UncontrolledIV. Translate the following passage from English into Chinese:软件工程是软件开发的一个领域;在这个领域中,计算机科学家和工程师研究有关的方法与工具,以使高效开发正确、可靠和健壮的计算机程序变得容易。
1. Preface
2. Mathematics
3. Data Structures & Algorithms
4. Compiler (compiler principles)
5. Operating System
6. Database
7. C (the C language)
8. C++ (the C++ language)
9. Object-Oriented
10. Software Engineering
11. UNIX Programming
12. UNIX Administration
13. Networks
14. Windows Programming
15. Other (*)

Mathematics

Title (English): Discrete Mathematics and Its Applications (Fifth Edition)
Title (Chinese): 离散数学及其应用(第五版)
Author: Kenneth H. Rosen

Title (English): Concrete Mathematics: A Foundation for Computer Science (Second Edition)
Title (Chinese): 具体数学:计算机科学基础(第2版)
Authors: Ronald L. Graham / Donald E. Knuth / Oren Patashnik

Data Structures & Algorithms

Title (English): Data Structures and Algorithm Analysis in C, Second Edition
Title (Chinese): 数据结构与算法分析--C语言描述(第二版)
Author: Mark Allen Weiss

Title (English): Data Structures & Program Design in C (Second Edition)
Title (Chinese): 数据结构与程序设计C语言描述(第二版)
Authors: Robert Kruse / C. L. Tondo / Bruce Leung
Units 1, 4, 5, 6, 8 Translations
unit 1
4 Translate the paragraph into Chinese.
篮球运动是一个名叫詹姆斯·奈史密斯的体育老师发明的。
1891年冬天,他接到一个任务,要求他发明一种运动,让田径运动员既保持良好的身体状态,又能不受伤害。
篮球在大学校园里很快流行起来。
20世纪40年代,职业联赛开始之后,美国职业篮球联赛一直从大学毕业生里招募球员。
这样做对美国职业篮球联赛和大学双方都有好处:大学留住了可能转向职业篮球赛的学生,而美国职业篮球联赛无需花钱组建一个小职业篮球联盟。
大学篮球在全国的普遍推广以及美国大学体育协会对“疯狂三月”(即美国大学体育协会甲级联赛男篮锦标赛)的市场推广,使得这项大学体育赛事一直在蓬勃发展。
The sport of basketball was created by a physical education teacher named James Naismith, who in the winter of 1891 was given the task of creating a game that would keep track athletes in shape without risking them getting hurt a lot. Basketball quickly became popular on college campuses. When the professional league was established in the 1940s, the National Basketball Association (NBA) drafted players who had graduated from college.This was a mutually beneficial relationship for the NBA and colleges — the colleges held onto players who would otherwise go professional, and the NBA did not have to fund a minor league. The pervasiveness of college basketball throughout the nation and the NCAA’ s (美国大学体育协会) marketing of “March Madness” (officially the NCAA Division I Men’ s Basketball Championship), have kept the college game alive and well.5 Translate the paragraph into English.现在中国大学生参加志愿活动已成为常态。
Software Engineering Exercises
1. Examine failures that have occurred in software you have written. Identify and list the faults and errors that led to each failure.
2. Describe your process of getting to class or to work in the morning, and draw a diagram to express this process.
3. What is the difference between a static model and a dynamic model? Explain the role and use of each kind of model.
4. Describe the process of obtaining a degree (bachelor's, master's, or doctoral) as a work breakdown structure. Draw the activity graph for the process. What is the critical path?
5. A prediction produces an estimate E, which is eventually compared with the actual value A. Devise two values that can be computed from E and A to help determine the accuracy of the estimate. Define the two values and discuss how each can be used to tell us whether a prediction is acceptable.
6. Describe two different size measures, and point out the advantages and disadvantages of each.
7. The requirements for most systems specify in detail what the system is expected to do. Should such requirements also specify what the system is not supposed to do? If your answer is no, explain why; if yes, give an example.
8. The following statements describe (hypothetical) program modules. For each module, decide whether it is likely to have high or low cohesion. If the cohesion is low, explain why.
a. The module "InventorySearchByID" searches inventory records for matches to a specified range of ID numbers. It returns a data structure containing any matching records.
b. The module "ProcessPurchase" removes the purchased product from the inventory, prints a receipt for the customer, and updates the log.
c. The module "FindSet" processes a user's request, determines a list of items that satisfy the request, and lists them in a format that can be shown to the customer.
9. Why is it useful to divide design into two phases, system design and program design?
10. Suppose you are building an operations system for a bookstore whose revenue comes from two different services: customers buying books, and customers bringing in their own books to be rebound. Two different classes are to be designed for the two services, both inheriting from a "sale item" class. What are the possible benefits of doing this? Are there possible reasons in this example not to allow inheritance? Describe in detail what factors would affect your decision.
11. Section 6.7 discusses Chidamber and Kemerer's depth-of-inheritance metric. Why would a class that is deep in the inheritance hierarchy seem harder to understand and maintain than a class that is relatively shallow in the hierarchy?
12. Explain the relationship between design and implementation.
CArchive使用的一种错误方式Introduction介绍Object serialization is one of the most powerful features of MFC. 对象串行化是MFC的一个重要特性。
With it you can store your objects into a file and let MFC recreate your objects from that file.利用它你可以将你的对象存入一个文件,然后让MFC通过读取文件去重新创建它们。
Unfortunately there are some pitfalls in using it correctly.不幸的是,这里面有很多的陷阱。
So it happened that yesterday I spent a lot of time in understanding what I did wrong while programming a routine that wrapped up some objects for a drag and drop operation. 昨天我花了很长的时候去查清一个实现拖放操作时出现的BUG,After a long debugging session I understood what happened and so I think that it might be a good idea to share my newly acquired knowledge with you.我花了很长的时间才调试出来,所以我觉得有必要和大家分享一下。
The Wrong Way错误的方式Let's start with my wrong implementation of object serialization.先说我的错误的方式。
A very convenient way to implement drag and drop and clipboard operations is to wrap up your objects in a COleDataSource object. 一个实现拖放操作非常方便的方式是把你的对象封装到一个COleDataSource对象里。
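For readers who have not used MFC serialization before, the canonical pattern the article refers to looks roughly like the sketch below. It is a generic illustration of the documented CObject/CArchive mechanism (it requires the MFC headers to build); it is not the author's drag-and-drop code, and the class and member names are invented for the example.

```cpp
// Minimal sketch of the documented MFC serialization pattern (requires MFC).
#include <afx.h>   // CObject, CArchive, CString

class CMyData : public CObject {
    DECLARE_SERIAL(CMyData)
public:
    CMyData() : m_value(0) {}

    int m_value;
    CString m_name;

    virtual void Serialize(CArchive& ar)
    {
        CObject::Serialize(ar);
        if (ar.IsStoring()) {
            // Writing the object out to the archive (e.g. a file).
            ar << m_value << m_name;
        } else {
            // MFC recreates the object and we read the members back
            // in exactly the same order they were written.
            ar >> m_value >> m_name;
        }
    }
};

// Schema version 1; bumping it lets Serialize handle older file formats.
IMPLEMENT_SERIAL(CMyData, CObject, 1)
```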
设计翻译成英文DesignDesign is the process of creating and refining a plan or blueprint for the construction or implementation of an object, system, or process. It involves carefully considering and combining various elements and factors to achieve a desired outcome.Design can be applied to various fields and disciplines, such as architecture, fashion, graphic design, industrial design, interior design, and web design. Each field has its own unique considerations and requirements, but all share a common goal: to create something functional, aesthetically pleasing, and meaningful.In architecture, design entails the creation of detailed plans, drawings, and specifications for buildings and structures. Architects take into account factors such as aesthetics, functionality, safety, and sustainability when designing structures. They must consider the needs and preferences of the client, as well as the context and environment in which the building will be located.Fashion design involves the creation of clothing, footwear, and accessories. Designers in this field must consider factors such as trends, market demand, and the target audience. They often draw inspiration from various sources, such as art, culture, and nature, to create unique and visually appealing garments.Graphic design focuses on visual communication. Graphic designers create visual content for various mediums, such as print,web, and social media. Their designs may include logos, brochures, posters, websites, and advertisements. They must consider factors such as the message, audience, and medium when creating their designs.Industrial design involves the creation of products for mass production. Industrial designers consider factors such as usability, ergonomics, aesthetics, and cost when designing products. Their designs may range from household appliances to automobiles.Interior design involves the planning and design of interior spaces. Interior designers consider factors such as functionality, aesthetics, and safety when designing spaces. They work with clients to create spaces that meet their needs and preferences while ensuring that the design is practical and visually pleasing.Web design focuses on the creation and layout of websites. Web designers consider factors such as usability, accessibility, and aesthetics when designing websites. They must create user-friendly and visually appealing interfaces that effectively communicate the desired message.In all fields of design, the process usually involves several stages, including research, conceptualization, development, and refinement. Designers often collaborate with other professionals, such as engineers, architects, and marketers, to ensure that the final product meets the desired objectives.In conclusion, design is a multifaceted process that involves careful consideration of various factors and elements to createsomething functional, aesthetically pleasing, and meaningful. Design can be applied to various fields and disciplines, with each having its own unique considerations and requirements. Through research, conceptualization, and collaboration, designers strive to create innovative and impactful designs.。
2021 年全国计算机等级考试四级试题及答案 (一)一、选择题: ( 共 90 题, 分为 1 分题和 2 分题 , 总分值 120 分。
除标注2 分题外 , 其它均为 1 分题。
)(1)假设或非门的输入变量为 A 和 B, 输出变量为 Y, 那么 A 和 B 分别为下述哪一种情况时 , Y 才为 1?A)1,0 B) 0,1 C) 0,0 D) 1,1(2)存放器 A 存有带符号整数且只作算术移位 ,HA 和 LA 分别代表其位和最低位。
存放器 B 存有带符号整数且只作逻辑移位 ,HB 和LB 分别代表其位和最低位。
当存放器 A 与 B 都有左移位发生时 , 下述哪一个命题是准确的 ? (2 分)A) LA 与 LB 均成为 0 B) LA 与 LB 均成为 1C)LA 变成 0 但 LB 保持原值 D) LA 保持原值但 LB 的值改变(3)下述哪一种情况称为指令周期 ?A)取指令时间 B) 取操作数时间C)取指令和执行指令的时间 D) 存储操作结果的时间(4)设 S 是一个至少有两个元素的集合 , 且定义运算 X*Y=X适用于S 集中的所有元素 X 和 Y, 以下命题中哪一个命题必是真命题 ?Ⅰ. * 在 S 集中是可结合的Ⅱ. * 在 S 集中是可交换的Ⅲ. * 在 S 集中有单位元A)只有Ⅰ B) 只有Ⅱ C) Ⅰ和Ⅲ D) Ⅱ和Ⅲ(5)设 Z 是整数集 , 且设 f :Z×Z→Z, 对每一个∈ Z×Z, 有f()=m2n 。
集合 {0} 的原象为 (2 分)A){0} ×ZB)Z ×{0}C)({0} ×Z) ∩(Z ×{0})D)({0} ×Z) ∪(Z ×{0})(6)对于一个只有 3 个不同元素的集合 A来说 ,A 上的等价关系的总数为A)2 B) 5 C) 9 D)取决于元素是否为数值(7)设有命题:对于组成元素为集合的集合C,存有函数为 f :C→∪ C,使得对每一个S∈C,有f(S) ∈S。
In Proc of CC-92:International Workshop on Compiler Construction, Paderborn, Germany, October 5-7, 1992, LNCS 641, Springer-VerlagThe Implementation of ObjectMath –a High-Level Programming Environmentfor Scientific ComputingLars Viklund, Johan Herber and Peter FritzsonProgramming Environments LaboratoryDepartment of Computer and Information ScienceLinköping UniversityS-581 83 Linköping, SwedenAbstract.We present the design and implementation of ObjectMath, a language andenvironment for high-level equation-based modeling and analysis in scientificcomputing. The ObjectMath language integrates object-oriented modeling withmathematical language features that make it possible to express mathematics in anatural and consistent way. The implemented programming environment includes agraphical browser for visualizing and editing inheritance hierarchies, an applicationoriented editor for editing ObjectMath equations and formulae, a computer algebrasystem for doing symbolic computations, support for generation of numerical codefrom equations, and routines for graphical presentation. This programmingenvironment has been successfully used in modeling and analyzing two differentproblems from the application domain of machine element analysis in an industrialenvironment.1IntroductionThe programming development process in scientific computing has not changed very much during the past 30 years. Most scientific software is still developed the traditional way [3]. Theory development is usually done manually, using only pen and paper. In order to perform numerical calculations, the mathematical model must be implemented in some programming language, in most cases FORTRAN. Often more than half the time and effort in a project is spent writing and debugging FORTRAN programs. The process is highly iterative, as feedback from numerical computations and physical experiments can affect both the underlying mathematical model and the numerical implementation. This iteration cycle is very time consuming and has a tendency to introduce errors, see Fig. 1.In order to improve this situation we have designed and implemented an object-oriented programming environment and modeling language called ObjectMath. This environment supports high-level equation-based modeling and analysis in scientific computing. It is currently being used for applications in advanced mechanical analysis, but it is intended to be applicable to other areas as well. The implemented programming environment includes a graphical browser for visualizing and editing inheritance hierarchies, an application oriented editor for editing ObjectMath equations and formulae, a computer algebra system for doing symbolic computations, support for generation of numerical code from equations and for combined symbolic/numerical computations, as well as routines for graphic presentation. This paper describes the ObjectMath environment and its implementation.Fig. 1.The iterative process of modeling in traditional mechanical analysis2The ObjectMath LanguageObjectMath [8] is a hybrid modeling language, combining object-oriented constructs with a mathematical language. This combination makes ObjectMath a suitable language for implementing complex mathematical models. Formulae and equations can be written with a notation that closely resembles conventional mathematics, while the use of object-oriented modeling makes it possible to structure the model in a natural way.We have chosen to use an existing computer algebra language, Mathematica [11], as a base for ObjectMath. 
The relationship between Mathematica and ObjectMath can be compared to that between C and C++. The C++ programming language is basically the C language augmented with classes and other object-oriented language constructs. In a similar fashion, the ObjectMath language can be viewed as an object-oriented version of the Mathematica language.

Ordinary Mathematica packages can be imported into an ObjectMath model. Such packages exist for a variety of application areas such as trigonometry, vector analysis, statistics and Laplace transforms. It is also possible to call external functions written in other languages. The current implementation of the programming environment supports external C++ functions, but in principle it is possible to use any language.

3 The ObjectMath Programming Environment

In this section we give an overview of the basic features of the ObjectMath programming environment. The implementation is described in the next section. Currently, the programming environment supports:

• Graphic browsing and editing of inheritance hierarchies
• Textual editing of ObjectMath code
• Interactive symbolic computation
• Automatic code generation from simple ObjectMath equations
• Mixing ObjectMath and C++ for combined symbolic/numerical computations
• Graphic presentation

The graphical browser is used for viewing and editing ObjectMath inheritance hierarchies. It is integrated with the Gnu Emacs editor for editing of equations and formulae. The Mathematica computer algebra system is also integrated within the environment. ObjectMath code is translated into pure Mathematica code by the ObjectMath translator. Algebraic simplification of equations can be done interactively in Mathematica. Figure 2 shows the screen during a typical session.

Fig. 2. The ObjectMath programming environment in use.

Analyzing a mathematical model expressed in ObjectMath involves performing numerical computations. The Mathematica system can be used for some of these calculations. However, Mathematica code is interpreted and cannot be executed as efficiently as programs written in compiled languages such as C, C++ or FORTRAN. This can be a serious drawback, particularly when doing mostly numerical computations in realistic applications. It is also desirable to take advantage of the large number of existing, highly optimized, special-purpose numerical routines.

The ObjectMath environment therefore provides the possibility to generate C++ code and to mix ObjectMath and C++, thus enabling us to take advantage of symbolic computation while still being able to write time-critical functions in a language that can be compiled into efficient code. Numerical routines can either be called from within the ObjectMath environment, via an implemented message-passing protocol, or be used independently of the environment as a computation kernel, for example together with a graphical front end.

A library with general classes is also available. This includes classes for modeling simple bodies (spheres, cylinders, rings, etc.), coordinate systems and contacts between bodies. The classes for modeling bodies implement methods which generate three-dimensional plots of the bodies from their surface descriptions. This graphical support helps the user to visually verify the formulae and equations which specify geometric properties.

4 Implementation of the Programming Environment

The ObjectMath programming environment has been implemented in C++, Scheme, Gnu Emacs Lisp and Mathematica. It currently runs on Sun workstations under the X window system.
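Since the symbolic layer a user interacts with is ordinary Mathematica running as a subprocess, the equations of a translated model can be manipulated interactively. The following toy session (our illustration, with made-up equations rather than a real model) shows the kind of computation involved:

    (* Toy session (ours): interactive symbolic work of the kind done in the *)
    (* Mathematica subprocess once a translated model has been loaded.       *)
    eqs = {x + y == 2 a, x - y == 2 b};
    Solve[eqs, {x, y}]                   (* -> {{x -> a + b, y -> a - b}} *)
    Simplify[(x + y)^2 - (x - y)^2]      (* -> 4 x y *)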
The main parts of the environment are:

• The graphical browser, which allows editing of class hierarchies.
• The Gnu Emacs editor, which is used for editing of ObjectMath equations and formulae.
• The ObjectMath translator, which translates ObjectMath programs into Mathematica code.
• The Mathematica system, which runs as a subprocess to Emacs.

Gnu Emacs communicates with the ObjectMath translator and the Mathematica process via UNIX pipes, while communication with the browser is done through sockets. The passing of code from the translator to the Mathematica process utilizes a temporary file. Figure 3 shows the internal structure of the system.

Fig. 3. Structure of the implementation.

4.1 The ObjectMath Browser

The ObjectMath browser allows the user to view and edit ObjectMath inheritance hierarchies. Properties such as parameters to classes are also edited in the browser, while the equations and formulae in the classes are edited as text in Gnu Emacs. When the user selects a class or instance to be edited, an Emacs text buffer with the body of the declaration is created and displayed. The browser also has a number of command buttons, for instance one for translating the current model and sending it to Mathematica.

The browser is implemented in C++ using the ET++ class library [10]. ET++ includes classes for building user interfaces as well as general classes such as different kinds of collections. The object-oriented design of ET++ makes the library very flexible. Advanced classes can be utilized even if they do not fit exactly into the application being implemented, by defining a new class that inherits from a suitable existing class and redefines a few methods.

4.2 Customizing the Gnu Emacs Editor

Most of the features in the Gnu Emacs editor are written in a Lisp dialect called Gnu Emacs Lisp [6]. Gnu Emacs can easily be extended by writing new Lisp code and installing it as an extension to the editor. Emacs Lisp is a full programming language, with additional features for handling editor-specific functions such as text buffers. In the ObjectMath environment, Gnu Emacs has been extended with a special ObjectMath mode. A separate Emacs buffer is used for each class or instance declaration. Switching between different object buffers is done by selecting their icons in the browser window using the mouse.

4.3 Communication between Gnu Emacs and Other Subsystems

The communication between the browser and Emacs goes through a pair of sockets. When the browser is started, it forks a process in which Emacs is executed. After this, the browser creates a socket and listens to it. Emacs starts a communications subprocess which creates its own socket and opens a connection to the browser. Whenever the user issues a command in the browser that affects data in Emacs, one or more messages are passed to Emacs in order to keep the data structures up to date, save modified files, etc.

4.4 Translating ObjectMath Programs

ObjectMath programs are translated into Mathematica packages by a series of program transformations. The Mathematica context facility is used to implement objects. A Mathematica context provides a separate name space, similar to a block in Algol-like languages. Packages generated from ObjectMath models consist of a number of context declarations, one for each instance (a minimal sketch of this mechanism is given at the end of this subsection). Retranslation in the ObjectMath translator is incremental with the granularity of an object; therefore, new Mathematica code is produced very quickly when only a single instance is changed.

A first version of the ObjectMath translator was implemented in Gnu Emacs Lisp.
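As a minimal sketch of the context mechanism described above (our illustration, using a made-up Sphere1 instance rather than actual translator output), a Mathematica context gives each instance its own name space:

    (* Minimal sketch (ours, not real translator output): a Mathematica *)
    (* context acts as a separate name space for one instance.          *)
    Begin["Sphere1`"];            (* enter the instance's own context   *)
    radius = 2;
    volume := (4/3) Pi radius^3;
    End[];                        (* restore the previous context       *)

    Sphere1`volume                (* fully qualified access: (32 Pi)/3  *)

New symbols defined between Begin and End are created in the Sphere1` context, so another instance can define its own radius and volume without any name clash.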
Unfortunately, this Emacs Lisp implementation turned out to be too inefficient for practical use, forcing us to re-implement it in Scheme. The Scheme program was compiled with Bartlett's Scheme→C compiler [1] and runs about 50 times faster than the original Emacs Lisp implementation.

4.5 Generating Numerical Code from ObjectMath Equations

The ObjectMath environment supports automatic generation of numerical code for solving sets of non-linear equations. Generated code is linked to numerical FORTRAN routines which perform the actual solving. Currently, we use the MINPACK [5] routines HYBRD and HYBRJ as solvers. One of the input parameters to these routines is a routine which calculates the values of the functions. This routine is generated from a C++ code template, using a translator which generates three-address statements expressed in C++ from ObjectMath expressions. The ObjectMath→C++ translator performs common subexpression elimination, using the fact that functions such as sin and cos do not have side effects.

4.6 Mixing ObjectMath and C++ Code

The ObjectMath environment allows C++ functions to be used as ObjectMath methods. These C++ functions may contain ObjectMath symbolic expressions which must be evaluated and expanded before the C++ code is compiled; see [8] for an example. Translating an ObjectMath model with C++ methods requires the following steps:

1. Translate the ObjectMath code into Mathematica code and load it into the Mathematica system.
2. Generate C++ class declarations and perform some syntactic transformations on the user-supplied C++ functions.
3. Generate C++ code for initialization of the external function interface.
4. Call Mathematica to evaluate ObjectMath expressions in the C++ code, once for each instance inheriting the C++ method. Expand the result of the symbolic evaluation into the C++ code.
5. Compile and link the C++ code.
6. Start the resulting program as a subprocess (computation server) of Mathematica.

The steps above are performed automatically. Any compilation errors in the C++ code are reported to the user.

4.7 Graphic Presentation

As mentioned earlier, some classes in the ObjectMath class library include methods which generate three-dimensional pictures of bodies described with parametric surface techniques. Our parametric surface descriptions consist of a function of two arguments and intervals for the two parameters. A surface is obtained by varying the two parameters of the function. The generated picture can be combined with other graphical objects, for instance vectors representing forces and normals, axes of the coordinate systems, or textual labels. The user has control over several parameters concerned with the rendering of surfaces, such as lighting, shading and color. The view reference point may also be adjusted.

5 Related Work

There exist a number of systems and research areas which are in some way related to the ObjectMath programming environment. Some of these are:

• Computer algebra systems such as Maple [2] or Mathematica [11].
• Systems for matrix computations, e.g. MATLAB [7].
• Symbolic and numerical hybrid systems. An example is the FINGER package [9], a hybrid system supporting finite element analysis.

An exhaustive survey can be found in [3].

6 Conclusions

There is a strong need for efficient high-level programming support in scientific computing. The goal of our work has been to build an object-oriented programming environment that satisfies part of this need. A prototype environment supporting symbolic, numerical and graphic analysis has been implemented.
The implemented programming environment has been successfully used in modeling and analyzing two different problems from the application domain (machine element analysis) in an industrial environment [4].

The successful use of the ObjectMath programming environment shows that a combination of programming in equations and object-orientation is suitable for modeling machine elements. Complex mathematical equations and functions can be expressed in a natural way instead of as low-level procedural code. The object-oriented features allow models to be structured better and permit reuse of equations through inheritance.

References

[1] Joel F. Bartlett. Scheme→C, a portable Scheme-to-C compiler. Research Report 89-1, DEC Western Research Laboratory, Palo Alto, California, January 1989.
[2] Char, Geddes, Gonnet, Monagan, and Watt. Maple Reference Manual. WATCOM Publications, 5th edition, 1988.
[3] Peter Fritzson and Dag Fritzson. The need for high-level programming support in scientific computing applied to mechanical analysis. Technical Report LiTH-IDA-R-91-04, Department of Computer and Information Science, Linköping University, S-581 83 Linköping, Sweden, March 1991. Accepted for publication in Computers and Structures – an International Journal.
[4] Peter Fritzson, Lars Viklund, Johan Herber, and Dag Fritzson. Industrial application of object-oriented mathematical modeling and computer algebra in mechanical analysis. In Georg Heeg, Boris Magnusson, and Bertrand Meyer, editors, Technology of Object-Oriented Languages and Systems – TOOLS 7. Prentice Hall, 1992.
[5] Burton S. Garbow, Kenneth E. Hillstrom, and Jorge J. Moré. Users Guide for MINPACK-1. Report ANL-80-74, Argonne National Laboratory, Argonne, Illinois, USA, March 1980.
[6] Bil Lewis, Dan LaLiberte, and the GNU Manual Group. The GNU Emacs Lisp Reference Manual. Free Software Foundation, Inc., 675 Massachusetts Avenue, Cambridge, MA 02139, USA, 1.02 edition, June 1990.
[7] Cleve Moler. MATLAB Users' Guide. Report CS81-1, University of New Mexico Computer Science Department, 1981.
[8] Lars Viklund and Peter Fritzson. An object-oriented language for symbolic computation – applied to machine element analysis. In Paul Wang, editor, Proceedings of the International Symposium on Symbolic and Algebraic Computation, 1992.
[9] Paul S. Wang. FINGER: A symbolic system for automatic generation of numerical programs in finite element analysis. Journal of Symbolic Computation, 2:305–316, 1986.
[10] André Weinand, Erich Gamma, and Rudolf Marty. ET++ – an object-oriented application framework in C++. In OOPSLA'88 Conference Proceedings, 1988.
[11] Stephen Wolfram. Mathematica – A System for Doing Mathematics by Computer. Addison-Wesley Publishing Company, second edition, 1991.