Computer Graphics, Chapter 6: Interactive Techniques
Computer Graphics Review Outline (2010.10.10)

Chapter 1: Introduction
1.1 Computer graphics and related concepts. Computer graphics is the discipline that studies the principles, methods, and techniques of using computers to display, generate, and process graphics.
IEEE definition: "Computer graphics is the art or science of producing graphical images with the aid of computer."
The object studied in computer graphics is the graphic. In the ordinary sense, any objective entity that can form a visual impression in the human visual system is called a graphic.
A graphic has two kinds of elements: (1) geometric elements: points, lines, surfaces, solids, and so on; (2) non-geometric elements: shading, gray level, color, and so on. The graphics studied in computer graphics are figures and shapes, carrying color and shape information, abstracted from objects in the real world.
There are two ways of representing a graphic. The raster (dot-matrix) method represents a graphic by an array of points carrying color information; it emphasizes which points make up the graphic and what gray level or color they have.
The parametric method represents a graphic by the shape parameters and attribute parameters of the graphic recorded in the computer.
A graphic described by the parametric method is usually called graphics, while one described by the raster method is called an image.
1.4 Computer graphics systems; 1.4.2 structure of a computer graphics system.
Homework: Exercises 1 (p. 19).
1.1 Define the terms: graphic, image, raster method, parametric method.
1.2 Which two kinds of elements does a graphic comprise, and how are they represented in a computer?
1.3 What is computer graphics? Analyze the relationship among computer graphics, digital image processing, and computer vision.
1.7 What functions must an interactive computer graphics system provide? What is its structure?

Chapter 2: Graphics Devices
Which peripheral devices does a computer graphics system include? Graphics input devices: concept and characteristics. Graphics display devices: concept, structure and principle, operating modes, characteristics. Graphics plotting devices: concept and characteristics.
Homework: Exercises 2 (p. 63).
2.2 What are the main kinds of PC graphics display cards?
2.4 List the graphics input and output devices you know.
2.5 Describe the types of 3D input devices and their ranges of application.
2.6 Which parts make up a cathode ray tube, and what is the function of each?
2.16 What is a pixel? What is the resolution of a display?

Chapter 3: Interactive Techniques
How to design a good user interface; why logical input devices are defined; what interactive drawing techniques there are. Devices are evaluated at three levels: (1) the device level: optimizing hardware performance; (2) the task level (a single task): choosing the best interaction device for it; (3) the dialogue level (multiple tasks): comparing the alternatives.
3.2.2 Input modes. (1) Request mode: the input device works under the control of the application program. (2) Sample mode: the application program and the input device work at the same time; the device continuously generates data and places it in a data buffer, and when the program reaches a sampling statement that requests input it reads data from the buffer.
Reference Answers to the Exercises

6.1 What are the basic interaction tasks of an interactive drawing system?
Answer: (1) positioning, (2) stroke, (3) valuating, (4) choice, (5) picking, (6) string input, (7) 3D interaction.

6.2 Write a program that implements rubber-band drawing of lines and circles.
Answer: Idea: first set the drawing mode to XOR.
To draw a line, the cursor position at the left mouse click becomes the start point of the line; as the mouse drags the cursor, the current cursor position is taken as the end point.
When the cursor moves from its old position to a new one, a line is first drawn again from the start point to the old position; because the mode is XOR, the existing line becomes invisible. A line is then drawn from the start point to the new position and becomes the current line.
To draw a circle, the cursor position at the left mouse click becomes the center, and as the mouse drags the cursor, the distance from the current cursor position to the center is taken as the radius.
When the cursor moves from its old position to a new one, a circle is first redrawn with radius equal to the distance from the center to the old position; because of the XOR mode, the existing circle becomes invisible. A circle with radius equal to the distance from the center to the new position is then drawn and becomes the current circle. A code sketch is given below.
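As an illustration only, a minimal C sketch of the XOR rubber-band idea above might look like the following; set_raster_mode, draw_line, draw_circle, and XOR_MODE are hypothetical stand-ins for whatever the actual graphics library provides.

    #include <math.h>

    /* Assumed to be provided by the graphics environment (hypothetical names). */
    #define XOR_MODE 1
    void set_raster_mode(int mode);
    void draw_line(int x0, int y0, int x1, int y1);
    void draw_circle(int xc, int yc, int r);

    /* Rubber-band line: call on every mouse-move while the button is down.
       (ax, ay) is the anchor fixed at mouse-down; (*px, *py) holds the previous
       cursor position and is updated to the new position (nx, ny).             */
    void rubber_band_line(int ax, int ay, int *px, int *py, int nx, int ny)
    {
        set_raster_mode(XOR_MODE);
        draw_line(ax, ay, *px, *py);   /* redraw the old line: XOR erases it */
        draw_line(ax, ay, nx, ny);     /* draw the new current line          */
        *px = nx;  *py = ny;
    }

    /* Rubber-band circle: (cx, cy) is the center fixed at mouse-down. */
    void rubber_band_circle(int cx, int cy, int *px, int *py, int nx, int ny)
    {
        set_raster_mode(XOR_MODE);
        draw_circle(cx, cy, (int)(hypot(*px - cx, *py - cy) + 0.5));  /* erase old */
        draw_circle(cx, cy, (int)(hypot(nx - cx, ny - cy) + 0.5));    /* draw new  */
        *px = nx;  *py = ny;
    }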
6.3 The gravity field is a common assistive technique in human-computer interaction. What convenience does it bring to the user, and what must a designer pay attention to when designing one?
Answer: When the user selects graphics with the cursor, a gravity field makes it easier for the cursor to land on graphics whose selectable area is small.
When designing a gravity field, its size should be moderate, and its shape should match the shape of the graphic it contains.

6.4 How does the handling of dragging differ between graphics mode and image mode?
Answer: Dragging in graphics mode is performed with the drawing mode set to XOR.
The graphic being dragged is first drawn once more at its old position; since a shape XORed with itself vanishes, the graphic at the old position becomes invisible. The graphic is then drawn at the new position, which completes the drag.
Dragging in image mode, by contrast, moves a whole block of the image: at each position along the path, the screen contents covering an area the size of the dragged image are saved first, the image is then moved onto that position, and when the image leaves that position for the next one, the saved screen contents are restored.
In graphics mode no screen contents need to be saved; the graphic is simply redrawn at its old position.
In image mode the screen contents wherever the image passes must be saved and redisplayed after the image moves away, as sketched below.
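A minimal sketch of one step of the image-mode drag, assuming the environment supplies block-transfer primitives; read_block, write_block, and the Image type here are hypothetical.

    /* Hypothetical type and frame-buffer block-transfer primitives. */
    typedef struct { int w, h; unsigned *pixels; } Image;
    void read_block(int x, int y, int w, int h, Image *dst);
    void write_block(int x, int y, const Image *src);

    /* One step of dragging 'img' from (oldx, oldy) to (newx, newy).
       'saved' is an off-screen buffer the same size as 'img' holding the
       screen contents currently hidden under the image.                  */
    void drag_image_step(const Image *img, Image *saved,
                         int oldx, int oldy, int newx, int newy)
    {
        write_block(oldx, oldy, saved);                /* restore old background */
        read_block(newx, newy, img->w, img->h, saved); /* save new background    */
        write_block(newx, newy, img);                  /* show image at new spot */
    }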
6.5 Describe the flow of the three input control modes.
Answer: In request mode, the user supplies data only after receiving a request from the application program; the program waits for the user's input and processes it only once the input is complete. In sample mode, the program and the input device run at the same time: the device continuously produces data and stores it in a buffer, and whenever the program reaches a sampling statement it reads the current value from the buffer. In event mode, input devices place events in an event queue as they occur, and the application program removes events from the queue and processes them in its own loop.
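A schematic C sketch contrasting the three modes; request_locator, sample_locator, get_event, and the Point and Event types are hypothetical placeholders, not the API of any particular graphics package.

    typedef struct { int x, y; } Point;
    typedef struct { int device, type; Point where; } Event;

    /* Hypothetical input primitives. */
    Point request_locator(int device);   /* blocks until the user responds     */
    Point sample_locator(int device);    /* returns the current value at once  */
    int   get_event(Event *e);           /* dequeues the next event, 0 if none */

    void input_mode_examples(void)
    {
        /* Request mode: the program asks, then waits for the user. */
        Point p1 = request_locator(0);

        /* Sample mode: device and program run concurrently; the program just
           reads whatever value the device currently reports.                 */
        Point p2 = sample_locator(0);

        /* Event mode: devices append events to a queue; the program drains it. */
        Event e;
        while (get_event(&e)) {
            /* dispatch on e.device / e.type here */
        }
        (void)p1; (void)p2;
    }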
Computer Graphics Fundamentals (2nd Edition), Answers to the Exercises (Lu Feng, He Yunfeng)
Chapter 1: Introduction. Concepts: computer graphics, graphic, image, raster (dot-matrix) method, parametric method, geometric and non-geometric elements of a graphic, digital image processing; the concepts of computer graphics and computer vision, and the relationship among the three fields; the functions of a computer graphics system and its overall structure.

Chapter 2: Graphics Devices. Graphics input devices: which ones there are.
Graphics display devices: the structure, principle, and operating modes of the CRT.
Color CRT: structure and principle.
The structure and working principles of random-scan and raster-scan graphics displays.
Graphics display subsystem: basic concepts such as resolution, pixels and the frame buffer, and the color lookup table; calculation of resolution.

Chapter 3: Interactive Techniques. What the input-mode problem is, and which input modes there are.

Chapter 4: Representation of Graphics and Data Structures. Self-study; reading it at least once is recommended.

Chapter 5: Basic Graphics Generation Algorithms. Concepts: bitmap characters and vector characters; scan-conversion algorithms for lines and circles; polygon scan conversion: the active edge table algorithm; region filling: 4-/8-connected boundary-fill and flood-fill algorithms; inside-outside tests: the odd-even rule and the nonzero winding number rule; antialiasing: the concepts of aliasing and antialiasing, supersampling and area sampling.
5.1.2 Midpoint Bresenham algorithm (p. 109); 5.1.2 Improved Bresenham algorithm (p. 112).

Solutions to Exercises 5 (p. 144)

5.3 Using the principle of the midpoint Bresenham line algorithm, derive the drawing process for a line segment whose slope is negative and of absolute value greater than 1 (state the principle, the error function, the recurrence formula, and the final drawing process). (p. 111)
Solution: Since k <= -1, we have |Δy|/|Δx| >= 1, so y is the direction of maximum displacement and the line is stepped along y. From the current pixel (x_i, y_i), the two candidate pixels on the next scan line are the left point P_l(x_i - 1, y_i + 1) and the right point P_r(x_i, y_i + 1); their midpoint is M(x_i - 0.5, y_i + 1). Construct the discriminant from the implicit line equation F(x, y) = y - kx - b:
d = F(x_M, y_M) = y_M - k x_M - b.
To interpret the sign of d, let Q be the intersection of the ideal line with the scan line y = y_i + 1. Then y_Q - k x_Q - b = 0 and y_M = y_Q, so
d = (y_M - k x_M - b) - (y_Q - k x_Q - b) = k (x_Q - x_M).
Since k < 0, when d > 0 we have x_Q < x_M, so M lies to the right of Q (Q is to the left of M), and the left point P_l(x_i - 1, y_i + 1) is chosen.
When d < 0, M lies to the left of Q (Q is to the right of M), and the right point P_r(x_i, y_i + 1) is chosen.
When d = 0, M coincides with Q, and by convention the right point P_r(x_i, y_i + 1) is chosen.
Derivation of the recurrence: let d1 = F(x_i - 0.5, y_i + 1) be the current discriminant and d2 the next one.
When d1 > 0 (left point taken), the next midpoint is (x_i - 1.5, y_i + 2), so d2 = (y_i + 2) - k(x_i - 1.5) - b = d1 + 1 + k; the increment is 1 + k.
When d1 < 0 (right point taken), the next midpoint is (x_i - 0.5, y_i + 2), so d2 = (y_i + 2) - k(x_i - 0.5) - b = d1 + 1; the increment is 1.
When d1 = 0, the right point is taken by convention, so the increment is again 1.
The initial value is d0 = F(x_0 - 0.5, y_0 + 1) = 1 + 0.5k. Starting from (x_0, y_0), each step plots the chosen pixel, advances y by 1, decreases x by 1 only when d > 0, and updates d by the corresponding increment, until the other endpoint is reached.

5.7 Using the principle of the midpoint Bresenham circle algorithm, derive the scan-conversion algorithm for the first-quadrant arc from y = 0 to y = x (state the principle, the error function, the recurrence formula, and the final drawing process).
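As an illustration, here is a minimal C sketch of both drawing processes, assuming a setPixel(x, y) primitive is available. The first function follows the derivation above for a segment with slope k <= -1 (it assumes y1 > y0, so y is stepped upward). The second applies the same midpoint idea to the arc in 5.7: with F(x, y) = x^2 + y^2 - R^2, candidates (x, y + 1) and (x - 1, y + 1) and midpoint (x - 0.5, y + 1), the discriminant starts at F(R - 0.5, 1) = 1.25 - R and is incremented by 2y + 3 when the midpoint is inside the circle and by 2(y - x) + 5 otherwise.

    void setPixel(int x, int y);   /* assumed pixel-plotting primitive */

    /* Midpoint line for slope k <= -1; requires y1 > y0 (y is the major axis). */
    void midpoint_line_steep_negative(int x0, int y0, int x1, int y1)
    {
        double k = (double)(y1 - y0) / (x1 - x0);   /* k <= -1                  */
        double d = 1.0 + 0.5 * k;                   /* d0 = F(x0 - 0.5, y0 + 1) */
        int x = x0, y;
        for (y = y0; y <= y1; y++) {
            setPixel(x, y);
            if (d > 0) { x--; d += 1.0 + k; }       /* take the left point      */
            else       {      d += 1.0;     }       /* take the right point     */
        }
    }

    /* Midpoint circle, first-quadrant arc from y = 0 up to y = x. */
    void midpoint_circle_octant(int xc, int yc, int R)
    {
        int x = R, y = 0;
        double d = 1.25 - R;                          /* F(R - 0.5, 1)             */
        while (x >= y) {
            setPixel(xc + x, yc + y);
            if (d < 0) { d += 2 * y + 3; }            /* midpoint inside: keep x   */
            else       { d += 2 * (y - x) + 5; x--; } /* midpoint outside: x moves */
            y++;
        }
    }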
Angel: Interactive Computer Graphics, Fifth Edition

Chapter 1 Solutions

1.1 The main advantage of the pipeline is that each primitive can be processed independently. Not only does this architecture lead to fast performance, it reduces memory requirements because we need not keep all objects available. The main disadvantage is that we cannot handle most global effects such as shadows, reflections, and blending in a physically correct manner.

1.3 We derive this algorithm later in Chapter 6. First, we can form the tetrahedron by finding four equally spaced points on a unit sphere centered at the origin. One approach is to start with one point on the z axis (0, 0, 1). We then can place the other three points in a plane of constant z. One of these three points can be placed on the y axis. To satisfy the requirement that the points be equidistant, the point must be at (0, 2√2/3, -1/3). The other two can be found by symmetry to be at (-√6/3, -√2/3, -1/3) and (√6/3, -√2/3, -1/3). We can subdivide each face of the tetrahedron into four equilateral triangles by bisecting the sides and connecting the bisectors. However, the bisectors of the sides are not on the unit sphere, so we must push these points out to the unit sphere by scaling the values. We can continue this process recursively on each of the triangles created by the bisection process.

1.5 In Exercise 1.4, we saw that we could intersect the line of which the line segment is a part independently against each of the sides of the window. We could do this process iteratively, each time shortening the line segment if it intersects one side of the window.

1.7 In a one-point perspective, two faces of the cube are parallel to the projection plane, while in a two-point perspective only the edges of the cube in one direction are parallel to the projection plane. In the general case of a three-point perspective there are three vanishing points and none of the edges of the cube are parallel to the projection plane.

1.9 Each frame for a 480 x 640 pixel video display contains only about 300k pixels, whereas the 2000 x 3000 pixel movie frame has 6M pixels, or about 18 times as many as the video display. Thus, it can take 18 times as much time to render each frame if there is a lot of pixel-level calculation.

1.11 There are single-beam CRTs. One scheme is to arrange the phosphors in vertical stripes (red, green, blue, red, green, ...). The major difficulty is that the beam must change very rapidly, approximately three times as fast as each beam in a three-beam system. The electronics in such a system must also be much faster (and more expensive).

Chapter 2 Solutions

2.9 We can solve this problem separately in the x and y directions. The transformation is linear, that is, xs = ax + b, ys = cy + d. We must maintain proportions, so that xs is in the same relative position in the viewport as x is in the window, hence
(x - xmin)/(xmax - xmin) = (xs - u)/w,  so  xs = u + w (x - xmin)/(xmax - xmin).
Likewise
ys = v + h (y - ymin)/(ymax - ymin).

2.11 Most practical tests work on a line-by-line basis. Usually we use scan lines, each of which corresponds to a row of pixels in the frame buffer. If we compute the intersections of the edges of the polygon with a line passing through it, these intersections can be ordered. The first intersection begins a set of points inside the polygon. The second intersection leaves the polygon, the third reenters, and so on.

2.13 There are two fundamental approaches: vertex lists and edge lists. With vertex lists we store the vertex locations in an array.
The mesh is represented as a list of interior polygons (those polygons with no other polygons inside them). Each interior polygon is represented as an array of pointers into the vertex array. To draw the mesh, we traverse the list of interior polygons, drawing each polygon. One disadvantage of the vertex list is that if we wish to draw the edges in the mesh by rendering each polygon, shared edges are drawn twice. We can avoid this problem by forming an edge list or edge array, each element of which is a pair of pointers to vertices in the vertex array. Thus, we can draw each edge once by simply traversing the edge list. However, the simple edge list has no information on polygons, and thus if we want to render the mesh in some other way, such as by filling interior polygons, we must add something to this data structure that records which edges form each polygon. A flexible mesh representation would consist of an edge list, a vertex list, and a polygon list with pointers, so we could know which edges belong to which polygons and which polygons share a given vertex.

2.15 The Maxwell triangle corresponds to the triangle that connects the red, green, and blue vertices in the color cube.

2.19 Consider the lines defined by the sides of the polygon. We can assign a direction for each of these lines by traversing the vertices in a counter-clockwise order. One very simple test is obtained by noting that any point inside the object is on the left of each of these lines. Thus, if we substitute the point into the equation for each of the lines (ax + by + c), we should always get the same sign.

2.23 There are eight vertices and thus 256 = 2^8 possible black/white colorings. If we remove symmetries (black/white and rotational) there are 14 unique cases. See Angel, Interactive Computer Graphics (Third Edition) or the paper by Lorensen and Cline in the references.

Chapter 3 Solutions

3.1 The general problem is how to describe a set of characters that might have thickness, curvature, and holes (such as in the letters a and q). Suppose that we consider a simple example where each character can be approximated by a sequence of line segments. One possibility is to use a move/line system where 0 is a move and 1 a line. Then a character can be described by a sequence of the form (x0, y0, b0), (x1, y1, b1), (x2, y2, b2), ..., where bi is a 0 or 1. This approach is used in the example in the OpenGL Programming Guide. A more elaborate font can be developed by using polygons instead of line segments.

3.11 There are a couple of potential problems. One is that the application program can map different points in object coordinates to the same point in screen coordinates. Second, a given position on the screen, when transformed back into object coordinates, may lie outside the user's window.

3.19 Each scan is allocated 1/60 second. For a given scan we have to take 10% of the time for the vertical retrace, which means that we start to draw scan line n at .9n/(60*1024) seconds from the beginning of the refresh. But allocating 10% of this time for the horizontal retrace, we are at pixel m on this line at time .81nm/(60*1024).

3.25 When the display is changing, primitives that move or are removed from the display will leave a trace or motion blur on the display as the phosphors persist.
Long-persistence phosphors have been used in text-only displays, where motion blur is less of a problem and the long persistence gives a very stable, flicker-free image.

Chapter 4 Solutions

4.1 If the scaling matrix is uniform, then
RS = R S(α, α, α) = αR = SR.
Consider R_x(θ); if we multiply and use the standard trigonometric identities for the sine and cosine of the sum of two angles, we find
R_x(θ) R_x(φ) = R_x(θ + φ).
By simply multiplying the matrices we find
T(x1, y1, z1) T(x2, y2, z2) = T(x1 + x2, y1 + y2, z1 + z2).

4.5 There are 12 degrees of freedom in the three-dimensional affine transformation. Consider a point p = [x, y, z, 1]^T that is transformed to p' = [x', y', z', 1]^T by the matrix M. Hence we have the relationship p' = Mp, where M has 12 unknown coefficients but p and p' are known. Thus we have 3 equations in 12 unknowns (the fourth equation is simply the identity 1 = 1). If we have 4 such pairs of points we will have 12 equations in 12 unknowns, which could be solved for the elements of M. Thus if we know how a quadrilateral is transformed we can determine the affine transformation. In two dimensions, there are 6 degrees of freedom in M, but p and p' have only x and y components. Hence if we know 3 points both before and after transformation, we will have 6 equations in 6 unknowns, and thus in two dimensions if we know how a triangle is transformed we can determine the affine transformation.

4.7 It is easy to show by simply multiplying the matrices that the concatenation of two rotations yields a rotation and that the concatenation of two translations yields a translation. If we look at the product of a rotation and a translation, we find that the left three columns of RT are the left three columns of R and the right column of RT is the right column of the translation matrix. If we now consider RTR', where R' is a rotation matrix, the left three columns are exactly the same as the left three columns of RR', and the right column still has 1 as its bottom element. Thus, the form is the same as RT with an altered rotation (which is the concatenation of the two rotations) and an altered translation. Inductively, we can see that any further concatenations with rotations and translations do not alter this form.

4.9 If we do a translation by -h we convert the problem to reflection about a line passing through the origin. From m we can find an angle by which we can rotate so the line is aligned with either the x or y axis. Now reflect about the x or y axis. Finally we undo the rotation and translation, so the sequence is of the form T^-1 R^-1 S R T.

4.11 The most sensible place to put the shear is second, so that the instance transformation becomes I = TRHS. We can see that this order makes sense if we consider a cube centered at the origin whose sides are aligned with the axes. The scale gives us the desired size and proportions. The shear then converts the right parallelepiped to a general parallelepiped. Finally we can orient this parallelepiped with a rotation and place it where desired with a translation. Note that the order I = TRSH will work too.

4.13 R = R_z(θz) R_y(θy) R_x(θx) =
[ cos θy cos θz    cos θz sin θx sin θy - cos θx sin θz    cos θx cos θz sin θy + sin θx sin θz    0 ]
[ cos θy sin θz    cos θx cos θz + sin θx sin θy sin θz    -cos θz sin θx + cos θx sin θy sin θz   0 ]
[ -sin θy          cos θy sin θx                           cos θx cos θy                           0 ]
[ 0                0                                       0                                       1 ]

4.17 One test is to use the first three vertices to find the equation of the plane ax + by + cz + d = 0.
Although there are four coefficients in the equation, only three are independent, so we can select one arbitrarily or normalize so that a^2 + b^2 + c^2 = 1. Then we can successively evaluate ax + by + cz + d for the other vertices. A vertex is on the plane if the expression evaluates to zero. An equivalent test is to form the matrix
[ 1   1   1   1  ]
[ x1  x2  x3  x4 ]
[ y1  y2  y3  y4 ]
[ z1  z2  z3  z4 ]
for each i = 4, .... If the determinant of this matrix is zero, the ith vertex is in the plane determined by the first three.

4.19 Although we will have the same number of degrees of freedom in the objects we produce, the class of objects will be very different. For example, if we rotate a square before we apply a nonuniform scale, we will shear the square, something we cannot do if we scale then rotate.

4.21 The vector a = u × v is orthogonal to u and v. The vector b = u × a is orthogonal to u and a. Hence, u, a, and b form an orthogonal coordinate system.

4.23 Using r = cos(θ/2) + sin(θ/2) v, with θ = 90° and v = (1, 0, 0), we find for rotation about the x-axis
r = (√2/2)(1, 1, 0, 0).
Likewise, for rotation about the y axis
r = (√2/2)(1, 0, 1, 0).

4.27 Possible reasons include (1) object-oriented systems are slower, (2) users are often comfortable working in world coordinates with higher-level objects and do not need the flexibility offered by a coordinate-free approach, (3) even a system that provides scalars, vectors, and points would have to have an underlying frame to use for the implementation.

Chapter 5 Solutions

5.1 Eclipses (both solar and lunar) are good examples of the projection of an object (the moon or the earth) onto a nonplanar surface. Any time a shadow is created on a curved surface, there is a nonplanar projection. All the maps in an atlas are examples of the use of curved projectors. If the projectors were not curved we could not project the entire surface of a spherical object (the Earth) onto a rectangle.

5.3 Suppose that we want the view of the Earth rotating about the sun. Before we draw the Earth, we must rotate the Earth, which is a rotation about the y axis. Next we translate the Earth away from the origin. Finally we do another rotation about the y axis to position the Earth in its desired location along its orbit. There are a number of interesting variants of this problem, such as the view from the Earth of the rest of the solar system.

5.5 Yes. Any sequence of rotations is equivalent to a single rotation about a suitably chosen axis. One way to compute this rotation matrix is to form the matrix by a sequence of simple rotations, such as
R = R_x R_y R_z.
The desired axis is an eigenvector of this matrix.

5.7 The result follows from the transformation being affine. We can also take a direct approach. Consider the line determined by the points (x1, y1, z1) and (x2, y2, z2). Any point along it can be written parametrically as (αx1 + (1 - α)x2, αy1 + (1 - α)y2, αz1 + (1 - α)z2). Consider the simple projection of this point,
(d/(αz1 + (1 - α)z2)) (αx1 + (1 - α)x2, αy1 + (1 - α)y2),
which is of the form f(α)(αx1 + (1 - α)x2, αy1 + (1 - α)y2). This form describes a line because the slope is constant. Note that the function f(α) implies that we trace out the line at a nonlinear rate as α increases from 0 to 1.

5.9 The specification used in many graphics texts is of the angles the projector makes with the x,z and y,z planes, i.e., the angles defined by the projection of a projector in a top view and a side view. Another approach is to specify the foreshortening of one or two sides of a cube aligned with the axes.

5.11 The CORE system used this approach.
Retained objects were kept in distorted form. Any transformation to any object that was defined with other than an orthographic view transformed the distorted object, and the orthographic projection of the transformed distorted object was incorrect.

5.15 If we use θ = φ = 45°, we obtain the projection matrix
P = [ 1  0  -1  0 ]
    [ 0  1  -1  0 ]
    [ 0  0   0  0 ]
    [ 0  0   0  1 ]

5.17 All the points on the projection of the point (x, y, z) in the direction (dx, dy, dz) are of the form (x + α dx, y + α dy, z + α dz). Thus the shadow of the point (x, y, z) is found by determining the α for which the line intersects the plane, that is
a xs + b ys + c zs = d.
Substituting and solving, we find
α = (d - ax - by - cz) / (a dx + b dy + c dz).
However, what we want is a projection matrix. Using this value of α we find
xs = x + α dx = (x(b dy + c dz) - dx(d - by - cz)) / (a dx + b dy + c dz),
with similar equations for ys and zs. These results can be computed by multiplying the homogeneous-coordinate point (x, y, z, 1) by the projection matrix
M = [ b dy + c dz   -b dx          -c dx          -d dx ]
    [ -a dy         a dx + c dz    -c dy          -d dy ]
    [ -a dz         -b dz          a dx + b dy    -d dz ]
    [ 0             0              0              a dx + b dy + c dz ]

5.21 Suppose that the average of the two eye positions is at (x, y, z) and the viewer is looking at the origin. We could form the images using the LookAt function twice, that is
    gluLookAt(x-dx/2, y, z, 0, 0, 0, 0, 1, 0);
    /* draw scene here */
    /* swap buffers and clear */
    gluLookAt(x+dx/2, y, z, 0, 0, 0, 0, 1, 0);
    /* draw scene again */
    /* swap buffers and clear */

Chapter 6 Solutions

6.1 Point sources produce a very harsh lighting. Such images are characterized by abrupt transitions between light and dark. The ambient light in a real scene is dependent on both the lights in the scene and the reflectivity properties of the objects in the scene, something that cannot be computed correctly with OpenGL. The Phong reflection term is not physically correct; the reflection term in the modified Phong model is even further from being physically correct.

6.3 If we were to take into account a light source being obscured by an object, we would have to have all polygons available so as to test for this condition. Such a global calculation is incompatible with the pipeline model, which assumes we can shade each polygon independently of all other polygons as it flows through the pipeline.

6.5 Materials absorb light from sources. Thus, a surface that appears red under white light appears so because the surface absorbs all wavelengths of light except in the red range, a subtractive process. To be compatible with such a model, we should use surface absorption constants that define the materials for cyan, magenta, and yellow, rather than red, green, and blue.

6.7 Let ψ be the angle between the normal and the halfway vector, φ be the angle between the viewer and the reflection angle, and θ be the angle between the normal and the light source. If all the vectors lie in the same plane, the angle between the light source and the viewer can be computed either as φ + 2θ or as 2(θ + ψ). Setting the two equal, we find φ = 2ψ. If the vectors are not coplanar then φ < 2ψ.

6.13 Without loss of generality, we can consider the problem in two dimensions. Suppose that the first material has a velocity of light of v1 and the second material has a light velocity of v2. Furthermore, assume that the axis y = 0 separates the two materials. Place a point light source at (0, h) where h > 0 and a viewer at (x, y) where y < 0. Light will travel in a straight line from the source to a point (t, 0) where it will leave the first material and enter the second.
It will then travel from this point in a straight line to (x, y). We must find the t that minimizes the time traveled. Using some simple trigonometry, we find that the line from the source to (t, 0) has length l1 = √(h^2 + t^2) and the line from there to the viewer has length l2 = √(y^2 + (x - t)^2). The total time light travels is thus l1/v1 + l2/v2. Minimizing over t gives the desired result when we note that the two desired sines are sin θ1 = h/√(h^2 + t^2) and sin θ2 = -y/√(y^2 + (x - t)^2).

6.19 Shading requires that when we transform normals and points, we maintain the angle between them, or equivalently have the dot product n · p = n' · p' when p' = Mp and n' = Mn. If M^T M is an identity matrix, angles are preserved. Such a matrix (M^-1 = M^T) is called orthogonal. Rotations and translations are orthogonal, but scaling and shear are not.

6.21 Probably the easiest approach to this problem is to rotate the given plane to the plane z = 0 and rotate the light source and objects in the same way. Now we have the same problem we have solved and can rotate everything back at the end.

6.23 A global rendering approach would generate all shadows correctly. In a global renderer, as each point is shaded, a calculation is done to see which light sources shine on it. The projection approach assumes that we can project each polygon onto all other polygons. If the shadow of a given polygon projects onto multiple polygons, we could not compute these shadow polygons very easily. In addition, we have not accounted for the different shades we might see if there were intersecting shadows from multiple light sources.

Chapter 7 Solutions

7.1 First, consider the problem in two dimensions. We are looking for an α and β such that both parametric equations yield the same point, that is
x(α) = (1 - α)x1 + αx2 = (1 - β)x3 + βx4,
y(α) = (1 - α)y1 + αy2 = (1 - β)y3 + βy4.
These are two equations in the two unknowns α and β and, as long as the line segments are not parallel (a condition that will lead to a division by zero), we can solve for α and β. If both these values are between 0 and 1, the segments intersect. If the equations are in 3D, we can solve two of them for the α and β where x and y meet. If, when we use these values of the parameters in the two equations for z, we get the same z from both equations, the segments intersect.

7.3 If we clip a convex region against a convex region, we produce the intersection of the two regions, that is, the set of all points in both regions, which is a convex set and describes a convex region. To see this, consider any two points in the intersection. The line segment connecting them must be in both sets, and therefore the intersection is convex.

7.5 See Problem 6.22. Nonuniform scaling will not preserve the angle between the normal and other vectors.

7.7 Note that we could use OpenGL to produce a hidden-line-removed image by using the z buffer and drawing polygons with edges and interiors the same color as the background. But of course, this method was not used in pre-raster systems. Hidden-line removal algorithms work in object space, usually with either polygons or polyhedra. Back-facing polygons can be eliminated. In general, edges are intersected with polygons to determine any visible parts. Good algorithms (see Foley or Rogers) use various coherence strategies to minimize the number of intersections.

7.9 The O(k) was based upon computing the intersection of rays with the planes containing the k polygons. We did not consider the cost of filling the polygons, which can be a large part of the rendering time.
If we consider a scene which is viewed from a given point, there will be some percentage of the area of the screen that is filled with polygons. As we move the viewer closer to the objects, fewer polygons will appear on the screen, but each will occupy a larger area on the screen, thus leaving the area of the screen that is filled approximately the same. Thus the rendering time will be about the same even though there are fewer polygons displayed.

7.11 There are a number of ways we can attempt to get O(k log k) performance. One is to use a better sorting algorithm for the depth sort. Other strategies are based on divide and conquer, such as binary spatial partitioning.

7.13 If we consider a ray tracer that only casts rays to the first intersection and does not compute shadow rays or reflected or transmitted rays, then the image produced using a Phong model at the point of intersection will be the same image as produced by our pipeline renderer. This approach is sometimes called ray casting and is used in volume rendering and CSG. However, the data are processed in a different order from the pipeline renderer. The ray tracer works ray by ray, while the pipeline renderer works object by object.

7.15 Consider a circle centered at the origin: x^2 + y^2 = r^2. If we know that a point (x, y) is on the curve, then we also know that (-x, y), (x, -y), (-x, -y), (y, x), (-y, x), (y, -x), and (-y, -x) are also on the curve. This observation is known as the eight-fold symmetry of the circle. Consequently, we need only generate 1/8 of the circle, a 45 degree wedge, and can obtain the rest by copying this part using the symmetries. If we consider the 45 degree wedge starting at the bottom, the slope of this curve starts at 0 and goes to 1, precisely the conditions used for Bresenham's line algorithm. The tests are a bit more complex and we have to account for the possibility that the slope will be one, but the approach is the same as for line generation.

7.17 Flood fill should work with arbitrary closed areas. In practice, we can get into trouble at corners if the edges are not clearly defined. Such can be the case with scanned images.

7.19 Note that if we fill by scan lines, vertical edges are not a problem. Probably the best way to handle the problem is to avoid it completely by never allowing vertices to be on scan lines. OpenGL does this by having vertices placed halfway between scan lines. Other systems jitter the y value of any vertex where it is an integer.

7.21 Although each pixel uses five rays, the total number of rays has only doubled, i.e., consider a second grid that is offset one half pixel in both the x and y directions.

7.23 A mathematical answer can be investigated using the notion of reconstruction of a function from its samples (see Chapter 8). However, it is very easy to see by simply drawing bitmap characters that small pixels lead to very unreadable characters. A readable character should have some overlap of the pixels.

7.25 We want k levels between Imin and Imax that are distributed exponentially. Then I0 = Imin, I1 = Imin r, I2 = Imin r^2, ..., Ik-1 = Imax = Imin r^(k-1). We can solve the last equation for the desired r = (Imax/Imin)^(1/(k-1)).

7.27 If there are very few levels, we cannot display a gradual change in brightness. Instead the viewer will see steps of intensity. A simple rule of thumb is that we need enough gray levels so that a change of one step is not visible. We can mitigate the problem by adding one bit of random noise to the least significant bit of a pixel.
Thus if we have 3 bits (8 levels), the third bit will be noise. The effect of the noise will be to break up regions of almost constant intensity, so the user will not be able to see a step because it will be masked by the noise. In a statistical sense the jittered image is a noisy (degraded) version of the original, but in a visual sense it appears better.
Computer Graphics Homework Answers

Chapter 1: Introduction

Chapter 2: Graphics Systems
1. What is the resolution of an image?
Answer: The number of pixels contained per unit of length (for example, per inch) in the horizontal and vertical directions.
2. Compute the physical size of a 640 × 480 image at 240 pixels per inch.
Answer: (640/240) × (480/240), i.e. (8/3) × 2 inches.
3. Compute the resolution of a 2 × 2 inch image that has 512 × 512 pixels.
Answer: 512/2, i.e. 256 pixels per inch.
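Both answers come from the same relation, physical size = pixels / resolution (equivalently, resolution = pixels / physical size). A small C check using only the numbers from these two problems:

    #include <stdio.h>

    int main(void)
    {
        /* Problem 2: physical size of a 640 x 480 image at 240 pixels/inch */
        double width_in  = 640.0 / 240.0;   /* 8/3 inches */
        double height_in = 480.0 / 240.0;   /* 2 inches   */

        /* Problem 3: resolution of a 2 x 2 inch image with 512 x 512 pixels */
        double ppi = 512.0 / 2.0;           /* 256 pixels/inch */

        printf("size: %.2f x %.2f inches, resolution: %.0f ppi\n",
               width_in, height_in, ppi);
        return 0;
    }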
Chapter 3: 2D Graphics Generation Techniques
1. The two endpoints of a line are (0,0) and (6,18). Compute the y value corresponding to each x as x goes from 0 to 6, and plot the result.
Answer: Since the equation of the line is not given, it must be found first.
The following is the process of finding the line equation (y = mx + b).
First find the slope: m = Δy/Δx = (y2 - y1)/(x2 - x1) = (18 - 0)/(6 - 0) = 3. Next, the intercept b on the y axis can be found by substituting a point into y = 3x + b: 0 = 3(0) + b.
Therefore b = 0, and the equation of the line is y = 3x. So for x = 0, 1, 2, 3, 4, 5, 6, the corresponding values are y = 0, 3, 6, 9, 12, 15, 18.
2. What are the steps for drawing a line whose slope is between 0° and 45° using the slope-intercept equation?
Answer (a code sketch follows these steps):
(1) Compute dx: dx = x2 - x1.
(2) Compute dy: dy = y2 - y1.
(3) Compute m: m = dy/dx.
(4) Compute b: b = y1 - m × x1.
(5) Set (x, y) to the lower-left endpoint and set xend to the largest value of x. If dx < 0, then x = x2, y = y2, and xend = x1. If dx > 0, then x = x1, y = y1, and xend = x2.
(6) Test whether the whole line has been drawn: stop if x > xend.
(7) Plot a point at the current (x, y).
(8) Increment x: x = x + 1.
(9) Compute the next y from the equation y = mx + b.
(10) Go to step (6).
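A minimal C sketch of steps (1) to (10), assuming a setPixel(x, y) primitive such as the one used in the pseudocode for problem 3 below; it covers only the stated case 0 <= m <= 1.

    #include <math.h>

    void setPixel(int x, int y);   /* assumed pixel-plotting primitive */

    /* Slope-intercept scan conversion for 0 <= m <= 1 (steps (1)-(10) above). */
    void line_slope_intercept(int x1, int y1, int x2, int y2)
    {
        int   dx = x2 - x1;                         /* step (1) */
        int   dy = y2 - y1;                         /* step (2) */
        float m  = (float)dy / dx;                  /* step (3) */
        float b  = y1 - m * x1;                     /* step (4) */
        int   x  = x1, xend = x2;
        if (dx < 0) { x = x2; xend = x1; }          /* step (5): start at left end */
        while (x <= xend) {                         /* step (6) */
            int y = (int)floorf(m * x + b + 0.5f);  /* step (9), rounded */
            setPixel(x, y);                         /* step (7) */
            x++;                                    /* step (8), then loop: step (10) */
        }
    }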
3. Give pseudocode describing the steps needed to draw a line steeper than 45° (i.e. |m| > 1) using the slope-intercept equation.
Assume the endpoints of the segment are (x1, y1) and (x2, y2), and that y1 < y2.

    int x = x1, y = y1;
    float xf, m = (float)(y2 - y1) / (x2 - x1), b = y1 - m * x1;
    setPixel(x, y);                    /* plot the first pixel              */
    while (y < y2) {
        y++;                           /* y is the major direction          */
        xf = (y - b) / m;              /* exact x on the line at this y     */
        x = (int)floor(xf + 0.5);      /* round to the nearest pixel column */
        setPixel(x, y);
    }

4. Give pseudocode describing the steps needed to scan-convert a line whose slope is between -45° and 45° (i.e. |m| <= 1) using the DDA algorithm.
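For problem 4, a minimal DDA sketch for |m| <= 1, in the same style as the pseudocode above and assuming x1 < x2 and the same setPixel primitive:

    int   x;
    float y = y1, m = (float)(y2 - y1) / (x2 - x1);   /* |m| <= 1 */
    for (x = x1; x <= x2; x++) {
        setPixel(x, (int)floor(y + 0.5));   /* round y to the nearest pixel   */
        y += m;                             /* one unit step in x adds m to y */
    }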