Interactive Computer Graphics (Fifth Edition): Answers to End-of-Chapter Exercises, Chapters 1-7
Chapter 1

1. Describe the basic topics studied in computer graphics. Answer: See Section 1.1.4 on pages 5-6 of the textbook.

2. What is the essential difference between computer graphics, image processing, and pattern recognition? Give an example of each.

Answer: Computer graphics is the discipline that studies how, from a given description, a computer generates the corresponding pictures and images; the generated pictures and images can be shown on a display screen, output as hard copy, or stored as data sets in the computer. Computer graphics studies the process that goes from a data description to a generated picture. An example is computer animation.

Image processing uses a computer to analyze and process images of existing objects and then reproduce the images. An example is radiographic flaw detection in industry.

Pattern recognition means that the computer identifies and analyzes graphical (image) information and produces a description of it; it is the process that goes from a picture (image) to a description. For example, mail-sorting equipment scans the handwritten postal code on an envelope and converts the scanned image of the code into digits.
3. How is computer graphics related to CAD and CAM technology? Answer: See Section 1.1.3 on pages 4-5 of the textbook.

4. Give three examples of applications of computer graphics.

Answer: (1) Interactive plotting in business administration. One of the areas where graphics is used most is drawing the various charts used in business administration. Presenting models and trends in the data in a concise form improves the understanding of complex phenomena and supports decision making.

(2) Geographic information systems. A geographic information system is an information management system built on geographic graphics. Computer graphics generation techniques can be used to draw high-precision survey and exploration maps of geographic, geological, and other natural phenomena.

(3) Computer animation. Producing animated films by graphics methods yields vivid, realistic imagery, easily solves problems that are hard to handle when drawing by hand, and greatly improves productivity.

5. What are the characteristics of computer-generated drawing? Answer: See Section 1.3.1 on page 8 of the textbook.
6. What methods are there for generating pictures on a computer? Answer: There are two methods: the vector (stroke) method and the point-plotting (raster) method.

(1) Vector method: a series of coordinate points is specified on the display, and the electron beam is driven across the screen in a given order, "lighting up" the short vector between each pair of adjacent points, thereby producing an approximation of the desired curve. Although the display produces only short straight-line segments, the curve they form still looks smooth when the segments are short enough.

(2) Point-plotting method: the screen is divided into a finite set of discrete points that can be lit; each such point is called a pixel, and the array of pixels on the screen is called the raster. Drawing a curve consists of lighting up, one after another, the pixels that the curve passes through on the raster, so every displayed curve is made up of pixels of a certain size.
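As an illustration of the point-plotting idea, the following C sketch (an illustrative example, not from the textbook; set_pixel is assumed to be supplied by the display system) steps along an ideal line segment and lights the nearest pixel at each step:

#include <math.h>

/* set_pixel is assumed to be provided by the display system. */
extern void set_pixel(int x, int y);

/* Light the raster pixels closest to the line from (x0,y0) to (x1,y1). */
void dda_line(double x0, double y0, double x1, double y1)
{
    double dx = x1 - x0, dy = y1 - y0;
    int steps = (int)ceil(fmax(fabs(dx), fabs(dy)));   /* one step per pixel along the major axis */
    if (steps == 0) { set_pixel((int)lround(x0), (int)lround(y0)); return; }
    double xinc = dx / steps, yinc = dy / steps;
    double x = x0, y = y0;
    for (int i = 0; i <= steps; i++) {
        set_pixel((int)lround(x), (int)lround(y));     /* round to the nearest raster point */
        x += xinc;
        y += yinc;
    }
}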
Angel: Interactive Computer Graphics, Fifth Edition, Chapter 1 Solutions

1.1 The main advantage of the pipeline is that each primitive can be processed independently. Not only does this architecture lead to fast performance, it reduces memory requirements because we need not keep all objects available. The main disadvantage is that we cannot handle most global effects such as shadows, reflections, and blending in a physically correct manner.

1.3 We derive this algorithm later in Chapter 6. First, we can form the tetrahedron by finding four equally spaced points on a unit sphere centered at the origin. One approach is to start with one point on the z axis, (0, 0, 1). We then can place the other three points in a plane of constant z. One of these three points can be placed on the y axis. To satisfy the requirement that the points be equidistant, the point must be at (0, 2√2/3, −1/3). The other two can be found by symmetry to be at (−√6/3, −√2/3, −1/3) and (√6/3, −√2/3, −1/3). We can subdivide each face of the tetrahedron into four equilateral triangles by bisecting the sides and connecting the bisectors. However, the bisectors of the sides are not on the unit sphere, so we must push these points out to the unit sphere by rescaling their values. We can continue this process recursively on each of the triangles created by the bisection process.

1.5 In Exercise 1.4, we saw that we could intersect the line of which the line segment is part independently against each of the sides of the window. We could do this process iteratively, each time shortening the line segment if it intersects one side of the window.

1.7 In a one-point perspective, two faces of the cube are parallel to the projection plane, while in a two-point perspective only the edges of the cube in one direction are parallel to the projection plane. In the general case of a three-point perspective there are three vanishing points and none of the edges of the cube are parallel to the projection plane.

1.9 Each frame for a 480 x 640 pixel video display contains only about 300K pixels, whereas the 2000 x 3000 pixel movie frame has 6M pixels, or about 20 times as many as the video display. Thus, it can take about 20 times as much time to render each frame if there are a lot of pixel-level calculations.

1.11 There are single-beam CRTs. One scheme is to arrange the phosphors in vertical stripes (red, green, blue, red, green, ...). The major difficulty is that the beam must change very rapidly, approximately three times as fast as each beam in a three-beam system. The electronic components in such a system must also be much faster (and more expensive).

Chapter 2 Solutions

2.9 We can solve this problem separately in the x and y directions. The transformation is linear, that is, x_s = ax + b, y_s = cy + d. We must maintain proportions, so that x_s is in the same relative position in the viewport as x is in the window, hence

(x − x_min)/(x_max − x_min) = (x_s − u)/w,  so  x_s = u + w (x − x_min)/(x_max − x_min).

Likewise

y_s = v + h (y − y_min)/(y_max − y_min).
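A direct C translation of this window-to-viewport mapping might look like the following sketch (illustrative only; the window and viewport bounds are plain parameters rather than an OpenGL call):

/* Map (x, y) from the window [xmin,xmax] x [ymin,ymax] to the viewport whose
 * lower-left corner is (u, v) and whose size is w x h. */
typedef struct { double x, y; } point2;

point2 window_to_viewport(double x, double y,
                          double xmin, double xmax, double ymin, double ymax,
                          double u, double v, double w, double h)
{
    point2 p;
    p.x = u + w * (x - xmin) / (xmax - xmin);   /* keep the same relative position in x */
    p.y = v + h * (y - ymin) / (ymax - ymin);   /* and in y */
    return p;
}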
2.11 Most practical tests work on a line-by-line basis. Usually we use scan lines, each of which corresponds to a row of pixels in the frame buffer. If we compute the intersections of the edges of the polygon with a line passing through it, these intersections can be ordered. The first intersection begins a set of points inside the polygon. The second intersection leaves the polygon, the third reenters, and so on.

2.13 There are two fundamental approaches: vertex lists and edge lists. With vertex lists we store the vertex locations in an array. The mesh is represented as a list of interior polygons (those polygons with no other polygons inside them). Each interior polygon is represented as an array of pointers into the vertex array. To draw the mesh, we traverse the list of interior polygons, drawing each polygon. One disadvantage of the vertex list is that if we wish to draw the edges in the mesh by rendering each polygon, shared edges are drawn twice. We can avoid this problem by forming an edge list or edge array, in which each element is a pair of pointers to vertices in the vertex array. Thus, we can draw each edge once by simply traversing the edge list. However, the simple edge list has no information on polygons, and thus if we want to render the mesh in some other way, such as by filling interior polygons, we must add something to this data structure that records which edges form each polygon. A flexible mesh representation would consist of an edge list, a vertex list, and a polygon list with pointers, so we could know which edges belong to which polygons and which polygons share a given vertex.

2.15 The Maxwell triangle corresponds to the triangle that connects the red, green, and blue vertices in the color cube.

2.19 Consider the lines defined by the sides of the polygon. We can assign a direction to each of these lines by traversing the vertices in counter-clockwise order. One very simple test is obtained by noting that any point inside the object is on the left of each of these lines. Thus, if we substitute the point into the equation for each of the lines (ax + by + c), we should always get the same sign.
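This same-sign test can be sketched in a few lines of C (an illustration, assuming the vertices are given in counter-clockwise order):

#include <stdbool.h>

/* Returns true if (px, py) lies inside (or on the boundary of) a convex polygon
 * whose n vertices vx[], vy[] are listed in counter-clockwise order. */
bool inside_convex(const double vx[], const double vy[], int n, double px, double py)
{
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;                       /* edge from vertex i to vertex j */
        double ex = vx[j] - vx[i], ey = vy[j] - vy[i];
        /* z-component of (edge) x (point - vertex): positive means "to the left" */
        double side = ex * (py - vy[i]) - ey * (px - vx[i]);
        if (side < 0.0) return false;              /* an interior point is left of every edge */
    }
    return true;
}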
2.23 There are eight vertices and thus 256 = 2^8 possible black/white colorings. If we remove symmetries (black/white and rotational) there are 14 unique cases. See Angel, Interactive Computer Graphics (Third Edition) or the paper by Lorensen and Cline in the references.

Chapter 3 Solutions

3.1 The general problem is how to describe a set of characters that might have thickness, curvature, and holes (such as in the letters a and q). Suppose that we consider a simple example where each character can be approximated by a sequence of line segments. One possibility is to use a move/line system where 0 is a move and 1 a line. Then a character can be described by a sequence of the form (x0, y0, b0), (x1, y1, b1), (x2, y2, b2), ..., where bi is a 0 or 1. This approach is used in the example in the OpenGL Programming Guide. A more elaborate font can be developed by using polygons instead of line segments.
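A minimal sketch of such a move/line character description in C (illustrative; the coordinates and the letter chosen are made up, and draw_line is assumed to be provided elsewhere):

/* A character as a sequence of move (0) / line (1) strokes, as described above. */
typedef struct { float x, y; int pen; } stroke;    /* pen: 0 = move, 1 = draw */

/* The letter 'L': move to the top of the vertical bar, draw down, draw right. */
static const stroke letter_L[] = {
    { 0.0f, 1.0f, 0 },   /* move to (0,1)   */
    { 0.0f, 0.0f, 1 },   /* line to (0,0)   */
    { 0.6f, 0.0f, 1 },   /* line to (0.6,0) */
};

/* Drawing simply replays the strokes; draw_line is assumed to be supplied elsewhere. */
extern void draw_line(float x0, float y0, float x1, float y1);

void draw_char(const stroke *s, int n, float ox, float oy)
{
    float cx = ox, cy = oy;                         /* current pen position */
    for (int i = 0; i < n; i++) {
        if (s[i].pen)
            draw_line(cx, cy, ox + s[i].x, oy + s[i].y);
        cx = ox + s[i].x;
        cy = oy + s[i].y;
    }
}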
3.11 There are a couple of potential problems. One is that the application program can map different points in object coordinates to the same point in screen coordinates. Second, a given position on the screen, when transformed back into object coordinates, may lie outside the user's window.

3.19 Each scan is allocated 1/60 second. For a given scan we have to give up 10% of the time for the vertical retrace, which means that we start to draw scan line n at 0.9n/(60·1024) seconds from the beginning of the refresh. Allocating another 10% of this time for the horizontal retrace, we are at pixel m on this line at time 0.81nm/(60·1024).

3.25 When the display is changing, primitives that move or are removed from the display will leave a trace or motion blur on the display as the phosphors persist. Long-persistence phosphors have been used in text-only displays, where motion blur is less of a problem and the long persistence gives a very stable, flicker-free image.

Chapter 4 Solutions

4.1 If the scaling matrix is uniform, then RS = R S(α, α, α) = αR = SR. Consider R_x(θ): if we multiply and use the standard trigonometric identities for the sine and cosine of the sum of two angles, we find R_x(θ) R_x(φ) = R_x(θ + φ). By simply multiplying the matrices we find T(x1, y1, z1) T(x2, y2, z2) = T(x1 + x2, y1 + y2, z1 + z2).

4.5 There are 12 degrees of freedom in the three-dimensional affine transformation. Consider a point p = [x, y, z, 1]^T that is transformed to p′ = [x′, y′, z′, 1]^T by the matrix M. Hence we have the relationship p′ = Mp, where M has 12 unknown coefficients but p and p′ are known. Thus we have 3 equations in 12 unknowns (the fourth equation is simply the identity 1 = 1). If we have 4 such pairs of points we will have 12 equations in 12 unknowns, which could be solved for the elements of M. Thus if we know how a quadrilateral is transformed we can determine the affine transformation. In two dimensions, there are 6 degrees of freedom in M, but p and p′ have only x and y components. Hence if we know 3 points both before and after transformation, we will have 6 equations in 6 unknowns, and thus in two dimensions if we know how a triangle is transformed we can determine the affine transformation.

4.7 It is easy to show by simply multiplying the matrices that the concatenation of two rotations yields a rotation and that the concatenation of two translations yields a translation. If we look at the product of a rotation and a translation, we find that the left three columns of RT are the left three columns of R, and the right column of RT is the right column of the translation matrix. If we now consider RTR′, where R′ is a rotation matrix, the left three columns are exactly the same as the left three columns of RR′, and the right column still has 1 as its bottom element. Thus, the form is the same as RT with an altered rotation (which is the concatenation of the two rotations) and an altered translation. Inductively, we can see that any further concatenations with rotations and translations do not alter this form.

4.9 If we do a translation by −h we convert the problem to reflection about a line passing through the origin. From m we can find an angle by which we can rotate so the line is aligned with either the x or y axis. Now reflect about the x or y axis. Finally we undo the rotation and translation, so the sequence is of the form T⁻¹R⁻¹SRT.

4.11 The most sensible place to put the shear is second, so that the instance transformation becomes I = TRHS. We can see that this order makes sense if we consider a cube centered at the origin whose sides are aligned with the axes. The scale gives us the desired size and proportions. The shear then converts the right parallelepiped to a general parallelepiped. Finally we can orient this parallelepiped with a rotation and place it where desired with a translation. Note that the order I = TRSH will work too.

4.13

R = R_z(θ_z) R_y(θ_y) R_x(θ_x) =
\begin{bmatrix}
\cos\theta_y\cos\theta_z & \cos\theta_z\sin\theta_x\sin\theta_y - \cos\theta_x\sin\theta_z & \cos\theta_x\cos\theta_z\sin\theta_y + \sin\theta_x\sin\theta_z & 0 \\
\cos\theta_y\sin\theta_z & \cos\theta_x\cos\theta_z + \sin\theta_x\sin\theta_y\sin\theta_z & -\cos\theta_z\sin\theta_x + \cos\theta_x\sin\theta_y\sin\theta_z & 0 \\
-\sin\theta_y & \cos\theta_y\sin\theta_x & \cos\theta_x\cos\theta_y & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}

4.17 One test is to use the first three vertices to find the equation of the plane ax + by + cz + d = 0. Although there are four coefficients in the equation, only three are independent, so we can select one arbitrarily or normalize so that a² + b² + c² = 1. Then we can successively evaluate ax + by + cz + d for the other vertices. A vertex will be on the plane if this expression evaluates to zero. An equivalent test is to form the matrix

\begin{bmatrix}
1 & 1 & 1 & 1 \\
x_1 & x_2 & x_3 & x_i \\
y_1 & y_2 & y_3 & y_i \\
z_1 & z_2 & z_3 & z_i
\end{bmatrix}

for each i = 4, 5, .... If the determinant of this matrix is zero, the ith vertex is in the plane determined by the first three.

4.19 Although we will have the same number of degrees of freedom in the objects we produce, the class of objects will be very different. For example, if we rotate a square before we apply a nonuniform scale, we will shear the square, something we cannot do if we scale then rotate.

4.21 The vector a = u × v is orthogonal to u and v. The vector b = u × a is orthogonal to u and a. Hence, u, a, and b form an orthogonal coordinate system.

4.23 Using r = cos(θ/2) + sin(θ/2)v, with θ = 90° and v = (1, 0, 0), we find for rotation about the x axis r = (√2/2)(1, 1, 0, 0). Likewise, for rotation about the y axis, r = (√2/2)(1, 0, 1, 0).

4.27 Possible reasons include (1) object-oriented systems are slower, (2) users are often comfortable working in world coordinates with higher-level objects and do not need the flexibility offered by a coordinate-free approach, (3) even a system that provides scalars, vectors, and points would have to have an underlying frame to use for the implementation.

Chapter 5 Solutions

5.1 Eclipses (both solar and lunar) are good examples of the projection of an object (the moon or the earth) onto a nonplanar surface. Any time a shadow is created on a curved surface, there is a nonplanar projection. All the maps in an atlas are examples of the use of curved projectors. If the projectors were not curved we could not project the entire surface of a spherical object (the Earth) onto a rectangle.

5.3 Suppose that we want the view of the Earth rotating about the sun. Before we draw the Earth, we must rotate the Earth, which is a rotation about the y axis. Next we translate the Earth away from the origin. Finally we do another rotation about the y axis to position the Earth in its desired location along its orbit. There are a number of interesting variants of this problem, such as the view from the Earth of the rest of the solar system.

5.5 Yes. Any sequence of rotations is equivalent to a single rotation about a suitably chosen axis. One way to compute this rotation matrix is to form the matrix as a product of simple rotations, such as R = R_x R_y R_z. The desired axis is an eigenvector of this matrix.

5.7 The result follows from the transformation being affine. We can also take a direct approach. Consider the line determined by the points (x1, y1, z1) and (x2, y2, z2). Any point along it can be written parametrically as (αx1 + (1 − α)x2, αy1 + (1 − α)y2, αz1 + (1 − α)z2). Consider the simple projection of this point, which is of the form f(α)(αx1 + (1 − α)x2, αy1 + (1 − α)y2) with f(α) = d/(αz1 + (1 − α)z2). This form describes a line because the slope is constant. Note that the function f(α) implies that we trace out the line at a nonlinear rate as α increases from 0 to 1.

5.9 The specification used in many graphics texts is the pair of angles the projector makes with the x,z and y,z planes, i.e., the angles defined by the projection of a projector in a top view and a side view. Another approach is to specify the foreshortening of one or two sides of a cube aligned with the axes.
5.11 The CORE system used this approach. Retained objects were kept in distorted form. Any transformation applied to an object that was defined with other than an orthographic view transformed the distorted object, and the orthographic projection of the transformed distorted object was incorrect.

5.15 If we use θ = φ = 45°, we obtain the projection matrix

P = \begin{bmatrix}
1 & 0 & -1 & 0 \\
0 & 1 & -1 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}

5.17 All the points on the projection of the point (x, y, z) in the direction (d_x, d_y, d_z) are of the form (x + αd_x, y + αd_y, z + αd_z). Thus the shadow of the point (x, y, z) is found by determining the α for which the line intersects the plane, written here as a x_s + b y_s + c z_s + d = 0. Substituting and solving, we find

α = −(ax + by + cz + d)/(a d_x + b d_y + c d_z).

However, what we want is a projection matrix. Using this value of α we find

x_s = x + α d_x = (x(b d_y + c d_z) − d_x(b y + c z + d))/(a d_x + b d_y + c d_z),

with similar equations for y_s and z_s. These results can be computed by multiplying the homogeneous-coordinate point (x, y, z, 1) by the projection matrix

M = \begin{bmatrix}
b d_y + c d_z & -b d_x & -c d_x & -d d_x \\
-a d_y & a d_x + c d_z & -c d_y & -d d_y \\
-a d_z & -b d_z & a d_x + b d_y & -d d_z \\
0 & 0 & 0 & a d_x + b d_y + c d_z
\end{bmatrix}
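For concreteness, the following C sketch (illustrative; it assumes the plane is written as ax + by + cz + d = 0, matching the matrix above) builds this shadow-projection matrix and applies it to one homogeneous point:

#include <stdio.h>

/* Build the 4x4 matrix (row-major) that projects points along direction (dx,dy,dz)
 * onto the plane a*x + b*y + c*z + d = 0, as in the solution above. */
void shadow_matrix(double m[4][4], double a, double b, double c, double d,
                   double dx, double dy, double dz)
{
    double dot = a*dx + b*dy + c*dz;
    m[0][0] = b*dy + c*dz; m[0][1] = -b*dx;       m[0][2] = -c*dx;       m[0][3] = -d*dx;
    m[1][0] = -a*dy;       m[1][1] = a*dx + c*dz; m[1][2] = -c*dy;       m[1][3] = -d*dy;
    m[2][0] = -a*dz;       m[2][1] = -b*dz;       m[2][2] = a*dx + b*dy; m[2][3] = -d*dz;
    m[3][0] = 0.0;         m[3][1] = 0.0;         m[3][2] = 0.0;         m[3][3] = dot;
}

int main(void)
{
    double m[4][4], p[4] = { 1.0, 4.0, 2.0, 1.0 }, s[4] = { 0, 0, 0, 0 };
    shadow_matrix(m, 0.0, 1.0, 0.0, 0.0, 0.3, -1.0, 0.2);  /* plane y = 0, oblique light direction */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            s[i] += m[i][j] * p[j];
    /* Dividing by the homogeneous coordinate gives the shadow point (2.2, 0, 2.8). */
    printf("shadow = (%f, %f, %f)\n", s[0]/s[3], s[1]/s[3], s[2]/s[3]);
    return 0;
}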
5.21 Suppose that the average of the two eye positions is at (x, y, z), the eyes are separated by a distance dx, and the viewer is looking at the origin. We could form the images using the LookAt function twice, that is,

gluLookAt(x-dx/2, y, z, 0, 0, 0, 0, 1, 0);
/* draw scene here */
/* swap buffers and clear */
gluLookAt(x+dx/2, y, z, 0, 0, 0, 0, 1, 0);
/* draw scene again */
/* swap buffers and clear */

Chapter 6 Solutions

6.1 Point sources produce a very harsh lighting. Such images are characterized by abrupt transitions between light and dark. The ambient light in a real scene depends on both the lights in the scene and the reflectivity properties of the objects in the scene, something that cannot be computed correctly with OpenGL. The Phong reflection term is not physically correct; the reflection term in the modified Phong model is even further from being physically correct.

6.3 If we were to take into account a light source being obscured by an object, we would have to have all polygons available so as to test for this condition. Such a global calculation is incompatible with the pipeline model, which assumes we can shade each polygon independently of all other polygons as it flows through the pipeline.

6.5 Materials absorb light from sources. Thus, a surface that appears red under white light appears so because the surface absorbs all wavelengths of light except in the red range, a subtractive process. To be compatible with such a model, we should use surface absorption constants that define the materials for cyan, magenta, and yellow, rather than red, green, and blue.

6.7 Let ψ be the angle between the normal and the halfway vector, φ be the angle between the viewer and the reflection vector, and θ be the angle between the normal and the light source. If all the vectors lie in the same plane, the angle between the light source and the viewer can be computed either as φ + 2θ or as 2(θ + ψ). Setting the two equal, we find φ = 2ψ. If the vectors are not coplanar then φ < 2ψ.

6.13 Without loss of generality, we can consider the problem in two dimensions. Suppose that the first material has a velocity of light of v1 and the second material has a light velocity of v2. Furthermore, assume that the axis y = 0 separates the two materials. Place a point light source at (0, h), where h > 0, and a viewer at (x, y), where y < 0. Light will travel in a straight line from the source to a point (t, 0), where it will leave the first material and enter the second. It will then travel from this point in a straight line to (x, y). We must find the t that minimizes the time travelled. Using some simple trigonometry, we find that the line from the source to (t, 0) has length l1 = √(h² + t²) and the line from there to the viewer has length l2 = √(y² + (x − t)²). The total time light travels is thus l1/v1 + l2/v2. Minimizing over t gives the desired result (Snell's law) when we note that the two desired sines are sin θ1 = t/√(h² + t²) and sin θ2 = (x − t)/√(y² + (x − t)²).

6.19 Shading requires that when we transform normals and points, we maintain the angle between them, or equivalently have the dot product p · v = p′ · v′ when p′ = Mp and v′ = Mv. If MᵀM is an identity matrix, angles are preserved. Such a matrix (M⁻¹ = Mᵀ) is called orthogonal. Rotations and translations are orthogonal but scaling and shear are not.

6.21 Probably the easiest approach to this problem is to rotate the given plane to the plane z = 0 and rotate the light source and objects in the same way. Now we have the same problem we have already solved and can rotate everything back at the end.

6.23 A global rendering approach would generate all shadows correctly. In a global renderer, as each point is shaded, a calculation is done to see which light sources shine on it. The projection approach assumes that we can project each polygon onto all other polygons. If the shadow of a given polygon projects onto multiple polygons, we could not compute these shadow polygons very easily. In addition, we have not accounted for the different shades we might see if there were intersecting shadows from multiple light sources.

Chapter 7 Solutions

7.1 First, consider the problem in two dimensions. We are looking for an α and β such that both parametric equations yield the same point, that is,

(1 − α)x1 + αx2 = (1 − β)x3 + βx4,
(1 − α)y1 + αy2 = (1 − β)y3 + βy4.

These are two equations in the two unknowns α and β and, as long as the line segments are not parallel (a condition that would lead to a division by zero), we can solve for α and β. If both these values are between 0 and 1, the segments intersect. If the equations are in 3D, we can solve two of them for the α and β where x and y meet. If, when we use these values of the parameters in the two equations for z, we get the same z from both equations, the segments intersect.
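A direct 2D implementation of this intersection test might look like the following C sketch (illustrative; parallel and collinear segments are simply reported as non-intersecting):

#include <stdbool.h>

/* Do segments (x1,y1)-(x2,y2) and (x3,y3)-(x4,y4) intersect?
 * Solves (1-a)p1 + a p2 = (1-b)p3 + b p4 for the parameters a and b. */
bool segments_intersect(double x1, double y1, double x2, double y2,
                        double x3, double y3, double x4, double y4)
{
    double denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3);
    if (denom == 0.0) return false;               /* parallel (or collinear): no unique solution */
    double a = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom;
    double b = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom;
    /* The segments intersect when both parameters lie in [0, 1]. */
    return a >= 0.0 && a <= 1.0 && b >= 0.0 && b <= 1.0;
}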
7.3 If we clip a convex region against a convex region, we produce the intersection of the two regions, that is, the set of all points in both regions, which is a convex set and describes a convex region. To see this, consider any two points in the intersection. The line segment connecting them must be in both sets, and therefore the intersection is convex.

7.5 See Problem 6.22. Nonuniform scaling will not preserve the angle between the normal and other vectors.

7.7 Note that we could use OpenGL to produce a hidden-line-removed image by using the z-buffer and drawing polygons with edges and interiors the same color as the background. But of course, this method was not used in pre-raster systems. Hidden-line removal algorithms work in object space, usually with either polygons or polyhedra. Back-facing polygons can be eliminated. In general, edges are intersected with polygons to determine any visible parts. Good algorithms (see Foley or Rogers) use various coherence strategies to minimize the number of intersections.

7.9 The O(k) bound was based upon computing the intersection of rays with the planes containing the k polygons. We did not consider the cost of filling the polygons, which can be a large part of the rendering time. If we consider a scene which is viewed from a given point, there will be some percentage of the area of the screen that is filled with polygons. As we move the viewer closer to the objects, fewer polygons will appear on the screen, but each will occupy a larger area on the screen, thus leaving the area of the screen that is filled approximately the same. Thus the rendering time will be about the same even though there are fewer polygons displayed.

7.11 There are a number of ways we can attempt to get O(k log k) performance. One is to use a better sorting algorithm for the depth sort. Other strategies are based on divide and conquer, such as binary spatial partitioning.

7.13 If we consider a ray tracer that only casts rays to the first intersection and does not compute shadow rays or reflected or transmitted rays, then the image produced using a Phong model at the point of intersection will be the same image as produced by our pipeline renderer. This approach is sometimes called ray casting and is used in volume rendering and CSG. However, the data are processed in a different order from the pipeline renderer. The ray tracer works ray by ray while the pipeline renderer works object by object.

7.15 Consider a circle centered at the origin: x² + y² = r². If we know that a point (x, y) is on the curve, then we also know that (−x, y), (x, −y), (−x, −y), (y, x), (−y, x), (y, −x), and (−y, −x) are also on the curve. This observation is known as the eight-fold symmetry of the circle. Consequently, we need only generate 1/8 of the circle, a 45-degree wedge, and can obtain the rest by copying this part using the symmetries. If we consider the 45-degree wedge starting at the bottom, the slope of this curve starts at 0 and goes to 1, precisely the conditions used for Bresenham's line algorithm. The tests are a bit more complex and we have to account for the possibility that the slope reaches one, but the approach is the same as for line generation.

7.17 Flood fill should work with arbitrary closed areas. In practice, we can get into trouble at corners if the edges are not clearly defined. Such can be the case with scanned images.

7.19 Note that if we fill by scan lines, vertical edges are not a problem. Probably the best way to handle the problem is to avoid it completely by never allowing vertices to be on scan lines. OpenGL does this by placing vertices halfway between scan lines. Other systems jitter the y value of any vertex where it is an integer.

7.21 Although each pixel uses five rays, the total number of rays has only doubled, i.e., consider a second grid that is offset one half pixel in both the x and y directions.

7.23 A mathematical answer can be investigated using the notion of reconstruction of a function from its samples (see Chapter 8). However, it is very easy to see, by simply drawing bitmap characters, that characters made of only a few pixels are very hard to read. A readable character should have some overlap of the pixels.

7.25 We want k levels between I_min and I_max that are distributed exponentially. Then I_0 = I_min, I_1 = I_min r, I_2 = I_min r², ..., I_{k−1} = I_max = I_min r^{k−1}. We can solve the last equation for the desired ratio r = (I_max/I_min)^{1/(k−1)}.
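A small C sketch of these exponentially spaced intensity levels (illustrative; the values of Imin, Imax, and k in main are arbitrary examples):

#include <math.h>
#include <stdio.h>

/* Fill levels[0..k-1] with k intensities from imin to imax spaced by the constant
 * ratio r = (imax/imin)^(1/(k-1)), as derived above. */
void intensity_levels(double imin, double imax, int k, double levels[])
{
    double r = pow(imax / imin, 1.0 / (k - 1));
    levels[0] = imin;
    for (int i = 1; i < k; i++)
        levels[i] = levels[i - 1] * r;            /* each level is r times the previous one */
}

int main(void)
{
    double levels[8];
    intensity_levels(0.02, 1.0, 8, levels);
    for (int i = 0; i < 8; i++)
        printf("I%d = %.4f\n", i, levels[i]);
    return 0;
}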
7.27 If there are very few levels, we cannot display a gradual change in brightness. Instead the viewer will see steps of intensity. A simple rule of thumb is that we need enough gray levels so that a change of one step is not visible. We can mitigate the problem by adding one bit of random noise to the least significant bit of a pixel. Thus if we have 3 bits (8 levels), the third bit will be noise. The effect of the noise will be to break up regions of almost constant intensity, so the user will not be able to see a step because it will be masked by the noise. In a statistical sense the jittered image is a noisy (degraded) version of the original, but in a visual sense it appears better.
CG Assignment 5 (Paper Homework 2, Chapters 5-8). Deadline: June 20, 2010. Please write your answers in a Word file (or on paper, captured with a scanner or camera as a .jpg image file) and upload it to the portal.

1. Exercise 5.2 (p. 286 in textbook)
5.2 Suppose that the axis of the airplane is its z direction and up is the y direction. In the airplane's coordinate system, roll, pitch, and yaw correspond to rotations about the z, x, and y axes respectively. Thus we can control the orientation relative to the origin by a rotation of the form R_z(roll) R_x(pitch) R_y(yaw). We must also do a translation to move the airplane to its desired location.

2. Exercise 5.7 (p. 286): the solution is the same as 5.7 in the Angel Chapter 5 solutions above.

3. Exercise 5.15 (p. 286): the solution is the same as 5.15 above.

4. Exercise 5.17 (p. 287): the solution is the same as 5.17 above.

5. Exercise 6.7 (p. 327): the solution is the same as 6.7 above.

6. Exercise 6.19 (p. 328): the solution is the same as 6.19 above.

7. Exercise 6.24 (p. 328)
6.24 The normal at each point on the sphere points from the origin to that point. Given this normal n and the location of the viewer p, we can compute the reflection vector r = 2(p · n)n − p as in Section 6.5. The viewer would see the color of the first opaque object along this reflected vector from the sphere.

8. Exercise 7.1 (p. 379): the solution is the same as 7.1 above.

9. Exercise 7.6 (p. 380)
7.6 First we want to move the window to the center with the translation T(−(x_max + x_min)/2, −(y_max + y_min)/2, 0). Next we want to scale the sides to be the size of the viewport with S((u_max − u_min)/(x_max − x_min), (v_max − v_min)/(y_max − y_min), 1). Finally we move the origin to the center of the viewport with T((u_max + u_min)/2, (v_max + v_min)/2, 0).

10. Exercise 7.18 (p. 381)
7.18 Suppose that the equation of the edge is y = mx + h, where m and h are determined from (x1, y1) and (x2, y2). For any change in y, the corresponding change in x must be Δx = Δy/m. Thus if (x_i, y_i) is the intersection of the edge with a scan line, the intersection with the next scan line must be at (x_i + 1/m, y_i + 1), because we move one unit in y and must still be on the edge. Thus, once we have the first intersection, the calculation of each successive intersection requires only one addition.

11. Exercise 8.1 (p. 448)
8.1 Suppose that we move across a scan line left to right, starting on the outside of a polygon. Assume that 0 corresponds to the background color and 1 is the edge and fill color. As we move across the scan line we replace each point by the exclusive or (XOR) of its value with the value of the point on the left (which has already been processed). If we start on the outside of any polygon, we can assume the first point is a 0. Thus, as long as we remain outside the first polygon we generate 0 ⊕ 0 = 0. When we encounter the first edge, we compute 1 ⊕ 0 = 1 (that is, we have a 0 from the last point XORed with the edge). As we proceed inside the polygon we compute 1 ⊕ 0 = 1, because we have a 1 from the last XOR and a 0 from the unfilled point. When we encounter the second edge, we compute 1 ⊕ 1 = 0, thus returning to our initial situation.

12. Exercise 8.2 (p. 448)
8.2 The problem is that if the area under the cursor is not all 1's or all 0's, some of the points will be complemented and some will be unchanged. Thus, the cursor region may not look like the original cursor, although a second XOR will return us to the original display, and we should be able to notice where the cursor is located by the bits that do change.

13. Exercise 8.10 (p. 449)
8.10 There are two issues. First, if we use α and 1 − α we avoid problems of colors and opacities exceeding 1 and being clipped. However, with this choice the order in which we composite multiple surfaces matters, which would not make a difference if we used α and 1.
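The order dependence noted in 8.10 is easy to see in a small compositing sketch (illustrative C, not from the textbook; layers are composited back to front with the "over" operator using a and 1 - a; swapping the two layers changes the result):

#include <stdio.h>

typedef struct { double r, g, b, a; } rgba;

/* Composite layers back to front with the "over" operator, using a and (1 - a).
 * layers[0] is the farthest surface. */
rgba composite_over(const rgba layers[], int n)
{
    rgba out = { 0.0, 0.0, 0.0, 0.0 };
    for (int i = 0; i < n; i++) {
        double a = layers[i].a;
        out.r = a * layers[i].r + (1.0 - a) * out.r;
        out.g = a * layers[i].g + (1.0 - a) * out.g;
        out.b = a * layers[i].b + (1.0 - a) * out.b;
        out.a = a + (1.0 - a) * out.a;
    }
    return out;
}

int main(void)
{
    rgba layers[2] = {
        { 1.0, 0.0, 0.0, 0.8 },   /* far layer: mostly opaque red        */
        { 0.0, 0.0, 1.0, 0.5 },   /* near layer: half-transparent blue   */
    };
    rgba c = composite_over(layers, 2);
    printf("result = (%.2f, %.2f, %.2f), alpha = %.2f\n", c.r, c.g, c.b, c.a);
    return 0;
}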
Fundamentals of Computer Graphics

Chapter 1

1. Definitions.
Graphic: a figure or shape abstracted from objects in the real world that carries both color information and shape information.
Image:
Point-array (dot-matrix) method: a way of representing a picture by an array of points carrying gray-level or color information.
Parametric method: a way of representing a picture by the shape parameters and attribute parameters of the graphics recorded in the computer.

2. What two kinds of elements make up a graphic, and how are they represented in a computer? The elements that make up a graphic fall into two classes: geometric elements that describe shape, such as points, lines, surfaces, and solids; and non-geometric elements that reflect the intrinsic properties of the object itself, such as surface or material attributes like shading, gray level, and color.

3. What is computer graphics? Analyze the relationship among computer graphics, digital image processing, and computer vision. Computer graphics is the discipline that studies the principles, methods, and techniques for displaying, generating, and processing graphics by computer. [The relationship diagram is on the first page of the textbook.]

4. What software standards exist for computer graphics? The Graphical Kernel System (GKS) and its language bindings, the Computer Graphics Metafile (CGM), the Computer Graphics Interface (CGI), the Initial Graphics Exchange Specification (IGES), and the product data exchange standard STEP.

6. Use your imagination and give examples of the application areas of computer graphics and the problems it solves. [See page 5 of the textbook for details.]

Chapter 2

1. Definitions.
LCD: Liquid Crystal Display. It exploits the electro-optical effect of liquid crystals: an applied voltage changes the optical properties of the liquid crystal, thereby modulating the incident light so that the light transmitted through or reflected by the liquid crystal is controlled by the applied voltage, which achieves the display.
LED: Light-Emitting Diode; it uses the light emitted by diodes to display the image.
Random scan: the motion of the electron beam is controlled by positioning it at arbitrary (random) locations.
Raster scan: when the display shows a picture, the electron beam scans along fixed scan lines in a prescribed order. The beam starts at the upper-left corner of the phosphor screen and sweeps one horizontal line to the right, then quickly retraces to a slightly lower position at the left and sweeps the second horizontal line, continuing along this fixed path and order until the last horizontal line, which completes the scan of the whole screen.
Refresh: after some time the stored information may be lost and must be rewritten. To keep the information stored longer, the information in every storage cell must be refreshed continually, that is, the data in each cell are read out and then written back into the same cell, recharging the capacitor in each storage cell.
Refresh rate: the number of times per second that the electron beam rescans the image on the screen.

1. Explain the relationship among computer graphics, image processing, pattern recognition, and computational geometry. Answer: Computer graphics studies the conversion of data models and geometric models into image signals; pattern recognition studies going from image signals back to data models and geometric models; image processing goes from images to images; computational geometry is the discipline that studies geometric models and data processing and explores computer representations of geometric shapes.

2. What does computer graphics study? Computer graphics is the discipline that studies the principles, methods, and techniques for converting data into graphics by computer and displaying them on dedicated display devices. Its scope is very broad and mainly includes the following: applications of computer graphics; graphics devices and systems; the graphics standards issued by the International Organization for Standardization (ISO); human-computer interaction and interface technology; generation algorithms for basic graphic primitives, free-form curves, and free-form surfaces; geometric transformations and clipping; surface and solid modeling algorithms; and color, illumination models, and techniques and algorithms for realistic image display.
3. Briefly describe the development history and trends of computer graphics. Development history: In 1950, the first graphics display was born as an accessory of the Whirlwind I computer at the Massachusetts Institute of Technology (MIT). That display used a cathode-ray tube (CRT), similar to an oscilloscope, to show simple pictures. In 1958 the American company Calcomp developed a drum plotter out of its on-line digital recorder, and Gerber turned a numerically controlled machine tool into a flatbed plotter. In 1962, Ivan E. Sutherland of the MIT Lincoln Laboratory published his doctoral thesis "Sketchpad: A Man-Machine Graphical Communication System". In it he used the term "computer graphics" for the first time, demonstrated that interactive computer graphics is a feasible and useful field of research, and thereby established computer graphics as an independent new branch of science. Starting in 1973, solid modeling systems appeared one after another, such as the Build system of the CAD group at the University of Cambridge and the PADL-1 system of the University of Rochester. Since the mid-1980s, the development of very-large-scale integrated circuits has provided the material foundation for the rapid progress of computer graphics. Greater computing power and faster graphics processing have allowed every research direction of the field to develop fully, and graphics is now widely applied in animation, visualization in scientific computing, CAD/CAM, film and entertainment, and many other areas.
Computer Graphics: Solutions to Exercises, Chapters 1-8

Solutions to Exercises, Chapters 1-4

Exercises for Chapter 1

1. What does computer graphics study? Answer: Construction of geometric models, picture generation, manipulation and processing of pictures, storage, retrieval, and exchange of graphical information, human-computer interaction and user interfaces, animation, graphics output devices and output techniques, graphics standards and graphics software packages, and so on.

2. How are computer graphics and image processing related, and how do they differ? Answer: Both use a computer to handle graphics and images; they are closely connected and interpenetrate, but they belong to two different technical fields. Computer graphics constructs pictures on display devices by means of algorithms and programs; it is a process that goes from data to images. Image processing, by contrast, is a technique for analyzing scenes or images; it is a process that goes from images to images.

3. Briefly describe the development of computer graphics. Answer: Omitted. (Reference: page 3 of the textbook.)

4. Briefly describe the application areas of computer graphics as you understand them.
5. If RGB color values are represented by direct coding with 10 bits per primary, how many possible colors does each pixel have? Answer: 2^10 x 2^10 x 2^10 = 2^30 = 1,073,741,824 colors.

6. If the red and blue of each pixel are each represented with 5 bits and the green with 6 bits, for 16 bits in total, how many colors can be represented altogether? Answer: 2^5 x 2^6 x 2^5 = 2^16 = 65,536 colors.
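A one-line check of these two counts (illustrative C; the bit widths are the only inputs):

#include <stdio.h>

/* Number of representable colors given the bit width of each primary. */
unsigned long colors(int rbits, int gbits, int bbits)
{
    return 1UL << (rbits + gbits + bbits);
}

int main(void)
{
    printf("10/10/10 bits: %lu colors\n", colors(10, 10, 10));  /* 1073741824 */
    printf("5/6/5 bits:    %lu colors\n", colors(5, 6, 5));     /* 65536      */
    return 0;
}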
7. Explain the concepts of horizontal retrace and vertical retrace. Answer: Horizontal retrace: the motion of the electron beam from the right edge of the CRT screen back to the left edge.
Vertical retrace: the motion of the electron beam, at the end of each refresh cycle, from the lower-right corner of the CRT screen back to the upper-left corner.

8. Why do many color printers use black pigment? Answer: Color pigments (cyan, magenta, yellow) are relatively expensive, and it is technically difficult to produce a high-quality black by mixing several colors.

9. Briefly describe how random-scan displays and raster-scan displays work and the characteristics of each.

Answer: How a random-scan display works: the picture to be displayed is defined as a set of line-drawing commands stored in the refresh buffer; the display controller deflects the electron beam and periodically draws the component lines in the order given by the commands, thereby producing the picture on the screen. Characteristics: the displayed pictures are of high quality, the contents of the refresh buffer can be modified locally or dynamically, resolution and contrast are high, and lines do not show staircase (jagged) artifacts.

How a raster-scan display works: the CRT screen is divided into a raster grid made up of pixels, and the gray level and color information of the pixels are stored in the frame buffer.
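As a rough illustration of the storage the frame buffer requires (illustrative C; the resolution and bit depth used in main are example values, not taken from the text):

#include <stdio.h>

/* Frame-buffer size in bytes for a given resolution and bits per pixel. */
unsigned long framebuffer_bytes(unsigned long width, unsigned long height, unsigned bits_per_pixel)
{
    return width * height * bits_per_pixel / 8;
}

int main(void)
{
    /* Example: 1024 x 768 pixels at 24 bits per pixel needs 2,359,296 bytes (2.25 MB). */
    printf("%lu bytes\n", framebuffer_bytes(1024, 768, 24));
    return 0;
}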
Chapter 7

1. What is geometric modeling? Geometric modeling is a technique that stores the shape of an object and its attributes (such as color and texture) in the computer, forming a three-dimensional geometric model of the object; this model is then used to give an exact mathematical description of the original object or to faithfully simulate some state of it. Geometric modeling is a comprehensive technique for representing and describing the shapes of objects, designing geometric solids, and simulating the dynamic behavior of objects using computers and their graphics tools. It is the foundation of integrated CAD/CAM and comprises three main branches: surface modeling, solid modeling, and feature modeling.

2. What are the three kinds of models used in geometric modeling, and what are the characteristics of each?

(1) Wireframe model. A solid represented by a wireframe model cannot fully reflect the relationship between the line data and the shape-feature data stored inside the computer. With a wireframe model, hidden-line removal cannot be used when computing mass properties such as volume and weight, and such a model has great difficulty representing curved solids such as cylinders or spheres. Because the data structure of a wireframe model is simple, it has the advantage of fast processing, so it is still widely used; this speed advantage is especially valuable when a high-performance computer is not available. Once the wireframe model of a solid has been generated in the computer, the three standard views of the solid are easily obtained by projection, so wireframe models are widely used in drafting.

(2) Surface model. A surface model is built on top of the wireframe model by adding the data needed to generate each surface of the solid. A surface is defined by information specifying which edges make up the surface and in what order. This kind of model is usually used to build objects with complex curved surfaces. During construction, the wireframe facilities are often used first to build a wireframe, which is then turned into surfaces by sweeping or revolving; of course, the many surface primitives provided by the system can also be used to build various surface models. Because surface models are more advanced and more capable than wireframe models, and easier to implement on microcomputers, they are widely used in engineering, especially in areas with complex surface-design work such as automobile body styling.

(3) Solid model. The solid model is the most important of the three and also the latest to appear. Its advantages can be summarized as follows: the solid is completely defined, and the inside can be distinguished from the outside; clear cross-section views can be produced; mass properties and finite-element meshes can be computed accurately; and simulation of mechanical motion is convenient.

3. Analyze and compare the advantages and disadvantages of the CSG method and the B-rep method.
CSG method. (1) The boundary representation (B-rep) method emphasizes the surface details of a solid, recording all of the geometric and topological information of the solid in detail.
Section 7.6 Exercises

2. Prove that the matrix operations of the following geometric transformations commute: (1) two successive rotations; (2) two successive translations; (3) two successive scalings; (4) a rotation and a scaling when the scale factors are equal.

(1) Proof: Let the first rotation be

T_1 = \begin{bmatrix} \cos\theta_1 & \sin\theta_1 & 0 \\ -\sin\theta_1 & \cos\theta_1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

and the second rotation be

T_2 = \begin{bmatrix} \cos\theta_2 & \sin\theta_2 & 0 \\ -\sin\theta_2 & \cos\theta_2 & 0 \\ 0 & 0 & 1 \end{bmatrix}.

Multiplying and using the angle-sum identities gives

T_1 T_2 = \begin{bmatrix} \cos(\theta_1+\theta_2) & \sin(\theta_1+\theta_2) & 0 \\ -\sin(\theta_1+\theta_2) & \cos(\theta_1+\theta_2) & 0 \\ 0 & 0 & 1 \end{bmatrix} = T_2 T_1,

so two successive rotations commute.

(2) Proof: Let the two translations be

T_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ T_{x1} & T_{y1} & 1 \end{bmatrix},  T_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ T_{x2} & T_{y2} & 1 \end{bmatrix}.

Then

T_1 T_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ T_{x1}+T_{x2} & T_{y1}+T_{y2} & 1 \end{bmatrix} = T_2 T_1,

so two successive translations commute.

(3) Proof: Let the two scalings be T_1 = \mathrm{diag}(S_{x1}, S_{y1}, 1) and T_2 = \mathrm{diag}(S_{x2}, S_{y2}, 1). Then T_1 T_2 = \mathrm{diag}(S_{x1}S_{x2}, S_{y1}S_{y2}, 1) = T_2 T_1, so two successive scalings commute.

(4) Proof: Let the first transformation be the uniform scaling T_1 = \mathrm{diag}(S, S, 1) and the second the rotation

T_2 = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}.

Then

T_1 T_2 = \begin{bmatrix} S\cos\theta & S\sin\theta & 0 \\ -S\sin\theta & S\cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} = T_2 T_1,

so a rotation and a uniform scaling with equal scale factors commute.

3. Prove that reflecting a two-dimensional point about the x axis, immediately followed by reflecting it about the line y = -x, is exactly equivalent to rotating that point about the coordinate origin.
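A numerical spot check of this last claim (an illustrative C sketch using 3x3 homogeneous matrices and the row-vector convention of the proofs above; reflecting about the x axis and then about y = -x should match a rotation by -90 degrees about the origin):

#include <math.h>
#include <stdio.h>

typedef double mat3[3][3];

static void mul3(mat3 out, mat3 a, mat3 b)        /* out = a * b */
{
    mat3 t;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            t[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                t[i][j] += a[i][k] * b[k][j];
        }
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            out[i][j] = t[i][j];
}

int main(void)
{
    mat3 reflect_x    = { { 1,  0, 0 }, {  0, -1, 0 }, { 0, 0, 1 } };  /* (x,y) -> (x,-y)              */
    mat3 reflect_diag = { { 0, -1, 0 }, { -1,  0, 0 }, { 0, 0, 1 } };  /* about y = -x: (x,y) -> (-y,-x) */
    const double pi = 3.14159265358979323846;
    double th = -pi / 2.0;                                             /* rotation by -90 degrees      */
    mat3 rot = { { cos(th), sin(th), 0 }, { -sin(th), cos(th), 0 }, { 0, 0, 1 } };

    mat3 m;
    mul3(m, reflect_x, reflect_diag);     /* row-vector convention: reflect about x first */

    double maxdiff = 0.0;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            double d = fabs(m[i][j] - rot[i][j]);
            if (d > maxdiff) maxdiff = d;
        }
    printf("max difference from R(-90 deg) = %g\n", maxdiff);          /* essentially zero */
    return 0;
}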