GLSL Learning Notes (Chinese Edition)
OpenGL, lesson one: it helps to first look at how OpenGL defines its data types, since they appear throughout the library functions. Open gl.h and you can see OpenGL's basic type definitions:

    typedef unsigned int GLenum;
    typedef unsigned char GLboolean;
    typedef unsigned int GLbitfield;
    typedef signed char GLbyte;
    typedef short GLshort;
    typedef int GLint;
    typedef int GLsizei;
    typedef unsigned char GLubyte;
    typedef unsigned short GLushort;
    typedef unsigned int GLuint;
    typedef float GLfloat;
    typedef float GLclampf;
    typedef double GLdouble;
    typedef double GLclampd;
    typedef void GLvoid;

We start with the simplest case. The point is the most basic and simplest primitive in OpenGL. Unlike a mathematical point, it is not infinitely small: it has a size, 1 pixel by default, which can be changed. The function that changes a point's size is glPointSize, declared (also in gl.h) as:

    WINGDIAPI void APIENTRY glPointSize (GLfloat size);

Here size is the point size in pixels; the default is 1.0f, and size must be greater than 0.0f. The reason is simple: a zero-sized point could not be seen on any display device.

For convenience, the examples here are built and debugged in VC 6.0. First create a Win32 Console Application, switch to the FileView tab, add a new C++ source file under Source Files, and experiment in that file.
GLSL functions

GLSL (the OpenGL Shading Language) is a C-based shading language widely used in the rendering pipeline. GLSL's built-in functions are basic building blocks that let us perform complex computations and image operations. A few of the most common ones:

1. mix: linear interpolation. It takes three arguments, a start value, an end value, and an interpolation factor, normally in the range [0, 1]. For a 2D vector:

    vec2 result = mix(startingVector, endingVector, interpolationFactor);

2. dot: the dot product of two vectors. The return value is the scalar product of the two vectors, useful for computing the angle between vectors or testing whether they are parallel:

    float dotProduct = dot(vectorA, vectorB);

3. normalize: rescales a vector to a unit vector, i.e. a vector of length 1. Combined with dot, it gives the cosine of the angle between two vectors:

    float cosine = dot(normalize(vectorA), normalize(vectorB));

4. length: the length of a vector, returned as a scalar:

    float vectorLength = length(vector);

5. clamp: restricts a value to an interval. It takes three arguments: the value to restrict, a minimum, and a maximum. A value outside the interval is clamped to the nearest bound:

    float result = clamp(value, minValue, maxValue);

These are only a few of the common GLSL functions. Many other useful ones exist: trunc discards the fractional part, fract returns the fractional part, and ceil and floor round up and down.
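For intuition, the scalar behavior of these built-ins can be mirrored on the CPU. The sketch below is a host-side illustration in C++; the names (mixf, clampf, Vec2, and so on) are our own, and in a real shader the built-ins also operate component-wise on vectors:

```cpp
#include <cmath>

// Host-side reference versions of the GLSL built-ins described above.
// Illustrative only: in GLSL, mix/clamp/dot/length/normalize are provided
// by the language itself.
float mixf(float a, float b, float t)     { return a + t * (b - a); }                    // mix
float clampf(float x, float lo, float hi) { return x < lo ? lo : (x > hi ? hi : x); }    // clamp

struct Vec2 { float x, y; };
float dotf(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }                             // dot
float lengthf(Vec2 v)      { return std::sqrt(dotf(v, v)); }                             // length
Vec2  normalizef(Vec2 v)   { float l = lengthf(v); return { v.x / l, v.y / l }; }        // normalize
```

For example, mixf(0.0f, 10.0f, 0.5f) is 5.0f and lengthf({3, 4}) is 5.0f, matching the GLSL definitions mix(a, b, t) = a*(1-t) + b*t and length(v) = sqrt(dot(v, v)).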
GLSL syntax

GLSL is a high-level programming language used to write shader programs that run on OpenGL or OpenGL ES. GLSL stands for OpenGL Shading Language. Its main purpose is to describe how the GPU turns vertices and fragments into the final image. GLSL is essentially a C-like language, so implementing shader programs in it is fairly easy and natural.

Shaders are small programs that run on the GPU; from their input data they generate colors, textures, shadows, and other effects. The vertex shader processes vertex data, transforming it into the coordinate space used for drawing, while the fragment shader processes per-fragment data and computes the final color from its interpolated inputs. The GLSL compiler translates shader source into machine code, giving the GPU an executable shader program.

GLSL syntax covers the following areas.

1. Data types: GLSL supports vec2, vec3, vec4, mat2, mat3, mat4, int, float, bool, sampler2D, and other types, each with its own role and properties. vec2, vec3, and vec4 are 2-, 3-, and 4-component vectors; mat2, mat3, and mat4 are matrices; int and float are integers and floating-point numbers; bool is the Boolean type.

2. Functions: GLSL provides built-in functions such as normalize(), length(), dot(), cross(), and mix(). They operate on vectors, matrices, and other types, and help generate color and texture effects.

3. Operators: GLSL supports the usual operators for addition, subtraction, multiplication, division, assignment, and comparison, plus compound assignment operators such as +=, *=, and -=.

4. Control statements: GLSL supports if/else, while, and for, as well as the flow-control statements break, continue, and return.

5. Variable definitions: variables are declared much as in C, with a type name followed by the variable name, for example:

    vec4 position = vec4(0.0, 0.0, 0.0, 1.0);
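Pulled together, these pieces look as follows in a small vertex shader. This is an illustrative sketch: the names aPosition, uTime, and vColor are assumptions, and the #version line may differ in your environment.

```glsl
#version 330 core

in vec3 aPosition;      // per-vertex input
uniform float uTime;    // set by the host application
out vec4 vColor;        // passed on to the fragment shader

void main() {
    // built-in functions and operators
    float pulse = clamp(sin(uTime) * 0.5 + 0.5, 0.0, 1.0);
    vec3 tint = mix(vec3(0.0), vec3(1.0, 0.5, 0.2), pulse);

    // control flow
    if (pulse > 0.5) {
        tint *= 0.8;
    }

    vColor = vec4(tint, 1.0);
    gl_Position = vec4(aPosition, 1.0);
}
```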
LearnOpenGL notes (4): writing a shader class

So far we have written our shader code in GLSL, stored it in string literals, and compiled it with a function call. That is one approach, and it works for simple shader code. But for complex shaders, writing, compiling, and managing them this way becomes a chore. Instead, we put all shader compilation, linking, and management into a single class in one file, and move the shader source into separate .glsl files that are read through a file stream rather than embedded in the C++ source. The main function then stays clean: we instantiate the class and use the shader without caring about the internals (reading from the file stream, compiling, linking).

We build a class named Shader that encapsulates every shader step. Because we implement everything, including the function bodies, in the .h file, we need an include guard:

    #ifndef SHADER_H       // compile the block below only if SHADER_H has not been defined yet
    #define SHADER_H

    #include <glad/glad.h> // include glad to get the required OpenGL headers
    #include <string>
    #include <fstream>     // file streams: the C++ standard library's file I/O facilities
    #include <sstream>     // string streams: C-style stream I/O over strings
    #include <iostream>

Now we can declare the structure of the class (the guard's matching #endif goes at the very end of the header):

    class Shader {
    public:
        // program ID
        unsigned int ID;
        // constructor: reads and builds the shaders
        Shader(const GLchar* vertexPath, const GLchar* fragmentPath);
        // use/activate the program
        void use();
        // utility uniform functions
        void setBool(const std::string &name, bool value) const;
        void setInt(const std::string &name, int value) const;
        void setFloat(const std::string &name, float value) const;
    };

    #endif // SHADER_H

This looks fairly simple. Next we analyze the member-function implementations, starting with the most important one, the constructor, which does all of the compiling and linking.
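The file-reading half of that constructor needs nothing beyond the standard streams included above. A minimal sketch follows; the function name readShaderFile and the empty-string error convention are our assumptions, not the original's exact code:

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Read an entire shader source file into one string, as the Shader
// constructor does before handing the text to the GL compile calls.
// Returns an empty string when the file cannot be opened.
std::string readShaderFile(const std::string& path) {
    std::ifstream file(path);
    if (!file.is_open()) return "";
    std::stringstream buffer;
    buffer << file.rdbuf();   // slurp the whole file into the buffer
    return buffer.str();
}
```

The constructor would call this twice (once for the vertex path, once for the fragment path), then pass the strings' c_str() pointers to the shader-source call and proceed with compiling and linking.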
An introduction to basic GLSL syntax

GLSL (OpenGL Shading Language) is a programming language for writing shaders. Some of its basic syntax:

1. Variable types: GLSL supports many types, including float, int, bool, vec2, vec3, and vec4. vec2, vec3, and vec4 are 2-, 3-, and 4-dimensional vectors.

2. Variable declarations: in GLSL, variables can be initialized at declaration. For example, the following declares a float named "a" and initializes it to 1.0:

    float a = 1.0;

3. Operators: GLSL supports the basic arithmetic operators (+, -, *, /) and comparison operators (==, !=, <, >, <=, >=).

4. Control structures: GLSL supports if-else statements, for loops, while loops, and other control structures. Note that GLSL has no printing facility; the following for loop runs ten times, and because each iteration overwrites gl_FragColor, only the last assignment takes effect (float(i) converts the loop counter for use in a vector):

    for (int i = 0; i < 10; i++) {
        gl_FragColor = vec4(float(i), 0.0, 0.0, 1.0);
    }

5. Functions: GLSL supports built-in and user-defined functions. For example, the following uses the built-in sin function to compute a sine:

    float a = sin(1.0);

This covers only the basics of GLSL syntax; for deeper study, consult the relevant references and online tutorials.
Preface: this section is translated from the official user guide of scikit-learn.

1.1. Generalized linear models (English original)

The methods introduced below are all for solving regression problems in which the target value is expected to be a linear combination of the input variables. In mathematical notation, if ŷ is the predicted value, then

    ŷ(w, x) = w₀ + w₁x₁ + … + w_p x_p

In this section we call the vector (w₁, …, w_p) coef_ and w₀ intercept_. To use generalized linear models for classification problems, see logistic regression.

1.1.1. Ordinary least squares

LinearRegression fits a linear model with coefficients w. The goal of the fit is to minimize the sum of squared differences between the linear prediction (Xw) and the values observed in the dataset (y). Written mathematically, it solves a problem of the form:

    min_w ||Xw − y||₂²

LinearRegression's fit method takes arrays X and y as input and stores the coefficients of the linear model in the member variable coef_:

    >>> from sklearn import linear_model
    >>> clf = linear_model.LinearRegression()
    >>> clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
    LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
    >>> clf.coef_
    array([ 0.5,  0.5])

Note that the coefficient estimates of ordinary least squares rely on the independence of the model terms. When the terms are correlated and the columns of the matrix X are roughly linearly dependent, X becomes close to singular; the least-squares prediction is then highly sensitive to random error in the original data, and every prediction carries a large variance.
Compiling the glslcookbook code

1. GLSL overview

GLSL (OpenGL Shading Language) is a C-based programming language for writing OpenGL shader programs. Shader programs play a crucial role in computer graphics: they compute the colors and textures of the 3D models rendered to the screen. GLSL performs well because it is designed and optimized specifically for graphics processors (GPUs).

2. The compilation process

GLSL code must be compiled into shader programs that can execute on the GPU. Compilation involves the following steps:

- Preprocessing: the preprocessor handles macros, conditional compilation, file inclusion, and the other preprocessor directives in the GLSL code.
- Lexical analysis: the lexer converts the preprocessed code into tokens, such as variable names, keywords, and operators.
- Syntax analysis: the parser converts the tokens into an abstract syntax tree (AST), a hierarchical representation of the code's syntactic structure.
- Semantic analysis: the semantic analyzer checks the AST for errors such as type mismatches.
- Intermediate code generation: the AST, now semantically checked, is lowered to an intermediate representation.
- Optimization and debugging: the compiler applies optimization techniques to improve performance, and a debugger can be used to find and fix errors in the code.
- Code generation: finally, the compiler converts the intermediate code into binary code executable on the GPU.
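As a toy illustration of the lexical-analysis step, the sketch below chops a line of GLSL-like source into coarse tokens. This is only a teaching aid under our own simplifying assumptions; a real GLSL front end (such as the one inside the driver, or in glslang) also handles comments, multi-character operators, and much more:

```cpp
#include <cctype>
#include <string>
#include <vector>

// Split one line of GLSL-like source into coarse tokens:
// identifiers/keywords, numbers, and single-character symbols.
std::vector<std::string> lexLine(const std::string& src) {
    std::vector<std::string> tokens;
    size_t i = 0;
    while (i < src.size()) {
        if (std::isspace((unsigned char)src[i])) { ++i; continue; }
        size_t start = i;
        if (std::isalpha((unsigned char)src[i]) || src[i] == '_') {
            // identifier or keyword: letters, digits, underscores
            while (i < src.size() && (std::isalnum((unsigned char)src[i]) || src[i] == '_')) ++i;
        } else if (std::isdigit((unsigned char)src[i])) {
            // number literal: digits and a decimal point
            while (i < src.size() && (std::isdigit((unsigned char)src[i]) || src[i] == '.')) ++i;
        } else {
            ++i;  // any other character becomes a one-character symbol token
        }
        tokens.push_back(src.substr(start, i - start));
    }
    return tokens;
}
```

Feeding it "float a = 1.0;" yields the tokens "float", "a", "=", "1.0", and ";", which is roughly what the parser stage then assembles into an AST.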
3. Compiler options and parameters

When compiling GLSL code we can use various compiler options and parameters to influence the compilation process, for example (these are generic examples; the exact flags depend on the offline compiler used):

- -o: set the output file.
- -I: add an include directory.
- -D: define a macro.
- -p: set debugging options.
- -O: set optimization options.

4. Optimizing and debugging the code

To improve the performance of GLSL code, we can apply the following optimization techniques:

- Prefer the built-in functions from the shading-language specification over hand-written equivalents.
- Avoid complex computation in the vertex shader.
- Use compressed texture formats to reduce memory use.
GLSL Tutorial

Introduction

In this tutorial shader programming using GLSL will be covered. There is an introduction to the specification, but reading the OpenGL 2.0 and GLSL official specs is always recommended if you get serious about this. It is assumed that the reader is familiar with OpenGL programming, as this is required to understand some parts of the tutorial.

GLSL stands for GL Shading Language, often referred to as glslang, and was defined by the Architecture Review Board of OpenGL, the governing body of OpenGL.

I won't go into disputes, or comparisons, with Cg, Nvidia's proposal for a shading language that is also compatible with OpenGL. The only reason I chose GLSL and not Cg for this tutorial is GLSL's closeness to OpenGL.

Before writing shaders, in any language, it is a good idea to understand the basics of the graphics pipeline. This will provide a context to introduce shaders, what types of shaders are available, and what shaders are supposed to do. It will also show what shaders can't do, which is equally important.

After this introduction the OpenGL setup for GLSL is discussed. The necessary steps to use a shader in an OpenGL application are discussed in some detail. Finally it is shown how an OpenGL application can feed data to a shader, making it more flexible and powerful. Some basic concepts such as data types, variables, statements and function definition are then introduced.

Please bear in mind that this is work in progress and therefore bugs are likely to be present in the text or demos. Let me know if you find any bug, regardless of how insignificant, so that I can clean them up. Also suggestions are more than welcome. I hope you enjoy the tutorial.

Pipeline Overview

The following figure is a (very) simplified diagram of the pipeline stages and the data that travels amongst them. Although extremely simplified, it is enough to present some important concepts for shader programming.
In this subsection the fixed functionality of the pipeline is presented. Note that this pipeline is an abstraction and does not necessarily meet any particular implementation in all its steps.

Vertex Transformation

In here a vertex is a set of attributes such as its location in space, as well as its color, normal, texture coordinates, amongst others. The inputs for this stage are the individual vertices' attributes. Some of the operations performed by the fixed functionality at this stage are:

- Vertex position transformation
- Lighting computations per vertex
- Generation and transformation of texture coordinates

Primitive Assembly and Rasterization

The inputs for this stage are the transformed vertices, as well as connectivity information. This latter piece of data tells the pipeline how the vertices connect to form a primitive. It is in here that primitives are assembled. This stage is also responsible for clipping operations against the view frustum, and back face culling.

Rasterization determines the fragments, and pixel positions of the primitive. A fragment in this context is a piece of data that will be used to update a pixel in the frame buffer at a specific location. A fragment contains not only color, but also normals and texture coordinates, amongst other possible attributes, that are used to compute the new pixel's color.

The output of this stage is twofold:

- The position of the fragments in the frame buffer
- The interpolated values for each fragment of the attributes computed in the vertex transformation stage

The values computed at the vertex transformation stage, combined with the vertex connectivity information, allow this stage to compute the appropriate attributes for the fragment. For instance, each vertex has a transformed position. When considering the vertices that make up a primitive, it is possible to compute the position of the fragments of the primitive. Another example is the usage of color. If a triangle has its vertices with different colors, then the colors of the fragments inside the triangle are obtained by interpolation of the triangle's vertices' colors, weighted by the relative distances of the vertices to the fragment.

Fragment Texturing and Coloring

Interpolated fragment information is the input of this stage. A color has already been computed in the previous stage through interpolation, and in here it can be combined with a texel (texture element) for example. Texture coordinates have also been interpolated in the previous stage. Fog is also applied at this stage. The common end result of this stage per fragment is a color value and a depth for the fragment.

Raster Operations

The inputs of this stage are:

- The pixels' location
- The fragments' depth and color values

The last stage of the pipeline performs a series of tests on the fragment, namely:

- Scissor test
- Alpha test
- Stencil test
- Depth test

If successful, the fragment information is then used to update the pixel's value according to the current blend mode. Notice that blending occurs only at this stage because the Fragment Texturing and Coloring stage has no access to the frame buffer. The frame buffer is only accessible at this stage.

Visual Summary of the Fixed Functionality

The following figure presents a visual description of the stages presented above.

Replacing Fixed Functionality

Recent graphics cards give the programmer the ability to define the functionality of two of the above described stages:

- Vertex shaders may be written for the Vertex Transformation stage.
- Fragment shaders replace the Fragment Texturing and Coloring stage's fixed functionality.

In the next subsections these programmable stages, hereafter the vertex processor and the fragment processor, are described.

Vertex Processor

The vertex processor is responsible for running the vertex shaders. The input for a vertex shader is the vertex data, namely its position, color, normals, etc., depending on what the OpenGL application sends. The following OpenGL code would send to the vertex processor a color and a vertex position for each vertex:

    glBegin(...);
        glColor3f(0.2, 0.4, 0.6);
        glVertex3f(-1.0, 1.0, 2.0);
        glColor3f(0.2, 0.4, 0.8);
        glVertex3f(1.0, -1.0, 2.0);
    glEnd();

In a vertex shader you can write code for tasks such as:

- Vertex position transformation using the modelview and projection matrices
- Normal transformation, and if required its normalization
- Texture coordinate generation and transformation
- Lighting per vertex, or computing values for lighting per pixel
- Color computation

There is no requirement to perform all the operations above; your application may not use lighting, for instance. However, once you write a vertex shader you are replacing the full functionality of the vertex processor, hence you can't perform normal transformation and expect the fixed functionality to perform texture coordinate generation. When a vertex shader is used it becomes responsible for replacing all the needed functionality of this stage of the pipeline.

As can be seen in the previous subsection, the vertex processor has no information regarding connectivity, hence operations that require topological knowledge can't be performed in here. For instance it is not possible for a vertex shader to perform back face culling, since it operates on vertices and not on faces. The vertex processor processes vertices individually and has no clue of the remaining vertices.

The vertex shader is responsible for writing at least one variable: gl_Position, usually transforming the vertex with the modelview and projection matrices. A vertex processor has access to OpenGL state, so it can perform operations that involve lighting, for instance, and use materials. It can also access textures (only available in the newest hardware). There is no access to the frame buffer.

Fragment Processor

The fragment processor is where the fragment shaders run. This unit is responsible for operations like:

- Computing colors, and texture coordinates per pixel
- Texture application
- Fog computation
- Computing normals if you want lighting per pixel

The inputs for this unit are the interpolated values computed in the previous stage of the pipeline, such as vertex positions, colors, normals, etc. In the vertex shader these values are computed for each vertex; now we're dealing with the fragments inside the primitives, hence the need for the interpolated values.

As in the vertex processor, when you write a fragment shader it replaces all the fixed functionality. Therefore it is not possible to have a fragment shader texturing the fragment and leave the fog for the fixed functionality. The programmer must code all effects that the application requires.

The fragment processor operates on single fragments, i.e. it has no clue about the neighboring fragments. The shader has access to OpenGL state, similar to the vertex shaders, and therefore it can access, for instance, the fog color specified in an OpenGL application.

One important point is that a fragment shader can't change the pixel coordinate, as computed previously in the pipeline. Recall that in the vertex processor the modelview and projection matrices can be used to transform the vertex. The viewport comes into play after that, but before the fragment processor. The fragment shader has access to the pixel's location on screen but it can't change it.

A fragment shader has two output options:

- to discard the fragment, hence outputting nothing
- to compute either gl_FragColor (the final color of the fragment), or gl_FragData when rendering to multiple targets

Depth can also be written, although it is not required since the previous stage has already computed it. Notice that the fragment shader has no access to the frame buffer.
This implies that operations such as blending occur only after the fragment shader has run.

OpenGL Setup for GLSL - Overview

This section, OpenGL Setup for GLSL, assumes you've got a pair of shaders, a vertex shader and a fragment shader, and you want to use them in an OpenGL application. If you're not ready yet to write your own shaders there are plenty of places to get shaders from the internet. Try the site from the Orange Book. The tools for shader development, namely Shader Designer or Render Monkey, all have a lot of shader examples.

As far as OpenGL goes, setting up your application is similar to the workflow of writing a C program. Each shader is like a C module, and it must be compiled separately, as in C. The set of compiled shaders is then linked into a program, exactly as in C.

Notice that ARB extensions are being used in here. My laptop doesn't support OpenGL 2.0 so I'll do the tutorial using extensions. If you are new to extensions I suggest you take a look at GLEW. GLEW simplifies the usage of extensions a great deal since the extension functions can be used right away.

Two extensions are required:

    GL_ARB_fragment_shader
    GL_ARB_vertex_shader

A small example of a GLUT program using GLEW to check the extensions could be as shown below (note that main must return int):

    #include <GL/glew.h>
    #include <GL/glut.h>

    int main(int argc, char **argv) {
        glutInit(&argc, argv);
        ...
        glewInit();
        if (GLEW_ARB_vertex_shader && GLEW_ARB_fragment_shader)
            printf("Ready for GLSL\n");
        else {
            printf("Not totally ready :( \n");
            exit(1);
        }
        setShaders();
        glutMainLoop();
        return 0;
    }

The figure below shows the necessary steps; the functions used will be detailed in later sections. In the next subsections the steps to create a program are detailed.

OpenGL Setup for GLSL - Creating a Shader

The following figure shows the necessary steps to create a shader.

The first step is creating an object which will act as a shader container. The function available for this purpose returns a handle for the container. The syntax for this function is as follows:

    GLhandleARB glCreateShaderObjectARB(GLenum shaderType);

Parameter:
    shaderType - GL_VERTEX_SHADER_ARB or GL_FRAGMENT_SHADER_ARB.

You can create as many shaders as you want to add to a program, but remember that there can be only one main function for the set of vertex shaders and one main function for the set of fragment shaders in each single program.

The following step is to add some source code. The source code for a shader is a string array, although you can use a pointer to a single string. The syntax of the function to set the source code for a shader is:

    void glShaderSourceARB(GLhandleARB shader, int numOfStrings, const char **strings, int *lenOfStrings);

Parameters:
    shader - the handle to the shader.
    numOfStrings - the number of strings in the array.
    strings - the array of strings.
    lenOfStrings - an array with the length of each string, or NULL, meaning that the strings are NULL terminated.

Finally, the shader must be compiled. The function to achieve this is (its parameter is the shader handle, not a program handle):

    void glCompileShaderARB(GLhandleARB shader);

Parameters:
    shader - the handle to the shader.

OpenGL Setup for GLSL - Creating a Program

The following figure shows the necessary steps to get a shader program ready and going.

The first step is creating an object which will act as a program container. The function available for this purpose returns a handle for the container. The syntax for this function is as follows:

    GLhandleARB glCreateProgramObjectARB(void);

You can create as many programs as you want. Once rendering, you can switch from program to program, and even go back to fixed functionality during a single frame. For instance you may want to draw a teapot with refraction and reflection shaders, while having a cube map displayed for background using OpenGL's fixed functionality.

The next step involves attaching the shaders created in the previous subsection to the program you've just created. The shaders do not need to be compiled at this time; they don't even have to have source code. All that is required to attach a shader to a program is the shader container. To attach a shader to a program use the function:

    void glAttachObjectARB(GLhandleARB program, GLhandleARB shader);

Parameters:
    program - the handle to the program.
    shader - the handle to the shader you want to attach.

If you have a vertex/fragment pair of shaders you'll need to attach both to the program. You can have many shaders of the same type (vertex or fragment) attached to the same program, just like a C program can have many modules. For each type of shader there can only be one shader with a main function, also as in C. You can attach a shader to multiple programs, for instance if you plan to use the same vertex shader in several programs.

The final step is to link the program. In order to carry out this step the shaders must be compiled as described in the previous subsection. The syntax for the link function is as follows:

    void glLinkProgramARB(GLhandleARB program);

Parameters:
    program - the handle to the program.

After the link operation the shader's source can be modified, and the shaders recompiled, without affecting the program. As shown in the figure above, after linking the program there is a function to actually load and use the program, glUseProgramObjectARB. Each program is assigned a handle, and you can have as many programs linked and ready to use as you want (and your hardware allows). The syntax for this function is as follows:

    void glUseProgramObjectARB(GLhandleARB prog);

Parameters:
    prog - the handle to the program you want to use, or zero to return to fixed functionality.

If a program is in use, and it is linked again, it will automatically be placed in use again, so in this case you don't need to call this function again. If the parameter is zero then the fixed functionality is activated.

OpenGL Setup for GLSL - Example

The following source code contains all the steps described previously. The variables p, f, v are declared globally as GLhandleARB.

    void setShaders() {
        char *vs, *fs;

        v = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
        f = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);

        vs = textFileRead("toon.vert");
        fs = textFileRead("toon.frag");

        const char *vv = vs;
        const char *ff = fs;

        glShaderSourceARB(v, 1, &vv, NULL);
        glShaderSourceARB(f, 1, &ff, NULL);

        free(vs);
        free(fs);

        glCompileShaderARB(v);
        glCompileShaderARB(f);

        p = glCreateProgramObjectARB();
        glAttachObjectARB(p, v);
        glAttachObjectARB(p, f);

        glLinkProgramARB(p);
        glUseProgramObjectARB(p);
    }

A complete GLUT example is available here, containing two simple shaders, and the text file reading functions. A Unix version can be obtained here thanks to Wojciech Milkowski. Please let him know if you use it: wmilkowski 'at' gazeta.pl

OpenGL Setup for GLSL - Troubleshooting: The InfoLog

Debugging a shader is hard. There is no printf yet and probably never will be, although developer tools with debugging capability are to be expected in the future. It is true that you can use some tricks now but these are not trivial by any means. All is not lost, and some functions are provided to check if your code compiled and linked successfully. The status of the compile or link steps can be queried with the following function:

    void glGetObjectParameterivARB(GLhandleARB object, GLenum type, int *param);

Parameters:
    object - the handle to the object (either a shader or a program).
    type - GL_OBJECT_LINK_STATUS or GL_OBJECT_COMPILE_STATUS.
    param - the return value: 1 for OK, 0 for problems.

There are more options regarding the second parameter, type; however these won't be explored in here. Check out the 3Dlabs site for the complete specification. When errors are reported it is possible to get further information with the InfoLog.
This log stores information about the last operation performed, such as warnings and errors in the compilation, and problems during the link step. The log can even tell you if your shaders will run in software, meaning your hardware does not support some feature you're using, or in hardware, the ideal situation. Unfortunately there is no specification for the InfoLog messages, so different drivers/hardware may produce different logs.

In order to get the InfoLog use the following function:

    void glGetInfoLogARB(GLhandleARB object, int maxLen, int *len, char *log);

Parameters:
    object - the handle to the object (either a shader or a program).
    maxLen - the maximum number of chars to retrieve from the InfoLog.
    len - returns the actual length of the retrieved InfoLog.
    log - the log itself.

The GLSL specification could have been nicer in here: you must know the length of the InfoLog to retrieve it. To find this precious bit of information use the following function:

    void glGetObjectParameterivARB(GLhandleARB object, GLenum type, int *param);

Parameters:
    object - the handle to the object (either a shader or a program).
    type - GL_OBJECT_INFO_LOG_LENGTH.
    param - the return value, the length of the InfoLog.

The following function can be used to print the contents of the InfoLog:

    void printInfoLog(GLhandleARB obj) {
        int infologLength = 0;
        int charsWritten = 0;
        char *infoLog;

        glGetObjectParameterivARB(obj, GL_OBJECT_INFO_LOG_LENGTH_ARB, &infologLength);
        if (infologLength > 0) {
            infoLog = (char *)malloc(infologLength);
            glGetInfoLogARB(obj, infologLength, &charsWritten, infoLog);
            printf("%s\n", infoLog);
            free(infoLog);
        }
    }

OpenGL Setup for GLSL - Cleaning Up

In a previous subsection a function to attach a shader to a program was presented.
A function to detach a shader from a program is also available. The syntax is as follows:

    void glDetachObjectARB(GLhandleARB program, GLhandleARB shader);

Parameters:
    program - the program to detach from.
    shader - the shader to detach.

Only shaders that are not attached can be deleted, so this operation is not irrelevant. To delete a shader use the following function:

    void glDeleteObjectARB(GLhandleARB id);

Parameter:
    id - the shader or program to delete.

In the case of a shader that is still attached to some (one or more) programs, the shader is not deleted, but only marked for deletion. The delete operation will only be concluded when the shader is no longer attached to any program, i.e. it has been detached from all programs it was attached to.

OpenGL Setup for GLSL - Communication OpenGL -> Shaders

An application in OpenGL has several ways of communicating with the shaders. Note that this is a one-way communication though, since the only output from a shader is to render to some targets, usually the color and depth buffers.

The shader has access to part of the OpenGL state, therefore when an application alters this subset of the OpenGL state it is effectively communicating with the shader. So for instance if an application wants to pass a light color to the shader it can simply alter the OpenGL state as it is normally done with the fixed functionality.

However, using the OpenGL state is not always the most intuitive way of setting values for the shaders to act upon. For instance, consider a shader that requires a variable to tell the elapsed time to perform some animation. There is no suitably named variable in the OpenGL state for this purpose. True, you can use an unused light's specular cutoff angle for this, but it is highly counterintuitive.

Fortunately, GLSL allows the definition of user-defined variables for an OpenGL application to communicate with a shader.
Thanks to this simple feature you can have a variable for time keeping appropriately called timeElapsed, or some other suitable name.

In this context, GLSL has two types of variable qualifiers (more qualifiers are available to use inside a shader, as detailed in the Data Types and Variables subsection):

- Uniform
- Attribute

Variables defined in shaders using these qualifiers are read-only as far as the shader is concerned. In the following subsections the details of how, and when, to use these types of variables are presented.

There is yet another way of sending values to shaders: using textures. A texture doesn't have to represent an image; it can be interpreted as an array of data. In fact, using shaders you're the one who decides how to interpret your texture's data, even when it is an image. The usage of textures is not explored in this section since it is out of scope.

OpenGL Setup for GLSL - Uniform Variables

A uniform variable can have its value changed by primitive only, i.e., its value can't be changed between a glBegin / glEnd pair. This implies that it can't be used for vertex attributes. Look at the subsection on attribute variables if that is what you're looking for. Uniform variables are suitable for values that remain constant along a primitive, frame, or even the whole scene. Uniform variables can be read (but not written) in both vertex and fragment shaders.

The first thing you have to do is to get the memory location of the variable. Note that this information is only available after you link the program. Note: with some drivers you may be required to be using the program, i.e.
you'll have to call glUseProgramObjectARB before attempting to get the location (it happens with my laptop's ATI graphics card).

The function to retrieve the location of a uniform variable, given its name as defined in the shader, is:

    GLint glGetUniformLocationARB(GLhandleARB program, const char *name);

Parameters:
    program - the handle to the program.
    name - the name of the variable.

The return value is the location of the variable, which can then be used to assign values to it. A family of functions is provided for setting uniform variables, its usage being dependent on the data type of the variable. A set of functions is defined for setting float values as:

    void glUniform1fARB(GLint location, GLfloat v0);
    void glUniform2fARB(GLint location, GLfloat v0, GLfloat v1);
    void glUniform3fARB(GLint location, GLfloat v0, GLfloat v1, GLfloat v2);
    void glUniform4fARB(GLint location, GLfloat v0, GLfloat v1, GLfloat v2, GLfloat v3);

or

    void glUniform{1,2,3,4}fvARB(GLint location, GLsizei count, GLfloat *v);

Parameters:
    location - the previously queried location.
    v0, v1, v2, v3 - float values.
    count - the number of elements in the array.
    v - an array of floats.

A similar set of functions is available for the integer data type, where "f" is replaced by "i". There are no functions specifically for bools, or boolean vectors: just use the functions available for float or integer and set zero for false, and anything else for true. In case you have an array of uniform variables the vector version should be used. For sampler variables, use the function glUniform1iARB, or glUniform1ivARB if setting an array of samplers.

Matrices are also an available data type in GLSL, and a set of functions is also provided for this data type:

    void glUniformMatrix{2,3,4}fvARB(GLint location, GLsizei count, GLboolean transpose, GLfloat *v);

Parameters:
    location - the previously queried location.
    count - the number of matrices: 1 if a single matrix is being set, or n for an array of n matrices.
    transpose - whether to transpose the matrix values; a value of 1 indicates that the matrix values are specified in row-major order, zero in column-major order.
    v - an array of floats.

An important note to close this subsection, before some source code is presented: the values that are set with these functions keep their values until the program is linked again. Once a new link process is performed, all values are reset to zero.

And now to some source code. Assume that a shader with the following variables is being used:

    uniform float specIntensity;
    uniform vec4 specColor;
    uniform float t[2];
    uniform vec4 colors[3];

In the OpenGL application, the code for setting the variables could be:

    GLint loc1, loc2, loc3, loc4;
    float specIntensity = 0.98;
    float sc[4] = {0.8, 0.8, 0.8, 1.0};
    float threshold[2] = {0.5, 0.25};
    float colors[12] = {0.4, 0.4, 0.8, 1.0,
                        0.2, 0.2, 0.4, 1.0,
                        0.1, 0.1, 0.1, 1.0};

    loc1 = glGetUniformLocationARB(p, "specIntensity");
    glUniform1fARB(loc1, specIntensity);
    loc2 = glGetUniformLocationARB(p, "specColor");
    glUniform4fvARB(loc2, 1, sc);
    loc3 = glGetUniformLocationARB(p, "t");
    glUniform1fvARB(loc3, 2, threshold);
    loc4 = glGetUniformLocationARB(p, "colors");
    glUniform4fvARB(loc4, 3, colors);

A working example, with source code, can be obtained here.

Notice the difference between setting an array of values, as is the case of t or colors, and setting a vector with 4 values, as with specColor. The count parameter (the middle parameter of glUniform{1,2,3,4}fvARB) specifies the number of array elements as declared in the shader, not as declared in the OpenGL application. So although specColor contains 4 values, the count parameter of glUniform4fvARB is set to 1, because it is only one vector. An alternative for setting the specColor variable could be:

    loc2 = glGetUniformLocationARB(p, "specColor");
    glUniform4fARB(loc2, sc[0], sc[1], sc[2], sc[3]);

Yet another possibility provided by GLSL is to get the location of a variable inside an array. For instance, it is possible to get the location of t[1]. The following snippet of code shows this approach to set the t array elements:

    loct0 = glGetUniformLocationARB(p, "t[0]");
    glUniform1fARB(loct0, threshold[0]);
    loct1 = glGetUniformLocationARB(p, "t[1]");
    glUniform1fARB(loct1, threshold[1]);

Notice how the variable is specified in glGetUniformLocationARB using the square brackets.

OpenGL Setup for GLSL - Attribute Variables

As mentioned in the subsection on uniform variables, uniforms can only be set per primitive, i.e., they can't be set inside a glBegin-glEnd pair. If it is required to set variables per vertex then attribute variables must be used. In fact attribute variables can be updated at any time. Attribute variables can only be read (not written) in a vertex shader; this is because they contain per-vertex data, and hence are not directly useful in a fragment shader. As for uniform variables, first it is necessary to get the location in memory of the variable. Note that the program must have been linked previously, and some drivers may require that the program is in use.

    GLint glGetAttribLocationARB(GLhandleARB program, char *name);

Parameters:
    program - the handle to the program.
A Primer on the Shader Language GLSL (opengl-shader-language)

Basic types:

void - empty type; returns no value
bool - Boolean: true, false
int - signed integer
float - floating-point scalar
vec2, vec3, vec4 - n-component floating-point vector
bvec2, bvec3, bvec4 - n-component Boolean vector
ivec2, ivec3, ivec4 - n-component signed integer vector
mat2, mat3, mat4 - 2x2, 3x3, 4x4 floating-point matrix
sampler2D - a 2D texture
samplerCube - a cube-mapped texture

Structures and arrays:

struct type-name {} - similar to a struct in C
float foo[3] - GLSL supports only one-dimensional arrays; an array may be a member of a structure

Component access for vectors:

Vectors in GLSL (vec2, vec3, vec4) often carry a particular meaning: a vector may represent a spatial coordinate (x,y,z,w), a color (r,g,b,a), or a texture coordinate (s,t,p,q). GLSL therefore offers friendlier ways to access the components:

vector.xyzw - where x, y, z, w may be combined in any order
vector.rgba - where r, g, b, a may be combined in any order
vector.stpq - where s, t, p, q may be combined in any order

vec4 v = vec4(1.0, 2.0, 3.0, 1.0);
float x = v.x; //1.0
float x1 = v.r; //1.0
float x2 = v[0]; //1.0
vec3 xyz = v.xyz; //vec3(1.0,2.0,3.0)
vec3 xyz1 = vec3(v[0], v[1], v[2]); //vec3(1.0,2.0,3.0)
vec3 rgb = v.rgb; //vec3(1.0,2.0,3.0)
vec4 xyzw = v.xyzw; //vec4(1.0,2.0,3.0,1.0)
vec4 rgba = v.rgba; //vec4(1.0,2.0,3.0,1.0)

Operators (a smaller number means higher precedence):

1 - () - grouping: a*(b+c) - N/A
2 - [] () . ++ -- - array subscript [], function call fun(arg1,arg2,arg3), member access a.b, postfix increment/decrement a++ a-- - left to right
3 - ++ -- + - ! - prefix increment/decrement ++a --a, unary plus/minus a -a (the plus sign is usually omitted), logical not !false - right to left
4 - * / - multiplication and division - left to right
5 - + - - addition and subtraction - left to right
7 - < > <= >= - relational operators - left to right
8 - == != - equality operators - left to right
12 - && - logical and - left to right
13 - ^^ - logical exclusive or (in practice much the same as !=) - left to right
14 - || - logical inclusive or - left to right
15 - ? : - ternary conditional - right to left
16 - = += -= *= /= - assignment and compound assignment - right to left
17 - , - sequence - left to right

P.S. Lvalues and rvalues: an lvalue denotes a storage location; it can be a variable or an expression, but the expression must ultimately evaluate to a storage location. An rvalue denotes a value: a variable, an expression, or a plain literal. Operator precedence determines the order of evaluation in an expression containing several operators; associativity determines whether operators of the same precedence are evaluated left to right or right to left.
2.1 Introduction to the OpenGL Shading Language

The purpose of this book is to help readers learn and use a high-level graphics programming language - the OpenGL Shading Language. The OpenGL extensions that support this language were approved by the ARB in June 2003, and the language is to be added to the core of the new OpenGL 2.0.

Today's graphics hardware keeps growing more complex, and the traditional fixed functionality is gradually being replaced by programmable functionality. Vertex processing and fragment processing are two such examples. Vertex processing comprises the operations that run on each vertex individually, such as coordinate transformation and lighting. A fragment is the data structure that corresponds to a pixel when graphics data are rasterized; a fragment contains all the information necessary to update a single location in the frame buffer. Fragment processing comprises the operations that occur at the fragment level; typical examples are fetching data from texture memory and applying the texture value to each fragment.

With the OpenGL Shading Language, users can not only reproduce the entire fixed pipeline of the graphics card but also go well beyond it. The language is designed so that programmers can express their ideas at every programmable point of the OpenGL rendering pipeline. Code written in the OpenGL Shading Language that runs on one of OpenGL's programmable processors is called a shader. The term "OpenGL shader" is sometimes used specifically for shaders developed in the OpenGL Shading Language, to distinguish them from shaders written in other shading languages such as RenderMan. Because OpenGL defines two programmable processing units, there are correspondingly two types of shader: vertex shaders and fragment shaders. OpenGL can compile and link shaders so that they become part of an executable program.

The OpenGL Shading Language grew out of the C language and shares traits with RenderMan and other shading languages. It has a rich set of data types, including the vector and matrix types that are so closely tied to three-dimensional work. Type qualifiers manage input and output, making data suitable for use by shaders.
Certain C++ features are included, such as function overloading and the ability to declare a variable when it is needed rather than only at the top of the program. The language supports loops, function calls, and conditional expressions, and it provides many built-in functions that make common algorithms easy to implement. In brief:

• The OpenGL Shading Language is a high-level procedural language.
• Vertex and fragment shaders use essentially the same instruction set, with only minor differences.
• The syntax and flow control are based on C and C++.
• The language manages input and output through () rather than through read and write operations.
• The length of a shader program is generally not limited.
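To give a taste of the C-like syntax just summarized, here is a small illustrative GLSL function (not taken from the book) that combines a vector type, a C-style loop, and component-wise arithmetic:

```glsl
// Average a fixed-size set of colors: ordinary C-style
// control flow operating on GLSL's vector types.
vec3 averageColor(vec3 colors[3])
{
    vec3 sum = vec3(0.0);
    for (int i = 0; i < 3; i++)   // C-style loop
        sum += colors[i];         // component-wise vector addition
    return sum / 3.0;             // component-wise scaling
}
```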
The next sections introduce some basic concepts that will help the user understand and use the OpenGL Shading Language effectively. Later chapters explain these concepts in detail; this chapter acts more like an index, giving the reader only a rough overview.
2.2 Why Write Shaders?

Until recently, OpenGL gave application developers only a flexible but fixed interface for presenting graphics data on a display device. As Chapter 1 showed, you can think of OpenGL as a series of operations that act on the geometric and image data passing through the graphics hardware and display the result on the screen. Each stage of this rendering pipeline exposes various parameters that the user can manipulate for different purposes, but neither the basic operations themselves nor the order of the OpenGL API calls can be changed at will.

Beyond supporting the traditional rendering mechanism, OpenGL has steadily evolved to serve an extremely wide range of application needs. Users do not need to write shaders for applications that already work well under the traditional model. But when you want more - support for area lights, lighting computed per fragment rather than per vertex - or when you run up against the limits of the traditional OpenGL rendering model, it is time to write shaders for those applications.

The OpenGL Shading Language, and the OpenGL API entry points that support it, were designed to let users perform custom processing, written in a purpose-built high-level language, at certain key points of the OpenGL rendering pipeline. Those key points were made programmable precisely so that users have complete freedom to design their own processing, which lets applications achieve a great variety of rendering effects on suitable graphics hardware. If you want a sense of the rendering power of OpenGL shaders, take a moment to browse the color plates in this book. The book presents many kinds of example shaders, starting with operations on object surfaces. With each new generation of graphics hardware, more and more rendering techniques are implemented as OpenGL shaders and used in real-time rendering applications. Here is a short list of what OpenGL shaders can do:

• Realistic materials - metal, stone, wood, paint, and so on
• More convincing lighting effects - area lights, soft shadows, and so on
• Rendering of natural phenomena - fire, smoke, water, clouds, and so on
• Non-photorealistic materials - painterly effects, pencil sketches, imitation of technical illustration
• New uses for texture memory - textures can store gloss values, polynomial coefficients, and more
• Image processing - convolution, smoothing, complex blending, and so on
• Animation effects - keyframe interpolation, particle systems, procedurally defined motion
• User-programmable antialiasing methods

All of these techniques used to be achievable only in software; if they could be done through OpenGL at all, it was only within narrow limits. Now, with the help of cleverly designed graphics hardware, these effects can be hardware-accelerated: performance improves dramatically, and the CPU is freed from these time-consuming computations to do other work.
2.3 OpenGL Programmable Processors

The biggest change to OpenGL since its creation is the arrival of the programmable vertex and fragment processors, which is also what created the need for a high-level shading language. Chapter 1 discussed the OpenGL rendering pipeline and how vertex and fragment processing are carried out by fixed functionality. With the introduction of programmability into the pipeline, that fixed functionality is disabled whenever the OpenGL Shading Language takes effect. Figure 2.1 shows the OpenGL processing pipeline when the programmable processors are active. In this case, the fixed functionality of Figure 1.1 is replaced by the programmable vertex and fragment processors of Figure 2.1; the rest of OpenGL's rendering operations remain unchanged.

Figure 2.1. OpenGL logical diagram showing programmable processors for vertex and fragment shaders rather than fixed functionality

The figure illustrates the OpenGL processing model that the OpenGL Shading Language defines through its programmable processors. Data flows from the application to the vertex processor, on to the fragment processor, and finally into the frame buffer.

2.3.1 The Vertex Processor

The vertex processor is a programmable unit that operates on incoming vertices and the data bound to them.
The vertex processor chiefly performs the following traditional graphics operations:

• Vertex transformation
• Normal transformation and normalization
• Texture coordinate generation
• Texture coordinate transformation
• Lighting
• Color material application

Because the vertex processor is programmable, it can be used for many kinds of computation. Shaders that run on it are called vertex shaders. A vertex shader defines a sequence of operations applied to each vertex and the data bound to it.
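As a sketch of such a sequence of operations, the following illustrative ARB-era vertex shader (not an example from the book) reproduces two of the fixed-function tasks listed above - vertex transformation and a simple diffuse lighting computation:

```glsl
varying vec4 color;   // result passed on toward the fragment stage

void main()
{
    // Normal transformation and normalization
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    // Simple diffuse lighting against light 0
    // (treating its position as a direction, for brevity)
    vec3 l = normalize(vec3(gl_LightSource[0].position));
    float diffuse = max(dot(n, l), 0.0);
    color = gl_Color * diffuse;
    // Vertex transformation
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```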
The vertex processor cannot perform computations that require several vertices at once or knowledge of the model's topology. Around the vertex processor and the fragment processor, a great deal of fixed pipeline functionality is still retained. Figure 2.2 lists the inputs and outputs of the vertex processor.
Figure 2.2. Vertex processor inputs and outputs

Variables defined in a vertex shader can be qualified as attribute variables. These variables represent values that the application passes to the vertex shader frequently. Because such values come from the application that defines the vertex data, they are treated as part of the vertex shader's per-vertex input. The application can supply these attribute values between glBegin and glEnd or through the vertex-array functions, so different data can be bound to different vertices.

There are two kinds of attribute variables: built-in and user-defined. In OpenGL, the standard attributes include color, surface normal, texture coordinates, and vertex position. OpenGL specifies these attribute values through calls such as glColor, glNormal, and glVertex, or passes them to the vertex processor through the vertex-array drawing calls. While a vertex shader executes, it can access these values through the built-in attribute variables named gl_Color, gl_Normal, gl_Vertex, and so on.

Because the built-in attribute variables give access only to data already defined by OpenGL, a further interface was added that lets a program pass arbitrary per-vertex values. In the current OpenGL API, these generic vertex attributes are addressed by index values that start at 0, with a maximum limited by the hardware. Calling the function glVertexAttribARB with the required index specifies a generic attribute value for each vertex. In the vertex shader, generic vertex attributes are accessed through user-specified names. In addition, the function glBindAttribLocationARB allows an application to bind a particular index to a named variable in the vertex shader.
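On the shader side, a generic attribute is simply declared with the attribute qualifier; the tangent name below is purely illustrative:

```glsl
attribute vec3 tangent;   // generic per-vertex attribute, supplied by the
                          // application with glVertexAttribARB; its index
                          // can be fixed with glBindAttribLocationARB

varying vec3 tangentEye;

void main()
{
    tangentEye = gl_NormalMatrix * tangent;   // carry it into eye space
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```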
Uniform variables are the means by which an application passes data to the vertex or fragment processor. Uniform variables usually hold values that change relatively rarely. A shader can use uniform variables as its parameters: the application can initialize them, and the end user can change their values so that the same shader produces different effects. Uniform values cannot be set between glBegin and glEnd, however, so they can be set at most once per primitive.

The OpenGL Shading Language supports built-in and user-defined uniform variables. Vertex and fragment shaders can access uniforms that represent current OpenGL state through variable names carrying the prefix "gl_". An application can supply arbitrary data of its own to a shader through user-defined uniform variables. The location of a user-defined uniform in a shader is obtained with the function glGetUniformLocationARB, and the data is then loaded with another new OpenGL function, glUniformARB. This new function has many variants so that the user can load floating-point values, integers, Booleans, matrices, and arrays.
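In the shader, both kinds of uniform are simply declared and read. This hypothetical fragment shader mixes a built-in "gl_" state uniform with user-defined ones (specIntensity and specColor echo the names used in the earlier setup example):

```glsl
uniform float specIntensity;   // user-defined, loaded with glUniform1fARB
uniform vec4  specColor;       // user-defined, loaded with glUniform4fvARB

void main()
{
    // gl_FrontMaterial is built-in uniform state; no extra application
    // call is needed beyond the usual glMaterial settings
    vec4 base = gl_FrontMaterial.diffuse;
    gl_FragColor = base + specIntensity * specColor;
}
```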
A new feature of the vertex processor is the ability to read data from texture memory, which lets vertex shaders implement algorithms such as displacement mapping. (Because the minimum number of vertex texture image units is 0, however, not every platform that supports the OpenGL Shading Language supports vertex textures.) To access a mipmapped texture, the shader can supply a level of detail (LOD), while texture filtering, border handling, and wrapping are controlled by the existing OpenGL parameters.
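The displacement-mapping idea can be sketched as follows: the vertex shader fetches a height from a texture (with an explicit LOD, since the vertex stage has no automatic one) and pushes the vertex along its normal. The heightMap and scale names are illustrative assumptions, and the hardware must support vertex textures:

```glsl
uniform sampler2D heightMap;   // illustrative name
uniform float scale;           // displacement strength

void main()
{
    // An explicit LOD is required when sampling in a vertex shader
    float h = texture2DLod(heightMap, gl_MultiTexCoord0.st, 0.0).r;
    // Displace the vertex along its normal before transforming it
    vec4 displaced = gl_Vertex + vec4(gl_Normal * h * scale, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * displaced;
}
```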
Conceptually, the vertex processor handles one vertex at a time (though some OpenGL implementations contain multiple vertex processors and process several vertices in parallel). The vertex shader runs once for every vertex. This design suits the vertex processor to per-vertex work such as transforming a single vertex and computing its lighting.