Fifth International Students Conference on Informatics – ICDD 2015
May 21-23, 2015, Sibiu, Romania
2.1 OpenGL
”OpenGL (Open Graphics Library) is a cross-language, multi-platform application programming
interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact
with a graphics processing unit (GPU), to achieve hardware-accelerated rendering. The API is
defined as a number of functions which may be called by the client program, alongside a number
of named integer constants.”[5]
The first version of OpenGL, version 1.0, was released in 1992. This version used a fixed-function pipeline and placed most of the workload on the CPU. Version 2.0, released in 2004, introduced a programmable pipeline, in which programmers could write their own shaders in GLSL (OpenGL Shading Language), a C-style language. The latest version of OpenGL was released in 2014.
The engine uses OpenGL version 2.1 and GLSL version 1.20, which makes it run on most of today's hardware and makes it compatible with OpenGL ES (a lighter version of OpenGL used on Android and iOS devices).
The choice to use OpenGL instead of Microsoft's Direct3D was made so that the engine can run on every existing platform, not only on Microsoft's platforms (Windows, Windows Phone, Xbox 360).
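To illustrate the programmable pipeline mentioned above, the following is a minimal GLSL 1.20 vertex shader (matching the GLSL version the engine targets), kept as a C++ string so it could be handed to glShaderSource. The uniform and attribute names here are illustrative assumptions, not the engine's own:

```cpp
#include <string>

// Minimal GLSL 1.20 vertex shader: transforms each vertex by a
// model-view-projection matrix. Names (MVP, position) are illustrative.
const std::string vertexShaderSrc =
    "#version 120\n"
    "uniform mat4 MVP;\n"          // combined model-view-projection matrix
    "attribute vec3 position;\n"   // per-vertex position
    "void main() {\n"
    "    gl_Position = MVP * vec4(position, 1.0);\n"
    "}\n";
```

In a real program this string would be compiled with glShaderSource/glCompileShader and linked into a program object.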
2.2 GLEW
Given the high workload involved in identifying and loading OpenGL extensions, a few libraries have been designed which load all available extensions and functions automatically. One of the most widely used libraries of this kind is the OpenGL Extension Wrangler Library (GLEW).
”GLEW is a cross-platform open-source C/C++ extension loading library. GLEW provides
efficient run-time mechanisms for determining which OpenGL extensions are supported on the
target platform. OpenGL core and extension functionality is exposed in a single header file,
which is machine-generated from the official extension list. GLEW is available for a variety of
operating systems.”[6]
There are a few other extension loading libraries, but GLEW seems to be the most used and most reliable, although it loads all extensions on program start (even those that the program does not use) and does not check for errors (e.g. a call such as glGenBuffers(GL_FLOAT), which is not a valid operation, is not caught).
2.3 SDL 2
Given that creating an OpenGL context is quite a complex process, and given that it varies be-
tween operating systems, automatic OpenGL context creation has become a common feature of
several game-development and user-interface libraries such as Simple Directmedia Layer (SDL).
”SDL is a cross-platform development library designed to provide low level access to audio,
keyboard, mouse, joystick, and graphics hardware via OpenGL and Direct3D. It is used by video
playback software, emulators, and popular games including Valve’s award winning catalog and many Humble Bundle games. Officially supports Windows, Mac OS X, Linux, iOS, and Android. It is written in C, works natively with C++.”[7]
2.4 Assimp
”Open Asset Import Library (Assimp) is a cross-platform 3D model import library which aims to
provide a common application programming interface (API) for different 3D asset file formats.
Written in C++, it offers interfaces for both C and C++. The imported data is provided in a
straightforward, hierarchical data structure. Assimp currently supports 41 different file formats
for reading, including COLLADA (.dae), 3DS, DirectX X, Wavefront OBJ and Blender 3D
(.blend).”[8]
2.5 GLM
”OpenGL Mathematics (GLM) is a header only C++ mathematics library for graphics software
based on the OpenGL Shading Language (GLSL) specifications.
GLM provides classes and functions designed and implemented with the same naming conventions and functionality as GLSL, so that anyone who knows GLSL can use GLM as well in C++.
This project isn’t limited to GLSL features. An extension system, based on the GLSL ex-
tension conventions, provides extended capabilities: matrix transformations, quaternions, data
packing, random numbers, noise, etc.
This library works perfectly with OpenGL but it also ensures interoperability with other third party libraries and SDKs. It is a good candidate for software rendering (ray tracing / rasterisation), image processing, physics simulations and any development context that requires a simple and convenient mathematics library.
GLM is written in C++98 but can take advantage of C++11 when supported by the compiler. It is a platform independent library with no dependencies.”[9]
3 A peek inside
3.1 ’Initializing...’
The engine starts with context and window creation, camera initialization and the loading of all the data required for the terrain (height map) and models (vertex data, bone data, textures). In the init function, the sky dome, sun, and water models are generated. Essentially, this is where the scene building is accomplished.
void init()
{
window.Create("Engine", screenWidth, screenHeight, Engine::FULLSCREEN);
SDL_SetRelativeMouseMode(SDL_TRUE);
camera.Init(screenWidth,screenHeight);
level = new Level("Resources/Map/imgn45w114_1");
levelRenderer = new LevelRenderer(level);
modelManager = new ModelManager(level);
sky = new Engine::SkyDome;
sun = new Engine::Sun(screenWidth, screenHeight);
water = new Engine::Water;
}
3.1.1 Camera
The camera stores and computes the transformation matrices needed to take an object from view space to projection space and then to screen coordinates. In the Init function the camera builds the projection matrix using the screen width and height, and sets the model matrix to the identity matrix.
void Camera::Init(const int screenWidth, const int screenHeight)
{
    this->screenWidth = screenWidth;
    this->screenHeight = screenHeight;
    projectionMatrix = glm::perspective(VIEW_ANGLE, (float)screenWidth/screenHeight,
        NEAR_CLIP_PLANE, FAR_CLIP_PLANE);
}
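For reference, the projection matrix that glm::perspective produces can be sketched with plain C++ (column-major, matching OpenGL conventions). This is an illustrative stdlib-only reimplementation, not the engine's code:

```cpp
#include <array>
#include <cmath>

// Column-major 4x4 perspective matrix, equivalent in spirit to
// glm::perspective(fovy, aspect, near, far). Illustrative sketch only.
std::array<float, 16> perspective(float fovyRadians, float aspect,
                                  float zNear, float zFar) {
    const float f = 1.0f / std::tan(fovyRadians / 2.0f);
    std::array<float, 16> m{};                   // zero-initialized
    m[0]  = f / aspect;                          // x scale
    m[5]  = f;                                   // y scale
    m[10] = (zFar + zNear) / (zNear - zFar);     // depth remap
    m[11] = -1.0f;                               // enables perspective divide
    m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
    return m;
}
```

Multiplying this matrix by a view-space point and dividing by w yields normalized device coordinates, which the GPU then maps to screen coordinates.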
3.1.2 Terrain
The terrain generation starts with the construction of a new Level object. The terrain's height map is stored in a binary file containing nRows * nCols floats, representing the height data; nRows and nCols are the first two unsigned int values in the binary file. With the height data loaded, the LevelRenderer generates the vertex positions, normals and texture coordinates. Note: vertex normals do not exist in geometry, but they do exist in graphics programming, and represent vertexNormal = normalize(Σ_{i=1}^{k} N_i), where k is the number of adjacent triangles and N_i is the normal of the i-th triangle. The last step of terrain generation is to make triangles out of these vertices, by generating the index array, and to upload all the data to the GPU.
void LevelRenderer::buildModel(const Level *level)
{
const float* levelData = level->GetData();
//construct vertex position based on level data
Engine::Vertex *vertices = new Engine::Vertex[ nRows * nCols ];
for(int z=0; z<nRows; z++)
for (int x=0; x<nCols; x++)
{
float y = levelData[z*nCols + x];
vertices[z*nCols + x].SetPosition(x*CELL_SIZE, y, z*CELL_SIZE);
vertices[z*nCols + x].SetUV(x,z);
}
//calculate vertex normals based on triangles formed with adjacent vertices
for(int z=1; z<nRows-1; z++)
for (int x=1; x<nCols-1; x++)
{
Position leftUpper = Position::Normal(vertices[z*nCols + x].position,
vertices[(z-1)*nCols + x].position, vertices[z*nCols + x-1].position);
Position centerUpper = Position::Normal(vertices[z*nCols + x].position,
vertices[(z-1)*nCols + x+1].position, vertices[(z-1)*nCols + x].position);
Position rightUpper = Engine::Position::Normal(vertices[z*nCols + x].position,
vertices[z*nCols + x+1].position, vertices[(z-1)*nCols + x+1].position);
Position rightLower = Engine::Position::Normal(vertices[z*nCols + x].position,
vertices[(z+1)*nCols + x].position, vertices[z*nCols + x+1].position);
Position centerLower = Position::Normal(vertices[z*nCols + x].position,
vertices[(z+1)*nCols + x-1].position, vertices[(z+1)*nCols + x].position);
Position leftLower = Position::Normal(vertices[z*nCols + x].position,
vertices[z*nCols + x-1].position, vertices[(z+1)*nCols + x-1].position);
vertices[z*nCols + x].normal = (leftUpper + centerUpper + rightUpper +
rightLower + centerLower + leftLower);
vertices[z*nCols + x].normal.Normalize();
}
//assign triangle indices. Two triangles at once, that form a rectangle.
unsigned int *indices = new unsigned int[(nCols-1)*(nRows-1)*2*3];
for(int i=0,k=0; i<nRows-1; i++)
for(int j=0; j<nCols-1; j++)
{
indices[k++] = i*nCols + j;
indices[k++] = (i+1)*nCols + j;
indices[k++] = i*nCols + j+1;
indices[k++] = i*nCols + j+1;
indices[k++] = (i+1)*nCols + j;
indices[k++] = (i+1)*nCols + j+1;
}
//upload vertices to the GPU
glGenBuffers(1, &vboId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(Engine::Vertex) * nRows * nCols,
vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
delete[] vertices;
//upload indices to the GPU
glGenBuffers(1, &iboId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int)*(nCols-1)*(nRows-1)*2*3,
indices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
delete[] indices;
}
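The Position::Normal calls above presumably compute a triangle's face normal from three points via a cross product. A stdlib-only sketch of that operation (the struct and function names here are assumptions for illustration):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Face normal of triangle (a, b, c): normalize(cross(b - a, c - a)).
// A sketch of what a helper like Position::Normal likely does.
Vec3 triangleNormal(const Vec3 &a, const Vec3 &b, const Vec3 &c) {
    const Vec3 u{b.x - a.x, b.y - a.y, b.z - a.z};
    const Vec3 v{c.x - a.x, c.y - a.y, c.z - a.z};
    Vec3 n{u.y * v.z - u.z * v.y,   // cross product, component by component
           u.z * v.x - u.x * v.z,
           u.x * v.y - u.y * v.x};
    const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```

Summing the six face normals around a vertex and normalizing the result, as the loop above does, yields the averaged vertex normal.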
3.1.3 Sky Dome
A sky dome is a sphere cut in two, with a circle as its base. This object's vertices and indices are generated and uploaded to the GPU in the Init function. Note: the vertices generated here have only a position; they have neither normals nor texture coordinates. The sky dome represents much of the background of a 3D game.
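One possible way to generate such dome vertices (position only) is as stacked rings of latitude plus a top pole. This is a hedged sketch of the idea, not the engine's actual Init code, and the function name is made up:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Generate position-only vertices for a dome (upper half of a sphere):
// nStacks rings of nSlices points each, plus the top pole vertex.
std::vector<Vec3> buildDomeVertices(float radius, int nStacks, int nSlices) {
    const float PI = 3.14159265f;
    std::vector<Vec3> verts;
    verts.push_back({0.0f, radius, 0.0f});            // top pole
    for (int s = 1; s <= nStacks; ++s) {
        const float phi = (PI / 2.0f) * s / nStacks;  // 0 = top, PI/2 = base
        for (int i = 0; i < nSlices; ++i) {
            const float theta = 2.0f * PI * i / nSlices;
            verts.push_back({radius * std::sin(phi) * std::cos(theta),
                             radius * std::cos(phi),
                             radius * std::sin(phi) * std::sin(theta)});
        }
    }
    return verts;
}
```

The indices would then connect each ring to the next as triangle pairs, just as the terrain grid does.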
3.1.4 Sun
The Sun is basically just a circle, although it appears to be a sphere. Tricking the user into believing that it is a sphere, while doing far fewer computations by representing it as a circle, is achieved with a technique called billboarding. A billboard is a 2D object in a 3D world that rotates, moves or scales according to the position and rotation of the camera. In this application, the sun rotates as the camera rotates and moves with the camera, so the user can never see the sun from behind. The vertices generated in the Init function have only a position.
//generate vertices of circle
const float slice = 2*PI/nVertices;
for (unsigned int i=1, iVert=1; i<=nVertices; i++, iVert++)
{
float crtSlice = slice*i;
vertices[iVert].SetPosition(center.position.x + cos(crtSlice) * radius,
center.position.y + sin(crtSlice) * radius, center.position.z);
}
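A common way to billboard such a shape is to build its corners from the camera's right and up vectors, so it always faces the viewer. The following is a generic sketch of that idea with a quad (the engine's actual approach for the circle may differ; all names are illustrative):

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Corners of a camera-facing (billboarded) quad centered at `center`,
// spanned by the camera's right and up vectors. Because the axes come
// from the camera itself, the quad rotates with the camera.
std::array<Vec3, 4> billboardQuad(const Vec3 &center, const Vec3 &camRight,
                                  const Vec3 &camUp, float halfSize) {
    auto corner = [&](float sx, float sy) {
        return Vec3{center.x + (camRight.x * sx + camUp.x * sy) * halfSize,
                    center.y + (camRight.y * sx + camUp.y * sy) * halfSize,
                    center.z + (camRight.z * sx + camUp.z * sy) * halfSize};
    };
    return {corner(-1, -1), corner(1, -1), corner(1, 1), corner(-1, 1)};
}
```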
3.1.5 Ocean
The Ocean is a simple flat plane split into many triangles in the Init function. The waving effect is computed in the vertex shader every time it is rendered. The vertices present here have position, normal and texture coordinates.
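The per-vertex displacement that such a water vertex shader evaluates is typically a sine wave over position and time. A CPU-side equivalent can be sketched as follows (the constants and the function name are made-up illustrations, not the engine's values):

```cpp
#include <cmath>

// Height offset of an ocean vertex at (x, z) at time t: a single
// travelling sine wave, the kind of displacement a water vertex
// shader would compute per vertex. Constants are illustrative.
float waveHeight(float x, float z, float t) {
    const float amplitude = 0.5f;   // wave height
    const float frequency = 0.8f;   // spatial frequency
    const float speed     = 1.5f;   // how fast the wave travels
    return amplitude * std::sin(frequency * (x + z) + speed * t);
}
```

Summing several such waves with different directions and frequencies gives a more convincing water surface.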
3.1.6 Model Manager
The Model Manager handles the work with every imported model present while the engine runs, and it also handles the work with the Particle System. In the Init function the Model Manager loads all the available models once into an array of pointers, then loads data describing the models' positions, rotations and scales, and finally builds particle systems with trees, rocks and grass.
3.1.7 Particle System
Issuing many OpenGL calls per frame produces the phenomenon known as a 'bottleneck'. Rendering a single model (e.g. a tree) repeatedly (500-1000 times) with different transformation matrices produces this problem. To avoid it, a solution was needed to draw all those models with a single OpenGL call. Thus came the idea of putting a model repeatedly in a buffer, with the transformation matrices already applied, and rendering the models (the whole buffer) as a single object. The class managing these actions is the Particle System.
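The batching idea described above, applying each instance's transform on the CPU and appending the transformed copies to one large vertex array so everything draws with a single call, can be sketched as follows (simplified to translation-only transforms; names are illustrative, not the engine's):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Bake many instances of one model into a single vertex array.
// Each instance's transform (here just a translation, for brevity)
// is applied on the CPU, so the whole batch can later be drawn
// with one glDrawArrays/glDrawElements call instead of hundreds.
std::vector<Vec3> bakeInstances(const std::vector<Vec3> &modelVerts,
                                const std::vector<Vec3> &instanceOffsets) {
    std::vector<Vec3> batched;
    batched.reserve(modelVerts.size() * instanceOffsets.size());
    for (const Vec3 &off : instanceOffsets)
        for (const Vec3 &v : modelVerts)
            batched.push_back({v.x + off.x, v.y + off.y, v.z + off.z});
    return batched;
}
```

The trade-off is memory: each instance duplicates the model's vertices, which is why this fits small, frequently repeated models such as grass or rocks.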
3.2 First Rendering. Lazy Init.
The shaders are loaded during runtime, at the moment they are first needed (lazy initialization).
Render(...)
{
if (shaderProgram == nullptr)
initShaders();
...
}
3.3 Rendering
After everything needed has been loaded, the rendering begins.
3.3.1 Main render function
The order of rendering matters. The sky needs to be rendered first, because it is drawn with the depth buffer disabled, and the sun needs to be rendered last, because the effect of the sun rays is in fact a blurring (post-processing) pass and a custom depth test over the resulting final image. The rendering of all the other objects comes in between.
void MainGame::renderScene()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
sky->Render(camera, sun->GetSunPosition());
levelRenderer->Render(camera, sun);
modelManager->Render(camera, sun);
water->Render(camera, time);
sun->Render(camera);
window.SwappBuffer();
}
3.3.2 Model Manager render function
The ModelManager has a list of particle systems, 3D models and model elements. Model elements have a pointer to a model, plus information about it such as rotation, position and scale. Every frame the ModelManager renders the particle systems and the model elements.
void ModelManager::Render(const Camera &camera, const Sun *sun)
{
//render models with specific position, rotation and scale
//update animation frame based on current time
for (auto it = modelElements.begin(); it<modelElements.end(); it++)
{
it->model->Update(time);
it->model->Position = it->position;
it->model->RotateY = it->rotationY;
it->model->Scale = it->scale;
it->model->Render(camera, sun);
}
//render particles
for(auto it = particleModels.begin(); it != particleModels.end(); it++)
(*it)->Render(camera, sun);
}
3.3.3 Model rendering
A model is built from many meshes. A mesh represents a number of triangles that use the same texture (material) for rendering. For example, rendering a tree model consists of rendering its trunk and its leaves, which are separate meshes using different textures. To render a model we need to send its model matrix, the view-projection matrix and the light position to the shader, which will render it in the right spot, with the right rotation and scale, and with the right illumination. But before we render anything via the glDrawElements call, we need to tell the shader at which offsets it will find the vertex attributes, and we also need to tell it which texture to use.
void Model::Render(const CameraSpectator &camera, const Sun *sun)
{
program->Use();
glUniform3fv(program->GetUniformLocation("lightPos"), 1, &sunPos[0]);
glUniform3fv(program->GetUniformLocation("lightColor"), 1, &sun->GetSunColor()[0]);
glm::mat4 m = computeModelMatrix();
glm::mat4 mvp = camera.GetCameraMatrix() * m;
glUniformMatrix4fv(program->GetUniformLocation("MVP"), 1, GL_FALSE, &mvp[0][0]);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Engine::Vertex),
(void*)offsetof(Engine::Vertex,position));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Engine::Vertex),
(void*)offsetof(Engine::Vertex,normal));
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Engine::Vertex),
(void*)offsetof(Engine::Vertex,uv));
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboId);
glActiveTexture(GL_TEXTURE0);
for (auto it = meshes.begin(); it != meshes.end(); ++it)
{
glBindTexture(GL_TEXTURE_2D, materials[it->materialIndex]);
glDrawElements(GL_TRIANGLES, it->nIndices, GL_UNSIGNED_INT,
(void*)(sizeof(unsigned int) * it->baseIndex));
}
program->UnUse();
}
3.3.4 Sun and sun rays rendering
The rendering of the sun and sun rays involves multiple steps that include altering the graphics
pipeline, and rendering to different color/depth textures.
void Sun::Render(const Camera &camera)
{
//copy existing depth buffer to depth texture
glBindTexture(GL_TEXTURE_2D, texIdDepth[0]);