In this section we will have a short but powerful introduction to Blender. We will cover
just enough model creation basics to let you build most simple projects.
We will not cover animation, shaders or modifiers here - just the bare minimum needed
to create this ramp floor for our tutorial:
You will find a lot of keyboard shortcuts here. And this is one of the most awesome
features of Blender - you can work without menus or panels! Everything you need
can be done with the keyboard!
So let’s dive into Blender now!
Welcome to Blender
When you open Blender, you will see a pretty image made with Blender, version information,
some useful links and a list of recent files.
To close this window, simply click outside it. You will then see your workspace with the
default window layout (we will learn about layouts later). The workspace already contains a few
kick-start objects: a camera, a lamp and a plain cube.
You may be confused by the last one, but you will see shortly how many cool things can
be done by starting with a plain cube and modifying it. Speaking of modifying: let’s switch to
Edit mode by hitting the Tab key:
You can exit it by hitting Tab again. In Edit mode you can manipulate the mesh’s
vertices, edges or faces. To switch between these, use the three buttons in the 3D view’s header:
Let’s choose the face editing mode. Unlike other 3D editors, in Blender selection is done
with the right mouse button. Select one face of the cube:
You may have noticed that the axis arrows have moved to the selected face.
These are used to manipulate selected elements, and they also show the orientation of the selected
element. You can move the selected element by simply dragging one of the arrows. The selected
element will be moved along the chosen axis only:
The same operation, movement, can be performed by hitting the G key. You can move other
elements too - this will change the shape of our cube:
Now let’s try something more complex. See the Tools panel on your left?
Select a face in face editing mode and click Extrude (or hit the E key). Your face
will be extruded and you will be able to move it freely. But usually designers move elements along
some axis - this makes models more accurate. To lock the movement to an axis, just hit its letter while
extruding - X, Y or Z:
Interesting fact: you may extrude vertices and edges too.
Now let’s use an even more advanced operation, one that is usually introduced much later in
3D modelling tutorials. Choose the Loop Cut and Slide operation from the Tools panel - you will
see nothing until you move your cursor over your model. Depending on which edge the cursor is
closer to, you will see a purple rectangle looping through your model:
When you click the left mouse button, you will move to the next part of this operation -
sliding. Just place the new edges where you want:
Now let’s create walls for our “ramp”. Create a few loop cuts along the ramp and we will start
extruding them. Or maybe we should just move the faces?..
No, that’s definitely not what we want! We want walls, not a new ramp! Hmmm… But if we
extrude the walls one by one, the result will be inaccurate… Hold the Shift key and
right-click the two neighbouring faces:
Now we can work with all three elements at once. Hit the E key and then Z,
and extrude all three walls upwards at the same time:
Now we need two more walls to prevent our hero (the ball, if you recall from the previous part)
from falling off the sides. Select two edges at the corner of our ramp and hit the W key. You
will see a context menu like this:
Click the Subdivide item, and the selected edges will be connected right in the middle:
You can perform that operation on faces too - that is often handy. Now, if you undo your changes with
the usual Ctrl+Z (or Command+Z on Mac) and try to perform
the same operation on four opposite edges, you will see there is a redundant (in our case) edge.
You can remove it by selecting that edge, hitting X and choosing Dissolve Edges.
If you choose Delete Edge instead, you will lose the neighbouring faces that were built from that edge.
So in the end we need to have two edges lying on the same line:
Now switch to the Ortho view, choosing one from the View menu at the bottom of the
screen, or hitting the Num 5 key:
Your workspace should now look different:
Using the View menu, you may switch between different views perpendicular to your model.
Switching between views does not clear the selection - and this is awesome!
So if you try to move the selected edges in the Right Ortho view, you will move
both of them:
Yikes… They move along the Y axis only, not along the edge. But Blender handles
that easily - you just need to switch the coordinate system using the corresponding menu
at the bottom of your screen:
Use the Normal one and you will see the arrows at the selected edges change:
Now movement is done along the edge, just as we need:
Try moving (yes, moving, not extruding) our edges up - they will move along the normal:
But if you click the mouse wheel and rotate the camera, or even if you switch to the
Top Ortho view, you will notice that our walls have different widths:
So we need to make one wall thinner. But we should not forget about the other edges - the ones
that will form the second wall for us. Undoing now is not an option… We need to move the edges.
But if you move only those visible in the Top Ortho view, you will miss the ones
at the bottom and ruin the model. And selecting all those edges one by one is not an option either.
Moreover, we cannot even see those edges at the bottom! This is easy to fix, though: see the small
button with rectangles near the vertex/edge/face switcher?
Click it and you will be able to select the bottom edges without having to rotate the camera.
And now we will try the circle selection tool, which comes in handy when you need to
select many elements at once. Hit the C key and you will see a circle in the
workspace. Try dragging it (left-click and drag) over the edges we need:
Hmmm… That selected way too much… Now hold the Shift key and drag the circle over
the neighbouring, redundant ones to deselect them:
Now we can switch back to the Top Ortho View and successfully move our edges:
Now that we have our walls precisely set up, we can extrude the last two walls.
Select the Normal coordinate system and perform the extrusion along the Z axis:
Now we will scale our model a few times. Staying in Edit mode, select all the faces with
the A key:
Then hit the S key and start typing the scale factor. That’s right - just type the number.
You can correct what you entered using the Backspace key. You can do the same
thing while moving or rotating elements. This is useful when you need to
make an operation really precise. But you can still use your mouse, of course.
Hint: if you scaled your model outside Edit mode, you may find its scale, translation
or rotation different from the identity values (1, 1, 1 for scale, 0, 0, 0 for position/rotation).
This may cause various bugs when exporting models. To fix it, select your
model in Object mode, hit Ctrl+A and choose Apply Scale (or whatever you need to fix) from the pop-up menu.
Texturing our model
Now we need to paint our model to have something more beautiful in our application than just
pitch black… stuff…
Adding a texture to a model in Blender is extremely easy - you just select your model, switch to
the Texture tab on the right panel and click the New button:
Then you pass in some parameters like texture size, background color and image name - and you are done!
But that will only add a blank texture, which you will then need to paint as you wish.
Painting a texture requires your model’s vertices to be synchronized with the texture,
so that each vertex knows where it lies both in 3D space and on the texture image. This
assignment process is called texture unwrapping, or UV mapping (texture
coordinates are usually called u and v instead of x and y, since the latter are already
used to describe a vertex’ position). This process requires one thing from you: you
need to specify where Blender should “cut” your model. This is quite a simple task, but it
determines how the texture will look and how easy it will be to paint.
So, go to the Edit mode and select a few loops of edges:
Now, on the left panel, switch to the Shading/UVs tab and click the Mark Seam button:
This will mark the selected edges as seams to “cut” your model along. Have no fear - your model
will not actually be cut; the seams are only used for the math.
Then, on the same panel, click the Unwrap button and select the first unwrapping method from the list.
You will see no effect yet. To see something, switch the window layout in the top menu to UV Editing:
You will see two windows: the UV/Image editor on your left and the 3D view on your right.
And again, nothing interesting here… But I am not fooling
you - that’s just how Blender works… To see something marvelous, select everything in
the 3D view:
You will see some lines on your left. That’s what you have selected, mapped onto the image plane.
But there is no actual image in the UV/Image editor yet. To add one, just click the
New button in the bottom menu of the UV/Image editor or select an existing one:
This will not change the image itself. The image will just be the background for our image editor
window, nothing more. To start making miracles, switch to the Texture Paint mode in the 3D view:
And your model will change its look…
What is this pink monster?! Well, on the left panel of our 3D view there’s a message
saying that a texture slot is missing and proposing to create one… Let’s do that…
Now we are able to paint our model! See how awesome it is: you have a brush tool activated.
The brush has three parameters:
Color - this can be changed with the color circle below
Pressure, or alpha
Radius
The radius can be changed by pressing the F key and moving the mouse cursor:
The pressure can be changed by pressing Shift+F and doing the same:
And you can just paint like in… Microsoft Paint!
But if you look into the UV/Image editor, you will see… nothing! Again! What the hell?!
That is just a misunderstanding - you were painting on a different image than the selected one:
we created a new image when we created the texture slot…
To start drawing in the UV/Image editor instead of the 3D view, you just need to switch
its mode to Paint in the bottom menu:
Okay, so far so good. We are able to paint our model. But there’s one interesting thing: if
you try to draw a straight line, you may face a situation where the line is straight in the image
but curved on the model:
But that does not happen everywhere - only on certain faces/edges:
Well, that’s because the UV mapping is not precise enough. If you switch the UV/Image editor to
the View mode and the 3D view to Edit mode, and select the whole model,
you will see points in the image editor that you may drag.
Try selecting them with the right mouse button and moving them with G:
Yes, the texture now looks creepy, but the lines are almost straight:
Exporting our model
When you finish painting your texture, the last thing we need to do is export our model
to a format Irrlicht understands. Luckily, both Blender and Irrlicht support
many different formats:
Blender’s file dialogs look different from the system ones, but have a very intuitive interface:
If you do not see the needed format in Blender, you just need to turn on the corresponding plugin:
After exporting our model to, say, 3DS format, take a look at the directory you have exported
your model to:
Where are the textures? Relax - they are still in the UV/Image editor, just not saved yet. You can
save the modified image via the Image -> Save menu at the bottom of the UV/Image editor:
Now we have everything we need for our Newtonian sample!
Have you ever heard about end-to-end testing? Or maybe about test automation?
Those of you who have may now be imagining Selenium. That’s right: in most cases
you will need to run a Selenium Server and use Selenium WebDriver in your
tests. They come in handy for running a standalone browser window, with no caches,
pre-filled fields or cookies, and performing some operations in it.
In this article I will tell you my story of writing E2E tests for an Angular web app.
A brief history
In my case, we first tried to use Protractor with Chai.js. We ended
up with an almost unmaintainable bunch of code, though it succeeded in 100% of runs.
Next we eliminated Chai and reworked all our tests to use Protractor only.
The code became clearer (I did not like the syntax, but it worked…),
but after upgrading libraries (including Protractor), the ratio of successful
test runs dropped to just 40%.
We worked for two days, trying to fix those tests. And that’s how webdriverio
came to our project.
And here’s a short tutorial on how to implement E2E tests with webdriverio in
a sample project.
Recently I was given a reeeeally interesting homework assignment at the university: a set of
MD5 hashes, calculated from single words (taken from LibreOffice’s dictionaries) with a given
salt. The task was to find all those words.
The first idea that came to my mind was to use an internet service for MD5 cracking. But…
aaarrrggghhh! There’s a salt, so a web service that looks words up in a dictionary fails to find anything.
The second idea was to take that dictionary from LibreOffice and iterate through it. In the
end, it worked =) And worked reeeally fast. But that is not the interesting part.
I wondered if I could find those words using a dictionary generated by my own code.
At my job we recently started researching logging tools to make our RESTful API, written in Clojure,
write its logs in JSON format. We were already using Log4j, but decided to use another tool for
this task to make it less painful. So we fell for timbre. It seemed so easy to use, but it turned out
not to be that simple.
According to timbre’s API, we needed to define our own appender for writing to a custom JSON file.
We found the output-fn option to configure this behaviour, but it is not documented at all,
so we started looking through repositories that use timbre, examples and all that stuff. Finally,
we ended up with our own solution.
Below you will find a description of our way of using timbre from scratch.
How do we usually create a web application? We run a bootstrapping script, which provides us with a skeleton of our application and then we just extend it with the features we need.
That’s exactly what we did at the last hackathon we were attending - we started with rails new twf and spent half of the day integrating our blank app with Angular, Paperclip, creating API methods and so on. But the effort we needed to accomplish our goal (quite a simple web app) was really huge.
So I decided to find the best combination of backend and frontend technologies that would cause less pain.
At the project I was recently introduced to, the line between frontend and backend is drawn very clearly: we have an API written in Clojure and a thin frontend application made with Angular that works on a generated set of static assets - HTML, CSS and JS files (though under the hood we are using HAML and SCSS).
The application I will be implementing throughout the whole article has the same architecture: it has RESTful API and MVVM on the frontend, made with Angular. I welcome you to the journey of research and new technologies!
That’s our “game”? Hardly… So let’s make things move like in the real world! Or something like that…
First of all, go and get the
Newton GD files. And unpack them… right into the source directory of our project! That’s right!
I’m not insane, and I’m aware you are about to put a lot of files into your project. But have no
fear - you may always add them to .gitignore to keep them from being tracked in your Git repo:
You are using Git, right?.. Now, place the Newton GD sources in your project directory and change
your CMakeLists.txt file to look like this:
Try to compile your project - it should be just fine. And observe the power of CMake!
Let’s start modifying our Irrlicht sample application. First of all, we will add some Newton headers:
The basic entity in the whole Newton GD library is the NewtonWorld. It is exactly what it sounds like - the world where
all the physics happens. It is separate from the world where we place our 3D models. And that should be
obvious - graphics are managed by Irrlicht and physics by Newton; those are totally different libraries.
So we need to tie the two together so that the graphics correspond to what happens in the physical world.
First of all, we need a variable for our NewtonWorld. And since physics is handled by scripts too,
we need to keep that variable close to our other objects - in the ScriptManager class.
There are two functions we need to bind to our NewtonBody:
The first one, transformCallback, is called whenever a body changes its transform,
i.e. its position or rotation. This is a good place to synchronize our Irrlicht meshes’
positions with their Newton bodies.
The applyForceAndTorqueCallback function is called on each NewtonUpdate to set the final
forces and torques for bodies. We will modify this one later, but for now its implementation
is good enough.
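As a rough sketch of what these two callbacks might look like (assuming the Newton 3.x C API; storing the Irrlicht node pointer in the body’s user data is just one possible way to link the two worlds):

```cpp
#include <irrlicht.h>
#include <Newton.h> // adjust the include path to your Newton GD setup

// Called by Newton whenever a body moves: copy the new position to the Irrlicht node.
// We assume the corresponding ISceneNode* was stored as the body's user data.
void transformCallback(const NewtonBody* body, const dFloat* matrix, int threadIndex) {
    irr::scene::ISceneNode* node =
        static_cast<irr::scene::ISceneNode*>(NewtonBodyGetUserData(body));

    if (node) {
        // Newton matrices are 16 floats; the translation lives at indices 12..14.
        node->setPosition(irr::core::vector3df(matrix[12], matrix[13], matrix[14]));
    }
}

// Called by Newton on every NewtonUpdate: apply gravity as the only force for now.
void applyForceAndTorqueCallback(const NewtonBody* body, dFloat timestep, int threadIndex) {
    dFloat mass, Ixx, Iyy, Izz;
    NewtonBodyGetMass(body, &mass, &Ixx, &Iyy, &Izz);

    dFloat gravity[4] = { 0.0f, -9.8f * mass, 0.0f, 0.0f };
    NewtonBodySetForce(body, gravity);
}
```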
But what about that NewtonUpdate? This function does exactly what it says: it
updates the NewtonWorld and all its bodies, taking into account the time since the
last update. There is one great candidate to place this call into: handleFrame.
We just need to modify that method to receive the time since the last frame was rendered,
and we will use this time to update the NewtonWorld too.
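A minimal sketch of how that might look (the newtonWorld member and the deltaTime parameter, in seconds, are assumptions about how you wire it up):

```cpp
// Advance the physics world by the time elapsed since the previous frame,
// then run the rest of the per-frame logic (script callbacks, etc.).
void ScriptManager::handleFrame(float deltaTime) {
    NewtonUpdate(newtonWorld, deltaTime);

    // ... call the scripts' per-frame handlers here ...
}
```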
Remember about architecture: everything that needs to be exposed to our scripts should be
declared public in our ScriptManager; everything else - protected or private.
This is the basic principle of encapsulation, so let’s stick to it in our application.
And update the main application loop:
Hint: to make the simulation slower and watch the ball falling in detail, make the
NewtonUpdate argument even smaller - a thousand times smaller, say.
Since we initialize Newton resources, we need to clean them up on exit
to prevent memory leaks. Let’s declare a method for that:
And call it right before the program’s end:
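A sketch of such a cleanup method, assuming the Newton 3.x C API (the method name is illustrative):

```cpp
// Destroy all bodies and the physics world itself; call once, right before exiting.
void ScriptManager::stopPhysics() {
    NewtonDestroyAllBodies(newtonWorld);
    NewtonDestroy(newtonWorld);
    newtonWorld = nullptr;
}
```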
And now is the right moment to add key code definitions and an exit function to our
ScriptManager, so that we can write cleaner code and close our application
correctly using, say, the Esc key.
To stop our application, we need to break out of the while (device->run()) loop. This can be
achieved by simply closing the IrrlichtDevice with device->closeDevice(). But we
do not have access to the device from the ScriptManager, so let’s add it there as a member.
Now we can create a function, exposed to our scripts, which will stop our application:
And bind it to the Lua function:
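A sketch of how this could look with luacppinterface (the device and luaState member names are assumptions; double-check the wrapper’s API against its README):

```cpp
// Expose an exit() function to Lua scripts; it closes the Irrlicht device,
// which breaks the `while (device->run())` loop in main().
void ScriptManager::bindExitFunction() {
    auto exitFn = luaState.CreateFunction<void()>([&]() {
        device->closeDevice();
    });

    luaState.GetGlobalEnvironment().Set("exit", exitFn);
}
```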
Now we can use our exit function in the Lua scripts. But we will need to use hexadecimal
key codes and that’s… ugly. So we need to define some symbolic names for those codes:
Now we can create an Esc key handler in our script:
Now we are ready to create our first Newton bodies. Bodies are invisible objects
which define how our Irrlicht meshes will behave (e.g. where they will be placed,
how they will interact when moving, etc.). Basically, there are two types of bodies:
dynamic, whose movement is determined by the forces, applied to them
kinematic, which are controlled by setting their velocities
Those two kinds of bodies are totally different, so the interactions between them
are not pre-defined: when your dynamic body falls onto a kinematic one, it will simply fall through.
And each body has its shape, which determines how the body behaves when it collides with others,
and, of course, drives the collision detection itself. Shapes can be convex or concave.
Convex shapes are easier to work with (at the physics simulation level), but in practice not all
bodies are convex. For example, levels are often concave. So they need a special kind of
shape, called a Triangle Mesh.
Note: to keep the performance of your application high, try to minimize the use of
triangle meshes and use shapes as simple as possible. Sometimes it is more efficient to
combine a set of primitive shapes like spheres, cylinders and boxes into one compound
shape than to use a trimesh.
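To make this more concrete, here is a rough sketch of creating a dynamic body with a simple sphere shape (Newton 3.x C API assumed; the helper function itself is illustrative, not the tutorial’s actual code):

```cpp
// Create a dynamic body with a sphere collision shape at the given position.
NewtonBody* createSphereBody(NewtonWorld* world, float radius, float mass,
                             const irr::core::vector3df& position) {
    // Identity transform; Newton stores matrices as 16 floats with the
    // translation at indices 12..14.
    dFloat matrix[16] = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        position.X, position.Y, position.Z, 1
    };

    NewtonCollision* shape = NewtonCreateSphere(world, radius, 0, nullptr);
    NewtonBody* body = NewtonCreateDynamicBody(world, shape, matrix);

    // Derive mass and inertia from the collision shape.
    NewtonBodySetMassProperties(body, mass, shape);

    // Hook up the callbacks described earlier.
    NewtonBodySetTransformCallback(body, transformCallback);
    NewtonBodySetForceAndTorqueCallback(body, applyForceAndTorqueCallback);

    // The collision shape is reference-counted; the body holds its own reference.
    NewtonDestroyCollision(shape);

    return body;
}
```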
Let’s create our first simple scene, empowered with physics! We will need only two things:
Since the standard Irrlicht distribution does not have a good mesh for the floor
(there is a Quake-like level, but that is too much for our case), we will learn how
to make that simple thing in Blender. The next part is a short break between coding sessions.
As we discussed, we will describe the whole game in scripts and define the core functionality
in the core. In this chapter we will be adding Lua to our application.
You do not need to download Lua itself - you’d better install it with your system’s
package manager (yum or apt or whatever your Linux uses; brew for OSX…).
The only thing you need to download from the Internet this time is a Lua wrapper called
luacppinterface. So go and get it
and unpack it… right into the source directory of our project! That’s right! It is a
really small library, so it will not pollute your project with tons of files.
Now, I mentioned dependency managers earlier. This is how we will handle dependencies in our C++
application - we will simply put the sources of all the libraries we depend on, at the
versions we depend on, right into our project. Given that, you may put Irrlicht there as well -
you are free to do anything with our project!
To build our project we will need to change our CMakeLists.txt file to fetch
our new dependency:
And here’s the thing: if you try to compile our project on another machine, you will
not need to install any libraries other than Lua on that machine! That is supposed to sound
like “sweet, huh?”, except for one little “but…”… Bittersweet…
Back to our business… luacppinterface needs to be tweaked a bit to fit our project -
we will hack its CMakeLists.txt file to make it depend on the system Lua libraries.
Just make it look like this:
It barely differs from the original file, but it makes compilation pleasant - you
do not need to specify the paths to the Lua libs anymore!
Injecting some Lua
Our application now uses C++ code to place some 3D objects in a scene. Let’s move,
say, the sphere creation to a script.
First of all, add luacppinterface headers to our main.cpp file:
Now let’s look at some of Irrlicht’s conventions:
it uses irr::video::IVideoDriver for rendering operations
it uses irr::scene::ISceneManager for scene management
So why not define a ScriptManager to handle scripts? Our requirements
for this class (for now) are:
it should load and evaluate scripts
it should provide simple API to our scripts
Let’s get coding!
This is just a skeleton - we will fill it out in a minute. Just to recap:
this class depends on IVideoDriver and ISceneManager to handle 3D objects and the scene
it contains a Lua luaState field to store the current state of the running script
it stores all the nodes in a <string, ISceneNode*> map to allow access to our nodes from scripts
it exposes three methods as an API to Lua scripts: createSphereNode, setNodePosition and
getNodePosition, so we will be able to make some manipulations in our scripts
it provides a really short and simple interface to our C++ core: ScriptManager(...) and loadScript
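For reference, a minimal sketch of how such a class header might look (member and parameter names here are illustrative and may differ from the actual tutorial code):

```cpp
#include <map>
#include <string>

#include <irrlicht.h>
#include <luacppinterface.h>

class ScriptManager {
public:
    ScriptManager(irr::scene::ISceneManager* sceneManager, irr::video::IVideoDriver* videoDriver);

    // Load a script file from disk and evaluate it.
    void loadScript(const std::string& filename);

    // API exposed to Lua scripts.
    void createSphereNode(const std::string& name, const std::string& textureFile);
    void setNodePosition(const std::string& name, float x, float y, float z);
    irr::core::vector3df getNodePosition(const std::string& name);

private:
    irr::scene::ISceneManager* smgr;
    irr::video::IVideoDriver* driver;

    Lua luaState;
    std::map<std::string, irr::scene::ISceneNode*> nodes;
};
```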
The main principle each and every programmer breaks every day is KISS (Keep It Stupidly Simple).
That principle should guide us through this whole tutorial so that we do not overthink and
over-engineer ourselves or the project we are making. That is why our APIs are that simple.
But let’s get back to our ScriptManager. It shows how things will look, but does not
define how they will actually work. So here are the key points of the Lua API:
LuaTable is an array-like structure representing both indexed and key-value
arrays in Lua. This type is a way to pass variables between a Lua script and the C++ program. You
may use both the table.Get<value_type>(index) and the table.Get<value_type>("key") methods
to access its values.
To bind our ScriptManager methods to Lua functions, we need to use pointers to those
functions. And since that is not so simple in plain C++, we will use C++11 lambdas:
All the functions and variables you want to pass to Lua scripts should be global. And since
we have our pretty luaState member, we may set global members through its methods:
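As an illustration (assuming luacppinterface’s CreateFunction / GetGlobalEnvironment API), binding a member method through a lambda and registering it as a global might look roughly like this:

```cpp
// Wrap member methods in lambdas, turn them into Lua functions and register them globally.
void ScriptManager::bindFunctions() {
    auto createSphere = luaState.CreateFunction<void(std::string, std::string)>(
        [&](std::string name, std::string texture) { createSphereNode(name, texture); });

    auto setPosition = luaState.CreateFunction<void(std::string, double, double, double)>(
        [&](std::string name, double x, double y, double z) { setNodePosition(name, x, y, z); });

    // Everything a script should see must live in the global environment.
    LuaTable global = luaState.GetGlobalEnvironment();
    global.Set("createSphereNode", createSphere);
    global.Set("setNodePosition", setPosition);
}
```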
We will use just a map of Irrlicht nodes keyed by their names to pass those nodes between
the scripts and the core:
Given those, we have our API and are able to create and run our first Lua script.
Add one in the media/scripts/ directory:
Note: paths in the script will be resolved by the C++ core relative to the binary file, which
is… generated by our C++ code! So all the paths in the scripts are just the same as they
are in the C++ core.
And add the ScriptManager initialization code:
Now you may remove the sphere-creating code from the main() function and run the application.
You should see exactly the same picture as before:
Your task: try to move all the other “factory” functions (creating the cube, the ninja,
the circle animator for the cube and the fly animator for the ninja) to the Lua script, adding an API for them as needed.
We will now advance our script and add some conventions to it. These will be our tasks
for the rest of this chapter:
move keyboard event handling to the script
create two functions in the script so we may call them by convention, not by configuration
The last phrase I took from the Ember.js introduction. It says “prefer convention over
configuration”, meaning we’d better call functions with the same name in different scripts,
instead of somehow configuring which function to call.
That is, we will define a handleFrame() function in our script, which will be called
on each frame by our C++ core, and a main() function, which will be called right
after the script has been loaded.
Moreover, we will define a global keyboard state table for each script we load and will
update it as the user presses keys. This variable will be shared with the
script as a read-only one, so changes to that table from the script will have no effect on the application.
Variables are added to the GlobalEnvironment just as functions are:
Lua-defined functions are looked up by name and called with the Invoke(args) method:
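Putting the two together, a sketch (luacppinterface API and the isKeyDown helper are assumptions here) that extends the earlier handleFrame idea by publishing the key states and then calling the script’s function by name:

```cpp
void ScriptManager::handleFrame(float deltaTime) {
    NewtonUpdate(newtonWorld, deltaTime);

    // Publish the keyboard state to the script as a plain global table
    // (1 = pressed, 0 = released); isKeyDown() is a hypothetical helper
    // backed by the Irrlicht event receiver.
    LuaTable keyStates = luaState.CreateTable();
    keyStates.Set("w", isKeyDown(irr::KEY_KEY_W) ? 1 : 0);
    keyStates.Set("s", isKeyDown(irr::KEY_KEY_S) ? 1 : 0);
    luaState.GetGlobalEnvironment().Set("KEY_STATE", keyStates);

    // By convention every script defines handleFrame(); look it up and call it.
    auto scriptHandleFrame =
        luaState.GetGlobalEnvironment().Get<LuaFunction<void()>>("handleFrame");
    scriptHandleFrame.Invoke();
}
```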
Let’s add some simple interaction to our script now. I’ll help you a bit:
This is how nodes can be moved relative to their current position in Irrlicht.
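For example, given an irr::scene::ISceneNode* taken from our nodes map, the offset is just added to the current position:

```cpp
// Move a scene node relative to its current position by (dx, dy, dz).
void moveNodeBy(irr::scene::ISceneNode* node, float dx, float dy, float dz) {
    node->setPosition(node->getPosition() + irr::core::vector3df(dx, dy, dz));
}
```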
And here’s how our Lua script may look now:
If you run our application now, you should be able to control the sphere with the w key and
First of all, you will definitely need the Irrlicht engine, so
go get it.
Then you will need to compile it. The compilation process depends on the operating system you use,
but it’s really similar on each one.
Install these dependencies with your system’s package manager:
libenet-dev libxxf86vm-dev zlib-dev cmake.
Unzip Irrlicht, go to the unpacked directory in a terminal and run the following:
Believe it or not, that’s all!
Unzip Irrlicht, go to the directory you unpacked it to and open the Visual Studio project (depending on
your Visual Studio version, you might need to open a slightly different file) in source/Irrlicht:
Build it with Visual Studio - and you are done!
The steps are a bit more complicated here. They require you to install XCode and the
Command-Line Tools - those can be found either in the App Store or on the Apple developer site.
First of all, you need to install a bunch of dependencies (I use brew for this purpose):
Get a list of all compilers available for your OSX version:
I got something like this:
Now the build process:
And the final step - copy the library to the lib/MacOSX directory:
Phew! That’s a damn bunch of commands, don’t you think?
By performing the steps described above, you will end up with the compiled Irrlicht library file
in the lib/ subdirectory corresponding to your platform:
Now, create a blank project in your favorite IDE and proceed…
Our first application will show you the basic Irrlicht features we will use later. They are:
mesh handling - loading, rendering, animating, etc.
user input handling - reacting to keyboard and mouse events
user interface (UI) - displaying some information within the application window
A good starting point is a standard example from the Irrlicht pack, the 04 - Movement one.
Let’s take a look at its code:
Building the project
Paste the code from above into your blank IDE project, into the source/main.cpp file.
The exact location may differ - it is not critical. Now add a CMakeLists.txt file to your project
and fill it with these commands:
Note: for those of you running Mac OS X, I prepared a slightly more complicated
CMakeLists.txt file - just to make our application run everywhere:
But what happens in all that code? The first two lines of our CMakeLists.txt file define the project:
Then we modify the CMAKE_CXX_FLAGS variable, which will be used to set compiler flags.
This is how we add items to lists or modify string variables in CMake: we set the variable to a new
value, consisting of the old one plus the new elements / parts:
Then we tell CMake not to build the Newton demo sandbox subproject and set a few path variables -
we will use them to point the compiler to the header and library files of our third-party libraries
(Newton itself, Irrlicht and others).
Remember: these are only plain variables; by themselves they have no effect on the compiler.
Next, we point CMake to our sub-projects, which are in fact our third-party libraries:
These tell CMake to build the sub-projects before building our application. Because our sub-projects
are nothing but libraries, we can then look for the built libraries required by our project
in the sub-projects’ output directories, like this:
In the same way we look for system libraries:
These commands set compile-ready variables like X11_LIBRARIES.
Some sub-projects may set CMake variables too, providing us with paths to their include or
library files. Since Irrlicht does not do this, we try to find its paths with CMake:
Note the environment variables CMake provides us with: UNIX, APPLE, WIN32, MSVC
and many others. They describe which operating system CMake was run under and which
compiler it was told to use.
And the most important part of our CMakeLists.txt file:
This actually runs the compiler with the include directories, source files and
output file specified.
After that, we run the linker to link the intermediate object files produced by the
compiler, and end up with the application executable:
For OSX users there is a small hack needed to build the application:
Note the order the commands are specified in: having the include path variable definitions
placed before the sub-project commands is harmless, but more “effectful” commands,
like compiling sub-projects (add_subdirectory), depend on other CMake commands, so
be sure to keep the order sane and clean.
Running the build
Now that you are ready, run the following commands from your project directory
(you will need cmake to be installed on your system):
Warning: do not forget to replace path_to_directory_where_you_unpacked_irrlicht with
the actual path to the directory where your Irrlicht files live!
This will build our first Irrlicht application. It may not be obvious how handy this is right now,
but you will see the power of CMake in later sections.
Before you run the application, copy the whole media directory from the Irrlicht
dir to the parent dir of your project. You should end up with a directory structure like this:
Note: if you now just run the irrlicht_newton_game1 binary on OSX, you will see that
your application does not react to keyboard events. The trick is that you need
to package it as an OSX application bundle. This is easy, though: just create
the directory tree mkdir -p irrlicht_newton_game1.app/Contents/MacOS/ and move
your binary file there:
Open Finder and run the application from there. On other operating systems run
the executable file in your build directory.
Buuuuut, since we have CMake, we may simplify this task, because it is part of the
application build process: we need to create a usual binary file when we are
running Linux or Windows, or a directory structure with the binary at its deepest
level when running OSX. CMake allows us to do this in a really easy way:
You should see something like this:
To end the process, you may switch to a terminal and kill it from there.
Understanding the code
Here are a few simple things we can extract from the application’s code and understand right away:
Each 3D model is a scene node
Primitive scene nodes (such as a cube or a sphere) can be created easily with built-in functions:
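For instance (standard ISceneManager calls; the texture path is only a placeholder):

```cpp
// A sphere and a cube created directly by the scene manager.
irr::scene::ISceneNode* sphere = smgr->addSphereSceneNode(10.0f);
irr::scene::ISceneNode* cube = smgr->addCubeSceneNode(20.0f);

if (sphere) {
    sphere->setMaterialFlag(irr::video::EMF_LIGHTING, false);
    sphere->setMaterialTexture(0, driver->getTexture("../media/wall.bmp"));
}
```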
Animated 3D models (such as character models) can be loaded from a file:
Hint: if the mesh is animated, the animation can be started with:
Hint: the animation can be stopped by setting its speed to zero:
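A sketch using the standard Irrlicht calls (the mesh path and frame range are placeholders from the Irrlicht media folder):

```cpp
// Load an animated mesh from a file and add it to the scene.
irr::scene::IAnimatedMesh* mesh = smgr->getMesh("../media/ninja.b3d");
irr::scene::IAnimatedMeshSceneNode* ninja = smgr->addAnimatedMeshSceneNode(mesh);

if (ninja) {
    ninja->setFrameLoop(0, 13);       // the frame range to play
    ninja->setAnimationSpeed(15.f);   // start the animation...
    // ninja->setAnimationSpeed(0.f); // ...and this would effectively stop it
}
```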
A node is described not only by its vertices and indices (forming a set of triangles which are drawn
in 3D, making up a model, called a mesh) but also by its position, rotation and scale.
Those can be set with:
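For example, given an irr::scene::ISceneNode* node:

```cpp
// Position, rotation (degrees around each axis) and scale of a node.
node->setPosition(irr::core::vector3df(0.f, 10.f, 30.f));
node->setRotation(irr::core::vector3df(45.f, 90.f, 0.f));
node->setScale(irr::core::vector3df(2.f, 2.f, 2.f));
```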
Hint: the rotation is a set of angles relative to the corresponding axes the node will be rotated
around. E.g., vector3df(45, 90, 0) sets a rotation of 45 deg around the X axis, 90 deg around the Y axis
and no rotation around the Z axis. All those axes are relative to the node itself.
Graphical User Interface (GUI) widgets for information output are labels; they are created with:
Hint: their text can be set with:
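For example, given the IGUIEnvironment* guienv obtained from the device:

```cpp
// Create a static text label in the top-left corner and update its contents later.
irr::gui::IGUIStaticText* label = guienv->addStaticText(
    L"", irr::core::rect<irr::s32>(10, 10, 260, 30), true);

label->setText(L"Hello, Irrlicht!");
```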
User input is handled by an external IEventReceiver class object. Its OnEvent method
defines the logic of handling events like mouse events, keyboard events, joystick events,
GUI events, etc.
The type of event is passed in the event.EventType field. The corresponding field is filled
with the event parameters.
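A typical receiver, close to the one from the standard Irrlicht examples, simply remembers which keys are pressed:

```cpp
#include <irrlicht.h>

// Tracks which keys are currently pressed; Irrlicht calls OnEvent for every event.
class MyEventReceiver : public irr::IEventReceiver {
public:
    MyEventReceiver() {
        for (irr::u32 i = 0; i < irr::KEY_KEY_CODES_COUNT; ++i)
            keyIsDown[i] = false;
    }

    virtual bool OnEvent(const irr::SEvent& event) {
        // Remember the state of each key when a keyboard event arrives.
        if (event.EventType == irr::EET_KEY_INPUT_EVENT)
            keyIsDown[event.KeyInput.Key] = event.KeyInput.PressedDown;

        return false;
    }

    bool isKeyDown(irr::EKEY_CODE keyCode) const { return keyIsDown[keyCode]; }

private:
    bool keyIsDown[irr::KEY_KEY_CODES_COUNT];
};
```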
Hint: the EventReceiver object has nothing in common with our main game loop, so we should create
some interface, some architectural trick, to link the two - because they are strongly related!
The main game loop should contain the rendering call, the GUI rendering call and the rest of the game logic processing.
The simplest main loop could look like this:
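Something along these lines (device, driver, smgr and guienv are the usual objects obtained from createDevice()):

```cpp
// The bare minimum main loop: render the scene and the GUI on every frame.
while (device->run()) {
    driver->beginScene(true, true, irr::video::SColor(255, 113, 113, 133));

    smgr->drawAll();    // draw the 3D scene
    guienv->drawAll();  // draw the GUI on top of it

    driver->endScene();
}
```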
There is no simple (or at least built-in) way to get the delta time between two rendered frames.
This is an important value! We’ll need it later, when we inject the physics engine. And Newton GD
is not the only engine requiring this value!
But it can easily be obtained with this workaround:
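The idea, as in the standard Movement example, is to query the device timer on every iteration and take the difference:

```cpp
// Measure the time between two frames using the device timer (milliseconds).
irr::u32 then = device->getTimer()->getTime();

while (device->run()) {
    const irr::u32 now = device->getTimer()->getTime();
    const irr::f32 frameDeltaTime = (irr::f32)(now - then) / 1000.f; // in seconds
    then = now;

    // ... update physics and game logic with frameDeltaTime, then render ...
}
```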
That was a short introduction to the Irrlicht engine. And that’s basically everything we will use
for the next few sections.
Let’s talk a bit about our application before we create it. In order to make the development process sequential
and as painless as possible, we need to design the application well. The design of an application, or the application architecture,
is the hardest thing to change at later stages of development. Thus it must be well thought out at the very beginning to
prevent suffering in the future.
Well, there are a number of application architecture levels:
The highest level defines which modules the whole
application will consist of and what functionality each of those modules will have.
The next level is how the modules communicate with each other, how they work together.
The lower level is the structure of each module - what classes, entities, data structures and similar things
the module will consist of.
One of the lowest, yet still very important architecture levels is how files are organized.
From the highest architecture layer’s point of view, I can advise a very simple architecture:
assume our game will have a stable, rarely changed core,
a set of assets (models, textures, sounds - any content made by artists and presented to the player)
and a bunch of scripts defining all the logic of the game - what the character looks like, how the menus are shown and how
they react to the player’s actions, how objects in the game world behave, what that world looks like, and so on.
The main benefits of such an approach are:
scripts and assets may be changed at any time
scripts and assets define what we show to the user and how the application behaves, so that no change to scripts or assets forces us to re-compile the core
we can modify the core and thus change how the game works internally (mainly for optimization purposes) without changing the overall application functionality and behaviour
we can make the core flexible enough to re-use it in future projects.
We will use the Irrlicht engine because of its simplicity. And it satisfies all our needs - it
does not need much content preparation; it provides a GUI; extending it with IrrKlang will give
us a very simple interface for sound and music playback.
We will use the Newton Game Dynamics engine to simulate physics. It is easy to use and really powerful -
you will be impressed!
Last but not least, we will use the Lua scripting language to write scripts. Lua is a lightweight
programming language and perfectly suits that goal.
One of the most beautiful parts of this tutorial will be the one on making assets. We will use
Blender 3D to create a couple of 3D models.
I also found CMake kind of user-friendly. It is not as handy as the dependency managers some
languages have (Clojure and many others come with their own), yet it makes your project a little more portable, helps to handle your
dependencies, and totally eliminates the need for all those How to configure VisualStudio for OGRE tutorials.
Just try it!
Remember the three rules of our architecture. And keeping them in mind, let’s get to some coding already!