Recently I wrote a post about functional programming techniques coming into the world of front-end, and the library I crafted as an experiment. That library, libc.js, was highly inspired by Elm and Mithril. But it suffered from two major drawbacks:

  1. components could hardly be used inside other components
  2. interaction between components was nearly impossible (or, at least, not very transparent)

What’s hidden beneath the next version of the library?

Have you ever wondered whether assembly language might still be useful nowadays? Here's the short answer: YES. When you know how your computer works (not just the processor itself, but the whole thing - memory organization, the math co-processor and so on), you may optimize your code while writing it. In this short article, I shall try to show you some use cases of optimizations which you may achieve with the help of low-level programming.

Recently I was reading through my old posts and found a gap in the article about SSE - the post did not cover some of the implementation caveats. I decided to fill this gap and re-publish a new version.

Feb 18, 2017

Functional web

In the last couple of years the functional programming paradigm has become very popular. A huge number of libraries, tools, tutorials and blog posts have appeared. Although the paradigm itself is rather old (lambda calculus was developed around 1930 and the Lisp language was introduced in 1958), its popularity blew up rapidly somewhere around 2014-2016, and that is what is happening right now. Probably one of the most powerful influences giving FP (functional programming) that thrust is web development. Since Facebook introduced React, the community has started incorporating many things from FP with React - including Redux and Immutable.js. But there are many more interesting things which were invented on this wave of FP popularity. One of them is Elm.

This is the story of how I invented yet another web framework wheel.

Dec 21, 2016

Clojure guards

Once I wanted to have something like the pretty "match" operator from Scala, but in Clojure. And since there is no such thing in Clojure out of the box, here are some alternatives I've found on the Internet.

Aug 25, 2016

Big O notation

The best big O notation explanation I've ever seen, I found on… Google Play Market! I was hanging around, looking through the suggested software, and, for some reason, I decided to install an educational application for programmers. And here's what I found…

This is a chicken. I made this 3D model in 3.5 hrs in Blender (including texturing).

Taking into account the fact that I've started learning Unity 3D, I will possibly use it in the remake of my old Shoot Them! game. Like this (early preview, made with Unity 3D in ~3 hrs):


In this section we will implement the communication layer for our application. It will handle all the requests to/from our web server. Have no worries - we will create the server application in the next section!

First resource

Let's create a Session resource. Since we have no backend part yet, we should stub the data. We'll use Angular services. That's easy: a service defines a function returning, say, an object. That object will be used every time you call the service. And you may use not only objects - you may return functions, constants or literally anything from your services.

General architecture

The first thing we need to think about is how we'll gather information about users. It's quite easy - we just need to get a request from a visitor. A request of any kind - it may be a request for an image, a file, a stylesheet or a script.

Then we'll just parse the headers of that request and save the extracted data in the database. The only problem here is: how do we get a unique key for each visitor from a request? We may use the visitor's IP address - it's the easiest way.

If you remember, we ended our coding exercises at the point where we had almost created our first Newtonian body, but did not actually have enough models.

We discussed collision shapes a bit. So let’s create one for our brand new model!

We have a nice ramp to work with. But how can we reconstruct the same shape in terms of Newton? Newton offers a set of collision shapes:

  • Sphere
  • Box
  • Cone
  • Capsule
  • Cylinder
  • Chamfer Cylinder
  • Convex Hull
  • Trimesh
Obviously, neither sphere, cone, capsule nor cylinder makes sense for us. We could use the box shape, but then we would simply ignore our inner faces (inside walls):

Box collision shape for our ramp

A bit better, but still the same situation with convex hull shape:

Convex hull collision shape for our ramp

Generally, the way we create our Newtonian body is:

  1. create collision shape
  2. create blank Newtonian body
  3. set body properties like collision shape, mass, inertia parameters, etc.
  4. store the pointer to the graphical entity for that body in the userData property

And then Newton Game Dynamics will take your body into account when processing other objects in the NewtonWorld.

Tree mesh collision shape

So we are going to use the triangle mesh shape. We will loop through all the triangles of our mesh and build its copy in the world of "physics" bodies.

To loop through all the triangles, we take the mesh's indices three at a time, look up the corresponding vertices (each index points into the list of vertices) and create a Newtonian triangle from them. Irrlicht stores vertices in one of three formats:

  1. plain vertex, represented by its three coordinates; the irr::video::S3DVertex class in Irrlicht
  2. vertex with texture coordinates; irr::video::S3DVertex2TCoords class
  3. vertex with its tangent information; irr::video::S3DVertexTangents class

All those are represented by the irr::video::S3DVertex class or its children. Moreover, we need nothing but the vertex coordinates in our case, so we may use only the base class' properties.
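Before looking at the real Irrlicht code, the iteration itself can be shown with plain standard C++. This is a simplified sketch: the vec3 struct and the two buffers below are stand-ins for Irrlicht's vertex and index arrays, not part of any library API.

```cpp
#include <cstddef>
#include <vector>

// A stand-in for a vertex position (Irrlicht would give us S3DVertex::Pos).
struct vec3 { float x, y, z; };

// Walk the index buffer three indices at a time; each triple of indices
// selects three vertices, and those three vertices form one triangle.
std::vector<vec3> collectTriangles(const std::vector<vec3> &vertices,
                                   const std::vector<unsigned short> &indices) {
    std::vector<vec3> triangles;
    for (std::size_t j = 0; j + 2 < indices.size(); j += 3) {
        triangles.push_back(vertices[indices[j]]);
        triangles.push_back(vertices[indices[j + 1]]);
        triangles.push_back(vertices[indices[j + 2]]);
    }
    return triangles;
}
```

A quad stored as four vertices and six indices, for instance, yields two triangles (six triangle corners) this way.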

The code creating the trimesh collision shape is quite simple and straightforward:

void createTrimeshShape(irr::scene::IMeshBuffer *meshBuffer, NewtonCollision *treeCollision,
                        irr::core::vector3df scale = irr::core::vector3df(1, 1, 1)) {
    irr::core::vector3df vArray[3];

    irr::video::S3DVertex *mb_vertices = (irr::video::S3DVertex *) meshBuffer->getVertices();

    irr::u16 *mb_indices = meshBuffer->getIndices();

    for (unsigned int j = 0; j < meshBuffer->getIndexCount(); j += 3) {
        int v1i = mb_indices[j + 0];
        int v2i = mb_indices[j + 1];
        int v3i = mb_indices[j + 2];

        // scale each vertex position component-wise
        vArray[0] = mb_vertices[v1i].Pos * scale;
        vArray[1] = mb_vertices[v2i].Pos * scale;
        vArray[2] = mb_vertices[v3i].Pos * scale;

        // add one triangle face to the tree collision being built
        NewtonTreeCollisionAddFace(treeCollision, 3, &vArray[0].X, sizeof(irr::core::vector3df), 1);
    }
}

We take the indices, find their vertices, create a triangle - and we're done! You may have noticed we do not actually create the collision shape here - we take it as an argument to our function. You will see why in a moment.

Now it's the body's turn! But first we need to extend our Entity class with a NewtonBody field so that we can seamlessly integrate it into our engine:

class Entity {
    scene::ISceneNode *mNode;
    NewtonBody *mBody;

public:
    Entity(scene::ISceneNode *node) : mNode(node), mBody(0) { }

    Entity(scene::ISceneNode *node, NewtonBody *body) : mNode(node), mBody(body) { }

    scene::ISceneNode *getSceneNode() const {
        return mNode;
    }

    NewtonBody *getBody() const {
        return mBody;
    }

    void setBody(NewtonBody *body) {
        mBody = body;
    }
};
And now we are ready to set our NewtonBody:

void createMeshBody(const std::string name) {
    Entity *entity = entities[name];
    irr::scene::IMeshSceneNode *node = (irr::scene::IMeshSceneNode *) entity->getSceneNode();

    NewtonCollision *shape = NewtonCreateTreeCollision(newtonWorld, 0);
    NewtonTreeCollisionBeginBuild(shape);

    irr::scene::IMesh *mesh = node->getMesh();

    for (unsigned int i = 0; i < mesh->getMeshBufferCount(); i++) {
        irr::scene::IMeshBuffer *mb = mesh->getMeshBuffer(i);
        createTrimeshShape(mb, shape, node->getScale());
    }

    NewtonTreeCollisionEndBuild(shape, 1);

    float mass = 0.0f; // zero mass makes Newton treat the body as static

    dMatrix origin;
    NewtonCollisionGetMatrix(shape, &origin[0][0]);
    NewtonBody *body = NewtonCreateDynamicBody(newtonWorld, shape, &origin[0][0]);

    dVector inertia;
    NewtonConvexCollisionCalculateInertialMatrix(shape, &inertia[0], &origin[0][0]);
    NewtonBodySetMassMatrix(body, mass, mass * inertia.m_x, mass * inertia.m_y, mass * inertia.m_z);
    NewtonBodySetCentreOfMass(body, &origin[0][0]);

    NewtonBodySetTransformCallback(body, transformCallback);
    NewtonBodySetForceAndTorqueCallback(body, applyForceAndTorqueCallback);

    NewtonBodySetUserData(body, entity);

    // remember the body on our entity as well
    entity->setBody(body);
}


There is an interesting piece here, though: we did not create the collision shape inside our createTrimeshShape method - all we do there is add new triangles to an existing shape. That's because meshes in Irrlicht are stored as a set of mesh buffers - sub-meshes which are still parts of the whole mesh. So we created one blank collision shape and filled it with the triangles of every sub-mesh.

Doing it that way keeps us from overcomplicating the task by building a composite collision made of a set of trimeshes. That would be really hard to process in real-time! And our really simple scene would run at the speed of a 5x5 battle in Unreal Tournament 3…

Looking back at our list, we should now fill in all the fields of our NewtonBody. Since we are making a static model, we set its mass to zero - that is enough for Newton to treat the body as static. I kept the rest of the code to show the other fields we need to fill in for a "usual" (dynamic) body.

So the other fields of NewtonBody are:

  1. massMatrix, which determines how the mass is spread along the body
  2. transformCallback and forceAndTorqueCallback are two mandatory fields, required by Newton
  3. userData, which will hold the pointer to the whole entity

massMatrix can be calculated automatically from the collision shape, as in our case. Without digging much into details, we will simply set it so that the mass of our body is distributed uniformly.

transformCallback is the function which will be called for our body each time it changes its position due to interaction with other bodies inside the NewtonWorld.

forceAndTorqueCallback is the function which applies forces and torques to our body. This is a bit tricky: you need to keep track of each force and torque yourself and then apply them so that they sum up into the final force influencing the body. We will talk about it later, when we deal with impulses.

So, the transformCallback:

static void transformCallback(const NewtonBody *body, const dFloat *matrix, int threadIndex) {
    Entity *entity = (Entity *) NewtonBodyGetUserData(body);
    scene::ISceneNode *node = entity->getSceneNode();

    if (!node)
        return;

    // copy Newton's 4x4 transform and apply it to the graphical node
    core::matrix4 transform;
    transform.setM(matrix);

    node->setPosition(transform.getTranslation());
    node->setRotation(transform.getRotationDegrees());
}

Nothing tricky here.

To put everything in place, let's add a sphere to our scene. The process is exactly the same, except for the collision shape creation - with primitives like a box, sphere or cylinder it is much easier than with trimeshes: you do not need to loop through any indices or vertices, just set the shape params like dimensions or radius. The body creation process is exactly the same.

void createSphereNode(const std::string name, const std::string textureFile) {
    scene::ISceneNode *node = smgr->addSphereSceneNode();

    if (node) {
        node->setMaterialTexture(0, driver->getTexture(textureFile.c_str()));
        node->setMaterialFlag(video::EMF_LIGHTING, false);
    }

    entities[name] = new Entity(node);
}

NewtonCollision *createSphereCollisionShape(scene::ISceneNode *node, float radius) {
    dQuaternion q(node->getRotation().X, node->getRotation().Y, node->getRotation().Z, 1.f);
    dVector v(node->getPosition().X, node->getPosition().Y, node->getPosition().Z);
    dMatrix origin(q, v);

    int shapeId = 0;

    return NewtonCreateSphere(newtonWorld, radius, shapeId, &origin[0][0]);
}

void createSphereBody(const std::string name, float radius, float mass) {
    Entity *entity = entities[name];
    scene::ISceneNode *node = entity->getSceneNode();

    dQuaternion q(node->getRotation().X, node->getRotation().Y, node->getRotation().Z, 1.f);
    dVector v(node->getPosition().X, node->getPosition().Y, node->getPosition().Z);
    dMatrix origin(q, v);

    int shapeId = 0;

    NewtonCollision *shape = NewtonCreateSphere(newtonWorld, radius, shapeId, &origin[0][0]);

    NewtonCollisionGetMatrix(shape, &origin[0][0]);
    NewtonBody *body = NewtonCreateDynamicBody(newtonWorld, shape, &origin[0][0]);

    dVector inertia;
    NewtonConvexCollisionCalculateInertialMatrix(shape, &inertia[0], &origin[0][0]);
    NewtonBodySetMassMatrix(body, mass, mass * inertia.m_x, mass * inertia.m_y, mass * inertia.m_z);
    NewtonBodySetCentreOfMass(body, &origin[0][0]);

    NewtonBodySetTransformCallback(body, transformCallback);
    NewtonBodySetForceAndTorqueCallback(body, applyForceAndTorqueCallback);

    NewtonBodySetUserData(body, entity);

    // remember the body on our entity as well
    entity->setBody(body);
}


Have no fear about the code duplication - we will remove it later. When you are done, you should get a picture like this one:

First completed dynamic scene

Congrats! That’s our first completed dynamic scene!

In this section we will have a short but powerful introduction to Blender. We will cover just enough model creation basics for you to handle most simple projects.

No, we will not cover animation, shaders or modifiers here - just the bare minimum to create this ramp floor for our tutorial:

The desired result

You will find a lot of keyboard shortcuts here. And this is one of the most awesome features of Blender - you can work without menus or panels! Everything you need can be done with the keyboard!

So let's dive into Blender now!

Welcome to Blender

When you open Blender, you will see a pretty image made with Blender, version information, some useful links and recent files.


To close this window, simply click outside of it. You will then see your workspace with the Default window layout (we will learn about layouts later). The workspace contains a few kickstarting items:

  • camera
  • light
  • cube


You may be confused by the last one, but you will shortly see how many cool things can be done starting with a plain cube and modifying it. Oh, and about modifying: let's switch to the Edit mode by hitting the Tab key:

Edit mode

You can exit it by hitting Tab again. In the edit mode you can manipulate the mesh's edges, vertices or faces. To switch between these, use the three buttons at the bottom of the screen:

Switching between edges, faces and vertices in edit mode

Let's choose the face editing mode. Unlike in other 3D editors, in Blender selection is done with the Right mouse button. Select one face of the cube:

Selecting items in blender

You may have noticed that the axis arrows have moved to the selected face. These are used to manipulate selected elements; they also show the orientation of the selected element. You can move the selected element by simply dragging one of the arrows. The element will move along the chosen axis only:

Moving selected elements

The same operation, movement, can be performed by hitting the G key. You can move other elements too - this will change the form of our cube:

Moving edges

Now let’s try something more complex. See the Tools panel on your left?

Tools panel

Select a face in face editing mode and click Extrude (or hit the E key). The face will be extruded and you will be able to move it freely. But usually designers move elements along some axis - this makes models more accurate. To fix the movement axis, just hit its letter while in extrude mode - X, Y or Z:

Extruding faces

Interesting fact: you may extrude vertices and edges too.

Now, let's use an even more advanced operation, which is often described later in 3D modelling tutorials. Choose the Loop Cut and Slide operation from the Tools panel - you will see nothing until you move your cursor over your model. Depending on which edge the cursor is closer to, you will see a purple rectangle looping around your model:

Loop cut

When you click the Left mouse button, you will move to the next part of this operation - sliding. Just place the new edges where you want them:

Slicing the loop cut

Now let’s create walls for our “ramp”. Create a few loop cuts alongside the ramp and we will start extruding:

Extruding one wall

Or maybe just moving faces?..

Moving vs extruding

No, that's definitely not what we want! We want walls, not a new ramp! Hmmm… But if we extrude the walls one by one, it will be inaccurate… Hold the Shift key and right-click the two neighbouring faces:

Multiple selection

Now we will work with the three elements in the same way. Hit the E key, then Z, and extrude all three walls up at the same time:

Simultaneous extrusion

Now we need two more walls to prevent our hero (the ball, if you recall the previous part) from falling off the side. Select two edges at the corner of our ramp and hit the W key. You will see a context menu like this:

Editing context menu

Click the Subdivide item, and the selected edges will be connected right in the middle:

Subdivision for two edges

You can perform that operation on faces too - that is often handy. Now, if you undo your changes with the usual Ctrl+Z (or Command+Z on Mac) and try to perform the same operation on the four opposite edges, you will see there is a redundant (in our case) edge:

4-th subdivision

You can remove it by selecting that edge, hitting X and selecting Dissolve edges. If you choose Delete edge, you will lose the neighbouring faces which were made of that edge.

Delete or dissolve?

So in the end we need to have two edges on the same line:

The needed edges

Now, switch to the Ortho View by choosing it from the View menu at the bottom of the screen, or by hitting the Num 5 key:

View menu

Your workspace should now look different:

Ortho view

Using the View menu, you may switch between different views, perpendicular to your model.

Top view

Right view

Switching between different views will not clear the selection. And this is awesome! So if you try to move the selected edges in the Right Ortho View, you will move both of them:

Selection persistence

Yeeks… They move along the Y axis only, not along the edge. But Blender easily handles that - you need to switch the coordinate system using the corresponding menu at the bottom of your screen:

Coordinate system

Use the Normal one and you will see the arrows at the selected edges changed:

Normal coordinate system

Now movement is done along the edge, just as we need:

Moving with Normal coordinate system

Try moving (yes, moving, not extruding) our edges up - they will move along the normal:

Moving edges in Right Ortho view

But if you click the mouse wheel and rotate the camera, or even switch to the Top Ortho view, you will notice that our walls have different widths:


So we need to make one wall thinner. But we should not forget about the other edges - the ones which will make the other wall for us. Undoing now is not an option… We need to move the edges. But if you move only those visible in the Top Ortho view, you will forget about the ones at the bottom and screw up the model. And selecting all those edges one by one is not an option either…

Selecting many edges manually is a pain...

Moreover, we do not even see those edges at the bottom! This is easy to fix, though: see the small button with rectangles near the vertex/edge/face switcher?

'Limit selection to visible' switcher

Click it and you will be able to select the bottom edges without the need to rotate the camera. And now we will try the circle-selection tool, which comes in handy when you need to select many elements at a time. Hit the C key and you will see a circle in the workspace. Try dragging it (left-click and drag) over the edges we need:

Circle selection

Hmmm… That's way too much… Now hold the Shift key and drag the circle over the neighbouring, redundant ones:

Unselecting elements

Now we can switch back to the Top Ortho View and successfully move our edges:

Making walls thinner

Now that we have our walls precisely set up, we can extrude the last two walls. Select the Normal coordinate system and perform the extrusion along the Z axis:

Extruding last two walls

Now we will scale our model a few times. Staying in the Edit mode, select all the faces with the A key:

Selecting everything

Then hit the S key and start entering a scale factor. That's right, just press, say, 5:

Entering factor while scaling

You can correct what you entered using the Backspace key. You can do the same thing while moving or rotating elements. This is useful when you need an operation to be really precise. But you can still use your mouse, of course.

Hint: if you scaled your model outside the Edit mode, you may find its scale, translation or rotation different from the identity values (1, 1, 1 for scale; 0, 0, 0 for position/rotation). This may cause various bugs when exporting models. To fix it, select your model in Object mode, hit Ctrl+A and select Apply Scale (or whatever you need to fix) from the pop-up menu.

Applying scale

Texturing our model

Now we need to paint our model to have something more beautiful in our application than just pitch black… stuff…

Adding textures to a model in Blender is extremely easy - you just select your model, switch to the Texture tab on the right panel and click New button:

Creating a texture

Then you pass in some params like texture size, the background color and image name - and you are done!

New texture params

But that will only add a blank texture, which you will then need to paint as you wish. Painting a texture requires your model's vertices to be synchronized with the texture, so that each vertex knows where it lies both in 3D space and on the texture image. This assignment process is called texture unwrapping or UV mapping (texture coordinates are usually called u and v instead of x and y, since those letters are already used to describe a vertex's position). This process requires one thing from you: you need to specify where Blender should "cut" your model. It is quite a simple task, but it determines how the texture will look and how easy it will be to paint.
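Numerically, UV mapping is just a change of units: (u, v) are resolution-independent coordinates in [0, 1], which a renderer maps onto concrete pixels of a W x H texture image. A minimal sketch of that mapping (the struct and function names are illustrative, not Blender or Irrlicht API):

```cpp
// A texel position on the texture image.
struct Pixel { int x, y; };

// Map resolution-independent (u, v) in [0, 1] to a pixel of a W x H image.
Pixel uvToPixel(float u, float v, int width, int height) {
    return { (int)(u * (width - 1)), (int)(v * (height - 1)) };
}
```

Because u and v stay in [0, 1], the same unwrap keeps working if you later repaint the texture at a different resolution.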

So, go to the Edit mode and select a few loops of edges:

Selecting seam edges

Selecting seam edges

Now, on the left panel, switch to the Shading/UVs tab and click the Mark Seam button:

Shading/UVs tab

This will mark the selected edges as seams, along which your model will be "cut". Have no fear - your model will not actually be cut; the seams are used for the math only.

Then, on the same panel, click the Unwrap button and select the first unwrapping method in the list:

Unwrapping method

Again, you will see no effect yet. To see something, switch the window layout in the top menu to UV Editing:

Layout switcher

Layouts available

You will see two windows - the UV/Image editor on your left and the 3D view on your right. And again, nothing interesting here… But I am not fooling you - that's just how Blender works… To see something marvelous, select everything in the 3D view:

UV-Mapped model

You will see some lines on your left. That's what you have selected, mapped onto the image plane. But there is no actual image in the UV/Image editor yet. To add one, just click the New button in the bottom menu of the UV/Image editor, or select an existing image:

Selecting background image for UV mapping

This will not change the image itself. The image will just be the background for our image editor window, nothing more. To start making miracles, switch to Texture Paint mode in the 3D view:

Texture Paint mode

And your model will change its look…


What is this pink monster?! Well, on the left panel of our 3D View there's a message saying a texture slot is missing and proposing to create one… Let's do that…

Texture slot creation

Now we are able to paint our model! See how awesome it is: you have a brush tool activated. The brush has three params:

  1. Color - this could be changed with the color circle below
  2. Radius
  3. Pressure, or Alpha

The radius can be changed by pressing the F key and moving the mouse cursor:

Brush radius changing

The pressure can be changed by pressing Shift+F and doing the same:

Brush pressure changing

And you can just paint like in… Microsoft Paint!

Just paint!!!

But if you look into the UV/Image editor, you will see… nothing! Again! ‘the hell?!


That is just a misunderstanding - you were painting on another image instead of the selected one:

Choosing image for UV/Image editor

We created a new one when we created the texture slot…

To draw in the UV/Image editor instead of the 3D View, you just need to switch its mode to Paint in the bottom menu:

Painting in the UV/Image editor

Okay, so far so good. We are able to paint our model. But there's one interesting thing: if you try to draw a straight line, you may face a situation where the line is straight on the image but curved on the model:

UV mapping mistakes

UV mapping mistakes

But that happens not everywhere - only on certain faces/edges:

Mistakes are only on certain faces

Well, that's because the UV mapping is not precise enough. If you switch to the View mode in the UV/Image editor and to the Edit mode in the 3D View, and select the whole model, you will see points in the image editor which you may drag:

Control points in the image editor

Try selecting them with Right mouse button and moving them with G:

Selecting control points

Moving control points

Yes, the texture now looks creepy, but the lines are almost straight:

Fixing UV mapping errors manually

Fixing UV mapping errors manually

Exporting our model

When you finish painting your texture, the last thing we need to do is export our model to a format Irrlicht understands. Luckily, both Blender and Irrlicht support many different formats:

Blender exporting

Blender's file dialogs look different, but have a very intuitive interface:

Blender file dialog

If you do not see the needed format in Blender, you just need to turn on the corresponding plugin:

Blender settings menu

Blender settings menu

After exporting our model to, say, the 3DS format, take a look at the directory you exported your model to:

No textures!

Where are the textures? Relax - they are in the UV/Image editor, just not saved yet. You can save the modified image with the Image -> Save menu at the bottom of the UV/Image Editor:

Saving image from UV/Image Editor

Now we have everything we need for our Newtonian sample!

Next chapter