Recently I received +10 karma on StackOverflow. I was curious which question or answer had earned it, so I clicked to check. It turned out to be a seven-year-old answer about the Floyd-Warshall algorithm. I was surprised both by my bad English back in those days and by how little value the answer carried. So I’ve revised it, and here it is – the brand-new version!
Let us have a graph described by a matrix D, where D[i][j] is the length of the edge (i -> j) (from the graph’s vertex with index i to the vertex with index j). Matrix D has the size N * N, where N is the total number of vertices in the graph, since we can reach the maximum number of paths by connecting every vertex of the graph to every other. We will also need a matrix R, where we will store the shortest paths: R[i][j] contains the index of the next vertex in the shortest path starting at vertex i and ending at vertex j. Matrix R has the same size as D.
The Floyd-Warshall algorithm performs these steps:

1. initialize the matrix R of all the paths between any two pairs of vertices in the graph with the edge’s end vertex (this is important, since this value will be used for path reconstruction)
2. for each pair of connected vertices (read: for each edge (u -> v)), and for each vertex k, check whether k forms a shorter path between u and v: if the vertex k defines two valid edges (u -> k) and (k -> v) (i.e. both are present in the graph) which together are shorter than the path (u -> v), then assume the shortest path between u and v lies through k, and set the pivot point in matrix R for the edge (u -> v) to be the corresponding pivot point for the edge (u -> k)
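As an illustration, here is a minimal Python sketch of these two steps. The names D, R and N follow the text above; using INF as a marker for a missing edge is my own assumption, not something the original answer specifies.

```python
INF = float("inf")  # assumed marker for "no edge (i -> j)"

def floyd_warshall(D):
    """D[i][j] is the length of edge (i -> j), INF if absent, 0 on the diagonal."""
    N = len(D)
    dist = [row[:] for row in D]
    # step 1: initialize R with the edge's end vertex (None when no edge exists)
    R = [[j if dist[i][j] != INF else None for j in range(N)] for i in range(N)]
    # step 2: for every pair (u, v), try every vertex k as an intermediate stop
    for k in range(N):
        for u in range(N):
            for v in range(N):
                if dist[u][k] + dist[k][v] < dist[u][v]:
                    dist[u][v] = dist[u][k] + dist[k][v]
                    # the pivot point for (u -> v) becomes the pivot for (u -> k)
                    R[u][v] = R[u][k]
    return dist, R
```

The triple loop gives the well-known O(N^3) running time, regardless of how many edges the graph has.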
But how do we read the matrix R?
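One possible answer, as a sketch: since R[u][v] stores the index of the next vertex on the shortest path from u to v, we can walk the path hop by hop until we arrive at v (the function name restore_path is my own choice for illustration).

```python
def restore_path(R, u, v):
    """Read matrix R: keep hopping to R[u][v] until we arrive at v."""
    if R[u][v] is None:
        return None  # v is not reachable from u
    path = [u]
    while u != v:
        u = R[u][v]
        path.append(u)
    return path
```

This is exactly why step 1 above initializes R with the edge’s end vertex: for a direct edge the “next vertex” is simply the destination itself.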
Some time ago I published an article about big O notation. I mentioned its source, but never published the source itself. So here it is – the original article.
Big O notation is the most widely used way to describe algorithm complexity - the execution time required, or the space used in memory or on disk, by an algorithm. In exams or interviews, you will often be asked questions about algorithm complexity in the following form: for an algorithm that uses a data structure of size n, what is the run-time complexity or space complexity of the algorithm? The answer to such questions almost always uses big O notation, such as O(n), O(n^2) or O(log n).
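To make this concrete, here is a small illustration of my own (not from the original article): two ways to search the same sorted list, with different run-time complexities.

```python
def linear_search(xs, target):
    """O(n): in the worst case, scans every element of xs."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    """O(log n): halves the sorted search range on every step."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

For a list of a million elements, the linear scan may need a million comparisons, while the binary search needs at most about twenty - that difference is what the notation captures.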
I was amused by how deep and well-designed those old ASCII-art rogue-like games are! So over the last couple of days I’ve spent numerous hours playing as a Necromancer Troll (Beat enemies! Eat enemies!) in ADOM and taking command of a dwarvish horde in Dwarf Fortress. An amazing level of detail! The most powerful graphics engine in the world (which is your own imagination)!
At work, I was given a task to implement a Gantt chart diagram to show dependencies and the order of some… let’s say, milestones. Given that this feature is in a very unstable beta in Google Charts, I thought to myself: “Why don’t I implement it on my own?” And so I tried to recall my D3 knowledge.
I also found a minimalistic but helpful example/screenshot of a Gantt chart implementation:
The challenges I faced were:
And since D3 is a data-driven library, I used map/reduce wherever possible.
Here’s what the result looked like:
The full implementation code is under the cut.
Some time ago a game called Entanglement was quite popular:
There are a few implementations of this game for Android:
But there was no such game for iOS. As I was pursuing my second M.Sc. degree, I took a course on iOS development, where we learned to use Swift and the iOS platform.
I decided to implement this game as my course project - in Swift. Originally the game was written in Swift 2 (there was no Swift 3 back then). Recently I decided to refactor the game a bit and update it to the most recent Swift version (which is Swift 3 at the moment of writing this post).
In this post I’ll describe some interesting decisions I made while creating this game.
Recently I wrote a post about functional programming techniques coming into the world of front-end development, and the library I crafted as an experiment. That library, libc.js, was highly inspired by Elm and Mithril. But it lacked two major features:
What’s hidden beneath the next version of the library?
Have you ever asked anyone whether assembly language might be useful nowadays? Well, here’s the short answer: YES. When you know how your computer works (not just the processor itself, but the whole thing - memory organization, the math co-processor and the rest), you can optimize your code as you write it. In this short article, I’ll try to show you some use cases for the optimizations you can achieve with the help of low-level programming.
Recently I was reading through my old posts and found out there is a gap in the article about SSE - the post did not cover some of the implementation caveats. I decided to fill this gap and re-publish a new version.
In the last couple of years the functional programming paradigm has become very popular. A huge number of libraries, tools, tutorials and blog posts have appeared. Although the paradigm itself is rather old (lambda calculus was developed around the 1930s and the Lisp language was introduced in the late 1950s), its popularity blew up rapidly somewhere around 2014–2016, and that’s what is happening right now. Probably one of the most powerful influences giving FP (functional programming) that thrust is web development. Since Facebook introduced React, the community has started incorporating many things from FP alongside React - including Redux and Immutable.js. But there are many more interesting things which were invented on this wave of FP popularity. One of them is Elm.
This is the story of how I ~~implemented~~ invented yet another ~~web framework~~ wheel.