Personal site of Wouter Lindenhof


Polyglot programmer advantages and disadvantages

A polyglot programmer is what I would like to call a programmer who knows many languages and is able to write software in which various languages are combined. Knowing many different languages also introduces you to many different concepts.

The reason we have so many programming languages is that each language has its own speciality. Some languages have strong typing and others weak typing. Some languages use prototyping, while others have inheritance.

Not only that, but each language has an ecosystem, and within an ecosystem there are various tools. Think of NuGet, CMake or even the bash scripts your build system might be using. Many languages have such a close relationship with a set of tools that you might not even realize half of the tools you are using.

The one tool you might want to exclude is an IDE like Visual Studio. An IDE is more a tool that hides a lot of the other tools you are using, which reduces their visibility. My opinion is that the use of an IDE is overrated, so if you know a tool mostly through an IDE, you might want to try using it without one. After all, msbuild.exe builds the project, not Visual Studio.

The above are all advantages, but being a polyglot programmer has one huge disadvantage: he is rarely seen as an expert on any subject if his co-workers have a similar level of skill in that particular subject. This is not strange, as you might instead be considered the expert in general knowledge. Personally I find that a small price to pay.

When I look at my resume I know I can do almost any kind of software development. I'm not bound to a single technology. This means that there is always a job I can do.

Filed under: Uncategorized

Why do I own a Mac?

In the past I was an active Microsoft fanboy, and when someone else had a Mac it was difficult not to make a snarky remark about it. The other operating system I had my eye on was Linux. Even though Windows and Linux seemed exact opposites, I loved playing around with both.

The only reason I never fully switched to Linux was that I kept playing with it until it broke. Of course this is mostly my own fault, but the unpolished experience stayed with me. Again, I caused this, and by no means am I saying the problem is Linux.

So how does this tie in with owning a Mac?

The Mac is based on Unix, which is also considered the ancestor of Linux. Since they are quite similar under the hood, almost everything I can do on Linux I can most likely also do on the Mac.

Steve Jobs was a man who was passionate about design, and I believe that the Mac is well designed. Like all things, it takes time to adapt to the system.

Combining the above, I came to a nice conclusion: with a Mac I have an easy-to-use system which has most if not all of the things I want from Linux. That is the reason I decided to buy a Mac, and frankly I think the high price tag is worth the experience.

Filed under: Uncategorized

Fake full-screen

Today went well. My love for DX11 continues to increase, and although SharpDX seems a lot harder to use than the C++ implementation (be aware that I'm a C++ programmer), the joy of C# balances it quite nicely.

Today I implemented something I always wanted to do: Fake full-screen.

Fake full-screen is the same as normal full-screen but with one slight alteration: You are not using the graphics card exclusively. All fake full-screen does is remove the border and maximize the form.

This has the following advantages:

  1. When debugging I can always access the debugger without losing the graphics context.
  2. Switching to and from full-screen becomes a lot faster.
  3. I don't have to wait until the computer has changed the graphics context (that black flicker when you change resolution).
  4. It is easier to simulate various screen ratios.
  5. When the game crashes I can simply use alt-escape to make the task manager visible.

The major downside, however, is that you don't use the graphics card exclusively, which means a performance drop. But since the difference compared to real full-screen is not that big, you can put switching to exclusive mode off until the rest is ready.
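In WinForms terms this only takes a couple of lines. Below is a sketch (the class and method names are my own, and the SharpDX swap-chain setup is not shown), assuming the game renders into a standard Form:

```csharp
using System.Windows.Forms;

public static class FakeFullScreen
{
    // Fake full-screen: remove the border and maximize the form.
    // The swap chain keeps presenting in windowed mode underneath,
    // so no exclusive graphics context is ever taken.
    public static void Enter(Form form)
    {
        form.FormBorderStyle = FormBorderStyle.None;
        form.WindowState = FormWindowState.Maximized;
    }

    public static void Leave(Form form)
    {
        form.FormBorderStyle = FormBorderStyle.Sizable;
        form.WindowState = FormWindowState.Normal;
    }
}
```

Because no display-mode switch happens, toggling between Enter and Leave is practically instant, which is exactly where the advantages above come from.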

Filed under: SpaceMayhem

It was just one word…

Sometimes the bug that ruins my entire night is just one word, and for some reason I find that funny. It's like a crossword puzzle where you only have to find one more word, but you can't find it.

The bug in question was the following C# code:

public class CrossWordPuzzle {
    private string word;

    public void SetDefault() {
        string word = "Oh crap..."; // Declares a new local "word"; doesn't set the member variable.
    }
}
I'm a C++ programmer, and for me the above would generate an error since "word" is defined twice. C# however only gives a warning that the variable is never used. The reason I don't like this is that if you do use the member variable (say, in another function) you will never notice the problem above. It was only because I was at a dead end that I decided to look at the build output.

Anyway, the code has now been switched to DirectX 11 and I finally understand how buffers and shaders work. It feels as if DirectX 11 is easier and makes a lot more sense than DirectX 9, but as always it will take a bit of time before I'm used to it and have internalized the features.

Tomorrow the work will move on to repairing the scene graph and the GUI. Until then I'm stuck rendering cubes.

Filed under: SpaceMayhem

Becoming serious about the project

Last year I was in a Ludum Dare together with a few friends, and in the end we decided to try to create an indie game from scratch. After all: we have the skills. We have the technology.

Sadly we didn't make a great start, mostly because none of us had time and we had to focus on our day-to-day jobs. So starting this week, I have decided to put no more than 8 hours a day into my day job and to spend at least an hour every evening working on our project.

In addition I have made myself another promise: I will try to provide updates on a regular basis using my blog. It will be a good reason to start using my blog again.

The project is called SpaceMayhem.

Filed under: SpaceMayhem

How to clean up

Today I came across this:

Personally I thought this was awesome, because that is usually my preferred method of cleaning: just put everything outside the room and then move things back in. Often the things I don't bring back into the room are the things that end up in the dumpster. This is not only how I clean up a room, but also how I clean up code. :)

Filed under: Uncategorized

Fake it until you make it

At the office we are promoting test driven development, which so far seems to pay off. For those who don't know what test driven development (hereafter TDD) is, I will provide a quick introduction. TDD basically means that before you write any code, you write the test. So if you are writing a calculator, you first write a test that calculates and only then do you start writing the actual "calculator" code. The advantage is that when you need to change something later, you simply run all the tests and see if everything still succeeds. No more debugging just to check that you didn't break anything.
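As a concrete sketch (the Calculator class here is hypothetical, not code from our office), the test below is the part that gets written first, and only then the Add method; I use a plain assertion instead of a test framework to keep it self-contained:

```csharp
using System;

public class Calculator
{
    // Written only after the test below existed and failed to compile.
    public int Add(int a, int b) => a + b;
}

public static class CalculatorTests
{
    // The test comes first: it defines the behavior we want.
    public static void Add_ReturnsSumOfBothNumbers()
    {
        var calculator = new Calculator();
        int result = calculator.Add(2, 3);
        if (result != 5)
            throw new Exception($"Expected 5 but got {result}");
    }

    public static void Main()
    {
        Add_ReturnsSumOfBothNumbers();
        Console.WriteLine("All tests passed.");
    }
}
```

From now on, any change to Calculator is one test run away from being verified.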

This seems all fine when you are working on something simple, but when you start doing something complex that involves networking, multithreading or anything else you have no control over, it becomes a pain. Sadly I was working on some network code that didn't always seem to work, which I only noticed when I started running the application in a for loop. OK fine, I was working with UDP and threading so there was a good chance of failure, but how was I going to fix it?

For starters I just increased the number of tests in the application itself (before, I used the for command (MSDN) to do the same). Every test related to network code would run 10 times. To my surprise only a few failed. Running all those tests took only 10 seconds, which done by hand would have taken at least a minute or so. At this point I had proof that something was wrong, but I still had no idea what, so how was I going to test for that?

The first thing I did was abstract away the socket layer, as that would remove the UDP failure mode. Just have two sockets that send a command to each other: if the command is received the test succeeds, and if it fails I can go back to the drawing board. Luckily all the tests passed, which brought me to the conclusion that the problem was not in the socket part of the network code. The question remained how I would test that the problem was somewhere else. The UDP sockets worked (2000 tests in 4 seconds), so it must be something else.

And here is the answer: fakes. We had already abstracted away the socket, so why not replace it with a data structure that we can link to another copy of itself? That way, when it must send data it will just call the dataReceived handler of the other one. Now the server receives data from its socket the exact moment the client sends it. Now that I had reduced my scope, it was time to test it.
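A sketch of what such a linked fake could look like (FakeSocket, Peer and DataReceived are illustrative names, not our actual code):

```csharp
using System;

// Illustrative sketch: two fakes linked to each other so that Send on
// one immediately raises DataReceived on the other. No real network
// (and therefore no UDP packet loss or timing) is involved.
public class FakeSocket
{
    public FakeSocket Peer { get; set; }
    public event Action<byte[]> DataReceived;

    public void Send(byte[] data)
    {
        // The peer "receives" the data the exact moment we send it.
        Peer?.Deliver(data);
    }

    private void Deliver(byte[] data) => DataReceived?.Invoke(data);
}

public static class Demo
{
    public static void Main()
    {
        var client = new FakeSocket();
        var server = new FakeSocket();
        client.Peer = server;
        server.Peer = client;

        server.DataReceived += data =>
            Console.WriteLine($"Server got {data.Length} bytes");

        client.Send(new byte[] { 1, 2, 3 }); // prints "Server got 3 bytes"
    }
}
```

Because delivery is synchronous and deterministic, any test failure on top of this fake points at the protocol or threading code, not at the transport.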

VS2010 parallel stacks

2000 tests later, only 98% of the checks had passed. Now I could argue that 98% is not bad, and that I might get away with adding resend functionality to cover up the UDP losses, but this test proved there was still something else wrong, and because of the randomness there was a good chance it was in the threading part. Because of the way it was designed I was not able to abstract away any further, so the only thing left was to grab the debugger, step through the code, and try to figure out why exactly a race condition occurred. And frankly, here VS2010 saved me. The parallel stacks window showed me a lot. In the end it turned out that I was adding something to the wrong list, which then did not raise the right wake-up event, causing the worker thread to continue sleeping.

At the end of the day all 8000 tests were passing with no further problems. This was the first time I was working with fakes, and frankly they are probably the reason why I did not just decrease the resend delay (which would have made the symptoms go away without fixing the race).

Knowing for certain that everything works and is tested (and protected by the tests) allows me to focus on my next task without having to worry about breaking something. I will get a warning the moment it happens.



Debugging is bad

When you encounter a bug, what do you do? You debug. The last few months I have been busy getting the hang of Test Driven Development (hereafter TDD), and I love it.

TDD is all about making super small steps in which you do the following:

  1. Write a test (which won't compile since you haven't written the code yet). Once you have written the test you will not touch it unless you absolutely must.
  2. Make it compile as quickly as possible. Forget about good design; all we care about is getting the code to compile.
  3. Make it link. Don't even bother writing the correct implementation, just write a single line which returns a value. Once this step is completed the test can be run, although it will most likely fail.
  4. Make the test succeed, again as quickly as possible. If the function adds two numbers, just return the right result to make the test pass.
  5. Refactor (without modifying the test).
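The steps above can be sketched for a hypothetical Add function (my own illustration); the commented-out version is the intermediate step that just hard-codes the answer to make the first test pass:

```csharp
using System;

public static class TddSteps
{
    // Steps 3 and 4: the quickest thing that makes the test pass is
    // to hard-code the expected answer:
    // public static int Add(int a, int b) => 5;

    // Step 5: refactor into the real implementation; the test stays green.
    public static int Add(int a, int b) => a + b;

    public static void Main()
    {
        // Step 1: the test, written before any implementation existed.
        if (Add(2, 3) != 5)
            throw new Exception("Add(2, 3) should be 5");
        Console.WriteLine("Test passed");
    }
}
```

The point of the hard-coded version is not to be clever; it proves the test itself works before any real logic exists.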

Since I have been using TDD I have noticed that the number of bugs has dropped a lot, and that when I encounter something that doesn't work I prefer writing a new test over having to debug.

This got me thinking: why do I even debug? It is time consuming, I often lose track of what is happening (recursive functions are a pain) and there is nothing that protects me from reintroducing the bug (a so-called regression).

Granted, TDD is not easy, especially when the bug is caused by concurrency (threading, networking) or by an external component (GUI, graphics, database), in which case you are often forced to debug. But for the rest it is easy.

The coolest thing was when I ran the tests for a list implementation, certain that my change to the delete code would break the insert code. The tests however told me everything worked as expected. So although TDD costs time when creating code (10%~20% more than without), here it even saved me time. TDD's real strength however shows when you need to maintain code (which is most likely 90% of the time).

So with all the above, I have been teaching myself to no longer inspect variables and to let the tests do the debugging for me. Why do it manually if it can be done automatically?



Hexagons versus quads in games

Just some notes I had written.

| 7 | 8 | 9 |   
| 4 | * | 6 | 
| 1 | 2 | 3 |

Normally moving from "*" to 7, 9, 1 or 3 would cost 2 movement points.

+---+   +---+
| 7 +---+ 9 | 
+---+ 8 +---+
| 4 +---+ 6 |
+---+ * +---+
| 1 +---+ 3 |
+---+ 2 +---+

In a hexagon moving from "*" to 1,2,3,4,6 or 8 would cost only 1 movement point. Moving to 7 or 9 would still cost 2 movement points.

A normal quad has only 4 adjacent cells. A hexagon has 6.


| A | D | G | J | M | P | S |
| B | E | H | K | N | Q | T |
| C | F | I | L | O | R | U |


+---+   +---+   +---+   +---+
| A +---+ G +---+ M +---+ S |
+---+ D +---+ J +---+ P +---+
| B +---+ H +---+ N +---+ T |
+---+ E +---+ K +---+ Q +---+
| C +---+ I +---+ O +---+ U |
+---+ F +---+ L +---+ R +---+
    +---+   +---+   +---+    

Shortest path from B to T in quads is B->E->H->K->N->Q->T
Shortest path from B to T in hexagons is B->(D|E)->(G|H|I)->(J|K|L)->(M|N|O)->(P|Q)->T

Even though the cost is the same, the possible hexagon routes are not the shortest if drawn geometrically.

The reason for this is that horizontal movement is always done in steps of 1.0 unit. Moving
vertically however is trickier, because steps can be either 1.0 (for example D->E) or 0.5 (A->D).

Moving from A to E, however, is always two steps; in hexagons: A->(B|D)->E.

Moving from A to I, however, is 4 steps in quads but 3 steps in hexagons (A->(B|D)->(E|H)->I).

Using coordinates, A would be [0,0] (in both quad and hex) while "I" would be [2,2].
Moving from A to I in a hexagon, however, lets you visit D, which is [1, 0.5], and from there
move to H, which is [2, 1]. These are both steps with a (Manhattan) distance of 1.5, where a quad step is always limited to 1.0.

So hexagons allow steps with a distance of either 1.0 (vertical movement) or 1.5 (horizontal movement).

So a hexagon doesn't only modify the allowed movement and cost, but because of its inherent properties it
also modifies the coordinate system (as adjacent nodes are at different distances).

This means that when moving from A to U (whose coordinates are the same in both quad and hexagon)
the total distance covered is [6, 2.0]. Covering the horizontal movement first, we go:

[1,0.5]  => A->D   [1,0.5]
[1,-0.5] => D->G   [2,0.0]
[1,0.5]  => G->J   [3,0.5]
[1,0.5]  => J->N   [4,1.0]
[1,0.5]  => N->Q   [5,1.5]
[1,0.5]  => Q->U   [6,2.0]
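The step table above can be checked with a small sketch (my own illustration, using the coordinates from the notes):

```csharp
using System;

// Sketch of the offset-column hex grid from the notes above.
// Columns are 1.0 apart; alternating columns sit half a row
// offset, so D (column 1) is at [1, 0.5] relative to A at [0, 0].
public static class HexWalk
{
    public static void Main()
    {
        // The A -> D -> G -> J -> N -> Q -> U walk as per-step offsets.
        var steps = new (double dx, double dy)[]
        {
            (1, 0.5),   // A -> D
            (1, -0.5),  // D -> G
            (1, 0.5),   // G -> J
            (1, 0.5),   // J -> N
            (1, 0.5),   // N -> Q
            (1, 0.5),   // Q -> U
        };

        double x = 0, y = 0;
        foreach (var (dx, dy) in steps)
        {
            x += dx;
            y += dy;
        }

        Console.WriteLine($"Final position: [{x}, {y}]"); // [6, 2]
    }
}
```

Summing the offsets lands exactly on U at [6, 2.0], confirming the table.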
Filed under: Uncategorized

Testing WordPress scheduling

One of the things I like about WordPress is the fact that I can schedule my posts. That is, until it breaks.

I had a post scheduled for 12:00 today, but when I checked at home I noticed that it hadn't been published yet. It even said that it had missed its schedule. Calling the cron job a few more times didn't do anything, so this is just a quick post to check whether it works now.

Update @ 2012-02-02 00:06: It seems to be working just fine :)

Filed under: Uncategorized