
Accepted answer

After a bit of tweaking I managed to run your program and obtain the results for size=8. For debugging purposes, add these two lines in runParallel:

readLine()
system.shutdown()

When the program finishes, press Enter; this causes the shutdown method to be invoked and your program will exit.

I'm running on an 8-core i7 2.2 GHz machine, so your results may vary.

Regarding performance, this is a rather inefficient solution, and here is why: at each step in the backtracking process, that is, for each tried partial solution, you are creating an actor and running a simple loop which creates more actors for the next proposed solutions.

Now, creating an actor in Akka is pretty fast, but I dare say that in your case the overhead of creating an actor is bigger (probably even by an order of magnitude) than the actual work it does. I haven't tested this, but it's likely.

The number of analysed partial solutions here is exponential in the size of the board. That is, you are creating an exponential number of actors; that is a lot of overhead, and you certainly lose any advantage gained by computing the solutions in parallel.
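To get a feel for the numbers, here is a small standalone sketch (plain Scala with illustrative names, not your program) that counts the safe partial placements in the backtracking tree; in your current design, each counted node corresponds to one spawned actor:

```scala
// Returns true if placing a queen at `col` in the next row conflicts
// with none of the queens already placed (head = most recent row).
def safe(queens: List[Int], col: Int): Boolean =
  queens.zipWithIndex.forall { case (c, i) =>
    c != col && math.abs(c - col) != i + 1  // same column / same diagonal
  }

// Number of safe partial placements in the tree (root excluded).
// In the actor-per-partial-solution design, this is the actor count.
def nodes(n: Int, queens: List[Int] = Nil): Long =
  (0 until n).map { col =>
    if (safe(queens, col))
      1L + (if (queens.length + 1 < n) nodes(n, col :: queens) else 0L)
    else 0L
  }.sum

// Number of complete solutions, for comparison.
def solutions(n: Int, queens: List[Int] = Nil): Long =
  if (queens.length == n) 1L
  else (0 until n).map { col =>
    if (safe(queens, col)) solutions(n, col :: queens) else 0L
  }.sum
```

Even with pruning, the node count dwarfs the solution count (for size=8 there are 92 solutions), so almost every actor you create exists only to rule out a dead end.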

How can we make this better? Well, for one, we can create far fewer actors.

Let's create a fixed pool of QueenWorker actors (the same number as the number of cores in your machine) and place them behind a SmallestMailboxRouter.

When your workers check the current partial solution in the Work message being processed, instead of creating new actors they will send the new proposed solutions to the router, and the router will dispatch these new Work messages uniformly among its routees.

A worker actor will now process many partial solutions instead of just one, as it did until now. That's a much better actual_work/akka_housekeeping ratio than before.
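The same idea can be sketched without Akka at all, which makes the structure easy to see: a fixed pool of workers pulls Work items from a shared queue instead of spawning one actor per partial solution. This is my own illustrative sketch (the names Work, solveWithPool, and the queue-based dispatch are assumptions, not your code), using only the JDK concurrency primitives:

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.{AtomicInteger, AtomicLong}

// One unit of work: a partial solution (head = most recent row).
case class Work(n: Int, queens: List[Int])

def solveWithPool(n: Int, poolSize: Int): Long = {
  val queue   = new ConcurrentLinkedQueue[Work]()
  val pending = new AtomicInteger(0) // Work items submitted but not yet finished
  val found   = new AtomicLong(0)    // complete solutions seen so far

  def safe(queens: List[Int], col: Int): Boolean =
    queens.zipWithIndex.forall { case (c, i) =>
      c != col && math.abs(c - col) != i + 1
    }

  def submit(w: Work): Unit = { pending.incrementAndGet(); queue.add(w) }

  submit(Work(n, Nil)) // root of the backtracking tree

  val threads = (1 to poolSize).map { _ =>
    new Thread(() => {
      // Run until every submitted Work item has been fully processed.
      while (pending.get() > 0) {
        val w = queue.poll()
        if (w == null) Thread.onSpinWait() // queue momentarily empty
        else {
          if (w.queens.length == n) found.incrementAndGet()
          else (0 until n).foreach { col =>
            if (safe(w.queens, col)) submit(Work(n, col :: w.queens))
          }
          // Children are submitted before this decrement, so `pending`
          // cannot drop to zero while work is still outstanding.
          pending.decrementAndGet()
        }
      }
    })
  }
  threads.foreach(_.start())
  threads.foreach(_.join())
  found.get()
}
```

Note this keeps the same fine-grained Work messages as before; only the per-partial-solution actor creation is gone. The granularity problem discussed next still applies.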

Can we do even better than this? I think we can, if you model your problem a bit differently. In the queens problem, and in backtracking in general, you are exploring a tree of partial solutions. At each step of your algorithm, when you generate more partial solutions from existing ones, you are branching the tree. The problem with your approach is that every branch in the algorithm is also a branch in the concurrency flow: each new partial solution becomes a new Work message to be processed by another actor. This is fine in principle, but if you add up the numbers you'll see that it is a lot of overhead which won't gain you any time advantage.

That is because the granularity of your Work tasks is very small: you are doing very little work between sending messages, and this hurts performance. You can make your workers explore a much larger chunk of the partial-solution tree before generating new Work messages.

For example, if you have an 8-core CPU and the problem has size=8, then you are better off creating 8 workers and giving worker K the task of computing the solutions which have the queen on column K of the first row. Now each worker executes only one Work task which represents roughly 1/8 of the total work; this will give much better results.

Regarding a solution with futures: you can certainly do that, but I think the running time will be roughly the same as the actor solution with the fixed pool of actors. I encourage you to try, though, since I might be wrong.
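For reference, the column-per-worker split can be expressed directly with futures from the Scala standard library. This is a sketch under my own naming (safe, countFrom, solveParallel are illustrative, not your program): each future explores the subtree with the queen on column K of the first row, and the results are summed.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

// Conflict check against queens already placed (head = most recent row).
def safe(queens: List[Int], col: Int): Boolean =
  queens.zipWithIndex.forall { case (c, i) =>
    c != col && math.abs(c - col) != i + 1
  }

// Sequentially count all solutions reachable from a partial placement.
def countFrom(n: Int, queens: List[Int]): Long =
  if (queens.length == n) 1L
  else (0 until n).map { col =>
    if (safe(queens, col)) countFrom(n, col :: queens) else 0L
  }.sum

// One coarse-grained future per first-row column: n tasks in total,
// each roughly 1/n of the work, instead of one message per tree node.
def solveParallel(n: Int): Long = {
  val perColumn = (0 until n).map(k => Future(countFrom(n, List(k))))
  Await.result(Future.sequence(perColumn), Duration.Inf).sum
}
```

The blocking Await here is just to keep the sketch self-contained; in a real program you would map over the combined future instead.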

