A case study in choosing algorithms

Sunday, Aug 14, 2016
Categories: Engineering, Physics
Tags: Algorithms


This past year, I have been crunching data from dark matter simulations. Data sizes can get pretty large in scientific computing. As I write this post, I have a script running on 3.8 TB (that's right, 3,800 gigabytes) of cosmic particles. At these levels one starts thinking about parallelizing computations. And therein lay my dilemma, and a soon-to-be-learned lesson.

Around the time I began working on this, I was taking an algorithms course. And we had just learned about the bin packing problem. It involves figuring out the best way to fit an arbitrary number of objects of different sizes into a set of bins so that the fewest bins are used. And it is a hard problem to solve quickly (bin packing is NP-hard). To get around the difficulty, computer scientists come up with heuristics: shortcuts that give a "good enough" answer. And one heuristic is the first-fit algorithm. Essentially: find the first bin with enough space, dump the object in, repeat.
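
To make that concrete, here is a minimal sketch of first-fit in Python. The item sizes and bin capacity are made up purely for illustration.

```python
# Minimal first-fit sketch: place each item into the first bin that still
# has room, opening a new bin when none fits. Sizes and capacity are
# illustrative, not from any real dataset.

def first_fit(sizes, capacity):
    """Return a list of bins, each a list of item sizes."""
    bins = []        # contents of each bin
    remaining = []   # spare capacity of each bin
    for size in sizes:
        for i, space in enumerate(remaining):
            if size <= space:          # first bin with enough space
                bins[i].append(size)
                remaining[i] -= size
                break
        else:                          # no bin fits: open a new one
            bins.append([size])
            remaining.append(capacity - size)
    return bins

if __name__ == "__main__":
    items = [4, 8, 1, 4, 2, 1, 7, 3]
    print(first_fit(items, capacity=10))
    # -> [[4, 1, 4, 1], [8, 2], [7, 3]]
```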

Now, this bin packing problem looked a lot like my parallelization challenge. I had to divide my data among multiple processes to be computed independently. And so, with the enthusiasm that comes only with applying newfound knowledge, I wrote some code. It read the data, assigned each chunk a "size" based on the big-O complexity of my calculations, and used first-fit to bin the chunks for each process. Sweet, right?
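
Roughly, the pre-partitioning looked something like the sketch below. The chunk sizes, the O(n log n) cost model, and the slack factor here are illustrative stand-ins, not the real script.

```python
import math

# Rough sketch of the pre-partitioning idea: estimate each chunk's cost
# from an assumed O(n log n) model, then first-fit the chunks into one
# bin per process. All numbers here are illustrative assumptions.

def estimated_cost(n_particles):
    # Assumed per-chunk cost model for the calculation.
    return n_particles * math.log2(n_particles)

def partition_for_processes(chunk_sizes, n_procs):
    costs = [estimated_cost(n) for n in chunk_sizes]
    capacity = sum(costs) / n_procs * 1.1       # 10% slack per bin
    bins = [[] for _ in range(n_procs)]
    remaining = [capacity] * n_procs
    for chunk, cost in zip(chunk_sizes, costs):
        for i in range(n_procs):                # first-fit over the bins
            if cost <= remaining[i]:
                bins[i].append(chunk)
                remaining[i] -= cost
                break
        else:                                   # nothing fits: least-loaded bin
            i = max(range(n_procs), key=lambda j: remaining[j])
            bins[i].append(chunk)
            remaining[i] -= cost
    return bins

if __name__ == "__main__":
    sizes = [120_000, 80_000, 300_000, 50_000, 220_000, 90_000]
    for rank, group in enumerate(partition_for_processes(sizes, n_procs=3)):
        print(f"process {rank}: {group}")
```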

Not so fast. I noticed that my processes were still taking very different amounts of time to finish, and the disparity was more than I was happy with. There were several possible reasons. One, big-O complexity rarely translates into proportionate running times, since it ignores constant factors and things like OS background processes. Two, I was not accounting for file I/O and lower-order terms while "sizing" the data. In practice, unlike in theory, these things matter. So what could I do?

My solution in the end was quite simple, and it became my biggest lesson in parallel computing. Instead of pre-partitioning the data for each process, I kept it all in a single pool. As soon as a process was free, it took one package from the pool and did its thing. The processes naturally divvied up the work. All was well! Now this might be parallel 101 for many, but I was so caught up in the fancy new algorithm I had learned that I did not pause to see if a plebeian approach, so to speak, might work better.
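
In Python, the pooled version can be as short as a multiprocessing.Pool. This is a minimal sketch with a placeholder process_chunk function and made-up file names, not my actual analysis code.

```python
from multiprocessing import Pool

# Minimal sketch of the pooled approach: workers pull the next package
# from a shared queue as soon as they are free, so the work divides
# itself up without any pre-partitioning. The function body and file
# names below are placeholders.

def process_chunk(path):
    # Stand-in for the real work: read one package of particles,
    # run the calculation, return a result.
    ...

if __name__ == "__main__":
    chunk_paths = [f"particles_{i:04d}.dat" for i in range(256)]
    with Pool(processes=8) as pool:
        for result in pool.imap_unordered(process_chunk, chunk_paths):
            pass  # collect or accumulate results here
```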

Now, as my pizza arrives and my script chugs through terabytes of data, I can watch Netflix in peace knowing that my pet processes are making the best of their time.


