An Example of Experimental Refactoring
Tagged: learning software legacy-code
A while ago I had a chance to refactor a piece of a client's code. I want to write about that experience because it turned out to be a cool experiment. The code in question is a pure Python implementation of an R-tree that's used to find points near a user-defined location. I tried to swap this piece of code out for the Python rtree module, a wrapper around libspatialindex. My hypothesis was that the C-based library would be faster than the custom Python implementation already in place.
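To make the problem concrete, here's a brute-force sketch of the query both implementations answer (the names and data are hypothetical - this is not the client's code). An R-tree exists precisely so you can answer this kind of question without scanning every point:

```python
import math

def nearest_points(points, location, k=3):
    """Return the k points closest to location by straight-line distance.

    Brute-force stand-in for the spatial-index query: it sorts every
    point, which an R-tree avoids by pruning whole regions of space.
    """
    return sorted(points, key=lambda p: math.dist(p, location))[:k]

points = [(0, 0), (5, 5), (1, 1), (9, 9), (2, 3)]
print(nearest_points(points, (1, 2), k=2))  # [(1, 1), (2, 3)]
```

For a handful of points the brute-force version is fine; the index only pays off once the dataset and query volume grow.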
The experiment ended in failure. After spending 4.5 hours on the switch, I found that the new code, while only about a quarter of the length (~80 lines versus ~270) and simpler, was actually a lot slower - about 40 seconds slower in real-world usage. A failure indeed, but it produced two good things: a workout for my legacy-code muscle and a number of tests. Here's how it all went down.
I started the experiment by figuring out what the current implementation was doing. Reading the code thoroughly and finding everything that called it was a good first step. Then I wrote tests that used a small set of real data to describe how the pure Python code behaved. Later on I used these tests to confirm that the new implementation was a faithful substitute for the old one. I had a good set of tests down before moving on to manual testing, which revealed the need for two more test cases. Throughout this process I came to understand what the code was doing and produced an "artifact" in the form of tests that will be useful in the future. This part of the experiment was all about exploration through testing as well as producing a valuable artifact.
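A characterization test like the ones above can be sketched with the standard unittest module (the function, data, and case names here are all hypothetical stand-ins, not the client's code). The point is to pin down what the existing code does today, so a replacement can be checked against the exact same expectations:

```python
import math
import unittest

# Hypothetical stand-in for the legacy lookup. The real code wraps a
# hand-rolled R-tree, but characterization tests only care about the
# observable inputs and outputs, not the internals.
def find_nearby(points, location, k):
    return sorted(points, key=lambda p: math.dist(p, location))[:k]

# A small, fixed slice of real-looking data keeps the tests fast and
# the expected results easy to verify by hand.
SAMPLE = [(0.0, 0.0), (1.0, 1.0), (2.0, 3.0), (5.0, 5.0)]

class CharacterizationTests(unittest.TestCase):
    """Describe the current behavior before changing anything."""

    def test_returns_k_closest(self):
        self.assertEqual(find_nearby(SAMPLE, (1.0, 2.0), k=2),
                         [(1.0, 1.0), (2.0, 3.0)])

    def test_empty_input(self):
        # The kind of edge case manual testing tends to flush out
        self.assertEqual(find_nearby([], (0.0, 0.0), k=3), [])

if __name__ == "__main__":
    unittest.main()
```

Once the new implementation is wired in behind the same interface, the same test file runs unchanged against it - that's what makes the tests a reusable artifact rather than throwaway scaffolding.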
After that I punched out the new code and got it to pass the tests. I still had to figure out whether the new code was faster, so I started benchmarking it with cProfile. I applied the profiler high up in the call stack, above the new code, and made it dump a pstats report file. I did this a few times while switching between the old and new code, then looked through the pstats files, sorting by cumtime and tottime, to see which implementation was faster. Turns out the shabby old implementation took about 100 seconds to run while the new one took around 140 seconds. It looked like the bridge between the application and the rtree library is inefficient in this scenario - just inserting the data into the rtree takes around 10 seconds. The next step would be to refactor the code that prepares and loads data, but...
Time ran out, so I left these changes on their own little branch. If I ever come back to this, I'd love to translate the current Python implementation into Cython and see if that yields a worthwhile improvement.
- Testing is super useful for exploring and understanding legacy code.
- Adding tests leaves you with valuable assets. It'll be easier to change or improve this code later down the road.
- Benchmarking is a source of truth. No major code changes should be done without some sort of benchmarking.
- Changing code is like an experiment - structure it as one: create a hypothesis, try to prove it, and learn from it.