A Crash Course in Analyzing Memory Usage in Ruby

Tom Wey

While working on a Rails app recently, a question came up around the right way to implement a feature, and whether the impact on memory usage was something to be concerned about. In looking into the question, I learned a little about analyzing memory usage in Ruby. In this article we’ll look through some of the possibilities.

The app handles a number of legacy URL paths, redirecting each one to a configurable location. This is implemented as custom middleware. When the app boots we load the YAML configuration, which maps legacy paths to new paths, into a hash (both keys and values are strings). When a request arrives, we look up its path in the hash; if an entry exists, we return a 301 (Moved Permanently) with a Location header pointing at the new path. Otherwise the request is passed on to the app. Initially there were a few hundred redirects and everything worked great. Then we learned that rather than a few hundred mappings we needed to handle many thousands.
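A minimal sketch of how such a middleware might look (the class name and config path here are illustrative assumptions, not our exact code):

require "yaml"

class LegacyRedirects
  # Load the legacy-path-to-new-path mappings once, at boot
  MAPPINGS = YAML.load_file("./config/mappings.yml").freeze

  def initialize(app)
    @app = app
  end

  def call(env)
    if (new_path = MAPPINGS[env["PATH_INFO"]])
      # Known legacy path: redirect permanently to its new location
      [301, { "Location" => new_path }, []]
    else
      # Anything else: pass the request on to the app
      @app.call(env)
    end
  end
end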

Clearly the size of the hash increasing would have an impact on memory usage, but what would the impact be, and would the increase be problematic?

Measuring memory allocation

At this point I realised I wasn’t too sure how to estimate the size of the hash in memory. It’s fairly trivial to hop into irb and use String#bytesize to sum up the combined size of all the keys and values in the hash:

require "yaml"

mappings = YAML.load_file("./config/mappings.yml")

...

mappings.inject(0) do |size, (key, value)|
  size + key.bytesize + value.bytesize
end
# => 222701

This gives us 222,701 / 1024 ≈ 217 KiB.

However, this is only part of the picture. Ruby is storing our data as a hash, which has some additional overhead (for example, the mapping from each key to its value).
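For a rough sense of that overhead, Ruby’s built-in objspace extension can report the memory allocated for the hash object itself. A quick sketch (note that ObjectSpace.memsize_of is implementation-dependent, and doesn’t include the key/value strings the hash references):

require "objspace"

# Bytes allocated for the hash structure itself (entries, buckets),
# excluding the strings it points to
ObjectSpace.memsize_of(mappings)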

Enter the memory_profiler gem. We can use this to take a more detailed look at the memory allocated for our hash:

require "memory_profiler"
require "yaml"

mappings = nil

# Assign inside the block so the hash outlives the report and counts as retained
report = MemoryProfiler.report do
  mappings = YAML.load_file("./config/mappings.yml")
end

report.pretty_print

This gives us a bunch of detail, and some totals:

Total allocated: 1757291 bytes (18221 objects)
Total retained:  622866 bytes (7269 objects)

I defined mappings outside the report block and assigned to it inside the block because I wanted the hash to show up in the “Total retained” figure, distinct from the “Total allocated” figure - I care about the long-lasting memory footprint, not the memory used temporarily while reading and parsing the YAML file. The memory_profiler documentation describes the “Total retained” value as:

Retained: long lived memory use [and object count] retained due to the execution of the code block.
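To make the distinction concrete, here’s a contrived sketch: the scratch strings below only count towards “allocated”, while the string still referenced after the block counts as “retained”:

require "memory_profiler"

kept = nil

report = MemoryProfiler.report do
  1_000.times { "scratch" * 10 } # allocated, then eligible for garbage collection
  kept = "persistent" * 10       # still referenced after the block returns
end

report.pretty_print # “Total retained” reflects only the string assigned to kept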

That “Total retained” figure of 622,866 bytes is interesting. 622,866 / 1024 ≈ 608 KiB. That’s quite a bit larger than our combined keys/values figure, but it still doesn’t sound like all that much.

What does this look like in the context of our Rails app?

That number isn’t very interesting on its own. How does it compare to the memory footprint of our entire Rails app?

Heroku gives us some basic metrics about our app: from the graphs provided, it looks like memory usage levels off at around 145 MB. In my experience it’s a pretty common pattern for a Rails app’s memory usage to climb after a restart and plateau once it has handled some requests. The dynos we’re using on Heroku have 512 MB of memory available.

Let’s use another useful gem to confirm this: derailed_benchmarks. This gives us a number of commands to profile the memory usage of a Rails app.

I ran our app in production mode locally and used the following command to track memory usage:

bundle exec derailed exec perf:mem_over_time

This throws a bunch of requests at our app and profiles memory usage over time. It showed again that our app’s memory usage increases after starting but soon levels off. This time the reported figure was around 135 MiB, similar to the value we got from Heroku, particularly once you account for units: Heroku reports MB (megabytes), while derailed_benchmarks reports MiB (mebibytes), so 135 MiB works out to roughly 142 MB.
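derailed_benchmarks has other commands worth exploring too. For example, this one reports how much memory each gem consumes at require time, which is handy for spotting heavyweight dependencies:

bundle exec derailed bundle:mem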

Conclusion

It would appear that the memory impact of growing our mappings hash to several thousand entries isn’t significant in the context of our Rails app. That was my assumption when the question arose, but it’s always nice to confirm with some numbers (and learn about a few new tools in the process).