Turning a specific solution into a general tool

In a previous article I explored how I make putting work into the background easier. The goal is to be able to decide whether to run some procedure immediately or to run it asynchronously via a background job. Here it is:

class SomeProcess
  class Later < Que::Job
    def run(*args)
      options = args.pop # get the hash passed to enqueue
      ::SomeProcess.new(*args).send(options['trigger_method'])
    end
  end

  def initialize(some_id)
    @some_id = some_id
    @object = User.find(some_id)
  end
  attr_reader :some_id

  def later(which_method)
    Later.enqueue(some_id, 'trigger_method' => which_method)
  end

  def call
    # perform some long-running action
  end
end

This works well for this class, but eventually we'll want to use this same idea elsewhere. You can always copy and paste, but we know that's a short-term solution.

Generalizing your solution

Here's how we can take a solution like this and turn it into a more general tool.

First, I like to come up with the code that I want to write in order to use it. Deciding what code you want to write often means deciding how explicit you want to be.

Do we want to extend or include a module? How should we specify that methods can be performed later? Do we need to provide any default values?
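To make those options concrete, here are two call sites we might aim for. Both are hypothetical sketches at this point, and the process_later class-level DSL in the second is entirely made up for illustration:

class SomeProcess
  include ProcessLater   # option 1: mix in a `later` instance method
end

class SomeProcess
  extend ProcessLater    # option 2: a class-level DSL (hypothetical)
  process_later :call    # declare up front which method can run in the background
end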

I often begin answering these questions for myself but end up changing my answers as I think through them or even coming up with additional questions.

Here's where I might start...

Often, I want my code to clearly opt in to using a library like the one we're building. It is possible, however, to automatically make it available.

We can monkey-patch Class, for example, so that all classes have this ability. But implicitly providing features to a vast collection of types lacks the clarity that developers of the future will want when reading through or changing our code.
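For contrast, here's a rough sketch of that implicit route (mixing into Object, since every class ultimately inherits from it). It's exactly what we're avoiding:

# Not recommended: every object in the system now appears to support
# background processing, whether or not that makes any sense for it.
class Object
  include ProcessLater
end

# Even a throwaway struct now claims to support `later`:
Struct.new(:x).new(1).respond_to?(:later) # => true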

Although I want to be able to make any class have the ability to run in the background, I'll want to explicitly declare that it can do that.

class SomeProcess
  include ProcessLater
end

And here's what we would need inside that module:

module ProcessLater
  def later(which_method)
    Later.enqueue(some_id, 'trigger_method' => which_method)
  end

  class Later < Que::Job
    def run(*args)
      options = args.pop # get the hash passed to enqueue
      ::SomeProcess.new(*args).send(options['trigger_method'])
    end
  end
end

We've just moved some code around but have mostly left it the way it was before. This means we'll have a few problems.

Overcoming specific requirements in generalizations

Our ProcessLater module has a direct reference to SomeProcess, so the next class where we attempt to use this module will have trouble.
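To see the trouble concretely, imagine a hypothetical second class picking up the module as-is:

class ComplexCalculation
  include ProcessLater
end

# Enqueuing from ComplexCalculation still builds a ::SomeProcess in the
# worker, because the job's run method hard-codes that constant.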

We need to tell our background job what class to initialize when it's pulled from the queue.

That means our Later class needs to look something like this:

class Later < Que::Job
  def run(*args)
    options = args.pop # get the hash passed to enqueue
    class_to_run.new(*args).send(options['trigger_method'])
  end
end

Every class that uses ProcessLater would need to provide that class_to_run object. We could initialize our Later class with an argument, but often with background libraries we don't have control over the initialization. Typically, all we get is a method like run or perform which accepts our arguments.
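One hypothetical workaround is to pass the class name through the job's own arguments and constantize it inside run. It works, but it pushes bookkeeping onto every caller, so we won't take that route here:

class Later < Que::Job
  def run(*args)
    options = args.pop # get the hash passed to enqueue
    klass = Object.const_get(options['class_to_run']) # e.g. "SomeProcess"
    klass.new(*args).send(options['trigger_method'])
  end
end

# ...which forces every enqueue call to carry the extra name along:
Later.enqueue(123, 'class_to_run' => 'SomeProcess', 'trigger_method' => 'call')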

We'll get to solving that in a minute, but another problem is that every queued job would show up as the ProcessLater::Later class. Even though we're creating a generalized solution, I'd rather see something more specific in my queue.

I like to keep related code as close together as is reasonably possible and that leads me to nesting my background classes within the class of concern.

Here's an example of what jobs I'd like to see in my queue: SomeProcess::Later, ComplexCalculation::Later, SolveHaltingProblem::Later.

Seeing that data stored for processing (along with any relevant arguments) would give me an idea of what work would need to be done.

Creating a custom general class

We can create those classes when we include our module.

module ProcessLater
  def later(which_method)
    Later.enqueue(some_id, 'trigger_method' => which_method)
  end

  class Later < Que::Job
    # create the class-level accessor to get the related class
    class << self
      attr_reader :class_to_run
    end

    # create the instance method to access it
    def class_to_run
      self.class.class_to_run
    end

    def run(*args)
      options = args.pop # get the hash passed to enqueue
      class_to_run.new(*args).send(options['trigger_method'])
    end
  end

  def self.included(klass)
    # create the unnamed class which inherits what we need
    later_class = Class.new(::ProcessLater::Later)

    # assign the @class_to_run variable to hold a reference to the including class
    later_class.instance_variable_set(:@class_to_run, klass)

    # name the class we just created
    klass.const_set(:Later, later_class)
  end
end

There's a lot going on there, but the end result is that when you include ProcessLater you'll get a background class of WhateverYourClassIs::Later.
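A quick sanity check of what that included hook produces, assuming the module above is loaded:

class SomeProcess
  include ProcessLater
end

SomeProcess::Later.superclass    # => ProcessLater::Later
SomeProcess::Later.class_to_run  # => SomeProcess
SomeProcess::Later.name          # => "SomeProcess::Later"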

But there's still a problem. Our later method enqueues the background job with Later, which resolves to ProcessLater::Later, but we need it to be specifically the class we just created under the including class.

We want the instance we create to know how to enqueue itself to the background. All we need to do is provide a method which will look for that constant.

module ProcessLater
  def later(which_method)
    later_class.enqueue(some_id, 'trigger_method' => which_method)
  end

  private

  # Find the constant in the class that includes this module
  def later_class
    self.class.const_get(:Later)
  end
end

Knowing how to initialize

There's still one problem: initializing your object.

The later method knows about that some_id argument. But not all classes are the same and arguments for initialization are likely to be different.

We're going to go with a "let's just make it work" kind of solution. Since we need to know how to initialize, we can just put those arguments into an @initializer_arguments variable.

class SomeProcess
  include ProcessLater

  def initialize(some_id)
    @initializer_arguments = [some_id]
    @object = User.find(some_id)
  end
  attr_reader :initializer_arguments
end

Now, instead of keeping track of an individual value, we track an array of arguments. We can alter our enqueueing method to use that array instead:

module ProcessLater
  def later(which_method)
    later_class.enqueue(*initializer_arguments, 'trigger_method' => which_method)
  end
end

Our general solution will now properly handle specific class requirements.

The downside is that we now have an implicit dependency on the initializer_arguments method. There are ways to enforce that contract so it can't fail silently, but for the sake of this article and the goal of creating this generalized library: that'll do.
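If we wanted a small guardrail, one hypothetical option (not part of the library above) is a default initializer_arguments that fails loudly with instructions instead of with a NoMethodError deep inside a worker:

module ProcessLater
  # Fallback for classes that forget to set @initializer_arguments:
  # raise a descriptive error rather than a bare NoMethodError.
  def initializer_arguments
    @initializer_arguments ||
      raise(NotImplementedError,
            "#{self.class} must set @initializer_arguments in #initialize " \
            "to use ProcessLater#later")
  end
end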

I'll cover handling requirements like providing initializer_arguments in a future article, but for now: how would you handle this? What impact would code like this have on your team?

A thin slice between you and the background

With that change, we're enqueueing our background jobs with the right classes.

Here's the final flow:

  1. Initialize your class: SomeProcess.new(123)
  2. Run later(:call) on it
  3. That enqueues the job details, storing the job class as SomeProcess::Later
  4. The job is picked up and a SomeProcess::Later instance is initialized
  5. The job object in turn initializes SomeProcess.new(123) and runs your specified method: call
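In code, that whole flow is the two-line choice we were after, assuming a Que worker is running to pick up the job:

process = SomeProcess.new(123)

process.call          # run the work right now, synchronously
process.later(:call)  # or enqueue SomeProcess::Later to run #call in the background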

That gives us a very small generalized layer for moving work into the background. What you'll see in your main class files is this:

class SomeProcess
  include ProcessLater

  def initialize(some_id)
    @initializer_arguments = [some_id]
    @object = User.find(some_id)
  end
  attr_reader :initializer_arguments

  def call
    # perform some long-running action
  end
end

And here's the final library:

module ProcessLater
  def later(which_method)
    later_class.enqueue(*initializer_arguments, 'trigger_method' => which_method)
  end

  private

  def later_class
    self.class.const_get(:Later)
  end

  class Later < Que::Job
    # create the class-level accessor to get the related class
    class << self
      attr_accessor :class_to_run
    end

    # create the instance method to access it
    def class_to_run
      self.class.class_to_run
    end

    def run(*args)
      options = args.pop # get the hash passed to enqueue
      class_to_run.new(*args).send(options['trigger_method'])
    end
  end

  def self.included(klass)
    # create the unnamed class which inherits what we need
    later_class = Class.new(::ProcessLater::Later)

    # name the class we just created
    klass.const_set(:Later, later_class)

    # assign the class_to_run variable to hold a reference
    later_class.class_to_run = klass
  end
end

We'll explore more about building your own tools in the future. I put a lot of effort into explaining what you can do with Ruby in the Ruby DSL Handbook, so check it out, and if you have any questions (or feedback), just hit reply!

Certainly some will say "Why aren't you using ActiveJob?" or "Why aren't you using Sidekiq?" or "Why aren't you ...."

All of those questions are good ones.

The way your team works, interacts, and builds its own tools has a lot more to do with answering those questions than my reasons do. Many different decisions can be made, but it's important for your whole team to understand which questions are the most important to answer.

Follow up this article with the next in the series: Building a tool that's easy for your team to use