This is the second post in what is now indisputably a “series” of articles about how we build things at TaskRabbit. Over time, we have internalized all kinds of lessons and patterns, and we are taking the time to write some of the key things down.
Building upwards from the last article about models, let’s talk about how we use them. In the Rails ORM, models represent rows in the database. But what code decides what to put in those rows, which rows should be created, and so on? In our architecture, that role is filled by service objects.
Overall, we default to the following rules when using models in our system:
- Models contain data/state validations and methods tied directly to them
- Models are manipulated by service objects that reflect the user experience
Something has to be fat
In the beginning, there was Rails and we saw that it was good. The world was optimized around the CRUD/REST use cases. Controllers had `update_attributes` and such. When there was more logic or nuance, it was put in the controller (or the view).
There was a backlash of sorts against that and the new paradigm was “Fat model, skinny controller”. The controllers were simple and emphasized workflow instead of business logic. Views were simpler. That stuff was put in the models. Model code was easier to reuse.
Thus arose the great “God Model” issue. Fat is one thing, but we had some seriously obese models. Things like `Task` simply had too much going on. We could put stuff in mixins/concerns, but that didn’t change the fact that there was tons of code that could all be subtly interacting.
Business logic has to go somewhere. For us, that somewhere is in service objects.
In our architecture, we call them “Operations” and they extend a class called `Backend::Op`. This more or less uses the subroutine gem.
Much can be read about what it means to be a service object, but here is my very scientific (Rails-specific) definition.
- Allows declaration of what fields (input parameters) it uses
- Reflects an action in the system like “sign up a user” or “invoice a job”
- Does whatever it needs to do to accomplish the action when asked, including updating or creating one or more models
Here’s a simplified example:
```ruby
class InvoiceJobOp < ::Backend::Op
  include Mixins::AtomicOperation # all in same transaction

  field :hours
  field :job_id

  validates :job_id, presence: true
  validate :validate_hour       # hours given
  validate :validate_assignment # tasker is assigned
  # ... other checks

  def perform
    create_invoice!   # record hours and such
    generate_payment! # pending payment transaction
    appointment_done! # note that appointment completed

    if ongoing?
      schedule_next_appointment! # schedule next if more
    else
      complete_assignment!       # otherwise, no more
    end

    enqueue_background_workers!  # follow up later on stuff
  end
end
```
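To make the pattern concrete outside of Rails, here is a minimal, hypothetical sketch of an Op-style base class in plain Ruby. It is not the actual subroutine gem API; the names (`MiniOp`, `InvoiceOp`, the `field`/`validate` macros) are illustrative, but it shows the essential moves: declare fields, run validations, then `perform`.

```ruby
# Hypothetical sketch of an Op base class (not the real subroutine gem API).
class MiniOp
  class ValidationError < StandardError; end

  # Declare an input parameter the op accepts.
  def self.field(name)
    fields << name
    attr_accessor name
  end

  def self.fields
    @fields ||= []
  end

  # Register a validation method to run before perform.
  def self.validate(method_name)
    validations << method_name
  end

  def self.validations
    @validations ||= []
  end

  attr_reader :errors

  def initialize(params = {})
    @errors = []
    self.class.fields.each { |f| send("#{f}=", params[f]) }
  end

  # Run validations, then perform; raise if any validation failed.
  def submit!
    self.class.validations.each { |m| send(m) }
    raise ValidationError, errors.join(", ") if errors.any?
    perform
    true
  end
end

# A toy op built on the sketch above.
class InvoiceOp < MiniOp
  field :hours
  validate :validate_hours

  attr_reader :invoice

  def validate_hours
    errors << "hours must be positive" unless hours.to_i > 0
  end

  def perform
    @invoice = { hours: hours } # stand-in for real model writes
  end
end
```

Usage is then `InvoiceOp.new(hours: 3).submit!`, which validates the input before any work happens.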
No Side Effects
When we followed the “Fat Model” pattern, we got what we wanted: business logic lived in methods on one of the models. Sometimes callbacks were added. These were the most dangerous because they happened on every `save`, often adding unnecessary side effects.
With the service object approach, it is very clear what is happening for the action at hand. When you “invoice a job,” you create the invoice, generate the payment, mark the appointment done, schedule the next appointment, and enqueue some background workers.
This certainty leads to less technical and product debt. When something new needs to be added to this action, it’s very clear where it goes.
The `Op` class above does several model manipulations to the related invoices, appointments, etc. Some of these call `save` on something, and those `save` calls could raise errors. If any of them does, the `Op` itself will inherit the error, and it will be available on the `op.errors` method just like on a normal model.
This also allows chaining of operations. If there were a `ScheduleAppointmentOp` class, it could be used in the `schedule_next_appointment!` method above. If it raised an error, the error would propagate up to the parent `Op`.
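One simple way to model that chaining, sketched in plain Ruby (all class and method names here are hypothetical, not from the actual codebase): the parent op invokes a child op and adopts its errors, so callers only ever inspect the top-level op.

```ruby
# Hypothetical child op: validates its own input.
class ChildScheduleOp
  attr_reader :errors

  def initialize
    @errors = []
  end

  def submit!(params)
    errors << "start_time is required" unless params[:start_time]
    errors.empty?
  end
end

# Hypothetical parent op: delegates one step to the child op
# and propagates the child's errors up to itself.
class ParentInvoiceOp
  attr_reader :errors

  def initialize
    @errors = []
  end

  def submit!(params)
    schedule_next_appointment!(params)
    errors.empty?
  end

  private

  def schedule_next_appointment!(params)
    child = ChildScheduleOp.new
    child.submit!(params)
    errors.concat(child.errors) # child failures surface on the parent
  end
end
```

With this shape, `ParentInvoiceOp.new.submit!({})` fails and the caller reads the child’s message off the parent’s `errors`, without knowing a child op was involved.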
Generally speaking, we have one `Op` per controller action that declares what it expects and manipulates the backend data as needed.
Here is a typical example from one of our controllers.
```ruby
class JobsController < ApplicationController
  def confirm
    @job = Job.find(params[:id])
    authorize @job, :confirm?                 # authorization

    op = Organic::JobConfirmOp.new(current_user)
    op.submit!(params.merge(job_id: @job.id)) # perform action

    render :show                              # render template
  end
end
```
An action will typically do the following:
- Load a resource
- Authorize that the user is allowed to do the action
- Perform the action with an operation (other things are in place to render an error if the op fails)
- Render a template
Note that this is clearly not a typical RESTful route. We’ve found that becomes less important when using this pattern. When the controllers are just wiring things up and are all five lines or less, it feels like there is more flexibility.
It can be summed up like this: wherever the fat (the real work) is, that is where the focus should be. For us, that’s not the controller, because of service objects; the real work maps one-to-one with the use case. If more logic lived in the controllers, we’d probably stay closer to the standard index, show, etc. methods for the same focus reason.
So we have pushed everything out closer to the user experience and away from the models. But what if something is needed in a few pieces of the experience?
A few ways we have done sharing:
- Ops can use a lower-level Op or another type of class, as noted above.
- Ops can include a mixin with the shared behavior.
- We can add a method to an applicable model. We tend to do this for simple methods that interpret the model data to answer a commonly-asked question or produce a commonly-used display value.
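The mixin approach can be sketched in plain Ruby as follows. The module and op names here are illustrative, not from the actual codebase; the point is simply that two ops share one behavior without either knowing about the other.

```ruby
# Hypothetical mixin holding behavior shared across several ops.
module Mixins
  module NotifiesUser
    # In a real app this might enqueue a mailer or push notification;
    # here we just record the messages so the behavior is observable.
    def notify_user!(message)
      (@notifications ||= []) << message
    end

    def notifications
      @notifications ||= []
    end
  end
end

class ConfirmJobOp
  include Mixins::NotifiesUser

  def perform
    notify_user!("Your job is confirmed")
  end
end

class CancelJobOp
  include Mixins::NotifiesUser

  def perform
    notify_user!("Your job was cancelled")
  end
end
```

Each op stays focused on its own use case, while the shared notification behavior lives in exactly one place.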
We have found that this approach provides a more maintainable and, overall, more successful way of building Rails apps.