How to use Solid Queue in Rails with Active Job (Discussion)

What's great about the puma plugin is the cost savings when running on something like Heroku. Only one dyno is needed for small apps, including a queue, which was quite expensive before because of the extra worker dyno and the extra Redis add-on.
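
For anyone curious, enabling the plugin is a single line in config/puma.rb, per the Solid Queue README:

# config/puma.rb
plugin :solid_queue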

Since it adds so many tables, I was wondering if it works with a separate database, like a second SQLite. Maybe even a SQLite database just for the queue while the actual application runs on PostgreSQL. That could be useful in situations with throw-away jobs.

Reply

Yeah, that's an excellent point about Heroku! So long as the memory usage can safely live inside the dyno, I guess.

You can set up Solid Queue to use a separate database by defining this config, which will be applied to all the SolidQueue models:

# Use a separate DB for Solid Queue
config.solid_queue.connects_to = { database: { writing: :solid_queue_primary, reading: :solid_queue_replica } }
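
For that config to resolve, config/database.yml needs matching entries. A minimal sketch, assuming a shared default anchor and hypothetical database names:

# config/database.yml (sketch)
production:
  primary:
    <<: *default
    database: my_app_production
  solid_queue_primary:
    <<: *default
    database: my_app_queue
    migrations_paths: db/queue_migrate
  solid_queue_replica:
    <<: *default
    database: my_app_queue
    replica: true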
Reply

Have you had success using the puma plugin? I'm running into an error (NoMethodError on @log_writer = launcher.log_writer) with the plugin, but if I start solid_queue as a separate process (this is running via hatchbox.io), it works well.

Any assistance? (Rails 7.1.3.3, SolidQueue 0.3.1)

Reply

Hey Chris, great screencast about Solid Queue.

I'm curious about how to run Solid Queue using bin/dev and the Procfile.dev file.
Do I just throw the following into the Procfile.dev:

web: bundle exec rake solid_queue:start

If not, what would you recommend?

Reply

Yep, just call it "jobs:", because "web:" would be the name for the Rails server.
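
For reference, a Procfile.dev along those lines (the web command here is just the usual default, adjust as needed):

web: bin/rails server -p 3000
jobs: bundle exec rake solid_queue:start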

Reply

Cool. Thanks

Reply

I am still using DelayedJob in my app, even though it is a bit long in the tooth. It, like Solid Queue, is database-agnostic. But the main reason I haven't moved off of it is that there is a related gem (delayed_job_groups_plugin) that lets you kick off multiple jobs and, when they all complete, run some other process. It doesn't appear that SQ offers this out of the box.

Reply

We've been using Delayed::Job for almost 10 years in our application which is hosted on Heroku. We run multiple queues to separate our different job types. We've even added some additional columns to our Delayed::Job table that allow us to track the progress of certain jobs as they execute. Aside from the ability to run Solid Queue as a Puma plugin, which we wouldn't use anyway, what advantages does it have over Delayed::Job?

Reply

I also come from a background of using delayed_job (DJ) for the last decade or so. I never switched to the Redis-based solutions (Sidekiq, etc.) because I always needed the ability to persist jobs in case something crashed. I know the Redis-based solutions eventually added this, but at some point it wasn't worth the effort to refactor everything for minimal gain.

I have a VERY customized implementation of DJ on some of my projects. I've added functionality like unique jobs, recurring cron-style jobs, etc. DJ doesn't play very well with ActiveJob (AJ) out of the box, so I ended up making my own adapter to plug DJ into AJ. That let me do things like insert a value into a custom column when a job was enqueued, so a DB unique index could ENSURE there was only one instance of a given job scheduled or running.
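
For anyone wanting to try something similar, here's a rough sketch of that pattern. The column name, job class, and key are all hypothetical, not built into DJ or AJ:

# Hypothetical migration: a nullable uniqueness_key column with a unique index.
# NULLs don't collide, so jobs without a key are unaffected.
class AddUniquenessKeyToDelayedJobs < ActiveRecord::Migration[7.1]
  def change
    add_column :delayed_jobs, :uniqueness_key, :string
    add_index :delayed_jobs, :uniqueness_key, unique: true
  end
end

# On enqueue, write the key into the custom column; the unique index then
# rejects a second copy of the same job at the database level.
begin
  Delayed::Job.enqueue(NightlyReportJob.new, uniqueness_key: "nightly-report")
rescue ActiveRecord::RecordNotUnique
  # A job with this key is already scheduled or running, so skip it.
end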

SolidQueue (SQ) has been on my radar for a few months since I first heard about it, and I'm excited they are implementing a lot of the features I had to build on top of DJ.

As for what benefits SQ may have over DJ: in my opinion, the biggest is probably stability and scalability. I run a fairly large number of jobs with DJ, and I do run into situations where DJ processes fail and locks aren't given up properly, or a DJ process eats up memory, etc. There's also better integration with AJ. While DJ "does" work with AJ, it's clunky, ESPECIALLY around job failures and retries. Both AJ and DJ have failure/retry mechanisms, but they tend to conflict with one another. It's hard to reason about what happens to a job that fails, because both DJ and AJ do stuff to it. I had to update lots of pieces of DJ to increase stability at scale.

A lot of the stability and scalability issues come from the single delayed_jobs table and so many processes querying that one table for everything. If you don't optimize those SQL calls, you can hit deadlocks and weird situations. And if you don't clean the table out periodically, it can quickly become slow. The instant I saw that SQ had created multiple tables for specific purposes, I knew they had probably addressed these issues by keeping the "queued-up" table as small as possible, because that's the table where all the querying takes place. The smaller that table is, the faster everything will be. So off-loading failed, distant-scheduled, or other types of jobs to separate tables actually makes a lot of sense.
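
As an illustration of why table size matters: Solid Queue's README mentions it relies on FOR UPDATE SKIP LOCKED where the database supports it, so workers claiming jobs from a small "ready" table never wait on rows other workers hold. A rough ActiveRecord sketch of that polling pattern (the model and columns here are stand-ins, not SQ's actual internals):

# Claim a batch of ready jobs without blocking on rows other workers hold.
ReadyExecution.transaction do
  batch = ReadyExecution.order(:priority, :job_id)
                        .limit(100)
                        .lock("FOR UPDATE SKIP LOCKED")
                        .to_a
  # ...mark the batch claimed elsewhere, then delete these rows so the hot table stays small.
end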

Reply

Jumpstart Pro uses Concurrent.physical_processor_count to determine the number of Puma workers in production. If I run Solid Queue on the same server instance, should I modify the Puma config to subtract the number of Solid Queue workers? Does each Solid Queue worker use a physical processor?
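
Something like this in config/puma.rb is what I had in mind (SOLID_QUEUE_WORKERS is a hypothetical env var, not a Jumpstart Pro setting):

require "concurrent"

# Reserve some physical cores for Solid Queue, keeping at least one Puma worker.
reserved = ENV.fetch("SOLID_QUEUE_WORKERS", "1").to_i
workers [Concurrent.physical_processor_count - reserved, 1].max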

Reply

Now that we have SolidCache and SolidQueue replacing Redis for caching and jobs, do we have to keep Redis as a dependency? Turbo still relies on it for ActionCable / Turbo Streams stuff, right? Can it run without it?

Reply

It depends on whether you're using those features, but you might still want Redis. There is also a PostgreSQL adapter you can use for ActionCable.
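
For example, switching ActionCable from Redis to PostgreSQL is a one-line adapter change:

# config/cable.yml
production:
  adapter: postgresql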

Reply

Hello Chris,

We can have many dispatchers and workers. In the same way, can we have multiple supervisors?

Reply