Dan LeGrand


Activity

I read the article and it has very useful information. I'm curious how the GitLab dev team would recommend approaching a problem I'm currently working on.

I'm adding STI to an events table to track historical/audit data such as UserCreatedEvent, UserUpdatedEvent, etc. The primary benefit of using STI in this situation is that I need to be able to display all the events for a given record in a single listing, with custom messages for each event. Using STI, I can make a single query (user.events) and have a list of all the events, with custom messages displayed based on the class loaded via the type column.
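Roughly what I mean, as a minimal sketch (the class and column names here are hypothetical examples, not production code):

class Event < ApplicationRecord
  # The events table has a `type` column for STI and a polymorphic
  # reference to the record the event describes
  belongs_to :subject, polymorphic: true
end

class UserCreatedEvent < Event
  def message
    "User was created"
  end
end

class UserUpdatedEvent < Event
  def message
    "User was updated"
  end
end

class User < ApplicationRecord
  has_many :events, as: :subject
end

# One query returns every event type, each rendering its own message
user.events.order(:created_at).each { |event| puts event.message }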

If I were to separate each event into its own table, I would have multiple tables, one per resource (i.e., a user_events table, an account_events table, etc.). And to display the events for a given record in a single sorted list, I would have to join several tables and then possibly order the records in-memory to get them in chronological order.

I could probably create a view that implements this for me, but that adds a decent amount of complexity IMO.

I'm just curious if these are the types of situations where GitLab would utilize STI, or how the GitLab dev team would handle implementing a feature like this. I definitely agree with the article overall that STI should not be your first option, but I'm curious if they would ever recommend it for specific situations.

I do agree about the scale issue, though; a single STI table does get large. I've been toying with the idea of using multiple databases, where I archive data after a certain point to the other databases and my primary database is streamlined for only recent data. Definitely adds some complexity with multiple databases though!

I also come from a background of using delayed_job (DJ) for the last decade or so. I never switched to the Redis-based solutions (Sidekiq, etc.) because I always needed the ability to persist jobs in case something crashed. I know the Redis-based solutions eventually added this, but at some point it wasn't worth the effort to refactor everything for minimal gain.

I have a VERY customized implementation of DJ on some of my projects. I've added functionality like unique jobs, recurring cron-style jobs, etc. DJ doesn't play very well with ActiveJob (AJ) out of the box, so I actually ended up making my own adapter for DJ to plug into AJ. That let me do things like insert a value into a custom column when a job was enqueued, so I could trigger DB unique indexes (for scheduled jobs where I need to ENSURE there was only 1 instance of the job scheduled or running).
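The core of the unique-jobs trick looks roughly like this (a minimal sketch; the uniqueness_key column is my own addition, not part of delayed_job):

# Hypothetical migration adding a unique index to the delayed_jobs table
class AddUniquenessKeyToDelayedJobs < ActiveRecord::Migration[6.1]
  def change
    add_column :delayed_jobs, :uniqueness_key, :string
    add_index :delayed_jobs, :uniqueness_key, unique: true
  end
end

# The custom adapter writes a deterministic key into that column on
# enqueue; enqueuing a duplicate raises ActiveRecord::RecordNotUnique,
# which can be rescued and ignored, so at most one copy of the job can
# be scheduled or running at a time.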

SolidQueue (SQ) has been on my radar for a few months since I first heard about it, and I'm excited they are implementing a lot of the features I had to build on top of DJ.

With regards to what benefits SQ may have over DJ: in my opinion the biggest is probably stability and scalability. I run a fairly large number of jobs with DJ, and I do run into situations where DJ processes fail and locks aren't given up properly, or a DJ process eats up memory, etc. Also, better integration with AJ: while DJ "does" work with AJ, it's clunky, ESPECIALLY around job failures and retries. Both AJ and DJ have failure/retry mechanisms, but they tend to conflict with one another. It's hard to reason about what happens to a job that fails, because both DJ and AJ do stuff to it. I had to update lots of pieces of DJ to increase stability at scale.

A lot of the stability and scalability issues are due to the single delayed_jobs table and so many processes querying that single table for everything. If you don't optimize those SQL calls, you can hit deadlocks and weird situations. And if you don't clean it out periodically, it can quickly become slow. The instant I saw that SQ had created multiple tables for specific purposes, I knew they had probably addressed these issues by ensuring the "queued-up" table is always as small as possible, because that's the table where all the querying takes place. The smaller that table is, the faster everything will be. So off-loading failed, far-future scheduled, or other types of jobs to separate tables actually makes a lot of sense.

When I deal with currency amounts, I always store the amount in the DB as cents and then use some custom methods to generate "_in_dollars" methods that I use on my forms, because most people prefer to write "$1,234.56" in dollars instead of "123456" in cents.

For my simple currency uses, I think I prefer formatting methods, but if I needed to extract an actual object that I could run methods on from the value, then serialize would make more sense to me.

I've used this model concern for years to handle currency that I store as cents in the DB but my users want to interact with it as dollars:

# app/models/concerns/formatted_currency_in_dollars.rb
module FormattedCurrencyInDollars
  extend ActiveSupport::Concern

  included do
    def self.formatted_currency_in_dollars(model_attribute)
      model_attribute = model_attribute.to_s

      if model_attribute.ends_with?("_in_cents")
        model_attribute = model_attribute.delete_suffix("_in_cents")
      end

      # Getter (cents / 100.0)
      define_method("#{model_attribute}_in_dollars") do
        cents = send("#{model_attribute}_in_cents")
        return nil if cents.blank?
        value = (cents.to_f / 100.0)
        ("%.2f" % value) # Force 2 digits to right of decimal
      end

      # Setter (dollars * 100.0)
      define_method("#{model_attribute}_in_dollars=") do |value|
        # Strip formatting ($, commas), convert to cents, and round to
        # avoid float precision issues; blank input stays nil instead of
        # being coerced to 0
        cents = value.blank? ? nil : (value.to_s.gsub(/[^0-9.\-]/, "").to_f * 100).round
        send("#{model_attribute}_in_cents=", cents)
      end
    end
  end
end

It assumes you have a DB column whose name ends with "_in_cents", and it will create the getter/setter for the "_in_dollars" version. I prefer to be very explicit and keep the "_in_cents" suffix on my DB column so there's no confusion about what value is in the DB. If you were just looking at the DB values and saw "price" with a value of "200", is that "2.00 dollars" or "200 dollars"? Being explicit in situations like this avoids any confusion.
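Usage looks something like this (Invoice and price_in_cents are hypothetical names):

class Invoice < ApplicationRecord
  include FormattedCurrencyInDollars

  formatted_currency_in_dollars :price_in_cents
end

invoice = Invoice.new
invoice.price_in_dollars = "$1,234.56"
invoice.price_in_cents   # => 123456
invoice.price_in_dollars # => "1234.56"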

Posted in Bundler's New Ruby Version File Option Discussion

Quick note: the documentation for some Ruby version managers (such as chruby, which I use) instructs users to store the string literal "ruby-x.x.x" (with the "ruby-" prefix) in .ruby-version. If you get an error like:

[!] There was an error parsing `Gemfile`: Illformed requirement ["ruby-3.x.x"]. Bundler cannot continue.

it means you probably have that prefix in there. Just update your .ruby-version to contain ONLY the version, without the "ruby-" prefix.

I use Heroku, so I had to use the fallback File.read() approach, which really isn't much of a fallback IMO; it only requires a few more characters.
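For reference, the two alternatives look something like this in the Gemfile (a sketch; double-check that your Bundler version supports the file: option):

# Gemfile

# Either the new Bundler option, reading the version from .ruby-version...
ruby file: ".ruby-version"

# ...or the fallback for platforms that don't support it yet (e.g. Heroku
# at the time of writing)
ruby File.read(".ruby-version").strip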

I can't believe I never thought about this! So simple once you see it. Now I have a singular source of truth for my Ruby version :)

Chruby readme reference:
https://github.com/postmodern/chruby?tab=readme-ov-file#default-ruby

Posted in Adding a Highlight Button to Trix Discussion

Awesome! Would actually love to see more videos on extending Trix. I've been using Froala Editor for my production applications, because Trix felt like it was lacking a lot of core functionality. My use case was having an editor capable of writing emails with styles users were familiar with from Gmail and Outlook, and Trix simply didn't have what I needed; the docs for customizing it seem a bit sparse. Features like image resizing, font/background colors, etc. I'd love to use the quasi-official Rails HTML editor, but it needs some work before it can compete with other rich editors IMO.

Posted in Custom Turbo Stream Actions Discussion

Great episode! Is there any convention or recommendation for managing the JS code instead of throwing it all in the application.js? Something like app/javascript/stream_actions/console_log.js, etc, and then having an import statement that loads the entire folder?

Posted in Extracting Reusable Base Classes In Ruby Discussion

Is there any benefit to using the constant for the base URL instead of just using a method and overriding it in each inherited client? Looking forward to the generator episode! I've been playing around with Rails generators recently to build out my own Rails template so I can stop copying/pasting code every time I start a new project.
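For clarity, this is the alternative I have in mind (class names here are hypothetical, not from the episode):

class ApplicationClient
  def base_url
    raise NotImplementedError, "Subclasses must define base_url"
  end
end

class GithubClient < ApplicationClient
  def base_url
    "https://api.github.com"
  end
end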

Posted in Docker Basics for Rails Discussion

Note to others: if you're running Rails 7.0.4.1 with dartsass-rails for your SASS compiling, you may run into segmentation violation (SEGV) errors when running the dartsass:watch command in your Procfile.dev (called via bin/dev by default in Rails 7+) on ALPINE Linux base images.

I tested the standard ruby:3.1.2 image vs ruby:3.1.2-alpine; the standard image works fine but Alpine has SEGV errors. Not sure if there's a missing Alpine dependency that would resolve it, but since I was working on local development, I didn't investigate too much and just went with the base ruby:3.1.2 image. Maybe I'll circle back once the Rails 7.1+ updates implement a default Docker setup; hopefully some folks more knowledgeable about Docker and images will have resolved it for us all :)

Hey Chris, is there any major difference between the newish Active Record signed IDs in Rails 6.1 vs the Global ID signed IDs? A quick glance leads me to think the only real difference is that AR signed IDs can only be used for a single model, whereas signed global IDs can be used for any model, but I wasn't sure if you knew about any other differences, practically speaking. Great episode; this will help me reduce several extra DB columns I have for various one-time-use tokens like password resets!
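For anyone else comparing the two, here's my rough understanding of both APIs (the purpose and expiry values are just illustrative):

# Active Record signed IDs (Rails 6.1+) are scoped to a single model
token = user.signed_id(purpose: :password_reset, expires_in: 15.minutes)
User.find_signed(token, purpose: :password_reset) # => user, or nil if invalid/expired

# Signed Global IDs can locate any model
sgid = user.to_sgid(expires_in: 15.minutes, for: :password_reset).to_s
GlobalID::Locator.locate_signed(sgid, for: :password_reset) # => user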

Posted in Hotwire Modal Forms Discussion

Great video, Chris, thanks!

Hey Chris, thanks for going over Turbo and StimulusJS 2.0 changes, I'm really excited about using them in my projects!

One really big thing that would be great to cover is using dialogs/modals with Turbo.

In several of my projects, users really like having dialogs/modals that can show information or allow them to edit information via forms without losing context in the app.

One issue with that, however, is that you can't bookmark or link to a "show" action dialog because it's displayed via JS. In my case, I can have multiple dialogs on top of each other, and each dialog is dynamically appended to the body, which is different from the approach of having a single <div id="dialog">...</div> hidden on the page and just updating its content (because you can't have multiple).

With regards to dialogs and forms, I've gone the route of remote: true on all the forms and using create.js.erb to execute JS code on the client. For example:

# posts_controller.rb
def create
  @post = Post.new(post_params)

  if @post.save
    # renders create.js.erb by default
  else
    render :form_validation_error # form_validation_error.js.erb
  end
end

// posts/create.js.erb
Dialog.hide("#new-post-form")
Snackbar.create("Post successfully created")

// posts/form_validation_error.js.erb
// Using jQuery syntax as an example of what I'm doing
$("#new-post-form .dialog-content").replaceWith("<%=j render(partial: 'posts/form') %>")

I've been looking at Turbo with its turbo-frames and turbo-streams, and it looks like the core team is headed in a different direction, since they don't want to execute arbitrary JS as a response anymore and instead recommend StimulusJS callbacks (I read that in one of the guides for either StimulusJS or Turbo, can't remember which).

But the core issue is making Turbo play nicely with multiple dialogs/modals on a page, some of them just showing info and some of them actually forms that submit data.

My current solutions "work", but it looks like I'll need to do a significant amount of refactoring to make them play nicely with the direction Turbo is going.

Posted in Introduction to Stimulus Reflex Discussion

Chris, from what you mentioned in the video about stimulus_reflex, it re-runs the #index action (the original page) after a reflex is executed. Wouldn't this execute a lot of extra DB queries and thus use a lot more memory on servers?

I'm building something similar to a calendar app that displays a variety of events, and when I create a new event, I just need to insert that 1 record onto the page. If I re-ran the entire #index page, that would result in at least 5+ extra queries in most production applications (retrieve current_user, retrieve any user-specific items, run the #index query and possibly any secondary lookups, etc.). From my experience the DB is the slowest part of most Rails apps, followed by the view rendering.

My approach currently is to use custom js.erb files with partials (and now view_components from Github) to dynamically update or insert my new events onto the calendar day. This means I only run the minimum DB queries necessary to insert the new record into the DB, and then spit back a little JS to the page that dynamically updates itself.

If the #index pages have a decent amount of content and thus would run several DB queries, is there really a benefit to stimulus_reflex since it automatically re-renders pages?

I can see cable_ready being useful, as it provides a nicer websocket interface to dynamically updating select portions of the page, and thus minimizing unnecessary DB queries.

BTW, I'd love to see some videos on GitHub's view_components and get your thoughts on them! I started using them in a few places, and I do like the ability to quickly test my views without having to load the entire app via system tests.

Posted in Introduction to Stimulus Reflex Discussion

@Chris, since stimulus reflex uses action_cable behind the scenes, which relies on a Redis DB, do you have any idea if there are scale limitations for using stimulus reflex? I.e., 1,000 concurrent users is probably fine, but at 100,000+ it starts crawling because Redis is backed up, etc.

My understanding is that the default action_cable implementation had some scale concerns, and that's why AnyCable (https://github.com/anycable/anycable) re-wrote some of the API in a faster language (I think they're using Go?). I haven't encountered these scale issues in my own apps, so my knowledge is only "what the experts say", not based on real numbers.

I'm curious about the "production" viability of stimulus reflex before considering implementing in some of my systems.

Awesome video as always, and I'm looking forward to seeing more videos on stimulus reflex, keep them coming!

Posted in Migrating From jQuery to Vanilla Javascript Discussion

@Chris, I know this episode is a little old, but this is still something I'm dealing with today.

I understand the rationale behind the move to get rid of jQuery in web apps: jQuery was from a time when we needed cross-browser compatibility and vanilla JS didn't provide the functionality, vanilla JS is now good and fast, etc.

And I agree with many of the points.

However, in almost every project I've written in the last 2 years that doesn't use jQuery, one of the first things I find myself doing is writing a JS class with a lot of helper functions that look very similar to jQuery.

This is easiest to see with dynamically appending elements to the document using something like Rails' <action>.js.erb format.

Writing this:

const fragment = document.createRange().createContextualFragment("<%= j render(partial: '...') %>")

document.querySelector("#some-selector").appendChild(fragment)

Seems a lot more painful than this:

$("#some-selector").append("<%= j render(partial: '...') %>")

I end up writing a class like the one below, where I throw in all my helper functions, essentially mimicking jQuery:

// JS helper class attached to window.Global (using webpack)
export default class Global {
  static append(selector, html) {
    const fragment = document.createRange().createContextualFragment(html)
    document.querySelector(selector).appendChild(fragment)
  }
}

I understand that if someone is using a framework like React/Angular/VueJS they wouldn't use jQuery, but that's because they're using another framework which abstracts the complexity.

Do you find that you and other developers are writing your own helper functions to abstract some of the complexity of vanilla JS away? How do you handle trying to write less code that is easier to maintain with the move away from jQuery?

After some more digging and some answers on SO, I realized I had been assuming that the way I did things in sprockets would transfer to webpacker, which is not true.

Sprockets essentially combines all my required JS files into a single file with a global namespace, which is why I could reference classes/functions right after they were required (thus the order of files mattered).

Webpacker also creates a single pack file of JS code (or as many packs as you have), but the pack is not a combined file of JS; rather, it is a combination of ES6 modules, such that each module is completely namespaced and separate from the others. Thus, to reference my Datepicker class in another ES6 module, I have to manually import Datepicker from "datepicker_file" in each separate file (noting that webpacker will ensure only 1 copy of the JS code is actually included).

That means only the bare minimum of entry-level JS code needs to go in the webpacker pack files. For instance, if I use the environment.js ProvidePlugin to make jQuery global, I don't actually need to require("jquery") in the pack file.

And since my Datepicker class imports the pikaday library, I don't need to add pikaday to my pack file at all, because webpacker will include it as a dependency.

I think webpacker will be better overall once I finally understand it, but it is going to require a LOT of refactoring for several of my applications, because they were written on the assumption that if a file was required, it was accessible in the global namespace.

Posted in How to use Javascript via Webpacker in Rails 6 Discussion

@Chris,

Can you shed any light on how the require or import in the pack manifest file works?

Here's a real-life example from a project I'm upgrading to Rails 6 with webpacker to manage the assets.

I'm using the pikaday JS library for a calendar, wrapped with a Datepicker JS class (to make refactoring easier if I someday change the calendar library), and this class is used in a StimulusJS controller.

Because the controller file imports the Datepicker class, and the Datepicker class imports the Pikaday library, I don't need to import/require pikaday or my Datepicker in the application.js pack file at all, and I'm curious why that is?

Also, if I import Datepicker from "custom/datepicker" in the application.js pack file, why can't I reference Datepicker in other classes I import in the manifest, such as the controllers? Why do the controllers have to manually import Datepicker in order to reference it?

It seems like there's a lot of manual importing of classes now that I've switched to webpacker... In the asset pipeline, I could reference these classes anywhere without having to manually import them all the time. Was that because they were somehow automatically being added to a global namespace?

I understand the principles behind avoiding over-populating the global namespace, but having to manually import every single class I want to use in every single file seems a little overkill. Have you happened upon anything that can help with this? (I've looked at webpacker's ProvidePlugin, and it doesn't seem to help with instantiating new classes, only if you're referencing the global itself, such as $ or jQuery).

app/javascript/packs/application.js

require("controllers") // loads the default index.js which loads all JS files in directory

app/javascript/controllers/assignment_form_controller.js

import { Controller } from "stimulus"
import Datepicker from "custom/datepicker"

export default class extends Controller {
  initialize() {
    new Datepicker(document.getElementById("some_id"))
  }
}

app/javascript/custom/datepicker.js

import Pikaday from "pikaday"

export default class Datepicker {
  constructor(element, options) {
    if (options == null) { options = {} }

    // Store DOM element for reference
    this.element = element

    // Do not re-run on elements that already have datepickers
    if (this.element.datepicker === undefined) {
      options = Object.assign({},
        this.defaultOptions(),
        options
      )

      const picker = new Pikaday(options)

      // Store picker on element for reference
      this.element.datepicker = picker

      return picker
    } else {
      console.log("Datepicker already attached")
      return
    }
  }

  // Overridden by `options` in constructor
  defaultOptions() {
    return {
      field: this.element,
      format: "M/D/YYYY",
      bound: true,
      keyboardInput: false,
      showDaysInNextAndPreviousMonths: true
    }
  }
}

I'm currently upgrading from Rails 5.2 to 6.0.1, and this gem was one of the blockers for me since it has a dependency on Rails 5.2. Really the only problem is that Rails' built-in store persists these values as strings, and this gem typecasts the values for you. Once you handle that, you don't need this gem anymore and can write your own module or small gem.

See below for a working version that doesn't use the gem. Obviously I haven't addressed null values and defaults, but those are fairly simple. Once I have a working module, I may update this comment with the code so others can use it.

I don't think it's worth making a gem to add a few lines of code, so I tend to store something like this in a lib/modules/typed_store.rb file and then include TypedStore in my model file to use it.

Code from the typed_store gem (from my live project)

typed_store :recurring_rules, coder: DumbCoder do |s|
  s.integer :recurring_interval, default: 1, null: true
  s.string :recurring_frequency, default: "day", null: true
  s.integer :recurring_days, array: true, default: nil, null: true
  s.integer :recurring_day_of_month, default: nil, null: true
end

Using built-in Rails store

If you're using PG with jsonb column types, you can use store_accessor directly and don't have to use the store method.

Just override the accessor methods to handle typecasting; that's also where you would handle defaults, nulls, etc...

store_accessor :recurring_rules,
  :recurring_interval,
  :recurring_frequency,
  :recurring_days,
  :recurring_day_of_month

def recurring_interval
  super.to_i
end

def recurring_frequency
  super.to_s
end

def recurring_days
  super.to_a.map(&:to_i)
end

def recurring_day_of_month
  super.to_i
end
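A rough sketch of the kind of module I have in mind for lib/modules/typed_store.rb (the typed_store_accessor name and API here are hypothetical; arrays, defaults, and nulls are still unaddressed):

module TypedStore
  extend ActiveSupport::Concern

  class_methods do
    # Usage in a model:
    #   typed_store_accessor :recurring_rules,
    #     recurring_interval: :integer,
    #     recurring_frequency: :string
    def typed_store_accessor(store_attribute, casts = {})
      store_accessor store_attribute, *casts.keys

      casts.each do |key, type|
        define_method(key) do
          # Cast the raw string from the store on the way out
          ActiveModel::Type.lookup(type).cast(super())
        end
      end
    end
  end
end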

I hadn't even thought about using the custom route names but then pointing them to separate controllers to keep the code clean. I got too stuck in the "all or none" mindset of only doing it one way.

Thanks for the input!

Chris,

These actions mostly touch the milestone model, but they update child associations (i.e., when a milestone is activated, its child task records are activated as well).

There is no separate Activation or Completion record; those were just names I gave to the actions I was taking on a milestone.

I also found that GitHub and Stripe both seem to use custom actions, so that made me feel a little better about moving to that approach.

Specifically with activate/deactivate, I have 10+ separate resources it can apply to, and it was much easier for me to remove those 10 separate controller files and just put the methods on the already-existing controllers.

It's always good to get input from folks in the community I look to for helping establish "best practices"!

I have a milestone resource that can be activated/deactivated as well as completed/reopened.

Rails pushes the standard 7 actions in your controllers: index, new, create, show, edit, update, destroy.

These 7 actions work great for most things, but my use case didn't strictly fit into those 7 REST actions. I read an article a while back saying that some respected Rails developers follow REST conventions by creating resources such as the following (a routes sketch follows the list):

  • activate => POST /milestones/:id/activations
  • deactivate => DELETE /milestones/:id/activations
  • complete => POST /milestones/:id/completions
  • reopen => DELETE /milestones/:id/completions
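In routes.rb, I believe that approach looks something like this (a sketch; the activation/completion controllers themselves aren't shown):

resources :milestones do
  resource :activation, only: [:create, :destroy], path: "activations"
  resource :completion, only: [:create, :destroy], path: "completions"
end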

I used this approach for a while but I've found it to be difficult to work with.

It adds additional files, which sometimes leads to more complexity since there is now more to manage.

The biggest problem I encountered was that it didn't make logical sense that reopening a milestone record happened at the endpoint DELETE /milestones/:id/completions. It made more logical sense to me that it would be PUT /milestones/:id/reopen, since it is something we are doing to the milestone record.

I've been contemplating moving these non-standard actions to the milestones_controller.rb file itself and updating my routes accordingly.
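Roughly this in routes.rb (a sketch of what I'm contemplating):

resources :milestones do
  member do
    put :activate
    put :deactivate
    put :complete
    put :reopen
  end
end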

I wanted to get some thoughts on these 2 different approaches and see how others have solved this problem of custom actions on resources.