I have a milestone resource that can be activated/deactivated as well as completed/reopened.
Rails provides the standard 7 actions in your controllers: index, new, create, show, edit, update, destroy.
These 7 actions work great for most things, but my use case didn't strictly fit into those 7 REST actions. I read an article a while back that some respected Rails developers follow REST conventions by creating resources such as:
- activate => POST /milestones/:id/activations
- deactivate => DELETE /milestones/:id/activations
- complete => POST /milestones/:id/completions
- reopen => DELETE /milestones/:id/completions
I used this approach for a while but I've found it to be difficult to work with.
It adds additional files, which sometimes leads to more complexity since there is now more to manage.
The biggest problem I encountered was that it didn't make logical sense for reopening a milestone record to live at DELETE /milestones/:id/completions. It made more logical sense to me that it would be PUT /milestones/:id/reopen, since it is something we are doing to the milestone record.
I've been contemplating moving these non-standard actions into the milestones_controller.rb file itself and updating my routes accordingly.
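For anyone curious, the controller-action approach would look something like this in the routes file. This is only a minimal sketch; the action names mirror the ones above, and PUT vs PATCH is a matter of taste:

```ruby
# config/routes.rb — a minimal sketch of the member-action approach
resources :milestones do
  member do
    put :activate
    put :deactivate
    put :complete
    put :reopen
  end
end
```

Each of those then becomes a small action on MilestonesController rather than a separate controller file per state change.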
I wanted to get some thoughts on these two approaches and see how others have solved this problem of custom actions on resources.
Hey Chris,
I've noticed that some of your recent videos about testing are using Minitest.
What are your current thoughts on Minitest vs RSpec? I've been a fan of RSpec for many years, but I'm considering using Minitest instead, since it's built-in and is thus the default.
One of the big considerations for me to move to Minitest is the addition of integration tests (a while back), but since I haven't tried it out yet I'm not aware of any "gotchas" with Minitest.
I always look forward to your videos!
Hey Chris,
Awesome episode as usual! A few quick questions:
- Since ActionText stores a global_id reference to records in order to display the updated partial at render time, does that mean it makes a separate DB query to retrieve each of those records? For instance, if I @mention 10 separate users, will it make 10 separate calls? Especially if I'm creating a commenting system that allows @mentions, and there could be 10-20 comments, each with several @mentions, etc...
- I noticed you used Zurb Tribute for the @mention library; you've done an episode before using At.js. Any benefits to Tribute over At.js, or is it just preference?
I really do like the concept of storing a reference to the partial instead of the hard-coded HTML. I'm actually in a situation where I stored the HTML snippets in the text itself and now want to change it, but am struggling with how to do that using the Froala editor. I'll eventually migrate to ActionText in a few months after Rails 6 has been vetted in the wild.
Always appreciate your timely and very applicable videos!
Chris, at the 3min mark you talk about using subdomains such as reply.gorails.com instead of the root gorails.com to process incoming emails.
What's the concern or downside of not using a subdomain? Is it to avoid situations where you have an email like admin@gorails.com that you do not want processed by the application?
Posted in Activity Feed From Scratch Discussion
https://apidock.com/rails/ActionView/LookupContext/ViewPaths/template_exists%3f
Chris, one question that would be awesome for you to cover is handling lists of elements.
For instance, I'm working on a notifications system right now (based off of some of your episodes!), and I want to have a data-controller to maintain list-level actions (like mark all as read) but also individual elements (such as mark an individual notification as read/unread, click the notification to view associated record, etc).
Stimulus has been updating their docs and they now have a section that talks about multiple data-controllers for lists, but I'm not sure what the "standard" is for high-level list actions vs individual item actions (if that makes sense).
I'm compiling a list of problems I'm running into as I stumble my way through it! I use a lot of CoffeeScript classes to interact with my app and avoid spaghetti code, like you show in https://gorails.com/episode....
One issue I ran into already is that it seems the `data-controller="..."` attribute cannot have underscores (_) in the controller name. For example, `app/javascript/packs/transaction_record_controller.js` with `data-controller="transaction_record"` will not work, but once you remove the underscore it works fine. I couldn't find any documentation on this, and I can't think of anything my app is doing that would cause a conflict, so I'm assuming it's the way Stimulus works.
I've been slowly converting several of my CoffeeScript classes into Stimulus controllers, and so far I've found it to be a great way to take care of stuff like adding a datepicker to an AJAX form, etc...
I'm literally figuring out StimulusJS right now on an app, and I was hoping you would do a screencast on it soon and save me some time! Awesome timing :)
I did find one "gotcha" with using a preferences hash like this. In my particular situation, I wanted a settings hash that stored a nested filters hash.
My use case is I have a page where I want to use URL params for filtering:
/contacts?first_name=John&last_name=Smith
I wanted to store the filters hash via:
# in controller
user.settings[:contact_filters] = params.slice(:first_name, :last_name)
user.save

# What I want it to store...
settings: {
  contact_filters: {
    first_name: "John",
    last_name: "Smith"
  }
}
The problem is that the nested hashes are stored as strings, and it is nigh impossible to cast them cleanly back to a normal Ruby hash (at least after 15 minutes of Stack Overflow searching).
I was thinking that I could get it to somehow work with a combination of:
JSON.parse(user.settings[:contact_filters])
but no such luck. I do love the simplicity of the settings hash, but it does seem to have some difficulties with nesting.
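One workaround I've been considering: since the store's values are always strings, serialize nested hashes to JSON on write and parse them back on read, instead of letting Ruby's default string conversion leak in. A minimal, framework-free sketch (the plain Hash here stands in for the string-valued settings column; method names are made up):

```ruby
require "json"

# Store nested filter hashes as JSON strings, then parse them back on read.
# The `settings` hash stands in for the string-valued settings column.
def write_filters(settings, filters)
  settings["contact_filters"] = JSON.generate(filters)
end

def read_filters(settings)
  JSON.parse(settings.fetch("contact_filters", "{}"))
end

settings = {}
write_filters(settings, first_name: "John", last_name: "Smith")
read_filters(settings) # => {"first_name"=>"John", "last_name"=>"Smith"}
```

You lose symbol keys on the round trip, but at least the nested structure comes back as a real Hash instead of an inspect string.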
Posted in Workflow for TDD/BDD on Rails ?
I just want to throw in my 2 cents with regards to TDD and the workflow/practices of a development team. You're correct that "textbook TDD" states you first write specs that don't pass, then you implement them feature-by-feature following a red-green refactoring process. That sounds great, and it's a good tool to teach others.
However, the real world is hardly that simple. For example, all three of the Rails projects I've built in the last 2 years had features where we didn't know the final specification until we first built rough versions and our clients had a chance to review them and give us further guidance on what they really wanted.
Here's a real-world example from my current project. I have a task system that assigns tasks to users, and they can complete a task or reopen it. My first pass was to have completed_at and reopened_at columns that tracked when these 2 states changed and to base my logic off of them.
But then I talked to my client, and they told me they also needed an in-between state for when a task was "submitted" and needed to be reviewed, but wasn't technically complete.
If I had followed TDD, I would have written a bunch of specs that would have been fairly useless and would have needed constant refactoring. And the reason they would have to change constantly is that I didn't know what the final core architecture or features of the project were going to be.
Thus, in my opinion, pure TDD only makes good sense if you have a clear understanding of what the final product needs to do. In cases where you have a general idea but are refining it constantly, I recommend holding off on the specs until you've solidified the key design. Some projects gather all the product specifications ahead of time, and TDD works out great; other projects have general ideas, but the key architecture is figured out at "run-time", so to speak, when your development team actually implements the features and finds all the "gotchas" (most projects are like this for me).
I say this because specs are nothing more than code that also has to be maintained and refactored as time goes on. When you view your specs this way, you realize there is a cost involved with writing and maintaining good specs.
View specs as a guarantee that future code doesn't break existing features. So don't worry as much about having specs while you're building out features, but when you ship that feature to your production system, at that time I strongly recommend you have specs that ensure it all works.
As a final word, don't go crazy testing everything. Much of the core Rails code is already tested; you don't need to repeat what they're already doing. Test your logic and your project's unique features, and don't worry as much about "does this core Ruby/Rails method actually do what it says it does?" Chances are it probably does.
This actually makes a lot of sense. I've had frustrations with putting all my preferences in a single column. It bothers me to have a bunch of columns for preference data, but as I think about it, there's no "real" reason it should bother me; it's my own "preference" (no pun intended). I would have access to all of ActiveRecord's power, and the data would be treated like a first-class citizen instead of delegated to the sidelines. You're right that migrations aren't something to be scared about, and if you use a single column hash you have to re-deploy changes anyway, which should run migrations by default.
Adding 10-15 DB columns for preferences may be a better solution, because then you have full index/query power, even though JSONB can be indexed (I think). No reason to complicate things though, IMO.
One use case I've had for using single-column hashes is for unstructured data that you don't know ahead of time, such as someone uploads a JSON file with unknown data and you have to parse it out. But preferences are 99% of the time structured data that you know ahead of time and are building core logic around (ie, "send emails if this setting is true").
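For reference, the dedicated-columns approach is just an ordinary migration. A hedged sketch, where the column names are purely illustrative and not from the episode:

```ruby
# db/migrate/xxxx_add_preference_columns_to_users.rb
# Hypothetical migration for the one-column-per-preference approach.
class AddPreferenceColumnsToUsers < ActiveRecord::Migration[6.0]
  def change
    add_column :users, :email_on_mention, :boolean, default: true,  null: false
    add_column :users, :weekly_digest,    :boolean, default: false, null: false
  end
end
```

With real columns you get defaults, NOT NULL constraints, and straightforward `where(weekly_digest: true)` queries for free.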
A portion of the application I'm working on has a comments page with a form at top to add new comments and a list of comments beneath it. I've been working on using CoffeeScript classes to represent my DB models, as per this episode on using data-behaviors.
I started breaking my CoffeeScript classes up into separate classes to mirror my Rails controller actions:
class Comments.New...
class Comments.Create...
class Comments.Update...
Since I'm using server-generated JS, my Rails actions correspond to the class:
// app/views/comments/create.js.erb
new Comments.Create("<%=j render(partial: 'comment', locals: { comment: @comment }) %>");
# comments/create.coffee
class Comments.Create
  constructor: (comment) ->
    @comment = $(comment)
    @newForm = $("[data-behavior='new-comment-form']")
    @comments = $("[data-behavior='comments']")
    @addComment()
    @resetForm()

  addComment: () ->
    @comments.prepend(@comment)

  resetForm: () ->
    @newForm[0].reset()
The comment partial has enough data attributes to allow my CoffeeScript code to know what to do with it.
To initialize on page load, I have a global.coffee that starts all my major events:
# global.coffee
$(document).on "turbolinks:load", ->
  $.map $("[data-behavior='new-comment-form']"), (element, index) ->
    new Comments.New(element)
I probably spend 75% of my time building out front-end interfaces, and technically I'm a "back-end" guy :) I just hate a lot of the bloat/unused portions of some of the really large frameworks out there.
I'm puzzling over whether this is a good approach, and how others solve similar problems. I'm finding that I want to run a decent amount of JS/CoffeeScript code for each of my corresponding Rails actions.
Posted in Decorators From Scratch Discussion
Awesome episode! One thought on collections of presenters. I found it annoying to keep writing out the collection.map { |object| ObjectPresenter.new(object) } syntax every time.
In my situation I use a BasePresenter that all my other presenters inherit from. I then add a class method "collection(...)" that allows me to create a collection of presenters with cleaner syntax.
The above syntax now looks like this:
@objects = ObjectPresenter.collection(Object.all, view_context)
class BasePresenter < SimpleDelegator
  def self.collection(objects, view = nil)
    objects.map { |object| new(object, view) }
  end

  def initialize(object, view = nil)
    @object = object
    @view = (view || ActionController::Base.helpers)
    super(@object)
  end
end
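For anyone who wants to kick the tires outside of Rails, here's a runnable, framework-free version of the same pattern. Item and ItemPresenter are illustrative stand-ins, and the view argument is left nil since there's no view_context here:

```ruby
require "delegate"

# Framework-free sketch of the BasePresenter.collection pattern above.
class BasePresenter < SimpleDelegator
  def self.collection(objects, view = nil)
    objects.map { |object| new(object, view) }
  end

  def initialize(object, view = nil)
    @object = object
    @view = view
    super(@object)
  end
end

Item = Struct.new(:name)

# Hypothetical presenter: decorates an Item with display logic.
class ItemPresenter < BasePresenter
  def display_name
    name.upcase # `name` is delegated through to the wrapped Item
  end
end

presenters = ItemPresenter.collection([Item.new("alpha"), Item.new("beta")])
presenters.map(&:display_name) # => ["ALPHA", "BETA"]
```

SimpleDelegator means any method the presenter doesn't define falls through to the wrapped object, so callers can treat a presenter like the model itself.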
Throwing in some thoughts on this as it's something I've dealt with in the past and am currently working on a project right now!
I agree with Chris's thoughts, that a single table is much easier to maintain, especially when you start dealing with associations and changing column names, etc...
One of the cleanest ways to separate them in your Rails code is to use different classes/models. For instance, let's say you have a workflows table and some of the records are templates, based on an is_a_template boolean column.
I would break that up into a few different classes.
class BaseWorkflow < ApplicationRecord
  # Both subclasses share the workflows table; marking the base class
  # abstract keeps Rails from treating this as STI (no `type` column needed).
  self.abstract_class = true
  self.table_name = "workflows"

  # code shared between "live" and "template" records
  scope :templates, -> { where(is_a_template: true) }
  scope :not_templates, -> { where(is_a_template: false) }

  has_many :steps
end

class WorkflowTemplate < BaseWorkflow
  before_save { self.is_a_template = true }
  default_scope { templates }
end

# "live" workflow
class Workflow < BaseWorkflow
  before_save { self.is_a_template = false }
  default_scope { not_templates }
end
The interesting part is when you have several layers of associations, such as workflow has_many :steps, and step has_many :tasks, etc... I personally found that using live/template classes for all the children became a burden and was rather redundant, because you already know they are templates when their parent is a template.
This allows you to put all your core relationships in the BaseWorkflow class and not duplicate them in each of your live/template classes.
I also played with duplicating all my tables, and I tried separating all of the live/template versions into 2 sets of models; it became overwhelming very quickly, even though I was only dealing with a few core models. Anytime I had a logic change or needed a new method, it was a pain to duplicate it in 2 locations.
My current setup is to only break up the parent record into live/template, and then place any unique code in those classes instead of having conditional logic in the base record. This lets you break up the logic only where you need to, which is when deciding whether a parent record and its associations belong in the live or template category.
Posted in Proper location for null objects
Alex,
I'm chiming in with some practical experience, having written several medium-size applications with 50+ models.
http://blog.makandra.com/2014/12/organizing-large-rails-projects-with-namespaces (SFW) is one of the articles I read when I decided to start namespacing my models directory. It does a good job explaining "why" you would want to namespace your models folder.
In brief, it helps organize your files for another developer to easily understand what the main pieces are in your application, and it's just more manageable. Even though I use keyboard shortcuts to find/open files 99% of the time, it just "looks" easier on the eyes when the folders only have a handful of files in it.
Think of it like Ruby classes: there's nothing to stop you from throwing 500+ lines of code into a single Ruby class, but almost everyone would agree that's a bad idea; it's recommended to break that code up into several classes that work together. Having a handful of main folders that organize the files themselves makes it easier to understand, IMO.
In your specific case with a User model and a GuestUser model, here's how I implement stuff like that:
app/models/user.rb => the main "user" model
app/models/users/as_guest.rb => the "guest" user
class User < ApplicationRecord
  ...
end

module Users
  class AsGuest < ::User
    ...
  end
end
Another practical example would be inviting new users. Typically, when you invite a new user, you want to validate that they enter an email/password combo properly and send them a confirmation email after they are invited. Instead of throwing this all into the User model with conditional statements, I put that validation and email handling into the Users::AsInvitation model. This allows me to create a User object without having to send confirmation emails if I don't want to (for instance, for an API or admin panel).
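As a rough, framework-free sketch of that idea (the class and method names are illustrative, not my actual app code): the invitation subclass layers confirmation behavior on top of the base class, so creating a plain User skips it entirely.

```ruby
# Framework-free sketch of the Users::AsInvitation idea above.
class User
  attr_reader :email

  def initialize(email)
    @email = email
  end

  def confirmation_sent?
    false
  end
end

module Users
  class AsInvitation < ::User
    def initialize(email)
      super
      send_confirmation_email
    end

    def confirmation_sent?
      @confirmation_sent == true
    end

    private

    # Stand-in for enqueueing a confirmation mailer in the real app.
    def send_confirmation_email
      @confirmation_sent = true
    end
  end
end

User.new("a@example.com").confirmation_sent?                # => false
Users::AsInvitation.new("a@example.com").confirmation_sent? # => true
```

The subclass is still a User everywhere it matters, but the invite-only behavior lives in exactly one place.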
Posted in Rails Counter Caches Discussion
I was just working on something like this today; awesome timing! One pain point I've run into is conditional counter caching. Rails doesn't have it built-in, so I have to add some callbacks that run the counts myself.
Would be great to see a follow-up video on optimizing conditional counts. I know that doing a raw count() works for 99% of the cases, but for special race conditions and possible performance it may be best to use the SQL coalesce syntax (what Rails uses for counter_cache I think). I'm just not well-versed enough in SQL to be able to write it without a little research.
As an example of what I currently do, I have a situation where a User has many Tasks, but I only care about counting active tasks:
# models/task.rb
after_save :update_user_active_tasks_count
after_destroy :update_user_active_tasks_count

def update_user_active_tasks_count
  user.update_column(:active_tasks_count, user.tasks.active.count) if user
end
This seems very inefficient to run on every task update.
Along with what the others already mentioned, I've also found it easier to work with nested routes instead of subdomains. Subdomains used to (not sure if they still do) require additional work for SSL certs to work properly, or you had to purchase a more expensive cert on some hosting services for subdomains.
Also, using subdomains in development requires a few additional steps.
Some of my info may be out-of-date since I haven't played with subdomains in a year or two, but I found nested routes to be much cleaner and tended to play "nice" with Rails and its conventions.
This is also how GitHub does their namespacing: github.com/{USER_NAME}/{REPO_NAME}. The alternative would be {USER_NAME}.github.com/{REPO_NAME}, and that just gets clunky and isn't as easy to type out in a URL, in my opinion.
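For what it's worth, a nested route in that GitHub style is close to a one-liner in Rails. A minimal sketch, where the controller and param names are made up for illustration:

```ruby
# config/routes.rb — hypothetical GitHub-style nested path
get "/:username/:repo_name", to: "repositories#show", as: :user_repository
```

The controller then looks up the user by `params[:username]` and the repo by `params[:repo_name]`, no subdomain or wildcard SSL cert required.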
Posted in Message Templates Discussion
Chris, what are your thoughts on keeping the template records inside the same table instead of a separate one? I know this wouldn't work for all situations, but I have a situation where 5 tables each have templates, so I have 10 tables to maintain, and they're essentially duplicates of each other (one for the live data and the other for templates).
In order to reduce the DB complexity and cut down on maintenance, I'm thinking about combining the template tables into the normal tables and adding a `template:boolean` field.
That way there's only 1 version of the models to maintain and DB changes don't have to be kept in sync between multiple tables.
@excid3:disqus , just wanted to let you know there's a typo on the episode index page: "Cross Site Scriptiong" (extra "o"). Really enjoying your work, especially your recent episodes on Shrine file uploads with S3 as that's a feature I'm adding to an app right now!