Chris Zempel




Posted in How might I handle event-sourcing?

So it appears this is the big rub:

  • From what I can tell from bc3, they create events after the fact about things that have already happened
  • Kickstarter/the pattern itself definitely prescribe creating the event first, then applying it

As long as validations fire off for things that aren't generated by the system/world, all should be fine... right?

I'd really like the best of both worlds. Don't wanna jump whole hog out of using typical rails methods. But I think there's value in the event being created first, then applied because that way things can be recalculated, retried, etc.

So I think what makes the most sense is layering in a new custom active record primitive into the system, much like "create" or "update" - "track."

Then as for where calculators for events live, they'll live on whatever model makes sense: the model itself for typical form interactions, ActiveModel-backed classes where those fit better, etc.

This slightly shifts create and update.

  • create -> essentially becomes find_or_create
  • update -> doesn't simply accept a single set of updates; it needs to be able to accept many sets of updates at once (at least one).

This means track becomes the method inside controllers, create and update need to be slightly wrapped, and everything is still forced to be phrased through whatever it is.
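To make that concrete, here's a rough, framework-free sketch of the idea (all the names here, TrackedEvent, StuffTracker, are invented for illustration, not an existing API): the event is persisted first, then applied, so state can always be recalculated by replaying:

```ruby
class TrackedEvent
  attr_reader :name, :payload

  def initialize(name, payload)
    @name = name
    @payload = payload
  end
end

class StuffTracker
  attr_reader :events, :items

  def initialize
    @events = [] # stands in for the events table
    @items  = {} # stands in for the aggregate (Item) rows
  end

  # "track": persist the event first, then apply it to the aggregate
  def track(event)
    @events << event
    apply(event)
    event
  end

  # the calculator: how an event mutates the aggregate
  def apply(event)
    case event.name
    when :item_created
      # create -> essentially find_or_create
      @items[event.payload[:id]] ||= event.payload.dup
    when :item_updated
      # update -> apply a set of changes to an existing aggregate
      @items.fetch(event.payload[:id]).merge!(event.payload[:changes])
    end
  end

  # because events are stored first, state can be rebuilt from them
  def replay!
    @items = {}
    events.each { |e| apply(e) }
  end
end
```

The point of the sketch is the ordering: because `track` stores before it applies, `replay!` (recalculation, retries) comes for free.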

This solves for

  • aggregates
  • calculators
  • events

Doesn't solve for reactors yet, but once there are a variety of events happening, I think this will take the form of some centralized monitoring layer, maybe able to just get popped inside ApplicationRecord.
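As a hedged guess at what that centralized layer could look like, here's a tiny registry sketch (all names are assumptions); in a real app the dispatch call might hang off an after_commit hook in ApplicationRecord:

```ruby
# A minimal reactor registry: subscribers register blocks by event name,
# and dispatch runs every matching reactor after an event is tracked.
class Reactors
  def self.subscriptions
    @subscriptions ||= Hash.new { |h, k| h[k] = [] }
  end

  # register a reactor for an event name
  def self.on(event_name, &reactor)
    subscriptions[event_name] << reactor
  end

  # called after an event is tracked; runs every matching reactor
  def self.dispatch(event_name, payload)
    subscriptions[event_name].each { |r| r.call(payload) }
  end
end
```

Async reactors would presumably enqueue a job inside the block instead of doing the work inline.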

Posted in How might I handle event-sourcing?

I haven't made enough progress to commit on, but it seems as though:

  • I'll need the concept of whatever my bucket is. So a Place model.
  • The Event model is the right place to manage the create action via callbacks. I'm curious about using conventional event type names, and naming convention (is it item_created or item_create? are there existing parts of rails I can leverage to sorta handle/automatically set these calculations for me?)

Seems like the Event model will need to:

before_validation :set_aggregate # pull in validations from appropriate model
after_validation :track_event # persist this event if validations pass
after_create :apply_event # apply the contents of this event to the appropriate aggregate. in the case of creating an item, find_or_create

So the question becomes: where does the calculation logic for this event live? Kickstarter says in the event definition. I think maybe I can just defer to Item.create for right now, though.

Posted in How might I handle event-sourcing?

I've never been happy with the way I've implemented certain kinds of features:

  • Notifications
  • Some kinds of reporting
  • Activity Feeds

Then, I saw two things:

  1. The "bucket/recording" pattern inside
  2. This post on event sourcing by kickstarter:

This is a really useful pattern that forms the basis of the way I'd like to build rails apps whose main functions are information systems with hard needs for reporting, notifications, and auditing. (I'm also gonna use the definitions/words from above as my language for this thread.) Now, I need to understand the particulars of how to do this. It seems like there are basically two approaches:

  • ditching rails conventions, and stitching together a bunch of home-grown objects (see the kickstarter way)
  • deeply embracing rails and implementing an event-sourcing approach inside of it (the bc3 way)

Of the two, I'm a lot more interested in the deeply-embracing-rails approach because I think that will stand the test of time. To do this, the goal is to build a simple little app: "Stuff Tracker." Basically, its purpose is to store a list of things in my house I'd like to get rid of, store basic info (name, note, where stored), store images about them (to expose a list of things to sell), and move them through a simple state machine to ultimately sell/discard/store each thing. This way I can easily begin tracking all of the stuff in my house I need to get rid of, and easily expose lists of various types to other people (can I sell this thing? would you like to buy anything on this list of items that's "for sale?" etc.) to arrive at a decision on what to do with it.

Purpose of this thread

I left the notebook I'd stored all my notes in about my thoughts here, and recently the number of computers and environments I code on exploded. To remedy this, and rebuild my notes, I'm gonna store the progression of thought on here. This would have an added benefit: perspective. So here's my ask: if you're reading through this and see a better way to do things, please either comment and/or pr! I'm not exactly sure what I'm doing, so different POV's would be really helpful.

I'm not interested in a 1:1 mapping of "the event sourcing pattern" into rails. I'm looking for an "event-sourcing-like" implementation that's usable across projects and fits into the rails way. This means embracing:

  • concerns
  • callbacks
  • framing things in terms of :resources the way they'd get defined in routes (will deal with graphql later)
  • NOT dealing with problems of distributed systems

Getting Started

Starting here:

When creating an event, a couple things jump to mind we'd need to handle:

  • wrap all associated mutations to the db in a transaction
  • store in a specific context (so for multitenancy, think a tenant. this notion is likely expanded across the notification paths of various projects/teams and stored in the notion of a "bucket")
  • we want to store the event, then find_or_create the associated aggregate
  • where does the calculator live? (Is the event the model itself?)
  • reacting to sync and async things we want to happen (basically only sync thing for now is creating the item)
  • dealing with nested records (won't have to handle immediately, but soon). So I think this is what the "parent_recording" is in the video. Basically storing related sequences of a request in a little graph so you can transact a bunch of parts of what forms a single action in the system

I've got a little time remaining today. Immediate questions I want to get a basic implementation of:

  • where do I store validations? (I'm thinking for now just on the model)
  • how do I determine a top-level context (or bucket) that this event gets tracked inside/against? (think I might just make a bucket model, too, and defer the decision of scoping to that bucket)
  • here's what my "events" currently look like: should I tie the aggregate to a bucket, or the event to a bucket and an aggregate?
  • testing

I think I'm gonna blindly follow the idea of "bucket" and the idea that if a thing is "recordable" it can consist of a discrete set of events (meaning, if a thing is recordable then that means it's an aggregate). "Bucketable" means it can behave as a bucket. Ultimately I think these should be real domain concepts (like tenant, or project), but I'm gonna go ahead and make a literal bucket and merge that into some other sort of differentiation later.
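A minimal sketch of those two roles as plain Ruby modules (stand-ins for what would probably become ActiveSupport concerns; every name here is assumed):

```ruby
# Bucketable: the thing events are scoped to (tenant, project, ...).
module Bucketable
  def recordings
    @recordings ||= []
  end

  # store an event (a plain hash here) against this bucket
  def record(event)
    recordings << event
    event
  end
end

# Recordable: an aggregate, i.e. a discrete set of events.
module Recordable
  # an aggregate's history is the subset of bucket events pointing at it
  def events_from(bucket)
    bucket.recordings.select { |e| e[:aggregate].equal?(self) }
  end
end

class Bucket
  include Bucketable
end

class Item
  include Recordable
end
```

This also suggests an answer to the "tie the event to a bucket and an aggregate?" question above: here each event carries both, and the aggregate's history is derived rather than stored twice.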

Posted in How do I tackle this 28-line scope?

That ERD thing has been incredibly useful, though not in this specific instance. This database is crazy.

ActiveRecord uses AREL, and all it really is under the hood is an abstract syntax tree that builds up queries in a db-agnostic way before they can be translated into the actual, correct SQL for the given db. I'm also guessing that the main reason this was adopted was because AR didn't have support for .or back when it was written, so I think a lot of this could be moved out into AR scopes now. The reason this got so big is because instead of using controller concerns which combine various scopes pertinent to the situation, they just directly called this on the model, which accreted ALL the scopes over time.

There's no pressing business case to refactor this yet, and I think the right timing would be with a deeper update (say moving from api v1 to v2). Then it wouldn't be breaking it apart, necessarily, but just building new/better code around it.

Posted in How do I tackle this 28-line scope?

Context: I'm working on a startup MVP that's received very little love in the way of refactoring. This has been the most complex codebase I've had to grapple with, and it's been tons of fun figuring out how to breathe life back into it.

But this next bit is impressive. There are a variety of scopes on a bunch of different models that have accumulated some considerable length over the years. What started out as a quite readable/understandable 5-line query has turned into this in 13 commits over 3 years:

class Deal < ActiveRecord::Base
  # ... deal code
  scope :visible_to, ->(user) do
    if user.nil?
      table = Deal.arel_table
      condition = table[:tutorial_type].eq('Investor')
    else
      table = Deal.arel_table
      di_table = DealInvitation.arel_table
      project_table = Project.arel_table
      inv_table = Investment.arel_table
      client_table = Clients::Client.arel_table
      investment_access = Investments::InvestmentAccess.arel_table
      condition = table[:sponsor_id].eq(user.id)
                    .or(table[:id].in( di_table.project(:deal_id).where(di_table[:invitee_id].eq(user.id).and(di_table[:rejected].eq(false))) ))
                    .or(table[:id].in( inv_table.project(:deal_id).where(inv_table[:user_id].eq(user.id)) ))
                    .or(table[:project_id].in( inv_table.project(:project_id).where(inv_table[:user_id].eq(user.id)) ))
                    .or(table[:mode].in([Deal.modes['Public'], Deal.modes['Private']])
                          .and(Arel.sql((!user.is_fa_accessor.nil? && user.is_fa_accessor).to_s)))
                    .or(table[:tutorial_type].eq('SponsorExample').and(Arel.sql((user.type == 'Sponsor').to_s)))
                    .or(table[:tutorial_type].eq('SponsorSandbox').and(Arel.sql((user.type == 'Sponsor').to_s)))
    end
    # ... rest of deal code
  end

Now the thing is, it's not just this one: there are a variety of models with scopes like this that are used all over the place in controllers/other business logic. When I come across these in the code and need to introduce changes, I want to actually be able to understand what's going on. I can't go talk with the previous developers about why they added the code they did, and the commits/tests/other artifacts in the codebase aren't that descriptive.

So - how might I approach breaking these things down?

Current approach to try:

  • Identify all the places depending on this, get some sort of testing around them
  • Trace all the features added that this was expanded to support, see if they're still even in use
  • Break it down into some composable scopes instead of one big thing
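For that third bullet, here's a hedged sketch of the target shape: each clause of the big scope becomes a named, composable predicate. Plain lambdas over hashes stand in for what would really be AR scopes combined with .or, and all the rule names and fields are invented:

```ruby
# Each visibility rule gets a name; visible_to just composes them with OR.
sponsored = ->(deal, user) { deal[:sponsor_id] == user[:id] }
invited   = ->(deal, user) { deal[:invitee_ids].include?(user[:id]) && !deal[:rejected] }
invested  = ->(deal, user) { deal[:investor_ids].include?(user[:id]) }

visible_to = lambda do |deals, user|
  rules = [sponsored, invited, invested]
  deals.select { |deal| rules.any? { |rule| rule.call(deal, user) } }
end
```

In actual Rails each lambda would be a `scope`, and `visible_to` would chain them with `.or`; the payoff is that each rule can be tested, traced to a feature, and deleted independently.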

But some deeper questions:

  • In what situations might it make sense to lean on AREL?
  • What conclusions should I be drawing from the fact this exists in the first place?

I've gotta store a bunch of events, which are snippets of data that contain information about a thing that happened inside the system I'm working on. I'm building out an audit trail, which could potentially be used in legal proceedings.

There are a couple predominant things I'll need to be able to achieve with them:

  • Lots of searching, filtering, sorting and slicing by various attributes stored inside
  • csv exporting (wouldn't be a real feature if we didn't need to export to csv!)
  • They'll need to be stored and accessible for a minimum of a year (possibly 3)

And then, not a necessity but would be cool,

  • verify if the data was tampered or not w/ a hash

Currently it's safe to say that the data model for these events isn't settled:

  "name": "some_predefined_event_name",
    "true_tstamp": "some tzinfo value",
       "user_id": 346,
        "investment_id: 256,
        "arbitrary": "json"

The best looking approach so far seems to involve longer-term storage on AWS Glacier, as that will tick the compliance boxes, but that leaves the shorter-term needs, like searching & filtering and csv generation, unfulfilled.

Or maybe somehow disabling anything but writing and reading in DynamoDB via IAM policies, then taking the monthly chunks of that and moving them over to Glacier.

Is there a better way to go at this?

Posted in Looking for Rails work? / Hiring Rails developers?

Currently: NOT SEEKING WORK at the moment

Location: St. Louis, MO

Technologies Used: Ruby, Rails, Javascript, React, jQuery/Coffeescript. Experience with UX design, fairly proficient with vanilla css, lots of experience with popular css frameworks. I can stand up a Rails app on a fresh Linux box and troubleshoot server errors, but I prefer to have other people handle hosting :)



Posted in GoRails Markdown and Preview

In Turbolinks 5, the equivalent of jQuery's ready shorthand ($ ->) is

document.addEventListener("turbolinks:load", function() { ... })

However, from the docs:

When possible, avoid using the turbolinks:load event to add event listeners directly to elements on the page body. Instead, consider using event delegation to register event listeners once on document or window.

What this means is that events on a page bubble up the document tree. You can catch an event for a <li> on its parent <ul>, and further than that -- all the way up to the document. So you could actually rewrite this Comment coffeescript class as:

$(document).on "change", "[data-behavior='post-form'] [data-behavior='post-body']", (event, object) ->
  html = marked $("[data-behavior='post-form'] [data-behavior='post-body']").val()
  $("[data-behavior='post-form'] [data-behavior='post-preview']").html html

However, I think this is less clear to the developer than the Comment implementation.

I think you could abstract out the selectors and make defining those communicate the intent clearly, as well as potentially let you drop in a bunch of different setups of selectors if you want the preview functionality in more places than one.

class Preview
  constructor: (element, inputField, previewField) ->
    @element =      element
    @inputField =   inputField
    @previewField = previewField

  inputSelector: =>
    @element + " " + @inputField

  previewSelector: =>
    @element + " " + @previewField

previews = [
  new Preview("[data-behavior='post-form']", "[data-behavior='post-body']", "[data-behavior='post-preview']")
]
for preview in previews
  do (preview) ->
    $(document).on "change", preview.inputSelector(), (event) ->
      html = marked $(preview.inputSelector()).val()
      $(preview.previewSelector()).html html

This would let you reuse "Previews" all over the place in a way thats turbolinks 5-friendly, and all you have to do is add in another Preview object with new selectors into the previews array. Total overkill right here, but interesting to think through.

Posted in Properly adding Turbolinks 5 event handling

Here's what I understand:

  • Turbolinks preserves the window and document from request to request
  • As a result, I should add click/change listeners to the window/document once when the js is first loaded so that it can execute when the relevant element click happens (the element may not be loaded on the page yet)

This is possible because of "event delegation": when you click on a specific child element, the event bubbles up the DOM tree, triggering on each ancestor in turn all the way up to the document root.

This is why the TL readme says:

When possible, avoid using the turbolinks:load event to add event listeners directly to elements on the page body. Instead, consider using event delegation to register event listeners once on document or window.

Where I'm stuck is, how do I add an event listener to window to listen for specific elements that may not exist yet? I'd prefer to do this with css selectors and not with if statements checking them, but at this point I'm not even sure if that's an option.

I'd like the $.on style of setting things up, but it seems like I'll have to check the element the event was triggered on before deciding what to do.

here's some example code of a place where I'd like to do that. This displays different filter options based on what other filter options are selected:

document.addEventListener "turbolinks:load", ->
  $('#school_type').on 'change', ->
    selection = this.options[this.selectedIndex].value
    if selection == "HighSchool"
      # ... show the HighSchool-specific filter options

It works, but how could I catch the change event happening on the #school_type element on the window?

Lastly, but also firstly, is this something even worth worrying about?

Posted in Episode Suggestion: Normalization

I've identified a tendency in myself that I feel like, if overcome, would make me a considerably better developer and produce better software. Here I'll try to characterize/analyze it:

Description: I feel like my ability to expand the way I store and represent data in a db is hampered by how well I can conceive of more complex relationships in the application layer (aka, how the heck do I set up forms and controllers for this?).

Internal Tell: I'll feel subtly unhappy about the way my models are set up, but not really know what to do about it. I'll be staring at how I've set up the schema and identify that I actually need to move a concept out of a given table to become its own thing.

Here's a pretty clear-cut example to illustrate:

I'll have a User table that stores phone numbers. Soon after, the organizations they're a part of need to store those, too. So, I might end up adding a phone_number column to both.

class User < ApplicationRecord
  validates :phone_number, presence: true
end

class Organization < ApplicationRecord
  validates :phone_number, presence: true
end

Pretty much at this point I might find myself adding a concern with the logic for phone numbers, then including that in the models that have them. Ok, the data's stored two places, but my application code can be changed in one place.

Well, guess what, I need to now validate the uniqueness of the phone numbers. So, either I've gotta write some more Ruby to validate uniqueness across all the tables, or if I'm already in production, need to worry about migrating all the data to a new phone_numbers table so I can validate uniqueness there.
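A toy illustration of why pulling the numbers into one place helps. In the real app this would be a phone_numbers table with a unique index and a polymorphic owner; the stand-in class below (all names invented) just shows the uniqueness invariant living in exactly one spot instead of being re-validated per model:

```ruby
# Stand-in for a phone_numbers table whose `number` column has a
# unique index; the owner can be a User, an Organization, anything.
class PhoneNumbers
  def initialize
    @rows = {} # number => owner
  end

  # claiming a number already held by someone else fails, regardless
  # of whether the owner is a user or an organization
  def claim(number, owner)
    taken = @rows.key?(number) && @rows[number] != owner
    raise ArgumentError, "#{number} already taken" if taken
    @rows[number] = owner
  end

  def owner_of(number)
    @rows[number]
  end
end
```

The Rails shape would presumably be a PhoneNumber model with `belongs_to :callable, polymorphic: true` and `validates :number, uniqueness: true`, backed by that unique index.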

Timing: This is where the issue of timing comes into play. Breaking out a PhoneNumber into its own model is a pretty obvious thing since it bit me in one of my first applications (like Addresses), but there are some more and subtler ones that I'm finding over time (like any time I end up needing to store a bunch of constants or abusing enums, always worth breaking those out into their own table).

Guess what I'm really talking about is normalization. But there's a dimension of timing to this:

  • finding some concept that needs to be broken out immediately
  • discovering a concept that will implicitly be required by the app in the future, but not actually needed yet

and each of these can happen before or after you already have production data to worry about.

The Fear: I subconsciously shy away from good normalization because dealing with fields_for, while doable, is still confusing. Like, imagining the mapping of a form_for attribute to a database column on a model is easy. Imagining that relationship, validations, directory setup with a phone_number broken out isn't so natural yet.

So, knowing that I have a tendency towards just taking the easy path, this brings up the question: how can I add more things to that easy path? And that means, being able to phrase elements of the real world in such a way that they're well-suited for Rails.

So that's the starting point: basic discovery, jotting down some attributes onto models, setting up the relationships between them, but then - how far and how soon should I normalize? What do I need to learn more deeply to expand the easy path? (db stuff, just read through/code through variety of situations, look at how people have broken out what they have and ask why).

I've identified what I consider an issue. Is it actually an issue, or just a process I need to go through, or are there areas I need to learn? Not really sure what the actual problem is, just that there is one (I've tried describing it here). So I definitely don't know what a good answer to it would look like.

I often find myself needing to create form objects and other kinds of stuff that I don't feel good about, but can't put words to. So there's a separation between how I'd like to store data and how I can nicely phrase that in MVC.

Once I stumble across a situation like this, I'll write it up in this thread

Posted in Dynamically Defined has_many with odd behavior

Retrospecting, I should have tried to reproduce this independently of the app much, much sooner in the process, like after 15 minutes in. Would've realized it right then.

Posted in Dynamically Defined has_many with odd behavior

The queries that aren't working are because the positions don't exist. I just assumed they had to in order for the join table to point at them...not sure what happened yet!

my feels: :D .... D: ... :D ... :'(

Posted in Dynamically Defined has_many with odd behavior

No dice! I think Rails will automatically take the modelname_type stored on whatever polymorphic model and get the table/instantiate based off that, else I wouldn't have been able to successfully query the quarterbacks.

But if you have any other ideas I would love to hear em! I'm stuck. :(

Posted in Dynamically Defined has_many with odd behavior

Here's a quirk that's confounding me:

Data Model:

class Athlete < ApplicationRecord
    has_many :stats

class Stat < ApplicationRecord
  belongs_to :athlete
  belongs_to :position, polymorphic: true

# then I have a long list of positions, namespaced like so:

class Position::Quarterback < ApplicationRecord
  has_many :stats
  has_many :athletes, through: :stats

class Position::RunningBack < ApplicationRecord
  has_many :stats
  has_many :athletes, through: :stats

I've got a big list of all the other positions elsewhere in the application.

Position.full_position_names #=> ["Quarterback", "RunningBack", ...]

Figured it would be much nicer to define all the has_many relationships off that list rather than type them all out by hand, so when the list changes, so does the defined relationship.

class Athlete < ApplicationRecord
  Position.full_position_names.each do |position|
    self.send(:has_many,                                    # => dynamically calling has_many with position names so we can change things easier
              "#{position.to_s.underscore}_stats".to_sym,   # => position name Quarterback will produce association quarterback_stats
              -> { order(season: :asc) },                   # => ordered by season with lowest year first
              through: :stats,                              # => look at stats table
              source: :position,                            # => since stat->position is polymorphic, we want it to look at :position_type column on stats table
              source_type: "Position::#{position.to_s}")    # => with a :position_type of Position::Quarterback
  end
end

So far, so good. Now when I call quarterback_stats, it works:

irb(main):007:0> Athlete.first.quarterback_stats
  Athlete Load (0.7ms)  SELECT  "athletes".* FROM "athletes" ORDER BY "athletes"."id" ASC LIMIT $1  [["LIMIT", 1]]
  Position::Quarterback Load (0.8ms)  SELECT "quarterbacks".* FROM "quarterbacks" INNER JOIN "stats" ON "quarterbacks"."id" = "stats"."position_id" WHERE "stats"."athlete_id" = $1 AND "stats"."position_type" = $2 ORDER BY "quarterbacks"."season" ASC  [["athlete_id", 212080005], ["position_type", "Position::Quarterback"]]
=> #<ActiveRecord::Associations::CollectionProxy [#<Position::Quarterback id: 801653172, season: 2013, passing_yards: 180, passing_touchdowns: 8, rushing_yards: 80, rushing_touchdowns: 2, created_at: "2016-05-10 16:20:06", updated_at: "2016-05-10 16:20:06">, #<Position::Quarterback id: 833410073, season: 2014, passing_yards: 180, passing_touchdowns: 8, rushing_yards: 80, rushing_touchdowns: 2, created_at: "2016-05-10 16:20:06", updated_at: "2016-05-10 16:20:06">, #<Position::Quarterback id: 111928452, season: 2015, passing_yards: 180, passing_touchdowns: 8, rushing_yards: 80, rushing_touchdowns: 2, created_at: "2016-05-10 16:20:06", updated_at: "2016-05-10 16:20:06">, #<Position::Quarterback id: 530756924, season: 2016, passing_yards: 180, passing_touchdowns: 8, rushing_yards: 80, rushing_touchdowns: 2, created_at: "2016-05-10 16:20:06", updated_at: "2016-05-10 16:20:06">]>

But when I go and call running_back_stats...It doesn't work :(

irb(main):008:0> Athlete.first.running_back_stats
  Athlete Load (0.6ms)  SELECT  "athletes".* FROM "athletes" ORDER BY "athletes"."id" ASC LIMIT $1  [["LIMIT", 1]]
  Position::RunningBack Load (0.7ms)  SELECT "running_backs".* FROM "running_backs" INNER JOIN "stats" ON "running_backs"."id" = "stats"."position_id" WHERE "stats"."athlete_id" = $1 AND "stats"."position_type" = $2 ORDER BY "running_backs"."season" ASC  [["athlete_id", 212080005], ["position_type", "Position::RunningBack"]]
=> #<ActiveRecord::Associations::CollectionProxy []>

But when I do the same query manually....

irb(main):009:0> Athlete.first.stats.where(position_type: "Position::RunningBack")
  Athlete Load (0.5ms)  SELECT  "athletes".* FROM "athletes" ORDER BY "athletes"."id" ASC LIMIT $1  [["LIMIT", 1]]
  Stat Load (0.5ms)  SELECT "stats".* FROM "stats" WHERE "stats"."athlete_id" = $1 AND "stats"."position_type" = $2  [["athlete_id", 212080005], ["position_type", "Position::RunningBack"]]
=> #<ActiveRecord::AssociationRelation [#<Stat id: 419195171, athlete_id: 212080005, position_type: "Position::RunningBack", position_id: 505436969, created_at: "2016-05-10 16:20:06", updated_at: "2016-05-10 16:20:06">, #<Stat id: 110689410, athlete_id: 212080005, position_type: "Position::RunningBack", position_id: 4509324, created_at: "2016-05-10 16:20:06", updated_at: "2016-05-10 16:20:06">, #<Stat id: 832556053, athlete_id: 212080005, position_type: "Position::RunningBack", position_id: 927202847, created_at: "2016-05-10 16:20:06", updated_at: "2016-05-10 16:20:06">, #<Stat id: 680959409, athlete_id: 212080005, position_type: "Position::RunningBack", position_id: 776646567, created_at: "2016-05-10 16:20:06", updated_at: "2016-05-10 16:20:06">]>

I get back all the relevant stats that point to the existing positions.

Any idea what could be going on here?

Posted in Opinion: Namespacing Areas of the Application

I'm working on an application related to a specific sport. A large deal of the value of the application revolves around being able to track different stats by player by position by season.

There are a lot of positions, the majority of which have unique stats. So I figured it would be simple to give each position its own model.

Now what I'm running into is that this turns my models/ folder into a mess. I've a hypothesis that this app would be a good deal more understandable both to myself and other members on the team if we simply namespaced all the various positions under a "Position" folder.

So here's the underlying question: when is namespacing appropriate to help clarify and increase comprehension of an app, and what drawbacks does it introduce?

What's not clear to me are a couple things:

1) The effects this will have on views, links, forms and routing. As far as I can tell, if it's like:

class Position::Goalie < ActiveRecord::Base

# then the link_to @goalie will need: position_goalie_path(@goalie)
<%= link_to "Goalie!", @goalie %>

# the form_for @goalie:
<%= form_for [:position, @goalie] do |f| %>

2) Should I namespace the tables, too?

Behavior in the controller/view layers seems similar whether or not I prepend the database table with the namespace:

module Positions
  def self.table_name_prefix
    "positions_"
  end
end
So it seems that's solely based off the name of the model. I suppose in the case of a gem that would be used across multiple applications, namespacing the db tables might make sense. But in this case, since this will just be relevant to this one app, I guess that makes it a non-decision, and maybe even better to just leave the models to their original names? (ex: table name: goalies instead of positions_goalies)

The real question here: does the naming of this have any non-obvious effects elsewhere in the app?

3) General question for everyone: any projects that you've worked on where you've namespaced different portions of the application, was it helpful? Did it suck?

Posted in Loop through associated records and deliver as csv

You could totally do that. Just need to wrap your mind around it a little more. Really all you're doing is:

csv_string = CSV.generate do |csv|
  csv << ["row", "of", "CSV", "data"]
  csv << ["another", "row"]
  # ...
end

Now that you know you want:

id, user_id, user_name, brand, customer_id, product_group, product, quantity, price, setup, discount 

Then the question is, how will you set up your code to do that? Well, you'll need to grab each sale, but it would probably be nice to do that by each order. So loop through each of your orders, then grab each of its sales.

So to pseudocode this out, it would probably look something like:

CSV.generate(headers: true) do |csv|
  csv << ["id", "user_id", "user_name", "brand", "customer_id", "product_group", "product", "quantity", "price", "setup", "discount"]
  Order.all.each do |order|
    order.sales.each do |sale|
      csv << [sale.id, order.user_id, order.user_name, order.brand, order.customer_id, sale.product_group] # ...etc as you listed above
    end
  end
end

That way you'll get one row per sale, grouped by order. Probably wanna clean up your headers to be more specific, though.
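Here's a self-contained version of that pseudocode you can actually run; the Order/Sale structs and field values are assumptions standing in for the real models:

```ruby
require "csv"

# Hypothetical in-memory stand-ins for the Order and Sale models, just to
# make the shape of the nested loop concrete.
Sale  = Struct.new(:id, :product_group, :product, :quantity, :price, :setup, :discount, keyword_init: true)
Order = Struct.new(:user_id, :user_name, :brand, :customer_id, :sales, keyword_init: true)

orders = [
  Order.new(user_id: 1, user_name: "Ann", brand: "Acme", customer_id: 9, sales: [
    Sale.new(id: 100, product_group: "Widgets", product: "Widget A", quantity: 2, price: 5.0, setup: 1.0, discount: 0.0),
    Sale.new(id: 101, product_group: "Widgets", product: "Widget B", quantity: 1, price: 8.0, setup: 0.0, discount: 0.5)
  ])
]

csv_string = CSV.generate do |csv|
  csv << %w[id user_id user_name brand customer_id product_group product quantity price setup discount]
  orders.each do |order|
    order.sales.each do |sale|
      # one row per sale, with the order columns repeated on each row
      csv << [sale.id, order.user_id, order.user_name, order.brand, order.customer_id,
              sale.product_group, sale.product, sale.quantity, sale.price, sale.setup, sale.discount]
    end
  end
end
```

In the real app you'd probably also want Order.includes(:sales) (or find_each) rather than Order.all, to avoid N+1 queries on big exports.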

Posted in Must Read / Must Watch

walks through a variety of interesting situations in his typically humorous way:

  • I’m calling a method, but I don’t know where it goes
  • I’m calling super but I don’t know where that goes
  • I’m calling something but I don’t know where it goes (again)