Chris Oliver

Activity

Hey Jack,

This would be a good place to write a plain Ruby class that handles the processing, and then call that class from your background workers or rake tasks, passing in the objects it needs. Since the class is agnostic about where it's called from, the background worker or rake task just feeds in the inputs (like the model you need to process) and the work gets done in either spot with the same code. A rake task is easy for testing, and background jobs are important for production tasks, so a plain Ruby class lets you do both nicely.

Something like this:

# Your processing class
class ProcessWhatever
  def perform(model_id)
    model = Model.find(model_id)

    # do your work here
  end
end

# Your background job can call it
class ProcessJob < ApplicationJob
  def perform(model_id)
    ProcessWhatever.new.perform(model_id)
  end
end

# Your rake task can call it too (depends on :environment so your models are loaded)
task :process_whatever, [:model_id] => :environment do |_t, args|
  ProcessWhatever.new.perform(args[:model_id])
end
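
Then kicking it off from either spot would look something like this (using the example names above):

# Enqueue the background job from a controller, console, etc.
ProcessJob.perform_later(model.id)

And from the command line for testing: rake process_whatever[123]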

Posted in Best strategy for downloading multiple files from S3

Looking forward to hearing about your solution and how it goes once you get it implemented. There really isn't much information on this out there, and there definitely should be more!

Posted in Best strategy for downloading multiple files from S3

This is a good challenge Justin! I've thought about this in the past but never needed to implement it.

A lot of this probably will come down to your specific use case, but here are some thoughts:

  1. You could probably take a mix-and-match approach: use one strategy (say, Strategy #1) for downloads under 100MB, and a different process if it's over 100MB. There's a rough sketch of that idea right after this list.
  2. I don't think it would make sense to automatically create zip files whenever the bucket changes.
  3. I know that Dropbox will initialize a download as a background job and email you a link to it when it's finished. This could be a good approach, and you could dynamically spin up an EC2 server to run that download and zip, then kill the server afterwards. This is obviously useful if you're doing larger downloads that might need a lot of disk space or RAM for compressing. You might be able to use AWS Lambda for this instead of a full EC2 server, but it depends on the resources you're allowed in Lambda.
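
Here's that rough sketch of the first idea, just to illustrate it (the 100MB threshold, helper, and job names are all made up):

# Pick a strategy based on the total size of the requested files
SMALL_DOWNLOAD_LIMIT = 100 * 1024 * 1024 # 100MB

def start_download(files)
  total_size = files.sum(&:byte_size)

  if total_size < SMALL_DOWNLOAD_LIMIT
    # Strategy #1: zip on the webserver and stream it back right away
    zip_and_send_now(files)
  else
    # Hand it off to a background job and email a link when it's done
    ZipDownloadJob.perform_later(files.map(&:id))
  end
end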

Dropbox takes the background process approach, while Slack takes the approach of no zipping files and just doing one at a time.

It's hard (read: impossible) to come up with a single solution to this without knowing more details like the average file sizes, the file types, and the mix of requests. If people are uploading small documents, that's one thing. If they're uploading videos, that's another.

If you don't know the average usage before you get into this, then I would strongly encourage you to take the simplest approach first (rubyzip) and implement that. You could run this all on the webserver at first and then move it to a dedicated server for processing if it starts taxing your webserver. You can measure the free disk space before a job and what you'll need for the zip before executing it. That could tell you to either run it locally or on a dedicated EC2 instance for a few minutes. Once you get a user that's breaking the capabilities of one solution, then you can implement a more complex one and scale that out as you gather more usage measurements.
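
For the simple rubyzip route, a minimal sketch might look something like this (it assumes the files have already been downloaded from S3 into a local temp directory, and the method name is made up):

require "zip"

# Zip up already-downloaded files into a single archive
def build_zip(file_paths, zip_path)
  Zip::File.open(zip_path, Zip::File::CREATE) do |zipfile|
    file_paths.each do |path|
      zipfile.add(File.basename(path), path)
    end
  end

  zip_path
end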

This will also come down to keeping an idea of what percentage of requests are uploads vs downloads. If you have lots of zip downloads, then re-creating the zip every change might make sense for speed of downloads. If it's mostly uploads and rarely downloads, then you can do that as-needed instead and maybe cache the zip file until a change is made to the bucket.

It'll be super interesting to see, but the nice part is that as long as you store metadata for all the files in your database, you can get a quick read on the average file size, the largest file, and so on to help you figure out which approach will work best.

Do you have an idea of what usage you're expecting?

[Edit: I accidentally wrote a mini-novel 🤓]

Posted in Using ActiveAdmin to Build an Admin UI Discussion

I prefer Administrate personally because it's more or less just regular controllers namespaced to the admin area. No special DSLs or anything, so you can customize it just like you would write your normal app.

Also yeah, noticed those security updates recently as well. 👍

Posted in Using ActiveAdmin to Build an Admin UI Discussion

Haha, no worries man, it's not a noob mistake. It's definitely one where I'd go "huh, that's weird" and have to do some digging myself to make sure it was all configured right. I run into those things all the time.

Posted in Using ActiveAdmin to Build an Admin UI Discussion

Hmm, looks right. Also double check you've got a belongs_to :user on the model I guess. I'm not sure what else it might be. That's odd.

Posted in Using ActiveAdmin to Build an Admin UI Discussion

Sorta! This will permit it if it's submitted, but your form is automatically generated from your database columns. If your migration didn't include a user_id column, then you won't have that field in the form. Check that and make sure you added one; if not, you can roll back and edit the migration or add a new one for it. I think that's probably the culprit.
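
If it does turn out to be missing, a migration like this would add it (assuming the model is Post; the migration version should match your Rails version):

# Generated with: rails g migration AddUserToPosts user:references
class AddUserToPosts < ActiveRecord::Migration[5.2]
  def change
    # Adds a user_id column with an index and foreign key
    add_reference :posts, :user, foreign_key: true
  end
end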

Posted in Using ActiveAdmin to Build an Admin UI Discussion

Did you add the user_id column to your Post model?

Posted in How to Upload Video using video encoder FFMPEG?

None that I've done, but you can upload videos or songs using Shrine, CarrierWave, etc. just like you would with pictures. Then from there, as long as you've got Video.js or something, you can feed the URL of the upload into that pretty much the same way you would take the upload URL and embed it with an image tag. It would just go into a video tag instead.
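
In the view that's something as simple as this (assuming a Shrine attachment named video on the record):

<%# Embed the uploaded video just like you would an image %>
<video src="<%= @upload.video_url %>" controls></video>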

As far as ffmpeg, you can do all kinds of cool stuff with it. I've used it with Shrine to process uploaded videos.

This example shows using ffmpeg to extract some video metadata: https://github.com/janko-m/shrine#custom-metadata

And this example shows processing a video to transcode it and take a screenshot for a thumbnail: https://github.com/janko-m/shrine#custom-processing
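
If you want to poke at ffmpeg from Ruby directly, the streamio-ffmpeg gem is one option. A quick sketch (file paths are made up):

require "streamio-ffmpeg"

# Read metadata from an uploaded video and grab a thumbnail
movie = FFMPEG::Movie.new("uploads/movie.mp4")
movie.duration    # length in seconds
movie.resolution  # e.g. "1920x1080"
movie.screenshot("uploads/movie_thumb.jpg", seek_time: 5)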

I believe these gems are required to be in the production group because the assets:precompile command runs in the production env. Your assets.compile flag only covers assets that are referenced but not yet compiled, which isn't likely to happen because you should have everything precompiled already.

Plus, once the precompile is done, saving 3 requires in your Gemfile will only shave a couple milliseconds off boot, which only happens when you restart your app, not on every request, so you aren't likely to save any valuable time doing this anyway.
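
Just to illustrate the point, this is all it amounts to in the Gemfile (these particular gems are only an example, not necessarily the ones you're asking about):

# Keep asset gems in the default group so they're available when
# assets:precompile runs with RAILS_ENV=production
gem "sass-rails"
gem "uglifier"
gem "coffee-rails"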

Posted in Our First API (Example) - GoRails

Hey Victor, please be constructive. Angry comments don't help anyone.

A common misconception, and the one you have here, is that Rails API-only mode is how you should build all Rails APIs, which is flatly incorrect. In some specific cases where you don't want to render a website, you can use API-only mode to strip out most of the Rails functionality that's used for cookies and other web features but isn't needed for APIs. Since it removes a lot of Rails features, API-only mode should only be used when you know you really want those features disabled.
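
For reference, API-only mode boils down to the --api flag on rails new, which just sets this in config/application.rb (MyApp is a placeholder):

module MyApp
  class Application < Rails::Application
    # Skips the cookie/session middleware, view layers, etc.
    config.api_only = true
  end
end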

That's why it's not the default and why I'm not talking about it just yet. We will be covering this but it is not the "new way" of doing things as you claim. It is a way of doing things for a specific situation, not all situations.

So again, we're here to help each other learn these things, not shout angry comments at me about something that you believe is incorrect. It's not, and the whole reason I made this episode was to discuss how APIs are not a special thing like you claim. It's a common misunderstanding and one that we can fix if we talk about this stuff in-depth.

Next time instead of telling me I'm stupid, simply ask why I haven't talked about API-only mode and we can all have a much more constructive time together.

Posted in Our First API (Example) - GoRails

Yeah, that's what I was thinking but I just wanted to confirm it. Consuming webhooks is pretty standard, but there are few resources out there that actually talk about sending out webhooks and how to handle responses, exponential backoff, and things like that. I think that'll be a fun set of episodes to do.

Posted in Our First API (Example) - GoRails

I wrote out a list of topics related to APIs and had like over 50 different episodes I could do. That's so many it could fill up the entire next year with only API content. Love it. :D

I believe mailers by default don't include helpers. This is by design I think, but it does seem like one of those things that would be included automatically.

You can add helper lines to your mailers in order to give them access to the helpers:

helper :application
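
So in the mailer itself it'd look something like this (UserMailer and the welcome_email method are just example names):

class UserMailer < ApplicationMailer
  # Makes the methods in ApplicationHelper available to this mailer's views
  helper :application

  def welcome_email(user)
    @user = user
    mail(to: @user.email, subject: "Welcome!")
  end
end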

Posted in Multiple File Uploads with Shrine Discussion

Great work Scott, and props for coming back and sharing your solution! 👍

Posted in Our First API (Example) - GoRails

Like building webhooks for your customers to use or consuming other service's webhooks?

First auth video will be next week. Going to be recording it tomorrow. :)

Posted in Changing currency

@sites.currency.to_sym should work. 👍

Posted in Can I move parts of Shrine config to environments?

DANG. That's a rough lesson to learn. Glad you didn't have to pay for any of that and I can't imagine how rampant that problem is for Amazon and other places.

Posted in Can I move parts of Shrine config to environments?

And yeah, best long term solution for IAM stuff would be to have separate users for each bucket so nobody in development could accidentally delete your production files and vice versa.

Posted in Can I move parts of Shrine config to environments?

You can still populate secrets.yml with environment variables so it doesn't matter either way you slice it.

production:
  aws_bucket: <%= ENV["AWS_BUCKET"] %>

The benefit of using secrets.yml is that you can do both and keep everything consistent, especially since development environments often share test keys that don't really matter whether they're in the repo or not.
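
Then in the app you read it the same way no matter where the value came from (the key name just matches the example above):

# Works whether the value was hardcoded in secrets.yml or pulled from ENV
Rails.application.secrets.aws_bucket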