Activity
I'd like to remove the Trackable module of Devise.
Is it as simple as removing `:trackable` from the `devise` line in `user.rb`, and then creating a migration to remove the columns created for the module?
# app/models/user.rb
...
devise :database_authenticatable, :registerable, :rememberable, :validatable, :trackable
# becomes
devise :database_authenticatable, :registerable, :rememberable, :validatable
...
and a migration like:
def change
  # column types included so the migration is reversible
  remove_column :users, :sign_in_count, :integer, default: 0, null: false
  remove_column :users, :current_sign_in_at, :datetime
  remove_column :users, :last_sign_in_at, :datetime
  remove_column :users, :current_sign_in_ip, :string
  remove_column :users, :last_sign_in_ip, :string
end
I could just disable it, but if I'm not going to be using it at all I'd prefer to remove it entirely.
I'm sticking with the external service for now. The biggest advantage is that it's already written and working.
Each video encode takes around 10 seconds and if I get a lot of users the server resources would need careful management in a Rails implementation. With the Zeit Now function it just scales automatically and can handle any number of encodes.
After a user submits a URL via a form I need to run a handful of jobs in series
What would be a good way to do this?
- User submits URL
- Use the URL to build file paths
- Download file 1 from file path and upload to S3
- Download file 2 from file path and upload to S3
- Call an API endpoint with the S3 URLs
Steps 3 and 4 could be done in parallel but they both need to be complete before step 5 can run so I'm just going to keep it simple and run all in series.
The naïve option seems to be to simply call each step at the end of the previous one. I found this article on multi-step jobs but it seems overly complicated.
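That "call the next step at the end of the previous one" idea can be sketched in plain Ruby. In the real app these would be ActiveJob classes enqueued with `perform_later`; the class names, paths, and URLs below are hypothetical stand-ins:

```ruby
# Each "job" hands off to the next one as its last action.
class NotifyApiJob
  def perform(s3_urls)
    # Step 5: call the API endpoint with both S3 URLs (simulated here by
    # returning them).
    s3_urls
  end
end

class UploadToS3Job
  def perform(paths)
    # Steps 3 & 4: "upload" each file and collect its S3 URL, then hand off.
    s3_urls = paths.map { |p| "https://s3.example.com/#{File.basename(p)}" }
    NotifyApiJob.new.perform(s3_urls)
  end
end

class BuildFilePathsJob
  def perform(url)
    # Step 2: build the two file paths from the submitted URL.
    paths = ["#{url}/file1", "#{url}/file2"]
    UploadToS3Job.new.perform(paths)
  end
end

BuildFilePathsJob.new.perform("https://example.com/abc")
# => ["https://s3.example.com/file1", "https://s3.example.com/file2"]
```

With ActiveJob the hand-off is the same shape, just `NextJob.perform_later(args)` as the last line of each `perform`.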
Great intro! We use Twilio to send SMS notifications to our users.
Question - I set up the Twilio client in an initializer. Is there any advantage to this over setting it up in the Twilio service object? I know global variables are generally frowned upon but they seem ok for clients like this.
# config/initializers/twilio.rb
require 'twilio-ruby'
Twilio.configure do |config|
  config.account_sid = ENV['TWILIO_ACCOUNT_SID']
  config.auth_token = ENV['TWILIO_AUTH_TOKEN']
end
$twilio = Twilio::REST::Client.new
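For comparison, the service-object version could look roughly like this. `SmsSender` is a hypothetical name, and `Object.new` stands in for `Twilio::REST::Client.new` so the sketch runs without the gem; the point is the memoized, per-service client rather than a `$twilio` global:

```ruby
class SmsSender
  def send_message(to:, body:)
    # real version: client.messages.create(from: ..., to: to, body: body)
    client
  end

  private

  # Built lazily on first use, then reused for the lifetime of this object.
  def client
    @client ||= Object.new # real version: Twilio::REST::Client.new
  end
end
```

Keeping the client inside the service makes it easy to stub in tests and keeps the Twilio dependency out of global state, at the cost of one client per service instance instead of one per process.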
And thanks for the tip on the Google phone number gem. I've been using Twilio's Lookup REST API to validate phone numbers. It's likely more accurate, but it makes an API call for every validation. Not so great. I think I'll be switching to phonelib soon.
Posted in How to Build a Forum with Jumpstart Pro
What do you see for `rails routes -c posts`?
Posted in Should I rewrite my app?
Thanks guys. Good reminder about Basecamp. I did already know that, being a user of both Classic and Basecamp 2. Very interesting that they keep every version alive.
Thumbs up for the rewrite so far then. Any naysayers?
Posted in Should I rewrite my app?
Before you shout "NO!!" hear me out. I think this may be one of those cases where a rewrite is reasonable.
Our current application:
- is 6 years old
- is currently on Rails 3.2.14 and Ruby 2.0.0
- basic architecture and data hierarchy is fairly simple
- 15 models/controllers
- 12 service objects
- 50% test suite coverage
- 17,000 users
- 500,000 database records
- 12 database tables
Upgrading all the way from 3.2.14 to 6.0.0 looks like an enormous undertaking. Integration tests requiring JS broke a long time ago and I haven't had time to get them working again. This would need to be addressed before doing any kind of upgrade.
I'm the sole developer/designer on this app. We don't have the budget to hire out for upgrades.
If I do a rewrite I'll start with the Jumpstart Pro template which will save me dozens of hours of work for users, authentication, impersonation, social logins, payments, forms, Tailwind, etc.
I would be able to implement Active Storage, Action Cable, Action Text, Trix, Turbolinks, form_with and other 'new' features from scratch instead of trying to adapt our current code.
To test the feasibility of a rewrite I'm going to code about 20 hours over the next month and see how far I get. My main concern is the database. Whether to rewrite in a format 100% compatible with the existing data or to optimize then import data to a new database.
Thoughts?
Posted in Why Wistia?
Plyr is a really nice, accessible and well-designed player for video. It has support for Vimeo.
Homepage: https://plyr.io/ & Github: https://github.com/sampotts/plyr
Also, api.video is an interesting service. They don't charge for video encoding, storage or downloads (views) but they do charge for ingestion (uploads). The free plan could be a good option for when you're just starting out but it can get kinda pricey if your app grows quickly. They have a really nice player which supports easy, secure streaming.
Great questions! I'd like to know too.
Posted in How should I deal with lots of images?
I understand the paranoia, but it's more a case of protecting access than the actual location of the files themselves. Most cloud systems offer 'encryption at rest' and you would be using HTTPS to load the images. That being said, if you have SLAs and privacy policies already in place you might be better off using your own storage.
The method would be the same as I mentioned earlier - you store an Image record in your database with something like `image.source` containing the URL of the file. You would have some kind of server set up in front of your file storage to serve the files via URLs.
It would be up to you to limit access to those files to your own application. On AWS S3 files/folders/buckets are set to private by default and you could request a short-lived (say, 10 minutes) access URL every time you load a file. You can also use CORS to limit who can access your files.
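The short-lived access URL idea works roughly like this under the hood: an S3 presigned URL is essentially an HMAC signature over the request plus an expiry time. A plain-Ruby sketch of the concept - the host and `SECRET` are hypothetical, and with S3 you'd use the SDK's presigned URL helper rather than rolling your own:

```ruby
require 'openssl'

SECRET = 'replace-with-a-real-secret' # hypothetical signing key

# Issue a URL that is only valid for the next `expires_in` seconds.
def signed_url(path, expires_in: 600)
  expires_at = Time.now.to_i + expires_in
  sig = OpenSSL::HMAC.hexdigest('SHA256', SECRET, "#{path}:#{expires_at}")
  "https://files.example.com#{path}?expires=#{expires_at}&sig=#{sig}"
end

# The file server re-computes the signature and checks the expiry before
# serving anything.
def valid?(path, expires_at, sig)
  return false if Time.now.to_i > expires_at.to_i
  expected = OpenSSL::HMAC.hexdigest('SHA256', SECRET, "#{path}:#{expires_at}")
  OpenSSL.secure_compare(expected, sig)
end
```

Because the signature covers the path and expiry, a leaked URL stops working after ten minutes and can't be edited to point at a different file.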
There's likely an easy way to do this between your application and your file storage if it's all on the same internal network - one that probably avoids public web access entirely - but I have no idea how to do that.
Posted in Add dump.rdb to .gitignore?
I just bootstrapped a Rails app with the basic Jumpstart template (amazing work!) and I see a `dump.rdb` file was generated in the app root directory.
I assume this is from the running Redis instance?
Can I safely add this to my `.gitignore`?
I'm building an app that takes file uploads from a user and creates videos from them. I have the simplest possible MVP running for it right now, with a plain HTML form sending the contents to a serverless function on Zeit Now which returns a video URL.
Now that I want user auth, proper database storage, and to start charging for it, I'm building it out as a full Rails app.
My two options seem to be:
- Upload the user files to the Rails app, do the video encoding in a background job, then finally upload to S3 (shown in Episode #150, Shrine Backgrounding and Video Transcoding).
- Use Active Storage to upload the source files direct to S3 and continue to use my Zeit Now serverless function to create the video via webhook calls.
Which would you choose and why?
I'm looking at Wasabi for storing our user videos. Their pricing model is very keen but slightly misleading. The 'free egress' definitely has limits. The downloads of your stored files are free as long as the total monthly download bandwidth doesn't exceed your monthly storage volume.
For example, if during a given month you store 1TB of files and the total download bandwidth for those files for the month is less than 1TB you're covered.
If the download bandwidth exceeds 1TB then you're not a good fit for this service and if you exceed on a regular basis they will help you to move to another service.
All this is covered here: https://wasabi.com/pricing/pricing-faqs/
Having said that, it looks like a good fit for us since we store user-customized videos for teaching purposes. Each video is only viewed a few times but we store it forever. This means our monthly download bandwidth is quite slim compared to the storage volume. Currently we store around 3TB but the monthly download bandwidth is in the 200GB range.
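As a quick sanity check, the free-egress rule above can be computed directly (numbers are the ones quoted in this post):

```ruby
# Wasabi's rule: monthly download bandwidth must stay under the monthly
# storage volume for egress to remain free.
storage_tb = 3.0   # stored this month
egress_tb  = 0.2   # downloaded this month (~200GB)
within_free_egress = egress_tb <= storage_tb
# => true
```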
Posted in How should I deal with lots of images?
I would use a cloud service like AWS S3 to store lots of images and save the URLs to the image model in the database, like:
image.source = "http://yourapp.s3.amazonaws.com/969ba2d01cc3.jpg"
To display the images on the page you can do `image_tag(image.source)`.
AWS S3 is fairly standard but there are other options like Digital Ocean Spaces, Backblaze B2, Wasabi, etc. (I'm looking at Wasabi right now because they're touting themselves as 80% cheaper than S3!)
If you want to display thumbnails for the images you can run them live through a transformation service like Imgix. You simply add your cloud storage as an Imgix 'source' then call your images through that, appending something like `?w=250&h=250` to give you a faster-loading thumbnail.
Imgix transformations are so advanced you could probably do your signature overlays using it too. They can take multiple images and combine them with offsets, alpha values, etc. Amazing stuff.
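The thumbnail trick can be sketched as a tiny helper: swap the stored S3 host for your Imgix source domain and append the resize params. The `w`/`h`/`fit` params come from Imgix's rendering API; the `yourapp.imgix.net` subdomain here is made up:

```ruby
require 'uri'

# Rewrite a stored image URL to serve a resized version through Imgix.
def thumbnail_url(source_url, width: 250, height: 250)
  uri = URI(source_url)
  uri.host  = 'yourapp.imgix.net' # hypothetical Imgix source domain
  uri.query = URI.encode_www_form(w: width, h: height, fit: 'crop')
  uri.to_s
end

thumbnail_url('http://yourapp.s3.amazonaws.com/969ba2d01cc3.jpg')
# => "http://yourapp.imgix.net/969ba2d01cc3.jpg?w=250&h=250&fit=crop"
```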
Posted in How do I handle this race condition?
Thanks Chris! π
Posted in How do I handle this race condition?
Thanks Chris. I haven't done any type of scheduling stuff before. Any good resources or videos for this?
Posted in How do I handle this race condition?
I'm using Uploadcare to allow my users to upload files. When the upload completes, Uploadcare returns a file UUID to my page, which I use to create an Upload record in my DB with a remote AJAX call.
At the same moment Uploadcare also sends a webhook to my endpoint which enables me to decide if the file requires processing or not (video conversion).
Problem is, the webhook will sometimes hit my app before the upload record is created so the endpoint function can't find the upload. There's no way for me to delay the sending of the webhook or for me to create the upload any earlier in the process.
What are my options for solving this?
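One common option is to make the webhook handler tolerant of the race: retry the lookup briefly until the AJAX-created record appears. A plain-Ruby sketch - the in-memory `UPLOADS` hash and `find_upload` are hypothetical stand-ins for `Upload.find_by(uuid: ...)`:

```ruby
# Stand-in store; in the app this is the uploads table.
UPLOADS = {}

def find_upload(uuid)
  UPLOADS[uuid] # real version: Upload.find_by(uuid: uuid)
end

# Retry the lookup with exponential backoff before giving up. In Rails this
# fits naturally in a background job enqueued by the webhook endpoint, so
# the web request itself never sleeps.
def find_upload_with_retry(uuid, attempts: 5, base_delay: 0.5)
  attempts.times do |i|
    upload = find_upload(uuid)
    return upload if upload
    sleep(base_delay * (2**i)) # 0.5s, 1s, 2s, 4s, ...
  end
  nil
end
```

If all attempts fail you can let the job's own retry mechanism reschedule it, which covers even unusually slow AJAX round-trips.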
I found several articles but they all involve more complex setups running unicorn, or another server, or using the ngrok-tunnel gem. I'm looking for a simpler solution. I was hoping I could write a simple rake task like:
desc 'Starts rails server and ngrok'
task :start do
  Process.exec("thin start")
  Process.exec("ngrok http 3000 -subdomain=mysubdomain")
end
Then just `rake start`, but of course only the first process runs and ngrok never starts.
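The reason only the first process runs is that `Process.exec` replaces the current Ruby process, so the second line is never reached. One possible fix is `Process.spawn`, which starts each command in its own child process and lets the task wait on both (command strings are the ones from the snippet above):

```ruby
require 'rake'
extend Rake::DSL

desc 'Starts rails server and ngrok'
task :start do
  pids = [
    Process.spawn('thin start'),
    Process.spawn('ngrok http 3000 -subdomain=mysubdomain')
  ]
  pids.each { |pid| Process.wait(pid) } # keep rake alive until both exit
end
```

A Procfile run with foreman or overmind is another common way to start both with a single command.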
There's a lot of data and it feels funky to store it all locally and try to keep it all in sync. But on the other hand, calling the Stripe API all the time feels wrong too.
I'm thinking about card brand, card last 4 digits, trial end date, coupon applied, coupon end date, next billing date, etc.
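A common middle ground is to cache just those display fields locally and refresh them from Stripe webhooks (`customer.subscription.updated`, `customer.updated`, etc.), only hitting the API when a webhook says something changed. A sketch - `BillingCache` and the helper are hypothetical, but the field names mirror Stripe's subscription and card objects:

```ruby
# Only the handful of fields the UI needs, copied out of Stripe's objects
# (passed here as plain hashes, the shape webhook payloads arrive in).
BillingCache = Struct.new(:card_brand, :card_last4, :trial_ends_at, :next_billing_at)

def build_billing_cache(subscription, card)
  BillingCache.new(
    card['brand'],
    card['last4'],
    # Stripe sends unix timestamps; trial_end is nil when there's no trial.
    subscription['trial_end'] && Time.at(subscription['trial_end']),
    Time.at(subscription['current_period_end'])
  )
end
```

In a real app these values would land in columns on your own subscription record, so page loads never touch the Stripe API at all.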