Activity
If you use the gem, you have to use the asset pipeline for it. This is what I would recommend anyways. The asset pipeline is perfect for these things. Any reason why you're avoiding the asset pipeline?
So generally you want all your account related things to be outside the tenant.
User
Account / Tenant / Organization / Team (whatever you call this)
UserAccount (join table between the two to give Users access to accounts)
Everything else goes inside the tenant since those should all be private.
Or, you can put User inside the tenant and require users to log in from the correct subdomain. This is how Slack works: you can use the same email to create many accounts. Accounts aren't global, so you don't have a central login.
Kinda depends on what you want to accomplish. If you want users to be signed into all their tenants at once, putting the model outside the tenant is best.
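If it helps, here's a rough sketch of that model layout. The association macros are stubbed as no-ops so the snippet runs outside Rails; in a real app these would be ActiveRecord models and the stubs would just be `has_many :through` associations:

```ruby
# Stand-in for ActiveRecord so this runs outside Rails; in the app,
# these macros come from ActiveRecord::Base / ApplicationRecord.
class ApplicationRecord
  def self.has_many(*args, **opts); end
  def self.belongs_to(*args, **opts); end
end

class User < ApplicationRecord
  has_many :user_accounts
  has_many :accounts, through: :user_accounts
end

# Account / Tenant / Organization / Team
class Account < ApplicationRecord
  has_many :user_accounts
  has_many :users, through: :user_accounts
end

# Join table giving Users access to Accounts
class UserAccount < ApplicationRecord
  belongs_to :user
  belongs_to :account
end
```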
Hey Nick,
That's sure weird! It looks like your JavaScript is submitting the form twice.
You can tell because the first POST to /memberships hasn't finished before the second one starts. That's why the logs are mixed together and you have those two UPDATEs on the user: they're processing at almost exactly the same time.
You should do some debugging on your JavaScript to figure out why it's executing twice, and that should fix the problem.
Posted in SweetAlert integration
Hey Michael,
Not quite sure what's going on there, but is there any reason you're not using the gem? It already hooks into the standard Rails data-confirm stuff, makes this seamless, and lets you customize the colors and buttons like usual.
Posted in Disaster recover plan?!
I should clarify that. If your uploads go to S3, anything that happens to your server won't affect those files (which is good). You can also set up your S3 bucket to back itself up to another bucket, or to another service if you wanted.
Posted in Disaster recover plan?!
Yeah, that's what I was thinking as a first version. 🙌
Posted in Disaster recover plan?!
Definitely agreed. That's on the todo list, but there are a few things that are higher priority at the moment. Plus there are SO many configuration options for the Backup gem that it'd be hard to verify they're all correct and test them in the UI.
Posted in Disaster recover plan?!
This is a great question Karim! Exciting to get your app in production for a customer! 🍻
So a couple things to consider here:
- Database backups are safest when you upload them to somewhere third-party like S3. You can edit your Backup config on Hatch via SSH to use your S3 credentials for uploads. That will keep them safe in case anything happens to your server.
- DigitalOcean backups are handy for quick restoration of a server. You can create a new server based on your previous one in a minute or two this way. If you didn't have those, you'd need to set up a new server, which takes a few minutes using Hatch. It can speed recovery up a little bit, which is good.
- You'll want to make sure that you have practiced database restoration at least once before you go to production to be safe. Download one of the db backups from your app to your local computer and try restoring it to Postgres locally. This command should import the backup into a database:
psql -U <username> -d <dbname> -1 -f <filename>.sql
Once it's imported, you can check that all the same data is available locally as it was on your server.
- If you have file uploads that are stored locally, you'll want to include those in your backup and restore process. If they're uploading to S3, you don't have to worry about it.
That's most of it. You should also set up the Backup config to notify you on Slack or email for failed backups, to be safe. If anything goes wrong, you'll want to address it right away.
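For reference, the relevant parts of a Backup model config look roughly like this (the trigger name, bucket, region, and credentials are all placeholders; check the Backup gem docs for the full list of options):

```ruby
# Sketch of a Backup gem model config (values are placeholders)
Model.new(:my_app_backup, "My App Backup") do
  store_with S3 do |s3|
    s3.access_key_id     = "YOUR_AWS_KEY"
    s3.secret_access_key = "YOUR_AWS_SECRET"
    s3.region            = "us-east-1"
    s3.bucket            = "my-app-backups"
    s3.path              = "/backups"
  end

  # Get notified on failures so you can address them right away
  notify_by Mail do |mail|
    mail.on_success = false
    mail.on_failure = true
  end
end
```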
Hey Liam!
So I use Janus for Vim. If yours behaves differently, I'm guessing it's a config option Janus ships by default. They ship a lot of good defaults, and I originally used it just to replicate my workflow from Sublime in Vim. I actually don't even know all the details of how they configure everything since it all worked out really nicely from a fresh install haha.
Maybe you can find their NERDTree config in the repo or you might like to just try out Janus itself.
Posted in Do I need rails-ujs and jquery_ujs?
I did an episode on the new Rails UJS library. It replaces everything jquery_ujs does, but it just doesn't require jQuery: https://gorails.com/episodes/rails-ujs-primer
Feel free to add jQuery in, but you won't need the jquery_ujs library. Everything will still function exactly the same as before; they just want to reduce dependencies.
Very cool. Like the built-in Rails mailer previews but with a UI. http://guides.rubyonrails.o...
Since ActiveJob is just a wrapper for whatever background processor you use, all you have to do is write your code to use ActiveJob. All your emails already support ActiveJob, so to send them in the background you can just say "UserMailer.notification(user).deliver_later" and the deliver later part will create a job for it.
As for everything else like notifications and csv downloads, you'd just write your code in a job and have your controllers start the job. Nothing too fancy.
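To show the shape of that, here's a minimal sketch of a background job. ApplicationJob is stubbed so the snippet runs outside Rails (in a real app it comes from app/jobs/application_job.rb, and perform_later enqueues the job instead of running it inline), and the CsvExportJob name is made up for illustration:

```ruby
# Stand-in for ActiveJob's ApplicationJob so this runs outside Rails.
# In a real app, perform_later enqueues a job for your background
# processor instead of calling perform inline like this stub does.
class ApplicationJob
  def self.queue_as(name)
    @queue_name = name
  end

  def self.perform_later(*args)
    new.perform(*args)
  end
end

# Hypothetical job: build a CSV for a user in the background
class CsvExportJob < ApplicationJob
  queue_as :default

  def perform(username)
    "name\n#{username}\n" # stand-in for the real CSV generation
  end
end

# In a controller action you'd just call:
CsvExportJob.perform_later("karim")
```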
Is there anything specific I could cover for you that would help wrap your head around it?
I think most services confirm your email, but I don't know for sure if they force you to do that before using OAuth. It's definitely something to think about. Automatic merging is probably okay so long as you can trust that those accounts were already confirmed. I'm just not 100% sure that they are. I would guess that every service is different, so it's probably safest not to auto-merge. If you find out more details on that, let us know!
Roadtrippppps. 🚗
Posted in Hatch / Updates
Hey Louis-Philippe!
The goal is certainly to help you upgrade Postgres and Redis. One of the tricky bits is that your code will often be affected by this, so I don't want to accidentally break your apps.
Once there's a major version update, I'm going to be working on a script to help you do the migration as seamlessly as possible. The process is usually pretty simple for Postgres where you only have to run a couple commands to stop the old database, migrate it to the new version, and start the new version. You can see the process here: https://gist.github.com/delameko/bd3aa2a54a15c50c723f0eef8f583a44
Redis is similar, although it's probably easier to upgrade than Postgres, which is great.
Hatch won't take over full control of your servers like Heroku does, so you're free to upgrade things at any time or make changes as you like. At the end of the day, I want you to have full freedom to run what you want and have Hatch just make your life a bit easier so you're not dealing with the hassle of server management constantly.
Hey Karim! Excited you're moving to Hatch!
Jack's answers are all spot on.
- There's a gem called Backup that Hatch helps you set up for each app. You'll need to log in to your server to edit the config if you want the backups to upload to S3 or something, but it's really easy to use.
- I haven't used delayed_job on Hatch yet myself, but I believe someone already has. The important thing will be to add these commands to the deploy script (and make sure you've got the daemons gem installed): https://github.com/collectiveidea/delayed_job#running-jobs If you run into any trouble with this, let me know and I'll help you get that situated. I generally recommend Sidekiq these days over delayed_job, resque, etc., so this feature is a little less tested.
- Like Jack mentioned, RAM is going to be the most important thing here. You'll generally want a 1GB or 2GB server. If you don't know, you can start with a 1GB server and DigitalOcean makes it super easy to migrate up to a larger size if you start getting memory errors.
- Hatch has two cool features around logs. First, you can actually view the logs in the web UI by clicking on the "Rails Logs" tab of your app. It will retrieve the logs for you and you can read the last 300 lines or so. Second, because logs can get unwieldy, your logs on the server are rotated daily so that you don't run out of disk space. Every day they get compressed and labeled with the date, so if you ever need to go back into the logs from a couple days ago, you can SSH in and find those.
- A little bash script or alias will do the trick for you like Jack mentioned. This would be nice to add to Hatch's UI so you could copy and paste that into your Bash or ZSH config. 👍
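Something like this in your ~/.bashrc or ~/.zshrc would do it (the alias name, user, and IP are placeholders for your own server):

```shell
# Hypothetical alias for SSHing straight into your app server
alias myapp-ssh='ssh deploy@203.0.113.10'
```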
I like that approach. You could certainly simplify your routes file, and that would take care of things in a much nicer way. Loading the YAML file in an initializer and looping through it with some Ruby in the routes file should do the trick really easily. And it's super flexible, so you can easily change the format or whatever you like in the future.
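A rough sketch of that idea, with the YAML inlined here so it runs standalone (the file name, constant, and routes are all made up; in the app you'd `YAML.load_file(Rails.root.join("config/pages.yml"))` in an initializer instead):

```ruby
require "yaml"

# Stand-in for a hypothetical config/pages.yml mapping slugs to
# controller actions; loaded once in an initializer in a real app.
PAGES = YAML.safe_load(<<~YML)
  about: pages#about
  pricing: pages#pricing
YML

# Then config/routes.rb just loops over it:
#   PAGES.each { |slug, target| get "/#{slug}", to: target }
PAGES.each do |slug, target|
  puts %(get "/#{slug}", to: "#{target}")
end
```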
That works. The other thing you can do is look at @articles.total_entries with will_paginate, I believe, which will give you the count of all the results, not just the current page. With Kaminari it's total_count.
Your solution is simple enough though so I would stick with that. 👍
You're definitely doing something that is outside the scope of most pagination gems. What I would probably do is customize the view template to only show the first 5 page links. This would visually make it so users could only see the first X items, and then you can have your controller also verify the page number is between 1 and 5 so users can't go outside those boundaries.
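The controller check can be a simple clamp before you paginate (the helper name is made up for illustration):

```ruby
# Clamp the requested page into an allowed range so users can't
# browse past page 5, whatever value they put in the URL.
def clamped_page(raw_page, max_pages: 5)
  page = raw_page.to_i      # nil and garbage strings become 0
  page = 1 if page < 1
  [page, max_pages].min
end

clamped_page(nil)    # => 1
clamped_page("3")    # => 3
clamped_page("999")  # => 5
```

Then the controller paginates with something like `Article.paginate(page: clamped_page(params[:page]))` (or the Kaminari equivalent) and the view only ever renders those 5 page links.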
What does your solution look like right now?