Activity
I usually recommend Redis, and the reason is simple: you're probably already using it to run your background jobs with Sidekiq or the like, so you might as well take advantage of that. You can run a separate Memcached instance, but I don't think you'll see a major difference unless there's a feature you specifically need that Redis doesn't provide. Basecamp uses Redis for caching, so it's definitely up to the task. :)
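If you do go with Redis for caching, wiring it up in Rails is basically a one-liner. A minimal sketch, assuming Rails 5.2+ (which ships a built-in :redis_cache_store) and a REDIS_URL environment variable:

```ruby
# config/environments/production.rb
# Sketch: point the Rails cache at your existing Redis instance.
# Assumes Rails 5.2+ (built-in :redis_cache_store) and a REDIS_URL env var.
Rails.application.configure do
  config.cache_store = :redis_cache_store, {
    url: ENV.fetch("REDIS_URL", "redis://localhost:6379/1"),
    # A namespace (or a separate Redis database number) keeps cache
    # entries from colliding with your Sidekiq queues.
    namespace: "cache"
  }
end
```

On older Rails versions you'd pull in a gem like redis-rails to get the same cache store.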
You can try using the older trusty repository for now while they work on updating their apt repository:
sudo sh -c 'echo deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main > /etc/apt/sources.list.d/passenger.list'
The main reason not to use that is that a version manager lets you update Ruby versions independently of the system packages. Ruby 2.3 will get patches, 2.4 will come out, and you may have older apps that still run 1.9, for example. All of those are easily managed with a version manager like rbenv, and you can install new versions without waiting for the package maintainers to update them.
Have you read this thread? https://github.com/mperham/...
You may just need to lower your Sidekiq concurrency so it's not using as many connections, or upgrade your Redis instance to one that allows more. Most Heroku Redis add-ons like this have pretty low connection limits on the cheaper plans.
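For reference, here's a rough sketch of capping Sidekiq's Redis connection pools in an initializer. The size option is Sidekiq's, but the exact numbers are assumptions — the server pool generally needs roughly your concurrency plus a few extra connections, so check your version's docs:

```ruby
# config/initializers/sidekiq.rb
# Sketch: keep total Redis connections under your plan's limit.
# With concurrency 5 (set in config/sidekiq.yml or via -c 5),
# a server pool of 10 leaves some headroom.
redis_conn = { url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0") }

Sidekiq.configure_server do |config|
  config.redis = redis_conn.merge(size: 10)
end

Sidekiq.configure_client do |config|
  # Web processes only enqueue jobs, so a small pool is plenty.
  config.redis = redis_conn.merge(size: 2)
end
```

Add up the pools across all your dynos/processes and make sure the total stays under the plan's connection cap.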
It also hosts the DB and cache because neither is that large. I still have 400MB of RAM free right now.
Fixed!
Whoops, just updated that. Hard to keep all these dang version numbers up to date.
Posted in Searchkick filter with scope
Hey Christophe,
Didn't see this until just now. You actually need to include the status in search_data so that Elasticsearch can index it and let you query on it. Right now you don't include it, so the where query can't match anything on that attribute. Add status to the hash, reindex, and your search should work.
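Something like this, as a sketch — Product and status here are placeholders for your own model and attribute:

```ruby
class Product < ApplicationRecord
  searchkick

  # Searchkick indexes whatever this hash returns, so status has to be
  # in here for Product.search("...", where: { status: "active" }) to match.
  def search_data
    {
      name: name,
      status: status
    }
  end
end
```

After adding it, run Product.reindex so existing records pick up the new field.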
Did you get this working? You'll also need to make sure your background job runner runs as a second process in your Procfile. For example, add sidekiq to your Procfile as a worker process and that will make sure these deliver_later emails actually get sent on Heroku.
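A typical two-process Procfile for that setup might look like this (the web command is whatever server you're already running):

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
```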
Posted in Action Cable vs Mailboxer?
These are really two different things, so there's not much to compare. ActionCable is websockets, allowing you to talk back and forth with the server in realtime. You'd still have to build your messaging system on top of that.
You can actually use ActionCable to implement a realtime messaging system that uses Mailboxer on the backend. It would pretty much just require you to take the stuff I covered in the other episodes and, rather than doing POST requests to the server, use JS to send the messages to ActionCable actions. This should make for a pretty easy implementation of realtime messaging.
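A very rough sketch of that idea — ConversationChannel is a name I made up, and current_user.send_message is Mailboxer's API for creating a message, so treat this as a starting point rather than working code:

```ruby
# app/channels/conversation_channel.rb
# Sketch: ActionCable as the transport, Mailboxer as the backend.
class ConversationChannel < ApplicationCable::Channel
  def subscribed
    # Each user streams messages addressed to them.
    stream_for current_user
  end

  # Called from JS via the channel's perform("speak", ...) instead of a POST.
  def speak(data)
    recipient = User.find(data["recipient_id"])
    # Mailboxer persists the message and conversation.
    current_user.send_message(recipient, data["body"], data["subject"])
    # Push the new message to the recipient in realtime.
    ConversationChannel.broadcast_to(recipient, body: data["body"])
  end
end
```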
So many little nuances to everything, it can be really tough to explain and understand it all!
That's correct, I'm using regular interval polling rather than long polling. Long polling is used more as a replacement for something like WebSockets so that you can get realtime communication happening rather than periodically checking for updates.
If you're using long polling, each request sits open, potentially tying up resources server-side, because what you're really doing is keeping a persistent connection going. ActionCable is a better solution for something like that.
Regarding #2, if you're doing regular interval polling, there will be no slowdown. Generating one request every 5 seconds, when your app can handle say 30 requests per second, means you've consumed 1 of the 150 requests (30 requests/sec * 5 sec) the server could handle in that window, which puts very little extra load on the server. Obviously if you've got tons and tons of users, you'll have to scale up accordingly. The notification requests returning JSON are also a lot lighter weight than your average request because they're not generating a full HTML template. Caching can speed that up further, meaning you can squeeze more than 30 requests per second out when some are regular HTML requests and others are the lightweight notification requests.
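The arithmetic above, spelled out:

```ruby
# Back-of-envelope load math for interval polling, using the numbers above.
capacity_per_sec = 30                                # requests/sec the app can handle
poll_interval    = 5                                 # seconds between polls
window_capacity  = capacity_per_sec * poll_interval  # requests servable per window
load_fraction    = 1.0 / window_capacity             # one poll per user per window

puts window_capacity                         # => 150
puts format("%.2f%%", load_fraction * 100)   # => 0.67% of capacity per polling user
```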
There will be a lot more requests happening, that's true. better_errors won't be affected by that, but what you're likely seeing is that better_errors really works best with WEBrick, not Puma. The trouble is that with multi-process and multi-threaded webservers, keeping track of the stack after the request gets very hard, and because of that Puma doesn't really let you use better_errors as intended.
Twitter does simple polling on an interval every few seconds, just like what I showed in the episode. If you open your browser's developer console and click the Network tab, you'll see periodic requests to /timeline and /toast_poll every handful of seconds. This is how they send notifications out, so it's definitely a solution that scales nicely for large sites.
The trouble with websocket connections is that you can run out of resources quickly, because each user fully consumes a connection to the server. With polling you only use a connection for a few milliseconds and then close it, freeing up the resource for someone else. Websockets are persistent, so once a user has opened a connection, nobody else can use that resource, which makes scaling much harder.
Posted in Push bitmap to clients in realtime
I'd definitely imagine that ActionCable makes the most sense for this. The trouble you'd run into is that the amount of data you need to stream, and how often, rules out polling entirely. In a situation like this it's best to have open connections you can push data over instantly whenever necessary. If you did JS fetching, you'd have to wait for a connection to open each time, which adds too much overhead for fast updates.
Posted in jquery.turbolinks not working on my app
Awesome, glad you got it working! Turbolinks can definitely be tricky to wrap your head around.
Also for reference, this is a pretty great list of solutions to convert JS from popular libraries to Turbolinks compatible stuff: http://reed.github.io/turbolinks-compatibility/twitter.html
Whoo! :)
Give their readme a look on the new iOS adapter. It might help wrap your head around it a bit. https://github.com/turbolin...
Basically your mobile app ends up primarily being a WebView (a full-screen webkit browser), and it lets you point it at your website. This is similar to things like PhoneGap in the past, except that the code is all just your public website, meaning you can update your mobile app at any time by deploying your website again. Pretty slick! I'm not sure if React Native lets you do things like that.
With Turbolinks on mobile, you get a web view that embeds the Rails site just like you would have in your browser, but you can override link clicks with native code. So everything you see on mobile is just as if you were viewing it in the browser. It's a hybrid app because of the webview, but it can easily intercept those interactions and handle them with native Swift or whatever. Turbolinks only needs the server to return HTML, so you don't really need to build an API.
React Native is somewhat similar in that you're still sharing the same app code with the main website, but you will have to build an API to make React work, and you'll need to do some extra work to serve up the HTML as well.
Mostly because I'd rather spend my time doing the heavy lifting in Rails rather than JS. If you've already built your Rails frontend in React, React Native is the way to go. Since I'm using Turbolinks already and don't have any complex JS widgets on the frontend that need React, the Turbolinks adapters are the best solution for me. React could just as easily fill the same gaps; Turbolinks just fits better for my setup.
Super nice that it's included in the library isn't it? :D