Activity
Posted in Deploying Sidekiq To Heroku Discussion
You'll want to check your Heroku logs to find out what crashed when you get the "Something Went Wrong" 500 error. The log will show you the exception and where in your code it was raised.
If you deploy new migrations, you'll need to also run heroku run rails db:migrate to make sure production gets the new database changes as well.
Posted in Deploying Sidekiq To Heroku Discussion
I don't think you mentioned what the error is. What are you seeing?
Nothing really. I tried building a sample app to cache in Rails 5 with what we talked about and everything works as expected there. :\
Posted in Advanced Search, Autocomplete and Suggestions with ElasticSearch and the Searchkick gem Discussion
1. It will auto index the new question, but only the new one that is added. It won't touch the existing ones. Searchkick implements callbacks on the models to automatically index them when they change.
3. You'd have to configure this. For example, you have to figure out what "popular" means. Is it views? Comments? etc. Once you figure out what you want, you can then pass in that data into your search_data so Searchkick indexes it and then you can query against it.
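For example, if "popular" ends up meaning view count, the search_data method might look like this. This is a plain-Ruby sketch (the Question class and views_count column are hypothetical); in the real app this would live on your ActiveRecord model with searchkick declared:

```ruby
# Sketch of a Searchkick search_data method, shown as plain Ruby.
# views_count is a hypothetical "popularity" signal.
class Question
  attr_reader :title, :body, :views_count

  def initialize(title:, body:, views_count:)
    @title = title
    @body = body
    @views_count = views_count
  end

  # Searchkick indexes whatever hash this returns, so once views_count
  # is in the index you can order or boost on it in your queries.
  def search_data
    { title: title, body: body, views_count: views_count }
  end
end
```

Then a search like Question.search("rails", order: {views_count: :desc}) can sort by popularity.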
Posted in Use .ENV instead of Secrets.yml for keys
You can use <%= ENV["AWS_SECRET_KEY"] %> in your secrets.yml to grab environment variables. For example:

production:
  aws_secret_key: <%= ENV["AWS_SECRET_KEY"] %>
The reason why you should continue using secrets.yml is because this provides a consistent interface if you stick with Heroku or migrate to your own server and choose to hardcode the values in a file on the server. Your Rails app doesn't have to care either way and you are free to use ENV variables or hardcoded values interchangeably.
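To see what Rails is doing under the hood, here's a minimal sketch of running that YAML through ERB before parsing it (the key name is just an example):

```ruby
require "erb"
require "yaml"

# Simulate the variable being set on the server / via heroku config
ENV["AWS_SECRET_KEY"] ||= "example-secret"

secrets_yml = <<~YML
  production:
    aws_secret_key: <%= ENV["AWS_SECRET_KEY"] %>
YML

# Rails runs secrets.yml through ERB first, then parses the result as YAML
secrets = YAML.safe_load(ERB.new(secrets_yml).result)
secrets["production"]["aws_secret_key"]  # the value pulled from ENV
```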
Awesome! Yeah, the 12factor gem is useful for Heroku, but if you run your own server you don't want it as it won't write to that file.
Glad you got it working!
Posted in Message Templates Discussion
Yep, you can do that and use scopes in order to filter those as necessary.
Another alternative is a single Template table that stores templates for any object, where you store all the values in a serialized text, hstore or json column. The downside to this is that sometimes you can run into issues with data types not being consistent in the serialized ones.
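Here's a minimal sketch of that single-table idea using a JSON column, with a plain hash standing in for the database row (all names hypothetical), including the type-consistency downside:

```ruby
require "json"

# One templates table, with per-object values serialized into a single
# column. In Rails this would be a json/jsonb column; a hash stands in here.
template = {
  name: "welcome_email",
  templatable_type: "User",   # which kind of object this template is for
  data: JSON.generate(subject: "Welcome!", delay_days: 3)
}

values = JSON.parse(template[:data])
# The downside mentioned above: if a value was ever written as a string
# (e.g. straight from a form), "3" and 3 are different — you have to cast.
values["delay_days"]  # => 3
```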
I'm not quite sure on #1, but as for #2, you've got paperclip which wants an actual file. Facebook's API is only going to return you the url of the file, so you've got to tell Paperclip that you want it to download the file from the URL.
user.image = URI.parse(auth.info.image)
You'll want to modify your User method like this in order to actually assign a URI object to the image which Paperclip will know to download.
http://stackoverflow.com/questions/4049709/save-image-from-url-by-paperclip
I believe you're going to need it on the collection too because that's what the filter affects.
Yeah your logs/production.log (assuming you're running the production environment in your web server) file should have the actual error in it. Sometimes that file won't be written for a variety of reasons, so you might have to fix that first. I can't remember what the solutions were, but you can google for the empty production.log file and find some solutions on StackOverflow.
Hmm, I can't quite tell where you put it. Did you put it on your Index? <% cache [params[:rating], @profiles] do %>
Well if you've done that, you also need to salvage the old file uploads by moving them from one of the old release folders into the new public/system folder that's being symlinked. That's likely why your images still don't work. Kind of a pain as well.
If you are just using test data, you can try reuploading an image, doing a new deploy, and making sure that image still works.
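That salvage step can be sketched like this (the release and shared paths are hypothetical stand-ins — adjust them to your actual deploy layout):

```shell
# Copy uploads out of an old release into the shared folder that
# public/system now symlinks to. Paths below are illustrative only.
old_release="releases/20160101000000/public/system"
shared="shared/public/system"

mkdir -p "$old_release" "$shared"        # stand-in layout for this sketch
touch "$old_release/avatar.png"          # pretend this is an old upload

cp -a "$old_release/." "$shared/"        # bring everything across
ls "$shared"                             # avatar.png now lives in shared
```

After copying, do a fresh deploy and confirm the old images render before cleaning out the release folders.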
Doh, yeah they do that sometimes. At some point soon I'm going to swap out Disqus with my own comment system.
Basically, those methods with ! at the end mutate (or modify) the array in place. Without the !, map creates a new array in memory, so you'd have two full arrays in memory at once, and you'd only discard the original after the method completes. Using the ! methods should improve your object allocations because they won't be creating new arrays in memory.
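A quick sketch of the difference:

```ruby
numbers = [1, 2, 3]

# map allocates a brand-new array, so both live in memory at once
doubled = numbers.map { |n| n * 2 }
doubled.equal?(numbers)  # => false — different objects

# map! rewrites the same array in place: no second array is allocated
numbers.map! { |n| n * 2 }
numbers  # => [2, 4, 6]
```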
Posted in Pull data from another table in a lookup
@Alan, they're actually badges, so you get a badge at 1, 3, 6, 12, and 24 months but only after you cross that threshold. Going to be doing similar things for answering questions and whatnot. :D
@Jacob, yeah for sure! The editor is https://github.com/NextStepWebs/simplemde-markdown-editor which I like quite a bit. It was easy to set up, but unfortunately it doesn't provide any way of doing @mentions. Guess I may have to build my own editor at some point instead.
Posted in Pull data from another table in a lookup
Yeah, there's a lot of changes, I'll be announcing it all shortly!
Posted in Pull data from another table in a lookup
@Alan, I added a reactions feature so you can give kudos to other people just like on Github issues. :D
I think you're generally going to struggle optimizing the number of allocations in this because your operation happens on every single user balance. It's going to allocate a ton of objects no matter what you do since you're loading every active one.
One optimization you can make is to pluck the active user IDs and query by that rather than loading up all those objects. I would also test to see if the in place mutation methods improve performance like so:
def self.all_users_gold_balances
  gold_gateway_id = Gateway.find_by(currency: "GLD").id
  active_user_ids = active_users.pluck(:id)

  # joins (rather than includes) lets pluck read columns from both tables
  user_balances = Balance.where(gateway_id: gold_gateway_id, user_id: active_user_ids)
                         .joins(:user)
                         .pluck("balances.amount", "users.account_number")

  # Build the hashes in place instead of appending to a separate array
  user_balances.map! { |amount, account_number| {account_number: account_number, gold_balance: amount.truncate(3)} }
  user_balances.sort_by! { |hash| hash[:account_number] }
end
Yeah that should work. What cache key is that referencing in your logs?
You could include the filter and role in your cache key and save them as separate caches.
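For example, a sketch of composing one cache key per (filter, role) combination (the key parts are hypothetical):

```ruby
# Each filter/role combination gets its own fragment-cache key,
# so each filtered, role-specific view is cached separately.
def profiles_cache_key(filter:, role:)
  ["profiles", filter, role].compact.join("/")
end

profiles_cache_key(filter: "5-star", role: "admin")  # => "profiles/5-star/admin"
```

In the view that would look something like <% cache [params[:filter], current_user.role, @profiles] do %>.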
I would be somewhat careful with that because if you have a lot, then you're just filling up the cache storage with stuff that's likely to get blown away and you won't get any benefits from caching. Compiling a list of small cache snippets is super fast, so you might consider just not caching the list, but cache each individual item instead. The compilation of everything will be fast enough and can save you on some cache storage space if you notice bloat saving all the different copies.
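Here's the per-item idea as a plain-Ruby sketch, with a hash standing in for Rails.cache (names and markup hypothetical):

```ruby
# Cache each item's rendered snippet individually instead of caching
# the whole compiled list; joining the snippets at the end is cheap.
cache = {}

def render_item(item)
  "<li>#{item[:name]}</li>"
end

items = [{id: 1, name: "Alpha"}, {id: 2, name: "Beta"}]

# Each item hits the cache on its own key; only misses get re-rendered
html = items.map { |item| cache[item[:id]] ||= render_item(item) }.join
html  # => "<li>Alpha</li><li>Beta</li>"
```

Changing one item then only invalidates that item's entry, not a cached copy of the entire list.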