Thomas Bush

Activity

Chris, thanks for your reply. Sorry, I forgot to mention that I removed the show action from the default before_action.

Here's what that currently looks like:

before_action :set_product, only: [:update, :destroy]

I added the paranoia gem to my CMS application following your soft delete tutorial. I would like users to still be able to access the show page of soft-deleted items -- this is where I would add a restore link.

I assumed all I would need to do was build the instance in the controller show action with the with_deleted scope:

def show
  @product = Product.with_deleted.find(params[:id])
  ...
end

This results in the following error:
Couldn't find Product with 'id'=139 [WHERE `products`.`deleted_at` IS NULL]

I am confused: entering Product.with_deleted.find(139) in the Rails console works exactly as expected, so I do not understand what I am missing here.
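
For context, here is what I have in mind for the restore link's target -- just a sketch, assuming paranoia's Model.restore class method and a hypothetical member route (patch :restore on products):

def restore
  # paranoia's class-level restore clears deleted_at for the given id
  Product.restore(params[:id])
  redirect_to product_path(params[:id]), notice: "Product restored."
end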

Posted in Problems Viewing Videos

Sorry for the late response. Yes, I have poked around the videos -- I have not had any further issues. Since that time I have updated Safari with the Sierra update. I am on Xfinity internet, and even though I pay for high speed, that doesn't mean I am always going to get it; I know that was an issue for @ShakyCode.

Posted in Problems Viewing Videos

Yes, same issue on the YouTube link as well. I assume that means buffering issues on my end? Sorry for the false alarm.

Posted in Problems Viewing Videos

I know this issue is solved by simply removing the autoplay, but one thing I noted -- this also occurs with YouTube videos. I just clicked the link for the new Phusion video on Twitter. I was taken to the video page (not logged in) and shown the YouTube video, which stalled in an endless load twice, at exactly 42 seconds.

Obviously, as stated, removing the autoplay portion and reloading the page solves the issue; I just figured this may be useful so you don't limit your bug search to Wistia. Also, I know you mentioned this was going to be disabled, but I still got that link when clicking through from Twitter. I am perfectly happy as long as videos play -- just figured I would offer more for informational purposes.

Thanks for the response, Chris! I don't actually need a reference to all the individual files, but I do need to preserve the file structure.

The zip file contains two folders of product images, 'normal' and 'large' -- these names remain unchanged throughout my processing. Each of these subfolders contains 36 images -- essentially a 360-degree view of the product. The main zip folder will always be renamed to the product number, and all images in the 'normal' and 'large' subfolders are renamed 1-36.jpg.

I can currently get the zip file up to S3 with CarrierWave, but it sounds like CarrierWave may not be the correct solution for the processing portion of this problem -- running the task that unzips and standardizes subfolder and file names. So I need to find some other method to hook into S3 and run the task? Does that make sense? Any idea how I would do that? I have most of the task completed; I just don't know how to hook into S3 and run it.
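
Roughly, the shape I have in mind is something like the following -- a sketch using the aws-sdk-s3 and rubyzip gems rather than CarrierWave. The bucket name, keys, and paths are all hypothetical placeholders, and it assumes the original filenames sort in rotation order:

require "aws-sdk-s3"
require "zip"
require "fileutils"

BUCKET = "my-bucket"
s3 = Aws::S3::Client.new(region: "us-east-1")

# 1. Pull the uploaded zip down from S3
s3.get_object(bucket: BUCKET, key: "products/123/rotator.zip",
              response_target: "/tmp/rotator.zip")

# 2. Unzip locally, skipping Mac metadata
Zip::File.open("/tmp/rotator.zip") do |zip_file|
  zip_file.each do |entry|
    next unless entry.file?
    next if entry.name =~ /__MACOSX|\.DS_Store/
    path = File.join("/tmp/rotator", entry.name)
    FileUtils.mkdir_p(File.dirname(path))
    zip_file.extract(entry, path) { true } # overwrite if already present
  end
end

# 3. Standardize: copy the images into a clean tree as 1.jpg..36.jpg
%w(normal large).each do |folder|
  Dir[File.join("/tmp/rotator", "**", folder, "*.jpg")].sort.each_with_index do |file, i|
    dest = File.join("/tmp/clean", folder, "#{i + 1}.jpg")
    FileUtils.mkdir_p(File.dirname(dest))
    FileUtils.cp(file, dest)
  end
end

# 4. Push the cleaned-up tree back to S3 under the product number
Dir[File.join("/tmp/clean", "**", "*.jpg")].each do |file|
  key = file.sub("/tmp/clean", "products/123")
  File.open(file, "rb") { |body| s3.put_object(bucket: BUCKET, key: key, body: body) }
end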

Goal

  • Accept a zip file containing some config files as well as two folders filled with images
  • Unzip the folder
  • Run a method to standardize the subfolder and image names, and remove unnecessary config files
  • Push the end result to S3

Attempt

I am using CarrierWave and a custom processor. I take in a .zip file, unzip it, and push the end result to S3. In an effort to break a larger problem down into smaller steps, I am intentionally skipping the portion where I rename subfolders/files and remove the extra config files. I already have this renaming/cleanup method written as a rake task, so I don't assume the conversion will be that hard.

Problem

CarrierWave seems to be uploading the zip file itself, even though I am unzipping it in the processor and the temp cache folder is unzipped as a result of my processing.

Carrierwave uploader

# encoding: utf-8
class RotatorUploader < CarrierWave::Uploader::Base
  include CarrierWave::RotatorConversion
  storage :fog

  # e.g. "products/123/rotator"
  def store_dir
    "#{model.class.to_s.underscore.pluralize}/#{model.id}/rotator"
  end

  def extension_white_list
    %w(zip)
  end

  process :unzip_folder
end

Custom Processor

module CarrierWave
  module RotatorConversion
    extend ActiveSupport::Concern
    module ClassMethods
      def unzip_folder
        process :unzip_folder
      end
    end

    def unzip_folder
      # move the upload into the local cache if it isn't there already
      cache_stored_file! if !cached?

      directory = File.dirname(current_path)

      Zip::File.open(current_path) do |zip_file|
        zip_file.each do |entry|
          # skip Mac metadata and anything that isn't a regular file
          next if entry.name =~ /__MACOSX/ || entry.name =~ /\.DS_Store/ || !entry.file?
          entry_full_path = File.join(directory, entry.name)
          unless File.exist?(entry_full_path)
            FileUtils.mkdir_p(File.dirname(entry_full_path))
            zip_file.extract(entry, entry_full_path)
          end
        end
      end
    end

    def standardize_file_names(current_path)
      # ... not yet included
    end

    private
      def prepare!
        cache_stored_file! if !cached?
      end
  end
end

I would really appreciate it if anyone had any insight here. Thanks!

So I ended up figuring this out: it was an improper path issue -- whoops -- as well as a missing addition to my nginx server block. Lesson learned: don't use relative paths for this. Could we add this to the Deploy ROR Guides? I think it would benefit a lot of people to be secure by default. SSL can be tricky, in my opinion, and certbot simplifies it.

I have included the addition to the server block below.

/etc/nginx/sites-available/default

location ~ /.well-known {
  allow all;
}  

Next, SSH into your server and execute the dry run first to ensure everything is set up properly.

/home/deploy/certbot-auto renew -w /home/deploy/your-app/current/public --dry-run

If it's successful, run the one-off command to renew your current SSL cert.

/home/deploy/certbot-auto renew -w /home/deploy/your-app/current/public --quiet --no-self-upgrade

Future renewal

The other thing I came across that I found useful: I scheduled this in crontab using the whenever gem, so you don't have to worry about it again. I preferred this approach over writing to crontab myself because it's now part of git/GitHub, so the requirement is documented for other devs, or for myself six months from now.

In case anyone has a similar setup or is interested in this, I have included an example whenever task below; it would live in your schedule file.

config/schedule.rb

every 1.day, :at => '3:21 am' do
  command "/home/deploy/certbot-auto renew -w /home/deploy/your-app/current/public --quiet --no-self-upgrade"
end
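
One gotcha worth noting: the schedule file doesn't do anything until it's written into cron. If you're deploying with Capistrano, whenever's Capistrano integration handles that on deploy; otherwise you can apply it by hand on the server with:

whenever --update-crontab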

Anyone using this? I have certbot-auto installed and have attempted to set up auto-renewal, but it does not appear to be working. My cert expires in two days, so I would REALLY appreciate any help anyone could provide.

error:

WARNING:certbot.renewal:Attempting to renew cert from /etc/letsencrypt/renewal/MY-SITE.com.conf produced an unexpected error: Failed authorization procedure. www.MY-SITE.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.MY-SITE.com/.well-known/acme-challenge/CHALLENGE-STRING: "<!DOCTYPE html>
<html>
<head>
  <title>The page you were looking for doesn't exist (404)</title>
  <meta name="viewport" content", MY-SITE.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://MY-SITE.com/.well-known/acme-challenge/DIFFERENT-STRING: "<!DOCTYPE html>
<html>
<head>
  <title>The page you were looking for doesn't exist (404)</title>
  <meta name="viewport" content". Skipping.
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

All renewal attempts failed. The following certs could not be renewed:
  /etc/letsencrypt/live/MY-SITE.com/fullchain.pem (failure)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
1 renew failure(s), 0 parse failure(s)

Thanks, Chris! I would love to be able to use this for all my projects and just standardize on config.force_ssl = true in production. Can't wait!

I was thinking maybe this could be a tutorial/guide request?

I always use the GoRails guides to set up my Ubuntu, nginx, Passenger servers (thanks!). I am having trouble figuring out how to use Let's Encrypt to get an SSL cert and automate renewal. I have seen quite a few tutorials and guides around the internet, but I must be missing something somewhere, because they are not working for me.

Result

  • https: Safari 'server failed to respond' error
  • http: still works

Steps to install Let's Encrypt and create the SSL cert

  • I installed the agent via the Let's Encrypt getting started guide.
  • I updated my server block to include an allow for /.well-known (included below).
  • I ran the certbot command to generate the cert:
    • ./certbot certonly -a webroot --webroot-path=/home/deploy/hexcom/current/public -d hexarmor.com -d www.hexarmor.com
  • I updated my server block to listen on 443 and include the cert keys (included below).
  • sudo service nginx reload

/etc/nginx/sites-available/default

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        listen 443 ssl;

        #ssl on;
        ssl_certificate /etc/letsencrypt/live/hexarmor.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/hexarmor.com/privkey.pem;

        server_name hexarmor.com;

        passenger_enabled on;
        rails_env    production;
        root         /home/deploy/hexcom/current/public;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ /.well-known {
                allow all;
        }
}

Chris, any chance you could go into detail about how to set up nginx and Passenger to properly utilize the resources of your server? I follow your guides when installing, so no fine-tuning is ever done on my servers. I would also love to see how to set up Redis for caching.
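
In case it helps frame the request: I mean knobs like these Passenger directives in the nginx config -- illustrative values only, since the right numbers depend on your RAM and core count:

# /etc/nginx/nginx.conf, inside the http block
passenger_max_pool_size 4;      # total app processes; bounded by RAM per process
passenger_min_instances 2;      # keep processes warm to avoid cold starts
passenger_pool_idle_time 300;   # seconds before idle processes shut down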

@disqus_lKf13yxio5:disqus @disqus_rJnL9ex9US:disqus change 'require' to 'require_relative'. This solved the issue for me; apparently it has something to do with Ruby's load path ($LOAD_PATH). Sorry I can't provide a better explanation, as I don't quite understand it, but from my reading, differences in the load path are why this works for some and not others.
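
For example (hypothetical file name), where the script has:

require 'my_helper'

change it to:

require_relative 'my_helper'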

That looks perfect; I will look into this solution and see what I can come up with. Thanks as always for the help!

How do I use pushState in a simple Rails blog to create the effect that larger news sites are currently using: as you scroll to the bottom of the page, the next article is seamlessly loaded, the URL changes, and a Google Analytics page view event is fired?

I was wondering if anyone could explain how this is accomplished, point me to any tutorials, or let me request this as a tutorial. I know this is accomplished through AJAX and pushState behavior; I'm just not sure how to do it in a standard Rails app. By standard Rails app I mean that my goal is to accomplish this without using a JS frontend like Angular. Is this possible?

For an example: go to http://www.theonion.com, click on an article, and begin scrolling -- take note of the URL bar, page views, etc.
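
I can picture the Rails half -- something like an endpoint that returns the next article as a bare partial for the AJAX call (a sketch below; the model, partial, and route names are all made up) -- it's the pushState/GA wiring on top that I'm unsure about.

# app/controllers/articles_controller.rb
def next_article
  # the article after the one currently in view
  @article = Article.where("id > ?", params[:id]).order(:id).first!
  # return just the article markup so JS can append it to the page
  render partial: "articles/article", locals: { article: @article }, layout: false
end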

Posted in Who is using Hashicorp Otto?

One more (potentially embarrassing) Otto question: how do I update the Otto tool?

I am currently on Otto 0.1.2; the project is currently on version 0.2.1. As a last-ditch effort to solve this problem, I figured maybe it will "just work" if I am using the most current version.

Typing otto -h shows a list of commands, none of which relate to updating the tool itself.

Posted in Who is using Hashicorp Otto?

Has anyone had issues using the mysql2 gem with Otto?

Posted in Who is using Hashicorp Otto?

I am using Otto. I have one error I am currently battling, plus one workflow question. First, the error: I get an error when Otto attempts to install the mysql2 gem. This occurs when I run otto build:

otto: An error occurred while installing mysql2 (0.3.20), and Bundler cannot continue.
otto: Make sure that `gem install mysql2 -v '0.3.20'` succeeds before bundling.

I also have a basic workflow question. I want to use the standard AWS infra, nothing fancy (yet). Prior to Otto, I had my app set up to deploy with Capistrano. Does Otto replace cap deploy and all its processes? Currently my cap deploy handles migrations, crontab updates (using the whenever gem), restarting nginx and Passenger, shared vs. current file placement, etc. What does all of this look like in Otto?

I have worked out a solution after reading a few tutorials -- the one included below proved most helpful. I will include the links and a bit more about my apps, which will hopefully explain the path I took, in hopes that someone else may benefit.

Brittany, thanks for your suggestion, but I don't think the apartment gem is what I am looking for -- please correct me if I am wrong, as this is my first work on an application of this nature. My understanding is that the apartment gem would separate the different tenants into completely different databases, whereas I actually want to share most of the database entries.

My tenants (if I understand the analogy correctly) would be web stores. We have about 120 different products and 5 stores. Each store carries some subset of the 120 products, with one store carrying all of them. Products have parts (size, color, etc.), so basically there is overlap between stores. Because of the shared data, I decided I didn't fit the multi-tenant model.

My solution

The conclusion I came to was separate apps with a shared login: one app for the provider, and one app for each client (or store).

Blog Post

I based a lot of my solution on what I learned from this tutorial blog post and the two corresponding code repositories.

Basically, the provider app at the top has the Devise install and manages all users. The client apps use OmniAuth to authenticate against the provider's Devise instance. This is accomplished through a custom auth strategy outlined in the post.
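
For anyone following along, the client-side strategy ends up shaped roughly like this -- a sketch built on omniauth-oauth2. The class name, site URL, and endpoint here are hypothetical; the real details are in the post:

# lib/omniauth/strategies/provider_app.rb
require "omniauth-oauth2"

module OmniAuth
  module Strategies
    class ProviderApp < OmniAuth::Strategies::OAuth2
      option :name, :provider_app
      option :client_options, site: "https://provider.example.com"

      uid { raw_info["id"] }
      info { { email: raw_info["email"] } }

      def raw_info
        # fetch the authenticated user from the provider's API
        @raw_info ||= access_token.get("/api/v1/me.json").parsed
      end
    end
  end
end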

Hope this helps someone. I would still love to see if anyone else has a different take on this.

Chris, I had one more thought I was hoping I could get an experienced opinion on. The apps I am talking about are stores: we have one main store (150 products) and 5 niche stores (20 or fewer products each). All the niche stores are just subsets of the main store with different branding, design, content, etc.

What if it were all just one app?
I could key off the domain with something like:

store_controller

def show
  @store = Store.find_by(domain: request.host)
end

Then I could easily list products, etc., through relationships. I would only have one codebase to maintain and a huge reduction in duplicative apps, and I'd get the Devise functionality I want out of the box (I think).
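
The relationships I have in mind would be something like this -- the model and join-table names are just assumptions:

class Store < ActiveRecord::Base
  has_many :listings
  has_many :products, through: :listings
end

class Listing < ActiveRecord::Base
  belongs_to :store
  belongs_to :product
end

class Product < ActiveRecord::Base
  has_many :listings
  has_many :stores, through: :listings
end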
