@jkatzen hi, another error - I need to config nginx to receive larger attachments (eg upload image etc). I am currently getting the error:
0:02:53 Buzzy-Docker-buzzy2 nginx-proxy: nginx.1 | 2015/04/03 23:02:53 [error] 22#0: *1436 client intended to send too large body: 1409247 bytes, client: <SOME IP>, server: <MY SERVR>, request: "PUT /files/files/images/<SOMEID>?chunk=0&filename=undefined&token=<SOME TOKEN> HTTP/1.1", host: "a.buzzy.buzz", referrer: "http://meteor.local/go/<SOMEID>"
I guess I could fork/copy your Docker image and change the setting… but thought I'd check first.
cheers
@adamgins An easy way to do a zero-downtime deployment using OpsWorks is to deploy a new instance that is not yet hooked up to your DNS, wait for the instance to come online, and then point your DNS at the new instance's IP. Or you could use an Elastic IP address that you move between instances so that you do not have to wait for your DNS to update.
If you wish to employ failover as your primary means of deploying without downtime, I suggest you look at this guide from the AWS blog: https://aws.amazon.com/blogs/aws/create-a-backup-website-using-route-53-dns-failover-and-s3-website-hosting/. Basically, the only difference is that instead of hooking up your failover record to a static S3 website, you would put another instance's IP address in the failover record. Then, when you are ready to deploy, you can deploy to your primary instance first, which will fail over to your secondary instance until it's done. Then, when your primary is back up, you can deploy to the failover to keep things consistent.
@jkatzen hi. Where would I add the modification to client_max_body_size in the recipe? I have seen some examples, but not being a 'chef' expert, I just thought I'd check what best practice is.
cheers
@adamgins
If you look under 'Custom Nginx Configuration' in the nginx-proxy repo, you will find that you can set proxy-wide settings either by adding them to an image derived from nginx-proxy or by mounting a config file at runtime, similar to how my Chef recipes mount the SSL certificates.
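For example, a proxy-wide settings file could be as simple as this (just a sketch; the file name is arbitrary):
# my_proxy.conf - proxy-wide settings picked up by nginx-proxy from /etc/nginx/conf.d/
client_max_body_size 100m;
You would then either COPY that file to /etc/nginx/conf.d/ in an image derived from jwilder/nginx-proxy, or mount it at runtime, e.g. -v /path/to/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro.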
@jkatzen thanks, will try to speak tomorrow - booked via Sunsama.
I had added to my Dockerfile:
FROM jkatzen/meteordlinux
MAINTAINER Your Name
COPY ./dockerstuff /etc/nginx/certs
RUN { \
echo 'client_max_body_size 100m;'; \
} > /etc/nginx/conf.d/my_proxy.conf
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y graphicsmagick
EXPOSE 80
I must be doing this at the wrong level as I got the following error:
/bin/sh: 1: cannot create /etc/nginx/conf.d/my_proxy.conf: Directory nonexistent
I tried starting 'bash' on the container and SSHing into an example instance based off this, but could not seem to find my way around to find the nginx container.
Also, I was thinking of modifying the recipe https://github.com/Buzzy-Buzz/opsworks-docker/blob/master/nginx-proxy/recipes/setup-nginx-proxy.rb to include it here… but was not sure if this made sense?
Anyway, looking forward to speaking tomorrow.
I'm trying to figure out how to deploy my Meteor app with Docker myself, so a few questions if you don't mind @jkatzen. I'm currently set on DigitalOcean, but am open to using e.g. OpsWorks if that's a better idea.
I see you're basing your Docker image on meteorhacks/meteord; is this recommended over using the Phusion Passenger NodeJS Docker image as a base? If so, why?
Why would you eventually recommend going with OpsWorks over DigitalOcean? Can I achieve the same with DO?
@aknudsen
I used the meteorhacks/meteord image because it was just the quickest way for us to get up and running. I honestly don't know too much about the Phusion image, so I can't speak to it much. I imagine that if you wish to use it as a base image and then build your own Docker image from it using a Dockerfile, that should not be too difficult. Once your image is built and you have verified it works, you can upload it to the Docker Hub registry or somewhere else.
The reason we used OpsWorks is that our entire infrastructure was already on AWS and we did not want to have to work across different platforms just yet. OpsWorks had enough tools and customization options that we have made it our primary deployment option.
I know a lot of people use DigitalOcean as their hosting platform, and honestly it looks great. I would imagine that you can get pretty close to a similar setup on DO instead of OpsWorks, but again I can't say anything for certain since I have never used the platform myself. If there is automated deployment that you can configure, that would probably be the way to go. Otherwise you may have to set up your own automated deployment that pulls down your Docker image from whatever registry you are using and then runs the container on your servers.
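For example, a very rough pull-and-run script could look something like this (the image and container names here are placeholders, not from your setup):
#!/bin/sh
# Fetch the latest build and replace the running container
docker pull myregistry/myapp:latest
docker stop myapp || true
docker rm myapp || true
docker run -d --restart=always --name myapp -p 8080:80 myregistry/myapp:latest
You could trigger something like that from a webhook or whatever automation DO gives you.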
Thanks for your thoughtful answer @jkatzen. If you've got the time, maybe you could chime in on my 'Dockerizing Meteor' thread?
I'm really new to web app deployment, and while I have a certain idea of what I want to accomplish, I'm not sure how to get there. My core idea is that I want production and staging instances of my app (the latter for testing against), and to be able to switch the staging instance out for the production instance seamlessly in order to release a new version. I have a vague idea that I could use a load balancer to migrate from one instance to the other, but I don't know how it works in practice.
If you could provide some practical advice on how to reach my goal, that would be fantastic!
@jkatzen thanks heaps, again, for your time today. On that issue with the http -> https redirect:
I found this http://serverfault.com/questions/67316/in-nginx-how-can-i-rewrite-all-http-requests-to-https-while-maintaining-sub-dom
Based on this, should I change the return 503 to return 301 https://$server_name$request_uri; on line https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl#L37?
thoughts?
@jkatzen I found the issue with why I could not start up the 2nd stack… I had spaces in the 'Stack Name'; not sure why I didn't see that in the error earlier today. Here's an extract of the compiled/generated command: --restart=always -h Buzzy Docker Test Env 1-buzzytest2 -v /etc/nginx/certs
So it's the 'Buzzy Docker Test Env 1-buzzytest2' (generated by line 75 below) that was causing the issue.
# In /var/lib/aws/opsworks/cache.stage2/cookbooks/owdocker/recipes/docker-image-deploy.rb
72: bash "docker-run" do
73: user "root"
74: code <<-EOH
75: docker run #{dockerenvs} --restart=always -h #{hostname} -v /etc/nginx/certs -p #{node[:opsworks][:instance][:private_ip]}:8080:80 --name #{deploy[:application]} -d #{deploy[:environment_variables][:registry_image]}:#{deploy[:environment_variables][:registry_tag]}
76: EOH
77: end
78: Chef::Log.info('docker-run stop')
79: end
80: Chef::Log.info("Exiting docker-image-deploy")
Perhaps we need a note about that in the instructions and/or some code to try to deal with it? Once I had removed the spaces from the Stack Name, it worked.
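Something along these lines in docker-image-deploy.rb might guard against it (untested sketch; I'm no Chef expert):
# Replace whitespace so docker run -h gets a valid hostname
hostname = hostname.gsub(/\s+/, '-')
Or the instructions could simply warn against spaces in the Stack Name.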
@jkatzen how goes it? Wondering if you have any thoughts on this 503 vs 301.
Also, I was a bit confused about how I'd get the nginx Docker container to pick up my own recipe instead of the jwilder one.
I assume I am cloning the jwilder/nginx-proxy image as we did the other day, but I was not sure how to tell that container to use my Chef script with "301 https://$server_name$request_uri;" instead?
@adamgins - glad you found the stack name issue - thanks!
The 503/301 thing kind of confuses me because I see https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl#L89, which should tell the host to auto-redirect specifically for the virtual hosts that you have set up. You should still be able to use a custom template file though, and I think that may work.
Changing return 503 to return 301 https://$server_name$request_uri; there will redirect everything to https://_ for the entire server, since _ is specified as the server_name:
server {
listen 80 default_server;
# This is just an invalid value which will never
# trigger on a real hostname.
server_name _;
return 503;
}
You can see on line 89 of that same file, though, that when it's specifying the blocks for each virtual host, it instead has the server_name set to your host variable, and it does in fact try to send a 301 redirect.
If you do want to go ahead and specify a custom template file where you tell it the server name will always be your host, that would work; however, you will lose the ability to use multiple hosts on a single machine (e.g. using the AWS EC2 instance hostname + your domain hostname).
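In other words, your custom template's default server block would end up looking something like this (the host is hardcoded here purely as an example):
server {
listen 80 default_server;
# Hardcoding the name means every request to this box redirects to this one host
server_name example.yourdomain.com;
return 301 https://$server_name$request_uri;
}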
For some reason I'm getting the following error when I deploy my app, any ideas why?
Thanks @jkatzen, yep, it worked hardcoded, but as you say that makes it tricky to deploy on dev/test servers without having to rebuild the image. Is there a variable I can use in that line 36?
Something like $hostname, as below? $hostname returned some weird number/id… about to try $host, but I'm just guessing here:
server {
listen 80 default_server;
server_name $hostname;
return 301 https://$server_name$request_uri;
}
@arjunrajjain my guess is you have a space in one of the names, so the string is broken and it's picking up '}' or something. If you search for 'error' further up in the log file, you can see the Chef code followed by what was generated… it may be a really long command, so you may need to scroll to the right… from there you'll see the code it's complaining about, and that should give an idea of which variable it's having heartburn with. I removed spaces from all app, stack, and any other names. Alternatively, some of the environment variables may need quotes around them.
@arjunrajjain - you have a git submodule somewhere in your GitHub repo and Chef is complaining when it's trying to copy it down.
@adamgins - setting server_name to $hostname is not going to work because $hostname was never defined. $host should work though, as follows (taken from the nginx documentation):
$host
This variable is equal to line Host in the header of request or name of the server processing the request if the Host header is not available.
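So in the custom template, the default server block could become something like this (untested sketch):
server {
listen 80 default_server;
server_name _;
# $host falls back to the request's Host header, so the redirect keeps whichever domain was requested
return 301 https://$host$request_uri;
}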
@jkatzen thanks, that did it. BTW, I was getting some pretty strange behavior when I hardcoded the domain name: it seemed to work in desktop browsers, but then sometimes the Cordova app seemed to have issues connecting.
I saw some things in the log like:
Apr 15 05:26:05 Buzzy-Docker-buzzy1 nginx-proxy: nginx.1 | 2015/04/14 19:26:05 [error] 27#0: *5612 upstream prematurely closed connection while reading response header from upstream, client: <my ip address> server:<server sub-domain but weird I was not hitting this sub-domain> request: "GET /_timesync HTTP/1.1", upstream: "http://172.17.0.7:80/_timesync", host: "<main domain>", referrer: "https://<main-domain>/<some url>"
So I'm super close to getting everything working, but I'm having a final problem with Cordova. Whenever I build the app without (meteor add-platform ios android), everything works fine. But as soon as I add both platforms, I get the following -
@arjunrajjain
Looks like you are having the same issue as these people: https://github.com/meteor/meteor/issues/4207