I am wondering how I would deploy a Meteor app that uses dynamic import behind an Elastic Load Balancer and Auto Scaling group. The root IP will change with every instance that is spun up, and as such the dynamic import doesn't work. Any help would be appreciated.
Hi, did you actually face this situation? With AWS you connect to the DNS name of the AWS asset, not to the elastic public IP. If you go with an ELB, ideally you would set it up through Elastic Beanstalk, in which case you would connect to the DNS name of the Beanstalk environment. For $0.50 per month you can route those DNS names through Route 53 onto your brand domain.
Thanks for the reply. The issue is not with the AWS setup; it is with how the app looks for the imports directory using the ROOT_URL. Running the application behind a load balancer means its public address can change as instances are added or removed, and the load balancer directs traffic to each instance as it sees fit. Meteor appears to look for the imports directory using the ROOT_URL, which in this case points to the load balancer and not a specific IP address, hence the inability to find the imports directory. The following error appears in the browser console:
www.domain.com/__meteor__/dynamic-import/fetch:1 Failed to load resource: net::ERR_CONNECTION_REFUSED
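For context, the client resolves the dynamic-import endpoint from ROOT_URL rather than from the address in the browser's location bar, which is why a bad ROOT_URL produces a connection error even when the page itself loaded fine. A minimal sketch of that resolution (the function name is illustrative, not Meteor's actual internal API):

```javascript
// Hypothetical sketch of how the client derives the dynamic-import
// endpoint from ROOT_URL (illustrative only, not Meteor's internals).
function dynamicImportUrl(rootUrl) {
  // Strip any trailing slashes, then append the fetch endpoint path.
  return rootUrl.replace(/\/+$/, "") + "/__meteor__/dynamic-import/fetch";
}

console.log(dynamicImportUrl("https://www.domain.com/"));
// → https://www.domain.com/__meteor__/dynamic-import/fetch
```

If ROOT_URL points at something the browser cannot reach, every request to that derived URL fails, regardless of which instance served the page.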
Now you've got me concerned :). I'll test that on my setup, but I believe that regardless of the number of instances running, the ROOT_URL is what matters, and that never changes. I'm not sure whether the number of instances makes a difference; I only run one at the moment, but I will force two or three for the testing.
We run a bunch of instances across multiple applications behind a single load balancer, and it has no impact on dynamic imports. Do you have any additional security on the load balancer regarding requests? Do you intercept the dynamic-import URL to ensure the requesting connection is logged in? Do you have sticky sessions enabled on your ELB? Do you use a non-root ROOT_URL? None of these should really make a difference, though. I suspect you're doing something else that blocks the requests, for example trying to run Meteor on Lambda.
It feels like the application will only pull data from the instance whose IP address is associated with the ROOT_URL. So, in my understanding, the other instances actually see no traffic, and if no instance IP address is associated with the ROOT_URL then the client receives no data. This is not the behaviour when dynamic import is not used. Please let me know how your test goes.
Another way of reproducing the issue is deploying an instance with the application on it and then using that instance's IP address to navigate to the application. This instance is not linked to any URL. Once again, the application produces the same error: it looks for the dynamic-import file using the ROOT_URL and not the IP address. When not using dynamic import, I can connect to any instance by its IP address and the application works fine. So I feel the issue is not the load balancer; it's the way the application looks for the data, using the ROOT_URL rather than the IP address of that specific instance. I hope that makes sense.
If you're using a load balancer, you shouldn't be accessing the application by its IP address; in general you should always access the application via its ROOT_URL. In theory you could modify the dynamic-imports code to look not at the ROOT_URL but at the URL being accessed. However, if your ROOT_URL is my.domain.com/app and you're accessing 127.0.0.1/somewhere, how would dynamic imports map that? (E.g., where does it cut off the URL so it knows where to append the dynamic-imports path?) This is only a problem if you're using a non-/ ROOT_URL, but since Meteor supports those, it can't make this change to dynamic imports.
Another point: from a security perspective, you probably shouldn't allow direct access to your servers; the load balancer is there to protect them from that (it also provides TLS termination, etc.).
If you only care about accessing a specific server to test it, you could modify the /etc/hosts file on the accessing machine: add an entry for the domain name that normally resolves to the load balancer, and point it at the specific server instead.
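For example, an override along these lines (the IP below is a placeholder for the instance you want to test):

```
# /etc/hosts on the machine you are testing from.
# www.domain.com normally resolves to the ELB; this override
# sends it to one specific instance instead (placeholder IP).
203.0.113.10   www.domain.com
```

The browser then talks to that one instance while still using the domain that matches the ROOT_URL, so dynamic imports keep working.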
I feel like I haven't expressed myself and the problem correctly to you; @paulishca seems to understand the issue. In a highly available architecture, instances will be added and removed as and when needed, which means I have no control over the IP addresses assigned to each application server. There is no need to worry about security: I have SSL certificates and security groups that strictly control access to the app servers and the database instances, which are a separate replica set.
The issue is that all the data is being fed from the server linked to the ROOT_URL; if this server fails, no data is fed to the client. When not using dynamic import and placing all content in the client and server folders, each instance can be accessed without an issue; when using dynamic import, the app doesn't know where to look for the imports folder. Each instance should be able to point to the dynamic-imports folder irrespective of how the system architecture is set up.
I guess I don't understand - in my architecture (which is also highly available and auto-scaling), dynamic imports work correctly via the URL that points to the ELB. I don't know or care what my instance IP addresses are, and my application doesn't know or care either - it doesn't need to.
The only way I can see the IP address being a problem is if you're somehow configuring your ROOT_URL on a per-server basis, which you shouldn't be.
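In other words, every instance behind the ELB should be started with the same ROOT_URL, pointing at the public domain that resolves to the load balancer. A sketch of a start script under that assumption (domain, port, and bundle path are placeholders):

```shell
# Identical on every instance behind the ELB: ROOT_URL points at the
# public domain that resolves to the load balancer, never at the
# instance's own IP (all values below are placeholders).
export ROOT_URL="https://www.domain.com"
export PORT=3000
node main.js   # the built Meteor server bundle
```

Because the value is the same everywhere, it doesn't matter which instance the ELB picks for any given request.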
Thanks for the response.
In an ELB setup there shouldn’t be a single server linked to the ROOT_URL.
All requests should be made to the load balancer, which then forwards them among the available instances (i.e. balancing the load). With sticky sessions turned on, each subsequent request from a client is forwarded to the same server it was originally assigned.
Not that that would even matter for dynamic imports, any of the servers would be capable of fulfilling the dynamic import request from any client.
Failure should only happen when the import request is sent to a different domain.
Because of the load balancer, all the instances should be serving on that same domain with the same ROOT_URL, so the ELB or instance choice won’t affect this.
However, if you are setting the ROOT_URL to the IP address of each instance, that would break it, as the domain changes and the browser will throw a CORS error.
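To illustrate why: before allowing the dynamic-import POST, the browser effectively compares the origin of the page with the origin of the request, and a per-instance ROOT_URL fails that check. A simplified sketch of the comparison, runnable in Node, with placeholder URLs:

```javascript
// Simplified same-origin check, roughly what a browser does before
// allowing a request without CORS headers (placeholder URLs below).
function sameOrigin(pageUrl, requestUrl) {
  const a = new URL(pageUrl);
  const b = new URL(requestUrl);
  // Origin = scheme + host + port; URL.host already includes the port.
  return a.protocol === b.protocol && a.host === b.host;
}

// Page served via the ELB, ROOT_URL also set to the ELB domain: allowed.
console.log(sameOrigin(
  "https://www.domain.com/app",
  "https://www.domain.com/__meteor__/dynamic-import/fetch")); // true

// ROOT_URL set to an instance's own IP: cross-origin, blocked.
console.log(sameOrigin(
  "https://www.domain.com/app",
  "http://10.0.0.5:3000/__meteor__/dynamic-import/fetch")); // false
```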
That all said, I think this is a bug and dynamic import should work when the domain changes, as some folks are using Meteor for multiple domains from the one app. I know it used to work, and then stopped with the change from DDP to HTTP POST, and there has been discussion in Meteor issues about this.
While I think it should work, I didn't open with that because it's likely there's something wrong with your ELB setup, and I wanted to explain how ELBs normally function first.
Thanks for the explanation. I believe the solution will work as intended when implemented. I was trying to connect directly to an instance that was deployed behind an ELB and got the CORS error. I will consider this working as expected for my use case.