Hey @msavin, it took a while, a lot of luck, and more importantly, a lot of persistence from my relentless teammate @oleksiitrukhanov (who gets 100% of the credit and is the expert on the matter). Happy to say that Travis is successfully building a macOS app with meteor-desktop and uploading it to App Store Connect!
Our project goals were for Travis to perform the following actions on a push to master (*B&D = build and deploy):

- iOS: B&D an iOS app to the App Store (App Store Connect)
  - And automatically push the new build out to internal TestFlight users
- Android: B&D an Android app to Google Play
  - And auto-publish to the alpha or beta testing channels
- Windows: B&D a Windows Electron app (using meteor-desktop):
  - And deploy it to the Windows Store
  - As a `.exe` to an S3 bucket
    - We save the previous version to a versions folder and write the new `.exe` to the same location the old version was in. This way, we never have to update download links (the link always points to the same spot, which always has the latest version). See the sketch just after this list
- Mac: B&D a macOS Electron app (using meteor-desktop):
  - And deploy it to the Mac App Store (App Store Connect)
  - As a `.app` to an S3 bucket
  - Storage logic is the same as for the Windows app
- Web: B&D the web app to 2 servers behind a load balancer
  - Without wasting build time, of course, which was a challenge: when PM2 is configured to deploy to multiple servers, it builds & deploys for each server listed, which means you are waiting for a full build for each server! @oleksiitrukhanov successfully configured the process to build once and reuse the same bundle to deploy to multiple servers. The implications are fantastic: scaling and deploying to multiple servers is now limited only by the actual upload-bundle-to-server time, not also by the build time
  - Fast: `totalTime = buildTime + (servers.length * uploadTime)`, vs.
  - Slow: `totalTime = servers.length * (buildTime + uploadTime)`
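For illustration, here's a minimal sketch of that "stable download link" S3 logic for the Windows/Mac builds. This is not our actual deploy script; the bucket, key names, and version string are hypothetical. It uses the AWS SDK for Node:

```js
// Archive the current build, then overwrite the fixed key so the public
// download URL never changes. Bucket/key names and version are hypothetical.
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();
const Bucket = 'my-app-downloads';          // hypothetical bucket
const latestKey = 'win/MyApp-latest.exe';   // the key users always download
const previousVersion = '1.2.3';            // version being replaced

async function deployExe(localExePath) {
  // 1. Copy the current .exe into a versions/ folder before overwriting it
  await s3.copyObject({
    Bucket,
    CopySource: `${Bucket}/${latestKey}`,
    Key: `win/versions/MyApp-${previousVersion}.exe`,
  }).promise();

  // 2. Upload the freshly built .exe to the same, unchanging key
  await s3.upload({
    Bucket,
    Key: latestKey,
    Body: fs.createReadStream(localExePath),
  }).promise();
}
```

Because the download link always points at the same key, nothing downstream ever needs updating when a new version ships.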
We also have a separate 'server-only' Meteor app that services many client requests (especially requests to other external services), and another that runs lengthy cron jobs; a client (or method) connects to them over DDP (2 servers behind an LB). These live in separate repos, so they are totally separate processes, but we reuse the same build-once -> deploy-to-multiple logic in Travis, which makes deploying super efficient.
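To make the idea concrete, here's a rough sketch of the build-once, deploy-to-many approach. This is not our actual Travis/PM2 setup; the host names, paths, app name, and restart command are hypothetical:

```js
// Build the Meteor server bundle once, then push the same tarball to every
// server behind the load balancer. Hosts, paths, and app name are hypothetical.
const { execSync } = require('child_process');

const servers = ['app1.example.com', 'app2.example.com'];
const bundle = '/tmp/myapp.tar.gz'; // meteor names the tarball after the app directory

// 1. Build once (no per-server rebuilds)
execSync('meteor build /tmp --architecture os.linux.x86_64 --server-only', {
  stdio: 'inherit',
});

// 2. Upload and restart on each server:
//    total time ~ buildTime + servers.length * uploadTime
for (const host of servers) {
  execSync(`scp ${bundle} deploy@${host}:/opt/app/bundle.tar.gz`, { stdio: 'inherit' });
  execSync(
    `ssh deploy@${host} "cd /opt/app && tar xzf bundle.tar.gz && pm2 restart myapp"`,
    { stdio: 'inherit' }
  );
}
```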
We’d like to eventually create an AMI on deploy, spin up instances from that AMI, and add the spawned instances to the LB. That would let us both decide on the number of servers at deploy time and scale up post-deploy in realtime, without first configuring new servers and then redeploying, etc. Ready for another challenge, @oleksiitrukhanov?
One important feature we wanted was for each of these main processes to be created and run as separate jobs on Travis instead of one long process on a push to master. Concurrent jobs are great (although more than 1 concurrent build on Travis += $$$ [$69/mo for 1 concurrent job, $129/mo for 2, $249/mo for 5]).
Nevertheless, even if you have to wait for each job to run one after the other, it's much easier to debug if (when) one fails (and it takes about the same amount of time as one long thread would have in any case).
It was beyond a lot of work and persistence, and again, I'd like to thank and praise @oleksiitrukhanov for not giving up for weeks and for getting this done. He'll be publishing an in-depth post on Medium shortly, specifically describing the macOS process; I'll post a link here when it's complete. It will cover all the provisioning/configs and external services used to successfully deploy the app to App Store Connect.
He'll also be writing an in-depth article covering all of the above (iOS, Android, Windows (`.exe` and Store), Mac (`.app` and Store), and Web (multiple servers / load balancing)) for anyone who's looking for similar functionality but wants to keep (most of) the hair on their head. Will post it here as well when complete.
We'd like to help anyone who's struggling with this, so if you or others have questions, don't hesitate to post! It's probably best to wait for the article for initial guidance, but nevertheless we're here to help if we can.
Side note: we are also using the https://github.com/cult-of-coders/redis-oplog package, which kicks butt, especially considering the extreme targeting available using `channel: 'threads::' + threadId + '::messages'` for publications. On our db of 6M+ docs that are constantly churning, performance is great. And we love the ability to publish updates to Redis with total control, or from external services via Vent, plus the handy documentation on outside mutations. Big thanks to @diaconutheodor for the fantastic package!
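For anyone curious what that channel targeting looks like, here is a minimal sketch of how we understand redis-oplog's `channel` option from its README; the collection, publication, and method names are hypothetical:

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Hypothetical collection; the channel string mirrors the one mentioned above.
const Messages = new Mongo.Collection('messages');

Meteor.publish('thread.messages', function (threadId) {
  // redis-oplog listens only on this narrow channel instead of the whole collection
  return Messages.find(
    { threadId },
    { channel: 'threads::' + threadId + '::messages' }
  );
});

Meteor.methods({
  'thread.sendMessage'(threadId, text) {
    // Mutations target the same channel so subscribers receive the change
    Messages.insert(
      { threadId, text, userId: this.userId, createdAt: new Date() },
      { channel: 'threads::' + threadId + '::messages' }
    );
  },
});
```

Scoping each thread to its own channel is what keeps Redis traffic and reactivity cheap even with millions of churning docs.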