Everything can become complex and expensive – monolithic systems especially. It is not without reason that monolithic systems are prone to evolving toward the anti-pattern known as the Big Ball of Mud. Microservices were invented in the first place to avoid going down that path.
This is not to say that a microservices architecture guarantees that the final system will be nice and clean. It depends.
I use microservices in my project, and so far everything has gone well. Components are primarily interconnected through Apache Kafka as a message broker: each receives its input from a specific topic and usually writes its output to another topic, consumed in turn by yet another component. Where that doesn't fit, some components write straight into MongoDB.
Our Meteor app initiates most of the microservice actions by sending a command via one of the Kafka topics; in other cases, a cron job starts a script that sends the command via Kafka. Sometimes a microservice uses a MongoDB change stream as its trigger, and sometimes components send commands to each other.
Most of our components are implemented in Node.js, the others in Java. The beauty of this is the horizontal scalability: not only is the Meteor server reduced to the essentials, but the tasks are also scaled out to any number of servers by the mere virtue of how Kafka works. The core idea here is that Kafka topics can be partitioned, so multiple consumers (services) in the same consumer group can jointly process the messages when more throughput is needed.
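To illustrate why partitioning gives that scaling, here is a toy sketch of key-based partition assignment. Kafka's real default partitioner uses a murmur2 hash; the simple string hash below is only for illustration:

```javascript
// Toy version of key-based partition assignment: hash the key,
// then take it modulo the partition count. (Kafka's default
// partitioner actually uses murmur2, not this hash.)
function partitionFor(key, numPartitions) {
  let hash = 0;
  for (const ch of key) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0; // 32-bit string hash
  }
  return Math.abs(hash) % numPartitions;
}

// All messages with the same key land in the same partition, so one
// consumer in the group sees them in order; different keys spread
// across partitions and are processed in parallel by the group.
const p = partitionFor('order-42', 4);
console.log(p === partitionFor('order-42', 4)); // same key, same partition
```

Because each partition is consumed by at most one member of a consumer group, adding partitions and consumers is what lets the work fan out across servers.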
I encourage everyone to consider a similar architecture.
To me, the real eye-opener was the 2013 article The Log: What every software engineer should know about real-time data's unifying abstraction by Jay Kreps, back then a principal engineer at LinkedIn if I'm not mistaken. Highly recommended reading!