I am trying to learn more about Meteor concurrency best practices. In particular, I have seen that Meteor.call() etc. will run sequentially given only one server, but that will no longer be the case once you expand past one server. I haven’t been able to find an answer for how to handle this case. I was wondering if anyone knew of a tutorial example for this; in my experience, most tutorials show concurrency best practices through a shared bank account example.
Yeah - so I was looking at that article and had questions regarding the following:
“1. Methods are independent as far as different clients are concerned. So for example, Bob’s client can call method A at the same time as Carol’s client calls method A — those method invocations will run concurrently — neither Bob nor Carol will wait for the other… For any one client, methods are executed in the order they are called from the client — imagine a FIFO queue on the server for each client.”
I was wondering how to organize the methods such that Bob’s and Carol’s calls to method A will not run at the same time, but rather in the order they were called, provided they both satisfy some condition.
The use case I am trying to implement is similar to an order book matching engine / stock exchange. Let’s say Alice and Bob want to buy 1 share of Apple at market price. Currently there is 1 share offered at 100 dollars and 1 share offered at 101 dollars. My collection is called Orderbook, and each document within that collection is a stock. Each stock has a dictionary whose keys are price levels (in this case, say increments of 1 from 0 to 200). Each stock also has keys for minOffer and maxBid. The method Alice and Bob call would check the minOffer, create a transaction at the minOffer, and move to higher price levels if the order size was greater than the number of shares available.
With traditional locks, I would lock each of the keys that represent price levels so that if Alice executed first, Bob’s method would be locked. Upon the key for 100 being unlocked, Bob’s method would see that there are no longer offers at 100 and increment to 101 and execute a buy order at 101.
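Setting locking aside for a moment, here is a minimal in-memory sketch of the price-level walk described above (all names are hypothetical; single process, no persistence):

```javascript
// Hypothetical in-memory order book for one stock: price level -> shares offered.
// This only sketches the matching walk; it ignores locking and the database entirely.
function marketBuy(book, quantity) {
  const fills = [];
  // Walk price levels from the lowest offer upward (the "minOffer").
  const prices = Object.keys(book.offers).map(Number).sort((a, b) => a - b);
  for (const price of prices) {
    if (quantity === 0) break;
    const available = book.offers[price];
    if (available <= 0) continue; // this level is empty, move up
    const taken = Math.min(available, quantity);
    book.offers[price] -= taken;
    quantity -= taken;
    fills.push({ price, quantity: taken });
  }
  return fills; // any quantity left over is unfilled
}

// Alice and Bob each buy 1 share at market; 1 share offered at 100, 1 at 101.
const book = { offers: { 100: 1, 101: 1 } };
const aliceFills = marketBuy(book, 1); // fills at 100
const bobFills = marketBuy(book, 1);   // 100 is now empty, so fills at 101
```

Run sequentially, this behaves exactly as described; the whole question is what guarantees that sequencing once Alice’s and Bob’s calls can run concurrently.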
There is no waiting; Bob’s and Alice’s methods will run at the same time. If you want them serialized, you have to set that up manually, I think at the database level.
Thank you! This is very helpful. Tried doing some googling to look into the mechanism for async/await, but I was just wondering how the system will know which method to unlock if multiple clients are stuck in “await”.
I don’t imagine this method will be very heavy or take very long, but I’m just trying to understand more about how the code works under the hood.
Wouldn’t a simpler approach be to have a central processing queue for orders, and have methods just add a new job to the queue?
That way you have one central queue that is always executed in the order that requests were received, and the methods are really thin, so their concurrency becomes trivial.
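A minimal sketch of that shape in plain in-memory JavaScript (a production version would use a persistent queue and put the real matching logic in the worker; all names are hypothetical):

```javascript
// Central queue: methods only enqueue; a single worker drains in FIFO order,
// so order processing is serialized no matter how many clients call in.
const queue = [];
const processed = [];
let draining = false;

function enqueueOrder(order) {
  // This is all a method would do, so the method itself stays trivially concurrent.
  queue.push(order);
  drain();
}

function drain() {
  if (draining) return; // ensure only one worker runs at a time
  draining = true;
  while (queue.length > 0) {
    const order = queue.shift();
    processed.push(order); // real matching logic would run here
  }
  draining = false;
}

enqueueOrder({ user: 'alice', side: 'buy', qty: 1 });
enqueueOrder({ user: 'bob', side: 'buy', qty: 1 });
// processed preserves arrival order: alice first, then bob
```

The key design point is that only the worker touches the order book, so all the race conditions collapse into one serialized consumer.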
This is a brokerage, not an order book. Why is this important?
@coagmano is right about brokerages. It’s very hard to implement a brokerage that isn’t centralized/stateful in this way.
You can implement an order book of strictly limit orders with meteor methods alone, in a way that scales horizontally. If you want to implement any other order, my recommendation is to define the user experience a little bit more clearly.
By the way, to create an order book, you actually need to make just two decisions:
1. Are your orders by default buying or selling? (I’m going to choose selling for my example.)
2. What is the smallest mispricing tolerated by crossing (executing) a trade? (I’m going to assume 1%.)
If you’re not sure why these are important, or what they mean, maybe this is going to be a pretty ambitious project for you.
The only data structure you need in an order book is (assuming only selling and 1% error):
{
  "id": "...", /* some identifying data structure for the seller and order */
  "selling": "$USD", /* asset you're selling */
  "buying": "STOCK", /* asset you're buying */
  "amount_selling": "32.00000000", /* amount of USD I'm selling in exchange for STOCK, up to 7 decimal places */
  "buy_per_unit_sell": { "n": 1, "d": 2 } /* I'm interested in buying 1 share of STOCK for every 2 USD I'm selling, so the price is implied to be $2 per share */
}
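As a sketch of how the implied price and the 1% crossing tolerance from that structure might be used (the function names here are hypothetical):

```javascript
// The order above implies a price of d / n = 2 USD per share of STOCK.
function impliedPrice(order) {
  return order.buy_per_unit_sell.d / order.buy_per_unit_sell.n;
}

// Cross (execute) against this sell order if a bid is within the tolerated
// mispricing of the implied price (1% in the running example).
function crosses(bidPrice, order, tolerance = 0.01) {
  return bidPrice >= impliedPrice(order) * (1 - tolerance);
}

const order = { buy_per_unit_sell: { n: 1, d: 2 } };
// impliedPrice(order) is 2; a bid of 1.99 crosses, a bid of 1.97 does not
```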
Yes, @bmanturner’s implementation of a lock backed by Mongo is incorrect. I’m not going to go into detail on how to implement it correctly, because your Meteor instance could crash and never release the lock (Mongo provides no facility for this). It’s not an appropriate system for sharing locks, ever, even with one machine running Meteor, because the lifecycle management is very, very challenging.
If you need locks whose lifecycle is managed well, use Hazelcast.
Sorry, I’m not quite sure I follow or if my statement was unclear/not precise, but I’m looking to implement an exchange with traditional limit orders and market orders for now. Furthermore, given the setup described below as well, I think market orders would be a quick extension of limit orders where the limit is the max price for buy orders and min price for sell orders.
There are some differences between the system I’m proposing and a real stock exchange that simplify things - All assets/stocks would have a fixed max price and min price as well as a specified interval at which the asset would trade (similar to futures). I was envisioning a system in which price levels are locked rather than the entire order book to increase speed, as asynchronous limit orders that do not cross and are not stops will never cause race conditions.
However, I agree that a centralized queue could work as well and would perhaps not result in much worse runtime, as most orders entered are likely to be amongst the same few price levels at any point in time. Do you know of good examples of Meteor tutorials with server-side code running in the background on implemented queues of data? I imagine to start you could create another collection to manage the FIFO queue simply enough, but I’m not sure where to go from there.
A separate find-then-update is not atomic. While you may get away with that most of the time in a single-server model, it becomes increasingly dangerous as the number of servers and/or clients goes up.
As it happens, MongoDB does provide an atomic findOneAndUpdate designed to solve that very problem.
However, solving ACID problems in application code is a complex subject. That’s why we have ACID-compliant database engines, and MongoDB is (now) one of those.
See the above comment. In the past I’ve used findOneAndUpdate to maintain an “autoincrement” counter, which, converted to a fixed-width string, I use as a document _id. I get guaranteed uniqueness, and documents are ordered for me (since the default _id ordering is ascending). I can always process documents in order (either FIFO or LIFO). I use a similar strategy to ensure one-at-a-time access to those docs.
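For illustration, here is roughly what that counter looks like. The in-memory fake below only mimics the driver’s contract so the sketch is self-contained, and the names (counters, orderSeq) are hypothetical; against real MongoDB, the $inc inside findOneAndUpdate is what makes the allocation atomic:

```javascript
// Allocate the next fixed-width _id from an "autoincrement" counter document.
async function nextOrderId(counters) {
  const result = await counters.findOneAndUpdate(
    { _id: 'orderSeq' },          // one counter document
    { $inc: { seq: 1 } },         // incremented atomically on the server
    { upsert: true, returnDocument: 'after' }
  );
  // Fixed width preserves ascending order when the string is used as _id.
  return String(result.value.seq).padStart(10, '0');
}

// Minimal in-memory stand-in for the driver call, for demonstration only.
function fakeCounters() {
  const docs = new Map();
  return {
    async findOneAndUpdate(filter, update) {
      const doc = docs.get(filter._id) || { _id: filter._id, seq: 0 };
      doc.seq += update.$inc.seq;
      docs.set(filter._id, doc);
      return { value: doc }; // result shape of the pre-v6 Node driver
    },
  };
}
```

Note the real driver’s result shape varies by version: older drivers return { value: doc }, while v6+ returns the document directly unless you pass includeResultMetadata: true.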
One complication is to ensure the code fails safely - for example I’ve “checked out” a document, but then I crash before I’ve released or cleaned up. There are strategies to handle this within MongoDB - I’ve used TTL indexes in the past to catch such failures.
Having said all that, I would probably go with a Redis queue. For dev, that could be a single local Redis instance. For production, I’d use a cluster (probably AWS ElastiCache or Azure Redis Cache, as they make it easy).
That’s a brokerage. Definitely take a look at something like these order types, it’ll become obvious to you that a brokerage “compiles” an “order” into a sequence of limit orders, possibly at different prices and quantities and at different times, to an exchange, to “implement” a user-facing order type. If you’re trying to implement a “market order,” as you say, as a primitive instead of a virtual agent making, delaying, changing, rescheduling, etc. limit orders, you’re going to have a bad time.
I know you think you’re simplifying things, or making things faster or whatever, but I guarantee you that you should just aspire to make a correct order book (exchange) implementation with super-conventional limit orders first.
What’s the right answer for how long a document representing a lock should last until it is deleted? That seems to be part of the “leasing time” implementation. There isn’t a 100% correspondence between a lease time and a fail-safe guarantee.
I’m not saying you can’t make distributed locks with mongo, even very good distributed locks, just that it’s going to be a huge pain and incredibly error-prone.
Even in the posted code, besides findOneAndUpdate:
locked has to be the uniquely-generated ID of the method invocation, not true. Otherwise some other caller could grab the lock, and this caller will think that it obtained the lock!
Between stopping the handle and findOneAndUpdate-ing, some other waiter could have taken the lock (i.e., this is an incorrect implementation of a condition). So now it needs to retry, and it needs to use its method invocation ID, etc. etc.
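To illustrate both fixes in one place, a hedged in-memory sketch: each caller claims the lock with its own unique token (standing in for the method-invocation ID), and a woken waiter always retries the claim instead of assuming the lock is free. The object below only mimics an atomic compare-and-set; it is not a distributed lock:

```javascript
// Compare-and-set: claim the lock only if it is free, and record *whose*
// claim succeeded, so a caller can verify it really holds the lock.
function makeLock() {
  let holder = null; // null means unlocked
  return {
    tryAcquire(token) {
      if (holder !== null) return false; // someone else holds it
      holder = token; // i.e. locked: <unique token>, not locked: true
      return true;
    },
    release(token) {
      if (holder === token) holder = null; // only the holder may release
    },
    holder: () => holder,
  };
}

// A waiter must retry after waking up: between being notified and trying,
// some other waiter may have taken the lock.
async function withLock(lock, token, fn) {
  while (!lock.tryAcquire(token)) {
    await new Promise((resolve) => setTimeout(resolve, 1)); // crude wait-then-retry
  }
  try {
    return await fn();
  } finally {
    lock.release(token);
  }
}
```

Even this toy version shows why the token matters: release() is a no-op unless the caller actually holds the lock, which is exactly what locked: true cannot express.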
Do you have any advice on how to get started on this, or where to look for a tutorial? Most examples I see with Redis don’t involve use cases where access to the DB is necessary (most are exporting jobs to send emails).
Yes. On the server, you can use yourCollection.rawCollection().findOneAndUpdate().
Note that the non-callback form of these methods returns a Promise, so I suggest using async/await syntax, as this plays nicely with Meteor’s server-side Fibers.
So I’m looking at using rawCollection().findOneAndUpdate() …
Getting the following error:
Exception while simulating the effect of invoking ‘orders.newBuyLimit’ Error: Can only call rawCollection on server collections
I’m calling orders.newBuyLimit from the client, and the method is defined in imports/api/orders.js . Do you have advice on where these should be defined for this to be a “server collection”?