Reading a large text file line by line


It should be so easy: all I want to do is read a large text file line by line, run some calculations on each line, and then insert the results into a Mongo collection.

However, I have really struggled to get this working. I have tried using Node's 'readline' module, which reads a file asynchronously line by line and emits a 'line' event for which you can register a handler to process each line. The issue is that when I give it a handler, it throws an error saying the function is not running within a Fiber and suggests I use Meteor.bindEnvironment. However, when I wrap the handler with bindEnvironment, the code works correctly but quickly runs out of memory.

Can someone please point me to the correct way to implement this, ideally with a code example? It is important that it processes the file line by line, as the file is too large to read into memory in one go.

Thank you!