Running your own Kadira instance (update: now with a guide!)

I'm using the settings below:

```
"kadira": {
  "appId": "<>",
  "appSecret": "<>",
  "options": {
    "endpoint": "http://ip:11011"
  }
},
```
I see the below when I try to access the engine. The UI is still not responding…

```
kadira-engine_1 | blocked due missing appId: /
kadira-engine_1 | blocked due missing appId: /favicon.ico
```

Ah! Yes, these are the Meteor app settings. They can be used locally by running `meteor --settings settings.json`, or, if you are on Heroku or similar, you have to put the JSON inside the `METEOR_SETTINGS` environment variable. Here is a great guide for using the settings file:

https://themeteorchef.com/tutorials/making-use-of-settings-json

And obviously you have to input the correct appId and appSecret that you generated for your app in Kadira.
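For reference, a quick sketch of the two ways to supply the settings (assuming the file is saved as `settings.json` in the project root):

```shell
# Run locally, passing the settings file to the Meteor CLI
meteor --settings settings.json

# On Heroku or similar, put the whole JSON into METEOR_SETTINGS
heroku config:set METEOR_SETTINGS="$(cat settings.json)"
```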

Hey all. Thank you for all the input here and the tutorial on setting up Kadira. I have successfully set up Kadira on DigitalOcean droplets: one $20/month droplet for the Kadira app and the primary mongodb replica (I use this plan because Kadira consistently uses 66% of the 2 GB of RAM), and another $5/month droplet for the secondary mongodb replica.

I have created a new user on Kadira and have set it to the business plan. However, here comes the problem: I am stuck at the settings page that gives me the appId and appSecret. Kadira doesn't show me the graphs pages at all. Any idea of what may have gone wrong is appreciated.

Other pages should show up after you've connected your application to the Kadira instance and data starts flowing in.

You need to connect your Meteor app properly, as speak2ravi says. I had the same problem myself, and it was because I was using the wrong port in the endpoint URL in the settings.json configuration in Meteor. (I used port 4000, but it is 11011.)

Make sure that your Kadira instance is reachable on the same port, or open the ports in the server firewall. (Ports 4000 and 11011 should be open.)
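For example, with ufw on the Kadira server (a sketch; adjust accordingly if you use iptables or a cloud provider's firewall rules instead):

```shell
# Allow the Kadira UI (4000) and the engine endpoint (11011)
sudo ufw allow 4000/tcp
sudo ufw allow 11011/tcp

# Verify the rules took effect
sudo ufw status
```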

I am still struggling with heavy data buildup in my MongoDB database; does anyone have the same problem? I got to 500 MB in just a few days. (Not RAM, but storage.)

@vyvegard and @speak2ravi, thank you for the replies. I got it working!! I was foolishly pointing the endpoint to https:// instead of http://. Hence, the kadira-engine thought that I had never activated the appid and appsecret.

I have been running Kadira for about five hours and my disk space usage increased by about 35 MB. Projecting that out to 5 days, I should be using about 840 MB. I will let you know how it goes over the next few days. I have a 20 GB SSD droplet now. If things go weird, I will set up a cron job to sweep old entries from kadiraData as a temporary fix.

Some more math here:
Running the business plan for a month, Kadira will consume about 5 GB of space. The default business-plan settings keep backups for three months, so one app requires about 15 GB on the business plan. Although I find the real-time data far more useful than the backups, I do enjoy the added features of the business plan. As you can see, that data consumption is not financially reasonable on mLab. That's why I used droplets to host the mongoDBs. Now I wonder how much Arunoda spent per month on server costs.
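The math above checks out; here is the projection spelled out (35 MB over 5 hours, extrapolated to a 30-day month, times a three-month retention window):

```python
HOURS_PER_DAY = 24
DAYS_PER_MONTH = 30
RETENTION_MONTHS = 3

observed_mb = 35      # disk growth observed
observed_hours = 5    # over this many hours

mb_per_day = observed_mb / observed_hours * HOURS_PER_DAY   # 168 MB/day
mb_per_5_days = mb_per_day * 5                              # 840 MB
gb_per_month = mb_per_day * DAYS_PER_MONTH / 1000           # ~5 GB/month
gb_retained = gb_per_month * RETENTION_MONTHS               # ~15 GB kept at once

print(mb_per_5_days, round(gb_per_month, 2), round(gb_retained, 2))
```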

I have looked at the kadiraData collection, and most of the documents have an expiration date three months out. Perhaps we could look into reducing the data backup time for the business plan?


I tried to find Docker Cloud, but the term seems so general.

Where can I find Docker Cloud?

The URL for Docker Cloud is https://cloud.docker.com

Did you figure this out?

I got it working with Docker Cloud and NGINX in front. The ‘Access-Control-Allow-Origin’ issue was in fact that NGINX was listening on port 543 while the port open in AWS was 443!

### Is there any special mongo configuration needed to start the db service?

I started with the default mongodb configuration and I got this error:

```
$ echo $KADIRA_MONGO_URL
mongodb://mongoapm:27027
$ docker-compose ps
         Name                       Command               State                   Ports
------------------------------------------------------------------------------------------------------
mongoapm                 /entrypoint.sh mongod --sm ...   Up       27017/tcp, 0.0.0.0:27027->27027/tcp
user_kadira-engine_1   npm run start                    Exit 1
user_kadira-rma_1      npm run start                    Up
user_kadira-ui_1       su -c /usr/bin/entrypoint. ...   Exit 1

kadira-ui_1      | npm WARN package.json meteor-dev-bundle@0.0.0 No description
kadira-ui_1      | npm WARN package.json meteor-dev-bundle@0.0.0 No repository field.
kadira-ui_1      | npm WARN package.json meteor-dev-bundle@0.0.0 No README data
kadira-ui_1      | npm WARN package.json meteor-dev-bundle@0.0.0 No license field.
kadira-rma_1     |
kadira-rma_1     | > kadira-rma@1.0.0 start /app
kadira-rma_1     | > run-p -l run:**
kadira-rma_1     |
kadira-engine_1  | starting apm-engine on port 11011
kadira-engine_1  | DDONE
kadira-engine_1  | Error connecting to the Mongo Metrics Cluster
kadira-engine_1  |
kadira-engine_1  | /app/node_modules/mongodb/lib/mongo_client.js:338
kadira-engine_1  |           throw err
kadira-engine_1  |                 ^
kadira-engine_1  | MongoError: failed to connect to server [mongoapm:27027] on first connect [MongoError: connect ECONNREFUSED]
kadira-engine_1  |     at null.<anonymous> (/app/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js:328:35)
kadira-engine_1  |     at emit (events.js:107:17)
kadira-engine_1  |     at null.<anonymous> (/app/node_modules/mongodb/node_modules/mongodb-core/lib/connection/pool.js:274:12)
kadira-ui_1      | /home/meteor/www/bundle/programs/server/node_modules/fibers/future.js:313
kadira-ui_1      | 						throw(ex);
kadira-ui_1      | 						^
kadira-ui_1      | MongoError: failed to connect to server [mongoapm:27027] on first connect
kadira-ui_1      |     at Object.Future.wait (/home/meteor/www/bundle/programs/server/node_modules/fibers/future.js:449:15)
kadira-ui_1      |     at new MongoConnection (packages/mongo/mongo_driver.js:211:27)
kadira-ui_1      |     at new MongoInternals.RemoteCollectionDriver (packages/mongo/remote_collection_driver.js:4:16)
```

However mongo was up and running :frowning:

### This is the final config:

```yaml

version: '2'

services:
  mongodb:
    image: mongo:latest
    container_name: "mongoapm"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./data/db:/data/db
    ports:
        - "27027:27027"
    command: mongod --smallfiles --logpath=/dev/null # --quiet
  kadira-engine:
    image: vladgolubev/kadira-engine
    ports:
      - "11011:11011"
    environment:
      - PORT=11011
      - MONGO_URL=$KADIRA_MONGO_URL
      - MONGO_SHARD_URL_one=$KADIRA_MONGO_URL

  kadira-rma:
    image: vladgolubev/kadira-rma
    environment:
      - MONGO_URL=$KADIRA_MONGO_URL

  kadira-ui:
    image: vladgolubev/kadira-ui
    ports:
      - "4000:4000"
    environment:
      - MONGO_URL=$KADIRA_MONGO_URL
      - MONGO_SHARD_URL_one=$KADIRA_MONGO_URL
```
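One likely cause of the ECONNREFUSED above (not confirmed in the thread, just a reading of the config): the mongo container listens on its default port 27017, while both the host mapping and `KADIRA_MONGO_URL` point at 27027. Container-to-container traffic on the compose network ignores the host port mapping, so `mongoapm:27027` hits a port nothing listens on, which matches the `27017/tcp, 0.0.0.0:27027->27027/tcp` line in the `docker-compose ps` output. A sketch of one fix is to make mongod actually listen on 27027:

```yaml
  mongodb:
    image: mongo:latest
    container_name: "mongoapm"
    ports:
      - "27027:27027"
    # Make mongod listen on 27027 so mongoapm:27027 resolves inside the network
    command: mongod --smallfiles --port 27027 --logpath=/dev/null
```

The alternative is to leave mongod on its default port and point `KADIRA_MONGO_URL` at `mongodb://mongoapm:27017` instead.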

Hey, were you able to run Kadira connecting to a local MongoDB successfully? If so, please advise whether any specific configuration is needed for MongoDB when running locally. (It would be great if you could share your steps for configuring MongoDB.)

I got my app to send data to a local mongodb, but the data never shows up in the Kadira UI, as I described in my post above. So in the end, I'd say no, I didn't get it to work properly.

Can you share any specific steps you did to set up your local mongodb? After uninstalling and reinstalling mongodb, I see similar errors as thbaz above. Since you were able to send data to your local mongodb, maybe there is something we are missing while setting up our instance.

Does “connecting to the Mongo Metrics Cluster” hint at something?

```
kadira-engine_1 | starting apm-engine on port 11011
kadira-engine_1 | DDONE
kadira-engine_1 | Error connecting to the Mongo Metrics Cluster
kadira-engine_1 |
kadira-engine_1 | /app/node_modules/mongodb/lib/mongo_client.js:338
kadira-engine_1 | throw err
kadira-engine_1 | ^
kadira-engine_1 | MongoError: failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED]
```
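One thing worth checking here (an observation about the log, not something confirmed in the thread): inside a container, `127.0.0.1` refers to the container itself, not the host, so a `MONGO_URL` of `mongodb://127.0.0.1:27017` from kadira-engine can never reach a mongod running on the host or in another container. Use the mongo service or container name instead, and verify it resolves from the engine's image (assuming `getent` is present in that image; `run` is used rather than `exec` since the engine container has already exited):

```shell
# Check that the engine's image can resolve the mongo container by name
docker-compose run --rm kadira-engine getent hosts mongoapm
```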

Hi, out of curiosity, was there a reason you chose to set up the Mongo db on mLab for your example?
Are there any specific infra requirements that need to be set up before Kadira can connect to a local mongodb instance? I am getting errors when attempting to connect to local mongodb.

The first error comes from the engine: ‘Error connecting to the Mongo Metrics Cluster’.

Would you mind posting your nginx configuration? I have everything up and running, but I can't seem to get an SSL proxy working properly on port 11011, although I can get one on the UI port.
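Not the poster's actual config, but a minimal sketch of what an SSL proxy in front of the engine might look like (the hostname and certificate paths are placeholders):

```nginx
# Terminate TLS and proxy to kadira-engine listening on 11011
server {
    listen 443 ssl;
    server_name engine.example.com;  # placeholder hostname

    ssl_certificate     /etc/ssl/certs/engine.example.com.pem;   # placeholder
    ssl_certificate_key /etc/ssl/private/engine.example.com.key; # placeholder

    location / {
        proxy_pass http://127.0.0.1:11011;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With something like this, the Meteor app's endpoint would be `https://engine.example.com` with no port, and the endpoint scheme must actually be `https://` (the earlier http/https mix-up in this thread applies in reverse here).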

Are you pointing to a local instance of MongoDB?

No, I set up a replica set on DigitalOcean. I was going to use mLab, but as others in this thread have noted, Kadira eats a fair amount of data and I need to connect several apps to it.

If your Kadira instance is able to connect to the replica set, can you please share your configuration setup? I am also exploring a self-hosted mongodb, but was getting errors connecting to a standalone mongodb.

My Kadira is running with a custom replica set hosted on AWS. One thing to keep in mind: all replica set members should accept connections from external IPs, which means `bind_ip` in mongod.conf must not be left at the default `127.0.0.1` (bind to the server's address or `0.0.0.0` instead). Read more here.

Also, I've noticed that setting up the replica set and making it work with remote databases required me to give aliases to the replica set members. So when referencing the set in the mongo connection string, I refer to them as:

```
export MONGO_DATA_URL="mongodb://mongo-a,mongo-b,mongo-c/kadira-data?readPreference=primary&replicaSet=mySet"
```

Then I change the hosts file on each member's server to point the aliases at the right IP addresses, and done. Meteor sometimes complains about not being able to connect to the primary instance via the connection string; the above solves it and might solve it for you as well.

Also, a nice benefit is that you don't have to put each member's IP in the connection string. You can now simply swap in members with different IPs; it only requires changing the hosts files. To simplify this even more, you can use an internal DNS service like AWS Route 53 to handle the aliasing: simply point the right internal DNS record at the right IP.
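A sketch of the hosts-file aliasing described above (the IP addresses are placeholders); each server that needs to reach the set maps the alias names used in the connection string to the right addresses:

```
# /etc/hosts on each member's server (placeholder IPs)
10.0.0.11  mongo-a
10.0.0.12  mongo-b
10.0.0.13  mongo-c
```

Swapping a member then means editing this file (or the corresponding Route 53 record) rather than every connection string.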