Anode Command Line Interface

Posted on Monday, May 28, 2012 by Ami Turgman


So, now that our anode farm is being used for prototyping as well as for production services, it's time to talk a bit about how we manage and track our applications. For this, let me introduce you to the Anode Command Line Interface.

ACLI is a command line interface, developed mainly for use in the anode project.

Why do you need another command line interface? There are already a few out there...
Well, that's true... but believe me, the things these eyes have seen...

No, really... why do you need another command line interface?
From my experience, the existing ones weren't very easy to use. Some were too buggy, some couldn't produce HTML objects as a command execution result, and others were too limited in how you could configure the control's look & feel, the command line prompt text, and more... each of the existing CLIs had some of the required functionality, but none supported everything we needed.
The most suitable library for our needs was the GCLI component, which was the main inspiration for implementing ACLI, mostly in the area of the command structure.

What are you using it for?
We use it to:

  • Manage the farm
    • Getting information from all servers, such as settings, processes and counters
    • Invoking actions on all servers, like forcing a sync process
    • Getting the application list
  • Manage applications
    • Viewing the latest commits (integrated with github)
    • Getting information for an application, such as process info, ports, etc.
    • Restarting an application
  • Host per-application plugins: each application hosted on the farm implements its own plugin that is integrated into the console, allowing its developers to manage it with its own specific set of commands
  • View logs for our applications, filtered by verbosity level and other params
  • Invoke end-to-end tests and view the results


In addition to the obvious features (command history, clear/help commands), we also wanted the following:

  • Supporting plugins: remote commands (remote service REST APIs) integrated into the console using docRouter metadata, as well as plugins with client-side processing and styling.
  • Visualizing JSON data as an HTML table, with auto-collapsing of deep elements.
  • Supporting broadcast requests when working on a farm.
  • Managing environment variables and using them as part of commands.
  • Keeping command line history, plugins and environment variables persistent.
  • Supporting working in parallel with several instances/tabs of the console.

Example of the 'My Board' feature

So, why is this so exciting?

  1. Since ACLI supports plugins, it's easy to use it as a management tool in any node.js application.
    Let's assume you are developing a website using node.js. You can create another page under /management which hosts ACLI, and then, on the server side, implement any REST API that will be integrated into the console as a command: getting logs, listing users, performing operations on users, and anything else you can think of.
    Protecting this area with an authentication/authorization mechanism is also a good idea :)

  2. The powerful internal json-view control, which visualizes any JSON object, gives you an easy-to-begin-with result visualizer.
    You can start creating server side commands that are integrated into the console without writing any client-side code. If you'd like a more advanced or custom look for the results, you can add a client side handler that generates any HTML/jQuery object at a later stage. The server side can, of course, also return HTML instead of a JSON object.

  3. If you are working on a farm, you can create a command that collects data from all servers, displays the progress of the process, and then, when all the data is collected, displays the results. This is a very powerful feature that lets you build commands that gather status from all servers, or invoke an action on all of them, such as restarting the application.

  4. Managing environment variables like any other native CLI allows you to use them as part of any command:

    1. Implicitly, for example as the default value for a parameter in a command, or
    2. Explicitly, by using the $ sign, as in log --top $myTop.
  5. The console automatically keeps the state of the environment variables, the command line history and the installed plugins in local storage.
    Every time you open the console, it will be in the exact state it was in when you last closed it. You won't have to install the plugins again or re-set environment values. In addition, the state is kept per session/tab. This way we can create several workspaces, each with its own environment variables, installed plugins, and so on... all in the context of the application we are managing.

  6. The My Board feature, which allows you to keep results always on screen. It is a panel/container on the right side of the console, into which you can drag-n-drop any command execution result. In the example above, you can see that I'm keeping the environment variables panel (which is a json-view control, by the way) on the My Board panel. This way, I can always see the current environment variable state (the set -o command returns a live control that is updated any time an environment variable changes). The panel can be toggled on/off at any time by clicking its header.

Getting Started

The following is an example of how to quickly start using the component.

In addition to that, you can find basic and advanced samples, including a node.js application with a sample plugin, on github.
The design document includes all the details needed in order to smoothly start integrating plugins as commands into the console.

HTML file:


    <div class="cli-output" id="cliOutput"></div>
    <div class="cli-my-board" id="cliMyBoard">
        <div class="cli-my-board-close"></div>
        <div class="cli-my-board-title">My Board</div>
    </div>
    <div class="cli-input-container">
        <span class="cli-promptText" id="cliPrompt">></span>
        <input id="cliInput" class="cli-input" type="text">
    </div>


Client side JS file:

var cli = $("#cliInput").cli({
    resultsContainer: $("#cliOutput"),
    promptControl: $("#cliPrompt"),
    myBoard: $("#cliMyBoard"),
    environment: { user: { type: 'string', value: '', description: 'The current user' } },
    commands: [],
    context: { some: 'object' },
    welcomeMessage: "Welcome to anode console!<br/>Type <b>help</b> to start exploring the commands currently supported!<br/>"
});

Server side plugin with a command that takes a template parameter and a query string parameter and returns a JSON object:

var express = require('express'),
    app = express.createServer(),
    docRouter = require('docrouter').DocRouter;

module.exports = app;

app.use(docRouter(express.router, '/api/someplugin', function(app) {

    app.get('/json/:tparam', function(req, res) {
            var tparam = req.params.tparam;
            var qparam = req.query['qparam'];

            var o = { tparam: tparam, qparam: qparam };
            res.writeHead(200, { 'Content-Type': 'application/json' });
            res.end(JSON.stringify(o));
        },
        {
            id: 'sample_json',
            name: 'json',
            usage: 'json tparam qparam',
            example: 'json tparam1 qparamValue',
            doc: 'sample for a GET command getting a template param and a query param',
            params: {
                "tparam": {
                    "short": "b",
                    "type": "string",
                    "doc": "template param",
                    "style": "template",
                    "required": "true"
                },
                "qparam": {
                    "short": "q",
                    "type": "string",
                    "doc": "query string param",
                    "style": "query",
                    "required": "true"
                }
            }
        });
}));
You are more than welcome to use this component.
Your feedback is highly appreciated! Feel free to test it, open issues on github, or send questions and comments to Ami Turgman.


Securing MongoDB traffic with ssltunnel on Windows

Posted on Thursday, March 29, 2012 by Dima Stopel

Hi guys,

Today I'd like to discuss ssltunnel. So, what is it? ssltunnel is a lightweight TCP over SSL/TLS tunnel running on node. If you need to add confidentiality (privacy), integrity and authenticity to your TCP stream, this is the tool for you. ssltunnel is available as a node package via npm and is distributed under the MIT license.


To make the discussion of the deeper parts more concrete, let's take an example. Let's say that you use mongodb as your database and you need to connect your CLI client (mongo.exe), running on your PC, to your mongo server (mongod.exe), running on your remote VM. Now suppose that you want to be sure that all the traffic is encrypted and that only you can connect to your mongo server. This is where ssltunnel comes in handy.

ssltunnel consists of two parts: sslproxy and sslserver. The sslproxy part runs on the client machine, communicating with the real client (mongo.exe in our case) and with sslserver. The sslserver part runs on the server machine, communicating with sslproxy and with the back-end server (mongod.exe in our case). sslproxy authenticates sslserver via an SSL server certificate; sslserver authenticates sslproxy via an SSL client certificate. The traffic itself is encrypted using the standard SSL/TLS protocol.

Tunneling mongo traffic through ssltunnel

So, let's create this secure tunnel step by step. Let's suppose the following:

  1. all parts are running on the local machine (for the sake of simplicity)
  2. mongod.exe listening port is 50080
  3. sslserver listening port is 50443
  4. sslproxy listening port is 50081

step 1: installation

Please download the latest node. Open cmd and install ssltunnel package via npm. I'll install it on c:\ (I run Windows).

anydir/> cd /d c:\
c:\> npm install ssltunnel

You should now see a node_modules directory created under c:\. Congratulations, you've successfully installed ssltunnel :)

step 2: running the mongo server

If you don't have mongo, please download the latest version now. Extract it into a directory of your choice. Run cmd and navigate to that directory. Now you can run the server. For the sake of simplicity I instruct it to put data in the data\db folder.

d:\src\mongodb-win32-x86_64-2.0.2\bin>mongod --port 50080 --dbpath data\db

You should see something like this:

Tue Mar 27 16:41:56 [initandlisten] MongoDB starting : pid=3232 port=50080 dbpath=data\db 64-bit host=Dimast-laptop
Tue Mar 27 16:41:56 [initandlisten] db version v2.0.2, pdfile version 4.5
Tue Mar 27 16:41:56 [initandlisten] git version: 514b122d308928517f5841888ceaa4246a7f18e3
Tue Mar 27 16:41:56 [initandlisten] build info: windows (6, 1, 7601, 2, 'Service Pack 1') BOOST_LIB_VERSION=1_42
Tue Mar 27 16:41:56 [initandlisten] options: { dbpath: "data\db", port: 50080 }
Tue Mar 27 16:41:56 [initandlisten] journal dir=data/db/journal
Tue Mar 27 16:41:56 [initandlisten] recover : no journal files present, no recovery needed
Tue Mar 27 16:41:56 [initandlisten] waiting for connections on port 50080
Tue Mar 27 16:41:56 [websvr] admin web console waiting for connections on port 51080

step 3: establishing the tunnel

Let's navigate to the bin directory of ssltunnel:

c:\>cd c:\node_modules\ssltunnel\bin

Now we will create the sslserver. Note that you need a server certificate with its private key, and the public client certificate, in order to be able to verify the client. I have provided test certificates as part of the package. Please generate and use your own for production systems. See how to do it here.

So we instruct the sslserver (-r server) to listen on port 50443 and connect to the back end server on host localhost (the default, actually) and port 50080. We also provide the public and private server certificates and the public client certificate, which are stored in decrypted pem files.

  -r server 
  --proxy_port 50443 
  --server_port 50080 
  --server_host localhost 
  --srv_pub_cert ..\testcerts\sc_public.pem 
  --srv_prv_cert ..\testcerts\sc_private.pem 
  --clt_pub_cert ..\testcerts\cc_public.pem

This is the output you should get:

Running 'server' role. Listening on 50443, decrypting and forwarding to real server machine on localhost:50080
ssltunnel's server is listening on port: 50443

Now let's start the client:

Here we instruct the sslproxy (-r client) to listen on port 50081 and connect to the sslserver on host localhost (also the default) and port 50443. We also provide the public and private client certificates and the sslserver's public certificate.

  -r client 
  --proxy_port 50081 
  --server_port 50443 
  --server_host localhost 
  --srv_pub_cert ..\testcerts\sc_public.pem 
  --clt_pub_cert ..\testcerts\cc_public.pem 
  --clt_prv_cert ..\testcerts\cc_private.pem

You should see something like this:

Running 'client' role. Listening on 50081, encrypting and forwarding to ssltunnel's server on localhost:50443
ssltunnel's client is listening on port: 50081

Congrats! You have an established secure tunnel.

step 4: connecting through the tunnel

Let's try to connect now. Fire up cmd and navigate to mongo's bin directory. Then run mongo.exe and instruct it to connect to localhost:50081.

d:\src\mongodb-win32-x86_64-2.0.2\bin>mongo localhost:50081
MongoDB shell version: 2.0.2
connecting to: localhost:50081/test
> show dbs
local   (empty)
test    0.078125GB

You have successfully connected to your mongo server through ssltunnel!


A few additional words on this. In addition to the above, ssltunnel enables setting TCP keep-alive between sslproxy and sslserver. This makes it possible to overcome problems with servers that have low TCP timeouts. It also supports various log verbosity levels.

ssltunnel can also be used from a node script. You just populate the options object with all the configuration details and run either ssltunnel.createClient() to create an sslproxy or ssltunnel.createServer() to create an sslserver. See this file for an example (scroll to the bottom).
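A sketch of what that script-based usage might look like. The option names below are an assumption, chosen to mirror the CLI flags shown earlier, so verify them against the package's documentation before relying on them:

```javascript
// Assumed option names, mirroring the CLI flags shown earlier in the post.
var options = {
  proxy_port: 50081,                          // local port sslproxy listens on
  server_port: 50443,                         // sslserver's listening port
  server_host: 'localhost',
  srv_pub_cert: 'testcerts/sc_public.pem',    // sslserver's public certificate
  clt_pub_cert: 'testcerts/cc_public.pem',    // client public certificate
  clt_prv_cert: 'testcerts/cc_private.pem'    // client private key
};

// In a real script (requires the ssltunnel package):
// var ssltunnel = require('ssltunnel');
// ssltunnel.createClient(options);   // or ssltunnel.createServer(options)
```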

If you use ssltunnel and are missing a feature, feel free to send a pull request or just ask me to add it. If you have any questions, do not hesitate to contact me.

Dima Stopel


We work at Microsoft and we use node.js

Posted on Tuesday, March 20, 2012 by Elad Ben-Israel

We recently spent some time with Charles Torre from Channel 9, discussing node.js at Microsoft and the project we have been working on, anode.

We thought it would be a nice opportunity to launch our blog and share some of our experiences. Currently there are no plans to release anode as a service, but we are pleased to share the modules we have created as part of the project.

Some background

Microsoft is probably the most diverse software company in the world. We build almost every type of software out there. It's amazing to witness how almost every piece of software we use at the company is 100% home grown. I don't think there's any other company in the world like that: the operating systems we use on our desktops, laptops, servers and phones, the office suite, the IDE, compiler, source control, build system, issue tracking, project management, docs management, databases; even our game room has an Xbox and Kinect. Hell, even the phone system now uses Lync. Crazy. Inspiring. Addictive...

With that in mind, when designing new systems, decisions are apparently simple: run on Windows, host on IIS, write in .NET, use WCF, source control in TFS, data on SQL, and so forth. However, good engineers understand that it is important to choose the right tools for the job. When you only have one option for each part of your stack, you don't make choices, and naturally you end up with sub-optimal solutions.

And there are some really good engineers at Microsoft!

Luckily, one of those engineers led our team a while back. He understood that he needed to keep us on our toes and make sure we didn't settle into the nice and happy cosiness of NIH syndrome. He used to send out emails encouraging us to play around and try new technologies, and kept reminding us that we need to keep looking for the right tools, even if, god forbid, they were not created in Redmond.

One of these emails was about node.js. That was 8 months ago, so the node.js community was already pretty crazy. There were about 5,000 modules in the npm repository back then (today there are over 8,000) and things have been moving fast. Two of us decided to spend a day and play around.

Not optimized for prototyping

One of the pain points we had at the time was the turn-around time for publishing new code. We were doing a lot of experimentation and prototyping and the stack we were using (.NET/WCF/IIS/Azure; msbuild/mstest/TFS) practically meant a turn-around of about 2 hours:

  1. Build and test locally using Azure dev fabric and mocks
  2. Submit for the TFS build server to build and create a package
  3. Upload package to azure
  4. Deploy to staging
  5. Verify nothing broke by running tests against staging
  6. VIP-swap to production

Another big pain was the fact that it took about one minute for logs to be transferred from our roles into the Azure Table, from which we needed to download them, and only then could we figure out what went wrong.

Now, this whole process was needed not necessarily because we had millions of users who needed super high quality code (a lot of the stuff we did was experimental in nature). The main reason we needed all this was the 2h/1m turn-around. Since you couldn't really "develop on the cloud", you had to make sure things were going to work before you deployed, because once something didn't work (usually one of those "it all worked locally, damn it" bugs), 2 more hours went out the window...

We kept trying to improve the process: reduce testing time, improve our simulators to make sure they behave like the cloud, build in parallel, aggregate changes into fewer deployments, use log viewers we found to monitor the system. But we were an order of magnitude away from just writing a few lines of code, seeing whether they worked on the cloud and integrated well with everything else, and doing that over and over. And that's how we wanted to work…

From 7,200 to 10 in one day

Amazingly, after a day of work in a nice little coffee place in Tel-Aviv, borrowing ideas from Smarx Role and other PaaS providers, we managed to create an Azure role that "listened" on a blob account. When a blob container changed, it downloaded the code from that container, spawned node index.js (with an allocated process.env.PORT) and, using http-proxy, routed incoming requests into these apps. When you went into the blob storage and updated one of the files, the role re-fetched the changes and respawned the app. We also grabbed stdout/stderr and pushed it almost immediately into an Azure Table. We wrote a little web app that tailed the table and showed recent logs in near real-time.

So turn-around dropped from 2 hours to 10 seconds.


Our team was pretty excited. We felt there was a new tool in the toolbox worth trying out. Gradually, people started using node.js for their experiments and prototypes and hosted their apps on our nice little PaaS-like role. People were happy that they could actually write code and run it on the cloud so quickly, and if something didn't look good, they just updated it and it was instantly published.

Node.js and the ecosystem around it proved to be an incredibly friendly stack to learn and use. We found many useful node modules and a lot of high quality documentation and conversation shared openly by some awesome hackers.

Today we have a team of about 30 people (located in Tel-Aviv, San Francisco and Seattle) that use node.js and host their apps on our little platform.

Another coincidental development was that two other teams at Microsoft started looking at node.js seriously around the same time: (1) The folks at the developer division joined efforts with Joyent in order to create a native Windows port for node.js (we initially used the cygwin port), so today we have node.js running and behaving beautifully on Windows; and (2) the Azure team started working on iisnode and the Azure Node.js SDK, which makes our lives so much easier running our node.js PaaS on Azure.

Since then, we have added some nice improvements, but we try to keep things simple and tailored to our actual needs:

  • Code is automatically fetched from git and not from blob storage. Working in deltas makes so much sense in this context.
  • We deploy multiple git branches as a means to isolate apps in development from production (but still keep them all on the cloud).
  • We run tests against the deployed apps when we merge code to production.
  • We provide a MongoDB as a service for apps.
  • We use a fun web command line console to interact with the system and apps.
  • We measure useful metrics for apps and provide standard request logging.

Currently, there is no plan to make anode externally available as a service, but we do have a commitment to open-source as many components of the system as we can and share our experience.

We started this site as home to these components and we plan to provide some more context on what we do through the blog.

Feel free to contact us if you have any questions or comments,
The anode crew