Friday, 25 October 2013

Xathrya Sabertooth


JavaScript: String Objects

Posted: 24 Oct 2013 06:38 AM PDT

This article is a supplement for Compact Tutorial on JavaScript Basic.

We can create a string primitive by giving it some characters to hold.

var myPrimitiveString = "Xathrya Sabertooth";

A String object does things slightly differently: it not only allows us to store characters, but also provides a way to manipulate and change those characters.

Creating String Object

Declare a new variable and assign it a string primitive to initialize it. Now we can use typeof() to check the type of data in the variable myPrimitiveString:

<HTML>
<BODY>
<SCRIPT type="text/javascript">
    var myPrimitiveString = "Xathrya Sabertooth";
    document.write (typeof(myPrimitiveString));
</SCRIPT>
</BODY>
</HTML>

We can still use the String object’s methods on it, though. JavaScript will simply convert the string primitive to a temporary string object, use the method on it, and then change the data type back to string. Now let’s try out using the length property of the String object.

<HTML>
<BODY>
<SCRIPT type="text/javascript">
    var myPrimitiveString = "Xathrya Sabertooth";
    document.write ( typeof( myPrimitiveString ) );
    document.write ( "<br>" );
    document.write ( myPrimitiveString.length );
    document.write ( "<br>" );
    document.write ( typeof( myPrimitiveString ) );
</SCRIPT>
</BODY>
</HTML>

Which should give this result:

string
18
string

So, myPrimitiveString is still holding a string primitive after the temporary conversion. You can also create String objects explicitly, using the new keyword together with the String() constructor.

<HTML>
<BODY>
<SCRIPT type="text/javascript">
    var myObjectString = new String("Xathrya Sabertooth");
    document.write ( typeof( myObjectString ) );
    document.write ( "<br>" );
    document.write ( myObjectString.length );
    document.write ( "<br>" );
    document.write ( typeof( myObjectString ) );
</SCRIPT>
</BODY>
</HTML>

Which should give this result:

object
18
object

The only difference between this script and the previous one is that myObjectString is now an explicit String object, created and supplied with some characters.

var myObjectString = new String("Xathrya Sabertooth");

The result of checking the length property is the same whether we create the String object implicitly or explicitly. The only real difference is that creating a String object explicitly is marginally more efficient if we're going to be using the same String object again and again. Explicitly creating String objects also helps prevent the JavaScript interpreter getting confused between numbers and strings, as it sometimes can.
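As a quick illustration (our own sketch, not from the original tutorial), the explicit form behaves differently under typeof and strict comparison:

var primitiveString = "Xathrya Sabertooth";
var objectString = new String("Xathrya Sabertooth");

document.write(typeof primitiveString);    // string
document.write(typeof objectString);       // object

// == converts the object to a primitive before comparing, === does not
document.write(primitiveString == objectString);    // true
document.write(primitiveString === objectString);   // false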

String Object’s Methods

The String object has a lot of methods, so we will limit ourselves to two of them: the indexOf() and substring() methods. A more complete list of the String object’s properties and methods can be found in the handout section.

A string is a sequence of characters. Each of these characters is given a zero-based index: the first character’s position has index 0, the second is 1, and so on. The method indexOf() finds and returns the index at which a substring begins (and the lastIndexOf() method returns the index of the last occurrence of the substring).

We can use this method, for example, to check the position (and existence) of the @ symbol when we ask the user to input an email address. This does not guarantee that the email address is valid, but it at least goes some way in that direction.

We will use the prompt() function to obtain the user’s e-mail address and then check the input for the @ symbol, returning the symbol’s index using indexOf().

<html>
<body>
<script type="text/javascript">
    var userEmail = prompt("Please enter your email address", "");
    document.write( userEmail.indexOf( "@" ) );
</script>
</body>
</html>

If the @ is not found, -1 is written to the page. As long as the character is in the string, indexOf() will return its index, which is always greater than -1.
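For example, a minimal sketch (ours, not from the original text) that uses this return value to accept or reject the input:

var userEmail = prompt("Please enter your email address", "");
var atIndex = userEmail.indexOf("@");

if (atIndex == -1) {
    document.write("No @ found: this is not an email address.");
} else {
    document.write("@ found at index " + atIndex);
}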

The substring() method carves one string out of another string. It takes the start and end positions of the substring as parameters. We can return everything from the start index to the end of the string by leaving off the second argument.

So, to extract the last name from a string, we can combine indexOf() and substring() like this:

<html>
<body>
<script type="text/javascript">
    var characterName = "I am Xathrya Sabertooth";
    var lastNameIndex = characterName.indexOf( "Xathrya " ) + 8;
    var lastName = characterName.substring( lastNameIndex );
    document.write( lastName );
</script>
</body>
</html>

We are extracting Sabertooth from the string in the variable characterName. We first use indexOf() to find where "Xathrya " begins and add 8 to it (because "Xathrya " is 8 characters long) to find the index where the last name starts. The result is stored in lastNameIndex. Using that value, we extract the substring from lastNameIndex to an unspecified final index, so the rest of the characters in the string are returned.

Handouts

  1. JavaScript String Properties and Methods List (PDF)

Introduction to Network Topology

Posted: 24 Oct 2013 04:49 AM PDT

Network topology is a term which refers to the arrangement of the various elements (links, nodes, etc.) of a computer network. Nodes are arranged in a structure that allows them to be interconnected. The structure may be depicted physically or logically. Physical topology refers to the placement of the network’s various components, including device location and cable installation. Logical topology refers to how data flows within a network, regardless of its physical design. Distance between nodes, physical interconnections, transmission rates, and/or signal types may differ between two networks, yet their topologies may be identical.

Several popular topologies, classified by physical layout, are:

  1. Bus topology
  2. Ring topology
  3. Star topology
  4. Tree topology
  5. Mesh topology

Aside from these, there are also a number of combinations of the above, known as hybrid structures.


Bus Topology


In a bus topology, all nodes are connected to a single cable, known as the bus. A signal from the source travels in both directions to all machines connected on the bus cable until it finds the intended recipient. In other words, if the machine address does not match the intended address for the data, the machine ignores the data. Alternatively, if the data matches the machine address, the data is accepted.

The bus topology only needs one wire. A terminator is placed at each end of the bus cable. Since the bus topology consists of only one wire, it is rather inexpensive to implement compared to other topologies. However, the cost of managing the network is rather high. The bus is also a single point of failure: when it is unavailable, the whole network goes down.

Ring Topology


As implied by the name, a ring topology is set up in a circular form. Data travels around the ring in one direction, and each device on the ring acts as a repeater to keep the signal strong as it travels. Each node acts as both receiver and transmitter: a receiver for the incoming signal, and a transmitter to send the data on to the next node in the ring.

Every node is a critical link: when one node goes down, the whole network goes down with it.

Star Topology


A network created using the star topology uses a single node as its center; a hub or switch usually fills this role. The other nodes, the members of the network, are then connected point-to-point to it. Seen as a server-client architecture, the switch is the server and the peripheral nodes are the clients.

All traffic that traverses the network passes through the central switch. The switch multiplexes the data and relays it to its destination based on the address.

The network does not necessarily have to resemble a star to be classified as a star network, but all the nodes on the network must be connected to one central point.

The star topology is considered the easiest topology. Its primary advantage is modularity: adding and removing nodes is really simple. However, the central switch itself is critical; it represents a single point of failure.

Tree Topology


This is a rather complex topology, based on arranging nodes in a hierarchy. As in any tree structure, a tree network has a single ‘root’ node. This node is connected to one or more nodes in the level below by point-to-point links. These lower-level nodes are in turn connected to one or more nodes in the next level down.

A tree network may have any number of levels. But since tree networks are a variant of the bus topology, they are prone to crippling network failures when a higher-level node fails or suffers damage. Each node has a specific, fixed number of nodes connected to it at the next lower level in the hierarchy; this number is referred to as the ‘branching factor’ of the tree.

  1. A network that is based upon the physical hierarchical topology must have at least three levels in the hierarchy of the tree, since a network with a central ‘root’ node and only one hierarchical level below it would exhibit the physical topology of a star.
  2. A network that is based upon the physical hierarchical topology and with a branching factor of 1 would be classified as a physical linear topology.
  3. The branching factor, f, is independent of the total number of nodes in the network. Therefore, if the nodes require ports to connect to other nodes, the number of ports per node can be kept low even when the total number of nodes is large: the cost of adding ports to each node depends only on the branching factor, not on the total number of nodes.
  4. The total number of point-to-point links in a network that is based upon the physical hierarchical topology will be one less than the total number of nodes in the network.
  5. If the nodes in a network that is based upon the physical hierarchical topology are required to perform processing on the data transmitted between nodes, the nodes at higher levels in the hierarchy will have to perform more processing operations on behalf of other nodes than the nodes lower in the hierarchy.

Mesh Topology

In a mesh topology, each node must not only capture and disseminate its own data, but also serve as a relay for other nodes. That is, nodes must collaborate to propagate data through the network.

The self-healing capability of mesh enables a routing-based network to operate when one node breaks down or a connection goes bad. As a result, the network is typically quite reliable, as there is often more than one path between a source and a destination in the network.

Mesh network can be divided into two categories: partially connected mesh and fully connected mesh.

Partially connected

In this type of network topology, some of the nodes are connected to more than one other node in the network with a point-to-point link.


Fully connected

A fully connected mesh is a mesh with a link from every node to every other node. No switching or broadcasting is needed, since every node knows all its peers. However, the number of connections grows quadratically with the number of nodes:

connections = node * (node - 1) / 2

This topology is impractical for large-scale networks; for example, a fully connected mesh of 100 nodes would need 100 * 99 / 2 = 4950 links.


Tuesday, 22 October 2013



NodeJS HTTPS

Posted: 21 Oct 2013 09:31 AM PDT

HTTPS, or HTTP Secure, is the HTTP protocol over TLS. It is the secure version of HTTP and is implemented as a separate module in Node. The API itself is very similar to the HTTP one, with some small differences.

HTTPS Server

To create a server, we can do:

var https = require('https'),
    fs = require('fs');

var options = {
    key: fs.readFileSync('/path/to/server/private-key.pem'),
    cert: fs.readFileSync('/path/to/server/certificate.pem')
};

https.createServer(options, function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World!');
}).listen(443);    // bind to the standard HTTPS port

So here, the first argument to https.createServer is an options object that, much like in the TLS module, provides the private key and the certificate strings.

HTTPS Client

To make an HTTPS request, we must use the https module:

var https = require('https'),
    options = {
        host: 'encrypted.google.com',
        port: 443,
        path: '/',
        method: 'GET'
    };

var req = https.request(options, function(res) {
    console.log("statusCode: ", res.statusCode);
    console.log("headers: ", res.headers);

    res.on('data', function(d) {
        process.stdout.write(d);
    });
});
req.end();

Here are the options we can change:

  • port: port of the host to request. Defaults to 443.
  • key: the client private key string to use for SSL. Defaults to null.
  • cert: the client certificate to use. Defaults to null.
  • ca: an authority certificate or array of authority certificates to check the remote host against.

We may want to use the key and cert options if the server needs to verify the client.
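For example, here is a sketch of a client request carrying its own credentials (the file paths are placeholders, and the server would need to request client certificates for this to matter):

var https = require('https'),
    fs = require('fs');

var options = {
    host: 'encrypted.google.com',
    port: 443,
    path: '/',
    method: 'GET',
    // client credentials, checked by servers that verify clients
    key: fs.readFileSync('/path/to/client/private-key.pem'),
    cert: fs.readFileSync('/path/to/client/certificate.pem')
};

https.request(options, function(res) {
    console.log('statusCode: ', res.statusCode);
}).end();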

Much like the http module, this module also offers a shortcut https.get method that can be used:

var https = require('https');
var options = { host: 'encrypted.google.com', path: '/' };

https.get(options, function(res) {
    res.on('data', function(d) {
        console.log(d.toString());
    });
});

NodeJS TLS / SSL

Posted: 21 Oct 2013 08:52 AM PDT

Transport Layer Security (TLS) is the successor of the Secure Socket Layer (SSL) protocol. The technology allows client/server applications to communicate across a network in a way designed to prevent eavesdropping and tampering. TLS and SSL encrypt the segments of network connections above the transport layer, enabling both privacy and message authentication.

Node uses OpenSSL to provide TLS and/or SSL encrypted stream communication.

TLS is a standard based on the earlier SSL specification. In fact, TLS 1.0 is also known as SSL 3.1, and the latest version (TLS 1.2) is also known as SSL 3.3. From now on we will use TLS instead of SSL.

Public / Private Keys

TLS uses public/private key pairs. A public key is open to everyone and is used by other parties to encrypt data they want to send to us. The private key, as implied by its name, is known only to us or our machine, and is used to decrypt the messages sent by other machines.

Private Key

Each client and server must have a private key. A private key can be generated with the openssl utility on the command line:

openssl genrsa -out private-key.pem 1024

This should create a file named private-key.pem with our private key.

Public Key

All servers and some clients need to have a certificate. Certificates are public keys signed by a Certificate Authority or self-signed. The first step to getting a certificate is to create a “Certificate Signing Request” file. This can be done with:

openssl req -new -key private-key.pem -out csr.pem

This will create a CSR named csr.pem using our generated key (private-key.pem). When you are asked for some data, answer it; the answers will be written into the certificate.

The purpose of a CSR is to request a certificate. That is, if we want a CA (Certificate Authority) to sign our certificate, we can give this file to them; they will process it and give us back a certificate.

Alternatively, we can create a self-signed certificate with the CSR we have:

openssl x509 -req -in csr.pem -signkey private-key.pem -out certificate.pem

Thus we have our certificate file, certificate.pem.

TLS Client

We can connect to a TLS server using something like this:

var tls = require('tls'),
    fs = require('fs'),
    port = 3000,
    host = '192.168.1.135',
    options = {
        key: fs.readFileSync('/path/to/private-key.pem'),
        cert: fs.readFileSync('/path/to/certificate.pem')
    };

var client = tls.connect(port, host, options, function() {
    console.log('connected');
    if (client.authorized) {
        console.log('authorized: ' + client.authorized);
        client.on('data', function(data) {
            client.write(data);    // Just send data back to server
        });
    } else {
        console.log('connection not authorized: ' + client.authorizationError);
    }
});

First we need to inform Node of the client private key and client certificate, which should be strings. We read the .pem files into memory using the synchronous version of fs.readFile, fs.readFileSync.

Then, we connect to the server. tls.connect returns a CryptoStream object, which we can use as a normal ReadStream and WriteStream. We then wait for data from the server as we would on a ReadStream, and send it back to the server.

TLS Server

A TLS server is a subclass of net.Server. With it, we can do everything we can with a net.Server, except that we are doing it over a secure connection.

var tls = require('tls'),
    fs = require('fs'),
    options = {
        key: fs.readFileSync('/path/to/private-key.pem'),
        cert: fs.readFileSync('/path/to/certificate.pem')
    };

tls.createServer(options, function(s) {
    s.pipe(s);
}).listen(4004);

Besides the key and cert options, tls.createServer also accepts:

  • requestCert: if true, the server will request a certificate from clients that connect and attempt to verify that certificate. The default value is false.
  • rejectUnauthorized: If true, the server will reject any connection which is not authorized with the list of supplied CAs. This option only has effect if requestCert is true. The default value is false.

Verification

On both the client and the server APIs, the stream has a property named authorized. This is a boolean indicating if the client was verified by one of the certificate authorities you are using, or one that they delegate to. If s.authorized is false, then s.authorizationError contains the description of how the authorization failed.
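Putting this together with the server options above, a sketch of a server that requests and verifies client certificates might look like this (the ca path is an assumption; it should point to a certificate we trust):

var tls = require('tls'),
    fs = require('fs');

var options = {
    key: fs.readFileSync('/path/to/private-key.pem'),
    cert: fs.readFileSync('/path/to/certificate.pem'),
    // certificate(s) we trust when verifying clients (assumed path)
    ca: [fs.readFileSync('/path/to/client-certificate.pem')],
    requestCert: true
};

tls.createServer(options, function(s) {
    if (s.authorized) {
        s.write('welcome, verified client\n');
        s.pipe(s);
    } else {
        console.log('connection not authorized: ' + s.authorizationError);
        s.end();
    }
}).listen(4004);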

NodeJS Streaming HTTP Chunked Responses

Posted: 21 Oct 2013 03:50 AM PDT

NodeJS is stream-oriented throughout, and that includes HTTP responses. HTTP being a first-class protocol in Node, HTTP responses can be streamed in a very convenient way.

HTTP chunked encoding allows a server to keep sending data to the client without ever sending the body size. Unless we specify a “Content-Length” header, the Node HTTP server sends the header

Transfer-Encoding: chunked

to the client, which then waits for a final chunk of length 0 before treating the response as terminated.

This can be useful for streaming data – text, audio, video, or anything else – into the HTTP client.

Streaming Example

Here we are going to write an example that pipes the output of a child process to the client:

var spawn = require('child_process').spawn;

require('http').createServer(function(req, res) {
    var child = spawn('tail', ['-f', '/var/log/system.log']);
    child.stdout.pipe(res);
    res.on('close', function() {    // connection closed by the client
        child.kill();
    });
}).listen(4000);

Here we are creating an HTTP server and binding it to port 4000.

When there is a new request, we launch a new child process by executing the command “tail -f /var/log/system.log”, whose output is piped into the response.

When the response is closed (because the browser window was closed, the network connection severed, etc.), we kill the child process so it does not hang around indefinitely.

Monday, 21 October 2013



NodeJS Child Processes

Posted: 21 Oct 2013 02:03 AM PDT

Child process is a process created by another process (the parent process). The child inherits most of its parent’s attributes, such as file descriptors.

On Node, we can spawn child processes, which can be other Node processes or any process we can launch from the command line. For that we will have to provide the command and arguments to execute it. We can either spawn the process and live alongside it (spawn), or wait until it exits (exec).

Executing Command

We can launch another process and wait for it to finish like this:

var exec = require('child_process').exec;

exec('cat *.js | wc -l', function(err, stdout, stderr) {
    if (err) {
        console.log('child process exited with error code ' + err.code);
        return;
    }
    console.log(stdout);
});

Here we are executing a command, represented as a string just as we would type it on the terminal, as the first argument of exec(). Our command is “cat *.js | wc -l”, which pipes two commands: the first prints the content of every file with a .js extension, and the second reads from the pipe and counts the lines. The second argument to exec() is a callback that is invoked once the command has finished.

If the child process returned an error code, the first argument of the callback will contain an instance of Error, with the code property set to the child exit code.

If not, the output of stdout and stderr will be collected and offered to us as strings.

We can also pass an optional options argument between the command and the callback function:

var options = { timeout: 10000 };
exec('cat *.js | wc -l', options, function (err, stdout, stderr) { ... });

The available options are:

  • encoding: the expected encoding for the child output. Defaults to 'utf8'.
  • timeout: the timeout in milliseconds for the execution of the command. Defaults to 0, which means no timeout.
  • maxBuffer: the maximum size of the output allowed on stdout or stderr. If exceeded, the child is killed. Defaults to 200 * 1024.
  • killSignal: the signal to be sent to the child if it times out or exceeds the output buffers. Identified as a string.
  • cwd: the current working directory the command will operate in.
  • env: environment variables to be passed to the child process. Defaults to null.

On the killSignal option, we can pass a string identifying the name of the signal we wish to send to the target process. Signals are identified in node as strings.
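For instance, here is a small sketch that kills a long-running command with SIGKILL after one second (the sleep command is just a stand-in):

var exec = require('child_process').exec;

var options = {
    timeout: 1000,          // one second
    killSignal: 'SIGKILL'   // signal name, as a string
};

exec('sleep 10', options, function(err, stdout, stderr) {
    if (err) {
        // err.signal holds the signal that terminated the child
        console.log('child terminated by signal ' + err.signal);
    }
});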

Spawning Processes

In the previous section we saw how to execute a command and wait for it to finish. Alternatively, Node can spawn a new child process with the child_process.spawn function.

var spawn = require('child_process').spawn;

var child = spawn('tail', ['-f', 'file.txt']);
child.stdout.on('data', function(data) {
    console.log('stdout: ' + data);
});
child.stderr.on('data', function(data) {
    console.log('stderr: ' + data);
});

Here we are spawning a child process to run the “tail” command. The tail command needs some arguments, so we pass an array of strings as the second argument of spawn(). tail receives the arguments “-f” and “file.txt”, which make it monitor the file “file.txt” (if it exists) and output any new data appended to it to stdout.

We are also listening to the child’s stdout and printing its output, so in this case we are effectively piping changes to the “file.txt” file into our Node application. We print the stderr stream as well.

Killing Process

We can (and should) eventually kill a child process by calling the kill method on the child object. Otherwise, it may become a zombie process.

var spawn = require('child_process').spawn;

var child = spawn('tail', ['-f', 'file.txt']);
child.stdout.on('data', function(data) {
    console.log('stdout: ' + data);
    child.kill();
});

In UNIX, this sends a SIGTERM signal to the child process.

We can also send another signal to the child process by specifying it in the kill call like this:

child.kill('SIGKILL');

NodeJS Datagrams (UDP)

Posted: 20 Oct 2013 10:19 PM PDT

UDP is a connectionless protocol that does not provide the delivery guarantees that TCP does. When sending UDP packets, there is no guarantee of packet order and no guarantee that all packets will arrive: packets may arrive out of order or not at all.

On the other hand, UDP can be quite useful in certain cases, such as when we want to broadcast data, when we don’t need strict delivery and ordering guarantees, or even when we don’t know the addresses of our peers.

NodeJS has the ‘dgram’ module to support datagram transmission.

Datagram Server

A server listening on a UDP port can look like this:

var dgram = require('dgram');

var server = dgram.createSocket('udp4');
server.on('message', function(message, rinfo) {
    console.log('server got message: ' + message +
                ' from ' + rinfo.address + ':' + rinfo.port);
});

server.on('listening', function() {
    var address = server.address();
    console.log('server listening on ' + address.address +
                ':' + address.port);
});

server.bind(4002);

The createSocket function accepts the socket type as the first argument, which can be either "udp4" (UDP over IPv4), "udp6" (UDP over IPv6) or "unix_dgram" (UDP over UNIX domain socket).

When you run the script, the server prints its address and port, and then waits for messages.

We can test it using a tool like netcat:

echo 'hello' | netcat -c -u -w 1 localhost 4002

This sends a UDP packet containing “hello” to localhost port 4002, which our program listens on. The server should then print something like:

server got message: hello from 127.0.0.1:54950

Datagram Client

To create a UDP client to send UDP packets, we can do something like:

var dgram = require('dgram');

var client = dgram.createSocket('udp4');

var message = new Buffer('this is a message');
client.send(message, 0, message.length, 4002, 'localhost');
client.close();

Here we are creating a client using the same createSocket function we used to create the server, with the difference that we don’t bind.

You have to be careful not to change the buffer you pass to client.send before the message has been sent. If you need to know when your message has been flushed to the kernel, and thus when the buffer may be reused, pass a callback function:

client.send(message, 0, message.length, 4002, 'localhost', function() {
    // buffer can be reused now
});

Since we are not binding, the message is sent from random UDP port. If we want to send from a specific port, we use client.bind(port).

var dgram = require('dgram');

var client = dgram.createSocket('udp4');

var message = new Buffer('this is a message');
client.bind(4001);
client.send(message, 0, message.length, 4002, 'localhost');
client.close();

The port binding on the client really mixes what a server and client are, but it can be useful for maintaining conversations like this:

var dgram = require('dgram');

var client = dgram.createSocket('udp4');

var message = new Buffer('this is a message');
client.bind(4001);
client.send(message, 0, message.length, 4002, 'localhost');
client.on('message', function(message, rinfo) {
    console.log('and got the response: ' + message);
    client.close();
});

Here we are sending a message and also listening to messages. When we receive one message we close the client.

Don’t forget that UDP is unreliable. Whatever protocol we devise on top of it must tolerate lost and out-of-order packets.

Datagram Multicast

One of the interesting uses of UDP is to distribute a message to several nodes using only one network message. This is multicast. Multicasting can be useful when we don’t want to know the addresses of all our peers; peers just have to “tune in” and listen to the multicast channel.

Nodes can report their interest in certain multicast channels by “tuning” into those channels. In IP addressing there is a space reserved for multicast addresses. In IPv4 the range is between 224.0.0.0 and 239.255.255.255, but some of these are reserved. 224.0.0.0 through 224.0.0.255 is reserved for local purposes (such as administrative and maintenance tasks), and the range 239.0.0.0 to 239.255.255.255 has also been reserved for “administrative scoping”.

Receiving Multicast Message

To join a multicast address, for example 230.1.2.3, we can do something like this:

var server = require('dgram').createSocket('udp4');

server.on('message', function(message, rinfo) {
    console.log('server got message: ' + message + ' from ' +
                rinfo.address + ':' + rinfo.port);
});

server.addMembership('230.1.2.3');
server.bind(4002);

We tell the kernel that this UDP socket should receive multicast messages for the multicast address 230.1.2.3. When calling addMembership, we can pass the listening interface as an optional second argument. If omitted, Node will try to listen on every public interface.
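For instance, to join the group on one specific local interface only (the address below is an assumption for illustration):

// join the multicast group only on the given local interface
server.addMembership('230.1.2.3', '192.168.1.5');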

Then we can test the server using netcat like this:

echo 'hello' | netcat -c -u -w 1 230.1.2.3

Sending Multicast Message

To send a multicast message we simply have to specify the multicast address:

var dgram = require('dgram');

var client = dgram.createSocket('udp4');

var message = new Buffer('this is a message');
client.setMulticastTTL(10);
client.send(message, 0, message.length, 4002, '230.1.2.3');
client.close();

Here, besides sending the message, we first set the multicast time-to-live to 10 (an arbitrary value here). This TTL tells the network how many hops (routers) the packet can travel through before it is discarded. Every time a UDP packet travels through a hop, the TTL counter is decremented, and once it reaches 0 the packet is discarded.

NodeJS UNIX Sockets

Posted: 20 Oct 2013 09:47 PM PDT

A UNIX socket or Unix domain socket, also known as an IPC socket (inter-process communication socket), is a data communications endpoint for exchanging data between processes executing within the same host operating system. It offers functionality similar to named pipes, but Unix domain sockets may be created as connection-mode or connectionless.

Unix domain sockets use the file system as their address name space: they are referenced by processes as inodes in the file system. This allows two processes to open the same socket in order to communicate. However, communication occurs entirely within the operating system kernel.

Server

Node’s net.Server class not only supports TCP sockets, but also UNIX domain sockets.

To create a UNIX socket server, we create a normal net.Server but then make it listen on a file path instead of a port.

var net = require('net');

var server = net.createServer(function(socket) {
    // got a client connection here
});
server.listen('/path/to/socket');

UNIX domain socket servers present the exact same API as TCP servers.

If you are doing inter-process communication that is local to host, consider using UNIX domain sockets instead of TCP sockets, as they should perform much better. For instance, when connecting node to a front-end web-server that stays on the same machine, choosing UNIX domain sockets is generally preferable.

Client

Connecting to a UNIX socket server can be done with net.createConnection, just as when connecting to a TCP server. The difference is in the argument: a socket path is passed instead of a port.

var net = require('net');
var conn = net.createConnection('/path/to/socket');
conn.on('connect', function() {
    console.log('connected to unix socket server');
});

Passing File Descriptors Around

UNIX sockets have this interesting feature that allows us to pass file descriptors from a process into another process. In UNIX, a file descriptor can be a pointer to an open file or network connection, so this technique can be used to share files and network connections between processes.
For instance, to grab the file descriptor from a file read stream we should use the fd attribute like this:

var fs = require('fs');
var readStream = fs.createReadStream('/etc/passwd', {flags: 'r'});
var fileDescriptor = readStream.fd;

and then we can pass it into a UNIX socket using the second or third argument of socket.write like this:

var socket = ...

// assuming it is UTF-8
socket.write('some string', fileDescriptor);

// specifying the encoding
socket.write('453d9ea499aa8247a54c951', 'base64', fileDescriptor);

On the other end, we can receive a file descriptor by listening to the “fd” event like this:

var socket = ...
socket.on('fd', function(fileDescriptor) {
    // now I have a file descriptor
});

We can then perform various Node API operations, depending on the type of file descriptor.

Read or Write into File

If it’s a file-system file descriptor, we can use the Node low-level “fs” module API to read or write data.

var fs = require('fs');
var socket = ...

socket.on('fd', function(fileDescriptor) {
    // write some
    var writeBuffer = new Buffer("here is my string");
    fs.write(fileDescriptor, writeBuffer, 0, writeBuffer.length);

    // read some
    var readBuffer = new Buffer(1024);
    fs.read(fileDescriptor, readBuffer, 0, readBuffer.length, 0,
        function(err, bytesRead) {
            if (err) { console.log(err); return; }
            console.log('read ' + bytesRead + ' bytes:');
            console.log(readBuffer.slice(0, bytesRead));
        });
});

We should be careful about the file open mode: if the file is opened with the “r” flag, no write operation can be done.

Listen to the Server Socket

As another example of sharing a file descriptor between processes: if the file descriptor that was passed in is a server socket, we can create a server on the receiving end and attach the received file descriptor using the server.listenFD method, like this:

var server = require('http').createServer(function(req, res) {
    res.end('Hello World!');
});

var socket = ...
socket.on('fd', function(fileDescriptor) {
    server.listenFD(fileDescriptor);
});

We can use listenFD() on an “http” or “net” server. In fact, on anything that descends from net.Server.

NodeJS TCP

Posted: 20 Oct 2013 09:24 PM PDT

TCP, or Transmission Control Protocol, is a connection-oriented protocol for data transmission over a network. It provides reliability, so transmitted packets are guaranteed to arrive complete and in the correct order.

Node has a first-class HTTP module implementation, but this descends from the “bare-bones” TCP module. As such, everything described here also applies to every class descending from the net module.

TCP Server

We can create TCP servers and clients using the “net” module.

Here is how we create a TCP server:

require('net').createServer(function(socket) {
    // new connection
    socket.on('data', function(data) {
        // got data
    });

    socket.on('end', function() {
        // connection closed
    });

    socket.write('Some string');
}).listen(4001);

Here our server is created using the “net” module and listens on port 4001 (to distinguish it from our HTTP server on 4000). Our callback is invoked every time a new connection arrives, which is indicated by the “connection” event.

On this socket object, we can then listen for “data” events, emitted when we get a chunk of data, and the “end” event, emitted when the connection is closed.

Listening

As we saw, after the server is created, we can bind it to a specific TCP port.

var port = 4001;
var host = '0.0.0.0';
server.listen(port, host);

The second argument (host) is optional. If omitted, the server will accept connections directed to any IP address.

This method is asynchronous. To be notified when the server is really bound we have to pass a callback.

//-- With host specified
server.listen(port, host, function() {
    console.log('server listening on port ' + port);
});

//-- Without host specified
server.listen(port, function() {
    console.log('server listening on port ' + port);
});

Write Data

We can pass in a string or buffer to be sent through the socket. If a string is passed in, we can specify an encoding as a second argument; if no encoding is specified, Node will assume UTF-8. The operations are much like those in the HTTP module.

var flush = socket.write('453d9ea499aa8247a54c951', 'base64');

The socket object is an instance of net.Socket, which is a writeStream, so the write method returns a boolean, saying whether it flushed to the kernel or not.

We can also pass in a callback. This callback will be invoked when data is finally written out.

// with encoding specified
var flush = socket.write('453d9ea499aa8247a54c951', 'base64', function() {
    // flushed
});

// Assuming UTF-8
var flush = socket.write('Heihoo!', function() {
    // flushed
});

.end()

Method .end() is used to end the connection. This will send the TCP FIN packet, notifying the other end that this end wants to close the connection.

But we can still get “data” events after we have issued this: there might still be some data in transit, or the other end might insist on sending us some more data.

With this method, we can also pass in some final data to be sent:

socket.end('Bye bye!');

Other Methods

The socket object is an instance of net.Socket and implements the WriteStream and ReadStream interfaces, so all those methods, like pause() and resume(), are available. We can also bind to “drain” events, as with any other stream object.

Idle Sockets

A socket can be idle for some time, for example when no data has been received for a while. When this happens, we can be notified by calling setTimeout():

var timeout = 60000;    // 1 minute
socket.setTimeout(timeout);
socket.on('timeout', function() {
    socket.write('idle timeout, disconnecting, bye!');
    socket.end();
});

or in shorter form:

socket.setTimeout(60000, function() {
    socket.end('idle timeout, disconnecting, bye!');
});

Keep-Alive

Keep-alive is a mechanism that prevents a connection from timing out. The concept is very simple: when we set up a TCP connection, a set of timers is associated with it, and some of them deal with the keep-alive procedure. When the keep-alive timer reaches zero, we send our peer a keep-alive probe packet with no data in it and the ACK flag turned on.

In Node, all this functionality has been simplified: we can enable keep-alive by invoking:

socket.setKeepAlive(true);

We can also specify the delay between the last packet received and the next keep-alive probe as the second argument:

socket.setKeepAlive(true, 10000);    // 10 seconds

Delay or No Delay

When sending off TCP packets, the kernel buffers data before sending it off and uses Nagle's algorithm to determine when to send off the data. If you wish to turn this off and demand that the data gets sent immediately after write commands, use:

socket.setNoDelay(true);

Of course, we can turn buffering back on by invoking it with a false value.

Connection Close

The server.close() method closes the server, preventing it from accepting new connections. This function is asynchronous; the server emits a "close" event when it has actually closed:

var server = ...
server.close();
server.on('close', function() {
    console.log('server closed!');
});

TCP Client

We can create a TCP client which connects to a TCP server, using the “net” module.

var net = require('net');
var port = 4001;
var host = 'www.google.com';
var conn = net.createConnection(port, host);

Here, if we omit the host when creating the connection, it defaults to localhost.

Then we can listen for data.

conn.on('data', function(data) {      console.log('some data has arrived');  });

or send some data.

conn.write('I send you some string');

or end it.

conn.end();

and also listen for the “close” event (whether the connection is closed by us or by the peer):

conn.on('close', function(data) {
    console.log('connection closed');
});

Socket conforms to the ReadStream and WriteStream interfaces, so we can use all of the previously described methods on it.

Error Handling

When handling a socket on the client or the server, we can (and should) handle the errors by listening to the “error” event.

Here is a simple template for how to do it:

require('net').createServer(function(socket) {
    socket.on('error', function(error) {
        // do something
    });
});

If we don’t catch the error, Node treats it as an uncaught exception and terminates the current process. Unless that’s what we want, we should handle the errors.

NodeJS Streams, Pump, and Pipe

Posted: 20 Oct 2013 10:02 AM PDT

Stream is an object that represents a generic sequence of bytes. Any type of data can be stored as a sequence of bytes, so the details of writing and reading data can be abstracted.

Node has a useful abstraction for streams. More specifically, two very useful abstractions: Read Streams and Write Streams. They are implemented throughout several Node objects, and they represent inbound (ReadStream) or outbound (WriteStream) flows of data.

Though reading and writing streams is nothing special in any programming language, Node has its own way of doing it.

ReadStream

A ReadStream is like a faucet of data. How a stream is created depends on the type of stream itself. After you have created one, you can: wait for data, know when it ends, pause it, and resume it.

Wait for data

By binding to the "data" event we can be notified every time there is a chunk being delivered by that stream. It can be delivered as a buffer or as a string.
If we use stream.setEncoding(encoding), the "data" events pass in strings. If we don't set an encoding, the "data" events pass in buffers.

var readStream = ...
readStream.on('data', function(data) {
    // data is a buffer
});

var readStream = ...
readStream.setEncoding('utf8');
readStream.on('data', function(data) {
    // data is utf8-encoded string
});

So the data passed in the first example is a buffer, while in the second it is a string, because we specified the utf8 encoding.

The size of each chunk may vary: it depends on the buffer size and on the amount of available data, so it can be unpredictable.

Know when it ends

A stream can end, and we can know when that happens by binding to the “end” event.

var readStream = ...
readStream.on('end', function() {
    console.log('the stream has ended');
});

Pause

A read stream is like a faucet, and we can keep the data from coming in by pausing it.

readStream.pause();

Resume

If a stream is paused, we can resume it and the data will start flowing again.

readStream.resume();

WriteStream

A WriteStream is an abstraction over somewhere you can send data to. It can be a file, a network connection, or even an object that outputs transformed data.

When we have a WriteStream object, we can do two operations: write, and wait for it to drain.

Write

We can write data to a stream, either as a string or as a buffer.

By default, a write operation will treat a string as a utf8 string unless it is told otherwise.

var writeStream = ...;

writeStream.write('this is an utf-8 string');
writeStream.write('7e3e4acde5ad240a8ef5e731e644fbd1', 'base64');

For writing a buffer, we can slightly modify it to

var writeStream = ...;
var buffer = new Buffer('this is a buffer with some string');
writeStream.write(buffer);

Wait for it to drain

Node does not block on I/O operations, so it does not block on read or write commands. On write commands, if Node is not able to flush the data into the kernel buffers, it buffers that data, storing it in our process memory. Because of this, writeStream.write() returns a boolean: if write() manages to flush all the data to the kernel buffer, it returns true; if not, it returns false.

When a writeStream later manages to flush the pending data into the kernel buffers, it emits a “drain” event, which we can listen for like this:

var writeStream = ...;
writeStream.on('drain', function() { console.log('drain emitted'); });

Stream by Example

FileSystem stream

We can create a read stream for a file path.

var fs = require('fs');

var rs = fs.createReadStream('/path/to/file');

We can also pass some options to .createReadStream(), for example the start and end positions in the file, the encoding, the flags, and the buffer size. Below are the default option values:

{
    flags: 'r',
    encoding: null,
    fd: null,
    mode: 0666,
    bufferSize: 64*1024
}

We can also create a write stream

var fs = require('fs');
var rs = fs.createWriteStream('/path/to/file', options);

This also accepts a second argument with an options object. Below are the default option values:

{
    flags: 'w',
    encoding: null,
    mode: 0666
}

We can also override a single option if necessary.

var fs = require('fs');
var rs = fs.createWriteStream('/path/to/file', {encoding: 'utf8'});

Case Study: Slow Client Problem

As said before, Node does not block on writes, and it buffers the data if the write cannot be flushed into the kernel buffers. Now if we are pumping data into a write stream (like a TCP connection to a browser) and our source of data is a read stream (like a file ReadStream):

var fs = require('fs');

require('http').createServer(function(req, res) {
    var rs = fs.createReadStream('/path/to/big/file');
    rs.on('data', function(data) {
        res.write(data);
    });
    rs.on('end', function() {
        res.end();
    });
}).listen(4000);

If the file is local, the read stream should be fast. Now if the connection to the client is slow, the writeStream will be slow. So readStream “data” events will happen quickly, the data will be sent to the writeStream, but eventually Node will have to start buffering the data because the kernel buffers will be full.

What will happen then is that the /path/to/big/file file will be buffered in memory for each request, and if we have many concurrent requests, Node memory consumption will inevitably increase, which may lead to other problems, like swapping, thrashing and memory exhaustion.

To address this problem we have to make use of the pause and resume methods of the read stream, and pace it alongside the write stream so memory does not fill up:

var fs = require('fs');

require('http').createServer(function(req, res) {
    var rs = fs.createReadStream('/path/to/big/file');
    rs.on('data', function(data) {
        if (!res.write(data)) {
            rs.pause();
        }
    });
    res.on('drain', function() {
        rs.resume();
    });
    rs.on('end', function() {
        res.end();
    });
}).listen(4000);

We are pausing the readStream if the write cannot flush to the kernel, and we are resuming it when the writeStream is drained.

Pump

What was described here is a recurring pattern, and instead of this complicated chain of events we can simply use util.pump() which does exactly what we described:

var util = require('util');
var fs = require('fs');

require('http').createServer(function(req, res) {
    var rs = fs.createReadStream('/path/to/big/file');
    util.pump(rs, res, function() {
        res.end();
    });
}).listen(4000);

util.pump() accepts 3 arguments: the readable stream, the writable stream, and a callback invoked when the read stream ends.

Pipe

There is another approach we can use: pipe. A ReadStream can be piped into a WriteStream in the same fashion, simply by calling pipe(destination).

var fs = require('fs');

require('http').createServer(function(req, res) {
    var rs = fs.createReadStream('/path/to/big/file');
    rs.pipe(res);
}).listen(4000);

By default, end() is called on the destination when the read stream ends. We can prevent that behavior by passing end: false in the second (options) argument, like this:

var fs = require('fs');

require('http').createServer(function(req, res) {
    var rs = fs.createReadStream('/path/to/big/file');
    rs.pipe(res, {end: false});
    rs.on('end', function() {
        res.end("And that's all folks!");
    });
}).listen(4000);

Creating Own Read and Write Streams

We can implement our own read and write streams.

ReadStream

When creating a Readable stream, we have to implement the following methods:

  • setEncoding(encoding)
  • pause()
  • resume()
  • destroy()

and emit the following events:

  • “data”
  • “end”
  • “error”
  • “close”
  • “fd” (not mandatory)

We should also implement the pipe() method, but we can lend some help from Node by inheriting from Stream.

var MyClass = ...
var util = require('util'),
    Stream = require('stream').Stream;

util.inherits(MyClass, Stream);

This will make the pipe method available at no extra cost.
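As a minimal sketch (our own illustration, in the style of the old Stream class), a readable pseudo-class that emits a few “data” events and then “end” could look like this:

var util = require('util'),
    Stream = require('stream').Stream;

function CounterStream() {
    Stream.call(this);
    this.readable = true;
    var self = this, i = 0;
    var interval = setInterval(function() {
        self.emit('data', 'count: ' + (i++) + '\n');
        if (i > 5) {
            clearInterval(interval);
            self.emit('end');
        }
    }, 100);
}
util.inherits(CounterStream, Stream);

// pipe() is inherited from Stream; end: false keeps stdout open
new CounterStream().pipe(process.stdout, {end: false});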

WriteStream

To implement our own WriteStream-ready pseudo-class we should provide the following methods (a minimal sketch follows the list):

  • write(string, encoding='utf8', [fd])
  • write(buffer)
  • end()
  • end(string, encoding)
  • end(buffer)
  • destroy()

and emit the following events:

  • “drain”
  • “error”
  • “close”
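Here is a minimal sketch (ours, not from the original article) of such a pseudo-class: a write stream that upper-cases everything written to it and prints it to stdout:

var util = require('util'),
    Stream = require('stream').Stream;

function UpperCaseStream() {
    Stream.call(this);
    this.writable = true;
}
util.inherits(UpperCaseStream, Stream);

UpperCaseStream.prototype.write = function(data, encoding) {
    if (Buffer.isBuffer(data)) data = data.toString(encoding || 'utf8');
    process.stdout.write(data.toUpperCase());
    return true;    // nothing buffered, so no "drain" will be needed
};

UpperCaseStream.prototype.end = function(data, encoding) {
    if (data) this.write(data, encoding);
    this.writable = false;
    this.emit('close');
};

UpperCaseStream.prototype.destroy = function() {
    this.writable = false;
};

// usage: write to it directly, or pipe a read stream into it
new UpperCaseStream().end('hello world\n');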

NodeJS HTTP

Posted: 20 Oct 2013 04:11 AM PDT

HTTP, or Hyper Text Transfer Protocol, is an application protocol for distributed, collaborative, hypermedia information systems. It is the protocol which became the foundation of data communication for the World Wide Web.

HTTP in most cases uses a client-server architecture: one (or more) server serves several clients.

NodeJS abstracts HTTP behavior in a way that we can use to build scalable applications.

HTTP Server

Using Node, we can easily create an HTTP server. For example:

var http = require('http');

var server = http.createServer();
server.on('request', function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.write('Hello World!');
    res.end();
});
server.listen(4000);

We use the ‘http’ module, which is a further encapsulation of what the ‘net’ module does, specialized for the HTTP protocol.

Every server must bind to and listen on a port; in our example, the server listens on port 4000. The server handles the “request” event, which is triggered whenever a client connects and makes a request to our server. We set a callback which has two arguments: request and response.

Our callback receives two objects, in our example req (request) and res (response). The request object encapsulates all the request data sent to our server. The response object is what we send back to the client. Here, when a client makes a request, we write a response. That’s what HTTP does, as simple as that.

A response is composed of two parts: header and body. We write a header with a Content-Type indicating plain text. In the body, we have the string ‘Hello World!’.

If you run this script on node, you can then point your browser to http://localhost:4000 and you should see the “Hello World!” string on it.

We can shorten the example to be:

require('http').createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World!');
}).listen(4000);

Here we are giving up the intermediary variable for storing the http module (since we only need to call it once) and the server (since we only need to make it listen on port 4000). Also, as a shortcut, the http.createServer function accepts a callback function that will be invoked on every request.

Request Object

The request object (the first argument of the callback) is an instance of the http.ServerRequest class. It has several important properties.

.url

This is the URL of the request. It does not contain the scheme, hostname, or port, but it contains everything after that.

We can inspect the URL with:

require('http').createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end(req.url);
}).listen(4000);

The URL is the resource requested by the client. For example:

  • http://localhost:4000/ means the URL requested is /
  • http://localhost:4000/index.html means the URL requested is /index.html
  • http://localhost:4000/controller/index.js means the URL requested is /controller/index.js
  • etc

.method

This contains the HTTP method used on the request. It can be ‘GET’, ‘POST’, ‘DELETE’, or any valid HTTP request method.
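For example, a handler can branch on req.method to treat reads and writes differently (a sketch reusing the server pattern above):

require('http').createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    if (req.method === 'GET') {
        res.end('you sent a GET for ' + req.url);
    } else if (req.method === 'POST') {
        res.end('you posted to ' + req.url);
    } else {
        res.end('method was ' + req.method);
    }
}).listen(4000);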

.headers

This contains an object with a property for every HTTP header on the request.

We can analyze the headers by:

var util = require('util');

require('http').createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end(util.inspect(req.headers));
}).listen(4000);

req.headers property names are lower-cased. For instance, if the browser sent a "Cache-Control: max-age=0" header, req.headers will have a property named "cache-control" with the value "max-age=0" (the value itself is untouched).
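So to read a header we use its lower-cased name, for example:

require('http').createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    // header names arrive lower-cased; values are untouched
    res.end('your user-agent: ' + req.headers['user-agent']);
}).listen(4000);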

Response Object

The response object is the second argument of the callback. It is used to reply to the client and is an instance of the http.ServerResponse class.

Write Header

A response, like a request, is composed of a header and a body. The header contains a property for every header we want to send. We can use res.writeHead(status, headers) to write the header.

For example:

var util = require('util');

require('http').createServer(function(req, res) {
    res.writeHead(200, {
        'Content-Type': 'text/plain',
        'Cache-Control': 'max-age=3600'
    });
    res.end('Hello World!');
}).listen(4000);

In this example, we set 2 headers: one with "Content-Type: text/plain" and another with "Cache-Control: max-age=3600".

Change or Set a Header

We can change a header that is already set, or set a new one, by using:

res.setHeader(name, value);

This will only work if we haven’t already sent a piece of the body by using res.write().

Remove a Header

We can also remove a header we have already set by using:

res.removeHeader(name);

This will only work if we haven’t already sent a piece of the body by using res.write().

Write Response Body

To write a response, we can use:

// write a simple string
res.write('Here is string');

// existing buffer
var buf = new Buffer('Here is buffer');
buf[0] = 45;
res.write(buf);

This method can, as expected, be used to reply with dynamically generated strings or binary files.

HTTP Client

Creating an HTTP client using Node is also possible and easy. The same ‘http’ module we used for the server can be used to create an HTTP client.

.get()

HTTP GET is a simple request for a URL.

In this example, we send an HTTP GET request to the URL http://www.google.com:80/index.html.

var http = require('http');

var options = {
    host: 'www.google.com',
    port: 80,
    path: '/index.html'
};

http.get(options, function(res) {
    console.log('got response: ' + res.statusCode);
}).on('error', function(err) {
    console.log('got error: ' + err.message);
});

.request()

Using http.request, we can make any type of HTTP request (not limited to HTTP GET only).

http.request(options, callback);

The options argument is an object which describes the request we want to make. It is composed of:

  • host: a domain name or IP address of the server to issue the request to
  • port: Port of remote server
  • method: a string specifying the HTTP request method. Possible values: GET, POST, PUT, DELETE
  • path: Request path. It should include the query string and fragments, if any, e.g. '/index.html?page=12'.
  • headers: an object containing request headers.

For example:

var options = {
    host: 'www.google.com',
    port: 80,
    path: '/upload',
    method: 'POST'
};

var req = require('http').request(options, function(res) {
    console.log('STATUS: ' + res.statusCode);
    console.log('HEADERS: ' + JSON.stringify(res.headers));
    res.setEncoding('utf8');
    res.on('data', function(chunk) {
        console.log('BODY: ' + chunk);
    });
});

// write data to request body
req.write('data ');
req.write('data ');
req.end();

We are writing the HTTP request body data (two lines with the “data ” string, via req.write()) and ending the request immediately. Only then does the server reply and the response callback get activated.

We wait for the response. When it comes, we get a ‘response’ event, which we listen for in the callback function. At that point we only have the HTTP status and headers ready, which we print.

Then we bind to ‘data’ events, which fire whenever we get a chunk of the response body.

This mechanism can be used to stream data from a server. As long as the server keeps sending body chunks, we keep receiving them.

Sunday, 20 October 2013



NodeJS Low Level File System Operation

Posted: 19 Oct 2013 10:35 PM PDT

Node has a nice streaming API for dealing with files in an abstract way, as if they were network streams. But sometimes we might need to go down a level and deal with the filesystem itself. Node facilitates this by providing low-level filesystem operations through the fs module.

Get File Metainfo

Metainfo is information about a file or directory: its metadata. In the POSIX API, we use the stat() and fstat() functions for this. Node, which is inspired by POSIX, takes this approach too: stat() and fstat() are encapsulated in the fs module.

var fs = require('fs');

fs.stat('file.txt', function(err, stats) {
    if (err) { console.log(err.message); return; }
    console.log(stats);
});

Here we require the fs module and call stat(). A callback is set with two arguments: the first receives the error if one occurred, and the second receives the stat information if the call succeeded.

If it succeeded, the callback function might print something like this (result taken on Cygwin64 on Windows 8):

{ dev: 0,
  mode: 33206,
  nlink: 1,
  uid: 0,
  gid: 0,
  rdev: 0,
  ino: 0,
  size: 2592,
  atime: Thu Sep 12 2013 19:25:05 GMT+0700 (SE Asia Standard Time),
  mtime: Thu Sep 12 2013 19:27:57 GMT+0700 (SE Asia Standard Time),
  ctime: Thu Sep 12 2013 19:25:05 GMT+0700 (SE Asia Standard Time) }

stats is a Stats instance, an object on which we can call several methods:

stats.isFile();
stats.isDirectory();
stats.isBlockDevice();
stats.isCharacterDevice();
stats.isSymbolicLink();
stats.isFIFO();
stats.isSocket();

If we have a plain file descriptor, we can use fs.fstat(fileDescriptor, callback) instead.

When using the low-level filesystem API in Node, we get file descriptors as a way to represent files. These file descriptors are plain integer numbers given by the kernel that represent a file in the Node process, much like in the C POSIX API.

Open and Close a File

Opening a file is a simple matter of using fs.open():

var fs = require('fs');

fs.open('path/to/file', 'r', function(err, fd) {
    // got file descriptor (fd)
});

It’s like the C function of the same name, if you are familiar with it.

The first argument to fs.open is the file path. The second argument is the flags, which indicate the mode in which the file is to be opened. The valid flags are ‘r’, ‘r+’, ‘w’, ‘w+’, ‘a’, or ‘a+’.

  • r = open text file for reading. The stream is positioned at the beginning of the file.
  • r+ = open for reading and writing. The stream is positioned at the beginning of the file.
  • w = truncate file to zero length or create text file for writing. The stream is positioned at the beginning of the file.
  • w+ = open for reading and writing. The file is created if it does not exist, otherwise it is truncated. The stream is positioned at the beginning of the file.
  • a = open for writing. The file is created if it does not exist. The stream is positioned at the end of the file. Subsequent writes to the file will always end up at the current end of file.
  • a+ = open for reading and writing. The file is created if it does not exist. The stream is positioned at the end of the file. Subsequent writes to the file will always end up at the current end of file.

In the callback function, we get the file descriptor, or fd, as the second argument. It is a handle for reading from and writing to the file opened by fs.open().

After the operation, it is recommended to close the opened file using fs.close(fd).
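Putting open and close together, a minimal sketch might look like this (the file name is arbitrary):

var fs = require('fs');

fs.open('file.txt', 'r', function(err, fd) {
    if (err) { throw err; }
    // ... read from or write to fd here ...
    fs.close(fd, function(err) {
        if (err) { throw err; }
        console.log('file closed');
    });
});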

Read From a File

Once it’s open, we can read from the file, provided it was opened in a mode that allows reading.

var fs = require('fs');

fs.open('file.txt', 'r', function(err, fd) {
    if (err) { throw err; }
    var readBuffer = new Buffer(1024),
        bufferOffset = 0,
        bufferLength = readBuffer.length,
        filePosition = 100;

    fs.read(fd, readBuffer, bufferOffset, bufferLength, filePosition,
        function(err, readBytes) {
            if (err) { throw err; }
            console.log('just read ' + readBytes + ' bytes');
            if (readBytes > 0) {
                console.log(readBuffer.slice(0, readBytes));
            }
        });
});

Here we open the file, and once it’s opened we ask to read a chunk of 1024 bytes from it, starting at position 100 (so we read bytes 100 to 1124).

A callback is called when one of the following three happens:

  • there is an error
  • something has been read
  • nothing could be read

If there is an error, the first argument of callback – err – will be set. Otherwise, it is null.

The second argument of the callback – readBytes – is the number of bytes read into the buffer. If it is zero, the file has reached its end.

Write Into a File

Once the file is open, we can write into it, provided it was opened in a mode that allows writing.

var fs = require('fs');

fs.open('file.txt', 'a', function(err, fd) {
    if (err) { throw err; }
    var writeBuffer = new Buffer('Writing this string'),
        bufferOffset = 0,
        bufferLength = writeBuffer.length,
        filePosition = null;

    fs.write(fd, writeBuffer, bufferOffset, bufferLength, filePosition,
        function(err, written) {
            if (err) { throw err; }
            console.log('wrote ' + written + ' bytes');
        });
});

Here we open the file in append mode (‘a’) and write into it. We pass in the buffer with the data we want written, an offset inside the buffer where we want to start writing from, the length of what we want to write, the file position, and a callback.

In this case we pass a file position of null, which is to say that we write at the current file position. Since we opened the file in append mode, the file cursor is positioned at the end of the file.

Case on Appending

If you are using these low-level filesystem functions to append to a file while concurrent writes are happening, opening it in append mode will not be enough to ensure there is no overlap. Instead, you should keep track of the last written position before you write, doing something like this:

var fs = require('fs');

var startAppender = function(fd, startPos) {
    var pos = startPos;
    return {
        append: function(buffer, callback) {
            var oldPos = pos;
            pos += buffer.length;
            fs.write(fd, buffer, 0, buffer.length, oldPos, callback);
        }
    };
};

Here we declare a function stored in a variable named “startAppender”. This function initializes the appender state (position and file descriptor) and returns an object with an append function.

To use the Appender:

fs.open('file.txt', 'w', function(err, fd) {
    if (err) { throw err; }
    var appender = startAppender(fd, 0);
    appender.append(new Buffer('append this!'), function(err) {
        console.log('appended');
    });
});

And here we use the appender to safely append to a file.

The appender can then be invoked repeatedly; it keeps track of the last written position and increments it by the length of each buffer passed in.

Actually, there is a problem: fs.write() may not write all the data we asked it to, so we need to modify it a bit.

var fs = require('fs');

var startAppender = function(fd, startPos) {
    var pos = startPos;
    return {
        append: function(buffer, callback) {
            var written = 0;
            var oldPos = pos;
            pos += buffer.length;
            (function tryWriting() {
                if (written < buffer.length) {
                    fs.write(fd, buffer, written, buffer.length - written,
                             oldPos + written,
                        function(err, bytesWritten) {
                            if (err) { callback(err); return; }
                            written += bytesWritten;
                            tryWriting();
                        });
                } else {
                    // we have finished
                    callback(null);
                }
            })();
        }
    };
};

Here we use a function named "tryWriting" that attempts a write by calling fs.write, calculates how many bytes have already been written, and calls itself again if needed. When it detects it has finished (written == buffer.length), it calls the callback to notify the caller, ending the loop.

Also, the appending client opens the file with mode "w", which truncates the file, and tells the appender to start at position 0. This would overwrite the file if it had content. So a wiser version of the appender client would be:

fs.open('file.txt', 'a', function(err, fd) {
    if (err) { throw err; }
    fs.fstat(fd, function(err, stats) {
        if (err) { throw err; }
        console.log(stats);
        var appender = startAppender(fd, stats.size);
        appender.append(new Buffer('append this!'), function(err) {
            console.log('appended');
        });
    });
});

NodeJS Timers

Posted: 19 Oct 2013 09:10 PM PDT

A timer is a specialized type of clock for measuring time intervals. It is used to schedule routine actions.

Node implements the timers API that is also found in web browsers.

setTimeout

setTimeout lets us schedule an arbitrary function to be executed in the future. For example:

var timeout = 2000;    // 2 seconds

setTimeout(function() {
    console.log('time out!');
}, timeout);

The code above registers a function to be called when the timeout expires. As anywhere in JavaScript, we can pass in an inline function, the name of a function, or a variable whose value is a function.

If we set the timeout to 0 (zero), the function we pass gets executed some time after the stack clears, but with no minimum wait. This can be used to, for instance, schedule a function that does not need to be executed immediately; it was a trick sometimes used in browser JavaScript. An alternative is process.nextTick(), which is more efficient.
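To see the ordering this implies, consider this small sketch; the labels mark the actual output order:

console.log('first');
setTimeout(function() {
    console.log('third');   // runs only after the current stack has cleared
}, 0);
console.log('second');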

clearTimeout

A timer can be cancelled after it is scheduled. To clear it, we need the timeout handle returned by setTimeout.

var timeoutHandle = setTimeout(function() {
    console.log('Groaaarrrr!!!');
}, 1000);

clearTimeout(timeoutHandle);

If you look carefully, the timeout will never execute because we clear it just after we set it.

Another example:

var timeoutA = setTimeout(function() {
    console.log('timeout A');
}, 2000);

var timeoutB = setTimeout(function() {
    console.log('timeout B');
    clearTimeout(timeoutA);
}, 1000);

Here timeoutA will never execute.

There are two timers above: A with a 2-second timeout and B with a 1-second timeout. timeoutB (which fires first) unschedules timeoutA, so it never executes, and the program exits right after timeoutB runs.

setInterval

setInterval is similar to setTimeout, but schedules a given function to run every X milliseconds.

var period = 1000; // 1 second

var interval = setInterval(function() {
    console.log('tick');
}, period);

That code will keep logging ‘tick’ indefinitely, unless we terminate Node.

clearInterval

To cancel a schedule set by setInterval, the procedure is similar to the one for setTimeout. We need the interval handle returned by setInterval and do this:

var interval = setInterval(...);
clearInterval(interval);

process.nextTick

A callback function can also be scheduled to run on next run of the event loop. To do so, we use:

process.nextTick(function() {
    // This runs on the next event loop
    console.log('yay!');
});

This method is preferred to setTimeout(fn, 0) because it is more efficient.

On each turn, the event loop executes the queued I/O events sequentially by calling the associated callbacks. If any callback takes too long, the event loop won’t be processing other pending I/O events in the meantime (blocking). This can lead to waiting clients or tasks. When executing something that may take too long, we can delay execution until the next event loop turn, so waiting events will be processed in the meantime. It’s like going to the back of a waiting line.

To escape the current event loop, we can use process.nextTick() like this:

process.nextTick(function() {
    // do something
});

This delays processing that does not need to happen immediately until the next event loop turn.

For instance, suppose we need to remove a file, but we don’t need to do it before replying to the client. We could do something like this:

stream.on('data', function(data) {
    stream.end('my response');
    process.nextTick(function() {
        fs.unlink('path/to/file');
    });
});

Let's say we want to schedule a function that does some I/O – like parsing a log file – to execute periodically, and we want to guarantee that no two instances of it execute at the same time. The best way is not to use setInterval, since we don't have that guarantee: the interval will fire whether or not the previous function has finished its duty.

Suppose there is an asynchronous function called "async" that performs some I/O and takes a callback to be invoked when it has finished, and we want to call it every second:

var interval = 1000;

setInterval(function() {
    async(function() {
        console.log('async is done');
    });
}, interval);

If no two async() calls may overlap, we are better off using tail recursion like this:

var interval = 1000;

(function schedule() {
    setTimeout(function() {
        async(function() {
            console.log('async is done!');
            schedule();
        });
    }, interval);
})();

Here we declare schedule() and invoke it immediately after declaring it.

This function schedules another function to execute within one second. That function then calls async(), and only when async is done do we schedule a new run by calling schedule() again, this time inside the schedule function. This way we can be sure that no two calls to async execute simultaneously in this context.

The difference is that we probably won’t have async called exactly every second (unless async takes no time to execute); instead, it will be called 1 second after the previous run finished.

NodeJS Event Emitter

Posted: 19 Oct 2013 05:56 PM PDT

Many objects in NodeJS can emit events. For instance, a TCP server emits a ‘connect’ event every time a client connects, and a file stream emits a ‘data’ event for each chunk it reads.

Connecting an Event

One can listen for events. If you are familiar with other event-driven programming, you will know there is a function or method like “addListener” to which you pass a callback. Every time the event is triggered – for example, a ‘data’ event is triggered every time there is some data available to read – your callback is called.

In NodeJS, here is how we can achieve that:

var fs = require('fs');      // get the fs module
var readStream = fs.createReadStream('file.txt');

readStream.on('data', function(data) {
    console.log(data);
});

readStream.on('end', function() {
    console.log('file ended');
});

Here we bind two events on the readStream object: ‘data’ and ‘end’, passing a callback function to handle each case. Each callback handles only its own event from the readStream object.

We can either pass in an anonymous function (as we are doing here), or a function name for a function available on the current scope, or even a variable containing a function.

Only Connect Once

There are cases where we only want to handle an event once and ignore every following occurrence; that is, we are interested in the first event only. We want to listen for the event exactly once, no more and no less.

There are two ways to do it: using the .once() method, or making sure we remove the callback once it has been called.

The first one is the simplest. We use .once() to tell NodeJS that we are only interested in handling the first occurrence of the event:

object.once('event', function() {
    // Callback body
});

The other way is:

function evtListener() {
    // Function body
    object.removeListener('event', evtListener);
}

object.on('event', evtListener);

Here we use removeListener(), which will be discussed further in the next section.

In the two samples above, make sure you pass an appropriate callback, i.e. one with the appropriate number of arguments. The event name also must be specified.
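For a concrete, self-contained illustration, here is a sketch using a bare EventEmitter (covered later in this article); the ‘greet’ event name is made up:

var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

emitter.once('greet', function(name) {
    console.log('Hello ' + name);
});

emitter.emit('greet', 'Xathrya');   // prints "Hello Xathrya"
emitter.emit('greet', 'again');     // ignored: the listener was removed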

Removing a Callback from a Certain Event

Though we already used it in the previous section, we will discuss it again here.

To remove a callback, we need both the object we are removing the callback from and the event name. Note that these come as a pair; we can’t provide only one of them.

function evtListener() {
    // Function body
    object.removeListener('event', evtListener);
}

object.on('event', evtListener);

removeListener belongs to the EventEmitter pattern. It accepts the event name and the function it should remove.

Removing All Callbacks from a Certain Event

If you ever need to, removing all listeners for an event from an event emitter is possible, using:

object.removeAllListeners('event');

Creating Self-Defined Event

One can use this event-emitter pattern throughout an application. The way to do it is to create a pseudo-class and make it inherit from EventEmitter:

var EventEmitter = require('events').EventEmitter,
    util         = require('util');

// Here is the MyClass constructor
var MyClass = function(option1, option2) {
    this.option1 = option1;
    this.option2 = option2;
};

util.inherits(MyClass, EventEmitter);

util.inherits() sets up the prototype chain so that the EventEmitter prototype methods are available on MyClass instances.

That way, instances of MyClass can emit events:

MyClass.prototype.someMethod = function() {
    this.emit('custom event', 'some arguments');
};

This emits an event named ‘custom event’, also sending some data (in this case, “some arguments”).

A client of a MyClass instance can listen for the ‘custom event’ event like this:

var myInstance = new MyClass(1, 2);

myInstance.on('custom event', function() {
    console.log('got a custom event!');
});
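Any extra arguments passed to emit() after the event name are forwarded to the listeners. So, continuing the sketch above, the client could also receive the emitted data:

myInstance.on('custom event', function(data) {
    console.log('got a custom event carrying: ' + data);   // "some arguments"
});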

NodeJS Buffers

Posted: 19 Oct 2013 05:10 PM PDT

Natively, JavaScript is not very good at handling binary data. Therefore, NodeJS adds a native buffer implementation that can still be manipulated in the usual JavaScript way. The Buffer class is the standard way to transport data in Node.

Generally, buffers can be passed to every Node API that expects data to be sent. Also, when receiving data in a callback, we get a buffer (except when we specify a stream encoding, in which case we get a string).

Create a Buffer

The default encoding format in NodeJS is UTF-8. To create a Buffer from a UTF-8 string, we can do:

var buff = new Buffer('Hello World');

A new buffer can also be created from data in another encoding format. As long as we specify the encoding format as the second argument, there is no problem:

var buf = new Buffer('8b76fde713ce', 'base64');

Accepted encodings are “ascii”, “utf8”, and “base64”.

We can also create a new, empty buffer by specifying its size:

var buf = new Buffer(1024);
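One caveat worth noting: a buffer allocated by size like this contains whatever happened to be in that memory, not zeroes. A small sketch of clearing it first, assuming that matters for your use:

var buf = new Buffer(1024);   // contents are uninitialized memory
buf.fill(0);                  // zero it out before use, if that matters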

Accessing Buffer

Accessing a buffer is like accessing an array of bytes: we use [] to access an individual byte.

buf[20] = 127;   // Set byte 20 to 127

Format Conversion

Data held in a buffer using one encoding format can be converted to another encoding format.

var str  = buf.toString('utf8');     // UTF-8
var str1 = buf.toString('base64');   // Base64
var str2 = buf.toString('ascii');    // ASCII

When we don’t specify an encoding, Node assumes UTF-8. If you need a specific encoding, pass it as the argument.

Slice a Buffer

A buffer can be sliced into a smaller buffer by using the appropriately named slice() method.

var buffer = new Buffer('A buffer with UTF-8 encoded string');
var slice = buffer.slice(10, 20);

In the above code, we slice the original 34-byte buffer into a new 10-byte buffer corresponding to bytes 10 through 19 of the original buffer.

Note that the slice function does not create new buffer memory; it references the same memory as the original buffer underneath.
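A short sketch demonstrating this sharing; the byte values are arbitrary:

var buffer = new Buffer('abc');
var slice = buffer.slice(0, 1);

slice[0] = 122;                    // 122 is the character code of 'z'
console.log(buffer.toString());   // prints 'zbc': the original changed too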

Copy from Buffer

We can copy a part of a buffer into another pre-allocated buffer by:

var buffer = new Buffer('A buffer with UTF-8 encoded string');
var slice = new Buffer(10);
var targetStart = 0,
    sourceStart = 10,
    sourceEnd = 20;

buffer.copy(slice, targetStart, sourceStart, sourceEnd);

It should be self-explanatory: here we copy part of buffer into slice, taking only the data at positions 10 through 19.

NodeJS Utilities: About I/O and Debugging

Posted: 19 Oct 2013 12:10 PM PDT

Utilities, as the name implies, are helpers employed for distinct purposes. NodeJS provides these utilities through global objects and modules.

console

Node provides a global “console” object to which we can output strings. The output is classified into several modes, according to the output stream it is printed to.

.log()

If you want to print data to stdout, it’s as simple as writing following code:

console.log("Hello World!");

This prints “Hello World!”. The data (a string in this case) is streamed out after formatting. console.log() is mainly used for simple output, and instead of a string we can also output an object, like this:

var a = {1: true, 2: false};
console.log(a);    // => { '1': true, '2': false }

We can also use string interpolation to print out things, like:

var a = {1: true, 2: false};
console.log('This is a number: %d, and this is a string: %s, ' +
            'and this is an object outputted as JSON: %j',
            42, 'Hello', a);

Which in turn prints:

This is a number: 42, and this is a string: Hello, and this is an object outputted as JSON: {"1": true, "2": false}

If you are familiar with C or C++, you might find this similar to C’s printf() function. The placeholders are similar: %d for numbers (integer and floating point), %s for strings, and %j for JSON, which does not exist in C’s formatting.

.warn()

If you want to print out to stderr, you can do this:

console.warn("Warning!!");

.trace()

And to print a stack trace, you can do:

console.trace();

This presents the current state of the call stack.

util

util is a module that bundles several functions. To use it, we need to require the util module:

var util = require('util');

.log()

var util = require('util');

util.log('Hello');

Similar to console.log(), but slightly different: util.log() prints the current timestamp followed by the given string, building a line like: Mar 17:11:09 - Hello

.inspect()

There is also a handy function, inspect(), which is nice for quick debugging: it inspects an object and returns its properties as a string.

var util = require('util');

var a = {1: true, 2: false};
console.log(util.inspect(a));

We can give more arguments to util.inspect(), in the following format:

util.inspect(object, showHidden, depth = 2, showColors);

showHidden, when turned on, makes inspect show non-enumerable properties as well; these are often properties belonging to the object’s prototype chain rather than the object itself. The third argument, depth, is how deep into the object graph inspect should go (two levels by default). This is useful when inspecting large objects. To recurse indefinitely, pass a null value.

An important note: util.inspect keeps track of visited objects. If there are circular references, the string “[Circular]” will appear in the output.

NodeJS API Quick Tour

Posted: 19 Oct 2013 11:42 AM PDT

Most programming languages ship with what is called a standard library. In NodeJS, there is a predefined library which we call the API. Each API is exactly a module, which can be included in source code.

Node provides a platform API that covers some aspects:

  1. process
  2. filesystem
  3. networking
  4. utilities

Note that we won’t cover all the objects available, nor cover the material in detail. For that, you should go to http://nodejs.org/api/ instead.

Our list also covers only stable modules. There are some unstable modules which may change later, for example crypto; we won’t cover those here.

[ Process ]

Node allows the programmer to inspect the current process (environment variables, etc.) and manage external processes. The modules involved are:

process

Inquires about the current process: the PID, environment variables, platform, memory usage, etc.
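For instance, a minimal sketch of inspecting the current process (the HOME variable is just an example and may be undefined on some platforms):

console.log(process.pid);             // process ID
console.log(process.platform);        // e.g. 'linux' or 'win32'
console.log(process.env.HOME);        // one environment variable
console.log(process.memoryUsage());   // heap and RSS statistics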

child_process

Spawns and kills processes, executes commands, and pipes their outputs.
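A hedged sketch using child_process.exec; the 'ls -l' command assumes a Unix-like system:

var exec = require('child_process').exec;

exec('ls -l', function(err, stdout, stderr) {
    if (err) { console.error(err.message); return; }
    console.log(stdout);
});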

[ File System ]

A low-level API is also provided to manipulate files and directories. The API is influenced by POSIX style.

fs

This is used for file manipulation: creating, removing, loading, writing, and reading files. This module is also used to create read and write streams.

path

Normalizes and joins file paths. It can also be used to check whether a file exists or is a directory.
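For example, a small sketch of common path operations (the paths are made up):

var path = require('path');

console.log(path.join('/tmp', 'foo', 'bar.txt'));   // '/tmp/foo/bar.txt'
console.log(path.normalize('/tmp/foo/../bar'));     // '/tmp/bar'
console.log(path.extname('bar.txt'));               // '.txt'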

[ Networking ]

Used for networking purposes, such as connecting, and sending and receiving information over the network.

net

This module is used for creating a TCP server or client.
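A minimal sketch of a TCP server; port 4001 is an arbitrary choice:

var net = require('net');

var server = net.createServer(function(socket) {
    socket.end('Hello from a TCP server\n');   // reply and close the connection
});
server.listen(4001);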

dgram

This module is used for handling UDP packets, including sending and receiving them.

http

Creates an HTTP server or client. It is a more specific version of the net module.

tls

Transport Layer Security (TLS) is the successor of the Secure Socket Layer (SSL) protocol. Node uses OpenSSL to provide TLS and/or SSL encrypted stream communication.

https

Implements HTTP over TLS/SSL.

dns

This module implements asynchronous DNS resolution.

[ Utilities ]

Various utilities for NodeJS.

util

A module which bundles various utility functions.

Introduction to NodeJS Modules

Posted: 19 Oct 2013 10:10 AM PDT

NodeJS is different from client-side JavaScript in more than just the platform it runs on.

Client-side JavaScript has a bad reputation because of the common namespace shared by all scripts, which can lead to conflicts and security leaks. Node, however, implements the CommonJS modules standard: each module is separated from the others, has its own namespace, and exports only the desired properties.

Import a Module

Importing / including an existing module is easy. We use the require() function to do so.

var module = require('module_name');

This imports module_name and names it module. The module is either a standard API provided by NodeJS, a module installed by npm, or simply a user-defined module. We can therefore also use path notation to import a module:

var module = require("/absolute/path/to/module_name");  var module2 = require("./relative/path/to/module_name");

Here the first declaration fetches the module using an absolute file path, and the second uses a path relative to the current directory.

The fetched object is treated like any other object: it has a name (the variable name) and is allocated in memory.
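To make the two sides concrete, here is a hedged sketch of a hypothetical module named circle.js and a script that requires it:

// circle.js -- a hypothetical module
var PI = Math.PI;   // stays private to this module

exports.area = function(r) {
    return PI * r * r;
};

// main.js -- importing it
var circle = require('./circle');
console.log(circle.area(2));   // 12.566...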

Modules are loaded only once per process. If you have several require calls for the same module, Node caches them as long as they resolve to the same file.
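A small sketch demonstrating the caching, reusing the hypothetical ./circle module from above:

var a = require('./circle');
var b = require('./circle');
console.log(a === b);   // true: both point to the same cached module object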

How Node Resolves Module Path

Core Modules

There is a list of core modules, which Node includes in the distribution binary; this is the standard API. When we require such a module, Node just returns it and the require() ends.

Modules with Path (Absolute or Relative)

If a module path is supplied (absolute or relative), Node tries to load the module as a file. If that does not succeed, it tries to load the module as a directory.

When loading a module as a file: if the file exists, Node just loads it as JavaScript text. If not, it tries the same thing with “.js” appended to the given path. If that does not succeed either, Node tries appending “.node” and loading it as a binary add-on.

When loading a module as a directory, Node follows a procedure. If the path plus “/package.json” is a file, Node tries loading the package definition and looks for a “main” field, then tries to load it as a file. If unsuccessful, Node tries to load the path with “/index” appended to it.

Installed Module

Third-party modules are installed using NPM.

If the module path does not begin with “.” or “/”, or if loading it via absolute or relative paths does not work, Node tries to load the module as one that was previously installed. It appends “/node_modules” to the current directory and tries to load the module from there. If that does not succeed, it tries adding “/node_modules” to the parent directory and loading from there. This repeats until the module is found or the root directory has been reached.

This means we can put our Node modules inside our app directory and Node will find them.

Understanding Node Event Loop

Posted: 19 Oct 2013 09:48 AM PDT

Evented I/O programming using NodeJS is simple. Node has put speed and scalability at the fingertips of ordinary programmers.

But the event-loop approach comes with a price, even if you are not aware of it. Here we will discuss how Node works – technically, the Node event loop – and the do’s and don’ts on top of it.

Event-Queue Processing Loop

Concurrency is the true nature of Node, and it is achieved with the event loop.

The event loop can be thought of as a loop that processes an event queue. Interesting events happen and go into a queue, waiting for their turn. The event loop pops these events out, one by one, and invokes the associated callback functions, one at a time. The event loop pops one event out of the queue and invokes the associated callback; when the callback returns, the event loop pops the next event and invokes its callback. When the event queue is empty, the event loop waits for new events if there are pending calls or servers listening, or just quits if there are none.

For example, create a new file named hello.js and write this:

setTimeout(function() {
    console.log('World!');
}, 2000);

console.log('Hello');

Run it using the node command-line tool:

node hello.js

You should see the word “Hello” written out first, and the word “World!” come out 2 seconds later. Looking at the code, “World!” appears first, but that is not the order of execution. Remember that Node uses an event loop. When we declare the anonymous function that prints “World!”, it is not executed yet; it is passed as the first argument to a setTimeout() call, which schedules it to run 2000 milliseconds (2 seconds) later. Then the next statement executes, printing “Hello”. Two seconds later, the timeout event occurs, the scheduled function executes, and “World!” is printed.

So, the first argument to the setTimeout call is what we call a “callback”: a function which will be called later, when the event we set out to listen for occurs.

After our callback is invoked, Node understands that there is nothing more to do and exits.

Callbacks that Generate Events

In the example above, the function executes only once. We can keep Node busy, continually scheduling callbacks, using this pattern:

(function schedule() {
    setTimeout(function() {
        console.log('Hello World!');
        schedule();
    }, 1000);
})();

At a glance it looks like a recursive function, but it is not: the function returns before its callback is invoked.

We wrap the whole thing inside a function named “schedule” and invoke it immediately after declaring it. This function schedules a callback to execute in one second. That callback, when invoked, prints “Hello World!” and then runs schedule again.

In every callback, we register a new one to be invoked one second later, never letting Node finish.

Not Blocking

The primary concern and main use case for an event loop is to create highly scalable servers. Since an event loop runs in a single thread, it only processes the next event when the callback finishes. If you could see the call stack of a busy Node application you would see it going up and down really fast, invoking callbacks and piling up the next event in line. But for this to work well you have to clear the event loop as fast as you can.

There are two main categories of things that can block the event loop: synchronous I/O and big loops.

The Node API is not all asynchronous. Some parts of it are synchronous, for instance some file operations. Don't worry, they are very well marked: their names always end in "Sync" – like fs.readFileSync – and they should not be used, or be used only during initialization. On a working server you should never use a blocking I/O function inside a callback, since you would block the event loop and prevent other callbacks – probably belonging to other client connections – from being served.

The second category of blocking scenarios is giant loops: loops that take a lot of time, like iterating over thousands of objects or doing complex, time-consuming operations in memory.

Let’s see an example:

var open = false;

setTimeout(function() {
    open = true;
}, 1000);

while (!open) {
    // wait
}

console.log('opened!');

You would expect this code to work, since the setTimeout() callback is due in one second. However, it never happens: Node never executes the timeout callback because the event loop is stuck in the while loop, never processing the timeout event.

Function in JavaScript

Posted: 19 Oct 2013 09:47 AM PDT

Nearly every programming language has functions.

A function is a block of code (enclosed by curly brackets) which is executed when “someone” calls it.

Like a variable, a function can be defined anywhere in the code.

There are several ways of defining function in JavaScript:

  • Function Declaration
  • Function Expression
  • Function as a result of new Function call

Function Declaration

Basically, a function declaration has the function keyword followed by the function name, parentheses with a list of arguments inside, and then a block of code enclosed by curly brackets.

function function_name(list_of_arguments) {
    // some code to be executed
}

The code inside the function will be executed when the function is called. It can be called directly when an event occurs (like when a user clicks a button), or by another function.

Arguments are optional: a function can take one or more arguments, or none at all.

An example function:

function greet(name) {
    alert("Hello " + name);
}

Note that the function keyword at the beginning is mandatory.

To call a function, we can use the following (calling the greet() function):

greet("Xathrya");

If a function has arguments, make sure you supply the correct arguments when you call it.

Function declarations are parsed at the pre-execution stage, when the browser prepares to run the code. That’s why both of these snippets work:

//-- 1
function greet(name) {
    alert("Hello " + name);
}

greet("Xathrya");

//-- 2
greet("Xathrya");

function greet(name) {
    alert("Hello " + name);
}

And a function can be declared anywhere in the code, even in the scope of branching and repetition.

Function Expression

A function in JavaScript is a first-class value, just like a number or a string. Remember that JavaScript has a loose type system.

Anywhere you could put a value, you can also put a function, declared “in place” with the function expression syntax: function(arguments) { ... }. There is no function name, as the function will take the name of the variable. Therefore we can have:

var f = function(name) {
    alert("Hello " + name);
};

And we can invoke it as

f("Xathrya");

The point is, a function is constructed as an expression, like any other value assigned to a variable.

Local Variable

A function may have variables defined with var. In terms of scope, these are local variables: they are only visible inside the function.

function sum(a, b) {
    var sum = a + b;

    return sum;
}

Returning a Value

As the name suggests, a function can return a value (though it doesn’t have to). To return a value, we use the return keyword.

function sum(a, b) {
    return a + b;
}

var result = sum(2, 5);
alert(result);

If a function does not return anything, its result is considered to be a special value, undefined.

Function is a Value

In JavaScript, a function is a regular value.

Just like any value, a function can be assigned, passed as a parameter for another function and so on. It doesn’t matter how it was defined.

function greet(name) {
    alert("Hello " + name);
}

var hello = greet; // assign a function to another variable

hello("dude");     // call the function

The function is assigned by reference. That is, the function is kept somewhere in memory and greet is a reference (you could say pointer) to it. When we assign it to hello, both variables reference the same function.

A function can be used as an argument for another function. In the extreme, we can declare a function directly in the argument position. Here is what we mean:

function runWithOne(f) {  // runs given function with argument 1
    f(1);
}

runWithOne(
    function(a) { alert(a); }
);

Logically, a function is an action. So, passing a function around is transferring an action which can be initiated from another part of the program. This feature is widely used in JavaScript.

In the example above, we create a function without a name, and don't assign it to any variable. Such functions are called anonymous functions.

Running at Place

It is possible to create a function with a function expression and run it at once.

(function() {
    var a, b;    // local variables

    // ...
    // and the code

})();

Please note the placement of the parentheses and curly brackets; it matters.

Running in place is mostly used when we want to do a job involving local variables. We don’t want our local variables to become global, so we wrap the code in a function.

After the execution, the global namespace is still clean. That's a good practice.

In the above code, we wrap the function in parentheses so that the interpreter considers it part of a statement: hence, a function expression. If a function is obviously an expression, there is no need to wrap it, for instance:

var result = function(a, b) { return a + b; }(2, 2);
alert(result); // 4

We see that the function is created and called instantly. That's just like var result = sum(2,2), where sum is replaced by a function expression.

Named Function Expression

A function expression may have a name. The syntax is called named function expression (or NFE).

var f = function greet(name) {
    alert("Hello " + name);
};

As we said, a function is a value. Here the internal name (greet) is visible inside the function body itself.

NFEs exist to allow recursive calls from anonymous functions.
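For example, here is a small sketch of such a recursive call; the factorial function is our own hypothetical example, not from the original text:

var factorial = function fact(n) {
    return n <= 1 ? 1 : n * fact(n - 1);   // 'fact' is visible only in here
};
alert(factorial(5));   // 120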

Function Naming

There is a popular convention for naming functions. A function is an action, so its name should be a verb, like get, read, or calculateSum.

Short function names can be allowed if:

  • A function is temporary and used only in nearest code. Same logic as with variables.
  • A function is used everywhere in the code. On the one hand, there is no danger of forgetting what it does; on the other hand, you have less to write. Real-world examples are ‘$’, ‘$$’, ‘$A’, ‘$F’, etc. JavaScript libraries use these names to make frequent calls shorter.

In other cases, the name of a function should be a verb or multiple words starting with a verb.