Xathrya Sabertooth
- NodeJS Low Level File System Operation
- NodeJS Timers
- NodeJS Event Emitter
- NodeJS Buffers
- NodeJS Utilities: About I/O and Debugging
- NodeJS API Quick Tour
- Introduction to NodeJS Modules
- Understanding Node Event Loop
- Function in JavaScript
NodeJS Low Level File System Operation

Posted: 19 Oct 2013 10:35 PM PDT

Node has a nice streaming API for dealing with files in an abstract way, as if they were network streams. But sometimes we might need to go down a level and deal with the filesystem itself. Node facilitates this by providing low-level file system operations through the fs module.

Get File Metainfo

Metainfo is information about a file or directory. In the POSIX API, we use the stat() and fstat() functions for this. Node, which is inspired by POSIX, takes the same approach: stat() and fstat() are encapsulated in the fs module.

var fs = require('fs');

fs.stat('file.txt', function(err, stats) {
  if (err) {
    console.log(err.message);
    return;
  }
  console.log(stats);
});

Here we require the fs module and call stat(). The callback takes two arguments: the first holds the error if one occurred, and the second holds the stat information if the call succeeded. On success, the callback might print something like this (result taken on Cygwin64 on Windows 8):

{ dev: 0,
  mode: 33206,
  nlink: 1,
  uid: 0,
  gid: 0,
  rdev: 0,
  ino: 0,
  size: 2592,
  atime: Thu Sep 12 2013 19:25:05 GMT+0700 (SE Asia Standard Time),
  mtime: Thu Sep 12 2013 19:27:57 GMT+0700 (SE Asia Standard Time),
  ctime: Thu Sep 12 2013 19:25:05 GMT+0700 (SE Asia Standard Time) }

stats is a Stats instance, an object on which we can call several methods:

stats.isFile();
stats.isDirectory();
stats.isBlockDevice();
stats.isCharacterDevice();
stats.isSymbolicLink();
stats.isFIFO();
stats.isSocket();

If we have a plain file descriptor, we can use fs.fstat(fileDescriptor, callback) instead. When using the low-level filesystem API in Node, we get file descriptors as a way to represent files. These file descriptors are plain integer numbers given by the kernel that represent a file in the Node process, much like in the C POSIX API.

Open and Close a File

Opening a file is a simple matter of using fs.open():

var fs = require('fs');

fs.open('path/to/file', 'r', function(err, fd) {
  // got file descriptor (fd)
});

It works like the C function, if you are familiar with that. The first argument to fs.open() is the file path. The second argument is the flags, which indicate the mode with which the file is to be opened. The valid flags are 'r', 'r+', 'w', 'w+', 'a', or 'a+'.
In the callback we get the file descriptor (fd) as the second argument. It is the handle used to read from and write to the file opened by fs.open(). After the operation it is recommended to close the file using fs.close(fd) (a complete open, inspect, and close sketch appears at the end of this article).

Read From a File

Once a file is open we can read from it, as long as the open mode allows reading.

var fs = require('fs');

fs.open('file.txt', 'r', function(err, fd) {
  if (err) { throw err; }
  var readBuffer = new Buffer(1024),
      bufferOffset = 0,
      bufferLength = readBuffer.length,
      filePosition = 100;

  fs.read(fd, readBuffer, bufferOffset, bufferLength, filePosition,
    function(err, readBytes) {
      if (err) { throw err; }
      console.log('just read ' + readBytes + ' bytes');
      if (readBytes > 0) {
        console.log(readBuffer.slice(0, readBytes));
      }
    });
});

Here we open the file, and once it is open we ask to read a chunk of 1024 bytes from it, starting at position 100 (so basically, we read the data from bytes 100 to 1124). A callback is called when the read completes; its arguments tell us what happened:
If there is an error, the first argument of the callback – err – will be set. Otherwise it is null. The second argument – readBytes – is the number of bytes read into the buffer. If the number of read bytes is zero, the file has reached its end.

Write Into a File

Once a file is open we can write into it, as long as the open mode allows writing.

var fs = require('fs');

fs.open('file.txt', 'a', function(err, fd) {
  if (err) { throw err; }
  var writeBuffer = new Buffer('Writing this string'),
      bufferOffset = 0,
      bufferLength = writeBuffer.length,
      filePosition = null;

  fs.write(fd, writeBuffer, bufferOffset, bufferLength, filePosition,
    function(err, written) {
      if (err) { throw err; }
      console.log('wrote ' + written + ' bytes');
    });
});

Here we open the file in append mode ('a') and write into it. We pass in the buffer with the data we want written, an offset inside the buffer where we want to start writing from, the length of what we want to write, the file position, and a callback. In this case we pass a file position of null, which is to say that we write at the current file position. As noted before, we opened the file in append mode, so the file cursor is positioned at the end of the file.

Case on Appending

If you are using these low-level file-system functions to append to a file and concurrent writes will be happening, opening it in append mode will not be enough to ensure there is no overlap. Instead, you should keep track of the last written position before you write, doing something like this:

var fs = require('fs');

var startAppender = function(fd, startPos) {
  var pos = startPos;
  return {
    append: function(buffer, callback) {
      var oldPos = pos;
      pos += buffer.length;
      fs.write(fd, buffer, 0, buffer.length, oldPos, callback);
    }
  };
}

Here we declare a function stored in a variable named "startAppender". This function initializes the appender state (position and file descriptor) and then returns an object with an append function. To use the appender:

fs.open('file.txt', 'w', function(err, fd) {
  if (err) { throw err; }
  var appender = startAppender(fd, 0);
  appender.append(new Buffer('append this!'), function(err) {
    console.log('appended');
  });
});

And here we are using the appender to safely append to a file. The append function can then be invoked repeatedly; the appender keeps track of the last position and increments it according to the length of the buffer that was passed in. Actually, there is a problem: fs.write() may not write all the data we asked it to, so we need to modify it a bit.

var fs = require('fs');

var startAppender = function(fd, startPos) {
  var pos = startPos;
  return {
    append: function(buffer, callback) {
      var written = 0;
      var oldPos = pos;
      pos += buffer.length;
      (function tryWriting() {
        if (written < buffer.length) {
          fs.write(fd, buffer, written, buffer.length - written, oldPos + written,
            function(err, bytesWritten) {
              if (err) { callback(err); return; }
              written += bytesWritten;
              tryWriting();
            }
          );
        } else {
          // we have finished
          callback(null);
        }
      })();
    }
  };
};

Here we use a function named "tryWriting" that tries to write by calling fs.write, calculates how many bytes have already been written, and calls itself again if needed. When it detects it has finished (written == buffer.length) it calls the callback to notify the caller, ending the loop.

Also, the appending client above opens the file with mode "w", which truncates the file, and tells the appender to start appending at position 0. This will overwrite the file if it has content.
So, a wiser version of the appender client would be:

fs.open('file.txt', 'a', function(err, fd) {
  if (err) { throw err; }
  fs.fstat(fd, function(err, stats) {
    if (err) { throw err; }
    console.log(stats);
    var appender = startAppender(fd, stats.size);
    appender.append(new Buffer('append this!'), function(err) {
      console.log('appended');
    });
  });
});
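Since fs.fstat() and fs.close() were mentioned above but not shown end to end, here is a minimal sketch, assuming a file named file.txt exists, that opens it, inspects it through its descriptor, and releases the descriptor when done:

var fs = require('fs');

fs.open('file.txt', 'r', function(err, fd) {
  if (err) { throw err; }
  // fstat works on the descriptor returned by fs.open()
  fs.fstat(fd, function(err, stats) {
    if (err) { throw err; }
    console.log('size on disk: ' + stats.size + ' bytes');
    // always release the descriptor when we are done with it
    fs.close(fd, function(err) {
      if (err) { throw err; }
      console.log('file closed');
    });
  });
});

Closing the descriptor in the innermost callback guarantees it only happens after the fstat() call has finished.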
NodeJS Timers

Posted: 19 Oct 2013 09:10 PM PDT

A timer is a specialized type of clock for measuring time intervals, used for scheduling routine actions. Node implements the timers API that is also found in web browsers.

setTimeout

setTimeout lets us schedule an arbitrary function to be executed in the future. For example:

var timeout = 2000; // 2 seconds
setTimeout(function() {
  console.log('time out!');
}, timeout);

The code above registers a function to be called when the timeout expires. As anywhere in JavaScript, we can pass in an inline function, the name of a function, or a variable whose value is a function.

If we set the timeout to 0 (zero), the function we pass gets executed some time after the stack clears, but with no waiting. This can be used, for instance, to schedule a function that does not need to be executed immediately. This was a trick sometimes used in browser JavaScript. Another alternative is process.nextTick(), which is more efficient.

clearTimeout

A timer can be disabled after it is scheduled. To clear it, we need the timeout handle returned by setTimeout:

var timeoutHandle = setTimeout(function() {
  console.log('Groaaarrrr!!!');
}, 1000);

clearTimeout(timeoutHandle);

If you look carefully, this timeout will never fire because we clear it right after we set it. Another example:

var timeoutA = setTimeout(function() {
  console.log('timeout A');
}, 2000);

var timeoutB = setTimeout(function() {
  console.log('timeout B');
  clearTimeout(timeoutA);
}, 1000);

Here timeoutA will never be executed. There are two timers above: A with a timeout of 2 seconds and B with a timeout of 1 second. timeoutB (which fires first) unschedules timeoutA, so A never executes and the program exits right after timeoutB's callback runs.

setInterval

setInterval is similar to setTimeout, but schedules a given function to run every X milliseconds.

var period = 1000; // 1 second
var interval = setInterval(function() {
  console.log('tick');
}, period);

That code will keep the console logging 'tick' indefinitely unless we terminate Node.

clearInterval

To cancel a schedule set by setInterval, the procedure is similar to clearTimeout. We need the interval handle returned by setInterval:

var interval = setInterval(...);
clearInterval(interval);

process.nextTick

A callback function can also be scheduled to run on the next run of the event loop. To do so, we use:

process.nextTick(function() {
  // This runs on the next event loop
  console.log('yay!');
});

This method is preferred over setTimeout(fn, 0) because it is more efficient (a short ordering sketch appears at the end of this article). On each loop, the event loop executes the queued I/O events sequentially by calling the associated callbacks. If any callback takes too long, the event loop cannot process other pending I/O events in the meantime (blocking), which can lead to waiting customers or tasks. When executing something that may take too long, we can delay execution until the next event loop iteration, so waiting events get processed in the meantime. It's like going to the back of a waiting line.

To escape the current event loop iteration, we can use process.nextTick() like this:

process.nextTick(function() {
  // do something
});

This delays processing that does not need to happen immediately to the next event loop iteration. For instance, we might need to remove a file, but perhaps we don't need to do it before replying to the client.
So we could do something like this:

stream.on('data', function(data) {
  stream.end('my response');
  process.nextTick(function() {
    fs.unlink('path/to/file');
  });
});

Let's say we want to schedule a function that does some I/O – like parsing a log file – to execute periodically, and we want to guarantee that no two of those functions are executing at the same time. The best way is not to use setInterval, since we don't have that guarantee: the interval will fire whether or not the function has finished its duty. Supposing there is an asynchronous function called "async" that performs some I/O and gets a callback to be invoked when finished, and we want to call it every second:

var interval = 1000;
setInterval(function() {
  async(function() {
    console.log('async is done');
  });
}, interval);

If no two async() calls may overlap, we are better off using tail recursion like this:

var interval = 1000;
(function schedule() {
  setTimeout(function() {
    async(function() {
      console.log('async is done!');
      schedule();
    });
  }, interval);
})();

Here we declare schedule() and invoke it immediately after declaring it. This function schedules another function to execute in one second. That function then calls async(), and only when async is done do we schedule a new run by calling schedule() again, this time inside the schedule function. This way we can be sure that no two calls to async execute simultaneously in this context. The difference is that async will probably not be called every second (unless async takes no time to execute); instead it will be called 1 second after the last run finished.
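Earlier it was noted that process.nextTick() is preferred over setTimeout(fn, 0). A minimal sketch, with the expected ordering written as comments (the log messages are arbitrary), makes the difference visible:

setTimeout(function() {
  console.log('setTimeout with 0 ms');
}, 0);

process.nextTick(function() {
  console.log('process.nextTick');
});

console.log('synchronous code');

// Expected output order:
//   synchronous code
//   process.nextTick
//   setTimeout with 0 ms

The nextTick callback runs as soon as the current operation completes, before the event loop moves on to timer events.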
NodeJS Event Emitter

Posted: 19 Oct 2013 05:56 PM PDT

Many objects in NodeJS can emit events. For instance, a TCP server can emit a 'connect' event every time a client connects, and a file stream can emit a 'data' event.

Connecting an Event

One can listen for events. If you are familiar with other event-driven programming, you will know there is a function or method "addListener" to which you pass a callback. Every time the event is triggered – for example, the 'data' event is triggered every time there is some data available to read – your callback is called. In NodeJS, here is how we achieve that:

var fs = require('fs'); // get the fs module
var readStream = fs.createReadStream('file.txt');

readStream.on('data', function(data) {
  console.log(data);
});

readStream.on('end', function(data) {
  console.log('file ended');
});

Here, on the readStream object, we bind two events: 'data' and 'end'. We pass a callback function to handle each of these cases. We can pass in an anonymous function (as we are doing here), a function name for a function available in the current scope, or a variable containing a function.

Only Connect Once

There are cases where we only want to handle an event once and then stop listening; that is, we are interested only in the first occurrence of the event. There are two ways to do it: use the .once() method, or make sure we remove the callback once it has been called. The first is the simplest way: we use .once() to tell NodeJS we are only interested in the first occurrence of the event.

object.once('event', function() {
  // Callback body
});

The other way is:

function evtListener() {
  // Function body
  object.removeListener('event', evtListener);
}
object.on('event', evtListener);

Here we use removeListener(), which is discussed in the next section. In both samples above, make sure you pass an appropriate callback, i.e. one with the appropriate number of arguments, and specify the event name.

Removing a Callback from a Certain Event

Though we used it in the previous section, we will discuss it again here. To remove a callback we need both the object from which we will remove the callback and the event name. Note that they are a pair which we must provide; we can't provide only one of them.

function evtListener() {
  // Function body
  object.removeListener('event', evtListener);
}
object.on('event', evtListener);

removeListener belongs to the EventEmitter pattern. It accepts the event name and the function it should remove.

Removing All Callbacks from a Certain Event

If you ever need to, removing all listeners for an event from an event emitter is possible. We can use:

object.removeAllListeners('event');

Creating Self-Defined Events

One can use this event emitter pattern throughout an application. The way we do it is by creating a pseudo-class and making it inherit from EventEmitter.

var EventEmitter = require('events').EventEmitter,
    util = require('util');

// Here is the MyClass constructor
var MyClass = function(option1, option2) {
  this.option1 = option1;
  this.option2 = option2;
}

util.inherits(MyClass, EventEmitter);

util.inherits() sets up the prototype chain so that the EventEmitter prototype methods are available on MyClass instances.
That way, instances of MyClass can emit events:

MyClass.prototype.someMethod = function() {
  this.emit('custom event', 'some arguments');
}

This emits an event named 'custom event', also sending some data (in this case, "some arguments"). A client of a MyClass instance can listen for the 'custom event' event like this:

var myInstance = new MyClass(1, 2);
myInstance.on('custom event', function() {
  console.log('got a custom event!');
});
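The data passed to emit() arrives as arguments of the listener. A small sketch, reusing MyClass as defined above, shows how a client picks it up:

var myInstance = new MyClass(1, 2);

// the second argument given to emit() shows up here as `data`
myInstance.on('custom event', function(data) {
  console.log('got a custom event with: ' + data);
});

myInstance.someMethod(); // => got a custom event with: some arguments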
NodeJS Buffers

Posted: 19 Oct 2013 05:10 PM PDT

Natively, JavaScript is not very good at handling binary data. Therefore NodeJS adds a native Buffer implementation, still manipulated in JavaScript fashion. The Buffer class is the standard way in Node to transport data. A buffer can generally be passed to every Node API that requires data to be sent. Also, when receiving data in a callback, we get a buffer (except when we specify a stream encoding, in which case we get a string).

Create a Buffer

The default encoding format in NodeJS is UTF-8. To create a buffer from a UTF-8 string, we can do:

var buf = new Buffer('Hello World');

A new buffer can also be created from a string in another encoding format, as long as we specify the encoding in the second argument:

var buf = new Buffer('8b76fde713ce', 'base64');

Accepted encodings include "ascii", "utf8", and "base64". We can also create a new empty buffer by specifying its size:

var buf = new Buffer(1024);

Accessing a Buffer

Accessing a buffer is like accessing an array of bytes. We use [] to access an individual byte:

buf[20] = 127; // Set byte 20 to 127

Format Conversion

Data held in a buffer using one encoding format can be converted to another encoding format:

var str = buf.toString('utf8'); // UTF-8
var str1 = buf.toString('base64'); // Base64
var str2 = buf.toString('ascii'); // ASCII

When you don't specify the encoding, Node assumes UTF-8. If you need a specific encoding, pass it as the argument.

Slice a Buffer

A buffer can be sliced into a smaller buffer using the appropriately named slice() method:

var buffer = new Buffer('A buffer with UTF-8 encoded string');
var slice = buffer.slice(10, 20);

In the code above we slice the original buffer of 34 bytes into a new buffer of 10 bytes, covering bytes 10 through 19 of the original. Note that the slice function does not allocate new buffer memory; it uses the original buffer underneath, untouched (a short sketch at the end of this article demonstrates this).

Copy from a Buffer

We can copy a part of a buffer into another, pre-allocated buffer:

var buffer = new Buffer('A buffer with UTF-8 encoded string');
var slice = new Buffer(10);
var targetStart = 0,
    sourceStart = 10,
    sourceEnd = 20;

buffer.copy(slice, targetStart, sourceStart, sourceEnd);

It should be self-explanatory: here we copy part of buffer into slice, but only the data at positions 10 through 20.
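As promised in the slice section, here is a small sketch showing that a slice shares memory with its parent buffer (the string content is just an example):

var buffer = new Buffer('A buffer with UTF-8 encoded string');
var slice = buffer.slice(10, 20);

// modify the first byte of the slice
slice[0] = 33; // ASCII code for '!'

// the change shows up in the original buffer as well,
// because both objects point at the same memory
console.log(buffer.toString('utf8')); // => 'A buffer w!th UTF-8 encoded string'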
NodeJS Utilities: About I/O and Debugging

Posted: 19 Oct 2013 12:10 PM PDT

Utilities, as the name implies, are helpers employed for distinct purposes. NodeJS provides these utilities through global objects and modules.

console

Node provides a global "console" object to which we can output strings. The output goes to different streams depending on the method used.

.log()

If you want to print data to stdout, it's as simple as writing the following code:

console.log("Hello World!");

which will print "Hello World!". The data (a string in this case) is streamed out after formatting it. It is mainly used for simple output, and instead of a string we can also output an object, like this:

var a = {1: true, 2: false};
console.log(a); // => {'1': true, '2': false}

We can also use string interpolation to print things out:

var a = {1: true, 2: false};
console.log('This is a number: %d, and this is a string: %s, ' +
  'and this is an object outputted as JSON: %j', 42, 'Hello', a);

which in turn prints:

This is a number: 42, and this is a string: Hello, and this is an object outputted as JSON: {"1": true, "2": false}

If you are familiar with C or C++, you might find this similar to C's printf() function. The placeholders are similar: %d for a number (integer or floating point), %s for a string, and %j for JSON, which does not exist in C's formatting.

.warn()

If you want to print to stderr, you can do this:

console.warn("Warning!!");

.trace()

And to print a stack trace, you can do:

console.trace();

which will present the current state of the stack.

util

util is a module which bundles some utility functions. To use this module, we need to require it:

var util = require('util');

.log()

var util = require('util');
util.log('Hello');

This is similar to console.log(), but slightly different: util.log() prints the current timestamp along with the given string, building a line like this:

Mar 17:11:09 - Hello

.inspect()

There is also a handy function, inspect(), which is nice for quick debugging by inspecting and printing an object's properties:

var util = require('util');
var a = {1: true, 2: false};
console.log(util.inspect(a));

We can give more arguments to util.inspect(), in the following format:

util.inspect(object, showHidden, depth = 2, showColors);

showHidden, when turned on, makes inspect show non-enumerable properties as well; these belong to the object's prototype chain, not the object itself. depth, the third argument, is the depth of the object graph it should show (2 by default), which is useful when inspecting large objects. To recurse indefinitely, pass a null value (see the sketch at the end of this article).

An important note: util.inspect keeps track of the visited objects, so if there are circular dependencies, the string "[Circular]" will appear in the output.
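A short sketch of the depth argument discussed above; the nested object and its values are only an example:

var util = require('util');

var nested = { level1: { level2: { level3: { value: 42 } } } };

// default depth (2) collapses the deepest level
console.log(util.inspect(nested));
// => { level1: { level2: { level3: [Object] } } }

// pass null as the depth to recurse indefinitely
console.log(util.inspect(nested, false, null));
// => { level1: { level2: { level3: { value: 42 } } } }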
NodeJS API Quick Tour

Posted: 19 Oct 2013 11:42 AM PDT

Most programming languages ship with what they call a standard library. In NodeJS there is a predefined library which we call the API. Each API is a module which can be included in source code. Node provides a platform API that covers several aspects, outlined below.
Note that we won't cover all the available objects, nor cover the material in detail. For that purpose, you should go to http://nodejs.org/api/ instead. Our list is also built only from stable modules. There are some unstable modules which may change later, for example crypto; we won't cover those here.

[ Process ]

Node allows the programmer to analyze the current process (environment variables, etc.) and manage external processes. The modules involved are:

process
Inquire the current process to know the PID, environment variables, platform, memory usage, etc.

child_process
Spawn and kill new processes, execute commands and pipe their outputs.

[ File System ]

A low-level API is also provided to manipulate files and directories. The whole API is influenced by POSIX style.

fs
Used for file manipulation: create, remove, load, write, and read files. This module is also used to create read and write streams.

path
Normalize and join file paths. It can also be used to check whether a file exists or is a directory.

[ Networking ]

Used for networking purposes such as connecting, sending, and receiving information over a network.

net
Create a TCP server or client.

dgram
Manipulate UDP packets, including receiving and sending them.

http
Create an HTTP server or client. It is a more specific version of the net module.

tls
Transport Layer Security (TLS) is the successor of the Secure Socket Layer (SSL) protocol. Node uses OpenSSL to provide TLS and/or SSL encrypted stream communication.

https
Implements HTTP over TLS/SSL.

dns
Implements asynchronous DNS resolution.

[ Utilities ]

Various utilities for NodeJS.

util
A module which bundles various utility functions.
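As a small taste of this API, here is a minimal sketch that combines two of the modules listed above, http and process; the port number is arbitrary:

var http = require('http');

var server = http.createServer(function(req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Served by process ' + process.pid + '\n');
});

server.listen(8080, function() {
  console.log('listening on port 8080');
});

Run it with node and point a browser at http://localhost:8080/ to see the response.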
Introduction to NodeJS Modules

Posted: 19 Oct 2013 10:10 AM PDT

NodeJS is different from client-side JavaScript, aside from the platform it runs on. Client-side JavaScript has a bad reputation because of the common namespace shared by all scripts, which can lead to conflicts and security leaks. Node, however, implements the CommonJS modules standard. Node uses modules, where each module is separated from the others. This way each module has its own namespace and exports only the desired properties (a small example appears at the end of this article).

Import a Module

Importing (including) an existing module is easy. One can use the require() function to do so:

var module = require('module_name');

This imports module_name and names it module. The module is either a standard API provided by NodeJS, a module installed by npm, or simply a user-defined module. We can also use path notation to import a module:

var module = require("/absolute/path/to/module_name");
var module2 = require("./relative/path/to/module_name");

The first declaration fetches the module using an absolute file path, and the second one uses a path relative to the current directory. The fetched object is treated like any other object: it has a name (the variable name) and is allocated in memory.

Modules are loaded only once per process. If you have several require calls for the same module, Node caches them as long as they resolve to the same file.

How Node Resolves a Module Path

Core Modules

There is a list of core modules, which Node includes in the distribution binary; this is the standard API. When we require such a module, Node just returns it and require() ends there.

Modules with a Path (Absolute or Relative)

If a module path is supplied (absolute or relative), Node tries to load the module as a file. If that does not succeed, it tries to load the module as a directory.

When loading a module as a file, if the file exists, Node just loads it as JavaScript text. If not, it tries the same thing after appending the ".js" extension to the given path. Again, if this does not succeed, Node tries appending the ".node" extension and loading it as a binary add-on.

When loading a module as a directory, several steps are taken. If the directory contains a "package.json" file, Node tries to load the package definition, looks for a "main" field, and then tries to load that as a file. If unsuccessful, Node tries to load "/index" appended to the path.

Installed Modules

Third-party modules are installed using npm. If the module path does not begin with "." or "/", or if loading it with absolute or relative paths does not work, Node tries to load the module as one that was previously installed. It appends "/node_modules" to the current directory and tries to load the module from there. If it does not succeed, it tries adding "/node_modules" to the parent directory and loading the module from there. This is repeated until the module is found or the root directory has been reached. This means we can put our Node modules inside our app directory and Node will find them.
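As mentioned in the introduction, a module exports only the properties it chooses to. Here is a minimal sketch of a user-defined module and its consumer; the file names circle.js and app.js are just an example:

// circle.js - a hypothetical user-defined module
var PI = Math.PI; // stays private to this module

exports.area = function(r) {
  return PI * r * r;
};

// app.js - the consumer
var circle = require('./circle');
console.log('area of a circle with radius 4: ' + circle.area(4));

Only what is attached to exports is visible to the requiring code; PI stays hidden inside circle.js.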
Understanding Node Event Loop

Posted: 19 Oct 2013 09:48 AM PDT

Evented I/O programming using Node.js is simple. Node has put speed and scalability at the fingertips of the common programmer. But the event loop approach comes with a price, even if you are not aware of it. Here we will discuss how Node works – technically, the Node event loop – and the do's and don'ts on top of it.

Event-Queue Processing Loop

Concurrency is the true nature of Node, and it is achieved with the event loop. The event loop can be thought of as a loop that processes an event queue. Interesting events happen and go into a queue, waiting for their turn. Then there is an event loop popping out these events, one by one, and invoking the associated callback functions, one at a time. The event loop pops one event out of the queue and invokes the associated callback. When the callback returns, the event loop pops the next event and invokes the associated callback. When the event queue is empty, the event loop waits for new events if there are pending calls or servers listening, or just quits if there are none.

For example, create a new file named hello.js and write this:

setTimeout(function() {
  console.log('World!');
}, 2000);
console.log('Hello');

Run it using the node command-line tool:

node hello.js

You should see the word "Hello" written out first, and the word "World!" come out 2 seconds later. In the code the word "World!" appears first, but that is not the order in which things execute. Remember that Node uses an event loop. When we declare the anonymous function that prints "World!", this function is not executed yet. It is passed as the first argument to a setTimeout() call, which schedules it to run 2000 milliseconds (2 seconds) later. Then the next statement is executed, which prints "Hello". Two seconds later the timeout event occurs and the scheduled callback is executed, which means the word "World!" is printed.

So, the first argument to the setTimeout call is a function we call a "callback". It is a function which will be called later, when the event we set out to listen for occurs. After our callback is invoked, Node understands there is nothing more to do and exits.

Callbacks that Generate Events

In the example above, we only use one-time execution. We can keep Node busy and keep on scheduling callbacks by using this pattern:

(function schedule() {
  setTimeout(function() {
    console.log('Hello World!');
    schedule();
  }, 1000);
})();

If you look at it, it seems similar to a recursive function, but it is not. We wrap the whole thing inside a function named "schedule" and invoke it immediately after declaring it. This function schedules a callback to execute in 1 second. That callback, when invoked, prints "Hello World!" and then runs schedule again. On every callback we register a new one to be invoked one second later, never letting Node finish.

Not Blocking

The primary concern and main use case for an event loop is to create highly scalable servers. Since an event loop runs in a single thread, it only processes the next event when the current callback finishes. If you could see the call stack of a busy Node application, you would see it going up and down really fast, invoking callbacks and picking up the next event in line. But for this to work well, you have to clear the event loop as fast as you can.

There are two main categories of things that can block the event loop: synchronous I/O and big loops. The Node API is not all asynchronous.
Some parts of it are synchronous, for instance some file operations. Don't worry, they are very well marked: they always end in "Sync" – like fs.readFileSync – and they should not be used, or used only when initializing. On a working server you should never use a blocking I/O function inside a callback, since you are blocking the event loop and preventing other callbacks – probably belonging to other client connections – from being served.

The second blocking scenario is when a giant loop is performed. A giant loop here means a loop that takes a lot of time, like iterating over thousands of objects or doing complex, time-consuming operations in memory. Let's see an example:

var open = false;

setTimeout(function() {
  open = true;
}, 1000);

while (!open) {
  // wait
}

console.log('opened!');

You might expect this code to work, thinking the setTimeout() callback will surely execute after 1 second. However, this never happens: Node never executes the timeout callback because the event loop is stuck on the while loop, never giving the timeout event a chance to be processed.
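One non-blocking way to express the same intent is to let the code that depends on the state change run from a callback instead of polling. The sketch below swaps the busy-wait for an event emitter; it is an alternative illustration, not part of the original example:

var EventEmitter = require('events').EventEmitter;
var door = new EventEmitter();

door.on('open', function() {
  console.log('opened!');
});

// something else opens the door one second later;
// in the meantime the event loop is free to serve other events
setTimeout(function() {
  door.emit('open');
}, 1000);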
Function in JavaScript

Posted: 19 Oct 2013 09:47 AM PDT

Nearly every programming language has functions. A function is a block of code (enclosed in curly brackets) which is executed when "someone" calls it. Like a variable, a function can be defined anywhere in the code. There are several ways of defining a function in JavaScript, described below.
Function Declaration

Basically, a function has a name followed by parentheses containing a list of arguments, followed by a block of code enclosed in curly brackets:

function function_name(list_of_arguments) {
  // some code to be executed
}

The code inside the function is executed when the function is called. It can be called directly when an event occurs (like when a user clicks a button), or by another function. Arguments are optional: you can have one or more arguments supplied to the function, and of course you can have a function which takes no arguments at all. An example function:

function greet(name) {
  alert("Hello " + name);
}

Note that the keyword function at the beginning is a must. To call a function, we can use the following (calling the greet() function):

greet("Xathrya");

If a function has arguments, make sure you supply the correct arguments when you call it.

Function declarations are parsed at the pre-execution stage, when the browser prepares to execute the code. That's why both of these snippets work:

//-- 1
function greet(name) { alert("Hello " + name) }
greet("Xathrya")

//-- 2
greet("Xathrya")
function greet(name) { alert("Hello " + name) }

And a function can be declared anywhere in the code, even inside a branch or a loop.

Function Expression

A function in JavaScript is a first-class value, just like a number or a string. As we remember, JavaScript is loosely typed. Anywhere you could put a value, you can also put a function, declared "in place" with the function expression syntax:

var f = function(name) {
  alert("Hello " + name);
}

And we can invoke it as:

f("Xathrya");

The point is that the function is constructed as an expression, just like any other value.

Local Variables

A function may have variables defined with var. In terms of scope, these are local variables; we call them local because they are only visible inside the function.

function sum(a, b) {
  var sum = a + b;
  return sum;
}

Returning a Value

As the name implies, a function may return a value (although it is not required to). To return a value, we use return:

function sum(a, b) {
  return a + b;
}

var result = sum(2, 5);
alert(result);

If a function does not return anything, its result is considered to be the special value undefined.

Function is a Value

In JavaScript, a function is a regular value. Just like any value, a function can be assigned, passed as a parameter to another function, and so on. It doesn't matter how it was defined.

function greet(name) {
  alert("Hello " + name);
}

var hello = greet; // assign a function to another variable
hello("dude");     // call the function

The function is assigned by reference. That is, the function is kept somewhere in memory and the variable only holds a reference to it. A function can also be used as an argument for another function. In the extreme case, we can declare a function right in the argument list. Here is what we are talking about:

function runWithOne(f) {
  // runs the given function with argument 1
  f(1);
}

runWithOne(function(a) { alert(a); });

Logically, a function is an action, so passing a function around is transferring an action which can be initiated from another part of the program. This feature is widely used in JavaScript. In the example above, we create a function without a name and don't assign it to any variable. Such functions are called anonymous functions.

Running at Place

It is possible to create and run a function created with a function expression at once:

(function() {
  var a, b; // local variables
  // ...
  // and the code
})();

Please note the usage of parentheses and curly brackets; it matters.
Running in place is mostly used when we want to do a job involving local variables. We don't want our local variables to become global, so we wrap the code in a function; after the execution, the global namespace is still clean. That's good practice. In the code above, we wrap the function in parentheses so that the interpreter considers it part of an expression statement, hence a function expression. If a function is obviously an expression, then there is no need for the wrapping parentheses, for instance:

var result = function(a, b) { return a + b }(2, 2);
alert(result); // 4

We see that the function is created and called instantly, in a single expression.

Named Function Expression

A function expression may have a name. The syntax is called a named function expression (or NFE):

var f = function greet(name) {
  alert("Hello " + name);
}

As we said, a function is a value, and here that value carries its own name (in this case greet), which is visible inside the function body. NFEs exist to allow recursive calls from anonymous functions; a short sketch appears at the end of this article.

Function Naming

There is a popular convention for naming a function. A function is an action, so its name should be a verb, like get, read, or calculateSum. Short function names can be allowed in some cases.
In other cases, the name of a function should be a verb or multiple words starting with a verb.
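As promised in the named function expression section, here is a small sketch of an NFE calling itself; factorial is just an example:

// a named function expression: the name `fact` is visible
// inside the function body, so it can call itself
var factorial = function fact(n) {
  return n <= 1 ? 1 : n * fact(n - 1);
};

alert(factorial(5)); // 120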