Saturday, 18 January 2014

Xathrya Sabertooth


Access Ext3, ReiserFS, XFS in Windows using coLinux

Posted: 17 Jan 2014 08:18 PM PST

One problem with the Windows operating system is the narrow range of file systems it supports. On Windows, you are limited to the FAT and NTFS families. If you want to access an ext3, ReiserFS, or XFS partition from Windows, there is no built-in way to do it. This article describes the steps for doing it using coLinux.

In this article I use the following:

  1. Windows 7 32-bit
  2. coLinux 0.7.9
  3. Ubuntu Image 9.04

Note: at the time of writing (18 January 2014), coLinux can only be used on 32-bit Windows!

You can try any 32-bit Windows (Vista, 7, 8), but the system tested here is Windows 7 32-bit.

There are some alternatives to Ubuntu: Alpine, Debian, Fedora, Gentoo, ArchLinux, and Slackware. All of them use kernel 2.6. The choice of version 9.04 is personal; you can use a higher version if available.

The Idea Behind

coLinux, or Cooperative Linux, is a free and open-source Linux kernel that runs natively on Windows. We can say the Linux kernel runs alongside Windows on a single machine, much like in a virtual machine. The difference is that we don't emulate a machine to do this; we run the Linux kernel itself. Thus it is theoretically much more efficient than any general-purpose PC virtualization software.

So in general, we will do the following:

  1. Install coLinux on the Windows machine
  2. Ensure access to the disk partition
  3. Export all the mounted file systems using Samba

Much of the process uses the command-line interface (cmd.exe), which you should open with administrator privileges.

Installation Steps

coLinux can be obtained freely from its official site, or from its SourceForge page. The version used here is 0.7.9-linux2.6.33.7. Download the installer and install it in the C:\coLinux directory.

Edit the connection settings of the virtual Ethernet card installed by coLinux (it should be listed as "TAP Win32 Adapter V8 (coLinux)"). In the TCP/IP settings, set the IP address to 192.168.37.10 and the Subnet Mask to 255.255.255.0.

Download the Ubuntu 9.04 disk image from SourceForge (listed as Ubuntu-9.04-1gb.7z). Extract the image file (Ubuntu-9.04.ext3.1gb.fs) to C:\coLinux. Note that you need 7-Zip to extract this file. Now create a swap file (for example, a 128MB swap) using the following command on Windows:

fsutil file createnew c:\coLinux\swap128.fs 134217728

The magic number is simply 128 MB in bytes: 128 × 1024 × 1024 = 134217728 ;)

Also make sure you run mkswap on the swap device from inside Linux, and that there is a corresponding line in fstab.

Rename the file Ubuntu-9.04.ext3.1gb.fs to ubuntu.fs so we have two files: ubuntu.fs and swap128.fs.

Copy example.conf to ubuntu.conf and edit it. Alternatively, you can copy the text below and save it as ubuntu.conf.

#
# This is an example for a configuration file that can
# be passed to colinux-daemon in this manner:
#
#    colinux-daemon @example.conf
#
# Note that you can still prepend or append configuration and
# boot parameters before and after '@', or you can use more
# than one '@' to load several settings one after another.
#
#    colinux-daemon @example.conf @overrider.conf mem=32
#
# Full list of config params is listed in colinux-daemon.txt.

# The default kernel
kernel=vmlinux

# File contains the root file system.
# Download and extract preconfigured file from SF "Images for 2.6".
cobd0="c:\coLinux\ubuntu.fs"

# Swap device, should be an empty file with 128..512MB.
cobd1="c:\coLinux\swap128.fs"

# Tell kernel the name of root device (mostly /dev/cobd0,
# /dev/cobd/0 on Gentoo)
# This parameter will be forwarded to the Linux kernel.
root=/dev/cobd0

# Additional kernel parameters (ro = rootfs mount read only)
ro

# Initrd installs modules into the root file system.
# Needed only on first boot.
initrd=initrd.gz

# Maximal memory for linux guest
mem=32

# Select console size, default is 80x25
#cocon=120x40

# Slirp for internet connection (outgoing)
# Inside running coLinux configure eth0 with this static settings:
# ipaddress 10.0.2.15   broadcast  10.0.2.255   netmask 255.255.255.0
# gateway   10.0.2.2    nameserver 10.0.2.3
eth0=slirp

# Tuntap as private network between guest and host on second linux device
eth1=tuntap

# Setup for serial device
#ttys0=COM1,"BAUD=115200 PARITY=n DATA=8 STOP=1 dtr=on rts=on"

# Run an application on colinux start (Sample Xming, a Xserver)
#exec0=C:\Programs\Xming\Xming.exe,":0 -clipboard -multiwindow -ac"

If you copy and edit example.conf, there are lines you need to change. In the end, make sure the following entries exist:

cobd0=C:\coLinux\ubuntu.fs
cobd1=C:\coLinux\swap128.fs
mem=32
eth0=slirp
eth1=tuntap

Now create ubuntu-start.cmd with following content:

set COLINUX_CONSOLE_FONT=Lucida Console:12
set COLINUX_CONSOLE_EXIT_ON_DETACH=1
colinux-daemon.exe -t nt @ubuntu.conf

Then run ubuntu-start.cmd.

Next we do some configuration. Log in as root with the default password "root". You can change the root password if you want.

Run an editor and edit /etc/network/interfaces. Add the following:

auto eth1
iface eth1 inet static
    address 192.168.37.20
    network 192.168.37.0
    netmask 255.255.255.0
    broadcast 192.168.37.255

Bring up eth1 with:

ifup eth1

Test the network by pinging Windows from our Linux. It should work now.

ping 192.168.37.10

Next, update and install the necessary packages.

aptitude update
aptitude safe-upgrade
aptitude install samba openssh-server mc
apt-get clean

Open /etc/fuse.conf and remove the # at the beginning of the user_allow_other line.

Open /etc/ssh/sshd_config and change PermitRootLogin to "no".

Add a new user named user1 (or any name you wish); later we will be able to log in to this account via SSH:

adduser user1

But first, we need to reload the SSH server:

/etc/init.d/ssh reload

At this point, use an SSH client such as PuTTY to check whether you can log in to Linux via SSH.

Mount File System

First, find the partition you want to mount. Let's say it is \Device\Harddisk1\Partition4.

If you have run ubuntu-start.cmd (the system is running), halt it before going further.

Edit ubuntu.conf and insert the following:

cobd2="\Device\Harddisk1\Partition4"

Then run ubuntu-start.cmd (or halt the previous machine and restart it), and log in as root.

Do the following to create a mount point:

mkdir /media/cobd2

Then edit /etc/fstab and add the following line (assuming the partition is XFS):

/dev/cobd2 /media/cobd2 xfs defaults 0 0

Then mount it as usual.

mount /media/cobd2

Share via Samba

Give user1 the appropriate privileges. If you want to share a whole file system with user1, you must grant read and write permissions.

After setting permissions, add the following at the very end of /etc/samba/smb.conf:

[my data]
path = /media/cobd2
valid users = user1
read only = no

Next, add the user to Samba's password database:

smbpasswd -a user1

Then reload Samba:

/etc/init.d/samba reload

On Windows, open \\192.168.37.20 and log in as user1 using the password set with smbpasswd.

Tuesday, 14 January 2014



Know the Concept: Desktop Environment and Window Manager

Posted: 14 Jan 2014 02:37 AM PST

In modern operating systems, user interfaces tend toward graphical systems. We see windows, buttons, cursors, etc. In UNIX, the graphical components are managed by an independent subsystem. Here we encounter two concepts: Desktop Environment and Window Manager. However, some people are still confused about which is which and which is best to use. The former question is fairly simple to answer; the latter is more complex because it depends on what a specific user wants.

The Layering System

UNIX uses a layering system for its graphical desktop. The system is mostly comprised of the following (from the base to the top):

  • The Foundation – A system that allows graphical elements to be drawn on the display. This system builds the primitive framework that lets the system paint to various screens (displays), interact with keyboard and mouse, etc. It is required for any graphical desktop. On most UNIX systems, X Windows is used. There is also an alternative to X Windows, Wayland, which is still in active development.
  • Window Manager – The Window Manager is the piece of the puzzle that controls the placement and appearance of windows. Window Managers include: Enlightenment, Afterstep, FVWM, Fluxbox, IceWM, etc. This layer needs the foundation (X Windows) but not Desktop Environment.
  • Desktop Environment – This is where it begins to get a little fuzzy for some. A Desktop Environment includes a Window Manager but builds upon it. The Desktop Environment typically is a far more fully integrated system than a Window Manager. Requires both X Windows (Foundation) and a Window Manager. Examples: GNOME, KDE

So, a Desktop Environment generally includes a suite of applications that are tightly integrated so that all applications are aware of one another. It basically rides on top of a Window Manager and adds many features, including panels, status bars, drag-and-drop capabilities, and a suite of integrated applications and tools. Most user opinions on operating systems are typically based on the Desktop Environment.

As implied by the name, a Window Manager manages windows. It allows windows to be opened, closed, resized, and moved. It is also capable of presenting menus and options to the user. It controls the look and feel of the user's GUI.

The Type of Window Managers

Window managers are often divided into three or more classes, which describe how windows are drawn and updated.

Compositing Window Managers

Compositing window managers let all windows be created and drawn separately and then put together and displayed in various 2D and 3D environments. The most advanced compositing window managers allow for a great deal of variety in interface look and feel, and for the presence of advanced 2D and 3D visual effects. Examples:

  1. Compiz
  2. KWin
  3. Xfwm
  4. Enlightenment (E17)
  5. Mutter

Stacking Window Managers

All window managers that have overlapping windows and are not compositing window managers are stacking window managers, although it is possible that not all use the same methods. Stacking window managers allow windows to overlap by drawing background windows first, which is referred to as painter’s algorithm. Changes sometimes require that all windows be re-stacked or repainted, which usually involves redrawing every window. However, to bring a background window to the front usually only requires that one window be redrawn, since background windows may have bits of other windows painted over them, effectively erasing the areas that are covered.

Examples:

  1. AfterStep
  2. Blackbox
  3. Fluxbox
  4. FLWM
  5. sawfish
  6. Window Maker
  7. WindowLab

Tiling Window Manager

Tiling window managers paint all windows on-screen by placing them side by side or above and below each other, so that no window ever covers another.

Examples:

  1. wmii
  2. xmonad

Dynamic Window Manager

Dynamic window managers can dynamically switch between tiling and floating window layouts. A dynamic window manager is a tiling window manager where windows are tiled based on preset layouts between which the user can switch. Layouts typically have a master area and a slave area. The master area usually shows one window, but the number of windows in this area can be changed. The point is to reserve more space for the more important window(s). The slave area shows the other windows.

Examples:

  1. dwm
  2. xmonad

So Which One Suit Me?

Again, it is a tough question. Your choice may be affected by many factors: your needs, your taste, your mood, etc. Unless you want to explore deeply, you might consider the default Desktop Environment and Window Manager provided by your distribution.

Separating CPP Template Declaration and Implementation

Posted: 14 Jan 2014 12:59 AM PST

Templates are a feature of the C++ programming language that allow functions and classes to operate with generic types. This allows a function or class to work on many different data types without being rewritten for each one. The C++ Standard Library devotes a special section to heavy use of templates, called the Standard Template Library.

Defining a template is like defining a blueprint. When we need a specific instance of a function or class, the instance can be made from the generic blueprint. However, the common C++ procedure of putting the declaration in a header file and the implementation in a source file cannot be applied to templates. This behavior is true for all compilers.

In this article we will discuss how to separate the declaration and implementation of a template (class or function). The article is divided into three sections: how templates are parsed, the compilation issue, and the solution methods.

How Template is Parsed

Unlike other code, templates are parsed not once but twice. This process is explicitly defined in the C++ standard, and although some compilers ignore it, they are, in effect, non-compliant and may behave differently from what this article describes.

So what are those two passes? They are the Point of Declaration (PoD) and the Point of Instantiation (PoI).

1. Point of Declaration (PoD)

During the first parse, at the Point of Declaration, the compiler checks the syntax of the template against the C++ grammar. At this stage the compiler does not consider the dependent types (the template parameters that form types within the template).

In the real world, we can compare this phase to checking the grammar of a paragraph without checking the meaning of the words (the semantics). Grammatically the paragraph can be correct even though the arrangement of words has no useful meaning. During the grammar-checking phase, we don't care about the meaning of the words; our attention is only on correct syntax.

Now, consider following template code:

template <typename T>
void foo (T const & t)
{
    t.bar();
}

This is syntactically correct. However, at this point we have no idea what the dependent type T is, so we just assume that for any T it is correct to call the member bar() on it. Of course, if type T doesn't have this member then we have a problem, but until we know what type T is we don't know whether there is a problem, so this code is OK for now.

2. Point of Instantiation (PoI)

At the PoD we have declared a template; we have the blueprint. The instantiation process starts at the Point of Instantiation. At this point we actually define a concrete type from our template. So consider these two concrete instantiations of the template defined above:

foo(1); // this fails the 2nd pass: an int (1 is an int) has no member function called bar()
foo(b); // assuming b has a member function called bar, this instantiation is fine

It is perfectly legal to define a template that won't be correct under all circumstances of instantiation. Since code for a template is not generated unless the template is instantiated, the compiler won't complain unless we try to instantiate it.

At this point, the semantics are checked against the known dependent types to make sure that the generated code will be correct. To do this, the compiler must be able to see the full definition of the template. If the template is defined in a different place from where it is instantiated, the compiler has no way to perform this check. Thus, the template won't be instantiated and an error will be reported.

Remember that the compiler can only see and process one translation unit (module / source file) at a time. If the template is used only in one translation unit and defined in that same translation unit, no problem arises.

Now let's recap what we have so far. If the template definition is in translation unit A and we try to instantiate it in translation unit B, the compiler won't be able to instantiate the template because it can't see the full definition, so it will result in linker errors (undefined symbols). If everything is in one place it will work, but that is not a good way to write templates.

Template Compilation Issue

From the first section we know that the template definition should be visible in the same translation unit as the instantiation. Thus, the following code works:

/** BigFile.cpp **/
#ifndef _FOO_HPP_
#define _FOO_HPP_

template<typename T>
class Foo {
public:
   Foo() { }
   void setValue (T obj_i) { m_Obj = obj_i; }
   T getValue () { return m_Obj; }

private:
   T m_Obj;
};

#endif

Tuesday, 17 December 2013



Disable Automatic Address Configuration – Automatic Private IP Addressing (APIPA)

Posted: 16 Dec 2013 10:48 PM PST

In every computer running the Windows operating system, there is a feature Microsoft offers called APIPA. APIPA is a DHCP failover mechanism for local networks. With APIPA, DHCP clients can obtain IP addresses when DHCP servers are non-functional or when the client couldn't get an IP from the server. APIPA exists in all modern versions of Windows except Windows NT.

When the DHCP server fails, APIPA allocates IP addresses in the private range 169.254.0.1 to 169.254.255.254. This range is one of the private network address ranges (hence the name Automatic Private IP Addressing).

The method was tested on Windows 8.1 64-bit. It is a generic method, using a Registry entry.

To do this, open the Registry Editor. Before we proceed, remember that incorrectly editing the registry may severely damage the system. Back up any valued data on your machine before making changes to the registry. You can also use the Last Known Good Configuration startup option if problems are encountered after following this guide.

On Windows Vista onward, you will face User Account Control, which asks whether you grant permission to the Registry Editor. Choose Yes to proceed.

In Registry Editor, navigate to the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Now create a DWORD value (32-bit, if there are both 32-bit and 64-bit DWORD options) with the following name:

IPAutoconfigurationEnabled

Set the value to 0.

Close the Registry Editor. For the change to take effect, restart the machine.

Data Structure Alignment in C++ on x86 and x64 Machine

Posted: 16 Dec 2013 10:36 PM PST

Data structure alignment is the way data is arranged and accessed in memory. It consists of two separate but related issues: data alignment and data structure padding.

In this article we will discuss about memory alignment for simple struct.

All of the code was tested on Windows 8 64-bit using the GCC compiler suite. I said I use 64-bit Windows 8, but the title says we discuss both 32-bit and 64-bit machines, so I will give results for both.

Before we start, guess what the output of this program is. Write down your answer; don't compile it yet.

#include <iostream>
using namespace std;

// Alignment requirements

// char         1 byte
// short int    2 bytes
// int          4 bytes
// double       8 bytes

struct A
{
	char c;
	short s;
};

struct B
{
	short s;
	char c;
	int i;
};

struct C
{
	char c;
	double d;
	int i;
};

struct D
{
	double d;
	int i;
	char c;
};

int main()
{
	cout << "The sizeof A is: " << sizeof(A) << endl;
	cout << "The sizeof B is: " << sizeof(B) << endl;
	cout << "The sizeof C is: " << sizeof(C) << endl;
	cout << "The sizeof D is: " << sizeof(D) << endl;
	return 0;
}

Now read this article.

Definition of Data Alignment

Every data type in C/C++ has a data alignment requirement (in fact, it is mandated by the processor architecture, not by the language).

Memory is byte-addressable and arranged sequentially. If the memory is arranged as a single bank one byte wide, the processor needs to issue 4 memory read cycles to fetch an integer. We save a lot of work if we can read all 4 bytes of the integer in one memory cycle. To take such advantage, the memory is arranged as a group of 4 banks.

Memory addressing is still sequential. If bank 0 occupies address X, banks 1, 2, and 3 will be at addresses (X + 1), (X + 2), and (X + 3). If a 4-byte integer is allocated at address X (X being a multiple of 4), the processor needs only one memory cycle to read the entire integer.

Whereas, if the integer is allocated at an address that is not a multiple of 4, it spans two rows of the banks. Such an integer requires two memory read cycles to fetch.

A variable's data alignment deals with the way the data is stored in these banks. It is expressed as the numeric address modulo a power of 2. For example, the address 0x0001103F modulo 4 is 3; that address is said to be aligned to 4n+3, where 4 indicates the chosen power of 2. The alignment of an address depends on the chosen power of two: the same address modulo 8 is 7.

The natural alignment of int on 32-bit machine is 4 bytes. When a data type is naturally aligned, the CPU fetches it in minimum read cycles.

Similarly, the natural alignments of several data types are listed here:

For 32-bit x86:

  • A "char" (one byte) will be 1-byte aligned
  • A "short int" (two bytes) will be 2-byte aligned
  • An "int" (four bytes) will be 4-byte aligned
  • A "long" (four bytes) will be 4-byte aligned
  • A "double" (eight bytes) will be 8-byte aligned on Windows and 4-byte aligned on Linux (8-byte with the -malign-double compile-time option)
  • A "long long" (eight bytes) will be 8-byte aligned
  • A "long double" (ten bytes with C++Builder and DMC, eight bytes with Visual C++, twelve bytes with GCC) will be 8-byte aligned with C++Builder, 2-byte aligned with DMC, 8-byte aligned with Visual C++, and 4-byte aligned with GCC
  • Any "pointer" (four bytes) will be 4-byte aligned (e.g.: char*, int*)

A notable difference in alignment for 64-bit system when compared to 32-bit system:

  • A "long" (eight bytes) will be 8-byte aligned
  • A "double" (eight bytes) will be 8-byte aligned
  • A "long double" (eight bytes with Visual C++, sixteen bytes with GCC) will be 8-byte aligned with Visual C++ and 16-byte aligned with GCC
  • Any "pointer" (eight bytes) will be 8-byte aligned

So it means a short int can be stored in the bank 0–bank 1 pair or the bank 2–bank 3 pair. A double requires 8 bytes and occupies two rows in the memory banks. Any misalignment of a double forces more than two read cycles to fetch the data.

As seen before, a double variable is allocated on an 8-byte boundary on a 32-bit machine and still requires two memory read cycles (the memory rows are only 4 bytes wide). On a 64-bit machine, with more banks per row, a double variable allocated on an 8-byte boundary requires only one memory read cycle.

So we can formulate that a memory address A is said to be N-byte aligned when A is a multiple of N bytes (where N is a power of 2). A memory access is said to be aligned when the datum being accessed is N bytes long and the datum address is N-byte aligned. When a memory access is not aligned, it is said to be misaligned. Note that by definition single-byte memory accesses are always aligned.

Structure and Padding to Align the Data

In C/C++, structures are used as a data pack (composite data). They don't provide any data encapsulation or data hiding features (except when defined the way we define a class).

As stated before, well-aligned data in memory eases the fetch process. Because of the alignment requirements of the various data types, every member of a structure should be naturally aligned. The members of a structure are allocated in sequentially increasing order.

Now, alignment should be used to balance the structure. The term balance here refers to making every member naturally aligned (remember, a short int uses 2 bytes and can be put on a byte 0–byte 1 pair or a byte 2–byte 3 pair, but not byte 1–byte 2). Therefore we need to do something to put them in the correct (aligned) positions.

The method we use is padding. Padding is only inserted when a structure member is followed by a member with a larger alignment requirement or at the end of the structure.

There is an alternative: reordering the members. However, C/C++ does not allow the compiler to reorder structure members to save space; this job must be done manually.

So how does this work?

Remember, we cannot say that the aggregate size of a struct is just the sum of its components; there is padding. The padding also depends on whether the CPU and OS are 32-bit or 64-bit. The alignment is done on the basis of the member with the largest alignment requirement in the structure.

Let's view this little structure. If we just count the members, we expect 8 bytes as the total size:

struct Mix
{
	char Data1;
	short Data2;
	int Data3;
	char Data4;
};

After compilation, appropriate paddings will be inserted to ensure a proper alignment for each of its member:

struct Mix
{
	char Data1;          // 1 byte
	char Padding1[1];
	short Data2;         // 2 bytes
	int Data3;           // 4 bytes
	char Data4;          // 1 byte
	char Padding2[3];
};

We see two paddings there, Padding1 and Padding2. Remember that a short requires 2-byte alignment; hence it cannot be placed right after Data1, because it would land on a bank 1–bank 2 pair. We add padding between them so that fetching Data2 takes the minimum number of cycles.

After Data4, there is also a 3-byte padding at the end.

Now the compiled size of the structure is 12 bytes. It is important to note that the last member is padded with the number of bytes required so that the total size of the structure is a multiple of the largest alignment of any structure member (alignment(int) in this case, which is 4).

Let's review the output of the earlier snippet. If you are confused, first refer to the previous section (Data Alignment).

For 64-bit OS user:

  1. The sizeof A is: 4
  2. The sizeof B is: 8
  3. The sizeof C is: 24
  4. The sizeof D is: 16

For 32-bit Windows user:

  1. The sizeof A is: 4
  2. The sizeof B is: 8
  3. The sizeof C is: 24
  4. The sizeof D is: 16

For 32-bit Linux user:

  1. The sizeof A is: 4
  2. The sizeof B is: 8
  3. The sizeof C is: 16
  4. The sizeof D is: 16

You can also prove it by yourself.

How do we get that?

Structure A

struct A
{
	char c;
	short s;
};

We have two members here: c, a character, and s, a short integer. A char is 1 byte and a short is 2 bytes. The total should be 3, but it is 4.

If the short int element were allocated immediately after the char element, it would start at an odd address boundary. Therefore padding is inserted, and the structure becomes:

struct A
{
	char c;
	char Padding;
	short s;
};

And the total sizeof(A) = sizeof(char) + 1 (padding) + sizeof(short) = 1 + 1 + 2 = 4 bytes.

Structure B

struct B
{
	short s;
	char c;
	int i;
};

We have three members here: s, a short integer; c, a character; and i, an integer. A char is 1 byte, a short is 2 bytes, and an int is 4 bytes. The total should be 7, but it is 8.

The reason is the same as in the first example. If i came immediately after c, it would start at a misaligned address boundary. Therefore padding is inserted. Now the structure becomes:

struct B
{
	short s;
	char c;
	char Padding;
	int i;
};

And the total sizeof(B) = sizeof(short) + sizeof(char) + 1 (padding) + sizeof(int) = 2 + 1 + 1 + 4 = 8 bytes.

Structure C

struct C
{
	char c;
	double d;
	int i;
};

Now this is the trickiest part.

We have three members here: c, a character; d, a double; and i, an integer. A double is 8 bytes on both architectures, but its alignment requirement differs: 8 bytes on 64-bit (and on 32-bit Windows), but only 4 bytes on 32-bit Linux by default. A char is 1 byte and an int is 4 bytes. The total should be 13, but it is 24 (or 16 on 32-bit Linux).

After compilation for x64 we get:

struct C
{
	char c;             // 1 byte
	char Padding1[7];
	double d;           // 8 bytes
	int i;              // 4 bytes
	char Padding2[4];
};

So you might wonder: why is Padding1 7 bytes instead of 3? Remember that the boundary is determined by the member with the largest alignment requirement; here that is double, with 8 bytes.

So the total size would be: sizeof(C) = sizeof(char) + 7 (padding) + sizeof(double) + sizeof(int) + 4 (padding) = 1 + 7 + 8 + 4 + 4 = 24

Now let's see the x86 Linux (GCC) case:

struct C
{
	char c;             // 1 byte
	char Padding1[3];
	double d;           // 8 bytes, but only 4-byte aligned here
	int i;              // 4 bytes
};

Here the double is still 8 bytes, but on 32-bit Linux it only requires 4-byte alignment. By the very same argument, we insert padding between c and d, only 3 bytes this time. No padding is needed at the end, since the total is already a multiple of the largest alignment (4).

So the total size would be: sizeof(C) = sizeof(char) + 3 (padding) + sizeof(double) + sizeof(int) = 1 + 3 + 8 + 4 = 16

Structure D

struct D
{
	double d;
	int i;
	char c;
};

We still have three members: d, a double; i, an integer; and c, a character. A char is 1 byte, a double is 8 bytes (its alignment requirement depends on the system; see the previous explanation), and an int is 4 bytes.

Both 64-bit and 32-bit systems will have the following layout after compilation:

struct D
{
	double d;           // 8 bytes
	int i;              // 4 bytes
	char c;             // 1 byte
	char Padding1[3];
};

As you might expect, 3 bytes of padding are placed at the end of the struct to ensure the total size is a multiple of the alignment requirement.

So the total size would be: sizeof(D) = sizeof(double) + sizeof(int) + sizeof(char) + 3 (padding) = 8 + 4 + 1 + 3 = 16

Sunday, 08 December 2013



Setting Up Raspberry Pi + Smart Card Reader + PHP

Posted: 07 Dec 2013 06:02 AM PST

A smart card is a pocket-sized card with embedded integrated circuits. It provides identification, authentication, data storage, and application processing on a simple medium. There are two big categories of smart cards: contact and contactless. To identify and authenticate a smart card properly, we need a smart card reader. There are many smart card readers, and the way we operate one depends on which smart cards it can detect.

To detect and use a smart card, there is a specification for smart card integration into computing environments, called PC/SC (Personal Computer / Smart Card). The standard has been implemented on various operating systems: Windows (since NT/9x), Linux, and Unix.

We can, however, create a small device which makes use of a smart card reader, instead of using our PC. That's what we will discuss in this article.

As the title suggests, once we have the smart card connected, we will use PHP as the programming environment.

For this article, we use:

  1. Working Raspberry Pi model B (+SD card)
  2. Raspbian Wheezy release date 2013-09-25
  3. Smart Card Reader, ACR122
  4. USB power hub

Grab the Materials

Make sure the Raspberry Pi works properly with Raspbian Wheezy installed. Also make sure all the hardware is available.

We need the powered USB hub because the ACR122 smart card reader draws too much power for the Raspberry Pi alone, so we feed it electricity from somewhere else.

Installation

Install the drivers and related packages:

apt-get install build-essential libusb-dev libusb++-dev libpcsclite-dev libccid

  • build-essential is the package for building applications
  • libusb-dev and libusb++-dev are packages for user-space USB programming
  • libpcsclite-dev is middleware to access a smart card using PC/SC (development files), following the PC/SC-lite definition
  • libccid is a smart card driver

Install PHP and all the needed development tools:

apt-get install php5-dev php5-cli php-pear

Once PHP is installed, we need a PHP extension for smart cards:

pecl install pcsc-alpha

You can see the documentation here.

Configuration

In some cases, we need to add an entry to php.ini manually to register the pcsc extension. To do this, open php.ini and add the following entry:

extension = pcsc.so

Thursday, 05 December 2013



Ten C++11 Features You Should Know and Use

Posted: 04 Dec 2013 05:52 PM PST

This article is a summary of several articles, each discussing an individual subject.

There are lots of new additions to the C++ language and standard library now that the C++11 standard has passed. However, I believe some of these new features should become routine for all C++ developers.

Features we are talking about:

  1. auto & decltype
  2. nullptr
  3. Range-based for loops
  4. Override and final
  5. Strongly-typed enums
  6. Smart pointers
  7. Lambdas
  8. non-member begin() and end()
  9. static_assert and type traits
  10. Move semantics

auto & decltype

More: Improved Typed Reference in C++11: auto, decltype, and new function declaration syntax

Before the C++11 era, the keyword auto was used for storage duration specification. The new standard clearly defines its purpose to be type inference. Keyword auto is a placeholder for a type, telling the compiler it has to deduce the actual type of a variable that is being declared from its initializer.

auto I = 42;        // I is an int
auto J = 42LL;      // J is a long long
auto P = new foo(); // P is a foo* (pointer to foo)

Using auto means less code spent writing out type names. A very convenient use of auto is inferring the type of an iterator:

std::map<std::string, std::vector<int>> myMap;
for (auto it = begin(myMap); it != end(myMap); ++it)
{
   //...
}

Here we save a lot of work by letting the compiler deduce the type of it.

decltype, on the other hand, is a handy keyword to get the type of an expression. Here we can inspect what is going on in this code:

short a = 10;
long b = 655351334;
decltype(a+b) c = 5;

std::cout << sizeof(a) << " " << sizeof(b) << " " << sizeof(c) << std::endl;

When we execute this code with a proper C++11 compiler, variable c gets the type used for the sum of a and b. We know that when a short is added to a long, the short operand is automatically promoted to a type sufficient to hold the result, and decltype gives us that type.

As said before, decltype is used to get the type of an expression, so it is valid to do this:

int function(int a, int b)
{
   return a * b;
}

int main()
{
   decltype(function(1,2)) c = 10;

   return 0;
}

As long as the expression involved is valid.

Now, in C++11 we have a new function declaration syntax. This syntax leverages the power of both auto and decltype. Note that auto alone cannot be used as the return type of a function, so we must have a trailing return type. In this case auto does not tell the compiler it has to infer the type; it only instructs it to look for the return type at the end of the function.

template <typename T1, typename T2>
auto compose(T1 t1, T2 t2) -> decltype (t1 + t2)
{
   return t1+t2;
}

In the snippet above, the return type of the function is composed from the type of operator+ applied to values of types T1 and T2.

nullptr

More: Nullptr, Strongly typed Enumerations, and Cstdint

Since the inception of C++, zero has been used as the value of null pointers. This is a direct influence from the C language. The scheme has drawbacks due to the implicit conversion between the null pointer constant and integral types.

void function(int a);
void function(void* a);

function(NULL);

Now, which one is being called? Depending on how NULL is defined, the compiler may silently pick the int overload or give an error saying "the call is ambiguous".

C++11 gives a solution for this. The keyword nullptr denotes a value of type std::nullptr_t that represents the null pointer literal. Implicit conversions exist from nullptr to the null pointer value of any pointer type and any pointer-to-member type, and also to bool (as false), but no implicit conversion to integral types exists.

void foo(int* p) {}

void bar(std::shared_ptr<int> p) {}

int* p1 = NULL;
int* p2 = nullptr;

if(p1 == p2)
{
}

foo(nullptr);
bar(nullptr);

bool f = nullptr;
int i = nullptr; // error: a nullptr can only be converted to bool or, using reinterpret_cast, to an integral type

Using 0 is still valid for backward compatibility.

Range-based for loops

More: C++ Ranged Based Loop

Ever wondered how to write a foreach statement in C++? C++11 augments the for statement to support it. Using this foreach paradigm we can iterate over collections. In the new form, it is possible to iterate over C-style arrays, initializer lists, and anything for which the non-member begin() and end() functions are overloaded.

std::map<std::string, std::vector<int>> map;
std::vector<int> v;
v.push_back(1);
v.push_back(2);
v.push_back(3);
map["one"] = v;

for(const auto& kvp : map)
{
   std::cout << kvp.first << std::endl;

   for(auto v : kvp.second)
   {
      std::cout << v << std::endl;
   }
}

int arr[] = {1,2,3,4,5};
for(int& e : arr)
{
   e = e*e;
}

The syntax is not that different from the "normal" for statement; okay, it is a little different.

The for syntax in this paradigm is:

for (type iterateVariable : collection)
{
   // ...
}

Override and final

In C++, there wasn't a mandatory mechanism to mark virtual methods as overridden in a derived class. The virtual keyword is optional, and that makes reading code a bit harder, because we may have to look all the way to the top of the hierarchy to check whether the method is virtual.

class Base
{
public:
   virtual void f(float);
};

class Derived : public Base
{
public:
   virtual void f(int);
};

Derived::f is supposed to override Base::f. However, the signatures differ: one takes a float, the other an int. Therefore Derived::f is just another method with the same name (an overload) and not an override. We may call f() through a pointer to Base and expect Derived::f to run, but it is Base::f that runs.

C++11 provides syntax to solve this problem.

class Base
{
public:
   virtual void f(float);
};

class Derived : public Base
{
public:
   virtual void f(int) override;
};

The keyword override forces the compiler to check the base class(es) to see if there is a virtual function with this exact signature. When we compile this code, it triggers a compile error because the function that is supposed to override the base class method has a different signature.

On the other hand if you want to make a method impossible to override any more (down the hierarchy) mark it as final. That can be in the base class, or any derived class. If it’s in a derived class, we can use both the override and final specifiers.

class Base
{
public:
   virtual void f(float);
};

class Derived : public Base
{
public:
   virtual void f(int) override final;
};

class F : public Derived
{
public:
   virtual void f(int) override;
};

The compiler reports an error: a function declared 'final' cannot be overridden by 'F::f'.

Strongly-typed enums

More: Nullptr, Strongly typed Enumerations, and Cstdint

Traditional enums in C++ have some drawbacks: they export their enumerators into the surrounding scope (which can lead to name collisions if two different enums in the same scope define enumerators with the same name), they are implicitly converted to integral types, and they cannot have a user-specified underlying type.

C++11 introduces a new category of enums, called strongly-typed enums. They are specified with the enum class keywords, do not export their enumerators into the surrounding scope, are no longer implicitly converted to integral types, and can have a user-specified underlying type.

enum class Options { None, One, All };
Options o = Options::All;

Smart pointers

All these smart pointers are declared in the header <memory>.

In this article we will only mention the smart pointers that manage ownership and automatically release the owned memory:

  • unique_ptr: should be used when ownership of a memory resource does not have to be shared (it doesn’t have a copy constructor), but it can be transferred to another unique_ptr (move constructor exists).
  • shared_ptr: should be used when ownership of a memory resource should be shared (hence the name).
  • weak_ptr: holds a reference to an object managed by a shared_ptr, but does not contribute to the reference count; it is used to break dependency cycles (think of a tree where the parent holds an owning reference (shared_ptr) to its children, but the children also must hold a reference to the parent; if this second reference was also an owning one, a cycle would be created and no object would ever be released).

The library type auto_ptr is now obsolete and should no longer be used.

The first example below shows unique_ptr usage. If we want to transfer ownership of an object to another unique_ptr use std::move. After the ownership transfer, the smart pointer that ceded the ownership becomes null and get() returns nullptr.

void foo(int* p)
{
   std::cout << *p << std::endl;
}

std::unique_ptr<int> p1(new int(42));
std::unique_ptr<int> p2 = std::move(p1); // transfer ownership

if(p1)
   foo(p1.get());

(*p2)++;

if(p2)
   foo(p2.get());

The second example shows shared_ptr. Usage is similar, though the semantics are different since ownership is shared.

void foo(int* p)
{
   std::cout << *p << std::endl;
}

void bar(std::shared_ptr<int> p)
{
   ++(*p);
}

std::shared_ptr<int> p1(new int(42));
std::shared_ptr<int> p2 = p1;

foo(p2.get());
bar(p1);
foo(p2.get());

We can also write an equivalent expression for the first declaration as:

auto p3 = std::make_shared<int>(42);

make_shared<T> is a non-member function and has the advantage of allocating memory for the shared object and the smart pointer's control block with a single allocation, as opposed to the explicit construction of a shared_ptr via the constructor, which requires at least two allocations. Besides the possible overhead, there can be situations where memory leaks occur because of that. In the next example a memory leak could occur if seed() throws an exception.

void foo(std::shared_ptr<int> p, int init)
{
   *p = init;
}

foo(std::shared_ptr<int>(new int(42)), seed());

No such problem exists if using make_shared.

The last sample shows usage of weak_ptr. Notice that you must always obtain a shared_ptr to the referred object by calling lock(), in order to access the object.

auto p = std::make_shared<int>(42);
std::weak_ptr<int> wp = p;

{
   auto sp = wp.lock();
   std::cout << *sp << std::endl;
}

p.reset();

if(wp.expired())
   std::cout << "expired" << std::endl;

Lambdas

More: Guide to Lambda Closure in C++11

A lambda is an anonymous function. It is a powerful feature borrowed from functional programming that in turn enables other features and empowers libraries. We can use a lambda wherever a function object, a functor, or a std::function is expected.

You can read the expression here.

std::vector<int> v;
v.push_back(1);
v.push_back(2);
v.push_back(3);

std::for_each(std::begin(v), std::end(v), [](int n) {std::cout << n << std::endl;});

auto is_odd = [](int n) {return n%2==1;};
auto pos = std::find_if(std::begin(v), std::end(v), is_odd);
if(pos != std::end(v))
   std::cout << *pos << std::endl;

A bit trickier are recursive lambdas. Imagine a lambda that represents a Fibonacci function. If you attempt to write it using auto, you get a compilation error:

auto fib = [&fib](int n) {return n < 2 ? 1 : fib(n-1) + fib(n-2);};

The problem is that auto means the type of the object is inferred from its initializer, yet the initializer contains a reference to the lambda itself and therefore needs to know its type already. This is a cyclic problem. The key is to break this dependency cycle and explicitly specify the function's type using std::function.

std::function<int(int)> lfib = [&lfib](int n) {return n < 2 ? 1 : lfib(n-1) + lfib(n-2);};

non-member begin() and end()

Two new additions to the standard library, begin() and end(), give new flexibility. They promote uniformity and consistency, and enable more generic programming that works with all STL containers. These two functions are overloadable and can be extended to work with any type, including C-style arrays.

Let's take an example. We want to print the first odd element in a C-style array.

int arr[] = {1,2,3};
std::for_each(&arr[0], &arr[0]+sizeof(arr)/sizeof(arr[0]), [](int n) {std::cout << n << std::endl;});

auto is_odd = [](int n) {return n%2==1;};
auto begin = &arr[0];
auto end = &arr[0]+sizeof(arr)/sizeof(arr[0]);
auto pos = std::find_if(begin, end, is_odd);
if(pos != end)
   std::cout << *pos << std::endl;

With non-member begin() and end() it can be written as:

int arr[] = {1,2,3};
std::for_each(std::begin(arr), std::end(arr), [](int n) {std::cout << n << std::endl;});

auto is_odd = [](int n) {return n%2==1;};
auto pos = std::find_if(std::begin(arr), std::end(arr), is_odd);
if(pos != std::end(arr))
   std::cout << *pos << std::endl;

This is basically identical code to the std::vector version. That means we can write a single generic method for all types supported by begin() and end().

template <typename Iterator>
void bar(Iterator begin, Iterator end)
{
   std::for_each(begin, end, [](int n) {std::cout << n << std::endl;});

   auto is_odd = [](int n) {return n%2==1;};
   auto pos = std::find_if(begin, end, is_odd);
   if(pos != end)
      std::cout << *pos << std::endl;
}

template <typename C>
void foo(C c)
{
   bar(std::begin(c), std::end(c));
}

template <typename T, size_t N>
void foo(T(&arr)[N])
{
   bar(std::begin(arr), std::end(arr));
}

int arr[] = {1,2,3};
foo(arr);

std::vector<int> v;
v.push_back(1);
v.push_back(2);
v.push_back(3);
foo(v);

static_assert and type traits

static_assert performs an assertion check at compile-time. If the assertion is true, nothing happens. If the assertion is false, the compiler displays the specified error message.

template <typename T, size_t Size>
class Vector
{
   static_assert(Size > 3, "Size is too small");
   T _points[Size];
};

int main()
{
   Vector<int, 16> a1;
   Vector<double, 2> a2; // error: Size is too small
   return 0;
}

static_assert becomes more useful when used together with type traits. These are a series of classes that provide information about types at compile time. They are available in the <type_traits> header. There are several categories of classes in this header: helper classes for creating compile-time constants, type traits classes for getting type information at compile time, and type transformation classes for producing new types by applying transformations to existing types.

In the following example function add is supposed to work only with integral types.

template <typename T1, typename T2>
auto add(T1 t1, T2 t2) -> decltype(t1 + t2)
{
   return t1 + t2;
}

However, there are no compiler errors if one writes

std::cout << add(1, 3.14) << std::endl;
std::cout << add("one", 2) << std::endl;

The program actually prints 4.14 and “e”. But if we add some compile-time asserts, both these lines would generate compiler errors.

template <typename T1, typename T2>
auto add(T1 t1, T2 t2) -> decltype(t1 + t2)
{
   static_assert(std::is_integral<T1>::value, "Type T1 must be integral");
   static_assert(std::is_integral<T2>::value, "Type T2 must be integral");

   return t1 + t2;
}

Move semantics

More: Move Semantics and rvalue references in C++11

C++11 has introduced the concept of rvalue references (specified with &&) to differentiate a reference to an lvalue or an rvalue. An lvalue is an object that has a name, while an rvalue is an object that does not have a name (temporary object). The move semantics allow modifying rvalues (previously considered immutable and indistinguishable from const& types).

A C++ class/struct used to have some implicitly generated member functions: a default constructor (only if no other constructor is explicitly defined), a copy constructor, a destructor, and a copy assignment operator. The copy constructor and the copy assignment operator perform a member-wise (shallow) copy, i.e. they copy the members one by one. That means if you have a class that contains pointers to some objects, they just copy the values of the pointers and not the objects they point to. This might be OK in some cases, but often you actually want a deep copy, meaning that you want to copy the objects the pointers refer to, not the values of the pointers. In that case you have to explicitly write a copy constructor and copy assignment operator that perform a deep copy.

What if the object you initialize or copy from is an rvalue (a temporary)? You still have to copy its value, but soon afterwards the rvalue goes away. That means an overhead of operations, including allocations and memory copies, that after all should not be necessary.

Enter the move constructor and the move assignment operator. These two special functions take a T&& argument, which is an rvalue. Knowing that fact, they can modify the object, for example by "stealing" the objects their pointers refer to. For instance, a container implementation (such as a vector or a queue) may have a pointer to an array of elements. When an object is instantiated from a temporary, instead of allocating another array, copying the values from the temporary, and then deleting the temporary's memory when it is destroyed, we just copy the value of the pointer that refers to the allocated array, thus saving an allocation, the copying of a sequence of elements, and a later deallocation.

The following example shows a dummy buffer implementation. The buffer is identified by a name (just for the sake of making a point revealed below), has a pointer (wrapped in a std::unique_ptr) to an array of elements of type T, and a variable that holds the size of the array.

template <typename T>
class Buffer
{
   std::string          _name;
   size_t               _size;
   std::unique_ptr<T[]> _buffer;

public:
   // default constructor
   Buffer():
      _size(16),
      _buffer(new T[16])
   {}

   // constructor
   Buffer(const std::string& name, size_t size):
      _name(name),
      _size(size),
      _buffer(new T[size])
   {}

   // copy constructor
   Buffer(const Buffer& copy):
      _name(copy._name),
      _size(copy._size),
      _buffer(new T[copy._size])
   {
      T* source = copy._buffer.get();
      T* dest = _buffer.get();
      std::copy(source, source + copy._size, dest);
   }

   // copy assignment operator
   Buffer& operator=(const Buffer& copy)
   {
      if(this != &copy)
      {
         _name = copy._name;

         if(_size != copy._size)
         {
            _size = copy._size;
            _buffer.reset(_size > 0 ? new T[_size] : nullptr);
         }

         T* source = copy._buffer.get();
         T* dest = _buffer.get();
         std::copy(source, source + copy._size, dest);
      }

      return *this;
   }

   // move constructor
   Buffer(Buffer&& temp):
      _name(std::move(temp._name)),
      _size(temp._size),
      _buffer(std::move(temp._buffer))
   {
      temp._size = 0;
   }

   // move assignment operator
   Buffer& operator=(Buffer&& temp)
   {
      assert(this != &temp); // self-assignment from a temporary should not happen

      _size = temp._size;
      _buffer = std::move(temp._buffer);
      _name = std::move(temp._name);

      temp._size = 0;

      return *this;
   }
};

template <typename T>
Buffer<T> getBuffer(const std::string& name)
{
   Buffer<T> b(name, 128);
   return b;
}

int main()
{
   Buffer<int> b1;
   Buffer<int> b2("buf2", 64);
   Buffer<int> b3 = b2;
   Buffer<int> b4 = getBuffer<int>("buf4");
   b1 = getBuffer<int>("buf5");
   return 0;
}

The copy constructor and copy assignment operator should look familiar. What's new in C++11 are the move constructor and move assignment operator, implemented in the spirit of the aforementioned move semantics. If you run this code you'll see that when b4 is constructed, the move constructor is called. Also, when b1 is assigned a value, the move assignment operator is called. The reason is that the value returned by getBuffer() is a temporary, i.e. an rvalue.

You probably noticed the use of std::move in the move constructor, when initializing the name variable and the pointer to the buffer. The name is actually a string, and std::string also implements move semantics; the same goes for std::unique_ptr. However, if we had just said _name(temp._name), the copy constructor would have been called. For _buffer that would not even have been possible, because std::unique_ptr does not have a copy constructor. But why wasn't the move constructor for std::string called in this case? Because even though the object the Buffer move constructor is called with is an rvalue, inside the constructor it is actually an lvalue. Why? Because it has a name, "temp", and a named object is an lvalue. To make it an rvalue again (and be able to invoke the appropriate move constructor) one must use std::move. This function just turns an lvalue reference into an rvalue reference.

UPDATE: Though the purpose of this example was to show how the move constructor and move assignment operator should be implemented, the exact details of an implementation may vary. An alternative implementation was provided by Member 7805758 in the comments. To make it easier to see, I show it here:

template <typename T>
class Buffer
{
   std::string          _name;
   size_t               _size;
   std::unique_ptr<T[]> _buffer;

public:
   // constructor
   Buffer(const std::string& name = "", size_t size = 16):
      _name(name),
      _size(size),
      _buffer(size ? new T[size] : nullptr)
   {}

   // copy constructor
   Buffer(const Buffer& copy):
      _name(copy._name),
      _size(copy._size),
      _buffer(copy._size ? new T[copy._size] : nullptr)
   {
      T* source = copy._buffer.get();
      T* dest = _buffer.get();
      std::copy(source, source + copy._size, dest);
   }

   // copy assignment operator (copy-and-swap)
   Buffer& operator=(Buffer copy)
   {
      swap(*this, copy);
      return *this;
   }

   // move constructor
   Buffer(Buffer&& temp): Buffer()
   {
      swap(*this, temp);
   }

   friend void swap(Buffer& first, Buffer& second) noexcept
   {
      using std::swap;
      swap(first._name  , second._name);
      swap(first._size  , second._size);
      swap(first._buffer, second._buffer);
   }
};

Monday, 02 December 2013



Managing Windows Service via Command Line Interface

Posted: 01 Dec 2013 09:02 PM PST

A service, also known as a background process, is a program which runs in the background, similar in concept to a Unix daemon. A Windows service must conform to the interface rules and protocols of the Service Control Manager, the component responsible for managing Windows services.

As the title suggests, we will discuss how to manage Windows services using a command line interface (CLI), not a graphical user interface (GUI). Specifically, we will use two CLIs: the Command Prompt and Windows PowerShell. From the Command Prompt, we can invoke two different programs to fulfill our needs: sc.exe and net.exe.

We will use a fictional service serv.exe registered as serv48 as our example.

All examples are run with Administrator privileges; no plain user privileges are involved.

Get Service Status

Get status from a registered service, such as state (running, paused, suspended, stopped), name, etc.

Using sc (command prompt)

sc query serv48

Sample response:

SERVICE_NAME: serv48
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 1  STOPPED
        WIN32_EXIT_CODE    : 0  (0x0)
        SERVICE_EXIT_CODE  : 0  (0x0)
        CHECKPOINT         : 0x0
        WAIT_HINT          : 0x0

Using PowerShell

Get-Service serv48

Sample response:

Status   Name               DisplayName
------   ----               -----------
Stopped  serv48             serv48

Register New Service

Creates a service entry in the registry and the service database. In other words, it registers the service with the Windows service manager.

Using sc (command prompt)

sc create serv48 binPath= C:\ImportantApp\serv48d.exe DisplayName= serv48

Note that sc expects a space after each equals sign.

Using PowerShell

New-Service -name serv48 -binaryPathName C:\ImportantApp\serv48d.exe -displayName serv48

Restart Service

This section will restart a service. Actually, a restart means stopping and starting the same service.

Using sc (command prompt)

sc stop serv48
sc start serv48

Using net (command prompt)

net stop serv48
net start serv48

Using PowerShell

Stop-Service serv48
Start-Service serv48

There is also a single command in PowerShell to restart a service:

Restart-Service serv48

Resume Service

Resuming a service after the service is suspended.

Using sc (command prompt)

sc continue serv48

Using net (command prompt)

net continue serv48

Using PowerShell

Resume-Service serv48

Set Service

Change or set the state and configuration of a service with some options.

Using sc (command prompt)

sc config serv48 [option=value]

with the available options (and possible values) being:

  • type= own,share,interact,kernel,filesys,rec,adapt
  • start= boot,system,auto,demand,disabled,delayed-auto
  • error= normal,severe,critical,ignore
  • binPath= binary pathname to the .exe file
  • tag= yes,no
  • depend= services this service depends on, separated by / (forward slash)
  • DisplayName= name used to display the service

Using PowerShell

Set-Service serv48 [-option value]

with the available options (and possible values) being:

  • ComputerName – specifies one or more computers, the default is local computer.
  • Description – new description for service which will appear in Computer Management.
  • DisplayName – New display name for the service
  • Name – new service name (in our case: serv48)
  • StartupType – Automatic, Manual, Disabled
  • Status – Running, Stopped, Paused

Start Service

Start a service, change the state from stopped to running.

Using sc (command prompt)

sc start serv48

Using net (command prompt)

net start serv48

Using PowerShell

Start-Service serv48

Stop Service

Stop a service; change its state from running to stopped.

Using sc (command prompt)

sc stop serv48

Using net (command prompt)

net stop serv48

Using PowerShell

Stop-Service serv48

Suspend Service

Also known as pausing. This pauses a service until it is resumed.

Using sc (command prompt)

sc pause serv48

Using net (command prompt)

net pause serv48

Using PowerShell

Suspend-Service serv48

Build Apache HTTPD for Windows from Source

Posted: 01 Dec 2013 05:47 PM PST

Apache HTTP Server, commonly referred to as Apache, is a popular web server application. It notably played a key role in the initial growth of the World Wide Web. Originally based on the NCSA HTTPd server, development of Apache began in early 1995 after work on the NCSA code stalled. Apache quickly overtook NCSA HTTPd as the dominant HTTP server, and has remained the most popular HTTP server in use since April 1996.

In this article we will build Apache HTTP Server for Windows. The materials used here are:

    1. Windows 8, 64-bit
    2. Apache HTTPD 2.4.7 (latest per November 28th, 2013)
    3. Windows 8 Platform SDK, February 2003 or later
    4. Microsoft Visual Studio 2010
    5. Perl 5.16.3
    6. awk
    7. nasm 2.11.rc1

Also needed as materials for building Apache:

    1. apr, apr-util, apr-iconv
    2. PCRE (Perl Compatible Regular Expressions)
    3. zlib library (optional)
    4. OpenSSL libraries (optional)

You should also have at least 200 MB of free disk space for compiling. After installation, Apache needs approximately 80 MB of disk space.

In this article, we will use Visual Studio Command Prompt instead of common Command Prompt.

Instead of compiling to x86 code, we will target the x64 architecture.

Opening Visual Studio Command Prompt

As stated before, we use the Visual Studio Command Prompt instead of the common Command Prompt. I use Visual Studio 2010 (Visual Studio 10) on Windows 8 64-bit. To open a Visual Studio Command Prompt, there are two methods: search for and run the Visual Studio Command Prompt from the Start Screen, or run a Command Prompt and then execute the script that sets the environment variables.

If you use the first method, make sure you pick the Visual Studio Command Prompt for amd64 (64-bit) instead of the 32-bit one.

If you want to use the second method, run:

"C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" amd64

Here we execute vcvarsall.bat with the argument amd64 to obtain the environment variables needed by the 64-bit toolchain.

Grab the Materials

Source code of Apache HTTPD can be downloaded freely from here. Choose the closest mirror for you.

There are two Perl implementations for Windows: Strawberry Perl and ActiveState Perl. I leave you to choose which one. Both can be downloaded from here.

Windows Software Developer Kit (SDK) for Windows 8 can be downloaded freely from here. Download the sdksetup.exe then run it. Choose Windows SDK from options available.

AWK is a standard feature of most Unix-like operating systems. For Windows, there is an alternative: Brian Kernighan's http://www.cs.princeton.edu/~bwk/btl.mirror/ site has a compiled native Win32 binary, http://www.cs.princeton.edu/~bwk/btl.mirror/awk95.exe, which you must save under the name awk.exe (rather than awk95.exe). It should be installed on the environment path or known to Visual Studio.

NASM, the Netwide Assembler, is an assembler targeting the x86 processor family. We can download nasm from its official site. The latest version (2.11rc1 as of November 28th, 2013) can be downloaded here. The one I use here is nasm-2.11rc1-installer.exe. Make sure nasm can be executed (its path is on the environment path).

APR (Apache Portable Runtime) is used for building Apache. It is a separate project from Apache HTTPD, so we need to download it manually. Download it from http://apr.apache.org/download.cgi, selecting the appropriate mirror. The three packages we should download are: apr 1.5, apr-util 1.5.3, and apr-iconv 1.2.1. Download the win32 source code versions.

PCRE (Perl Compatible Regular Expressions) is a regular expression library using the same syntax and semantics as Perl 5. It can be downloaded from www.pcre.org. The latest version is 8.33, which can be downloaded from ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/.

The zlib library is optional and is used for mod_deflate. The current version is 1.2.8 and can be downloaded from http://www.zlib.net/. In this case, the filename is zlib-1.2.8.tar.xz.

The OpenSSL libraries are optional, used for mod_ssl and ab.exe with SSL support. You can obtain OpenSSL for Windows from http://www.openssl.org/source/. Assuming we have downloaded it, the file name in this case is openssl-1.0.1e.tar.gz.

Pre-Compilation Stage

In this article, I assume awk is installed as C:\Windows\awk.exe, which should be on the environment path.

Extract the Apache source code. Once it’s extracted we have “httpd-2.4.7” directory (for example: D:\httpd-2.4.7).

Extract the apr package to Apache’s srclib and rename them to apr, apr-iconv, apr-util respectively. Therefore we have three subdirectories apr, apr-iconv, and apr-util inside of “httpd-2.4.7/srclib”.

Extract the PCRE package to Apache’s srclib and rename it to pcre. Therefore we have “httpd-2.4.7/srclib/pcre”.

If you want to include zlib support, extract zlib source code inside Apache’s srclib sub directory and rename the directory to zlib. Therefore, we have “httpd-2.4.7/srclib/zlib”.

If you want to include openssl support, extract openssl source code inside Apache’s srclib sub directory and rename the directory to openssl. Therefore, we have “httpd-2.4.7/srclib/openssl”.

Compilation

The makefile script for Windows is Makefile.win. In this article, we will build all the optional packages first, manually.

To compile and build zlib, enter zlib's directory under Apache's srclib and invoke its makefile, assuming the Apache source code is in D:\httpd-2.4.7:

cd D:\httpd-2.4.7\srclib\zlib
nmake -f win32\Makefile.msc AS=ml64 LOC="-DASMV -DASMINF -I." OBJA="inffasx64.obj gvmat64.obj inffas8664.obj"
nmake -f win32\Makefile.msc test
copy zlib.lib zlib1.lib

The last command copies zlib.lib to zlib1.lib. We do this because when we compile OpenSSL we need the library under this name.

To compile and build openssl, enter openssl's directory under Apache's srclib, assuming the Apache source code is in D:\httpd-2.4.7. To prepare OpenSSL to be linked to Apache mod_ssl or the abs.exe project, we can use the following commands:

cd D:\httpd-2.4.7\srclib\openssl
perl Configure disable-idea enable-camellia enable-mdc2 enable-zlib VC-WIN64A -ID:\httpd-2.4.7\srclib\zlib -LD:\httpd-2.4.7\srclib\zlib
ms\do_win64a.bat
nmake -f ms\ntdll.mak

The above command configures OpenSSL for the Visual C++ 64-bit Windows target (VC-WIN64A). We disable the IDEA algorithm since it is disabled by default in the pre-built distributions and really shouldn't be missed. If you do require it, go ahead and remove disable-idea.

Because we use OpenSSL 1.0.1e, we should invoke the following command:

echo. > inc32\openssl\store.h

Now go to the top level directory of our Apache HTTPD source code. We will invoke the makefile to build Apache HTTPD.

nmake /f Makefile.win installr

Finally, compile and install HTTPd. You can specify INSTDIR= to choose where HTTPd is installed. You can also specify database bindings by adding DBD_LIST="mysql sqlite", etc. As noted, don't forget to add the libraries and includes from those databases to the INCLUDE and LIB environment variables.