Xathrya Sabertooth
- Linux Kernel, Components, and Integration
- Install Squid as Local Proxy Cache for Slackware64
- List of HTTP Status Code
- Revealing HTTP Request and Response
- QEMU on Windows
Linux Kernel, Components, and Integration

Posted: 13 Mar 2013 06:01 AM PDT

Kernel and Linux Kernel

In computer science, the kernel is the core of an operating system. A machine (for example, a personal computer) can use hardware produced by different vendors, all assembled into a single machine. Hardware such as the processor, RAM, and hard disk are the components used to build a computer, but once the computer is built we need an operating system to make all of that hardware usable. The kernel does this job. The operating system receives requests from the user and processes them on the user's behalf: requests are received by a command shell or some other kind of user interface and are processed by the kernel. So the kernel acts like the engine of the operating system, enabling a user to use the computer system. The shell is the outer part of the operating system that provides an interface for the user to communicate with the kernel.

Linux is one such kernel. It is a UNIX-like kernel created by Linus Torvalds in 1991. Linux is open source, which means everyone can contribute to it, develop it, and build their own kernel from Linus' work. Nowadays almost every smart system runs on a kernel, and many of them use Linux (or a subset of it).

Components

If we look more closely, a kernel can be divided into several components. The major components forming a kernel are the process scheduler, the memory manager, the virtual file system (VFS), the network stack, device drivers, and the inter-process communication (IPC) subsystem.
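On a running Linux system you can get a feel for these components by looking at how the module tree is organized. This is only an illustrative check, assuming a standard layout under /lib/modules as found on Slackware and most distributions; the exact directory names vary between kernel versions.

# Show the running kernel version
uname -r

# The module tree mirrors the kernel's major subsystems:
# typically drivers/, fs/, net/, crypto/, sound/, and so on
ls /lib/modules/$(uname -r)/kernel/

# List a few of the modules currently loaded into the kernel
lsmod | head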
Integrations

We have seen that a kernel consists of different components. The integration design tells how these different components are put together to create the kernel's binary image. There are mainly two integration designs used for operating system kernels: monolithic and micro. There are more than two, but we will limit our discussion to the two most widely used designs.

In a monolithic design all the kernel components are built together into a single static binary image. At boot time the entire kernel gets loaded and then runs as a single process in a single address space. All the kernel components and services exist in that static kernel image, and all of them are running and available all the time. Since everything inside the kernel resides in a single address space, no IPC-like mechanism is needed for communication between kernel services. For all these reasons monolithic kernels are high performance. Most UNIX kernels are monolithic. The disadvantage of a static kernel is the lack of modularity and hot-swap ability: once the static kernel image is loaded, we cannot add or remove any component or service; our only option is to change the kernel source and rebuild it. The kernel also uses more memory, so resource consumption is higher in monolithic kernels.

The second kind is the microkernel. In a microkernel a single static kernel image is not built; instead the kernel is broken down into different small services. At boot time only the core kernel services are loaded, and they run in privileged mode. Whenever some other service is required, it has to be loaded before it can run. Unlike a monolithic kernel, not all services are up and running all the time; they run as and when requested. Also, unlike monolithic kernels, services in a microkernel run in separate address spaces, so communication between two different services requires an IPC mechanism. For all these reasons microkernels are not high-performance kernels, but they require fewer resources to run.

The Linux kernel takes the best of both designs. Fundamentally it is a monolithic kernel: the entire Linux kernel and all its services run as a single process, in a single address space, achieving very high performance. But it also has the capability to load and unload services at run time in the form of kernel modules, as sketched below.
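A minimal illustration of this run-time modularity, run as root and assuming the dummy network module shipped with most kernels is available (any optional module from your distribution behaves the same way):

# See which modules are currently loaded
lsmod | head

# Load a module (and its dependencies) into the running kernel
modprobe dummy

# Verify it is now part of the kernel
lsmod | grep dummy

# Unload it again, without rebooting
modprobe -r dummy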
Install Squid as Local Proxy Cache for Slackware64

Posted: 13 Mar 2013 04:59 AM PDT

Squid is a caching proxy for the web supporting HTTP, HTTPS, FTP, and more. It reduces bandwidth and improves response times by caching and reusing frequently requested web pages. Squid has extensive access controls and makes a great server accelerator. It runs on most available operating systems and is licensed under the GNU GPL.

Squid is used by hundreds of Internet providers world-wide to give their users the best possible web access. Squid optimises the data flow between client and server to improve performance and caches frequently used content to save bandwidth. Squid can also route content requests to servers in a wide variety of ways, building cache server hierarchies that optimise network throughput. Thousands of websites around the Internet use Squid to drastically speed up their content delivery: Squid can reduce server load and improve delivery speed to clients, and it can deliver content from around the world, copying only the content being used rather than inefficiently copying everything. Finally, Squid's advanced content routing configuration allows you to build content clusters to route and load balance requests across a variety of web servers.

In this article we will discuss how to install Squid, give it a simple configuration, and then use it as a local cache server. Our goal is to improve response times and minimize bandwidth usage. The setup used here is a Slackware64 machine and Squid 3.3.3, the latest stable version at the time of writing.
Obtain the Materials

The only material we need is Squid's source code, which can be downloaded from the official site; a direct download link is available there. At the time of writing, the latest stable version is 3.3.3. As stated on the site, we need Perl installed on our system. On Slackware64 it is already installed by default, unless you have removed it, so make sure Perl is available.

Installation

Create a working directory. You can use any directory you want; in this case I will use my home directory, /home/xathrya/squid. The archive we got is squid-3.3.3.tar.xz. Now extract it and configure the makefile. In this article I use /usr/local/squid as the root of the installation, which is the default path for installing Squid. If you want to install Squid into another directory, pass --prefix=/path/to/new/squid to ./configure, where /path/to/new/squid is a path such as /usr. After compilation finishes, install Squid using root privileges. The complete sequence of commands is given below:

tar -Jxf squid-3.3.3.tar.xz
cd squid-3.3.3
./configure
make
make install clean

The compilation might take some time, depending on your machine.

Setup

Squid is officially installed at this stage, but we need to do some setup to make it work properly. Before we proceed we need to decide what resources are allocated to Squid and what configuration we must set to meet our needs. In my case, Squid can be activated on demand; the caching directory is a dedicated partition mounted on /cache (you can use another directory, and a dedicated partition is not a must) with 48.0 GiB allocated; and Squid can use peers that are configured dynamically, without my having to change the main configuration file directly. Your needs might differ from mine, so adjust accordingly.

Create a Basic Configuration File

In this example the configuration file is located at /usr/local/squid/etc/squid.conf, but this may vary if you installed Squid into a different directory than /usr/local/squid. In general, the Squid configuration file is located at <root directory>/etc/squid.conf. Now adjust your configuration file.
Below is the configuration I use:

###############################################################
##
## BlueWyvern Proxy Service
## XGN-Z30A : SquidProxy
##
###############################################################

##
# Proxy Manager Information
##
cache_mgr xathrya@celestial-being.net
visible_hostname proxy.bluewyvern.celestial-being.net

###############################################################
##
# Basic Configuration
##
cache_effective_user squid
cache_effective_group squid

# DNS server (not required)
# Use this if you want to specify a list of DNS servers to use instead
# of those given in /etc/resolv.conf
#dns_nameservers 127.0.0.1 8.8.8.8

# Set Squid to listen on port 1351 (Squid normally listens on port 3128)
http_port 1351

# Timeouts
dead_peer_timeout 30 seconds
peer_connect_timeout 30 seconds

# Load the peers
include /usr/local/squid/peers.conf

###############################################################
##
# Access Control List
#
# My machine allows clients from itself, so IPs other than self will be rejected
# Also define some safe ports
##
acl localnet src 10.0.0.0/8      # RFC1918 possible internal network
acl localnet src 172.16.0.0/12   # RFC1918 possible internal network
acl localnet src 192.168.0.0/16  # RFC1918 possible internal network
acl localnet src fc00::/7        # RFC 4193 local private network range
acl localnet src fe80::/10       # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80           # http
acl Safe_ports port 21           # ftp
acl Safe_ports port 443          # https
acl Safe_ports port 70           # gopher
acl Safe_ports port 210          # wais
acl Safe_ports port 1025-65535   # unregistered ports
acl Safe_ports port 280          # http-mgmt
acl Safe_ports port 488          # gss-http
acl Safe_ports port 591          # filemaker
acl Safe_ports port 777          # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

###############################################################
##
# Directory & Logs
#
# We use /cache for the cache directory
# I have 48.0 GiB = 51 GB available
# 64 directories, 256 subdirectories for each directory
##

# Cache directory 48 GiB = 51500 MB
cache_dir ufs /cache 51500 64 256

# Coredumps are placed in /cache too
coredump_dir /cache

# Squid logs
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log

# Defines an access log format
logformat custom %{%Y-%m-%d %H:%M:%S}tl %03tu %>a %tr %ul %ui %Hs %mt %rm %ru %rv %st %Sh %Ss

###############################################################
##
# Other
##
refresh_pattern ^ftp:             1440   20%   10080
refresh_pattern ^gopher:          1440   0%    1440
refresh_pattern -i (/cgi-bin/|\?) 0      0%    0
refresh_pattern .                 0      20%   4320
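The configuration above pulls in /usr/local/squid/peers.conf via the include directive, so peer definitions can be changed without touching squid.conf itself. A minimal sketch of what that file could contain, assuming a hypothetical upstream proxy named parent.example.net listening on port 3128 (replace it with your own peers, or leave the file empty if you have none):

# /usr/local/squid/peers.conf
# Forward cache misses to an upstream (parent) cache; no ICP queries
cache_peer parent.example.net parent 3128 0 no-query default

# A sibling cache on the local network, queried over ICP port 3130
#cache_peer 192.168.1.10 sibling 3128 3130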
Make a user squid and a group squid if you don't have them yet. Then create the cache directory if you don't have one and change its ownership to user squid and group squid (or whatever user and group you assign to Squid; see squid.conf). I also use the /var/log/squid directory for everything Squid needs to log. After all the preparations are ready, we do the initial setup. Below is the snippet I use:

ln -s /usr/local/squid/sbin/squid /usr/bin/squid

/bin/egrep -i "^squid" /etc/group
if [ $? -ne 0 ]; then
    groupadd squid
fi

/bin/egrep -i "^squid" /etc/passwd
if [ $? -ne 0 ]; then
    useradd -g squid -s /bin/false -M squid
fi

if [ ! -d /cache ]; then
    mkdir /cache
fi
chown squid.squid /cache

if [ ! -d /var/log/squid ]; then
    mkdir /var/log/squid
fi
chown squid.squid /var/log/squid

# Initialize the cache directory structure
/usr/local/squid/sbin/squid -z

Now create the file /usr/local/squid/etc/peers.conf and write all the peers you want to use (see the example after the configuration above).

Creating Scripts

All the pieces are now ready. Next we create a small control script; using this script I can start and stop Squid, and also purge content from the cache. The script I use is:

#!/bin/bash

ROOTFOLDER=/usr/local/squid
SQUID=${ROOTFOLDER}/sbin/squid
SQUIDCLIENT=${ROOTFOLDER}/bin/squidclient

case $1 in
"start")
    # Launch the squid daemon and show the addresses it can be reached on
    $SQUID
    ifconfig | grep inet
    ;;
"purge")
    # Ask squid (listening on port 1351, as set in squid.conf) to purge an object
    $SQUIDCLIENT -h 127.0.0.1 -p 1351 -m PURGE $2
    ;;
"stop")
    # Shut squid down cleanly
    $SQUID -k shutdown
    ;;
esac
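With the script in place, a quick end-to-end check could look like the following. This is only a sketch under the assumptions made earlier in the article (prefix /usr/local/squid, http_port 1351) and it assumes the control script above was saved as /usr/local/squid/squidctl; adapt the names, paths, and test URL to your own setup.

# Check the configuration for syntax errors before starting
/usr/local/squid/sbin/squid -k parse

# Start the proxy using the control script
/usr/local/squid/squidctl start

# Point a client at the proxy and fetch a page through it
export http_proxy=http://127.0.0.1:1351
wget -O /dev/null http://www.slackware.com/

# The request should now appear in the access log
tail /var/log/squid/access.log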
List of HTTP Status Code

Posted: 13 Mar 2013 01:47 AM PDT

An HTTP status code is part of the data sent from server to client as part of a response. It indicates the status of the client's request on the server side. All of the statuses are defined by the Internet Engineering Task Force (IETF) in the related Request for Comments (RFC) documents. Currently, the official registry of HTTP status codes is maintained by the Internet Assigned Numbers Authority (IANA). Note that some servers also extend the codes with their own status codes; these are not universally implemented, but in this article we list all of the known and reported status codes used in responses.

The Taxonomy

An HTTP status code consists of three digits, each ranging from 0 to 9. The first digit indicates the category of the message, and the remaining digits indicate the specific information it carries. Globally, the status codes are divided into five categories, with the first digit ranging from 1 to 5.

1xx Informational

This category indicates that the request has been received and the process is continuing. This class of status is a provisional response, consisting of the status line and optional headers, terminated by an empty line. These status codes are not defined in HTTP/1.0.
2xx Success

The request has been received and processed successfully. More specifically, the server has received, understood, accepted, and processed the request.
3xx Redirection

The request has been received, but the client must take additional action to complete it; in this class, the further action must be performed by the client. The required action may be carried out by the user agent without user interaction only if the method used in the subsequent request is GET or HEAD. A user agent should not automatically redirect a request more than five times, since such redirections usually indicate an infinite loop.
4xx Client Error

The request has been received, but the server indicates that there is an error on the client side. In this case, the server should include an entity containing an explanation of the error situation and state whether it is a temporary or permanent condition. According to the RFC, the user agent should display any included entity to the user.
5xx Server Error

The request has been received, but the server failed to fulfill an apparently valid request. In this case, the server cannot give the appropriate response due to some error that occurred on the server side.
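A quick way to see these codes in practice is to ask a server for its response headers. The sketch below uses wget against the example server from the next article (192.168.1.3) and a deliberately missing path chosen for illustration; any reachable web server will do.

# Print the server's response headers; the status line carries the code (2xx here)
wget --server-response --spider http://192.168.1.3/ 2>&1 | grep "HTTP/"

# A request for a missing resource should yield a 4xx code instead
wget --server-response --spider http://192.168.1.3/no-such-page 2>&1 | grep "HTTP/"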
Revealing HTTP Request and Response

Posted: 13 Mar 2013 01:08 AM PDT

In the world wide web, the Hypertext Transfer Protocol (HTTP) is the main application protocol used for communication and for distributing information. Hypertext is a multi-linear set of objects that builds a network using logical links (called hyperlinks) between nodes (which can be text or words). An HTTP session is actually a sequence of network request and response transactions. But what are these two things, really? In this article we will discuss what a request and a response are in HTTP. I will also demonstrate the concept using two machines: my Slackware64 box as the client and a FreeBSD 8.3 box running Apache as the web server. The IP of the client is 192.168.1.5 and the IP of the server is 192.168.1.3. The whole scenario uses an isolated network to ensure there is no noise, so we only have two nodes connected peer to peer.

The Key Concept

HTTP is an application layer protocol (layer 7 in the OSI reference model, or layer 4 in the TCP/IP model). The default port is 80, unless defined otherwise. In a network at least two nodes communicate: one acts as the server and the other as the client. The client sends a request to the server and the server must give a response. HTTP is a stateless protocol, which means each connection is independent of the others. The atomic transaction is called a session and consists of one request answered by one response. HTTP is built on top of TCP (Transmission Control Protocol), which establishes a connection between the server and the client. Despite being built on TCP, the stateless nature of HTTP means the connection is not kept once the response has been delivered to the client.

Peeking at the Network Level

To understand what data goes in and out when a transaction happens, we will dive down to a lower level. In this scenario we will initiate a connection from the client to the server, capture all the traffic coming from the client, and see what happens. Specifically, we will use wget to download a file from the web server, so we need at least two terminals on the client: one for requesting with wget, and one for running tcpdump on the network interface.

First, fire up tcpdump so that any data going in and out of the interface can be sniffed.

root@BlueWyvern:/# tcpdump -i eth0 -s0 -n -A host 192.168.1.3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes

Next we use wget to send and receive data.

root@BlueWyvern:/# wget http://192.168.1.3
--2013-03-13 14:16:54-- http://192.168.1.3/
Connecting to 192.168.1.3:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 45
Saving to: 'index.html'

100%[======================================>] 45 --.-K/s in 0s

2013-03-13 14:16:54 (8.92 MB/s) - 'index.html' saved [45/45]

Now let's see what happened during the HTTP request. The following is the capture of the connection:

14:16:54.185814 IP 192.168.1.5.48262 > 192.168.1.3.80: Flags [S], seq 3142401191, win 43690, options [mss 65495,sackOK,TS val 2109181 ecr 0,nop,wscale 7], length 0
E..<.Z@.@.*`...........P.M<..........0......... . ..........
14:16:54.185834 IP 192.168.1.3.80 > 192.168.1.5.48262: Flags [S.], seq 618379695, ack 3142401192, win 43690, options [mss 65495,sackOK,TS val 2109181 ecr 2109181,nop,wscale 7], length 0
E..<..@.@.<..........P..$....M<......0......... . ... ......
14:16:54.185856 IP 192.168.1.5.48262 > 192.168.1.3.80: Flags [.], ack 1, win 342, options [nop,nop,TS val 2109181 ecr 2109181], length 0
E..4.[@.@.*g...........P.M<.$......V.(..... . ... ..
14:16:54.185913 IP 192.168.1.5.48262 > 192.168.1.3.80: Flags [P.], seq 1:108, ack 1, win 342, options [nop,nop,TS val 2109181 ecr 2109181], length 107
E....\@.@.)............P.M<.$......V....... . ... ..GET / HTTP/1.1
User-Agent: Wget/1.14 (linux-gnu)
Accept: */*
Host: 127.0.0.1
Connection: Keep-Alive

14:16:54.185948 IP 192.168.1.3.80 > 192.168.1.5.48262: Flags [.], ack 108, win 342, options [nop,nop,TS val 2109181 ecr 2109181], length 0
E..4.M@.@..t.........P..$....M=....V.(..... . ... ..
14:16:54.225066 IP 192.168.1.3.80 > 192.168.1.5.48262: Flags [P.], seq 1:336, ack 108, win 342, options [nop,nop,TS val 2109220 ecr 2109181], length 335
E....N@.@..$.........P..$....M=....V.w..... . /$. ..HTTP/1.1 200 OK
Date: Wed, 13 Mar 2013 07:16:54 GMT
Server: Apache/2.4.3 (Unix) PHP/5.4.7
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html

<html><body><h1>It works!</h1></body></html>

14:16:54.225093 IP 192.168.1.5.48262 > 192.168.1.3.80: Flags [.], ack 336, win 350, options [nop,nop,TS val 2109220 ecr 2109220], length 0
E..4.]@.@.*e...........P.M=.$......^.(..... . /$. /$
14:16:54.225897 IP 192.168.1.5.48262 > 192.168.1.3.80: Flags [F.], seq 108, ack 336, win 350, options [nop,nop,TS val 2109221 ecr 2109220], length 0
E..4.^@.@.*d...........P.M=.$......^.(..... . /%. /$
14:16:54.252701 IP 192.168.1.3.80 > 192.168.1.5.48262: Flags [F.], seq 336, ack 109, win 342, options [nop,nop,TS val 2109247 ecr 2109221], length 0
E..4.O@.@..r.........P..$....M=....V.(..... . /?. /%
14:16:54.252738 IP 192.168.1.5.48262 > 192.168.1.3.80: Flags [.], ack 337, win 350, options [nop,nop,TS val 2109247 ecr 2109247], length 0
E..4._@.@.*c...........P.M=.$......^.(..... . /?. /?

That is a lot of text for one simple request and response, so let's examine it in detail. To make it easier to understand, we will divide the exchange into three stages.

Stage 1: Establishing the TCP Connection

As we all know, in a TCP-based connection we must establish the connection before communicating. This stage is known as the three-way handshake, performed between client and server in three steps. If we examine the capture closely, the first three packets are this handshake, starting with:

14:16:54.185814 IP 192.168.1.5.48262 > 192.168.1.3.80: Flags [S], seq 3142401191, win 43690, options [mss 65495,sackOK,TS val 2109181 ecr 0,nop,wscale 7], length 0

This article won't cover the three-way handshake in detail; I assume you already know what it is and how it is done. At this stage we have an established connection between client and server, and the client can now send a request, so let's go to the second stage.

Stage 2: Client Initiates an HTTP GET Request

Next, the client sends a request to the server by initiating an HTTP GET request. GET is one of the request methods defined for HTTP and the one used most on the web; clients use it to retrieve data from the server. If we look closely at the packet, it reveals the following information: the request method and path (GET /), the protocol version (HTTP/1.1), the client software in the User-Agent header (Wget/1.14 on linux-gnu), the content types the client accepts (Accept: */*), the Host header, and the Connection: Keep-Alive option.
All of this can be seen here:

14:16:54.185913 IP 192.168.1.5.48262 > 192.168.1.3.80: Flags [P.], seq 1:108, ack 1, win 342, options [nop,nop,TS val 2109181 ecr 2109181], length 107
E....\@.@.)............P.M<.$......V....... . ... ..GET / HTTP/1.1
User-Agent: Wget/1.14 (linux-gnu)
Accept: */*
Host: 127.0.0.1
Connection: Keep-Alive

Stage 3: Server Replies with an HTTP Response

On receiving the GET request from the client, the server must respond, revealing some information about itself and metadata about the requested resource along with the data itself: the status line (HTTP/1.1 200 OK), the Date and Server headers, caching metadata such as Last-Modified and ETag, the Content-Length and Content-Type, and finally the body.
We can see the actual packet in the tcpdump output:

14:16:54.225066 IP 192.168.1.3.80 > 192.168.1.5.48262: Flags [P.], seq 1:336, ack 108, win 342, options [nop,nop,TS val 2109220 ecr 2109181], length 335
E....N@.@..$.........P..$....M=....V.w..... . /$. ..HTTP/1.1 200 OK
Date: Wed, 13 Mar 2013 07:16:54 GMT
Server: Apache/2.4.3 (Unix) PHP/5.4.7
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html

<html><body><h1>It works!</h1></body></html>

The client then ACKs the response and, once no more data is transmitted, the connection is terminated.

The HTTP Request Types

In the scenario above we only discussed the GET request type. In this section we will touch on the other request methods.

HTTP HEAD

Similar to an HTTP GET request, HEAD is the easiest way to obtain the complete details of a resource available at a particular URL without downloading the entire data. In our scenario, if we used a HEAD request we would get all the headers in the response without the "<html><body><h1>It works!</h1></body></html>" body. It is mostly used to retrieve attributes and metadata regardless of the data itself, and it can save a lot of bandwidth when the data is big.

HTTP POST

Mostly used to send data from the client to the server. The following is an example of an HTTP POST request from client to server:

POST /receiver.php HTTP/1.1
Host: 192.168.1.3
User-Agent: ELinks/0.11.1 (textmode; Linux; 80x25-2)
Referer: http://192.168.1.3/
Accept: */*
Accept-Encoding: gzip
Accept-Language: en
Connection: Keep-Alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 62

name=sarath&last=pillai&email=&telephone=&comments=

As seen in the snippet above, the request is sent to "receiver.php". The data sent to the server contains the same kind of headers that we saw during our GET request example, and the last line carries the actual form data.

HTTP PUT

Similar to the POST request. A PUT request sends or creates a resource at the specified URI: if the resource is already present at that URI, the server updates it; otherwise the resource is created.

HTTP DELETE

Asks the server to delete a specific resource at a specified URI. It is generally not advisable to configure a web server for the HTTP DELETE operation; if such functionality is desired, it is usually implemented through an HTTP POST from a web form which in turn deletes the resource.

HTTP TRACE

Used to troubleshoot HTTP web pages. In a simple case, if a web page is not loading in the browser the way we want, a TRACE request can be used to retrieve the complete request that the server received, echoed back to the client itself, much like seeing exactly what you sent to the server. This request is disabled on most web servers; the reason is simple: the operation is similar to viewing the web server's log of the request we sent.

Famous Responses

Now, let's discuss some widely known HTTP status codes that often occur in transactions, such as 200 OK, 301 Moved Permanently, 302 Found, 404 Not Found, and 500 Internal Server Error. For a complete list, read the List of HTTP Status Code article above.
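To experiment with these methods without a browser, you can speak HTTP by hand. The sketch below sends a HEAD and then a TRACE request to the example server using netcat; it assumes nc is installed on the client and that the server at 192.168.1.3 permits these methods (TRACE, as noted above, is often disabled).

# HEAD: headers only, no body
printf 'HEAD / HTTP/1.1\r\nHost: 192.168.1.3\r\nConnection: close\r\n\r\n' | nc 192.168.1.3 80

# TRACE: the server echoes the request it received back to us
printf 'TRACE / HTTP/1.1\r\nHost: 192.168.1.3\r\nConnection: close\r\n\r\n' | nc 192.168.1.3 80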
QEMU on Windows

Posted: 12 Mar 2013 08:59 AM PDT

QEMU is an emulator for various CPUs. QEMU can run operating systems such as Linux, Windows, and FreeBSD the way VirtualBox and VMware do, but unlike other emulators QEMU can also emulate a whole different machine; hence you can get an ARM system to "run" on top of your machine. When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (ours) by using dynamic translation, with very good performance. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU.

In previous articles we discussed many things about QEMU, such as installing Slackware on QEMU and installing QEMU on Slackware64. In this article we will discuss something different: how to install QEMU on Windows, using Windows 8 64-bit. Although I use Windows 8 64-bit as the example, you can also set up QEMU on a 32-bit Windows machine.

Obtain the Materials

QEMU binaries for Windows are not officially supported or maintained by the QEMU project, so we get unofficial binaries from contributors. Here we have two providers: Eric Lassauge and Takeda Toshiya; other QEMU providers can be found as well. Thanks to them for providing the binaries! In this article I use QEMU 1.3.1 as provided by Eric Lassauge; a direct download link is available on his site.

Installation

There is no special treatment for the QEMU installation. First, extract the archive (in this case Qemu-1.3.1-windows.zip). You will get a directory Qemu-1.3.1-windows with some binaries inside. Rename the folder to qemu and move it to C:, so you now have C:\qemu. Now open Control Panel and choose Edit the system environment variables (if you can't find it, enter Control Panel\System and Security\System in the address bar and choose Advanced system settings). Click the Environment Variables button, edit Path, and append ";C:\qemu" (without the quotes). At this point you already have QEMU on your system (easy, right?).
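Once C:\qemu is on the Path, a new command prompt can drive QEMU directly. The sketch below is only an illustration: it assumes the package ships qemu-img.exe and qemu-system-x86_64.exe, and the installation ISO name (slackware-14.0-install-dvd.iso) is purely hypothetical; adjust the binary and file names to whatever your build and media provide.

rem Check that the binaries are reachable from the Path
qemu-system-x86_64 -version

rem Create a 10 GB qcow2 disk image for a guest
qemu-img create -f qcow2 C:\qemu\slackware.img 10G

rem Boot the guest from the ISO with 512 MB of RAM, using the new disk
qemu-system-x86_64 -m 512 -hda C:\qemu\slackware.img -cdrom slackware-14.0-install-dvd.iso -boot d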