Tuesday 30 April 2013

NFS Server (Network File System)

NFS was developed to allow machines to mount a disk partition on a remote machine as if it were on a local hard drive.

--->  This allows for fast, seamless sharing of files across a network.
 

---> Main config files to edit to set up an NFS server are:

           1. /etc/exports
           2. /etc/hosts.allow
           3. /etc/hosts.deny



/etc/exports file 

The exports file contains a list of entries; each entry indicates a volume that is shared and how it is shared.

EX:- 
directory machine1(option11,option12) machine2(option21,option22)

Where
Directory: The directory that you want to share. It may be an entire volume, though it need not be. If you share a directory, then all directories under it within the same file system will be shared as well.


machine1 and machine2: Client machines that will have access to the directory. The machines may be listed by their DNS name or their IP address (e.g. machine.company.com or 192.168.0.25). Using an IP address is more reliable and more secure.

optionXX: The option listing for each machine describes what kind of access that machine will have. Important options are:
     
A. ro: The directory is shared read-only; the client machine will not be able to write to it. This is the default.

B. rw: The client machine will have read and write access to the directory.

C. no_root_squash: By default, any file request made by user root on the client machine is treated as if it were made by user nobody on the server. If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications, although it may be necessary if you want to perform administrative work on the client machine that involves the exported directories. You should not specify this option without a good reason.

D. no_subtree_check: If only part of a volume is exported, a routine called subtree checking verifies that a file requested by the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.

E. sync: By default, all but the most recent versions of the exportfs command use async behavior, telling a client machine that a file write is complete - that is, written to stable storage - as soon as NFS has finished handing the write over to the file system. This behavior may cause data corruption if the server reboots; the sync option prevents this.
             
EG: /var/tmp 192.168.0.25(async,rw)
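
To make the pieces concrete, here is a slightly fuller sketch of an /etc/exports file combining the options above (the directories, host, and network shown are only illustrative):

     /var/tmp    192.168.0.25(rw,sync)
     /home       192.168.0.0/255.255.255.0(ro,sync,no_subtree_check)

After editing /etc/exports, re-export the shares so the changes take effect:

     # exportfs -ra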


/etc/hosts.allow and /etc/hosts.deny

These two files specify which computers on the network can use services on your machine. Each line of the file contains a single entry listing a service and a set of machines.


When a server gets a request from a machine, it does the following,
----> It first checks the hosts.allow file to see if the machine matches a description listed there. If it does, then the machine is allowed access.


----> If the machine does not match an entry in hosts.allow, the server then checks hosts.deny to see if the client matches a listing there. If it does, then the machine is denied access.


----> If the client matches no listing in either file, then it is allowed access.


Configuring /etc/hosts.allow and /etc/hosts.deny for NFS security

----> In addition to controlling access to services handled by inetd, these files can also control access to NFS by restricting connections to the daemons that provide NFS services. Restrictions are done on a per-service basis.
      
----> The first daemon to restrict access to is the portmapper. This daemon essentially just tells requesting clients how to find all the NFS services on the system.
       
----> Restricting access to the portmapper is the best defense against someone breaking into your system through NFS, because completely unauthorized clients won't know where to find the NFS daemons.
       
----> However, there are two things to watch out for. First, restricting the portmapper isn't enough if the intruder already knows for some reason how to find those daemons. Second, if you are running NIS, restricting the portmapper will also restrict requests to NIS. In general it is a good idea with NFS to explicitly deny access to any IP addresses that you don't need to allow access to.
   
----> The first step in doing this is to add the following entry to /etc/hosts.deny
         portmap:ALL


----> If you have a newer version of nfs-utils, add entries for each of the NFS daemons in hosts.deny:
            lockd:ALL
            mountd:ALL
            rquotad:ALL
            statd:ALL


----> Alternatively, we can use ALL:ALL in the file /etc/hosts.deny, which causes any service that looks at these files to deny access to all hosts unless they are explicitly allowed.
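
That entry in /etc/hosts.deny is simply:

        ALL:ALL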
        
----> In hosts.allow use the following format  


service: host [or network/netmask], host [or network/netmask]
     
Here host is the IP address of a potential client. If we want to allow access to 192.168.0.1 and 192.168.0.2, we could add the following entry to /etc/hosts.allow:
   portmap: 192.168.0.1 , 192.168.0.2
 

For recent nfs-utils versions, we would also add the following
[these entries are harmless even if they are not supported]
    lockd: 192.168.0.1 , 192.168.0.2
    mountd: 192.168.0.1 , 192.168.0.2
    rquotad: 192.168.0.1 , 192.168.0.2
    statd: 192.168.0.1 , 192.168.0.2

----> If you intend to run NFS on a large number of machines in a local network, /etc/hosts.allow also allows network/netmask style entries, in the same manner as /etc/exports above.
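
For example, to let an entire local network (the address and netmask here are only illustrative) reach the portmapper, the /etc/hosts.allow entry could look like:

        portmap: 192.168.0.0/255.255.255.0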

Different Stages of Linux Boot Process

Here we look at how Linux boots, starting at power-on and finally reaching the desktop.

1. Hardware Boot (power on):
     BIOS Initialization
     POST
     Boot Device Selection / Boot Loader Initialization


When the system powers on, it first checks whether anything has gone wrong since the time it was switched off. This check is performed by the BIOS, whose settings are stored in the CMOS chip.

BIOS acts as an interface between H/W and S/W.


The BIOS is responsible for identifying, checking, and initializing system devices such as graphics cards, hard disks, etc. To do this, the BIOS performs a POST (Power-On Self-Test).

At the end of POST a boot device is selected from a list of detected boot devices (CD-ROM, USB drive, hard disk or floppy disk).
 
During POST, any errors found in a device are reported as either fatal or non-fatal errors:


Fatal: SMPS, HD, etc. --- continuous beep sound ---- halts the booting process
 

Non-Fatal: keyboard, mouse --- short beep sound ---- does not disturb the booting process

The BIOS reads and executes the first physical sector of the chosen boot media, usually the first 512 bytes of the hard disk.

These 512 bytes (the MBR) are divided into 3 parts:

first 446 bytes contain the boot loader code (booting info)
next 64 bytes contain the partition table
last 2 bytes contain the boot signature --- the bootable media check
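
If you want to look at this for yourself, the MBR of the boot disk can be copied to a file and inspected (assuming /dev/sda is the boot disk; be careful with dd, a wrong of= can destroy data):

    # dd if=/dev/sda of=/tmp/mbr.bin bs=512 count=1
    # file /tmp/mbr.bin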

2. OS Boot:


     Boot loader
     MBR [Master Boot Record]
     IPL
   
The boot loader is responsible for loading and starting the OS when the system starts.
 

The first section is GRUB (GRand Unified Bootloader); its first stage lives in the first 512 bytes of the hard disk.

The Master Boot Record [MBR] contains information about loading the OS. The BIOS passes control to an initial program loader (IPL) installed within the MBR.

The IPL occupies the first 446 bytes of the MBR; its task is to locate and load a second-stage boot loader for further booting.


3. GRUB

  GRUB stands for Grand Unified Bootloader
 

The GRUB configuration is stored in /boot/grub/grub.conf -- this is read by the second-stage boot loader.

If you have multiple kernel images installed on your system, you can choose which one is to be executed.


GRUB displays a splash screen and waits for a few seconds; if you don’t enter anything, it loads the default kernel image as specified in the GRUB configuration file.

GRUB just loads and executes the kernel and initrd images.

default=0 --- 0 means the first kernel entry listed will load; if it is 1, the second listed kernel will load.

timeout=3 --- number of seconds GRUB waits at the menu before loading the default entry.

splashimage=(hd0,0)/grub/splash.xpm.gz
 

hiddenmenu
 

title Red Hat Enterprise Linux Server (2.6.18-53.el5)
          ---- the name under which this OS entry appears in the boot menu
 

root (hd0,0) ---- first HD, first partition
 

kernel /vmlinuz-2.6.18-53.el5 ro root=LABEL=/ rhgb quiet
  ------ the kernel image with this name (/vmlinuz-2.6.18-53.el5) will
  load, and it will initially mount the root file system read-only.

initrd /initrd-2.6.18-53.el5.img ---- this image holds the kernel
           modules that are needed to mount /. The kernel needs
           those modules to mount /, but without / it cannot
           reach them; hence the need for the initrd image.

initrd ---- The Linux kernel needs to mount the root file system, and to do so it typically needs access to modules. The kernel cannot mount the root file system without the modules, but the modules are in the root file system. initrd is the solution; it is part of a typical GRUB entry in grub.conf.

The initrd line indicates that the initial ramdisk file /initrd-2.6.18-53.el5.img contains the modules needed for mounting the root file system.
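
Pulling the directives above together, a typical grub.conf for this kernel would look like the following sketch (the device, label, and version strings are simply the ones used in the example above):

    default=0
    timeout=3
    splashimage=(hd0,0)/grub/splash.xpm.gz
    hiddenmenu
    title Red Hat Enterprise Linux Server (2.6.18-53.el5)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-53.el5 ro root=LABEL=/ rhgb quiet
            initrd /initrd-2.6.18-53.el5.img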


4. Kernel Initialization
   
-- The boot loader first loads the kernel; the kernel then mounts the root file system as specified by “root=” in grub.conf.


-- Kernel executes the /sbin/init program.

-- initrd is used by the kernel as a temporary root file system until the kernel is booted and the real root file system is mounted. It also contains the necessary drivers compiled inside, which help it access the hard drive partitions and other hardware.
   
    [
     Kernel boot-time functions are:

  1. Device detection, partition detection
  2. Device driver initialization
  3. Mount the root file system read-only
  4. Load the initial process, i.e. init

First, device drivers are called and attempt to locate their corresponding devices. After all the essential drivers are loaded, the kernel mounts the root file system read-only. Then the first process (init) is loaded and control is passed from the kernel to that process. ]


5. Initialization of init

init reads its configuration in /etc/inittab

This file tells init how to set up the system in each runlevel:

Runlevel 0: shutdown (halt)
Runlevel 1: single-user mode (rescue operations are done from here), s/emergency mode; in this mode the root file system is read-only, so use e2fsck at this level for file system modifications
Runlevel 2: multi-user mode without network
Runlevel 3: multi-user mode with network
Runlevel 4: undefined, reserved for local use (GUI mode for Slackware only)
Runlevel 5: graphical user interface (GUI) mode, full support
Runlevel 6: reboot
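
The default runlevel is set by the initdefault line in /etc/inittab, and the runlevel command prints the previous and current level. For example, on a machine that boots into the GUI (the output shown is only illustrative):

    # grep initdefault /etc/inittab
    id:5:initdefault:

    # runlevel
    N 5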

Depending on your default init level setting, the system will execute the programs from one of the following directories.
Run level 0 – /etc/rc.d/rc0.d/
Run level 1 – /etc/rc.d/rc1.d/
Run level 2 – /etc/rc.d/rc2.d/
Run level 3 – /etc/rc.d/rc3.d/
Run level 4 – /etc/rc.d/rc4.d/
Run level 5 – /etc/rc.d/rc5.d/
Run level 6 – /etc/rc.d/rc6.d/


Under the /etc/rc.d/rc*.d/ directories, you would see programs that start with S and K.

Programs starting with S are used during startup (S for startup).
Programs starting with K are used during shutdown (K for kill).
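
For example, listing the runlevel 3 directory might show entries like these (the exact script names and numbers vary from system to system; this listing is only illustrative):

    # ls /etc/rc.d/rc3.d/
    K05saslauthd  S10network  S12syslog  S55sshd  S90crond  S99local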

Some system services are also run from the following files:

/etc/rc.d/rc.sysinit
sets kernel parameters from /etc/sysctl.conf
sets the system clock
enables swap partitions
sets the host name
checks the root file system and remounts it
activates RAID and LVM devices
enables disk quotas
checks and mounts other file systems

/etc/rc.d/init.d ----- directory containing the init scripts for system services.
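
Those init scripts can be invoked directly or through the service command; sshd is used below only as an illustration, and any script in that directory works the same way:

    # /etc/rc.d/init.d/sshd status
    # service sshd restart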






Mounting remote directories using SSHFS

Step # 1: Download and Install FUSE
 

# wget http://sourceforge.net/projects/fuse/files/fuse-2.X/2.8.5/fuse-2.8.5.tar.gz/download

# tar -zxvf fuse-2.8.5.tar.gz


Compile and install FUSE:
# cd fuse-2.8.5
# ./configure
# make
# make install


Step # 2: Configure FUSE shared library loading

You need to configure dynamic linker run-time bindings using the ldconfig command so that the sshfs command can load shared libraries such as libfuse.so.2:


# vi /etc/ld.so.conf.d/fuse.conf

Append the following path: /usr/local/lib

Run ldconfig: # ldconfig

Step # 3: Install sshfs

Now FUSE is loaded and ready to use. Next you need sshfs to access and mount a file system over SSH:

# wget http://easynews.dl.sourceforge.net/sourceforge/fuse/sshfs-fuse-1.7.tar.gz
 

# tar -zxvf sshfs-fuse-1.7.tar.gz

Compile and install sshfs:
# cd sshfs-fuse-1.7
# ./configure
# make
# make install

Mounting your remote filesystem

Now you have a working setup; all you need to do is mount a filesystem under Linux. First, create a mount point:

# mkdir /mnt/remote

Now mount a remote server filesystem using sshfs command:

# sshfs jeff@ctechz.com: /mnt/remote

To unmount file system just type:

# fusermount -u /mnt/remote

or

# umount /mnt/remote

In Ubuntu Machines
*****************
$sudo apt-get install sshfs
$sudo mkdir /media/dir-name
$sudo chown your-username /media/dir-name
$sudo adduser your-username fuse ----- add user to a group called fuse

Then you can mount any directory using the sshfs command:
$ sshfs root@192.168.0.152:/root/Jeff /mnt/mount/
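
If other local users need to reach the mount, or you want the connection to survive network interruptions, sshfs also accepts options such as allow_other and reconnect (same host and paths as in the example above; allow_other additionally requires user_allow_other to be enabled in /etc/fuse.conf when run as a normal user):

$ sshfs -o allow_other,reconnect root@192.168.0.152:/root/Jeff /mnt/mount/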

[
 If you get the following error:
  fusermount: fuse device not found, try ‘modprobe fuse’ first
   You will have to load the fuse module by doing:

$sudo modprobe fuse

You can add fuse to the modules that are loaded on startup by editing the file /etc/modules and adding a line with only the word “fuse” in it, at the end.
and then issue the sshfs command above again.
 ]

To unmount the directory once your work is done, use the command:

$fusermount -u <mount-point>

for example, in my case, I would use

$fusermount -u /media/home-pc

Friday 19 April 2013

How To Set Up AWS CloudFront

 Content Delivery Network or Content Distribution Network (CDN)

A content delivery network or content distribution network (CDN) is a system of computers containing copies of data, placed at various points in a network so as to maximize bandwidth for access to the data from clients throughout the network.

A client accesses a copy of the data near to the client, as opposed to all clients accessing the same central server, so as to avoid a bottleneck near that server. A CDN will increase speed and efficiency for your website.

CloudFront is a web service that speeds up distribution of your static and dynamic web content, for example, .html, .css, .php, and image files, to end users. CloudFront delivers your content through a worldwide network of edge locations. 


When an end user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency,  so content is delivered with the best possible performance.

If the content is already in that edge location, CloudFront delivers it immediately. If the content is not currently in that edge location, CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content.


Amazon CloudFront is a web service for content delivery. It integrates with other Amazon Web Services to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no commitments.
 
It distributes the requests coming to your server across the cloud and manages the load; only requests that actually need database connections reach the origin server, while all other requests are served from the cloud, thus spreading the load.

This includes data transferred from Amazon EC2 and Amazon S3 to any Amazon CloudFront edge location.

Some people have begun utilizing S3 to host files for their website that would otherwise be expensive in bandwidth costs to serve from their own server.

Enter Amazon CloudFront. CloudFront is a CDN, or Content Delivery Network. It utilizes Amazon S3 but distributes your data out to multiple datacenter locations, ensuring faster access times through low latency file hosting for your website users.
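
As a concrete illustration, once a CloudFront distribution is created with an S3 bucket as its origin, you link to objects through the distribution's domain name instead of the bucket's own URL (both names below are purely hypothetical):

    http://mybucket.s3.amazonaws.com/images/logo.png     (served directly from S3)
    http://d1234abcd.cloudfront.net/images/logo.png      (served from the nearest CloudFront edge location)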
   
There are small downsides to CloudFront compared to expensive CDN solutions. For one, it takes some time (up to 24 hours) for file changes and updates to be pushed out to CloudFront edge servers.
   
It makes it easier for you to distribute content to end users quickly, with low latency and high data transfer speeds. Amazon CloudFront delivers your content through a worldwide network of edge locations. 


End users are routed to the nearest edge location, so content is delivered with the best possible performance.  
Amazon CloudFront works seamlessly with the Amazon Simple Storage Service (Amazon S3), which durably stores the original, definitive versions of your files.

 Manage Amazon CloudFront through an easy to use interface. The AWS Management Console lets you review all your CloudFront distributions, create new distributions, or edit existing ones.

 All the features of the CloudFront API are supported: you can enable or disable distributions, configure CNAMEs, enable end-user logging, and more.