Thursday, March 28, 2013

Basic installation of Chef using a managed server in 10 minutes

Chef is quite a comprehensive tool, capable of deploying configurations, code and cloud elements. We'll just cover some basic steps; for more information, please visit their website.

As a first step, create a free account on their website. Upon first login, go to your Organization and download the generated keys and the knife configuration:


You will download a file called knife.rb and a certificate named <company>-validator.pem. In our case, the company is andreu2.

Now let's get your private certificate (node certificate). Click on your username, then View Profile, and then get the private key:


Now, install the client on your desktop. The website recommends doing it this way:


$ curl -L https://www.opscode.com/chef/install.sh | sudo bash
 
It didn't work on my Debian 7, so I installed the packages from the Debian repository:

$ sudo apt-get install rubygems chef

(it will install quite a few packages)
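You can confirm the client installed correctly before going on (the reported version will depend on your packages):

$ knife --version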

Any workstation interacting with Chef needs to have the chef-repo. Create the folder ~/Development and get the git repository:

$ mkdir ~/Development
$ git clone git://github.com/opscode/chef-repo ~/Development/chef-repo

Now we have a subfolder chef-repo inside Development. Create a subfolder inside it named .chef and copy knife.rb and both certificates:

$ mkdir ~/Development/chef-repo/.chef
$ cp *.pem ~/Development/chef-repo/.chef
$ cp knife.rb ~/Development/chef-repo/.chef

Let's check that we can authenticate against the Chef server with the knife tool. If the command fails at this point, it means we have the wrong certificates or the wrong names:

$ knife client list
andreu2-validator

Cool. Now let's download some community cookbooks, like apache2 and networking_basic:

$ knife cookbook site install apache2
Installing apache2 to /home/amartin/Development/chef-repo/cookbooks
Checking out the master branch.
Creating pristine copy branch chef-vendor-apache2
Downloading apache2 from the cookbooks site at version 1.6.0 to /home/amartin/Development/chef-repo/cookbooks/apache2.tar.gz
Cookbook saved: /home/amartin/Development/chef-repo/cookbooks/apache2.tar.gz
Removing pre-existing version.
Uncompressing apache2 version 1.6.0.
removing downloaded tarball
1 files updated, committing changes
Creating tag cookbook-site-imported-apache2-1.6.0
Checking out the master branch.
Updating b5a1d0d..3eb507c
[...]

$ knife cookbook site install networking_basic
Installing networking_basic to /home/amartin/Development/chef-repo/cookbooks
Checking out the master branch.
Creating pristine copy branch chef-vendor-networking_basic
Downloading networking_basic from the cookbooks site at version 0.0.5 to /home/amartin/Development/chef-repo/cookbooks/networking_basic.tar.gz
Cookbook saved: /home/amartin/Development/chef-repo/cookbooks/networking_basic.tar.gz
Removing pre-existing version.
Uncompressing networking_basic version 0.0.5.
removing downloaded tarball
1 files updated, committing changes
Creating tag cookbook-site-imported-networking_basic-0.0.5
Checking out the master branch.
Updating 3eb507c..8f091db
[...]

Now we have both cookbooks inside the folder 'cookbooks'. If you have a look, you can see they are composed of recipes with different functions. For example, let's review this one:

$ cat cookbooks/apache2/recipes/mod_perl.rb
#
# Cookbook Name:: apache2
# Recipe:: perl
#
# adapted from the mod_python recipe by Jeremy Bingham
#
# Copyright 2008-2009, Opscode, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

case node['platform_family']
when "debian"
  %w{libapache2-mod-perl2 libapache2-request-perl apache2-mpm-prefork}.each do |pkg|

    package pkg

  end
when "rhel", "fedora"

  package "mod_perl" do
    notifies :run, "execute[generate-module-list]", :immediately
  end

  package "perl-libapreq2"

end

file "#{node['apache']['dir']}/conf.d/perl.conf" do
  action :delete
  backup false
end

apache_module "perl"

We can see basic instructions for installing packages on both Debian and Red Hat, as well as module activation.

Let's create our own Cookbook. First install magic_shell:

$ knife cookbook site install magic_shell
Installing magic_shell to /home/amartin/Development/chef-repo/cookbooks
[...]

Now we can create a cookbook, for example 'myalias':

$ knife cookbook create myalias
** Creating cookbook myalias
** Creating README for cookbook: myalias
** Creating metadata for cookbook: myalias

Now we will add a dependency, so our cookbook can use magic_shell at a compatible version:

$ vim cookbooks/myalias/metadata.rb

Add the line depends          'magic_shell', '~> 0.2.0' at the end, then save the file.
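For reference, the tail of the resulting metadata.rb should look roughly like this (the generated fields above it are omitted):

$ cat cookbooks/myalias/metadata.rb
[...]
version          '0.0.1'
depends          'magic_shell', '~> 0.2.0'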

Now we can create our own recipe. Let's edit the file cookbooks/myalias/recipes/default.rb and add a few aliases and environment variables:

#
# Cookbook Name:: myalias
# Recipe:: default
#
# Copyright 2013, YOUR_COMPANY_NAME
#
# All rights reserved - Do Not Redistribute
#
# Alias `rm` to the safer interactive mode
magic_shell_alias 'rm' do
  command 'rm -i'
end

# Alias cow says mooo
magic_shell_alias 'cow' do
  command 'echo cow says moooo'
end

# Environment settings: my editor is vim
magic_shell_environment 'EDITOR' do
  value 'vim'
end

Now we upload the cookbooks to our organization. Since we are not using apache2 or networking_basic here, we can delete those cookbooks first:

$ knife cookbook delete apache2
Do you really want to delete apache2 version 1.6.0? (Y/N) y
$ knife cookbook delete networking_basic
Do you really want to delete networking_basic version 0.0.5? (Y/N) y

$ rm -fr cookbooks/apache2 cookbooks/networking_basic

$ knife cookbook upload -a
Uploading magic_shell                  [0.2.0]
Uploading myalias                      [0.0.1]
Uploaded 2 cookbooks.

Now the official manual explains how to deploy virtual machines with Vagrant and VirtualBox and apply cookbooks to them - a must read! However, I will apply these changes to my local machine to speed things up. First, create a user chef (for example) and add it to sudoers:

$ sudo useradd -m chef
$ sudo passwd chef
$ sudo visudo (add chef to sudoers)
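Inside visudo, a line like the following grants the chef user full sudo rights (adjust to your own security policy):

chef    ALL=(ALL:ALL) ALL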

Now we confirm our node name:

$ knife node list
andreuantonio

And now we put our new cookbook into effect. -N sets the node name, and --sudo indicates that sudo is needed to change the system:

$ knife bootstrap my_node_server -N andreuantonio --ssh-user chef --ssh-password cheffpassword --ssh-port 22 --run-list "recipe[myalias]" --sudo
Bootstrapping Chef on my_node_server
my_node_server knife sudo password:
Enter your password:

my_node_server [2013-03-27T15:54:11+08:00] INFO: Setting the run_list to ["recipe[myalias]"] from JSON
my_node_server [2013-03-27T15:54:11+08:00] INFO: Run List is [recipe[myalias]]
my_node_server [2013-03-27T15:54:11+08:00] INFO: Run List expands to [myalias]
my_node_server [2013-03-27T15:54:11+08:00] INFO: Starting Chef Run for andreuantonio
my_node_server [2013-03-27T15:54:11+08:00] INFO: Running start handlers
my_node_server [2013-03-27T15:54:11+08:00] INFO: Start handlers complete.
my_node_server [2013-03-27T15:54:13+08:00] INFO: Loading cookbooks [magic_shell, myalias]
my_node_server [2013-03-27T15:54:15+08:00] INFO: Storing updated cookbooks/magic_shell/resources/environment.rb in the cache.
my_node_server [2013-03-27T15:54:16+08:00] INFO: Storing updated cookbooks/magic_shell/resources/alias.rb in the cache.
my_node_server [2013-03-27T15:54:18+08:00] INFO: Storing updated cookbooks/magic_shell/providers/environment.rb in the cache.
my_node_server [2013-03-27T15:54:19+08:00] INFO: Storing updated cookbooks/magic_shell/providers/alias.rb in the cache.
my_node_server [2013-03-27T15:54:20+08:00] INFO: Storing updated cookbooks/magic_shell/Rakefile in the cache.
my_node_server [2013-03-27T15:54:22+08:00] INFO: Storing updated cookbooks/magic_shell/metadata.rb in the cache.
my_node_server [2013-03-27T15:54:23+08:00] INFO: Storing updated cookbooks/magic_shell/CHANGELOG.md in the cache.
my_node_server [2013-03-27T15:54:24+08:00] INFO: Storing updated cookbooks/magic_shell/README.md in the cache.
my_node_server [2013-03-27T15:54:26+08:00] INFO: Storing updated cookbooks/magic_shell/.travis.yml in the cache.
my_node_server [2013-03-27T15:54:27+08:00] INFO: Storing updated cookbooks/magic_shell/.gitignore in the cache.
my_node_server [2013-03-27T15:54:29+08:00] INFO: Storing updated cookbooks/magic_shell/metadata.json in the cache.
my_node_server [2013-03-27T15:54:30+08:00] INFO: Storing updated cookbooks/magic_shell/.rvmrc in the cache.
my_node_server [2013-03-27T15:54:32+08:00] INFO: Storing updated cookbooks/myalias/recipes/default.rb in the cache.
my_node_server [2013-03-27T15:54:33+08:00] INFO: Storing updated cookbooks/myalias/README.md in the cache.
my_node_server [2013-03-27T15:54:34+08:00] INFO: Storing updated cookbooks/myalias/metadata.rb in the cache.
my_node_server [2013-03-27T15:54:34+08:00] INFO: Processing magic_shell_alias[rm] action add (myalias::default line 10)
my_node_server [2013-03-27T15:54:34+08:00] INFO: Adding rm.sh to /etc/profile.d/
my_node_server [2013-03-27T15:54:34+08:00] INFO: Processing file[/etc/profile.d/rm.sh] action create (/var/chef/cache/cookbooks/magic_shell/providers/alias.rb line 7)
my_node_server [2013-03-27T15:54:34+08:00] INFO: file[/etc/profile.d/rm.sh] created file /etc/profile.d/rm.sh
my_node_server [2013-03-27T15:54:34+08:00] INFO: file[/etc/profile.d/rm.sh] mode changed to 755
my_node_server [2013-03-27T15:54:34+08:00] INFO: Processing file[/etc/profile.d/rm.sh] action nothing (/var/chef/cache/cookbooks/magic_shell/providers/alias.rb line 7)
my_node_server [2013-03-27T15:54:34+08:00] INFO: Processing magic_shell_alias[cow] action add (myalias::default line 15)
my_node_server [2013-03-27T15:54:34+08:00] INFO: Adding cow.sh to /etc/profile.d/
my_node_server [2013-03-27T15:54:34+08:00] INFO: Processing file[/etc/profile.d/cow.sh] action create (/var/chef/cache/cookbooks/magic_shell/providers/alias.rb line 7)
my_node_server [2013-03-27T15:54:34+08:00] INFO: file[/etc/profile.d/cow.sh] created file /etc/profile.d/cow.sh
my_node_server [2013-03-27T15:54:34+08:00] INFO: file[/etc/profile.d/cow.sh] mode changed to 755
my_node_server [2013-03-27T15:54:34+08:00] INFO: Processing file[/etc/profile.d/cow.sh] action nothing (/var/chef/cache/cookbooks/magic_shell/providers/alias.rb line 7)
my_node_server [2013-03-27T15:54:34+08:00] INFO: Processing magic_shell_environment[EDITOR] action add (myalias::default line 20)
my_node_server [2013-03-27T15:54:34+08:00] INFO: Adding EDITOR.sh to /etc/profile.d/
my_node_server [2013-03-27T15:54:34+08:00] INFO: Processing file[/etc/profile.d/EDITOR.sh] action create (/var/chef/cache/cookbooks/magic_shell/providers/environment.rb line 7)
my_node_server [2013-03-27T15:54:34+08:00] INFO: file[/etc/profile.d/EDITOR.sh] created file /etc/profile.d/EDITOR.sh
my_node_server [2013-03-27T15:54:34+08:00] INFO: file[/etc/profile.d/EDITOR.sh] mode changed to 755
my_node_server [2013-03-27T15:54:34+08:00] INFO: Processing file[/etc/profile.d/EDITOR.sh] action nothing (/var/chef/cache/cookbooks/magic_shell/providers/environment.rb line 7)
my_node_server [2013-03-27T15:54:37+08:00] INFO: Chef Run complete in 26.190131 seconds
my_node_server [2013-03-27T15:54:37+08:00] INFO: Running report handlers
my_node_server [2013-03-27T15:54:37+08:00] INFO: Report handlers complete

Done! Now if we log in through SSH we can see the changes:


$ ssh chef@my_node_server
chef@my_node_server's password:
Last login: Wed Mar 27 16:04:49 2013 from XXXXXX
$ cow
cow says moooo

Of course, those changes apply to the whole system, not just the chef user :)

Now we can have the system pick up changes automatically using the chef-client daemon. The configuration file resides in /etc/chef/client.rb; make a backup copy first.
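For example:

$ sudo cp /etc/chef/client.rb /etc/chef/client.rb.orig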

Now you can customize the configuration file; mine is a simple one with these parameters:
$ sudo cat client.rb
log_level        :info
log_location     STDOUT
chef_server_url  "https://api.opscode.com/organizations/andreu2"
validation_client_name "andreu2-validator"
node_name "andreuantonio"

We copy the files andreuantonio.pem and andreu2-validator.pem to /etc/chef/ and start the daemon. A sketch of those steps (assuming the Debian chef package ships a chef-client init script):
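$ sudo cp andreuantonio.pem andreu2-validator.pem /etc/chef/
$ sudo /etc/init.d/chef-client start

Once it is running, we can see it working in the log file: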
/var/log/chef$ cat client.log
[2013-03-27T16:05:43+08:00] INFO: *** Chef 10.12.0 ***
[2013-03-27T16:05:46+08:00] INFO: Run List is [recipe[myalias]]
[2013-03-27T16:05:46+08:00] INFO: Run List expands to [myalias]
[2013-03-27T16:05:46+08:00] INFO: Starting Chef Run for andreuantonio
[2013-03-27T16:05:46+08:00] INFO: Running start handlers
[2013-03-27T16:05:46+08:00] INFO: Start handlers complete.
[2013-03-27T16:05:48+08:00] INFO: Loading cookbooks [magic_shell, myalias]
[2013-03-27T16:05:48+08:00] INFO: Processing magic_shell_alias[rm] action add (myalias::default line 10)
[2013-03-27T16:05:48+08:00] INFO: Adding rm.sh to /etc/profile.d/
[2013-03-27T16:05:48+08:00] INFO: Processing file[/etc/profile.d/rm.sh] action create (/var/chef/cache/cookbooks/magic_shell/providers/alias.rb line 7)
[2013-03-27T16:05:48+08:00] INFO: Processing file[/etc/profile.d/rm.sh] action nothing (/var/chef/cache/cookbooks/magic_shell/providers/alias.rb line 7)
[2013-03-27T16:05:48+08:00] INFO: Processing magic_shell_alias[cow] action add (myalias::default line 15)
[2013-03-27T16:05:48+08:00] INFO: Adding cow.sh to /etc/profile.d/
[2013-03-27T16:05:48+08:00] INFO: Processing file[/etc/profile.d/cow.sh] action create (/var/chef/cache/cookbooks/magic_shell/providers/alias.rb line 7)
[2013-03-27T16:05:48+08:00] INFO: Processing file[/etc/profile.d/cow.sh] action nothing (/var/chef/cache/cookbooks/magic_shell/providers/alias.rb line 7)
[2013-03-27T16:05:48+08:00] INFO: Processing magic_shell_environment[EDITOR] action add (myalias::default line 20)
[2013-03-27T16:05:48+08:00] INFO: Adding EDITOR.sh to /etc/profile.d/
[2013-03-27T16:05:48+08:00] INFO: Processing file[/etc/profile.d/EDITOR.sh] action create (/var/chef/cache/cookbooks/magic_shell/providers/environment.rb line 7)
[2013-03-27T16:05:48+08:00] INFO: Processing file[/etc/profile.d/EDITOR.sh] action nothing (/var/chef/cache/cookbooks/magic_shell/providers/environment.rb line 7)
[2013-03-27T16:05:50+08:00] INFO: Chef Run complete in 4.702428 seconds
[2013-03-27T16:05:50+08:00] INFO: Running report handlers
[2013-03-27T16:05:50+08:00] INFO: Report handlers complete


That's all. Please check the Opscode website to see all the options and capabilities of this software - it's huge!

 

Wednesday, March 27, 2013

Creating a central Git repository with SSH and lighttpd

Looking at the differences between SVN and Git, the most significant seems to be the centralized / decentralized design - SVN keeps all the code on the server, whereas with Git you can work in your own repository and later push to the main one.

To install a Git server we need the packages git and git-daemon-sysvinit. This is a very basic installation; refer to the Git website for the full documentation.

$ sudo apt-get install git-daemon-sysvinit git

To enable the Git daemon on Debian we need to modify the file /etc/default/git-daemon:

GIT_DAEMON_ENABLE=false ----> GIT_DAEMON_ENABLE=true
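If you prefer to make the change from the shell (assuming the default line is present verbatim):

$ sudo sed -i 's/^GIT_DAEMON_ENABLE=false/GIT_DAEMON_ENABLE=true/' /etc/default/git-daemon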

$ sudo /etc/init.d/git-daemon  start
$ sudo /etc/init.d/git-daemon  status
[ ok ] git-daemon is running.

I'm currently developing a small Perl script to produce fax reports, and I will add its files to my repository. First we initialize a local repository:

$ cd ~/projects/faxreport
~/projects/faxreport$ git init
Initialized empty Git repository in /home/amartin/projects/faxreport/.git/

Now we add the files:

$ git add *

Now we commit with a proper description:

~/projects/faxreport$ git commit -m 'Initial FaxReport Release'
[master (root-commit) 36a9ab2] Initial FaxReport Release
 Committer: Andreu <amartin@mydomain.com>

 3 files changed, 197 insertions(+)
 create mode 100755 faxreport.pl
 create mode 100644 hylamail.sh
 create mode 100644 hylareport.sh

Your name / email details are detected automatically, but you can modify your identity as follows:


    git config --global user.name "Your Name"
    git config --global user.email you@example.com
    git commit --amend --reset-author


Now, if we want to host the repository on our server, we initialize a bare one with the --shared option (which sets up group permissions):

~$ mkdir /opt/git
~$ git init --bare --shared /opt/git/faxreport.git
~$ chgrp -R devgroup  /opt/git/faxreport.git
~$ chmod 770 -R  /opt/git/faxreport.git
~$ cd  [my Fax project folder, in my case $HOME/projects/faxreport]
~$ git clone /opt/git/faxreport.git (will create a new subfolder faxreport)
~$ cp [my files] faxreport/
~$ cd faxreport
~$ git add *
~$ git commit -m 'Initial release'
~$ git push /opt/git/faxreport.git master:master (master is the branch we are using by default)
~$ cd /opt/git/faxreport.git
~$ git update-server-info (needed to do clone via http)
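Rather than rerunning update-server-info by hand after every push, you can enable the sample post-update hook that git init created for exactly this purpose (it simply runs git update-server-info):

~$ mv hooks/post-update.sample hooks/post-update
~$ chmod +x hooks/post-update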

Be aware that Git will use SSH for authentication. From a client computer, we clone the content:

$ git clone amartin@myserver:/opt/git/faxreport.git
Cloning into 'faxreport'...
Verification code:
Password:
remote: Counting objects: 6, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 6 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (6/6), done.

$ ls
faxreport
$ ls faxreport/
faxreport.pl  hylamail.sh  hylareport.sh  README.txt

If we don't want to use SSH, we can point our favorite web server at that folder and use HTTP to access the repository. In my case, I changed the default document root of my lighttpd installation to the Git repository folder:

server.document-root        = "/opt/git"
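Alternatively, if you don't want to repoint the whole document root, lighttpd's mod_alias can expose just the repository folder (a sketch, assuming mod_alias is enabled):

server.modules += ( "mod_alias" )
alias.url = ( "/git/" => "/opt/git/" )

The clone URL would then be http://localhost/git/faxreport.git instead.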

And now I can clone my fax scripts using http:

$ git clone http://localhost/faxreport.git
Cloning into 'faxreport'...
$ ls faxreport/
faxreport.pl  hylamail.sh  hylareport.sh  README.txt

Now we upload a change to README.txt via SSH (it works the same locally):

$ git clone amartin@localhost:/opt/git/faxreport.git
Cloning into 'faxreport'...
Password:
remote: Counting objects: 9, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 9 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (9/9), done.
$ cd faxreport/
$ vim README.txt
$ git add README.txt
$ git commit -m 'Modified README.txt license details'
 [master 19434ad] Modified README.txt license details
 Committer: Andreu <amartin@asiarooms.com>
 1 file changed, 1 insertion(+)
$ git push
Password:
Counting objects: 5, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 410 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
To amartin@localhost:/opt/git/faxreport.git
   e48abb5..19434ad  master -> master


Done!


Basic installation of the Puppet configuration manager

Configuration managers can be an essential tool when you need to manage server farms. Among the most popular are CFEngine, Puppet and Chef. CFEngine is a solid choice, but I rather like how easy it is to set up a basic Puppet installation. Let's have a look.

For this example I have:

  • Puppet Master, IP 172.16.0.2, name Nova, domain localdomain
  • Puppet Client, IP 172.16.0.100, name Zealot, domain localdomain

We can either have a proper DNS setup, or just use hosts files for the test:

172.16.0.2 Nova.localdomain Nova
172.16.0.100 Zealot.localdomain Zealot

First I install the packages on the master. One thing to be careful with is DNS and server names; as Puppet uses SSL certificates, this is quite sensitive.

$ sudo apt-get install puppetmaster

Now we create the master's certificate. The parameters are the official name and the alias.

$ puppet cert generate Nova --dns_alt_names=Nova.localdomain

Now we can create our first Puppet manifest. We are going to create a file in the /tmp folder with specific permissions:

$ sudo vim /etc/puppet/manifests/site.pp

class test_class {
    file { "/tmp/hello":
       ensure => present,
       mode   => 644,
       owner  => root,
       group  => root
    }
}

# tell puppet on what clients to run this class
node Zealot {
    include test_class
}
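Before involving any client, it's worth checking that the manifest parses cleanly (the parser subcommand is available in reasonably recent Puppet versions):

$ sudo puppet parser validate /etc/puppet/manifests/site.pp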

Now we go to the client and install the Puppet client package:

$ sudo apt-get install puppet

Now we will contact the master to request a certificate and get it signed. Basically, we make the request from the client and sign it right away on the master. From the client, we execute:

$ sudo puppetd --no-daemonize --server nova --test --waitforcert 60

This launches a request to the master and waits up to 60 seconds for us to sign it. Now, if we go to the master, we can see the requested certificate:

$ sudo puppetca list --all
  "Zealot.localdomain"           (6B:4E:76:01:EB:7C:69:04:69:76:2C:B6:CF:24:37:7A)
+ "nova"                (2E:0F:CC:95:C2:07:37:9F:23:77:A2:C1:AC:F5:E6:36)
+ "nova.localdomain"    (A5:A2:77:33:10:11:A1:67:EF:33:B0:EA:07:54:05:12) (alt names: "DNS:Nova", "DNS:nova.localdomain")

Now we sign it:

$ sudo puppetca sign Zealot.localdomain
notice: Signed certificate request for Zealot.localdomain
notice: Removing file Puppet::SSL::CertificateRequest localhost at '/var/lib/puppet/ssl/ca/requests/Zealot.localdomain.pem'

We got it signed. After a few seconds we will see the client processing the certificate. The output will be similar to this:

info: Requesting certificate
warning: peer certificate won't be verified in this SSL session
info: Caching configuration at /etc/puppet/localconfig.yaml
notice: Starting configuration run
info: Creating state file /var/lib/puppet/state/state.yaml
notice: Finished configuration run in 0.XX seconds 

If we see any error message at this point, most likely there's a problem with certificate names or name resolution. If everything went smoothly, we can now edit the configuration on the client to auto-start Puppet:

$ sudo vim /etc/default/puppet

We set the line to START=yes

After that, we specify the master in /etc/puppet/puppet.conf:
#puppet.conf
[main]
...

[agent]
server=Nova

$ sudo service puppet start

After a while we should see the file /tmp/hello that we declared in the master's manifest. By default Puppet pulls the configuration every 30 minutes; to pull on a different schedule, add the parameter runinterval = X to the [main] section of the client's /etc/puppet/puppet.conf. Note that the value is in seconds (the default of 1800 corresponds to 30 minutes).
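For example, to pull every 15 minutes the client's puppet.conf would carry something like:

#puppet.conf (client)
[main]
runinterval = 900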



Tuesday, March 26, 2013

Basic installation of TrafficServer

TrafficServer is quite a piece of software, donated by Yahoo! to the Apache Software Foundation. In a nutshell, it is a caching proxy server / accelerator for your edge network - with lots of options and capabilities. You can deploy it as:


  • Web Proxy Cache (by default) 
  • Reverse Proxy 
  • Cache Hierarchy 

A Web Proxy Cache deployment is similar to deploying a Squid proxy: your clients hit the proxy server, and if there's no cached copy available the request goes on to the origin server.

Reverse Proxy deployment is quite interesting, because clients don't hit your web server directly: they hit TrafficServer, which then makes the requests on their behalf. That's quite useful, as you add one more layer between your server and the network; however, your web server's access log will only register requests from TrafficServer - although you can make TrafficServer log the client requests for you instead.

Cache Hierarchy is used for strategic regional caching - a way to set up your own content accelerator for overseas traffic.

Its installation is quite simple; on Debian / Ubuntu you can use apt:

$ sudo apt-get install trafficserver

Configuration files are located in the folder /etc/trafficserver, and there are a lot of them! Going through the documentation I saw many cool features for managing the cache, but that's a level beyond this kick start. To configure the ports where it will listen, we can have a look at records.config:

CONFIG proxy.config.http.server_port INT 8080
CONFIG proxy.config.process_manager.mgmt_port INT 8084
CONFIG proxy.config.admin.autoconf_port INT 8083

For this example we will leave the ports as they are. Now we will configure a reverse proxy. For this example I have:

  • Web server running TrafficServer, IP 172.16.0.2, Apache server name Nova.mydomain
  • Client, IP 172.16.0.100

First, I will configure the file /etc/trafficserver/remap.config:

$ sudo vim /etc/trafficserver/remap.config

map http://Nova.mydomain http://localhost
reverse_map http://localhost http://Nova.mydomain

If TrafficServer's management daemon is running, we can reload the changes with the traffic_line tool:
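$ sudo traffic_line -x

If the command fails, it means the management daemon is not running, so we should start it: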

$ sudo /etc/init.d/trafficserver start
$ sudo /etc/init.d/trafficserver status
[ ok ] traffic_server is running.
[ ok ] traffic_manager is running.

If TrafficServer doesn't start, check the file /etc/default/trafficserver and enable the start of traffic_server and traffic_manager:

 # TM_START=no --> TM_START=yes
 # TS_START=no  --> TS_START=yes

Now we ensure the client resolves Nova.mydomain to my web server's IP. If not, we edit the client's hosts file:

172.16.0.2     Nova.mydomain

TrafficServer listens on port 8080, so we need to set up a redirect with iptables - in a production environment, we would ideally have a router doing that translation for us.

$ sudo iptables -t nat -I PREROUTING -s 172.16.0.100 -d 172.16.0.2 -p tcp --dport 80 -j REDIRECT --to-port 8080
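A quick smoke test from the client (assuming curl is installed there):

$ curl -I http://Nova.mydomain/

If the redirect works, the response headers come back through the proxy.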

Now, every time the client tries to contact port 80 it will be redirected to TrafficServer, which will reverse-map the requests to localhost using the server name specified in the headers. If we check our access log, we can see that it is TrafficServer requesting our pages, not the clients:

$ sudo tail /var/log/apache/access.log
127.0.0.1 - - [24/Mar/2013:14:21:07 +0800] "GET / HTTP/1.1" 200 429 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.16) Gecko/20120602 Iceweasel/3.5.16 (like Firefox/3.5.16)"
127.0.0.1 - - [24/Mar/2013:14:21:08 +0800] "GET / HTTP/1.1" 200 429 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.16) Gecko/20120602 Iceweasel/3.5.16 (like Firefox/3.5.16)"

That's all !

Friday, March 22, 2013

Using Google Authenticator and Android for two-step verification of SSH access

Today I tried Google Authenticator, a really good security measure to protect your email - or a nightmare if you lose the phone and the emergency codes :)

I applied it to my SSH access... it works like a charm. Thank you again, Google!

First, you need to install the following package - at least on Debian; other distros might use different package names:

$ sudo apt-get install libpam-google-authenticator

or you might want to build it yourself if the package is not available in your distro:

$ sudo apt-get install git libpam0g-dev make gcc

After that we download the code using git:

$ git clone https://code.google.com/p/google-authenticator/
Cloning into 'google-authenticator'...
remote: Counting objects: 1048, done.
remote: Finding sources: 100% (1048/1048), done.
remote: Total 1048 (delta 504)
Receiving objects: 100% (1048/1048), 2.27 MiB | 575 KiB/s, done.
Resolving deltas: 100% (504/504), done.


Now let's build it:

$ make                                                                                                                          
gcc --std=gnu99 -Wall -O2 -g -fPIC -c  -fvisibility=hidden  -o pam_google_authenticator.o pam_google_authenticator.c                                                           
gcc -shared -g   -o pam_google_authenticator.so pam_google_authenticator.o base32.o hmac.o sha1.o -lpam
gcc --std=gnu99 -Wall -O2 -g -fPIC -c  -fvisibility=hidden  -o demo.o demo.c
gcc -DDEMO --std=gnu99 -Wall -O2 -g -fPIC -c  -fvisibility=hidden  -o pam_google_authenticator_demo.o pam_google_authenticator.c
gcc -g   -rdynamic -o demo demo.o pam_google_authenticator_demo.o base32.o hmac.o sha1.o  -ldl
gcc -DTESTING --std=gnu99 -Wall -O2 -g -fPIC -c  -fvisibility=hidden        \
              -o pam_google_authenticator_testing.o pam_google_authenticator.c
gcc -shared -g   -o pam_google_authenticator_testing.so pam_google_authenticator_testing.o base32.o hmac.o sha1.o -lpam
gcc --std=gnu99 -Wall -O2 -g -fPIC -c  -fvisibility=hidden  -o pam_google_authenticator_unittest.o pam_google_authenticator_unittest.c
gcc -g   -rdynamic -o pam_google_authenticator_unittest pam_google_authenticator_unittest.o base32.o hmac.o sha1.o -lc  -ldl

$ sudo make install
[sudo] password for amartin: 
cp pam_google_authenticator.so /lib/x86_64-linux-gnu/security
cp google-authenticator /usr/local/bin

Now, as the user we want to enable two-step auth for, we execute the command google-authenticator and pretty much answer YES to everything.

Be aware that, with this PAM configuration, every user on the system will need a Google Authenticator code; otherwise they won't be allowed to SSH into the system.

$ google-authenticator

Do you want authentication tokens to be time-based (y/n) y
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/amartin@Zealot%3Fsecret%XXXXXXXXXXXX


(this will be shown on the console)


Your new secret key is: XXXXXXXXXXXX
Your verification code is 111111
Your emergency scratch codes are:
  666666
  222222
  [...]

Do you want me to update your "/amartin/.google_authenticator" file (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) y

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y


Now, install Google Authenticator and Barcode Scanner on your mobile. Once you start the app, set up a new account and use Barcode Scanner to scan the QR code.


After setting up the account, you will see a screen like this every time you open Google Authenticator:




Now, we need to enable the module for SSH:

$ sudo vi /etc/pam.d/sshd

We add the following lines:

# Google authenticator
auth required pam_google_authenticator.so
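(If you need a transition period where users without a code can still log in, the module supports a nullok option - auth required pam_google_authenticator.so nullok - at the cost of weakening the setup.)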

Also, we tell SSH to use challenge-response authentication:

$ sudo vi /etc/ssh/sshd_config
ChallengeResponseAuthentication yes


$ sudo service ssh restart
[ ok ] Restarting OpenBSD Secure Shell server: sshd.


Now, if we try to login from another device:

$ ssh 10.10.17.153
Verification code: (we enter the token we have in the mobile app)
Password: (usual password)
Linux Zealot 2.6.39-2-amd64 #1 SMP Wed Jun 8 11:01:04 UTC 2011 x86_64

Last login: Thu Mar 21 13:59:16 2013 from XXXXX.local

amartin@Zealot:~$


Done! 

Thursday, March 21, 2013

Securing your disk with LUKS disk encryption


The other day, talking about security topics, we agreed that most of the time the problem is not losing a laptop or a disk; the problem is losing the data and having someone access it.

I decided to encrypt my home folder - I don't fancy reinstalling my Debian, and anyway my home folder is the only one with sensitive data. First, install the package:


# apt-get install cryptsetup

This will install what we need to use LUKS, and will also update the initrd to load the dm-crypt and dm-mod modules (needed by LUKS).

After that, I back up my /home partition somewhere else and unmount the partition (/dev/sda7). To increase security, I will fill it with random data so it will be more difficult to crack:

# badblocks -c 10240 -s -w -t random -v /dev/sda7
Checking for bad blocks in read-write mode
From block 0 to 97655807
Testing with random pattern:   3.72% done, 0:38 elapsed. (0/0/0 errors)
...
Reading and comparing:  43.18% done, 24:21 elapsed. (0/0/0 errors)
...
Pass completed, 0 bad blocks found. (0/0/0 errors)

Now it is time to encrypt the device, making sure the passphrase is secure enough yet hard to forget:

# cryptsetup --verbose --verify-passphrase luksFormat /dev/sda7

WARNING!
========
This will overwrite data on /dev/sda7 irrevocably.

Are you sure? (Type uppercase yes): YES      
Enter LUKS passphrase: 
Verify passphrase: 
Command successful.

Done. Now we will open the device under the name of "home":

# cryptsetup luksOpen /dev/sda7 home
Enter passphrase for /dev/sda7:

A new symbolic link will appear in the folder /dev/mapper:

# ls -l /dev/mapper/home
lrwxrwxrwx 1 root root 7 Mar 20 09:54 /dev/mapper/home -> ../dm-0

Now we can verify the status of our disk:

# cryptsetup -v status home
/dev/mapper/home is active.
  type:    LUKS1
  cipher:  aes-cbc-essiv:sha256
  keysize: 256 bits
  device:  /dev/sda7
  offset:  4096 sectors
  size:    195307520 sectors
  mode:    read/write
Command successful.

We will create an ext3 filesystem on it, with the journal option (-j), 1% of blocks reserved for the superuser in case we run out of space (-m 1), and several standard features (-O ...):

# mkfs.ext3 -j -m 1 -O dir_index,filetype,sparse_super /dev/mapper/home
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6111232 inodes, 24413440 blocks
244134 blocks (1.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
746 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

We now need to add the device to /etc/crypttab. The syntax is <target name> <source device> <key file (specify none to be prompted for the passphrase)> <options - I put just luks; other options are described in the manpage>

# cat /etc/crypttab 
# <target name> <source device>         <key file>      <options>
home    /dev/sda7 none luks

Then we update our fstab to tell the system to mount the mapper device, and comment out the previous entry:

# cat /etc/fstab
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
[...]
#UUID=ef7a9c67-aa6a-48e8-ab7f-ea58cb9f856d /home           xfs     defaults        0       2
/dev/mapper/home                        /home           ext3     defaults        1       2
[...]

And we are ready to reboot. The system will ask for the passphrase at boot time, and we will need to type it to get our /home partition mounted. It is a good moment to copy all our files back to that partition, and afterwards close access until the next reboot with this command:

# cryptsetup luksClose home
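Putting it together, the copy-back sequence might look roughly like this (the backup path is just a placeholder):

# cryptsetup luksOpen /dev/sda7 home
# mount /dev/mapper/home /home
# cp -a /path/to/backup/. /home/
# umount /home
# cryptsetup luksClose home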

The script /etc/init.d/cryptmount-early will ask you for the passphrase at boot time, so bear in mind that this will definitely break unattended remote reboots.


Wednesday, March 20, 2013

Using autofs to mount NTFS USB disks

Sometimes when I restore my KDE session, I hit an issue with auto-started applications that read my USB disks straight away - music players, download managers, etc. Yesterday I implemented autofs to solve this, and it works like a charm!

First of all, install the package:

$ sudo apt-get install autofs

In this example I'm going to set up my USB disk for torrents, so the next time KTorrent auto-starts with KDE it won't complain about missing torrent files. My USB disk uses NTFS, with block device /dev/sdb1. I will auto-mount it on /usb/torrents.

The main configuration file for autofs is /etc/auto.master (its location might change between Linux distributions). Additional files are /etc/auto.smb (for CIFS), /etc/auto.misc (ISO9660, etc.) and /etc/auto.net (NFS). We add this line to /etc/auto.master:

# cat /etc/auto.master
[...]
/usb /etc/auto.misc.torrents  -timeout=300

The name auto.misc.torrents is up to you, and the timeout is in seconds.

Now, we create the file /etc/auto.misc.torrents:


# cat /etc/auto.misc.torrents 
torrents   -fstype=ntfs-3g            :/dev/sdb1

We start the service with service autofs start, and then we enter the folder /usb:

#cd /usb
#ls
# <we see nothing>

But, if we enter in the folder torrents:

#cd torrents
#ls
asterisk.tar.bz2   burning  yoj      reaver.tar.bz2  smokeping-2.6.9  
#


The torrents folder will get auto mounted when we try to access it.

*There's a bug in the autofs package when mounting with ntfs-3g: basically it tries to pass the '-n' option, which is not accepted. See below the output from the automounter:


handle_packet: type = 3
handle_packet_missing_indirect: token 333, name torrents, request pid 2236
attempting to mount entry /usb/torrents
lookup_mount: lookup(file): looking up torrents
lookup_mount: lookup(file): torrents -> -fstype=ntfs            :/dev/sdb1
parse_mount: parse(sun): expanded entry: -fstype=ntfs            :/dev/sdb1
parse_mount: parse(sun): gathered options: timeout=399,fstype=ntfs
parse_mount: parse(sun): dequote(":/dev/sdb1") -> :/dev/sdb1
parse_mount: parse(sun): core of entry: options=timeout=399,fstype=ntfs, loc=:/dev/sdb1
sun_mount: parse(sun): mounting root /usb, mountpoint torrents, what /dev/sdb1, fstype ntfs, options 
do_mount: /dev/sdb1 /usb/torrents type ntfs options  using module generic
mount_mount: mount(generic): calling mkdir_path /usb/torrents
mount_mount: mount(generic): calling mount -t ntfs /dev/sdb1 /usb/torrents
spawn_mount: mtab link detected, passing -n to mount
>> ntfs-3g: Unknown option '-n'.
>> ntfs-3g 2010.3.6 integrated FUSE 27 - Third Generation NTFS Driver
>>              Configuration type 1, XATTRS are on, POSIX ACLS are off
>> Copyright (C) 2005-2007 Yura Pakhuchiy
>> Copyright (C) 2006-2009 Szabolcs Szakacsits
>> Copyright (C) 2007-2010 Jean-Pierre Andre
>> Copyright (C) 2009 Erik Larsson
>> Usage:    ntfs-3g [-o option[,...]] <device|image_file> <mount_point>
>> Options:  ro (read-only mount), remove_hiberfile, uid=, gid=,
>>           umask=, fmask=, dmask=, streams_interface=.
>>           Please see the details in the manual (type: man ntfs-3g).
>> Example: ntfs-3g /dev/sda1 /mnt/windows
>> Ntfs-3g news, support and information:  http://ntfs-3g.org
mount(generic): failed to mount /dev/sdb1 (type ntfs) on /usb/torrents
dev_ioctl_send_fail: token = 333
failed to mount /usb/torrents
handle_packet: type = 3
handle_packet_missing_indirect: token 334, name torrents, request pid 2236
dev_ioctl_send_fail: token = 334


I've seen some decent patches around, but lazy as I am, I just did the following to work around it:


root@Nova:/usr/bin# mv ntfs-3g ntfs-3g.orig
root@Nova:/usr/bin# vim ntfs-3g

#!/bin/sh
# strip the unsupported '-n' option and pass everything else to the real binary
line=`echo $@ | sed -e 's/-n//g'`
/usr/bin/ntfs-3g.orig $line
root@Nova:/usr/bin# chmod +x ntfs-3g


It is not ideal... but works for me :)

Monday, March 18, 2013

My take on pathping for Linux

Today I was wondering if there was a ported version of Microsoft's pathping tool. As I couldn't find anything similar, I decided to write a quick Perl script with similar basic functions during my lunch break - lots of features are missing, but I might add them over the following days.

You will need the Net::Traceroute and Net::Ping modules -> cpan -i Net::Traceroute Net::Ping

#!/usr/bin/perl
use strict;
use warnings;
use Net::Traceroute;
use Net::Ping;

die "usage: pathping.pl <host>\n" if @ARGV < 1;

print "Host: $ARGV[0] --> ";

my $totalpings  = 10;    # probes sent per hop
my $pingtimeout = 2;     # seconds to wait for each ICMP reply

my $tr = Net::Traceroute->new(host => $ARGV[0]);

if ($tr->stat == TRACEROUTE_OK) {
    my $hops     = $tr->hops;
    my $distance = $tr->hop_query_time($hops, TRACEROUTE_OK);
    if ($hops > 1) {
        print "Hops: $hops Distance: ${distance}ms\n\n";
        for my $count (1 .. $hops) {
            print "Hop $count: ";
            my $currenthop     = $tr->hop_query_host($count, TRACEROUTE_OK);
            my $currenthoptime = $tr->hop_query_time($count, TRACEROUTE_OK);
            $currenthop = "*" unless $currenthop;
            print "$currenthop  $currenthoptime ms [";
            my $loss = 0;
            for (1 .. $totalpings) {
                # ICMP pings require root privileges
                my $p = Net::Ping->new("icmp", $pingtimeout);
                if ($p->ping($currenthop)) {
                    print "!";
                }
                else {
                    $loss++;
                    print ".";
                }
                $p->close();
            }
            print "] Packet loss: $loss/$totalpings " . ($loss * 100) / $totalpings . "%\n";
        }
    }
}
else {
    print "Host not found? Code: " . $tr->stat . "\n";
}


Be aware that you will need sudo or to be root in order to run this script (raw ICMP sockets require it). This is the output:



$ sudo perl pathping.pl www.av.com

Host: www.av.com --> Hops: 12 Distance: 15.288ms

Hop 1: 10.65.17.2  0.91 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 2: 10.65.16.252  2.322 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 3: 165.21.240.189  12.167 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 4: 165.21.12.68  22.66 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 5: 203.208.190.21  12.139 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 6: 203.208.151.157  12.367 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 7: 203.84.209.229  11.467 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 8: 203.84.209.89  14.301 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 9: 106.10.128.7  12.77 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 10: 106.10.128.25  13.527 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 11: 106.10.128.213  14.378 ms [!!!!!!!!!!] Packet loss: 0/10 0%
Hop 12: 106.10.165.51  15.288 ms [!!!!!!!!!!] Packet loss: 0/10 0%