Friday, November 22, 2013

Installation of OpenVAS from source code

This week I was trying to get OpenVAS working on one of our old Ubuntu laptops. Ubuntu does ship some working packages, but they are fairly old on the Precise release. After a while I managed to get it working; these are the steps it took:

First, download the source code from http://www.openvas.org/install-source.html (for this tutorial I'm using V5). Save the files in /opt/openvas/v5 (for example).

Decompress all the files, and *READ* the README file to check for dependencies. Once all of them are cleared, just follow the install instructions for each package:
cd <package name>; mkdir build; cd build; cmake .. && make && sudo make install
Note: This line will install the contents in /usr/local. Personally I don't install the Greenbone Security Desktop as it is discontinued in later releases; the Greenbone Security Assistant should be good enough.
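
If you prefer to script the whole build, here is a minimal sketch - the package directory names are assumptions based on the v5 tarball names, so adjust them to what you downloaded, and keep the dependency order (libraries first):

for pkg in openvas-libraries-* openvas-scanner-* openvas-manager-* \
           openvas-administrator-* greenbone-security-assistant-*; do
    # build each package in its own subshell so the cd does not leak
    (cd "$pkg" && mkdir -p build && cd build && cmake .. && make && sudo make install)
done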

The next step is to create the om user needed by OpenVAS. We need to generate the server certificate and the client certificate:
sudo /usr/local/sbin/openvas-mkcert -n om -i
sudo /usr/local/sbin/openvas-mkcert-client -n om -i
Now we need to download the plugins for OpenVAS - otherwise, scans come up empty. Given our installation prefix, the plugin path should be /usr/local/var/lib/openvas/plugins:
sudo /usr/local/sbin/openvas-nvt-sync
We will also update the SCAP data - vulnerability information. This should go to /usr/local/var/lib/openvas/scap-data.
sudo /usr/local/sbin/greenbone-scapdata-sync 
Now we launch the OpenVAS scanner daemon, openvassd. At launch time it loads all the plugins we downloaded, updating the NVT collection. If the plugin update went well, loading the plugins will take a while - if the message "All plugins loaded" appears right away, then we updated the plugins in the wrong directory or they cannot be accessed.
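
Assuming the same /usr/local prefix as the other components, the command should simply be:

sudo /usr/local/sbin/openvassd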

The log /usr/local/var/log/openvas/openvassd.messages should show this message:
openvassd 3.3.1 started 
Now let's run the manager daemon and update the NVT cache:
/usr/local/sbin/openvasmd -v --update
Now comes one of the most troublesome moments. Checking the log /usr/local/var/log/openvas/openvasmd.log we can find out what went wrong - almost every time I install it there's something not right. These are messages I found and how I solved them:

openvas_server_new: failed to set credentials key file -> recreate the certificates
openvas_server_connect: failed to shake hands with server: The TLS connection was non-properly terminated. -> check you have the right GnuTLS version (you might have seen a warning during make)

If you see these messages:

md   main:   INFO:2013-11-22 02h37.11 utc:6380:    OpenVAS Manager
md   main:   INFO:2013-11-22 02h37.11 utc:6380:    Set to connect to address 127.0.0.1 port 9391
md   main:   INFO:2013-11-22 02h37.11 utc:6380:    Updating NVT cache.
GLib:WARNING:2013-11-22 02h37.13 utc:6380: g_strcompress: trailing \

It seems all went well! Now let's launch the daemon:
sudo /usr/local/sbin/openvasmd -v
Check that the log content looks like this:

md   main:   INFO:2013-11-22 02h44.01 utc:6399:    OpenVAS Manager
md   main:   INFO:2013-11-22 02h44.02 utc:6400:    Manager bound to address * port 9390
md   main:   INFO:2013-11-22 02h44.02 utc:6400:    Set to connect to address 127.0.0.1 port 9391
lib  auth:WARNING:2013-11-22 02h44.02 utc:6400: Authentication configuration could not be loaded.

Next is the OpenVAS administrator daemon - it handles the OAP (OpenVAS Administration Protocol):
sudo /usr/local/sbin/openvasad
You might see the following warning in the log file /usr/local/var/log/openvas/openvasad.log, but for this example it can be ignored - in other scenarios it would matter:

lib  auth:WARNING:2013-11-25 15h00.36 SGT:30929: Authentication configuration could not be loaded.

Now it's time to launch the Greenbone Security Assistant. Launch the daemon with:
sudo /usr/local/sbin/gsad
And try to connect using https://<your openvas machine>. If you receive SSL errors and can't open the page, you can fall back to the HTTP version. Kill the gsad daemon and launch it like this:
sudo /usr/local/sbin/gsad --http-only
Now we need to create our user - e.g. openvasadmin. We can create it with this command:
sudo /usr/local/sbin/openvasad -c 'add_user' -n openvasadmin -r Admin 
Enter the password, and try it out in Greenbone. For non-Admin users you can also use the tool /usr/local/sbin/openvas-adduser

That's all !

Tuesday, November 5, 2013

ELB Multi AZ and Nginx Proxy

Recently I found out that my nginx proxy is not making use of the multi AZ feature of my Amazon ELB.

The way multiple availability zones work in an ELB is basically by adding an A record to the existing ELB DNS name for round-robin resolution across zones (50-50). Nginx, by default, will cache the initial response of the ELB because the proxy_buffering parameter is enabled by default. Setting it to off stops that caching, and nginx starts to balance across all the AZs.

proxy_buffering off;
Another way would be to set the cache to expire after one minute:
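
For example, a minimal sketch assuming a proxy_cache zone is already defined, using the standard proxy_cache_valid directive:

proxy_cache_valid any 1m;    # cache upstream responses for at most 1 minute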

I have not really tried the following one thoroughly, but we could enable our other AZ instances only for specific locations with proxy_cache_bypass. In my configuration it would look like this:
location /place-with-heavy-load {
    [...]
    set $no_cache 1;
    [...]
}

location / {
    [...]
    proxy_cache_bypass $no_cache;
    proxy_pass http://my-elb-at-aws.com;
    [...]
}

Monday, October 28, 2013

AWS EC2 instance auto scale policy with spot instances and automatic attachment to ELB

Some time ago I was having a look at how to deploy this in our infrastructure. I found some interesting examples in this link, and with the official manual I managed to put together the right policy for us, launching spot instances on demand and automatically registering them into our ELB. Here's my final take.

My variables are:

Security group: EE-FE-Stack-SG
SNS Topic: EE-FE-Stack
Minimum servers: 1
Scale up: CPU > 80% for 5 min, cooldown 300 sec
Scale down: CPU < 65% for 5 min, cooldown 600 sec
You will need to fill in your load balancer name, your AMI images and the availability zone where you want to launch the instances.
The UserData section contains some instructions to try to fetch the aws-cfn-bootstrap Debian package and install it.

When launching the template, you will be asked to introduce the operator's email, max spot price, min servers, max servers, security keys and some other parameters. Some of these values can be modified later when reconfiguring the policy via the web interface or the CLI tools.


{
"AWSTemplateFormatVersion" : "2010-09-09",

"Description" : "EE-FE Stack. Cooldown 300 sec up, 600 down.  [EE-FE-Stack-SG secutirygroup, EE-FE SNS topic]",

"Parameters" : {
"KeyName" : {
    "Description" : "Security key",
    "Type" : "String"
},

"InstanceType" : {
    "Type" : "String",
    "Default" : "m1.small",
    "AllowedValues" : [ "m1.small", "m1.medium", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "c1.xlarge", "cc1.4xlarge" ],
    "Description" : "EC2 instance type (e.g. m1.large, m1.xlarge, m2.xlarge)"
},

"SpotPrice": {
    "Description": "Spot price for application AutoScaling Group",
    "Type": "Number",
    "MinValue" : ".03"
},

"MinInstances" : {
  "Description" : "The minimum number of Workers",
  "Type" : "Number",
  "MinValue" : "0",
  "Default"  : "0",
  "ConstraintDescription" : "Enter a number >=0"
},

"MaxInstances" : {
  "Description" : "The maximum number of Workers",
  "Type" : "Number",
  "MinValue" : "1",
  "Default"  : "4",
  "ConstraintDescription" : "Enter a number >1"
},

"OperatorEmail": {
  "Description": "Email Address",
  "Type": "String"
}
},

"Mappings" : {
"AWSInstanceType2Arch" : {
  "t1.micro"    : { "Arch" : "64" },
  "m1.small"    : { "Arch" : "64" },
  "m1.medium"   : { "Arch" : "64" },
  "m1.large"    : { "Arch" : "64" },
  "m1.xlarge"   : { "Arch" : "64" },
  "m2.xlarge"   : { "Arch" : "64" },
  "m2.2xlarge"  : { "Arch" : "64" },
  "m2.4xlarge"  : { "Arch" : "64" },
  "m3.xlarge"   : { "Arch" : "64" },
  "m3.2xlarge"  : { "Arch" : "64" },
  "c1.medium"   : { "Arch" : "64" },
  "c1.xlarge"   : { "Arch" : "64" },
  "cc1.4xlarge" : { "Arch" : "64HVM" },
  "cc2.8xlarge" : { "Arch" : "64HVM" },
  "cg1.4xlarge" : { "Arch" : "64HVM" }
},

"AWSRegionArch2AMI" : {
  "us-east-1"      : { "32" : "NOT_YET_SUPPORTED", "64" : "<Your AMI here>", "64HVM" : "NOT_YET_SUPPORTED" },
  "us-west-2"      : { "32" : "NOT_YET_SUPPORTED", "64" : "<Your AMI here>", "64HVM" : "NOT_YET_SUPPORTED" },
  "us-west-1"      : { "32" : "NOT_YET_SUPPORTED", "64" : "<Your AMI here>", "64HVM" : "NOT_YET_SUPPORTED" },
  "eu-west-1"      : { "32" : "NOT_YET_SUPPORTED", "64" : "<Your AMI here>", "64HVM" : "NOT_YET_SUPPORTED" },
  "ap-southeast-1" : { "32" : "NOT_YET_SUPPORTED", "64" : "<Your AMI here>", "64HVM" : "NOT_YET_SUPPORTED" },
  "ap-southeast-2" : { "32" : "NOT_YET_SUPPORTED", "64" : "<Your AMI here>", "64HVM" : "NOT_YET_SUPPORTED" },
  "ap-northeast-1" : { "32" : "NOT_YET_SUPPORTED", "64" : "<Your AMI here>", "64HVM" : "NOT_YET_SUPPORTED" },
  "sa-east-1"      : { "32" : "NOT_YET_SUPPORTED", "64" : "<Your AMI here>", "64HVM" : "NOT_YET_SUPPORTED" }
}
},

"Resources" : {
"NotificationTopic": {
  "Type": "AWS::SNS::Topic",
  "Properties": {
    "DisplayName" : "EE-FE-Stack",
    "Subscription": [ {
        "Endpoint": { "Ref": "OperatorEmail" },
        "Protocol": "email" } ]
  }
},

"WebServerGroup" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "Properties" : {
    "AvailabilityZones" : [ "us-west-1a" ],
    "LaunchConfigurationName" : { "Ref" : "LaunchConfig" },
    "MinSize" : { "Ref" : "MinInstances" },
    "MaxSize" : { "Ref" : "MaxInstances" },
    "LoadBalancerNames" : [ "<your load balancer here>"],
    "NotificationConfiguration" : {
      "TopicARN" : { "Ref" : "NotificationTopic" },
      "NotificationTypes" : [ "autoscaling:EC2_INSTANCE_LAUNCH","autoscaling:EC2_INSTANCE_LAUNCH_ERROR","autoscaling:EC2_INSTANCE_TERMINATE", "autoscaling:EC2_INSTANCE_TERMINATE_ERROR"]
    }
    }
},

"CfnUser" : {
    "Type" : "AWS::IAM::User",
    "Properties" : {
        "Path": "/",
        "Policies": [ {
            "PolicyName": "root",
            "PolicyDocument": { "Statement": [ {
                "Effect":"Allow",
                "Action":"cloudformation:DescribeStackResource",
                "Resource":"*"
            } ] }
        } ]
    }
},

"HostKeys" : {
    "Type" : "AWS::IAM::AccessKey",
    "Properties" : {
        "UserName" : { "Ref" : "CfnUser" }
    }
},

"LaunchConfig" : {
  "Type" : "AWS::AutoScaling::LaunchConfiguration",
  "Metadata" : {
    "Comment" : "Create a single webserver",
    "AWS::CloudFormation::Init" : {
      "config" : {
        "packages" : {
            "yum" : {

            }
        },
        "files" : {

        }
      }
    }
  },
  "Properties" : {
    "KeyName" : { "Ref" : "KeyName" },
    "SpotPrice" : { "Ref" : "SpotPrice" },
    "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                                      { "Fn::FindInMap" : [ "AWSInstanceType2Arch", {     "Ref" : "InstanceType" },
                                      "Arch" ] } ] },
    "SecurityGroups" : [ "WEB" ],
    "InstanceType" : { "Ref" : "InstanceType" },
    "UserData"       : { "Fn::Base64" : { "Fn::Join" : ["", [
      "#!/bin/bash\n",
      "wget -O /opt/aws-cfn-bootstrap.deb http://pkg.camptocamp.net/staging/pool/sysadmin/a/aws-cfn-bootstrap/aws-cfn-bootstrap_1.3-1_all.deb\n",
      "dpkg -i /opt/aws-cfn-bootstrap.deb\n",
      "# Install the Worker application\n",
      "/opt/aws/bin/cfn-init ",
      "         --stack ", { "Ref" : "AWS::StackId" },
      "         --resource LaunchConfig ",
      "         --configset ALL",
      "         --region ", { "Ref" : "AWS::Region" }, "\n"
    ]]}}      
  }
},


"WebServerScaleUpPolicy" : {
  "Type" : "AWS::AutoScaling::ScalingPolicy",
  "Properties" : {
    "AdjustmentType" : "ChangeInCapacity",
    "AutoScalingGroupName" : { "Ref" : "WebServerGroup" },
    "Cooldown" : "300",
    "ScalingAdjustment" : "1"
  }
},
"WebServerScaleDownPolicy" : {
  "Type" : "AWS::AutoScaling::ScalingPolicy",
  "Properties" : {
    "AdjustmentType" : "ChangeInCapacity",
    "AutoScalingGroupName" : { "Ref" : "WebServerGroup" },
    "Cooldown" : "600",
    "ScalingAdjustment" : "-1"
  }
},

  "WorkerThreadHigh": {
   "Type": "AWS::CloudWatch::Alarm",
   "Properties": {
      "AlarmDescription": "Scale-up if Worker Thread Vs. Idle Percent > 80% for 5min",
      "MetricName": "CPUUtilization",
      "Namespace": "AWS/EC2",
      "Statistic": "Average",
      "Period": "300",
      "EvaluationPeriods": "2",
      "Threshold": "80",
      "AlarmActions": [ { "Ref": "WebServerScaleUpPolicy" } ],
      "Dimensions": [
        {
          "Name": "AutoScalingGroupName",
          "Value": { "Ref": "WebServerGroup" }
        }
      ],
      "ComparisonOperator": "GreaterThanThreshold"
    }
  },
  "WorkerThreadLow": {
   "Type": "AWS::CloudWatch::Alarm",
   "Properties": {
      "AlarmDescription": "Scale-down if CPU < 65% for 5 minutes",
      "MetricName": "CPUUtilization",
      "Namespace": "AWS/EC2",
      "Statistic": "Average",
      "Period": "300",
      "EvaluationPeriods": "2",
      "Threshold": "65",
      "AlarmActions": [ { "Ref": "WebServerScaleDownPolicy" } ],
      "Dimensions": [
        {
          "Name": "AutoScalingGroupName",
          "Value": { "Ref": "WebServerGroup" }
        }
      ],
      "ComparisonOperator": "LessThanThreshold"
    }
  }
}

}
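
To launch the stack from the command line, here is a minimal sketch using the AWS CLI - the stack name, template file name and parameter values are hypothetical, so adjust them to your setup; CAPABILITY_IAM is required because the template creates an IAM user:

aws cloudformation create-stack \
    --stack-name EE-FE-Stack \
    --template-body file://ee-fe-stack.template \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=KeyName,ParameterValue=my-key \
                 ParameterKey=SpotPrice,ParameterValue=0.05 \
                 ParameterKey=OperatorEmail,ParameterValue=ops@example.com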

Friday, October 25, 2013

DNS error after migrating Chef from version 10 to 11

This week I'm testing for any possible errors if we migrate our Chef to version 11.0.6 - I know version 11.0.8 is available, but it gives a Ruby timezone error on Ubuntu that I have yet to deal with.

So far the migration seems smooth, following the steps in the official guide. Restoring the backup was OK and a test client was able to communicate with the server. However, after modifying a cookbook I was getting this error when applying the template change:

FATAL: SocketError: template[/etc/rsyslog.d/22-XXXXXX.conf] (Deploy_configuration_na_olive-logging-rsyslog::default line 13) had an error: SocketError: Error connecting to https://ip-10-XX-XX-XX.us-west-5.compute.internal/bookshelf/organization-00000000000000000000000000000000/checksum-b3f32a70cedbe6de9ac38?AWSAccessKeyId=XXXXXXXXX&Expires=1382603029&Signature=XXXXXXXX - getaddrinfo: Name or service not known

Straight away we see the message getaddrinfo: Name or service not known. Being an AWS EC2 instance, it was unable to resolve the name because the servers are in different geographic regions. However, that is not the real issue: the problem is that the Chef server is using the internal DNS name for its settings.

To rectify this, modify (or create) the file /etc/chef-server/chef-server.rb with this content:

server_name = "<your chef server external DNS>"
bookshelf['url'] = "https://#{server_name}"
bookshelf['vip'] = server_name
nginx['url'] = "https://#{server_name}"
nginx['server_name'] = server_name
lb['api_fqdn'] = server_name
lb['web_ui_fqdn'] = server_name
api_fqdn = server_name

Then, as the migration user, reload the settings and restart the server:

sudo chef-server-ctl reconfigure ; sudo chef-server-ctl restart
Now try again to apply the changes on the client. If it doesn't work, try to kill all Chef processes and start them from scratch - for some reason I needed to do this, as my Chef processes were not being killed.

Tuesday, October 22, 2013

Creating your own Debian packages

In this example I'll create a basic Tor Browser Bundle Debian package - there are properly maintained Debian packages for Tor; this is only for testing purposes.

We will install the content into /usr/local and will create some links to launch the software into /usr/local/bin.

First download the latest Tor package (I assume we are on 64-bit) and extract the files into /tmp/tor-browser_en-US.

Now in your home (or testing) folder, create a subfolder named "tor-browser-bundle". Inside it, create the subfolders DEBIAN and usr/local:

$ mkdir -p tor-browser-bundle/DEBIAN
$ mkdir -p tor-browser-bundle/usr/local
Now move the /tmp/tor-browser_en-US folder into the tree we just created:

$ mv /tmp/tor-browser_en-US tor-browser-bundle/usr/local
Now we create the file DEBIAN/control containing this template:

Package: tor-bundle-browser
Priority: optional
Section: devel
Installed-Size: 120
Maintainer: Andreu Martin
Architecture: amd64
Version: 0.1
Depends: libc6 (>= 2.0)
Description: Tor bundle browser test
Now we create the file tor-browser-bundle/DEBIAN/postinst with the post-installation tasks. This file must have permissions >=0555 and <=0775.

$ cat tor-browser-bundle/DEBIAN/postinst 
ln -s /usr/local/tor-browser_en-US/start-tor-browser /usr/local/bin/
cat <<-EOF > /usr/local/bin/start-tor-firefox
#!/bin/sh
/usr/local/tor-browser_en-US/App/Firefox/firefox -no-remote -profile /usr/local/tor-browser_en-US/Data/profile
EOF
chmod 755 /usr/local/bin/start-tor-firefox
$ chmod 775 tor-browser-bundle/DEBIAN/postinst
Now we are ready to build the .deb:

$ dpkg-deb -z9 -Zgzip --build tor-browser-bundle
(-z9 specifies compression level - 0 to 9, -Zgzip for compression type - gzip, xz, bzip2, lzma, or none)

Now we have a tor-browser-bundle.deb ready to go.
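
Before shipping it, we can sanity-check the result with dpkg-deb and do a test install (same file name as above):

$ dpkg-deb --info tor-browser-bundle.deb       # shows the control file
$ dpkg-deb --contents tor-browser-bundle.deb   # lists the files it will install
$ sudo dpkg -i tor-browser-bundle.deb          # installs the package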

Additional reading on Debian packages is available in the official manual.

Friday, August 30, 2013

Auto register chef clients with EC2 auto scaling

I recently deployed an auto scale policy at work, increasing front ends on demand. One of the challenges I found was registering each Chef client on boot and getting it removed on termination.

First things first: the AMI must contain - or must obtain at boot time through scripts; there is some good material people have posted online about this - the following things:

  • a chef client installation
  • a valid client.rb file
  • a role file
  • the validation certificate (based on your chef setup)
In my case, my AMI already has a Chef client installed, with only the three files mentioned above in the /etc/chef folder. My client.rb looks like this:


$ cat /etc/chef/client.rb
log_level        :warn
log_location     STDOUT
chef_server_url  "http://ec2-.us.compute.amazonaws.com:4000"
validation_client_name "chef-validator" 

My JSON file specifying the run list - just one role, frontend. Other cookbooks and roles can be added.

$ cat /etc/chef/first-boot.json
{"run_list":["role[frontend]"]}

As you may have noticed, my client.rb does not specify any node_name. We add this parameter at boot time - each node gets a different node_name. We also create a knife configuration file to delete the node and client on shutdown - I place it in /root/.chef, but you can choose anywhere else:

$ sudo cat /root/.chef/knife.rb

log_level                :info
log_location             STDOUT
client_key               '/etc/chef/client.pem'
validation_client_name   'chef-validator'
validation_key           '/etc/chef/validation.pem'
chef_server_url          'http://ec2-us-west-1.compute.amazonaws.com:4000'
cache_type               'BasicFile'

OK, we have the skeleton ready. Now we just need to validate the client in order to register the client and the node. For this, I use a script named /etc/init.d/chef-register:
#!/bin/sh
### BEGIN INIT INFO
# Provides:           chef-register
# Required-Start:    $local_fs $remote_fs $network $syslog
# Required-Stop:     $local_fs $remote_fs $network $syslog
# Default-Start:      2 3 4 5
# Default-Stop:      0 2 3 4 5 6
# Short-Description:  registers / deletes chef client
### END INIT INFO
case $1 in
        start)
        echo "***** Registering node and client *****"
        instance_id=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
        echo "node_name \"$instance_id\"" >> /etc/chef/client.rb
        echo "node_name '$instance_id'" >> /root/.chef/knife.rb
        /usr/bin/chef-client -j /etc/chef/first-boot.json
        ;;
        stop)
        echo "***** Deleting node and client *****"
        instance=`cat /etc/chef/client.rb | grep node_name | cut -d '"' -f 2`
        knife node delete $instance -c /root/.chef/knife.rb -y
        knife client delete $instance -c /root/.chef/knife.rb -y
        ;;
        *)
        echo $0 start or stop
        ;;
esac
Note that only once we validate the client will we obtain the client.pem in the /etc/chef folder.

To enable this script, we execute chmod +x /etc/init.d/chef-register and update-rc.d chef-register defaults (for Red Hat based distros you will need to use chkconfig). On start, it will base the client name on the instance-id and put this parameter into both client.rb and knife.rb. On termination, it will delete the node and client - otherwise it gets annoying receiving timeouts when deploying settings to the role.
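
Before baking the script into the AMI, it can be exercised by hand:

$ sudo /etc/init.d/chef-register start   # registers this node and client with the Chef server
$ sudo /etc/init.d/chef-register stop    # deletes the node and client again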

That's it !

Wednesday, August 28, 2013

Optimizing linux TCP settings

Optimizing the TCP parameters can be tricky, even more so when servers are receiving hundreds of concurrent connections - e.g. load balancers.

Optimizing these settings depends a lot on the environment and the nature of the connections. For instance, these are some of the settings I have:

The FIN timeout in the TCP protocol defaults to 60 seconds. I tend to reduce it to 20; some people reduce it further. In the end, it's just a goodbye from one IP to another :)

net.ipv4.tcp_fin_timeout = 20

For the TCP buffers I have the following settings:

(r = receive, w = send)
# 8MB for core mem max, default 65K
net.core.rmem_max = 8388608 
net.core.wmem_max = 8388608
net.core.rmem_default = 65536
net.core.wmem_default = 65536

#tcp socket buffers, minimum 4 KB, initial 87380 bytes and max 8 MB
net.ipv4.tcp_rmem = 4096 87380 8388608 
net.ipv4.tcp_wmem = 4096 65536 8388608 
net.ipv4.tcp_mem = 8388608 8388608 8388608

Bear in mind that tcp_wmem overrides net.core.wmem_default; in my case both are 65K.

Also, enable TCP window scaling:

net.ipv4.tcp_window_scaling = 1
On my Debian box I have this file loading the settings at boot time:

$ cat /etc/sysctl.d/20-TCP-Tuning.conf
#FIN timeout to 20 sec
net.ipv4.tcp_fin_timeout = 20
# 8MB for core mem max, default 65K
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.rmem_default = 65536
net.core.wmem_default = 65536
#tcp socket buffers, minimum 4 KB, initial 87380 bytes and max 8 MB
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608
net.ipv4.tcp_mem = 8388608 8388608 8388608
#Enable TCP window scaling
net.ipv4.tcp_window_scaling = 1
To load these settings on the fly we can use sudo sysctl -p /etc/sysctl.d/20-TCP-Tuning.conf
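
To verify the values the kernel actually picked up, we can query them back (the output should match the file above):

$ sysctl net.ipv4.tcp_fin_timeout net.ipv4.tcp_rmem net.ipv4.tcp_wmem
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608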


Thursday, August 22, 2013

Bash shell history commands

Some handy tips while working with bash history on linux.

The basics, defining the history file:

$ export HISTFILE=~/.bash_history

Disabling history for your session:

$ export HISTSIZE=0

HISTSIZE affects the in-memory store, to set how many lines the file can contain we use HISTFILESIZE.

Adding a time stamp on the history list:

$ export HISTTIMEFORMAT='[%F %T]  '

History will look like this:

    1  [2013-06-09 10:40:12]   cat /etc/issue
    2  [2013-06-09 10:40:12]   clear
    3  [2013-06-09 10:40:12]   find /etc -name *.conf

%F Equivalent to %Y - %m - %d
%T Equivalent to time ( %H : %M : %S )

We can omit some commands from the history file, such as the ones starting with a space, duplicates, or both:

$ export HISTCONTROL=<ignorespace|ignoredups|ignoreboth>

And to ignore a certain command - in this case, the history command itself:

$ export HISTIGNORE="history"

To recall previous commands, we can press CTRL + R and start typing the command; the matching entry in the history record will come up.

To get a full list of the current history we can use the history builtin:

$ history
[...]
 1787  ssh sec02
 1788  ssh sec03
 1789  history
$

To delete the history:

$ history -c
$ history
  791  history
$


To delete just one line, for example 791:

history -d 791

The bang bang (!!) feature allows executing recent commands easily. For example:

$ echo uno
uno
$ echo dos
dos
$ echo tres
tres
$ history
 1799  clear
 1800  echo uno
 1801  echo dos
 1802  echo tres
 1803  history
$

To execute the last command (in this case, history):

$ !!
history
 1799  clear
 1800  echo uno
 1801  echo dos
 1802  echo tres
 1803  history
$

To execute the command two positions back in the list:

$ !-2
echo tres
tres
$


We have something similar for arguments, using !$. For example, we ping a host and then telnet to the same host, reusing the argument with !$:

$ ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_req=1 ttl=64 time=0.036 ms
[...]
$ telnet !$ 22
telnet localhost 22
Trying ::1...
Connected to localhost.
Escape character is '^]'.
SSH-2.0-OpenSSH_6.0p1 Debian-4

There are other combinations with the bang; for more information have a look at the Linux Documentation Project.

Friday, July 26, 2013

Configuring Amazon SES with Postfix

First, we need to be subscribed to the service and validate some email addresses. You can read about this process on their website.

Let's install postfix in our server:

    $ sudo apt-get install postfix
Following the debconf interface, set the host to use a relay host, set the right system mail name (i.e. mydomain.com), and specify the relay host as email-smtp.us-east-1.amazonaws.com (or the one we have subscribed to). If it comes up, also remove the internet domain from the mydestination table.

Now, add these lines to the /etc/postfix/main.cf file:

myhostname = mydomain.com
smtp_generic_maps = hash:/etc/postfix/generic
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sesaccount
smtp_sasl_security_options = noanonymous
smtp_use_tls = yes
smtp_tls_security_level = may
smtp_tls_note_starttls_offer = no
smtp_generic_maps will masquerade your local addresses as the external address you have validated with Amazon. For this example it is no.reply@mydomain.com, and the /etc/postfix/generic file would look like this:
www-data@mydomain.com    no.reply@mydomain.com
ubuntu@mydomain.com      no.reply@mydomain.com
After editing the file we need to generate the hash:

    $ sudo postmap /etc/postfix/generic

smtp_sasl_password_maps enables authentication with Amazon SES. Basically, we need to define our AWS credentials in /etc/postfix/sesaccount, in the format <server> <access key>:<secret key>:

email-smtp.us-east-1.amazonaws.com:25 AXXXXX:AXXXXXXXXXXXXXXXXXXXXX

Likewise, we generate the hash:

    $ sudo postmap /etc/postfix/sesaccount

Now issue a postfix reload to get the settings active.

    $ sudo postfix reload

To test the settings I like to use the package bsd-mailx. We install the package and send a test mail:
    $ sudo apt-get install bsd-mailx
    $ mail andres.martin@mydomain.com
    Subject: test
   
    .
    Cc:
    Null message body; hope that's ok

If everything went well, an email should have arrived.

Tuesday, June 25, 2013

Modify CPU affinity in Linux

Linux has been able to use multi-core CPUs for a long time. The ability to assign processes to some or all of them is called CPU affinity.

By default, unless there's a compatibility issue, Linux will use all your available processors. If we would like to modify that policy we can use a tool called taskset, which on Debian ships in the util-linux package (already installed on most systems):

$ sudo apt-get install util-linux

Now let's see how many CPUs we have (you probably already know, though :)):

$ cat /proc/cpuinfo

or, execute the command top and then press 1. This will break down the CPU status:

top - 17:52:13 up  9:00, 21 users,  load average: 1.75, 1.77, 1.71
Tasks: 210 total,   3 running, 207 sleeping,   0 stopped,   0 zombie
%Cpu0  : 33.1 us,  7.8 sy,  0.0 ni, 58.1 id,  0.7 wa,  0.0 hi,  0.3 si,  0.0 st
%Cpu1  : 13.1 us, 16.4 sy,  0.0 ni, 68.1 id,  2.3 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  : 15.7 us,  6.8 sy,  0.0 ni, 77.5 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  9.4 us, 26.2 sy,  0.0 ni, 62.1 id,  2.0 wa,  0.0 hi,  0.3 si,  0.0 st
KiB Mem:   8188992 total,  7601696 used,   587296 free,    64868 buffers
KiB Swap: 15624188 total,     1320 used, 15622868 free,  2751376 cached

In our example we have 4 CPUs. Let's run the Kaffeine media player on the first two:

taskset 03 kaffeine

The hex mask works as the man page details:
0x00000001 (01) for CPU #1
0x00000002 (02) for CPU #2
0x00000003 (03) for CPUs #1 and #2
0x00000004 (04) for CPU #3

and so on. An 'f' mask means the process may run on all the processors.

We can also specify the processor instead of the mask:

taskset -c 3 <command to execute>

Let's check what affinity a process has:

$ taskset -p 700
pid 700's current affinity mask: 3

We can modify it:

$ taskset -p 03 700

If we want to assign a processor range:

$ taskset -pc 2-3 700
pid 700's current affinity list: 0,1
pid 700's new affinity list: 2,3

Are we using all the CPUs in the system? An easy way to check is the /proc/interrupts file:

$ cat /proc/interrupts 
           CPU0       CPU1       CPU2       CPU3       
  0:         77         34         19         22   IO-APIC-edge      timer
  1:          8          7         11          9   IO-APIC-edge      i8042
  8:          0          0          0          1   IO-APIC-edge      rtc0
  9:      10052       9947      10066       9977   IO-APIC-fasteoi   acpi
 10:      64476      64276      64381      64603   IO-APIC-edge      ite-cir
 12:        348        303        347        327   IO-APIC-edge      i8042
 16:         33         37         36         37   IO-APIC-fasteoi   mmc0, ehci_hcd:usb1
 17:          0          0          0          0   IO-APIC-fasteoi   brcmsmac
 18:          0          0          0          0   IO-APIC-fasteoi   ips
 19:        117        152        120        130   IO-APIC-fasteoi   firewire_ohci
 23:    8770094    8769570    8768178    8770493   IO-APIC-fasteoi   ehci_hcd:usb2
 41:     300133     300414     300139     300056   PCI-MSI-edge      ahci
 42:         56         58         57         59   PCI-MSI-edge      snd_hda_intel
 43:         25         26         24         25   PCI-MSI-edge      snd_hda_intel
 44:    1093397    1093774    1094078    1092726   PCI-MSI-edge      eth0
 45:     463403     463582     464719     463726   PCI-MSI-edge      fglrx[0]@PCI:2:0:0
NMI:         34         22         16         14   Non-maskable interrupts
LOC:   48777039   50875209   50309525   47323147   Local timer interrupts
SPU:          0          0          0          0   Spurious interrupts
PMI:         34         22         16         14   Performance monitoring interrupts
IWI:          0          0          0          0   IRQ work interrupts
RES:   15846948   16275344    8478731    8848893   Rescheduling interrupts
CAL:      59602      55581      89923      81541   Function call interrupts
TLB:     180524     182431     106818     105437   TLB shootdowns
TRM:          0          0          0          0   Thermal event interrupts
THR:          0          0          0          0   Threshold APIC interrupts
MCE:          0          0          0          0   Machine check exceptions
MCP:        112        112        112        112   Machine check polls
ERR:          0

If any of the CPU columns were full of zeros, that would mean trouble. Otherwise, all our CPUs are getting plenty of work.

Wednesday, June 12, 2013

Traffic control in Linux: classifying and prioritizing traffic 2/2

According to the manual, tc uses the following rules for bandwidth definitions:

mbps = 1024 kbps = 1024 * 1024 bps => byte/s
mbit = 1024 kbit => kilo bit/s.
mb = 1024 kb = 1024 * 1024 b => byte
mbit = 1024 kbit => kilo bit.

Internally, the number is stored in bps and b.

When tc prints the rate, it uses following :

1Mbit = 1024 Kbit = 1024 * 1024 bps => byte/s
The kernel will honor the TOS field in the packets (Type Of Service), which is defined as:

TOS     Bits  Means                    Linux Priority    Band
------------------------------------------------------------
0x0     0     Normal Service           0 Best Effort     1
0x2     1     Minimize Monetary Cost   1 Filler          2
0x4     2     Maximize Reliability     0 Best Effort     1
0x6     3     mmc+mr                   0 Best Effort     1
0x8     4     Maximize Throughput      2 Bulk            2
0xa     5     mmc+mt                   2 Bulk            2
0xc     6     mr+mt                    2 Bulk            2
0xe     7     mmc+mr+mt                2 Bulk            2
0x10    8     Minimize Delay           6 Interactive     0
0x12    9     mmc+md                   6 Interactive     0
0x14    10    mr+md                    6 Interactive     0
0x16    11    mmc+mr+md                6 Interactive     0
0x18    12    mt+md                    4 Int. Bulk       1
0x1a    13    mmc+mt+md                4 Int. Bulk       1
0x1c    14    mr+mt+md                 4 Int. Bulk       1
0x1e    15    mmc+mr+mt+md             4 Int. Bulk       1
As an example, from the RFC 1349 we can see these definitions:

TELNET -> 1000 (8 in decimal) => minimize delay
FTP Control -> 1000 (8 in decimal) => minimize delay
FTP Data -> 0100 (4 in decimal) => maximize throughput
By modifying the TOS field on the traffic we can get our online games to have priority over the rest of the network (OK, perhaps there are some other uses too? :)


# iptables -t mangle -N games
# iptables -t mangle -A games -p tcp -s <my playstation ip> -j RETURN
# iptables -t mangle -A games -j TOS --set-tos Maximize-Throughput
# iptables -t mangle -A games -j RETURN
# iptables -t mangle -A POSTROUTING -p tcp -m tos --tos Minimize-Delay -j games
Now, let's take the scenario where we have an Internet line connected to our eth0 with a bandwidth of 10 Mbit/s. We want to reserve 30% of it for web browsing, 30% for our FTP server and the rest for an online game which uses port 20000. First we create the qdisc:

# tc qdisc add dev eth0 root handle 1: htb default 90
# tc class add dev eth0 parent 1: classid 1:1 htb rate 10000kbit ceil 10000kbit
(we set the root class to 10 Mbit/s)

The next classes will have 30%, 30% and 40% of the bandwidth:
# tc class add dev eth0 parent 1:1 classid 1:10 htb rate 3000kbit ceil 10000kbit
# tc class add dev eth0 parent 1:1 classid 1:20 htb rate 3000kbit ceil 10000kbit
# tc class add dev eth0 parent 1:1 classid 1:30 htb rate 4000kbit ceil 10000kbit

Now we will use Stochastic Fairness Queueing (SFQ) to schedule traffic inside the defined classes. We will use the recommended value of 10 seconds for the queue perturbation:

# tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
# tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
# tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10

To finish, we will use tc filters to identify the traffic for each class (explained in the previous post). The destination port for browsing is 80, the source port for our FTP server is 20, and the destination port for the game is 20000:

# tc filter add dev eth0 parent 1:0 protocol ip u32 match ip dport 80 0xffff classid 1:10
# tc filter add dev eth0 parent 1:0 protocol ip u32 match ip sport 20 0xffff classid 1:20
# tc filter add dev eth0 parent 1:0 protocol ip u32 match ip dport 20000 0xffff classid 1:30
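
To verify that packets are actually landing in the right classes, tc can print per-qdisc and per-class statistics:

# tc -s qdisc show dev eth0
# tc -s class show dev eth0
# tc filter show dev eth0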


That's it. It is interesting to have a look at the manual for other options like bursting, etc.

Monday, May 27, 2013

Traffic control in Linux: classifying and prioritizing traffic 1/2

In Linux we can use the tool tc (traffic control) to manage the traffic and provide some QoS. In this example, we are going to classify traffic according to the following:

  • 10.65.18.0/24 will have priority 100 and classification as 1:10
  • 10.65.20.0/24 will have priority 50 and classification as 1:20
  • SSH traffic will have a priority of 10 and classified as 1:2
To classify networks we can use the route classifier in tc. To classify traffic depending on its packets, protocol or ports we can use the u32 classifier.

First thing, we will prepare the interface we want to use. There are 3 different classful qdiscs (HTB, CBQ, PRIO). In our example we will use HTB on eth0:

$ sudo tc qdisc add dev eth0 root handle 1:0 htb
We add a qdisc (queuing discipline) to eth0, handling the top of the classifier chain 1:0 using the classful qdisc htb.

Now, we are going to specify that traffic to 10.65.18.0/24 will be classified as 1:10 with priority 100:

$ sudo tc filter add dev eth0 parent 1:0 protocol ip prio 100 \
      route to 10 classid 1:10
$ sudo ip route add 10.65.18.0/24 via 10.65.17.1 dev eth0 realm 10
We are adding a filter to eth0, specifying the ip protocol and routing to realm 10 using the class id 1:10. After that, we create realm 10 with ip route.

Now, we will do the same for the next network with priority 50:

$ sudo tc filter add dev eth0 parent 1:0 protocol ip prio 50 \
      route to 20 classid 1:20
$ sudo ip route add 10.65.20.0/24 via 10.65.17.1 dev eth0 realm 20
Now we will classify and prioritize the SSH traffic. With u32 we can match attributes from the packets as the documentation states, but we will keep it simple, specifying the destination port and the protocol number (TCP is protocol 0x6):

$ sudo tc filter add dev eth0 parent 1:0 prio 10 u32 \
        match tcp dst 22 0xffff \
        match ip protocol 0x6 0xff \
        flowid 1:2
The next post explains how to limit traffic bandwidth for each class.

For more information you can visit The Linux Documentation Project and Linux Advanced Routing & Traffic Control.


Sunday, May 19, 2013

Bonding interfaces in Debian

Bonding interfaces allows us to team multiple network interfaces into a single one. We have several options with the Linux bonding driver (for some of them we might need to configure the switch ports to use LACP):

  • Balance-rr (mode 0)
  • Active-backup (mode 1)
  • Balance-xor (mode 2)
  • Broadcast (mode 3)
  • 802.3ad (mode 4)
  • Balance-tlb (mode 5)
  • Balance-alb (mode 6)
A full description of each mode is available on the kernel.org website.

For this example we will use basic mode 1, using eth0 and eth1 to create bond0. First, we create the module configuration we need in the folder /etc/modprobe.d:

# vim bonding.conf
alias netdev-bond0 bonding
options bond0 miimon=100 mode=1
Note that the miimon parameter is the link monitoring frequency in milliseconds (how often the link will be inspected to check whether it is failing or busy).

When we create a bond, we need to put the teamed interfaces in slave mode. To make this easy we will install the package ifenslave:

$ sudo apt-get install ifenslave
This package will generate some scripts in the /etc/network/if-up.d and /etc/network/if-pre-up.d directories that will configure the slaves to serve the master (bonding) interface.

Now we configure the file /etc/network/interfaces. We comment out the entries for our physical interfaces and specify the configuration for the bond0 interface:
$ sudo vim /etc/network/interfaces
auto lo
iface lo inet loopback

# The primary network interface
#allow-hotplug eth0
#iface eth0 inet dhcp
auto bond0
iface bond0 inet dhcp
        slaves eth0 eth1
        bond-mode 1
        bond-miimon 100

In this case we have duplicated information already passed to the module. If we need multiple bondX interfaces, we need to declare each configuration in this file.

Now we restart the networking service and we get our interface working:

$ sudo service networking stop
$ sudo service networking start

Let's check:

$ /sbin/ifconfig bond0
bond0     Link encap:Ethernet  HWaddr 08:00:27:e1:89:77 
          inet addr:10.65.17.158  Bcast:10.65.17.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fee1:8977/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:27926 errors:0 dropped:0 overruns:0 frame:0
          TX packets:599 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2230068 (2.1 MiB)  TX bytes:40408 (39.4 KiB)

If we have issues bringing up the bond0 interface, it could be that the slaves are not well defined or not set as slaves. We can check with mii-tool:

# mii-tool bond0
bond0: 10 Mbit, half duplex, link ok

If instead we get a message like "No MII transceivers found", most likely there's an issue with the slaves.
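
Another quick health check is the bonding driver's proc interface, which shows the active mode and the state of each slave:

$ cat /proc/net/bonding/bond0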

For a full list of options available have a look at the kernel documentation for bonding.

Tuesday, May 14, 2013

Network load sharing using multiple interfaces in Linux

One way to share load between two or more interfaces is through the traffic control settings. This is not to be confused with high availability: if one of the cards goes down, it might take a while to adjust. I'd rather use bonding for that, to be explained in a future post.

Let's say we have a host, Client1, with eth0 10.65.17.158 and eth1 10.65.17.118. We want to unify these two cards to work as one.

First, we need to load the module sch_teql using the command:

# modprobe sch_teql

Now we add both interfaces to the TEQL device:
# tc qdisc add dev eth0 root teql0
# tc qdisc add dev eth1 root teql0
Now we bring up the teql0 device and give it a valid IP:

# ip link set dev teql0 up
# ip addr add dev teql0 10.65.17.154/24
Now we can use the new IP 10.65.17.154 as a load-sharing device between eth0 and eth1. Packets will arrive at the interfaces with a destination IP other than their own, so they would be discarded. To avoid this we can disable the rp_filter on each device:

# echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
# echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter 
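
Note that these echo commands do not survive a reboot. To persist them we can put the equivalent keys in a sysctl file (the file name here is just an example):

# /etc/sysctl.d/30-teql.conf
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
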
That's all. For more info you can visit the official website for Linux Advanced Routing & Traffic Control.

Monday, May 13, 2013

Removing spaces from file names using the $IFS Bash shell variable.

At some point all Linux users need to deal with spaces in file names, especially with movies or pictures. Using the shell variable IFS we can easily remove them. In this example, assuming I have all my pictures in the current folder, I will remove the spaces from my pictures in one go:

$ ls -l *jpg
-rw-r--r-- 1 amartin amartin   22175 May 10 09:08 foto 1.jpg
-rw-r--r-- 1 amartin amartin   22175 May 10 09:08 foto 2.jpg
-rw-r--r-- 1 amartin amartin   22175 May 10 09:08 foto 3.jpg

 $ IFS=$(echo -en "\n\t");for file in `ls *jpg`; do newname=`echo $file | sed -e 's/ //g'`; cp "$file" $newname; done

$ ls -ltr *jpg
-rw-r--r-- 1 amartin amartin   22175 May 10 09:08 foto 1.jpg
-rw-r--r-- 1 amartin amartin   22175 May 10 09:08 foto 2.jpg
-rw-r--r-- 1 amartin amartin   22175 May 10 09:08 foto 3.jpg
-rw-r--r-- 1 amartin amartin   22175  May 10 10:07 foto1.jpg
-rw-r--r-- 1 amartin amartin   22175 May 10 10:07 foto2.jpg
-rw-r--r-- 1 amartin amartin   22175 May 10 10:07 foto3.jpg


If we would like a dash '-' or any other character instead, we just need to change the sed expression to sed -e 's/ /-/g'.
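
As a side note, a sketch of an alternative that avoids touching IFS altogether, using bash parameter expansion over a glob (note this renames instead of copying; mv -n refuses to overwrite an existing target):

$ for file in *.jpg; do [[ $file == *" "* ]] && mv -n "$file" "${file// /}"; done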

The IFS variable defaults to space, tab and new line in Debian:

$ echo "$IFS" | cat -TE
 ^I$
$

Or the equivalent:

$ IFS=$(echo -en " \n\t")

To make this example work we just removed the space between the opening quote and the \n from the default IFS (echo -en "\n\t" instead of echo -en " \n\t").

Thursday, May 9, 2013

Routing and traffic control in Linux: using MARKed packets

On our Linux gateway box we can establish different routes for different hosts or nets using ip route. If we would like to do this depending on, for example, the machine originating the traffic, a neat way to do it is using netfilter, marking the packets we want to route.

First, to use this we need these kernel options (enabled by default on Debian):
IP: advanced router (CONFIG_IP_ADVANCED_ROUTER)
IP: policy routing (CONFIG_IP_MULTIPLE_TABLES)
IP: use netfilter MARK value as routing key (CONFIG_IP_ROUTE_FWMARK)
Now, let's say our gateway is 10.65.17.153. We have two Internet providers, one very fast and one slow. We have a user (10.65.17.8) whom we want to access a specific website (106.10.165.51) via the slow link.

First, we will mark the packets from our user. We will use the mark '1' using the mangle table:

 # iptables -A PREROUTING -t mangle -s 10.65.17.8 -d 106.10.165.51 -j MARK --set-mark 1

Now we need to add an action for the marked packets. Let's have a look at the default rt_tables:

# cat /etc/iproute2/rt_tables
#
# reserved values
#
255     local
254     main
253     default
0       unspec
#
# local
#
#1      inr.ruhep

We are going to add a table starhub.link with table number 20:

# echo 20 starhub.link >> /etc/iproute2/rt_tables

Now we associate table 20 with mark 1:

# ip rule add fwmark 1 table starhub.link

Last, we specify what table 20 (a.k.a. starhub.link) does:

# ip route add default via 10.65.17.253 table starhub.link

That's it. Now our client will use the Starhub link instead of the default one. The main routing table still looks as follows:

# ip route ls
default via 10.65.17.1 dev eth0  proto static
10.65.17.0/24 dev eth0  proto kernel  scope link  src 10.65.17.153
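
The new table itself does not show up in the default listing; to inspect it directly we can name it explicitly, which should list the default route via 10.65.17.253 we added:

# ip route show table starhub.link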

Done. Thanks to the flexibility of iptables we can put this to many uses.


Tuesday, May 7, 2013

Linux performance tips: disk I/O scheduler

The disk I/O scheduler is the method Linux uses to decide how data will be submitted to the storage devices. Schedulers apply to disk devices, not partitions.

On Linux, the main algorithms are these three:

  • CFQ (Completely fair Queuing)
  • Noop
  • Deadline
There was another popular scheduler called Anticipatory, but as of kernel version 2.6.33 it has been removed.

CFQ places synchronous requests on per-process queues and allocates time slices for each of the queues to access the disk. The length of each time slice depends on the process priority, and the scheduler allows a process to idle at the end of an I/O call in anticipation of a close-by read request (another read on the same sector). You can use ionice to assign priorities.

Tuning parameters can be given at /sys/block/<device>/queue/iosched/slice_idle, /sys/block/<device>/queue/iosched/quantum and /sys/block/<device>/queue/iosched/low_latency.

Noop operates as a simple FIFO queue, first in first out.

Deadline imposes a deadline on all requests, to prevent processes from hanging while waiting for the disk. In addition to the read and write queues, it maintains two deadline queues (one for reads, one for writes), and the scheduler checks whether requests in the deadline queues have expired. Read queues have higher priority.

Tuning parameters can be given at /sys/block/<device>/queue/iosched/writes_starved, /sys/block/<device>/queue/iosched/read_expire and /sys/block/<device>/queue/iosched/write_expire.

To check what scheduler we are using we can query the block device (sda in my case):

$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

I'm using CFQ. To change it to noop or deadline, we can write the desired scheduler into the same file we used to query:

(as root)
# echo deadline > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

For any other device the path would be /sys/block/<device name>/queue/scheduler. You can change it at any time without crashing the system.
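
The echo above only lasts until reboot. To make the scheduler persistent, one common approach - assuming a GRUB 2 based Debian or Ubuntu system - is the elevator= kernel parameter:

# in /etc/default/grub, add elevator= to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"

# then regenerate the grub configuration
$ sudo update-grub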

Which one is better? It depends on your environment. If you have a proper process priority scheme on your server, CFQ could be the best. For backup servers with low-performance disks, deadline worked pretty well for me in the past.

To put some numbers on it: on my desktop, dd timings to write 1 GB are as follows:

$ dd if=/dev/zero of=tmp1 bs=512 count=2000000
  • CFQ 10.8294 s, 94.6 MB/s
  • Deadline 9.90455 s, 103 MB/s
  • Noop 10.0025 s, 102 MB/s
Reading + writing :

$ dd if=tmp1 of=tmp2
  • CFQ 26.3413 s, 38.9 MB/s
  • Deadline 30.449 s, 33.6 MB/s
  • Noop 28.9345 s, 35.4 MB/s
In the first test deadline comes out ahead - though not far from a plain FIFO algorithm. The read + write test favours CFQ, not because of the algorithm itself (I have not given the dd command a high priority) but simply as a result of the performance degradation of deadline and noop.


Thursday, May 2, 2013

Adding a static arp entry in Windows 7

Following up on a previous article where I explained how to add a static ARP entry on Linux: trying to do the same on Windows 7, you would get an Access denied error message. This is how we do it on Windows 7:

If, for example, we are using a wireless connection, first we execute ipconfig in a cmd box to list the interface name:


We see that our wireless interface is named Wireless Network Connection, and that the gateway has the IP address 10.43.1.1. From a cmd prompt we list the current ARP table:


We have identified the MAC address as 00-30-48-99-de-97. Now, from an elevated cmd prompt, we execute the following netsh command to add the static entry:

netsh.exe interface ipv4 add neighbors "Wireless Network Connection" 10.43.1.1 00-30-48-99-de-97
That's all.


Tuesday, April 30, 2013

Remove one single line from a Cisco IOS ACL

For this example we will use extended ACL 100. This is the ACL:

gw-001#show run | inc access-list 100
access-list 100 remark NAT
access-list 100 deny   ip 10.62.17.0 0.0.0.255 172.31.0.0 0.0.255.255 log
access-list 100 permit ip 10.62.0.0 0.0.255.255 any log
access-list 100 permit ip any any log


We want to remove the line access-list 100 permit ip 10.62.0.0 0.0.255.255 any log:

gw-001#show ip access-lists 100
Extended IP access list 100
    10 deny ip 10.62.17.0 0.0.0.255 172.31.0.0 0.0.255.255 log (2 matches)
    20 permit ip 10.62.0.0 0.0.255.255 any log (70 matches)
    30 permit ip any any log (29 matches)

We want to delete the entry 20. Then:

gw-001#config t
Enter configuration commands, one per line.  End with CNTL/Z.
gw-001(config)#ip access-list extended 100
gw-001(config-ext-nacl)#no 20
gw-001(config-ext-nacl)#end

gw-001#
gw-001#show ip access-lists 100
Extended IP access list 100
    10 deny ip 10.62.17.0 0.0.0.255 172.31.0.0 0.0.255.255 log (2 matches)
    30 permit ip any any log (29 matches)


Done!

Friday, April 19, 2013

Getting a basic Smokeping + Apache installation working in 10 minutes

First, we need to download the source from the vendor's site. By default it will install into /opt/smokeping-<version>; we will follow the defaults:

$ cd /tmp
$ wget http://oss.oetiker.ch/smokeping/pub/smokeping-2.6.9.tar.gz
$ tar -xvzf smokeping-2.6.9.tar.gz
$ cd smokeping-2.6.9
$ ./configure ; make ; sudo make install

It is quite possible that during the configure process it will complain about missing dependencies like RRDtool, fping, etc. Follow the instructions and install them until all are satisfied.

After installation, we will have the folder /opt/smokeping-2.6.9. We create our custom /opt/smokeping-2.6.9/etc/config. Basically it is the default configuration, but with the last section, 'Targets', modified to point to our servers:

$ cat  /opt/smokeping-2.6.9/etc/config

*** General ***
owner    = Andreu
contact  = andreu.antonio@gmail.com
mailhost = smtp.XXXXXXX.com
sendmail = /usr/sbin/sendmail
imgcache = /opt/smokeping-2.6.9/cache
imgurl   = cache
datadir  = /opt/smokeping-2.6.9/data
piddir  = /opt/smokeping-2.6.9/var
cgiurl   = http://some.url/smokeping.cgi
smokemail = /opt/smokeping-2.6.9/etc/smokemail.dist
tmail = /opt/smokeping-2.6.9/etc/tmail.dist
syslogfacility = local0
*** Alerts ***
to = andreu.antonio@gmail.com
from = andreu.antonio.service@gmail.com
+someloss
type = loss
pattern = >0%,*12*,>0%,*12*,>0%
comment = loss 3 times  in a row
*** Database ***
step     = 300
pings    = 20
AVERAGE  0.5   1  1008
AVERAGE  0.5  12  4320
    MIN  0.5  12  4320
    MAX  0.5  12  4320
AVERAGE  0.5 144   720
    MAX  0.5 144   720
    MIN  0.5 144   720
*** Presentation ***
template = /opt/smokeping-2.6.9/etc/basepage.html.dist
+ charts
menu = Charts
title = The most interesting destinations
++ stddev
sorter = StdDev(entries=>4)
title = Top Standard Deviation
menu = Std Deviation
format = Standard Deviation %f
++ max
sorter = Max(entries=>5)
title = Top Max Roundtrip Time
menu = by Max
format = Max Roundtrip Time %f seconds
++ loss
sorter = Loss(entries=>5)
title = Top Packet Loss
menu = Loss
format = Packets Lost %f
++ median
sorter = Median(entries=>5)
title = Top Median Roundtrip Time
menu = by Median
format = Median RTT %f seconds
+ overview
width = 600
height = 50
range = 10h
+ detail
width = 600
height = 200
unison_tolerance = 2
"Last 3 Hours"    3h
"Last 30 Hours"   30h
"Last 10 Days"    10d
"Last 400 Days"   400d
*** Probes ***
+ FPing
binary = /usr/bin/fping
+ EchoPingSmtp       # SMTP (25/tcp) for mail servers
+ EchoPingHttps      # HTTPS (443/tcp) for web servers
+ EchoPingHttp       # HTTP (80/tcp) for web servers and caches
+ EchoPingIcp        # ICP (3130/udp) for caches
+ EchoPingDNS        # DNS (53/udp or tcp) servers
+ EchoPingLDAP       # LDAP (389/tcp) servers
+ EchoPingWhois      # Whois (43/tcp) servers
*** Targets ***
probe = FPing
menu = Top
title = Network Latency Grapher
remark = Welcome to this SmokePing website.
+ MyServers
menu = APAC
title = APAC
++ Bali
menu = Bali
title = Bali
probe = FPing
host = <my Server IP>
++ Manchester
menu = Manchester
title = Manchester MPLS
probe = FPing
host = <my Server IP>
++ Bangkok
menu = Bangkok
title = Bangkok MPLS
probe = FPing
host = <my Server IP>
++ Bangkok_IPSEC
menu = Bangkok
title = Bangkok IPSEC
probe = FPing
host = <my Server IP>
Now we modify our DNS so that our server has the additional name smokeping-001 (this may vary depending on your DNS solution). Next we modify the Apache config and create the site smokeping-001:

$ sudo vi /etc/apache2/sites-available/smokeping-001
<VirtualHost *:80>

        ServerAdmin admin@mydomain.com
        ServerName smokeping-001.mydomain.singapore
        DocumentRoot /opt/smokeping-2.6.9/htdocs

        <Directory />
                Options FollowSymLinks +ExecCGI
                AllowOverride None
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/smokeping-error.log
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/smokeping-access.log combined

</VirtualHost>

We enable the site and also the FastCGI module. We might need to install this module if it is not on our system (use your apt/yum utility):

$ sudo a2ensite smokeping-001
$ sudo a2enmod fcgid
The Smokeping installation will have put the smokeping daemon in place. To start it, we execute:

$ sudo /etc/init.d/smokeping start

Wait a few minutes; Smokeping should be collecting data by now.

Friday, April 12, 2013

File screening in Windows Server 2008

File screening is quite a useful tool. It allows us to prevent users from storing certain types of data on the file shares, notifies us if they store certain files and lets us know who accesses certain folders, among other functions.

In order to use it, first we need the File Services role together with FSRM (File Server Resource Manager). If you are already sharing files, most likely you already have the File Services role; if not, you can enable it from the Server Manager: right click on Server Manager and click Add Roles:



Then select File Services and Next, all the way to the end. In the following picture you can see I already have it, but just to give an idea :)


Now, let's install the FSRM role service. Go to File Services, right click and select Add Role Services:


Select the FSRM, and click Next:


It will ask us to create reports at this point. We can skip this, as we can create them later. Click Next:


Now we can click on Install. It will take a few minutes to install the feature.



After it is installed, we go to Administrative Tools -> File Server Resource Manager. Right click on File Screening Management and select Create File Screen. We will create a file screen that blocks storing executable files in public folders:


Select the path of our public share (D:\public in this example), select the template Block Executable Files and click Create.


Now we will configure the email settings, so we will receive an email every time someone tries to store executable files on the selected folder. Select the file screen we just created, right click and select Edit File Screen Properties:






Select the administrator's email address to receive the notification, tick the checkbox below it if you wish to notify the user as well (the user's email must be stored in the AD domain), then click OK. At this point, if we have not configured our SMTP server, a prompt will appear. If so, accept it and go back to the main screen. Click on File Server Resource Manager (Local); in the Actions pane click Configure Options and introduce your SMTP server, default admin email and default sender options:


We are done. There's a good bunch of benefits to this feature; for more information you can visit the Microsoft library page for this role service.


Wednesday, April 10, 2013

Colours in your bash shell

Coloring your bash shell can be quite entertaining. Nowadays I think all distros come with colors enabled; if not, we can enable them with an alias like this one:

$ alias ls='ls -ap --color'

-a shows all the files including the ones starting with a dot '.'
-p shows a slash '/' after the directories
--color enables color :)

I have that alias in my /etc/bash.bashrc (or /etc/bashrc, depending on your distro) so the colors are there every time I start my computer.

Now that we have the colors enabled, we can customize the shell colors using the variable LS_COLORS. 

If you use a Red Hat based distro, you will find the file /etc/DIR_COLORS with a configuration example. If not, see mine below:

COLOR tty
OPTIONS -F -T 0
TERM linux
TERM console
TERM con132x25
TERM con132x30
TERM con132x43
TERM con132x60
TERM con80x25
TERM con80x28
TERM con80x30
TERM con80x43
TERM con80x50
TERM con80x60
TERM cons25
TERM xterm
TERM rxvt
TERM xterm-color
TERM color-xterm
TERM vt100
TERM dtterm
TERM color_xterm
TERM ansi
TERM screen
TERM screen.linux
TERM kon
TERM kterm
TERM gnome
TERM konsole
EIGHTBIT 1
NORMAL 00       # global default, although everything should be something.
FILE 00         # normal file
DIR 01;34      # directory
LINK 01;36      # symbolic link
FIFO 40;33      # pipe
SOCK 01;35      # socket
BLK 40;33;01    # block device driver
CHR 40;33;01    # character device driver
ORPHAN 01;05;37;41  # orphaned symlinks
MISSING 01;05;37;41 # ... and the files they point to
EXEC 01;32
.cmd 01;32 # executables (bright green)
.exe 01;32
.com 01;32
.btm 01;32
.bat 01;32
.sh  01;32
.csh 01;32
.tar 01;31 # archives or compressed (bright red)
.tgz 01;31
.arj 01;31
.taz 01;31
.lzh 01;31
.zip 01;31
.z   01;31
.Z   01;31
.gz  01;31
.bz2 01;31
.bz  01;31
.tz  01;31
.rpm 01;31
.cpio 01;31
.jpg 01;35 # image formats
.gif 01;35
.bmp 01;35
.xbm 01;35
.xpm 01;35
.png 01;35
.tif 01;35

For the color palette definition, you can check the bash documentation or the manpage for dir_colors. Also, you can use this bash line to show the colors:

$ for code in {0..255}; do echo -e "\e[38;05;${code}m $code: Test"; done

We can choose the colors we want and modify our /etc/DIR_COLORS accordingly. For example, in CentOS we have 01;34 (dark blue) as the default directory color; if we want to change it to light blue we change the line:

DIR 01;34 

to

DIR 38;05;75

(following the colors from the previous for loop) and we save the file. Now, to generate the LS_COLORS variable we can execute dircolors:

$ dircolors /etc/DIR_COLORS

And export the variable it prints. To make it more straightforward we can also do:

$ eval `dircolors /etc/DIR_COLORS`

And that's it!

If you wish to make it effective at boot time, we can add this line to our /etc/bash.bashrc (or /etc/bashrc):

eval `dircolors /etc/DIR_COLORS`

But that will affect the whole system. If we want to affect only our user, we can use the file $HOME/.dir_colors instead (actually any file name would do) and have this in our $HOME/.bashrc:

eval `dircolors $HOME/.dir_colors`