Monday, November 2, 2015

Detecting torrent clients (uTorrent and Azureus) on your local network with perl and nmap

This week I had some torrent issues at one of my customers' offices: torrents were consuming the network bandwidth.

Normally you would start by analyzing the traffic and enumerating the top consumers, but that was not possible here: the network equipment's maintenance was outsourced to a third party and I couldn't tinker with it. Funnily enough, the third party couldn't do anything about these torrents either.

So I had only my desktop to deal with the issue. Based on the two most popular torrent clients, I built a script that detects whether a uTorrent or Azureus client is running - under normal circumstances they use high ports. I tested the versions released at the time of writing this article; I am not sure whether it works with previous or future versions.

You may want to change the From and To addresses in the emailme function, and install the additional Perl packages using cpan (Email::MIME, Email::Sender::Simple and LWP::UserAgent).

Syntax is $ perl script.pl <ip or network, nmap format> <port range, nmap format>

#!/usr/bin/perl

use strict;
use warnings;
use Email::MIME;
require LWP::UserAgent;


my $ua = LWP::UserAgent->new;
$ua->timeout(10);
$ua->env_proxy;
$ua->agent('Petardo 1.0');

my @strings;


my $nmaptarget= $ARGV[0];
my $nmapports= $ARGV[1];
my $tmpfile="/tmp/torrents-Searcher.log";


if ( ! defined $nmaptarget ) {
        print "Please give me a network to scan ( i.e. 172.172.172.0/24) \n";
        exit (1);
}

if ( ! defined $nmapports ){
        print "You didn't give me a port list to scan (nmap format). Will use 10000-65535\n";
        $nmapports="10000-65535";
}
scan();

open(FILE, "<", $tmpfile) || die "TMP File not found";
my @textlog = <FILE>;
close(FILE);

foreach (@textlog) {

        @strings = split / /;

        my $pointer=1;
        my $server;
        # advance until we find the field that looks like an IP address
        while (defined $strings[$pointer] && $strings[$pointer] !~ /[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/ ) {
                $pointer++;
        }
        $server=$strings[$pointer];

        if (defined $server) {
                $server=~s/\(//;
                $server=~s/\)//;
        }
        $pointer++;

        my $arrSize = @strings;

        while ($pointer < $arrSize) {
                if ($strings[$pointer] ne "" && $strings[$pointer] =~ /[0-9]{1,5}\/tcp/) {
                        $strings[$pointer] =~ s/\/tcp//;
                        findutorrent($server,$strings[$pointer]);
                        findazureus($server,$strings[$pointer]);
                }
                $pointer++;
        }

}

sub findutorrent {
        my $url = "http://" . $_[0] . ":" . "$_[1]" . "/version";
        my $response = $ua->get($url);
        if ($response->is_success) {
                if ( $response->decoded_content =~ /uTorrent/ ) {
                        print $response->decoded_content;
                        print "uTorrent check URL " . $url . " gave code " . $response->code . "\n";
                        print "Found possible uTorrent on IP ". $_[0] . " and port " . $_[1] . "\n";
                        emailme($_[0],$_[1],"utorrent");
                }
                else {
                        print "Probe success (" . $response->code . ") but no valid uTorrent response detected on port " . $_[1] . "\n";
                }

         }
         else {
             print "No joy on uTorrent: " . $response->status_line . " on port $_[1] \n";
         }
}

sub findazureus {
        my $url = "http://" . $_[0] . ":" . "$_[1]" . "/service/request1.php?p=789C258CB10EC2300C44FFC5334D44C74A0831302216BA554269E3A61124B1D2A4A022FE1D876EBEF7CEF70152062BAF14B5E1176706A6FD7FBA5BD9D19A5980BD2C129EB39F789A2283D3104C782723F043F5A93E393F594123552964255D46884C6F9910229FD2E2F72C6B8D801E566";
        my $response = $ua->get($url);
        if ($response->is_success) {
                if ( $response->decoded_content =~ /[^[:ascii:]]/ && length($response->decoded_content) > 100 && length($response->decoded_content) < 350 && $response->decoded_content !~ /SSH/ ) {
                        print $response->content . "\n";
                        print "Length: " . length($response->decoded_content) . "\n";
                        print "Azureus check URL " . $url . " gave code " . $response->code . "\n";
                        print "Found possible Azureus on IP ". $_[0] . " and port " . $_[1] . "\n";
                        emailme($_[0],$_[1],"azureus");
                }
                else {
                        print "Probe success (" . $response->code . "," . length($response->decoded_content) . ") but no valid Azureus response detected on port " . $_[1] . "\n";
                }

        }
         else {
             print "No joy on Azureus: " . $response->status_line . " on port $_[1] \n";
         }    
}


sub emailme {

        my $message = Email::MIME->create(
        header_str => [
                From    => 'no.reply@mydomain.com',
                To      => 'andres.martin@mydomain.com',
                Subject => 'torrent client found?',
                ],
        attributes => {
                encoding => 'quoted-printable',
                charset  => 'ISO-8859-1',
                },
        body_str => "Found possible ". $_[2] . " client on IP ". $_[0] . " and port " . $_[1] . "\n",
        );

        # send the message
        use Email::Sender::Simple qw(sendmail);
        sendmail($message);
}

sub scan {

        open (my $fh, '>', $tmpfile) or die "Can't write to file '$tmpfile' $!";
        print "Nmap scanning network " . $nmaptarget . " and ports " . $nmapports . "\n";
        # backtick heredoc: runs the command and captures its output
        my $nmap = <<`SHELL`;
/usr/bin/nmap -Pn -sT -T5 --open --max-rtt-timeout 50ms --host-timeout 10m -p $nmapports $nmaptarget
SHELL
        $nmap =~ s/\n/ /g;
        print $fh $nmap;
        print "Nmap finished scanning\n";
        close $fh;
}
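The parsing loop above scrapes nmap's default human-readable output, which is brittle. As a side note, nmap's grepable output (-oG -) puts each host and its open ports on a single line, which is easier to pick apart - a minimal shell sketch against a made-up sample line:

```shell
# Hypothetical line in the format nmap -oG - emits for a host with open ports
line='Host: 10.0.0.5 ()  Ports: 12345/open/tcp//unknown///, 51413/open/tcp//unknown///'

# Second whitespace-separated field is the IP address
ip=$(printf '%s\n' "$line" | awk '{ print $2 }')

# Pull every open TCP port number out of the Ports: field
ports=$(printf '%s\n' "$line" | grep -oE '[0-9]+/open/tcp' | cut -d/ -f1)

echo "$ip"     # 10.0.0.5
echo $ports    # 12345 51413
```

Feeding these straight into the findutorrent / findazureus subs would remove the IP-regex walk entirely.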

Monday, September 28, 2015

Deploying wordpress using a chef recipe and AWS OpsWorks

Although I am not a fan of Wordpress, many people are. Lately I have had to set up multiple instances for my colleagues, and even though installing an instance takes just a few minutes, it ends up being a bit of a hassle to do it all by hand. I thought it would be nice to automate Wordpress deployments in my environment.

Initially I thought of a golden AMI image with a set of instructions to update Wordpress and the OS at boot time, but using Chef seemed more flexible. I did some research and found a nice cookbook from Kenta Yasukawa based on Apache. I copied most of his work, making some modifications to use nginx + php-fpm instead, plus other minor changes. My cookbook can be found on my github. You can configure your own user and passwords in the file attributes/default.rb:


Even though MySQL is not exposed to the world by default, it is safer to set your own user and passwords. You can clone the repo, modify the files, then upload them to your own repository or use S3:
default["mysql"]["root"] = "yourrootpass"
default["mysql"]["pass"] = "passforwpuser"
default["mysql"]["user"] = "yourwpuser"
With AWS OpsWorks we can create our own stack of Wordpress servers. First, go to OpsWorks and create your first stack. Select your preferences, and after clicking on Advanced you will be able to specify the URL for the Chef recipe (https://github.com/AndreuAntonio/chef-wp.git in this case):


Create your layer with your personal preferences. I selected custom, as DB and FE are going to reside within the same server, but there are a lot more possibilities here:


Now edit the layer and specify what recipe you want to use and in what stage:

We will use the recipe Deploy_software_wordpress_nginx-mysql56::default during setup. Remember to check the Security tab and specify the right security groups for this (i.e. HTTP open).


Now we can start deploying servers. Check the Instances tab and deploy a new server:


Start the server. It takes a while to spin up. When ready, the status field will become a green online.

Click then on the public IP and Wordpress setup should show up:


To add more Wordpress servers, just deploy new instances and in a few minutes you'll have them running :)

This is just a test lab; for a real production environment there are other things we should take care of, like load balancing, redundancy, backups etc. Don't take this example as-is into a production environment.

Monday, July 20, 2015

Fixing reports not showing in OpenVAS 8

Last week I was upgrading my OpenVAS installation and I realized the reports section was empty. I tried to google for a solution but couldn't find anything useful, so I decided to share my findings here.

At first I thought it could be an issue with Greenbone, so I tried to fetch the report using the CLI tools, but no joy:

$ omp -v -u amartin -w XXXXX -R e6feb760-c9c3-425d-9ef5-a861d0dad6d2 -f a3810a62-1f62-11e1-9219-406186ea4fc5

WARNING: Verbose mode may reveal passwords!

Will try to connect to host 127.0.0.1, port 9390...
Failed to get report.

After some debugging with strace I realized a command was being executed by openvasmd:

/bin/sh -c "su nobody -c "/bin/sh /usr/local/share/openvas/openvasmd/global_report_formats/c402cc3e-b531-11e1-9163-406186ea4fc5/generate ....

I checked the permissions on those files: user nobody had no rights to execute or read anything under /usr/local/share/openvas/openvasmd/global_report_formats. Open up the permissions:

sudo chmod a+xr /usr/local/share/openvas/openvasmd/global_report_formats/ -R

And then OpenVAS reports were working again.
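If you want to see the failure mode and the fix in isolation, you can reproduce it on a scratch directory (all paths below are made up for the demo):

```shell
dir=$(mktemp -d)
mkdir -p "$dir/global_report_formats/fmt"
printf 'echo report generated\n' > "$dir/global_report_formats/fmt/generate"

# Simulate the broken state: no read/execute for group or other
chmod 700 "$dir/global_report_formats"

# The fix: read + execute for everyone, recursively
chmod -R a+xr "$dir/global_report_formats"

sh "$dir/global_report_formats/fmt/generate"   # prints "report generated"
```

The same recursive a+xr is what lets the nobody user traverse the directory and read the generate scripts.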



Tuesday, March 24, 2015

Fixing missing locale warnings in bash

Recently I have been getting some locale warnings when doing an scp (while using bash completion):

$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=en_SG.UTF-8
LANGUAGE=en_SG:en
LC_CTYPE="en_SG.UTF-8"
LC_NUMERIC="en_SG.UTF-8"
LC_TIME="en_SG.UTF-8"
LC_COLLATE="en_SG.UTF-8"
LC_MONETARY="en_SG.UTF-8"
LC_MESSAGES="en_SG.UTF-8"
LC_PAPER="en_SG.UTF-8"
LC_NAME="en_SG.UTF-8"
LC_ADDRESS="en_SG.UTF-8"
LC_TELEPHONE="en_SG.UTF-8"
LC_MEASUREMENT="en_SG.UTF-8"
LC_IDENTIFICATION="en_SG.UTF-8"
LC_ALL=

To fix it, just type sudo dpkg-reconfigure locales and select the locales you want to rebuild (in my case, en_SG.UTF-8). The warnings should be gone.

Friday, February 13, 2015

Amazon VPC with Chef server in a separate VPC

This week I had to create a separate AWS account for a specific platform, isolated from the Chef server's network for accounting purposes. While I was at it, the new AWS account would get several VPCs with different environments (staging, live).

The first complication is that the command knife ssh won't work for the servers in the new account. As long as the nodes have internet access they will be able to register with Chef and install the recipes all right, but they will register with the following information:

$ knife node show dev-http-01
Node Name:   dev-http-01
Environment: _default
FQDN:   ip-10-0-1-89.us-west-2.compute.internal  <---****
IP:          10.0.1.89 <---****
Run List:    role[dev]
Roles:       dev
Recipes:     chef-client, keys-us-west-2, autoupdate_apt, ntp, Deploy_package_apache2-latest
Platform:    ubuntu 14.04
Tags:        
With my VPC settings (using the Amazon DNS and DHCP servers), even though an elastic IP has been assigned, the node registers itself with the internal address.

One approach would be to create a proxy or VPN connection to the Chef server so it can talk to this internal network. However, I just needed to knife ssh into a few hosts, so I created this Chef recipe that updates the FQDN with the actual external IP:

$ cat Deploy_script_setIP_VPC/recipes/default.rb
#
# Cookbook Name:: Deploy_script_setIP_VPC
# Recipe:: default
#
# No Copyright
# Andres Martin andreu.antonio@gmail.com
template "/etc/init.d/if-config" do
 source "if-config.erb"
 owner "root"
 group "root"
 mode "754"
end
service "if-config" do
      supports :restart => true, :start => true, :stop => true, :reload => true
      action [ :enable, :start]
    end
$ cat Deploy_script_setIP_VPC/templates/default/if-config.erb
#!/bin/sh
case $1 in
        start)
        URL="http://ifconfig.me/"
        IP=`curl $URL`
        if [ -n "`nslookup $IP`" ]; then
                echo "IP resolved to $IP, setting hostname..."
                name=`nslookup $IP | awk '{ print $4 }' | grep amazonaws.com | cut -d "." -f 1`
                hostname $name
                fi
        ;;
        stop)
        echo "this won't work..."
        ;;
        *)
        echo "Only for start"
        ;;
esac
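For reference, this is what the pipeline in the start) branch extracts, run against a sample reverse-lookup line (the exact nslookup output format here is an assumption):

```shell
# Hypothetical nslookup answer line for an EC2 elastic IP
line='3.2.1.54.in-addr.arpa  name = ec2-54-1-2-3.us-west-2.compute.amazonaws.com.'

# Same extraction as the script: 4th field, keep only the first DNS label
name=$(printf '%s\n' "$line" | awk '{ print $4 }' | grep amazonaws.com | cut -d "." -f 1)
echo "$name"   # ec2-54-1-2-3
```

Note that hostname is set to just that first label; the full FQDN that knife later shows presumably comes from the resolver's search domain.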
This script relies on the public service ifconfig.me (thanks guys for this website). The output should change as soon as the Chef client contacts Chef again:

$ knife node show dev-http-01
Node Name:   dev-http-01
Environment: _default
FQDN:        ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com
IP:          10.0.1.89
Run List:    role[dev]
Roles:       dev
Recipes:     chef-client, keys-us-west-2, autoupdate_apt, ntp, Deploy_package_apache2-latest
Platform:    ubuntu 14.04
Tags:      
This script shouldn't be used for critical services though - it is not foolproof, just a quick fix for a specific scenario.

Thursday, February 5, 2015

Basic setup of MySQL Cluster

This week I was setting up a MySQL cluster. Here are a few steps on how to do a basic setup - for a good initial overview of MySQL cluster I'd recommend having a look at this page of the manual. This example launches all daemons by hand, no fancy boot scripts.

You can download the software from http://dev.mysql.com/downloads/cluster/. In my case I downloaded the Debian packages, although downloading the tgz file would be much the same, as there are no scripts in the control information files.

My example runs the following nodes:

2 x NDB data nodes - they will run the databases in memory so we need enough RAM (7 GB each for me)
1 x management node
2 x SQL API nodes - the SQL interface to NDB engine. Our app will point to them.

The data nodes will run the NDB daemon, the SQL API nodes will run the mysql database software, and the management node the management daemon. For each of the 4 working nodes, the IP / FQDN of the management server(s) must be stated in the configuration file, and the management server needs to know each working node's IP / FQDN - see the configuration files' content below.

Installing

Copy the debian package and install it on each one of the servers:

$ sudo dpkg -i mysql-cluster-advanced-7.3.7-debian7-x86_64.deb

This will install the software in /opt/mysql/server-5.6/. To continue the installation we will need to create the mysql user and install some dependencies (libaio in Ubuntu 14):

$ sudo groupadd mysql
$ sudo useradd -g mysql mysql
$ sudo apt-get install libaio1

The original package contains a support install script:

$ cd /opt/mysql/server-5.6/scripts
$ sudo ./mysql_install_db --user=mysql

Configuration

Now we have the software in all 5 servers. For the data and SQL nodes we will create the /etc/my.cnf config file:

[mysqld]
ndbcluster
ndb-connectstring=172.3.5.172
[mysql_cluster]
ndb-connectstring=172.3.5.172

You can specify the node id with the parameter ndb-nodeid=X, or you can let the management node handle that for you.
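For example, to pin a SQL API node to node id 4 (an arbitrary choice for this sketch), the my.cnf would become:

```ini
[mysqld]
ndbcluster
ndb-connectstring=172.3.5.172
ndb-nodeid=4
[mysql_cluster]
ndb-connectstring=172.3.5.172
```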

On the management server, make sure the directory  /var/lib/mysql-cluster exists and create the file config.ini with this content:

[ndbd default]
NoOfReplicas=2
DataMemory=4500M # Space to store DB records
IndexMemory=400M # Space used to store hash indexes
MaxNoOfAttributes=10000 # Max number of attributes (columns) in the cluster
MaxNoOfTables=1000
MaxNoOfOrderedIndexes=1000
MaxNoOfConcurrentOperations=128000
MaxNoOfExecutionThreads=8
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
DataDir=/var/lib/mysql-cluster
[tcp default]
SendBufferMemory=12M
ReceiveBufferMemory=12M
[ndb_mgmd]
nodeId=1
hostname=172.3.5.172
datadir=/var/lib/mysql-cluster
[ndbd]
nodeId=2
hostname=172.3.5.170
[ndbd]
nodeId=3
hostname=172.3.5.171
datadir=/opt/mysql/server-5.6/data/
[mysqld]
hostname=172.3.5.173
[mysqld]
hostname=172.3.5.174
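As a quick sanity check on the sizing (the figures come from the DataMemory and IndexMemory lines above; NDB also needs headroom for buffers, and the OS needs its share):

```shell
# Memory committed per data node by the config above, in MB
echo $(( 4500 + 400 ))   # 4900 MB out of the 7 GB per data node
```

Also, with 2 data nodes and NoOfReplicas=2 there is a single node group, so every row is stored on both data nodes.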

Running the software

Now we have all configuration files in place. These are the software we will launch on each of the nodes:

Data nodes: Daemon /opt/mysql/server-5.6/bin/ndbd (--initial):

$ sudo /opt/mysql/server-5.6/bin/ndbd
2015-02-04 02:21:21 [ndbd] INFO     -- Angel connected to '172.3.5.172:1186'
2015-02-04 02:21:21 [ndbd] INFO     -- Angel allocated nodeid: 3

MGMT node: Daemon /opt/mysql/server-5.6/bin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini (--initial)

$ /opt/mysql/server-5.6/bin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.6.21 ndb-7.3.7

SQL API nodes: Script /opt/mysql/server-5.6/support-files/mysql.server (start|stop|reload)

$ sudo /opt/mysql/server-5.6/support-files/mysql.server start
Starting MySQL
.. * 

Note that --initial can be used the first time, or when there is a configuration change - but only under the special circumstances detailed in the manual. For the MGMT node, on a configuration change it is best to use --reload to refresh the parameters.

If all went well, you should see this status on the MGMT node:

$ sudo /opt/mysql/server-5.6/bin/ndb_mgm -e "show"
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @172.3.5.170  (mysql-5.6.21 ndb-7.3.7, Nodegroup: 0)
id=3    @172.3.5.171  (mysql-5.6.21 ndb-7.3.7, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.3.5.172  (mysql-5.6.21 ndb-7.3.7)

[mysqld(API)]   2 node(s)
id=4    @172.3.5.173  (mysql-5.6.21 ndb-7.3.7)
id=5    @172.3.5.174  (mysql-5.6.21 ndb-7.3.7)

You can trace errors by checking the log files:
  • Data nodes: /opt/mysql/server-5.6/data/ndb_*_out.log
  • MGMT node: /var/lib/mysql-cluster/ndb_*_cluster.log
  • SQL API nodes: /opt/mysql/server-5.6/data/<hostname or IP>.err
If the nodes are not connecting, you may see a screen like this:

$ sudo /opt/mysql/server-5.6/bin/ndb_mgm -e "show"
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @172.3.5.170  (mysql-5.6.21 ndb-7.3.7, starting, Nodegroup: 0)
id=3 (not connected, accepting connect from 172.3.5.171)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.3.5.172  (mysql-5.6.21 ndb-7.3.7)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 172.3.5.173)
id=5 (not connected, accepting connect from 172.3.5.174)

If so, check the log files for errors.

Testing the cluster

When all is up and running, we can go to any of the SQL API nodes and login as root - default setup has no password:

$ /opt/mysql/server-5.6/bin/mysql -u root
mysql> CREATE DATABASE MY_CLUSTER;
mysql> USE MY_CLUSTER;
mysql> CREATE TABLE cluster_table( ID INT) ENGINE=ndbcluster DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.15 sec)
mysql> insert into cluster_table values (5);
Query OK, 1 row affected (0.00 sec)

Now go to the other SQL API node, and check if data is there:

$ /opt/mysql/server-5.6/bin/mysql -u root
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| MY_CLUSTER         |
| mysql              |
| ndbinfo            |
| performance_schema |
+--------------------+
5 rows in set (0.00 sec)

mysql> use MY_CLUSTER;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from cluster_table;
+------+
| ID   |
+------+
|    5 |
+------+
1 row in set (0.00 sec)

If we can see both nodes accessing the same data, the initial setup is done.