2014-12-16

Setup Elasticsearch, Logstash, and Kibana (ELK) with Bro

I would no longer follow this installation process, though it may still be useful for a few notes: logstash-forwarder has since changed its file locations and TLS configuration, and Kibana has changed a lot from v3 to v6.



This tutorial will install the ELK stack and ingest Bro HTTP, SSL, Conn, DNS, Files, and DHCP logs with GeoIP, serving Kibana over HTTPS.



This documentation assumes you are using Ubuntu as the server. I was using a server with 64GB of RAM and 6 cores.



Server Installation:

#Install Java
sudo add-apt-repository -y ppa:webupd8team/java;
sudo apt-get update;
sudo apt-get -y install oracle-java7-installer;

#Install ElasticSearch
wget -O - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -;
echo 'deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/elasticsearch.list;sudo apt-get update;
sudo apt-get -y install elasticsearch;

sudo vi /etc/elasticsearch/elasticsearch.yml
#Add the following line somewhere in the file, to disable dynamic scripts:
script.disable_dynamic: true
#Find the line that specifies network.host and uncomment it so it looks like this:
network.host: localhost
Save and exit elasticsearch.yml.
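The two edits above can also be applied non-interactively with sed/grep; a sketch, shown here against a scratch copy so it is safe to dry-run (point CFG at the real /etc/elasticsearch/elasticsearch.yml once you are happy with it):

```shell
# Work on a scratch copy first; the printf line simulates the stock file.
CFG=$(mktemp)
printf '#network.host: 192.168.0.1\n' > "$CFG"

# Disable dynamic scripting (append the line only if not already present).
grep -q '^script.disable_dynamic:' "$CFG" || echo 'script.disable_dynamic: true' >> "$CFG"

# Uncomment network.host and bind to localhost only.
sed -i 's/^#\?network.host:.*/network.host: localhost/' "$CFG"

cat "$CFG"
```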

#Tune ElasticSearch
#Add to /etc/sysctl.conf
fs.file-max = 65536
vm.max_map_count=262144

#Add to /etc/security/limits.conf
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
elasticsearch - nofile 65535
elasticsearch - memlock unlimited

#Uncomment the following lines and change the values in /etc/default/elasticsearch:
#Set ES_HEAP_SIZE to half of your dedicated RAM, max 31GB
ES_HEAP_SIZE=31g
MAX_OPEN_FILES=65535
MAX_LOCKED_MEMORY=unlimited
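Rather than hard-coding the heap size, half-of-RAM capped at 31GB can be computed from /proc/meminfo; a sketch (Linux-only assumption, and the 31GB cap is the usual compressed-oops limit):

```shell
# Half of physical RAM in GB, capped at 31, floored at 1.
HEAP_GB=$(awk '/^MemTotal:/ { gb = int($2 / 1024 / 1024 / 2); if (gb > 31) gb = 31; if (gb < 1) gb = 1; print gb }' /proc/meminfo)
echo "ES_HEAP_SIZE=${HEAP_GB}g"
```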

# Uncomment the line in "/etc/elasticsearch/elasticsearch.yml"
bootstrap.mlockall: true

sudo swapoff -a
#To disable it permanently, you will need to edit the /etc/fstab file and comment out any lines that contain the word swap.

#Reboot server
sudo shutdown -r now
#Start Elasticsearch:
sudo service elasticsearch start
#autostart elasticsearch:
sudo update-rc.d elasticsearch defaults 95 10


#Install Kibana
cd ~; wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz;
tar -zxvf kibana-3.1.2.tar.gz;
#Open the Kibana configuration file for editing:

vi ~/kibana-3.1.2/config.js
In the Kibana configuration file, find the line that specifies the elasticsearch server, and replace the port number (9200 by default) with 80:

elasticsearch: "http://"+window.location.hostname+":80",

sudo mkdir -p /var/www/kibana;
sudo cp -R ~/kibana-3.1.2/* /var/www/kibana/;
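The config.js port change can be scripted as well; a sketch against a scratch copy (the real file is ~/kibana-3.1.2/config.js, and the echo line simulates the stock setting):

```shell
# Scratch copy with the default elasticsearch line.
CFG=$(mktemp)
echo 'elasticsearch: "http://"+window.location.hostname+":9200",' > "$CFG"

# Point Kibana at port 80 instead of 9200.
sed -i 's/:9200"/:80"/' "$CFG"
cat "$CFG"
```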

#Install Nginx
sudo apt-get -y install nginx;
cd ~; wget https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf
Find and change the value of server_name to your FQDN (or localhost if you aren't using a domain name) and root to the location where we installed Kibana, so they look like the following entries:
vi nginx.conf
 server_name           localhost;
 root /var/www/kibana;

sudo cp nginx.conf /etc/nginx/sites-available/default;
sudo apt-get install apache2-utils;
#replace $USERNAME with your username you want to use
sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd $USERNAME;

#Make Kibana over SSL:
#generate certificate
sudo openssl req -x509 -sha512 -newkey rsa:4096 -keyout /etc/nginx/kibana.key -out /etc/nginx/kibana.pem -days 3560 -nodes
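It is worth verifying the subject and expiry of what was generated before handing it to nginx; a sketch using openssl x509 against a throwaway cert (scratch paths and the example CN are assumptions, not the /etc/nginx files above):

```shell
# Generate a throwaway self-signed cert just to demonstrate inspection.
KEY=$(mktemp)
PEM=$(mktemp)
openssl req -x509 -sha512 -newkey rsa:2048 -keyout "$KEY" -out "$PEM" \
    -days 3560 -nodes -subj '/CN=kibana.example.org' 2>/dev/null

# Confirm the subject and validity window.
SUBJECT=$(openssl x509 -in "$PEM" -noout -subject)
openssl x509 -in "$PEM" -noout -dates
echo "$SUBJECT"
```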

sudo vi /etc/nginx/sites-available/default
#change the listen port to *:443
#and add to the file under the line that says "access_log            /var/log/nginx/kibana.myhost.org.access.log;":
 #Enable SSL
 ssl on;
 ssl_certificate /etc/nginx/kibana.pem;
 ssl_certificate_key /etc/nginx/kibana.key;
 ssl_session_timeout 30m;
 ssl_protocols TLSv1.2;
 ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS;
 ssl_prefer_server_ciphers on;
 ssl_session_cache shared:SSL:10m;
 ssl_stapling on;
 ssl_stapling_verify on;
 add_header Strict-Transport-Security max-age=63072000;
 add_header X-Frame-Options DENY;
 add_header X-Content-Type-Options nosniff;

#Change the line "elasticsearch: "http://"+window.location.hostname+":80"," in /var/www/kibana/config.js to
elasticsearch: "https://"+window.location.hostname+":443",

#Restart nginx
sudo service nginx restart;


#Setup GEOIP
sudo mkdir /usr/share/GeoIP; #Create location that we will use to store the GeoIP databases/information
sudo wget http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNum.dat.gz; #IPv4 ASNumber Database
sudo wget http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNumv6.dat.gz; #IPv6 ASNumber Database
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz; #IPv4 GeoIP Country Code Database
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoIPv6.dat.gz; #IPv6 GeoIP Country Code Database
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz; #IPv4 GeoIP City Database
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCityv6-beta/GeoLiteCityv6.dat.gz; #IPv6 GeoIP City Database
sudo gzip -d Geo*; #Decompress all the databases
sudo mv Geo*.dat /usr/share/GeoIP/; #Move all the databases to the GeoIP directory
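The six downloads above can be collapsed into a loop that fetches straight into /usr/share/GeoIP; a sketch, left as a dry run (drop the echo to actually download):

```shell
GEO_URLS='
http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNum.dat.gz
http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNumv6.dat.gz
http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz
http://geolite.maxmind.com/download/geoip/database/GeoIPv6.dat.gz
http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
http://geolite.maxmind.com/download/geoip/database/GeoLiteCityv6-beta/GeoLiteCityv6.dat.gz
'
COUNT=0
for url in $GEO_URLS; do
    # Drop the echo to download directly into /usr/share/GeoIP.
    echo sudo wget -P /usr/share/GeoIP "$url"
    COUNT=$((COUNT + 1))
done
echo "queued $COUNT databases"
```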



#Install LogStash

sudo apt-get install git;
echo 'deb http://packages.elasticsearch.org/logstash/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list;
sudo apt-get update;
sudo apt-get -y install logstash;
sudo mkdir -p /etc/pki/tls/certs;
sudo mkdir /etc/pki/tls/private;
cd ~/ && git clone https://github.com/logstash-plugins/logstash-filter-translate.git;
sudo cp logstash-filter-translate/lib/logstash/filters/translate.rb /opt/logstash/lib/logstash/filters/translate.rb;
rm -rf logstash-filter-translate/;

#Server Config File
#Clone server config file
sudo apt-get install git;
git clone  https://github.com/neu5ron/siem-and-event-forwarding-configs.git;
sudo mv siem-and-event-forwarding-configs/logstash-server.conf /etc/logstash/conf.d/all_logstash.conf;
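If you want a feel for what a server pipeline like this does before deploying the cloned config, a minimal hand-written sketch of a Logstash 1.4 config for Bro conn logs looks like the following (the lumberjack port, column list, and type name are assumptions for illustration, not copied from the repo):

```conf
input {
  lumberjack {
    port => 5000
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "bro_conn" {
    # Bro logs are tab-separated; the separator below is a literal tab.
    csv {
      columns => ["ts","uid","id.orig_h","id.orig_p","id.resp_h","id.resp_p","proto","service","duration"]
      separator => "	"
    }
    geoip {
      source => "id.resp_h"
      database => "/usr/share/GeoIP/GeoLiteCity.dat"
    }
  }
}
output {
  elasticsearch { host => localhost }
}
```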



Client Installation:
wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -;
echo 'deb http://packages.elasticsearch.org/logstashforwarder/debian stable main' | sudo tee /etc/apt/sources.list.d/logstashforwarder.list;
sudo apt-get update;
sudo apt-get install logstash-forwarder;
cd /etc/init.d/; sudo wget https://raw.github.com/elasticsearch/logstash-forwarder/master/logstash-forwarder.init -O logstash-forwarder;
sudo chmod +x logstash-forwarder;
sudo update-rc.d logstash-forwarder defaults;
sudo mkdir -p /etc/pki/tls/certs;
cd /etc/pki/tls; sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
#Copy cert to logstash server
scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/etc/pki/tls/certs/
#Copy key to logstash server
scp /etc/pki/tls/private/logstash-forwarder.key user@server_private_IP:/etc/pki/tls/private/



#Client Logstash configuration file
#Clone server config file
git clone  https://github.com/neu5ron/siem-and-event-forwarding-configs.git
#Make sure you edit the file "logstash-bro_client.conf" to include the location of your bro logs and your $SERVERIP before moving the file.
sudo mv siem-and-event-forwarding-configs/logstash-bro_client.conf  /etc/logstash-forwarder
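For reference, a logstash-forwarder config file is JSON shaped like the following (the Bro log path, the type name, and the port are assumptions to edit for your site, and SERVERIP is the placeholder from the step above):

```json
{
  "network": {
    "servers": [ "SERVERIP:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/usr/local/bro/logs/current/conn.log" ],
      "fields": { "type": "bro_conn" }
    }
  ]
}
```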

#Restart logstash
sudo service logstash-forwarder restart




#Now start logstash on the server
sudo service logstash restart



Notes:


Errors/logs for Logstash:
server: /var/log/logstash/logstash.log
client: /var/log/syslog

Troubleshoot Logstash client:
sudo /opt/logstash-forwarder/bin/logstash-forwarder -config=/etc/logstash-forwarder

Troubleshoot Logstash server:
sudo /opt/logstash/bin/logstash -f /etc/logstash/conf.d/all_logstash.conf --configtest

ElasticSearch:
Get list of indices:
curl -XGET 'http://localhost:9200/_aliases'
Delete a specific type within an index:
curl -XDELETE 'http://{server}/{index_name}/{type_name}/'
example: curl -XDELETE 'http://localhost:9200/logstash-2014.11.18/palo_alto_traffic_log'
Delete all indices:
curl -XDELETE 'http://localhost:9200/*'
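Since Logstash names its indices by day, cleanup can be driven by date arithmetic instead of deleting by hand; a sketch that builds the curl for an index older than 30 days, left as a dry run (GNU date assumed; drop the echo to actually delete):

```shell
# Build the index name for DAYS days ago, e.g. logstash-2014.11.16.
DAYS=30
OLD_INDEX=$(date -d "-${DAYS} days" +logstash-%Y.%m.%d)

# Dry run: drop the echo to actually delete the index.
echo curl -XDELETE "http://localhost:9200/${OLD_INDEX}"
```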




Documentation Followed:

#ELK

https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04


http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html

http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/heap-sizing.html

#Bro

http://everythingshouldbevirtual.com/bro-ids-logstash-parsing

http://www.appliednsm.com/parsing-bro-logs-with-logstash/

#Palo Alto

https://github.com/timconradinc/logstash-palo-alto/tree/master/logstash

https://groups.google.com/forum/m/#!topic/logstash-users/I6iHvnlrWVM

#Using Kibana

http://blog.eslimasec.com/2014/05/elastic-security-deploying-logstash.html

https://stackoverflow.com/questions/26864881/kibana-splits-url

http://www.elasticsearch.org/blog/kibana-whats-cooking/

http://www.elasticsearch.org/blog/use-elk-display-security-datasources-iptables-kippo-honeypot/

#SSL

http://technosophos.com/2014/03/19/ssl-password-protection-for-kibana.html

https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html