The Elastic Stack, consisting of Elasticsearch, Logstash, and Kibana,
commonly abbreviated "ELK", makes it easy to enrich, forward, and
visualize log files. ELK is especially good for getting the most from your Snort 3.0 logs. This post will show you how to create a cool dashboard:
The dashboard shows the following:
- bring_da_heat - a heat map that plots event priority vs classification
- apple_pie - a pie chart that shows total bytes transferred by app
- greatest_hits - a data table that shows the rules generating the most events
- global_hot_spots - a geo plot of the event source address*
- size_o_gram - a histogram of logged packet / buffer sizes
Get Started
To get started, you will need to install the following: Snort 3.0, Open App ID, the Snort 3.0 community rules, and the Elastic Stack.
Go ahead and get Snort 3.0 and ELK installed now if you haven't done so already. There is plenty of help for that available elsewhere. Some things to note:
- The GitHub repo is updated multiple times per week, and the master branch is always kept clean, so that is the best way to get Snort 3.0.
- The base appid module is built into Snort 3.0 but you will need Open App ID to get the Lua detector plugins.
- You can use the community rules in 3.0 format or translate other 2.X rules with snort2lua (see the example after this list).
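If you have 2.X rules or configs to bring over, snort2lua is built along with Snort 3.0. Here is a minimal sketch; the input path is a placeholder, and the exact options may vary by build, so check snort2lua --help:
# translate a Snort 2.X configuration (and any rules it includes) to
# 3.0 format; the converted config is written to snort.lua by default
INSTALL/bin/snort2lua -c /path/to/snort.conf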
Run Snort
The next step is to get Snort running and generating events and app stats. Add the following to the default config file (the one passed with the -c argument in the command below):
appid =
{
    -- enable app stats logging
    log_stats = true,

    -- where the Open App ID detectors are installed
    app_detector_dir = 'ODP'
}
alert_json =
{
    -- the fields to log for each event, in order
    fields = 'timestamp pkt_num proto pkt_gen pkt_len dir ' ..
        'src_addr src_port dst_addr dst_port service rule priority class action ' ..
        'b64_data'
}
The uppercase tokens above and below are placeholders:
- ODP is the path where you installed Open App ID. Note this path does not include the trailing /odp.
- INSTALL is the install prefix you used when configuring your Snort 3.0 build.
- RULES is the path containing the community rules.
- PCAP is your favorite pcap. You could use -i <iface> instead.
This command will process your pcap and generate alerts.json and app_stats.log files in your current directory:
INSTALL/bin/snort \
-c INSTALL/etc/snort/snort.lua \
-R RULES/snort3-community.rules \
--plugin-path INSTALL/lib \
-r PCAP \
-A json -y -q > alerts.json
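Assuming your pcap trips at least one rule, a quick sanity check that both outputs were produced:
# alerts.json gets one JSON event per line; app_stats.log is written periodically
wc -l alerts.json
ls -l app_stats.log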
Given the configured fields, the JSON events look like this:
{ "timestamp" : "03/08/01-04:21:07.583700", "pkt_num" : 737, "proto" : "UDP", "pkt_gen" : "raw", "pkt_len" : 161, "dir" : "C2S", "src_addr" : "192.168.16.222", "src_port" : 3076, "dst_addr" : "239.255.255.250", "dst_port" : 1900, "service" : "unknown", "rule" : "1:1917:15", "priority" : 3, "class" : "Detection of a Network Scan", "action" : "allow", "b64_data" : "TS1TRUFSQ0ggKiBIVFRQLzEuMQ0KSG9zdDoyMzkuMjU1LjI1NS4yNTA6MTkwMA0KU1Q6dXJuOnNjaGVtYXMtdXBucC1vcmc6ZGV2aWNlOkludGVybmV0R2F0ZXdheURldmljZToxDQpNYW46InNzZHA6ZGlzY292ZXIiDQpNWDozDQoNCg==" }
The app stats are in csv format with Unix timestamp, app, bytes to client, and bytes to server:
1059733200,FTP Data,4441712,185694921
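The first column is seconds since the epoch; to eyeball a bucket time, convert it on the command line (BSD/OS X date shown; GNU date uses -d @1059733200):
# print the UTC time for the sample row above
date -u -r 1059733200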
Run ELK
Now let's process these logs with the Elastic Stack. Start by running Elasticsearch and Kibana as follows:
cd elasticsearch-5.5.1/
bin/elasticsearch -v &
cd kibana-5.5.1-darwin-x86_64
bin/kibana &
I've got version 5.5.1 of ELK installed on OS X; adjust the paths as needed for your install. We are using the default ports, 9200 for Elasticsearch and 5601 for Kibana, so you may need to adjust those on your system as well.
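Before going further, it is worth confirming Elasticsearch is up and answering on the default port:
# should return a small JSON blob with the cluster name and version
curl http://localhost:9200/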
Now we are ready to send the logs to Elasticsearch using Logstash. Get the config files
here. Edit snort_json.txt and snort_apps.txt and set the path on the 3rd line to point to your log files. Then you can run logstash like this:
cd logstash-5.5.1/
bin/logstash -f snort_json.txt &
bin/logstash -f snort_apps.txt &
Visualize
The logstash commands will populate the logstash-snort3j and logstash-snort3a indexes in Elasticsearch. At this point we can start working on the dashboard using Kibana. Point your browser to http://localhost:5601/ and follow these steps:
- Click on the gear (Management), Index Patterns, + Create Index Pattern, set the name logstash-snort3j, and then click Create.
- Edit b64_data (click pencil on right), set Format = String and Transform = Base64 Decode, and then click Update Field.
- Click on the gear (Management), Index Patterns, + Create Index Pattern, set the name logstash-snort3a, and then click Create.
- Click the scripted fields tab (still on the logstash-snort3a pattern), + Add Scripted Field, set Name = app_total_bytes and Script = doc['bytes_to_client'].value + doc['bytes_to_server'].value, and then click Create Field.
At this point you can click on the icons on the left for Discover, Visualize, and Dashboard to view the raw data, create tables, charts, etc., and build a dashboard. This is really best done by just exploring and experimenting; however, you can import the dashboard shown above by clicking Management, Saved Objects, Import and selecting snort_dash.json. Tip: base your visualizations on saved searches so that you don't lose them when the data is deleted.
snort_csv.txt is also provided for use with snort -A csv if you want to process alerts in csv format. The index name for that is logstash-snort3.
* Snort 3.0 supports the target rule option, so use that instead of the source address if your rules have targets. That identifies the attacker correctly for shellcode rules, etc.