Falco is an open-source tool created by Sysdig that detects unexpected application behavior and alerts on threats at runtime, i.e. when the container is already deployed and running a service.

Falco analyses Linux system calls from the kernel at runtime and compares the stream against a set of rules. If a rule is violated, Falco triggers an alert.

It is a behavior-based monitoring system. Instead of looking for known threats, it looks for suspicious activities performed in the running containers and triggers an alert. It works as an intrusion detection system on any Linux host.

Some examples of alerts are:

  • A container running an interactive shell
  • An unauthorized process
  • A write to a non-user-data directory
  • A sensitive mount by a container
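
To give a sense of what drives these alerts, here is a simplified sketch, in Falco's YAML rule format, of the kind of rule behind the interactive-shell alert above (not the exact rule that ships with Falco):

- rule: Terminal shell in container
  desc: A shell was spawned inside a container with an attached terminal
  condition: evt.type = execve and container.id != host and proc.name = bash
  output: Shell spawned in a container (user=%user.name container=%container.name cmdline=%proc.cmdline)
  priority: WARNING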

EFK :-

EFK is a log aggregation stack, i.e. it collects logs from hosts or applications and visualizes them in a graphical format. It consists of:

  • Elasticsearch
  • Fluentd
  • Kibana
I) Elasticsearch
  • A distributed, text-based search engine.
  • Elasticsearch processes JSON data.
II) Fluentd
  • Fluentd is an open-source data collector.
  • It does the work of collecting data and forwarding it to Elasticsearch (a minimal configuration sketch follows this list).
III) Kibana
  • Kibana is an open-source data visualization dashboard for Elasticsearch.
  • It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster.
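
For illustration, a minimal Fluentd configuration for this pipeline might look like the following. This is a sketch under a few assumptions: Falco writes JSON events to /var/log/falco_events.json, Elasticsearch is reachable at the hostname elasticsearch, and the fluent-plugin-elasticsearch plugin is installed:

<source>
  @type tail                     # tail Falco's JSON event log
  path /var/log/falco_events.json
  pos_file /var/log/falco_events.pos
  tag falco
  format json
</source>

<match falco>
  @type elasticsearch            # provided by fluent-plugin-elasticsearch
  host elasticsearch
  port 9200
  logstash_format true           # writes to logstash-* indices, matching the index pattern used later
</match>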

We will be integrating Falco with the EFK stack.

INSTALLATION :-

Ordinarily, you can install Falco on its own and use it without integrating it with the EFK stack.

You can either deploy it as a container or install it on the host machine. However, here we are going to deploy Falco along with the EFK stack as containers running on the same network.

In this case, the operating system used is Ubuntu.

Step 1 :-

Clone the following repository on your host machine.

You can download the repository from here.

Step 2 :-

Here, copy the ‘falco.yaml’ and ‘falco_rules.yaml’ files into the /etc/falco directory, and create a file called falco_events.json in the /var/log directory.
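
Assuming you start from the root of the cloned repository, this amounts to something like:

sudo mkdir -p /etc/falco
sudo cp falco.yaml falco_rules.yaml /etc/falco/
sudo touch /var/log/falco_events.json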

We will use the default rule set provided by Falco (falco_rules.yaml), which detects things like:

  • A shell spawned in any container.
  • Sensitive files being opened.
  • Processes running other than the one intended.

We can also create a custom rule set and define our own rules in falco_rules.yaml.
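
For example, a hypothetical custom rule that flags writes under /root could look like this (the condition and output fields would need tuning for a real deployment):

- rule: Write below root home
  desc: A file was opened for writing under /root
  condition: evt.type = open and evt.is_open_write = true and fd.name startswith /root
  output: File opened for writing below /root (user=%user.name file=%fd.name command=%proc.cmdline)
  priority: ERROR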

Step 3 :-


Since we have already configured all the containers and the services, use the following command to start all the containers.

docker-compose up

Step 4 :-

We have not triggered any events yet.

You can either deploy some containers and try to perform some activities on them, like starting a shell or writing to files under directories that are not allowed.

Alternatively, you can use the Falco event generator, which performs the same syscalls that the default Falco rule set disallows.

To prevent any configuration files on the host from being changed by the event generator, you should deploy it only as a container.

To deploy it as a container,

docker pull falcosecurity/event-generator

docker run -it --rm falcosecurity/event-generator run syscall --loop

In this case, it will perform a variety of actions that are detected by the default Falco rule set.

Step 5 :- 

Now, go to your localhost on port 5601, where Kibana is running.

Click on ‘Discover’ to create an index pattern.

Define the index pattern as logstash* and click on Next.

Here, select @timestamp and click on Create index pattern.

Once we create the index pattern, we can see a number of events triggered by the event generator.

We can see an alert for a file being created inside the /root directory, and a shell being spawned in one of the containers. We can also see that another process is running in a container which was not intended. The logs can even be found in the /var/log/falco_events.json file in JSON format.
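
Each entry in that file is a single JSON object. Pretty-printed for readability, an illustrative event looks roughly like this (the exact fields vary by Falco version):

{
  "output": "A shell was spawned in a container with an attached terminal (user=root container=nginx shell=bash)",
  "priority": "Warning",
  "rule": "Terminal shell in container",
  "time": "2020-08-19T19:21:47.812817161Z",
  "output_fields": { "container.name": "nginx", "proc.name": "bash", "user.name": "root" }
}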

For more information on how to create a custom rule set, you can refer to the official documentation available here.

Step 6 :-

We can see the logs that are generated in JSON format. However, to view all the events in a visual format, let’s create pie charts.

Click on Visualize -> Add -> Pie

Next, select the logstash* index.

Next, in the Buckets section, set the Aggregation to Terms and the Field to any of the rules or events you want to make a pie chart of. Here I am selecting priority.keyword.

In the Options section, uncheck the Donut checkbox and check the ‘Show labels’ box. Click on the play button above to apply the changes.

The pie chart will display the data once we apply the changes.

Click on the save option above and save it with any name of your choice. 

You can create any number of charts for different rules using the same process. Go to Dashboards and add the chart as follows. Note that I have already added a few.

Dashboards -> Add -> Select your panel.

After that, we can watch live events being triggered in a graphical format.

Sending Alerts to Slack :-

We can even send the alerts to a Slack channel. To integrate Kibana with Slack, follow the steps below.

Step 1 :-

Go to Incoming Webhooks to create a webhook.

Step 2 :-

Select a channel to send the alerts to and click on ‘Add Incoming WebHook Integration’.

Here we can see that the integration has been added.

The webhook URL shown here will be added to our elasticsearch.yml file.

Add the following line to the elasticsearch.yml file.

xpack.notification.slack.account.monitoring.url: <webhook-url>

For more information on how to setup various other settings for your slack alert, please check the following article.

Note :- In newer versions of Elasticsearch, you cannot configure Slack accounts in the elasticsearch.yml file. You have to add them with the Elasticsearch secure keystore method instead. Please refer to the documentation mentioned above for further details.
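
With the keystore method, adding the webhook URL looks something like the following (assuming the same monitoring account name; check the documentation for your Elasticsearch version):

bin/elasticsearch-keystore add xpack.notification.slack.account.monitoring.secure_url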

Step 3:-

In the Kibana management dashboard, go to the ‘Watcher’ option.

Now, click on the add threshold option and configure it as follows.

You can name it anything you want, and for the indices to query option, select the index you added in your Elasticsearch configuration.

You can create your own condition to generate an alert. Here we generate an alert when the count of triggered events is above 1000 in the last 5 minutes.

Next, select the recipient to send the message to, enter the message you want to be displayed, and click on Save.

You can test whether it is working by clicking on the ‘Send a sample message now’ option. Once you click on it, you will be able to see an alert on Slack.

Brief overview of docker-compose.yaml :-

  • Fluentd : We build the image using the provided Dockerfile, because we want to install an Elasticsearch plugin inside the container. We mount the location where Falco generates the events as a volume, which enables Fluentd to forward the logs from this location to Elasticsearch.
  • For Elasticsearch, we pull the image from Docker Hub. We create a volume for data storage and publish port 9200.
  • For Kibana, we pull the image from Docker Hub. We set an environment variable to specify the URL for Elasticsearch and link it with the Elasticsearch container. We also publish port 5601.
  • For Falco, we set the privileged option to true to give it the extra privileges it needs. We mount some of the system paths into the container, as mentioned in the official installation documentation. A condensed sketch of the whole file follows.
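
Put together, the file looks roughly like this. This is a condensed sketch, not the exact file from the repository: service names, image tags, and build paths are illustrative:

version: "3"
services:
  fluentd:
    build: ./fluentd                      # custom image with the Elasticsearch output plugin installed
    volumes:
      - /var/log:/var/log                 # Falco writes falco_events.json here
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1   # illustrative tag
    environment:
      - discovery.type=single-node
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.1                 # illustrative tag
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
  falco:
    image: falcosecurity/falco
    privileged: true                      # Falco needs access to the kernel
    volumes:
      - /var/run/docker.sock:/host/var/run/docker.sock
      - /dev:/host/dev
      - /proc:/host/proc:ro
      - /boot:/host/boot:ro
      - /lib/modules:/host/lib/modules:ro
      - /usr:/host/usr:ro
      - /etc/falco:/etc/falco
      - /var/log:/var/log                 # Falco's event log, shared with Fluentd
volumes:
  esdata: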

Conclusion :-

  • Falco is a great tool for keeping track of all your containers and spotting suspicious behavior. It triggers alerts in real time.
  • We can integrate it with the ELK or EFK stack to create our own SIEM and visualize the data it generates.

Thank you for reading! – Siddarth Tanna and Setu Parimi

Sign up for the blog directly here.

Check out our professional services here.

Feedback is welcome! For professional services, fan mail, hate mail, or whatever else, contact [email protected]

