Sandfly uses a server to manage the user interface and database. Scanning nodes make the actual connections to remote systems to hunt for intruders on your network.
To get the best performance, we recommend the following.
The server runs Docker containers that contain a web interface and an optional Elasticsearch database. The Elasticsearch database needs plenty of RAM and CPU for best performance.
The server containers have been tested to work on Ubuntu 18, Ubuntu 20, and CentOS 7. Other Linux distributions may work, but you must run the latest version of Docker in all cases. Old Docker versions are not compatible with Sandfly and will fail.
We recommend you have a system with at least 16GB of RAM and two or more CPUs dedicated to it. More RAM is better and an SSD drive is recommended for best performance.
If you are monitoring a very large number of systems, you will need to scale these figures up appropriately.
A server that is underpowered will have database timeout issues. If the User Interface is taking a long time to load data, you have too few resources and need to upgrade the RAM and CPU.
Latest Version of Docker Required
Regardless of what version of Linux you want to use to run the Server and Node, they must be running the latest version of Docker. Some Linux distributions have very old versions of Docker in their package repositories. Please use the Sandfly Docker install scripts to be sure you are running only the latest version of Docker and not an out of date version.
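As a quick sanity check before installing, you can compare the Docker version on a host against a minimum. This is only a sketch: the `MIN_VERSION` value below is a placeholder for illustration, not an official Sandfly requirement, and the Sandfly Docker install scripts handle getting you onto a current Docker release.

```shell
#!/bin/sh
# Sketch: verify the installed Docker engine meets a minimum version.
# MIN_VERSION is a placeholder, not an official Sandfly figure.
MIN_VERSION="20.10.0"

version_ok() {
    # Succeeds if $1 >= $2 when compared as dotted version numbers.
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

installed="$(docker version --format '{{.Server.Version}}' 2>/dev/null)"
if [ -n "$installed" ] && version_ok "$installed" "$MIN_VERSION"; then
    echo "Docker $installed is new enough"
else
    echo "Docker missing or older than $MIN_VERSION; use the Sandfly install scripts"
fi
```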
Sandfly has two main options for running the database to store results: local or remote.
Local Elasticsearch Option
With the local option, you use the default Elasticsearch container during install, and Sandfly initializes the database and User Interface (UI) so everything is stored in this location. This is a good solution for smaller deployments that want to keep things simple and set up fast.
The local Elasticsearch database is internally routed to the Sandfly server and is not reachable from the Internet. This ensures you can't accidentally expose the data to the world, but it means that Elasticsearch tools such as Kibana won't work, as they cannot connect to the database.
Remote Elasticsearch Option
The remote option uses an Elasticsearch instance hosted outside the Sandfly install, connected to by a URL with a username and password set up when you install the cluster. The URL can point to a separate machine, or to the same system, depending on how you want to deploy it.
In this mode you send events to the Elasticsearch database as usual, but it can be hosted wherever you want. You can also attach tools like Kibana to the database securely to help with searching and reporting on data.
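A remote Elasticsearch URL of this shape carries the credentials inline, which makes it worth sanity-checking before install. The host, user, and password below are hypothetical placeholders, not Sandfly defaults; the snippet just pulls the URL apart with shell parameter expansion.

```shell
#!/bin/sh
# Hypothetical remote Elasticsearch URL with embedded credentials.
# All values are placeholders for illustration only.
ELASTIC_URL="https://sandfly:changeme@es.example.com:9200"

no_scheme="${ELASTIC_URL#*://}"   # sandfly:changeme@es.example.com:9200
userpass="${no_scheme%%@*}"       # sandfly:changeme
hostport="${no_scheme#*@}"        # es.example.com:9200

echo "user: ${userpass%%:*}"
echo "host: ${hostport%%:*}"
echo "port: ${hostport##*:}"

# With the pieces confirmed, a reachability check could look like
# (requires network access to the cluster):
#   curl -u "$userpass" "https://${hostport}/_cluster/health"
```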
If you have a small deployment and just want security monitoring without tools like Kibana for reporting, the local option is a good choice. If you have a larger deployment and need Elasticsearch clustering and redundancy, along with tools like Kibana for data analytics, the remote option is the best choice.
The scanning nodes are Docker containers that are multi-threaded. You can run multiple node containers on a single system instance.
The node containers have been tested to work on Ubuntu 18 and CentOS 7 running Docker. Other Linux distributions may work, but you must run the latest version of Docker in all cases.
For best performance we recommend your system instances have at least 2GB of RAM.
With 2GB of RAM, you can run 4 scanning node containers. Each scanning node container runs 500 threads, so 4 node containers give you 2,000 scanning threads available to monitor hosts across your enterprise.
You can increase RAM further to add more scanning node containers, or start a second virtual machine instance and run nodes there for added redundancy in case one is taken offline for any reason.
The nodes connect back to the server over RabbitMQ messaging (AMQP). This is a high-performance messaging system that handles connection and load management automatically. As long as the node containers can reach the server, they will organize themselves correctly regardless of where they are running.