
Log Level Change Guide

A step-by-step guide for modifying the level of logging on the Sandfly Server or Nodes.

This administrative-level change is not needed under most circumstances; however, this guide is provided for cases where normal logs are not sufficient.

Preparation

In order to complete the steps in this guide, the following items are required:

  • A valid account that can access the Sandfly User Interface (UI).
  • Shell access to the target Sandfly host(s) as root or a user who can edit the Sandfly config file.
  • For installations with multiple nodes, especially those with multiple named queues, determine which node(s) need to be changed depending on your logging needs.

Step 1: Pause Scheduled Tasks

Deactivate all enabled schedules via the Sandfly UI or API. This ensures that no scheduled tasks start during the configuration change. See Deactivating and Deleting Schedules for details.

🚧

CAUTION: Completing this step will stop all scheduled scanning

Performing this step will stop all scheduled scanning until reactivated, which effectively means that monitoring of your systems will not occur. Consider using a separate Sandfly test system for debug log collection if possible.

Step 2: Make Sure All Tasks Have Completed

In the Sandfly UI, open the Task Queues view via its button in the Top Bar or via the sidebar at Scanning > Task Queues, and confirm that every task queue is at 0 (zero), as indicated by the Total Tasks value. It is important not to stop Sandfly in the middle of scans, as doing so can leave orphaned files on the remote hosts.
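
The "all queues drained" check above can be sketched as a small script. The JSON shape below is hypothetical sample data invented for illustration; the Total Tasks value shown in the UI remains the authoritative check.

```shell
#!/bin/sh
# Hypothetical sketch: scan a saved copy of task-queue data for any
# non-zero totals. The JSON below is made-up sample data, not the
# actual Sandfly API response format.
response='{"queues":[{"name":"default","total_tasks":0},{"name":"dmz","total_tasks":0}]}'

# Match any "total_tasks" value whose first digit is non-zero.
if printf '%s' "$response" | grep -Eq '"total_tasks": *[1-9]'; then
  status="tasks still running - wait before stopping containers"
else
  status="all queues at zero"
fi
echo "$status"
```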

Step 3: Stop Running Containers

First, if you are changing a node, note how many node containers are currently running with the docker ps | grep -v 'sandfly-' command so that you can restart the same number later.

The associated container(s) on the target host must be stopped in order for the changed configuration to be passed into the container. To do that, run the 'shutdown_sandfly.sh' script found under ~/sandfly-setup/start_scripts/ on the target host.
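
The container count from the start of this step can be captured as below. The `docker ps` output here is made-up sample text, and the container and image names are illustrative only; run the real command on your host.

```shell
#!/bin/sh
# Sketch with invented `docker ps` output: count the node containers so
# the same number can be restarted in Step 5. Names are illustrative.
sample='CONTAINER ID   IMAGE             NAMES
1a2b3c4d5e6f   sandflysecurity   node_1
6f5e4d3c2b1a   sandflysecurity   node_2
0f9e8d7c6b5a   sandfly-server    sandfly-server'

# Skip the header row, then drop server-side containers, mirroring the
# grep -v 'sandfly-' filter used in the guide.
node_count=$(printf '%s\n' "$sample" | tail -n +2 | grep -cv 'sandfly-')
echo "node containers running: $node_count"
```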

Step 4: Edit the Config

Using your favorite text editor, edit the applicable config file located under ~/sandfly-setup/setup/setup_data/ on the target host. The file is named 'config.server.json' on the server and 'config.node.json' on all nodes. Once inside the file, search for "log_level" and change its value to the desired log level. Save the file and exit the editor.

Available container log levels:

  • info - (DEFAULT) Provides a base level of output for normal operations.
  • debug - Provides a verbose level of output, typically used for diagnosing issues.

🚧

CAUTION: Logging at the debug level produces increased log data size

Debug-level logging will generate a much larger than normal amount of log data. Be sure there is sufficient disk space available, and lower the log level once debugging is no longer necessary.
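
The edit in Step 4 can be scripted rather than done by hand. The sketch below works on a throwaway file whose contents are a minimal stand-in, not the real config; taking a backup first makes the change easy to revert once debugging is done.

```shell
#!/bin/sh
# Sketch: switch "log_level" to debug in a throwaway copy of the config.
# The file contents below are a minimal stand-in for illustration.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "log_level": "info"
}
EOF

cp "$cfg" "$cfg.bak"    # keep a backup to make reverting easy

# GNU sed in-place edit; on BSD/macOS use `sed -i ''` instead.
sed -i 's/"log_level": *"[a-z]*"/"log_level": "debug"/' "$cfg"
grep '"log_level"' "$cfg"
```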

Step 5: Start the Container(s)

With the configuration file updated, the associated container(s) need to be started so that the new setting is applied. To do that, run the 'start_sandfly.sh' script for the server or 'start_node.sh' for a node. Both scripts are found under ~/sandfly-setup/start_scripts/ on the target host.
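
The script choice above can be expressed as a small role check. Detecting the role by which config file is present is an assumption made for this sketch; confirm it against your actual deployment.

```shell
#!/bin/sh
# Sketch: pick the start script based on which config file exists in
# the given setup_data directory. The role-detection rule here is an
# assumption, not documented Sandfly behavior.
pick_start_script() {
  if [ -f "$1/config.server.json" ]; then
    echo "start_sandfly.sh"
  else
    echo "start_node.sh"
  fi
}

# Demo against a temporary directory posing as setup_data on a server.
demo=$(mktemp -d)
touch "$demo/config.server.json"
pick_start_script "$demo"
```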

If you are changing the log level to debug issues on nodes, we suggest starting only a single node, as this forces all scans for that queue name to go through that single container instance. If more than one node is running, scans are split randomly between the active nodes, making it difficult to know which log to view.

If restoring service back to normal, be sure to start the appropriate number of nodes for your environment.

At this point Sandfly should be functioning normally and producing logs with the new log level.

Step 6: Resume Paused Schedules

Resume all previously paused schedules from Step 1 in order to restore normal scanning activity.

This step may be deferred if the change is for quick debug log collection and the log setting will then be promptly reverted. If deferred, ensure that the schedules are re-enabled once everything is done.