Spylunking API Reference

Using TCP to Publish to Splunk

Here is the code for the Splunk Publisher that uses a TCP logging.SocketHandler to send logs to the configured Splunk server.

This class was built for writing JSON log messages to Splunk over a TCP Port.

Available environment variables:

export SPLUNK_TCP_ADDRESS="<splunk tcp address host:port - example port: 1514>"
export SPLUNK_INDEX="<splunk index>"
export SPLUNK_SOURCE="<splunk source>"
export SPLUNK_SOURCETYPE="<splunk sourcetype>"
export SPLUNK_DEBUG="<1 enable debug|0 off>"
export SPLUNK_LOG_TOKEN="<splunk log token>"
class spylunking.tcp_splunk_publisher.TCPSplunkPublisher(address=None, index=None, hostname=None, source=None, sourcetype='json', name=None, dc=None, env=None, custom_dict=None, debug=False, **kwargs)[source]

A logging handler to send logs to a Splunk Enterprise instance with a Splunk TCP input set up for JSON.

The TCP Splunk Publisher requires a JSON-ready entry in the /opt/splunk/etc/system/default/props.conf config file.

Tokens are not required for this to work, but can be included for authenticated TCP logging.

If you want to include a token in the TCP log body, then you can export:

export SPLUNK_LOG_TOKEN=<Optional Log Token>

Here are the additional lines added to the splunk props.conf for validation:

[usejson]
SHOULD_LINEMERGE = false
KV_MODE = json
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
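With the [usejson] stanza above in place, Splunk line-breaks on newlines and parses each line as JSON. As a rough sketch of what publishing over TCP looks like, here is a hypothetical message builder and sender; the field names and framing are assumptions for illustration, not the exact output of build_into_splunk_tcp_message:

```python
import json
import socket
import time


def build_tcp_message(body, token=None):
    """Build a newline-terminated JSON message for a Splunk TCP input
    configured with a JSON sourcetype (like the [usejson] stanza).
    The field names here are illustrative assumptions."""
    msg = {
        "timestamp": time.time(),
        "message": body,
    }
    if token:
        # tokens are optional for TCP inputs but can be included
        msg["token"] = token
    # one JSON object per line - Splunk splits on the newline
    return (json.dumps(msg) + "\n").encode("utf-8")


def send_to_splunk_tcp(address, payload):
    """Send one framed message to the host:port from SPLUNK_TCP_ADDRESS."""
    host, _, port = address.partition(":")
    with socket.create_connection((host, int(port)), timeout=5) as sock:
        sock.sendall(payload)
```

A call like send_to_splunk_tcp("splunk:1514", build_tcp_message("hello")) would deliver one JSON event, assuming a TCP input is listening on that port.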
build_into_splunk_tcp_message(body)[source]

Format a message for a Splunk JSON sourcetype

Parameters:body – splunk JSON dictionary msg body
debug_log(log_message)[source]

Write logs that only show up in debug mode. To turn on debugging, set this environment variable:

export SPLUNK_DEBUG="1"
Parameters:log_message – message to log
makePickle(record)[source]

Convert a log record into a JSON dictionary packaged into a socket-ready message for a Splunk TCP port that is set up with a JSON sourcetype

Parameters:record – log record to format as a socket-ready message
set_fields(custom_dict)[source]

Set the formatter’s fields as needed even after the logger has been created.

Parameters:custom_dict – new fields to patch in
write_log(log_message)[source]

Write logs to stdout

Parameters:log_message – message to log

Using Threads to Publish to Splunk

Here is the code for the Splunk Publisher that uses a thread to send logs to the configured Splunk server.

Note

Each created logger will spawn a separate thread with its own queue for log messages. When the parent process exits, be aware that the queue may still hold log messages, which can be lost if the parent process exits before they get a chance to be pushed to Splunk.

This handler is derived from the original repository: https://github.com/zach-taylor/splunk_handler

This version was built to fix issues seen with multiple Celery worker processes.

Available environment variables:

export SPLUNK_HOST="<splunk host>"
export SPLUNK_PORT="<splunk port: 8088>"
export SPLUNK_API_PORT="<splunk port: 8089>"
export SPLUNK_ADDRESS="<splunk address host:port>"
export SPLUNK_API_ADDRESS="<splunk api address host:port>"
export SPLUNK_TOKEN="<splunk token>"
export SPLUNK_INDEX="<splunk index>"
export SPLUNK_SOURCE="<splunk source>"
export SPLUNK_SOURCETYPE="<splunk sourcetype>"
export SPLUNK_VERIFY="<verify certs on HTTP POST>"
export SPLUNK_TIMEOUT="<timeout in seconds>"
export SPLUNK_QUEUE_SIZE="<num msgs allowed in queue - 0=infinite>"
export SPLUNK_SLEEP_INTERVAL="<sleep in seconds per batch>"
export SPLUNK_RETRY_COUNT="<attempts per log to retry publishing>"
export SPLUNK_RETRY_BACKOFF="<cooldown in seconds per failed POST>"
export SPLUNK_DEBUG="<1 enable debug|0 off>"
class spylunking.splunk_publisher.SplunkPublisher(host=None, port=None, address=None, token=None, index=None, hostname=None, source=None, sourcetype='text', verify=True, timeout=60, sleep_interval=2.0, queue_size=0, debug=False, retry_count=20, retry_backoff=2.0, run_once=False)[source]

A logging handler to send logs to a Splunk Enterprise instance running the Splunk HTTP Event Collector. Originally inspired by the repository: https://github.com/zach-taylor/splunk_handler

This class allows multiple processes, like Celery workers, to reliably publish logs to Splunk from inside of a Celery task.
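The pattern the class describes - emit() queues the message and a helper thread batches and publishes on a sleep interval - can be sketched with the standard library. This is a minimal illustration, not the real SplunkPublisher; the HTTP POST to the Splunk HEC is replaced with an in-memory list:

```python
import logging
import queue
import threading


class QueuedHandler(logging.Handler):
    """Illustrative sketch of the queue-plus-worker-thread pattern:
    emit() only queues the formatted record, and a helper thread
    batches queued messages on a sleep interval."""

    def __init__(self, sleep_interval=0.05):
        super().__init__()
        self.sleep_interval = sleep_interval
        self.queue = queue.Queue()
        self.published = []          # stands in for the Splunk endpoint
        self.shutdown_event = threading.Event()
        self.worker = threading.Thread(target=self.perform_work, daemon=True)
        self.worker.start()

    def emit(self, record):
        # never block the caller - just queue the formatted message
        self.queue.put(self.format(record))

    def perform_work(self):
        while not self.shutdown_event.is_set():
            self.flush_queue()
            self.shutdown_event.wait(self.sleep_interval)
        self.flush_queue()           # drain what is left on shutdown

    def flush_queue(self):
        batch = []
        while not self.queue.empty():
            batch.append(self.queue.get())
        if batch:
            # the real handler POSTs the batch to the HEC here
            self.published.append("\n".join(batch))

    def close(self):
        self.shutdown_event.set()
        self.worker.join()
        super().close()


log = logging.getLogger("queued-demo")
handler = QueuedHandler()
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("hello splunk")
handler.close()                      # flush before the process exits
```

Calling close() before exit mirrors the force_flush/shutdown behavior documented below: it drains the queue so queued messages are not lost.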

build_payload_from_queued_messages(use_queue, shutdown_event, triggered_by_shutdown=False)[source]

Empty the queued messages by building a large self.log_payload

Parameters:
  • use_queue – queue holding the messages
  • shutdown_event – shutdown event
  • triggered_by_shutdown – called during shutdown
close()[source]
debug_log(log_message)[source]

Write logs that only show up in debug mode. To turn on debugging, set this environment variable:

export SPLUNK_DEBUG="1"
Parameters:log_message – message to log
emit(record)[source]

Emit handler that queues the message for the helper thread to send to Splunk on the sleep_interval

Parameters:record – LogRecord to send to Splunk https://docs.python.org/3/library/logging.html
force_flush()[source]

Flush the queue and publish everything to Splunk

format_record(record)[source]

Convert a log record into a Splunk-ready format

Parameters:record – message to format
is_shutting_down(shutdown_event)[source]

Determine if the parent is shutting down or this was triggered to shut down

Parameters:shutdown_event – shutdown event
perform_work()[source]

Process handler function for processing messages found in the multiprocessing.Manager.queue

Build the self.log_payload from the queued log messages and POST it to the Splunk endpoint

publish_to_splunk(payload=None)[source]

Build the self.log_payload from the queued log messages and POST it to the Splunk endpoint

Parameters:payload – string message to send to Splunk
queue_empty(use_queue)[source]
Parameters:use_queue – queue to test
shutdown()[source]
start_worker_thread(sleep_interval=1.0)[source]

Start the helper worker thread to publish queued messages to Splunk

Parameters:sleep_interval – sleep in seconds before reading from the queue again
write_log(log_message)[source]

Write logs to stdout

Parameters:log_message – message to log

Using Multiprocessing to Publish to Splunk

Here is the code for the Splunk Publisher that uses a multiprocessing.Process worker to send logs to the configured Splunk server.

Note

Each created logger will fork a separate process with its own queue for log messages. When the parent process exits, be aware that the queue may still hold log messages, which can take some time to finish publishing before the process exits.

Publish to Splunk using a multiprocessing.Process worker

This handler is derived from the original repository: https://github.com/zach-taylor/splunk_handler

Please note this will not work with Celery. Please use the spylunking.splunk_publisher.SplunkPublisher if you want to see logs inside a Celery task.

Supported environment variables:

export SPLUNK_HOST="<splunk host>"
export SPLUNK_PORT="<splunk port: 8088>"
export SPLUNK_API_PORT="<splunk port: 8089>"
export SPLUNK_ADDRESS="<splunk address host:port>"
export SPLUNK_API_ADDRESS="<splunk api address host:port>"
export SPLUNK_TOKEN="<splunk token>"
export SPLUNK_INDEX="<splunk index>"
export SPLUNK_SOURCE="<splunk source>"
export SPLUNK_SOURCETYPE="<splunk sourcetype>"
export SPLUNK_VERIFY="<verify certs on HTTP POST>"
export SPLUNK_TIMEOUT="<timeout in seconds>"
export SPLUNK_QUEUE_SIZE="<num msgs allowed in queue - 0=infinite>"
export SPLUNK_SLEEP_INTERVAL="<sleep in seconds per batch>"
export SPLUNK_RETRY_COUNT="<attempts per log to retry publishing>"
export SPLUNK_RETRY_BACKOFF="<cooldown in seconds per failed POST>"
export SPLUNK_DEBUG="<1 enable debug|0 off>"
class spylunking.mp_splunk_publisher.MPSplunkPublisher(host=None, port=None, address=None, token=None, index=None, hostname=None, source=None, sourcetype='text', verify=True, timeout=60, sleep_interval=None, queue_size=0, debug=False, retry_count=20, run_once=False, retry_backoff=2.0)[source]

A logging handler to send logs to a Splunk Enterprise instance running the Splunk HTTP Event Collector.

Originally inspired from the repository but written for python’s multiprocessing framework: https://github.com/zach-taylor/splunk_handler

This class allows multiple processes like Celery workers to reliably publish logs to Splunk from inside of a Celery task

build_payload_from_queued_messages(use_queue, shutdown_event)[source]

Empty the queued messages by building a large self.log_payload

Parameters:
  • use_queue – queue holding the messages
  • shutdown_event – shutdown event
close()[source]
debug_log(log_message)[source]

Write logs that only show up in debug mode. To turn on debugging, set this environment variable:

export SPLUNK_DEBUG="1"
Parameters:log_message – message to log
emit(record)[source]

Emit handler that queues the message for the helper thread to send to Splunk on the self.sleep_interval

Parameters:record – LogRecord to send to Splunk https://docs.python.org/3/library/logging.html
force_flush()[source]

Flush the queue and publish everything to Splunk

format_record(record)[source]

Convert a log record into a Splunk-ready format

Parameters:record – message to format
is_shutting_down(shutdown_event)[source]

Determine if the parent is shutting down or this was triggered to shut down

Parameters:shutdown_event – shutdown event
perform_work(use_queue, shutdown_event, shutdown_ack_event, already_done_event)[source]

Process handler function for processing messages found in the multiprocessing.Manager.queue

Build the self.log_payload from the queued log messages and POST it to the Splunk endpoint

Parameters:
  • use_queue – multiprocessing.Queue - queue holding the messages
  • shutdown_event – multiprocessing.Event - shutdown event
  • shutdown_ack_event – multiprocessing.Event - acknowledge shutdown is in progress
  • already_done_event – multiprocessing.Event - already shutting down
publish_to_splunk(payload=None, shutdown_event=None, shutdown_ack_event=None, already_done_event=None)[source]

Publish the queued messages to Splunk

Parameters:
  • payload – optional string log message to send to Splunk
  • shutdown_event – multiprocessing.Event - shutdown event
  • shutdown_ack_event – multiprocessing.Event - acknowledge shutdown is in progress
  • already_done_event – multiprocessing.Event - already shutting down
queue_empty(use_queue)[source]
Parameters:use_queue – queue to test
shutdown()[source]
start_worker()[source]

Start the helper worker process to package queued messages and send them to Splunk

write_log(log_message)[source]
Parameters:log_message – message to log

Get a Splunk Service Session Key

Get a Splunk User Session Key - for running searches

spylunking.get_session_key.get_session_key(user, password, url='https://localhost:8089', verify=False, ssl_options=None)[source]

This will get a user session key and raise an exception on any errors

Parameters:
  • user – username
  • password – password
  • url – splunk auth url
  • verify – verify cert
  • ssl_options – ssl options dictionary
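Under the hood this wraps Splunk's REST login endpoint, which returns the session key as XML. A hedged sketch of that call, assuming the standard /services/auth/login endpoint (the helper names here are hypothetical, not spylunking's own):

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET


def parse_session_key(xml_text):
    """Extract the sessionKey from Splunk's /services/auth/login
    XML response; raise if the key is missing."""
    root = ET.fromstring(xml_text)
    node = root.find("sessionKey")
    if node is None or not node.text:
        raise ValueError("no sessionKey in response")
    return node.text


def login(user, password, url="https://localhost:8089"):
    """Hypothetical sketch of the REST call get_session_key wraps.
    A real call against a self-signed Splunk cert would also need an
    ssl context matching the verify/ssl_options arguments."""
    data = urllib.parse.urlencode(
        {"username": user, "password": password}).encode("utf-8")
    with urllib.request.urlopen(
            url + "/services/auth/login", data=data) as resp:
        return parse_session_key(resp.read().decode("utf-8"))
```

The session key returned this way is what authenticates subsequent search requests.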

Get a Splunk User Token

Get a Splunk User Token

spylunking.get_token.get_token(user=None, password=None, url='https://localhost:8089', verify=False, ssl_options=None, version='7.0.3', debug=False)[source]

This will get a user token and raise an exception on any errors

Parameters:
  • user – username - defaults to env var: SPLUNK_ADMIN_USER
  • password – password - defaults to env var: SPLUNK_ADMIN_PASSWORD
  • url – splunk auth url
  • verify – verify cert
  • ssl_options – ssl options dictionary
  • version – splunk version string
  • debug – debug xml response from splunk (for versioning)

Search Splunk

Search wrapper for authenticating with Splunk and running a search query.

spylunking.search.search(user=None, password=None, token=None, address=None, query_dict=None, verify=False, debug=False)[source]

Search Splunk with a pre-built query dictionary and wait until it finishes.

Parameters:
  • user – splunk username
  • password – splunk password
  • token – splunk token
  • address – splunk API address: localhost:8089
  • query_dict – query dictionary to search
  • verify – ssl verify
  • debug – debug flag
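The query_dict format is not documented above; as an assumption based on the Splunk search jobs REST API, a minimal dictionary might look like this:

```python
# Hypothetical query dictionary for spylunking.search.search - the
# exact keys the function expects are an assumption based on the
# Splunk search jobs REST API
query_dict = {
    "search": 'search index="antinex" AND name=hello-world | head 10',
    "earliest_time": "-5m",
    "latest_time": "now",
}

# usage sketch (requires a reachable Splunk API endpoint):
# results = search(
#     user="trex",
#     password="123321",
#     address="splunk:8089",
#     query_dict=query_dict)
```

Note the leading "search " keyword in the query string, which the Splunk REST API requires for ad-hoc searches.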

Set up a Logger

There are multiple loggers available depending on the type of logger needed.

Simple Logger

Build a simple colorized logger without dates that prints just the message in colors and does not publish logs to Splunk using:

from spylunking.log.setup_logging import simple_logger
log = simple_logger()
log.info('simple logger example')
simple logger example

No Date Colorized Logger

Build a colorized logger that preserves the parent application name and log level without a date field and does not publish logs to Splunk using:

from spylunking.log.setup_logging import no_date_colors_logger
log = no_date_colors_logger(name='app-name')
log.info('no date with colors logger example')
app-name - INFO - no date with colors logger example

Test Logger

The test logger is for unittests and does not publish to Splunk.

from spylunking.log.setup_logging import test_logger
log = test_logger(name='unittest logger')
log.info('unittest log line')
2018-06-25 16:47:54,053 - unittest logger - INFO - unittest log line

Console Logger

The console logger is the same as build_colorized_logger, which can be created with authenticated Splunk-ready logging using:

from spylunking.log.setup_logging import build_colorized_logger
log = build_colorized_logger(name='using-a-colorized-logger')
log.info('colorized logger example')
2018-06-25 16:01:50,118 - using-a-colorized-logger - INFO - colorized logger example

Define Custom Fields for Splunk

You can export a custom JSON dictionary to send as JSON fields that help drill down on log lines, using this environment variable:

export LOG_FIELDS_DICT='{"name":"hello-world","dc":"k8-splunk","env":"development"}'
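The same export can be done from Python before the logger is created, which is convenient in tests or entrypoint scripts; the field values below mirror the shell example:

```python
import json
import os

# equivalent of the shell export above - set before creating the logger
# so the custom fields are picked up
os.environ["LOG_FIELDS_DICT"] = json.dumps({
    "name": "hello-world",
    "dc": "k8-splunk",
    "env": "development",
})

# the logger setup reads the variable back as a dictionary
fields = json.loads(os.environ["LOG_FIELDS_DICT"])
```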

Or you can export the following environment variables if you just want a couple set in the logs:

export LOG_NAME=<application log name>
export DEPLOY_CONFIG=<PaaS/CaaS deployment config name>
export ENV_NAME=<deployed environment name>

Log some new test messages to Splunk:

test_logging.py
2018-06-25 20:48:51,367 - testingsplunk - INFO - testing INFO message_id=0c5e2a2c-9553-4c8a-8fff-8d77de2be78a
2018-06-25 20:48:51,368 - testingsplunk - ERROR - testing ERROR message_id=0dc1086d-4fe4-4062-9882-e822f9256d6f
2018-06-25 20:48:51,368 - testingsplunk - CRITICAL - testing CRITICAL message_id=0c0f56f2-e87f-41a0-babb-b71e2b9d5d5a
2018-06-25 20:48:51,368 - testingsplunk - WARNING - testing WARN message_id=59b099eb-8c0d-40d0-9d3a-7dfa13fefc90
2018-06-25 20:48:51,368 - testingsplunk - ERROR - Testing EXCEPTION with ex=Throw for testing exceptions message_id=70fc422d-d33b-4a9e-bb51-ed86aa0a02f9

Once published, you can search for these new logs using those new JSON fields with the sp search tool. Here is an example of searching for the logs with the application log name hello-world:

sp -q 'index="antinex" AND name=hello-world'
creating client user=trex address=splunk:8089
connecting trex@splunk:8089
2018-06-25 20:48:51,368 testingsplunk - ERROR - Testing EXCEPTION with ex=Throw for testing exceptions message_id=70fc422d-d33b-4a9e-bb51-ed86aa0a02f9
2018-06-25 20:48:51,368 testingsplunk - CRITICAL - testing CRITICAL message_id=0c0f56f2-e87f-41a0-babb-b71e2b9d5d5a
2018-06-25 20:48:51,368 testingsplunk - ERROR - testing ERROR message_id=0dc1086d-4fe4-4062-9882-e822f9256d6f
2018-06-25 20:48:51,367 testingsplunk - INFO - testing INFO message_id=0c5e2a2c-9553-4c8a-8fff-8d77de2be78a
done

And you can view the full JSON log dictionaries using the -j argument on the sp command:

sp -q 'index="antinex" AND name=hello-world' -j
creating client user=trex address=splunk:8089
connecting trex@splunk:8089
{
    "asctime": "2018-06-25 20:48:51,368",
    "custom_key": "custom value",
    "dc": "k8-deploy",
    "env": "development",
    "exc": null,
    "filename": "test_logging.py",
    "levelname": "ERROR",
    "lineno": 41,
    "logger_name": "testingsplunk",
    "message": "Testing EXCEPTION with ex=Throw for testing exceptions message_id=70fc422d-d33b-4a9e-bb51-ed86aa0a02f9",
    "name": "hello-world",
    "path": "/opt/spylunking/spylunking/scripts/test_logging.py",
    "tags": [],
    "timestamp": 1529984931.3688767
}
{
    "asctime": "2018-06-25 20:48:51,368",
    "custom_key": "custom value",
    "dc": "k8-deploy",
    "env": "development",
    "exc": null,
    "filename": "test_logging.py",
    "levelname": "CRITICAL",
    "lineno": 31,
    "logger_name": "testingsplunk",
    "message": "testing CRITICAL message_id=0c0f56f2-e87f-41a0-babb-b71e2b9d5d5a",
    "name": "hello-world",
    "path": "/opt/spylunking/spylunking/scripts/test_logging.py",
    "tags": [],
    "timestamp": 1529984931.3684626
}
{
    "asctime": "2018-06-25 20:48:51,368",
    "custom_key": "custom value",
    "dc": "k8-deploy",
    "env": "development",
    "exc": null,
    "filename": "test_logging.py",
    "levelname": "ERROR",
    "lineno": 29,
    "logger_name": "testingsplunk",
    "message": "testing ERROR message_id=0dc1086d-4fe4-4062-9882-e822f9256d6f",
    "name": "hello-world",
    "path": "/opt/spylunking/spylunking/scripts/test_logging.py",
    "tags": [],
    "timestamp": 1529984931.3682773
}
{
    "asctime": "2018-06-25 20:48:51,367",
    "custom_key": "custom value",
    "dc": "k8-deploy",
    "env": "development",
    "exc": null,
    "filename": "test_logging.py",
    "levelname": "INFO",
    "lineno": 27,
    "logger_name": "testingsplunk",
    "message": "testing INFO message_id=0c5e2a2c-9553-4c8a-8fff-8d77de2be78a",
    "name": "hello-world",
    "path": "/opt/spylunking/spylunking/scripts/test_logging.py",
    "tags": [],
    "timestamp": 1529984931.3679354
}
done

Debug the Logger

Export this variable before creating a logger.

export SPLUNK_DEBUG=1

Full Console Logger with Splunk

To build a logger, please use the build_colorized_logger method. Under the hood, this method authenticates with Splunk; if it gets a valid token, it enables the Splunk handlers by default and installs the token into the logging configuration dictionary before starting up the Python logger.

The build_colorized_logger calls the setup_logging method that builds the formatters, handlers and the python logging system.

Spylunking

Splunk-ready python logging functions, classes and tools

Please use these environment variables to publish logs and run searches with a local or remote splunk server:

export SPLUNK_ADDRESS="splunkenterprise:8088"
export SPLUNK_API_ADDRESS="splunkenterprise:8089"
export SPLUNK_PASSWORD="123321"
export SPLUNK_USER="trex"
export SPLUNK_TOKEN="<Optional pre-existing Splunk token>"

Splunk drill down fields with environment variables:

export LOG_NAME="<application log name>"
export DEPLOY_CONFIG="<application deployed config like k8 filename>"
export ENV_NAME="<environment name for this application>"

Splunk optional tuning environment variables:

export SPLUNK_INDEX="<splunk index>"
export SPLUNK_SOURCE="<splunk source>"
export SPLUNK_SOURCETYPE="<splunk sourcetype>"
export SPLUNK_VERIFY="<verify certs on HTTP POST>"
export SPLUNK_TIMEOUT="<timeout in seconds>"
export SPLUNK_QUEUE_SIZE="<num msgs allowed in queue - 0=infinite>"
export SPLUNK_SLEEP_INTERVAL="<sleep in seconds per batch>"
export SPLUNK_RETRY_COUNT="<attempts per log to retry publishing>"
export SPLUNK_RETRY_BACKOFF="<cooldown in seconds per failed POST>"
export SPLUNK_DEBUG="<debug the publisher - 1 enable debug|0 off>"
export SPLUNK_VERBOSE="<debug the sp command line tool - 1 enable|0 off>"

Change the absolute path to the logging config JSON file:

export SHARED_LOG_CFG=<absolute path to logging config JSON file>
spylunking.log.setup_logging.build_colorized_logger(name='lg', config='shared-logging.json', log_level=20, log_config_path=None, handler_name='console', handlers_dict=None, enable_splunk=True, splunk_user=None, splunk_password=None, splunk_address=None, splunk_api_address=None, splunk_index=None, splunk_token=None, splunk_handler_name='splunk', splunk_sleep_interval=-1, splunk_verify=None, splunk_debug=False)[source]

Build a colorized logger using function arguments and environment variables.

Parameters:
  • name – name that shows in the logger
  • config – name of the config file
  • log_level – level to log
  • log_config_path – path to log config file
  • handler_name – handler name in the config
  • handlers_dict – handlers dict
  • enable_splunk – True by default; set to False to turn off Splunk publishing even if the env keys are set - all processes that have the SPLUNK_* env keys will publish logs to Splunk
  • splunk_user – splunk username - defaults to environment variable: SPLUNK_USER
  • splunk_password – splunk password - defaults to environment variable: SPLUNK_PASSWORD
  • splunk_address – splunk address - defaults to environment variable: SPLUNK_ADDRESS which is localhost:8088
  • splunk_api_address – splunk api address - defaults to environment variable: SPLUNK_API_ADDRESS which is localhost:8089
  • splunk_index – splunk index - defaults to environment variable: SPLUNK_INDEX
  • splunk_token – splunk token - defaults to environment variable: SPLUNK_TOKEN
  • splunk_handler_name – splunk log config handler name - defaults to: SPLUNK_HANDLER_NAME
  • splunk_sleep_interval – optional splunk sleep interval
  • splunk_verify – splunk verify - defaults to environment variable: SPLUNK_VERIFY=<1|0>
  • splunk_debug – print out the connection attempt for debugging. Please avoid in production.
spylunking.log.setup_logging.setup_logging(default_level=20, default_path=None, env_key='LOG_CFG', handler_name='console', handlers_dict=None, log_dict=None, config_name=None, splunk_host=None, splunk_port=None, splunk_index=None, splunk_token=None, splunk_verify=False, splunk_handler_name='splunk', splunk_sleep_interval=-1, splunk_debug=False)[source]

Setup logging configuration

Parameters:
  • default_level – level to log
  • default_path – path to config (optional)
  • env_key – path to config in this env var
  • handler_name – handler name in the config
  • handlers_dict – handlers dict
  • log_dict – full log dictionary config
  • config_name – filename for config
  • splunk_host – optional splunk host
  • splunk_port – optional splunk port
  • splunk_index – optional splunk index
  • splunk_token – optional splunk token
  • splunk_verify – optional splunk verify - default to False
  • splunk_handler_name – optional splunk handler name
  • splunk_sleep_interval – optional splunk sleep interval
  • splunk_debug – optional splunk debug - default to False
class spylunking.log.setup_logging.SplunkFormatter(*args, **kwargs)[source]
add_fields(log_record, record, message_dict)[source]
Parameters:
  • log_record – log record
  • record – log message
  • message_dict – message dict
format(record, datefmt='%Y:%m:%d %H:%M:%S.%f')[source]
Parameters:record – message object to format
get_current_fields()[source]
set_fields(new_fields)[source]

Change the fields that will be added in on a log

Parameters:new_fields – new fields to patch in
updated_current_fields(update_fields)[source]
Parameters:update_fields – dict with values for updating fields_to_add
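The SplunkFormatter pattern - a JSON formatter whose extra fields can be swapped at runtime via set_fields() - can be sketched with a plain logging.Formatter subclass. This is an illustrative stand-in, not the real class, and the output keys are a subset of the JSON fields shown in the sp -j examples above:

```python
import json
import logging


class PatchableJsonFormatter(logging.Formatter):
    """Illustrative sketch: a JSON formatter whose extra fields can be
    changed even after the logger has been created, mirroring the
    set_fields() behavior described above."""

    def __init__(self):
        super().__init__()
        self.fields_to_add = {}

    def set_fields(self, new_fields):
        # patch in new fields for all subsequent log lines
        self.fields_to_add = dict(new_fields)

    def format(self, record):
        log_record = {
            "logger_name": record.name,
            "levelname": record.levelname,
            "message": record.getMessage(),
        }
        log_record.update(self.fields_to_add)
        return json.dumps(log_record)


fmt = PatchableJsonFormatter()
fmt.set_fields({"env": "development", "dc": "k8-splunk"})
rec = logging.LogRecord(
    "app", logging.INFO, __file__, 1, "hi", None, None)
line = fmt.format(rec)
```

Because set_fields replaces the dictionary in place, fields can be updated mid-run without rebuilding the handler chain.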
spylunking.log.setup_logging.console_logger(name='cl', config='shared-logging.json', log_level=20, log_config_path=None, handler_name='console', handlers_dict=None, enable_splunk=False, splunk_user=None, splunk_password=None, splunk_address=None, splunk_api_address=None, splunk_index=None, splunk_token=None, splunk_handler_name=None, splunk_sleep_interval=-1, splunk_verify=None, splunk_debug=False)[source]

Build the full console logger

Parameters:
  • name – name that shows in the logger
  • config – name of the config file
  • log_level – level to log
  • log_config_path – path to log config file
  • handler_name – handler name in the config
  • handlers_dict – handlers dict
  • enable_splunk – False by default; set to True to publish logs to Splunk - all processes that have the SPLUNK_* env keys will publish logs to Splunk
  • splunk_user – splunk username - defaults to environment variable: SPLUNK_USER
  • splunk_password – splunk password - defaults to environment variable: SPLUNK_PASSWORD
  • splunk_address – splunk address - defaults to environment variable: SPLUNK_ADDRESS which is localhost:8088
  • splunk_api_address – splunk api address - defaults to environment variable: SPLUNK_API_ADDRESS which is localhost:8089
  • splunk_index – splunk index - defaults to environment variable: SPLUNK_INDEX
  • splunk_token – splunk token - defaults to environment variable: SPLUNK_TOKEN
  • splunk_handler_name – splunk log config handler name - defaults to: SPLUNK_HANDLER_NAME
  • splunk_sleep_interval – optional splunk sleep interval
  • splunk_verify – splunk verify - defaults to environment variable: SPLUNK_VERIFY=<1|0>
  • splunk_debug – print out the connection attempt for debugging. Please avoid in production.
spylunking.log.setup_logging.no_date_colors_logger(name='nd', config='shared-logging.json', log_level=20, log_config_path=None, handler_name='no_date_colors', handlers_dict=None, enable_splunk=False, splunk_user=None, splunk_password=None, splunk_address=None, splunk_api_address=None, splunk_index=None, splunk_token=None, splunk_handler_name=None, splunk_sleep_interval=-1, splunk_verify=None, splunk_debug=False)[source]

Build a colorized logger without dates

Parameters:
  • name – name that shows in the logger
  • config – name of the config file
  • log_level – level to log
  • log_config_path – path to log config file
  • handler_name – handler name in the config
  • handlers_dict – handlers dict
  • enable_splunk – False by default; set to True to publish logs to Splunk - all processes that have the SPLUNK_* env keys will publish logs to Splunk
  • splunk_user – splunk username - defaults to environment variable: SPLUNK_USER
  • splunk_password – splunk password - defaults to environment variable: SPLUNK_PASSWORD
  • splunk_address – splunk address - defaults to environment variable: SPLUNK_ADDRESS which is localhost:8088
  • splunk_api_address – splunk api address - defaults to environment variable: SPLUNK_API_ADDRESS which is localhost:8089
  • splunk_index – splunk index - defaults to environment variable: SPLUNK_INDEX
  • splunk_token – splunk token - defaults to environment variable: SPLUNK_TOKEN
  • splunk_handler_name – splunk log config handler name - defaults to: SPLUNK_HANDLER_NAME
  • splunk_sleep_interval – optional splunk sleep interval
  • splunk_verify – splunk verify - defaults to environment variable: SPLUNK_VERIFY=<1|0>
  • splunk_debug – print out the connection attempt for debugging. Please avoid in production.
spylunking.log.setup_logging.simple_logger(name='', config='shared-logging.json', log_level=20, log_config_path=None, handler_name='simple', handlers_dict=None, enable_splunk=False, splunk_user=None, splunk_password=None, splunk_address=None, splunk_api_address=None, splunk_index=None, splunk_token=None, splunk_handler_name=None, splunk_sleep_interval=-1, splunk_verify=None, splunk_debug=False)[source]

Build a colorized logger for just the message - Used by command line tools.

Parameters:
  • name – name that shows in the logger
  • config – name of the config file
  • log_level – level to log
  • log_config_path – path to log config file
  • handler_name – handler name in the config
  • handlers_dict – handlers dict
  • enable_splunk – False by default; set to True to publish logs to Splunk - all processes that have the SPLUNK_* env keys will publish logs to Splunk
  • splunk_user – splunk username - defaults to environment variable: SPLUNK_USER
  • splunk_password – splunk password - defaults to environment variable: SPLUNK_PASSWORD
  • splunk_address – splunk address - defaults to environment variable: SPLUNK_ADDRESS which is localhost:8088
  • splunk_api_address – splunk api address - defaults to environment variable: SPLUNK_API_ADDRESS which is localhost:8089
  • splunk_index – splunk index - defaults to environment variable: SPLUNK_INDEX
  • splunk_token – splunk token - defaults to environment variable: SPLUNK_TOKEN
  • splunk_handler_name – splunk log config handler name - defaults to: SPLUNK_HANDLER_NAME
  • splunk_sleep_interval – optional splunk sleep interval
  • splunk_verify – splunk verify - defaults to environment variable: SPLUNK_VERIFY=<1|0>
  • splunk_debug – print out the connection attempt for debugging. Please avoid in production.

Exit Case Handling for the Thread and Multiprocessing Publishers

spylunking.wait_for_exit.wait_for_exit(log, debug=False)[source]

Sleep to allow the thread to pick up final messages before exiting and stopping the Splunk HTTP publisher.

You can decrease this delay (in seconds) by reducing the splunk_sleep_interval or by exporting the env var: export SPLUNK_SLEEP_INTERVAL=0.5

If you set the timer to 0, then each log message results in a blocking HTTP POST to Splunk. This creates a blocking logger in your application that waits until each log’s HTTP POST has been received before continuing.

Note: Reducing this Splunk sleep timer could result in losing messages that were stuck in the queue when the parent process exits. The multiprocessing Splunk Publisher was built to prevent this, but it will not work in certain frameworks like Celery, as it requires access to spawn daemon processes to prevent this ‘message loss’ case during exiting. Applications using this library should ensure there are no critical log messages stuck in a queue when stopping a long-running process.
Parameters:
  • log – created logger
  • debug – bool to debug with prints
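The drain-before-exit idea can be sketched with the standard library; this hypothetical helper takes the queue directly for illustration (the real wait_for_exit takes the created logger), polling on a sleep interval until the queue is empty or a deadline passes:

```python
import queue
import threading
import time


def wait_for_exit_sketch(log_queue, sleep_interval=0.1, max_wait=5.0):
    """Hedged sketch of the wait_for_exit idea: block until the
    publisher's queue is drained (or a deadline passes) before
    letting the process exit."""
    deadline = time.monotonic() + max_wait
    while not log_queue.empty() and time.monotonic() < deadline:
        time.sleep(sleep_interval)


log_queue = queue.Queue()
for i in range(5):
    log_queue.put("queued log %s" % i)


def drain():
    # stands in for the publisher worker thread POSTing to Splunk
    while True:
        try:
            log_queue.get(timeout=1)
        except queue.Empty:
            return


worker = threading.Thread(target=drain, daemon=True)
worker.start()
wait_for_exit_sketch(log_queue, sleep_interval=0.01)
```

Lowering sleep_interval shortens the exit delay, at the cost of busier polling - the same trade-off the SPLUNK_SLEEP_INTERVAL env var controls.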