Python tracing
Python tracing automatically instruments APIs, frameworks, and application servers. The sfAPM Python agent collects tracing metrics and correlated application logs and sends them to the SnappyFlow server.
Supported Python Versions
Python 3.6, 3.7, 3.8, 3.9, 3.10, 3.11

Supported Web Frameworks
Django 1.11, 2.0, 2.1, 2.2, 3.0, 3.1, 3.2, 4.0
Flask 1.0, 1.1, 2.0

Supported Platforms
Instances, Kubernetes, Docker, ECS

Supported Trace Features
Below is the list of the supported trace features:
- Supported Frameworks
- Standard Library Modules

Instances
Django
Follow the below steps to enable tracing for an application based on the Django framework.
Configuration
Add the below entries in the requirements.txt file to install sf-elastic-apm and sf-apm-lib in your environment.

sf-elastic-apm==6.7.2
sf-apm-lib==0.1.1

or install the libraries using the CLI:

pip install sf-elastic-apm==6.7.2
pip install sf-apm-lib==0.1.1
If the agent is already installed in your instance, the trace agent picks up the profileKey, projectName, and appName from the config.yaml file. Add the below entries in the settings.py file.

i. Add the import statement.
from sf_apm_lib.snappyflow import Snappyflow
ii. Add the following entry in the INSTALLED_APPS block.

'elasticapm.contrib.django'
iii. Add the following entry in the MIDDLEWARE block.

'elasticapm.contrib.django.middleware.TracingMiddleware'
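Taken together, steps ii and iii amount to the settings.py fragment below; the non-elasticapm entries are placeholders, and the tracing middleware is usually placed near the top of the list so requests are captured early:

```python
# settings.py (sketch; 'myapp' and the other non-elasticapm entries are placeholders)
INSTALLED_APPS = [
    'elasticapm.contrib.django',  # step ii: enables the Elastic APM Django app
    'django.contrib.staticfiles',
    'myapp',
]

MIDDLEWARE = [
    'elasticapm.contrib.django.middleware.TracingMiddleware',  # step iii
    'django.middleware.common.CommonMiddleware',
]
```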
iv. Add the following source code to integrate the Django application with SnappyFlow.

try:
    # Initialize Snappyflow. By default, initialization takes profileKey, projectName and appName from the sfagent config.yaml
    sf = Snappyflow()
    SFTRACE_CONFIG = sf.get_trace_config()
    ELASTIC_APM = {
        'SERVICE_NAME': 'custom-service',  # Specify your service name for tracing
        'SERVER_URL': SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
        'GLOBAL_LABELS': SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
        'VERIFY_SERVER_CERT': SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT'),
        'SPAN_FRAMES_MIN_DURATION': SFTRACE_CONFIG.get('SFTRACE_SPAN_FRAMES_MIN_DURATION'),
        'STACK_TRACE_LIMIT': SFTRACE_CONFIG.get('SFTRACE_STACK_TRACE_LIMIT'),
        'CAPTURE_SPAN_STACK_TRACES': SFTRACE_CONFIG.get('SFTRACE_CAPTURE_SPAN_STACK_TRACES'),
        'DJANGO_TRANSACTION_NAME_FROM_ROUTE': True,
        'CENTRAL_CONFIG': False,
        'METRICS_INTERVAL': '0s'
    }
except Exception as error:
    print("Error while fetching snappyflow tracing configurations", error)

If the sfAgent is not installed in your instance, follow the below steps:
i. Make sure the project and application are created in the SnappyFlow server. Click Here to know how to create a project and application in SnappyFlow.
ii. Export SF_PROJECT_NAME, SF_APP_NAME, and SF_PROFILE_KEY as environment variables.

# Update the below default values with proper values
export SF_PROJECT_NAME=<SF_PROJECT_NAME>
export SF_APP_NAME=<SF_APP_NAME>
export SF_PROFILE_KEY=<SF_PROFILE_KEY>

iii. Add the following entries in the settings.py file.

Add the import statements.

import os
from sf_apm_lib.snappyflow import Snappyflow
Add the following entry in the INSTALLED_APPS block.

'elasticapm.contrib.django'
Add the following entry in the MIDDLEWARE block.

'elasticapm.contrib.django.middleware.TracingMiddleware'
Add the following source code to integrate the Django application with SnappyFlow.

try:
    sf = Snappyflow()
    # Add below part to manually configure the initialization
    SF_PROJECT_NAME = os.getenv('SF_PROJECT_NAME')
    SF_APP_NAME = os.getenv('SF_APP_NAME')
    SF_PROFILE_KEY = os.getenv('SF_PROFILE_KEY')
    sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
    # End of manual configuration
    SFTRACE_CONFIG = sf.get_trace_config()
    ELASTIC_APM = {
        'SERVICE_NAME': 'custom-service',  # Specify your service name for tracing
        'SERVER_URL': SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
        'GLOBAL_LABELS': SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
        'VERIFY_SERVER_CERT': SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT'),
        'SPAN_FRAMES_MIN_DURATION': SFTRACE_CONFIG.get('SFTRACE_SPAN_FRAMES_MIN_DURATION'),
        'STACK_TRACE_LIMIT': SFTRACE_CONFIG.get('SFTRACE_STACK_TRACE_LIMIT'),
        'CAPTURE_SPAN_STACK_TRACES': SFTRACE_CONFIG.get('SFTRACE_CAPTURE_SPAN_STACK_TRACES'),
        'DJANGO_TRANSACTION_NAME_FROM_ROUTE': True,
        'CENTRAL_CONFIG': False,
        'METRICS_INTERVAL': '0s'
    }
except Exception as error:
    print("Error while fetching snappyflow tracing configurations", error)
Verification
Once your application is up and running, follow the below steps to verify that SnappyFlow has started to collect traces.
- Make sure that the project and the application are created.
- In the app, click the View Dashboard icon.
- In the Dashboard window, go to the Tracing section.
- In the Tracing section, click the View Transactions button.
- Now you can view the traces in the Aggregate and Real Time tabs.
Troubleshoot Steps
If the trace data is not collected in the SnappyFlow server, then check the trace configuration in the settings.py file.

To enable the debug logs, add the below key-value pair in the ELASTIC_APM block of the settings.py file.

'DEBUG': 'true'
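For instance, the ELASTIC_APM block with debug logging enabled would look like the following abridged sketch (the remaining keys stay as configured earlier):

```python
# Abridged ELASTIC_APM block from settings.py; only the keys relevant here are shown
ELASTIC_APM = {
    'SERVICE_NAME': 'custom-service',
    'METRICS_INTERVAL': '0s',
    'DEBUG': 'true',  # emits verbose agent logs while troubleshooting
}
```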
Sample Application Code
The below link contains a sample application with tracing enabled, following the configuration mentioned in the above sections.
Click Here to view the reference application.
Flask
Add the below entries in the requirements.txt file and install them in your project environment:

sf-elastic-apm[flask]==6.7.2
sf-apm-lib==0.1.1

or install the libraries using the CLI:

pip install sf-elastic-apm[flask]==6.7.2
pip install sf-apm-lib==0.1.1

Provide SF_PROJECT_NAME, SF_APP_NAME, and SF_PROFILE_KEY as environment variables.

Add the following entries in app.py.

Add the import statements.
import os
from elasticapm.contrib.flask import ElasticAPM
from sf_apm_lib.snappyflow import Snappyflow

Get the trace config.
sf = Snappyflow()  # Initialize Snappyflow. By default, initialization takes profileKey, projectName and appName from the sfagent config.yaml
# Add below part to manually configure the initialization
SF_PROJECT_NAME = os.getenv('SF_PROJECT_NAME')
SF_APP_NAME = os.getenv('SF_APP_NAME')
SF_PROFILE_KEY = os.getenv('SF_PROFILE_KEY')
sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
# End of manual configuration
SFTRACE_CONFIG = sf.get_trace_config()
# Start Trace to log feature section
# Add below line of code to enable the Trace to log feature:
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_redact_body=true'
# Optional configs for trace to log
# Add below line to provide a custom documentType (default: "user-input"):
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_documentType=<document-type>'
# Add below line to provide a destination index (default: "log"):
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_IndexType=<index-type>'  # Applicable values: log, metric
# End trace to log section

Initialize the Elastic APM client and instrument it to the Flask app.
app.config['ELASTIC_APM'] = {
    'SERVICE_NAME': '<SERVICE_NAME>',  # Specify your service name for tracing
    'SERVER_URL': SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
    'GLOBAL_LABELS': SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
    'VERIFY_SERVER_CERT': SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT'),
    'SPAN_FRAMES_MIN_DURATION': SFTRACE_CONFIG.get('SFTRACE_SPAN_FRAMES_MIN_DURATION'),
    'STACK_TRACE_LIMIT': SFTRACE_CONFIG.get('SFTRACE_STACK_TRACE_LIMIT'),
    'CAPTURE_SPAN_STACK_TRACES': SFTRACE_CONFIG.get('SFTRACE_CAPTURE_SPAN_STACK_TRACES'),
    'DEBUG': True,
    'METRICS_INTERVAL': '0s'
}
apm = ElasticAPM(app)

Once your server is up and running, you can check the trace in the SnappyFlow server.
To view the trace, make sure the project and app are created or discovered with the project name and app name specified earlier.
Once the project and app are created, go to View Dashboard -> click Tracing on the left side bar -> click View Transactions -> go to the Real Time tab.

Note: the 'CAPTURE_BODY': 'all' config should be present in the APM agent code instrumentation for the Trace to Log feature.
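Per the note above, enabling Trace to Log means adding the CAPTURE_BODY key alongside the other entries of the app.config['ELASTIC_APM'] dict; an abridged sketch:

```python
# Abridged config dict; merge the extra key into the existing app.config['ELASTIC_APM'] entries
ELASTIC_APM = {
    'SERVICE_NAME': '<SERVICE_NAME>',
    'CAPTURE_BODY': 'all',  # makes request bodies available to the Trace to Log feature
}
```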
Script
Install the following requirements:

pip install sf-elastic-apm==6.7.2
pip install sf-apm-lib==0.1.1

Add the following code at the start of the script file to set up the Elastic APM client.
import elasticapm
from sf_apm_lib.snappyflow import Snappyflow
sf = Snappyflow() # Initialize Snappyflow. By default, initialization will pick profileKey, projectName and appName from the sfagent config.yaml.
# Add below part to manually configure the initialization
SF_PROJECT_NAME = '<Snappyflow Project Name>'
SF_APP_NAME = '<Snappyflow App Name>'
SF_PROFILE_KEY = '<Snappyflow Profile Key>'
sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
# End of manual configuration
trace_config = sf.get_trace_config() # Returns trace config
client = elasticapm.Client(
    service_name="<Service name>",  # Specify service name for tracing
    server_url=trace_config['SFTRACE_SERVER_URL'],
    verify_cert=trace_config['SFTRACE_VERIFY_SERVER_CERT'],
    global_labels=trace_config['SFTRACE_GLOBAL_LABELS']
)
elasticapm.instrument()  # Only call this once, as early as possible.

Once instrumentation is completed, we can create custom transactions and spans.
Example

import time
import requests

def main():
    sess = requests.Session()
    for url in ['https://www.elastic.co', 'https://benchmarks.elastic.co']:
        resp = sess.get(url)
        time.sleep(1)

client.begin_transaction(transaction_type="script")
main()
# Record an exception
try:
    1 / 0
except ZeroDivisionError:
    ident = client.capture_exception()
    print("Exception caught; reference is %s" % ident)
client.end_transaction(name=__name__, result="success")

Refer to the link below to know more:
https://www.elastic.co/guide/en/apm/agent/python/master/instrumenting-custom-code.html
Now run your script and check the trace in the SnappyFlow server.
To view the trace, make sure the project and app are created or discovered with the project name and app name specified earlier.
Once the project and app are created, go to View Dashboard -> click Tracing on the left side bar -> click View Transactions -> go to the Real Time tab.
Refer complete script:
Celery
Install the following requirements (the example below is based on the Redis broker):

pip install sf-elastic-apm==6.7.2
pip install redis
pip install sf-apm-lib==0.1.1

Add the following code at the start of the file where the Celery app is initialized to set up the Elastic APM client.
from sf_apm_lib.snappyflow import Snappyflow
from elasticapm import Client, instrument
from elasticapm.contrib.celery import register_exception_tracking, register_instrumentation
instrument()
try:
    sf = Snappyflow()  # Initialize Snappyflow. By default, initialization takes profileKey, projectName and appName from the sfagent config.yaml
    # Add below part to manually configure the initialization
    SF_PROJECT_NAME = '<SF_PROJECT_NAME>'  # Replace with the appropriate Snappyflow project name
    SF_APP_NAME = '<SF_APP_NAME>'  # Replace with the appropriate Snappyflow app name
    SF_PROFILE_KEY = '<SF_PROFILE_KEY>'  # Replace with the Snappyflow profile key
    sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
    # End of manual configuration
    SFTRACE_CONFIG = sf.get_trace_config()
    apm_client = Client(
        service_name='<Service_Name>',  # Specify service name for tracing
        server_url=SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
        global_labels=SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
        verify_server_cert=SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT')
    )
    register_exception_tracking(apm_client)
    register_instrumentation(apm_client)
except Exception as error:
    print("Error while fetching snappyflow tracing configurations", error)

Once instrumentation is done and the Celery worker is running, we can see a trace for each Celery task in the SnappyFlow server.
To view the trace, make sure the project and app are created or discovered with the project name and app name specified earlier.
Once the project and app are created, go to View Dashboard -> click Tracing on the left side bar -> click View Transactions -> go to the Real Time tab.
Refer complete code:
https://github.com/snappyflow/tracing-reference-apps/blob/master/ref-celery/tasks.py
Kubernetes
Django
Follow the below steps to enable tracing for applications based on the Django framework.
Configuration
Add the below entries in the requirements.txt file to install sf-elastic-apm and sf-apm-lib in your environment.

sf-elastic-apm==6.7.2
sf-apm-lib==0.1.1

or install the libraries using the CLI:

pip install sf-elastic-apm==6.7.2
pip install sf-apm-lib==0.1.1

Add the following entries in the settings.py file.

Add the import statements.

from sf_apm_lib.snappyflow import Snappyflow
import os

Add the following entry in the INSTALLED_APPS block.

'elasticapm.contrib.django'
Add the following entry in the MIDDLEWARE block.

'elasticapm.contrib.django.middleware.TracingMiddleware'
Add the following source code to integrate the Django application with SnappyFlow.

try:
    sf = Snappyflow()
    # Add below part to manually configure the initialization
    SF_PROJECT_NAME = os.getenv('SF_PROJECT_NAME')
    SF_APP_NAME = os.getenv('SF_APP_NAME')
    SF_PROFILE_KEY = os.getenv('SF_PROFILE_KEY')
    sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
    # End of manual configuration
    SFTRACE_CONFIG = sf.get_trace_config()
    ELASTIC_APM = {
        'SERVICE_NAME': 'custom-service',  # Specify your service name for tracing
        'SERVER_URL': SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
        'GLOBAL_LABELS': SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
        'VERIFY_SERVER_CERT': SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT'),
        'SPAN_FRAMES_MIN_DURATION': SFTRACE_CONFIG.get('SFTRACE_SPAN_FRAMES_MIN_DURATION'),
        'STACK_TRACE_LIMIT': SFTRACE_CONFIG.get('SFTRACE_STACK_TRACE_LIMIT'),
        'CAPTURE_SPAN_STACK_TRACES': SFTRACE_CONFIG.get('SFTRACE_CAPTURE_SPAN_STACK_TRACES'),
        'DJANGO_TRANSACTION_NAME_FROM_ROUTE': True,
        'CENTRAL_CONFIG': False,
        'METRICS_INTERVAL': '0s'
    }
except Exception as error:
    print("Error while fetching snappyflow tracing configurations", error)
Provide SF_PROJECT_NAME, SF_APP_NAME, and SF_PROFILE_KEY as environment variables in the Kubernetes deployment file.

#deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-app
  labels:
    app: python-app
spec:
  containers:
    - name: python-app
      image: imagename/tag:version
      env:
        - name: SF_PROFILE_KEY
          value: <profile-key>
        - name: SF_PROJECT_NAME
          value: <project_name>
        - name: SF_APP_NAME
          value: <app-name>

If the deployment is with Helm charts, provide the above variables in values.yaml and use them in the deployment file of the charts.

#values.yaml
global:
  # update the sfappname, sfprojectname and key with the proper values
  sfappname: <app-name>
  sfprojectname: <project-name>
  key: <profile-key>
replicaCount: 1
image:
  repository: djangoapp
  pullPolicy: IfNotPresent
  tag: "latest"

Pass the global section key-values from values.yaml by setting the deployment.yaml as below:

#deployment.yaml
apiVersion: apps/v1
kind: Deployment
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        - name: SF_PROFILE_KEY
          value: {{ .Values.global.key }}
        - name: SF_PROJECT_NAME
          value: {{ .Values.global.sfprojectname }}
        - name: SF_APP_NAME
          value: {{ .Values.global.sfappname }}
Verification
Once your application is up and running, follow the below steps to verify that SnappyFlow has started to collect traces.
- Make sure that the project and the application are created.
- In the app, click the View Dashboard icon.
- In the Dashboard window, go to the Tracing section.
- In the Tracing section, click the View Transactions button.
- Now you can view the traces in the Aggregate and Real Time tabs.
Sample Application Code
The below link contains a sample application with tracing enabled, following the configuration mentioned in the above sections.
Click Here to view the reference application.
Flask
Add the below entries in the requirements.txt file and install them in your project environment:

sf-elastic-apm[flask]==6.7.2
sf-apm-lib==0.1.1

or install the libraries through the CLI (e.g., in a Dockerfile):

RUN pip install sf-elastic-apm[flask]==6.7.2
RUN pip install sf-apm-lib==0.1.1

Add the following entries in app.py.

Add the import statements.

import os
from elasticapm.contrib.flask import ElasticAPM
from sf_apm_lib.snappyflow import Snappyflow

Get the trace config.
sf = Snappyflow()  # Initialize Snappyflow. By default, initialization takes profileKey, projectName and appName from the sfagent config.yaml
# Add below part to manually configure the initialization
SF_PROJECT_NAME = os.getenv('SF_PROJECT_NAME')
SF_APP_NAME = os.getenv('SF_APP_NAME')
SF_PROFILE_KEY = os.getenv('SF_PROFILE_KEY')
sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
# End of manual configuration
SFTRACE_CONFIG = sf.get_trace_config()
# Start Trace to log feature section
# Add below line of code to enable the Trace to log feature:
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_redact_body=true'
# Optional configs for trace to log
# Add below line to provide a custom documentType (default: "user-input"):
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_documentType=<document-type>'
# Add below line to provide a destination index (default: "log"):
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_IndexType=<index-type>'  # Applicable values: log, metric
# End trace to log section

Initialize the Elastic APM client and instrument it to the Flask app.
app.config['ELASTIC_APM'] = {
    'SERVICE_NAME': '<SERVICE_NAME>',  # Specify your service name for tracing
    'SERVER_URL': SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
    'GLOBAL_LABELS': SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
    'VERIFY_SERVER_CERT': SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT'),
    'SPAN_FRAMES_MIN_DURATION': SFTRACE_CONFIG.get('SFTRACE_SPAN_FRAMES_MIN_DURATION'),
    'STACK_TRACE_LIMIT': SFTRACE_CONFIG.get('SFTRACE_STACK_TRACE_LIMIT'),
    'CAPTURE_SPAN_STACK_TRACES': SFTRACE_CONFIG.get('SFTRACE_CAPTURE_SPAN_STACK_TRACES'),
    'DEBUG': True,
    'METRICS_INTERVAL': '0s'
}
apm = ElasticAPM(app)
Provide SF_PROJECT_NAME, SF_APP_NAME, and SF_PROFILE_KEY as environment variables in the Kubernetes deployment file. Refer to the below documentation:
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/

If deploying with Helm, provide the above variables in values.yaml and use them in the deployment file of the charts.

Once your server is up and running, you can check the trace in the SnappyFlow server.
To view the trace, make sure the project and app are created or discovered with the project name and app name specified earlier.
Once the project and app are created, go to View Dashboard -> click Tracing on the left side bar -> click View Transactions -> go to the Real Time tab.

Note: the 'CAPTURE_BODY': 'all' config should be present in the APM agent code instrumentation for the Trace to Log feature.
Celery
Install the following requirements (the example below is based on the Redis broker):

pip install sf-elastic-apm==6.7.2
pip install redis
pip install sf-apm-lib==0.1.1

Add the following code at the start of the file where the Celery app is initialized to set up the Elastic APM client.
from sf_apm_lib.snappyflow import Snappyflow
from elasticapm import Client, instrument
from elasticapm.contrib.celery import register_exception_tracking, register_instrumentation
instrument()
try:
    sf = Snappyflow()  # Initialize Snappyflow. By default, initialization takes profileKey, projectName and appName from the sfagent config.yaml
    # Add below part to manually configure the initialization
    SF_PROJECT_NAME = '<SF_PROJECT_NAME>'  # Replace with the appropriate Snappyflow project name
    SF_APP_NAME = '<SF_APP_NAME>'  # Replace with the appropriate Snappyflow app name
    SF_PROFILE_KEY = '<SF_PROFILE_KEY>'  # Replace with the Snappyflow profile key
    sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
    # End of manual configuration
    SFTRACE_CONFIG = sf.get_trace_config()
    apm_client = Client(
        service_name='<Service_Name>',  # Specify service name for tracing
        server_url=SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
        global_labels=SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
        verify_server_cert=SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT')
    )
    register_exception_tracking(apm_client)
    register_instrumentation(apm_client)
except Exception as error:
    print("Error while fetching snappyflow tracing configurations", error)

Once instrumentation is done and the Celery worker is running, we can see a trace for each Celery task in the SnappyFlow server.
To view the trace, make sure the project and app are created or discovered with the project name and app name specified earlier.
Once the project and app are created, go to View Dashboard -> click Tracing on the left side bar -> click View Transactions -> go to the Real Time tab.
Refer complete code:
https://github.com/snappyflow/tracing-reference-apps/blob/master/ref-celery/tasks.py
Docker
Django
Follow the below steps to enable tracing for an application based on the Django framework.
Configuration
Add the below entries in the requirements.txt file and install them in your project environment.

sf-elastic-apm==6.7.2
sf-apm-lib==0.1.1

or install the libraries using the CLI (e.g., in a Dockerfile):

RUN pip install sf-elastic-apm==6.7.2
RUN pip install sf-apm-lib==0.1.1

Make sure the project and application are created in the SnappyFlow server. Click Here to know how to create a project and application in SnappyFlow.
Add the following entries in the settings.py file.

Add the import statements.

from sf_apm_lib.snappyflow import Snappyflow
import os

Add the following entry in the INSTALLED_APPS block.

'elasticapm.contrib.django'

Add the following entry in the MIDDLEWARE block.

'elasticapm.contrib.django.middleware.TracingMiddleware'
Add the following source code to integrate the Django application with SnappyFlow.

try:
    sf = Snappyflow()
    # Add below part to manually configure the initialization
    SF_PROJECT_NAME = os.getenv('SF_PROJECT_NAME')
    SF_APP_NAME = os.getenv('SF_APP_NAME')
    SF_PROFILE_KEY = os.getenv('SF_PROFILE_KEY')
    sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
    # End of manual configuration
    SFTRACE_CONFIG = sf.get_trace_config()
    ELASTIC_APM = {
        'SERVICE_NAME': 'custom-service',  # Specify your service name for tracing
        'SERVER_URL': SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
        'GLOBAL_LABELS': SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
        'VERIFY_SERVER_CERT': SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT'),
        'SPAN_FRAMES_MIN_DURATION': SFTRACE_CONFIG.get('SFTRACE_SPAN_FRAMES_MIN_DURATION'),
        'STACK_TRACE_LIMIT': SFTRACE_CONFIG.get('SFTRACE_STACK_TRACE_LIMIT'),
        'CAPTURE_SPAN_STACK_TRACES': SFTRACE_CONFIG.get('SFTRACE_CAPTURE_SPAN_STACK_TRACES'),
        'DJANGO_TRANSACTION_NAME_FROM_ROUTE': True,
        'CENTRAL_CONFIG': False,
        'METRICS_INTERVAL': '0s'
    }
except Exception as error:
    print("Error while fetching snappyflow tracing configurations", error)
Provide SF_PROJECT_NAME, SF_APP_NAME, and SF_PROFILE_KEY as environment variables in docker-compose.yml or the Docker stack deployment file, or at the command line when using the docker run command for deployment. Follow the below reference documentation:
https://docs.docker.com/compose/environment-variables/
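For docker-compose, the variables can be declared under the service's environment key; a minimal sketch (the service name, image, and port mapping are placeholders):

```yaml
# docker-compose.yml (sketch)
services:
  python-app:
    image: <dockerhub_id/image_name>
    ports:
      - "80:80"
    environment:
      - SF_PROFILE_KEY=<profile-key>
      - SF_PROJECT_NAME=<project-name>
      - SF_APP_NAME=<app-name>
```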
Docker RUN:
docker run -d -t -i -e SF_PROJECT_NAME='' \
-e SF_APP_NAME='' \
-e SF_PROFILE_KEY='' \
-p 80:80 \
--link redis:redis \
--name <container_name> <dockerhub_id/image_name>
Verification
Once your application is up and running, follow the below steps to verify that SnappyFlow has started to collect traces.
- Make sure that the project and the application are created.
- In the app, click the View Dashboard icon.
- In the Dashboard window, go to the Tracing section.
- In the Tracing section, click the View Transactions button.
- Now you can view the traces in the Aggregate and Real Time tabs.
Troubleshoot Steps
If the trace data is not collected in the SnappyFlow server, then check the trace configuration in the settings.py file.

To enable the debug logs, add the below key-value pair in the ELASTIC_APM block of the settings.py file.

'DEBUG': 'true'
Sample Application Code
The below link contains a sample application with tracing enabled, following the configuration mentioned in the above sections.
Click Here to view the reference application.
Flask
Add the below entries in the requirements.txt file and install them in your project environment:

sf-elastic-apm[flask]==6.7.2
sf-apm-lib==0.1.1

or install the libraries through the CLI (e.g., in a Dockerfile):

RUN pip install sf-elastic-apm[flask]==6.7.2
RUN pip install sf-apm-lib==0.1.1

Add the following entries in app.py.

Add the import statements.

import os
from elasticapm.contrib.flask import ElasticAPM
from sf_apm_lib.snappyflow import Snappyflow

Get the trace config.
sf = Snappyflow()  # Initialize Snappyflow. By default, initialization takes profileKey, projectName and appName from the sfagent config.yaml
# Add below part to manually configure the initialization
SF_PROJECT_NAME = os.getenv('SF_PROJECT_NAME')
SF_APP_NAME = os.getenv('SF_APP_NAME')
SF_PROFILE_KEY = os.getenv('SF_PROFILE_KEY')
sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
# End of manual configuration
SFTRACE_CONFIG = sf.get_trace_config()
# Start Trace to log feature section
# Add below line of code to enable the Trace to log feature:
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_redact_body=true'
# Optional configs for trace to log
# Add below line to provide a custom documentType (default: "user-input"):
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_documentType=<document-type>'
# Add below line to provide a destination index (default: "log"):
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_IndexType=<index-type>'  # Applicable values: log, metric
# End trace to log section

Initialize the Elastic APM client and instrument it to the Flask app.
app.config['ELASTIC_APM'] = {
    'SERVICE_NAME': '<SERVICE_NAME>',  # Specify your service name for tracing
    'SERVER_URL': SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
    'GLOBAL_LABELS': SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
    'VERIFY_SERVER_CERT': SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT'),
    'SPAN_FRAMES_MIN_DURATION': SFTRACE_CONFIG.get('SFTRACE_SPAN_FRAMES_MIN_DURATION'),
    'STACK_TRACE_LIMIT': SFTRACE_CONFIG.get('SFTRACE_STACK_TRACE_LIMIT'),
    'CAPTURE_SPAN_STACK_TRACES': SFTRACE_CONFIG.get('SFTRACE_CAPTURE_SPAN_STACK_TRACES'),
    'DEBUG': True,
    'METRICS_INTERVAL': '0s'
}
apm = ElasticAPM(app)
Provide SF_PROJECT_NAME, SF_APP_NAME, and SF_PROFILE_KEY as environment variables in docker-compose.yml or the Docker stack deployment file, or at the command line when using the docker run command for deployment. For example:

Docker Compose and stack: https://docs.docker.com/compose/environment-variables/

Docker run CLI command:

docker run -d -t -i -e SF_PROJECT_NAME='<SF_PROJECT_NAME>' \
-e SF_APP_NAME='<SF_APP_NAME>' \
-e SF_PROFILE_KEY='<snappyflow profile key>' \
--name <container_name> <dockerhub_id/image_name>

Once your server is up and running, you can check the trace in the SnappyFlow server.
To view the trace, make sure the project and app are created or discovered with the project name and app name specified earlier.
Once the project and app are created, go to View Dashboard -> click Tracing on the left side bar -> click View Transactions -> go to the Real Time tab.

Note: the 'CAPTURE_BODY': 'all' config should be present in the APM agent code instrumentation for the Trace to Log feature.
Celery
Install the following requirements (the example below is based on the Redis broker):

pip install sf-elastic-apm==6.7.2
pip install redis
pip install sf-apm-lib==0.1.1

Add the following code at the start of the file where the Celery app is initialized to set up the Elastic APM client.
from sf_apm_lib.snappyflow import Snappyflow
from elasticapm import Client, instrument
from elasticapm.contrib.celery import register_exception_tracking, register_instrumentation
instrument()
try:
    sf = Snappyflow()  # Initialize Snappyflow. By default, initialization takes profileKey, projectName and appName from the sfagent config.yaml
    # Add below part to manually configure the initialization
    SF_PROJECT_NAME = '<SF_PROJECT_NAME>'  # Replace with the appropriate Snappyflow project name
    SF_APP_NAME = '<SF_APP_NAME>'  # Replace with the appropriate Snappyflow app name
    SF_PROFILE_KEY = '<SF_PROFILE_KEY>'  # Replace with the Snappyflow profile key
    sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
    # End of manual configuration
    SFTRACE_CONFIG = sf.get_trace_config()
    apm_client = Client(
        service_name='<Service_Name>',  # Specify service name for tracing
        server_url=SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
        global_labels=SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
        verify_server_cert=SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT')
    )
    register_exception_tracking(apm_client)
    register_instrumentation(apm_client)
except Exception as error:
    print("Error while fetching snappyflow tracing configurations", error)

Once instrumentation is done and the Celery worker is running, we can see a trace for each Celery task in the SnappyFlow server.
To view the trace, make sure the project and app are created or discovered with the project name and app name specified earlier.
Once the project and app are created, go to View Dashboard -> click Tracing on the left side bar -> click View Transactions -> go to the Real Time tab.
Refer complete code:
https://github.com/snappyflow/tracing-reference-apps/blob/master/ref-celery/tasks.py
ECS
Django
Follow the below steps to enable tracing for applications based on the Django framework.
Configuration
Add the below entries in the requirements.txt file to install sf-elastic-apm and sf-apm-lib in your environment.

sf-elastic-apm==6.7.2
sf-apm-lib==0.1.1

or install the libraries using the CLI:

pip install sf-elastic-apm==6.7.2
pip install sf-apm-lib==0.1.1

Make sure the project and application are created in the SnappyFlow server. Click Here to know how to create a project and application in SnappyFlow.
Add the following entries in the settings.py file.

Add the import statements.

from sf_apm_lib.snappyflow import Snappyflow
import os

Add the following entry in the INSTALLED_APPS block.

'elasticapm.contrib.django'

Add the following entry in the MIDDLEWARE block.

'elasticapm.contrib.django.middleware.TracingMiddleware'
Add the following source code to integrate the Django application with SnappyFlow.

try:
    sf = Snappyflow()
    # Add below part to manually configure the initialization
    SF_PROJECT_NAME = os.getenv('SF_PROJECT_NAME')
    SF_APP_NAME = os.getenv('SF_APP_NAME')
    SF_PROFILE_KEY = os.getenv('SF_PROFILE_KEY')
    sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
    # End of manual configuration
    SFTRACE_CONFIG = sf.get_trace_config()
    ELASTIC_APM = {
        'SERVICE_NAME': 'custom-service',  # Specify your service name for tracing
        'SERVER_URL': SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
        'GLOBAL_LABELS': SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
        'VERIFY_SERVER_CERT': SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT'),
        'SPAN_FRAMES_MIN_DURATION': SFTRACE_CONFIG.get('SFTRACE_SPAN_FRAMES_MIN_DURATION'),
        'STACK_TRACE_LIMIT': SFTRACE_CONFIG.get('SFTRACE_STACK_TRACE_LIMIT'),
        'CAPTURE_SPAN_STACK_TRACES': SFTRACE_CONFIG.get('SFTRACE_CAPTURE_SPAN_STACK_TRACES'),
        'DJANGO_TRANSACTION_NAME_FROM_ROUTE': True,
        'CENTRAL_CONFIG': False,
        'METRICS_INTERVAL': '0s'
    }
except Exception as error:
    print("Error while fetching snappyflow tracing configurations", error)
Provide SF_PROJECT_NAME, SF_APP_NAME, and SF_PROFILE_KEY as environment variables in the Add container section of the task definition. Refer to the below documentation:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html
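In a task definition JSON, the same variables go in the container's environment array; a minimal sketch (the container name and image are placeholders):

```json
{
  "containerDefinitions": [
    {
      "name": "python-app",
      "image": "<dockerhub_id/image_name>",
      "environment": [
        { "name": "SF_PROFILE_KEY", "value": "<profile-key>" },
        { "name": "SF_PROJECT_NAME", "value": "<project-name>" },
        { "name": "SF_APP_NAME", "value": "<app-name>" }
      ]
    }
  ]
}
```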
Verification
Once your application is up and running, follow the below steps to verify that SnappyFlow has started to collect traces.
- Make sure that the project and the application are created.
- In the app, click the View Dashboard icon.
- In the Dashboard window, go to the Tracing section.
- In the Tracing section, click the View Transactions button.
- Now you can view the traces in the Aggregate and Real Time tabs.
Troubleshoot Steps
If the trace data is not collected in the SnappyFlow server, then check the trace configuration in the settings.py file.

To enable the debug logs, add the below key-value pair in the ELASTIC_APM block of the settings.py file.

'DEBUG': 'true'
Sample Application Code
The below link contains a sample application with tracing enabled, following the configuration mentioned in the above sections.
Click Here to view the reference application.
Flask
Add the below entries in the requirements.txt file and install them in your project environment:

sf-elastic-apm[flask]==6.7.2
sf-apm-lib==0.1.1

or install the libraries through the CLI (e.g., in a Dockerfile):

RUN pip install sf-elastic-apm[flask]==6.7.2
RUN pip install sf-apm-lib==0.1.1

Add the following entries in app.py.

Add the import statements.

import os
from elasticapm.contrib.flask import ElasticAPM
from sf_apm_lib.snappyflow import Snappyflow

Get the trace config.
sf = Snappyflow() # Initialize Snappyflow. By default initialization will take profileKey, projectName and appName from sfagent config.yaml
# Add below part to manually configure the initialization
import os  # the os module is needed to read the environment variables below
SF_PROJECT_NAME = os.getenv('SF_PROJECT_NAME')
SF_APP_NAME = os.getenv('SF_APP_NAME')
SF_PROFILE_KEY = os.getenv('SF_PROFILE_KEY')
sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
# End of manual configuration
SFTRACE_CONFIG = sf.get_trace_config()
# Start Trace to log feature section
# Add below line of code to enable Trace to log feature:
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_redact_body=true'
# Optional configs for trace to log
# Add below line to provide custom documentType (Default:"user-input"):
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_documentType=<document-type>'
# Add below line to provide destination index (Default:"log"):
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_IndexType=<index-type>' # Applicable values(log, metric)
# End trace to log section
Initialize elastic apm and instrument it to the flask app:
app.config['ELASTIC_APM'] = {
'SERVICE_NAME': '<SERVICE_NAME>', # Specify your service name for tracing
'SERVER_URL': SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
'GLOBAL_LABELS': SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
'VERIFY_SERVER_CERT': SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT'),
'SPAN_FRAMES_MIN_DURATION': SFTRACE_CONFIG.get('SFTRACE_SPAN_FRAMES_MIN_DURATION'),
'STACK_TRACE_LIMIT': SFTRACE_CONFIG.get('SFTRACE_STACK_TRACE_LIMIT'),
'CAPTURE_SPAN_STACK_TRACES': SFTRACE_CONFIG.get('SFTRACE_CAPTURE_SPAN_STACK_TRACES'),
'DEBUG': True,
'METRICS_INTERVAL': '0s'
}
apm = ElasticAPM(app)
Provide SF_PROJECT_NAME, SF_APP_NAME, and SF_PROFILE_KEY as environment variables in the Add container section of the task definition. Refer to the below documentation:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html
Once your server is up and running, you can check the trace in the SnappyFlow server.
To view the trace, make sure the project and app are created or discovered with the project name and app name specified in the configuration above.
Once the project and app are created, go to View Dashboard -> click Tracing on the left side bar -> click View Transactions -> go to the Real Time tab.
Note: The 'CAPTURE_BODY': 'all' config should be present in the APM agent code instrumentation for the Trace to Log feature.
Celery
Install the following requirements (the following example is based on the Redis broker):
pip install sf-elastic-apm==6.7.2
pip install redis
pip install sf-apm-lib==0.1.1
Add the following code at the start of the file where the celery app is initialized to set up the elastic apm client:
from sf_apm_lib.snappyflow import Snappyflow
from elasticapm import Client, instrument
from elasticapm.contrib.celery import register_exception_tracking, register_instrumentation
instrument()
try:
sf = Snappyflow() # Initialize Snappyflow. By default initialization will take profileKey, projectName and appName from sfagent config.yaml
# Add below part to manually configure the initialization
SF_PROJECT_NAME = '<SF_PROJECT_NAME>' # Replace with appropriate Snappyflow project name
SF_APP_NAME = '<SF_APP_NAME>' # Replace with appropriate Snappyflow app name
SF_PROFILE_KEY = '<SF_PROFILE_KEY>' # Replace Snappyflow Profile key
sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
# End of manual configuration
SFTRACE_CONFIG = sf.get_trace_config()
apm_client = Client(
service_name= '<Service_Name>', # Specify service name for tracing
server_url= SFTRACE_CONFIG.get('SFTRACE_SERVER_URL'),
global_labels= SFTRACE_CONFIG.get('SFTRACE_GLOBAL_LABELS'),
verify_server_cert= SFTRACE_CONFIG.get('SFTRACE_VERIFY_SERVER_CERT')
)
register_exception_tracking(apm_client)
register_instrumentation(apm_client)
except Exception as error:
print("Error while fetching snappyflow tracing configurations", error)
Once instrumentation is done and the celery worker is running, you can see a trace for each celery task in the SnappyFlow server.
To view the trace in the SnappyFlow server, make sure the project and app are created or discovered with the project name and app name specified in the configuration above.
Once the project and app are created, go to View Dashboard -> click Tracing on the left side bar -> click View Transactions -> go to the Real Time tab.
Refer to the complete code:
https://github.com/snappyflow/tracing-reference-apps/blob/master/ref-celery/tasks.py
AWS Lambda
Script
Add these python libraries in the requirements.txt file. Follow the AWS Lambda doc on adding a runtime dependency to a Lambda function.
sf-apm-lib==0.1.1
sf-elastic-apm==6.7.2
Instrument the lambda function to enable tracing.
Import libraries:
import os
import elasticapm
from sf_apm_lib.snappyflow import Snappyflow
Add the below code to get the SnappyFlow trace config, outside the lambda handler method.
sf = Snappyflow()
SF_PROJECT_NAME = os.environ['SF_PROJECT_NAME']
SF_APP_NAME = os.environ['SF_APP_NAME']
SF_PROFILE_KEY = os.environ['SF_PROFILE_KEY']
sf.init(SF_PROFILE_KEY, SF_PROJECT_NAME, SF_APP_NAME)
trace_config = sf.get_trace_config()
Add custom instrumentation in the lambda handler function:
def lambda_handler(event, context):
client = elasticapm.Client(service_name="<SERVICE_NAME_CHANGEME>",
server_url=trace_config['SFTRACE_SERVER_URL'],
verify_server_cert=trace_config['SFTRACE_VERIFY_SERVER_CERT'],
global_labels=trace_config['SFTRACE_GLOBAL_LABELS']
)
elasticapm.instrument()
client.begin_transaction(transaction_type="script")
# DO SOME WORK. No return statements.
client.end_transaction(name=__name__, result="success")
# RETURN STATEMENT e.g. return response
Deploy the Lambda function. Follow the README to test the sample app.
Sample code for reference:
https://github.com/upendrasahu/aws-lambda-python-tracing-sample
Configure the Lambda function before trigger/invoke.
- Add the environment variable SF_PROFILE_KEY and set the value to your profile key copied from SnappyFlow.
- Add the environment variables SF_APP_NAME and SF_PROJECT_NAME with appropriate values.
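Before invoking the function, it can help to fail fast when any of these variables is missing. A minimal sketch (the `check_sf_env` helper is hypothetical, not part of sf-apm-lib):

```python
import os

def check_sf_env(env=None):
    """Return the three SnappyFlow settings, raising if any is missing or empty."""
    env = os.environ if env is None else env
    required = ('SF_PROFILE_KEY', 'SF_PROJECT_NAME', 'SF_APP_NAME')
    missing = [name for name in required if not env.get(name)]
    if missing:
        raise RuntimeError('Missing SnappyFlow environment variables: ' + ', '.join(missing))
    return {name: env[name] for name in required}
```

Calling `check_sf_env()` at module load (outside the handler) surfaces a clear error in the Lambda logs instead of a failed `sf.init(...)` later.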
Trace to Log Body
For transactions that are HTTP requests containing a request body, the sfPython trace agent captures the request body and stores it in SnappyFlow with a specific index and document type.
Request bodies usually contain sensitive data like passwords and credit card numbers. If your service handles data like this, we advise you to enable this feature with care.
Add the below values to enable this feature:
Update the ELASTIC_APM block with the following key-value pair in settings.py:
'CAPTURE_BODY': 'all'
Add the below line in the try block of the tracing instrumentation code in settings.py:
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_redact_body=true'
Follow the below steps in the try block of settings.py to customize the document type and destination index. (Optional)
Add the below line to customize the destination index (default: "log"; applicable values: log, metric):
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_IndexType=<index-type>'
Add the below line to customize the document type (default: "user-input"):
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_documentType=<document-type>'
The overall configuration is shown below:
try:
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_redact_body=true'
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_IndexType=log'
SFTRACE_CONFIG['SFTRACE_GLOBAL_LABELS'] += ',_tag_documentType=user-input'
ELASTIC_APM={
'CAPTURE_BODY': 'all'
}
except Exception as error:
print("Error while fetching snappyflow tracing configurations", error)
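Each of these `+=` statements simply appends a comma-separated `_tag_<name>=<value>` pair to the agent's global-labels string. A standalone sketch of the same idea (the `add_trace_labels` helper is hypothetical, not part of sf-apm-lib):

```python
def add_trace_labels(global_labels, **tags):
    """Append _tag_<name>=<value> pairs to a comma-separated labels string."""
    for name, value in tags.items():
        global_labels += ',_tag_{}={}'.format(name, value)
    return global_labels

# Equivalent to the += lines in the try block above:
labels = add_trace_labels('projectName=demo', redact_body='true',
                          IndexType='log', documentType='user-input')
```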
Log Correlation
To enable log correlation, follow the below instructions.
Django
a. Add import statement in settings.py
from elasticapm.handlers.logging import Formatter
b. Add following logging configuration in settings.py.
LOGGING = {
'version': 1,
'disable_existing_loggers': True,  # disable existing loggers
'formatters': {
'elastic': {  # add the elastic formatter
'format': '[%(asctime)s] [%(levelname)s] [%(message)s]',
'class': 'elasticapm.handlers.logging.Formatter',
'datefmt': "%d/%b/%Y %H:%M:%S"
}
},
'handlers': {
'elasticapm_log': {
'level': 'INFO',
'class': 'logging.handlers.RotatingFileHandler',
'filename': '/var/log/trace/django.log',  # specify your log file path
'formatter': 'elastic'
}
},
'loggers': {
'elasticapm': {
'handlers': ['elasticapm_log'],
'level': 'INFO',
}
}
}
c. Usage:
import logging
from rest_framework.views import APIView  # assumes Django REST framework
from rest_framework.response import Response

log = logging.getLogger('elasticapm')

class ExampleView(APIView):
    def get(self, request):
        log.info('Get API called')
        return Response({'message': 'Get API called'})
Refer code: https://github.com/snappyflow/tracing-reference-apps/blob/master/refapp-django
Flask
- Add the following code in app.py after the import statements to set the logger configuration:
import logging
from elasticapm.handlers.logging import Formatter
fh = logging.FileHandler('/var/log/trace/flask.log')
# we imported a custom Formatter from the Python Agent earlier
formatter = Formatter("[%(asctime)s] [%(levelname)s] [%(message)s]", "%d/%b/%Y %H:%M:%S")
fh.setFormatter(formatter)
logging.getLogger().addHandler(fh)
# Once logging is configured get log object using following code
log = logging.getLogger()
log.setLevel('INFO')
@app.route('/')
def home():
log.info('Home API called')
return 'Welcome to Home'
Refer code: https://github.com/snappyflow/tracing-reference-apps/blob/master/refapp-flask/app.py
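Conceptually, elasticapm's Formatter behaves like a standard `logging.Formatter` that appends the active trace identifiers to each record. A stdlib-only sketch of the idea (purely illustrative, not the agent's real implementation; in the real agent the ids come from the active transaction and span):

```python
import io
import logging

class TraceFormatter(logging.Formatter):
    """Illustrative: appends trace fields to each record, like elasticapm's Formatter."""
    def format(self, record):
        base = super().format(record)
        # Hypothetical: the real agent reads these ids from the current transaction/span.
        tx, tr, sp = getattr(record, 'ids', ('none', 'none', 'none'))
        return f"{base} | elasticapm transaction.id={tx} trace.id={tr} span.id={sp}"

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(TraceFormatter("[%(levelname)s] [%(message)s]"))
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Home API called")
line = stream.getvalue().strip()
```

The resulting `line` has the same shape as the trace-correlated entries in `/var/log/trace/flask.log`, which is what the sfagent log plugin parses.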
Send log correlation data to the SnappyFlow server
Below are the modes for sending log-correlated data to the SnappyFlow server.
For Appliance:
Install sfagent and create config file.
Refer: https://docs.snappyflow.io/docs/Integrations/os/linux/sfagent_linux
Add the elasticApmTraceLog plugin to the sfagent config.yaml and restart the sfagent service. Example config.yaml:
key: <SF_PROFILE_KEY>
tags:
Name: <any-name>
appName: <SF_APP_NAME>
projectName: <SF_PROJECT_NAME>
logging:
plugins:
- name: elasticApmTraceLog
enabled: true
config:
log_level:
- error
- warning
- info
log_path: /var/log/trace/ntrace.log # Your app log file path
For Kubernetes:
Specify the following values in the metadata labels section of the deployment file.
snappyflow/appname: <SF_APP_NAME>
snappyflow/projectname: <SF_PROJECT_NAME>
snappyflow/component: gen-elastic-apm-log # This is a must for tracing log correlation
Sample deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: python-app
snappyflow/appname: '<sf_app_name>'
snappyflow/projectname: '<sf_project_name>'
snappyflow/component: gen-elastic-apm-log
name: python-app
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: python-app
strategy: {}
template:
metadata:
labels:
io.kompose.service: python-app
snappyflow/appname: '<sf_app_name>'
snappyflow/projectname: '<sf_project_name>'
snappyflow/component: gen-elastic-apm-log
spec:
containers:
- env:
- name: SF_APP_NAME
value: '<sf_app_name>'
- name: SF_PROFILE_KEY
value: '<sf_profile_key>'
- name: SF_PROJECT_NAME
value: '<sf_project_name>'
image: refapp-node:latest
imagePullPolicy: Always
name: python-app
ports:
- containerPort: 3000
resources:
requests:
cpu: 10m
memory: 10Mi
limits:
cpu: 50m
memory: 50Mi
restartPolicy: Always
For kubernetes mode, sfagent pods must be running inside the kubernetes cluster where your application pods are deployed.
To view traces and logs in the SnappyFlow server, make sure the project and app are created or discovered.
Once the project and app are created, go to: View App Dashboard -> click Tracing on the left side bar -> click View Transactions -> go to the Real Time tab. Then click on any trace and go to the Logs tab to see the logs correlated to the trace.
Note: To get the trace in the SnappyFlow server, log entries need to adhere to the following format (date as shown):
[10/Aug/2021 10:51:16] [<log_level>] [<message>] | elasticapm transaction.id=<transaction_id> trace.id=<trace_id> span.id=<span_id>
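A quick stdlib-only way to sanity-check a log line against this format (a standalone sketch, not part of the agent):

```python
import re

# Matches: [<date>] [<level>] [<message>] | elasticapm transaction.id=... trace.id=... span.id=...
LOG_LINE = re.compile(
    r'^\[\d{2}/\w{3}/\d{4} \d{2}:\d{2}:\d{2}\] '   # [10/Aug/2021 10:51:16]
    r'\[\w+\] \[.*\] \| elasticapm '               # [INFO] [message] | elasticapm
    r'transaction\.id=\S+ trace\.id=\S+ span\.id=\S+$'
)

sample = ('[10/Aug/2021 10:51:16] [INFO] [Home API called] | '
          'elasticapm transaction.id=abc123 trace.id=def456 span.id=ghi789')
```

Lines that fail this check will not be picked up as trace-correlated logs.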