In today’s DevOps world, managing alerts from multiple monitoring tools can quickly become difficult. Tools like Grafana, Kibana, Sentry, and AWS CloudWatch each provide important updates about system health, but each has its own alert format and notification method.
Imagine having to set up separate webhooks for each tool, dealing with inconsistent message layouts, and manually sending alerts to the right team—all while trying to fix a production issue quickly. It’s a tough situation.
Instead of dealing with multiple webhook endpoints and custom scripts, let's use Versus to act as a central hub, turning alerts into a standard format and delivering them where your team already works.
Step 1: Create a Unified Alert Template
Versus uses Go templates to format incoming alerts. This lets us make one template that works for different alert sources based on their data structure. Here’s an example template for AWS, Grafana, Kibana, and Sentry:
{{/* Alert from AWS */}}
{{ if .source }}
{{ if eq .source "aws.glue" }}
🚨 Job: {{.detail.jobName}} run failed
{{ else if eq .source "aws.ec2" }}
🚨 Instance: {{ index .detail "instance-id" }} terminating
{{ end }}
{{/* Alert from Grafana */}}
{{ else if .receiver }}
{{ if eq .receiver "payment-team" }}
🔥 Transaction {{ (index .alerts 0).annotations.summary }} failed, please check!
{{ else if eq .receiver "devops-team" }}
🚨 Node {{ (index .alerts 0).annotations.summary }} down
{{ end }}
{{/* Alert from Kibana */}}
{{ else if .kibanaUrl }}
❌ Kibana Alert: {{.name}}
Message: {{.message}}
Kibana URL: <{{.kibanaUrl}}|View in Kibana>
{{/* Alert from Sentry */}}
{{ else }}
🚨 Sentry Alert: {{.data.issue.title}}
Project: {{.data.issue.project.name}}
Issue URL: {{.data.issue.url}}
{{ end }}
- AWS Alerts: Looks for a `.source` field (e.g., `aws.glue` or `aws.ec2`) and pulls out details like job names or instance IDs.
- Grafana Alerts: Uses the `.receiver` field to send alerts to specific teams (e.g., `payment-team` or `devops-team`) and grabs data from the alert annotations.
- Kibana Alerts: Spots alerts by the `.kibanaUrl` field and adds a clickable link.
- Sentry Alerts: Uses a default case to pull issue details from the `.data.issue` object.
Save this template as `config/slack_message.tmpl` in a local `config` directory.
Step 2: Configure Versus
Next, create a `config/config.yaml` file to tell Versus how to run and connect to Slack:
name: versus
host: 0.0.0.0
port: 3000

alert:
  slack:
    enable: true
    token: ${SLACK_TOKEN}           # Your Slack Bot OAuth Token
    channel_id: ${SLACK_CHANNEL_ID} # Your Slack channel ID
    template_path: "/app/config/slack_message.tmpl" # Path inside the container

  telegram:
    enable: false

  msteams:
    enable: false
Step 3: Deploy Versus with Docker
Run Versus using Docker, mounting the `config` directory and passing in the Slack details:
docker run -d \
-p 3000:3000 \
-v $(pwd)/config:/app/config \
-e SLACK_ENABLE=true \
-e SLACK_TOKEN=your_slack_token \
-e SLACK_CHANNEL_ID=your_channel_id \
--name versus \
ghcr.io/versuscontrol/versus-incident
After starting, Versus listens for webhook POST requests at `http://localhost:3000/api/incidents`.
Step 4: Expose Versus with ngrok (Optional)
To test with external tools, use `ngrok` to make your local Versus instance available online:
ngrok http 3000 --url your-versus-https-url.ngrok-free.app
This gives you a public URL (e.g., `https://your-versus-https-url.ngrok-free.app`). The Versus API will be at `https://your-versus-https-url.ngrok-free.app/api/incidents`. Save this URL for setting up webhooks.
Step 5: Set Up Webhooks in Your Tools
Now, connect each tool’s webhook to the Versus endpoint. Here’s how:
AWS (SNS)
- Create an SNS topic in AWS and add an HTTPS subscription pointing to `https://your-versus-https-url.ngrok-free.app/api/incidents`.
- Send a test message:
aws sns publish \
--topic-arn your_sns_topic_arn \
--message '{"source": "aws.glue", "detail": {"jobName": "ETLJob1"}}'
Grafana
- In Grafana, go to Alerting > Notification Channels > New Channel.
- Choose Webhook, set the URL to `https://your-versus-https-url.ngrok-free.app/api/incidents`, and use this JSON:
{
"receiver": "devops-team",
"alerts": [{"annotations": {"summary": "Server1"}}]
}
Kibana
- In Kibana, go to Stack Management > Alerts and Insights > Rules > Create Rule.
- Add a Webhook action with the URL `https://your-versus-https-url.ngrok-free.app/api/incidents` and this body:
{
"name": "High CPU Usage",
"message": "CPU exceeds 90%",
"kibanaUrl": "http://kibana.example.com/app/alerts/123"
}
Sentry
- In Sentry, go to Project Settings > Alerts > Create Alert Rule.
- Add a Webhook action with the URL `https://your-versus-https-url.ngrok-free.app/api/incidents` and this payload:
{
"data": {
"issue": {
"title": "Null Pointer Exception",
"project": {"name": "Backend"},
"url": "https://sentry.io/issues/123"
}
}
}
Step 6: Test and Check
Trigger alerts from each tool and look at your Slack channel. You should see messages like:
- AWS:
  🚨 Job: ETLJob1 run failed
- Grafana:
  🚨 Node Server1 down
- Kibana:
  ❌ Kibana Alert: High CPU Usage
  Message: CPU exceeds 90%
  Kibana URL: View in Kibana
- Sentry:
  🚨 Sentry Alert: Null Pointer Exception
  Project: Backend
  Issue URL: https://sentry.io/issues/123
Scaling and Improving the Setup
- More Channels: Turn on Telegram or Microsoft Teams in `config.yaml` and add matching templates (e.g., `telegram_message.tmpl`).
- On-Call Support: Enable AWS Incident Manager by adding `oncall: enable: true` and an `awsim_response_plan_arn` in the config.
- Production Use: Run Versus on a cloud service (e.g., AWS ECS) instead of localhost for better reliability.
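Going by the key names mentioned above, the on-call addition might look like the sketch below. Check the Versus docs for the exact schema; the environment variable name here is purely illustrative:

```yaml
oncall:
  enable: true
  awsim_response_plan_arn: ${AWSIM_RESPONSE_PLAN_ARN} # illustrative env var name
```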
Conclusion
Versus turns a confusing mix of alerts into a clear, unified stream. Its Go templates and webhook support make it a great fit for DevOps teams using multiple monitoring solutions. Whether you’re tracking AWS resources, checking metrics in Grafana, reviewing logs in Kibana, or catching errors in Sentry, Versus ensures your team gets the right details in a consistent format, right when they need them.