VictoriaLogs: What if log management became simple and performant?
Overview
Once our application is deployed, it is essential to have indicators that help identify potential issues and track performance changes. Among these sources of information, metrics and logs play an essential role by providing valuable insights into the application's operation. Additionally, it is often useful to implement detailed tracing to accurately track all actions performed within the application.
In this series of blog posts, we will explore the various areas of application monitoring. The goal is to thoroughly analyze the state of our applications, in order to improve their availability and performance, while ensuring an optimal user experience.
Too often, log management means complex solutions and slow queries. Yet logs are an essential pillar for understanding, diagnosing, and improving our applications' performance and health.
Indeed, while metrics allow us to observe the evolution of indicators over time, and traces let us follow a request's journey through our platform, logs provide the detailed context needed to understand events.
❓ What are our logs for?
Logs aren't just simple messages we accumulate in a corner of our infrastructure: they constitute the living memory of our systems. They're useful because they fulfill several critical roles, here are some concrete scenarios:
- Diagnostics and Troubleshooting: An e-commerce application encounters 500 errors during payment; logs help trace the exact sequence of calls, identify that an external dependency (e.g., payment API) is the cause, and quickly fix the problem.
- Security and Compliance: Logs reveal suspicious connection attempts outside of normal hours; they help detect a brute force attack and strengthen security. They are also essential for meeting regulatory requirements (GDPR, PCI DSS, etc.).
- Proactive Monitoring and Alerting: Alerting rules automatically detect an abnormal increase in the error rate in the logs of a critical service, allowing intervention before the situation worsens.
- Audit and Traceability: During a GDPR audit, access logs make it possible to precisely reconstruct the history of actions on personal data.
But for these use cases to reveal their full value, it's not enough to collect logs: you must be able to search them quickly, formulate simple queries, and ensure their long-term retention without exploding costs or complexity.
This is exactly where VictoriaLogs comes into play 🚀
🔍 VictoriaLogs: An answer to log management and analysis
With the adoption of distributed architectures, our platforms generate an ever-increasing volume of logs.
To leverage these growing volumes, we've traditionally used solutions like ELK (Elasticsearch, Logstash, Kibana) or Grafana Loki, which can sometimes involve operational complexity.
In 2023, VictoriaLogs emerged as a promising alternative that might just change the game.
Developed by the team behind the increasingly popular time series database VictoriaMetrics, VictoriaLogs inherits the same qualities. Here are its main features:
- Easy to deploy and operate: Its installation and configuration are quite simple, and we will explore the most advanced mode (cluster) together below.
- High performance: Optimized for massive log ingestion and fast analytical queries, even on very large data volumes.
- Resource efficiency: Low CPU and memory footprint, and effective data compression to minimize storage costs compared to other log management systems.
- Integration with the VictoriaMetrics ecosystem: Integrates naturally with VictoriaMetrics for a unified observability solution, with VMAlert for alerting, and with Grafana for visualization.
- Fast Full-Text and Label-based Search: VictoriaLogs allows for both full-text searches on log content and precise filtering by labels.
Several references attest to the performance of VictoriaLogs compared to other log management solutions.
The performance gaps, when compared with ELK or Loki, are quite impressive, whether in terms of memory usage or data compression.
Regarding log search, VictoriaLogs stands out by effectively combining Elasticsearch's full-text search and Loki's label-based filtering, thus offering the best of both approaches while maintaining fast query execution.
🗄️ Ingestion and Storage
A log in VictoriaLogs is typically a JSON object. Every log must contain the following fields:

- `_msg`: the raw content of the log message, as produced by the application.
- `_time`: the timestamp of the log.
- `_stream`: a set of labels (key-value pairs) that uniquely identify the log source.

💡 In addition to these fields, any other field can be added to the JSON to simplify and optimize searching for relevant information according to the context (we will see some examples later).
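For illustration, here is a minimal sketch of what such a log could look like once stored (all values are invented for this example):

```json
{
  "_time": "2025-07-29T07:25:49Z",
  "_stream": "{kubernetes.container_name=\"loggen\",kubernetes.pod_namespace=\"observability\"}",
  "_msg": "GET /homepage HTTP/2 204",
  "level": "info"
}
```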
The `_stream` field in VictoriaLogs optimizes compression and ensures ultra-fast search thanks to the contiguous storage of logs sharing the same labels.
Its efficiency depends on a careful choice: only constant fields that uniquely identify an application instance (container, namespace, pod) should be part of the stream. Dynamic fields (IP, user_id, trace_id) must remain in the message to avoid excessively high cardinality.
It's possible to store a log simply via a `curl` command, or by using one of the many agents for collecting and transporting logs, such as Promtail, FluentBit, OpenTelemetry, and many others.
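As a quick sketch, a single log entry can be pushed with `curl` to the JSON lines ingestion endpoint (assuming a VictoriaLogs instance listening locally on port 9428; the query parameters designate which fields hold the message, the timestamp, and the stream labels — the field names `ts`, `log`, `host`, and `app` are invented for this example):

```shell
curl -X POST \
  'http://localhost:9428/insert/jsonline?_msg_field=log&_time_field=ts&_stream_fields=host,app' \
  --data-binary '{"ts":"2025-07-29T07:25:49Z","log":"user logged in","host":"web-1","app":"auth"}'
```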
I chose Vector because it's a very high-performance solution, but also because it's offered by default in the Helm chart we're going to use 🙂.
Among the required configuration elements, you must specify the destination but also the essential fields we mentioned earlier, which are configured here using HTTP headers.
```yaml
sinks:
  vlogs-0:
    compression: gzip
    endpoints:
      - http://<victorialogs_host>:9428/insert/elasticsearch
    healthcheck:
      enabled: false
    inputs:
      - parser
    mode: bulk
    request:
      headers:
        AccountID: "0"
        ProjectID: "0"
        VL-Msg-Field: message,msg,_msg,log.msg,log.message,log
        VL-Stream-Fields: stream,kubernetes.pod_name,kubernetes.container_name,kubernetes.pod_namespace
        VL-Time-Field: timestamp
    type: elasticsearch
```
💡 The `VL-Msg-Field` header tells VictoriaLogs to look for the log content in several common field names, which provides flexibility for different log sources.
The logs are collected on a Kubernetes cluster, and Vector enriches them with numerous fields to precisely identify their source. Here is a concrete example of an enriched log as it is stored in VictoriaLogs (This log has been intentionally truncated for the purpose of this article):
```json
{
  "_time": "2025-07-29T07:25:49.870820279Z",
  "_stream_id": "00000000000000006a98e166d58afc9efc6ea35a22d87f1b",
  "_stream": "{kubernetes.container_name=\"loggen\",kubernetes.pod_name=\"loggen-loggen-68dc4f9b8b-6mrqj\",kubernetes.pod_namespace=\"observability\",stream=\"stdout\"}",
  "_msg": "236.161.251.196 - [07/Jul/2025:08:13:41 ] \"GET /homepage HTTP/2\" 204 4367 \"http://localhost/\" \"curl/7.68.0\" \"DE\" 0.83",
  "file": "/var/log/pods/observability_loggen-loggen-68dc4f9b8b-6mrqj_33076791-133a-490f-bd44-97717d242a61/loggen/0.log",
  "kubernetes.container_name": "loggen",
  "kubernetes.node_labels.beta.kubernetes.io/instance-type": "c5.xlarge",
  "kubernetes.node_labels.beta.kubernetes.io/os": "linux",
  "kubernetes.node_labels.eks.amazonaws.com/capacityType": "SPOT",
  "kubernetes.pod_ip": "10.0.33.16",
  "kubernetes.pod_labels.app.kubernetes.io/name": "loggen",
  "kubernetes.pod_name": "loggen-loggen-68dc4f9b8b-6mrqj",
  "kubernetes.pod_namespace": "observability",
  "kubernetes.pod_node_name": "ip-10-0-47-231.eu-west-3.compute.internal",
  "source_type": "kubernetes_logs",
  "stream": "stdout"
  <REDACTED>
}
```
Now that we have an overview of how VictoriaLogs works, I'll propose an installation and configuration method that can be considered for production.
🏗️ Installation and Configuration
VictoriaLogs can be installed in two ways:

- A `Single` mode, which has the advantage of being very simple because a single binary handles all operations. This is the preferred mode because it's easy to operate. If you have a powerful machine with enough resources to meet your needs, this mode will always be more performant, as it doesn't require network transfers between the components of the cluster mode. 💡 To ensure high availability, we can also deploy two `Single` instances as described here.
- The `Cluster` mode, which is used for very high loads and when horizontal scaling is needed (when a single machine is not sufficient to meet the demand). Since this is the mode that provides the most flexibility for scaling, we will explore it in this article.

If you've read the previous article on VictoriaMetrics, you'll notice that the cluster mode architecture is very similar:
- VLStorage: the component responsible for persisting logs to disk. It is therefore a StatefulSet, and each pod has a dedicated volume (Persistent Volume).
- VLInsert: receives logs from various sources and protocols and is responsible for distributing them to the VLStorage instances.
- Vector: deployed as a DaemonSet, Vector is responsible for shipping the logs stored on the Kubernetes nodes to the VLInsert service.
- VLSelect: the service that exposes the query API. Data is retrieved from the VLStorage instances.
- VMAlert: to send alerts based on logs, a dedicated VMAlert instance is deployed.
The installation is done using the Helm chart provided by VictoriaMetrics, by setting a few variables. Here is an example suitable for EKS that we will describe below:
observability/base/victoria-logs/helmrelease-vlcluster.yaml
```yaml
printNotes: false

vlselect:
  horizontalPodAutoscaler:
    enabled: true
    maxReplicas: 10
    minReplicas: 2
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70

  podDisruptionBudget:
    enabled: true
    minAvailable: 1

  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: "app"
                operator: In
                values:
                  - "vlselect"
          topologyKey: "kubernetes.io/hostname"
  topologySpreadConstraints:
    - labelSelector:
        matchLabels:
          app: vlselect
      maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway

  resources:
    limits:
      cpu: 100m
      memory: 200Mi
    requests:
      cpu: 100m
      memory: 200Mi

  vmServiceScrape:
    enabled: true

vlinsert:
  horizontalPodAutoscaler:
    enabled: true
    maxReplicas: 10
    minReplicas: 2
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70

  podDisruptionBudget:
    enabled: true
    minAvailable: 1

  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: "app"
                operator: In
                values:
                  - "vlinsert"
          topologyKey: "kubernetes.io/hostname"
  topologySpreadConstraints:
    - labelSelector:
        matchLabels:
          app: vlinsert
      maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway

  resources:
    limits:
      cpu: 100m
      memory: 200Mi
    requests:
      cpu: 100m
      memory: 200Mi

  vmServiceScrape:
    enabled: true

vlstorage:
  # -- Enable deployment of vlstorage component. StatefulSet is used
  enabled: true
  retentionPeriod: 7d
  retentionDiskSpaceUsage: "9GiB"
  replicaCount: 3

  podDisruptionBudget:
    enabled: true
    minAvailable: 1

  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: "app"
                operator: In
                values:
                  - "vlstorage"
          topologyKey: "kubernetes.io/hostname"
  topologySpreadConstraints:
    - labelSelector:
        matchLabels:
          app: vlstorage
      maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway

  persistentVolume:
    enabled: true
    size: 10Gi

  resources:
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 500m
      memory: 512Mi

  vmServiceScrape:
    enabled: true

vector:
  enabled: true
```
- Autoscaling: The stateless components (`VLSelect` and `VLInsert`) are configured to scale automatically beyond 70% CPU usage.
- Log Persistence: For this demo environment, each `VLStorage` instance has a 10Gi EBS volume, with a 7-day retention period and a disk-usage cap to prevent disk saturation.
- High Availability: The configuration ensures maximum availability through distribution across availability zones (`topologySpreadConstraints`) and pod anti-affinity for each component.
- Monitoring: The `vmServiceScrape` settings automatically expose the metrics of each component for monitoring via the VictoriaMetrics operator.
Once the Helm chart is installed, we can check that all pods have started correctly.
```console
kubectl get po -n observability -l app.kubernetes.io/instance=victoria-logs
NAME                                                            READY   STATUS    RESTARTS   AGE
victoria-logs-vector-9gww4                                      1/1     Running   0          11m
victoria-logs-vector-frj8l                                      1/1     Running   0          10m
victoria-logs-vector-jxm95                                      1/1     Running   0          10m
victoria-logs-vector-kr6q6                                      1/1     Running   0          12m
victoria-logs-vector-pg2fc                                      1/1     Running   0          12m
victoria-logs-victoria-logs-cluster-vlinsert-dbd47c5fd-cmqj9    1/1     Running   0          11m
victoria-logs-victoria-logs-cluster-vlinsert-dbd47c5fd-mbkwx    1/1     Running   0          12m
victoria-logs-victoria-logs-cluster-vlselect-7fbfbd9f8f-nmv8t   1/1     Running   0          11m
victoria-logs-victoria-logs-cluster-vlselect-7fbfbd9f8f-nrhs4   1/1     Running   0          12m
victoria-logs-victoria-logs-cluster-vlstorage-0                 1/1     Running   0          12m
victoria-logs-victoria-logs-cluster-vlstorage-1                 1/1     Running   0          11m
victoria-logs-victoria-logs-cluster-vlstorage-2                 1/1     Running   0          9m39s
```
We can then start using the Web UI, which is exposed here using Cilium and Gateway API resources 🎉
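If no Gateway is set up yet, a simple port-forward to the `vlselect` service is enough to reach the web UI (a sketch: the service name is inferred from the pod listing above, the port from the configuration used later in this article; the UI is served under the /select/vmui path):

```shell
kubectl port-forward -n observability \
  svc/victoria-logs-victoria-logs-cluster-vlselect 9471:9471
# Then open http://localhost:9471/select/vmui in a browser
```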
All the configuration used for writing this article can be found in the Cloud Native Ref repository.
The ambition of this project is to be able to quickly start a complete platform that applies best practices in terms of automation, monitoring, security, etc.
Comments and contributions are welcome 🙏
👩‍💻 LogsQL: A powerful and easy-to-learn language
LogsQL stands out for its ability to perform fast full-text searches and use the fields exposed by the logs.
For example, we can search for logs generated by pods whose name starts with `loggen`, then filter these results by including or excluding (by prefixing with `-`) certain character strings.
```logsql
kubernetes.pod_name: "loggen"* "GET /homepage" -"example.com"
```
This query will therefore return all calls to the homepage with the GET method, excluding logs containing the domain "example.com".
💡 Remember: full-text search is performed on the content of the `_msg` field.
We will now look at a few examples of simple queries that we could use in a Kubernetes environment.
☸️ Kubernetes Events
Kubernetes events are a valuable source of information because they often reveal problems related to resource state changes or errors that are not visible elsewhere. It's therefore advisable to analyze them regularly.
⚠️ Limitation: These events are ephemeral, and if you want to explore their history, you need a solution to persist this data. Until Vector supports this feature, I'm using Kubernetes Event Exporter, although the project doesn't seem very active.
Once the solution is deployed, we can search for events using the `source` field.
- Use the `~` character to search for a string within a field. Here we can view the error notifications for a policy validation defined by Kyverno.

```logsql
source:"kubernetes-event-exporter" AND type: "Warning" AND message:~"validation error: Privileged mode is disallowed"
```
- The following query uses the logical operators `AND` and `NOT` to view events of type "Warning" while filtering out Kyverno errors.

```logsql
source:"kubernetes-event-exporter" AND type: "Warning" AND NOT reason: "PolicyViolation"
```
🌐 Web Server Logs
For the purpose of this article, I created a small and simple log generator. It allows simulating web server type logs in order to run a few queries.
```console
loggen --sleep 1 --error-rate 0.2 --format json

{
  "remote_addr": "208.175.166.30",
  "remote_user": "-",
  "time_local": "19/Apr/2025:02:11:56 ",
  "request": "PUT /contact HTTP/1.1",
  "status": 202,
  "body_bytes_sent": 3368,
  "http_referer": "https://github.com/",
  "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15",
  "country": "AU",
  "request_time": 0.532
}
```
💡 Remember: Logs emitted by applications in JSON format allow all the fields of the JSON object to be indexed. This simplifies searches and calculations. However, you must remain mindful of cardinality, which can impact performance.
Vector is configured to parse JSON logs and extract their fields. If it's not a JSON log, it keeps the original message without modifying it.
```yaml
transforms:
  parser:
    inputs:
      - k8s
    source: |
      .log = parse_json(.message) ?? .message
      del(.message)
    type: remap
```
We thus obtain new fields prefixed with `log.` in the logs stored by VictoriaLogs.
```json
{
  "log.body_bytes_sent": "4832",
  "log.country": "AU",
  "log.http_referer": "-",
  "log.http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15",
  "log.remote_addr": "84.74.62.151",
  "log.remote_user": "-",
  "log.request": "PUT /products HTTP/1.1",
  "log.request_time": "1.191",
  "log.status": "204",
  "log.time_local": "27/Jul/2025:10:57:48 ",
  <REDACTED>
}
```
Thanks to this, we can now write queries directly on the value of the fields. Here are some concrete examples:
- Count HTTP status codes and sort them in descending order:

```logsql
kubernetes.pod_name:"loggen"* | stats by (log.status) count() as count | sort by (count desc)
```
💡 By using `stats`, it's possible to perform advanced calculations; many functions are available.
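For instance, reusing the loggen fields seen above, a sketch of a query computing the average response time per country over the last hour could look like this (`avg` being one of the available stats functions):

```logsql
_time:1h kubernetes.pod_name: "loggen"* | stats by (log.country) avg(log.request_time) as avg_time | sort by (avg_time desc) | limit 5
```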
- We previously saw that the `~` character allows searching for a string within a field. This character indicates that we are using regular expressions (`regexp`), as shown in this simple example to search only for requests from Japan or Italy.

```logsql
_time:5m kubernetes.pod_name: "loggen"* AND log.country:~"JP|IT"
```
- Other comparison operators can be used. Here `>` is used to filter only logs whose execution time exceeds 1.5 seconds.

```logsql
kubernetes.pod_labels.app.kubernetes.io/instance:"loggen" AND log.request_time:>1.5
```
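These comparison operators combine naturally with `stats` pipes. As a hedged sketch, here is a query counting slow requests per endpoint (field names reuse the loggen example above):

```logsql
kubernetes.pod_name: "loggen"* AND log.request_time:>1.5 | stats by (log.request) count() as slow_requests | sort by (slow_requests desc)
```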
There is also a command-line tool, vlogscli, that allows you to execute queries from a terminal.
```console
vlogscli -datasource.url='https://vl.priv.cloud.ogenki.io/select/logsql/query'
sending queries to -datasource.url=https://vl.priv.cloud.ogenki.io/select/logsql/query
type ? and press enter to see available commands
;> kubernetes.pod_labels.app.kubernetes.io/instance:"loggen" | stats quantile(0.5, log.request_time) p50, quantile(0.9, log.request_time) p90, quantile(0.99, log.request_time) p99
executing ["kubernetes.pod_labels.app.kubernetes.io/instance":loggen | stats quantile(0.5, log.request_time) as p50, quantile(0.9, log.request_time) as p90, quantile(0.99, log.request_time) as p99]...; duration: 2.500s
{
  "p50": "1.022",
  "p90": "1.565",
  "p99": "1.686"
}
;>
```
📊 Grafana Integration
Integration with Grafana is done with the dedicated Datasource. This allows building graphs from the data present in VictoriaLogs.
Here, we use the Kubernetes operator for Grafana, which allows the configuration to be declared through Custom Resources.
There is therefore a `GrafanaDatasource` resource to add the connection to VictoriaLogs by indicating the address of the `VLSelect` service.
```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
  name: vl-datasource
  namespace: observability
spec:
  allowCrossNamespaceImport: true
  datasource:
    access: proxy
    type: victoriametrics-logs-datasource
    name: VictoriaLogs
    url: http://victoria-logs-victoria-logs-cluster-vlselect.observability:9471
  instanceSelector:
    matchLabels:
      dashboards: grafana
```
We can then use this Datasource to execute queries and build graphs.
There are also ready-to-use dashboards.
Configuring a new dashboard is just as simple thanks to the Grafana operator. We specify the dashboard's address accessible from the Grafana.com API.
```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: observability-victoria-logs-cluster
  namespace: observability
spec:
  allowCrossNamespaceImport: true
  datasources:
    - inputName: "DS_VICTORIALOGS"
      datasourceName: "VictoriaLogs"
  instanceSelector:
    matchLabels:
      dashboards: "grafana"
  url: "https://grafana.com/api/dashboards/23274/revisions/2/download"
```
This one allows us to analyze the performance of the VictoriaLogs cluster components.
Another dashboard can be useful for viewing logs and thus unifying metrics and logs at a single address.
🚨 Sending Alerts
It is possible to trigger alerts based on log analysis.
Alerting uses VMAlert, the alerting component of the VictoriaMetrics ecosystem.
An additional instance, dedicated to log analysis, has been added (another instance is already deployed for metrics):
```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMAlert
metadata:
  labels:
    app.kubernetes.io/component: victoria-logs-vmalert
    app.kubernetes.io/instance: victoria-logs
  name: victoria-logs
  namespace: observability
spec:
  ruleSelector:
    matchLabels:
      vmlog: "true"
  datasource:
    url: http://victoria-logs-victoria-logs-cluster-vlselect.observability.svc.cluster.local.:9471
  evaluationInterval: 20s
  image:
    tag: v1.122.0
  notifiers:
    - url: http://vmalertmanager-victoria-metrics-k8s-stack-0.vmalertmanager-victoria-metrics-k8s-stack.observability.svc.cluster.local.:9093
  port: "8080"
  remoteRead:
    url: http://vmselect-victoria-metrics-k8s-stack.observability.svc.cluster.local.:8481
  remoteWrite:
    url: http://vminsert-victoria-metrics-k8s-stack.observability.svc.cluster.local.:8480/api/v1/write
  resources:
    limits:
      cpu: 100m
      memory: 256Mi
    requests:
      cpu: 100m
      memory: 128Mi
```
- AlertManager Integration: Uses the AlertManager instance deployed with VictoriaMetrics for notification management.
- Rule Selector: Only evaluates VMRules carrying the label `vmlog: "true"`, allowing log alerts to be separated from metric alerts.
- Alert Storage: Alerts are stored as metrics in VictoriaMetrics for history and analysis.
Here is a concrete example of an alert that detects an excessively high rate of HTTP errors:
```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMRule
metadata:
  name: loggen
  namespace: observability
  labels:
    vmlog: "true"
spec:
  groups:
    - name: loggen
      type: vlogs
      interval: 10m
      rules:
        - alert: LoggenHTTPError500
          annotations:
            message: "The application Loggen is throwing too many errors in the last 10 minutes"
            description: 'The pod `{{ index $labels "kubernetes.pod_name" }}` has `{{ $value }}` server errors in the last 10 minutes'
          expr: 'kubernetes.pod_labels.app.kubernetes.io/instance:"loggen" AND log.status:"5"* | stats by (kubernetes.pod_name) count() as server_errors | filter server_errors:>100'
          labels:
            severity: warning
```
If AlertManager is configured to send notifications to Slack, as explained in this article, we get the following result:
💭 Final Remarks
This exploration of VictoriaLogs leads me to say that the solution is simple to install and configure. The concepts are rather easy to grasp, whether it's the modular architecture of the cluster mode or the LogsQL language. Indeed, this language is very intuitive, and one quickly gets used to the syntax.
Moreover, if we refer to the published performance tests, the query execution times as well as the effective data compression bode well for large-scale platforms.
As you might have guessed, even though the solution is relatively young, I would highly recommend studying it, taking an approach that allows comparison with your existing solutions before considering a switch 🙂
🔖 References
📚 Official Documentation and Resources
- Official VictoriaLogs Documentation
- VictoriaMetrics Blog
- VictoriaLogs Roadmap - Upcoming features
- LogsQL Playground - Practice the LogsQL language
📈 Comparisons and Performance Analyses
- VictoriaLogs vs Loki - Detailed comparative analysis
- VictoriaLogs: The Space-Efficient Alternative to Elasticsearch
- ClickBench - Performance benchmarks
🛠️ Tools and Integrations
- Support for Kubernetes events in Vector - Ongoing GitHub issue
- Kubernetes Event Exporter - Persisting K8s events
💬 Community and Support
- VictoriaMetrics Slack - #victorialogs channel
- VictoriaLogs GitHub Issues - Report bugs or request features