I've been playing around with setting up OpenSearch because I'm not really happy with how annoying it is to actually *search* my logs with Loki (fundamentally, I just want a Datadog- or OS-style dashboard), and… it's rough.
Setting up OpenSearch is difficult: the docs are definitely aimed at large clusters, with minimal support for standing up a simple instance. But then we get to Data Prepper and… yeesh. It works, but it feels very, very rough.
Docs *look* comprehensive but aren't. The `date` processor, for example, has some docs on the docs site, but the much better docs can only be found deep in GitHub. Config is very repetitive. Pipelines are *pull*-based and only poll every 3s by default, so a complex multi-stage pipeline setup will *massively* delay your logs.
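For context, that 3s poll comes from the per-pipeline `delay` setting (milliseconds between buffer reads, default 3000), and chained pipelines each add their own delay. A minimal sketch of a single pipeline that lowers it; the hostname and index name here are placeholders, not from any real setup:

```yaml
# pipelines.yaml (sketch; host and index are placeholders)
log-pipeline:
  workers: 2
  delay: 500          # ms between buffer reads; defaults to 3000 (the 3s poll above)
  source:
    http:             # accept log events over HTTP
  sink:
    - opensearch:
        hosts: ["https://opensearch-host:9200"]
        index: app-logs
```

With the default `delay`, every pipeline in a chain can add up to ~3s of latency, which is where the "massively delayed logs" come from in a complex setup.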
I'm not sure whether I want to persevere with OS, see if Elasticsearch (/Logstash/etc) goes better, or abandon the whole thing and go back to Loki, which is also painful, but maybe less so.
@alpha The whole “logging” arena feels incredibly over-complicated, with the choices basically being “tail a log file” or “enterprise multi-node cluster designed for *search* (and metrics and and and…) that just happens to be good at log dashboards”. I feel like there's something missing in the middle for small-to-medium-sized deployments.
Part of the problem is self-hosting a bunch of services, none of which share a log-file format, and none of which can easily be changed.
@ratkins @ipsi Somehow, our setup at $JOB-2 was surprisingly usable, despite it being a decade ago and consisting mostly of statsd + Graphite + ELK.
Maybe something about it being data that we mostly pushed, rather than automatically pulled, so we had much better context and awareness about what was in there.
@ipsi we (elastic) should have some things there to make your life easier. from agent / beats modules to UI things :)