How does Datadog's per-host pricing compare to New Relic's ingest model in practice?
Datadog charges per host monitored, currently around $15 to $23 per host per month depending on the plan, which is predictable for stable infrastructure but expensive when host counts are high. New Relic charges based on data ingested in GB, which can be cheaper for high-host environments that produce relatively little telemetry data. Most buyers model both against their actual host count and average daily ingest volume before deciding.
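The break-even math is simple enough to script before talking to either sales team. A rough sketch, assuming illustrative rates (the $23/host figure above for the per-host model, and a placeholder per-GB rate for the ingest model; check current pricing pages before relying on either number):

```python
# Rough monthly cost model: per-host pricing vs. per-GB ingest pricing.
# All rates are illustrative placeholders, not quotes.

def per_host_cost(hosts: int, rate_per_host: float = 23.0) -> float:
    """Datadog-style pricing: cost scales with host count."""
    return hosts * rate_per_host

def ingest_cost(gb_per_day: float, rate_per_gb: float = 0.35, days: int = 30) -> float:
    """New Relic-style pricing: cost scales with data volume."""
    return gb_per_day * days * rate_per_gb

# Example: 300 hosts emitting ~400 GB/day of telemetry in total.
hosts, gb_per_day = 300, 400.0
print(f"per-host model: ${per_host_cost(hosts):,.2f}/mo")
print(f"ingest model:   ${ingest_cost(gb_per_day):,.2f}/mo")
```

Run it against your real host count and ingest volume; the crossover point moves a lot depending on how chatty your telemetry is per host.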
Which platforms support OpenTelemetry natively without requiring a proprietary agent?
Datadog, New Relic, Dynatrace, Grafana Cloud, and Splunk Observability Cloud all accept OTel traces, metrics, and logs. The practical difference is how complete that support is: some platforms accept OTel data but push you toward their own SDK for full feature access, such as session replay or custom dashboards. Before assuming parity, confirm that the specific features you need work through OTel alone.
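One way to keep the evaluation honest is to ship everything through a vendor-neutral OpenTelemetry Collector and swap only the exporter per platform. A minimal sketch of that shape; the endpoint and API-key header name are placeholders, since each vendor documents its own OTLP intake and auth scheme:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlphttp:
    endpoint: https://otlp.example-vendor.com   # placeholder: use your vendor's OTLP endpoint
    headers:
      api-key: ${env:VENDOR_API_KEY}            # placeholder: auth header name varies by vendor

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]
```

If a feature only lights up after you replace this pipeline with the vendor's proprietary agent, that's your answer about OTel parity.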
What's the default log retention period on base-tier plans, and where does it get gated?
Datadog's base plan includes 15 days of log retention; moving to 30 days requires a higher-tier contract. Splunk Observability Cloud similarly gates longer retention windows behind premium pricing. Grafana Cloud's free tier includes 30 days of log retention, which is one reason cost-conscious teams treat it seriously rather than dismissing it as a hobbyist option.
Does Dynatrace actually reduce alerting noise without manual threshold setup, or is that marketing?
Dynatrace's Davis AI engine does generate baseline-driven anomaly alerts without manual threshold configuration, and it's the feature buyers cite most often when choosing it over Datadog at the enterprise tier. That said, the quality of those alerts depends on how long the platform has observed normal behavior in your environment. Expect a settling-in period of one to two weeks before the signal-to-noise ratio becomes genuinely useful.
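Dynatrace doesn't publish Davis's internals, but the core idea of baseline-driven alerting is easy to illustrate: learn a mean and spread from observed history, then flag values that deviate by some multiple. A toy sketch of the concept, not Dynatrace's actual algorithm:

```python
from collections import deque
from statistics import mean, stdev

class BaselineDetector:
    """Flag values that stray beyond k standard deviations from a rolling baseline."""

    def __init__(self, window: int = 60, k: float = 3.0):
        self.history = deque(maxlen=window)
        self.k = k

    def observe(self, value: float) -> bool:
        """Return True if the value is anomalous relative to history seen so far."""
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging anything
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) > self.k * sigma
        self.history.append(value)
        return anomalous

detector = BaselineDetector()
for latency_ms in [100, 102, 98, 101, 99, 103, 97, 100, 102, 98, 500]:
    if detector.observe(latency_ms):
        print(f"anomaly: {latency_ms} ms")  # only the 500 ms spike fires
```

Note the guard requiring a minimum of history before any alert fires: that is the same reason a real baseline engine needs a settling-in period before its alerts mean anything.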
Can Grafana Cloud dashboards be shared with non-engineers without buying them a paid seat?
Yes. Grafana Cloud supports anonymous read-only access and public dashboard sharing without requiring a paid user seat for each viewer. Datadog does not: every user who logs in to view a dashboard counts against your user tier. For teams that routinely share observability dashboards with product, finance, or executive stakeholders, this is a meaningful cost difference at scale.
Which platforms offer sub-60-second metric resolution at their default plan tier?
Datadog and Dynatrace both support high-frequency polling, but check the plan tier carefully. Datadog's default resolution is 15 seconds for infrastructure metrics on paid plans, but some integrations default to 60-second intervals unless configured otherwise. Dynatrace collects at 1-second resolution for infrastructure and application metrics across its plans. Prometheus can be configured for any scrape interval, though very short intervals increase storage and compute costs.
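The storage trade-off mentioned for Prometheus is easy to quantify, since sample count scales linearly with scrape frequency. A back-of-envelope sketch, assuming ~1.5 bytes per sample after TSDB compression (a commonly cited ballpark, not a guarantee):

```python
def daily_samples(series: int, scrape_interval_s: int) -> int:
    """Samples written per day for a given active-series count and scrape interval."""
    return series * (86_400 // scrape_interval_s)

def daily_storage_gb(series: int, scrape_interval_s: int,
                     bytes_per_sample: float = 1.5) -> float:
    """Approximate on-disk growth per day, assuming ~1.5 bytes/sample compressed."""
    return daily_samples(series, scrape_interval_s) * bytes_per_sample / 1e9

# 500k active series at three candidate scrape intervals.
for interval in (60, 15, 1):
    gb = daily_storage_gb(series=500_000, scrape_interval_s=interval)
    print(f"{interval:>2}s interval: ~{gb:.1f} GB/day")
```

Dropping from 60-second to 1-second scrapes multiplies daily storage sixtyfold, which is why sub-minute resolution is usually applied selectively rather than fleet-wide.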
What's the most defensible choice for a Kubernetes environment running more than 200 services?
Datadog is the most common answer in the current data, specifically because its auto-discovery handles new pods and services without manual intervention at scale, and its service map generates automatically from distributed trace data. Dynatrace is a credible alternative if your priority is AI-driven root cause analysis rather than maximum integration breadth. Both cost significantly more than a self-managed Prometheus and Grafana setup, which remains viable if your team has the capacity to maintain it.
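For comparison, the "manual intervention" a self-managed stack requires is mostly one-time service-discovery configuration rather than per-service wiring: Prometheus can discover pods itself via `kubernetes_sd_configs`. A minimal sketch that scrapes any pod annotated `prometheus.io/scrape: "true"` (that annotation convention is a common community pattern, not a Prometheus built-in):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods that opt in via annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Carry namespace and pod name through as labels.
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```

The gap at 200-plus services is less about discovery and more about everything around it: capacity planning, federation or long-term storage, and keeping the relabeling rules maintained as conventions drift.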
Is Splunk worth evaluating if we're not already a Splunk log management customer?
Splunk Observability Cloud is worth a look specifically for full-fidelity distributed tracing, since it doesn't sample traces the way Datadog and New Relic do by default at high volume. If tracing completeness is a core requirement, that distinction justifies the evaluation. If your primary need is infrastructure monitoring or you're not running high-throughput microservices, the cost and complexity of Splunk are harder to justify without an existing Splunk footprint.
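The sampling distinction is worth seeing concretely. Head-based sampling decides per trace up front, so a fixed fraction of traces, including some failed requests, never reach the backend at all; full-fidelity ingest keeps everything and defers any reduction until query time. A toy sketch of head-based sampling, not any vendor's implementation:

```python
import random

def head_sample(trace_ids, rate: float = 0.1, seed: int = 42):
    """Keep roughly `rate` of traces, decided up-front per trace ID."""
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    return [t for t in trace_ids if rng.random() < rate]

traces = list(range(10_000))
kept = head_sample(traces)
print(f"kept {len(kept)} of {len(traces)} traces")  # roughly 10%
```

At a 10% rate, nine in ten traces are simply never stored, and the sampler has no idea which ones contained the error you'll want to debug next week. That blind spot is exactly what full-fidelity tracing eliminates.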