We're fully caught up ingesting customer cluster logs, and customers should have no issues accessing these logs.
This incident is now resolved.
Posted May 30, 2019 - 01:06 UTC
We're continuing to monitor the logging cluster. Cluster logs may still be delayed; however, customers should no longer experience issues accessing those logs.
We've identified another internal cluster co-located with the logging cluster that had a much higher ingest rate and was consuming most of the available I/O. We've removed that load, and logging ingestion performance has improved. We've also made some smaller tweaks to our index templates for better shard distribution.
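For context, tweaks like these are typically applied via an Elasticsearch index template. The sketch below is illustrative only; the index pattern, shard counts, and allocation limit are assumptions, not our actual configuration.

```json
{
  "index_patterns": ["cluster-logs-*"],
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1,
    "index.routing.allocation.total_shards_per_node": 1
  }
}
```

Capping `total_shards_per_node` forces the shards of each index to spread across data nodes rather than piling up on one, which smooths out ingest I/O.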
We'll update this incident shortly when cluster logs are no longer delayed.
Posted May 30, 2019 - 00:44 UTC
We're still working with the Elasticsearch team to better tune our logging cluster, and we're scaling up the cluster based on their advice.
Customers may still be experiencing slow response times or availability issues when accessing cluster logs for us-east-1 deployments.
Posted May 29, 2019 - 23:45 UTC
We are experiencing performance issues with an internal Elasticsearch cluster that provides customer cluster logs for us-east-1 deployments. We're working to restore this cluster; however, customers will not be able to access cluster logs at this time.
All cluster logs will be ingested once logging cluster performance returns to normal. No customer clusters are impacted.