We are continuing to experience elevated error rates for data ingestion and increased computation latency for IoT SiteWise auto-computed aggregates, transforms, and metrics in the US-EAST-1 Region. This is also impacting IoT Events and IoT Analytics. We continue to work towards resolution.
We continue to work towards recovery of the issue affecting Cognito User Pools and Identity Pools operations in the US-EAST-1 Region, and we are seeing improvement in the error rates experienced by customers.
We have identified the root cause of increased error rates and delays for job state transitions and compute environment scaling in the US-EAST-1 Region and continue to work toward resolution.
We are working towards recovery for delays in reporting updates on the propagation of invalidations and CloudFront configuration changes. Customer changes are propagating normally across our edge locations, but the associated reporting is not being updated. CloudFront Access Logs, Metrics, and Reporting may continue to be affected. End-user requests for content from our edge locations are not affected by this issue and are being served normally.
We continue to work towards recovery of the issue affecting Cognito User Pools and Identity Pools operations in the US-EAST-1 Region, and we are seeing further improvement in the error rates experienced by customers.
We are no longer seeing compute environment scaling delays, but we are still experiencing elevated job state transition times in the US-EAST-1 Region and continue to work toward resolution.
We have identified the root cause of elevated API errors and latencies in US-EAST-1. While we are seeing partial recovery, we continue to see elevated error rates and latency on PutEvents API calls. Additionally, delivery of events from other AWS services to subscribed rules continues to be delayed as we work toward full resolution.
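While PutEvents error rates remain elevated, callers can reduce the impact of transient failures by retrying with exponential backoff and jitter. A minimal, illustrative sketch follows; the `send` callable, function name, and parameters here are assumptions for demonstration and stand in for a real SDK call (e.g. boto3's `client.put_events`), not part of this notice:

```python
import random
import time


def put_events_with_retry(send, entries, max_attempts=5, base_delay=0.2):
    """Call send(entries), retrying transient failures.

    `send` stands in for a PutEvents-style call; retries use
    exponential backoff (base_delay * 2^attempt) plus small jitter.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return send(entries)
        except Exception:
            if attempt == max_attempts:
                # Out of attempts: surface the last error to the caller.
                raise
            # Backoff: 0.2s, 0.4s, 0.8s, ... plus up to 100ms of jitter
            # to avoid synchronized retry storms across clients.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```

With boto3, `send` could be `lambda e: client.put_events(Entries=e)`; the wrapper itself is SDK-agnostic.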
We continue to work towards recovery of the issue affecting Amazon EKS in the US-EAST-1 Region. Customer applications backed by already-running pods are not impacted, and applications and pods can still be started and run on EC2 instances that are already part of the cluster. Customers are experiencing errors when creating, upgrading, and deleting EKS clusters and managed node groups. Customers will also experience errors launching new Fargate pods; running Fargate pods are not impacted.
We continue working towards recovery of the issue impacting Amazon ECS in the US-EAST-1 Region. Currently running ECS tasks are not impacted for either the EC2 or Fargate launch type. Customers are experiencing elevated API error rates and delays in the delivery of task events and metrics in the US-EAST-1 Region. ECS clusters are also not able to scale up or down due to task launch errors, and customers are missing metrics and events from their running tasks because ECS Insights is not able to propagate information. Task Sets and Capacity Providers are also impacted. Customers using ECS on Fargate are not able to launch new tasks; running Fargate tasks are not impacted.