2020 November 26

☁️ AWStatus in AWS_RU
Service: Amazon CloudFront
Region: 🌏 Global
URL: http://status.aws.amazon.com/#cloudfront_1606349807

We are still observing partial recovery on Access Logs, Metrics and Reports but intermittent gaps and delays exist. We continue to work toward full resolution. End-user requests for content from our Edge Locations were not affected by this issue and continue to operate normally.

#cloudfront
source

☁️ AWStatus in AWS_RU
Service: Amazon WorkSpaces
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#workspaces-us-east-1_1606350252

We continue to work towards recovery of the issue affecting provisioning of Amazon WorkSpaces in the US-EAST-1 Region. We recommend customers do not perform modify, migrate, reboot, rebuild and/or restore operations on existing WorkSpaces at this time.

#workspaces #useast1
source
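
The advisory above asks customers to hold off on modify, migrate, reboot, rebuild, and restore operations. A minimal sketch of taking stock before touching anything, assuming boto3 with credentials for us-east-1; it only lists WorkSpaces and their current state and changes nothing:

```python
# List WorkSpaces and their current state so you can decide what to defer.
# Assumes boto3 is installed and credentials for us-east-1 are configured.
import boto3

ws = boto3.client("workspaces", region_name="us-east-1")

for page in ws.get_paginator("describe_workspaces").paginate():
    for w in page["Workspaces"]:
        # During the incident, deferring changes to anything not AVAILABLE
        # is the conservative choice; this script only reports.
        print(w["WorkspaceId"], w["State"])
```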

☁️ AWStatus in AWS_RU
Service: Amazon Elastic Container Service
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#ecs-us-east-1_1606350262

We continue working towards recovery for the issue impacting Amazon ECS in the US-EAST-1 region. Currently running ECS tasks are not impacted for either the EC2 or Fargate launch types. ECS on Fargate task launches are continuing to see recovery. Delivery of task events and metrics is starting to catch up, and API error rates are declining.

#ecs #useast1
source

☁️ AWStatus in AWS_RU
Service: Amazon Elastic Kubernetes Service
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#eks-us-east-1_1606350553

We continue to work towards recovery of the issue affecting Amazon EKS in the US-EAST-1 Region. EKS Fargate pod launches have now recovered and are operating normally. Customer applications that are backed by pods already running are not impacted. Applications and pods can be started and run on EC2 instances that are already part of the cluster. Customers can also create Managed Node groups for existing clusters. EKS customers will still experience errors when creating, upgrading and deleting EKS clusters and managed node groups.

#eks #useast1
source
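
The EKS update above says cluster and managed node group create/upgrade/delete calls may error while running pods are unaffected. A hedged sketch, assuming boto3 and a placeholder cluster named "my-cluster", for checking control-plane and node group status before retrying such operations:

```python
# Check EKS cluster and managed node group status before retrying
# create/upgrade/delete operations that may be failing during the incident.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

cluster = eks.describe_cluster(name="my-cluster")["cluster"]
print("cluster:", cluster["name"], cluster["status"])  # e.g. ACTIVE, UPDATING, FAILED

for ng_name in eks.list_nodegroups(clusterName="my-cluster")["nodegroups"]:
    ng = eks.describe_nodegroup(clusterName="my-cluster",
                                nodegroupName=ng_name)["nodegroup"]
    print("nodegroup:", ng_name, ng["status"])
```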

☁️ AWStatus in AWS_RU
Service: Amazon Kinesis Data Streams
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#kinesis-us-east-1_1606351327

Kinesis Data Streams API operations are seeing gradual recovery but customers may continue to experience increased latencies and failure rates. We continue to actively work towards resolution.

#kinesis #useast1
source
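
With Kinesis API calls still showing elevated latencies and failure rates, producers typically wrap writes in retries with exponential backoff and jitter. A minimal sketch, assuming boto3 and a placeholder stream "my-stream":

```python
# Retry put_record with exponential backoff and full jitter for periods of
# elevated Kinesis error rates. "my-stream" is a placeholder stream name.
import random
import time

import boto3
from botocore.exceptions import ClientError

kinesis = boto3.client("kinesis", region_name="us-east-1")

def put_with_backoff(data: bytes, partition_key: str, max_attempts: int = 6):
    for attempt in range(max_attempts):
        try:
            return kinesis.put_record(StreamName="my-stream",
                                      Data=data,
                                      PartitionKey=partition_key)
        except ClientError:
            if attempt == max_attempts - 1:
                raise
            # Back off exponentially, capped at ~30 s, with full jitter.
            time.sleep(min(30, 2 ** attempt) * random.random())

put_with_backoff(b'{"event": "example"}', "pk-1")
```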

Alexander Shinkarenk... in AWS_RU
Folk art for the occasion
https://twitter.com/0xdabbad00/status/1331625911834140672
source

☁️ AWStatus in AWS_RU
Service: AWS IoT SiteWise
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#iotsitewise-us-east-1_1606351854

We are beginning to see recovery for data ingestion. Access to existing data, transforms, and metrics is unaffected. We are continuing to experience increased computation latency for IoT SiteWise auto-computed aggregates, transforms and metrics in the US-EAST-1 Region. This is also impacting IoT Events and IoT Analytics. We continue to work toward full resolution.

#iotsitewise #useast1
source

☁️ AWStatus in AWS_RU
Service: AWS IoT Core
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#awsiot-us-east-1_1606351861

We continue to experience increased latency and failure rates for Connect and Subscribe operations in the US-EAST-1 region. We expect to begin seeing recovery once the on-going Kinesis issue is fully resolved. Messaging on existing connections and Shadow operations continue to operate normally.

#awsiot #useast1
source
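
Since messaging on existing connections keeps working while Connect and Subscribe see elevated failure rates, the usual client-side mitigation is to retry the initial connect with backoff and re-subscribe on every reconnect. A sketch using the generic paho-mqtt client rather than the AWS IoT Device SDK; the endpoint, certificate paths, and topic are placeholders:

```python
# Retry the initial MQTT connect with backoff and re-subscribe on every
# (re)connect. Endpoint, cert paths, and topic below are placeholders.
import time

import paho.mqtt.client as mqtt

ENDPOINT = "xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com"  # placeholder

def on_connect(client, userdata, flags, rc):
    # Re-subscribe on every successful (re)connect so a dropped session
    # does not silently lose its subscriptions.
    client.subscribe("devices/+/telemetry", qos=1)

client = mqtt.Client(client_id="example-device")
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="private.pem.key")
client.on_connect = on_connect
client.reconnect_delay_set(min_delay=1, max_delay=60)  # reconnect backoff

# The first connect may fail outright during the incident, so retry it too.
for attempt in range(8):
    try:
        client.connect(ENDPOINT, port=8883, keepalive=60)
        break
    except Exception:
        time.sleep(min(60, 2 ** attempt))

client.loop_forever()  # paho handles reconnects after the first connect
```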

☁️ AWStatus in AWS_RU
Service: Amazon CloudWatch
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#cloudwatch-us-east-1_1606354262

We continue to experience elevated API error rates and delayed CloudWatch metrics. We expect to begin seeing recovery once the on-going Kinesis issue is fully resolved. Once we have observed recovery, metrics will backfill if they were queued during the impact.

#cloudwatch #useast1
source
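
The note that queued metrics will backfill means the impact window is worth re-querying after recovery. A sketch, assuming boto3; the namespace, metric, dimension (AWS/Kinesis IncomingRecords for a stream called "my-stream"), and time window are illustrative:

```python
# Re-query the impact window after recovery to confirm datapoints have
# backfilled. Metric, dimension, and time range are illustrative.
from datetime import datetime, timezone

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

resp = cw.get_metric_data(
    MetricDataQueries=[{
        "Id": "m1",
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/Kinesis",
                "MetricName": "IncomingRecords",
                "Dimensions": [{"Name": "StreamName", "Value": "my-stream"}],
            },
            "Period": 300,
            "Stat": "Sum",
        },
    }],
    StartTime=datetime(2020, 11, 25, 9, 0, tzinfo=timezone.utc),
    EndTime=datetime(2020, 11, 26, 9, 0, tzinfo=timezone.utc),
)

# Missing timestamps inside the impact window indicate datapoints that have
# not yet backfilled.
result = resp["MetricDataResults"][0]
for ts, value in zip(result["Timestamps"], result["Values"]):
    print(ts, value)
```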

☁️ AWStatus in AWS_RU
Service: Amazon Elastic Container Service
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#ecs-us-east-1_1606354617

We continue working towards recovery for the issue impacting Amazon ECS in the US-EAST-1 region. Currently running ECS tasks are not impacted for either the EC2 or Fargate launch types. ECS on Fargate task launches are continuing to see recovery with a small number of task launches failing. Task event delivery is fully recovered. There continues to be higher than normal latencies for metrics due to continued CloudWatch impact.

#ecs #useast1
source
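
Because a small number of Fargate task launches are still failing, it helps to check the failures list that RunTask returns rather than relying on delayed events and metrics. A sketch, assuming boto3; the cluster, task definition, and subnet ID are placeholders:

```python
# Launch a Fargate task and inspect run_task's "failures" list, retrying a
# few times if the launch is rejected. Cluster/task/subnet are placeholders.
import time

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

def run_task_with_retry(max_attempts: int = 3):
    for attempt in range(max_attempts):
        resp = ecs.run_task(
            cluster="my-cluster",
            taskDefinition="my-task:1",
            launchType="FARGATE",
            networkConfiguration={
                "awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]}
            },
        )
        if resp["tasks"]:
            return resp["tasks"][0]
        # run_task reports per-task launch problems in "failures" rather than
        # raising; log the reason and back off before retrying.
        print("launch failed:", resp["failures"])
        time.sleep(5 * (attempt + 1))
    raise RuntimeError("task launch did not succeed")

task = run_task_with_retry()
print(task["taskArn"], task["lastStatus"])
```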

☁️ AWStatus in AWS_RU
Service: Amazon EventBridge
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#events-us-east-1_1606356266

Newly published events are no longer experiencing elevated delivery latencies; however, we continue to see delays in Rules subscribing to events from other AWS Services that were delivered earlier in the day and are working towards full resolution.

#events #useast1
source

☁️ AWStatus in AWS_RU
Service: Amazon CloudFront
Region: 🌏 Global
URL: http://status.aws.amazon.com/#cloudfront_1606356363

We continue to work toward full resolution, and expect full recovery once the on-going Kinesis issue is resolved. Upon further recovery, we expect Access Logs, Metrics and Reports to fully recover and start backfilling over time for those queued during the impact.

#cloudfront
source

☁️ AWStatus in AWS_RU
Service: Amazon Elastic Kubernetes Service
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#eks-us-east-1_1606356702

We continue to observe partial recovery for EKS cluster and node group API operations in the US-EAST-1 Region and are working toward full resolution. Customers may still experience errors when creating, upgrading and deleting EKS clusters and managed node groups.

#eks #useast1
source

☁️ AWStatus in AWS_RU
Service: AWS Batch
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#batch-us-east-1_1606356944

We continue to see recovery for elevated job state transition times in the US-EAST-1 Region, and are working toward full resolution. Customers can expect jobs to work correctly, but may still see issues with Multi-node Parallel workloads.

#batch #useast1
source
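
With job state transitions running slower than usual, polling job status directly is more reliable than assuming normal timing, especially for multi-node parallel jobs. A sketch, assuming boto3 and a placeholder job ID:

```python
# Poll a Batch job's status instead of assuming normal transition times.
# The job ID below is a placeholder.
import time

import boto3

batch = boto3.client("batch", region_name="us-east-1")

def wait_for_job(job_id: str, poll_seconds: int = 30):
    while True:
        job = batch.describe_jobs(jobs=[job_id])["jobs"][0]
        print(job["jobId"], job["status"])
        if job["status"] in ("SUCCEEDED", "FAILED"):
            return job
        # Transition times are elevated, so a long RUNNABLE/STARTING phase
        # is not by itself a failure; keep polling.
        time.sleep(poll_seconds)

wait_for_job("00000000-0000-0000-0000-000000000000")
```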

☁️ AWStatus in AWS_RU
Service: Amazon Kinesis Data Streams
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#kinesis-us-east-1_1606357974

We have now fully mitigated the impact to the subsystem within Kinesis that is responsible for the processing of incoming requests and are no longer seeing increased error rates or latencies. However, we are not yet taking the full traffic load and are working to relax request throttles on the service. Over the next few hours we expect to relax these throttles to previous levels. We expect customers to begin seeing recovery as these throttles are relaxed over this timeframe.

#kinesis #useast1
source

☁️ AWStatus in AWS_RU
Service: Amazon Elastic Container Service
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#ecs-us-east-1_1606360578

We continue working towards recovery for the issue impacting Amazon ECS in the US-EAST-1 region. Currently running ECS tasks are not impacted for either the EC2 or Fargate launch types. ECS on Fargate task launches are continuing to see recovery with a very small number of task launches failing. Task event delivery is fully recovered. ECS is investigating slow deprovisioning of tasks. There continues to be higher than normal latencies for metrics while we wait for CloudWatch recovery.

#ecs #useast1
source

☁️ AWStatus in AWS_RU
Service: AWS IoT Core
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#awsiot-us-east-1_1606360775

We continue to experience increased latency and failure rates for Connect and Subscribe operations in the US-EAST-1 region. We expect to see recovery once the on-going Kinesis issue is fully resolved. Messaging on existing connections and Shadow operations continue to operate normally.

#awsiot #useast1
source

☁️ AWStatus in AWS_RU
Service: AWS Batch
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#batch-us-east-1_1606360981

We continue to see recovery for elevated job state transition times in the US-EAST-1 Region, and are working toward full resolution. Customers can expect most jobs to work correctly, but may still see Multi-node Parallel job delays.

#batch #useast1
source

☁️ AWStatus in AWS_RU
Service: AWS IoT SiteWise
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#iotsitewise-us-east-1_1606361026

We have seen recovery for data ingestion. Access to existing data, transforms, and metrics is unaffected. We are continuing to experience increased computation latency for IoT SiteWise auto-computed aggregates, transforms and metrics in the US-EAST-1 Region. This is also impacting IoT Events and IoT Analytics. We continue to work toward full resolution.

#iotsitewise #useast1
source

☁️ AWStatus in AWS_RU
Service: Amazon EventBridge
Region: 🇺🇸US East (N. Virginia)
URL: http://status.aws.amazon.com/#events-us-east-1_1606361628

Newly published events are no longer experiencing elevated delivery latencies. We continue to see delays in Rules subscribing to scheduled events that were delivered earlier in the day, and we are working through the backlog of these events over the next few hours.

#events #useast1
source