Gcore - Cloud | London-2 Incident Details

Cloud | London-2 Incident Details

Resolved
Major outage
Started about 1 month ago. Lasted about 5 hours.

Affected

• Cloud: Major outage from 9:12 AM to 11:46 AM; Operational from 11:46 AM to 1:51 PM

• Compute (London-2): Major outage from 9:12 AM to 11:46 AM; Operational from 11:46 AM to 1:51 PM

• Baremetal (London-2): Major outage from 9:12 AM to 11:46 AM; Operational from 11:46 AM to 1:51 PM

• Networking: Major outage from 9:12 AM to 11:46 AM; Operational from 11:46 AM to 1:51 PM

Updates
  • Postmortem

    Incident Report: Power Incident — London (NDLO) Region

    Date: 28 March 2026
    Region affected: London-2 (NDLO)
    Impact window: 08:51 – 11:41 UTC (2h 50m)
    Status: Resolved

    Issue

    On Saturday, 28 March 2026, Gcore's London-2 (NDLO) region experienced a service outage caused by a power incident at our data center facility in London. An upstream electrical fault at the facility led to a complete loss of power in the area hosting our infrastructure, resulting in unavailability of cloud compute, networking, storage, and bare metal services in the region for approximately 2 hours and 50 minutes. Full service was restored, followed by an extended period of reduced power redundancy that was fully resolved later the same day.

    We understand the impact this had on your operations and sincerely apologise for the disruption.

    What happened

    At 07:29 UTC, a fault developed in an upstream electrical component at the data center facility, which caused the primary power feed to our infrastructure area to trip. The facility's uninterruptible power supply (UPS) carried the load on battery for approximately 80 minutes, but the switchover to the alternate high-voltage feed took longer than the available UPS battery autonomy could sustain. At 08:49 UTC, the UPS batteries were exhausted and the area suffered a complete power loss.
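
    As a rough illustration of this failure mode, the sketch below compares the UPS autonomy to the gap the batteries actually had to bridge. The timestamps come from the timeline below; the 80-minute autonomy figure is approximate, and the code is illustrative only:

        from datetime import datetime, timedelta

        # Timestamps from the incident timeline (all UTC).
        feed_trip = datetime(2026, 3, 28, 7, 29)        # primary feed trips; UPS takes over
        power_restored = datetime(2026, 3, 28, 10, 32)  # alternate high-voltage supply online
        ups_autonomy = timedelta(minutes=80)            # approximate battery runtime under load

        gap = power_restored - feed_trip
        if gap > ups_autonomy:
            shortfall = gap - ups_autonomy
            print(f"Transfer took {gap}, UPS covers {ups_autonomy}: "
                  f"site is dark for {shortfall} once batteries are exhausted.")

    With these figures, the site is without power for 1 hour 43 minutes, matching the 08:49 – 10:32 blackout in the timeline.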

    At 10:32 UTC, the facility restored power via the alternate high-voltage supply, and our infrastructure began coming back online. Network connectivity was re-established first, followed by the cloud control plane and customer workloads. By 11:41 UTC, all customer-facing services were confirmed restored and placed under close monitoring.
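
    Staged recovery of this kind is typically confirmed with external reachability probes against each service tier. A minimal sketch follows; the hostnames here are placeholders, not actual Gcore endpoints:

        import socket

        # Placeholder endpoints standing in for per-tier health checks.
        CHECKS = [
            ("api.london-2.example.invalid", 443),   # cloud control plane / API
            ("edge.london-2.example.invalid", 443),  # network edge
        ]

        def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
            """Return True if a TCP connection to host:port succeeds within timeout."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        for host, port in CHECKS:
            print(f"{host}:{port} {'UP' if reachable(host, port) else 'DOWN'}")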

    For several hours after the main restoration, a subset of cabinets operated on a single power feed due to a faulty component in the power distribution path. All cabinets retained at least one working power source, so services remained available, but redundancy was reduced. At 19:08 UTC, the facility bypassed the faulty component and full redundant power was restored. UPS battery replacement was completed by 19:20 UTC.

    Timeline (all times UTC)

    Time             Event
    07:29            Upstream electrical fault; primary power feed trips; UPS takes over on battery.
    08:49            UPS battery autonomy exhausted; full loss of power to the affected area.
    08:51            Incident detected by Gcore; investigation begins.
    09:24            Facility vendor engaged; onsite response underway.
    10:32            Power restored via alternate high-voltage supply.
    10:43 – 11:41    Staged service recovery: network, cloud control plane, compute, bare metal.
    11:41            All customer-facing services restored and under monitoring.
    13:51            Status page incident closed.
    19:08            Full power redundancy restored across all cabinets.

    Impact

    • Services affected: cloud compute (VMs), bare metal, cloud networking, public IP connectivity, cloud storage, and the cloud API within the London-2 (NDLO) region.

    • Customer-visible downtime: approximately 2 hours 50 minutes (08:51 – 11:41 UTC).

    • Reduced power redundancy in a subset of cabinets from power restoration at 10:32 UTC until 19:08 UTC. Services remained available on a single feed during this window.

    • Other Gcore regions were not affected.

    Root cause

    The root cause was an upstream electrical fault at the data center facility, combined with the facility's UPS battery autonomy being insufficient to bridge the time required to transfer the load to the alternate high-voltage supply. A secondary component fault in the power distribution path prolonged the period of reduced redundancy after the initial restoration.

    What we are doing

    In close coordination with our data center partner, we have initiated the following actions:

    With the facility provider

    • Requested a full, formal root cause analysis covering the electrical fault, UPS autonomy versus design target, and the secondary component failure.

    • Tracking the facility's remediation plan, including UPS battery replacement (completed 28 March), resilience testing, and preventative maintenance.

    • Reviewing contractual service-level commitments.

    Our commitment

    We are treating this incident with the highest priority. Power resilience at the data center level is foundational to our service, and the gap exposed by this event — where UPS autonomy did not cover the time required for an alternate-feed transfer — is unacceptable to us. We are working with our facility partner to ensure this specific failure mode cannot recur, and we are independently strengthening our own architecture so that future events of this kind have less impact on our customers.

    If you have questions about how this incident affected your specific workloads, or if you would like to discuss compensation under your service agreement, please contact Gcore Support.

    Thank you for your patience and your continued trust in Gcore.

  • Resolved

    We are happy to inform you that the power-related incident in the data centre has been fully resolved, and all services have been successfully restored. We will provide a detailed Root Cause Analysis (RCA) once it becomes available.

    If you continue to experience any issues, please do not hesitate to contact our support team. Our team will be happy to assist you and ensure that any further concerns are addressed promptly.

    We appreciate your patience and understanding throughout this incident, and we thank you for your cooperation.

    For further assistance, please contact our support team via support@gcore.com

  • Monitoring

    All services have now been restored. A small number of bare metal servers are still in the process of recovery, and our teams continue to monitor the situation closely.

    Thank you for your patience and understanding throughout this incident.

  • Update

    We have received a further update from the data centre confirming that the power restoration process has begun. Some services, such as the API and network, are back up and running. Services are being brought back gradually, and onsite teams are working to fully restore operations as quickly and safely as possible.

    We understand the impact of this incident and greatly appreciate your patience. We will continue to share updates as more information becomes available and once full service is restored.

  • Update

    We have received an update from the data centre confirming a power outage affecting the site. On-site engineers are actively working to restore power as quickly and safely as possible. We will continue to share updates as more information becomes available.

  • Identified

    On-site engineers are actively investigating a power issue in the data centre. We are working on resolving it and will provide updates shortly.

  • Investigating

    We are currently experiencing a major network outage affecting Cloud services in the London-2 region, resulting in complete unavailability of the service. We sincerely apologize for any inconvenience this may cause and greatly appreciate your patience and understanding during this critical time.

    Our engineering team is actively working to identify the root cause and implement a resolution as quickly as possible. We will provide regular updates as we receive more information on the progress of the resolution.

    Thank you for your understanding and cooperation.