Subject to Zayo Maintenance Ticket #: TTN-0007867852
Zayo will implement planned maintenance for network hardening. All work is expected to be completed in one night; the longer window is to accommodate the workload. The impact is noted as Hard Down for the duration of the maintenance window. However, VIRTBIZ routing will utilize alternate paths during Zayo's outage.
06-Sep-2024 21:00 to 07-Sep-2024 07:00 ( Central )
07-Sep-2024 02:00 to 07-Sep-2024 12:00 ( GMT )
Subject to Zayo Maintenance Ticket #: TTN-0007970700
Zayo is performing routine fiber splicing in an embargo MED splice case. NO impact is expected to your services. This notification is to advise you that we will be entering a splice case that houses live traffic. Routine fiber splicing has NO intended impact. Please see below for clarification of the classifications:
Embargo LOW = Low risk, non-service-affecting
Embargo MED = Possible risk, no intended impact
Embargo HIGH = High risk, notify all as hard down as a precaution
1st Activity Date
10-Sep-2024 00:01 to 10-Sep-2024 05:00 ( Central )
10-Sep-2024 05:01 to 10-Sep-2024 10:00 ( GMT )
2nd Activity Date
11-Sep-2024 00:01 to 11-Sep-2024 05:00 ( Central )
11-Sep-2024 05:01 to 11-Sep-2024 10:00 ( GMT )
Affecting Server - Ave02
Customers with hosted cloud/VPS servers on the Ave02 node can expect a few minutes of downtime on Friday, August 23, sometime between 6:00 and 6:30am CDT. This maintenance window is to perform system updates that require a platform restart.
Technicians are currently investigating an issue that has impaired Internet traffic for some hosted customers. We have worked around the issue and are continuing to investigate the root cause. Further details will be published as they become available.
Affecting Other - Zayo carrier
We have been notified by Zayo that they will be performing maintenance on their network that will cause an outage in their service. We do not expect a major or ongoing impairment to VIRTBIZ services. According to Zayo, "Zayo will implement planned maintenance audit and rebuild of an embargoed splice case." They expect the impact to be a complete "hard down" status of their circuit for the duration of the maintenance event.
31-May-2024 21:00 to 01-Jun-2024 06:00 ( Central )
01-Jun-2024 02:00 to 01-Jun-2024 11:00 ( GMT )
Affecting Other - Upstream Carrier Network
Some users reported trouble reaching hosted assets around 01:00 CST on 04/46/2024. A route advertisement was inadvertently withdrawn from a carrier peer, resulting in unreachable status from certain locations. The issue was corrected and all systems are normal at this time. We are working with vendor support to prevent further incidents, and a follow-up maintenance window may be necessary.
A catastrophic event in our area caused a power failure that impacted our facility. In addition, vandalism to our backup power infrastructure meant that although the generator powered on, power was not transmitted to the facility. Crews were dispatched immediately, but UPS power expired roughly 4 minutes before generator power could be restored. At this time we are aware that some services are still down and must be brought online manually. We're working on full restoration as quickly as we can. Thank you for your patience.
We are aware of a routing issue that occurred Christmas Morning 12/25/2023, beginning 5:18AM CST. The issue was traced to a license server fault with our VSR (Virtual Software Router) vendor which resulted in license files becoming invalid. Technicians manually connected VSRs to the Internet via static routing in order to patch the systems for license file updates. After that, normal routing was able to be restored. Further updates will be available as developments occur.
Affecting Server - Rodan
Our technicians are aware of a database issue on the Rodan.virtbiz.com cPanel server. This has been escalated to cPanel product support as it appears to be a malfunction within the software. We are actively working to address the issue and restore full functionality ASAP. We thank you for your patience.
Affecting Other - Upstream Carrier Network
Spectrum Enterprise will be conducting network maintenance from 11/17/2023 12:00AM CST until 11/17/2023 06:00AM CST. The purpose of this maintenance is to perform an OS upgrade on connected circuit hardware.
During this maintenance window, you will experience one or more brief interruptions in service while we complete the maintenance activities; the interruptions are expected to last less than 15 minutes total (each circuit is migrated separately). However, due to the complexity of the work, your downtime may be longer. Customers may also see some reconvergence during the maintenance window.
Our network operations engineers closely monitor the work and will do everything possible to minimize any inconvenience to you.
Affecting Other - Upstream Carrier Network
Cogent will be performing network maintenance from 12:01 AM CT 11/10/2023 to 04:00 AM CT 11/10/2023. The purpose of this work is circuit migration to a more robust device.
During this maintenance window, you will experience one or more brief interruptions in service while we complete the maintenance activities; the interruptions are expected to last less than 15 minutes total (each circuit is migrated separately). However, due to the complexity of the work, your downtime may be longer. Customers may also see some reconvergence during the maintenance window.
Our network operations engineers closely monitor the work and will do everything possible to minimize any inconvenience to you.
VIRTBIZ Internet Services is passing along the following information regarding one of our service providers:
Cogent is committed to keeping you, as a valued customer, informed about any changes in the status of your service with us. This email is to alert you regarding maintenance our local access provider will be performing on their network:
Start Date & Time: 07/13/2023 12:00AM CDT
End Date & Time: 07/13/2023 06:00AM CDT
Time Zone: Central
Outage Duration: 120 minutes
Reason: Circuit Migrations
This maintenance is non-service-impacting for VIRTBIZ customers, as traffic will be routed across other peers. However, some remote networks with slower peering updates may see brief interruptions in service. This is due to the nature of Internet routing and is not indicative of a deficiency with the VIRTBIZ network.
Affecting System - VPS and Hosted Firewall
In accordance with an announced maintenance window, certain VPS and hosted firewall services are temporarily offline due to network restructuring. Engineers are actively working on this service and will restore connectivity as quickly as possible. Thank you for your patience.
Affecting Server - Ave03
Customers hosted on Aventurin{e}-based VPS servers will experience periodic outages on Saturday, March 19 2022 - Sunday, March 20 2022 due to system migrations to enhanced platforms at our new datacenter. We appreciate your patience as we complete this work. Note that the migrations are an automated process and systems are queued in turn for conversion. The process will automatically restore service as soon as it is completed, without the need for intervention.
Affecting Server - Ave01
Technicians are addressing a problem with the AVE1 virtualization node. The VLSA storage system has been impacted and is causing virtualized containers to remain offline. We are sorry for the impact this is causing our customers and understand that restoration of service is a top priority. We are actively working on this issue and appreciate your patience. Please note that this only applies to the AVE1 system.
INCIDENT RFO AND SUMMARY REPORT:
On October 7, 2021 at 01:26 AM (US/Central) the DAL1 network experienced an issue which impacted a significant portion of IPv4 transit to the facility. IPv6 traffic was not impacted.
Our technicians quickly diagnosed that the issue did not originate within our facility and began the escalation process to management and our carriers. The Zayo upstream fiber connection lost light path for a brief period and then came back, but would not pass IP traffic. Meanwhile, although IPv6 traffic was routing normally across our Lumen connection, IPv4 traffic was not. It was clear that there would be two external issues to resolve simultaneously.
Working with our colleagues at Lumen was hampered somewhat by less than optimal response times. Our management and engineering team worked on a conference call for nearly 2 hours, pushing through 3 levels of escalation with Lumen until we were ultimately able to work with an IP engineer on duty. This engineer confirmed our suspicion that Lumen was not accepting our BGP route advertisements. The precise cause of this is unknown but suspected to be the byproduct of a recent configuration update to Lumen’s router network. Our advertised prefixes are published in IRR registries, and those route objects are imported automatically into upstream carriers, usually on a nightly basis. For reasons that Lumen has not been able to identify, their system ceased importing these objects and therefore would not pass our advertised routes through BGP as they were unable to be validated by their routing network. We asked Lumen to manually push an update to their router. However, as it was in the process of a nightly cycle this took an unusually long time in their queue, and was further hampered by delayed propagation into their production routing network. Lumen service was restored at 03:23 AM (US/Central) and traffic began to pass normally at that time.
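(Technical background: a route object registered in an IRR database such as RADB ties an advertised prefix to its origin ASN, and carriers typically build their inbound prefix filters from these objects. The sketch below is purely illustrative and is not VIRTBIZ tooling; it uses a placeholder ASN and the public RADB whois server to list the route objects registered for an origin AS.)

    # Illustrative sketch only: list route/route6 objects an IRR has on file
    # for a given origin ASN, using the standard whois protocol on port 43.
    # The ASN below is a documentation placeholder, not VIRTBIZ's ASN.
    import socket

    IRR_HOST = "whois.radb.net"   # public IRR mirror (assumed reachable)
    ORIGIN_AS = "AS64496"         # placeholder ASN for illustration only

    def irr_route_objects(origin_as, host=IRR_HOST, port=43, timeout=10):
        """Return the prefixes of route/route6 objects whose origin matches origin_as."""
        query = f"-i origin {origin_as}\r\n".encode()
        chunks = []
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(query)
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        text = b"".join(chunks).decode(errors="replace")
        return [line.split(":", 1)[1].strip()
                for line in text.splitlines()
                if line.startswith(("route:", "route6:"))]

    if __name__ == "__main__":
        for prefix in irr_route_objects(ORIGIN_AS):
            print(prefix)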
Working with Zayo, we were able to confirm good fiber connectivity (light levels were adequate on both sides). Their NCC advised of no known issues or planned/unplanned maintenance on the circuit. The BGP configuration was healthy, but no IP traffic was passing. At our behest, the NCC engaged a local field engineer to check the physical port. When we asked how long it would take to dispatch, the NCC advised the field engineer had just left that POP at 2323 Bryan Street (our Z-point) and was turning around to return.
Our internal working theory was that someone at the 2323 Bryan end had made a physical port change of some sort, possibly connecting to the wrong port at their router and that assumption influenced our input and requests to the Zayo NCC. A short time later, the connection went down again (loss of light) and came back up. IP connectivity was immediately restored and BGP synchronized right away. Zayo service was restored at 04:16 AM (US/Central), returning the network to full redundancy.
As is ever the case with situations like this, the cause cannot be attributed to a single issue. Multiple issues are typically compounded into an aggregate end result. This is especially true in this case where we do have redundancy and protections in place, but those were compromised through a bizarre combination of external forces.
Lumen’s dropping of legitimately advertised BGP routes is troubling and we are aggressively pursuing an acceptable understanding of why this occurred and what can be done to prevent the issue from happening again. In the meantime, we have configured preliminary external BGP route monitoring to augment our own internal monitoring so that we can verify third-party carriers are processing routes correctly. Equally concerning is the amount of time that was required to reach an IP engineer qualified to confirm our findings and take appropriate actions to restore service. We have requested further dialog with Lumen management on this.
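(For illustration only, and not the specific monitoring we deployed: a public route collector such as RIPE RIS can be polled to confirm how widely a prefix is currently visible from outside our own network. The sketch below assumes the RIPEstat "routing-status" API endpoint and uses a documentation prefix; the response fields are read defensively since the exact format may vary.)

    # Illustrative sketch of external route-visibility checking via the public
    # RIPEstat Data API. The prefix is RFC 5737 documentation space, not a
    # VIRTBIZ prefix; field names follow the RIPEstat docs and may change.
    import json
    import urllib.request

    PREFIX = "198.51.100.0/24"  # placeholder prefix for illustration
    URL = f"https://stat.ripe.net/data/routing-status/data.json?resource={PREFIX}"

    def route_visibility(url=URL, timeout=15):
        """Fetch routing status for a prefix and summarize IPv4 peer visibility."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.load(resp)
        vis = payload.get("data", {}).get("visibility", {}).get("v4", {})
        return vis.get("ripe_peers_seeing"), vis.get("total_ripe_peers")

    if __name__ == "__main__":
        seen, total = route_visibility()
        if seen is None or total is None:
            print("Could not read visibility fields; inspect the raw response.")
        else:
            print(f"{PREFIX} seen by {seen} of {total} RIPE RIS peers")
            if seen < total * 0.8:   # arbitrary alert threshold for this sketch
                print("WARNING: prefix visibility looks degraded")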
Zayo’s response stating “Our Field Engineer replaced the fiber jumper between our equipment and the OSP panel in order to restore the service” leaves something to be desired. We remain uncertain that the issue was with a “bad” jumper, since we confirmed adequate light levels at the optical transceivers on both ends of the circuit. Given there were no other factors, we feel our working theory that the fiber may have been accidentally removed and then re-installed into the wrong port has merit.
We appreciate your patience as our team worked through the night and early morning hours to correct issues, document each case and put preliminary plans into action. In all cases, our goal is to focus on strengthening the network and the ability to effectively manage third-party partners whenever necessary rather than to place blame. To that end we will continue to use the information gathered from this event to add to internal systems where needed as well as establish or improve communication with third party providers who are vital to ongoing stable operations.
Thank you,
Chris Gebhardt, President
VIRTBIZ Internet Services
INITIAL REPORT:
The DAL1 network experienced an issue which impacted IPv4 transit to the facility. IPv6 traffic was not impacted. Our service from Zayo was interrupted due to an as-yet undiagnosed issue at their POP at 2323 Bryan Street. Our network automatically healed to Lumen, but a problem with that carrier's BGP filters prevented hosted prefixes from being advertised. Unfortunately, they were not able to quickly resolve the problem due to staffing issues, and their routers were then in the middle of an update that could not be interrupted in order to push the fix through their system. Our engineers were dispatched immediately and remained on conference call with relevant vendors throughout and following the event. A formal RFO may be composed once full research has been performed.
Affecting Server - Rodan
The webhosting node "RODAN.virtbiz.com" is being upgraded to the latest platform. This action is being taken in order to ensure ongoing platform stability as well as to offer the latest available features. Some services (website, email) may be intermittently unavailable during this time. Thank you for your understanding as we perform this important upgrade for you.
Affecting Other - DAL1 Datacenter
Update: 05/15/2021 10:49
Work has been completed on the below referenced Emergency Maintenance Window. All systems are normal at this time and no further interruptions are expected. We thank our customers for your patience.
Update: 05/14/2021 11:49
Preparations have been made to decommission the UPS responsible for Monday's power failure that impacted multiple customers at our DAL1 facility, and to transition affected systems to replacement power facilities.
EMERGENCY MAINTENANCE WINDOW
impacting customers with service from Panels L5, L8, L14 and RTR2
Saturday, May 15, 2021
7:30am - 12:00 Noon
If your service is expected to be impacted by this work, you have already received email notification. If you did not receive an email notification, your service is not believed to be impacted by this maintenance window.
All preparations have already been made, construction is complete and systems have been pre-wired as far as practical. The work performed on Saturday will be confined to physically changing the high-voltage connections to service distribution panels. Through extensive planning, careful engineering and workshopping with our technical staff, we are confident that the actual work will go very quickly. Therefore, we consider the maintenance window timeframe to be conservative, and we hope to complete our work much earlier. In any event, we anticipate the actual outage to any customer equipment should be 15 minutes or less.
Since we will be working with high-voltage electricity, we will be restricting entry to the facility throughout the duration of the maintenance. Customers will not be permitted inside the datacenter while work is underway. This is to prevent possible injury to our customers and also eliminate distractions to the technicians who will be performing the work.
Once service is restored to each panel, technicians will proceed with crash-carts and connect to each impacted system that has video output in order to observe a proper boot cycle. After that is complete, we will begin working customer tickets.
Update: 05/11/2021 13:41
Staff, management and contracting engineers have diagnosed the service-impacting issue and are currently working to fully rectify the trouble. One of our main power backup systems has experienced an internal fault. This UPS installation (the engineering, but not the actual hardware) is part of the original datacenter build from 2007 and has become functionally obsolete due to its inability to carry adequate load for our current (and future) requirements. Therefore, we had planned to decommission the system in Q4 2021 or Q1 2022 after moving existing services to new power distribution. Given the serious nature of the failure that has occurred, we will now be accelerating that replacement to an immediate timeline.
Work is already underway and is progressing as quickly as materials can be delivered and installed. We have already taken steps to isolate our core network infrastructure from the impacted system. Customer racks and 3rd party network service providers will be moved as soon as practical. Notifications will be sent out accordingly when power is to be cut over.
We appreciate your patience as we carry out this work. We recognize the critical nature of the service that we provide to our customers and everything we do is focused on that.
Initial report: 05/10/2021 18:11
We are aware of an event that caused an interruption in routing service to DAL1 datacenter from 5:45pm to 5:52pm Central Time.
The issue appears to be related to a catastrophic systems failure at one of the 480V 3-phase UPS systems. Our UPS service company has already dispatched a technician to review this. At first glance, it appears an internal component has failed, causing the load to momentarily trip offline. We are still investigating to determine what, precisely, has happened and what steps will be taken going forward. At this time our technicians have mitigated the issue with alternate power routing.
We appreciate your patience and understanding as we take the necessary steps of properly diagnosing, then permanently correcting the issue and apologize for any inconvenience from the impact of this issue.
Affecting Server - Rodan
In order to provide the best web hosting experience possible, VIRTBIZ is performing upgrades to certain infrastructure. During this time, it may be periodically necessary to temporarily suspend some services in order to ensure a smooth transition. cPanel-based web-hosting customers on the "Rodan" server may experience brief periods of unavailability for various services including website and email functions. Please be assured that the automated process responsible will not interrupt services for longer than absolutely necessary. We appreciate your patience during this time.
Affecting System - DAL1 Datacenter
We have identified software errors that may contribute to unscheduled outages or service degradation. In order to mitigate ill effects, engineers will engage in updating and replacing some critical network infrastructure. We are taking steps to minimize impact but due to significant engineering changes, some interruptions are possible.
Start time: 12:00 PM (CDT) 09/14/2019
End time: 11:59 PM (CDT) 09/14/2019
Expected Outage/Downtime: 10 minutes (intermittent)
During this maintenance window, customers may experience brief interruptions in service while we complete the maintenance activities. We do not expect extended periods of connectivity loss. However, due to the complexity of the work, it is possible that unforeseen issues could result in temporary connectivity loss. Customers may also expect to see some re-convergence during the maintenance window. Customers may experience latency and packet loss intermittently throughout the window.
Our network operations engineers closely monitor the work and will do everything possible to minimize any inconvenience to you. If you have any problems with your connection after this time, or if you have any questions regarding the maintenance at any point, please contact our Support Team for assistance.
We apologize in advance for any inconvenience. We are working as diligently as possible to ensure reliability of service to our customers.
###
UPDATE: 06/09/2019, 9:31PM
In the announced maintenance window for 06/09 we implemented new RaaS to improve routing performance. Unfortunately, the tested configuration ran but encountered an issue with LACP / LAG functionality that caused the network to perform erratically. Our engineers monitored this, engaged our vendor support and attempted to resolve it, but the LAG ports continued to reset. This eventually led to memory problems in the MDFs handling customer routes. At that point, hosted assets were effectively disconnected from the core. We are working with the vendor to troubleshoot and perform regression testing against our configuration. At this time we have rolled back to the previous infrastructure pending successful testing and sign-off with our vendor.
In order to promote ongoing network stability, VIRTBIZ will be performing critical maintenance to update and upgrade core routing infrastructure at the DAL1 datacenter location. This work includes physical replacement of core routing assets that supply border and edge connectivity. Equipment and connections are already in place and a procedure has been established in order to minimize impact to end-users.
Start time: 4:00 PM (CDT) 06/09/2019
End time: 6:00 PM (CDT) 06/09/2019
Expected Outage/Downtime: 10 minutes (intermittent)
During this maintenance window, customers may experience up to two brief interruptions in service while we complete the maintenance activities. We do not expect extended periods of connectivity loss. However, due to the complexity of the work, it is possible that unforeseen issues could result in temporary connectivity loss. Customers may also expect to see some re-convergence during the maintenance window. Customers may experience latency and packet loss intermittently throughout the window.
Our network operations engineers closely monitor the work and will do everything possible to minimize any inconvenience to you. If you have any problems with your connection after this time, or if you have any questions regarding the maintenance at any point, please contact our Support Team for assistance.
###
Affecting Other - DAL1 Datacenter
Engineers have identified a route flap issue impacting some connected routes and are currently working with our vendors for full resolution.
Affecting Other - MEM1 (Tennessee) datacenter
Technicians are currently investigating a power interruption and loss of routing at MEM1 (Dyersburg, TN) facility. At this time, colocated customers at MEM1 are inaccessible from the Internet. This is a facility issue and not a VIRTBIZ routing failure, and facility technicians are on-site working to resolve the problem.
UPDATE 11:33AM
Investigations have revealed a cascade of failures that contributed to nearly 90 minutes of service interruption. When the power failed, load was handled by UPS (battery) systems while the generator was auto-started. Although the generator started as expected, the automatic transfer switch (ATS) did not transfer the load to generator power. Accordingly, UPS units shut down as batteries were exhausted, causing the load to drop. As power to the load was restored manually, an intermediate distribution switch that connects our router to the facility network failed to properly start and load. As the switch is owned and maintained by the facility (not VIRTBIZ) their staff was required to bypass/replace the failed unit. Once they restored their network connectivity, all VIRTBIZ assets became visible on the network.
Our partners at the facility are conducting a post-mortem of this event in conjunction with their electrician and generator maintenance provider. VIRTBIZ is standing by to assist the facility with their investigation as needed.
Affecting Other - DAL1 Datacenter
EMERGENCY MAINTENANCE NOTIFICATION
Start time: 4:00PM Central 08/04/2016
End time: 9:00PM Central 08/04/2016
Expected Outage/Downtime: n/a
UPDATE, 8:00PM
Oncor crews worked quickly to assist in the replacement of the malfunctioning transformers. Datacenter operations were unimpacted during the maintenance period. The window was closed as of 6:15PM Central.
ORIGINAL POST:
During routine inspections this morning, technicians noted moisture and discoloration around one phase of our primary (A-side) electric service entrance. This indicates a failure of the transformer and points to an imminent power failure. Upon notification of this issue, we made contact with the electrical transmission provider, ONCOR, and have arranged for replacement of the entire transformer bank.
Scheduling the maintenance window involves balancing the risk of unplanned transformer failure or fire with the desire to utilize off-peak hours as much as possible. Accordingly, replacement has been scheduled for today (August 4, 2016) at 4:00PM Central Time. The replacement process is estimated to take about 4 hours. We are setting a conservative maintenance window for this event from 4:00PM – 9:00PM Central Time.
During this maintenance window, we will transfer A-side datacenter load to generator. We do not anticipate any interruption to datacenter operations.
Although we are operating within the rated load capacity of the transformer bank installation, we have expressed concerns to ONCOR that this is the second time in as many years that the transformer bank will have been replaced, and it is our position that the transformer bank should be uprated relative to our load. ONCOR has agreed and will be upgrading the delivery with larger-capacity transformers.
Senior datacenter techs and electrical engineers will be on-hand for the duration of the event to ensure the operation proceeds as smoothly as possible. In addition, we have notified both our fuel delivery company as well as our generator maintenance contractor, and they are standing by in the event that we need immediate assistance.
Updates and notifications will be made via our website at this link:
In addition, we will post brief updates as necessary and available to our social media feeds:
Twitter: @virtbiz
Facebook: www.facebook.com/virtbiz
Affecting System - IDF Core 12
UPDATE: 05/07/2016, 9:32PM CDT
We have resolved the issue at this time and do not foresee further trouble, but a network engineer will be reviewing and will advise if further action needs to be taken. We appreciate our customers' patience as we worked to resolve this trouble.
ORIGINAL POST: 05/07/2016, 8:43PM CDT
We are currently investigating a possible problem with IDF (switch) services at DAL1, row 2805.12. Colocation customers and dedicated server customers with assets in this area may be experiencing an outage at this time. Technicians are working as quickly as possible to bring resolution to this issue.
Affecting Other - Peering Carrier
VIRTBIZ Internet Services is passing along the following information regarding one of our service providers:
FPL-Fibernet will be upgrading the OS software in their Service Routers and Switches. As a result of this maintenance, service across the FPL-Fibernet network will be impacted one or more times during this maintenance window, and latency may be observed on the circuit. Each interruption might last up to 15 minutes, since the Network Elements will have to reboot for the new OS to become active. Internet connectivity will remain active at other peering points during the upgrade on each router.
This maintenance is non-service-impacting for VIRTBIZ customers, as traffic will be routed across other peers. Therefore, this notification is purely informational in nature.
Affecting System - VPS Hosting Network
UPDATE 6:53AM: Root cause has been identified as a failure in cloud power distribution hardware. This issue has been corrected and engineers have taken steps to prevent further recurrence.
ORIGINAL POST:
We are currently investigating unavailability within our VPS / Cloud hosting network that is causing some virtual systems to be unreachable. Technicians are actively working to restore service.
Affecting Other - DAL1 Datacenter
UPDATE: 11:55AM CDT
We have confirmed with Level3 that the incident is resolved and are closing this issue. Please see below for information from the Level3 NOC:
Outage Start: April 05, 2016 16:15 GMT
Outage Stop: April 05, 2016 16:31 GMT
Root Cause: An unreachable device in Dallas, TX was impacting IP services.
Fix Action: Tier III Technical Support made the necessary adjustments to restore reachability to the device.
Summary: The Level 3 IP NOC responded to an unreachable device in Dallas, TX that was impacting IP services, and engaged Tier III Technical Support to assist with the investigations. Tier III Technical Support then made the necessary adjustments to restore reachability to the device, and restore services.
Affecting Other - DAL1 Datacenter
Crews will be performing regular maintenance on the datacenter floor surface on Saturday, March 12. Work is expected to begin at about 10:00AM CST.
Work will include a deep cleaning and removal of any surface imperfections, polishing, and finishing with an anti-static coating. This work is necessary both to promote a clean environment and to reduce static electricity through the use of a static-dissipation coating.
At various times during the maintenance, the application of cleaning agents or coatings will prevent foot traffic. This work is being broken up into sections in order to minimize any possible disruption or inconvenience. However, during the maintenance, the rear entry of the DAL1 facility will remain locked. We ask that colocation customers requiring access enter via the front entrance on Saturday, March 12. In addition, we appreciate your patience as some routine requests, such as IP-KVM or media changes, may incur a brief delay if the area of the request is currently being serviced.
Thank you for your understanding as we continue working to bring you and your services the best environment possible.
Affecting Other - DAL1 Datacenter
UPDATE: 03/07/2016, 12:22PM: Work is complete, datacenter is returned to utility power, all systems normal.
UPDATE: 03/07/2016, 10:45AM: Datacenter is operating on emergency (generator) power while UPS maintenance is underway.
ORIGINAL POST: Technicians from our UPS maintenance contractor will be on-site to replace the battery string (24 jars) on UPS2. This is regularly scheduled preventative maintenance to ensure continued power reliability. A battery string has a typical lifespan of 3-5 years and, in accordance with best practices, we replace the batteries proactively so that a potential failed battery does not lead to an unplanned outage. During this maintenance, old batteries will be removed and new batteries installed and tested. VIRTBIZ has contracted to have the discarded batteries recycled in an environmentally conscious manner.
Although backup battery power will be removed from the system during this maintenance, every effort will be made to prevent disruption in service to our customers. Full generator backup service remains in place throughout the duration of the maintenance.
Affecting System - DAL1 / DTX901
ISSUE RESOLVED
Update 02/05/2016 03:48
We have received the following notification from the carrier:
Upon further investigation of this issue, we discovered you were not on the notification listing. You ARE, however, being affected by the work. This was due to Change Ticket CHG1700319.
This change was due to a fiber construction/relocation in that area around the AT&T Main Hall. Fiber construction crews were splicing fibers to move to a different location at the time services were lost.
Outage Start Time: 02/5/2016 12:43:49 AM
Outage Stop Time: 02/5/2016 2:38:40 AM
Update 02/05/2016 02:33
TWC NOC reports the issue is due to CHG1700319 / TSK21859362. The maintenance will be completed before 6am.
Initial report 02/05/2016 01:41
Monitoring has detected peering with Time Warner to be down. No impairment in service to users is expected as traffic is routing normally via other carriers. Technicians are investigating.
Affecting Other - DAL1 Datacenter
Circuit Id | Expected Impact | A Location CLLI | Z Location CLLI
IPYX/107784//ZYO | Hard Down - Up to 3 hours | |
Affecting Server - Matango
We are aware of an availability issue with cPanel server "Matango". Staff is currently reviewing for resolution.
UPDATE: 9:06AM
cPanel hosting services on "Matango" are impacted by a cloud storage availability problem. We are working to correct ASAP.
UPDATE: 9:57AM
We have corrected the cloud storage availability issue. cPanel "Matango" and related services should be available at this time.
Affecting Other - Network
We have received notification from one of our carriers that they will be performing network maintenance later this week.
This Thursday, February 6, 2014 the Zayo network may be unavailable for up to 10 minutes during the window of Midnight - 1:00AM CST. The VIRTBIZ network will automatically route around the Zayo network during this window and there will be no loss or impairment of service. However, it is possible that some users may experience brief apparent outages if their remote or 3rd party ISP is slow to pick up the BGP advertisement change.
Zayo reports the following:
===========================
Maintenance Ticket #: TTN-389231
Urgency: Demand
Maintenance Window: 00:01 - 01:00 Central
Primary Date: 6-Feb 2014
Backup Date: 7-Feb 2014
Location of Maintenance: Dallas TX
Reason for Maintenance: Zayo will perform demand maintenance to switch to secondary routing engine on er1.dfw2.
Expected Impact: SA (Service Affecting) - Down up to 10 Minutes
Service ID: t787
===========================
If you have any questions or concerns, we invite you to open a support ticket through our customer support portal at https://www.virtbiz.com/support and we will be glad to assist you.
Affecting Other - routing peer: Time Warner
VIRTBIZ NOC has removed its routing peer with Time Warner due to interconnect problems impacting customers of Comcast. Comcast customers have been able to pass ICMP traffic across the Time Warner peer but not TCP or UDP traffic. VIRTBIZ networking assets and routing policies have been eliminated as possible contributors to this issue. We have passed the case to the carrier for their review. External Ticket 3292206
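(For reference, the symptom described above, ICMP passing while TCP fails, can be confirmed with a simple check along the lines of the sketch below. The target host, port, and Linux-style ping flags are placeholders, not part of our internal tooling.)

    # Illustrative sketch: compare ICMP echo reachability with TCP reachability.
    # Host and port are placeholders; ping flags assume a Linux ping binary.
    import socket
    import subprocess

    TARGET = "example.com"   # placeholder destination, not an actual customer host
    TCP_PORT = 80            # placeholder service port

    def icmp_reachable(host, count=3, timeout=5):
        """Return True if the host answers ICMP echo (uses the system ping command)."""
        result = subprocess.run(
            ["ping", "-c", str(count), "-W", str(timeout), host],
            capture_output=True,
        )
        return result.returncode == 0

    def tcp_reachable(host, port, timeout=5):
        """Return True if a TCP connection to host:port completes."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print("ICMP:", "OK" if icmp_reachable(TARGET) else "FAIL")
        print("TCP :", "OK" if tcp_reachable(TARGET, TCP_PORT) else "FAIL")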
UPDATE 8/2/13 5:56PM CDT A new circuit between TW and VIRTBIZ has been activated. Peering is restored and routes are activated.
Affecting Other - DAL1 Datacenter
The following is an informational graph displaying datacenter temperatures as measured over time. Samples are taken in five-minute intervals from monitoring stations throughout the datacenter.
[Graph: temperature (F) over time for the Row 10, Row 12, Row 14, Row 16, Row 17, and Bakers Racks monitoring stations.]