Downtime

Contents

  1. Types

  2. Characteristics

     Telecommunication outage classifications 

  3. Impact

  4. Famous outages

  5. Service levels

  6. Response and reduction of impact

  7. Planning

  8. Avoidance

  9. Other usage

  10. Measuring downtime

  11. See also

  12. References


The term downtime refers to periods when a system is unavailable.

Downtime or outage duration is the length of time during which a system fails to provide or perform its primary function. Reliability, availability, recovery, and unavailability are related concepts.

Unavailability is the proportion of a time span during which a system is unavailable or offline.

This is usually a result of the system failing to function because of an unplanned event, or because of routine maintenance (a planned event).

The term is commonly applied to networks and servers. The common reasons for unplanned outages are system failures (such as a crash) or communications failures (commonly known as a network outage).

The term is also commonly applied in industrial environments in relation to failures in industrial production equipment. Some facilities measure the downtime incurred during a work shift, or during a 12- or 24-hour period. Another common practice is to identify each downtime event as having an operational, electrical or mechanical origin.

The opposite of downtime is uptime.

Types

Industry standards for the terms "Outage Duration" or "Maintenance Duration" can define different points of initiation and completion, so the following clarifications should be used to avoid conflicts in contract execution:

  1. "Turnkey" this is the most engrossing of all outage types. Outage or Maintenance starts with operator of the plant or equipment pressing the shutdown or stop button to initiate a halt in operation. Unless otherwise noted, Outage or Maintenance is considered completed when the plant or equipment is back in normal operation ready to begin manufacturing or ready be synchronized with system or grid or ready to perform duties as pump or compressor.
  2. "Breaker to Breaker" This Outage or Maintenance starts with operator of the plant or equipment removing the power circuit (Main power breaker at "off" or "disengaged" or "On-Cooldown"), not the control circuit from operation. This still would allow for the equipment to be cooled down or brought to ambient such that outage/maintenance work can be prepared or initiated. Depending on equipment types, "Breaker to Breaker" outage can be advantageous if contracting out controls related maintenance as this type of maintenance work can be performed while main equipment is still on cool-down or on stand-by. Unless otherwise noted, this type of outage is considered complete when power circuit is re-energized via engaging of the power breaker.
  3. "Completion of Lock-out/Tag-out" This Outage or Maintenance (sometimes mistaken for "Off-Cooldown" but not the same) starts with operator of the plant or equipment removing the power circuit, disengaging the control circuit and performing other neutralization of potential power and hazard sources (typically called Lock-Out, Tag-Out "LOTO") This point of maintenance period is typically the last phase of the outage initiation stage before actual work starts on the facility, plant or equipment. Safety briefing should always follow the LOTO activity, before any work is conducted. Unless otherwise noted, this type of outage is considered complete when the equipment has reached mechanical completion and ready to be placed on slow-roll for many heavy rotating equipment, Bump-test or rotation check for motors, etc., but must follow return or work permit per LOTO procedures.

Any on-line testing, performance testing and tuning required should not count towards the outage duration, as these activities are typically conducted after the completion of the outage or maintenance event and are outside the control of most maintenance contractors.

Characteristics

Unplanned downtime may be the result of a software bug, human error, equipment failure, malfunction, high bit error rate, power failure, overload due to exceeding the channel capacity, a cascading failure, etc.

Telecommunication outage classifications

Downtime can be caused by failure in:

  • hardware (physical equipment),
  • software (logic controlling equipment),
  • interconnecting equipment (such as cables, facilities, routers, ...),
  • wireless transmission (wireless, microwave, satellite), and/or
  • capacity (system limits).

The failures can occur because of:

  • damage,
  • failure,
  • design,
  • procedural causes (improper use by humans),
  • engineering (how the system is used and deployed),
  • overload (traffic or system resources stressed beyond designed limits),
  • environment (support systems like power and HVAC),
  • scheduled downtime (outages designed into the system for a purpose such as software upgrades and equipment growth),
  • other (none of the above, but known), or
  • unknown causes.

The failures can be the responsibility of:

  • customer/service provider,
  • vendor/supplier,
  • utility,
  • government,
  • contractor,
  • end customer,
  • public individual,
  • act of nature,
  • other (none of the above, but known), or
  • unknown.[1]

Impact

Outages caused by system failures can have a serious impact on the users of computer/network systems, in particular those industries that rely on a nearly 24-hour service:

  • Medical informatics
  • Nuclear power and other infrastructure
  • Banks and other financial institutions
  • Aeronautics, airlines
  • News reporting
  • E-commerce and online transaction processing
  • Persistent online games

The users of an ISP and other customers of a telecommunication network can also be affected.

Corporations can lose business due to a network outage, or they may default on a contract, resulting in financial losses. According to the Veeam Availability Report, organizations encounter unplanned downtime, on average, 13 times per year, with the average cost of one hour of downtime for a mission-critical application being $82,864.
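
As a rough back-of-the-envelope illustration of these figures (assuming, purely for illustration, that each incident lasts one hour):

```python
# Rough annual-cost estimate from the figures cited above.
# The one-hour-per-incident duration is an illustrative assumption.
incidents_per_year = 13
cost_per_hour = 82_864        # USD, mission-critical application
hours_per_incident = 1        # assumed for illustration

annual_cost = incidents_per_year * cost_per_hour * hours_per_incident
print(f"Estimated annual cost of unplanned downtime: ${annual_cost:,}")  # $1,077,232
```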

Those people or organizations that are affected by downtime can be more sensitive to particular aspects:

  • some are more affected by the length of an outage - it matters to them how much time it takes to recover from a problem
  • others are sensitive to the timing of an outage - outages during peak hours affect them the most

The most demanding users are those that require high availability.

Famous outages


On Mother's Day, Sunday, May 8, 1988, a fire broke out in the main switching room of the Hinsdale Central Office of the Illinois Bell telephone company. One of the largest switching systems in the state, the facility processed more than 3.5 million calls each day while serving 38,000 customers, including numerous businesses, hospitals, and Chicago's O'Hare and Midway Airports.[2]

Virtually the entire AT&T network of 4ESS toll tandem switches repeatedly went in and out of service on January 15, 1990, disrupting long-distance service across the entire United States. The problem dissipated by itself when traffic slowed down; a software bug was later found to be the cause.[3]

AT&T lost its frame relay network for 26 hours on April 13, 1998.[4] This affected many thousands of customers, and bank transactions were one casualty. AT&T failed to meet the service level agreement on their contracts with customers and had to refund[5] 6,600 customer accounts, costing millions of dollars.

Xbox Live had intermittent downtime during the 2007–2008 holiday season which lasted thirteen days.[6] Increased demand from Xbox 360 purchasers (the largest number of new user sign-ups in the history of Xbox Live) was given as the reason for the downtime; in order to make amends for the service issues, Microsoft offered their users the opportunity to receive a free game.[7]

Sony's PlayStation Network outage of April 2011 began on April 20, 2011, and service was gradually restored from May 14, 2011, starting in the United States. This outage is the longest the PSN has been offline since its inception in 2006. Sony stated the problem was caused by an external intrusion that resulted in the theft of personal information.[8] Sony reported on April 26, 2011 that a large amount of user data had been obtained in the same hack that caused the downtime.

Telstra's Ryde switch failed in late 2011 after water leaked into the electrical switchboard during continuing wet weather. The Ryde switch is one of the largest switches by area in Australia, and the failure affected more than 720,000 services.[citation needed]

The Miami datacenter of ServerAxis went offline unannounced on February 29, 2016 and was never restored. This impacted multiple providers and hundreds of websites. The outage impacted coverage of the 2016 NCAA Women's Division I Basketball Tournament as WBBState, one of the affected sites, was by far the most comprehensive provider of women's basketball statistics available.[9]

Service levels

In service level agreements, it is common to specify a downtime percentage (per month or per year), calculated by dividing the sum of all downtime periods by the total time of a reference time span (e.g. a month). A downtime of 0% means that the server was available the entire time.

For Internet servers, a downtime of 1% per year or worse can be regarded as unacceptable, as this means more than three days of downtime per year. For e-commerce and other industrial uses, any value above 0.1% is usually considered unacceptable.[citation needed]
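
A minimal Python sketch of the calculation described above, with made-up outage durations, shows how a list of downtime periods translates into the percentage typically quoted in a service level agreement, and why roughly 1% per year already amounts to more than three days of downtime:

```python
# Downtime percentage as defined above: sum of downtime periods divided by
# the reference time span. The outage durations below are made up.

def downtime_percentage(downtime_hours, reference_span_hours):
    return 100.0 * sum(downtime_hours) / reference_span_hours

HOURS_PER_YEAR = 365 * 24  # 8760

outages = [2.5, 12.0, 73.0]  # hours of unplanned downtime over one year
pct = downtime_percentage(outages, HOURS_PER_YEAR)

print(f"Downtime: {pct:.2f}% of the year")               # ~1.00%
print(f"Equivalent to {pct / 100 * 365:.1f} days/year")   # ~3.6 days
```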

Response and reduction of impact

It is the duty of the network designer to make sure that a network outage does not happen. When one does happen, a well-designed system limits the effects by keeping outages localized so that they can be detected and fixed as soon as possible.

A process needs to be in place to detect a malfunction (network monitoring) and to restore the network to a working condition. This generally involves a help desk team of trained engineers that can troubleshoot the problem; a separate help desk team is usually necessary to field user input, which can be particularly demanding during downtime.

A network management system can be used to detect faulty or degrading components prior to customer complaints, with proactive fault rectification.

Risk management techniques can be used to determine the impact of network outages on an organisation and what actions may be required to minimise risk. Risk may be minimised by using reliable components, by performing maintenance, such as upgrades, by using redundant systems or by having a contingency plan or business continuity plan.

Technical means can reduce errors through error-correcting codes, retransmission, checksums, or diversity schemes.
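
As a minimal sketch of one of these techniques, the following Python example pairs a checksum with retransmission; the lossy "channel" and the retry policy are illustrative assumptions, not a real protocol:

```python
# Checksum verification with retransmission: the receiver accepts data only
# when its hash matches, otherwise the sender retries.
import hashlib
import random

def send_over_lossy_channel(payload: bytes) -> bytes:
    """Simulate a channel that occasionally flips a byte."""
    data = bytearray(payload)
    if data and random.random() < 0.3:
        data[random.randrange(len(data))] ^= 0xFF
    return bytes(data)

def transmit_with_retries(payload: bytes, max_retries: int = 5) -> bytes:
    checksum = hashlib.sha256(payload).digest()
    for attempt in range(1, max_retries + 1):
        received = send_over_lossy_channel(payload)
        if hashlib.sha256(received).digest() == checksum:
            return received                      # checksum matches: accept
        print(f"attempt {attempt}: checksum mismatch, retransmitting")
    raise RuntimeError("giving up after repeated corruption")

print(transmit_with_retries(b"critical configuration data"))
```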

One of the biggest causes of downtime is misconfiguration, where a planned change goes wrong. Organisations typically rely on manual effort to manage configuration backups, but this requires highly skilled engineers with the time to manage the process across a multi-vendor network. Automation tools are available to manage backups, but there are very few solutions that handle configuration recovery, which is needed to minimize the overall impact of an outage.

Planning

A planned outage is the result of a planned activity by the system owner and/or by a service provider. These outages, often scheduled during the maintenance window, can be used to perform tasks including the following:

  • Deferred maintenance, e.g., a deferred hardware repair or a deferred restart to clean up garbled memory
  • Diagnostics to isolate a detected fault
  • Hardware fault repair
  • Fixing an error or omission in a configuration database, or in a recent configuration database change
  • Fixing an error in an application database, or in a recent application database change
  • Software patching/software updates to fix a software fault

Outages can also be planned as a result of a predictable natural event, such as a sun outage.

Maintenance downtimes have to be carefully scheduled in industries that rely on computer systems. In many cases, system-wide downtime can be averted using what is called a "rolling upgrade": incrementally taking down parts of the system for upgrade without affecting the overall functionality.
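
A minimal sketch of the rolling-upgrade idea just described; the node names and the drain/upgrade/health-check steps are hypothetical placeholders for whatever a real orchestration system would do:

```python
# Upgrade one node at a time so the remaining nodes keep serving traffic.
import time

NODES = ["app-1", "app-2", "app-3", "app-4"]  # hypothetical cluster

def drain(node):   print(f"{node}: draining traffic to its peers")
def upgrade(node): print(f"{node}: applying the software update")
def healthy(node): return True                 # a real system would probe the service
def restore(node): print(f"{node}: returned to the load balancer")

def rolling_upgrade(nodes):
    for node in nodes:
        drain(node)
        upgrade(node)
        if not healthy(node):
            raise RuntimeError(f"{node} failed its health check; halting the rollout")
        restore(node)
        time.sleep(1)                          # brief soak time before the next node

rolling_upgrade(NODES)
```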

Avoidance

For most websites, website monitoring is available. Website monitoring, whether synthetic (simulated requests) or passive (observing real user traffic), is a service that detects downtime and tracks users on the site.

Other usage

Downtime can also refer to periods when human capital or other assets are down. For instance, if employees are in meetings or unable to perform their work due to another constraint, they are down. This can be equally expensive, and can itself be the result of another asset (e.g. a computer system) being down. This is also commonly known as "dead time".

The term is also used in factories and other industrial settings; see total productive maintenance (TPM).

Measuring downtime

There are many external services that can be used to monitor the uptime, downtime, and availability of a service or host.
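
A minimal sketch of such an availability probe in Python; the target URL, polling interval, and "any HTTP response counts as up" rule are illustrative assumptions, while real monitoring services add alerting, multiple probe locations, and persistent history:

```python
# Periodically request a URL and keep a running availability percentage.
import time
import urllib.request

URL = "https://example.com/"   # hypothetical site to watch
INTERVAL_SECONDS = 60

def is_up(url: str, timeout: float = 10.0) -> bool:
    """One availability check: a successful HTTP response counts as 'up'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:            # covers URLError, timeouts, connection errors
        return False

checks = up = 0
for _ in range(5):             # a short demo run; a real monitor loops forever
    checks += 1
    up += is_up(URL)
    print(f"availability so far: {100.0 * up / checks:.1f}%")
    time.sleep(INTERVAL_SECONDS)
```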

See also

  • High availability
  • Uptime
  • Mean down time
  • Planned downtime
  • Carrier grade

References


1. ^ ATIS 0100012.2007, Standard Outage Classification.
2. ^ Risks Digest, Volume 6, Issue 82, 1988.
3. ^ "The Crash of the AT&T Network in 1990". http://www.phworld.org/history/attcrash.htm
4. ^ "Preventing IP Network Service Outages". Agilent Technologies. https://www.keysight.com/upload/cmc_upload/All/sp-insight.pdf
5. ^ Risks Digest, Volume 19, Issue 72, 1998.
6. ^ "DAY 13 of Xbox Outage". Engadget, January 3, 2008. https://www.engadget.com/2008/01/03/xbox-live-outage-day-13-still-up-and-down-still-preventing-fu/
7. ^ "Microsoft offers free game for Xbox Live holiday problems". PC World, January 4, 2008.
8. ^ https://www.google.com/hostednews/ap/article/ALeqM5j9AacQSaJXBQ3JUqZWxemjT8nMPw?docId=916344d02c284103af70f845db4befc1
9. ^ "A Website Went Offline And Took Most Of Women's College Basketball Analytics With It". FiveThirtyEight.

Categories: Engineering failures | Information technology management | Maintenance | System administration
