@SpellarBot fixed 20 typos for you :)
SpellarBot committed May 3, 2019
1 parent 6fdff9c commit 471e48d
Showing 4 changed files with 20 additions and 20 deletions.
README.md: 26 changes (13 additions & 13 deletions)
@@ -281,7 +281,7 @@ systems.
* constraints that deconstrain
* protocol-based architectures
* emergent constraints
* Universal laws and arcthitectures
* Universal laws and architectures
* conservation laws
* universal architectures
* Highly optimized tolerance
@@ -363,7 +363,7 @@ Safety-I: avoiding things that go wrong
* systems are decomposable
* functioning is bimodal

Saefty-II: performance variability rather than bimodality
Safety-II: performance variability rather than bimodality
* the system’s ability to succeed under varying conditions, so that the number
of intended and acceptable outcomes (in other words, everyday activities) is
as high as possible
@@ -471,7 +471,7 @@ See [STAMP](STAMP.md) for some more detailed notes of mine.
* CAST (causal analysis based on STAMP) accident analysis technique
* Systems thinking
* hazard
* interactivy complexity
* interactive complexity
* system accident
* dysfunctional interactions
* safety constraints
@@ -604,11 +604,11 @@ Source: [Risk management in a dynamic society: a modelling problem]
### Concepts
* Dynamic safety model
* Migration toward accidents
* Risk maangement framework
* Risk management framework
* Boundaries:
- boundary of functionally acceptable performance
- boundary to economic failure
- boundary to unnaceptable work load
- boundary to unacceptable work load
* Cognitive systems engineering
* Skill-rule-knowledge (SKR) model
* AcciMaps
@@ -648,10 +648,10 @@ Reason is a psychology researcher who did work on understanding and categorizing

#### Accident causation model (Swiss cheese model)

Reason developed an accident casuation model that is sometimes known as the *swiss cheese* model of accidents.
Reason developed an accident causation model that is sometimes known as the *swiss cheese* model of accidents.
In this model, Reason introduced the terms "sharp end" and "blunt end".

#### Human Error model: Slips, laspses and mistakes
#### Human Error model: Slips, lapses and mistakes

Reason developed a model of the types of errors that humans make:

@@ -695,7 +695,7 @@ She is the director of the Center for Ergonomics at the University of Michigan.
* organization safety
* human-automation/robot interaction
* human error / error management
* attention / interruption maangement
* attention / interruption management
* design of decision support systems


@@ -779,7 +779,7 @@ Vaughan is a sociology researcher who did a famous study of the NASA Challenger

## David Woods

[Woods](https://complexity.osu.edu/people/woods.2) has a resesarch background in cognitive systems engineering and did work
[Woods](https://complexity.osu.edu/people/woods.2) has a research background in cognitive systems engineering and did work
researching NASA accidents. He is one of the founders [Adaptive Capacity
Labs](http://www.adaptivecapacitylabs.com/), a resilience engineering
consultancy.
@@ -833,7 +833,7 @@ From [The theory of graceful extensibility: basic rules that govern adaptive sys
6. Adaptive capacity is the potential for adjusting patterns of action to
handle future situations, events, opportunities and disruptions
7. Performance of a UAB as it approaches saturation is different from the
perforamnce of that UAB when it operates far from saturation
performance of that UAB when it operates far from saturation
8. All UABs are local
9. There are bounds on the perspective any UAB, but these limits are overcome
by shifts and contrasts over multiple perspectives.
@@ -858,7 +858,7 @@ Many of these are mentioned in Woods's [short course](http://csel.org.ohio-state

* the adaptive universe
* unit of adaptive behavior (UAB), adaptive unit
* adapative capacity
* adaptive capacity
* continuous adaptation
* graceful extensibility
* sustained adaptability
@@ -892,7 +892,7 @@ Many of these are mentioned in Woods's [short course](http://csel.org.ohio-state
* oversimplification
* fixation
* fluency law, veil of fluency
* capacity for maneuver (CfM)
* capacity for manoeuvre (CfM)
* crunches
* sharp end, blunt end
* adaptive landscapes
@@ -907,7 +907,7 @@ Many of these are mentioned in Woods's [short course](http://csel.org.ohio-state
* Properties of resilient organizations
- Tangible experience with surprise
- uneasy about the precarious present
- push intiative down
- push initiative down
- reciprocity
- align goals across multiple units
* goal conflicts, goal interactions (follow them!)
STAMP.md: 6 changes (3 additions & 3 deletions)
@@ -134,11 +134,11 @@ are not adequately considered in the control algorithm, accidents can result.
Many accidents relate to *asynchronous evolution* where one part of the system
changes without the related necessary changes in the other parts.

Communication is a critical factor here as well as monitoring for changes that may occur and feeding back this information to the higher-level control. For example, the safety analysis process that generates constraints always involves some basic assumptions about the operating environment of the process. When the environment ment changes such that those assumptions are no longer true.
Communication is a critical factor here as well as monitoring for changes that may occur and feeding back this information to the higher-level control. For example, the safety analysis process that generates constraints always involves some basic assumptions about the operating environment of the process. When the environment changes such that those assumptions are no longer true.

#### Inconsistent, incomplete or incorrect process models

Accidents, particularly component interaction accidents, most often result from inconsistencies between the models of the process used by the controllers (both human and automated) and the actual process state. When the controller's model of the process (either the human mental model or the software or hardware model) diverges from the process state, erroneous control commands (based on the incorrect rect model) can lead to an accident.
Accidents, particularly component interaction accidents, most often result from inconsistencies between the models of the process used by the controllers (both human and automated) and the actual process state. When the controller's model of the process (either the human mental model or the software or hardware model) diverges from the process state, erroneous control commands (based on the incorrect model) can lead to an accident.

The most common form of inconsistency occurs when one or more process models is incomplete in terms of not defining appropriate behavior for all possible process states or all possible disturbances, including unhandled or incorrectly handled component failures.

@@ -233,7 +233,7 @@ CAST - causal analysis based on STAMP
5. Analyze the loss at the physical system level. Identify the contribution of each of the following to the events: physical and operational controls, physical failures, dysfunctional interactions, communication and coordination flaws, and unhandled disturbances. Determine why the physical controls in place were ineffective in preventing the hazard.
6. Moving up the levels of the safety control structure, determine how and why each successive higher level allowed or contributed to the inadequate control at the current level.

For each system safety constraint, either the responsibility for enforcing it was never assigned to a component in the safety control structure ture or a component or components did not exercise adequate control to ensure their assigned responsibilities (safety constraints) were enforced in the components below them.
For each system safety constraint, either the responsibility for enforcing it was never assigned to a component in the safety control structure or a component or components did not exercise adequate control to ensure their assigned responsibilities (safety constraints) were enforced in the components below them.

Any human decisions or flawed control actions need to be understood in terms of (at least):

intro.md: 2 changes (1 addition & 1 deletion)
@@ -83,7 +83,7 @@ Two great introductory papers (alas, both paywalled) are:

* [Reconstructing human contributions to accidents: the new view on error and performance](https://www.sciencedirect.com/science/article/pii/S0022437502000324)
by Dekker
* [The eror of counting errors](https://doi.org/10.1016/j.annemergmed.2008.03.015) by Robert Wears
* [The error of counting errors](https://doi.org/10.1016/j.annemergmed.2008.03.015) by Robert Wears


### Safety-II
laws.md: 6 changes (3 additions & 3 deletions)
@@ -16,7 +16,7 @@ by Hoffman and Woods.
* Tradeoffs
- Efficiency-thoroughness tradeoff
- Optimality-brittleness tradeoff
* Theroems
* Theorems
- Theorems of graceful extensibility

## Laws
@@ -101,7 +101,7 @@ Source: [Beyond Simon’s Slice: Five Fundamental Trade-Offs that Bound the Perf
### Theorems of graceful extensibility

* *UAB* stands for unit of adaptive behavior
* *CfM* stands for capacity for maneuver
* *CfM* stands for capacity for manoeuvre

1. Adaptive capacity is finite
2. Events will produce demands that challenge boundaries on the adaptive
@@ -113,7 +113,7 @@ Source: [Beyond Simon’s Slice: Five Fundamental Trade-Offs that Bound the Perf
6. Adaptive capacity is the potential for adjusting patterns of action to
handle future situations, events, opportunities and disruptions
7. Performance of a UAB as it approaches saturation is different from the
perforamnce of that UAB when it operates far from saturation
performance of that UAB when it operates far from saturation
8. All UABs are local
9. There are bounds on the perspective any UAB, but these limits are overcome
by shifts and contrasts over multiple perspectives.
