[SystemSafety] HFACS Pros and Cons - On Behalf of Brian Smith
DREW Rae
d.rae at griffith.edu.au
Fri Aug 25 06:28:02 CEST 2017
HFACS, like the very similar ICAM, is based on a fairly simplistic version
of Reason's Swiss Cheese model, and has strengths and weaknesses accordingly.
My biggest concern with such techniques is that investigators & those who
train and manage them confuse classifying accidents with learning from
them. Picture HFACS as a large bucket full of labels that can be applied to
the accident. An investigator reaches into the bucket, finds labels that
match, and applies the labels to the accident. In extreme cases (and there
are actual tools for this) you can generate the entire accident report
using drop-down menus of pre-existing explanations.
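To make the caricature concrete, here is a minimal sketch in Python. The
categories and labels are illustrative HFACS-style entries I've invented,
not any real tool's menus; the point is that a classifier like this can
only ever return findings that were already in the bucket:

    # Hypothetical sketch of a label-bucket "investigation" tool.
    # Categories and labels are illustrative HFACS-style entries,
    # not an implementation of any actual HFACS software.
    HFACS_BUCKET = {
        "unsafe acts": ["skill-based error", "decision error", "violation"],
        "preconditions": ["fatigue", "poor communication"],
        "supervision": ["inadequate supervision"],
        "organisation": ["poor organizational climate", "resource mismanagement"],
    }

    def classify(narrative: str) -> list[str]:
        """Return every bucket label mentioned in the narrative.

        Note what this can never do: produce a finding that is not
        already in the bucket. It can never surprise us.
        """
        text = narrative.lower()
        return [label
                for labels in HFACS_BUCKET.values()
                for label in labels
                if label in text]

    classify("Crew fatigue and inadequate supervision preceded the event.")
    # -> ['fatigue', 'inadequate supervision']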
Strengths: Using a technique like HFACS puts more labels into the bucket.
If the alternative is to reach into the bucket and have only one label,
"Operator Error", then at least the technique results in a broader breadth
of "findings", and hopefully therefore a less naive set of responses. In
practice, experienced and well trained investigators tend to use a broad
range of classifications anyway, and less-skilled investigators tend to
default to the same few unhelpful labels like "inadequate supervision" and
"poor organizational climate". This isn't the place to argue about the
validity of those as explanatory labels, but they're certainly unhelpful as
investigation findings, unless the goal is to target or spread blame. So
the strengths don't really come from the method per se so much as from the
general idea of encouraging investigators to look beyond immediate causes.
Weaknesses: Explaining is not learning. If you already think that
"inadequate supervision" is a good label to apply to some accidents, then
having another example of that doesn't teach you anything about
supervision. It doesn't tell you that bad supervision is a potential
problem, or what good supervision should look like. Genuine learning from
accidents tends to come in a few forms, and all of these sit outside of
structured methods.
i) Technical insight into the physical system - these findings tend to
get reported to the investigator rather than made by the investigator
anyway.
ii) Closing the gap between work as it is actually practiced and work as
it is understood by those who write the procedures. These types of findings
require a non-normative look at work. Looking for problems (as HFACS does)
is a very normative approach.
iii) Learning new ways that accidents can happen - these are the types of
findings that rewrite investigation tools, and almost by definition require
a belief that the current tool isn't giving a good enough explanation.
(Think theories of drift, or normalisation of deviance, or even safety
culture. They all came from dissatisfaction with existing explanations).
A few tests of a good method, which point out the problems with HFACS:
1) Counterfactual statements in the investigation report show a strong
bias towards explanation rather than learning. They try to correct small
parts of the organisation (typically single individuals) back into line
with how things "should be", rather than learning about how things are or
how they could be.
2) Repeat findings from multiple investigations are a good sign that the
investigators aren't learning - they aren't demonstrating increased
capability. If the investigators aren't learning, the organisation isn't
learning.
3) If most recommendations are administrative controls, then there is a
lot of implicit operator blame happening, and very little learning. (Tests
2 and 3 are made concrete in the sketch after this list.)
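Tests 2 and 3 are straightforward to operationalise. A minimal sketch,
assuming a hypothetical collection of past reports with tagged
recommendations (the data layout and field names are invented for
illustration):

    from collections import Counter

    # Invented example data: each report lists its findings and tags
    # each recommendation with its hierarchy-of-controls level.
    reports = [
        {"findings": ["inadequate supervision", "fatigue"],
         "recommendations": ["administrative", "administrative"]},
        {"findings": ["inadequate supervision"],
         "recommendations": ["administrative", "engineering"]},
    ]

    # Test 2: a finding that recurs across investigations suggests the
    # investigators are re-applying an explanation, not learning.
    finding_counts = Counter(f for r in reports for f in set(r["findings"]))
    repeats = [f for f, n in finding_counts.items() if n > 1]

    # Test 3: if administrative controls dominate the recommendations,
    # suspect implicit operator blame rather than learning.
    recs = Counter(rec for r in reports for rec in r["recommendations"])
    admin_share = recs["administrative"] / sum(recs.values())

    print(repeats)       # ['inadequate supervision']
    print(admin_share)   # 0.75

Neither number proves anything on its own, but trending them over time is
a cheap check on whether the investigation process is generating new
knowledge or just re-applying old labels.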
CAVEAT: All of this is predicated on investigations being about learning.
That's not actually a safe assumption. If the purpose of your
investigations is to use the investigation process as a tool for your
investigators to wield influence, HFACS can be a good way to legitimise
recommendations you were going to make anyway. A safety manager should have
a file ready with the improvements they want to make, waiting for an
opportunity like an accident. The organisation is going to be in "something
must be done" mode; it is best if that "something" is at least well
researched, planned and thought through, rather than a spur-of-the-moment
response.
Drew Rae | Lecturer
School of Humanities, Languages and Social Science
Griffith University
Nathan | QLD 4111 | Macrossan (N16) Room 2.18 | T +61 7 3735 9764 | M
0450 161 361 | email d.rae at griffith.edu.au
On 25 August 2017 at 13:23, Peter Bernard Ladkin <
ladkin at rvs.uni-bielefeld.de> wrote:
> [The following message was not distributed. I don't know why. PBL]
>
> Subject:
> HFACS pros and cons?
> From:
> "Smith, Brian E. (ARC-TH)" <brian.e.smith at nasa.gov>
> Date:
> 2017-08-24 23:37
> To:
> systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de>
>
> Folks,
>
> In your view what are the strengths and weaknesses of the Human Factors
> Analysis and Classification System (HFACS)? On the one hand it seems to
> be a widely used technique. On the other, it relies heavily on the expert
> judgment of the human factors representatives on an accident
> investigation team – some of whom may simply be engineers who have
> received rudimentary human factors training.
>
> Thanks!
>
> Brian Smith, NASA Ames
>
> posted on behalf of Brian,
> PBL