Are (Trained) Humans the Weakest Link?

It’s an old standby among security experts: “Humans are the weakest link.” My sense is that people say this without irony, by which I mean they aren’t suggesting their own type of human (highly skilled cybersecurity professionals) is at fault, but that ordinary users are. But does that hold water? Consider these incidents:

  • Last week’s Amazon Web Services outage
  • DNC hacking
  • OPM breach
  • Home Depot breach
  • Target breach
  • Leaked plans for Marine One, 2009
  • Buckshot Yankee breach, 2008

In each case, the humans who proved to be the weak link were either cybersecurity professionals or employees who would have undergone background checks, clearances, and regular information security training. (There are, of course, other examples of well-trained people committing consequential errors, as well as plenty committed by ordinary users.)

At the least, however, this should force us to question easy assumptions. That means taking an honest look at why so many trained people commit errors with such large consequences. After all, they are building and fixing systems imagined and designed by people, which means other design or training errors exist further up the chain. The systems are used by people, and exist for the benefit of people. Most importantly, though, the key lesson should not be that if even well-trained people make such consequential errors, then there’s no hope for us ordinary folk. Rather, it’s that cybersecurity is a human pursuit, with technology components, and that we’ve swung far too heavily toward a conception of cybersecurity that assumes things would be fine if only people didn’t mess them up.
