Why Is Cybersecurity So Blessed Hard?

Why, oh why is computer security so blessed hard! At its base, the problem is that computers are complex. The programs are complex. The protocols are complex. There are many, many moving parts.

For this article, I’ll use the internet indicator TL;DR, or Too Long; Didn’t Read.

Here’s the TL;DR part. Computer hardware is complex. Operating systems are complex. Computer software is complex. When you add all of that complexity together, it’s really, really hard to manage. If it’s not managed perfectly, cybersecurity issues like data breaches, firewall failures and network hacking ensue. NerdsToGo are cybersecurity experts who can help protect your private business data, improve your firewall security and keep your network safe from hackers.

What is Computer Hardware?

Avoiding an academic discussion of layers of complexity, the first layer of complexity for computers resides in the hardware. We are familiar with operating systems and their names: Android, iOS, Windows, Unix, MVS, VMS, etc. But before one gets to the software or operating system, the hardware architecture itself is part of the computing process. How the chips are designed is the basis of how the operating system, and then the software, will run. Each ‘kind’ of hardware has a binary context that the instruction set runs within. Think automatic vs. manual transmission. How you drive a car with a manual transmission is just different from how you drive one with an automatic transmission. To make a manual transmission car go requires a different set of inputs.

To extend the analogy, the manual transmission requires more user input and knowledge than an automatic. The car behaves in different ways depending on the transmission type. The same is true for computer hardware. The hardware (transmission) defines how the software (driver) operates the car. How the processor, input/output pathways, caching, bus communications, etc. work is driven by the hardware maker’s design decisions. How the physical chips are designed drives how the operating system and, on top of that, the applications will work.

Current devices, phones and tablets especially, use what is called a RISC, or Reduced Instruction Set Computing, architecture. The idea is that the central processing unit (CPU) is able to accomplish more ‘stuff’ for each cycle of the CPU. RISC enables computers to do stuff faster with a number of hardware tricks: changing address spaces, pipelining instructions, etc. Each of these little tricks adds to the complexity of the chip and the communications architecture of the chip sets.

Mistakes in how the hardware moves information around in the chips and memory are, what I will call, the first vulnerability. If the chipset in the underlying hardware does something unanticipated, an adversary who discovers that unintended action can use it to attack the machine at the hardware level. The bad guys can make the machine, at the hardware level, do deleterious things to the user. The Spectre and Meltdown vulnerabilities disclosed in 2018, which abused the kind of speculative-execution tricks described above, are exactly this class of flaw.

What is an Operating System?

The next layer is the operating system. The hardware of a computer is relatively dumb. To use the hardware, the system has to be ‘booted’ with an operating system to make it useful. The boot process loads software so that the hardware can recognize the memory (RAM and other storage), input devices (keyboard, mouse), output devices (monitors, printers) and the ‘rules of the road’ for how information moves through the system.

As computers have become more powerful, the amount of information needed to operate the system has become exponentially more complex. The first computer I ever owned, back in the dark ages, had 128k of memory and no hard drive. The computer I’m using to write this has 8GB of RAM. Let me do the math for you: this computer has 62,500 times more memory than my first one. A little more complexity in the operating system? Ya think!?!? In terms of hard drive, zero to anything is, by definition, sizable.
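The arithmetic above is easy to check. A quick sketch (using decimal units; binary kilobytes and gigabytes would give 65,536 instead):

```python
# The memory comparison above, in decimal units:
first_computer = 128 * 1000      # 128k of RAM, in bytes
this_computer = 8 * 1000**3      # 8GB of RAM, in bytes
print(this_computer // first_computer)  # → 62500
```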

From the standpoint of someone who desires to attack your computer, that increase in the size of the OS and the concomitant complexity gives the bad guys many more attack surfaces to probe. Think of it this way: if you’re trying to keep a burglar out of your house, a brick house with no windows and only one door is much easier to secure than one with many doors and lots of windows. The larger and more complex the operating system is, the more things that can be insecure.

Add to that, for each interface the operating system manages, there are additional ‘cracks’ for a bad guy to force. Each interface (input, output or communications channel) adds a ‘crack’ to get a pry bar into. User input? Internet connection? Network connection? USB stick? Graphics card? The list is practically endless.

Add to that the hardware complexity and risk that each of those interfaces brings: routers, baby monitors, cameras, home automation devices, etc. There have been hardware vulnerabilities in each of those categories… which leads us to…

What Are Computer Software and Applications?

On top of the hardware and the operating system come the applications we use. Here the complexity increases by another order of magnitude. In this layer, one adds the complexity of how the applications talk to the operating system, which in turn interfaces with the hardware. The application makes ‘calls’ on the OS, which translates those calls into hardware operations.
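To make the layering concrete, here is a trivial (and deliberately simple) illustration in Python: even a one-line file write is a series of ‘calls’ on the OS, which the OS translates into hardware operations.

```python
import os
import tempfile

# Even this simple write traverses every layer described above: the
# Python runtime asks the operating system (open/write/close system
# calls), and the OS in turn drives the storage hardware.
fd, path = tempfile.mkstemp()      # ask the OS for a new file
os.write(fd, b"hello, hardware")   # 'write' system call -> OS -> disk
os.close(fd)                       # release the OS-level handle

with open(path, "rb") as f:
    data = f.read()
os.remove(path)
print(data)  # → b'hello, hardware'
```

Every one of those calls is a boundary crossing, and every boundary crossing is a place where a mistake in any layer can surface.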

As these processes occur, the involvement of each layer means that there are even more ‘cracks’ for nefarious actors to exploit. The exploits can be complex. Take the example of the WinRAR vulnerability we just discussed. A 19-year-old flaw in a bundled library let a crafted archive drop executable code into the user’s Startup folder, leveraging an automatic function of the OS (programs in that folder run at login) to install malware. Whew.
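The WinRAR flaw was, at heart, a path-traversal bug: the archive supplied a file name that the extractor trusted. A minimal sketch (not the actual WinRAR code, and the paths are invented) of the check that was missing:

```python
import os

def is_safe_member(dest_dir: str, member_name: str) -> bool:
    """Reject archive entries whose resolved path escapes dest_dir."""
    dest = os.path.realpath(dest_dir)
    target = os.path.realpath(os.path.join(dest_dir, member_name))
    return os.path.commonpath([dest, target]) == dest

# A normal entry lands inside the extraction directory...
print(is_safe_member("/tmp/extract", "docs/readme.txt"))  # → True

# ...but a crafted name climbs out of it, e.g. toward a Startup folder.
print(is_safe_member("/tmp/extract", "../../victim/Startup/evil.exe"))  # → False
```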

Back to the big risk picture. In the classified computer environment, the most secured computer resources are ‘air gapped’ from other, insecure systems. In other words, a classified system isn’t connected to or interacting with any unsecured resources. No internet. No email. No ability to plug in USB thumb drives. There are minimal opportunities for the computer to get infected. Compare that to the computer in an office or home environment.

Those computers are connected to the local network (probably through wireless) and on to the internet. Let’s just take one example: JavaScript. JavaScript is a programming language whose code is downloaded from the internet and run on your computer, by your browser, for example. Initially, JavaScript ran on the client side only. That means that when your browser connected to an internet resource, it automatically downloaded software to be run on your computer. Think about that for a second. You connect to a website and it installs and runs code on your computer. For years the whole idea of computer security was limiting or controlling what ‘stuff’ ran on your computer. Now, in our brave new world, by simply visiting a website, your browser application has the ability to download and run a program on your computer.

Next there is the interaction between the applications you run for a specific purpose and the ‘helper’ applications already there. For example, if one uses Outlook, you may notice that if you get an email with embedded pictures or graphics, Outlook blocks them from displaying. The user has to give explicit permission to display the pictures. Ever wonder why? A consistent vulnerability has been malware in *.png files. The PNG format has to have an interpreter that reads the information and displays the content. A malformed .png could infect your computer just by being displayed. And that’s just email.
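Why can a malformed file infect a machine just by being displayed? Formats like PNG carry length fields the parser reads before it can verify them. A toy sketch of the pattern (not real PNG code; the chunk format here is invented):

```python
import struct

def read_chunk(data: bytes) -> bytes:
    """Parse a toy chunk: a 4-byte big-endian length, then that many bytes."""
    (declared_len,) = struct.unpack(">I", data[:4])
    payload = data[4:4 + declared_len]
    # A naive parser trusts declared_len. In C, copying declared_len
    # bytes into a fixed-size buffer is the classic buffer overflow.
    if declared_len != len(payload):
        raise ValueError("declared length exceeds actual data")
    return payload

# A well-formed chunk parses cleanly...
print(read_chunk(struct.pack(">I", 5) + b"hello"))  # → b'hello'

# ...but a malformed one lies: ~4GB declared, 5 bytes present.
try:
    read_chunk(struct.pack(">I", 0xFFFFFFFF) + b"hello")
except ValueError as err:
    print(err)  # → declared length exceeds actual data
```

The validation line is the whole game: a parser that skips it does whatever the attacker’s length field tells it to do.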

Think about how many images you are served when you browse the internet. Each image displayed is a potential security vulnerability in this model. Yikes.

That’s the simplest and most understandable case. Think about each and every application on your computer. Think about each and every utility. All of the ‘things’ that you load or process. Each one has the potential to trigger a problem in each of the layers of activity: hardware, OS, application or communications protocol.

Okay. The problem is complex. It’s been complex for quite a while. Why isn’t it fixed?

It’s a Design Thing

Imagine building a house. The first step would be to design the house. It won’t work very well to just pour a foundation and hope for the best. How big is the house? How many bedrooms? Bathrooms? Where will the doors and windows be? Halfway through the construction project would be a problematic time to decide that you wanted a couple of extra bedrooms.

Our current computer environment has been designed and constructed on the fly. As each product or function comes along, it gets cobbled together to make it work. The internet as we know it came from the concept of a robust network that allowed messages to be broken up and routed by multiple routes to reach the intended receiver. In the old days, a single broken switch in the phone network could cause a failure of the entire communications pathway. During the Cold War, for example, in a time of increased tension between the United States and the Soviet Union, the failure of a single AT&T switch caused the entire Strategic Air Command (SAC) command and control structure to lose contact with the major nodes of the system. SAC HQ lost contact with Cheyenne Mountain, Site R and the Distant Early Warning (DEW) system. Did this mean that a nuclear attack had occurred, and SAC should launch a counter-strike? Important decision, that. Those communication challenges gave rise to the need for a robust network of distributed, redundant nodes, so that the loss of a single node would not shut down communication. ARPANET was born. (Highly recommended read: Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety by Eric Schlosser.)

Design A Secure Business Network System

By design, the network was for ensuring communication. Security wasn’t really at issue from a design standpoint, since all of the network nodes were owned by the military. The nodes were, by definition, trusted. All of the nodes and network addresses were ‘owned’ by ARPANET. In 1971, ARPANET consisted of 18 nodes. Today, each and every internet-connected device is an analogous node. I have more network nodes than that in my house!

The point of all of that is that the foundation of the internet (return to the house analogy) was not designed with security in mind. The designers were not thinking in terms of bad actors on the network. Think of what’s changed and become available on the internet in the last 10 years. Video streaming. On-line applications. Smart phones. Tablets. Web apps. Home automation. Amazon is only about 25 years old. No one envisioned how ‘connected’ the world would become.

Back to the house analogy. Imagine pouring a foundation for a 1,500 sq. ft. house with three bedrooms and 2 1/2 baths, then expanding that to end up with the Biltmore Estate. Imagine the add-ons and compromises one would have to make. The latest data suggests that there are about 26 billion devices connected to the internet. Hundreds of thousands of device types. Hundreds of operating systems. Systems and devices of a wide variety of ages and design maturity.

The question about computer security might be how we have ANY cybersecurity rather than what makes it so difficult.

A reporter asked Willie Sutton, a bank robber in the 1920s and ’30s who stole about $2M, why he robbed banks. His response? “That’s where the money is.” That’s the next part of why computer security is hard. Remember: the problem is complex, the system was not designed with security in mind, and on that system we’ve based an economic engine.

Back in the day (1993) when I started using Lynx, web browsing was a text-based exercise. ‘The web’ was made of sites that listed a bunch of document names that you could download using the ftp protocol. No pictures. No on-line stores. Just stores of documents. Not a lot of economic incentive for hackers.

The same thing was true of bulletin boards. These were local resources that computer nerds put up for local folks, generally. Kinda like chats of today but not really.

The next piece of security was driven by the dial-up nature of the internet. The computers were not connected via DSL, cable modem, etc. The act of ‘getting on the internet’ required a modem connected to a phone line, a series of squeaks and whistles, and tying up your phone line while you were on-line. The attack surface for a hacker was only available while you were connected… until one of the kids picked up the phone to call a friend and dropped your connection. The attack surface was intermittent.

Let’s review… username and password got you to GEnie, and it looked a lot like a VT-100 emulator. There were, essentially, text-based games, chats, and files to look at. No credit cards. No money changing hands. No reason for armies of dedicated hackers to look for flaws, since there wasn’t much to steal. The first ‘internet attack’ was an effort by Robert Morris to demonstrate the weakness of internet security. In 1988, Morris turned his worm loose on the internet to measure flaws in security. He made a mistake, and the worm installed multiple copies of itself on computers, slowing them down to the point of crippling them. A network exercise became our first indication that the internet was vulnerable. For some time, hacking was an intellectual exercise to see what the hacker could do.

Fast forward. The first e-commerce systems began in the early 1990s and didn’t become a useful target for thieves until they were widespread enough to make economic sense. Think 1995, with the advent of Amazon. By 2017, total e-commerce worldwide reached $2.3 trillion.

“That’s where the money is.” Now economic incentives for fraud and theft exist in spades.

So, the system is VERY complex, it wasn’t designed at its inception for security, and there is significant economic incentive to find weaknesses and exploit them for profit. That’s where we are today. Credit card fraud is part of the picture. So is the value of user information. So is disrupting users for blackmail (ransomware). Add in ‘state actors’: bad guys with the backing of nation states for exploiting vulnerabilities. (Remember the 2017 Equifax breach? 143 million users’ credit files, and thus identities, stolen… and that information hasn’t ‘surfaced’ in credit fraud activity, stolen identities, etc. The security community is beginning to believe that the data was stolen by, and is being used by, a ‘state actor’ for espionage purposes.)

That is the environment your business or home exists in: a complex, fundamentally insecure system that many, many parties have strong incentives to subvert. All of that ignores the final vulnerability. My team used to do end-user support for a data warehouse application. I looked at the help log files to see if there was a consistent issue that we might need to address. The most common solution to a help call was “PICNIC”. I asked the question: what solution do we use so much of the time? Is there something we can do to fix this PICNIC thing? That’s when I learned that PICNIC is an acronym: Problem In Chair Not In Computer.

That’s the final piece of why this is so blessed hard….


That’s the short answer. A quick search yields the links at the end of the article. The stories and analysis about how bad guys get into your system are dated from 2014 to 2019. Each of the articles discusses the risks for networks from users. There are several aspects to the problem.

IT Department Policy vs. The Business

IT departments are often known as the “NO!” department. They are seen as a roadblock to ‘getting things done.’ It’s getting worse as current application environments make it ‘easier’ or more ‘user friendly’ to get processes or programs started. ‘Shadow IT’ is the term used in business circles for the result: commonly power users who know enough to do some of the things that the business or co-workers want, things IT said no to or didn’t get around to doing.

Understand that the shadow IT department isn’t being evil or malicious. Quite the contrary, the power users are trying to solve problems or improve processes. While Sally may be excellent at using MS Access to set up a great little database to share customer data for her work group, she is, almost without exception, unaware of the security risks that task may expose the business to. To do the task right (read ‘right’ here as securely), Sally would have to understand where the data comes from, where it is going and who will have access; know how to model the database with proper user controls and authentication; have knowledge of potential Microsoft vulnerabilities; be able to ‘lock down’ the code used to access the data; be able to keep users and computers outside of the intended audience from accessing the data; and so on. This kind of activity gives IT departments sleepless nights.

The response from IT departments is often to try to control such activities with internal controls. For example, in my days of operating as a shadow IT department, I understood how hard it would be to build such a database securely, so I tried to buy a developer copy of MS Access… Nope! IT controlled that purchase, since users couldn’t do development. As a shadow IT guy, what am I to do? My boss wants good stuff to happen. Fast. Do I cobble together something that pleases the boss, or do I please IT? Guess where I came down…

Often, IT policies either don’t make sense to users because they don’t understand the environment or the policies themselves are actually counter-productive. Watch me jump on my hobby horse here. This week Microsoft suggested that the IT policy of expiring passwords after a set time period isn’t as effective as other security measures. (https://www.computerworld.com/article/3391365/microsoft-tells-it-admins-to-nix-obsolete-password-reset-practice.html) Hallelujah! Hallelujah! @Steve Marcoux quit reading now….

Back in the day, I ended up as a Classified Systems Security Officer on a number of classified resources. The policy was to change the passwords every 30 days on multiple systems. I’m not that smart. I was left with two options: 1) write the passwords down, which would have been a VERY BAD DAY FOR ME if discovered, or 2) use rather weak and sequential passwords for systems. A minimum of 8 characters with all types? Got it. “Sys#0101”, “Sys#0201” etc. for month one. “Sys#0102”, “Sys#0202” etc. for month 2. Did the policy really increase security, or did it drive inappropriate behavior? Maybe I should check the statute of limitations…
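Some back-of-the-envelope arithmetic shows what that policy actually bought. A rough sketch (the 94-character set is an assumption: roughly the printable ASCII characters):

```python
# Guesses an attacker needs for the sequential pattern above:
# a month (01-12) times a two-digit counter (01-99).
pattern_space = 12 * 99
print(pattern_space)             # → 1188

# Versus a truly random 8-character password drawn from ~94
# printable ASCII characters.
random_space = 94 ** 8
print(random_space > 10 ** 15)   # → True (quadrillions of guesses)
```

The forced-rotation policy shrank the effective search space by about twelve orders of magnitude.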

A possible solution for this paradox is for IT people to articulate the reasons for saying no. Educate the users. Explain the vulnerabilities. Work with users to understand the business need. IT should also consider a more holistic approach to business. Being risk averse isn’t a bad thing in the current environment but some analysis of reasonable risk might be well received.

The Disconnect Between IT Security Mistakes and Resulting Problems

The recent news that the Marriott hotel chain lost control of the data for 500 million guests was certainly shocking. Even more shocking was the news that the black hats had been in the system for 4 YEARS before it was discovered. In the Marriott case, the vulnerability exploited is thought to be a SQL injection bug. In any case, whether it’s a cross-site scripting hack or some other exploit, the length of time between the security failure and the discovery of a problem is reported as being between 150 and 200 days, on average.
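SQL injection, in miniature: when user input is spliced into the query text, the input becomes part of the query. A minimal sqlite3 sketch (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE guests (name TEXT, card TEXT)")
conn.execute("INSERT INTO guests VALUES ('alice', '4111-1111-1111-1111')")

evil = "nobody' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text, so the attacker's
# OR clause becomes part of the query and matches every row.
leaked = conn.execute(
    f"SELECT * FROM guests WHERE name = '{evil}'").fetchall()
print(len(leaked))   # → 1  (every guest record leaks)

# Safe: a parameterized query treats the input as plain data.
safe = conn.execute(
    "SELECT * FROM guests WHERE name = ?", (evil,)).fetchall()
print(len(safe))     # → 0
```

One line of string formatting is the difference between “look up a guest” and “dump the guest table.”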

In other words, if a phishing attack works, how does the user connect the resulting problem to the specific action that caused it? The organization’s users have probably clicked on thousands of attachments before and after the one phishing exploit that succeeded.

Another example is the re-use of passwords. My password manager is controlling credentials for 241 sites! I just checked.

Without some way to manage them, would I be able to remember that many? No way. Especially if they are appropriate in length, randomness and complexity. Enter credential stuffing. Credential stuffing is the newest use (along with cryptocurrency mining) of botnets. Bad guys breach a system, steal users’ credentials (username/password) and try that combination on hundreds of systems to see if they can get in. Now, just for a moment, consider that, to take an example from my vault, I ordered something from ThinkGeek seven years ago. I used my email, which I still use, as a username and entered a password. What if I used the same password, the one I can easily remember, this week for my healthcare provider? Or bank? Or Schwab account? A failure in security by ThinkGeek or Kimpton Hotels or Costcophoto, etc. for a website that I used 5+ years ago that let my credentials out into the wild might compromise a resource I started using this week. How would I know what I had done wrong from a user perspective? It would be really hard to connect the two actions that caused my problem. How do we help users?
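This is the problem password managers exist to solve: a different random password per site, so one leaked credential unlocks exactly one account. A minimal sketch using Python’s standard secrets module (the site names are just examples):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password; each call is independent of the last."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One credential per site: a breach at the long-forgotten shop leaks
# only that one password, never the bank's.
vault = {site: generate_password() for site in ("thinkgeek", "bank", "email")}
for site, password in vault.items():
    print(site, len(password))
```

A real manager adds encrypted storage and autofill, but the core idea is just this: never let two sites share a secret.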

The Importance of Cybersecurity Education for All Users

Education. The 411. Clue them in. Learn them up. Communicate!

  1. Explain why passwords need to be unique to each system or resource
  2. Give them tools to manage and use secure passwords (LastPass!)
  3. Discuss what data is important and the importance of protecting it
  4. Teach the approaches used by bad guys for social engineering attacks
  5. Articulate the dangers of phishing and how to protect against it
  6. Clarify the warning signs of a security compromise
  7. Give clear direction on the kinds of websites that are the most dangerous from a security standpoint
  8. Caution users to Stop, Look and Listen for pop-ups and dialogs that indicate problems
  9. Celebrate questions and caution; reinforce good behavior
  10. Do it all again on a regular basis