This article was originally published on HelpNetSecurity on April 16, 2020.

Finding security holes in information systems is as old as the first commercially available computer. Back when a “computer” was something that sat in a computer room, users would try to bypass restrictions, sometimes simply by trying to guess the administrator’s password.

Later, when Bulletin Board Systems (a primitive precursor of the Internet) became popular, BBS users searched for ways to gain further access so they could view private files, and invented the first phishing attacks – a technique familiar to many 21st-century computer users as the method used to hack into the DNC’s computers just before the 2016 elections.

The origin of the network virus

Back in 1988, when the entire “Internet” was merely 60,000 computers, the first network virus was unleashed. Of course, computer viruses themselves date back to the early days of the personal computer; the first PC virus was written by an IT shop in Pakistan that wanted to earn money fixing computers – which possibly makes the Farooq Alvi brothers the very first black-hat IT security vendor.

Most of the security concepts we grapple with today date back to the 70s: passwords and access control; malicious code; software bugs leading to privilege escalation attacks.

That might make you think that “nothing is new under the sun” when it comes to Internet security. But quite the contrary: while the game has stayed the same, the rules have changed.

Information security in the 2010s

From the first security bugs until the recent past, security was a game with a clear winner and loser. If the attacker got in, the bad guy won and the good guy lost.

Our job as information security experts and presumed good guys was to find those security vulnerabilities and help fix them. The premise was that security could be achieved – i.e., that there was a process you could follow to be reasonably secure and safe from most attackers. This also meant that a security attack was a failure – a catastrophic one.

But the 2010s changed all that: security breaches are still a failure, but no longer a catastrophic one. A security breach is now one of those bad things that happen in corporate life – something you try to prevent but also accept as a possibility. In other words: information security is now part of mature corporate life.

Hacking contests and The Matrix

It wasn’t always so. Back in the 1980s, I had a notebook where I wrote down the details of every virus in existence, along with instructions on how to remove them. It wasn’t a thick notebook.

Around that same time, John McAfee, who later founded the company that still bears his name, would drive around in a van and manually scan computers for viruses (I guess he must have had a notebook similar to mine).

In those days, a computer was either infected by a virus or it wasn’t; if it was, there were a series of steps you could take to make the computer clean again. Like every other aspect of computing, security was a binary state.

We took a similar view of access control (some passwords were safe, some weren’t), encryption, network services, network protocols and more. Some things were “safe” and some were not. Either one or zero.

When viruses gave way to security vulnerabilities as the main worry for IT staff, we started along a similar route – a set of predefined tests that would indicate if a computer was vulnerable.

When vulnerability scanners were first introduced, there were hundreds of security vulnerabilities you needed to check for. It was too many to write in a notebook, but it stood to reason that if you ran a vulnerability scanner and did not find any security vulnerabilities, you were safe.

As recently as the early 2000s, my company ran public “hacking contests” that were a sucker’s bet: we challenged attackers to break into a public system on the Internet that had been checked for security vulnerabilities and found clean.

We knew that unless they had access to NSA-level tools, a potential attacker wouldn’t be able to break in. Life was still pretty binary, and we didn’t expect it to change. The 2003 Matrix sequel showed Trinity, the brilliant hacker from the future, attacking the villains using a security hole that was already known and easily fixable; we all chuckled at how hapless the futuristic Matrix villains were for falling into such an easily avoidable trap.

A game we can win

The 2010s came and changed the way we security professionals see the world. First, the speed at which security holes were discovered increased rapidly: some 1,000 security holes were discovered and made public in the year 2000, while in 2018 that number was over 16,000 – more than 40 new security holes per day.

Our definition of “computer” also changed: phones, smart TVs, thermostats, light bulbs and cars are all computers with potential security vulnerabilities. The explosion happened on both axes: the number of vulnerabilities multiplied by the number of computer assets means that an average organization can no longer hope to fix every security hole, only to manage them. In other words: the best we can do is limit our exposure.
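
To make that scale concrete, here is a toy, purely illustrative sketch in Python – the asset classes, counts and the simple count-times-criticality scoring are invented for this example – of why “fix everything” no longer adds up, and why exposure ends up being ranked and managed instead:

```python
# Toy illustration (all numbers and the scoring formula are invented):
# why "fix every hole" no longer scales, and how exposure is ranked instead.

assets = {
    "laptops":     {"count": 5000, "open_vulns": 35, "criticality": 2},
    "web servers": {"count": 200,  "open_vulns": 60, "criticality": 5},
    "smart TVs":   {"count": 300,  "open_vulns": 12, "criticality": 1},
    "thermostats": {"count": 150,  "open_vulns": 8,  "criticality": 1},
}

# Total open findings: vulnerabilities multiplied by the assets they live on.
total = sum(a["count"] * a["open_vulns"] for a in assets.values())
print(f"Open vulnerability instances across the fleet: {total:,}")

# Nobody fixes hundreds of thousands of instances one by one; instead,
# rank asset classes by a simple risk score and work down the list.
def risk_score(a):
    return a["count"] * a["open_vulns"] * a["criticality"]

for name, a in sorted(assets.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{name:12s} risk score {risk_score(a):,}")
```

Even with these made-up numbers, the fleet carries almost 200,000 open vulnerability instances – far more than any team can patch one by one, which is exactly why the job becomes prioritization rather than elimination.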

This may sound like we’ve reached a tipping point: have we lost the arms race to the black hats? If every organization has a security hole, we are all vulnerable, all the time. Why even play the game if you’re destined to lose? Some self-proclaimed high priests of information security, usually remnants of the 20th century or echoing its old wisdom, will tell you that “no system is secure”. But that’s only true if your world is binary, and ours isn’t.

In fact, for the exact reason that a security breach is now a real possibility, it is also no longer the apocalyptic scenario it was back in the early 2000s. The evolution of security testing and protection systems also helps us cope with breaches: multiple layers of security and the ability to alert, log and block attacks mean that both sides bear costs, whether attacking or defending. Instead of a chess game with a winning and a losing side, this is more like a perpetual tug-of-war: as long as both sides apply constant effort, it’s quite possible that no one will score a definite win.

And that’s a good thing.

The high priests of security

Good and bad as definite concepts belong in the religious realm. Back in the old days, security advocates were, in many ways, priests of an evangelistic religion.

We spent our days trying to convince agnostic managers to believe in something they couldn’t always see: the need for security in computing systems. There were many apocalyptic prophecies about what non-believers would suffer if the proper rituals weren’t followed; many of us believed that computer breaches happened to those who “deserved” to be punished. Those non-believers were not committed enough, or they didn’t follow the recipe for salvation.

But that was then. In this day and age, no half-competent manager believes information security is unimportant – our evangelism is no longer necessary. Information security is now in the corporate mainstream.

In the corporate mainstream, risk is ever-present. It was famously said that “The Limited Liability Company is the most important invention since the wheel” – and this is because companies take risks all the time.

Apple is worth over a trillion dollars, yet there is a non-zero probability that it could go bankrupt tomorrow; all Apple can do is limit its corporate risk and keep doing business.

Finally, decades after the first computer virus, information security has reached a similar maturity: we can no longer guarantee zero risk, but we don’t have to.

Information security is no longer an external component measured by its budget or headcount. It is finally part of the overall corporate governance structure, like finance, legal and HR.

In the age of technology and data, information security is certainly a critical component, but still just a component. Managers should devote attention and mindshare to securing their infrastructure and data, knowing that not every mistake warrants capital punishment. We have moved away from the binary “safe or unsafe” to a more nuanced model of risk management and reduction. In that, we are less religious priests and more corporate professionals – just in time for the new roaring 20s.
