Tuesday, January 31, 2017

Tech-Sec Guideline: Open Design and Security by Design

Guideline: Open Design and Security by Design

Part of: Technology Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Security architecture and controls are all developed based on an open design, in such a way that only private keys and passwords are secret. From the design perspective there is no security through obscurity (Kerckhoffs's principle).

Knowing how a castle has been built, with all its walls, bridges, trenches, and towers, should have no impact on the workings of those security features. Only then do the security features hold value against an incoming attack.


The world of Information Technology is no different. Whether you are building software, infrastructure, architecture, or databases, its security should depend only on the safekeeping and secrecy of the private keys and passwords.

Security through obscurity is likely not 100% avoidable, but all efforts should be directed towards reaching that goal. The less you need to keep secret, the less vulnerable you become to leakage of information about the inner workings of your systems. And the less vulnerable you are, the more resilient the environment becomes.
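The principle can be made concrete with a small sketch: message authentication where the algorithm (HMAC-SHA256) is fully public and only the key is secret. The key value and its source are illustrative assumptions, not a real configuration.

```python
import hmac
import hashlib

# Kerckhoffs's principle in practice: the algorithm is open and
# well-studied; security rests entirely on the secret key.
SECRET_KEY = b"example-key-loaded-from-a-vault"  # hypothetical key source

def sign(message: bytes) -> str:
    """Return a hex MAC for the message using the secret key."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"transfer 100 EUR to account 42")
assert verify(b"transfer 100 EUR to account 42", tag)
assert not verify(b"transfer 9999 EUR to account 42", tag)
```

Anyone may read this code without weakening it; an attacker gains nothing unless the key itself leaks.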

More information from OWASP about Security By Design.

Monday, January 30, 2017

Dev-Sec Guideline: Build for ergonomics and usability

Guideline: Build for ergonomics and usability

Part of: Development Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Every system, component, and security measure should be developed to reduce human error as much as possible, with automated controls and checks, and with intuitive, awareness-rich interfaces that prevent errors.

Not only business applications, but also security controls in applications and in general, need to be intuitive to use. When security controls and checks are complicated, people will get annoyed at best, or ignore them at worst.
[Cartoon: Dilbert, Usability, 2007-11-16]
When you want your users to choose complex passwords and avoid commonly used ones, make sure such weak passwords cannot be chosen at all: help them from a technical point of view. Another good example is Google's reCAPTCHA. Instead of making users enter hard-to-read words or numbers, or even solve puzzles, just to prove they are human, Google dramatically improved usability with a single click.

Security can already be annoying by itself, just don't make it harder!

More information from OWASP about Building Usable Security.

Friday, January 27, 2017

Dev-Sec Guideline: Build to not trust endpoint input and services

Guideline: Build to not trust endpoint input and services

Part of: Development Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Do not ever trust input coming from users, browsers, apps, services, APIs, or any other (end)point from a non-trusted source. Always presume the input is malicious: it needs to be validated and, where applicable, sanitized.

This guideline is all about trust. Anything can be trusted, but in some cases such trust needs to be established beforehand. Input from sources you do not control needs to be validated before it is trusted.

In essence, just like the positive security model, it is all about input validation. But where the positive security model focuses more on the security controls themselves, this principle focuses on the data.

Because input validation costs compute power and memory, make an informed decision about whether or not to apply it at every place where data is processed. Just make sure that whenever data comes from users, browsers, apps, services, APIs, or any other non-controlled endpoint, that data has been validated. The validation can vary from verifying that a date really is a date to stripping free-text input of any scripting languages.
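Both flavours of validation mentioned above can be sketched briefly. The regex-based tag stripping is illustrative only; real HTML sanitization should use a vetted library rather than a hand-rolled pattern.

```python
import re
from datetime import datetime

def validate_date(value: str) -> datetime:
    """Check that the input really is a date (mm/dd/yyyy)."""
    return datetime.strptime(value, "%m/%d/%Y")  # raises ValueError otherwise

SCRIPT_RE = re.compile(r"<\s*/?\s*script[^>]*>", re.IGNORECASE)

def sanitize_free_text(value: str) -> str:
    """Strip <script> tags from free-text input (sketch only)."""
    return SCRIPT_RE.sub("", value)

assert validate_date("01/31/2017").year == 2017
assert sanitize_free_text("hi<script>alert(1)</script>") == "hialert(1)"
```

Note the difference: the date check rejects anything that does not match (validation), while the free-text check transforms the input into something harmless (sanitization).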

More information from OWASP about Don’t trust user input and Don't trust services.
More information from Teusink.eu about Input Validation for Web-applications.

Thursday, January 26, 2017

Dev-Sec Guideline: Build to not trust infrastructure

Guideline: Build to not trust infrastructure

Part of: Development Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Do not trust that infrastructure and platforms are fully operational, and do not rely solely on their security. Expect that they could go down, or lose capacity, performance, or security, at any moment in time. Build for resilience.

This guideline is meant to make developers aware that they should not lean on the infrastructure and platform for the security (or any other quality aspect, for that matter) of the applications being developed.


It is possible that the infrastructure and platform supporting an application are incredibly secure while the application itself is not. The application can be compromised despite the security of everything else. And when applications get compromised, data usually leaks.
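One way to avoid leaning on the platform is to protect sensitive data in the application itself, so that even a compromised data store leaks nothing usable. A minimal sketch of application-level password hashing; the iteration count and salt size are illustrative parameters, not a recommendation.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a salted hash in the application, not in the database."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time check against the stored salt and digest."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("s3cret-example")
assert check_password("s3cret-example", salt, digest)
assert not check_password("wrong-guess", salt, digest)
```

The database may well have excellent access controls of its own, but the application does not depend on them: only salts and digests are ever stored.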

More information from OWASP about Don’t trust infrastructure.

Wednesday, January 25, 2017

Dev-Sec Guideline: Build to fail securely

Guideline: Build to fail securely

Part of: Development Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Whenever something fails, it should fail securely, meaning that in no situation is the (overall) security lessened by the failure. A hostile environment (both internal and external) should always be assumed.

This guideline is not about failing safe. Failing safe means that functionality resumes when a certain control fails to operate; failing secure is the opposite of that. An example can illustrate this better.
A firewall is a security control that can be placed, for instance, between the Internet and your internal network. When this firewall fails to operate, a couple of things can happen: either the entire internal network can access the Internet and vice versa (fail safe), or access to the Internet is shut down entirely (fail secure).
[Image: Peter Steinfeld]
When developing security controls (firewalls, input validation, access management, etc.), always aim for fail secure, so that when your security control is attacked, attackers will not gain more access by destroying it.
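In code, fail secure usually means "deny on error": any exception inside the control results in denial, never in access. A minimal sketch, where the policy-store interface is a hypothetical name for illustration:

```python
def is_allowed(user: str, resource: str, policy_store) -> bool:
    """Grant access only when the policy explicitly says 'allow'."""
    try:
        policy = policy_store.lookup(user, resource)  # may raise on outage
    except Exception:
        # The control itself failed: deny (fail secure), never allow.
        return False
    return policy == "allow"

class BrokenStore:
    """Simulates a policy database that is down."""
    def lookup(self, user, resource):
        raise ConnectionError("policy database unreachable")

assert is_allowed("alice", "/admin", BrokenStore()) is False
```

The fail-safe variant would return True in the except branch to keep the business running; which failure mode is right depends on whether the control protects availability or confidentiality.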

More information from OWASP about Fail securely.

Tuesday, January 24, 2017

Dev-Sec Guideline: Build a positive security model

Guideline: Build a positive security model

Part of: Development Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Wherever possible, security should work based on whitelisting: specifically allowing access (a positive model). Only when maintaining such a whitelist proves to have too big an impact on maintainability may security work based on blacklisting (a negative model).

This guideline is about things like input validation. When building systems, it is wise to consider whether you can predict or define the values that should be allowed. If so, defining a positive security model (a whitelist) is the most secure way to go. This can be done implicitly or explicitly.

An example of explicit whitelisting is that, for a date field, only the value 01/01/1980 is allowed. An example of implicit whitelisting is that, again for a date field, the value needs to comply with the format mm/dd/yyyy: the value still has to be a date, but any date will suffice.
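The two flavours above can be sketched directly, reusing the date examples from the text:

```python
from datetime import datetime

# Explicit whitelist: only values from a known, enumerated set pass.
EXPLICIT_ALLOWED = {"01/01/1980"}

def valid_explicit(value: str) -> bool:
    return value in EXPLICIT_ALLOWED

# Implicit whitelist: any value passes as long as it matches the
# expected mm/dd/yyyy format.
def valid_implicit(value: str) -> bool:
    try:
        datetime.strptime(value, "%m/%d/%Y")
    except ValueError:
        return False
    return True

assert valid_explicit("01/01/1980") and not valid_explicit("01/02/1980")
assert valid_implicit("01/24/2017") and not valid_implicit("24/01/2017")
```

Everything not explicitly permitted is rejected; the two functions differ only in how the permitted set is described (enumeration versus format).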

Positive security models can also be about allowing only specific behaviour patterns in applications or websites. It is thus all about defining what is allowed and rejecting the rest.

More information from OWASP about Positive security model.
More information from Teusink.eu about Input Validation for Web-applications.

Building a set of Guidelines for Security and Privacy

I often receive the question of what should be done in terms of security and privacy, and the pitfall is that you respond either in overly abstract terms or in overly specific detail. When I was thinking about a security policy, I noticed that people rarely read them, and I can understand that. Not everyone wants, or needs, to read specific documentation regarding the implementation of encryption.
And in any case, security rules should not lead to disabling the business or blocking change, but to enabling the business. They should be felt to be necessary to protect the business, rather than a checklist just to satisfy the auditor.

So I took the formal corporate policy and translated it into Security and Privacy Guidelines. It is a manifesto of sorts, and it fits on one page. In essence, these are the 'spiritual' guidelines with which you can enhance the security and privacy of the work you do.

This has led to the following hierarchy in documentation, including the primary target audience.

Document    Focus          Target audience
Policy      Governance     Senior Management, Legal, Compliance, and Auditors
Guidelines  Manifest       Business and IT
Standards   Non-technical  Business and IT
Baselines   Technical      IT
The guidelines are outlined in four main topics, each of which consists of four to five guidelines (see below). In the upcoming posts I will focus on the guidelines I have set, and for which I am something of a missionary in my organization. I will do that one guideline per post.

If you have any feedback whatsoever, please let me know!

--

Development Security Guidelines

  1. Build a positive security model
  2. Build to fail securely
  3. Build to not trust infrastructure
  4. Build to not trust endpoint input and services
  5. Build for ergonomics and usability

Technology Security Guidelines

  1. Open Design and Security by Design
  2. Defense in Depth and Ubiquitous Security
  3. Compartmentalization
  4. Safe defaults and Hardening
  5. Patch and Life Cycle Management

Information Security Guidelines

  1. Least Privilege
  2. Segregation of Duties
  3. Complete Authorization
  4. Information Cryptography

Privacy Guidelines

  1. Only collect with consent
  2. Only collect for purpose
  3. Destroy after use
  4. Only enrich within purpose
  5. Designate Data Ownership