Tuesday, March 14, 2017

With Internet-of-Things, the consumer is not a customer but a supplier!

Authors: Joram Teusink, Rick Veenstra

In the soon-to-be world in which everything is connected to everything, we will face quite some (unforeseen) challenges. One of those challenges relates to protecting the privacy and security of the system as a whole. Another is the role of the consumer. To quickly sum it up in a single statement:

In the world of the Internet of Things, the consumer is not (only) a customer but (also) a supplier.


In this blog post we will focus on why we make this statement and why we think this new paradigm holds true. We will talk about the fundamental shift in the way we think, or should think, about data and privacy.

You own your devices

This sounds rather plausible, right? And it is. With every device you buy, you take on the responsibility of owning it. Whether it is connected to anything beyond the power cord or not, by paying the bill you accepted ownership of the 'thing'. This has many consequences, but we'll first address the issue of data ownership.

Do you own the data?

Imagine the device you bought is connected to a data network. Not exactly a mental stretch with a 'thing' we refer to as a device in the 'Internet of Things'. This device will generate or accumulate data. This might be simple sensory data or more complex data structures that describe behavioral patterns. It says something about who you are and what you do. This data is often considered sensitive, or at least personal. In the near future, this will certainly extend to data we have not yet dared to dream about.

For now, this may be data from your thermostat that is building up a profile of your presence and the temperature of your room, or from your 'smart' TV that is building up a profile of what you like to watch. Another popular application is home security, such as video cameras that continuously register what is going on in your most private environment -- which is, by the way, an intrusion into your privacy that police and intelligence agencies are only allowed to commit with a legal warrant. All this collected data is privacy-sensitive. It becomes even more sensitive when combined with data that is collected by other devices you may own or services you use.

This data is valuable to you. Or at least it should be. Most people do not want their conversations with their lovers to be openly shared across social media -- especially not if that conversation is spiced with photographic material of a certain private nature. But that is not the only data that is valuable. Almost all data about you should be valuable to you and treated as sensitive. "Why?" you might ask. Because this data can be used to influence you. Your data is being used in ways that cause no direct damage, at least no damage that you could easily identify. But the way you are being influenced is not necessarily in your direct interest; it is primarily aimed at increasing the profit of some company.

Companies are willing to spend substantial money to get access to this data, because they can use it to tailor their ways of influencing your behavior, which is the very nature of advertising.

There are many business models in which large corporations get access to your sensitive data. But for now, we will stick to the one that is the easiest to grasp and is the basic premise that most people base their use of data-collecting devices on:

You own the devices; therefore, you are the owner of all data that is collected by these devices.

Are you selling your data?

Now follow us in the next step. You own and operate devices that collect data about you. What are you doing with this data?

When you use devices that collect all this data (you own both the devices and the data), do you use them commercially? It depends a bit on the privacy policies you agreed upon with the supplier of the services, but the answer is very likely "yes", even when you are just an individual and not a company. This may not be the way you look at it now, so let us explain why we make this statement.

Most likely you share this data with a company, in exchange for a service or an 'enhanced experience' as they so exquisitely frame it. This company may be the one that built the device or provided it to you; frequently these things are bundled with an on-line service directly coupled to the device. It may also be a third party, which uses the data to enhance your experience of the device or to provide additional (value-adding) services. In both cases, you provide data in exchange for a service.

Usually this can be characterized as one of four types of supplier-customer relations:

  1. You have paid for the service with a one-time fee at the purchase of the device. This usually covers a limited period of time, although this is rarely made explicit. Degradation of service over time is almost guaranteed; the older the device, the flakier support tends to become. Your smartphone or tablet is probably the clearest example.
  2. You pay a recurring subscription fee, which is usually supposed to cover all operational costs of the service provider. This model is the traditional pay-as-you-go service model that has been around since way before the internet was conceived and is still the most viable for the supplier in the long term.
  3. You do not pay any monetary fees, but your usage of the services adds to the momentum and user base for the service provider's paid services, which cover the cost of the free services as well.
  4. You do not pay any monetary fees, but allow the service provider to use your data to build profiles that it will monetize at its own discretion, i.e. targeted advertising. This case is covered by the adage "if you don't pay for the product, you are the product".
So, whether it is in exchange for money or not, you actually trade (sell) your data.

How does this affect your position in the 'supply chain'?

Our argumentation up to this point can be summarized in three statements:
  • you own the IoT-device; 
  • you own the data it collects; 
  • you trade the data.
If you own stuff that collects or generates data and you trade it with a company, you are essentially a service provider. You are the supplier of your data, and you sell it in exchange for either money, features or services! The other party is the consumer, since it 'consumes' your data. This might sound very silly or strange, so let us explain why we think this holds true.

  1. There is a supplier who sells IoT-devices. It supplies you with equipment, with 'things'. When you buy something, you are the customer.
  2. Upon enabling the IoT-device to provide data to the supplier's services (or those of a third party), you become a supplier yourself: a supplier of data, to be precise.
  3. Your customer is in many cases the supplier of your device (the equipment), now in the role of the one who is utilizing your data.
So, in this situation you are performing four roles:

  1. the customer in buying the device;
  2. the supplier of data by trading your data with your customer;
  3. the customer in paying for services which utilize the data generated by the device;
  4. the consumer of the services.
Traditionally we only think about #1 and #4 and consider them as one single role. As a consequence, role #2 is obscured; it is rarely more than an afterthought, when it is considered at all. That is why we strongly believe this paradigm shift is needed, or at the very least should be debated.

Because you as a customer also perform role #2, we can state:

In the world of IoT, the consumer (of a device) is a supplier (of data).

The consumer as data supplier: privacy considerations

If you consider the data flow we described as a supply chain, you make the shift from a customer/consumer-centric view to a set of demand-supply relations that is very common in the corporate world. And if you are familiar with data protection in this context, you might get a little itchy. Because this viewpoint has quite some implications for accountability and liability regarding data protection and security.

If you are a company and selling data to a data consumer (your customer), you are required to do at least four things:

  1. specify the Terms of Use for the data;
  2. obtain consent of the data subjects (the individuals about whom you collect data) to these Terms of Use;
  3. cover the use of this data by a legally binding agreement with your customer;
  4. take every reasonable precaution to ensure that the data is only used by authorized parties and only for the intended purpose for which it has been collected. This is the topic of Information Security and Privacy.
These obligations are all within the scope of the EU General Data Protection Regulation (a.k.a. the EU-GDPR).

If you are an individual, requirement #2 may be considered implicitly satisfied, since the data subject is the very same entity that provides the data. This in no way limits your responsibility regarding data protection (requirement #4). It is just rather difficult to sue yourself for inadequate performance or non-compliance.

But when you provide services with collected data about other subjects that you do not legally represent, this can (at least theoretically) get very ugly very quickly. If you want to make it really complicated, you might add another ingredient to this cocktail: the right to be forgotten that is embedded in the EU-GDPR. But that's another topic for someone with a legal background.

We'll now step into the complications of requirement #4: the obligation to protect the security of the data you collect and distribute.

Ownership of a device: the obligations

Okay, you purchased a device and connected it to the Internet of Everything. As we have shown, you have now become a service provider. You are going to provide data to your customer. Data that will probably have privacy-sensitive characteristics, so you are required to protect it from unauthorized access and use.

By paying the bill you accepted ownership of the 'thing'. In doing so you made yourself accountable for all benefits and costs that come with its very existence. You are not only 'responsible' for reaping the benefits, but also for the burden of operations and maintenance during its entire life cycle. Even if you manage to outsource this (which for consumer-grade IoT is not yet likely, if possible at all), in the end you remain accountable.

If you own the device and you own the data, then you are responsible for its security. No one else but you, really! European privacy legislation is based on this very principle. Whether you generate Personally Identifiable Information (PII) or are the custodian of PII that is entrusted to you by its subject (the individual about whom the data has been collected), you are what the EU-GDPR calls the 'controller' and ultimately accountable for the entire supply chain.

We could collapse role #2 (the supplier of data) with role #4 (the consumer of the processed data). Seen this way, the device owner (being both supplier and consumer) outsources a part of the data processing, probably to the provider of 'value-adding services'. Dutch privacy legislation, at least, requires you to cover this processing with a legally binding Data Processing Agreement (DPA). This should specify the responsibilities (not the accountability) transferred to third parties. You mandate other companies to handle data you are responsible for. You set constraints on the exact 'processing activities' that will be performed and the conditions under which they will perform them.

At least that is the theory. We all know that only in theory there is no difference between theory and practice; in practice, there is. But alas, it is not only the theory but also the law.

But consumers buying things and using the services accompanying them do not sign such a DPA with their third parties. At best, they accept the Terms of Service that govern those services. And let's hope that is not an instance of what we call 'the biggest lie on the internet': clicking a checkbox stating something like "I have read the (...) and accept to be bound by this" without reading the text. It is quite likely that these terms of service simply contain a full waiver to the service provider for all responsibilities that should be covered in a Data Processing Agreement.

Where the snake bites its own tail

Now we have the situation where the consumer remains ultimately responsible for data processing that he has completely outsourced. Even worse: the processing has been outsourced in a way that has essentially stripped him of any control over it.

In your role as a consumer of services, you subscribe to a service using your PII. You agree that this service provider will be processing your sensitive data, and you would require adequate protection of this valuable data. Now suppose your service provider had outsourced the data collection to a party that is unable to adequately protect this data as it is generated, collected, stored and transferred. That would be unacceptable, because the supplier would be violating the EU-GDPR.

But in this case, you are the data provider yourself. You as a service consumer agree that the service is delivered by an unreliable data provider, and therefore you cannot hold the service provider accountable for any consequences of this data provider failing. Ridiculous if a third party were involved, but legally valid if you are the data provider yourself.

Conclusion

The way we look at the Internet of Things --especially in the consumer domain-- needs a fundamental change in perspective. If we consider the data flow generated by IoT-devices as a supply chain, we make the shift from a customer/consumer-centric view to a set of demand-supply relations. This is a rather uncomfortable position, because it has quite some implications for accountability and liability regarding data protection and security.

At the same time, consumers (including ourselves) are not acting in line with the roles that this viewpoint uncovers. We end up being service providers who are completely responsible for data protection over the entire supply chain. Yet our customers and suppliers are usually not bound to any obligation to protect this data. And we lack the tools to do it properly ourselves.

Thinking of IoT as a chain of demand-supply relations may help to identify systemic weaknesses. It may turn out to be a good start for finding effective strategies to fight the undesirable exploitation of these weaknesses.

About the authors

Joram Teusink and Rick Veenstra are both Information Security Officers and close friends.

Wednesday, March 8, 2017

Book review: Hacked Again, by Scott N. Schober

The book Hacked Again is all about learning your security lessons, and then learning them again and again. Scott N. Schober, the author of this book and CEO of Berkeley Varitronics Systems, talks about how his security company fell victim to a hack. Twice, to be precise.

He starts off talking about learning it the hard way. Trusting your bank is important, but did you ever consider what that trust is based on? And what about opting in to credit card payments for your customers? Can that be exploited in any way? And if it is, how easily can you be compensated for any (financial) damages? After the second hack on his company, Scott decided to answer those questions and moved to another bank.

Website security is another topic he addresses. Your business is likely to be run through a website; therefore, its security needs to be validated and continuously improved. Failing to do so can (or will) lead to breaches, which will impact your reputation and eventually your financial situation. Besides that, he also warns about the dangers of not being careful in how you use the great benefits of social media.

But how do you protect yourself and your business? To answer this question Scott starts off with social engineering. Why? Because most often the human proves to be the weakest link, and we tend to give away more information than we would like to admit. Disconnect your credentials from any personal information, directly and indirectly. This makes it more difficult to turn social data into means of breaching your accounts. Because who likes to be grabbed by a hacker's hook in a phishing attempt?

Malware also takes the stage in his book. A good number of pages are devoted to the subject (rightfully so!), and it all comes down to this: patch everything, use a malware scanner, and do not click on links or attachments from unknown or untrustworthy sources.

Strong passwords trump lazy hackers! And I could not agree more. There is much debate on what a strong password is, but in any case, a lengthy one is a good starting point. Password re-use and password guessing are among the core ingredients of successful breaches.

Wireless networks, or so-called Wi-Fi, are another big threat, especially public ones. There are many ways in which they can be exploited, and one should always be careful before connecting to one. In case you build your own Wi-Fi network, embrace the best practices for security, such as strong Wi-Fi passwords and an obscured SSID. It is by no means a foolproof system, but it at least makes attacks harder.

Scott ends his suggestions with topics like layered security, meaning: have a multitude of security controls throughout your network. Do not rely on just one or two controls; make sure one steps in when another fails. He also stresses that security is everyone's business. We cannot lean on the government for protection. The Internet is decentralized and therefore comes with weaknesses in that regard, meaning that the defense has to be decentralized as well.

The book ends with notable hacks and breaches, such as the Target breach, the JPMorgan Chase breach, the iCloud cyberhack, the Sony cyberbreaches and the hacks at the Office of Personnel Management (OPM).

All in all, a nice read. Especially for business owners, CEOs, board presidents and so on. Read it, listen to your security experts, and prevent financial losses your company cannot afford!

First released: March 2016
Pages: 187
ISBN: 978-0-9969022-1-2
Link: scottschober.com


Monday, February 27, 2017

This blog not affected by the memory-leak of Cloudflare

In the post 'Implementing https on Blogger using Cloudflare' I described how I utilize Cloudflare services to protect this blog. Unfortunately, we all noticed that Cloudflare suffered from a tremendous memory leak, which was announced on Friday, February the 24th.

This leak (also referred to as #cloudbleed) has been covered extensively elsewhere. On February the 24th I got confirmation that this blog has not been affected by the memory leak in the HTML parser of Cloudflare.

So, no worries in that regard concerning this blog.

Thursday, February 16, 2017

Privacy Guidelines: Consent, Purpose, and Retention

Guidelines: Consent, Purpose, and Retention

Part of: Privacy Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Below are the guidelines for privacy, covering the elements of consent, purpose and retention. Whenever 'organization' is mentioned, you can read businesses, healthcare and government. Whenever 'people' is mentioned, you can read customers, consumers, employees, patients and clients. Data in this case falls in the category of Personally Identifiable Information (PII), which is subject to national and EU laws and regulations.

Only collect with consent

Data is only collected with consent of the subject of the data.

Meaning that you don't collect explicit or implicit data on people without their knowledge and without their consent to do so.

Only collect for purpose

Data is only collected for the business function that it is strictly needed for.

In essence, make sure that you don't over-collect data about people. Data that is not needed for the business to operate is data that should not be collected.

Destroy after use

Data is only kept for the time that it is strictly needed for the processing or as required by law.

Don't keep data about people longer than needed for the purpose. Whenever the organization-people relationship has ended, delete the data, and keep only the data that you are required to keep by law. Make sure the retention time stays within those boundaries.
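
To make this concrete, here is a minimal sketch of how such a retention rule could be automated. The record structure and the retention period are hypothetical placeholders; the real values must follow your legal obligations.

```python
from datetime import datetime, timedelta

# Hypothetical retention period; the real value must follow your legal obligations.
RETENTION = timedelta(days=365 * 7)

def purge_expired(records, now=None):
    """Keep active relations and anything still inside the retention window."""
    now = now or datetime.utcnow()
    return [r for r in records
            if r["relation_ended"] is None
            or now - r["relation_ended"] < RETENTION]

records = [
    {"name": "Alice", "relation_ended": None},                  # active relation
    {"name": "Bob",   "relation_ended": datetime(2009, 5, 1)},  # long expired
]
print([r["name"] for r in purge_expired(records)])  # ['Alice']
```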

Only enrich within purpose

Data enrichment is only done within the context of the initial collection and consent of the data.

You can profile and track people to an incredible extent. Besides the fact that you need consent for this profiling, the profiling itself may not excessively step outside the boundaries of the purpose of your organization.

Designate Data Ownership

Data always has an owner, or at the least a steward, who upholds the Security Guidelines and Standards.

Data, just like systems and services, needs an owner. Data can travel through many systems, and although all those systems might have owners, they can't actually own the data because it is shared. When data has an owner, it can get the proper attention it needs for things like consent, purpose and retention.

Wednesday, February 15, 2017

Info-Sec Guideline: Information Cryptography

Guideline: Information Cryptography

Part of: Information Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Data transport channels over which any type of information is being transmitted using a publicly available communications channel must always be encrypted using modern and open standards. In addition, data transported by physical means is always done using an encrypted carrier.

Wikipedia: The Enigma machine
Encryption is one of the key security controls that can help keep your data safe from prying eyes. It is as old as the ancient Romans and Greeks, and it played a major role in World War II with the Enigma machine. But encryption has a (sometimes significant) trade-off: resources. It costs time and energy to do its work, and the compute time varies across algorithms, key lengths and implementations. Despite these trade-offs, the mindset should be: "Encrypt Everything!"

Whenever data is transported on physical carriers, at least the carrier itself should be encrypted. Think about USB sticks utilizing hardware-based encryption. In addition, think about encrypting the data itself as well. When data is transported over communications channels, the channel over which the data flows should be encrypted. This can be done in multiple ways, depending on the needs and available options.

In short, make sure that no one can eavesdrop on your data, and encrypt the means that are used for transport.
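
As an illustration, a minimal sketch of encrypting the data itself before it goes onto a physical carrier, using the Python cryptography library. The file names are hypothetical, and key storage and distribution are left out:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key once and store it safely, separate from the carrier itself.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the data before it is copied to e.g. a USB stick.
plaintext = open("report.pdf", "rb").read()            # hypothetical file
open("report.pdf.enc", "wb").write(fernet.encrypt(plaintext))

# The receiving side needs the same key to decrypt.
restored = fernet.decrypt(open("report.pdf.enc", "rb").read())
assert restored == plaintext
```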

More information from Teusink.eu about Encryption and Hashing.
More information from Wikipedia about Cryptography and Encryption.

Tuesday, February 14, 2017

Info-Sec Guideline: Complete Authorization

Guideline: Complete Authorization

Part of: Information Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Every form of access is based on a complete authorization scheme (identification -> authentication -> authorization) and authorization is never implicitly granted.

When access to a system is needed, authorizations are required. Such authorization can be granted implicitly or explicitly. Implicit authorizations are dangerous, because the system assumes that since you are on the network or in the domain, you can access everything else within that context. Explicit authorization prevents that: even when you have access to a system or domain, specific authorizations are still validated before (further) access is allowed.

In a hospital information system, an assistant might access contact details of a patient. But it would violate the patient's medical confidentiality if the assistant could also access the medical records stored in the database. Therefore, the person's identity must be authenticated, and based on that, authorizations need to be granted or denied. Based on this information the doctor would see more information than the assistant.

Explicit authorization can be done with security tickets or tokens traveling with the identity. But the mere fact that the token exists should not lead to access: the ticket or token should be validated for proper authorization before access is granted.
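
A minimal sketch of what such an explicit, deny-by-default check could look like. The roles, resources and token validation below are simplified placeholders, not a real authentication stack:

```python
# Explicit allowlist: which role may access which kind of record.
AUTHORIZATIONS = {
    ("assistant", "contact_details"),
    ("doctor", "contact_details"),
    ("doctor", "medical_record"),
}

def authenticate(token):
    # Placeholder for real token validation (signature, expiry, revocation).
    return {"role": "assistant"} if token == "valid-token" else None

def is_authorized(token, resource):
    identity = authenticate(token)   # 1. the token itself is validated first
    if identity is None:
        return False                 # 2. unauthenticated means no access at all
    # 3. a valid token alone is not enough: the authorization is checked explicitly
    return (identity["role"], resource) in AUTHORIZATIONS

print(is_authorized("valid-token", "contact_details"))  # True
print(is_authorized("valid-token", "medical_record"))   # False: deny by default
```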

More information from Wikipedia about Authorization.

Monday, February 13, 2017

Info-Sec Guideline: Segregation of Duties

Guideline: Segregation of Duties

Part of: Information Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Wherever possible, there must be segregation of duties between positions of employees that have a conflict of interest with each other. Such duties should be divided over different employees.
Image of sqlity.net
This guideline is about preventing mistakes and unauthorized transactions, which lead to integrity issues or fraud. The developer who submits code to the repository should not be the one who can accept it; a second developer should accept (or decline) the submission to establish the four-eye principle. Another example is a bank employee requesting a loan for a customer: another employee should accept (or decline) the request.

The main goal is to make sure that no single person is able to both initiate and complete a (business) process, especially where financial transactions are involved or where the integrity of data or systems is important to maintain.
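
As a sketch, this is how a repository or workflow tool could enforce the four-eye principle; the data structure is hypothetical:

```python
def approve(change, approver):
    """Enforce the four-eye principle: an author may not approve their own change."""
    if approver == change["author"]:
        raise PermissionError("segregation of duties: author cannot self-approve")
    change["approved_by"] = approver
    return change

change = {"id": 42, "author": "dev-a"}
approve(change, "dev-b")     # fine: a second pair of eyes
# approve(change, "dev-a")   # raises PermissionError
```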

More information from Wikipedia about Separation of duties.

Friday, February 10, 2017

Info-Sec Guideline: Least Privilege

Guideline: Least Privilege

Part of: Information Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Employees and (automated) processes should only have the access rights that are explicitly needed to carry out their functions and duties in the context of the work that needs to be done.

This guideline seems an open door, but in many cases it is not. Least privilege is a very important aim, but due to many (business) processes there is often a so-called 'authorization creep': over the course of the life cycle of an account (if there even is one), authorizations are constantly added.
Meme Generator
Most notable are user accounts. An employee gets hired, perhaps as a trainee, and the account likely starts with the least-privilege principle properly applied. But when the employee moves to other positions in the company, the account gets additional authorizations. The authorizations that should be revoked, however, often are not, resulting in an account that can be used for many more purposes than the user's current position in the company warrants.

Whenever a user, administrator or service account is created, apply life-cycle management throughout the account's entire life cycle. When the purpose of an account changes, also change its corresponding authorizations.

You can even take it a step further and make privileges context-aware. Limiting privileges based on context (when you might need access to data and when not) is a way to orchestrate the 'need to know' principle.
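
A minimal sketch of countering authorization creep: when an account changes purpose, its rights are replaced from the role definition rather than accumulated. The roles and rights below are hypothetical:

```python
# Hypothetical role definitions: each role maps to exactly the rights it needs.
ROLE_PERMISSIONS = {
    "trainee":    {"read_reports"},
    "accountant": {"read_reports", "submit_invoices"},
}

def change_role(account, new_role):
    """Replace, not extend, the rights when the purpose of the account changes."""
    account["role"] = new_role
    account["permissions"] = set(ROLE_PERMISSIONS[new_role])  # old rights are dropped
    return account

account = {"user": "jdoe", "role": "trainee",
           "permissions": set(ROLE_PERMISSIONS["trainee"])}
change_role(account, "accountant")
print(account["permissions"])  # only the rights the new position needs
```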

More information from Wikipedia about Principle of least privilege.

Tuesday, February 7, 2017

Tech-Sec Guideline: Patch and Life Cycle Management

Guideline: Patch and Life Cycle Management

Part of: Technology Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Every piece of software and firmware of all systems and components should be maintained by its supplier, and the latest security patches should always be installed. No software, firmware or hardware is to be end-of-life or dropped from support by its supplier.


In most notable hacks, if not all, lack of patch management was a key ingredient of a successful breach. Known vulnerabilities are often not patched, which leaves the gates to the environment open to attack. Not patching vulnerabilities is like not stopping a wound from bleeding. Always install security patches as fast as possible, to keep the window for an attack as small as possible.

And it is not only about patch management. All too often software is used that has exceeded its life cycle, resulting in the use of software that receives no more support. No more support means no more security fixes. Although you have installed 'all' security patches, you are still likely to be vulnerable: security vulnerabilities, or their exploits, are often reverse-engineered to older, unsupported releases, resulting in a vulnerable system that will never receive a fix.

Never skip a security patch, and never use non-supported software. Ever.
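
As a small illustration, a sketch that flags inventory items whose supplier support has ended; the inventory and its end-of-support dates are hypothetical:

```python
from datetime import date

# Hypothetical inventory with end-of-support dates published by the suppliers.
INVENTORY = {
    "web-frontend": date(2020, 1, 14),
    "legacy-crm":   date(2015, 7, 1),
}

def unsupported(inventory, today=None):
    """Return every system whose supplier support has already ended."""
    today = today or date.today()
    return [name for name, eol in inventory.items() if eol <= today]

print(unsupported(INVENTORY))  # flags at least 'legacy-crm'
```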

More information from Wikipedia about Patch (computing) and Lifecycle management.

Monday, February 6, 2017

Tech-Sec Guideline: Safe defaults and Hardening

Guideline: Safe defaults and Hardening

Part of: Technology Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Everything should have a secure default, in such a way that access and functionality are always granted explicitly instead of implicitly. Functionality and features should always be disabled or uninstalled whenever they are not needed for the system or component to operate according to the needs of business or IT. Any security features already present should be enabled whenever possible.

Hardening is much the same, although there are some differences. Hardening is about making the software, system or other component as hard a target as possible. Everything that is disabled or uninstalled is something you don't have to worry about. It is a process in which, for instance, an operating system is configured in such a way that weaknesses are limited wherever possible. In the case of Microsoft Windows Server, think about configuring the registry to disable weak ciphers, and disabling the FTP service.

It is also about enabling (or at least not disabling) security features that come incorporated with the software or system. Again, in the case of Microsoft Windows Server, think about leaving the local firewall on and configuring it, instead of disabling the service. The result should be a piece of software or a system that utilizes all security features while disabling all unused features, to reduce the attack surface that accompanies it.


Safe defaults follow in essence the same principle as hardening, but there is a small difference. Where hardening is done after the creation of the software or system, safe defaults are all about pre-configuring. It is the same process as hardening, but everything is automated and deployed beforehand. Wherever possible, hardening should be a step in creating safe defaults for a piece of software or a system. When a new virtual server is deployed, all hardening steps should, as far as possible, be automated, depending on the needs of that server.
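
To give a code-level flavor of safe defaults, here is a minimal sketch using Python's ssl module: the hardened settings are configured once, and every connection built on the context inherits them. The cipher selection is illustrative, not a vetted baseline:

```python
import ssl

# Start from the library defaults, then explicitly harden them once.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older, weaker protocols
ctx.set_ciphers("ECDHE+AESGCM")               # allow only strong cipher suites

# Every connection created from this context now inherits the hardened defaults,
# so individual call sites cannot accidentally fall back to weak settings.
```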

More information from Wikipedia about Environment Hardening.

Friday, February 3, 2017

Tech-Sec Guideline: Compartmentalization

Guideline: Compartmentalization

Part of: Technology Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

On every level of the technology stack there should be a reasonable amount of compartmentalization of components, systems and zones. Every crossover between two or more compartments is to be managed through 'mediators' that can manage and secure the access between them.

A striking analogy for this principle is that of a ship. Every (larger) ship is built in compartments. When the outer wall is breached, the compartments make sure that the flooding won't impact the entire ship. This principle can and should be applied in technology as well.

The principle of compartmentalization is to ensure that a breach does not impact the entire infrastructure, but only a small(er) part of it. In, for instance, an often less protected Test environment, a breach can occur more easily than in a Production environment. When these are properly compartmentalized, a crossover might not be possible, and full-scale data leakage can be prevented.

Compartmentalization can be applied on multiple levels within technology, and you can take it as far as you want. Remember that there is a trade-off when it comes to compartmentalization of systems and environments: every transaction that needs to cross over to another compartment needs to be validated before it is allowed. It will require maintenance (which could be automated to some degree), and it will certainly consume resources like processing power.
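
A minimal sketch of a 'mediator' guarding crossovers between compartments; the zones and the allowlist are hypothetical:

```python
# Explicit allowlist of permitted crossovers between compartments.
ALLOWED_CROSSOVERS = {
    ("test", "build-server"),
    ("production", "build-server"),
}

def forward(request):
    return f"forwarded: {request}"   # placeholder for the actual routing

def mediate(source_zone, target_zone, request):
    """The mediator validates every crossover; anything unlisted is refused."""
    if (source_zone, target_zone) not in ALLOWED_CROSSOVERS:
        raise PermissionError(f"no crossover from {source_zone} to {target_zone}")
    return forward(request)

print(mediate("test", "build-server", "fetch artifact"))
# mediate("test", "production", "read customer db")  # raises PermissionError
```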

More information from Wikipedia about Compartmentalization.

Wednesday, February 1, 2017

Tech-Sec Guideline: Defense in Depth and Ubiquitous Security

Guideline: Defense in Depth and Ubiquitous Security

Part of: Technology Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

There should be redundancy of security controls within the environment, and security controls should be ubiquitous (present everywhere). When one layer fails or gets breached, another should step in automatically.

The analogy of the castle has its place with this guideline as well. A castle consists of multiple layers of defense: think about the trenches, bridges, outer walls, inner walls and towers. The overall security is a multitude of such controls, and where one fails, another control steps in. In the end it is not about making it impenetrable, but about slowing down the progress of the breach. And when it is slow enough, there is a good chance you can kill the attack chain. This is called defense in depth.


Ubiquitous security has in essence the same meaning, but the underlying principle is somewhat different from that of defense in depth. Ubiquitous security means security that is everywhere around you. It also builds on the guideline of Security by Design to make this possible. This type of security means that you don't have to think about using it; it is just there.

Much like airbags, seat belts, electronic brake systems (EBS), lane detection and what not. And traffic lights, traffic signs, road repairs and traffic police are also examples of controls that make safe driving possible. When users roam your network, do not only incorporate advanced security controls for a specific application, but in every aspect of their work, in a usable manner.

More information from OWASP about Defense in depth.

Tuesday, January 31, 2017

Tech-Sec Guideline: Open Design and Security by Design

Guideline: Open Design and Security by Design

Part of: Technology Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Security architecture and controls are all developed based on an open design, in such a way that only private keys and passwords are secret. There is no security through obscurity from the design perspective (Kerckhoffs's principle).

Knowing how a castle has been built, with all its walls, bridges, trenches and towers, should not have an impact on the workings of its security features. Only then will the security features have value against an incoming attack.


In the world of Information Technology it is no different. Whether you are building software, infrastructure, architecture, databases or what not, their security should only depend on the safekeeping and secrecy of private keys and passwords.

Security through obscurity is likely not 100% avoidable, but all efforts should be directed towards reaching that goal. The less you need to keep secret, the less vulnerable you are to information leakage about the inner workings of your systems. And the less vulnerable you are, the more resilient the environment becomes.

More information from OWASP about Security By Design.

Monday, January 30, 2017

Dev-Sec Guideline: Build for ergonomics and usability

Guideline: Build for ergonomics and usability

Part of: Development Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Every system, component and security measure should be developed to reduce as much human error as possible, with automated controls and checks, and with intuitive, awareness-rich interfaces that prevent errors.

Not only business applications, but also security controls, in applications and in general, need to be intuitive to use. When security controls and checks are complicated, people will get annoyed at best, or ignore them at worst.
Dilbert, Usability, 2007-11-16
When you want your users to use complex passwords and to avoid commonly used ones, make sure such weak passwords cannot be chosen in the first place; help them from a technical point of view. Another good example is Google's reCAPTCHA. Instead of entering hard-to-read words or numbers, or even solving puzzles, just to prove you are a human, Google dramatically improved usability with a single click.
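
A minimal sketch of the password example: enforce the rules technically instead of merely advising. The blocklist is a tiny hypothetical stand-in for a real list of leaked passwords:

```python
# Hypothetical blocklist; in practice, use a large published list of leaked passwords.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "welcome01"}

def validate_password(candidate):
    """Refuse weak choices technically, instead of merely advising against them."""
    if len(candidate) < 12:
        return "Too short: use at least 12 characters (a passphrase works well)."
    if candidate.lower() in COMMON_PASSWORDS:
        return "This password is too common; please choose another one."
    return None  # None means: accepted

print(validate_password("welcome01"))                      # rejected, with guidance
print(validate_password("correct horse battery staple"))   # None: accepted
```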

Security can already be annoying by itself, just don't make it harder!

More information from OWASP about Building Usable Security.

Friday, January 27, 2017

Dev-Sec Guideline: Build to not trust endpoint input and services

Guideline: Build to not trust endpoint input and services

Part of: Development Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Do not ever trust input coming from users, browsers, apps, services, APIs or other (end)points from non-trusted sources. Always presume malicious entries that need to be validated and, where applicable, sanitized.

This guideline is all about trust. Trust can be given, but in some cases it needs to be validated beforehand. Input from sources you do not control needs to be validated before it is trusted.

In essence, just like the positive security model, it is all about input validation. But where the positive security model focuses more on the security controls themselves, this principle focuses on the data.

Because input validation costs compute power and memory, it is wise to make informed decisions about where to apply it as data is processed. Just make sure that whenever data comes from users, browsers, apps, services, APIs or any other non-controlled endpoint, that data has been validated. This validation can vary from making sure a date is really a date, to stripping scripting languages from free-text input.
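
A minimal sketch of both kinds of validation mentioned above, using only the Python standard library:

```python
import html
from datetime import datetime

def parse_date(value):
    """Validate that a date really is a date (expected format: mm/dd/yyyy)."""
    return datetime.strptime(value, "%m/%d/%Y").date()  # raises ValueError if not

def sanitize_free_text(value):
    """Neutralize markup so free text cannot smuggle in scripts."""
    return html.escape(value)

print(parse_date("01/01/1980"))                           # 1980-01-01
print(sanitize_free_text('<script>alert("x")</script>'))  # rendered harmless
# parse_date("tomorrow")  # raises ValueError: the input is rejected, not trusted
```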

More information from OWASP about Don’t trust user input and Don't trust services.
More information from Teusink.eu about Input Validation for Web-applications.

Thursday, January 26, 2017

Dev-Sec Guideline: Build to not trust infrastructure

Guideline: Build to not trust infrastructure

Part of: Development Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Do not trust that infrastructure and platforms are fully operational, and do not rely solely on their security. Expect that they could go down, or be reduced in capacity, performance or security, at any moment in time. Build for resilience.

This guideline is meant to make developers aware that they should not lean on the infrastructure and platform for the security (or any other quality aspect, for that matter) of the applications being developed.


It is possible that the infrastructure and platform supporting the application are incredibly secure while the application itself is not. The application can be compromised despite the security of everything else. And when applications get compromised, data usually leaks.
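
As a sketch of building for resilience, a small retry wrapper that assumes the dependency can fail at any moment; the wrapped operation is hypothetical:

```python
import time

def call_with_retries(operation, attempts=3, base_delay=0.5):
    """Assume the dependency can fail at any moment; retry with backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except OSError:                # e.g. a network or platform failure
            if attempt == attempts - 1:
                raise                  # resilience has limits: surface the error
            time.sleep(base_delay * 2 ** attempt)

# Usage: wrap any call to infrastructure you do not control, for example:
# result = call_with_retries(lambda: fetch_from_storage("orders"))  # hypothetical
```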

More information from OWASP about Don’t trust infrastructure.

Wednesday, January 25, 2017

Dev-Sec Guideline: Build to fail securely

Guideline: Build to fail securely

Part of: Development Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Whenever something fails, it fails securely, meaning that in no situation is the (overall) security lessened by the failure. A hostile environment (both internal and external) should always be assumed.

This guideline is not about failing safe. Failing safe means that functionality resumes when a certain control fails to operate. Failing secure is the opposite of that. An example can illustrate this better.

A firewall is a security control that can be placed, for instance, between the Internet and your internal network. When this firewall fails to operate, a couple of things can happen: either the entire internal network can access the Internet and vice versa (fail safe), or access to the Internet is shut down entirely (fail secure).
Peter Steinfeld
When developing security controls (firewalls, input validation, access management, etc.), always aim for fail secure, to make sure that when your security control is attacked, attackers won't gain more access by destroying it.
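
A minimal sketch of a fail-secure control: if the check itself breaks, the answer is "no". The failing policy lookup is simulated:

```python
def is_allowed(user, resource):
    """Fail securely: any error inside the control results in denied access."""
    try:
        return lookup_permission(user, resource)   # placeholder for the real check
    except Exception:
        return False   # the control itself failed; the secure failure mode is deny

def lookup_permission(user, resource):
    # Simulate a failing control, e.g. an unreachable policy server.
    raise ConnectionError("policy server unreachable")

print(is_allowed("alice", "reports"))   # False: the failure does not open access
```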

More information from OWASP about Fail securely.

Tuesday, January 24, 2017

Dev-Sec Guideline: Build a positive security model

Guideline: Build a positive security model

Part of: Development Security Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Wherever possible, security should work based on whitelisting: specifically allowing access (positive model). Only when whitelisting proves to have too big an impact on the maintainability of such a list may it work based on blacklisting (negative model).

This guideline is about things like input validation. When building systems it is wise to think about whether you can predict or define the values that should be allowed. If this is possible, defining a positive security model (or whitelist) is the most secure way to go. This can be done implicitly or explicitly.

An example of explicit whitelisting is that, for a date field, only the value 01/01/1980 is allowed. An example of implicit whitelisting is that, again for a date field, the value needs to comply with the format mm/dd/yyyy: the value still has to be a date, but any date will suffice.

Positive security models can also be about allowing specific behavior patterns in applications or websites. It is thus all about defining what is allowed and denying the rest.
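
A minimal sketch of both an explicit and an implicit whitelist; the allowed values and the format are hypothetical examples:

```python
import re

# Explicit whitelist: only these exact values are accepted.
ALLOWED_COUNTRIES = {"NL", "BE", "DE"}

# Implicit whitelist: any value is accepted as long as it matches the format.
DATE_FORMAT = re.compile(r"^\d{2}/\d{2}/\d{4}$")   # mm/dd/yyyy

def validate(country, date_value):
    if country not in ALLOWED_COUNTRIES:
        return False                    # not on the list: rejected, no exceptions
    return bool(DATE_FORMAT.match(date_value))

print(validate("NL", "01/01/1980"))    # True
print(validate("FR", "01/01/1980"))    # False: what is not allowed is denied
```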

More information from OWASP about Positive security model.
More information from Teusink.eu about Input Validation for Web-applications.

Building a set of Guidelines for Security and Privacy

I often receive the question of what should be done in terms of security and privacy, and the pitfall is that you can respond either in too abstract terms or in way too specific detail. When I was thinking about a security policy, I noticed that people rarely read them, and I can understand that. Not everyone wants, or needs, to read specific documentation regarding the implementation of encryption.
In any case, security rules should not lead to disabling business or change, but to enabling business. And they should be felt to be necessary to protect the business, rather than a checklist just to satisfy the auditor.

So I took the formal corporate policy and translated it into Security and Privacy Guidelines. It is a manifesto of sorts, and the guidelines fit on one page. In essence, they are the 'spiritual' guidelines with which you can enhance the security and privacy of the work you do.

This has led to the following hierarchy in documentation, including the primary target audience.

  • Policy -- focus: Governance; target audience: Senior Management, Legal, Compliance, and Auditors
  • Guidelines -- focus: Manifest; target audience: Business and IT
  • Standards -- focus: Non-technical; target audience: Business and IT
  • Baselines -- focus: Technical; target audience: IT
The guidelines are organized into four main topics, each consisting of four to five guidelines (see below). In the upcoming posts I will focus on the guidelines I have set, and of which I am somewhat of a missionary in my organization, one guideline per post.

If you have any feedback whatsoever, please let me know!

--

Development Security Guidelines

  1. Build a positive security model
  2. Build to fail securely
  3. Build to not trust infrastructure
  4. Build to not trust endpoint input and services
  5. Build for ergonomics and usability

Technology Security Guidelines

  1. Open Design and Security by Design
  2. Defense in Depth and Ubiquitous Security
  3. Compartmentalization
  4. Safe defaults and Hardening
  5. Patch and Life Cycle Management

Information Security Guidelines

  1. Least Privilege
  2. Segregation of Duties
  3. Complete Authorization
  4. Information Cryptography

Privacy Guidelines

  1. Only collect with consent
  2. Only collect for purpose
  3. Destroy after use
  4. Only enrich within purpose
  5. Designate Data Ownership