Saturday, February 24, 2018

How To: Make your Windows 10 computer more secure and private

Last year I posted a blog on making your Internet connection more secure and private. Now it is time to look at our home computers. One might argue that if you take security really seriously, you stay away from the Windows operating system. With Windows 10, that might not be the case, as long as you are not die-hard into security and privacy.

Windows 10 can be made reasonably secure (at least, that is my perspective). The most important topics are likely the privacy-related settings, and the features that enable legacy hardware and/or networks to function. And with Windows 10 being the most used operating system in many homes, this blog post is aimed at the people using it.


The goal of this project is to create a secure (or at least secure within a reasonable amount of effort) Windows 10 installation, to ensure a safe environment in which to consume and produce content. As new insights are gained, hardening options may be removed or added.

My other goal is to gain a good understanding of Windows 10 hardening and other security-related aspects. I feel that as a Lead Information Security Officer it is important to keep up (general) knowledge about technology and its security.


Scope is an important part of this project. Without it, you can endlessly install security tools and solutions, all of which come with a trade-off in the end. This might be resources and performance, but also your own precious time to keep it all running :).

The constraints are:
  • Windows 10 Home & Pro Build 1709
  • For the larger part, the settings need to be settable through a GUI. I'll make some exceptions here and there (where there never was a GUI and the impact is rather important).
  • Some settings can also be set by using a registry-key file (.reg). I will supply these files.
  • Settings must be settable without using Group Policy Objects (GPOs), because those are not present (by default) on Windows 10 Home.


In my GitHub repo "Home-Security-by-W10-Hardening" I created an overview of the features and settings that I have set. In that analysis, the following sources were consulted:

And the following aspects of Windows 10 are addressed:
  • Control Panel
    • System and Security
    • Programs
  • Settings
    • System
    • Apps
    • Cortana / Search
    • Privacy
    • Update & Security
      • Update & Security - Windows Defender Security Center
  • Other
    • Telemetry
    • Xbox Game bar
    • Explorer
    • Encryption Cipher Suites
    • Registry
    • Systems repair

And where possible, I have extracted the registry keys in order to apply the settings in an automated fashion. In addition, I went through the entire CIS Benchmark for Windows hardening and decided for every setting whether or not to follow suit.
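To illustrate the .reg approach (a sketch of my own for illustration; the actual files live in the repo), a registry file that lowers the telemetry level could look like this:

```reg
Windows Registry Editor Version 5.00

; Restrict Windows telemetry. Note: on Home and Pro the effective
; minimum is 1 (Basic); 0 (Security) only applies to Enterprise/Education.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DataCollection]
"AllowTelemetry"=dword:00000001
```

Double-clicking such a file imports it, or use `reg import <file>.reg` from an elevated prompt.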


All this results in a fairly balanced Windows 10 installation that eliminates known vulnerabilities, hardens some key weaknesses and protects privacy, while retaining most of its features.

You can look everything up right here: Home-Security-by-W10-Hardening

If you have any questions, feel free to reach out to me!

Saturday, December 2, 2017

How To: Make your home Internet connection more secure and private

It has been a while since I have written anything on my blog. Perhaps a bit too long, but that does not mean I wasn't busy creating something to help my readers, or anyone else for that matter.
Although I am working as a Lead Information Security Officer, overseeing a Security Team with Security Analysts, I am still a techy at heart. So in May 2017 I bought a Raspberry Pi, pretty much just for the purpose of tinkering with it. A good friend of mine uses his for RC-automation and home-automation, but I have found another purpose.

I wanted to create a device that makes my home Internet connection more secure and more private. And these features also needed to be usable while not at home. I figured I had set myself a nice goal and started working on it.

When you buy a Raspberry Pi, you will soon be installing the Debian-based distro called Raspbian Stretch (the previous release was Jessie). And with the objective I had set myself, one will be installing Pi-hole as well. But I wanted to go further, which I did.

This is the feature set I aimed for, and which is now included in the guide.
  • Pi-hole for blocking (malicious) ad-services and malware infected sites.
  • DNSSEC to validate DNS-responses for integrity.
  • DHCP to make sure that every device gets the proper internal DNS-server.
  • VPN (OpenVPN by PiVPN) to enable the possibility to have the same level of security and privacy with any of family’s devices when not at home.

Those are the functional requirements, in a sense. I also have some other requirements.
  • DNS requests need to be forwarded to the Quad9 DNS-servers.
  • Internal DNS capability.
  • The VPN must behave the same as the internal LAN without any DNS-leaking.
  • Modern encryption for the VPN-tunnel using TLS 1.2 with a strong key.
  • The setup must be reasonably hardened.
  • Important system events need to be emailed to my email-address.
  • Blocking brute-force login attempts with fail2ban.
  • Fire-walling with iptables in a block-everything, white-list-specific-ports manner.
  • Disabling non-used hardware that enables wireless connectivity (WLAN and Bluetooth).
  • IPv6 wherever possible (for the moment, not the VPN-tunnel).
  • All vulnerabilities found by a scan with the Nessus Vulnerability Scanner must be fixed, whenever the needed fix can be applied by myself (succeeded in that!).
  • The entire setup needs to be updated automatically on at least a weekly basis.
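The block-everything/white-list firewall requirement above can be sketched with iptables roughly as follows (a sketch of my own; the port list is illustrative and should match your own services):

```
# Default-deny inbound, keep loopback and established flows working.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# White-list only the services this Pi actually delivers.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # SSH management from the LAN
iptables -A INPUT -p udp --dport 53 -j ACCEPT    # DNS (Pi-hole)
iptables -A INPUT -p tcp --dport 53 -j ACCEPT
iptables -A INPUT -p udp --dport 67 -j ACCEPT    # DHCP
iptables -A INPUT -p udp --dport 1194 -j ACCEPT  # OpenVPN
```

With IPv6 enabled, the same rules would need mirroring in ip6tables.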

I also have set some constraints to keep the project feasible.
  • Apart from OpenVPN, nothing can be reached from the outside world. I always assume that there is a network firewall present between the Internet and the actual Pi.
  • The networking-services this device delivers are meant to enhance security of other network-connected devices in a non-intrusive manner.
  • And although this device delivers its services in a (reasonably) secure way, it is not meant to be a device that delivers security services by itself, such as network-scanning and vulnerability scans.
  • It is meant for home or small-office use. Larger companies or institutions should look at other solutions to protect their people.

My wishlist consists of the following, although I am not sure at the moment whether it can all be done.
  • Full IPv6 VPN-tunneling.
  • Implement either DNSCrypt or DNS-over-TLS.
  • Two-factor authentication on the VPN-tunnel.

The hardware I used is listed below.
  • Raspberry Pi 3 Model B 1GB
  • SDHC card - 16GB
  • Pi-Blox Case for Raspberry Pi – Black
  • Costs: roughly € 70

The base image that is used to build this guide is the following:
  • Image with desktop based on Debian Stretch
  • Version: November 2017
  • Release date: 2017-11-29
  • Kernel version: 4.9

This guide was first created using Debian Jessie, but it has now been adjusted to also work on Stretch.
It took a while though to get where I am now. I even needed to start over, but that was not an issue: that is how I met Debian Stretch, so the restart had its use. Why oh why did I do a "sudo apt-get remove python"...?

I have documented it using GitHub, so hop over to there to see more.

A special word of thanks goes to Jacob Salmela and his up-to-date manual (PDF). This guide is inspired by his, although I go a step further in terms of features. Nevertheless, his contribution to (not only) this guide is worth my sincere gratitude. Thanks!

Any questions, remarks or suggestions? Please let me know!

Tuesday, September 5, 2017

Why does awareness fail?!

Transcript (translated from Dutch) of my talk about Security Awareness at 600 Minutes of Cyber Security Strategies by Management Events on September 5th, 2017 in Spant!

Why does awareness fail?!

Requirements for a car

In December 2015 I went looking for a new car. My previous car, a Mazda MX-3, had given up the ghost. Or rather, my willingness to repair it had, from a financial point of view, completely disappeared. It was therefore time for a new car, and in my case I opted for private lease. That way I was no longer the owner of the risks that come with owning a car.

I did have some so-called constraints. The first was that the monthly payment should not exceed the maximum I had in mind, and the second that it would be a car from a brand I prefer: Mazda or Tesla. Since the first constraint had an impact on the second, I went for the Mazda.

After careful selection I made my choice: the Mazda 2. Not too expensive, and I chose a model with spirited horsepower, a sort-of-tough color, air conditioning, in-car entertainment and cruise control. The contract was signed and a while later I could pick up the car.

The car suited me really well, and after a few weeks of driving I noticed that quite a few features were doing their job. Think of lane detection, ABS and the beep when you forget your seat belt. Normally no problem at all of course, but when you switch from a mechanical car like the MX-3 to a new one, you notice these safety measures immediately.

I asked myself: "How on earth could a Security Officer like me not have given a single thought to the safety measures of a car I chose myself?"

The most honest answer? I just did not care. And that thought was supported by my assumption that the safety of the car was first and foremost the responsibility of Mazda (manufacturer), backed by regulation from the government and supervisory bodies (quality standards), the lease company (owner), followed by the dealer (maintenance), and then, sort of, by me (user).

And then I realized that it is quite possible that a large part of my colleagues look at my Technology department in the same way I look at my private-lease car. And how can I blame them, if I do the very same thing myself?

Does the responsibility lie with the user?

Is it justified that people think that the security of a system is, first and foremost, not their responsibility? I thought about that for a while, and I find that line of thinking justified. Because I realized that there is a difference between the security of a system, and the secure use of a system.

The car analogy fits perfectly here. I am not responsible for the safety of the car. The brakes, airbag, seat belts and all those other measures simply have to work and be in good order. It helps that I am not the owner of the car, but even if I were, you can outsource that to a certain degree to the dealer.

Safe use of the car means that I put on my seat belt, obey the traffic rules, change the oil, check the tires, and bring the car to the dealer for maintenance at set times or mileages. That maintenance exists to keep the system safe; the garage is responsible for carrying it out, and the costs are borne by the owner.

And when we talk about awareness and developing it, we often make the mistake of thinking that much of the security lies with the user. And with that we truly, and unjustly, lean on the weakest link when we talk about the security of a system.

And speaking of the security of systems: did you notice what is seriously wrong in the photo above? Blowfish? In a car from 2015? Back in 2007 the developer of the algorithm, Bruce Schneier, already said that Blowfish is no longer sufficient and that you should switch to its successor, Twofish. And nowadays there are even better alternatives.

And could I get software updates? I 'could', and I could do it myself. But those turned out to be only updates of the maps for the navigation system. And that for a Bluetooth- and Wi-Fi-connected car.

Security Awareness by ING Bank

I was involved in a panel following a study by ING Bank, led by Sander de Bruijn. ING's innovation department is always looking for new business opportunities that may or may not be spun off later. One of its studies concerns Cyber Security. After visiting a large number of ING's customers, they received the feedback that companies want to do more with Security Awareness, but do not yet know how.

ING zoomed in further on Security Awareness using a number of research methods. The most interesting conclusions that could be drawn are as follows.
  • Training is not aimed at the individual.
  • Training is not given at the right moment.
  • Training is not tied to an experience.
  • And all too often, leading by example is missing.

But why can leaning on awareness be dangerous?

In his talk "The security mirage", Bruce Schneier discusses the concept of being secure versus feeling secure, and how we make the so-called security trade-off between the two. In essence it is risk management. His final conclusion is that the more our feelings match the reality of security, the better we can make the security trade-off.

Research has shown that we overestimate risks when the risk has a name, when it has happened close to us or we have experienced it once, and when we read a lot about it in, for example, the news. On the other hand, we tend to make risks much smaller than they really are when they are, or appear to be, far removed from us. We then take more risks, and when you are not familiar with the reality of the risks, the trade-off will not be assessed properly.

If you take that principle to security awareness training, you see that one-size-fits-all is not going to work. Everyone has a different feeling about security, a different image of reality and a different level of knowledge on which to base an assessment of security.

Leaning on a one-size-fits-all awareness campaign can therefore be killing for your business.

What about Social Engineering?

Kevin Mitnick. Probably one of the best-known names in the field of Social Engineering. Last July I went to Black Hat in Las Vegas, where I also took the Advanced Practical Social Engineering training by the firm Social-Engineer LLC. Although we kept our classic white hats on during the 'exercises' on the boulevard, we did go deep into hack-the-human. And then you suddenly really realize how easily we give away information to complete strangers.

For instance, I introduced myself with an alias without batting an eye, whereas my first impulse had always been to introduce myself with my real name. Experiencing for yourself what it is to pose as someone else, and then try to extract someone else's information, is bizarre to say the least. I felt it chafe against the (ISC)2 Code of Ethics, but we all neatly kept to the rules: we were not allowed to ask for login credentials, we left the targets with a good feeling, and they would not be disadvantaged in any way as a result of the encounter.

But how on earth do you make your colleagues resilient against deliberate and malicious acts of Social Engineering? Is a theoretical training really enough for that?

The right focus for Security Awareness

With the ING study, the Social Engineering training and Bruce Schneier's story in mind, where should the focus lie when it comes to Security Awareness? I believe that, first of all, you should try to avoid giving an awareness training at all. It may sound strange, but only 'bother' people with things they do not enjoy when it is really necessary.

So start with Technology. Technology can help lower the risk of using a system. Think of password rules, the beep when the seat belt is off, disabling unnecessary functionality, and blocking spam and phishing email. And remember, nobody is happy with a terrible user interface, and that certainly goes for security!

The next step is Process. A good process can help ensure that the right activities happen at the right moment under the right conditions. If a large transaction has to be approved by 2 individuals, CEO fraud already becomes a bit harder.

The last step is People. When everything that can reasonably be done has been done in the areas of Process and Technology, people can be provided with the information and means to keep the use of the system secure.

And mind you, the security of the system must never depend on the user to begin with!

The key elements of a good awareness campaign

According to that same ING study, a solid awareness training should meet the following conditions.
  • A role model (leading by example) is important, if not the most important element. Setting a good example is inspiring. When a colleague asks me whether I really use unique passwords everywhere, I say: "Yes, for all my 120+ accounts" and then show my LastPass score.
  • The awareness training is relevant in the context of the situation or experience. An employee receives training on the safe use of passwords at the moment his or her password is about to change.
  • The training is personalized to the situation (context) and knowledge level of the employee. Not everyone knows the same amount. Do not overwhelm an employee with a lower level of knowledge, and do not bore an employee with a higher level of knowledge with information he or she already has.
  • Good behavior is rewarded, and "never waste a good crisis". Reward employees for alert behavior and the way they handled it. Gamification can play a role here, as can physical rewards such as medals.
  • Access to the information is low-threshold. It should take little effort and, where possible, be gamified to make it more fun. Do not tuck awareness away somewhere on an intranet page, or in a yearly week-long e-learning course.

To summarize: why does awareness fail? On the one hand because we hardly distinguish between the security of a system and the secure use of a system. On the other hand because the timing of the 'lesson' deviates so enormously from that of the experience.

I am a great proponent of Ubiquitous Awareness. Ubiquitous Awareness means, loosely translated, that awareness is integrated everywhere into the use of systems. The use of a system is set up in such a way that its secure use is self-evident, or that information about its secure use is visible or accessible at the right moment of use.

The photo above shows my son Max with his favorite toy: the tablet. He knows that my wife and I have PIN codes on our phones and tablets, and that you also have to log in on the laptop. About a year ago I took his tablet and set a PIN code. I told him what the PIN code was and explained why it is important. How many times do you think he has forgotten the PIN code? Exactly zero times.

Do not underestimate the power of leading by example, and of reward, in Security Awareness!

Interview @ Management Events

I also took part in an interview on Cyber Security Awareness, with a brief look ahead at the use of AI and Machine Learning in Cyber Security.

Wednesday, June 28, 2017

Yet another case of cryware!

So, here it is. Yet another blog post about yet another case of cryware. I think I'll stop calling it cryptoware or malware; it's just cryware. Not crying over the damage it causes, but over how much of that damage could have been prevented with just a mantra of basic security hygiene.

Both WannaCry and Petya (or NotPetya) travel from node to node at an incredible pace. Truth be told, I am in awe of the sophistication of the toolset, while in shock about the number of steps in the attack chain that rely on easily avoidable weaknesses.

I am not going to repeat the inner workings of both versions of the malware, because more technically skilled people can do that better. Instead, let me keep hammering on the following security mantras. And I want to share that hammering with you to prevent the screen below!

Always patch, patch, and patch

Seriously, just always patch. Always. Always patch and never exclude. I often get push-back on why this cannot work, and I ask why not. If you state that this cannot work, you don't grasp the importance of just patching everything.

We have enough to worry about with zero-days alone, without throwing known, patchable vulnerabilities into the mix. There is nothing you can do against a zero-day until the patch has been released and installed. It is a part you cannot control, and therefore you can let it go. But as soon as there is a patch, just install it.

And what to patch? Well, everything that costs money, enables value or delivers value should be patched. From CCTV, to IoT, to computers, to servers, to network components, to HVAC and more. And if no more patches are released, apply life cycle management in order to get patch management going again.
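On a Debian-based system, for example, a large part of the "always patch" mantra can be automated with the unattended-upgrades package; a minimal sketch of /etc/apt/apt.conf.d/20auto-upgrades:

```
// Refresh package lists and install pending (security) updates daily.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Other platforms have their own equivalents (Windows Update, vendor auto-update mechanisms); the point is to make patching automatic rather than a recurring chore.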

Seriously, no exceptions! When you do this as rigorously as I described, everyone will grow accustomed to it. The business, IT, suppliers, employees and customers will all get used to the fact that you always patch, resulting in less worry about global Cyber-attacks.

Always use anti-malware, but not only that...

I cannot stress enough that anti-malware is still a required piece of security defense in your arsenal of controls. I will concede up front that anti-virus is pretty much dead (well, almost), but anti-malware and anti-exploit are not. So you will need anti-virus, anti-malware and anti-exploit protection, for both unknown and known pieces of malicious code, on pretty much every node.

For instance, Windows Defender for consumers only does anti-virus and anti-malware for known pieces of malicious code. It does not cover anti-exploit and does not cover unknown threats. From a security perspective that is weak protection (although something is better than nothing).

There are both business and consumer security solutions that cover all of these elements. And please install those tools on every operating system for which such solutions exist: Windows, Linux, macOS and likely also Android and iOS if there are any for them. The reason is twofold.

One is preventing cross-contamination. Why not stop Windows malware from spreading through email while you are working in a Linux or macOS environment? It's called herd protection. It's nice of you not to forward malware to friends, family and co-workers. Really, they will appreciate it!

Two: while there may be hardly any viruses for Linux and macOS, both operating systems can still be infected with malware or exploited through exploit kits by hackers. Yeah, it's possible, really! Assuming you are safe with a non-Windows endpoint is the first step on the road to epic security failure and, in all fairness, it shows a lack of awareness.

Never ever work under administrative privileges...

One of the key mantras is to never ever work under administrative privileges. Always use UAC (User Account Control) or separate administrator/root accounts. System modifications should not be possible with your daily-driver account (used for Internet, Office and whatnot).

Never ever do your maintenance work from an endpoint that has direct access to the Internet. Malware installed through privileged accounts is a headache to overcome, because it spreads so easily to other nodes. Especially with privileged accounts that go beyond being a local administrator.

And while you are at it, always change the default password of privileged accounts of everything.

Use a firewall!

Say what? Yeah, I said it. Use a firewall. There should always be a firewall in your network (for home users this is often the router). Depending on your budget it can be either a smart and expensive one, or a basic and cheap/free one.

A firewall helps limit traffic that should not be there. It can help prevent traffic from getting in from outside sources, and when configured properly (i.e. by disabling UPnP) it can help prevent traffic from going out that should not go out. It's about hindering communications to the command-and-control server of the malware, which is nice for you and others.

On many operating systems there is a so-called local firewall. Enable it (or at least don't disable it). Most often you can configure it to your needs and let it help limit the options for breaking into or out of the system. That is nice, because you don't want your other systems getting infected.

Firewalls of any type are by themselves far from a guaranteed solution, but they can help prevent infection, or prevent the infection from spreading from or to the Internet. Again, people will appreciate it!
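On Windows, for example, the local firewall can be put into a sane default posture from an elevated command prompt (a sketch; tune the rules to your own needs afterwards):

```
rem Turn the firewall on for all profiles...
netsh advfirewall set allprofiles state on
rem ...and block unsolicited inbound traffic while allowing outbound.
netsh advfirewall set allprofiles firewallpolicy blockinbound,allowoutbound
```

Most Linux distributions and macOS ship a comparable local firewall that only needs to be switched on.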


Below is a small summary of my points above.
  • Always apply patch management and life cycle management.
  • Always utilize anti-malware, -virus, and -exploit solutions for both known and unknown code.
  • Never do daily work with a privileged account, never use such an account while connected to the Internet and always change the default password.
  • Use a network firewall to limit inbound and outbound traffic that should not be there. And use a local firewall for the same purpose.

There is far more that can be done, of course, and you should never lean back and think that you are done. But when you really have these controls in place, you can call up your CEO, CTO, CIO, CFO or whatever C-level manager and say that, in the case of an ongoing global attack, nothing more could have been done. While spreading a subliminal message for more budget to increase the capability of Security Incident Response.

And in the meantime I'll just look out the Cyber-window and cry, yet again, over cryware rampaging through our Cyber-world and affecting our Physical-world.

Thursday, June 1, 2017

The very different roles of Developer, Engineer and Analyst in regard to Security Awareness

In my daily work as an Information Security Officer I talk with a lot of people. Some of them are (C-level) managers, some of them are Business Owners, and some are Product Owners. But I talk even more with the people who actually create, maintain or break the product these owners are responsible for. And those are the Developers, Engineers and Analysts.

And oh boy, how differently they approach the very same subject! Let me explain what I have learned from that and how I put that knowledge to work in regard to (increasing) Security Awareness.

The Triangle of Work

As I will explain the three different roles further down in this blog post, the following triangle sums it all up.

The Developer

The main focus of the developer is creating the work (or product). His or her primary driver is building features, testing out new development or build technologies, and tons of other cool new stuff.

Resistance is often felt when stability becomes a topic of discussion. Creativity is their driver, and nothing can be really stable when creativity needs room.

The Engineer

The main focus of the engineer is maintaining the work. He or she makes sure that whatever the developer creates is kept running. Often the primary focus does not exceed criteria in the domain of availability, although there are of course exceptions.

Resistance is often felt when change is at hand. Everything that needs to be changed tends to create instability. Instability is a common trade-off with creativity, which is to some degree okay to an engineer, but he or she would rather choose stability.

The Analyst

This is where the 'regular' testers might reside, but even more so the Security Analysts and Penetration Testers. Their main focus is breaking the work (most often just on a theoretical basis though). And this is a kind of new-ish phenomenon in the world of technology.

Suddenly there is a guy or girl who likes to break things, and they now even have formal positions in companies! It is not only frustrating to the engineer trying to keep everything running, it is even frustrating to the developer to hear about the many teething problems in their great works of art.

The analyst wants to see how works can be exploited, broken or otherwise negatively impacted. This of course generates insights, not to mention tons of work, for both engineers and developers.

Do not fight these natural tendencies!

Why? Well, because those tendencies are hard-wired into everyone's brain. You are either one of the three to the extreme, or a certain mix of two or three roles, and changing that isn't done overnight. Can I back this up with scientific research? No, unfortunately, other than my experience in work and life, I cannot (perhaps it exists though...).

For the sake of argument, let's assume that for the better part I am right.

Creating Security Awareness for the roles of Developer and Engineer

Many Security Officers (just like myself) try to create awareness with developers on how to make their code more secure by design, and with engineers on how to harden everything they keep running. (Let's assume for now that Security Analysts are reasonably security-aware.) I am not saying these endeavors are wasted money and energy, but keep in mind they need one key ingredient: the commitment to learn from the awareness.

One might say that everyone is always willing to learn more about making things better, but 'making things better' can mean something totally different in someone else's opinion.

So how to start the change then?

The first step you should take is accepting the fact that the three roles of developer, engineer and analyst exist, and that they will continue to exist. Embrace the fact that everyone looks at the same topic differently. You can learn a lot from it if you really understand how the other is thinking about the very same work as you.

In order to change someone's opinion, commitment or whatever it is you want changed, you need to influence. There are many books and trainings on putting influence into practice, but it all boils down to this.

You need them to feel very uncomfortable in the situation they are in now, and give them a vision of a better place at the same time, while giving them the means to reach that place.

To give an example about creating awareness concerning Input Validation among Developers: you will have to convince the developer that NOT knowing about Input Validation is very wrong and a terrible place to be. Then you will need to create the vision of that awesome place where he or she, as a developer, knows everything about Input Validation. But that is not enough to change. You will also need to provide the means (training, tools, etc.) for him or her to make the change.
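To make the Input Validation example concrete, here is a minimal whitelist-validation sketch in Python (the helper name and the rules are mine, purely for illustration):

```python
import re

# Whitelist validation: accept only what we expect, reject everything else.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def validate_username(raw: str) -> str:
    """Return the username if it matches the whitelist, else raise ValueError."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError(f"invalid username: {raw!r}")
    return raw

# Expected input passes through unchanged...
print(validate_username("joram_t"))
# ...while injection-style input is rejected outright.
try:
    validate_username("joram'; DROP TABLE users;--")
except ValueError:
    print("rejected")
```

The point is the approach (whitelist what is valid, rather than blacklist what looks bad), not these specific rules.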

And that is a lot of work, right?

Instead of change, why not let it reside just with influence?

Influence leads to change, and change leads to different outcomes. Awareness most often focuses on the change itself, rather than on the influence you want to create or the outcome of said change.

What I mean by this is the following; back to the developer again. You could also incorporate a security tool in the build pipeline that automatically tests code and gives immediate feedback to the developer. Now the developer has two options: either ignore the errors or fix them. And this is where emotion comes in (read: influence). I have yet to come across a developer who likes compile, build or lint errors. Errors are no good and need fixing, and that's the driver in many cases at least.

If you can incorporate Security testing (at least to some degree) in a developer's daily work, you have created continuous awareness training without the pain of creating it in the minds first. Instead, you work the other way around. You make sure that the means for improving are already in place, and through the means you create insight into the awful place they are in (no Input Validation knowledge). And the means and insights help you to become more Secure by Design.
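As a toy illustration of what such automated feedback could look like (the patterns and advice strings below are my own examples, not a specific tool the author uses), a build-pipeline step could scan source code for obviously risky constructs and report them the same way a linter reports errors:

```python
# Toy build-pipeline security check: flag obviously risky constructs so
# the developer gets immediate feedback, just like a compile or lint error.
RISKY_PATTERNS = {
    "eval(": "avoid eval(); parse input explicitly instead",
    "pickle.loads": "unpickling untrusted data allows code execution",
    "verify=False": "disabling TLS verification invites MITM attacks",
}

def scan_source(source: str) -> list:
    """Return a list of (line_number, advice) tuples for risky lines."""
    findings = []
    for number, line in enumerate(source.splitlines(), start=1):
        for pattern, advice in RISKY_PATTERNS.items():
            if pattern in line:
                findings.append((number, advice))
    return findings

snippet = "import pickle\ndata = pickle.loads(blob)\n"
for number, advice in scan_source(snippet):
    print(f"line {number}: {advice}")
```

In practice you would use an established static analysis tool rather than a home-grown script, but the principle is the same: the check runs on every build, and a finding blocks the pipeline until it is fixed.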


There is no single road that leads to better Security awareness, so keep awareness fit for your audience: focus on the result, supply the means and forget about the change itself (that will come by itself). But do realize that the three roles will never go away, and that you need all three in your team or department to make good decisions.

Help Developers, Engineers and Analysts understand that everyone has to do their part in the greater picture of Technology. When there is respect for each other's opinions and drivers, people will open up and be more eager to learn from one another. Bashing Developers for yet another vulnerability will not improve Security Awareness, and neither will bashing an Engineer for not patching.

Implement the means (processes and/or tools) that help Developers and Engineers improve Security, preferably automatically. The Analyst can then play a tremendous role in helping both roles continuously improve.

And I am convinced that when you, as a Security Officer, can create such a culture, you will dramatically improve overall security!

Tuesday, March 14, 2017

With Internet-of-Things, the consumer is not a customer but a supplier!


Authors: Joram Teusink, Rick Veenstra

In the soon-to-be world in which everything is connected to everything, we will face quite a few (unforeseen) challenges. One of those challenges relates to protecting the privacy and security of the system as a whole. The other is the role of the consumer. To sum it up in a single statement:

In the world of the Internet-of-Things, the consumer is not (only) a customer but (also) a supplier.

In this blog post we will focus on why we make this statement and why we think this new paradigm holds true. We will talk about the fundamental shift in the way we think, or should think, about data and privacy.

You own your devices

This sounds rather plausible, right? And it is, because with every device you buy, you take on the responsibility of owning it. Whether it is connected to anything beyond the power cord or not, by paying the bill you accepted ownership of the 'thing'. This has many consequences, but we'll first address the issue of data ownership.

Do you own the data?

Imagine the device you bought is connected to a data network. Not exactly a mental stretch with a 'thing' we refer to as a device in the 'Internet of Things'. This device will generate or accumulate data. This might be simple sensory data or more complex data structures that describe behavioral patterns. It tells something about who you are and what you do. This data is often considered sensitive or at least personal. In the near future, this will certainly extend to data we have not yet dared to dream about.

For now, this data may come from your thermostat that is building up a profile of your presence and room temperature, or your 'smart' TV that is building up a profile of what you like to watch. Another popular application is home security, such as video cameras that continuously register what is going on in your most private environment -- which, by the way, is an intrusion into your privacy that police and intelligence agencies are only allowed with a legal warrant. All this collected data is privacy-sensitive. It becomes even more sensitive when combined with data collected by other devices you may own or services you use.

This data is valuable for you. Or at least it should be. Most people do not want their conversations with their lovers to be openly shared across social media -- especially not if that conversation is spiced with photographic material of a certain private nature. But that is not the only data that is valuable. Almost all data about you should be valuable to you and treated as sensitive. "Why?" you might ask. Because this data can be used to influence you. Your data is being used in ways that cause no direct damage, at least no damage that you could easily identify. But the way you are being influenced is not necessarily in your direct interest; it is primarily aimed at increasing the profit of some company.

Companies are willing to spend substantial money to get access to this data. Because they can use it to tailor their ways of influencing your behavior, which is the very nature of advertisements.

There are many business models in which large corporations get access to your sensitive data. But for now, we will stick to the one that is the easiest to grasp and is the basic premise that most people base their use of data-collecting devices on:

You own the devices; therefore, you are the owner of all data that is collected by these devices.

Are you selling your data?

Now follow us in the next step. You own and operate devices that collect data about you. What are you doing with this data?

When you use the devices and the data they collect -- both of which you own -- do you use them commercially? It depends a bit on the privacy policies you agreed upon with the supplier of the services, but the answer is very likely "yes", even when you are just an individual and not a company. This may not be the way you look at it now, so let us explain why we make this statement.

Most likely you share this data with a company in exchange for a service or an 'enhanced experience', as they so exquisitely frame it. This company may be the one that built the device or provided it to you; frequently these things are bundled with an online service directly coupled to the device. It may also be a third party, which uses the data to enhance your experience of the device or to provide additional (value-adding) services. In both cases, you provide data in exchange for a service.

Usually this can be characterized as one of four types of supplier-customer relations:

  1. You have paid for the service with a one-time fee at the purchase of the device. This usually covers a limited period of time, although that is rarely made explicit. Degradation of service over time is almost guaranteed; the older the device, the flakier support tends to become. Your smartphone or tablet is probably the most telling example.
  2. You pay a recurring subscription fee, which is usually supposed to cover all operational costs of the service provider. This model is the traditional pay-as-you-go service model that has been around since way before the internet was conceived and is still the most viable for the supplier in the long term.
  3. You do not pay any monetary fees, but your usage of the services adds to the momentum and user base for the service provider's paid services, which cover the cost of the free services as well.
  4. You do not pay any monetary fees, but allow the service provider to use your data to build profiles that it will monetize at its own discretion, i.e. targeted advertising. This case is covered by the adage "if you don't pay for the product, you are the product".
So, whether it is in exchange for money or not, you actually trade (sell) your data.

How does this affect your position in the 'supply chain'?

Our argumentation up to this point can be summarized in three statements:
  • you own the IoT-device; 
  • you own the data it collects; 
  • you trade the data.
If you own stuff that collects or generates data and you trade it with a company, you are essentially a service provider. You are the supplier of your data, and you sell it in exchange for either money, features or services! The other party is the consumer, since it 'consumes' your data. This might sound very silly or strange, so let us explain why we think this holds true.

  1. There is a supplier who sells IoT-devices. It supplies you with equipment, with 'things'. When you buy something, you are the customer.
  2. Upon enabling the IoT-device to provide data to the supplier's services –or those of a third party– you become a supplier yourself: a supplier of data to be precise.
  3. Your customer is in many cases the supplier of your device (the equipment), now in the role of the one who is utilizing your data.
So, in this situation you are performing four roles:

  1. the customer in buying the device;
  2. the supplier of data by trading your data with your customer;
  3. the customer in paying for services which utilize the data generated by the device;
  4. the consumer of the services.
Traditionally we only think about #1 and #4 and consider them as one single role. As a consequence, role #2 is obscured; it is rarely more than an afterthought, when considered at all. That is why we strongly believe this paradigm shift is needed, or at the very least should be debated.

Because you as a customer also perform role #2, we can state:

In the world of IoT, the consumer (of a device) is a supplier (of data).

The consumer as data supplier: privacy considerations

If you consider the data flow we described as a supply chain, you make the shift from a customer/consumer-centric view to a set of demand-supply relations that is very common in the corporate world. And if you are familiar with data protection in this context, you might get a little itchy. Because this viewpoint has quite some implications for accountability and liability regarding data protection and security.

If you are a company and selling data to a data consumer (your customer), you are required to do at least four things:

  1. specify the Terms of Use for the data;
  2. obtain consent of the data subjects (the individuals about whom you collect data) to these Terms of Use;
  3. cover the use of this data by a legally binding agreement with your customer;
  4. take every reasonable precaution to ensure that the data is only used by authorized parties and only for the intended purpose for which it has been collected. This is the topic of Information Security and Privacy.
These obligations are all within the scope of the EU General Data Protection Regulation (a.k.a. the EU-GDPR).

If you are an individual, requirement #2 may be considered implicitly satisfied, since the data subject is the very same entity that provides the data. This in no way limits your responsibility regarding data protection (requirement #4). It is just rather difficult to sue yourself for inadequate performance or non-compliance.

But when you provide services with collected data about other subjects that you do not legally represent, this can (at least theoretically) get very ugly very quickly. If you want to make it really complicated, you might add another ingredient to this cocktail: the right to be forgotten that is embedded in the EU-GDPR. But that's another topic for someone with a legal background.

We'll now step into the complications of requirement #4: the obligation to protect the security of the data you collect and distribute.

Ownership of a device: the obligations

Okay, you purchased a device and connected it to The Internet of Everything. As we have shown before you have now become a service provider. You are going to provide data to your customer. Data that will probably have privacy-sensitive characteristics, so you are required to protect this data from unauthorized access and use.

By paying the bill you accepted the ownership of the 'thing'. In doing so you made yourself accountable for all benefits and costs that come with the simple existence of the 'thing'. You are not only 'responsible' for reaping the benefits but also for the burden of operations and maintenance during its entire life cycle. Even if you manage to outsource this, which for consumer grade IoT is not (yet) likely, if at all possible, in the end you stay accountable.

If you own the device and you own the data, then you are responsible for its security. No one else but you, really! European privacy legislation is based on this very principle. Whether you generate Personally Identifiable Information (PII) or are the custodian of PII that is entrusted to you by its subject (the individual about whom the data has been collected), you are what the EU-GDPR calls the 'controller' and ultimately accountable for the entire supply chain.

We could collapse role #2 (the supplier of data) with role #4 (the consumer of the processed data). Seen this way, the device owner (being both supplier and consumer) outsources part of the data processing, probably to the provider of 'value-adding services'. Dutch privacy legislation, at least, requires you to cover this processing with a legally binding Data Processing Agreement (DPA). This should specify the responsibilities (not the accountability) transferred to third parties: you mandate other companies to handle data you are responsible for, and you set constraints on the exact 'processing activities' that will be performed and the conditions under which they will do so.

At least that is the theory. We all know that only in theory there is no difference between theory and practice; in practice, there is. But alas, it is not only the theory but also the law.

But consumers buying things and using the accompanying services do not sign such a DPA with their third parties. At best, they accept the Terms of Service that govern those services. And let's hope that is not an instance of what we call 'the biggest lie on the internet': clicking a checkbox stating something like "I have read the (...) and accept to be bound by this" without reading the text. It is quite likely that these terms of service simply contain a full waiver to the service provider for all responsibilities that should be covered in a Data Processing Agreement.

Where the snake bites its own tail

Now we have the situation where the consumer remains ultimately responsible for data processing that he has completely outsourced. Even worse: processing has been outsourced in a way that has essentially stripped him of any control over it.

In your role as a consumer of services, you subscribe to a service using your PII. You agree that this service provider will be processing your sensitive data. You would require adequate protection of this valuable data. But your service provider has outsourced the data collection process to a party that is unable to adequately protect this data as it is generated, collected, stored and transferred. This would be unacceptable because the supplier is violating the EU GDPR.

But in this case, you are the data provider yourself. You as a service consumer agree that the service is delivered by an unreliable data provider and therefore you cannot hold the service provider accountable for any consequence of this data provider failing. Ridiculous if a third party were involved, but legally valid if you are the data provider yourself.


The way we look at the Internet of Things --especially in the consumer domain-- needs a fundamental change in perspective. If we consider the data flow generated by IoT-devices as a supply chain, we make the shift from a customer/consumer-centric view to a set of demand-supply relations. This is a rather uncomfortable position, because it has quite some implications for accountability and liability regarding data protection and security.

At the same time consumers (including ourselves) are not acting in alignment with the roles that this viewpoint uncovers. We end up being service providers who are completely responsible for data protection over the entire supply chain. Yet our customers and suppliers are usually not bound to any obligations to protect this data. And we lack the tools to do it properly ourselves.

Thinking of IoT as a chain of demand-supply relations may help to identify systemic weaknesses. It may turn out to be a good start for finding effective strategies to fight the undesirable exploitation of these weaknesses.

About the authors

Joram Teusink and Rick Veenstra are both Information Security Officers and close friends.
