Wednesday, June 28, 2017

Yet another case of cryware!

So, here it is: yet another blog post about yet another case of cryware. I think I'll stop calling it cryptoware or malware; it's just cryware. Not crying over the damage it causes, but over how much of that damage could have been prevented with some basic security hygiene.

Both WannaCry and Petya (or NotPetya) travel from node to node at an incredible pace. Truth be told, I am in awe of the sophistication of the toolset, while in shock about how many steps in the attack chain rely on easily avoidable weaknesses.

I am not going to repeat the inner workings of both malware families, because more technically skilled people can do that better, but let me keep hammering on the following security mantra. And I want to share that hammering with you to help prevent the ransom screen below!


Always patch, patch, and patch

Seriously, just always patch. Always. Always patch and never exclude. I often get push-back claiming this cannot work, and I ask why not. And if you state that this cannot work, you don't grasp the importance of just patching everything.

We have enough to worry about with zero-days alone, without throwing known patchable vulnerabilities into the mix. There is nothing you can do against zero-days until a patch has been released and installed. It's a part you cannot control, and therefore you can let it go. But as soon as there is a patch, just install it.

And what to patch? Well, everything that costs money, enables value or delivers value should be patched. From CCTV, to IoT, to Computers, to Servers, to Network Components, to HVAC and more. And if no more patches are released, apply life cycle management in order to get patch management going again.

Seriously, no exceptions! When you do that as rigorously as I described, everyone will grow accustomed to it. The business, IT, suppliers, employees and customers will all get used to the fact that you always patch, resulting in less worry about global Cyber-attacks.
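
To make this a bit more concrete: below is a minimal sketch (my own, assuming a Debian/Ubuntu node with the standard apt tooling) of how you could report which packages are still waiting for a patch. Swap in yum/dnf, Windows Update or whatever your platform uses; the point is that "what still needs patching" should be a question you can answer at any time.

```python
#!/usr/bin/env python3
"""Minimal sketch: report packages with pending updates on a Debian/Ubuntu node.

Assumptions (not from the post itself): apt is available and the machine can
reach its configured repositories. Adapt for yum/dnf, Windows Update, etc.
"""
import subprocess

def pending_updates() -> list[str]:
    # 'apt list --upgradable' prints one upgradable package per line.
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    lines = result.stdout.splitlines()
    # The first line is the "Listing..." header; the rest are packages.
    return [line.split("/")[0] for line in lines[1:] if "/" in line]

if __name__ == "__main__":
    updates = pending_updates()
    if updates:
        print(f"{len(updates)} package(s) waiting for a patch:")
        for name in updates:
            print(f"  - {name}")
    else:
        print("No pending updates. Keep checking!")
```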

Always use anti-malware, but not only that...

I cannot stress enough that anti-malware is still a required piece of security defense in your arsenal of controls. I'll concede up front that anti-virus is pretty much dead (well, almost), but anti-malware and anti-exploit are not. So you will need anti-virus, anti-malware and anti-exploit protection against both unknown and known pieces of malicious code on pretty much every node.

For instance, Windows Defender for the consumer only does anti-virus and anti-malware for known pieces of malicious code. It does not cover anti-exploit and does not cover unknown threats. From a security perspective it is weak protection (although better than nothing).

There are both business and consumer security solutions that cover all of these elements. And please install those tools on every Operating System for which such solutions exist: Windows, Linux, macOS and likely also Android and iOS, if there are any for them. The reason is twofold.

One is preventing cross-contamination. Why not stop Windows malware from spreading through email while you are working in a Linux or macOS environment? It's called herd-protection. It's nice of you not to forward malware to friends, family and co-workers. Really, they will appreciate it!

Two is that while there might not be many viruses for Linux and macOS, both Operating Systems can still be infected with malware or exploited through exploit-kits by hackers. Yeah, it's possible, really! Assuming you are safe with a non-Windows endpoint is the first step on the road to epic Security failure and, in all fairness, it shows a lack of awareness.
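
If you want to verify that a scanner is actually active on a node, the harmless EICAR test file is a simple way to do it. Below is a minimal sketch (my own, not tied to any specific product) that drops the test string and checks whether the on-access scanner removes it.

```python
#!/usr/bin/env python3
"""Minimal sketch: drop the harmless EICAR test file to verify a scanner is active.

Assumption (not from the post itself): your anti-malware product does on-access
scanning; if so, the file should be blocked or quarantined within moments.
"""
import pathlib
import time

# The industry-standard, harmless EICAR anti-virus test string.
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

test_file = pathlib.Path("eicar_test.txt")
test_file.write_text(EICAR)

time.sleep(10)  # give the on-access scanner a moment

if test_file.exists():
    print("Test file still present: check whether your scanner is running.")
else:
    print("Test file was removed: your scanner appears to be on the job.")
```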

Never ever work under administrative privileges...

One of the key mantras is to never ever work under administrative privileges. Always use UAC (User Account Control) or separate administrator/root accounts. System modifications should not be possible with the account you use as a daily driver (for Internet, Office and whatnot).

Never ever do your maintenance work from an endpoint which has direct access to the Internet. Malware installed through privileged accounts is a headache to overcome, because it spreads so easily to other nodes. Especially with privileged accounts that go beyond being a local administrator.

And while you are at it, always change the default passwords of privileged accounts on everything.
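
As a small illustration of keeping yourself honest here, the sketch below (mine, meant as a gentle reminder in a login script rather than a hard control) warns when a session is running with administrative privileges.

```python
#!/usr/bin/env python3
"""Minimal sketch: warn when a shell or script runs with administrative privileges.

Assumption (not from the post itself): you call this from a login script or a
wrapper around your daily tooling, purely as a reminder.
"""
import ctypes
import os
import sys

def running_as_admin() -> bool:
    if os.name == "nt":
        # Windows: ask the shell whether the current token is elevated.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    # POSIX: an effective user id of 0 means root.
    return os.geteuid() == 0

if __name__ == "__main__":
    if running_as_admin():
        print("You are working with administrative privileges. Drop them for daily work!")
        sys.exit(1)
    print("Good: this session runs without administrative privileges.")
```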

Use a firewall!

Say what? Yeah, I said it. Use a firewall. There should always be a firewall in your network (for home users it is often the router). Depending on budget it can be either a smart and expensive one, or a basic and cheap/free one.

A firewall helps limit traffic that should not be there. It can help prevent traffic getting in from outside sources, and when configured properly (e.g. by disabling UPnP) it can help prevent traffic going out that should not go out. It's about hindering communication with the malware's command-and-control server, which is nice for you and others.

On many Operating Systems there is a so-called local firewall. Enable it (or at least don't disable it). Most often you can configure it to your needs and let it help limit the options to break into or out of the system. That's nice, because you don't want your other systems getting infected.
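
For those who like something hands-on: below is a minimal sketch (my own, assuming an Ubuntu node with ufw installed and SSH as the only inbound service you need) of switching the local firewall to a default-deny inbound policy.

```python
#!/usr/bin/env python3
"""Minimal sketch: apply a default-deny local firewall policy on an Ubuntu node.

Assumptions (not from the post itself): ufw is installed, this runs with
sufficient privileges, and SSH on port 22 is the only inbound service needed.
"""
import subprocess

RULES = [
    ["ufw", "default", "deny", "incoming"],   # block unsolicited inbound traffic
    ["ufw", "default", "allow", "outgoing"],  # tighten further if you track C2 egress
    ["ufw", "allow", "22/tcp"],               # keep remote administration reachable
    ["ufw", "--force", "enable"],             # enable without the interactive prompt
]

for rule in RULES:
    subprocess.run(rule, check=True)

print("Local firewall enabled with a default-deny inbound policy.")
```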

Firewalls of any type are by no means a guaranteed solution on their own, but they can help prevent infection, or limit its spread to and from the Internet. Again, people will appreciate it!

Summary

Below is a small summary of my points above.
  • Always apply patch management and life cycle management.
  • Always utilize anti-malware, -virus, and -exploit solutions for both known and unknown code.
  • Never do daily work with a privileged account, never use such an account while connected to the Internet and always change the default password.
  • Use a network firewall to limit inbound and outbound traffic that should not be there. And use a local firewall for the same purpose.

There is far more that can be done of course, and you should never sit back and think that you are done. But when you really have these controls in place, you can call up your CEO, CTO, CIO, CFO or whatever C-level manager and say that, in the case of an ongoing global attack, nothing more can be done. All while spreading a subliminal message for more budget to increase the capability of Security Incident Response.

And in the meantime I'll just look out the Cyber-window and cry, yet again, over cryware rampaging in our Cyber-world which affects our Physical-world.

Thursday, June 1, 2017

The very different roles of Developer, Engineer and Analyst in regard to Security Awareness

In my daily work as an Information Security Officer I talk with a lot of people. Some of them are (C-level) managers, some of them are Business Owners, and some are Product Owners. But I talk even more to the people who actually create, maintain or break the product they are responsible for. And these are the Developers, Engineers and Analysts.

And oh boy, how differently they approach the very same subject! Let me explain what I have learned from that and how I put that knowledge to work in regard to (increasing) Security Awareness.

The Triangle of Work

I will explain the three different roles further down in this blog post, but the following triangle sums it all up.

The Developer

The main focus of the developer is creating the work (or product). His or her primary driver is building features, testing out new development or build technologies and tons of other cool new stuff.

Resistance is often felt when stability becomes a topic of discussion. Creativity is their driver and nothing can be really stable when creativity needs room.

The Engineer

The main focus of the engineer is maintaining the work. He or she makes sure that whatever the developer is creating is kept running. Often the primary focus does not extend beyond criteria in the domain of availability, although there are of course exceptions.

Resistance is often felt when change is at hand. Everything that needs to be changed tends to create instability. Instability is a common trade-off with creativity, which is to some degree okay to an engineer, but he or she would rather choose stability.

The Analyst

This is where the 'Regular' Testers might reside, but even more the Security Analysts and Penetration Testers. Their main focus is breaking the work (most often just on a theoretical basis though). And this is a kinda new-ish phenomenon in the world of technology.

Now there is suddenly a guy or girl who likes to break things, and they now even have formal positions in companies! It is not only frustrating to the engineer trying to keep all things running, it is also frustrating to the developer to hear about the many teething problems in their great works of art.

The analyst wants to see how the work can be exploited, broken or otherwise negatively impacted. This of course generates insights, not to mention tons of workload, for both engineers and developers.

Do not fight these natural tendencies!

Why? Well, because those tendencies are hard-wired into everyone's brain. You are either one of the three to the extreme, or a certain mix of two or three roles, and changing that isn't done overnight. Can I back this up with scientific research? No; unfortunately, other than my experience in work and life, I cannot (perhaps such research exists though...).

For the sake of argument, let's assume that for the better part I am right.

Creating Security Awareness for the roles of Developer and Engineer

Many Security Officers (just like myself) try to create awareness with developers in how to make their code more secure by design, and try to create awareness with engineers in how to harden everything they keep running (assuming, for now, that Security Analysts are reasonably aware of Security). I am not saying these endeavors (creating awareness) are wasted money and energy, but keep in mind they need one key ingredient. And that is commitment to learn from the awareness.

One might say that everyone is always willing to learn more about making things better, but "making things better" can mean something totally different to someone else.

So how to start the change then?

The first step you should take is accepting the fact that the three roles of developer, engineer and analyst exist and that they will continue to exist. Embrace the fact that everyone looks at the same topic differently. You can learn a lot from it if you really understand how the other one is thinking about the very same work as you.

In order to change someone's opinion, commitment or whatever it is you want changed, you need to influence. There are many books and trainings on putting influence into practice, but it all boils down to this.

You need them to feel very uncomfortable in the situation they are in now and, at the same time, give them a vision of a better place, while giving them the means to reach that place.

Take, as an example, creating awareness concerning Input Validation for developers. You will have to convince the developer that NOT knowing about Input Validation is very wrong and a terrible place to be. Then you will need to create the vision of that awesome place where he or she, as a developer, knows everything about Input Validation. But that is not enough to change. You will need to provide means (training, tools, etc.) for him or her to make the change.
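
Purely as an illustration of that 'awesome place' (mine, with assumed field names and limits), this is the kind of pattern you want the developer to end up with: validate type, length and allowed characters on the server, and reject anything else before it is used.

```python
"""Minimal sketch of server-side input validation; field names and limits are assumptions."""
import re

USERNAME_PATTERN = re.compile(r"^[a-z0-9_]{3,32}$")  # whitelist, not blacklist

def validate_registration(username: str, age: str) -> list[str]:
    errors = []
    if not USERNAME_PATTERN.fullmatch(username):
        errors.append("username must be 3-32 chars: lowercase letters, digits, underscore")
    if not age.isdigit() or not (0 < int(age) < 130):
        errors.append("age must be a whole number between 1 and 129")
    return errors

# Untrusted input straight from a request:
print(validate_registration("alice_01", "34"))               # [] -> accepted
print(validate_registration("alice'; DROP TABLE--", "34"))   # rejected with an error
```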

And that is a lot of work, right?

Instead of working on the change, why not just work on the influence?

Influence leads to change, and change leads to different outcomes. Awareness focuses most often on the change itself, rather than the influence you want to create or the outcome of said change.

What I mean by this is the following, so back to the developer again. You could also incorporate a Security tool in the build pipeline (the 'build-street') that automatically tests code and gives immediate feedback to the developer. Now the developer has two options: either ignore the errors or fix them. And this is where emotions come in (read: influence). I have yet to come across a developer who likes compile, build or lint errors. Errors are no good and need fixing, and that is the driver in many cases at least.

If you can incorporate Security testing (at least to some degree) into a developer's daily work, you have created continuous awareness training without the pain of creating it in people's minds first. Instead, you work the other way around. You make sure that the means for improving are already in place, and through those means you create insight into the awful place they are in (no Input Validation knowledge). And those means and insights help you become more Secure by Design.
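
As a sketch of such a build-street gate (my own, assuming a Python code base and Bandit, a real open-source static analysis tool; swap in whatever scanner fits your stack), the idea is simply that security findings fail the build just like any other error.

```python
#!/usr/bin/env python3
"""Minimal sketch of a pipeline gate that gives developers immediate security feedback.

Assumptions (not from the post itself): the project lives in ./src, is written
in Python, and Bandit is installed. The pattern matters more than the tool.
"""
import subprocess
import sys

# '-ll' limits the report to findings of medium severity and above.
result = subprocess.run(
    ["bandit", "-r", "src", "-ll"],
    capture_output=True, text=True,
)

print(result.stdout)

if result.returncode != 0:
    print("Security findings above: fix them just like any other build error.")
    sys.exit(result.returncode)

print("No medium-or-higher findings. The build may proceed.")
```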

Conclusion

There is no single road that leads to better Security awareness, so keep awareness fit for your audience and focus on the result, supply the means and forget about the change itself (that will come by itself). But also do realize that the three roles will never go away, and that you need all three roles in your team or department to make good decisions.

Help Developers, Engineers and Analysts understand that everyone has to do their part in the greater picture of Technology. When there is respect for each other's opinions and drivers, people will open up and will be more eager to learn from one another. Bashing Developers for yet another vulnerability will not improve Security Awareness, and neither will bashing an Engineer for not patching.

Implement the means (processes and/or tools) that (preferably automatically) help Developers and Engineers improve Security. The Analyst can then play a tremendous role in helping both roles to continuously improve.

And I am convinced that when you can create such a culture as a Security Officer, you will dramatically improve the overall security!

Tuesday, March 14, 2017

With Internet-of-Things, the consumer is not a customer but a supplier!


Authors: Joram Teusink, Rick Veenstra

In the soon-to-be world in which everything is connected to everything, we will face quite a few (unforeseen) challenges. One of those challenges is related to the protection of privacy and the security of the system as a whole. Another is the role of the consumer. To quickly sum it up in a single statement:

In the world of the Internet-of-Things, the consumer is not (only) a customer but (also) a supplier.


In this blog post we will focus on why we make this statement and why we think this new paradigm holds true. We will talk about the fundamental shift in the way we think, or should think, about data and privacy.

You own your devices

This sounds rather plausible, right? And it is. Because with every device you buy, you take on the responsibility of owning it. Whether it is connected to anything beyond the power cord or not, by paying the bill you accept the ownership of the 'thing'. This has many consequences, but we'll first address the issue of data ownership.

Do you own the data?

Imagine the device you bought is connected to a data network. Not exactly a mental stretch with a 'thing' we refer to as a device in the 'Internet of Things'. This device will generate or accumulate data. This might be simple sensory data or more complex data structures that describe behavioral patterns. It tells something about who you are and what you do. This data is often considered sensitive or at least personal. In the near future, this will certainly extend to data we have not yet dared to dream about.

For now, this data may come from your thermostat building up a profile of your presence and the temperature of your room, or from your 'smart' TV building up a profile of what you like to watch. Another popular application is home security, such as video cameras that continuously register what is going on in your most private environment. Which is, by the way, an intrusion into your privacy that police and intelligence agencies are only allowed to perform with a legal warrant. All this collected data is privacy sensitive. It becomes even more sensitive when combined with data that is collected by other devices you may own or services you use.

This data is valuable for you. Or at least it should be. Most people do not want their conversations with their lovers to be openly shared across social media -- especially not if that conversation is spiced with photographic material of a certain private nature. But that is not the only data that is valuable. Almost all data about you should be valuable to you and treated as sensitive. "Why?" you might ask. Because this data can be used to influence you. Your data is being used in ways that cause no direct damage, at least no damage that you could easily identify. But the way you are being influenced is not necessarily in your direct interest; it is primarily aimed at increasing the profit of some company.

Companies are willing to spend substantial money to get access to this data. Because they can use it to tailor their ways of influencing your behavior, which is the very nature of advertisements.

There are many business models in which large corporations get access to your sensitive data. But for now, we will stick to the one that is the easiest to grasp and is the basic premise that most people base their use of data-collecting devices on:

You own the devices; therefore, you are the owner of all data that is collected by these devices.

Are you selling your data?

Now follow us in the next step. You own and operate devices that collect data about you. What are you doing with this data?

When you use devices that collect all this data, and you own both the devices and the data, do you use them commercially? It depends a bit on the privacy policies you agreed upon with the supplier of the services, but the answer is very likely a "yes". Even when you are just an individual and not a company. This may not be the way you look at it now, so let us explain why we make this statement.

Most likely you share this data with a company, in exchange for a service or an 'enhanced experience' as they so exquisitely frame it. This company may be the one that built the device or provided it to you; frequently these things are bundled with an online service directly coupled to the device. It may also be a third party, which uses the data to enhance your experience of the device or to provide additional (value-adding) services. In both cases, you provide data in exchange for a service.

Usually this can be characterized as one of four types of supplier-customer relations:

  1. You have paid for the service with a one-time fee with the purchase of the device. This usually covers a limited period of time, although this is rarely made explicit. Degradation of service over time is almost guaranteed; the older the device, the flakier support tends to become. Your smartphone or tablet are likely the most explanatory examples.
  2. You pay a recurring subscription fee, which is usually supposed to cover all operational costs of the service provider. This model is the traditional pay-as-you-go service model that has been around since way before the internet was conceived and is still the most viable for the supplier in the long term.
  3. You do not pay any monetary fees, but your usage of the services adds to the momentum and user base for the service provider's paid services, which cover the cost of the free services as well.
  4. You do not pay any monetary fees, but allow the service provider to use your data to build profiles that it will monetize at its own discretion, i.e. targeted advertising. This case is covered by the adage "if you don't pay for the product, you are the product".
So, whether it is in exchange for money or not, you actually trade (sell) your data.

How does this affect your position in the 'supply chain'?

Our argumentation up to this point can be summarized in three statements:
  • you own the IoT-device; 
  • you own the data it collects; 
  • you trade the data.
If you own stuff that collects or generates data and you trade it with a company, you are essentially a service provider. You are the supplier of your data, and you sell it in exchange for either money, features or services! The other party is the consumer, since it 'consumes' your data. This might sound very silly or strange, so let us explain why we think this holds true.

  1. There is a supplier who sells IoT-devices. It supplies you with equipment, with 'things'. When you buy something, you are the customer.
  2. Upon enabling the IoT-device to provide data to the supplier's services –or those of a third party– you become a supplier yourself: a supplier of data to be precise.
  3. Your customer is in many cases the supplier of your device (the equipment), now in the role of the one who is utilizing your data.
So, in this situation you are performing four roles:

  1. the customer in buying the device;
  2. the supplier of data by trading your data with your customer;
  3. the customer in paying for services which utilize the data generated by the device;
  4. the consumer of the services.
Traditionally we only think about #1 and #4 and consider them as one single role. This has the consequence that role #2 is obscured; it is rarely more than an afterthought, when it is considered at all. That is why we strongly believe this paradigm shift is needed, or at the very least should be debated.

Because you as a customer also perform role #2, we can state:

In the world of IoT, the consumer (of a device) is a supplier (of data).

The consumer as data supplier: privacy considerations

If you consider the data flow we described as a supply chain, you make the shift from a customer/consumer-centric view to a set of demand-supply relations that is very common in the corporate world. And if you are familiar with data protection in this context, you might get a little itchy. Because this viewpoint has quite some implications for accountability and liability regarding data protection and security.

If you are a company and selling data to a data consumer (your customer), you are required to do at least four things:

  1. specify the Terms of Use for the data;
  2. obtain consent of the data subjects (the individuals about whom you collect data) to this Terms of Use;
  3. cover the use of this data by a legally binding agreement with your customer;
  4. take every reasonable precaution to ensure that the data is only used by authorized parties and only for the intended purpose for which it has been collected. This is the topic of Information Security and Privacy.
These obligations are all within the scope of the EU General Data Protection Regulation (a.k.a. the EU-GDPR).

If you are an individual, requirement #2 may be considered implicitly satisfied, since the data subject is the very same entity that provides the data. This in no way limits your responsibility regarding data protection (requirement #4). It is just rather difficult to sue yourself for inadequate performance or non-compliance.

But when you provide services with collected data about other subjects that you do not legally represent, this (at least theoretically) can get very ugly very quickly. If you want to make it really complicated, you might add another ingredient to this cocktail: the right to be forgotten that is embedded in the EU-GDPR. But that's another topic for someone with a legal background.

We'll now step into the complications of requirement #4: the obligation to protect the security of the data you collect and distribute.

Ownership of a device: the obligations

Okay, you purchased a device and connected it to The Internet of Everything. As we have shown before you have now become a service provider. You are going to provide data to your customer. Data that will probably have privacy-sensitive characteristics, so you are required to protect this data from unauthorized access and use.

By paying the bill you accepted the ownership of the 'thing'. In doing so you made yourself accountable for all benefits and costs that come with the simple existence of the 'thing'. You are not only 'responsible' for reaping the benefits but also for the burden of operations and maintenance during its entire life cycle. Even if you manage to outsource this, which for consumer grade IoT is not (yet) likely, if at all possible, in the end you stay accountable.

If you own the device and you own the data, then you are responsible for its security. No one else but you, really! European privacy legislation is based on this very principle. Whether you generate Personally Identifiable Information (PII) or are the custodian of PII that is entrusted to you by its subject (the individual about whom the data has been collected), you are what the EU-GDPR calls the 'controller' and ultimately accountable for the entire supply chain.

We could collapse role #2 (the supplier of data) with role #4 (the consumer of the processed data). If we consider it this way, the device owner (being both supplier and consumer) outsources a part of the data processing. Probably to the provider of 'value adding services'. At least the Dutch Privacy legislation requires you to cover this processing with a legally binding Data Processing Agreement (DPA). This should specify the responsibilities (not the accountability) transferred to third parties. You mandate other companies to handle data you are responsible for. You set constraints on the exact 'processing activities' that will be performed and the conditions under which they will do this.

At least that is the theory. We all know that only in theory there is no difference between theory and practice; in practice, there is. But alas, it is not only the theory but also the law.

But consumers buying things and using the services that accompany them do not sign such a DPA with their third parties. At best, they accept the Terms of Service that govern such services. And let's hope that is not an instance of what we call 'the biggest lie on the internet': they click a checkbox stating something like "I have read the (...) and accept to be bound by this" without reading the text. It is quite likely that these terms of service just contain a full waiver to the service provider for all responsibilities that should be contained in a Data Processing Agreement.

Where the snake bites its own tail

Now we have the situation where the consumer remains ultimately responsible for data processing that he has completely outsourced. Even worse: processing has been outsourced in a way that has essentially stripped him of any control over it.

In your role as a consumer of services, you subscribe to a service using your PII. You agree that this service provider will be processing your sensitive data. You would require adequate protection of this valuable data. But your service provider has outsourced the data collection process to a party that is unable to adequately protect this data as it is generated, collected, stored and transferred. This would be unacceptable because the supplier is violating the EU GDPR.

But in this case, you are the data provider yourself. You as a service consumer agree that the service is delivered by an unreliable data provider and therefore you cannot hold the service provider accountable for any consequence of this data provider failing. Ridiculous if a third party were involved, but legally valid if you are the data provider yourself.

Conclusion

The way we look at the Internet of Things --especially in the consumer domain-- needs a fundamental change in perspective. If we consider the data flow generated by IoT-devices as a supply chain, we make the shift from a customer/consumer-centric view to a set of demand-supply relations. This is a rather uncomfortable position, because it has quite some implications for accountability and liability regarding data protection and security.

At the same time consumers (including ourselves) are not acting in alignment with the roles that this viewpoint uncovers. We end up being service providers who are completely responsible for data protection over the entire supply chain. Yet our customers and suppliers are usually not bound to any obligations to protect this data. And we lack the tools to do it properly ourselves.

Thinking of IoT as a chain of demand-supply relations may help to identify systemic weaknesses. It may turn out to be a good start for finding effective strategies to fight the undesirable exploitation of these weaknesses.

About the authors

Joram Teusink and Rick Veenstra are both Information Security Officers and close friends.


Wednesday, March 8, 2017

Book review: Hacked Again, by Scott N. Schober

The book Hacked Again is all about learning your Security lessons. And then learning them again and again. Scott N. Schober, both the author of this book and CEO of Berkeley Varitronics Systems, talks about how his Security company fell victim to a hack. Twice, to be precise.

He starts off by talking about learning it the hard way. Trusting your bank is important, but did you ever consider what that trust is based on? And what about opting in for credit card payment for your customers? Can that be exploited in any way? And if it can, how easily can you be compensated for any (financial) damages? After the second hack on Scott's company, he decided to answer those questions and move to another bank.

Website security is another topic he addresses. Your business is likely run through a website; therefore, its security needs to be validated and continuously improved. Failing to do so can (or will) lead to breaches which will impact your reputation and eventually your financial situation. Besides that, he also warns about the dangers of not being careful in how you utilize the great benefits of social media.

But how to protect yourself and your business? To answer this question Scott starts off with social engineering. Why? Because most often the human proves to be the weakest link, and we tend to give away more information than we would like to admit. Disconnect your credentials from any personal information, directly and indirectly. This way it is more difficult to turn social data into a means to breach your accounts. Because who likes to be grabbed by a hacker's hook in a phishing attempt?

Malware also takes the stage in his book. A good deal of pages is devoted to this subject (rightfully so!), and it all comes down to this: patch everything, use a malware scanner, and do not click on links or attachments from unknown or untrustworthy sources.

Strong passwords trump lazy hackers! And I could not agree more. There is much debate on what a strong password is, but in any case, a lengthy one is a good starting point. Password re-use and password guessing are among the core ingredients of successful breaches.
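
To illustrate why length is such a good starting point (this sketch is mine, not from the book), a rough brute-force estimate shows that every extra character multiplies the attacker's work, so a long, simple passphrase easily beats a short, complex password.

```python
"""Minimal sketch: rough entropy estimate for randomly chosen passwords."""
import math

def entropy_bits(charset_size: int, length: int) -> float:
    # The search space is charset_size ** length, so entropy grows linearly with length.
    return length * math.log2(charset_size)

# 8 characters from a full 94-symbol keyboard set vs. a 20-character lowercase passphrase.
print(f"8 chars, full keyboard: {entropy_bits(94, 8):5.1f} bits")
print(f"20 chars, lowercase   : {entropy_bits(26, 20):5.1f} bits")
```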

Wireless networks, or so-called Wifi, are another big threat. Especially public ones. There are many ways in which they can be exploited, and one should always be careful before connecting to one. In case you build your own Wifi network, embrace security best practices such as strong Wifi passwords and an obscured SSID. It's by no means a foolproof system, but at least make it hard.

Scott ends his suggestions with topics like layered security, meaning you should have a multitude of security controls throughout your network. Do not rely on just one or two controls; make sure one steps in when another fails. He also stresses the fact that Security is everyone's business. We cannot lean on the government for protection. The Internet is decentralized and comes with its weaknesses in that regard, meaning that the defense has to be decentralized as well.

The book ends with notable hacks and breaches, such as the Target breach, the JPMorgan Chase breach, the iCloud cyberhack, the Sony cyberbreaches and the hacks at the Office of Personnel Management (OPM).

All in all, a nice read. Especially for business owners, CEOs, Presidents of Boards of any organization and so on. Read it, listen to your Security experts, and prevent financial losses your company cannot afford!

First released: March 2016
Pages: 187
ISBN: 978-0-9969022-1-2
Link: scottschober.com


Monday, February 27, 2017

This blog not affected by the memory-leak of Cloudflare

In the post 'Implementing https on Blogger using Cloudflare' I describe how I utilize Cloudflare services to protect this blog. Unfortunately, we all noticed that Cloudflare suffered a tremendous memory leak, which was announced on Friday, February the 24th.

This leak (also referred to as #Cloudbleed) has been extensively covered elsewhere.

On February the 24th I also got confirmation that this blog has not been affected by the memory leak in the HTML parser of Cloudflare.

So, no worries in that regard concerning this blog.

Thursday, February 16, 2017

Privacy Guidelines: Consent, Purpose, and Retention


Part of: Privacy Guidelines
Overview: Building a set of Guidelines for Security and Privacy

Below are the guidelines for privacy with the elements of consent, purpose and retention. Whenever 'organization' is mentioned, you can read businesses, healthcare and government. Whenever 'people' is mentioned, you can read customers, consumers, employees, patients and clients. Data in this case falls in the category of Personally Identifiable Information (PII), which is subject to national and EU laws and regulations.

Only collect with consent

Data is only collected with consent of the subject of the data.

Meaning that you don't collect data on people, explicitly or implicitly, without them knowing about it and without their consent to do so.
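
A minimal sketch of what recording that consent could look like (mine; the names are assumptions) is below: one record per person and per purpose, so every later use of the data can be checked against it.

```python
"""Minimal sketch: record consent per subject and per purpose (names are assumptions)."""
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    subject_id: str               # pseudonymous identifier of the person
    purpose: str                  # the specific business function consented to
    granted_at: datetime          # when consent was given
    withdrawn_at: datetime | None = None

    def is_valid(self) -> bool:
        return self.withdrawn_at is None

# Consent for one purpose does not cover another purpose.
consent = ConsentRecord("subject-42", "order-fulfilment", datetime.now(timezone.utc))
print(consent.is_valid())                          # True
print(consent.purpose == "marketing-profiling")    # False -> no basis for profiling
```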

Only collect for purpose

Data is only collected for the business function that it is strictly needed for.

In essence, make sure that you don't over-collect data about people. Data that is not needed for the business to operate is data that should not be collected.

Destroy after use

Data is only kept for the time that it is strictly needed for the processing or as required by law.

Don't keep data about people longer than is needed for the purpose. Whenever the relation between the organization and the person has ended, delete the data. Keep only the data that you are required to keep by law, and make sure the retention time stays within those boundaries.
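
In practice this means a scheduled clean-up job. Below is a minimal sketch (mine; the table and column names are assumptions, and timestamps are assumed to be stored as ISO-8601 text) that purges records whose retention period has expired.

```python
"""Minimal sketch: purge records whose retention period has expired."""
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365 * 7)  # assumed retention period required by law

def purge_expired(db_path: str) -> int:
    cutoff = datetime.now(timezone.utc) - RETENTION
    with sqlite3.connect(db_path) as conn:
        # Assumes 'relation_ended_at' is stored as ISO-8601 text.
        cur = conn.execute(
            "DELETE FROM customer_records WHERE relation_ended_at < ?",
            (cutoff.isoformat(),),
        )
        return cur.rowcount  # number of records destroyed

if __name__ == "__main__":
    print(f"Purged {purge_expired('crm.db')} expired records.")
```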

Only enrich within purpose

Data enrichment is only done within the context of the initial collection and consent of the data.

You can profile and track people to an incredible extent. Besides the fact that you need consent for this profiling, the profiling itself may not excessively step outside the boundaries of the purpose of your organization.

Designate Data Ownership

Data always has an owner, or at the least a steward, who upholds the Security Guidelines and Standards.

Data, just like systems and services, needs an owner. Data can travel through many systems, and although all those systems might have owners, they can't actually own the data because it is shared. When data has an owner, it can get the proper attention it needs for things like consent, purpose and retention.