Saturday, February 21, 2015

Book review: Schneier on Security, by Bruce Schneier


"The closest the security industry has to a rock star.", The Register says. And they are downward right. Bruce's reflections on security and privacy related topics are down to earth, both practical and well-thought, and still (sometimes painfully) true to the present day.

The book "Schneier on Security" by Bruce Schneier is a collection of posts and essays he wrote in various magazines, newsletters and his own blog on schneier.com. They are all relatively short stories, but they hold great value nonetheless. Below is a list of topics I found relevant for myself in his book.
  • Terrorism and Intelligence Agencies;
  • Liability and Security by Design;
  • Privacy and Surveillance;
  • Economics and Psychology of Security.
The topics are covered in twelve chapters, including an extensive list of references. Whatever the case, Bruce says that security is always a trade-off, and I totally agree with him here. Security is not absolute, and living your life is risky by design. Building security into your life, technological or otherwise, always has a cost. People should weigh those costs against the real benefits of security, not just the feelings that come with it.

I will cover the topics in varying levels of detail and highlight the points I found meaningful, what I learned, and my own reflections. Be aware that this is my interpretation of Bruce Schneier's book; it may or may not match the author's own opinions, because there is always a risk of misinterpretation on my part.

Terrorism and Intelligence Agencies

It should go without saying that I do not condone terrorism or violence in general, but there are three questions we need to ask regarding terrorism and security against it.
  1. How real or present is the threat?
  2. How can we really decrease the chance or impact of its risk (by increasing security)?
  3. And how much are we willing to pay (money, freedoms, convenience) for it?
There is a lot of debate these days about the reality of the terrorist threat. It is often the sole reason government agencies are given more power: more power to counter terrorism. Often that increase in power makes us feel more secure, but it seldom leads to an actual increase in security.

In order to properly address security against terrorism, it is important to let go of the fear. Fear clouds our judgement and often leads to wrong decisions, which can even result in decreased security, an increased threat, and people feeling more secure while in fact they are not. As a society we often 'help' the terrorists' cause by being afraid of threats that are likely not worth the countermeasures.

So in order to actually become more secure, Bruce says we need to increase targeted surveillance and investigation, cut off the funding of terrorists, and find the terrorists themselves instead of guessing where they will attack next. Besides these steps, it is very important to improve the way we respond to emergencies and to lessen the impact of an attack. And last, but perhaps most important, we need to reconsider our foreign policies and the way they increase or decrease goodwill towards our Western democracies.

Giving intelligence agencies more power and tools to collect data on everyone will not help increase security. It will help create a government-controlled society. But more on that subject later.

Liability and Security by Design

Bruce talks a lot about voting machine security in this book. These machines are often poorly developed, tested and implemented. What applies to the security of voting machines really comes down to security by design for every piece of software: when creating something, start by asking what can go wrong.

Liability, or the lack of it, is the main reason why software tends to be insecure by default and why security is patched in later in its life cycle. Software developed this way is generally less secure (even with 1,000 security patches) than software developed with security in mind and in practice.

Bruce says that software with 100 patches is not more secure than software with 10 patches, but also not less secure. Because the software was not developed with security in mind, you simply don't know, and therefore you have to assume it is vulnerable. Vulnerable by design, actually.

There are a lot of practices, like those from OWASP, that can help build software that is more secure by design. The main problem here is liability. The ones who suffer from poorly designed software are not the ones who can actually influence its development. If software companies were (more) liable for insecure products, they would make their software more secure. Fundamentally this is about the economics of security (which I will cover later in this post).

So, if we want more secure voting machines, more secure operating systems and more secure applications, then we need to change who is liable (within reason, of course) for the products that are developed.

Privacy and Surveillance

I mentioned targeted surveillance in the topic about terrorism and what Bruce says about good security practices. There are many possible trade-offs for increasing security, and decreasing privacy is one of them. Bruce talks about intelligence agencies a lot, and I follow his opinion on this matter. These agencies have been part of our lives for centuries and will be for centuries to come. They can actually increase our security, and therefore they make sense. But the trade-off has been changing over the past decades due to new technologies and a lack of self-restraint.

Bruce suggests, in the case of the National Security Agency (NSA), splitting it up into three parts and placing them under existing agencies. The functions it provides are necessary to counter crime and foreign threats. In short, the split is listed below.
  • Domestic targeted surveillance, to be placed under the supervision of the FBI (they are bound to laws that require court orders and such);
  • Foreign targeted surveillance, to be placed under the supervision of the CIA (they are bound to laws that require them to work against foreign wrongdoing);
  • Targeted digital attacks, to be placed under the supervision of the military (attacks, whether digital or not, should always be under the control of the military).
And we should stop mass surveillance altogether. Mass surveillance does not increase our security. It does increase the associated IT costs, and it does decrease the privacy of all civilians, both domestic and foreign (this is the trade-off!). It treats everyone as a suspect, it impacts our privacy, how we think and act, and ultimately it impacts our democracy and freedoms.

Let me quote a passage from the Declaration of Independence of the United States of America, written on July 4th, 1776.
...that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.--That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, --That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness. Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed...
Let us be reminded by the words of these wise men that all men are created equal, that all have the same human rights, that all should live in freedom, and that no man, woman or child should be the victim of a government that undermines those principles.

Economics and Psychology of Security

The last topic I want to address is the economics and psychology of security. The two are linked, and Bruce rightly addresses them together.

The economic side of security is that security often fails due to the wrong economic incentives: the people who could provide security are not the ones who suffer the negative impact of insecurity. This is the liability issue I covered earlier in this post. In short, if we can shift these dynamics, we will see better economic decisions regarding security.

The psychological side of security is also important. Bruce focuses on two subjects here: the difference between feeling secure and being secure, and risk-seeking versus risk-avoiding decision making.

An important factor in a security product is whether it actually makes you secure. Truly secure products tend to be costlier than less secure ones (due to longer development and testing times). But when both products make us feel equally secure, we will choose the cheaper one, even though it is actually less secure. It is important for businesses, for everyone actually, to be aware of this dynamic of our subconscious mind. When choosing between security products we need to make decisions based on data.

And this is where another pitfall lies. Bruce says, based on many studies done by others, that we tend to be risk-seeking when we have something to lose and risk-avoiding when we have something to gain.

It all comes down to this. When a security professional advises a board of directors to implement security product xyz to prevent a potential loss of $1,500,000 somewhere in the next 5 years, he has a difficult job, especially when the security product will cost $500,000 over those 5 years. The board will likely take the risk, because otherwise they are guaranteed to lose the $500,000.

But when a business has something to gain, it tends to be risk-averse. When a business can choose between gaining $500,000 now or a chance of gaining either nothing or $1,500,000 in the next 5 years, it will likely not take the risk and settle for the $500,000 right now.
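To lay the two framings side by side, here is a minimal sketch in Python. The 40% chance of the $1,500,000 event actually occurring is a made-up assumption purely for illustration; with that number both gambles have the same expected value, yet the framing pushes decisions in opposite directions.

# A minimal sketch of the two framing scenarios above.
# The 40% probability is an assumed, made-up number for illustration only.

def expected_value(probability: float, amount: float) -> float:
    """Expected monetary value of a single uncertain outcome."""
    return probability * amount

P_EVENT = 0.40                # assumed chance the $1,500,000 event happens within 5 years
UNCERTAIN_AMOUNT = 1_500_000  # potential loss (scenario 1) or potential gain (scenario 2)
CERTAIN_AMOUNT = 500_000      # certain cost (scenario 1) or certain gain (scenario 2)

# Scenario 1: framed as a loss. Expected loss without the product vs. the certain cost of buying it.
print(f"Expected loss without the product: ${expected_value(P_EVENT, UNCERTAIN_AMOUNT):,.0f} "
      f"vs. certain cost of ${CERTAIN_AMOUNT:,}")

# Scenario 2: framed as a gain. Expected value of the gamble vs. the certain gain now.
print(f"Expected value of the uncertain gain: ${expected_value(P_EVENT, UNCERTAIN_AMOUNT):,.0f} "
      f"vs. certain gain of ${CERTAIN_AMOUNT:,}")

# With these made-up numbers both gambles are worth $600,000 on paper, yet boards tend to
# gamble in the loss frame and take the sure $500,000 in the gain frame.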

And as security is inherently a fear-sell, as Bruce states in various online seminars, this probably won't change and decision makers will have to try to base their decisions solely on data.

Conclusion

I think it is a very nice book and definitely worth reading. Although everything in it was written between 2002 and 2008, it still holds value today. My advice: buy it, read it and draw your own conclusions.

You won't regret it!

First released: September 2008
Pages: 336
ISBN: 978-0-470-39535-4
Link: https://www.schneier.com/books/schneier_on_security/

Friday, February 13, 2015

Android Wear and Google Fit and its privacy and security

I have had the Motorola Moto 360 Android Wear smartwatch in my possession for a while now. Generally speaking I am very happy with the watch, although it has no standout features (yet?). It is a nice extension to an Android smartphone. It is not a smart device by itself, but personally I think this is a good step towards the "one device, multiple screens" principle.

This is the one :)
This is not a review of the Moto 360 itself, though. Instead I will take a closer look at the Google Fit service and its privacy and security characteristics, since such a service is adopted much faster and more easily when you have more sensors on and around your body. It is important to know that you do not need a smartwatch to use Google Fit, but it certainly enhances the experience. For the record, I have been using Google Fit for more than a month now.

For those curious about the attachments and the leather band: SteelConnect M and Fossil 22mm Flight ACH2696.

What is Google Fit?

Google Fit is a service that provides functionality like heart rate monitoring, step counting, calorie tracking and activity monitoring. It can also distinguish between walking, running and bicycling. It appears that when you drive a car or travel by train (probably simply when going faster than a certain speed) it will not count towards your activity. So no cheating there!

The types of data that are automatically collected are:
  • Steps / distance
  • Time of activity
  • Detection of going on foot (walking and running) or by bike
  • Location data
The types of data that are manually collected are:
  • Heart rate
  • Weight
  • Height
  • Gender
Based on that data you can generate graph views going back to whenever you started collecting the information above. You can also define your daily activity goal and select the units in which all measurements are presented.

An example of a Graph of Google Fit
An example of Google Fit on Android Wear
You can find the Google Fit support page here.

Activity Detection and Accuracy level

In the Google Fit app there are two settings I want to mention. The first one is Activity Detection. When it is enabled, the app puts all the sensors to work to detect whether you are being active or not. The service runs continuously in the background and when activity is detected, the data is collected.

The other setting, which was added in a recent update, covers the new GPS functionality. If you enable this feature, you allow other apps (and thus also Google Fit itself) to improve distance and location tracking while you perform your work-out. The warning about reduced battery life holds true, and ultimately I disabled this feature myself.

Despite the fact that this functionality was only recently added to the app, the data was actually already being collected. It can now be more accurate than before (apparently), and the data also shows up in your dashboards in the app and on the website.

So what is collected in summary?

Google Fit collects data about when you are active and inactive. It can sense whether you are likely walking, running, biking, moving by car or train (it simply ignores the latter), or doing nothing at all. While you are active, Google Fit can measure how fast you are going and where exactly you are at that moment. And when you submit your heart rate, height and weight, it will use that data too. Google Fit can correlate all of this into a view of your movements, physical condition and whereabouts during a specific time period.

What are the privacy and security features of Google Fit?

The privacy and security features are important, so here is a small list.
  • Communication with the service goes through a secure connection; see the image below for the details. It is not possible to communicate with the service over an insecure connection, and there is no way to use Google Fit without a Google Account. Protect your Google Account well, especially when you are collecting health data about yourself.
  • Delete History feature. This deletes all your Google Fit data from Google's services. When you want to stop using Google Fit, first disable it or remove it from all your devices, then delete your history through the website itself.
  • Third-party connections. When you want third-party apps to connect to your Google Fit data, you have to explicitly give them permission first. Beware here, because those apps can also store data in your Google Fit service and can share or use it in ways other than Google intends with this service (see the sketch below). You can find more about this here.
Information about the Security of the connection
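To make the third-party point concrete, below is a minimal sketch of what an app could read once you have granted it access, using the aggregate endpoint of the Google Fit REST API. The access token is a placeholder and is assumed to have already been obtained through Google's OAuth consent screen with the activity-read scope.

# Minimal sketch: a third-party app reading your daily step counts through the
# Google Fit REST API after you have granted it access.
# ACCESS_TOKEN is a placeholder for an OAuth 2.0 token carrying the
# fitness.activity.read scope.
import time
import requests

ACCESS_TOKEN = "ya29.placeholder-token"  # placeholder, obtained via the OAuth consent flow

now_ms = int(time.time() * 1000)
week_ago_ms = now_ms - 7 * 24 * 60 * 60 * 1000

body = {
    "aggregateBy": [{"dataTypeName": "com.google.step_count.delta"}],
    "bucketByTime": {"durationMillis": 24 * 60 * 60 * 1000},  # one bucket per day
    "startTimeMillis": week_ago_ms,
    "endTimeMillis": now_ms,
}

resp = requests.post(
    "https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
resp.raise_for_status()

# Sum the step counts in each daily bucket of the response.
for bucket in resp.json().get("bucket", []):
    steps = sum(
        point["value"][0].get("intVal", 0)
        for dataset in bucket.get("dataset", [])
        for point in dataset.get("point", [])
    )
    print(bucket["startTimeMillis"], steps)

Once data like this has been handed over, it has left Google's infrastructure, which is exactly why the permission step matters so much.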

What about the privacy policy of Google Fit?

When you enter "privacy policy google fit" in your Internet search tool, you cannot easily find information about this subject. But there is some info to find. Lets start with an overview of the Google Fit platform.
Google Fit platform overview
The fitness store of Google, as described on the Google Fit platform overview page, is the following.
The fitness store is a cloud service that persists fitness data using Google's infrastructure. Apps on different platforms and devices can store data and access data created by other apps. Google Fit provides a set of APIs that make it easy to insert data and query the fitness store.
The way the service works is explained in a short but transparent manner. There is no easy way to find specific privacy information about Google Fit, though. When you look at the Terms of Service (ToS) of Google Fit for developers, you can find the following Use Limitations.
Google does not intend Google Fit to be a medical device. You may not use Google Fit in connection with any product or service that may qualify as a medical device pursuant to Section 201(h) of the Federal Food Drug & Cosmetic (FD&C) Act.
Unless otherwise specified in writing by Google, Google does not intend uses of Google Fit to create obligations under the Health Insurance Portability and Accountability Act, as amended (“HIPAA”). Google makes no representations that Google Fit satisfies HIPAA requirements. If you are or become a Covered Entity or Business Associate under HIPAA, you agree not to use Google Fit for any purpose or in any manner involving Protected Health Information (as defined by HIPAA) unless you receive prior written consent to such use from Google. You acknowledge that upon discovery of a violation of this provision, Google may terminate your use of Google Fit. You are solely responsible for any applicable compliance with HIPAA and agree to hold Google harmless for any uses contrary to this provision.
They pretty much say that Google Fit is not suitable for anything related to healthcare or health insurance (I understand that), nor does it pretend to have the same security level as healthcare or health insurance services (frowny face here). And that last part might be a problem if you consider that there is a fine line between health data and medical data. Where does one end and the other begin?

In the end, there is no specific privacy policy for Google Fit. I cannot find it, but if you can, please let me know!

Okay, so what about Google's general privacy policy?

Since there is no specific privacy policy for Google Fit, we have to assume that the general privacy policy of Google applies to this service. Another reason for this assumption is that you need a Google Account to use Google Fit in the first place, and thus it all falls under the same general privacy policy.

You can find above which types of information are gathered by the Google Fit service and its third-party applications. So to keep this blog post somewhat short, I am going to skip that part of the privacy policy and move straight to how information is used and shared.

Your Google Fit data might be used for:
  • Improving Google services.
  • Showing more relevant search results.
  • Combining health data with other Google services.
  • After anonymization, sharing with their partners such as publishers and advertisers.
  • Sharing with partners to which Google has outsourced services.
  • And sharing for the obvious legal reasons. Keep in mind that governments or intelligence agencies might get your health data from Google Fit more easily than from healthcare or health insurance companies.
There is also a short section about sensitive data; this is Google's statement on sensitive information.
This is a particular category of personal information relating to confidential medical facts, racial or ethnic origins, political or religious beliefs or sexuality.
And when information is classified as sensitive, Google claims in its privacy policy that it will only be shared after you give your explicit consent.

The big question that remains: does Google consider Google Fit data to be confidential medical facts? I cannot find an explicit statement that it does.

And what about third-party apps and services?

Technically speaking the Google Fit service is well protected, and you can enhance your own Google account security with things like 2-Step Verification or even a FIDO U2F Security Key. But as Ken Taylor correctly states in the comments below, this security becomes far less relevant when you connect third-party apps or services to your Google Fit data.

Apps can use the data you give them access to, and therefore the privacy policy of the third-party app or service applies to that specific data exchange. While Google Fit won't sell your data, it may very well be that the third-party app will do so, or will do something even more nefarious with it.

In this area the Google privacy policy offers no protection. The data exchange itself is secure and the authentication is based on OAuth. But once the data has been transferred to the third party, there simply is no protection from Google anymore.
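As an illustration of that OAuth-based exchange, here is a rough sketch of the consent step a third-party Python app would go through, using the google-auth-oauthlib library as one possible way to do it. The client_secrets.json file name is a placeholder for the app's own OAuth client credentials, and the scope list is just an example; the point is that the consent screen is the last moment at which Google stands between you and the third party.

# Sketch of the OAuth 2.0 consent step a third-party app goes through to get
# access to Google Fit data. "client_secrets.json" is a placeholder for the
# app's own OAuth client credentials.
from google_auth_oauthlib.flow import InstalledAppFlow

# The scopes the third party asks for; the user sees these on the consent screen.
SCOPES = [
    "https://www.googleapis.com/auth/fitness.activity.read",
    "https://www.googleapis.com/auth/fitness.body.read",
]

flow = InstalledAppFlow.from_client_secrets_file("client_secrets.json", SCOPES)
credentials = flow.run_local_server(port=0)  # opens the Google consent screen in a browser

# From this point on the app holds a bearer token and can read everything covered
# by the granted scopes; what it then does with the data is governed only by that
# app's own privacy policy, not by Google's.
print("Granted scopes:", credentials.scopes)

After this grant, the app can call the same REST endpoint shown earlier with its bearer token, and only its own privacy policy governs what it does with the results.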

Do I have to worry?

If you just use Google Fit as is and do not connect third-party apps and services, I tend to say that you do not have to worry. Just don't act surprised when you see ads for fat-burner pills or awesome new running shoes on a website after using Google Fit for a while. Google does not state anywhere that it won't do that, and its privacy policy is not entirely transparent on this topic, especially considering its statement that Google Fit is not a medical device.

As for third-party apps and services: really read their privacy policies before sharing data such as your health data. Be cautious here.

It is now up to you to decide what you will do. Are you sharing data with third parties? If so, which ones? And did you read their privacy policies? Feel free to leave a comment to discuss these questions.

--
This post has the tag: update, meaning it will be updated when new information becomes available and/or relevant.