Introduction
This article will look at the common reasons given by politicians and big business (typically social media or tech giants) to justify reducing access to meaningful privacy. Politicians usually cite these reasons when introducing or changing a law, typically in response to an unexpected event or community pressure. On the business side, they tend to come as justification for a change to policies or new features, usually involving increased oversight of your device or access to more of your data. We’ve collated the main reasons into a list of five core areas:
- National security (investigate, prosecute, and prevent terrorist attacks & domestic crimes)
- Public health (such as the current battle against COVID-19)
- Child safety protections (investigate, prosecute, and prevent paedophilia, child abuse, and sex trafficking)
- War on drug cartels and organised crime
- Cybercrime
In the interest of length, we aren’t going to test the various claims against studies and data to prove or disprove them. This article is about explaining the ‘why’, not testing its validity; length aside, that level of scope is beyond what we consider core to Privacy Rightfully. Per our Ethics Statement we don’t set out to lobby, and we try to remain as apolitical as possible. There are plenty of others out there looking to climb the mountain of making meaningful change at the political level, and good on them. We’re focused on the personal level – what regular folks need to know (and do) – so that’s what we’re aiming for here.
The five core reasons
Let’s have a look at each of the five main reasons with a bit of commentary regarding the typical messaging.
National security
Politicians rightfully take the view that national security is a fundamental role of government. One need only look at how much of the national budget is spent on Defence across the developed world to see the point. A few decades ago, the way intelligence agencies investigated potential threats was difficult, costly, and labour intensive. Agents had to discreetly bug places and equipment, then listen to everything those bugs captured; they had to physically monitor people and take photographs which were sent back to a central headquarters for analysis; they had to fly around the world, and so forth. These constraints, amongst many more, made surveillance difficult and mass surveillance virtually impossible.
Given how quickly information can spread in the modern age, coupled with it being in a digital format, governments and their intelligence agencies now rely on digital pre-emptive measures to identify potential threats before they act. These measures typically involve surveillance and dataveillance of a subject and their communications. What’s different in modern times is the volume of data any given person creates, along with the databasing and matching technologies available to analyse those volumes of data. Analysis is performed with various technologies, including artificial intelligence and machine learning, in what is often called ‘psychographic analysis’ – techniques designed to improve the prediction of human behaviour.
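To make the ‘databasing and matching’ idea a little more concrete, here is a minimal, purely illustrative sketch in Python of how records from separate sources might be joined on a shared identifier to build a single profile. All field names and data are invented; this is not the implementation of any real agency system, just a toy showing how trivial aggregation becomes once identifiers line up across datasets.

```python
# Toy illustration only: joining records from separate sources on a shared
# identifier (here, a phone number) to build a single profile. Field names
# and data are invented; no real system or dataset is implied.

from collections import defaultdict

call_records = [
    {"phone": "+15550100", "called": "+15550199", "minutes": 12},
    {"phone": "+15550100", "called": "+15550142", "minutes": 3},
]

location_pings = [
    {"phone": "+15550100", "cell_tower": "TWR-017", "hour": 9},
    {"phone": "+15550100", "cell_tower": "TWR-044", "hour": 18},
]

def build_profiles(*sources):
    """Merge every record that shares the same 'phone' key into one profile."""
    profiles = defaultdict(lambda: defaultdict(list))
    for source_name, records in sources:
        for record in records:
            profiles[record["phone"]][source_name].append(record)
    return profiles

profiles = build_profiles(("calls", call_records), ("locations", location_pings))
for phone, data in profiles.items():
    print(phone, {source: len(items) for source, items in data.items()})
```

The analytical sophistication (the ‘psychographic’ part) comes from the models run over profiles like these; the aggregation itself is the easy part.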
Governments and politicians have claimed that mass surveillance of the general public has prevented numerous attacks thanks to these pre-emptive measures. They also criticise the general public when they voice opposition or raise privacy concerns about such practices. They will say something to the effect of: ‘the public shares huge amounts of data with third party companies, such as the social media giants, so why do they object when the government uses it for national security rather than profit?’ and similar such messages.
The reason for this is that the general public has a broad mistrust of government – even when it’s their political party of choice in power. This isn’t to imply people trust tech giants either, far from it, but tech giants cannot harm people to the extent a government can. Tech giants cannot:
- Put you under any type of investigation, prosecute you, and put you in jail
- Restrict your movements, cancel your passport or driver’s licence
- Impact your financial situation through audits, fines, increased taxes, or higher inflation; outlaw or severely regulate your business or profession; seize your assets; etc
- Punish you, under more totalitarian regimes, for speaking out against the government, fighting for human rights or equality, being a member of the LGBTIQA+ community, protesting publicly, and so forth
The ‘Google and Facebook already know everything about you’ justification for increased surveillance and erosion of privacy is lazy and misses the point entirely. Generally speaking, people don’t like governments; they feel as uneasy about government surveillance as a teenager does when a parent is looking through their phone. It’s often said that ‘politicians lie more than they tell the truth’, so it’s easy to see why most people will be sceptical of claims that additional surveillance measures are effective at curbing various national security threats.
For more about government-sponsored surveillance, including some details about various programs, read our articles:
From the perspective of a politician, whose career depends on ensuring the masses vote for him or her, the choice is simple, irrespective of their personal views. They can argue for eroding privacy in the name of fighting any (or several) of the five reasons we’re outlining in this article, or they can take the opposite position: standing up for privacy at the expense of potentially catching criminals operating within those five areas. Politicians operate on the premise that a reputation for being ‘tough on crime’ trumps being a ‘protector of privacy’ when it comes to winning votes.
Typically, politicians also treat their career and reputation as the primary motivator for what they stand for or are willing to fight for. This isn’t to say all politicians are like this or have no respect for or sensitivity to privacy concerns, but it’s a solid reason why you don’t see many of them concerned by privacy erosion when national security is part of the debate. This is likely to remain the case whilst the majority of the general public agree; it would only change if the majority of people became as privacy conscious as the majority of our readers.
Public health
The global battle with the COVID-19 pandemic has brought with it new debates about privacy. The bulk of attention has been on freedoms and liberties pertaining to lockdown orders, with privacy flying a little under the radar. One of the major tools used by governments around the world is the contact tracing app. There are various types, but the purpose is the same: citizens check in or record the public places they visit via the app to create a record of their movements. In the event a COVID-19 positive person is identified as having visited the same place, all of those who checked in during that time are notified and typically asked to isolate and/or get a COVID-19 test.
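As a rough illustration of the matching logic described above, here is a minimal sketch in Python of how venue check-ins could be compared against a positive case’s visits to produce a notification list. The venue names, times, and user IDs are invented, and real contact tracing apps vary considerably (some use Bluetooth proximity rather than venue check-ins), so treat this purely as a simplified model of the venue-based approach.

```python
# Simplified model of venue-based contact tracing: anyone whose check-in
# window overlaps a positive case's visit to the same venue gets notified.
# Venue names, times, and user IDs are invented for illustration.

from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    """True if the two time windows intersect."""
    return start_a < end_b and start_b < end_a

check_ins = [
    {"user": "alice", "venue": "Cafe 21", "start": datetime(2021, 9, 1, 9, 0),  "end": datetime(2021, 9, 1, 9, 45)},
    {"user": "bob",   "venue": "Cafe 21", "start": datetime(2021, 9, 1, 10, 30), "end": datetime(2021, 9, 1, 11, 0)},
    {"user": "carol", "venue": "Gym",     "start": datetime(2021, 9, 1, 9, 15),  "end": datetime(2021, 9, 1, 10, 0)},
]

positive_case_visits = [
    {"venue": "Cafe 21", "start": datetime(2021, 9, 1, 9, 30), "end": datetime(2021, 9, 1, 10, 15)},
]

def exposed_users(check_ins, visits):
    """Return users whose check-in overlapped a positive case's visit."""
    exposed = set()
    for visit in visits:
        for ci in check_ins:
            if ci["venue"] == visit["venue"] and overlaps(
                ci["start"], ci["end"], visit["start"], visit["end"]
            ):
                exposed.add(ci["user"])
    return exposed

print(exposed_users(check_ins, positive_case_visits))  # {'alice'}
```

The matching itself is trivial; the privacy questions are about who stores these check-in records, for how long, and who else is allowed to query them.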
In most contact tracing programs, positive cases are contacted by public health authorities to discuss their movements, usually to find out length of stay and any other locations not recorded. However, the public has also been less inclined to be truthful, with some fearing they could get into trouble, be deported, lose their job if they’re directed to isolate for an extended period, or face stricter isolation if they’re in a vulnerable cohort. Once again, people are happy to check in on social media but significantly less inclined to check in with the government via its app.
Similar to the arguments made regarding national security, governments and politicians make ‘keeping you safe’ claims to encourage mass adoption and use of the app. Once again, they take the side that’s easier to argue to the general public: that giving up a bit of privacy is a small price to pay to save lives, especially against the backdrop of global COVID-19 deaths (4.46 million as of this writing).
The privacy concern is that these apps and laws have been rolled out under exceptional circumstances and at a rapid pace. Have they been adequately security tested? Will there be a sunset clause? Will governments revoke COVID-19 related laws regarding the collection and use of personal data? Will the general public be told they no longer have to check in sometime in the future? The answer should be yes, and we don’t want to be sceptical or fear-mongering here, but almost two years on COVID-19 isn’t leaving; in fact it’s spawning new variants which are more transmissible and more resistant to current vaccines. The erosion of privacy in the name of public health may go the same way as the equally hastily rolled out Patriot Act to combat national security threats. Checking in on a government-sponsored app may in fact be ‘the new normal’, or a very similar system with a slightly different look, feel, and process might be legislated instead.
Businesses are also getting on board with promoting check-ins on these apps. It’s not difficult to understand their motivation, especially for those financially ravaged by the various lockdown measures and keen to ‘get back to normal’ before they go bankrupt. One such example is Qantas in Australia, which recently launched rewards for fully vaccinated passengers. Frequent flyers who have been vaccinated can upload their vaccination certificate to the Qantas app and instantly receive their choice of rewards points, status credits, or flight vouchers. All participants also go into a major prize draw to win a year’s worth of Qantas flights and other rewards from its partners.
It’s an odd situation that people would happily share rather private health information with an airline in return for arguably modest rewards. Qantas will now hold vaccination information on a subset of the Australian population – the frequent flyers who participate. This is data beyond what an airline frequent flyer program would ever require, and in holding it Qantas now needs to ensure it is stored securely. We’ve long advocated minimising how much data you give to third parties on the internet; we’ve always said to give only what’s necessary and not to give information that is optional – this reward program is something we’d definitely not advocate for. However, it’s not hard to understand the motivation – the airline industry was one of the worst affected by COVID-19.
It should be said that we do share health data with third party companies through devices such as fitness watches syncing with their own app and other third-party apps. However, this data, whilst it includes heart rate and location, still isn’t as sensitive as a vaccination record. Read our article about interactive insurance policies for more about this.
Ultimately, though, there is an upward trend in people’s willingness to share health or health-related data with a growing number of third parties. This can be for private benefit (such as fitness metrics, rewards, or discounts) or the greater public good (such as the COVID-19 tracing app). The growth of this acceptance will shape how much health-related data governments expect the population to be willing to share in a future pandemic-type event.
Child safety protections
The third reason to cover is the set of child safety protections: fighting paedophilia, child pornography, sex trafficking, child abuse, child slavery, and various other forms of child exploitation. The increase in child exploitation material circulating across the internet is heartbreakingly horrendous. No one can argue against capturing the individuals and groups who target and exploit children in this manner and locking them away for a very long time.
Topically, it’s big business that has taken a proactive approach here, and it will serve as our example regarding privacy concerns. In August 2021 Apple announced it would scan photos users upload to iCloud for child abuse material. The system would use hashing algorithms to match user photos against known child exploitation material and flag matches for human review. The privacy concerns do not involve any objection to fighting child exploitation.
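To illustrate the general idea of hash matching, here is a minimal sketch in Python that checks each photo’s digest against a set of known hashes. Note this is not Apple’s actual design – the proposed system was reported to use a perceptual hash (NeuralHash) computed on-device with additional cryptographic thresholding – and the folder name and hash entries below are placeholders.

```python
# Simplified illustration of hash-based matching: flag files whose digest
# appears in a database of known hashes. Exact SHA-256 hashing is used here
# for simplicity; real systems of this kind use perceptual hashes so that
# resized or re-encoded copies of the same image still match.

import hashlib
from pathlib import Path

# Hypothetical database of known-material digests (placeholder entry:
# this value is simply the SHA-256 of an empty file).
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large photos don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def flag_for_review(photo_dir: Path) -> list:
    """Return photos whose digest appears in the known-hash database."""
    return [p for p in photo_dir.glob("*.jpg") if sha256_of(p) in KNOWN_HASHES]

if __name__ == "__main__":
    print(flag_for_review(Path("uploads")))  # 'uploads' is a hypothetical folder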
The privacy concerns relate to an issue that was present in the previous two sections as well, namely the potential for the technology or system to be repurposed in the future. Privacy advocates say that the software could be repurposed to scan for other content that may not be illegal, or which simply lives in a legal grey area, but is objectionable to a government. The criticism of Apple’s proposed system has been of the technology itself and its future potential to change what it scans for, not of the goal it serves at its immediate launch. This is similar to the two previous areas: something new which eats away at your privacy is rolled out as a necessity to fight something everyone agrees on – the pattern is consistent.
First it was fighting terrorists, then fighting a pandemic, and now fighting child exploitation. The systems, programs, and laws rolled out to fight terrorists have changed, become more intrusive, and continue today with questionable results. With the second it’s too early to tell, but as we alluded to at the end of the previous section, checking in in the name of public health may be with us long after COVID-19 is behind us. So it isn’t a stretch of the imagination to be sceptical that Apple’s system will exist exclusively for child exploitation material and nothing else (though Apple has said it would resist the system being repurposed in such a way).
There are also questions about the accuracy of the system (and of software in general) that could lead to people being wrongly flagged as paedophiles. False positives can arise from images of one’s own children at birth or at the beach, artwork, educational material and diagrams, young adults who appear to be minors, photos taken for medical reasons, and so on. More maliciously, someone could be breached by a bad actor and have images planted in their iCloud, leading to false accusations, ransom, doxxing, and other legal complications.
War on drugs & organised crime
The war on drugs, drug cartels, and organised crime makes up the fourth area commonly cited to justify the erosion of privacy. Here government takes the lead, especially at election time, promising to be ‘tough on drugs’ and to ‘clean up our streets’. They often promise to give law enforcement greater powers to identify and prosecute those involved in the illicit drug trade.
However, a growing proportion of the drug trade is done online on Dark Web sites, along with other organised crime activities. The war isn’t on the streets so much as it is on the internet now; at least at scale, the trend is broadly going that way. So naturally law enforcement now wants powers similar to those used in the interests of national security – access to the emails, the messages, the devices, the photographs, etc – to take down the big drug cartels and organised crime syndicates. Our article on Operation Ironside and the custom phones with the ANOM app vindicates the idea that attention is shifting from the physical world (or ‘the streets’) to online.
On a smaller scale, due to the perceived failure of the war on drugs and the increased acceptance of recreational drug use amongst the general public, drug-related crimes are increasingly underreported. This is, of course, in comparison to the other four areas in this article and excludes large-scale operations; we’re talking about small-to-medium or local drug-related activity that is largely ignored. With a lack of public reporting or public support, law enforcement must rely disproportionately on surveillance, phone taps, infrared scans of property, and other privacy-invading searches in its battle against drug activity.
When it comes to organised crime, the government typically gains public support by highlighting the tax evasion and money laundering elements of organised crime. They typically point to a large figure of lost taxation revenue which could have been used for the public good, such as building schools, hospitals, roads, and other projects that benefit communities and create new jobs. Whilst this could be true, there are two important things to keep in mind when considering the claim. The first is that there is no guarantee the lost tax revenue would be used for those things, especially when one accounts for typical government inefficiency, bloated workforces, and wasteful spending. The second is that even legitimate businesses actively work to minimise their tax obligations. Whilst the government may not get tax revenue from organised crime, it also gets little or nothing from some of the biggest, most profitable legitimate businesses.
Fundamentally, the war on drugs, drug cartels, and organised crime is just another group of privacy-eroding justifications thrown at the wall of public opinion in the hope one will stick. If proposed legislation to grant law enforcement special surveillance powers doesn’t pass under the theme of the war on drugs or organised crime, it will be slightly altered and repurposed for any of the other four areas we’re looking at today.
Cybercrime
Our final area, fighting cybercrime, is not only the broadest but also carries a unique characteristic: the views of government and business flip depending on their audience. Whilst the general public is asked to give up privacy or stop using encrypted communications in the name of fighting the previous four areas, the same message isn’t delivered to the business community.
In the wake of consecutive high-profile malware attacks on prominent American businesses in 2021, such as the recent one targeting the Colonial Pipeline, the message from government was different to what it asks of the general public. Following these attacks, the White House advised the business community to take greater care with, and invest more in, their cybersecurity defences and infrastructure. We’re not so naïve as to think that a breach of a large business carries the same risks and threats as the breach of a person or even a group of people. Naturally there is more to steal, to hold to ransom, or to disrupt society with when a large business is breached.
One would think the same message delivered to the general public would yield the same positive result as the business community ramping up its cybersecurity defences. However, the government isn’t advocating that the general public encrypt all of their devices, communicate with encrypted applications, and broadly invest in their personal cybersecurity defences. It doesn’t want individuals to have the same cybersecurity defences it advocates to the business community, as that would hamper its various surveillance and dataveillance programs.
By publicly signalling to the business community to ramp up its cybersecurity defences, government has inadvertently admitted that its intelligence agencies, with their pre-emptive measures, cannot identify or prevent high-level malware attacks before they happen. This should ring alarm bells for the general public, who are asked to sacrifice privacy for the benefit of the work of those same intelligence agencies. If businesses that may spend millions on cybersecurity aren’t safe, and our intelligence agencies cannot pre-emptively identify these attacks before they happen, what hope is there for everyday people like us? So the question is: what reason is there not to take matters into our own hands and protect ourselves and our data, even if that involves using tools the government would prefer we didn’t (primarily encrypted ones)?
To make this point in the most cynical and negative way: government cares if large national businesses are targeted by bad actors, but that care is greatly reduced when it comes to the general public being targeted at an individual level. This isn’t to imply it doesn’t care at all; it’s just that it holds the various surveillance programs targeting the general public to a higher level of importance. Keep this in the back of your mind when a politician or government proposes laws to increase surveillance over you to whatever extent.
We all agree
The overwhelming majority of people agree on the goals and ideal outcomes behind all five areas and demand the government take action on all five. There has even been a growing call for tech giants to take an active role in assisting law enforcement and join the fight against the criminals who use their platforms. Let’s not forget:
- We all agree it’s the government’s job to keep the country we live in safe, maintain law and order, and neutralise terrorist threats
- We all agree it’s the government’s job to fund and operate public health, including responding to pandemic-type events such as the COVID-19 pandemic to reduce further loss of life
- We all want law enforcement to catch the people and groups involved in all types of child exploitation and bring them to justice
- We all want to feel safe in our communities and not be overrun by organised crime or drug induced crime
- We all want to be personally safe and secure, and for the businesses in our local community and country to be safe and secure from malware and other attacks from bad actors which can have devastating consequences.
The consistent approach used by governments and businesses when rolling out systems or laws that erode privacy is an initial justification designed to win public opinion and buy-in. In every key area we’ve covered, the initial justification has been something the general public overwhelmingly agrees is a serious issue that needs to be addressed. Privacy advocates don’t disagree on these ends but rather on the means; unfortunately, it’s the ends that government and businesses lead with to win support. What tends to happen is that, over time, those means are repurposed, used for other ends not disclosed initially, or stick around long after the seriousness of their necessity has passed.
It comes back to another often-touted line: ‘privacy is the cost of security’. Just because we all agree on the nobility and seriousness of achieving the ends, it doesn’t automatically mean we agree on the means.
Privacy isn’t just good people vs bad people
At Privacy Rightfully we tend to focus on privacy from the viewpoint of protecting one’s sensitive or valuable data from falling into the hands of bad actors, and the consequences that come from that. But beyond that, privacy is also about keeping things, usually personal things, to ourselves – even if they pose a negligible risk to us if they fall into the hands of a bad actor. These are not things that are illegal or criminal in nature, but rather things we don’t even share with some of our loved ones, such as:
- Our political views, opinions of certain political parties or politicians
- Our moral code and views on certain laws or changing societal norms
- Our views about sexual orientation, racial prejudice, and other prominent debates on the subject of equality
- Our views and feelings about the relationships we have with family, friends, and co-workers
- Our views on obesity & health, global warming, pornography, gun laws, and other subjects which can stir fierce debates between opposing groups
- Our fears, stressors, and the state of our mental health
The desire to share such views or not (and with whom) is a very personal choice for each individual, and that choice is also at the heart of privacy. This is also why arguments such as the classic ‘if you have nothing to hide, you have nothing to fear’ miss a core element of what privacy means. To varying degrees, governments, law enforcement, and tech companies are trying to convince the general public that privacy is only important to bad people motivated to hide illegal activities.
The issue with that is the word ‘only’: whilst privacy is a domain of such people, it is not exclusively so. Choosing to keep something to ourselves, rather than share it within the reach of a government surveillance program, doesn’t imply there is something wrong with what we wish to keep private.
In Carpenter v. United States the US Supreme Court noted that:
…cell phones have become “almost a feature of human anatomy” and that location data “provides an intimate window into a person’s life, revealing not only his particular movements, but through them his family, political, professional, religious, and sexual associations.”
Privacy isn’t exclusively about bad people hiding illegal things; if that were the case, we’d all bathe, shower, and use the toilet with the doors and curtains open. Having something to hide is not a prerequisite for a sensible approach to, and advocacy for, privacy. In the hunt for bad people, the privacy of good people is relegated to collateral damage; at Privacy Rightfully we’re hoping to empower people to change that.
A final word on the business community
There is another motivation of the business community we haven’t touched on here but have covered in previous articles, most directly this one. This is the business trend of Dataism, which proposes that ‘data is the new oil’ and makes similar statements regarding the strategic value of mass data collection capabilities. Data has value, there is no doubt about it; indeed, many apps or pieces of software are free to use because the user has to accept that their data will be provided to the developer (and potentially their partners and other third parties). Data is something of a currency, so much so that businesses would prefer your data to a small payment for their software. As such, it isn’t surprising that their motivation is to make the creation, collection, visibility, and harvesting of user data easier to do, rather than holding back in the name of their users’ privacy.
The reason this has worked so successfully over the years is the imbalance in how visible the benefits and the costs are. The average person sees the immediate benefit in using a free app, such as one that tracks the nutritional content of the food they consume. However, the implications and risks for their data, and subsequently their privacy, are far hazier, even unknown, which rarely prompts a reconsideration of using that particular app.
Businesses typically say that, once again, the ends justify the means because the data lets them improve their product or service to better meet the needs of their customers. However, it also opens the door to various biases and discrimination based on prejudices about certain aggregated groups. For example, a company may pull advertising efforts from regions with a certain average income threshold or ethnic makeup. This means people within those groups may miss out on deals or offers which would save them enough money to convert and become customers. Without opening a can of worms, there is also a question of how unique the insights such data can realistically provide actually are. For example, do Nike and Adidas have data that is different enough to drive strategic and product development decisions towards genuine competitive advantage?
It’s worth remembering that government and business are on the same team here – they want us to generate more and more data about ourselves and for them to be able to access it. Life in modern times demands that interaction with big entities takes place online, through accounts or portals which generate data based on what we are inclined to feed them. This is one of the great enablers of the erosion of privacy, with governments imploring businesses, explicitly and implicitly, to share that data with them.
Conclusion
It’s hard to truly know whether the radical growth and development of technology we’re living through threatens our way of life in a way which can only be countered with equally growing government surveillance. Such a rebalancing of privacy expectations would be a tough sell for governments, who, in our view, hold little credibility on the matter.
But to play devil’s advocate, the future may be one where meaningful privacy is out of reach for the average person, in that we would collectively accept government surveillance as the lesser of two evils to stay safe from malicious bad actors, terrorism, pandemics, and so on. Of course, the problem with this is that increasing surveillance powers also increases the potential for misuse of those powers and makes the infrastructure and data ever more attractive targets for bad actors.
The five areas we covered are the main arguments typically used to justify the erosion of privacy, but there are others as well. They are all sold on their immediate benefits or goals, with little said of future repurposing, expanded capability, or even the proposed length of use. We hope that when you hear some of these arguments you will keep an open or sceptical view about the true intention and potential future repurposing of whatever is being proposed.
This article is written in line with our Terms & Conditions and Disclaimer. As such, all content is of a general nature only and is not intended as legal, financial, social or professional advice of any sort. Actions, decisions, investments or changes to device settings or personal behaviour as a result of this content are at the user’s own risk. Privacy Rightfully makes no guarantees of the accuracy, results or outcomes of the content and does not represent the content to be a full and complete solution to any issue discussed. Privacy Rightfully will not be held liable for any actions taken by a user/s as a result of this content. Please consider your own circumstances, conduct further research, assess all risks and engage professional advice where possible.