Your privacy—or your future

written by Almad

The Hydra of technological progress demands your privacy. We are approaching an interesting point where opting for privacy will make you a second-class citizen, unless you are rich enough to compensate for it. We could have avoided this if it weren't for international competition, and if a significant portion of the population hadn't already made its choice.

The collective consciousness of the Western world is slowly starting to fully realise the power of technology companies. The privacy prophets who were considered half-mad two decades ago are now listened to, as the trends they have been warning against go mainstream. Some of the early tech companies matured while others went into decline. It is easy to hold on to ideals when you are growing. It is in decline that things get hard, and that is also when companies are forced to exploit their most valuable treasure trove: user data.

There is once again a great distance between pioneers and the mainstream. They live in different subjective realities. For stereotypical tech pioneers, the current capabilities of almost endless processing power and storage are a given; the life of a cyborg, where every step is guided by a computer, is already happening; privacy was left behind in the 20th century; the human role is decision making and creativity, and the rest shall be automated; a lifespan over 100 years is a given, but hopefully we'll achieve the singularity by then; states and borders are cumbersome relics of the past one has to cope with; data-driven decision making is daily bread; using whatever data you can gather for whatever purpose is a no-brainer; personalised sites and services with tailored pricing are expected.

Most of the population hasn't caught up to this. I can shock my non-tech friends with stories as innocent as those about A/B testing. It is not what they expect technology to be capable of, and it is something they consider creepy. While some of it has become common knowledge thanks to recent scandals, I doubt everyone understands them to their true extent.
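For readers who haven't met it, here is a minimal sketch of how cheap A/B testing is to run. The experiment name and variants are invented for illustration; real systems differ in scale, not in kind:

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "variant_b")):
    """Deterministically bucket a user into an experiment variant.

    Hashing (experiment, user_id) means the same user always sees the
    same variant, with no extra state stored anywhere.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Every page view silently becomes a data point:
variant = assign_variant("user-42", "signup-button-wording")
# ...render the page with `variant`, then log (user_id, variant, clicked)
```

A handful of lines like these, plus a click log, is the whole "experiment".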

This is not an omission or neglect. This is a true mismatch of mental models. I think this was on display when OKCupid published its "We experiment on human beings" back in 2014 (since removed; an Internet Archive copy exists). It came shortly after Facebook confessed to manipulating users' moods. Both triggered disbelief and calls for ethics boards for such experiments (see this article in the Guardian).

That happened half a decade ago and nothing has changed; quite the contrary. There is some shift in attention towards data usage and selection algorithms, but only through the lens of the current political situation in the US.

There are four main reasons why this is very unlikely to change and why I think the invisible machine dystopia is almost inevitable.

Effects of privacy loss are delayed

We are bad at inferring the future risk of current behaviour. Anything that lets me have a good time now and worry about the consequences later means we are fucked, and it takes tremendous collective effort to avoid it. Famous examples include drugs, the loan shark business model, and global climate change.

When you consent to give away your data, the effect is not immediate. The targeted ads you subscribed to are there now, but they are not the problem. It is when the data is stolen, or blended with new company research, that the creep factor skyrockets. In addition, once that data is out, there is no way to reclaim your identity.

Our perception of identity is created by what we think of ourselves. And speaking of thinking...

We know the brain too well

In the last two decades, we have learned a lot about human behaviour. Couple that with recent advancements in neurobiology and you have a knowledge machine that the dictatorships of past centuries could only dream of. And that is before you add personal, location, and social network data into the mix.

We are capable of building persuasion machines with precision strikes. As Cambridge Analytica has shown, this is already being used, and there are no good defences.

The battle over minds has been part of every war in existence. What is new is the ability to do personal targeting and adjustment: not a single other person sees what you see; there is no leaflet to share or discuss. It is burned into your brain and then disappears; all that is left is an altered you.
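To make the mechanism concrete, here is a hypothetical sketch of what per-person targeting amounts to. The messages and the scoring model are invented; the shape (score every variant for every user, show only the winner) is the point:

```python
# Hypothetical micro-targeting loop. Messages and model are invented
# for illustration; real persuasion models are learned from data.
MESSAGES = [
    "They are coming for your savings.",
    "Your neighbourhood deserves better schools.",
    "Nobody should tell you what to do.",
]

def pick_message(user_profile, persuasion_model):
    """Return the message this particular user is predicted to respond to.

    No two users need ever see the same text, and nothing remains
    afterwards to compare notes on.
    """
    return max(MESSAGES, key=lambda msg: persuasion_model(user_profile, msg))

# A trivial stand-in model: overlap between stated interests and message words.
def toy_model(profile, msg):
    return len(set(profile["interests"]) & set(msg.lower().rstrip(".").split()))

print(pick_message({"interests": ["schools", "savings"]}, toy_model))
```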

What is also new is the existence of relatively neutral platforms. Tech giants are usually subject only to US legislation. While they do have their biases, I do not believe they interfere with information dispersal as much as other states would. This means easier ground for attackers and a breeding ground for biases (see the genocide in Myanmar).

Competing superpowers know that all too well, which is why vKontakte and WeChat enjoy support from their respective governments.

It is useful

Letting your data go now will not hurt you. It will actually help you quite a bit. It would be easy to rally against it if it were only a temporary reward (e.g. 23andme telling you about your ancestry), but when done right, there are tangible, long-term benefits. Health data is the most outstanding example, but anyone who has really tried to use unpersonalised search engines will tell you that Google knowing you saves a lot of time. [^1]

This means the behaviour is normalised. It is only when you stick out as an outlier that the full consequences hit you. Like when you suddenly have to pay a lot of money because some algorithm decided you are not using your medical device enough.
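A hypothetical sketch of how blunt such a rule can be; the thresholds and names are invented, but real device-compliance checks are not much more sophisticated:

```python
# Hypothetical insurer-side compliance rule. The numbers are invented;
# the point is how crude a rule can decide who pays full price.
MIN_HOURS_PER_NIGHT = 4    # usage below this counts as a "missed" night
MAX_MISSED_NIGHTS = 10     # allowed per 30-day reporting window

def loses_subsidy(nightly_usage_hours):
    """Flag a patient whose device usage falls outside the expected pattern."""
    missed = sum(1 for hours in nightly_usage_hours if hours < MIN_HOURS_PER_NIGHT)
    return missed > MAX_MISSED_NIGHTS

# A patient with an irregular sleep schedule is indistinguishable from a non-user:
print(loses_subsidy([8.0] * 15 + [0.5] * 15))  # True -> full price this month
```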

Yet the medical field in particular is where the benefits of big data and machine learning are yet to be reaped. It can be useful, but also terrible: when the alternative is death, we will consent to a lot.

Being just a literal data stream for a medical corporate overlord who is keeping us alive is not unthinkable. And, I mean, do you know how much faster medical research will get once you can A/B test all medical treatments in vivo?

China will take over instead

If you read the above carefully, a lot of it is a coordination problem. We could enjoy most of the benefits without the heavy privacy loss; it’s possible, just harder and more expensive to implement. This is one of the cases where regulation would help.

That regulation would need to be international, making it a coordination problem. And just to prove the point that this is probably the hardest problem in the existence of the human race, before we have even started, we already have a defector: China.

Building on top of their totalitarian past and authoritarian future, China already has an environment where privacy is not expected. If you already (correctly [^2]) believe your government is watching everything, it is not a stretch to expect companies to know everything as well. In China, they are intertwined with state structures anyway.

When one part of the equation goes out the window, why not ride it and reap all the benefits?

This is already happening. The world accepted two decades ago that China had secured the position of the world's manufacturer, but it has not yet come to terms with China also being on top of hardware R&D, which has basically moved from Silicon Valley to Shenzhen (Wired made a good intro video). Inevitably bound to hardware, software follows. And the AI/ML research is on top of the game.

Combine that with blurry research ethics lines and you have China ahead of the pack. The full extent of CRISPR's utility is still unclear, but doing research on live subjects definitely helps you advance. As of a year ago, 86 people had been gene-edited, and a few months ago they started with hereditary edits.

Mix machine learning with big data [^3], real in vivo A/B testing, sufficient power to make people randomly disappear, and a few disposable minorities… and you have a truly efficient rogue state.

It is interesting to observe the delayed-risk effect here. Where the effects are immediate, MAD doctrine applies, and suddenly China is willing to agree to a stalemate (as is the case for autonomous weapons). In my opinion, CRISPR has a high chance of significant "black swan" side effects that will be discovered in a generation or more. And when effects are delayed that much, everybody takes the short-term win and ignores the consequences.

As it stands now, the rest of the world faces an impossible choice. One option is to back out and introduce a lot of regulation in a desperate attempt to protect against the downsides, only to become a client state in the long run [^4] because of technological dependency. This is the way Europe is currently heading.

Or, slowly shed your ideals and erode privacy as you try to keep up with technological progress. This seems to be the current path for the US.

It will be really interesting to see the impact of those decisions.


Thanks to Stephen and Hanka for feedback and proofreading on drafts of this article.

[^1] The main problem there is perception. A lot of people believe in Google as an objective reality and a trust engine, blissfully unaware of its subjectivity. But that is for another post.

[^2] Aside from whole cities being monitored by face-recognition cameras, the cameras are now integrated into police officers' glasses as well, and as of last year, the behaviour they capture projects directly back into your standing in society through the social scoring system. And for the record, "algorithmised governance" is the explicit goal there.

[^3] China is the leading country in terms of the velocity of genome sequencing.

[^4] This is going to be a real-world variation on a lot of good books and movies: what will you do when the price for your health or longer life is allegiance to an authoritarian state? Coming soon; welcome to the Borg collective.

Published on 2019-02-21