I, For One...


I, for one, believe in the power of consent.

Unless you’ve been living under a rock (or in a miraculously well-contained echo chamber), I’m willing to bet you’ve heard people around you talking about consent. What is it? How is it given? What form of communication is required? How do we define someone as “fit” to give or not give it over to someone else?

In this post I don’t intend to try to tackle any of these topics as they relate to #metoo, but they do give us a good framework within which we can talk about how, sometimes, tech companies and government entities can abuse our consent.

The most straightforward example on my mind is the age-old question of the EULA. A EULA, or End User License Agreement, is the written set of terms under which you’re granted the right to use a piece of software (or some other product, of course). It dictates what you can and can’t do with it, and what the company will do with any file or piece of information you give it. You agree to these things all the time; whenever you sign up for a new service or buy a video game, there’s the ever-ominous “Agree to Terms” button that somehow I always forget to click at least twice before it lets me move on to checkout.

We’ve all gotten so used to these agreements, but in many cases they serve to strip you of your rights or information, and allow companies to do things with your content that you probably don’t want them to do.

What’s most interesting to me about EULAs and TOS (Terms of Service) and the like is that in all my years of consuming software and agreeing to things, I’ve never actually read one all the way through. And who can blame me when they’re written to be just about as boring and circuitous as is possible to achieve through the use of human language? How can I possibly be considered to be giving “informed consent” when I have to read through 20 pages of high-density text just to find out which Spice Girl is my spirit animal? To me, this reeks of tainted ethics. It’s the digital equivalent of taking a “yes” from someone too intoxicated to give one.

As it turns out, sweeping legislation from the EU known as the General Data Protection Regulation, or GDPR, governs (in part) the acceptance action that users have to take in order to “opt in” to any web service. Gone are the days of auto-enrollment and passive or implied agreement (“by breathing this air you agree”). Article 4(11) says:

“Consent of the data subject means any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her”.

So basically, I at least have to click something that tells me more about what I’m agreeing to when I click. Sounds pretty great, right?
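To make that definition concrete, here’s a minimal sketch of what “freely given, specific, informed and unambiguous” consent might look like as a record a service stores. This is my own hypothetical design (all field names are made up), not an official GDPR schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """A hypothetical record of one explicit opt-in, kept so a service
    can later show *what* the user agreed to and *when*."""
    user_id: str
    purpose: str              # specific: one purpose per record, no bundling
    policy_version: str       # informed: which policy text the user actually saw
    affirmative_action: str   # unambiguous: e.g. "checked_unchecked_box"
    given_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: Optional[datetime] = None  # consent must also be revocable

    def is_active(self) -> bool:
        return self.withdrawn_at is None

# Example: a user explicitly opts in to marketing email. Nothing is
# pre-checked, and the record names exactly one purpose.
consent = ConsentRecord(
    user_id="u123",
    purpose="marketing_email",
    policy_version="2018-05-25",
    affirmative_action="checked_unchecked_box",
)
print(consent.is_active())  # True until the user withdraws
```

Note that the record stores the affirmative action itself, not just a boolean: a silent default or a pre-ticked box wouldn’t qualify under the Article 4(11) language quoted above.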

The best part? If you break the law, it’s expensive. As someone who’s had to work to prepare a business for GDPR, I can tell you it wasn’t easy, and it won’t get easier as time goes on. The full effect of this very strong legislation on the speed of innovation and the technology community at large won’t be felt for years to come. And yet, I can’t help but chuckle at the fates of the tech giants who now face massive fines for the ways in which they mishandle our consent.
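For a sense of just how expensive: the top tier of GDPR fines (Article 83(5)) is capped at €20 million or 4% of worldwide annual turnover, whichever is higher. A quick back-of-the-envelope sketch:

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound for a top-tier GDPR fine under Article 83(5):
    EUR 20 million or 4% of worldwide annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A small startup with EUR 5M turnover still faces the flat EUR 20M ceiling...
print(max_gdpr_fine(5_000_000))       # 20000000
# ...while a giant with EUR 50B turnover faces up to EUR 2B.
print(max_gdpr_fine(50_000_000_000))  # 2000000000.0
```

The “whichever is higher” clause is the part aimed squarely at the tech giants: the 4% branch scales with revenue, so the cap can’t be shrugged off as a cost of doing business.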

But, let’s go deeper.

GDPR is meant to protect users’ personal information from malicious or neglectful privacy practices. So, what can go wrong when that consent is abused? Well, you might remember a little company called Cambridge Analytica. Yes, Facebook let our information be harvested under the banner of academic research. Yes, it was used for evil. Yes, the consequences will be felt for generations. But what is it, in as precise terms as we can manage, that Facebook actually did wrong?

Should we hate that our behaviors are being studied by academic institutions? Should we denounce the study of human speech and sociology that might yield insights on things like suicide prevention, voter behavior, and racism that can help us grow and progress as a species? I have to say no. The future of our planet lies largely in the hope that learning and research can provide.

I bring it up because in this particular case, Facebook granted access to Cambridge Analytica via a survey put out by “Thisisyourdigitallife” (cool name, dudes) that harvested personality profiles from about 300,000 users, who were paid to take the survey.

But wait. It wasn’t just the users’ data that was shared with Aleksandr Kogan (who I’m pretty damn sure is an actual Russian spy). It was also the information of their friends, and their friends’ friends. So even if you’re not dumb enough to take a paid survey from a dubious domain, if your friend’s cousin is, you were also exposed.

It wasn’t until a year later that Facebook changed its policy to require a user’s explicit permission before a third party could access that data, but the damage had already been done. In fact, for good measure, I’m going to get on one of my occasional soapboxes here and encourage all of you never to take any more of those “What Kind of Hedgehog is your Kitchen?” quiz things when they ask you to authenticate through Facebook or ask for access to any of your profile’s information before letting you play. Please practice good data hygiene, everyone, for all of our sakes!

Your digital fingerprints can have far-reaching consequences. Be careful with your information!

If you haven’t gotten it by now, the theme here is this: We should be in complete, informed control of how our rights and information are impacted by the use of any service at all. And digital consent is only the surface, really. This is going to become even more important for society to consider as genetic science advances.

Take the case of the Golden State Killer. Normally, I’m all in favor of a serial killer being foiled by his own hubris and facing the long arm of the law. But something central to this case doesn’t sit well with me. Joseph James DeAngelo likely was indeed the one responsible for these murders. And he was caught, thanks to genetic profiling. But he wasn’t caught because he was so arrogant that he mailed in his DNA on the stamp of a letter containing a cipher that somehow also disproves his alibi. In the end, it was a relative of his who had submitted his own genetic code to a free, open database of genomes. And that relative’s DNA was similar enough to the killer’s that the police were finally able to track him down.
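To illustrate the mechanism, here’s a toy sketch of why a relative’s profile is enough to narrow a search. This is not real forensic genetics (actual familial searches use statistical kinship models over many STR loci, and all the profiles below are made-up numbers); it just shows the core idea that relatives share far more genetic markers than strangers do:

```python
def shared_allele_fraction(profile_a: dict, profile_b: dict) -> float:
    """Toy kinship score: fraction of loci where two STR profiles share
    at least one allele. Close relatives score much higher than strangers."""
    shared = sum(
        1 for locus in profile_a
        if set(profile_a[locus]) & set(profile_b.get(locus, ()))
    )
    return shared / len(profile_a)

# Hypothetical 5-locus profiles (one allele pair per locus, invented values).
crime_scene = {"D3": (14, 16), "vWA": (17, 18), "FGA": (21, 24), "TH01": (6, 9), "D21": (29, 30)}
relative    = {"D3": (14, 15), "vWA": (18, 19), "FGA": (22, 24), "TH01": (7, 9), "D21": (28, 31)}
stranger    = {"D3": (12, 13), "vWA": (15, 16), "FGA": (19, 20), "TH01": (8, 10), "D21": (27, 32)}

print(shared_allele_fraction(crime_scene, relative))  # 0.8 -- flags a family
print(shared_allele_fraction(crime_scene, stranger))  # 0.0
```

The unsettling part is exactly this: the person flagged never uploaded anything. One relative’s voluntary submission effectively exposes an entire family tree to the search.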

Granted, the ethics of how to catch a murderer are murky, since the common good would dictate that we want a killer caught. On the other hand, it’s not unreasonable to be a little wary of any database with that much power, particularly when it can be searched without the consent of the people whose data it holds. Remember how you felt about Cambridge Analytica? Similar scenarios would undoubtedly occur with this database too, and the value of that information is exponentially larger.

I don’t like the idea that criminals could be caught by a daisy-chain of genetic relatives, without regard for the privacy of the biological code that makes up everything that we physically are. Murderer though he may be, DeAngelo is an American citizen who gave authorities no other reason to suspect him. He never volunteered to be tested, nor had he ever been accused of a crime that would have resulted in a legal loss of privacy and the consequent logging of his DNA in a police database.

This being my first post, you’ll soon see that I won’t jump quickly to say that we shouldn’t pursue this (or most) technology simply because its implications might lead to some negative outcome. Genetic research has been a godsend for the medical industry, and the good it will bring to the world looks a little more promising each day. But I do think careful consideration should be given to the privacy rights we have to our own DNA. A lack of consent to testing carries real weight when it involves the most valuable piece of all the information that’s “ours”.

The Golden State Killer case is interesting because a new technology allowed a kind of mass canvass of the population that didn’t previously exist, and now we have to deal with the legal and ethical implications. Perhaps even more concerning, we just found out that IoT devices can legally be used as tools of surveillance with just a simple court order. Yesterday, a judge called for Alexa recordings in a murder trial. I’m pretty sure those users never consented to inviting government surveillance into their home, even if they are guilty. And if we’re not careful, this will become commonplace.

Tl;dr:

  • Consent is a lens through which we can determine the ethics of a business practice or law.

  • EULAs should be stated clearly, simply, and in plain English. Damn it, guys.

  • GDPR is massive and scary, but it’s on to something… our digital consent should mean something real and bear weight.

  • The technology to map out our genomes should exist, but the use of it to identify suspects from a central database of non-criminals should not. This type of database should only exist for scientific and medical pursuits, because without that constraint it gives too much power to its owner.

  • If you’re going to buy an Alexa, understand that what you do and say around it can be recorded. The government shouldn’t be able to request audio files from our daily lives, but it seems as though, at least for now… they can.

What do you think?