The following is a response to a Newhouse Speaks presentation given by Mike McDougall, President of McDougall Communications.

Mike McDougall came to speak at Newhouse on March 26, 2019. His lecture, titled "The Privacy Principles: Global Trends Reshaping Reputation Management," offered some insightful points about the ways data can be used and misused by companies. Specifically, the presentation confronted one crucial issue: as public relations managers, we need to play a hands-on role in the decision-making behind these processes.

Data can be an incredibly rich resource for companies to exploit in their drive to maximize profits. It can guide targeted advertising. It can increase brand awareness by putting the right message in front of the right people. It can reveal which parts of your company are operating efficiently and which are lagging behind or consuming too many resources. This applies to many different types of data: you don't need a complex, invasive dataset on each of your customers to glean insights. A simple rewards program with an email address attached to a purchase history can be incredibly powerful.

But data also presents logistical and ethical dilemmas for organizations to tackle. Choosing what data to collect and how to store it can be the deciding factor in whether your organization finds itself in a PR and legal crisis. We, as PR practitioners, have a responsibility within our organizations to advocate for the safest position, one that proactively prevents situations like these from arising in the first place.

McDougall covered a lot of this content in the second half of his presentation. But the first half covered something entirely different.

The trouble with doomsaying

“Are you scared now?” McDougall asked the crowd, to a smattering of affirmative responses. This was after an extensive lecture on the ways companies can collect data, and the ways that data can be used, abused, and misused. To be candid, I didn't agree with much of this portion of McDougall's lecture, but I understood where he was coming from.

There were some lines of questioning about specific technologies on which I didn't see eye to eye with him. I'm a bit of an optimist, so I get skeptical when people suggest that CCTV cameras might be running facial recognition in the background, or that rogue Facebook Messenger developers may have slipped surveillance features into the backend without letting higher-ups know (never mind that Facebook, like any other major tech company, has a comprehensive process for developing, testing, and reviewing features before any code reaches production). To me, these sorts of theories are nothing short of conspiracy and alarmism.

I think Occam’s Razor holds pretty well in situations like these. It’s counterproductive to make a ton of assumptions about what’s going on behind the scenes when, in reality, any given security camera in a public space probably doesn’t have an expensive and complicated machine learning setup behind it, and Facebook employees probably aren’t spending a ton of extra time at work developing rogue features that would never make it to production.

I’ll admit my views on data are a little more liberal than most people would care for. As an example, below is a screenshot from my Google Maps Timeline, a little-known feature which, as it turns out, has been tracking my every move since I first got an Android phone in 2014. I can jump to any given day in my timeline and see where I walked, biked, drove, flew, and so on. Personally, I love this feature. I'm a record-keeper, and this is the ultimate automated journal for me. The screenshot below, for example, is from my trip to Edinburgh during my 2018 birthday weekend. By scrolling through, I can retrace my steps and mentally revisit all the places I went.

A screenshot of my Google Maps Timeline displaying an afternoon in Edinburgh last November.

But a lot of people hate the idea of this much information about them being stored. They're quick to point out that, by knowing where I go at every moment of every day, Google has built an incredibly detailed picture of me. To me, this is the cost of Google providing services I otherwise couldn't provide myself. Google probably has gigabytes of data just in its ‘Josh Fayer’ file. The fact that the company is able and willing to dedicate that amount of space and provide these kinds of features is astonishing to me. I'm perfectly willing to submit my data for whatever purposes Google would like.

To counter McDougall: I believe another of our responsibilities as PR practitioners is to develop informed opinions on data usage and to advocate for those positions to our publics. By succumbing to alarmism and shock value, we make it more difficult to use data for our own purposes, and we further a fear-mongering, anti-information perspective. We have to be realistic about what is actually happening with our data on the backend.

How tech companies (probably) use our data

One big benefit Google gets from keeping that amount of data on me actually has very little to do with advertising. By amassing tons of user-generated data, Google is able to tune its machine learning algorithms and refine its underlying datasets with the help of human feedback. If Google isn't sure whether I visited a laundromat or the Target right next door, it will prompt me to confirm where I just stopped off. Each confirmation corrects its geographic dataset and improves the Maps experience for millions of other users.

Likewise, people might wonder why Google offers Drive storage so cheaply. Google Docs don't even count toward your quota, so I probably have gigabytes of text files in my Google Drive that Google doesn't bother charging me for. For Google, Drive is a massive repository of natural language it can run machine learning algorithms on to better understand human language. Neither of these examples has a direct application in advertising or monetization, but both help improve user experiences and allow Google to develop more accurate products. As a side effect, Google is also developing some of the most impressive and cutting-edge technologies around, ones with real social value for humanity!

And, of course, advertising is a part of it. Google probably knows what my favorite restaurant is when I’m back home for break (it’s IHOP, just in case they’re not sure) and that kind of insight is invaluable to automated advertising engines.

But “automated” is the key word here. No one is combing through my Google Drive files or scanning my Timeline history manually; very likely, no one at Google has direct access to them at all. Google also has a vested interest in being one of the most secure storage platforms in the world, so I trust it to protect my data from third-party snoopers far better than I could if I recorded this data myself instead of handing it over.

Bottom line

Data can be scary. But spreading fear of data isn't going to get us anywhere, and stigmatizing its use within companies only deepens public skepticism toward organizations with legitimate reasons for collecting it. We should be informed about what companies can do with data and hold them accountable for what they actually do; on the flip side, we should opt for healthy skepticism over rampant paranoia. That's the best way for companies and their publics to have an honest dialogue, free of fear or stigma.