
Taming Big Tech: The Case for Monitoring

by Robert Epstein, May 13th, 2018

Too Long; Didn't Read

Researchers developed a passive monitoring system that might soon make Big Tech companies accountable to the public. They used it to monitor what Google, Bing and Yahoo were showing users in the months leading up to the 2016 election. The researchers hope it will soon be protecting all of us 24 hours a day from the power of Big Tech.

How, working in the shadows of the internet, researchers developed a passive monitoring system that might soon make Big Tech companies accountable to the public — and even save democracy.


What if, early in the morning on Election Day in 2016, Mark Zuckerberg had used Facebook to broadcast “go-out-and-vote” reminders just to supporters of Hillary Clinton? Extrapolating from Facebook’s own published data, that might have given Mrs. Clinton a boost of 450,000 votes or more, with no one but Mr. Zuckerberg and a few cronies knowing about the manipulation.

Because, like most Democrats, Mr. Zuckerberg was overconfident that day, I don’t believe he sent that message, but there is no way to know for sure, and there is certainly nothing to stop him from broadcasting election-tilting messages like that in the future — and not just in the U.S. but in countries around the world.


Do we have to wait for whistleblowers or warrants to learn, belatedly, about shenanigans of this sort, or is there a way to detect them as they occur? What kind of monitoring system would it take? Is there a way to look over the shoulders of internet users to see what overzealous tech companies are showing them on their screens?


This is the story of how my colleagues and I, working in the shadows of the internet, developed such a system and used it to monitor what Google, Bing and Yahoo were showing users in the months leading up to the 2016 election — a working prototype for a large-scale monitoring system that we hope will soon be protecting all of us 24 hours a day from the unfettered power of Big Tech.


The story begins with a phone call I received in August 2015 from Jim Hood, Attorney General of Mississippi. Hood was up for reelection, and he was concerned that Google might rob him of a win by biasing its online search results against him. He had been in an ugly legal battle with Google since 2014, and he was worried that Google could use its formidable online manipulative powers to hurt his career.


Online manipulative powers, you ask? Isn’t Google just a big, cuddly, electronic public library?

If you still believe that nonsense, grow up. Google is actually one of the most rapacious and deceptive companies ever created. Those free tools the company gives you are just gussied-up surveillance platforms, allowing the company to collect information about more than two billion people every day, which it then auctions off to advertisers or, in some cases, shares with business partners and intelligence agencies.


Public libraries don’t tabulate information about people’s reading habits and then sell the information. Far from being the public service organization it pretends to be, Google is actually the world’s largest advertising agency, doing more than six times the business of the world’s next largest ad agency, London-based WPP.


Early in 2013, Ronald Robertson, now a doctoral candidate at the Network Science Institute at Northeastern University in Boston, and I discovered that Google isn’t just spying on us; it also has the power to exert an enormous impact on our opinions, purchases and votes.

In our early experiments, reported by The Washington Post in March 2013, we discovered that Google’s search engine had the power to shift the percentage of undecided voters supporting a political candidate by a substantial margin without anyone knowing. Just before I heard from Hood in 2015, we had published a report in the Proceedings of the National Academy of Sciences showing that search results favoring one candidate could easily shift the opinions and voting preferences of real voters in real elections by up to 80 percent in some demographic groups with virtually no one knowing they had been manipulated. Worse still, the few people who had noticed that we were showing them biased search results generally shifted even farther in the direction of the bias, so being able to spot favoritism in search results is no protection against it.


We called this new phenomenon the Search Engine Manipulation Effect, or SEME (“seem”), and estimated that Google’s search engine — with or without any deliberate planning by Google employees — was currently determining the outcomes of upwards of 25 percent of the world’s national elections. This is because Google’s search engine lacks an equal-time rule, so it virtually always favors one candidate over another, and that in turn shifts the preferences of undecided voters. Because many elections are very close, shifting the preferences of undecided voters can easily tip the outcome.
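To see why a shift among undecided voters can be decisive, consider a back-of-the-envelope calculation with purely hypothetical numbers (they are not measurements from our experiments): if a tenth of the electorate is undecided and biased rankings move a fifth of those voters toward one candidate, the favored candidate gains two percent of the total vote, more than the final margin in many races.

```typescript
// Purely hypothetical numbers, for illustration only (not data from the SEME studies).
// Share of all voters whose final choice is moved toward the favored candidate.
function netVoteGain(undecidedShare: number, shiftRate: number): number {
  return undecidedShare * shiftRate;
}

console.log(netVoteGain(0.10, 0.20)); // 0.02, i.e. a 2-point gain for the favored candidate
```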


Because 90 percent of search in most countries is done on Google and because people trust Google’s search results to be unbiased — the output of what they believe to be an objective and ultra-rational computer algorithm — search results have a much bigger impact on undecided voters than do human-tainted newspapers, radio shows and television programs.

Hood wondered: Could Google manipulate search results to shift votes to his opponent in the upcoming election? “Sure,” I said, “just by adjusting a couple of parameters in the search algorithm — a few minutes’ work, at most.”


And that’s where the conversation got interesting, setting me on a course that I remain on to this day.

LOOKING OVER PEOPLE’S SHOULDERS

My research, which has now looked at four national elections and multiple topics and candidates, has repeatedly shown that Google can shift opinions and votes dramatically without people knowing. It is one thing, however, to demonstrate this power in laboratory and online experiments and quite another to show that Google’s search results actually favor one particular candidate or cause.


The problem is that search results, like many of the events we experience through our mobile and computer devices, are ephemeral. They appear, influence our thinking, and then disappear, leaving no trace. Tweets and YouTube videos can be viewed repeatedly, but search results, like search suggestions, newsfeeds and many other online stimuli, are individualized and generated on the fly. Once they’re gone, they’re gone.

“How people get their information — what they believe, what they don’t — is, I think, the project for the next decade.” -Eric Schmidt, Executive Chairman, Alphabet, Inc., November 10, 2016 (quote at 06:45 at https://www.youtube.com/watch?v=TjnFOhwDAYM)


That’s why I balk when people tell me we can get the goods on Google by inspecting the company’s search algorithm. Complex computer programs are highly inscrutable, even to the people who wrote them. Examining an algorithm is no substitute for what we really need to do, and that is to look over people’s shoulders while they are viewing ephemeral stimuli. But how?

In our chat in 2015, Hood told me that law enforcement agencies sometimes use “sock puppets” — digitally simulated people — to try to capture what companies like Google are dishing out. We quickly agreed, though, that if you were trying to track election-related search results, sock puppets wouldn’t work. Google’s algorithm can easily distinguish a cyber-person from a real person because sock puppets don’t have extensive histories in Google’s massive surveillance database.


Our phone call ended there, leaving AG Hood with an uneasy feeling about his bid for reelection (although he ultimately won), and leaving me with an obsession: How could I look over the shoulders of a fairly large and diverse group of real people — preferably of known political backgrounds — over a period of weeks or months before the 2016 election? If I could see what search results they were seeing when they conducted election-related searches, I could detect whether those search results favored one candidate.


The Nielsen Company, founded by the American Arthur C. Nielsen in 1923, keeps track of the number of viewers that television shows attract in 47 countries. In the U.S., those numbers — the all-powerful Nielsen ratings — determine whether shows stay on the air and how much money companies must pay to advertise on them. Nielsen started collecting TV data in the U.S. in 1950 by convincing families around the country to hook up a device to their televisions that tracked their viewing. Today, the company relies on data obtained from thousands of such families to tabulate its ratings.


For this tracking system to produce valid numbers, secrecy is essential. With hundreds of millions of dollars in production costs and advertising revenues on the line, imagine the lengths to which interested parties might go to influence a Nielsen family’s viewing habits — or, for that matter, to tamper with those set-top devices.


Could we, I wondered, set up a Nielsen-type network of anonymous field agents, and could we develop the equivalent of a set-top device to monitor what these people see when they conduct searches using major search engines?


Late in 2015, my associates and I began to sketch out plans for setting up a nationwide system to monitor election-related search results in the months leading up to the November 2016 election. This is where that old dictum — “There is a fine line between paranoia and caution” — came into play. It seemed obvious to me that this new system had to be secret in all its aspects, although I wasn’t yet sure what those aspects were.


To get started, we needed funding, but how does one approach potential donors about a system that needs to be secret and that hasn’t yet been devised?


Here I will need to leave out some details, but let’s just say I got lucky. I explained the situation to one of my political contacts, and he referred me to a mysterious man in Central America. He, in turn, spoke to… well, I have no idea, really — and a few weeks later a significant donation was made to the nonprofit, nonpartisan research institute where I conduct my research.

Those funds allowed us to get going, and one of the first things we did was to form an anonymous LLC in New Mexico to oversee the new project. The head of the LLC was a reanimated “Robin Williams,” and the address was that of a house that was listed for rent in Santa Fe. In other words, the LLC was truly a fiction, and it could not easily be traced to me or any of my staff members, each of whom had to sign nondisclosure agreements (NDAs) to have any involvement in the project.


We called the new corporation “Able Path,” and this deserves some explanation. In 2015, Eric Schmidt, formerly head of Google and then head of Alphabet, Google’s parent company, set up a secretive tech company called The Groundwork, the sole purpose of which was to put Hillary Clinton into office. Staffed mainly by members of the tech team that successfully guided Obama’s reelection in 2012 — a team that received regular guidance from Eric Schmidt — The Groundwork was a brilliant regulations dodge. It allowed Schmidt to provide unlimited support for Clinton’s campaign (which he had previously offered to supervise as an outside adviser) without having to disclose a single penny of his financial largesse. Had he donated large sums to a Super PAC, the PAC would have been prohibited from working directly with the Clinton campaign. The Groundwork solved that problem.


Before the November 2016 election, if you visited TheGroundwork.com, all you got was a creepy Illuminati-type symbol.


You couldn’t get into the website itself, and there was no text on the page at all — just that creepy symbol. Our own new organization, Able Path, used a transformed version of The Groundwork’s inscrutable symbol for its own uninformative landing page.

At this writing, our Groundwork-style landing page is still accessible at AblePath.org, and, in case you haven’t figured it out by now, “Able Path” is an anagram of the name of Google’s parent company, Alphabet.


We conducted Able Path activities from California through proxy computers in Santa Fe, sometimes shifting to proxies in other locations, taking multiple precautions every day to conceal our actual identities and locations. As you will see, the precautions we took proved necessary. Without them, our new tracking system would probably have turned up nothing — and nothing is definitely not what we found.

HOW TO SPY ON SEARCH ENGINES

To find programmers, we networked through friends and colleagues, and we ultimately had two coding projects running secretly and in parallel. One project — the real one — was run by an outstanding coder who had served time in federal prison for hacking. It was his job to oversee the creation of a passive, undetectable add-on for the Firefox browser that would allow us to track election-related search results on the computers on which it was installed.
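The add-on’s code isn’t reproduced here, but conceptually a passive tracker of this kind can be built as an ordinary WebExtension content script. The sketch below is a rough approximation under that assumption, not the add-on we actually shipped; the collection endpoint, term list, and agent identifier are invented placeholders.

```typescript
// Hypothetical sketch of a passive WebExtension content script.
// It runs only on search-result pages (restricted via the extension manifest),
// checks whether the query is election-related, and forwards a snapshot of the
// rendered page. COLLECTOR_URL, ELECTION_TERMS and AGENT_ID are placeholders.

const COLLECTOR_URL = "https://example.org/collect";
const ELECTION_TERMS = ["clinton", "trump", "election 2016"];
const AGENT_ID = "agent-001"; // anonymized code assigned to the field agent

function currentQuery(): string | null {
  // Google and Bing carry the query in "q"; Yahoo uses "p".
  const params = new URLSearchParams(window.location.search);
  return params.get("q") ?? params.get("p");
}

function isElectionRelated(query: string): boolean {
  const q = query.toLowerCase();
  return ELECTION_TERMS.some(term => q.includes(term));
}

window.addEventListener("load", () => {
  const query = currentQuery();
  if (!query || !isElectionRelated(query)) return;

  // Snapshot the first page of results exactly as the user saw it.
  const payload = {
    agent: AGENT_ID,
    engine: window.location.hostname,
    query,
    capturedAt: new Date().toISOString(),
    html: document.documentElement.outerHTML,
  };

  // Fire-and-forget; errors are swallowed so the user's browsing is unaffected.
  fetch(COLLECTOR_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  }).catch(() => { /* stay silent */ });
});
```

A matching manifest would restrict the script to the search engines’ result pages, and the server side (described below) would do the heavier work of following and archiving the linked pages.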


The second project involved an independent software group whose assignment was to create a similar add-on for the Chrome browser, which is a Google product. Its main purpose was to give us something to talk about when we felt we needed to tell someone about the project. It was, in other words, our cover story. Neither coding group knew about the other, all of the coders signed NDAs, and payments were made with money orders.


By early 2016, we were testing and refining our new add-ons, so the time had come to recruit a diverse group of field agents willing to install our new browser add-ons and to keep their mouths shut. Unfortunately, every method we used to try to recruit people failed — sometimes dismally. Ultimately, we had to make use of the services of a black-hat marketing group that specialized in serving as a buffer between Facebook and rather sketchy businesses.

That is a world I know little about, but the general idea is that Facebook is picky about the ads it runs, which made it tough for the fledgling Able Path LLC to place ads that would stay up for more than a few minutes before Facebook took them down. Our paid ads were small, innocuous, and honest (although a bit vague), but we could never get them past Facebook’s human or algorithmic censors.


We focused on Facebook because it offers the most precise demographic targeting these days, and we needed to reach a diverse group of eligible voters in a variety of U.S. states who used the Firefox browser. By serving as a middleman between us and Facebook, the black-hat group was able to place and refine ads quickly, eventually getting us just the people we needed. Because our field agents had to sign two NDAs, and because we were asking them to give up sensitive information about their online activity, the daily give-and-take between our staff members and our recruits was intense. Our difficulties notwithstanding, over a period of several months, we successfully recruited 95 field agents in 24 states.


One issue that slowed us down was the email addresses people were using. Most of the people who responded to our ads used gmail, Google’s email service, but because Google analyzes and stores all gmail messages — including the incoming emails from non-gmail email services — we were reluctant to recruit them. We ultimately decided to recruit just a few to serve as a control. Google could easily identify our field agents who used gmail. Would this control group get different search results than our non-gmail users? The company takes pride in customizing search results for individuals, after all, so anything was possible.


On May 19th, 2016, we began to get our first trickle of data. Here is how it worked:

When any of our field agents conducted an online search on Google, Bing, or Yahoo using any of the 500 election-related search terms we had provided, three things happened almost instantly. First, an HTML copy of the first page of search results they saw was transmitted to one of our online servers. Second, that server followed the 10 search results on that page and preserved HTML versions of the 10 web pages to which they linked. And third, all of this information was downloaded to one of our local servers along with a code number associated with the field agent, the date and time of the search, the 10 search results and their positions on the page (that was important!), and the corresponding 10 web pages. We deliberately preserved HTML versions of everything (rather than image versions) to make it easier to analyze content.
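To make that pipeline concrete, here is a minimal sketch of what the server-side half might look like. It assumes a Node.js runtime (version 18 or later, for the built-in fetch); the link-extraction regex, the directory layout, and the Capture shape are simplifications invented for illustration, not the code we actually ran.

```typescript
import { mkdir, writeFile } from "node:fs/promises";
import { join } from "node:path";

interface Capture {
  agent: string;       // anonymized field-agent code
  engine: string;      // e.g. "www.google.com"
  query: string;       // the election-related search term
  capturedAt: string;  // ISO timestamp of the search
  html: string;        // HTML of the first page of search results
}

// Crude link extraction: pull outbound hrefs from the results HTML and keep the
// first ten off-site links. A real system would use engine-specific selectors.
function extractResultLinks(html: string, engineHost: string): string[] {
  const links: string[] = [];
  const hrefPattern = /href="(https?:\/\/[^"]+)"/g;
  let match: RegExpExecArray | null;
  while ((match = hrefPattern.exec(html)) !== null && links.length < 10) {
    const url = match[1];
    if (!url.includes(engineHost)) links.push(url);
  }
  return links;
}

// Archive one capture: the results page, the ten linked pages, and the metadata
// (agent code, timestamp, query, and each result's rank position).
export async function archiveCapture(capture: Capture, root = "./archive"): Promise<void> {
  const dir = join(root, `${capture.agent}-${Date.parse(capture.capturedAt)}`);
  await mkdir(dir, { recursive: true });
  await writeFile(join(dir, "serp.html"), capture.html);

  const links = extractResultLinks(capture.html, capture.engine);
  const pages = await Promise.all(
    links.map(async (url, rank) => {
      try {
        const res = await fetch(url);
        await writeFile(join(dir, `result-${rank + 1}.html`), await res.text());
        return { rank: rank + 1, url, fetched: true };
      } catch {
        return { rank: rank + 1, url, fetched: false };
      }
    }),
  );

  await writeFile(
    join(dir, "metadata.json"),
    JSON.stringify({ ...capture, html: undefined, results: pages }, null, 2),
  );
}
```

The key design point is the one described above: everything is archived as HTML, keyed by an anonymized agent code, a timestamp, and each result’s rank position.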


We could adjust the list of search terms as we pleased, and we eventually reduced the list from 500 to 250, removing most of the search terms that were, according to independent raters, inherently biased toward Hillary Clinton or Donald Trump. We did this to reduce the likelihood that we would get search rankings favoring one candidate simply because our field agents were choosing to use biased search terms.
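The winnowing step itself is simple. As a sketch, suppose each candidate term carries several independent rater scores on a scale from -1 (wording slanted toward Mr. Trump) to +1 (slanted toward Mrs. Clinton); the scale and the cutoff below are illustrative choices, not the exact procedure we used.

```typescript
interface RatedTerm {
  term: string;
  ratings: number[]; // independent rater scores: -1 (pro-Trump) .. +1 (pro-Clinton)
}

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Keep only terms whose average rated slant is close to neutral.
function filterBiasedTerms(terms: RatedTerm[], threshold = 0.2): string[] {
  return terms
    .filter(t => Math.abs(mean(t.ratings)) <= threshold)
    .map(t => t.term);
}

// Example with made-up ratings:
const kept = filterBiasedTerms([
  { term: "presidential debate schedule", ratings: [0.0, 0.1, -0.1] },
  { term: "crooked hillary", ratings: [-0.9, -0.8, -1.0] },
]);
console.log(kept); // ["presidential debate schedule"]
```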


We knew various demographic characteristics of our field agents, the search terms they were using, what search results they were seeing, and what the web pages looked like that linked to those search results. In other words, we were indeed now looking over the shoulders of real internet users as they conducted election-related searches.


So what were they seeing? Did the search results favor Clinton, Trump or neither one? If search results favored one candidate — that is, if higher ranked results connected to web pages that made one candidate look better than the other — did the favoritism vary by demographic group, and did it differ among the three search engines we were monitoring? Were our gmail users seeing anything different than our non-gmail users?


We now had the ability to answer these questions. At some point, I realized that we also had far more than a modest system for monitoring a presidential election.


As Election Day grew near, the data began to flow faster, so fast that we hit a day in late October when our downloading routines could no longer handle the volume. We had to shut down for three precious days between October 25th and 27th in order to modify our software, but there were no further glitches after that.


I also had a tough decision to make — one of the most difficult decisions I have ever had to make. Should we calculate favoritism ratings while we were still collecting data? In other words, should we start answering our research questions before the election took place? If, before the election, we found that search results were favoring one candidate, should I then report this to the media, or perhaps to the Federal Election Commission, or perhaps to officials from the political parties?


Mr. Trump kept insisting that our political system was “rigged” to favor Mrs. Clinton. What if our tracking system showed that online search results were in fact “rigged” in some sense? What would Trump, who was known for being hotheaded, do with such information? Would he seek a court injunction to force Google to shut down its search engine? If he subsequently lost the election, would he sue Google and the U.S. government? Would he challenge the legitimacy of the election results?


And then there was another possibility. What if — before the election took place — we found that search results were in fact favoring one candidate, and I failed to make that known? Wouldn’t I be complicit in rigging an election, and wouldn’t people eventually find out?


For weeks, I sought advice from everyone I could about these issues, and I have rarely felt so confused or helpless. Ultimately, I decided to keep my team focused on data collection until the winner was announced and only then to begin the rating and analysis of our data.


We had planned to use a large pool of online workers (“crowdsourcing”) to rate our web pages — a process that would take weeks — and tabulating and analyzing the data would take even more time. By the time we could confidently inform the public about what we had found, our results, I figured, would make no difference to anyone. Problem solved.


I submitted reports about our project to two scientific meetings for presentation in March and April of 2017, and after I learned that those submissions had been accepted — in other words, that they had made it through a process of peer review — I passed along some of our findings to Craig Timberg of The Washington Post. He published an article about our project on March 14, 2017, which then got picked up by other news sources. (A more extensive technical summary of our findings can be accessed here.)


As I had predicted, by this late date, no one cared much about our specific findings. But it turns out there was a bigger story here.

A CLEAR PRO-CLINTON TILT

In all, we had preserved 13,207 election-related searches (in other words, 132,070 search results), along with the 98,044 web pages to which the search results linked. The web-page ratings we obtained from online workers (a mean number for each page, indicating how strongly that page favored either Clinton or Trump) allowed us to answer the original questions we had posed (a sketch of how such ratings can be tabulated appears after the list below). Here is what we found:


  1. Bias. Overall, search rankings favored Mrs. Clinton over most of the 6-month period we had monitored — enough, perhaps, to have shifted more than two million votes to her without people knowing how this had occurred. The pro-Clinton tilt appeared even though the search terms our field agents chose to use were, on average, slightly biased toward Mr. Trump.
  2. Lots of bias. Between October 15th and Election Day — the period when we received the largest volume of data — on all 22 of the days we received data, search rankings favored Mrs. Clinton in all 10 of the search positions on the first page of search results.
  3. Google. The pro-Clinton favoritism was more than twice as large on Google as on Yahoo’s search engine, which is, in any case, little more than an offshoot of Google’s search engine these days. We had to discard our Bing data because all of it came from gmail users (more about this issue in a moment).
  4. Demographic differences. Pro-Clinton search results were especially prevalent among decided voters, males, the young, and voters in Democratic states. But voters in Republican and swing states saw pro-Clinton search results too.
  5. Tapering off. Over the course of the 10 days following the election, the pro-Clinton tilt gradually disappeared.

All of these findings were highly statistically significant.
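These tallies come from the crowdsourced page ratings described above. As a minimal sketch of how such per-position numbers can be computed, suppose each rated page carries a mean score from -1 (favors Trump) to +1 (favors Clinton); the scale and the simple per-position average below are illustrative assumptions, not our exact metric.

```typescript
interface RatedResult {
  searchId: string;  // one captured search (one results page)
  position: number;  // rank 1..10 on the first page
  pageScore: number; // mean crowd rating: -1 favors Trump, +1 favors Clinton
}

// Average favoritism at each of the ten search positions across all captured searches.
function favoritismByPosition(results: RatedResult[]): Map<number, number> {
  const sums = new Map<number, { total: number; count: number }>();
  for (const r of results) {
    const acc = sums.get(r.position) ?? { total: 0, count: 0 };
    acc.total += r.pageScore;
    acc.count += 1;
    sums.set(r.position, acc);
  }
  const averages = new Map<number, number>();
  for (const [position, { total, count }] of sums) {
    averages.set(position, total / count);
  }
  return averages;
}
```

Repeating the same tabulation per search engine, per demographic group, or per day yields the kinds of comparisons summarized in the list above.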


Perhaps the most disturbing thing we found had to do with that control I mentioned earlier. We never tried to collect any Chrome data; this was just our cover story, after all. But we did take a careful look at the Firefox data we received from the gmail users we had recruited, and we found that the gmail users who were using the Google search engine on Firefox received search results that were almost perfectly unbiased — eerily unbiased, in fact — about six times less biased than the search results the non-gmail users saw.


You can draw whatever conclusions you like from that last finding. For me, it says simply that you should take precautions when you are monitoring the output of an online company that might want to mess with your data.


Perhaps at this point you are saying, “Okay, they found evidence of a pro-Hillary slant in search results, but maybe that slant was generated by user activity. That’s not Google’s fault.” That, in fact, is exactly what Google always says when antitrust investigators find, time after time, that Google’s search results favor Google products and services. Favoritism in search results, Google says, occurs naturally because of “organic search activity” by users. To that I say: Gimme a break.


As I documented in an article on online censorship for U.S. News and World Report in 2016, Google has complete control over the search results it shows people — even down to the level of customized results it shows each individual. Under Europe’s right-to-be-forgotten law, Google regularly removes more than 100,000 items from its search results each year with surgical precision, and it also demotes specific companies in its search rankings when they violate Google’s vague Terms of Service agreement. The extensive investigation conducted by EU antitrust investigators before they fined Google $2.7 billion in June 2017 for having biased search results also shows that Google exercises deliberate and precise control over its search rankings, both to promote its own products and to demote the products of its competitors.

Our own demographic data and the data from our gmail users also demonstrate the high degree of control Google has over its search results. You can adjust an algorithm to respond to the search activity of users any way you like: You can make it shift search results so that they favor one candidate, his or her opponent, or neither one, just as we do in our SEME experiments. As any experienced coder can tell you — any honest experienced coder, that is — Google’s “organic search” defense is absurd.


Eventually, though, I realized that it doesn’t matter where the favoritism is coming from. Because favoritism in search results shifts opinions dramatically without people’s knowledge, search engines need to be strictly regulated, period — particularly when it comes to socially important activities like elections. If we don’t regulate search results, when favoritism creeps in for any reason — even “organically” (although that idea is nonsense) — it has the potential to influence people, further propelling that favoritism in a kind of digital bandwagon effect.

WHY MONITORING SYSTEMS ARE NEEDED

In retrospect, our tracking project was important not because of the numbers we found but because of what we showed was possible. We demonstrated that ephemeral events on the internet can be accurately monitored on a large scale — and, in principle, that any ephemeral content — advertisements, news feeds, search suggestions, featured snippets, search results, images people swipe, conversations with personal assistants, and so on — can be monitored on any scale. That is a game changer.


Think about how online ephemeral events have been discussed until now. In a December 2016 article in The Guardian, journalist Carole Cadwalladr lamented that the online messages displayed to UK citizens just before the June 2016 Brexit referendum were inaccessible — lost forever. “That’s gone,” she wrote. “There’s no record. It wasn’t — couldn’t be — captured. It can’t be studied. We’ll never know.”


Similarly, the antitrust actions brought against Google by the EU, Russia, India, and the U.S. have sometimes been based on relatively small amounts of data collected using indirect methods. But what if an ecosystem of passive monitoring software of the sort my team and I deployed in 2016 were implemented worldwide to keep an eye on what tech companies were showing people? What if such a system were running around the clock, with data analyzed and interpreted in real-time by smart algorithms aided by smart people? What if evidence of undue influence in elections or of other kinds of unethical or illegal machinations were shared on an ongoing basis with journalists, regulators, legislators, law enforcement agencies and antitrust investigators?


Such a system could help to preserve free and fair elections and perhaps even protect human freedom.


In the spring of 2017, I began working with prominent academics and business professionals to develop large-scale systems devoted to monitoring ephemeral online information. Google, Facebook and other companies monitor us to an almost obscene degree; isn’t it time that we, the people, begin monitoring them? Why should we have to keep guessing about the shenanigans of Big Tech companies when we can view and archive exactly what they are showing people?


With monitoring systems in place, we will know within minutes whether Mr. Zuckerberg is sending go-out-and-vote reminders to supporters of just one candidate. We will also be able to detect and preserve political censorship in newsfeeds and online comments, favoritism in search results and search suggestions, sudden demotions in search rankings, the blocking of websites, shadowbanning on Twitter, selective suppression of advertisements, the proliferation of Russian-sponsored ads and news stories, and the sometimes bizarre answers provided by new devices like Amazon Echo and Google Home — not to mention dirty tricks no one has identified yet. (Many will be captured in our archives and detected later.)


Because technology is evolving so rapidly these days, the usual regulatory and legislative mechanisms have no way of keeping up. Aggressive monitoring systems will not only be able to keep pace with the new technologies, they might even be able to keep a step ahead, scanning the technological landscape for irregularities and exposing thuggish behavior before it can do much damage.


There’s another possibility too: If executives at the Big Tech companies know their algorithmic output is being tracked, they might think twice about messing with elections or people’s minds.

As U.S. Supreme Court Justice Louis D. Brandeis noted in 1913, “Sunlight is said to be the best of disinfectants,” which is why we call our current organizational efforts sunlight projects.

I have become increasingly concerned about these issues because in the short period since SEME was discovered (in 2013), I have already stumbled onto other dangerous and largely invisible new forms of influence — the Search Suggestion Effect (SSE), the Answer Bot Effect (ABE), the Targeted Messaging Effect (TME), and the Opinion Matching Effect (OME), among others. Effects like these might now be impacting the opinions, beliefs, attitudes, decisions, purchases and voting preferences of more than two billion people every day.


How many new forms of influence has technology made possible in recent years that have not yet been discovered, and what will the future bring?


You might have noticed lately that more and more prominent people — among them, Barack Obama, Facebook co-founder Sean Parker, right-leaning journalist Tucker Carlson, controversial former White House adviser Steve Bannon, left-leaning billionaire George Soros, Twitter co-founder (and founder of Medium) Evan Williams, tech entrepreneur Elon Musk, and World Wide Web inventor Sir Tim Berners-Lee — have been expressing their concerns about the power that Big Tech platforms have to control thinking on a massive scale. An ecosystem of passive monitoring software might prove to be the most effective way of finally making these companies accountable to the public, both now and in the foreseeable future.


Without such systems in place to protect us, two billion more people will be drawn into this Orwellian web within the next five years, and the recent proliferation of home assistant devices, as well as the rapidly expanding internet of things — projected to encompass 30 billion devices by 2020 — will make new forms of mind control possible that we cannot now even imagine.

If you are concerned about these possibilities and would like to support our efforts to protect humanity from the technological oligarchy that now dominates our lives and the lives of our children, please get in touch.


Robert Epstein (@DrREpstein) is Senior Research Psychologist at the American Institute for Behavioral Research and Technology in California. He holds a Ph.D. from Harvard University, is the former editor-in-chief of Psychology Today, and has published 15 books and more than 300 articles on artificial intelligence and other topics. He is currently working on a book called Technoslavery: Invisible Influence in the Internet Age and Beyond.