Published Date: 01-09-19

By Justin Sanders

With all the controversy swirling around Big Tech in 2018, you may have missed yet another, slightly less reported-on scandal that occurred near the end of the year in New Zealand. It involved the murder of British backpacker Grace Millane and the temporary name suppression ordered by the local authorities for her suspected killer.

The order made it illegal for media outlets to publish the suspect’s name – but that didn’t stop one plucky little Silicon Valley company from doing it anyway. That’s right, Google, with its customary sensitivity to local laws and customs, completely ignored the government’s order and transmitted the man’s name far and wide, giving it prominent placement in the subject line of its “What’s trending in New Zealand” newsletter. The company’s slip-up caused a national uproar, but Google, as is its wont, has yet to be held accountable for it – and it doesn’t look like it ever will be.

Just prior to all that, back here in the States, a Big Tech lobbying group called the Computer & Communications Industry Association (CCIA) filed a public comment on the United States Trade Representative’s (USTR) ongoing efforts to negotiate a new trade agreement with the European Union. The filing targets Article 13 of the EU’s proposed Copyright Directive, though reading it, we couldn’t help thinking about the seemingly unrelated New Zealand incident – because both cases reflect Big Tech’s breathtaking disregard for the human beings it allegedly serves.

The New Zealand snafu occurred because Google’s local news algorithm vacuumed up the most talked-about story of the day, auto-plopped it onto an email, and sent it out before any real person was the wiser. At any legitimate news publishing organization, a human intermediary – AKA a “fact-checker” – would have noticed the issue immediately and rectified it. Google, probably because it has deemed itself a “platform” as opposed to a “publisher,” was quite comfortable removing this intermediary and letting an algorithm do the work instead – and that’s when disaster struck.
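To make the failure mode concrete, here is a minimal sketch in Python – with every name invented for illustration, since Google has not published its actual pipeline – of how a fully automated trending newsletter can ship a legally suppressed name with no human checkpoint anywhere in its path:

```python
# Hypothetical sketch only -- not Google's actual code. All names invented.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    mentions: int  # how widely the story is being discussed

def top_trending(stories: list[Story]) -> Story:
    # Pick whatever is most talked about. Note there is no check for
    # legal restrictions such as a court-ordered name suppression.
    return max(stories, key=lambda s: s.mentions)

def build_subject_line(stories: list[Story]) -> str:
    # The raw headline goes straight into the email subject. A human
    # fact-checker would catch a suppressed name here; this pipeline
    # simply has no such step.
    return f"What's trending in New Zealand: {top_trending(stories).headline}"
```

The point of the sketch is not the specifics but the architecture: once the human intermediary is removed, nothing between the algorithm and the send button knows what a suppression order is.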

For Google and its Big Tech brethren, such machine-driven bumps in the road are to be expected as they engineer their way toward a better, more automated internet world. Or at least that’s what they like to tell us. It’s hard to take them at their word, however, when their tune changes the instant the machines in question stop enabling their relentless, anti-competitive profiteering and start putting a check on it.

Which brings us back to Article 13.

Article 13’s stated goal is to ensure that “online content sharing service providers and right holders shall cooperate in good faith in order to ensure that unauthorized protected works or other subject matter are not available on their services.” The CCIA and its member tech companies, which make billions of dollars from the uploading of content they neither make nor own, want nothing to do with it. They argue the directive “impos[es] an unworkable filtering mandate on hosting providers that would require automated ‘notice-and-stay-down’ for a wide variety of copyrighted works.”

While the directive would take years to come into force once passed, it’s true that one possible result of Article 13’s passage could be online filters that scan uploads for infringing content. And because the CCIA sees worst-case scenarios in any and every effort to hold Big Tech accountable, they pour on the criticisms about the “significant economic consequences for the U.S. digital economy” that a move toward content filters might entail. Here’s what’s funny about that – a core CCIA funder, Google, loves to tout the success of its Content ID filtering system in reducing YouTube piracy in the U.S. We beg to differ, but let’s pretend it’s true for the sake of argument – why couldn’t similar technology, developed by the world’s most brilliant engineers in Silicon Valley, work equally well for European content hosts?

The answer, of course, is that it could. It could help enforce copyright law and close the “value gap” that keeps creative rightsholders from getting paid fairly for their works that appear online.

But the CCIA’s member companies believe they are special – “exceptional,” as they like to say – when it comes to infringing works on their platforms, and the law as it stands supports their belief. “Intermediary liability protections for Internet service providers, such as the copyright safe harbors found in Section 512 of the Digital Millennium Copyright Act, have been critical to growing the U.S. digital economy by providing business certainty to U.S. investors and innovators,” the CCIA writes. “The United States should commit to upholding these commitments in the intellectual property chapters of its FTAs, continuing with any trade agreement with the EU.”

For decades, we in the creative communities have had to sit on the sidelines and watch helplessly as internet companies have amassed staggering wealth through business models predicated on these “copyright safe harbors.” With no fear of repercussion, Google and others have enabled widespread piracy, both through easily available search results linking to illegal websites, and through the direct uploading of infringing content to their own platforms, such as YouTube. YouTube’s advertising engine, kind of like Google’s news aggregator, makes no distinction between legal and illegal content, taking in money for eyeballs regardless of source.

Under current interpretations of the law, so long as an online platform responds to DMCA requests in a timely manner, it is not breaking the law… but the infringing content still generates revenue for the platform, even if it’s up for only a short time, and that money is never given back. And once the content is taken down, there is nothing to stop someone from putting it right back up again. This becomes an endless game of whack-a-mole that costs the film and television industry billions of dollars every year and puts the onus on artists to do the platforms’ policing for them.

Article 13 and the EU Copyright Directive aim to mitigate this shameful situation, at least in Europe, through a “notice-and-stay-down” requirement. It’s a sensible goal – but it’s under relentless attack by Big Tech companies and Big Tech shills such as the CCIA.
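To show the distinction in the simplest possible terms, here is a hedged Python sketch. Real systems like Content ID match perceptual fingerprints that survive re-encoding and cropping; the exact-hash approach below is a deliberate simplification, used only to mark where “stay down” differs from plain takedown:

```python
# Simplified sketch of "notice-and-stay-down" versus plain takedown.
# Real filters use perceptual fingerprinting; an exact hash is used
# here only to keep the example short.
import hashlib

blocklist: set[str] = set()  # fingerprints of content already noticed

def fingerprint(upload: bytes) -> str:
    return hashlib.sha256(upload).hexdigest()

def handle_notice(upload: bytes) -> None:
    # Plain notice-and-takedown ends with removal -- nothing stops an
    # identical re-upload a minute later. Recording the fingerprint is
    # the "stay down" step.
    blocklist.add(fingerprint(upload))

def accept_upload(upload: bytes) -> bool:
    # Under stay-down, known infringing content is rejected at the door
    # rather than waiting for the rightsholder to file yet another notice.
    return fingerprint(upload) not in blocklist
```

Under plain takedown, the whack-a-mole loop described above never ends; under stay-down, each notice permanently shrinks the platform’s tolerance for that work.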

The truth is, no new copyright enforcement policy will be perfect, and no new measures designed to implement the policy will be either. Actually, no law on the planet is perfect, at least in our experience. And, if content filtering systems do get built as a result of Article 13, there will most definitely be hitches along the way – just as there were, and still are, with Content ID. What’s important is that, under Article 13, a framework will be in place for curbing mass infringement and for valuing the creatives who make significant contributions to innovation, to culture, and to the economy, no matter what part of the world they are in.

The CCIA thinks that because some copyright-related automation might have hitches, we should have no copyright-related automation at all. To which we say: what if their member companies applied that same standard to their own business practices – i.e., to the less-than-perfect data collection tools, personalized news feeds, and targeted ad services that have caused massive amounts of harm around the globe? What if Google, Facebook, and the rest weren’t allowed to automate anything because of the potential risks of automation – risks that have led to far more damage than any copyright monitoring A.I. could ever dream of?

The world would be a hell of a lot less convenient, and certain CEOs would be a hell of a lot less rich, but at least the world’s biggest companies would have to start treating us like humans again.