Late last week, it emerged that Google intends to ignore a call by the World Wide Web Consortium (W3C) — the international body that works to guide the development of web standards — to rethink the Topics API: a key ad-targeting component of Google’s so-called Privacy Sandbox proposal to evolve the adtech stack that Chrome supports for targeted advertising.
In short, Topics is the piece of the Sandbox proposal that enables ad targeting based on web users’ interests, as tracked and inferred by their browser.
The W3C Technical Architecture Group (TAG) raised a series of concerns following a request from Google last March for an “early design review” of the Topics API — writing last week that its “initial view” is that Google’s proposed Topics API fails to protect users from “unwanted tracking and profiling” and maintains the status quo of “inappropriate surveillance on the web.”
“We do not want to see it proceed further,” added Amy Guy, commenting on behalf of the TAG.
The TAG’s take is not the first downbeat assessment of Topics. Browser engine developers WebKit and Mozilla also recently gave a thumbs-down to Google’s approach — with the former warning against preexisting privacy deficiencies on the web being used as “excuses for privacy deficiencies in new specs and proposals” and the latter deeming Topics “more likely to reduce the usefulness of the information for advertisers than it provides meaningful protection for privacy.”
The TAG also flagged the risk of the web user experience fragmenting if browser support for Topics remains limited — which could lead implementing sites to block visitors who are using non-Chromium browsers.
Despite deepening opposition from the world of web infrastructure to Google’s approach, the U.K.’s privacy watchdog appears content to stand by and let Google proceed with a proposal that technical experts at the W3C warn risks perpetuating the kind of privacy intrusions (and user agency and transparency failures) that have mired the adtech industry in regulatory (and reputational) hot water for years. The Information Commissioner’s Office (ICO) is a key oversight body in this context, as it’s actively engaged in assessing the Sandbox’s compliance with data protection law, having joined a major antitrust intervention by the U.K.’s Competition and Markets Authority (CMA).
Asked whether it has any concerns about Topics’ implications for privacy, including in light of the TAG’s assessment, the ICO took several days to consider the question before declining comment.
The regulator did tell us it is continuing to engage with Google and with the CMA — as part of its role under commitments made by Google last year to the competition watchdog. The ICO’s spokesperson also pointed back to a 2021 opinion, published by the prior U.K. information commissioner on the topic (ha!) of evolving online advertising — which set out a series of “principles” and “recommendations” for the adtech industry, including a stipulation that users be provided with an option to receive ads without any tracking, profiling or processing of personal data — and which the spokesperson said lays out its “general expectations” in relation to such proposals now.
But a fuller response from the ICO to the W3C TAG’s detailed critique of Topics was not forthcoming.
A Google spokesman, meanwhile, confirmed the company has briefed the regulator on Topics. Responding to questions about the TAG’s concerns, it also told us:
While we appreciate the input of TAG, we disagree with their characterization that Topics maintains the status quo. Google is committed to Topics, as it is a significant privacy improvement over third-party cookies, and we’re moving forward.
Topics supports interest-based ads that keep the web free & open, and significantly improves privacy compared to third-party cookies. Removing third-party cookies without viable alternatives hurts publishers, and can lead to worse approaches like covert tracking. Many companies are actively testing Topics and Sandbox APIs, and we’re committed to providing the tools to advance privacy and support the web.
Additionally, Google’s senior director of product management, Victor Wong, took to Twitter on Friday (following press reporting on the implications of the TAG’s concerns) to tweet a threaded version of the sentiments in the statement (in which Wong also claims users can “easily control what topics are shared or turn it off”) — ending with the assertion that the adtech giant is “100% committed to these APIs as building blocks for a more private internet.”
So, TL;DR, Google’s not for turning on Topics.
It announced this component of Sandbox a year ago — replacing a much-criticized earlier interest-based ad-targeting proposal, called FLoC (aka Federated Learning of Cohorts), which would have grouped users with comparable interests into targetable buckets.
FLoC was soon attacked as a terrible idea — with critics arguing it could amplify existing adtech problems like discrimination and predatory targeting. So Google may not have had much of a choice in killing off FLoC — but doing so provided it with a way to turn a PR headache over its claimed pro-privacy ads evolution project into a quick win by making the company appear responsive.
Thing is, the fast-stacking critiques of Topics don’t look good for Google’s claims of “advanced” adtech delivering a “more private internet” either.
Under the Topics proposal, Chrome (or a Chromium-based browser) tracks users’ web activity and assigns interests to them based on what they look at online; those interests can then be shared with entities that call the Topics API in order to target them with ads.
There are some limits — such as on how many topics can be assigned, how many are shared, how long topics are stored, and so on — but, fundamentally, the proposal entails the user’s web activity being watched by their browser, which then shares snippets of the taxonomy of interests it’s inferred with sites that ask for the data.
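For a sense of what the caller’s side of that exchange looks like, here is a minimal TypeScript sketch, assuming the `document.browsingTopics()` entry point and the result fields described in Google’s explainer (the helper function name and the exact field set are illustrative and may not match what eventually ships):

```typescript
// Minimal sketch of how an embedded script might query the Topics API in
// Chrome. document.browsingTopics() is not part of standard DOM typings,
// so it is declared here; the field names follow Google's published
// explainer and may change as the proposal evolves.

interface BrowsingTopic {
  topic: number;            // numeric ID into Google's topics taxonomy
  taxonomyVersion: string;  // which version of the taxonomy the ID refers to
  modelVersion: string;     // which classifier version inferred the topic
}

declare global {
  interface Document {
    browsingTopics?: () => Promise<BrowsingTopic[]>;
  }
}

// Hypothetical helper: gather whatever topics the browser is willing to share.
async function fetchTopicsForAdRequest(): Promise<BrowsingTopic[]> {
  // Feature-detect: only Chromium-based browsers with the feature enabled
  // expose the method; Firefox and Safari have declined to implement it,
  // which is the fragmentation risk the TAG flags.
  if (typeof document.browsingTopics !== "function") {
    return [];
  }
  // The browser returns a small set of coarse interest labels inferred from
  // the user's recent browsing history. The caller never sees the history
  // itself, only the inferred topics, which it can forward to an ad server.
  return document.browsingTopics();
}

export { fetchTopicsForAdRequest };
```

The point to note is that the gatekeeping happens entirely inside the browser: the calling site simply receives whatever topics the browser decides to hand over, and the user’s only lever is whatever settings Chrome chooses to surface.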
This is not 100% clear to (and controllable by) the web user, as the TAG’s assessment argues:
The Topics API as proposed puts the browser in a position of sharing information about the user, derived from their browsing history, with any site that can call the API. This is done in such a way that the user has no fine-grained control over what is revealed, and in what context, or to which parties. It also seems likely that a user would struggle to understand what is even happening; data is gathered and sent behind the scenes, quite opaquely. This goes against the principle of enhancing the user’s control, and we believe is not appropriate behaviour for any software purporting to be an agent of a web user.
…
Giving the web user access to browser settings to configure which topics can be observed and sent, and from/to which parties, would be a necessary addition to an API such as this, and go some way towards restoring agency of the user, but is by no means sufficient. People can become vulnerable in ways they do not expect, and without notice. People cannot be expected to have a full understanding of every possible topic in the taxonomy as it relates to their personal circumstances, nor of the immediate or knock-on effects of sharing this data with sites and advertisers, and nor can they be expected to continually revise their browser settings as their personal or global circumstances change.
There is also the risk of sites that call the API being able to “enrich” the per-user interest data gathered by Topics by using other forms of tracking — such as device fingerprinting — and thereby strip away at web users’ privacy in the same corrosive, anti-web-user way that tracking and profiling always do.
And while Google has said “sensitive” categories — such as race or gender — can’t be turned into targetable interests via Topics processing, that does not stop advertisers from identifying proxy categories they could use to target protected characteristics, as has happened with existing tracking-based ad-targeting tools (see, for example, “ethnic affinity” ad targeting on Facebook — which led to warnings back in 2016 of the potential for discriminatory ads excluding people with protected characteristics from seeing job or housing ads).
(Again the TAG picks up on that risk — further pointing out: “[T]here is no binary assessment that can be made over whether a topic is ‘sensitive’ or not. This can vary depending on context, the circumstances of the person it relates to, as well as change over time for the same person.”)
A cynic might say the controversy over FLoC, and Google’s fairly swift ditching of it, provided the company with useful cover to push Topics as a more palatable replacement — without attracting the same level of fine-grained scrutiny to a proposal that, after all, seeks to keep tracking web users — given all the attention already expended on FLoC (and with some regulatory powder already spent on antitrust considerations around the Privacy Sandbox).
As with a negotiation, the first ask may be outrageous — not because the expectation is to get everything on the list but as a way to skew expectations and get as much as possible later on.
Google’s highly technical plan to build a new (and, it claims, “better-for-privacy”) adtech stack was formally announced back in 2020 — when it set out its strategy to deprecate support for third-party tracking cookies in Chrome, having been dragged into action by far earlier anti-tracking moves by rival browsers. But the proposal has faced considerable criticism from publishers and marketers over concerns it will further entrench Google’s dominance of online advertising. That — in turn — has attracted a bunch of regulatory scrutiny and friction from antitrust watchdogs, leading to some delays to the original migration timeline.
The U.K. has led the charge here, with its CMA extracting a series of commitments from the tech giant just under a year ago — over how it would develop the replacement adtech stack and when it could make the switch.
Principally these commitments are about ensuring Google takes feedback from the industry to address any competition concerns. But the CMA and the ICO also announced they would work jointly on this oversight — given the clear implications for web users’ privacy of any change to how ad targeting is done. Which means competition and privacy regulators need to work hand-in-glove here if the web user is not to keep being stiffed in the name of “relevant ads.”
The issue of adtech for the ICO is, however, an awkward one.
This is because it has — historically — failed to take enforcement action against current-gen adtech’s systematic breaches of privacy law. So the notion of the ICO playing hardball with Google now, over what the company has, from the outset, branded as a pro-privacy advancement on the dirty status quo — even as the regulator lets privacy-ripping adtech carry on unlawfully processing web users’ data — might look a bit “arse over tit,” so to speak.
The upshot is the ICO is in a bind over how proactively it can regulate the detail of Google’s Sandbox proposal. And that of course plays into Google’s hands — since the sole privacy regulator with eyes actively on this stuff is forced to sit on its hands (or at best twiddle its thumbs) and let Google shape the narrative for Topics and ignore informed critiques — so you could say Google is rubbing the regulator’s face in its own inaction. Hence the unwavering talk of “moving forward” on a “significant privacy improvement over third-party cookies.”
“Improvement” is of course relative. So, for users, the reality is it’s still Google in the driver’s seat when it comes to deciding how much of an incremental privacy gain you’ll get on its people-tracking business as usual. And there’s no point in complaining to the ICO about that.