As third-party cookies and other web trackers lose efficacy, companies are getting better at tracking people by piecing together disparate scraps of information. Google, meanwhile, is erecting a variety of technical blockades in an attempt to thwart this fingerprinting tech. Among the digital ad giant’s anti-fingerprinting efforts is Privacy Budget.
Google has proposed Privacy Budget as part of its Privacy Sandbox initiative. By putting its stamp of approval on the concept — the privacy budget is a common method in privacy research — and by including it on its Privacy Sandbox timeline, the company has created some momentum behind it.
What is a privacy budget?
Whether it’s Google’s branded Privacy Budget or the lowercase, generic versions implemented by firms such as Apple for privacy protections in Safari, the privacy budget is a technique used to defend against fingerprinting. The idea is that it places limits on “fingerprinting surfaces” — such as the HTTP headers sent each time a web page loads — that can be mined for information and combined to detect or infer someone’s identity.
It’s important to remember here that, in the Privacy Budget scenario, the web browser is the user agent, or a proxy for the person using the browser. The Privacy Budget is a limit on the amount of information various systems can access or detect in someone’s browser. In other words, based on pre-set thresholds, the Privacy Budget allows the browser to reveal information up to a certain point, then it blocks additional pieces of data from being accessed or detected.
Got it. So how does a privacy budget work in general?
In a way, a privacy budget can be thought of as a monetary budget: a limit on the amount that can be spent. The currency, however, is not dollars; the value of data allowed under a privacy budget is expressed in bits. When a Privacy Budget is at work, it limits the amount of information that a website or technology can access, according to how each piece of data is valued in bits. If the system can detect that a browser is set to use the Korean language, for example, that language setting counts as a distinguishing characteristic and is valued at a certain number of bits. Once a pre-defined number of bits, or amount of distinguishing browser characteristics, has been detected, the privacy budget is used up.
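The accounting described above can be sketched in a few lines of code. This is a minimal illustration, not Google’s implementation: the surfaces, their bit values, and the 10-bit threshold are all hypothetical numbers chosen for the example.

```python
BUDGET_BITS = 10.0  # hypothetical total budget for this site

# Hypothetical entropy values (in bits) for a few fingerprinting surfaces
SURFACE_BITS = {
    "language": 2.0,      # e.g. browser set to Korean
    "screen_size": 3.5,
    "user_agent": 4.0,
    "timezone": 2.5,
}

def request_surfaces(surfaces):
    """Reveal surfaces in request order until the budget would be exceeded."""
    spent = 0.0
    revealed, blocked = [], []
    for name in surfaces:
        cost = SURFACE_BITS[name]
        if spent + cost <= BUDGET_BITS:
            spent += cost          # budget remains: reveal the real value
            revealed.append(name)
        else:
            blocked.append(name)   # budget exhausted: deny or degrade access
    return revealed, blocked

revealed, blocked = request_surfaces(
    ["language", "screen_size", "user_agent", "timezone"]
)
# With these made-up values, the first three surfaces fit in the budget
# (2.0 + 3.5 + 4.0 = 9.5 bits) and the timezone request is blocked.
```

The key design point is that the budget is cumulative per site: each revealed characteristic narrows the pool of matching users, so the browser tracks the running total rather than judging each request in isolation.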
Why are people talking about privacy budgets now?
Google plans to enable its branded Privacy Budget as a Privacy Sandbox tool sometime in 2023, according to its timeline for that overarching project. And as Google has fueled interest in the concept, the technique has also been cited in a variety of other web standards specifications proposed in the World Wide Web Consortium, the web standards body, as a way to mitigate the potential fingerprinting capabilities of those technologies, such as tech used to stream video and audio.
So why is Google behind this idea?
A company spokesperson stressed the technology’s infancy, telling Digiday, “Privacy Budget, an early-stage proposal designed to protect people from fingerprinting, applies a limit on information about a user’s browser to websites on a per-site basis and therefore would not introduce new information across sites that could be used for tracking. As with all proposals, we will continue to get feedback and provide resources for developers throughout the development process to help ensure a smooth transition to a more private web.”
The company also pointed to information about the technology posted publicly by a Google engineer on the developer platform GitHub. That post states, “Fundamentally, we want to limit how much information about individual users is exposed to sites so that in total it is insufficient to identify and track users across the web, except for possibly as part of large, heterogeneous groups,” and adds, “Our maximum tolerance for revealing information about each user is termed the privacy budget.”
If — or when — Google implements it, what happens to a site or tech when a budget is tapped out?
According to Google’s developer outline, when a website or technology tries to detect a piece of information the budget restricts, the result could be a page error, or that fingerprinting surface could be replaced with a “privacy-preserving version” that returns “noisy” or generic results.
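The “noisy or generic results” fallback can be illustrated with a sketch like the one below. Everything here is hypothetical — the function name, the choice of screen size as the surface, and the generic 1920×1080 bucket are assumptions for the example, not details from Google’s outline.

```python
def screen_size(real_width, real_height, budget_exhausted):
    """Return the real screen size while budget remains; once the budget
    is exhausted, return a generic bucketed value shared by many users,
    as one possible 'privacy-preserving version' of the surface might.
    (An alternative design would add random noise to the real values.)"""
    if not budget_exhausted:
        return real_width, real_height
    # Generic fallback: a common resolution that reveals little about
    # this particular user.
    return 1920, 1080
```

A site calling this surface after exhausting its budget gets a plausible but uninformative answer rather than an error, which is the less disruptive of the two outcomes Google’s outline describes.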
That sounds like it could be messy. What problems could arise?
People who have dug into privacy budget methods say there are plenty of possible hiccups. A few stand out for marketers and site publishers because they could degrade user experience.
Researchers at browser maker Brave identified multiple potential issues with the privacy budget approach Google has proposed, including two that could specifically affect user experience and site optimization.
First, the privacy-preserving system that would be triggered when a privacy budget was tapped out could end up throwing out so much noise that the browser’s performance would suffer. “Second, there are few-to-no mechanisms in place to prevent websites from” bombarding the browser with attempts to access the data used for fingerprinting, wrote Brave chief scientist Dr. Ben Livshits and senior privacy researcher and director of privacy Pete Snyder.
Plus, Snyder told Digiday, the approach could be “a non-starter” for some web developers trying to build websites and tech that can respond to a variety of dynamic limitations on how sites function. Of course, if Google were to build Privacy Budget into its market-dominant Chrome browser, those web developers would have little recourse but to build for it anyway, since Chrome accounts for roughly two-thirds of global web traffic.