Agencies and their clients may be more cognizant of where they source their data and tech these days, but when it comes to taking real action on promises to create a more inclusive and less discriminatory ad tech landscape, there are plenty of dim corners left to inspect. Publicis Media is trying to shine a light.
With the help of its multicultural practice group, Publicis Groupe’s media agency arm is updating its evaluation process for tech and data providers, including demand- and supply-side platforms as well as data management tools. Publicis Media is also incorporating new ways to assess whether those companies’ systems perpetuate audience segmentation stereotypes or unfairly deprioritize ad inventory supplied by minority publishers.
Specifically, Publicis Media is adding a new set of 30 to 40 questions to an already lengthy request-for-information process that the agency puts suppliers through annually: roughly 1,200 questions that take about two months to complete per vendor. Adding new parameters to the process, which the agency has dubbed Verified, is meant to help clients evaluate potential tech and data vendors by getting them to supply more information on topics including audience modeling, methodology and taxonomy. The process also aims to persuade tech firms to provide access to the algorithmic systems used to make decisions such as audience targeting and inventory pricing.
While responses to the added questions might not reveal precisely what’s happening inside often opaque ad tech systems, the new inquiries reflect issues that haven’t necessarily been addressed in a formal, systematic way before.
The latest slate of questions includes, “Do you model your ethnic, racial, sexual orientation, gender, and religious segments?” and follows up by asking providers for “a detailed explanation of the process for each group.” Another question asks, “What guidelines or checks and balances do you have in place to ensure the data for ethnic, racial, sexual orientation, gender, and religious segments are accurate?” Responses might allow the agency to compare the level of detail tech suppliers provide or the amount of attention they have paid to impacts of their tech on multicultural equity.
“Currently, there is no marketplace standard in terms of ethically evaluating data and algorithms,” said Publicis Media’s Shelley Pinsonneault, who oversees the agency’s global technology team that conducts the Verified audits. The ten-year-old data and tech assessment process is intended to help clients figure out what approaches are the right fit, rather than rating or labeling tech suppliers as doing something right or wrong. “Each brand has a different threshold for how they approach data and use information to make decisions,” she said.
Publicis Media staggers its annual evaluations by category, and so far it has incorporated the new questions and criteria addressing multicultural concerns and impact only when inspecting data firms last year. Now, it plans to add the questions to evaluations for tech providers in a dozen other areas, such as DSPs and digital out-of-home ad tech.
Digging into how DSPs value minority ad inventory
Publicis Media’s aim is to get answers about how ad systems are designed in order to understand whether their data flows or automated rules and decision processes could negatively affect people in certain cultural groups, or lead to discrimination against media that speaks to niche audiences, explained Jennifer Garcia, svp of multicultural data science and research lead at Cultural Quotient, the agency’s multicultural practice group.
For instance, to assess how algorithmic ad systems make decisions that affect whether ads appear in media reflecting a diversity of groups and cultures, Garcia and her team might look at how DSPs decide whether to bid on a publisher’s inventory, at what price, and how and where ad impressions show up on web pages or in apps. Those signals help gauge how the ad systems prioritize, or deemphasize, publishers of certain types of content.
“Depending on what kind of relationship exists among certain publishers and the DSP, this may or may not play into the bias of what a DSP renders preferential among publishers. And certain minority groups and endemic publishers may be at a disadvantage, which also plays a role in the vicious cycle of not having enough data or insights to help inform future multicultural campaigns for clients who are genuinely vested in the segment,” said Garcia.
Publicis Media also aims to ensure that audience segmentation does not reinforce stereotypes or lump all people in certain groups into a monolithic category, said Garcia. So, her team looks for details on how audiences are built. For instance, when examining a segment reaching 18- to 34-year-old African-American males, she said, “There is more than just the hip-hop listener and sneaker wearer. There are things such as photography and art, entrepreneurship, interest in any level of academics, food, family, health — there is so much more. When we do not bring these opportunities to the forefront, we miss an extremely important opportunity to effectively reach an audience, with not only purchasing power but influence on society,” Garcia added.
Getting details on algorithms isn’t easy
Publicis Media may wield millions of advertiser dollars, but it still has a difficult time getting access to all the details it might like to have about how data is sourced or how technologies are built, according to Pinsonneault. “The extent to which [tech firms] will open the hood and talk about their algorithms is hard,” she said.
Nonetheless, the agency is pressing the tech firms for insights addressing the ethical considerations underlying their algorithms. For example, one new inquiry in the evaluation questionnaire asks tech providers, “What is your approach to ensuring algorithmic [or] AI bias does not exist within your audiences?”
“Of course tech companies would not [let] other companies have access to source code or algorithms. That’s trade secret,” said Tae Wan Kim, associate professor of business ethics at Carnegie Mellon University’s Tepper School of Business, who researches issues related to AI ethics. “A better approach is to rely on an entrusted third party that verifies algorithms for the society,” he said.
While Publicis Media itself does not work with third-party auditors in its Verified evaluation process, the company plans to add a query to its RFI asking whether partners have engaged third-party auditors, such as Neutronian, which assesses data sourcing and data models, and TruthSet, which evaluates the accuracy of demographic audience data.
Kim pointed to tools that help explain algorithmic models, such as LIME and SHAP, and argued that ultimately, there’s financial incentive for companies to instill trust in their algorithmic systems and data through auditing. Noting the rise in AI ethics roles at firms including Salesforce, he said, “An ethically justified algorithm will be an important asset, too.”