Saturday, January 28, 2023


    Best Practices for Fact-Checking AI-Generated Content

    If you’ve ever met a dyed-in-the-wool fact-checker trained in one of the iconic editorial departments that revere the practice, you’ll know they’re just built differently. Where you see an innocuous sentence, they see words and phrases rife with assumptions, historical references, clichés, and other anti-fact issues.

    Few content teams have someone with such a keen eye for facts. And honestly, the kinds of content businesses publish rarely require the rigor expected by publishers of long-form investigative journalism.

    All content teams do, however, need everyone to buy into and use the following practices.

    Inventory the facts and fact-ish details in your content.

    Just as you copyedit every piece of content before publication, each piece should go through a fact review.

    Someone (not the writer) reads the piece and highlights all the facts and fact-like sentences. These include the sentences or phrases that depend on interpretation or source selection. Group the inventory of facts into three categories:

    • Category One includes the core facts central to the piece’s argument.
    • Category Two includes important supporting facts that reinforce the central argument but without which the piece could still stand.
    • Category Three includes facts that add color but are peripheral to the asset’s central thesis.
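    As a rough sketch of the inventory step (the example sentences and the numeric tagging scheme are illustrative, not from any real piece), the highlighted facts might be grouped like so:

    ```python
    from collections import defaultdict

    # Hypothetical highlighted sentences, each tagged 1 (core),
    # 2 (important but non-essential), or 3 (peripheral color).
    tagged_facts = [
        ("The market grew 12% last year", 1),
        ("Analysts expect the trend to continue", 2),
        ("The term dates back to the 1990s", 3),
    ]

    def inventory(facts):
        """Group tagged facts by category for the tiered fact check."""
        groups = defaultdict(list)
        for text, category in facts:
            groups[category].append(text)
        return dict(groups)
    ```

    The point of the grouping is simply that later review passes can pull exactly the categories they need, rather than re-reading the whole piece each time.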

    Use a multi-level process for fact-checking the piece.

    Content teams should fact-check all Category One facts in every piece of content that runs through their process—this is a level-one fact check. If all the facts in that first set check out, the foundations of the content asset are sound. For everyday content like blog posts, social messages, and so on, organizations that want to do the bare minimum can stop there.

    If you find inconsistencies right out of the gate, however, the piece should move to the second level: a more intensive check of all Category One and Category Two facts. If more problems arise, it should either move to level three (see next paragraph) or be sent by the content quality police to the content lockup, depending on your internal decision criteria.

    Level three involves checking everything—all facts in all three categories. I recommend a level-three fact check on every pillar piece of content your team produces and making it optional for the rest. How intensive your process is depends on your industry and internal practices. Highly regulated industries, or those with high levels of market risk, should probably level-three fact-check everything.

    Each organization will set standards for how much to check and what happens to articles with core or incidental fact issues.
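    The escalation rules above can be sketched as a small routing function. This is a hypothetical model, not a tool the article describes; the return strings and the `escalate` policy knob (standing in for each team's internal decision criteria) are assumptions:

    ```python
    def next_step(failed_level: int, escalate: bool = True) -> str:
        """Map the outcome of a fact-check pass to the next action.

        failed_level: 0 if no inconsistencies were found, otherwise
        the level (1 or 2) at which problems surfaced. `escalate` is
        a stand-in for the team's internal decision criteria.
        """
        if failed_level == 0:
            return "publish"          # the checked facts all held up
        if failed_level == 1:
            return "run level two"    # check Category One and Two facts
        # Problems persisted at level two: check everything, or hold it.
        return "run level three" if escalate else "hold for review"
    ```

    For example, a blog post whose Category One facts all check out routes straight to "publish", while a piece that fails again at level two either gets the full level-three treatment or goes to the content lockup.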

    Define what you view as “reliable sources” and inventory them.

    Fact-checkers rely on sources to validate information: facts, statements, quotes, and so on. Content teams should define which sources are trustworthy and reputable in their industry, or for the subjects they cover.

    Common sense rules apply here. If you define a common business term, for instance, a published dictionary is a better source than your cousin, the English teacher. If you need data on consumer spending, the U.S. Federal Reserve is a better source than Amazon.

    Content teams should create and maintain a list of fact sources they deem reputable for their market.

    Your list of reliable sources might include media publications, databases, industry associations, journals, published books, etc. It should not include sources likely to contain AI-generated content in the near future. Wikipedia, for example, should never be used to validate a fact.

    Find the original source.

    Interesting anecdotes, research findings, truisms, quotes… they have a way of making the rounds of the content world. We’ve all done it—that is, heard a story that piqued our interest and repeated it, only to find out the person we heard it from got the details wrong. Exhibit A: Malcolm Gladwell and that “10,000 hours” theory.

    The best and most reliable way to avoid recycling inaccuracies is to go to the original source. For example, say you are looking for a statistic and find one that fits the bill in an article published in The Economist—but it’s from a research study conducted by a third party. Don’t just assume The Economist got it right. Go find the study itself and read it.

    In a similar vein, if you want to use a quote from an actual person, go find it. This might be in original footage when they said it (again, from a reliable source), the page from their book, a post on their blog, an article under their byline, etc.

    When in doubt, double-check.

    If a fact sounds off, too perfect for what you are trying to say, or from a borderline source, double-check it—especially if it is a core fact for an important piece of content. As humans, we can be fooled, but some of us also have good instincts. Trust yours. If a fact sounds wrong, it might be, and you don’t want it eroding your brand trust.

    Generative AI is here to stay, but that doesn’t give us the green light to shut our brains off and let it take the lead on the content we create with it. AI is helpful, but it still needs humans to guide and direct what’s produced.

    Stay informed! Subscribe to The Content Strategist for more insight on the latest news in digital transformation, content marketing strategy, and rising tech trends.

    The post Best Practices for Fact-Checking AI-Generated Content appeared first on Contently.
