The Media Minute 6.4.25

AI Trust, Human Oversight, And Publishing Responsibly With The Momentum

Last week, I wrote about AI’s growing role in the newsroom, paying particular attention to the team leaders who set the tone for how AI is used, as well as the staff applying those standards.

For publishers, that human oversight is a necessity, particularly because its absence can leave such damaging, lasting impressions.

A recent EY survey found that Americans are particularly lax when it comes to checking AI-generated content: only 24% say they review or edit the text, images, or translations that AI produces, compared to 31% worldwide.


Why Popular Food Substack-Based Newsletter Vittles Has Launched A Magazine

… They’d seen the upfront costs associated with print, and weren’t convinced the magazine could be anything more than an occasional, loss-making pet project. But as the project stayed on schedule and on budget, with no hidden costs, [Jonathan] Nunn decided they should commit.


AI In Content Creation 2025

AI is no longer optional. Across industries, AI has become foundational in content workflows. Over 80% of respondents use AI in some part of their creative process. Nearly 40% use it end-to-end, from ideation to final delivery.


Americans Are Among The Least Likely To Review Or Edit AI-Generated Output

A global survey from EY finds that while most people in the United States see artificial intelligence as useful, very few take the time to review or edit what it produces. According to EY’s “AI Sentiment Index 2025”, just 24 percent of U.S. respondents say they review AI-generated texts, images, or translations. That’s one of the lowest rates in the study, and well below the global average of 31 percent.
