When Chase recently announced it would roll out AI-powered copy across its marketing departments, a chill went down the spine of human copywriters everywhere. The global bank is using machine learning and natural language processing to generate ad copy aimed at highly targeted market segments.
But while UX and marketing writers have every reason to feel insecure in their corporate jobs, it shouldn’t be because of AI taking over. (If anything, it should be because most companies have a complete lack of loyalty or regard for workers’ rights—but that’s another post.)
Let’s break down exactly why we don’t need to scramble and duck as if the sky is falling…
A tool is only as powerful as the one who wields it
The ideation stage of any project is where it’s safest (and cheapest) to make mistakes. If you’re going to experiment with new technologies and/or incomplete ideas, that’s the time to do it. The fact that Chase is using AI in the generative phase suggests they actually support risk-taking in their writing team. That’s pretty cool.
It also means they still believe humans are necessary to review, shape, and approve content before it’s ready for the light of day. That’s a lesson Apple, Snapchat, and Twitter learned back in 2015 when they introduced human editors to curate their respective machine-powered content initiatives. Four years later, in today’s algorithm-happy social media landscape, we still see companies embrace the critical and as-yet-unreplicable role of human judgment. LinkedIn, for example, maintains a 65-person editorial team that curates more than 2 million posts, videos, and articles a day, according to the New York Times. Are they aided by machines? Of course. Do the humans make sure the machines are doing a good job? Absolutely.
Chase, as well, seems to understand there are potential risks in completely ceding to AI without heavy human filtering. Perhaps it’s learning from those doubling down on machine learning, who have done so at the public’s expense. Google fucked up machine learning so hard that its photo algorithm tagged Black folks as apes (which it still hasn’t properly addressed, natch). And Facebook—well, Facebook’s algorithm fetish has helped to literally destroy American democracy as we know it (reminder: not hyperbole).
Now, Chase may not be hosting the world’s content. But its decision to limit the role of AI to the early stages of copy production suggests that for all the press-release-waving about click-through rates, the financial giant isn’t quite ready to cede completely to the machines. Even its next-phase rollout of the technology will be limited to internal communications, according to Persado’s own press release.
How you measure success matters
This is classic corporate vanity decision-making, and it means very little on its own. Click-through rate isn’t in and of itself a performance indicator; it’s a metric. If these click-throughs converted more frequently to drive revenue or other meaningful goals, don’t you think we’d hear about it? If that were the case, the media frenzy over its AI adoption would offer more there there. There’s no way Persado’s PR team wouldn’t have jam-packed that press release with revenue-busting numbers if they had them.
And why don’t they? Because measuring revenue generation or other forms of genuine engagement is a lot more difficult and time-consuming than measuring CTR. We’re starting to see a shift toward more meaningful content engagement metrics, like scroll depth and conversion rate. But these require a bit more effort to develop, set up, and track. This is particularly true when trying to track behavior across channels and devices (e.g., a user who clicks a link in an email on their mobile phone but waits until they get home to make a purchase on their desktop computer).
This kind of thoughtful tracking also demands accountability, something not often found in the hallowed cubicles of corporate finance.
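To see concretely why a CTR headline alone flatters, consider a toy comparison. This is a minimal sketch with hypothetical numbers and a made-up `CampaignStats` class—not Chase’s or Persado’s actual data:

```python
from dataclasses import dataclass

@dataclass
class CampaignStats:
    impressions: int
    clicks: int
    purchases: int

    @property
    def ctr(self) -> float:
        # Click-through rate: clicks per impression
        return self.clicks / self.impressions

    @property
    def conversion_rate(self) -> float:
        # Share of clicks that actually led to revenue
        return self.purchases / self.clicks

# Hypothetical campaigns: the AI copy doubles clicks on the same audience...
human = CampaignStats(impressions=100_000, clicks=2_000, purchases=100)
ai = CampaignStats(impressions=100_000, clicks=4_000, purchases=100)

# ...but purchases stay flat, so the conversion rate is cut in half.
print(f"human: CTR {human.ctr:.1%}, conversion {human.conversion_rate:.1%}")
print(f"ai:    CTR {ai.ctr:.1%}, conversion {ai.conversion_rate:.1%}")
```

Double the CTR, identical revenue: the “winning” copy just attracted twice as many clicks that went nowhere. That’s the gap a press release built on CTR never has to explain.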
Good writing isn’t subjective
The claim that writing is subjective is not only absurd, it’s indicative of a much deeper problem. If a copywriter can’t rationally and objectively explain their word choices, their company has a hiring problem, not a human writer problem.
In fact, this skill is something I specifically wrote into job descriptions at GoPro, where I ran software UX content strategy. GoPro has since adapted versions of it for creative roles beyond writing. It really is a baseline requirement for any mid-level creative (and, it could be argued, for junior-level, too). Don’t just take my word for it, though. In his framework for design team levels, Peter Merholz lists “Developing skills for communicating rationale to team members” as an expected skill for associate-level designers (where associate is equivalent to junior).
So why do teams perpetuate the “writing is subjective” trope? In my experience, it’s a way to avoid the tough work of defining standards—and sticking to them. Standards and content governance processes give copy teams the ammunition to push back on age-old habits like the executive swoop-and-poop. You know: a senior team member dives in at the eleventh hour with unfounded, truly subjective opinions that force a copy team to rewrite, rework, and restrategize entire swaths of content.
It’s particularly disappointing to hear a vendor like Persado reinforce this nonsense. They’d be better served positioning themselves as allies and partners to the writers they’re undermining. Of course, writers aren’t approving the purchase orders.
The real threat of AI to content producers
Artificial intelligence isn’t going anywhere. The sooner content producers and content managers begin exploring the opportunities presented by AI, the more influence we’ll have over its integration into our workflows. This means becoming literate in machine learning, data analysis, and natural language processing.
As content producers, we have a responsibility to address these technologies head-on. Humans develop the algorithms and processing rules driving these experiments. Humans create and select the data sets used (or blindly fed into them, as the case may be). The only way to integrate these tools into our workflows responsibly, then, is to insert ourselves into that process. Google’s photo-tagging debacle and Microsoft’s racist-trained Tay bot barely scratch the surface of the harm these technologies can do.
Even so, artificial intelligence is not a simple matter of good vs. evil. Those who think so tend not to understand the underlying technology and science. AI in its many forms is already being used to work with content in some deeply interesting and important ways. Machines happen to be pretty great for classifying large batches of content, be it music, literature, or the news. Consider USA Today’s use of machine algorithms to analyze nearly one million bills from federal and state legislators that contained cut-and-paste language supplied by lobbyists and special interests.
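That bill analysis boils down to near-duplicate detection: flagging documents that share long runs of identical wording. Here’s a minimal sketch of the idea—my own illustration using word-shingle Jaccard similarity, not USA Today’s actual pipeline (the sample bill texts are invented):

```python
def shingles(text: str, n: int = 5) -> set:
    # Break text into overlapping word n-grams ("shingles")
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 5) -> float:
    # Jaccard similarity of the two shingle sets:
    # 1.0 means identical wording, 0.0 means no shared n-grams
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical model bill copied nearly verbatim into a state bill
model_bill = ("an act to require that any rule adopted by a state agency "
              "shall expire five years after adoption unless renewed by "
              "the legislature")
state_bill = ("an act to require that any rule adopted by a state agency "
              "shall expire five years after adoption unless renewed by "
              "the general assembly")
unrelated = ("a bill to fund the construction and maintenance of public "
             "libraries in rural counties across the state")

print(jaccard(model_bill, state_bill))  # high: cut-and-paste wording
print(jaccard(model_bill, unrelated))   # near zero: no shared phrasing
```

Real systems layer hashing and clustering on top to scale to a million bills, but the core signal is this simple: lobbyist boilerplate leaves long, matchable fingerprints.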
The threat AI poses to writers (and all of us who manage content in one way or another) is not obsolescence. No, the real threat of AI is one of missed opportunity. And worse: abdication of responsibility and accountability.