
How I Use Claude Code as a Marketer

Owen Steer · 19 min read

How to use Claude Code as a marketer?

By encoding what you already know about marketing into skills and context files, then letting those skills run the deep work (research, drafting, social monitoring) at speeds you couldn't match manually. The judgement calls, creative work, and strategic decisions stay with you. I run five of these skills at Fifty Five and Five, and one of them wrote this post.

People are starting to learn what AI can actually do when it’s used well. The signal is hard to miss. Searches for Claude Code, Anthropic’s terminal-based AI coding tool, grew roughly twentyfold in twelve months (DataForSEO via Google Ads), and Claude Code itself passed $2.5 billion in run-rate revenue in early 2026 (Yahoo Finance, citing Anthropic). Marketers are part of that wave, even though the surface still looks like a developer tool.

I use Claude Code as a marketer by encoding what I already know about ABM, content creation, and offsite engagement into skills and context files. The skills run the deep work (research, drafting, and monitoring) at speeds I could never match manually. I focus on the strategic decisions and creative judgement that still demand a human. This piece walks through how I do it across three areas: ABM account research, content creation (including this very post), and offsite social listening on LinkedIn and Reddit. It’s one perspective from a wider piece on how an AI marketing agency uses Claude Code across multiple roles.

Quick note on the on-ramp before we go further. Claude Code runs in a terminal, which can feel daunting if your day-to-day stack is browser tabs and SaaS dashboards. Claude Cowork is a friendlier interface that handles many of the same tasks, and it’s improving quickly. The honest qualification: anything involving APIs is still a bit more complicated on the less-technical interface. I’m Owen Steer at Fifty Five and Five. I spend most of my time at the intersection of marketing strategy and the AI tools that actually make strategy operational, and most recently I’ve been turning what used to be team-sized processes into Claude Code skills.

The Claude Code skills I actually use as a marketer

Here’s the cynical-but-true version. A large part of what a marketer like me has spent a career doing (keyword research, account research, content checklists, the whole stack) can be distilled into knowledge files. The manual processes can be turned into automated processes with the APIs and tools available to us. Keyword research used to mean logging into a tool and crunching numbers by hand. Now AI runs it. Of course there are still levels to expertise and tricks that distinguish people from AI. There are. But the time saved is massive, and the work that remains is the work that matters.

That’s where Claude Code skills come in. A skill is a markdown file that captures how an expert approaches a task: the steps, the observations, and the tips and tricks. Pair a skill with context files (about your offer, your brand, and your clients) and you get an AI that approaches your work the way you would, just faster. The expert distils. The AI executes. The expert reviews.

Anthropic’s own marketing teams have published their numbers on this. Their Influencer Marketing team uses scripts to free up over 100 hours a month, and Product Marketing saves 5 to 10 hours per launch brief by using skills (Anthropic blog). Across marketing teams using agentic tools like Claude Code, repetitive strategic analysis (SEO audits, PPC reviews, the stuff that used to eat a Monday morning) drops by roughly 75% (Anthropic).

Fifty Five and Five has been a content writing agency since 2014. Writers, senior writers, editors, and people like me handling SEO and keyword research. Claude Code didn’t replace any of that craft. It captured it.

In my experience, the best Claude Code skills for a marketer are the five I run most often, named by what they actually do rather than what we call them internally:

  • One that runs deep ABM account research, finding insights and signals across many accounts in hours rather than weeks per account.
  • One that drafts blog posts following our SEO and GEO checklist (GEO meaning content tuned for AI engines like ChatGPT to cite, not just for Google to rank). This post is the output of that one.
  • One that turns research and 1:1 interviews into author profiles, so AI content sounds like the actual author and not a content mill.
  • One that scans LinkedIn and Reddit every morning for relevant conversations and drafts responses in the author’s voice.
  • One that runs content through the publishing and optimisation checklist before it goes live, handling the technical metadata that used to take an editor a full afternoon.

Some of these run inside Claude Code. Others now run as scheduled tasks in Cowork. The rest of this piece goes deeper on three of them.

The five skills above replaced what would have been at least three full-time roles a few years ago. None of them replaced a role at Fifty Five and Five. The team got their time back instead.

Claude Code ABM: from weeks of account research to hours

Years ago I worked at an agency called Punch on a print company client called Oki. They wanted to sell colour printers into retail for in-store signage. We had a list of accounts, Tesco was one of them, and the maths worked on paper. Then we went into a Tesco store and asked the right person. Tesco had just signed a five-year deal for black and white printers the previous year. They were never going to buy. That account got binned immediately. I’ve used that example for years to explain why ABM is research before it’s anything else.

Claude Code ABM is the same idea, just without the bottleneck. I know what you’re thinking: ABM is just personalised email at scale and AI hype. But I’ve worked with ABM since 2019, and the thing people get backwards is this. The old way wasn’t slow; it was hard. Meaningful, insightful, contextual account research requires three things working together:

  1. A strong understanding of your offer (what you actually sell, where it fits, and what its real value is).
  2. A strong understanding of the account (industry, leadership, strategy, and the actual pain points, not the surface-level ones).
  3. The intelligence to find insights and signals that bring the two together.

That third bit is what makes ABM ABM, not just personalised email. It’s where most ABM falls down. ABM has always been 80% setup and 20% execution. The setup is where people quit.

A Claude Code ABM skill unlocks the first two layers. You encode your offer (what you sell, the value, and the fit) and your research methodology (what to look for, where to look, and what good looks like) into context files. The skill runs the deep work across many accounts in hours, not weeks. The bridging intelligence (spotting which insight actually matters and how to use it) still belongs to a human. Same brain, richer raw material.
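To make that concrete, a context file doesn't need to be anything fancy. Here's an illustrative sketch of an offer-plus-methodology file; the headings and wording are mine for illustration, not a prescribed format:

```markdown
# Context: Our offer

## What we sell
B2B content and ABM services for technology companies.

## Where it fits
Mid-market and enterprise marketing teams without in-house AI tooling.

## Real value
Research and drafting at a speed a small team cannot match manually.

## Research methodology
- Start with investor documents and annual reports.
- Pull out leadership priorities and multi-year strategic plans.
- Flag any plan item that overlaps with what we sell.
- Mark anything that rules an account out (existing multi-year contracts, for example).
```

The skill reads this alongside the account list, so every research run starts from the same definition of "good".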

A concrete example I lean on: investor documents and annual reports. Find the leadership, what they care about, and the multi-year strategic plans. If you can align what you sell with something on their three-year horizon, that isn’t a cold pitch any more, it’s a relevant offer. That kind of insight-mining is baked into my account research skill. It runs across a list of accounts overnight and gives me back a brief I can read with a coffee.

The proof of what’s possible when ABM research is actually done: at Northern Data Group we ran an AI-driven ABM campaign across 110 enterprise accounts. The result was a 39% email open rate (roughly 2x industry average), 200% ROI from one opportunity alone, and qualified meetings at seven targeted accounts (Northern Data Group case study). That was already strong with a focused, human-led approach. With the research bottleneck broken, the same quality is reachable across many more accounts.

Worth being clear about what humans still own. The strategic frame (defining the offer, the ICP, and the target accounts) is best served by a workshop with the revenue team. The creative concept and outreach plan still benefit from human craft. And spotting which research output is actually valuable is its own skill. The AI gives me richer raw material; the judgement is still mine.

Claude Code blog writing: from topic to publish in an hour

This section is the one I’m writing about while you’re reading the output. Bit recursive, sorry. The skill that produced this post follows ten steps, all running inside a Claude Code conversation with my input where it matters:

  1. Pick a topic. Usually from a recent client conversation or a persona we’re building for.
  2. Develop a customer question someone might actually search.
  3. Generate 5 to 10 question variants (how the same question gets asked different ways).
  4. Run query fan-out across OpenAI and Gemini APIs to capture the actual sub-queries AI search engines fire when answering the question.
  5. Run keyword research via DataForSEO or Ahrefs API, factoring in the sub-queries from step 4.
  6. Find synergy keywords. The ones that appear in both the keyword data and the AI search sub-queries.
  7. Pick the primary keyword (it becomes the H1; that’s where this post’s H1 came from) and five secondary keywords. Each secondary keyword becomes an H2, which is to say a section heading in the post. Look up at the H2s on this page; each one is anchored to a keyword.
  8. The skill takes the H1 + H2 skeleton and runs external research for stats, examples, and supporting points.
  9. The skill drafts a formal synopsis.
  10. Human takes the wheel. I interrogate the synopsis section by section. Is the angle right, does it sound like me, what would I add from lived experience, and what’s missing? This step alone often takes longer than the previous nine combined, and it’s the most important.
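Step 6 is simpler than it sounds: a synergy keyword is just a term that appears in both datasets. A minimal Python sketch of the idea, with made-up data standing in for the real keyword-tool and fan-out API output:

```python
# Hypothetical sketch of step 6: "synergy keywords" are terms that show up
# both in the keyword-tool data and in the sub-queries AI engines fan out.
# All data below is illustrative, not real API output.

def find_synergy_keywords(keyword_data, ai_subqueries):
    """Return keywords present in both sources, ordered by search volume."""
    subquery_terms = {q.lower().strip() for q in ai_subqueries}
    synergy = [
        (kw, volume)
        for kw, volume in keyword_data.items()
        if kw.lower() in subquery_terms
    ]
    # Highest-volume overlap becomes the H1 candidate; the next five become H2s.
    return sorted(synergy, key=lambda pair: pair[1], reverse=True)

keyword_data = {  # keyword -> monthly search volume (illustrative)
    "claude code marketing": 480,
    "claude code abm": 90,
    "ai content checklist": 320,
}
ai_subqueries = ["Claude Code marketing", "claude code abm", "what is geo seo"]

ranked = find_synergy_keywords(keyword_data, ai_subqueries)
print(ranked)  # highest-volume overlap first
```

The real skill does this across hundreds of terms, but the intersection-then-rank shape is the whole trick.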

The 20x growth stat in the intro came from external research the skill ran (not the keyword API itself), a fair callout because the meta moment of this post only works if the details are right. Anthropic’s Customer Marketing team has reported drafting case studies in 30 minutes with a similar workflow, down from 2.5 hours by hand (Anthropic blog).

The skill doesn’t stop at drafting. After the human review, the writing pass uses the synopsis, the author profile (more on those in a second), and our internal SEO and GEO checklist. The result enters the publishing flow, which is itself a Claude Code skill that runs the optimisation checklist over every blog before it goes live. Headings, meta descriptions, structured data, internal linking, image alt text, and the lot. SEO veterans already know this is checklist-shaped work. Tools like Yoast literally tell you what they’re looking for. The skill just runs the checklist consistently.

What is “Claude Code marketing automation”?

Claude Code marketing automation is the same pattern applied to the publishing layer of content. After the blog writing skill produces an optimised draft, a second skill takes that draft and runs it through the SEO and GEO checklist (the one our team built up over years of doing this work manually). It generates the technical metadata: meta descriptions, structured data, image alt text, and schema markup. It places relevant internal links to other content already on the site. Then it pushes a Hugo build to staging for review. The whole pipeline from “topic” to “published draft on staging” runs inside Claude Code with two human review gates: at the synopsis and at the final draft. What used to be a content team’s full week now runs in an afternoon.
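The metadata step is mechanical enough to sketch. Here's an illustrative Python fragment that assembles Hugo front matter; the field names and template structure are my assumptions for the example, not our actual pipeline:

```python
# Illustrative sketch of the metadata step: the publishing skill fills in
# technical fields (title, meta description, image alt text) as Hugo-style
# YAML front matter. Field names here are assumptions, not a real template.

def build_front_matter(title, meta_description, alt_texts):
    """Assemble a YAML front matter block for a Hugo content file."""
    lines = [
        "---",
        f'title: "{title}"',
        f'description: "{meta_description}"',
        "images:",
    ]
    for path, alt in alt_texts.items():
        lines.append(f'  - src: "{path}"')
        lines.append(f'    alt: "{alt}"')
    lines.append("---")
    return "\n".join(lines)

fm = build_front_matter(
    "How I Use Claude Code as a Marketer",
    "How one marketer encodes ABM, content, and social listening into skills.",
    {"/images/hero.png": "Torn paper strips arranged in overlapping bands"},
)
print(fm)
```

The point isn't the code; it's that every field an editor used to fill in by hand is now generated, then checked at the human review gate.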

Now the bit I care about most. The world is using AI lazily and it shows. You can spot it instantly. Bland, filler-crammed, no edges. That’s exactly why so much anti-AI sentiment exists, and the sentiment isn’t entirely wrong. AI is, separately, a staggeringly cool piece of technology. The problem is over-reliance, not the tool. Paired with clever people, AI can do great things. Just because something is coherent doesn’t mean it’s good. Giving a language model your brand strategy is like handing a parrot a TED Talk; it’ll repeat the words back to you with confidence, but the meaning’s gone.

My answer is two editorial mechanisms baked into the skill. The first is author profiles. A separate skill that uses research, internal documents, and 1:1 interviews to capture how an author actually speaks, their career, the problems they solve, and the things they would never say. The author profile is what makes AI content sound like a person rather than a model. The second is editorial review steps at two stages: at synopsis (this is exactly the step that just happened to this post) and at final review. The job at both stages is to find wooden, AI-style explanations and replace them with personable, lived experience. AI can write words. The hard bit is whether AI can write the right words. Iteration and feedback loops are how that happens.

We built a version of this whole content workflow for Quisitive. Three publish-ready blogs of around 2,500 to 3,500 words each, in three different SMEs’ voices, structured for both traditional search and AI search citation. Even Google has been clear: they don’t care how content is written so long as it’s good (Google Search Central). The rule is quality, not provenance.

Want help building something like this?

If you're trying to encode your marketing process into a Claude Code skill, our team has done it across content, ABM, and offsite. Get in touch and we'll walk you through what's worked.

Get in touch

Claude Code social listening on LinkedIn and Reddit

Offsite engagement matters more now than it did two years ago because AI search engines lean heavily on places like LinkedIn and Reddit when they answer questions. Showing up in those conversations has compound value. Three benefits stacking together rather than three separate goals: part lead generation (rare but real intent opportunities), part brand awareness (your audience already lives there), and part LLM citations (the answers AI gives tomorrow are partially shaped by the conversations you join today).

I think about offsite the same way I think about ABM. Entire roles at enterprise companies are dedicated to monitoring these platforms, finding relevant conversations, and crafting responses. A small team can’t match that headcount manually. With a Claude Code skill, then a scheduled Cowork task, you can.

The set-up looks like this. A skill encodes what I’d do manually if I had infinite time: the topic areas to watch, the personas to target, what good engagement looks like, and what to ignore. The skill runs every morning. Two reports waiting before I open my laptop, one for LinkedIn and one for Reddit. Each report lists relevant conversations and drafts responses using the author profile so the voice is mine. I review, edit, post. Or kill the response if it wouldn’t actually add value.
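The filtering logic inside that skill is conceptually simple. Here's a minimal Python sketch of the triage step, with illustrative topics, fields, and thresholds:

```python
# A minimal sketch of the triage step: score scraped conversations against
# the topic areas encoded in the skill, and keep only the relevant ones for
# the morning report. Topics, fields, and threshold are illustrative.

RELEVANT_TOPICS = {"abm", "claude code", "content marketing"}

def triage(conversations, min_score=1):
    """Return relevant conversations, highest-scoring first."""
    report = []
    for convo in conversations:
        text = (convo["title"] + " " + convo["body"]).lower()
        score = sum(topic in text for topic in RELEVANT_TOPICS)
        if score >= min_score:
            report.append({**convo, "score": score})
    # Highest-scoring conversations first, so review starts with the best fits.
    return sorted(report, key=lambda c: c["score"], reverse=True)

conversations = [
    {"title": "Anyone using Claude Code for ABM?", "body": "Looking for tips", "platform": "reddit"},
    {"title": "Best pizza in Leeds", "body": "Serious question", "platform": "reddit"},
]
print(triage(conversations))  # only the relevant thread survives
```

The drafting step then runs over whatever survives triage, using the author profile to write the response.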

What does Claude Code on Reddit look like in practice?

Reddit behaves differently from LinkedIn. Anonymous, topic-led communities, sceptical readers who can smell a salesperson at fifty paces. The skill knows that. It scans a list of subreddits relevant to the personas we serve, identifies threads where someone is actually asking a question my SME could answer, and drafts a response that reads as a genuine contribution and not a pitch. The response includes specifics, names tools (sometimes including ours, sometimes including competitors when they’re a genuinely better fit), and only earns the click-through if it’s useful. I review every response before it posts, because Reddit will downvote even the best-intentioned brand voice into oblivion if the contribution isn’t real.

We built a version of this process for Avalara, scanning Reddit, LinkedIn, Quora, and Medium for relevant conversations and generating responses from key people in their business (Avalara). The skill finds the right topics for the right author, finds the conversation, and drafts the response. The author then reviews and posts.

The honest qualification: intent opportunities are rare. Most days, no one is in-market in a thread I can find. The compound value is the real prize. The brand awareness in the right communities, and the LLM citation flywheel that closes when those conversations get indexed and cited later. Offsite engagement breeds citations breeds inbound visibility.

The author profile shows up here again, this time driving response generation. It’s the connective tissue across content and offsite, the same skill in different applications. Where a content draft uses the profile to sound like the author, a Reddit response uses it to sound like the author when answering a question. Same voice, different surface.

This is still very much a focus of mine and something I want to crack properly. The hard bit isn’t whether AI can write the words. It’s whether AI can write the right words. That requires iteration and feedback loops on the author profile until the responses sound credibly like the person they belong to. We’re not far off being able to automate the posting itself, once the profile’s good enough that I trust the response without rewriting it. Not there yet.


Building your own: a Claude Code skills tutorial

The best Claude Code skills marketers are building share a pattern. They start with a process that’s already painful, repeated, and well-understood. Then they get encoded. Not all at once, not perfectly, but as a working draft that gets sharper with use. Anthropic’s Product Marketing team reported saving 5 to 10 hours per launch brief by encoding the process into a skill (Anthropic blog). Same pattern, different inputs.

Here’s the short version of a Claude Code skills tutorial that’s worked for me.

The build flow I’d recommend:

  1. Pick one repeated, painful task you do regularly. Content briefing, account research, social monitoring, or whatever’s eating your week.
  2. Describe the process you’d run manually, step by step, in plain English.
  3. Capture the inputs (where the source material comes from) and the outputs (what a finished version looks like).
  4. Write the steps as a skill file. It’s just markdown.
  5. Run it on a real example. Watch where it gets things wrong.
  6. Iterate. Add examples, clarify the steps, and define the outputs more precisely. Use the skill in production.
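For step 4, the skill file itself can be as plain as this. An illustrative shape, not a prescribed format:

```markdown
# Skill: Blog synopsis drafter

## When to use
When a topic and primary keyword have been agreed and a synopsis is needed.

## Inputs
- Topic and primary keyword
- Author profile file
- SEO and GEO checklist

## Steps
1. Read the author profile and checklist before drafting anything.
2. Draft a one-paragraph angle for each H2.
3. Flag any claim that needs a source rather than inventing one.

## Output
A synopsis with one section per H2, ready for human review.
```

If that looks like a well-documented process rather than code, that's the point: the skill is the process, written down.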

Three things I’d tell anyone starting:

  • Don’t be scared to embrace the terminal. It’s just like any other chat interface, and you get to pretend to be coding like in the movies.
  • Surround yourself with at least one technical person who can help bring your skills to life through APIs. The marketing-side work is encoded in the skill; the API integration is where the technical layer matters.
  • Get in the habit of meticulously writing down the steps of how you work, so you can skillify them. The skill file is only as good as the documented process behind it.

That third point comes from a lesson I learned the hard way. I built a tool in Claude Code that I thought was working. The outputs looked reasonable, the responses were coherent. Turned out the tool was hallucinating because it didn’t actually have API access to do the tasks I’d assumed it could. Someone had to point it out to me. Crashed and burned. That was a proper wake-up call. I’m a marketer, not a developer, and I’d made assumptions about what the technology could do without really understanding the technical layer underneath.

The lesson is broader than that one tool. If you’re building on AI, you need to understand the layer underneath, or surround yourself with someone who does. That’s why the second piece of advice above matters more than it sounds.

The skills I run today started as manual processes. The author profile builder began as a list of interview questions I’d email SMEs. The offsite engagement skill began as a spreadsheet of subreddits and LinkedIn searches. The blog skill (the one writing this) began as a Google Doc template. Each one became a skill once the manual version had been run enough times to know what good looked like. That’s the order. Manual first, encoded second.

Where this leaves us

I use Claude Code as a marketer by encoding the parts of my expertise that can be encoded (research methodologies, writing checklists, and social listening processes) into skills and context files. The skills run that work across ABM, content, and offsite engagement at speeds a small team couldn’t match manually. The judgement, creative, and strategic decisions stay with me, where they belong.

The structure of this post is itself proof of the loop. The H1 came from a primary keyword the skill picked. Each H2 above is anchored to a secondary keyword. The stats came from external research the skill ran. The synopsis got reviewed by a human, twice. That’s not theory, it’s the workflow that produced what you’ve just read. The next person who searches an AI engine for “how to use Claude Code as a marketer” might get an answer extracted from this conclusion. That’s the loop closing.

Three workflows running in parallel. ABM (deep account research at scale), content (creation, author profiles, and the SEO and GEO checklist), and offsite (LinkedIn, Reddit, and the Cowork shift). The author profile is the connective tissue. Same skill, different applications. Humans still own strategy, creative, and the judgement about what’s worth saying. The lane is still wide open right now. Early movers compound. The limitations that used to hold this back are being lifted, and I can see it happening in the work I do every day.

If you want to see how this plays out across the rest of an agency, my colleagues Chris (partner marketing) and Fergus (design) cover their views in the wider piece on running a marketing agency on Claude Code. If you’re trying to use Claude Code as a marketer and want a hand building something like this, drop me a line on LinkedIn.

