Late to the AI-Party – Part 4: More than code

Welcome back to my ongoing journey of discovering AI capabilities that everyone else figured out months ago. If you’ve been following along, you know I’ve covered GitHub Copilot’s custom instructions (Part 1), autonomous coding agents like OpenCode (Part 2), and why you shouldn’t use a sledgehammer to crack a nut when it comes to model selection (Part 3).

Today, I want to talk about something that took me quite some time to realize: these AI coding tools aren’t just for coding.

Disclaimer: I changed my tooling! In my little bubble – the “AI Tooling Chapter” at work and the folks I follow online – Claude Code seems to be the tool everyone loves and uses right now. So naturally, after my experiments with GitHub Copilot and OpenCode, I decided to give Claude Code a try. That’s why I’m talking about Claude Code in this post. Not because it’s the better fit. Just because I’ve been using Claude lately. But I’m sure you can do most of it (if not all of it) with your AI agent of choice.

The Obvious Thing I Completely Missed

Here’s what should have been obvious to me from day one: these autonomous AI agents don’t just generate code. They run commands. In your terminal. They execute builds. They run tests. They analyze output. They do whatever you can do in a terminal – and with things like MCP servers, they can do even more.

I know, I know. If you’ve been using these tools for a while, you’re probably rolling your eyes right now. “Yes, Lars, that’s literally what the tool shows you when you use it. It displays every command it runs.”

And you’re right. It’s right there on the screen. But somehow it took me a while to really understand what that meant for the range of possible use cases.

It’s not limited to dotnet build or dotnet test. It can run basically anything you can run in a terminal. And once that clicked, I started seeing applications way beyond pure coding exercises.

Monthly Reporting Ritual

Let me tell you about my monthly reporting nightmare ritual.

As a senior engineering lead, I have to report on a bunch of KPIs and key results to my boss and my boss’s boss. The data gathering itself isn’t complicated – I know where to look, which tables to create, which graphs to screenshot from our dashboards. But it’s tedious. It’s repetitive. And it takes time I’d rather spend on literally anything else. Sure, in theory I could delegate it to one of the people who report to me, but I’m sure their time is better spent on topics that actually move the needle on our KPIs.

So, who should be doing this important but boring work? I’m sure we both know the answer by now. 😄 Why not let Claude (or any other AI agent) do it?

Building a Reporting Assistant

Even though I’m late to the AI-party, I know that proper prompting is a key factor for good results. And of course, I don’t want to rewrite all the information about data sources, format requirements and everything else every month. So I set up a repository with three files to make this automation work consistently:

First, a template file that defines the structure of the report. This keeps the format consistent month over month – same sections, same headers, same table structures. My boss and my boss’s boss don’t want surprises in how the information is presented.

Second, a reference report – a complete example of a report I consider high quality. This demonstrates the desired tone, level of detail, and how to describe trends. Think of it as showing the AI what “good” looks like rather than just telling it.

Third, the requirements file – a detailed markdown document that explains everything your AI agent (in my case Claude) needs to know about the reporting task (see the sketch after this list):

  • Which KPIs to track and where to find them
  • The structure and format of the report (pointing to the template file)
  • Guidelines for analyzing trends in the data
  • A request to provide a first draft of the accompanying narrative
  • Specific requirements for the accompanying narrative
    • Example: “If a KPI is on track, don’t spend more than one or two sentences on it. Focus on significant changes (positive and negative), problems or other insights that can be gathered from the data.”
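To make that a bit more concrete, here’s a trimmed-down sketch of what such a requirements file could look like. The KPIs, data sources and file names are made up for illustration – yours would point at your real dashboards and templates:

    # Monthly KPI Report – Requirements

    ## Data sources
    - Deployment frequency: pull the last 12 months from the release API
    - Open bug count: export from the quality dashboard, filter "team = platform"

    ## Format
    - Follow template.md exactly: same sections, same headers, same table structures
    - Use report-example.md as the reference for tone and level of detail

    ## Narrative
    - Provide a first draft of the accompanying narrative
    - If a KPI is on track, don't spend more than one or two sentences on it
    - Focus on significant changes (positive and negative), problems or other insights

With these three files in place, the monthly routine boils down to opening the repository and asking Claude to generate the report for the requested month.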

And let me tell you: it worked like a charm.

Is it perfect? No. I’m still checking that it got the data in the tables right – I’m not going to blindly trust an AI with numbers that go to my boss’s boss. But the time savings are real.

The AI handles:

  • Extracting data from APIs into tables
  • Spotting trends and anomalies
  • Writing a first draft of the narrative

I handle:

  • Verifying the numbers are correct
  • Adjusting the tone and emphasis if needed
  • Adding context the AI couldn’t know
  • Making sure nothing sensitive gets included

It’s exactly the kind of task you’d give to a competent intern: “Here’s what I need, here’s where to find it, follow this template, and draft something for me to review.”

Except this intern never sleeps, never complains about boring work, and doesn’t need coffee breaks.

Of course, before you get too excited and hand over your entire job to an AI: You are still responsible for the output. I verify every number, check that screenshots are from the right time period, review the analysis for accuracy, add context the AI couldn’t know, and make sure nothing confidential gets included. The AI does the grunt work. I do the validation, refinement, and final judgment. That’s exactly how it should be.

What’s next?

1. Graphs

In this lovely report, I also add graphs of the tracked KPIs to provide a visual representation of the trajectory over the last 12 months. It’s just screenshots of my KPI dashboard with specific filters applied. Right now, I still do this manually, but I’m sure I can use a Chrome MCP server to automate that part as well. That’s definitely on my TODO list.
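I haven’t wired this up yet, so treat the following as a sketch rather than a tested setup. Claude Code can pick up MCP servers from a .mcp.json file in the project root, and something like Google’s chrome-devtools-mcp package should give the agent a browser it can drive to open the dashboard, apply the filters and capture the screenshots:

    {
      "mcpServers": {
        "chrome-devtools": {
          "command": "npx",
          "args": ["chrome-devtools-mcp@latest"]
        }
      }
    }

Once registered, the screenshot step would become just another instruction in the requirements file.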

2. This Could Be a Skill!

After setting this up and using it a few times, I recently learned about something called Skills in Claude Code.

A Skill is essentially a reusable, invokable automation that you can trigger with a simple command – like /generate-report. Instead of opening my reporting repository and telling Claude “hey, look at the files in this repository and do that reporting thing again for the last month”, you package the entire workflow (including templates, examples and general documentation) into a skill that can be called anytime.

The way Skills work is pretty straightforward: you create a markdown file (notice a pattern here?) that defines what the skill does, what inputs it needs, what other resources (like templates) should be used, and what steps to follow. Claude Code reads this file and makes the skill available as a command.
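For my reporting case, I imagine it ending up somewhere like .claude/skills/generate-report/SKILL.md. The sketch below follows the format I understand Claude Code to expect – YAML frontmatter with a name and a description, followed by the instructions – with the file names taken from my repository setup above:

    ---
    name: generate-report
    description: Generate the monthly KPI report from the templates and requirements in this repository. Use when asked to draft the monthly report.
    ---

    1. Read requirements.md for the data sources and narrative guidelines.
    2. Gather the numbers for the requested month and fill in template.md.
    3. Match the tone and level of detail of the reference report.
    4. Save the draft as a new markdown file and list anything that needs manual verification.

The description is worth some care: as far as I understand, it’s what the agent uses to decide whether the skill is relevant to the current request.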

My reporting automation is basically a perfect use case for a Skill:

  • It’s a repeatable task I do monthly
  • The requirements are well-documented
  • The inputs are predictable (which month, which dashboards)
  • The output format is consistent

Am I going to convert my reporting automation into a proper Skill? Probably. Eventually.

But it’s exciting to realize that what started as “let me try automating this boring task” has evolved into something that could be formalized, shared, and reused. If you’ve built something similar – a workflow you’ve documented well enough for an AI to follow – you might want to look into Skills too.

Closing Thoughts

I’ve spent three blog posts now talking about AI coding tools. But as this post hopefully illustrates, calling them “coding tools” is actually limiting.

They’re automation tools that happen to be really good at code. But the underlying capability – following instructions, running commands, analyzing output, producing artifacts – applies to way more than just software development.

Just remember: the AI is your intern, not your replacement. It does the tedious stuff. You do the thinking, validation, and final decisions.

