In my previous post, I walked through how I automate reporting processes using Python and Power BI. What I didn't mention is how the code itself gets written.
For the past few months, I've been using Claude Code, Anthropic's AI coding tool, as a core part of my development workflow. Not as a replacement for thinking, but as an accelerator that lets me deliver higher-quality work in less time.
This post is about how that works in practice: what Claude Code is, how I use it, and what I've learned about using it well.
What is Claude Code?
Claude Code is a tool that gives you an AI assistant directly in your preferred environment, whether that's an IDE like Visual Studio Code, a command-line tool inside your terminal, or Claude's own desktop application. The process of working with Claude is the same in every environment: you describe what you want to build, and it writes code, runs commands, edits files, and iterates, all within your project. It's not autocomplete. It's closer to pair programming with a very fast, very patient colleague.
What makes it different from pasting code into a chat window is that Claude Code has access to your actual project. It can read your files, understand your folder structure, run your scripts, see the errors, and fix them, all in one continuous workflow.
Why this matters for the work I do
Most of the projects I take on as a freelancer follow a similar shape: a client has a manual process that's eating up time, and they need someone to automate it. The deliverable is usually a data pipeline, a reporting integration, or some combination of both.
These projects aren't about writing the cleverest algorithm. They're about understanding the business problem, designing a clean solution, and implementing it reliably. Claude Code fits naturally into the implementation phase — the part where the solution design gets turned into working code.
A script that might take me an hour to write, test, and debug manually can often be done in fifteen minutes with Claude Code. That time saving compounds across a project, and it means I can spend more of my energy on the parts that actually require human judgment: understanding what the client needs, designing the right approach, and making sure the solution holds up in the real world.
The part most people get wrong
There's a common misconception that using AI for coding is as simple as typing "build me a reporting pipeline" and watching it happen. It isn't. The quality of what you get out depends entirely on the quality of what you put in.
This is where experience matters. Knowing how to use Claude Code well is itself a skill.
Here's what I've learned makes the biggest difference:
Context is everything
Claude Code can read your project files, but it doesn't automatically know what matters. The more clearly you structure your project and communicate your intent, the better the output.
In practice, this means I keep a CONTEXT.md file in the root of most projects. It describes the project's purpose, the key design decisions, the data sources involved, and any constraints. When I start a Claude Code session, it can read this file and immediately understand the landscape.
For example, a CONTEXT.md for a reporting automation project might look like this:
# Project: Weekly Sales Reporting Pipeline
## Purpose
Automate the weekly sales report for [client]. Replaces a manual
Excel-based process that takes ~3 hours per week.
## Data sources
- ERP export (xlsx, column names vary slightly between exports)
- CRM export (csv, stable format)
- Budget file (xlsx, updated quarterly)
## Key decisions
- Column mapping defined in config.py to handle naming variations
- Pipeline writes to SQL Server staging table
- Power BI connects to staging table via DirectQuery
## Constraints
- Must run on Windows (client environment)
- No internet access on the server: all dependencies pre-installed
- Client uses Python 3.10
This isn't just documentation for my own benefit. It's instructions that make every interaction with Claude Code more productive. Instead of explaining the same background repeatedly, I give it context once and it carries that understanding through the entire session.
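The CONTEXT.md above mentions a column mapping in config.py. As an illustration, here's a minimal sketch of what such a file might contain; the specific column names and variants are hypothetical, not taken from a real client project.

```python
# Hypothetical config.py: maps the header variants seen in raw ERP
# exports onto the canonical names the pipeline uses downstream.
COLUMN_MAPPING = {
    "Order Date": "date",
    "OrderDate": "date",
    "Qty": "quantity",
    "Quantity": "quantity",
    "Unit Price": "unit_price",
    "UnitPrice": "unit_price",
}

# Columns every export must provide after normalisation.
REQUIRED_COLUMNS = {"date", "quantity", "unit_price"}


def normalise_columns(columns):
    """Translate raw export headers to canonical names,
    leaving unknown headers untouched."""
    return [COLUMN_MAPPING.get(c.strip(), c.strip()) for c in columns]
```

Because the mapping lives in one file, a new naming variant from the ERP means one added dictionary entry, not a change to the pipeline logic.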
Write good prompts — they're instructions, not wishes
The way you ask Claude Code to do something matters a lot. Vague requests produce vague results. Specific, structured requests produce code you can actually use.
Compare these two prompts:
- "Write a script to clean the data files I have in the directory 'exports'. They don't have the same columns, so you know how to alter them."
- "Write a function that reads all xlsx files from ./data/exports/, normalises column names using the mapping in config.py, validates that the required columns (date, quantity, unit_price) are present, and skips files with missing columns while logging a warning. Use pandas. Include error handling so one broken file doesn't stop the pipeline."
The second prompt takes thirty seconds longer to write and saves twenty minutes of back-and-forth. I've found that the investment in writing clear, detailed prompts pays for itself almost immediately.
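To make the difference concrete, here's a stdlib-only sketch of the validation behaviour the second prompt specifies: check for required columns, skip broken files with a warning rather than aborting. The names are illustrative; the real version Claude Code produces would sit alongside the pandas reading logic.

```python
import logging

# Required columns, as specified in the prompt.
REQUIRED_COLUMNS = {"date", "quantity", "unit_price"}


def validate_columns(filename, columns):
    """Return True if all required columns are present after
    normalisation; otherwise log a warning and return False so the
    caller can skip this file instead of stopping the pipeline."""
    missing = REQUIRED_COLUMNS - set(columns)
    if missing:
        logging.warning("Skipping %s: missing columns %s",
                        filename, sorted(missing))
        return False
    return True
```

Note how every behaviour in the code traces back to a phrase in the prompt: the required columns, the skip-and-log behaviour, the don't-stop-the-pipeline requirement. That traceability is exactly what the vague first prompt can't give you.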
Don't talk to the AI agent the way you talk to coworkers. These agents don't share our human experience or unspoken context. I've seen many people treat the agent as if it already knows their situation, ending prompts with "as you know". It doesn't know anything you haven't told it clearly.
Use markdown files as building blocks
Beyond CONTEXT.md, I often create small markdown specification files for specific components of a project. These serve as both documentation and reusable instructions.
For instance, I might have a specs/pipeline.md that describes the exact steps the data pipeline should follow, or a specs/validation-rules.md that lists every data quality check the pipeline needs to perform.
When I ask Claude Code to build or modify something, I can point it to the relevant spec: "Implement the validation checks described in specs/validation-rules.md." This keeps the instructions consistent and means I'm not rewriting the same requirements from memory each time.
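For illustration, a specs/validation-rules.md might look something like this (the rules here are invented examples, not from a real project):

```markdown
# Validation rules

- `date` must parse as ISO 8601; reject rows outside the reporting year
- `quantity` must be a positive integer
- `unit_price` must be a non-negative decimal with at most 2 places
- Duplicate (order_id, line_no) pairs are dropped, keeping the first
```

A file like this doubles as the acceptance checklist when reviewing what the tool produced.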
Skills and reusable patterns
Over time, I've built up a small library of patterns: things like how I structure configuration files, how I handle logging, or how I set up database connections. Claude Code can reference these when building new projects, which means each project benefits from what I've learned on previous ones.
This creates a compounding effect. The more projects I complete, the better my library of patterns becomes, and the faster I can deliver the next one. Don't worry if you don't have personal skill files yet: open-source skill files are easy to find online. One valuable resource is https://www.skills.sh. Have a look!
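As one example of what a reusable pattern can look like, here's a sketch of a logging setup I might keep in such a library; the function name and format string are my own conventions, not anything prescribed by Claude Code.

```python
import logging
import sys


def setup_logging(name, level=logging.INFO):
    """Create a logger that writes timestamped lines to stderr,
    so every pipeline script in every project logs the same way."""
    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s: %(message)s"))
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger
```

Pointing Claude Code at a pattern file like this means new scripts come back already matching the house style, instead of each one inventing its own logging scheme.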
What I don't use Claude for
It's worth being honest about where Claude Code isn't the right tool.
I don't use it to make design decisions. Deciding what to build, how data should flow, or where the boundaries of a system should be: that's work that requires understanding the client's business, their constraints, and their goals. That's my job; I'm the orchestrator. I do use Claude Code to verify my logic and reasoning before committing to an approach that might not be future-proof or complete.
I also don't blindly trust the output. Every piece of code Claude Code writes gets reviewed, tested, and validated against the actual data. AI-generated code can be subtly wrong in ways that aren't immediately obvious, especially around edge cases.
The tool is fast, but speed without verification is just faster mistakes. The agent makes assumptions, just as we do on projects. Those assumptions are sometimes correct, but often they're not. A good context file and the right prompt eliminate most of them.
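Verification doesn't have to be elaborate. A minimal sketch of the kind of sanity check I mean, using hypothetical row dictionaries: before output reaches the client, row counts and totals must reconcile with the source.

```python
def check_totals(source_rows, output_rows, tolerance=0.01):
    """Fail loudly if the pipeline output doesn't reconcile with
    the source data: same row count, same summed amount."""
    if len(source_rows) != len(output_rows):
        raise AssertionError(
            f"Row count mismatch: {len(source_rows)} vs {len(output_rows)}")
    src_total = sum(r["amount"] for r in source_rows)
    out_total = sum(r["amount"] for r in output_rows)
    if abs(src_total - out_total) > tolerance:
        raise AssertionError(
            f"Total mismatch: {src_total} vs {out_total}")
    return True
```

A check like this catches the "subtly wrong" category: code that runs cleanly but silently drops or double-counts rows.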
The bigger picture
The way I see it, tools like Claude Code don't change what freelancers need to be good at, but they change how fast they can deliver on it.
Understanding a client's problem, designing the right solution, knowing what questions to ask, managing the project effectively: none of that goes away. What changes is the implementation phase. The gap between "I know what to build" and "it's built, tested, and working" gets significantly shorter.
For my clients, this means faster delivery and more time spent on the parts of the project that matter most. For me, it means I can take on more complex work without proportionally increasing the hours involved.
Getting started
If you're interested in trying Claude Code yourself, here's what I'd suggest:
Start with a small, well-defined task on a real project. Something like: "Read this CSV, clean the date column, and save it to a new file." See how the tool handles it, and pay attention to how your prompt affects the result.
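For reference, here's roughly what you'd expect back from that starter task: a short stdlib-only sketch (file names and the input date format are assumptions for the example).

```python
import csv
from datetime import datetime


def clean_dates(in_path, out_path, date_format="%d/%m/%Y"):
    """Copy a CSV, rewriting its "date" column to ISO YYYY-MM-DD."""
    with open(in_path, newline="") as src, \
         open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            row["date"] = datetime.strptime(
                row["date"], date_format).strftime("%Y-%m-%d")
            writer.writerow(row)
```

If the tool's first attempt looks like this but your prompt never mentioned the input date format, that's the lesson: it guessed, and your next prompt should be more specific.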
From there, gradually build up your context files and patterns. You'll notice quickly that the better your project is structured and documented, the more useful Claude Code becomes.
The tool is powerful, but the real skill is knowing how to direct it. That comes with practice, and it's a skill worth investing in.