AI-Assisted Development Workflows
AI coding assistants have moved beyond autocomplete, so I started experimenting with structured AI workflows: not just asking a model to write code, but defining reusable patterns that encode a team’s conventions and decision-making process.
The goal isn’t to replace engineers. It’s to reduce the cognitive overhead of repetitive decisions so I can focus on the problems that actually require human judgment.
The Problem with Ad-Hoc AI Usage
When engineers use AI assistants without structure, the output is inconsistent. One engineer’s prompt produces a React component with one pattern, another engineer’s prompt produces a completely different pattern for the same type of component. The AI is only as consistent as the prompt, and most prompts are written on the fly.
This inconsistency compounds. Code reviews become harder because reviewers can’t pattern-match against team conventions. New engineers see conflicting patterns and don’t know which to follow. The codebase slowly drifts away from the conventions the team agreed on.
Reusable Skills
I started by defining “skills”: structured prompts that encode how the team approaches common tasks. Creating a new API endpoint, adding a design system component, writing integration tests: each has a skill that captures the conventions, naming patterns, file structure, and testing requirements.
A skill isn’t just a prompt template. It includes context about the architecture, references to existing patterns in the codebase, and explicit instructions about what to do and what to avoid. When an engineer invokes a skill, the AI produces output that looks like it was written by someone who’s been on the team for years.
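A concrete shape helps here. This is a rough sketch of what a skill could look like expressed as data; the Skill interface, field names, and file paths are hypothetical, not any real assistant’s schema:

```typescript
// Hypothetical skill definition. The Skill shape and every field name
// below are illustrative, not a real tool's API.
interface Skill {
  name: string;
  description: string;
  // Architectural context the model should assume before generating anything.
  context: string[];
  // Existing files whose patterns the output should mirror.
  referencePatterns: string[];
  // Explicit instructions, stated as imperatives.
  instructions: string[];
  // Things the output must never do.
  antiPatterns: string[];
}

const createApiClientFunction: Skill = {
  name: "create-api-client-function",
  description: "Add a typed client function for a new REST endpoint",
  context: [
    "All HTTP calls go through the shared wrapper in src/api/client.ts",
    "Errors are returned as discriminated unions, never thrown",
  ],
  referencePatterns: ["src/api/users.ts", "src/api/orders.ts"],
  instructions: [
    "Derive request/response types from the OpenAPI-generated types",
    "Export one function per endpoint, named <verb><Resource>",
    "Add a mock response to src/mocks/handlers.ts",
  ],
  antiPatterns: [
    "Do not call fetch directly",
    "Do not define ad-hoc response types",
  ],
};
```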
The key is specificity. A generic “write a React component” skill is useless. A skill that says “create a design system component following the compound component pattern, with Storybook stories, unit tests, and TypeScript props matching the project’s conventions” produces consistent, reviewable code.
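For concreteness, here is a minimal sketch of the kind of component that skill describes, using a hypothetical Tabs component (the names and structure are illustrative, not any project’s actual code):

```tsx
// Minimal sketch of the compound component pattern the skill targets.
// Tabs/Tab/TabPanel are hypothetical names used purely for illustration.
import React, { createContext, useContext, useState } from "react";

const TabsContext = createContext<{
  active: string;
  setActive: (id: string) => void;
} | null>(null);

function useTabs() {
  const ctx = useContext(TabsContext);
  if (!ctx) throw new Error("Tab and TabPanel must be rendered inside <Tabs>");
  return ctx;
}

export function Tabs({ defaultTab, children }: { defaultTab: string; children: React.ReactNode }) {
  const [active, setActive] = useState(defaultTab);
  // The parent owns the state; child components read it through context.
  return <TabsContext.Provider value={{ active, setActive }}>{children}</TabsContext.Provider>;
}

export function Tab({ id, children }: { id: string; children: React.ReactNode }) {
  const { active, setActive } = useTabs();
  return (
    <button role="tab" aria-selected={active === id} onClick={() => setActive(id)}>
      {children}
    </button>
  );
}

export function TabPanel({ id, children }: { id: string; children: React.ReactNode }) {
  const { active } = useTabs();
  return active === id ? <div role="tabpanel">{children}</div> : null;
}
```

Because every component the skill produces shares this shape, a reviewer can pattern-match against a single convention instead of re-deriving the design each time.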
Agent Patterns
Beyond individual skills, I explored agent patterns: multi-step workflows where the AI handles a sequence of related tasks. For example, when a new OpenAPI endpoint is added, an agent can generate types, create the client function, add mock data for testing, and scaffold a basic integration test.
The agent doesn’t make architectural decisions. It follows a defined workflow that the team designed and agreed on. Each step produces output that goes through the same code review process as hand-written code. The agent is a productivity multiplier, not a decision-maker.
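As a sketch of what that workflow could look like declared as data (the AgentStep shape, skill names, and file paths here are all hypothetical):

```typescript
// Hypothetical declaration of the new-endpoint workflow described above.
// AgentStep and the skill names are illustrative, not a real framework's API.
interface AgentStep {
  name: string;
  skill: string;      // which skill the step invokes
  inputs: string[];   // artifacts the step reads
  outputs: string[];  // artifacts the step produces, all reviewed like hand-written code
}

const newEndpointWorkflow: AgentStep[] = [
  {
    name: "generate-types",
    skill: "openapi-to-types",
    inputs: ["openapi.yaml"],
    outputs: ["src/api/types.ts"],
  },
  {
    name: "create-client-function",
    skill: "create-api-client-function",
    inputs: ["src/api/types.ts"],
    outputs: ["src/api/orders.ts"],
  },
  {
    name: "add-mocks",
    skill: "add-mock-handler",
    inputs: ["src/api/types.ts"],
    outputs: ["src/mocks/handlers.ts"],
  },
  {
    name: "scaffold-integration-test",
    skill: "integration-test-scaffold",
    inputs: ["src/api/orders.ts", "src/mocks/handlers.ts"],
    outputs: ["src/api/__tests__/orders.test.ts"],
  },
];
```

There is deliberately no branching logic here: the agent executes the sequence the team designed, in order, and every output file lands in a pull request like any other change.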
The Human in the Loop
The most important lesson: AI works best as an accelerator, not an autopilot. Every generated output goes through code review. The skills encode conventions, but engineers make the judgment calls about edge cases, performance trade-offs, and user experience.
I also learned that skills need maintenance: as the codebase evolves, the patterns change, and the skill definitions have to change with them. I treat them like code, versioned and reviewed alongside everything else.
The teams that benefit most from AI-assisted workflows are the ones that already have clear conventions. The AI amplifies whatever patterns exist. If your conventions are inconsistent, the AI will produce inconsistent code. If your conventions are well-defined, the AI becomes a force multiplier for consistency.