The AI Consulting Playbook: 50+ Engagements
AI Strategy | December 17, 2025 | 11 min read


After 50+ client engagements, patterns emerge. The companies that succeed with AI all do the same three things, and the ones that fail all make the same two mistakes. We are sharing the playbook.

OneWave AI Team

AI Consulting

We Have Done This 50+ Times. Here Is What Actually Works.

We started OneWave AI because we saw a gap that nobody was filling. Businesses knew they needed AI but had no idea where to start. The big consulting firms were charging $500K for a "digital transformation strategy" that amounted to a PowerPoint deck. The AI tool vendors were selling hammers and telling everyone their problem was a nail. Nobody was doing the actual work of figuring out where AI fits in a specific business and then building it.

Fifty-plus engagements later, we have learned a lot about what works, what does not, and where most businesses get stuck. This post is us pulling back the curtain on exactly how we work with clients. Not the polished marketing version. The real version, including the failures.

Our Stack: Simple by Design

Before we get into the process, let us talk about tooling, because people always ask.

Our primary build tool is Claude Code -- Anthropic's CLI agent. We were early adopters, and it is now the backbone of everything we build. We started with browser-based tools like Replit and Lovable, but they could not go the distance on production client work. Claude Code was the turning point. It works in the terminal, it works with real codebases, and it can build software that actually holds up in production.

Beyond Claude Code, our stack is the Anthropic API for building AI-powered features into client applications, and MCP servers for connecting AI to client tools and data sources. That is basically it. We do not need a dozen tools. We need tools that work.
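To make the MCP piece concrete, here is the general shape of a project-level MCP server configuration for Claude Code. The server name, package, and path below are illustrative assumptions, not any client's actual setup:

```json
{
  "mcpServers": {
    "client-files": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/client/data"]
    }
  }
}
```

Each entry tells the agent how to launch a server process that exposes a tool or data source, which the agent can then read and act on during a build session. Connecting a client's CRM, file share, or database is usually a matter of adding one more entry like this, not writing custom glue code.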

This simplicity is intentional. The more complex your toolchain, the more things break. Every additional tool is another point of failure, another vendor to manage, another thing to train on. We would rather be excellent with three tools than mediocre with fifteen.

Our Consulting Process

A proven 6-8 week engagement from discovery to independent operation

Discovery (Weeks 1-2)

  • Map actual workflows and pain points
  • Identify hidden manual labor
  • Baseline current metrics

Strategy (Weeks 2-3)

  • Prioritize AI opportunities by ROI
  • Build a phased roadmap
  • Define success metrics

Build (Weeks 3-6)

  • Weekly demos of working software
  • MCP integrations with your tools
  • Iterative development with feedback

Train & Handoff (Weeks 6-8)

  • Hands-on team training
  • Documentation and runbooks
  • Transfer ownership to your team

Timeline varies based on project scope -- typical engagements run 6-8 weeks

Phase 1: Discovery (1-2 Weeks)

Every engagement starts the same way: we shut up and listen. The discovery phase is not us coming in and telling a business what they need. It is us spending time understanding how the business actually works.

We sit with the people who do the work -- not just the executives, but the office manager, the bookkeeper, the sales rep, the warehouse lead. We watch them work. We ask annoying questions like "why do you do it that way?" and "what happens when this step fails?" We map out actual workflows, not the idealized version on the org chart.

The most valuable thing we do in discovery is identify what we call "hidden manual labor." These are the tasks that people have been doing for so long they do not even think of them as problems anymore. The 20 minutes every morning copying data from one spreadsheet to another. The hour spent re-formatting reports. The process of manually checking three different systems to answer a single customer question. These tasks are invisible to management but they add up to hundreds of hours per year.

We also identify what NOT to automate. Not every process benefits from AI. If something requires genuine human judgment, relationship nuance, or creative problem-solving, AI is the wrong tool. Part of our job is being honest about where the line is.

Phase 2: Strategy (1-2 Weeks)

After discovery, we build a prioritized roadmap. This is where we take everything we learned and turn it into a plan that makes economic sense.

We rank every potential AI application by three criteria:

  • Impact: How many hours per week does this save? How much does it reduce errors? Does it affect revenue?
  • Feasibility: Can we build this reliably with current AI capabilities? Is the data available and clean enough?
  • Speed to value: How fast can we get this running and delivering results?

The output is usually a roadmap with 3-6 projects, ranked by priority. We almost always recommend starting with one project -- the one that is highest impact and most feasible. Win there first, then expand.
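The three-criteria ranking can be sketched as a simple weighted score. The candidate projects, 1-5 scores, and weights below are hypothetical illustrations, not real client data or our actual scoring model:

```python
# Illustrative sketch of ranking candidate AI projects by the three
# criteria above. Scores are on a 1-5 scale; weights are assumptions.
WEIGHTS = {"impact": 0.5, "feasibility": 0.3, "speed": 0.2}

candidates = [
    {"name": "Automated report generation", "impact": 4, "feasibility": 5, "speed": 5},
    {"name": "Website chatbot",             "impact": 2, "feasibility": 3, "speed": 2},
    {"name": "Invoice data entry",          "impact": 5, "feasibility": 4, "speed": 5},
]

def score(project):
    """Weighted sum of the three criteria (higher is better)."""
    return sum(WEIGHTS[k] * project[k] for k in WEIGHTS)

# Rank the roadmap, highest score first.
roadmap = sorted(candidates, key=score, reverse=True)
for p in roadmap:
    print(f"{score(p):.1f}  {p['name']}")
```

In practice the scores come from discovery interviews and the weights from the client's economics; the point is simply that the ranking is explicit and arguable rather than a gut call.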

Here is a pattern we see in almost every engagement: the biggest ROI is always in the boring stuff. Data entry. Email triage. Report generation. Document processing. Nobody gets excited about automating data entry. But when you save a team 15 hours per week on data entry, that is $30,000+ per year in recovered productivity, and you can build the solution in a week.
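The arithmetic behind that claim, assuming a fully loaded labor cost of $40/hour (the rate and the 50 working weeks are our assumptions for illustration):

```python
# Back-of-the-envelope ROI for automating a 15-hour/week manual task.
hours_saved_per_week = 15
working_weeks_per_year = 50   # assumption: ~2 weeks of holidays/downtime
loaded_hourly_cost = 40       # assumption: fully loaded cost per hour (USD)

annual_hours = hours_saved_per_week * working_weeks_per_year
annual_value = annual_hours * loaded_hourly_cost
print(f"{annual_hours} hours/year -> ${annual_value:,}/year recovered")
```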

Meanwhile, the flashy projects -- the AI chatbot on the website, the predictive analytics dashboard -- those often have the lowest ROI relative to the effort required. We tell clients this even when they do not want to hear it.

Phase 3: Build (2-8 Weeks)

This is where Claude Code earns its keep. Our build process is fast because the tooling allows us to move fast.

A typical build week looks like this: Monday we scope the specific features for the week. Tuesday through Thursday we build, using Claude Code to write the application code, set up MCP connections to client systems, and configure the Anthropic API for any AI-powered features. Friday we demo to the client, get feedback, and adjust.

We deliver working software every week, not a big reveal at the end. This matters because the client's requirements always change once they see something real. The feature they thought was critical turns out to be unnecessary. The thing they mentioned offhand in discovery turns out to be the most important feature in the system. You cannot discover this from a requirements document. You discover it by putting working software in front of people and watching how they react.

Build complexity varies significantly. Some projects are straightforward -- a custom dashboard that pulls from existing data sources, or an automated email workflow. Those take 2-3 weeks. More complex projects -- a full client portal with AI-powered document processing, or a custom CRM replacement with intelligent lead scoring -- take 6-8 weeks.

One thing we have learned: scope creep is the number one project killer. We are aggressive about saying "that is a great idea for version 2" and keeping version 1 focused. The fastest path to value is a narrow, excellent solution, not a broad, mediocre one.

Phase 4: Train and Handoff (1-2 Weeks)

This is the phase most AI consultancies skip, and it is the reason most AI projects fail. The technology works. The team never adopts it.

We spend dedicated time training every person who will touch the system. Not a one-hour webinar. Hands-on, at-their-desk training where we sit with each team member and walk through their specific workflows with the new tool. We watch them use it. We answer questions. We fix the things that confuse them.

We also identify what we call "champions" -- the team members who get excited about the new system and naturally help their colleagues. Every successful AI adoption we have seen has at least one champion on the team. If nobody is excited, the project is in trouble regardless of how good the technology is.

The handoff includes documentation, but more importantly, it includes access. We set up the client's team so they can make basic modifications themselves. The goal is not to create dependency on us. The goal is to give the client a tool they own and can maintain.

What We Have Learned From 50+ Engagements

After doing this enough times, patterns emerge. Here are the ones that matter most.

Every business thinks their problem is unique. It usually is not.

The specific details vary, but the underlying patterns repeat across industries. Data entry bottlenecks, email overload, manual report generation, document processing backlogs, slow customer response times -- these are universal problems wearing industry-specific costumes. This is actually good news for clients because it means we have often solved a version of their problem before.

The best AI projects start small.

The clients who try to transform everything at once almost always stall. The ones who pick one painful workflow, automate it, prove the value, and then expand -- those are the ones who succeed. We have started refusing projects that try to boil the ocean. It does not work.

Team training is more important than the technology.

We have built technically excellent systems that failed because the team never adopted them. We have built simple, imperfect systems that succeeded because the team embraced them. If we had to choose between a mediocre tool with great training and a great tool with no training, we would pick the mediocre tool every time.

Be honest about failures.

Not everything works. We had a project where we built an AI-powered customer service agent for a home services company. Technically, it worked well. But their customers -- mostly older homeowners calling about plumbing emergencies -- did not want to talk to an AI. They wanted a human. The tool sat unused for three months before we pivoted it to an internal tool that helped the human operators respond faster. That worked. The lesson: just because you can automate something does not mean you should.

We also had an engagement where the client's data was so messy that we spent the entire build phase just cleaning and organizing data instead of building AI features. We should have caught that in discovery. Now our discovery phase includes a data quality assessment that would have flagged the issue immediately.

Is This Right for Your Business?

Not every business needs an AI consultant. If you are a solo operator with simple workflows, you can probably figure out AI tools on your own. If you are a large enterprise with an internal tech team, you might have the capacity in-house.

The sweet spot for what we do is businesses with 10-200 employees that have real operational pain but do not have the technical expertise to solve it themselves. Businesses where the owner knows something needs to change but does not know where to start. Businesses that have tried a few AI tools on their own and gotten frustrated by the gap between the promise and the reality.

If that sounds like you, the first step is not buying a tool or hiring a developer. The first step is understanding where AI actually fits in your specific business. That is what discovery is for. Everything else follows from there. For a week-by-week look at what happens after you engage us, read how we set up AI for a new client in 30 days.

AI consulting playbook · AI implementation lessons learned · why AI projects fail · AI consulting for small business · OneWave AI · successful AI adoption patterns

Need help implementing AI?

OneWave AI helps small and mid-sized businesses adopt AI with practical, results-driven consulting. Talk to our team.

Get in Touch