A pilot isn’t a strategy. Neither is a single tool or implementation. Start with the harder question: where will AI create ROI? It may take more time, but it will also deliver more value.

Since launching Kynetyk, I’ve had the chance to speak with leaders across industries about their experience integrating AI—whether for strategy, product development, or operational automation. While these conversations often start in different places, they frequently converge on a common theme: “We’re still looking for the right way to get started.” If this sounds familiar, you’re not alone: the data tell us that most companies—even those that recognize the potential for AI integration—are in exactly the same boat.

I would argue that’s a good thing in many ways. If 2023 was the year of the chatbot, and 2024 was the year the “Great AI Pilot” phase kicked off, the latter half of 2025 was when we learned a sobering truth: most of those early efforts never scaled to value. Taking your time up until now means you’ve avoided much of the costly experimentation associated with brittle pilot projects, immature orchestration platforms, and foundation models whose capabilities shifted underfoot. It also means your business can take a first step with the benefit of hindsight. And that’s what this article is about: how to get started with AI in a way that can actually scale to value.

Insight 1: A pilot isn’t a plan.

Pilots have their place—they build familiarity and surface possibilities. But if the goal is measurable ROI at the level of real business metrics, a pilot alone won’t get you there. According to MIT’s NANDA initiative report, The GenAI Divide: State of AI in Business 2025, 95% of enterprise AI pilots fail to deliver measurable ROI. And the reasons have little to do with the technology itself. The report points to a few consistent culprits: poor integration with existing workflows, misaligned resource allocation, and a “learning gap” where generic tools can’t adapt to enterprise-specific needs. Projects stall in the prototype phase. Pilots become permanent experiments that never scale. The underlying issue isn’t that companies are choosing the wrong tools. It’s that they’re skipping the strategic work that connects those tools to measurable outcomes.

Plan for goals, not tactics

A pilot is a tactic. It answers the question “can this work?” But it skips the harder, more important question: “what are we actually trying to achieve?” We believe every AI effort should begin with clearly defined ROI goals: organization-specific opportunities where AI integration can create measurable value. This isn’t a brainstorm or a wishlist. It’s a strategic exercise that requires management alignment. Leadership needs to own and articulate these goals before anything else moves forward.

Then define capabilities, not tools

Once goals are set, the next step is identifying the capabilities your organization needs to meet them. This is a step where it is easy to make the wrong move—jumping straight to evaluating vendors or spinning up pilots. But capabilities are not tools. A capability might be “the ability to synthesize internal data to accelerate decision-making.” The tool that enables it comes later.

Keeping the conversation at the capability level forces strategic thinking. It ensures you’re solving for what matters, not just what’s available.

Then—and only then—map tools, timelines, and priorities

Once you’ve defined the capabilities you need, the conversation can turn tactical. This is where you evaluate tools (whether to buy, build, or both—more on that below), sequence the work, and assign ownership.

To be clear: team input is valuable throughout this process—strategy shouldn’t be developed in a vacuum. But there’s a difference between contributing to strategy and defining it. Leadership needs to own the goals and capabilities. The final phase is where the broader team needs more agency, not just input. They’re the ones who will use these tools day-to-day, and adoption depends on their buy-in. Engaging them meaningfully at this stage—on tool selection, workflow integration, rollout sequence—smooths the path from plan to practice.

Why this structure works

This approach addresses the failure modes the MIT data exposes: misaligned priorities, poor workflow fit, and tools disconnected from measurable outcomes.

A pilot asks “can this tool do something useful?” That’s a fine question—but it’s not a strategy. A plan asks “where will AI create value, what do we need to get there, and how will we know we’ve succeeded?” One builds familiarity. The other builds ROI.

Insight 2: Solve for capabilities, not tasks.

The instinct when applying AI to your business is to find a task that’s slow or tedious and automate it. Summarize this document. Draft this email. Extract data from this PDF. These are legitimate uses, but they’re also the smallest version of what AI can do—and they tend to produce tools that save minutes, not capabilities that change how your team works.

The more interesting question is: what would it look like to solve for an entire capability?

The junior developer lesson

The software industry didn’t stop at a bot that autocompletes code. It didn’t build a separate tool for debugging, another for writing tests, and a third for refactoring. It built a junior developer—something that can take a problem description, write the code, run the tests, fix what breaks, and iterate until it works. Claude Code, Codex, and tools like them aren’t point solutions. They’re end-to-end capabilities that map to how work actually gets done.

That distinction matters. A code autocomplete tool saves keystrokes. A junior developer changes your capacity.

Think in workflows, not features

The same principle applies outside of engineering. Don’t build a bot that summarizes meeting notes—build something that owns the post-meeting workflow: capturing decisions, assigning action items, updating your project tracker, and flagging what fell through the cracks. Don’t build a tool that drafts outreach emails—build something that can research a prospect, personalize the message, and schedule the follow-up.
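To make the idea concrete, here’s a minimal sketch of what “owning the post-meeting workflow” might look like in code. Everything here is a hypothetical stand-in: in a real system the extraction step would call a language model with a structured-output prompt, and the results would flow into your project tracker’s API rather than a dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class MeetingOutcome:
    decisions: list = field(default_factory=list)
    action_items: list = field(default_factory=list)
    open_questions: list = field(default_factory=list)

def extract_outcomes(transcript: str) -> MeetingOutcome:
    # Hypothetical extraction step: a simple keyword heuristic stands in
    # for what would, in practice, be an LLM call with structured output.
    outcome = MeetingOutcome()
    for line in transcript.splitlines():
        if line.startswith("DECIDED:"):
            outcome.decisions.append(line[len("DECIDED:"):].strip())
        elif line.startswith("TODO:"):
            outcome.action_items.append(line[len("TODO:"):].strip())
        elif line.strip().endswith("?"):
            outcome.open_questions.append(line.strip())
    return outcome

def run_post_meeting_workflow(transcript: str) -> dict:
    # The point: own the whole workflow, not just the summary step.
    outcome = extract_outcomes(transcript)
    return {
        "decisions_logged": len(outcome.decisions),       # would write to a decision log
        "tasks_created": len(outcome.action_items),       # would call the tracker API
        "flags_for_followup": outcome.open_questions,     # surfaced, not dropped
    }
```

The structure is the point, not the heuristics: a single entry point that carries the meeting from transcript to logged decisions, created tasks, and flagged loose ends, instead of stopping at a summary.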

When you defined capabilities in the planning phase, you were already thinking this way. The key is to carry that mindset into implementation. Every time you’re tempted to scope an AI project as a single task, ask: what’s the full workflow this task lives inside, and how much of it could AI own?

Why this changes the ROI conversation

Point solutions deliver incremental gains—a few minutes saved per task, a slightly faster turnaround. End-to-end capabilities deliver something qualitatively different: they give your team capacity they didn’t have before. That’s the difference between “AI helps us do the same work faster” and “AI lets us do work we couldn’t justify doing at all.”

The companies getting the most from AI right now aren’t the ones with the most automations. They’re the ones that thought carefully about which capabilities to build whole.

Insight 3: Buy or build? The answer is probably both.

The “buy vs. build” question gets treated as a binary choice. It isn’t.

The reality for most growing companies is more nuanced. You likely don’t have the engineering capacity to build and maintain production-grade software that will be used extensively across teams and workflows. That’s not a criticism—it’s just not where your resources should go. At the same time, you probably don’t need an enterprise license for a polished platform just to stand up a data pipeline or consolidate internal knowledge.

The right answer depends on what you’re trying to solve.

When to buy

Consider buying when the capability is broadly applicable, the workflow is well established in your industry, and reliability matters from day one.

Vertical AI tools built for domain-specific workflows—financial modeling, legal document review, supply chain optimization—often fit this profile. When a vendor has already solved the hard problems and validated the tool across similar organizations, buying saves time, reduces risk, and gets you to value faster.

When to build

Building makes sense in different circumstances: when the need is specific to your organization, when the stakes are lower, or when part of the goal is developing internal AI fluency.

The principle

Buy where the capability is broadly applicable and needs to be rock solid. Build where the need is specific, the stakes are lower, or the goal is learning. Most organizations will end up doing both—and that’s the right outcome.

The mistake is treating this as an either/or decision and over-investing in one direction. Over-buying leads to shelfware and bloated costs. Over-building leads to fragile systems and distracted teams. A clear-eyed view of what each approach is good for will serve you better than a rigid philosophy.


If you’re still figuring out where to start, that’s not a problem—it’s the right place to be. The companies that get the most from AI won’t necessarily be the ones that moved fastest. They’ll be the ones that started with a plan, focused on the right kind of value, and made smart choices about what to build and what to buy.

That’s exactly what we help companies do at Kynetyk. If any of this resonated, we’d welcome the conversation.


Josh is co-founder of Kynetyk, where he writes about AI, builds products at the intersection of AI and human experience, and helps companies design AI strategies that actually scale. Reach out at josh@kynetyk.ai.