
You’ve read that AI is transforming web development. You’ve seen the productivity claims. Now you’re about to hire an agency for a project, and every one of them says they “use AI.” The problem is, that phrase means wildly different things depending on who’s saying it.
Some agencies have genuinely integrated AI tools into disciplined workflows. Others slapped a chatbot on their website and called it a day. From the outside, it’s hard to tell the difference — until you’re three months into a project and the cracks start showing.
These five questions will help you separate the teams using AI well from the ones just borrowing the buzzword. They’re based on what we’ve seen work (and fail) across real projects, and they build directly on what we’ve covered in this series about what AI actually does in web development, how the tools work, and why speed without quality controls is dangerous.
TL;DR
Not every agency using AI is using it well. Five questions cut through the noise: ask how they actually use AI day-to-day, what quality controls surround it, how experienced their team is, whether they can show real examples, and how they handle security and intellectual property. The answers reveal whether an agency treats AI as a genuine productivity tool with safeguards — or as a marketing claim with nothing behind it. The right agency won’t just tell you they use AI. They’ll show you how it makes your project better.
1. “How are your developers actually using AI tools day-to-day?”
This is the opener that separates real adoption from marketing talk. You’re not asking whether they use AI. You’re asking how.
A credible answer sounds specific. “Our developers use GitHub Copilot for code suggestions during development, and we use AI-assisted code review tools to flag potential issues before human reviewers look at the code.” That’s concrete. You can picture it happening.
A vague answer sounds like a press release. “We use the latest AI to deliver advanced solutions.” That tells you nothing about what’s actually happening in your project.
As we covered in our breakdown of AI coding tools, these assistants fall into distinct categories — autocomplete, conversation, code review, and AI-first editors. Each serves a different purpose. An agency that understands the landscape can tell you which tools they use and why.
What to listen for:
- Specific tool names (GitHub Copilot, ChatGPT, Cursor, CodeWhisperer)
- Specific use cases (“we use it for boilerplate code and unit test generation”)
- Honest limitations (“we don’t use AI for security-critical logic”)
Red flag: The agency can’t name specific tools or describe concrete workflows. If someone says “we use AI across our entire process” but can’t explain what that looks like on a Tuesday afternoon, they’re selling a story, not a service.
2. “What quality controls do you have around AI-generated code?”
This is the question that matters most. AI tools make developers faster — that’s well-documented. But as we explored in our post on AI speed and quality, speed without safeguards creates a different kind of problem. A GitClear study analyzing 153 million changed lines of code found that code churn — throwaway code rewritten within two weeks — is projected to double compared to pre-AI levels.
That stat should shape how you evaluate agencies. The question isn’t whether they use AI. It’s whether they’ve built the controls to use it responsibly.
A strong answer includes three layers:
- Automated testing. Every piece of code gets tested automatically, whether it was written by a human or suggested by AI. This catches bugs before they reach your users.
- Human code review. A senior developer reviews all code, including AI-generated suggestions. They check for security gaps, performance issues, and long-term maintainability.
- Clear boundaries. The team knows where AI helps and where human judgment is required. Business logic, security-sensitive code, and architecture decisions stay with experienced developers.
Red flag: “The AI handles quality automatically” or “we trust the AI output.” Any agency that treats AI-generated code as inherently correct hasn’t been paying attention to the data.
3. “How experienced is the team that’s directing the AI?”
This question gets at something counterintuitive. AI tools don’t help all developers equally. McKinsey’s research on developer productivity found that experienced developers see 50-80% productivity gains from AI tools, while less experienced developers see significantly smaller improvements — and sometimes get worse results.
Why? Experienced developers know enough to evaluate what the AI suggests. They can spot a subtly wrong solution, recognize when the AI is confidently producing outdated patterns, and make judgment calls about when to accept or override a suggestion. Junior developers don’t always have that filter.
We made this point in the first post in this series: AI is a power tool. A power tool in experienced hands produces precision work. In inexperienced hands, it produces a mess — just faster.
What to listen for:
- A mix of senior and mid-level developers on your project
- Senior developers reviewing AI-assisted output from the rest of the team
- Investment in team training on responsible AI tool usage
Red flag: A team of mostly junior developers heavily relying on AI without senior oversight. Low rates might look attractive, but the rework costs can exceed what you saved.
4. “Can you show me examples of AI-assisted work you’ve delivered?”
Claims are easy. Evidence is harder. An agency that’s genuinely using AI in their workflow should be able to point to real projects where AI played a role — and explain what that role looked like.
You’re not asking them to reveal proprietary processes. You’re asking for enough specifics to verify that their AI usage is real and results-driven.
A good answer looks like: “On a recent e-commerce project, we used AI tools to accelerate the component library development. This helped us deliver the MVP two weeks ahead of schedule while maintaining our standard code review and testing process.”
A weak answer looks like: “All our projects benefit from AI.” That’s a marketing sentence, not evidence.
What to listen for:
- Specific project examples with measurable outcomes
- Honest descriptions of AI’s role (it helped with X, humans handled Y)
- Transparency about limitations or lessons learned
Red flag: No concrete examples, or examples that sound too good to be true. If AI supposedly handled “everything” on a project, that should raise more questions than it answers.
5. “How do you handle security and IP concerns with AI tools?”
This is the question most business owners forget to ask — and the one that can create the most expensive problems if ignored.
Many AI coding tools are cloud-based. When a developer uses them, snippets of your project’s code may be sent to external servers for processing. For most web projects, this is manageable with the right policies. But it’s a conversation worth having, especially if your project involves sensitive data, proprietary business logic, or regulatory requirements.
According to a 2024 Gartner analysis, data security is the top concern enterprises cite when adopting generative AI tools. Your agency should have thought about this before you ask.
What to listen for:
- A clear policy on what types of code are processed through AI tools
- Understanding of the data handling practices of the tools they use
- Intellectual property agreements that address AI-generated code ownership
- Compliance awareness if your industry has specific regulations (healthcare, finance, etc.)
Red flag: Blank stares. If an agency hasn’t considered the security implications of AI tool usage, they haven’t thought deeply enough about the technology they’re adopting. This isn’t about paranoia — it’s about professionalism.
The real question behind all five questions
These questions aren’t a gotcha exercise. They’re a way to understand an agency’s maturity with AI tools. The pattern you’re looking for is simple: answers that are specific, honest about limitations, backed by evidence, and grounded in process.
An agency that uses AI well will welcome these questions. They’ve thought about the answers because they’ve lived through the challenges. They know that AI doesn’t replace expertise — it amplifies it. And they’ve built the workflows to prove it.
An agency that gets defensive or vague is telling you something important about how much substance is behind their claims.
The next time you’re evaluating a web development partner, bring these five questions. The answers will tell you more about their capability than any portfolio page or sales deck. Ready to work with a team that welcomes these questions? Let’s talk about your project.
Frequently asked questions
Should I avoid agencies that don’t use AI at all?
Not necessarily. AI tools are powerful accelerators, but they’re not required for every project. A skilled team without AI will still deliver quality work. What matters more is whether the agency is honest about their approach. An agency that doesn’t use AI but delivers great work is better than one that uses AI poorly. That said, agencies that are investing in these tools tend to be more forward-thinking about technology in general — and that’s worth something.
Will using AI make my project cheaper?
It can, but cost savings shouldn’t be the primary driver. AI tools help developers work more efficiently, which can reduce timelines and total hours. But the real value is better allocation of human effort — developers spend more time on complex, high-value work and less time on repetitive tasks. If an agency pitches AI primarily as a cost-cutting measure, ask how they’re reinvesting those efficiency gains into quality. The best teams use the time AI saves to write better tests, do more thorough reviews, and build more thoughtfully.
How can I verify an agency’s AI claims without being technical?
Use the specificity test. Ask for details and listen to how they respond. Credible teams give you tool names, workflow descriptions, and project examples. They can explain what AI does in plain language without hiding behind jargon. If you hear concrete answers that make sense to a non-technical person, that’s a good sign. If you hear buzzwords and generalities, push for specifics — or consider it a warning sign.
What if I’m not comfortable with AI being used on my project?
That’s a legitimate position, and a good agency will respect it. Some projects — especially those involving sensitive data or strict regulatory requirements — may warrant limiting or avoiding AI tool usage. Discuss your concerns upfront. A professional agency will accommodate your preferences and explain the trade-offs honestly, whether that’s slightly longer timelines or adjusted workflows.
