AI-Assisted Development: How We Use AI Tools to Deliver Projects Faster

Developer working with AI coding tools across dual monitors

There is a lot of noise about AI in software development right now. Vendor claims about 10x productivity. Studies showing developers are actually slower when they use AI tools. The reality, as usual, sits somewhere in the middle — and it depends almost entirely on how you use the tools, not just whether you use them.

At Lycore, we have been integrating AI tools into our development workflow across Python, Django, React, Flutter, and .NET projects for the past year and a half. This post is an honest account of what we actually do, what has worked, what has not, and what the practical impact has been for the clients we build for.

Not a vendor pitch. Not a stat-padded overview article. Just what we have learned from shipping real software with AI as part of the process.

The honest picture on AI and developer productivity

Before we get into what we do specifically, it is worth being clear-eyed about what the research actually says. A study from METR in early 2025 found that experienced developers working on complex open-source projects were actually 19% slower when using AI tools — not faster. The reason was not that the AI was unhelpful. It was that reviewing, debugging, and correcting AI-generated code added overhead that outweighed the speed of generation.

At the same time, surveys consistently show that over 80% of developers believe AI makes them more productive, and controlled experiments show 30–55% speed improvements on well-scoped, contained tasks.

Both things are true. AI makes certain tasks significantly faster. It adds overhead to others. The teams and companies that extract real value from AI are the ones that have figured out which is which — and built their workflows accordingly.

Where we use AI tools and why

Diagram showing where AI tools are used across the software development workflow at Lycore

Boilerplate and scaffolding

The clearest productivity win. Setting up a new Django app, scaffolding a REST API endpoint with serializers and viewsets, generating initial migration files, creating Flutter widget structures — these are tasks where AI generates accurate, usable code in seconds that would otherwise take 20–30 minutes of typing and referencing documentation. We use Claude Code and GitHub Copilot here, and the output quality for standard patterns in our stack is consistently high.

The rule we follow: AI writes the skeleton, a senior engineer reviews the architecture before anything else is built on top of it. This stops structural mistakes from propagating through the whole codebase.
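To make "AI writes the skeleton" concrete, here is a hypothetical sketch of the kind of Django REST Framework skeleton we ask AI tools to produce for a new resource. The template and `render_api_skeleton` helper are illustration only; in practice the AI tool writes these files directly and a senior engineer reviews them before anything is built on top.

```python
# Hypothetical illustration: the shape of DRF boilerplate we delegate to AI.
# The render function just fills a string template so the pattern is visible.

SERIALIZER_TEMPLATE = '''\
from rest_framework import serializers, viewsets
from .models import {model}


class {model}Serializer(serializers.ModelSerializer):
    class Meta:
        model = {model}
        fields = "__all__"


class {model}ViewSet(viewsets.ModelViewSet):
    queryset = {model}.objects.all()
    serializer_class = {model}Serializer
'''


def render_api_skeleton(model: str) -> str:
    """Fill the standard serializer + viewset skeleton for one model name."""
    return SERIALIZER_TEMPLATE.format(model=model)
```

The point of the review step is that everything in this skeleton — flat `fields = "__all__"`, an unrestricted queryset — is a reasonable default and a potential mistake, depending on the resource.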

Writing and running tests

Test generation is one of the most underrated AI use cases in development. Given a function or a class, AI can generate a thorough set of unit tests — including edge cases a developer might not think to write manually — in a fraction of the time. On a recent Django REST project, our QA cycle for a new module was cut by roughly 40% because the initial test suite was AI-generated and comprehensive, leaving our engineers to focus on integration tests and scenario testing rather than basic coverage.

The caveat that matters: tests must be written against the specification, not against the AI-generated code. If you let AI write both the code and the tests for that code, the tests will pass but may not reflect what the software is actually supposed to do. We write tests from requirements first, then use AI to fill in coverage.
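A minimal, hypothetical example of what "tests from the specification" means in practice. Suppose the spec says: orders of 100.00 or more get a 10% discount, capped at 50.00. The tests encode that spec directly; only then does AI propose the implementation and fill in additional edge-case coverage.

```python
# Hypothetical spec: orders of 100.00 or more get a 10% discount,
# capped at 50.00. The tests below are derived from the spec, written
# before any implementation exists.

def apply_discount(total: float) -> float:
    """Return the payable amount under the discount rule above."""
    if total < 100.0:
        return total
    discount = min(total * 0.10, 50.0)
    return total - discount


# Spec-derived tests (pytest style) — these come first.
def test_no_discount_below_threshold():
    assert apply_discount(99.99) == 99.99

def test_ten_percent_at_threshold():
    assert apply_discount(100.0) == 90.0

def test_discount_is_capped():
    assert apply_discount(1000.0) == 950.0
```

If AI had written both the function and the tests, a misread of the cap (say, capping at 10% of 50.00) would pass its own tests and still be wrong against the spec.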

Code review and debugging

When a developer is stuck on a bug or wants a second opinion on an implementation, asking Claude or Copilot is faster than waiting for a colleague to be available. We have found AI particularly useful for catching obvious issues — missing null checks, off-by-one errors, inefficient database queries — before code goes to human review. This means our human code reviews spend more time on architecture, business logic, and edge cases, and less time on the mechanical stuff.
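An illustrative example (not taken from a real review) of the class of mechanical bug we lean on AI to flag before human review: an off-by-one in a page-count calculation, plus a missing guard for empty input.

```python
# Illustrative only: the kind of mechanical issue AI review catches reliably.

def page_count_buggy(total_items: int, page_size: int) -> int:
    # Off-by-one: floor division silently drops the final partial page.
    return total_items // page_size


def page_count_fixed(total_items: int, page_size: int) -> int:
    if total_items <= 0:
        return 0  # guard for empty input, the kind of check AI flags
    # Ceiling division counts the final partial page correctly.
    return (total_items + page_size - 1) // page_size
```

Catching this before human review is exactly the division of labour described above: the machine handles the mechanical scan, the reviewer spends their attention on whether pagination belongs in this layer at all.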

Documentation and handoff materials

Documentation is the task developers are most likely to skip under time pressure. AI has made it much easier to keep documentation current. After a sprint, we use AI to generate initial API documentation from code, write changelog entries, and summarise what changed in plain language for non-technical stakeholders. A task that used to take half a day now takes under an hour.
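A toy sketch of the docs-from-code step. Our actual workflow hands the module to an AI tool; this deterministic version just shows the raw material the tool works from — public functions, signatures, and first docstring lines — using Python's standard `inspect` module.

```python
# Toy sketch: extract the raw material for API documentation from a module.
# An AI tool then turns this kind of summary into readable, structured docs.
import inspect


def summarise_module(module) -> str:
    """Build a plain-text API summary from a module's public functions."""
    lines = []
    for name, fn in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue  # skip private helpers
        sig = inspect.signature(fn)
        doc = inspect.getdoc(fn) or "(undocumented)"
        lines.append(f"{name}{sig} -- {doc.splitlines()[0]}")
    return "\n".join(lines)
```

Run against the standard-library `json` module, for example, this lists `dumps`, `loads`, and friends with their signatures and one-line descriptions — the skeleton an AI pass fleshes out into changelog entries and stakeholder summaries.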

Requirements clarification and scoping

Before development starts, we use AI to stress-test a requirements document — looking for ambiguities, missing edge cases, and scope risks that are not immediately obvious. On several recent projects this has surfaced questions that, if left unaddressed until mid-development, would have caused significant rework. The cost of a one-hour requirements review is far lower than the cost of a two-week redesign.
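For illustration, a prompt along these lines (not our exact internal template) is what a requirements stress-test looks like in practice:

```text
You are reviewing a requirements document before development begins.
Read the attached spec and list:
1. Ambiguous statements that two reasonable engineers could implement
   differently.
2. Edge cases the spec does not address (empty states, concurrency,
   permissions, failure modes).
3. Requirements likely to change scope once real data is involved.
For each item, quote the relevant passage and suggest a clarifying
question to put to the client.
```

The output is a question list for the client, not a design — the judgment about which ambiguities matter still sits with the engineers running the review.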

Where we do not use AI

Side by side comparison showing where Lycore uses AI tools and where human expertise takes over

Knowing where not to use AI is just as important as knowing where to use it.

We do not use AI-generated code for security-critical logic without thorough manual review. Research shows a 23.7% increase in security vulnerabilities in AI-assisted code. Authentication systems, payment processing, data encryption, and access control are areas where we write carefully and review carefully.

We do not use AI for novel architectural decisions. AI is trained on existing patterns. When a project requires a genuinely new approach — a custom multi-tenant architecture, a non-standard data pipeline, a real-time system with unusual constraints — AI suggestions tend toward the generic. These decisions need human expertise and judgment.

We do not let junior developers use AI unsupervised on complex tasks. Junior engineers in our team use AI tools under supervision, with explicit guidance on when to trust the output and when to be sceptical.

What this means for clients

The practical impact for the businesses we build for comes down to three things.

Faster MVPs. The scaffolding and boilerplate acceleration is most visible on new projects. Getting from a blank repository to a working, testable first version takes meaningfully less time than it did two years ago.

More thorough testing earlier. Because generating test coverage is faster, we test more comprehensively earlier in the project lifecycle. This means fewer bugs reach QA, and fewer surprises emerge post-launch.

Better documentation. Handoff materials, API docs, and technical summaries are more consistently produced and of better quality. This reduces friction when clients bring in internal engineers to work alongside us or take ownership of a system we have built.

Our current AI toolset

  • Claude Code — primary tool for complex code generation, architecture review, and explaining unfamiliar codebases. Particularly strong on Python and Django.
  • GitHub Copilot — inline suggestions in the IDE, useful for boilerplate and autocomplete in well-understood patterns.
  • Claude via API — used within client applications themselves, building AI features including document processing, intelligent search, and automated workflows.
  • ChatGPT — occasional use for research, generating draft documentation, and exploring approaches before committing to implementation.

The bottom line

AI tools have made our engineering team meaningfully more productive on the tasks where they work well: scaffolding, testing, documentation, debugging, and requirements analysis. They have not replaced the judgment, experience, and communication that determine whether a software project succeeds.

If you are evaluating an outsourced development partner and wondering whether they use AI responsibly — ask them specifically how. Not just “do you use AI” but “where in your workflow, with what guardrails, and how does it affect quality.” The answer will tell you a lot about how mature their engineering practice actually is.

Get in touch with the Lycore team →
