
Engineering leaders are excited about generative artificial intelligence (AI) tools, whether for adopting AI pair programmers or automating testing and deployments in CI/CD. But at a deeper level, how a company introduces GenAI tools to its development teams can have a big impact on those teams’ productivity, quality and morale.
At Port, we’ve seen that as more of our customers adopt GenAI tools, clear anti-patterns in their adoption practices are emerging. Here are three of those anti-patterns, along with adjustments companies can make to set their teams up for successful GenAI tool adoption.
Anti-Pattern 1: Rolling out GenAI Without an Adoption Plan
GenAI tools can do a lot, but they aren’t a magic wand. Without a clear rollout plan, developers may feel skeptical about how AI fits into their daily workflow. This can result in lower adoption rates, lower productivity and even unnecessary cost blowouts, as companies may be paying for more seats than developers actually use.
Moreover, introducing GenAI all at once can disrupt management’s ability to track engineering KPIs. If every team starts using AI in different ways, it will be more difficult to measure whether the impact on the software development pipeline is positive or negative; by extension, it will be less clear where AI has been successfully adopted and whether it has saved the intended time and money.
The Solution
Consider testing AI tools with a pilot program. Choose one or two high-performing development teams in need of assistance, whether because they are burdened with repetitive workloads or regularly writing boilerplate. Seek teams whose tasks are discrete and well understood, like your front-end developers, and work with them to identify a few tasks GenAI can complete on its own, with human review.
Keeping your pilot constrained to a few teams will make it easier to keep costs low, decide where GenAI works to your best advantage and gauge its impact on stability and throughput. With a pilot in place, you can also invite the early-adopting teams to demo the GenAI tool to generate excitement, then repeat the adoption process as later teams ramp up their GenAI usage.
Consider gathering qualitative data as well. Run surveys to get developer feedback: Is the AI tool they’re testing helpful? Does it make their jobs easier, or are they spending too much time correcting its mistakes? Would they tell their peers to use it or to avoid it?
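If you want to trend that feedback across survey rounds, it helps to reduce each round to a few comparable numbers. Here is a minimal sketch in Python, assuming a hypothetical response shape with 1-to-5 ratings and a would-recommend flag; the field names are illustrative and not tied to any particular survey tool.

```python
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    # Hypothetical survey fields; adapt to whatever your survey tool exports.
    helpfulness: int        # 1 (not helpful) to 5 (very helpful)
    correction_time: int    # 1 (rarely fixing AI mistakes) to 5 (constantly fixing them)
    would_recommend: bool   # "Would you tell a peer to use it?"

def summarize(responses: list[SurveyResponse]) -> dict[str, float]:
    """Condense qualitative feedback into trackable numbers per survey round."""
    n = len(responses)
    return {
        "avg_helpfulness": sum(r.helpfulness for r in responses) / n,
        "avg_correction_time": sum(r.correction_time for r in responses) / n,
        "recommend_rate": sum(r.would_recommend for r in responses) / n,
    }

# Example: three pilot developers' responses from one survey round.
pilot_round_1 = [
    SurveyResponse(4, 2, True),
    SurveyResponse(5, 1, True),
    SurveyResponse(3, 4, False),
]
print(summarize(pilot_round_1))
```

Comparing these scores between survey rounds shows whether sentiment is improving as the pilot matures, which is harder to see from free-form comments alone.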
Anti-Pattern 2: Adding GenAI Without Human Involvement or Management
While GenAI tools can write code faster than human developers, it is an anti-pattern to assume they will immediately know how to navigate the institutional context of their development environment. To effectively manage a GenAI tool, human developers are still needed to review, test, maintain and upgrade its code over time; and to properly train the AI to produce better results, they need to be involved with that code from the time it is written.
At the same time, the human developers working with GenAI may not know how to use it to their advantage without guidelines. Developers working in brownfield environments may not have a clear use case for GenAI at all, whereas those doing routine maintenance may immediately know where and how it can help.
If developers aren’t clear on why or how the tool is meant to be used, they can become frustrated and start to doubt its future utility.
The Solution
Even as GenAI improves, it’s important to focus GenAI tool pilots on augmenting humans, not replacing them. Consider using developer experience surveys to ask about the pain points developers believe GenAI can help with, and focus the tool pilot on solving those problems first. Survey feedback can also reveal developers’ fears about why AI is being introduced, giving you the insight to assuage them and to share your goals for the new tools.
Following up with surveys throughout and after the pilot will also clarify where AI has had a positive or negative impact on engineering metrics. For example, if developers are working one-on-one with a GenAI coding assistant, their insights into how the tool is used, where it falls short and how it could be improved or deployed differently to maximize benefits are incredibly valuable.
Anti-Pattern 3: Not Measuring ROI From the Beginning
As more companies adopt GenAI tools into their software development pipelines, business leaders will want to measure the return on investment (ROI). At first glance, this may seem straightforward: GenAI tools are ubiquitous but also costly, and if not properly managed, they can introduce vulnerabilities and other issues into production environments.
But if you’ve seen success in reducing developer toil with GenAI, you may be able to extract more value from these tools by focusing developers’ attention on other important initiatives, such as streamlining complex processes or automating others.
Measuring GenAI’s impact on DORA metrics is a great place to start, but putting the time savings from these process improvements to work will compound the tool’s usefulness, continuing to increase throughput and improve system stability.
The Solution
Consider using a tool, such as an internal developer portal, that can track the impact of GenAI on DORA metrics. Portals offer visual representations of the entire development pipeline and dashboards for tracking DORA metrics, and because they unite the pipeline in a single interface, they provide a higher level of visibility into the tools’ value.
Portals also allow teams to slice engineering data, so companies can coordinate their tool pilot in one place and track only the teams involved. With a portal in place, tool pilots become routine and repeatable, making it easy to test and scale new tools as needed and to closely track their progress and usefulness across all teams.
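To make the idea concrete, here is a minimal Python sketch of the kind of per-team slicing a portal automates: it computes two DORA metrics, deployment frequency and change failure rate, from a hypothetical list of deployment records tagged by team. The record shape and team names are assumptions for illustration, not any portal’s actual data model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Deployment:
    # Hypothetical record shape; a portal would pull this from your CI/CD tooling.
    team: str
    day: date
    caused_failure: bool  # Did this change trigger an incident or rollback?

def dora_slice(deployments: list[Deployment], team: str, weeks: int) -> dict[str, float]:
    """Deployment frequency and change failure rate for one team over a window."""
    team_deploys = [d for d in deployments if d.team == team]
    return {
        "deploys_per_week": len(team_deploys) / weeks,
        "change_failure_rate": (
            sum(d.caused_failure for d in team_deploys) / len(team_deploys)
            if team_deploys else 0.0
        ),
    }

# Example: compare the pilot team against a non-pilot team over a 4-week window.
deployments = [
    Deployment("frontend-pilot", date(2024, 6, 3), False),
    Deployment("frontend-pilot", date(2024, 6, 5), False),
    Deployment("frontend-pilot", date(2024, 6, 10), True),
    Deployment("backend", date(2024, 6, 4), False),
    Deployment("backend", date(2024, 6, 18), True),
]
print(dora_slice(deployments, "frontend-pilot", weeks=4))
print(dora_slice(deployments, "backend", weeks=4))
```

Running the same comparison before and after the pilot begins shows whether the GenAI tool is moving throughput up without dragging stability down.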
Wrapping Up
When adopting GenAI into engineering systems, it’s important to focus on where and why the tool is used in a tightly controlled pilot period, and pay close attention to its impact on key metrics. This will help deliver a good developer experience and measure the true value of the chosen GenAI tool across the organization.
Ideally, after adoption, throughput measures like release frequency go up; stability and quality measures like bugs and vulnerabilities go down; and developers show love for the AI tools in surveys. If a company is unable to measure one or more of these areas, it is in an anti-pattern.