
Driving AI Strategy with Your DevEx Survey: A Practical Guide
Author(s): Bushra Anjum, Ph.D.
Originally published on Towards AI.
For many startups and small companies, the adoption of AI tools in software development is not occurring through a top-down strategic plan. Instead, it’s mostly organic, developer-led exploration, with individual contributors using tools to boost their personal productivity. While this bottom-up innovation is powerful, it can place leadership in a difficult situation, reacting to new security risks, code quality questions, and IP concerns instead of getting ahead of them.
Engineering leaders often use Developer Experience (DevEx) surveys to move from a reactive stance to a proactive one. To manage AI adoption effectively, we need to apply the same principle. However, it is not enough to simply add a question listing which AI tools people use. To truly understand the landscape, your goal is to diagnose your software development team’s specific “AI personality.” Are they bold, risk-taking enthusiasts? Cautious skeptics? Or a fragmented group of occasional explorers?

Based on industry research, conversations with engineering leaders, and analysis of other DevEx surveys, we propose four questions to add to your regular DevEx survey. The set is designed to provide the data you need to identify your team’s profile, so you can craft a targeted AI strategy that aligns with their actual needs, work habits, and perspectives.
The Diagnostic Toolkit: 4 Questions to Ask Now
These questions are specifically designed to address the challenges and opportunities of a small to medium-sized startup where individual contributors are organically exploring AI tools. They are engineered to provide a snapshot of the current state, perceived impact, primary concerns, and desired support related to AI in your software development process. For each question, we have also included a quick note on why it is being asked and how the responses can drive meaningful action.
Q1: Which of the following AI-powered tools have you used for your work-related tasks in the last 3 months? Please check all that apply and indicate your frequency of use.

Objective: We would like to map current tool usage and frequency. The goal is to directly address the unstructured, exploratory environment where engineers are trying new tools without any official guidance or guardrails. The results will reveal the de facto standard tools within the team, and the frequency will help us understand the existing depth of integration, providing leadership with clear signals about where to focus policy and investment. The “Other” field is a valuable discovery mechanism for emerging or niche tools that may be gaining traction. This foundational data is essential for any subsequent decisions about standardization or licensing.
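Once Q1 responses are in, the tallying itself is straightforward. Below is a minimal sketch of one way to rank tools by adoption and depth of use; the tool names, frequency scale, and weights are illustrative assumptions, not part of the survey design above.

```python
from collections import Counter

# Hypothetical frequency scale and weights -- adapt to your survey's options.
FREQ_WEIGHT = {"daily": 3, "weekly": 2, "monthly": 1, "tried once": 0}

def summarize_tool_usage(responses):
    """responses: list of dicts like {"tool": "Copilot", "frequency": "daily"}.

    Returns tool names ranked by number of users, ties broken by usage depth.
    """
    adoption = Counter(r["tool"] for r in responses)
    depth = Counter()
    for r in responses:
        depth[r["tool"]] += FREQ_WEIGHT.get(r["frequency"].lower(), 0)
    return sorted(adoption, key=lambda t: (adoption[t], depth[t]), reverse=True)

# Example with made-up responses:
responses = [
    {"tool": "Copilot", "frequency": "daily"},
    {"tool": "Copilot", "frequency": "weekly"},
    {"tool": "ChatGPT", "frequency": "monthly"},
]
print(summarize_tool_usage(responses))  # Copilot first: more users, deeper use
```

The ranked list is exactly the “de facto standard” signal described above, and the same pattern extends to per-team or per-role breakdowns.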
Q2: When using AI tools for software development, how do you perceive their impact on the following aspects of your work?

Objective: This question aims to quantify the perceived impact on key tasks. It deconstructs the generic term “productivity” to reveal the specific trade-offs of AI adoption (e.g., are we seeing a significantly positive impact on speed but a slightly negative impact on quality and time spent on debugging?). The results from this question allow leadership to move beyond a simple “AI is good/bad” debate and into a sophisticated discussion about where and how to apply AI for maximum benefit and minimum harm.
Q3: What are your primary concerns or blockers when using AI tools for software development? Please select up to three that are most significant to you. Additionally, how likely would you be to delegate the following tasks to an AI assistant you trust?

Objective: These two questions measure the “trust deficit or abundance” for AI. The first part identifies and prioritizes developer concerns and blockers, and asking respondents to pick only their top three creates a forced-rank priority list for leadership. The second part assesses the level of trust in AI when it comes to more critical engineering tasks, and the responses may offer useful insight for leadership when determining what policies should be implemented, and how strictly they should be enforced, regarding AI usage. The top concerns, combined with willingness to delegate, should inform the company’s long-term Acceptable Use Policy and AI strategy.
Q4: Is there anything else you would like to share about your experience with AI in software development? This could be a specific tool you love, a particular challenge you have faced, or an idea for how we could use AI to our advantage:
<open-ended text response>
Objective: This question captures the unknown unknowns and any related qualitative context. Qualitative data provides the “why” behind the quantitative “what.” It also gives a voice to your developers, demonstrating that their individual perspectives are valued and contributing to a culture of psychological safety. Open-ended questions, though they should be used sparingly, produce invaluable rich context, surfacing unexpected issues and nuanced feedback that the structured questions cannot capture.
Once you have collected the responses, it’s time to connect the dots. The answers to these four questions (what tools are used, their perceived impact, the primary concerns, and the willingness to delegate), when viewed together, will paint a clear picture and likely point toward one of three common team profiles. The profiles are listed in the next section, each requiring a different strategic approach for AI enablement.
From Data to Diagnosis: Identifying the Team’s Profile
Your survey data will likely point toward one of the three common profiles. Here is how to interpret each one and take action.
Profile 1: The Exposed Enthusiasts
For this profile, the survey responses will likely show high usage of various tools and high perceived impact on speed, but also high concerns about code quality and security. The team is moving quickly, but potentially racking up security and technical debt along the way.
Immediate Actions: Establish clear guidelines immediately. Focus on best practices for prompting, code validation, and security. Mandate peer reviews for all significant AI-generated code.
Long-Term Strategy: Invest in tools that provide enterprise-grade security and code analysis. Develop formal training programs focused not just on using the tools, but on using them critically and safely.
Profile 2: The Cautious Skeptics
For this profile, the survey responses will likely show low tool usage, low trust scores, and significant concerns about job security and the reliability of AI suggestions. The team is hesitant and held back by concerns about reliability and impact.
Immediate Actions: Start with education. Run workshops that showcase safe and effective use cases. Create a psychologically safe environment for experimentation. Launch a pilot program with a small group of willing volunteers.
Long-Term Strategy: Focus on building trust. Share success stories from the pilot program. Use the data to show, not just tell, how AI can serve as a powerful assistant rather than a replacement. Provide sanctioned, secure tools to alleviate security concerns.
Profile 3: The Occasional Explorers
For this profile, the survey responses will likely show pockets of high usage and enthusiasm mixed with areas of complete indifference. There are no common tools or practices; everyone is doing their own thing.
Immediate Actions: Use the survey data to identify the tools that are providing the most value and standardize around them. Consolidate your efforts to provide licenses and training for a limited set of vetted tools.
Long-Term Strategy: Build a community of practice. Empower your AI enthusiasts to become internal champions who can share their knowledge and best practices with the rest of the team.
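The diagnosis above can be sketched as a simple heuristic over a few aggregated survey metrics. In the sketch below, each metric is assumed to be pre-aggregated to a 0–1 scale (e.g., share of respondents, or a normalized Likert mean); the metric names and thresholds are illustrative and should be tuned to your own survey.

```python
def diagnose_profile(usage, perceived_speed_gain, quality_security_concern, trust):
    """Map aggregated survey metrics (all 0-1) to one of the three team profiles.

    Thresholds are hypothetical starting points, not calibrated values.
    """
    # High adoption + big perceived speed wins + strong quality/security worries
    if usage >= 0.6 and perceived_speed_gain >= 0.6 and quality_security_concern >= 0.5:
        return "Exposed Enthusiasts"
    # Low adoption and low trust
    if usage <= 0.3 and trust <= 0.4:
        return "Cautious Skeptics"
    # Everything else: fragmented, uneven adoption
    return "Occasional Explorers"

# Example: heavy use, big perceived speed wins, strong quality/security concerns.
print(diagnose_profile(usage=0.8, perceived_speed_gain=0.7,
                       quality_security_concern=0.6, trust=0.5))
```

In practice you would sanity-check such a rule against the open-ended Q4 responses rather than trust the thresholds blindly; teams rarely fall cleanly into one bucket.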
Understanding your software development team’s evolving relationship with AI is not a one-time task. By embedding a thoughtful, structured approach into your existing DevEx surveys, you are not just gathering data; you are building the foundation for an AI strategy that is grounded in real team dynamics. Whether your team is racing ahead, cautiously holding back, or scattered in exploration, the insights from the diagnostic approach detailed in this article will allow you to lead innovation with responsibility and empower your developers while safeguarding your company’s long-term interests.
Note: Content contains the views of the contributing authors and not Towards AI.