How Software Development Teams Can Securely and Ethically Deploy AI Tools

To deploy AI tools securely and ethically, teams must balance innovation with accountability—establishing strong governance, upskilling developers, and enforcing rigorous code reviews.


At this point, artificial intelligence (AI)/large language models (LLMs) have emerged as a superpower of sorts for software developers, enabling them to work faster and more prolifically. But teams deploying these tech tools should keep in mind that – regardless of the supersized boost in capabilities – human oversight must take the lead when it comes to security accountability.

After all, developers are ultimately responsible for producing secure, reliable code. Mistakes made during the software development lifecycle (SDLC) often trace back not to AI itself, but to how these professionals use it – and whether they apply the legal, ethical and security-minded expertise required to catch issues before they turn into major problems.

It’s imperative to focus on the potentially disruptive dynamics now, because the presence of AI in coding is here to stay: According to the 2025 State of AI Code Quality report (PDF) from Qodo, more than four out of five developers use AI coding tools daily or weekly, and 59 percent run at least three such tools in parallel.

AI’s impact on security, however, has emerged as a primary concern, with even the best LLMs generating either incorrect or vulnerable products nearly two-thirds of the time – leading industry and academic experts to conclude that the technology “cannot yet generate deployment-ready code.” Using one AI solution to generate code and another to review it – with minimal human oversight – will create a false sense of security, increasing the likelihood of compromised software. Less human-rooted accountability/ownership reduces diligence in the review stage, and discourages teams from establishing long-term best practices/policies for ensuring code is protected and reliable.

Clearly, there is a danger that teams will trust AI too much, as these tools lack a command of the often nuanced context to recognize complex vulnerabilities. They may not fully grasp an application’s authentication or authorization framework, potentially leading to the omission of critical checks. If developers reach a state of complacency in their vigilance, the potential for such risks will only increase.
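To make the authorization concern concrete, here is a minimal, hypothetical sketch in Python (the record store, function names and ownership model are invented for illustration): the first handler resembles what an AI assistant might generate, returning data with no ownership check, while the second restores the check a human reviewer should insist on.

```python
# Hypothetical illustration of an omitted authorization check.
# The data model and function names are invented for this example.

RECORDS = {
    1: {"owner": "alice", "data": "alice-secret"},
    2: {"owner": "bob", "data": "bob-secret"},
}

def get_record_unsafe(user, record_id):
    # As an AI tool might generate it: the record is fetched and
    # returned with no check that the caller owns it.
    return RECORDS[record_id]["data"]

def get_record_safe(user, record_id):
    # Human review adds the ownership check the model omitted.
    record = RECORDS[record_id]
    if record["owner"] != user:
        raise PermissionError("user does not own this record")
    return record["data"]
```

The unsafe version happily hands one user another user's data; nothing in the generated code signals the flaw, which is why contextual review by a developer who knows the application's authorization model matters.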

Ethical, legal questions loom large


Beyond security, team leaders and members must focus more on ethical and even legal considerations: Nearly one-half of software engineers are facing legal, compliance and ethical challenges in deploying AI, according to The AI Impact Report 2025 from LeadDev. (And 49 percent are concerned about security.)

Copyright issues related to training data sets, for instance, could also present real-life repercussions. It’s possible that an LLM provider will pull from open-source libraries to build these sets. But even if the resulting output isn’t a direct copy from the libraries, it could still be based upon inputs for which permission was never given.

The ethical/legal scenarios can take on a highly perplexing nature: A human engineer can read, learn from and write original code from an open-source library. But if an LLM does the same thing, it can be accused of engaging in derivative practices.

What’s more, the current legal picture is a murky work in progress. Given the still-evolving judicial conclusions and guidelines, those using third-party AI tools need to ensure they are properly indemnified from potential copyright infringement liability, according to Ropes & Gray, a global law firm that advises clients on intellectual property and data matters. “Risk allocation in contracts concerning or contemplating AI models should be approached very carefully,” according to the firm.

Best practices for building expert-level awareness

So how do software engineering leaders and their teams cultivate a “security first” culture and a universal awareness of ethical and legal considerations? I recommend the following best practices:

Establish internal guidelines for AI ethics/liability protection. Security leaders must establish traceability, visibility and governance over developers’ use of AI coding tools. As part of this, they need to evaluate the actual tools deployed, how they’re deployed (including ethical considerations), vulnerability assessments, code-commit data and developers’ secure coding skills when incorporating internal guidelines for the safe and ethical use of AI. This should include the identification of unapproved LLMs, and the ability to log, warn about or block requests to use unsanctioned AI products. In setting the guidelines, these leaders need to clearly illustrate the potential risk consequences of a product, and explain how these factors contribute to its approval or rejection.
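The log/warn/block idea can be sketched in a few lines. The following Python is an illustrative sketch only, assuming a team-maintained allowlist of approved AI endpoints; the hostname and policy names are hypothetical, and a real deployment would enforce this at an egress proxy or gateway.

```python
# Minimal sketch of an egress policy check for AI tool traffic.
# The allowlist hostname and policy modes are hypothetical examples.

from urllib.parse import urlparse

APPROVED_AI_HOSTS = {"api.approved-llm.example"}  # team-maintained allowlist

def check_ai_request(url, policy="warn"):
    """Return 'allow', 'warn' or 'block' for an outbound AI tool request.

    Unapproved hosts are warned about by default, or blocked outright
    when the team's policy is set to 'block'.
    """
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_HOSTS:
        return "allow"
    return "block" if policy == "block" else "warn"
```

Whatever the decision, each request should also be logged, so that leaders retain the traceability and visibility described above.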

The guidelines should also incorporate solid, established legal advice, some of which currently recommends that users of third-party AI tools verify the provenance of their training data to mitigate infringement risk. As a relevant example, Ropes & Gray advises that users generally need to avoid unauthorized use of copyrighted content when training any proprietary software that leverages AI.

Upskill and educate developers. To avoid vulnerability-caused reworks and legal and ethical dilemmas, team leaders must upskill developers to grow more proficient and dialed-in on software security, ethics and liability factors which could impact their roles and output. As part of this, they should implement benchmarks to determine the skill levels of team members on these topics, to identify where gaps exist and commit to education and continuous-improvement initiatives to eliminate them.

Communicate – and enforce – best practices. This should include the rigorous review of AI-generated code; it should be standard that code created with these assistants receives the same quality and security review as any other code. For example, as part of their due diligence, teams could validate as many user inputs as possible to prevent SQL injection attacks, while applying output encoding to block cross-site scripting (XSS) vulnerabilities. (The OWASP Foundation and the Software Engineering Institute’s CERT Division provide additional best practices for secure coding.)
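The two checks just mentioned can be sketched briefly. The Python below is an illustrative sketch (the table and function names are invented for the example), using parameterized queries to keep attacker input out of the SQL grammar and standard HTML escaping before embedding user text in a page.

```python
# Sketch of the two reviews above: parameterized SQL to prevent
# injection, and output encoding to neutralize XSS. The schema and
# function names are invented for illustration.

import sqlite3
from html import escape

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # Placeholder binding treats the input as data, never as SQL,
    # so a payload like "' OR '1'='1" matches nothing.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

def render_comment(comment):
    # Encode user-supplied text before embedding it in HTML so that
    # injected tags render as inert text instead of executing.
    return "<p>%s</p>" % escape(comment)
```

The same review habit applies regardless of whether a human or an AI assistant wrote the original query or template code.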

Developers themselves should take part in the designation of best practices, so they are more engaged with risk management and grow more capable of taking accountability for it.

As software developers increasingly turn to AI to help them meet ever-pressing production deadlines, security leaders must work with them to ensure they gain the awareness and capabilities to take full accountability for their output and any potential red flags that AI-assisted code can generate. By establishing guidelines about security, ethics and legal issues – and investing in the education and benchmarking required to follow the guidelines – teams will operate with much more expertise and efficacy. They’ll meet those deadlines without sacrificing speed or innovation, while minimizing the pitfalls that disrupt the SDLC – and that’s a great superpower to have.

Written By

Matias Madou, Ph.D. is a security expert, researcher, and CTO and co-founder of Secure Code Warrior. Matias obtained his Ph.D. in Application Security from Ghent University, focusing on static analysis solutions. He later joined Fortify in the US, where he realized that it was insufficient to solely detect code problems without aiding developers in writing secure code. This inspired him to develop products that assist developers, alleviate the burden of security, and exceed customers' expectations.
