
AI and IT Security
Last Updated on September 12, 2025 by Editorial Team
Author(s): Andrei Besleaga (Nicolae)
Originally published on Towards AI.

Introduction
This article touches on a few issues concerning IT&C security in the context of the prevalence of new AI and GenAI tools. It is not a rigorous scientific or technical piece; it is written in plain language, from my own perspective and limited experience, and it is in no way as exhaustive as this subject really is.
First, I will list what this article is not about, because each of these subjects would take too long on its own, there is much more to debate about them, and at the moment they are not within my interest or expertise:
- it is not about the institutional perspective, such as enterprise management, assessments, assurance, compliance, governance, or other upper-level issues; for these there are articles and papers such as “Artificial intelligence for system security assurance: A systematic literature review”, International Journal of Information Security (2025), by Shao-Fang Wen, Ankur Shukla, and Basel Katt;
- it is not about the regulatory and legal perspective, even though there are now laws such as NIS2, DORA, and the EU AI Act that are mandatory at least in EU member states (https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence);
- it is not about ethical usage or the user and social engineering perspectives, since this is also a vast and complex subject; the human factor is most probably, as always, the weakest link in any IT/telecom cybersecurity scenario, and there are already papers and articles covering it;
- it is not about governmental, military, or other secret corporate or agency work, since this is beyond the ordinary person's knowledge or power to defend against, even with the laws, state-of-the-art technology, and knowledge available to the masses;
- it is not about the security of AI systems, or of systems that include AI in their architecture; this will probably become a whole new field of security within the existing ones, and there are already articles and papers such as “The AI Security Pyramid of Pain” by Chris M. Ward, Josh Harguess, Julia Tao, Daniel Christman, Paul Spicer, and Mike Tan (on the challenges of AI cybersecurity and a proposed framework), or “A Comprehensive Review of AI Security: Threats, Challenges, and Mitigation Strategies” by Serdar Yazmyradov and Hoon Jae Lee, International Journal of Internet, Broadcasting and Communication, Vol. 16, No. 4, 375–384 (2024);
- it is not about the physical/hybrid security of systems, which is a very complex subject and, along with social engineering, probably the weakest link in all of IT security.
Historical AI usage in IT Security
First, for those who don't know yet: AI has already been used in cybersecurity for a long time, probably starting in the 1980s and 1990s.
The best early implementations were the first antiviruses, which used heuristics to detect new viruses they had no signature for (that is, by intercepting and checking the behavior of all running programs and alerting on possibly malicious behavior), and the first firewalls and intrusion detection systems (IDS), which, with multiple rules and application/traffic inspection, could also detect unwanted behavior from running programs or from other systems' inputs and outputs over the network.
These were implemented at several OSI layers (both the lower layers and the application layer). At the application layer there were also passwords and password-strength estimators, dating from the days when brute-forcing programs were already being used against them.
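To make the heuristic idea concrete, here is a minimal Python sketch of the kind of weighted rule scoring early heuristic antiviruses relied on. Every event name, weight, and threshold below is invented for illustration; real products tuned hundreds of such rules by hand.

```python
# Toy heuristic scanner: scores observed program events against
# weighted rules, roughly how early heuristic antiviruses reasoned.
# All event names, weights, and the threshold are invented here.

SUSPICIOUS_EVENTS = {
    "writes_to_boot_sector": 5,
    "modifies_other_executables": 4,
    "hooks_keyboard_interrupt": 3,
    "opens_many_outbound_connections": 2,
    "deletes_own_file_after_run": 2,
}

ALERT_THRESHOLD = 6  # arbitrary; real products tuned this carefully


def heuristic_score(observed_events):
    """Sum the weights of every suspicious event observed."""
    return sum(SUSPICIOUS_EVENTS.get(event, 0) for event in observed_events)


def scan(program_name, observed_events):
    score = heuristic_score(observed_events)
    verdict = "ALERT" if score >= ALERT_THRESHOLD else "ok"
    print(f"{verdict}: {program_name} (score={score})")


# Fabricated telemetry for two programs:
scan("editor.exe", ["opens_many_outbound_connections"])
scan("dropper.exe", ["writes_to_boot_sector", "modifies_other_executables"])
```

No signature is needed: the second program trips the alert purely on its behavior, which is exactly what let heuristic engines catch viruses they had never seen before.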
Around the 2000s, simple machine learning emerged, with:
- Anomaly Detection — models started being used to identify deviations from baseline network or user behavior, such as in Host-based Intrusion Detection Systems (HIDS) and Network-based Intrusion Detection Systems (NIDS);
- Spam Filtering — Bayesian classifiers became critical in email spam detection, distinguishing between legitimate emails and spam (see the sketch after this list);
- Behavioral Analysis — User and Entity Behavior Analytics (UEBA) emerged, leveraging machine learning to monitor user actions and detect insider threats;
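As a concrete illustration of the Bayesian spam filtering mentioned above, here is a minimal sketch using scikit-learn. The four training messages are fabricated; a real filter learns from thousands of labeled emails.

```python
# Minimal Bayesian spam filter: bag-of-words features fed into a
# multinomial Naive Bayes classifier. Training data is fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "win a free prize now, click here",
    "cheap meds, limited time offer, buy now",
    "meeting moved to 3pm, agenda attached",
    "lunch tomorrow? let me know",
]
train_labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["free prize, click now"]))       # expected: ['spam']
print(model.predict(["agenda for the 3pm meeting"]))  # expected: ['ham']
```

The same Bayesian principle, at much larger scale and with many more features, is what powered the spam filters of that era.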
Around the 2010s came an expansion with deep learning and big data:
- Advanced Threat Detection — AI systems were used to detect advanced persistent threats (APTs) by analyzing large-scale data across endpoints, networks, and cloud environments (see the anomaly-detection sketch after this list);
- Endpoint Protection — AI-powered antivirus solutions like Cylance and CrowdStrike emerged, using deep learning to predict and prevent malware infections in real time;
- Threat Intelligence — Natural Language Processing (NLP) was used to analyze threat intelligence feeds, identify emerging threats, and automate response recommendations;
- Fraud Detection — AI algorithms were deployed to detect fraudulent activities in banking, e-commerce, and other industries by analyzing transactional patterns;
- Phishing Detection — AI models were applied to identify phishing websites and emails by analyzing linguistic patterns, metadata, and sender reputation;
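In the spirit of this large-scale behavioral detection, here is a sketch of unsupervised anomaly detection over network flows using scikit-learn's Isolation Forest. The flow features and their distributions are entirely fabricated for the example.

```python
# Unsupervised anomaly detection on synthetic network flows with an
# Isolation Forest. Features: [bytes_sent, duration_s, distinct_ports],
# all fabricated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# 500 synthetic "normal" flows centered on typical values.
normal_flows = rng.normal(loc=[5_000, 30, 3],
                          scale=[1_000, 10, 1],
                          size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_flows)

# A bulk transfer touching many ports should stand out as anomalous.
test_flows = np.array([
    [5_200, 28, 3],       # looks ordinary
    [900_000, 600, 40],   # suspicious exfiltration-like flow
])
print(model.predict(test_flows))  # 1 = normal, -1 = anomaly
```

Production systems add far richer features (process lineage, authentication events, cloud API calls) and analyst feedback loops, but the core idea of isolating rare behavior is the same.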
Current AI solutions
- AI-Driven Security Operations Centers (SOCs) — Automated threat hunting and incident response using AI-powered tools became standard in SOCs, while AI-assisted SIEM (Security Information and Event Management) solutions integrated predictive analytics;
- Automated Malware Analysis — AI was used to reverse-engineer malware and automate dynamic analysis in sandbox environments;
- Deception Technologies — AI-powered honeypots and deception systems lured attackers while collecting intelligence about their tactics, techniques, and procedures (TTPs); a bare-bones skeleton of the idea follows this list;
- Zero-Day Threat Detection — AI models focused on detecting previously unknown vulnerabilities and exploits by analyzing exploit behaviors;
- Passwordless Authentication — AI enabled biometrics, behavioral analytics, and multifactor authentication mechanisms;
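As a bare-bones illustration of the deception idea mentioned above, the sketch below listens on an unused port, logs every connection attempt, and serves a fake banner. The port and banner are arbitrary choices; real deception platforms layer AI-driven interaction and TTP collection on top of skeletons like this.

```python
# Minimal honeypot skeleton: log connection attempts to a fake
# SSH-like service. Port and banner are arbitrary illustrations.
import socket
from datetime import datetime, timezone

HOST, PORT = "0.0.0.0", 2222
BANNER = b"SSH-2.0-OpenSSH_8.9\r\n"

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"honeypot listening on port {PORT}")
    while True:
        conn, addr = srv.accept()
        with conn:
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} connection from {addr[0]}:{addr[1]}")
            conn.sendall(BANNER)        # bait the client into talking
            conn.settimeout(5)
            try:
                data = conn.recv(1024)  # record whatever the attacker sends
                print(f"  first bytes: {data[:60]!r}")
            except socket.timeout:
                pass
```

Everything done against a machine like this is, by definition, suspicious, which is what makes even a trivial honeypot a useful intelligence sensor.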
Other emerging trends and uses of AI for Cybersecurity
- AI vs. AI — AI tools were used defensively and offensively, with attackers employing AI to evade detection systems;
- Adversarial AI Defense — Techniques like adversarial training were developed to defend against adversarial attacks on machine learning models (a toy sketch follows this list);
- Proactive Cybersecurity — Predictive models were used to forecast potential attack surfaces based on historical data and global threat landscapes;
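To show what adversarial training means mechanically, here is a toy sketch: a numpy logistic regression is trained on clean points plus FGSM-perturbed copies of them, so the decision boundary stays robust to small input perturbations. The data, epsilon, and learning rate are arbitrary toy choices, not a recipe for production models.

```python
# Toy adversarial training: logistic regression hardened with
# FGSM-perturbed inputs. All data and hyperparameters are fabricated.
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# Two fabricated 2-D clusters (class 0 and class 1).
X = np.vstack([rng.normal([-1, -1], 0.4, (100, 2)),
               rng.normal([1, 1], 0.4, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b, lr, eps = np.zeros(2), 0.0, 0.5, 0.3

for epoch in range(200):
    # FGSM: nudge each input in the direction that increases the loss.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w          # dLoss/dX for logistic loss
    X_adv = X + eps * np.sign(grad_x)

    # One gradient step on clean + adversarial examples together.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * (X_all.T @ (p_all - y_all)) / len(y_all)
    b -= lr * np.mean(p_all - y_all)

acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
acc_adv = np.mean((sigmoid(X_adv @ w + b) > 0.5) == y)
print(f"clean accuracy: {acc_clean:.2f}, adversarial accuracy: {acc_adv:.2f}")
```

Training against the same perturbations that would otherwise fool the model is the entire defense here; real adversarial training does this with deep networks and stronger attacks.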
This historical evolution highlights how AI has shifted from static rule-based systems to dynamic, predictive, and self-learning solutions, with continued innovation driving the field forward.
Other Research
There are currently many articles related to this domain published in the ACM and IEEE libraries, in magazines, and as research theses, each with its specific niche and solutions, for example:
- “Artificial Intelligence in Cybersecurity: A Review and a Case Study” by Selcuk Okdem and Sema Okdem explores the integration of AI in enhancing cybersecurity. It reviews AI applications in combating phishing, social engineering, ransomware, and malware, highlighting their preventative capabilities. The paper includes a case study on using a genetic algorithm (GA) to secure communication within IEEE 802.15.4 networks, commonly used in IoT and wireless sensor networks. The GA generates a secure pseudo-random noise (PN) sequence, enhancing security without compromising performance (a toy illustration of the GA idea appears at the end of this section). The study underscores the potential of AI in improving cybersecurity and calls for continued research in this domain;
- other research papers and articles also focus on the new emerging GenAI tools available to the wider public; while I think these could potentially help bad actors and criminals, they could also be used the other way around, as tools to fight cybercriminals (provided we exclude, as much as possible, the false positives, disinformation, deepfakes, and other issues also generated by these interesting times we live in);
- “Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security” by Leroy Jacob Valencia explores how the new GenAI tools can and will pose security problems, but also how leveraging the capabilities of LLMs such as GPT-4 has the potential to identify, exploit, and analyze security vulnerabilities autonomously.
However, the deployment of AI in offensive security presents significant ethical and operational challenges. The agent’s development process revealed complexities in command execution, error handling, and maintaining ethical constraints, highlighting areas for future enhancement.
The study contributes to the discussion on AI’s role in cybersecurity by showcasing how AI can augment offensive security strategies. It also proposes future research directions, including the refinement of AI interactions with cybersecurity tools, enhancement of learning mechanisms, and the discussion of ethical guidelines for AI in offensive roles. The findings advocate for a unique approach to AI implementation in cybersecurity, emphasizing innovation.
- “Generative AI for Cyber Security: Analyzing the Potential of ChatGPT, DALL-E, and Other Models for Enhancing the Security Space” by Siva Sai, Utkarsh Yashvardhan, Vinay Chamola, and Biplab Sikdar (an IEEE survey) discusses the potential applications of generative AI in the cybersecurity domain, focusing on tools like ChatGPT and DALL-E for enhancing security measures such as: password protection, detecting GenAI-written text in attacks, generating examples of adversarial attacks, simulated environments for malware and intrusion detection, threat intelligence, security code generation and transfer, vulnerability scanning and filtering, data privacy protection, bridging the gap between technical experts and non-experts, social media threat hunting, IoT security, deepfake detection and prevention, blockchain security, supply chain security, and customized LLMs for security.
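As promised above, here is a toy genetic algorithm that evolves a binary PN-like sequence. To be clear, this is not the algorithm from the Okdem and Okdem paper; the fitness function below (bit balance plus low off-peak autocorrelation) is only a simple proxy I chose to illustrate how a GA searches for sequences with good statistical properties.

```python
# Toy GA evolving a binary PN-like sequence. Illustrative only:
# NOT the cited paper's algorithm, and the fitness function is a
# simple proxy, not a security guarantee.
import random

random.seed(1)
N, POP, GENS = 31, 40, 150


def fitness(seq):
    bits = [2 * b - 1 for b in seq]            # map {0,1} -> {-1,+1}
    balance = -abs(sum(bits))                   # prefer equal 0s and 1s
    # Penalize large off-peak circular autocorrelation values.
    corr = max(abs(sum(bits[i] * bits[(i + k) % N] for i in range(N)))
               for k in range(1, N))
    return balance - corr


def mutate(seq, rate=0.02):
    return [1 - b if random.random() < rate else b for b in seq]


def crossover(a, b):
    cut = random.randrange(1, N)                # single-point crossover
    return a[:cut] + b[cut:]


population = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]            # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best))
print("sequence:", "".join(map(str, best)))
```

The real case study optimizes for properties relevant to IEEE 802.15.4 spreading codes, but the evolutionary loop (select, cross over, mutate, repeat) has the same shape.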
Conclusions and opinions
In computing and in IT and communications systems and networks, ever since Fred Cohen’s first theory and applications of artificial life for these systems (viruses), and the very first “hackers” of the past trying to expose the bad parts of electronic systems (with the bad ones actually taking advantage of them), it has always been a fight between “good and evil”. I am still wondering how such things can still exist (the dark web, botnets, malware archives, black hats, convicted good guys and escaped bad guys and criminals, etc.), but the answer lies, as always, outside the core of the tech domain and more in politics, legislation, money, justice, mafia and criminality, states and hybrid wars, and, as always, in the weakest link of all of it: the human factor.
My personal opinion on this is that AI (both “old-type” AI and the new GenAI, or other narrow/AGI developments), trained on different kinds of security-related datasets, could be used to improve security:
- since prevention is always the best way, IT professionals (from systems architects, engineers, and developers to infrastructure, DevOps, security pros, management, etc.) should already use and embed these technologies directly in the systems and code they develop, as self-correcting secure code checked, as automatically and self-regulated as possible, before deployment (a minimal example of such a pre-deployment gate appears after this list);
- security-first should be part of everything, from the OS level to the application (web) level, at intra-network and inter-network layers; from RTOS systems, ASICs, and other specialized devices functioning independently or as IoT, to general-purpose, user-friendly OSes; and from intranets and industrial protocols to the current Internet infrastructure and all the other virtual networks and protocols built on it or separately;
- special consideration should be given to domains with separate but critically important infrastructures, such as the different types of communications networks, energy and power-grid networks, banking and other financial protocols and operating systems, automotive and aircraft, public health, and security-critical citizen and governmental data systems;
- use the existing available tools for security improvements where applicable (which is not always easy to do, or easy to remember to do; that is why improving end-user security tools so as to eliminate human weaknesses should be a priority, e.g., unique, simple IDs/logins for users everywhere and for everything, yet almost impossible to break even with AI);
- develop and/or use preventively deployed AI security agents that maximize the security of systems (at all OSI stack layers) using all available training data, while also reducing the risk of false positives (for example, when legitimate users/requests/apps use self-protecting techniques but are already labeled as offensive and denied by the system);
- develop and/or use offensive AI security agents in incident management and post-mortems, to learn from all available data and from new data discovered during analysis, and to automatically trigger alerts or blocks before the next breach;
- use innovation and creativity to stay one step ahead of the real criminals, that is, by developing and using prevention tools that gain the upper hand by having AI predict malevolent usages and reject them beforehand;
- apply special AI security auditing to critical or infrastructure software;
- expand AI into domains not directly related to it, such as cryptography, parts of information theory and algorithmics, the low-level functioning of devices (ISO OSI layers 1 and 2), and new types of system and software architectures; these are the basis of all the modern digital tools we use everywhere and in everything, and they will need refining as enhanced processing power, more data, and new computing paradigms (such as quantum computing) become available to the public and completely change the IT security landscape, along with current and future AI developments;
- and since threat intelligence experts foresee an entirely new set of challenges for the coming years, with everyday AI tools becoming instrumental in cyber attacks (F-Alert Cyber Threats Bulletin), AI tools could probably also be used to counter those attacks; or, as the AI itself can put it better, all AI innovations and implementations could be used, at all levels, for:
– Proactive Threat Detection and Prevention
– AI-Augmented Security Operations Centers (SOCs)
– Enhancing Endpoint and Network Security
– Securing AI Systems Against Adversarial Attacks
– Advanced Malware and Ransomware Detection
– Identity and Access Management (IAM)
– Cloud and Hybrid Environment Security
– Phishing and Fraud Prevention
– AI-Driven Deception and Honeypots
– Cybersecurity in AI-Powered Offensive Operations
– Continuous System Learning and Adaptation
– Inter-System Collaboration and Integration and Global Threat Sharing
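Finally, as promised in the first point of this list, here is a minimal sketch of a pre-deployment security gate. It runs Bandit (a real, widely used static analysis tool for Python security issues) over a codebase and blocks the release on any high-severity finding; the source path and the pass/fail policy are my own illustrative choices, not a standard.

```python
# Pre-deployment security gate: fail the pipeline if Bandit reports
# any high-severity issue. Path and policy are illustrative choices.
import json
import subprocess
import sys

SRC_DIR = "src"  # hypothetical project layout

proc = subprocess.run(
    ["bandit", "-r", SRC_DIR, "-f", "json"],
    capture_output=True, text=True,
)
report = json.loads(proc.stdout)

high = [r for r in report.get("results", [])
        if r.get("issue_severity") == "HIGH"]

for issue in high:
    print(f"{issue['filename']}:{issue['line_number']} "
          f"{issue['test_id']}: {issue['issue_text']}")

if high:
    print(f"FAIL: {len(high)} high-severity issue(s); blocking deployment")
    sys.exit(1)
print("OK: no high-severity issues found")
```

Hooked into CI, a gate like this is one small, concrete instance of the “self-correcting secure code before deployment” idea above; AI-assisted reviewers and LLM-based scanners can be chained into the pipeline in the same way.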