Cybersecurity trends for 2025

The cybersecurity company ESET outlines the likely landscape of cyberattacks for 2025, taking into account the threats that emerged during the year now drawing to a close.

In 2024, the agenda was shaped by cybersecurity trends such as the growth of Malware-as-a-Service, which made large-scale attacks easier to deploy, the use of Telegram by cybercriminals, and ransomware, one of the most worrying threats at the business and government level. Building on this context, and considering the new technologies and implementations seen during the year, the ESET Research Laboratory, part of a company that is a leader in proactive threat detection, presents the trends it expects to be central to the cybersecurity scene in the coming year.

“We theorize that 2025 will be marked by the growing need to protect OT (Operational Technology) systems, which are essential for critical infrastructures. In addition, the malicious use of generative AI will pose new threats. These issues will be linked to legal and ethical challenges that raise the need for clearer and more effective regulations,” says Fabiana Ramirez Cuenca, Researcher at the ESET Latin America Laboratory.

Uses of Generative AI

Generative AI is perhaps the most widely adopted form of artificial intelligence today, notable for its ability to produce content such as text, images, video, music and voices, which, for example, can improve creativity and efficiency across industries. However, cybercriminals also exploit it for malicious purposes, such as creating deepfakes and automating and refining cyberattacks. Open-source algorithms of this type can also be accessed, adapted, modified and repurposed. The ability to automate tasks, generate or refine malicious code and plan campaigns, among other uses, makes this technology attractive to malicious actors, even inexperienced ones.

OpenAI, the company behind ChatGPT, recently published the report Influence and cyber operations: an update, which details how various cybercriminals have used its AI models to carry out tasks in the intermediate phases of their operations: after acquiring some basic tools and before deploying the attacks themselves, whether phishing or malware distribution, through different channels. In the same report, the company notes that several APT (Advanced Persistent Threat) groups have used the technology to, for example, debug malicious code, research critical vulnerabilities, refine phishing lures and generate fake images and comments, among other activities.

“By 2025 we could expect the continued use of generative AI to improve campaigns that begin with social engineering; the use of algorithms to design malicious code; the possible abuse of applications from companies that use open-source AI algorithms; and, of course, increasingly sophisticated deepfakes and their possible interaction with virtual reality,” adds Ramirez Cuenca.

Legal and Ethical Challenges of AI

The growth of generative AI and its potential for malicious use raise legal and ethical challenges that, for the most part, have not yet been adequately addressed. These include questions such as who is responsible for the actions of an AI system, what limits should be placed on its development, and which body is competent to judge it. There are currently very few international standards addressing the problems that the use of AI raises, and those that do exist are often insufficient given how quickly the technology is developing.

Among the most notable regulations is the European Union's AI Act (which entered into force in 2024), which aims to guarantee ethics and transparency as well as safe development and the protection of human rights, taking a risk-based approach that classifies algorithms according to how dangerous they are. In parallel, the US has taken several approaches, including a national AI initiative, an Executive Order on the safe and trustworthy use of AI, and a draft AI Bill of Rights that is still under consideration.

At the Latin American level there was no major progress during 2024: most countries have, at best, decrees on the matter, with the exception of Peru, which has a dedicated law. Recently, PARLATINO (the Latin American Parliament) proposed a Model Law that may inspire domestic legislation in each country.

“By 2025, it is likely that there will be greater regulatory scrutiny of AI algorithms and models to ensure transparency and explainability (that their decisions can be understood by people), together with data protection to guarantee privacy in the use of AI. We will see the search for remedies for the harms caused by AI and the promotion, from a regulatory perspective, of ethics in the use and development of this technology. Progress will also continue in cybersecurity regulations applied to AI and in international cooperation,” says the ESET Latin America researcher.

Industrial Control Systems or OT (Operational Technology)

OT comprises the computer systems and devices used to control industrial and physical processes in sectors such as energy, manufacturing, water and gas. These systems manage equipment such as PLCs (Programmable Logic Controllers) and SCADA (Supervisory Control and Data Acquisition) systems, and their main function is process automation.
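
Because many OT components speak simple, unauthenticated industrial protocols once they are placed on a network, even a few lines of code can read their process values. The sketch below is a minimal illustration under assumptions not taken from ESET's material: it polls a couple of holding registers from a hypothetical PLC over Modbus/TCP using the open-source pymodbus library (3.x API); the IP address, port and register addresses are placeholders.

```python
# Minimal sketch: reading process values from a PLC over Modbus/TCP.
# Assumes the open-source pymodbus library (3.x) is installed; the host,
# port and register addresses below are hypothetical placeholders.
from pymodbus.client import ModbusTcpClient

PLC_HOST = "192.168.1.50"   # hypothetical PLC address on the plant network
PLC_PORT = 502              # default Modbus/TCP port

client = ModbusTcpClient(PLC_HOST, port=PLC_PORT)

if client.connect():
    # Classic Modbus/TCP has no authentication or encryption: anyone who can
    # reach the device on the network can issue the same request.
    response = client.read_holding_registers(address=0, count=2)
    if not response.isError():
        print("Register values:", response.registers)
    client.close()
else:
    print("Could not reach the PLC")
```

The point of the sketch is not the protocol itself but the exposure it illustrates: once such interfaces become reachable from IT networks or the internet, protecting them is a cybersecurity problem rather than a purely industrial one.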

The digitalization and connectivity of these systems has made them both interesting targets and vulnerable to cyberattack. Attacks and proofs of concept aimed at them have already been seen, including “Aurora” (a US government test that demonstrated, for the first time, that a cyberattack could cause physical damage to a power generator) and the BlackEnergy and Industroyer malware families (used in attacks on Ukraine's power grid), and these are by no means the only examples. NIST (the US National Institute of Standards and Technology) considers OT security a growing problem and maintains a guide on the subject that it updates regularly.

In 2025, OT will become increasingly relevant to cybersecurity for several reasons, including the aforementioned connectivity of OT devices and the large amount of data they collect. In addition, many of these systems are essential to the operation of critical infrastructure, which makes attacking them attractive to criminals, given the potential to cause serious damage.

“These are the trends that we theorize will be central to cybersecurity for the coming year, a challenging scenario marked by the growth in the use of generative artificial intelligence by cybercrime. This will require adapting defense systems and advancing legal frameworks that address the questions raised by these technologies, including their legitimate and beneficial uses. In addition, attacks on critical infrastructure will continue to be a concern. OT systems will be a key target, due to their interconnection and essential role in strategic sectors. Strengthening their cybersecurity will be a priority, considering their demonstrated vulnerability in recent conflicts, where their exploitation has had serious consequences for the affected populations,” concludes Ramirez Cuenca, from ESET Latin America.

Contact details for ESET: https://www.eset.com/ve/. Also on social networks: Instagram (@esetla) and Facebook (ESET).

With information and reference image provided by ESET and Comstat Rowland

