
Hacker groups have been cut off from ChatGPT, but the danger is not over yet – PCW


Groups linked to various governments have also become increasingly adept at using platforms built on large language models.

Artificial intelligence platforms have been spreading across the Internet for more than a year. Although they are mostly used to write text and generate images, their applications are not limited to that. From medicine to engineering to logistics, the technology is already used in many fields, and the number grows every day.

Just as a knife can be used to spread butter on bread or to kill, AI can also be used for a range of things that are unethical at best and illegal at worst. Once programmers began using AI-powered interfaces for harmless coding tasks, it was only a small leap for hackers to discover the novelty for themselves.

According to Microsoft and OpenAI, hackers working for the governments of various countries like to use AI tools built on large language models (LLMs) to make their operations more efficient. These include OpenAI's own platform, ChatGPT.

In a statement published a few days ago, OpenAI said it had carried out several large-scale operations against hacker groups linked to different states. Based on its evidence, the hackers, mostly Russian, North Korean, Iranian and Chinese, used the company's tools to identify targets, improve scripts and develop various planning methods.

According to the company, it was able to identify the activities of five such groups: Charcoal Typhoon and Salmon Typhoon in China, Crimson Sandstorm in Iran, Emerald Sleet in North Korea, and Forest Blizzard in Russia. The accounts associated with them were terminated immediately.


Perhaps the most worrying of these was the activity of Russia's Forest Blizzard, which used OpenAI's large language models to research the Ukrainian military's satellite communication protocols and radar imaging technologies and to understand their technical parameters. Disturbing as that sounds, the other groups have plenty to answer for as well.

Salmon Typhoon used the well-known AI platform to target companies linked to the US government and defense sector, while the North Korean and Iranian groups used the software to produce phishing messages.

Although OpenAI says it has not detected any significant LLM use by other accounts associated with the groups above, all of them were shut down to be safe. Among other things, the case shows that cybersecurity professionals still have plenty of work to do around artificial intelligence technologies.


Copyright © 2024 Campus Lately.