How Do Criminals Use Artificial Intelligence?


As artificial intelligence (AI) becomes more widely adopted, there is a great deal of misunderstanding and misinformation about what it can do and the threats it might pose. Dystopian visions of human destruction at the hands of all-knowing robots abound in our culture, yet many individuals also recognize how much good AI can do for us in terms of advances and new insights.

Machine learning needs huge data sets. Many real-world systems, such as driverless cars, require a complicated mixture of physical computer-vision sensors, sophisticated programming for real-time decision making, and robotics. AI makes deployment easier for the firms that embrace it, but giving AI access to data and granting it any level of autonomy carries significant dangers that must be evaluated.

“Has anyone observed any cases of criminals employing artificial intelligence?”

Security organizations have been asking this question in recent years. However, recent public/private research on AI and machine learning highlights potential attack vectors and provides instances of existing dangers.

Hackers and criminals are just as sophisticated as the communities that design defense systems against them, according to Mark Testoni, president and CEO of SAP NS2, an enterprise security company. They exploit techniques such as CAPTCHA solving and image recognition, virus development, and phishing and whaling, and they are learning when to hide and when to attack.

Instead of hiding behind masks to rob a bank, criminals are using artificial intelligence to conceal themselves.

How Do Criminals Use Artificial Intelligence?

1. Deepfakes

Deepfakes are one of the most common AI abuses: AI techniques are used to create or edit audio and visual content so that it appears authentic.

The term is a blend of “deep learning” and “fake media.” Deepfakes are well-suited to future disinformation operations because they are difficult to distinguish from real content, even with technological solutions. Thanks to the widespread use of the internet and social media, they can reach millions of people around the world at unprecedented speed.

Deepfakes give malicious actors the capacity to distort reality for a large number of people.

In one case, a UK-based energy firm was fooled into sending roughly 200,000 British pounds (about US$260,000 as of writing) to a Hungarian bank account after an attacker used deepfake audio technology to impersonate the voice of the firm’s CEO and authorize the payments.

Given this potential for malicious use, users must be aware of just how realistic AI-powered deepfakes can appear.

2. Whaling and Phishing

You may have received emails that didn’t seem right, such as messages purporting to be from your bank or phone service provider. Producing these is simple enough for anyone who knows a little code and HTML.

Machine learning optimizes the phishing process: by crawling a platform and learning the language and communication style used there, it can produce large numbers of convincing fake emails tailored to particular targets.
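
As a defensive counterpart to this, here is a minimal sketch, not from the original report, of how a toy phishing-email classifier could be trained with scikit-learn. The sample messages and labels are hypothetical placeholders:

```python
# A toy phishing-email classifier: TF-IDF features plus logistic
# regression. Real systems use far larger corpora and richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password now",
    "Urgent: confirm your bank details to avoid account closure",
    "Meeting notes from Tuesday's project sync attached",
    "Lunch on Friday? The usual place at noon",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate (hypothetical data)

# Word and bigram frequencies feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Please verify your password to restore account access"]))
```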

3. Password Guessing Assisted By AI

Cybercriminals are using machine learning to improve password-guessing algorithms. Conventional tools such as HashCat and John the Ripper hash many candidate variations of a password and compare the results against a stolen hash until they find the passcode that corresponds to it.
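
To make the contrast concrete, here is a toy illustration of the hash-and-compare principle these conventional tools rely on; the wordlist and target are made up, and real tools add GPU acceleration and large mutation-rule sets:

```python
# Dictionary-style guessing in miniature: hash each candidate and
# compare it with the target hash. Purely illustrative.
import hashlib

def sha256_hex(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

target_hash = sha256_hex("sunshine1")  # stand-in for a leaked hash
wordlist = ["password", "letmein", "sunshine", "sunshine1"]

for candidate in wordlist:
    if sha256_hex(candidate) == target_hash:
        print(f"Match found: {candidate}")
        break
```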

Cybercriminals, on the other hand, can employ neural networks and generative adversarial networks (GANs) to analyze large passcode datasets and generate password variations that fit their statistical distribution. This leads to more precise and targeted password guesses and a higher chance of profit.

In an underground forum post from February 2020, we discovered a GitHub repository for a password-analysis tool that can parse over 1.4 billion credentials and generate password-variation rules.

4. Assisted Hacking by AI

AI frameworks are also being used by cybercriminals to exploit weak servers.

AI can help them increase the scale and effectiveness of their social engineering efforts: it can learn to recognize patterns in people’s behavior and use them to persuade victims that a video, phone conversation, or email is genuine, convincing them to breach networks and hand over sensitive data. With AI, all of the social skills hackers currently use could be vastly improved.

For example, we came across a discussion thread on rstforums[.]com about “Pwnagotchi 1.0.0,” a tool designed for Wi-Fi hacking through deauthentication attacks. Pwnagotchi 1.0.0 adopts a gamification approach to improve its hacking performance using a neural network model: when the system successfully captures Wi-Fi credentials, it is rewarded and learns to improve its operation on its own.

5. Data Poisoning

Data poisoning is a technique for manipulating a training dataset in order to alter the prediction behavior of a trained model, such as mislabeling spam emails as safe content.

There are two forms of data poisoning attack: attacks on the availability of a machine learning algorithm and attacks on its integrity. Research has shown that poisoning just 3 percent of a training data set can cause an 11 percent drop in accuracy.
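
As an illustration only, a minimal label-flipping experiment along these lines can be sketched with scikit-learn on synthetic data; the exact accuracy drop will vary by dataset and model, and this does not reproduce the cited study:

```python
# Flip the labels of a fraction of the training set and measure the
# effect on held-out accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poison(fraction: float) -> float:
    y_poisoned = y_tr.copy()
    rng = np.random.default_rng(0)
    idx = rng.choice(len(y_poisoned), int(fraction * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip labels on a random subset
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

print("clean:   ", accuracy_with_poison(0.0))
print("poisoned:", accuracy_with_poison(0.03))
```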

Backdoor attacks allow an intruder to feed inputs into an algorithm without the program’s designer knowing. The attacker then uses that backdoor to trick the machine learning system into classifying a string as benign when it actually contains malicious data.

“Data is the lifeblood of machine learning, and we should pay just as much attention to the data we need to train the models as we do to the models themselves,” one panelist said. “The model, the quality of the training, and the data that goes into it all influence user trust.”

6. Generative Adversarial Networks

Generative adversarial networks (GANs) are essentially two AI systems pitted against each other: one generates imitation content while the other spots its flaws. By competing against each other, they produce content convincing enough to pass for the original.
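
A minimal sketch of this adversarial setup, using PyTorch on a toy one-dimensional distribution rather than any real media, might look like the following:

```python
# A toy GAN: the generator G learns to mimic samples from N(4, 1.5)
# while the discriminator D learns to tell real samples from fakes.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0  # "real" data from N(4, 1.5)
    fake = G(torch.randn(64, 8))           # generator output from noise

    # Discriminator: score real samples toward 1, fakes toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean().item())  # approaches 4.0
```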

According to Stephanie Condon of ZDNet, Nvidia engineers created a unique AI model to reproduce PAC-MAN just by monitoring hours of gameplay, without the use of a game engine.

Attackers are utilizing GANs to mimic typical traffic patterns, divert attention away from attacks, and swiftly find and exfiltrate sensitive data, according to panelist Tim Bandos of Digital Guardian.

“Thanks to these capabilities, they’re in and out in 30-40 minutes,” he said. “Once attackers begin to use artificial intelligence and machine learning, these tasks can be automated.”

According to Thomas Klimek’s study, “Generative Adversarial Networks: What Are They and Why We Should Be Afraid,” GANs can be used for password cracking, evading malware detection, and deceiving facial recognition. After being trained on an industry-standard password list, a PassGAN system developed by machine learning researchers was able to guess more passwords than several other tools trained on the same dataset. GANs can build malware that evades machine learning-based detection methods in addition to generating data.

According to Bandos, AI algorithms employed in cybersecurity must be retrained on a regular basis to spot new attack methods.
“As our adversaries develop, so must we,” he explained.

He offered the example of obfuscation, in which a piece of malware is made up mostly of genuine code; a machine learning algorithm would have to be able to spot the small malicious portion hidden within it.

7. Manipulating Bots

If AI algorithms are making decisions, panelist Greg Foss, a senior cybersecurity expert at VMware Carbon Black, believes they can be persuaded to make the wrong decision.

“Attackers can abuse these models if they understand them,” he stated.

Foss detailed a bot-driven attack on a cryptocurrency trading system.

“Attackers went in and figured out how the bots traded and then used the bots to fool the algorithm,” he explained. “This can be used in a variety of situations.”

This technique is not new, according to Foss, but these algorithms are now making more sophisticated decisions, which raises the chance of making a wrong one.

8. Harmful and Dangerous Drugs

Crimes that involve AI planning and autonomous navigation technologies, such as devices built to improve smuggling success rates, are on the rise, including the trafficking, sale, purchase, and possession of prohibited narcotics.

Criminals are using unmanned vehicles to move narcotics from one location to the next.

9. Malware Development

With the help of AI, hackers can now create malware that is difficult to detect. Such malware helps them control webcams and steal, upload, modify, and manipulate files. Criminals write the virus code and then employ password scrapers and other tools to carry out the infection.

10. Cybercrime Involving Chatbots

You’ve probably heard of phishing, a particularly sophisticated kind of cybersecurity threat. Impersonation takes several forms, but the most frequent is a website or communication made to look like an official channel.

Cybercriminals are utilizing artificial intelligence to mimic human behavior. They can, for example, deceive bot-detection systems on social media platforms like Spotify by imitating human-like usage patterns. Cybercriminals can then monetize the malicious system by using AI-assisted impersonation to generate fake streams and traffic for a particular artist.
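
One simple signal such bot-detection systems might rely on, and which attackers therefore try to defeat, is timing regularity. Here is a hypothetical sketch; the threshold and sample data are made up:

```python
# Humans act in bursts; naive bots act on a near-fixed schedule. A low
# coefficient of variation in inter-event gaps suggests automation.
import numpy as np

def looks_automated(event_times, cv_threshold=0.15):
    gaps = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    cv = gaps.std() / gaps.mean()  # coefficient of variation of gaps
    return cv < cv_threshold

rng = np.random.default_rng(0)
human = np.cumsum(rng.exponential(30.0, size=50))          # bursty gaps
bot = np.arange(50) * 30.0 + rng.normal(0, 0.5, size=50)   # near-constant gaps

print("human flagged:", looks_automated(human))  # expected: False
print("bot flagged:  ", looks_automated(bot))    # expected: True
```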

Customers or employees who don’t realize a portal is fake access it and supply highly sensitive information such as credit card and payment details, login credentials, and much more. The unscrupulous parties on the other side then grab the data and exploit it for nefarious purposes, while those affected remain completely unaware, at least until something strange happens to their accounts.

This is almost exactly the scenario that modern chatbots could enable, especially given how widely businesses now use chat and communication platforms for customer service.

Preventing Crime with AI

Here are a few methods for detecting or preventing crimes involving Artificial Intelligence:

1. To guard against AI-based criminality, use defense-in-depth. Consider employing multiple antivirus products on your workstations, servers, and mail transfer agents, among other things. This improves the odds of detecting an attack.

2. Use management systems to examine the log data from your protective mechanisms. Look for odd behavior, such as systems attempting to connect to others that would ordinarily have nothing to do with each other (see the sketch after this list).

3. Ensure that all systems have strong passwords that include alphanumeric and special characters, and that they are changed every 90 days.

4. Decommission any systems that are no longer in use. AI/ML-powered tools and bots will first look for systems with exploitable services before moving on to other similar systems.
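
Illustrating point 2 above, here is a minimal sketch that flags host pairs never seen during a baseline period; the log format and file names are hypothetical:

```python
# Compare today's connections against a known-normal baseline and
# report host pairs that have never communicated before.
import csv

def load_pairs(path):
    """Read (source, destination) pairs from a CSV of connection logs."""
    with open(path) as f:
        return {(row["src"], row["dst"]) for row in csv.DictReader(f)}

baseline = load_pairs("last_month_connections.csv")  # known-normal traffic
today = load_pairs("today_connections.csv")

for src, dst in sorted(today - baseline):
    print(f"Unusual connection: {src} -> {dst} (never seen in baseline)")
```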

Artificial Intelligence in Crime Detection and Prevention

1. Gunfire Detection

ShotSpotter triangulates the position of a gunshot using smart city infrastructure.

According to ShotSpotter, only about 20% of gunfire incidents are reported to 911 by individuals, and even then, callers are often only able to provide ambiguous or even false information. The company says its technology can give authorities the type of gunshot and a location accurate to within 10 feet, in real time.

Many sensors pick up the sound of a gunshot, and a machine learning system triangulates where the shot occurred by comparing data such as when each sensor heard the sound and how loud it was.
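
A minimal sketch of this kind of time-difference-of-arrival (TDOA) localization, with made-up sensor positions and timings rather than ShotSpotter’s actual method, could look like this:

```python
# Solve for a sound source's position from relative arrival times at
# four sensors, using nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s

sensors = np.array([[0, 0], [500, 0], [0, 500], [500, 500]], dtype=float)
true_source = np.array([120.0, 340.0])  # unknown in practice
arrival_times = np.linalg.norm(sensors - true_source, axis=1) / SPEED_OF_SOUND

def residuals(guess):
    # Predicted vs. observed time differences relative to sensor 0.
    predicted = np.linalg.norm(sensors - guess, axis=1) / SPEED_OF_SOUND
    return (predicted - predicted[0]) - (arrival_times - arrival_times[0])

estimate = least_squares(residuals, x0=np.array([250.0, 250.0])).x
print("estimated source:", estimate)  # should be close to (120, 340)
```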

2. Future Crime Hotspots Prediction

PredPol is one startup that uses big data and machine learning to try to anticipate when and where crime will occur. They say that by studying data from previous crimes, they can forecast when and where new crimes will occur. Their technology is currently operational in a number of American cities, including Los Angeles, which was an early adopter.

Their technique is based on the observation that particular types of crime tend to cluster in time and space. They say that by collecting historical data and examining where previous crimes occurred, they can anticipate where future crimes will happen.
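
A minimal sketch of the underlying clustering idea, binning synthetic historical incidents into a grid and ranking the densest cells, not PredPol’s actual algorithm, might look like this:

```python
# Count historical incidents per grid cell; the densest cells are the
# predicted "hotspots" to prioritize. Coordinates are synthetic.
import numpy as np

rng = np.random.default_rng(1)
incidents = np.vstack([
    rng.normal([2.0, 3.0], 0.3, size=(200, 2)),  # hotspot A
    rng.normal([7.0, 8.0], 0.5, size=(120, 2)),  # hotspot B
])

counts, xedges, yedges = np.histogram2d(
    incidents[:, 0], incidents[:, 1], bins=10, range=[[0, 10], [0, 10]]
)

top = np.argsort(counts, axis=None)[::-1][:3]  # three densest cells
for i, j in zip(*np.unravel_index(top, counts.shape)):
    print(f"cell x=[{xedges[i]:.0f},{xedges[i+1]:.0f}) "
          f"y=[{yedges[j]:.0f},{yedges[j+1]:.0f}): {int(counts[i, j])} incidents")
```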

3. AI Security Cameras

While ShotSpotter listens for criminal activity, many other businesses use cameras to watch for it. Hikvision, a large security camera manufacturer in China, stated last year that it would be using Movidius (an Intel company) chips to build cameras with deep neural networks built in.

According to the company, the new camera can better scan for license plates on cars, use facial recognition to look for potential criminals or missing persons, and detect suspicious anomalies such as left luggage in congested areas. Hikvision says that its advanced visual analytics tools can now attain 99 percent accuracy.

AI Misuses And Abuses In Future

In the future, we expect criminals to use AI in a variety of ways. Cybercriminals will almost certainly turn to AI to increase the breadth and scale of their attacks, evade detection, and abuse AI as both an attack vector and an attack surface.

We predict that criminals will employ AI to carry out hostile operations against businesses through social engineering strategies.

Cybercriminals can use AI to automate the earliest phases of an attack: generating content, improving business intelligence gathering, and increasing the rate at which potential victims and business processes are compromised. This can lead to firms being defrauded more quickly and accurately through tactics such as phishing and business email compromise (BEC) scams.

Artificial intelligence can also potentially be used to deceive bitcoin traders. For example, we recently came across a conversation on the blackhatworld[.]com forum concerning AI-powered bots that can learn effective trading strategies from historical data in order to make better predictions and trades.

Beyond this, AI may in the future be used to physically harm people. Facial recognition drones carrying a gram of explosives are reportedly being developed; designed to look like small birds or insects, these unobtrusive drones could be used for micro-targeted, single-person attacks and controlled over cellular internet.

Conclusion

AI is likely to become popular among hackers very soon. Cybersecurity is a huge and active field, and because most of its research is open to the public, it is easy to learn techniques for attacking entire networks. Built-in protocols can go a long way toward preventing AI exploitation.

Finally, AI has the potential to enable a wide range of illicit behaviors. However, if suitable preventive steps are taken, these crimes can be averted.

What was your reaction to the report about criminals using artificial intelligence? Please share your thoughts in the comments section.


