AI-powered “Audio-Jacking”: IBM Discovers New Cybersecurity Threat


Researchers at IBM Security have identified a new cybersecurity threat called “audio-jacking,” in which AI manipulates live conversations using deepfake voices, raising concerns about financial fraud and disinformation.

Researchers at IBM Security recently revealed a novel cybersecurity threat they call “audio-jacking.” The attack uses artificial intelligence (AI) to intercept and modify live conversations in real time: generative AI clones a person’s voice from as little as three seconds of audio, allowing attackers to seamlessly replace the original speech with fabricated content. Such a capability opens the door to serious abuse, from misdirecting financial transactions to altering statements made during live broadcasts or political speeches.

The technique is surprisingly simple to implement. AI algorithms monitor live audio for certain phrases; when one of these phrases is detected, the system can insert deepfake audio into the conversation without the participants’ awareness. This could expose sensitive data or mislead people, with potential uses ranging from financial crime to disinformation injected into critical communications.
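The phrase-spotting step described above can be sketched in a few lines. This is a minimal illustration only: the trigger phrases are invented examples, and a real system would feed flagged utterances from a speech-to-text engine into a voice-cloning text-to-speech model, neither of which is shown here.

```python
# Hypothetical sketch of the keyword-spotting step: scan each
# transcribed utterance for phrases an attacker might target.
TRIGGER_PHRASES = ("bank account", "routing number")

def should_intercept(utterance: str) -> bool:
    """Return True if the transcribed utterance contains a phrase
    the attacker wants to tamper with."""
    lowered = utterance.lower()
    return any(phrase in lowered for phrase in TRIGGER_PHRASES)

# Only utterances mentioning a trigger phrase are flagged for
# replacement; everything else passes through untouched.
print(should_intercept("Let me read you my bank account number"))  # True
print(should_intercept("How was your weekend?"))                   # False
```

Because only flagged utterances need re-synthesis, the rest of the conversation flows through unaltered, which is what makes the tampering hard for participants to notice.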

The IBM team demonstrated that building such a system is not especially difficult. In fact, they found that capturing live audio and integrating it with generative AI tools took more effort than manipulating the conversation content itself. They highlighted the potential for abuse in various scenarios, including altering bank account details mid-conversation, which could cause unwitting victims to transfer money to fraudulent accounts.
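The bank-detail scenario the researchers describe amounts to a simple text substitution on the transcript before the cloned voice re-speaks it. A minimal sketch, with an assumed account-number format and invented numbers:

```python
import re

# Hypothetical illustration of the scenario above: once a trigger
# phrase is heard, the attacker rewrites the account number in the
# transcript before re-synthesis. The digit pattern is an assumption.
ACCOUNT_RE = re.compile(r"\b\d{8,12}\b")

def swap_account(transcript: str, attacker_account: str) -> str:
    """Replace any account-number-like digit string in the transcript
    with the attacker's account number."""
    return ACCOUNT_RE.sub(attacker_account, transcript)

original = "Please wire the funds to account 123456789."
tampered = swap_account(original, "987654321")
print(tampered)  # Please wire the funds to account 987654321.
```

The victim would then hear the tampered sentence spoken in the original speaker’s cloned voice, which is why the swap can go unnoticed.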


To counter this danger, IBM recommends countermeasures such as paraphrasing and repeating essential information during conversations to verify its authenticity, a strategy that can expose inconsistencies introduced by AI-generated audio.
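One could imagine automating part of this repeat-and-verify check: if a critical detail is read back twice and the two transcriptions disagree, that is a red flag. A naive sketch (the comparison logic here is an assumption, not part of IBM’s recommendation):

```python
def details_match(first_reading: str, second_reading: str) -> bool:
    """Naive verification: extract the digits from two independent
    readings of a detail (e.g. an account number) and compare them.
    A mismatch suggests one reading may have been tampered with."""
    digits = lambda s: "".join(ch for ch in s if ch.isdigit())
    return digits(first_reading) == digits(second_reading)

print(details_match("account 1234 5678", "the account is 12345678"))  # True
print(details_match("account 12345678", "account 98765432"))          # False
```

In practice the point of paraphrasing is that a phrase-triggered deepfake system may fail to tamper consistently with reworded repetitions, so discrepancies surface.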

The results of this study highlight the increasing complexity of cyber threats in this era of powerful artificial intelligence and emphasize the need to remain vigilant and develop creative security measures to combat these types of vulnerabilities.

Image source: Shutterstock
