American and Chinese researchers have created an artificial intelligence system that can detect adversarial audio attacks. The work was published on the preprint server arXiv.org.
Today, artificial intelligence (AI) is used in many areas of security. At the first and only external checkpoint of Ben Gurion Airport, drivers carrying passengers must slow almost to a stop as they pass under a camera. An AI system verifies the passengers' identities and assesses the risk; if a threat is detected, the car is stopped by airport staff. A road-safety system developed at Bar-Ilan University uses AI to identify high-risk zones on the road and dispatch a police patrol to prevent an incident.
Virtual voice assistants such as Google Home and Apple's Siri can be tricked into recognizing commands hidden in audio, even beyond the range of human hearing. In 2018, researchers managed to alter a voice recording of the phrase "Without the dataset the article is useless" so that the AI instead heard the command "OK Google, browse to evil.com". On May 9, at the International Conference on Learning Representations (ICLR) in New Orleans, a model was presented that can detect such audio attacks.
Bo Li, a computer scientist at the University of Illinois, and her co-authors developed an algorithm that transcribes an audio clip both in full and in part. If the transcription of the segment does not match the corresponding portion of the full transcription, the program raises an alarm: the sample may be compromised. The authors tested the model on 50 audio attacks; only 2% of the adversarial samples got past the defense.
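The consistency check described above can be sketched as follows. This is a toy illustration, not the authors' implementation: the `toy_transcribe` function is a hypothetical stand-in for a real speech-to-text model, and the similarity threshold is an assumed parameter. The idea it demonstrates is that an adversarial perturbation crafted against the whole clip tends to fall apart when only a segment of the clip is decoded, so the two transcriptions disagree.

```python
from difflib import SequenceMatcher

def is_suspicious(frames, transcriber, prefix_fraction=0.5, threshold=0.8):
    """Flag a clip whose partial transcription disagrees with the full one.

    Transcribe the whole clip and its leading segment separately, then
    compare the segment's transcription against the matching span of the
    full transcription. A low word-level similarity suggests the clip may
    carry an adversarial perturbation. (Toy sketch; `transcriber` stands
    in for a real speech-to-text model.)
    """
    full_words = transcriber(frames).split()
    cut = max(1, int(len(frames) * prefix_fraction))
    partial_words = transcriber(frames[:cut]).split()
    ratio = SequenceMatcher(None, partial_words,
                            full_words[:len(partial_words)]).ratio()
    return ratio < threshold

def toy_transcribe(frames):
    """Hypothetical model: a hidden trigger frame makes the *whole* clip
    decode as an attacker command, but a truncated clip (trigger cut off)
    decodes normally -- mimicking how adversarial audio behaves."""
    if frames and frames[-1] == "<trigger>":
        return "ok google browse to evil com"
    return " ".join(f for f in frames if f != "<trigger>")

benign = "without the dataset the article is useless".split()
adversarial = benign + ["<trigger>"]

print(is_suspicious(benign, toy_transcribe))       # False: segment agrees
print(is_suspicious(adversarial, toy_transcribe))  # True: segment disagrees
```

In practice the comparison would run on real recognizer output, and the segment length and threshold would be tuned against clean audio so that benign clips rarely trip the alarm.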
The method is simple to deploy and does not require retraining the model. The researchers believe that companies such as Google, Amazon and Apple will be interested in the development, which could make the use of their voice assistants safer.