“The most likely potential threat today is military use,” says Evgeny Kolesnikov, director of the machine learning center at Jet Infosystems (“Infosistemy Jet”). “For example, drones with facial recognition that identify a specific person as a target and can set off an explosion. With access to explosives and a drone, one could literally build such a device in their own backyard.”

In the military sphere, AI has long been making vital decisions in certain types of weapons, although strategic decisions are still left to humans.

Remote or programmed control of any moving object capable of causing harm and equipped with video feedback, whether a drone, a truck, or a plane, requires special attention from developers and programmers: the price of an error, or of deliberately harmful action, is too high. The same applies to life-support systems and to the use of robots in medicine.

There are more immediate examples that people may encounter in everyday life. In machine learning systems used by search engines and web application developers, errors can result in violations of the rights of entire groups of people. For example, Google had to apologize for mistakes its AI made in image search for the English terms “gorilla,” “monkey,” and “chimpanzee”: the system’s results included people with dark skin, and accusations of racism followed. And the news generator from OpenAI, the company co-founded by Elon Musk, proved capable of producing fake information that looked very convincing.

Another vulnerable category is the “smart home.” Anyone with basic programming skills can connect to the controls for household appliances, lighting, and so on. Moreover, smart home systems from Amazon, Google, and Apple offer voice-command and video-surveillance functionality when enabled. For all the benefits of such services, there is no guarantee that data about who is at home will remain confidential. If privacy is an absolute value for the owner, a number of functions must be disabled, essentially undoing the “smart home” as such.

In one case, the voice assistant Siri became an accomplice in the “hacking” of a smart home: a neighbor simply asked Siri to open the door using an iPad lying in the living room, and the system recognized his command. Smart home systems identify their owners by several parameters, so there is also the opposite risk: if, for example, the owner falls ill, the system may fail to recognize his voice, or may misinterpret a change in his appearance and refuse to let him into the house.

“When it comes to law enforcement systems operating autonomously, any error in recognizing a person (if, for example, the system mistakes him for a criminal) could cost him his life,” concludes Dmitry Galkin, managing partner of Marketing Logic and an expert in Big Data and geomarketing. Developers still have hard work ahead of them to prevent this.
