Haven’t laboratory animals suffered enough at the hands of humans? We are greatly indebted to the lab mice, but we still expect more from them.
For hundreds of years, tests conducted on rodents have shed light on scientific research. Countless new medicines and treatments have been developed this way, and all manner of threats that humans have been or could be exposed to have been tested on them. Artificial intelligence (AI) has been touted as the source of all solutions, expected to surpass human intelligence by learning from it. Yet suddenly, we find ourselves seeking a defense against AI-driven threats in the brains of our rodent friends.
The answer is AI-based detection tools.
The deepfake cyber-threat has sat at the top of the online agenda since it first appeared two years ago. Once machines gained the capacity for deep learning, powered by AI's data-analysis abilities that transcend human limits, various seemingly innocent apps emerged. It is now possible to fabricate social media content by deciphering the codes of images and sounds and producing manipulated synthetic outputs that are hard to distinguish from the real thing. The world must now deal with deepfakes, which have the power to throw our social and political future into disarray: once an authentic source has been replaced with a fake, it can spread like a virus on social media. There is reason to be skeptical, because the real identity of the triggerman, nicknamed “deepfakes,” who got things rolling with the first deepfake porn video on Reddit.com, remains unknown. The online world now stands to become the biggest casualty and sustain the greatest damage.
Cyber-security developers are searching for a solution to deepfakes with the help of public-security institutions, universities, and the world's technology giants. To speed up the process, Facebook, Microsoft, Amazon Web Services (AWS), and the Partnership on AI launched the Deepfake Detection Challenge (DFDC), with a million-dollar prize, to spur the development of AI-based deepfake detection tools. From the start of the challenge until March 2020, Zemana's own deepware.ai team has held its place at the forefront of a race with nearly 400 participating teams from around the world. We aim to finish in first place.
Can a rodent’s brain identify fake speech?
An AI-based solution seems a valid, scientific response to an AI-based cyber threat. For that to work, the right data must be fed to an AI modeled on the brain. This is where rodent intelligence comes in as a weapon against deepfakes. Jonathan Saunders of the Institute of Neuroscience at the University of Oregon asserts that rodents can distinguish manipulated synthetic sounds, and the research team he leads believes this is a promising model for identifying deepfake audio. “Analyzing the computational mechanisms that the mammalian auditory system uses to discern fake audio could provide information for the next-generation, generalizable algorithms to detect fakes,” claim the scientists, who presented their academic work at the Black Hat Conference in Las Vegas last August.
Fake speech revealed by perception problems caused by phonetic mistakes.
The study conducted by Alex Comerford and George Williams, along with Saunders, is based on the phonetic perception of mice. As part of the study, the scientists trained mice to discern phonetic categories. The research claims that artificial neural networks could be designed to single out fake speech based on the biological reactions in a mouse's brain when it hears acoustic breaches. In other words, mice perceive speech much as humans do, except that they cannot understand the words they hear. This deficiency becomes an advantage in discerning fake speech: mice do not overlook irregularities, whereas humans try to make sense of words despite their phonetic problems. For instance, a deepfake file may contain a subtle irregularity such as a “b” sound where a “g” should be. When the fake speech orders a “hamburber,” the human ear may miss this tell-tale sign, since we are trained to extract the meaning of sentences while accommodating delivery, accents, and other inconsistencies. The three scientists stated that trained lab mice could identify fake speech with up to 80% accuracy; however, they are not advocating that a rodent army be trained to detect deepfakes. Instead, they hope to learn how mice distinguish fake from authentic speech by monitoring their brain activity at work. The next goal is to train new fake-detecting algorithms on the information collected.
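The core idea, that a subtle substitution like “hamburber” for “hamburger” is a tell-tale sign a human listener auto-corrects away, can be illustrated with a toy sketch. This is a hypothetical simplification, not the researchers' actual pipeline: it assumes speech has already been transcribed into words, and it flags words that are a one-character near-miss of a known vocabulary entry.

```python
# Toy sketch (hypothetical): flag phonetic near-misses that a human
# listener would auto-correct but that can betray synthesized speech.

LEXICON = {"hamburger"}  # expected vocabulary (an assumption for this demo)

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def flag_irregularity(word: str, lexicon=LEXICON) -> bool:
    """Return True if the word is a one-edit near-miss of a known word,
    e.g. 'hamburber' for 'hamburger' -- the kind of substitution a human
    ear tends to gloss over."""
    if word in lexicon:
        return False
    return any(edit_distance(word, known) == 1 for known in lexicon)

print(flag_irregularity("hamburger"))  # False: matches the lexicon
print(flag_irregularity("hamburber"))  # True: "b" where a "g" should be
```

The researchers' actual proposal works at a lower level, on acoustic and phonetic features rather than transcribed words, but the principle is the same: anomalies a human brain smooths over are exactly the signal a detector should amplify.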
Over the past two centuries, humankind has looked to space for the explorations that would expand the horizons of science. Important clues have been gathered to set foot on the Moon, reach Mars, and devise escape plans for when we have exhausted the Earth's resources. There may be global advantages to surrounding the world with satellites and observing the globe. But the strategists who decide the direction of the future have realized that we do not have to go far to create miracles.
AI has shown that many mysteries will resolve themselves once we unravel the mysteries of the brain, which we still understand only in a very limited capacity. Perhaps in this way the universe will change dimensions, and life will evolve. The human race devours the nature it has been entrusted with; if it fails to prevent the malicious use of the superintelligence it has developed, it may have to turn to the brains of lab-trained rodents.