The idea of using AI to detect, and perhaps even censor, AI-generated content online has gained traction over the past year. Recent evidence suggests, however, that the accuracy of such tools is far from perfect, meaning genuine content could be falsely flagged or censored if they are relied upon.
In the fight against deepfake videos, Intel has released a new system called “FakeCatcher” that can supposedly distinguish between genuine and manipulated digital media. The system’s effectiveness was tested using a mix of real and manipulated clips of former President Donald Trump and current President Joe Biden. Intel reportedly uses photoplethysmography, a technique that detects subtle changes in blood circulation visible in video of a person’s skin, along with eye-movement tracking, to identify and expose deepfakes.
Acclaimed scientist Ilke Demir, who is part of the Intel Labs research team, explains that the process involves determining the authenticity of content based on human benchmarks such as changes in a person’s blood flow and the consistency of their eye movement, the BBC reported.
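To make the photoplethysmography idea concrete, here is a minimal, illustrative sketch of the general principle, not Intel's actual FakeCatcher pipeline: a heartbeat causes tiny periodic color changes in skin pixels, so averaging a color channel over a facial region of each video frame yields a pulse-like signal whose dominant frequency sits in the human heart-rate range. The function names, the synthetic test clip, and the fixed region of interest below are all assumptions made for the example.

```python
import numpy as np

def extract_ppg_signal(frames, roi):
    """Average the green channel over a facial region for each frame.

    frames: array of shape (T, H, W, 3); roi: (top, bottom, left, right).
    Subtle periodic color changes in skin reflect the pulse; this is the
    physiological signal that PPG-based detectors look for.
    """
    top, bottom, left, right = roi
    patch = frames[:, top:bottom, left:right, 1]  # green channel
    return patch.mean(axis=(1, 2))

def dominant_frequency(signal, fps):
    """Return the strongest frequency (Hz) in the detrended signal."""
    detrended = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / fps)
    return freqs[spectrum.argmax()]

# Synthetic 10-second clip at 30 fps with a 1.2 Hz (72 bpm) "pulse"
# added to the skin pixels of a flat gray frame.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = np.full((len(t), 64, 64, 3), 120.0)
frames[:, 16:48, 16:48, 1] += 2.0 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]

signal = extract_ppg_signal(frames, (16, 48, 16, 48))
print(round(dominant_frequency(signal, fps), 1))  # → 1.2
```

A real detector would first locate the face per frame, then test whether the recovered signal is coherent across facial regions, which deepfake generators typically fail to reproduce.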