A recent investigation by Human Rights Watch (HRW) has uncovered a disturbing trend in AI development: images of children are being used to train artificial intelligence models without consent, potentially exposing them to significant privacy and security risks.
Ars Technica reports that Human Rights Watch researcher Hye Jung Han has discovered that popular AI datasets such as LAION-5B contain links to hundreds of photos of Australian children. These images, taken from various online sources, are being used to train AI models without the knowledge or consent of the children or their families. The implications of this discovery are far-reaching and raise serious concerns about the privacy and safety of minors in the digital age.
Han's research, which examined less than 0.0001 percent of the 5.85 billion images in the LAION-5B dataset, identified 190 photos of children from every state and territory in Australia. Because only such a tiny fraction of the dataset was reviewed, the actual number of affected children could be significantly higher. The dataset includes images spanning entire childhoods, making it possible for AI image generators to create realistic deepfakes of real Australian children.