The fake videos and images show how generative AI has already become a staple of modern conflict. On one side, AI-generated content of unknown origin is filling the void left by state-sanctioned media blackouts with misinformation; on the other, the leaders of these countries are sharing AI-generated slop to spread the oldest forms of xenophobia and propaganda.
Waiting on the platform for a morning train that was nowhere to be seen, he asked Meta’s WhatsApp AI assistant for a contact number for TransPennine Express. The chatbot confidently sent him a mobile phone number for customer services, but it turned out to be the private number of a completely unconnected WhatsApp user 170 miles away in Oxfordshire.
"I had one patient whose relative started filming while I was trying to set up," said Ashley d'Aquino, a therapeutic radiographer from London.
"It wasn't the right time - I was trying to focus on delivering the treatment."
"We had a member of staff who agreed to take photos for a patient," she said.
"When the patient handed over her phone, the member of staff saw that the patient had also been covertly recording her, to publish on her cancer blog."
People are increasingly turning to social media for mental health support, yet research has revealed that many influencers are peddling misinformation, including misused therapeutic language, “quick fix” solutions and false claims.
Adele Zeynep Walton’s sibling Aimee was a talented artist who loved music. It was only after her death that Walton realised Aimee had been lured into a dangerous community – and that others may also be victims of it.
Researchers at Drexel University’s College of Computing & Informatics analyzed more than 35,000 Google Play reviews of Replika, a chatbot marketed as a judgment-free virtual friend. The study found more than 800 complaints describing harassment and inappropriate conduct, including unsolicited sexual advances and explicit images.