Real Information Artificial Intelligence Series: AI & Multimedia – for Good and Evil
Can You Believe What You See?
By Arthur Weiss
An apology: in my last post, I promised to look at the dangers of using AI for research and summarization. That's still on my list. In this post, I address a different danger and explore what AI promises for multimedia.
AI is advancing so rapidly that it is becoming hard to tell the difference between a real video and an AI-generated one. This was illustrated by a series of viral videos showing Tom Cruise in a rooftop fight with Brad Pitt over their attitudes to Jeffrey Epstein. (See https://www.youtube.com/watch?v=FunqTjCZE8o for the first, https://www.youtube.com/watch?v=9-mq7s0bPV4 for the second, and https://www.youtube.com/watch?v=fbVv0ZPk0fw for a full series of videos, including zombie and robot fights.)
The videos were created on 10 February 2026 by Ruairí Robinson, an Irish film director, mostly known for sci-fi and animation shorts, using Seedance 2.0 – an AI video generator developed by ByteDance (of TikTok fame). Seedance was released in February 2026, and Robinson was allowed to test it, producing the linked videos. Within three days, Disney and Paramount had sent ByteDance cease-and-desist letters accusing the company of IP infringement. ByteDance said it would respect IP and pulled the model, promising to put in safeguards to protect intellectual property and prevent the use of actors such as Pitt and Cruise.
One commenter predicted that “in next to no time, one person is going to be able to sit at a computer and create a movie indistinguishable from what Hollywood now releases” – and naturally, Hollywood is petrified. (To see how the technology can be used, look at the war movie sequence or the Godzilla monster video at the end of the third YouTube video linked above.)
This is how far AI has come in just a few years. Even with protections, why spend millions on a movie paying actors huge sums, when you can generate an action movie at a fraction of the price? In-person acting won’t disappear, especially in theaters, but the movie world will change in ways that were unimaginable half a decade ago.
The problem is that it’s not just famous actors who are worried. We all should be. Grok, integrated into X, allowed users to take images of anybody, including children, and manipulate them – undressing them or putting them in swimwear with prompts as simple as “put her in a bikini.” Although X pulled this capability for users of the free service, it apparently remains available to paid subscribers. The Grok case showed how easy it is to fake and manipulate images, and removing one option doesn’t stop the practice: alternatives include apps that swap one person’s face onto another body and AI “undress” filters. Nor are these the only dangers from such deepfakes. Cell phone apps can manipulate images too. For example, KissMe.ai – AI Kiss Generator promises to take two photos and combine them in a romantic encounter (generally without permission from at least one of the parties). Deutsche Telekom produced a powerful video warning of the dangers of sharing images of children, but its message applies to anyone.
Nevertheless, text-to-image and text-to-video generation tools are important and have legitimate uses. Midjourney.com is a well-established text-to-image generation tool. Rival AI tools include OpenAI’s DALL-E 3, Adobe’s Firefly, Stable Diffusion, and others. Most include filters to prevent pornographic content, but a lesser-known tool, OpenJourney.art, an open-source text-to-image model, has fewer restrictions on what can or can’t be produced.
Simple applications include the obvious production of images for advertising, presentations, and similar business uses. Less obvious applications include repairing or colorizing old photos and adding movement. (An example is at https://aware.tiny.us/sora – starting with a slightly torn black-and-white headshot of me, aged 18.) The genealogy service MyHeritage.com offers similar functionality to bring old family photos to life. Synthesia.io can create full training videos from a user-supplied script, with a single multilingual AI avatar replacing human presenters.
Text-to-video generation tools like OpenAI’s Sora, Invideo.io, Google’s Veo, and others can generate complete movies. Often these have imperfections – for example, look at the cup in https://aware.tiny.us/invideo, a video I created using Invideo.io in a few seconds. Paying for such services produces longer and more sophisticated video output, with fewer of these imperfections.
We’ve now moved beyond “large language models” (LLMs) to “multimodal models,” and all the main chatbots include the ability to generate images in response to a text prompt. They can also read and interpret images. These tools promise to change any industry depending on images or video, which is why Hollywood is aghast at the Seedance videos.
Fortunately, tools are now appearing that can identify what is real and what is fake, although this can still be difficult. Henk van Ess gives an example involving viral images that purportedly showed Jeffrey Epstein still alive. (I started with Epstein, so ending with him dead and buried seems fitting.) Van Ess uses a Google tool called SynthID, which detects a hidden watermark on each Gemini-created image. (See https://www.digitaldigging.org/p/google-makes-the-fake-and-tells-you, or a step-by-step analysis at https://x.com/henkvaness/status/2020598075291840528.)
Arthur Weiss has been an infopreneur for almost 30 years. He founded AWARE in 1995 after a career at the business information company Dun & Bradstreet. He specializes in competitive and marketing intelligence using open sources (OSINT). Recently, he has pivoted to new areas, including exploring how AI tools can support infopreneurs. His latest insights can be read in International Marketing & Competitive Intelligence and Computers in Libraries magazines. He may be contacted at a.weiss@aware.co.uk.





