The Internet’s newest obsession is Kate Middleton, specifically her whereabouts following her unexpected surgery in January. Despite initial assurances that the princess wouldn’t resume her duties until Easter, the world couldn’t help but speculate and theorize about Kate’s health and the status of her marriage to Prince William. It didn’t help, of course, that the few photos of the princess released since then have been, let’s say, less than definitive: grainy shots taken from afar and, of course, an infamous family photo that was later found to have been manipulated. (A post on X (formerly Twitter) attributed to Kate Middleton was later published apologizing for the edited photo.)
At last, The Sun published video of Kate and William walking through a farm shop on Monday, which should have put the matter to rest. But the video has done little to reassure the most ardent conspiracy theorists, who argue it’s simply too low quality to confirm that the woman walking is really the princess.
In fact, many have gone so far as to suggest that what we can see proves this is not Kate Middleton. To settle it, some have turned to AI-based photo enhancement software to sharpen the pixels of the video frames and discover once and for all who was walking with the future King of England:
(Embedded tweet; it may have been deleted.)
There you go, folks: this woman is NOT Kate Middleton. It’s… one of these three people. Case closed! Or wait, maybe this is actually the woman from the video:
(Embedded tweet; it may have been deleted.)
Eh, maybe not. Jeez, these results aren’t consistent at all.
That’s because these AI “enhancement” programs don’t do what users think they do. None of these results prove that the woman in the video is not Kate Middleton. They only prove that AI can’t tell you what a pixelated person actually looks like.
I don’t necessarily blame anyone who thinks AI has that power. After all, over the past year we’ve seen AI image and video generators do extraordinary things: if something like Midjourney can render a realistic landscape in seconds, and OpenAI’s Sora can produce a realistic video of non-existent puppies playing in the snow, why can’t one of these programs sharpen a blurry image and show us who’s really behind those pixels?
AI is only as good as the information it has
You see, when you ask an AI program to “enhance” a blurry photo, or to generate additional parts of an image, you’re really asking the AI to add more information to the photo. Digital images are just ones and zeros, after all, and showing more detail in someone’s face requires more information. But an AI can’t look at a blurry face and “know” who is really there through sheer computing power. The only thing it can do is take the information it has and guess what should actually be there.
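To make that concrete, here’s a toy sketch in Python (just numpy, with random arrays standing in for real photos, not the actual video): two completely different “high-res” faces can collapse into the exact same pixelated image, so no amount of computation on the pixelated image alone can tell you which original it came from.

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(img, factor=8):
    """Average each factor x factor block into one pixel (simple pixelation)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# One hypothetical "high-res face" (a random 64x64 grayscale array)...
face_a = rng.integers(0, 256, (64, 64)).astype(float)

# ...and a second one, built by shuffling the pixels inside each 8x8 block of
# face_a: every block keeps the same average brightness, but the fine detail
# is completely different.
face_b = face_a.copy()
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        block = face_b[i:i+8, j:j+8].ravel()  # copies the block's pixels
        rng.shuffle(block)
        face_b[i:i+8, j:j+8] = block.reshape(8, 8)

print(np.allclose(downscale(face_a), downscale(face_b)))  # True: identical pixelation
print(np.abs(face_a - face_b).mean())  # large: the originals differ a lot
```

Both faces pixelate to the identical 8x8 image, so working backwards from that image to “the” original face is mathematically impossible; there are countless candidates.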
So in the case of this video, these AI programs blow up the pixels of the woman in question and, based on their training data, add detail according to what they think should be there, not what actually is. That’s why you get wildly different results each time, and often terrible ones. It’s all just a guess.
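You can mimic that guessing with another small sketch (again just numpy; a real enhancer’s invented detail comes from a trained neural network rather than a random number generator, but the principle is the same): run the same blurry input through a toy “enhancer” twice with different seeds and you get two different images, both perfectly consistent with the input.

```python
import numpy as np

def toy_enhance(lowres, factor=8, seed=None):
    """Stand-in for a generative upscaler: invent detail that is merely
    *consistent* with the blurry input, not recovered from it."""
    rng = np.random.default_rng(seed)
    h, w = lowres.shape
    # Start from a blocky upscale, then sprinkle in made-up "detail".
    out = np.repeat(np.repeat(lowres, factor, axis=0), factor, axis=1).astype(float)
    noise = rng.normal(0.0, 20.0, out.shape)
    # Subtract each block's mean noise, so the result still pixelates back
    # to exactly the same low-res input it started from.
    blocks = noise.reshape(h, factor, w, factor)
    blocks -= blocks.mean(axis=(1, 3), keepdims=True)
    return out + noise

lowres = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
guess_one = toy_enhance(lowres, seed=42)
guess_two = toy_enhance(lowres, seed=7)

print(np.abs(guess_one - guess_two).mean())  # large: two different "answers"
print(np.allclose(guess_one.reshape(8, 8, 8, 8).mean(axis=(1, 3)), lowres))  # True
```

Both outputs match the blurry input equally well, and neither is “the truth”; which one you get depends on the seed, just as the conspiracy theorists’ results depend on which enhancement app they happened to use.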
Jason Koebler of 404 Media offers a great demonstration of how these tools just don’t work. Not only did Koebler try programs like Fotor and Remini on The Sun’s video, with results as disastrous as everyone else’s on the internet, but he also tried them on a blurry image of himself. The results, as you might guess, weren’t accurate. So apparently Jason Koebler is missing, and his role at 404 Media has been taken over by an impostor. #Koeblergate
Now, some AI programs are better at this than others, but usually only in specific use cases. Again, these programs add data based on what they think should be there, so they work well when the answer is obvious. Take Samsung’s “Space Zoom,” which the company advertised as being able to take high-quality photos of the Moon; it turned out to be using AI to fill in the missing data. Your Galaxy snaps a picture of a blurry Moon, and the AI fills in the gaps with pieces of the actual Moon.
But the Moon is one thing; specific faces are another. Sure, if you had a program like “KateAI” that was trained solely on photos of Kate Middleton, it could probably turn a pixelated woman’s face into Kate Middleton’s, but only because it was trained to do exactly that; it wouldn’t actually tell you whether the person in the photo was Kate Middleton. As it stands, there is no AI program that can “zoom in and enhance” to reveal who a pixelated face really belongs to. If there isn’t enough data in the image to tell who’s really there, there isn’t enough data for the AI, either.
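One last toy sketch ties the Moon trick and the hypothetical “KateAI” together (numpy again, with random arrays standing in for photos; “KateAI” is a made-up name, not a real product): an “enhancer” whose training set contains only one subject will hand you that subject no matter who you feed it.

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(img, factor=8):
    """Average each factor x factor block into one pixel (simple pixelation)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical "KateAI": its entire training set is photos of one person.
# (Random arrays stand in for the real photos here.)
kate_photos = [rng.integers(0, 256, (64, 64)).astype(float) for _ in range(5)]

def kate_ai(blurry):
    """'Enhance' by returning the training photo whose pixelated version is
    closest to the blurry input. Whatever you feed it, the output is Kate."""
    return min(kate_photos, key=lambda p: np.abs(downscale(p) - blurry).sum())

# Feed it a pixelated photo of someone who is definitely NOT Kate:
stranger = rng.integers(0, 256, (64, 64)).astype(float)
enhanced = kate_ai(downscale(stranger))
print(any(enhanced is p for p in kate_photos))  # True: the output is always "Kate"
```

Samsung’s Moon feature works for the same reason this toy does: the Moon always shows us the same face, so the answer is baked into the training data. Point the same approach at an unknown person’s face, and it can only ever tell you what it was trained on.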
Credit: lifehacker.com