Searching for images with images: the idea has been around for a long time, and some early big databases could do it to a degree, such as IBM's QBIC (Query By Image Content) system. This used to be available only to customers paying a high price and running dedicated big machines…
People are often not well aware of what they do while searching for images, but almost all image search engines on the net are ‘text based’. Now there is TinEye “reverse image search”, and the idea of searching for images with images will soon become common practice. I did a few tests and, given the historic moment, the most obvious one was for Bin Laden. Google image search said it had 290,000,000 pictures for me.
I chose one of them – just the top left one on the first page that came up – and asked TinEye to check its database for me, comparing my chosen image with any other images having the same elements.
‘The same elements’ – therein lies the magic… as my example shows, there are many variations based on just one picture that has been readily available on the internet for years. All kinds of alterations are now available online, as so many people wanted to be rummy, funny, mean or otherwise about Bin Laden. In all, 1340 variations turned up by using the TinEye web site. Many variations were only slight, others deviated greatly from the original. This result comes from an algorithm that searches for a whole set of parameters on a dataset of 1.9532 billion images.
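TinEye has not published how its comparison actually works, but the general family of techniques – perceptual hashing – is easy to sketch. The following is my own illustration, not TinEye's algorithm: it assumes an image has already been reduced to an 8×8 grid of grayscale values, turns that grid into a 64-bit fingerprint, and measures similarity as the number of differing bits.

```python
def average_hash(pixels):
    # `pixels` is an 8x8 grid (list of lists) of brightness values 0-255;
    # a real system would first downscale and desaturate the source image.
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # One bit per pixel: is it brighter than the average?
    return sum(1 << i for i, p in enumerate(flat) if p > avg)

def hamming_distance(h1, h2):
    # Number of differing bits; a small distance suggests similar images,
    # which is how slight alterations of one picture can still be matched.
    return bin(h1 ^ h2).count("1")
```

A colour shift or an added caption changes only a few bits, so an altered picture stays within a small Hamming distance of the original, while a completely different photograph lands far away.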
Search results on TinEye are stored only temporarily, and this was the URL from which I took the examples in my further deliberations below:
We have here such a large data set that we can observe the effectiveness of the comparison algorithm. I was very impressed at first. Even to such an extent that I wondered whether a text element had also been used, as some kind of ‘identifier’ or ‘delimiter’, in the automated search operation. To find out whether that is so, some double checks are necessary. Feeding the system back its own results, and applying different names to images and to the other information around images on the web pages used, could be part of such a method of control. I have not been able to do this yet, and whatever I think up, other people must already have thought of the same or have done it already. It will need an hour or so of searching. Until then I marvel and suspect at the same time, which made me go on, in a bit more detail, with my first test.
I found that the smartness of the visual robot system was – sadly enough – contradicted by the interface it offered. It is a cumbersome, table-like, text-based result, ten at a time, whereby our possible visual associations are constantly hindered by the non-functional design of the TinEye robot page. Even Google Images (not a master of good visual design) has understood that there is the ‘agile eye’, and has offered, since a year or so, a tableau of images. Our eyes can swiftly survey big sets of images within milliseconds – not in the straitjacket of alphabetic sentence structures running from top left to bottom right, making a little jump from the right end to the left start at each line, but in a much more jumpy and associative way. To make my point I selected 47 examples from the search result of the TinEye robot (10 of the 134 web pages on the TinEye site, with 10 images per page) and threw them together in one pane, one tableau.
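For anyone who wants to build such a tableau rather than click through ten-at-a-time pages, the layout arithmetic is simple. This little sketch is my own, not part of any TinEye interface: it computes where each fixed-size thumbnail would sit in a grid.

```python
def tableau_positions(n_images, columns, thumb_w, thumb_h, gap=4):
    # Returns one (x, y) pixel offset per thumbnail, filling the grid
    # row by row, `columns` thumbnails per row, `gap` pixels apart.
    positions = []
    for i in range(n_images):
        row, col = divmod(i, columns)
        positions.append((col * (thumb_w + gap), row * (thumb_h + gap)))
    return positions
```

With 47 thumbnails of 100×100 pixels in 10 columns, the whole set fits in five rows and can be surveyed in one glance.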
While looking at the first hundred results a second time, some doubt crept in as to whether what is offered here is solely the result of a visual search. I decided to venture a bit deeper into the 1340 examples TinEye had come up with, and in the end I looked at all of them, which left me – because of the ‘ten at a time’ interface – with a lame wrist from all the clicking. What a machine can not do without the help of a human, a human can do without a machine with ease, and so I selected a few visual categories that seemed to me not congruent with what I expect automated visual comparison can do. Five main categories – and let us try to forget the level of stupidity of the metamorphoses of the portrait of Osama Bin Laden; the argument is about what an image-comparison algorithm is able to do.
The most unlikely ones to be derived from image comparison alone are 3.3 and 3.5, and something literally on the edge is picture 3.6, which looks like Obama; only on the right-hand side does the contour of Bin Laden remain vaguely visible.
When looking at the examples in row 4, one wonders why, when all these clumsy impersonations come up in a search run on a database of 1.9532 billion images, thousands of other bearded men in white cloth and a white turban are not found as well.
Row 5 seems to be an easy job, as the beard and the facial elements remain constant, though in image 5.5 the blue hat hides one of the eyes almost completely.
My observation at this moment points in the direction of more than just visual search elements. This is of course absolutely fine and a very logical thing to do; it only differs from the explanation given by TinEye on its web site:
TinEye is the first image search engine on the web to use image identification technology rather than keywords, metadata or watermarks. [About page of TinEye]
Many more questions remain, such as whether the face-tracking software developed over the last two decades is one of the elements used in TinEye's comparison techniques; if so, we step from an academic technical discussion into a social one. The potential of automated face tracking of photographs – posted on the internet with all kinds of intentions other than enabling whatever security and surveillance initiatives – can become problematic. TinEye seems to be most popular now with persons and organizations selling pictures and wanting to trace misuse of what they claim to be ‘their copyright’ or ‘intellectual property’. Of course a certain amount of control can be useful, but we know that when it comes to copyright claims only the most powerful will be able to profit, and ownership of images can also lead to undesirable forms of censorship and blockages of what is called ‘fair use’. Other applications of the TinEye robot could have even further-reaching consequences.
Now we all know that any serious secret service has been using such face-tracking tools for many years already, on any photographic material available to it. The question is what happens when everybody starts using such tools: combined with messaging in social networks, this might create havoc, doing the opposite of what these networks claim to be for. Many more effects can be expected, like claims to authorship and fame, and image searches showing that the same visual thing existed somewhere else before or after. Endless fights over who has been copying whom in the digital land of copycats. The big music industry already runs automated sound-sequence comparisons on the tracks and songs that keep raining down from millions of creators and duplicators, trying to construct court cases to catch what they think are geese that will lay them golden eggs in the form of fines. We may count ourselves lucky that such copyright claims can not be projected back through the centuries, because how many great composers would have had to appear in courts called by the lawyers of the music industry, and who will ever acknowledge the collective creativity of uncountable anonymous masses?
Back to our sweet-looking TinEye image robot… I fed it the picture below, which I composed within 5 minutes from three sources, as I wanted to comment on Facebook about people dancing in front of the White House in Washington after the news of the killing of Bin Laden had been announced. Result: zero, said TinEye. Yet anybody following the news would recognize the 2001 Palestinian street dancing after the 9/11 attack + the 9/11 attack itself + a picture from last week of people celebrating in front of the White House.
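The ‘diffused half transparencies’ that defeated the robot are ordinary alpha blending. A minimal sketch of my own, assuming the three source images have already been reduced to equal-size grayscale pixel grids:

```python
def blend(base, overlay, alpha):
    # Mix two equal-size grayscale grids: alpha is the overlay's opacity,
    # 0.0 keeps only the base, 1.0 keeps only the overlay.
    return [[round((1 - alpha) * b + alpha * o)
             for b, o in zip(brow, orow)]
            for brow, orow in zip(base, overlay)]

def composite(img1, img2, img3):
    # A three-source composite like mine: layer the second and third
    # image over the first, each at partial opacity.
    return blend(blend(img1, img2, 0.5), img3, 0.33)
```

Every output pixel is a weighted mixture of all three sources, so none of the original fingerprints survives intact – which is presumably why the search came back empty.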
Diffused half transparencies are not yet within the competence of our lovely robot, and for me that gave a feeling of relief, as by now I fear more than admire the capabilities of TinEye. Digital panopticism is not yet here; the human eye and human memory still reign….
[this article will be extended in the coming weeks with my own and possibly your TinEye double check results]
Wednesday May 11 2011
I could not refrain from playing a bit with the TinEye robot, and so we played ‘hide and seek’ with its own logo… it took three versions to have the robot effectively hiding behind the manipulated lettering of its own logo. A colour change, and diffusing with a lens and grain filter, did not alter the recognition of the word TinEye. Changing the wheel of the logo did not hide him from his own algorithm, but altering the angle of his sensor ears and his arms, plus his facial expression by somewhat subtracting his chin, gave the desired effect. The robot is clearly visible to us, but no longer to its own software.