How useful is Google's 'Similar Images?'

Hands On Review

Let's face facts: The real reason you'd ever want a search engine to locate "similar images" to one or more pictures you're looking at is that you're not certain what you're looking for, or what you want to find. A search for photographs that look like da Vinci's Mona Lisa is going to turn up more pictures of the same Mona Lisa. And while a search for photos that look like Paris Hilton will turn up more photos of Paris Hilton, a search for photos that look like other one-hit wonders such as William Shatner will turn up pictures of folks we may never have heard of, like someone named Mike Vogel.

So while Google Labs' pre-built experimental searches for its first public incarnation of Similar Images, unveiled Wednesday, do demonstrate an uncanny ability to isolate Paris Hilton pictures from its index, the fact that most of those pictures are labeled "Paris Hilton" anyway suggests that these are not real-world experiments. In the real world, people are looking for a picture of that person in that show with the other guy with the weird hair, or a painting from an artist with the funky name. They're looking for the imaging algorithm to fill in the gaps for the information they don't have on hand, not to demonstrate its ability to mimic a successful search when the information is right in front of our faces.

In the interest of testing Google Labs' Similar Images in something resembling the real world, we came up with some examples of cases where an individual would want to see related images because they looked similar, not because they were contextually related. An example of something that's contextually similar rather than visually similar would be a search for "that glass pyramid in France." By comparison, if you were searching for pictures of skyscrapers covered in mirrored glass, you'd be unlikely to obtain any results at all from an ordinary Google or Bing search, unless by some miracle several photos had been published alongside text that read, "Here is a picture of a skyscraper covered in mirrored glass."

But any kind of Google search starts with text, even when you're using Similar Images. So your hope is to be able to find at least the first item contextually (conventionally), and then use it as a sample upon which to base the target of your search. For our first test, we pictured in our minds a particular mirrored-glass skyscraper of the 20th century, and pretended to forget that it was designed by architect I. M. Pei, that it's called Fountain Place, and that it stands tall on the east side of the skyline of Dallas' central business district. If you're a Texan, you know which one I mean: It's like a big shard of blue glass rocketing upward from the ground as though it had grown there.

Besides the logo up top, the only other obvious difference in this process is that you're looking for a blue hyperlink, "Similar images," along the bottom of items where visually similar samples are available. It's here that you come to realize Similar Images' biggest flaw: It only provides similarity estimates for images it has already predetermined to be similar in the first place, so unless it has indexed "mirrored skyscrapers," it's of no more help to you than if you had simply used Google Images.
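The behavior we saw is consistent with (though not proof of) a similarity index that is built entirely offline. Here's a rough sketch of why that would produce exactly this flaw; every name and data point in it is invented for illustration, and none of it reflects Google's actual implementation.

```python
# Illustrative sketch only -- not Google's implementation. It assumes (our
# assumption) that "Similar images" links come from neighbor lists precomputed
# offline, which would explain why images outside that index show no link at all.

from typing import Dict, List, Optional

# Hypothetical precomputed index: image ID -> IDs judged visually similar offline.
PRECOMPUTED_NEIGHBORS: Dict[str, List[str]] = {
    "paris_hilton_001.jpg": ["paris_hilton_014.jpg", "paris_hilton_032.jpg"],
    "cranach_adulteress.jpg": ["adulteress_copy_02.jpg", "lastsupper_workshop.jpg"],
}

def similar_images(image_id: str) -> Optional[List[str]]:
    """Return the precomputed neighbor list, or None if the image was never
    indexed for visual similarity (i.e., no "Similar images" link is shown)."""
    return PRECOMPUTED_NEIGHBORS.get(image_id)

# A query like "mirrored skyscraper" retrieves images that were never indexed,
# so no similarity data exists for them and the link simply doesn't appear.
print(similar_images("fountain_place_dallas.jpg"))   # None
print(similar_images("paris_hilton_001.jpg"))        # precomputed list
```

If something like this is what's happening, then similarity is only ever computed for whatever slice of the image index Google chose to process ahead of time, which would match what we found in the searches below.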

So what has Similar Images indexed thus far, besides what appears on its front page in Labs? We decided to try a handful of searches that an everyday user might embark upon in the hopes of finding something that's visually, not contextually, similar. Here are some of the searches that did turn up at least one "Similar images" hyperlink on page 1 of the results:

  • Jaguar automobiles, although most of the similar items found there would also easily turn up in an ordinary search for "Jaguar"
  • Yellow bird brings up some links, although one of them is attached not to a bird but to a picture of a Porsche rally car called the "Yellow Bird"; and another is attached to Sesame Street's Big Bird, who is undoubtedly also one of a kind. Here you might prefer some similarity links to actual birds, and indeed there's at least one truly unique specimen. However, the Similar Images that Google actually indexed contain, for reasons I can't quite fathom, mostly pictures of mushrooms, along with various ferns, cacti, and a tourist photo of a gift shop in Hong Kong.
  • Old master Christ disciples painting - The Renaissance period is chock-full of various interpretations of Jesus and the disciples, usually painted according to the preferences and exacting tastes of the artists' respective benefactors (with some notable exceptions). But if you're not familiar with the artists themselves, finding a particular masterwork that pops up in your mind may not be easy -- there's no unique text to go on. This particular query actually pulled up better similar images than we expected, with an item from allthingsbeautiful.com triggering the similarity hyperlink. From there, we did find several other photographs of the same painting (Cranach's Christ and the Adulteress, 1532), plus links to paintings with very similar content. Since disciples/religious paintings of that period did follow almost geometrically regulated guidelines (for instance, the relative positioning of Jesus' head), there are actually some obvious elements for the similarity algorithm to focus on.
  • Italian pasta dish broccoli links to pictures of several appealing pasta dishes, some of which actually do have broccoli. Being able to distinguish something that's green and tree-like from anything else that's green, however, is not something you'd necessarily expect even a high-order algorithm to accomplish. So the Similar Images link generated by a picture from howstuffworks.com instead takes you to a number of very good-looking pasta dishes whose plating and use of color are much the same. You'll find spinach, bay leaves, generous parsley, even spinach pasta, alongside bright yellows and spots of chopped tomato.
  • Black dog long face surprised us at first, since the topmost image retrieved that had a Similar Images link was of a black Labrador with a very long face. Could Similar Images pull up a gallery of similar photos? Here is where you realize how much more useful Similar Images would be if it could keep track of both the context and the visual similarity at the same time -- specifically in this case, if it could remember it's searching for dogs. For this particular target, the index pulled up mostly bald eagles -- round-headed and white -- with a few fish hawks and other wild fowl thrown in. For another example with a Similar Images link, the system pulled up not wild dogs but wild hogs...actually, a few dozen pictures of Harley-Davidson motorcycles, the similarity here to the black poodle in the target photo being absolutely bewildering.

Meanwhile, there were some searches we tried that, like mirrored skyscraper, turned up "bupkus" in the way of Similar Images links, including cashmere sweater (which you might expect would at least turn up an indexed link to similar patterns) and fern blue leaves (the kind of search someone might try in order to pull up a variety of fern she'd otherwise be unable to describe).

Many Google Labs projects are pretty much what they're advertised to be: curiosities, open experiments, ways to throw some ideas out there and see what comes back. Indeed, it's part of the Google philosophy of "Try it first and see what happens"; and it did lead to Google Chrome, arguably the most astonishingly efficient computational engine ever devised for the purposes of mere Internet consumption.

But there is an aimlessness in the Similar Images project that is a little distressing, an indicator that Google isn't always so much about good execution as it is about good ideas. Many of the sub-par query results we received could easily have been improved if the contextual part of the search had been blended with the visual similarity part. That would have led, for instance, to motorcycles and bald eagles being excluded from the dog search, and mushrooms and Hong Kong tourist photos being excluded from the bird search (though asking it to exclude spinach in a search for broccoli may be too much to ask). The same level of expertise used to refine everyday Google searches could be employed here, although the projects where such expertise is regularly applied are also the ones Google keeps most secret.
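To make that suggestion concrete, here is a minimal sketch of what blending the two signals could look like. It is purely our illustration, not Google's ranking code; every name, score, and weight in it is an assumption. Each candidate image gets a contextual (text) relevance score and a visual similarity score to the sample photo, and a weighted blend decides the final ranking, so a motorcycle that merely looks similar falls below an actual dog.

```python
# A minimal sketch of blending contextual and visual relevance -- our
# illustration only, with invented scores; not Google's ranking algorithm.

from dataclasses import dataclass

@dataclass
class Candidate:
    label: str
    text_score: float    # contextual match to the query ("black dog"), 0..1
    visual_score: float  # visual similarity to the sample image, 0..1

def blended_score(c: Candidate, alpha: float = 0.5) -> float:
    """Weighted blend of contextual and visual relevance (alpha is a free knob)."""
    return alpha * c.text_score + (1.0 - alpha) * c.visual_score

candidates = [
    Candidate("black Labrador, long face", text_score=0.9, visual_score=0.8),
    Candidate("Harley-Davidson motorcycle", text_score=0.0, visual_score=0.7),
    Candidate("bald eagle", text_score=0.0, visual_score=0.6),
]

for c in sorted(candidates, key=blended_score, reverse=True):
    print(f"{blended_score(c):.2f}  {c.label}")
# The dog now outranks the motorcycles and eagles that pure visual similarity favored.
```

Even a crude blend like this would keep the visually striking but contextually irrelevant results from crowding out the thing you actually asked for.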

And perhaps that's just one of the downsides of "openness" -- when you're in a forgiving and tolerant mood to begin with, you can forget that the objective of a mission is to cut out the chaff and get to the goal. If Google Similar Images is to become anything more than a toy, someone needs to take charge of it and perhaps add whatever secret sauce the company uses to make regular queries work so well. It may not be in the spirit of openness, but as we've been learning in recent years, openness isn't everything.
