How much is a search engine like a human being when it comes to gathering data in the world around it and making judgments about that data?
You can break the human sensory system down into five parts: sight, taste, smell, touch, and hearing. Our visual system has at least two major components: rods, which help us see in black and white and give us night vision, and cones, which give us the ability to discern between different colors. We also tend to notice things near us that are in motion more than stationary objects – our eyes are drawn to them.
Google might also use a program that simulates more modern browsers to understand the different segments of pages and the whitespace that separates them. You can see evidence of Google collecting pages as they appear on the Web in the cached copies it makes available when a page is unavailable for one reason or another.
Google might pay attention to the links between pages as it crawls the Web. It might look at information on pages about specific people, places, things, or concepts, and use those to build its knowledge base. Google also collects information for vertical searches, such as Images, Videos, Maps, or News, and the analysis and ranking signals it uses for those may differ in some significant ways from those used for Web search results.
But I didn’t anticipate that Google would be granted a patent this week in which it might use facial recognition as a signal to let people log into a computer, or into their Google Accounts. The patent tells us that it might use facial recognition to allow a user to skip one aspect of a login, such as a user name or a password.
The patent is:
Login to a computing device based on facial recognition
Invented by Yoshimichi Matsuoka
Assigned to Google
US Patent 8,261,090
Granted September 4, 2012
Filed September 28, 2011
A method of logging a first user in to a computing device includes receiving an image of the first user via a camera operably coupled with the computing device and determining an identity of the first user based on the received image.
If the determined identity matches a predetermined identity, then, based at least on the identity of the first user matching the predetermined identity, the first user is logged in to the computing device.
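Claim language like this maps to a simple flow: capture an image, estimate an identity from it, compare that estimate to the enrolled identity, and either log the user in or fall back to the usual password prompt. Here is a minimal sketch of that flow in Python; the function names, the toy "embedding," and the similarity threshold are all hypothetical stand-ins for illustration, not details from the patent:

```python
def face_embedding(image):
    # Stand-in for a real facial-recognition model: we simply treat
    # the "image" as a feature vector. A real system would run the
    # camera frame through a trained model here.
    return image

def similarity(a, b):
    # Toy similarity score: the fraction of matching features.
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def attempt_login(camera_image, enrolled_embedding, threshold=0.9):
    # If the captured face matches the enrolled (predetermined)
    # identity closely enough, log the user in and skip one login
    # step; otherwise fall back to a conventional password prompt.
    observed = face_embedding(camera_image)
    if similarity(observed, enrolled_embedding) >= threshold:
        return "logged_in"
    return "password_required"

enrolled = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
print(attempt_login([1, 0, 1, 1, 0, 1, 0, 1, 1, 1], enrolled))  # logged_in
print(attempt_login([0, 1, 0, 0, 1, 0, 1, 0, 0, 0], enrolled))  # password_required
```

The threshold is the interesting design choice: set it too low and a stranger's face unlocks the device; set it too high and the legitimate user keeps getting bounced to the password screen, which is exactly the fallback the patent describes.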
How do you feel about a Google of the future that recognizes you based upon what you look like?