How Google Now and Phone Sensors Might Change Search as We Know It
Bill Slawski, May 24, 2013
At the Google I/O Developers Conference last week, we were introduced to the future of Search, or as Google's Head of Search Quality Amit Singhal called it, the "death of search." The presentations from the day-long event told us that features like Google Now will provide information to us as we need it, rather than when we ask for it.
Perhaps that's best explained by looking closer at how Google Now works, and by considering a fairly recent hire by Google. In Why Google's Predictive Personal Assistant is better than Siri, a post from last September, I described the patent behind Google Now's predictive algorithm.
For instance, Google Now learns from your habits and your actions. If you go to the ball game at a nearby stadium on a regular basis, Google Now might start regularly showing you a knowledge card with the scores of games from the local team. If you only go to games when the local team is playing a specific competitor, Google Now may figure out that you're a fan of that competitor, and start showing you knowledge cards with their scores. All of this is based upon Google Now learning more of your online and offline habits and activities.
Google Now will be coming to Chrome, and a hands-free verbal searching experience was displayed at Google I/O for desktop searchers as well, referred to as Hot-Word Detection.
While this is worth paying attention to, where things get really interesting is with three new employees at Google: the team behind Behav.io, who have been finding deeper ways to gather and use sensor information from your mobile device, and from the devices of the people you are connected to.
When news of the hiring broke, I looked at the USPTO assignment database to get an idea of what kind of technology the team had been working upon. A patent originally assigned to MIT was reassigned to Behav.io, and it describes the kind of work they've been doing.
They developed a mobile application that can predict whether or not people might install apps based upon their behaviors and those of the people they communicate with. They kept an eye on a number of different kinds of informational graphs to be able to make this kind of prediction.
Here are some examples of those types of graphs:
- A call log graph - with edges weighted by number of calls between nodes,
- A text message graph (with edges weighted by number of text messages between nodes),
- A Bluetooth proximity graph,
- A co-location graph (from GPS data),
- A friendship graph (from Facebook), and
- An affiliation graph (from contacts)
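Those candidate graphs are essentially weighted adjacency structures built from everyday logs. As a rough sketch (my own illustration of the idea, not code from the patent, and the record format here is an assumption), this is how a call log graph with edges weighted by call counts might be built:

```python
from collections import defaultdict

def build_call_log_graph(call_records):
    """Build a weighted adjacency map from (caller, callee) call records.

    Edge weight = number of calls between two users, treated as an
    undirected edge, matching the "call log graph" idea above. The
    record format is a simplifying assumption for illustration.
    """
    graph = defaultdict(float)
    for caller, callee in call_records:
        # Sort the pair so (alice, bob) and (bob, alice) share one edge.
        key = tuple(sorted((caller, callee)))
        graph[key] += 1.0
    return dict(graph)

calls = [("alice", "bob"), ("bob", "alice"), ("alice", "carol")]
print(build_call_log_graph(calls))
# {('alice', 'bob'): 2.0, ('alice', 'carol'): 1.0}
```

The other graphs in the list (text messages, Bluetooth proximity, co-location) would be built the same way, just with different edge-weighting rules.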
If you go back to my (Siri) post above from last September, and click on the link to the patent, it describes how Google might make predictions based upon contextual information. For example, if you drive to work each morning, Google might figure out where you work. If you get in your car to go to work, and there's congestion on the route you usually drive, Google Now might suggest a different commute to you.
Looking at informational graphs like the ones studied by the Behav.io team can provide a much richer set of information to make predictions upon. In addition to these types of communications, the patent describes the many different types of sensors that mobile smart phones come with.
Smart phones can gather data using the many different sensors included within them, from accelerometers to barometers, gyroscopes to magnetometers, ambient light sensors to proximity sensors, network position sensors to whether or not a screen is on or off. The Samsung Galaxy S4 is shipping with a thermometer and hygrometer as well. Given all these sensors, a phone can act as a mobile weather station, and can collect a lot of information that can be used in different ways as well.
The MIT patent goes far beyond predicting which apps people might install, and uses that only as an example. As we're told in the patent:
This invention is not limited to predicting installation of apps. For example, in some implementations, this invention can be used to predict the conditional probability of a user taking any action, including adopting an idea
Sensor data from a mobile device can even be used to tell when you're getting sick.
For example, a phone could predict that you're coming down with the flu a couple of days before you show visible symptoms, based upon movements that show you to be a little slower, weaker, and less steady than normal. It might also look at who you've interacted with physically (by looking at Bluetooth signals between phones, for instance) and at communications between you and others.
The patent also tells us that it can tell when an idea is starting to spread across a network by predicting the diffusion of ideas across that network:
In exemplary implementations of this invention, "trend ignition" in a social-influence campaign in a network is predicted. For example, network data may be used to predict the probability that a certain portion of the user population of a network will adopt an idea (due to diffusion of the idea through the network), if a specific number of users in the network are initial "seeds" for that idea (persons who adopt the idea initially). This enables campaign managers to allocate resources efficiently.
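To get a feel for what predicting "trend ignition" involves, here is a toy simulation of an idea diffusing from a set of seed users. It uses a standard independent-cascade model as a stand-in; the patent's actual prediction method is different, and every name and parameter here is my own:

```python
import random

def simulate_cascade(neighbors, seeds, p=0.1, rng=random):
    """One run of independent-cascade diffusion from a set of seed users.

    Each newly-adopting user gets one chance to convert each neighbor,
    succeeding with probability p. Returns the final set of adopters.
    """
    adopted = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for user in frontier:
            for nb in neighbors.get(user, ()):
                if nb not in adopted and rng.random() < p:
                    adopted.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return adopted

def ignition_probability(neighbors, seeds, threshold, n_users, trials=1000):
    """Estimate the chance that at least `threshold` (a fraction of all
    users) end up adopting the idea, by repeated simulation."""
    hits = sum(
        len(simulate_cascade(neighbors, seeds)) / n_users >= threshold
        for _ in range(trials)
    )
    return hits / trials
```

A campaign manager in this toy setup could compare `ignition_probability` for different seed sets before spending anything, which is the resource-allocation point the patent makes.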
The patent is:
Methods and apparatus for prediction and modification of behavior in networks
Invented by Wei Pan, Yaniv Altshuler, Alex Paul Pentland, and Nadav Aharony
Assigned to Massachusetts Institute of Technology
US Patent Application 20120303573
Published November 29, 2012
Filed May 29, 2012
In exemplary implementations of this invention, mobile application (app) installations by users of one or more networks are predicted. Using network data gathered by smartphones, multiple "candidate" graphs (including a call log graph) are calculated.
The "candidate" graphs are weighted by an optimization vector and then summed to calculate a composite graph. The composite graph is used to predict the conditional probabilities that the respective users will install an app, depending in part on whether the user's neighbors have previously installed the app.
Exogenous factors, such as the app's quality, may be taken into account by creating a virtual candidate graph. The conditional probabilities may be used to select a subset of the users. Signals may be sent to the subset of users, including to recommend an app.
Also, the probability of successful "trend ignition" may be predicted from network data.
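The weighted-graph pipeline from that abstract can be sketched roughly like this. The link function that maps neighbor influence to an install probability is my own illustrative stand-in, not the patent's:

```python
import numpy as np

def composite_graph(candidates, weights):
    """Weighted sum of candidate adjacency matrices into one composite graph,
    as the abstract describes (weights standing in for the optimization vector)."""
    return sum(w * g for w, g in zip(weights, candidates))

def install_probabilities(composite, installed, alpha=1.0):
    """Hypothetical neighbor-influence score: a user's install probability
    grows with the composite-graph weight of neighbors who already have the
    app. 1 - exp(-alpha * influence) maps influence into (0, 1); the patent's
    actual conditional-probability model differs."""
    influence = composite @ installed  # weighted count of installed neighbors
    return 1.0 - np.exp(-alpha * influence)

# Two candidate graphs over three users, as adjacency matrices.
call_graph = np.array([[0, 2, 0], [2, 0, 1], [0, 1, 0]], dtype=float)
sms_graph = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
comp = composite_graph([call_graph, sms_graph], weights=[0.7, 0.3])
# Only user 0 has installed the app so far.
probs = install_probabilities(comp, installed=np.array([1.0, 0.0, 0.0]))
```

Users with stronger composite-graph ties to user 0 get higher predicted install probabilities, which is the "depending in part on whether the user's neighbors have previously installed the app" idea from the abstract.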
The patent is long and very detailed, but worth skimming through with a highlighter, or by making notes in a margin (if your ebook reader can do that), or by pasting it into notepad and deleting all the stuff you don't want to keep (which is what I do).
The difference between what the team working on Google Now was doing and what the people at Behav.io were doing isn't just that the Behav.io team looked at more sensor data on a mobile device. Behav.io has been aggregating data across multiple devices and multiple people to predict the adoption of apps, to figure out how illnesses spread, and to understand where ideas might start and ignite socially.
How will this impact the work being done on Google Now, which according to the Google I/O presentations, is one of the key aspects of the future of search? What will these new employees bring with their new focus on sensors and communication between people on smart phones? It can potentially change things significantly, as noted in the article, Google I/O: How Google Now Is Bringing Search Closer to Science Fiction.
For some more on what the people involved in Behav.io have been working upon, check out the following resources:
- Winner: Behavio / Nadav Aharony
- Investigating Social Mechanisms with Mobile Phones
- The “Friends and Family” Study and the FunF platform (pdf)
The future of search isn't going to be the death of search, but it is working upon knowing the things we might need to know before we realize we need them.