This summer, Google announced that they were coming out with a program called Google Now, which seems to be Google’s answer to Siri. As a digital assistant, it anticipates your informational and situational needs almost before you do.

The original Siri patent, Intelligent Automated Assistant, is filled with details on different options it might include in the future, but focuses primarily upon what Apple calls an active ontology that can understand what types of related information people might want to find out more about when focusing upon different topics.

For instance, within the domain of “restaurants,” Siri might anticipate questions about which restaurants are nearby, it might pull up reviews for restaurants, or help to book a reservation, or show a menu. Google Now’s take on the concept of intelligent automated assistant is a little different.

Where Google Now differs is that it attempts to learn from and understand human behavior. A head-to-head comparison at PC Magazine of the newest version of Siri and Android's Jelly Bean voice recognition repeatedly brings up Google Now as a feature that distinguishes the two programs, in Google's favor.

Google was granted a patent this week that describes the predictive algorithm behind Google Now that learns from its owners’ behaviors. It can determine where you live and where you work, and can offer alternative routes to or from work if there’s road congestion on the route you usually take.

It can learn about your Monday night bowling league and that you like watching certain TV shows, and add both to your calendar for you. It can learn what your favorite sports teams might be, and that you like looking at the scores from games in the morning with breakfast. It remembers that you like stopping at a certain coffee house for breakfast most Tuesdays, and that you usually drop your clothes off at the dry cleaners on your Friday night drive from work.
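Behaviors like these boil down to finding regularities in timestamped signals. The patent doesn't prescribe any particular algorithm, but as a toy illustration of how home and work might be inferred from a location history, one could bucket location samples by time of day: nighttime clusters suggest home, weekday-daytime clusters suggest work. All names and thresholds below are invented for the example:

```python
from collections import Counter

def infer_home_work(observations):
    """Guess home and work cells from (hour, weekday, cell_id) location samples.

    observations: list of (hour_of_day, is_weekday, cell_id) tuples, where
    cell_id is any hashable label for a coarse location grid cell.
    Returns (home_cell, work_cell); either may be None if no data.
    """
    night = Counter()    # samples between 10pm and 6am -> likely home
    workday = Counter()  # weekday samples between 9am and 5pm -> likely work
    for hour, is_weekday, cell in observations:
        if hour >= 22 or hour < 6:
            night[cell] += 1
        elif is_weekday and 9 <= hour < 17:
            workday[cell] += 1
    home = night.most_common(1)[0][0] if night else None
    work = workday.most_common(1)[0][0] if workday else None
    return home, work
```

With home and work pinned down, the recurring commute between them is what makes traffic-based route suggestions possible.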

The patent is:

Providing digital content based on expected user behavior
Invented by Sumit Agarwal, Dipchand Nishar, and Andrew E. Rubin
Assigned to Google
US Patent 8,271,413
Granted September 18, 2012
Filed November 25, 2008

Abstract

In a computing system, information regarding a plurality of events that use a computing device is obtained, and a time-dependent increase in activity for each of at least some of the events is identified. An observed interest by a user in an event is correlated with an identified increase in activity for the event. Information about the activity at a time related to the event is provided for review by the user.

Among the inventors is Andy Rubin, the co-founder and former CEO of Android, Inc., and the Senior Vice President of Mobile and Digital Content at Google. Dipchand “Deep” Nishar was the Director of Wireless Products at Google, where he helped start Google’s mobile offerings, and now works as a Senior Vice President, Products & User Experience at LinkedIn. Sumit Agarwal was Head of Mobile Product Management at Google for a little more than a year, and his LinkedIn profile tells us that he and his team launched “20+ features in various Google mobile products.”

The patent describes a number of different types of activities and user behaviors that it might see from its owner, and learn from. Some external signals might be used to predict future user actions, including user requests and communications made while using a computing device. Some user behaviors might be learned via sensors and GPS.

Many of these activities might be used to provide digital content. Siri will tell you the score of the Washington Nationals baseball game when you ask for it. Google Now will notice that you look up the score every morning after a game, and will start showing you the score before you ask for it.
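One way to capture that "show it before you ask" behavior is to look for queries that recur in the same time-of-day window across many days, and surface those proactively. This is a hypothetical sketch, not the patent's method; the function name and threshold are invented:

```python
from collections import defaultdict

def recurring_queries(query_log, min_days=3):
    """Find queries issued in the same hour-of-day bucket on several days.

    query_log: list of (day_index, hour_of_day, query_string) tuples.
    Returns {(query, hour): day_count} for habits seen on >= min_days days;
    anything returned here is a candidate to display before the user asks.
    """
    seen = defaultdict(set)
    for day, hour, query in query_log:
        seen[(query, hour)].add(day)
    return {key: len(days) for key, days in seen.items() if len(days) >= min_days}
```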

The patent provides a very detailed description of the kinds of things it might learn, and how it might provide content in response to what it’s learned from user behavior signals. It includes a wide range of examples as well. For instance, it might potentially receive data from a payment processing service provider to learn where you’ve stopped to purchase coffee or where you’ve stopped to buy gasoline, and generate a timeline based upon such purchases and the places you’ve visited.
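The timeline idea is straightforward once purchase records carry timestamps and merchant names. A minimal sketch, assuming records arrive as ISO-formatted timestamps (the data shape here is an assumption, not taken from the patent):

```python
from datetime import datetime

def build_timeline(purchases):
    """Turn raw purchase records into a chronological timeline of visits.

    purchases: list of (iso_timestamp, merchant_name) tuples, in any order.
    Returns a list of "HH:MM merchant" strings sorted by time of day.
    """
    events = sorted((datetime.fromisoformat(ts), place) for ts, place in purchases)
    return [f"{t:%H:%M} {place}" for t, place in events]
```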

A screenshot showing a timeline of visits to different places from someone using a mobile device.

Take a turn on your trip to work in the direction of that coffee house, and it might suggest a route to the coffee house based upon traffic conditions, or provide other information.

It could also present a coupon for that particular coffee house while you’re on your way, or possibly even from another one that is along the same route, before you arrive.

This system might notice that you like to attend baseball games at the local stadium every so often but, by checking the team’s schedule, that you only go to games when the local team is playing a particular opponent. This might tell Google Now that you’re more of a fan of the opposing team than the local team.

It might present you with a coupon for a restaurant near the stadium about 2 hours before the next game against that opponent if you’ve been consistently going to games involving that team.

I’ve provided a short, high-level overview, but the patent is much more detailed, and is worth spending some time with to understand the difference between the helpful Siri and the predictive Google Now.

Google has also published some very related pending patent applications, such as Providing Results to Parameterless Search Queries.

A parameterless search query might be as simple as someone shaking their phone a number of times (shake once, or shake twice), pressing a button for a certain amount of time, or even providing a command such as “search now.”

A screenshot from the patent showing someone shaking a phone, different options being presented based upon context, and a result for driving directions being shown.

In response to that parameterless query, the mobile computing device might take cues from the context around it to provide an answer.

These cues could include information associated with the device and with the user, such as the time of day, upcoming and recent calendar appointments, direction and rate of speed that the device is traveling, a current geographic location, and even recent device activities such as an email being sent to someone about a meeting scheduled in half an hour.
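The application doesn't give a formula, but conceptually the device scores candidate results against whatever context signals are present and shows the best match. A toy illustration of that idea, where the cue names and candidates are all invented for the example:

```python
def rank_candidates(context, candidates):
    """Rank candidate results by how many of their context cues match.

    context: dict of signal -> value, e.g. {"time": "morning", "moving": True}.
    candidates: list of (label, relevant_cues) pairs, where relevant_cues is a
    dict of signals under which the result makes sense. Best match comes first.
    """
    def score(cues):
        return sum(1 for k, v in cues.items() if context.get(k) == v)
    return [label for label, cues in
            sorted(candidates, key=lambda c: score(c[1]), reverse=True)]
```

So a shake of the phone while driving to a meeting could resolve to driving directions, while the same shake at home in the evening resolves to something else entirely.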

Another related patent application is Activating Applications Based on Accelerometer Data.

A screenshot from the patent showing someone engaged in different activities, such as jogging, taking a train, walking, and sitting at a desk, and the applications associated with each.

Under this patent, we learn that certain accelerometer profiles associated with different types of movements at different points of a day might indicate a preference to see certain types of digital content.

Someone who likes to go for jogs in the morning might like their phone to play music, or they may like to see news on it during a commute, manage email communications at an office, and view calendar information on the walk from a parking garage to the office. Different profiles might automatically call up applications you typically like to use.
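A crude way to approximate that matching is to compare a window of accelerometer readings against stored activity profiles — here, just by mean magnitude — and launch whatever app is mapped to the closest one. Everything below (profile values, activity names, app names) is invented for illustration; a real classifier would use richer features than the mean:

```python
def classify_activity(readings, profiles):
    """Match a window of accelerometer magnitudes to the nearest profile.

    readings: list of acceleration magnitudes (m/s^2) from a sample window.
    profiles: dict of activity_name -> typical mean magnitude for that activity.
    Returns the activity whose typical magnitude is closest to the window mean.
    """
    mean = sum(readings) / len(readings)
    return min(profiles, key=lambda name: abs(profiles[name] - mean))

# Hypothetical profiles and a mapping from detected activity to an app.
PROFILES = {"jogging": 12.0, "commuting": 10.2, "at_desk": 9.8}
APP_FOR_ACTIVITY = {"jogging": "music player",
                    "commuting": "news reader",
                    "at_desk": "email client"}
```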

Google is incorporating user behavior, context information, sensor information, and more to anticipate the needs of users, and to predict the kinds of information and applications that might be appropriate for the people using those devices. That seems pretty useful in a personal assistant.