Personalized Collaborative Recommendations

Although several recommendation algorithms have been devised, the most prevalent today is collaborative filtering. All major ecommerce and social media sites, including Amazon, Netflix, and Facebook, employ some variant of collaborative filtering. These systems are "personalized" because they track the user's behaviour, such as pages viewed, purchases, and ratings, to produce recommendations. The approach is called "collaborative" because it treats two items (products or services) as related when many other customers have purchased or stated a preference for both, rather than by analysing sets of product features or keywords.

Personalized collaborative recommender systems have been around since the early 1990s. Early examples include the GroupLens project [1], which targeted movie recommendations, and MIT's Ringo, which recommended music. Both GroupLens and Ringo used a simple collaborative algorithm that computes the "distance" between pairs of users based on how much they agree on items they have both rated. Users whose tastes are relatively "near" each other by this measure are said to be in the same "neighbourhood."
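
A minimal sketch of such a user-user calculation, in Python, using Pearson correlation over co-rated items (GroupLens used a Pearson-based measure; the overlap threshold and the example data here are illustrative assumptions):

    import math

    def pearson_similarity(ratings_a, ratings_b):
        """Similarity between two users from the items they have both rated.

        ratings_a, ratings_b: dicts mapping item id -> rating.
        Returns a correlation in [-1, 1]; higher means "nearer" tastes.
        """
        common = set(ratings_a) & set(ratings_b)
        if len(common) < 2:
            return 0.0  # too little overlap to say anything

        mean_a = sum(ratings_a[i] for i in common) / len(common)
        mean_b = sum(ratings_b[i] for i in common) / len(common)

        cov = sum((ratings_a[i] - mean_a) * (ratings_b[i] - mean_b)
                  for i in common)
        var_a = sum((ratings_a[i] - mean_a) ** 2 for i in common)
        var_b = sum((ratings_b[i] - mean_b) ** 2 for i in common)

        if var_a == 0 or var_b == 0:
            return 0.0
        return cov / math.sqrt(var_a * var_b)

    # Users whose similarity exceeds some threshold form a "neighbourhood".
    alice = {"Alien": 5, "Blade Runner": 4, "Casablanca": 1}
    bob = {"Alien": 4, "Blade Runner": 5, "Casablanca": 2}
    print(pearson_similarity(alice, bob))  # about 0.84: similar tastes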

User-user recommendations, however, tend to suffer from data sparsity: most pairs of users have only a few ratings in common, or none at all, so it is not always possible to form neighbourhoods that make sense.

Most recommendation systems today therefore rely on item-item algorithms, which calculate the distance between each pair of items according to how closely users who have rated both agree. Distances between pairs of items, which may be based on the ratings of thousands or millions of users, tend to be relatively stable over time, so recommenders can precompute them and generate recommendations more quickly. Both Amazon and Netflix use variants of an item-item algorithm.

One problem with both user-user and item-item algorithms is the inconsistency of ratings: users often do not rate the same item the same way when offered the chance to rate it again. Researchers are therefore trying to incorporate this variability into their models; for example, some recommenders ask users to rerate items when their original ratings seem out of sync with everything else the recommender knows about them.
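
Returning to the item-item approach, the sketch below precomputes pairwise item similarities offline; it assumes cosine similarity as the distance measure, which is one common choice (the data layout and function names are illustrative):

    import math
    from collections import defaultdict

    def item_similarities(ratings):
        """Precompute item-item cosine similarities.

        ratings: {user_id: {item_id: rating}}.
        Returns {(item_a, item_b): similarity}. In production these scores
        would be computed offline, since they change slowly.
        """
        # Invert the map to item -> {user: rating}.
        by_item = defaultdict(dict)
        for user, user_ratings in ratings.items():
            for item, r in user_ratings.items():
                by_item[item][user] = r

        sims = {}
        items = sorted(by_item)
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                common = set(by_item[a]) & set(by_item[b])
                if not common:
                    continue
                dot = sum(by_item[a][u] * by_item[b][u] for u in common)
                norm_a = math.sqrt(sum(v * v for v in by_item[a].values()))
                norm_b = math.sqrt(sum(v * v for v in by_item[b].values()))
                sims[(a, b)] = dot / (norm_a * norm_b)
        return sims

At query time the recommender only has to look up the precomputed scores for the items a user already likes, which is why this approach scales better than recomputing user neighbourhoods on every request.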

Both content-based and collaborative filtering algorithms can be too rigid: they detect people who prefer the same item, but they miss potential pairs of users who prefer very similar, yet distinct, items.

One method for computing item similarity is dimensionality reduction, which reduces the potentially very large number of features (the feature space) to a smaller, representative subset. This method is more computationally intensive than the other recommendation algorithms, as the time it takes to factor the ratings matrix grows quickly with the number of customers and products. But the more general representation allows the recommender to detect users who prefer similar yet distinct items, and it substantially compresses the matrix, making the recommender more efficient.
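
As an illustration of the idea, the sketch below factors a tiny ratings matrix with a plain truncated SVD via NumPy. Real systems treat missing ratings more carefully (for instance with regularized matrix factorization), so this is only the simplest possible rendering of the technique:

    import numpy as np

    # Toy ratings matrix: rows = users, columns = items, 0 = unrated.
    R = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    k = 2  # number of latent "taste" dimensions to keep
    U, s, Vt = np.linalg.svd(R, full_matrices=False)

    # Reconstruct the matrix from only the k strongest dimensions.
    R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Users and items now live in the same compact k-dimensional space,
    # so users who like similar-but-different items end up close together.
    print(np.round(R_hat, 1))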

To calculate a customer's similarity to other customers, the recommender has to take the customer's preferences into account. There are many ways to gather these, for example by asking customers to rate their purchases. A recommender can also use the customer's navigation history through its website and the items clicked on to suggest complementary items, and it can combine purchase data with ratings to build a profile of the user's long-term preferences.
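
A hypothetical sketch of how explicit ratings and implicit browsing signals might be folded into a single preference profile; the signal weights are assumptions, not values any particular retailer is known to use:

    def build_profile(explicit_ratings, clicks, purchases,
                      click_weight=0.5, purchase_weight=2.0):
        """Fold explicit and implicit signals into one score per item.

        explicit_ratings: {item: rating on a 1-5 scale}.
        clicks, purchases: iterables of item ids from the browsing history.
        """
        profile = dict(explicit_ratings)
        for item in clicks:
            profile[item] = profile.get(item, 0.0) + click_weight
        for item in purchases:
            profile[item] = profile.get(item, 0.0) + purchase_weight
        return profile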

Recommenders also apply business rules that help ensure their recommendations are both helpful to the customer and profitable for the retailer.
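
For instance, a rule layer might run over the raw collaborative scores like this (the specific rules and the margin tie-break are illustrative assumptions):

    def apply_business_rules(candidates, already_bought, in_stock, margin,
                             min_margin=0.0):
        """Filter and reorder raw recommendations with simple retail rules.

        candidates: [(item, score)] from the collaborative algorithm.
        already_bought, in_stock: sets of item ids.
        margin: {item: profit margin}.
        """
        filtered = [
            (item, score) for item, score in candidates
            if item in in_stock                 # never recommend what can't ship
            and item not in already_bought      # skip items the user already owns
            and margin.get(item, 0.0) >= min_margin
        ]
        # Break ties between similar scores in favour of higher-margin items.
        return sorted(filtered,
                      key=lambda p: (p[1], margin.get(p[0], 0.0)),
                      reverse=True)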

To build trust, the more sophisticated recommender systems strive for some degree of transparency, giving customers an idea of why a particular item was recommended and letting them correct their profiles if they don't like the recommendations they're getting. Such explanations also let users judge how reliable a given recommendation is.
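
One simple way to generate such an explanation, assuming access to the precomputed item-item similarities from earlier (a hypothetical interface; production explanation systems are more involved):

    def explain(recommended, user_ratings, item_sims, top_n=2):
        """Name the user's own highly rated items that most influenced a pick."""
        def sim(a, b):
            return item_sims.get((a, b), item_sims.get((b, a), 0.0))

        # Rank the user's favourites by similarity to the recommended item.
        influences = sorted(
            ((item, sim(item, recommended))
             for item, rating in user_ratings.items() if rating >= 4),
            key=lambda pair: pair[1], reverse=True)[:top_n]
        if not influences:
            return "Recommended because it is popular with similar users."
        names = " and ".join(item for item, _ in influences)
        return "Recommended because you rated {} highly.".format(names)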

There is active research on recommendation algorithms that improve along different dimensions, since different recommendation systems target different performance goals. One measure of an algorithm's effectiveness is how closely its predictions match the ratings users actually give. Here, sellers care far more about errors on highly rated items than on low-rated ones, because the highly rated items are the ones users are most likely to buy. Another performance measure is the extent to which recommendations match actual purchases, though such measures should account for the fact that users sometimes purchase items irrespective of the recommendations made.
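
Two standard error measures, plus a hypothetical weighted variant reflecting the observation that errors on highly rated items matter more (the threshold and weight are illustrative assumptions):

    import math

    def mae(pairs):
        """Mean absolute error between predicted and actual ratings."""
        return sum(abs(p - a) for p, a in pairs) / len(pairs)

    def rmse(pairs):
        """Root-mean-square error; penalises large misses more heavily."""
        return math.sqrt(sum((p - a) ** 2 for p, a in pairs) / len(pairs))

    def weighted_mae(pairs, threshold=4.0, high_weight=3.0):
        """Weight errors on highly rated items more, since those drive purchases."""
        total = weight_sum = 0.0
        for predicted, actual in pairs:
            w = high_weight if actual >= threshold else 1.0
            total += w * abs(predicted - actual)
            weight_sum += w
        return total / weight_sum

    # (predicted, actual) rating pairs from a held-out test set
    test = [(4.5, 5.0), (3.0, 2.0), (4.0, 4.5), (1.5, 1.0)]
    print(mae(test), rmse(test), weighted_mae(test))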

Given the shortcomings of current approaches, recommendation research has started to focus not only on accuracy but also on other attributes, such as serendipity and diversity. Serendipity approaches try to produce unusual recommendations, particularly those that are valuable to one user but not to other, similar users. A diverse list of recommendations does not draw from a single class of products or services but from several; for example, if the user is browsing books, the recommender might also suggest music and computer games.
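
A greedy re-ranking sketch that enforces diversity by capping how many recommendations any one class may contribute; the scheme is illustrative, and published diversity objectives are more refined:

    def diversify(candidates, category_of, max_per_category=1):
        """Re-rank recommendations so no single class dominates the list.

        candidates: [(item, score)] sorted by score, descending.
        category_of: {item: class label, e.g. "book", "music", "game"}.
        """
        seen = {}
        diverse = []
        for item, score in candidates:
            cat = category_of.get(item, "other")
            if seen.get(cat, 0) < max_per_category:
                diverse.append((item, score))
                seen[cat] = seen.get(cat, 0) + 1
        return diverse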

Recommendation research today is also considering to what extent a recommender should help users explore parts of a site's collection they haven't looked into; recommenders could even help expose people to new ideas [2].

[1] GroupLens project, www.grouplens.org
[2] J. A. Konstan and J. Riedl, "Deconstructing Recommender Systems: How Amazon and Netflix Predict Your Preferences and Prod You to Purchase," IEEE Spectrum, Sept. 2012.
