INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING & TECHNOLOGY (IJCET)
ISSN 0976-6367 (Print), ISSN 0976-6375 (Online)
Volume 4, Issue 6, November-December (2013), pp. 299-313
© IAEME: www.iaeme.com/ijcet.asp
Journal Impact Factor (2013): 6.1302 (Calculated by GISI)
www.jifactor.com
PRO-MINING: PRODUCT RECOMMENDATION USING WEB-BASED
OPINION MINING
Rashid Ali
College of Computers and Information Technology,
Taif University, Taif, Saudi Arabia
ABSTRACT
Effective recommendation is the key to the success of any online marketing strategy. In this
paper, we discuss the design and development of a novel recommendation system for consumer
products. The proposed system helps the customer select a product of his choice. When a
prospective customer passes a query (the name of an item) as input to our system, the system returns
a ranked list of suitable brands and models available for that item as output. For this purpose,
our system performs meta-searching to find the top products on the basis of
opinions/information collected from user blogs, customer reviews, and the official websites of the producing
companies. Then, the available data, along with links to its sources, is passed to a group of users and
their feedback is obtained. Since different users may rank products differently on the basis
of their views/understanding of the opinion data, we perform rank aggregation to obtain a consensus
ranking of products. The products are then returned in the order of the aggregated ranking. In this
paper, we also present a novel rank aggregation method for the aggregation of partial lists.
1. INTRODUCTION
Online shopping is very popular nowadays. With the increasing number of individuals
connected to the Internet, interest in online shopping is also growing day by day. As a result, new
online shopping sites are coming into existence. In online shopping, a customer visits an online
shopping site, searches for items of his interest, and buys a product. Popular online shopping sites
like amazon.com and ebay.com sell many items. For a single item, there are many brands and models
available. This increases the burden on the customer, who has to select a product of
his interest from a large number of products. Product recommendation systems try to reduce a
customer’s burden by predicting products in which the customer might be interested. Earlier
product recommendation systems generally used collaborative filtering to recommend products to
the customer. Collaborative filtering identifies previous customers whose interests were similar to
those of a given customer and recommends to the given customer products that were liked by those
previous customers. However, recommender systems based on collaborative filtering were reported to
have some serious problems [1, 2] and provided poor recommendations. The quality of the
recommendations has an important effect on the customer’s future shopping behavior. If an online
shopping site recommends products that the customer does not like, the customer may be
annoyed, and it is unlikely that he will visit the site again [2]. This situation may be avoided if the
online shopping site efficiently identifies target customers, who are likely to buy recommended
products, and recommends products only to them.
Some studies [3] proposed using web usage data, such as click streams, to understand users’
interests. A click stream is a record of a customer’s browsing behavior at an online shopping website.
Proper analysis of the recorded information, such as what products customers view, what products they add
to the shopping cart, or what products they buy, may help to understand the customer’s
needs and interests accurately. Accordingly, suitable products may be recommended to the customer.
One novel approach to product recommendation is to use public opinions to recommend
products. This is more likely to satisfy customers, as it mirrors the natural way of recommendation: it is a
common practice that whenever a person intends to buy an item, he seeks the opinions of his friends
or relatives on different products and in turn gets recommendations for specific products. Nowadays,
opinions on different products are available online in the form of user reviews, blogs, etc. If
recommender systems could effectively extract and analyze opinions from these online opinion
sources, the proper products could be recommended to customers. As there are hundreds of
opinion sources available on the Web, search engines may be utilized to find and summarize
opinions on a specific product. Since the many publicly available search engines use
different indexing algorithms, have different coverage of the Web, and provide different search
results in response to the same query, metasearching may be performed to obtain overall better results.
In this paper, we propose to use metasearching for the purpose of product search and
recommendation. The contributions of the paper may be listed as follows:
1. Analysis of the limits and problems of existing recommendation techniques.
2. Development of a robust product recommendation system based on meta-searching
and opinion mining of various opinion resources (such as user reviews, blogs, etc.).
3. Development of a rank aggregation technique for partial lists.
The rest of the paper is organized as follows. In section 2, we discuss relevant work in the
area of product recommendation. In section 3, we discuss the details of our approach and present the
architecture of the product recommendation system. We present our experimental results in section
4. Finally, we conclude in section 5.
2. RELATED WORK
A number of studies have been performed in the area of product recommendation. In the
earlier studies, researchers used collaborative filtering in a number of different applications, such as
recommending web pages, articles, etc. [4, 5]. Since collaborative filtering has some serious
problems, researchers considered using Web mining techniques for product recommendation.
The majority of works on product recommendation that used Web mining techniques are
based on Web usage mining. Web usage mining [3] is the process of extracting useful patterns
from web usage data. Cho et al. proposed a personalized recommendation system based on Web
usage mining in [6]. In [7], Kim et al. also proposed personalized recommendation based on Web
usage mining. Their method was more focused on the problem of helping customers obtain
recommendations only about the products they would like to buy. Zeng in [8] also discussed the
development of a personalized product recommendation system based on customers’ click streams.
The recommender system mined customer preferences and product associations automatically
from the click streams of customers.
All of the above studies on product recommendation performed the task from the seller's point of
view. They provided the user with recommendations of products that are being sold on a particular
online store. They did not give the user the choice of also having information about products
that were not available on that particular online store. To provide recommendations from the
consumer's point of view, we need to extract the opinions of other consumers about different products.
With the availability of a good number of opinion resources in the form of online news, forums, blogs
and reviews, this task may be performed easily. We need to extract the opinion that is scattered
over the Web. These extracted opinions may then be used to recommend different products.
Although opinion mining has been performed in many studies, in very few studies has opinion
mining been used for product recommendation.
In [9], Hu and Liu performed opinion extraction by considering three subtasks, namely (i)
extraction of the subject (the product itself), (ii) the aspect (the attributes or features of the product), and
(iii) the semantic orientation (either positive or negative). In [10], Popescu and Etzioni annotated Hu and
Liu's corpus with tags. In [11], Aciar et al. used prioritized consumer product reviews to make
product recommendations. Scaffidi et al. in [12] discussed the implementation of a prototype system
called Red Opal, which scores each product on each feature so that users can locate products rapidly based
on features. Sohail et al. in [13] discussed the use of opinion mining for book recommendation. A
good discussion of other related work on product recommendation can be found in [14].
In this paper, we propose to use human intelligence to extract opinions from different
reviews/blogs. We then rank the products on the basis of the results of this user feedback based
opinion mining and recommend the products in that order.
3. PRODUCT RECOMMENDATION USING USER FEEDBACK BASED OPINION
MINING
Here, we propose an original method for product recommendation that performs user
feedback based metasearching of opinion resources. Our system incorporates human intelligence in
order to achieve high performance in product recommendation. It aims to provide customer
satisfaction through the development of a user feedback based recommendation technique.
We faced many challenges in accomplishing this work. One of these challenges was the
selection of proper consumer products: there are many consumer products, and we selected a
few to develop the prototype. Another was the variation in the presence/availability of opinion
resources (blogs, user reviews, news, etc.) on the Web for different products. We reformulated
the customer’s query in order to search for and collect suitable opinion resources for the specified item.
The third challenge was to present the content of the opinion resources in a suitable format to the
customer in order to collect his feedback efficiently. We obtained user feedback on different
products by providing users the products along with links to their reviews. The fourth and most important
challenge was to collate different users’ feedback when we received two different opinions (opposite
to one another) for the same product. For this, we developed and used a proper rank aggregation
method.
In the following sub-sections, we briefly discuss the processes of (i) query refinement and
reformulation, (ii) metasearching, (iii) opinion mining, (iv) user feedback based opinion mining,
and (v) rank aggregation, which are used in the proposed product
recommendation technique. Finally, we present the overall architecture of our proposed system.
3.1 Query Refinement and Reformulation
The process of changing a query with the objective of obtaining better search results is
called query refinement or reformulation. It has been reported in [15] that “of the roughly 2 billion
daily web searches made by internet users [16], approximately 28% are modifications to the previous
query [17]”.
For example, a user may search for “Laptop” and then change his query to “Dell
Laptop” if he is not satisfied with the results of the initial query. In fact, there are many ways to
perform query reformulation or refinement. For example, it can be done using one of the
following: word reorder, syntactic variant, whitespace and punctuation, word splitting, word
merging, word removal, word addition, stop-word addition, URL stripping, stemming, morphological variant,
acronym, abbreviation, substring, word substitution, word swap, synonym
reformulation, spelling correction, etc.
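A few of the reformulation operations listed above can be sketched as simple string transformations. The following is a minimal illustration only; the function names and the toy synonym table are our own assumptions, not part of the proposed system.

```python
# A minimal sketch of a few query reformulation operations from the list
# above (word addition, word reorder, synonym reformulation). Names and
# the synonym table are illustrative assumptions.

SYNONYMS = {"laptop": ["notebook"], "cheap": ["budget"]}  # toy synonym table

def add_words(query, words):
    """'Word addition' refinement: narrow the query with extra terms."""
    return query + " " + " ".join(words)

def reorder(query):
    """'Word reorder' refinement: reverse the order of the query terms."""
    return " ".join(reversed(query.split()))

def synonym_variants(query):
    """'Synonym reformulation': swap each term for its known synonyms."""
    variants = []
    terms = query.split()
    for i, term in enumerate(terms):
        for syn in SYNONYMS.get(term.lower(), []):
            variants.append(" ".join(terms[:i] + [syn] + terms[i + 1:]))
    return variants

print(add_words("Laptop", ["Dell"]))      # Laptop Dell
print(reorder("Dell Laptop"))             # Laptop Dell
print(synonym_variants("cheap laptop"))   # ['budget laptop', 'cheap notebook']
```

A real system would combine several such operations and score the resulting variants against the search results they retrieve.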
3.2 Metasearching
Metasearching is the process of combining the results of different search systems. It
is generally performed by metasearch engines, which are Web services that receive
user queries and dispatch them to multiple crawler-based search engines. They then collect the
returned search results, reorder them, and present the reordered list to the end user. Metasearch
engines do not have a database of their own; they simply collect the search results of different search
engines in response to a query, reorder them, and present them to the user. To combine the results from
different search engines, a metasearch engine may use any one of the various rank aggregation
techniques to aggregate the rankings of the search results into an overall ranking. We
discuss the different types of rank aggregation techniques in section 3.5. Depending on the
rank aggregation technique, the results of metasearching may vary for the same query even while collating
ranked results from the same set of participating search engines.
Metasearching is widely discussed in the literature; a good number of studies have
been performed on it [18, 19, 20, 21, 22].
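The dispatch-collect-collate flow described above can be sketched as follows. The engines are stubbed with fixed result lists (an assumption for illustration); a real metasearch engine would query live services, and any of the rank aggregation techniques of section 3.5 could replace the simple round-robin interleaving used here.

```python
# A minimal sketch of metasearching: dispatch the query to several search
# engines, collect their ranked result lists, and collate them. Engines are
# stubs returning fixed lists; the collation below is plain round-robin
# interleaving with duplicate removal, a stand-in for real rank aggregation.

def stub_engine(results):
    """Return a fake search engine: a function from query to a ranked list."""
    return lambda query: results

ENGINES = [
    stub_engine(["url-a", "url-b", "url-c"]),
    stub_engine(["url-b", "url-d"]),
    stub_engine(["url-a", "url-d", "url-e"]),
]

def metasearch(query, engines):
    """Collect each engine's ranked list and interleave them round-robin,
    dropping duplicates, to produce a single collated list."""
    lists = [engine(query) for engine in engines]
    merged, seen = [], set()
    for rank in range(max(len(l) for l in lists)):
        for l in lists:
            if rank < len(l) and l[rank] not in seen:
                seen.add(l[rank])
                merged.append(l[rank])
    return merged

print(metasearch("top five laptops", ENGINES))
# ['url-a', 'url-b', 'url-d', 'url-c', 'url-e']
```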
3.3 Opinion Mining
Opinion mining is the extraction of opinions about an item from available opinion resources
like blogs, user reviews, etc. Many studies have been performed in this area in the last decade. A
good survey of the different studies performed in the field is given in [23]. Opinions can be
analyzed at the level of words, sentiments, or documents; broad studies of these factors have
been made in [24, 25, 26] respectively. However, some researchers, such as those in [27], believed that
opinion is topic dependent and that the above methods did not consider this. They perceived opinion
retrieval as a two-step task: finding relevant documents, and re-ranking those documents by opinion
scores [27]. Opinion extraction is generally done using customer reviews. The main issues to be
considered in opinion mining are (i) the selection of product features and (ii) the analysis of comments,
whether positive or negative, as discussed in [28, 29]. In an opinion, the orientation on different product
features may differ considerably, e.g., "the picture quality of the camera is very good but the strength of
its body is very weak"; here "picture quality" and "strength of body" are features [13], and the two
features carry different opinion orientations: one positive, the other negative. Since
reviews are written by humans, the techniques used for opinion extraction must
behave like a human. Just mining the comments is not sufficient; sometimes the matter is different from
what it appears. Let us consider the following example from a book review [13]:
"I highly recommend this book for those who want to waste their time and money. If you are
really sincere to get some knowledge into your bucket, another one is the better option".
Though the above sentence contains terms like ‘highly recommend’ and ‘better’, both
terms are used in a negative sense for this specific book. Therefore, just recording the
positive and negative orientations of terms and processing on that basis alone is not sufficient to
extract opinion for a sound conclusion.
Therefore, in the proposed work, we use human judgment to extract the actual opinion about
a product from the available reviews. We discuss this in the following section.
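The limitation discussed above can be demonstrated concretely. The following toy lexicon-based scorer (the lexicon is a small illustrative assumption, not a real sentiment resource) marks the sarcastic book review as positive, even though its overall opinion is negative:

```python
# A toy lexicon-based scorer illustrating why naive term counting fails on
# the book review quoted above. The lexicon is an illustrative assumption.

POSITIVE = {"recommend", "better", "good", "sincere"}
NEGATIVE = {"waste"}

def naive_opinion_score(text):
    """+1 per positive term, -1 per negative term; sign gives orientation."""
    words = text.lower().replace(".", " ").replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

review = ("I highly recommend this book for those who want to waste their "
          "time and money. If you are really sincere to get some knowledge "
          "into your bucket, another one is the better option.")

score = naive_opinion_score(review)
print(score)  # 2: naive counting labels this negative review as positive
```

The human judges used in the proposed approach are not fooled by such sarcasm, which is precisely the motivation for user feedback based opinion mining.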
3.4 User Feedback Based Opinion Mining
In order to handle the problems with automatic opinion extraction outlined in the previous
section, we propose to use human judges to decide the overall opinion about a product on the basis of
the available opinion data. The advantage of this user feedback based approach is that it exploits
human intelligence in deciding the overall opinion about a product. Therefore, we may hope to
extract the real opinion about a product and rank different products correctly on the basis of
these extracted opinions. For this, we need to take feedback from the users/human judges. The
feedback from users about different products may be obtained explicitly or implicitly. Explicit
feedback is that in which the user is asked to fill in a feedback form to give his opinion about a
particular product for which summarized reviews (opinion data) are presented to him. On the
basis of this feedback, scores may be given to the specific product. Then, we may rank different
products on the basis of these scores. Finally, we may recommend the products in this order. With
implicit feedback, we obtain feedback by observing the user's actions on links to the
opinion resources of different products; for example, we may assess whether the
user would recommend a specific product or not. Now, a problem arises: what should be done if
the same product is ranked differently by different users? The answer is that we should try for a
consensus. To obtain this consensus ranking, when a number of different rankings are given, we
perform rank aggregation. We discuss this in the next section.
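The explicit-feedback path described above (collect per-user scores, average them per product, rank by average) can be sketched as follows. The data values and names are illustrative assumptions only.

```python
# A minimal sketch of explicit-feedback scoring: each user fills in a score
# for a product after reading its summarized opinion data, the scores are
# averaged per product, and products are ranked by average score.

from collections import defaultdict

# (user, product, score) triples gathered from feedback forms -- toy data.
feedback = [
    ("user1", "laptop-A", 5), ("user1", "laptop-B", 3),
    ("user2", "laptop-A", 4), ("user2", "laptop-C", 2),
    ("user3", "laptop-B", 4),
]

def rank_by_feedback(entries):
    """Average the explicit scores per product and sort best-first."""
    totals = defaultdict(list)
    for _user, product, score in entries:
        totals[product].append(score)
    avg = {p: sum(s) / len(s) for p, s in totals.items()}
    return sorted(avg, key=avg.get, reverse=True)

print(rank_by_feedback(feedback))   # ['laptop-A', 'laptop-B', 'laptop-C']
```

When the judges return full rankings rather than numeric scores, the rank aggregation of section 3.5 takes the place of this simple averaging.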
3.5 Rank Aggregation
Let us begin with some useful definitions.
Definition 1 Given a universe U and S ⊆ U, an ordered list (or simply, a list) l with respect to U is
given as l = [e1, e2, …, e|S|], with each ei ∈ S, and e1 ≻ e2 ≻ … ≻ e|S|, where “≻” is some ordering
relation on S. Also, for j ∈ U ∧ j ∈ l, let l(j) denote the position or rank of j, with a higher rank
having a lower-numbered position in the list. We may assign a unique identifier to each element in U
and thus, without loss of generality, we may take U = {1, 2, …, |U|}.
Definition 2 Full List: If a list contains all the elements in U, then it is said to be a full list.
Example 1 A full list lf given as [c, a, d, e, b] has the ordering relation c ≻ a ≻ d ≻ e ≻ b. The
universe U may be taken as {1, 2, 3, 4, 5} with, say, a ≡ 1, b ≡ 2, c ≡ 3, d ≡ 4, e ≡ 5. With such an
assignment, we have lf = [3, 1, 4, 5, 2]. Here lf(3) ≡ lf(c) = 1, lf(1) ≡ lf(a) = 2, lf(4) ≡ lf(d) = 3, lf(5) ≡
lf(e) = 4, lf(2) ≡ lf(b) = 5.
Definition 3 Partial List: A list lp containing elements, which are a strict subset of universe U, is
called a partial list. We have a strict inequality | lp| <|U|.
Definition 4 Rank Aggregation Problem: Given a set of n candidates, say C = {C1, C2, C3, …, Cn}, a
set of m voters, say V = {V1, V2, V3, …, Vm}, and a ranked list li on C for each voter i, li(j) < li(k)
indicates that voter i prefers candidate j to candidate k. The rank aggregation problem is to combine
the m ranked lists l1, l2, l3, …, lm into a single list of candidates, say l, that represents the collective
choice of the voters. The function used to get l from l1, l2, l3, …, lm (i.e., f(l1, l2, l3, …, lm) = l) is known
as the rank aggregation function.
The classical Borda method [30] is one of the most popular rank aggregation methods. Other,
better rank aggregation methods include Markov chain based methods [20] and soft computing based
methods [31]. However, the problem with the methods proposed in [30, 31] is that they work well for full
lists only. If we want to use them for partial lists, as is the case when performing metasearching with
the results of public search engines, we need to convert each of the partial lists into a full list. The
methods proposed in [22] and [32] can work well for partial lists, but they need a decision
attribute (an overall ranking) in the training phase to train the system. Since, while combining the rankings
from different users, we do not have any overall ranking, and the different users’ rankings are also
partial lists, we propose a new rank aggregation method that can be used to obtain the aggregated
ranking.
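For reference, the classical Borda count mentioned above can be sketched in a few lines. This is a minimal illustration for full lists over the same candidate set; the function name and the example votes are our own.

```python
# A minimal sketch of the classical Borda count for full lists: each
# candidate receives n - position points from every voter (position 0 is
# best), and candidates are ranked by total points.

def borda(full_lists):
    """Aggregate full lists over the same candidate set by Borda scores."""
    n = len(full_lists[0])
    scores = {c: 0 for c in full_lists[0]}
    for ranking in full_lists:
        for position, candidate in enumerate(ranking):   # position 0 = best
            scores[candidate] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# Three voters ranking the same five candidates (full lists).
votes = [["c", "a", "d", "e", "b"],
         ["a", "c", "d", "b", "e"],
         ["a", "d", "c", "e", "b"]]
print(borda(votes))   # ['a', 'c', 'd', 'e', 'b']
```

Note that this sketch assumes every voter ranks every candidate, which is exactly the full-list restriction that motivates the method proposed next.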
3.5.1 Rank Aggregation Algorithm for Partial Lists
Let the cardinality of the union U of all the ranked lists from the different voters be
n. Then, we build a rank table, say R, of size n × m using the m ranked lists from the m
voters. Here, n is the number of candidates present in the union U and m is the total number of
rankings. In this rank table, we have m columns corresponding to the m rankings. We place the value
-k in the cell (i, j), i.e., Ri,j = -k, if candidate i ∈ U is present at the kth position in the jth ranking. The
idea behind this is that we are converting a set of ranked lists into a set of scored lists: by
giving a score -k, where k is the position of the candidate in a ranked list, we make sure that each
candidate obtains a score according to its position in the ranked lists. If candidate i ∈ U is not
present in the jth ranking at all, then we place the value -(n+1) in the cell (i, j) to ensure that such a
candidate gets the least score in the jth column corresponding to the jth ranking. Now, we compute the
preference score Si for each candidate i by summing over the values in the row for candidate i as
follows:
(i) Si = 0
(ii) for j = 1 to m
        if Ri,j = -(n+1) then Si = Si        (candidate i is absent from ranking j)
        else Si = Si + (n + Ri,j)
Finally, we perform a descending sort on the values of the preference score Si to obtain the
aggregated ranking of the candidates.
The complete procedure of the proposed rank aggregation is explained through the following
example.
Example 2 Given three partial lists l1 = [b, a, d], l2 = [a, c, b], l3 = [d, e, c], the union of these
three partial lists is U = {a, b, c, d, e}, and n = 5, m = 3.
Hence, R =

Candidates   l1   l2   l3
a            -2   -1   -6
b            -1   -3   -6
c            -6   -2   -3
d            -3   -6   -1
e            -6   -6   -2
and Sa = (5-2) + (5-1) + 0 = 7, Sb = (5-1) + (5-3) + 0 = 6, Sc = 0 + (5-2) + (5-3) = 5, Sd = (5-3) + 0 + (5-1) = 6,
Se = 0 + 0 + (5-2) = 3.
Now, sorting the candidates i on descending values of Si, we get the aggregated ranking
a ≻ b ≻ d ≻ c ≻ e or a ≻ d ≻ b ≻ c ≻ e (b and d tie with Si = 6), where ‘≻’ denotes the relation ‘is preferred over’.
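The algorithm of section 3.5.1 translates directly into code. The sketch below (function names are ours) reproduces the scores of Example 2: a candidate at position k in a ranking contributes n + Ri,j = n - k points, and an absent candidate contributes nothing.

```python
# A direct implementation sketch of the proposed rank aggregation algorithm
# for partial lists (section 3.5.1). A candidate at position k in a ranking
# contributes n - k points; a candidate absent from a ranking (R[i][j] =
# -(n+1) in the rank table) contributes 0.

def aggregate_partial_lists(partial_lists):
    """Return (scores, aggregated ranking) for the given partial lists."""
    universe = sorted(set().union(*partial_lists))   # union U of all lists
    n = len(universe)
    scores = {c: 0 for c in universe}
    for ranking in partial_lists:
        for k, candidate in enumerate(ranking, start=1):  # R[i][j] = -k
            scores[candidate] += n - k                    # n + R[i][j]
        # candidates absent from this ranking receive no points
    ordered = sorted(universe, key=lambda c: scores[c], reverse=True)
    return scores, ordered

# Example 2 from the paper: l1 = [b, a, d], l2 = [a, c, b], l3 = [d, e, c].
scores, ranking = aggregate_partial_lists([["b", "a", "d"],
                                           ["a", "c", "b"],
                                           ["d", "e", "c"]])
print(scores)    # {'a': 7, 'b': 6, 'c': 5, 'd': 6, 'e': 3}
print(ranking)   # a first, e last; b and d tie with score 6
```

Python's stable sort breaks the b/d tie by universe order; any tie-breaking rule yields one of the two aggregated rankings given in Example 2.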
3.6 Product Recommendation System Architecture
Now, we present the architecture of the proposed product recommendation system, as shown
in Figure 1. The query about the desired item is passed to a number of search engines
(metasearching). The returned results are aggregated and stored in the product information system.
Query refinement is then done to search for the opinion resources available on the Web for the
different products. The opinion data about a particular product are summarized together. Then, the different
products, along with the summarized opinion data, are presented to a user and user feedback (UF)
is obtained. On the basis of the user feedback, an importance score for each product is computed and a
ranking of products based on these user feedback scores is obtained. When we have more than
one ranking of products, rank aggregation is performed to obtain a consensus ranking. Then, the
products are recommended in the order of this consensus ranking.
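The overall pipeline just described can be sketched end to end. Every stage below is a stub (all names and data are illustrative assumptions); the paper's own techniques from sections 3.1-3.5 would fill them in.

```python
# A compact sketch of the overall pipeline: metasearch the item query,
# summarize opinions per product, collect per-user rankings, and aggregate
# them into the final ranking. All stage implementations are placeholders.

def metasearch(query):                       # stub: product info system
    return ["product-X", "product-Y", "product-Z"]

def summarize_opinions(product):             # stub: opinion summarization
    return f"summary of reviews for {product}"

def user_rankings(products, summaries):      # stub: UF-based rankings
    return [["product-X", "product-Y"],                 # user 1 (partial list)
            ["product-Y", "product-X", "product-Z"]]    # user 2

def aggregate(rankings):                     # rank aggregation of 3.5.1
    scores = {}
    n = len({c for r in rankings for c in r})
    for r in rankings:
        for k, c in enumerate(r, start=1):
            scores[c] = scores.get(c, 0) + n - k
    return sorted(scores, key=scores.get, reverse=True)

def recommend(query):
    products = metasearch(query)
    summaries = {p: summarize_opinions(p) for p in products}
    return aggregate(user_rankings(products, summaries))

print(recommend("laptop"))
```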
4. EXPERIMENTS AND RESULTS
We experimented with the four most popular search engines [33], namely (i) Google, (ii) Bing,
(iii) Yahoo! Search, and (iv) Ask. We searched for information about five electronics products,
namely (i) Laptops, (ii) Smart-phones, (iii) Tablets, (iv) Printers, and (v) Headphones, on each of
the four search engines. After a look at the search results from the four search engines, we
reformulated each of the five search queries by adding the phrase “Top five” to the name of each of
the five products. We passed the reformulated queries one by one to each of the four search engines.
We parsed the search result URLs from the pages returned in response to a specific query
by each of the four search engines and formed their union, say the U-set of search result URLs. Table 1
shows the URLs present in the U-set of search results for “Top five laptops”. Then, we analyzed the
content of each of these URLs to check for any significant information/recommendation about the
top laptops.

Fig. 1 Architecture of the proposed Product Recommendation System (product queries are dispatched
to multiple search engines; the returned results are stored in the Product Information System; a
modified query drives Opinion Summarization, user feedback (UF) based Opinion Mining and
UF-based ranking; Rank Aggregation then produces the final Ranked List of Products)

Interestingly, we found that, out of the 27 URLs, only 10 contained information/recommendation
about laptops without any discrimination of brand, use, or cost. These 10 URLs are shown in Table 2.
Five other URLs
contained information about budget laptops only. The remaining 12 URLs either mentioned the top
laptops of a specific brand or for a specific use, or did not contain any relevant information. The 10
URLs shown in Table 2 recommended either 10, 8, or 5 top laptops. Their recommendations were
based on the analysis of a large number of reviews (generally more than a hundred) for different
products. These URLs also contained links to the reviews that formed the basis of their
recommendations. A few laptop models were commonly recommended by the majority of these 10
result URLs, while a few others were recommended by only one or two result URLs. We presented
these URLs to three independent users. Each of the three users analyzed the content of these URLs
and the content of the product reviews that formed the basis for the recommendations at these URLs.
Finally, each of the three users provided his feedback and ranking of the different laptops present in
these URLs. The rankings of the laptops by user1, user2, and user3 are shown in Table 3, Table 4
and Table 5 respectively. It is clear from these tables that the recommendations of different users
vary (i.e., they are subjective) and each of the three rankings is a partial list. Now, we apply the
proposed rank aggregation algorithm as
outlined in section 3.5.1 to obtain the aggregated ranking for these three partial lists. The result of
the rank aggregation is shown in Table 6. The aggregated ranking shown in Table 6 is returned as
the output of our recommendation system for laptops.
We repeat the process described in the above paragraph for the other four products, namely (i)
Smart-phones, (ii) Tablets, (iii) Printers, and (iv) Headphones. The recommendations of the
proposed system for these four products are shown in Table 7, Table 8, Table 9 and Table 10
respectively.
It is evident from these results that our system provides better recommendations, as it
incorporates human intelligence in aggregating the results from popular recommender sites.
Table 1: URLs in the U-set of search results for “top five laptops” from the four most popular search
engines
http://reviews.cnet.com/best-laptops/
http://www.pcmag.com/article2/0,2817,2369981,00.asp
http://www.forbes.com/sites/antonyleather/2013/10/08/top-five-laptop-and-macbookupgrades-from-just-10/
http://www.pcworld.com/article/2046844/5-budget-laptops-for-college-students-we-name-thebest.html
http://techland.time.com/2013/10/03/the-top-3-laptops-in-each-of-5-key-categories/
http://www.digitaltrends.com/computing/top-five-laptop-buying-mistakes/
http://www.techradar.com/us/news/mobile-computing/laptops/top-laptops-25-best-laptops-inthe-world-706673
http://www.laptopmag.com/review/laptops/the-five-best-netbooks.aspx
http://topfivelaptops.wordpress.com/
http://laplaptops.blogspot.com/2010/03/top-five-laptops.html
http://www.youtube.com/watch?v=I1g_e2tNm2c
http://www.nextag.com/top-five-laptops/products-html#!
http://www.computerhowtoguide.com/2012/07/laptop-screen-issues.html
http://virtualmillionneeds.blogspot.com/2012/09/top-five-laptop-buying-mistakes.html#!
http://apps.microsoft.com/windows/ar-sa/app/top-five-laptop/abc1eebd-71a2-4146-b729d01d5098e75d
http://www.topfivecomputers.com/laptops.html
http://apps.microsoft.com/windows/xh-za/app/top-five-laptop/abc1eebd-71a2-4146-b729d01d5098e75d
http://voices.yahoo.com/top-five-back-school-laptops-8862435.html
http://www.pcadvisor.co.uk/test-centre/laptop/3214618/best-laptop-you-can-buy-2013/
http://www.amazon.com/s?ie=UTF8&page=1&rh=i%3Aaps%2Ck%3Atop%20five%20laptops
http://www.amazon.com/s?ie=UTF8&page=1&rh=i%3Aaps%2Ck%3Atop%205%20laptops
http://www.dubaichronicle.com/2013/11/13/top-5-laptops-holiday-gifts/
http://www.ebay.com/sch/i.html?_nkw=top+5+laptops
http://www.it-director.com/business/change/news_release.php?rel=8127
http://www.dell.com/us/p/laptops
http://www.squidoo.com/top-5-laptops
http://computers.toptenreviews.com/laptops/
Table 2: Ten Recommender URLs for top laptops
Recommender URL (No. of laptops recommended)
http://www.pcmag.com/article2/0,2817,2369981,00.asp (10)
http://blog.laptopmag.com/top-10-notebooks-now (10)
http://topfivelaptops.wordpress.com/ (5)
http://www.youtube.com/watch?v=I1g_e2tNm2c (5)
http://www.topfivecomputers.com/laptops.html (5)
http://voices.yahoo.com/top-five-back-school-laptops-8862435.html (5)
http://www.pcadvisor.co.uk/test-centre/laptop/3214618/best-laptop-you-can-buy-2013/ (8)
http://www.dubaichronicle.com/2013/11/13/top-5-laptops-holiday-gifts/ (5)
http://www.squidoo.com/top-5-laptops (5)
http://computers.toptenreviews.com/laptops/ (10)
Table 3: Laptops Recommended by USER1
Apple MacBook Air 13in (Mid-2013)
ASUS A52F-XA1
Dell Inspiron 14z
Apple MacBook Pro 15 in with Retina display
Toshiba Satellite L505
Acer Aspire S7-392-6411
Sony Vaio Pro 13
HP Pavilion dm1
Lenovo ThinkPad X1 Carbon
Razer Blade (2013)
Asus VivoBook V551LB-DB71T
Apple MacBook Pro 15-inch (2013)
Apple MacBook Pro
Apple Macbook Air 13.3 inch
Acer Aspire M5-583P-6428
Lenovo ThinkPad Edge E431
Table 4: Laptops Recommended by USER2
Apple MacBook Air 13in (Mid-2013)
Apple MacBook Pro 15 in with Retina display
ASUS A52F-XA1
Toshiba Satellite L505
Dell Inspiron 14z
Acer Aspire S7-392-6411
Sony Vaio Pro 13
HP Pavilion dm1
Lenovo ThinkPad X1 Carbon
Lenovo ThinkPad Edge E431
Table 5: Laptops Recommended by USER3
Apple MacBook Air 13in (Mid-2013)
ASUS A52F-XA1
Apple MacBook Pro 15-inch (2013)
Sony Vaio Pro 13
HP Pavilion dm1
Asus VivoBook V551LB-DB71T
Apple Macbook Air 13.3 inch
Dell Inspiron 14z
Acer Aspire M5-583P-6428
Lenovo ThinkPad Edge E431
Table 6: Top-10 Laptops Recommended by our system
Apple MacBook Air 13in (Mid-2013)
ASUS A52F-XA1
Dell Inspiron 14z
Sony Vaio Pro 13
HP Pavilion dm1
Apple MacBook Pro 15 in with Retina display
Toshiba Satellite L505
Acer Aspire S7-392-6411
Apple MacBook Pro 15-inch (2013)
Asus VivoBook V551LB-DB71T
Table 7: Top-10 Smart Phones Recommended by our system
Samsung Galaxy S4
HTC One
Apple iPhone 5s
Samsung Galaxy Note 3
Nokia Lumia 1020
Motorola Moto X
LG G2
Google Nexus 5
Nokia Lumia 925
BlackBerry Q10
Table 8: Top-10 Smart Phones Recommended by our system
Samsung Galaxy S4
HTC One
Apple iPhone 5s
Samsung Galaxy Note 3
Nokia Lumia 1020
Motorola Moto X
LG G2
Google Nexus 5
Nokia Lumia 925
BlackBerry Q10
Table 9: Top-10 Printers Recommended by our system
HP Officejet Pro 8600 Plus e-All-in-One
Epson WorkForce 845 All-in-One Printer
HP Officejet Pro 276dw MFP
Brother Printer MFCJ825DW Wireless Color Photo Printer with Scanner, Copier & Fax
HP PhotoSmart 7520 e-All-in-One Wireless Printer
Canon Pixma MX922 Wireless Office All-In-One Printer
Dell B3465dnf Multifunction Laser Printer
Epson Stylus NX430 Small-in-One All-in-One Printer
Samsung CLP-415NW
Ricoh Aficio SG 3110DNw
Table 10: Top-10 Headphones Recommended by our system
5. CONCLUSION
In this paper, we present the architecture of a product recommendation system based on
user-feedback-driven mining of opinion data. We point out the limitations of automatic opinion
mining in the presence of contradicting or misleading comments and argue that user-feedback-based
mining of opinions yields more reliable results. When different users give different rankings for the
same set of products, we propose to perform rank aggregation to obtain a consensus. We describe
different rank aggregation techniques that can be used for this purpose and present an efficient rank
aggregation method for partial lists. Finally, we point out that the success of a recommendation
system depends on its ability to recommend products liked by customers while avoiding the
recommendation of products they dislike. For this, an efficient evaluation of the proposed system is
required to ensure that it is a reliable recommendation system. For future work, we recommend
investigating the following: (i) analyzing the reliability of user feedback by comparing the feedback
with users' actual actions, and (ii) evaluating the performance of the system in terms of user
satisfaction.
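To make the consensus step concrete, one classical positional method is the Borda count [30]. The following is a minimal sketch, not the paper's exact aggregation method for partial lists; it assumes a common convention in which items missing from a user's partial list share the points of the unfilled bottom positions equally, and the item names and lists are hypothetical:

```python
from collections import defaultdict

def borda_aggregate(ranked_lists):
    """Aggregate partial ranked lists with a Borda-style count.

    Each input list ranks a subset of the candidate items. An item
    absent from a list receives an equal share of the points left
    over for the unfilled bottom positions of that list.
    """
    # Universe of all items seen in any user's list
    universe = set()
    for lst in ranked_lists:
        universe.update(lst)
    n = len(universe)

    scores = defaultdict(float)
    for lst in ranked_lists:
        # Positional points: the top item gets n-1, the next n-2, ...
        for pos, item in enumerate(lst):
            scores[item] += n - 1 - pos
        # Items this user did not rank share the remaining points equally
        missing = universe - set(lst)
        if missing:
            leftover = sum(range(n - len(lst)))  # points of unfilled slots
            share = leftover / len(missing)
            for item in missing:
                scores[item] += share

    # Higher total score ranks first; ties broken alphabetically
    return sorted(universe, key=lambda it: (-scores[it], it))

if __name__ == "__main__":
    user1 = ["A", "B", "C", "D"]
    user2 = ["B", "A", "D"]   # partial list: C is unranked
    user3 = ["A", "C", "B"]   # partial list: D is unranked
    print(borda_aggregate([user1, user2, user3]))  # -> ['A', 'B', 'C', 'D']
```

With three users and four items, item A collects 3 + 2 + 3 = 8 points and tops the consensus ranking, mirroring how the three user rankings in Tables 3-5 are merged into the single list of Table 6.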
REFERENCES
[1] M. Claypool, A. Gokhale, T. Miranda, P. Murnikov, D. Netes and M. Sartin (1999).
"Combining content-based and collaborative filters in an online newspaper", In Proceedings
of the ACM SIGIR '99 Workshop on Recommender Systems, Berkeley, CA.
[2] B. Sarwar, G. Karypis, J. Konstan and J. Riedl (2000). "Analysis of recommendation
algorithms for e-commerce", In Proceedings of the ACM E-Commerce 2000 Conference, pages
158–167.
[3] B. Mobasher, R. Cooley and J. Srivastava (2000). "Automatic personalization based on web
usage mining", Communications of the ACM, volume 43, issue 8, pages 142–151.
[4] W. Hill, L. Stead, M. Rosenstein and G. Furnas (1995). "Recommending and evaluating
choices in a virtual community of use", In Proceedings of the 1995 ACM Conference on
Human Factors in Computing Systems, pages 194–201.
[5] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom and J. Riedl (1994). "GroupLens: An open
architecture for collaborative filtering of netnews", In Proceedings of the 1994 ACM
Conference on Computer Supported Cooperative Work, pages 175–186.
[6] Y. H. Cho, J. K. Kim and S. H. Kim (2002). "A personalized recommender system based on
web usage mining and decision tree induction", Expert Systems with Applications, volume
23, Elsevier Science, pages 329–342.
[7] J. K. Kim, Y. H. Cho, W. J. Kim, J. R. Kim and J. H. Suh (2002). "A personalized
recommendation procedure for Internet shopping support", Electronic Commerce Research
and Applications, volume 1, Elsevier Science, pages 301–313.
[8] Z. Zeng (2009). "An Intelligent E-commerce Recommender System Based on Web Mining",
International Journal of Business and Management, volume 4, issue 7, pages 10–14.
[9] M. Hu and B. Liu (2004). "Mining and summarizing customer reviews", In Proceedings of
the Tenth International Conference on Knowledge Discovery and Data Mining, pages 168–177.
[10] A. M. Popescu and O. Etzioni (2005). "Extracting product features and opinions from
reviews", In Proceedings of the Conference on Empirical Methods in Natural Language
Processing, pages 339–346.
[11] S. Aciar, D. Zhang, S. Simoff and J. Debenham (2007). "Informed Recommender:
Basing Recommendations on Consumer Product Reviews", IEEE Intelligent Systems,
volume 22, issue 3, pages 39–47.
[12] C. Scaffidi, K. Bierhoff, E. Chang, M. Felker, H. Ng and C. Jin (2007). "Red Opal: Product-Feature Scoring from Reviews", In Proceedings of ACM EC, pages 182–191.
[13] S. S. Sohail, J. Siddiqui and R. Ali (2013). "Book Recommendation System Using Opinion
Mining Technique", In Proceedings of the 2013 International Conference on Advances in
Computing, Communications and Informatics (ICACCI-2013), pages 1609–1614.
[14] S. S. Sohail, J. Siddiqui and R. Ali (2012). "Product Recommendation Techniques for
Ecommerce - past, present and future", International Journal of Advanced Research in
Computer Engineering & Technology, volume 1, number 9, pages 219–225.
[15] J. Huang and E. N. Efthimiadis (2009). "Analyzing and evaluating query reformulation
strategies in web search logs", In Proceedings of the 18th ACM Conference on Information
and Knowledge Management (CIKM-2009), pages 77–86.
[16] comScore. (2008). Baidu Ranked Third Largest Worldwide Search Property in Dec 2007.
Retrieved Nov 30, 2008 from http://www.comscore.com/press/release.asp?press=2018
[17] G. Pass, A. Chowdhury and C. Torgeson (2006). "A picture of search", In Proceedings of InfoScale '06.
[18] J. A. Aslam and M. Montague (2001). "Models for metasearch", in W. Bruce Croft, David J.
Harper, Donald H. Kraft and Justin Zobel, editors, Proceedings of the 24th Annual
International ACM SIGIR Conference on Research and Development in Information
Retrieval, ACM Press, pages 276–284.
[19] C. C. Vogt and G. W. Cottrell (1999). “Fusion via a linear combination of scores,”
Information Retrieval, volume 1, issue 3, pages 151–173.
[20] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar (2001). “Rank aggregation methods for
the web,” In Proceedings of the tenth international conference on World Wide Web, pages
613–622.
[21] M. E. Renda and U. Straccia (2003). “Web metasearch: Rank vs. score based rank
aggregation methods,” In Proceedings of the 18th Annual Symposium on Applied
Computing, pages 841–846.
[22] R. Ali and M. M. S. Beg (2007) “A Learning Algorithm for Metasearching using Rough Set
Theory,” In Proceedings of the 10th International Conference on Computer and Information
Technology (ICCIT 2007), IEEE Press, Dhaka, Bangladesh, pages 361-366.
[23] B. Pang and L. Lee (2008). "Opinion Mining and Sentiment Analysis", Foundations and
Trends in Information Retrieval, volume 2, issues 1–2.
[24] V. Hatzivassiloglou and K. McKeown (1997). "Predicting the semantic orientation of
adjectives", In Proceedings of EACL, pages 174–181.
[25] S. Kim and E. Hovy (2004). "Determining the sentiment of opinions", In Proceedings of
COLING, pages 1367–1374.
[26] B. Pang, L. Lee and S. Vaithyanathan (2002). "Thumbs up?: sentiment classification using
machine learning techniques", In Proceedings of EMNLP, pages 79–86.
[27] Y. Fang, L. Si, N. Somasundaram and Z. Yu (2012). "Mining Contrastive Opinions on
Political Texts using Cross-Perspective Topic Model", In Proceedings of WSDM '12, Seattle,
Washington, USA.
[28] N. Kaji and M. Kitsuregawa (2006). "Automatic Construction of Polarity-Tagged Corpus
from HTML Documents", In Proceedings of COLING/ACL '06.
[29] L. Zhuang, F. Jing, X.-Y. Zhu and L. Zhang (2006). "Movie Review Mining and
Summarization", In Proceedings of CIKM '06.
[30] J. C. de Borda (1781). "Mémoire sur les élections au scrutin", Histoire de l'Académie
Royale des Sciences.
[31] M. M. S. Beg and N. Ahmad (2003). "Soft Computing Techniques for Rank Aggregation on
the World Wide Web", World Wide Web – An International Journal, Kluwer, volume 6,
issue 1, pages 5–22.
[32] R. Ali and M. M. S. Beg (2009). “Modified Rough Set Based Aggregation for Effective
Evaluation of Web Search Systems,” In Proceedings of the 28th North American Fuzzy
Information Processing Society Annual Conference (NAFIPS2009), IEEE Press,
Cincinnati, Ohio, U.S.A.
[33] Top 15 Most Popular Search Engines: December 2013.
http://www.ebizmba.com/articles/search-engines
[34] Sandip S. Patil and Asha P. Chaudhari, “Classification of Emotions from Text using SVM
Based Opinion Mining”, International Journal of Computer Engineering & Technology
(IJCET), Volume 3, Issue 1, 2012, pp. 330 - 338, ISSN Print: 0976 – 6367, ISSN Online:
0976 – 6375.
[35] Purvi Dubey and Asst. Prof. Sourabh Dave, “Effective Web Mining Technique For Retrieval
Information on the World Wide Web”, International Journal of Computer Engineering &
Technology (IJCET), Volume 4, Issue 6, 2013, pp. 156 - 160, ISSN Print: 0976 – 6367,
ISSN Online: 0976 – 6375.
[36] Prof. Sindhu P Menon and Dr. Nagaratna P Hegde, “Research on Classification Algorithms
and its Impact on Web Mining”, International Journal of Computer Engineering &
Technology (IJCET), Volume 4, Issue 4, 2013, pp. 495 - 504, ISSN Print: 0976 – 6367,
ISSN Online: 0976 – 6375.