Paper 15

Paper Title: Informed Recommender: Basing Recommendations on Consumer Product Reviews

Three Critical Questions

Monday

Group 1: Abilash Amarthaluri

Bharadwaz Somavarapu

• Concept identification is based on keyword matching against an existing database of defined words. What if a user writes the review in flowery, high-flown language that does not match the keywords, or a word is misinterpreted because it has different meanings in different contexts? (See the sketch after this list.)
• How does automatic mapping of ontology instances produce good and desired results? We cannot fully rely on automated ontology mappers; semi-automated mapping techniques might be a more promising idea.
• The input data, i.e., all the reviews collected from consumers, is textual. Does the application include any natural language processing techniques to correctly infer the semantics of a sentence? And what is the computational complexity involved?
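
To make the concern in the first question concrete, here is a minimal sketch of what keyword-based concept identification can look like; the concept labels and keyword lists are illustrative assumptions, not the paper's actual vocabulary.

```python
# Minimal sketch of keyword-based concept identification. The concept labels
# and keyword lists are illustrative assumptions, not the paper's vocabulary.

CONCEPT_KEYWORDS = {
    "battery": {"battery", "batteries", "charge", "charger"},
    "picture_quality": {"picture", "image", "photo", "resolution"},
    "ease_of_use": {"easy", "simple", "intuitive", "menu"},
}

def identify_concepts(sentence):
    """Return the concepts whose keywords occur in the sentence."""
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    return {c for c, keywords in CONCEPT_KEYWORDS.items() if words & keywords}

print(identify_concepts("The battery drains fast but the picture quality is sharp."))
# -> {'battery', 'picture_quality'}
print(identify_concepts("Its stamina between recharges is lamentable."))
# -> set(): flowery wording matches no keyword, so the concept is silently missed
```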

Group 2:

Member Name: Sai ram Kota

Q1: The author does not clearly state the average success rate of identification. He only vaguely says that human intervention is needed in the worst case, but does not quantify how many cases succeed or fail.

Q2: The author talks about speed of operation, but how many results are output, how many are actually correct, and how many need human intervention? Which approach is more accurate, and when is it more accurate?

Q3: The author uses Soundex to compare attribute names across databases, but how can Soundex handle multilingual data?
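
For reference, Soundex reduces a name to its first letter plus three digits derived from consonant classes, which is why it is tied to English phonetics. The sketch below is a simplified version of the algorithm (it omits the special handling of 'h' and 'w'), shown only to illustrate the multilingual limitation the question raises.

```python
def soundex(name):
    """Simplified American Soundex: first letter plus three consonant digits.
    (The full algorithm also treats 'h' and 'w' specially; omitted here.)"""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    letters = [c for c in name.lower() if c.isalpha()]
    if not letters:
        return ""
    encoded = [codes.get(c, "") for c in letters]
    digits, prev = [], encoded[0]
    for code in encoded[1:]:
        if code and code != prev:
            digits.append(code)
        prev = code
    return (letters[0].upper() + "".join(digits) + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))   # R163 R163 -- spelling variants match
print(soundex("Zhang"), soundex("Chang"))     # Z520 C520 -- same surname, different
                                              # romanizations, so no match
# Non-Latin scripts fare worse: every letter falls outside the code table, so a
# name written in, say, Devanagari or Hanzi collapses to <first char> + "000".
```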

Group 3:

Member Name: Sunil Kumar Garrepally, Yaswanth Kantamaneni

1. The paper mentions that the user's skill level and experience are considered when evaluating the product. However, users may be tempted to post false data to boost a product's popularity, so reliability methods are needed. Are any methods defined for calculating the reliability of a user's rating?
2. Users give text-based ratings, which requires text mining and mapping the words in the review comments against a synonym database. How does the system handle a new word that is not defined in the synonym database? (See the sketch after this list.)
3. Important features are defined for each product and estimated using the described methods. What happens when a user looks for a feature that is not listed in the recommendation system? Is there any mechanism for collecting feedback so that new features users look for in a particular product can be added?
4. The system decides the recommendation by applying rules to the review comments. What happens if none of the rules applies to a review comment? Is the comment ignored, or is some exception-handling method invoked in that case?
5. If a user buys a product a second time, he must enter all the details and preferences for the product again. Are there methods that maintain the user's history, automatically determine the user's interests, and provide recommendations automatically?
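
Regarding question 2, a minimal sketch of what such a synonym lookup might look like, with the unknown-word case made explicit, is shown below; the synonym table is an assumption for illustration, not the paper's database.

```python
# Synonym lookup with an explicit out-of-vocabulary path. The table is an
# illustrative assumption, not the paper's synonym database.

SYNONYMS = {
    "great": "good", "excellent": "good", "fine": "good",
    "poor": "bad", "terrible": "bad", "awful": "bad",
}

def normalize(word, unknown):
    """Map a review word to its canonical opinion term, or queue it for review."""
    word = word.lower()
    if word in SYNONYMS:
        return SYNONYMS[word]
    unknown.append(word)   # new word: needs a policy (human review, fuzzy match, ...)
    return None

unknown_words = []
print(normalize("Excellent", unknown_words))  # -> 'good'
print(normalize("meh", unknown_words))        # -> None
print(unknown_words)                          # -> ['meh']: flagged rather than silently dropped
```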

Group 4:

Member Name: Ramya Devabhakthuni

• The paper states that recommender systems use algorithms and parsers to automate the handling of data and to provide data mining. The results of these may be incomplete or partial. How are they used when they differ for each product?
• The author states that the recommender system uses feedback and information collected from customers. What happens when a user changes his preferences? What happens if the mapping rule is not defined in the classification technique?
• The paper concentrates mainly on developing a recommender system for a single product. How could the classification process be extended to multiple products?
• The paper states that the system mainly concentrates on mapping short sentences. What happens if long, complicated sentences need to be classified?

Group 5:

Member Name: Lokesh Reddy Gokul

• In the article, the input to the text-mining step is still individual words and single instances. This may not be ideal, because single words can have different meanings in different grammatical contexts. Is it possible to devise a mechanism that takes grammatical scope and composite sequences of words into account when categorizing words (see the sketch after this list)? What hindrances to such an approach might be stopping the researchers from doing so?
• In addition to considering user reviews, it might be worth letting the manufacturer or the website hosting the review system create a complete list of all features of the product under review. This would simplify the machine's task by replacing free-form review mining with pinpoint answers to each field describing a specific detail of the product. What is the scope of text mining and ontology use in such a scenario? Would the techniques and formulae deployed in the article still be applicable?
• In the concluding remarks, the authors say that they could not classify some long, complicated sentences. What makes a sentence "complicated" is not stated in the article. Length alone should not be a problem, because the text mining only marks single words, so text-pattern extraction is straightforward. What, then, is the actual significance of the length and complexity of the unclassified sentences?
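
As a small illustration of the first point, a matcher can look for multi-word expressions before falling back to single words; the phrase list below is an assumption, not taken from the article.

```python
# Phrase-aware matching: look for composite word sequences, not just single
# words. The phrase-to-concept table is an illustrative assumption.

PHRASES = {
    ("battery", "life"): "battery",
    ("ease", "of", "use"): "usability",
    ("picture", "quality"): "picture_quality",
}

def match_phrases(sentence):
    tokens = [t.strip(".,!?").lower() for t in sentence.split()]
    hits = []
    for phrase, concept in PHRASES.items():
        n = len(phrase)
        if any(tuple(tokens[i:i + n]) == phrase for i in range(len(tokens) - n + 1)):
            hits.append(concept)
    return hits

print(match_phrases("Battery life is short but picture quality is great."))
# -> ['battery', 'picture_quality']; a single-word matcher would treat
#    'battery', 'life', 'picture' and 'quality' as four unrelated hits
```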

Group 6:


1) The recommendation process follows a rule-based approach, which itself relies on logic. If a consumer writes a review without any clear logic, mapping it to the concepts in the ontology becomes hard and the system fails. How is this problem going to be solved?

2) For the ranking procedure, they used the consumer reviews stored in the ontology. But a user's actual opinion changes over time, since more problems surface as time progresses. How does this mechanism still provide accurate results?

3) For sentence selection and ontology mapping, a text-mining process is used. Doing this requires great flexibility in the language used to define all the concepts in the ontology. How can this be dealt with when each user writes the review in his own style of narration?

Group 7:

Member Name: Kishore Kumar Mannava

Priyanka Koneru

1.) When a structured form is defined, how does the communication take place in order to produce recommendations? What type of protocols does it use for communication, and how is reliability achieved?

2.) How does concept identification work during sentence selection and classification? How can it impact the system in terms of data optimization?

3.) Is the recommendation process static or dynamic? That is, does the system account only for previous reviews, or does it also use the current reviews produced by people?

Group 8:

Member Name: Nimmagadda, Putheti

1. The techniques discussed in the paper consider only text-based reviews. Instead of ignoring other feedback, can manual feedback be integrated with the text-based feedback? If so, what technique would be used?
2. Recommender systems use text mining to predict recommendations. For this mapping, a tool is needed to extract the information and transform it into structured data. What sort of tools are required to extract and structure the information?
3. Recommender systems provide recommendations only for products. Is the approach suitable for other kinds of services as well?

Wednesday

Group 1:

Member Name: Lattupalli, Pelluri, Voruganti

The author says that his mining techniques can extract useful information from reviews. But how can he say that a review is useful, when a computer or any other technique may not perfectly determine what is useful and what is not? Also, how is he going to detect false reviews posted by supporters of certain products?

Different people have their own styles of language: some are expert writers and some are poor ones. Even though some people do not write good English, their reviews may still be worthwhile. If the language is poor, mapping the review to the ontology becomes nearly impossible. What care is the author taking to still make use of reviews written in poor language?

Mapping of user reviews onto the ontology is purely automated in the technique used. How is the author going to deal with wrong mappings caused by purely syntactic matching? Lack of manual intervention in the mapping process may lead to incorrect mappings.

Group 2:

Member Name: Addagalla, Bobbili, Gopinath

• The author presents the text mining technique to map ontology instances to product reviews; while assigning good or bad values may be possible in most cases, how can good or bad values be identified from ambivalent statements?
• One of the drawbacks of the text mining technique is that long sentences could not be mined and analyzed. Does this not mean that some critical information was lost, considering that negative reviews often consist of long sentences?
• The authors mention that the opinions of experienced reviewers get more weight than those of inexperienced reviewers. Is this not a hard problem in itself, since the technique for defining the experience level of reviewers can be tricky?
• The authors mention the technique of concept matching based on picking out some related words; is this technique not vulnerable to the errors that could be introduced by incomplete domain vocabularies?
• The authors evaluate their approach with the example of just one domain. Is there any data or experience to suggest that this technique would work equally well in other domains?

Group 3:

Member Name: Swati Thorve

1> In the “Concept Identification” step, the author associates a label and a list of matching or similar-meaning words with each concept or class in the ontology. However, since each person tends to use different words, it is not possible to cover all the matching words. In this case, what should the focus be? Which words should be considered matching words for a particular concept?

2> The author finds the best rule set using the RIKTEST software, but since RIKTEST depends heavily on the size of the training set, the resulting best rule set will vary with the training-set size. What method should be followed to avoid this problem and find the best rule set irrespective of training-set size?

3> The author's recommendation system is based on recognizing words in the individual sentences of the reviews. Even with the same set of words, sentence meaning can differ drastically with changes in word order and grammar. How is this going to be handled? (See the illustration below.)
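
A tiny illustration of the third point: a bag-of-words matcher scores the two sentences below identically even though they say opposite things (the word list is a made-up example).

```python
# Bag-of-words ignores grammar: both sentences get the same positive score.
POSITIVE_WORDS = {"easy", "good", "sharp"}

def positive_hits(sentence):
    words = {w.strip(".,").lower() for w in sentence.split()}
    return len(words & POSITIVE_WORDS)

print(positive_hits("The camera is easy to use."))      # 1
print(positive_hits("The camera is not easy to use."))  # 1 -- negation is ignored
```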

Group 4:

Member Name: Karunapiya Rameshwaram, Shaiv, Anusha Vunnam

CRITICAL QUESTIONS:
1.) Is it possible to integrate new classes into the existing ontology, and will it be compatible with the extended features?
2.) In ranking, many quantities such as OQ, FQ, OFQ, and OA are computed. What is the cost of computing and implementing all of these? (See the sketch after this list.)
3.) Will textual information always help in rating the product, as it did in the digital-camera example? To what extent is it possible to get the expected outcome for every real-world application?
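
On the cost raised in question 2: the exact OQ/FQ/OFQ/OA formulas are defined in the paper, but if, as is typical for such quality measures, they reduce to weighted aggregations over the mapped opinions, the cost is linear in the number of review sentences. The sketch below uses an assumed weighted-average formula purely to illustrate that point; it is not the paper's actual definition.

```python
# Illustrative only: a generic weighted aggregation standing in for the paper's
# quality measures, to show the computation is linear in the number of opinions.

def opinion_quality(opinions, weights):
    """Weighted average of per-sentence opinion scores (+1 good, -1 bad)."""
    total = sum(weights)
    return sum(o * w for o, w in zip(opinions, weights)) / total if total else 0.0

def feature_quality(opinions_by_feature, weights_by_feature):
    """One aggregated score per product feature."""
    return {f: opinion_quality(opinions_by_feature[f], weights_by_feature[f])
            for f in opinions_by_feature}

# Two reviewers comment on 'battery'; the more experienced one carries weight 2.0.
print(feature_quality({"battery": [1, -1]}, {"battery": [2.0, 1.0]}))
# -> {'battery': 0.333...}
```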

Group 5:

Member Name: Rahul Reddy, Rahul Mootha

1. The paper does not explain how the user's level of experience and skill with the product helps in mapping the text-based reviews into an efficient recommendation system.
2. The criteria by which the training data and test data for a product are chosen from among the various available parameters are not explained. Also, how reliable is the source from which this data is taken, so that it can be used to make new recommendations?
3. For each product, the ontology used to map reviews to classes changes as new features are introduced. With so many updates and features being added to products daily, the ontologies also need to change constantly, so ontology generation itself needs to be automated to meet real-time requirements. This issue is not addressed in the solution.

Group 6:

Member Name:

Group 7:

Member Name:

Group 8:

Member Name: Bhargav Sandeep Ramayanam

1. Users generally write reviews of more than one sentence and also use shortened words ("gud" instead of "good") instead of writing them out completely. How does the author's proposed system deal with such complex sentences and shortened words?
2. The author mentions rating the skill level of the user but does not explain it. Giving different users different priorities is a good idea, but several issues arise while rating. How does the system overcome these issues and produce a rating?
3. The user review comments are in text format, but the ontology only stores structured data. How is this achieved? What processes does the system use for this translation?
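
One plausible shape for that translation (not the paper's actual implementation) is sketched below: split the free-form review into sentences, tag each sentence with a feature concept and a polarity, and emit structured records from which ontology instances could be populated. The keyword lists are illustrative assumptions.

```python
# Sketch of a text-to-structure pipeline; keyword lists are illustrative
# assumptions, not the paper's vocabulary.
import re

FEATURES = {"battery": {"battery", "charge"}, "zoom": {"zoom", "lens"}}
GOOD = {"good", "great", "excellent"}
BAD = {"bad", "poor", "weak"}

def to_records(review):
    records = []
    for sentence in re.split(r"[.!?]+", review):
        words = {w.lower() for w in sentence.split()}
        for feature, keys in FEATURES.items():
            if words & keys:
                polarity = ("good" if words & GOOD
                            else "bad" if words & BAD else "unclassified")
                records.append({"feature": feature, "opinion": polarity,
                                "sentence": sentence.strip()})
    return records

print(to_records("The zoom is excellent. Battery life is weak."))
# [{'feature': 'zoom', 'opinion': 'good', 'sentence': 'The zoom is excellent'},
#  {'feature': 'battery', 'opinion': 'bad', 'sentence': 'Battery life is weak'}]
```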

Group 9:

Member Name: Satish Bhat, Holly Vo

1. Can this recommendation scheme be applied to services and not just products? Can it be helpful in the service discovery and composition process?
2. The training period for the system can be substantially long even with textual information. Will a hybrid approach be able to reduce the training period?
3. Free-form text mining is a complex process, and it is quite difficult to classify long, complex sentences. Can more advanced AI techniques help reduce the complexity to some extent?

Group 10:

Member Name: Sunae Shin, Hyungbae Park

1. They define and explain the terms "opinion quality" and "product quality". Although they set out the variables to evaluate, the terms remain too abstract, so the overall structure of the ontology is unclear.
2. Also, the categories they use for mapping comments (good, bad, quality) are vague. More specific classes could be a solution for mapping complicated and long sentences.
3. The system's procedure touches on many general issues, such as how text data is handled when converting it to the ontology. Using text data in this way has been considered many times in previous research in various areas, such as keyword-matching metrics. However, the system is valuable because it connects this with the concept of an ontology, which frees the user from the raw text.
4. They have a solid mathematical procedure for processing opinion quality and product quality, which generates a reasonable evaluation of the consumer's reviews and the producer's opinion. The process shows that analyzing the data with precise metrics is a significant element of data mining. Again, with this procedure the user can get a recommendation without reading the text.
5. Validation of the system is absent. The system should be compared with other existing procedures to prove its worth, rather than just presenting the authors' own results.
