In this paper we describe the two systems, here called X2C-A and X2C-B, that we specifically developed and submitted for our participation in ABSITA 2018, for the Aspect Category Detection (ACD) and Aspect Category Polarity (ACP) tasks, and we present their results. The results show that X2C-A is a top ranker in the official ACD results, at a distance of just 0.0073 from the best system; moreover, its post-deadline improved version, called X2C-A-s, scores first with respect to the official ACD results. As for the ACP results, our X2C-A-s system, which takes advantage of our ready-to-use industrial Sentiment API, scores at a distance of just 0.0577 from the best system, even though it has not been specifically trained on the training set of the evaluation.
In Proc. of the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2018). CEUR Workshop Proceedings, Vol. 2263.
In this paper we describe the two systems that we specifically developed to participate in IronITA 2018 for the irony detection task, and we show their results. We scored first in the official ranking of the competition for the unconstrained run. Considering the constrained and unconstrained rankings together, we scored at a distance of just 0.027 in F1 score from the best result.
In Proc. of the 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017). To appear.
Accurately computing the sentiment expressed in huge amounts of textual data is a key task largely required by the market, and nowadays industrial engines offer ready-to-use APIs for sentiment analysis tasks. However, building sentiment engines that show high accuracy on different topic domains and, even more difficult, on structurally different textual sources (e.g., reviews, tweets, blogs, etc.) is not a trivial task. Papers about cross-domain techniques have recently been published but, to the best of our knowledge, they are either tested on the same textual source (e.g., reviews) or, in the case of a cross-source evaluation, they lack a comparison with industrial engines, which are instead specifically designed to deal with multiple sources. In this paper, we compare the results of research and industrial engines in an extensive experimental evaluation of the document-level polarity detection task performed on different textual sources: tweets, app reviews and general product reviews, in both English and Italian. The results of the experimental evaluation help the reader to quantify the performance gap between industrial and research sentiment engines when both are tested on heterogeneous textual sources and on different languages (English/Italian). We also present the results of our multi-source API, called X2Check, which shows better results than the industrial engines under evaluation for both Italian and English. Compared to the research engines, X2Check shows a macro-F1 score that is always higher than that of the tools not specifically trained on the test set under evaluation.
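As a point of reference for the metric used above, here is a minimal sketch of how a document-level macro-F1 score can be computed with scikit-learn; the three polarity labels and the toy gold/predicted annotations are illustrative assumptions, not data from the evaluation.

```python
# Minimal sketch: document-level polarity evaluation with macro-F1.
# Labels and predictions below are toy examples, not results from the paper.
from sklearn.metrics import classification_report, f1_score

LABELS = ["positive", "negative", "neutral"]  # assumed polarity classes

# Hypothetical gold annotations and engine predictions for a handful of documents.
gold = ["positive", "negative", "neutral", "negative", "positive", "neutral"]
pred = ["positive", "neutral",  "neutral", "negative", "negative", "neutral"]

# Macro-F1 averages the per-class F1 scores, so rare classes weigh as much as frequent ones.
macro_f1 = f1_score(gold, pred, labels=LABELS, average="macro")
print(f"macro-F1: {macro_f1:.3f}")
print(classification_report(gold, pred, labels=LABELS, digits=3))
```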
In Proc. of the Third Italian Conference on Computational Linguistics (CLiC-it 2016) & Fifth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2016). CEUR Workshop Proceedings, Vol. 1749.
In this paper we present our Tweet2Check tool, provide an analysis of the experimental results it obtained at the Evalita Sentipolc 2016 evaluation, and compare its performance with the state-of-the-art tools that participated in the evaluation. In the experimental analysis, we show that Tweet2Check is: (i) the winner of the irony detection task; (ii) the second-ranked tool for the polarity task, considering the unconstrained runs, at a distance of 0.017 from the first tool; (iii) among the top 5 tools (out of 13) according to a score that indicates the most complete, best-performing tools for sentiment analysis of tweets, i.e., the sum of each team's best F-score on the three tasks (subjectivity, polarity and irony); (iv) the second-best tool, according to the same score, when considering the polarity and irony tasks together.
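To illustrate the composite score mentioned in point (iii), the sketch below sums each team's best F-score over the three tasks; the team names and figures are invented placeholders, not the official Sentipolc 2016 results.

```python
# Sketch of the composite score from point (iii): for each team, sum its best
# F-score on the subjectivity, polarity and irony tasks.
# All team names and scores below are invented placeholders.
best_f_scores = {
    "team_a": {"subjectivity": 0.71, "polarity": 0.66, "irony": 0.54},
    "team_b": {"subjectivity": 0.69, "polarity": 0.68, "irony": 0.50},
    "team_c": {"subjectivity": 0.73, "polarity": 0.62, "irony": 0.48},
}

composite = {team: sum(scores.values()) for team, scores in best_f_scores.items()}
for team, score in sorted(composite.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{team}: {score:.3f}")
```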
In Semantic Web Challenges, Communications in Computer and Information Science (CCIS) Vol. 641, Springer. Volume editors: Harald Sack, Stefan Dietze, Anna Tordai.
In this paper we show App2Check's performance when applied to Amazon product reviews. In our experimental evaluation, we report App2Check's performance with and without specific training on Amazon product reviews, and we compare our results with two state-of-the-art research tools.
In Proc. of Sideways 2016 – 2nd International Workshop on Social Media World Sensors, held in conjunction with the 10th edition of the Language Resources and Evaluation Conference (LREC 2016).
Sentiment analysis nowadays plays a crucial role in social media analysis and, more generally, in analysing user opinions about general topics or user reviews about products/services, enabling a huge number of applications. Many methods and software tools implementing different approaches exist, and there is no clear best approach for sentiment classification/quantification. We believe that the performance reached by machine learning approaches is a key advantage for sentiment analysis, making it possible to get very close to the accuracy obtained by groups of humans evaluating subjective sentences such as user reviews. In this paper, we present the App2Check system, developed mainly by applying supervised learning techniques, and the results of our experimental evaluation, showing that App2Check outperforms state-of-the-art research tools on Italian-language user reviews of apps published on app stores.
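The supervised approach mentioned above can be pictured with a minimal baseline; the pipeline below (TF-IDF features plus logistic regression in scikit-learn) is an assumed, simplified stand-in and not the actual App2Check training pipeline, whose details are described in the paper.

```python
# Minimal supervised polarity baseline: TF-IDF features + logistic regression.
# This is a simplified illustration, not the actual App2Check pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set of Italian app-store reviews with polarity labels.
train_texts = [
    "App fantastica, la uso ogni giorno",           # "Great app, I use it every day"
    "Si blocca in continuazione, inutilizzabile",   # "It keeps crashing, unusable"
    "Funziona bene dopo l'ultimo aggiornamento",    # "Works well after the last update"
    "Pubblicità ovunque, esperienza pessima",       # "Ads everywhere, terrible experience"
]
train_labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # word and bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# Predict the polarity of a new review.
print(model.predict(["L'app è molto utile ma a volte è lenta"]))
```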
Benchmark performed on 10,000 Italian-language comments posted by users on the Apple and Google app stores, covering 10 popular apps (1,000 comments per app). All experiments are repeatable: please read our papers for more details or contact us to repeat the experiments.
We have a team of research scientists who apply state-of-the-art Natural Language Processing and Machine Learning techniques to design and build ready-to-market AI-based products.
We speak at the leading research and business conferences on sentiment analysis, and we were invited speakers at the Sentiment Analysis Symposium 2017 in New York.
You can contact us at:
support@app2check.com
+39 010 09 660
Via XX Settembre, 14 – 16121, Genoa