Text Mining / Clustering / Label Prediction

Dave0408 Member Posts: 8 Contributor I
edited November 2018 in Help

Hello there,

I've been playing around with some text processing. I've got a collection of about 1,000 articles of sport news (especially soccer/football) collected from different RSS feeds.

To start with a good basis, I categorized them all manually into 7 categories. That leads to the following distribution (labels in German):

label        count  %
Teamnews     430    37.01
Rest         166    14.29
Transfers    143    12.31
Skandal      141    12.13
Verletzung   124    10.67
Management   99     8.52
Liganews     59     5.08
Total        1162   100.00

My aim now is to set up a prediction model that will categorize future articles on its own.

That's where I'm stuck a little bit. Basically, I do the following text processing:

Spoiler (RapidMiner process XML not preserved in this copy)
In another process I filtered by label and checked the created word lists, and I was satisfied with the results: they recognized the most "important" words for every label.
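The spoilered process above isn't readable here, so as a rough illustration: a comparable preprocessing chain (lowercase, tokenize, stop-word filtering, TF-IDF weighting) can be sketched in Python with scikit-learn. This is an assumed equivalent, not the poster's actual RapidMiner process, and the sample articles are made up.

```python
# Hypothetical sketch of a typical text-preprocessing chain:
# lowercase, tokenize, (optionally) filter stopwords, then turn
# each article into a TF-IDF vector.
from sklearn.feature_extraction.text import TfidfVectorizer

articles = [
    "Der Trainer lobt die Abwehr nach dem Sieg",      # made-up sample texts
    "Neuer Transfer: Stuermer wechselt den Verein",
]

vectorizer = TfidfVectorizer(
    lowercase=True,   # "transform cases" step
    stop_words=None,  # plug a German stopword list in here
    min_df=1,         # pruning: drop words rarer than this
)
X = vectorizer.fit_transform(articles)
print(X.shape)  # (number of articles, number of distinct words)
```

The resulting matrix (one TF-IDF column per word) is the kind of example set the classifiers below expect.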

I stored them in a MySQL DB. I also created a top-50 word list which includes the 50 most used words of each label. But I'm not using either list right now. ;)

But back to my current problem. To create a model, I chose the X-Validation operator and tried different classification learners (Naive Bayes, k-NN, ID3, and Decision Tree).

Because the results of the Performance operator were so disappointing in all cases, I also used the Optimize Parameters operator, unfortunately without success.

For example, I got an accuracy of 12.48% with my k-NN prediction model.

Here is an example output:

accuracy: 12.48% +/- 0.59% (mikro: 12.48%)

                  true     true        true       true        true      true   true      class
                  Skandal  Management  Transfers  Verletzung  Teamnews  Rest   Liganews  precision
pred. Skandal     141      98          142        124         430       161    58        12.22%
pred. Management  0        0           1          0           0         0      0         0.00%
pred. Transfers   0        1           0          0           0         0      0         0.00%
pred. Verletzung  0        0           0          0           0         0      0         0.00%
pred. Teamnews    0        0           0          0           0         0      0         0.00%
pred. Rest        0        0           0          0           0         4      1         80.00%
pred. Liganews    0        0           0          0           0         1      0         0.00%
class recall      100.00%  0.00%       0.00%      0.00%       0.00%     2.41%  0.00%
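One thing the confusion matrix above makes visible: almost everything is predicted as "Skandal", so the accuracy is simply the diagonal over the total. Note that always predicting the majority class "Teamnews" would already reach 37.01%, so the learners are doing worse than a trivial baseline. A quick arithmetic check, with the values taken from the table:

```python
# Sanity check on the confusion matrix: accuracy is the sum of the
# diagonal (correct predictions) divided by the total number of examples.
correct = 141 + 0 + 0 + 0 + 0 + 4 + 0  # diagonal of the matrix
total = 1162                            # all labelled articles
accuracy = correct / total
print(f"{accuracy:.2%}")                # -> 12.48%
```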

Reducing the number of articles in label "Teamnews" to 150 to get a better distribution wasn't successful either.

So, is there any hint or tip how I can increase my accuracy to something higher than 70%?

Is it a mistake in the previous text-processing steps?

Should I use my stored word lists for each category instead of the whole articles?

Or is this the completely wrong way of doing it?

If you need any more information, please let me know.

Thanks.

Best,

David

Answers

  • MartinLiebig Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,368 RM Data Scientist

    Hi,

    quick thought: Have you tried a Linear SVM in a Polynominal by Binominal Classification operator?


    ~Martin

    - Head of Data Science Services at RapidMiner -
    Dortmund, Germany
  • Dave0408 Member Posts: 8 Contributor I

    Hi.

    Have you tried a Linear SVM in a Polynominal by Binominal Classification operator?

    No, never used it before...

    I had a look at the tutorial process. It is used with numerical attributes?!

    My input training example set has got following structure:

    Role    Name   Type
    label   label  nominal
            text   nominal

    And the operator combination is not able to work with that?

    Spoiler (process XML largely not preserved in this copy; the only surviving fragment:)

    <portSpacing port="sink_through 1" spacing="0"/>
    EDIT:

    Fixed: the text attribute is of type "text", not nominal.

  • MartinLiebig Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,368 RM Data Scientist

    Hi,

    you are right, it does not work on nominal/text attributes. But you usually do not train on the text itself. I cannot load your process (for some unknown reason), but it is using tokenization. So the structure of your data should be

    label (nominal, label)

    Text (text, special)

    count_wordA (numerical, regular)

    count_wordB (numerical, regular)

    count_wordC (numerical, regular)

    count_wordD (numerical, regular)

    where count might be the TF-IDF value. That is perfect for an SVM.
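The structure Martin describes maps directly onto, for instance, scikit-learn's TfidfVectorizer plus a linear SVM. This is an illustrative sketch outside RapidMiner; the texts and labels below are made up, not the poster's data.

```python
# One TF-IDF column per word plus a nominal label, fed into a linear SVM.
# (Sample texts/labels are invented for illustration.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = [
    "torwart verletzt am knie",
    "verein verpflichtet neuen stuermer",
    "trainer entlassen nach skandal",
    "spieler faellt wochen aus",
]
labels = ["Verletzung", "Transfers", "Skandal", "Verletzung"]

X = TfidfVectorizer().fit_transform(texts)  # count_wordA ... count_wordN columns
clf = LinearSVC().fit(X, labels)            # multi-class handled one-vs-rest
print(clf.predict(X))
```

The one-vs-rest handling inside LinearSVC plays the same role as the Polynominal by Binominal Classification operator around a binary SVM.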

    Best,

    Martin

    - Head of Data Science Services at RapidMiner -
    Dortmund, Germany
  • Dave0408 Member Posts: 8 Contributor I

    Hey Martin,

    thanks for your quick thoughts.

    Running the Linear SVM in a Polynominal by Binominal Classification operator leads me to an accuracy of 63.19% +/- 4.75% (mikro: 63.17%).

    Not perfect, but really much better than my first results. And I think it's acceptable.

    Unfortunately, I had to cut my input examples off at a limit of 150 per label. Otherwise RapidMiner crashes on my computer (i5 core and 16 GB RAM).

    So thanks again.

  • MartinLiebig Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,368 RM Data Scientist

    Hi Dave,

    try to change the pruning settings. How many attributes did you create? If it is something like 2k, I can imagine why the SVM crashes.

    The next step for better results would be to optimize the C parameter of the SVM. Take a logarithmic "grid" between 1e-3 and 1e3. That should boost it.
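In scikit-learn terms, Martin's suggestion would be a grid search over C on a logarithmic scale from 1e-3 to 1e3. A minimal sketch, with tiny invented data standing in for the real example set:

```python
# Optimize the SVM's C parameter over a logarithmic grid
# (1e-3, 1e-2, ..., 1e3); data here is illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

texts = ["ab bc cd", "cd de ef", "ab bc de", "de ef fg", "ab cd ef", "bc de fg"]
labels = ["x", "y", "x", "y", "x", "y"]
X = TfidfVectorizer().fit_transform(texts)

grid = GridSearchCV(LinearSVC(), {"C": np.logspace(-3, 3, 7)}, cv=2)
grid.fit(X, labels)
print(grid.best_params_["C"])  # the C value with the best cross-validated accuracy
```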

    Best,

    Martin

    - Head of Data Science Services at RapidMiner -
    Dortmund, Germany
  • Dave0408 Member Posts: 8 Contributor I

    Hey Martin,

    try to change the pruning settings. How many attributes did you create? If it is something like 2k, I can imagine why the SVM crashes.

    I tried, but 9 out of 10 times RapidMiner runs into what looks like an endless loop: the process time counter keeps going, but nothing happens. I am only successful if I use a very small number of examples (fewer than 100). This costs me a lot of accuracy.

    My original database includes 1162 examples and 16 regular attributes.

    I filter this down to 725 examples and 1 regular attribute. Then I start my text preprocessing.

    At this step my example set includes 725 examples and 2 special and 72 regular attributes.

    Using pruning, my word list only contains 47 entries across all 6 labels. (Which I think is very few?!)
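For context, RapidMiner's prune-below/prune-above settings roughly correspond to min_df/max_df in scikit-learn: words that appear in too few or too many documents are dropped. A small sketch with invented documents (the thresholds are illustrative, not the poster's settings):

```python
# Pruning the word list: drop words occurring in fewer than min_df
# documents or in more than max_df (fraction) of all documents.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["ball tor sieg", "tor transfer wechsel", "ball sieg trainer",
        "skandal trainer", "tor ball sieg", "verletzung knie"]

loose = TfidfVectorizer(min_df=1).fit(docs)              # keep every word
tight = TfidfVectorizer(min_df=2, max_df=0.8).fit(docs)  # prune rare/common words
print(len(loose.vocabulary_), len(tight.vocabulary_))    # pruning shrinks the vocab
```

Too-aggressive pruning shrinks the word list drastically, which would explain both the small vocabulary and the accuracy loss.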

    When the process reaches the Polynominal by Binominal operator including the SVM (linear), RapidMiner gets stuck.

    Spoiler (RapidMiner process XML not preserved in this copy)

    Some information about my hardware:

    OS name: Microsoft Windows 10 Pro
    Version: 10.0.14393 Build 14393
    System type: x64-based PC
    Processor: Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz, 2195 MHz, 2 core(s), 4 logical processor(s)
    Installed physical memory (RAM): 16.0 GB
    Page file size: 800 MB

    My Java Version:

    Version 8 Update 111 Build 1.8.0_111-b14

    Any idea?

    The next step for better results would be to optimize the C parameter of the SVM. Take a logarithmic "grid" between 1e-3 and 1e3. That should boost it.

    This is a good hint, and I will check it once the problem described above is solved.

    Best wishes,

    David

  • MartinLiebig Administrator, Moderator, Employee, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,368 RM Data Scientist

    Hi,

    can you maybe send me the data and the process? I am keen to have a look at it. Of course we treat the data as confidential. My email address is mschmitz at rapidminer dot com

    ~ Martin

    - Head of Data Science Services at RapidMiner -
    Dortmund, Germany
  • jabra Member Posts: 20 Contributor I

    Hello, you could first cluster the texts, then use the resulting cluster column as a label and run classification algorithms on it, for example the SVM algorithm.
    Thank you so much
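A sketch of jabra's idea, purely for illustration (the texts, cluster count, and pipeline are assumptions, not a recommendation from the thread): cluster the TF-IDF vectors first, then train a classifier with the cluster id as the label.

```python
# Cluster articles on their TF-IDF vectors, then use the cluster id
# as a (pseudo-)label for a linear SVM. Data is invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = ["tor sieg spiel", "transfer wechsel verein", "tor spiel trainer",
         "wechsel verein stuermer", "sieg spiel tor", "verein transfer"]
X = TfidfVectorizer().fit_transform(texts)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
clf = LinearSVC().fit(X, clusters)  # cluster id stands in for the label
print(clusters)
```

Note that this replaces the hand-made categories with unsupervised ones, so it answers a different question than the supervised setup discussed above.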
