
LDA topic number

Hi Vikas --

The optimum number of topics (K in LDA) depends on at least two factors. Firstly, your data set may have an intrinsic number of topics, i.e., it may derive from natural clusters in your data. In the best case, this number is the one that minimises your perplexity (ppx). A non-parametric approach like HDP would ideally result in the same K as the one that minimises ppx for LDA. The second type of influence is that of the hyperparameters. If you fix the Dirichlet parameters alpha and beta (for LDA's Dirichlet-multinomial "levels" (theta | alpha) and (phi | beta)), you bias the optimum K. For instance, smaller alpha will force more "decisive" choices of z for each token, concentrating theta onto fewer weights, which influences K.
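
A quick way to see this prior effect numerically is to sample theta from symmetric Dirichlet priors with different alpha values and compare how concentrated the draws are. Below is a minimal sketch using NumPy; K = 10, the alpha grid, and the sample count are arbitrary choices for illustration:

    import numpy as np

    def mean_entropy(alpha, num_topics=10, num_docs=5000, seed=0):
        """Average entropy of draws theta ~ Dirichlet(alpha, ..., alpha).

        Lower entropy means theta puts its mass on fewer topics,
        i.e., more "decisive" topic choices per document.
        """
        rng = np.random.default_rng(seed)
        theta = rng.dirichlet([alpha] * num_topics, size=num_docs)
        # small constant guards log(0) for very sparse draws
        return float(-np.mean(np.sum(theta * np.log(theta + 1e-12), axis=1)))

    for alpha in (0.01, 0.1, 1.0, 10.0):
        print(f"alpha={alpha:<5} mean entropy of theta: {mean_entropy(alpha):.3f}")

With alpha = 0.01 nearly all mass sits on one or two topics, while alpha = 10 gives close-to-uniform theta; this sparsity shift is what biases the K at which perplexity bottoms out.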

Trouble minimizing perplexity in LDA


I am running LDA from Mark Steyvers' MATLAB Topic Modelling toolkit on a few Apache Java open-source projects. I have taken care of stop-word removal (e.g., words such as "Apache" and Java keywords are marked as stop words) and tokenization. I find that perplexity on test data always decreases with an increasing number of topics. I tried different values of ALPHA, but it made no difference.


I need to find the optimal number of topics, and for that the perplexity plot should reach a minimum. Please suggest what may be wrong.

The definition and details regarding the calculation of perplexity of a topic model are explained in this post.
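
For reference, the usual held-out definition (as in Blei et al., 2003) over M test documents is

$$ \mathrm{perplexity}(D_{\mathrm{test}}) = \exp\left(-\frac{\sum_{d=1}^{M} \log p(\mathbf{w}_d)}{\sum_{d=1}^{M} N_d}\right) $$

where N_d is the number of tokens in document d. It is the exponentiated negative average per-token log-likelihood, so lower is better.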

Edit: I played with the hyperparameters alpha and beta, and now perplexity seems to reach a minimum. It is not clear to me how these hyperparameters affect perplexity. Initially I was plotting results up to 200 topics without any success. Now, on the same range, the minimum is reached at around 50-60 topics (which was my intuition) after modifying the hyperparameters. Also, as this post notes, specific values of the hyperparameters bias the optimal number of topics.
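
For anyone wanting to reproduce this kind of sweep outside MATLAB, here is a minimal sketch using gensim; the toy corpus, the K grid, and the fixed alpha/eta values are placeholders, and in the setting above each document would be the filtered token stream of one Java source file:

    from gensim import corpora
    from gensim.models import LdaModel

    # Toy stand-in corpus; each inner list plays the role of one
    # tokenized, stop-word-filtered source file.
    docs = [
        ["parser", "token", "stream", "token"],
        ["index", "segment", "merge", "index"],
        ["query", "score", "index", "search"],
        ["parser", "grammar", "token", "rule"],
    ] * 25  # repeated so a train/test split is non-trivial

    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    train, test = corpus[:80], corpus[80:]

    for num_topics in (2, 5, 10, 20):
        lda = LdaModel(train, num_topics=num_topics, id2word=dictionary,
                       alpha=0.1, eta=0.01, passes=10, random_state=0)
        # gensim's log_perplexity returns a per-word bound in base 2;
        # perplexity = 2 ** (-bound), lower is better.
        print(num_topics, 2 ** (-lda.log_perplexity(test)))

The K at which the printed perplexity bottoms out is the minimum discussed above; refitting with different fixed alpha/eta shifts where that minimum lands.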

machine-learning topic-models hyperparameter
asked Sep 14 '12 at 5:22 by abhinavkulkarni (edited Sep 15 '12 at 2:13)
Many of us probably don't know what perplexity means and what a perplexity plot shows. I know I don't. Could you enlighten me (us)? – Michael Chernick Sep 14 '12 at 15:54
@MichaelChernick: I edited the post to include a link detailing the perplexity of a topic model. – abhinavkulkarni Sep 14 '12 at 22:27
Thanks for doing that. – Michael Chernick Sep 14 '12 at 22:52
How many topics have you tried so far (on what size corpus)? Maybe you just haven't yet hit the right number of topics? Also, for inferring the number of topics from data you may want to look into the Hierarchical Dirichlet Process (HDP) with code on David Blei's site: cs.princeton.edu/~blei/topicmodeling.html – Nick Sep 14 '12 at 23:22
@Nick: Indeed, HDP, a nonparametric topic-modelling algorithm, is an alternative to LDA wherein you don't have to tune hyperparameters. However, at this point I would like to stick to LDA and understand how and why perplexity behaviour changes drastically with small adjustments in the hyperparameters. Also, my corpus size is quite large. For example, I have tokenized the Apache Lucene source code with ~1800 Java files and 367K source-code lines. So that's a pretty big corpus, I guess. – abhinavkulkarni Sep 15 '12 at 2:21

1 Answer


You might want to have a look at the implementation of LDA in Mallet, which can do hyperparameter optimization as part of the training. Mallet also uses asymmetric priors by default, which, according to this paper, makes the model much more robust against setting the number of topics too high. In practice this means you don't have to specify the hyperparameters and can set the number of topics pretty high without negatively affecting the results.

In my experience, hyperparameter optimization and asymmetric priors gave significantly better topics than without them, but I haven't tried the MATLAB Topic Modelling toolkit.
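
If you would rather stay in Python than move to Mallet's Java stack, gensim's LdaModel has a loosely analogous option: passing alpha='auto' and eta='auto' makes it learn asymmetric priors from the data during training (the mechanics differ from Mallet's optimization in detail). A minimal sketch with a placeholder corpus:

    from gensim import corpora
    from gensim.models import LdaModel

    # Tiny placeholder corpus, standing in for real documents.
    docs = [["parser", "token", "stream"], ["index", "segment", "merge"],
            ["query", "score", "search"]] * 20
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]

    # 'auto' lets gensim update the priors during training instead of
    # keeping them fixed and symmetric.
    lda = LdaModel(corpus, num_topics=10, id2word=dictionary,
                   alpha="auto", eta="auto", passes=10, random_state=0)
    print(lda.alpha)  # learned, now asymmetric, document-topic prior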
