How to use Mahout Canopy

This article explains how to use Mahout Canopy. Many people run into questions about canopy clustering in day-to-day work, so this post pulls the scattered material together into a simple, practical walkthrough. Hopefully it clears up the common points of confusion; follow along and try it yourself.


Canopy is one implementation of a clustering algorithm. It is a simple, fast, but not especially accurate clustering method; small but effective. Its procedure is as follows:
1. Let the sample set be S, and pick two thresholds T1 and T2 with T1 > T2.
2. Take any point p in S as a new canopy, call it c, and remove p from S.
3. Compute the distance dist from every remaining point in S to p.
4. If dist < T1, add that point to canopy c.
5. If dist < T2, also remove the point from S: it is strongly marked and can no longer become a canopy center.
Repeat steps 2-5 until S is empty.
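To make the procedure concrete, here is a minimal in-memory sketch of the steps above in Java. All names (CanopySketch, dist, canopy) are hypothetical, the distance is plain Euclidean, and canopy membership is noted but not stored; this is an illustration, not Mahout's distributed implementation, which is described below.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class CanopySketch {
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    // Returns the canopy centers found in points; requires t1 > t2.
    static List<double[]> canopy(List<double[]> points, double t1, double t2) {
        List<double[]> centers = new ArrayList<>();
        List<double[]> s = new ArrayList<>(points);         // step 1: the working set S
        while (!s.isEmpty()) {
            double[] p = s.remove(0);                       // step 2: p becomes a new canopy
            centers.add(p);
            Iterator<double[]> it = s.iterator();
            while (it.hasNext()) {
                double d = dist(p, it.next());              // step 3: distance to p
                // step 4: d < t1 means the point joins this canopy (membership not stored here)
                if (d < t2) it.remove();                    // step 5: strongly marked, drop from S
            }
        }
        return centers;                                     // loop ends when S is empty
    }

    public static void main(String[] args) {
        List<double[]> pts = List.of(
                new double[]{0, 0}, new double[]{0.5, 0.2},
                new double[]{5, 5}, new double[]{5.2, 4.9});
        System.out.println("canopies: " + canopy(pts, 3.0, 1.0).size()); // prints 2
    }
}

With T1 = 3 and T2 = 1, the four sample points collapse into two canopies, one around each pair.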
The T1 and T2 parameters
When T1 is too large, many points fall into multiple canopies, which can leave the canopy centers close to one another and the clusters poorly separated.
When T2 is too large, the number of strongly marked points grows and the number of clusters shrinks; when T2 is too small, the number of clusters grows, and so does the computation time.
Mahout's implementation of canopy clustering is quite clever: the whole clustering run is completed with two map operations and one reduce operation. The canopy-building step can be summarized as: walk the given point set S with the two thresholds T1 > T2; for each point, use a low-cost metric to compute its distance to the existing canopy centers; if the distance is below T1 the point is added to that canopy, and if it is below T2 the point can no longer become a canopy center itself. Repeat until S is empty.
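For reference, the same run can also be driven from Java rather than from the command line. The following is only a hedged sketch assuming Mahout 0.7: the exact CanopyDriver.run signature differs between Mahout versions, and the input path must already hold vectors as SequenceFile<Text, VectorWritable> (the syntheticcontrol Job used later in this article performs that CSV-to-vector conversion itself).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.mahout.clustering.canopy.CanopyDriver;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;

public class RunCanopy {
    public static void main(String[] args) throws Exception {
        CanopyDriver.run(new Configuration(),
                new Path("/20140824/data"),        // already-vectorized input
                new Path("/20140824"),             // output root
                new EuclideanDistanceMeasure(),
                10.0,    // t1
                1.0,     // t2
                true,    // also assign points to the canopies
                0.0,     // clusterClassificationThreshold (0.7+ parameter)
                false);  // run as MapReduce rather than sequentially
    }
}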
Distance implementations
All measures implement the org.apache.mahout.common.distance.DistanceMeasure interface:
CosineDistanceMeasure - cosine distance
SquaredEuclideanDistanceMeasure - the square of the Euclidean distance
EuclideanDistanceMeasure - Euclidean distance
ManhattanDistanceMeasure - Manhattan distance, used a great deal in image processing
TanimotoDistanceMeasure - Tanimoto (Jaccard) similarity; weighted variants of the Euclidean and Manhattan distances are also provided
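These measures can also be called directly on Mahout vectors; a minimal sketch (the class and variable names here are illustrative):

import org.apache.mahout.common.distance.EuclideanDistanceMeasure;
import org.apache.mahout.common.distance.ManhattanDistanceMeasure;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.Vector;

public class DistanceDemo {
    public static void main(String[] args) {
        Vector a = new DenseVector(new double[]{1.0, 2.0});
        Vector b = new DenseVector(new double[]{4.0, 6.0});
        System.out.println(new EuclideanDistanceMeasure().distance(a, b)); // 5.0
        System.out.println(new ManhattanDistanceMeasure().distance(a, b)); // 7.0
    }
}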
Notes on using canopy
1. First is the choice of the cheap distance measure: whether to use an attribute from the model itself or some external attribute matters a great deal for how the canopies turn out.
2. The values of T1 and T2 affect the overlap rate F as well as the granularity of the canopies.
3. Canopy can screen out outliers, which k-means cannot do by itself: after the canopies are built, the ones containing only a few points can be deleted, since those tend to hold the outliers (see the sketch after this list).
4. Tuning the number of points per canopy to derive the number of cluster centers k tends to give good results.
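As a toy illustration of point 3, a hypothetical pruneSmall helper (a method one could add to the CanopySketch class above) counts the points within T1 of each center and drops the sparse canopies:

// Drop canopies whose T1-neighborhoods hold fewer than minPoints points;
// the small ones are the likely homes of outliers.
static List<double[]> pruneSmall(List<double[]> centers, List<double[]> points,
                                 double t1, int minPoints) {
    List<double[]> kept = new ArrayList<>();
    for (double[] c : centers) {
        int n = 0;
        for (double[] p : points) if (dist(c, p) < t1) n++;
        if (n >= minPoints) kept.add(c);
    }
    return kept;
}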
[root@localhost bin]# hadoop fs -mkdir /20140824
[root@localhost data]# vi test-data.csv
1 -0.213  -0.956  -0.003  0.056  0.091  0.017  -0.024  1
1 3.147  2.129  -0.006  -0.056  -0.063  -0.002  0.109  0
1 -2.165  -2.718  -0.008  0.043  -0.103  -0.156  -0.024  1
1 -4.337  -2.686  -0.012  0.122  0.082  -0.021  -0.042  1
[root@localhost data]# hadoop fs -put test-data.csv /20140824
[root@localhost mahout-distribution-0.7]# hadoop jar mahout-examples-0.7-job.jar org.apache.mahout.clustering.syntheticcontrol.canopy.Job -i /20140824/test-data.csv -o /20140824 -t1 10 -t2 1
16/12/05 05:37:09 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/12/05 05:37:13 INFO input.FileInputFormat: Total input paths to process : 1
16/12/05 05:37:14 INFO mapreduce.JobSubmitter: number of splits:1
16/12/05 05:37:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1480730026445_0005
16/12/05 05:37:17 INFO impl.YarnClientImpl: Submitted application application_1480730026445_0005
16/12/05 05:37:17 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1480730026445_0005/
16/12/05 05:37:17 INFO mapreduce.Job: Running job: job_1480730026445_0005
16/12/05 05:38:26 INFO mapreduce.Job: Job job_1480730026445_0005 running in uber mode : false
16/12/05 05:38:27 INFO mapreduce.Job:  map 0% reduce 0%
16/12/05 05:39:25 INFO mapreduce.Job:  map 100% reduce 0%
16/12/05 05:39:28 INFO mapreduce.Job: Job job_1480730026445_0005 completed successfully
16/12/05 05:39:30 INFO mapreduce.Job: Counters: 30
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=105369
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=339
        HDFS: Number of bytes written=457
        HDFS: Number of read operations=5
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=51412
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=51412
        Total vcore-seconds taken by all map tasks=51412
        Total megabyte-seconds taken by all map tasks=52645888
    Map-Reduce Framework
        Map input records=4
        Map output records=4
        Input split bytes=108
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=140
        CPU time spent (ms)=1620
        Physical memory (bytes) snapshot=87416832
        Virtual memory (bytes) snapshot=841273344
        Total committed heap usage (bytes)=15597568
    File Input Format Counters
        Bytes Read=231
    File Output Format Counters
        Bytes Written=457
16/12/05 05:39:31 INFO canopy.CanopyDriver: Build Clusters Input: /20140824/data Out: /20140824 Measure: org.apache.mahout.common.distance.SquaredEuclideanDistanceMeasure@79b0cd8f t1: 10.0 t2: 1.0
16/12/05 05:39:32 INFO client.RMProxy: Connecting to ResourceManager at hadoop02/127.0.0.1:8032
16/12/05 05:39:33 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/12/05 05:39:37 INFO input.FileInputFormat: Total input paths to process : 1
16/12/05 05:39:38 INFO mapreduce.JobSubmitter: number of splits:1
16/12/05 05:39:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1480730026445_0006
16/12/05 05:39:38 INFO impl.YarnClientImpl: Submitted application application_1480730026445_0006
16/12/05 05:39:39 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1480730026445_0006/
16/12/05 05:39:39 INFO mapreduce.Job: Running job: job_1480730026445_0006
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=105814
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1970
        HDFS: Number of bytes written=527
        HDFS: Number of read operations=13
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=26957
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=26957
        Total vcore-seconds taken by all map tasks=26957
        Total megabyte-seconds taken by all map tasks=27603968
    Map-Reduce Framework
        Map input records=4
        Map output records=4
        Input split bytes=112
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=134
        CPU time spent (ms)=1880
        Physical memory (bytes) snapshot=96550912
        Virtual memory (bytes) snapshot=841433088
        Total committed heap usage (bytes)=15597568
    File Input Format Counters
        Bytes Read=457
    File Output Format Counters
        Bytes Written=527
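The dump of the resulting clusters below was produced with Mahout's clusterdump utility. The exact command was omitted from the original transcript; assuming the 0.7 CLI (flag names vary slightly between versions), it would look something like:

[root@localhost mahout-distribution-0.7]# bin/mahout clusterdump --input /20140824/clusters-0-final --pointsDir /20140824/clusteredPoints

In the dump, C-i is the canopy id, n is the number of points assigned to it, c is its center vector, and r is its per-dimension radius.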
C-0{n=2 c=[1.000, -3.794, -2.694, -0.011, 0.102, 0.036, -0.055, -0.038, 1.000] r=[1:0.543, 2:0.008, 3:0.001, 4:0.020, 5:0.046, 6:0.034, 7:0.004]}
    Weight : [props - optional]:  Point:
    1.0: [1.000, -4.337, -2.686, -0.012, 0.122, 0.082, -0.021, -0.042, 1.000]
C-1{n=2 c=[1.000, -2.220, -2.270, -0.008, 0.066, -0.008, -0.079, -0.029, 1.000] r=[1:1.031, 2:0.433, 3:0.002, 4:0.016, 5:0.002, 6:0.010, 7:0.005]}
    Weight : [props - optional]:  Point:
    1.0: [1.000, -2.165, -2.718, -0.008, 0.043, -0.103, -0.156, -0.024, 1.000]
C-2{n=1 c=[0:1.000, 1:3.147, 2:2.129, 3:-0.006, 4:-0.056, 5:-0.063, 6:-0.002, 7:0.109] r=[]}
    Weight : [props - optional]:  Point:
    1.0: [0:1.000, 1:3.147, 2:2.129, 3:-0.006, 4:-0.056, 5:-0.063, 6:-0.002, 7:0.109]
C-3{n=1 c=[1.000, -1.189, -1.837, -0.006, 0.050, -0.006, -0.070, -0.024, 1.000] r=[]}
    Weight : [props - optional]:  Point:
    1.0: [1.000, -0.213, -0.956, -0.003, 0.056, 0.091, 0.017, -0.024, 1.000]
16/12/05 05:43:59 INFO clustering.ClusterDumper: Wrote 4 clusters
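Listing the output directory (the command itself was omitted from the transcript; presumably hadoop fs -ls /20140824) shows the artifacts the job produced:

[root@localhost mahout-distribution-0.7]# hadoop fs -ls /20140824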
16/12/05 05:55:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 4 items
drwxr-xr-x   - root supergroup          0 2016-12-05 05:43 /20140824/clusteredPoints
drwxr-xr-x   - root supergroup          0 2016-12-05 05:42 /20140824/clusters-0-final
drwxr-xr-x   - root supergroup          0 2016-12-05 05:39 /20140824/data
-rw-r--r--   1 root supergroup        231 2016-12-05 05:21 /20140824/test-data.csv

This concludes the walkthrough of how to use Mahout Canopy; hopefully it has cleared up the usual points of confusion. Pairing the theory with hands-on practice is the best way to make it stick, so go give it a try!

