Hadoop word count example

I am new to Hadoop and have just installed Hadoop 2.6.

The system seems to have started up fine. I am trying to run the word count example, and the problem is that everything appears to run: the output folder was created with two files:

-rw-r--r--   1 yoni supergroup   0 2016-04-30 02:11 /user/yoni/output100/_SUCCESS
-rw-r--r--   1 yoni supergroup   0 2016-04-30 02:11 /user/yoni/output100/part-r-00000

but the file part-r-00000 is empty. The problem is that I do not know how to track down the cause.

This is the log of the job:

16/04/30 20:30:33 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 
16/04/30 20:30:34 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String). 
16/04/30 20:30:34 INFO input.FileInputFormat: Total input paths to process : 1 
16/04/30 20:30:34 INFO mapreduce.JobSubmitter: number of splits:1 
16/04/30 20:30:34 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1461971181442_0005 
16/04/30 20:30:34 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources. 
16/04/30 20:30:34 INFO impl.YarnClientImpl: Submitted application application_1461971181442_0005 
16/04/30 20:30:34 INFO mapreduce.Job: The url to track the job: http://yoni-Lenovo-Z40-70:8088/proxy/application_1461971181442_0005/ 
16/04/30 20:30:34 INFO mapreduce.Job: Running job: job_1461971181442_0005 
16/04/30 20:30:41 INFO mapreduce.Job: Job job_1461971181442_0005 running in uber mode : false 
16/04/30 20:30:41 INFO mapreduce.Job: map 0% reduce 0% 
16/04/30 20:30:46 INFO mapreduce.Job: map 100% reduce 0% 
16/04/30 20:30:51 INFO mapreduce.Job: map 100% reduce 100% 
16/04/30 20:30:52 INFO mapreduce.Job: Job job_1461971181442_0005 completed successfully 
16/04/30 20:30:52 INFO mapreduce.Job: Counters: 49 
    File System Counters 
     FILE: Number of bytes read=6 
     FILE: Number of bytes written=211511 
     FILE: Number of read operations=0 
     FILE: Number of large read operations=0 
     FILE: Number of write operations=0 
     HDFS: Number of bytes read=170 
     HDFS: Number of bytes written=86 
     HDFS: Number of read operations=6 
     HDFS: Number of large read operations=0 
     HDFS: Number of write operations=2 
    Job Counters 
     Launched map tasks=1 
     Launched reduce tasks=1 
     Data-local map tasks=1 
     Total time spent by all maps in occupied slots (ms)=2923 
     Total time spent by all reduces in occupied slots (ms)=2526 
     Total time spent by all map tasks (ms)=2923 
     Total time spent by all reduce tasks (ms)=2526 
     Total vcore-seconds taken by all map tasks=2923 
     Total vcore-seconds taken by all reduce tasks=2526 
     Total megabyte-seconds taken by all map tasks=2993152 
     Total megabyte-seconds taken by all reduce tasks=2586624 
    Map-Reduce Framework 
     Map input records=1 
     Map output records=0 
     Map output bytes=0 
     Map output materialized bytes=6 
     Input split bytes=116 
     Combine input records=0 
     Combine output records=0 
     Reduce input groups=0 
     Reduce shuffle bytes=6 
     Reduce input records=0 
     Reduce output records=0 
     Spilled Records=0 
     Shuffled Maps =1 
     Failed Shuffles=0 
     Merged Map outputs=1 
     GC time elapsed (ms)=166 
     CPU time spent (ms)=1620 
     Physical memory (bytes) snapshot=426713088 
     Virtual memory (bytes) snapshot=3818450944 
     Total committed heap usage (bytes)=324009984 
    Shuffle Errors 
     BAD_ID=0 
     CONNECTION=0 
     IO_ERROR=0 
     WRONG_LENGTH=0 
     WRONG_MAP=0 
     WRONG_REDUCE=0 
    File Input Format Counters 
     Bytes Read=54 
    File Output Format Counters 
     Bytes Written=86 
16/04/30 20:30:52 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 
16/04/30 20:30:52 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String). 
16/04/30 20:30:52 INFO input.FileInputFormat: Total input paths to process : 1 
16/04/30 20:30:52 INFO mapreduce.JobSubmitter: number of splits:1 
16/04/30 20:30:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1461971181442_0006 
16/04/30 20:30:52 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources. 
16/04/30 20:30:52 INFO impl.YarnClientImpl: Submitted application application_1461971181442_0006 
16/04/30 20:30:52 INFO mapreduce.Job: The url to track the job: http://yoni-Lenovo-Z40-70:8088/proxy/application_1461971181442_0006/ 
16/04/30 20:30:52 INFO mapreduce.Job: Running job: job_1461971181442_0006 
16/04/30 20:31:01 INFO mapreduce.Job: Job job_1461971181442_0006 running in uber mode : false 
16/04/30 20:31:01 INFO mapreduce.Job: map 0% reduce 0% 
16/04/30 20:31:07 INFO mapreduce.Job: map 100% reduce 0% 
16/04/30 20:31:12 INFO mapreduce.Job: map 100% reduce 100% 
16/04/30 20:31:13 INFO mapreduce.Job: Job job_1461971181442_0006 completed successfully 
16/04/30 20:31:13 INFO mapreduce.Job: Counters: 49 
    File System Counters 
     FILE: Number of bytes read=6 
     FILE: Number of bytes written=210495 
     FILE: Number of read operations=0 
     FILE: Number of large read operations=0 
     FILE: Number of write operations=0 
     HDFS: Number of bytes read=216 
     HDFS: Number of bytes written=0 
     HDFS: Number of read operations=7 
     HDFS: Number of large read operations=0 
     HDFS: Number of write operations=2 
    Job Counters 
     Launched map tasks=1 
     Launched reduce tasks=1 
     Data-local map tasks=1 
     Total time spent by all maps in occupied slots (ms)=3739 
     Total time spent by all reduces in occupied slots (ms)=3133 
     Total time spent by all map tasks (ms)=3739 
     Total time spent by all reduce tasks (ms)=3133 
     Total vcore-seconds taken by all map tasks=3739 
     Total vcore-seconds taken by all reduce tasks=3133 
     Total megabyte-seconds taken by all map tasks=3828736 
     Total megabyte-seconds taken by all reduce tasks=3208192 
    Map-Reduce Framework 
     Map input records=0 
     Map output records=0 
     Map output bytes=0 
     Map output materialized bytes=6 
     Input split bytes=130 
     Combine input records=0 
     Combine output records=0 
     Reduce input groups=0 
     Reduce shuffle bytes=6 
     Reduce input records=0 
     Reduce output records=0 
     Spilled Records=0 
     Shuffled Maps =1 
     Failed Shuffles=0 
     Merged Map outputs=1 
     GC time elapsed (ms)=125 
     CPU time spent (ms)=1010 
     Physical memory (bytes) snapshot=427823104 
     Virtual memory (bytes) snapshot=3819626496 
     Total committed heap usage (bytes)=324534272 
    Shuffle Errors 
     BAD_ID=0 
     CONNECTION=0 
     IO_ERROR=0 
     WRONG_LENGTH=0 
     WRONG_MAP=0 
     WRONG_REDUCE=0 
    File Input Format Counters 
     Bytes Read=86 
    File Output Format Counters 
     Bytes Written=0 

I am using the wordcount example that comes with the Hadoop installation:

hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep /user/yoni/input /user/yoni/output101 'dfs[a-z.]+'

The setup is in pseudo-distributed mode, as in all the basic tutorials.
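
For reference, the same examples jar also ships a wordcount program; a minimal sketch of running it against the same input (output102 is just a hypothetical fresh output directory, since Hadoop refuses to write into an existing one):

hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/yoni/input /user/yoni/output102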

I don't think 'grep /user/yoni/input /user/yoni/output101 'dfs[a-z.]+'' is a valid set of arguments for your jar. But if it were, and grep matched nothing, then, yeah, you would get an empty result. –

According to the counters, your job received a single input record ('Map input records=1') and found nothing matching the given pattern ('Map output records=0'). That is why you get an empty output ('Reduce output records=0'). '_SUCCESS' means that the Hadoop framework managed to complete your job, nothing more. The number of 'part-xxxxx' files corresponds to the number of reducers; each of them can be empty if the corresponding reducer produced no output records. – gudok
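
A quick sanity check for that point: cat the input from HDFS and run the same pattern through a local grep to see whether anything should match at all (assuming the input path used above):

bin/hdfs dfs -cat /user/yoni/input/* | grep -Eo 'dfs[a-z.]+'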

Answer

For this example you should put all the xml files under hadoop-2.6.4/etc/hadoop into an HDFS folder named 'input' in the home directory of the right user, i.e. 'yoni' here.
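
A minimal sketch of loading that input, run from the Hadoop installation directory (directory names as in the single-node tutorial; adjust to your layout):

bin/hdfs dfs -mkdir -p /user/yoni/input
bin/hdfs dfs -put etc/hadoop/*.xml /user/yoni/input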

First, check the status of your HDFS daemon processes by browsing http://localhost:50070 (the default).

Second, check the status of your files with bin/hdfs dfs -ls /user/yoni/input or bin/hdfs fsck / -files -blocks.

If everything goes well, it should work.
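
Once the job produces non-empty output, you can inspect the result directly from HDFS, for example:

bin/hdfs dfs -cat /user/yoni/output101/part-r-00000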

Hadoop MapReduce Next Generation - Setting up a Single Node Cluster