
Something wrong with Spark RDD

I have a strange Spark RDD and I'm not sure what's wrong or how to go about troubleshooting it. I'm running Spark 1.6.2 with Python 3.4.3 through an IPython notebook.

Here is how I pull the data in:

import json

from pyspark.sql import Row, SQLContext

# sc is the SparkContext provided by the notebook
sqlContext = SQLContext(sc)
hdfs = 'hdfs://192.168.1.213:54310/TEMP/*'

def make_Row(item):
    temp = {}
    if 'desc' in item:
        temp['desc'] = str(item['desc']).lower()
    if 'cat' in item:
        # parse_cats is a helper defined elsewhere
        temp['cat'] = parse_cats(str(item['cat']).lower())
    if 'cat' in temp and 'desc' in temp:
        return Row(**temp)
    # falls through and implicitly returns None when either key is missing

raw_data = (sc.wholeTextFiles(hdfs, 15)
            .map(lambda x: json.loads(x[1]))
            .flatMap(lambda x: x)
            .map(make_Row)
            .cache())
raw_data.count()

This gets me the count of the data.

The odd parts:

  1. raw_data.show() raises the exception below. I checked 'PipelinedRDD' object has no attribute 'toDF' in PySpark, but I do have sqlContext defined.

    AttributeError                            Traceback (most recent call last)
    in <module>()
    ----> 1 raw_data.show()

    AttributeError: 'PipelinedRDD' object has no attribute 'show'

  2. raw_data.toDF().show() works fine.

  3. raw_data.toDF().columns shows ['cat', 'desc'], but raw_data.toDF().describe('cat').show() produces the following:

    Py4JJavaError                            Traceback (most recent call last) 
        <ipython-input-110-45e3ce9b4e4b> in <module>() 
        ----> 1 raw_data.toDF().describe('cat').show() 
    
        /apps/spark/python/pyspark/sql/dataframe.py in describe(self, *cols) 
         772   if len(cols) == 1 and isinstance(cols[0], list): 
         773    cols = cols[0] 
        --> 774   jdf = self._jdf.describe(self._jseq(cols)) 
         775   return DataFrame(jdf, self.sql_ctx) 
         776 
    
        /apps/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args) 
         811   answer = self.gateway_client.send_command(command) 
         812   return_value = get_return_value(
        --> 813    answer, self.gateway_client, self.target_id, self.name) 
         814 
         815   for temp_arg in temp_args: 
    
        /apps/spark/python/pyspark/sql/utils.py in deco(*a, **kw) 
         43  def deco(*a, **kw): 
         44   try: 
        ---> 45    return f(*a, **kw) 
         46   except py4j.protocol.Py4JJavaError as e: 
         47    s = e.java_exception.toString() 
    
        /apps/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 
         306     raise Py4JJavaError(
         307      "An error occurred while calling {0}{1}{2}.\n". 
        --> 308      format(target_id, ".", name), value) 
         309    else: 
         310     raise Py4JError(
    
        Py4JJavaError: An error occurred while calling o1689.describe. 
        : org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 68.0 failed 4 times, most recent failure: Lost task 2.3 in stage 68.0 (TID 352, solrmain): java.lang.NullPointerException 
         at org.apache.spark.api.python.SerDeUtil$$anonfun$toJavaArray$1.apply(SerDeUtil.scala:102) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.processCurrentSortedGroup(SortBasedAggregationIterator.scala:122) 
         at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:152) 
         at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:29) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:149) 
         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
         at org.apache.spark.scheduler.Task.run(Task.scala:89) 
         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) 
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
         at java.lang.Thread.run(Thread.java:745) 
    
        Driver stacktrace: 
         at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431) 
         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419) 
         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418) 
         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
         at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418) 
         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799) 
         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799) 
         at scala.Option.foreach(Option.scala:236) 
         at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799) 
         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640) 
         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599) 
         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588) 
         at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
         at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620) 
         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832) 
         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845) 
         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858) 
         at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212) 
         at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165) 
         at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174) 
         at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499) 
         at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499) 
         at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56) 
         at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086) 
         at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498) 
         at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505) 
         at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375) 
         at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374) 
         at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099) 
         at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374) 
         at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1383) 
         at org.apache.spark.sql.DataFrame$$anonfun$describe$1.apply(DataFrame.scala:1352) 
         at org.apache.spark.sql.DataFrame$$anonfun$describe$1.apply(DataFrame.scala:1335) 
         at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$withPlan(DataFrame.scala:2126) 
         at org.apache.spark.sql.DataFrame.describe(DataFrame.scala:1335) 
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
         at java.lang.reflect.Method.invoke(Method.java:498) 
         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) 
         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) 
         at py4j.Gateway.invoke(Gateway.java:259) 
         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) 
         at py4j.commands.CallCommand.execute(CallCommand.java:79) 
         at py4j.GatewayConnection.run(GatewayConnection.java:209) 
        at java.lang.Thread.run(Thread.java:745) 
    Caused by: java.lang.NullPointerException 
         at org.apache.spark.api.python.SerDeUtil$$anonfun$toJavaArray$1.apply(SerDeUtil.scala:102) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.processCurrentSortedGroup(SortBasedAggregationIterator.scala:122) 
         at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:152) 
         at org.apache.spark.sql.execution.aggregate.SortBasedAggregationIterator.next(SortBasedAggregationIterator.scala:29) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
         at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:149) 
         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
         at org.apache.spark.scheduler.Task.run(Task.scala:89) 
         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) 
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
         ... 1 more 
    

Thanks in advance.

Answers


The RDD API has no show() method, hence the error in #1. You need to either convert to a DataFrame (as you do in #2) or use something like:

raw_data.take(5) 

to get a list of your Row objects.
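
For example, a quick way to eyeball the first few records (a minimal sketch, assuming raw_data from the question):

    # take(5) returns a plain Python list of Row objects to the driver
    for row in raw_data.take(5):
        print(row)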

As for #3, the toDF method can be a little finicky depending on the distribution you're using. Try the following:

df = sqlContext.createDataFrame(raw_data) 

and then run your other operations.
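
For instance (a sketch, assuming the conversion succeeds; printSchema is a cheap way to confirm the inferred columns before running anything heavier):

    df = sqlContext.createDataFrame(raw_data)
    df.printSchema()          # should list 'cat' and 'desc'
    df.describe('cat').show()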


Number 1 has already been answered: RDD has no show() method.

Number 3: try raw_data.toDF().describe(['cat']).show()
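
If that still fails with the same error, note that make_Row in the question implicitly returns None for items missing either key; None values in the RDD are a plausible cause of the java.lang.NullPointerException raised in SerDeUtil above. A minimal sketch that drops them before converting (a guess at the cause, not a confirmed diagnosis):

    # filter out the None entries that make_Row produces for incomplete items
    clean = raw_data.filter(lambda row: row is not None)
    clean.toDF().describe('cat').show()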