
RDD transformations and actions can only be invoked by the driver


Question

Error:

org.apache.spark.SparkException: RDD transformations and actions can only be invoked by the driver, not inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
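The exception means exactly what it says: the closure passed to `rdd1.map` runs on the executors, and an executor cannot launch Spark jobs of its own (a count, a collect, or any call into another RDD). A minimal sketch of the pattern and the usual workaround, using the hypothetical `rdd1`/`rdd2` names from the message above:

```scala
// Invalid: rdd2.values.count() would have to launch a job from inside
// a task running on an executor -- Spark rejects this with SPARK-5063.
// rdd1.map(x => rdd2.values.count() * x)

// Valid: run the action on the driver first, then capture the plain value.
val total = rdd2.values.count()   // action executed on the driver
rdd1.map(x => total * x)          // closure only captures a Long
```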

def computeRatio(model: MatrixFactorizationModel, test_data: org.apache.spark.rdd.RDD[Rating]): Double = {
  val numDistinctUsers = test_data.map(x => x.user).distinct().count()
  val userRecs: RDD[(Int, Set[Int], Set[Int])] = test_data.groupBy(testUser => testUser.user).map(u => {
    // recommendProducts is called inside a transformation -- this triggers SPARK-5063
    (u._1, u._2.map(p => p.product).toSet, model.recommendProducts(u._1, 20).map(prec => prec.product).toSet)
  })
  val hitsAndMiss: RDD[(Int, Double)] = userRecs.map(x => (x._1, x._2.intersect(x._3).size.toDouble))

  val hits = hitsAndMiss.map(x => x._2).sum() / numDistinctUsers

  hits
}

I am using the method in MatrixFactorizationModel.scala; I have to map over the users and then call the method to get the results for each user. Doing that introduces nested mapping, which I believe causes the issue:

I know the issue actually takes place at:

val userRecs: RDD[(Int, Set[Int], Set[Int])] = test_data.groupBy(testUser => testUser.user).map(u => {
  (u._1, u._2.map(p => p.product).toSet, model.recommendProducts(u._1, 20).map(prec => prec.product).toSet)
})

because while mapping over the users I am calling model.recommendProducts.

Accepted Answer

#1

MatrixFactorizationModel is a distributed model so you cannot simply call it from an action or a transformation. The closest thing to what you do here is something like this:

import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.recommendation.{MatrixFactorizationModel, Rating}

def computeRatio(model: MatrixFactorizationModel, testUsers: RDD[Rating]): Double = {
  // project to (user, product) first, then group -- only the ids are shuffled
  val testData = testUsers.map(r => (r.user, r.product)).groupByKey
  val n = testData.count

  // recommendProductsForUsers runs as an ordinary distributed job
  // launched from the driver, so no nested RDD access occurs
  val recommendations = model
    .recommendProductsForUsers(20)
    .mapValues(_.map(r => r.product))

  // join the test products with the recommended products per user and count hits
  val hits = testData
    .join(recommendations)
    .values
    .map { case (xs, ys) => xs.toSet.intersect(ys.toSet).size }
    .sum

  hits / n
}
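The key difference is that `recommendProductsForUsers(20)` is invoked on the driver and produces an `RDD[(Int, Array[Rating])]` that can then be joined against the test data, so no RDD is ever touched from inside another closure. A hypothetical call site, assuming `trainingRatings` and `testData` are `RDD[Rating]`s loaded elsewhere:

```scala
import org.apache.spark.mllib.recommendation.ALS

// hypothetical training run; rank, iterations, and lambda are placeholder values
val model = ALS.train(trainingRatings, 10, 10, 0.01)
val ratio = computeRatio(model, testData)
println(s"hit ratio per user: $ratio")
```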

Notes:

  • distinct is an expensive operation and completely unnecessary here, since you can obtain the same information from the grouped data.
  • Instead of groupBy followed by a projection (map), project first and group later. There is no reason to transfer full ratings if you only want the product ids.
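The second note amounts to swapping the order of projection and grouping; a sketch with a hypothetical `ratings: RDD[Rating]` (both expressions yield an `RDD[(Int, Iterable[Int])]`):

```scala
// groupBy-then-project: whole Rating objects cross the shuffle
val grouped1 = ratings.groupBy(_.user).mapValues(_.map(_.product))

// project-then-group: only (user, product) pairs are shuffled
val grouped2 = ratings.map(r => (r.user, r.product)).groupByKey
```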

This article is reprinted from: 编程之路
