2023-02-26
Solution reference:
Problem description: with Spark and Hive deployed via Ambari, executing insert into table xxx partition(dt='xxx') select xxx from xxx where dt='xxx' in Spark SQL fails with the following error:
org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/.hive-staging_hive_2020-12-22_07-37-14_526_202796727754164477-1/-ext-10000 to destination hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/dt=20201220;
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:248)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
... 49 elided
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/.hive-staging_hive_2020-12-22_07-37-14_526_202796727754164477-1/-ext-10000 to destination hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/dt=20201220
at org.apache.hadoop.hive.ql.metadata.Hive.getHiveException(Hive.java:4303)
at org.apache.hadoop.hive.ql.metadata.Hive.getHiveException(Hive.java:4258)
at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:4253)
at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:4620)
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:2132)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.sql.hive.client.Shim_v3_0.loadPartition(HiveShim.scala:1275)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
... 63 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: Load Data failed for hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/.hive-staging_hive_2020-12-22_07-37-14_526_202796727754164477-1/-ext-10000 as the file is not owned by hive and load data is also not ran as hive
at org.apache.hadoop.hive.ql.metadata.Hive.needToCopy(Hive.java:4347)
at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:4187)
... 82 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Load Data failed for hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/.hive-staging_hive_2020-12-22_07-37-14_526_202796727754164477-1/-ext-10000 as the file is not owned by hive and load data is also not ran as hive
at org.apache.hadoop.hive.ql.metadata.Hive.needToCopy(Hive.java:4338)
... 83 more
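The key line is the innermost cause: "Load Data failed ... as the file is not owned by hive and load data is also not ran as hive". On HDP 3.x, Hive managed tables are ACID tables whose files are expected to be owned by the hive user; when a Spark job writes the staging directory as a different user, Hive's moveFile check refuses to rename it into the partition. As a sketch (paths taken from the trace above; assumes you have HDFS shell access on the cluster), the ownership mismatch can be confirmed with:

```
# Compare owners: the .hive-staging dir (written as the Spark job's user)
# vs. the managed-table partition dirs (expected owner: hive)
hdfs dfs -ls /warehouse/tablespace/managed/hive/ford.db/s_leads/
```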
Solution:
Set metastore.catalog.default to hive, then restart Spark2:
<property>
  <name>metastore.catalog.default</name>
  <value>hive</value>
</property>
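If a full Spark2 restart is inconvenient, Spark can also forward Hadoop/Hive properties on a per-session basis via its spark.hadoop.* conf prefix. This is a hypothetical alternative (not part of the original fix, and the passthrough for this particular property is an assumption), using the same property name as above:

```
# Hypothetical per-session override; property name is the one from the fix above
spark-sql --conf spark.hadoop.metastore.catalog.default=hive
```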
Original link: https://77isp.com/post/34526.html