System cannot find the path specified. #399
Same errors as me. Thanks.
I wasn't able to reproduce this. Do you by any chance have any accented / non-ASCII characters in your username? Do you have an antivirus / firewall that could be blocking connections on port 8880?
We are checking the firewall problem; will let you know about that, thanks.
Other users had the same problem, i.e. changing their Java and Spark versions fixed it for them, but not for me. Strange.
Username: grbortz. Nothing strange there.
Really need to get this resolved!
Thanks very much,
Graham
Hi Kevin,
Early on today I did manage to force a connection by going into aeroplane mode (switching off company access to their LAN), and then midway through the tutorial the connection dropped.
But now I have a different problem: I can force the connection up again, but when I run the command iris_tbl <- copy_to(sc, iris) I get the following error (summary only, as it's extensive):
iris_tbl <- copy_to(sc, iris)
Error: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
and then at the bottom it states:
Caused by: java.lang.RuntimeException: The root scratch dir: C:/Users/grbortz/AppData/Local/rstudio/spark/Cache/spark-2.0.2-bin-hadoop2.7/tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:612)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
... 47 more
See the permissions: who must have them, my user or any user, and how do I create them?
How do I set "writable"? There is no such permission as "writable"; it is read or write.
My username has all the permissions.
Does this mean the attributes must not be "read only"? When I try to change the attributes for that directory (tmp) to blank, it changes back when I look at it again.
Not sure what to do now.
Graham
I do have a space in my username, but it is an ASCII character. I'll look into my antivirus/firewall and report back.
This is what happened now from home, with no company wifi network to interfere:
library(sparklyr)
spark_install(version = "2.0.2")
Spark 2.0.2 for Hadoop 2.7 or later already installed.
Sys.getenv('JAVA_HOME')
[1] "C:\\Program Files\\Java\\jre1.8.0_112"
Sys.getenv('SPARK_HOME')
[1] "C:\\Users\\grbortz\\AppData\\Local\\rstudio\\spark\\Cache\\spark-2.0.2-bin-hadoop2.7"
Sys.getenv("HADOOP_HOME")
[1] "C:\\Users\\grbortz\\AppData\\Local\\rstudio\\spark\\Cache\\spark-2.0.2-bin-hadoop2.7\\tmp\\hadoop"
sc <- spark_connect(master = "local")
Error in file(con, "r") : cannot open the connection
In addition: Warning message:
In file(con, "r") :
cannot open file 'C:\Users\grbortz\AppData\Local\Temp\Rtmp6zaoRi\file214834723dcf_spark.log': Permission denied
I tried it again:
sc <- spark_connect(master = "local")
Error in force(code) :
Failed while connecting to sparklyr to port (8880) for sessionid (6885): Gateway in port (8880) did not respond.
Path: C:\Users\grbortz\AppData\Local\rstudio\spark\Cache\spark-2.0.2-bin-hadoop2.7\bin\spark-submit2.cmd
Parameters: --class, sparklyr.Backend, "C:\Users\grbortz\Documents\R\win-library\3.2\sparklyr\java\sparklyr-2.0-2.11.jar", 8880, 6885
---- Output Log ----
17/01/04 21:13:38 INFO sparklyr: Session (6885) starting
17/01/04 21:13:38 INFO sparklyr: Registering session (6885) into gateway port (8880)
17/01/04 21:13:59 ERROR sparklyr: Server shutting down: failed with exception, java.net.ConnectException: Connection timed out: connect
I think there is something major going on here. I tried to run SparkR and it came back saying the Java Virtual Machine won't run.
Please advise.
Thanks
I've seen these permissions as well on a Windows VM, but only intermittently, and I wasn't yet able to ascertain a root cause. :/ I wonder if the issue is that multiple threads are attempting to read / write this file at the same time? I'm also not quite sure what to make of the writable permissions error:
Caused by: java.lang.RuntimeException: The root scratch dir: C:/Users/grbortz/AppData/Local/rstudio/spark/Cache/spark-2.0.2-bin-hadoop2.7/tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
Perhaps the directory needs executable permissions as well? (I'm not exactly sure how this translates into the Windows permissions model yet, though.) What's the output of file.info("C:\\Users\\grbortz\\AppData\\Local\\rstudio\\spark\\Cache\\spark-2.0.2-bin-hadoop2.7\\tmp\\hadoop")?
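[Editor's note] For readers who land here with the same "root scratch dir ... should be writable" error: a workaround commonly reported for Spark-with-Hive on Windows is to grant the Hive scratch directory full permissions using Hadoop's winutils.exe. This is a hedged sketch only; it assumes you have a winutils.exe built for your Hadoop version, and both paths below are examples taken from this thread, not fixed locations:

```r
# Hedged workaround sketch: Hive refuses to start when its scratch dir
# lacks write/execute permissions. On Windows this is often addressed by
# running winutils.exe chmod on that directory.
hive_tmp <- "C:/Users/grbortz/AppData/Local/rstudio/spark/Cache/spark-2.0.2-bin-hadoop2.7/tmp/hive"
winutils <- "C:/hadoop/bin/winutils.exe"  # hypothetical location of winutils.exe

# Equivalent to running `winutils.exe chmod 777 <hive_tmp>` in a cmd prompt.
system2(winutils, args = c("chmod", "777", hive_tmp))
```

After changing the permissions, restart R and reconnect, since the Hive client caches its session state.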
There is no file.info file there unless it's hidden; there is only a bin directory.
Graham
I managed to establish a connection on a new machine. This machine has no antivirus.
How do I check the firewall settings on this PC?
i.e. sc <- spark_connect(master = "local")
Re-using existing Spark connection to local
But when I run the tbl command I get the following. Not sure what this is; don't know if this will ever come right...
iris_tbl <- copy_to(sc, iris)
Error: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46)
at org.apache.spark.sql.hive.HiveSharedState.externalCatalog(HiveSharedState.scala:45)
at org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:50)
at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48)
at org.apache.spark.sql.hive.HiveSessionState$$anon$1.<init>(HiveSessionState.scala:63)
at org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
at org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sparklyr.Invoke$.invoke(invoke.scala:94)
at sparklyr.StreamHandler$.handleMethodCall(stream.scala:89)
at sparklyr.StreamHandler$.read(stream.scala:55)
at sparklyr.BackendHandler.channelRead0(handler.scala:49)
at sparklyr.BackendHandler.channelRead0(handler.scala:14)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: C:/Users/bortz_g/AppData/Local/rstudio/spark/Cache/spark-2.0.2-bin-hadoop2.7/tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:189)
... 46 more
Caused by: java.lang.RuntimeException: The root scratch dir: C:/Users/bortz_g/AppData/Local/rstudio/spark/Cache/spark-2.0.2-bin-hadoop2.7/tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:612)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
... 47 more
I was able to connect by moving my Spark install to a directory, c:\spark. I'm thinking the error was attributable to a space within my user name that affected my Spark paths. Let me know if I should go ahead and close this issue.
Did you actually install it again through Spark, or did you redirect through environment variables? Are your Temp directories in the new Spark path or the old?
What is happening with me is that the antivirus program is protecting the default Spark path, i.e. in the AppData path. Even when I redirected RStudio to the new Spark path by resetting the environment variables, it read the new Spark path when initiating the .cmd files, BUT it still placed files in the default Temp path, i.e. a Temp folder in the AppData path.
This is why I ask whether there is a difference when you install from Spark: do you get the option of choosing the directory where you want to install it, or do you have to specify this as an option in the command used to install Spark? If so, what is the format of the command?
And if I have already downloaded the Spark .tar or .tgz file (which one must I use?), how can I direct RStudio to install from that folder rather than re-download from the web?
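[Editor's note] For readers with the same questions, here is a hedged sketch of the sparklyr options involved. The paths below are examples only, and the spark.install.dir option and spark_install_tar() helper may depend on your sparklyr version, so check your installed documentation:

```r
library(sparklyr)

# Option 1: point sparklyr at a custom install directory before installing.
# (spark_install_dir() consults this option in recent sparklyr versions.)
options(spark.install.dir = "C:/spark")
spark_install(version = "2.0.2")

# Option 2: install from a locally downloaded archive (use the .tgz build
# from the Spark downloads page) instead of re-downloading from the web:
spark_install_tar(tarfile = "C:/Downloads/spark-2.0.2-bin-hadoop2.7.tgz")

# Option 3: if Spark is already unpacked somewhere, connect to it directly
# by passing spark_home rather than relying on the cached install:
sc <- spark_connect(master = "local",
                    spark_home = "C:/spark/spark-2.0.2-bin-hadoop2.7")
```

Note that none of these change where R itself writes session temp files; that is governed separately by R's temp-directory environment variables.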
I moved the Spark install from the AppData path to a folder right under my C: drive, updated the path variables, and deleted the old Spark versions, and everything worked.
Did exactly as you said and got the error below. How do I direct Spark away from using the Temp directory in that path, i.e. direct it to use the same/similar path but NOT in AppData?
Error in file(con, "r") : cannot open the connection
In addition: Warning message:
In file(con, "r") :
cannot open file 'C:\Users\grbortz\AppData\Local\Temp\RtmpsjTEIm\file32a43d455ade_spark.log': Permission denied
I'm getting the following error using both v0.5.1 from CRAN and v0.5.2 from GitHub. Any help will be appreciated.
Thanks.
Error in force(code) :
Failed while connecting to sparklyr to port (8880) for sessionid (3794): Gateway in port (8880) did not respond.
Path: C:\Users\xxxx xxxx\AppData\Local\rstudio\spark\Cache\spark-1.6.2-bin-hadoop2.6\bin\spark-submit2.cmd
Parameters: --class, sparklyr.Backend, --jars, "C:/Users/xxxx xxxx/Documents/R/win-library/3.3/sparklyr/java/spark-csv_2.11-1.3.0.jar","C:/Users/xxxx xxxx/Documents/R/win-library/3.3/sparklyr/java/commons-csv-1.1.jar","C:/Users/xxxx xxxx/Documents/R/win-library/3.3/sparklyr/java/univocity-parsers-1.5.1.jar", "C:\Users\xxxx xxxx\Documents\R\win-library\3.3\sparklyr\java\sparklyr-1.6-2.10.jar", 8880, 3794
---- Output Log ----
The system cannot find the path specified.
---- Error Log ----