Fwd: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2

Shivaram Venkataraman
The upcoming 2.2.2 release was submitted to CRAN. I think there are
some known issues on Windows, but does anybody know what the following
error with Netty is?

>     WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address

Thanks
Shivaram
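
For context: this is the standard JDK 9+ warning about reflective access to
JDK internals. The Netty bundled with Spark 2.2 (netty-all 4.0.43.Final)
reads the private field java.nio.Buffer.address via reflection, and a
Java 9+ JVM (which the Debian check machine appears to run) reports that.
It is a warning, not the test failure itself. A minimal sketch of how one
might silence it on Java 9+, assuming the usual --add-opens route works for
this Netty version (untested here):

    # Sketch only, assumes a Java 9+ JVM; on Java 8 the --add-opens flag
    # is not recognized and must be omitted.
    library(SparkR)
    sparkR.session(
      master = "local[1]",
      sparkConfig = list(
        spark.driver.extraJavaOptions =
          "--add-opens=java.base/java.nio=ALL-UNNAMED"
      )
    )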


---------- Forwarded message ---------
From: <[hidden email]>
Date: Mon, Jul 9, 2018 at 12:12 PM
Subject: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2
To: <[hidden email]>
Cc: <[hidden email]>


Dear maintainer,

package SparkR_2.2.2.tar.gz does not pass the incoming checks
automatically, please see the following pre-tests:
Windows: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/Windows/00check.log>
Status: 1 ERROR, 1 WARNING
Debian: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/Debian/00check.log>
Status: 1 ERROR, 2 WARNINGs

Last released version's CRAN status: ERROR: 1, OK: 1
See: <https://CRAN.R-project.org/web/checks/check_results_SparkR.html>

CRAN Web: <https://cran.r-project.org/package=SparkR>

Please fix all problems and resubmit a fixed version via the webform.
If you are not sure how to fix the problems shown, please ask for help
on the R-package-devel mailing list:
<https://stat.ethz.ch/mailman/listinfo/r-package-devel>
If you are fairly certain the rejection is a false positive, please
reply-all to this message and explain.

More details are given in the directory:
<https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/>
The files will be removed after roughly 7 days.

No strong reverse dependencies to be checked.

Best regards,
CRAN teams' auto-check service
Flavor: r-devel-linux-x86_64-debian-gcc, r-devel-windows-ix86+x86_64
Check: CRAN incoming feasibility, Result: WARNING
  Maintainer: 'Shivaram Venkataraman <[hidden email]>'

  New submission

  Package was archived on CRAN

  Insufficient package version (submitted: 2.2.2, existing: 2.3.0)

  Possibly mis-spelled words in DESCRIPTION:
    Frontend (4:10, 5:28)

  CRAN repository db overrides:
    X-CRAN-Comment: Archived on 2018-05-01 as check problems were not
      corrected despite reminders.

  Found the following (possibly) invalid URLs:
    URL: http://spark.apache.org/docs/latest/api/R/mean.html
      From: inst/doc/sparkr-vignettes.html
      Status: 404
      Message: Not Found

Flavor: r-devel-windows-ix86+x86_64
Check: running tests for arch 'x64', Result: ERROR
    Running 'run-all.R' [175s]
  Running the tests in 'tests/run-all.R' failed.
  Complete output:
    > #
    > # Licensed to the Apache Software Foundation (ASF) under one or more
    > # contributor license agreements.  See the NOTICE file distributed with
    > # this work for additional information regarding copyright ownership.
    > # The ASF licenses this file to You under the Apache License, Version 2.0
    > # (the "License"); you may not use this file except in compliance with
    > # the License.  You may obtain a copy of the License at
    > #
    > #    http://www.apache.org/licenses/LICENSE-2.0
    > #
    > # Unless required by applicable law or agreed to in writing, software
    > # distributed under the License is distributed on an "AS IS" BASIS,
    > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    > # See the License for the specific language governing permissions and
    > # limitations under the License.
    > #
    >
    > library(testthat)
    > library(SparkR)

    Attaching package: 'SparkR'

    The following object is masked from 'package:testthat':

        describe

    The following objects are masked from 'package:stats':

        cov, filter, lag, na.omit, predict, sd, var, window

    The following objects are masked from 'package:base':

        as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
        rank, rbind, sample, startsWith, subset, summary, transform, union

    >
    > # Turn all warnings into errors
    > options("warn" = 2)
    >
    > if (.Platform$OS.type == "windows") {
    +   Sys.setenv(TZ = "GMT")
    + }
    >
    > # Setup global test environment
    > # Install Spark first to set SPARK_HOME
    >
    > # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
    > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
    > install.spark(overwrite = TRUE)
    Overwrite = TRUE: download and overwrite the tar file and Spark package directory if they exist.
    Spark not found in the cache directory. Installation will start.
    MirrorUrl not provided.
    Looking for preferred site from apache website...
    Preferred mirror site found: http://mirror.dkd.de/apache/spark
    Downloading spark-2.2.2 for Hadoop 2.7 from:
    - http://mirror.dkd.de/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
    trying URL 'http://mirror.dkd.de/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz'
    Content type 'application/x-gzip' length 200743115 bytes (191.4 MB)
    ==================================================
    downloaded 191.4 MB

    Installing to C:\Users\ligges\AppData\Local\Apache\Spark\Cache
    DONE.
    SPARK_HOME set to C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7
    >
    > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
    > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
    > invisible(lapply(sparkRWhitelistSQLDirs,
    +                  function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
    > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
    >
    > sparkRTestMaster <- "local[1]"
    > sparkRTestConfig <- list()
    > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
    +   sparkRTestMaster <- ""
    + } else {
    +   # Disable hsperfdata on CRAN
    +   old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
    +   Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
    +   tmpDir <- tempdir()
    +   tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
    +   sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
    +                            spark.executor.extraJavaOptions = tmpArg)
    + }
    >
    > test_package("SparkR")
    Launching java with spark-submit command C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd --driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj" sparkr-shell D:\temp\RtmpABZLQj\backend_port16d0838283f7e
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    -- 1. Error: create DataFrame from list or data.frame (@test_basic.R#21)  ------
    cannot open the connection
    1: sparkR.session(master = sparkRTestMaster, enableHiveSupport = FALSE, sparkConfig = sparkRTestConfig) at D:/temp/Rtmp8IKu99/RLIBS_77d8215b7bce/SparkR/tests/testthat/test_basic.R:21
    2: sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, sparkExecutorEnvMap, sparkJars, sparkPackages)
    3: file(path, open = "rb")

    Launching java with spark-submit command C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd --driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj" sparkr-shell D:\temp\RtmpABZLQj\backend_port16d085df97d88
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    18/07/09 18:10:43 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
    java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
        at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
        at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
        at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2427)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:295)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
        at org.apache.spark.api.r.RRDD$.createSparkContext(RRDD.scala:139)
        at org.apache.spark.api.r.RRDD.createSparkContext(RRDD.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.lang.Thread.run(Thread.java:748)
    18/07/09 18:10:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    18/07/09 18:10:54 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:10:55 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
    18/07/09 18:10:55 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
    18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
    18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
    18/07/09 18:11:12 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:14 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:15 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:17 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:18 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:19 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:21 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:22 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:23 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:25 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:26 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:28 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:29 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:46 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:47 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:49 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    18/07/09 18:11:50 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
    == testthat results ===========================================================
    OK: 6 SKIPPED: 0 FAILED: 1
    1. Error: create DataFrame from list or data.frame (@test_basic.R#21)

    Error: testthat unit tests failed
    Execution halted
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
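
The ERROR Shell / winutils trace above is the long-standing Hadoop-on-Windows issue: Hadoop's Shell class looks for winutils.exe under %HADOOP_HOME%\bin, and HADOOP_HOME is unset on the win-builder machine, hence "null\bin\winutils.exe". Whether it also explains the first failure ("cannot open the connection", i.e. the JVM backend never wrote its port file) is not clear from this log, since the later launch got as far as running 6 tests. For a local Windows machine, a hedged workaround sketch, with C:/hadoop as a purely hypothetical location for a downloaded winutils.exe:

    # Sketch only: point Hadoop at a local winutils.exe before starting
    # SparkR. The C:/hadoop path is a hypothetical example, not something
    # present on the CRAN check machine.
    Sys.setenv(HADOOP_HOME = "C:/hadoop")  # expects C:/hadoop/bin/winutils.exe
    library(SparkR)
    sparkR.session(master = "local[1]", enableHiveSupport = FALSE)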

Flavor: r-devel-linux-x86_64-debian-gcc
Check: tests, Result: ERROR
    Running 'run-all.R' [6s/15s]
  Running the tests in 'tests/run-all.R' failed.
  Complete output:
    > #
    > # Licensed to the Apache Software Foundation (ASF) under one or more
    > # contributor license agreements.  See the NOTICE file distributed with
    > # this work for additional information regarding copyright ownership.
    > # The ASF licenses this file to You under the Apache License, Version 2.0
    > # (the "License"); you may not use this file except in compliance with
    > # the License.  You may obtain a copy of the License at
    > #
    > #    http://www.apache.org/licenses/LICENSE-2.0
    > #
    > # Unless required by applicable law or agreed to in writing, software
    > # distributed under the License is distributed on an "AS IS" BASIS,
    > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    > # See the License for the specific language governing permissions and
    > # limitations under the License.
    > #
    >
    > library(testthat)
    > library(SparkR)

    Attaching package: 'SparkR'

    The following object is masked from 'package:testthat':

        describe

    The following objects are masked from 'package:stats':

        cov, filter, lag, na.omit, predict, sd, var, window

    The following objects are masked from 'package:base':

        as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
        rank, rbind, sample, startsWith, subset, summary, transform, union

    >
    > # Turn all warnings into errors
    > options("warn" = 2)
    >
    > if (.Platform$OS.type == "windows") {
    +   Sys.setenv(TZ = "GMT")
    + }
    >
    > # Setup global test environment
    > # Install Spark first to set SPARK_HOME
    >
    > # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
    > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
    > install.spark(overwrite = TRUE)
    Overwrite = TRUE: download and overwrite the tar file and Spark package directory if they exist.
    Spark not found in the cache directory. Installation will start.
    MirrorUrl not provided.
    Looking for preferred site from apache website...
    Preferred mirror site found: http://mirror.klaus-uwe.me/apache/spark
    Downloading spark-2.2.2 for Hadoop 2.7 from:
    - http://mirror.klaus-uwe.me/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
    trying URL 'http://mirror.klaus-uwe.me/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz'
    Content type 'application/octet-stream' length 200743115 bytes (191.4 MB)
    ==================================================
    downloaded 191.4 MB

    Installing to /home/hornik/.cache/spark
    DONE.
    SPARK_HOME set to /home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7
    >
    > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
    > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
    > invisible(lapply(sparkRWhitelistSQLDirs,
    +                  function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
    > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
    >
    > sparkRTestMaster <- "local[1]"
    > sparkRTestConfig <- list()
    > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
    +   sparkRTestMaster <- ""
    + } else {
    +   # Disable hsperfdata on CRAN
    +   old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
    +   Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
    +   tmpDir <- tempdir()
    +   tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
    +   sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
    +                            spark.executor.extraJavaOptions = tmpArg)
    + }
    >
    > test_package("SparkR")
    Launching java with spark-submit command /home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/bin/spark-submit --driver-java-options "-Djava.io.tmpdir=/tmp/Rtmpkd8Lf6" sparkr-shell /tmp/Rtmpkd8Lf6/backend_port289f65a5f5e0
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address
    WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.PlatformDependent0$1
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    18/07/09 17:58:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    18/07/09 17:58:54 ERROR RBackendHandler: count on 13 failed
    java.lang.reflect.InvocationTargetException
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.base/java.lang.Thread.run(Thread.java:844)
    Caused by: java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
        at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
        at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
        ... 36 more
    ── 1. Error: create DataFrame from list or data.frame (@test_basic.R#26)  ──────
    java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
        at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
        at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.base/java.lang.Thread.run(Thread.java:844)
    1: expect_equal(count(df), i) at /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:26
    2: quasi_label(enquo(object), label)
    3: eval_bare(get_expr(quo), get_env(quo))
    4: count(df)
    5: count(df)
    6: callJMethod(x@sdf, "count")
    7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
    8: handleErrors(returnStatus, conn)
    9: stop(readString(conn))

    18/07/09 17:58:54 ERROR RBackendHandler: fit on org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
    java.lang.reflect.InvocationTargetException
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.base/java.lang.Thread.run(Thread.java:844)
    Caused by: java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
        at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
        at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
        at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
        ... 36 more
    ── 2. Error: spark.glm and predict (@test_basic.R#58) ─────────────────────────
    java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
        at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
        at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
        at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.base/java.lang.Thread.run(Thread.java:844)
    1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:58
    2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
    3: .local(data, formula, ...)
    4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula, data@sdf, tolower(family$family), family$link, tol, as.integer(maxIter), weightCol, regParam, as.double(var.power), as.double(link.power))
    5: invokeJava(isStatic = TRUE, className, methodName, ...)
    6: handleErrors(returnStatus, conn)
    7: stop(readString(conn))

    ══ testthat results ═══════════════════════════════════════════════════════════
    OK: 0 SKIPPED: 0 FAILED: 2
    1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
    2. Error: spark.glm and predict (@test_basic.R#58)

    Error: testthat unit tests failed
    Execution halted
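
All of the Debian failures bottom out in java.lang.IllegalArgumentException thrown by org.apache.xbean.asm5.ClassReader.<init>, which is what ASM 5 does when handed class files newer than it can parse. That, plus the java.base/jdk.internal.reflect frames in the traces, suggests the Debian check machine runs Java 9 or newer, while Spark 2.2.x only supports Java 8. A hedged guard sketch that a test script could run before starting a session (the matched version string is an assumption about 'java -version' output, which varies across vendors):

    # Sketch only: refuse to start Spark 2.2.x on a non-Java-8 JVM.
    # 'java -version' prints to stderr, so capture both streams.
    jv <- system2("java", "-version", stdout = TRUE, stderr = TRUE)[1]
    if (!grepl('version "1\\.8', jv)) {
      stop("Spark 2.2.x requires Java 8; found: ", jv)
    }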

Flavor: r-devel-linux-x86_64-debian-gcc
Check: re-building of vignette outputs, Result: WARNING
  Error in re-building vignettes:
    ...

  Attaching package: 'SparkR'

  The following objects are masked from 'package:stats':

      cov, filter, lag, na.omit, predict, sd, var, window

  The following objects are masked from 'package:base':

      as.data.frame, colnames, colnames<-, drop, endsWith,
      intersect, rank, rbind, sample, startsWith, subset, summary,
      transform, union

  Picked up _JAVA_OPTIONS: -XX:-UsePerfData
  Picked up _JAVA_OPTIONS: -XX:-UsePerfData
  Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
  Setting default log level to "WARN".
  To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
  WARNING: An illegal reflective access operation has occurred
  WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address
  WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.PlatformDependent0$1
  WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
  WARNING: All illegal access operations will be denied in a future release
  18/07/09 17:58:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  18/07/09 17:59:07 ERROR RBackendHandler: dfToCols on org.apache.spark.sql.api.r.SQLUtils failed
  java.lang.reflect.InvocationTargetException
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.base/java.lang.Thread.run(Thread.java:844)
  Caused by: java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2865)
        at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2391)
        at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2391)
        at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
        at org.apache.spark.sql.Dataset.collect(Dataset.scala:2391)
        at org.apache.spark.sql.api.r.SQLUtils$.dfToCols(SQLUtils.scala:212)
        at org.apache.spark.sql.api.r.SQLUtils.dfToCols(SQLUtils.scala)
        ... 36 more
  Quitting from lines 102-104 (sparkr-vignettes.Rmd)
  Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
  java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.fore
  Execution halted

---------------------------------------------------------------------
To unsubscribe e-mail: [hidden email]

Re: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2

Felix Cheung
I recall this might be a problem when running Spark on Java 9.
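The java.lang.IllegalArgumentException thrown from
org.apache.xbean.asm5.ClassReader.<init> in the traces below fits that:
the ASM 5 bundled with Spark 2.2.x cannot read class files newer than
Java 8, and the java.base/jdk.internal.reflect frames suggest the check
machine is on JDK 9+. A minimal sketch of a local check, assuming a
separate Java 8 installation is available (the path below is only an
example, not taken from the CRAN logs):

    # Point SparkR's JVM backend at Java 8 before the session starts;
    # spark-submit honors JAVA_HOME when launching the driver JVM.
    java8_home <- "/usr/lib/jvm/java-8-openjdk-amd64"  # example path, adjust locally
    if (dir.exists(java8_home)) {
      Sys.setenv(JAVA_HOME = java8_home)
    }
    # Confirm which JVM will actually be used.
    system2(file.path(Sys.getenv("JAVA_HOME"), "bin", "java"), "-version")

    library(SparkR)
    sparkR.session(master = "local[1]", enableHiveSupport = FALSE)

If the session comes up under Java 8 but fails as above on JDK 9+, that
points at the runtime rather than the package.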

 

From: Shivaram Venkataraman <[hidden email]>
Sent: Monday, July 9, 2018 2:17 PM
To: dev; Felix Cheung; Tom Graves
Subject: Fwd: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2
 
The upcoming 2.2.2 release was submitted to CRAN. I think there are
some known issues on Windows, but does anybody know what the following
error with Netty is?

> WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address

Thanks
Shivaram


---------- Forwarded message ---------
From: <[hidden email]>
Date: Mon, Jul 9, 2018 at 12:12 PM
Subject: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2
To: <[hidden email]>
Cc: <[hidden email]>


Dear maintainer,

package SparkR_2.2.2.tar.gz does not pass the incoming checks
automatically, please see the following pre-tests:
Windows: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/Windows/00check.log>
Status: 1 ERROR, 1 WARNING
Debian: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/Debian/00check.log>
Status: 1 ERROR, 2 WARNINGs

Last released version's CRAN status: ERROR: 1, OK: 1
See: <https://CRAN.R-project.org/web/checks/check_results_SparkR.html>

CRAN Web: <https://cran.r-project.org/package=SparkR>

Please fix all problems and resubmit a fixed version via the webform.
If you are not sure how to fix the problems shown, please ask for help
on the R-package-devel mailing list:
<https://stat.ethz.ch/mailman/listinfo/r-package-devel>
If you are fairly certain the rejection is a false positive, please
reply-all to this message and explain.

More details are given in the directory:
<https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/>
The files will be removed after roughly 7 days.

No strong reverse dependencies to be checked.

Best regards,
CRAN teams' auto-check service
Flavor: r-devel-linux-x86_64-debian-gcc, r-devel-windows-ix86+x86_64
Check: CRAN incoming feasibility, Result: WARNING
Maintainer: 'Shivaram Venkataraman <[hidden email]>'

New submission

Package was archived on CRAN

Insufficient package version (submitted: 2.2.2, existing: 2.3.0)

Possibly mis-spelled words in DESCRIPTION:
Frontend (4:10, 5:28)

CRAN repository db overrides:
X-CRAN-Comment: Archived on 2018-05-01 as check problems were not
corrected despite reminders.

Found the following (possibly) invalid URLs:
URL: http://spark.apache.org/docs/latest/api/R/mean.html
From: inst/doc/sparkr-vignettes.html
Status: 404
Message: Not Found

Flavor: r-devel-windows-ix86+x86_64
Check: running tests for arch 'x64', Result: ERROR
Running 'run-all.R' [175s]
Running the tests in 'tests/run-all.R' failed.
Complete output:
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements. See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License. You may obtain a copy of the License at
> #
> # http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
>
> library(testthat)
> library(SparkR)

Attaching package: 'SparkR'

The following object is masked from 'package:testthat':

describe

The following objects are masked from 'package:stats':

cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
rank, rbind, sample, startsWith, subset, summary, transform, union

>
> # Turn all warnings into errors
> options("warn" = 2)
>
> if (.Platform$OS.type == "windows") {
+ Sys.setenv(TZ = "GMT")
+ }
>
> # Setup global test environment
> # Install Spark first to set SPARK_HOME
>
> # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
> # CRAN machines. For Jenkins we should already have SPARK_HOME set.
> install.spark(overwrite = TRUE)
Overwrite = TRUE: download and overwrite the tar file and Spark
package directory if they exist.
Spark not found in the cache directory. Installation will start.
MirrorUrl not provided.
Looking for preferred site from apache website...
Preferred mirror site found: http://mirror.dkd.de/apache/spark
Downloading spark-2.2.2 for Hadoop 2.7 from:
- http://mirror.dkd.de/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
trying URL 'http://mirror.dkd.de/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz'
Content type 'application/x-gzip' length 200743115 bytes (191.4 MB)
==================================================
downloaded 191.4 MB

Installing to C:\Users\ligges\AppData\Local\Apache\Spark\Cache
DONE.
SPARK_HOME set to
C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7
>
> sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
> sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
> invisible(lapply(sparkRWhitelistSQLDirs,
+ function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
> sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
>
> sparkRTestMaster <- "local[1]"
> sparkRTestConfig <- list()
> if (identical(Sys.getenv("NOT_CRAN"), "true")) {
+ sparkRTestMaster <- ""
+ } else {
+ # Disable hsperfdata on CRAN
+ old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
+ Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
+ tmpDir <- tempdir()
+ tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
+ sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
+ spark.executor.extraJavaOptions = tmpArg)
+ }
>
> test_package("SparkR")
Launching java with spark-submit command
C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd
--driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj"
sparkr-shell D:\temp\RtmpABZLQj\backend_port16d0838283f7e
Picked up _JAVA_OPTIONS: -XX:-UsePerfData
Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
use setLogLevel(newLevel).
-- 1. Error: create DataFrame from list or data.frame
(@test_basic.R#21) ------
cannot open the connection
1: sparkR.session(master = sparkRTestMaster, enableHiveSupport = FALSE, sparkConfig = sparkRTestConfig) at D:/temp/Rtmp8IKu99/RLIBS_77d8215b7bce/SparkR/tests/testthat/test_basic.R:21
2: sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, sparkExecutorEnvMap, sparkJars, sparkPackages)
3: file(path, open = "rb")

Launching java with spark-submit command
C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd
--driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj"
sparkr-shell D:\temp\RtmpABZLQj\backend_port16d085df97d88
Picked up _JAVA_OPTIONS: -XX:-UsePerfData
Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
use setLogLevel(newLevel).
18/07/09 18:10:43 ERROR Shell: Failed to locate the winutils
binary in the hadoop binary path
java.io.IOException: Could not locate executable
null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2427)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:295)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
at org.apache.spark.api.r.RRDD$.createSparkContext(RRDD.scala:139)
at org.apache.spark.api.r.RRDD.createSparkContext(RRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
18/07/09 18:10:43 WARN NativeCodeLoader: Unable to load
native-hadoop library for your platform... using builtin-java classes
where applicable
Picked up _JAVA_OPTIONS: -XX:-UsePerfData
18/07/09 18:10:54 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:10:55 WARN BLAS: Failed to load implementation from:
com.github.fommil.netlib.NativeSystemBLAS
18/07/09 18:10:55 WARN BLAS: Failed to load implementation from:
com.github.fommil.netlib.NativeRefBLAS
18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from:
com.github.fommil.netlib.NativeSystemLAPACK
18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from:
com.github.fommil.netlib.NativeRefLAPACK
18/07/09 18:11:12 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:14 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:15 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:17 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:18 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:19 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:21 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:22 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:23 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:25 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:26 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:28 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:29 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:46 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:47 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:49 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
18/07/09 18:11:50 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
== testthat results
===========================================================
OK: 6 SKIPPED: 0 FAILED: 1
1. Error: create DataFrame from list or data.frame (@test_basic.R#21)

Error: testthat unit tests failed
Execution halted
Picked up _JAVA_OPTIONS: -XX:-UsePerfData

Flavor: r-devel-linux-x86_64-debian-gcc
Check: tests, Result: ERROR
Running 'run-all.R' [6s/15s]
Running the tests in 'tests/run-all.R' failed.
Complete output:
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements. See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License. You may obtain a copy of the License at
> #
> # http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
>
> library(testthat)
> library(SparkR)

Attaching package: 'SparkR'

The following object is masked from 'package:testthat':

describe

The following objects are masked from 'package:stats':

cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
rank, rbind, sample, startsWith, subset, summary, transform, union

>
> # Turn all warnings into errors
> options("warn" = 2)
>
> if (.Platform$OS.type == "windows") {
+ Sys.setenv(TZ = "GMT")
+ }
>
> # Setup global test environment
> # Install Spark first to set SPARK_HOME
>
> # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
> # CRAN machines. For Jenkins we should already have SPARK_HOME set.
> install.spark(overwrite = TRUE)
Overwrite = TRUE: download and overwrite the tar file and Spark
package directory if they exist.
Spark not found in the cache directory. Installation will start.
MirrorUrl not provided.
Looking for preferred site from apache website...
Preferred mirror site found: http://mirror.klaus-uwe.me/apache/spark
Downloading spark-2.2.2 for Hadoop 2.7 from:
- http://mirror.klaus-uwe.me/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
trying URL 'http://mirror.klaus-uwe.me/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz'
Content type 'application/octet-stream' length 200743115 bytes (191.4 MB)
==================================================
downloaded 191.4 MB

Installing to /home/hornik/.cache/spark
DONE.
SPARK_HOME set to /home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7
>
> sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
> sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
> invisible(lapply(sparkRWhitelistSQLDirs,
+ function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
> sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
>
> sparkRTestMaster <- "local[1]"
> sparkRTestConfig <- list()
> if (identical(Sys.getenv("NOT_CRAN"), "true")) {
+ sparkRTestMaster <- ""
+ } else {
+ # Disable hsperfdata on CRAN
+ old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
+ Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
+ tmpDir <- tempdir()
+ tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
+ sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
+ spark.executor.extraJavaOptions = tmpArg)
+ }
>
> test_package("SparkR")
Launching java with spark-submit command
/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/bin/spark-submit
--driver-java-options "-Djava.io.tmpdir=/tmp/Rtmpkd8Lf6" sparkr-shell
/tmp/Rtmpkd8Lf6/backend_port289f65a5f5e0
Picked up _JAVA_OPTIONS: -XX:-UsePerfData
Picked up _JAVA_OPTIONS: -XX:-UsePerfData
Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
use setLogLevel(newLevel).
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by
io.netty.util.internal.PlatformDependent0$1
(file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar)
to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of
io.netty.util.internal.PlatformDependent0$1
WARNING: Use --illegal-access=warn to enable warnings of further
illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
18/07/09 17:58:50 WARN NativeCodeLoader: Unable to load
native-hadoop library for your platform... using builtin-java classes
where applicable
18/07/09 17:58:54 ERROR RBackendHandler: count on 13 failed
java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
... 36 more
── 1. Error: create DataFrame from list or data.frame
(@test_basic.R#26) ──────
java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.base/java.lang.Thread.run(Thread.java:844)
1: expect_equal(count(df), i) at /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:26
2: quasi_label(enquo(object), label)
3: eval_bare(get_expr(quo), get_env(quo))
4: count(df)
5: count(df)
6: callJMethod(x@sdf, "count")
7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
8: handleErrors(returnStatus, conn)
9: stop(readString(conn))

18/07/09 17:58:54 ERROR RBackendHandler: fit on
org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
... 36 more
── 2. Error: spark.glm and predict (@test_basic.R#58)
─────────────────────────
java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.base/java.lang.Thread.run(Thread.java:844)
1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:58
2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
3: .local(data, formula, ...)
4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula, data@sdf, tolower(family$family), family$link, tol, as.integer(maxIter), weightCol, regParam, as.double(var.power), as.double(link.power))
5: invokeJava(isStatic = TRUE, className, methodName, ...)
6: handleErrors(returnStatus, conn)
7: stop(readString(conn))

══ testthat results ═══════════════════════════════════════════════════════════
OK: 0 SKIPPED: 0 FAILED: 2
1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
2. Error: spark.glm and predict (@test_basic.R#58)

Error: testthat unit tests failed
Execution halted

Flavor: r-devel-linux-x86_64-debian-gcc
Check: re-building of vignette outputs, Result: WARNING
Error in re-building vignettes:
...

Attaching package: 'SparkR'

The following objects are masked from 'package:stats':

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform, union

Picked up _JAVA_OPTIONS: -XX:-UsePerfData
Picked up _JAVA_OPTIONS: -XX:-UsePerfData
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.PlatformDependent0$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
18/07/09 17:58:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/07/09 17:59:07 ERROR RBackendHandler: dfToCols on org.apache.spark.sql.api.r.SQLUtils failed
java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2865)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2391)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2391)
at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2391)
at org.apache.spark.sql.api.r.SQLUtils$.dfToCols(SQLUtils.scala:212)
at org.apache.spark.sql.api.r.SQLUtils.dfToCols(SQLUtils.scala)
... 36 more
Quitting from lines 102-104 (sparkr-vignettes.Rmd)
Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
java.lang.IllegalArgumentException
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.fore
Execution halted

Re: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2

Sean Owen-2
In reply to this post by Shivaram Venkataraman
Yes, this flavor of error should only come up in Java 9. Spark doesn't support that. Is there any way to tell CRAN this should not be tested?
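
For reference, a guard on the test side could detect the running JVM from R and skip the Spark tests on anything newer than Java 8, which is the newest release Spark 2.2.x supports. This is only a minimal sketch with hypothetical helper names, not code that is in SparkR today:

    # Parse the major version out of `java -version` output (written to stderr).
    # Returns NA if java is missing or the output is unrecognisable.
    java_major_version <- function() {
      out <- tryCatch(
        system2("java", "-version", stdout = TRUE, stderr = TRUE),
        error = function(e) character(0)
      )
      ver_line <- grep("version", out, value = TRUE)[1]
      if (is.na(ver_line)) return(NA_integer_)
      # 'java version "1.8.0_171"' -> 8; 'openjdk version "9.0.4"' -> 9
      ver <- sub('.*version "([^"]+)".*', "\\1", ver_line)
      parts <- suppressWarnings(as.integer(strsplit(ver, "[._-]")[[1]]))
      if (length(parts) == 0 || is.na(parts[1])) return(NA_integer_)
      if (parts[1] == 1L) parts[2] else parts[1]   # pre-9 versions report "1.x"
    }

    # testthat helper: turn the Java 9 reflection failures into skips.
    skip_if_unsupported_jvm <- function() {
      v <- java_major_version()
      if (is.na(v) || v > 8L) {
        testthat::skip(paste("Spark 2.2.x requires Java 8; found major version", v))
      }
    }

Calling skip_if_unsupported_jvm() at the top of the test files would keep a Java 9+ check machine from erroring out, though whether CRAN accepts the tests being skipped wholesale is a separate question.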

On Mon, Jul 9, 2018, 4:17 PM Shivaram Venkataraman <[hidden email]> wrote:
The upcoming 2.2.2 release was submitted to CRAN. I think there are
some known issues on Windows, but does anybody know what the following
error with Netty is?

>     WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address

Thanks
Shivaram


---------- Forwarded message ---------
From: <[hidden email]>
Date: Mon, Jul 9, 2018 at 12:12 PM
Subject: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2
To: <[hidden email]>
Cc: <[hidden email]>


Dear maintainer,

package SparkR_2.2.2.tar.gz does not pass the incoming checks
automatically, please see the following pre-tests:
Windows: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/Windows/00check.log>
Status: 1 ERROR, 1 WARNING
Debian: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/Debian/00check.log>
Status: 1 ERROR, 2 WARNINGs

Last released version's CRAN status: ERROR: 1, OK: 1
See: <https://CRAN.R-project.org/web/checks/check_results_SparkR.html>

CRAN Web: <https://cran.r-project.org/package=SparkR>

Please fix all problems and resubmit a fixed version via the webform.
If you are not sure how to fix the problems shown, please ask for help
on the R-package-devel mailing list:
<https://stat.ethz.ch/mailman/listinfo/r-package-devel>
If you are fairly certain the rejection is a false positive, please
reply-all to this message and explain.

More details are given in the directory:
<https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/>
The files will be removed after roughly 7 days.

No strong reverse dependencies to be checked.

Best regards,
CRAN teams' auto-check service
Flavor: r-devel-linux-x86_64-debian-gcc, r-devel-windows-ix86+x86_64
Check: CRAN incoming feasibility, Result: WARNING
  Maintainer: 'Shivaram Venkataraman <[hidden email]>'

  New submission

  Package was archived on CRAN

  Insufficient package version (submitted: 2.2.2, existing: 2.3.0)

  Possibly mis-spelled words in DESCRIPTION:
    Frontend (4:10, 5:28)

  CRAN repository db overrides:
    X-CRAN-Comment: Archived on 2018-05-01 as check problems were not
      corrected despite reminders.

  Found the following (possibly) invalid URLs:
    URL: http://spark.apache.org/docs/latest/api/R/mean.html
      From: inst/doc/sparkr-vignettes.html
      Status: 404
      Message: Not Found

Flavor: r-devel-windows-ix86+x86_64
Check: running tests for arch 'x64', Result: ERROR
    Running 'run-all.R' [175s]
  Running the tests in 'tests/run-all.R' failed.
  Complete output:
    > #
    > # Licensed to the Apache Software Foundation (ASF) under one or more
    > # contributor license agreements.  See the NOTICE file distributed with
    > # this work for additional information regarding copyright ownership.
    > # The ASF licenses this file to You under the Apache License, Version 2.0
    > # (the "License"); you may not use this file except in compliance with
    > # the License.  You may obtain a copy of the License at
    > #
    > #    http://www.apache.org/licenses/LICENSE-2.0
    > #
    > # Unless required by applicable law or agreed to in writing, software
    > # distributed under the License is distributed on an "AS IS" BASIS,
    > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    > # See the License for the specific language governing permissions and
    > # limitations under the License.
    > #
    >
    > library(testthat)
    > library(SparkR)

    Attaching package: 'SparkR'

    The following object is masked from 'package:testthat':

        describe

    The following objects are masked from 'package:stats':

        cov, filter, lag, na.omit, predict, sd, var, window

    The following objects are masked from 'package:base':

        as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
        rank, rbind, sample, startsWith, subset, summary, transform, union

    >
    > # Turn all warnings into errors
    > options("warn" = 2)
    >
    > if (.Platform$OS.type == "windows") {
    +   Sys.setenv(TZ = "GMT")
    + }
    >
    > # Setup global test environment
    > # Install Spark first to set SPARK_HOME
    >
    > # NOTE(shivaram): We set overwrite to handle any old tar.gz
files or directories left behind on
    > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
    > install.spark(overwrite = TRUE)
    Overwrite = TRUE: download and overwrite the tar fileand Spark
package directory if they exist.
    Spark not found in the cache directory. Installation will start.
    MirrorUrl not provided.
    Looking for preferred site from apache website...
    Preferred mirror site found: http://mirror.dkd.de/apache/spark
    Downloading spark-2.2.2 for Hadoop 2.7 from:
    - http://mirror.dkd.de/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
    trying URL 'http://mirror.dkd.de/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz'
    Content type 'application/x-gzip' length 200743115 bytes (191.4 MB)
    ==================================================
    downloaded 191.4 MB

    Installing to C:\Users\ligges\AppData\Local\Apache\Spark\Cache
    DONE.
    SPARK_HOME set to
C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7
    >
    > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
    > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
    > invisible(lapply(sparkRWhitelistSQLDirs,
    +                  function(x) { unlink(file.path(sparkRDir, x),
recursive = TRUE, force = TRUE)}))
    > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
    >
    > sparkRTestMaster <- "local[1]"
    > sparkRTestConfig <- list()
    > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
    +   sparkRTestMaster <- ""
    + } else {
    +   # Disable hsperfdata on CRAN
    +   old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
    +   Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
    +   tmpDir <- tempdir()
    +   tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
    +   sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
    +                            spark.executor.extraJavaOptions = tmpArg)
    + }
    >
    > test_package("SparkR")
    Launching java with spark-submit command
C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd
  --driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj"
sparkr-shell D:\temp\RtmpABZLQj\backend_port16d0838283f7e
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
use setLogLevel(newLevel).
    -- 1. Error: create DataFrame from list or data.frame
(@test_basic.R#21)  ------
    cannot open the connection
    1: sparkR.session(master = sparkRTestMaster, enableHiveSupport =
FALSE, sparkConfig = sparkRTestConfig) at
D:/temp/Rtmp8IKu99/RLIBS_77d8215b7bce/SparkR/tests/testthat/test_basic.R:21
    2: sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap,
sparkExecutorEnvMap,
           sparkJars, sparkPackages)
    3: file(path, open = "rb")

    Launching java with spark-submit command
C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd
  --driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj"
sparkr-shell D:\temp\RtmpABZLQj\backend_port16d085df97d88
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
use setLogLevel(newLevel).
    18/07/09 18:10:43 ERROR Shell: Failed to locate the winutils
binary in the hadoop binary path
    java.io.IOException: Could not locate executable
null\bin\winutils.exe in the Hadoop binaries.
        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
        at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
        at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
        at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2427)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:295)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
        at org.apache.spark.api.r.RRDD$.createSparkContext(RRDD.scala:139)
        at org.apache.spark.api.r.RRDD.createSparkContext(RRDD.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.lang.Thread.run(Thread.java:748)
    18/07/09 18:10:43 WARN NativeCodeLoader: Unable to load
native-hadoop library for your platform... using builtin-java classes
where applicable
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    18/07/09 18:10:54 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:10:55 WARN BLAS: Failed to load implementation from:
com.github.fommil.netlib.NativeSystemBLAS
    18/07/09 18:10:55 WARN BLAS: Failed to load implementation from:
com.github.fommil.netlib.NativeRefBLAS
    18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from:
com.github.fommil.netlib.NativeSystemLAPACK
    18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from:
com.github.fommil.netlib.NativeRefLAPACK
    18/07/09 18:11:12 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:14 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:15 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:17 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:18 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:19 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:21 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:22 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:23 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:25 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:26 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:28 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:29 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:46 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:47 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:49 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    18/07/09 18:11:50 WARN WeightedLeastSquares: regParam is zero,
which might cause numerical instability and overfitting.
    == testthat results
===========================================================
    OK: 6 SKIPPED: 0 FAILED: 1
    1. Error: create DataFrame from list or data.frame (@test_basic.R#21)

    Error: testthat unit tests failed
    Execution halted
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData

Flavor: r-devel-linux-x86_64-debian-gcc
Check: tests, Result: ERROR
    Running 'run-all.R' [6s/15s]
  Running the tests in 'tests/run-all.R' failed.
  Complete output:
    > #
    > # Licensed to the Apache Software Foundation (ASF) under one or more
    > # contributor license agreements.  See the NOTICE file distributed with
    > # this work for additional information regarding copyright ownership.
    > # The ASF licenses this file to You under the Apache License, Version 2.0
    > # (the "License"); you may not use this file except in compliance with
    > # the License.  You may obtain a copy of the License at
    > #
    > #    http://www.apache.org/licenses/LICENSE-2.0
    > #
    > # Unless required by applicable law or agreed to in writing, software
    > # distributed under the License is distributed on an "AS IS" BASIS,
    > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    > # See the License for the specific language governing permissions and
    > # limitations under the License.
    > #
    >
    > library(testthat)
    > library(SparkR)

    Attaching package: 'SparkR'

    The following object is masked from 'package:testthat':

        describe

    The following objects are masked from 'package:stats':

        cov, filter, lag, na.omit, predict, sd, var, window

    The following objects are masked from 'package:base':

        as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
        rank, rbind, sample, startsWith, subset, summary, transform, union

    >
    > # Turn all warnings into errors
    > options("warn" = 2)
    >
    > if (.Platform$OS.type == "windows") {
    +   Sys.setenv(TZ = "GMT")
    + }
    >
    > # Setup global test environment
    > # Install Spark first to set SPARK_HOME
    >
    > # NOTE(shivaram): We set overwrite to handle any old tar.gz
files or directories left behind on
    > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
    > install.spark(overwrite = TRUE)
    Overwrite = TRUE: download and overwrite the tar fileand Spark
package directory if they exist.
    Spark not found in the cache directory. Installation will start.
    MirrorUrl not provided.
    Looking for preferred site from apache website...
    Preferred mirror site found: http://mirror.klaus-uwe.me/apache/spark
    Downloading spark-2.2.2 for Hadoop 2.7 from:
    - http://mirror.klaus-uwe.me/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
    trying URL 'http://mirror.klaus-uwe.me/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz'
    Content type 'application/octet-stream' length 200743115 bytes (191.4 MB)
    ==================================================
    downloaded 191.4 MB

    Installing to /home/hornik/.cache/spark
    DONE.
    SPARK_HOME set to /home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7
    >
    > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
    > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
    > invisible(lapply(sparkRWhitelistSQLDirs,
    +                  function(x) { unlink(file.path(sparkRDir, x),
recursive = TRUE, force = TRUE)}))
    > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
    >
    > sparkRTestMaster <- "local[1]"
    > sparkRTestConfig <- list()
    > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
    +   sparkRTestMaster <- ""
    + } else {
    +   # Disable hsperfdata on CRAN
    +   old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
    +   Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
    +   tmpDir <- tempdir()
    +   tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
    +   sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
    +                            spark.executor.extraJavaOptions = tmpArg)
    + }
    >
    > test_package("SparkR")
    Launching java with spark-submit command
/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/bin/spark-submit
--driver-java-options "-Djava.io.tmpdir=/tmp/Rtmpkd8Lf6" sparkr-shell
/tmp/Rtmpkd8Lf6/backend_port289f65a5f5e0
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
    Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
use setLogLevel(newLevel).
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by
io.netty.util.internal.PlatformDependent0$1
(file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar)
to field java.nio.Buffer.address
    WARNING: Please consider reporting this to the maintainers of
io.netty.util.internal.PlatformDependent0$1
    WARNING: Use --illegal-access=warn to enable warnings of further
illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    18/07/09 17:58:50 WARN NativeCodeLoader: Unable to load
native-hadoop library for your platform... using builtin-java classes
where applicable
    18/07/09 17:58:54 ERROR RBackendHandler: count on 13 failed
    java.lang.reflect.InvocationTargetException
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.base/java.lang.Thread.run(Thread.java:844)
    Caused by: java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
        at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
        at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
        ... 36 more
    ── 1. Error: create DataFrame from list or data.frame
(@test_basic.R#26)  ──────
    java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
        at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
        at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.base/java.lang.Thread.run(Thread.java:844)
    1: expect_equal(count(df), i) at
/srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:26
    2: quasi_label(enquo(object), label)
    3: eval_bare(get_expr(quo), get_env(quo))
    4: count(df)
    5: count(df)
    6: callJMethod(x@sdf, "count")
    7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
    8: handleErrors(returnStatus, conn)
    9: stop(readString(conn))

    18/07/09 17:58:54 ERROR RBackendHandler: fit on
org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
    java.lang.reflect.InvocationTargetException
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.base/java.lang.Thread.run(Thread.java:844)
    Caused by: java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
        at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
        at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
        at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
        ... 36 more
    ── 2. Error: spark.glm and predict (@test_basic.R#58)
─────────────────────────
    java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
        at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
        at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
        at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.base/java.lang.Thread.run(Thread.java:844)
    1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:58
    2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
    3: .local(data, formula, ...)
    4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula,
           data@sdf, tolower(family$family), family$link, tol, as.integer(maxIter), weightCol,
           regParam, as.double(var.power), as.double(link.power))
    5: invokeJava(isStatic = TRUE, className, methodName, ...)
    6: handleErrors(returnStatus, conn)
    7: stop(readString(conn))

    ══ testthat results ═══════════════════════════════════════════════════════
    OK: 0 SKIPPED: 0 FAILED: 2
    1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
    2. Error: spark.glm and predict (@test_basic.R#58)

    Error: testthat unit tests failed
    Execution halted

Flavor: r-devel-linux-x86_64-debian-gcc
Check: re-building of vignette outputs, Result: WARNING
  Error in re-building vignettes:
    ...

  Attaching package: 'SparkR'

  The following objects are masked from 'package:stats':

      cov, filter, lag, na.omit, predict, sd, var, window

  The following objects are masked from 'package:base':

      as.data.frame, colnames, colnames<-, drop, endsWith,
      intersect, rank, rbind, sample, startsWith, subset, summary,
      transform, union

  Picked up _JAVA_OPTIONS: -XX:-UsePerfData
  Picked up _JAVA_OPTIONS: -XX:-UsePerfData
  Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
  Setting default log level to "WARN".
  To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
  WARNING: An illegal reflective access operation has occurred
  WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address
  WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.PlatformDependent0$1
  WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
  WARNING: All illegal access operations will be denied in a future release
  18/07/09 17:58:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  18/07/09 17:59:07 ERROR RBackendHandler: dfToCols on org.apache.spark.sql.api.r.SQLUtils failed
  java.lang.reflect.InvocationTargetException
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.base/java.lang.Thread.run(Thread.java:844)
  Caused by: java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2865)
        at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2391)
        at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2391)
        at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
        at org.apache.spark.sql.Dataset.collect(Dataset.scala:2391)
        at org.apache.spark.sql.api.r.SQLUtils$.dfToCols(SQLUtils.scala:212)
        at org.apache.spark.sql.api.r.SQLUtils.dfToCols(SQLUtils.scala)
        ... 36 more
  Quitting from lines 102-104 (sparkr-vignettes.Rmd)
  Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
  java.lang.IllegalArgumentException
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.fore
  Execution halted


Re: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2

Shivaram Venkataraman
Yes. I think Felix checked in a fix to skip the tests when they run on
Java versions other than Java 8 (I believe the fix was
https://github.com/apache/spark/pull/21666, which is in 2.3.2).
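
For anyone hitting this, the guard is roughly of this shape (a sketch
of the approach only, not the exact patch; the helper in the PR may be
named and placed differently):

    # Sketch: detect the JVM version on PATH and skip the SparkR tests
    # unless it is Java 8, which is all Spark 2.x supports.
    javaVersion <- tryCatch({
      out <- system2("java", "-version", stdout = TRUE, stderr = TRUE)
      # First line looks like: java version "1.8.0_171"
      # or, on newer JDKs:     openjdk version "10.0.1"
      sub('.*version "([^"]+)".*', "\\1", out[1])
    }, error = function(e) NA_character_)

    if (is.na(javaVersion) || !startsWith(javaVersion, "1.8")) {
      testthat::skip(paste("Spark requires Java 8; found", javaVersion))
    }

Called at the top of each test, that turns the Java 9+ failures above
into skips rather than errors. As for telling CRAN outright, tightening
SystemRequirements in DESCRIPTION (e.g. Java (== 8)) might help, though
I am not sure the check machines act on that field.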

Shivaram
On Mon, Jul 9, 2018 at 5:39 PM Sean Owen <[hidden email]> wrote:

>
> Yes, this flavor of error should only come up in Java 9. Spark doesn't support that. Is there any way to tell CRAN this should not be tested?
>
> On Mon, Jul 9, 2018, 4:17 PM Shivaram Venkataraman <[hidden email]> wrote:
>>
>> The upcoming 2.2.2 release was submitted to CRAN. I think there are
>> some knows issues on Windows, but does anybody know what the following
>> error with Netty is ?
>>
>> >     WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address
>>
>> Thanks
>> Shivaram
>>
>>
>> ---------- Forwarded message ---------
>> From: <[hidden email]>
>> Date: Mon, Jul 9, 2018 at 12:12 PM
>> Subject: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2
>> To: <[hidden email]>
>> Cc: <[hidden email]>
>>
>>
>> Dear maintainer,
>>
>> package SparkR_2.2.2.tar.gz does not pass the incoming checks
>> automatically, please see the following pre-tests:
>> Windows: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/Windows/00check.log>
>> Status: 1 ERROR, 1 WARNING
>> Debian: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/Debian/00check.log>
>> Status: 1 ERROR, 2 WARNINGs
>>
>> Last released version's CRAN status: ERROR: 1, OK: 1
>> See: <https://CRAN.R-project.org/web/checks/check_results_SparkR.html>
>>
>> CRAN Web: <https://cran.r-project.org/package=SparkR>
>>
>> Please fix all problems and resubmit a fixed version via the webform.
>> If you are not sure how to fix the problems shown, please ask for help
>> on the R-package-devel mailing list:
>> <https://stat.ethz.ch/mailman/listinfo/r-package-devel>
>> If you are fairly certain the rejection is a false positive, please
>> reply-all to this message and explain.
>>
>> More details are given in the directory:
>> <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/>
>> The files will be removed after roughly 7 days.
>>
>> No strong reverse dependencies to be checked.
>>
>> Best regards,
>> CRAN teams' auto-check service
>> Flavor: r-devel-linux-x86_64-debian-gcc, r-devel-windows-ix86+x86_64
>> Check: CRAN incoming feasibility, Result: WARNING
>>   Maintainer: 'Shivaram Venkataraman <[hidden email]>'
>>
>>   New submission
>>
>>   Package was archived on CRAN
>>
>>   Insufficient package version (submitted: 2.2.2, existing: 2.3.0)
>>
>>   Possibly mis-spelled words in DESCRIPTION:
>>     Frontend (4:10, 5:28)
>>
>>   CRAN repository db overrides:
>>     X-CRAN-Comment: Archived on 2018-05-01 as check problems were not
>>       corrected despite reminders.
>>
>>   Found the following (possibly) invalid URLs:
>>     URL: http://spark.apache.org/docs/latest/api/R/mean.html
>>       From: inst/doc/sparkr-vignettes.html
>>       Status: 404
>>       Message: Not Found
>>
>> Flavor: r-devel-windows-ix86+x86_64
>> Check: running tests for arch 'x64', Result: ERROR
>>     Running 'run-all.R' [175s]
>>   Running the tests in 'tests/run-all.R' failed.
>>   Complete output:
>>     > #
>>     > # Licensed to the Apache Software Foundation (ASF) under one or more
>>     > # contributor license agreements.  See the NOTICE file distributed with
>>     > # this work for additional information regarding copyright ownership.
>>     > # The ASF licenses this file to You under the Apache License, Version 2.0
>>     > # (the "License"); you may not use this file except in compliance with
>>     > # the License.  You may obtain a copy of the License at
>>     > #
>>     > #    http://www.apache.org/licenses/LICENSE-2.0
>>     > #
>>     > # Unless required by applicable law or agreed to in writing, software
>>     > # distributed under the License is distributed on an "AS IS" BASIS,
>>     > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>     > # See the License for the specific language governing permissions and
>>     > # limitations under the License.
>>     > #
>>     >
>>     > library(testthat)
>>     > library(SparkR)
>>
>>     Attaching package: 'SparkR'
>>
>>     The following object is masked from 'package:testthat':
>>
>>         describe
>>
>>     The following objects are masked from 'package:stats':
>>
>>         cov, filter, lag, na.omit, predict, sd, var, window
>>
>>     The following objects are masked from 'package:base':
>>
>>         as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
>>         rank, rbind, sample, startsWith, subset, summary, transform, union
>>
>>     >
>>     > # Turn all warnings into errors
>>     > options("warn" = 2)
>>     >
>>     > if (.Platform$OS.type == "windows") {
>>     +   Sys.setenv(TZ = "GMT")
>>     + }
>>     >
>>     > # Setup global test environment
>>     > # Install Spark first to set SPARK_HOME
>>     >
>>     > # NOTE(shivaram): We set overwrite to handle any old tar.gz
>> files or directories left behind on
>>     > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
>>     > install.spark(overwrite = TRUE)
>>     Overwrite = TRUE: download and overwrite the tar file and Spark
>> package directory if they exist.
>>     Spark not found in the cache directory. Installation will start.
>>     MirrorUrl not provided.
>>     Looking for preferred site from apache website...
>>     Preferred mirror site found: http://mirror.dkd.de/apache/spark
>>     Downloading spark-2.2.2 for Hadoop 2.7 from:
>>     - http://mirror.dkd.de/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
>>     trying URL 'http://mirror.dkd.de/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz'
>>     Content type 'application/x-gzip' length 200743115 bytes (191.4 MB)
>>     ==================================================
>>     downloaded 191.4 MB
>>
>>     Installing to C:\Users\ligges\AppData\Local\Apache\Spark\Cache
>>     DONE.
>>     SPARK_HOME set to
>> C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7
>>     >
>>     > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
>>     > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
>>     > invisible(lapply(sparkRWhitelistSQLDirs,
>>     +                  function(x) { unlink(file.path(sparkRDir, x),
>> recursive = TRUE, force = TRUE)}))
>>     > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
>>     >
>>     > sparkRTestMaster <- "local[1]"
>>     > sparkRTestConfig <- list()
>>     > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
>>     +   sparkRTestMaster <- ""
>>     + } else {
>>     +   # Disable hsperfdata on CRAN
>>     +   old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
>>     +   Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
>>     +   tmpDir <- tempdir()
>>     +   tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
>>     +   sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
>>     +                            spark.executor.extraJavaOptions = tmpArg)
>>     + }
>>     >
>>     > test_package("SparkR")
>>     Launching java with spark-submit command
>> C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd
>>   --driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj"
>> sparkr-shell D:\temp\RtmpABZLQj\backend_port16d0838283f7e
>>     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>     Using Spark's default log4j profile:
>> org/apache/spark/log4j-defaults.properties
>>     Setting default log level to "WARN".
>>     To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
>> use setLogLevel(newLevel).
>>     -- 1. Error: create DataFrame from list or data.frame
>> (@test_basic.R#21)  ------
>>     cannot open the connection
>>     1: sparkR.session(master = sparkRTestMaster, enableHiveSupport =
>> FALSE, sparkConfig = sparkRTestConfig) at
>> D:/temp/Rtmp8IKu99/RLIBS_77d8215b7bce/SparkR/tests/testthat/test_basic.R:21
>>     2: sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap,
>> sparkExecutorEnvMap,
>>            sparkJars, sparkPackages)
>>     3: file(path, open = "rb")
>>
>>     Launching java with spark-submit command
>> C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd
>>   --driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj"
>> sparkr-shell D:\temp\RtmpABZLQj\backend_port16d085df97d88
>>     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>     Using Spark's default log4j profile:
>> org/apache/spark/log4j-defaults.properties
>>     Setting default log level to "WARN".
>>     To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
>> use setLogLevel(newLevel).
>>     18/07/09 18:10:43 ERROR Shell: Failed to locate the winutils
>> binary in the hadoop binary path
>>     java.io.IOException: Could not locate executable
>> null\bin\winutils.exe in the Hadoop binaries.
>>         at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
>>         at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
>>         at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
>>         at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
>>         at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
>>         at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
>>         at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
>>         at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
>>         at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
>>         at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
>>         at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
>>         at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
>>         at scala.Option.getOrElse(Option.scala:121)
>>         at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2427)
>>         at org.apache.spark.SparkContext.<init>(SparkContext.scala:295)
>>         at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
>>         at org.apache.spark.api.r.RRDD$.createSparkContext(RRDD.scala:139)
>>         at org.apache.spark.api.r.RRDD.createSparkContext(RRDD.scala)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:498)
>>         at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>         at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>         at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>         at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>         at java.lang.Thread.run(Thread.java:748)
>>     18/07/09 18:10:43 WARN NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes
>> where applicable
>>     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>     18/07/09 18:10:54 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:10:55 WARN BLAS: Failed to load implementation from:
>> com.github.fommil.netlib.NativeSystemBLAS
>>     18/07/09 18:10:55 WARN BLAS: Failed to load implementation from:
>> com.github.fommil.netlib.NativeRefBLAS
>>     18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from:
>> com.github.fommil.netlib.NativeSystemLAPACK
>>     18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from:
>> com.github.fommil.netlib.NativeRefLAPACK
>>     18/07/09 18:11:12 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:14 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:15 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:17 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:18 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:19 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:21 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:22 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:23 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:25 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:26 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:28 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:29 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:46 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:47 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:49 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     18/07/09 18:11:50 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>     == testthat results
>> ===========================================================
>>     OK: 6 SKIPPED: 0 FAILED: 1
>>     1. Error: create DataFrame from list or data.frame (@test_basic.R#21)
>>
>>     Error: testthat unit tests failed
>>     Execution halted
>>     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>
>> Flavor: r-devel-linux-x86_64-debian-gcc
>> Check: tests, Result: ERROR
>>     Running 'run-all.R' [6s/15s]
>>   Running the tests in 'tests/run-all.R' failed.
>>   Complete output:
>>     > #
>>     > # Licensed to the Apache Software Foundation (ASF) under one or more
>>     > # contributor license agreements.  See the NOTICE file distributed with
>>     > # this work for additional information regarding copyright ownership.
>>     > # The ASF licenses this file to You under the Apache License, Version 2.0
>>     > # (the "License"); you may not use this file except in compliance with
>>     > # the License.  You may obtain a copy of the License at
>>     > #
>>     > #    http://www.apache.org/licenses/LICENSE-2.0
>>     > #
>>     > # Unless required by applicable law or agreed to in writing, software
>>     > # distributed under the License is distributed on an "AS IS" BASIS,
>>     > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>     > # See the License for the specific language governing permissions and
>>     > # limitations under the License.
>>     > #
>>     >
>>     > library(testthat)
>>     > library(SparkR)
>>
>>     Attaching package: 'SparkR'
>>
>>     The following object is masked from 'package:testthat':
>>
>>         describe
>>
>>     The following objects are masked from 'package:stats':
>>
>>         cov, filter, lag, na.omit, predict, sd, var, window
>>
>>     The following objects are masked from 'package:base':
>>
>>         as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
>>         rank, rbind, sample, startsWith, subset, summary, transform, union
>>
>>     >
>>     > # Turn all warnings into errors
>>     > options("warn" = 2)
>>     >
>>     > if (.Platform$OS.type == "windows") {
>>     +   Sys.setenv(TZ = "GMT")
>>     + }
>>     >
>>     > # Setup global test environment
>>     > # Install Spark first to set SPARK_HOME
>>     >
>>     > # NOTE(shivaram): We set overwrite to handle any old tar.gz
>> files or directories left behind on
>>     > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
>>     > install.spark(overwrite = TRUE)
>>     Overwrite = TRUE: download and overwrite the tar file and Spark
>> package directory if they exist.
>>     Spark not found in the cache directory. Installation will start.
>>     MirrorUrl not provided.
>>     Looking for preferred site from apache website...
>>     Preferred mirror site found: http://mirror.klaus-uwe.me/apache/spark
>>     Downloading spark-2.2.2 for Hadoop 2.7 from:
>>     - http://mirror.klaus-uwe.me/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
>>     trying URL 'http://mirror.klaus-uwe.me/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz'
>>     Content type 'application/octet-stream' length 200743115 bytes (191.4 MB)
>>     ==================================================
>>     downloaded 191.4 MB
>>
>>     Installing to /home/hornik/.cache/spark
>>     DONE.
>>     SPARK_HOME set to /home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7
>>     >
>>     > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
>>     > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
>>     > invisible(lapply(sparkRWhitelistSQLDirs,
>>     +                  function(x) { unlink(file.path(sparkRDir, x),
>> recursive = TRUE, force = TRUE)}))
>>     > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
>>     >
>>     > sparkRTestMaster <- "local[1]"
>>     > sparkRTestConfig <- list()
>>     > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
>>     +   sparkRTestMaster <- ""
>>     + } else {
>>     +   # Disable hsperfdata on CRAN
>>     +   old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
>>     +   Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
>>     +   tmpDir <- tempdir()
>>     +   tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
>>     +   sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
>>     +                            spark.executor.extraJavaOptions = tmpArg)
>>     + }
>>     >
>>     > test_package("SparkR")
>>     Launching java with spark-submit command
>> /home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/bin/spark-submit
>> --driver-java-options "-Djava.io.tmpdir=/tmp/Rtmpkd8Lf6" sparkr-shell
>> /tmp/Rtmpkd8Lf6/backend_port289f65a5f5e0
>>     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>     Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>     Using Spark's default log4j profile:
>> org/apache/spark/log4j-defaults.properties
>>     Setting default log level to "WARN".
>>     To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
>> use setLogLevel(newLevel).
>>     WARNING: An illegal reflective access operation has occurred
>>     WARNING: Illegal reflective access by
>> io.netty.util.internal.PlatformDependent0$1
>> (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar)
>> to field java.nio.Buffer.address
>>     WARNING: Please consider reporting this to the maintainers of
>> io.netty.util.internal.PlatformDependent0$1
>>     WARNING: Use --illegal-access=warn to enable warnings of further
>> illegal reflective access operations
>>     WARNING: All illegal access operations will be denied in a future release
>>     18/07/09 17:58:50 WARN NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes
>> where applicable
>>     18/07/09 17:58:54 ERROR RBackendHandler: count on 13 failed
>>     java.lang.reflect.InvocationTargetException
>>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method)
>>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>>         at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>         at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>         at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>         at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>         at java.base/java.lang.Thread.run(Thread.java:844)
>>     Caused by: java.lang.IllegalArgumentException
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>         at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
>>         at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
>>         at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
>>         at scala.collection.immutable.List.foreach(List.scala:381)
>>         at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
>>         at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
>>         at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>>         at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>         at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>>         at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
>>         at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
>>         at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
>>         at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
>>         at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
>>         at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
>>         at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
>>         ... 36 more
>>     ── 1. Error: create DataFrame from list or data.frame
>> (@test_basic.R#26)  ──────
>>     java.lang.IllegalArgumentException
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>         at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
>>         at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
>>         at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
>>         at scala.collection.immutable.List.foreach(List.scala:381)
>>         at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
>>         at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
>>         at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>>         at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>         at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>>         at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
>>         at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
>>         at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
>>         at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
>>         at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
>>         at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
>>         at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
>>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method)
>>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>>         at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>         at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>         at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>         at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>         at java.base/java.lang.Thread.run(Thread.java:844)
>>     1: expect_equal(count(df), i) at
>> /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:26
>>     2: quasi_label(enquo(object), label)
>>     3: eval_bare(get_expr(quo), get_env(quo))
>>     4: count(df)
>>     5: count(df)
>>     6: callJMethod(x@sdf, "count")
>>     7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
>>     8: handleErrors(returnStatus, conn)
>>     9: stop(readString(conn))
>>
>>     18/07/09 17:58:54 ERROR RBackendHandler: fit on
>> org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
>>     java.lang.reflect.InvocationTargetException
>>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method)
>>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>>         at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>         at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>         at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>         at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>         at java.base/java.lang.Thread.run(Thread.java:844)
>>     Caused by: java.lang.IllegalArgumentException
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>         at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
>>         at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
>>         at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
>>         at scala.collection.immutable.List.foreach(List.scala:381)
>>         at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
>>         at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
>>         at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>>         at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>         at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>>         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
>>         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>         at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
>>         at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
>>         at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>         at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
>>         at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
>>         at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
>>         at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
>>         at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
>>         at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>>         at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
>>         at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
>>         at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
>>         at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
>>         at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
>>         at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
>>         ... 36 more
>>     ── 2. Error: spark.glm and predict (@test_basic.R#58)
>> ─────────────────────────
>>     java.lang.IllegalArgumentException
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>         at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
>>         at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
>>         at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
>>         at scala.collection.immutable.List.foreach(List.scala:381)
>>         at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
>>         at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
>>         at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>>         at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>         at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>>         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
>>         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>         at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
>>         at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
>>         at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>         at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
>>         at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
>>         at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
>>         at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
>>         at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
>>         at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>>         at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
>>         at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
>>         at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
>>         at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
>>         at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
>>         at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
>>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method)
>>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>>         at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>         at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>         at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>         at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>         at java.base/java.lang.Thread.run(Thread.java:844)
>>     1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at
>> /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:58
>>     2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
>>     3: .local(data, formula, ...)
>>     4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper",
>> "fit", formula,
>>            data@sdf, tolower(family$family), family$link, tol,
>> as.integer(maxIter), weightCol,
>>            regParam, as.double(var.power), as.double(link.power))
>>     5: invokeJava(isStatic = TRUE, className, methodName, ...)
>>     6: handleErrors(returnStatus, conn)
>>     7: stop(readString(conn))
>>
>>     ══ testthat results
>> ═══════════════════════════════════════════════════════════
>>     OK: 0 SKIPPED: 0 FAILED: 2
>>     1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
>>     2. Error: spark.glm and predict (@test_basic.R#58)
>>
>>     Error: testthat unit tests failed
>>     Execution halted
>>
>> Flavor: r-devel-linux-x86_64-debian-gcc
>> Check: re-building of vignette outputs, Result: WARNING
>>   Error in re-building vignettes:
>>     ...
>>
>>   Attaching package: 'SparkR'
>>
>>   The following objects are masked from 'package:stats':
>>
>>       cov, filter, lag, na.omit, predict, sd, var, window
>>
>>   The following objects are masked from 'package:base':
>>
>>       as.data.frame, colnames, colnames<-, drop, endsWith,
>>       intersect, rank, rbind, sample, startsWith, subset, summary,
>>       transform, union
>>
>>   Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>   Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>   Using Spark's default log4j profile:
>> org/apache/spark/log4j-defaults.properties
>>   Setting default log level to "WARN".
>>   To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
>> use setLogLevel(newLevel).
>>   WARNING: An illegal reflective access operation has occurred
>>   WARNING: Illegal reflective access by
>> io.netty.util.internal.PlatformDependent0$1
>> (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar)
>> to field java.nio.Buffer.address
>>   WARNING: Please consider reporting this to the maintainers of
>> io.netty.util.internal.PlatformDependent0$1
>>   WARNING: Use --illegal-access=warn to enable warnings of further
>> illegal reflective access operations
>>   WARNING: All illegal access operations will be denied in a future release
>>   18/07/09 17:58:59 WARN NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes
>> where applicable
>>   18/07/09 17:59:07 ERROR RBackendHandler: dfToCols on
>> org.apache.spark.sql.api.r.SQLUtils failed
>>   java.lang.reflect.InvocationTargetException
>>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method)
>>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>>         at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>         at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>         at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>         at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>         at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>         at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>         at java.base/java.lang.Thread.run(Thread.java:844)
>>   Caused by: java.lang.IllegalArgumentException
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>         at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
>>         at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
>>         at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
>>         at scala.collection.immutable.List.foreach(List.scala:381)
>>         at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
>>         at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
>>         at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
>>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>>         at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>         at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>>         at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
>>         at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2865)
>>         at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2391)
>>         at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2391)
>>         at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
>>         at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
>>         at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
>>         at org.apache.spark.sql.Dataset.collect(Dataset.scala:2391)
>>         at org.apache.spark.sql.api.r.SQLUtils$.dfToCols(SQLUtils.scala:212)
>>         at org.apache.spark.sql.api.r.SQLUtils.dfToCols(SQLUtils.scala)
>>         ... 36 more
>>   Quitting from lines 102-104 (sparkr-vignettes.Rmd)
>>   Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
>>   java.lang.IllegalArgumentException
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>         at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>         at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>         at scala.collection.mutable.HashMap.fore
>>   Execution halted
>>

Re: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2

Tom Graves-2
Is there any way to push it to CRAN without this fix? I don't really want to respin 2.2.2 just for the test fix.

Tom

On Monday, July 9, 2018, 4:50:18 PM CDT, Shivaram Venkataraman <[hidden email]> wrote:


Yes. I think Felix checked in a fix to skip the tests when they are run on Java
versions other than Java 8 (I think the fix was in

Shivaram
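
For context, a minimal sketch of that kind of guard in tests/run-all.R (illustrative only; this is not Felix's actual patch, and the version detection and parsing here are my own assumptions):

    library(testthat)
    library(SparkR)

    # `java -version` prints to stderr, e.g. 'java version "1.8.0_171"'
    # for Java 8 or 'java version "9.0.4"' for Java 9.
    javaVersion <- tryCatch(
      suppressWarnings(system2("java", "-version", stdout = TRUE, stderr = TRUE)),
      error = function(e) character(0)
    )
    isJava8 <- any(grepl("version \"(1\\.8|8)", javaVersion))

    if (isJava8) {
      test_package("SparkR")
    } else {
      # Bail out cleanly instead of failing: Spark 2.2.x supports Java 8 only,
      # so running the JVM-backed tests on Java 9+ is pointless.
      message("Java 8 not detected; skipping SparkR tests.")
    }

Separately, declaring the requirement in DESCRIPTION (something like 'SystemRequirements: Java (== 8)') would at least document why checks fail on Java 9+ machines, though that field is informational, so a runtime guard like the above is still what actually keeps the tests from erroring out.
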
On Mon, Jul 9, 2018 at 5:39 PM Sean Owen <[hidden email]> wrote:
>
> Yes, this flavor of error should only come up in Java 9. Spark doesn't support that. Is there any way to tell CRAN this should not be tested?
>
> On Mon, Jul 9, 2018, 4:17 PM Shivaram Venkataraman <[hidden email]> wrote:
>>
>> The upcoming 2.2.2 release was submitted to CRAN. I think there are
>> some known issues on Windows, but does anybody know what the following
>> error with Netty is?
>>
>> >    WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address
>>
>> Thanks
>> Shivaram
>>
>>
>> ---------- Forwarded message ---------
>> From: <[hidden email]>
>> Date: Mon, Jul 9, 2018 at 12:12 PM
>> Subject: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2
>>
>>
>> Dear maintainer,
>>
>> package SparkR_2.2.2.tar.gz does not pass the incoming checks
>> automatically, please see the following pre-tests:
>> Status: 1 ERROR, 1 WARNING
>> Status: 1 ERROR, 2 WARNINGs
>>
>> Last released version's CRAN status: ERROR: 1, OK: 1
>>
>>
>> Please fix all problems and resubmit a fixed version via the webform.
>> If you are not sure how to fix the problems shown, please ask for help
>> on the R-package-devel mailing list:
>> If you are fairly certain the rejection is a false positive, please
>> reply-all to this message and explain.
>>
>> More details are given in the directory:
>> The files will be removed after roughly 7 days.
>>
>> No strong reverse dependencies to be checked.
>>
>> Best regards,
>> CRAN teams' auto-check service
>> Flavor: r-devel-linux-x86_64-debian-gcc, r-devel-windows-ix86+x86_64
>> Check: CRAN incoming feasibility, Result: WARNING
>>  Maintainer: 'Shivaram Venkataraman <[hidden email]>'
>>
>>  New submission
>>
>>  Package was archived on CRAN
>>
>>  Insufficient package version (submitted: 2.2.2, existing: 2.3.0)
>>
>>  Possibly mis-spelled words in DESCRIPTION:
>>    Frontend (4:10, 5:28)
>>
>>  CRAN repository db overrides:
>>    X-CRAN-Comment: Archived on 2018-05-01 as check problems were not
>>      corrected despite reminders.
>>
>>  Found the following (possibly) invalid URLs:
>>      From: inst/doc/sparkr-vignettes.html
>>      Status: 404
>>      Message: Not Found
>>
>> Flavor: r-devel-windows-ix86+x86_64
>> Check: running tests for arch 'x64', Result: ERROR
>>    Running 'run-all.R' [175s]
>>  Running the tests in 'tests/run-all.R' failed.
>>  Complete output:
>>    > #
>>    > # Licensed to the Apache Software Foundation (ASF) under one or more
>>    > # contributor license agreements.  See the NOTICE file distributed with
>>    > # this work for additional information regarding copyright ownership.
>>    > # The ASF licenses this file to You under the Apache License, Version 2.0
>>    > # (the "License"); you may not use this file except in compliance with
>>    > # the License.  You may obtain a copy of the License at
>>    > #
>>    > #
>>    > # Unless required by applicable law or agreed to in writing, software
>>    > # distributed under the License is distributed on an "AS IS" BASIS,
>>    > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>    > # See the License for the specific language governing permissions and
>>    > # limitations under the License.
>>    > #
>>    >
>>    > library(testthat)
>>    > library(SparkR)
>>
>>    Attaching package: 'SparkR'
>>
>>    The following object is masked from 'package:testthat':
>>
>>        describe
>>
>>    The following objects are masked from 'package:stats':
>>
>>        cov, filter, lag, na.omit, predict, sd, var, window
>>
>>    The following objects are masked from 'package:base':
>>
>>        as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
>>        rank, rbind, sample, startsWith, subset, summary, transform, union
>>
>>    >
>>    > # Turn all warnings into errors
>>    > options("warn" = 2)
>>    >
>>    > if (.Platform$OS.type == "windows") {
>>    +  Sys.setenv(TZ = "GMT")
>>    + }
>>    >
>>    > # Setup global test environment
>>    > # Install Spark first to set SPARK_HOME
>>    >
>>    > # NOTE(shivaram): We set overwrite to handle any old tar.gz
>> files or directories left behind on
>>    > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
>>    > install.spark(overwrite = TRUE)
>>    Overwrite = TRUE: download and overwrite the tar file and Spark
>> package directory if they exist.
>>    Spark not found in the cache directory. Installation will start.
>>    MirrorUrl not provided.
>>    Looking for preferred site from apache website...
>>    Preferred mirror site found: http://mirror.dkd.de/apache/spark
>>    Downloading spark-2.2.2 for Hadoop 2.7 from:
>>    Content type 'application/x-gzip' length 200743115 bytes (191.4 MB)
>>    ==================================================
>>    downloaded 191.4 MB
>>
>>    Installing to C:\Users\ligges\AppData\Local\Apache\Spark\Cache
>>    DONE.
>>    SPARK_HOME set to
>> C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7
>>    >
>>    > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
>>    > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
>>    > invisible(lapply(sparkRWhitelistSQLDirs,
>>    +                  function(x) { unlink(file.path(sparkRDir, x),
>> recursive = TRUE, force = TRUE)}))
>>    > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
>>    >
>>    > sparkRTestMaster <- "local[1]"
>>    > sparkRTestConfig <- list()
>>    > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
>>    +  sparkRTestMaster <- ""
>>    + } else {
>>    +  # Disable hsperfdata on CRAN
>>    +  old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
>>    +  Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
>>    +  tmpDir <- tempdir()
>>    +  tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
>>    +  sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
>>    +                            spark.executor.extraJavaOptions = tmpArg)
>>    + }
>>    >
>>    > test_package("SparkR")
>>    Launching java with spark-submit command
>> C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd
>>  --driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj"
>> sparkr-shell D:\temp\RtmpABZLQj\backend_port16d0838283f7e
>>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>    Using Spark's default log4j profile:
>> org/apache/spark/log4j-defaults.properties
>>    Setting default log level to "WARN".
>>    To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
>> use setLogLevel(newLevel).
>>    -- 1. Error: create DataFrame from list or data.frame
>> (@test_basic.R#21)  ------
>>    cannot open the connection
>>    1: sparkR.session(master = sparkRTestMaster, enableHiveSupport =
>> FALSE, sparkConfig = sparkRTestConfig) at
>> D:/temp/Rtmp8IKu99/RLIBS_77d8215b7bce/SparkR/tests/testthat/test_basic.R:21
>>    2: sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap,
>> sparkExecutorEnvMap,
>>            sparkJars, sparkPackages)
>>    3: file(path, open = "rb")
>>
>>    Launching java with spark-submit command
>> C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd
>>  --driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj"
>> sparkr-shell D:\temp\RtmpABZLQj\backend_port16d085df97d88
>>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>    Using Spark's default log4j profile:
>> org/apache/spark/log4j-defaults.properties
>>    Setting default log level to "WARN".
>>    To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
>> use setLogLevel(newLevel).
>>    18/07/09 18:10:43 ERROR Shell: Failed to locate the winutils
>> binary in the hadoop binary path
>>    java.io.IOException: Could not locate executable
>> null\bin\winutils.exe in the Hadoop binaries.
>>        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
>>        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
>>        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
>>        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
>>        at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
>>        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
>>        at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
>>        at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
>>        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
>>        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
>>        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
>>        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
>>        at scala.Option.getOrElse(Option.scala:121)
>>        at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2427)
>>        at org.apache.spark.SparkContext.<init>(SparkContext.scala:295)
>>        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
>>        at org.apache.spark.api.r.RRDD$.createSparkContext(RRDD.scala:139)
>>        at org.apache.spark.api.r.RRDD.createSparkContext(RRDD.scala)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>        at java.lang.reflect.Method.invoke(Method.java:498)
>>        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>        at java.lang.Thread.run(Thread.java:748)
>>    18/07/09 18:10:43 WARN NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes
>> where applicable
>>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>    18/07/09 18:10:54 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:10:55 WARN BLAS: Failed to load implementation from:
>> com.github.fommil.netlib.NativeSystemBLAS
>>    18/07/09 18:10:55 WARN BLAS: Failed to load implementation from:
>> com.github.fommil.netlib.NativeRefBLAS
>>    18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from:
>> com.github.fommil.netlib.NativeSystemLAPACK
>>    18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from:
>> com.github.fommil.netlib.NativeRefLAPACK
>>    18/07/09 18:11:12 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:14 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:15 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:17 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:18 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:19 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:21 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:22 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:23 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:25 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:26 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:28 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:29 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:46 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:47 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:49 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    18/07/09 18:11:50 WARN WeightedLeastSquares: regParam is zero,
>> which might cause numerical instability and overfitting.
>>    == testthat results
>> ===========================================================
>>    OK: 6 SKIPPED: 0 FAILED: 1
>>    1. Error: create DataFrame from list or data.frame (@test_basic.R#21)
>>
>>    Error: testthat unit tests failed
>>    Execution halted
>>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>
>> Flavor: r-devel-linux-x86_64-debian-gcc
>> Check: tests, Result: ERROR
>>    Running 'run-all.R' [6s/15s]
>>  Running the tests in 'tests/run-all.R' failed.
>>  Complete output:
>>    > #
>>    > # Licensed to the Apache Software Foundation (ASF) under one or more
>>    > # contributor license agreements.  See the NOTICE file distributed with
>>    > # this work for additional information regarding copyright ownership.
>>    > # The ASF licenses this file to You under the Apache License, Version 2.0
>>    > # (the "License"); you may not use this file except in compliance with
>>    > # the License.  You may obtain a copy of the License at
>>    > #
>>    > #
>>    > # Unless required by applicable law or agreed to in writing, software
>>    > # distributed under the License is distributed on an "AS IS" BASIS,
>>    > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>    > # See the License for the specific language governing permissions and
>>    > # limitations under the License.
>>    > #
>>    >
>>    > library(testthat)
>>    > library(SparkR)
>>
>>    Attaching package: 'SparkR'
>>
>>    The following object is masked from 'package:testthat':
>>
>>        describe
>>
>>    The following objects are masked from 'package:stats':
>>
>>        cov, filter, lag, na.omit, predict, sd, var, window
>>
>>    The following objects are masked from 'package:base':
>>
>>        as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
>>        rank, rbind, sample, startsWith, subset, summary, transform, union
>>
>>    >
>>    > # Turn all warnings into errors
>>    > options("warn" = 2)
>>    >
>>    > if (.Platform$OS.type == "windows") {
>>    +  Sys.setenv(TZ = "GMT")
>>    + }
>>    >
>>    > # Setup global test environment
>>    > # Install Spark first to set SPARK_HOME
>>    >
>>    > # NOTE(shivaram): We set overwrite to handle any old tar.gz
>> files or directories left behind on
>>    > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
>>    > install.spark(overwrite = TRUE)
>>    Overwrite = TRUE: download and overwrite the tar file and Spark
>> package directory if they exist.
>>    Spark not found in the cache directory. Installation will start.
>>    MirrorUrl not provided.
>>    Looking for preferred site from apache website...
>>    Preferred mirror site found: http://mirror.klaus-uwe.me/apache/spark
>>    Downloading spark-2.2.2 for Hadoop 2.7 from:
>>    Content type 'application/octet-stream' length 200743115 bytes (191.4 MB)
>>    ==================================================
>>    downloaded 191.4 MB
>>
>>    Installing to /home/hornik/.cache/spark
>>    DONE.
>>    SPARK_HOME set to /home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7
>>    >
>>    > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
>>    > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
>>    > invisible(lapply(sparkRWhitelistSQLDirs,
>>    +                  function(x) { unlink(file.path(sparkRDir, x),
>> recursive = TRUE, force = TRUE)}))
>>    > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
>>    >
>>    > sparkRTestMaster <- "local[1]"
>>    > sparkRTestConfig <- list()
>>    > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
>>    +  sparkRTestMaster <- ""
>>    + } else {
>>    +  # Disable hsperfdata on CRAN
>>    +  old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
>>    +  Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
>>    +  tmpDir <- tempdir()
>>    +  tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
>>    +  sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
>>    +                            spark.executor.extraJavaOptions = tmpArg)
>>    + }
>>    >
>>    > test_package("SparkR")
>>    Launching java with spark-submit command
>> /home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/bin/spark-submit
>> --driver-java-options "-Djava.io.tmpdir=/tmp/Rtmpkd8Lf6" sparkr-shell
>> /tmp/Rtmpkd8Lf6/backend_port289f65a5f5e0
>>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>    Using Spark's default log4j profile:
>> org/apache/spark/log4j-defaults.properties
>>    Setting default log level to "WARN".
>>    To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
>> use setLogLevel(newLevel).
>>    WARNING: An illegal reflective access operation has occurred
>>    WARNING: Illegal reflective access by
>> io.netty.util.internal.PlatformDependent0$1
>> (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar)
>> to field java.nio.Buffer.address
>>    WARNING: Please consider reporting this to the maintainers of
>> io.netty.util.internal.PlatformDependent0$1
>>    WARNING: Use --illegal-access=warn to enable warnings of further
>> illegal reflective access operations
>>    WARNING: All illegal access operations will be denied in a future release
>>    18/07/09 17:58:50 WARN NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes
>> where applicable
>>    18/07/09 17:58:54 ERROR RBackendHandler: count on 13 failed
>>    java.lang.reflect.InvocationTargetException
>>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method)
>>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>>        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>        at java.base/java.lang.Thread.run(Thread.java:844)
>>    Caused by: java.lang.IllegalArgumentException
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>>        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
>>        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
>>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
>>        at scala.collection.immutable.List.foreach(List.scala:381)
>>        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
>>        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
>>        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
>>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
>>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>>        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>>        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
>>        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
>>        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
>>        at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
>>        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
>>        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
>>        at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
>>        ... 36 more
>>    ── 1. Error: create DataFrame from list or data.frame (@test_basic.R#26) ──────
>>    java.lang.IllegalArgumentException
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>>        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
>>        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
>>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
>>        at scala.collection.immutable.List.foreach(List.scala:381)
>>        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
>>        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
>>        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
>>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
>>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>>        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>>        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
>>        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
>>        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
>>        at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
>>        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
>>        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
>>        at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
>>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>>        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>        at java.base/java.lang.Thread.run(Thread.java:844)
>>    1: expect_equal(count(df), i) at /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:26
>>    2: quasi_label(enquo(object), label)
>>    3: eval_bare(get_expr(quo), get_env(quo))
>>    4: count(df)
>>    5: count(df)
>>    6: callJMethod([hidden email], "count")
>>    7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
>>    8: handleErrors(returnStatus, conn)
>>    9: stop(readString(conn))
>>
>>    18/07/09 17:58:54 ERROR RBackendHandler: fit on org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
>>    java.lang.reflect.InvocationTargetException
>>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>>        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>        at java.base/java.lang.Thread.run(Thread.java:844)
>>    Caused by: java.lang.IllegalArgumentException
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>>        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
>>        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
>>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
>>        at scala.collection.immutable.List.foreach(List.scala:381)
>>        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
>>        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
>>        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
>>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
>>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>>        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>>        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
>>        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>        at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
>>        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
>>        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>        at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
>>        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
>>        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
>>        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
>>        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
>>        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>>        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>>        at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
>>        at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
>>        at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
>>        at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
>>        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
>>        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
>>        ... 36 more
>>    ── 2. Error: spark.glm and predict (@test_basic.R#58) ─────────────────────────
>>    java.lang.IllegalArgumentException
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>>        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
>>        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
>>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
>>        at scala.collection.immutable.List.foreach(List.scala:381)
>>        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
>>        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
>>        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
>>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
>>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>>        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>>        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
>>        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>        at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
>>        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
>>        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>        at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
>>        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
>>        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
>>        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
>>        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
>>        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>>        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>>        at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
>>        at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
>>        at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
>>        at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
>>        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
>>        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
>>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>>        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>        at java.base/java.lang.Thread.run(Thread.java:844)
>>    1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:58
>>    2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
>>    3: .local(data, formula, ...)
>>    4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper", "fit", formula,
>>            [hidden email], tolower(family$family), family$link, tol, as.integer(maxIter), weightCol,
>>            regParam, as.double(var.power), as.double(link.power))
>>    5: invokeJava(isStatic = TRUE, className, methodName, ...)
>>    6: handleErrors(returnStatus, conn)
>>    7: stop(readString(conn))
>>
>>    ══ testthat results ═══════════════════════════════════════════════════════════
>>    OK: 0 SKIPPED: 0 FAILED: 2
>>    1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
>>    2. Error: spark.glm and predict (@test_basic.R#58)
>>
>>    Error: testthat unit tests failed
>>    Execution halted
>>
>> Flavor: r-devel-linux-x86_64-debian-gcc
>> Check: re-building of vignette outputs, Result: WARNING
>>  Error in re-building vignettes:
>>    ...
>>
>>  Attaching package: 'SparkR'
>>
>>  The following objects are masked from 'package:stats':
>>
>>      cov, filter, lag, na.omit, predict, sd, var, window
>>
>>  The following objects are masked from 'package:base':
>>
>>      as.data.frame, colnames, colnames<-, drop, endsWith,
>>      intersect, rank, rbind, sample, startsWith, subset, summary,
>>      transform, union
>>
>>  Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>  Picked up _JAVA_OPTIONS: -XX:-UsePerfData
>>  Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
>>  Setting default log level to "WARN".
>>  To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
>>  WARNING: An illegal reflective access operation has occurred
>>  WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address
>>  WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.PlatformDependent0$1
>>  WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
>>  WARNING: All illegal access operations will be denied in a future release
>>  18/07/09 17:58:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>  18/07/09 17:59:07 ERROR RBackendHandler: dfToCols on org.apache.spark.sql.api.r.SQLUtils failed
>>  java.lang.reflect.InvocationTargetException
>>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
>>        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>>        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>>        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
>>        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
>>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
>>        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>>        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
>>        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
>>        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
>>        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>>        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>>        at java.base/java.lang.Thread.run(Thread.java:844)
>>  Caused by: java.lang.IllegalArgumentException
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>>        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
>>        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
>>        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
>>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
>>        at scala.collection.immutable.List.foreach(List.scala:381)
>>        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
>>        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
>>        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
>>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
>>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
>>        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>>        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>>        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
>>        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2865)
>>        at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2391)
>>        at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2391)
>>        at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
>>        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
>>        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
>>        at org.apache.spark.sql.Dataset.collect(Dataset.scala:2391)
>>        at org.apache.spark.sql.api.r.SQLUtils$.dfToCols(SQLUtils.scala:212)
>>        at org.apache.spark.sql.api.r.SQLUtils.dfToCols(SQLUtils.scala)
>>        ... 36 more
>>  Quitting from lines 102-104 (sparkr-vignettes.Rmd)
>>  Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
>>  java.lang.IllegalArgumentException
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
>>        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
>>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
>>        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
>>        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>>        at scala.collection.mutable.HashMap.fore
>>  Execution halted
>>
>> ---------------------------------------------------------------------
>> To unsubscribe e-mail: [hidden email]
>>

---------------------------------------------------------------------
To unsubscribe e-mail: [hidden email]

Re: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2

Shivaram Venkataraman
I don't think we need to respin 2.2.2. Given that 2.3.2 is on the way,
we can just submit that.

Shivaram
On Mon, Jul 9, 2018 at 6:19 PM Tom Graves <[hidden email]> wrote:

>
> Is there any way to push it to CRAN without this fix? I don't really want to respin 2.2.2 just for the test fix.
>
> Tom
>
> On Monday, July 9, 2018, 4:50:18 PM CDT, Shivaram Venkataraman <[hidden email]> wrote:
>
>
> Yes. I think Felix checked in a fix to skip tests run on Java
> versions that are not Java 8 (I believe the fix was in
> https://github.com/apache/spark/pull/21666, which is in 2.3.2)
>
> Shivaram
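[For reference, the guard that fix adds might look roughly like the sketch below. This is an illustration only, not the actual patch from the PR above: the java_version() helper is hypothetical, and the real change may detect the JVM version differently.]

    # Hedged sketch: skip SparkR tests unless the JVM is Java 8.
    # java_version() is a hypothetical helper; the actual fix in
    # https://github.com/apache/spark/pull/21666 may differ in detail.
    java_version <- function() {
      # `java -version` reports to stderr, e.g. 'java version "1.8.0_171"'
      out <- system2("java", "-version", stdout = TRUE, stderr = TRUE)
      regmatches(out[1], regexpr("[0-9]+\\.[0-9]+", out[1]))
    }

    skip_unless_java8 <- function() {
      ver <- java_version()
      if (!identical(ver, "1.8")) {
        testthat::skip(paste("Spark 2.x requires Java 8; found", ver))
      }
    }

[Calling skip_unless_java8() at the top of each test file would turn the Java 9 failures above into testthat skips instead of errors.]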
> On Mon, Jul 9, 2018 at 5:39 PM Sean Owen <[hidden email]> wrote:
> >
> > Yes, this flavor of error should only come up in Java 9. Spark doesn't support that. Is there any way to tell CRAN this should not be tested?
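[One conventional mechanism, sketched here purely as an illustration: testthat's skip_on_cran() skips a test whenever the NOT_CRAN environment variable is not set to "true", which is the convention CRAN machines follow, so wrapping the Spark-backed assertions in it would keep them from running there at all. The test body below is a made-up example, not the real test_basic.R code.]

    library(testthat)
    library(SparkR)

    test_that("create DataFrame from list or data.frame", {
      skip_on_cran()  # NOT_CRAN is unset on CRAN machines, so this test is skipped there
      sparkR.session(master = "local[1]", enableHiveSupport = FALSE)
      df <- createDataFrame(data.frame(a = 1:3))
      expect_equal(count(df), 3)
    })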
> >
> > On Mon, Jul 9, 2018, 4:17 PM Shivaram Venkataraman <[hidden email]> wrote:
> >>
> >> The upcoming 2.2.2 release was submitted to CRAN. I think there are
> >> some known issues on Windows, but does anybody know what the following
> >> error with Netty is?
> >>
> >> >    WARNING: Illegal reflective access by io.netty.util.internal.PlatformDependent0$1 (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar) to field java.nio.Buffer.address
> >>
> >> Thanks
> >> Shivaram
> >>
> >>
> >> ---------- Forwarded message ---------
> >> From: <[hidden email]>
> >> Date: Mon, Jul 9, 2018 at 12:12 PM
> >> Subject: [CRAN-pretest-archived] CRAN submission SparkR 2.2.2
> >> To: <[hidden email]>
> >> Cc: <[hidden email]>
> >>
> >>
> >> Dear maintainer,
> >>
> >> package SparkR_2.2.2.tar.gz does not pass the incoming checks
> >> automatically, please see the following pre-tests:
> >> Windows: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/Windows/00check.log>
> >> Status: 1 ERROR, 1 WARNING
> >> Debian: <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/Debian/00check.log>
> >> Status: 1 ERROR, 2 WARNINGs
> >>
> >> Last released version's CRAN status: ERROR: 1, OK: 1
> >> See: <https://CRAN.R-project.org/web/checks/check_results_SparkR.html>
> >>
> >> CRAN Web: <https://cran.r-project.org/package=SparkR>
> >>
> >> Please fix all problems and resubmit a fixed version via the webform.
> >> If you are not sure how to fix the problems shown, please ask for help
> >> on the R-package-devel mailing list:
> >> <https://stat.ethz.ch/mailman/listinfo/r-package-devel>
> >> If you are fairly certain the rejection is a false positive, please
> >> reply-all to this message and explain.
> >>
> >> More details are given in the directory:
> >> <https://win-builder.r-project.org/incoming_pretest/SparkR_2.2.2_20180709_175630/>
> >> The files will be removed after roughly 7 days.
> >>
> >> No strong reverse dependencies to be checked.
> >>
> >> Best regards,
> >> CRAN teams' auto-check service
> >> Flavor: r-devel-linux-x86_64-debian-gcc, r-devel-windows-ix86+x86_64
> >> Check: CRAN incoming feasibility, Result: WARNING
> >>  Maintainer: 'Shivaram Venkataraman <[hidden email]>'
> >>
> >>  New submission
> >>
> >>  Package was archived on CRAN
> >>
> >>  Insufficient package version (submitted: 2.2.2, existing: 2.3.0)
> >>
> >>  Possibly mis-spelled words in DESCRIPTION:
> >>    Frontend (4:10, 5:28)
> >>
> >>  CRAN repository db overrides:
> >>    X-CRAN-Comment: Archived on 2018-05-01 as check problems were not
> >>      corrected despite reminders.
> >>
> >>  Found the following (possibly) invalid URLs:
> >>    URL: http://spark.apache.org/docs/latest/api/R/mean.html
> >>      From: inst/doc/sparkr-vignettes.html
> >>      Status: 404
> >>      Message: Not Found
> >>
> >> Flavor: r-devel-windows-ix86+x86_64
> >> Check: running tests for arch 'x64', Result: ERROR
> >>    Running 'run-all.R' [175s]
> >>  Running the tests in 'tests/run-all.R' failed.
> >>  Complete output:
> >>    > #
> >>    > # Licensed to the Apache Software Foundation (ASF) under one or more
> >>    > # contributor license agreements.  See the NOTICE file distributed with
> >>    > # this work for additional information regarding copyright ownership.
> >>    > # The ASF licenses this file to You under the Apache License, Version 2.0
> >>    > # (the "License"); you may not use this file except in compliance with
> >>    > # the License.  You may obtain a copy of the License at
> >>    > #
> >>    > #    http://www.apache.org/licenses/LICENSE-2.0
> >>    > #
> >>    > # Unless required by applicable law or agreed to in writing, software
> >>    > # distributed under the License is distributed on an "AS IS" BASIS,
> >>    > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> >>    > # See the License for the specific language governing permissions and
> >>    > # limitations under the License.
> >>    > #
> >>    >
> >>    > library(testthat)
> >>    > library(SparkR)
> >>
> >>    Attaching package: 'SparkR'
> >>
> >>    The following object is masked from 'package:testthat':
> >>
> >>        describe
> >>
> >>    The following objects are masked from 'package:stats':
> >>
> >>        cov, filter, lag, na.omit, predict, sd, var, window
> >>
> >>    The following objects are masked from 'package:base':
> >>
> >>        as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
> >>        rank, rbind, sample, startsWith, subset, summary, transform, union
> >>
> >>    >
> >>    > # Turn all warnings into errors
> >>    > options("warn" = 2)
> >>    >
> >>    > if (.Platform$OS.type == "windows") {
> >>    +  Sys.setenv(TZ = "GMT")
> >>    + }
> >>    >
> >>    > # Setup global test environment
> >>    > # Install Spark first to set SPARK_HOME
> >>    >
> >>    > # NOTE(shivaram): We set overwrite to handle any old tar.gz files or directories left behind on
> >>    > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
> >>    > install.spark(overwrite = TRUE)
> >>    Overwrite = TRUE: download and overwrite the tar file and Spark package directory if they exist.
> >>    Spark not found in the cache directory. Installation will start.
> >>    MirrorUrl not provided.
> >>    Looking for preferred site from apache website...
> >>    Preferred mirror site found: http://mirror.dkd.de/apache/spark
> >>    Downloading spark-2.2.2 for Hadoop 2.7 from:
> >>    - http://mirror.dkd.de/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
> >>    trying URL 'http://mirror.dkd.de/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz'
> >>    Content type 'application/x-gzip' length 200743115 bytes (191.4 MB)
> >>    ==================================================
> >>    downloaded 191.4 MB
> >>
> >>    Installing to C:\Users\ligges\AppData\Local\Apache\Spark\Cache
> >>    DONE.
> >>    SPARK_HOME set to C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7
> >>    >
> >>    > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
> >>    > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
> >>    > invisible(lapply(sparkRWhitelistSQLDirs,
> >>    +                  function(x) { unlink(file.path(sparkRDir, x), recursive = TRUE, force = TRUE)}))
> >>    > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
> >>    >
> >>    > sparkRTestMaster <- "local[1]"
> >>    > sparkRTestConfig <- list()
> >>    > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
> >>    +  sparkRTestMaster <- ""
> >>    + } else {
> >>    +  # Disable hsperfdata on CRAN
> >>    +  old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
> >>    +  Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
> >>    +  tmpDir <- tempdir()
> >>    +  tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
> >>    +  sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
> >>    +                            spark.executor.extraJavaOptions = tmpArg)
> >>    + }
> >>    >
> >>    > test_package("SparkR")
> >>    Launching java with spark-submit command C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd --driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj" sparkr-shell D:\temp\RtmpABZLQj\backend_port16d0838283f7e
> >>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
> >>    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> >>    Setting default log level to "WARN".
> >>    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> >>    -- 1. Error: create DataFrame from list or data.frame (@test_basic.R#21)  ------
> >>    cannot open the connection
> >>    1: sparkR.session(master = sparkRTestMaster, enableHiveSupport = FALSE, sparkConfig = sparkRTestConfig) at D:/temp/Rtmp8IKu99/RLIBS_77d8215b7bce/SparkR/tests/testthat/test_basic.R:21
> >>    2: sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, sparkExecutorEnvMap,
> >>            sparkJars, sparkPackages)
> >>    3: file(path, open = "rb")
> >>
> >>    Launching java with spark-submit command C:\Users\ligges\AppData\Local\Apache\Spark\Cache/spark-2.2.2-bin-hadoop2.7/bin/spark-submit2.cmd --driver-java-options "-Djava.io.tmpdir=D:\temp\RtmpABZLQj" sparkr-shell D:\temp\RtmpABZLQj\backend_port16d085df97d88
> >>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
> >>    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> >>    Setting default log level to "WARN".
> >>    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> >>    18/07/09 18:10:43 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
> >>    java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
> >>        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
> >>        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
> >>        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
> >>        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
> >>        at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
> >>        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
> >>        at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
> >>        at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
> >>        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
> >>        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
> >>        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
> >>        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2427)
> >>        at scala.Option.getOrElse(Option.scala:121)
> >>        at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2427)
> >>        at org.apache.spark.SparkContext.<init>(SparkContext.scala:295)
> >>        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2516)
> >>        at org.apache.spark.api.r.RRDD$.createSparkContext(RRDD.scala:139)
> >>        at org.apache.spark.api.r.RRDD.createSparkContext(RRDD.scala)
> >>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>        at java.lang.reflect.Method.invoke(Method.java:498)
> >>        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
> >>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
> >>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
> >>        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
> >>        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
> >>        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> >>        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
> >>        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
> >>        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
> >>        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
> >>        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
> >>        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
> >>        at java.lang.Thread.run(Thread.java:748)
> >>    18/07/09 18:10:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> >>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
> >>    18/07/09 18:10:54 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:10:55 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
> >>    18/07/09 18:10:55 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
> >>    18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
> >>    18/07/09 18:10:55 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
> >>    18/07/09 18:11:12 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:14 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:15 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:17 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:18 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:19 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:21 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:22 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:23 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:25 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:26 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:28 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:29 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:46 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:47 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:49 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    18/07/09 18:11:50 WARN WeightedLeastSquares: regParam is zero, which might cause numerical instability and overfitting.
> >>    == testthat results ===========================================================
> >>    OK: 6 SKIPPED: 0 FAILED: 1
> >>    1. Error: create DataFrame from list or data.frame (@test_basic.R#21)
> >>
> >>    Error: testthat unit tests failed
> >>    Execution halted
> >>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
> >>
> >> Flavor: r-devel-linux-x86_64-debian-gcc
> >> Check: tests, Result: ERROR
> >>    Running 'run-all.R' [6s/15s]
> >>  Running the tests in 'tests/run-all.R' failed.
> >>  Complete output:
> >>    > #
> >>    > # Licensed to the Apache Software Foundation (ASF) under one or more
> >>    > # contributor license agreements.  See the NOTICE file distributed with
> >>    > # this work for additional information regarding copyright ownership.
> >>    > # The ASF licenses this file to You under the Apache License, Version 2.0
> >>    > # (the "License"); you may not use this file except in compliance with
> >>    > # the License.  You may obtain a copy of the License at
> >>    > #
> >>    > #    http://www.apache.org/licenses/LICENSE-2.0
> >>    > #
> >>    > # Unless required by applicable law or agreed to in writing, software
> >>    > # distributed under the License is distributed on an "AS IS" BASIS,
> >>    > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> >>    > # See the License for the specific language governing permissions and
> >>    > # limitations under the License.
> >>    > #
> >>    >
> >>    > library(testthat)
> >>    > library(SparkR)
> >>
> >>    Attaching package: 'SparkR'
> >>
> >>    The following object is masked from 'package:testthat':
> >>
> >>        describe
> >>
> >>    The following objects are masked from 'package:stats':
> >>
> >>        cov, filter, lag, na.omit, predict, sd, var, window
> >>
> >>    The following objects are masked from 'package:base':
> >>
> >>        as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
> >>        rank, rbind, sample, startsWith, subset, summary, transform, union
> >>
> >>    >
> >>    > # Turn all warnings into errors
> >>    > options("warn" = 2)
> >>    >
> >>    > if (.Platform$OS.type == "windows") {
> >>    +  Sys.setenv(TZ = "GMT")
> >>    + }
> >>    >
> >>    > # Setup global test environment
> >>    > # Install Spark first to set SPARK_HOME
> >>    >
> >>    > # NOTE(shivaram): We set overwrite to handle any old tar.gz
> >> files or directories left behind on
> >>    > # CRAN machines. For Jenkins we should already have SPARK_HOME set.
> >>    > install.spark(overwrite = TRUE)
> >>    Overwrite = TRUE: download and overwrite the tar file and Spark
> >> package directory if they exist.
> >>    Spark not found in the cache directory. Installation will start.
> >>    MirrorUrl not provided.
> >>    Looking for preferred site from apache website...
> >>    Preferred mirror site found: http://mirror.klaus-uwe.me/apache/spark
> >>    Downloading spark-2.2.2 for Hadoop 2.7 from:
> >>    - http://mirror.klaus-uwe.me/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
> >>    trying URL 'http://mirror.klaus-uwe.me/apache/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz'
> >>    Content type 'application/octet-stream' length 200743115 bytes (191.4 MB)
> >>    ==================================================
> >>    downloaded 191.4 MB
> >>
> >>    Installing to /home/hornik/.cache/spark
> >>    DONE.
> >>    SPARK_HOME set to /home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7
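[Editor's note: the overwrite = TRUE behaviour described above can also be
approximated by hand when a stale download is suspected. A minimal sketch,
assuming the default cache directory shown in this log (~/.cache/spark):

    # Clear any partial tarball or unpacked directory, then reinstall.
    cache_dir <- file.path(Sys.getenv("HOME"), ".cache", "spark")
    unlink(file.path(cache_dir, "spark-2.2.2-bin-hadoop2.7"), recursive = TRUE)
    unlink(file.path(cache_dir, "spark-2.2.2-bin-hadoop2.7.tgz"))
    SparkR::install.spark(overwrite = TRUE)
]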
> >>    >
> >>    > sparkRDir <- file.path(Sys.getenv("SPARK_HOME"), "R")
> >>    > sparkRWhitelistSQLDirs <- c("spark-warehouse", "metastore_db")
> >>    > invisible(lapply(sparkRWhitelistSQLDirs,
> >>    +                  function(x) { unlink(file.path(sparkRDir, x),
> >> recursive = TRUE, force = TRUE)}))
> >>    > sparkRFilesBefore <- list.files(path = sparkRDir, all.files = TRUE)
> >>    >
> >>    > sparkRTestMaster <- "local[1]"
> >>    > sparkRTestConfig <- list()
> >>    > if (identical(Sys.getenv("NOT_CRAN"), "true")) {
> >>    +  sparkRTestMaster <- ""
> >>    + } else {
> >>    +  # Disable hsperfdata on CRAN
> >>    +  old_java_opt <- Sys.getenv("_JAVA_OPTIONS")
> >>    +  Sys.setenv("_JAVA_OPTIONS" = paste("-XX:-UsePerfData", old_java_opt))
> >>    +  tmpDir <- tempdir()
> >>    +  tmpArg <- paste0("-Djava.io.tmpdir=", tmpDir)
> >>    +  sparkRTestConfig <- list(spark.driver.extraJavaOptions = tmpArg,
> >>    +                            spark.executor.extraJavaOptions = tmpArg)
> >>    + }
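[Editor's note: this branch is what makes CRAN runs differ from Jenkins runs.
On CRAN (NOT_CRAN unset) the tests pin the master to local[1], disable
hsperfdata via _JAVA_OPTIONS, and redirect java.io.tmpdir into R's session
tempdir(). A sketch of how such a configuration is presumably consumed, if the
test helpers hand it to sparkR.session():

    sparkR.session(master = sparkRTestMaster, sparkConfig = sparkRTestConfig)
]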
> >>    >
> >>    > test_package("SparkR")
> >>    Launching java with spark-submit command
> >> /home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/bin/spark-submit
> >> --driver-java-options "-Djava.io.tmpdir=/tmp/Rtmpkd8Lf6" sparkr-shell
> >> /tmp/Rtmpkd8Lf6/backend_port289f65a5f5e0
> >>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
> >>    Picked up _JAVA_OPTIONS: -XX:-UsePerfData
> >>    Using Spark's default log4j profile:
> >> org/apache/spark/log4j-defaults.properties
> >>    Setting default log level to "WARN".
> >>    To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
> >> use setLogLevel(newLevel).
> >>    WARNING: An illegal reflective access operation has occurred
> >>    WARNING: Illegal reflective access by
> >> io.netty.util.internal.PlatformDependent0$1
> >> (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar)
> >> to field java.nio.Buffer.address
> >>    WARNING: Please consider reporting this to the maintainers of
> >> io.netty.util.internal.PlatformDependent0$1
> >>    WARNING: Use --illegal-access=warn to enable warnings of further
> >> illegal reflective access operations
> >>    WARNING: All illegal access operations will be denied in a future release
> >>    18/07/09 17:58:50 WARN NativeCodeLoader: Unable to load
> >> native-hadoop library for your platform... using builtin-java classes
> >> where applicable
> >>    18/07/09 17:58:54 ERROR RBackendHandler: count on 13 failed
> >>    java.lang.reflect.InvocationTargetException
> >>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> Method)
> >>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
> >>        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
> >>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
> >>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
> >>        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
> >>        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
> >>        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> >>        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
> >>        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
> >>        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
> >>        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
> >>        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
> >>        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
> >>        at java.base/java.lang.Thread.run(Thread.java:844)
> >>    Caused by: java.lang.IllegalArgumentException
> >>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
> >>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
> >>        at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
> >>        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
> >>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:443)
> >>        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:426)
> >>        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> >>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
> >>        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
> >>        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
> >>        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
> >>        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
> >>        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> >>        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:426)
> >>        at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
> >>        at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
> >>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
> >>        at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
> >>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:257)
> >>        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:256)
> >>        at scala.collection.immutable.List.foreach(List.scala:381)
> >>        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:256)
> >>        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:156)
> >>        at org.apache.spark.SparkContext.clean(SparkContext.scala:2294)
> >>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2068)
> >>        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2094)
> >>        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
> >>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> >>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> >>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
> >>        at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
> >>        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:278)
> >>        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2439)
> >>        at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2438)
> >>        at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
> >>        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
> >>        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
> >>        at org.apache.spark.sql.Dataset.count(Dataset.scala:2438)
> >>        ... 36 more
> >>    ── 1. Error: create DataFrame from list or data.frame
> >> (@test_basic.R#26)  ──────
> >>    java.lang.IllegalArgumentException
> >>        [same org.apache.xbean.asm5.ClassReader/ClosureCleaner frames down
> >> to Dataset.count as in the "count on 13 failed" trace above, followed by
> >> the same reflection and Netty channelRead frames; elided]
> >>    1: expect_equal(count(df), i) at
> >> /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:26
> >>    2: quasi_label(enquo(object), label)
> >>    3: eval_bare(get_expr(quo), get_env(quo))
> >>    4: count(df)
> >>    5: count(df)
> >>    6: callJMethod(x@sdf, "count")
> >>    7: invokeJava(isStatic = FALSE, objId$id, methodName, ...)
> >>    8: handleErrors(returnStatus, conn)
> >>    9: stop(readString(conn))
> >>
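[Editor's note: the java.base/ module prefixes and the Thread.java:844 frame in
this Debian trace point to a Java 9 or newer JVM, whereas the ASM5 ClassReader
bundled with Spark 2.2 (org.apache.xbean.asm5) only accepts class files up to
the Java 8 format and throws IllegalArgumentException on newer ones. That would
explain the ClosureCleaner failure above, since Spark 2.2.x supports Java 8
only. A hedged diagnostic to run on the check machine:

    # Print the JVM that spark-submit will pick up; a "9" or later version
    # string here would be consistent with the ClassReader failure in this log.
    java_home <- Sys.getenv("JAVA_HOME", unset = NA)
    java_bin  <- if (!is.na(java_home)) file.path(java_home, "bin", "java") else "java"
    writeLines(system2(java_bin, "-version", stdout = TRUE, stderr = TRUE))
]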
> >>    18/07/09 17:58:54 ERROR RBackendHandler: fit on
> >> org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed
> >>    java.lang.reflect.InvocationTargetException
> >>        [same reflection and Netty channelRead frames as in the "count on
> >> 13 failed" trace above; elided]
> >>    Caused by: java.lang.IllegalArgumentException
> >>        [same ClassReader/ClosureCleaner frames down to RDD.collect as in
> >> the traces above; elided]
> >>        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
> >>        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:373)
> >>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> >>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> >>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
> >>        at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:372)
> >>        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
> >>        at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1204)
> >>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> >>        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> >>        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
> >>        at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1203)
> >>        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:113)
> >>        at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:88)
> >>        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:153)
> >>        at org.apache.spark.ml.Pipeline$$anonfun$fit$2.apply(Pipeline.scala:149)
> >>        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> >>        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> >>        at scala.collection.IterableViewLike$Transformed$class.foreach(IterableViewLike.scala:44)
> >>        at scala.collection.SeqViewLike$AbstractTransformed.foreach(SeqViewLike.scala:37)
> >>        at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:149)
> >>        at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:198)
> >>        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$.fit(GeneralizedLinearRegressionWrapper.scala:81)
> >>        at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper.fit(GeneralizedLinearRegressionWrapper.scala)
> >>        ... 36 more
> >>    ── 2. Error: spark.glm and predict (@test_basic.R#58)
> >> ─────────────────────────
> >>    java.lang.IllegalArgumentException
> >>        [same ClassReader/ClosureCleaner, StringIndexer/Pipeline/RFormula
> >> and GeneralizedLinearRegressionWrapper.fit frames as in the "fit on
> >> org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper failed" trace
> >> above, followed by the same reflection and Netty channelRead frames;
> >> elided]
> >>    1: spark.glm(training, Sepal_Width ~ Sepal_Length + Species) at
> >> /srv/hornik/tmp/CRAN/SparkR.Rcheck/SparkR/tests/testthat/test_basic.R:58
> >>    2: spark.glm(training, Sepal_Width ~ Sepal_Length + Species)
> >>    3: .local(data, formula, ...)
> >>    4: callJStatic("org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper",
> >> "fit", formula,
> >>            data@sdf, tolower(family$family), family$link, tol,
> >> as.integer(maxIter), weightCol,
> >>            regParam, as.double(var.power), as.double(link.power))
> >>    5: invokeJava(isStatic = TRUE, className, methodName, ...)
> >>    6: handleErrors(returnStatus, conn)
> >>    7: stop(readString(conn))
> >>
> >>    ══ testthat results
> >> ═══════════════════════════════════════════════════════════
> >>    OK: 0 SKIPPED: 0 FAILED: 2
> >>    1. Error: create DataFrame from list or data.frame (@test_basic.R#26)
> >>    2. Error: spark.glm and predict (@test_basic.R#58)
> >>
> >>    Error: testthat unit tests failed
> >>    Execution halted
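[Editor's note: both Debian test failures share the same Caused by:
java.lang.IllegalArgumentException in org.apache.xbean.asm5.ClassReader, so a
single JVM-level fix would presumably clear both. A sketch, assuming a Java 8
installation at an illustrative path:

    # Point Spark's launcher at a Java 8 JVM before SparkR starts;
    # spark-submit honours JAVA_HOME when choosing the java binary.
    Sys.setenv(JAVA_HOME = "/usr/lib/jvm/java-8-openjdk-amd64")
    library(SparkR)
    sparkR.session(master = "local[1]")
]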
> >>
> >> Flavor: r-devel-linux-x86_64-debian-gcc
> >> Check: re-building of vignette outputs, Result: WARNING
> >>  Error in re-building vignettes:
> >>    ...
> >>
> >>  Attaching package: 'SparkR'
> >>
> >>  The following objects are masked from 'package:stats':
> >>
> >>      cov, filter, lag, na.omit, predict, sd, var, window
> >>
> >>  The following objects are masked from 'package:base':
> >>
> >>      as.data.frame, colnames, colnames<-, drop, endsWith,
> >>      intersect, rank, rbind, sample, startsWith, subset, summary,
> >>      transform, union
> >>
> >>  Picked up _JAVA_OPTIONS: -XX:-UsePerfData
> >>  Picked up _JAVA_OPTIONS: -XX:-UsePerfData
> >>  Using Spark's default log4j profile:
> >> org/apache/spark/log4j-defaults.properties
> >>  Setting default log level to "WARN".
> >>  To adjust logging level use sc.setLogLevel(newLevel). For SparkR,
> >> use setLogLevel(newLevel).
> >>  WARNING: An illegal reflective access operation has occurred
> >>  WARNING: Illegal reflective access by
> >> io.netty.util.internal.PlatformDependent0$1
> >> (file:/home/hornik/.cache/spark/spark-2.2.2-bin-hadoop2.7/jars/netty-all-4.0.43.Final.jar)
> >> to field java.nio.Buffer.address
> >>  WARNING: Please consider reporting this to the maintainers of
> >> io.netty.util.internal.PlatformDependent0$1
> >>  WARNING: Use --illegal-access=warn to enable warnings of further
> >> illegal reflective access operations
> >>  WARNING: All illegal access operations will be denied in a future release
> >>  18/07/09 17:58:59 WARN NativeCodeLoader: Unable to load
> >> native-hadoop library for your platform... using builtin-java classes
> >> where applicable
> >>  18/07/09 17:59:07 ERROR RBackendHandler: dfToCols on
> >> org.apache.spark.sql.api.r.SQLUtils failed
> >>  java.lang.reflect.InvocationTargetException
> >>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
> >> Method)
> >>        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
> >>        at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
> >>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
> >>        at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
> >>        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
> >>        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
> >>        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
> >>        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293
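[Editor's note: the vignette rebuild fails in the same way. dfToCols on
org.apache.spark.sql.api.r.SQLUtils is the backend call behind collect() on a
SparkDataFrame, so the first chunk in sparkr-vignettes.Rmd that collects data
hits the identical reflection error. An illustrative chunk that would already
trigger it:

    df <- createDataFrame(faithful)
    head(collect(df))
]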

---------------------------------------------------------------------
To unsubscribe e-mail: [hidden email]