Serverless proxy for Spark clusters

Overview

Hydrosphere Mist

Join the chat at https://gitter.im/Hydrospheredata/mist

Hydrosphere Mist is a serverless proxy for Spark clusters. Mist provides a new functional programming framework and deployment model for Spark applications.

Please see our quick start guide and documentation.

Features:

  • Spark Function as a Service. Deploy Spark functions rather than notebooks or scripts.
  • Spark Cluster and Session management. Fully managed Spark sessions backed by on-demand EMR, Hortonworks, Cloudera, DC/OS and vanilla Spark clusters.
  • Typesafe programming framework that clearly defines inputs and outputs of every Spark job.
  • REST HTTP & Messaging (MQTT, Kafka) API for Scala & Python Spark jobs.
  • Multi-cluster mode: seamless Spark cluster on-demand provisioning, autoscaling and termination (pending), i.e. a cluster of Spark clusters.

It creates a unified API layer for building enterprise solutions and microservices on top of Spark functions.
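
A minimal Mist function in Scala might look like the sketch below. It follows the MistFn/Handle API named in the release notes further down this page (withArgs, arg, onSparkContext, asHandle); exact imports and signatures may vary between versions.

    import mist.api._
    import mist.api.encoding.defaults._
    import org.apache.spark.SparkContext

    // Estimates Pi from `samples` random points. Inputs and outputs are typed,
    // so Mist can validate arguments and encode the result automatically.
    object HelloMist extends MistFn {
      override def handle: Handle = {
        withArgs(arg[Int]("samples"))
          .onSparkContext((n: Int, sc: SparkContext) => {
            val hits = sc.parallelize(1 to n).filter { _ =>
              val x = math.random
              val y = math.random
              x * x + y * y < 1
            }.count()
            4.0 * hits / n
          })
          .asHandle
      }
    }

Once deployed, such a function is invoked over HTTP or a messaging API with a plain JSON payload, as the curl examples in the comments below show.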

Mist use cases

High Level Architecture

Contact

Please report bugs/problems to: https://github.com/Hydrospheredata/mist/issues.

http://hydrosphere.io/

LinkedIn

Facebook

Twitter

Comments
  • Dealing with LocalData and a lot of columns

    At https://github.com/Hydrospheredata/mist/blob/master/docs/use-cases/ml-realtime.md you demo how to use LocalData for real-time scoring.

    My pipeline (loaded via Spark pipeline persistence) tries to join the input record with additional data. This is not possible, as the only operation supported is withColumn. Would you suggest starting a long-running local Spark context here?

    Do you have plans to add this in the future?
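
    For concreteness, the enrichment this pipeline attempts, which LocalData's withColumn-only API cannot express, is an ordinary DataFrame join. A sketch with hypothetical column names:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("enrichment-sketch")
      .getOrCreate()
    import spark.implicits._

    // One incoming record to score, plus a reference table to join against.
    val input = Seq((1, 0.5)).toDF("id", "feature")
    val lookup = Seq((1, "extra-value")).toDF("id", "extra")

    // This left join is what LocalData cannot express today.
    val enriched = input.join(lookup, Seq("id"), "left")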

    opened by geoHeil 15
  • Mist Spark cluster mode issue in DC/OS via Marathon

    Dear All,

    We managed to deploy the Mist Docker image in DC/OS via Marathon using the following JSON configuration.

    {
      "volumes": null,
      "id": "/mist-job-server",
      "cmd": "/usr/share/mist/bin/mist-master start --config /config/docker.conf --router-config /config/router.conf --debug true",
      "args": null,
      "user": null,
      "env": null,
      "instances": 1,
      "cpus": 1,
      "mem": 2048,
      "disk": 500,
      "gpus": 0,
      "executor": null,
      "constraints": null,
      "fetch": null,
      "storeUrls": null,
      "backoffSeconds": 1,
      "backoffFactor": 1.15,
      "maxLaunchDelaySeconds": 3600,
      "container": {
        "docker": {
          "image": "hydrosphere/mist:0.12.3-2.1.1",
          "forcePullImage": true,
          "privileged": false,
          "portMappings": [
            { "containerPort": 2004, "protocol": "tcp", "servicePort": 10106 }
          ],
          "network": "BRIDGE"
        },
        "type": "DOCKER",
        "volumes": [
          { "containerPath": "/config", "hostPath": "/nfs/mist/config", "mode": "RW" },
          { "containerPath": "/jobs", "hostPath": "/nfs/mist/jobs", "mode": "RW" },
          { "containerPath": "/var/run/docker.sock", "hostPath": "/var/run/docker.sock", "mode": "RW" }
        ]
      },
      "healthChecks": null,
      "readinessChecks": null,
      "dependencies": null,
      "upgradeStrategy": { "minimumHealthCapacity": 1, "maximumOverCapacity": 1 },
      "labels": { "HAPROXY_GROUP": "external" },
      "acceptedResourceRoles": null,
      "residency": null,
      "secrets": null,
      "taskKillGracePeriodSeconds": null,
      "portDefinitions": [
        { "port": 10106, "protocol": "tcp", "labels": {} }
      ],
      "requirePorts": false
    }

    Now, we wanted to switch spark from local mode to cluster mode.

    Our docker.conf file looks as follows:

    mist {
      context-defaults.spark-conf = {
        spark.master = "local[4]"
        spark.jars.packages = "com.datastax.spark:spark-cassandra-connector_2.11:2.0.3"
        spark.cassandra.connection.host = "node-0.cassandra.mesos"
      }

      context.test.spark-conf = {
        spark.cassandra.connection.host = "node-0.cassandra.mesos"
        spark.jars.packages = "com.datastax.spark:spark-cassandra-connector_2.11:2.0.3"
      }

      http {
        on = true
        host = "0.0.0.0"
        port = 2004
      }

      workers.runner = "local"
    }

    To make Spark run in cluster mode, we changed the configuration to the following:

    mist {
      context-defaults.spark-conf = {
        spark.master = "mesos://spark.marathon.mesos:31921"
        spark.submit.deployMode = "cluster"
        spark.mesos.executor.docker.image = "mesosphere/spark:1.1.0-2.1.1-hadoop-2.6"
        spark.mesos.executor.home = "/opt/spark/dist"
        spark.jars.packages = "com.datastax.spark:spark-cassandra-connector_2.11:2.0.3"
        spark.cassandra.connection.host = "node-0.cassandra.mesos"
      }

      context.test.spark-conf = {
        spark.cassandra.connection.host = "node-0.cassandra.mesos"
        spark.jars.packages = "com.datastax.spark:spark-cassandra-connector_2.11:2.0.3"
      }

      http {
        on = true
        host = "0.0.0.0"
        port = 2004
      }

      workers.runner = "local" //????
    }

    Now we get the exception: mesos native library libmesos.so not found.

    Does anybody know what we are missing?

    Also, can anybody tell us what the valid values for workers.runner are? Do we have to change anything here?

    Best regards, Sriraman.

    opened by sreeraaman 12
  • Not able to detect custom (datasource) classpath in spark-sql with mist

    I have a custom data source which I use via sqlContext.read.format("c##.###.bigdata.fileprocessing.sparkjobs.fileloader.DataSource"). This works well via spark-submit in local & yarn mode. But when I invoke the same job via Mist, it throws the following exception:

    Failed to find data source: c.p.b.f.sparkjobs.fileloader.DataSource. Please find packages at http://spark-packages.org

    Curl command invoked:

    curl --header "Content-Type: application/json" -X POST http://localhost:2004/api/fileProcess --data '{"path": "/root/fileprocessing/bigdata-fileprocessing-all.jar", "className": "c##.###.bigdata.fileprocessing.util.LoaderMistApp$", "parameters": {"configId": "FILE1"}, "namespace": "foo"}'

    bigdata-fileprocessing-all.jar has the DataSource class.

    Added the following in router.conf:

    fileProcess = {
      path = ${fp_path}
      className = "c##.###.bigdata.fileprocessing.util.LoaderMistApp$"
      namespace = "foo"
    }

    Mist version: 0.10.0, Spark version: 1.6.1

    Error Stacktrace: mist_stacktrace.txt

    opened by krishna2020 12
  • Failed to execute job because missing mist_worker.jar

    Context

    • Stack includes: 1 Spark Master (without mist) + Worker (without mist) + Mist Master
    • Spark version 2.4.0
    • Mist version 1.1.0, with the following context configuration:
    downtime="3600s"
    max-conn-failures=5
    max-parallel-jobs=1
    precreated=false
    run-options=""
    spark-conf {
        "spark.master"="spark://spark-master:7077"
        "spark.submit.deployMode"="cluster"
        "spark.dynamicAllocation.enabled"="true"
        "spark.shuffle.service.enabled"="true"
    }
    streaming-duration="1s"
    

    Log

    mist_1           | 2018-11-09 10:59:42,857 INFO  akka.event.slf4j.Slf4jLogger Slf4jLogger started
    spark-master_1   | 2018-11-09 10:59:42,937 INFO  org.apache.spark.deploy.master.Master Registering worker 29c1c06e51e3:9099 with 8 cores, 13.7 GB RAM
    spark-worker_1   | 2018-11-09 10:59:42,966 INFO  org.apache.spark.deploy.worker.Worker Successfully registered with master spark://spark-master:7077
    mist_1           | 2018-11-09 10:59:43,184 INFO  akka.remote.Remoting Starting remoting
    mist_1           | 2018-11-09 10:59:43,412 INFO  akka.remote.Remoting Remoting started; listening on addresses :[akka.tcp://[email protected]:2551]
    mist_1           | 2018-11-09 10:59:43,521 INFO  org.flywaydb.core.internal.util.VersionPrinter Flyway 4.1.1 by Boxfuse
    mist_1           | 2018-11-09 10:59:43,826 INFO  org.flywaydb.core.internal.dbsupport.DbSupportFactory Database: jdbc:h2:file:/opt/mist/data/recovery.db (H2 1.4)
    mist_1           | 2018-11-09 10:59:44,014 INFO  org.flywaydb.core.internal.command.DbValidate Successfully validated 2 migrations (execution time 00:00.018s)
    mist_1           | 2018-11-09 10:59:44,027 INFO  org.flywaydb.core.internal.command.DbMigrate Current version of schema "PUBLIC": 2
    mist_1           | 2018-11-09 10:59:44,027 INFO  org.flywaydb.core.internal.command.DbMigrate Schema "PUBLIC" is up to date. No migration necessary.
    mist_1           | 2018-11-09 10:59:44,540 INFO  io.hydrosphere.mist.master.MasterServer$ LogsSystem started
    mist_1           | 2018-11-09 10:59:46,042 WARN  org.apache.hadoop.util.NativeCodeLoader Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    mist_1           | 2018-11-09 10:59:46,995 INFO  akka.event.slf4j.Slf4jLogger Slf4jLogger started
    mist_1           | 2018-11-09 10:59:47,264 INFO  akka.remote.Remoting Starting remoting
    mist_1           | 2018-11-09 10:59:47,601 INFO  akka.remote.Remoting Remoting started; listening on addresses :[akka.tcp://[email protected]:40605]
    mist_1           | 2018-11-09 10:59:48,197 INFO  io.hydrosphere.mist.master.MasterServer$ FunctionInfoProvider started
    mist_1           | 2018-11-09 10:59:48,646 INFO  io.hydrosphere.mist.master.MasterServer$ Main service started
    mist_1           | 2018-11-09 10:59:49,686 INFO  io.hydrosphere.mist.master.MasterServer$ Http interface started
    mist_1           | 2018-11-09 10:59:49,692 INFO  io.hydrosphere.mist.master.Master$ Mist master started
    mist_1           | 2018-11-09 11:00:04,797 INFO  io.hydrosphere.mist.master.execution.ContextFrontend Starting executor k8s-master_96a1ce36-460a-4f3b-b8ba-735ddb2a33fe for k8s-master
    mist_1           | 2018-11-09 11:00:04,833 INFO  io.hydrosphere.mist.master.execution.ContextFrontend Context k8s-master - connected state(active connections: 0, max: 1)
    mist_1           | 2018-11-09 11:00:04,845 INFO  io.hydrosphere.mist.master.execution.workers.starter.LocalSparkSubmit Try submit local worker k8s-master_96a1ce36-460a-4f3b-b8ba-735ddb2a33fe_1, cmd: /opt/spark/bin/spark-submit --conf spark.eventLog.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.submit.deployMode=cluster --conf spark.master=spark://spark-master:7077 --conf spark.eventLog.dir=/data/spark/events --conf spark.dynamicAllocation.enabled=true --conf spark.eventLog.compress=true --class io.hydrosphere.mist.worker.Worker /opt/mist/mist-worker.jar --master 172.20.0.5:2551 --name k8s-master_96a1ce36-460a-4f3b-b8ba-735ddb2a33fe_1
    spark-master_1   | 2018-11-09 11:00:07,315 INFO  org.apache.spark.deploy.master.Master Driver submitted org.apache.spark.deploy.worker.DriverWrapper
    spark-master_1   | 2018-11-09 11:00:07,318 INFO  org.apache.spark.deploy.master.Master Launching driver driver-20181109110007-0000 on worker worker-20181109105941-29c1c06e51e3-9099
    spark-worker_1   | 2018-11-09 11:00:07,355 INFO  org.apache.spark.deploy.worker.Worker Asked to launch driver driver-20181109110007-0000
    spark-worker_1   | 2018-11-09 11:00:07,367 INFO  org.apache.spark.deploy.worker.DriverRunner Copying user jar file:/opt/mist/mist-worker.jar to /opt/spark/work/driver-20181109110007-0000/mist-worker.jar
    spark-worker_1   | 2018-11-09 11:00:07,390 INFO  org.apache.spark.util.Utils Copying /opt/mist/mist-worker.jar to /opt/spark/work/driver-20181109110007-0000/mist-worker.jar
    spark-worker_1   | 2018-11-09 11:00:07,400 INFO  org.apache.spark.deploy.worker.DriverRunner Killing driver process!
    spark-worker_1   | 2018-11-09 11:00:07,404 WARN  org.apache.spark.deploy.worker.Worker Driver driver-20181109110007-0000 failed with unrecoverable exception: java.nio.file.NoSuchFileException: /opt/mist/mist-worker.jar
    spark-master_1   | 2018-11-09 11:00:07,460 INFO  org.apache.spark.deploy.master.Master Removing driver: driver-20181109110007-0000
    spark-master_1   | 2018-11-09 11:00:12,769 INFO  org.apache.spark.deploy.master.Master 172.20.0.5:40290 got disassociated, removing it.
    spark-master_1   | 2018-11-09 11:00:12,770 INFO  org.apache.spark.deploy.master.Master 172.20.0.5:42207 got disassociated, removing it.
    mist_1           | 2018-11-09 11:00:12,897 ERROR io.hydrosphere.mist.master.execution.workers.ExclusiveConnector Could not start worker connection
    mist_1           | java.lang.RuntimeException: Process terminated with error java.lang.RuntimeException: Process exited with status code 255 and out: 2018-11-09 11:00:06,479 WARN  org.apache.hadoop.util.NativeCodeLoader Unable to load native-hadoop library for your platform... using builtin-java classes where applicable;2018-11-09 11:00:12,424 ERROR org.apache.spark.deploy.ClientEndpoint Exception from cluster was: java.nio.file.NoSuchFileException: /opt/mist/mist-worker.jar;java.nio.file.NoSuchFileException: /opt/mist/mist-worker.jar;	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86);	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102);	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107);	at sun.nio.fs.UnixCopyFile.copy(UnixCopyFile.java:526);	at sun.nio.fs.UnixFileSystemProvider.copy(UnixFileSystemProvider.java:253);	at java.nio.file.Files.copy(Files.java:1274);	at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$copyRecursive(Utils.scala:664);	at org.apache.spark.util.Utils$.copyFile(Utils.scala:635);	at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:719);	at org.apache.spark.util.Utils$.fetchFile(Utils.scala:509);	at org.apache.spark.deploy.worker.DriverRunner.downloadUserJar(DriverRunner.scala:155);	at org.apache.spark.deploy.worker.DriverRunner.prepareAndRunDriver(DriverRunner.scala:173);	at org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:92)
    mist_1           | 	at io.hydrosphere.mist.master.execution.workers.WorkerRunner$DefaultRunner$$anonfun$continueSetup$1$1.applyOrElse(WorkerRunner.scala:39)
    mist_1           | 	at io.hydrosphere.mist.master.execution.workers.WorkerRunner$DefaultRunner$$anonfun$continueSetup$1$1.applyOrElse(WorkerRunner.scala:39)
    mist_1           | 	at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:138)
    mist_1           | 	at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:136)
    mist_1           | 	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    mist_1           | 	at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
    mist_1           | 	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    mist_1           | 	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    mist_1           | 	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    mist_1           | 	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    

    Job log

    INFO 2018-11-09T11:58:22.53 [bec629b4-7cc4-482e-8ccb-9a7856f701d2] Waiting worker connection
    INFO 2018-11-09T11:58:22.534 [bec629b4-7cc4-482e-8ccb-9a7856f701d2] InitializedEvent(externalId=None)
    INFO 2018-11-09T11:58:22.534 [bec629b4-7cc4-482e-8ccb-9a7856f701d2] QueuedEvent
    ERROR 2018-11-09T11:59:02.636 [bec629b4-7cc4-482e-8ccb-9a7856f701d2] FailedEvent with Error: 
     java.lang.RuntimeException: Context is broken
    	at io.hydrosphere.mist.master.execution.JobActor$$anonfun$io$hydrosphere$mist$master$execution$JobActor$$initial$1.applyOrElse(JobActor.scala:59)
    	at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
    	at io.hydrosphere.mist.master.execution.JobActor.akka$actor$Timers$$super$aroundReceive(JobActor.scala:24)
    	at akka.actor.Timers$class.aroundReceive(Timers.scala:44)
    	at io.hydrosphere.mist.master.execution.JobActor.aroundReceive(JobActor.scala:24)
    	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:527)
    	at akka.actor.ActorCell.invoke(ActorCell.scala:496)
    	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
    	at akka.dispatch.Mailbox.run(Mailbox.scala:224)
    	at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
    	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    Caused by: java.lang.RuntimeException: Process terminated with error java.lang.RuntimeException: Process exited with status code 255 and out: 2018-11-09 11:58:56,046 WARN  org.apache.hadoop.util.NativeCodeLoader Unable to load native-hadoop library for your platform... using builtin-java classes where applicable;2018-11-09 11:59:01,870 ERROR org.apache.spark.deploy.ClientEndpoint Exception from cluster was: java.nio.file.NoSuchFileException: /opt/mist/mist-worker.jar;java.nio.file.NoSuchFileException: /opt/mist/mist-worker.jar;	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86);	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102);	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107);	at sun.nio.fs.UnixCopyFile.copy(UnixCopyFile.java:526);	at sun.nio.fs.UnixFileSystemProvider.copy(UnixFileSystemProvider.java:253);	at java.nio.file.Files.copy(Files.java:1274);	at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$copyRecursive(Utils.scala:664);	at org.apache.spark.util.Utils$.copyFile(Utils.scala:635);	at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:719);	at org.apache.spark.util.Utils$.fetchFile(Utils.scala:509);	at org.apache.spark.deploy.worker.DriverRunner.downloadUserJar(DriverRunner.scala:155);	at org.apache.spark.deploy.worker.DriverRunner.prepareAndRunDriver(DriverRunner.scala:173);at org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:92)
    	at io.hydrosphere.mist.master.execution.workers.WorkerRunner$DefaultRunner$$anonfun$continueSetup$1$1.applyOrElse(WorkerRunner.scala:39)
    	at io.hydrosphere.mist.master.execution.workers.WorkerRunner$DefaultRunner$$anonfun$continueSetup$1$1.applyOrElse(WorkerRunner.scala:39)
    	at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:138)
    	at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:136)
    	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    	at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
    	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    

    Local worker log

    2018-11-09 11:58:24,154 WARN  org.apache.hadoop.util.NativeCodeLoader Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    2018-11-09 11:58:30,109 ERROR org.apache.spark.deploy.ClientEndpoint Exception from cluster was: java.nio.file.NoSuchFileException: /opt/mist/mist-worker.jar
    java.nio.file.NoSuchFileException: /opt/mist/mist-worker.jar
    	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
    	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    	at sun.nio.fs.UnixCopyFile.copy(UnixCopyFile.java:526)
    	at sun.nio.fs.UnixFileSystemProvider.copy(UnixFileSystemProvider.java:253)
    	at java.nio.file.Files.copy(Files.java:1274)
    	at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$copyRecursive(Utils.scala:664)
    	at org.apache.spark.util.Utils$.copyFile(Utils.scala:635)
    	at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:719)
    	at org.apache.spark.util.Utils$.fetchFile(Utils.scala:509)
    	at org.apache.spark.deploy.worker.DriverRunner.downloadUserJar(DriverRunner.scala:155)
    	at org.apache.spark.deploy.worker.DriverRunner.prepareAndRunDriver(DriverRunner.scala:173)
    	at org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:92)
    

    Suspicious: the worker submit command seems to hard-code the $MIST_HOME path to mist-worker.jar, which does not exist on the Spark worker node:

    org.apache.spark.deploy.worker.DriverRunner Copying user jar file:/opt/mist/mist-worker.jar to /opt/spark/work/driver-20181109110007-0000/mist-worker.jar
    org.apache.spark.deploy.worker.Worker Driver driver-20181109110007-0000 failed with unrecoverable exception: java.nio.file.NoSuchFileException: /opt/mist/mist-worker.jar
    
    opened by zero88 11
  • Mist throwing exception for drill queries

    My jobs that run Drill queries are not working with Mist, and I keep getting the following exception. The same job works fine with spark-submit.

    17/06/15 05:59:04 WARN ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
    java.lang.NoClassDefFoundError: oadd/org/apache/log4j/Logger
    	at oadd.org.apache.zookeeper.Login.<init>(Login.java:44)
    	at oadd.org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslClient(ZooKeeperSaslClient.java:226)
    	at oadd.org.apache.zookeeper.client.ZooKeeperSaslClient.<init>(ZooKeeperSaslClient.java:131)
    	at oadd.org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:949)
    	at oadd.org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
    17/06/15 05:59:05 WARN ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect

    Here's a sample drill query job I wrote which is working fine with spark-submit,

    import java.sql.DriverManager

    def execute(): Map[String, Any] = {
      // Register the Drill JDBC driver and connect via ZooKeeper
      Class.forName("org.apache.drill.jdbc.Driver")
      val connection = DriverManager.getConnection(
        "jdbc:drill:zk=localhost:5181/drill/demo_mapr_com-drillbits;schema=dfs", "root", "mapr")
      val statement = connection.createStatement()
      val query = "select * from dfs.tmp.`employees` limit 10"
      val resultSet = statement.executeQuery(query)

      // Collect the first column of each row into the job result
      var list: List[String] = List()
      while (resultSet.next()) {
        println(resultSet.getString(1))
        list = list ++ List(resultSet.getString(1))
      }
      Map("result" -> list)
    }
    

    Also, find attached the response I get from the Mist API.

    response.txt

    opened by lalit10368 11
  • Running Mist with base as EMR spark Cluster

    I am running the Mist service on EMR, providing the following details in default.conf:

    mist {
      context-defaults.spark-conf = {
        spark.master = "yarn-client"
        spark.submit.deployMode = "client"
      }
    
      workers.runner = "manual"
    }
    

    It is giving me the error: "Worker ml initialization timeout: not being responsive for 2 minutes"

    When I changed to workers.runner = "local", it ran, but as far as I know it spawns workers on the host only.

    Why did the above error occur, and what additional configuration do I have to provide so that Mist uses the existing cluster's workers only?

    opened by utkarshsaraf19 9
  • Recovery.db.mv.db size crashes Mist

    I have set up a VM with the following configuration: Red Hat 7.4, 4 GB RAM.

    I have observed that the size of recovery.db.mv.db increases as I run more jobs, which is expected.

    Mist crashes with a Java heap space error when the size reaches 37 MB.

    I want to know the reason for this. Is it the browser loading this whole file, or Mist itself? And what configuration changes/factors do I have to keep in mind while deploying it?

    bug 
    opened by utkarshsaraf19 9
  • fast model evaluation

    You mentioned that you are working on a high throughput API for mist. Maybe https://github.com/combust-ml/mleap is helpful.

    Synchronous real-time API for high throughput - we are working on adding a model serving support for online queries with low latency.

    opened by geoHeil 9
  • Unable to submit job with v0.12.3

    While submitting a job via REST I get this error:

    Scala version 2.10.4, Mist version 0.12.3, Spark version 1.6.1

    17/07/20 08:30:02 ERROR Worker$: Fatal error
    java.lang.NoSuchMethodError: scala.runtime.IntRef.create(I)Lscala/runtime/IntRef;
    	at scopt.OptionParser.parse(options.scala:417)
    	at io.hydrosphere.mist.worker.WorkerArguments$.parse(Worker.scala:102)
    	at io.hydrosphere.mist.worker.WorkerArguments$.forceParse(Worker.scala:105)
    	at io.hydrosphere.mist.worker.Worker$.delayedEndpoint$io$hydrosphere$mist$worker$Worker$1(Worker.scala:118)
    	at io.hydrosphere.mist.worker.Worker$delayedInit$body.apply(Worker.scala:112)
    	at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
    	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
    	at scala.App$$anonfun$main$1.apply(App.scala:71)
    	at scala.App$$anonfun$main$1.apply(App.scala:71)
    	at scala.collection.immutable.List.foreach(List.scala:318)
    	at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
    	at scala.App$class.main(App.scala:71)
    	at io.hydrosphere.mist.worker.Worker$.main(Worker.scala:112)
    	at io.hydrosphere.mist.worker.Worker.main(Worker.scala)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:752)
    	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

    opened by sanjutoyou 7
  • Context JVMs in Yarn cluster mode for resource management

    Does Mist support running context JVMs in YARN cluster mode? This would enable resource management of these context JVM processes/drivers by the YARN ResourceManager. If not, is there any plan to do it in the future?

    opened by goelrajat 7
  • Mist always launch jobs with spark.master=local[*] despite function's default context

    Although the function was deployed with default context = volodymyr.bakhmatiuk_cluster_context, it is launched with the default local context. Please help me launch my job on my remote cluster!

    To launch the HelloMist function on my cluster, I did four steps following the documentation:

    1. I created a new configuration file:
    model = Context
    name = cluster_context
    data {
      spark-conf {
        spark.master = "spark://myhost.com:7077"
      }
    }
    
    2. I set context=cluster_context in hello_mist/scala/conf/20_function.conf
    3. Re-packaged everything with mvn package
    4. Deployed the changes to Mist with mist-cli apply -f conf

    Now I can check that the function's context is linked to my cluster:

    curl -H 'Content-Type: application/json' -X GET http://localhost:2004/v2/api/functions

    [{"name":"volodymyr.bakhmatiuk_hello-mist-java","execute":{"samples":{"type":"MInt"}},"path":"volodymyr.bakhmatiuk_hello-mist-java_0.0.1.jar","tags":[],"className":"HelloMist","defaultContext":"volodymyr.bakhmatiuk_cluster_context","lang":"java"}]

    And I can check that the configuration has been deployed:

    curl -H 'Content-Type: application/json' -X GET http://localhost:2004/v2/api/contexts/volodymyr.bakhmatiuk_cluster_context

    {"name":"volodymyr.bakhmatiuk_cluster_context","maxJobs":20,"workerMode":"shared","precreated":false,"sparkConf":{""spark.master"":"spark://myhost.com:7077"},"runOptions":"","downtime":"120s","streamingDuration":"1s"}

    Now I launch the job through WebMist and it finishes successfully. But it looks like WebMist launches the job on the local[*] Spark master, because nothing has been launched on the myhost.com cluster! Logs:

    18-02-09 17:11:42 [mist-akka.actor.default-dispatcher-16] INFO ere.mist.master.WorkersManager:107 Trying to start worker volodymyr.bakhmatiuk_cluster_context, for context: volodymyr.bakhmatiuk_cluster_context
    18-02-09 17:11:47 [mist-akka.actor.default-dispatcher-3] INFO ere.mist.master.WorkersManager:107 Received worker registration - WorkerRegistration(volodymyr.bakhmatiuk_cluster_context,akka.tcp://[email protected]:41197,Some(http://172.17.0.3:4040))
    18-02-09 17:11:47 [mist-akka.actor.default-dispatcher-26] INFO ere.mist.master.WorkersManager:107 Worker resolved - WorkerResolved(volodymyr.bakhmatiuk_cluster_context,akka.tcp://[email protected]:41197,Actor[akka.tcp://[email protected]:41197/user/worker-volodymyr.bakhmatiuk_cluster_context#-1055101121],Some(http://172.17.0.3:4040))
    18-02-09 17:11:47 [mist-akka.actor.default-dispatcher-16] INFO ere.mist.master.WorkersManager:107 Worker with volodymyr.bakhmatiuk_cluster_context is registered on akka.tcp://[email protected]:41197
    18-02-09 17:11:49 [mist-akka.actor.default-dispatcher-14] INFO ist.master.FrontendJobExecutor:107 Job has been started be02598b-f8ed-4f80-a583-255f478e610e
    18-02-09 17:11:50 [mist-akka.actor.default-dispatcher-3] INFO ist.master.FrontendJobExecutor:107 Job RunJobRequest(be02598b-f8ed-4f80-a583-255f478e610e,JobParams(volodymyr.bakhmatiuk_hello-mist-java_0.0.1.jar,HelloMist,Map(samples -> 7),execute)) id done with result JobSuccess(be02598b-f8ed-4f80-a583-255f478e610e,3.4285714285714284)
    

    P.S. My Spark cluster version is 2.1.1.

    I launch Mist this way:

    docker run -p 2004:2004 -v /var/run/docker.sock:/var/run/docker.sock hydrosphere/mist:1.0.0-RC8-2.2.0 mist

    opened by Volodymyr128 6
  • How to Integrate Mist API with AWS EMR?

    Hi guys, we have a requirement to integrate Mist with AWS EMR to run multiple jobs. Could you please suggest how to integrate the Mist API with AWS EMR? Thanks in advance.

    opened by pmiyandada 2
  • Run parallel jobs on-prem dynamic spark clusters

    I am new to Spark, and we have a requirement to set up a dynamic Spark cluster to run multiple jobs. According to some articles, we can achieve this by using the EMR (Amazon) service. Is there any way the same setup can be done locally? Once Spark clusters are available, with services running on different ports on different servers, how do we point Mist to a new Spark cluster for each job? Thanks in advance.

    opened by pmiyandada 0
  • Starting child for FunctionInfoProvider failed

    It shows an error when starting the child actor: initialization of FunctionInfoProvider failed with a timeout.

    2020-01-18 23:38:50 ERROR RestartSupervisor:159 - Starting child for FunctionInfoProvider failed
    java.lang.IllegalStateException: Initialization of FunctionInfoProvider failed of timeout
    	at io.hydrosphere.mist.master.jobs.ActorRefWaiter$IdentityActor$$anonfun$receive$1.apply$
    	at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
    	at io.hydrosphere.mist.master.jobs.ActorRefWaiter$IdentityActor.aroundReceive(FunctionIn$
    	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:527)
    	at akka.actor.ActorCell.invoke(ActorCell.scala:496)
    	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
    	at akka.dispatch.Mailbox.run(Mailbox.scala:224)
    	at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
    	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

    opened by daffydee13 6
  • PySpark - starting from Spark 2.4.1 python jobs don't work

    It seems that starting from 2.4.1, using an authentication token for the py4j GatewayServer became a requirement. Python jobs now fail with the following error:

    You are trying to pass an insecure Py4j gateway to Spark. This is not allowed as it is a security risk
    
    bug 
    opened by dos65 0
Releases (v1.1.3)
  • v1.1.3(Jul 24, 2019)

    Fixed from #547

    • Proper support of file-downloading job status: fix failure handling, bump mist-ui to 2.2.1
    • Http API: return error stack traces
    • Fix SQL building for job history
    • MistLib: Fix python test running
  • v1.1.2(Apr 2, 2019)

  • v1.1.1(Nov 13, 2018)

  • v1.1.0(Nov 8, 2018)

  • v1.0.0(Oct 31, 2018)

    :tada::tada::tada::tada: :100: Released :tada::tada::tada::tada:

    More than half a year after the first release candidate of 1.0.0, Mist finally became stable enough to be released as 1.0.0. Compared to the previous RC-18, this release contains only UI-related fixes.

  • v1.0.0-RC17(Sep 3, 2018)

    • Upgrade mist-ui to 2.0.1 (pagination for job list :fire:)
    • Provide full error message from job invocation #501, #508
    • Change default worker settings #513 : workerMode -> exclusive, maxJobs -> 1
    • Mqtt async interface - enable auto reconnect option #510
    • Fix problems with collecting logs from workers #496
    • Http api
      • add DELETE methods for v2/api/functions, v2/api/artifacts, v2/api/contexts #507
      • add startTimeout, performTimeout options for POST v2/api/functions/{id}/jobs (example sketch below)
    • Mist-lib
      • add default encoder for List #506
      • handle invalid types error in arguments with default values #497
      • fix encoder for boolean values #492
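
    A hypothetical invocation of the new job-start call with those options (host, port, function name and payload are placeholders, and passing the timeouts as query parameters is an assumption, not something these notes confirm):

    curl -H 'Content-Type: application/json' \
         -X POST 'http://localhost:2004/v2/api/functions/hello-mist/jobs?startTimeout=30s&performTimeout=5m' \
         --data '{"samples": 10000}'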
  • v1.0.0-RC16(May 24, 2018)

    This release contains a lot of fixes and improvements around user libraries for functions. The new mist-lib version for Scala/Java isn't compatible with previous versions, so upgrading Mist to v1.0.0-RC16 requires migrating and redeploying functions (migration example).

    The Python library was completely rewritten and is now called mistpy. It is also available on PyPI (documentation, example).

    • Python library improvements: #485, #441
    • Mistlib:
      • Explicit result type declarations in MistFn and Handle were removed. Handle declaration was divided into two parts: first build a RawHandle[A], where A is the resulting type, then build a Handle from it using the asHandle/toHandle methods and a JsEncoder (see the sketch after this list) - fix #440
      • #362 - default main method implementation, now it's possible to run mist functions directly from spark-submit
      • scala:
        • #473 - use default values in case classes for building argument extraction - mist.api.encoding.generic.extractorWithDefaults
      • java:
        • The Java-specific JMistFn and JHandle implementations were removed - use mist.api.MistFn and mist.api.Handle
        • #478 - onSparkSession and onSparkSessionWithHive methods added to the Java args API
        • #489 :
          • interactions with JsData from Java were improved
          • encode resulting value properly - RetVal was removed
    • #491 - fix worker termination
    • #484 - support contexts that were created before v1.0.0-RC15
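
    As a rough illustration of the two-part handle construction described above (a sketch only; exact builder signatures and encoder imports may differ between RC versions):

    import mist.api._
    import mist.api.encoding.defaults._
    import org.apache.spark.SparkContext

    object SumFn extends MistFn {
      override def handle: Handle = {
        // Building the typed body first yields a RawHandle[Double]...
        withArgs(arg[Seq[Int]]("numbers"))
          .onSparkContext((nums: Seq[Int], sc: SparkContext) => sc.parallelize(nums).sum())
          .asHandle // ...then an implicit JsEncoder[Double] turns it into a Handle
      }
    }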
  • v1.0.0-RC15(May 7, 2018)

    • Manual worker mode fixes:
      • Fix workers.manual.onStop behavior. Previously the onStop command was used to invoke a user action after the worker stopped; now it is used directly to stop the worker (the worker is expected to handle SIGKILL and notify the master node that it was stopped correctly)
      • Return WORKER_NAME env for onStop - #480
    • Fail not started jobs if a context was marked as broken - #479 , #445
    • Fix spark-submit command building - #472
  • v1.0.0-RC14(Mar 28, 2018)

    • Log collection improved - job logs now contain logs from Spark #435 #462
    • Added support for contexts on kubernetes-backed clusters #430 #460
    • Added support for python binary configuration #461 (spark.pyspark.python, spark.pyspark.python.driver)
    • Docker image contains mesos library by default #444 #465
    • Fixed job cancellation #469
  • v1.0.0-RC13(Mar 6, 2018)

    • Minor bug fixes after reworking master-worker communications (#452, #453)
    • Fix streaming jobs cancellation (#454, #328)
    • Fix kafka configuration (#447)
  • v1.0.0-RC12(Feb 26, 2018)

    • Master-worker communication reworked (#426)
      • Fix: #293, #294, #289, #263
      • Context changes are applied automatically (previously this could be achieved only by manually stopping the worker) #393
    • Fix python 3 jobs support #431
  • v1.0.0-RC11(Feb 15, 2018)

  • v1.0.0-RC10(Feb 15, 2018)

  • v1.0.0-RC9(Feb 9, 2018)

    • Fixed transmission of large jars between worker and master #408
    • Fixed spark-conf decoding/applying on sparkContext #411 #413
    • Dropped maven/hdfs function artifacts support
  • v1.0.0-RC8(Feb 6, 2018)

  • v1.0.0-RC7(Jan 30, 2018)

    • Added metrics to the status page
    • Fix getting the endpoints list - don't call the info provider with zero elements
    • Job log writing: resume the store flow when an error happens
    • Handle fatal errors from job loading and validation
    • Fix #373
    • Integration tests reworked
  • v1.0.0-RC6(Jan 19, 2018)

  • v1.0.0-RC5(Jan 16, 2018)

  • v1.0.0-RC4(Jan 12, 2018)

    • Improve case class support for arguments
    • Upgrade akka to 2.5.8
    • Minor fix for exclusive workers - handle more communication problems with the master
  • v1.0.0-RC3(Dec 28, 2017)

  • v1.0.0-RC2(Dec 25, 2017)

  • v1.0.0-RC1(Dec 18, 2017)

    • New library API for Scala, with added Java support. Features:

      • typesafe function definition
      • arguments validation
      • automatic encoding of a function's result

    • Artifacts HTTP API

    • Validation moved from the master to a special local Spark node

  • v0.13.3(Sep 17, 2017)

    • Added cluster tab
    • Added http method for getting worker by job
    • Added method for getting detailed info about worker
    • Make creation of integration tests easy (master is now embeddable)
  • v0.13.2(Sep 11, 2017)

    • Move examples to one folder
    • Remove ML jobs from this repository, as they have already been transferred to spark-ml-serving
    • Update UI version usage
    • Added a route for health checking (fix #270)
    • Context default settings are applied in the create method
    • Worker init configuration is requested from the master
    • Tune default settings for master and worker

    Fixes: #255, #262, #264, #265, #268, #270, #273, #276, #278, #297, #302, #303

  • v0.13.1(Aug 10, 2017)

  • v0.13.0(Aug 7, 2017)

    • New UI
    • Http api - added methods for managing contexts and endpoints (see docs)

    Minor fixes:

    • Added timeout on worker initialization #258
    • Job arguments validation at master #248
    • Fixed problems with scopt #245
    • Fixed configuration issues with contexts #240, #252
  • v0.12.3(Jul 11, 2017)

  • v0.12.2(Jun 29, 2017)

  • v0.12.0(May 29, 2017)

    This release contains bug fixes and minor improvements:

    • Fixed duplication of jobs #202
    • Fixed parameters mapping #198
    • Fixed logging configuration #172
    • Added publication via tar.gz
  • v0.11.0(Apr 27, 2017)
