Apache Spark™ - Lightning-fast unified analytics engine

Latest News
  • Spark 3.0.0 released (Jun 18, 2020)
  • Spark+AI Summit (June 22-25th, 2020, VIRTUAL) agenda posted (Jun 15, 2020)
  • Spark 2.4.6 released (Jun 05, 2020)
  • Spark 2.4.5 released (Feb 08, 2020)

Archive

Apache Spark™ is a unified analytics engine for large-scale data processing.

Speed

Run workloads 100x faster.

Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine.

Logistic regression in Hadoop and Spark
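
As an illustration of the workload behind that chart, here is a minimal sketch of training a logistic regression model with Spark's MLlib Python API. The input path assumes the sample LIBSVM dataset that ships with a Spark distribution, and the hyperparameters are arbitrary.

from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("lr-sketch").getOrCreate()

# Sample dataset bundled with Spark releases; adjust the path to your install.
training = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

# Arbitrary hyperparameters, chosen only for this sketch.
lr = LogisticRegression(maxIter=10, regParam=0.01)
model = lr.fit(training)

# Inspect the fitted model.
print(model.coefficients)
print(model.intercept)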

Ease of Use

Write applications quickly in Java, Scala, Python, R, and SQL.

Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells.

df = spark.read.json("logs.json")
df.where("age > 21")
  .select("name.first").show()
Spark's Python DataFrame API
Read JSON files with automatic schema inference
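
Because DataFrames and SQL share the same engine, the same query can also be issued as SQL. A minimal sketch, assuming the df defined above; the temporary view name "logs" is an arbitrary choice.

# Register the DataFrame as a temporary view and query it with SQL.
df.createOrReplaceTempView("logs")
spark.sql("SELECT name.first FROM logs WHERE age > 21").show()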

Generality

Combine SQL, streaming, and complex analytics.

Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.
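
As a small sketch of that combination, the snippet below runs ordinary DataFrame operators over a live stream using Structured Streaming, the DataFrame-based streaming API. The socket host and port are placeholders (for a quick test you could feed them with nc -lk 9999).

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("generality-sketch").getOrCreate()

# Streaming source; host and port are placeholders.
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# The same grouping/aggregation operators used on static tables work here.
counts = lines.groupBy("value").count()

# Print running counts to the console as new data arrives.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()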

Runs Everywhere

Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources.

You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
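
The application code stays the same regardless of where it runs; only the master URL and storage paths change. A minimal sketch, in which the master URL, namenode address, and dataset path are all placeholders:

from pyspark.sql import SparkSession

# Placeholder master URL; alternatives include "yarn",
# "k8s://https://<api-server>:<port>", or "local[*]" on a laptop.
spark = (SparkSession.builder
         .appName("runs-everywhere-sketch")
         .master("spark://master-host:7077")
         .getOrCreate())

# Placeholder HDFS location; other sources (S3, Alluxio, Hive tables, ...)
# are accessed through their own connectors.
events = spark.read.parquet("hdfs://namenode:8020/data/events")
events.printSchema()
print(events.count())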

Community

Spark is used at a wide range of organizations to process large datasets. You can find many example use cases on the Powered By page.

There are many ways to reach the community:

  • Use the mailing lists to ask questions.
  • In-person events include numerous meetup groups and conferences.
  • We use JIRA for issue tracking.

Contributors

Apache Spark is built by a wide set of developers from over 300 companies. Since 2009, more than 1200 developers have contributed to Spark!

The project's committers come from more than 25 organizations.

If you'd like to participate in Spark, or contribute to the libraries on top of it, learn how to contribute.

Getting Started

Learning Apache Spark is easy whether you come from a Java, Scala, Python, R, or SQL background:

  • Download the latest release: you can run Spark locally on your laptop (a minimal local-mode sketch follows this list).
  • Read the quick start guide.
  • Learn how to deploy Spark on a cluster.
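
A minimal local-mode sketch of that first step, assuming pyspark is installed (for example via pip install pyspark); no cluster is needed.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")          # use all cores on the local machine
         .appName("quickstart-sketch")
         .getOrCreate())

# A tiny computation to confirm the installation works.
spark.range(1000).selectExpr("sum(id) AS total").show()
spark.stop()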