Apache Spark Test 2
Spark supports which cluster managers?
Explanation:
The system currently supports several cluster managers (a short sketch of selecting one follows this list):
Standalone is a simple cluster manager included with Spark that makes it easy to set up a cluster.
Apache Mesos is a general-purpose cluster manager that can run Hadoop MapReduce and service applications as well. (Deprecated)
YARN is the Hadoop 2 resource manager.
Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
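As a minimal Scala sketch (the host names, ports, and application name below are placeholder assumptions), the cluster manager is selected through the master URL passed to the SparkSession builder:

    // Choosing a cluster manager by master URL; only one .master(...) call is used at a time.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("cluster-manager-demo")           // placeholder application name
      // .master("spark://master-host:7077")     // Spark standalone (placeholder host/port)
      // .master("yarn")                         // Hadoop YARN
      // .master("k8s://https://k8s-host:6443")  // Kubernetes (placeholder API server address)
      .master("local[*]")                        // local mode, no cluster manager at all
      .getOrCreate()

    spark.stop()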
Which of the following statements about Spark MLlib is correct?
Explanation:
MLlib is Spark's scalable machine learning library; it delivers efficient, high-quality algorithms at scale.
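For illustration only, here is a small Scala sketch of using MLlib; the tiny dataset and the two-cluster KMeans setup are invented for the example:

    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("mllib-sketch").master("local[*]").getOrCreate()

    // A tiny, made-up dataset of feature vectors.
    val data = spark.createDataFrame(Seq(
      Tuple1(Vectors.dense(0.0, 0.0)),
      Tuple1(Vectors.dense(0.1, 0.1)),
      Tuple1(Vectors.dense(9.0, 9.0)),
      Tuple1(Vectors.dense(9.1, 9.1))
    )).toDF("features")

    // Fit a KMeans model with two clusters and print the learned cluster centers.
    val model = new KMeans().setK(2).setSeed(1L).fit(data)
    model.clusterCenters.foreach(println)

    spark.stop()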
RDDs are immutable and fault-tolerant.
Explanation:
RDDs are fault-tolerant, immutable distributed collections of objects: once created, they cannot be changed. Each RDD is divided into logical partitions, which can be computed on different nodes of the cluster.
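A short Scala sketch of this behavior (the numbers and the partition count are arbitrary): transformations never modify an RDD in place, they always produce a new one.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("rdd-immutability").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // The data is split into 4 logical partitions that can be computed on different nodes.
    val numbers = sc.parallelize(1 to 10, numSlices = 4)

    // map() returns a new RDD; the original RDD is left untouched.
    val doubled = numbers.map(_ * 2)

    println(numbers.collect().mkString(","))   // 1,2,...,10 (unchanged)
    println(doubled.collect().mkString(","))   // 2,4,...,20 (new RDD)
    println(doubled.getNumPartitions)          // 4 (partitioning carried over)

    spark.stop()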
Which algorithm is not a solution for the regression problem?
Explanation:
Logistic regression is a supervised machine learning technique used to predict a categorical response; it is a solution to classification problems, not regression problems. Classification is the process of looking at data and assigning a class (or label) to it.
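To make the distinction concrete, here is a hedged Scala sketch using MLlib's LogisticRegression on a tiny invented dataset; it predicts a class label, which is classification rather than regression:

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("logreg-sketch").master("local[*]").getOrCreate()

    // Made-up training data: a categorical label (0.0 or 1.0) plus a feature vector.
    val training = spark.createDataFrame(Seq(
      (1.0, Vectors.dense(0.0, 1.1)),
      (0.0, Vectors.dense(2.0, 1.0)),
      (1.0, Vectors.dense(0.2, 1.3)),
      (0.0, Vectors.dense(2.2, 0.9))
    )).toDF("label", "features")

    // The model assigns a class (label) to each row, i.e. it performs classification.
    val model = new LogisticRegression().setMaxIter(10).fit(training)
    model.transform(training).select("features", "label", "prediction").show()

    spark.stop()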
Which of the following statements about SparkR is correct?
Explanation:
SparkR is an R package that provides a lightweight frontend for using Apache Spark from within R. As of Spark 3.2.1, SparkR provides a distributed data frame implementation that supports operations such as selection, filtering, and aggregation (similar to R data frames and dplyr), but on large datasets. SparkR also supports MLlib, Spark's distributed machine learning framework.
Which of the following statements regarding DataFrame is correct?
Explanation:
DataFrame is an easy-to-use Spark API for processing structured and semi-structured data. Every DataFrame has a schema, which acts as its blueprint. A schema can contain general data types such as string and integer types, as well as Spark-specific data types such as struct types.
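As a small illustrative Scala sketch (the column names and sample rows are invented), a schema can mix general types such as StringType and IntegerType with a Spark-specific nested StructType:

    import org.apache.spark.sql.types._
    import org.apache.spark.sql.{Row, SparkSession}

    val spark = SparkSession.builder().appName("schema-sketch").master("local[*]").getOrCreate()

    // The schema is the blueprint of the DataFrame: general types plus a nested struct type.
    val schema = StructType(Seq(
      StructField("name", StringType, nullable = false),
      StructField("age", IntegerType, nullable = true),
      StructField("address", StructType(Seq(
        StructField("city", StringType, nullable = true),
        StructField("zip", StringType, nullable = true)
      )), nullable = true)
    ))

    val rows = Seq(
      Row("Alice", 30, Row("Paris", "75001")),
      Row("Bob", 25, Row("Berlin", "10115"))
    )

    val df = spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)
    df.printSchema()
    df.show()

    spark.stop()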
Which of the following statements about Spark Shell is correct?
Explanation:
The Spark Shell provides interactive command-line environments for Scala and Python users. The SparkR shell has so far been thoroughly tested only with Spark standalone, not with every Hadoop distribution, so it is not covered here. The Spark Shell is also known as a REPL (Read/Eval/Print Loop).
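A hedged example of an interactive session (the README.md path is an assumption, relative to the Spark installation directory):

    // Launch the Scala REPL with ./bin/spark-shell (Python users launch ./bin/pyspark).
    // Inside the shell a SparkSession called `spark` and a SparkContext called `sc`
    // already exist, so expressions are evaluated interactively:
    val lines = sc.textFile("README.md")   // assumed path in the Spark home directory
    println(lines.count())                 // number of lines in the file
    println(lines.first())                 // first line of the file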
Is MLlib a deprecated library?
Explanation:
The Resilient Distributed Dataset (RDD), a fault-tolerant, read-only multiset of data objects distributed over a cluster of machines, is the architectural foundation of Apache Spark. The RDD API is not deprecated, but the use of the Dataset API is encouraged; likewise, MLlib itself is not deprecated, although its older RDD-based API is in maintenance mode in favor of the DataFrame-based API.
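A brief Scala sketch of the point about the two APIs (the sample strings are arbitrary): the RDD API still works, and converting to a Dataset moves to the encouraged API:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("rdd-vs-dataset").master("local[*]").getOrCreate()
    import spark.implicits._

    // RDD API: supported and not deprecated.
    val rdd = spark.sparkContext.parallelize(Seq("spark", "mllib", "rdd"))

    // Dataset API: the encouraged alternative; the RDD is converted into a Dataset.
    val ds = rdd.toDS()
    ds.filter(_.startsWith("s")).show()

    spark.stop()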
On RDD, the read operation is
Explanation:
In an RDD, the read operation can be either coarse-grained or fine-grained. A coarse-grained operation transforms the whole dataset at once, not a single element of it, whereas a fine-grained operation works with individual elements of the dataset.
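A small Scala sketch of the distinction (the key-value pairs are made up): a transformation such as mapValues is applied to the whole dataset at once, while a read such as lookup touches an individual element:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("grain-sketch").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3)))

    // Coarse-grained: the transformation rewrites the entire dataset into a new RDD;
    // there is no way to update just one element in place.
    val incremented = pairs.mapValues(_ + 1)

    // Fine-grained read: look up the value for a single key.
    println(incremented.lookup("b"))   // the single value for key "b" (3)

    spark.stop()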