Apache Spark Interview Questions for Freshers

Apache Spark Interview Questions
What Is Shark?
Most data users know only SQL and are not good at programming. Shark is a tool developed for people from a database background to access Spark capabilities through a Hive-like SQL interface. The Shark tool helps data users run Hive queries on Spark, offering compatibility with the Hive metastore, queries and data.
List Some Use Cases Where Spark Outperforms Hadoop In Processing?
Sensor Data Processing –
Apache Spark’s ‘In-memory computing’ works best here, as data is retrieved and combined from different sources.
Real-Time Querying –
Spark is preferred over Hadoop for real-time querying of data.
Stream Processing –
For processing logs and detecting frauds in live streams for alerts, Apache Spark is the best solution.
What Is A Sparse Vector?
A sparse vector has two parallel arrays – one for indices and the other for values. These vectors are used for storing only the non-zero entries, to save space.
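A minimal sketch using MLlib's Vectors factory methods (the size and values below are only illustrative):

    import org.apache.spark.mllib.linalg.Vectors

    // A vector of size 5 with non-zero entries at indices 0 and 3.
    // Only the indices array (0, 3) and the values array (1.0, 7.0) are stored.
    val sparse = Vectors.sparse(5, Array(0, 3), Array(1.0, 7.0))

    // The equivalent dense vector stores all five entries, including the zeros.
    val dense = Vectors.dense(1.0, 0.0, 0.0, 7.0, 0.0)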
What Is An RDD?
RDDs (Resilient Distributed Datasets) are the basic abstraction in Apache Spark and represent the data coming into the system in object format. RDDs are used for in-memory computations on large clusters in a fault-tolerant manner; a minimal creation sketch follows the list below. RDDs are read-only, partitioned collections of records that are –
Immutable –
RDDs cannot be altered.
Resilient –
If a node holding a partition fails, the data can be recovered on another node.
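A minimal sketch of creating RDDs, assuming a local SparkContext (the application name, master URL, and input path are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf().setAppName("rdd-example").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // 1. Parallelize an in-memory collection into a partitioned RDD.
    val numbers = sc.parallelize(Seq(1, 2, 3, 4, 5))

    // 2. Load an external dataset; each line of the file becomes a record.
    val lines = sc.textFile("hdfs:///path/to/input.txt")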
Explain About Transformations And Actions In The Context Of RDDs?
Transformations are functions executed on demand to produce a new RDD. Transformations are lazy and are only evaluated once an action is called. Some examples of transformations include map, filter and reduceByKey.
Actions return the results of RDD computations or transformations. After an action is performed, the data from the RDD moves back to the local machine (the driver). Some examples of actions include reduce, collect, first, and take.
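A minimal sketch, reusing the SparkContext sc from the previous example, showing lazy transformations followed by actions:

    val words = sc.parallelize(Seq("spark", "hadoop", "spark", "mesos"))

    // Transformations build a new RDD lazily; nothing executes yet.
    val counts = words
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // Actions trigger execution and return results to the driver.
    val result = counts.collect()   // e.g. Array((spark,2), (hadoop,1), (mesos,1))
    val first  = counts.first()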
What Are The Languages Supported By Apache Spark For Developing Big Data Applications?
Scala, Java, Python and R.
Can You Use Spark To Access And Analyse Data Stored In Cassandra Databases?
Yes, it is possible by using the Spark Cassandra Connector.
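A minimal sketch using the DataStax Spark Cassandra Connector; the connection host, keyspace and table names are placeholders:

    import com.datastax.spark.connector._
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("cassandra-example")
      .set("spark.cassandra.connection.host", "127.0.0.1")

    val sc = new SparkContext(conf)

    // cassandraTable is added to SparkContext by the connector's implicits.
    val rows = sc.cassandraTable("my_keyspace", "my_table")
    println(rows.count())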
Is It Possible To Run Apache Spark On Apache Mesos?
Yes, Apache Spark can be run on the hardware clusters managed by Mesos.
Explain About The Different Cluster Managers In Apache Spark?
The three different cluster managers supported in Apache Spark are:
YARN
Apache Mesos –
Has rich resource scheduling capabilities and is well suited to run Spark along with other applications. It is advantageous when several users run interactive shells because it scales down the CPU allocation between commands.
Standalone deployments –
Well suited for new deployments that only run Spark and are easy to set up.
How Can Spark Be Connected To Apache Mesos?
To connect Spark with Mesos:
Configure the Spark driver program to connect to Mesos. The Spark binary package should be in a location accessible by Mesos. (or)
Install Apache Spark in the same location as Apache Mesos and configure the property ‘spark.mesos.executor.home’ to point to the location where it is installed.
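A minimal sketch of the second approach; the Mesos master URL and install path are placeholders:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("mesos-example")
      // Point the driver at the Mesos master instead of YARN or standalone.
      .setMaster("mesos://mesos-master.example.com:5050")
      // Location where Spark is installed on the Mesos agents.
      .set("spark.mesos.executor.home", "/opt/spark")

    val sc = new SparkContext(conf)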
How Can You Minimize Data Transfers When Working With Spark?
Minimizing data transfers and avoiding shuffling helps write Spark programs that run in a fast and reliable manner.
The various ways in which data transfers can be minimized when working with Apache Spark are:
Using Broadcast Variables –
Broadcast variables enhance the efficiency of joins between small and large RDDs.
Using Accumulators –
Accumulators help update the values of variables in parallel while executing (see the sketch after this list).
The most common way, however, is to avoid ByKey operations, repartition, and any other operations that trigger shuffles.
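A minimal accumulator sketch, assuming Spark 2.x's longAccumulator API and an existing SparkContext sc; only the accumulated count travels back to the driver:

    // Count malformed records on the workers without shipping them to the driver.
    val badRecords = sc.longAccumulator("badRecords")

    sc.parallelize(Seq("1", "2", "oops", "4")).foreach { value =>
      if (scala.util.Try(value.toInt).isFailure) badRecords.add(1L)
    }

    println(badRecords.value)  // 1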
Why Is There A Need For Broadcast Variables When Working With Apache Spark?
These are read-only variables, present in an in-memory cache on every machine. When working with Spark, using broadcast variables eliminates the need to ship a copy of a variable with every task, so data can be processed faster. Broadcast variables help store a lookup table in memory, which enhances retrieval efficiency compared to an RDD lookup().
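A minimal sketch of a broadcast lookup table, assuming an existing SparkContext sc; the country-code map is illustrative:

    // Each executor receives one copy of the map instead of one copy per task.
    val countryNames = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))

    val codes = sc.parallelize(Seq("IN", "US", "IN"))
    val names = codes.map(code => countryNames.value.getOrElse(code, "Unknown"))

    names.collect()  // Array(India, United States, India)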
Is It Possible To Run Spark And Mesos Along With Hadoop?
Yes, it is possible to run Spark and Mesos with Hadoop by launching each of these as a separate service on the machines. Mesos acts as a unified scheduler that assigns tasks to either Spark or Hadoop.
What Is Lineage Graph?
The RDDs in Spark depend on one or more other RDDs. The representation of these dependencies between RDDs is known as the lineage graph. Lineage graph information is used to compute each RDD on demand, so that whenever part of a persistent RDD is lost, the lost data can be recovered using the lineage graph information.
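A minimal sketch showing how a lineage graph can be inspected with toDebugString, assuming an existing SparkContext sc:

    val base    = sc.parallelize(1 to 100)
    val doubled = base.map(_ * 2)
    val evens   = doubled.filter(_ % 4 == 0)

    // Prints the chain of parent RDDs Spark would replay to recompute a lost partition.
    println(evens.toDebugString)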
How Can You Trigger Automatic Clean-ups In Spark To Handle Accumulated Metadata?
You can trigger the clean-ups by setting the parameter ‘spark.cleaner.ttl’ or by dividing the long running jobs into different batches and writing the intermediary results to the disk.
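A minimal sketch of setting ‘spark.cleaner.ttl’ (in seconds) on the SparkConf; the one-hour value is only illustrative:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("cleanup-example")
      // Metadata older than one hour is periodically cleaned up.
      .set("spark.cleaner.ttl", "3600")

    val sc = new SparkContext(conf)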
Explain About The Major Libraries That Constitute The Spark Ecosystem?
Spark MLlib –
Machine learning library in Spark for commonly used learning algorithms like clustering, regression, classification, etc.
Spark Streaming –
This library is used to process real-time streaming data.
Spark GraphX –
Spark API for graph parallel computations with basic operators like joinVertices, subgraph, aggregateMessages, etc.
Spark SQL –
Helps execute SQL-like queries on Spark data using standard visualization or BI tools.
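A minimal Spark SQL sketch, assuming Spark 2.x's SparkSession; the input path, view name and column names are placeholders:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("sql-example").getOrCreate()

    // Register a DataFrame as a temporary view and query it with SQL.
    val people = spark.read.json("hdfs:///path/to/people.json")
    people.createOrReplaceTempView("people")

    spark.sql("SELECT name, age FROM people WHERE age > 30").show()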
