SPARK
Spark Interview Questions and Answers
Q1: When do you use Apache Spark? OR What are the benefits of Spark over MapReduce?
Ans:
-
Spark is really fast. As per the project's claims, it runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. It makes heavy use of RAM to produce results faster.
-
In the MapReduce paradigm, you write many MapReduce jobs and then tie these jobs together using Oozie or shell scripts. This mechanism is very time consuming, and MapReduce jobs have high latency.
-
Quite often, translating the output of one MR job into the input of another MR job requires writing additional code, because Oozie alone may not suffice.
-
In Spark, you can do everything from a single application or console (the pyspark or Scala shell) and get the results immediately. Switching between 'running something on the cluster' and 'doing something locally' is fairly easy and straightforward. This also means less context switching for the developer and more productivity.
-
Spark is, roughly speaking, MapReduce and Oozie put together.
Q2: Is there any point in learning MapReduce, then?
Ans: Yes, for the following reasons:
-
MapReduce is a paradigm used by many big data tools, including Spark. So, understanding the MapReduce paradigm and how to convert a problem into a series of MR tasks is very important.
-
When the data grows beyond what can fit into the memory of your cluster, the Hadoop MapReduce paradigm is still very relevant.
-
Almost every other tool, such as Hive or Pig, converts its queries into MapReduce phases. If you understand MapReduce, you will be able to optimize your queries better.
Q3: When running Spark on Yarn, do I need to install Spark on all nodes of Yarn Cluster?
Ans:
Since Spark runs on top of YARN, it uses YARN to execute its commands across the cluster's nodes.
So, you just have to install Spark on one node.
Q4: What are the downsides of Spark?
Ans:
Spark utilizes memory, so the developer has to be careful. A casual developer might make the following mistakes:
-
She may end up running everything on the local node instead of distributing the work over the cluster.
-
She might hit some web service too many times by using multiple clusters.
The first problem is well tackled by the Hadoop MapReduce paradigm, since it ensures that the data your code is churning at any point in time is fairly small, so you can't make the mistake of trying to handle the whole dataset on a single node.
The second mistake is possible in MapReduce too. While writing MapReduce, a user may hit a service from inside map() or reduce() too many times. This overloading of a service is also possible while using Spark.
Q5: What is an RDD?
Ans:
The full form of RDD is Resilient Distributed Dataset. It is a representation of data located on a network which is
-
Immutable - You can operate on the RDD to produce another RDD, but you can't alter it.
-
Partitioned / Parallel - The data in an RDD is partitioned and operated on in parallel. Any operation on an RDD is done using multiple nodes.
-
Resilient - If one of the nodes hosting a partition fails, another node takes over its data.
RDD provides two kinds of operations: Transformations and Actions.
Q6: What are Transformations?
Ans: Transformations are functions that are applied on an RDD (Resilient Distributed Dataset). A transformation results in another RDD. A transformation is not executed until an action follows.
Examples of transformations are:
-
map() - applies the function passed to it to each element of the RDD, resulting in a new RDD.
-
filter() - creates a new RDD by picking the elements from the current RDD that pass the function argument.
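A minimal PySpark sketch of both transformations, assuming an existing SparkContext named sc (as in the pyspark shell); the sample data is illustrative only:
nums = sc.parallelize([1, 2, 3, 4, 5])
squares = nums.map(lambda x: x * x)                   # transformation: new RDD of squares
evenSquares = squares.filter(lambda x: x % 2 == 0)    # transformation: keep even squares
# nothing has executed yet; collect() is the action that triggers the work
print(evenSquares.collect())                          # [4, 16]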
Q7: What are Actions?
Ans:
An action brings the data from the RDD back to the local machine. Execution of an action results in the execution of all the previously created transformations. Examples of actions are:
-
reduce() - executes the function passed to it again and again until only one value is left. The function should take two arguments and return one value.
- take(n) - brings the first n values of the RDD back to the local node (collect() brings back all of them).
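A minimal PySpark sketch of these actions, assuming an existing SparkContext named sc; the data is illustrative only:
nums = sc.parallelize([10, 20, 30, 40])
total = nums.reduce(lambda x, y: x + y)   # action: repeatedly combines two values into one -> 100
firstTwo = nums.take(2)                   # action: brings the first 2 elements back to the driver
print(total, firstTwo)                    # 100 [10, 20]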
Q19 What is a “Spark Executor”?
Answer: When “SparkContext” connects to a cluster manager, it acquires an “Executor” on the cluster nodes. “Executors” are Spark processes that run computations and store the data on the worker node. The final tasks by “SparkContext” are transferred to executors.
Q20 What are the languages supported by Apache Spark for developing big data applications?
Answer: Scala, Java, Python and R. (Other JVM languages such as Clojure can also use Spark through its Java API.)
Q21 Is it possible to run Spark and Mesos along with Hadoop?
Answer: Yes, it is possible to run Spark and Mesos with Hadoop by launching each of these as a separate service on the machines. Mesos acts as a unified scheduler that assigns tasks to either Spark or Hadoop.
Q22 What are the common mistakes developers make when running Spark applications?
Answer: Developers often make the mistake of:
Hitting a web service too many times by using multiple clusters.
Running everything on the local node instead of distributing the work.
Developers need to be careful with this, as Spark makes use of memory for processing.
Q23 Explain about the different types of transformations on DStreams?
Answer: Stateless Transformations - Processing of a batch does not depend on the output of the previous batch. Examples: map(), reduceByKey(), filter().
Stateful Transformations - Processing of a batch depends on the intermediary results of the previous batch. Examples: transformations that depend on sliding windows.
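As an illustration, here is a minimal PySpark Streaming sketch, assuming an existing SparkContext sc and a hypothetical socket text source on localhost:9999; reduceByKey() on each batch is stateless, while reduceByKeyAndWindow() is a stateful, window-based transformation:
from pyspark.streaming import StreamingContext
ssc = StreamingContext(sc, 5)                          # 5-second micro-batches
ssc.checkpoint("/tmp/stream-checkpoint")               # required for stateful transformations
pairs = (ssc.socketTextStream("localhost", 9999)
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1)))
perBatch = pairs.reduceByKey(lambda a, b: a + b)       # stateless: each batch stands alone
windowed = pairs.reduceByKeyAndWindow(lambda a, b: a + b,   # stateful: 30-second window,
                                      lambda a, b: a - b,   # sliding every 10 seconds
                                      30, 10)
perBatch.pprint()
windowed.pprint()
# ssc.start(); ssc.awaitTermination()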
Q24 How Spark handles monitoring and logging in Standalone mode?
Answer: Spark has a web based user interface for monitoring the cluster in standalone mode that shows the cluster and job statistics. The log output for each job is written to the work directory of the slave nodes.
Q25 Hadoop uses replication to achieve fault tolerance. How is this achieved in Apache Spark?
Answer: The data storage model in Apache Spark is based on RDDs. RDDs help achieve fault tolerance through lineage. An RDD always has the information on how it was built from other datasets. If any partition of an RDD is lost due to failure, lineage helps rebuild only that particular lost partition.
Q26 What do you understand by Lazy Evaluation?
Answer: Spark is intelligent in the manner in which it operates on data. When you tell Spark to operate on a given dataset, it heeds the instructions and makes a note of them, so that it does not forget, but it does nothing unless asked for the final result. When a transformation like map() is called on an RDD, the operation is not performed immediately. Transformations in Spark are not evaluated until you perform an action. This helps optimize the overall data processing workflow.
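A minimal sketch of lazy evaluation in PySpark, assuming an existing SparkContext sc; the data is illustrative:
lines = sc.parallelize(["spark", "hadoop", "spark streaming"])
withSpark = lines.filter(lambda l: "spark" in l)   # only the lineage is recorded; nothing runs yet
print(withSpark.count())                           # the action triggers the actual evaluation -> 2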
Q27 What do you understand by SchemaRDD?
Answer: An RDD that consists of row objects (wrappers around basic string or integer arrays) with schema information about the type of data in each column.
Q28 What do you understand by Transformations in Spark?
Answer: Transformations are functions applied on an RDD, resulting in another RDD. They do not execute until an action occurs. map() and filter() are examples of transformations: the former applies the function passed to it to each element of the RDD and results in another RDD, while filter() creates a new RDD by selecting elements from the current RDD that pass the function argument.
Q29 How does Spark store data?
Answer: Spark is a processing engine; it has no storage engine of its own. It can retrieve data from any storage engine like HDFS, S3 and other data sources.
Q30 What is the difference between persist() and cache()?
Answer: persist() allows the user to specify the storage level, whereas cache() uses the default storage level (MEMORY_ONLY).
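A minimal PySpark sketch of the difference, assuming an existing SparkContext sc:
from pyspark import StorageLevel
rdd = sc.parallelize(range(1000))
doubled = rdd.map(lambda x: x * 2).cache()                                # default storage level
tripled = rdd.map(lambda x: x * 3).persist(StorageLevel.MEMORY_AND_DISK)  # explicitly chosen level
print(doubled.count(), tripled.count())                                   # actions materialize both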
1.What is Apache Spark?
Spark is a fast, easy-to-use and flexible data processing framework. It has an advanced execution engine supporting cyclic data flow and in-memory computing. Spark can run on Hadoop, standalone or in the cloud and is capable of accessing diverse data sources including HDFS, HBase, Cassandra and others.
2.Explain key features of Spark.
- Allows Integration with Hadoop and files included in HDFS.
- Spark has an interactive language shell as it has an independent Scala (the language in which Spark is written) interpreter
- Spark consists of RDD’s (Resilient Distributed Datasets), which can be cached across computing nodes in a cluster.
- Spark supports multiple analytic tools that are used for interactive query analysis , real-time analysis and graph processing
3.Define RDD.
RDD is the acronym for Resilient Distributed Datasets – a fault-tolerant collection of operational elements that run in parallel. The partitioned data in an RDD is immutable and distributed. There are primarily two types of RDD:
- Parallelized collections: created by calling parallelize() on an existing collection in the driver program, so its elements can be operated on in parallel
- Hadoop datasets: created from files in HDFS or other storage systems, so that a function can be applied to each file record
4.What does a Spark Engine do?
Spark Engine is responsible for scheduling, distributing and monitoring the data application across the cluster.
5.Define Partitions.
As the name suggests, a partition is a smaller and logical division of data, similar to a 'split' in MapReduce. Partitioning is the process of deriving logical units of data to speed up processing. Everything in Spark is a partitioned RDD.
6.What operations does an RDD support?
- Transformations
- Actions
7.What do you understand by Transformations in Spark?
Transformations are functions applied on an RDD, resulting in another RDD. They do not execute until an action occurs. map() and filter() are examples of transformations, where the former applies the function passed to it to each element of the RDD and results in another RDD. filter() creates a new RDD by selecting elements from the current RDD that pass the function argument.
8. Define Actions.
An action helps bring the data from an RDD back to the local machine. An action's execution is the result of all previously created transformations. reduce() is an action that applies the function passed to it again and again until one value is left. take(n) is an action that brings the first n values from the RDD to the local node.
9.Define functions of SparkCore.
Serving as the base engine, SparkCore performs various important functions like memory management, monitoring jobs, fault-tolerance, job scheduling and interaction with storage systems.
10.What is RDD Lineage?
Spark does not support data replication in memory; thus, if any data is lost, it is rebuilt using RDD lineage. RDD lineage is a process that reconstructs lost data partitions. The best part is that an RDD always remembers how to build itself from other datasets.
11.What is Spark Driver?
Spark Driver is the program that runs on the master node of the machine and declares transformations and actions on data RDDs. In simple terms, driver in Spark creates SparkContext, connected to a given Spark Master.
The driver also delivers the RDD graphs to Master, where the standalone cluster manager runs.
12.What is Hive on Spark?
Hive contains significant support for Apache Spark, wherein Hive execution is configured to Spark:
hive> set spark.home=/location/to/sparkHome;
hive> set hive.execution.engine=spark;
Hive on Spark supports Spark on yarn mode by default.
13.Name commonly-used Spark Ecosystems.
- Spark SQL (Shark)- for developers
- Spark Streaming for processing live data streams
- GraphX for generating and computing graphs
- MLlib (Machine Learning Algorithms)
- SparkR to promote R Programming in Spark engine.
14.Define Spark Streaming.
Spark Streaming is an extension of the Spark API that allows stream processing of live data streams. Data from different sources like Flume or HDFS is streamed and finally processed and pushed to file systems, live dashboards and databases. It is similar to batch processing in that the input data stream is divided into micro-batches.
15.What is GraphX?
Spark uses GraphX for graph processing to build and transform interactive graphs. The GraphX component enables programmers to reason about structured data at scale.
16.What does MLlib do?
MLlib is the scalable machine learning library provided by Spark. It aims at making machine learning easy and scalable, with common learning algorithms and use cases like clustering, regression, collaborative filtering, dimensionality reduction, and the like.
17.What is Spark SQL?
Spark SQL, earlier known as Shark, is a module introduced in Spark to work with structured data and perform structured data processing. Through this module, Spark executes relational SQL queries on the data. The core of the component supports a different kind of RDD called SchemaRDD, composed of row objects and schema objects defining the data type of each column in a row. It is similar to a table in a relational database.
18.What is a Parquet file?
Parquet is a columnar format supported by many other data processing systems. Spark SQL performs both read and write operations with Parquet files and considers it to be one of the best big data analytics formats so far.
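A minimal PySpark sketch of reading and writing Parquet with Spark SQL (Spark 1.x-style API); the SQLContext and the HDFS paths are illustrative assumptions:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.read.json("hdfs://namenode:9000/user/kalyan/people.json")       # hypothetical input
df.write.parquet("hdfs://namenode:9000/user/kalyan/people.parquet")             # write columnar file
back = sqlContext.read.parquet("hdfs://namenode:9000/user/kalyan/people.parquet")
back.printSchema()                                                               # schema is preserved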
19.What file systems Spark support?
• Hadoop Distributed File System (HDFS)
• Local File system
• S3
20.What is Yarn?
Similar to Hadoop MapReduce, Spark can run on YARN, which provides a central resource management platform to deliver scalable operations across the cluster. Running Spark on YARN necessitates a binary distribution of Spark built with YARN support.
21.List the functions of Spark SQL.
Spark SQL is capable of:
• Loading data from a variety of structured sources
• Querying data using SQL statements, both inside a Spark program and from external tools that connect to Spark SQL through standard database connectors (JDBC/ODBC). For instance, using business intelligence tools like Tableau
• Providing rich integration between SQL and regular Python/Java/Scala code, including the ability to join RDDs and SQL tables, expose custom functions in SQL, and more
22.What are benefits of Spark over MapReduce?
- Due to the availability of in-memory processing, Spark executes processing around 10-100x faster than Hadoop MapReduce, which uses persistent storage for its data processing tasks.
- Unlike Hadoop, Spark provides in-built libraries to perform multiple tasks from the same core, such as batch processing, streaming, machine learning and interactive SQL queries, whereas Hadoop only supports batch processing.
- Hadoop is highly disk-dependent whereas Spark promotes caching and in-memory data storage
- Spark is capable of performing computations multiple times on the same dataset. This is called iterative computation, while there is no iterative computing implemented by Hadoop.
Q1 Define RDD.
Answer: RDD is the acronym for Resilient Distributed Datasets – a fault-tolerant collection of operational elements that run in parallel.
The partitioned data in an RDD is immutable and distributed. There are primarily two types of RDD:
- Parallelized collections: created by calling parallelize() on an existing collection in the driver program, so its elements can be operated on in parallel
- Hadoop datasets: created from files in HDFS or other storage systems, so that a function can be applied to each file record
Q2 Explain the key features of Spark.
Answer: • Spark allows Integration with Hadoop and files included in HDFS.
- It has an independent language (Scala) interpreter and hence comes with an interactive language shell.
- It consists of RDD’s (Resilient Distributed Datasets), that can be cached across computing nodes in a cluster.
- It supports multiple analytic tools that are used for interactive query analysis, real-time analysis and graph processing.
Additionally, some of the salient features of Spark include:
Lightning-fast processing: When it comes to Big Data processing, speed always matters, and Spark runs programs on Hadoop clusters much faster than MapReduce. Spark makes this possible by reducing the number of read/write operations to disk and storing the intermediate processing data in memory.
Support for sophisticated analytics: In addition to simple “map” and “reduce” operations, Spark supports SQL queries, streaming data, and complex analytics such as machine learning and graph algorithms. This allows users to combine all these capabilities in a single workflow.
Q3 What operations does the “RDD” support?
Answer: Transformations
Actions
Q4 Define “Transformations” in Spark.
Answer: “Transformations” are functions applied on an RDD, resulting in a new RDD. They do not execute until an action occurs. map() and filter() are examples of “transformations”, where the former applies the function passed to it to each element of the RDD and results in another RDD. filter() creates a new RDD by selecting elements from the current RDD that pass the function argument.
Q5 What does the Spark Engine do?
Answer: Spark Engine is responsible for scheduling, distributing and monitoring the data application across the cluster.
Q6 What is “RDD”?
Answer: RDD stands for Resilient Distributed Datasets: a collection of fault-tolerant operational elements that run in parallel. The partitioned data in an RDD is immutable and is distributed in nature.
Q7 What are the functions of “Spark Core”?
Answer: The “SparkCore” performs an array of critical functions like memory management, monitoring jobs, fault tolerance, job scheduling and interaction with storage systems.
It is the foundation of the overall project. It provides distributed task dispatching, scheduling, and basic input and output functionalities. The RDD in Spark Core is what makes it fault tolerant. An RDD is a collection of items distributed across many nodes that can be manipulated in parallel. Spark Core provides many APIs for building and manipulating these collections.
Q8 What is an “RDD Lineage”?
Answer: Spark does not support data replication in the memory. In the event of any data loss, it is rebuilt using the “RDD Lineage”. It is a process that reconstructs lost data partitions.
Q9 What is a “Spark Driver”?
Answer: “Spark Driver” is the program that runs on the master node of the machine and declares transformations and actions on data RDDs. The driver also delivers RDD graphs to the “Master”, where the standalone cluster manager runs.
Q10 What is an “Accumulator”?
Answer: “Accumulators” are Spark’s offline debuggers. Similar to “Hadoop Counters”, “Accumulators” provide the number of “events” in a program.
Accumulators are variables that can only be added to through associative operations. Spark natively supports accumulators of numeric value types and standard mutable collections.
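A minimal PySpark sketch of counting "events" with an accumulator, assuming an existing SparkContext sc; the blank-line example is illustrative:
blank_lines = sc.accumulator(0)                 # numeric accumulator starting at 0
def check(line):
    if line.strip() == "":
        blank_lines.add(1)                      # updates are only added, associatively
    return line
sc.parallelize(["a", "", "b", ""]).map(check).count()   # an action must run the maps first
print(blank_lines.value)                        # 2, readable on the driver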
Q11 Which file systems does Spark support?
Answer: Hadoop Distributed File System (HDFS)
Local File system
S3
Q12 Can you use Spark to access and analyse data stored in Cassandra databases?
Answer: Yes, it is possible if you use Spark Cassandra Connector.
Q13 What is “YARN”?
Answer: “YARN” (Yet Another Resource Negotiator) is Hadoop's cluster resource management layer, often described as a large-scale, distributed operating system for big data applications. Spark can run on YARN, which provides a central resource management platform to deliver scalable operations across the cluster.
Q14 What do you understand by Pair RDD?
Answer: Special operations can be performed on RDDs in Spark using key/value pairs; such RDDs are referred to as Pair RDDs. Pair RDDs allow users to access each key in parallel. They have a reduceByKey() method that aggregates data per key and a join() method that combines different RDDs together based on elements having the same key.
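A minimal PySpark sketch of reduceByKey() and join() on pair RDDs, assuming an existing SparkContext sc; the sample data is illustrative:
sales  = sc.parallelize([("apple", 2), ("banana", 1), ("apple", 3)])
prices = sc.parallelize([("apple", 0.5), ("banana", 0.25)])
totals = sales.reduceByKey(lambda a, b: a + b)   # ("apple", 5), ("banana", 1)
joined = totals.join(prices)                     # ("apple", (5, 0.5)), ("banana", (1, 0.25))
print(joined.collect())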
Q15 List the various types of “Cluster Managers” in Spark.
Answer: The Spark framework supports three kinds of Cluster Managers:
Standalone
Apache Mesos
YARN
Q16 Define “Partitions”.
Answer: A “Partition” is a smaller and logical division of data, that is similar to the “split” in Map Reduce. Partitioning is the process that helps derive logical units of data in order to speed up data processing.
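A minimal PySpark sketch showing partitions explicitly, assuming an existing SparkContext sc:
rdd = sc.parallelize(range(100), 4)   # ask for 4 partitions up front
print(rdd.getNumPartitions())         # 4
bigger = rdd.repartition(8)           # reshuffle the data into 8 partitions
print(bigger.getNumPartitions())      # 8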
Q17 What is RDD Lineage?
Answer: Spark does not support data replication in memory; thus, if any data is lost, it is rebuilt using RDD lineage. RDD lineage is a process that reconstructs lost data partitions. The best part is that an RDD always remembers how to build itself from other datasets.
Q18 What is a “worker node”?
Answer: “Worker node” refers to any node that can run the application code in a cluster.
23.Is there any benefit of learning MapReduce, then?
Yes. MapReduce is a paradigm used by many big data tools, including Spark, and it remains extremely relevant when the data grows beyond what fits in memory. Most tools like Pig and Hive convert their queries into MapReduce phases, so understanding MapReduce helps you optimize those queries better.
24.What is Spark Executor?
When SparkContext connects to a cluster manager, it acquires Executors on nodes in the cluster. Executors are Spark processes that run computations and store data on the worker nodes. The final tasks from SparkContext are transferred to executors for execution.
25.Name types of Cluster Managers in Spark.
The Spark framework supports three major types of Cluster Managers:
- Standalone: a basic manager to set up a cluster
- Apache Mesos: generalized/commonly-used cluster manager, also runs Hadoop MapReduce and other applications
- Yarn: responsible for resource management in Hadoop
26.What do you understand by worker node?
Worker node refers to any node that can run the application code in a cluster.
27.What is PageRank?
A unique algorithm in GraphX, PageRank measures the importance of each vertex in a graph. For instance, an edge from u to v represents an endorsement of v's importance by u. In simple terms, if a user on Instagram is followed massively, that user will rank highly on the platform.
28.Do you need to install Spark on all nodes of Yarn cluster while running Spark on Yarn?
No, because Spark runs on top of Yarn.
29.Illustrate some demerits of using Spark.
Since Spark utilizes more memory compared to Hadoop MapReduce, certain problems may arise. Developers need to be careful while running their applications on Spark: the work must be distributed over the cluster instead of being run on a single node.
30.How to create RDD?
Spark provides two methods to create RDD:
• By parallelizing a collection in your Driver program. This makes use of SparkContext’s ‘parallelize’ method
val data = Array(2,4,6,8,10)
val distData = sc.parallelize(data)
• By loading an external dataset from external storage like HDFS, HBase, shared file system
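For the second method, a minimal PySpark sketch (the HDFS path is a hypothetical example):
distFile = sc.textFile("hdfs://namenode:9000/user/kalyan/data.txt")
print(distFile.count())    # number of lines in the file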
---------------
Q1: Say I have a huge list of numbers in RDD(say myrdd). And I wrote the following code to compute average:
def myAvg(x, y):
return (x+y)/2.0;
avg = myrdd.reduce(myAvg);
What is wrong with it? And How would you correct it?
Ans: The average function is not associative, so it cannot be used directly as a reducer;
I would simply sum it and then divide by count.
def sum(x, y):
return x+y;
total = myrdd.reduce(sum);
avg = total / myrdd.count();
The only problem with the above code is that the total might become very big and thus overflow. So, I would rather divide each number by the count and then sum, in the following way.
cnt = myrdd.count();
def divideByCnt(x):
    return x / float(cnt);   # float() avoids integer division
myrdd1 = myrdd.map(divideByCnt);
avg = myrdd1.reduce(sum);
Q2: Say I have a huge list of numbers in a file in HDFS. Each line has one number. And I want to compute the square root of the sum of squares of these numbers. How would you do it?
Ans:
# We would first load the file as an RDD from HDFS on Spark
numsAsText = sc.textFile("hdfs://namenode:9000/user/kalyan/mynumbersfile.txt");
# Define the function to compute the squares
def toSqInt(str):
    v = int(str);
    return v*v;
# Run the function on the Spark RDD as a transformation
nums = numsAsText.map(toSqInt);
# Run the summation as a reduce action (two-argument sum)
def sum(x, y):
    return x + y;
total = nums.reduce(sum)
# Finally compute the square root, for which we need to import math
import math;
print(math.sqrt(total));
Q3: Is the following approach correct? Is the sqrtOfSumOfSq function a valid reducer?
import math;
numsAsText = sc.textFile("hdfs://namenode:9000/user/kalyan/mynumbersfile.txt");
def toInt(str):
    return int(str);
nums = numsAsText.map(toInt);
def sqrtOfSumOfSq(x, y):
    return math.sqrt(x*x + y*y);
total = nums.reduce(sqrtOfSumOfSq)
print(total);
Ans: Yes. The approach is correct and sqrtOfSumOfSq is a valid reducer.
Q4: Could you compare the pros and cons of your approach (in Question 2 above) and my approach (in Question 3 above)?
Ans:
You are doing the square and the square root as part of the reduce action, while I am squaring in map() and summing in reduce() in my approach.
My approach will be faster because in your case the reducer code is heavier: it calls math.sqrt(), and reducer code is generally executed approximately n-1 times, where n is the number of elements in the RDD.
The only downside of my approach is that there is a huge chance of integer overflow because I am computing the sum of squares as part of map.
Q5: If you have to compute the total counts of each of the unique words on spark, how would you go about it?
Ans:
#This will load bigtextfile.txt as an RDD in Spark
lines = sc.textFile("hdfs://namenode:9000/user/kalyan/bigtextfile.txt");
#define a function that can break each line into words
def toWords(line):
    return line.split();
# Run the toWords function on each element of the RDD as a flatMap transformation.
# We use flatMap instead of map because our function returns multiple values.
words = lines.flatMap(toWords);
# Convert each word into a (key, value) pair. Here the key will be the word itself and the value will be 1.
def toTuple(word):
    return (word, 1);
wordsTuple = words.map(toTuple);
# Now we can easily do the reduceByKey() transformation.
def sum(x, y):
    return x + y;
counts = wordsTuple.reduceByKey(sum)
# Now, print the result
print(counts.collect())
Q6: In a very huge text file, you want to just check if a particular keyword exists. How would you do this using Spark?
Ans:
lines = sc.textFile("hdfs://namenode:9000/user/kalyan/bigtextfile.txt");
def isFound(line):
    if line.find("mykeyword") > -1:
        return 1;
    return 0;
foundBits = lines.map(isFound);
# two-argument sum used as the reducer
def sum(x, y):
    return x + y;
total = foundBits.reduce(sum);
if total > 0:
    print("FOUND");
else:
    print("NOT FOUND");
Q7: Can you improve the performance of the code in the previous answer?
Ans: Yes.
The search does not stop even after the word we are looking for has been found. Our map code would keep executing on all the nodes, which is very inefficient.
We could utilize accumulators to report whether the word has been found or not and then stop the job. Something along these lines:
import thread, threading
from time import sleep

result = "Not Set"
lock = threading.Lock()
accum = sc.accumulator(0)

def map_func(line):
    # introduce delay to emulate the slowness
    sleep(1);
    if line.find("Adventures") > -1:
        accum.add(1);
        return 1;
    return 0;

def start_job():
    global result
    try:
        sc.setJobGroup("job_to_cancel", "some description")
        lines = sc.textFile("hdfs://namenode:9000/user/kalyan/wordcount/input/big.txt");
        result = lines.map(map_func);
        result.take(1);
    except Exception as e:
        result = "Cancelled"
    lock.release()

def stop_job():
    while accum.value < 3:
        sleep(1);
    sc.cancelJobGroup("job_to_cancel")

supress = lock.acquire()
supress = thread.start_new_thread(start_job, tuple())
supress = thread.start_new_thread(stop_job, tuple())
supress = lock.acquire()
Spark SQL
Q1 Name a few commonly used Spark Ecosystems.
Answer: Spark SQL (Shark)
Spark Streaming
GraphX
MLlib
SparkR
Q2 What is “Spark SQL”?
Answer: Spark SQL is a Spark interface to work with structured as
well as semi-structured data. It has the capability to load data from
multiple structured sources like “text files”, JSON files, Parquet
files, among others. Spark SQL provides a special type of RDD called
SchemaRDD. These are row objects, where each object represents a record.
Q3 Can we do real-time processing using Spark SQL?
Answer: Not directly but we can register an existing RDD as a SQL table and trigger SQL queries on top of that.
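A minimal PySpark sketch of that idea (Spark 1.x-style API); the SQLContext, the Row data and the table name are illustrative assumptions:
from pyspark.sql import SQLContext, Row
sqlContext = SQLContext(sc)
people = sc.parallelize([Row(name="Ann", age=34), Row(name="Bob", age=29)])
df = sqlContext.createDataFrame(people)
df.registerTempTable("people")                                   # expose the RDD as a SQL table
print(sqlContext.sql("SELECT name FROM people WHERE age > 30").collect())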
Q4 Explain about the major libraries that constitute the Spark Ecosystem
Answer: Spark MLlib - machine learning library in Spark for commonly used learning algorithms like clustering, regression, classification, etc.
Spark Streaming – This library is used to process real-time streaming data.
Spark GraphX – Spark API for graph-parallel computations with basic operators like joinVertices, subgraph, aggregateMessages, etc.
Spark SQL – Helps execute SQL-like queries on Spark data using standard visualization or BI tools.
Q5 What is Spark SQL?
Answer: Spark SQL, earlier known as Shark, is a module introduced in Spark to work with structured data and perform structured data processing. Through this module, Spark executes relational SQL queries on the data. The core of the component supports a different kind of RDD called SchemaRDD, composed of row objects and schema objects defining the data type of each column in a row. It is similar to a table in a relational database.
Q6 What is a Parquet file?
Answer: Parquet is a columnar format supported by many other data processing systems. Spark SQL performs both read and write operations with Parquet files and considers it to be one of the best big data analytics formats so far.
Q7 List the functions of Spark SQL.
Answer: Spark SQL is capable of:
- Loading data from a variety of structured sources
- Querying data using SQL statements, both inside a Spark program and
from external tools that connect to Spark SQL through standard database
connectors (JDBC/ODBC). For instance, using business intelligence tools
like Tableau
- Providing rich integration between SQL and regular Python/Java/Scala
code, including the ability to join RDDs and SQL tables, expose custom
functions in SQL, and more
Q8 What is Spark?
Answer: Spark is a parallel data processing framework. It allows developers to build fast, unified big data applications that combine batch, streaming and interactive analytics.
Q9 What is Hive on Spark?
Answer: Hive is a component of Hortonworks’ Data Platform (HDP). Hive
provides an SQL-like interface to data stored in the HDP. Spark users
will automatically get the complete set of Hive’s rich features,
including any new features that Hive might introduce in the future.
The main task in implementing the Spark execution engine for Hive lies in query planning, where Hive operator plans from the semantic analyzer are translated into a task plan that Spark can execute. It also includes query execution, where the generated Spark plan actually gets executed on the Spark cluster.
Q10 What is a “Parquet” in Spark?
Answer: “Parquet” is a columnar format file supported by many data
processing systems. Spark SQL performs both read and write operations
with the “Parquet” file.
Q11 What is Catalyst framework?
Answer: Catalyst framework is a new optimization framework present in
Spark SQL. It allows Spark to automatically transform SQL queries by
adding new optimizations to build a faster processing system.
Q12 Why is BlinkDB used?
Answer: BlinkDB is a query engine for executing interactive SQL
queries on huge volumes of data and renders query results marked with
meaningful error bars. BlinkDB helps users balance ‘query accuracy’ with
response time.
Q13 How can you compare Hadoop and Spark in terms of ease of use?
Answer: Hadoop MapReduce requires programming in Java which is
difficult, though Pig and Hive make it considerably easier. Learning Pig
and Hive syntax takes time. Spark has interactive APIs for different
languages like Java, Python or Scala and also includes Shark i.e. Spark
SQL for SQL lovers – making it comparatively easier to use than Hadoop.
Q14 What are the various data sources available in SparkSQL?
Answer: Parquet file
JSON Datasets
Hive tables
SparkSQL is a Spark component that supports querying data either via
SQL or via the Hive Query Language. It originated as the Apache Hive
port to run on top of Spark (in place of MapReduce) and is now
integrated with the Spark stack. In addition to providing support for
various data sources, it makes it possible to weave SQL queries with
code transformations which results in a very powerful tool. Below is an
example of a Hive compatible query:
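A minimal sketch of such a query from PySpark, assuming a HiveContext against a Hive metastore and a hypothetical users table:
from pyspark.sql import HiveContext
hiveCtx = HiveContext(sc)
results = hiveCtx.sql("SELECT name, age FROM users WHERE age > 21 ORDER BY age DESC LIMIT 10")
results.show()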
Q15 What are benefits of Spark over MapReduce?
Answer: • Due to the availability of in-memory processing, Spark executes processing around 10-100x faster than Hadoop MapReduce, which uses persistent storage for its data processing tasks.
- Unlike Hadoop, Spark provides in-built libraries to perform multiple tasks from the same core, such as batch processing, streaming, machine learning and interactive SQL queries, whereas Hadoop only supports batch processing.
- Hadoop is highly disk-dependent whereas Spark promotes caching and in-memory data storage
- Spark is capable of performing computations multiple times on the same dataset. This is called iterative computation, while there is no iterative computing implemented by Hadoop.
Q16 How SparkSQL is different from HQL and SQL?
Answer: SparkSQL is a special component on the Spark Core engine that supports SQL and the Hive Query Language without changing any syntax. It is possible to join an SQL table and an HQL table.
Spark Streaming
Lab13: Spark-streaming
# How to start
sudo yum update
spark-shell
#steps:
#create a folder spark-streaming and go to the folder
mkdir spark-streaming
#go to spark-streaming folder
cd spark-streaming
git clone https://github.com/databricks/spark-training.git
#go to scala folder
cd spark-training/streaming/scala
#Twitter Setup
#open twitter setting page https://apps.twitter.com/
#This page lists the set of Twitter-based applications that you own and have already created consumer keys and access tokens for.
#This list will be empty if you have never created any applications.
#Create a new temporary application. To do this, click on the blue "Create a new application" button.
cd /spark-streaming/spark-training/streaming/scala
vi Tutorial.scala
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.streaming._
import org.apache.spark.streaming.twitter._
import org.apache.spark.streaming.StreamingContext._
import TutorialHelper._

object Tutorial {
  def main(args: Array[String]) {
    // Checkpoint directory
    val checkpointDir = TutorialHelper.getCheckpointDirectory()

    // Configure Twitter credentials
    val apiKey = ""
    val apiSecret = ""
    val accessToken = ""
    val accessTokenSecret = ""
    TutorialHelper.configureTwitterCredentials(apiKey, apiSecret, accessToken, accessTokenSecret)

    // Your code goes here
    val ssc = new StreamingContext(new SparkConf(), Seconds(1))
    val tweets = TwitterUtils.createStream(ssc, None)
    val statuses = tweets.map(status => status.getText())
    statuses.print()

    ssc.checkpoint(checkpointDir)
    ssc.start()
    ssc.awaitTermination()
  }
}
#The ../../sbt/sbt assembly command will compile the Tutorial class and create a JAR file in
#/streaming/scala/target/scala-2.10/. It will take some time to build.
../../sbt/sbt assembly
#we will use spark-submit to execute our program for this lab
spark-submit --class Tutorial ../../streaming/scala/target/scala-2.10/Tutorial-assembly-0.1-SNAPSHOT.jar
Spark MLlib
Q1 What is Spark MLlib?
Answer: Mahout is a machine learning library for Hadoop; similarly, MLlib is Spark's machine learning library. MLlib provides various algorithms that scale out on the cluster for data processing. Most data scientists use this MLlib library.
Q2 What is the function of “MLlib”?
Answer: “MLlib” is Spark’s machine learning library. It aims at making machine learning easy and scalable, with common learning algorithms and real-life use cases including clustering, regression, collaborative filtering, and dimensionality reduction, among others.
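A minimal sketch of MLlib's RDD-based API using k-means clustering, assuming an existing SparkContext sc; the points are illustrative:
from pyspark.mllib.clustering import KMeans
points = sc.parallelize([[0.0, 0.0], [1.0, 1.0], [9.0, 8.0], [8.0, 9.0]])
model = KMeans.train(points, k=2, maxIterations=10)   # fit 2 clusters
print(model.clusterCenters)
print(model.predict([0.5, 0.5]))                      # cluster index for a new point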
Q3 What is Shark?
Answer: Most data users know only SQL and are not good at programming. Shark is a tool developed for people from a database background, to access Scala MLlib capabilities through a Hive-like SQL interface. The Shark tool helps data users run Hive on Spark, offering compatibility with the Hive metastore, queries and data.
Q4 What does MLlib do?
Answer: MLlib is the scalable machine learning library provided by Spark. It aims at making machine learning easy and scalable, with common learning algorithms and use cases like clustering, regression, collaborative filtering, dimensionality reduction, and the like.
Interview Questions for – Spark GraphX
Q1 Name a few commonly used Spark Ecosystems.
Answer: Spark SQL (Shark)
Spark Streaming
GraphX
MLlib
SparkR
Q2 What is “GraphX” in Spark?
Answer: “GraphX” is the component in Spark used for graph processing. It helps build and transform interactive graphs and enables programmers to reason about structured data at scale.
Q3 Define “PageRank”.
Answer: “PageRank” is the measure of the importance of each vertex in a graph.
Q4 What is lineage graph?
Answer: The RDDs in Spark depend on one or more other RDDs. The representation of these dependencies between RDDs is known as the lineage graph. Lineage graph information is used to compute each RDD on demand, so that whenever part of a persistent RDD is lost, the lost data can be recovered using the lineage graph information.
Q5 Explain about the major libraries that constitute the Spark Ecosystem
Answer: Spark MLlib - machine learning library in Spark for commonly used learning algorithms like clustering, regression, classification, etc.
Spark Streaming – This library is used to process real-time streaming data.
Spark GraphX – Spark API for graph-parallel computations with basic operators like joinVertices, subgraph, aggregateMessages, etc.
Spark SQL – Helps execute SQL-like queries on Spark data using standard visualization or BI tools.
Q6 Does Apache Spark provide check pointing?
Answer: Lineage graphs are always useful to recover RDDs from a failure, but this is generally time consuming if the RDDs have long lineage chains. Spark has an API for checkpointing, i.e. a REPLICATE flag to persist. However, the decision on which data to checkpoint is made by the user. Checkpoints are useful when the lineage graphs are long and have wide dependencies.
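A minimal PySpark sketch of checkpointing to cut a long lineage, assuming an existing SparkContext sc and a hypothetical HDFS checkpoint directory:
sc.setCheckpointDir("hdfs://namenode:9000/user/kalyan/checkpoints")
rdd = sc.parallelize(range(1000))
for _ in range(50):                 # a long chain of maps builds a long lineage
    rdd = rdd.map(lambda x: x + 1)
rdd.checkpoint()                    # mark for checkpointing; saved when the next action runs
print(rdd.count())                  # triggers the job and writes the checkpoint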