Experience with AWS and its service offerings: S3, Redshift, EC2, EMR, Lambda, CloudWatch, RDS, Step Functions, Spark Streaming, etc.
Good knowledge of configuring and working with multi-node clusters and the distributed data processing framework Spark.
3+ years of hands-on experience with EMR, Apache Spark, and Hadoop technologies.
Must have experience with Linux, Python, PySpark, and Spark SQL.
Experience working with large volumes of data (terabytes) and analyzing data structures.
Experience designing scalable data pipelines, complex event processing, and analytics components using big data technologies: Spark, Python, Scala, PySpark.
Expert in SQL/PL-SQL, Redshift, and NoSQL databases.
Experience with process orchestration tools such as Apache Airflow, Apache NiFi, etc.
Hands-on knowledge of the design, development, and enhancement of data lakes; constantly evolving with emerging tools and technologies.