The Data Engineer will work with a cross-functional team to migrate existing ETL jobs from a Redshift-based Data Warehouse to an S3-based Data Lake Architecture.
•3-5 years of Big Data experience (PySpark preferred; Spark, EMR, or Hadoop experience required)
•Strong skills and comfort level operating in AWS and leveraging AWS technologies
•Advanced Redshift experience (including tuning, vacuuming, managing clusters, and large-scale ETL/ELT)
•Ability to transfer data to S3 in a structured, readable format
•Excellent written and verbal communication skills