Senior Data Engineer (Remote)
TradeRev, India

Experience: 1 Year
Traveling: No
Qualification: As mentioned in job details
Total Vacancies: 1 Job
Posted on: Oct 26, 2021
Last Date: Nov 26, 2021
Job Description

KAR Global is looking to expand our data team as we continue to grow our data platform. You should have a strong background in Python and SQL. As a member of the data team, your main responsibilities are implementing and maintaining Airflow ETL jobs, using Python to ingest external data sources into the Data Warehouse, and working closely with the Product and Data Science teams to deliver data in usable formats to the appropriate data stores. We have a polyglot data model built on modern data platforms: Snowflake drives the Data Warehouse, Elasticsearch enables our location-based searching and metrics, Postgres holds transactional data, and Apache Spark is used to train our models. All environments run on AWS EKS, and our data processing framework is written in Python.
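For illustration, the kind of Airflow ETL job described above might look like the minimal sketch below. It assumes Airflow 2.x; the DAG name, task names, feed path, and helper functions are hypothetical placeholders rather than KAR's actual pipelines, and a real job would load the staged data into Snowflake (for example via COPY INTO or the Snowflake provider's hooks) instead of printing a message.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_external_feed(**context):
    # Pull data from a hypothetical external source (e.g. a partner API or S3 drop)
    # and return a reference to the staged file for the downstream load task.
    return "s3://example-bucket/feeds/vehicle_listings.csv"


def load_into_warehouse(**context):
    # Fetch the staged file reference from the extract task via XCom.
    # A real implementation would load it into the Data Warehouse (Snowflake here).
    staged_file = context["ti"].xcom_pull(task_ids="extract_external_feed")
    print(f"Loading {staged_file} into the Data Warehouse")


with DAG(
    dag_id="external_feed_ingestion",  # hypothetical DAG name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_external_feed",
        python_callable=extract_external_feed,
    )
    load = PythonOperator(
        task_id="load_into_warehouse",
        python_callable=load_into_warehouse,
    )

    extract >> load  # run the extraction before the warehouse load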
About Our Candidate:
This candidate should be a self-starter who is interested in learning new systems and environments and is passionate about developing quality, supportable data service solutions for internal and external customers. We value a natural curiosity about data and technology that drives results through quality, repeatable, and long-term sustainable database and code development. The candidate should be highly dynamic and excited by the opportunity to learn many different products and data domains and how they drive business outcomes and value for our customers.

What You Will Be Doing:
You will participate daily in Agile sprint ceremonies to help design, plan, build, test, develop, and support KAR data products and platforms consisting of Airflow ETL pipelines and Postgres, Redshift, DynamoDB, Elasticsearch, and Snowflake databases. Our team works in a shared-services delivery model supporting seven lines of business, including front-end customer-facing products, B2B portals, mobile applications, business analytics, and data science initiatives.

Responsibilities include:
  • Work with product, data science, analytics, and engineering teams to understand project data needs and define project scope
  • Design and plan data services solutions on the Data Platform
  • Build and deliver Python/Docker feed framework data pipeline jobs and services
  • Contribute to the Data Engineering team's delivery framework, including building reusable code, implementing industry best practices, and maintaining a common delivery framework
  • Monitor, maintain, document, and resolve incidents for scheduled production data jobs supporting internal and external customers' data needs
What You Need to Be Successful:
  • 2+ years of experience in PostgreSQL development, including functions, stored procedures, and indexing, or equivalent (required)
  • Experience managing production data in a high-availability product delivery ODS/RDBMS environment or equivalent (required)
  • Experience planning and designing maintainable data schemas (required)
  • Experience with Airflow, Python, Docker, Kubernetes, and data warehouse environments (preferred)
  • Experience using GitHub / Azure DevOps (CI/CD) / Artifactory / PyPI or comparable delivery stacks (preferred)
  • Experience with AWS Redshift, other MPP databases, or DynamoDB (preferred)
  • Experience with Kinesis/Kafka (preferred)
  • Experience working with large enterprise data lakes / Snowflake (preferred)

TradeRev

Information Technology and Services - San Jose, United States