As a Data Engineer at Jai Kisan, you will play a pivotal role in the development and maintenance of our data
infrastructure. You will be responsible for building and optimizing data pipelines, ensuring data accuracy and availability,
and supporting our data-driven initiatives. This role requires expertise in cloud platforms, databases, and microservices, along with
strong programming skills in Python and SQL.
Roles & Responsibilities:
Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
Design, develop, and maintain end-to-end data pipelines, ensuring efficient data extraction, transformation, and
loading (ETL) processes.
Utilize cloud platforms (AWS, GCP, or Azure) to build, deploy, and manage data solutions in a scalable and cost-effective manner.
Develop and maintain microservices for data-related functionalities, allowing for seamless integration with various systems and applications.
Implement data quality checks and monitoring processes to ensure data accuracy, consistency, and reliability.
Perform data modeling and optimization to support reporting, analytics, and machine learning initiatives.
Work with a variety of databases, including MongoDB, Postgres, and other relational and NoSQL databases.
Collaborate with data scientists and analysts to enable data-driven decision-making by providing them with clean, well-structured data.
Troubleshoot data-related issues and perform root cause analysis to identify and implement solutions.
Stay updated with emerging technologies and industry best practices in data engineering.
Requirements:
Bachelor's degree in Computer Science, Information Technology, or a related field.
1-2 years of experience in a data engineering role.
Strong programming skills in Python and SQL.
Familiarity with at least one cloud platform (AWS, GCP, or Azure).
Experience building microservices and deploying them in a cloud environment.
Proficiency in designing and optimizing data pipelines.
Knowledge of both relational (e.g., Postgres) and NoSQL databases (e.g., MongoDB).
Understanding of data modeling concepts.
Excellent problem-solving and communication skills.
Ability to work in a fast-paced, collaborative environment.
Good to Have:
Experience with data orchestration and workflow management tools (e.g., Apache Airflow).
Knowledge of containerization and orchestration platforms (e.g., Docker, Kubernetes).
Familiarity with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery).
Exposure to financial data or the fintech industry.
Strong analytical and data visualization skills.