Data Engineer
Responsibilities for Data Engineer

•       Create and maintain data pipelines that automate data ingestion from multiple data sources (a minimal pipeline sketch follows this list)

•       Build the data platform capabilities required for optimal extraction, transformation, and loading of data from a wide variety of data sources.

•       Keep our data separated and secure across national boundaries through multiple data centers and AWS regions

•       Work with data and analytics experts to strive for greater functionality in our data systems.

•       Assemble large, complex data sets that meet functional and non-functional business requirements.

•       Identify, design, and implement solutions for automating manual processes and optimizing data delivery

•       Work with stakeholders, including the Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
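
For illustration only (not part of the posting): a minimal sketch of the kind of scheduled, multi-source ingestion pipeline the responsibilities above describe, assuming Airflow 2.x. The source names and the ingest callable are hypothetical placeholders.

    # Minimal Airflow 2.x DAG: one ingestion task per upstream source.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    SOURCES = ["orders_api", "events_s3", "crm_export"]  # hypothetical sources

    def ingest(source: str) -> None:
        """Pull one source and land the raw data (placeholder logic)."""
        print(f"ingesting {source}")

    with DAG(
        dag_id="multi_source_ingestion",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        for source in SOURCES:
            PythonOperator(
                task_id=f"ingest_{source}",
                python_callable=ingest,
                op_kwargs={"source": source},
            )

Giving each source its own task means a failed ingestion can be retried independently without rerunning the whole pipeline.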

Qualifications for Data Engineer

•       Experience with data pipeline and ETL tools such as Apache Airflow, AWS Data Pipeline, AWS Glue, and Talend

•       Experience with data warehouse solutions such as Redshift and Snowflake

•       Experience with AWS cloud services: S3, Athena, RDS, EC2, EMR, and Lambda

•       Experience with object-oriented and functional scripting languages: Python, Java, Scala, etc.

•       Experience with big data tools: Hadoop, Spark, Kafka, etc.

•       Experience with relational SQL and NoSQL databases, including Postgres and DynamoDB.

•       Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

•       Experience with Kubernetes, EKS, and API development

•       Advanced working SQL knowledge and experience working with relational and NoSQL databases.

•       Experience building and optimizing ‘big data’ pipelines, architectures, and data sets (see the sketch after this list).

•       Experience performing root cause analysis on internal and external data ingestion.

•       Strong analytic skills related to working with structured, semi-structured, and unstructured datasets.

•       A successful history of manipulating, processing, and extracting value from large, disconnected datasets.

•       Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.

•       Experience supporting and working with cross-functional teams in a dynamic environment.
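
As a concrete illustration of the extract-transform-load pattern the qualifications above describe, here is a minimal PySpark sketch. The S3 paths and column names are hypothetical, not taken from the posting.

    # Minimal PySpark ETL: read raw events, clean, aggregate, write curated output.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

    # Extract: read raw JSON events (hypothetical S3 path).
    raw = spark.read.json("s3a://example-bucket/raw/events/")

    # Transform: drop rows missing a user id, then count events per day.
    daily = (
        raw.filter(F.col("user_id").isNotNull())
           .withColumn("day", F.to_date("event_ts"))
           .groupBy("day")
           .agg(F.count("*").alias("events"))
    )

    # Load: write partitioned Parquet for downstream warehouse loads.
    daily.write.mode("overwrite").partitionBy("day").parquet(
        "s3a://example-bucket/curated/daily_events/"
    )

    spark.stop()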


Required Skills: Python, Apache Airflow

Indotronix is an Equal Opportunity Employer

Job Code: JPC - 92761
Posted Date: 2021-03-11 01:13:08
Experience: 5+ years
Primary Skills: Python, SQL, Spark, Kafka, Apache Airflow, Hadoop
Salary: $67.16
Contact Person: Arun Kumar MS
