Job Responsibilities:
- Maintain, grow, and evolve the data engineering clusters at MunchON, introducing new technologies where required.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, NoSQL, and ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across boundaries and multiple data centers by following data security standards.
- Create data tools for the analytics and data science teams that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Job Requirements:
- Programming languages (proficiency may vary, but breadth across languages is a plus).
- SQL databases, NoSQL databases, Apache Airflow or similar ETL tools, and Apache Spark or similar scalable computing engines (see the sketch after this list).
- ELK Stack, Hadoop ecosystem, and Apache Kafka, plus some experience with cloud services such as AWS or Google Cloud.
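For illustration only, here is a minimal sketch of the kind of pipeline orchestration this role works with, written against Apache Airflow's Python API. The DAG name, task names, and extract/load logic are hypothetical placeholders, not an actual MunchON pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    """Hypothetical extract step: pull raw order records from a source system."""
    print("extracting raw orders")


def transform_and_load_orders():
    """Hypothetical load step: write cleaned records to the warehouse."""
    print("loading orders into the warehouse")


with DAG(
    dag_id="orders_etl",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",     # run once per day
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=transform_and_load_orders)

    extract >> load  # extract must finish before load starts
```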