Big Data Engineer

Duties and Responsibilities:
- Set up the collaboration environment and framework for AI-related research and development projects.
- Design and develop architectural models for scalable data processing and storage.
- Work with cross-functional teams to understand technical needs, and set up big data environments that enable rapid POCs and prototype development on both on-premise and cloud-based platforms.
- Ensure data is accessible to data scientists and researchers via different programming languages.

Qualifications:
- 5 years of Data Engineering/ETL/Administration experience.
- Experience with Hadoop distributions such as Hortonworks and Cloudera.
- Knowledge of cluster monitoring tools such as Ambari, Ganglia, or Nagios.
- Delivered Big Data solutions in the cloud with AWS, Azure, or Google Cloud.
- Experience in Java programming.
- Experience in Scala programming.
- Good coding skills in at least one scripting language (Shell, Python, R, etc.).
- Experience with RDBMS (MySQL, PostgreSQL, etc.).
- Experience with NoSQL database administration and development (e.g., MongoDB).
- Experience with the Hadoop ecosystem (MapReduce, Streaming, Pig, Hive, Spark).
- Experience with DevOps tools such as Jenkins, Chef, and Puppet.
- Ability to create and manage big data pipelines using Kafka, Flume, and Spark.
- Knowledge of BI tools such as Tableau, Pentaho, etc.
- Experience building large-scale distributed applications and services.
- Significant knowledge of Big Data technologies and tools.
- Experience with agile development methodologies.
- Knowledge of industry standards and trends.


Key Skills
Java, Hive, NoSQL, Cloudera, Hadoop, Kafka, Flume, MapReduce, Spark, Pig

Job Summary

  • Published on: 2020-05-22
  • Salary: NA
  • Location: Bengaluru