Big Data Engineer - Hadoop in Capital Federal - HV Capital Humano

We are looking for a candidate to work for an international company. Independent
contractor is preferred.
Remote work.
Job Responsibilities:
• Hadoop development and implementation.
• Loading from disparate data sets.
• Pre-processing using Hive and Pig.
• Designing, building, installing, configuring and supporting Hadoop.
• Translate complex functional and technical requirements into detailed design.
• Perform analysis of vast data stores and uncover insights.
• Maintain security and data privacy.
• Create scalable and high-performance web services for data tracking.
• High-speed querying.
• Managing and deploying HBase.
• Being a part of a POC effort to help build new Hadoop clusters.
• Test prototypes and oversee handover to operational teams.
• Propose best practices/standards.
Skills Required:
• Knowledge in Hadoop
• Good knowledge of back-end programming, specifically Java, JavaScript, Node.js, and OOAD.
• Writing high-performance, reliable and maintainable code.
• Ability to write MapReduce jobs.
• Good knowledge of database structures, theories, principles, and practices.
• Ability to write Pig Latin scripts.
• Hands on experience in HiveQL.
• Familiarity with data loading tools such as Flume and Sqoop.
• Knowledge of workflow schedulers such as Oozie.
• Analytical and problem-solving skills applied to the Big Data domain.
• Proven understanding of Hadoop, HBase, Hive, and Pig.
• Good grasp of multi-threading and concurrency concepts.
• Fluent English.
