Apr 23 | AgileEngine | Corrientes
Apply on Kit Empleo: kitempleo.com.ar/empleo/nvr41
AgileEngine is an Inc. **** company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries.
We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.
WHY JOIN US
If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!
ABOUT THE ROLE
As a Data Engineer, you'll be part of a small, senior team building a cloud-native data platform on AWS from the ground up — moving raw data through Bronze, Silver, and Gold layers into clean, trusted, analytics-ready datasets that directly power business decisions.
Working closely with the Data Architect and Head of Data, you'll have real visibility into the full data lifecycle: pipeline design, data quality, governance, and delivery.
If you want a role where your engineering work is foundational rather than incremental, and where the stack — S3, Glue, Redshift, dbt, Airflow, PySpark — is genuinely modern, this is it.
WHAT YOU WILL DO
- Build and maintain scalable ETL and ELT pipelines across AWS services;
- Implement medallion architecture for data ingestion, transformation, and delivery;
- Collaborate with data architects on data modeling and schema design;
- Develop ingestion frameworks for structured, semi-structured, and streaming data;
- Integrate data quality, lineage, and observability into pipelines;
- Work with analytics and business teams to deliver consistent and well-documented data;
- Write clean and testable Python and SQL code following best practices;
- Support data governance and security standards, including compliance requirements;
- Monitor pipeline performance, troubleshoot issues, and optimize scalability and cost efficiency.
MUST HAVES
- 5+ years of experience as a Data Engineer working with AWS data ecosystems;
- Strong experience with AWS services (S3, Glue, Lambda, Kinesis, Redshift, Athena, Step Functions);
- Proficiency in Python, PySpark, and SQL for data transformation;
- Strong understanding of ETL design patterns, batch and streaming data processing;
- Knowledge of data modeling (star schema, snowflake schema, incremental processing);
- Experience with orchestration tools (Airflow, Step Functions, dbt);
- Understanding of data governance, data quality frameworks, and CI/CD for pipelines;
- Experience working in agile, cross-functional environments.

PERKS AND BENEFITS
Professional growth: Mentorship, TechTalks, and personalized growth roadmaps.
Competitive compensation: USD-based pay with education, fitness, and team activity budgets.
Exciting projects: Modern solutions with Fortune 500 and top product companies.
Flextime: Flexible schedule with remote and office options.
Data Engineer Id52084 (Corrientes)