Confidential REF: #1045

The product the Company develops is a suite of automation and optimization tools designed to help modern publishers gain deeper insight into what their audiences need, make smarter decisions with their resources, and get the most value out of their content.

We’re looking for experienced individuals who have deep knowledge of data warehousing, data ingestion, databases, and distributed systems, and who are proficient in writing custom libraries but also know when to reach for off-the-shelf solutions. Ideal candidates are self-motivated engineers with a passion for both business and technology innovation who adapt quickly to changing technologies. We value people who are passionate about system design and have an eye for improving product quality. We currently work with Scala, Kotlin, Java, Python, Postgres, Snowflake, Kafka, Spark, and Flink.


Responsibilities:

  • Develop and optimize system components for maximum performance and scalability across a wide range of environments.
  • Commit to collaborative problem solving, sound design, and product quality.
  • Ensure that system components and the overall application are robust and easy to maintain.
  • Contribute to backlog reviews, technical solution design, and implementation.
  • Deliver software in a timely manner without compromising product quality.

Requirements:

  • Strong analysis and problem-solving skills.
  • Deep understanding of data modelling best practices, including normalization/denormalization, Star Schema, Snowflake Schema, Data Vault, and Change Data Capture.
  • Successfully implemented and released data warehouse models, along with all the data pipelines needed for ingestion, modelling, and reporting.
  • Deep understanding of enterprise warehouse databases, including clustering, loading, unloading, partitioning, and maintenance.
  • Formal training in software engineering, computer science, or computer engineering.
  • Experience working as part of a mature engineering team.

Ideal Candidate:

  • Strong working knowledge of Snowflake, Snowpipe, and Snowflake Pipelines.
  • Deep understanding of the differences between OLAP and OLTP systems.
  • Successfully implemented real-time and batch analytics using Kafka, Flink, Apache Beam, and/or Google Dataflow.
  • Strong working knowledge of non-EDW warehouses, including data lakes and NoSQL analytical databases.
  • Working knowledge of containerization and build pipelines.
  • Successfully implemented data systems for very large data volumes, such as click streams and/or IoT sensor data.

You will be part of a national icon and Canada’s most recognized media brand.

We’re also an international award winner for data visualization, design, and creative storytelling; a digital innovator with a global client list for our in-house AI-powered optimization, prediction, and automation platform; and a place where Canadians come for the best journalism in the country.

We aim to reflect Canada in the stories we tell and in our workforce.

We understand our staff have lives outside of the office, and we offer flexible work arrangements and support programs. We also provide training and mentorship to ensure you’re able to grow and challenge yourself and your abilities.

Application form

As established in Law 18.331 - PROTECTION OF PERSONAL DATA AND HABEAS DATA ACTION - the personal and professional information provided may be used by Búsquedas IT in recruitment and personnel administration processes, or transferred to clients, key partners, or other companies.