Senior Data Engineer (confidential replacement)

Technical Requirements — Must Have

● SQLMesh: Incremental models, full-refresh strategies, model testing, Medallion architecture

● Dagster: Asset pipelines, sensors, scheduling, upstream/downstream dependency management

● BigQuery: Partitioning, clustering, MERGE with partition pruning, cost-aware query design, INFORMATION_SCHEMA

● PostgreSQL: Triggers, stored functions, PL/pgSQL fluency, trigger-to-model conversion patterns

● Python: Apache Arrow, Polars (or equivalent) for high-throughput Reverse ETL data movement

● Data validation: row-count reconciliation, column-sum diffing, data-diff tooling

● CI/CD for data: Git PR workflows, automated model testing, Azure DevOps experience

● Idempotency discipline: all models must be re-runnable with zero side effects
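The validation requirement above (row-count reconciliation and column-sum diffing) can be sketched minimally in Python. This is an illustrative sketch, not the client's tooling: the `reconcile` function name and the list-of-dicts table representation are assumptions, and `Decimal` is used so financial column sums do not drift through float rounding.

```python
from decimal import Decimal


def reconcile(source_rows, target_rows, sum_columns):
    """Row-count and column-sum reconciliation between two table extracts.

    source_rows / target_rows: lists of dicts, one dict per row.
    sum_columns: column names whose sums must match exactly.
    Returns a dict of discrepancies; an empty dict means the extracts agree.
    """
    diffs = {}
    # Row-count reconciliation.
    if len(source_rows) != len(target_rows):
        diffs["row_count"] = (len(source_rows), len(target_rows))
    # Column-sum diffing with exact decimal arithmetic.
    for col in sum_columns:
        src = sum(Decimal(str(r[col])) for r in source_rows)
        tgt = sum(Decimal(str(r[col])) for r in target_rows)
        if src != tgt:
            diffs[col] = (src, tgt)
    return diffs
```

In practice the same check would run against query results from source and destination rather than in-memory lists, but the acceptance logic — empty diff dict or the pipeline fails — is the part that matters.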

Nice to Have

● iGaming or sportsbook domain knowledge: member tiers, bet/deposit fact tables, fiscal period logic

● Experience with Datastream, Airbyte, or other CDC ingestion tools

● dbt experience (cross-applicable to SQLMesh patterns)

● Terraform or Pulumi on GCP

● Grafana or Datadog for pipeline observability

Ideal Candidate

● 5+ years in data engineering with a track record of shipping production pipelines — not just building in staging

● Has executed at least one legacy-to-modern DWH migration (not just contributed to one)

● Comfortable reading, debugging, and rewriting PL/pgSQL business logic

● Writes clean, testable SQL and Python — and holds the same standard for teammates

● Works well under an experienced technical lead: takes direction, flags risk early, executes independently

● Does not need hand-holding on tooling — picks up SQLMesh or Dagster quickly if not already proficient

● Available to start March 2026; commitment to continue into Phase 1b preferred
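The trigger-to-model conversion pattern referenced in the requirements can be illustrated with a small Python sketch. The table and column names (`member_id`, `stake`) are hypothetical, not the client's schema; the point is the shape of the conversion, where row-by-row trigger maintenance becomes a set-based, re-runnable recomputation.

```python
def rebuild_member_turnover(bet_rows):
    """Set-based replacement for a row-level trigger.

    A legacy PL/pgSQL trigger might increment a member's turnover on every
    bet INSERT, accumulating state as a side effect. The model equivalent
    recomputes the aggregate from the bet fact rows each run, so re-running
    it produces the same result with zero side effects (idempotent).
    """
    totals = {}
    for row in bet_rows:
        member = row["member_id"]
        totals[member] = totals.get(member, 0) + row["stake"]
    return totals
```

Running the rebuild twice yields identical output, which is exactly the re-runnability discipline the role demands of every model.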

Code Quality Standards

All models and assets produced must adhere to:

● Explicit partitioning and clustering keys on all destination tables

● No broad DISTINCT — use GROUP BY or window functions

● MERGE with partition pruning for upserts; no DELETE+INSERT anti-patterns

● No temp tables or CTEs materialised to disk

● Zero material drift on financial columns (Turnover, Revenue) — the mandatory acceptance gate

● Python adhering to PEP 8; SQL modular, idempotent, and reusable

Working Norms

● Stand-up three times per week with the Fractional CTO — focused on blockers

● Weekly sync reviewing milestones, risks, and forward planning

● All work submitted via Pull Request — reviewed and approved by Client Data Engineering before merge

● Named to the engagement — no rotation without client approval

Contract Details: Contract


About the job

Location: Ho Chi Minh City
Created On: 2026-03-31
Working Model: WFH
Job Level: Senior