
Data Quality Observability Engineer

Aroha Technologies
10 minutes ago
Contract
Washington
United States
Note: Any domain is acceptable. The rate is flexible and can go higher if needed. Please share candidates with 7+ years of experience.

Location: Seattle, WA (hybrid) - onsite 2-3 days per week is mandatory
Client: Aerospace
Requirement: Data Quality Observability Engineer (1 position)
Experience Required: 5-7 years
Duration: Contract position
Hourly Rate: $60 - $65/hr on W2 with Cyient; C2C rate: $75/hr

Role Summary:
The Data Quality/Observability Engineer owns the monitoring, alerting, lineage, and SLA reporting layer across all ingestion pipelines. This role ensures that once pipelines are in production, Boeing has full visibility into pipeline health, data freshness, and SLA compliance, and that issues are detected and escalated before they impact downstream consumers.

Key Responsibilities:
- Implement pipeline-level metrics collection: records in/out, processing lag, throughput, failure counts, and data freshness indicators.
- Build and maintain monitoring dashboards in CloudWatch and/or Grafana for all production ingestion pipelines.
- Configure alerting thresholds tied to agreed SLAs; ensure alerts trigger appropriately for pipeline lag, failures, and data quality breaches.
- Capture data lineage and metadata for every ingestion pipeline and publish to the Boeing Data/Developer Portal or catalog.
- Design and implement data quality rules (completeness, schema conformance, record counts, freshness) in collaboration with Data/Cloud Engineers and data governance stewards.
- Produce automated weekly and monthly SLA reports showing ingestion success rates, data freshness, incident counts, and trend analysis.
- Develop cost monitoring views for ingestion compute spend and provide optimization recommendations.
- Collaborate with Boeing's monitoring/observability team on dashboard and alerting integration.
- Support incident triage by providing pipeline health diagnostics and root-cause data.
- Maintain and evolve DQ and observability standards as new sources are onboarded each month.
Required Skills & Qualifications:
- 5-7 years of experience in data quality engineering, data observability, or data operations with a platform focus.
- Hands-on experience with CloudWatch (metrics, logs, alarms, dashboards) and Grafana.
- Strong experience implementing data quality frameworks: Great Expectations, dbt tests, Deequ, Soda, or equivalent.
- Familiarity with data lineage and cataloging tools: AWS Glue Catalog, Apache Atlas, DataHub, OpenLineage, or similar.
- Proficiency in SQL and Python for metrics collection, reporting automation, and DQ rule implementation.
- Experience building SLA dashboards and automated reporting for data pipelines.
- Understanding of data observability concepts: freshness, volume, schema change detection, distribution anomalies.
- Knowledge of AWS cost monitoring tools (Cost Explorer, CUR, CloudWatch billing metrics).
- Strong communication skills for producing reports and presenting SLA performance to stakeholders.
- Experience working with data governance teams on data contracts, classification, and quality rules.

Preferred Skills:
- Experience with dedicated data observability platforms (Monte Carlo, Datafold, Bigeye).
- Familiarity with OpenTelemetry or distributed tracing for data pipelines.
- Experience with FinOps practices for data platform cost optimization.