Atlanta, GA
Full Time
3 weeks ago


This telecom mogul is looking for skilled data engineers to join their data ingestion team for the build-out of their new Integrated Data Warehouse. In this role, you will review and analyze user stories, high-level solution designs (HLSDs), source systems, and source-to-target mappings (S2TMs), and then develop code to ingest the data into various database platforms and services. Once the data is ingested, you will be responsible for verifying its accuracy against the requirements, then developing scripts to load it from the databases and data lakes into the staging area. From there, you will develop scripts to extract, transform, and load the data into the core tables according to the logic in the S2TM documents, and then develop scripts to import the data from the core tables back into the data lake. You will also develop QA test scenarios and perform QA testing, including but not limited to functional, end-to-end, and data validation testing. You will oversee production deployments and support the AppOps team during deployments, data loads, troubleshooting, and regular production break-fix activities, as needed. As part of this DevOps role, you will participate in daily standups and share regular project status with the DevOps Track Lead and Scrum Master, as well as with management and stakeholders.
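For illustration, the ingestion work described above might look something like the minimal PySpark sketch below. The paths, column names, and schedule are hypothetical placeholders, not project specifics; the actual feeds, formats, and targets would be defined by the HLSDs and S2TM documents for each source.

```python
# Minimal PySpark ingestion sketch (illustrative only; paths and column
# names are hypothetical, not taken from the actual project).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw_ingest").getOrCreate()

# Read a raw extract dropped in a landing zone (assumed CSV with a header).
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/mnt/landing/billing/2024-06-01/")
)

# Light standardization before the data lands in the data lake.
ingested = (
    raw.withColumnRenamed("acct_no", "account_number")
       .withColumn("load_date", F.current_date())
       .withColumn("ingest_ts", F.current_timestamp())
)

# Persist to the raw zone of the data lake as Delta, partitioned by load date.
(
    ingested.write
    .format("delta")
    .mode("append")
    .partitionBy("load_date")
    .save("/mnt/datalake/raw/billing/")
)
```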


- Experience with ETL

- Azure Databricks + Azure Data Factory (ADF)

- Spark, Scala, or Python


- Enterprise-Level Experience

- Requirement Analysis -- Review and analyze user stories.

- Experience with Python for Automation

- Experience with Shell Scripting

- Review HLSDs with solution architects.

- Analyze source systems, perform data modeling and source-to-target mapping (S2TM), and create relevant artifacts as needed.

- Data Ingestion and Processing -- Analyze the various sources and develop code to ingest the data into the Hadoop Data Lake.

- Data Cleansing -- Ensure the accuracy and completeness of the data and, if required, clean it up to meet the requirements.

- Dispatch / Export -- Develop scripts to load the data from the Hadoop Data Lake into the destination staging area.

- Core Loads -- Develop scripts to extract, transform, and load (ETL) the data into the core tables as per the logic laid out in the source-to-target mapping (S2TM) document (see the sketch after this list).

- Import -- Develop scripts to import the data from the core tables into the Hadoop Data Lake.

- Develop QA test scenarios/cases as needed and perform QA testing, including but not limited to functional, end-to-end, and data validation testing.

- Oversee production deployments.

- Support the AppOps team during deployments, data loads, and troubleshooting, as well as during regular production break-fix activities, as needed.

- Participate in daily standups, and share regular project status with the DevOps Track Lead and/or DevOps Scrum Master, the T-Mobile Project Manager, the T-Mobile Manager, and/or other stakeholders as needed.

- CI/CD support as needed.

- Any other relevant analysis, design, development, and application support activities assigned to the team members from time to time.
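
To make the core-load and data-validation responsibilities above concrete, here is a minimal PySpark sketch, assuming Delta tables in the data lake and a single daily load. The table names, columns, and mapping rules are hypothetical stand-ins for what the S2TM documents would actually specify.

```python
# Minimal core-load and validation sketch (illustrative; table names,
# columns, and mapping rules are hypothetical stand-ins for the S2TM logic).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("core_load").getOrCreate()

staged = spark.read.format("delta").load("/mnt/datalake/staging/billing/")

# Apply the transformations laid out in the S2TM document (example rules).
core = (
    staged
    .filter(F.col("account_number").isNotNull())
    .withColumn("charge_amount", F.col("charge_amount").cast("decimal(18,2)"))
    .withColumn("load_date", F.current_date())
    .select("account_number", "charge_type", "charge_amount", "load_date")
)

# Load into the core table (assumes a 'core' schema already exists).
core.write.format("delta").mode("append").saveAsTable("core.billing_charges")

# Basic data-validation checks of the kind run during QA testing.
staged_rows = staged.count()
loaded_rows = core.count()
null_keys = core.filter(F.col("account_number").isNull()).count()
assert null_keys == 0, "Null business keys made it into the core load"
print(f"staged={staged_rows}, loaded={loaded_rows}, "
      f"filtered_out={staged_rows - loaded_rows}")
```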


- Experience with, or a strong understanding of, Snowflake
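
Where Snowflake is one of the target platforms, the same validation mindset applies there. Below is a minimal sketch using the snowflake-connector-python package; the connection details are placeholders (real credentials would come from a secrets store) and the core table name is hypothetical.

```python
# Minimal Snowflake validation sketch (connection details and table name
# are placeholders, not project specifics).
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<service_user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="CORE",
)
try:
    cur = conn.cursor()
    # Spot-check recent daily row counts in a core table.
    cur.execute(
        "SELECT load_date, COUNT(*) FROM billing_charges "
        "GROUP BY load_date ORDER BY load_date DESC LIMIT 7"
    )
    for load_date, row_count in cur.fetchall():
        print(load_date, row_count)
finally:
    conn.close()
```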

Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.
