Responsibilities
- Design, develop, and implement end-to-end data pipelines based on mission needs and objectives, using ETL processes and technologies such as Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks (a sketch of a typical pipeline follows this list).
- Work closely with cross-functional teams to understand data requirements and design optimal data models and architectures.
- Optimize data pipelines for scalability, reliability, and high-performance processing, and provide innovative solutions to troubleshoot complex data-related problems involving large datasets.
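For context, here is a minimal PySpark sketch of the kind of end-to-end ETL pipeline described above. The storage paths and column names (event_id, event_ts) are hypothetical placeholders, not part of this posting; a production Databricks job would add mission-specific transformations, validation, and orchestration.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical input/output locations, for illustration only.
RAW_PATH = "s3://example-bucket/raw/events/"         # raw JSON landing zone
CURATED_PATH = "s3://example-bucket/curated/events"  # partitioned Parquet output

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read semi-structured JSON records.
raw = spark.read.json(RAW_PATH)

# Transform: deduplicate, type the timestamp, derive a partition column,
# and drop rows missing a key. Column names are assumed, not from the posting.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_id").isNotNull())
)

# Load: write analytics-ready Parquet, partitioned by day.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(CURATED_PATH)

spark.stop()
```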
Qualifications
- Bachelor’s degree (preferably in Computer Science, Information Technology, Computer Engineering, or a related IT discipline) or equivalent experience
- 2+ years of data engineering experience
- 2+ years of experience with ETL processes and technologies such as Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks
- 2+ years of experience in government consulting
- Must be legally authorized to work in the United States without the need for employer sponsorship, now or at any time in the future
- Must live within a commutable distance (approximately a 100-mile radius) of one of the following Delivery locations: Atlanta, GA; Charlotte, NC; Dallas, TX; Gilbert, AZ; Houston, TX; Lake Mary, FL; Mechanicsburg, PA; or Philadelphia, PA, with the ability to commute to the assigned location for the day without the need for overnight accommodations
- Expectation to co-locate at your designated Delivery location up to 30% of the time, based on business needs; this may include a maximum of 10% overnight client/project travel
Preferred
- AWS, Azure, and/or Google Cloud Platform certification
- Experience working with agile development methodologies, such as sprint-based Scrum
- Strong problem-solving skills and the ability to resolve complex data-related issues
- Data governance experience