Data Engineer III
Bayone Solutions Inc
Texas, US (Onsite)
Full-Time
Key Responsibilities:
- Design, develop, and maintain complex data pipelines and data infrastructure, ensuring data availability, quality, and integrity.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver robust data solutions.
- Build and optimize ETL processes for extracting, transforming, and loading large datasets across various sources.
- Design and implement scalable data architectures and frameworks that support advanced analytics and machine learning applications.
- Troubleshoot, debug, and optimize existing data systems to improve performance and reliability.
- Develop and manage databases and data storage solutions, ensuring efficient data retrieval and processing.
- Ensure proper data governance, security, and compliance with industry regulations and company policies.
- Provide mentorship and guidance to junior and mid-level data engineers, fostering a culture of continuous learning and improvement.
- Stay up to date with emerging trends and technologies in the data engineering space and evaluate their potential applications.
- Work with cloud-based platforms (AWS, GCP, Azure) to manage large-scale data processing and storage.
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field. Master's degree is a plus.
- 5+ years of experience as a Data Engineer, with at least 2 years in a senior or Level 3 data engineering role.
- Expertise in SQL and experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
- Strong experience with data pipeline orchestration tools (e.g., Apache Airflow, Luigi, Prefect).
- Proficiency in programming languages such as Python, Java, or Scala for data processing and automation.
- Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).
- In-depth knowledge of cloud platforms (AWS, GCP, Azure) and cloud data services (e.g., Redshift, BigQuery, Databricks).
- Familiarity with data warehousing concepts and tools (e.g., Snowflake, Amazon Redshift, Google BigQuery).
- Knowledge of data modeling, data lake architectures, and data governance best practices.
- Strong problem-solving skills and the ability to work with complex data sets and systems.
- Excellent communication and collaboration skills to work with cross-functional teams.