Senior Hadoop Engineer
JPS Tech Solutions LLC, Madison
Job Title: Senior Hadoop Engineer
Location: Madison, Wisconsin, 53703
Experience: 12+ Years
Employment Type: Contract

Job Description
We are seeking a highly experienced Senior Hadoop Engineer to lead the design, development, and optimization of our large-scale data processing and analytics environment.
The ideal candidate will have extensive hands-on expertise in Hadoop ecosystem tools and distributed data frameworks. This role involves working closely with data architects, analysts, and application teams to build scalable and secure big-data solutions that support business-critical analytics.
Key Responsibilities
- Design, build, and maintain Hadoop-based big data platforms and data pipelines.
- Implement and optimize large-scale data processing applications using tools such as HDFS, Hive, Spark, Impala, and HBase.
- Work with engineering and business teams to translate requirements into scalable data architectures.
- Improve the performance and reliability of Hadoop clusters, including monitoring, capacity planning, and tuning.
- Develop and manage ETL processes that integrate data from multiple sources.
- Ensure data security, governance, and compliance across all Hadoop environments.
- Automate operational tasks and support continuous deployment practices.
- Troubleshoot issues across Hadoop components and provide root-cause analysis.
- Support migration and modernization initiatives to cloud platforms where applicable.

Required Skills and Experience
- 12+ years of professional experience in data engineering or software engineering roles.
- Strong expertise in Hadoop ecosystem tools, including HDFS, YARN, Hive, Pig, Spark, Kafka, Sqoop, Oozie, and ZooKeeper.
- Proficiency in programming languages such as Java, Scala, and Python.
- Solid understanding of distributed systems, parallel processing, and performance optimization.
- Experience working with relational and NoSQL databases (e.g., Oracle, MySQL, HBase, Cassandra, MongoDB).
- Hands-on experience with data ingestion and ETL pipelines.
- Experience with version control, CI/CD tools, and Linux environments.
- Familiarity with cloud platforms such as AWS, Azure, or GCP (preferred).
- Strong analytical, problem-solving, and communication skills.

Preferred Qualifications
- Experience working in a large enterprise or government project environment.
- Certifications in Big Data, Cloud, or Data Engineering.
- Experience implementing real-time streaming solutions with Kafka and Spark Streaming.
Education Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
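As a small illustration of the filter-and-aggregate shape behind the ETL work this role describes, the sketch below uses plain Python (standard library only) to mimic a single pipeline stage: ingest raw event records, filter them, and aggregate per key. The event schema and field names here are hypothetical; in a real pipeline this stage would run at scale on Spark or Hive over data landed in HDFS.

```python
from collections import defaultdict

# Hypothetical raw events, as they might arrive from an ingestion layer
# (e.g. Sqoop imports or Kafka messages landed in HDFS).
raw_events = [
    {"user": "a", "action": "click", "ms": 120},
    {"user": "b", "action": "view",  "ms": 300},
    {"user": "a", "action": "click", "ms": 80},
    {"user": "b", "action": "click", "ms": 50},
]

def transform(events):
    """One ETL stage: keep only clicks, then sum latency per user."""
    totals = defaultdict(int)
    for event in events:
        if event["action"] == "click":
            totals[event["user"]] += event["ms"]
    return dict(totals)

print(transform(raw_events))  # {'a': 200, 'b': 50}
```

The same shape maps onto the distributed tools named above: a Spark `filter` followed by `reduceByKey`, or a Hive query with a WHERE clause and GROUP BY.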