About the Job
We are looking for a candidate with strong expertise in SQL, Python, and Big Data technologies.
Key Responsibilities
- Assist in developing and maintaining scalable data pipelines using Python and SQL.
- Support data modeling activities for analytics and reporting workflows.
- Perform data cleansing, transformation, and validation using PySpark.
- Collaborate with data engineers and analysts to ensure high data quality and availability.
- Work with Hadoop ecosystem tools to process and manage large datasets.
- Contribute to data documentation and maintain version-controlled scripts and workflows.
Job Requirements
- Strong SQL skills, including the ability to write complex queries with joins and aggregations.
- Proficiency in Python for data manipulation, automation, and scripting.
- Good understanding of data modeling concepts such as Star/Snowflake schemas and Fact/Dimension tables.
- Familiarity with Big Data and Hadoop ecosystem tools like HDFS, Hive, and Spark.
- Prior hands-on experience with PySpark is a strong advantage.
- Strong analytical thinking and problem-solving abilities.