Data Engineering 2
JLL Technologies COE
What this job involves:
About the role
#JLLTechAmbitions
The JLL Technologies Enterprise Data team is a newly established central organization that oversees JLL’s data strategy. We are seeking data professionals to work with our colleagues at JLL around the globe to provide solutions, develop new products, and build enterprise reporting and analytics capabilities that reshape the business of Commercial Real Estate using the power of data. And we are just getting started on that journey!
We are looking for a self-starting Data Engineer to join our Enterprise Data team and work in a diverse, fast-paced environment. This is an individual contributor role responsible for designing and developing data solutions that are strategic for the business and built on the latest technologies and patterns. It is a global role that requires partnering with the broader JLLT team at the country, regional, and global levels, drawing on in-depth knowledge of data, infrastructure, and technologies, along with hands-on data engineering experience.
Responsibilities:
- Design, develop, and maintain scalable and efficient cloud-based data infrastructure using SQL and PySpark.
- Collaborate with cross-functional teams to understand data requirements, identify potential data sources, and define data ingestion architecture.
- Design and implement an efficient data pipeline framework, ensuring the smooth flow of data from various sources to data lakes, data warehouses, and analytical platforms.
- Troubleshoot and resolve issues related to data processing, data quality, and data pipeline performance.
- Stay updated with emerging technologies, tools, and best practices in cloud data engineering, SQL, and PySpark.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver data solutions that meet their needs.
- Document data infrastructure, data pipelines, and ETL processes, ensuring knowledge transfer and smooth handovers.
Sounds like you? To apply, you need to have:
Requirements:
- Bachelor's degree in Computer Science, Data Engineering, or a related field. (A master's degree is a plus.)
- Minimum of 3 years of experience in data engineering or full-stack development, with a focus on cloud-based environments.
- Strong expertise in SQL and PySpark, with a proven track record of working on large-scale data projects.
- Experience with at least one cloud platform, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- Proficiency in designing and implementing data pipelines, ETL processes, and workflow automation.
- Familiarity with data warehousing concepts, dimensional modeling, and data governance best practices.
- Strong problem-solving skills and ability to analyze complex data processing issues.
- Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams.
- Attention to detail and a commitment to delivering high-quality, reliable data solutions.
- Ability to adapt to evolving technologies and work effectively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Experience managing big data technologies (e.g., Spark, Python, serverless stacks, APIs).
- Familiarity with cloud-based data warehousing platforms (e.g., Amazon Redshift, Google BigQuery, Snowflake).
- Knowledge of data visualization tools (e.g., Tableau, Power BI) for creating meaningful reports and dashboards.