This course introduces and applies Python development best practices, including design principles, clean coding, virtual environments, project/folder setup, configuration, logging, exception handling, linting, dependency management, performance optimization with profiling, unit testing, integration testing, and dockerization.
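As a rough illustration of two of these practices (not code from the course), the sketch below shows a module-level logger combined with explicit exception handling around an I/O step; the read_config helper and its key=value file format are hypothetical examples.

```python
import logging

# Minimal logging setup: timestamp, level, logger name, message.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)


def read_config(path: str) -> dict:
    """Load a simple key=value config file, logging and re-raising failures."""
    config = {}
    try:
        with open(path, encoding="utf-8") as handle:
            for line in handle:
                line = line.strip()
                if line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    config[key.strip()] = value.strip()
    except OSError:
        # Log the full traceback, then let the caller decide how to react.
        logger.exception("Could not read configuration file %s", path)
        raise
    logger.info("Loaded %d configuration entries from %s", len(config), path)
    return config
```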
What you will learn in this course:
- How to write professional ETL pipelines in Python.
- Steps to write production-level Python code.
- How to apply functional programming in Data Engineering.
- How to achieve good object-oriented code design.
- How to use a meta file for task control.
- Best coding practices for Python in ETL/Data Engineering.
- How to implement a Python pipeline that extracts data from an AWS S3 source, transforms it, and loads it to another AWS S3 target (a rough sketch follows this list).
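The last point is the core project of the course. Purely as a hypothetical sketch (the bucket names, object keys, transformation, and Parquet target format are assumptions, not the course's actual code, and writing Parquet requires pyarrow to be installed), an S3-to-S3 extract/transform/load step with boto3 and pandas could look like this:

```python
import io

import boto3
import pandas as pd


def extract(s3_client, bucket: str, key: str) -> pd.DataFrame:
    """Read one CSV object from the source bucket into a DataFrame."""
    obj = s3_client.get_object(Bucket=bucket, Key=key)
    return pd.read_csv(io.BytesIO(obj["Body"].read()))


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Example transformation: drop fully empty rows and add a load timestamp."""
    df = df.dropna(how="all")
    df["loaded_at"] = pd.Timestamp.now(tz="UTC")
    return df


def load(s3_client, df: pd.DataFrame, bucket: str, key: str) -> None:
    """Write the DataFrame to the target bucket as a Parquet object."""
    buffer = io.BytesIO()
    df.to_parquet(buffer, index=False)
    s3_client.put_object(Bucket=bucket, Key=key, Body=buffer.getvalue())


if __name__ == "__main__":
    # Hypothetical bucket and key names for illustration only.
    s3 = boto3.client("s3")
    frame = extract(s3, "my-source-bucket", "raw/report.csv")
    load(s3, transform(frame), "my-target-bucket", "processed/report.parquet")
```

Splitting the pipeline into extract, transform, and load functions keeps each step independently testable, which ties in with the unit and integration testing topics listed above.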
Who should attend:
- Data engineers, scientists, and developers who want to write professional, production-ready data pipelines in Python.
- Anyone who wants to write production-ready data pipelines in Python.
Writing Production Ready ETL Pipelines in Python/Pandas Course Specifications:
- Publisher: Udemy
- Instructor: Jan Schwarzlose
- Language: French
- Level: All levels
- Duration: 7 hours and 3 minutes
- Lectures: 78
Requirements:
- Basic knowledge of Python and Pandas is desirable.
- Basic knowledge of ETL and AWS S3 is desirable.
Installation guide: after extracting, watch with the player of your choice.
Subtitle: English
Quality: 720p
Download links:
Password: free download software
Size: 2.48 GB