- Own architecture and delivery of the company-wide analytics warehouse on AWS + Redshift, serving Finance/FP&A, Marketing, and Product teams across B2B and B2C operations.
- Design and implement Python-based API ingestion pipelines with staged loading into Redshift, producing consistent schemas for downstream modeling and BI consumption.
- Primary contributor to the dbt transformation layer: dimensional modeling, reusable macros, incremental models, tests, and lineage documentation.
- Define data contracts and quality guardrails: schema expectations, validation rules, and anomaly checks to protect stakeholder-facing dashboards.
- Migrated Spark/PySpark transformations into dbt + SQL, achieving a 70% reduction in transformation runtime with improved auditability.
- Own observability for pipeline and warehouse health using Prometheus + Grafana: operational dashboards, alerting, and SLA monitoring.
- Drive infrastructure-as-code for all data platform components using Terraform, with GitOps workflows for reproducible deployments.
- Supported migration of core data workloads from PostgreSQL to Amazon Redshift, improving scalability and analytics performance.
- Maintained and extended batch pipelines and Airflow orchestration, contributing to a standardized data platform foundation.
- Integrated multiple data sources via APIs and delivered cleaned datasets for reporting and analysis.
- Migrated CI pipelines from Concourse CI to GitHub Actions, improving deployment consistency and feedback loops.
- Helped introduce and adopt dbt practices for version-controlled, documented, and testable data transformations.
- Built and maintained Airflow workflows to extract and transform product and marketing data into analytics-ready datasets.
- Developed SQL/MySQL transformations to standardize reporting tables and improve data consistency across dashboards.
- Created and maintained Tableau dashboards translating business questions into clear KPIs for product and marketing stakeholders.
- Implemented data validation checks to improve reliability of scheduled refreshes and reduce manual reporting effort.
Apache Airflow, MySQL, SQL, Tableau, ETL
Research Development Engineering Intern at Faculty of Mechanical Engineering, University of Belgrade
Belgrade, Serbia
- Worked on wind tunnel experiments, capturing real-time sensor data (air speed, pressure, force) from test objects and prototypes.
- Built data processing workflows in Apache Spark / PySpark to transform and analyze experimental runs across time windows.
- Developed analysis and visualization notebooks using Python (Matplotlib, Seaborn) to validate experiments and communicate results.
- Documented experimental setups, data definitions, and analysis outputs to ensure reproducibility across test runs.
Python, Spark / PySpark, Matplotlib, Seaborn
Data Warehouse Administrator at Flash Europe d.o.o
Belgrade, Serbia
- Designed and implemented the company's analytical data platform on PostgreSQL, consolidating sales, inventory, and finance domains.
- Developed Python ETL pipelines for data ingestion, transformation, and normalization into standardized reporting tables.
- Built batch processing workflows with Linux cron and Luigi, modeling task dependencies for repeatable runs and controlled backfills.
- Implemented internal tooling using Django for data entry, cataloging, and governance workflows integrated with the PostgreSQL warehouse.
Python, PostgreSQL, SQL, Luigi, Django, Linux, ETL
Education
M.Sc. in Information Technologies
The University of Belgrade, Faculty of Mechanical Engineering
Belgrade, Serbia
Thesis Title: "Real time analysis on data stream using Apache Spark"
GPA 9.45/10
B.Sc. in Mechanical Engineering
The University of Belgrade, Faculty of Mechanical Engineering
Belgrade, Serbia
Thesis Title: "Normal Forms and Normalization in contemporary
databases"