About



Name: Maarten Peters
Title: Senior Principal Data Engineer/Scientist
Nationality: Dutch
Residence: Voorburg, The Netherlands
E-mail: maarten.peters@valcon.com
Mobile: +31 6 13680558
LinkedIn: https://linkedin.com/in/maartenjpeters

Career Overview

Education

Certificates

Profile

Maarten Peters is a highly adaptable data scientist and engineer with a varied background in data applications. He started his career as a Business Analyst at Sanoma Media B.V., where he developed insights into real-time bidding data for the programmatic advertising department and collaborated with data scientists on revenue optimization. Moving to Jibes as a BI Consultant, he specialized in data warehousing and engineering, before transitioning to Viqtor Davis and Valcon as a Principal Data Engineer and Data Scientist, working with platforms such as Azure, Databricks and Python. His personal interests include electronics, DIY and sports.

Project experience

2022 - 2022: IKEA - Data Engineer

As a Data Engineer at IKEA on the Sustainability project, I worked on the Retail footprint, where I was mainly responsible for the design and implementation of the ETL processes within Azure Databricks. The data models followed a calculation model designed by IKEA, and the processes had to fit within an enterprise Data Mesh lakehouse architecture. Processes were developed with CI/CD, data quality testing, data lineage and schema validation. In the last weeks of the project, we also played a role in developing an NLP classification model for linking bills of materials.

Responsibilities

Tools

Agile/Scrum, Databricks, Azure DevOps, Azure Data Factory, CI/CD, Python, SQL

2021 - 2022: Schiphol - Data Engineer

Developing Python/Scala applications for the Central Data Factories team to ingest real-time data into the Core Data Platform, using GitLab, OpenShift, Databricks and Confluent, hosted on Microsoft Azure. Applications provided real-time data streaming, batch replay mechanisms and full CI/CD development with infrastructure-as-code in Terraform.

Responsibilities

Tools

Agile/Scrum, Databricks, OpenShift, Kubernetes, Confluent, Kafka, Docker, Apache Airflow, Infrastructure-as-code, Terraform, GitLab, CI/CD, Python, Scala, SQL

2022 - 2022: Santander - Data Scientist

Proof-of-concept implementation of an investment appetite classification model, built in Squirro's low-code product. Responsible for model development and validation.

Responsibilities

Tools

Agile/Scrum, Python, Squirro

2022 - 2022: LCPS - BI Consultant

Data warehouse development on COVID-19 patient transfer data and implementation of Power BI reports for the Dutch government, providing insights into the pandemic and facilitating response measures.

Responsibilities

Tools

Azure Data Factory, Microsoft SQL Server, Microsoft Power BI, SQL, DAX

2020 - 2021: Stedin - Data Engineer

Collaborating on the development of a big data platform, building FastAPI applications deployed in Kubernetes containers and aiding in the deployment of data science applications. All development was performed via CI/CD and tasks were delegated with Scrum.

Responsibilities

Tools

Agile/Scrum, HDInsight, Apache Hive, Apache NiFi, Informatica Big Data Management, FastAPI, CI/CD, Python, Kubernetes, Docker, Azure Functions, Microsoft SQL Server, SQL

2019 - 2020: Fontem Ventures - Data Engineer

Implementing a lakehouse architecture for e-commerce analysis on Databricks and Power BI, with infrastructure-as-code and Azure DevOps CI/CD pipelines.

Responsibilities

Tools

Agile/Scrum, Python, Infrastructure-as-code, Terraform, Databricks, CI/CD, Azure DevOps, Azure Data Factory, Microsoft SQL Server, SQL, Microsoft Power BI

2019 - 2019: Port of Rotterdam - BI Consultant

Developing a data warehouse implementation for the OnTrack application in PostgreSQL, with ETL performed by AWS Lambda and infrastructure-as-code in Terraform.

Responsibilities

Tools

Agile/Scrum, AWS Lambda, PostgreSQL, Python, R, SQL, Terraform, Infrastructure-as-code

2018 - 2019: Sligro - BI Consultant

Implementing an on-premises data warehouse in Microsoft SQL Server, with a data vault layer and a reporting data model.

Responsibilities

Tools

Agile/Scrum, Microsoft SQL Server, SQL, Qlik Sense, Data Vault

2018 - 2018: PostNL - BI Consultant

As a BI Consultant I was responsible for developing reports in Power BI and developing data models in Microsoft SQL Server.

Responsibilities

Tools

Microsoft SQL Server, SQL, Power BI Desktop

2018 - 2018: VodafoneZiggo - BI Consultant

Report development and consulting in Qlik Sense, along with development on reporting data in Microsoft SQL Server.

Responsibilities

Tools

Agile/Scrum, Microsoft SQL Server, Oracle SQL Developer, SQL, Qlik Sense Enterprise, Qlik Sense Desktop, Python

2017 - 2018: Connexxion - BI Consultant

Developed reports in Power BI and implemented a SQL Server data warehouse in Transact-SQL and SSIS, along with a data vault and reporting layer. Also developed a real-time Power BI dashboard providing insights into sensor data from buses for maintenance.

Responsibilities

Tools

Agile/Scrum, Microsoft SQL Server, SQL Server Integration Services, SQL, Microsoft Power BI, Team Foundation Server

2017 - 2017: Nationale Nederlanden - BI Consultant

Report development in Microsoft Power BI and SAP BusinessObjects, with data modelling in an Oracle data warehouse. Also implemented R/Python scripts for automated on-premises data refreshes.

Responsibilities

Tools

Microsoft SQL Server, Oracle Data Warehouse, SQL, SQL Server Integration Services, SAP BusinessObjects

2017 - 2017: Irdeto - BI Consultant

Development on the Microsoft SQL Server data warehouse, responsible for migrating the data ingestion from Microsoft Dynamics with SSIS and BIML.

Responsibilities

Tools

Microsoft SQL Server, SQL, SQL Server Integration Services, Tableau

2017 - 2017: PostNL - BI Consultant

As a BI Consultant at PostNL, I developed reports in Amazon QuickSight on top of a self-developed data warehouse built on Amazon Redshift and AWS Lambda, following a migration from Microsoft Access and VBScript.

Responsibilities

Tools

Microsoft Access, AWS Redshift, AWS Lambda, Data modelling, SQL, PostgreSQL, Python

2013 - 2016: Sanoma Media - Business Analyst

At Sanoma Media B.V. there was a growing focus on the digital ad marketplace, with increasing attention to programmatic advertising. Ads were served via real-time bidding (RTB), and ever-larger volumes of data were collected. To visualize this information, a big data environment was developed with Hadoop/Apache Hive as the data lake and QlikView as the reporting tool.

I was responsible for developing queries against Apache Hive and designing ETL processes for data acquisition into QlikView. I also developed the front-end of almost all reports, with varying degrees of flexibility, allowing users to select, compare and analyze the data from many angles.

Responsibilities

Tools

Agile/Scrum, R, QlikView, Apache Hive, Data modelling, SQL, PHP, HTML5, AWS, CSS, JavaScript, GitHub, Bash

Skills and expertise

Key skills

Coding

Python: ★★★★
Scala: ★★★☆
SQL: ★★★★
Bash: ★★☆☆
R: ★★☆☆

Tools

Kafka: ★★☆☆
Databricks: ★★★★
IPython: ★★★☆
Git: ★★★☆
Kubernetes: ★★☆☆
Microsoft SQL Server: ★★★★
Microsoft Power BI: ★★★★

Consulting

Business Analytics: ★★★☆
Capability Building: ★☆☆☆
Project Acquisition: ★☆☆☆
Project Definition: ★★☆☆
Project Management: ★★★☆
Solution Design: ★★★☆
Stakeholder Management: ★★☆☆

Organisation

Administration: ★★★☆
Domain Understanding: ★★☆☆
Conceptual Reasoning: ★★★☆
Team Management: ★★☆☆

Data & techniques

Structured Data (CSV, JSON, Parquet, Delta, etc.): ★★★★
Unstructured Data (web pages, PDF, images, etc.): ★★☆☆
Databases: ★★★☆
APIs: ★★☆☆
Data Warehousing: ★★★★
Report Development: ★★★★
Data Modelling: ★★★☆
Machine Learning: ★★☆☆

Proficiency levels: