Designed and implemented a serverless, cloud-based big data pipeline.
At a glance:
We work with a Seattle-based global retailer that needed to integrate data from multiple marketing campaign systems and automate the process for analysis and engagement. The customer needed to better understand who was engaging with marketing content and offers, and whether customers were progressing through the campaigns as expected. They also wanted to benchmark campaigns across platforms to see which were the most effective.
The customer’s data was spread across multiple campaign systems and data types and was being managed by several outside vendors; the processes included multiple manual steps. Additionally, they had poor visibility into which campaigns were the most effective at driving revenue for their business.
Approach and Solution:
The Logic20/20 data scientists:
- Designed and implemented a serverless, cloud-based big data pipeline to process all datasets related to the campaign measurement data pipeline.
- Built new connectors to ingest data from third-party systems, with data collected and transformed in the cloud
- Automated previously manual data processing steps
- Interviewed stakeholders to understand who used the data and how
- Implemented a reusable and robust data architecture that provides analytical data for future data science experiments
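The ingestion-and-validation pattern described above can be sketched in miniature. This is a hypothetical illustration only: the source does not name the cloud provider, the third-party systems, or the data schemas, so the field names, the `handler` entry point, and the event shape below are all assumptions, written in the general style of a serverless function invoked per batch.

```python
# Hypothetical sketch of one serverless pipeline stage: ingest raw campaign
# records from a third-party source, normalize them into a common schema,
# and run an automated validation step before loading. All names and fields
# are illustrative assumptions, not the customer's actual schema.
import json
from datetime import datetime, timezone

REQUIRED_FIELDS = {"campaign_id", "user_id", "event", "timestamp"}


def validate(record: dict) -> bool:
    """Automated validation: reject records missing required fields."""
    return REQUIRED_FIELDS.issubset(record)


def transform(record: dict) -> dict:
    """Normalize a raw campaign event into the shared analytical schema."""
    return {
        "campaign_id": str(record["campaign_id"]),
        "user_id": str(record["user_id"]),
        "event": record["event"].lower(),
        "timestamp": datetime.fromtimestamp(
            record["timestamp"], tz=timezone.utc
        ).isoformat(),
    }


def handler(event: dict) -> dict:
    """Entry point in the style of a serverless function: the platform
    invokes it per batch; valid rows move on, invalid rows are counted
    for quarantine rather than handled manually."""
    raw = json.loads(event["body"])
    rows = [transform(r) for r in raw if validate(r)]
    rejected = sum(1 for r in raw if not validate(r))
    return {"loaded": len(rows), "rejected": rejected, "rows": rows}
```

In a real deployment each stage would be triggered by the platform's event source (an object landing in storage, a queue message), and rejected rows would be routed to a quarantine location instead of merely counted; the point here is only the shape of the automated validate-then-transform step that replaced the manual process.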
Value and Benefits: “the wins”
Logic20/20 designed a data management system that:
- Shortens data processing turnaround times from five days to approximately four hours, through a combination of performance improvements and elimination of manual bottlenecks
- Reduces cost and increases operational efficiencies
- Supported the legacy system during the transformation, eliminating disruption to existing analytics processes
- Automates validation processes, resulting in fewer manual interventions