Learn how to make tracking plans a core asset of your product analytics setup

Dear Data-Traveller, please note that this is a Linkedin-Remix.

I already posted this content on LinkedIn in March 2022, but I want to make sure it doesn't get lost in the social network abyss.

For better accessibility, and as a backup of our own content, we repost the original text here.

Have a look, leave a like if you like it, and join the conversation in the comments if this sparks a thought!

Original Post:


It’s pretty easy to predict that data quality will be the big topic this year, next year, and the year after…

And this also means tracking or measurement quality at the point where the data is collected (not everyone has a team of five to clean up the tracking mess in the data warehouse). And we all know: once tracking is removed, there is no data to recover.

Ensuring data quality is a team sport, not something that happens in one place. There are checks at different levels to ensure data quality.

So today, let’s look at what we can do on the tracking level.

– having a defined event schema (aka the tracking plan) is a must, because this is what you test against

– if you use a tag manager, you can add tag monitoring to see whether tags are firing or having issues

– write end-to-end tests that simulate a user's journey through your frontends and check the tracking network requests (something I am currently working on)

– put up monitoring where the data arrives (be it in tools like Amplitude, Mixpanel, … or in the raw data tables of your data warehouse)

– depending on your setup, you can include the tracking code requests in your unit tests – mock-testing whether your integration still sends the correct data
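A tracking plan can be as simple as a map of event names to expected property types, which gives you something concrete to test against. A minimal validation sketch (the event and property names here are invented for illustration):

```typescript
// A minimal tracking plan: event name -> expected property types.
// Event and property names are made up for this example.
type PropType = "string" | "number" | "boolean";

const trackingPlan: Record<string, Record<string, PropType>> = {
  "Product Viewed": { productId: "string", price: "number" },
  "Checkout Started": { cartValue: "number", itemCount: "number" },
};

// Validate an event payload against the plan; returns a list of issues.
function validateEvent(
  name: string,
  props: Record<string, unknown>
): string[] {
  const planned = trackingPlan[name];
  if (!planned) return [`unplanned event: ${name}`];

  const issues: string[] = [];
  for (const [key, expected] of Object.entries(planned)) {
    if (!(key in props)) {
      issues.push(`missing property: ${key}`);
    } else if (typeof props[key] !== expected) {
      issues.push(
        `wrong type for ${key}: expected ${expected}, got ${typeof props[key]}`
      );
    }
  }
  for (const key of Object.keys(props)) {
    if (!(key in planned)) issues.push(`unplanned property: ${key}`);
  }
  return issues;
}
```

A payload like `{ productId: "a1", price: "9.99" }` would be flagged here, because `price` arrives as a string instead of a number – exactly the kind of silent breakage a tracking plan is meant to catch.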
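For the unit-test idea, you don't need a real network: inject a fake transport and assert on what the tracking code would have sent. The `Transport` interface and `trackSignup` helper below are assumptions made for this sketch, not a specific library's API:

```typescript
// Sketch of mock-testing a tracking integration: the transport is
// injected, so a test can capture calls instead of hitting the network.
// `Transport` and `trackSignup` are hypothetical names for this example.
interface TrackingEvent {
  name: string;
  props: Record<string, unknown>;
}

interface Transport {
  send(event: TrackingEvent): void;
}

// The tracking code under test.
function trackSignup(transport: Transport, plan: string): void {
  transport.send({ name: "Signup Completed", props: { plan } });
}

// A capturing mock for unit tests.
class MockTransport implements Transport {
  calls: TrackingEvent[] = [];
  send(event: TrackingEvent): void {
    this.calls.push(event);
  }
}
```

In a test you would then call `trackSignup(new MockTransport(), "pro")` and assert that exactly one event named `"Signup Completed"` with `{ plan: "pro" }` was captured.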

There is an even better solution for the last one. When you work with Avo, you manage your tracking plan there. But you can also use an SDK to send the event schema back to Avo from the same place where you ship events to your tracking destination.

Avo checks only the schema (not the values) against your defined tracking plan and warns you in the tool if there are semantic issues with your implementation.
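Conceptually, such an inspector extracts just the schema – property names and runtime types, no values – from each event at send time and compares it against the plan. A sketch of that extraction step (a conceptual illustration, not Avo's actual SDK API):

```typescript
// Derive an event's schema (property name -> runtime type) from a
// payload, without touching the values themselves. This mirrors the
// idea of schema-only inspection; it is not Avo's SDK.
function extractSchema(
  props: Record<string, unknown>
): Record<string, string> {
  const schema: Record<string, string> = {};
  for (const [key, value] of Object.entries(props)) {
    schema[key] = Array.isArray(value) ? "array" : typeof value;
  }
  return schema;
}
```

Because only the shape of the event leaves your code, this kind of check can run on every event in production without shipping sensitive values to a third tool.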

This easily cuts testing a new tracking setup from an hour down to ten minutes. And it keeps the lights on, because the check runs constantly.

If you'd like to see what this looks like in practice, join me today in the live session with Stefania Olafsdottir, where I will do a live demo of creating a tracking inspection as described.

Live session or get the recording: https://bit.ly/3MCIQFm