Data pipelines à la mode

In all businesses there is some kind of data pipeline, even if it’s powered by humans working off a shared drive somewhere. Many places are better than this: they have workflow systems, ETL pipelines, analytics teams, data scientists, and so on.

But can they say, months later, which version of which code, running on what data, generated an insight?

Can those insights be reproduced?

What if the algorithms change?

Do you go back and re-run everything?

Science itself has a reproducibility problem, but it’s worse in most companies, and mistakes can be expensive.

There is a useful subset of data pipelines, let’s call them “pure”, that depend only on the data flowing through them. For pure pipelines we can borrow techniques from distributed build systems to know exactly what code was used for each step, to keep previous results as we improve our algorithms, and to avoid repeating work that has already been done.
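
One way to read the build-system analogy is content-addressed caching: key each step’s output on a hash of the step’s code together with its inputs. Below is a minimal Python sketch of that idea; the `pure_step` decorator, the on-disk cache layout, and the toy `clean`/`count_words` steps are illustrative assumptions, not part of any particular framework covered in the talk.

```python
import hashlib
import inspect
import json
import pickle
from pathlib import Path

CACHE_DIR = Path("pipeline_cache")
CACHE_DIR.mkdir(exist_ok=True)

def pure_step(func):
    """Cache a step's result under a hash of its source code plus its inputs."""
    def wrapper(*args, **kwargs):
        key_material = json.dumps(
            {
                "code": inspect.getsource(func),     # which version of the code
                "args": repr(args),                  # what data went in
                "kwargs": repr(sorted(kwargs.items())),
            },
            sort_keys=True,
        ).encode()
        key = hashlib.sha256(key_material).hexdigest()
        cache_file = CACHE_DIR / f"{func.__name__}-{key}.pkl"
        if cache_file.exists():                      # same code + same data: reuse
            return pickle.loads(cache_file.read_bytes())
        result = func(*args, **kwargs)               # otherwise compute and record
        cache_file.write_bytes(pickle.dumps(result))
        return result
    return wrapper

@pure_step
def clean(records):
    return [r.strip().lower() for r in records]

@pure_step
def count_words(records):
    return sum(len(r.split()) for r in records)

if __name__ == "__main__":
    cleaned = clean([" Hello World ", "Data Pipelines "])
    print(count_words(cleaned))  # a rerun with unchanged code and data is a cache hit
```

A real system would also fold in each step’s dependencies and environment, and would use a shared store rather than a local directory, but the key property is the same: unchanged code on unchanged data is a cache hit, while a changed algorithm gets a new key instead of overwriting the old result.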

This talk contains interesting theory but is resolutely practical, with concrete examples in several languages and distributed computation frameworks.

THIS TALK IN THREE WORDS

datascience

OBJECTIVES

Introduce a way of thinking about data pipelines that allows you to avoid repeating work and track provenance.

TARGET AUDIENCE

  • Data scientists
  • Anyone running ETL-type workloads