I've used Metaflow for the past 4 years or so on different ML teams. It's really great!
Straightforward for data/ML scientists to pick up, with a familiar Python class API for defining DAGs, and it simplifies scaling out parallel jobs on AWS Batch (or k8s). The UI is pretty nice. Been happy to see the active development on it too.
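For anyone who hasn't seen the class API mentioned above: a Metaflow flow is a Python class whose step methods chain together via self.next, with artifacts assigned to self persisting between steps (the real thing is `from metaflow import FlowSpec, step`). Here's a stdlib-only toy imitation of that shape, purely to illustrate the pattern without needing Metaflow installed — ToyFlow and its methods are invented:

```python
# Toy stand-in for Metaflow's class-based DAG pattern. Not Metaflow itself;
# real code uses `from metaflow import FlowSpec, step` decorators.

class ToyFlow:
    def run(self):
        """Walk the chain of self.next(...) calls, starting from `start`."""
        current = self.start
        while current is not None:
            self._next = None       # a step with no next() call ends the flow
            current()
            current = self._next
        return self

    def next(self, step_fn):
        """Declare the successor step, mimicking Metaflow's self.next()."""
        self._next = step_fn


class HelloFlow(ToyFlow):
    def start(self):
        self.greeting = "hello"     # attributes on self act like artifacts
        self.next(self.add_world)

    def add_world(self):
        self.greeting += " world"
        self.next(self.end)

    def end(self):
        print(self.greeting)        # no self.next() -> flow terminates


flow = HelloFlow().run()
```

In real Metaflow, each step can additionally be dispatched to AWS Batch or Kubernetes via decorators, which is where the scaling story comes in.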
Currently using it at our small biotech startup to run thousands of protein engineering computations (including models like RFDiffusion, ProteinMPNN, boltz, AlphaFold, ESM, etc.).
Data-engineering-focused DAG tools like Airflow are awkward for these kinds of ML computations, where we don't need the complexity of schedules and the like. Metaflow, imho, is also a step up from orchestration tools born out of bioinformatics groups, like Snakemake or Nextflow.
Just a satisfied customer of Metaflow here. thx
anentropic 6 hours ago [-]
I've been curious about this project for a while...
If you squint a bit it's sort of like an Airflow that can run on AWS Step Functions.
Step Functions sort of gives you fully serverless orchestration, which feels like a thing that should exist. But the process for authoring them is very cumbersome; they are crying out for a nice language-level library, e.g. for Python, something that creates steps via decorator syntax.
And it looks like Metaflow basically provides that (as well as for other backends).
The main thing holding me back is the lack of ecosystem. A big chunk of what I want to run on an orchestrator is things like dbt and dlt jobs, both of which have strong integrations with Airflow and Dagster, whereas Metaflow feels like it's not really on the radar and not widely used.
Possibly I've got the wrong end of the stick a bit, because Metaflow also provides an Airflow backend, in which case I sort of wonder: why bother with Metaflow at all?
coredog64 52 minutes ago [-]
A few years back, the Step Functions team was soliciting input, and the Python thing was something that came up as a suggestion. It's hard, yes, but it should be possible to "Starlark" this and tell users that if you stick to this syntax, you can write Python and compile it down to native Step Functions syntax.
Having said that, they have slightly improved Step Functions by adopting JSONata syntax.
anentropic 16 minutes ago [-]
I don't think it should need Starlark or a restricted syntax.
You just want some Python code that builds up a representation of the state machine, e.g. by decorating functions the same way that Celery, Dask, Airflow, Dagster et al. have done for years.
Then you have some other command that takes that representation and generates the actual Step Functions JSON from it (and then deploys it, etc.).
But the missing piece is that those other tools also explicitly give you a Python execution environment, so the function you're decorating is usually the 'task' function you want to run remotely.
Whereas Step Functions doesn't provide compute itself; it mostly just gives you a way to execute AWS API calls. The non-control-flow tasks in my Step Functions end up mostly being Lambda invoke steps that run my Python code.
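A minimal sketch of the idea: decorators register steps, and a compile method emits Amazon States Language JSON where each step becomes a Task state invoking a Lambda. Everything here (the Flow class, to_asl, the ARNs) is invented for illustration; it only handles a linear chain, no branching or error handling:

```python
import json


class Flow:
    """Toy decorator-based builder that compiles a linear pipeline into
    an Amazon States Language definition. Illustrative only."""

    def __init__(self, name):
        self.name = name
        self._steps = []  # step names in declaration order

    def step(self, fn):
        """Register a function as a state in the machine."""
        self._steps.append(fn.__name__)
        return fn

    def to_asl(self, lambda_arns):
        """Emit Step Functions JSON; each step invokes its Lambda by ARN."""
        states = {}
        for i, name in enumerate(self._steps):
            state = {"Type": "Task", "Resource": lambda_arns[name]}
            if i + 1 < len(self._steps):
                state["Next"] = self._steps[i + 1]
            else:
                state["End"] = True
            states[name] = state
        return json.dumps(
            {"Comment": self.name, "StartAt": self._steps[0], "States": states},
            indent=2,
        )


flow = Flow("etl")

@flow.step
def extract(event):        # the decorated bodies would be the Lambda handlers
    return {"rows": 100}

@flow.step
def load(event):
    return {"loaded": event["rows"]}

definition = flow.to_asl({
    "extract": "arn:aws:lambda:us-east-1:123456789012:function:extract",
    "load": "arn:aws:lambda:us-east-1:123456789012:function:load",
})
print(definition)
```

The gap this glosses over is exactly the one above: something still has to package each decorated function and deploy it as a Lambda, which is the part the CDK/SAM tooling doesn't smooth over today.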
I'm currently authoring Step Functions via CDK. It is clunky AF.
What it needs is some moderately opinionated layer on top.
Someone at AWS did have a bit of an attempt here: https://aws-step-functions-data-science-sdk.readthedocs.io/e... but I'd really like to see something that goes further and smooths away a lot of the finicky JSON input-arg/response wrangling. Also, the local testing story (for Step Functions generally) is pretty meh.
vtuulos 1 minute ago [-]
If you are ok with executing your SFN steps on AWS Batch, Metaflow should do the job well. It's pretty inhuman to interact with SFN directly.
One feature on our roadmap is the ability to define DAGs fully programmatically, maybe through configs, so you'll be able to have a custom representation -> SFN JSON, using Metaflow just as a compiler.
kot-behemoth 2 hours ago [-]
A while ago I saw a promising Clojure project, stepwise [0], which sounds pretty close to what you're describing. It not only lets you define steps in code, but also implements cool stuff like the ability to write conditions, error statuses, and resources in much less verbose EDN instead of JSON. It also supports code reloading and offloading large payloads to S3.
Wow cool, a project I created got a mention on HN. :D
vtuulos 6 hours ago [-]
Metaflow was started to address the needs of ML/AI projects whereas Airflow and Dagster started in data engineering.
Consequently, a major part of Metaflow focuses on facilitating easy and efficient access to (large scale) compute - including dependency management - and local experimentation, which is out of scope for Airflow and Dagster.
Metaflow has basic support for dbt, and companies increasingly use it to power data engineering as AI eats the world, but if you just need an orchestrator for ETL pipelines, Dagster is a great choice.
If you are curious to hear how companies navigate the question of Airflow vs Metaflow, see e.g. this recent talk by Flexport: https://youtu.be/e92eXfvaxU0
They call it a DREAM stack (Daft, Ray Engine or Ray and Poetry, Argo and Metaflow)
ShamblingMound 2 hours ago [-]
I've been looking for an orchestrator for AI workflows, including agentic workflows, and this seemed the most promising (open source, free, self-hostable, and it supports dynamic workflows).
But have not seen anyone talk about it in that context. What do people use for AI workflow orchestration (aside from langchain)?
If you are curious, join the Metaflow Slack at http://slack.outerbounds.co and start a thread on #ask-metaflow
lazarus01 7 hours ago [-]
I went to the GitHub page. The descriptions of the service seem redundant with what cloud providers already offer. I looked at the documentation, and it lacks concrete examples of implementation flows.
Seems like something new to learn, an added layer on top of existing workflows, with no obvious benefit.
manojlds 7 hours ago [-]
It's an old project from before the current AI buzz, and I rejected it when I looked at it a few years back as well, for similar reasons.
My opinion about Netflix OSS has been pretty low as well.
datadrivenangel 2 hours ago [-]
All the cloud providers have some hosted / custom version of an AI/ML deployment and training system. Good enough to use, janky enough to probably not meet all your needs if you're serious.
lazarus01 27 minutes ago [-]
I use Google Cloud for ML. AWS has a similar offering.
I find Google is purpose-built for ML and provides tons of resources with excellent documentation.
AWS feels like driving a double-decker bus, very big and clunky, compared to Google, which is a luxury sedan that comfortably takes you where you're going.
nxobject 10 hours ago [-]
As a fun historical sidebar and an illustration that there are no new names in tech these days, Metaflow was also the name of the company that first introduced out-of-order speculative execution of CISC architectures using micro-ops. [1]
Netflix used to release so much good opensource software a decade ago. Now it seems to have fallen out of developer mindshare. Seems like the odd one out in FAANG in terms of tech and AI.
A big deal is that they get packaged automatically for remote execution. And you can attach them on the command line without touching code, which makes it easy to build pipelines with pluggable functionality - think e.g. switching an LLM provider on the fly.
Is it common to see Metaflow used alongside MLflow if a team wants to track experiment data?
vtuulos 4 hours ago [-]
Metaflow tracks all artifacts and allows you to build dashboards with them, so there's no need to use MLflow per se. There are Metaflow integrations in Weights & Biases, CometML, etc., if you want pretty off-the-shelf dashboards.
Here's a nice article with code examples implementing a simple pipeline: https://www.quantisan.com/orchestrating-pizza-making-a-tutor....
[0]: https://github.com/Motiva-AI/stepwise
[1] https://en.wikipedia.org/wiki/Metaflow_Technologies
If you haven't looked into Metaflow recently, configuration management is another big feature that was contributed by the team at Netflix: https://netflixtechblog.com/introducing-configurable-metaflo...
Many folks love the new native support for uv too: https://docs.metaflow.org/scaling/dependencies/uv
I'm happy to answer any questions here