Configuring a Spark cluster is not the easiest thing in the world, even if you are a data scientist. Spark has its nuances, and they need to be taken into account when deploying jobs and tuning clusters: executor memory, cores per executor, shuffle partitioning, and serialization all interact, and a poor choice for any one of them can slow a job to a crawl or kill it outright. Alpine Labs, which uses machine learning under the hood, wants to support scenarios in which there is no data scientist involved at all.
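To make those nuances concrete, here is a minimal sketch of the kind of settings a tuning solution has to get right, written against the standard SparkSession builder API. The specific values are illustrative assumptions for this example, not recommendations for any particular workload.

```scala
import org.apache.spark.sql.SparkSession

object TuningSketch {
  def main(args: Array[String]): Unit = {
    // These settings interact: more cores per executor means more
    // concurrent tasks sharing the same heap, and the shuffle partition
    // count has to match both the data volume and the total core count.
    // All values below are illustrative, not advice.
    val spark = SparkSession.builder()
      .appName("tuning-sketch")
      // Heap per executor: too low causes spills and OutOfMemoryErrors,
      // too high wastes RAM and lengthens GC pauses.
      .config("spark.executor.memory", "8g")
      // Concurrent tasks per executor, all sharing the heap above.
      .config("spark.executor.cores", "4")
      // Partition count for shuffles in joins and aggregations; the
      // default of 200 is rarely right for very small or very large inputs.
      .config("spark.sql.shuffle.partitions", "400")
      // Kryo is faster and more compact than Java serialization, but
      // custom classes may need to be registered with it.
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()

    // ... the actual job would run here ...

    spark.stop()
  }
}
```

The right values depend on the data volume, the job, and the hardware, which is exactly why handing the choice to software is attractive.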
Even when a data scientist is available, can they really know everything about Spark? Finding a good tuning solution for Spark clusters is not trivial.