But I want to use the new ML library, so I do this:
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.functions import explode, split
from pyspark.ml.feature import Word2Vec
from pyspark.ml.feature import Word2VecModel
import numpy as np

# raw_text is a DataFrame with an array-of-tokens column
# (the column names and sizes here are just what I use, adapt as needed)
word2Vec = Word2Vec(vectorSize=100, minCount=5, inputCol="words", outputCol="vectors")
model = word2Vec.fit(raw_text)
This code works in local mode, but when I try to deploy it in cluster mode (as before), I have a problem: when one worker writes into the HDFS folder, the others cannot write inside it, so in the end I get an empty folder instead of plenty of parquet files, as I got with MLlib. I don't understand why it works with MLlib but not with ML, with the same configuration, when I submit my code.
Do you have any idea how I can solve this problem?
I hope I was clear enough.