2016-09-19

Running a Python function on a Spark DataFrame

Basically, I have a Python function that performs some sampling on the original dataset and splits it into train/test sets.

I wrote the code to work on a pandas dataframe.

I'm wondering if anyone knows how to implement this for a Spark DataFrame in PySpark. Should I be using a Spark DataFrame instead of a pandas dataframe or a numpy array?

import random

import numpy as np


def train_test_split(recommender, pct_test=0.20, alpha=40):
    """Split a ratings matrix into training, test and confidence sets.

    This function takes the original user-item matrix and "masks" a percentage of the
    original ratings where a user-item interaction has taken place, for use as a test
    set. The test set keeps all of the original interactions (binarized), while the
    training set replaces the specified percentage of them with zeros.

    parameters:

    recommender - the original ratings matrix from which to generate a train/test set,
    as a 2-D numpy array. (Note: the scalar addition in the confidence step below does
    not work on a scipy sparse matrix.)

    pct_test - the fraction of existing user-item interactions to mask in the training
    set for later comparison against the test set, which contains all of the original
    interactions.

    alpha - the confidence scaling factor used to build the confidence matrix
    C = 1 + alpha * R.

    returns:

    training_set - the altered copy of the original data, with the chosen percentage of
    the user-item pairs that originally had an interaction set back to zero.

    test_set - a binarized copy of the original ratings matrix, unaltered, so it can be
    used to see how the predicted rank order compares with the actual interactions.

    conf_set - the confidence matrix, 1 + alpha * training_set.

    user_inds - the distinct user rows that were altered in the training data. This will
    be needed later when evaluating the performance via AUC.
    """
    test_set = recommender.copy()  # Copy of the original set to be the test set
    test_set = (test_set > 0).astype(np.int8)  # Binarize: 1 wherever an interaction exists
    training_set = recommender.copy()  # Copy of the original data to alter as the training set
    nonzero_inds = training_set.nonzero()  # Indices in the ratings data where an interaction exists
    nonzero_pairs = list(zip(nonzero_inds[0], nonzero_inds[1]))  # (user, item) index pairs
    random.seed(0)  # Fix the random seed for reproducibility
    num_samples = int(np.ceil(pct_test * len(nonzero_pairs)))  # Round the number of samples up to an integer
    samples = random.sample(nonzero_pairs, num_samples)  # Sample user-item pairs without replacement
    user_inds = [index[0] for index in samples]  # User row indices
    item_inds = [index[1] for index in samples]  # Item column indices
    training_set[user_inds, item_inds] = 0  # Zero out the randomly chosen user-item pairs

    conf_set = 1 + (alpha * training_set)  # Implicit-feedback confidence matrix
    return training_set, test_set, conf_set, list(set(user_inds))
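As a quick sanity check, the masking step at the heart of the function above can be run in isolation. This is a minimal numpy-only sketch; the ratings matrix and the 0.5 mask fraction are made up for illustration:

```python
import random

import numpy as np

# Made-up 3-user x 4-item implicit ratings matrix, for illustration only
ratings = np.array([
    [5, 0, 3, 0],
    [0, 2, 0, 1],
    [4, 0, 0, 2],
])

training = ratings.copy()
rows, cols = training.nonzero()             # positions of the 6 interactions
pairs = list(zip(rows, cols))
random.seed(0)                              # reproducible sampling
n_mask = int(np.ceil(0.5 * len(pairs)))     # mask half of the interactions
masked = random.sample(pairs, n_mask)
u, i = zip(*masked)
training[list(u), list(i)] = 0              # zero out the sampled pairs

print((ratings > 0).sum() - (training > 0).sum())  # 3 interactions were masked
```

Because the sampled pairs are distinct nonzero positions, the training matrix always ends up with exactly `n_mask` fewer nonzero entries than the original, regardless of the seed.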

Answer


You can use the randomSplit function on a Spark dataframe:

(train, test) = dataframe.randomSplit([0.8, 0.2]) 

What I'm after is a better understanding of how to implement that function itself on a Spark dataframe. – Baktaawar
