Building AI-Powered Movie Recommendation Systems with Scikit-Learn: A Conceptual Guide

Recommendation systems drive personalized experiences across industries. From e-commerce platforms suggesting products to streaming services curating content, AI-powered recommendation engines significantly enhance user engagement and retention. For over two decades, I’ve been igniting change and delivering scalable tech solutions that elevate organisations to new heights. My expertise transforms challenges into opportunities, inspiring businesses to thrive in the digital age. This tech concept demonstrates how to build a collaborative-filtering recommendation system in Python using the Surprise library (scikit-surprise), which follows the familiar Scikit-Learn API style. The approach uses user-item interactions to predict how a user would rate items they have not seen yet.

Use Case: Movie Recommendation System

Problem Statement

We aim to build a movie recommendation system that suggests movies to users based on their past interactions and preferences.

Dataset: MovieLens (IMDB Alternative)

Instead of manually downloading IMDB data, we use the MovieLens dataset, available through the Surprise library. This dataset is widely used in recommendation-system research and includes the following columns (a quick way to inspect them is shown right after the list):

  • userId: Unique identifier for users.
  • itemId: Unique identifier for movies.
  • rating: User-assigned rating (1-5).
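
To get a feel for these columns, the raw ml-100k ratings file can be loaded into pandas once the builtin dataset has been downloaded. This is only a quick inspection sketch; the path below is Surprise's default download location and may differ on your machine.

import os
import pandas as pd

# Default location Surprise uses for the builtin ml-100k files; verify on your machine
ratings_path = os.path.expanduser("~/.surprise_data/ml-100k/ml-100k/u.data")

# u.data is tab-separated: user id, item id, rating, timestamp (we ignore the timestamp later)
ratings = pd.read_csv(ratings_path, sep='\t',
                      names=['userId', 'itemId', 'rating', 'timestamp'])
print(ratings.head())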

Step 1: Data Preprocessing

Before building the model, we need to load and preprocess the dataset. The Surprise library provides built-in datasets, and we use the ml-100k dataset. After loading, we split it into training and testing sets to evaluate model performance.

import pandas as pd
from surprise import Dataset
from surprise.model_selection import train_test_split

# Load MovieLens dataset from Surprise
data = Dataset.load_builtin('ml-100k')

# Convert dataset into train-test split
trainset, testset = train_test_split(data, test_size=0.2)
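
As a quick sanity check on the split, the resulting trainset exposes a few useful counts (these attribute names come from Surprise's Trainset object):

print(f"Users in training set:   {trainset.n_users}")
print(f"Movies in training set:  {trainset.n_items}")
print(f"Ratings in training set: {trainset.n_ratings}")
print(f"Ratings held out for testing: {len(testset)}")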

Step 2: Building the Recommendation Model

We use Singular Value Decomposition (SVD), a matrix factorization technique, for collaborative filtering. SVD helps identify latent relationships between users and items, improving recommendation accuracy. The model learns these patterns by training on the user-item rating matrix.

from surprise import SVD
from surprise import accuracy

# Define the model
model = SVD()

# Train the model on training data
model.fit(trainset)
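
The default SVD configuration works reasonably well on ml-100k, but the number of latent factors, training epochs, and learning rate can be tuned. Below is a minimal sketch using Surprise's GridSearchCV; the parameter values are illustrative assumptions, not recommendations.

from surprise import SVD
from surprise.model_selection import GridSearchCV

# Small, illustrative search space over SVD hyperparameters
param_grid = {
    'n_factors': [50, 100],    # number of latent factors
    'n_epochs': [20, 30],      # SGD training epochs
    'lr_all': [0.005, 0.01],   # learning rate for all parameters
}

gs = GridSearchCV(SVD, param_grid, measures=['rmse'], cv=3)
gs.fit(data)  # 'data' is the Dataset loaded in Step 1

print(gs.best_score['rmse'])   # best cross-validated RMSE
print(gs.best_params['rmse'])  # hyperparameters that achieved it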

Step 3: Making Predictions

Once the model is trained, it can predict ratings for movies that a user has not rated yet. This allows us to generate personalized recommendations. Here, we predict a rating for a specific user and movie.

# Predict a rating for a specific user and item
user_id = str(1)   # raw user id 1 (Surprise's builtin datasets use string ids)
item_id = str(50)  # raw movie id 50
prediction = model.predict(user_id, item_id)
print(f"Predicted rating for User {user_id} on Movie {item_id}: {prediction.est}")

Step 4: Evaluating the Model

To measure the effectiveness of our recommendation system, we use Root Mean Squared Error (RMSE). RMSE quantifies the difference between actual and predicted ratings, with lower values indicating better performance.

# Get predictions for the test set
test_predictions = model.test(testset)

# Compute RMSE (verbose=False suppresses Surprise's own printout so the score is not printed twice)
rmse = accuracy.rmse(test_predictions, verbose=False)
print(f"RMSE: {rmse:.4f}")

Step 5: Deploying the Recommendation System

To make the recommendation system accessible, we deploy it as a REST API using Flask. This allows external applications to request movie recommendations for users dynamically.

import os
import pandas as pd
from surprise import Dataset, SVD
from surprise.model_selection import train_test_split
from flask import Flask, request, jsonify

# Directory where Surprise extracts the builtin ml-100k files
# (default download location; verify and update this path on your machine)
surprise_data_dir = os.path.expanduser("~/.surprise_data/ml-100k/ml-100k/")

# Load MovieLens dataset from Surprise
data = Dataset.load_builtin('ml-100k')

# Convert dataset into train-test split
trainset, testset = train_test_split(data, test_size=0.2)

# Train the model
model = SVD()
model.fit(trainset)

# Load movie metadata from local ml-100k directory
movie_info_path = os.path.join(surprise_data_dir, "u.item")
movie_info = pd.read_csv(movie_info_path, 
                         sep='|', encoding='latin-1', usecols=[0, 1], 
                         names=['movie_id', 'title'], engine='python')

# Convert movie IDs to strings (to match Surprise format)
movie_info['movie_id'] = movie_info['movie_id'].astype(str)

# Initialize Flask app
app = Flask(__name__)

@app.route('/recommend', methods=['GET'])
def get_recommendations():
    user_id = request.args.get('user_id')

    # Validate that a user_id was provided (request.args.get already returns a string,
    # which matches Surprise's raw id format)
    if not user_id:
        return jsonify({"error": "Please provide a valid user_id"}), 400

    # Get all available movie IDs
    movie_ids = movie_info['movie_id'].tolist()

    # Predict ratings for all movies
    predictions = [(movie, model.predict(user_id, movie).est) for movie in movie_ids]

    # Sort by highest predicted rating
    recommendations = sorted(predictions, key=lambda x: x[1], reverse=True)[:5]

    # Format response with movie names
    result = [
        {
            "movie_id": movie_id,
            "title": movie_info[movie_info['movie_id'] == movie_id]['title'].values[0],
            "predicted_rating": round(rating, 2)
        }
        for movie_id, rating in recommendations
    ]

    return jsonify(result)

if __name__ == '__main__':
    app.run(debug=True)
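
Once the Flask app is running (by default at http://127.0.0.1:5000), the endpoint can be exercised from any HTTP client. A quick check using the requests package, with an example user id:

import requests

# Assumes the Flask app above is running locally on its default port
response = requests.get("http://127.0.0.1:5000/recommend", params={"user_id": "1"})
print(response.json())  # top-5 movie titles with predicted ratings

Note that this script retrains the model every time the server starts; for anything beyond a demo, a fitted model can be persisted to disk and reloaded with surprise.dump instead.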

Available Datasets in Surprise Library

Surprise ships with a small set of built-in datasets for recommendation experiments:

  1. ml-100k: 100,000 movie ratings from MovieLens (used in this guide).
  2. ml-1m: 1 million movie ratings from MovieLens.
  3. jester: The Jester joke-ratings dataset.

These can be loaded with Dataset.load_builtin('dataset_name'). Other public datasets, such as Book-Crossing or FilmTrust, are not bundled with Surprise, but any ratings file or DataFrame can be loaded with a custom Reader, as sketched below.
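
Here is a minimal sketch of loading your own ratings into Surprise; the DataFrame and its column names are purely illustrative.

import pandas as pd
from surprise import Dataset, Reader

# Example ratings table: any DataFrame with user, item and rating columns will do
ratings_df = pd.DataFrame({
    'userId': ['1', '1', '2', '3'],
    'itemId': ['10', '20', '10', '30'],
    'rating': [4.0, 3.5, 5.0, 2.0],
})

reader = Reader(rating_scale=(1, 5))  # tell Surprise the rating range
data = Dataset.load_from_df(ratings_df[['userId', 'itemId', 'rating']], reader)
trainset = data.build_full_trainset()  # use every rating for training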

My Tech Advice: This conceptual guide serves as a starting point for building a recommendation system. SVD plays a crucial role in collaborative filtering, breaking down the user-item matrix to uncover hidden patterns through latent factors. This approach can be extended to other domains like e-commerce, music streaming, and online learning platforms. Future improvements could include hybrid models, real-time updates, and deep learning-based recommenders.

#AskDushyant
Note: The example and pseudo code are for illustration only. You must modify and experiment with the concept to meet your specific needs.
#TechConcept #TechAdvice #ScikitLearn #Python #AI #ML
