Supercharging Real-Time AI Pipelines with Apache Pulsar Functions

Artificial intelligence (AI) has transformed the way we live and work, from virtual assistants to autonomous vehicles. As the demand for real-time AI grows, developers and businesses need a streamlined way to build real-time inference engines. Apache Pulsar, a distributed messaging and streaming platform, offers a convenient and powerful way to address some of the limitations of traditional machine learning workflows. In this article, we’ll explore how Pulsar Functions, a serverless computing framework that runs on top of Apache Pulsar, can help build real-time inference engines for low-latency predictions.

Utilizing the pub/sub nature of Apache Pulsar with Pulsar Functions for real-time AI

Pulsar Functions takes advantage of the inherent pub/sub nature of Apache Pulsar. In the pub/sub messaging pattern, messages are published to a topic and then delivered to any number of subscribers. Pulsar Functions builds on this pattern: developers deploy lightweight functions to a Pulsar cluster, and each function executes in response to the messages arriving on its input topics. This event-driven execution model makes Pulsar Functions an ideal choice for building real-time inference engines.
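To make the programming model concrete, here is a minimal sketch of a Pulsar Function written against the Python SDK. The class name is illustrative, and the input and output topics are wired up at deployment time rather than in code:

```python
from pulsar import Function

class EchoUpperFunction(Function):
    # A minimal Pulsar Function: 'process' is invoked once per message
    # published to the function's input topic, and the return value is
    # published to its configured output topic.
    def process(self, input, context):
        # 'context' exposes runtime metadata, logging, and user configuration.
        context.get_logger().info("Received: %s", input)
        return input.upper()
```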

Building a real-time inference engine using Pulsar Functions for low-latency predictions

Our goal is to build a real-time inference engine, powered by Pulsar Functions, that can retrieve low-latency predictions both one at a time and in bulk. We will use the popular Iris dataset to demonstrate the process. The Iris dataset contains measurements of Iris flowers, along with their corresponding species. We’ll use a decision tree classifier to predict the species based on the measurements.

Serializing the trained model with the pickle module

We use the pickle module to serialize the model at the end of training. This dumps the model to a file in the working directory. The pickled model can then be loaded by the Pulsar Function and used to make predictions without having to retrain the model.
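A training script along these lines produces the pickled model. The file name iris_model.pkl and the 80/20 split are arbitrary choices for this sketch:

```python
import pickle

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load the Iris dataset: 150 samples, 4 features, 3 species.
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42
)

# Train a decision tree classifier on the training split.
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# Dump the trained model to the working directory so the Pulsar
# Function can load it without retraining. (Placeholder file name.)
with open("iris_model.pkl", "wb") as f:
    pickle.dump(model, f)
```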

The prediction function itself does not depend on the user context, although parameters and configuration options specific to the calling user could be used to adjust its behavior if desired. This allows multiple users to query the same function with different inputs without affecting each other.
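Here is a minimal sketch of that single-prediction function, assuming the pickled model file (the placeholder iris_model.pkl from above) is available on the function’s working path and that inputs arrive as comma-separated feature strings. Per-user settings, if needed, could be read through context.get_user_config_value():

```python
import pickle

from pulsar import Function

class IrisPredictFunction(Function):
    def __init__(self):
        # Load the pickled model once per function instance, not per message.
        with open("iris_model.pkl", "rb") as f:
            self.model = pickle.load(f)

    def process(self, input, context):
        # Assumes a comma-separated feature set, e.g. "5.1,3.5,1.4,0.2".
        features = [float(x) for x in input.split(",")]
        # Predict the numeric class and map it back to a species name.
        prediction = self.model.predict([features])[0]
        species = ["setosa", "versicolor", "virginica"]
        return species[int(prediction)]
```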

Decision tree representation for the classifier

A decision tree classifier can be represented as a series of intuitive decisions based on feature values that culminates in a prediction once a leaf node of the tree is reached. In the case of the Iris dataset, we have four features – sepal length, sepal width, petal length, and petal width – which we use to classify each flower into one of three species – Setosa, Versicolor, and Virginica. We train the model on a fraction of the dataset using the decision tree classifier from scikit-learn.
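Continuing from the training snippet above, scikit-learn can print the learned tree as exactly this kind of series of decisions:

```python
from sklearn.tree import export_text

# Print the learned decision rules as nested if/else conditions, with
# the predicted class shown at each leaf.
print(export_text(model, feature_names=iris.feature_names))
```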

Creating and triggering the function with the Pulsar standalone client

With the Pulsar standalone client running, we only need to create and trigger our function, as shown below. The functions worker automatically detects new function deployments and runs function instances according to the configured parallelism.
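Assuming the single-prediction function above lives in a file named iris_predict.py (a placeholder), creating and triggering it from the command line might look like this; the tenant, namespace, and topic names are likewise placeholders:

```bash
# Start a standalone Pulsar cluster (in a separate terminal).
bin/pulsar standalone

# Deploy the function from its Python file.
bin/pulsar-admin functions create \
  --py iris_predict.py \
  --classname iris_predict.IrisPredictFunction \
  --tenant public \
  --namespace default \
  --name iris-predict \
  --inputs persistent://public/default/iris-in \
  --output persistent://public/default/iris-out

# Trigger the function once with a single feature set; the command
# prints the function's return value.
bin/pulsar-admin functions trigger \
  --tenant public \
  --namespace default \
  --name iris-predict \
  --trigger-value "5.1,3.5,1.4,0.2"
```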

The bulk version of the function, sketched below, is similar but differs in three ways. First, the input is a list of feature sets instead of a single feature set. Second, the function computes all predictions in a single call instead of one at a time. Finally, it returns a list of predictions instead of a single prediction.
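Here is a sketch of that bulk variant, assuming the feature sets arrive as a JSON-encoded list of lists:

```python
import json
import pickle

from pulsar import Function

class IrisBulkPredictFunction(Function):
    def __init__(self):
        # Load the pickled model once per function instance.
        with open("iris_model.pkl", "rb") as f:
            self.model = pickle.load(f)

    def process(self, input, context):
        # Assumes a JSON-encoded list of feature sets, e.g.
        # "[[5.1, 3.5, 1.4, 0.2], [6.7, 3.0, 5.2, 2.3]]".
        feature_sets = json.loads(input)
        # Predict all rows in a single call rather than one at a time.
        predictions = self.model.predict(feature_sets)
        species = ["setosa", "versicolor", "virginica"]
        return json.dumps([species[int(p)] for p in predictions])
```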

Pulsar Functions provide a simple yet powerful way to build real-time inference engines for low-latency predictions. While this example only scratches the surface of what’s possible with Pulsar Functions, it provides a blueprint for implementing a real-time AI pipeline using Apache Pulsar. As the demand for real-time AI grows, developers and businesses should consider using Pulsar Functions to build efficient and effective AI systems.
