diff --git a/examples/keras_rs/img/two_stage_rs_with_marketing_interaction/architecture.jpg b/examples/keras_rs/img/two_stage_rs_with_marketing_interaction/architecture.jpg
new file mode 100644
index 0000000000..05e81acfa3
Binary files /dev/null and b/examples/keras_rs/img/two_stage_rs_with_marketing_interaction/architecture.jpg differ
diff --git a/examples/keras_rs/img/two_stage_rs_with_marketing_interaction/two_stage_rs_with_marketing_interaction_13_60.png b/examples/keras_rs/img/two_stage_rs_with_marketing_interaction/two_stage_rs_with_marketing_interaction_13_60.png
new file mode 100644
index 0000000000..72409279ad
Binary files /dev/null and b/examples/keras_rs/img/two_stage_rs_with_marketing_interaction/two_stage_rs_with_marketing_interaction_13_60.png differ
diff --git a/examples/keras_rs/img/two_stage_rs_with_marketing_interaction/two_stage_rs_with_marketing_interaction_9_90.png b/examples/keras_rs/img/two_stage_rs_with_marketing_interaction/two_stage_rs_with_marketing_interaction_9_90.png
new file mode 100644
index 0000000000..0fb9ff322f
Binary files /dev/null and b/examples/keras_rs/img/two_stage_rs_with_marketing_interaction/two_stage_rs_with_marketing_interaction_9_90.png differ
diff --git a/examples/keras_rs/ipynb/two_stage_rs_with_marketing_interaction.ipynb b/examples/keras_rs/ipynb/two_stage_rs_with_marketing_interaction.ipynb
new file mode 100644
index 0000000000..66c3ce85c0
--- /dev/null
+++ b/examples/keras_rs/ipynb/two_stage_rs_with_marketing_interaction.ipynb
@@ -0,0 +1,699 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "# Two Stage Recommender System with Marketing Interaction\n",
+ "\n",
+ "**Author:** Mansi Mehta \n",
+ "**Date created:** 26/11/2025 \n",
+ "**Last modified:** 26/11/2025 \n",
+ "**Description:** Recommender System with Ranking and Retrival model for Marketing interaction."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "# **Introduction**\n",
+ "\n",
+ "This tutorial demonstrates a critical business scenario: a user lands on a website, and a\n",
+ "marketing engine must decide which specific ad to display from an inventory of thousands.\n",
+ "The goal is to maximize the Click-Through Rate (CTR). Showing irrelevant ads wastes\n",
+ "marketing budget and annoys the user. Therefore, we need a system that predicts the\n",
+ "probability of a specific user clicking on a specific ad based on their demographics and\n",
+ "browsing habits.\n",
+ "\n",
+ "**Architecture**\n",
+ "1. **The Retrieval Stage:** Efficiently select an initial set of roughly 10-100\n",
+ "candidates from millions of possibilities. It weeds out items the user is definitely not\n",
+ "interested in.\n",
+ "User Tower: Embeds user features (ID, demographics, behavior) into a vector.\n",
+ "Item Tower: Embeds ad features (Ad ID, Topic) into a vector.\n",
+ "Interaction: The dot product of these two vectors represents similarity.\n",
+ "2. **The Ranking Stage:** It takes the output of the retrieval model and fine-tune the\n",
+ "order to select the single best ad to show.\n",
+ "A Deep Neural Network (MLP).\n",
+ "Interaction: It takes the User Embedding, Ad Embedding, and their similarity score to\n",
+ "predict a precise probability (0% to 100%) that the user will click.\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "# **Dataset**\n",
+ "We will use the [Ad Click\n",
+ "Prediction](https://www.kaggle.com/datasets/mafrojaakter/ad-click-data) Dataset from\n",
+ "Kaggle\n",
+ "\n",
+ "**Feature Distribution of dataset:**\n",
+ "User Tower describes who is looking and features contains i.e Gender, City, Country, Age,\n",
+ "Daily Internet Usage, Daily Time Spent on Site, and Area Income.\n",
+ "Item Tower describes what is being shown and features contains Ad Topic Line, Ad ID.\n",
+ "\n",
+ "In this tutorial, we are going to build and train a Two-Tower (User Tower and Ad Tower)\n",
+ "model using the Ad Click Prediction dataset from Kaggle.\n",
+ "We're going to:\n",
+ "1. **Data Pipeline:** Get our data and preprocess it for both Retrieval (implicit\n",
+ "feedback) and Ranking (explicit labels).\n",
+ "2. **Retrieval:** Implement and train a Two-Tower model to generate candidates.\n",
+ "3. **Ranking:** Implement and train a Neural Ranking model to predict click probabilities.\n",
+ "4. **Inference:** Run an end-to-end test (Retrieval --> Ranking) to generate\n",
+ "recommendations for a specific user."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "!!pip install -q keras-rs"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "os.environ[\"KERAS_BACKEND\"] = \"tensorflow\"\n",
+ "import keras\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import tensorflow as tf\n",
+ "import pandas as pd\n",
+ "import keras_rs\n",
+ "import tensorflow_datasets as tfds\n",
+ "from mpl_toolkits.axes_grid1 import make_axes_locatable\n",
+ "from keras import layers\n",
+ "from concurrent.futures import ThreadPoolExecutor\n",
+ "from sklearn.model_selection import train_test_split\n",
+ "from sklearn.preprocessing import MinMaxScaler\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "# **Preparing Dataset**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install -q kaggle\n",
+ "!# Download the dataset (requires Kaggle API key in ~/.kaggle/kaggle.json)\n",
+ "!kaggle datasets download -d mafrojaakter/ad-click-data --unzip -p ./ad_click_dataset"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "data_path = \"./ad_click_dataset/Ad_click_data.csv\"\n",
+ "if not os.path.exists(data_path):\n",
+ " # Fallback for filenames with spaces or different casing\n",
+ " data_path = \"./ad_click_dataset/Ad Click Data.csv\"\n",
+ "\n",
+ "ads_df = pd.read_csv(data_path)\n",
+ "# Clean column names\n",
+ "ads_df.columns = ads_df.columns.str.strip()\n",
+ "# Rename the column name\n",
+ "ads_df = ads_df.rename(\n",
+ " columns={\n",
+ " \"Male\": \"gender\",\n",
+ " \"Ad Topic Line\": \"ad_topic\",\n",
+ " \"City\": \"city\",\n",
+ " \"Country\": \"country\",\n",
+ " \"Daily Time Spent on Site\": \"time_on_site\",\n",
+ " \"Daily Internet Usage\": \"internet_usage\",\n",
+ " \"Area Income\": \"area_income\",\n",
+ " }\n",
+ ")\n",
+ "# Add user_id and add_id column\n",
+ "ads_df[\"user_id\"] = \"user_\" + ads_df.index.astype(str)\n",
+ "ads_df[\"ad_id\"] = \"ad_\" + ads_df[\"ad_topic\"].astype(\"category\").cat.codes.astype(str)\n",
+ "# Remove nulls and normalize\n",
+ "ads_df = ads_df.dropna()\n",
+ "# normalize\n",
+ "numeric_cols = [\"time_on_site\", \"internet_usage\", \"area_income\", \"Age\"]\n",
+ "scaler = MinMaxScaler()\n",
+ "ads_df[numeric_cols] = scaler.fit_transform(ads_df[numeric_cols])\n",
+ "\n",
+ "# Split the train and test datasets\n",
+ "x_train, x_test = train_test_split(ads_df, test_size=0.2, random_state=42)\n",
+ "\n",
+ "\n",
+ "def dict_to_tensor_features(df_features, continuous_features):\n",
+ " tensor_dict = {}\n",
+ " for k, v in df_features.items():\n",
+ " if k in continuous_features:\n",
+ " tensor_dict[k] = tf.expand_dims(tf.constant(v, dtype=\"float32\"), axis=-1)\n",
+ " else:\n",
+ " v_str = np.array(v).astype(str).tolist()\n",
+ " tensor_dict[k] = tf.expand_dims(tf.constant(v_str, dtype=\"string\"), axis=-1)\n",
+ " return tensor_dict\n",
+ "\n",
+ "\n",
+ "def create_retrieval_dataset(\n",
+ " data_df,\n",
+ " all_ads_features,\n",
+ " all_ad_ids,\n",
+ " user_features_list,\n",
+ " ad_features_list,\n",
+ " continuous_features_list,\n",
+ "):\n",
+ "\n",
+ " # Filter for Positive Interactions (Cicks)\n",
+ " positive_interactions = data_df[data_df[\"Clicked on Ad\"] == 1].copy()\n",
+ "\n",
+ " if positive_interactions.empty:\n",
+ " return None\n",
+ "\n",
+ " def sample_negative(positive_ad_id):\n",
+ " neg_ad_id = positive_ad_id\n",
+ " while neg_ad_id == positive_ad_id:\n",
+ " neg_ad_id = np.random.choice(all_ad_ids)\n",
+ " return neg_ad_id\n",
+ "\n",
+ " def create_triplets_row(pos_row):\n",
+ " pos_ad_id = pos_row.ad_id\n",
+ " neg_ad_id = sample_negative(pos_ad_id)\n",
+ "\n",
+ " neg_ad_row = all_ads_features[all_ads_features[\"ad_id\"] == neg_ad_id].iloc[0]\n",
+ " user_features_dict = {\n",
+ " name: getattr(pos_row, name) for name in user_features_list\n",
+ " }\n",
+ " pos_ad_features_dict = {\n",
+ " name: getattr(pos_row, name) for name in ad_features_list\n",
+ " }\n",
+ " neg_ad_features_dict = {name: neg_ad_row[name] for name in ad_features_list}\n",
+ "\n",
+ " return {\n",
+ " \"user\": user_features_dict,\n",
+ " \"positive_ad\": pos_ad_features_dict,\n",
+ " \"negative_ad\": neg_ad_features_dict,\n",
+ " }\n",
+ "\n",
+ " with ThreadPoolExecutor(max_workers=8) as executor:\n",
+ " triplets = list(\n",
+ " executor.map(\n",
+ " create_triplets_row, positive_interactions.itertuples(index=False)\n",
+ " )\n",
+ " )\n",
+ "\n",
+ " triplets_df = pd.DataFrame(triplets)\n",
+ " user_df = triplets_df[\"user\"].apply(pd.Series)\n",
+ " pos_ad_df = triplets_df[\"positive_ad\"].apply(pd.Series)\n",
+ " neg_ad_df = triplets_df[\"negative_ad\"].apply(pd.Series)\n",
+ "\n",
+ " user_features_tensor = dict_to_tensor_features(\n",
+ " user_df.to_dict(\"list\"), continuous_features_list\n",
+ " )\n",
+ " pos_ad_features_tensor = dict_to_tensor_features(\n",
+ " pos_ad_df.to_dict(\"list\"), continuous_features_list\n",
+ " )\n",
+ " neg_ad_features_tensor = dict_to_tensor_features(\n",
+ " neg_ad_df.to_dict(\"list\"), continuous_features_list\n",
+ " )\n",
+ "\n",
+ " features = {\n",
+ " \"user\": user_features_tensor,\n",
+ " \"positive_ad\": pos_ad_features_tensor,\n",
+ " \"negative_ad\": neg_ad_features_tensor,\n",
+ " }\n",
+ " y_true = tf.ones((triplets_df.shape[0], 1), dtype=tf.float32)\n",
+ " dataset = tf.data.Dataset.from_tensor_slices((features, y_true))\n",
+ " buffer_size = len(triplets_df)\n",
+ " dataset = (\n",
+ " dataset.shuffle(buffer_size=buffer_size)\n",
+ " .batch(64)\n",
+ " .cache()\n",
+ " .prefetch(tf.data.AUTOTUNE)\n",
+ " )\n",
+ " return dataset\n",
+ "\n",
+ "\n",
+ "user_clicked_ads = (\n",
+ " x_train[x_train[\"Clicked on Ad\"] == 1]\n",
+ " .groupby(\"user_id\")[\"ad_id\"]\n",
+ " .apply(set)\n",
+ " .to_dict()\n",
+ ")\n",
+ "\n",
+ "for u in x_train[\"user_id\"].unique():\n",
+ " if u not in user_clicked_ads:\n",
+ " user_clicked_ads[u] = set()\n",
+ "\n",
+ "AD_FEATURES = [\"ad_id\", \"ad_topic\"]\n",
+ "USER_FEATURES = [\n",
+ " \"user_id\",\n",
+ " \"gender\",\n",
+ " \"city\",\n",
+ " \"country\",\n",
+ " \"time_on_site\",\n",
+ " \"internet_usage\",\n",
+ " \"area_income\",\n",
+ " \"Age\",\n",
+ "]\n",
+ "continuous_features = [\"time_on_site\", \"internet_usage\", \"area_income\", \"Age\"]\n",
+ "\n",
+ "all_ads_features = x_train[AD_FEATURES].drop_duplicates().reset_index(drop=True)\n",
+ "all_ad_ids = all_ads_features[\"ad_id\"].tolist()\n",
+ "\n",
+ "retrieval_train_dataset = create_retrieval_dataset(\n",
+ " data_df=x_train,\n",
+ " all_ads_features=all_ads_features,\n",
+ " all_ad_ids=all_ad_ids,\n",
+ " user_features_list=USER_FEATURES,\n",
+ " ad_features_list=AD_FEATURES,\n",
+ " continuous_features_list=continuous_features,\n",
+ ")\n",
+ "\n",
+ "retrieval_test_dataset = create_retrieval_dataset(\n",
+ " data_df=x_test,\n",
+ " all_ads_features=all_ads_features,\n",
+ " all_ad_ids=all_ad_ids,\n",
+ " user_features_list=USER_FEATURES,\n",
+ " ad_features_list=AD_FEATURES,\n",
+ " continuous_features_list=continuous_features,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "# **Implement the Retrival Model**\n",
+ "For the Retrieval stage, we will build a Two-Tower Model.\n",
+ "\n",
+ "**The Architecture Components:**\n",
+ "\n",
+ "1. User Tower:User features (User ID, demographics, behavior metrics like time_on_site).\n",
+ "It encodes these mixed features into a fixed-size vector representation called the User\n",
+ "Embedding.\n",
+ "2. Item (Ad) Tower:Ad features (Ad ID, Ad Topic Line).It encodes these features into a\n",
+ "fixed-size vector representation called the Item Embedding.\n",
+ "3. Interaction (Similarity):We calculate the Dot Product between the User Embedding and\n",
+ "the Item Embedding."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "keras.utils.set_random_seed(42)\n",
+ "\n",
+ "vocab_map = {\n",
+ " \"user_id\": x_train[\"user_id\"].unique(),\n",
+ " \"gender\": x_train[\"gender\"].astype(str).unique(),\n",
+ " \"city\": x_train[\"city\"].unique(),\n",
+ " \"country\": x_train[\"country\"].unique(),\n",
+ " \"ad_id\": x_train[\"ad_id\"].unique(),\n",
+ " \"ad_topic\": x_train[\"ad_topic\"].unique(),\n",
+ "}\n",
+ "cont_feats = [\"time_on_site\", \"internet_usage\", \"area_income\", \"Age\"]\n",
+ "\n",
+ "normalizers = {}\n",
+ "for f in cont_feats:\n",
+ " norm = layers.Normalization(axis=None)\n",
+ " norm.adapt(x_train[f].values.astype(\"float32\"))\n",
+ " normalizers[f] = norm\n",
+ "\n",
+ "\n",
+ "def build_tower(feature_names, continuous_names=None, embed_dim=64, name=\"tower\"):\n",
+ " inputs, embeddings = {}, []\n",
+ "\n",
+ " for feat in feature_names:\n",
+ " if feat in vocab_map:\n",
+ " inp = keras.Input(shape=(1,), dtype=tf.string, name=feat)\n",
+ " inputs[feat] = inp\n",
+ " vocab = list(vocab_map[feat])\n",
+ " x = layers.StringLookup(vocabulary=vocab)(inp)\n",
+ " x = layers.Embedding(\n",
+ " len(vocab) + 1, embed_dim, embeddings_regularizer=\"l2\"\n",
+ " )(x)\n",
+ " embeddings.append(layers.Flatten()(x))\n",
+ "\n",
+ " if continuous_names:\n",
+ " for feat in continuous_names:\n",
+ " inp = keras.Input(shape=(1,), dtype=tf.float32, name=feat)\n",
+ " inputs[feat] = inp\n",
+ " embeddings.append(normalizers[feat](inp))\n",
+ "\n",
+ " x = layers.Concatenate()(embeddings)\n",
+ " x = layers.Dense(128, activation=\"relu\")(x)\n",
+ " x = layers.Dropout(0.2)(x)\n",
+ " x = layers.Dense(64, activation=\"relu\")(x)\n",
+ " output = layers.Dense(embed_dim)(layers.Dropout(0.2)(x))\n",
+ "\n",
+ " return keras.Model(inputs=inputs, outputs=output, name=name)\n",
+ "\n",
+ "\n",
+ "user_tower = build_tower(\n",
+ " [\"user_id\", \"gender\", \"city\", \"country\"], cont_feats, name=\"user_tower\"\n",
+ ")\n",
+ "ad_tower = build_tower([\"ad_id\", \"ad_topic\"], name=\"ad_tower\")\n",
+ "\n",
+ "\n",
+ "def bpr_hinge_loss(y_true, y_pred):\n",
+ " margin = 1.0\n",
+ " return -tf.math.log(tf.nn.sigmoid(y_pred) + 1e-10)\n",
+ "\n",
+ "\n",
+ "class RetrievalModel(keras.Model):\n",
+ " def __init__(self, user_tower_instance, ad_tower_instance, **kwargs):\n",
+ " super().__init__(**kwargs)\n",
+ " self.user_tower = user_tower\n",
+ " self.ad_tower = ad_tower\n",
+ " self.ln_user = layers.LayerNormalization()\n",
+ " self.ln_ad = layers.LayerNormalization()\n",
+ "\n",
+ " def call(self, inputs):\n",
+ " u_emb = self.ln_user(self.user_tower(inputs[\"user\"]))\n",
+ " pos_emb = self.ln_ad(self.ad_tower(inputs[\"positive_ad\"]))\n",
+ " neg_emb = self.ln_ad(self.ad_tower(inputs[\"negative_ad\"]))\n",
+ " pos_score = keras.ops.sum(u_emb * pos_emb, axis=1, keepdims=True)\n",
+ " neg_score = keras.ops.sum(u_emb * neg_emb, axis=1, keepdims=True)\n",
+ " return pos_score - neg_score\n",
+ "\n",
+ " def get_embeddings(self, inputs):\n",
+ " u_emb = self.ln_user(self.user_tower(inputs[\"user\"]))\n",
+ " ad_emb = self.ln_ad(self.ad_tower(inputs[\"positive_ad\"]))\n",
+ " dot_interaction = keras.ops.sum(u_emb * ad_emb, axis=1, keepdims=True)\n",
+ " return u_emb, ad_emb, dot_interaction\n",
+ "\n",
+ "\n",
+ "retrieval_model = RetrievalModel(user_tower, ad_tower)\n",
+ "retrieval_model.compile(\n",
+ " optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss=bpr_hinge_loss\n",
+ ")\n",
+ "history = retrieval_model.fit(retrieval_train_dataset, epochs=30)\n",
+ "\n",
+ "pd.DataFrame(history.history).plot(\n",
+ " subplots=True, layout=(1, 3), figsize=(12, 4), title=\"Retrival Model Metrics\"\n",
+ ")\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "# **Predictions of Retrival Model**\n",
+ "Two-Tower model is trained, we need to use it to generate candidates.\n",
+ "\n",
+ "We can implement inference pipeline using three steps:\n",
+ "1. Indexing: We can run the Item Tower once for all available ads to generate their\n",
+ "embeddings.\n",
+ "2. Query Encoding: When a user arrives, we pass their features through the User Tower to\n",
+ "generate a User Embedding.\n",
+ "3. Nearest Neighbor Search: We search the index to find the Ad Embeddings closest to the\n",
+ "User Embedding (highest dot product).\n",
+ "\n",
+ "Keras-RS [BruteForceRetrieval\n",
+ "layer](https://keras.io/keras_rs/api/retrieval_layers/brute_force_retrieval/) calculates\n",
+ "dot product between the user and every single item in the index to find exact top-K\n",
+ "matches"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "USER_CATEGORICAL = [\"user_id\", \"gender\", \"city\", \"country\"]\n",
+ "CONTINUOUS_FEATURES = [\"time_on_site\", \"internet_usage\", \"area_income\", \"Age\"]\n",
+ "USER_FEATURES = USER_CATEGORICAL + CONTINUOUS_FEATURES\n",
+ "\n",
+ "\n",
+ "class BruteForceRetrievalWrapper:\n",
+ " def __init__(self, model, ads_df, ad_features, user_features, k=10):\n",
+ " self.model, self.k = model, k\n",
+ " self.user_features = user_features\n",
+ " unique_ads = ads_df[ad_features].drop_duplicates(\"ad_id\").reset_index(drop=True)\n",
+ " self.ids = unique_ads[\"ad_id\"].values\n",
+ " self.topic_map = dict(zip(unique_ads[\"ad_id\"], unique_ads[\"ad_topic\"]))\n",
+ " ad_inputs = {\n",
+ " \"ad_id\": tf.constant(self.ids.astype(str)),\n",
+ " \"ad_topic\": tf.constant(unique_ads[\"ad_topic\"].astype(str).values),\n",
+ " }\n",
+ " self.candidate_embs = model.ln_ad(model.ad_tower(ad_inputs))\n",
+ "\n",
+ " def query_batch(self, user_df):\n",
+ " inputs = {\n",
+ " k: tf.constant(\n",
+ " user_df[k].values.astype(float if k in CONTINUOUS_FEATURES else str)\n",
+ " )\n",
+ " for k in self.user_features\n",
+ " if k in user_df.columns\n",
+ " }\n",
+ " u_emb = self.model.ln_user(self.model.user_tower(inputs))\n",
+ " scores = tf.linalg.matmul(u_emb, self.candidate_embs, transpose_b=True)\n",
+ " top_scores, top_indices = tf.math.top_k(scores, k=self.k)\n",
+ " return top_scores.numpy(), top_indices.numpy()\n",
+ "\n",
+ " def decode_results(self, scores, indices):\n",
+ " results = []\n",
+ " for row_scores, row_indices in zip(scores, indices):\n",
+ " retrieved_ids = self.ids[row_indices]\n",
+ " results.append(\n",
+ " [\n",
+ " {\"ad_id\": aid, \"ad_topic\": self.topic_map[aid], \"score\": float(s)}\n",
+ " for aid, s in zip(retrieved_ids, row_scores)\n",
+ " ]\n",
+ " )\n",
+ " return results\n",
+ "\n",
+ "\n",
+ "retrieval_engine = BruteForceRetrievalWrapper(\n",
+ " model=retrieval_model,\n",
+ " ads_df=ads_df,\n",
+ " ad_features=[\"ad_id\", \"ad_topic\"],\n",
+ " user_features=USER_FEATURES,\n",
+ " k=10,\n",
+ ")\n",
+ "sample_user = pd.DataFrame([x_test.iloc[0]])\n",
+ "scores, indices = retrieval_engine.query_batch(sample_user)\n",
+ "top_ads = retrieval_engine.decode_results(scores, indices)[0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "# **Implementation of Ranking Model**\n",
+ "Retrieval model only calculates a simple similarity score (Dot Product). It doesn't\n",
+ "account for complex feature interactions.\n",
+ "So we need to build ranking model after words retrival model.\n",
+ "\n",
+ "**Architecture**\n",
+ "1. **Feature Extraction:** We reuse the trained User Tower and Ad Tower from the\n",
+ "Retrieval stage. We freeze these towers (trainable = False) so their weights don't\n",
+ "change.\n",
+ "2. **Interaction:** Instead of just a dot product, we concatenate three inputs- The User\n",
+ "EmbeddingThe Ad EmbeddingThe Dot Product (Similarity)\n",
+ "3. **Scorer(MLP):** These concatenated inputs are fed into a Multi-Layer Perceptron\u2014a\n",
+ "stack of Dense layers. This network learns the non-linear relationships between the user\n",
+ "and the ad.\n",
+ "4. **Output:** The final layer uses a Sigmoid activation to output a single probability\n",
+ "between 0.0 and 1.0 (Likelihood of a Click)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "retrieval_model.trainable = False\n",
+ "\n",
+ "\n",
+ "def create_ranking_ds(df):\n",
+ " inputs = {\n",
+ " \"user\": dict_to_tensor_features(df[USER_FEATURES], continuous_features),\n",
+ " \"positive_ad\": dict_to_tensor_features(df[AD_FEATURES], continuous_features),\n",
+ " }\n",
+ " return (\n",
+ " tf.data.Dataset.from_tensor_slices(\n",
+ " (inputs, df[\"Clicked on Ad\"].values.astype(\"float32\"))\n",
+ " )\n",
+ " .shuffle(10000)\n",
+ " .batch(256)\n",
+ " .prefetch(tf.data.AUTOTUNE)\n",
+ " )\n",
+ "\n",
+ "\n",
+ "ranking_train_dataset = create_ranking_ds(x_train)\n",
+ "ranking_test_dataset = create_ranking_ds(x_test)\n",
+ "\n",
+ "\n",
+ "class RankingModel(keras.Model):\n",
+ " def __init__(self, retrieval_model, **kwargs):\n",
+ " super().__init__(**kwargs)\n",
+ " self.retrieval = retrieval_model\n",
+ " self.mlp = keras.Sequential(\n",
+ " [\n",
+ " layers.Dense(256, activation=\"relu\"),\n",
+ " layers.Dropout(0.2),\n",
+ " layers.Dense(128, activation=\"relu\"),\n",
+ " layers.Dropout(0.2),\n",
+ " layers.Dense(64, activation=\"relu\"),\n",
+ " layers.Dense(1, activation=\"sigmoid\"),\n",
+ " ]\n",
+ " )\n",
+ "\n",
+ " def call(self, inputs):\n",
+ " u_emb, ad_emb, dot = self.retrieval.get_embeddings(inputs)\n",
+ " return self.mlp(keras.ops.concatenate([u_emb, ad_emb, dot], axis=-1))\n",
+ "\n",
+ "\n",
+ "ranking_model = RankingModel(retrieval_model)\n",
+ "ranking_model.compile(\n",
+ " optimizer=keras.optimizers.Adam(1e-4),\n",
+ " loss=\"binary_crossentropy\",\n",
+ " metrics=[\"AUC\", \"accuracy\"],\n",
+ ")\n",
+ "history1 = ranking_model.fit(ranking_train_dataset, epochs=20)\n",
+ "\n",
+ "pd.DataFrame(history1.history).plot(\n",
+ " subplots=True, layout=(1, 3), figsize=(12, 4), title=\"Ranking Model Metrics\"\n",
+ ")\n",
+ "plt.show()\n",
+ "\n",
+ "ranking_model.evaluate(ranking_test_dataset)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text"
+ },
+ "source": [
+ "# **Predictions of Ranking Model**\n",
+ "The retrieval model gave us a list of ads that are generally relevant (high dot product\n",
+ "similarity). The ranking model will now calculate the specific probability (0% to 100%)\n",
+ "that the user will click each of those ads.\n",
+ "\n",
+ "The Ranking model expects pairs of (User, Ad). Since we are scoring 10 ads for 1 user, we\n",
+ "cannot just pass the user features once.We effectively take user's features 10 times to\n",
+ "create a batch."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 0,
+ "metadata": {
+ "colab_type": "code"
+ },
+ "outputs": [],
+ "source": [
+ "\n",
+ "def rerank_ads_for_user(user_row, retrieved_ads, ranking_model):\n",
+ " ads_df = pd.DataFrame(retrieved_ads)\n",
+ " num_ads = len(ads_df)\n",
+ " user_inputs = {\n",
+ " k: tf.fill(\n",
+ " (num_ads, 1),\n",
+ " str(user_row[k]) if k not in continuous_features else float(user_row[k]),\n",
+ " )\n",
+ " for k in USER_FEATURES\n",
+ " }\n",
+ " ad_inputs = {\n",
+ " k: tf.reshape(tf.constant(ads_df[k].astype(str).values), (-1, 1))\n",
+ " for k in AD_FEATURES\n",
+ " }\n",
+ " scores = (\n",
+ " ranking_model({\"user\": user_inputs, \"positive_ad\": ad_inputs}).numpy().flatten()\n",
+ " )\n",
+ " ads_df[\"ranking_score\"] = scores\n",
+ " return ads_df.sort_values(\"ranking_score\", ascending=False).to_dict(\"records\")\n",
+ "\n",
+ "\n",
+ "sample_user = x_test.iloc[0]\n",
+ "scores, indices = retrieval_engine.query_batch(pd.DataFrame([sample_user]))\n",
+ "top_ads = retrieval_engine.decode_results(scores, indices)[0]\n",
+ "final_ranked_ads = rerank_ads_for_user(sample_user, top_ads, ranking_model)\n",
+ "print(f\"User: {sample_user['user_id']}\")\n",
+ "print(f\"{'Ad ID':<10} | {'Topic':<30} | {'Retrival Score':<11} | {'Rank Probability'}\")\n",
+ "for item in final_ranked_ads:\n",
+ " print(\n",
+ " f\"{item['ad_id']:<10} | {item['ad_topic'][:28]:<30} | {item['score']:.4f} |{item['ranking_score']*100:.2f}%\"\n",
+ " )"
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "collapsed_sections": [],
+ "name": "two_stage_rs_with_marketing_interaction",
+ "private_outputs": false,
+ "provenance": [],
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
\ No newline at end of file
diff --git a/examples/keras_rs/md/two_stage_rs_with_marketing_interaction.md b/examples/keras_rs/md/two_stage_rs_with_marketing_interaction.md
new file mode 100644
index 0000000000..87c31267c8
--- /dev/null
+++ b/examples/keras_rs/md/two_stage_rs_with_marketing_interaction.md
@@ -0,0 +1,836 @@
+# Two Stage Recommender System with Marketing Interaction
+
+**Author:** Mansi Mehta
+**Date created:** 26/11/2025
+**Last modified:** 26/11/2025
+**Description:** Recommender system with retrieval and ranking models for marketing ad interactions.
+
+
+ [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/keras_rs/ipynb/two_stage_rs_with_marketing_interaction.ipynb) • [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/keras_rs/two_stage_rs_with_marketing_interaction.py)
+
+
+
+# **Introduction**
+
+This tutorial demonstrates a critical business scenario: a user lands on a website, and a
+marketing engine must decide which specific ad to display from an inventory of thousands.
+The goal is to maximize the Click-Through Rate (CTR). Showing irrelevant ads wastes
+marketing budget and annoys the user. Therefore, we need a system that predicts the
+probability of a specific user clicking on a specific ad based on their demographics and
+browsing habits.
+
+**Architecture**
+1. **The Retrieval Stage:** Efficiently selects an initial set of roughly 10-100
+candidates from the full ad inventory, weeding out items the user is clearly not
+interested in.
+    - User Tower: embeds user features (ID, demographics, behavior) into a vector.
+    - Item Tower: embeds ad features (Ad ID, Topic) into a vector.
+    - Interaction: the dot product of these two vectors represents similarity.
+2. **The Ranking Stage:** Takes the candidates returned by the retrieval model and
+fine-tunes their order to select the single best ad to show.
+    - A deep neural network (MLP).
+    - Interaction: it takes the User Embedding, the Ad Embedding, and their similarity
+score to predict a precise probability (0% to 100%) that the user will click.
+
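+To make this concrete, here is a minimal, self-contained sketch of the two-stage flow
+with toy embeddings. The embedding dimension, the tiny MLP, and all tensor values are
+illustrative placeholders, not the models trained later in this example.
+
+```python
+import numpy as np
+import tensorflow as tf
+import keras
+from keras import layers
+
+embed_dim, num_ads = 8, 100  # toy sizes, purely illustrative
+user_emb = np.random.rand(1, embed_dim).astype("float32")  # one user
+ad_embs = np.random.rand(num_ads, embed_dim).astype("float32")  # all candidate ads
+
+# Stage 1 (Retrieval): dot-product similarity against every ad, keep the top 10.
+scores = tf.linalg.matmul(user_emb, ad_embs, transpose_b=True)  # shape (1, num_ads)
+top_scores, top_indices = tf.math.top_k(scores, k=10)
+
+# Stage 2 (Ranking): a small MLP scores each (user, candidate ad) pair and outputs a
+# click probability through a sigmoid.
+ranker = keras.Sequential(
+    [layers.Dense(16, activation="relu"), layers.Dense(1, activation="sigmoid")]
+)
+candidates = tf.gather(ad_embs, top_indices[0])  # (10, embed_dim)
+pair_features = tf.concat([tf.repeat(user_emb, 10, axis=0), candidates], axis=1)
+click_probs = ranker(pair_features)  # (10, 1) values in [0, 1]
+print(top_indices.numpy()[0], click_probs.numpy().ravel().round(3))
+```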
+
+
+# **Dataset**
+We will use the [Ad Click
+Prediction](https://www.kaggle.com/datasets/mafrojaakter/ad-click-data) dataset from
+Kaggle.
+
+**Feature split across the two towers:**
+The User Tower describes who is looking; its features are Gender, City, Country, Age,
+Daily Internet Usage, Daily Time Spent on Site, and Area Income.
+The Item Tower describes what is being shown; its features are Ad Topic Line and Ad ID.
+
+In this tutorial, we are going to build and train a Two-Tower (User Tower and Ad Tower)
+model on this dataset.
+We're going to:
+1. **Data Pipeline:** Get our data and preprocess it for both Retrieval (implicit
+feedback) and Ranking (explicit labels).
+2. **Retrieval:** Implement and train a Two-Tower model to generate candidates.
+3. **Ranking:** Implement and train a Neural Ranking model to predict click probabilities.
+4. **Inference:** Run an end-to-end test (Retrieval --> Ranking) to generate
+recommendations for a specific user.
+
+
+```python
+!pip install -q keras-rs
+```
+
+
+
+```python
+import os
+
+os.environ["KERAS_BACKEND"] = "tensorflow"
+import keras
+import matplotlib.pyplot as plt
+import numpy as np
+import tensorflow as tf
+import pandas as pd
+import keras_rs
+import tensorflow_datasets as tfds
+from mpl_toolkits.axes_grid1 import make_axes_locatable
+from keras import layers
+from concurrent.futures import ThreadPoolExecutor
+from sklearn.model_selection import train_test_split
+from sklearn.preprocessing import MinMaxScaler
+
+```
+
+
+# **Preparing Dataset**
+
+
+```python
+!pip install -q kaggle
+!# Download the dataset (requires Kaggle API key in ~/.kaggle/kaggle.json)
+!kaggle datasets download -d mafrojaakter/ad-click-data --unzip -p ./ad_click_dataset
+```
+
+
+
+```
+Dataset URL: https://www.kaggle.com/datasets/mafrojaakter/ad-click-data
+License(s): unknown
+
+Downloading ad-click-data.zip to ./ad_click_dataset
+```
+
+
+
+```python
+data_path = "./ad_click_dataset/Ad_click_data.csv"
+if not os.path.exists(data_path):
+ # Fallback for filenames with spaces or different casing
+ data_path = "./ad_click_dataset/Ad Click Data.csv"
+
+ads_df = pd.read_csv(data_path)
+# Clean column names
+ads_df.columns = ads_df.columns.str.strip()
+# Rename the column name
+ads_df = ads_df.rename(
+ columns={
+ "Male": "gender",
+ "Ad Topic Line": "ad_topic",
+ "City": "city",
+ "Country": "country",
+ "Daily Time Spent on Site": "time_on_site",
+ "Daily Internet Usage": "internet_usage",
+ "Area Income": "area_income",
+ }
+)
+# Add user_id and ad_id columns
+ads_df["user_id"] = "user_" + ads_df.index.astype(str)
+ads_df["ad_id"] = "ad_" + ads_df["ad_topic"].astype("category").cat.codes.astype(str)
+# Remove nulls and normalize
+ads_df = ads_df.dropna()
+# normalize
+numeric_cols = ["time_on_site", "internet_usage", "area_income", "Age"]
+scaler = MinMaxScaler()
+ads_df[numeric_cols] = scaler.fit_transform(ads_df[numeric_cols])
+
+# Split the train and test datasets
+x_train, x_test = train_test_split(ads_df, test_size=0.2, random_state=42)
+
+
+def dict_to_tensor_features(df_features, continuous_features):
+ tensor_dict = {}
+ for k, v in df_features.items():
+ if k in continuous_features:
+ tensor_dict[k] = tf.expand_dims(tf.constant(v, dtype="float32"), axis=-1)
+ else:
+ v_str = np.array(v).astype(str).tolist()
+ tensor_dict[k] = tf.expand_dims(tf.constant(v_str, dtype="string"), axis=-1)
+ return tensor_dict
+
+
+def create_retrieval_dataset(
+ data_df,
+ all_ads_features,
+ all_ad_ids,
+ user_features_list,
+ ad_features_list,
+ continuous_features_list,
+):
+
+    # Filter for Positive Interactions (Clicks)
+ positive_interactions = data_df[data_df["Clicked on Ad"] == 1].copy()
+
+ if positive_interactions.empty:
+ return None
+
+ def sample_negative(positive_ad_id):
+ neg_ad_id = positive_ad_id
+ while neg_ad_id == positive_ad_id:
+ neg_ad_id = np.random.choice(all_ad_ids)
+ return neg_ad_id
+
+ def create_triplets_row(pos_row):
+ pos_ad_id = pos_row.ad_id
+ neg_ad_id = sample_negative(pos_ad_id)
+
+ neg_ad_row = all_ads_features[all_ads_features["ad_id"] == neg_ad_id].iloc[0]
+ user_features_dict = {
+ name: getattr(pos_row, name) for name in user_features_list
+ }
+ pos_ad_features_dict = {
+ name: getattr(pos_row, name) for name in ad_features_list
+ }
+ neg_ad_features_dict = {name: neg_ad_row[name] for name in ad_features_list}
+
+ return {
+ "user": user_features_dict,
+ "positive_ad": pos_ad_features_dict,
+ "negative_ad": neg_ad_features_dict,
+ }
+
+ with ThreadPoolExecutor(max_workers=8) as executor:
+ triplets = list(
+ executor.map(
+ create_triplets_row, positive_interactions.itertuples(index=False)
+ )
+ )
+
+ triplets_df = pd.DataFrame(triplets)
+ user_df = triplets_df["user"].apply(pd.Series)
+ pos_ad_df = triplets_df["positive_ad"].apply(pd.Series)
+ neg_ad_df = triplets_df["negative_ad"].apply(pd.Series)
+
+ user_features_tensor = dict_to_tensor_features(
+ user_df.to_dict("list"), continuous_features_list
+ )
+ pos_ad_features_tensor = dict_to_tensor_features(
+ pos_ad_df.to_dict("list"), continuous_features_list
+ )
+ neg_ad_features_tensor = dict_to_tensor_features(
+ neg_ad_df.to_dict("list"), continuous_features_list
+ )
+
+ features = {
+ "user": user_features_tensor,
+ "positive_ad": pos_ad_features_tensor,
+ "negative_ad": neg_ad_features_tensor,
+ }
+ y_true = tf.ones((triplets_df.shape[0], 1), dtype=tf.float32)
+ dataset = tf.data.Dataset.from_tensor_slices((features, y_true))
+ buffer_size = len(triplets_df)
+ dataset = (
+ dataset.shuffle(buffer_size=buffer_size)
+ .batch(64)
+ .cache()
+ .prefetch(tf.data.AUTOTUNE)
+ )
+ return dataset
+
+
+user_clicked_ads = (
+ x_train[x_train["Clicked on Ad"] == 1]
+ .groupby("user_id")["ad_id"]
+ .apply(set)
+ .to_dict()
+)
+
+for u in x_train["user_id"].unique():
+ if u not in user_clicked_ads:
+ user_clicked_ads[u] = set()
+
+AD_FEATURES = ["ad_id", "ad_topic"]
+USER_FEATURES = [
+ "user_id",
+ "gender",
+ "city",
+ "country",
+ "time_on_site",
+ "internet_usage",
+ "area_income",
+ "Age",
+]
+continuous_features = ["time_on_site", "internet_usage", "area_income", "Age"]
+
+all_ads_features = x_train[AD_FEATURES].drop_duplicates().reset_index(drop=True)
+all_ad_ids = all_ads_features["ad_id"].tolist()
+
+retrieval_train_dataset = create_retrieval_dataset(
+ data_df=x_train,
+ all_ads_features=all_ads_features,
+ all_ad_ids=all_ad_ids,
+ user_features_list=USER_FEATURES,
+ ad_features_list=AD_FEATURES,
+ continuous_features_list=continuous_features,
+)
+
+retrieval_test_dataset = create_retrieval_dataset(
+ data_df=x_test,
+ all_ads_features=all_ads_features,
+ all_ad_ids=all_ad_ids,
+ user_features_list=USER_FEATURES,
+ ad_features_list=AD_FEATURES,
+ continuous_features_list=continuous_features,
+)
+```
+
+# **Implement the Retrieval Model**
+For the Retrieval stage, we will build a Two-Tower model.
+
+**The Architecture Components:**
+
+1. User Tower: takes user features (User ID, demographics, behavior metrics like
+time_on_site) and encodes these mixed features into a fixed-size vector representation
+called the User Embedding.
+2. Item (Ad) Tower: takes ad features (Ad ID, Ad Topic Line) and encodes them into a
+fixed-size vector representation called the Item Embedding.
+3. Interaction (Similarity): we calculate the dot product between the User Embedding
+and the Item Embedding.
+
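+The towers are trained with a pairwise objective on (user, positive ad, negative ad)
+triplets: the model outputs the score gap between the clicked ad and a randomly sampled
+negative ad, and the loss rewards large positive gaps. Here is a tiny numeric sketch of
+that `-log(sigmoid(gap))` (BPR-style) loss; the gap values are made up for illustration.
+
+```python
+import tensorflow as tf
+
+# Toy score gaps (positive_score - negative_score) for four training triplets.
+score_gap = tf.constant([[2.0], [0.5], [0.0], [-1.0]])
+
+# -log(sigmoid(gap)): near 0 when the clicked ad clearly wins, large when the
+# sampled negative is scored higher than the clicked ad.
+loss = -tf.math.log(tf.nn.sigmoid(score_gap) + 1e-10)
+print(loss.numpy().round(3))  # ~[[0.127], [0.474], [0.693], [1.313]]
+```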
+
+```python
+keras.utils.set_random_seed(42)
+
+vocab_map = {
+ "user_id": x_train["user_id"].unique(),
+ "gender": x_train["gender"].astype(str).unique(),
+ "city": x_train["city"].unique(),
+ "country": x_train["country"].unique(),
+ "ad_id": x_train["ad_id"].unique(),
+ "ad_topic": x_train["ad_topic"].unique(),
+}
+cont_feats = ["time_on_site", "internet_usage", "area_income", "Age"]
+
+normalizers = {}
+for f in cont_feats:
+ norm = layers.Normalization(axis=None)
+ norm.adapt(x_train[f].values.astype("float32"))
+ normalizers[f] = norm
+
+
+def build_tower(feature_names, continuous_names=None, embed_dim=64, name="tower"):
+ inputs, embeddings = {}, []
+
+ for feat in feature_names:
+ if feat in vocab_map:
+ inp = keras.Input(shape=(1,), dtype=tf.string, name=feat)
+ inputs[feat] = inp
+ vocab = list(vocab_map[feat])
+ x = layers.StringLookup(vocabulary=vocab)(inp)
+ x = layers.Embedding(
+ len(vocab) + 1, embed_dim, embeddings_regularizer="l2"
+ )(x)
+ embeddings.append(layers.Flatten()(x))
+
+ if continuous_names:
+ for feat in continuous_names:
+ inp = keras.Input(shape=(1,), dtype=tf.float32, name=feat)
+ inputs[feat] = inp
+ embeddings.append(normalizers[feat](inp))
+
+ x = layers.Concatenate()(embeddings)
+ x = layers.Dense(128, activation="relu")(x)
+ x = layers.Dropout(0.2)(x)
+ x = layers.Dense(64, activation="relu")(x)
+ output = layers.Dense(embed_dim)(layers.Dropout(0.2)(x))
+
+ return keras.Model(inputs=inputs, outputs=output, name=name)
+
+
+user_tower = build_tower(
+ ["user_id", "gender", "city", "country"], cont_feats, name="user_tower"
+)
+ad_tower = build_tower(["ad_id", "ad_topic"], name="ad_tower")
+
+
+def bpr_hinge_loss(y_true, y_pred):
+    # Bayesian Personalized Ranking (BPR) objective: y_pred is the score gap
+    # (pos_score - neg_score); -log(sigmoid(gap)) pushes clicked ads above negatives.
+ return -tf.math.log(tf.nn.sigmoid(y_pred) + 1e-10)
+
+
+class RetrievalModel(keras.Model):
+ def __init__(self, user_tower_instance, ad_tower_instance, **kwargs):
+ super().__init__(**kwargs)
+        self.user_tower = user_tower_instance
+        self.ad_tower = ad_tower_instance
+ self.ln_user = layers.LayerNormalization()
+ self.ln_ad = layers.LayerNormalization()
+
+ def call(self, inputs):
+ u_emb = self.ln_user(self.user_tower(inputs["user"]))
+ pos_emb = self.ln_ad(self.ad_tower(inputs["positive_ad"]))
+ neg_emb = self.ln_ad(self.ad_tower(inputs["negative_ad"]))
+ pos_score = keras.ops.sum(u_emb * pos_emb, axis=1, keepdims=True)
+ neg_score = keras.ops.sum(u_emb * neg_emb, axis=1, keepdims=True)
+ return pos_score - neg_score
+
+ def get_embeddings(self, inputs):
+ u_emb = self.ln_user(self.user_tower(inputs["user"]))
+ ad_emb = self.ln_ad(self.ad_tower(inputs["positive_ad"]))
+ dot_interaction = keras.ops.sum(u_emb * ad_emb, axis=1, keepdims=True)
+ return u_emb, ad_emb, dot_interaction
+
+
+retrieval_model = RetrievalModel(user_tower, ad_tower)
+retrieval_model.compile(
+ optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss=bpr_hinge_loss
+)
+history = retrieval_model.fit(retrieval_train_dataset, epochs=30)
+
+pd.DataFrame(history.history).plot(
+    subplots=True, layout=(1, 3), figsize=(12, 4), title="Retrieval Model Metrics"
+)
+plt.show()
+```
+
+
+```
+Epoch 1/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - loss: 2.8117
+
+Epoch 2/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.3631
+
+Epoch 3/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.0918
+
+Epoch 4/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.9143
+
+Epoch 5/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.7872
+
+Epoch 6/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.6925
+
+Epoch 7/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.6203
+
+Epoch 8/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.5641
+
+Epoch 9/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.5190
+
+Epoch 10/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.4817
+
+Epoch 11/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.4499
+
+Epoch 12/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.4220
+
+Epoch 13/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - loss: 0.3970
+
+Epoch 14/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 6ms/step - loss: 0.3743
+
+Epoch 15/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.3537
+
+Epoch 16/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.3346
+
+Epoch 17/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.3171
+
+Epoch 18/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.3009
+
+Epoch 19/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.2858
+
+Epoch 20/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.2718
+
+Epoch 21/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.2587
+
+Epoch 22/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.2465
+
+Epoch 23/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.2350
+
+Epoch 24/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.2243
+
+Epoch 25/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.2142
+
+Epoch 26/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.2046
+
+Epoch 27/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.1956
+
+Epoch 28/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.1871
+
+Epoch 29/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.1791
+
+Epoch 30/30
+
+6/6 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.1715
+```
+
+
+
+
+
+
+# **Predictions of the Retrieval Model**
+Now that the Two-Tower model is trained, we use it to generate candidates.
+
+We implement the inference pipeline in three steps:
+1. Indexing: run the Item Tower once over all available ads to generate their
+embeddings.
+2. Query Encoding: when a user arrives, pass their features through the User Tower to
+generate a User Embedding.
+3. Nearest Neighbor Search: search the index for the Ad Embeddings closest to the
+User Embedding (highest dot product).
+
+The Keras RS [BruteForceRetrieval
+layer](https://keras.io/keras_rs/api/retrieval_layers/brute_force_retrieval/) computes
+the dot product between the user and every single item in the index to find the exact
+top-K matches; the wrapper below implements the same brute-force search explicitly with
+`tf.math.top_k`.
+
+
+```python
+USER_CATEGORICAL = ["user_id", "gender", "city", "country"]
+CONTINUOUS_FEATURES = ["time_on_site", "internet_usage", "area_income", "Age"]
+USER_FEATURES = USER_CATEGORICAL + CONTINUOUS_FEATURES
+
+
+class BruteForceRetrievalWrapper:
+ def __init__(self, model, ads_df, ad_features, user_features, k=10):
+ self.model, self.k = model, k
+ self.user_features = user_features
+ unique_ads = ads_df[ad_features].drop_duplicates("ad_id").reset_index(drop=True)
+ self.ids = unique_ads["ad_id"].values
+ self.topic_map = dict(zip(unique_ads["ad_id"], unique_ads["ad_topic"]))
+ ad_inputs = {
+ "ad_id": tf.constant(self.ids.astype(str)),
+ "ad_topic": tf.constant(unique_ads["ad_topic"].astype(str).values),
+ }
+ self.candidate_embs = model.ln_ad(model.ad_tower(ad_inputs))
+
+ def query_batch(self, user_df):
+ inputs = {
+ k: tf.constant(
+ user_df[k].values.astype(float if k in CONTINUOUS_FEATURES else str)
+ )
+ for k in self.user_features
+ if k in user_df.columns
+ }
+ u_emb = self.model.ln_user(self.model.user_tower(inputs))
+ scores = tf.linalg.matmul(u_emb, self.candidate_embs, transpose_b=True)
+ top_scores, top_indices = tf.math.top_k(scores, k=self.k)
+ return top_scores.numpy(), top_indices.numpy()
+
+ def decode_results(self, scores, indices):
+ results = []
+ for row_scores, row_indices in zip(scores, indices):
+ retrieved_ids = self.ids[row_indices]
+ results.append(
+ [
+ {"ad_id": aid, "ad_topic": self.topic_map[aid], "score": float(s)}
+ for aid, s in zip(retrieved_ids, row_scores)
+ ]
+ )
+ return results
+
+
+retrieval_engine = BruteForceRetrievalWrapper(
+ model=retrieval_model,
+ ads_df=ads_df,
+ ad_features=["ad_id", "ad_topic"],
+ user_features=USER_FEATURES,
+ k=10,
+)
+sample_user = pd.DataFrame([x_test.iloc[0]])
+scores, indices = retrieval_engine.query_batch(sample_user)
+top_ads = retrieval_engine.decode_results(scores, indices)[0]
+```
+
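+The Keras RS `BruteForceRetrieval` layer packages this same index-and-search step. As a
+reference, here is a hedged sketch of how the precomputed ad embeddings could be handed
+to that layer instead of the hand-rolled `tf.math.top_k` search above; the
+`candidate_embeddings`, `candidate_ids`, and `k` constructor arguments are assumptions
+based on the linked API docs, so verify them against your installed `keras-rs` version.
+
+```python
+import keras_rs
+
+# Assumed API (check the BruteForceRetrieval docs): index the (num_ads, embed_dim)
+# candidate matrix built above.
+brute_force = keras_rs.layers.BruteForceRetrieval(
+    candidate_embeddings=retrieval_engine.candidate_embs,
+    candidate_ids=tf.constant(retrieval_engine.ids.astype(str)),
+    k=10,
+)
+
+# Calling the layer with a batch of user embeddings would then return the top-k matches:
+# user_emb = retrieval_model.ln_user(retrieval_model.user_tower(user_inputs))
+# top_scores, top_ids = brute_force(user_emb)
+```
+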
+# **Implementation of the Ranking Model**
+The retrieval model only calculates a simple similarity score (a dot product), so it
+cannot capture complex feature interactions.
+We therefore build a ranking model that runs after the retrieval model.
+
+**Architecture**
+1. **Feature Extraction:** We reuse the trained User Tower and Ad Tower from the
+Retrieval stage. We freeze these towers (trainable = False) so their weights don't
+change.
+2. **Interaction:** Instead of just a dot product, we concatenate three inputs: the User
+Embedding, the Ad Embedding, and the Dot Product (similarity).
+3. **Scorer (MLP):** These concatenated inputs are fed into a Multi-Layer Perceptron, a
+stack of Dense layers. This network learns the non-linear relationships between the user
+and the ad.
+4. **Output:** The final layer uses a Sigmoid activation to output a single probability
+between 0.0 and 1.0 (likelihood of a click).
+
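+A quick shape check of the interaction step, using random stand-in embeddings (the real
+embeddings come from the frozen towers below):
+
+```python
+import numpy as np
+import keras
+
+# Two (user, ad) pairs with 64-dimensional embeddings, matching the tower output size.
+u_emb = np.random.rand(2, 64).astype("float32")
+ad_emb = np.random.rand(2, 64).astype("float32")
+dot = keras.ops.sum(u_emb * ad_emb, axis=1, keepdims=True)  # (2, 1) similarity
+
+mlp_in = keras.ops.concatenate([u_emb, ad_emb, dot], axis=-1)  # (2, 129)
+print(mlp_in.shape)  # 64 + 64 + 1 = 129 features per pair fed to the MLP scorer
+```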
+
+```python
+retrieval_model.trainable = False
+
+
+def create_ranking_ds(df):
+ inputs = {
+ "user": dict_to_tensor_features(df[USER_FEATURES], continuous_features),
+ "positive_ad": dict_to_tensor_features(df[AD_FEATURES], continuous_features),
+ }
+ return (
+ tf.data.Dataset.from_tensor_slices(
+ (inputs, df["Clicked on Ad"].values.astype("float32"))
+ )
+ .shuffle(10000)
+ .batch(256)
+ .prefetch(tf.data.AUTOTUNE)
+ )
+
+
+ranking_train_dataset = create_ranking_ds(x_train)
+ranking_test_dataset = create_ranking_ds(x_test)
+
+
+class RankingModel(keras.Model):
+ def __init__(self, retrieval_model, **kwargs):
+ super().__init__(**kwargs)
+ self.retrieval = retrieval_model
+ self.mlp = keras.Sequential(
+ [
+ layers.Dense(256, activation="relu"),
+ layers.Dropout(0.2),
+ layers.Dense(128, activation="relu"),
+ layers.Dropout(0.2),
+ layers.Dense(64, activation="relu"),
+ layers.Dense(1, activation="sigmoid"),
+ ]
+ )
+
+ def call(self, inputs):
+ u_emb, ad_emb, dot = self.retrieval.get_embeddings(inputs)
+ return self.mlp(keras.ops.concatenate([u_emb, ad_emb, dot], axis=-1))
+
+
+ranking_model = RankingModel(retrieval_model)
+ranking_model.compile(
+ optimizer=keras.optimizers.Adam(1e-4),
+ loss="binary_crossentropy",
+ metrics=["AUC", "accuracy"],
+)
+history1 = ranking_model.fit(ranking_train_dataset, epochs=20)
+
+pd.DataFrame(history1.history).plot(
+ subplots=True, layout=(1, 3), figsize=(12, 4), title="Ranking Model Metrics"
+)
+plt.show()
+
+ranking_model.evaluate(ranking_test_dataset)
+```
+
+
+```
+Epoch 1/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - AUC: 0.6079 - accuracy: 0.4961 - loss: 0.6890
+
+Epoch 2/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.8329 - accuracy: 0.5748 - loss: 0.6423
+
+Epoch 3/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - AUC: 0.9284 - accuracy: 0.7467 - loss: 0.5995
+
+Epoch 4/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9636 - accuracy: 0.8766 - loss: 0.5599
+
+Epoch 5/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - AUC: 0.9763 - accuracy: 0.9213 - loss: 0.5229
+
+Epoch 6/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9824 - accuracy: 0.9304 - loss: 0.4876
+
+Epoch 7/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9862 - accuracy: 0.9331 - loss: 0.4540
+
+Epoch 8/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - AUC: 0.9880 - accuracy: 0.9357 - loss: 0.4224
+
+Epoch 9/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9898 - accuracy: 0.9436 - loss: 0.3920
+
+Epoch 10/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9911 - accuracy: 0.9475 - loss: 0.3633
+
+Epoch 11/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - AUC: 0.9914 - accuracy: 0.9528 - loss: 0.3361
+
+Epoch 12/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - AUC: 0.9923 - accuracy: 0.9580 - loss: 0.3103
+
+Epoch 13/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9925 - accuracy: 0.9619 - loss: 0.2866
+
+Epoch 14/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9931 - accuracy: 0.9633 - loss: 0.2643
+
+Epoch 15/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9935 - accuracy: 0.9633 - loss: 0.2436
+
+Epoch 16/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9938 - accuracy: 0.9659 - loss: 0.2247
+
+Epoch 17/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9942 - accuracy: 0.9646 - loss: 0.2076
+
+Epoch 18/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - AUC: 0.9945 - accuracy: 0.9659 - loss: 0.1918
+
+Epoch 19/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9947 - accuracy: 0.9672 - loss: 0.1777
+
+Epoch 20/20
+
+3/3 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - AUC: 0.9953 - accuracy: 0.9685 - loss: 0.1645
+```
+
+
+
+
+
+
+
+
+```
+1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 230ms/step - AUC: 0.9904 - accuracy: 0.9476 - loss: 0.2319
+
+[0.2318607121706009, 0.9903508424758911, 0.9476439952850342]
+```
+
+
+# **Predictions of the Ranking Model**
+The retrieval model gave us a list of ads that are generally relevant (high dot product
+similarity). The ranking model will now calculate the specific probability (0% to 100%)
+that the user will click each of those ads.
+
+The ranking model expects pairs of (User, Ad). Since we are scoring 10 ads for 1 user, we
+cannot just pass the user features once; we tile the user's features 10 times to
+create a batch.
+
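+A minimal sketch of that tiling step for one user and three candidate ads (the feature
+values here are invented placeholders):
+
+```python
+import tensorflow as tf
+
+num_ads = 3
+# Repeat each scalar user feature num_ads times so every candidate ad is paired with
+# the same user in one row of the ranking batch.
+user_inputs = {
+    "city": tf.fill((num_ads, 1), "Springfield"),  # categorical feature as a string
+    "time_on_site": tf.fill((num_ads, 1), 0.42),  # normalized continuous feature
+}
+ad_inputs = {"ad_id": tf.constant([["ad_1"], ["ad_2"], ["ad_3"]])}
+print(user_inputs["city"].shape, ad_inputs["ad_id"].shape)  # (3, 1) (3, 1)
+```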
+
+```python
+
+def rerank_ads_for_user(user_row, retrieved_ads, ranking_model):
+ ads_df = pd.DataFrame(retrieved_ads)
+ num_ads = len(ads_df)
+ user_inputs = {
+ k: tf.fill(
+ (num_ads, 1),
+ str(user_row[k]) if k not in continuous_features else float(user_row[k]),
+ )
+ for k in USER_FEATURES
+ }
+ ad_inputs = {
+ k: tf.reshape(tf.constant(ads_df[k].astype(str).values), (-1, 1))
+ for k in AD_FEATURES
+ }
+ scores = (
+ ranking_model({"user": user_inputs, "positive_ad": ad_inputs}).numpy().flatten()
+ )
+ ads_df["ranking_score"] = scores
+ return ads_df.sort_values("ranking_score", ascending=False).to_dict("records")
+
+
+sample_user = x_test.iloc[0]
+scores, indices = retrieval_engine.query_batch(pd.DataFrame([sample_user]))
+top_ads = retrieval_engine.decode_results(scores, indices)[0]
+final_ranked_ads = rerank_ads_for_user(sample_user, top_ads, ranking_model)
+print(f"User: {sample_user['user_id']}")
+print(f"{'Ad ID':<10} | {'Topic':<30} | {'Retrival Score':<11} | {'Rank Probability'}")
+for item in final_ranked_ads:
+ print(
+ f"{item['ad_id']:<10} | {item['ad_topic'][:28]:<30} | {item['score']:.4f} |{item['ranking_score']*100:.2f}%"
+ )
+```
+
+
+```
+User: user_216
+Ad ID      | Topic                          | Retrieval Score | Rank Probability
+ad_305 | Front-line fault-tolerant in | 8.2131 |99.27%
+ad_318 | Front-line upward-trending g | 7.6231 |99.17%
+ad_758 | Right-sized multi-tasking so | 7.1814 |99.06%
+ad_767 | Robust object-oriented Graph | 7.2068 |99.02%
+ad_620 | Polarized modular function | 7.2857 |98.92%
+ad_522 | Open-architected full-range | 7.0892 |98.82%
+ad_771 | Robust web-enabled attitude | 7.3828 |98.81%
+ad_810 | Sharable optimal capacity | 6.7046 |98.69%
+ad_31 | Ameliorated well-modulated c | 6.9498 |98.40%
+ad_104 | Configurable 24/7 hub | 6.7244 |98.39%
+```
+
diff --git a/examples/keras_rs/two_stage_rs_with_marketing_interaction.py b/examples/keras_rs/two_stage_rs_with_marketing_interaction.py
new file mode 100644
index 0000000000..b2c1e572ca
--- /dev/null
+++ b/examples/keras_rs/two_stage_rs_with_marketing_interaction.py
@@ -0,0 +1,556 @@
+"""
+Title: Two Stage Recommender System with Marketing Interaction
+Author: Mansi Mehta
+Date created: 26/11/2025
+Last modified: 26/11/2025
+Description: Recommender system with retrieval and ranking models for marketing ad interactions.
+Accelerator: GPU
+"""
+
+"""
+# **Introduction**
+
+This tutorial demonstrates a critical business scenario: a user lands on a website, and a
+marketing engine must decide which specific ad to display from an inventory of thousands.
+The goal is to maximize the Click-Through Rate (CTR). Showing irrelevant ads wastes
+marketing budget and annoys the user. Therefore, we need a system that predicts the
+probability of a specific user clicking on a specific ad based on their demographics and
+browsing habits.
+
+**Architecture**
+1. **The Retrieval Stage:** Efficiently selects an initial set of roughly 10-100
+candidates from the full ad inventory, weeding out items the user is clearly not
+interested in.
+    - User Tower: embeds user features (ID, demographics, behavior) into a vector.
+    - Item Tower: embeds ad features (Ad ID, Topic) into a vector.
+    - Interaction: the dot product of these two vectors represents similarity.
+2. **The Ranking Stage:** Takes the candidates returned by the retrieval model and
+fine-tunes their order to select the single best ad to show.
+    - A deep neural network (MLP).
+    - Interaction: it takes the User Embedding, the Ad Embedding, and their similarity
+score to predict a precise probability (0% to 100%) that the user will click.
+
+
+"""
+
+"""
+# **Dataset**
+We will use the [Ad Click
+Prediction](https://www.kaggle.com/datasets/mafrojaakter/ad-click-data) dataset from
+Kaggle.
+
+**Feature split across the two towers:**
+The User Tower describes who is looking; its features are Gender, City, Country, Age,
+Daily Internet Usage, Daily Time Spent on Site, and Area Income.
+The Item Tower describes what is being shown; its features are Ad Topic Line and Ad ID.
+
+In this tutorial, we are going to build and train a Two-Tower (User Tower and Ad Tower)
+model on this dataset.
+We're going to:
+1. **Data Pipeline:** Get our data and preprocess it for both Retrieval (implicit
+feedback) and Ranking (explicit labels).
+2. **Retrieval:** Implement and train a Two-Tower model to generate candidates.
+3. **Ranking:** Implement and train a Neural Ranking model to predict click probabilities.
+4. **Inference:** Run an end-to-end test (Retrieval --> Ranking) to generate
+recommendations for a specific user.
+"""
+
+"""shell
+pip install -q keras-rs
+"""
+
+import os
+
+os.environ["KERAS_BACKEND"] = "tensorflow"
+import keras
+import matplotlib.pyplot as plt
+import numpy as np
+import tensorflow as tf
+import pandas as pd
+import keras_rs
+import tensorflow_datasets as tfds
+from mpl_toolkits.axes_grid1 import make_axes_locatable
+from keras import layers
+from concurrent.futures import ThreadPoolExecutor
+from sklearn.model_selection import train_test_split
+from sklearn.preprocessing import MinMaxScaler
+
+
+"""
+# **Preparing Dataset**
+"""
+
+"""shell
+pip install -q kaggle
+# Download the dataset (requires Kaggle API key in ~/.kaggle/kaggle.json)
+kaggle datasets download -d mafrojaakter/ad-click-data --unzip -p ./ad_click_dataset
+"""
+data_path = "./ad_click_dataset/Ad_click_data.csv"
+if not os.path.exists(data_path):
+ # Fallback for filenames with spaces or different casing
+ data_path = "./ad_click_dataset/Ad Click Data.csv"
+
+ads_df = pd.read_csv(data_path)
+# Clean column names
+ads_df.columns = ads_df.columns.str.strip()
+# Rename the column name
+ads_df = ads_df.rename(
+ columns={
+ "Male": "gender",
+ "Ad Topic Line": "ad_topic",
+ "City": "city",
+ "Country": "country",
+ "Daily Time Spent on Site": "time_on_site",
+ "Daily Internet Usage": "internet_usage",
+ "Area Income": "area_income",
+ }
+)
+# Add user_id and ad_id columns
+ads_df["user_id"] = "user_" + ads_df.index.astype(str)
+ads_df["ad_id"] = "ad_" + ads_df["ad_topic"].astype("category").cat.codes.astype(str)
+# Remove nulls and normalize
+ads_df = ads_df.dropna()
+# normalize
+numeric_cols = ["time_on_site", "internet_usage", "area_income", "Age"]
+scaler = MinMaxScaler()
+ads_df[numeric_cols] = scaler.fit_transform(ads_df[numeric_cols])
+
+# Split the train and test datasets
+x_train, x_test = train_test_split(ads_df, test_size=0.2, random_state=42)
+
+
+def dict_to_tensor_features(df_features, continuous_features):
+ tensor_dict = {}
+ for k, v in df_features.items():
+ if k in continuous_features:
+ tensor_dict[k] = tf.expand_dims(tf.constant(v, dtype="float32"), axis=-1)
+ else:
+ v_str = np.array(v).astype(str).tolist()
+ tensor_dict[k] = tf.expand_dims(tf.constant(v_str, dtype="string"), axis=-1)
+ return tensor_dict
+
+
+def create_retrieval_dataset(
+ data_df,
+ all_ads_features,
+ all_ad_ids,
+ user_features_list,
+ ad_features_list,
+ continuous_features_list,
+):
+
+    # Filter for Positive Interactions (Clicks)
+ positive_interactions = data_df[data_df["Clicked on Ad"] == 1].copy()
+
+ if positive_interactions.empty:
+ return None
+
+ def sample_negative(positive_ad_id):
+ neg_ad_id = positive_ad_id
+ while neg_ad_id == positive_ad_id:
+ neg_ad_id = np.random.choice(all_ad_ids)
+ return neg_ad_id
+
+ def create_triplets_row(pos_row):
+ pos_ad_id = pos_row.ad_id
+ neg_ad_id = sample_negative(pos_ad_id)
+
+ neg_ad_row = all_ads_features[all_ads_features["ad_id"] == neg_ad_id].iloc[0]
+ user_features_dict = {
+ name: getattr(pos_row, name) for name in user_features_list
+ }
+ pos_ad_features_dict = {
+ name: getattr(pos_row, name) for name in ad_features_list
+ }
+ neg_ad_features_dict = {name: neg_ad_row[name] for name in ad_features_list}
+
+ return {
+ "user": user_features_dict,
+ "positive_ad": pos_ad_features_dict,
+ "negative_ad": neg_ad_features_dict,
+ }
+
+ with ThreadPoolExecutor(max_workers=8) as executor:
+ triplets = list(
+ executor.map(
+ create_triplets_row, positive_interactions.itertuples(index=False)
+ )
+ )
+
+ triplets_df = pd.DataFrame(triplets)
+ user_df = triplets_df["user"].apply(pd.Series)
+ pos_ad_df = triplets_df["positive_ad"].apply(pd.Series)
+ neg_ad_df = triplets_df["negative_ad"].apply(pd.Series)
+
+ user_features_tensor = dict_to_tensor_features(
+ user_df.to_dict("list"), continuous_features_list
+ )
+ pos_ad_features_tensor = dict_to_tensor_features(
+ pos_ad_df.to_dict("list"), continuous_features_list
+ )
+ neg_ad_features_tensor = dict_to_tensor_features(
+ neg_ad_df.to_dict("list"), continuous_features_list
+ )
+
+ features = {
+ "user": user_features_tensor,
+ "positive_ad": pos_ad_features_tensor,
+ "negative_ad": neg_ad_features_tensor,
+ }
+ y_true = tf.ones((triplets_df.shape[0], 1), dtype=tf.float32)
+ dataset = tf.data.Dataset.from_tensor_slices((features, y_true))
+ buffer_size = len(triplets_df)
+ dataset = (
+ dataset.shuffle(buffer_size=buffer_size)
+ .batch(64)
+ .cache()
+ .prefetch(tf.data.AUTOTUNE)
+ )
+ return dataset
+
+
+user_clicked_ads = (
+ x_train[x_train["Clicked on Ad"] == 1]
+ .groupby("user_id")["ad_id"]
+ .apply(set)
+ .to_dict()
+)
+
+for u in x_train["user_id"].unique():
+ if u not in user_clicked_ads:
+ user_clicked_ads[u] = set()
+
+AD_FEATURES = ["ad_id", "ad_topic"]
+USER_FEATURES = [
+ "user_id",
+ "gender",
+ "city",
+ "country",
+ "time_on_site",
+ "internet_usage",
+ "area_income",
+ "Age",
+]
+continuous_features = ["time_on_site", "internet_usage", "area_income", "Age"]
+
+all_ads_features = x_train[AD_FEATURES].drop_duplicates().reset_index(drop=True)
+all_ad_ids = all_ads_features["ad_id"].tolist()
+
+retrieval_train_dataset = create_retrieval_dataset(
+ data_df=x_train,
+ all_ads_features=all_ads_features,
+ all_ad_ids=all_ad_ids,
+ user_features_list=USER_FEATURES,
+ ad_features_list=AD_FEATURES,
+ continuous_features_list=continuous_features,
+)
+
+retrieval_test_dataset = create_retrieval_dataset(
+ data_df=x_test,
+ all_ads_features=all_ads_features,
+ all_ad_ids=all_ad_ids,
+ user_features_list=USER_FEATURES,
+ ad_features_list=AD_FEATURES,
+ continuous_features_list=continuous_features,
+)
+
+"""
+# **Implement the Retrieval Model**
+For the Retrieval stage, we build a Two-Tower Model.
+
+**The Architecture Components:**
+
+1. User Tower: takes the user features (User ID, demographics, behavior metrics like
+`time_on_site`) and encodes them into a fixed-size vector called the User Embedding.
+2. Item (Ad) Tower: takes the ad features (Ad ID, Ad Topic Line) and encodes them into a
+fixed-size vector called the Item Embedding.
+3. Interaction (Similarity): we compute the dot product between the User Embedding and
+the Item Embedding; a toy sketch of this interaction follows below.
+"""
+
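+# A toy sketch of the dot-product interaction described above, using hypothetical
+# 4-dimensional embeddings: the ad whose embedding points in the same direction
+# as the user's gets the higher similarity score.
+_toy_user = keras.ops.convert_to_tensor([[0.5, 0.5, 0.5, 0.5]])
+_toy_ad_a = keras.ops.convert_to_tensor([[0.5, 0.5, 0.5, 0.5]])  # aligned ad
+_toy_ad_b = keras.ops.convert_to_tensor([[-0.5, 0.5, -0.5, 0.5]])  # misaligned ad
+_score_a = keras.ops.sum(_toy_user * _toy_ad_a, axis=1)  # -> 1.0
+_score_b = keras.ops.sum(_toy_user * _toy_ad_b, axis=1)  # -> 0.0
+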
+keras.utils.set_random_seed(42)
+
+vocab_map = {
+ "user_id": x_train["user_id"].unique(),
+ "gender": x_train["gender"].astype(str).unique(),
+ "city": x_train["city"].unique(),
+ "country": x_train["country"].unique(),
+ "ad_id": x_train["ad_id"].unique(),
+ "ad_topic": x_train["ad_topic"].unique(),
+}
+cont_feats = ["time_on_site", "internet_usage", "area_income", "Age"]
+
+normalizers = {}
+for f in cont_feats:
+ norm = layers.Normalization(axis=None)
+ norm.adapt(x_train[f].values.astype("float32"))
+ normalizers[f] = norm
+
+
+def build_tower(feature_names, continuous_names=None, embed_dim=64, name="tower"):
+ inputs, embeddings = {}, []
+
+ for feat in feature_names:
+ if feat in vocab_map:
+ inp = keras.Input(shape=(1,), dtype=tf.string, name=feat)
+ inputs[feat] = inp
+ vocab = list(vocab_map[feat])
+ x = layers.StringLookup(vocabulary=vocab)(inp)
+ x = layers.Embedding(
+ len(vocab) + 1, embed_dim, embeddings_regularizer="l2"
+ )(x)
+ embeddings.append(layers.Flatten()(x))
+
+ if continuous_names:
+ for feat in continuous_names:
+ inp = keras.Input(shape=(1,), dtype=tf.float32, name=feat)
+ inputs[feat] = inp
+ embeddings.append(normalizers[feat](inp))
+
+ x = layers.Concatenate()(embeddings)
+ x = layers.Dense(128, activation="relu")(x)
+ x = layers.Dropout(0.2)(x)
+ x = layers.Dense(64, activation="relu")(x)
+ output = layers.Dense(embed_dim)(layers.Dropout(0.2)(x))
+
+ return keras.Model(inputs=inputs, outputs=output, name=name)
+
+
+user_tower = build_tower(
+ ["user_id", "gender", "city", "country"], cont_feats, name="user_tower"
+)
+ad_tower = build_tower(["ad_id", "ad_topic"], name="ad_tower")
+
+
+def bpr_loss(y_true, y_pred):
+    # Bayesian Personalized Ranking (BPR) loss: `y_pred` is the difference between
+    # the positive-ad and negative-ad scores, so minimizing -log(sigmoid(diff))
+    # pushes positive ads above the sampled negatives. `y_true` is a dummy label.
+    return -tf.math.log(tf.nn.sigmoid(y_pred) + 1e-10)
+
+
+class RetrievalModel(keras.Model):
+    def __init__(self, user_tower_instance, ad_tower_instance, **kwargs):
+        super().__init__(**kwargs)
+        # Use the towers passed to the constructor (not the module-level globals).
+        self.user_tower = user_tower_instance
+        self.ad_tower = ad_tower_instance
+        self.ln_user = layers.LayerNormalization()
+        self.ln_ad = layers.LayerNormalization()
+
+ def call(self, inputs):
+ u_emb = self.ln_user(self.user_tower(inputs["user"]))
+ pos_emb = self.ln_ad(self.ad_tower(inputs["positive_ad"]))
+ neg_emb = self.ln_ad(self.ad_tower(inputs["negative_ad"]))
+ pos_score = keras.ops.sum(u_emb * pos_emb, axis=1, keepdims=True)
+ neg_score = keras.ops.sum(u_emb * neg_emb, axis=1, keepdims=True)
+ return pos_score - neg_score
+
+ def get_embeddings(self, inputs):
+ u_emb = self.ln_user(self.user_tower(inputs["user"]))
+ ad_emb = self.ln_ad(self.ad_tower(inputs["positive_ad"]))
+ dot_interaction = keras.ops.sum(u_emb * ad_emb, axis=1, keepdims=True)
+ return u_emb, ad_emb, dot_interaction
+
+
+retrieval_model = RetrievalModel(user_tower, ad_tower)
+retrieval_model.compile(
+    optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss=bpr_loss
+)
+history = retrieval_model.fit(retrieval_train_dataset, epochs=30)
+
+pd.DataFrame(history.history).plot(
+ subplots=True, layout=(1, 3), figsize=(12, 4), title="Retrival Model Metrics"
+)
+plt.show()
+
+"""
+# **Predictions of the Retrieval Model**
+Now that the Two-Tower model is trained, we use it to generate candidates.
+
+The inference pipeline has three steps:
+1. Indexing: run the Item Tower once over all available ads to generate their
+embeddings.
+2. Query Encoding: when a user arrives, pass their features through the User Tower to
+generate a User Embedding.
+3. Nearest Neighbor Search: search the index for the Ad Embeddings closest to the
+User Embedding (highest dot product).
+
+The Keras-RS [BruteForceRetrieval
+layer](https://keras.io/keras_rs/api/retrieval_layers/brute_force_retrieval/) computes
+the dot product between the user and every single item in the index to find the exact
+top-K matches; the lightweight wrapper below performs the same exhaustive search
+explicitly.
+"""
+
+USER_CATEGORICAL = ["user_id", "gender", "city", "country"]
+CONTINUOUS_FEATURES = ["time_on_site", "internet_usage", "area_income", "Age"]
+USER_FEATURES = USER_CATEGORICAL + CONTINUOUS_FEATURES
+
+
+class BruteForceRetrievalWrapper:
+ def __init__(self, model, ads_df, ad_features, user_features, k=10):
+ self.model, self.k = model, k
+ self.user_features = user_features
+ unique_ads = ads_df[ad_features].drop_duplicates("ad_id").reset_index(drop=True)
+ self.ids = unique_ads["ad_id"].values
+ self.topic_map = dict(zip(unique_ads["ad_id"], unique_ads["ad_topic"]))
+ ad_inputs = {
+ "ad_id": tf.constant(self.ids.astype(str)),
+ "ad_topic": tf.constant(unique_ads["ad_topic"].astype(str).values),
+ }
+ self.candidate_embs = model.ln_ad(model.ad_tower(ad_inputs))
+
+ def query_batch(self, user_df):
+ inputs = {
+ k: tf.constant(
+ user_df[k].values.astype(float if k in CONTINUOUS_FEATURES else str)
+ )
+ for k in self.user_features
+ if k in user_df.columns
+ }
+ u_emb = self.model.ln_user(self.model.user_tower(inputs))
+ scores = tf.linalg.matmul(u_emb, self.candidate_embs, transpose_b=True)
+ top_scores, top_indices = tf.math.top_k(scores, k=self.k)
+ return top_scores.numpy(), top_indices.numpy()
+
+ def decode_results(self, scores, indices):
+ results = []
+ for row_scores, row_indices in zip(scores, indices):
+ retrieved_ids = self.ids[row_indices]
+ results.append(
+ [
+ {"ad_id": aid, "ad_topic": self.topic_map[aid], "score": float(s)}
+ for aid, s in zip(retrieved_ids, row_scores)
+ ]
+ )
+ return results
+
+
+retrieval_engine = BruteForceRetrievalWrapper(
+ model=retrieval_model,
+ ads_df=ads_df,
+ ad_features=["ad_id", "ad_topic"],
+ user_features=USER_FEATURES,
+ k=10,
+)
+sample_user = pd.DataFrame([x_test.iloc[0]])
+scores, indices = retrieval_engine.query_batch(sample_user)
+top_ads = retrieval_engine.decode_results(scores, indices)[0]
+
+"""
+# **Implementation of the Ranking Model**
+The retrieval model only computes a simple similarity score (a dot product), so it
+cannot capture complex feature interactions. We therefore build a ranking model on top
+of the retrieval stage.
+
+**Architecture**
+1. **Feature Extraction:** We reuse the trained User Tower and Ad Tower from the
+Retrieval stage and freeze them (`trainable = False`) so their weights don't change.
+2. **Interaction:** Instead of just a dot product, we concatenate three inputs: the User
+Embedding, the Ad Embedding, and their dot product (similarity); a small shape check
+follows below.
+3. **Scorer (MLP):** The concatenated inputs are fed into a Multi-Layer Perceptron, a
+stack of Dense layers that learns the non-linear relationships between the user and the
+ad.
+4. **Output:** The final layer uses a sigmoid activation to output a single probability
+between 0.0 and 1.0 (the likelihood of a click).
+"""
+
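+# A small sketch (dummy tensors) of the interaction input described above: the
+# user embedding, the ad embedding, and their dot product are concatenated into
+# one (batch, 64 + 64 + 1) feature vector for the MLP scorer.
+_u_demo = keras.ops.ones((2, 64))
+_a_demo = keras.ops.ones((2, 64))
+_dot_demo = keras.ops.sum(_u_demo * _a_demo, axis=1, keepdims=True)
+_mlp_input_demo = keras.ops.concatenate([_u_demo, _a_demo, _dot_demo], axis=-1)
+# _mlp_input_demo.shape == (2, 129)
+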
+retrieval_model.trainable = False
+
+
+def create_ranking_ds(df):
+ inputs = {
+ "user": dict_to_tensor_features(df[USER_FEATURES], continuous_features),
+ "positive_ad": dict_to_tensor_features(df[AD_FEATURES], continuous_features),
+ }
+ return (
+ tf.data.Dataset.from_tensor_slices(
+ (inputs, df["Clicked on Ad"].values.astype("float32"))
+ )
+ .shuffle(10000)
+ .batch(256)
+ .prefetch(tf.data.AUTOTUNE)
+ )
+
+
+ranking_train_dataset = create_ranking_ds(x_train)
+ranking_test_dataset = create_ranking_ds(x_test)
+
+
+class RankingModel(keras.Model):
+ def __init__(self, retrieval_model, **kwargs):
+ super().__init__(**kwargs)
+ self.retrieval = retrieval_model
+ self.mlp = keras.Sequential(
+ [
+ layers.Dense(256, activation="relu"),
+ layers.Dropout(0.2),
+ layers.Dense(128, activation="relu"),
+ layers.Dropout(0.2),
+ layers.Dense(64, activation="relu"),
+ layers.Dense(1, activation="sigmoid"),
+ ]
+ )
+
+ def call(self, inputs):
+ u_emb, ad_emb, dot = self.retrieval.get_embeddings(inputs)
+ return self.mlp(keras.ops.concatenate([u_emb, ad_emb, dot], axis=-1))
+
+
+ranking_model = RankingModel(retrieval_model)
+ranking_model.compile(
+ optimizer=keras.optimizers.Adam(1e-4),
+ loss="binary_crossentropy",
+ metrics=["AUC", "accuracy"],
+)
+history1 = ranking_model.fit(ranking_train_dataset, epochs=20)
+
+pd.DataFrame(history1.history).plot(
+ subplots=True, layout=(1, 3), figsize=(12, 4), title="Ranking Model Metrics"
+)
+plt.show()
+
+ranking_model.evaluate(ranking_test_dataset)
+
+"""
+# **Predictions of the Ranking Model**
+The retrieval model gave us a list of ads that are generally relevant (high dot-product
+similarity). The ranking model now calculates the specific probability (0% to 100%) that
+the user will click each of those ads.
+
+The ranking model expects pairs of (User, Ad). Since we are scoring 10 ads for a single
+user, we cannot pass the user features just once; we repeat the user's features 10 times
+to build a batch, as the small illustration below shows.
+"""
+
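+# A tiny illustration of the "repeat the user" trick used below: one (hypothetical)
+# categorical value is broadcast into a (num_ads, 1) column so it can be paired with
+# each retrieved candidate ad.
+_city_column = tf.fill((3, 1), "New York")  # same user city repeated for 3 ads
+# _city_column: shape (3, 1), dtype=string
+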
+
+def rerank_ads_for_user(user_row, retrieved_ads, ranking_model):
+ ads_df = pd.DataFrame(retrieved_ads)
+ num_ads = len(ads_df)
+ user_inputs = {
+ k: tf.fill(
+ (num_ads, 1),
+ str(user_row[k]) if k not in continuous_features else float(user_row[k]),
+ )
+ for k in USER_FEATURES
+ }
+ ad_inputs = {
+ k: tf.reshape(tf.constant(ads_df[k].astype(str).values), (-1, 1))
+ for k in AD_FEATURES
+ }
+ scores = (
+ ranking_model({"user": user_inputs, "positive_ad": ad_inputs}).numpy().flatten()
+ )
+ ads_df["ranking_score"] = scores
+ return ads_df.sort_values("ranking_score", ascending=False).to_dict("records")
+
+
+sample_user = x_test.iloc[0]
+scores, indices = retrieval_engine.query_batch(pd.DataFrame([sample_user]))
+top_ads = retrieval_engine.decode_results(scores, indices)[0]
+final_ranked_ads = rerank_ads_for_user(sample_user, top_ads, ranking_model)
+print(f"User: {sample_user['user_id']}")
+print(f"{'Ad ID':<10} | {'Topic':<30} | {'Retrival Score':<11} | {'Rank Probability'}")
+for item in final_ranked_ads:
+ print(
+ f"{item['ad_id']:<10} | {item['ad_topic'][:28]:<30} | {item['score']:.4f} |{item['ranking_score']*100:.2f}%"
+ )
diff --git a/two_stage_rs_with_marketing_interaction.ipynb b/two_stage_rs_with_marketing_interaction.ipynb
new file mode 100644
index 0000000000..cb1843b016
--- /dev/null
+++ b/two_stage_rs_with_marketing_interaction.ipynb
@@ -0,0 +1,1126 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# **Introduction**\n",
+ "\n",
+ "This tutorial demonstrates a critical business scenario: a user lands on a website, and a marketing engine must decide which specific ad to display from an inventory of thousands.\n",
+ "The goal is to maximize the Click-Through Rate (CTR). Showing irrelevant ads wastes marketing budget and annoys the user. Therefore, we need a system that predicts the probability of a specific user clicking on a specific ad based on their demographics and browsing habits.\n",
+ "\n",
+ "**Architecture**\n",
+ "1. **The Retrieval Stage:** Efficiently select an initial set of roughly 10-100 candidates from millions of possibilities. It weeds out items the user is definitely not interested in.\n",
+ "User Tower: Embeds user features (ID, demographics, behavior) into a vector.\n",
+ "Item Tower: Embeds ad features (Ad ID, Topic) into a vector.\n",
+ "Interaction: The dot product of these two vectors represents similarity.\n",
+ "2. **The Ranking Stage:** It takes the output of the retrieval model and fine-tune the order to select the single best ad to show.\n",
+ "A Deep Neural Network (MLP).\n",
+ "Interaction: It takes the User Embedding, Ad Embedding, and their similarity score to predict a precise probability (0% to 100%) that the user will click.\n",
+ "\n",
+ ""
+ ],
+ "metadata": {
+ "id": "y5jO6Y78Vf-N"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# **Dataset**\n",
+ "We will use the [Ad Click Prediction](https://www.kaggle.com/datasets/mafrojaakter/ad-click-data) Dataset from Kaggle\n",
+ "\n",
+ "**Feature Distribution of dataset:**\n",
+ "User Tower describes who is looking and features contains i.e Gender, City, Country, Age, Daily Internet Usage, Daily Time Spent on Site, and Area Income.\n",
+ "Item Tower describes what is being shown and features contains Ad Topic Line, Ad ID.\n",
+ "\n",
+ "In this tutorial, we are going to build and train a Two-Tower (User Tower and Ad Tower) model using the Ad Click Prediction dataset from Kaggle.\n",
+ "We're going to:\n",
+ "1. **Data Pipeline:** Get our data and preprocess it for both Retrieval (implicit feedback) and Ranking (explicit labels).\n",
+ "2. **Retrieval:** Implement and train a Two-Tower model to generate candidates.\n",
+ "3. **Ranking:** Implement and train a Neural Ranking model to predict click probabilities.\n",
+ "4. **Inference:** Run an end-to-end test (Retrieval --> Ranking) to generate recommendations for a specific user."
+ ],
+ "metadata": {
+ "id": "xcJBUXmeaavN"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "id": "AL5vdFd8QOZl",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "8b519c48-1e1a-4e58-9325-6108cfb7b4da"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\u001b[?25l \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/92.5 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m92.5/92.5 kB\u001b[0m \u001b[31m2.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25h"
+ ]
+ }
+ ],
+ "source": [
+ "!pip install -q keras-rs"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "id": "2cdPdsiFQOZm"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "os.environ[\"KERAS_BACKEND\"] = \"tensorflow\"\n",
+ "import keras\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "import tensorflow as tf\n",
+ "import pandas as pd\n",
+ "import keras_rs\n",
+ "import tensorflow_datasets as tfds\n",
+ "from mpl_toolkits.axes_grid1 import make_axes_locatable\n",
+ "from keras import layers\n",
+ "from concurrent.futures import ThreadPoolExecutor\n",
+ "from sklearn.model_selection import train_test_split\n",
+ "from sklearn.preprocessing import MinMaxScaler\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# **Preparing Dataset**"
+ ],
+ "metadata": {
+ "id": "fdhb5tuL9UBe"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from google.colab import files\n",
+ "files.upload()"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 91
+ },
+ "id": "RJN16Th-9W8E",
+ "outputId": "bfa060e0-25fe-41a4-cddd-b46aea023352"
+ },
+ "execution_count": 3,
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ ""
+ ],
+ "text/html": [
+ "\n",
+ " \n",
+ " \n",
+ " Upload widget is only available when the cell has been executed in the\n",
+ " current browser session. Please rerun this cell to enable.\n",
+ " \n",
+ " "
+ ]
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Saving kaggle (1).json to kaggle (1).json\n"
+ ]
+ },
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "{'kaggle (1).json': b'{\"username\":\"mansim071\",\"key\":\"7b9249c264ac5cb7d295afcdd44f7ad1\"}'}"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 3
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!mkdir -p ~/.kaggle\n",
+ "!mv kaggle.json ~/.kaggle/\n",
+ "!chmod 600 ~/.kaggle/kaggle.json"
+ ],
+ "metadata": {
+ "id": "G4JgdNRp9tI3"
+ },
+ "execution_count": 4,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!kaggle datasets download -d mafrojaakter/ad-click-data\n",
+ "!unzip -o ad-click-data.zip -d ./ad_click_data"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "NOhaq3bl-bmp",
+ "outputId": "bcd54c95-28dc-42b5-8f82-39f8763db18a"
+ },
+ "execution_count": 5,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Dataset URL: https://www.kaggle.com/datasets/mafrojaakter/ad-click-data\n",
+ "License(s): unknown\n",
+ "Downloading ad-click-data.zip to /content\n",
+ " 0% 0.00/37.6k [00:00, ?B/s]\n",
+ "100% 37.6k/37.6k [00:00<00:00, 138MB/s]\n",
+ "Archive: ad-click-data.zip\n",
+ " inflating: ./ad_click_data/Ad Click Data.csv \n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "ads_df= pd.read_csv('/content/ad_click_data/Ad Click Data.csv')\n",
+ "# Clean column names\n",
+ "ads_df.columns = ads_df.columns.str.strip()\n",
+ "# Rename the column name\n",
+ "ads_df = ads_df.rename(columns={\n",
+ " 'Male': 'gender',\n",
+ " 'Ad Topic Line': 'ad_topic',\n",
+ " 'City': 'city',\n",
+ " 'Country': 'country',\n",
+ " 'Daily Time Spent on Site': 'time_on_site',\n",
+ " 'Daily Internet Usage': 'internet_usage',\n",
+ " 'Area Income': 'area_income'\n",
+ "})\n",
+ "#Add user_id and add_id column\n",
+ "ads_df['user_id'] = \"user_\" + ads_df.index.astype(str)\n",
+ "ads_df['ad_id'] = \"ad_\" + ads_df['ad_topic'].astype('category').cat.codes.astype(str)\n",
+ "# Remove nulls and normalize\n",
+ "ads_df = ads_df.dropna()\n",
+ "#normalize\n",
+ "numeric_cols = [\"time_on_site\", \"internet_usage\", \"area_income\", \"Age\"]\n",
+ "scaler = MinMaxScaler()\n",
+ "ads_df[numeric_cols] = scaler.fit_transform(ads_df[numeric_cols])"
+ ],
+ "metadata": {
+ "id": "Nyq64PZo-axX"
+ },
+ "execution_count": 6,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "#Split the train and test datasets\n",
+ "x_train,x_test= train_test_split(ads_df,test_size=0.2,random_state=42)"
+ ],
+ "metadata": {
+ "id": "2xBrCTZkuD6p"
+ },
+ "execution_count": 7,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "def dict_to_tensor_features(df_features, continuous_features):\n",
+ " tensor_dict = {}\n",
+ " for k, v in df_features.items():\n",
+ " if k in continuous_features:\n",
+ " tensor_dict[k] = tf.expand_dims(tf.constant(v, dtype='float32'), axis=-1)\n",
+ " else:\n",
+ " v_str = np.array(v).astype(str).tolist()\n",
+ " tensor_dict[k] = tf.expand_dims(tf.constant(v_str, dtype='string'), axis=-1)\n",
+ " return tensor_dict\n",
+ "\n",
+ "def create_retrieval_dataset(data_df,all_ads_features,all_ad_ids,\n",
+ " user_features_list,ad_features_list,continuous_features_list):\n",
+ "\n",
+ " # Filter for Positive Interactions (Cicks)\n",
+ " positive_interactions = data_df[data_df[\"Clicked on Ad\"] == 1].copy()\n",
+ "\n",
+ " if positive_interactions.empty:\n",
+ " return None\n",
+ "\n",
+ " def sample_negative(positive_ad_id):\n",
+ " neg_ad_id = positive_ad_id\n",
+ " while neg_ad_id == positive_ad_id:\n",
+ " neg_ad_id = np.random.choice(all_ad_ids)\n",
+ " return neg_ad_id\n",
+ "\n",
+ " def create_triplets_row(pos_row):\n",
+ " pos_ad_id = pos_row.ad_id\n",
+ " neg_ad_id = sample_negative(pos_ad_id)\n",
+ "\n",
+ " neg_ad_row = all_ads_features[all_ads_features['ad_id'] == neg_ad_id].iloc[0]\n",
+ " user_features_dict = {name: getattr(pos_row, name) for name in user_features_list}\n",
+ " pos_ad_features_dict = {name: getattr(pos_row, name) for name in ad_features_list}\n",
+ " neg_ad_features_dict = {name: neg_ad_row[name] for name in ad_features_list}\n",
+ "\n",
+ " return {\n",
+ " \"user\": user_features_dict,\n",
+ " \"positive_ad\": pos_ad_features_dict,\n",
+ " \"negative_ad\": neg_ad_features_dict\n",
+ " }\n",
+ "\n",
+ " with ThreadPoolExecutor(max_workers=8) as executor:\n",
+ " triplets = list(executor.map(create_triplets_row, positive_interactions.itertuples(index=False)))\n",
+ "\n",
+ " triplets_df = pd.DataFrame(triplets)\n",
+ " user_df = triplets_df[\"user\"].apply(pd.Series)\n",
+ " pos_ad_df = triplets_df[\"positive_ad\"].apply(pd.Series)\n",
+ " neg_ad_df = triplets_df[\"negative_ad\"].apply(pd.Series)\n",
+ "\n",
+ " user_features_tensor = dict_to_tensor_features(user_df.to_dict('list'), continuous_features_list)\n",
+ " pos_ad_features_tensor = dict_to_tensor_features(pos_ad_df.to_dict('list'), continuous_features_list)\n",
+ " neg_ad_features_tensor = dict_to_tensor_features(neg_ad_df.to_dict('list'), continuous_features_list)\n",
+ "\n",
+ " features = {\n",
+ " \"user\": user_features_tensor,\n",
+ " \"positive_ad\": pos_ad_features_tensor,\n",
+ " \"negative_ad\": neg_ad_features_tensor,\n",
+ " }\n",
+ " y_true = tf.ones((triplets_df.shape[0], 1), dtype=tf.float32)\n",
+ " dataset = tf.data.Dataset.from_tensor_slices((features, y_true))\n",
+ " buffer_size = len(triplets_df)\n",
+ " dataset = dataset.shuffle(buffer_size=buffer_size).batch(64).cache().prefetch(tf.data.AUTOTUNE)\n",
+ " return dataset\n",
+ "\n",
+ "user_clicked_ads = (\n",
+ " x_train[x_train[\"Clicked on Ad\"] == 1]\n",
+ " .groupby(\"user_id\")[\"ad_id\"]\n",
+ " .apply(set)\n",
+ " .to_dict()\n",
+ ")\n",
+ "\n",
+ "for u in x_train[\"user_id\"].unique():\n",
+ " if u not in user_clicked_ads:\n",
+ " user_clicked_ads[u] = set()\n",
+ "\n",
+ "AD_FEATURES = [\"ad_id\", \"ad_topic\"]\n",
+ "USER_FEATURES = [\"user_id\", \"gender\", \"city\", \"country\", \"time_on_site\", \"internet_usage\", \"area_income\", \"Age\"]\n",
+ "continuous_features = [\"time_on_site\", \"internet_usage\", \"area_income\", \"Age\"]\n",
+ "\n",
+ "all_ads_features = x_train[AD_FEATURES].drop_duplicates().reset_index(drop=True)\n",
+ "all_ad_ids = all_ads_features['ad_id'].tolist()\n",
+ "\n",
+ "retrieval_train_dataset = create_retrieval_dataset(\n",
+ " data_df=x_train,\n",
+ " all_ads_features=all_ads_features,\n",
+ " all_ad_ids=all_ad_ids,\n",
+ " user_features_list=USER_FEATURES,\n",
+ " ad_features_list=AD_FEATURES,\n",
+ " continuous_features_list=continuous_features\n",
+ ")\n",
+ "\n",
+ "retrieval_test_dataset = create_retrieval_dataset(\n",
+ " data_df=x_test,\n",
+ " all_ads_features=all_ads_features,\n",
+ " all_ad_ids=all_ad_ids,\n",
+ " user_features_list=USER_FEATURES,\n",
+ " ad_features_list=AD_FEATURES,\n",
+ " continuous_features_list=continuous_features\n",
+ ")"
+ ],
+ "metadata": {
+ "id": "D0eSXIKpsUSM"
+ },
+ "execution_count": 28,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# **Implement the Retrival Model**\n",
+ "For the Retrieval stage, we will build a Two-Tower Model.\n",
+ "\n",
+ "**The Architecture Components:**\n",
+ "\n",
+ "1. User Tower:User features (User ID, demographics, behavior metrics like time_on_site). It encodes these mixed features into a fixed-size vector representation called the User Embedding.\n",
+ "2. Item (Ad) Tower:Ad features (Ad ID, Ad Topic Line).It encodes these features into a fixed-size vector representation called the Item Embedding.\n",
+ "3. Interaction (Similarity):We calculate the Dot Product between the User Embedding and the Item Embedding."
+ ],
+ "metadata": {
+ "id": "48AtiZBm1N6W"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "keras.utils.set_random_seed(42)"
+ ],
+ "metadata": {
+ "id": "07SgZrFa7BFy"
+ },
+ "execution_count": 29,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "vocab_map = {\n",
+ " \"user_id\": x_train[\"user_id\"].unique(),\n",
+ " \"gender\": x_train[\"gender\"].astype(str).unique(),\n",
+ " \"city\": x_train[\"city\"].unique(),\n",
+ " \"country\": x_train[\"country\"].unique(),\n",
+ " \"ad_id\": x_train[\"ad_id\"].unique(),\n",
+ " \"ad_topic\": x_train[\"ad_topic\"].unique()\n",
+ "}\n",
+ "cont_feats = [\"time_on_site\", \"internet_usage\", \"area_income\", \"Age\"]\n",
+ "\n",
+ "normalizers = {}\n",
+ "for f in cont_feats:\n",
+ " norm = layers.Normalization(axis=None)\n",
+ " norm.adapt(x_train[f].values.astype('float32'))\n",
+ " normalizers[f] = norm\n",
+ "\n",
+ "def build_tower(feature_names, continuous_names=None, embed_dim=64, name=\"tower\"):\n",
+ " inputs, embeddings = {}, []\n",
+ "\n",
+ " for feat in feature_names:\n",
+ " if feat in vocab_map:\n",
+ " inp = keras.Input(shape=(1,), dtype=tf.string, name=feat)\n",
+ " inputs[feat] = inp\n",
+ " vocab = list(vocab_map[feat])\n",
+ " x = layers.StringLookup(vocabulary=vocab)(inp)\n",
+ " x = layers.Embedding(len(vocab) + 1, embed_dim, embeddings_regularizer='l2')(x)\n",
+ " embeddings.append(layers.Flatten()(x))\n",
+ "\n",
+ " if continuous_names:\n",
+ " for feat in continuous_names:\n",
+ " inp = keras.Input(shape=(1,), dtype=tf.float32, name=feat)\n",
+ " inputs[feat] = inp\n",
+ " embeddings.append(normalizers[feat](inp))\n",
+ "\n",
+ " x = layers.Concatenate()(embeddings)\n",
+ " x = layers.Dense(128, activation=\"relu\")(x)\n",
+ " x = layers.Dropout(0.2)(x)\n",
+ " x = layers.Dense(64, activation=\"relu\")(x)\n",
+ " output = layers.Dense(embed_dim)(layers.Dropout(0.2)(x))\n",
+ "\n",
+ " return keras.Model(inputs=inputs, outputs=output, name=name)"
+ ],
+ "metadata": {
+ "id": "1xNPuYzU_Zgj"
+ },
+ "execution_count": 30,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "user_tower = build_tower([\"user_id\", \"gender\", \"city\", \"country\"], cont_feats, name=\"user_tower\")\n",
+ "ad_tower = build_tower([\"ad_id\", \"ad_topic\"], name=\"ad_tower\")"
+ ],
+ "metadata": {
+ "id": "oR97oiPV_f5v"
+ },
+ "execution_count": 31,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "def bpr_hinge_loss(y_true, y_pred):\n",
+ " margin = 1.0\n",
+ " return -tf.math.log(tf.nn.sigmoid(y_pred) + 1e-10)"
+ ],
+ "metadata": {
+ "id": "pt7mR-WxJFwx"
+ },
+ "execution_count": 32,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "class RetrievalModel(keras.Model):\n",
+ " def __init__(self, user_tower_instance, ad_tower_instance, **kwargs):\n",
+ " super().__init__(**kwargs)\n",
+ " self.user_tower = user_tower\n",
+ " self.ad_tower = ad_tower\n",
+ " self.ln_user = layers.LayerNormalization()\n",
+ " self.ln_ad = layers.LayerNormalization()\n",
+ "\n",
+ "\n",
+ " def call(self,inputs):\n",
+ " u_emb = self.ln_user(self.user_tower(inputs[\"user\"]))\n",
+ " pos_emb = self.ln_ad(self.ad_tower(inputs[\"positive_ad\"]))\n",
+ " neg_emb = self.ln_ad(self.ad_tower(inputs[\"negative_ad\"]))\n",
+ " pos_score = keras.ops.sum(u_emb * pos_emb, axis=1, keepdims=True)\n",
+ " neg_score = keras.ops.sum(u_emb * neg_emb, axis=1, keepdims=True)\n",
+ " return pos_score - neg_score\n",
+ "\n",
+ "\n",
+ " def get_embeddings(self, inputs):\n",
+ " u_emb = self.ln_user(self.user_tower(inputs[\"user\"]))\n",
+ " ad_emb = self.ln_ad(self.ad_tower(inputs[\"positive_ad\"]))\n",
+ " dot_interaction = keras.ops.sum(u_emb * ad_emb, axis=1, keepdims=True)\n",
+ " return u_emb, ad_emb, dot_interaction"
+ ],
+ "metadata": {
+ "id": "Mx-PbEFOeCMf"
+ },
+ "execution_count": 33,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "retrieval_model = RetrievalModel(user_tower, ad_tower)\n",
+ "retrieval_model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),loss=bpr_hinge_loss)\n",
+ "history = retrieval_model.fit(retrieval_train_dataset,epochs=30)"
+ ],
+ "metadata": {
+ "id": "K2i_5VPiF2F_",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "af8bfbdb-f97d-4cbc-cc22-83c19ca6e478"
+ },
+ "execution_count": 34,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Epoch 1/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m6s\u001b[0m 9ms/step - loss: 2.9548\n",
+ "Epoch 2/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 1.3977 \n",
+ "Epoch 3/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 1.1149 \n",
+ "Epoch 4/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.9265 \n",
+ "Epoch 5/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.7926 \n",
+ "Epoch 6/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.6924 \n",
+ "Epoch 7/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.6163 \n",
+ "Epoch 8/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.5574 \n",
+ "Epoch 9/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.5107 \n",
+ "Epoch 10/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 12ms/step - loss: 0.4725\n",
+ "Epoch 11/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.4401 \n",
+ "Epoch 12/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.4120 \n",
+ "Epoch 13/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.3869 \n",
+ "Epoch 14/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.3644 \n",
+ "Epoch 15/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.3438 \n",
+ "Epoch 16/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.3249 \n",
+ "Epoch 17/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.3075 \n",
+ "Epoch 18/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.2914 \n",
+ "Epoch 19/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.2765 \n",
+ "Epoch 20/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.2627 \n",
+ "Epoch 21/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.2498 \n",
+ "Epoch 22/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.2378 \n",
+ "Epoch 23/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.2265 \n",
+ "Epoch 24/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.2160 \n",
+ "Epoch 25/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.2061 \n",
+ "Epoch 26/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.1968 \n",
+ "Epoch 27/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.1880 \n",
+ "Epoch 28/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 7ms/step - loss: 0.1798 \n",
+ "Epoch 29/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 8ms/step - loss: 0.1720 \n",
+ "Epoch 30/30\n",
+ "\u001b[1m6/6\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 9ms/step - loss: 0.1646 \n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "pd.DataFrame(history.history).plot(subplots=True, layout=(1, 3), figsize=(12, 4), title=\"Retrival Model Metrics\")\n",
+ "plt.show()"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 408
+ },
+ "id": "1tKldoRSdyu0",
+ "outputId": "155cac09-7007-4974-d6c4-231a35cd0683"
+ },
+ "execution_count": 37,
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ ""
+ ],
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAlIAAAGHCAYAAAB7xLxyAAAAOnRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjAsIGh0dHBzOi8vbWF0cGxvdGxpYi5vcmcvlHJYcgAAAAlwSFlzAAAPYQAAD2EBqD+naQAAPERJREFUeJzt3Xl8VPW9//H3ZJtsk4RA9gQIiywii0EQUEShUgQKVhGhLajAdYlaBGvL7UWtyy/1WitVcUGvcKtFEBH1oojIKgVURFRAEJAlQHbITPaEzPn9ETISSYBMZjLJzOv5eJxHMmfOmfOZEx7Ht9/v93yPyTAMQwAAAGg0P08XAAAA0FoRpAAAAJxEkAIAAHASQQoAAMBJBCkAAAAnEaQAAACcRJACAABwEkEKAADASQQpAAAAJxGkAACtXseOHXXbbbe57fM3bNggk8mkDRs2uO0YTWUymfToo482er/Dhw/LZDJp0aJFLq+pOXj6b0OQAgA4ZdGiRTKZTI4lICBASUlJuu2223T8+HGnPnPPnj169NFHdfjwYdcW20zOPiebN28+533DMJSSkiKTyaQxY8Z4oELn1QYWk8mkN998s95thgwZIpPJpF69ejl1jMWLF2vevHlNqLL5BXi6AABA6/bYY48pNTVV5eXl2rZtmxYtWqTNmzdr165dCg4ObtRn7dmzR3/5y180bNgwdezY8aL327dvn/z8Wk7bQHBwsBYvXqyrrrqqzvqNGzfq2LFjMpvNHqqs6Wq/229/+9s66w8fPqwtW7Y0+m9+tsWLF2vXrl2aOXPmRe8zdOhQlZWVKSgoyOnjNkXL+VcHAGiVRo0apd/+9reaPn26XnvtNT344IM6ePCgPvjgA7ce1zAMlZWVSZLMZrMCAwPderzGuOGGG7Rs2TKdPn26zvrFixcrLS1N8fHxHqqs6W644QatWbNG+fn5ddYvXrxYcXFx6t+/f7PUUV5eLrvdLj8/PwUHB3ssSBOkAAAudfXVV0uSDh48WGf93r17dfPNNys6OlrBwcHq379/nbC1aNEiTZgwQZJ07bXXOrqRase+dOzYUWPGjNHq1avVv39/hYSE6JVXXnG8VztGavv27TKZTPrf//3fc2pbvXq1TCaTVq5cKUk6cuSI7rnnHnXr1k0hISFq27atJkyY0OSuxUmTJqmgoEBr1qxxrKusrNQ777yjyZMn17tPSUmJZs+erZSUFJnNZnXr1k1/+9vfZBhGne0qKir0wAMPKCYmRhaLRb/61a907Nixej/z+PHjuuOOOxQXFyez2axLL71Ur7/+epO+27hx42Q2m7Vs2bI66xcvXqxbbrlF/v7+9e735ptvKi0tTSEhIYqOjtatt96qzMxMx/vDhg3Thx9+qCNHjjj+9rWtkrXdikuWLNF//dd/KSkpSaGhobLZbA2Okfr88891ww03qE2bNgoLC1Pv3r31j3/8w/F+dna2br/9diUnJ8tsNishIUHjxo1r9N+erj0AgEvV/oeoTZs2jnW7d+/WkCFDlJSUpD/96U8KCwvT22+/rfHjx2v58uW68cYbNXToUN1///167rnn9J//+Z/q0aOHJDl+SjVdeJMmTdKdd96pGTNmqFu3buccv3///urUqZPefvttTZ06tc57S5cuVZs2bTRy5EhJ0pdffqktW7bo1ltvVXJysg4fPqyXXnpJw4YN0549exQaGurUOejYsaMGDRqkt956S6NGjZIkrVq1SlarVbfeequee+65OtsbhqFf/epXWr9+vaZNm6a+fftq9erV+sMf/qDjx4/r2WefdWw7ffp0vfnmm5o8ebIGDx6sdevWafTo0efUkJOToyuvvFImk0n33nuvYmJitGrVKk2bNk02m61R3WdnCw0N1bhx4/TWW2/p7rvvliR988032r17t1577TV9++235+zz5JNPau7cubrllls0ffp05eXl6fnnn9fQoUP19ddfKyoqSn/+859ltVp17Ngxx/cNDw+v8zmPP/64goKC9OCDD6qioqLB7rw1a9ZozJgxSkhI0O9//3vFx8fr+++/18qVK/X73/9eknTTTTdp9+7duu+++9SxY0fl5uZqzZo1Onr0aKO6lWUAAOCEhQsXGpKMTz/91MjLyzMyMzONd955x4iJiTHMZrORmZnp2Hb48OHGZZddZpSXlzvW2e12Y/DgwUbXrl0d65YtW2ZIMtavX3/O8Tp06GBIMj7++ON635s6darj9Zw5c4zAwEDj5MmTjnUVFRVGVFSUcccddzjWlZaWnvNZW7duNSQZ//znPx3r1q9f32Bd9Z2TL7/80njhhRcMi8XiOMaECROMa6+91lHv6NGjHfu99957hiTjiSeeqPN5N998s2EymYwDBw4YhmEYO3fuNCQZ99xzT53tJk+ebEgyHnnkEce6adOmGQkJCUZ+fn6dbW+99VYjMjLSUdehQ4cMScbChQvP+91qz8GyZcuMlStXGiaTyTh69KhhGIbxhz/8wejUqZNhGIZxzTXXGJdeeqljv8OHDxv+/v7Gk08+WefzvvvuOyMgIKDO+tGjRxsdOnRo8NidOnU652/287/N6dOnjdTUVKNDhw7GqVOn6mxrt9sNwzCMU6dOGZKMp59++rzf+WLQtQcAaJIRI0YoJiZGKSkpuvnmmxUWFqYPPvhAycnJkqSTJ09q3bp1uuWWW1RUVKT8/Hzl5+eroKBAI0eO1P79+y/6Lr/U1FRHa9L5TJw4UVVVVXr33Xcd6z755BMVFhZq4sSJjnUhISGO36uqqlRQUKAuXbooKipKO3bsuNhTUK9bbrlFZWVlWrlypYqKirRy5coGu/U++ugj+fv76/7776+zfvbs2TIMQ6tWrXJsJ+mc7X7eumQYhpYvX66xY8fKMAzHOc/Pz9fIkSNltVqb9P2uv/56RUdHa8mSJTIMQ0uWLNGkSZPq3fbdd9+V3W7XLbfcUqeO+Ph4de3aVevXr7/o406dOrXO36w+X3/9tQ4dOqSZM2cqKiqqznsmk0lSzd89KChIGzZs0KlTpy76+PWhaw8A0CTz58/XJZdcIqvVqtdff12bNm2qc1fagQMHZBiG5s6dq7lz59b7Gbm5uUpKSrrgsVJTUy+qpj59+qh79+5aunSppk2bJqmmW69du3a67rrrHNuVlZUpIyNDCxcu1PHjx+uMR7JarRd1rIbExMRoxIgRWrx4sUpLS1VdXa2bb7653m2PHDmixMREWSyWOutruzWPHDni+Onn56fOnTvX2e7nXZx5eXkqLCzUggULtGDBgnqPmZub69T3kqTAwEBNmDBBixcv1oABA5SZmdlgSNy/f78Mw1DXrl0b/KyLdTF//9qxeeebgsFsNuupp57S7NmzFRcXpyuvvFJjxozRlClTGn0jAEEKANAkAwYMcNypNX78eF111VWaPHmy9u3bp/DwcNntdknSgw8+2GBrUpcuXS7qWBdqjTjbxIkT9eSTTyo/P18Wi0UffPCBJk2apICAn/7Td99992nhwoWaOXOmBg0apMjISJlMJt16662Oupti8uT
JmjFjhrKzszVq1KhzWkjcpbb23/72t+eME6vVu3fvJh1j8uTJevnll/Xoo4+qT58+6tmzZ4O1mEwmrVq1qt6B6D8fB3U+jfn7X8jMmTM1duxYvffee1q9erXmzp2rjIwMrVu3Tv369bvozyFIAQBcxt/fXxkZGbr22mv1wgsv6E9/+pM6deokqablYcSIEefdv7brxRUmTpyov/zlL1q+fLni4uJks9l066231tnmnXfe0dSpU/XMM8841pWXl6uwsNAlNdx444268847tW3bNi1durTB7Tp06KBPP/1URUVFdVql9u7d63i/9qfdbtfBgwfrtELt27evzufV3tFXXV19wXPurKuuukrt27fXhg0b9NRTTzW4XefOnWUYhlJTU3XJJZec9zNd8fevba3btWvXBb97586dNXv2bM2ePVv79+9X37599cwzzzQ44Wh9GCMFAHCpYcOGacCAAZo3b57Ky8sVGxurYcOG6ZVXXlFWVtY52+fl5Tl+DwsLkySXBJkePXrosssu09KlS7V06VIlJCRo6NChdbbx9/c/Z3qB559/XtXV1U0+vlTT2vLSSy/p0Ucf1dixYxvc7oYbblB1dbVeeOGFOuufffZZmUwmx51/tT9/ftffz2cD9/f310033aTly5dr165d5xzv7HPuLJPJpOeee06PPPKIfve73zW43a9//Wv5+/vrL3/5yznn2jAMFRQUOF6HhYU1uUv18ssvV2pqqubNm3fOv6Pa45eWlqq8vLzOe507d5bFYlFFRUWjjkeLFADA5f7whz9owoQJWrRoke666y7Nnz9fV111lS677DLNmDFDnTp1Uk5OjrZu3apjx47pm2++kST17dtX/v7+euqpp2S1WmU2m3XdddcpNjbWqTomTpyohx9+WMHBwZo2bdo5kzaOGTNGb7zxhiIjI9WzZ09t3bpVn376qdq2bdvkc1Croa61s40dO1bXXnut/vznP+vw4cPq06ePPvnkE73//vuaOXOmo5Wlb9++mjRpkl588UVZrVYNHjxYa9eu1YEDB875zL/+9a9av369Bg4cqBkzZqhnz546efKkduzYoU8//VQnT55s8ncbN26cxo0bd95tOnfurCeeeEJz5szR4cOHNX78eFksFh06dEgrVqzQf/zHf+jBBx+UJKWlpWnp0qWaNWuWrrjiCoWHh583gNbHz89PL730ksaOHau+ffvq9ttvV0JCgvbu3avdu3dr9erV+uGHHzR8+HDdcsst6tmzpwICArRixQrl5OSc02p5QU2+7w8A4JPOvtX/56qrq43OnTsbnTt3Nk6fPm0YhmEcPHjQmDJlihEfH28EBgYaSUlJxpgxY4x33nmnzr6vvvqq0alTJ8Pf37/Obe0/nzLgbD+f/qDW/v37DUmGJGPz5s3nvH/q1Cnj9ttvN9q1a2eEh4cbI0eONPbu3XvO5zkz/cH51PddioqKjAceeMBITEw0AgMDja5duxpPP/2045b9WmVlZcb9999vtG3b1ggLCzPGjh1rZGZmnjP9gWEYRk5OjpGenm6kpKQYgYGBRnx8vDF8+HBjwYIFjm2cmf7gfH4+/UGt5cuXG1dddZURFhZmhIWFGd27dzfS09ONffv2ObYpLi42Jk+ebERFRRmSHFMhnO/YDf1tNm/ebPziF78wLBaLERYWZvTu3dt4/vnnDcMwjPz8fCM9Pd3o3r27ERYWZkRGRhoDBw403n777fN+t/qYDONn7WwAAAC4KIyRAgAAcBJBCgAAwEkEKQAAACcRpAAAAJxEkAIAAHASQQoAAMBJBCkAAAAnEaQAAACcRJACAABwEkEKAADASQQpAAAAJxGkAAAAnESQAgAAcBJBCgAAwEkEKQAAACcRpAAAAJxEkAIAAHASQQoAAMBJBCkAAAAnEaQAAACcRJACAABwEkEKAADASQQpAAAAJwV4uoCLYbfbdeLECVksFplMJk+XA6CVMQxDRUVFSkxMlJ8f//8IwHVaRZA6ceKEUlJSPF0GgFYuMzNTycnJni4DgBdpFUHKYrFIqrkIRkREeLgaAK2NzWZTSkqK41oCAK7SKoJUbXdeREQEQQqA0xgaAMDVGCwAAADgJIIUAACAkwhSAAAATmoVY6QAX2a321VZWenpMlq8oKAgpjYA0OwIUkALVllZqUOHDslut3u6lBbPz89PqampCgoK8nQpAHwIQQpooQzDUFZWlvz9/ZWSkkJry3nUTtqblZWl9u3bc3cegGZDkAJaqNOnT6u0tFSJiYkKDQ31dDktXkxMjE6cOKHTp08rMDDQ0+UA8BH8Ly7QQlVXV0sSXVUXqfY81Z43AGgOBCmghaOb6uJwngB4AkEKAADASV4ZpO564yuNe2GzjhaUeroUwOcMGzZMM2fO9HQZANAsvDJIfXfcqm+OWXWqlLl3AACA+3hlkLIE19yMWFR+2sOVAAAAb+aVQSrcXBOkiiuqPFwJ4NtOnTqlKVOmqE2bNgoNDdWoUaO0f/9+x/tHjhzR2LFj1aZNG4WFhenSSy/VRx995Nj3N7/5jWJiYhQSEqKuXbtq4cKFnvoqAFAvr5xHKpwWKXghwzBUVuWZW/tDAv2duivutttu0/79+/XBBx8oIiJCf/zjH3XDDTdoz549CgwMVHp6uiorK7Vp0yaFhYVpz549Cg8PlyTNnTtXe/bs0apVq9SuXTsdOHBAZWVlrv5qANAkXhmkLME1k/ERpOBNyqqq1fPh1R459p7HRio0qHGXi9oA9e9//1uDBw+WJP3rX/9SSkqK3nvvPU2YMEFHjx7VTTfdpMsuu0yS1KlTJ8f+R48eVb9+/dS/f39JUseOHV3zZQDAhby8a48gBXjK999/r4CAAA0cONCxrm3bturWrZu+//57SdL999+vJ554QkOGDNEjjzyib7/91rHt3XffrSVLlqhv37566KGHtGXLlmb/DgBwIV7aIkWQgvcJCfTXnsdGeuzY7jB9+nSNHDlSH374oT755BNlZGTomWee0X333adRo0bpyJEj+uijj7RmzRoNHz5c6enp+tvf/uaWWgDAGV7dIkXXHryJyWRSaFCARxZnxkf16NFDp0+f1ueff+5YV1BQoH379qlnz56OdSkpKbrrrrv07rvvavbs2Xr11Vcd78XExGjq1Kl68803NW/ePC1YsKBpJxEAXMyrW6SKyrlrD/CUrl27aty4cZoxY4ZeeeUVWSwW/elPf1JSUpLGjRsnSZo5c6ZGjRqlSy65RKdOndL69evVo0cPSdLDDz+stLQ0XXrppaqoqNDKlSsd7wFAS+HVLVJ07QGetXDhQqWlpWnMmDEaNGiQDMPQRx99pMDAmhtCqqurlZ6erh49euiXv/ylLrnkEr344ouSah5CPGfOHPXu3VtDhw6Vv7+/lixZ4smvAwDnMBmGYXi6iAux2WyKjIyU1WpVRETEBbf/eFeW7npzh/p3aKN37h7cDBUCrldeXq5Dhw4pNTVVwcHBni6nxTvf+WrsNQQALpZXtkjVTn9AixQAAHAnrwxSDDYHAADNwTuDFIPNAQBAM/DKIGU5a7B5Kx
gCBgAAWinvDFJnxkjZDam00jPPJgMAAN7PK4NUcKCf/P1qJhBkwDlaO1pVLw7nCYAneOWEnCaTSeHmAFnLqlRUflpx3O2MVigwMFAmk0l5eXmKiYlxanZxX2EYhvLy8mQymRxzVAFAc/DKICXJEaRokUJr5e/vr+TkZB07dkyHDx/2dDktnslkUnJysvz93fNcQACoj9cGKR4TA28QHh6url27qqqKf8cXEhgYSIgC0Oy8PkgVM5cUWjl/f38CAgC0UF452Fw6a1JOuvYAAICbeG2QcjwmhhYpAADgJl4bpH6a3ZwgBQAA3MNrg9RPs5szSBcAALiH1wap8LMeEwMAAOAOXhukLHTtAQAAN/PaIBV+ZrA5QQoAALiL9wYpuvYAAICbeW2QYkJOAADgbl4fpHhEDAAAcBevDVLMbA4AANytUUEqIyNDV1xxhSwWi2JjYzV+/Hjt27fvvPssWrRIJpOpzhIcHNykoi9G7YScxRWnZRiG248HAAB8T6OC1MaNG5Wenq5t27ZpzZo1qqqq0vXXX6+SkpLz7hcREaGsrCzHcuTIkSYVfTEizty1ZxhSaWW1248HAAB8T0BjNv7444/rvF60aJFiY2P11VdfaejQoQ3uZzKZFB8f71yFTjIH+CnAz6TTdkNF5acVZm7UVwUAALigJo2RslqtkqTo6OjzbldcXKwOHTooJSVF48aN0+7du8+7fUVFhWw2W52lsUwm01ndeww4BwAArud0kLLb7Zo5c6aGDBmiXr16Nbhdt27d9Prrr+v999/Xm2++KbvdrsGDB+vYsWMN7pORkaHIyEjHkpKS4lSNjgHnTIEAAADcwGQ4ORL77rvv1qpVq7R582YlJydf9H5VVVXq0aOHJk2apMcff7zebSoqKlRRUeF4bbPZlJKSIqvVqoiIiIs+1qh/fKbvs2x6Y9oAXd015qL3A+BdbDabIiMjG30NAYALcWrg0L333quVK1dq06ZNjQpRkhQYGKh+/frpwIEDDW5jNptlNpudKa0OCy1SAADAjRrVtWcYhu69916tWLFC69atU2pqaqMPWF1dre+++04JCQmN3rexwpndHAAAuFGjWqTS09O1ePFivf/++7JYLMrOzpYkRUZGKiQkRJI0ZcoUJSUlKSMjQ5L02GOP6corr1SXLl1UWFiop59+WkeOHNH06dNd/FXOxaScAADAnRoVpF566SVJ0rBhw+qsX7hwoW677TZJ0tGjR+Xn91ND16lTpzRjxgxlZ2erTZs2SktL05YtW9SzZ8+mVX4ReEwMAABwp0YFqYsZl75hw4Y6r5999lk9++yzjSrKVejaAwAA7uS1z9qTfhpsXkzXHgAAcAPvDlJnHhPDGCkAAOAOXh2kmJATAAC4k3cHKccYKQabAwAA1/PqIMUYKQAA4E7eHaTOjJHirj0AAOAOXh2kwoMZIwUAANzHu4NUbdde5WnZ7U49mxkAAKBBXh2kamc2NwyptKraw9UAAABv49VByhzgp0B/kyQeEwMAAFzPq4OUyWT6qXuPcVIAAMDFvDpISWcNOGcKBAAA4GJeH6QsZqZAAAAA7uH1QYopEAAAgLt4fZD6aXZzBpsDAADX8vogRYsUAABwF68PUrVzSfG8PQAA4GpeH6TCzww2p0UKAAC4mtcHKUeLFEEKAAC4mNcHKceEnHTtAQAAF/P6IGVhQk4AAOAmXh+kalukeNYeAABwNe8PUoyRAgAAbuL1QSoi+MwjYujaAwAALub1Qeqnrj2CFAAAcC3vD1JnTchptxsergYAAHgT7w9SZ1qkJKmkklYpAADgOl4fpIID/RXkX/M1GScFAABcyeuDlMSDiwEAgHv4RpBiwDkAAHADnwpSdO0BAABX8okgxYOLAQCAO/hUkOIxMQAAwJV8IkjRtQcAANzBJ4KU5cxjYhhsDgAAXMknghTTHwAAAHfwjSDl6NpjjBQAAHAdnwhSlmDGSAEAANfzqSBF1x4AAHAlnwhS4WYGmwMAANfzkSBF1x4AAHC9RgWpjIwMXXHFFbJYLIqNjdX48eO1b9++C+63bNkyde/eXcHBwbrsssv00UcfOV2wM5jZHAAAuEOjgtTGjRuVnp6ubdu2ac2aNaqqqtL111+vkpKSBvfZsmWLJk2apGnTpunrr7/W+PHjNX78eO3atavJxV8sBpsDAAB3MBmGYTi7c15enmJjY7Vx40YNHTq03m0mTpyokpISrVy50rHuyiuvVN++ffXyyy9f1HFsNpsiIyNltVoVERHR6DoLiiuU9sSnkqSD/+8G+fuZGv0ZAFqvpl5DAKAhTRojZbVaJUnR0dENbrN161aNGDGizrqRI0dq69atDe5TUVEhm81WZ2mK2gk5JamkklYpAADgGk4HKbvdrpkzZ2rIkCHq1atXg9tlZ2crLi6uzrq4uDhlZ2c3uE9GRoYiIyMdS0pKirNlSpLMAf4KCqj5qoyTAgAAruJ0kEpPT9euXbu0ZMkSV9YjSZozZ46sVqtjyczMbPJnWszMJQUAAFwr4MKbnOvee+/VypUrtWnTJiUnJ5932/j4eOXk5NRZl5OTo/j4+Ab3MZvNMpvNzpTWoPDgABWUVPKYGAAA4DKNapEyDEP33nuvVqxYoXXr1ik1NfWC+wwaNEhr166ts27NmjUaNGhQ4yptonBapAAAgIs1qkUqPT1dixcv1vvvvy+LxeIY5xQZGamQkBBJ0pQpU5SUlKSMjAxJ0u9//3tdc801euaZZzR69GgtWbJE27dv14IFC1z8Vc6PKRAAAICrNapF6qWXXpLVatWwYcOUkJDgWJYuXerY5ujRo8rKynK8Hjx4sBYvXqwFCxaoT58+euedd/Tee++dd4C6O/CYGAAA4GqNapG6mCmnNmzYcM66CRMmaMKECY05lMsxuzkAAHA1n3jWnnTWGCm69gAAgIv4TJCiRQoAALiazwSp2tnNi8qZ/gAAALiGzwSp2gk5uWsPAAC4iu8EqeCau/YIUgAAwFV8JkgxIScAAHA13wlSjJECAAAu5jtBijFSAADAxXwmSEXUjpGiaw8AALiIzwSp2q69kspqVdsvPEM7AADAhfhMkAoz+zt+p3sPAAC4gs8EKXOAv4ICar4uQQoAALiCzwQpSYrgMTEAAMCFfCpI/TSXFFMgAACApvOtIFU7lxRdewAAwAV8KkhZzEyBAAAAXMenglRtixSDzQEAgCv4VJCyMEYKAAC4kE8FqXDu2gMAAC7kU0HKwmBzAADgQj4VpMLPDDYvokUKAAC4gG8FKbr2AACAC/lUkKodbM5dewAAwBV8K0gxRgoAALiQTwUpHhEDAABcybeCFGOkAACAC/lUkIoIPvOIGLr2AACAC/hUkKrt2iutrFa13fBwNQAAoLXzqSAVdiZISXTvAQCApvOpIBUU4CdzQM1XLqpgwDkAAGganwpSkmRhnBQAAHARHwxS3LkHAABcw+eC1E9zSRGkAABA0/hukKJrDwAANJHPBSm69gAAgKv4XJCqnd2cx8QAAICm8rkgZTnTtcddewAAoKl8L0idmf6AweYAAKCpfC5IOR5cT
IsUAABoIt8LUmbGSAEAANfwuSBloUUKAAC4iO8GKcZIAQCAJmp0kNq0aZPGjh2rxMREmUwmvffee+fdfsOGDTKZTOcs2dnZztbcJOHmM4PNaZECAABN1OggVVJSoj59+mj+/PmN2m/fvn3KyspyLLGxsY09tEvwiBgAAOAqAY3dYdSoURo1alSjDxQbG6uoqKiL2raiokIVFRWO1zabrdHHawhdewAAwFWabYxU3759lZCQoF/84hf697//fd5tMzIyFBkZ6VhSUlJcVkdtkCqrqtbparvLPhcAAPgetwephIQEvfzyy1q+fLmWL1+ulJQUDRs2TDt27Ghwnzlz5shqtTqWzMxMl9UTZv6pEY479wAAQFM0umuvsbp166Zu3bo5Xg8ePFgHDx7Us88+qzfeeKPefcxms8xms1vqCfT3U3Cgn8qr7CoqP62o0CC3HAcAAHg/j0x/MGDAAB04cMATh5b002NiaJECAABN4ZEgtXPnTiUkJHji0JJ4cDEAAHCNRnftFRcX12lNOnTokHbu3Kno6Gi1b99ec+bM0fHjx/XPf/5TkjRv3jylpqbq0ksvVXl5uV577TWtW7dOn3zyieu+RSPVPm+Px8QAAICmaHSQ2r59u6699lrH61mzZkmSpk6dqkWLFikrK0tHjx51vF9ZWanZs2fr+PHjCg0NVe/evfXpp5/W+YzmxlxSAADAFUyGYRieLuJCbDabIiMjZbVaFRER0eTPu/ON7Vq9O0dP3thLvxnYwQUVAmjJXH0NAYBaPvesPemnx8QwKScAAGgKnwxSlmC69gAAQNP5ZJAK5649AADgAj4ZpGiRAgAAruCTQYrpDwAAgCv4ZpBi+gMAAOACPhmkkqJCJEl7s22qtrf42R8AAEAL5ZNBqk9KlCzmAJ0qrdKu41ZPlwMAAFopnwxSgf5+GtKlnSRp4w95Hq4GAAC0Vj4ZpCTpmm4xkghSAADAeT4bpIZeUhOkvj56StZS7t4DAACN57NBKikqRF1jw2U3pM0H8j1dDgAAaIV8NkhJ0jWX1Hbv5Xq4EgAA0Br5dpA6a5yUYTANAgAAaByfDlJXdIxWcKCfcmwV+iGn2NPlAACAVsang1RwoL+u7NRWEt17AACg8Xw6SElnj5NiGgQAANA4BKkzQerLQ6dUUsGz9wAAwMXz+SCV2i5MKdEhqqy2a9uPBZ4uBwAAtCI+H6RMJhPdewAAwCk+H6Qk6ZpLYiURpAAAQOMQpCQN6txWgf4mHSko1eH8Ek+XAwAAWgmClKRwc4D6d4iWRKsUAAC4eASpM86e5RwAAOBiEKTOqB1wvvVggcqrqj1cDQAAaA0IUmd0j7co1mJWWVW1th8+5elyAABAK0CQOqPuNAg8LgYAAFwYQeosQ5lPCgAANAJB6ixXdWknP5P0Q06xThSWebocAADQwhGkztImLEh9UqIkSZtolQIAABdAkPqZ2nFSm/YTpAAAwPkRpH6mNkh9tj9fp6vtHq4GAAC0ZASpn+mdHKWo0EAVlZ/WzsxCT5cDAABaMILUz/j7mXR1V+7eAwAAF0aQqsc1TIMAAAAuAkGqHkO7tpMkfXvMqvziCg9XAwAAWiqCVD1iI4LVMyFCkrRqV7aHqwEAAC0VQaoBN6clS5L+57MfVW03PFwNAABoiQhSDbh1QIqiQgN1uKBUn+ymVQoAAJyLINWA0KAATbmygyTp5Y0HZRi0SgEAgLoIUucxZXBHmQP89M0xqz4/dNLT5QAAgBam0UFq06ZNGjt2rBITE2UymfTee+9dcJ8NGzbo8ssvl9lsVpcuXbRo0SInSm1+7cLNmtC/ZqzUKxsPergaAADQ0jQ6SJWUlKhPnz6aP3/+RW1/6NAhjR49Wtdee6127typmTNnavr06Vq9enWji/WE6Vd1kp9JWr8vT3uzbZ4uBwAAtCABjd1h1KhRGjVq1EVv//LLLys1NVXPPPOMJKlHjx7avHmznn32WY0cObKxh292HduFaVSvBH34XZYWbPpRf7+lr6dLAgAALYTbx0ht3bpVI0aMqLNu5MiR2rp1a4P7VFRUyGaz1Vk86T+GdpIkfbDzhE4Ulnm0FgAA0HK4PUhlZ2crLi6uzrq4uDjZbDaVldUfSjIyMhQZGelYUlJS3F3mefVJidKgTm112m7o9c2HPFoLAABoOVrkXXtz5syR1Wp1LJmZmZ4uSXdeU9Mq9dYXR2UtrfJwNQAAoCVwe5CKj49XTk5OnXU5OTmKiIhQSEhIvfuYzWZFRETUWTztmkti1D3eopLKar35+RFPlwMAAFoAtwepQYMGae3atXXWrVmzRoMGDXL3oV3KZDI5WqUW/vuwyquqPVwRAADwtEYHqeLiYu3cuVM7d+6UVDO9wc6dO3X06FFJNd1yU6ZMcWx/11136ccff9RDDz2kvXv36sUXX9Tbb7+tBx54wDXfoBmN6Z2oxMhg5RdXaMXXxz1dDgAA8LBGB6nt27erX79+6tevnyRp1qxZ6tevnx5++GFJUlZWliNUSVJqaqo+/PBDrVmzRn369NEzzzyj1157rVVMffBzgf5+mnZ1TavUq5t4mDEAAL7OZLSCh8jZbDZFRkbKarV6fLxUScVpDf7rOlnLqvTyb9P0y17xHq0HwIW1pGsIAO/SIu/aa8nCzAGaMoiHGQMAAIKUU6YO7qigAD/tzCzUl4dPebocAADgIQQpJ7QLN2tCWs3DjF/ccMDD1QAAAE8hSDlpxtWd5O9n0oZ9eVq9O9vT5QAAAA8gSDmpY7sw3XnmGXxz39slaxmznQMA4GsIUk1w//Cu6tQuTLlFFfrrqu89XQ4AAGhmBKkmCA70V8avL5MkvfVFprYeLPBwRQAAoDkRpJpoYKe2+s3A9pKkOe9+y6NjAADwIQQpF/jjqO6KjwjW4YJSPfvpD54uBwAANBOClAtEBAfq8fG9JEmvfXZIu45bPVwRAABoDgQpF/lFzziN7p2garuhh975VlXVdk+XBAAA3Iwg5UKPjr1UkSGB2pNl06uf/ejpcgAAgJsRpFwoxmLW3DE9JUnzPt2vH/OKPVwRAABwJ4KUi910eZKu7tpOlaft+tO738lu56HGAAB4K4KUi5lMJv2/Gy9TSKC/vjh0Um99edTTJQEAADchSLlBSnSoHhzZTZL014/2Ksta5uGKAACAOxCk3OS2wR3VNyVKRRWnddebO1RWyUSdAAB4G4KUm/j7mTRvYl9FhQbqm8xCPbB0J+OlAADwMgQpN+rYLkwLftdfQf5++nh3tp76eK+nSwIAAC5EkHKzAanR+u+be0uSXtn0o/71+REPVwQAAFyFINUMxvdL0gMjLpEkPfz+bm38Ic/DFQEAAFcgSDWT+4d30a/7Janabij9Xzu0L7vI0yUBAIAmIkg1E5PJpIybLtPA1GgVV5zWHYu+VK6t3NNlAQCAJiBINSNzgL9e+V2aOrUL0/HCMk3/53aVVp72dFkAAMBJBKlmFhUapNdvu0JtQgP17TGrfr9kp6qZFgEAgFaJIOUBHduF6dUpNdMirNmToyc//F6GQZgCAKC1IUh5SP+O0Xp6
Qs20CK//+5D+c8Uuna62e7gqAADQGAQpDxrXN0mPj+8lk0l664ujuvONrxgzBQBAK0KQ8rDfXdlBL/0mTeYAP63dm6tJr36uguIKT5cFAAAuAkGqBfhlr3gtnjHQ8Vy+m17aoiMFJZ4uCwAAXABBqoVI6xCt5XcPVnKbEB0uKNWvX9yibzILPV0WAAA4D4JUC9I5Jlzv3jNYlyZGqKCkUrcu2KZ1e3M8XRYAAGgAQaqFibUEa+mdg3R113Yqq6rWjH9+paVfHvV0WQAAoB4EqRYo3Byg12+7Qjddnqxqu6E/Lv9Oj6/co/Kqak+XBgAAzkKQaqEC/f30twm9dd91XSRJ/7P5kMY+v1m7jls9XBkAAKhFkGrBTCaTZl/fTf8ztb/ahZu1P7dY4+f/W8+v3c/knQAAtAAEqVZgeI84ffLAUI3qFa/TdkPPrPlBN7+8VT/mFXu6NAAAfBpBqpWIDgvSi7+5XM9O7CNLcIB2Zhbqhuc+0z+3Hpadhx4DAOARBKlWxGQy6cZ+yVo9c6iGdGmr8iq7Hn5/t6Yu/EJZ1jJPlwcAgM8hSLVCiVEheuOOgfrLry5VcKCfPtufr+v/vkkvbTjInX0AADQjk2EYLb5fyGazKTIyUlarVREREZ4up0U5mFesWW9/45gFPTEyWLOv76Yb+yXJz8/k2eKAFoJrCAB3capFav78+erYsaOCg4M1cOBAffHFFw1uu2jRIplMpjpLcHCw0wWjrs4x4Vpx92A9M6GPEiODdcJartnLvtHo5zfrs/15ni4PAACv1uggtXTpUs2aNUuPPPKIduzYoT59+mjkyJHKzc1tcJ+IiAhlZWU5liNHjjSpaNTl52fSTWnJWvfgMP3xl91lMQfo+yybfvc/X2jK619ozwmbp0sEAMArNTpI/f3vf9eMGTN0++23q2fPnnr55ZcVGhqq119/vcF9TCaT4uPjHUtcXFyTikb9ggP9dfewztr40LW6fUhHBfqbtOmHPI1+/jPNfvsbHTtV6ukSAQDwKo0KUpWVlfrqq680YsSInz7Az08jRozQ1q1bG9yvuLhYHTp0UEpKisaNG6fdu3ef9zgVFRWy2Wx1Fly86LAgPTL2Un066xqN7p0gw5CW7zima57eoPvf+lrfHWN2dAAAXKFRQSo/P1/V1dXntCjFxcUpOzu73n26deum119/Xe+//77efPNN2e12DR48WMeOHWvwOBkZGYqMjHQsKSkpjSkTZ3RoG6b5ky/XinsGa0iXtqq2G/rgmxMa+8Jm3bpgq9btzWEOKgAAmqBRd+2dOHFCSUlJ2rJliwYNGuRY/9BDD2njxo36/PPPL/gZVVVV6tGjhyZNmqTHH3+83m0qKipUUVHheG2z2ZSSksIdN02067hVr332o/7v2yxVnwlQXWLDNePqVI3rm6TgQH8PVwi4B3ftAXCXRrVItWvXTv7+/srJyamzPicnR/Hx8Rf1GYGBgerXr58OHDjQ4DZms1kRERF1FjRdr6RIzbu1nz576Fr9x9BOCjcH6EBusf64/Dtd9dQ6Pbd2v3Js5Z4uEwCAVqNRQSooKEhpaWlau3atY53dbtfatWvrtFCdT3V1tb777jslJCQ0rlK4TGJUiP7zhh7aOuc6/dfoHkqMDFZ+caX+vuYHDf7rOk1b9KU+3pWtKh6MDADAeQU0dodZs2Zp6tSp6t+/vwYMGKB58+appKREt99+uyRpypQpSkpKUkZGhiTpscce05VXXqkuXbqosLBQTz/9tI4cOaLp06e79pug0SzBgZp+dSdNHdxRH32XpTe3HdGXh09p7d5crd2bq3bhQbqxX5Ju6Z+irnEWT5cLAECL0+ggNXHiROXl5enhhx9Wdna2+vbtq48//tgxAP3o0aPy8/upoevUqVOaMWOGsrOz1aZNG6WlpWnLli3q2bOn674FmiTQ30/j+iZpXN8kHcwr1rLtx7R8xzHlFVXo1c8O6dXPDqlf+yjd0j9FY3onyBIc6OmSAQBoEXhEDOpVVW3Xxn15Wro9U+v25joGpwcF+GnYJTEa3TtBw3vEKdzc6CwONDuuIQDchSCFC8otKteKHcf19vZMHcwrcaw3B/jp2m6xGt07Qdd1j1UYoQotFNcQAO5CkMJFMwxDe7OL9OG3WVr57QkdLvhppvTgQD9d1z1Woy9L1DXdYmipQovCNQSAuxCk4BTDMLQny6YPv83Sh99l6chZoSrI308DO0VrRI84De8Rq+Q2oR6sFOAaAsB9CFJoMsMwtPuETSu/zdLHu7LqtFRJUrc4i4b3iNXwHnHqmxIlfz+ThyqFr+IaAsBdCFJwKcMwdDCvROv25ujT73O1/fBJnf0UmrZhQbrmkhhdfUk7DenSTrGWYM8VC5/BNQSAuxCk4FaFpZXasC9Pa/fmasO+XBWVn67zfvd4i4ZeEqOrurTTgNRoHlMDt+AaAsBdCFJoNlXVdn15+KQ2/ZCvzQfytOu4rc77QQF+GtAxWld1bafBnduqZ0KEAvwbNfk+UC+uIQDchSAFjykortC/Dxbosx/ytPlAvrKsdZ/zF24OUP+ObXRlp7YamBqtXkmRCiRYwQlcQwC4C0EKLULt2KrP9udp8/58fXH45DndgKFB/krrUDdY0RWIi8E1BIC7EKTQIlXbDX2fZdO2Hwv0+aGT+uLQSVnLqupsE+Tvp15JEUrr0EZpHdro8g5tGLyOenENAeAuBCm0Cna7oX05RTXB6seT2n7kpPKLK8/Zrn10qCNU9UuJUrd4C92B4BoCwG0IUmiVDMPQ0ZOl+urIKceyL6dIP//XbA7w06WJEeqdHKW+KVHqkxKljm1DZTIxl5Uv4RoCwF0IUvAatvIqfZNZqO2HT2nH0VP6JrNQtp+Ns5KkiOAA9UmJUu/kSPVKjNSliZFKiQ4hXHkxriEA3IUgBa9lGIYOF5Tqm8xCfXOsUN9kFmr3CZsqTtvP2TYiOEA9EyNqglVSzc9OMeHMwu4luIYAcBeCFHxKVbVd+7KL9O0xq749VhOs9mUXqbL63HAVHOinbvER6hFvUfd4i7onRKh7vEVRoUEeqBxNwTUEgLsQpODzKk/bdSC3WLtOWLXnhE27jlu1J8um0srqerdPiAyuE6y6xlrUKSaMqRhaMK4hANyFIAXUo9pu6FB+ifZm27Q3q6jmZ3aRjp0qq3d7P5PUsW2YusSG65I4i7rGhROwWhCuIQDchSAFNIKtvEo/ZBfp++wi7c2qCVc/5BSdM3loLT+T1KFtmDrHhKlzTLg6nfnZOSZcbcLoImwuXEMAuAtBCmgiwzCUW1Sh/TnF+iGnSPtzixy/13fXYK3osCB1jglTp3bhSo0JU8e2YUptF6YObUNpxXIxriEA3IUgBbhJbcA6mFusg3nFOphXooN5xfoxr0THC+vvIpQkk0lKiAhWx3Zh6tguTKlta352aBuqlDahCgkiZDUW1xAA7kKQAjygtPK0fswr0Y/5JTqYW6zDBSU6nF+iQ/kl523FkqRYi1kd2oaqfXSY2keH1vzeNlTto0PVNiyI+bDqwTUEgLsQpIAWxDAMnSyp1OG
CEh3KL60JVwUlOlJQoiMFpQ2OxaoVEuiv5DYhSm4TopTo0DO/17RkJbcJUVRooE8GLa4hANwlwNMFAPiJyWRS23Cz2oabldYh+pz3C0srdaSgVEdP1iy1AevoyVJl28pVVlWt/bnF2p9bXO/nhwb5KzEqRElRIUpqc+ZnVEjNujYhirOYFcCzCQHgohGkgFYkKjRIUaFB6pMSdc57laftOlFYpmOnypR5qlSZJ0vP+r1M+cUVKq2s1oHcYh1oIGj5maRYS7ASooKVGBmihMhgxUcGKzGq5veEyBDFWMzM+A4AZxCkAC8RFODnGKBen/Kqap0oLNPxwrKan6fKdLywXMcLS3WisFxZ1jJVVRvKtpUr21aur1VY7+f4+5kUE25WXGSw4iPMio8IPvN7sOP3WItZ4eYAn+xGBOBbCFKAjwgO9FenmHB1igmv9/1qu6GC4gqdsJYrq7BMWdaacHXCWq7sM+tyiipUbf8pbH1znuOFBvkr1mJWbERNsIo78zM2wqxYS7BiLGbFhJt9dtwWAO9AkAIgqaalKTYiWLERwepbT9ehVBO28osrlG2tCVI5tvJzfs+1Vaio4rRKK6t1uKBUhwtKz3vcQP+aFq4Yy1nLmXFi7cLNahcepHYWs9qFmRURQisXgJaFIAXgovn7mRQXEay4iGD1Oc92pZWnlWurUG5RhXJs5cotqlDumZ85tnLlFVUor7hChaVVqqo2dMJarhPW8gseP8jfT23Dg9Q2PEgd24bphcmXu+7LAYATCFIAXC40KEAd2wU0OF6rVsXpahUUVyq3qKImXNUuxeUqKK5UfnGF8osrlV9U08pVWW0/0+VY3uBDpQGgORGkAHiMOaBmOobEqJALblteVa2CkkoVFFcov7iiGaoDgAsjSAFoFYID/R3zXgFAS8HMewAAAE4iSAEAADiJIAUAAOAkghQAAICTCFIAAABOIkgBAAA4iSAFAADgJIIUAACAkwhSAAAATiJIAQAAOKlVPCLGMAxJks1m83AlAFqj2mtH7bUEAFylVQSpoqIiSVJKSoqHKwHQmhUVFSkyMtLTZQDwIiajFfwvmt1u14kTJ2SxWGQymS64vc1mU0pKijIzMxUREdEMFfoGzqt7cF5d7+fn1DAMFRUVKTExUX5+jGgA4DqtokXKz89PycnJjd4vIiKC/zC5AefVPTivrnf2OaUlCoA78L9mAAAATiJIAQAAOMkrg5TZbNYjjzwis9ns6VK8CufVPTivrsc5BdBcWsVgcwAAgJbIK1ukAAAAmgNBCgAAwEkEKQAAACcRpAAAAJxEkAIAAHCSVwap+fPnq2PHjgoODtbAgQP1xRdfeLqkVmXTpk0aO3asEhMTZTKZ9N5779V53zAMPfzww0pISFBISIhGjBih/fv3e6bYViIjI0NXXHGFLBaLYmNjNX78eO3bt6/ONuXl5UpPT1fbtm0VHh6um266STk5OR6quHV46aWX1Lt3b8cM5oMGDdKqVasc73NOAbib1wWppUuXatasWXrkkUe0Y8cO9enTRyNHjlRubq6nS2s1SkpK1KdPH82fP7/e9//7v/9bzz33nF5++WV9/vnnCgsL08iRI1VeXt7MlbYeGzduVHp6urZt26Y1a9aoqqpK119/vUpKShzbPPDAA/q///s/LVu2TBs3btSJEyf061//2oNVt3zJycn661//qq+++krbt2/Xddddp3Hjxmn37t2SOKcAmoHhZQYMGGCkp6c7XldXVxuJiYlGRkaGB6tqvSQZK1ascLy22+1GfHy88fTTTzvWFRYWGmaz2Xjrrbc8UGHrlJuba0gyNm7caBhGzTkMDAw0li1b5tjm+++/NyQZW7du9VSZrVKbNm2M1157jXMKoFl4VYtUZWWlvvrqK40YMcKxzs/PTyNGjNDWrVs9WJn3OHTokLKzs+uc48jISA0cOJBz3AhWq1WSFB0dLUn66quvVFVVVee8du/eXe3bt+e8XqTq6motWbJEJSUlGjRoEOcUQLMI8HQBrpSfn6/q6mrFxcXVWR8XF6e9e/d6qCrvkp2dLUn1nuPa93B+drtdM2fO1JAhQ9SrVy9JNec1KChIUVFRdbblvF7Yd999p0GDBqm8vFzh4eFasWKFevbsqZ07d3JOAbidVwUpoDVIT0/Xrl27tHnzZk+X4hW6deumnTt3ymq16p133tHUqVO1ceNGT5cFwEd4Vddeu3bt5O/vf85dOTk5OYqPj/dQVd6l9jxyjp1z7733auXKlVq/fr2Sk5Md6+Pj41VZWanCwsI623NeLywoKEhdunRRWlqaMjIy1KdPH/3jH//gnAJoFl4VpIKCgpSWlqa1a9c61tntdq1du1aDBg3yYGXeIzU1VfHx8XXOsc1m0+eff845Pg/DMHTvvfdqxYoVWrdunVJTU+u8n5aWpsDAwDrndd++fTp69CjntZHsdrsqKio4pwCahdd17c2aNUtTp05V//79NWDAAM2bN08lJSW6/fbbPV1aq1FcXKwDBw44Xh86dEg7d+5UdHS02rdvr5kzZ+qJJ55Q165dlZqaqrlz5yoxMVHjx4/3XNEtXHp6uhYvXqz3339fFovFMUYnMjJSISEhioyM1LRp0zRr1ixFR0crIiJC9913nwYNGqQrr7zSw9W3XHPmzNGoUaPUvn17FRUVafHixdqwYYNWr17NOQXQPDx926A7PP/880b79u2NoKAgY8CAAca2bds8XVKrsn79ekPSOcvUqVMNw6iZAmHu3LlGXFycYTabjeHDhxv79u3zbNEtXH3nU5KxcOFCxzZlZWXGPffcY7Rp08YIDQ01brzxRiMrK8tzRbcCd9xxh9GhQwcjKCjIiImJMYYPH2588sknjvc5pwDczWQYhuGhDAcAANCqedUYKQAAgOZEkAIAAHASQQoAAMBJBCkAAAAnEaQAAACcRJACAABwEkEKAADASQQpAAAAJxGkAAAAnESQAgAAcBJBCgAAwEn/Hy0V00jT1u5WAAAAAElFTkSuQmCC\n"
+ },
+ "metadata": {}
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# **Predictions of Retrival Model**\n",
+ "Two-Tower model is trained, we need to use it to generate candidates.\n",
+ "\n",
+ "We can implement inference pipeline using three steps:\n",
+ "1. Indexing: We can run the Item Tower once for all available ads to generate their embeddings.\n",
+ "2. Query Encoding: When a user arrives, we pass their features through the User Tower to generate a User Embedding.\n",
+ "3. Nearest Neighbor Search: We search the index to find the Ad Embeddings closest to the User Embedding (highest dot product).\n",
+ "\n",
+ "Keras-RS [BruteForceRetrieval layer](https://keras.io/keras_rs/api/retrieval_layers/brute_force_retrieval/) calculates dot product between the user and every single item in the index to find exact top-K matches"
+ ],
+ "metadata": {
+ "id": "_o0ILppGcknp"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "USER_CATEGORICAL = [\"user_id\", \"gender\", \"city\", \"country\"]\n",
+ "CONTINUOUS_FEATURES = [\"time_on_site\", \"internet_usage\", \"area_income\", \"Age\"]\n",
+ "USER_FEATURES = USER_CATEGORICAL + CONTINUOUS_FEATURES\n",
+ "\n",
+ "class BruteForceRetrievalWrapper:\n",
+ " def __init__(self, model, ads_df, ad_features, user_features, k=10):\n",
+ " self.model, self.k = model, k\n",
+ " self.user_features = user_features\n",
+ " unique_ads = ads_df[ad_features].drop_duplicates(\"ad_id\").reset_index(drop=True)\n",
+ " self.ids = unique_ads[\"ad_id\"].values\n",
+ " self.topic_map = dict(zip(unique_ads[\"ad_id\"], unique_ads[\"ad_topic\"]))\n",
+ " ad_inputs = {\"ad_id\": tf.constant(self.ids.astype(str)),\n",
+ " \"ad_topic\": tf.constant(unique_ads[\"ad_topic\"].astype(str).values)\n",
+ " }\n",
+ " self.candidate_embs = model.ln_ad(model.ad_tower(ad_inputs))\n",
+ "\n",
+ " def query_batch(self, user_df):\n",
+ " inputs = {k: tf.constant(user_df[k].values.astype(float if k in CONTINUOUS_FEATURES else str))\n",
+ " for k in self.user_features if k in user_df.columns\n",
+ " }\n",
+ " u_emb = self.model.ln_user(self.model.user_tower(inputs))\n",
+ " scores = tf.linalg.matmul(u_emb, self.candidate_embs, transpose_b=True)\n",
+ " top_scores, top_indices = tf.math.top_k(scores, k=self.k)\n",
+ " return top_scores.numpy(), top_indices.numpy()\n",
+ "\n",
+ " def decode_results(self, scores, indices):\n",
+ " results = []\n",
+ " for row_scores, row_indices in zip(scores, indices):\n",
+ " retrieved_ids = self.ids[row_indices]\n",
+ " results.append([\n",
+ " {\"ad_id\": aid, \"ad_topic\": self.topic_map[aid], \"score\": float(s)}\n",
+ " for aid, s in zip(retrieved_ids, row_scores)\n",
+ " ])\n",
+ " return results\n",
+ "\n",
+ "retrieval_engine = BruteForceRetrievalWrapper(model=retrieval_model,ads_df=ads_df,ad_features=[\"ad_id\", \"ad_topic\"],\n",
+ " user_features=USER_FEATURES, k=10)\n",
+ "sample_user = pd.DataFrame([x_test.iloc[0]])\n",
+ "scores, indices = retrieval_engine.query_batch(sample_user)\n",
+ "top_ads = retrieval_engine.decode_results(scores, indices)[0]"
+ ],
+ "metadata": {
+ "id": "QrHPBLIml8Si"
+ },
+ "execution_count": 51,
+ "outputs": []
+ },
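+ {
+ "cell_type": "markdown",
+ "source": [
+ "The wrapper above performs the exact top-K search manually with `tf.linalg.matmul` and `tf.math.top_k`, which keeps the ad IDs and topics attached to the scores. For reference, the same lookup could also be expressed with the Keras-RS layer linked earlier. The next cell is a minimal, hedged sketch of that approach: the constructor arguments (`candidate_embeddings`, `candidate_ids`, `k`, `return_scores`) are assumptions based on the linked documentation, not code that was run for this example."
+ ],
+ "metadata": {}
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import keras_rs\n",
+ "\n",
+ "# Hedged sketch: hand the pre-computed ad embeddings to the Keras-RS layer\n",
+ "# instead of calling matmul/top_k manually. Argument names are assumed from\n",
+ "# the BruteForceRetrieval documentation linked above.\n",
+ "brute_force = keras_rs.layers.BruteForceRetrieval(\n",
+ "    candidate_embeddings=retrieval_engine.candidate_embs,\n",
+ "    candidate_ids=tf.constant(retrieval_engine.ids.astype(str)),\n",
+ "    k=10,\n",
+ "    return_scores=False,\n",
+ ")\n",
+ "\n",
+ "# Encode one user exactly as query_batch() does, then look up the closest ads.\n",
+ "user_df = pd.DataFrame([x_test.iloc[0]])\n",
+ "user_inputs = {\n",
+ "    k: tf.constant(user_df[k].values.astype(float if k in CONTINUOUS_FEATURES else str))\n",
+ "    for k in USER_FEATURES\n",
+ "}\n",
+ "u_emb = retrieval_model.ln_user(retrieval_model.user_tower(user_inputs))\n",
+ "top_ad_ids = brute_force(u_emb)  # top-k ad IDs for this user"
+ ],
+ "metadata": {},
+ "execution_count": null,
+ "outputs": []
+ },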
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# **Implementation of Ranking Model**\n",
+ "Retrieval model only calculates a simple similarity score (Dot Product). It doesn't account for complex feature interactions.\n",
+ "So we need to build ranking model after words retrival model.\n",
+ "\n",
+ "**Architecture**\n",
+ "1. **Feature Extraction:** We reuse the trained User Tower and Ad Tower from the Retrieval stage. We freeze these towers (trainable = False) so their weights don't change.\n",
+ "2. **Interaction:** Instead of just a dot product, we concatenate three inputs- The User EmbeddingThe Ad EmbeddingThe Dot Product (Similarity)\n",
+ "3. **Scorer(MLP):** These concatenated inputs are fed into a Multi-Layer Perceptron—a stack of Dense layers. This network learns the non-linear relationships between the user and the ad.\n",
+ "4. **Output:** The final layer uses a Sigmoid activation to output a single probability between 0.0 and 1.0 (Likelihood of a Click)."
+ ],
+ "metadata": {
+ "id": "xQtLgCfyeqYS"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "retrieval_model.trainable = False\n",
+ "def create_ranking_ds(df):\n",
+ " inputs = {\"user\": dict_to_tensor_features(df[USER_FEATURES], continuous_features),\n",
+ " \"positive_ad\": dict_to_tensor_features(df[AD_FEATURES], continuous_features)\n",
+ " }\n",
+ " return tf.data.Dataset.from_tensor_slices((inputs, df[\"Clicked on Ad\"].values.\n",
+ " astype('float32'))).shuffle(10000).batch(256).prefetch(tf.data.AUTOTUNE)"
+ ],
+ "metadata": {
+ "id": "_j2PAllRvDOb"
+ },
+ "execution_count": 39,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "ranking_train_dataset= create_ranking_ds(x_train)\n",
+ "ranking_test_dataset = create_ranking_ds(x_test)"
+ ],
+ "metadata": {
+ "id": "uhKCsNa8v0Uo"
+ },
+ "execution_count": 40,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "class RankingModel(keras.Model):\n",
+ " def __init__(self, retrieval_model, **kwargs):\n",
+ " super().__init__(**kwargs)\n",
+ " self.retrieval = retrieval_model\n",
+ " self.mlp = keras.Sequential([\n",
+ " layers.Dense(256, activation=\"relu\"), layers.Dropout(0.2),\n",
+ " layers.Dense(128, activation=\"relu\"), layers.Dropout(0.2),\n",
+ " layers.Dense(64, activation=\"relu\"),\n",
+ " layers.Dense(1, activation=\"sigmoid\")\n",
+ " ])\n",
+ "\n",
+ " def call(self, inputs):\n",
+ " u_emb, ad_emb, dot = self.retrieval.get_embeddings(inputs)\n",
+ " return self.mlp(keras.ops.concatenate([u_emb, ad_emb, dot], axis=-1))"
+ ],
+ "metadata": {
+ "id": "mQCXdFFqvDRC"
+ },
+ "execution_count": 41,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "ranking_model = RankingModel(retrieval_model)\n",
+ "ranking_model.compile(optimizer=keras.optimizers.Adam(1e-4), loss=\"binary_crossentropy\", metrics=[\"AUC\", \"accuracy\"])\n",
+ "history1 = ranking_model.fit(ranking_train_dataset, epochs=20)"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "w5JPRvJ_vDUS",
+ "outputId": "cdc8c321-8722-48a9-f6e3-8516c9f5caa1"
+ },
+ "execution_count": 42,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Epoch 1/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m6s\u001b[0m 75ms/step - AUC: 0.7137 - accuracy: 0.4999 - loss: 0.6688\n",
+ "Epoch 2/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 43ms/step - AUC: 0.8871 - accuracy: 0.6535 - loss: 0.6237\n",
+ "Epoch 3/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 51ms/step - AUC: 0.9528 - accuracy: 0.8104 - loss: 0.5837\n",
+ "Epoch 4/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 27ms/step - AUC: 0.9704 - accuracy: 0.8531 - loss: 0.5561 \n",
+ "Epoch 5/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 23ms/step - AUC: 0.9826 - accuracy: 0.9023 - loss: 0.5173\n",
+ "Epoch 6/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 47ms/step - AUC: 0.9875 - accuracy: 0.9188 - loss: 0.4851\n",
+ "Epoch 7/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 58ms/step - AUC: 0.9866 - accuracy: 0.9337 - loss: 0.4533\n",
+ "Epoch 8/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 29ms/step - AUC: 0.9914 - accuracy: 0.9448 - loss: 0.4224 \n",
+ "Epoch 9/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 23ms/step - AUC: 0.9903 - accuracy: 0.9441 - loss: 0.3910\n",
+ "Epoch 10/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 40ms/step - AUC: 0.9910 - accuracy: 0.9502 - loss: 0.3671\n",
+ "Epoch 11/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 20ms/step - AUC: 0.9938 - accuracy: 0.9616 - loss: 0.3386\n",
+ "Epoch 12/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 22ms/step - AUC: 0.9922 - accuracy: 0.9628 - loss: 0.3158\n",
+ "Epoch 13/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 22ms/step - AUC: 0.9940 - accuracy: 0.9676 - loss: 0.2864\n",
+ "Epoch 14/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 24ms/step - AUC: 0.9948 - accuracy: 0.9657 - loss: 0.2607\n",
+ "Epoch 15/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 14ms/step - AUC: 0.9951 - accuracy: 0.9685 - loss: 0.2452\n",
+ "Epoch 16/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 15ms/step - AUC: 0.9943 - accuracy: 0.9689 - loss: 0.2243\n",
+ "Epoch 17/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 13ms/step - AUC: 0.9945 - accuracy: 0.9701 - loss: 0.2068\n",
+ "Epoch 18/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 13ms/step - AUC: 0.9942 - accuracy: 0.9682 - loss: 0.1947\n",
+ "Epoch 19/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 12ms/step - AUC: 0.9955 - accuracy: 0.9719 - loss: 0.1764\n",
+ "Epoch 20/20\n",
+ "\u001b[1m3/3\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m0s\u001b[0m 15ms/step - AUC: 0.9943 - accuracy: 0.9725 - loss: 0.1623\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "pd.DataFrame(history1.history).plot(subplots=True, layout=(1, 3), figsize=(12, 4), title=\"Ranking Model Metrics\")\n",
+ "plt.show()"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 408
+ },
+ "id": "WoodoIYnFgsx",
+ "outputId": "ee8c8243-6c85-4831-f44a-167d1ecf7b06"
+ },
+ "execution_count": 43,
+ "outputs": [
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ ""
+ ],
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAA94AAAGHCAYAAABGc4o9AAAAOnRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjAsIGh0dHBzOi8vbWF0cGxvdGxpYi5vcmcvlHJYcgAAAAlwSFlzAAAPYQAAD2EBqD+naQAAh+xJREFUeJzs3Xt8zvX/x/HHdV3btZMdzGZzGHM+syJLiGq1kEonhwr74vtLKVm+scKkpBPpoFYiOn3TwbcTkRaViCIkEjkfNmOzsdnpuq7fH7OLZdj5s8Pzfrt9bte1z/U5PK8LH3td7/fn/TY5HA4HIiIiIiIiIlIuzEYHEBEREREREanOVHiLiIiIiIiIlCMV3iIiIiIiIiLlSIW3iIiIiIiISDlS4S0iIiIiIiJSjlR4i4iIiIiIiJQjFd4iIiIiIiIi5UiFt4iIiIiIiEg5UuEtIiIiIiIiUo5UeIuISLUzfPhwatWqdcntevfuTe/evcs/UCUydepUTCZTifYdPnw4oaGhZRuoAtXEP28REakcVHiLiEiZW7BgASaTybm4uLjQoEEDhg8fzqFDh4yOVymEhoZiMpmIiIgo9PW5c+c6P79ff/21gtOVTu/evTGZTLRo0aLQ11esWOF8b5988kmxj3/48GGmTp3Kpk2bSplURESkYrgYHUBERKqvadOm0aRJEzIzM/n5559ZsGABq1evZuvWrbi7uxsdj2+++cbQ87u7u7Ny5UoSEhIIDg4u8Nr777+Pu7s7mZmZBqUrHXd3d3bt2sX69evp2rVrgddK+94OHz7ME088QWhoKGFhYUXez+g/bxERqbnU4i0iIuWmT58+3HPPPYwcOZK33nqL8ePH8/fff/PFF18YHQ0Aq9WK1Wo17Pzdu3enVq1aLFq0qMD6gwcP8uOPP9KvXz+DkpVes2bNaNWqFf/9738LrM/MzOR///tfhb63jIwMwPg/bxERqblUeIuISIXp2bMnAH///bdzXXZ2NlOmTKFz5874+vri5eVFz549WblyZYF99+7di8lk4oUXXuDNN9+kWbNmuLm5ccUVV/DLL79c8tybNm0iMDCQ3r17c+rUKeD8e35XrVqFyWTio48+Yvr06TRs2BB3d3euu+46du3add4x58yZQ9OmTfHw8KBr1678+OOPxbqP2N3dndtuu40PPvigwPr//ve/1K5dm8jIyEL3++677+jZsydeXl74+flxyy23sH379vO2W716NVdccQXu7u40a9aMN95444JZ3nvvPTp37oyHhwf+/v4MGjSIAwcOFOl9XMjgwYNZtGgRdrvdue7LL78kIyODu+66q9B9Dh06xL/+9S+CgoJwc3OjXbt2zJ8/3/n6qlWruOKKKwCIiopydllfsGABkPdn2r59ezZs2MDVV1+Np6cnjz32mPO1f/7ZZGZmMnXqVFq2bIm7uzv16tXjtttuK/B39MMPP6Rz5854e3vj4+NDhw4deOmll0r12YiISM2iruYiIlJh9u7dC0Dt2rWd69LS0njrrbcYPHgwo0aN4uTJk8ybN4/IyEjWr19/XlfiDz74gJMnT/J///d/mEwmnnvuOW677TZ2796Nq6troef95ZdfiIyMpEuXLnz++ed4eHhcNOczzzyD2Wxm/PjxpKam8txzz3H33Xezbt065zavv/46Y8aMoWfPnowbN469e/dy6623Urt2bRo2bFjkz2TIkCHccMMN/P333zRr1sz5Hu+4445C38+3335Lnz59aNq0KVOnTuX06dO88sordO/enY0bNzoHP/v999+54YYbCAwMZOrUqeTm5hIbG0tQUNB5x5w+fTqTJ0/mrrvuYuTIkSQlJfHKK69w9dVX89tvv+Hn51fk9/PP9zZ16lRWrVrFtdde63xv1113HXXr1j1v+8TERK688kpMJhNjxowhMDCQr7/+mhEjRpCWlsbDDz9MmzZtmDZtGlOmTOHf//6388ucq666ynmc48eP06dPHwYNGsQ999xT6HsGsNls3HTTTcTHxzNo0CDGjh3LyZMnWbFiBVu3bqVZs2asWLGCwYMHc9111/Hss88CsH37dn766SfGjh1bos9FRERqIIeIiEgZe/vttx2A49tvv3UkJSU5Dhw44Pjkk08cgYGBDjc3N8eBAwec2+bm5jqysrIK7J+SkuIICgpy/Otf/3Ku27NnjwNw1KlTx5GcnOxc//nnnzsAx5dffulcN2zYMIeXl5fD4XA4Vq9e7fDx8XH069fPkZmZWeA8vXr1cvTq1cv588qVKx2Ao02bNgUyvfTSSw7A8fvvvzscDocjKyvLUadOHccVV1zhyMnJcW63YMECB1DgmBfSuHFjR79+/Ry5ubmO4OBgx5NPPulwOByObdu2OQDH999/7/wcf/nlF+d+YWFhjrp16zqOHz/uXLd582aH2Wx2DB061Lnu1ltvdbi7uzv27dvnXLdt2zaHxWJxnPvf/969ex0Wi8Uxffr0Avl+//13h4uLS4H1w4YNczRu3PiS761Xr16Odu3aORwOh6NLly6OESNGOByOvD9Xq9XqWLhwofOz/vjjj537jRgxwlGvXj3HsWPHChxv0KBBDl9fX0dGRobD4XA4fvnlFwfgePvttws9N+CIi4sr9LVz/2zmz5/vAByzZs06b1u73e5wOByOsWPHOnx8fBy5ubmXfN8iIiIXoq7mIiJSbiIiIggMDCQkJIQ77rgDLy8vvvjiiwItwhaLxXnfrd1uJzk5mdzcXLp06cLGjRvPO+bAgQMLtJjnt3ju3r37vG1XrlxJZGQk1113HYsXL8bNza1IuaOiogrcC/zPc/z6668cP36cUaNG4eJytvPY3XffXSBbUVgsFu666y7nvdDvv/8+ISEhznOe68iRI2zatInhw4fj7+/vXN+xY0euv/56li5dCuS15C5fvpxbb72VRo0aObdr06bNed3XFy9ejN1u56677uLYsWPOJTg4mBYtWpzX5b+4hgwZwuLFi8nOzuaTTz7BYrEwYMCA87ZzOBx8+umn9O/fH4fDUSBLZGQkqamphf59KIybmxtRUVGX3O7TTz8lICCABx988LzX8qdc8/PzIz09nRUrVhTp3CIiIoVR4S0iIuVmzpw5rFixgk8++YS+ffty7NixQovfhQsX0rFjR9zd3alTpw6BgYEsWbKE1NTU87Y9t5CEs93WU1JSCqzPzMykX79+XHbZZXz00UfFGlTrUufYt28fAM2bNy+wnYuLS4nmuR4yZAjbtm1j8+bNfPDBBwwaNKjQubbzz9uqVavzXmvTpg3Hjh0jPT2dpKQkTp8+Xeh0Xv/cd+fOnTgcDlq0aEFgYGCBZfv27Rw9erTY7+dcgwYNIjU1la+//pr333+fm266CW9v7/O2S0pK4sSJE7z55pvn5cgvoouapUGDBkX68/77779p1apVgS9P/un++++nZcuW9OnTh4YNG/Kvf/2LZcuWFSmHiIhIPt3jLSIi5aZr16506dIFgFtvvZUePXowZMgQduzYQa1atYC8Qb2GDx/Orbfeyn/+8x/q1q2LxWJ
hxowZBQa4ymexWAo9l8PhKPCzm5sbffv25fPPP2fZsmXcdNNNRc5d1HOUlfDwcJo1a8bDDz/Mnj17GDJkSLmcpzB2ux2TycTXX39d6PvO/3MqqXr16tG7d29mzpzJTz/9xKeffnrBHAD33HMPw4YNK3Sbjh07Fumcl7qHvzjq1q3Lpk2bWL58OV9//TVff/01b7/9NkOHDmXhwoVldh4REaneVHiLiEiFyC+mr7nmGl599VUmTpwIwCeffELTpk1ZvHhxgVbe2NjYUp3PZDLx/vvvc8stt3DnnXfy9ddfF3m08Utp3LgxALt27eKaa65xrs/NzWXv3r1FLhDPNXjwYJ566inatGlzwbmp88+7Y8eO8177888/CQgIwMvLC3d3dzw8PNi5c+d52/1z32bNmuFwOGjSpAktW7Ysdu6iGDJkCCNHjsTPz4++ffsWuk1gYCDe3t7YbDYiIiIuerzCegOURLNmzVi3bh05OTkXHJgP8qYh69+/P/3798dut3P//ffzxhtvMHny5PN6PYiIiBRGXc1FRKTC9O7dm65duzJ79mwyMzOBs63L57Ymr1u3jrVr15b6fFarlcWLF3PFFVfQv39/1q9fX+pjAnTp0oU6deowd+5ccnNznevff//987q8F9XIkSOJjY1l5syZF9ymXr16hIWFsXDhQk6cOOFcv3XrVr755htnUWuxWIiMjOSzzz5j//79zu22b9/O8uXLCxzztttuw2Kx8MQTT5zXou9wODh+/HiJ3s+57rjjDmJjY3nttdcu2AXcYrFw++238+mnn7J169bzXk9KSnI+9/LyAijwGZTE7bffzrFjx3j11VfPey3/s/jn+zebzc4vVrKyskp1fhERqTnU4i0iIhXqP//5D3feeScLFizgvvvu46abbmLx4sUMGDCAfv36sWfPHuLi4mjbtq1zvu3S8PDw4KuvvuLaa6+lT58+fP/997Rv375Ux7RarUydOpUHH3yQa6+9lrvuuou9e/eyYMECmjVrVqIW2caNGzN16tRLbvf888/Tp08funXrxogRI5zTifn6+hbY/4knnmDZsmX07NmT+++/n9zcXF555RXatWvHli1bnNs1a9aMp556ipiYGOeUaN7e3uzZs4f//e9//Pvf/2b8+PHFfj/n+me2C3nmmWdYuXIl4eHhjBo1irZt25KcnMzGjRv59ttvSU5Odmb28/MjLi4Ob29vvLy8CA8Pp0mTJsXKNXToUN555x2io6NZv349PXv2JD09nW+//Zb777+fW265hZEjR5KcnMy1115Lw4YN2bdvH6+88gphYWG0adOmJB+HiIjUQGrxFhGRCnXbbbfRrFkzXnjhBWw2G8OHD+fpp59m8+bNPPTQQyxfvpz33nvPeW94WfDx8WH58uUEBwdz/fXXs2vXrlIfc8yYMbz88svs37+f8ePH8+OPP/LFF1/g5+eHu7t7GaQuXEREBMuWLaNOnTpMmTKFF154gSuvvJKffvqpQOHZsWNHli9fTmBgIFOmTGH+/Pk88cQThY4oPnHiRD799FPMZjNPPPEE48eP54svvuCGG27g5ptvLrf38k9BQUGsX7+eqKgoFi9ezJgxY3jppZdITk52zqEN4OrqysKFC7FYLNx3330MHjyY77//vtjns1gsLF26lMcff5x169bx8MMPM2vWLHx8fOjQoQOQd8+5u7s7r732Gvfffz8LFy5k4MCBfP3115jN+jVKRESKxuQor5FiREREahi73U5gYCC33XYbc+fONTqOiIiIVBL6qlZERKQEMjMzz7sn+p133iE5ObnMBnETERGR6kEt3iIiIiWwatUqxo0bx5133kmdOnXYuHEj8+bNo02bNmzYsKFY84aLiIhI9abB1UREREogNDSUkJAQXn75ZZKTk/H392fo0KE888wzKrpFRESkALV4i4iIiIiIiJQj3eMtIiIiIiIiUo5UeIuIiIiIiIiUIxXeIiIiIiIiIuVIhbeIiIiIiIhIOVLhLSIiIiIiIlKOVHiLiIiIiIiIlCMV3iIiIiIiIiLlSIW3iIiIiIiISDlS4S0iIiIiIiJSjlR4i4iIiIiIiJQjFd4iIiIiIiIi5UiFt4iIiIiIiEg5UuEtIiIiIiIiUo5UeIuIiIiIiIiUIxXeIiIiIiIiIuVIhbeIiIiIiIhIOVLhLSIiIiIiIlKOVHiLiIiIiIiIlCMV3iIiIiIiIiLlyMXoAGXBbrdz+PBhvL29MZlMRscRkSrI4XBw8uRJ6tevj9lcvb6T1DVSREpD10cRkcIV5/pYLQrvw4cPExISYnQMEakGDhw4QMOGDY2OUaZ0jRSRsqDro4hI4YpyfawWhbe3tzeQ94Z9fHwMTiMiVVFaWhohISHO60l1omukiJSGro8iIoUrzvWxWhTe+V2DfHx8dNEUkVKpjl0NdY0UkbKg66OISOGKcn2sXjfqiIiIiIiIiFQyKrxFREREREREylGxC+8ffviB/v37U79+fUwmE5999tkl91m1ahWXX345bm5uNG/enAULFpy3zZw5cwgNDcXd3Z3w8HDWr19f3GgiIiIiIiIilU6x7/FOT0+nU6dO/Otf/+K222675PZ79uyhX79+3Hfffbz//vvEx8czcuRI6tWrR2RkJACLFi0iOjqauLg4wsPDmT17NpGRkezYsYO6desW/12JiIiIiIjIRdlsNnJycoyOUam5urpisVhKfZxiF959+vShT58+Rd4+Li6OJk2aMHPmTADatGnD6tWrefHFF52F96xZsxg1ahRRUVHOfZYsWcL8+fOZOHFicSOKiIiIiIjIBTgcDhISEjhx4oTRUaoEPz8/goODSzXIZLmPar527VoiIiIKrIuMjOThhx8GIDs7mw0bNhATE+N83Ww2ExERwdq1aws9ZlZWFllZWc6f09LSyj64iIiIiIhINZRfdNetWxdPT89qOWtBWXA4HGRkZHD06FEA6tWrV+JjlXvhnZCQQFBQUIF1QUFBpKWlcfr0aVJSUrDZbIVu8+effxZ6zBkzZvDEE0+UW2YREREREZHqyGazOYvuOnXqGB2n0vPw8ADg6NGj1K1bt8TdzqvkqOYxMTGkpqY6lwMHDhgdSUREREREpNLLv6fb09PT4CRVR/5nVZr74cu9xTs4OJjExMQC6xITE/Hx8cHDwwOLxYLFYil0m+Dg4EKP6ebmhpubW7llFhERERERqc7UvbzoyuKzKvcW727duhEfH19g3YoVK+jWrRsAVquVzp07F9jGbrcTHx/v3EZEREREpDLIsWkEaBEpvmK3eJ86dYpdu3Y5f96zZw+bNm3C39+fRo0aERMTw6FDh3jnnXcAuO+++3j11Vd59NFH+de//sV3333HRx99xJIlS5zHiI6OZtiwYXTp0oWuXbsye/Zs0tPTnaOci0j15nA4yLE5yLbZyc49Z7HZyMq1k5VrJzPbRka2jYwcG6ezc/OeZ9vIzLE5n+evP51j43S2jevaBDG6dzOj356ISOFyTsPR7ZDwOyRuhaQdYHEFd9+zi5vPOT/7nfP8zHoXd3DYITP17JKVVvDnAs
uZ17yD4Y55Rn8CVYrD4eD97e/z1u9vMf/G+TT1bWp0JJEap3fv3oSFhTF79myjoxRbsQvvX3/9lWuuucb5c3R0NADDhg1jwYIFHDlyhP379ztfb9KkCUuWLGHcuHG89NJLNGzYkLfeess5lRjAwIEDSUpKYsqUKSQkJBAWFsayZcvOG3BNRErG4XDkFa8FilQbGdm5ZwrZvAI2J7/wtTnIzrU7f86x5RW/5/6cbbOTY3Ngs+ctdoeDXJsDm8OB3e4g98y6/Ndt9rzXbHZHgeI668wxy0NogFe5HFdEpNhOJkLi73lFdsLWvMfjO/OK5tIwu4K9BC2wtZuU7rw1kMlkYt2RdRzPPM7sDbN5+dqXjY4kIlVIsQvv3r1743A4Lvj6ggULCt3nt99+u+hxx4wZw5gxY4obR6TCOBwOMnPsnMzKIT3LRq6t8F+WLvSvw+EAm91Bju1s4ZpXxJ6z7h8/5xe7OWeK3H8Ww/mtxAULYgdZOXmtvucW2PYL/7OtdFzMJqwu5rzFkvfoabXgYXXBw9WMp9UFD6sFT1fLOevzn+c9elothPhr0BARqQA5mf9oaT4B6cfh6B9nC+30o4Xv61kHgjvkLXXb5q27YGv1P1q1HfaCRber50Vay//RUu4VWO4fS3U0rvM4fjz0IysPrGRD4gY6B3U2OpKIVBHlPriaSHmw2x1k5NjIyMolPdtGelYu6Vl53YyzbXbs57SuFtbi+s91p7NtnMrK5VRmLunZuZzMzOXUmWOeyn+ebcNWlarXC7BazHhYLecVqu6uFmeR63rOo5uLGVeLqcD6/O1czGZczCbMZpPz0WIyYTGDxWzGYgazyYSL2YzZDBaTCReLCavFcrawPud4bmfOYTFrsA8RKSOnT0DqATixH04cgFOJXPgr0guw284pqgvpxm3LKsJBTFCn+Zkiuz0Ed4Sg9nldvksyaI/DAdmn8vK4uOcV1BbX4h9HiqWpX1Nua3EbH//1MTN/ncn7fd/XAFUiBklJSWHs2LF8+eWXZGVl0atXL15++WVatGgBwL59+xgzZgyrV68mOzub0NBQnn/+efr27UtKSgpjxozhm2++4dSpUzRs2JDHHnusXG91VuEthrLbHaRkZHP0ZBZJJ7M4ejKLoyczSTqZxbFT2ZzMzCEjK68ozsg+W2RnZNsMze1lzSscL+RC/wlbzCaslrxC1tVSsJB1dTlnneVssetiMZ9TEOcVra4u+cc5t0g+u87N1ewspj2tLs4C28PVgqulSs4iKCJyPocDTqecKar3n1NgnymyT+yHrNQKCmMq2MLs4QcBLc8W2XXbgLUMb38xmcDNO2+RCnV/2P18tfsrfj/2O8v3LefG0BuNjiRSKg6Hg9O5pw05t4eLR4m/vBo+fDg7d+7kiy++wMfHhwkTJtC3b1+2bduGq6srDzzwANnZ2fzwww94eXmxbds2atWqBcDkyZPZtm0bX3/9NQEBAezatYvTp8v3M1DhLWXG4XCQnm3jZGYOJzNzOZmZQ1pmLmmn8x6TTmaRdKaoPnoyi6NpWRw7lUVuKVqRzSbwsrrg5eaCp1tey631TItp/pLX4nrOc0veo3MbkwkPq4VabnnH8XZ3wcvqQi13F2q5uRRc7+aCp6sFs1pkRUTOOn0C9q+Fvath30+QerBizptzOq/V91I8A8AvBPwagXc9MFmKdx6T6eLdtt19weoNZn2xWRMEeAQQ1S6K1za/xksbXuK6kOtwVW8DqcJO554m/INwQ869bsg6PF2Lf2tgfsH9008/cdVVVwHw/vvvExISwmeffcadd97J/v37uf322+nQoQMATZueHRBx//79XHbZZXTp0gWA0NDQ0r+ZS1DhLUWSnpXL74dS2XLwBDsSTpF6OsdZWOcX2qeyckvcFdvfy0pdbzcCzyx1vd0J9HbD+0zx63mmMPa0nvnZzYKX1QV3V7O6eImIVLSMZNi3Jq/I3rs67z7m4nbfLktedfOK6vzi2q8R+DY6u64sW5pFgGHthvHRXx9x8NRBFu1YxD1t7zE6kkiNsn37dlxcXAgPP/uFQZ06dWjVqhXbt28H4KGHHmL06NF88803REREcPvtt9OxY0cARo8eze23387GjRu54YYbuPXWW50FfHlR4S3nycq18eeRk2w5eILNB1PZfOAEu5JOcZEx9QqwmE34uLvg7e6Kt3teS7GPuysB3m7UPVNU5xfZdX3cCKjlpu7PIiKVWfrxs0X2vp8g8Q/OK7T9m0Foj7ylbhswVcB13WIFnwZg1UCKUrE8XT25P+x+pq2dRtyWOG5ufjM+Vh+jY4mUiIeLB+uGrDPs3OVl5MiRREZGsmTJEr755htmzJjBzJkzefDBB+nTpw/79u1j6dKlrFixguuuu44HHniAF154odzyqPCu4Wx2B7uOnmLzwRNsOXiCLQdT2X4kjRzb+VV2PV93Ojb0pX19X/xrWfFxFtau+Li74OOR97OHq0Wt0CIiVdmpJNi3Gvb+lFdoH912/jYBLaFx97xCu3F38KlX8TlFDDSg+QDe2/Yeu1N3M+/3eYzrPM7oSCIlYjKZStTd20ht2rQhNzeXdevWOVuqjx8/zo4dO2jbtq1zu5CQEO677z7uu+8+YmJimDt3Lg8++CAAgYGBDBs2jGHDhtGzZ0/+85//qPCWsnUqK5f47Yks2XKEn3YdI72Qgcpqe7rSsaEfnRr60rGhHx1DfKnr7W5AWhERKXcnE88U2meK7WM7zt8msA2Eds8rsht3B++gis8pUom4mF0Y13kcD373IO9te49BrQZRr5a+gBKpCC1atOCWW25h1KhRvPHGG3h7ezNx4kQaNGjALbfcAsDDDz9Mnz59aNmyJSkpKaxcuZI2bdoAMGXKFDp37ky7du3Iysriq6++cr5WXlR41xDpWbl89+dRlmw5wsodR8nKPTsHtafVQvsGvs4iOyzEj4a1Sz7CoIiIVHJph8+0Zp8pto/vOn+boPZnWrTPFNpeARWfU6SS69WwF12CuvBr4q+8uulVpveYbnQkkRrj7bffZuzYsdx0001kZ2dz9dVXs3TpUlxd8wY7tNlsPPDAAxw8eBAfHx9uvPFGXnzxRQCsVisxMTHs3bsXDw8PevbsyYcffliueU0OR1Hv3K280tLS8PX1JTU1FR8f3V+TLyM7l5V/JrHk98N89+dRMnPOFtuhdTy5qWN9bmwfTJt6Ppo3WWq86nwdqc7vTYoo9eCZ1uwz92gn7/7HBqa8aa9Ce55p0b4KPP0NiSqVT3W+hpTFe9t6bCuDlwzGhImP+n9Ea//WZZxSpGxlZmayZ88emjRpgru7erQWxYU+s+JcQ9TiXc2czraxckdey/Z3fx7ldM7ZbuSN63jSr0M9+nWsR9t6PmrRFhGprlL2nS2y966GE/sKvm4y580tnT8YWqMrwaO2MVlFqrj2Ae3pE9qHr/d+zcxfZ/Lm9W/qdywROY8K72ri173JLFizl/jtBYvtRv6e9OtYj34d6tGuvoptEZFqx+GAlD15Xced82gfKLiNyQL1w850He8JjcLz5p4WkTLx0OUP8
e3+b/n5yM+sObyG7g26Gx1JRCoZFd5VXGpGDs8s285/15/9JSvE34O+HepxU4f6tG+gYltEpNpJ3g17fjg7GNrJwwVfN7tA/cvz7s8O7QEh4eDmbUxWkRqgoXdDBrcezDvb3mHmhplcWe9KLGaL0bFEpBJR4V1FORwOvtpyhCe+3MaxU1kA3Nm5Ifd2a0yHBr4qtkVEqhO7HQ5tgB1L4M+l5486bnaFhl3ODoYWEg5WL2OyitRQ/+74b/6363/sTNnJl7u/5NbmtxodSUQqERXeVdDBlAwmf7aVlTuSAGgW6MWM2zrStYkGwhERqTZyMvNatXcsgR1fw6nEs6+ZXfKK6/w5tBteAdaqNQerSHXj6+bLvzv8m5kbZvLKb68QGRqJh4uH0bFEpJJQ4V2F5NrsLFizl5nf/MXpHBtWi5n7r2nG6N7NcHNRdyYRkSovIxl2fgN/LoFd8ZCTfvY1qze0uB5a94PmEeDhZ1hMESnc4DaD+e+f/+Vw+mHe2/YeozqOMjqSyAXZ7fZLbyRA2XxWKryriK2HUpm4eAtbD6UB0DXUn6dv60DzurUMTiYiIqWSsg92LM0rtvetAcfZATLxrg+t+uQV26E9wcVqXE4RuSQ3ixsPXv4gMT/GMG/rPG5veTv+7uqRKJWL1WrFbDZz+PBhAgMDsVqtuk31AhwOB9nZ2SQlJWE2m7FaS/7/sArvSi4jO5cXV/zFvNV7sDvAx92Fx/q24a4uIZg197aISNXjcMCRTXn3au9YColbC75etx207gut+kL9y0C/DIlUKX2b9OWdP95he/J24jbH8Vj4Y0ZHEinAbDbTpEkTjhw5wuHDhy+9g+Dp6UmjRo0wm80lPoYK70ps5Z9HmfTZVg6dOA1A/071mXxTG+p6a6J7EZEqJTcb9q0+W2ynHTr7mskMja46W2z7NzEup4iUmtlkZnyX8Yz4ZgQf7/iYu9vcTWOfxkbHEinAarXSqFEjcnNzsdlsl96hBrNYLLi4uJS6V4AK70oo6WQW077axpeb876BauDnwVO3tuea1nUNTiYiIkWWmQo7V+QV2jtXQFba2ddcvaD5tdCqH7SMBE91RRWpTrrW60rPBj358dCPvLTxJWb1nmV0JJHzmEwmXF1dcXV1NTpKjaDCu5LJzrUzeO7P7Dp6CrMJRvRowrjrW+Jp1R+ViEill3oor9DesRT2/Aj2nLOvedWFVjdC65ugSS9wVe8lkeosunM0Px3+iRX7VrDp6CbC6oYZHUlEDKRqrpJZsGYPu46eIqCWlQVRXWnfwNfoSCIicjEOB+xeBevi4K/lgOPsawEt87qPt+4HDbpAKe4NE5GqpXnt5gxoPoBPd37KzF9n8k6fdzSAlUgNpsK7Ejl6MpOX43cB8OiNrVV0i4hUZtkZsGURrHsDkrafXR8Snldot+oHAc2Nyycihrs/7H6W7lnKpqRNrNi3ghtCbzA6kogYRIV3JfL8sh2cysqlY0Nf7ri8odFxRESkMCcOwC9zYcNCyDyRt85aC8KGQNf/U7EtIk51PesyrN0w4jbH8fyvz9OjQQ88XT2NjiUiBlDhXUlsPnCCjzccBCC2fztNFSYiUpk4HLB/bV538u1fnZ1r268xhP8fXHYPuKuXkoicb0T7EXz595ccOnWIuC1xRHeONjqSiBhAhXcl4HA4mPrlHwAMuKwBnRvXNjiRiIgAkJsFWz+Fn1+HhC1n1ze5GsJH541IbrYYl09EKj13F3diusYw5rsxvPvHu9zc9Gaa11bPGJGaRoV3JfDZpkP8tv8EnlYLE/u0NjqOiIhkp8NPL8Ov8yA9KW+dizt0vAvC74OgdsbmE5EqpVdIL64NuZbvDnzHU+ue4u3ItzXQmkgNo8LbYOlZuTzz9Z8APHBNc4J8NL2MiIih0o7AB3edbeH2aQBXjITOwzXftoiU2ISuE1h7ZC0bEjfw1e6v6N+sv9GRRKQCaV4Tg81ZuYvEtCwa+XsyokcTo+OIiNRsiX/AWxF5RbdnANwxH8Zuhp7RKrpFpFTq16rP/3X8PwBe+PUFUrNSDU4kIhVJhbeB9h/P4K0f9wDweL82uLvqPkEREcP8/R3MvxHSDkKdFjDyW2h/O1hcjU4mItXE0LZDaerblOTMZF757RWj44hIBVLhbaCnlmwj22anR/MAbmgbZHQcEZGaa+M78P6dkJUGjXvAiG/AX72QRCranDlzCA0Nxd3dnfDwcNavX3/R7U+cOMEDDzxAvXr1cHNzo2XLlixdurSC0hafq8WVSVdOAuCjHR+x9dhWgxOJSEVR4W2Q1TuP8c22RCxmE7H922qADRERIzgcEP8kfPEg2HOhw11w72J1KxcxwKJFi4iOjiY2NpaNGzfSqVMnIiMjOXr0aKHbZ2dnc/3117N3714++eQTduzYwdy5c2nQoEEFJy+eK4KvoH/T/jhw8OTPT2Kz24yOJCIVQIW3AXJsdp44M33YvVc2pkWQt8GJRERqoNws+HQk/PhC3s9XPwq3vQkubsbmEqmhZs2axahRo4iKiqJt27bExcXh6enJ/PnzC91+/vz5JCcn89lnn9G9e3dCQ0Pp1asXnTp1quDkxRfdJRpvV2+2Hd/Gx399bHQcEakAKrwN8P7P+9h59BS1PV0ZF9HS6DgiIjVPRjK8cyts/QTMLnDLa3Dt46DeRyKGyM7OZsOGDURERDjXmc1mIiIiWLt2baH7fPHFF3Tr1o0HHniAoKAg2rdvz9NPP43NduEW5KysLNLS0gosRgjwCOChyx8C4OWNL3Ps9DFDcohIxVHhXcGS07OZteIvAB65oRW+nhq0R0SkQh3/O2/k8v1rwM0X7vkULrvb6FQiNdqxY8ew2WwEBRUc8yYoKIiEhIRC99m9ezeffPIJNpuNpUuXMnnyZGbOnMlTTz11wfPMmDEDX19f5xISElKm76M47mx5J23rtOVkzklm/jrTsBwiUjFUeFewmd/sIC0zlzb1fBjctZHRcUREapYD62He9ZD8N/iGwIjl0LS30alEpATsdjt169blzTffpHPnzgwcOJDHH3+cuLi4C+4TExNDamqqczlw4EAFJi7IYrYw+crJmDDx1e6v+CXhF8OyiEj5U+FdgbYdTuO/6/cDENu/LRazujSKiFSYP/4HC26CjONQ/zIYGQ912xidSkSAgIAALBYLiYmJBdYnJiYSHBxc6D716tWjZcuWWCxnp2Nt06YNCQkJZGdnF7qPm5sbPj4+BRYjtQ9oz12t7gLgqZ+fIseWY2geESk/KrwriMPh4Ikv/8DugH4d6nFl0zpGRxIRqRkcDlg9Gz4eDrYsaNUXhi8Bb03jKFJZWK1WOnfuTHx8vHOd3W4nPj6ebt26FbpP9+7d2bVrF3a73bnur7/+ol69elit1nLPXFYevOxB/N392Z26m3e2vWN0HBEpJyq8K8jS3xNYtycZNxczMX1bGx1HRKTmWD8Xvo3Nex4+Gga+B1YvYzOJyHmio6OZO3cuCxcuZPv27YwePZr09HSioqIAGDp0KDExMc7tR48eTXJyMmPHjuWvv/5iyZIlPP300zzw
wANGvYUS8XXzZXyX8QC8seUNDp86bHAiESkPLkYHqAlOZ9t4eul2AO7r1YyGtT0NTiQiUkPkZMIPz+c9v2YS9PqPsXlE5IIGDhxIUlISU6ZMISEhgbCwMJYtW+YccG3//v2YzWfbjEJCQli+fDnjxo2jY8eONGjQgLFjxzJhwgSj3kKJ3dT0Jj7d+SkbEjfwzPpnePnal42OJCJlTC3eFeDNH3Zz6MRp6vu6c1+vZkbHERGDzZkzh9DQUNzd3QkPD2f9+vUX3DYnJ4dp06bRrFkz3N3d6dSpE8uWLavAtFXcpvcg/WjeQGo9HjY6jYhcwpgxY9i3bx9ZWVmsW7eO8PBw52urVq1iwYIFBbbv1q0bP//8M5mZmfz999889thjBe75ripMJhOTwifhYnJh5YGVrDqwyuhIIlLGVHiXs0MnTvP697sAeKxfGzysVe8/AxEpO4sWLSI6OprY2Fg2btxIp06diIyM5OjRo4VuP2nSJN544w1eeeUVtm3bxn333ceAAQP47bffKjh5FWTLgZ9eynvefSxYNH2jiFRezWs359529wLwzPpnOJ172uBEIlKWVHiXs2e+/pPMHDtdm/jTr0M9o+OIiMFmzZrFqFGjiIqKom3btsTFxeHp6cn8+fML3f7dd9/lscceo2/fvjRt2pTRo0fTt29fZs7UnK+XtPVTOLEfvALhsnuMTiMickn3dbyPYK9gDp06xNwtc42OIyJlSIV3OcrIzmXp70cAmHJTW0wmTR8mUpNlZ2ezYcMGIiIinOvMZjMRERGsXbu20H2ysrJwd3cvsM7Dw4PVq1df8DxZWVmkpaUVWGocux1Wv5j3/Mr7wdXD2DwiIkXg6erJxCsmAvD2H2+zO3W3wYlEpKyo8C5HfxxOw2Z3UNfbjfYNfI2OIyIGO3bsGDabzTlQUL6goCASEhIK3ScyMpJZs2axc+dO7HY7K1asYPHixRw5cuSC55kxYwa+vr7OJSQkpEzfR5WwYykk/QluvnDFCKPTiIgU2bWNrqVng57k2nOZtnYadof90juJSKWnwrscbT5wAoBOIX6G5hCRquull16iRYsWtG7dGqvVypgxY4iKiiowsu8/xcTEkJqa6lwOHDhQgYkrAYcDfjzTFb/rKHDXF58iUnWYTCYeC38MDxcPNiRu4JO/PjE6koiUgRIV3mU9Iu/UqVMxmUwFltatq/5c11sOpgLQqaF+6RMRCAgIwGKxkJiYWGB9YmIiwcHBhe4TGBjIZ599Rnp6Ovv27ePPP/+kVq1aNG3a9ILncXNzw8fHp8BSo+xeBYc3gosHXDna6DQiIsXW0LshD132EACzNswiIb3wXlEiUnUUu/AurxF527Vrx5EjR5zLxe5frCq2HDwBQMeGfobmEJHKwWq10rlzZ+Lj453r7HY78fHxdOvW7aL7uru706BBA3Jzc/n000+55ZZbyjtu1bV6Vt5j52HgFWBsFhGREhrcejAdAzuSnpPOUz8/hcPhMDqSiJRCsQvv8hqR18XFheDgYOcSEHDhX5aqwsBBJzKy2Xs8A4COavEWkTOio6OZO3cuCxcuZPv27YwePZr09HSioqIAGDp0KDExMc7t161bx+LFi9m9ezc//vgjN954I3a7nUcffdSot1C5HfgF9vwAZhe46kGj04iIlJjFbOGJbk/gYnbh+4Pfs2zvskvvJCKVVrEK7/IckXfnzp3Ur1+fpk2bcvfdd7N///4L5qgKAwfldzMPreOJn6fV4DQiUlkMHDiQF154gSlTphAWFsamTZtYtmyZc8C1/fv3Fxg4LTMzk0mTJtG2bVsGDBhAgwYNWL16NX5+fga9g0ouv7W70yDwbWhsFhGRUmpeuzn/7vhvAGasm0FKZorBiUSkpIpVeJfXiLzh4eEsWLCAZcuW8frrr7Nnzx569uzJyZMnCz1mVRg4SN3MReRCxowZw759+8jKymLdunWEh4c7X1u1ahULFixw/tyrVy+2bdtGZmYmx44d45133qF+/foGpK4CEv/IG80cE3QfZ3QaEZEyMbL9SJr7NSclK4Vnf3nW6DgiUkLlPqp5UUbk7dOnD3feeScdO3YkMjKSpUuXcuLECT766KNCj1kVBg7adCCvxVvdzEVEKkj+vN3tboWA5oZGEREpK64WV6ZdNQ2zycyS3Uv44eAPRkcSkRIoVuFdUSPy+vn50bJlS3bt2lWceJVKfot3mKYSExEpf8l7YOunec97qLVbRKqXDoEduLfNvQA8+fOTnMo+ZXAiESmuYhXeFTUi76lTp/j777+pV69eceJVGgmpmRw9mYXFbKJdfbV4i4iUu59eAocdml8P9ToZnUZEpMw9cNkDNKzVkIT0BGZvnG10HBEppmJ3NS+PEXnHjx/P999/z969e1mzZg0DBgzAYrEwePDgMniLFW/zmdbuFnVr4WG1GBtGRKS6SzsCm97Pe97zEWOziIiUEw8XD6ZeNRWARTsWsSFxg7GBRKRYXIq7w8CBA0lKSmLKlCkkJCQQFhZ23oi8596/nT8i7+7du6lVqxZ9+/bl3XffLTAi78GDBxk8eDDHjx8nMDCQHj168PPPPxMYGFj6d2iAzQdOANBJA6uJiJS/ta+CLRsaXQWNL977SkSkKguvF87tLW7n052fMnXNVD65+RPcLG5GxxKRIih24Q15I/KOGTOm0NdWrVpV4Of8EXkv5sMPPyxJjEorfyqxTrq/W0SkfGUkw69v5z3vGW1sFhGRChDdJZofDv7A3rS9xG2OY+zlY42OJCJFUO6jmtc0drvjnKnEdH+3iEi5Wv8m5KRDcAdoHmF0GhGRcudj9eHxKx8H4O2tb7P9+HaDE4lIUajwLmN7j6eTlpmLm4uZVsHeRscREam+sk7Cz6/nPe/5CJhMxuYREakg1zW6jhsa34DNYWPKmink2HOMjiQil6DCu4zldzNvW98HV4s+XhGRcrNhAWSegDrNoc3NRqcREalQMeEx+Fh9+DP5Txb+sdDoOCJyCaoMy1j+iOYaWE1EpBzlZsGaV/Oed38YzJpBQkRqlgCPACZ0nQDA65teZ2/qXmMDichFqfAuY2cHVtP93SIi5WbTB3AqAXwaQMeBRqcRETFE/6b96V6/O9n2bGLXxGJ32I2OJCIXoMK7DOXY7Gw9lFd4d1SLt4hI+bDlwk+z855f9RC4WA2NIyJiFJPJxJRuU/Bw8WDj0Y18vONjoyOJyAWo8C5DfyWeJCvXjre7C03qeBkdR0Skevrjf5CyFzzrwOVDjU4jImKo+rXqO6cUm7VhFgnpCQYnEpHCqPAuQ/ndzDs29MVs1ui6IiJlzm6H1bPynl95P1g9jc0jIlIJDGo1iLDAMDJyM4hdE4vD4TA6koj8gwrvMrT5wAlA3cxFRMrNzuVwdBtYveGKkUanERGpFCxmC090fwI3ixtrDq/hwx0fGh1JRP5BhXcZ2pw/sFpDDawmIlIufn4t7/GKEeDhZ2gUEZHKpKlvU8Z1HgfAzF9nsjt1t8GJRORcKrzLyOlsG38lngSgU4ifsWFERKqjlL2w5wfAlFd4i4hIAYNbD6Z
bvW5k2bKI+TGGHHuO0ZFE5AwV3mVk25FUbHYHgd5uBPu4Gx1HRKT6+e39vMemvcGvkaFRREQqI7PJzJPdn8TH6sO249uI2xxndCQROUOFdxnZdOBsN3OTSQOriYiUKbstb+5ugMvvNTaLiEglFuQVxORukwF46/e32HR0k7GBRARQ4V1mthw8AWhgNRGRcrF7JaQdBHc/aNXP6DQiIpXajaE3clPTm7A77Dy2+jEycjKMjiRS46nwLiP5U4np/m4RkXKw8d28x44DwVW384iIXEpMeAzBXsEcOHmA5355zug4IjWeCu8ykJqRw55j6QB0bKARzUVEylT6cfhzSd5zdTMXESkSH6sPT/d4GhMmPt35KSv3rzQ6kkiNpsK7DGw5dAKARv6e1PayGhtGRKS6+f0jsOdAvU4Q3MHoNCIiVcYVwVcwrN0wAKauncrx08cNTiRSc6nwLgPqZi4iUk4cjrPdzC9Ta7eISHE9eNmDtKjdguTMZKaumYrD4TA6kkiNpMK7DGw+cALIG9FcRETK0OHf4OgfYHGDDncYnUZEpMqxWqzM6DEDV7Mrqw6uYvHOxUZHEqmRVHiXgc0a0VxEpHz89l7eY9ubwaO2sVlERKqoVv6teOiyhwB49pdn2Z+23+BEIjWPCu9SSkzLJDEtC7MJ2jfwMTqOiEj1kXMafv8k7/ll9xibRUSkihvabihXBF/B6dzTPLb6MXLtuUZHEqlRVHiXUn4385ZB3nhaXYwNIyJSnWz7ArJSwa8RhF5tdBoRkSrNbDLzVPenqOVai81Jm5n3+zyjI4nUKCq8Syl/YLWOur9bRKRs/XZmULWwe8Cs/65EREqrfq36PBb+GABxm+P449gfBicSqTn0m0wp6f5uEZFykLwb9v4ImCBsiNFpRESqjZua3sQNjW8g15HLxB8ncjr3tNGRRGoEFd6l4HA4zk4lpsJbRKTsbPog77HZteAXYmwWEZFqxGQyMaXbFAI9AtmbtpdZv84yOpJIjaDCuxT2Hc8g9XQOVhczrYK9jY4jIlI92G1nC28NqiYiUuZ83Xx5qvtTAHy440NWH1ptcCKR6k+FdynkdzNvW88Hq4s+ShGRMvH3Skg7lDd9WOt+RqcREamWrmpwFUNa593KM/mnySRnJhucSKR6U7VYCpsP5Hcz18BqIiJl5rd38h47DgQXN2OziIhUY+M6j6OZbzOOnT7G5J8m43A4jI4kUm2p8C6FLWdavDuF+BmaQ0Sk2kg/Bn8uzXuubuYiIuXK3cWd53o9h9Vs5YeDP/DBnx8YHUmk2lLhXUK5NjtbD+dPJeZnbBgRkepiy0dgz4F6YRDcweg0IlLB5syZQ2hoKO7u7oSHh7N+/foLbrtgwQJMJlOBxd3dvQLTVg8ta7fkkS6PADDz15nsSN5hcCKR6kmFdwn9lXiKzBw73m4uNA3wMjqOiEjV53Ccnbv78nuNzSIiFW7RokVER0cTGxvLxo0b6dSpE5GRkRw9evSC+/j4+HDkyBHnsm/fvgpMXH0Mbj2YXg17kWPP4dEfHtUUYyLlQIV3CeV3M2/fwBez2WRsGBGR6uDwRji6DVzcof0dRqcRkQo2a9YsRo0aRVRUFG3btiUuLg5PT0/mz59/wX1MJhPBwcHOJSgoqAITVx8mk4lp3acR6BHI7tTdPP/L80ZHEql2VHiX0Ob8+bt1f7eISNnYeKa1u83N4OFnaBQRqVjZ2dls2LCBiIgI5zqz2UxERARr16694H6nTp2icePGhISEcMstt/DHH39c9DxZWVmkpaUVWCSPv7s/03tMx4SJj//6mG/3fWt0JJFqRYV3CW0+cALQiOYiImUiOwO2fpr3XN3MRWqcY8eOYbPZzmuxDgoKIiEhodB9WrVqxfz58/n888957733sNvtXHXVVRw8ePCC55kxYwa+vr7OJSQkpEzfR1XXrX43hrcfDkDsmlgS0gv/7EWk+FR4l0Bmjo0diScB6KgWbxGR0tv+BWSlgV9jaNzD6DQiUgV069aNoUOHEhYWRq9evVi8eDGBgYG88cYbF9wnJiaG1NRU53LgwIEKTFw1PBj2IO3qtCMtO42YH2Ow2W1GRxKpFlR4l8Afh9Ow2R0E1LJS31ejZ4qIlFp+N/PL7gWz/msSqWkCAgKwWCwkJiYWWJ+YmEhwcHCRjuHq6spll13Grl27LriNm5sbPj4+BRYpyNXiynNXP4eniye/Jv7KvK3zjI4kUi3ot5sScM7f3dAPk0kDq4mIlMrxv2HfasAEYYONTiMiBrBarXTu3Jn4+HjnOrvdTnx8PN26dSvSMWw2G7///jv16tUrr5g1RiOfRjx+5eMAvLbpNTYd3WRsIJFqQIV3CeTf3635u0VEysCm9/Mem18Hvg2NzSIihomOjmbu3LksXLiQ7du3M3r0aNLT04mKigJg6NChxMTEOLefNm0a33zzDbt372bjxo3cc8897Nu3j5EjRxr1FqqV/k3706dJH2wOGxN/nMjJ7JNGRxKp0lyMDlAVbTkzonnHEA2sJiJSKnYbbPog7/ll9xibRUQMNXDgQJKSkpgyZQoJCQmEhYWxbNky54Br+/fvx3zOrSgpKSmMGjWKhIQEateuTefOnVmzZg1t27Y16i1UKyaTiclXTmZL0hYOnTrEkz8/ybM9n1VvT5ESMjkcDofRIUorLS0NX19fUlNTy/1endTTOXR64hsANk6+Hn8va7meT0QqRkVeRypapX5vf30DH9wJHv7wyJ/g4mZ0IhH5h0p9DSml6vzeysrmpM0M+3oYNoeN6T2mc3Ozm42OJFJpFOcaoq7mxfT7mdbuEH8PFd0iIqX12zt5j50GqegWEamEOgV24v6w+wGY/vN09qftNziRSNWkwruYNp8ZWE33d4uIlFL6Mdjxdd5zdTMXEam0RrQfQZegLmTkZvDoD4+SY8sxOpJIlaPCu5jOjmiu+7tFREpl84dgz4X6l0NQO6PTiIjIBVjMFmb0nIGP1Yc/jv/Bq5teNTqSSJWjwruY8gdW66QWbxGR0slv7Q4bYmwOERG5pGCvYKZdNQ2At7e+zc9HfjY4kUjVUqLCe86cOYSGhuLu7k54eDjr16+/4LY5OTlMmzaNZs2a4e7uTqdOnVi2bFmpjmmUo2mZHEnNxGyC9g3U4i0iUirHduQ9NuhsbA4RESmS6xpfx50t78SBg5gfYzh2+pjRkUSqjGIX3osWLSI6OprY2Fg2btxIp06diIyM5OjRo4VuP2nSJN544w1eeeUVtm3bxn333ceAAQP47bffSnxMo2w+09rdvG4tvNw0E5uIlExxv2icPXs2rVq1wsPDg5CQEMaNG0dmZmYFpS0nGcmQnpT3PKClsVlERKTI/nPFf2ju15xjp48x8ceJ2Ow2oyOJVAnFLrxnzZrFqFGjiIqKom3btsTFxeHp6cn8+fML3f7dd9/lscceo2/fvjRt2pTRo0fTt29fZs6cWeJjGmWLBlYTkVIq7heNH3zwARMnTiQ2Npbt27czb948Fi1axGOPPVbBycvY8V15jz4NwK2WsVlERKTIPFw8mNlrJh4uHq
w7so43t7xpdCSRKqFYhXd2djYbNmwgIiLi7AHMZiIiIli7dm2h+2RlZeHu7l5gnYeHB6tXry7VMdPS0gosFWHTgRMAdArxq5DziUj1U9wvGtesWUP37t0ZMmQIoaGh3HDDDQwePLhS3o5TLMf+ynsMaGFsDhERKbamfk2ZfOVkAF7f/Lru9xYpgmIV3seOHcNmsxEUFFRgfVBQEAkJCYXuExkZyaxZs9i5cyd2u50VK1awePFijhw5UuJjzpgxA19fX+cSEhJSnLdRIg6Hg98P5Q+spvu7RaT4SvJF41VXXcWGDRuchfbu3btZunQpffv2veB5jPpysliSztzfrW7mIiJVUv9m/bm9xe04cDDhhwkkZSQZHUmkUiv3Uc1feuklWrRoQevWrbFarYwZM4aoqCjM5pKfOiYmhtTUVOdy4MCBMkxcuJSMHE5k5M1Z2CrYu9zPJyLVT0m+aBwyZAjTpk2jR48euLq60qxZM3r37n3RruZGfDlZbMd25j2q8BYRqbImdp1Iy9otSc5MZsKPE8i15xodSaTSKlb1GxAQgMViITExscD6xMREgoODC90nMDCQzz77jPT0dPbt28eff/5JrVq1aNq0aYmP6ebmho+PT4GlvKVkZAPg7eaCm4ul3M8nIgKwatUqnn76aV577TU2btzI4sWLWbJkCU8++eQF9zHiy8lic3Y1V+EtIlJVubu480KvF/B08eSXhF94ffPrRkcSqbSKVXhbrVY6d+5MfHy8c53dbic+Pp5u3bpddF93d3caNGhAbm4un376Kbfcckupj1mRUtLzCu/aXlaDk4hIVVWSLxonT57Mvffey8iRI+nQoQMDBgzg6aefZsaMGdjt9kL3MeLLyWLJzYKUvXnPVXiLiFRpTXybENstFoC5W+ay5tAagxOJVE7F7u8dHR3N3LlzWbhwIdu3b2f06NGkp6cTFRUFwNChQ4mJiXFuv27dOhYvXszu3bv58ccfufHGG7Hb7Tz66KNFPmZlkHKmm3ltT1eDk4hIVVWSLxozMjLOuzXHYsnrdeNwOMovbHlK3gMOG1i9wbvwLxxERKTq6Nu0r3N+74k/TiQxPfHSO4nUMMWejHrgwIEkJSUxZcoUEhISCAsLY9myZc57Fvfv31/gl8TMzEwmTZrE7t27qVWrFn379uXdd9/Fz8+vyMesDNTiLSJlITo6mmHDhtGlSxe6du3K7Nmzz/vyskGDBsyYMQOA/v37M2vWLC677DLCw8PZtWsXkydPpn///s4CvMo5lj+wWgswmYzNIiIiZWJC1wn8fux3/kz+k0d/eJR5kfNwMRe71BCptkr0r2HMmDGMGTOm0NdWrVpV4OdevXqxbdu2Uh2zMsi/x7u2pwpvESm54n55OWnSJEwmE5MmTeLQoUMEBgbSv39/pk+fbtRbKD3d3y0iUu24WdyY2Wsmd311FxuPbmTOpjmMvXys0bFEKg19DVVE+V3N/dTVXERKqThfXrq4uBAbG0tsbGwFJKsg+SOaB6rwFhGpThr5NOKJq55g/Pfjeev3t7i87uX0bNjT6FgilUK5TydWXeR3NfdXi7eISOmoxVtEpNqKDI1kUKtBADy2+jES0gufLlOkplHhXUT5Xc39dI+3iEjJORyaw1tEpJr7zxX/oY1/G05kneA/3/+HHHuO0ZFEDKfCu4hOaFRzEZHSO3kEsk+ByQK1mxidRkREyoHVYmVmr5nUcq3FpqRNvPLbK0ZHEjGcCu8iSs5QV3MRkVJLOjOiuX8TcNH1VESkugrxCWFa92kAvL31bb4/8L3BiUSMpcK7iE7kdzVX4S0iUnLObuatjM0hIiLl7vrG13N3m7sBePynxzly6ojBiUSMo8K7CBwOh3NU89pe6mouIlJizoHVWhibQ0REKsQjnR+hfZ32pGalMv778WTbso2OJGIIFd5FkJaZi83uADSPt4hIqWhEcxGRGsXV4srzvZ7H2+rNlmNbeGb9M0ZHEjGECu8iyO9m7uFqwd3VYnAaEZEqTCOai4jUOA29G/Lc1c9hwsTHf33M4p2LjY4kUuFUeBdBikY0FxEpvcw0OHk477m6mouI1Cg9GvRgzGVjAHjq56f4Pel3gxOJVCwV3kWQkp7X4l1bc3iLiJTc8TOt3bWCwMPP0CgiIlLxRnYYybUh15Jjz2HcqnEcP33c6EgiFUaFdxGknOlqrvu7RURKQd3MRURqNLPJzPQe02ni24TEjETGfz+eHHuO0bFEKoQK7yI4O6K5Cm8RkRLTiOYiIjVeLWstZl8zGy9XL35N/JVZv84yOpJIhVDhXQTOrua6x1tEpOQ0ormIiABNfZsyvcd0AN7b/h5f/v2lwYlEyp8K7yLI72rup67mIiIll6QWbxERyXNdo+sY1WEUANPWTuPP5D8NTiRSvlR4F0F+4e2vFm8RkZKx5UDy7rznAa2MzSIiIpXCA2EP0L1BdzJtmTy88mFOZJ4wOpJIuVHhXQQp6brHW0SkVFL2gT0HXD3Bp4HRaUREpBKwmC082/NZGtZqyKFTh5jw4wRsdpvRsUTKhQrvIlBXcxGRUsq/v7tOczDrvx4REcnj6+bL7Gtm4+HiwZrDa3jlt1eMjiRSLvTbTxGc7WquwltEpEQ0sJqIiFxAK/9WPHHVEwDM2zqPFftWGJxIpOyp8L4Eh8PhnE7MT/d4i4iUjObwFhGRi+jTpA9D2w4FYNLqSfx94m+DE4mULRXel3A6x0Z2rh3QPd4iIiV2bEfeY6AKbxERKdy4zuPoGtyVjNwMxq4cy8nsk0ZHEikzKrwvIfnMHN5Wixkvq8XgNCIiVZDDoa7mIiJySS5mF57v9TzBXsHsS9vHYz8+ht1hNzqWSJlQ4X0JJ87pZm4ymQxOIyJSBaUnQWYqYAL/ZkanERGRSszf3Z/ZvWdjNVtZdXAVb2x+w+hIImVChfcl5A+sVlsDq4mIlEx+a3ftxuDqbmwWERGp9NoFtGNyt8kAvLb5NVbuX2lwIpHSU+F9CfldzWt7aWA1EZESUTdzEREpplub38qgVoMAiFkdw+4Tuw1OJFI6KrwvIb+ruVq8RURKKEmFt4iIFN+jXR+lS1AX0nPSeWjlQ6RlpxkdSaTEVHhfgrOruUY0FxEpGbV4i4hICbiaXZnZeyb1vOqxL20fE36YgM1uMzqWSImo8L6ElPyu5prDW0SkZDSHt4iIlJC/uz8vXfMS7hZ3Vh9azSu/vWJ0JJESUeF9CSnqai4iUnLZGZC6P++5Cm8RESmBNnXa8MRVTwAwb+s8lu1ZZnAikeJT4X0JGtVcRKQUju/Ke/TwB686xmYREZEqq2/TvkS1iwJg8k+T+TP5T4MTiRSPCu9LOHuPt7qai4gUW/793YGtjM0hIlXCnDlzCA0Nxd3dnfDwcNavX1+k/T788ENMJhO33npr+QYUQ429fCzd63cn05bJ2O/GkpyZbHQkkSJT4X0JKel5Xc391OItIlJ8zoHVWhibQ0QqvUWLFhEdHU1sbCwbN26kU6dOREZGcvTo0Yvut3fvXsaPH0/Pnj0rKKkYxWK28OzVz9LIuxGH0w8z/
vvx5NhzjI4lUiQqvC8hv8XbX4W3iEjxaURzESmiWbNmMWrUKKKiomjbti1xcXF4enoyf/78C+5js9m4++67eeKJJ2jatGkFphWj+Lr58tI1L+Hp4skvCb/wwi8vGB1JpEhUeF9EVq6NjOy8KQt0j7eISAloRHMRKYLs7Gw2bNhARESEc53ZbCYiIoK1a9decL9p06ZRt25dRowYUaTzZGVlkZaWVmCRqqd57ebM6DkDgA/+/ID/7fyfwYlELk2F90WcODOiudkE3u4uBqcREali7Lazg6upq7mIXMSxY8ew2WwEBQUVWB8UFERCQkKh+6xevZp58+Yxd+7cIp9nxowZ+Pr6OpeQkJBS5RbjXNvoWu7vdD8AT/78JFuSthicSOTiVHhfRHL62RHNzWaTwWlERKqY1AOQmwkWN/BrbHQaEalGTp48yb333svcuXMJCAgo8n4xMTGkpqY6lwMHDpRjSilv/9fp/7g25Fpy7DmMWzmOpIwkoyOJXJCacS8i//5uP0+NaC4iUmxJZ+7vrtMczBZjs4hIpRYQEIDFYiExMbHA+sTERIKDg8/b/u+//2bv3r3079/fuc5utwPg4uLCjh07aNas2Xn7ubm54ebmVsbpxShmk5mnez7N3Uvu5u/Uv3l41cO8Hfk2VotuEZXKRy3eF5Hf1Vz3d4uIlIBGNBeRIrJarXTu3Jn4+HjnOrvdTnx8PN26dTtv+9atW/P777+zadMm53LzzTdzzTXXsGnTJnUhr0G8XL14+dqX8bZ6syVpC9PXTcfhcBgdS+Q8avG+CGdXcy8V3iIixaYRzUWkGKKjoxk2bBhdunSha9euzJ49m/T0dKKiogAYOnQoDRo0YMaMGbi7u9O+ffsC+/v5+QGct16qv0Y+jXj+6ue5P/5+Fu9cTGv/1gxuPdjoWCIFqPC+iBMZ+fd4q6u5iEixaURzESmGgQMHkpSUxJQpU0hISCAsLIxly5Y5B1zbv38/ZrM6a0rhujfozsOXP8ysDbN4dv2zhPqE0q3++b0lRIyiwvsiUvK7mqvFW0Sk+PJbvANVeItI0YwZM4YxY8YU+tqqVasuuu+CBQvKPpBUKcPbDeevlL/4avdXPLLqEd7t+y7N/M6/11/ECPra8CJSzhnVXEREiiEjGTKO5T2v09zYLCIiUiOYTCaeuOoJLq97OSdzTvJA/AMcP33c6FgigArvi0pRV3MRkZLJb+32DQGrl7FZRESkxrBarMy+ZjYNazXk0KlDjF05lixbltGxRFR4X0yKRjUXESkZjWguIiIGqe1emzkRc/C2erM5aTOTV0/WSOdiuBIV3nPmzCE0NBR3d3fCw8NZv379RbefPXs2rVq1wsPDg5CQEMaNG0dmZqbz9alTp2IymQosrVu3Lkm0MuVs8dY93iIixaMRzUVExEBNfZvyYu8XcTG58PXer3lt82tGR5IartiF96JFi4iOjiY2NpaNGzfSqVMnIiMjOXr0aKHbf/DBB0ycOJHY2Fi2b9/OvHnzWLRoEY899liB7dq1a8eRI0ecy+rVq0v2jsrQ2Xu81dVcRKRYNKK5iIgYLLxeOFO6TQEgbnMcX/79pcGJpCYrduE9a9YsRo0aRVRUFG3btiUuLg5PT0/mz59f6PZr1qyhe/fuDBkyhNDQUG644QYGDx58Xiu5i4sLwcHBziUgIKBk76iM5NrspGXmAupqLiJSbGrxFhGRSmBAiwGMaD8CgNg1sWxI3GBwIqmpilV4Z2dns2HDBiIiIs4ewGwmIiKCtWvXFrrPVVddxYYNG5yF9u7du1m6dCl9+/YtsN3OnTupX78+TZs25e6772b//v0XzJGVlUVaWlqBpaydOJ3jfO7roRZvEZEiy8mElL15z1V4i4iIwR66/CGub3w9OfYcxq4cy/60C9cZIuWlWIX3sWPHsNlsBAUFFVgfFBREQkJCofsMGTKEadOm0aNHD1xdXWnWrBm9e/cu0NU8PDycBQsWsGzZMl5//XX27NlDz549OXnyZKHHnDFjBr6+vs4lJCSkOG+jSE6cub/bx90FF4vGoBMRKbLk3eCwg5sv1KprdBoREanhzCYz03tMp32d9qRmpfJA/AOkZqUaHUtqmHKvKFetWsXTTz/Na6+9xsaNG1m8eDFLlizhySefdG7Tp08f7rzzTjp27EhkZCRLly7lxIkTfPTRR4UeMyYmhtTUVOdy4MCBMs+dnJ7X4u2vgdVERIrn3BHNTSZjs4iIiAAeLh68ct0r1POqx960vYxbNY4cW86ldxQpI8UqvAMCArBYLCQmJhZYn5iYSHBwcKH7TJ48mXvvvZeRI0fSoUMHBgwYwNNPP82MGTOw2+2F7uPn50fLli3ZtWtXoa+7ubnh4+NTYClr+SOa++n+bhGR4skfWC2wlbE5REREzhHgEcCr172Kl6sXvyT8wrSfp2maMakwxSq8rVYrnTt3Jj4+3rnObrcTHx9Pt27dCt0nIyMDs7ngaSwWC8AF/6KfOnWKv//+m3r16hUnXpnK72quEc1FRIpJc3iLiEgl1bJ2S17o9QJmk5nPdn3GvK3zjI4kNUSxu5pHR0czd+5cFi5cyPbt2xk9ejTp6elERUUBMHToUGJiYpzb9+/fn9dff50PP/yQPXv2sGLFCiZPnkz//v2dBfj48eP5/vvv2bt3L2vWrGHAgAFYLBYGDx5cRm+z+PK7mmsObxEpa3PmzCE0NBR3d3fCw8PPm+XhXL1798ZkMp239OvXrwITF9OxHXmPGlhNREQqoR4NehDTNa9eeWnjSyzfu9zgRFITuBR3h4EDB5KUlMSUKVNISEggLCyMZcuWOQdc279/f4EW7kmTJmEymZg0aRKHDh0iMDCQ/v37M336dOc2Bw8eZPDgwRw/fpzAwEB69OjBzz//TGBgYBm8xZI52+KtwltEys6iRYuIjo4mLi6O8PBwZs+eTWRkJDt27KBu3fMHIlu8eDHZ2dnOn48fP06nTp248847KzJ20dntmsNbREQqvUGtB7EvbR/vbX+Px1c/Tj2venQM7Gh0LKnGTI5qcGNDWloavr6+pKamltn93o9+spmPfj3IfyJb8cA1zcvkmCJSeZXHdaQw4eHhXHHFFbz66qtA3u06ISEhPPjgg0ycOPGS+8+ePZspU6Zw5MgRvLy8Ct0mKyuLrKws589paWmEhISU+3sDIPUgvNgOzC7weAJYdLuOSFVXUddHI1Tn9yaXZrPbGLtyLN8f/B5/d3/e7/s+Db0bGh1LqpDiXEM0T9YF5Hc199M93iJSRrKzs9mwYQMRERHOdWazmYiICNauXVukY8ybN49BgwZdsOiGiply8YLy7+/2b6aiW0REKjWL2cJzVz9Ha//WJGcmM/rb0aRkphgdS6opFd4XoK7mIlLWjh07hs1mc96aky8oKIiEhIRL7r9+/Xq2bt3KyJEjL7pdRUy5eEHObuYaWE1ERCo/T1dP5lw3xznN2IPfPcjp3NNGx5JqSIX3BaSo8BaRSmbevHl06NCBrl27XnS7iphy8YKSNLCaiIhULXU96xIXEYeP1YfNSZt59IdHybXn
Gh1LqhkV3heQkpE/qrm6SopI2QgICMBisZCYmFhgfWJiIsHBwRfdNz09nQ8//JARI0aUZ8TSc04lpsJbRESqjqZ+TXnl2lewmq2sOrCKGetmaI5vKVMqvAthtzvU1VxEypzVaqVz587Ex8c719ntduLj4+nWrdtF9/3444/JysrinnvuKe+YpaMRzUVEpIq6POhynr36WUyY+Oivj3jr97eMjiTViArvQqRl5mA/8wWXBlcTkbIUHR3N3LlzWbhwIdu3b2f06NGkp6cTFRUFwNChQ4mJiTlvv3nz5nHrrbdSp06dio5cdJmpcOrMveq6x1tERKqgiMYRTOyaN8vIy7+9zOe7Pjc4kVQXxZ7HuybI72buZbXg5mIxOI2IVCcDBw4kKSmJKVOmkJCQQFhYGMuWLXMOuLZ//37M5oLfie7YsYPVq1fzzTffGBG56I7tynv0rgfumpZHRESqpiFthpCYkcj8rfOZumYqAR4BdG/Q3ehYUsWp8C5E/sBqfupmLiLlYMyYMYwZM6bQ11atWnXeulatWlWN+8yc93ertVtERKq2sZePJTEjkSW7lzBu1TjevvFt2tVpZ3QsqcLU1bwQKel5hbe/lwpvEZEiO6YRzUVEpHowm8w8edWThNcL53TuaR749gEOnjxodCypwlR4FyK/q7nu7xYRKQYNrCYiItWIq8WV2b1n06p2K45nHmf0t6NJyUwxOpZUUSq8C6ERzUVESkBTiYmISDVTy1qL1yJeo55XPfam7WXMd2M4nXva6FhSBanwLkSyupqLiBSPLQeSd+c9V+EtIiLVSF3PusRFxOFj9WFL0hYe/eFRcu25RseSKkaFdyHU1VxEpJhS9oI9F1y9wKe+0WlERETKVFO/prx63au4WdxYdWAVT697umoMfCqVhgrvQuR3NVeLt4hIESXlD6zWAkwmY7OIiIiUg8vqXsazPZ/FhImP//qYub/PNTqSVCEqvAuR39Vc04mJiBSR7u8WEZEa4LrG1zGx60QAXvntFRbvXGxwIqkqVHgX4sSZrua11dVcRKRo8kc0D1ThLSIi1duQNkMY0X4EAE+sfYL4/fEGJ5KqQIV3IVI0qrmISPHkt3jXaWFsDhERkQow9vKxDGg+ALvDzqPfP8ovCb8YHUkqORXe/+BwOM4W3rrHW0SkaPJHNK/T3NgcIiIiFcBkMjGl2xSuDbmWbHs2D333EH8m/2l0LKnEVHj/Q3q2jRxb3giF6mouIlIEmalwOjnvee1QQ6OIiIhUFBezC8/1eo4uQV04lXOK+1bcx4G0A0bHkkpKhfc/pJwZWM3NxYyHq8XgNCIiVUDynrxHr7rgVsvYLCIiIhXIzeLGy9e+TKvarTieeZx/r/g3SRlJRseSSkiF9z+ce3+3SVPiiIhcWn43c/8mxuYQERExgLfVm7jr4wjxDuHgqYOM/nY0adlpRseSSkaF9z+knBnR3E/dzEVEiiblTIt3bRXeIiJSMwV4BPDG9W8Q4BHAjpQdPBj/IJm5mUbHkkpEhfc/5Hc199fAaiIiRZPf1Vwt3iIiUoOFeIcQFxGHt6s3G49u5D8//Idce67RsaSSUOH9D5pKTESkmFL25j2qxVtERGq4Vv6teOW6V3CzuLHqwCqeWPsEDofD6FhSCajw/gd1NRcRKSa1eIuIiDh1DurM81c/j8Vk4bNdn/HixheNjiSVgArvf1BXcxGRYsjNgrRDec/V4i0iIgLANY2uIbZbLABvb32bBVsXGBtIDKfC+x/yu5r7qau5iMilpewDHGD1Bq8Ao9OIiIhUGgNaDCC6czQAMzfM5PNdnxucSIykwvsfTpzpau7vpa7mIiKXlD+iuX8oaApGERGRAqLaRzG83XAAYtfEsurAKiPjiIFUeP9DcrpavEVEiixZU4mJiIhcTHTnaG5pdgs2h43x34/nl4RfjI4kBlDh/Q8nNKq5iEjRpWhgNRERkYsxmUxMvWoqvRv2JsuWxQPxD7AxcaPRsaSCqfD+h/xRzf1VeIuIXJpavEVERC7JxezCC71f4Kr6V3E69zT3x9/P5qTNRseSCqTC+xyZOTZO59gA8NM93iIil6YWbxEpQ3PmzCE0NBR3d3fCw8NZv379BbddvHgxXbp0wc/PDy8vL8LCwnj33XcrMK1I8bhZ3HjpmpcIDw4nPSed0StG88exP4yOJRVEhfc58kc0dzGb8HZzMTiNiEglZ7dByt6852rxFpFSWrRoEdHR0cTGxrJx40Y6depEZGQkR48eLXR7f39/Hn/8cdauXcuWLVuIiooiKiqK5cuXV3BykaJzd3Hn5WtfpnNQZ07mnOTfK/7N9uPbjY4lFUCF9zlS0vO6mft5WjFpdF4RkYtLOwy2bDC7gm9Do9OISBU3a9YsRo0aRVRUFG3btiUuLg5PT0/mz59f6Pa9e/dmwIABtGnThmbNmjF27Fg6duzI6tWrKzi5SPF4unoy57o5hAWGkZadxr9X/Ju/Uv4yOpaUMxXe50hxDqymbuYiIpeU383crxGYLcZmEZEqLTs7mw0bNhAREeFcZzabiYiIYO3atZfc3+FwEB8fz44dO7j66qsvuF1WVhZpaWkFFhEjeLl68VrEa3QI6MCJrBOM+mYUf5/42+hYUo5UeJ8jRSOai4gUXbLu7xaRsnHs2DFsNhtBQUEF1gcFBZGQkHDB/VJTU6lVqxZWq5V+/frxyiuvcP31119w+xkzZuDr6+tcQkJCyuw9iBSXt9WbuOvjaOPfhuTMZEYsH8Ge1D1Gx5JyosL7HCln5vCurYHVREQuLUUjmouIsby9vdm0aRO//PIL06dPJzo6mlWrVl1w+5iYGFJTU53LgQMHKi6sSCF8rD7MvWEurWq34njmcUYuH8n+tP1Gx5JyoML7HPlTianFW0SkCNTiLSJlJCAgAIvFQmJiYoH1iYmJBAcHX3A/s9lM8+bNCQsL45FHHuGOO+5gxowZF9zezc0NHx+fAouI0XzdfHnzhjdp7teco6ePMuKbERw8edDoWFLGVHifI7+ruZ8KbxGRS3NOJdbU2BwiUuVZrVY6d+5MfHy8c53dbic+Pp5u3boV+Th2u52srKzyiChSrvzd/Zl7w1ya+DYhIT2Bkd+M5MipI0bHkjKkwvsc+V3N/dXVXETk4hwOSN6b91xdzUWkDERHRzN37lwWLlzI9u3bGT16NOnp6URFRQEwdOhQYmJinNvPmDGDFStWsHv3brZv387MmTN59913ueeee4x6CyKlEuARwLwb5tHYpzGHTh3iX8v/RWJ64qV3lCpBk1WfI7+ruVq8RUQu4XQKZKXmPa/d2NgsIlItDBw4kKSkJKZMmUJCQgJhYWEsW7bMOeDa/v37MZvPthmlp6dz//33c/DgQTw8PGjdujXvvfceAwcONOotiJRaoGcgb93wFlHLojh46iAjvhnB25FvE+gZaHQ0KSWTw+FwGB2itNLS0vD19SU1NbVU9+rc8upqNh9M5a2hXYhoG3TpHUSk2iir60hlVC7v7eAGeOta8K4Pj2wvm2OKSKWk66NIxTt86jBRy6I4nH6YJr5NmB8
5nwCPAKNjyT8U5xqirubnSM7QqOYiIkWSooHVREREykv9WvV5K/ItgjyD2JO6h38t/xdHM44aHUtKQYX3OU6kq6u5iEiRJO/Oe9T93SIiIuUixDuE+ZHzCfYKZk/qHqKWRZGQfuF57aVyK1HhPWfOHEJDQ3F3dyc8PJz169dfdPvZs2fTqlUrPDw8CAkJYdy4cWRmZpbqmGUtx2bnZFYuAP4qvEVELs45lViooTFERESqs0Y+jVhw4wIa1GrA/pP7Gb5suKYaq6KKXXgvWrSI6OhoYmNj2bhxI506dSIyMpKjRwvv+vDBBx8wceJEYmNj2b59O/PmzWPRokU89thjJT5mecifSsxkAh8PdTUXEbmo/K7mavEWEREpVw1qNWDBjQuco50PXzacfWn7jI4lxVTswnvWrFmMGjWKqKgo2rZtS1xcHJ6ensyfP7/Q7desWUP37t0ZMmQIoaGh3HDDDQwePLhAi3Zxj1keTpwZ0dzXwxWL2VRh5xURqZKSdY+3iIhIRQn2CubtyLdp4tuExIxEopZFsfvEbqNjSTEUq/DOzs5mw4YNREREnD2A2UxERARr164tdJ+rrrqKDRs2OAvt3bt3s3TpUvr27VviY2ZlZZGWllZgKS3nHN7qZi4icnHZGXDqzD1mavEWERGpEIGegbwd+TYtarcg6XQSUcuj+CvlL6NjSREVq/A+duwYNpvNOZ9ivqCgIBISCr/Rf8iQIUybNo0ePXrg6upKs2bN6N27t7OreUmOOWPGDHx9fZ1LSEhIcd5GofK7mvt5qpu5iMhFpezNe3T3BU9/Q6OIiIjUJHU86jD/hvm08W9DcmYyI5aPYPtxTetZFZT7qOarVq3i6aef5rXXXmPjxo0sXryYJUuW8OSTT5b4mDExMaSmpjqXAwcOlDpnypmu5rXV4i0icnG6v1tERMQwfu5+zL1hLh0COnAi6wQjvhnB70m/Gx1LLqFYhXdAQAAWi4XExMQC6xMTEwkODi50n8mTJ3PvvfcycuRIOnTowIABA3j66aeZMWMGdru9RMd0c3PDx8enwFJayen5c3ir8BYRuSjd3y0iImIoXzdf3rz+TS6rexkns08yasUofjv6m9Gx5CKKVXhbrVY6d+5MfHy8c53dbic+Pp5u3boVuk9GRgZmc8HTWCwWABwOR4mOWR5OnOlqXltdzUVELi6/xdu/qbE5REREarBa1lrERcRxRfAVpOek838r/o9fEn4xOpZcQLG7mkdHRzN37lwWLlzI9u3bGT16NOnp6URFRQEwdOhQYmJinNv379+f119/nQ8//JA9e/awYsUKJk+eTP/+/Z0F+KWOWRHyu5r7qau5iMjFJauruYiISGXg6erJnOvm0K1eN07nnub+b+9nzeE1RseSQrgUd4eBAweSlJTElClTSEhIICwsjGXLljkHR9u/f3+BFu5JkyZhMpmYNGkShw4dIjAwkP79+zN9+vQiH7MiOEc1V1dzEZGLS1FXcxERkcrCw8WDV657hehV0fxw8AcejH+QF695kasbXm10NDmHyeFwOIwOUVppaWn4+vqSmppa4vu9b3vtJzbuP0HcPZdzY/t6ZZxQRCq7sriOVFZl+t5suTA9COy5MG4b+DYom5AiUmnp+ihSNeTYchj//Xi+O/AdLmYXnu35LDeE3mB0rGqtONeQch/VvKo4oVHNRUQuLfVAXtFtcQNvfUkpIiJSWbhaXHmh9wtEhkaSa89l/PfjeX/7+0bHkjNUeJ+RnKFRzUVELsk5lVgomPVfiIiISGXianbl2Z7PMrDVQBw4eGb9M8z8dSZ2h93oaDWefmsCbHYHqafzB1fTqOYiIhekqcREREQqNYvZwuPhjzP28rEALPhjARN/mEi2LdvgZDWbCm8g7XQO+Xe6q6u5iJS3OXPmEBoairu7O+Hh4axfv/6i2584cYIHHniAevXq4ebmRsuWLVm6dGkFpf2HFI1oLiIiUtmZTCZGdhjJ0z2exsXkwtd7v2b0t6NJy04zOlqNpcKbs93Mvd1ccLXoIxGR8rNo0SKio6OJjY1l48aNdOrUicjISI4ePVro9tnZ2Vx//fXs3buXTz75hB07djB37lwaNDBoUDO1eIuIiFQZ/Zv1Z07EHLxcvVifsJ5hXw8jIT3B6Fg1kqpM4MSZwtvPS93MRaR8zZo1i1GjRhEVFUXbtm2Ji4vD09OT+fPnF7r9/PnzSU5O5rPPPqN79+6EhobSq1cvOnXqVMHJz0jZm/eoFm8REZEq4ar6V7HgxgUEeASw68Qu7ll6DztTdhodq8ZR4Q2kpOfd3+2vbuYiUo6ys7PZsGEDERERznVms5mIiAjWrl1b6D5ffPEF3bp144EHHiAoKIj27dvz9NNPY7PZLnierKws0tLSCixlwuFQi7eIiEgV1Nq/Ne/3fZ8mvk1IzEhk2NfD+CXhF6Nj1SgqvDnb1dxPhbeIlKNjx45hs9kICgoqsD4oKIiEhMK7fe3evZtPPvkEm83G0qVLmTx5MjNnzuSpp5664HlmzJiBr6+vcwkJCSmbN5CeBDnpgAn8GpXNMUVERKRC1K9Vn3f7vMvldS/nZM5J/m/F/7FszzKjY9UYKrw529W8tkY0F5FKxm63U7duXd588006d+7MwIEDefzxx4mLi7vgPjExMaSmpjqXAwcOlE2Y/NZu3xBwcSubY4qIiEiF8XXz5c0b3uT6xteTY8/hPz/8h4V/LDQ6Vo2gwhtIycjraq45vEWkPAUEBGCxWEhMTCywPjExkeDg4EL3qVevHi1btsRisTjXtWnThoSEBLKzC58WxM3NDR8fnwJLmcgf0dw/tGyOJyIiIhXOzeLG81c/z91t7gbghV9f4Nn1z2qu73KmwhtISc9v8VbhLSLlx2q10rlzZ+Lj453r7HY78fHxdOvWrdB9unfvzq5du7Dbz/5n+Ndff1GvXj2s1gq+ZiVrKjEREZHqwGK2MOGKCTzS+REA3tv+Hv/5/j9k2bIMTlZ9qfAGUtTVXEQqSHR0NHPnzmXhwoVs376d0aNHk56eTlRUFABDhw4lJibGuf3o0aNJTk5m7Nix/PXXXyxZsoSnn36aBx54oOLDJ+/Oe9TAaiIiIlWeyWRiePvhPNPzGVzMLnyz7xtGLh/J8dPHjY5WLbkYHaAyyB/VXF3NRaS8DRw4kKSkJKZMmUJCQgJhYWEsW7bMOeDa/v37MZvPficaEhLC8uXLGTduHB07dqRBgwaMHTuWCRMmVHz4FLV4i4iIVDf9mvYj0COQh1c9zKakTQxZMoSXr32ZVv6tjI5Wrajw5twWbxXeIlL+xowZw5gxYwp9bdWqVeet69atGz///HM5pyoCTSUmIiJSLXWt15X3+77Pg989yL60fQz9eijPXv0svUN6Gx2t2lBXc84ZXE2Ft4hI4bJOQsaxvOdq8RYREal2mvg24f2+7xMeHE5GbgYPffcQb299G4fDYXS0aqHGF94Oh+PsdGJeusdbRKRQ+a3dnnXAvYxGSRcREZFKxdfNl9evf527Wt6FAwezNsxi8k+TybYVPpOKFF2NL7xPZuWSa8
/7Fkct3iIiF6D7u0VERGoEV7Mrk66cREzXGMwmM5///TmjvhlFcmay0dGqtBpfeJ84M7Cah6sFd1fLJbYWEamhdH+3iIhIjWEymRjSZgivXfca3q7ebDy6kSFLhrAzZafR0aqsGl94J2sqMRGRS1OLt4iISI3TvUF33uv3HiHeIRw6dYh7lt7DDwd/MDpWlVTjC+/8Ec391M1cROTC1OItIiJSIzX1bcoHfT/giuAryMjNYEz8GBb+sVCDrhVTjS+88wdW89cc3iIiF5bf4u3f1NgcIiIiUuH83P14I+INbm9xOw4cvPDrC8SuiSXHlmN0tCqjxhfeyWfu8fZTV3MRkcLlZkPqwbzn6mouIiJSI7laXIntFsuEKyZgNpn5367/MWrFKFIyU4yOViXU+MLbOZWYupqLiBQu9QA47ODqBbXqGp1GREREDGIymbin7T3MuW4OtVxrsSFxA3d9dRebjm4yOlqlV+ML7xTnHN4qvEVECpW8O++xdiiYTIZGEREREeP1aNCD9/q+R2OfxiSkJxC1LEr3fV+CCu8zXc01qrmIyAVoYDURERH5h2Z+zVh00yL6hPYh15HLC7++wEPfPURqVqrR0SolFd7qai4icnHOqcRCDY0hIiIilYuXqxfPXv0sk6+cjNVsZdXBVdz55Z1sTtpsdLRKp8YX3snp6mouInJRavEWERGRCzCZTNzV6i7e65s33/eR9CMM/3o47/zxjrqen6PGF94nMtTVXETkopwt3iq8RUREpHBt6rTho5s+4obGN5DryOX5X59n7Mqx6np+Ro0uvB0Oh7qai4hcjN0OKXvznqvFW0RERC6ilrUWL/R6gcfDH8fV7MrKAysZ+NVAfk/63ehohqvRhffpHBtZuXZAXc1FRAp1KgFyM8FkAd8Qo9OIiIhIJWcymRjUehDv9n2XhrUacujUIYYuG8p7296r0V3Pa3ThnXKmm7mrxYSX1WJwGhGRSij//m6/ELDolhwREREpmnZ12vFR/4+4vvH15NpzefaXZ4leFU1adprR0QxRswvv9LPdzE2am1ZE5Hy6v1tERERKyNvqzcxeM5nYdSIuZhe+3f8td315F1uPbTU6WoWr2YW37u8WEbk454jmTY3NISI1wpw5cwgNDcXd3Z3w8HDWr19/wW3nzp1Lz549qV27NrVr1yYiIuKi24uIMUwmE3e3uZt3+7xLg1oNOHTqEPcuvZc3t7xJrj3X6HgVpoYX3nldzf00ormISOFSNJWYiFSMRYsWER0dTWxsLBs3bqRTp05ERkZy9OjRQrdftWoVgwcPZuXKlaxdu5aQkBBuuOEGDh06VMHJRaQo2ge0P9v13JHLK7+9QtSyKA6cPGB0tApRowvvE2davP01sJqISOGSd+c9qqu5iJSzWbNmMWrUKKKiomjbti1xcXF4enoyf/78Qrd///33uf/++wkLC6N169a89dZb2O124uPjKzi5iBSVj9WHmb1mMr3HdLxcvdiUtIk7vriD/+38X7UfeK1GF97JZ+7x9lNXcxGRwiWrxVtEyl92djYbNmwgIiLCuc5sNhMREcHatWuLdIyMjAxycnLw9/e/4DZZWVmkpaUVWESkYplMJm5udjOf3vwpl9e9nIzcDKasmcLDKx8mOTPZ6HjlpkYX3ifOdDWvra7mIiLnO50CmSfyntcONTKJiFRzx44dw2azERQUVGB9UFAQCQkJRTrGhAkTqF+/foHi/Z9mzJiBr6+vcwkJ0TSJIkZpUKsB8yPn8/DlD+NiduG7A99x2+e38cPBH4yOVi5qdOGdoq7mIiIXlt/aXSsIrF7GZhERuYhnnnmGDz/8kP/973+4u7tfcLuYmBhSU1Ody4EDNePeUpHKymK2MKLDCD7o+wHNfJtxPPM4D8Q/wJNrnyQjJ8PoeGXKxegARlJXc6nK7HY72dnZRseoMlxdXbFYLEbHqFo0lZiUgM1mIycnx+gYUkxWqxWz2bj2mICAACwWC4mJiQXWJyYmEhwcfNF9X3jhBZ555hm+/fZbOnbseNFt3dzccHNzK3VeESlbbeq04cObPuSljS/x3vb3+Oivj1ifsJ4ZPWfQPqC90fHKRI0uvNXVXKqq7Oxs9uzZg91uNzpKleLn50dwcDAmk8noKFWD7u+WYnA4HCQkJHDixAmjo0gJmM1mmjRpgtVqTGOE1Wqlc+fOxMfHc+uttwI4B0obM2bMBfd77rnnmD59OsuXL6dLly4VlFZEyoO7izsTuk6gZ8OeTF49mb1pe7ln6T3c1+k+RnYYiYu5apeuVTt9KTnn8VZXc6lCHA4HR44cwWKxEBISYmgLRVXhcDjIyMhwTklTr149gxNVEWrxlmLIL7rr1q2Lp6envuCqQux2O4cPH+bIkSM0atTIsD+76Ohohg0bRpcuXejatSuzZ88mPT2dqKgoAIYOHUqDBg2YMWMGAM8++yxTpkzhgw8+IDQ01HkveK1atahVq5Yh70FESu+q+lex+JbFPPnzkyzfu5w5m+bw46EfmdFjBo18Ghkdr8RqduF9pqt5bXU1lyokNzeXjIwM6tevj6enp9FxqgwPDw8Ajh49St26ddXtvCiS9+Y9qsVbLsFmszmL7jp16hgdR0ogMDCQw4cPk5ubi6urMT0BBw4cSFJSElOmTCEhIYGwsDCWLVvmHHBt//79Bb5sfv3118nOzuaOO+4ocJzY2FimTp1akdFFpIz5uvny/NXP06thL55e9zRbkrZw2xe38e+O/2Z4u+FYLVWvfquxhXdWro30bBsA/iq8pQqx2fL+3hrVHbAqy/+iIicnR4V3UajFW4oo/55ufRlYdeX/n2Kz2QwrvAHGjBlzwa7lq1atKvDz3r17yz+QiBjGZDLRv1l/Ogd1ZspPU1iXsI5XfnuFL//+kslXTqZrva5GRyyWGttHNf/+brMJvN1r7PcPUoWpG2fx6TMrhpxMSDuc91wt3lJE+jdWdenPTkQqq/q16jP3hrnM6DkDf3d/9qbtZcQ3I4j5MYZjp48ZHa/ISlR4z5kzh9DQUNzd3QkPD2f9+vUX3LZ3796YTKbzln79+jm3GT58+Hmv33jjjSWJVmT593f7eVoxm/WfjYhIASf2AQ5w8wFPdR0WERER45hMJm5qehNf3PoFA1sNxISJr3Z/xc2f3cxHOz7C7qj8Aw4Xu/BetGgR0dHRxMbGsnHjRjp16kRkZKRz0KJ/Wrx4MUeOHHEuW7duxWKxcOeddxbY7sYbbyyw3X//+9+SvaMiSknXiOYiIheUP6J57VBQS5iIiIhUAr5uvky6chLv932fNv5tOJl9kid/fpJ7lt7D9uPbjY53UcUuvGfNmsWoUaOIioqibdu2xMXF4enpyfz58wvd3t/fn+DgYOeyYsUKPD09zyu83dzcCmxXu3btkr2jInKOaK77u0Uq1Nq1a7FYLAV6vUDevXsmk6nQqYhCQ0OZPXt2gXUrV66kb9++1KlTB09PT9q2bcsjjzzCoUOHyjF9DZK8O+9R3cxFRESkkukQ2IEP+n3AxK4T8XL14vdjvzNoySCeXf8s6TnpRscrVLEK7
+zsbDZs2EBERMTZA5jNREREsHbt2iIdY968eQwaNAgvL68C61etWkXdunVp1aoVo0eP5vjx4xc8RlZWFmlpaQWW4jq3q7mIVJx58+bx4IMP8sMPP3D48OESHeONN94gIiKC4OBgPv30U7Zt20ZcXBypqanMnDmzjBPXUBpYTcQw+YPViYjIhbmYXbi7zd18cesXRIZGYnfYeW/7e9z8v5tZvnc5DofD6IgFFKvwPnbsGDabzTmtQ76goCDn3IkXs379erZu3crIkSMLrL/xxht55513iI+P59lnn+X777+nT58+ztGb/2nGjBn4+vo6l5CQkOK8DeDs4Gr+XupqLlJRTp06xaJFixg9ejT9+vVjwYIFxT7GwYMHeeihh3jooYeYP38+vXv3JjQ0lKuvvpq33nqLKVOmlH3wmii/q7lavKUGWLZsGT169MDPz486depw00038ffffztfP3jwIIMHD8bf3x8vLy+6dOnCunXrnK9/+eWXXHHFFbi7uxMQEMCAAQOcr5lMJj777LMC5/Pz83Ne//bu3YvJZGLRokX06tULd3d33n//fY4fP87gwYNp0KABnp6edOjQ4bzb8Ox2O8899xzNmzfHzc2NRo0aMX36dACuvfba80YHT0pKwmq1Eh8fXxYfm4hIpVDXsy4v9HqBuIg4QrxDOHr6KOO/H8/ob0dzIO2A0fGcKnQ473nz5tGhQwe6di049PugQYOczzt06EDHjh1p1qwZq1at4rrrrjvvODExMURHRzt/TktLK3bxnaw5vKWacDgcnM4p/Euq8ubhainWSLgfffQRrVu3plWrVtxzzz08/PDDxMTEFOsYH3/8MdnZ2Tz66KOFvu7n51fkY8lFqMVbSsvhgJwMY87t6lmssQnS09OJjo6mY8eOnDp1iilTpjBgwAA2bdpERkYGvXr1okGDBnzxxRcEBwezceNG7Pa8gXyWLFnCgAEDePzxx3nnnXfIzs5m6dKlxY48ceJEZs6cyWWXXYa7uzuZmZl07tyZCRMm4OPjw5IlS7j33ntp1qyZ8/eomJgY5s6dy4svvkiPHj04cuQIf/75JwAjR45kzJgxzJw5Ezc3NwDee+89GjRowLXXXlvsfCIilV33Bt1ZfPNi5m2dx7zf5/HT4Z+45fNbGNp2KCM7jKSWtZah+YpVeAcEBGCxWEhMTCywPjExkeDg4Ivum56ezocffsi0adMueZ6mTZsSEBDArl27Ci283dzcnP+JlJS6mkt1cTrHRtspyw0597ZpkXhai34ZmTdvHvfccw+Q19MlNTWV77//nt69exf5GDt37sTHx4d69eoVN64Uld0GKfvynqvFW0oqJwOerm/MuR87DFavS293xu23317g5/nz5xMYGMi2bdtYs2YNSUlJ/PLLL/j7+wPQvHlz57bTp09n0KBBPPHEE851nTp1Knbkhx9+mNtuu63AuvHjxzufP/jggyxfvpyPPvqIrl27cvLkSV566SVeffVVhg0bBkCzZs3o0aMHALfddhtjxozh888/56677gJgwYIFzplkRESqI3cXdx4Ie4B+Tfoxfd10fj7yM/O2zuOzXZ/x0OUPcUuzW7CYLYZkK1ZXc6vVSufOnQt0UbLb7cTHx9OtW7eL7vvxxx+TlZXl/KX7Yg4ePMjx48fL9RdrdTUXqVg7duxg/fr1DB48GAAXFxcGDhzIvHnzinUch8OhXxrLW9ohsOeA2RV8GhidRqTc7dy5k8GDB9O0aVN8fHwIDQ0FYP/+/WzatInLLrvMWXT/06ZNmwptJCiuLl26FPjZZrPx5JNP0qFDB/z9/alVqxbLly9n//79AGzfvp2srKwLntvd3Z17773XOfjtxo0b2bp1K8OHDy91VhGRyi7UN5Q3r3+Tl695mUbejTieeZzYNbEMXjKYXxN+NSRTsbuaR0dHM2zYMLp06ULXrl2ZPXs26enpREVFATB06FAaNGjAjBkzCuw3b948br31VurUKTgf7KlTp3jiiSe4/fbbCQ4O5u+//+bRRx+lefPmREZGluKtXVx+V3O1eEtV5+FqYdu08vu3cqlzF9W8efPIzc2lfv2zLWAOhwM3NzdeffVVfHx8AEhNTT2vu/iJEyfw9fUFoGXLlqSmpnLkyBG1epcX51RijcGgb4WlGnD1zGt5NurcxdC/f38aN27M3LlzqV+/Pna7nfbt25OdnY2Hh8dF973U6yaT6bwBfgobPO2fg84+//zzvPTSS8yePZsOHTrg5eXFww8/THZ2dpHOC3ndzcPCwjh48CBvv/021157LY0bN77kfiIi1YHJZOKaRtfQo0EPPvjzA+I2x7E9eTtRy6O4vvH1RHeOpqF3wwrLU+zpxAYOHMgLL7zAlClTCAsLY9OmTSxbtsw54Nr+/fs5cuRIgX127NjB6tWrGTFixHnHs1gsbNmyhZtvvpmWLVsyYsQIOnfuzI8//ljq7uQXc0LTiUk1YTKZ8LS6GLIUteU5NzeXd955h5kzZ7Jp0ybnsnnzZurXr89///tfWrRogdlsZsOGDQX23b17N6mpqbRs2RKAO+64A6vVynPPPVfouQqbjkyKSfd3S1kwmfK6exuxFKNXzPHjx9mxYweTJk3iuuuuo02bNqSkpDhf79ixI5s2bSI5ObnQ/Tt27HjRwcoCAwML/F60c+dOMjIufe/7Tz/9xC233MI999xDp06daNq0KX/99Zfz9RYtWuDh4XHRc3fo0IEuXbowd+5cPvjgA/71r39d8rwiItWNq8WVYe2GseS2JdzZ8k7MJjMr9q3gls9u4aWNL1XY9GMlGlxtzJgx542UmW/VqlXnrWvVqtUFh3P38PBg+fKKvz81RV3NRSrMV199RUpKCiNGjHC2XOe7/fbbmTdvHvfddx8jR47kkUcewcXFhQ4dOnDgwAEmTJjAlVdeyVVXXQVASEgIL774ImPGjCEtLY2hQ4cSGhrKwYMHeeedd6hVq5amFCstjWguNUjt2rWpU6cOb775JvXq1WP//v1MnDjR+frgwYN5+umnufXWW5kxYwb16tXjt99+o379+nTr1o3Y2Fiuu+46mjVrxqBBg8jNzWXp0qVMmDAByBtd/NVXX6Vbt27YbDYmTJiAq+ulf/do0aIFn3zyCWvWrKF27drMmjWLxMRE2rZtC+R1JZ8wYQKPPvooVquV7t27k5SUxB9//FGgoSN/kDUvL68Co62LiNQ0/u7+TOk2hYGtBvL8L8+zLmEdb/3+Vt7935c9xC3Nb8FsKna7dJGV35ErsVybndTTeYW3upqLlL958+YRERFxXtENeYX3r7/+ypYtW3jppZcYNmwYEyZMoF27dgwfPpyOHTvy5ZdfFmhdv//++/nmm284dOgQAwYMoHXr1owcORIfH58CgxFJCanFW2oQs9nMhx9+yIYNG2jfvj3jxo3j+eefd75utVr55ptvqFu3Ln379qVDhw4888wzWCx5t2H07t2bjz/+mC+++IKwsDCuvfZa1q9f79x/5syZhISE0LNnT4YMGcL48ePx9Lx0V/hJkyZx+eWXExkZSe/evQkODubWW28t
sM3kyZN55JFHmDJlCm3atGHgwIEcPXq0wDaDBw/GxcWFwYMH4+7uXopPSkSkemjl34q5N8zlpWteIsQ7hGOnjzFlzRQGfTWIDYkbLn2AEjI5KtvM4iWQlpaGr68vqampzvtEL+b4qSw6P/UtALum98HFUiO/f5AqKjMzkz179tCkSRP9ElVMF/vsinsdqUqK/d7iekLCFhi8CFrdWP4BpcrTdany2rt3L82aNeOXX37h8ssvv+B2uj5Wv/cmIpeWbcvmg+0f8MaWNziVcwqAyNBIxncZT7DXxWftguJdQ2pkxZk/lZiPu4uKbhGRczkc6mouUg3k5OSQkJDApEmTuPLKKy9adIuI1FRWi5Xh7Yfz1YCvuKPlHZhNZuL3x5NjO38QzNIq0T3eVV2wrwfzh3chO9dudBQRkcpnyKK87ua1Q41OIiIl9NNPP3HNNdfQsmVLPvnkE6PjiIhUanU86hDbLZZBrQbxx/E/CPEJKfNz1MjCu5abC9e2DjI6hohI5WMyQWj3vEVEqqzevXtfcGBbEREpXCv/VrTyb1Uux1Y/axEREREREZFypMJbREREREREpByp8BapotSFsPjsdo3rIFKe9G+s6tL/KSIi5atG3uMtUpW5urpiMplISkoiMDCwwPzWUjiHw0F2djZJSUmYzWasVqvRkUSqFavVitls5vDhwwQGBmK1WnVtqkIcDgdJSUmYTCZcXV2NjiMiUi2p8BapYiwWCw0bNuTgwYPs3bvX6DhViqenJ40aNcJsVmcfkbJkNptp0qQJR44c4fDhw0bHkRIwmUw0bNgQi8VidBQRkWpJhbdIFVSrVi1atGhBTk7ZzzFYXVksFlxcXCpFK9ycOXN4/vnnSUhIoFOnTrzyyit07dq10G0XLFhAVFRUgXVubm5kZmZWRFSRIrNarTRq1Ijc3FxsNpvRcaSYXF1dVXSLiJQjFd4iVZTFYtEvSVXQokWLiI6OJi4ujvDwcGbPnk1kZCQ7duygbt26he7j4+PDjh07nD9Xhi8PRAqT31VZ3ZVFREQKUn9LEZEKNGvWLEaNGkVUVBRt27YlLi4OT09P5s+ff8F9TCYTwcHBziUoKKgCE4uIiIhIaanwFhGpINnZ2WzYsIGIiAjnOrPZTEREBGvXrr3gfqdOnaJx48aEhIRwyy238Mcff1z0PFlZWaSlpRVYRERERMQ4KrxFRCrIsWPHsNls57VYBwUFkZCQUOg+rVq1Yv78+Xz++ee899572O12rrrqKg4ePHjB88yYMQNfX1/nEhISUqbvQ0RERESKp1rc450/96RadUSkpPKvH5VtLttu3brRrVs3589XXXUVbdq04Y033uDJJ58sdJ+YmBiio6OdP6emptKoUSNdI0WkRCrr9bEs6HdIESmN4lwfq0XhffLkSQC16ohIqZ08eRJfX99yOXZAQAAWi4XExMQC6xMTEwkODi7SMVxdXbnsssvYtWvXBbdxc3PDzc3N+XP+fwq6RopIaZTn9dEo+h1SRMpCUa6P1aLwrl+/PgcOHMDb27vIo/2mpaUREhLCgQMH8PHxKeeElZM+gzz6HPQZQN43lSdPnqR+/frldg6r1Urnzp2Jj4/n1ltvBcButxMfH8+YMWOKdAybzcbvv/9O3759i3ze4l4j9fchjz4HfQagzwAq5vpoFF0fS0afgz6DfDX9cyjO9bFaFN5ms5mGDRuWaF8fH58a+ZfkXPoM8uhz0GdQES050dHRDBs2jC5dutC1a1dmz55Nenq6c67uoUOH0qBBA2bMmAHAtGnTuPLKK2nevDknTpzg+eefZ9++fYwcObLI5yzpNbKm/33Ip89BnwHoM6huLd35dH0sHX0O+gzy1eTPoajXx2pReIuIVBUDBw4kKSmJKVOmkJCQQFhYGMuWLXMOuLZ//37M5rPjXqakpDBq1CgSEhKoXbs2nTt3Zs2aNbRt29aotyAiIiIixaTCW0Skgo0ZM+aCXctXrVpV4OcXX3yRF198sQJSiYiIiEh5qbHTibm5uREbG1tgAKKaRp9BHn0O+gykIP19yKPPQZ8B6DOQgvT3IY8+B30G+fQ5FJ3JUR3nhhARERERERGpJGpsi7eIiIiIiIhIRVDhLSIiIiIiIlKOVHiLiIiIiIiIlCMV3iIiIiIiIiLlSIW3iIiIiIiISDmqkYX3nDlzCA0Nxd3dnfDwcNavX290pAo1depUTCZTgaV169ZGxyp3P/zwA/3796d+/fqYTCY+++yzAq87HA6mTJlCvXr18PDwICIigp07dxoTtpxc6jMYPnz4eX83brzxRmPCimFq8jVS10ddH3V9lIupyddHqJnXSF0fdX0sKzWu8F60aBHR0dHExsayceNGOnXqRGRkJEePHjU6WoVq164dR44ccS6rV682OlK5S09Pp1OnTsyZM6fQ15977jlefvll4uLiWLduHV5eXkRGRpKZmVnBScvPpT4DgBtvvLHA343//ve/FZhQjKZrpK6PhdH1MY+ujzWbro95ato1UtdHXR/LjKOG6dq1q+OBBx5w/myz2Rz169d3zJgxw8BUFSs2NtbRqVMno2MYCnD873//3979gyS7hgEYv0+edCzC8tVAsb9TLYXS0KRUDlHQUNEgEQ1tEdEmEbRES9DQGi1By1lbJJuqIWiNFCEiFAqirCHQ5wzyGZ6s88Hp9TnwXD8I7J1uXh8uuLX0r8rvpVJJWZaltre3K9eenp6Uy+VSh4eHGia03z/vgVJKxeNxNTExoWUe/D+Y3kj6SB+Voo+ozfQ+KkUj6SN9/C+Mesf7/f1dLi8vJRqNVq41NDRINBqVs7MzjZPV383Njfh8Puno6JC5uTm5vb3VPZJW2WxWcrlc1dloamqScDhs3NlIpVLS1tYmvb29srS0JI+Pj7pHQp3QyDL6WI0+fqCP5qKPH2jkB/r4gT7+O6MW74eHBykWi+LxeKquezweyeVymqaqv3A4LPv7+3J8fCx7e3uSzWZleHhYXl5edI+mza/n3/SzMTY2JgcHB5JMJmVra0tOT08lFotJsVjUPRrqgEbSx1roYxl9NBt9LKOR1ehjGX38PX/qHgD1F4vFKo/7+/slHA5LIBCQo6MjWVhY0DgZdJuZmak87uvrk/7+funs7JRUKiWRSETjZEB90Ed8hT4CNBK10cffY9Q73m63WxwOh+Tz+arr+XxeLMvSNJV+zc3N0tPTI+l0Wvco2vx6/jkb1To6OsTtdht9NkxCIz+jj/TxK/TRLPSxNtMbSR9ro4+1GbV4O51OGRgYkGQyWblWKpUkmUzK0NCQxsn0KhQKkslkxOv16h5Fm2AwKJZlVZ2N5+dnubi4MPps3N3dyePjo9FnwyQ08jP6SB+/Qh/NQh9rM72R9LE2+libcX9qvrKyIvF4XAYHByUUCsnOzo68vr7K/Py87tHqZnV1VcbHxyUQCMj9/b2sr6+Lw+GQ2dlZ3aPZqlAoVL3yls1m5erqSlpaWsTv98vy8rJsbm5Kd3e3BINBSSQS4vP5ZHJyUt/QP+y7e9DS0iIbGxsyNTUllmVJJpORtbU16erqktH
RUY1To55MbyR9LKOP9BGfmd5HETMbSR/p44/R/bHqOuzu7iq/36+cTqcKhULq/Pxc90h1NT09rbxer3I6naq9vV1NT0+rdDqteyzbnZycKBH59BOPx5VS5a+ESCQSyuPxKJfLpSKRiLq+vtY79A/77h68vb2pkZER1draqhobG1UgEFCLi4sql8vpHht1ZnIj6SN9pI/4jsl9VMrMRtJH+vhT/lBKqXos+AAAAAAAmMio//EGAAAAAKDeWLwBAAAAALARizcAAAAAADZi8QYAAAAAwEYs3gAAAAAA2IjFGwAAAAAAG7F4AwAAAABgIxZvAAAAAABsxOINAAAAAICNWLwBAAAAALARizcAAAAAADb6Gx5rplhwEToZAAAAAElFTkSuQmCC\n"
+ },
+ "metadata": {}
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "ranking_model.evaluate(ranking_test_dataset)"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "RD4UirtNvDXT",
+ "outputId": "9964607f-eea1-4c1a-d117-2a847416cfec"
+ },
+ "execution_count": 44,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\u001b[1m1/1\u001b[0m \u001b[32m━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[37m\u001b[0m \u001b[1m1s\u001b[0m 630ms/step - AUC: 0.9867 - accuracy: 0.9372 - loss: 0.2243\n"
+ ]
+ },
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "[0.2243196964263916, 0.9866776466369629, 0.9371727705001831]"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 44
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# **Predictions of Ranking Model**\n",
+ "The retrieval model gave us a list of ads that are generally relevant (high dot product similarity). The ranking model will now calculate the specific probability (0% to 100%) that the user will click each of those ads.\n",
+ "\n",
+ "The Ranking model expects pairs of (User, Ad). Since we are scoring 10 ads for 1 user, we cannot just pass the user features once.We effectively take user's features 10 times to create a batch."
+ ],
+ "metadata": {
+ "id": "XaLAPapNjdYm"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "def rerank_ads_for_user(user_row, retrieved_ads, ranking_model):\n",
+ " ads_df = pd.DataFrame(retrieved_ads)\n",
+ " num_ads = len(ads_df)\n",
+ " user_inputs = { k: tf.fill((num_ads, 1), str(user_row[k]) if k not in continuous_features else float(user_row[k]))\n",
+ " for k in USER_FEATURES}\n",
+ " ad_inputs = {k: tf.reshape(tf.constant(ads_df[k].astype(str).values), (-1, 1)) for k in AD_FEATURES}\n",
+ " scores = ranking_model({\"user\": user_inputs, \"positive_ad\": ad_inputs}).numpy().flatten()\n",
+ " ads_df[\"ranking_score\"] = scores\n",
+ " return ads_df.sort_values(\"ranking_score\", ascending=False).to_dict(\"records\")\n",
+ "\n",
+ "sample_user = x_test.iloc[0]\n",
+ "scores, indices = retrieval_engine.query_batch(pd.DataFrame([sample_user]))\n",
+ "top_ads = retrieval_engine.decode_results(scores, indices)[0]\n",
+ "final_ranked_ads = rerank_ads_for_user(sample_user, top_ads, ranking_model)\n",
+ "print(f\"User: {sample_user['user_id']}\")\n",
+ "print(f\"{'Ad ID':<10} | {'Topic':<30} | {'Retrival Score':<11} | {'Rank Probability'}\")\n",
+ "for item in final_ranked_ads:\n",
+ " print(f\"{item['ad_id']:<10} | {item['ad_topic'][:28]:<30} | {item['score']:.4f} | {item['ranking_score']*100:.2f}%\")"
+ ],
+ "metadata": {
+ "id": "MvPsCaw_vDaT",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "7b16a6ac-679e-41b6-cce8-67b4f193b91a"
+ },
+ "execution_count": 49,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "User: user_216\n",
+ "Ad ID | Topic | Retrival Score | Rank Probability\n",
+ "ad_660 | Profound optimizing utilizat | 8.1021 | 99.19%\n",
+ "ad_318 | Front-line upward-trending g | 6.6563 | 99.07%\n",
+ "ad_311 | Front-line methodical utiliz | 6.6728 | 98.77%\n",
+ "ad_31 | Ameliorated well-modulated c | 6.4871 | 98.65%\n",
+ "ad_861 | Synergized clear-thinking pr | 6.2368 | 98.57%\n",
+ "ad_387 | Implemented didactic support | 5.9674 | 98.47%\n",
+ "ad_799 | Self-enabling optimal initia | 5.8983 | 98.43%\n",
+ "ad_984 | Vision-oriented contextually | 5.9103 | 98.29%\n",
+ "ad_706 | Re-engineered demand-driven | 6.5815 | 98.22%\n",
+ "ad_916 | Universal multi-state system | 5.6566 | 98.17%\n"
+ ]
+ }
+ ]
+ },
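+ {
+ "cell_type": "markdown",
+ "source": [
+ "Finally, the same two-stage flow can be applied to a batch of users by re-ranking the retrieved candidates per user. The cell below is a small sketch that only reuses the helpers defined above (`query_batch`, `decode_results`, `rerank_ads_for_user`) and picks the single best ad for each of the first few test users."
+ ],
+ "metadata": {}
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Sketch: end-to-end two-stage recommendation for a small batch of test users.\n",
+ "batch_users = x_test.iloc[:5]\n",
+ "batch_scores, batch_indices = retrieval_engine.query_batch(batch_users)\n",
+ "candidates_per_user = retrieval_engine.decode_results(batch_scores, batch_indices)\n",
+ "\n",
+ "best_ads = {}\n",
+ "for (_, user_row), candidates in zip(batch_users.iterrows(), candidates_per_user):\n",
+ "    # Retrieval narrows the inventory to k ads; ranking picks the single best one.\n",
+ "    ranked = rerank_ads_for_user(user_row, candidates, ranking_model)\n",
+ "    best_ads[user_row[\"user_id\"]] = ranked[0][\"ad_id\"]\n",
+ "\n",
+ "print(best_ads)"
+ ],
+ "metadata": {},
+ "execution_count": null,
+ "outputs": []
+ }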
+ ]
+}
\ No newline at end of file