# Pagination Tutorial
source: https://developer.mastercard.com/bin-lookup/documentation/tutorials-and-guides/pagination-tutorial/index.md

## Overview {#overview}

This tutorial covers how to use pagination to call the API in sequence and pull all of the data. It also demonstrates how to store the results of each call in memory and save them to a CSV file. Python is the language used in this tutorial, but the steps and code will be similar for other languages.

At the end of this tutorial you will have a simple Python script that, when run, calls the API several times in a paginated fashion until it has received all of the data. The code stores the results in a list and converts them into a CSV file.

### Prerequisites {#prerequisites}

For this tutorial you must ensure you have `Python` and `pip` installed on your system. If you don't already have Python installed, check out the [Python documentation](https://www.python.org/doc/).
Tip: If you have followed any of the other BIN Lookup tutorials in this section, you can skip steps 1 and 2 because they are the same for every tutorial. If you want to get straight to the code, all example files are available on [GitHub](https://github.com/Mastercard-Samples/bin-lookup-sample-code).

## 1. Create the File \& Download Dependencies {#1-create-the-file--download-dependencies}

First, we need to make sure we have the dependencies our code needs. Install the `mastercard-oauth1-signer` and `requests` packages with the following command:

```terminal
pip install mastercard-oauth1-signer requests
```

Next, create a file called `pagination-example.py` and add the following imports at the top of the file:

```python
import requests
from requests.auth import AuthBase
import oauth1.authenticationutils as authenticationutils
from oauth1.signer import OAuthSigner
import csv
```

## 2. Configuring Authentication {#2-configuring-authentication}

Next, create some variables to hold the base URL, which is the URL for the API we will be calling, and your consumer key. You can get your consumer key from your project in the projects dashboard.

```python
BASE_URL = 'Add Sandbox or Production BASE URL here'
CONSUMER_KEY = 'Add your project consumer key here' 
```

Following that, use this code to create a simple class that will sign all HTTP requests we send using the Python `requests` library:

```python
# MCSigner
# Helper class for signing request objects
class MCSigner(AuthBase):
    def __init__(self, consumer_key, signing_key):
        self.signer = OAuthSigner(consumer_key, signing_key)

    def __call__(self, request):
        self.signer.sign_request(request.url, request)
        return request
```
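The `MCSigner` class works because `requests` calls any `AuthBase` subclass once per request, letting it mutate the prepared request before it is sent. As a minimal illustration of that hook (using a made-up bearer token rather than the real OAuth signature), the following sketch prepares a request without sending it:

```python
import requests
from requests.auth import AuthBase

# Toy auth class (hypothetical token) using the same AuthBase hook as MCSigner:
# requests invokes __call__ with the PreparedRequest before sending it.
class HeaderAuth(AuthBase):
    def __init__(self, token):
        self.token = token

    def __call__(self, request):
        request.headers['Authorization'] = f'Bearer {self.token}'
        return request

# prepare() builds and signs the request locally, so no network call happens
req = requests.Request('POST', 'https://example.com/api', auth=HeaderAuth('demo-token')).prepare()
print(req.headers['Authorization'])  # Bearer demo-token
```

`MCSigner` does the same thing, except its `__call__` delegates to the Mastercard `OAuthSigner` to compute and attach the OAuth 1.0a signature header.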

Finally, use the Mastercard OAuth library to create a signing key using the .p12 cert that you downloaded when you created your project, and the keystore password that you set:

```python
# Generate a signing key and use it, and consumer key, with the signer class
signing_key = authenticationutils.load_signing_key('./certs/sandbox.p12', 'keystorepassword')
signer = MCSigner(CONSUMER_KEY, signing_key)
```

## 3. The Pagination Loop {#3-the-pagination-loop}

Now that we have our credentials configured and our request handler set up for signing, we can begin calling the paginated API. The API provides metadata telling us which page we are on, how many records there are in total, and how many pages there are in total. Let's create a method that calls the API sequentially to download all account range records.

```python
def fetch_data_from_api(base_url, initial_page=1, post_payload=None, signer=None, data=None):
    # Avoid a mutable default argument; copy so the caller's dict is not modified
    post_payload = dict(post_payload or {})
    all_items = []
    current_page = initial_page
    total_items_downloaded = 0
    total_items = None

    while True:

        # Update the payload to include the current page number for pagination
        post_payload.update({"page": current_page, 'size': '10000'})

        # Perform a POST request to the API
        response = requests.post(base_url, params=post_payload, auth=signer, json=data)
        response.raise_for_status()  # Fail fast on HTTP errors
        response_data = response.json()
        
        # Extract metadata
        current_page_number = response_data['currentPageNumber']
        total_pages = response_data['totalPages']
        total_items = response_data['totalItems']
        
        # Extract the actual items
        items = response_data.get('items', [])
        
        # Add the items from the current page to the master list
        all_items.extend(items)
        
        # Update the total number of items downloaded
        total_items_downloaded += len(items)
        
        # Print the current progress for reference
        print(f"Downloaded {len(items)} items from page {current_page_number}/{total_pages}.")
        
        # Check if we have reached the last page
        if current_page_number >= total_pages:
            break
        
        # Move to the next page
        current_page += 1

    # After the loop, verify that the number of downloaded items matches the totalItems value
    if total_items_downloaded == total_items:
        print(f"Successfully downloaded all {total_items_downloaded} items.")
    else:
        print(f"Warning: Downloaded {total_items_downloaded} items, but expected {total_items} items.")
    
    return all_items
```

Note: You can increase or decrease the number of records returned per page by changing the `size` value. We recommend a number between 5000 and 10000; fewer records per page means more API calls, while more records per page means longer response times.
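To see the trade-off in concrete terms, a quick back-of-the-envelope calculation (the total below is hypothetical) shows how the page size drives the number of API calls needed:

```python
import math

# Hypothetical total taken from the API's totalItems metadata
total_items = 250_000

for size in (5_000, 10_000):
    calls = math.ceil(total_items / size)
    print(f"size={size}: {calls} API calls")
# size=5000: 50 API calls
# size=10000: 25 API calls
```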

To call this method, we use the following code. Once the download is complete, you can store the results:

```python
all_records = fetch_data_from_api(base_url=f'{BASE_URL}/bin-ranges', initial_page=1, post_payload={}, signer=signer)
```
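If you want to check the loop's termination logic without hitting the real API, here is a toy sketch that replaces the HTTP call with a stub returning the same metadata shape (`currentPageNumber`, `totalPages`, `totalItems`, `items`). The totals and page size are made up for the example:

```python
import math

def make_stub_api(total_items, page_size):
    """Return a fake fetch function mimicking the API's pagination metadata."""
    total_pages = math.ceil(total_items / page_size)
    def fetch_page(page):
        start = (page - 1) * page_size
        return {
            "currentPageNumber": page,
            "totalPages": total_pages,
            "totalItems": total_items,
            "items": list(range(start, min(start + page_size, total_items))),
        }
    return fetch_page

def paginate(fetch_page):
    # Same loop shape as fetch_data_from_api, minus the HTTP layer
    all_items, page = [], 1
    while True:
        resp = fetch_page(page)
        all_items.extend(resp["items"])
        if resp["currentPageNumber"] >= resp["totalPages"]:
            break
        page += 1
    return all_items

records = paginate(make_stub_api(total_items=25, page_size=10))
print(len(records))  # 25 items collected across 3 pages
```

Stubbing the fetch function like this is also a convenient way to unit test the pagination logic in isolation.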

## 4. Storing in a CSV {#4-storing-in-a-csv}

You now have all records stored in the `all_records` variable, but if you want to store them persistently, here is some sample code to save the data to a CSV file:

```python
# Set up a file to store the results from the API
# Set up a file to store the results from the API
# newline='' prevents the csv module from writing blank rows on Windows
with open('account_ranges.csv', 'w', encoding='utf-8', newline='') as data_file:
    csv_writer = csv.writer(data_file)

    # Loop through the JSON objects and convert them into CSV rows
    for count, item in enumerate(all_records):
        if count == 0:
            # Write the header row using the keys of the first item
            csv_writer.writerow(item.keys())
        # Write the data row
        csv_writer.writerow(item.values())
```
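As a quick sanity check on the round trip, `csv.DictWriter` and `csv.DictReader` can write and re-read dictionary-shaped records in memory. The field names below are hypothetical, chosen only to mimic the shape of the API items:

```python
import csv
import io

# Hypothetical records shaped like the items returned by the API
records = [
    {"binNum": "222300", "country": "US"},
    {"binNum": "222301", "country": "IE"},
]

# DictWriter writes the header row for us from the field names
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=records[0].keys())
writer.writeheader()
writer.writerows(records)

# Read the CSV back to confirm nothing was lost in the conversion
buf.seek(0)
rows = list(csv.DictReader(buf))
print(len(rows), rows[0]["binNum"])  # 2 222300
```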

To execute the Python script, use the following command:

```terminal
python pagination-example.py
```

Once the script completes you will have a CSV file called `account_ranges.csv` containing all of the records from the BIN Lookup service.

## Code {#code}

Here is the full Python source code:

```python
import requests
from requests.auth import AuthBase
import oauth1.authenticationutils as authenticationutils
from oauth1.signer import OAuthSigner
import csv

BASE_URL = 'Add Sandbox or Production BASE URL here'
CONSUMER_KEY = 'Add your project consumer key here'

# MCSigner
# Helper class for signing request objects
class MCSigner(AuthBase):
    def __init__(self, consumer_key, signing_key):
        self.signer = OAuthSigner(consumer_key, signing_key)

    def __call__(self, request):
        self.signer.sign_request(request.url, request)
        return request

# Generate a signing key and use it, and consumer key, with the signer class
signing_key = authenticationutils.load_signing_key('./certs/sandbox.p12', 'keystorepassword')
signer = MCSigner(CONSUMER_KEY, signing_key)

def fetch_data_from_api(base_url, initial_page=1, post_payload=None, signer=None, data=None):
    # Avoid a mutable default argument; copy so the caller's dict is not modified
    post_payload = dict(post_payload or {})
    all_items = []
    current_page = initial_page
    total_items_downloaded = 0
    total_items = None

    while True:

        # Update the payload to include the current page number for pagination
        post_payload.update({"page": current_page, 'size': '10000'})

        # Perform a POST request to the API
        response = requests.post(base_url, params=post_payload, auth=signer, json=data)
        response.raise_for_status()  # Fail fast on HTTP errors
        response_data = response.json()
        
        # Extract metadata
        current_page_number = response_data['currentPageNumber']
        total_pages = response_data['totalPages']
        total_items = response_data['totalItems']
        
        # Extract the actual items
        items = response_data.get('items', [])
        
        # Add the items from the current page to the master list
        all_items.extend(items)
        
        # Update the total number of items downloaded
        total_items_downloaded += len(items)
        
        # Print the current progress for reference
        print(f"Downloaded {len(items)} items from page {current_page_number}/{total_pages}.")
        
        # Check if we have reached the last page
        if current_page_number >= total_pages:
            break
        
        # Move to the next page
        current_page += 1

    # After the loop, verify that the number of downloaded items matches the totalItems value
    if total_items_downloaded == total_items:
        print(f"Successfully downloaded all {total_items_downloaded} items.")
    else:
        print(f"Warning: Downloaded {total_items_downloaded} items, but expected {total_items} items.")
    
    return all_items

all_records = fetch_data_from_api(base_url=f'{BASE_URL}/bin-ranges', initial_page=1, post_payload={}, signer=signer)

# Set up a file to store the results from the API
# Set up a file to store the results from the API
# newline='' prevents the csv module from writing blank rows on Windows
with open('account_ranges.csv', 'w', encoding='utf-8', newline='') as data_file:
    csv_writer = csv.writer(data_file)

    # Loop through the JSON objects and convert them into CSV rows
    for count, item in enumerate(all_records):
        if count == 0:
            # Write the header row using the keys of the first item
            csv_writer.writerow(item.keys())
        # Write the data row
        csv_writer.writerow(item.values())
```

