
Building a New Connector

A guide for how to add a new connector to Parsons, from start to finish

Published on Jun 20, 2023

This is a hands-on, interactive guide. It is based on an earlier version.

Picking a connector

You may have come to this guide because you want to build a specific connector. That’s great! If you don’t have a connector picked, you can browse the new connector request list or ask for requests from the Parsons Slack.

Once you’ve got a connector in mind, you’ll need to check if it’s actually possible to build a connector for that platform:

  1. Does the platform even have an API? You should usually be able to find it by searching their website, but you can always email them to double check (sometimes platforms keep their API documentation behind a login).

  2. Is there an account you can use for testing? Ideally, you’ll be able to create a test account for free, but not all platforms let you do that. You can also use a real account for testing purposes; you’ll just have to be careful with it. If you have trouble with this step, check out our Sandbox Access Guide.

Take notes on these steps! You’ll need them later for the documentation.

Basic Setup

Creating a new connector is a kind of contribution, so if you’re new to contributing, follow the steps to set up in the contribution guide. In particular, you’ll want to make sure you have followed all the steps under Install Parsons for Development. If you’re not new to contributing, you can skip them.

Once you’re set up, make sure your virtual environment is activated and you’ve pulled the most recent version from the main branch of the Parsons repo. Then you can create a new branch named something like add-X-connector.

File structure

The first thing you’ll need to do is create a folder for your connector. This folder should have the same name as the module (file) within the folder, and the same name as the connector class. For example, the airtable connector is in the airtable folder, and the hustle connector is in the hustle folder.

Inside the folder, create two files. The first should be named __init__.py and should be empty. The second will have the same name as your folder - this is the file which will have your connector’s code. For example, in the airtable folder this file is called airtable.py and in the hustle folder it’s called hustle.py.

The directory should look like this:

connector/
  __init__.py
  connector.py

Replace “connector” with your specific connector’s name, e.g. actblue, ngpvan, actionkit, etc.

Connector Template

Open the connector.py file. At the top of the file, add the following code to enable logging for our connector, as well as the code for the connector class itself. As before, replace “connector” with your specific connector’s name.

import logging
from parsons.utilities.api_connector import APIConnector
from parsons.utilities import check_env
from parsons import Table

logger = logging.getLogger(__name__)

URI = ""

class Connector(object):
    """
    Instantiate Connector class.

    `Args:`
    """

    def __init__(self, api_key=None):
        self.api_key = api_key
        self.uri = URI

The text enclosed in triple quotes """ """ is called a docstring (short for documentation string), and is used to document the class. Typically, it includes the arguments (`Args:`) accepted by the __init__ method of the class. You’ll also use docstrings to document the individual methods you create.

Making your connector easy to import

We make it easier for users to import connectors by making them all available at the “top level” of Parsons. People can use commands like from parsons import ActionNetwork instead of from parsons.action_network.action_network import ActionNetwork.

To make this possible, we need to make some edits in parsons/__init__.py. Open it and scroll down to the big list of connectors. Find your connector’s place in the list, which is in alphabetical order, and add your connector as a tuple like so:

    ("parsons.actblue.actblue", "ActBlue"),

The first item is the path to your connector class, and the second item is the name you want people to use when importing the connector class. It should be identical to the connector class’s name.

Don’t forget the comma at the end!

Initialization and Authentication

Some Python Basics

The __init__ method defines how the class is instantiated. The parameters defined there correspond to the inputs you pass in when calling it.

We can instantiate our class in three different ways:

item = Connector(api_key="123123")
item = Connector("123123")
item = Connector()

The first time, we’re instantiating with the api key passed in as a keyword argument.

The second time, we’re instantiating with the api key passed in as a positional argument.

The third time, we’re not passing in the api key at all, and instead using the default.

When an argument has a default, that makes it optional. So api_key in our method above is optional, because it has a default, None.

It’s easy to accidentally pass positional variables in the wrong order, so we recommend naming the arguments as you pass them in, that is, using keyword arguments.

Note that when we’re defining our __init__ method, self is the first parameter passed in. self refers to the class instance itself, and is always automatically passed in by Python as the first positional argument. self is used to add or change variables and methods that live “on” the instance.
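To make this concrete, here’s a tiny hypothetical class (not part of Parsons) that stores and updates state on self:

```python
class Counter(object):
    def __init__(self, start=0):
        # self.count lives "on" this particular instance
        self.count = start

    def increment(self):
        # methods reach instance state through self
        self.count += 1
        return self.count

c = Counter(start=10)
print(c.increment())  # 11
print(c.count)        # 11
```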

When defining methods it’s important to always put self in the function/method signature. “Signature” is just a fancy term for the first line of the function, ie, def __init__(self, api_key=None):

If you forget to include self, you’ll typically get a TypeError complaining about the number of arguments. Say you define a method with only one positional parameter, a, and call it with a single argument like “hi!”. Python automatically passes in the instance as well, so the method receives two arguments when it only accepts one, and Python throws an error.
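Here’s a minimal sketch of that failure, using a hypothetical class:

```python
class Greeter(object):
    # BUG: "self" is missing from the method signature
    def greet(a):
        return f"hello, {a}"

g = Greeter()
try:
    g.greet("hi!")
except TypeError as err:
    # Python passed the instance in as the first argument, so greet()
    # received 2 arguments even though its signature only accepts 1
    print(err)
```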

Initializing Your Parsons Connector

For Parsons connectors, the __init__ method handles authentication. Some connectors do more, but most just authenticate.

There are typically two steps: getting authentication information, and using that information to create a client.

Getting Authentication Information

The first thing you’ll need to do is determine what type of authentication the platform you’re connecting to uses. API keys? Username and password? Tokens of some kind?

Once you figure this out, write down exactly how to get that information from the third-party platform. You’ll add that to the documentation later.

We like to give users two different options for getting authentication info to the connector: passing them as arguments to the __init__ method, or storing them as environmental variables. Use the Parsons utility check_env to allow for either possibility. Your __init__ method should look something like this:

def __init__(self, api_key=None):
  self.api_key = check_env.check('CONNECTOR_API_KEY', api_key)

This code uses the api_key passed in, unless it is None. If it’s None, it looks for the specified environmental variable and uses that instead. If the api_key passed in is None and the environmental variable isn’t set, check_env raises an error.

Note that environmental variables all follow the same format - the specific connector’s name, then the name of the authentication parameter (e.g. password or api_key), in all caps, with underscores between words.
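Conceptually, check_env.check behaves roughly like the sketch below. This is a simplified stand-in, not the actual Parsons implementation:

```python
import os

def check(env_var_name, value):
    """Return value if provided; otherwise fall back to the environment
    variable, raising an error if neither is set."""
    if value is not None:
        return value
    if env_var_name in os.environ:
        return os.environ[env_var_name]
    raise KeyError(
        f"No {env_var_name} found. Store it as an environment variable or "
        "pass it in as an argument."
    )

os.environ["CONNECTOR_API_KEY"] = "123123"
print(check("CONNECTOR_API_KEY", None))        # uses the environment variable
print(check("CONNECTOR_API_KEY", "override"))  # uses the passed-in value
```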

Creating a Client

A “client” is something that communicates with a server. Big tech companies tend to create “client libraries” to make it easier for others to use their platforms. Where these exist, Parsons takes advantage of them.

If your platform has an actively maintained client library, you can build off that. You’ll want to download and install it, add it to the requirements.txt file, and then import it into connector.py file. You can use the Airtable Connector as a guide, but you’ll also need to look at the client library’s own documentation to see how it handles authentication, how to call methods, etc.

Other platforms, especially smaller, movement-specific platforms, don’t have their own client libraries. In those cases, you’ll use Parsons’s APIConnector class.

The API Connector requires:

  • The uri (the base URL for the server’s API). This is often hard-coded as a constant in the file but can be passed in as a configurable variable.

  • The authentication information passed in as a tuple.

  • Header information (typically "accept": "application/json") - optional for some connectors

For example, the ActBlue init method looks something like this (slightly simplified for readability):

def __init__(self, actblue_client_uuid=None, actblue_client_secret=None, actblue_uri=None):

  # getting authentication information
  self.actblue_client_uuid = check_env.check('ACTBLUE_CLIENT_UUID', actblue_client_uuid)
  self.actblue_client_secret = check_env.check('ACTBLUE_CLIENT_SECRET', actblue_client_secret)
  self.uri = check_env.check('ACTBLUE_URI', actblue_uri)

  # setting headers
  self.headers = {"accept": "application/json"}

  # instantiating the client by passing in required info to APIConnector
  self.client = APIConnector(
    self.uri,
    auth=(self.actblue_client_uuid, self.actblue_client_secret),
    headers=self.headers)

Writing Your First Method

Pick a method from the API documentation that you want to implement. You’ll probably want to start with something simple, ideally a GET method with only a few parameters.

Here’s an example which hits a /campaigns endpoint:

class Connector(object):

  # snip out docstring and __init__ method
  def get_campaigns(self):
    result = self.client.get_request("campaigns")
    return Table(result)

The APIConnector (accessed here as self.client) has methods for each of the most common HTTP Request methods, including get_request, post_request, delete_request, etc. get_request takes in the URL to fetch (required) and a dictionary with additional parameters (optional).

The endpoint url that you pass in should be everything after the uri/base url passed into your connector’s __init__. For example, if the uri/base url passed in was https://example.org/api/v3 then the extra piece here should be something like campaigns or donors or events. You do not need beginning or ending slashes, but you should keep internal slashes. Pass in campaigns rather than /campaigns or campaigns/, but campaigns/all rather than campaignsall.
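If you want to be defensive about slashes, a small helper (hypothetical, not part of Parsons) can normalize the endpoint before you pass it to get_request:

```python
def full_url(base_uri, endpoint):
    # strip leading/trailing slashes from the endpoint, keep internal ones
    return base_uri.rstrip("/") + "/" + endpoint.strip("/")

print(full_url("https://example.org/api/v3", "/campaigns/"))
# https://example.org/api/v3/campaigns
print(full_url("https://example.org/api/v3", "campaigns/all"))
# https://example.org/api/v3/campaigns/all
```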

For now, let’s ignore any extra parameters, and focus on testing what we’ve already got.

Writing Your First Test

Setting Up the Testing Infrastructure

Tests are added in the test folder, found at the top level of the Parsons directory. If you think your connector will require separate data files for testing, create a folder structure that looks something like this:

test_connector  
    __init__.py
    datafile.json
    test_connector.py  

Otherwise, you can simply create a test_connector.py file at the top level of the test directory.

The bare bones of your test file should look something like this:

import unittest
from parsons import Table, Connector

class TestConnector(unittest.TestCase):
  
  def setUp(self):    
    self.api_key = "$APIKEY"
    self.connector = Connector(api_key=self.api_key)

  def tearDown(self):    
    pass

  def test_get_campaigns(self):    
    pass

(As before, replace uses of Connector and connector with your specific connector’s name. Do follow the capitalization pattern.)

This code creates a new class of test cases using the unittest library. Each class of test cases has a setUp method and a tearDown method that are run before and after each test, respectively. For now, we’ll instantiate our connector class in the setUp method, and leave the tearDown method empty.

Individual test cases are written using the format test_x. Test cases that don’t start with test_ will not be found or run. In general, if you’re testing a specific function or method, it’s good to use that in the name of the test, e.g. test_get_campaigns.

Run the following command from the top level of your Parsons repository:

pytest -s test/test_connector.py

Note: if you put your tests into their own directory, you may need to run:

pytest -s test/yourconnectorname/test_connector.py

If the tests pass, it means you haven’t made any typos or errors in setting this up.

Test output frequently contains warnings. You can often ignore those, especially if they’re for tests other than your own. Just focus on making sure your tests pass.

Writing the Simplest Possible Test

Once the infrastructure’s set up, add the following to your test:

def test_get_campaigns(self):        
  print(self.connector.get_campaigns())

This is not how the final version of this test will look, but it allows us to see whether our connector is more or less working, including our authentication.

The result of this test should be data (if there is data at the endpoint you’re hitting) rather than an authorization error. If you see data, great - you’re ready to move on.

Otherwise…this guide can’t cover all the things that might go wrong, but here are some common problems:

  • The authentication info you passed in was in the wrong format. Pytest may show you an error raised in APIConnector’s prepare_auth method.

  • The data is being returned in an unexpected format. For instance, pytest may show you a JSONDecodeError.

Please reach out for help if you get stuck on authorization!

Don’t forget to test that this works with environmental variables too!

Export/set your environmental variables and then remove them from the setUp method, which should now look like this:

def setUp(self):      
  self.connector = Connector()

Your tests will hopefully once again pass! If not, there may be an error in how you’re using check_env.

A Better (Slightly More Complex) Test

Now that we know our authentication is working, we can switch our focus to testing the method itself.

It’s time to write an “actual” test using assertions (all of which are available on the TestCase class). First, we want to test whether we got a parsons table at all:

def test_get_campaigns(self):   
  result = self.connector.get_campaigns()   
  self.assertIsInstance(result, Table)

assertIsInstance tests whether a given object is an instance of a given class. In this case, we’re checking whether result is a Table.

Once you’ve gotten that to pass, it’s time to check the content of that table. Add another line to the test with a second assertion:

def test_get_campaigns(self):   
  result = self.connector.get_campaigns()   
  self.assertIsInstance(result, Table)       
  self.assertDictEqual(result.to_dicts()[0], {})

The code result.to_dicts()[0] is getting our Parsons Table as a list of dictionaries and then selecting the first one. You don’t have to get your data in precisely this way, as long as you get some data. I’ve chosen to get the data as a dictionary here because assertDictEqual is a special assertion that allows us to compare dictionaries without worrying about the order of the keys within them.

With this assertion, we’re checking that we get the data we expect. Of course, we expect something other than an empty dictionary. {} is just a placeholder.

Print the result of get_campaigns and put it in a dictionary. If it’s small, you can put it in the test itself; otherwise you can put it in another file (example of an external file).

def test_get_campaigns(self):  
  result = self.connector.get_campaigns()  
  self.assertIsInstance(result, Table)  
  test_dict = {        
    'created_at': '2022-09-20T18:47:05.381Z',        
    'donations_count': 0,        
    'id': 366172,        
    'name': 'Test Campaign',        
    'total_raised': '0.0',  
  }  
  self.assertDictEqual(result.to_dicts()[0], test_dict)

Alternatively, you may want to just check that the columns are all there without worrying about the data. That might look like:

def test_get_campaigns(self):
  result = self.connector.get_campaigns()
  self.assertIsInstance(result, Table)
  columns = [
    'id', 'name', 'created_at', 'total_raised', 'donations_count'
  ]
  self.assertEqual(result.columns, columns)

And that’s it! You’ve written your first test!

This is something called a “live test”. You are testing against “live” or “real” data you’re getting from the platform. Eventually that data will change and the test will break, even if the code is still correct, because you have hard-coded the data you expect into the test.

“Live” tests are more likely to catch errors, but they’re hard to use in our automated tests. Later, we’ll be adapting these tests for use in our automated test suite (see the Adapting Your Tests section). But for now, we’re ready to write more methods.

Writing More Methods

This guide assumes you don’t have access to a client library and are using the APIConnector utility. If you do have a client, the examples would look a little different. For example, the Github connector uses a Github client, so it just wraps the client methods and adds one extra line to transform the results into a Parsons Table (see an example).

Fleshing out our GET method

Let’s take another look at the get_campaigns method we wrote. This method doesn’t have any ability to handle parameters.

Often, API endpoints will allow for required or optional parameters which help specify or filter results. For example, you may be able to filter campaigns by date, or by a specific ID.

Your methods should mirror the API endpoint they’re hitting. So if the API endpoint has parameters, so should your method.

Let’s say our method has optional parameters campaign_id and order. We can adapt our method to accept those parameters:

def get_campaigns(self, campaign_id=None, order=None):     
  all_params = {"campaign_id": campaign_id, "order": order}     
  params_with_values = {key: value for key, value in all_params.items() if value is not None}     
  result = self.client.get_request("campaigns", params=params_with_values)     
  return Table(result)

Note that the default value for both is None. After we create our params dict, we loop through it with a dictionary comprehension and remove any items that are the default value of None. That way we only pass in the parameter if it’s specified by the caller.

You might be thinking, “do we really need all this extra code?” We do not! And it gets worse the more parameters you have. Some methods may allow over a dozen parameters. So we recommend making your methods more concise by using special Python syntax:

def get_campaigns(self, **kwargs):  
  result = self.client.get_request("campaigns", params=kwargs)  
  return Table(result)

kwargs is a dictionary that contains any keywords arguments passed in. Note that this approach doesn’t work if you want something besides None to be the default for a parameter. In that case, you’d want to separate out that keyword in the signature, ie:

def get_campaigns(self, order="Asc", **kwargs):  
  kwargs["order"] = order  
  result = self.client.get_request("campaigns", params=kwargs)  
  return Table(result)

Note that if a parameter is separated out, you need to manually add it back to kwargs before passing kwargs on to get_request.

Testing Our Parameters

Now it’s time to test our changes. First, re-run the tests as they are and make sure they still work. None of the parameters are required, so our existing test, which doesn’t pass in any parameters, should still get the same result.

Next, let’s create a new test for testing this method with filters/parameters. That way if it breaks but the first test doesn’t, we’ll know it’s an issue with the filters.

A test for the id filter might look something like this:

def test_get_campaigns_with_campaign_id(self):
  result = self.connector.get_campaigns(campaign_id=366172)
  self.assertEqual(result.num_rows, 1)
  self.assertEqual(result.to_dicts()[0]["id"], 366172)

Because we’re getting only one campaign, chosen by id, the number of rows in the resulting table should be 1. And that row should have a value in the ID column equal to the ID passed in.

A test for the order filter might look something like this:

def test_get_campaigns_with_order_filter(self):
  result = self.connector.get_campaigns()    
  self.assertEqual(result["id"], [366590, 366172])
  result = self.connector.get_campaigns(order="desc")    
  self.assertEqual(result["id"], [366590, 366172])    
  result = self.connector.get_campaigns(order="asc")    
  self.assertEqual(result["id"], [366172, 366590])

In this example, descending order is default, so passing in order="desc" doesn’t change anything, but order="asc" should swap the order around.

Documenting with docstrings

Every method should have docstrings documenting the method. Those will be automatically collected and displayed on the connector’s page in the documentation. A docstring for our get_campaigns method might look like this:

def get_campaigns(self, campaign_id=None, order=None):
  """
  Get information on campaigns.

  `Args:`
    campaign_id: int
      Optional. The ID of the campaign to get. If omitted, the method
      will return all campaigns.
    order: str
      Optional. Valid values are "asc" and "desc". If not supplied,
      order is descending by default.

  `Returns:`
    Parsons Table
  """

Key elements:

  • Single sentence summarizing what the method does

  • Args, aka arguments accepted by the method

  • Returns, aka what the method returns

You can check out some other examples from ActionKit and Hustle as well. The ActionKit one has an example of linking to platform documentation in your docs, which you should totally do if you think it will be helpful.

It’s good to write the docstrings when you write the method, so you don’t forget anything. But don’t worry too much about the formatting. We’ll go over documentation in more detail later (see documentation section).

Writing other kinds of methods

The guide above should serve you well for writing methods that use GET. But what about methods that need to hit an endpoint with POST, PUT, or DELETE?

<TO BE WRITTEN>

Pagination, Retry Logic and Edge Cases

We are in the process of refactoring the APIConnector class to handle more things for you, but for now, you will have to handle these edge cases yourself. Here’s some guidance:

Pagination

<TO BE WRITTEN>

Retry Logic

<TO BE WRITTEN>

Edge Cases

There may be other cases. For example, ActBlue requires you to generate and then download a file, so the ActBlue connector handles multiple steps: generating the file, polling to see if it’s ready, downloading it when it is ready, etc. Please reach out for help with edge cases if you need it!

General Guidelines

Before we finish this section, here are some general guidelines for writing connector methods:

The methods of your connector should generally mirror the endpoints of the API. Every API is different, but the connector should generally look like the API it is connecting to. Methods of your connector should reference the resources the API is using (e.g. “people”, “members”, “events”).

The following lists rules for naming common endpoints:

  • GET - single record - get_<resource> (e.g. get_event, get_person)

  • GET - multiple records - get_<resource>s (e.g. get_members, get_people)

  • POST - single record - create_<resource> (e.g. create_person, create_tag)

  • PUT - single record - update_<resource> (e.g. update_person, update_event)

  • DELETE - single record - delete_<resource> (e.g. delete_member)

A method’s arguments should mirror the parameters of the API endpoint it is calling. Optional parameters should be optional in your method signature (i.e. default to None).

Use Python docstrings to document every public method of your class. The docstrings for your public methods are used to automatically generate documentation for your connector. Having this documentation for every method makes it easier for users to pick up your connector.

Methods returning multiple values should return a Parsons Table. If the list of results is empty, return an empty Parsons Table (not None). Methods returning a single value should just return the value. If the API could not find the value (eg, the ID provided for a resource was not found), return a None value from the method.
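Here’s a hypothetical pair of methods following these return-value guidelines. FakeClient and the method names are illustrative stand-ins, and a plain list/dict stands in for a Parsons Table so the example is self-contained:

```python
class FakeClient(object):
    """Stand-in for an APIConnector, for illustration only."""
    def get_request(self, url, params=None):
        data = {"people/1": {"id": 1, "name": "Ada"}}
        if url in data:
            return data[url]
        # the list endpoint returns an empty list; a missing record returns None
        return [] if url == "people" else None

class ExampleConnector(object):
    def __init__(self, client):
        self.client = client

    def get_person(self, person_id):
        # Single record: return the value, or None when the ID isn't found
        return self.client.get_request(f"people/{person_id}")

    def get_people(self, **kwargs):
        # Multiple records: always return a table (here, a list stands in
        # for a Parsons Table); an empty result is an empty table, never None
        result = self.client.get_request("people", params=kwargs)
        return result if result is not None else []

conn = ExampleConnector(FakeClient())
print(conn.get_person(1))   # {'id': 1, 'name': 'Ada'}
print(conn.get_person(99))  # None
print(conn.get_people())    # []
```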

Adapting Your Tests For Long-Term Use

The tests we’ve written so far are “live tests” - that is, tests that send requests to third party platforms and make use of the data. That’s fine for developing over a short period of time, but because that data is real, it will change, and our tests will start to fail.

There are two avenues we can take to improve the long-term validity of our tests. First, we can adjust the live tests to only test things that largely don’t change. And second, we can create “mocks”, i.e. mocked versions of our tests, where instead of actually hitting the third-party endpoints, we pretend to, and return fake responses that we can then test.

Live Tests

To begin with, let’s import the Parsons utility function mark_live_test:

from test.utils import mark_live_test

class TestConnector(unittest.TestCase):
  # docs, init, setUp, etc. removed for brevity

  @mark_live_test
  def test_get_campaigns_live_test(self):
    ...

mark_live_test is a decorator, which is a special kind of Python object. You don’t need to know what a decorator is or how it works, but the tl;dr is: it is a function that can wrap around another function in order to easily add a feature to it.

mark_live_test adds the feature that, unless the environmental variable LIVE_TEST exists and is true, the test will be skipped. This is helpful, because we’ll only want to run our tests some of the time.
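Conceptually, mark_live_test is equivalent to something like the following sketch (not the exact Parsons implementation), built on unittest.skipUnless:

```python
import os
import unittest

def mark_live_test(test_func):
    # Skip the test unless the LIVE_TEST environment variable is truthy
    run_live = os.environ.get("LIVE_TEST", "").lower() in ("true", "1", "yes")
    return unittest.skipUnless(run_live, "Skipping live test")(test_func)
```

The real decorator lives in test/utils.py; use that one rather than rolling your own.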

Once you’ve added this decorator, run the tests. You’ll see a bunch of ss instead of .s in the output. To actually run the tests again you’ll need to set LIVE_TEST to true by typing one of the following on the command line:

export LIVE_TEST=True   # Mac or Linux
set LIVE_TEST=True      # Windows

We’re now ready to update our live tests. The key thing here is to focus on things that don’t change regardless of the specific data returned. Some good rules of thumb:

  • When testing a table of data returned, check the column names rather than the row values.

  • When testing the order of a result, don’t check the specific values of, say, the first and last objects, but compare their relative values. If order is ascending, for instance, check that the first row’s value is less than the last row’s.

  • When testing a specific filter, don’t compare full dictionaries or rows, but rather just the thing being filtered for.


Parsons doesn’t currently make full use of live tests, so don’t stress out too much over this section. Once we’ve set ourselves up to use live tests more regularly and comprehensively, this guide will be edited to be more demanding. ;)
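Applied to sample data, those rules of thumb translate into assertions like these (a sketch using a plain list of dicts in place of a Parsons Table):

```python
# Hypothetical live-test checks on a returned table, represented here as a
# plain list of dicts so the example is self-contained
result = [{"id": 366172, "name": "A"}, {"id": 366590, "name": "B"}]

# Check column names rather than specific row values
assert set(result[0].keys()) == {"id", "name"}

# Check relative order rather than hard-coded IDs
ids = [row["id"] for row in result]
assert ids[0] <= ids[-1]  # ascending order holds no matter the data
```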

Mocking

Mocking is the practice of creating “fake” versions of functions which we can’t consistently test. In our case, we can’t always assume that we’ll be able to send requests to external platforms. What if our authentication info is wrong? What if no authentication info is set? What if the platform is down? We want tests that can pass anyway.

So, we create mock versions of our requests using a library called requests_mock. This allows us to patch our connector class so that when we call it, the mock version of the request is returned.

(Of course, if we do this, we can’t catch changes to the API, because we’re not actually hitting the API. That’s why it’s important to also do live tests! But with mocks, we can test everything except the request itself.)

As with the live tests, the mock tests have their own special decorator:

import requests_mock

class TestConnector(unittest.TestCase): 
  # docs, init, Setup, etc removed for brevity 

  @requests_mock.Mocker() 
  def test_get_campaigns(self, m):

This decorator changes the signature of the method decorated and adds an extra parameter, m, corresponding to the mock object. We’ll use that object in the function itself:

from connector_mock_data import fake_campaigns_json 

  @requests_mock.Mocker() 
  def test_get_campaigns(self, m):     
      m.get(self.base_uri + '/campaigns', json=fake_campaigns_json)     
      result = self.connector.get_campaigns()     
      # Assert the method returns expected dict response     
      self.assertDictEqual(result.to_dicts()[0], fake_campaigns_json[0])

Here, we’re importing some fake data that we’ve stored in an internal file. The line that begins with m.get specifies the endpoint we want to mock: the one composed of self.base_uri + '/campaigns'. If any of the code called in this test attempts to send a request to that endpoint, it will not actually send the request, but instead just return whatever’s in fake_campaigns_json.

(I cannot really explain how requests_mock does this. It honestly seems magical! If anyone reading this has a good guide to how requests_mock works under the hood, let me know.)

Aside from these changes, your tests should be able to stay the same.

Documentation

The last step when adding a new connector is writing the documentation for it.

Start by making a copy of the template connector file and renaming it to connector.rst (again, with your actual connector name). This should be in the docs folder, which is at the top level of the repository.

You can also use another connector’s docs for guidance/inspiration.

.rst stands for “reStructuredText”, which is the (alas, extremely fiddly) format our docs use.

There are three key parts of this document:

Overview and Auth

The overview itself is super straightforward. You just need one or two sentences describing the platform and linking to it.

The auth section is a note added to the documentation, using this syntax:

.. note:: 
  Here’s some authentication information! 

The auth section will display as a blue box in the rendered documentation.

Quickstart

The quickstart section should start with an explanation of how to handle authentication, and in particular, what the names of the environmental variables are. Use the backticks around the ``variable names`` so that the variables appear as code, which will help them stand out.

The quickstart should demonstrate initializing your connector with authentication info passed in either as environmental variables or as parameters. You can pick some additional methods to demonstrate as well. (Unless your connector has a few methods, you probably don’t need to demonstrate all of them.)

Code blocks are formatted like so (example from ActBlue’s docs):

.. code-block:: python 

  from parsons import ActBlue 
  
  # First approach: Use API key environment variables 
  # In bash, set your environment variables like so: 
  # export ACTBLUE_CLIENT_UUID='MY_UUID' 
  # export ACTBLUE_CLIENT_SECRET='MY_SECRET' 
  actblue = ActBlue() 
  
  # Second approach: Pass API keys as arguments 
  actblue = ActBlue(
    actblue_client_uuid='MY_UUID', 
    actblue_client_secret='MY_SECRET'
  )

Autodocs

The final section on this page should be the “autodocs”. This is a special syntax that allows us to say “go to this connector’s class and get all of the docstrings from the class and its methods and display them here”.

This section should look like this:

API
***

.. autoclass:: parsons.ConnectorName
   :inherited-members:

Don’t forget to add your connector to the documentation sidebar!

We want people to be able to find your connector’s docs. Without adding it to the sidebar, most people won’t even know the connector exists.

To add your docs to the sidebar, go to your local version of index.rst. Scroll down to near the bottom, to a section that starts like this:

.. toctree:: 
  :maxdepth: 1 
  :caption: Integrations 
  :name: integrations 

  actblue 
  action_kit 
  action_network 
  airtable

Find your connector’s place alphabetically and add it. That’s all you need to do!

Building Your Documentation Locally

It’s good to build your documentation locally so you can visually inspect it. RST is a tricky language—it’s easy to make small mistakes that have big effects.

To build your docs, navigate into the docs folder via the command line. From the docs folder, run the command make html. If you try to do this from somewhere else, you will probably get an error.

(You may also get an import error if you haven’t installed the documentation requirements. We keep them in a separate file in the docs folder. From the docs folder, run pip install -r requirements.txt and the handful of docs requirements should be installed. You should only need to do this the first time you build the docs.)

It will take a moment for the documentation to build. There may be warnings or errors. Unfortunately there are some pre-existing warnings for other connectors that you’ll need to ignore (sorry!). Skim them quickly to see if any reference your connector name, and only worry about those.

You should also open up the docs in the browser to check that they look right. To do this, open up the index.html file in your browser, find your connector in the sidebar, and click it. Take a look at the docs and check for any formatting errors, typos, etc.

Wrap Up

You’re good to go! You can submit a PR with your changes to the Parsons repository. (Not sure how to do that? Check out this guide.)

The Parsons maintainers will try to review your pull request within a week or so. Feel free to reach out if it looks like we’ve missed it. Please also reach out if you don’t understand any of the feedback we give.

It is likely that we’ll ask you to make at least a few changes before merging your PR. That’s totally normal and not a sign that you’ve done anything wrong. This is complicated work! You should be proud of yourself for tackling it. :)

Questions? Comments? Feedback?

All kinds of feedback are very much welcome! Please reach out to me at [email protected] or ping me on Slack, or leave a comment on this doc.

Parsons periodically runs “new connector cohorts” where people work through this guide together. I am also happy to help people work through this 1:1. Just reach out! :)
