## Monday, December 24, 2012

### Bridging OAuth 2.0 objects between GData and Discovery

My colleague +Takashi Matsuo and I recently gave a talk about using OAuth2Decorator (from the google-api-python-client library) with request handlers in Google App Engine. Shortly after, a Stack Overflow question sprang up asking about the right way to use the decorator and, as a follow-up, whether the decorator could be used with the Google Apps Provisioning API. As I mentioned in my answer,
The Google Apps Provisioning API is a Google Data API...As a result, you'll need to use the gdata-python-client library to use the Provisioning API. Unfortunately, you'll need to manually convert from an oauth2client.client.OAuth2Credentials object to a gdata.gauth.OAuth2Token object to use the same token for either one.
Instead of making everyone and their brother write their own, I thought I'd take a stab at it and write about it here. The general philosophy I took was that the token subclass should be 100% based on an OAuth2Credentials object:
• the token constructor simply takes an OAuth2Credentials object
• the token refresh updates the OAuth2Credentials object set on the token
• values of the current token can be updated directly from the OAuth2Credentials object set on the token
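Before we get to the real classes, this philosophy can be sketched with plain Python stand-ins (both classes below are made-up placeholders for illustration, not the actual oauth2client or gdata types):

```python
class FakeCredentials(object):
    """Stand-in for oauth2client's OAuth2Credentials (illustrative only)."""

    def __init__(self, access_token):
        self.access_token = access_token

    def refresh(self):
        # Pretend the authorization server issued a new access token.
        self.access_token = self.access_token + '-refreshed'


class TokenFromCredentials(object):
    """Stand-in for the token subclass: 100% backed by the credentials."""

    def __init__(self, credentials):
        # The constructor simply takes the credentials object...
        self.credentials = credentials
        self.update_from_credentials()

    def update_from_credentials(self):
        # ...and every value is mirrored directly from it.
        self.access_token = self.credentials.access_token

    def refresh(self):
        # A refresh delegates to the credentials, then re-mirrors.
        self.credentials.refresh()
        self.update_from_credentials()
```

The wrapper never owns any token state of its own; it only mirrors whatever the wrapped credentials currently hold, which is exactly the shape the real subclass below will take.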
Starting from the top, we'll use two imports:
import httplib2
from gdata.gauth import OAuth2Token
The first is needed to refresh an OAuth2Credentials object using the mechanics native to google-api-python-client, and the second is needed so we may subclass the gdata-python-client native token class.

As I mentioned, the values should be updated directly from an OAuth2Credentials object, so in our constructor, we first initialize the values to None and then call our update method to actually set the values. This allows us to write less code, because repeating ourselves is bad (I think someone told me that once?).
class OAuth2TokenFromCredentials(OAuth2Token):
  def __init__(self, credentials):
    self.credentials = credentials
    super(OAuth2TokenFromCredentials, self).__init__(None, None, None, None)
    self.UpdateFromCredentials()

We can get away with passing four Nones to the superclass constructor, as it has only four positional arguments: client_id, client_secret, scope, and user_agent. Three of those have equivalents on the OAuth2Credentials object, but there is no place for scope because that part of the token exchange is handled elsewhere (by OAuth2WebServerFlow) in the google-api-python-client library.
  def UpdateFromCredentials(self):
    self.client_id = self.credentials.client_id
    self.client_secret = self.credentials.client_secret
    self.user_agent = self.credentials.user_agent
    ...
Similarly, the OAuth2Credentials object only implements the refresh part of the OAuth 2.0 flow, so it only has the token URI; hence auth_uri, revoke_uri and redirect_uri will not be set either. However, the token URI and the token data are the same for both.
    ...
    self.token_uri = self.credentials.token_uri
    self.access_token = self.credentials.access_token
    self.refresh_token = self.credentials.refresh_token
    ...

Finally, we copy the extra fields which may be set outside of a constructor:
    ...
    self.token_expiry = self.credentials.token_expiry
    self._invalid = self.credentials.invalid

Since OAuth2Credentials doesn't deal with all parts of the OAuth 2.0 process, we disable those methods from OAuth2Token that do.
  def generate_authorize_url(self, *args, **kwargs): raise NotImplementedError
  def get_access_token(self, *args, **kwargs): raise NotImplementedError
  def revoke(self, *args, **kwargs): raise NotImplementedError
  def _extract_tokens(self, *args, **kwargs): raise NotImplementedError

The last method which needs to be implemented is _refresh, which should refresh the OAuth2Credentials object and then update the current GData token after the refresh. Instead of using the request object passed in, we use one from httplib2, as mentioned in the imports.
  def _refresh(self, unused_request):
    self.credentials._refresh(httplib2.Http().request)
    self.UpdateFromCredentials()

After refreshing the OAuth2Credentials object, we can update the current token using the same method called in the constructor.

Using this class, we can simultaneously call a discovery-based API and a GData API:
from apiclient.discovery import build
from gdata.contacts.client import ContactsClient

service = build('calendar', 'v3', developerKey='...')

class MainHandler(webapp2.RequestHandler):
  @decorator.oauth_required
  def get(self):
    auth_token = OAuth2TokenFromCredentials(decorator.credentials)
    contacts_client = ContactsClient()
    auth_token.authorize(contacts_client)
    contacts = contacts_client.get_contacts()
    ...
    events = service.events().list(calendarId='primary').execute(
        http=decorator.http())
    ...


## Monday, September 10, 2012

### Last to Cross the Finish Line: Part Three

Recently, my colleague +Fred Sauer and I gave a tech talk called "Last Across the Finish Line: Asynchronous Tasks with App Engine". This is part three in a three-part series where I will share our learnings and give some helpful references to the App Engine documentation.

Check out the previous post if you haven't already. In this section, we'll define the PopulateBatch function and explore the ndb models and Task Queue operations that make it work.

## Imports

Before defining the models and helper functions in models.py, let's first review the imports:
import json

from google.appengine.api import channel
from google.appengine.ext.deferred import defer
from google.appengine.ext import ndb
Again, we import json and channel for serialization and message passing. We import the defer function from the deferred library to abstract away task creation and take advantage of the ability to "defer" a function call to another thread of execution. Finally, we import ndb as a means for interacting with the App Engine Datastore.

## Method Wrapper Built for Tasks

As we saw in the BeginWork handler in part two, units of work are passed to PopulateBatch as 3-tuples containing a method, the positional arguments and the keyword arguments to that method.

In order to keep our task from hanging indefinitely due to unseen errors and to implicitly include the work unit in the batch, we define a wrapper around these method calls:
def AlwaysComplete(task_key, method, *args, **kwargs):
  try:
    method(*args, **kwargs)
  except:  # TODO: Consider failing differently.
    pass
  finally:
    task = task_key.get()
    defer(task.Complete)
As you can see, we catch any and all errors thrown by our method and don't retry the method if it fails. In our example, if the call method(*args, **kwargs) fails, the data won’t be sent through the channel and the given square will not show up in the quilt. However, since we catch these exceptions, the batch will complete and the spinner will disappear with this square still missing.

This part is likely going to be customized to the specific work involved, but for our case, we didn't want individual failures to cause the whole batch to fail. In addition, we implicitly link the work unit with a special type of task object in the datastore.

In the finally section of the error catch, we defer the Complete method on the task corresponding to this work unit. We defer the call to this complete method in order to avoid any errors (possibly from a failed datastore action) that the method may cause. If it were to throw an error, since AlwaysComplete is called in a deferred task, the task would retry and our worker unit would execute (or fail) again, which is bad if our user interface is not idempotent.

As we saw above, we need a datastore model to represent tasks within a batch. We start out initially with a model having only one attribute — a boolean representing whether or not the task has completed.
class BatchTask(ndb.Model):
  # Very important that the default value True of indexed is used here
  # since we need to query on BatchTask.completed
  completed = ndb.BooleanProperty(default=False)
As we know, we'll need to define a Complete method in order to use the task in AlwaysComplete, but before doing so, we'll define another method which will put the task object in the datastore and pass a unit of work to AlwaysComplete:
  @ndb.transactional
  def Populate(self, method, *args, **kwargs):
    self.put()
    kwargs['_transactional'] = True
    defer(AlwaysComplete, self.key, method, *args, **kwargs)
In this Populate method, we first put the object in the datastore transactionally by using the ndb.transactional decorator. By adding the _transactional keyword to the keyword arguments, defer strips away the underscore and creates a transactional task. By doing this
"the task is only enqueued — and guaranteed to be enqueued — if the transaction is committed successfully."
We need this deferred task to be enqueued transactionally for consistency of the completed boolean attribute. The datastore put in Populate uses the default value of False, but after Complete is called we want to set this boolean to True. If this value were not consistent, we could have a race condition resulting in a completed task in the datastore being marked as incomplete. As we'll see later, we rely on this consistency for a query that will help us determine if our batch is done.
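The underscore convention can be illustrated in isolation (defer_sketch below is a made-up stand-in; the real defer also pickles the call and enqueues an actual task):

```python
def defer_sketch(func, *args, **kwargs):
    # Keyword arguments prefixed with an underscore (e.g. _transactional,
    # _countdown) configure the task itself; the underscore is stripped
    # and they are passed to the task queue, never to func.
    task_options = {}
    for key in list(kwargs):
        if key.startswith('_'):
            task_options[key[1:]] = kwargs.pop(key)
    # The real library would pickle (func, args, kwargs) into a task
    # created with task_options; here we just return both pieces.
    return task_options, (func, args, kwargs)
```

So the _transactional flag set in Populate is consumed by defer itself, and AlwaysComplete never sees it among its keyword arguments.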

To signal that a unit of work has completed, we define the Complete method on the task object:
  @ndb.transactional
  def Complete(self):
    self.completed = True
    self.put()

    batcher_parent = self.key.parent().get()
    defer(batcher_parent.CheckComplete, _transactional=True)
It performs two functions. First, it sets completed to True in a transaction. Second, it retrieves the parent entity of the task object and defers the CheckComplete method on this parent. As we will see in more depth in the PopulateBatch function, we use a special type of batch parent object to create an entity group containing all the worker tasks for the batch. We don't want to check if the batch has completed until the datastore put has succeeded, so we defer the call to CheckComplete transactionally, just as we did with AlwaysComplete in the Populate method.

Note: It may seem that these get calls to retrieve the parent via self.key.parent().get() are using more bandwidth than necessary. However, we are relying here on the power of ndb. Using a combination of instance caching and memcache, most (if not all) of these gets will use the cache and will not incur the cost of a round-trip to the datastore.

## Batch Parent Model

Given what we rely on in BatchTask, we need to define a special type of datastore object that will act as the parent entity for a batch. Since we are going to use it to check when a batch is complete, we define the boolean attribute all_tasks_loaded to signal whether or not all worker tasks from the batch have begun. We can use this as a short circuit in our CheckComplete method (or as a guard against premature completion).
class TaskBatcher(ndb.Model):
  all_tasks_loaded = ndb.BooleanProperty(default=False, indexed=False)
To check if a batch is complete, we first determine if all tasks have loaded. If that is the case, we perform an ancestor query that simply attempts to fetch the first worker task in the entity group which has not yet completed. If such a task does not exist, we know the batch has completed, and so start to clean up the task and batch parent objects from the datastore.
  def CheckComplete(self):
    # Does not need to be transactional since it doesn't change data
    session_id = self.key.id()
    if self.all_tasks_loaded:
      incomplete = BatchTask.query(BatchTask.completed == False,
                                   ancestor=self.key).fetch(1)
      if len(incomplete) == 0:
        channel.send_message(session_id, json.dumps({'status': 'complete'}))
        self.CleanUp()
        return

    channel.send_message(session_id, json.dumps({'status': 'incomplete'}))
We again do the utmost at this step to ensure consistency by using an ancestor query:
"There are scenarios in which any pending modifications are guaranteed to be completely applied...any ancestor queries in the High Replication datastore. In both cases, query results will always be current and consistent."
After checking if a batch is complete, we need to communicate the status back to the client. We'll rely on PopulateBatch to create instances of TaskBatcher with the ID of the session corresponding to the batch as the datastore key. We send a status complete or incomplete message to the client using the session ID for the channel. In order to correctly handle these messages on the client, we'll need to update the onmessage handler (defined in part two) to account for status updates:
socket.onmessage = function(msg) {
  var response = JSON.parse(msg.data);
  if (response.status !== undefined) {
    setStatus(response.status);
  } else {
    var squareIndex = 8*response.row + response.column;
    var squareId = '#square' + squareIndex.toString();
    $(squareId).css('background-color', response.color);
  }
}
Just as the setStatus method revealed the progress spinner when work began, it will remove the spinner when the status is complete.

We'll next define the CleanUp method that is called when the batch is complete:
  def CleanUp(self):
    children = BatchTask.query(ancestor=self.key).iter(keys_only=True)
    ndb.delete_multi(children)
    self.key.delete()
This method uses the key from the batch parent to perform another ancestor query and creates an object which can iterate over all the keys of the tasks in the batch. By using the delete_multi function provided by ndb, we are able to delete these in parallel rather than waiting for each to complete. After deleting all the tasks, the batcher deletes itself and clean up is done. Since TaskBatcher.CheckComplete spawns CleanUp in a deferred task, if the deletes time out, the task will try again until all tasks in the batch are deleted.

As a final method on TaskBatcher, we define something similar to BatchTask.Populate that is triggered after all workers in the batch have been added:
  @ndb.transactional
  def Ready(self):
    self.all_tasks_loaded = True
    self.put()
    self.CheckComplete()
This simply signals that all tasks from the batch have loaded by setting all_tasks_loaded to True and calls CheckComplete in case all the tasks in the batch have already completed. This check is necessary because if all worker tasks complete before all_tasks_loaded is True, then none of the checks initiated by those tasks would signal completion. We use a transaction to avoid a race condition with the initial datastore put, which signals that all tasks have not yet loaded.

## Populating a Batch

With our two models in hand, we are finally ready to define the PopulateBatch function used (in part two) by the BeginWork handler.
We want users of this function to be able to call it directly, but don't want it to block the process they call it in, so we wrap the real function in a function that will simply defer the work:
def PopulateBatch(session_id, work):
  defer(_PopulateBatch, session_id, work)
In the actual function, we first create a TaskBatcher object using the session ID as the key and put it into the datastore using the default value of False for all_tasks_loaded. Since this is a single synchronous put, it blocks the thread of execution and we can be sure our parent is in the datastore before members of the entity group (the task objects) are created.
def _PopulateBatch(session_id, work):
  batcher_key = ndb.Key(TaskBatcher, session_id)
  batcher = TaskBatcher(key=batcher_key)
  batcher.put()
After doing this, we loop through all the 3-tuples in the passed-in batch of work. For each unit of work, we create a task using the batcher as parent and then call the Populate method on the task using the method, positional arguments and keyword arguments provided in the unit of work.
  for method, args, kwargs in work:
    task = BatchTask(parent=batcher_key)
    task.Populate(method, *args, **kwargs)
Finally, to signal that all tasks in the batch have been added, we call the Ready method on the batch parent:
  batcher.Ready()
Note: This approach can cause performance issues as the number of tasks grows, since contentious puts within the entity group can cause task completions to stall or retry. I (or my colleagues) will be following up with two posts on the following topics:
• using task tagging and pull queues to achieve a similar result, but reducing contention
• exploring ways to extend this model to a hierarchical model where tasks may have subtasks

## Wednesday, August 29, 2012

### Last to Cross the Finish Line: Part Two

Recently, my colleague +Fred Sauer and I gave a tech talk called "Last Across the Finish Line: Asynchronous Tasks with App Engine".
This is part two in a three-part series where I will share our learnings and give some helpful references to the App Engine documentation. Check out the previous post if you haven't already. In this section, we'll cover the two WSGI handlers in main.py serving requests for our application and the client side code that communicates with our application.

## Imports

Before defining the handlers, let's first review the imports:
import json

from google.appengine.api import channel
from google.appengine.api import users
from google.appengine.ext.webapp.util import login_required
import webapp2
from webapp2_extras import jinja2

from display import RandomRowColumnOrdering
from display import SendColor
from models import PopulateBatch
We import json for serialization of messages. Specific to App Engine, we import channel to use the Channel API, users and login_required for authenticating users within a request, webapp2 for creating WSGI handlers and jinja2 for templating. Finally, we import four functions from the two other modules defined within our project. From the display module, we import the SendColor function that we explored in part one and the RandomRowColumnOrdering function, which generates all possible row, column pairs in a random order. From the as-yet-undiscussed models module, we import the PopulateBatch function, which takes a session ID and a batch of work to be done and spawns workers to carry out the batch of work.

## Handlers

This module defines two handlers: the main page for the user interface and an AJAX handler which will begin spawning the workers.
For the main page we use jinja2 templates to render from the template main.html in the templates folder:
class MainPage(webapp2.RequestHandler):
  def RenderResponse(self, template, **context):
    jinja2_renderer = jinja2.get_jinja2(app=self.app)
    rendered_value = jinja2_renderer.render_template(template, **context)
    self.response.write(rendered_value)

  @login_required
  def get(self):
    user_id = users.get_current_user().user_id()
    token = channel.create_channel(user_id)
    self.RenderResponse('main.html', token=token, table_id='pixels',
                        rows=8, columns=8)
In get — the actual handler serving the GET request from the browser — we use the login_required decorator to make sure the user is signed in, and then create a channel for message passing using the ID of the signed in user. The template takes an HTML ID, rows and columns to create an HTML table as the "quilt" that the user will see. We pass the created token for the channel, an HTML ID for the table and the rows and columns to the template by simply specifying them as keyword arguments.

For the handler which will spawn the workers, we use RandomRowColumnOrdering to generate row, column pairs.
Using each pair along with the SendColor function and the user ID (as a proxy for session ID) for message passing, we add a unit of work to the batch:
class BeginWork(webapp2.RequestHandler):
  # Can't use login_required decorator here because it is not
  # supported for POST requests
  def post(self):
    response = {'batch_populated': False}
    try:
      # Will raise an AttributeError if no current user
      user_id = users.get_current_user().user_id()
      # TODO: return 400 if not logged in
      work = []
      for row, column in RandomRowColumnOrdering(8, 8):
        args = (row, column, user_id)
        work.append((SendColor, args, {}))  # No keyword args
      PopulateBatch(user_id, work)
      response['batch_populated'] = True
    except:
      # TODO: Consider logging traceback.format_exception(*sys.exc_info())
      pass
    self.response.write(json.dumps(response))
Finally, for routing requests within our app, we define:
app = webapp2.WSGIApplication([('/begin-work', BeginWork), ('/', MainPage)],
                              debug=True)
and specify:
handlers:
- url: /.*
  script: main.app
in app.yaml; to use WSGI apps, the App Engine runtime must be python27.

## Client Side Javascript and jQuery

In the template main.html we use jQuery to make AJAX requests and manage the CSS for each square in our "quilt". We also define some other Javascript functions for interacting with the App Engine Channel API. In the HTML <head> element we load the Channel Javascript API, and in the <body> element we open a channel using the {{ token }} passed in to the template:
<head>
  <script src="/_ah/channel/jsapi"></script>
</head>
<body>
  <script type="text/javascript">
    channel = new goog.appengine.Channel('{{ token }}');
    socket = channel.open();
    socket.onerror = function() { console.log('Socket error'); };
    socket.onclose = function() { console.log('Socket closed'); };
  </script>
</body>
In addition to onerror and onclose, we define more complex functions for the onopen and onmessage callbacks.
First, when the socket has been opened, we send a POST request to /begin-work to signal that the channel is ready for communication. If the response indicates that the batch of workers has been initialized successfully, we call a method setStatus which will reveal the progress spinner:
socket.onopen = function() {
  $.post('/begin-work', function(data) {
    var response = JSON.parse(data);
    if (response.batch_populated) {
      setStatus('incomplete');
    }
  });
}
As we defined in part one, each SendColor worker sends back a message along the channel representing a row, column pair and a color. On message receipt, we use these messages to set the background color of the corresponding square to the color provided:
socket.onmessage = function(msg) {
  var response = JSON.parse(msg.data);
  var squareIndex = 8*response.row + response.column;
  var squareId = '#square' + squareIndex.toString();
  $(squareId).css('background-color', response.color);
}
As you can see from squareId, each square in the table generated by the template has an HTML ID so we can specifically target it.

## Next...

In the final post, we'll define the PopulateBatch function and explore the ndb models and Task Queue operations that make it work.

## Monday, August 27, 2012

### Last to Cross the Finish Line: Part One

Recently, my colleague +Fred Sauer and I gave a tech talk called "Last Across the Finish Line: Asynchronous Tasks with App Engine". This is part one in a three-part series where I will share our learnings and give some helpful references to the App Engine documentation.

## Intro

Before I dive in, a quick overview of our approach:
• "Fan out; Fan in": first spread tasks over independent workers; then gather the results back together
• Use task queues to perform background work in parallel
  • Tasks have built-in retries
  • Can respond quickly to the client, making the UI more responsive
• Operate asynchronously when individual tasks can be executed independently, hence can be run concurrently
  • If tasks are too work-intensive to run synchronously, (attempt to) break the work into small independent pieces
• Break work into smaller tasks, for example:
  • rendering media (sounds, images, video)
  • retrieving and parsing data from an external service (Google Drive, Cloud Storage, GitHub, ...)
• Keep track of all workers; notify the client when work is complete

Before talking about the sample, let's check it out in action: we are randomly generating a color in a worker and sending it back to the client to fill in a square in the "quilt". (Thanks to +Iein Valdez for this term.) In this example, think of each square as a (most likely more complex) compute task.
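The fan-out/fan-in shape itself is independent of App Engine; here is a toy sketch using a thread pool (the function name and pool size are illustrative choices, not part of the sample app):

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out_fan_in(work_units, worker):
    # Fan out: each unit of work runs on an independent worker thread.
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(worker, unit) for unit in work_units]
        # Fan in: block until every worker has crossed the finish line,
        # gathering the results in submission order.
        return [future.result() for future in futures]
```

In the sample application the "workers" are deferred tasks and the fan-in is a completion check in the datastore, but the shape of the computation is the same.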
## Application Overview

The application has a simple structure:
gae-last-across-the-finish-line/
|-- app.yaml
|-- display.py
|-- main.py
|-- models.py
+-- templates/
    +-- main.html
We'll inspect each of the Python modules display.py, main.py and models.py individually and explore how they interact with one another. In addition to this, we'll briefly inspect the HTML and Javascript contained in the template main.html, to understand how the workers pass messages back to the client.

In this post, I will explain the actual background work we did and briefly touch on the methods for communicating with the client, but won't get into client side code or the generic code for running the workers and watching them all as they cross the finish line. In the second post, we'll examine the client side code and in the third, we'll discuss the models that orchestrate the work.

## Workers

These worker methods are defined in display.py. To generate the random colors, we simply choose a hexadecimal digit six different times and throw a # on at the beginning:
import random

HEX_DIGITS = '0123456789ABCDEF'

def RandHexColor(length=6):
  result = [random.choice(HEX_DIGITS) for _ in range(length)]
  return '#' + ''.join(result)
With RandHexColor in hand, we define a worker that will take a row and column to be colored and a session ID that will identify the client requesting the work. This worker will generate a random color and then send it to the specified client along with the row and column. To pass messages to the client, we use the Channel API and serialize our messages using the json library in Python.
import json

from google.appengine.api import channel

def SendColor(row, column, session_id):
  color = RandHexColor(length=6)
  color_dict = {'row': row, 'column': column, 'color': color}
  channel.send_message(session_id, json.dumps(color_dict))

## Next...
In the next post, we'll explore the WSGI handlers that run the application and the client side code that handles the messages from the workers.

## Sunday, August 19, 2012

### A Decorator for App Engine Deferred Tasks

I happen to be a big fan of the deferred library for both Python runtimes in Google App Engine. If an application needs to queue up work, breaking the work into easy to understand units by writing worker methods and then deferring the work into tasks is a breeze using the deferred library. For the majority of cases, if fine-grained control over the method of execution is not needed, using the deferred library is a great (and in my opinion, the correct) abstraction. Maybe I am just biased because I have made a few changes to the deferred library over the past few months?

One such change added a feature that allows a task to fail once without having an impact on subsequent retries; this can be accomplished by raising a SingularTaskFailure. Over this weekend, I found that I wanted to use this feature for a special type* of worker. Since I wanted to utilize this unique exception, I wanted to make sure that this worker only ran in a deferred task. Initially I thought I was lost, since any pickled method wouldn't directly have access to the task queue specific headers from the request. But luckily, many of these headers persist as environment variables, so they can be accessed via os.environ or os.getenv, yippee! Being a good little (Python) boy, I decided to abstract this requirement into a decorator and let the function do its own work in peace.
Upon realizing the usefulness of such a decorator, I decided to write about it, so here it is:
import functools
import os

from google.appengine.ext.deferred import defer
from google.appengine.ext.deferred.deferred import _DEFAULT_QUEUE as DEFAULT_QUEUE
from google.appengine.ext.deferred.deferred import _DEFAULT_URL as DEFERRED_URL

QUEUE_KEY = 'HTTP_X_APPENGINE_QUEUENAME'
URL_KEY = 'PATH_INFO'

def DeferredWorkerDecorator(method):
  @functools.wraps(method)
  def DeferredOnlyMethod(*args, **kwargs):
    path_info = os.environ.get(URL_KEY, '')
    if path_info != DEFERRED_URL:
      raise EnvironmentError('Wrong path of execution: {}'.format(path_info))
    queue_name = os.environ.get(QUEUE_KEY, '')
    if queue_name != DEFAULT_QUEUE:
      raise EnvironmentError('Wrong queue name: {}'.format(queue_name))
    return method(*args, **kwargs)

  return DeferredOnlyMethod
This decorator first checks if the environment variable PATH_INFO is set to the default value for the deferred queue: /_ah/queue/deferred. If this is not the case (or if the environment variable is not set), an EnvironmentError is raised. Then the environment variable HTTP_X_APPENGINE_QUEUENAME is checked against the name of the default queue: default. Again, if this is incorrect or unset, an EnvironmentError is raised. If both of these checks pass, the decorated method is called with its arguments and the value is returned.
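Since the check is nothing more than two environment-variable comparisons, it can be exercised outside App Engine too. Here is a self-contained sketch with the two constants inlined (in the real decorator they are imported from the deferred library):

```python
import functools
import os

DEFERRED_URL = '/_ah/queue/deferred'
DEFAULT_QUEUE = 'default'

def DeferredWorkerDecorator(method):
    @functools.wraps(method)
    def DeferredOnlyMethod(*args, **kwargs):
        # Refuse to run unless the request looks like a deferred task
        # on the default push queue.
        if os.environ.get('PATH_INFO', '') != DEFERRED_URL:
            raise EnvironmentError('Wrong path of execution')
        if os.environ.get('HTTP_X_APPENGINE_QUEUENAME', '') != DEFAULT_QUEUE:
            raise EnvironmentError('Wrong queue name')
        return method(*args, **kwargs)

    return DeferredOnlyMethod

@DeferredWorkerDecorator
def Worker():
    return 'worked'

# Simulate the task queue environment by faking the headers-turned-
# environment-variables; in production the runtime sets these.
os.environ['PATH_INFO'] = DEFERRED_URL
os.environ['HTTP_X_APPENGINE_QUEUENAME'] = DEFAULT_QUEUE
```

With the variables faked, Worker() runs; unset or change either one and the call raises EnvironmentError, which is exactly the guard we want.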
To use this decorator:
import time

from google.appengine.ext.deferred import SingularTaskFailure

@DeferredWorkerDecorator
def WorkerMethod():
  if too_busy():
    time.sleep(30)
    raise SingularTaskFailure
  # do work

WorkerMethod()  # This will fail with an EnvironmentError
defer(WorkerMethod)  # This will perform the work, but in its own task
In case you want to extend this, here is a more "complete" list of some helpful values that you may be able to retrieve from environment variables:
HTTP_X_APPENGINE_TASKRETRYCOUNT
HTTP_X_APPENGINE_QUEUENAME
HTTP_X_APPENGINE_TASKNAME
HTTP_X_APPENGINE_TASKEXECUTIONCOUNT
HTTP_X_APPENGINE_TASKETA
HTTP_X_APPENGINE_COUNTRY
HTTP_X_APPENGINE_CURRENT_NAMESPACE
PATH_INFO

*Specialized worker: I had two different reasons to raise a SingularTaskFailure in my worker. First, I was polling for resources that may not have been online, so I wanted the task to sleep and then restart (after raising the one-time failure). Second, I was using a special sentinel in the datastore to determine if the current user had any other job in progress. Again, I wanted to sleep and try again until the current user's other job had completed.

## Thursday, May 17, 2012

### Life of π: Continued Fractions and Infinite Series

This is from a talk I gave to the UC Santa Cruz Math Club back in February. I have had the slides cluttering my desk since I gave the talk as a reminder to put them up on the web, and today I finally cleaned them off! An interesting tidbit about these slides: I gave the same talk one other time, as an undergraduate, the day after a root canal. This February (at UCSC), one day after the talk I also had a root canal. What's the moral of the story? Don't give this talk. No seriously, don't give this talk. Though the fact is cool and the math is not too bad to grasp, the stuff from slides 106 to 113 goes way over an audience's collective head just because they have stopped paying attention at that point. Too many variables!
## Tuesday, May 15, 2012

### Reverse Calculating An Interest Rate

I was recently playing around with some loan data and happened to have only the term (or length, or duration) of the loan, the amount of the recurring payment (in this case monthly) and the remaining principal owed on the loan. I figured there was an easy way to get at the interest rate, but wasn't sure how. After some badgering from my coworker +Paul, I searched the web and found a tool from CALCAmo (a site just for calculating amortizations).

Problem solved, right? Wrong. I wanted to know why; I had to go deeper. So I did a bit of math and a bit of programming and I was where I needed to be. I'll break the following down into parts before going on full steam.
• Break down the amortization schedule in terms of the variables we have and the one we want
• Determine a function we want to find zeroes of
• Write some code to implement the Newton-Raphson method
• Utilize the Newton-Raphson code to find an interest rate
• Bonus: Analyze the function to make sure we are right

## Step I: Break Down the Amortization Schedule

We can do this using the series $$\left\{P_i\right\}_i$$ of principal owed, which varies over time and will go to zero once the loan is paid off. In this series, $$P_0$$ is the principal owed currently and $$P_i$$ is the principal owed after $$i$$ payments have been made. (Assuming monthly payments, this will be after $$i$$ months.) If the term is $$T$$ periods, then we have $$P_T = 0$$. We have already introduced the term ($$T$$); we also need the value of the recurring (again, usually monthly) payment $$R$$, the interest rate $$r$$ and the initial principal owed $$P_0 = P$$.

### Time-Relationship between Principal Values

If $$P_i$$ is owed after $$i$$ periods, then after one more period has elapsed, we will owe $$P_i \cdot m$$ where $$m = m(r)$$ is some multiplier based on the length of the period.
For example, if each period is one month, then we divide our rate by $$12$$ for the interest and add $$1$$ to note that we are adding to existing principal: $m(r) = 1 + \frac{r}{12}.$ In addition to the interest, we will have paid off $$R$$, hence $P_{i + 1} = P_i \cdot m - R.$ Each term in the series is related to the previous one in this way (except $$P_0$$, since time can't be negative in this case).

Formula for $$P_i$$

Using this, we can actually determine $$P_i$$ strictly in terms of $$m, R$$ and $$P$$. First, note that $P_2 = P_1 \cdot m - R = (P_0 \cdot m - R) \cdot m - R = P \cdot m^2 - R(m + 1)$ since $$P_0 = P$$. We can show inductively that $P_i = P \cdot m^i - R \cdot \sum_{j = 0}^{i - 1} m^j.$ We already have the base case $$i = 1$$, by definition. Assuming the formula holds for $$i$$, we see that $P_{i + 1} = P_i \cdot m - R = m \cdot \left(P \cdot m^i - R \cdot \sum_{j = 0}^{i - 1} m^j\right) - R = P \cdot m^{i + 1} - R \cdot \sum_{j = 1}^{i} m^j - R = P \cdot m^{i + 1} - R \cdot \sum_{j = 0}^{i} m^j,$ and our induction is complete. (We bump the index $$j$$ since we are multiplying each $$m^j$$ by $$m$$; the trailing $$R$$ then folds back in as the $$j = 0$$ term.)

Step II: Determine a Function we want to find Zeroes of

Since we know $$P_T = 0$$ and $$P_T = P \cdot m^T - R \cdot \sum_{j = 0}^{T - 1} m^j$$, we actually have a polynomial in place that will let us solve for $$m$$ and, in so doing, solve for $$r$$. To make our lives a tad easier, we'll do some rearranging. First, note that $\sum_{j = 0}^{T - 1} m^j = m^{T - 1} + \cdots + m + 1 = \frac{m^T - 1}{m - 1}.$ We could calculate this sum of a geometric series here, but I'll just refer you to the Wikipedia page instead.
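As a quick sanity check (my addition, with made-up sample values), the closed form for $$P_i$$ can be compared against simply iterating the recurrence $$P_{i + 1} = P_i \cdot m - R$$:

```python
def principal_after(P, R, m, i):
    """Closed-form principal owed after i payments:
    P_i = P*m^i - R*(m^i - 1)/(m - 1)."""
    return P * m**i - R * (m**i - 1) / (m - 1)

# Sample values (not from the post): $100,000 principal, $665.30
# monthly payment, monthly multiplier at a 7% annual rate.
P, R = 100000.0, 665.30
m = 1 + 0.07 / 12

owed = P
for i in range(1, 13):
    owed = owed * m - R  # one period of interest, then one payment
    assert abs(owed - principal_after(P, R, m, i)) < 1e-6
```

The loop and the closed form agree to within floating-point error, which is exactly what the induction above promises.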
With this reduction we want to solve $0 = P \cdot m^T - R \cdot \frac{m^T - 1}{m - 1} \Longleftrightarrow P \cdot m^T \cdot (m - 1) = R \cdot (m^T - 1).$ With that, we have accomplished Step II: we have found a function (parameterized by $$P, T$$ and $$R$$) whose zeroes we can use to find our interest rate: $f_{P, T, R}(m) = P \cdot m^T \cdot (m - 1) - R \cdot (m^T - 1) = P \cdot m^{T + 1} - (P + R) \cdot m^T + R.$

Step III: Write some code to implement the Newton-Raphson method

We use the Newton-Raphson method to get super-duper-close to a zero of the function. For in-depth coverage, see the Wikipedia page on the Newton-Raphson method, but I'll give some cursory coverage below. The methods used to show that a fixed point is found are not necessary for the intuition behind the method.

Intuition behind the method

For the intuition, assume we know (and can compute) a function $$f$$, its derivative $$f'$$ and a value $$x$$. Assume there is some zero $$y$$ nearby $$x$$. Since they are close, we can approximate the slope of the line between the points $$(x, f(x))$$ and $$(y, f(y))$$ with the derivative nearby. Since we know $$x$$, we use $$f'(x)$$ and intuit that $f'(x) = \text{slope} = \frac{f(y) - f(x)}{y - x} \Rightarrow y - x = \frac{f(y) - f(x)}{f'(x)}.$ But, since we know that $$y$$ is a zero, $$f(y) - f(x) = -f(x)$$, hence $y - x = \frac{-f(x)}{f'(x)} \Rightarrow y = x - \frac{f(x)}{f'(x)}.$

Using this method, one can start with a given value $$x_0 = x$$ and compute better and better approximations of a zero via the iteration above that determines $$y$$. We use a sequence to do so: $x_{i + 1} = x_i - \frac{f(x_i)}{f'(x_i)}$ and stop calculating the $$x_i$$ either after $$f(x_i)$$ is below a preset threshold or after the fineness of the approximation $$\left|x_i - x_{i + 1}\right|$$ goes below a (likely different) preset threshold. Again, there is much that can be said about these approximations, but we are trying to accomplish things today, not theorize.
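To make the iteration concrete before writing the general routine, here is a tiny run (my example, not from the post) of $$x_{i + 1} = x_i - f(x_i)/f'(x_i)$$ on $$f(x) = x^2 - 2$$, whose positive zero is $$\sqrt{2}$$:

```python
# Newton-Raphson iteration on f(x) = x^2 - 2, f'(x) = 2x.
# Starting from x_0 = 1, each step roughly doubles the number
# of correct digits of sqrt(2).
def f(x):
    return x**2 - 2

def f_prime(x):
    return 2*x

x = 1.0  # initial guess x_0
for _ in range(6):
    x = x - f(x) / f_prime(x)

print(x)  # converges to 1.41421356...
```

Six iterations already agree with $$\sqrt{2}$$ to machine precision, which is why the stopping thresholds above can be set so aggressively small.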
Programming Newton-Raphson

To perform Newton-Raphson, we'll implement a Python function that takes the initial guess ($$x_0$$) and the functions $$f$$ and $$f'$$. We'll also (arbitrarily) stop after the value $$f(x_i)$$ drops below $$10^{-8}$$ in absolute value.

    def newton_raphson_method(guess, f, f_prime):
        def next_value(value):
            return value - f(value)*1.0/f_prime(value)

        current = guess
        while abs(f(current)) > 10**(-8):
            current = next_value(current)
        return current

As you can see, once we have f and f_prime, everything else is easy because all the work in calculating the next value (via next_value) is done by the functions.

Step IV: Utilize the Newton-Raphson code to find an Interest Rate

We first need to implement $$f_{P, T, R}(m) = P \cdot m^{T + 1} - (P + R) \cdot m^T + R$$ and $$f'_{P, T, R}$$ in Python. Before doing so, we do a simple derivative calculation: $f_{P, T, R}'(m) = P \cdot (T + 1) \cdot m^T - (P + R) \cdot T \cdot m^{T - 1}.$ With these formulae in hand, we write a function which will spit out the corresponding f and f_prime given the parameters $$P$$ (principal), $$T$$ (term) and $$R$$ (payment):

    def generate_polynomials(principal, term, payment):
        def f(m):
            return (principal*(m**(term + 1)) -
                    (principal + payment)*(m**term) + payment)

        def f_prime(m):
            return (principal*(term + 1)*(m**term) -
                    (principal + payment)*term*(m**(term - 1)))

        return (f, f_prime)

Note that these functions only take a single argument (m), but we are able to use the other parameters from the parent scope beyond the life of the call to generate_polynomials due to closure in Python. In order to solve, we need an initial guess, but we need to know the relationship between $$m$$ and $$r$$ before we can determine what sort of guess makes sense. In addition, once a value for $$m$$ is returned from Newton-Raphson, we need to be able to turn it into an $$r$$ value, so functions m and m_inverse should be implemented.
For our dummy case here, we'll assume monthly payments (and compounding):

    def m(r):
        return 1 + r/12.0

    def m_inverse(m_value):
        return 12.0*(m_value - 1)

Using these, and assuming that an interest rate of 10% is a good guess, we can put all the pieces together:

    def solve_for_interest_rate(principal, term, payment, m, m_inverse):
        f, f_prime = generate_polynomials(principal, term, payment)

        guess_m = m(0.10)  # ten percent as a decimal
        m_value = newton_raphson_method(guess_m, f, f_prime)
        return m_inverse(m_value)

To check that this makes sense, let's plug in some values. Using the bankrate.com loan calculator, if we have a 30-year loan (with $$12 \cdot 30 = 360$$ months of payments) of \$100,000 with an interest rate of 7%, the monthly payment would be \$665.30. Plugging this into our pipeline:
    >>> principal = 100000
    >>> term = 360
    >>> payment = 665.30
    >>> solve_for_interest_rate(principal, term, payment, m, m_inverse)
    0.0699996284703
And we see the rate of 7% is approximated quite well!
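As a further check (my addition, using the standard annuity payment formula rather than anything derived above), we can go the other direction: plug the recovered rate into the payment formula and confirm we land back near \$665.30.

```python
# Cross-check: for a periodic rate i = r/12, the standard annuity
# formula gives the payment R = P*i / (1 - (1 + i)**(-T)). This is
# algebraically the same relation f_{P,T,R}(m) = 0 rearranged for R.
def monthly_payment(principal, term, rate):
    i = rate / 12.0  # periodic (monthly) rate
    return principal * i / (1 - (1 + i)**(-term))

print(monthly_payment(100000, 360, 0.0699996284703))  # roughly 665.30
```

Recovering the original payment from the computed rate confirms the two formulations are inverses of each other.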

Bonus: Analyze the function to make sure we are right

Coming soon. We will analyze the derivative and concavity to make sure that our guess yields the correct (and unique) zero.