Throughout the rest of my internship at PingThings, I worked on three or four main projects along with some smaller tickets. These projects covered a wide range of topics and had me learning at every step. Not only did I have to research technology I was unfamiliar with, but I also had to meet with many people to get clarification on the inner workings of some of the systems PingThings utilizes. Each task was challenging, which gave me a great opportunity to make the most of my internship.
Project 1: Creating a new GraphQL endpoint.
GraphQL is a query language that uses a custom type system to request data. PingThings uses GraphQL to query large amounts of PMU data from BtrDB, the database that stores it, for use in their plotter. Essentially, you request data by sending a query that is shaped like the data you want back. A GraphQL server has a predetermined schema of types that serves as the format for all possible queries. These types describe each field, both required and optional, and the data each field returns. When a query comes in, it is validated against that schema and executed using resolvers; a resolver is the function a GraphQL server runs for each query field to produce the corresponding data. Below is an example of a query, a possible return value, and its related schema.
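As a rough sketch of the idea, with hypothetical type and field names rather than PingThings' actual schema, a schema, query, and response might look like:

```graphql
# Hypothetical schema: a single data point keyed by stream UUID.
type Point {
  time: Float!   # required field
  value: Float!  # required field
}

type Query {
  nearest(uuid: String!, time: Float!): Point
}

# A query shaped like the data being requested:
#
#   {
#     nearest(uuid: "abc-123", time: 1500000000.0) {
#       time
#       value
#     }
#   }
#
# A possible response, mirroring the query's shape:
#
#   { "data": { "nearest": { "time": 1499999999.7, "value": 60.01 } } }
```

The server's resolver for `nearest` would look up the point in the database and return an object matching the `Point` type.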
After I got the base implementation of the endpoint working, I had to account for bad calls to Nearest. I changed the schema to include an optional "error" field that is populated (along with dummy values for the required fields) whenever a call to Nearest errors or returns bad data (e.g., when an invalid UUID is passed in). Without this handling, one bad call to Nearest would cause the entire ListNearest endpoint to error, which isn't ideal since we would still want the data from the calls that succeeded.
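A minimal sketch of this per-call error handling, assuming hypothetical names (the real resolver would call BtrDB's Nearest over its API rather than the stub here):

```python
DUMMY_TIME = 0.0   # placeholder values for required fields on error
DUMMY_VALUE = 0.0

def nearest(uuid, time):
    # Stand-in for the real BtrDB Nearest call.
    if uuid == "bad-uuid":
        raise ValueError("invalid UUID")
    return {"time": time, "value": 60.0}

def resolve_list_nearest(uuids, time):
    results = []
    for uuid in uuids:
        try:
            point = nearest(uuid, time)
            results.append({"uuid": uuid, **point, "error": None})
        except Exception as exc:
            # Populate the optional error field plus dummy required fields,
            # so one bad call doesn't fail the whole response.
            results.append({"uuid": uuid, "time": DUMMY_TIME,
                            "value": DUMMY_VALUE, "error": str(exc)})
    return results
```

This way a response always contains one entry per requested UUID, and clients can check the `error` field per entry instead of losing the entire result set.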
The final step was benchmarking and testing. Using the new endpoint, I recorded how long requests took to finish with different numbers of UUIDs. I started with 1 and scaled up, eventually running into issues at around 200 UUIDs: beyond that point, the resolver would sometimes fail completely. I believed this was due to having too many open requests to Nearest at one time, though I never pinpointed the exact cause. I solved the issue by breaking the requests into batches of 200, which cost some performance but resolved the failures.
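The batching idea can be sketched as follows, again with a stubbed-out Nearest call and hypothetical names; the point is simply to cap how many requests are in flight at once:

```python
import concurrent.futures

BATCH_SIZE = 200  # empirically chosen cap on concurrent requests

def nearest(uuid, time):
    # Stand-in for the real BtrDB Nearest call.
    return {"uuid": uuid, "time": time, "value": 60.0}

def list_nearest_batched(uuids, time):
    results = []
    # Process UUIDs in groups of BATCH_SIZE so that no more than
    # BATCH_SIZE Nearest requests are ever open at the same time.
    for start in range(0, len(uuids), BATCH_SIZE):
        batch = uuids[start:start + BATCH_SIZE]
        with concurrent.futures.ThreadPoolExecutor(max_workers=len(batch)) as pool:
            # pool.map preserves input order, so results line up with uuids.
            results.extend(pool.map(lambda u: nearest(u, time), batch))
    return results
```

Each batch blocks until all of its requests complete before the next batch starts, which is where the performance hit comes from, but it keeps the number of open connections bounded.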