At WebKinder, we recently started building a home on the web for our WordPress plugins. It is not launched yet, but I really enjoy working on it. One requirement is to fetch as much plugin data as possible directly from WordPress.org. The official plugin site has an API that exposes things like download statistics, the description or the changelog. This is the main endpoint for a plugin with slug plugin-name:

	https://api.wordpress.org/plugins/info/1.0/plugin-name

A GET request to this endpoint will return a serialized object containing most of the information about the plugin. We decided to implement shortcodes to render different parts of the data. One displays the changelog, one shows the download numbers and so forth. This way the page layouts stay more flexible and we can use the data in different places. From a more general point of view, this is the situation:

  • Multiple shortcodes request data from the same HTTP API endpoint
  • A page can include none, some or all of them

If a page contains two or more of these shortcodes, the same API request will be made multiple times. That is redundant and a waste of resources. Instead it would be great if the first shortcode loads the data from the API and all subsequent shortcodes read their data directly from this response. But there is no way to know which shortcode is going first, so the solution should work for all possible orders. What we really need here is a cache.
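To make the redundancy concrete, here is roughly what two such shortcode handlers look like without a cache. The shortcode tags, the fields accessed and the hard-coded slug are hypothetical; the endpoint URL is the WordPress.org plugin information API mentioned above:

```php
// Two hypothetical shortcodes -- each handler fetches the same
// endpoint independently, so a page using both tags triggers
// two identical API requests.
add_shortcode( 'plugin_changelog', function( $atts ) {
	$response = wp_remote_get( 'https://api.wordpress.org/plugins/info/1.0/plugin-name' );
	$data     = maybe_unserialize( wp_remote_retrieve_body( $response ) );
	return '<div class="changelog">' . $data->sections['changelog'] . '</div>';
} );

add_shortcode( 'plugin_downloads', function( $atts ) {
	$response = wp_remote_get( 'https://api.wordpress.org/plugins/info/1.0/plugin-name' );
	$data     = maybe_unserialize( wp_remote_retrieve_body( $response ) );
	return number_format_i18n( $data->downloaded );
} );
```

Both closures repeat the same request and unserialization; only the last line differs.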


WordPress has a built-in class, WP_Object_Cache, that allows you to cache data which may be computationally expensive to regenerate. I think an external API request falls into that category, so I had a look. The store for the cached data is a simple key-value map. You add data identified by some key string and you retrieve it using that key. The most important thing here is that the cache is non-persistent by default. The data is only kept for the duration of a single request, meaning each page load starts with an empty cache. This fits the requirements described above perfectly.

I would suggest creating a base layer of abstraction by wrapping the API request, error handling and caching into a function. A shortcode can then call this function without worrying about these details and focus on processing the data.

function get_remote_data( $slug ) {
	$key  = 'remote-data-' . $slug;
	$data = wp_cache_get( $key );
	if ( false === $data ) {
		$url  = 'https://api.wordpress.org/plugins/info/1.0/' . $slug;
		$data = wp_remote_get( $url );
		// error handling
		wp_cache_add( $key, $data );
	}
	return $data;
}
The error handling really depends on the data your API returns, so I just left it as a comment. The methods of the caching class are pretty much self-explanatory and more details about them can be found in the Codex. The snippet above is pseudocode to illustrate the idea more than anything else. For example, you might want to store only the response body instead of the complete response object, et cetera.
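With that helper in place, a shortcode handler shrinks to the data processing itself. The tag name, the attribute default and the field accessed are hypothetical examples:

```php
// Hypothetical shortcode that renders the download count.
// Fetching, caching and (eventually) error handling all live
// inside get_remote_data(), so the handler stays tiny.
add_shortcode( 'plugin_downloads', function( $atts ) {
	$atts = shortcode_atts( array( 'slug' => 'plugin-name' ), $atts );
	$data = get_remote_data( $atts['slug'] );
	if ( ! $data ) {
		return '';
	}
	return number_format_i18n( $data->downloaded );
} );
```

However many of these shortcodes appear on a page, only the first one pays for the HTTP round trip; the rest hit the object cache.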

Splitting up requests into multiple smaller ones

Assume you have different shortcodes that use different parts of the data returned by the same API endpoint. That is exactly our situation: the main plugin endpoint returns the data for all shortcodes. If all shortcodes are used on a page, this makes perfect sense. But if only one shortcode is used, the page still loads the data for all of them. For that particular page it would make more sense to refine the request so that it only returns the data that is needed, given that the API in question supports this. But then again, having all shortcodes on one page would result in five different requests that could have been batched into one. Clearly the variables are the number of shortcodes on a page and the size of the data to be retrieved. The WordPress.org API supports refining requests to the plugin endpoint by sending a POST request with certain parameters, so I decided to try it out.
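A refined request can be sketched like this. The request format here is an assumption modeled on how WordPress core's plugins_api() function talks to this endpoint; the fields list is an illustrative example:

```php
// Ask the API for the active install count only, suppressing
// the sections (description, changelog and so on).
// Request shape assumed from core's plugins_api().
$response = wp_remote_post( 'https://api.wordpress.org/plugins/info/1.0/', array(
	'body' => array(
		'action'  => 'plugin_information',
		'request' => serialize( (object) array(
			'slug'   => 'woocommerce',
			'fields' => array(
				'active_installs' => true,
				'sections'        => false,
			),
		) ),
	),
) );
$data = maybe_unserialize( wp_remote_retrieve_body( $response ) );
```

The response is a much smaller serialized object containing only the requested fields.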

I wrote a quick PHP script that sends 100 requests, measures their execution time and prints the average. I used PHP's built-in cURL functions and microtime(), and I honestly don't know how meaningful this analysis is, but here are some numbers I got for the plugin woocommerce:

  • GET (complete data): 0.85s
  • POST (total downloads only): 0.65s
  • POST (active installs only): 0.61s
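The measurement script was along these lines; this is a simplified sketch of the GET case only, using cURL and microtime() as described, with the URL and request count taken from the text:

```php
<?php
// Rough timing sketch: request the same endpoint 100 times
// and print the average wall-clock duration per request.
$url   = 'https://api.wordpress.org/plugins/info/1.0/woocommerce';
$times = array();

for ( $i = 0; $i < 100; $i++ ) {
	$ch = curl_init( $url );
	curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );

	$start = microtime( true );
	curl_exec( $ch );
	$times[] = microtime( true ) - $start;

	curl_close( $ch );
}

printf( "average: %.2fs\n", array_sum( $times ) / count( $times ) );
```

For the POST variants, the same loop wraps a curl_setopt() call setting CURLOPT_POSTFIELDS with the refined request body instead.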

So loading the complete data is a little more expensive, but definitely not expensive enough that splitting requests would be worth it. For that to pay off, the ratio between the complete data set and one individual part would have to be considerably larger. I think for the kind of request made here, most of the time is spent on communication rather than on computation on the remote server.