
What the logs don’t tell you

In a world that is increasingly dominated by mobile applications and cloud services, APIs are becoming crucial to developers and service providers alike. But what are developers actually getting? And is this what service providers think they provide?


Developers

Developers want to use APIs that extend their service without having to build the technology themselves or handle the legislative and security compliance that comes with it (think payments, or anything to do with storing large amounts of personal details).

[tweetable]Developers want simple, scalable, well-documented APIs that are as reliable as possible[/tweetable]. They do not want the APIs they use to make their service unreliable, buggy or slow, i.e. make them look bad. Poorly-performing APIs can harm not only a developer’s reputation but also the API provider’s, should the branding of an API be visible to the world (think Twitter, Facebook, Instagram).

API Providers

API providers grant access to a service or services through the use of public or private APIs; a private API being a service that is not for public consumption, for privacy or security reasons. Why grant access at all? To allow developers to use services and functionality that they do not have to build themselves, and to provide stickiness to an existing user base. Salesforce has done an amazing job of allowing third-party developers to use and extend the functionality it provides, and has even enabled developers to build a thriving app and plugin community.

API providers look to balance the load on their servers, which may also be dealing with other services. They are trying to provide minimal response times whilst maintaining access to, and the integrity of, the data and the service for developers.

Schrödinger’s API: it both works and doesn’t work at the same time

Here lies the problem: when an app that relies on an API performs badly, whose fault is it? Is the app performing how the developer expected, or is the API not responding and thus slowing the service? [tweetable]It is very easy for the API provider to believe that just because the green light is on, the API is working[/tweetable]. Many systems behave completely differently from the theoretical model under load, when exposed to extreme conditions or elements beyond normal operation, or when users do unexpected things to the API.

Logs

System logs – whether from servers, application monitoring tools or other conventional developer operations systems – are excellent at hiding things, because there is usually a lot of data to digest and separating the issues from the noise can be nearly impossible.

Some examples include:

Averages – Whilst an average latency of 300ms may look OK, it isn’t if some of your calls still take 10 seconds. To understand whether your slow-performing outliers are an issue, you have to look at the distribution of latencies and the frequency of the outliers (see the sketch after this list).

Error rates – hopefully these are low, but even a low error rate in a popular API can represent a huge issue. An API that deals with 2 billion transactions a day at a 0.2% error rate still has 4 million failed calls a day.

Logs only measure calls – If the API is not frequently used, then the logs are not going to tell you anything. If the bulk of transactions happen, say, only on a Friday but the service failed on a Thursday, then the detail will not be in the logs. Only frequent monitoring will notify you of issues before they hit your users.
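
To make the first two points concrete, here is a minimal sketch in Python, using made-up latency figures, of why percentiles and absolute failure counts tell you more than an average and a percentage:

```python
import random
import statistics

# Hypothetical sample: 9,990 fast calls plus 10 ten-second outliers.
latencies_ms = [random.gauss(290, 40) for _ in range(9_990)] + [10_000.0] * 10

print(f"mean: {statistics.mean(latencies_ms):.0f} ms")       # ~300 ms, looks OK

pct = statistics.quantiles(latencies_ms, n=1000)             # per-mille cut points
print(f"p50: {pct[499]:.0f} ms, p99: {pct[989]:.0f} ms, p99.9: {pct[998]:.0f} ms")

slow = [x for x in latencies_ms if x > 1_000]
print(f"calls slower than 1s: {len(slow)}")                  # the real story

# Error rates: a 'low' percentage at scale is still a flood of failures.
calls_per_day, error_rate = 2_000_000_000, 0.002             # 0.2%
print(f"failed calls per day: {calls_per_day * error_rate:,.0f}")  # 4,000,000
```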

Basically, what the logs don’t tell you is how APIs behave end to end: how they perform in different geographic regions, and what the latencies are when using real transactions.

Some simple rules

Monitoring is for life, not just for Christmas.

An API that is switched on may not stay on – it may just happen to be on every time you check.

The reliability of an API is inversely proportional to the number of people using it and the number of developers trying to do things that may break it.

The usefulness of server logs is a function of the size of the user base and the amount of data being recorded.

Test the API as it would be used in the wild: end-to-end, across a range of cloud services and apps. In this instance, cloud services means hosting platforms like Google App Engine, Amazon Web Services and Azure, to name a few.

The key question to ask is: are you testing the performance of the servers, or the performance of the API and its impact on users and the overall experience?

As a developer and a user of services, it’s the experience that matters. Poor experience equals poor brand perception, which leads to trying a different API or app – you lose the client, whether it’s a developer or someone with an iPhone migrating to the next app.

Things to consider when testing

Where is the data being served from, and where are your users?

An app that works well in San Francisco when the server farm is in Mountain View may behave quite differently when the same server farm has to serve the app in Europe. Test the API from different locations.
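
As a sketch of what such a test could look like, here is a simple end-to-end probe in Python (the endpoint URL is a placeholder) that could be deployed to several cloud regions and run on a schedule:

```python
import time
import urllib.error
import urllib.request

# Hypothetical endpoint: exercise a real transaction, not just a health check.
API_URL = "https://api.example.com/v1/search?q=test"

def probe(url: str, timeout: float = 15.0) -> tuple[int, float]:
    """Issue one end-to-end call; return (HTTP status, latency in ms)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()                    # include body transfer in the timing
            status = resp.status
    except urllib.error.HTTPError as err:  # 4xx/5xx still give us a status
        status = err.code
    return status, (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    status, latency_ms = probe(API_URL)
    # Tag the result with the region this probe runs in and ship it to
    # your monitoring store; one probe per cloud region covers the map.
    print(f"status={status} latency={latency_ms:.0f}ms")
```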

I got an HTTP 200 response, so everything is fine, right?

Whilst most of us have seen HTTP 404s (page not found), we also know that HTTP 200 indicates an OK response from the server. The challenge comes when an HTTP 200 means that things are not OK. For example, in order to avoid browser problems, some APIs only ever return HTTP 200, with an error message in the body which needs to be parsed. Alternatively, the API might be returning invalid content, which could cause an app to fail.
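
The monitoring implication is that a check has to validate the body, not just the status code. A minimal sketch in Python (the {"error": ...} envelope is a hypothetical example; each API has its own convention):

```python
import json
import urllib.request

def call_really_succeeded(url: str) -> bool:
    """Healthy only if the status code AND the response body check out."""
    with urllib.request.urlopen(url, timeout=15) as resp:
        if resp.status != 200:
            return False
        raw = resp.read()

    try:
        payload = json.loads(raw)
    except ValueError:          # invalid content can crash an app just as hard
        return False

    # Some APIs wrap failures in a 200; the {"error": ...} envelope is a
    # hypothetical example of the kind of field worth checking for.
    return "error" not in payload
```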

API search function comparison

[Figure: facebook-vs-twitter-latency]

In the above figure, it can be seen that the average latency for search using the Facebook and Twitter APIs is approximately 2 seconds apart, with Twitter being the faster and less erratic of the two. Whilst we can only guess at what is happening in the background, the reality is that Facebook Graph Search appears less responsive to anyone using this feature in an app.

Regional server response variation

[Figure: facebook-api-geo-latency]

The above figure shows Facebook GET response times across 6 regions globally. It can be seen that Asia, and particularly Japan, are poor cousins when it comes to regional performance. This behavior has been observed with other APIs tested in this way.

Caching data

[Figure: search-caching-impact]

The above figure shows the effect of caching on server response. After caching was implemented on the server, response times improved; even during cache refreshes (the spikes), overall performance was up.
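
For illustration, here is a minimal sketch in Python of the kind of server-side, TTL-based caching that produces this pattern – cache hits are fast, and the periodic refreshes on expiry are what show up as spikes (the search function is a stand-in):

```python
import time

CACHE_TTL_S = 300   # refresh window; expiries are what cause the latency spikes
_cache = {}         # query -> (timestamp, result)

def cached_search(query: str):
    """Serve a cached result while fresh; fall to the slow path on expiry."""
    now = time.time()
    hit = _cache.get(query)
    if hit is not None and now - hit[0] < CACHE_TTL_S:
        return hit[1]                      # fast path: cached response
    result = run_expensive_search(query)   # slow path: shows up as a spike
    _cache[query] = (now, result)
    return result

def run_expensive_search(query: str):
    # Stand-in for the real backend query behind the API.
    time.sleep(2)
    return {"query": query, "results": []}
```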

Server issues

[Figure: dropbox-api-latency]

The above figure shows intermittent server issues over time. This can be indicative of load-balancing issues or a problem with a single server in a cluster.

What is the future?

[tweetable]The number of APIs is only going to increase and developers are likely to rely on 3rd party services more and more[/tweetable]. It is also likely that more than one API, from more than one provider, will be used in an app. How can we mitigate the effect of one API responding poorly compared to another? We see a need for intelligence in the app that can let the user know that something may be awry with the service being accessed. This should be built into the UI flow, to warn that ‘hey, looks like the service is not responding, please bear with us’. It would be enabled by pinging the monitoring service to determine whether any issues have been reported, or by the app being alerted automatically when a failure falls outside pre-determined boundaries.
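
As a hedged sketch of how that check could look in an app (Python; the monitoring endpoint, its JSON fields and the latency boundary are all assumptions, not a real service):

```python
import json
import urllib.request

# Hypothetical monitoring endpoint reporting per-API health.
STATUS_URL = "https://monitor.example.com/status/search-api"
LATENCY_BOUNDARY_MS = 2_000   # pre-determined boundary for "something is awry"

def service_looks_healthy() -> bool:
    """Ask the monitoring service whether the upstream API has known issues."""
    try:
        with urllib.request.urlopen(STATUS_URL, timeout=3) as resp:
            status = json.loads(resp.read())
    except (OSError, ValueError):
        return True   # can't reach or parse the monitor; don't block the user
    return (not status.get("issues_reported", False)
            and status.get("p95_latency_ms", 0) < LATENCY_BOUNDARY_MS)

if not service_looks_healthy():
    # In a real app this would be part of the UI flow, e.g. a banner.
    print("Hey, looks like the service is not responding, please bear with us.")
```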

Intelligence in the monitoring will also lead to a better understanding of the results, and give API providers a heads-up when issues occur, or when the data shows that a server is about to fail, allowing providers to avoid downtime.

Disclaimer: Data was provided by APImetrics.io, who focus on API performance measurement, testing and analytics. John Cooper is an advisory board member at APImetrics.


Mobile Advertising versus App Store Promotion: a tale of woes and wins

As an independent developer, I’ve had my fair share of successes and failures – examples of the former are TVPyx (Symbian, Windows Phone, Web) and TubeBusBike (Symbian).

Having developed apps on iOS, Android, WP, Symbian, Bada and Web, my experience of all the stores has been mixed. As an independent developer, it is increasingly difficult to get noticed in the sea of apps available on the various stores. I have had a fair amount of trial-and-error experience with both advertising and merchandising across those stores, and I’m here to share it.

There are a number of techniques available to developers to promote an app and increase downloads. Some of these you will need to pay for; others come down to hard work and slick execution. Of course, there is always an element of luck and of being in the right place at the right time, usually built upon previous failures – think Rovio. I am going to concentrate on two methods of promotion:

Advertising and cross-promotion – promoting an app either through paid in-app advertising, i.e. in someone else’s app or website, or through cross-promotion in apps developed by the same publisher.

App store promotion – promoting an app via the app stores themselves. Merchandisers (app store owner staffers) select apps by country/region to appear as featured or promoted apps on the store. Various ‘slots’ have different success rates, with ‘featured’ usually the Holy Grail in terms of maximizing eyeballs and downloads.

Advertising

Advertising using one of the mobile ad networks like AdMob, or an ad exchange like Inneractive, is a paid-for activity, i.e. you pay for a campaign of ad impressions to promote your application in the usual advertising model. While a network like AdMob may be excellent in a market like Germany, it may lack inventory in a specific region like Vietnam. This is where an ad exchange comes in: if you have a truly global application, or specific regional needs for which no single ad network can provide the required local content, an ad exchange barters on your behalf for local inventory and then serves the ad that gives you the most return.

Not all ad mechanisms are created equal, so take care when selecting one. While an exchange’s fill rate may be excellent compared to a single network’s, the downside is that you may not be getting the premium content that would be served by a truly local provider, i.e. a lower CPM. So while a single ad network can provide targeted delivery in terms of locale, an ad exchange can level the playing field, especially in the hard-to-reach areas of the globe. You need to understand your market and choose accordingly.

My personal experience of paid-for app promotion was very disappointing. For £1000, one of my apps was involved in a campaign that consisted of a carousel with 4 ads shown in succession. The campaign as a whole garnered 260,000 impressions. My ad was the 4th on the carousel, meaning it would be the 4th ad served once the host app was invoked – quite far down the pecking order. The campaign produced 82 clicks, and it is unclear whether any of these actually resulted in downloads: no spike, no step change, just noise. The ad was targeted mainly at the UK, with a few other countries involved. At over £12 per click before a single confirmed download, that is quite a high customer acquisition cost!

Anecdotal evidence suggests that in some markets, advertising in apps might even have an adverse effect on downloads, as the ads consume data, which comes at a cost to the user.

App Store Promotion

Being a ‘featured’ app on any store will dramatically increase downloads. Naturally, being featured is likely the result of one or more of the following: it’s a great app, it’s a great experience, great PR, a relationship with a journalist at a national newspaper, a major marketing budget, lots of hard work, and maybe a bit of luck, to name a few.

To get noticed by a store owner – especially an OEM – you need to consider what they, as the builder of the devices, are currently trying to push. For Nokia it may be imaging or mapping, i.e. you are more likely to be promoted if you are harnessing one of the strengths of the business, what makes them unique. For Samsung it may be an app that integrates with their TV solutions. Segmentation considerations also work, e.g. apps for a demographic being targeted by a particular device or devices. Building a relationship with an app store owner is a means to get promoted, but this is likely to be the result of an app that meets the needs of a campaign, or some quid pro quo between the developer and the OEM. A strong relationship with, or understanding of, their needs is required regardless of approach. I am privileged enough to have been involved in a number of OEM programmes and to have close relationships with a number of OEMs and platform providers, so this approach has very much worked for me.

There are a number of different areas on a store where you can be promoted: featured, staff picks, etc. Some OEMs have mini stores that usually link to the platform stores like Windows Marketplace or Play. This gives the OEM the ability to merchandise their partner apps without seeking the permission of the platform owner. Nokia ships the App Highlights app with all their phones; other OEMs have their own offerings.

My experience of being featured on Windows Marketplace was great for downloads, as I suspect being featured would be on other stores. App Highlights worked well until Nokia changed the app to promote more apps at once, which meant my app started to get lost in the sheer number of apps being promoted – the inherent problem of managing app promotion on a store.

Below is a graph of my own experience of being featured on Windows Marketplace and being promoted through App Highlights. There is no halo effect: as soon as the promotion stops, downloads return to the usual run rate. The implication is that you have to continue to promote and market the app to get downloads. As you can see, the experience was far more positive than paid-for app advertising: being featured represented a 1000% increase in downloads (800 downloads/day), whilst being included in App Highlights represented a 200% increase (160 downloads/day).

Continuous promotion is crucial

There are other spikes on the graph that correspond to neither Featured nor App Highlights. The honest answer is that I don’t know what caused them. I only knew my app was being featured or highlighted because a) someone told me, or b) I happened to know the right people. The other spikes could have been caused by promotion on parts of the store I was unaware of, or by a blog picking up on the app. It is usually the case that the developer is not told their app is being promoted, which is a shame, as neither the developer nor the store owner can then capitalize on the promotion.

Conclusion

To get downloads, you need to continuously promote and market your app. I experienced no halo effect: as soon as promotion stops, downloads return to the usual run rate. For me, getting featured and highlighted was a far more effective solution than paid-for advertising. The key is to build close relationships with multiple OEMs and platform providers and use them to deeply understand their marketing needs.