Categories: Business Tips

Best Practices for a successful IoT Developer Program

Events and training programs are a core component of many IoT developer programs. But just how effective are they?

This infographic sheds some light on the effectiveness of training and events. The insights are based on our Best Practices for IoT Developer Programs report.

[Infographic: Best Practices for an IoT Developer Program]
Categories: Platforms Tips

4 out of 10 sites unaware of Google’s new Mobile Ranking Signal

A variety of mobile devices have flooded the Web in recent years. To no one’s surprise, Google announced that starting April 21st they’ll expand the use of mobile-friendliness as a mobile ranking signal, in effect penalizing all sites that don’t have a mobile strategy – a change dubbed “Mobilegeddon” in the recent press.

Furthermore, Google just paid $25 million for exclusive rights to the “.app” top-level domain. Although the company hasn’t yet announced any specific plans for .app, this could be the signal that the Mobile Web will evolve beyond Responsive Web Design (RWD) and lean more towards rich UX/UI and mimicking of the native environment.

Presumably, [tweetable]the update to their algorithm will have a significant impact on Google’s mobile search results[/tweetable]. If you’re like me, you’d want to go all the way and find out the specifics: what is the current distribution of mobile web strategies, how many websites will be impacted by this change and to what degree? If you already have a mobile web strategy in place, or have just started developing one, you’d probably also want to know how to optimize it: how “lengthy” should your pages be, or how “heavy”? And what does Google’s PageSpeed Insights have to say about it?

We searched for answers to these questions by analyzing the top 10,000 Alexa sites from 5 different categories: News, E-commerce, Tech, Business and Sports. What we discovered was surprising, and we felt it was worth sharing with the community. Before diving into the data, let me say a few words about the methodology.

As per Google’s guidelines, we have taken into consideration three types of configurations for building mobile sites:


Google recognizes three different configurations for building mobile sites. (Google Developers)

  • Responsive web design: serving the same HTML code on the same URL regardless of the user’s device, and rendering the display differently based on screen size.
  • Dynamic serving (also known as adaptive): using the same URL regardless of device, but generating the HTML dynamically by detecting the browser’s User Agent.
  • Separate URLs (also known as mobile-friendly): serving different code to each device from a separate web address for the mobile version.

To detect the strategy used by different websites, we crawled them using an iPhone User Agent:

Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9A334 Safari/7534.48.3

On top of that, we also identified those sites that implemented a Smart App Banner meta tag in the head of their home page, to market their iOS app:

<meta name="apple-itunes-app" content="app-id=myAppStoreID">
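
As an illustration, a crawl along these lines can be sketched in a few lines of Python (a minimal sketch using the requests library; the checks and the example URL are placeholders rather than the exact script used for the study):

import requests

IPHONE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) "
             "AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9A334 Safari/7534.48.3")

def inspect_site(url):
    # Fetch the home page while pretending to be an iPhone
    response = requests.get(url, headers={"User-Agent": IPHONE_UA}, allow_redirects=True, timeout=10)
    html = response.text.lower()
    return {
        "final_url": response.url,                                  # a redirect to e.g. m.example.com hints at separate mobile URLs
        "has_viewport_meta": 'name="viewport"' in html,             # commonly present on responsive/adaptive pages
        "has_smart_app_banner": 'name="apple-itunes-app"' in html,  # iOS Smart App Banner meta tag
    }

print(inspect_site("http://example.com"))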

For analyzing the performance, we have used the PageSpeed Insights API, which takes into consideration not only the size of a webpage, but also compression, server response time, browser caching and landing page redirects, among others.

We also wanted to see how long mobile pages have become. Since RWD relies heavily on scrolling, we asked ourselves whether this is an effective way to display content and what can be considered an optimum page height. Throughout the rest of the article you’ll see references to an “X5” height rate, which means that a site’s main page is 5 times the height of the viewport on an iPhone 6 (375 x 667). This is an indication of how much scrolling is needed to reach the last piece of content.
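
For reference, the height rate and the PageSpeed score described above can be derived roughly like this (a sketch only: the endpoint shown assumes the public v2 API of the time, and the field holding the 0–100 score differs between API versions):

import requests

IPHONE_VIEWPORT_HEIGHT = 667  # iPhone 6 portrait viewport (375 x 667)

def height_rate(page_height_px):
    # An "X5" page is 5 times the viewport height
    return page_height_px / IPHONE_VIEWPORT_HEIGHT

def pagespeed_score(url, api_key):
    # Assumes the v2 endpoint; adjust the URL and response fields for other versions
    resp = requests.get(
        "https://www.googleapis.com/pagespeedonline/v2/runPagespeed",
        params={"url": url, "strategy": "mobile", "key": api_key},
        timeout=30,
    )
    data = resp.json()
    return data.get("ruleGroups", {}).get("SPEED", {}).get("score")

print(round(height_rate(3335), 1))  # -> 5.0, i.e. an "X5" page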


Overview of Mobile Strategies on the Web

It’s 2015 and if you think that RWD is winning over mobile friendly sites (separate URLs), you’re spot on. Nearly 28% of the sites we’ve researched are responsive, while 26% have opted for a separate mobile address. The gap is indeed there, but it’s not as wide as we might have assumed. The surprising fact is that 40% of the researched sites have no mobile web strategy whatsoever! To me, that’s HUGE! Since Google’s new algorithm update is just around the corner, we can easily imagine that ignoring mobile users will no longer be an option.

Deeply buried in the data, we’ve also found another interesting fact: approximately 4% considered that being adaptive (serving different content under the same web address) is the right strategy for their mobile web presence.

2% of the researched websites have chosen to target their mobile users through applications (iOS). The number may in fact be bigger, but the inconsistency in promoting their apps makes them difficult to track.

Let’s dig a little bit more and see the variations between different categories and the impact that each strategy has on the overall mobile experience.

Newspapers – resistant as always

Being “stuck in the past” seems to be rightly attributed to the publishing industry, especially to newspapers: 38% of the websites from this category are mobile-friendly (separate URLs) and only 25% responsive.

In the PageSpeed Insights (PSI) results, adaptive comes out on top, with 75% scoring between 40 and 60 (although only 15% are above 60). Interestingly enough, mobile-friendly sites score even better (19% with PSI > 60), while only 11% of responsive sites have a score higher than 60. In fact, most of the sites that score under 20 are responsive (10%).

Wait, what? I thought Google loves responsiveness, right? There are countless articles about how Google recommends RWD as the best way to target mobile users. Surely, something doesn’t add up!

Or maybe most responsive sites don’t comply with Google’s own recommendations. A plausible explanation might be that developers have mistaken Google’s love of good mobile experiences for a love of responsiveness, taking the rest of Google’s guidelines for granted. Instead of taking advantage of RWD, they end up producing sites that score poorly on all fronts.

When looking at sites’ height distribution on mobile devices, we see that the average for mobile-friendly ones is X8, which is pretty close to the X7 average of adaptive sites. However, 54% of adaptive sites have a height rate of less than X5.

In all fairness, “old” doesn’t necessarily mean wrong for newspaper sites, and this time it might be working in their favor: 47% of responsive sites have an X10–X15 height rate and 14% are even over X15, which means that mobile users have to do a lot of scrolling before reaching a certain piece of content.

So why are mobile-friendly pages shorter? Did their owners notice that less “lengthy” pages are more suitable for reading content on mobile? Is that the reason behind choosing mobile-friendly (separate URLs) over responsiveness? But if that’s the case, why not go all the way towards adaptive? Both mobile-friendly and adaptive sites do a better job of producing pages closer to the optimum (2.5MB on average, compared to 3.7MB for responsive), so perhaps it’s simply the ease of managing a separate mobile site altogether.

Looking at Alexa’s popularity ranks, we notice that as the rank decreases, the number of responsive websites grows. Responsive is the most common strategy among the sites with a low popularity rank (14%), while 53% of the analyzed adaptive sites are among the top-ranked ones.

Tech sites – leaning towards RWD

In contrast with the above findings, responsive sites predominate in the “Tech” category (33%), while only 12% are mobile friendly (separate URLs).

In addition, 58% have page sizes under 2MB and consequently a higher score on PageSpeed Insights: 51% score over 60 and only 1.5% under 20. A page size of 2MB might sound like a lot, but caching and other factors also influence the overall PSI score.

When comparing the results to the newspapers category, tech websites show a major improvement. Yes, responsive sites still have the biggest height ratio (X6), but the average is significantly lower than what we’ve found earlier (X8).

Unfortunately, the “Tech” category also has the biggest percentage of sites that have NO mobile strategy whatsoever: 51%. It would seem that we have either modern responsive sites or nothing at all.

Overall, adaptive sites are still the most efficient in terms of score, but responsive sites steal first place when it comes to height ratio, with a slightly bigger percentage of websites having a height rate of X5.

E-commerce sites fare better than most

For the “Shopping” category, mobile-friendly (separate URLs) and responsive sites are at a tie: 26%. Surprisingly though, adaptive e-commerce sites scored the biggest percentage of all categories: 6%. What would cause e-commerce sites to lean towards adaptive?

One plausible reason is their appetite for improving conversion rates, and therefore their attention to optimizing everything related to the buying funnel, which influences their mobile strategy as well.

If we analyze the height distribution for all three configurations (separate mobile URLs, responsive and adaptive), a pattern emerges: e-commerce sites, regardless of their mobile strategy, have the biggest percentages under an X5 height rate: 90%, 58% and 81% respectively.

One could speculate that the reason for keeping their pages shorter is the influence that a lower height ratio might have on conversion rates. On top of that, PageSpeed Insights scores are the highest here for all three mobile strategies: ~90% of mobile-friendly/adaptive sites and 75% of responsive ones scored at least 40 points.

[tweetable]Clearly, e-commerce sites are doing a great job at optimizing for mobile[/tweetable] and, regardless of their favorite strategy (adaptive sites seem to be leading by a small margin), they’re mastering it like no other.

Apps – still a luxurious game to play

Among the other categories, we’ve found that mobile applications are appealing too: close to 6% of sites from “Business” and “Sports” have created their own iOS application, even though most of these had previously opted for mobile-friendly sites: 9% and 15% respectively.

As a general rule, regardless of the category, if a site doesn’t have a mobile web strategy, chances are it won’t go for an application either. You’d expect it to be the other way around, but if a site owner has ignored all the mobile web strategies at their disposal, would they really be open to a more laborious and expensive approach (apps)? Not really.

Optimum Height and Page Size on Mobile

Analyzing ~10,000 sites in various categories surfaced a couple of guidelines that you might want to take into consideration when implementing your mobile web strategy. In essence, what’s the maximum height and page size that your mobile-friendly, responsive or adaptive site should have to ensure a 50+ PageSpeed score?

If we look at page sizes of less than 2MB, we see that mobile-friendly sites, particularly in the Sports category, come out on top with a little over 91%. We also notice that only 30% of responsive sites score over 50 on PageSpeed Insights. Again, just keeping the page size at this level isn’t enough to ensure a high PSI score, since caching can be equally important, but it’s a good place to start.

Interestingly enough, the “News” category is the only one where the majority (52%) of the adaptive sites with scores over 50 have page sizes between 2MB and 4MB. Even if adaptive sites seem to be able to “carry” more weight, their height rate should be kept at the minimum (X5) to ensure a good score.

Applying the same logic to height rates, X5 clearly seems to be the optimum rate for scoring 50+ on PageSpeed Insights. Once more, responsive sites have the lowest chance of scoring over 50, even when they aim for a lower height: 62%.

Is the Mobile Web Heading Towards “Appification”?

[tweetable]Today, mobile web consumption occurs mostly from in-app browsers[/tweetable]. Just look, for example, at the Facebook app, which used to open links in an external browser. Now it embeds a browser directly in the app, and that makes sense – they don’t want their users to have a broken experience.

With that in mind, shouldn’t mobile websites offer a more app-like experience? Isn’t the linear scrolling experience we see on responsive sites a bit outdated?

In all fairness, what we’ve concluded thus far doesn’t take into account various UI/UX aspects and doesn’t answer some critical questions: When does scrolling become too much for a mobile site? What’s the impact of a long scroll, and is this the reason for a poor mobile reading experience on the web? What else can we do to ensure that mobile users have a good experience on the web?

That’s why in the second phase of the study we’ll analyze how much scrolling is actually being done on a responsive page, on mobile devices. After gathering relevant information on how mobile users interact with responsive sites we’ll be able to complete the next phase of the study by answering some key questions:

  • Which mobile device and browser are they using to access a site?
  • How much time do they spend on a particular page?
  • What’s the maximum scroll depth they reach on a site?

From “Nice To Have” To Mandatory

We’ve seen that 40% of the top 10,000 Alexa sites from 5 categories (News, Tech, E-commerce, Business, Sports) don’t do anything when it comes to their mobile users. However, we’re in the middle of a big shift: [tweetable]having a mobile strategy on the web is no longer just “nice to have”[/tweetable]. If we take into consideration the impending change of Google’s mobile ranking algorithm, we can conclude that at this point any mobile web strategy is a good strategy.

As far as this first part of our study is concerned, from a technical point of view, a page that fits within 2MB and has a height rate of X5 has a good chance of scoring 50+ on Google’s PageSpeed Insights. Although adaptive websites are overall the most efficient in these respects, they are not very popular. And even though the study clearly shows that RWD is far from optimal, responsiveness is the leading strategy adopted when targeting mobile users.

If we add to the mix the “.app” top-level web domain bought by Google, the line between native apps and web apps is getting thinner and thinner. The Mobile Web is already evolving beyond responsiveness into something new and exciting where everything is an app instead of a site, where user interactions are more important than just page views and ultimately where all apps are interlinked into a Web of apps.

If you are interested in learning more about mobile app development read the third post of our series on Mobile App Marketing, on Business Models.

Categories: Business Tips

Vital Metrics For Tracking Your App’s Success

Making data driven decisions is key to driving growth in your mobile app, and it’s the reason that nearly every app developer in the world integrates analytics tracking within their app. In fact, in a recent Tapdaq survey we discovered that 90% of developers have implemented a third party analytics SDK into their app[bctt tweet=”90% of developers have implemented a third party analytics SDK into their app” username=”DevEconomics”].


However, we were surprised to then learn that only 5% of these developers knew what to do with the data points which they were tracking. After speaking with a large group of the developers questioned, we realised that many aren’t sure which metrics are most important, or what steps they need to take in order to improve.

As the App Store has matured, creating a chart-topping product has become a much more complex process. App analytics providers have moved with the times and now provide developers with more data than ever before on their app’s performance. In this post I am going to pick out 12 app metrics and explain why they are the most important data points when tracking your app’s success.

Acquisition

Growth of your app business starts at the user acquisition stage. Here there are several key questions which all developers ask themselves.

How many installs have I generated?

Tracking installs received as an overall figure is very easy, and all data is provided through iTunes Connect/Google Play.

How much have these downloads cost?

You have to know what your cost per install (CPI) is when paying to acquire new users for your app. [tweetable]If your CPI rises above your users’ lifetime value (LTV), then the campaign is unprofitable[/tweetable] and, unless you are propped up by strong organic install numbers, your business is going to struggle.

Working out your cost per install on mobile ad networks is straightforward, and nearly all networks now give this figure to you up front. If you are acquiring users via cost-per-install ad networks then I’d recommend you test multiple platforms, provided you have a large enough budget. In an interview with KISSmetrics, Wooga’s head of marketing, Eric Seufert, said the company used 23 ad networks to get their Jelly Splash game into the top charts of the key markets.
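
The arithmetic behind the CPI check is simple; a minimal sketch (all figures are placeholders):

def cost_per_install(ad_spend, installs):
    return ad_spend / installs

def campaign_is_profitable(ad_spend, installs, ltv):
    # A paid campaign only makes sense while CPI stays below the lifetime value of the users it brings in
    return cost_per_install(ad_spend, installs) < ltv

print(cost_per_install(20000, 10000))             # -> 2.0 dollars per install
print(campaign_is_profitable(20000, 10000, 2.5))  # -> True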

Where did these downloads come from?

When you see a spike in your app’s downloads, the first thing you want to know is where they came from. By using install attribution tools you can see a breakdown of all your installs by referral source, which gives you a far stronger idea of which networks can send you the greatest volume of users for the lowest cost.

How high-quality are the users within my app?

This question can’t be answered immediately, but, over time, [tweetable]cohort analysis can help you to get a better understanding of the quality of the users [/tweetable]within your app. Specifically, when working with multiple paid traffic sources, you can work out which platform provides you with the most real value beyond just the install itself.

For example, Ad Network A might have sent you 10,000 installs for $20,000, whereas Ad Network B sent 10,000 installs for $15,000. If looking at the CPI alone, logic would say that Ad Network B is the preferred choice here. However, using cohort analysis you may discover the users from Ad Network A have an LTV of $2.50, whereas the users from Ad Network B only have an LTV of $1.50. So, in terms of real value, Ad Network A would actually be the optimal solution.
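
Putting that comparison into code makes the point clear (a sketch using the figures from the example above):

networks = {
    "Ad Network A": {"spend": 20000, "installs": 10000, "ltv": 2.50},
    "Ad Network B": {"spend": 15000, "installs": 10000, "ltv": 1.50},
}

for name, n in networks.items():
    cpi = n["spend"] / n["installs"]
    expected_value = n["installs"] * n["ltv"]   # value the cohort is expected to generate over its lifetime
    net = expected_value - n["spend"]           # real value beyond the install itself
    print(name, "CPI:", cpi, "expected value:", expected_value, "net:", net)

# Ad Network A: CPI 2.0, expected value 25000, net +5000
# Ad Network B: CPI 1.5, expected value 15000, net 0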

Engagement

In mobile analytics, the fun really starts after the install. With app engagement there are multiple metrics that need to be tracked in order to paint a picture of how engaged your users really are.

Session Length

Tracking your session length is not as straightforward as it sounds. Different analytics companies have different definitions of a session. For example, Flurry deem a session to start when an application is opened and to end when the app is terminated. By default, a session is deemed terminated if a user leaves the app for more than 10 seconds, although this logic can be changed.

In contrast, Google Analytics only consider a session to be over after 30 seconds of inactivity, although again this can be customised to any required time.


The key takeaway here is to ensure you know exactly how a session is defined within your app, as the definition varies depending on who your analytics provider is.
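
As a rough sketch, this is how an inactivity timeout turns a raw stream of event timestamps into sessions (10 seconds mimics Flurry’s default, 30 seconds Google Analytics’; the timestamps are in seconds and purely illustrative):

def split_into_sessions(event_timestamps, timeout_seconds=10):
    # Events further apart than the timeout start a new session
    sessions = []
    current = []
    for ts in sorted(event_timestamps):
        if current and ts - current[-1] > timeout_seconds:
            sessions.append(current)
            current = []
        current.append(ts)
    if current:
        sessions.append(current)
    return sessions

events = [0, 5, 20, 60]
print(len(split_into_sessions(events, timeout_seconds=10)))  # -> 3 sessions
print(len(split_into_sessions(events, timeout_seconds=30)))  # -> 2 sessions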

Time in App

This is a marketing metric that is sometimes confused with session length, and is also often classed as a retention metric.

Where session length describes how long a user’s individual session lasts, time in app is used to measure how long a user spends within an app, in total, over a given length of time. For example, a user could spend 2 hours in an app over the course of a week, and this could comprise multiple sessions. The more time a user spends in your app on a daily basis, the better your chances are of monetising that user.

Popular Pages/Features

Understanding which pages and events are most popular in your app helps you paint a better picture of which content is most valuable to your users. More importantly, it also highlights weaknesses too. By tracking your page visits and conversion funnels you are able to see exactly where users drop out from your app, and this enables you to make data driven decisions on what content to improve in order to get more users reaching the ‘aha’ moment in your product.

Retention

It is argued that user retention is what sets apart a top grossing app from its competitors. Whilst user engagement tracks how long users spend within your app, user retention focusses on how often customers visit your app. It’s worth bearing in mind that a digital product can be a huge success, even if engagement is low, providing retention rates are high. For example, Google is a very successful tech company with sky high retention rates, yet engagement is relatively low.

Let’s take a look at the most important retention metrics you need to be tracking…

Retention Rate

User retention rate can be calculated in a number of ways. However, probably the most popular method is rolling retention (which is actually the default method used by Flurry Analytics).

To calculate this, you look at the number of users returning to your app on Day N or any day after that, and divide it by the number of users who installed your app on Day 0.
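
In code, rolling retention for Day N might be sketched like this (user IDs and dates are placeholders):

from datetime import date, timedelta

def rolling_retention(installs, activity, day_zero, day_n):
    # installs: {user_id: install_date}; activity: {user_id: [dates the user opened the app]}
    cohort = {u for u, d in installs.items() if d == day_zero}   # users who installed on Day 0
    cutoff = day_zero + timedelta(days=day_n)
    # Rolling retention counts a user if they came back on Day N *or any day after it*
    returned = {u for u in cohort if any(d >= cutoff for d in activity.get(u, []))}
    return len(returned) / len(cohort) if cohort else 0.0

installs = {"a": date(2015, 5, 1), "b": date(2015, 5, 1), "c": date(2015, 5, 2)}
activity = {"a": [date(2015, 5, 9)], "b": [date(2015, 5, 3)]}
print(rolling_retention(installs, activity, date(2015, 5, 1), 7))  # -> 0.5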

Here’s a great graphic from the Applift blog that summarises this calculation…

[Graphic: rolling retention calculation, via the Applift blog]

It’s going to be very interesting to see exactly how powerful Apple make their analytics tracking within iTunes Connect. A sneak preview has been posted on the AppTweak blog, showing a screenshot of a cohort table used for tracking retention…

[Screenshot: cohort retention table in iTunes Connect, via the AppTweak blog]

Daily and Monthly Active Users

[tweetable]Your daily and monthly active user count is another measure of just how ‘sticky’ your app is[/tweetable]. Over time, successful apps look to grow the gap between their daily downloads and their daily active users, and they do this by ensuring they add as much value to their customers as often as possible.

Again, both these metrics are heavily tied with engagement. The more engaging the content is within your app, the more likely it is you will retain your users. The more often you can provide them with value, the higher your DAU count will be.

Churn rate

User churn rate is a key metric to understand, particularly when you come to calculate your user lifetime value (LTV). Churn rate is the opposite of retention rate and is the measure of how many users stop using your app over a given period of time, usually a month.

Churn rate is expressed as a percentage of the number of people who could have left, so it cannot be lower than 0% or higher than 100%. An example: if your app has 100 users, then 100 people could leave/stop using your app this month. At the end of the month, 23 users have stopped using your app, which means you have a churn rate of 23%.
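
The same example as a quick calculation:

def churn_rate(users_at_start, users_lost):
    # The fraction of users who could have left that actually did, over the period
    return users_lost / users_at_start

print(churn_rate(100, 23))  # -> 0.23, i.e. 23% monthly churn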

Monetisation

Average Revenue Per User (ARPU)

This metric is often confused with LTV, but it’s actually a far simpler data point to track. ARPU is the revenue you generate, on average, from each user of your application, and this can be calculated by simply adding up the revenue your app generates each month, and dividing it by your total number of users.

LTV

Lifetime Value, often shortened to LTV, is the measure of the revenue a customer will bring during their lifetime of using your application. In our recent Tapdaq survey, amazingly all 60 of the developers we spoke to said it was the most valuable metric in app marketing.

To calculate the LTV of one of your users there are several data points you need to know. They are:

  • Customer Churn: As described above…
  • Income: This includes all revenue from In-App Purchases and subscriptions, after Apple has taken their 30% cut, and any income from selling advertising space within your applications.
  • Number of active users: This one’s fairly obvious, it’s the number of active users your application has. The definition of what a “user” or “active user” is will vary depending on your application and your business model. If you have a mix of active users where some generate income and some don’t, include them all. This mix will likely continue as your app grows.
  • Average Revenue Per User (ARPU): As described above…

Once you have collated all the numbers above, just plug them into the equation below to discover the average LTV of your users.
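
A widely used simplification multiplies ARPU by the average customer lifetime, which is the inverse of the monthly churn rate (treat this as a back-of-the-envelope version rather than the exact equation from the original graphic):

LTV ≈ ARPU × (1 / monthly churn rate)

For example, with an ARPU of $0.50 per month and 23% monthly churn, the average customer lifetime is about 1 / 0.23 ≈ 4.3 months, giving an LTV of roughly $2.17.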


App Success Sum Up

There are quite a number of important metrics that you need to be tracking and improving on in order to make your app a true success. Always look at the wider picture and evaluate how the metrics affect one another. If your team is small, or you are an indie developer working alone, then I’d recommend starting by iterating your product with a focus on increasing engagement, retention, and your average user LTV. You don’t need to have millions of users on board in order to build a truly great mobile app. Test heavily, and make data-driven decisions in order to position yourself in a place where you can start to invest in acquisition with confidence in your product’s quality and monetisation.

Categories: Business Tips

How to beat 2/3rds of app competitors

Our mission here at Developer Economics is to help developers create a better app business. A recent survey from App Promo highlights the pain points once again, and offers some hints about the solution.

Let’s start with the bad news…

4 out of 5 developers admit that their app doesn’t make enough money to be considered a standalone business. 2 out of 3 don’t break even. This confirms our results from the Developer Economics 2013 report, where 67% of developers who want to earn money live under the App Poverty line (revenues of less than $500 per month).

Despite these disconcerting numbers, and despite developers citing discovery, making money and turning the app into a business as their main challenges, most developers undervalue the importance of marketing their applications. According to the App Promo survey, 2 out of 3 developers don’t have a marketing budget, and a quarter of developers don’t market their app at all.

And yet there is hope. A full 81% of developers said that they would not abandon their app. App Promo found that the survivors (experienced developers with apps that have been over 3 years in the market) have succeeded in creating an interesting app business. They report revenues earned to date of over half a million dollars, 100% of them break even and 78% consider their app successful enough to be a standalone business. And yes, they do market their apps: over half of the respondents in this group have marketing budgets of over $1,000 per month.

App Promo’s results also include differences between platforms, the use of various revenue models and marketing techniques, and more. The full results can be downloaded here.


Categories: Business Tips

Why You Should Localize from Day 1 (And How to Do it Painlessly)

Localization is rarely discussed (and often overlooked by developers), but it is increasingly important in today’s economy where mobile development is a global industry. The United States ranks fourth, behind South Korea, Hong Kong and Taiwan in the number of mobile device users per capita. Singapore, Israel and a quartet of European countries round out the top 10.

Localization is certainly worth the effort. A 2007 paper by the Localization Industry Standards Association (LISA), for instance, reported that $25 was returned for every $1 invested in localization. And a 2012 publication from Distimo revealed that, on average, applications increased their iPhone download volumes by more than 128% the week after introducing the app in the user’s native language.

But localization can also be a huge undertaking.

Localization can be expensive and cumbersome

But it doesn’t have to be.

Currently, there are three approaches to translation: manual, automated, and hybrid. Each has its own benefits and drawbacks:

Manual – Employees, contractors, volunteers, or language vendors serve as translators. Emails, FTP servers, and spreadsheets are the primary tools for workflow management.

  • Benefits: Accurate, aware of brand identity, and sensitive to context, tone and style.
  • Drawbacks: Expensive, cumbersome and slow
  • Examples: Applingua, LocTeam, WordCrafts

Automated – Computer software is used to translate text from one language to another. Also known as “Machine Translation.”

  • Benefits: Fast, efficient, and low cost
  • Drawbacks: Imprecise, lacks keyword recognition, and insensitive to style
  • Examples: Google Translate, Bing Translator, SYSTRAN, SDL Language Weaver

Hybrid – Human translators are paired with a localization platform that helps automate the localization workflow for developers, product/localization managers, and translators. It brings the benefits of the manual and automated approaches and the drawbacks of neither.

  • Benefits: Efficient, accurate, and sensitive to context, style and tone.
  • Drawbacks: Initial learning curve upon startup
  • Examples: Transifex with Gengo, OneSky

After a few years of trying to build super-computers that understand human language like only a human can, the localization industry is now leaning toward the hybrid approach, which still brings a great deal of processor power to bear. The difference, though, is in the personal touch that only someone with skin on can provide. A machine cannot understand context or tone. A machine cannot understand the difference between “manual” meaning “by hand” and “manual” meaning the skateboard trick. It can’t inject energy into a paragraph with an unexpected word choice. Those are things that only humans can do.

When to localize?

Once you have selected one of the above solutions, the issue of workflow remains: besides deciding how your localization will get done, you also need to decide when in the development process your app will be translated. Developers have traditionally approached localization in one of two ways:

In Development – Some developers opt to add an extra step to the release process after which no strings (also referred to as “segments”) may be added, edited, or deleted. This is sometimes called a string freeze. It gives translators the necessary time to work on and test translations without fear of changes. Following this point, only minor bugs may be addressed – strings cannot be changed.

After the strings are translated, they are returned to the developers for use in the final release. This process is then repeated for the next release of the software. This process slows down the release of the software in all languages quite a bit.

Post-Release – The second approach is to release the software and add translations afterwards. This means some pages will not be translated at all or, for software with a previous release that has been translated in the past, only partially translated. With this approach, companies are unable to do a simultaneous release in multiple languages.

Introducing: Continuous Localization

Using either of these traditional workflows means localization will be performed in large batches, making it incompatible with today’s agile processes. It delays the availability of the software in languages other than the source language. And every delay is a missed opportunity to create new customers and generate more revenue.

A new solution is available that moves at the speed of today’s development teams, without demanding development stoppage. The idea is that as soon as a new piece of content gets generated for your app, it’s made available for translation. As soon as a new piece of content gets translated for your app, it’s made available to your users.

It looks like this:

[Diagram: the Continuous Localization workflow]

We call it Continuous Localization, and it is really only possible with the use of a Continuous Localization Platform to house and manage the entire localization process in the cloud. These systems, now emerging on the market, harness the power of the cloud and APIs to enable a nuanced human-driven translation at the speed of continuous deployment.
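
As a sketch of what the “push strings as soon as they change” step can look like in practice, the snippet below uploads the latest source-language strings from a CI job. The endpoint, project slug and file path are hypothetical, not any specific vendor’s API:

import requests

def push_source_strings(api_token, resource_file="values/strings.xml"):
    # Run on every merge so translators can start immediately, instead of waiting for a string freeze
    with open(resource_file, "rb") as f:
        resp = requests.put(
            "https://translation-platform.example.com/api/projects/my-app/resources/strings",  # hypothetical endpoint
            headers={"Authorization": "Bearer " + api_token},
            files={"content": f},
        )
    resp.raise_for_status()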

Categories: Tips

Backend-as-a-Service Shootout (the best alternatives to Parse?)

Using a Backend-as-a-Service (BaaS) can reduce development cost and time-to-market. It’s a simple way of getting a highly scalable backend solution without significant upfront investment. This avoids the technical risks of having to scale your own service to meet demand as your user base grows; in a world where an app that hits the store top charts might gain more than a million new users before you complete your next iteration of development, this is worthy of serious consideration. In most cases the tradeoff is giving up control of your backend. This tradeoff was brought into the spotlight recently when the most popular BaaS, Parse, was acquired by Facebook. This created a predominantly negative reaction from developers, who went from buying a service from a neutral party to hosting their backend with a company many already distrust and that has an interest in mining their app data. So, if you’re looking for a BaaS for a new project but don’t want to share your data with Facebook, or want to migrate away from Parse, where do you go? Our last survey asked developers using BaaS offerings to rate their primary tool against a range of criteria – the results could highlight some attractive alternatives.

Splitting out the 8 tools which had more than 10 ratings each, the “other” category is still almost 25% of responses and includes a further 11 tools that developers had selected as their primary BaaS. Our own sector page lists 43 vendors at the time of writing, suggesting that the sector is still very fragmented and likely to see consolidation in future.

[Chart: BaaS Shootout – developer ratings]

Some popular BaaS options tied to other tools

Parse was by far the most popular, with almost 2.5 times as many responses as Appcelerator’s Cloud Service, the next most popular. Appcelerator’s service is fairly heavily tied to their popular cross-platform tool (CPT), much like the Sencha offering, which had a very similar number of responses. However, while Sencha’s BaaS had the highest developer satisfaction in our survey, Appcelerator’s was the lowest of the top eight. This mirrors the satisfaction levels for the corresponding CPTs. While sencha.io may look attractive on developer ratings, adopting it implies using (at least some of) the Sencha libraries for cross-platform web development too – although this tool scored highly on cross-platform availability (the web works everywhere), there are no native SDKs.

Applicasa switched focus

Just behind sencha.io for developer satisfaction was Applicasa. However, while our survey was running Applicasa were in the middle of a mini-pivot from a generic BaaS to a “Mobile Game Management Platform”, having recognised that the generic BaaS sector was exceptionally crowded. They haven’t yet come out of beta or announced pricing, although this is likely to reflect their value-adding services for game developers. If you’re looking for a BaaS offering with extra functionality for mobile games then Applicasa may be worth a look.

Open source or specialised

Behind Applicasa comes Parse, closely followed by Deployd and CloudMine. Deployd does not yet have a production hosting solution, so it’s currently just an open source project that you host your own instance of on Heroku or Amazon. That’s also an advantage in that you can modify the code and you’ll always control your own data. Another open source BaaS option like this, Helios, was recently launched by Heroku themselves. If you can take on responsibility for some of the maintenance of the backend in order to maintain control of your backend code and data then this kind of open source option is very attractive. CloudMine on the other hand is focussing on larger corporate clients – they’re targeting enterprises and agencies producing lots of apps. Like Applicasa, they’re specialising to target what they see as a more profitable niche and trying to avoid mass market generic BaaS competition.

Further acquisitions likely – select with care

The remaining popular BaaS options in our survey scored below the average for “others” on developer satisfaction. However, just by looking at the top handful we can see some trends emerging for this still immature sector. The generic BaaS space is all about scale. The remaining vendors fighting for this market are likely to get acquired by a larger company, or run out of cash trying to compete. It was implied that there were multiple parties interested in acquiring Parse, who are presumably still in the market for a similar solution. If the acquirer of your chosen BaaS is a PaaS vendor then the service should continue to evolve and developers’ data should remain private. The large PaaS vendors are likely to build or buy a more complete BaaS solution – we already see this with Helios and Windows Azure Mobile Services. Other companies interested in buying a BaaS vendor might want to integrate it with their own analytics (as with Flurry buying Trestle) or other developer services, secure a key supplier or just get a closer relationship with mobile developers. There may also be large enterprises that snap up a small BaaS vendor for their own internal use. Other BaaS vendors will specialise towards specific developer segments.

If, like most developers, you’re still experimenting in the market and not yet building your own services with a long-term view, then a BaaS that’s specialised to your app category might be a great option. For those looking to select a common backend architecture that they’ll re-use across multiple products, or a platform to build on top of for the longer term, the open source frameworks look like the safest option in the current market.

Categories: Business Community Tips

Test Early, Test Often, Test on Everything?

Testing any mobile app presents a wide range of challenges. The often repeated but rarely followed software best practice of “test early, test often” is harder than usual to adhere to, due to the fragmentation of the target environment and the relative immaturity of the tools. The increased acceptance of apps by mainstream consumers and intense competition have raised the bar for user experience and quality. There is more to test than ever, yet often very limited budget for doing so. Fortunately, every challenge presents an opportunity, and a vast array of tool vendors are racing to fill the gaps.

What to test?

Much of the traditional software testing literature focuses on various forms of functional testing – ensuring the system does what it’s meant to do. With a strong trend towards simpler, single purpose apps, this is often the easiest thing to verify in a mobile app project. There is now a much stronger focus on the user experience and this requires testing of an entirely different nature. The most effective way to test that an app is easy (or even fun) to use is to get feedback from real users. Doing that and finding major issues after the app has been built is a very expensive mistake to make, so most developers and designers will want to create mock-ups or prototypes for early feedback. There’s a wide range of tools to help with this task from simple wireframing through to full interactive prototyping. Given the importance of animations within mobile apps to enable users to discover interface interactions and learn to navigate, more complete prototypes are becoming increasingly desirable. As users become more sophisticated and specialist tools reduce the time and effort required to create interactive prototypes this trend is likely to continue.

With the majority of app store revenues coming through in-app purchases, another more specialized form of testing the design of an app is becoming increasingly important – split testing. On the desktop web, tools for trying out design and copy variants to optimize sites for specific user behaviours are very mature and the best of them can be used by staff with no development skills. In the mobile world most of the tools in this space are still very immature and developer-focussed. The responsive design trend on the web and the more restricted deployment options for native apps make this a more challenging problem for mobile devices but we expect the tools in this sector to mature rapidly.

[sectors slugs=’prototyping-mockup’]

When to test?

The earlier you find problems with software, the cheaper it is to fix them. As such, it makes sense to start testing as early as possible. How about testing the idea for the app via a mobile market research service before you even create your first wireframes? It’s worth considering – if you can’t generate interest in your app idea with a simple pitch it’s not going to be easy to get people to download it from the store either.

For most apps (particularly native apps) it’ll be worth using one of the mock-up or prototyping tools mentioned above and testing the design before you start coding the real app. It’s much cheaper to iterate a simple design prototype than a native app. However, you’ll still want to try out the actual app with real users before you launch it. To help with that there’s a range of beta testing services that can help you distribute your beta app and find and/or manage testers. There are also services to help you get feedback from your users before and after the app launches. Providing a highly accessible feedback channel for users in the app is your best hope for preventing the inevitable disgruntled few from leaving bad reviews.

Ideally an app will be developed and tested iteratively with functional testing of new features and full regression tests for the existing functionality run for each iteration. This level of testing can get extremely expensive and time consuming unless it is automated. Fortunately there are several tools, open source frameworks and third party services that can help out there too.
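
As an illustration of what one such automated check can look like, here is a minimal sketch using the Appium Python client; the capabilities, app path and element ID are placeholders, and your framework of choice may well differ:

import unittest
from appium import webdriver

class LoginScreenTest(unittest.TestCase):
    def setUp(self):
        caps = {
            "platformName": "Android",
            "deviceName": "test-device",        # placeholder
            "app": "/path/to/app-debug.apk",    # placeholder
        }
        self.driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)

    def test_login_button_is_shown(self):
        # One small functional check that can run against every build
        button = self.driver.find_element_by_accessibility_id("login_button")
        self.assertTrue(button.is_displayed())

    def tearDown(self):
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()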

[sectors slugs=’beta-testing,feedback-helpdesk,automated-app-testing’]

Where to test?

Another major problem for mobile developers is the scale and fragmentation of the market they’re trying to serve. Collecting a full library of test devices with major firmware variants is way beyond the budget of most developers, let alone the effort that would be required to test manually on all of them. Automated testing solutions can help here also and some services provide access to a large set of devices for remote testing too. However, it’s simply not feasible for most developers to test every version of their apps on all the device and firmware combinations they support. This limitation means some bugs are almost guaranteed to escape into the wild; the important thing then becomes how quickly you discover and fix them. For this reason, crash analytics and bug tracking tools are becoming increasingly important. Another useful weapon in this battle is your usage analytics data – it can enable you to focus testing on the devices which are most popular amongst your user base and also spot changes in use on particular device models that might signal a non-fatal error that’s causing users to abandon the app.

Finally, for some apps, where they are tested geographically may also be important. Do you know what the performance of your app is like for users who are far from your servers? If you use SMS, do you know how long it takes to reach users on different networks around the world (or whether it even gets there)? Have the localisations for your app been tested by native speakers? Our automated testing and app certification sectors include companies that can crowdsource beta testers or provide access to software testing professionals almost anywhere in the world to help you scale globally without leaving your desk.

[sectors slugs=’crash-analytics-bug-tracking,user-analytics,automated-app-testing,app-certification’]

Categories: Business Tips

How to Optimise Ad Revenue

On the surface, advertising seems like a fairly simple and easy-to-implement business model for an app. Decide on some places to display ads, integrate one or more third-party ad services and wait for the money to start rolling in. If you do this without a clear plan for how and why users will interact with ads in your app, you’ll probably find the revenue disappointing. You optimise revenue by growing your user base, increasing engagement and improving your fill rate (how many of your possible ad slots actually show an ad) and eCPM (effective cost per thousand impressions). The challenge becomes apparent when you try to improve these metrics and find them somewhat opposed to one another. Show too many ads and users either use your app less or abandon it altogether. Make the ads smaller or display them less prominently and your click-through ratio (and hence eCPM) goes down. Show a lot of irrelevant ads prominently (a higher fill rate typically means less relevance on average) and users start ignoring all of them by default. Make your ads prominent and they’d better be relevant. If it’s hard to target your audience well, then keep the advertising low key and count on volume to make up the revenue. It turns out that getting the right balance is very difficult, and not many developers manage it.
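
The relationship between these levers can be captured in one rough formula (a sketch with made-up numbers; ad networks report these figures for you):

def monthly_ad_revenue(active_users, ad_requests_per_user, fill_rate, ecpm):
    impressions = active_users * ad_requests_per_user * fill_rate
    return impressions / 1000.0 * ecpm

# 100,000 engaged users seeing a modest number of well-placed ads...
print(monthly_ad_revenue(100000, 60, 0.9, 0.5))  # -> 2700.0
# ...can out-earn the same audience seeing fewer, pricier ads
print(monthly_ad_revenue(100000, 15, 0.9, 1.5))  # -> 2025.0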

Higher eCPM != higher revenue

A very small fraction of developers in our survey managed to achieve truly exceptional eCPMs, greater than $5, sometimes even more than $10. These developers were almost all making multiples of the average revenue in total, but they were also using more than one revenue model. It’s likely that most of their revenue was coming from another source and they showed very few, highly relevant ads to get those rates. If we focus on the developers who only used advertising as a revenue model, then those with eCPMs below $0.25 were earning significantly more on average than those with eCPMs from $0.25 to $1.50. So, for the majority of developers, those with higher eCPMs make less money. To add to the confusion, the size of the active user base is also very weakly correlated with ad revenue; the simple concept of getting more users to make more money from ads doesn’t work on its own either.

Why iOS developers make more with ads

A good illustration of the complexity is to compare iOS and Android developers. As reported by several ad networks, iOS gets a higher eCPM on average than Android. However, our survey data suggests that the difference is all at the very high end. If we exclude the developers with eCPMs over $5, iOS actually has a lower eCPM than Android. For those only monetizing via ads, Android developers had a 37% higher eCPM and while the iOS developers only had 20% larger user bases on average, they earned almost 75% more revenue every month. This suggests that the iOS developers were seeing very much higher engagement with their apps and thus delivering many more ad impressions.

The fallback network fallacy?

There’s some popular advice that in order to maximise ad revenue, developers should use at least two ad networks, one with a high eCPM and low fill rate and another with a lower eCPM but very high fill rate. The theory is that this ensures they get the most relevant targeted ads with the best rates when they’re available but still don’t waste the inventory by filling it with less relevant lower paying ads when they don’t get filled by the premium network. This fallback strategy sounds logical but does it work? There’s some possible support for this in the fact that developers using more than one ad network make slightly more money on average than those who only use one (and 70% of developers purely monetizing through ads only use one network). However, this seems more likely to indicate greater sophistication than successful use of the fallback strategy. The ads from the two different networks are unlikely to fit the same presentation, positioning and format within an app well. There’s some fairly strong evidence against this fallback strategy within our survey data – developers with eCPMs above $5 are excluded from the following sample and so are those earning greater than $50k per app per month in revenue due to the disproportionate effect both tiny minorities have on averages.

Note that 67% of developers using ads selected neither eCPM nor fill rate as reasons for choosing their ad networks, and that percentage is mirrored in the group only using ads for monetization. Those that selected one criterion or the other, but not both, generated slightly higher eCPMs than average and significantly higher revenues. Developers trying to maximize both metrics to squeeze the maximum possible revenue out of the advertising space in their apps generated the highest eCPMs but did far worse than average on revenues.

Optimise for engagement

This data suggests that developers should pick an appropriate advertising style for their app and try to go for either quality or volume of ads displayed but not both. Considering that the most popular advertising services use a cost-per-click model, the highest eCPMs are likely to simply reflect higher click-through ratios. In many cases the taps on ads may be accidental. Ads getting in the way of the content or usage of an app result in fewer users or lower usage and thus lower revenue. It seems that by far the best way to optimise ad revenue is to build app experiences that people want to spend a lot of time using and make sure the ads don’t spoil them. The extra volume of impressions generated by tens or hundreds of thousands of engaged users will more than make up for lower eCPMs or ad inventory not getting filled.

Categories: Platforms Tips

A/B Testing on the Amazon Appstore

In December Amazon launched a new A/B testing service for Android apps on the Amazon Appstore. Integrating A/B testing, particularly for in-app purchase related events, into the store portal is a welcome addition. What’s slightly disappointing, considering this comes from a store provider, is that the service does not support testing different copy or icons on the storefront itself – only in-app A/B tests, for which there are already third-party alternatives.
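
For readers new to the technique, the core of any in-app A/B test is giving each user a stable variant assignment and comparing conversion between the groups. A generic sketch (not the API of Amazon’s SDK) looks like this:

import hashlib

def assign_variant(user_id, experiment="iap_price_point", variants=("A", "B")):
    # Hash user + experiment name so a given user always sees the same variant
    digest = hashlib.md5((experiment + ":" + user_id).encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def conversion_rate(purchases, exposures):
    return purchases / exposures if exposures else 0.0

print(assign_variant("user-42"))
# After the test has run, compare in-app purchase conversion between the groups
print(conversion_rate(57, 1000), "vs", conversion_rate(73, 1000))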

Not a “leaner” alternative store

When we looked at the developer platform “lean factor” for Android previously, we were only considering Google Play. Android came out on top due to the ability to publish new versions to the store almost immediately, enabling much faster iteration. However, the Amazon Appstore does have a review process which introduces significant publishing delay. As such even with the convenient addition of A/B testing baked into Amazon’s store, Google Play still offers a better environment for implementing lean development.

Testing in a smaller market

That said, there is at least one interesting use of this new service. Amazon’s Appstore has historically had a much higher proportion of users paying for downloads and in-app purchases than on Google Play – so much so that in early 2012 many developers made more money on Amazon’s store despite the much smaller audience. A smaller market with a high proportion of paying users is ideal for A/B testing improvements with the most important user group, particularly improvements to in-app purchase flows. Testing new variations on Amazon’s store and then rolling the successful ones out to a Google Play version might be a useful strategy for increasing revenues.

But how much smaller?

A final question is how much smaller is Amazon’s store in terms of downloads? They are notoriously silent when it comes to absolute numbers and even the relative numbers they give are ambiguous. In the press release for the A/B testing service linked above they say that app downloads have grown 500% over the previous year. This was in a release at the beginning of December with no fixed reference point given for the comparison. As can be seen from the relative growth chart produced by Distimo at the same time last year below, the launch of the original Kindle Fire at that time produced similarly massive growth.
[Chart: Amazon Appstore relative download growth, via Distimo]
Depending on where the comparison is made, a 500% increase could be anywhere from slightly behind January’s download level to 6 times the December 2011 peak. Until they provide some concrete numbers, the best advice is to try it and see what your own results are like. The only major difference we’re certain of between the app-buying users on the two stores is a higher proportion of tablet users at Amazon’s.

Categories: Business Community Tips

Developer Story: Lyft

Sebastian Brannstrom, Lead Engineer for Lyft at Zimride, talked to us about their app and the business that the technology enables. Sebastian has been working in mobile software since 2006, initially on Symbian and then transitioning to iOS, Android & Web by way of a side project created in collaboration with designer and product manager Anna Alfut. In 2011 he joined VC-funded startup Zimride, who at the time had only a handful of engineers, to create social ride-sharing services. Zimride’s initial service was an online marketplace for people to sell seats in their car on longer journeys. It was (and still is) growing, but relatively slowly by Silicon Valley startup standards.

App Background

The company decided to create a new real-time marketplace for shorter trips and hence Lyft was born in 2012. Lyft has iOS & Android apps with two modes, driver and passenger. Lyft drivers are thoroughly screened, background checked, trained and insured with a $1M excess liability policy. Passengers can use the app to request rides that are tracked by the service, which suggests a minimum donation to the driver at the end. The driver mode notifies drivers of a nearby pickup request and gives them a short time window to accept it before it’s passed to another driver. The whole system enforces use of Facebook for identification to provide some additional security.

 

Track Record

The concept caught on and quickly became the main focus of the company. The engineering team has roughly tripled in size and the growth of the service is only being limited by how quickly they can recruit, screen and train drivers. Whilst they advertise for drivers on services like Pandora, Spotify and Craigslist, they have never marketed to passengers at all, apart from their signature giant pink mustaches on participating cars. Word-of-mouth marketing at its best, straight out of Seth Godin’s Purple Cow playbook. They have hundreds of registered drivers and tens of thousands of passengers in their first city, San Francisco. The company was nominated for three Crunchie awards and named runner-up in the “Best New Startup of 2012” category. According to TechCrunch, they very recently closed a $15M series B round of venture capital funding and have also just launched their service in a second city – Los Angeles. Open job vacancies make it clear they’re planning significant further expansion.

Competition

The disruption of transportation enabled by near ubiquitous smartphone adoption is an opportunity several startups are attempting to exploit. Lyft faces direct competition locally in San Francisco from SideCar, whilst Uber provide a high end alternative and have stated an intention to create a direct competitor in the lower cost segment. Fairly high-profile competitors with similar technology but not yet competing in the same geographical markets are Heyride, HAILO and Taxibeat, although the latter are enabling existing taxis with similar technology rather than encouraging peer-to-peer ride sharing. There is also indirect competition from existing taxi services.

Business Model

Lyft do not monetize their apps directly: the app is free to download and there are no in-app purchases for new features. Like almost all online marketplaces, Lyft make money by taking a cut of the transactions on the market. In this case the transactions are donations from the passenger to the driver. These are entirely voluntary (which gets around legal issues with drivers using their vehicles for commercial purposes), but the app provides a suggested donation and drivers can set a minimum average donation – passengers that don’t pay much/anything are likely to find that no-one will accept their requests very quickly.

Lessons Learned

A successful service is much more than an app. The technology only enables the business at Lyft. Sebastian was quick to point out that the key to the success of the company is the operations team, building a community of drivers and passengers. If they’d simply built the technology and put it out there to see who wanted to use it, it’s very unlikely they’d be enjoying the growth they see now.

Projects will expand or contract to fill the time available to them. The initial concept for Lyft was originally scoped out as an 8-week development for a team of 5 (3 engineers, a designer and a product manager). One of the founders, playing devil’s advocate, said “what if you’ve only got 2 weeks to do it”. This forced them to really cut the concept down to a true Minimum Viable Product. They eventually got the first version built in 3 weeks (server and iOS app) – even today there are still several of their original requirements sitting at the bottom of their backlog unimplemented. The things you think will be essential parts of a service can often turn out to be unimportant for real users.

Team chemistry is essential. It would have been impossible to build such a complex service so quickly without fantastic collaboration. The relationships and collaborative working mode are more important than physical location – Lyft has been hiring top talent from around the world and sorting out visas and relocation to San Francisco afterwards. Sebastian was based in London when he was hired, their iOS lead was in Uruguay and the Android lead in Russia (the extreme time difference was sometimes an issue in the latter case).

What’s in the Lyft toolbox?

Like many successful development teams, Lyft use a lot of third party tools to help build their product:

Also, although they have built their own backend service, creating a highly responsive notification system was a challenge they solved with a combination of polling for updates when sending driver location, the Apple Push Notification Service, Google Cloud Messaging and a paid service from Pusher. However, the latter was initially a source of many crashes due to immature client libraries (Pusher only provide official support for a JavaScript client library, other platforms are community supported).

Sebastian’s desire for the tools space was very much in-line with our outlook in the latest developer economics report – consolidation. Fewer SDKs to integrate and fewer monitoring consoles to log into.

King for a day

Finally, if Sebastian could change just one thing about the platforms he works with, what would it be?

Better support for web/native hybrid app development (Lyft explored and abandoned that approach), with the Android WebView particularly in need of improvement, was a close contender, but top of the list for fixing was the Apple App Store review process. 5–10 days of waiting, and they can see from their server logs that the reviewer doesn’t even log in to the app with Facebook Connect before approving it. There must be a better way.