
Infographic: Programming languages adoption trends 2021

In our last infographic, JavaScript was the most popular programming language. What has changed in terms of community sizes in the last six months? You can find the answers in this infographic, with key findings from our Developer Economics 20th edition survey, which ran between November 2020 and February 2021 and reached 19,000 developers worldwide.

JavaScript is the queen of programming languages

JavaScript is the most popular programming language by some distance, with nearly 14M developers using it globally. More importantly, the JavaScript community has been growing in size consistently for the past three years. Between Q4 2017 and Q1 2021, more than 4.5M developers joined the community – the highest growth in absolute terms across all languages. Even in software sectors where JavaScript is not among developers’ top choices, like data science or embedded development, about a fourth of developers use it in their projects.

Python is conquering the world

Since it surpassed Java in popularity at the beginning of 2020, Python has remained the second most widely adopted language behind JavaScript. Python now counts just over 10M users, after adding 1.6M net new developers in the past year alone. That’s a 20% growth rate, the highest across all the large programming language communities of more than 6M users. The rise of data science and machine learning (ML) is a clear factor in Python’s popularity. Close to 70% of ML developers and data scientists report using Python. For perspective, only 17% use R, the other language often associated with data science.

Kotlin’s rise continues

The fastest growing language community in percentage terms is Kotlin. In fact, it’s one of the two communities – the other being Rust – that have grown more than two-fold over the last three years, from 1.1M developers in Q4 2017 to 2.6M in Q1 2021. This is also very evident from Kotlin’s ranking, where it moved from 11th to eighth place during that period – a trend that’s largely attributed to Google’s decision to make Kotlin its preferred language for Android development. Even so, Kotlin still has a long way to go to catch up with the leading language in mobile development, Java; there are currently twice as many mobile developers building applications in Java as in Kotlin.

Swift was recently outranked by Kotlin, after attracting fewer net new developers in the second half of 2020 (100K vs 300K). Even so, Swift is currently the default language for development across all Apple platforms, which has led to a stagnation in the adoption of Objective-C. This gradual phase-out of Objective-C from the Apple app ecosystem is also matched by a significant drop in its rank, from ninth to 12th place.

The more niche languages – Go, Ruby, Rust, and Lua – are still much smaller, with up to 2.1M active software developers each. Go and Ruby are important languages in backend development, but Go has grown slightly faster in the past year, both in absolute and percentage terms. Rust has formed a very strong community of developers who care about performance, memory safety, and security. As a result, it grew faster than any other language in the last 12 months, more than doubling in size. Finally, Lua was also among the fastest growing language communities in the last year, mainly attracting AR/VR and IoT developers looking for a scripting alternative to low-level languages such as C and C++.

Sign up to our community to have your say in our next developer survey.


The search for a cloud-native database

Cedrick Lunven (@clunven) and Jeff Carpenter (@jscarp) of K8ssandra discuss the search for a cloud-native database.

The concept of “cloud-native” has come to stand for a collection of best practices for application logic and infrastructure, including databases. However, many of the databases supporting our applications have been around for decades, before the cloud or cloud-native was a thing. The data gravity associated with these legacy solutions has limited our ability to move applications and workloads. As we move to the cloud, how do we evolve our data storage approach? Do we need a cloud-native database? What would it even mean for a database to be cloud-native? Let’s take a look at these questions.

What is Cloud-Native?

It’s helpful to start by defining terms. In unpacking “cloud-native”, let’s start with the word “native”. For individuals, the word may evoke thoughts of your first language, or your country of origin – things that feel natural to you. Or in nature itself, we might consider the native habitats inhabited by wildlife, and how each species is adapted to its environment. We can use this as a basis to understand the meaning of cloud-native.

Here’s how the Cloud Native Computing Foundation (CNCF) defines the term:

“Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”

This is a rich definition, but it can be a challenge to use this to define what a cloud-native database is, as evidenced by the Database section of the CNCF Landscape Map:

Database section of the CNCF Landscape Map

Databases are just a small portion of a crowded cloud computing landscape.

Look closely, and you’ll notice a wide range of offerings: both traditional relational databases and NoSQL databases, supporting a variety of different data models including key/value, document, and graph. You’ll also find technologies that layer clustering, querying or schema management capabilities on top of existing databases. And this doesn’t even consider related categories in the CNCF landscape such as Streaming and Messaging for data movement, or Cloud Native Storage for persistence.

Which of these databases are cloud-native? Only those that are designed for the cloud, or should we include those that can be adapted to work in the cloud? Bill Wilder provides an interesting perspective in his 2012 book, “Cloud Architecture Patterns”, defining “cloud-native” as:

“Any application that was architected to take full advantage of cloud platforms”

By this definition, cloud-native databases are those that have been architected to take full advantage of underlying cloud infrastructure. Obvious? Maybe. Contentious? Probably…

Why should I care if my database is cloud-native?

Or, to ask it another way, what are the advantages of a cloud-native database? Consider the two main factors driving the popularity of the cloud: cost and time-to-market.

  • Cost – the ability to pay-as-you-go has been vital in increasing cloud adoption. (But that doesn’t mean that cloud is cheap or that cost management is always straightforward.)
  • Time-to-market – the ability to quickly spin up infrastructure to prototype, develop, test, and deliver new applications and features. (But that doesn’t mean that cloud development and operations are easy.)

These goals apply to your database selection, just as they do to any other part of your stack.

What are the characteristics of a cloud-native database?

Now we can revisit the CNCF definition and extract characteristics of a cloud-native database that will help achieve our cost and time-to-market goals:

  • Scalability – the system must be able to add capacity dynamically to absorb additional workload
  • Elasticity – it must also be able to scale back down, so that you only pay for the resources you need
  • Resiliency – the system must survive failures without losing your data
  • Observability – tracking your activity, but also health checking and handling failovers
  • Automation – implementing operations tasks as repeatable logic to reduce the possibility of error. This characteristic is the most difficult to achieve, but is essential for sustaining a high delivery tempo at scale

Cloud-native databases are designed to embody these characteristics, which distinguish them from “cloud-ready” databases, that is, those that can be deployed to the cloud with some adaptation.

What’s a good example of a cloud-native database?

Let’s test this definition of a cloud-native database by applying it to Apache Cassandra™ as an example. While the term “cloud-native” was not yet widespread when Cassandra was developed, it bears many of the same architectural influences, since it was inspired by systems built for the public cloud, as described in Amazon’s Dynamo paper and Google’s Bigtable paper. Because of this lineage, Cassandra embodies the principles outlined above:

  • Cassandra demonstrates horizontal scalability through adding nodes, and can be scaled down elastically to free resources outside of peak load periods
  • By default, Cassandra is an AP system, that is, it prioritizes availability and partition tolerance over consistency, as described in the CAP theorem. Cassandra’s built-in replication, shared-nothing architecture, and self-healing features help guarantee resiliency (see the short driver sketch after this list)
  • Cassandra nodes expose logging, metrics, and query tracing, which enable observability
  • Automation is the most challenging aspect for Cassandra, as is typical for databases
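
To make the availability-versus-consistency trade-off concrete, here is a minimal sketch using the DataStax Python driver; the contact points, keyspace, and table are hypothetical placeholders, and a real application would choose a consistency level per query based on how much staleness it can tolerate.

```python
# Minimal sketch of Cassandra's tunable consistency with the DataStax
# Python driver (pip install cassandra-driver). The contact points,
# keyspace, and table below are hypothetical placeholders.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
session = cluster.connect("shop")  # hypothetical keyspace

# Leaning toward availability: one replica acknowledgement is enough.
fast_write = SimpleStatement(
    "INSERT INTO orders (id, total) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ONE,
)
session.execute(fast_write, (42, 99.95))

# Leaning toward consistency: a majority of replicas in the local datacenter.
safe_read = SimpleStatement(
    "SELECT total FROM orders WHERE id = %s",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
row = session.execute(safe_read, (42,)).one()

cluster.shutdown()
```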

Coming back to automation: while automating the initial deployment of a Cassandra cluster is a relatively simple task, other tasks, such as scaling up and down or upgrading, can be time-consuming and difficult to automate. After all, even single-node database operations can be challenging, as many a DBA can testify. Fortunately, the K8ssandra project provides best practices for deploying Cassandra on Kubernetes, including major strides forward in automating “day 2” operations.

Does a cloud-native database have to run on Kubernetes?

Speaking of Kubernetes… When we talk about databases in the cloud, we’re really talking about stateful workloads requiring some kind of storage. But in the cloud world, stateful is painful. Data gravity is a real challenge – data may be hard to move due to regulations and laws, and moving it can be expensive. This puts a premium on keeping applications close to their data.

The challenges only increase when we begin deploying containerized applications using Kubernetes, since it was not originally designed for stateful workloads. There’s an emerging push toward deploying databases to run on Kubernetes as well, in order to maximize development and operational efficiencies by running the entire stack on a single platform. What additional requirements does Kubernetes put on a cloud-native database?

Containerization

First, the database must run in containers. This may sound obvious, but some work is required. Storage must be externalized, memory and other computing resources must be tuned appropriately, and application logs and metrics must be made available to the infrastructure for monitoring and log aggregation.
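
As a minimal sketch of what that tuning looks like, here is a container definition built with the official Kubernetes Python client; the image, resource sizes, and heap setting are illustrative assumptions, not recommendations.

```python
# Minimal sketch of the container-level concerns above, using the official
# Kubernetes Python client (pip install kubernetes). The image, sizes, and
# heap setting are illustrative assumptions.
from kubernetes import client

cassandra_container = client.V1Container(
    name="cassandra",
    image="cassandra:4.0",
    # Tune compute resources explicitly instead of relying on defaults.
    resources=client.V1ResourceRequirements(
        requests={"cpu": "2", "memory": "8Gi"},
        limits={"cpu": "4", "memory": "8Gi"},
    ),
    # Keep the JVM heap within the container's memory limit.
    env=[client.V1EnvVar(name="MAX_HEAP_SIZE", value="4G")],
    # Storage is externalized; the volume itself is declared elsewhere.
    volume_mounts=[
        client.V1VolumeMount(name="data", mount_path="/var/lib/cassandra"),
    ],
    # Expose the port so the platform can reach the database and scrape it.
    ports=[client.V1ContainerPort(container_port=9042, name="cql")],
)
```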

Storage

Next, we need to map the database’s storage needs onto Kubernetes constructs. At a minimum, each database node will make a persistent volume claim that Kubernetes can use to allocate a storage volume with appropriate capacity and I/O characteristics. Databases are typically deployed using Kubernetes StatefulSets, which help manage the mapping of storage volumes to pods and maintain a consistent, predictable identity.
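
Here is a minimal sketch of that mapping, written as the Python-dict equivalent of the usual YAML manifest; the names, replica count, storage class, and volume size are hypothetical.

```python
# Minimal sketch of a StatefulSet whose volumeClaimTemplates give each
# database pod its own PersistentVolumeClaim bound to a stable identity.
# Names, sizes, and the storage class are hypothetical.
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "cassandra"},
    "spec": {
        "serviceName": "cassandra",  # gives pods stable DNS names
        "replicas": 3,               # cassandra-0, cassandra-1, cassandra-2
        "selector": {"matchLabels": {"app": "cassandra"}},
        "template": {
            "metadata": {"labels": {"app": "cassandra"}},
            "spec": {"containers": [{"name": "cassandra", "image": "cassandra:4.0"}]},
        },
        # One PVC per pod, re-bound to the same pod identity across restarts.
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "storageClassName": "fast-ssd",  # hypothetical storage class
                "resources": {"requests": {"storage": "100Gi"}},
            },
        }],
    },
}
```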

Automated Operations

Finally, we need tooling to manage and automate database operations, including installation and maintenance. This is typically implemented via the Kubernetes operator pattern. Operators are basically control loops that observe the state of Kubernetes resources and take actions to help achieve a desired state. In this way they are similar to Kubernetes built-in controllers, but with the key difference that they understand domain-specific state and thus help Kubernetes make better decisions.
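
Here is a minimal, self-contained sketch of such a control loop in Python; observe_cluster() and the remediation helpers are hypothetical stand-ins for real Kubernetes API calls and database-specific logic.

```python
# Minimal sketch of the operator pattern: a loop that compares the desired
# state declared in a custom resource with the observed state and takes
# domain-aware actions to close the gap. All helpers are hypothetical.
import time

def observe_cluster() -> dict:
    """Hypothetical status query; a real operator reads Kubernetes state."""
    return {"nodes": 3, "unhealthy_pods": []}

def add_database_node() -> None:
    print("scaling out: adding a node")   # hypothetical action

def decommission_node() -> None:
    print("scaling in: draining a node")  # hypothetical action

def reconcile(desired: dict, observed: dict) -> None:
    """Move the observed cluster state toward the declared desired state."""
    if observed["nodes"] < desired["size"]:
        add_database_node()
    elif observed["nodes"] > desired["size"]:
        decommission_node()
    if observed["unhealthy_pods"]:
        print("restarting:", observed["unhealthy_pods"])  # targeted remediation

def run_control_loop(custom_resource: dict) -> None:
    while True:
        reconcile(custom_resource["spec"], observe_cluster())
        time.sleep(30)  # real operators react to watch events instead
```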

For example, the K8ssandra project uses cass-operator, which defines a Kubernetes custom resource definition (CRD) called “CassandraDatacenter” to describe the desired state of each top-level failure domain of a Cassandra cluster. This provides a higher level of abstraction than dealing with StatefulSets or individual pods.
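
As an abridged sketch, such a custom resource looks roughly like this, again shown as the Python-dict mirror of the YAML manifest; the values are illustrative, and the real CRD exposes many more settings.

```python
# Abridged sketch of a cass-operator CassandraDatacenter custom resource.
# Field values are illustrative; consult the project docs for the full spec.
cassandra_dc = {
    "apiVersion": "cassandra.datastax.com/v1beta1",
    "kind": "CassandraDatacenter",
    "metadata": {"name": "dc1"},
    "spec": {
        "clusterName": "cluster1",  # datacenters sharing a clusterName form one cluster
        "serverType": "cassandra",
        "serverVersion": "4.0.0",
        "size": 3,                  # desired node count; the operator works out the rest
    },
}
```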

Kubernetes database operators typically help to answer questions like:

  • What happens during failovers? (pods, disks, networks)
  • What happens when you scale out? (pod rescheduling)
  • How are backups performed?
  • How do we effectively detect and prevent failure?
  • How is software upgraded? (rolling restarts)

Conclusion and what’s next

A cloud-native database is one that is designed with cloud-native principles in mind, including scalability, elasticity, resiliency, observability, and automation. As we’ve seen with Cassandra, automation is often the final milestone to be achieved, but running databases in Kubernetes can actually help us progress toward this goal of automation.

What’s next in the maturation of cloud-native databases? We’d love to hear your input as we continue to invent the future of this technology together.

This blog post originally appeared on K8ssandra and is based on Cedrick’s presentation “Databases in the Cloud-Native Era” from BluePrint London, March 11, 2021 (registration required).


Coding the Future: How Developers Embrace and Adopt Emerging Technologies

As the popularity of a technology ebbs and flows, so does its impact, and when it comes to software development practices, few recent technologies have exerted as profound an influence as DevOps. This technology has become truly mainstream, seeing widespread adoption across software sectors, industries, and roles. We are delighted to say that, for these reasons, DevOps has matured out of our emerging technology tracker and has been replaced by several new and exciting technologies that have the potential to reshape the world. Here, we’ll use developers’ engagement with and adoption of these technologies to help us understand just how this might come to pass.

We have tracked developers’ engagement with and adoption of different technologies over six surveys, spanning three years and ending in Q1 2021. To measure engagement and adoption, we asked developers if they are working on, learning about, interested in, or not interested in different emerging technologies, adding to the list as new innovations appear. We classified each technology according to whether its engagement rate is above or below the median (high/low engagement) and whether its adoption rate is above or below the median (high/low adoption).

Robotics, mini apps and computer vision are taking the lead as emerging technologies developers are most engaged with

After graduating DevOps from our emerging technology tracker, robotics, mini apps – apps embedded within another app – and computer vision head the table of emerging technologies with which developers are most engaged. Around half of developers say they are working on, learning about, or interested in each of these technologies, and, whilst mini apps are most widely adopted by professional developers, hobbyists and students are most interested in robotics. However, of the developers engaged with mini apps, nearly a quarter are currently working on the technology. For computer vision, this drops to 15%, and for robotics, just 10%. Despite engaging developers in similar ways, it’s clear that the practical applications of mini apps are widely recognised by developers; in fact, adoption increased by four percentage points in the last twelve months, one of the largest increases we saw.

Nearly 30% of actively engaged developers are learning about cryptocurrencies

Almost three in ten engaged developers are learning about cryptocurrencies, the most of any technology – though other blockchain applications are close behind at 26%. The academic interest in these technologies has yet to translate directly into adoption: only 14% and 12% of engaged developers, respectively, are actively working on projects using these technologies. More than 40% of them are professionally involved in web apps / Software-as-a-Service (SaaS), and a third are involved in mobile development as professionals. This said, adoption did increase for both cryptocurrencies (+5 percentage points) and other blockchain applications (+4 percentage points) in the last twelve months; developers are continuing to find practical applications for these technologies. With giants such as Maersk incorporating blockchain technology into their logistics management systems in the last few years, more widespread adoption is inevitable.

Quantum computing and self-driving cars still lag in adoption

Quantum computing and self-driving cars continue to languish near the bottom in terms of adoption, but continue to spark some developers’ imaginations – more than two in five developers are engaged with these technologies. However, of these developers, fewer than one in ten are actually working on each of these technologies, and whilst engagement with these technologies dropped over the last twelve months, adoption increased for both – though more for quantum computing (4 percentage points) than self-driving cars (2 percentage points). There is a similar story with brain/body computer interfaces, a new technology that we added in the most recent survey: many developers are engaged, but, unsurprisingly, given its bleeding-edge status, very few are actively working on the technology.

We also recently added hearables, DNA computing/storage, and haptic feedback to our list of emerging technologies. Engagement with these technologies is low, on a level with fog/edge computing: between a quarter and a third of developers are engaged. We see that around one in ten engaged developers are actively working on these very nascent technologies, and two in ten are learning about them. Though the engaged audience for these technologies is small, there is a core of developers contributing to their continued progress.

Each of the emerging technologies we have covered encounters different barriers on its path to widespread adoption. For many, the barriers are technological: the advances needed to bring quantum or DNA computing to the mainstream are many years away. But there are also social, cultural, and even legislative barriers which will impede progress. Though important, developers are only part of the puzzle.


Why do developers adopt or reject cloud technologies?

In the nearly fifteen years since Amazon AWS cracked open the cloud market by releasing S3 – and changed the world by doing so – there has been huge growth in the variety of cloud solutions available for developers to use. We examine the different reasons that developers give for adopting or rejecting cloud technologies. The findings shared in this post are based on the Developer Economics survey 19th edition which ran during June-August 2020 and reached more than 17,000 developers in 159 countries.

Of the new cloud technologies which have appeared over the last fifteen years, containers have arguably had the greatest impact. With 60% of developers using this technology, the benefits are clearly widely recognised. However, with just under 30% of developers using container orchestration tools and management platforms, there is still room for this technology to develop.

In second position, with 45% of cloud developers using this technology, Database-as-a-Service (DBaaS) is also very widely used, and data storage and retrieval will continue to be an important issue, albeit in a much more sophisticated form than S3 originally offered. Cloud Platform-as-a-Service (PaaS) sits in a distant third place. A third of backend developers are using PaaS, putting this technology slightly ahead of the others we ask about – between 21% and 27% of developers use them.

Why do developers adopt or reject cloud technologies - containers is the cloud technology that is most widely used by backend developers.

Abstraction and simplification are two of the main drivers for the mass adoption of cloud technologies, but we can’t overlook the role that flexibility plays. Spinning up instances to cope with variable demand, creating temporary testing environments, and adding storage as required is immensely powerful. But one often-overlooked aspect of this flexibility is that developers and organisations have the flexibility to choose. They are not restricted to the expensive bare metal they bought ten years ago, and they are less constrained by monolithic purchasing processes because, to put it simply, these decisions matter less. In a world where infrastructure can be provisioned and destroyed at will, and where data and server configurations can be transferred easily between homogeneous systems, cloud providers have to find other areas of differentiation in order to compete. Vendor lock-in is much less of an issue for users than it once was, and the rise of the developer as a decision-maker has put even more power into their hands. Note that the adoption and rejection reasons for DBaaS and orchestration platforms come from our previous survey, fielded in Q1 2020.

“Pricing and support/documentation are most important to developers”

For every cloud technology, with the exception of orchestration tools, pricing and support/documentation are the two most important factors that developers consider when adopting that technology. For the most part, these two factors switch between first and second place. However, pricing drops to fifth place for developers considering adopting an orchestration tool, whereas support/documentation remains at the top by a large margin. Around three in ten of these developers selected ease and speed of development (32%), integration with other systems (31%), community (30%), and pricing (29%) as reasons for adoption, with pricing being around 15 percentage points lower for orchestration tools than for other technologies. On the other hand, community and scalability are generally more important for developers selecting an orchestration tool.

Much of this distinction is driven by the dominance of Kubernetes. With 57% of backend developers who are using an orchestration tool choosing Kubernetes, it is the single most popular orchestration tool, and importantly, it’s free and open source. It stands to reason, therefore, that pricing is simply not an issue for developers using Kubernetes; instead, they value the community support that helps them master such a complex tool.

Indeed, as well as pricing being much less important for these developers, the learning curve is also less important. It seems that these developers understand that they are dealing with high levels of complexity and abstraction and accept that there is a lot to learn in this space. But for those developers that want the abstraction and simplicity offered by a commercial container management system, many paid options exist, and pricing is still an important factor in this space.

Why do developers adopt or reject cloud technologies - Ranking of reasons for adoption

Taking developers’ reasons for rejection into account lets us view the decision-making process from the other side. Immediately, we see that pricing is the dominant factor when rejecting every technology. Taking a closer look at the data shows the true extent of this – for DBaaS and Infrastructure-as-a-Service (IaaS), developers were more than twice as likely to select pricing as a rejection reason as the second- and third-placed reasons of support/documentation and the learning curve, respectively. Amongst the remaining technologies, the smallest difference was 8 percentage points, for developers rejecting orchestration tools.

Further down the list, there is a lot of variability between the different technologies. For example, the learning curve was the second most popular rejection reason for developers choosing IaaS, with a quarter of them doing so. This suggests that the learning curve for IaaS is quite steep and that this is a barrier for many developers. This is not the case for DBaaS however, where only 15% of developers stated this as a reason for rejection.

“Suitability, feature set and performance are hygiene factors”

Suitability and feature set has middling importance for developers choosing to adopt a technology, but for many technologies, it is a more important reason for rejection. This shows that suitability and feature set is a hygiene factor – there are relatively few cases where this is of paramount importance, but many where a technology does not meet the needs and is therefore rejected.

Finally, performance sits very low in the hierarchy for developers adopting and rejecting cloud solutions. This indicates that, for the vast majority of use cases, the range of performance options provided by vendors is sufficient. This suggests that many cloud computing products are, to some extent, homogeneous, and that developers are more concerned with the ‘soft’ features, such as support/documentation, community, or learning curve. These features make for a fulfilling development experience, and in the age of the developer as a decision-maker, experience is everything.

Why do developers adopt or reject cloud technologies - ranking of reasons for rejection.

What are your reasons for adopting or rejecting cloud technologies? You can let us know your reasons here!


What do developers value in open source?

Open-source software (OSS) is used by 92% of developers, so what exactly do they value in it? We find that developers value OSS’s ability to supersede any single contributor and live on almost eternally. We highlight some uncertainty around OSS’s future by showing trends from geographic regions and sectors. The findings shared in this post are based on the Developer Economics survey 19th edition which ran during June-August 2020 and reached more than 17,000 developers in 159 countries.

What exactly do developers value in open-source?

Open-source software (OSS) is ubiquitous in the global developer community. As our data shows, OSS is used by 92% of developers. A question that comes to mind is: what exactly do developers value in OSS? In the chart below, we show which statements about OSS developers agree with, broken down by professional and non-professional developers, and enterprise and non-enterprise developers. The overarching theme for what developers value from OSS is its ability to be eternal. “To collaborate with the community, building software that outlasts even its originator” encapsulates the two statements with the greatest agreement.

The overall cost and wanting to avoid vendor lock-in/lock-out are important aspects that professional and enterprise developers in particular value in OSS, while non-enterprise developers value forking product derivatives and debugging more than the other groups. Non-professional developers do not value the overall costs element, perhaps because they have not experienced the costs involved in closed source software, whereas many professional developers have. Another aspect that non-professional developers value significantly less is avoiding vendor lock-in. This also suggests that these developers have not experienced the limitations of closed source software yet.

Appreciation of the overall costs of OSS is also highly linked with years of developer experience: only 24% of developers with less than one year of experience agree that low cost is an asset of OSS. In contrast, the percentage of developers who agree that low cost is an asset of OSS rises to 34% of developers who have between three and five years, and 43% of developers with six or more years of experience. Typically, as developers gain experience, they begin to work in different sectors, often crossing over between sectors. At this point, the flexibility that OSS offers may become crucial. 

Finally, we also see a greater proportion of non-professional developers not using OSS compared to others. This is also reflected indirectly in each of the other statements; we see that non-professional developers agree with every statement less than professional developers. This suggests that, to be truly appreciative of the benefits of OSS, you may have had to engage with it seriously, in the way professional developers do.

Where OSS is written is changing

At present, the culture of OSS is particularly strong with Western European and Israeli developers, where not a single statement is valued below the average. On the contrary, developers in North America—who, up until now, have driven the OSS movement—value contributing and interacting with the community less than average. This could suggest a cooling off of North American OSS development and a maturing of this ecosystem. 

On average, East Asian developers seem to be more disengaged from the OSS movement than developers from other regions. Only 88% of developers in this region use OSS, compared to 92% globally. In general, developers in this region also place less value on most aspects of OSS. In particular, their extremely low appreciation of continuous support for the technology, compared to others, highlights that developers in this region are apprehensive about the longevity of OSS, which partially undermines its main benefit. This apprehension is also reflected in the relatively low agreement associated with contributing.

According to our data, South Asian developers value contributing to OSS significantly more than others. In addition, South Asia is the region with the largest proportion of developers who value collaborating and interacting with the community. This combination positions the region to be among the drivers of the next wave of OSS development. In the Middle East and Africa region, some key advantages of OSS, such as avoiding vendor lock-in and the overall low cost, have not yet resonated with developers – despite the fact that, at least for Africa, income per capita is low compared to global averages. What helps explain this is the region’s proportion of professional developers and the experience levels of its developers.

The Middle East and Africa, as well as South America, have roughly the same proportion of professional developers, 60.7%, in contrast to North America or Western Europe and Israel, where more than 80% of developers are professional. Non-professionals value OSS less. Similarly, developers in the Middle East and Africa are also the least experienced, on average, and years of experience in particular is linked with appreciating the low cost of OSS.

Some sectors embrace OSS while others don’t

Emergent sectors such as augmented reality (AR) and virtual reality (VR) stand to benefit greatly from OSS as a means of defining a common standard and exchanging ideas. Yet we find that developers working in these two fields do not value forking/creating product derivatives, nor even collaboration in the case of VR, as much as developers from other fields do, on average. This could be partially explained by their lower-than-average agreement with the need for continuous support for a technology. When developers do not value this characteristic, it is unlikely that they are working with the mindset which would ensure long-term OSS growth and desirability.

On the other hand, developers who are building apps and extensions for third-party ecosystems value contributing and forking more, on average, than developers in other sectors. Similarly, the very successful Node.js runtime has facilitated other extensions, and developers working in backend services really value the continuous support of OSS projects. At present, despite the large percentage of developers who use open-source software, it is only in certain circumstances that the majority of developers value OSS for any given reason. Perhaps this suggests that OSS has become an expectation rather than being perceived as a gift from society at large to society at large. Observing how developers value OSS in the future would be a good litmus test for the health of open-source projects. For now, though, there are encouraging blooms in South Asia, for example, but also sectors of scepticism, such as AR/VR.

Are you involved in open source? Share your experiences with us in our Developer Economics 20th edition survey!




How are developers’ needs changing due to COVID-19?

Working and performing during a pandemic will leave deep marks behind, both financially and psychologically speaking. In our latest survey, we asked developers how their needs have changed due to COVID-19. The findings shared in this post are based on the Developer Economics survey 19th edition which ran during June-August 2020 and reached more than 17,000 developers in 159 countries.

At the time of writing this post, there have been more than 30 million COVID-19 cases around the world, with 7.3 million of those still active. The virus is ubiquitous and affects all continents to more or less similar degrees. Working and performing during a pandemic is an experience that will undoubtedly leave deep marks behind, both financially and psychologically speaking.

7.2 million developers report needing flexible working hours/workload

We asked developers to select, from a given set of technical and non-technical needs, up to three extra needs the pandemic has created for their own development activities. 73% of developers reported having additional needs due to COVID-19. In particular, 34%, or 7.2 million developers, expressed a need for flexible working hours/workload.

Quarantine and social distancing policies have encouraged many employers to allow their workers to work from home, where possible. A large proportion of workers are now facing the inconvenience of relocating their working space into their home. Among such inconveniences is the necessity of taking care of households while keeping up productivity. Under these circumstances, flexibility is seen as the key to success, or simply survival.

The next most common perceived needs, reported by about one in four developers, are: 

  • collaboration tools and platforms (26%), 
  • online training resources (25%), and
  • virtual opportunities to support networking and peer-to-peer interaction (23%). 

Among these three, the only technical one, strictly speaking, refers to the need for collaboration tools, such as video conferencing platforms. The other top needs are related to self-improvement and self-management, and to socialising. 

The supremacy of non-technical needs is striking. All of the technical necessities, except collaboration tools, sit at the bottom of the list, being reported only by about one in ten developers: 

  • better performance in terms of computing resources (13%),
  • hardware components (9%),
  • increased security (9%), and
  • additional cloud space (7%).

There are two explanations for these patterns. First, developers may not have indicated the need for extra technical support because it had already been fulfilled, i.e. their employers had already provided them with it. It could also be, however, that developers did not perceive technical considerations as being more important than flexibility, networking, and learning.

The bigger the company, the more flexibility developers need

We found that the most important factor influencing developers’ needs in relation to COVID-19 is their company size. Compared to those in medium- or large-sized companies, self-employed developers and developers working in small businesses of up to 20 employees report fewer new needs overall. That is especially the case for flexibility in terms of working hours/workload, and for collaboration tools. The most probable explanation is that they had already implemented a flexible working schedule prior to COVID-19. This is likely to apply to contractors as well as to small, dynamic startups. When it comes to keeping collaboration and interaction going, it may just be easier for small groups of people to maintain old habits or find an easy-to-use tool, such as emailing, phoning, or even getting together whilst respecting the required social distancing.

On the contrary, the bigger the company, the stronger the need for all of the above, including opportunities for virtual interactions. A large company typically requires a structured system of communication, and usually that system needs to accommodate the various teams’ diverse needs; even more so when a company is locked into an IT vendor’s services. 

Interestingly, the need for mental health support also linearly increases with company size, probably as a result of those challenges experienced in terms of flexibility and peer-to-peer communication and interaction. Another potential reason is that employees in larger organisations, where nobody is indispensable by default, may be experiencing more performance pressure and be more scared of losing their jobs.

How COVID-19 is affecting developers’ technical needs

While developers’ technical needs due to COVID-19 do not change significantly with company size, they strongly correlate to the developers’ level of involvement in tool purchasing decisions. Those most concerned about increased security, performance, and cloud space are the ones responsible for tool specs and expenses, as well as budget approval, who usually fulfill roles within technical management. 

With the increasing number of developers working from home, more machines need to be available and connected via VPN and similar technologies. More layers to navigate introduce complexity barriers that affect work efficiency, but also the need to implement extra security controls. Furthermore, servers are often overloaded and downtimes happen more frequently, affecting system reliability. Add to this the fact that budgets are being reduced or even frozen, due to the economic instability the pandemic is causing, and the situation is actually precarious. Those in charge are inevitably the ones noticing the need for technical support the most.

Conclusion

In a relatively short time, the pandemic has generated and consolidated a series of working practices that had previously been known only to a very small proportion of the population. Such new practices, based on remote working and virtual collaboration, are likely to persist after COVID-19. If one acknowledges this, investing in optimising support becomes even more valuable. We recommend that large enterprises, especially, consider the delicate balance between self-management and collaboration needs when designing policies and offering support to their employees in the face of the pandemic.


The state of AR and VR in Asia: Highly developed working practices and a strong pipeline of students

This article originally appeared on DevRelX.     

Virtual Reality (VR) and Augmented Reality (AR) have been on the cusp of widespread adoption for many years now, but technical and commercial hurdles have impeded this process. For VR at least, it seems that 2020 could be the year when the technology goes truly mainstream – there are already several consumer-grade headsets available on the market and many game studios are following Valve’s Half-Life: Alyx into VR with their own AAA titles. AR, though now ubiquitous on smartphones and available for many industrial applications, still lags behind in adoption due to the more complex technical challenges such as object occlusion. That said, the recent rumours around Apple’s entry into the Mixed Reality (MR) space may spark a wave of innovators hoping to get the jump on them.

What we do know is that as VR, AR and MR achieve greater market penetration, not only will more developers be needed to create the immersive worlds and experiences that consumers demand, but artists and designers will also be required to populate these worlds with convincing inhabitants, create 3D assets and help to realize a creative vision. Fortunately, there is no shortage of hobbyists involved in AR and VR. As we discuss in our State of the Developer Nation report, not only are most people in AR or VR involved as hobbyists, but around a quarter of those who work professionally in the two sectors still consider themselves to be hobbyists on the side.

We also discovered that a lot of developers in AR and VR were taking on many different roles, often those that aren’t traditionally associated with being a software developer. In fact, we coined a new term to describe those people who not only write code but who also dip their toes into more traditional creative endeavors. Enter the Hybrid Developer, and we’ll find out more about her very soon indeed.

At SlashData, as the analysts of the developer economy, we have traditionally been focused on understanding developers. But given the contributions that people in more artistic roles make to many sectors, especially to AR and VR, we felt that in order to truly understand this transformational technology, we needed to understand those people who help shape how it looks and feels. So, for the first time, we sought out people working in AR and VR in non-developer roles. We asked artists, creators, filmmakers and their ilk just what it’s like to work in AR and VR, and here I’ll be sharing some of our most interesting findings.

What, exactly, is a Hybrid Developer?

First things first. Let’s understand more about these so-called Hybrid Developers. These are people that have a traditional developer role (a software engineer, or a DevOps specialist, for example), but who also take on more creative or artistic roles (artists and filmmakers, for example). This means that we can fit people involved in AR and VR into three categories:

  1. Pure developers – people who only have developer roles
  2. Hybrid developers – people who have both developer and creative roles
  3. Non-developers – people who don’t have developer roles

Generally speaking, around 63% of those involved in software development projects are pure developers, 21% are non-developers, and 15% are in hybrid roles. But people involved in AR and/or VR show very different behaviour. The distribution amongst these roles is much more even, with fewer pure developers (39%), slightly more hybrid developers (31%), and twice as many non-developers. This is a pattern that is replicated, to a greater or lesser degree, across many regions, but it is in South and East Asia where these differences are most pronounced.

State of AR VR in Asia

East Asia is further along the curve for adoption of AR and VR

East Asia has very quickly adopted AR and VR, with almost two in five people involved in software development projects contributing to this sector in some way. But as well as being ahead of the curve in terms of the sheer number of people involved in the sector, non-developer AR/VR practitioners here find it easier to enter the space.

From the chart above you can see that in East Asia, AR/VR practitioners are more than twice as likely to be non-developers as people elsewhere in the world. We see this phenomenon replicated, to a lesser extent, for people not involved in AR/VR, with a correspondingly lower proportion of hybrid developers. We can draw two conclusions from this:

  1. In East Asia, people involved in software development projects are more specialised, taking on fewer hybrid roles.
  2. Non-developers in East Asia contribute more towards software development projects than elsewhere in the world.

Looking to South Asia, the spread of roles in this region is much more balanced – not only does this region have a healthy proportion of hybrid developers, but the distribution of AR/VR practitioners between the three categories of pure, hybrid and non-developers is fairly even. Many AR/VR practitioners here have a balanced and varied skill set, with four in ten of them taking on hybrid roles, and this is something that we see replicated in other developing regions, such as the Middle East and Africa.

What types of roles are AR/VR practitioners taking on?

When we delve more deeply into the developer and non-developer roles that AR/VR practitioners take on, we can tease out some more important insights. The chart below shows a subset of all the roles we ask about (out of a total of 25). In East Asia, only two in ten AR/VR practitioners identify as programmers or developers, the lowest of all the regions, and much less than the rest of the world, where almost half of AR/VR practitioners identify as developers. This is another result of the rapid adoption of AR and VR in East Asia – non-developers have been able to enter the space more easily, and the whole AR/VR ecosystem is at a later stage of maturity.

The incidence of AR/VR practitioners in East Asia who identify as product managers, marketers, or salespersons provides further evidence for this – once development practices have matured, productisation and monetisation take a front seat. Here, East Asia is also ahead of the curve.

The eagle-eyed amongst you will notice that although East Asia has a much lower proportion of developers, there is not a correspondingly dominant role in East Asia which makes up for this. These ‘missing’ roles aren’t simply hiding in the ones I haven’t shown here. Instead, AR/VR practitioners in East Asia simply do fewer roles. 62% of them take on only a single role, compared to 47% of AR/VR practitioners in the rest of the world. Only 11% of them do four or more roles, compared to a whopping 27% in the rest of the world. Generally speaking, taking on many different roles is a hallmark of being involved in AR/VR (as we discussed in our State of the Developer Nation report), but this is resoundingly not the case in East Asia. Specialization is another result of a sector maturing – roles become more defined and people have to wear fewer hats, working instead collaboratively in specialized teams.

Finally, without wanting to labor the point, the lower incidence of data scientists and machine learning developers is yet another sign that East Asia is ahead of the curve. Data science and machine learning are foundational to the success of VR, and in particular, AR. Many of the advances here have come from image recognition and other technologies which mitigate some of the hardware difficulties faced by people creating for AR and VR. You might expect this to be reflected in the number of AR/VR practitioners identifying as data scientists, but this is not the case. One viewpoint is that these low numbers are simply a correlation with the lower number of developers in general. But it’s also possible that those who are into AR and VR use a higher level of abstraction – instead of building machine learning models, they simply plug into an API and get the results they need and don’t consider themselves data scientists.

As AR and VR become more established in other regions, we can expect to see many of these phenomena filtering throughout the globe, although the differing cultures and economic situations at play mean that each region will develop its own idiosyncrasies. This said, one good indicator that a sector is on the rise is a high proportion of students, and here, South Asia is ahead of the curve, with over half of AR/VR practitioners identifying as students. Granted, there are more students in South Asia across all sectors, but the proportion is particularly high for AR and VR (51%, compared to 38% for those not involved). South Asia is definitely a region to watch for AR and VR development in the future.

State of AR VR in Asia

If this post has piqued your interest or sparked some interesting questions, please don’t hesitate to reach out and let us know. We hold rich and varied information on people involved in AR and VR, and we’re adding to it all the time!

Want to see more? Check out our latest research reports and graphs based on data from developers like you who took our global surveys.