
Low-Code Platforms: Bringing Visual Programming Back (to Stay)


There’s an interesting trend in the second decade of this millennium: things once declared “dead” are experiencing a resurgence. Animated GIFs, once relegated to cheesy ads for home refinancing or losing belly fat in a month with acai berries, are back in Slack channels, social media and blogs everywhere. Email newsletters have returned after many corporations abandoned them as sales and marketing tools around 2008. Podcasts were declared to have peaked sometime around 2010; now they’re back and there are almost too many to choose from. The consensus about the return of animated GIFs, email newsletters and podcasts is that they’ve improved in quality and offer more to the people who use them.

Visual programming environments and platforms were also hot in the 1990s and early 2000s. Then the noise they generated died down. Now they’re back, very likely for good. Let’s look at why.

Too fast, too choppy, too inwardly focused and…it’s complicated

Visual programming has been around for much longer than most of us think. It started quietly enough in the 1960s with Bert Sutherland’s interactive graphical programming language, and built up steam in the 1970s and 1980s with Smalltalk. The idea of moving away from the cycle of text editing, compiling, writing down errors, and debugging by eye was alluring. And so it came of age in the 1990s with Visual Basic, Xelfi (later NetBeans), Visual Studio and the height of the CASE-tools hype.

 

Ah, the old Smalltalk days. Source: Basic Aspects of Squeak and the Smalltalk-80 Programming Language
There you had it: a whole slew of tools that could make programming so easy a child could do it. So what happened? Why did visual programming all but go gentle into that good night?

I think it’s because so much was still new in the 1990s and early 2000s. A whole lot of great digital and online technology came out of that period very quickly. Take the World Wide Web: it was going mainstream, but parts of it were more Wild West than World Wide Web. In the rush to show the world the cool things the web and digital technology were bringing us, some steps were missed.

The visual programming tools of that period were really more about “look what we can do” than “look at what you can do.” The end result of that philosophy was shaky extensibility (if there was any at all), slow code generation and little to no cross-platform capability. On top of that, in-depth programming skills and an engineering mindset were still the name of the game.

The only thing that’s constant is that nothing is constant

If there’s one thing I’ve learned since I started writing about programming languages and development trends in 1998, it’s that nothing is constant. When I invested in my first Mac in 1996, I had no idea I would replace it with a laptop just a few years later. And when I upgraded to one with an Intel Core i7 processor, I had no idea that it would end up gathering dust in my home office while I played with my smartphone and tablet in my living room.

In this mobile world, people want apps for almost everything. In addition, there are other trends filling the backlogs of today’s developers: solutions for cloud, machine learning, data science, artificial intelligence and IoT, as highlighted in “The State of the Developer Nation, Q1 2017,” the report compiled by Developer Economics. So the already significant amount of knowledge you need to build software and applications in this brave new technological world has suddenly skyrocketed.

Most of you are developers, so I don’t need to tell you how difficult it is to be a full-stack unicorn in the age of “we need an AI and predictive analytics app for that on the cloud.” The Developer Economics surveys tell your story: your work can span multiple different areas, requiring mastery of several languages. Nor do I need to go on and on about the pressure to get these apps built and out in the marketplaces or stores ASAP or all the headaches that come with updates (new JavaScript libraries! Dependencies! Merges!). So, I’m going to skip all that and get to my point.

Now more than ever, we need to move away from the slow pace and nightmares of hand coding to something visual that makes development as easy as GUIs make almost any computer task. But we don’t need the visual programming of the 1990s; we need something new and improved. And now we have it: it’s called “low-code,” an easy-to-understand name that Forrester coined in 2014.

Low-code is visual programming of the 1990s on steroids

Although low-code development includes visual programming, I want to be clear that this is not your father’s visual programming. Yes, common code elements, workflows and business processes are turned into components you can drag and drop in a visual IDE. But there’s more to it than that. Application generation, deployment and updates are automated. You’re doing more than building applications visually with components that have survived the tests of time and software battles.

More specifically, rather than starting a project by hand-coding some basic routing or writing a set of failing tests, you draw the shape of your application. You define the precise workflow your application needs to address each possible scenario. You draw the UI. You specify the data your application will store and how the database will store it. And you use your visual IDE to integrate REST APIs with your application, or to integrate your applications with other systems, such as an SAP ERP.

So, instead of worrying how you’re going to find the time to learn the latest faddish JavaScript framework or play with a cutting-edge NoSQL data store, you’re delivering something valuable to the world in what seems like no time flat. Even better, you’re not sweating over DevOps or crying in your beer over application monitoring. Basically, you’ve got something that’s miles ahead of what visual programming used to offer.

What about low-code platforms gives visual programming staying power?

Leaving behind the choppy, inwardly focused, released-too-fast ways of 1990s visual programming is becoming easier all the time. That’s huge. In The Forrester Wave: Low-Code Development Platforms, Q2 2016, Forrester rated the top 14 vendors of low-code development platforms out of a much larger field. The fact that Google has thrown its hat into the low-code arena is another sign, as is a recent article in InformationWeek about low-code.

Here are the reasons I think low-code has brought visual programming back to stay:

  • Flexibility: you work in an IDE for visually defining the UIs, workflows and data models of your application, but you can still add your own hand-written code (code you already know) where necessary.
  • Automated database integration: low-code platforms transparently convert your data models into relational tables and SQL queries, and data from external APIs is automatically made available to your application. This is not your typical ORM; it includes change management from the database all the way up to the UI (see the sketch after this list).
  • No more deployment, maintenance and change nightmares: automated tools build, debug, deploy and maintain the application in test, staging and production, sometimes with just one click.
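
To make the database point concrete, here is a minimal sketch of the kind of model-to-table mapping a low-code platform might perform behind the scenes. Both the Customer model and the generated DDL are hypothetical illustrations of the idea, not any particular vendor’s format or output.

```typescript
// Hypothetical sketch: a data model as a low-code platform might capture it.
interface Customer {
  id: number;       // primary key, managed by the platform
  name: string;
  email: string;
  createdAt: Date;  // audit column, maintained automatically
}

// From a model like the above, a platform might generate DDL roughly like:
const generatedDdl = `
  CREATE TABLE customer (
    id         SERIAL PRIMARY KEY,
    name       TEXT NOT NULL,
    email      TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT now()
  );
`;

console.log(generatedDdl);
```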

Basically, everything that anyone ever complained about in forums related to visual programming is gone, and the parts people loved are still here.

And, while low-code platforms do require a little training, I’m not talking months of schooling here. More like a few weeks. Plus, low-code makes it possible to avoid having to know more languages and technology than I can count, all of which are needed to meet the demands of web and mobile application development. What’s not to love about that? You get to take a concept and build it into a working app without going back to school to learn six more things that have popped up in the last few months.

Conclusion: Low-code keeps the heart of visual programming beating

So, Justin Timberlake might have brought sexy back, but low-code has brought the heart of visual programming back. It takes what was good about the early days of visual programming and adds a big advantage: you can jump right in and start describing your solution to a problem. You don’t need to learn a whole bunch of arcane details. Deployment, updates and integration are all fast and easy, mostly because the majority of those things are done for you automatically.

As a result, when a request comes in for an app that uses fitness and heart rate data to propose a specific exercise program for a heart patient—in 2 weeks—you can get right on it. How cool is that?

Interested in finding out how you compare to other software developers in your country/region? Take the Developer Economics survey and get your personalised developer scorecard.


What types of tools are IoT developers actually using?

IoT platforms were on the cusp of reaching the peak of inflated expectations in Gartner’s Hype Cycle from August 2016. Not surprisingly – there are literally hundreds of them, and counting. Also, the word ‘platform’ is used for anything, from network infrastructure to hardware components to cloud services. In the end, IoT owes its boom in popularity to more and better tools becoming available for developers. In this article, we shed some light on the types of tools that IoT developers are actually using.

The IoT tool market is still underdeveloped and heavily fragmented.

Despite the proliferation of IoT platforms and other tools, the IoT tool market is still underdeveloped and heavily fragmented. We asked IoT developers to select the technologies they use out of a list of 15 categories. On average, IoT developers use 2.9 types of tools from that list, or about one in five of the categories; professionals slightly more, at 3.5 tool types. That’s fewer than developers in other sectors like cloud, mobile, or web, where developers use a quarter to a third of the tools listed. Part of the reason is fragmentation: not every tool is comprehensive enough to be relevant to a large number of developers. In part, the low tool usage is due to the underdevelopment of the tool market. 11% of IoT developers don’t use any of the tools in our list, compared to 6% of web developers and 3% of mobile developers, whom we presented with similarly sized lists. Either way, we expect to see a good deal of consolidation and development before we can call this a mature tooling market.

Professional IoT developers use more tools than amateurs.

Professional IoT developers use more tools than amateurs, as we said, and they tend to use specific types of tools more often. The biggest differences are in categories like software deployment tools, IoT cloud platforms, embedded operating systems, machine learning platforms, gateway middleware, beacons, message brokers and fog computing. What all these technologies have in common is that they are components of a complete IoT solution, i.e. technologies that an engineer would integrate under the hood to implement a valuable product or project. Fog or edge computing – championed by Cisco – is otherwise notable by its absence: a mere 4% of IoT developers are working with this technology. It may be too early for it, or the need might not be as big as pundits proclaim. Time will tell.

The gap between professional and amateur use is virtually non-existent in hardware platforms such as single-board computers like the Raspberry Pi or prototyping boards like the Arduino or Intel Edison. These boards have become so cheap and accessible (i.e. easy to use) that anyone with a minimal technical background can play around with them and put them to productive use. Even wearables toolkits and middleware show signs of this level of accessibility.

We also don’t see the amateur-pro gap in high-level, integrating platforms: Smart Home platforms like HomeKit or SmartThings, smartwatch platforms like WatchOS or Android Wear, or voice platforms like Amazon Alexa. These are all areas (IoT verticals) that are easy to get into and easy to imagine (and design) a solution for that scratches your own itch, and therefore highly popular among hobbyists, as we’ve highlighted in other reports. Attractiveness to hobbyists aside, these comprehensive types of platforms lower the barrier for people to start building meaningful solutions quickly, whereas the component technologies from above are still more the domain of specialists. Even health & wellness data platforms like Google Fit or HealthKit – arguably a more specific, advanced domain – show only a small difference in usage between professionals and amateurs.

Some of the technologies in the list are specific to certain verticals: wearables toolkits are for wearables developers, Smart Home platforms for Smart Home developers, and so on. Or are they? 12% of developers who use Smart Home platforms, for example, are not currently targeting or planning to target that vertical. That is a reasonably big number, even though the usage gap with Smart Home developers is indeed clear. Some of these technologies might be fairly generic, and might even be ‘misused’ for unrelated projects. In some cases, like smartwatch platforms, developers might work on a smartwatch app as part of a broader IoT solution without necessarily self-identifying as ‘wearable developers’.

Source: The State of the Developer Nation, Q1 2017 – https://www.developereconomics.com/reports/state-developer-nation-q1-2017

Only 20% of retail IoT developers use beacons

Location beacons are an interesting case. Their most marketed use cases were in retail and hospitality applications. However, only 20% of retail IoT developers use beacons; a good bit less than the 27% to 33% in-vertical usage we see for other vertical-specific technologies. Furthermore, the gap between in-vertical and out-of-vertical usage is only 9 percentage points, i.e. half that of the other technologies discussed here. We take this as a sign that beacons may be overhyped, perhaps technologically, but more likely in terms of how valuable the use cases are to customers. In our previous State of the Developer Nation report (Q3 2016), we noted that retail was the sector within IoT with the fastest attrition of developers, possibly due to a sense of disillusionment and backlash against the hype. The data on technology use in the retail vertical seems to support that hypothesis.

The potential remains enormous

We opened this article with Gartner’s claim that we’re at the peak of inflated expectations when it comes to IoT platforms. Our IoT research over the past years says that we’ve already passed it, with stalled population growth and high churn among developers, heading full-speed towards the trough of disillusionment. The key reason is that the technology is still too immature, very few platforms are finding product-market fit, and thus the majority of consumer-focused developers lack a platform that gives them a viable market. Of course the core technology marches on, with some mostly consumer-focused tools finding uses outside their original intended market. The potential remains enormous. However, it’s going to get worse before it gets better, with a lot of consolidation among the many existing technology platforms.


Job positions for Video Game designers

 


So, you know how to get into game design, and you know the education and training standards you need to succeed – now all you need to do is get ahead of everyone else and begin to make headway as a game designer.

Once you have achieved the level of education or training needed for a career in game design, you can plan for your future in the industry. This begins with determining your career path, gaining experience, and creating your first game.

Determine a Career Path

Even within the specialty, there are many different types of game designers. Furthermore, game designers take on diverse roles within their positions, some of which may not be obvious. This is why it is important for aspiring game design professionals to fully consider the type of game career they intend to pursue.

Senior Level Designer

This position is responsible for outlining the level objectives and the game flow within a set of levels, and for creating the documentation for each level. A senior level designer should be able to create, position, and fine-tune gameplay elements and AI components.

Level Designer

This is a position subordinate to a senior level designer. Level designers typically use the provided design documentation (mechanics, guidelines, and mission outlines) to create and implement each of the game’s levels.

Lead Animator

Animators work in close collaboration with artists, programmers, and designers to create each aspect of the characters used in the game.

Gain Experience

Getting an entry-level job with a large game studio can be a difficult proposition. Since most employers require some game design experience, new game designers have to find creative ways to gain it.

Game Designer Internships

Some companies offer internships or co-op positions for beginner designers.

Go Small and Indie

Small businesses on a budget are often willing to hire game programmers or artists with little practical experience.

Coding for a Cause

There are some charities that require coding and game design. You can sign up and start writing code while gaining real-world experience.

Develop a Game

Game designers can create a buzz, get experience, and gain a competitive edge by designing and publishing their own game. Utilise free programs to create a simple, engaging and interactive mobile game, publish it for sale on an app marketplace, then begin working on something more complex. Each game will add value to your portfolio and, most importantly, count as design experience.

Game design is an exciting and fast-growing field. However, it is one of the most difficult to break into. To do so you need a clear direction and understanding of the industry, education and training requirements, and a strategy to succeed.


Angular vs React: Battle for the future of front-end web development?

Google and Facebook are two of the world’s most powerful companies and each has created a framework for building web apps. Angular and React respectively appear to be in a battle for the future of the web, with the active online debate and adoption for large consumer-facing apps seeming to lean quite strongly in React’s favour at present. Are they collectively taking over the front-end? Is React really leading? Our data from a broad cross-section of nearly 6,000 web developers may surprise you.


Which is your favourite framework? Take the Developer Economics Survey and win amazing prizes.

Although traditional, largely static, web pages still have an important place, mobile is now the dominant computing paradigm and mobile users have come to expect the interactivity of native apps. To match a native app experience, a web app cannot be entirely rendered on the server side; the page has to be changed dynamically on the client. The more extensive the changes, the greater the need for a better abstraction than the DOM (Document Object Model) to manage the complexity. This has driven ever-growing usage of third-party JavaScript libraries and frameworks.

Historically, jQuery was the first library to get really popular, enabling easier manipulation of the DOM on the client side. It’s still the most popular today, as the primary front-end library for 34% of web developers. However, manually manipulating the DOM turns out to be extremely complex and error-prone when it’s happening extensively, so frameworks that provide a better abstraction are increasingly important. Overall, just 12% of web developers don’t use any kind of framework and another 6% have written their own. That leaves 48% of web developers currently using a third-party framework other than jQuery as their primary way of doing front-end web development. Of those, Angular and React account for 30% of all usage, leaving all the others far behind. Indeed, front-end web development is such a fragmented space that no other single library or framework accounts for more than 2% of primary usage. So React and Angular certainly lead other frameworks, although only around half of all web developers have fully embraced any single page application framework so far.
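
To make the abstraction gap concrete, here is a minimal sketch (a TypeScript .tsx file using React’s current hooks API, which postdates this article) contrasting imperative DOM manipulation with a declarative component. It is purely illustrative; the names and structure are our own.

```typescript
import React, { useState } from "react";

// jQuery-style imperative update: find the node and mutate it yourself.
// As interacting updates multiply, keeping the DOM consistent gets error-prone:
//   $("#count").text(String(count));

// Declarative React component: describe the UI for any given state and
// let the framework reconcile the DOM on each state change.
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;
```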

Angular is still king despite the React hype.

AngularJS (Angular 1.x) was the first single page app framework to get the stamp of approval from an internet giant, when Google started to publicly back the open-source side project of one of its employees. Google’s backing gave many large enterprises the confidence to adopt, and with broader adoption came a flourishing ecosystem of components and tools. As this was happening, React was built internally at Facebook and deployed on the Facebook newsfeed in 2011 and then Instagram’s web app in 2012. Yet React wasn’t released as open source until 2013, by which time Angular had an enormous lead in both adoption and ecosystem. Then, in late 2014, Google appeared to stumble when previewing Angular 2.0, which was going to be incompatible with Angular 1.x and use a new language. Reaction from the developer community was not good. By mid-2015, Google had agreed to work with Microsoft so that TypeScript became the official language for Angular 2.0, the 1.x series got a promise of continued support, and a migration path between versions was created. This discontinuity for the Angular community seemed like a gift to the already rapidly growing React.

Although Angular still had many vocal fans, anyone following the broader front-end web developer community online would have to assume that React was taking Angular’s crown. At the time of writing, React has passed Angular 1.x in terms of stars on their respective GitHub projects, with around 61,500 to 55,000. Angular 2.x trails both of these by far, with 21,500. In the independent State of JavaScript survey run in late 2016, React came out way ahead of both versions of Angular in usage, interest, and retention. However, our own survey, which reaches out across many different developer communities, does not reflect this result at all. Not only is Angular 2.x the primary framework for about as many developers as React (10% vs 9% globally), but Angular 1.x is still the most popular overall by a slim margin (11% use it as their primary framework). In total, those using one version of Angular or the other number more than double those using React.


React is favoured by front-end specialists.

In order to see how reality in the market could be so different from the online buzz and even a large community survey, it’s interesting to look at the breakdown of JavaScript library and framework usage by primary programming language. If we only look at the users of the latest versions of JavaScript – those who like to stay at the forefront and are more likely to be found debating framework choices on the internet – we see React is the primary framework for 27% of them. So amongst those who have made the switch to ESNext (i.e. the 2015 version of the JavaScript standard or later), who then use tools to convert their code to the JavaScript that’s widely supported in browsers (known as ES5, introduced back in 2009), more are using React than both versions of Angular combined. However, this is the only group of developers for which React beats either version of Angular alone. These forward-looking JavaScript users are less than half of those primarily using JavaScript, and just 16% of all web developers (who almost all use some JavaScript).
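
For readers who haven’t followed the tooling side, the workflow described above means writing ESNext source once and letting a build step emit ES5 for older browsers. Below is a minimal sketch; the commented ES5 output is roughly what a transpiler such as Babel or the TypeScript compiler produces, not an exact transcript.

```typescript
// ESNext-style source: const, arrow functions, template literals.
const nums: number[] = [1, 2, 3];
const doubled = nums.map((n) => n * 2);
const greet = (name: string): string => `Hello, ${name}!`;

// Downlevelled to ES5, the emitted code looks roughly like:
//   var doubled = nums.map(function (n) { return n * 2; });
//   var greet = function (name) { return "Hello, " + name + "!"; };

console.log(greet("web"), doubled); // Hello, web! [ 2, 4, 6 ]
```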

A further 18% of web developers are still primarily using ES5. More of these are currently still using Angular 1.x (21%) as their primary framework than Angular 2.x (9%) and React (8%) combined. These developers are getting on with what they know and are productive doing. They may be following the new standards and frameworks but most of them don’t see enough benefit in switching yet. Another 3% of all web developers are primarily using TypeScript, which could be seen as the most advanced version of JavaScript currently available. However, some web developers understandably don’t want to adopt anything not yet in the standards, others don’t want to use the optional static types, and a significant minority still avoid anything from Microsoft. Given that Angular 2.x has adopted TypeScript it’s not surprising to find 41% of those primarily using the language have adopted the framework. There are another 18% currently still using Angular 1.x that will most likely migrate to Angular 2.x.
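
For those who haven’t tried TypeScript, the optional static types mentioned above look like the following. This is a minimal sketch under default compiler settings: annotations are opt-in, and the compiler only enforces what you declare.

```typescript
// Without annotations, parameters default to 'any' (when noImplicitAny is off),
// so this compiles but gives no type safety:
function addLoose(a, b) {
  return a + b;
}

// With annotations, mistakes are caught at compile time:
function addStrict(a: number, b: number): number {
  return a + b;
}

console.log(addLoose("2", 3)); // "23" at runtime: no error, just surprise
console.log(addStrict(2, 3));  // 5
// addStrict("2", 3);          // compile-time error: string is not a number
```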

Backend web developers prefer Angular on the front-end.

After some flavour of JavaScript, the most popular language for web developers is PHP, with 21% still considering it their primary language. Given the focus on rendering pages server-side in most of the popular PHP content management systems, it’s not too surprising to find less interest in single page app frameworks in general amongst these developers, with 52% still using jQuery as their primary library. Interestingly, only 3% of PHP developers are primarily using Angular 1.x, with 8% on Angular 2.x and just 4% on React. In fact, almost as many PHP developers use no front-end library or framework at all (14%) as use React plus either Angular version.

Developers primarily using server-side languages other than JavaScript/Node.js or PHP (totalling 42% of all web developers) are significantly less likely to be using jQuery than PHP developers, but they are also significantly less interested in Angular and React than the JavaScript developers (26% vs 38%). When they do primarily use one of these front-end frameworks, far more choose Angular (20%) than React (6%), and more of the Angular users are on version 2.x (11%) than version 1.x (9%). Considering all of the server-side developers not using Node.js, which is 63% of the web developer population, Angular is significantly preferred to React at this point, probably because it is a complete framework, rather than forcing the developer to make lots of other library and tooling choices as they currently have to with React.

What happens next?

There are many alternative futures that could be inferred from this data. The simplest story would be that framework preferences won’t move much for the different groups. Server-side developers will continue to have relatively little interest in the front-end frameworks, and ES5 developers will stick to Angular 1.x when they eventually transition to ESNext or TypeScript. This doesn’t fit the current trend of increased JavaScript usage across the web, front-end and server. It also ignores the fact that Google will be migrating to Angular 2.x internally, and developers will not want to be left without support one day. We could also imagine that as developers start using ESNext or TypeScript, their framework preferences shift accordingly: both React and Angular gain greater share, with React growing faster than Angular.

There’s probably some truth in this, but it’s too focused on the front-end developers. Server-side developers who aren’t using Node.js are less likely to find React attractive without a much simpler learning curve for the ecosystem. Then again, the most popular PHP framework is still WordPress, and the company behind WordPress has chosen React as the new front-end framework for WordPress.com – many PHP developers may follow them. Facebook has significant momentum with React, but Angular is likely to remain the most popular for smaller projects and internal apps. What we can predict is that despite the inevitable churn on the front-end, both frameworks have successfully built a critical mass of developers creating valuable ecosystems, and both are set for significant growth in the years ahead. We’d be surprised if the 30% of web developers using either Angular or React didn’t become 40% in the next 2 years.

So, what do you prefer? Angular or React? Take the Developer Economics Survey and win amazing prizes.


How can developers improve their paycheck?

As a software developer, what is the most lucrative opportunity you could be working on? This is a very relevant question to ask. Software skills are generally scarce and good developers are highly coveted. Furthermore, developers are mobile, in the sense that the nature of their trade allows them to work from remote locations quite easily and marketplaces for their services are well established. So which project should you pick to improve your paycheck?


There are many reasons why someone might prefer one job over another, but let’s be honest: developers deserve to get paid well, given their important position in the global value chain. For the first time in 12 editions, we asked developers in our survey how much they earn in salaries or contractor fees. The results are in, and the data yields several insights that can help developers improve their paycheck and, conversely, help organisations find talent.

First, there are enormous differences in how much developers in each region and software sector earn. The best-earning developers in our survey (those in the top ten percent) often earn tens and sometimes hundreds of times as much as the least well-off, i.e. the bottom decile. Part of this gap is location-driven; we’ll come back to that shortly. That said, we can only conclude that a developer’s skill, knowledge, and reputation do matter. Investing in them will pay off.

Developers working in areas with a higher technical complexity generally earn more.

Talking of skills, developers who work in areas with a higher technical complexity – and therefore higher barriers to entry and ultimately fewer developers doing it – generally earn more. Developers that work on cloud computing and other backend services report higher salaries than those working on front-end web apps. Machine learning specialists make even more than the backend folks. In Western Europe, for example, the median web developer has a yearly gross salary of $35,400 USD, the median backend developer earns $39,500 and a machine learning developer makes $45,200. This relationship is seen across regions and also at higher wage levels. Web and mobile development are the most commoditised; there is a fairly low barrier to start making simple apps or websites, and these tasks are relatively easily outsourced to other regions.

Scarcity of skills drives up paycheck amounts for developer services.

Scarcity of skills drives up the price for developer services. This is also true for new, emerging areas of development, like Augmented and Virtual Reality or the Internet of Things, but only at the top end of the scale. The best developers in emerging areas earn top dollar, while the bottom half of the developer population makes less than their counterparts in more established sectors. Let’s compare Augmented Reality (AR) with backend developers in North America as an example. The median wage for an AR developer in that region is $71,000 USD, a good bit less than the $79,200 that the median backend developer makes. At the top end, however, AR development is more lucrative. At the 75th percentile, the AR developer is paid $132,300 and the backend developer $122,800. At the very top (90th percentile), the difference is even more pronounced: $219,000 for AR, $169,000 for backend. The reason for this wide range of salaries is that markets like AR/VR or IoT are still commercially underdeveloped. Companies that are early adopters pay large sums for skilled developers, who are scarce. At the same time, less experienced developers are attracted by the hype. Their compensation suffers both from a lack of relevant skill and from a lack of companies hiring in the early market.

Again this pattern repeats across regions. The exception is South Asia. The outsourcing model that drives software development in that region seems to be built on maintaining legacy code and developers there are less involved in emerging innovations (a conclusion that’s also supported by our developer population sizing research).


We’re still a long way off a global market for developers!

We started this chapter by saying that developers can market their services location-independently if they choose to. However, it’s clear from the data that we’re still a long way off a global market for developers! The median web developer in North America, for instance, earns $73,600 USD per year. A Western European web developer earns half of that ($35,400 USD), although recent exchange-rate shenanigans due to Brexit and the Euro crisis will have affected that comparison. Web developers in other regions earn half of that again: between $11,700 in South Asia and $20,800 in Eastern Europe. It’s not just the region of the world you live in that matters, but also the country and even the city you call home.

This opens up opportunities for organisations that accept remote workers. You can hire a top-10% Eastern European backend developer for less money than the median North American wage in that sector. For developers, it means that brushing up your English skills and looking for opportunities beyond your backyard can be very interesting indeed. Developers who take that leap and seek opportunities that pay to international standards are in the minority. This explains why top wages in emerging regions (Asia, the Middle East, Africa) are so exuberantly high compared to local standards. A Western developer in the top decile earns about three times as much as the median wage in their sector and region. In the emerging world, top wages are seven to ten times the median. The best developers in those regions work for multinationals or sell their services on international marketplaces, while most stay employed locally, at much lower remuneration levels.

So what’s a developer to do if you want to move up in the world, financially? Invest in your skills. Do difficult work. Improve your English. Look for opportunities internationally. Go for it. You deserve it!

Take our Developer Economics Survey and speak out about other challenges you face!


How to become a Video Game designer: Education & Training

In the first part of our series we looked at how to plan and get started for a career as a video game designer, taking a look at what a game designer actually does, who typically employs them and the potential for earning good money. In this second part we’re going to look more closely at education and training.


If you have spent any time researching a career in game design, then you probably already know that you need the most current game design training. This industry is growing; however, it remains extremely competitive. Therefore, it is essential to have expert knowledge of the entire game R&D process. If your goal is to work for a game studio or to design your own games, you need training as a programmer and in graphic design or art.

Degree Programs for Game Design

Many universities offer courses in computer science. However, designers may need a bachelor’s degree if they are planning to work for large game studios. Although some colleges offer a degree in game design, aspiring game designers can get the necessary training from computer science, software engineering, or related degree programs.

Required Coursework

The required coursework for game design programs covers subjects like 2D, 3D and CAD modelling and animation, as well as level and interface design. Other needed courses cover storyboard rendering, drawing, and scripting.

Co-Curricular Activities

Many schools have a club for students who wish to work on game design and development outside of the classroom. If your school or program doesn’t offer a game design club, join the AV club instead.

Coding Bootcamps are a great way to learn a lot in a short period. These are often available free or low-cost through various schools or communities. There are also some free camps available online.

Extra-Curricular Activities of a Video Game Designer

It’s important, also, that you regularly play video games. As simple as that sounds, you need experience as a game player. It helps you become aware of the most modern trends in the industry. Understanding the most current advanced gaming technology can also be beneficial. Pay attention to how games are structured and begin to think of ways you would improve them. Make notes for when you begin to design your own game.

Some employers will require a bachelor’s degree in video game design or a related computer science program, while for others A-levels will be the minimum requirement. To make up for insufficient formal education, you may need experience working within the computer science or graphic arts industries. You will need to possess an understanding of programming languages, software design, and modelling programs.

Next week, we’ll take a close look at how to finally launch your career.

 


What is the best programming language for Machine Learning?

Q&A sites and data science forums are buzzing with the same questions over and over again: I’m new in data science, what language should I learn? What’s the best machine learning language?


There’s an abundance of articles attempting to answer these questions, based either on personal experience or on job-offer data. However, there’s far more activity in machine learning than job offers in the West can describe, and peer opinions, while of course very valuable, are often conflicting and may confuse novices. We turned instead to our hard data from 2,000+ data scientists and machine learning developers who responded to our latest survey, telling us which languages they use and what projects they’re working on, along with many other interesting things about their machine learning activities and training. Then, being data scientists ourselves, we couldn’t help but run a few models to see which factors are most strongly correlated with language selection. We compared the top-5 languages, and the results show that there is no simple answer to the “which language?” question. It depends on what you’re trying to build, what your background is and why you got involved in machine learning in the first place.

Which machine learning language is the most popular overall?

First, let’s look at the overall popularity of machine learning languages. Python leads the pack, with 57% of data scientists and machine learning developers using it and 33% prioritising it for development. Little wonder, given all the evolution in the deep learning Python frameworks over the past 2 years, including the release of TensorFlow and a wide selection of other libraries. Python is often compared to R, but they are nowhere near comparable in terms of popularity: R comes fourth in overall usage (31%) and fifth in prioritisation (5%). R is in fact the language with the lowest prioritisation-to-usage ratio among the five, with only 17% of developers who use it prioritising it. This means that in most cases R is a complementary language, not a first choice. The same ratio for Python is at 58%, the highest by far among the five languages, a clear indication that the usage trends of Python are the exact opposite to those of R. Not only is Python the most widely used language, it is also the primary choice for the majority of its users. C/C++ is a distant second to Python, both in usage (44%) and prioritisation (19%). Java follows C/C++ very closely, while JavaScript comes fifth in usage, although with a slightly better prioritisation performance than R (7%). We asked our respondents about other languages used in machine learning, including the usual suspects of Julia, Scala, Ruby, Octave, MATLAB and SAS, but they all fall below the 5% mark of prioritisation and below 26% of usage. We therefore focused our attention on the top-5 languages.

Python is prioritised in applications where Java is not.

Our data reveals that the most decisive factor when selecting a language for machine learning is the type of project you’ll be working on – your application area. In our survey we asked developers about 17 different application areas while also providing our respondents with the opportunity to tell us that they’re still exploring options, not actively working on any area. Here we present the top and bottom three areas per language: the ones where developers prioritise each language the most and the least.

Machine learning scientists working on sentiment analysis prioritise Python (44%) and R (11%) more and JavaScript (2%) and Java (15%) less than developers working on other areas. In contrast, Java is prioritised more by those working on network security / cyber attacks and fraud detection, the two areas where Python is the least prioritised. Network security and fraud detection algorithms are built or consumed mostly in large organisations – and especially in financial institutions – where Java is a favourite of most internal development teams. In areas that are less enterprise-focused, such as natural language processing (NLP) and sentiment analysis, developers opt for Python which offers an easier and faster way to build highly performing algorithms, due to the extensive collection of specialised libraries that come with it.

Artificial Intelligence (AI) in games (29%) and robot locomotion (27%) are the two areas where C/C++ is favoured the most, given the level of control, high performance and efficiency required. Here a lower-level programming language such as C/C++, which comes with highly sophisticated AI libraries, is a natural choice, while R, designed for statistical analysis and visualisations, is deemed mostly irrelevant. AI in games (3%) and robot locomotion (1%) are the two areas where R is prioritised the least, followed by speech recognition, where the case is similar.

Other than in sentiment analysis, R is also relatively highly prioritised – as compared to other application areas – in bioengineering and bioinformatics (11%), an area where both Java and JavaScript are not favoured. Given the long-standing use of R in biomedical statistics, both inside and outside academia, it’s no surprise that it’s one of the areas where it’s used the most. Finally, our data shows that developers new to data science and machine learning who are still exploring options prioritise JavaScript more than others (11%) and Java less than others (13%). These are in many cases developers who are experimenting with machine learning through the use of a 3rd-party machine learning API in a web application.


Professional background is pivotal in selecting a machine learning language.

Second to the application area, professional background is also pivotal in selecting a machine learning language: the developers prioritising each of the top-five languages more than others come from five different backgrounds. Python is prioritised the most by those for whom data science is their first profession or field of study (38%). This indicates that Python has by now become an integral part of data science: it has evolved into the native language of data scientists. The same cannot be said for R, which is mostly prioritised by data analysts and statisticians (14%), as the language was initially created for them, replacing S.

Front-end web developers extend their use of JavaScript to machine learning, with 16% prioritising it for that purpose, while steering clear of the cumbersome C/C++ (8%). At the exact opposite end stand embedded computing hardware / electronics engineers, who go for C/C++ more than others, while avoiding JavaScript, Java and R more than others. Given their investment in mastering C/C++ in their engineering life, it would make no sense to settle for a language that would compromise their level of control over their application. Embedded computing hardware engineers are also the most likely to be working on near-the-hardware machine learning projects, such as IoT edge analytics, where hardware may force their language selection. Our data confirms that their involvement is significantly above average in industrial maintenance, image classification and robot locomotion projects, among others.

For Java, it’s the front-end desktop application developers who prioritise it more than others (21%), which is also in line with its use mostly in enterprise-focused applications, as noted earlier. Enterprise developers tend to use Java in all projects, including machine learning. The company directive in this case is also evident from the third factor that is strongly correlated to language prioritisation: the reason for getting into machine learning. Java is prioritised the most (27%) by developers who got into machine learning because their boss or company asked them to. It is the least preferred (14%) by those who got into the field just because they were curious to see what all the fuss was about. Java is not a language that you normally learn just for fun! It is Python that the curious prioritise more than others (38%), another indication that Python is recognised as the main language one needs to experiment with to find out what machine learning is all about.

It seems that some universities teaching data science courses still need to catch up with this notion though. Developers who say that they got into machine learning because data science is/was part of their university degree are the least likely to prioritise Python (26%) and the most likely to prioritise R (7%) as compared to others. There is evidently still a favourable bias towards R within statistics circles in academia – where it was born – but as data science and machine learning gravitate more towards computing, the trend is fading away. Those with university training in data science may favour it more than others, but in absolute terms it’s still only a small fraction of that group too that will go for R first.

C/C++ is prioritised more by those who want to enhance their existing apps/projects with machine learning (20%) and less by those who hope to build new highly competitive apps based on machine learning (14%). This pattern points again to C/C++ being mostly used in engineering projects and IoT or AR/VR apps, most likely already written in C/C++, to which ML-supported functionality is being added. When building a new app from scratch – especially one using NLP for chatbots – there’s no particular reason to use C/C++, while there are plenty of reasons to opt for languages that offer highly-specialised libraries, such as Python. These languages can more quickly and easily yield highly-performing algorithms that may offer a competitive advantage in new ML-centric apps.

Finally, contractors who got into machine learning to increase their chances of securing highly-profitable projects prioritise JavaScript more than others (8%). These are probably JavaScript developers building web applications to which they are adding a machine learning API. An example would be visualising the results of a machine learning algorithm on a web-based dashboard.
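
As a minimal sketch of that pattern, here is how a front-end developer might call a third-party machine learning API from a TypeScript web app. The endpoint, payload and response shape are hypothetical illustrations, not a real service.

```typescript
// Hypothetical third-party sentiment API; endpoint and schema are made up.
interface SentimentResult {
  label: "positive" | "negative" | "neutral";
  score: number; // model confidence, 0..1
}

async function analyseSentiment(text: string): Promise<SentimentResult> {
  const response = await fetch("https://api.example.com/v1/sentiment", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  if (!response.ok) throw new Error(`API error: ${response.status}`);
  return (await response.json()) as SentimentResult;
}

// e.g. feed the result into a web-based dashboard widget
analyseSentiment("I love this product!")
  .then((r) => console.log(r.label, r.score))
  .catch(console.error);
```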

There is no such thing as a ‘best language for machine learning’.

Our data shows that popularity is not a good yardstick to use when selecting a programming language for machine learning and data science. There is no such thing as a ‘best language for machine learning’ and it all depends on what you want to build, where you’re coming from and why you got involved in machine learning. In most cases developers port the language they were already using into machine learning, especially if they are to use it in projects adjacent to their previous work – such as engineering projects for C/C++ developers or web visualisations for JavaScript developers.

If your first ever contact with programming is through machine learning, then your peers in our survey point to Python as the best option, given its wealth of libraries and ease of use. If, on the other hand, you’re dreaming of a job in an enterprise environment, be prepared to use Java. Whatever the case, these are exciting times for machine learning and the journey is guaranteed to be a mind-blowing one, irrespective of the language you opt for. Enjoy the ride!


How to Break into Game Design (Part 1): What They Do and How to Get Started.

Developers in game design work alone or in teams to develop and design video games. The video game sector is a £41 billion industry in the United Kingdom, a number expected to grow as more and more people play video games on their smartphones, according to Reuters.


What Does a Game Designer Do?

Game designers work with developers to coordinate the complex task of building games from the framework out. Designers have duties that include:

  •     Designing characters – backstories, storylines, and story arcs
  •     Creating and defining levels
  •     Creating puzzles and mini games
  •     Contributing to the art and animation

While developers create most of the code, a designer may also write code; various programming languages are utilised for gaming. Depending on the studio, a designer might also take on project management and testing duties.

What is the Economic Outlook for a Game Design Career?

According to new research conducted by IBISWorld, the software development industry is rapidly expanding. The latest statistics from Reed show that software developers make an average wage of £54,079 in the UK.

Who Employs Game Designers?

Most game designers work for game studios. There is a robust freelance market, however, for experienced game designers.

Skills Needed to Become a Game Designer

It helps to have natural ability, talent, or an interest in acquiring artistic skills. However, people lacking these abilities can often compensate with other technical computer skills. In fact, tech abilities may be preferred by some studios. Specific skills game designers should have include:

  •     Computer programming or knowledge of certain programming languages
  •     Coding
  •     CAD or 3D modelling
  •     Knowledge of AV equipment
  •     Critical thinking and problem solving
  •     Written and verbal communication

How to Get Started in Game Design

Because of the growing need, more colleges and universities are offering degree programs in video game design. In addition, there are technical degree and certification programs offered at various schools. Some communities and online services even offer free beginner coding courses to get you started. These courses are usually offered in connection with the computer science or media department of a local community college.

Game design is an exciting career with enormous earning potential. There are many facets of the job that include managerial and administrative duties, so it is important to have excellent communication skills in addition to computer and artistic abilities. It is true that most video game designers have a bachelor’s degree in some type of computer science. This doesn’t mean it is required, as many studios consider experience in lieu of education.

If all this has sparked your interest, stay tuned as over the next few weeks we’ll be publishing the next in our series on breaking into Game Designing as a career. Part 2 of our series will explain the educational and training requirements needed to get into the industry.