
Low-Code Platforms: Bringing Visual Programming Back (to Stay)


There’s an interesting trend in the second decade of this millennium. Things once declared “dead” are experiencing a resurgence. For example, animated GIFs, once relegated to cheesy ads for home refinancing or losing belly fat in a month with acai berries, are back in Slack channels, social media and blogs everywhere. Email newsletters have returned after many corporations abandoned them as sales and marketing tools in 2008 or so. Podcasts were declared to have peaked sometime around 2010. Now they’re back, and there are almost too many to choose from. The consensus about the return of animated GIFs, email newsletters and podcasts is that they’ve improved in quality and offer more to people who use them.

Visual programming environments and platforms were also hot in the 1990s and the early 2000s. Then the noise they generated seemed to die down. And now they’re back, very likely for good. Let’s look at why.

Too fast, too choppy, too inwardly focused and…it’s complicated

Visual programming has been around for much longer than we think. It started quietly enough in the 1960s with Bert Sutherland’s interactive programming language. The idea built up steam in the 1970s and 1980s (Smalltalk). Moving away from the cycle of text editing, compiling, writing down the errors and debugging by eye was alluring. And so visual programming came of age in the 1990s with Visual Basic, Xelfi/NetBeans, Visual Studio and the height of the CASE-tools hype.

 

Ah, the old Smalltalk days. Source: Basic Aspects of Squeak and the Smalltalk-80 Programming Language

 

So, there you have it: a whole slew of tools that promised to make programming so easy a child could do it. So, what happened? Why did visual programming virtually go gentle into that good night?

I think it’s because so much was still new in the 1990s and early 2000s. A whole lot of great digital and online stuff came out of that period very quickly. Take the World Wide Web, for example. It was going mainstream, but parts of it were more like the World Wild, Wild West. But I think that, in the rush to show the world the cool stuff the web and digital were bringing us, some steps were missed.

The visual programming tools of that period were really more about “look what we can do” than “look at what you can do.” The end result of that philosophy was shaky extensibility (if there was any at all), slow code generation and little to no cross-platform capability. In addition, in-depth programming skills and an engineering mindset were still the name of the game.

The only thing that’s constant is that nothing is constant

If there’s one thing I’ve learned since I started writing about programming languages and development trends in 1998, it’s that nothing is constant. When I invested in my first Mac in 1996, I had no idea I would replace it with a laptop just a few years later. And when I upgraded to one with an Intel Core i7 processor, I had no idea that it would end up gathering dust in my home office while I played with my smartphone and tablet in my living room.

In this mobile world, people want apps for almost everything. In addition, there are the other trends that are in the backlogs of today’s developers. These include solutions for cloud, machine learning, data science, artificial intelligence and IoT, as highlighted in “The State of the Developer Nation, Q1 2017,” the report compiled by Developer Economics. So, all of a sudden the already significant amount of knowledge you need to build software and applications in this brave new technological world has skyrocketed.

Most of you are developers, so I don’t need to tell you how difficult it is to be a full-stack unicorn in the age of “we need an AI and predictive analytics app for that on the cloud.” The Developer Economics surveys tell your story: your work can span multiple different areas, requiring mastery of several languages. Nor do I need to go on and on about the pressure to get these apps built and out in the marketplaces or stores ASAP or all the headaches that come with updates (new JavaScript libraries! Dependencies! Merges!). So, I’m going to skip all that and get to my point.

Now more than ever, we need to move away from the slow pace and nightmares of hand coding to something visual that makes development as easy as GUIs make almost any other computer task. But we don’t need the visual programming of the 1990s; we need something new and improved. And now we have it: it’s called “low-code,” an easy-to-understand name that Forrester coined in 2014.

Low-code is visual programming of the 1990s on steroids

Although low-code development includes visual programming, I want to be clear that this is not your father’s visual programming. Yes, it’s true that common code elements, workflows and business processes are turned into components so you can drag them around and drop them into a visual IDE. But there’s even more to it than that. Application generation, deployment and updates are automated. You’re doing more than building applications visually using things that have survived the tests of time and software battles.

More specifically, rather than starting a project by hand-coding some basic routing or writing a set of failing tests, you draw the shape of your application. You define the precise workflow your application needs to address each possible scenario. You draw the UI. You specify the data your application will store and how the database will store it. And you use your visual IDE to integrate REST APIs with your application or integrate your applications with other systems, such as an SAP ERP.

So, instead of worrying about how you’re going to find the time to learn the latest faddish JavaScript framework or play with a cutting-edge NoSQL data store, you’re delivering something valuable to the world in what seems like no time flat. Even better, you’re not sweating over DevOps or crying in your beer over application monitoring. So, basically, you’ve got something that’s miles ahead of what visual programming used to offer.

What about low-code platforms gives visual programming staying power?

Leaving behind the choppy, inwardly focused, released-too-fast ways of 1990s visual programming is becoming easier all the time. That’s huge. In the Forrester Wave: Low-Code Platforms, Q2 2016, Forrester rated the top 14 vendors of low-code development platforms out of a much larger field. The fact that Google has thrown its hat into the low-code ring is another sign, as is a recent article in InformationWeek about low-code.

Here are the reasons I think low-code has brought visual programming back to stay:

  • Flexibility: You work in an IDE for visually defining the UIs, workflows and data models of your application, but you can still add your own hand-written code (code you already know) where necessary.
  • Automated database integration: Low-code platforms transparently convert your data models into relational tables and SQL queries. And data from external APIs is automatically made available to your application. This is not your typical ORM. It includes change management from the database all the way up to the UI.
  • No more deployment, maintenance and change nightmares: Automated tools build, debug, deploy and maintain the application in test, staging and production, sometimes with just one click.

Basically, everything that anyone ever complained about in forums related to visual programming is gone, and the parts people loved are still here.

And, while low-code platforms do require a little training, I’m not talking months of schooling here. More like a few weeks. Plus, low-code makes it possible to avoid having to know more languages and technologies than I can count, all of which would otherwise be needed to meet the demands of web and mobile application development. What’s not to love about that? You get to take a concept and build it into a working app without going back to school to learn six more things that have popped up in the last few months.

Conclusion: Low-code keeps the heart of visual programming beating

So, Justin Timberlake might have brought sexy back, but low-code has brought the heart of visual programming back. It takes what was good about the early days of visual programming and adds a big advantage: you can jump right in and start describing your solution to a problem. You don’t need to learn a whole bunch of arcane details. Deployment, updates and integration are all fast and easy, mostly because they’re done for you automatically.

As a result, when a request comes in for an app that uses fitness and heart rate data to propose a specific exercise program for a heart patient—in 2 weeks—you can get right on it. How cool is that?

Interested in finding out how you compare to other software developers in your country/region? Take the Developer Economics survey and get your personalised developer scorecard.


What types of tools are IoT developers actually using?

IoT platforms were on the cusp of reaching the peak of inflated expectations in Gartner’s Hype Cycle from August 2016. Not surprisingly – there are literally hundreds of them, and counting. Also, the word ‘platform’ is used for anything, from network infrastructure to hardware components to cloud services. In the end, IoT owes its boom in popularity to more and better tools becoming available for developers. In this article, we shed some light on the types of tools that IoT developers are actually using.

The IoT tool market is still underdeveloped and heavily fragmented.

Despite the proliferation of IoT platforms and other tools, the IoT tool market is still underdeveloped and heavily fragmented. We asked IoT developers to select the technologies they use out of a list of 15 categories. On average, IoT developers use 2.9 types of tools from that list, or roughly one in five; professionals slightly more, at 3.5 tool types. That’s fewer than developers in other sectors like cloud, mobile, or web, where developers use a quarter to a third of the tools listed. Part of the reason is fragmentation: not every tool is comprehensive enough to be relevant to a large number of developers. In part, the low tool usage is due to underdevelopment of the tool market. 11% of IoT developers don’t use any of the tools in our list, compared to 6% of web developers and 3% of mobile developers, whom we presented with similarly sized lists. Either way, we expect to see a good bit of consolidation and development before we can call this a mature tooling market.

Professional IoT developers use more tools than amateurs.

Professional IoT developers not only use more tools than amateurs, as we said; they also tend to use specific types of tools more often. The biggest differences are seen in categories like software deployment tools, IoT cloud platforms, embedded operating systems, machine learning platforms, gateway middleware, beacons, message brokers, or fog computing. What all these technologies have in common is that they are components of a complete IoT solution, i.e. technologies that an engineer would integrate under the hood to implement a valuable product or project. Fog or edge computing – championed by Cisco – is notable by its absence: a mere 4% of IoT developers are working with this technology. It may be too early for this technology, or the need for it might not be as big as pundits proclaim. Time will tell.

The gap between professional and amateur use is virtually non-existent in hardware platforms such as single-board computers like the Raspberry Pi or prototyping boards like the Arduino or Intel Edison. These boards have become so cheap and accessible (i.e. easy to use) that anyone with a minimal technical background can play around with them and put them to productive use. Even wearables toolkits and middleware show signs of this level of accessibility.

We also don’t see the amateur-pro gap in high-level, integrating platforms: Smart Home platforms like HomeKit or SmartThings, smartwatch platforms like watchOS or Android Wear, or voice platforms like Amazon Alexa. These are all areas (IoT verticals) that are easy to get into and easy to imagine (and design) a solution for that scratches your own itch, and they are therefore highly popular among hobbyists, as we’ve highlighted in other reports. Attractiveness to hobbyists aside, these comprehensive types of platforms lower the barrier for people to start building meaningful solutions quickly, whereas the component technologies from above are still more the domain of specialists. Even health & wellness data platforms like Google Fit or HealthKit – arguably a more specific, advanced domain – have only a small difference in usage between professionals and amateurs.

Some of the technologies in the list are specific to certain verticals: wearables toolkits are for wearables developers, Smart Home platforms for Smart Home developers, and so on. Or are they? 12% of developers who use Smart Home platforms are not currently targeting or planning to target that vertical, for example. That is a reasonably big number, even though the usage gap with Smart Home developers is indeed clear. Some of these technologies might be fairly generic, and might even be ‘misused’ for unrelated projects. In some cases like smartwatch platforms, developers might work on a smartwatch app as part of a broader IoT solution, without self-identifying necessarily as ‘wearable developers’.


Only 20% of retail IoT developers use beacons

Location beacons are an interesting case. Their most marketed use cases were in retail and hospitality applications. However, only 20% of retail IoT developers use beacons; a good bit less than the 27% to 33% in-vertical usage we see for other vertical-specific technologies. Furthermore, the gap between in-vertical and out-of-vertical usage is only 9 percentage points, i.e. half that of the other technologies discussed here. We take this as a sign that beacons may be overhyped, perhaps technologically, but more likely in terms of how valuable the use cases are to customers. In our previous State of the Nation report (Q3 2016), we noted that retail was the sector within IoT with the fastest attrition of developers, possibly due to a sense of disillusionment and backlash against the hype. The data on technology use in the retail vertical seems to support that hypothesis.

The potential remains enormous

We opened this article with Gartner’s claim that we’re at the peak of inflated expectations when it comes to IoT platforms. Our IoT research over the past years says that we’ve already passed it, with stalled population growth and high churn among developers, heading full-speed towards the trough of disillusionment. The key reason is that the technology is still too immature, very few platforms are finding product-market fit, and thus the majority of consumer-focused developers lack a platform that gives them a viable market. Of course the core technology marches on, with some mostly consumer-focused tools finding uses outside their original intended market. The potential remains enormous. However, it’s going to get worse before it gets better, with a lot of consolidation among the many existing technology platforms.


Angular vs React: Battle for the future of front-end web development?

Google and Facebook are two of the world’s most powerful companies and each has created a framework for building web apps. Angular and React respectively appear to be in a battle for the future of the web, with the active online debate and adoption for large consumer-facing apps seeming to lean quite strongly in React’s favour at present. Are they collectively taking over the front-end? Is React really leading? Our data from a broad cross-section of nearly 6,000 web developers may surprise you.


Which is your favourite framework? Take the Developer Economics Survey and win amazing prizes.

Although traditional, largely static web pages still have an important place, mobile is now the dominant computing paradigm, and mobile users have come to expect the interactivity of native apps. To match a native app experience, a web app cannot be rendered entirely on the server side; the page has to be changed dynamically on the client. The more extensive the changes, the greater the need for a better abstraction than the DOM (Document Object Model) to manage the complexity. This has driven ever-growing usage of third-party JavaScript libraries and frameworks.

Historically jQuery was the first library to get really popular, enabling easier manipulation of the DOM on the client side. It’s still the most popular today, as the primary front-end library for 34% of web developers. However, manually manipulating the DOM turns out to be extremely complex and error-prone when it’s happening extensively, so frameworks that provide a better abstraction are increasingly important. Overall just 12% of web developers don’t use any kind of framework and another 6% have written their own. That leaves 48% of web developers currently using a third-party framework other than jQuery as their primary way of doing front-end web development. Of those, Angular and React account for 30% of all usage, leaving all the others far behind. Indeed front-end web development is such a fragmented space that no other single library or framework accounts for more than 2% of primary usage. So React and Angular certainly lead other frameworks, although only around half of all web developers have fully embraced any single page application framework so far.
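
To make the difference in abstraction concrete, here is a minimal, illustrative sketch (not taken from the survey, and using the current React API): with jQuery you mutate the DOM by hand on every state change, whereas with React you declare what the UI should look like for a given state and let the framework work out the DOM updates.

```tsx
import React, { useState } from "react";
import { createRoot } from "react-dom/client";

// jQuery-style, imperative: every state change needs explicit DOM work, e.g.
//   $("#label").text(`Clicked ${count} times`);
// React-style, declarative: describe the UI as a function of state and let
// the framework diff and patch the DOM for you.
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

// Assumes an HTML page containing <div id="root"></div>.
createRoot(document.getElementById("root")!).render(<Counter />);
```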

Angular is still king despite the React hype.

AngularJS (Angular 1.x) was the first single page app framework to get the stamp of approval from an internet giant, when Google started to back the open-source side project of one of their employees publicly. Google’s backing gave many large enterprises the confidence to adopt, and with broader adoption came a flourishing ecosystem of components and tools. As this was happening, React was built internally at Facebook and deployed on the Facebook newsfeed in 2011 and then Instagram’s web app in 2012. Yet React wasn’t released as open source until 2013, by which time Angular had an enormous lead in both adoption and ecosystem. Then in late 2014 Google appeared to stumble previewing Angular 2.0, which was going to be incompatible with Angular 1.x and use a new language. Reaction from the developer community was not good. By mid-2015 Google had agreed to work with Microsoft so that TypeScript became the official language for Angular 2.0, while the 1.x series had a promise of continued support, and a migration path between versions was created. This discontinuity for the Angular community seemed like a gift to the already rapidly growing React.

Although Angular still had many vocal fans, anyone following the broader front-end web developer community online would have to assume that React was taking Angular’s crown. At the time of writing React has passed Angular 1.x in terms of stars on their respective GitHub projects, with around 61,500 to 55,000. Angular 2.x trails both of these by far with 21,500. In the independent State of JavaScript survey run in late 2016, React came out way ahead of both versions of Angular in usage, interest, and retention. However, our own survey, which reaches out across many different developer communities, does not reflect this result at all. Not only is Angular 2.x the primary framework for about as many developers as React (10% vs 9% globally), but Angular 1.x is still the most popular overall by a slim margin (11% use it as their primary framework). In total, those using one or the other version of Angular number more than double those using React.


React is favoured by front-end specialists.

In order to see how reality in the market could be so different from the online buzz and even a large community survey, it’s interesting to look at the breakdown of JavaScript library and framework usage by primary programming language. If we only look at the users of the latest versions of JavaScript – those who like to stay at the forefront and are more likely to be found debating framework choices on the internet – we see React is the primary framework for 27% of them. So amongst those who have made the switch to ESNext (i.e. the 2015 version of the JavaScript standard or later), who then use tools to convert their code to the JavaScript that’s widely supported in browsers (known as ES5, introduced back in 2009), more are using React than both versions of Angular combined. However, this is the only group of developers for which React beats either version of Angular alone. These forward-looking JavaScript users are less than half of those primarily using JavaScript, and just 16% of all web developers (who almost all use some JavaScript).

A further 18% of web developers are still primarily using ES5. More of these are currently still using Angular 1.x (21%) as their primary framework than Angular 2.x (9%) and React (8%) combined. These developers are getting on with what they know and are productive doing. They may be following the new standards and frameworks but most of them don’t see enough benefit in switching yet. Another 3% of all web developers are primarily using TypeScript, which could be seen as the most advanced flavour of JavaScript currently available. However, some web developers understandably don’t want to adopt anything not yet in the standards, others don’t want to use the optional static types, and a significant minority still avoid anything from Microsoft. Given that Angular 2.x has adopted TypeScript it’s not surprising to find 41% of those primarily using the language have adopted the framework. There are another 18% currently still using Angular 1.x who will most likely migrate to Angular 2.x.
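
As a small, purely illustrative example of what those labels mean in practice (the figures are the primary-framework shares quoted above), TypeScript simply layers optional static types on top of ESNext syntax:

```typescript
// TypeScript: optional static types over ESNext syntax. Strip the type
// annotations and this is plain modern JavaScript.
interface FrameworkShare {
  framework: string;
  share: number; // % of web developers using it as their primary framework
}

const shares: FrameworkShare[] = [
  { framework: "Angular 1.x", share: 11 },
  { framework: "Angular 2.x", share: 10 },
  { framework: "React", share: 9 },
];

// ESNext features such as arrow functions and template literals are compiled
// down to the ES5 that browsers have supported since 2009.
const combined = shares.reduce((sum, s) => sum + s.share, 0);
console.log(`Angular + React combined primary usage: ${combined}%`);
```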

Backend web developers prefer Angular on the front-end.

After some flavour of JavaScript, the most popular language for web developers is PHP, with 21% still considering it their primary language. Given the focus on rendering pages server-side in most of the popular PHP content management systems, it’s not too surprising to find less interest in single page app frameworks in general amongst these developers, with 52% still using jQuery as their primary library. Interestingly only 3% of PHP developers are primarily using Angular 1.x, with 8% on Angular 2.x, and just 4% for React. In fact almost as many PHP developers don’t use any library or framework for the front-end (14%) as use React plus either Angular version.

Developers primarily using server-side languages other than JavaScript/Node.js or PHP (totalling 42% of all web developers) are significantly less likely to be using jQuery than PHP developers, but they are also significantly less interested in Angular and React than the JavaScript developers (26% vs 38%). When they do primarily use one of these front-end frameworks, far more choose Angular (20%) than React (6%), and more of the Angular users are on version 2.x (11%) than version 1.x (9%). Considering all of those who are server-side developers not using Node.js, which is 63% of the web developer population, Angular is significantly preferred to React at this point, probably because it is a complete framework, rather than forcing the developer to make lots of other library and tooling choices as they currently have to with React.

What happens next?

There are many alternative futures that could be inferred from this data. The simplest story would be that framework preferences won’t move much for the different groups. Server-side developers will continue to have relatively little interest in the front-end frameworks and ES5 developers will stick to Angular 1.x when they eventually transition to ESNext or TypeScript. This doesn’t fit the current trend of increased JavaScript usage across the web, front-end and server. It also ignores the fact that Google will be migrating to Angular 2.x internally and developers will not want to be left without support one day. We could also imagine that as developers start using ESNext or TypeScript their framework preferences shift accordingly: both React and Angular gain greater share, with React growing faster than Angular.

There’s probably some truth in this, but it’s too focused on the front-end developers. Server-side developers who aren’t using Node.js are less likely to find React attractive without a much simpler learning curve for the ecosystem. Then again, the most popular PHP framework is still WordPress, and the company behind WordPress has chosen React as the new front-end framework for WordPress.com – many PHP developers may follow them. Facebook has significant momentum with React, but Angular is likely to remain the most popular for smaller projects and internal apps. What we can predict is that despite the inevitable churn on the front-end, both frameworks have successfully built a critical mass of developers creating valuable ecosystems, and both are set for significant growth in the years ahead. We’d be surprised if the 30% of web developers using either Angular or React didn’t become 40% in the next 2 years.

So, what do you prefer? Angular or React? Take the Developer Economics Survey and win amazing prizes.


[ Infographic ] The State of the Developer Nation Survey – Tools & Technologies featured

The State of the Developer Nation Survey (H2 2016) was by far our largest in terms of participation. The best way to illustrate this is with an infographic highlighting important facts and figures. Further down you will be able to find out the total number of respondents and their countries of origin, as well as all the development areas covered and the number of tools featured per development area.

Clicking on the infographic will redirect you to the full list of tools, falling under 7 different development areas, namely: Desktop, Mobile, Web, IoT, Cloud, AR/VR and Machine Learning. In total there are 21 categories across all development areas, amounting to a total of 226 tools.

[Infographic: top tools & technologies, State of the Developer Nation survey, VisionMobile]

 


A New Dimension for UI: Using Unity for Virtual Reality


The advent of virtual reality solutions, ranging from gaming to training and simulations, is raising new questions about previously standard industry practices. User interfaces (UI), in particular, require a complete re-thinking of function, layout, and implementation. Traditionally, user interfaces have been divided into diegetic (part of the game world), non-diegetic (separate from the game world), spatial and meta components. Most successful games use a combination of them to provide a balanced experience. In this article, we break down each category, its advantages/disadvantages for virtual reality, and how to implement them in Unity. Meta UI components are rare in general and largely disregarded in VR programming. For that reason, they are not considered in this analysis.

Non-Diegetic UI

Historically, non-diegetic user interfaces have been the most common in the gaming industry. Their key defining feature is that the components of the UI exist on a completely different plane than the actual 3D game space. Think of a heads-up display (HUD), likely the most ubiquitous example of a non-diegetic user interface. A health bar, for example, does not exist within the 3D space that the game supposes, nor can characters in-game interact with it. It is outside both the game’s narrative and space.

Pros/Cons

This modality offers the user a very clear display of relevant information and allows for quick navigation. The fear, however, is that the distinct separation of the game world from the structures that manipulate it results in a lack of immersion.

Use in Virtual Reality With Unity

For virtual reality, non-diegetic user interfaces can be very difficult to implement successfully. The largest obstacle is the fact that a HUD à la traditional gaming can be too close to the user’s face, resulting in highly uncomfortable eye strain. In Unity, the typical way to design a non-diegetic HUD is through the Screen Space – Overlay or Screen Space – Camera render modes on a Canvas. This is unsupported, however, in Unity VR due to discomfort-related concerns. A developer can, however, fix a model to the user’s vector of vision. This, in effect, serves the purpose of a HUD. Once again, though, it can prove awkward. It would be like walking around all day with a phone directly in front of you: in order to focus on it, you would need to re-focus your view from the rest of the world, and its presence when focusing on other tasks would be distracting. In short, stay away from strictly non-diegetic UIs when developing solutions for virtual reality.

Diegetic UI

This model of user interface holistically embeds all of the information typically represented in a HUD into the game’s 3D space. An example of this in a game would be if, instead of a mini-map in the corner of the screen, the avatar/user would pull out and look at a map that exists within the game world. Thus, the user interface is part of the game’s narrative and exists within the game space. From a player perspective, the Dead Space video game franchise is generally regarded as having implemented one of the best diegetic UIs to date.

Pros/Cons

The advantage of this style is the belief that it increases the realism of the gaming experience and thereby results in deeper immersion. The drawback, however, is that it requires developers to seek ingenious ways of representing typical information, such as health, items in inventory, etc. These, in turn, must be intuitive and effective; otherwise, they will frustrate the user and result in a loss of immersion.

Use in Virtual Reality With Unity

In many ways, the goal of virtual reality is to provide a level of engagement and immersion that mimics real life. With this in mind, diegesis seems like the logical, and even necessary, method of crafting user interfaces. The logic seems to go: if real life is without menus and speech bubbles, shouldn’t virtual real life be so too? To that end, there are several ways to create more diegetic experiences using Unity in new, innovative ways. One way is to use the Raycast function to initiate interaction. Let’s imagine, for example, that in an RPG the user wishes to interact with an NPC. Instead of clicking and using a menu, the user could simply stare at them for an appropriate amount of time, which mirrors how we use eye contact in real life to initiate conversation.

Spatial UI

A spatial UI lies halfway between the traditional diegetic and non-diegetic models by offering elements that exist within the 3D game space but are not part of the game’s narrative. Perhaps the simplest example is selecting a unit in a real-time strategy game: around the unit appears some sort of circle or symbol to indicate that the unit has been selected. In a first-person shooter, a way-marker for an objective is another example of spatial UI. The way-marker exists in the game space, but if you were to live inside your character’s head, you wouldn’t see it.

Pros/Cons

In many ways, the advantages and disadvantages of spatial UIs mimic those of diegetic models. The key upside is it provides a lot of clarity to the user; all the relevant information for a user can be tagged to the relevant models. This, however, is offset by the fear that the presence of meta-information could break the immersive dimension of the game.

Use in Virtual Reality With Unity

When it comes to virtual reality, spatial UI is the simplest and most effective option. When programming with Unity, this means selecting World Space as the render mode for the Canvas. This allows components of the UI to be placed anywhere in the game space. For the best results and the most comfortable experience for the user, set the text at a comfortable distance (3-5 meters) away and make sure it is clear, large, and readable.

In order to reduce clutter on the screen and keep immersion levels high, it is often advisable not to permanently tag UI information to a model. It can appear unrealistic and unnecessary. Instead, allow notifications and status updates to flow in and out of the game as organically as possible. For example, don’t always have a health bar floating above a character’s head; instead, have an aura appear around the character or have a health bar flash in the game space near the character. Unity also allows the implementation of arrows to help direct users if they’re looking in the wrong direction. The easiest way to add this to a game is with GUIArrows, and customising which direction should be prioritised can be done with the Show Angle function.

A subtle but clear use of spatial user interfaces is overwhelmingly the simplest and most effective model. It provides the necessary instruction without (if done tastefully) shattering the user’s level of immersion.

Conclusion

The key consideration, whether choosing to pursue non-diegetic, diegetic or spatial components, is to strike a balance between immersion and usability. The greatest strength of virtual reality is that its 360° of 3D space naturally induces a degree of engagement that far surpasses even the most advanced screen-based solutions. The fear for some developers is that immersion could be broken by clunky interfaces that divorce the user from the actual experience. With this in mind, it’s important to remember that many games featuring non-diegetic/spatial features still boast impressive levels of immersion. MMOs that allow highly customizable HUDs immediately come to mind. They may clutter the screen, but they also allow the user to feel at home in the experience, which in turn induces immersion.

In short, based on our experience at Program-Ace: when designing an interface for virtual reality, pay careful attention to keeping the experience intuitive and comfortable, while trying at every opportunity to embed UI components in the game space and game narrative.


What is the right CMS for your business?

 


“I don’t care about the platform, let’s just create our website on something popular and cheap and get on with it”.

Dear IT decision maker, this is wrong. On an infinite number of levels.

This article is going to show you why. It’s not going to promote one technology or CMS platform over another (well, at least not much, taking the author’s unavoidable personal bias into account). Instead, it’s going to address the issues that usually arise long after the CMS platform has been selected and paid for.

For the purposes of this article, we have picked interesting details about a number of popular and emerging CMS platforms like WordPress, Joomla, Drupal, Typo3, DNN and Umbraco, and we have also included references to Concrete5, Contentful and Rooftop, as well as Wix (a web site builder that is provided exclusively in SaaS form). Although WordPress is currently by far the most popular CMS out there, this post is not a “WordPress vs. the world” one.

“A lot of people are using it, what could go wrong?”

Let’s take WordPress. It’s got over 24 million installations (an estimate from a relevant article on Quora), but we can’t know for sure what percentage of them are business websites as opposed to personal sites, blogs, or small-business sites like hairdresser salons, neighbourhood groceries and auto repair shops (which are usually just a couple of pages set up on a free or very cheap theme, and that’s about it). Don’t get me wrong – there’s nothing wrong with this type of website, but they’re not indicative of a CMS platform’s capabilities in any way.

For comparison, Typo3 says it has over 500,000 installations, Umbraco says it has over 380,000 installations, Concrete5 is just shy of 140,000 installations and Drupal has over 1,200,000 installations.

Although other CMS platforms feature a much smaller number of installations, a Fortune 500 company website will – probably – outrank a local hairdresser’s website in complexity, features and quality.

So one number you should pay attention to is how many use cases are published out there, and for what type of clients. Do not base your choice solely on how many people are using the CMS, but introduce some quality criteria. How many companies in your business sector are using this CMS? What is their preferred choice and why?

“As long as it does its job”

Having been in the industry for about 20 years I’ve seen a lot, including “fake” CMS platforms. Many years ago, around 2001, I met with the owners of a small web development agency who believed in the doctrine that the client should be totally platform agnostic. They showed me the CMS they were using.

It was a glorified web file manager managing static HTML pages! This is what they knew how to do best – static HTML pages. But since clients were starting to demand “a CMS”, they gave them what they wanted.

When you choose your CMS, always make sure that it provides you with the editing functionality you really need. Chances are most of today’s CMS platforms will allow you to do several things (we’re not in 2000 any more), but how far you need them to go is up to you.

For example:

  • Does the website’s navigation system (menus, footer links etc.) get updated automatically or based on a specific set of rules when you add new pages (provided that you can easily add new pages!) or do you have to do that by hand?
  • How are page URLs generated, are they SEO friendly, and what happens when you change a page’s URL with regards to SEO and links that may exist on your old URL?
  • Does it manage image resizing for you or do you have to resize images before uploading them unless you want your visitors’ bandwidth to choke over that 20 x 6MB image gallery?
  • Are you really protected from “breaking” the layout if you use the CMS’ WYSIWYG editor (assuming it has one!) to update content in an unorthodox way?

The above are just food for thought. There are actually dozens of tiny little things that you should consider in the same regard.
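
To make one of those bullets concrete, SEO-friendly URL generation is the kind of chore a good CMS should do for you automatically; a hand-rolled sketch of it (illustrative only, not any particular CMS’s implementation) looks something like this:

```typescript
// Illustrative sketch of SEO-friendly slug generation from a page title,
// the sort of thing a decent CMS handles for you when you add a page.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .normalize("NFD")                 // split accented characters apart
    .replace(/[\u0300-\u036f]/g, "")  // drop the accent marks
    .replace(/[^a-z0-9]+/g, "-")      // collapse everything else to hyphens
    .replace(/^-+|-+$/g, "");         // trim leading/trailing hyphens
}

console.log(slugify("What is the right CMS for your business?"));
// -> "what-is-the-right-cms-for-your-business"
```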

For some CMS platforms, the answer to almost every one of the questions above is “it depends on the developer”, which is actually a good thing, since it means that the CMS can be properly customized and extended for your own needs as long as your specs are detailed and correct. Which leads us to our next point…

“I’ll hire somebody to extend it when I need to”

There are agencies out there that provide design and development services using their own proprietary CMS, claiming that it has been specifically developed to address your needs. While this may be true, you’re actually getting tied to a specific agency’s proprietary software, with little chance of finding developers outside this agency willing to work on it in the future, even if the agency gives you the full source code of their CMS (which, in most cases, won’t happen anyway).

If you decide to go with a popular open source CMS, you should definitely take the “signal-to-noise” developer ratio into consideration. What I mean by that is that it’s easy to find developers for popular CMS platforms, but you should watch out for fakes or people with very limited knowledge. A rule of thumb is that the more popular the CMS platform is, the more chances you have of hiring a person who just learned about it yesterday, or works solely with plugins/add-ons without having ever written a single line of code.

Although there are no known statistics for this, it is obvious that the easier a CMS is to set up and start with, the larger the pool of inexperienced developers is likely to be. Open source CMS platforms suffer from this a lot – my experiences are limited to WordPress, Joomla and DNN Community – all three are very easy to set up and get going, but require a lot more when it comes to specific functionality. There are a lot of folks out there that claim to be “developers” using one of those platforms when in fact they just know how to set it up and configure it with a theme (usually a free one) and probably some plugins. Ask them to do something that isn’t covered by the core CMS functionality or the plugins they are familiar with and you’re suddenly open to a whole new world of expenses, bugs, and subsequently more expenses.

“I got it cheap, now it’s ready and I don’t have to pay anything more”

Maintenance costs, unless they are agreed upon from day one, are considered hidden costs and they often end up, in the long term, being higher than the actual cost of developing your web site with the CMS of your choice. If your CMS’ performance degrades over time or if your CMS is often vulnerable to exploits, then you *must* consider maintenance services. The alternative is far more expensive.

So what can you do? First, keep in mind that the more widely used a CMS platform is, the more “bad” people are going to target it and the more vulnerabilities will be discovered.

Exploit DB maintains a great database of exploits per platform. Let’s see how two of the most popular CMS platforms around today are doing there compared to other, less popular choices. WordPress had a whopping 982 total entries at the time of writing this article, Joomla (a similarly popular but notoriously insecure platform) had 1,152 entries, while less popular platforms like, for example, Umbraco (the one I’m working with) had 1, Concrete5 had 16 and ModX had 15.

This does not make popular CMS platforms less valuable – it just indicates that, if left unmaintained, they will have higher chances of being exploited, hacked, defaced and lots of other terms generally meaning “more money to spend on repairs”.

The problem with updating a CMS in order to secure it often lies with third-party add-ons that may not follow your platform’s update path. It is common for popular CMS platforms to have a large number of add-ons (called plugins or modules or packages or extensions, depending on the platform) made by third parties, and some of them break when the CMS is upgraded to a newer version, or even become the starting point for exploits in the first place.

At the time of writing this post there were 47,956 WordPress “plugins”, 859 DNN “modules”, 7,288 Joomla “extensions”, over 1,500 Typo3 “extensions”, 36,031 Drupal “modules”, and over 1,000 Umbraco “packages”, just to give you a feeling of the sizes we are talking about.

If you go with a popular platform, or one that is widely known to often be the target of hackers, you should ensure that your site is developed in a safe manner, its add-ons are chosen very carefully, and that it is maintained correctly (either by your agency, your web host or a person you will hire for that). Alternatively, you can switch to PaaS or SaaS solutions, for example WordPress.com hosting for WordPress or Umbraco Cloud hosting for Umbraco, and leave site maintenance to the experts (at a cost). Even Wix is considered a SaaS solution (with the restrictions mentioned elsewhere in the article).

“I spent a week doing data entry”

No matter how much you pay for your CMS and/or development, your content is what is most valuable to you and what you are going to be maintaining and expanding for years to come. You must be absolutely sure that you really own your content and that you can have it exported in a way that will allow you to reuse it with minimal cost if you need to.

Wix, for example, is a SaaS platform that provides a very nice (and cheap) way to have a site up and running in virtually no time – but at a price. Your content is “tied” to their platform and cannot be exported or transferred elsewhere.

 

“It’s OK, but we may need to have more in the future”

Let’s say that the only thing you want done today is have your website built as soon as possible. What about tomorrow?

Often a website needs to get expanded with functionality that was not predicted or planned from day 1. This may include importing data from third-party sources, integrating feedback forms with CRM applications, adding e-commerce capabilities etc. If you have only your website in mind today, you may choose a platform that is hard (or expensive, or both) to extend in the future.

For example, WordPress has an abundance of plugins that make it integrate with third-party systems, and it’s relatively easy to have developers write some additional code to do so. But, if your long-term goal is to use the same platform for your intranet and include SSO capabilities for Windows Domains, then DNN is probably the way to go.

Let’s also not forget that a CMS today is not what a CMS was 10 years ago. A website on a desktop PC or laptop is only one way of presenting information. Your site must be ready for mobile (tablets, phones), and your data should be ready to be accessed by native applications. Most CMSs today solve the mobile problem by either letting you implement the responsive/grid layout of your choice or already using one for you (although how they allow you to form your content using WYSIWYG editors and how they provide decent previewing varies greatly). How you can expose data to be consumed (and updated) by native apps, though, is another issue.

If you are primarily interested in having your data consumed by third-party apps / native mobile apps, then a different set of priorities needs to be introduced:

Does the CMS of your choice feature an API that is easy to use?

Although almost every CMS today is advertising an API, not all APIs are equally mature. CMS platforms with a rich add-on ecosystem usually feature the more mature APIs, since those facilitate add-on development. A generic API may not always be useful for exposing your data to other apps, but it’s a first step towards that.

Does the CMS of your choice provide a REST API?

Some CMS platforms allow you to easily create your own REST APIs, while others provide them out of the box (and allow you to extend them). If you need to make your data available anywhere outside the confines of the CMS, your best choice would be a platform with a mature REST API. Thankfully, all popular platforms provide that in one way or another. WordPress, for example, has multiple plugins that provide REST API functionality. Umbraco has a REST API developed internally. Joomla features a REST API in the form of an extension.

The factor that you should pay attention to, however, is how complete the REST API is. The less you need to extend it yourself the better.
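
As a quick illustration of what consuming such an API from outside the CMS looks like, here is a minimal sketch against the stock WordPress REST API (endpoint paths assume a default installation; other platforms expose different routes):

```typescript
// Minimal sketch: reading CMS content over its REST API from any client.
// Uses the default WordPress REST API routes; other CMSs differ.
interface WpPost {
  id: number;
  link: string;
  title: { rendered: string };
}

async function fetchLatestPosts(site: string, count = 5): Promise<WpPost[]> {
  const res = await fetch(`${site}/wp-json/wp/v2/posts?per_page=${count}`);
  if (!res.ok) {
    throw new Error(`CMS API request failed with status ${res.status}`);
  }
  return (await res.json()) as WpPost[];
}

// The same data could just as easily feed a native mobile app or another system.
fetchLatestPosts("https://example.com").then(posts => {
  for (const post of posts) {
    console.log(post.title.rendered, "->", post.link);
  }
});
```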

For CMS platforms with an add-on ecosystem, a critical factor for your decision is how many of the add-ons you are probably going to use will work well with the existing REST API.

For example, Gravity Forms, a very popular WordPress plugin, does not implement the WordPress API in a standard way and instead provides its very own API, which can lead to a lot of work if you need to work seamlessly with WordPress and Gravity Forms in a unified, RESTful way.

Should you consider an API-first CMS instead of a page-oriented one?

This is the toughest question that you may have to answer. If your primary goal is providing your data to third-party apps, then an API-first CMS like, for example, Contentful or Rooftop (which, by the way, uses WordPress as its back-end and manages to solve the WP-Gravity Forms API integration problem we talked about above), is definitely the choice to go for.

What an API-first CMS offers as an advantage is the total separation of the data and presentation layers, meaning you have an “engine” that can manage your data regardless of where they are eventually consumed. This can be a blessing or a curse, since it’s up to you to decide which technology to use for a web front-end (which is treated like any other app that consumes its data), while you may have to deal with potential limitations imposed by the number of SDKs available.

“We work with Java, but what’s wrong with launching a PHP-based website?”

Let’s suppose your organization heavily depends on Azure, Office 365 and Active Directory. Why on earth would you select, for example, Django CMS as your platform? Although it is a fine CMS, its technologies will be far outside your organization’s scope and internal expertise, and you would have to resort to third parties for every single issue introduced during the lifetime of your web site. You might do that anyway (see the next section), but you have no way to evaluate results if the technologies used are alien to you. Let alone integrate your web site with other things.

This is a highly subjective point of view and you may totally disagree, but my belief is that the CMS you choose to power your web presence should be in harmony with the other technologies already being used in your business, since this opens up a lot of options for its evolution later. Unless, of course, you’re using an API-first, cloud-hosted CMS, around which you can build additional services.

“We’ll extend it internally”

So you’ve got a couple of IT people that are familiar with some Web technologies and you think that it would be cost effective if you selected a CMS platform that utilizes the technologies they already know so that you can maintain it and extend it internally.

I don’t even have to prove why this is fundamentally wrong, but let’s say that it is analogous to having your graphics designer paint your house.

Conclusion

Your decision on the CMS platform you should use for your site is not an easy one, and it should not be left in the hands of the agency you intend to hire just because “that’s what they’re working with”. It’s easy to be impressed with the design and visuals and forget there’s an “engine” that powers your website behind the scenes, but that engine is the most important aspect of the whole construct, since it’s the one that will restrict you or enable you to do more when that time comes.

You should have a long-term plan about how you want your web site to evolve, what you expect it to cost you over a specific period of time and how you are going to tackle challenges like security and extensibility. There is no globally right answer; it all depends on what your own main objectives are.

Being conscious about the technology, the platform, its pros and cons and its features will only benefit you in the long term. If you feel you don’t have the technical knowledge or the time to make such a decision, you should hire an expert consultant who will take all parameters into account and suggest the best platform for your own needs. Whatever you do, though, for heaven’s sake don’t buy a website at the price you would buy a new pair of jeans just because “that’s all you need”. You’ll end up paying for a whole new wardrobe really fast.

 


Do-it-yourself NLP versus wit, LUIS, or api.ai

 


 

Alex and I have been building bots for about 1.5 years and have talked to hundreds of bot devs through our BotsBerlin meetup, which now has over 1,000 members. Something we get asked a lot is whether it’s worth investing in building your own NLP engine, or whether it makes sense to use a third party service like wit.ai, LUIS, or api.ai.

What does a chatbot’s NLP engine do?

Let’s say you’re building a restaurant bot. These tools will help you take a sentence typed by a human and turn it into structured data, for example:
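
For instance, a message like “I’m looking for a cheap sushi place near the office” might come back as something along these lines (the field names are illustrative, not any particular vendor’s schema):

```typescript
// Hypothetical structured output from the NLP engine for the message above.
const parsed = {
  intent: "restaurant_search",
  entities: {
    cuisine: "sushi",
    location: "near the office",
    price: "cheap",
  },
  confidence: 0.85, // most engines also report how confident they are
};
```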

 


 

Do you build yours or use third-party tools? Let us know in our DE Survey.

Structured output like this is something computers can actually work with, and you can pass it on to the business logic of your bot. For example, you would probably query the Foursquare API and fetch a list of restaurants. If there are some popular restaurants matching those constraints, you would probably suggest those to your user. If not, you might suggest a Chinese restaurant instead.
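
Sketching that branching logic (with a made-up searchVenues stub standing in for the real Foursquare call):

```typescript
interface Venue {
  name: string;
  rating: number;
}

// Stand-in for a venue-search API such as Foursquare's; the real thing would
// be an HTTP request. This stub only illustrates the control flow.
async function searchVenues(cuisine: string, near: string): Promise<Venue[]> {
  return []; // pretend nothing matched the constraints
}

async function recommendRestaurant(cuisine: string, near: string): Promise<string> {
  const venues = await searchVenues(cuisine, near);
  const popular = venues.filter(v => v.rating >= 8);
  if (popular.length > 0) {
    return `How about ${popular[0].name}? It's highly rated.`;
  }
  // Nothing popular matched, so suggest an alternative cuisine instead.
  return "No luck with that, how about a Chinese place instead?";
}

recommendRestaurant("sushi", "near the office").then(console.log);
```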


Foursquare has already done the hard work of finding matching restaurants, so the trickiest part of building this MVP is finding a way to generate structured data from natural language. The great thing about tools like wit, LUIS, and api.ai is that they make this part so easy that you can build an MVP like the above in an afternoon. In our experience, 3rd party tools are an excellent way to build quick prototypes. You could just as quickly build a bot to find videos with the YouTube API, or products from Product Hunt.

Reasons to do it yourself

If your restaurant bot is a runaway success, you will inevitably want to become independent. We see that the more advanced bot teams are all developing their own NLP. Data from the Developer Economics surveys, which polled the opinions of thousands of developers interested in chatbots, are pointing towards a democratisation of chatbots through open source projects (there’s a live survey out now if you want to contribute to this knowledge pool).
Here are three real-life examples of why people switch.

API constraints

databot was a Slack app we built at the start of 2016. Databot would connect your data warehouse to your Slack, so you could ask

what was the ROI like for October’s facebook ads?

and databot would generate the corresponding SQL query and answer your question.

We started off using wit.ai, which would always default to guessing that October referred to the following October, not the previous one. So we had a lot of fun with our date library building a workaround. Of course wit could add a feature to let you customise this default, but that’s missing the more general point: if you use an API, you have to live with someone else’s engineering decisions, and that friction tends to grow as your project matures.
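
A simplified sketch of that kind of workaround (not the actual databot code): resolve a bare month name to its most recent past occurrence instead of the next one.

```typescript
// When a user says just "October", assume they mean the most recent October
// in the past, not the upcoming one.
function resolveMonth(monthIndex: number, today: Date = new Date()): Date {
  const year =
    monthIndex <= today.getMonth()
      ? today.getFullYear()      // that month has already started this year
      : today.getFullYear() - 1; // otherwise it must be last year's
  return new Date(year, monthIndex, 1);
}

// Asked in February 2017, "October" should resolve to October 2016.
console.log(resolveMonth(9, new Date(2017, 1, 15))); // 2016-10-01
```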

Data ownership

We talked to a startup building a commerce bot, specifically one which lets you look for presents for friends and family and find good deals, e.g. “my sister likes running and craft coffee and I want to spend around $30”. For them, gathering the data around people’s purchasing intentions is core to the value of their business, and they want to make sure it belongs to them. Moreover, for privacy-sensitive verticals like insurance, health, and banking, sending every message to a 3rd party is not an option; users and businesses just aren’t comfortable with it.

Performance

AdmitHub is an education startup. This team actually has one of the most technically advanced NLP modules I’ve seen; it can recognise thousands of intents. Their bot helps university students by updating them about events and deadlines, and can answer questions ranging from “when are housing applications due?” to “can I have a salamander in my dorm room?”.

AdmitHub found very quickly that third party tools weren’t up to this task (they tend to optimise for the small data use case, performing well even when a developer is getting started and there are only a few examples). Most also failed to handle misspelled words, which are common when chatting with teenagers. While simple bots are generalizable, sophisticated bots are all complicated in their own way. Every algorithm has trade-offs, and a one-size-fits-all approach can let you down when your use case becomes more advanced.

Bonus: Control your own fate

Ultimately, technological independence is compelling for many teams. It’s great to use free tools developed by big tech companies, but they may not stay free (Microsoft have started charging for LUIS) and they may disappear with little notice (like Parse did).

The rise of do-it-yourself NLP

{wit,LUIS,api}.ai are wonderful tools that make prototyping very quick. But from talking to dozens of bot teams, I’m convinced that everyone will eventually become independent. Early indications from the state of AI survey are that virtually all businesses are uncomfortable relying on APIs for their AI, and that doesn’t surprise me given the examples I’ve just talked about. The engineering case is that web APIs just aren’t the solution to every problem in programming. The business case is that you really want to own your data and be independent.

In 2017 we will see the bots that have traction moving away from 3rd party NLP services. The biggest drawback, until now, has been the engineering investment and machine learning talent required to build a custom NLP engine. It makes no sense for every bot team to reinvent the same things, so at LASTMILE we decided to open source ours. You can find out more at rasa.ai

 

Are you involved in ML and/or AI? Take the Developer Economics Survey and shape the future of ML/AI development.

Categories
News and Resources Platforms

Google announces new hardware and “Actions on Google” platform

Welcome to DeveloperEconomics’ weekly news roundup. In this edition Google announces new hardware and “Actions on Google” platform, Apple and Deloitte team-up for enterprise solutions and HTC’s Viveport VR app store goes live globally. Read on for the full news rundown.

 

Google announces new hardware and “Actions on Google” platform

 

Google has launched two new premium smartphones under the Pixel brand, a Daydream VR headset, a new WiFi router and a 4K Chromecast. It has also announced dates and pricing for the previously announced Google Home speaker. On top of this, December will see the availability of a new ‘Actions on Google’ platform, letting developers build for Google’s new Assistant in Allo, on Google Home and, exclusively, on Pixel phones.

 

Google combines services under ‘Cloud’ brand

 

Google has created a new umbrella brand for its cloud services. Google Cloud Platform, Google for Work and Google Apps for Work – the latter itself being rebranded as G Suite – now all fall under the newly created Google Cloud brand. Google said the decision to rebrand underscores how seriously it takes enterprise services.

 

Android Wear 2.0 delayed until 2017

 

The release of Android Wear 2.0 will be delayed until 2017, Google has announced. The release, originally scheduled for this autumn, was pushed back to allow Google to collect more feedback and fine tune the software. Google has instead released the third developer preview of the OS, which includes Google Play on Android Wear.

 

Genymobile announces cloud-based Android platform

Genymobile has announced a new cloud platform to help enterprises build and test Android applications. Genymotion Cloud features support for Jenkins and Bamboo, along with support for testing frameworks such as Robotium, Appium, Espresso and Calabash. The platform also features virtual device sharing, live demos and app sharing for cross-company collaboration.

 

HTC’s Viveport VR store goes global

 

HTC has launched its official store for the Vive VR headset. Viveport is launching in 30 countries, with around 60 titles covering categories such as education, design, art, social, video, music, sports and health. The store is currently highlighting content from the likes of Everest VR, The Blu, Google Spotlight Stories and Stonehenge VR.

 

Occipital launches $500 VR dev kit for smartphones

Start-up Occipital has released a dev kit that offers room-scale motion tracking for iOS and Android phones. The $500 kit uses Occipital’s Structure sensor, which has already been used on smartphones to create 3D meshes of environments. The kit includes a Structure Sensor, custom faceplate, phone case and 120-degree wide vision lens.

 

Codenvy partners with Bitnami for “one click” cloud stacks

 

Codenvy and Bitnami have teamed-up to offer “one-click” programming stacks for common frameworks. The stacks integrate the Che cloud IDE and workspace server with Bitnami stacks, allowing devs to instantly access Dockerized workspaces and removing the need to set up and configure IDEs and frameworks before writing code. Supported frameworks include Express, Swift, Play and Rails.

 

Waratek enhances Java app security with RASP

Waratek has released a new version of its AppSecurity platform for Java apps. The release lets developers modernise the security capabilities of older Java apps with a RASP plug-in that eliminates the need to replace existing Java Runtime Environments. Waratek adds that its virtualisation-based architecture avoids the performance penalties associated with other RASP products.

 

Oracle loses appeal against Google in Java battle

Oracle has lost its appeal against Google, in the long-running legal battle over whether Android infringes on Java copyrights. This latest appeal concerns whether Google failed to disclose its intent to develop tools to run Android on the desktop using the Android App Runtime for Chrome. A District Court Judge denied the motion, saying it had “no consequence with the defined scope of our trials.”

 

Apple and Deloitte announce iOS partnership

 

Apple has teamed-up with Deloitte to help companies get to grips with the enterprise features of iOS. The partnership involves a “first-of-its-kind” Apple practice with over 5,000 strategic advisors, who are focused on helping businesses take advantage of the iOS ecosystem. The deal will also see Deloitte offer native app development services for ERP, CRM and HR departments.

 

Skymind raises $3m for Java deep-learning library

 

Skymind, which offers an open-source deep-learning library for Java, has raised $3 million from investors including Tencent, SV Angel and Mandra Capital. The start-up aims to build a library that lets Java developers work on deep learning for AI. Skymind says its libraries have been downloaded 22,000 times in the last month alone.

Sign up for our weekly newsletter, with the latest facts and insights on the app economy.

Categories
Platforms

Using Bash in Windows – today


“… However, when we talked with web developers, they still struggled with using Windows as their primary devbox.”

The above quote is from Kevin Gallo, VP of the Windows Developer Platform, and was delivered around the 0:38 mark of his presentation in Microsoft’s Build 2016 keynote. He then continued with the observation that “… many of them have workflows which rely on open source command line tools, scripts and frameworks”, and finished with a slide that his audience was – at first – slightly unsure how excited to get about: Bash is coming to Windows.

Screenshot #1: Kevin Gallo’s slide from Build 2016 announcing Bash coming to Windows

If you let the video play for another 7 seconds, you’ll also catch a glimpse of Gallo’s audience. The emotions on their faces paint a picture that perfectly explains Microsoft’s complex (and sometimes tumultuous) relationship with Linux and the open source world. Three people are smiling excitedly and beginning to slow clap (the ones who suddenly realise how much easier managing their OS stack or scripting their Windows environment is about to become). Then there is the classic cautious indifference of the majority of developers, who wait to see whether this is “worth getting excited about”. Finally, you can detect some unguarded annoyance from the fanboy crowd (“Seriously? I have to sit and hear about Bash? What’s wrong with PowerShell?”).

Personally, I belong to the first group. Despite working with open source technologies since the beginning of my professional career back in 2003, I’ve never managed to move away from Windows. To that end, when I saw Rich Turner and Russ Alexander casually running apt-get install git on Windows, I was excited. A lot.

But until the functionality showcased in the video above is mature and stable enough to be rolled out, I’ll continue using my current workflow, which has served me faithfully since 2011: Bash on Windows (to be precise, a more “cut down” version of Bash; read on for details).

The challenge: Production-strength command line workflow in Windows.

One might argue that Windows was never meant to be “driven” from the command line.

Microsoft tried to mitigate this back in 2006 by rolling out PowerShell, a shell and scripting language that gives users full access to their whole Windows environment. For Windows devs this was a great extra tool, but for all other developers it was still not enough to lure them away from the power and versatility they found on the Linux command line.

Add to this the strongly opinionated naming conventions and approaches that PowerShell inherited from the .NET Framework (did you know that cd is just an alias for the “proper” command, Set-Location, much as ls and dir are aliases for Get-ChildItem? That’s Pascal case _and_ a dash, autocompleting with tab even if you type it in lowercase. Strange stuff) and you can see why it’s really hard for, say, a PHP developer to consider it for their dev workflow.

When every single blogpost or article or tutorial written about a subject, e.g. “how to rebase branches in git”, includes instructions and screenshots that clearly demonstrate the flow in a Linux shell, it’s only natural for the developer to assume that this is the correct way of doing things.

Towards a solution: Install Git for Windows

For my frontend-with-a-bit-of-PHP-but-from-a-Windows-OS workflow I always relied on certain “battle proven” tools. WinSCP was the weapon of choice when files needed to be moved from one place to another (via FTP, SFTP, SCP or even rsync). PuTTY allowed me to connect via SSH to all my dev boxes. TortoiseGit ensured that I could use git directly from my Windows Explorer interface.

The first “lightbulb / aha” moment for me occurred when I installed Git for Windows after being prompted to “try it out on the command line” by a colleague.
One of the steps of the install wizard prompts you to choose “How would you like to use Git from the command line?”:

Screenshot #2: Choosing how to use Git for Windows

… and it mentioned “Bash”!

Installation completes and suddenly I get a shell in Windows that looks suspiciously similar to what I’m used to on Linux or macOS installations:

Screenshot #3: MinTTY terminal emulator window

Bash in Windows: How it works

Kudos to the awesome devs who worked to bring Git to Windows – https://git-for-windows.github.io/.
In essence the installer sets up a Unix-like shell environment (MinGW – “Minimalist GNU for Windows”) which, very roughly speaking, provides the Unix layer that shells like Bash can run on.
A terminal emulator called MinTTY is also installed (shown in screenshot #3 above); it is a Windows program that runs the Bash shell, which in turn lets you use quite a good subset of the Linux commands needed for an open source dev workflow.
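
If you are curious what you actually get, a quick way to find out is to open MinTTY and poke around. A minimal, illustrative session (the exact output will differ from machine to machine):

uname -a          # reports a MINGW/MSYS flavour rather than plain Linux
git --version     # the git build that ships with Git for Windows
echo $SHELL       # typically /usr/bin/bash in this environment
which bash ssh grep awk sed   # the GNU userland tools now on your PATH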

Looks are important

… especially if you are an ex-designer-turned-frontend-developer. Going from the black and white severity of cmd.exe (where you could not even resize the window to the dimensions you wanted) to MinTTY definitely boosted my “developer happiness” feeling:

Screenshot #4: MinTTY terminal emulator window

In the above example, I manually mapped the colours from the famous Solarized colour theme to the default 16 ANSI colours. For the font I chose the crystal clear Consolas font set at 12 point, although I’ve recently been experimenting with Adobe’s Source Code Pro as an alternative.
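
For reference, MinTTY reads its settings from a plain-text ~/.minttyrc file in your home directory, so a look like the one above can be captured in a handful of lines. The excerpt below is only an illustration, not a complete theme; the option names are MinTTY’s own, and the colour values are the standard Solarized background and foreground in R,G,B form:

# ~/.minttyrc (illustrative excerpt)
Font=Consolas
FontHeight=12
BackgroundColour=0,43,54
ForegroundColour=131,148,150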

The MinTTY window can be resized to any dimension of your choosing. You can also use the same shortcuts as you use in the browser to resize the text on the fly (CTRL+plus, CTRL+minus or CTRL+mouse wheel). Finally you can launch as many instances of MinTTY as you want, enabling you to lay out a series of windows into your codebase and file structure, exactly as it suits you:

Screenshot #5: Multiple instances running at the same time at different dimensions and font-size

I can now do {{thing}} from the command line

The list below demonstrates just a small subset of the stuff you can do with Bash in Windows that I found particularly useful and / or helpful.

  • Git
    No more “download and unzip”. Git clone any repo of your choosing into any directory in your filesystem. The handy “Git Bash Here” shortcut that appears when you right-click any folder is particularly useful here.
  • Linux command line
    MinGW supports a subset of the various commands and programs available in Linux: awk, sed, grep and find are all here, ready to be used, along with piping and redirection (see the small pipeline example after this list). Shortcuts are also available (CTRL+U, CTRL+K for inline editing, CTRL+R to search Bash history, etc.).
  • SSH
    OpenSSH works right out of the box. Set up your keys using ssh-keygen (exactly the same way you would on a Linux box) and then connect to any of your machines. You can also set up an ssh-agent (exactly the way Beanstalk, GitHub or Bitbucket explain in their online tutorials) to avoid retyping your passphrase all the time; a sample sequence follows after this list. Of course ftp and scp are available as well.
  • Vim
    No more Notepad++ for me. After going through the steep-as-Mount-Everest learning curve I found that vim was the best tool for quick text edits (I’ve strongly resisted the urge to play with emacs. We’ll see).
  • Bash scripting
    The very first bash script I experimented with (and use constantly nowadays) is z: https://github.com/rupa/z. I no longer rely on lengthy cd statements such as:
    cd /some_directory/nesting/nested/my_work
    But rather do a:
    z my_work
    … and I’m immediately taken to the directory I want.
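
As promised in the list above, here is the kind of pipeline that works out of the box in this shell. The project layout and the TODO convention are made up for illustration; the commands themselves are standard:

# count the TODO markers left in a project's JavaScript files,
# listing the worst offenders first
find src -name '*.js' |
  xargs grep -n 'TODO' |
  awk -F: '{count[$1]++} END {for (f in count) print count[f], f}' |
  sort -rn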

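And here is the SSH setup mentioned above, roughly as you would run it on any Linux machine. The email comment and hostname are placeholders:

# generate a key pair (pick a passphrase when prompted)
ssh-keygen -t rsa -b 4096 -C "you@example.com"

# start an agent for this session and add the key,
# so the passphrase only has to be typed once
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa

# then connect to a dev box as usual
ssh deploy@dev.example.com
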
“You should really switch to {{enter Linux distro name here}}”

Indeed. But even if I do so, there is still a vast number of devs out there who need (or have) to work with Windows. One year ago, Isaac Schlueter (co-founder and CEO of npm, the Node Package Manager) had this to say:

Bash in Windows: this matters
If you want devs using your code, this matters

Until WSL is out … Bash in Windows

The soon-to-be-released Windows Subsystem for Linux is a brilliant (and much-needed) step forward in making the Windows environment a first-class citizen for open source development workflows. Nevertheless, there is no need to wait for Microsoft to make WSL available to everyone.

I’ve been using Bash in Windows – in my daily workflow – for the last 5 years and it’s working like a charm.
If you want to do the same, simply install Git for Windows.

Categories
Community Platforms

Angelo Kastroulis – Mobile Development Runs Deep


Developer Profile:
Angelo Kastroulis


At VisionMobile, we believe in the people behind the numbers. While it’s important to understand numbers, trends and segments, it’s equally important to understand the people who buy our products and services. This developer profile is one in a series designed to help us get to know some of the people behind the statistics.

Job title and company:
Founder, Independent Consultant at Carrera Group

Country/Area:
Florida, United States

Development Focus:
Enterprise software expert for hire. “I like doing independent work,” he explains. “There’s no enterprise baggage. You’re there to do a job, to solve a difficult problem, to help clients through something.” That’s where he likes to focus: on fixing problems, and doing so outside of a company’s culture. He continues, “I know we’re not going to rewrite this whole thing: I’m here to do one specific thing and provide some development help or architectural advice to help get you out of a jam. For six months, I can help with this antiquated technology.”

He works across multiple technologies, but focuses on the healthcare industry.

Languages used:
Kastroulis counsels against getting too caught up in language or platform fanaticism. He recommends using the best tool for a given job. That said, his go-to technologies include JavaScript (Node.js), Microsoft .NET, C, Python and a “tiny bit” of Java.

Favorite project built recently:
Kastroulis reports how he enjoys working on new projects with new challenges. His favorite project was building a high performance column-store database kernel. Another recent project was an electronic prescribing and ER discharge application for both the web and iOS devices.

Favorite tools:
As do many developers, Kastroulis prefers to use the appropriate toolset for the project – and to choose toolsets he’s most familiar with. Enterprise developers may not have that flexibility, but independent developers often do. His favorite toolset is Visual Studio Code, which works across platforms. He also uses Node.js and a lot of JetBrains tools (especially for C and Python). On a Mac he uses Sublime Text and command-line tools. Of course, for source code management he uses GitHub and Git on the command line. “I’ve worked with Amazon Web Services (AWS) and Heroku, but Azure is my cloud host of choice,” he adds. “Azure is easier to work with and it’s HIPAA compliant.”

Best developer-related advice you would give to another developer:
While it’s hard to predict the future, Kastroulis advises developers to “get an idea of where the world is headed and try to get there first.” He concedes that you may not always be right, “but follow your gut.” Take advantage of industry knowledge, and take advantage of the expertise you gain focusing in your industry (healthcare, financial, and so on).