Category: Community

Accelerating IoT development using Cloud workflow with Arm virtual hardware

Having spent significant time in the realm of embedded systems and IoT development, I’ve consistently encountered challenges related to initial setup and scaling to production. Chip selection, a pivotal aspect of this process, often involves meticulous shortlisting of microcontrollers and architectures, followed by the acquisition of development boards and prototyping to identify the most suitable chipset. Add long lead times and chip-shortage problems on top of that, and this is merely the beginning.

Once this initial step is complete, the journey extends to establishing IDEs, debuggers, test environments, and other development, testing, collaboration, and shipping tools. This process can be arduous, difficult to scale, and often discouraging. Yet, as those familiar with my work know, I’ve consistently been vocal about bridging the gap between hardware/IoT/embedded and software/cloud development, thanks to new product lifecycle management workflows and DevOps practices.

Cloud developers have long enjoyed the luxury of robust tools and streamlined software lifecycles. The ability to scale from a single server instance to thousands with a simple click, facilitated by Docker Containers, Kubernetes, and DevOps workflows like CI/CD, has been a major draw for embedded developers.

One such innovation that brings IoT developers closer to this cloud-native model is Arm Virtual Hardware (AVH) in the cloud. Let’s talk more about it in this post and see how it fits into the bigger picture, starting with:

Pain Points in Traditional Embedded and IoT Development

Developing for embedded and IoT applications involves numerous challenges, including:

  • Hardware Dependencies: Reliance on physical hardware can significantly delay development cycles, as changes often require new hardware or software modifications.
  • Long Shipping Delays and Chip Shortages: Procuring specialised hardware can be time-consuming, especially during chip shortages (as seen during the COVID-19 pandemic), hindering development progress.
  • Limited Testing Environments: Testing embedded software on physical hardware can be resource-intensive and requires specialised custom jigs, debuggers, physical access, Test and Measurement equipment and much more.
  • Integration Challenges: Coordinating hardware and software development teams can be difficult, leading to delays and potential integration issues.

How Arm Virtual Hardware Changes the Game

Arm Virtual Hardware addresses these challenges by providing a virtualised environment where developers can simulate and test embedded and IoT applications without relying on physical hardware. This offers several key benefits:

  • Faster Time to Market: AVH accelerates development cycles by enabling testing and debugging early in the process, reducing the time it takes to bring products to market.
  • Development Without Hardware: Developers can start working on their applications before physical hardware is available, saving time and resources; this is especially helpful while you are still selecting hardware or waiting for dev boards to arrive.
  • Bridging the Gap Between Hardware and Software: AVH fosters collaboration between hardware and software teams by providing a common platform for testing and integration.
  • Enabling Cloud Workflows and DevOps: With AVH, embedded teams can use most of the tools cloud developers rely on to ship products faster. Virtual hardware can be integrated into cloud-based development environments, enabling DevOps practices and continuous integration/continuous delivery (CI/CD) pipelines.
  • Scalable: Firing up 200 new virtual boards in the cloud takes around 10 seconds; compare that with testing your code on 200 new physical development boards. The scalability of Arm Virtual Hardware is phenomenal and makes regression testing a breeze (a minimal sketch of such a fan-out follows this list).
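
To make the CI/CD and scalability points concrete, below is a deliberately hypothetical Python sketch of what fanning a firmware regression test out across 200 virtual boards could look like from a pipeline job. The base URL, endpoints, payload fields, and response shape are illustrative assumptions, not the actual Arm Virtual Hardware API; consult the AVH documentation for the real interface.

```python
# Hypothetical sketch: fan a firmware regression test out across many
# virtual boards in parallel. The base URL, endpoints, payload fields
# and response shape are illustrative assumptions, NOT the real AVH API.
import concurrent.futures

import requests

API = "https://avh.example.com/v1"  # placeholder base URL
HEADERS = {"Authorization": "Bearer <your-api-token>"}

def run_regression_test(board_id: int) -> bool:
    """Create a virtual device, flash the firmware, and fetch the verdict."""
    instance = requests.post(
        f"{API}/instances",
        json={
            "name": f"regression-{board_id}",
            "model": "cortex-m55",        # assumed device model name
            "firmware": "build/app.elf",  # artifact from the CI build step
        },
        headers=HEADERS,
        timeout=30,
    ).json()

    result = requests.get(
        f"{API}/instances/{instance['id']}/test-result",
        headers=HEADERS,
        timeout=300,
    ).json()
    return result["passed"]

# 200 virtual boards, spun up and torn down in parallel -- impractical
# with physical hardware, trivial with cloud-hosted virtual hardware.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(run_regression_test, range(200)))

print(f"{sum(results)}/200 virtual boards passed")
```

A CI job would run a script like this on every commit — something that is simply not practical with racks of physical boards.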

Enabling MLOps with Arm Virtual Hardware

Running ML models on edge compute devices is one of the most common applications where Arm-based processors are deployed today. Think of smart speakers, phones, traffic lights, cameras, etc. These products and applications can benefit greatly from adopting Arm Virtual Hardware in their prototyping and testing life cycles.

Machine Learning Operations (MLOps) involves managing the entire lifecycle of machine learning models, from development to deployment. Using Arm Virtual Hardware, developers and data scientists can test their models on a close simulation of a real Arm processor and estimate the performance of different architectures and chipsets; this enables them to pick the hardware best suited to their models, saving significant cost and time in development and in bringing the product to market. Developers can also train machine learning models on virtualised Arm hardware, ensuring compatibility with target devices and architectures, enabling:

  • Testing and Optimization: AVH can be used to test and optimise models for performance and resource utilisation on embedded platforms (a small profiling sketch follows this list).
  • Deployment: Once ready, models can be deployed to physical devices with little to no adaptation using AVH as a reference environment.
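
As a concrete illustration of the testing and optimisation step, here is a minimal sketch that times the inference of a TensorFlow Lite model. The model path is a placeholder; the point is that the same script can run unchanged on each candidate (virtual) target, which is what makes comparing architectures cheap.

```python
# Minimal sketch: measure mean inference latency of a TensorFlow Lite
# model. "model.tflite" is a placeholder; run the same script on each
# candidate (virtual) target and compare the numbers.
import time

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Random data shaped and typed like the model's input tensor.
dummy = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])

start = time.perf_counter()
for _ in range(100):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    _ = interpreter.get_tensor(out["index"])
elapsed = time.perf_counter() - start

print(f"mean latency: {elapsed / 100 * 1000:.2f} ms")
```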

In Conclusion

Arm Virtual Hardware is a game-changer for embedded and IoT development. By addressing the challenges of traditional development methods, AVH enables faster time to market, improved collaboration, and enhanced flexibility. As the adoption of embedded and IoT devices grows, AVH will play a vital role in driving innovation and efficiency.

Category: Community

Building a Security-First Culture in Cloud Development

In an increasingly data-rich environment, businesses and individuals are looking for alternatives to storing and sharing information from their own networks. Not to mention that there are users who want software services that aren’t dependent on their internal systems. These elements are helping make the cloud developer landscape so rich with opportunities. Yet, when creating products to bring to market, it’s important to recognize that alongside opportunity comes risk.

Development teams in the cloud sector are subject to both internal and external threats. Adopting protective tools is certainly important here. Yet, it is the behavior of staff, the collaborations between teams, and the approach to management that really makes a difference. By building a security-first culture in your cloud development organization, you’re making your company more robust against threats.

Fostering Cross-Departmental Collaborations

Any good cloud development startup has talented development professionals and skilled security experts. Nevertheless, simply having these professionals working independently on their own tasks is not the way to a security-first culture. Meaningful collaborations make for a more holistically secure product and business.

So, how can you boost collaborations between security and development?

  • Improve cross-departmental communication: Communication is key in any collaboration. Members of both dev and security teams must find it easy to connect regularly. This may include having specific channels for joint security and development discussions, such as direct messaging groups.
  • Integrate security professionals in dev teams: One of the most effective ways of improving cross-departmental collaborations is project integration. This means that for every cloud development project, there should be at least one security professional embedded as a core member of the project team. This ensures security considerations are a meaningful part of the development process.

In addition, bear in mind that each team and its members will have nuanced preferences for collaboration. Take the time to regularly reach out to your security and dev teams to ask what they feel is particularly good or especially challenging about their collaborations. Importantly, leadership should collaborate with them on identifying the resources or protocols that can help and commit to implementing these.

Creating a Secure Environment

It’s difficult to establish a security-first culture in cloud development if the environment in which your teams operate isn’t protected. Therefore, part of your approach should be to fill any potential security gaps that could pose or exacerbate risks to the cloud development team, the work they’re doing, and the overall business.

Some elements to focus on here include the following.

Physical security

Physical access controls in the development space help ensure that nobody outside each cloud development team’s core can interact with data or assets related to its projects. However much you trust your staff, insider threats are not unusual, including when your development process involves continuous integration/continuous deployment (CI/CD) practices.

Limiting unnecessary access to sources of information is key to keeping cloud development projects secure. You might consider installing biometric security tools at certain checkpoints or providing radio-frequency identification (RFID) fobs for specific areas of the business.

Digital security

With any cloud development project, there also has to be strict control over interactions with the digital landscape of the business. One approach to this is to create network silos. By dividing the network where needed and allocating portions to teams or projects, you gain greater control over the security access to each project’s portion.

Another useful approach is to arrange for dedicated internet access (DIA) for your development teams. This involves arranging with your internet service provider (ISP) to deliver a portion of the connection specifically provisioned for the use of your business or project. This doesn’t just enable you to guarantee a certain level of reliable bandwidth. It also tends to be more secure than sharing connections with others on the network that aren’t connected to a project or even to your organization.

Establishing Cloud Security Best Practice Protocols

Another vital component of a security-first culture in cloud development is to create practical and robust company-wide policies. Some of the cloud security strategies to protect data and maintain compliance that you should outline in your protocols include:

  • The shared security responsibility: The responsibility for protection isn’t just with your security or information technology (IT) professionals. Everybody who interacts with your cloud systems, project tools, and any other data has a role in protecting these items. Clarifying this in your security protocols and staff handbook sends a message that everyone can and should take steps to make a positive difference in their day-to-day activities.
  • Utilizing data encryption: Encryption is one of the most powerful tools to keep cloud project data protected even if bad actors breach other forms of defense. Therefore, it’s important that your security culture protocols clearly outline the circumstances in which development staff should apply encryption and what tools they should use for encrypting and key sharing (a minimal sketch follows this list).
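
To ground the encryption bullet, here is a minimal sketch using the Fernet recipe from the open-source cryptography package (symmetric, authenticated encryption). The key handling is deliberately simplified; your protocols would define where keys actually live and how they are shared.

```python
# Minimal sketch: symmetric, authenticated encryption with the
# "cryptography" package (pip install cryptography). In practice the
# key would come from a secrets manager or KMS, never sit in code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # 32-byte URL-safe key
fernet = Fernet(key)

token = fernet.encrypt(b"customer record: ...")  # ciphertext + auth tag
plaintext = fernet.decrypt(token)                # raises InvalidToken if tampered with

assert plaintext == b"customer record: ..."
```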

These protocols should be well documented and readily available to all staff, perhaps stored on cloud platforms to ensure workers can access them wherever they’re operating from. That said, to be a good influence on security culture, they can’t just exist in document form and sit on your servers. Alongside giving general security awareness training, you also need to thoroughly educate staff on how to access this information and what they should and should not have stored on the cloud. 

In the onboarding phases, there should be a detailed walkthrough of each best practice, with room for questions to address uncertainty. Throughout employees’ time with the company, you should also provide regular update training on key elements of cloud security practice, particularly when tools, systems, and job roles change.

Conclusion

Building a security-first culture in your cloud development company is an effective way to make your projects more robust against threats. This involves a range of actions, from strengthening the development environment to training your staff on solid protocols. It’s also important to gain staff feedback on security practices. They interact with your systems and projects most directly and will have insights into both issues and potential solutions. It also keeps your workers a meaningful part of the security culture.

Category: Tips

How to Upskill Your Cloud, SRE, and DevOps Experts to Empower Your Organisation

Developer Nation continuously strives to bring you high-quality articles on career growth for you and your company/organisation, alongside insights, tips, interviews, and more from the developer ecosystem. This time we have the honour of hosting an article by IOD about roles, career growth, and leadership, focusing on Cloud, SRE, and DevOps experts.

This article was contributed by IOD.
IOD is seeking new tech bloggers. If you are a top-notch tech expert or a writer, join IOD’s talent network and share your expertise!

Today, every company is leveraging technology to innovate, streamline operations, and create value for their customers. Regarding software engineering, developers have a natural and prominent role in creating new capabilities and opportunities, but that cannot happen without a greater support infrastructure. Cloud, SRE (site reliability engineering), and DevOps engineers are central to value delivery and business continuity. It is vital for engineering managers to understand how they can become mentors in order to coach and upskill these experts to both enable their career growth and increase business value.

Distinctions Between Expert Roles

Modern software development teams work in a DevOps way, by bringing people with different competencies together and enabling a faster, higher-quality value delivery and development lifecycle. 

Writing the application business logic is only part of the engineering equation needed to deliver customer value. The other part, operations, includes many tasks that are mostly driven by Cloud, SRE, and DevOps experts. Good examples of those tasks are designing scalable and reliable systems, ensuring that code can be tested and deployed using continuous integration and delivery pipelines, monitoring system health, and implementing security and compliance guidelines.

Personally, I dislike the title “DevOps Engineer,” because DevOps is applicable to the entire engineering team and is a more abstract concept. SRE, on the other hand, is a concrete implementation of the DevOps philosophy—experts in an SRE role bridge the gap between developers and operations. A DevOps engineer (what I prefer to call the “automation/cloud specialist”) differs from an SRE, as they only focus on systems operations.

There are some natural derivations in lateral roles that come from further specialisation in certain technology areas such as DevSecOps engineer, chaos engineer, or cloud and solution architects at more senior levels. 

SRE and cloud specialists are crucial to the success of the product or service. Yet, they are too often disconnected from the business reality; this is where coaching and mentoring makes all the difference. 

Mentoring Is Vital to Career Growth

Engineering managers have the lead role in mentoring and coaching these experts: guiding, providing feedback, showing different career possibilities, and building bridges within the rest of the organisation. Engineering managers can act as human routers to make the connection between experts and business stakeholders, ensuring experts get first-hand knowledge and visibility on the value and end-user experience of the product(s) and service(s) that they work on. 

Similarly, managers can then demonstrate to stakeholders the positive business impact of these experts’ actions. Does the solution have a great reliability track record and always meet the agreed SLAs? Tell them about it! What would it take to enable solution architecture to scale ten-fold and be available in other geographical locations to support new business cases? Great conversation starter! 

How to Make It Happen

Unfortunately, business stakeholders usually only connect with these experts when something goes wrong (e.g., a system failure) and they need to understand what happened and why. Engineering managers can change this pattern and create a new paradigm.

Here are a few things managers can do to establish this paradigm and foster its culture:

  • Translate business and industry-specific jargon into technical concepts, examples, and terminology that experts can relate to. 
  • Help experts develop the necessary non-technical skills to communicate effectively, and translate complex engineering scenarios into simple, relatable terms and ideas with business impact.
  • Facilitate sessions where experts can present and showcase potential opportunities that new cloud and data technologies can unlock in the organisation, generating new business models or streamlining existing operations and processes.
  • Create and explore opportunities for experts to shadow and connect directly with colleagues in different roles across the organisation, such as working alongside a customer support representative or joining a sales meeting.

This enables constructive cooperation across competencies, breaking silos while helping these experts grow and gain a better understanding of their impact in the organisation.

Upskilling, Community, and Thought Leadership

There is no substitute for hands-on learning, and engineering managers have a unique role in creating those opportunities. It’s important to maintain a continuous dialog to understand the expert’s career goals and interests, while simultaneously facilitating situations that enable them to gain new hands-on experience.

Simple and small steps, such as inviting them to a steering meeting, participating in a technical brainstorming workshop, or joining a new, exciting project (even if in a minor role) can make a huge difference and impact. Venturing out of one’s comfort zone is always an opportunity to grow and learn.

Further, hands-on experience should always be accompanied by other learning and input, such as insights from other experts, industry certifications, or non-technical skills development.

Certifications and Digital Content

Consuming digital content—articles, videos, whitepapers—and pursuing industry certifications—such as those offered by AWS, Microsoft, Google, and the Cloud Native Computing Foundation—are both excellent practices for validating existing knowledge and discovering new services, expert insights, and best practices.

Each of these organisations offers certifications that range from the fundamentals to complex solution architecture scenarios, focusing on areas such as security, networking, and data engineering. When combined with hands-on experience outside of typical work tasks, content-based learning and certifications provide natural upskilling and specialisation pathways that stay with the expert even when they change jobs or companies.

There are also plenty of advantages to an organisation that has certified experts. It nudges the organisation toward good practices and ways of working, while enabling the company to level up their cloud partnership status and showcase their expertise to customers.

Technology Communities

In organisations, from SMBs to larger corporations, there is a natural tendency for individuals to become siloed in their team and/or business unit. Cloud, SRE, and DevOps are domains transversal to all development teams and organisational structures. Fostering an internal technology community where these experts can regularly meet increases alignment and promotes a healthy exchange of ideas. Moreover, it enables these experts to drive the technology governance and foster a culture of engineering excellence across the organisation.

Similarly, external communities and events are also a great way to gain new insights and fresh perspectives. DevOpsDays, ServerlessDays, as well as AWS and Azure Community Days and Meetups, to name a few, are fantastic options to learn and meet like-minded people. With practically all events now fully virtual and often free to attend, this is something that should be highly encouraged and promoted in your organisation.

Sharing Experiences and Thought Leadership

Engineers, especially less experienced ones, might be intimidated at the prospect of sharing their insight and experiences in technical articles or public speaking engagements. Regardless of the level of the content, whether beginner’s guides or more advanced deep dives, there is considerable value in creating content and sharing knowledge. Entry-level content from a Cloud, SRE, or DevOps expert can offer tremendous value to a developer or business stakeholder not familiar with the topic, and it can help bridge gaps between different competencies.

From a career growth perspective, an expert who invests time and effort in thought leadership activities—including written content and speaking engagements—is more likely to accelerate their professional growth and seniority. This is not primarily because of the positive visibility those activities bring to the expert and their organisation (though that helps!), but because they radically improve and develop valuable communication skills. Simply, with practice comes change; the more we work to translate and express complex thoughts and ideas in written and verbal content, the less subject we are to our own silos.

Conclusion

Coaching and upskilling Cloud, SRE, and DevOps experts reveals new possibilities for impacting how an organisation operates and delivers. With these experts, it’s critical that direct managers and senior leadership start seeing and treating them as essential value creators, not cost centres.

When mentoring these experts, help them understand their potential career paths and growth, and highlight the value they create and the impact they make in the organisation. Most importantly, be transparent, provide constructive feedback, and foster a psychologically safe environment that encourages them to venture beyond their comfort zone and try bringing in new ideas.

Category: Analysis

Why do developers adopt or reject cloud technologies?

In the nearly fifteen years since Amazon AWS cracked open the cloud market by releasing S3 – and changed the world by doing so – there has been huge growth in the variety of cloud solutions available for developers to use. We examine the different reasons that developers give for adopting or rejecting cloud technologies. The findings shared in this post are based on the Developer Economics survey 19th edition which ran during June-August 2020 and reached more than 17,000 developers in 159 countries.

Of the new cloud technologies which have appeared over the last fifteen years, containers have arguably had the greatest impact. With 60% of developers using this technology, the benefits are clearly widely recognised. However, with just under 30% of developers using container orchestration tools and management platforms, there is still room for this technology to develop.

In second position, with 45% of cloud developers using this technology, Database-as-a-Service (DBaaS) is also very widely used, and data storage and retrieval will continue to be an important issue, albeit in a much more sophisticated form than S3 originally offered. Cloud Platform-as-a-Service (PaaS) sits in a distant third place. A third of backend developers are using PaaS, putting this technology slightly ahead of the others we ask about, which are each used by 21–27% of developers.

[Figure: Containers are the cloud technology most widely used by backend developers]

Abstraction and simplification are two of the main drivers for the mass adoption of cloud technologies, but we can’t overlook the role that flexibility plays. Spinning up instances to cope with variable demand, creating temporary testing environments, and adding storage as required is immensely powerful. But one often-overlooked aspect of this flexibility is that developers and organisations have the flexibility to choose. They are not restricted to the expensive bare metal they bought ten years ago, and they are less constrained by monolithic purchasing processes, because, to put it simply, these decisions matter less.

In a world where infrastructure can be provisioned and destroyed at will, and where data and server configurations can be transferred easily between homogeneous systems, cloud providers have to find other areas of differentiation in order to compete. Vendor lock-in is much less of an issue for users than it once was, and the rise of the developer as a decision-maker has put even more power into their hands. Note: the adoption and rejection reasons for DBaaS and orchestration platforms come from our previous survey, fielded in Q1 2020.

“Pricing and support/documentation are most important to developers”

For every cloud technology, with the exception of orchestration tools, pricing and support/documentation are the two most important factors that developers consider when adopting that technology. For the most part, these two factors alternate between first and second place; however, pricing drops to fifth place for developers considering adopting an orchestration tool, whereas support/documentation remains at the top by a large margin. Around three in ten of these developers selected ease and speed of development (32%), integration with other systems (31%), community (30%), and pricing (29%) as reasons for adoption, with pricing around 15 percentage points lower for orchestration tools than for other technologies. On the other hand, community and scalability are generally more important for developers selecting an orchestration tool.

Much of this distinction is driven by the dominance of Kubernetes. With 57% of backend developers who use an orchestration tool choosing Kubernetes, it is the single most popular orchestration tool, and, importantly, it’s free and open source. It stands to reason, therefore, that pricing is simply not an issue for developers using Kubernetes; instead, they value the community support that helps them master such a complex tool.

Indeed, as well as pricing being much less important for these developers, the learning curve is also less important. It seems that these developers understand that they are dealing with high levels of complexity and abstraction and accept that there is a lot to learn in this space. But for those developers that want the abstraction and simplicity offered by a commercial container management system, many paid options exist, and pricing is still an important factor in this space.

[Figure: Ranking of reasons for adopting cloud technologies]

Taking developers’ reasons for rejection into account lets us view the decision-making process from the other side. Immediately, we see that pricing is the dominant factor when rejecting every technology. A closer look at the data shows the true extent of this: for DBaaS and Infrastructure-as-a-Service (IaaS), developers were more than twice as likely to select pricing as a rejection reason than the second- and third-placed reasons of support/documentation and the learning curve, respectively. Amongst the remaining technologies, the smallest difference was 8 percentage points, for developers rejecting orchestration tools.

Further down the list, there is a lot of variability between the different technologies. For example, the learning curve was the second most popular reason for rejecting IaaS, cited by a quarter of the developers who rejected it. This suggests that the learning curve for IaaS is quite steep and that this is a barrier for many developers. This is not the case for DBaaS, however, where only 15% of developers stated this as a reason for rejection.

“Suitability, feature set and performance are hygiene factors”

Suitability and feature set is of middling importance for developers choosing to adopt a technology, but for many technologies it is a more important reason for rejection. This shows that suitability and feature set is a hygiene factor: there are relatively few cases where it is of paramount importance, but many where a technology does not meet the needs and is therefore rejected.

Finally, performance sits very low in the hierarchy for developers adopting and rejecting cloud solutions. This indicates that, for the vast majority of use cases, the range of performance options provided by vendors is sufficient. This suggests that many cloud computing products are, to some extent, homogeneous, and that developers are more concerned with the ‘soft’ features, such as support/documentation, community, or learning curve. These features make for a fulfilling development experience, and in the age of the developer as a decision-maker, experience is everything.

[Figure: Ranking of reasons for rejecting cloud technologies]

What are your reasons for adopting or rejecting cloud technologies? You can let us know your reasons here!

Category: Tips

Eight must-read books for developers in 2021

What are the top books on your reading list this season? Whether you’re learning a new skill or adding depth to your existing knowledge in a particular development area, it’s always a good idea to add a few more recommendations to your list. We’ve teamed up with Packt to help you discover eight must-read books to add to your collection in 2021.

All Packt eBooks and videos are just $5! A key part of Packt’s mission is to unlock new opportunities for developers and help put software to work in new ways, and this year’s $5 campaign is designed to do exactly that.

Cloud and Admin

Azure DevOps Explained

Implement real-world DevOps and cloud deployment scenarios using Azure Repos, Azure Pipelines, and other Azure DevOps tools.

What reviews say:

“The book is very carefully walking the reader through everything you need to know to become an Azure DevOps expert. I use DevOps all the time to build and manage Business Central AL development and found the book very useful.”

Kubernetes and Docker – An Enterprise Guide

Apply Kubernetes beyond the basics of Kubernetes clusters by implementing IAM using OIDC and Active Directory, Layer 4 load balancing using MetalLB, advanced service integration, security, auditing, and CI/CD.

What reviews say:

“This book covers most of the topics when an enterprise would like to adopt Kubernetes. What’s more, you hardly can find coverage on these topics in the market!”

Coding and tools

Learning C# by Developing Games with Unity 2020

Get to grips with coding in C# and build simple 3D games with Unity from the ground up with this updated fifth edition of the bestselling guide.

What reviews say:

“If you’re serious about learning to build games in Unity your progress will be advanced rapidly if you first have a solid foundation of understanding of C#. This book explains the necessary information to start understanding and using C# to develop games in Unity. After reading this you’ll have enough context to begin tearing down other people’s code and repurposing it to build your own functionalities for your game.”

iOS 14 Programming for Beginners

Learn iOS app development and work with the latest Apple development tools. Explore the latest features of Xcode 12 and the Swift 5.3 programming language in this updated fifth edition.

What reviews say:

“The author does a good job to capture an effective, quick, and breezy reading/learning/code-along experience. The explanations are concise and easy to follow, although I would imagine a complete newbie to programming entirely might ask a lot of questions in the earlier chapters.”

Data

Learn Amazon SageMaker

Quickly build and deploy machine learning models without managing infrastructure, and improve productivity using Amazon SageMaker’s capabilities such as Amazon SageMaker Studio, Autopilot, Experiments, Debugger, and Model Monitor.

What reviews say:

“This is a comprehensive book for a data scientist looking to use the AWS ecosystem for machine learning with a focus on Sagemaker. I like the way it is organized which is practical and matches a typical life-cycle of a project.”

Data Engineering with Python 

Build, monitor, and manage real-time data pipelines to create data engineering infrastructure efficiently using open-source Apache projects.

What reviews say:

“Data Engineering With Python provides a solid overview of pipelining and database connections for those tasked with processing both batch and stream data flows. Not only for the data miners, this book will be useful as well in a CI/CD environment using Kafka and Spark. It’s very readable and contains lots of practical, illustrative examples.”

Programming

40 Algorithms Every Programmer Should Know

Learn algorithms for solving classic computer science problems with this concise guide covering everything from fundamental algorithms, such as sorting and searching, to modern algorithms used in machine learning and cryptography.

What reviews say:

“Who the book is aimed at: if you self-identify as a data scientist, serious algorithms specialist, or even the quant type, then you won’t be disappointed! If you’re just starting in the field, the author has done the hard work of selecting some of the commonly used techniques & algorithms in the field today.”

Learn Quantum Computing with Python and IBM Quantum Experience

A step-by-step guide to learning the implementation and associated methodologies in quantum computing with the help of the IBM Quantum Experience, Qiskit, and Python that will have you up and running and productive in no time.

What reviews say:

“I really like this book. It takes a step-by-step approach to introduce the reader to the IBM Q Experience, to the basics underlying quantum computing, and to the reality of the noise involved in the current machines. This introduction is technical and shows the user how to use the IBM system either directly through the GUI on their website or by running Python code on one’s own machine.”

Have you read any of these already? Leave your impressions in the comments and don’t forget to share the list with other developers in your circle!


Be a guest writer on our blog
Have you got brilliant tips and resources that developers love to read? Then we want you on our blog! Find out more.

Category: Tips

Scale Up Deep Learning in the Cloud

Deep learning is typically a long and costly endeavour, especially when it comes to training models. Many factors impact the process, but processing power, in particular, can make or break your pipeline. Today, many developers leverage graphics processing units (GPUs) to speed things up. Read on to learn how you can scale up deep learning in the cloud.

GPUs enable you to run many compute operations simultaneously. This capability can significantly speed up your model training time. While on-premise GPUs aren’t an option for everyone, there is an increasing number of cloud-based GPU options you can take advantage of.

Who Is Using Deep Learning in the Cloud?

As of Q4 2019, there were 13.3M developers working on data science, machine learning, and AI development worldwide, up from 12.2M a year earlier, based on the findings from the Developer Economics Q4 2019 survey.

Click here to help us update the figure for 2020 – take part in our latest Developer Economics Q2 2020 survey live now.

While many of these developers are working on more accessible and budget-friendly machine learning (ML) projects, deep learning implementations are also gaining traction. The obvious organizations to look to for this are the cloud providers themselves and other industry giants, including Google, Facebook, and Microsoft. Others include Snapchat, Fermilab, Disney, and Carnegie Mellon University.

However, deep learning in the cloud is also proving beneficial for many smaller organizations that would otherwise not have access to the technology. As larger organizations have increased their adoption, the breadth and availability of services has increased and the cost has gone down. This has paved the way for deep learning models to be used in everything from mobile games to evaluating credit checks.

Benefits of Deep Learning in the Cloud

Depending on the scale of your operations, implementing deep learning in the cloud can provide a number of benefits. This is particularly true for teams looking to adopt machine learning operations (MLOps) since pipelines and tooling are often already in the cloud. 

Increased scalability

One of the greatest benefits of using cloud resources for deep learning is the scalability that is possible. On-premises deployments are limited by local hardware and scaling can take significant time. In the cloud, however, you can scale as needed, temporarily provisioning hardware for particularly compute heavy tasks and scaling down during other times.

Additionally, cloud resources can provide scalability for hybrid workloads by providing burst capabilities as needed. This enables organizations to extend the value of their on-premises resources while still granting access to more performance.

Provider support for tooling

All major cloud providers offer some level of built-in support for existing ML and deep learning tools, including TensorFlow and PyTorch. This enables teams to continue working with the tools they are familiar with without limitations created by OS or infrastructure. 

Additionally, some providers offer enhancements for these frameworks, such as pre-built notebooks for faster deployment. These enhancements enable teams to leverage provider tooling or resources to make implementation processes more efficient.
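
As a small illustration of why this framework support matters, the PyTorch sketch below runs unchanged on a laptop CPU or on a cloud GPU instance; only the resolved device differs. It is a generic sketch, not tied to any particular provider.

```python
# Minimal PyTorch sketch: identical code on a laptop CPU or a cloud GPU
# instance; only the resolved device changes.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(128, 10).to(device)    # toy model, moved to the device
x = torch.randn(64, 128, device=device)  # a batch of random inputs

logits = model(x)                        # forward pass on whichever device
print(f"ran on {device}, output shape {tuple(logits.shape)}")
```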

Reduced barrier to entry

Machine learning in general, and deep learning in particular, can require significant expertise and resources to implement. Cloud providers can help lower these barriers by offering pre-built services for developing, training, and testing models. Some providers even offer ML-as-a-service, enabling teams without ML developers to leverage the technologies available.

Additionally, cloud resources can provide an easier entry point for deep learning operations. With cloud resources, you can test out methods and processes before making significant investments in hardware or tooling. You can also start small, and low risk, with cloud resources and scale up to on-premises investments once you better understand your hardware needs. 

GPUs in the Cloud

As cloud providers increase their support and options for deep learning implementations, organisations are beginning to take notice. While there are specialised providers available, the big three are where many organisations, especially those just getting started, should look.

Azure

Azure provides several choices for GPU-based instances. All of these instances are designed for high computation tasks, including deep learning, simulations, and visualisations.

In Azure, you can choose from three instance series:

  • NC-series—optimised for compute- and network-intensive workloads. These instances can support OpenCL and CUDA-based applications and simulations. Hardware available includes NVIDIA Tesla V100 GPUs paired with Intel Broadwell and Haswell CPUs.
  • NV-series—optimised for visualisations, encoding, streaming and virtual desktop infrastructures (VDI). These instances support OpenGL and DirectX. GPUs available include the AMD Radeon Instinct MI25 and NVIDIA Tesla M60 GPUs. 
  • ND-series—optimised for deep learning training and inference scenarios. Hardware available includes NVIDIA Tesla P40 GPUs paired with Intel Skylake and Broadwell CPUs.

AWS

AWS provides four GPU instance options, available in multiple sizes: EC2 P2, P3, G3, and G4 instances. With these instances, you can access NVIDIA Tesla M60, T4 Tensor Core, K80, or V100 GPUs, with up to 16 GPUs per instance.

With AWS, you also have the option of using Amazon Elastic Graphics. This service enables you to connect your EC2 instances to a variety of low-cost GPUs. You can attach GPUs to any instance that is compatible for greater workload flexibility. The Elastic Graphics service also provides up to 8GB of memory and supports OpenGL 4.3.

Google Cloud

Although Google Cloud doesn’t offer dedicated instances with GPUs, it does enable you to connect GPUs to existing instances. This works with standard instances and Google Kubernetes Engine (GKE) instances. It also enables you to deploy node pools including GPUs. Support is available for NVIDIA Tesla V100, P4, T4, K80, and P100 GPUs.

Another option in Google Cloud is access to Tensor Processing Units (TPUs). These are custom chips (ASICs) designed to perform matrix multiplication quickly, and they can provide performance similar to Tensor Core-enabled Tesla V100 instances. Currently, PyTorch provides partial support for TPUs.
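
For example, PyTorch’s TPU support is delivered through the torch_xla package; a minimal sketch of targeting a TPU might look like the following, assuming a TPU-enabled instance or runtime with torch_xla installed.

```python
# Minimal sketch: targeting a Cloud TPU from PyTorch via torch_xla.
# Assumes a TPU-enabled instance/runtime with torch_xla installed.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                 # resolves to the attached TPU core
tensor = torch.randn(2, 2, device=device)
print(tensor.device)                     # e.g. "xla:1"
```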

Conclusion

There are a number of benefits to using cloud-based GPUs. Perhaps the most popular advantage is the scalability of the cloud. Instead of being limited to local hardware, you can quickly scale up or down without incurring on-prem overhead. You can also leverage cloud vendor support, and integrate with popular frameworks such as PyTorch and TensorFlow. 

Another popular benefit of cloud is that many vendors offer resources that can significantly save time. For example, you can use cloud AutoML tools to speed up some of your processes, and test out methods without investing too much time and costs. In this case, you also reduce risk by testing out your hypothesis. In short, cloud GPUs enable you to gain a higher level of scalability, save time, and avoid on-prem overhead.

Author Bio: Farhan Munir

With over 12 years of experience in the technical domain, I have witnessed the evolution of many web technologies, as well as the rise of the digital economy. I consider myself a life-long learner, and I love experimenting with new technologies. I embrace challenges with enthusiasm and an outside-of-the-box mindset. I feel it is important to share your experiences with the rest of the world – in order to pass on the knowledge, or let other folks learn from your mistakes or successes. In my spare time, I like to travel and photograph the world. YouTube

Are you using cloud GPUs in your development? Take our survey and share your experiences.

Category: Tips

Where do ML developers run their code?

In this blog post we’ll explore where ML developers run their app or project’s code, and how it differs based on how they are involved in machine learning/AI, what they’re using it for, as well as which algorithms and frameworks they’re using.

Machine learning (ML) powers an increasing number of the applications and services we use daily. For some organisations and data scientists, it is not just about generating business insights or training predictive models anymore. Indeed, the emphasis has shifted from pure model development to real-world production scenarios concerned with issues such as inference performance, scaling, load balancing, training time, reproducibility, and visibility. These require computation power, which in the past has been a huge hindrance for machine learning developers.

A shift from running code on laptop & desktop computers to cloud computing solutions

The share of ML developers who write their app or project’s code locally on laptop or desktop computers has dropped from 61% to 56% between the middle and the end of 2019. Although a five-percentage-point drop is significant, the majority of developers continue to run their code locally. Unsurprisingly, amateurs are more likely to do so than professional ML developers (65% vs 51%).

By contrast, in the same period we observe a slight increase in the share of developers who deploy their code on public clouds or mainframe computers. In this survey wave, we introduced multi-cloud as a new possible answer to the question “Where does your app/project’s code run?” in order to identify developers who use multiple public clouds for a single project.

As it turns out, 19% of ML developers use multi-cloud solutions (see this multi-cloud cheat sheet) to deploy their code. It is likely that, by introducing this new option, we underestimate the real increase in public cloud usage for running code; some respondents may have selected multi-cloud in place of public cloud. That said, it has become increasingly easy and inexpensive to spin up a number of instances and run ML models on rented cloud infrastructure. In fact, most of the leading cloud hosting solutions provide free Jupyter notebook environments that require no setup and run entirely in the cloud. Google Colab, for example, comes preinstalled with most of the popular machine learning libraries, making it a plug-and-play place to build machine learning solutions where dependencies and compute are not an issue.
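
For example, a couple of lines in a fresh notebook confirm the preinstalled stack and whether an accelerator is attached (a sketch; the exact output depends on the runtime type you select):

```python
# Quick sanity check in a notebook: library version and any attached GPU.
import tensorflow as tf

print(tf.__version__)                          # preinstalled in Colab
print(tf.config.list_physical_devices("GPU"))  # empty list on a CPU-only runtime
```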

While amateurs are less likely to leverage cloud computing infrastructures than professional developers, they are as likely as professionals to run their code on hardware other than CPU. As we’ll see in more depth later, over a third of machine learning enthusiasts who train deep learning models on large datasets use hardware architectures such as GPU and TPU to run their resource intensive code.

Developers working with big data & deep learning frameworks are more likely to deploy their code on hybrid and multi clouds

Developers who do ML/AI research are more likely to run code locally on their computers (60%) than other ML developers (54%); mostly because they tend to work with smaller datasets. On the other hand, developers in charge of deploying models built by members of their team or developers who build machine learning frameworks are more likely to run code on cloud hosting solutions.

Teachers of ML/AI or data science topics are also more likely than average to use cloud solutions, more specifically hybrid or multi clouds. It should be noted that a high share of developers teaching ML/AI are also involved in data science and ML/AI in other ways. For example, 41% consume 3rd-party APIs and 37% train and deploy ML algorithms in their apps or projects. They are not necessarily using hybrid and multi-cloud architectures as part of their teaching activity.

The type of ML frameworks or libraries which ML developers use is another indicator of running code on cloud computing architectures. Developers who are currently using big data frameworks such as Hadoop, and particularly Apache Spark, are more likely to use public and hybrid clouds. Spark developers also make heavier use of private clouds to deploy their code (40% vs 31% of other ML developers) and on-premise servers (36% vs 30%).

Deep learning developers are more likely to run their code on cloud instances or on-premise servers than developers using other machine learning frameworks/libraries such as the popular Scikit-learn python library. 

There is, however, a clear distinction between developers using Keras and TensorFlow – the popular and most accessible deep learning libraries for Python – compared to those using Torch, DeepLearning4j or Caffe. The former are less likely to run their code on anything other than their laptop or desktop computers, while the latter are significantly more likely to make use of hybrid and multi clouds, on-premise servers and mainframes. These differences stem mostly from developers’ experience in machine learning development; for example, only 19% of TensorFlow users have over 3 years of experience, compared to 25% and 35% of Torch and DeepLearning4j developers respectively. Torch is definitely best suited to ML developers who care about efficiency, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.

Hardware architectures other than CPU are used more heavily by ML developers working on speech recognition, network security, robot locomotion and bioengineering. Those developers are also more likely to use advanced algorithms such as Generative Adversarial Networks and to work on large datasets, hence the need for additional compute power. Similarly, developers who are currently using C++ machine learning libraries make heavier use of hardware architectures other than CPU (38% vs 31% of other developers) and mainframes, presumably because they too care about performance.

Finally, there is a clear correlation between where ML developers’ code runs and which stage(s) of the machine learning/data science workflow they are involved in. ML developers involved in data ingestion are more likely to run their code on private clouds and on-premise servers, while those involved in model deployment make heavier use of public clouds to deploy their machine learning solutions. 31% of developers involved across all stages of the machine learning workflow – end to end – run code on self hosted solutions, as compared to 26% of developers who are not. They are also more likely to run their code on public and hybrid clouds. 

By contrast, developers involved in data visualisation or data exploration tend to run their code in local environments (62% and 60% respectively), even more so than ML developers involved in other stages of the data science workflow (54%).

Developer Economics 18th edition reached 17,000+ respondents from 159 countries around the world. As such, the Developer Economics series continues to be the most global independent research on mobile, desktop, industrial IoT, consumer electronics, 3rd party ecosystems, cloud, web, game, AR/VR and machine learning developers and data scientists combined ever conducted. You can read the full free report here.

If you are a Machine Learning programmer or Data Scientist, join our community and voice your opinion in our current survey to shape the next State of the Developer nation report.

Category: Platforms

Choosing the right Containers-as-a-Service (CaaS) – or not

The emergence of cloud native development and containers has redefined how software is developed. But not all organizations have the resources or expertise to set up the infrastructure required to support a containerized application. Luckily, cloud vendors offer Containers-as-a-Service (CaaS) to help developers capitalize on the benefits of cloud native development.

All three leading cloud providers have CaaS products but choosing the right one can be a challenge. While everyone has different requirements, it is always beneficial to understand what solutions others are using and why, to help inform decisions. 

Based on research from /Data’s recent Developer Economics survey, we have discovered a few factors that drive developers to choose one CaaS over another:

  • familiarity with tools and languages
  • integration with other systems
  • support and documentation 
  • ease and speed of development

While there are other factors for developers to consider when adopting a platform, the percentage of developers that considered these four factors important differs noticeably across the three leaders. Some of the other sixteen factors tracked in the research include:

  • pricing
  • community
  • learning curve
  • suitability and feature set
  • performance 
  • scalability

[Figure: Reasons for choosing each CaaS platform]

Not all CaaS platforms are selected for the same reasons, though. Developers who chose AWS Elastic Container Service (ECS) were more likely to choose it because of its integration with other systems: this was a reason to choose AWS ECS for 34% of the developers using it, compared to 29% for Azure and 28% for Google. Amazon not only has a vast array of tools and services, it also has a robust partner network. The options are so numerous that AWS has its own service marketplace and has even released a Cloud Map service to help developers discover and manage it all.

Developers tend to favor Google Container Engine (GKE) because it is easy to use and well documented. Forty-five percent of GKE developers chose it in part because of the support and documentation, and 36% because of the ease and speed of development. We consistently find that developers are happy with the support and documentation that Google provides to its developer community, and this satisfaction is an important reason for GKE users to choose the platform.

For Azure Container Service, developers like the fact that they can use the tools and languages they are already familiar with. Azure developers are 7 and 12 percentage points more likely to choose Azure Container Service for this reason than Amazon and Google developers, respectively. Our research shows that Microsoft developers are relatively brand-loyal, so Azure has made it easy for them to use Microsoft tools for container development and management: developers can build with Docker containers and Visual Studio and deploy code to Azure Container Service with a simple command. It is also possible to deploy Docker containers to Windows Servers. Finally, integration with Active Directory lets loyal Microsoft developers reuse existing authentication policies and technologies.

At the end of the day, most developers are looking for a platform that is easy to use and fits their current strategy and infrastructure, whether through integrations, support, or the ability to use the tools they are comfortable with.

Containers: Is it really a choice of either or?

While each solution has unique benefits, our analysis also found that many developers were using more than one leading CaaS, and in some cases all three. Seven percent of developers using a CaaS were using all three of the leading platforms, while 46% were using two.

[Figure: Share of developers using one, two, or all three leading CaaS platforms]

Our data verifies what you may already suspect based on your own experiences: more than half of backend developers are pursuing a multi cloud strategy, choosing not to commit to a single provider. 

There are a number of benefits to a multi-cloud solution that are driving this trend. IT organizations can avoid vendor lock-in if teams develop for a multi-cloud environment; this approach forces developers to build without relying on vendor-specific services, reducing switching costs. Multi-cloud approaches also enable organizations to optimize their infrastructure: developers and operations pros can leverage the strengths of each cloud depending on the requirements of various workloads and applications. Greater resilience is also a key benefit to consider, especially during denial-of-service attacks, where compute resources can be overwhelmed with fake requests. With a backup cloud ready and waiting, workloads can simply shift over.

The choice of one CaaS or another will become even less relevant in the future, as the leading vendors are all standardizing on Kubernetes. Amazon and Azure are promoting Kubernetes-specific CaaS offerings that focus on Kubernetes as the underlying orchestration engine, and Azure is actually migrating all its users to its Kubernetes service. With Kubernetes as the standard orchestration engine, migrating apps and containers across cloud providers becomes much easier.

We are also seeing Amazon and Azure working to make it more convenient to develop using containers and Kubernetes. Both firms offer clusterless or serverless container services, such as Fargate from AWS and Azure Container Instances. These solutions enable developers to simply deploy containers without having to worry about servers or clusters. This makes life easier for developers, but the additional level of abstraction also reduces flexibility and increases switching costs.
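
To illustrate how thin this workflow is, here is a hedged boto3 sketch that launches a container task on AWS Fargate. The cluster name, task definition, and subnet ID are placeholders you would replace with your own; everything beneath them is managed by the provider.

```python
# Hedged sketch: launch a container task on AWS Fargate with boto3.
# "demo-cluster", "web-app:1" and the subnet ID are placeholders.
import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="web-app:1",  # a task definition you have registered
    launchType="FARGATE",        # no EC2 instances or clusters to manage
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)

print(response["tasks"][0]["taskArn"])
```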

Amazon’s open sourcing of Firecracker, the micro-VM that underpins the serverless platforms Lambda and Fargate, will be another interesting development to watch. This may prove to be Amazon’s answer to Kubernetes for the serverless market. While still a ways off, it could lead to a serverless ecosystem that is just as flexible as the container landscape.

What do you think?

Do you feel strongly about the container solution you are using?

Or perhaps you have strong reservations about certain other solutions.

Let us know about it and have your voice heard by taking the Developer Economics survey.

Category: Business Community

Cloud & Desktop Developer Landscape

How is the cloud and desktop developer landscape evolving? We’ve prepared an infographic with some key insights that can help you better understand cloud and desktop development, based on our recent report on the topic. Here are some of the key insights:

  • 49% of developers are working professionally across both cloud and desktop
  • 41% of desktop developers are creating applications which never leave the browser
  • 54% of cloud developers who use advertising are making less than $500/month

Check out the Cloud and Desktop Developers infographic for more insights:

[Infographic: Cloud and Desktop Developers]

Want more insights?

Find out how you can access the full report.