We often come across the term “Enterprise Developers” and, even more often, we ask ourselves questions about their profile, from basic demographics to purchasing decisions. But who is an Enterprise Developer? SlashData defines large enterprise developers as those who work for organisations with between 1,000 and 5,000 employees, and very large enterprise developers as those who work for organisations with more than 5,000 employees. Non-enterprise developers are those that remain.
How many enterprise developers are out there?
In Q1 2021, SlashData estimated that there were 1.5M enterprise developers working in large organisations and 2.0M working in very large enterprises. Between Q1 2020 and Q1 2021, the large enterprise population grew 21% year on year, while the very large enterprise population increased by 18%. This is impressive growth, considering that the non-enterprise population showed only 10% growth over the same period. However, for the large and very large enterprise populations, the immediate picture is one of stability: in the last six months, they have remained practically unchanged. Considering that the biggest tech companies (an important proportion of very large enterprises) have benefited from the economic situation caused by the pandemic, it is somewhat of a surprise that the very large enterprise population hasn’t increased. Perhaps there is a backlog of hiring splurges that has yet to show up in the data, or these companies are remaining cautious and waiting to see how the pandemic plays out before committing to hiring more employees.
Where in the world are they located?
If we break down the enterprise populations by region, we find that 22% of software developers in South Asia are very large enterprise developers. These enterprises are likely using their global connections to capitalise on price differences: the median per-hour cost for Android development in India, a large hub for app development, is $30, while in North America the rate is five times as high. For example, SAP, the largest non-American software company by revenue, has based its largest R&D lab outside Germany in India. In absolute terms, most developers working at very large enterprises are to be found in North America (27%) and Western Europe (26%). South Asia takes third place, with 17% of all very large enterprise developers working in this region. North America, Western Europe, and East Asia are the stage for large enterprises, hosting 28%, 24%, and 17% of the large enterprise population respectively.
Which sectors or industries are they working on?
Most developers are involved in web apps: more than 6 out of 10 developers work in the sector. However, developers are often involved in more than one sector. In fact, around three-quarters of developers are involved in more than two sectors, while around 15% engage in five or more. Beyond web apps, the backend sector dominates enterprise developers’ involvement: 56% of very large enterprise developers and 51% of large enterprise developers work in this sector, compared to 45% of non-enterprise developers. Enterprise businesses typically have more sophisticated needs that warrant engaging backend developers. With the myriad of resources at their disposal, enterprises are more easily able to develop large-scale, connected products with complicated backends (IoT devices, for example). Backend developers are required to achieve these goals.
The opposite trend is observed in mobile apps: 40% of non-enterprise developers work in this field, while just under a third of enterprise developers do. The relatively small scope and scale of mobile app development make small-business engagement in this sector achievable. For enterprise companies, a mobile app or a web app is only one small part of a giant software machine; but for smaller organisations, the mobile or web app might be the entire product. Web apps, backend services, and desktop apps, the top three sectors in which enterprise developers are involved, deserve to be explored in greater detail.
Attitudes and engagement
Open source has become a ubiquitous part of developer culture, embodying the widely venerated values of sharing code, knowledge, and best practices among peer developers. These days, many open source projects are vendor or corporate owned and maintained. Developer attitudes towards contributing to these open source projects differ, but across the non-enterprise/large enterprise/very large enterprise divide these differences are subtle. Large enterprise developers are slightly more likely to contribute to corporate-owned open source to improve their positions in the software food chain: 17% are motivated by the desire to get noticed by the company. Meanwhile, very large enterprise developers are slightly more likely to buy into the company ethos: 20% of developers working at these organisations wish to contribute to something that has the company’s backing to succeed.
Non-enterprise developers tend to favour reasons relating to improving their skill set: 47% wish to learn to code better, compared to 42% of large enterprise developers. Around one-fifth of all developers are united by the desire to build community support, while just under a third of all developers advocate for a more open software society, contributing to corporate-owned, open-source projects because it’s “something bigger than me”. However, moving from attitudes to practical economic engagement within an organisation, the differences between large, very large, and non-enterprise developers become more pronounced.
“Those who control the money”
There are many tiers of enterprise developers, ranging from those high up in the business hierarchy who make or influence purchasing decisions down to developers who are not involved in these selections at all. The breakdown of developers according to economic decision-making power within an organisation differs across the non-enterprise/large enterprise/very large enterprise divide.
Just over a quarter of large enterprise (27%) and non-enterprise developers (26%) are economic decision makers for their team or company, with the ability to make the final selection decision for tools, approve expenses on tools and components, and/or approve the overall team budget for tools. Not all of these economic decision-makers can approve budgets or expenses, but whether they are decision-makers for their team or company, they are all empowered to make tool selection decisions.
Comparing these segments to very large enterprises, where only 18% of developers can make purchasing decisions, demonstrates that decision-making power tends to concentrate at the top of the hierarchy in larger organisations. In agreement with this analysis, 31% of developers who work at very large enterprises are entirely removed from purchasing decisions; this drops to 25% and 17% for large and non-enterprise developers, respectively.
So are you an Enterprise Developer? Did you see yourself fitting in the data above? Then you probably are. If you want to help the ecosystem grow, you can join our community of software creators.
As the technology advances and becomes mainstream, more enterprises are now incorporating virtual reality into their project planning after years of development. When talking about VR technology, we often assume it is associated with entertainment and video games. However, VR’s continued popularity makes it beneficial across all industries. Experts expect the VR market to grow from $6.3 billion in 2021 to $84 billion by 2028.
At first, many were skeptical about how VR would impact the business world. But the arrival of tech giants like Google, Facebook, HTC, and Samsung on the VR scene demonstrated that this technology could be the future. Enterprises now see that adopting VR technology is a helpful way to create innovative and sustainable workplaces. This article will tackle the top 5 reasons enterprises should adopt virtual reality.
Why Should Enterprises Adopt Virtual Reality?
Enterprises see a lot of possibilities in VR, including the ability to revolutionise work processes, provide customers with world-class experiences, train staff members, and more. VR technology lets a person perceive and engage with an immersive virtual environment. To further understand, here’s a deep dive:
Immersive Training in Virtual Worlds
Training your employees using VR technology is an excellent way to let them gain experience safely. The ability of virtual reality to create immersive real-life scenarios allows employers to take risks in training exercises, as there are no real-world repercussions. When something goes wrong, all you have to do is push the reset button.
The 3D simulation of VR technology helps employees be more engaged and participate actively in training. Startups, for example, are prone to cyber threats, and rehearsing data-loss-prevention scenarios through simulation lets teams practise security procedures without any real risk.
VR training is already being applied in the medical field, where surgeons use simulations to prepare for complex operations on children. Even Walmart uses simulation training to prepare employees for Black Friday. Regardless of industry, VR technology offers an engaging, cost-effective platform for training employees.
Effective Prototyping
Developing products can be costly, time-consuming, and risky, as it involves a lot of processes and conceptualization without any assurance of success. Using VR technology can change all of that.
Virtual reality allows enterprises to utilize these immersive real-life scenarios to visualize and design their products. It saves time and money by providing a set of approaches and tools you can use to explore ideas and test the product with less resource usage, as you are working on an experimental model. Using VR technology can also help avoid complications, as you can easily modify and redesign your products whenever you detect problems.
Improved HR Practices
VR’s seamless, universal connectivity could fundamentally alter how we manage human resources. With access to workers worldwide, hiring procedures will change to ensure employers choose only the best candidates. Prospective candidates can virtually shadow their possible roles as part of the interview process to understand their daily duties. Additionally, VR can be adapted to a range of interactive HR workshops.
Using VR technology can make the recruitment process easier. Large and small-scale companies alike can utilise it by creating virtual remote offices to interview applicants, hear their responses, and observe their body language.
Redefined Marketing Strategies
By implementing the “try before you buy” idea, businesses can transform advertising and enhance customer engagement. VR services can improve the shopping experience and potentially increase the number of visitors to e-commerce sites. With VR, customers can visually “try on” products to see if they match their needs: no more buying the wrong size or style. And when you are ready to make the purchase, you’ll be able to do it virtually.
Volvo has already executed this idea by offering a VR test drive on your phone. Lacoste followed suit with an AR mobile app that lets customers try on shoes virtually. Moreover, adding VR to your marketing strategy can give you an advantage that helps you stay ahead of the competition.
Time-Efficient and Cost-Effective
Time is money, and you may save both with virtual reality services. You can more effectively create and test your products using VR for prototyping, reducing the need for post-production testing. You can save money on corporate travel expenses and promote employee safety by holding meetings virtually.
Bottom Line
There are boundless possibilities in using VR technology for your enterprise. Top industries already consider virtual reality part of their operations, which is enabling its rapid growth. Despite being a new concept for many, it is safe to say that VR technology is the future of innovative business solutions.
Virtual reality technology introduces new ways to advance corporate objectives as it develops. Now is the time for your company to establish itself as a pioneer in using this technology, as it will sooner or later gain widespread acceptance among the populace. Those left behind will find it challenging to catch up. But if your company joins in right away, it can quickly position itself ahead of the curve.
In our most recent (Q3 2022) Developer Nation survey, we took a close look at the way developers engage with the Metaverse and NFTs. We explored different levels of engagement (from simple interest to actually working on them) and examined different patterns of interest across other technologies as well as at a regional level. Today we take a look at how developers are engaging with the Metaverse and NFTs.
“50% of developers are interested or involved in the Metaverse in some way”
The data shows that developers appear to be more interested in the Metaverse than in NFTs. This is possibly because developers involved in NFTs are also highly involved in other projects, especially cryptocurrencies like Bitcoin.
“45% of developers involved in NFTs are highly involved in other blockchain projects and 55% are working on cryptocurrencies”
More developers are learning about the Metaverse (50%) than about NFTs (20%). Also, developers working on the Metaverse identified quantum computing as the number-one technology they wish to learn about.
In terms of regional distribution, most developers working on the Metaverse and NFTs reside in North America.
The Internet allows everyone to explore the web and create personal accounts on various platforms; hence, it’s safe to say that email addresses and passwords are the identity of netizens. Because of rampant cyberattacks, users try to protect themselves by frequently changing their passwords and choosing strong combinations of letters, numbers, and special characters. However, passwords alone aren’t enough to provide adequate data security nowadays.
The volume of data breaches continues to rise without fail as cybercriminals discover new and sophisticated ways to compromise unsuspecting individuals’ accounts. But thanks to recent technological advances, individuals and organisations have the opportunity to utilise multi-factor authentication systems to safeguard their identities and sensitive data.
This highly effective method reduces the consequences of poor password hygiene and prevents identity theft. In this article, we’ll discuss everything you need to know about the importance of implementing MFA in your day-to-day online activity.
Let’s get started!
All About Multi-Factor Authentication
MFA refers to methods that verify whether a user’s identity is genuine. It typically requires a user to provide two or more authentication factors along with their usual account password. One of its fundamental objectives is to add several layers of authentication to increase security.
Software-based two-factor authentication:
Along with passwords, one of the most common methods used for 2FA is the Time-Based One-Time Password, or TOTP. The most common TOTP applications are Google Authenticator and Authy.
These apps provide a unique numeric code, generated by a standardised algorithm, which users enter along with their password when signing in to platforms where 2FA is enabled. Quite a lot of services, including Gmail and GitHub, allow adding TOTP 2FA.
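Under the hood, a TOTP code is simply an HMAC computed over the current 30-second time window (RFC 6238). Here is a rough, minimal sketch in Python using only the standard library; the Base32 secret below is a made-up placeholder, not a real credential:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6):
    # Decode the shared secret the service issued (usually delivered as a QR code).
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of whole periods elapsed since the Unix epoch, as a big-endian counter.
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes starting at the offset in the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; prints a 6-digit code

Because the service and the authenticator app share the same secret and clock, both compute the same six digits for the same time window, which is why the codes expire so quickly.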
Hardware-based two-factor authentication:
On the other hand, users who prefer a stronger MFA method can invest in hardware authenticators like YubiKey. This device, when plugged into your workstation, generates a unique code that the service can use to authenticate your identity.
In addition, it’s a more secure system: the hardware device needs to be physically connected to the computer while authenticating the user account, and it produces longer codes, which are harder for hackers to guess without physical access to the device.
3 Types of Authentication Factors
Something you know – knowledge
Of course, the most common knowledge factor is a password. However, there are other types of knowledge factors, such as passphrases, PINs, and security questions. Although these have provided excellent security in the past, they aren’t as effective now that new generations of cybercriminals have surfaced.
Something you have – possession
Possession factors encompass smartphones, hard tokens, soft tokens, smartcards, and key fobs. For example, users typically need to insert smartcards into devices, receive a One-Time Passcode (OTP) on their smartphones, or receive unique codes from physical tokens.
Something you are – inherence
Inherence factors are the unique physical traits that users possess. These are verified through voice or facial recognition, retinal scans, and other biometric methods.
The Benefits of Using Multi-Factor Authentication
Effective cybersecurity solution
With an MFA system in place, hackers will have a tough time entering your network because it enforces strict security measures. Moreover, you can make hackers’ tasks even more difficult by using strong, complicated passwords, especially if MFA is used together with a single sign-on (SSO) solution.
Verifies user identity
MFA is a valuable tool for protecting sensitive data against identity breaches and theft. By using this strategy, the security of the traditional username and password login is reinforced by another layer of protection. In addition, cybercriminals will find it difficult to crack the given TOTP because it’s a complex combination that only works for a specific period – typically within seconds or minutes.
Hassle-free implementation
By its nature, a multi-factor authentication system is non-invasive. It doesn’t affect anything else within your device’s software, making for a hassle-free implementation. In addition, it boasts an intuitive user experience, helping you quickly acclimatise to the system.
Meets regulatory compliance
Organisations that use multi-factor authentication achieve two goals at once: data security and compliance risk management. For example, PCI-DSS requires MFA implementation in certain situations to stop unauthorised users from accessing systems. And even when application updates have unintended consequences, MFA compliance helps ensure that the system remains secure while staying virtually non-intrusive.
The Takeaway
Now that the world is in the digital age, Internet users continue to face cybercriminals’ deceptive tactics to gain their login credentials. And in this day and age where identity is considered the new perimeter, individuals who don’t utilise multi-factor authentication are playing with fire.
The use of multi-factor authentication is a smart and proactive choice, both for individuals and corporations. So if you’re looking for a convenient, innovative, and efficient way to add another layer of protection to your online accounts, MFA would be your best choice. Do you use any MFA technique? If so, tell us about it in the comments below.
Blockchain technology has made digital currency transactions increasingly useful, practical and accessible. However, as the number of crypto users has gone up, so has the rate of cyber theft related to cryptocurrencies. That’s why it’s important to understand how to safekeep your crypto by learning about crypto wallets, how they work and what to look for in one, whether it’s digital or physical.
What is a crypto wallet?
Cryptocurrency wallets, or simply crypto wallets, are places where traders store the secure digital codes needed to interact with a blockchain. They don’t actively store your cryptocurrencies, despite what their name may lead you to believe.
Crypto wallets need to interact with the blockchain in order to locate the crypto associated with your address. In fact, crypto wallets are not so much wallets as they are ledgers: they function as an owner’s identity and account on a blockchain network and provide access to transaction history.
How do crypto wallets work?
When someone sends bitcoin, ether, dogecoin or any other type of digital currency to your crypto wallet, you aren’t actually transferring any coins. What they’re doing is signing off ownership thereof to your wallet’s address. That is to say, they are confirming that the crypto on the blockchain no longer belongs to their address, but yours. Two digital codes are necessary for this process: a public key and a private key.
A public key is a string of letters and numbers automatically generated by the crypto wallet provider. For example, a public key could look like this: B1fpARq39i7L822ywJ55xgV614.
A private key is another string of numbers and letters, but one that only the owner of the wallet should know.
Think of a crypto wallet as an email account. To receive an email, you need to give people your email address. This would be your public key in the case of crypto wallets, and you need to share it with others to be a part of any blockchain transaction. However, you would never give someone the password to access your email account. For crypto wallets, that password is the equivalent of your private key, which under no circumstances should be shared with another person.
Using these two keys, crypto wallet users can participate in transactions without compromising the integrity of the currency being traded or of the transaction itself. The public key assigned to your digital wallet must match your private key to authenticate any funds sent or received. Once both keys are verified, the balance in your crypto wallet will increase or decrease accordingly.
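To make the two-key relationship concrete, here is a minimal sketch in Python, assuming the third-party ecdsa package and the secp256k1 curve used by Bitcoin; the message and variable names are purely illustrative:

import ecdsa

# Generate a private key; this must never be shared.
private_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
# Derive the matching public key; this is the shareable half of the pair.
public_key = private_key.get_verifying_key()

# "Sign off" ownership of a transaction with the private key...
message = b"send 0.5 BTC to B1fpARq39i7L822ywJ55xgV614"
signature = private_key.sign(message)

# ...so that anyone holding the public key can verify the two keys match.
print(public_key.verify(signature, message))  # True

The signature can only be produced with the private key but can be checked by anyone with the public key, which is exactly how a wallet authenticates funds sent and received.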
Types of crypto wallet
Crypto wallets can be broadly classified into two groups: hot wallets and cold wallets. The main difference is that hot wallets are always connected to the internet while cold wallets are kept offline.
Hot Wallets
Hot wallets are digital tools whose connection to the internet cannot be severed. Users can access these pieces of software from a phone or desktop computer to monitor their currencies and trade them. Some hot wallets are also accessible through the web or as browser extensions, meaning you can use them on a wide variety of devices.
The greatest advantage of hot wallets is their convenience. Your public and private keys are stored and encrypted on your wallet’s respective app or website, so unless they’re limited to a specific device, you can access them anywhere with an online connection. This ease of access makes them ideal for those who trade more often and are considering spending bitcoins.
Because hot wallets are always accessible online, they also face a greater risk of cyberattacks. Hackers can exploit hidden vulnerabilities in the software that supports your wallet or use malware to break into the system. This is particularly dangerous for web wallets hosted by crypto exchanges, which are bigger targets overall for crypto thieves.
PROS:
- Highly convenient; can be accessed from anywhere with an internet connection
- Easier to recover access than a cold wallet if you lose the private key

CONS:
- Less secure than cold wallets; vulnerable to a wider variety of attacks
- For custodial wallets, your keys are kept on the exchange’s servers
Cold Wallets
Cold wallets store your digital keys offline on a piece of hardware or sheet of paper. Hardware wallets usually come in the form of a USB drive which lets you buy, sell and trade crypto while it’s connected to a computer. With “paper” wallets, your keys may be accessible via print-out QR codes, written on a piece of paper, or engraved on some other material, such as metal.
Cold storage wallets are deliberately designed to be hard to hack. Unless the wallet owner falls for some sort of phishing attack, hackers have no way of obtaining the owner’s keys remotely. For something like a hardware wallet, a thief would first have to obtain the USB drive used to access your crypto and then somehow crack its password.
This high level of security may lend itself to mistakes on the part of wallet owners. If you lose your USB drive or sheet of paper and don’t have your private key backed up somewhere, you’ve effectively lost access to your crypto. Compared to hot wallets, which make it possible to regain access through a seed phrase, recovering access on a cold wallet is impossible in most cases due to the two-key security system.
PROS:
- More secure than hot storage wallets due to offline storage
- Many hardware wallets are supported by hot storage wallets

CONS:
- Transactions take longer on average
- Nearly impossible to recover currencies without a backup of your digital keys
How to set up a crypto wallet
Setting up a cryptocurrency wallet is a generally straightforward process that takes no more than a couple of minutes. The first step is to determine the kind of wallet you want to use, since hot wallets and cold wallets have different setup processes. Then, you’ll need to do the following:
For hot wallets…
Download the wallet. Make sure the wallet is legitimate before downloading any software. Crypto scams are becoming increasingly common and it’s important to know if the company behind a wallet actually exists. For web wallets, verify that you are on the correct website and not on a fake version of it built to steal your information.
Set up your account and security features. If you are using a non-custodial wallet, this is when you’ll receive your recovery phrase, a random string of 12 to 24 words; if you lose or forget it, you will not be able to access your crypto. You can enable added security tools, like two-factor authentication and biometrics, during or after the setup process. The process for custodial wallets is a bit more involved: you’ll have to undergo a verification process called Know-Your-Customer (KYC) to validate your identity.
Add funds to your wallet. For non-custodial wallets, you may have to transfer crypto from elsewhere, as not all wallets allow you to buy crypto with fiat currency directly. As for custodial wallets, you’ll need to fund them using a credit or debit card before you can purchase crypto, in some cases.
For cold wallets…
Purchase the wallet online. When buying a cold wallet, avoid third-party resellers. Buy the product directly from the developer to avoid issues, such as the device being tampered with beforehand.
Install the device’s software. Each brand has its own software that must be installed onto the hardware device before it can be used. Make sure to download the software from the company’s official website. Then, follow its instructions to create your wallet.
Deposit your cryptocurrency. You’ll need to transfer crypto into your hardware wallet from elsewhere, such as from a crypto exchange. Some wallets may have an incorporated exchange that allows you to trade crypto while the device is connected to your desktop computer or mobile device.
What to look for in a crypto wallet
When looking for a crypto wallet, it’s very important to first ask yourself:
How often do I trade? Will you be trading cryptocurrency daily or just occasionally? Hot wallets are better for active traders due to their speed and practicality. However, active traders may also benefit from a cold wallet by using it as a kind of savings account, keeping the bulk of their currencies there.
What do I want to trade? Are you looking to buy and store Bitcoin or are you interested in different types of cryptocurrency, like altcoins and stablecoins? The crypto wallet you pick should support the currencies you wish to trade and will ideally accommodate any other coins you may want to trade in the future.
How much am I willing to spend? Are you planning on accumulating large amounts of crypto? Hardware wallets are ideal for this sort of activity, but unlike hot wallets (which are mostly free), they require an upfront payment to own the wallet itself. Some hot wallets have higher crypto trading fees but offer faster transactions or greater functionality.
What functionality do I need in a wallet? Do you plan on doing anything specific with crypto beyond simply trading it? For example, traders who want to make money with their crypto passively should look for wallets that allow for crypto lending, staking and deposits.
After exploring the above questions, we put together some general suggestions for what to look for in a crypto wallet:
Supported currencies – The rule of thumb for supported currencies is “the more, the better.” Unless you’re interested in solely trading Bitcoin, we suggest you opt for a wallet that supports at least a few of the more popular altcoins.
Accessible interface – An accessible, intuitive user interface is always welcome, regardless of whether you’re a crypto veteran or a newbie. Look for wallets that don’t make you jump through hoops to start basic trading.
24/7 customer support – Although more useful for newer traders, having customer support available throughout the day is always a plus. This is especially true for wallets that undergo frequent updates and may suffer from bugs or visual glitches.
Hardware wallet compatibility – Anyone who is seriously thinking about getting into crypto should consider getting a hardware wallet. Even people who don’t trade frequently should consider a hardware wallet to safeguard their most important assets. Investors with a hot wallet that’s compatible with at least one brand of hardware wallet have an advantage, since they can default to the model(s) supported by their wallet and transfer their crypto back and forth as needed.
Investing in crypto prudently
Cryptocurrencies are a new and exciting financial asset. The idea of a decentralized currency independent of the banking industry is enticing for many. The wild price swings can be a thrill, and some coins are simply amusing.
Consider the story of Dogecoin. A portmanteau of “Doge” and “coin”, the currency was a hit on Reddit, a popular social network and forum site, and quickly generated a market value of $8 million. DOGE hit an all-time high on May 8, 2021, reaching a market capitalization of more than $90 billion after Elon Musk and Reddit users involved in the GameStop short squeeze turned their attention to it.
For a more sobering example, take a look at Bitcoin — the grandparent of all cryptocurrencies. Bitcoin has experienced multiple crashes throughout its lifespan, but its most recent one has left a lasting impression in mainstream culture. Reaching an all-time high of more than $65,000 in November 2021, its market value has declined as part of a general crypto price drop, briefly dipping under $20,000 in June 2022.
While entertaining, the fact remains that cryptocurrencies are unpredictable assets and should be traded with caution. It’s important to consider the following dangers when asking yourself, “Should I invest in cryptocurrencies?”
Crypto is volatile. A cursory glance at the historical price of Bitcoin is enough to see massive peaks and depressions throughout its lifespan. Just recently, Bitcoin fell under $20,000 in June after having surpassed a value of $69,000 for a single coin in November 2021. The same goes for any other major cryptocurrency. These dramatic changes are not normal compared to the pace at which mainstream assets move.
Crypto isn’t backed by anything. Most coins do not have a natural resource, such as gold, silver or other metals, that is used to track their value. They’re not backed by the government and don’t track the growth potential of enterprises the way stocks and bonds do. This increases crypto’s volatility as a whole.
Cryptocurrencies are also speculative assets, which are riskier due to large fluctuations in price. Many active traders invest in them with the hope of making a big profit after their value dramatically increases in the near future — hopefully before a crash.
Crypto is unregulated. Governments and institutions worldwide are still grappling with how to regulate cryptocurrencies, asking: Do we need specific legislation to regulate crypto assets? Who should regulate crypto? Should it be regulated at all?
While this lack of regulation responds to the nature of crypto and its ethos of freedom, a lack of adequate regulation means consumers are not protected against many crypto crimes and scams. Ultimately, crypto must be studied and handled carefully, as its future remains uncertain.
Personal finance experts and advisors recommend investing no more than 5% of your portfolio in risky assets like crypto. Beginners should also refrain from riskier crypto trading practices, such as lending and staking currencies to generate revenue.
Crypto Wallet Glossary
Blockchain: A blockchain is a type of ledger that records digital transactions and is duplicated across its entire network of systems. The shared nature of blockchain creates an immutable registry that protects users against fraud. Cryptocurrencies are traded on the blockchain.
BTC: BTC is the currency code used to represent Bitcoin, which was created by Satoshi Nakamoto as the first decentralized cryptocurrency. Read our article on what Bitcoin is to find out more.
Foundation for Interwallet Operability (FIO) Network: The FIO was established in the “pursuit of blockchain usability through the FIO Protocol.” The FIO Protocol is meant to improve the scalability of the blockchain and develop a standard for interaction between various crypto-related entities.
Hierarchical Deterministic (HD) account: HD accounts may be restored on other devices by using a backup phrase of 12 random words that’s created when you generate the wallet.
Light client: Also called light nodes, light clients implement Simplified Payment Verification (SPV), a technology that does not require downloading an entire blockchain to verify transactions. Depending on the currency, a full blockchain could be anywhere from 5 GB to over 200 GB. Thus, light clients tend to be faster than regular clients and require less computing power, disk space, and bandwidth. Mobile wallets almost always use light clients.
mBTC: A common exchange value, mBTC is short for millibitcoin, which is one-thousandth of a bitcoin (0.001 BTC).
Multi-signature: Multisig for short, wallets with this feature require more than one private key to sign and send a transaction.
Open-source: Software that is considered “open-source” has a source code that may be studied, modified or redistributed by anyone. The source code is what programmers use to adjust how a piece of software works.
Seed phrase: Newly opened crypto wallets randomly generate a string of 12 to 24 words known as a seed phrase. Users with non-custodial wallets must keep this phrase and are recommended to write it down in a safe location, since it stores all the information needed to recover access to their wallet and funds.
With all the information in this post, I believe you’re on your way to becoming an expert on crypto wallets and the measures you can take to avoid cyber theft. Until next time!
Hardly any software project today is built from the ground up. Frameworks and libraries have made developers’ lives so easy that there’s no need to reinvent the wheel anymore when it comes to software development. But these frameworks and libraries become dependencies of our projects, and as the software grows in complexity over time, it can become pretty challenging to manage these dependencies efficiently.
Sooner rather than later, developers can find their code depending on software projects of other developers, either open source and hosted online or developed in-house, perhaps in another department of the organisation. These dependencies also evolve, and need to be updated and kept in sync with your main source tree. This ensures that a small change breaks nothing and that your project is not outdated and has no known security vulnerabilities or bugs.
A good recent example of this is log4j, a popular logging framework initially released in 1999, which became a huge headache for many businesses at the end of 2021, including Apple, Microsoft, and VMware. log4j was a dependency in a variety of software, and the vulnerabilities discovered affected all of it. This is a classic example of how dependencies play a huge role in the software lifecycle and why managing them efficiently is important.
While there are a variety of ways and frameworks to manage software dependencies, depending on software complexity, today I’ll cover one of the most common and easiest to use, called “git submodule”. As the name suggests, it is built right into git itself, which is the de facto version control system for the majority of software projects.
Hands-on with git submodules:
Let us assume your project, named “hello-world”, depends on an open source library called “print”.
A not-so-great way to manage the project is to clone the “print” library code and push it alongside the “hello-world” code tree to GitHub (or any version control server). This works, and everything runs as expected. But what happens when the author of “print” makes some changes to its code or fixes a bug? Since you’ve used your own local copy of print, with no tracking of the upstream project, you won’t be able to pull these new changes in; you’d need to manually patch it yourself or re-fetch and push the code once again. Is this the best way of doing it, one may ask?
git has this feature baked in, allowing you to add other git repos (dependency projects) as submodules. This means your project will follow a modular approach, and you can update the submodules independently of your main project. You can add as many submodules to your project as you want and assign rules such as “where to fetch it from” and “where to store the code once it is fetched”. This obviously works if you use git for your software project’s version control.
Let’s see this in action:
So I’ve created a new git project named “hello-world” on my GitHub account, which has two directories:
src – where my main source code is stored
lib – where all the libraries (a.k.a. dependencies) used by my source code are stored.
These libraries are hosted on GitHub by their maintainers as independent projects. For this example, I’m using two libraries.
hello – which is also created by me as a separate GitHub repo
resources – which is another git repository in the Developer Nation account
To add these two above-mentioned libraries as submodules to my project, let’s open the terminal, change to the main project directory where I want them to be located. In this case, I want them in my lib directory, so I’ll execute the following commands:
cd hello-world/lib
Add each submodule with the command: git submodule add <link to repo>
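For this example, the commands would look something like the following (the URLs are illustrative; substitute the actual links of the hello and resources repositories):

git submodule add https://github.com/iayanpahwa/hello.git
git submodule add https://github.com/DeveloperNation/resources.git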
This will fetch the source code of these libraries and save them in your lib folder. Also, you’ll now find a new hidden file in the root of your main project directory, named .gitmodules, which holds the following metadata:
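The exact contents depend on the repo URLs used above; with the illustrative links, it would look roughly like this:

[submodule "lib/hello"]
    path = lib/hello
    url = https://github.com/iayanpahwa/hello.git
[submodule "lib/resources"]
    path = lib/resources
    url = https://github.com/DeveloperNation/resources.git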
Now every time someone clones the project, they can separately clone the submodules using the following commands:
git clone <your project URL>
cd <your project directory>
git submodule init
git submodule update
OR:
This can also be done in one command as:
git clone <Your Project URL> --recursive
In this case:
git clone git@github.com:iayanpahwa/hello-world.git --recursive
One more thing you’ll notice in the GitHub project repo: in the lib directory, the folders are named as follows:
hello @ fa3f …
resources @ c22
The hash after @ denotes the last commit from which the hello and resources libraries were fetched. This is a very powerful feature: by default, a submodule is fetched from the latest commit available upstream, i.e. the HEAD of the master branch, but you can fetch from other branches as well. More details and options can be found in the official documentation.
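For instance, to later move a pinned submodule forward to the newest commit of the branch it tracks, a sketch (assuming the lib/hello path from above) would be:

git submodule update --remote lib/hello
git add lib/hello
git commit -m "Bump hello submodule to latest upstream"

The --remote flag fetches the submodule’s tracked branch, and committing afterwards records the new pinned hash in the main project.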
Now you can track and update dependency projects independently of your main source tree. One thing to note: your dependencies need not all be on the same hosting site, as long as they use git. For example, if hello-world were hosted on GitHub and print on GitLab, git submodules would still work the same.
I hope this was a useful tutorial and you can now leverage git submodules to better manage your project dependencies. If you have any questions and ideas for more blogs, I’d love to hear from you in the comments below.
If you have ever wondered how game designers decide on the placement and immersion of assets such as the health meter and mission progress without hindering gameplay, this article is for you. Like websites or mobile apps, video games have common UI components that help players navigate and accomplish goals. In this article you’ll discover the four classes of game UI and how, as a game designer, you can utilise them to provide the best possible gaming experience.
Sixty years ago the Brookhaven National Laboratory in Upton, NY held an open house. Visitors who toured the lab were treated to an interactive exhibit, a game titled Tennis for Two. The setup was simple—a 5-inch analog display and two controllers, each with one knob and one button. The world’s first video game was born, but after two years, the exhibit was closed.
Twelve years passed, and an eerily similar arcade game showed up in a bar called Andy Capp’s Tavern. The name of the game? Pong. Its maker? Atari. Seemingly overnight, the burgeoning world of video games was transformed. Novelty became an industry.
Since Pong, the complexity of video game graphics has evolved exponentially. We’ve encountered alien insects, elven adventures, and soldiers from every army imaginable. We’ve braved mushroom kingdoms, boxing rings, and an expanding universe of hostile landscapes. While it’s fun to reminisce about the kooky characters and impossible plot lines, it’s also worth discussing the design elements that make video games worth playing—the UI components.
Like websites or mobile apps, video games have common UI components that help players navigate, find information, and accomplish goals. From start screens to coin counters, video game UI components are a crucial aspect of playability (a player’s experience of enjoyment and entertainment). To understand how these components impact the gaming experience, we must quickly address two concepts that are vital to video game design: Narrative and The Fourth Wall.
Narrative
Narrative is the story that a video game tells. Consider it the storyline your game and its characters follow.
The Fourth Wall
The Fourth Wall is an imaginary barrier between the game player and the space in which the game takes place.
Narrative and The Fourth Wall provide two questions that must be asked of every UI component incorporated into a game:
Does the component exist in the game story?
Does the component exist in the game space?
From these two questions, four classes of video game UI components emerge: Non-diegetic; Diegetic; Spatial; and Meta.
Non-Diegetic
Does the component exist in the game story? No
Does the component exist in the game space? No
Non-diegetic UI components reside outside of a game’s story and space. None of the characters in the game, including a player’s avatar, are aware that the components exist. The design, placement, and context of non-diegetic components are paramount.
In fast-paced games, non-diegetic components may interrupt a player’s sense of immersion. But in strategy-heavy games, they can provide players with a more nuanced assessment of resources and actions.
Non-Diegetic components commonly appear in video games as stat meters. They keep track of points, time, damage, and various resources that players amass and expend during gameplay.
In Super Mario Bros. 3, the stat meter is non-diegetic because it exists outside of the game world and story (characters within the game don’t know it’s there).
Diegetic
Does the component exist in the game story? Yes
Does the component exist in the game space? Yes
Diegetic UI components inhabit both a game’s story and space, and characters within the game are aware of the components. Even though they exist within the game story and space, poorly considered diegetic components are still capable of distracting or frustrating players.
Scale makes diegetic components tricky. For instance, an in-game speedometer that resides on a vehicle’s dashboard will likely be too small for players to see clearly. In some games, handheld diegetic components (like maps) can be toggled to a 2-D, full-screen view, making them non-diegetic.
In the demolition racing game Wreckfest, cars are diegetic UI components. Over the course of a race, they take on visible damage that indicates how near a player is to being knocked out of competition.
Spatial
Does the component exist in the game story? No
Does the component exist in the game space? Yes
Spatial UI components are found in a game’s space, but characters within the game don’t see them. Spatial components often work as visual aids, helping players select objects or pointing out important landmarks.
Text labels are a classic example of spatial UI components. In fantasy and adventure games, players may encounter important objects that are unfamiliar in appearance. Text labels quickly remove ambiguity and keep players immersed in the gaming experience.
The American football franchise Madden has spatial UI components that help players select avatars and understand game scenarios.
Meta
Does the component exist in the game story? Yes
Does the component exist in the game space? No
Meta UI components exist in a game’s story, but they don’t reside in the game’s space. A player’s avatar may or may not be aware of meta components. Traditionally, meta components have been used to signify damage to a player’s avatar.
Meta components can be quite subtle—like a slowly accumulating layer of dirt on the game’s 2D plane, but they can also feature prominently in the gaming experience. In action and adventure games, the entire field of view is sometimes shaken, blurred, or discolored to show that a player has taken on damage.
The Legend of Zelda utilizes scrolling text (a meta component) to advance the narrative and provide players with helpful tips.
To sum up the four classes: non-diegetic components exist in neither the story nor the game space; diegetic components exist in both; spatial components exist only in the game space; and meta components exist only in the story.
Classifying video game UI components isn’t always cut and dried. A life meter may be diegetic in one game but non-diegetic in another. Depending on a game’s narrative and its players’ relationship to the fourth wall, components may blur the line between classes. Likewise, an infinite range of visual styles and configurations can be applied to components according to a game’s art direction.
Software development is a messy and intensive process. In theory, it should be a linear, cumulative construction of functionalities and improvements in code, but it is rather more complex than that. More often than not, it is a series of intertwined, non-linear threads of complex code, partly finished features, old legacy methods, collections of TODO comments, and other things common to any human-driven, largely hand-crafted process.
Git was built to make our lives easier when dealing with this messy and complex approach to software development. Git makes it possible to work effortlessly on many features at once and decide what you want to stage and commit to the repository. The staging area is Git’s main working area, yet most developers know only a little about it.
In this article, we will be discussing the staging area in Git and how it is a fundamental part of version control and can be used effectively to make version control easier and uncomplicated.
What is the staging area?
To understand what the staging area is, let’s take a real-world example: suppose you are moving to another place and have to pack your stuff into boxes. You wouldn’t want to mix the items meant for the bathroom, kitchen, bedroom, and living room in the same box. So you take a box and start putting stuff into it, and if something doesn’t belong, you can remove it before finally packing the box and labeling it.
In this example, the box serves as the staging area, where you do the work of crafting your commit; packing and labeling the box is committing the code.
In technical terms, the staging area is the middle ground between what you have done to your files (the working directory) and what you last committed (the HEAD commit). As the name implies, the staging area gives you space to prepare (stage) the changes that will be reflected in the next commit. This adds some complexity to the process, but it also adds flexibility: commits can be selectively prepared and modified several times in the staging area before being committed.
Assume you’re working on two files, but only one is ready to commit. You don’t want to be forced to commit both files, only the one that is ready. This is where Git’s staging area comes in handy: we place files in the staging area before committing what has been staged. Even the deletion of a file must be recorded in Git’s history; therefore, deleted files must be staged before being committed.
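As a quick sketch of that two-file scenario (the file names are made up):

git add checkout.py
git status
git commit -m "Add checkout flow"

Here git add stages only the file that is ready, git status confirms that checkout.py is staged while the other file is not, and git commit records just the staged change. Each of these commands is covered in detail below.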
What are the git commands for the staging area?
git add
The command used to stage any change in Git is git add. The git add command adds a modification to the staging area from the working directory. It informs Git that you wish to include changes to a specific file in the next commit. However, git add has little effect on the repository—changes are not truly recorded until you execute git commit.
The common options available along with this command are as follows:
You can specify a <file> from which all changes will be staged. The syntax would be as follows:
git add <file>
Similarly, you can specify a <directory> for the next commit:
git add <directory>
You can also use a . to add all the changes from the present directory, such as the following:
git add .
git status
The git status command is used to check the status of the files (untracked, modified, or deleted) in the present branch. It can be used simply as follows:
git status
git reset
In case you have accidentally staged a file or directory and want to undo or unstage it, you can use the git reset command, as follows:
git reset HEAD example.html
git rm
If you remove files, they will appear as deleted in git status, and you must use git add to stage them. Another option is to use the git rm command, which deletes and stages files in a single command:
To remove a file (and stage it)
git rm example.html
To remove a folder (and stage it)
git rm -r myfolder
git commit
The git commit command saves a snapshot of the current staged changes in the project. Committed snapshots are “secure” versions of a project that Git will never alter unless you specifically ask it to.
Git may be considered a timeline management utility at a high level. Commits are the fundamental building blocks of a Git project timeline. Commits may be thought of as snapshots or milestones along a Git project’s history. Commits are produced with the git commit command to record the current status of a project.
Git snapshots are always committed to the local repository, never directly to the remote. Just as the staging area serves as a wall between the working directory and the project history, each developer’s local repository serves as a wall between their contributions and the central repository.
The most common syntax followed to create a commit in git is as follows:
git commit -m "commit message"
Conclusion
To summarize, git add is the first command in a series of commands that instructs Git to “store” a snapshot of the current project state into the commit history. When used alone, git add moves pending changes from the working directory to the staging area. The git status command examines the repository’s current state and can be used to confirm a git add promotion. To undo a git add, use the git reset command. The git commit command is then used to add a snapshot of the staging directory to the commit history of the repository.
This is all for this article, we will discuss more Git Internals in the next article. Do let me know if you have any feedback or suggestions for this series.
If you want to read what we discussed in the earlier instalments of the series, you can find them below.
Git Internals Part 1 – List of basic Concepts That Power your .git Directory
Git Internals Part 2 – How does Git store your data?
Code reviews are a type of software quality assurance activity that involves rigorous evaluations of code in order to identify bugs, improve code quality, and assist engineers in understanding the source code.
Implementing a systematic approach to human code reviews is one of the most effective ways to enhance software quality and security. Given the probability of mistakes during code authorship, fresh eyes with complementary knowledge may disclose flaws that the original programmer overlooked.
A successful peer review process requires a careful balance of well-established protocols and a non-threatening, collaborative atmosphere. Highly structured peer evaluations can hinder productivity, while lax approaches are frequently unsuccessful. Managers must find a happy medium that allows for fast and successful peer review while also encouraging open communication and information sharing among coworkers.
The Benefit/Importance of Code Reviews
The fundamental goal of code review is to guarantee that the codebase’s code health improves with time.
Code health is a concept used to measure whether the codebase on which one or more developers are working is manageable, readable, stable (or less error-prone), buildable, and testable.
Code reviews enhance code quality by detecting issues before they become unmanageable; they ensure a consistent design and implementation and assure consistency of standards. They contribute to the software’s maintainability and lifespan, resulting in sturdy software built from components that integrate and function smoothly. It is inevitable that adjustments will be required in the future, so it is critical to consider who will be accountable for implementing such changes.
When source code is regularly reviewed, developers can learn dependable techniques and best practices, as well as provide better documentation, because some developers may be unaware of optimization approaches applicable to their code. The code review process allows these engineers to learn new skills, improve the efficiency of their code, and produce better software.
Another significant benefit of code reviews is that they make the code easier for analysts and testers to comprehend. In Quality Assurance (QA) testing, testers must not only evaluate code quality but also discover the issues that contribute to bad test results. Unreviewed code can result in ongoing, needless development delays owing to further testing and rewriting.
Performing Code Reviews
Good code reviews should be the standard that we all strive towards. Here are some guidelines for establishing a successful code review to ensure high-quality and helpful reviews in the long run:
Use checklists
Every member of your team is quite likely to repeat the same mistakes, and omissions are the most difficult to identify because it is hard to evaluate something that does not exist. Checklists are the most effective method for avoiding these frequent errors and overcoming the challenges of omission detection. Checklists for code reviews can help team members understand the expectations for each type of review and can be beneficial for reporting and process development.
Set limits for review time and code lines checked
It might of course be very much tempting to rush through a review and expect someone else to detect the mistakes you omitted. However, a SmartBear study indicates a considerable decline in defect density at speeds quicker than 500 LOC per hour. The most effective code review is performed in a suitable quantity, at a slower speed, for a limited period of time.
Code review is vital, but it can also be a time-consuming as well as a painstaking process. As a result, it is critical to control how much time a reviewer or team spends on the specifics of each line of code. Best practices in this area include ensuring that team members do not spend more than an hour on code reviews and that the team does not examine more than a few hundred lines in a certain amount of hours.
In essence, it is strongly advised not to review for more than 60 minutes at a time, as studies suggest that taking breaks from a task significantly improves work quality. More frequent reviews should also lessen the need for sessions of that length in the future.
Perform a security code review
A security code review is a manual or automated process that assesses an application’s source code. Manual reviews examine the code’s style, intent, and functional output, whereas automated tools check for spacing or naming errors and compare the code against known standard functions. A security code review, a third kind of evaluation, examines the developer’s code for security resilience.
The goal of this examination is to identify existing security weaknesses or vulnerabilities. Among other things, it searches for logic flaws, reviews spec implementation, and verifies style guidelines. It is also important that developers write code in an environment that protects it against external attacks, which can lead to anything from intellectual property theft to revenue loss to data loss. Limiting code access, ensuring robust encryption, and establishing secrets management to keep passwords and hardcoded credentials from widespread dissemination are some examples.
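To make the last point concrete, here is a minimal Python sketch of the kind of finding a security-focused review flags; the variable name and credential are purely illustrative, not from any real codebase:

import os

# Flagged in review: a hardcoded credential that ships with the source
# DB_PASSWORD = "hunter2"

# Preferred: read the secret from the environment (or a secrets manager)
DB_PASSWORD = os.environ["DB_PASSWORD"]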
Make sure pull requests are minimal and serve a single function
Pull requests (PRs) are a typical way of requesting peer code evaluations: the PR triggers the review process when a developer completes an initial code modification. To improve the effectiveness and speed of manual code review, the developer should submit PRs with precise instructions for reviewers. The lengthier the review, the greater the danger that the reviewer will overlook the fundamental goal of the PR. In fact, a PR should be no more than 250 lines long; a study shows that reviewers can find 70–90 percent of errors in under an hour at that scale.
Offer constructive feedback
Code reviews play a very important role in software development, so giving constructive feedback is essential. Be constructive rather than critical or harsh in your feedback to maintain your team’s morale and ensure the team learns from its mistakes.
Code review examples
A great example of code review, especially in Python (my favored language), involves duck typing, which Python strongly encourages for productivity and adaptability. Emulating built-in Python types such as containers is a common use case.
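The original snippet isn’t preserved in this copy; a hand-rolled emulation might look like the following sketch (class and attribute names are illustrative):

# Illustrative only: a hand-rolled dict-like container
class ManualDictLikeType:
    def __init__(self, *args, **kwargs):
        self.store = dict(*args, **kwargs)

    def __getitem__(self, key):
        return self.store[key]

    def __setitem__(self, key, value):
        self.store[key] = value

    def __delitem__(self, key):
        del self.store[key]

    # ...and so on for __iter__, __len__, __contains__, keys(), items(),
    # get(), pop(), and every other dict behavior callers might expect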
Full container protocol emulation involves the presence and effective implementation of several magic methods. This can become time-consuming and error-prone. A preferable approach is to build user containers on top of a respective abstract base class:
import collections.abc

# Extra Pythonic!
class DictLikeType(collections.abc.MutableMapping):
    def __init__(self, *args, **kwargs):
        self.store = dict(*args, **kwargs)

    def __getitem__(self, key):
        return self.store[key]

    ...  # __setitem__, __delitem__, __iter__, and __len__ delegate to self.store similarly
Not only do we have to implement fewer magic methods, but the ABC harness also verifies that all necessary protocol methods are in place. This mitigates some of the inherent instability of dynamic typing.
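As a brief usage sketch (our own illustration, assuming the elided methods above are filled in), the ABC’s protocol check surfaces immediately at instantiation:

# Behaves like a dict once all abstract methods are implemented
d = DictLikeType(a=1)
d['b'] = 2
print(len(d))  # 2

# If any required method were missing, instantiation itself would fail:
#   TypeError: Can't instantiate abstract class DictLikeType with
#   abstract methods __delitem__, __iter__, __len__, __setitem__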
Top code review tools for Developers
The fundamental purpose of a code review process, as described earlier in this article, is to enhance efficiency. While the traditional code review approaches outlined above have worked in the past (and continue to work), you may be losing efficiency if you haven’t switched to using a code review tool. A code review tool automates the code review process, freeing up the reviewer’s time to concentrate solely on the code.
Before adding new code to the main codebase, code review tools interact with your development cycle to initiate a code review. You should choose a tool that is compatible with your technological stack so that it can be readily integrated into your workflow. Here is a list of some of the top code review tools:
1. GitHub
You may have previously used forks and pull requests to evaluate code if you use GitHub to manage your Git repositories in the cloud.
GitHub also stands out due to its discussion features during a pull request: you can analyze the diff, comment inline, and view the history of changes. You can also use the code review tool to resolve small Git conflicts through the web interface. To establish a more thorough procedure, GitHub even allows you to integrate with additional review tools via its marketplace.
2. Crucible
Atlassian’s Crucible is a collaborative code review tool that lets you examine code, discuss planned modifications, and find bugs across a variety of version control systems.
Crucible integrates well with other products in Atlassian’s ecosystem, including Confluence and Bitbucket. And, as with any product surrounded by others in its ecosystem, combining Crucible with Jira, Atlassian’s issue and project tracker, provides the greatest advantage: it allows you to run code reviews and audits on merged code prior to committing.
3. SmartBear Collaborator
SmartBear Collaborator is a peer code and document review tool for development teams working on high-quality code projects. Collaborator allows teams to review design documents in addition to source code.
You can use Collaborator to see code changes, identify defects, and make comments on specific lines of code. You can also set review rules and automatic notifications to ensure that reviews are completed on time. It also allows for easy integration with multiple SCMs and IDEs such as Visual Studio and Eclipse amongst others.
4. Visual Expert
Visual Expert is an enterprise solution for code review specializing in database code. It has support for three platforms only: PowerBuilder, SQL Server, and Oracle PL/SQL. If you are using any other DBMS, you will not be able to integrate Visual Expert for code review.
Visual Expert spares no line of code from rigorous scrutiny: the tool delivers a comprehensive analysis of code gathered from a customer’s preferred platform.
5. RhodeCode
RhodeCode is a secure, open-source enterprise source code management tool that unifies Git, Subversion, and Mercurial. Its primary functions are team collaboration, repository management, and code security and authentication.
RhodeCode distinguishes itself by allowing teams to synchronize their work through commit commentary, live code discussions, and shared code snippets. Teams may also assign review tasks to the appropriate person, resulting in a more frictionless workflow.
Conclusion
In this tutorial, we learned what code review is and why it is crucial in the software life cycle. We also discussed best practices and various approaches for reviewing code, walked through an example review, and listed top code review tools to help you get started reviewing code across your organization or team.
AWS Lambda is a powerful tool for developing serverless applications and on-demand workflows. However, this power comes at a cost in terms of flexibility and ease of deployment, as the manual deployment process that AWS Lambda recommends can be error-prone and hard to scale.
CloudFormation revolutionizes this process, replacing copied zip files with dependable and repeatable template-based deployment schemes. With CloudFormation, your Lambda functions will be easier to maintain, easier for your developers to understand, and easier to scale as your application grows.
Reviewing AWS Lambda Deployments
AWS Lambda function deployments are based around file handling—namely, by zipping your code into an archive and uploading the file to AWS. At its core, all AWS Lambda functions follow this pattern:
Create a zip file.
Upload to an S3 bucket.
Set the function to active.
This takes place whether you’re manually deploying the code, have outsourced your deployments to a tool, or are following any protocol in-between.
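For concreteness, the manual flow might look something like this from the command line; the bucket and function names are placeholders, not from the original article:

# Zip the function code, upload it, and point the function at the new archive
zip -r function.zip .
aws s3 cp function.zip s3://my-deploy-bucket/function.zip
aws lambda update-function-code \
    --function-name my-function \
    --s3-bucket my-deploy-bucket \
    --s3-key function.zip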
Once the file is received, AWS unzips your code into the appropriate folder structure, making it available to run when the Lambda container is spun up. This is a key point to remember as we discuss Lambda deployments, and it exposes one of the first holes in the manual deployment process: AWS Lambda functions have an unstated structure that you need to follow.
Simply put, you do not want to right-click on a file and create an archive; otherwise, you’ll encounter an error when you try to run your deployed Lambda code. The following screenshots illustrate this issue:
Figure 1: Do not zip the folder using this method
If you examine the zip files produced by the above method, you’ll find that their root level consists of your code folder:
Figure 2: This zip file will not be parsable by AWS Lambda
The issue this introduces is specifically related to how AWS Lambda deploys the code—namely, it simply unzips the provided code archive to an executable folder, then routes invocation requests to the application code found in that folder. When you provide a zip archive with a folder at the root level, instead of the application code itself, AWS Lambda has no idea what to do and throws errors. So, make sure that you zip the folder contents themselves, as follows:
Figure 3: Zipped at the appropriate level, the function code should be the root of the archive
When you do this, your code is put at the root level of the zip folder. This allows AWS Lambda to easily deploy your published code:
Figure 4: The code file is present at the root of the zip archive
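In shell terms, the difference between Figures 1–4 comes down to where you run the zip command; the paths here are illustrative:

# Wrong: zipping the folder itself puts a directory at the archive root
zip -r function.zip my-function/

# Right: zip the folder's contents so the code sits at the archive root
cd my-function
zip -r ../function.zip .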
Each Lambda function exists independently, meaning that you cannot easily share resources between Lambda functions—shared libraries, source data files, and all other information sources that need to be included with the zip archive you upload. This additional fragility and duplication can be resolved with Lambda layers. Lambda layers provide you with a common base for your functions, letting you easily deploy shared libraries without the duplication that would be required when using only the base container.
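As a sketch, publishing a shared-library layer might look like this; the layer name and runtime are assumptions, and note that for Node.js the libraries must sit under nodejs/node_modules/ inside the zip:

# Package shared libraries in the folder structure Lambda expects
zip -r layer.zip nodejs/

# Publish the layer so any function can reference it
aws lambda publish-layer-version \
    --layer-name shared-libs \
    --zip-file fileb://layer.zip \
    --compatible-runtimes nodejs12.x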
While you can script the zip-and-upload steps into a maintainable deployment process, their brittleness quickly becomes apparent once the project grows. AWS CloudFormation solves this problem by treating infrastructure as code, letting your developers and development operations teams create, deploy, and tear down resources with simple configuration-file modifications. These configuration files are human-readable and can be modified with any text editor, programming language, or UI tool that you desire.
Furthermore, CloudFormation lets you centralize the deployment of your infrastructure, creating a build process for your serverless functions that is both repeatable and predictable.
Improving Lambda Deployments with CloudFormation
Moving from the error-prone manual process of Lambda deployment to the superpowered CloudFormation model is a straightforward matter of translating your function’s infrastructure needs into the appropriate CloudFormation template language. CloudFormation then lets you consolidate the disparate resource deployments for your application into a small set of configuration files, allowing your infrastructure to be maintained alongside your application code.
All in all, CloudFormation makes deploying AWS Lambda functions incredibly simple.
Start by creating the template file that will define your resources; keep it in the working folder for your code. Next, create your function in the appropriate file for your desired Lambda runtime. Finally, create an S3 bucket and provide its address to your Lambda function; once you’ve done this, you can deploy functions simply by copying your zip file to the correct S3 bucket.
CloudFormation will be the tool that ties together all the resources your function requires. In CloudFormation, you will define the function, the function’s IAM role, the function’s code repository in S3, and execution policies to ensure that your function can do everything it needs to do within the AWS ecosystem. CloudFormation further gathers these resources together, centralizing all of your infrastructure definitions in a single template file that lives alongside your code.
Running Through a Sample Deployment
In this section, we’ll run through a quick example of creating a CloudFormation-driven deployment process for an AWS Lambda function. Start with a simple Node.js function targeting the nodejs12.x runtime.
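The original listing isn’t reproduced in this copy; a minimal handler along these lines fits the description (the file name and response body are our assumptions):

// index.js - a deliberately simple Lambda handler
exports.handler = async (event) => {
    return {
        statusCode: 200,
        body: JSON.stringify({ message: 'Hello from Lambda!' }),
    };
};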
This code is deliberately simple, allowing us to highlight the deployment process itself. Once you’ve created the function code, you can begin creating all of the items that will allow you to deploy and run the code with CloudFormation.
First, create a new file in the same directory as the function. These instructions assume that your file will be named template.yml. Once you’ve created the empty template file, start adding the resources needed to get your function running. You can begin by defining an S3 bucket to hold your function code.
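The article’s template isn’t shown in this copy; a minimal starting point might look like the following (resource and bucket names are placeholders):

# template.yml
AWSTemplateFormatVersion: '2010-09-09'
Description: Deployment resources for a simple Lambda function

Resources:
  FunctionCodeBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-lambda-function-code  # S3 bucket names must be globally unique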
Once you’ve created the template file and modified it to reflect the resources above, you can deploy your functions from the command line with a single call.
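The exact call isn’t preserved here; with the AWS CLI, a deploy along these lines would work (the stack name is a placeholder):

aws cloudformation deploy \
    --template-file template.yml \
    --stack-name my-lambda-stack \
    --capabilities CAPABILITY_IAM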
This basic configuration will allow you to deploy your functions once they’ve been uploaded to the S3 bucket specified in the function definition. You can now build upon this basic set of deployment functionality to automate any aspect of your stack creation. For a fully functional deployment sample, you can clone the excellent quickstart repo from AWS.
Some Tips and Additional Resources
As you work CloudFormation into your Lambda development pipeline, you’re bound to encounter headaches. Here are a few tips to help avoid unnecessary frustration from this immensely helpful AWS blog article on the topic:
Did you know that you can deploy in-line Lambda code? Simply include your (small) Lambda function code as lines under the ZipFile key; see the sketch after this list.
If you only need to release your functions to a small subset of AWS regions, you can provide a list of regional buckets to populate with your code; simply expand the resource listing when defining your source Lambda zip files.
With a simple name format policy and some custom code, you can create a system that allows you to upload your S3 file once, then publish it to any AWS region that supports AWS Lambda.
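As a sketch of the first tip, in-line code goes under the ZipFile key of a function’s Code property; the resource names and the role reference below are assumptions:

InlineFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: nodejs12.x
    Handler: index.handler
    Role: !GetAtt LambdaExecutionRole.Arn  # assumes a role defined elsewhere in the template
    Code:
      ZipFile: |
        exports.handler = async () => ({ statusCode: 200, body: 'ok' });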
In addition to the AWS blog post above, my fellow IOD experts also had a few thoughts on the best ways to achieve serverless deployment zen:
Slobodan Stojanovic gave a detailed overview of the path he took to simplifying his Lambda deployments; it serves as a good case study for transitioning your Lambda deployments into more maintainable patterns.
Once again, the excellent Quickstart repo provided by AWS also offers a useful CloudFormation-driven tool for deploying your AWS Lambda code across multiple regions from a single bucket.
Wrapping Up
AWS Lambda deployments are brittle and prone to error out-of-the-box, requiring you to wade through numerous user interfaces and dialog flows to create your function, associated execution roles, and the resources you need to host your deployable code.
With CloudFormation, you can convert all of this manual configuration into a single template file with the power to describe an entire application stack. CloudFormation replaces the complex and error-prone manual process of deploying Lambda functions with one that is repeatable and predictable, maintained alongside your code.