
The Importance and Benefits of using Multi-Factor Authentication (MFA)

The Internet lets everyone explore the web and create personal accounts on countless platforms, so it's safe to say that email addresses and passwords are the identity of netizens. Because cyberattacks are rampant, users try to protect themselves by choosing strong passwords that mix letters, numbers, and special characters and by changing them frequently. However, passwords alone aren't enough to provide adequate data security nowadays.

The volume of data breaches continues to rise without fail as cybercriminals discover new and sophisticated ways to compromise unsuspecting individuals’ accounts. But thanks to recent technological advances, individuals and organisations have the opportunity to utilise multi-factor authentication systems to safeguard their identities and sensitive data.

This highly effective method reduces the consequences of poor password hygiene and helps prevent identity theft. In this article, we'll discuss everything you need to know about the importance of implementing MFA into your day-to-day online activity.

Let’s get started!

All About Multi-Factor Authentication

MFA refers to methods that verify whether a user's identity is genuine. It typically requires the user to provide two or more authentication factors along with their usual account password. Its fundamental objective is to add extra layers of authentication and thereby increase security.

Software based Two-factor authentication:

Alongside passwords, one of the most common methods used for 2FA is the Time-Based One-Time Password, or TOTP. The most common TOTP applications are Google Authenticator and Authy.

These apps generate a unique numeric code with a standardized algorithm for users signing in to platforms where 2FA is required alongside the password. Quite a lot of services, including Gmail and GitHub, allow you to add TOTP-based 2FA.
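Under the hood, the app and the service share a secret and derive the same short-lived code from it and the current time (the algorithm is standardised as TOTP in RFC 6238). Here is a minimal sketch of the idea, assuming the third-party pyotp Python package; it is only an illustration, not how any particular service implements it:

```
# Minimal TOTP sketch using the third-party pyotp package (illustration only).
import pyotp

secret = pyotp.random_base32()   # shared once between the service and the authenticator app
totp = pyotp.TOTP(secret)        # defaults: 6 digits, 30-second time step

code = totp.now()                # the code the authenticator app would display
print(code)

# The service runs the same algorithm over the shared secret and current time,
# so it can check the code without it ever being sent ahead of time.
print(totp.verify(code))         # True while the current 30-second window is valid
```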

Hardware based Two-factor authentication:

On the other hand, users who prefer a stronger MFA method could invest in hardware authenticators like YubiKey. This device, when plugged into your workstation, generates a unique code that the service can use to authenticate your identity.

In addition, it's a more secure system because the hardware device must be physically connected to the computer during authentication, and it produces longer codes, which makes them practically impossible for hackers to guess without physical access to the device.

3 Types of Authentication Factors

Something you know – knowledge

Of course, the most common knowledge factor is a password. However, there are other types of knowledge factors, such as passphrases, PINs, and security questions. Although these have provided excellent security in the past, they aren’t as effective now that new generations of cybercriminals have surfaced.

Something you have – possession

Possession factors encompass smartphones, hard tokens, soft tokens, smartcards, and key fobs. For example, users typically need to insert smartcards into devices, receive a One-Time Passcode (OTP) on their smartphones, or receive unique codes from physical tokens.

Something you are – inherence

Inherence factors are the unique physical traits that users possess. These are verified through voice or facial recognition, retinal scans, and other biometric methods.

The Benefits of Using Multi-Factor Authentication

Effective cybersecurity solution

With an MFA system in place, hackers will have a tough time entering your network because it implements strict security measures. Moreover, you can make an attacker's task even more difficult by using strong, complex passwords, especially if MFA is used together with an SSO solution.

Verifies user identity

MFA is a valuable tool for protecting sensitive data against identity breaches and theft. By using this strategy, the security of the traditional username and password login is reinforced by another layer of protection. In addition, cybercriminals will find it difficult to crack a TOTP because it's a complex combination that is only valid for a short period – typically 30 to 60 seconds.

Hassle-free implementation

By its nature, a multi-factor authentication system is non-invasive. It wouldn’t affect anything within your device’s virtual space, making way for a hassle-free implementation. In addition, it boasts an intuitive user experience, helping you quickly acclimate to the system.

Meets regulatory compliance

Organisations that use multi-factor authentication kill two birds with one stone – data security and compliance risk management. For example, PCI-DSS requires MFA in certain situations to stop unauthorised users from accessing systems. So even when application updates have unintended consequences, MFA compliance ensures that the system remains virtually non-intrusive.

The Takeaway

Now that the world is in the digital age, Internet users continue to face cybercriminals’ deceptive tactics to gain their login credentials. And in this day and age where identity is considered the new perimeter, individuals who don’t utilise multi-factor authentication are playing with fire.

The use of multi-factor authentication is a smart and proactive choice, both for individuals and corporations. So if you're looking for a convenient, innovative, and efficient way to add another layer of protection to your online accounts, MFA would be your best choice. Do you use any MFA technique? If so, do tell us about it in the comments section below.


A Beginner's Guide to Crypto Wallets

Blockchain technology has made digital currency transactions increasingly useful, practical and accessible. However, as the number of crypto users has gone up, so has the rate of cyber theft related to cryptocurrencies. That’s why it’s important to understand how to safekeep your crypto by learning about crypto wallets, how they work and what to look for in one, whether it’s digital or physical.

What is a crypto wallet?

Cryptocurrency wallets, or simply crypto wallets, are places where traders store the secure digital codes needed to interact with a blockchain. They don’t actively store your cryptocurrencies, despite what their name may lead you to believe.

Crypto wallets need to locate the crypto associated with your address on the blockchain, which is why they must interact with it. In fact, crypto wallets are not so much wallets as they are ledgers: they function as an owner's identity and account on a blockchain network and provide access to transaction history.

How do crypto wallets work?

When someone sends bitcoin, ether, dogecoin or any other type of digital currency to your crypto wallet, they aren't actually transferring any coins. What they're doing is signing off ownership thereof to your wallet's address. That is to say, they are confirming that the crypto on the blockchain no longer belongs to their address, but yours. Two digital codes are necessary for this process: a public key and a private key.

A public key is a string of letters and numbers automatically generated by the crypto wallet provider. For example, a public key could look like this: B1fpARq39i7L822ywJ55xgV614.

A private key is another string of numbers and letters, but one that only the owner of the wallet should know.

Think of a crypto wallet as an email account. To receive an email, you need to give people your email address. This would be your public key in the case of crypto wallets, and you need to share it with others to be a part of any blockchain transaction. However, you would never give someone the password to access your email account. For crypto wallets, that password is the equivalent of your private key, which under no circumstances should be shared with another person.

Using these two keys, crypto wallet users can participate in transactions without compromising the integrity of the currency being traded or of the transaction itself. The public key assigned to your digital wallet must match your private key to authenticate any funds sent or received. Once both keys are verified, the balance in your crypto wallet will increase or decrease accordingly.
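The mechanics behind this matching are public-key cryptography: the private key signs a transaction, and anyone holding the public key can verify that signature without learning the private key. A minimal sketch, assuming the third-party Python ecdsa package and the secp256k1 curve used by Bitcoin and Ethereum (purely illustrative, not how any specific wallet manages keys):

```
# Illustration only: generate a key pair, sign a message, verify the signature.
import ecdsa

private_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
public_key = private_key.get_verifying_key()

message = b"send 0.1 BTC to address X"     # placeholder transaction data
signature = private_key.sign(message)

# Anyone with the public key can confirm the signature came from the matching
# private key without ever seeing the private key itself.
print(public_key.verify(signature, message))   # True
```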

Types of crypto wallet

Crypto wallets can be broadly classified into two groups: hot wallets and cold wallets. The main difference is that hot wallets are always connected to the internet while cold wallets are kept offline.

Hot Wallets

Hot wallets are digital tools whose connection to the internet cannot be severed. Users can access these pieces of software from a phone or desktop computer to monitor their currencies and trade them. Some hot wallets are also accessible through the web or as browser extensions, meaning you can use them on a wide variety of devices.

The greatest advantage of hot wallets is their convenience. Your public and private keys are stored and encrypted on your wallet's respective app or website, so unless they're limited to a specific device, you can access them anywhere with an online connection. This ease of access makes them ideal for those who trade often or plan to spend their crypto.

Because hot wallets are always accessible online, they also face a greater risk of cyberattacks. Hackers can exploit hidden vulnerabilities in the software that supports your wallet or use malware to break into the system. This is particularly dangerous for web wallets hosted by crypto exchanges, which are bigger targets overall for crypto thieves.

PROS
– Highly convenient, can be accessed from anywhere with an internet connection
– Easier than cold wallets to recover access if you lose your private key

CONS
– Less secure than cold wallets, vulnerable to a wider variety of attacks
– For custodial wallets, your keys are kept on the exchange’s servers

Cold Wallets

Cold wallets store your digital keys offline on a piece of hardware or sheet of paper. Hardware wallets usually come in the form of a USB drive which lets you buy, sell and trade crypto while it’s connected to a computer. With “paper” wallets, your keys may be accessible via print-out QR codes, written on a piece of paper, or engraved on some other material, such as metal.

Cold storage wallets are deliberately designed to be hard to hack. Unless the wallet owner falls for some sort of phishing attack, hackers have no way of obtaining the owner’s keys remotely. For something like a hardware wallet, a thief would first have to obtain the USB drive used to access your crypto and then somehow crack its password.

This high level of security may lend itself to mistakes on the part of wallet owners. If you lose your USB drive or sheet of paper and don’t have your private key backed up somewhere, you’ve effectively lost access to your crypto. Compared to hot wallets, which make it possible to regain access through a seed phrase, recovering access on a cold wallet is impossible in most cases due to the two-key security system.

PROS
– More secure than hot storage wallets due to offline storage
– Many hardware wallets are supported by hot storage wallets

CONS
– Transactions take longer on average
– Nearly impossible to recover currencies without a backup of your digital keys

How to set up a crypto wallet

Setting up a cryptocurrency wallet is a generally straightforward process that takes no more than a couple of minutes. The first step is to determine the kind of wallet you want to use since hot wallets and cold wallets have different set up processes. Then, you’ll need to do the following:

For hot wallets…

Download the wallet. Make sure the wallet is legitimate before downloading any software. Crypto scams are becoming increasingly common and it’s important to know if the company behind a wallet actually exists. For web wallets, verify that you are on the correct website and not on a fake version of it built to steal your information.

Set up your account and security features. If you are using a non-custodial wallet, this is when you'll be given your recovery (seed) phrase, a random string of 12 to 24 words from which your keys are derived. If you lose or forget it, you will not be able to access your crypto. You can enable added security tools, like two-factor authentication and biometrics, during or after the setup process. The process for custodial wallets is a bit more involved: you'll have to undergo a verification process called Know-Your-Customer (KYC) to validate your identity.
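As an aside, that recovery phrase is normally generated from a standardised wordlist (the BIP-39 scheme), where 128 bits of randomness produce 12 words and 256 bits produce 24. A minimal sketch, assuming the third-party python-mnemonic package (illustrative only; your wallet does this for you):

```
# Illustration only: generate a BIP-39 style recovery phrase.
from mnemonic import Mnemonic

mnemo = Mnemonic("english")
phrase = mnemo.generate(strength=128)   # 128 bits of entropy -> 12 words
print(phrase)

# The phrase encodes the entropy from which the wallet derives its keys,
# which is why anyone who learns it can recreate the whole wallet.
```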

Add funds to your wallet. For non-custodial wallets, you may have to transfer crypto from elsewhere, as not all wallets allow you to buy crypto with fiat currency directly. As for custodial wallets, you’ll need to fund them using a credit or debit card before you can purchase crypto, in some cases.

For cold wallets…

Purchase the wallet online. When buying a cold wallet, avoid third-party resellers. Buy the product directly from the developer to avoid issues, such as the device being tampered with beforehand.

Install the device’s software. Each brand has its own software that must be installed onto the hardware device before it can be used. Make sure to download the software from the company’s official website. Then, follow its instructions to create your wallet.

Deposit your cryptocurrency. You’ll need to transfer crypto into your hardware wallet from elsewhere, such as from a crypto exchange. Some wallets may have an incorporated exchange that allows you to trade crypto while the device is connected to your desktop computer or mobile device.

What to look for in a crypto wallet

When looking for a crypto wallet, it’s very important to first ask yourself:

How often do I trade? Will you be trading cryptocurrency daily or just occasionally? Hot wallets are better for active traders due to their speed and practicality. However, active traders may also benefit from a cold wallet by using it as a kind of savings account, keeping the bulk of their currencies there.

What do I want to trade? Are you looking to buy and store Bitcoin or are you interested in different types of cryptocurrency, like altcoins and stablecoins? The crypto wallet you pick should support the currencies you wish to trade and will ideally accommodate any other coins you may want to trade in the future.

How much am I willing to spend? Are you planning on accumulating large amounts of crypto? Hardware wallets are ideal for this sort of activity, but unlike hot wallets (which are mostly free), they require an upfront payment to own the wallet itself. Some hot wallets have higher crypto trading fees but offer faster transactions or greater functionality.

What functionality do I need in a wallet? Do you plan on doing anything specific with crypto beyond simply trading it? For example, traders who want to make money with their crypto passively should look for wallets that allow for crypto lending, staking and deposits.

After exploring the above questions, we put together some general suggestions for what to look for in a crypto wallet:

  1. Supported currencies – The rule of thumb for supported currencies is “the more, the better.” Unless you’re interested in solely trading Bitcoin, we suggest you opt for a wallet that supports at least a few of the more popular altcoins.
  2. Accessible interface – An accessible, intuitive user interface is always welcome, regardless of whether you’re a crypto veteran or a newbie. Look for wallets that don’t make you jump through hoops to start basic trading.
  3. 24/7 customer support – Although more useful for newer traders, having customer support available throughout the day is always a plus. This is especially true for wallets that undergo frequent updates and may suffer from bugs or visual glitches.
  4. Hardware wallet compatibility – Anyone who is seriously thinking about getting into crypto should consider getting a hardware wallet. Even people who don’t trade frequently should consider a hardware wallet to safeguard their most important assets. Investors with a hot wallet that’s compatible with at least one brand of hardware wallet have an advantage, since they can default to the model(s) supported by their wallet and transfer their crypto back and forth as needed.

Investing in crypto prudently

Cryptocurrencies are a new and exciting financial asset. The idea of a decentralized currency independent of the banking industry is enticing for many. The wild price swings can be a thrill, and some coins are simply amusing.

Consider the story of Dogecoin. A portmanteau of Bitcoin and Doge, the currency was a hit on Reddit, a popular social network forums site, and quickly generated a market value of $8 million. DOGE hit an all-time high on May 8, 2021, reaching a market capitalization of more than $90 billion after Elon Musk and Reddit users involved in the GameStop short squeeze turned their attention to it.

For a more sobering example, take a look at Bitcoin — the grandparent of all cryptocurrencies. Bitcoin has experienced multiple crashes throughout its lifespan, but its most recent one has left a lasting impression in mainstream culture. Reaching an all-time high of more than $65,000 in November 2021, its market value has declined as part of a general crypto price drop, briefly dipping under $20,000 in June 2022.

While entertaining, the fact remains that cryptocurrencies are unpredictable assets and should be traded with caution. It's important to consider the following dangers when asking yourself, "Should I invest in cryptocurrencies?"

Crypto is volatile. A cursory glance at the historical price of Bitcoin is enough to see massive peaks and depressions throughout its lifespan. Just recently, Bitcoin fell under $20,000 in June after having surpassed a value of $69,000 for a single coin in November 2021. The same goes for any other major cryptocurrency. These dramatic changes are not normal compared to the pace at which mainstream assets move.

Crypto isn’t backed by anything. Most coins do not have a natural resource, such as gold, silver or other metals, that is used to track their value. They’re not backed by the government and don’t track the growth potential of enterprises the way stocks and bonds do. This increases crypto’s volatility as a whole.

Cryptocurrencies are also speculative assets, which are riskier due to large fluctuations in price. Many active traders invest in them with the hope of making a big profit after their value dramatically increases in the near future — hopefully before a crash.

Crypto is unregulated. Governments and institutions worldwide are still grappling with how to regulate cryptocurrencies, asking: Do we need specific legislation to regulate crypto assets? Who should regulate crypto? Should it be regulated at all?

While this lack of regulation responds to the nature of crypto and its ethos of freedom, a lack of adequate regulation means consumers are not protected against many crypto crimes and scams. Ultimately, crypto must be studied and handled carefully, as its future remains uncertain.

Personal finance experts and advisors recommend investing no more than 5% of your portfolio in risky assets like crypto. Beginners should also refrain from riskier crypto trading practices, such as lending and staking currencies to generate revenue.

Crypto Wallet Glossary

  • Blockchain: A blockchain is a type of ledger that records digital transactions and is duplicated across its entire network of systems. The shared nature of blockchain creates an immutable registry that protects users against fraud. Cryptocurrencies are traded on the blockchain.
  • BTC: BTC is the currency code used to represent Bitcoin, which was created by Satoshi Nakamoto as the first decentralized cryptocurrency. Read our article on what is Bitcoin to find out more.
  • Foundation for Wallet Interoperability (FIO) Network: The FIO was established in the “pursuit of blockchain usability through the FIO Protocol.” The FIO protocol is meant to improve the scalability of the blockchain and develop a standard for interaction between various crypto-related entities.
  • Hierarchical Deterministic (HD) account: HD accounts may be restored on other devices by using a backup phrase of 12 random words that’s created when you generate the wallet.
  • Light client: Also called light nodes, light clients implement SPV (Simplified Payment Verification), a technology that does not require downloading an entire blockchain to verify transactions. Depending on the currency, a full blockchain could be anywhere from 5 GB to over 200 GB. Thus, light clients tend to be faster than regular clients and require less computing power, disk space and bandwidth. Mobile wallets almost always use light clients.
  • mBTC: A common exchange value, mBTC is short for millibitcoin, which is one-thousandth of a bitcoin (0.001 BTC or 1/1000 BTC).
  • Multi-signature: Multisig for short, wallets with this feature require more than one private key to sign and send a transaction.
  • Open-source: Software that is considered “open-source” has a source code that may be studied, modified or redistributed by anyone. The source code is what programmers use to adjust how a piece of software works.
  • Seed phrase: Newly opened crypto wallets randomly generate a string of 12 to 24 words known as a seed phrase. Users with non-custodial wallets must keep this phrase and are recommended to write it down in a safe location, since it stores all the information needed to recover access to their wallet and funds.

With all the information in this post, I believe you’re on your way to becoming an expert on crypto wallets and the measures you can take to avoid cyber theft. Until next time!


Managing software project dependencies with git submodules

Hardly any software project today is built from the ground up. Frameworks and libraries have made developers' lives so easy that there's no need to reinvent the wheel anymore when it comes to software development. But these frameworks and libraries become dependencies of our projects, and as the software grows in complexity over time, it can become pretty challenging to manage these dependencies efficiently.

Sooner or later, developers find their code depending on other developers' projects, which may be open source and hosted online or developed in-house, perhaps in another department of the organisation. These dependencies are also evolving and need to be updated and kept in sync with your main source tree. This ensures that a small change breaks nothing and that your project is neither outdated nor carrying any known security vulnerabilities or bugs.

A good recent example of this is log4j, a popular framework for logging, initially released in 1999, which became a huge headache for many businesses at the end of 2021, including Apple, Microsoft and VMware. log4j was a dependency in a wide variety of software, and the vulnerabilities discovered in it affected all of that software. This is a classic example of how dependencies play a huge role in the software lifecycle and why managing them efficiently is important.

While there are a variety of ways and tools to manage software dependencies, depending on the software's complexity, today I'll cover one of the most common and easy-to-use methods: the "git submodule". As the name suggests, it is built right into git itself, which is the de facto version control system for the majority of software projects.

Hands-on with git submodules:

Let us assume your project, named "hello-world", depends on an open-source library called "print".

A not-so-great way to manage the project is to clone the "print" library code and push it alongside the "hello-world" code tree to GitHub (or any version control server). This works and everything runs as expected. But what happens when the author of "print" changes the code or fixes a bug? Since you've used your own local copy of "print" and there is no tracking of the upstream project, you won't get these new changes automatically; you'd need to patch it manually or re-fetch and push the code all over again. Is this the best way of doing it, one may ask?

git has this feature baked in, which allows you to add other git repos (dependency projects) as submodules. This means your project follows a modular approach and you can update the submodules independently of your main project. You can add as many submodules to your project as you want and assign rules such as "where to fetch it from" and "where to store the code once it is fetched". This obviously works if you use git for your software project's version control.

Let’s see this in action:

So I've created a new git project named "hello-world" on my GitHub account, which has two directories:

src – where my main source code is stored

lib – where all the libraries, a.k.a. dependencies, used by my source code are stored.

These libraries are hosted on GitHub by their maintainers as independent projects. For this example, I’m using two libraries.

  1. print – which is also created by me as a separate GitHub repo
  2. resources – which is another git repository in the Developer Nation account

To add these two above-mentioned libraries as submodules to my project, let’s open the terminal, change to the main project directory where I want them to be located. In this case, I want them in my lib directory, so I’ll execute the following commands:

cd hello-world/lib

Add each submodule with the command: git submodule add <link to repo>

git submodule add git@github.com:iayanpahwa/print.git
git submodule add git@github.com:devnationworld/resources.git

This will fetch the source code of these libraries and save it in your lib folder. You'll also find a new hidden file named .gitmodules created in the root of your main project directory, with the following metadata:

```
[submodule "lib/print"]
path = lib/print
url = git@github.com:iayanpahwa/print.git
[submodule "lib/resources"]
path = lib/resources
url = git@github.com:devnationworld/resources.git
```

This tells git about:

  • the submodules used in this project
  • where to fetch them from
  • where to store them

Now every time someone clones the project, they can separately fetch the submodules using the following commands:

git clone <Your project URL>
cd <Your project directory>
git submodule init
git submodule update

Alternatively, this can be done in a single command by passing the --recursive flag:

git clone <Your Project URL> --recursive

In this case:

git clone git@github.com:iayanpahwa/hello-world.git --recursive

One more thing you'll notice on the GitHub project repo is that, in the lib directory, the folders are named:

print @ fa3f …

resources @ c22

The hash after @ denotes the last commit from which the print and resources libraries were fetched. This is a very powerful feature: by default, a submodule is fetched from the latest commit available upstream, i.e. the HEAD of the master branch, but you can pin it to other branches or commits as well. More details and options can be found in the official docs here.

Now you can track and update dependency projects independently of your main source tree. One thing to note is that your dependencies need not all be on the same hosting site, as long as they use git. For example, if hello-world were hosted on GitHub and print on GitLab, the git submodule would still work the same.
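Later, when the maintainer of a submodule pushes new commits, you can pull them into your project with something along these lines (a sketch; the exact paths, branch and commit message depend on your setup):

git submodule update --remote lib/print
git add lib/print
git commit -m "Bump print submodule to latest upstream"

The main project then records the new submodule commit hash, so everyone who clones and updates the repo gets the same version of the dependency.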

I hope this was a useful tutorial and you can now leverage git submodules to better manage your project dependencies. If you have any questions and ideas for more blogs, I’d love to hear from you in the comments below. 


A Definitive guide to Game UI for enhanced Gaming experience

If you have ever wondered how game designers decide on the placement and integration of assets such as health meters and mission progress indicators without them hindering gameplay, this article is for you. Like websites or mobile apps, video games have common UI components that help players navigate and accomplish goals. In this article you'll discover the four classes of game UI and how, as a game designer, you can utilise them to provide the best possible gaming experience.

Sixty years ago the Brookhaven National Laboratory in Upton, NY held an open house. Visitors who toured the lab were treated to an interactive exhibit, a game titled Tennis for Two. The setup was simple—a 5-inch analog display and two controllers, each with one knob and one button. The world’s first video game was born, but after two years, the exhibit was closed.

Twelve years passed, and an eerily similar arcade game showed up in a bar called Andy Capp’s Tavern. The name of the game? Pong. Its maker? Atari. Seemingly overnight, the burgeoning world of video games was transformed. Novelty became an industry.

Since Pong, the complexity of video game graphics has evolved exponentially. We’ve encountered alien insects, elven adventures, and soldiers from every army imaginable. We’ve braved mushroom kingdoms, boxing rings, and an expanding universe of hostile landscapes. While it’s fun to reminisce about the kooky characters and impossible plot lines, it’s also worth discussing the design elements that make video games worth playing—the UI components.

Like websites or mobile apps, video games have common UI components that help players navigate, find information, and accomplish goals. From start screens to coin counters, video game UI components are a crucial aspect of playability (a player’s experience of enjoyment and entertainment). To understand how these components impact the gaming experience, we must quickly address two concepts that are vital to video game design: Narrative and The Fourth Wall.

Narrative

Narrative is the story that a video game tells. Consider this as your video game character storyline.

The Fourth Wall

The Fourth Wall is an imaginary barrier between the game player and the space in which the game takes place.

Narrative and The Fourth Wall provide two questions that must be asked of every UI component incorporated into a game:

  1. Does the component exist in the game story?
  2. Does the component exist in the game space?

From these two questions, four classes of video game UI components emerge: Non-diegetic; Diegetic; Spatial; and Meta.

Non-Diegetic

  • Does the component exist in the game story? No
  • Does the component exist in the game space? No

Non-diegetic UI components reside outside of a game’s story and space. None of the characters in the game, including a player’s avatar, are aware that the components exist. The design, placement, and context of non-diegetic components are paramount.

In fast-paced games, non-diegetic components may interrupt a player’s sense of immersion. But in strategy-heavy games, they can provide players with a more nuanced assessment of resources and actions.

Non-Diegetic components commonly appear in video games as stat meters. They keep track of points, time, damage, and various resources that players amass and expend during gameplay.

In Super Mario Bros. 3, the stat meter is non-diegetic because it exists outside of the game world and story (characters within the game don’t know it’s there).

Diegetic

  • Does the component exist in the game story? Yes
  • Does the component exist in the game space? Yes

Diegetic UI components inhabit both a game’s story and space, and characters within the game are aware of the components. Even though they exist within the game story and space, poorly considered diegetic components are still capable of distracting or frustrating players.

Scale makes diegetic components tricky. For instance, an in-game speedometer that resides on a vehicle’s dashboard will likely be too small for players to see clearly. In some games, handheld diegetic components (like maps) can be toggled to a 2-D, full-screen view, making them non-diegetic.

In the demolition racing game Wreckfest, cars are diegetic UI components. Over the course of a race, they take on visible damage that indicates how near a player is to being knocked out of competition.

Spatial

  • Does the component exist in the game story? No
  • Does the component exist in the game space? Yes

Spatial UI components are found in a game’s space, but characters within the game don’t see them. Spatial components often work as visual aids, helping players select objects or pointing out important landmarks.

Text labels are a classic example of spatial UI components. In fantasy and adventure games, players may encounter important objects that are unfamiliar in appearance. Text labels quickly remove ambiguity and keep players immersed in the gaming experience.

The American football franchise Madden has spatial UI components that help players select avatars and understand game scenarios.

Meta

  • Does the component exist in the game story? Yes
  • Does the component exist in the game space? No

Meta UI components exist in a game’s story, but they don’t reside in the game’s space. A player’s avatar may or may not be aware of meta components. Traditionally, meta components have been used to signify damage to a player’s avatar.

Meta components can be quite subtle—like a slowly accumulating layer of dirt on the game’s 2D plane, but they can also feature prominently in the gaming experience. In action and adventure games, the entire field of view is sometimes shaken, blurred, or discolored to show that a player has taken on damage.

The Legend of Zelda utilizes scrolling text (a meta component) to advance the narrative and provide players with helpful tips.

To sum up the four classes: non-diegetic components exist in neither the story nor the space, diegetic components exist in both, spatial components exist only in the space, and meta components exist only in the story.

Classifying video game UI components isn’t always cut and dry. A life meter may be diegetic in one game but non-diegetic in another. Depending on a game’s narrative and its players’ relationship to the fourth wall, components may blur the line between classes. Likewise, an infinite range of visual styles and configurations can be applied to components according to a game’s art direction.


Git Internals Part 3: Understanding the staging area in Git

Software development is a messy and intensive process which, in theory, should be a linear, cumulative construction of functionality and improvements to code, but is rather more complex. More often than not it is a series of intertwined, non-linear threads of complex code, partly finished features, old legacy methods, collections of TODO comments, and other things common to any human-driven, largely hand-crafted process.

Git was built to make our lives easier when dealing with this messy and complex approach to software development. Git made it possible to work effortlessly on many features at once and decide what you want to stage and commit to the repository. The staging area in Git is the main working area, but most developers know only a little about it.

In this article, we will be discussing the staging area in Git and how it is a fundamental part of version control and can be used effectively to make version control easier and uncomplicated.

What is the staging area?

To understand what the staging area is, let's take a real-world example. Suppose you are moving to another place and have to pack your stuff into boxes, and you don't want to mix items meant for the bathroom, kitchen, bedroom, and living room in the same box. So you take a box and start putting stuff into it, and if something doesn't belong there, you can remove it again before finally packing the box and labeling it.

In this example, the box serves as the staging area, where you do the work (crafting your commit); when you are done, you pack and label the box (committing the code).

In technical terms, the staging area is the middle ground between what you have done to your files (the working directory) and what you last committed (the HEAD commit). As the name implies, the staging area gives you space to prepare (stage) the changes that will be reflected in the next commit. This adds some complexity to the process, but it also adds flexibility, because a commit can be selectively prepared and modified several times in the staging area before it is finalised.

Assume you’re working on two files, but only one is ready to commit. You don’t want to be forced to commit both files, but only the one that is ready. This is where Git’s staging area comes in handy. We place files in a staging area before committing what has been staged. Even the deletion of a file must be recorded in Git’s history, therefore deleted files must be staged before being committed.
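In practice, the flow for that scenario looks like this (hypothetical file names; each command is explained in the next section):

git add finished-feature.py
git status
git commit -m "Add finished feature"

Here only finished-feature.py is staged and committed, while the second, unfinished file stays as a pending change in the working directory.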

What are git commands for the staging area?

git add

The command used to stage any change in Git is git add. The git add command adds a modification to the staging area from the working directory. It informs Git that you wish to include changes to a specific file in the next commit. However, git add has little effect on the repository—changes are not truly recorded until you execute git commit.

The common options available along with this command are as follows:

You can specify a <file> from which all changes will be staged. The syntax would be as follows:

git add <file>

Similarly, you can specify a <directory> for the next commit:

git add <directory>

You can also use a . to add all the changes from the present directory, such as the following:

git add .

git status

The git status command is used to check the status of files (untracked, modified, or deleted) on the current branch. It can simply be used as follows:

git status

git reset

If you have accidentally staged a file or directory and want to unstage it, you can use the git reset command. It can be used as follows:

git reset HEAD example.html

git rm

If you remove files, they will appear as deleted in git status, and you must use git add to stage them. Another option is to use the git rm command, which deletes and stages files in a single command:

To remove a file (and stage it)

git rm example.html

To remove a folder (and stage it)

git rm -r myfolder 

git commit

The git commit command saves a snapshot of the current staged changes in the project. Committed snapshots are “secure” versions of a project that Git will never alter unless you specifically ask it to.

Git may be considered a timeline management utility at a high level. Commits are the fundamental building blocks of a Git project timeline. Commits may be thought of as snapshots or milestones along a Git project’s history. Commits are produced with the git commit command to record the current status of a project.

Snapshots are committed to the local repository, never directly to the remote repository. Just as the staging area serves as a wall between the working directory and the project history, each developer's local repository serves as a wall between their contributions and the central repository.

The most common syntax followed to create a commit in git is as follows:

git commit -m "commit message"

In short, git add moves changes from the working directory into the staging area, and git commit records the staged snapshot in the repository's history.

Conclusion

To summarize, git add is the first command in a series of commands that instructs Git to “store” a snapshot of the current project state into the commit history. When used alone, git add moves pending changes from the working directory to the staging area. The git status command examines the repository’s current state and can be used to confirm a git add promotion. To undo a git add, use the git reset command. The git commit command is then used to add a snapshot of the staging directory to the commit history of the repository.

This is all for this article, we will discuss more Git Internals in the next article. Do let me know if you have any feedback or suggestions for this series. 

If you want to read what we discussed in the earlier instalments of the series, you can find them below.

Git Internals Part 1: List of Basic Concepts That Power Your .git Directory here

Git Internals Part 2: How does Git store your data? here

Keep reading!


Getting Started with EVM (Ethereum Virtual Machine)

Ethereum has been a game-changer since its launch in 2015. It revolutionized the way people think about blockchain technology and decentralization. For a quick refresher, Ethereum is a public, open-source, decentralized blockchain that can run smart contracts and enables developers to build and deploy decentralized applications (DApps).

Many people used to believe that blockchain was all about cryptocurrencies, and that Ether was just another cryptocurrency like the well-known Bitcoin. However, Ethereum took blockchain technology to new heights by shifting its concept from being just another digital currency to a new decentralized platform with endless applications and possibilities.

It gave birth to the ICO (Initial Coin Offering) wave, introduced a completely new programming language, supported the creation of DApps (decentralized applications), and, foremost, popularized the term "smart contracts." What makes all this possible is the heart of Ethereum's success: the Ethereum Virtual Machine (EVM).

In this article, we’ll take a closer look at the EVM, what it is and how it works. We’ll also give hints and tips on how to develop on EVM using Solidity. So, without further ado, let’s get started!

Basics of Ethereum Virtual Machine 

We’ve already mentioned that EVM makes Ethereum what it is today. But we should establish a stronger foundation for understanding EVM. 

What is an Ethereum Virtual Machine (EVM)?

Ethereum Virtual Machine or EVM is a “world computer” that executes programs called “smart-contracts.” Smart-contracts are immutable computer programs intended to digitally facilitate, verify or enforce the negotiation or performance of a contract. These are applications that run precisely as programmed without the possibility of fraud or third-party interference.

Additionally, EVM is responsible for processing and executing all other transactions on the Ethereum network, such as handling DApps, and token transfers. It runs on every node in the Ethereum network and processes every transaction that goes through it. It is Turing-complete, meaning it can run any type of program as long as there are enough resources or “gas” to process it.

How does EVM work?

EVM works by executing a program called bytecode. This bytecode is generated from the high-level programming language Solidity (we will discuss this later in this article). The bytecode is then fed into the EVM, which processes and executes it.

To better understand this process, let’s compare it to how a traditional computer works. A desktop computer runs programs written in high-level coding languages like C++, Java, or Python. These programs are then converted into machine code, a low-level language the computer can understand. And the machine code is fed into the CPU (central processing unit), which processes and executes it.

Similarly, the bytecode generated from Solidity is fed into the EVM, which processes and executes it. The main difference here is that a traditional computer can only run one program at a time, while the EVM can run multiple programs simultaneously. This is because each program that runs on the EVM has its own isolated environment, which is called an “Ethereum Virtual Machine.” 
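If you are curious what that bytecode looks like, you can ask any Ethereum node for the code stored at a contract address. A minimal sketch, assuming the web3.py library (v6), a placeholder RPC endpoint, and a placeholder contract address:

```
# Illustration only: fetch the deployed EVM bytecode stored at a contract address.
# The RPC URL and the address below are placeholders, not real values.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://your-ethereum-node.example"))

contract_address = Web3.to_checksum_address(
    "0x0000000000000000000000000000000000000000"   # placeholder address
)
bytecode = w3.eth.get_code(contract_address)

# What comes back is the raw bytecode the EVM executes, not the Solidity source.
print(bytecode.hex())
```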

Developing on EVM with Solidity

Solidity as Programming Language

As we’ve mentioned before, the EVM executes a program called bytecode. This bytecode is generated from the high-level programming language called Solidity. So, to develop on EVM, you will need to understand the use of Solidity. 

Solidity is a contract-oriented, high-level programming language for implementing smart contracts. It was created specifically for the EVM and has syntax inspired by existing languages such as C++, Python, and JavaScript. However, there are a few things that you should know about Solidity.

First of all, Solidity is a statically typed language, which means you need to declare the type of each variable before using it. For example, you must declare whether a variable is an integer or a string before you can use it. Secondly, Solidity is case-sensitive, so you will need to be careful about the casing of your variables. For instance, the variable "MyVariable" differs from "myvariable." Third, Solidity does not have a concept of "null," so you will typically use the "require" keyword to check that a value has been set or that a condition holds.

Tools to Get Started

You can use the following few tools to get started on EVM. The first tool that you will need is the Remix IDE. Remix is a browser-based IDE that allows you to write, compile, and debug Solidity contracts. It also comes with a built-in debugger and an integrated testing environment.

The next tool you can use is Hardhat. Hardhat is a toolkit for Ethereum development that allows you to automate many of the tasks involved in smart contract development, such as compiling, testing, deploying, and upgrading contracts.

The last tool that you can check is Truffle. Truffle is a development environment, testing framework, and asset pipeline for Ethereum. It makes it easy to develop smart contracts and provides a suite of tools for testing, debugging, and deploying contracts.

After choosing the tools, you must decide which Ethereum network you want to deploy your contract to. You can choose from two main networks: the testnet and the mainnet. A testnet is a global testing environment in which developers can obtain and spend ether with no real-world value. In other words, it is a test network where you can experiment with your contracts without worrying about losing any real money.

On the other hand, the mainnet is the “live” Ethereum network, where all transactions have real-world value. Contracts deployed on the mainnet are live and irreversible. Also, it is accessible to anyone in the world. Hence, ensure that your contracts are thoroughly tested before deploying them on the mainnet.

Lastly, once you have deployed your contract on either the testnet or the mainnet, you can view it on Etherscan. Etherscan is a block explorer and analytics platform for Ethereum that allows you to view all of the transactions you have made on the Ethereum network and information about individual addresses and contracts.

Tips on Developing on EVM

Here are some tips that you can use in developing on EVM: 

  1. Make sure to test your contracts thoroughly before deploying them on the mainnet because once a contract is deployed on the mainnet, it cannot be changed or deleted.
  2. Secure your private keys and keep them safe. If someone gets ahold of your private keys, they can access all of your Ether.
  3. Consider using tools to automate the tasks involved in smart contract development to save you a lot of time and effort in the long run. 
  4. Be aware of the gas costs associated with each transaction. Every transaction on the Ethereum network costs a certain amount of Ether to execute.
  5. Keep your contract code simple and easy to understand. Complex contracts are more difficult to debug and likely to contain errors.

Conclusion 

It’s safe to say that Ethereum is a disruptive innovation with the potential to change how we interact with the digital world. That’s why it’s no wonder Ethereum’s price today continuously rises. With its powerful smart contract functionality, Ethereum provides a whole new level of flexibility and control.

While it is still in its early stages, Ethereum Virtual Machine (EVM) has already established impactful changes, and its further development is definitely worth keeping an eye on.

Sophia Young recently quit a non-writing job to finally be able to tell stories and paint the world through her words. She loves talking about fashion and weddings and travel, but she can also easily kick ass with a thousand-word article about the latest marketing and business trends, blockchain, cryptocurrency, finance-related topics, and can probably even whip up a nice heart-warming article about family life. She can totally go from fashion guru to your friendly neighbourhood cat lady with mean budgeting skills and home tips real quick.


What is Code Review? — Best Practices, guidelines and tools.

Code reviews are a type of software quality assurance activity that involves rigorous evaluations of code in order to identify bugs, improve code quality, and assist engineers in understanding the source code.

Implementing a systematic approach for human code reviews is one of the most effective ways to enhance software quality and security. Given the probability of mistakes during code authorship, using many fresh eyes with complementary knowledge may disclose flaws that the original programmer may have overlooked.

A successful peer review process requires a careful balance of well-established protocols and a non-threatening, collaborative atmosphere. Highly structured peer evaluations can hinder productivity, while lax approaches are frequently unsuccessful. Managers must find a happy medium that allows for fast and successful peer review while also encouraging open communication and information sharing among coworkers.

The Benefit/Importance of Code Reviews

The fundamental goal of code review is to guarantee that the codebase’s code health improves with time.

Code health is a concept used to measure whether the codebase on which one or more developers are working is manageable, readable, stable (i.e. less error-prone), buildable, and testable.

Code reviews enhance code quality by detecting issues before they become unmanageable; they ensure a consistent design and implementation and assure consistency of standards. They contribute to the software's maintainability and lifespan, resulting in sturdy software built from components that integrate and function smoothly. It is inevitable that adjustments will be required in the future, so it is critical to consider who will be accountable for implementing such changes.

When source code is regularly reviewed, developers can learn dependable techniques and best practices, as well as provide better documentation, because some developers may be oblivious to optimization approaches that could be applicable to their code. The code review process allows these engineers to learn new skills, improve the efficiency of their code, and produce better software.

Another significant benefit of code reviews is that they make the code easier for analysts and testers to comprehend. In Quality Assurance (QA) testing, testers must not only evaluate code quality but also discover the issues that contribute to bad test results; without reviews, this can result in ongoing, needless development delays owing to further testing and rewriting.

Performing Code Reviews

Good code reviews should be the standard that we all strive towards. Here are some guidelines for establishing a successful code review to ensure high-quality and helpful reviews in the long run:

Use checklists

Every member of your team is likely to repeat the same mistakes, and omissions are the hardest to identify because it is difficult to evaluate something that does not exist. Checklists are the most effective method for avoiding frequent errors and overcoming the challenges of omission detection. Checklists for code reviews can help team members understand the expectations for each type of review and can be beneficial for reporting and process development.

Set limits for review time and code lines checked

It might of course be tempting to rush through a review and expect someone else to detect the mistakes you missed. However, a SmartBear study indicates a considerable decline in the density of defects found at review speeds quicker than 500 lines of code per hour. The most effective code review covers a suitable quantity of code, at a slower speed, for a limited period of time.

Code review is vital, but it can also be a time-consuming and painstaking process. As a result, it is critical to control how much time a reviewer or team spends on the specifics of each line of code. Best practices in this area include ensuring that team members do not spend more than an hour at a stretch on code reviews and that the team does not examine more than a few hundred lines of code in a single session.

In essence, it is strongly advised not to review for more than 60 minutes at a time, as studies suggest that taking pauses from a task over time can significantly increase work quality. More regular evaluations should lessen the need for a review of this length in the future.

Performing a security code review.

A security code review is a manual or automated process that assesses an application's source code. Manual reviews examine the code's style, intent, and functional output, whereas automated tools check for spacing or naming errors and compare the code against known standard functions. A security code review, the third sort of evaluation, examines the developer's code for security resilience.

The goal of this examination is to identify any current security weaknesses or vulnerabilities. Among other things, code review searches for logic flaws, reviews spec implementation, and verifies style guidelines. However, it is also important that a developer is able to write code in an environment that protects it against external attacks, which can lead to everything from intellectual property theft to revenue loss and data loss. Limiting code access, ensuring robust encryption, and establishing secrets management to safeguard passwords and hardcoded credentials from widespread dissemination are some examples.

Make sure pull requests are minimal and serve a single function.

Pull requests (PRs) are a typical way of requesting peer code evaluations. A PR triggers the review process once a developer completes an initial code modification. To improve the effectiveness and speed of manual code review, the developer should submit PRs with precise instructions for reviewers. The lengthier the review, the greater the danger that the reviewer may overlook the fundamental goal of the PR. In fact, a PR should be no more than about 250 lines long, a size at which studies show reviewers can find 70–90 percent of defects in under an hour.

Offer constructive feedback.

Giving constructive feedback is essential, as code reviews play a very important role in software development. Be constructive rather than critical or harsh in your feedback to maintain your team's morale and ensure the team learns from its mistakes.

Code review examples


A great example of a code review finding, especially in Python, which is my favored language, involves duck typing, which is strongly encouraged in Python for productivity and adaptability. Emulating built-in Python types such as containers is a common use case:

 # Pythonic!
    class DictLikeType:
        def __init__(self, *args, **kwargs):
            self.store = dict(*args, **kwargs)

        def __getitem__(self, key):
            return self.store[key]

        ...

Full container protocol emulation involves the presence and effective implementation of several magic methods. This can become time-consuming and error-prone. A preferable approach is to build user-defined containers on top of the appropriate abstract base class:

# Extra Pythonic!
    import collections.abc

    class DictLikeType(collections.abc.MutableMapping):
        def __init__(self, *args, **kwargs):
            self.store = dict(*args, **kwargs)

        def __getitem__(self, key):
            return self.store[key]

        ...

We would not only have to implement fewer magic methods, but the ABC harness would also verify that all necessary protocol methods were in place. This mitigates some of the inherent instability of dynamic typing.
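You can see exactly what that harness checks by inspecting the abstract methods the base class declares; Python refuses to instantiate a subclass until all of them are implemented:

```
# The ABC blocks instantiation until every abstract method is implemented,
# which is the safety net described above.
import collections.abc

print(sorted(collections.abc.MutableMapping.__abstractmethods__))
# ['__delitem__', '__getitem__', '__iter__', '__len__', '__setitem__']
```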

Top code review tools for Developers

The fundamental purpose of a code review process, as described earlier in this article, is to enhance efficiency. While the traditional code review approaches outlined above have worked in the past (and continue to work), you may be losing efficiency if you haven’t switched to using a code review tool. A code review tool automates the code review process, freeing up the reviewer’s time to concentrate solely on the code.

Before adding new code to the main codebase, code review tools interact with your development cycle to initiate a code review. You should choose a tool that is compatible with your technological stack so that it can be readily integrated into your workflow. Here is a list of some of the top code review tools:

1. GitHub
You may have previously used forks and pull requests to evaluate code if you use GitHub to manage your Git repositories in the cloud.

code review - GitHub

GitHub also stands out due to its discussion features on pull requests: you can analyse the diff, comment inline, and view the history of changes. You can also use the code review tool to resolve small Git conflicts through the web interface. To establish a more thorough procedure, GitHub even allows you to integrate with other review tools via its marketplace.

2. Crucible

Atlassian’s Crucible is a collaborative code review tool that lets you examine code, discuss planned modifications, and find bugs across a variety of version control systems.

Code review - Crucible
Source — Crucible Code review

Crucible integrates well with other products in Atlassian’s ecosystem, including Confluence and Enterprise Bitbucket. And, as with any product surrounded by others in its ecosystem, you get the greatest advantage by combining Crucible with Jira, Atlassian’s issue and project tracker. It allows you to do code reviews and audits on merged code prior to committing.

3. Smartbear Collaborator

SmartBear Collaborator is a peer code and document review tool for development teams working on high-quality code projects. Collaborator allows teams to review design documents in addition to source code.

Code review - Smartbear
Source — Smartbear Overview Review

You can use Collaborator to see code changes, identify defects, and make comments on specific lines of code. You can also set review rules and automatic notifications to ensure that reviews are completed on time. It also allows for easy integration with multiple SCMs and IDEs such as Visual Studio and Eclipse amongst others.

4. Visual Expert

Visual Expert is an enterprise solution for code review specializing in database code. It has support for three platforms only: PowerBuilder, SQL Server, and Oracle PL/SQL. If you are using any other DBMS, you will not be able to integrate Visual Expert for code review.

Visual Expert
Source — Visual Expert for Oracle

Visual Expert spares no line of code from rigorous testing. The code review tool delivers a comprehensive analysis of code gathered from a customer’s preferred platform.

5. RhodeCode
RhodeCode is a secure, open-source enterprise source code management tool. It is a unified tool for Git, Subversion, and Mercurial. Its primary functions are team collaboration, repository management, and code security and authentication.

RhodeCode
Source — Rhodecode

RhodeCode distinguishes itself by allowing teams to synchronize their work through commit code commentary, live code discussions, and shared code snippets. Teams may also use its tooling to assign review tasks to the appropriate person, resulting in a more frictionless workflow.

Conclusion

In this tutorial, we learned what code review is and why it is crucial in the software life cycle. We also discussed best practices for reviewing code and the various approaches for doing so, walked through an example of a code review, and listed top code review tools to help you get started reviewing code throughout your organization or team.

Categories
Tips

How to Deploy Your Lambda Functions with CloudFormation

AWS Lambda is a powerful tool for developing serverless applications and on-demand workflows. However, this power comes at a cost in terms of flexibility and ease of deployment, as the manual deployment process that AWS Lambda recommends can be error-prone and hard to scale. 

CloudFormation revolutionizes this process, replacing copied zip files with dependable and repeatable template-based deployment schemes. With CloudFormation, your Lambda functions will be easier to maintain, easier for your developers to understand, and easier to scale as your application grows.

Reviewing AWS Lambda Deployments

AWS Lambda function deployments are based around file handling—namely, by zipping your code into an archive and uploading the file to AWS. At its core, all AWS Lambda functions follow this pattern:

  • Create a zip file.
  • Upload to an S3 bucket.
  • Set the function to active.

This takes place whether you’re manually deploying the code, have outsourced your deployments to a tool, or are following any protocol in-between.

Once the file is received, AWS unzips your code into the appropriate folder structure, making it available to run when the Lambda container is spun up. This approach is a key point to remember as we discuss Lambda deployments and also exposes one of the first holes in the manual deployment process—AWS Lambda functions have an unstated structure that you need to follow. 

Simply put, you do not want to right-click on a file and create an archive; otherwise, you’ll encounter an error when you try to run your deployed Lambda code. The following screenshots illustrate this issue:

Figure 1: Do not zip the folder using this method

If you examine the zip files produced by the above method, you’ll find that their root level consists of your code folder:

Figure 2: This zip file will not be parsable by AWS Lambda

The issue this introduces is specifically related to how AWS Lambda deploys the code—namely, it simply unzips the provided code archive to an executable folder, then routes invocation requests to the application code found in that folder. When you provide a zip archive with a folder at the root level, instead of the application code itself, AWS Lambda has no idea what to do and throws errors. So, make sure that you zip the folder contents themselves, as follows:

Figure 3: Zipped at the appropriate level, the function code should be the root of the archive

When you do this, your code is put at the root level of the zip folder. This allows AWS Lambda to easily deploy your published code:

Figure 4: The code file is present at the root of the zip archive
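
If you script your packaging, this structure is guaranteed rather than left to chance. The sketch below (the directory and archive names are placeholders, not values from this article) uses Python’s zipfile module to store each file with a path relative to the function directory, so the handler ends up at the root of the archive:

import os
import zipfile

def package_function(source_dir="function", archive_name="lambda.zip"):
    """Zip the *contents* of source_dir so the handler sits at the archive root."""
    with zipfile.ZipFile(archive_name, "w", zipfile.ZIP_DEFLATED) as bundle:
        for root, _, files in os.walk(source_dir):
            for name in files:
                full_path = os.path.join(root, name)
                # Store 'function/index.js' as 'index.js', not under a top-level folder.
                bundle.write(full_path, arcname=os.path.relpath(full_path, source_dir))

package_function()

Running this against a folder containing index.js produces an archive whose listing matches Figure 4, with the code file at the root.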


Each Lambda function exists independently, meaning that you cannot easily share resources between Lambda functions—shared libraries, source data files, and any other information that needs to be included in the zip archive you upload must be duplicated. This additional fragility and duplication can be resolved with Lambda layers. Lambda layers provide a common base for your functions, letting you easily deploy shared libraries without the duplication that would otherwise be required in each function’s package.
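
How you publish a layer depends on your tooling: in a CloudFormation template you would declare an AWS::Lambda::LayerVersion resource, while with the SDK you can publish one directly. Here is a rough boto3 sketch (the bucket, key, and layer names are placeholders) for publishing a shared-library layer whose ARN you can then attach to your functions via their Layers property:

import boto3

lambda_client = boto3.client("lambda")

# The layer zip is expected to follow the Lambda layer layout, e.g. a top-level
# 'nodejs/node_modules/...' directory for Node.js runtimes.
response = lambda_client.publish_layer_version(
    LayerName="shared-libs",  # placeholder name
    Description="Common libraries shared by several functions",
    Content={"S3Bucket": "my-layer-bucket", "S3Key": "layers/shared-libs.zip"},
    CompatibleRuntimes=["nodejs12.x"],
)
print(response["LayerVersionArn"])  # attach this ARN to each function that needs the layer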

While you can set up a scriptable and maintainable deployment process yourself, the brittleness of the above steps quickly becomes apparent as the project grows. AWS CloudFormation solves this problem by treating infrastructure as code; this lets your developers and development operations teams create, deploy, and tear down resources with simple configuration-file modifications. These configuration files are human-readable and can be edited with any text editor, programming language, or UI tool that you desire.

Furthermore, CloudFormation lets you centralize the deployment of your infrastructure, creating a build process for your serverless functions that is both repeatable and predictable.

Improving Lambda Deployments with CloudFormation

Moving from the error-prone manual process of Lambda deployment to the superpowered CloudFormation model is a straightforward process of translating your function’s infrastructure needs into the appropriate CloudFormation template language. CloudFormation lets you then consolidate the disparate resource deployments for your application into a small set of configuration files, allowing your infrastructure to be maintained alongside your application code.

All in all, CloudFormation makes deploying AWS Lambda functions incredibly simple.

Start by creating the template file that will define your resources; keep it in the working folder alongside your code. Next, create your function in the appropriate file for your desired Lambda runtime. Finally, create an S3 bucket and provide its address to your Lambda function; once you’ve done this, you can deploy functions simply by copying your zip file to the correct S3 bucket.
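
That last step is a one-liner with the SDK. As a minimal sketch (the bucket and key names below are placeholders, not values from the template), you could copy the packaged function to S3 with boto3:

import boto3

BUCKET = "my-lambda-zips-bucket"   # placeholder: the bucket your template references
KEY = "functions/lambda.zip"       # placeholder: the key your template references

s3 = boto3.client("s3")
s3.upload_file("lambda.zip", BUCKET, KEY)  # upload the packaged function code
print(f"Uploaded lambda.zip to s3://{BUCKET}/{KEY}")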

CloudFormation will be the tool that ties together all the resources your function requires. In CloudFormation, you will define the function, the function’s IAM role, the function’s code repository in S3, and execution policies to ensure that your function can do everything it needs to do within the AWS ecosystem. CloudFormation further gathers these resources together, centralizing all of your infrastructure definitions in a single template file that lives alongside your code.

Running Through a Sample Deployment

In this section, we’ll run through a quick example of creating a CloudFormation-driven deployment process for an AWS Lambda function. Start with the following Node.js code to create a simple Lambda function using the nodejs12.x runtime:

exports.handler = async (event) => {
    // TODO implement
    const response = {
        statusCode: 200,
        body: JSON.stringify('CloudFormation deployment successful!'),
    };
    return response;
};

This code is deliberately simple, allowing you to highlight the deployment process itself. Once you’ve created the function code, you can begin creating all of the items that will allow you to deploy and run the code with CloudFormation.

First, create a new file in the same directory as the function. These instructions assume that your file will be named template.yml. Once you‘ve created the empty template file, start adding the resources needed to get your function running. You can begin by defining an S3 bucket to hold your function code:

AWSTemplateFormatVersion: '2010-09-09'
Description: 'Example Lambda zip copy'
Resources:
  LambdaZipsBucket:
    Type: AWS::S3::Bucket

Then, create the resources needed for your function, including an IAM role and the function definition itself:

  MyFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  MyFunction:
    DependsOn: CopyZips
    Type: AWS::Lambda::Function
    Properties:
      Description: Example
      Handler: index.handler
      Runtime: nodejs12.x
      Role: !GetAtt 'MyFunctionRole.Arn'
      Timeout: 300
      Code:
        S3Bucket: !Ref 'LambdaZipsBucket'
        S3Key: !Sub '${QSS3KeyPrefix}/lambda.zip'

Once you’ve created the template file and modified it to reflect the resources above, you can deploy your functions from the command line with a single call:

aws cloudformation deploy --template-file template.yml \
    --stack-name your-stack-name-here \
    --capabilities CAPABILITY_IAM

This basic configuration will allow you to deploy your functions once they‘ve been uploaded to the S3 bucket specified in the function definition. You can now build upon this basic set of deployment functionality to automate any aspect of your stack creation. For a fully functional deployment sample, you can clone the excellent quickstart repo from AWS.

Some Tips and Additional Resources

As you work CloudFormation into your Lambda development pipeline, you’re bound to encounter headaches. Here are a few tips to help avoid unnecessary frustration from this immensely helpful AWS blog article on the topic:

  • Did you know that you can deploy in-line Lambda code? Simply include your (small) Lambda function code as lines appended after the ZipFile key.
  • If you only need to release your functions to a small subset of AWS regions, you can provide a list of regional buckets to populate with your code; simply expand the resource listing when defining your source Lambda zip files.
  • With a simple name format policy and some custom code, you can create a system that allows you to upload your S3 file once, then publish it to any AWS region that supports AWS Lambda.

In addition to the AWS blog post above, my fellow IOD experts also had a few thoughts on the best ways to achieve serverless deployment zen:

Once again, the excellent Quickstart repo provided by AWS also offers a useful CloudFormation-driven tool for deploying your AWS Lambda code across multiple regions from a single bucket.

Wrapping Up

AWS Lambda deployments are brittle and prone to error out-of-the-box, requiring you to wade through numerous user interfaces and dialog flows to create your function, associated execution roles, and the resources you need to host your deployable code. 

With CloudFormation, you can convert all of this manual configuration into a single template file with the power to describe an entire application stack. CloudFormation replaces the complex and error-prone manual process of deploying Lambda functions with a repeatable, predictable process that can be maintained alongside your code.


Categories
Community Tips

Understanding developer personalities

Personality theories provide a blueprint for understanding why people behave the way they do. In the latest edition of our State of the Developer Nation 22nd Edition – Q1 2022, we incorporated a measure of the widely accepted ‘Big Five’ personality dimensions. We did this in order to better understand the personality traits of software developers. Here, we share some of our findings on developer personalities. Our aim is to discuss how this kind of information can help to support interactions with developers.

Personality measures are a powerful tool for understanding people’s preferences and behaviours. Software teams need diversity not only in terms of skills, experience, and knowledge, but also require a variety of personalities. This will help teams collaborate effectively on complex and challenging projects.

The Ten-Item Personality Inventory

We used the Ten-Item Personality Inventory (TIPI) methodology in order to measure the ‘Big Five’ personality dimensions. These dimensions are: emotional stability, extraversion, openness to experiences, agreeableness, and conscientiousness. The TIPI method is well-suited for situations where short measures are required, and the results have been shown to align well with other widely used Big Five measures. Although more comprehensive and accurate personality measures than TIPI exist, they typically require an entire survey to themselves.

The TIPI method presents respondents with ten pairs of personality traits and asks them to rate how strongly these traits apply to them. Below, we show responses to these items for over 12,000 developers. We find that developers, in general, see themselves as complex and open to new experiences (86% agree or strongly agree that this applies to them), dependable and self-disciplined (79%), calm and emotionally stable (76%), and sympathetic and warm (74%). 

Developer personalities - developers are most likely to agree that they are dependable, self-disciplined, and open to new experiences

Diving deeper into the TIPI data allows us to identify more specific personality types within the general developer population. We collapsed these ten items into five distinct measures, one for each of the Big Five personality dimensions. For example, statements about being ‘sympathetic, warm’ and ‘critical, quarrelsome’ combine to give an overall measure of agreeableness. We then derived a score for each developer on each of the five dimensions. This helped us identify the developer personalities at the polar ends of each dimension, e.g. labelling those who are at the top end of the agreeableness scale as ‘agreeable’ and those at the bottom end as ‘disagreeable’. 

Finally, we segmented all developers into a set of distinct personality types. We did this by using the personality labels that they had been assigned as inputs to our segmentation algorithms.

Approximately 8% of all developers differ from the aforementioned group: they showcase a higher level of openness to experiences, often related to intellectual curiosity. These ‘intellectually curious’ developers have personality traits that suggest they are likely to investigate new tools and technologies, and they are also more likely to stay up to date with the cutting edge of technology.

The Five Developer Personalities

The following charts show the characteristics of five example developer personalities revealed within our data. A well-rounded, ‘balanced’ personality type accounts for 52% of the developer population. These are developers who sit firmly at the centre of each dimension. They are neither introverted nor extroverted, highly agreeable nor disagreeable, emotionally unstable nor lacking emotion, etc.

5% of developers fit a ‘responsible and cooperative’ personality type. These developers score highly in conscientiousness, openness to experiences, and agreeableness in comparison to the majority of developers. Increased conscientiousness often relates to setting long-term goals and planning routes to achieve them, e.g. being career-driven. Higher scores for openness to experiences reflect a preference for creativity and flexibility rather than repetition and routine. Our data backs this up: these developers are more receptive to personal development-related vendor resources. For example, 35% engage with seminars, training courses, and workshops compared to 25% of ‘balanced’ developers. Their high scores for agreeableness also correlate with greater engagement with community offerings. For example, 23% attend meetup events compared with 17% of ‘balanced’ developers.

5% of developers conform to an ‘achievement-driven and emotionally stable’ profile. As with the previous personality type, they are conscientious and open to experiences. However, they score much higher in terms of emotional stability but slightly lower in terms of agreeableness. Developers who score high in emotional stability react less emotionally; for example, they favour data over opinions. Lower agreeableness can be a useful trait for making objective decisions, free from the obligation of pleasing others.

We also find a segment of developers with an ‘introverted and unreliable’ profile. They indicate that they are less involved in social activities, disorganised, closed to new experiences, and less agreeable than other developers. Fortunately, these developers, who are likely hard to reach and engage in new activities and communities, are a very small minority, at 2% of all developers.

Common developer personality profiles

Developer Personalities, Roles and Content Preferences

Finally, we show how the characteristics of these developer personalities vary, both in terms of associations with developer roles and in the kinds of information and content that they consume. Developers in the ‘balanced’ profile are most likely to have ‘programmer/developer’ job titles. However, those who fit the ‘responsible and cooperative’ profile are disproportionately more likely to occupy creative roles (e.g. UX designer), which aligns with their increased creativity/openness, as well as senior CIO/CTO/IT manager positions, reflecting their self-discipline and achievement striving.

Those who are ‘achievement-driven and emotionally stable’ are less likely than other personality types to have ‘programmer/developer’ job titles, but disproportionately more likely to be data scientists, machine learning (ML) developers, or data engineers. They tend to deal mainly in facts and data rather than opinions and emotions. Those in the ‘introverted and unreliable’ profile are more likely to have test/QA engineer and system administrator job titles than those in other personality types. 

Developer personalities - achievement-driven developers with high emotional stability are 50% more likely to be data scientists than those with a balanced personality

When it comes to where developers go to find information and stay up to date, perhaps unsurprisingly, the ‘introverted and unreliable’ personality type uses the fewest information sources overall, affirming that they are a difficult group to engage via community-focussed events and groups. However, their use of social media is in line with other personality types, suggesting that this may be a suitable channel for catching the attention of this hard-to-reach group.

Both of the high-conscientiousness and high-openness personality types use the widest range of information sources overall; however, those who are more cooperative are considerably more likely to turn to social media for information about software development (53% of the ‘responsible and cooperative’ type vs. 44% of the ‘achievement-driven and emotionally stable’ type).

‘Intellectually curious’ developers are the most likely to make use of official vendor resources and open source communities. Hence, the audience that vendors reach via these resources may be slightly more keen to experience new products and offerings, than the typical ‘balanced’ developer.

What’s Next with Developer Personalities

We have just begun to scratch the surface of developers’ personality profiles. The personality types we have shown are indicative of just a few of the differences that exist among developers. By capturing this kind of data, we’ve opened the door for more extensive profiling and persona building, along with a deeper analysis of how the many other developer behaviours and preferences that we track align with personality traits. If you’re interested in learning more about developer personalities and how this can help you to reach out to developers, we’d be very excited to see how our data can support you.

Developer personalities - Achievement-driven developers use more information sources than those with a balanced personality
Categories
Analysis

Are Low/No-Code tools living up to their disruptive promise?

You may be wondering why software development is such a slow and expensive exercise. Part of the answer lies in its complexity and its need for technical specialists, who can be hard to find or very expensive to hire. Because of this, low/no-code tools have become increasingly popular among developers today. In this article, we explore low/no-code development and its advantages and disadvantages, and use data-driven facts to understand whether it is disrupting the software industry today.

What are low/no-code tools?

Low/no-code tools are visual software development platforms. Unlike traditional software development, which involves programmers writing lines of code, the low-code/no-code platforms encapsulate all this behind the tool.

As per the State of the Developer Nation 22nd Edition – Q1 2022 report,  46% of professional developers use low/no-code tools for some portion of their development work.

The difference between Low-code and No-code development platforms

Before we proceed further, let’s clarify the difference between low-code and no-code software.

Low-code platforms require technical knowledge and help developers code faster. The main benefit is that these platforms provide powerful tools that speed up technical software development activities and are built for coders.

No-code platforms are built for standard business users. There are no options for manually editing code; instead, these platforms focus on the user experience, creating functionality while abstracting the technical details away from the user.

Despite some level of automation in low-code platforms, coding is still core to the development process. Openness is a key difference between low-code platforms and no-code ones. As a developer, you can modify existing code or add new code to change the application. The ability to add code provides flexibility, with more use cases and customization possibilities. However, it limits backward compatibility.

Any new version changes to the low-code platform may affect custom code developed and may need a proper review before an upgrade. That means whenever there is a launch of a new version of the low-code platform, customers will need to test if their customized code functionality works well after the upgrade. 

In the case of no-code versions, customers do not have to worry about any functionality or breaking changes due to the platform being a closed system.

Low-code platforms offer easy integration capabilities. No-code, by contrast, can lead to users creating programs without proper scrutiny, bringing risks such as security concerns, integration and regulatory challenges, and increased technical debt.

How do you use low/no-code tools and software?

As a user, you visually select and connect reusable components representing the steps in a process. You then link them to create the desired workflow. The code is very much present behind the steps, which drives the functionality.

Low-code/no-code tools enable non-technical staff, or anyone at the workplace, to develop business workflow applications. Moreover, low-code/no-code platforms allow easy integration with other business applications. For example, sales staff could use a low-code/no-code application to capture qualified leads or opportunities in a database. They could then set triggers to send out targeted communications when specified events occur.

Advantages and disadvantages of low-code/no-code software

Low-code/no-code platforms have both advantages and disadvantages. Here are some of them.

Lower costs & faster development: Time is money, and you reduce costs when you can build more applications, faster, that automate work and improve productivity. You also save on recruiting additional developers, since applications that used to take a few months can be completed in a few days, leading to faster availability of business applications.

Integration feasibility & challenges: Today’s application programming interfaces, or APIs, enable a high level of integration between applications. Integration works seamlessly in many cases. However, when we look at scalability and speed, custom integration is preferred for critical enterprise business applications.

Creating APIs is not easy and requires a solid understanding of the IT landscape and related applications. Hence, creating significant and sizeable applications will still require experienced developers rather than non-technical users working hands-on with low-code/no-code software.

Time to market gains: As low code/no-code software replaces conventional hard coding with drag and drop functionality, reusable components, ready-to-use templates, and minimal coding, organizations can deliver applications faster to the market. It, therefore, helps organizations gain a competitive edge and improve productivity.

Performance: The standard view on low code/no-code software is that it focuses on saving time and is effective and successful. However, low code/no-code software platforms are not designed for performance and limit the number of functions one can implement. Moreover, adding new features to an application built using low code/no-code software can get challenging.

Privacy and Security Issues: With low-code/no-code software, there are limitations to configuring data protection and privacy aspects. You do not have access to all the source code, making it challenging to detect any security gaps.

The future of software development

Low-code/no-code software platforms offer many advantages in creating business applications faster, along with some disadvantages stemming from their limitations around coding functions and features. So what is the situation on the ground today with low-code/no-code software platforms?

The State of the Developer Nation 22nd Edition – Q1 2022 report has some interesting insights on the actual usage of low-code/no-code software platforms. Here are some findings:

Who is using low-code/no-code tools?

  • 46% of professional developers use low-code/no-code (LCNC) tools for some portion of their development work.
  • Experienced developers, particularly those with more than ten years of experience, are the least likely to use LCNC tools.
  • Most developers that use LCNC tools do so for less than a quarter of their development work.
  • The Greater China area has the highest LCNC tool adoption rate. 69% of developers in this region report using LCNC tools, compared to the global average of 46%.
  • 19% of developers in North America use LCNC products for more than half of their coding work – almost twice the global average of 10%. This provides strong evidence that these tools can supplant traditional development approaches.

Wrapping up

Low-code/no-code tools have great potential and are disrupting the traditional software industry, but at a slower rate. The State of the Developer Nation 22nd Edition – Q1 2022 report shows us some fascinating insights.

Experienced developers with ten or more years of experience are less likely to use low-code/no-code tools. This could be due to the flexibility that coding offers experienced developers and their comfort with it. It may also have an angle related to the job security of software developers and the risk of automated LCNC tools taking over significant parts of programming activity. Experienced developers tend to work on complex tasks, whereas low-code tools are better suited to the simple programming tasks that experienced hands already find easy to do.

On the other hand, North American developers appear to be ahead of the curve: 19% use LCNC products for more than half of their coding (almost twice the global average of 10%), showing massive potential for LCNC tools to supplement software development activities. Much of the initiative in using LCNC tools also rests with software organizations leading initiatives and implementing these solutions. Younger developers may find it easier to automate some parts of coding using LCNC tools and speed up their development activities.

The LCNC approach each programmer takes when coding and developing a feature can come from their learning experience. A younger developer may prefer to use LCNC tools for about a quarter of their development work, since they are familiar with the tools and it is simply part of how they work. An experienced developer may shun the tools, having always built applications from scratch by coding without any LCNC tooling.

As technology advances and pressure builds to deliver business solutions more quickly, organizations will need to make use of the latest LCNC tools. Developing robust, functional, and secure software solutions faster to gain a competitive edge will be a mandate amid the rapid pace of digital transformation. Today, LCNC tools are progressing successfully in that direction, and programmers, irrespective of their experience, should adopt LCNC tools wherever an opportunity to improve productivity exists.