Categories
Community

10 Benefits of Test-Driven Development to Your DevOps Team

From JavaScript to HTML/CSS to SQL and beyond, thoroughly testing code before integrating it into any system is essential in software development. First and foremost, it safeguards the quality and integrity of the code. Compared with development teams that use other methods, teams practicing TDD have been shown to produce considerably fewer bugs and defects.

It’s also interesting to note that the DevOps market size is expected to reach $25.5 billion by 2028.

In this article, we’ll explain what test-driven development is, along with the various benefits, and how to effectively integrate test-driven development into your DevOps Team.

What is Test-Driven Development in DevOps? 

First things first, it’s important to understand that test-driven development is not about testing for its own sake, nor about design alone, nor simply about carrying out lots of tests. Test-Driven Development (TDD) is a proactive software development method in which developers write tests for the code before the code itself is written.

In addition to practicing Test-Driven Development, developers in the age of digital transformation can turn to digital transformation conferences, which have become a reliable pool of knowledge for making strategic decisions and sound investment choices, too.

Whether you’re a small startup or an established enterprise, implementing test-driven development can significantly enhance your software development process and ensure the quality of your products, ultimately strengthening your business name in the industry.

Moreover, by promoting transparency and accountability in the development cycle, TDD aids in identifying and mitigating potential risks, providing clarity about who ultimately owns each piece of functionality.

Why use Test Driven Development in DevOps?

Test-Driven Development offers a variety of benefits for developers, including:

1. Early Bug Detection & Reduced Debugging Time

Writing tests before making changes or implementing new features helps catch bugs and problems early on. Even better, the likelihood of shortcomings or flaws in the final product is considerably reduced too.

When a test fails, it pinpoints the specific area of code that requires attention, reducing the time spent identifying and rectifying issues so that effort can go where it’s needed most.

2. Improved Code Quality

Writing tests not only ensures the code meets specific requirements, it often produces cleaner, more modular, and more manageable code. Inevitably, this leads to better code quality.

Emphasizing early testing, maintainability, and confidence in the correctness of the codebase, TDD also offers:

  • Insightful documentation
  • Better software design 
  • Increased developer confidence 
  • Automated regression prevention 
  • Notable time savings in the long run 
  • Seamless CI/CD integration 
  • Improved customer satisfaction.

3. Faster Feedback Cycles

TDD provides software developers with immediate feedback on the precision of their code. Quicker feedback loops save developers valuable time by addressing coding headaches straightaway.

Other key advantages faster feedback cycles offer developers include:

  • Accelerated overall development speed
  • Fewer distractions
  • Enhanced productivity
  • Greater developer confidence in code changes
  • Alignment with agile development principles
  • Incremental development
  • Swift integration with CI
  • A culture of collaboration
  • A shorter overall feedback loop in the development process.

4. Facilitates Refactoring

Refactoring refers to the process of improving a codebase’s internal structure or design without changing its external behavior.

By enabling developers to regularly improve the quality and maintainability of the codebase, refactoring lets them reshape and develop code without worrying about breaking existing functionality or introducing unintended consequences.

The key steps for refactoring with TDD are:

  • Write a failing test 
  • Run the test 
  • Perform refactoring, e.g. renaming variables, extracting methods, simplifying complex logic, etc. 
  • Run the test again 
  • Write additional tests 
  • Run all tests 
  • Evaluate 
  • Implement changes 
  • CI Integration 
  • Refactoring Documentation, e.g. comments in the code, README files, etc.

To guarantee your codebase’s health improves with time, it’s worth considering carrying out regular code reviews.
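
To illustrate how a step like “extracting methods” stays safe under tests, here is a small JavaScript sketch with invented names; it is an example of the idea, not a prescription:

// Before: working but tangled logic, protected by a passing test.
function totalPrice(items) {
  let total = 0;
  for (const item of items) total += item.price * item.qty;
  return total * 1.2; // hard-coded 20% VAT
}

// After: the tax rule and the summing loop are extracted into named helpers.
// External behavior is unchanged, so the same test keeps passing.
const VAT_RATE = 0.2;

function subtotal(items) {
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

function totalPriceRefactored(items) {
  return subtotal(items) * (1 + VAT_RATE);
}

// The guard rail, written with Jest:
test("total includes 20% VAT", () => {
  expect(totalPriceRefactored([{ price: 10, qty: 2 }])).toBeCloseTo(24);
});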

5. Supports Continuous Integration (CI)

In DevOps software development, continuous integration (CI) is where developers routinely add code changes to a central repository. Going hand in hand with TDD, CI enables automated tests, provides quick feedback, maintains code stability, and makes sure any integration issues are identified early on.

The CI process typically includes these steps:

  • Version Control System (VCS) 
  • Code Changes 
  • Automated Build 
  • Automated Testing 
  • Static Code Analysis 
  • Artifact Generation 
  • Deployment to Staging Environment
  • Automated Acceptance Testing
  • Manual testing 
  • Code Review 
  • Feedback and Notifications 
  • Merge to Main/Master Branch.
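
As a minimal sketch of how the build and test stages might be automated, here is a hypothetical GitHub Actions workflow for a Node.js project (the file path, branch name, and commands are assumptions for illustration):

# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4      # fetch the committed code changes
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci                    # automated build: install exact dependencies
      - run: npm test                  # automated testing: run the TDD suite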

6. Enables Continuous Delivery (CD)

Quite simply, continuous delivery (CD) automates the building, testing, and deployment of software, making sure it’s always in a deployable state. Combined with CI, TDD supports the frequent release of software updates.

Closely related to CI, the key steps in the CD process are:

  • Version Control 
  • Continuous Integration (CI) 
  • Automated Testing 
  • Artifact Generation 
  • Configuration Management 
  • Deployment to Testing/Staging Environment 
  • Automated Acceptance Testing 
  • Manual Testing 
  • Approval Gates 
  • Deployment to Production 
  • Monitoring and Logging 
  • Rollback Plan 
  • Post-Deployment Testing 
  • Documentation and Release Notes

7. Better Collaboration Reduces Debugging Times

TDD provides a clear understanding of the expected behavior of the code. It fosters a culture of collaboration among team members, facilitating virtual collaboration sessions where developers can discuss test results, code implementations, and potential improvements, regardless of their physical locations.

It also helps reduce debugging times by promoting collaboration in the form of clear specifications, collective code ownership, and regular code reviews. 

Reducing debugging times is beneficial for DevOps teams for various reasons:

  • Increased efficiency 
  • Faster time to market 
  • Cost savings 
  • Enhanced morale and motivation 
  • Higher-quality software 
  • Iterative development.

Better-quality software, faster turnaround on fixing issues, and happier development teams all follow; reducing debugging times is essential for maintaining a seamless development process from start to finish.

8. Increased Confidence in Changes

Acting as the ultimate safeguard, passing tests give developers confidence that their changes haven’t introduced regressions. Test-Driven Development (TDD) also aligns well with modern infrastructure practices like utilizing dedicated hosts, where the isolation and predictability they offer can further bolster confidence in code changes.

Just like software development, Enterprise Architecture (EA) is constantly evolving in this fast-paced market. So, if you like the idea of quicker change and innovation, achieving greater value within the market, and accomplishing your objectives, it’s worth looking into the latest EA trends for further insight.

9. Positively Impacts Data Handling

By writing tests that validate data inputs and outputs, TDD ensures that data is processed accurately, providing a reliable foundation for developers to draw informed conclusions about the behavior and performance of their code under various conditions. This leads to improved data quality and reduces the likelihood of inconsistencies and errors.

TDD ensures accurate data handling by:

1. Requirement Clarification

Clarifying the types of data that need to be handled, how they should be processed, and determining the expected outcomes.

2. Test Writing 

Developers write test cases covering various scenarios related to data handling, e.g. input data, expected output, and any specific conditions or constraints to consider.

3. Test Execution (Red Phase) 

Run the tests and confirm that they fail; the failing tests then guide the code you write to handle the data.

4. Code Implementation (Green Phase) 

Write the minimum amount of code needed to make failing tests pass.

5. Refactoring (Blue Phase) 

Once the tests pass and the code works, it’s time to refactor the code to improve structure, readability and efficiency.

6. Regression Testing

To maintain data accuracy, developers run the existing test suite to ensure changes haven’t introduced any regressions. A short sketch of this flow follows.

Increasingly driven by automation, call center data, campaigns, and dialling plans are prime examples that can all benefit from implementing modern test-driven development strategies.
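
As a hedged illustration of steps 2 to 4 above, here is what a small data-handling test and its minimal implementation might look like in JavaScript with Jest (normalizePhoneNumber is invented for the example):

// Step 2 (red): tests covering input data, expected output, and constraints.
const { normalizePhoneNumber } = require("./phone");

test("strips formatting from a valid number", () => {
  expect(normalizePhoneNumber("(555) 010-1234")).toBe("5550101234");
});

test("rejects input that is not a ten-digit number", () => {
  expect(() => normalizePhoneNumber("not-a-number")).toThrow();
});

// Step 4 (green), in phone.js: the minimum code needed to pass.
function normalizePhoneNumber(raw) {
  const digits = raw.replace(/\D/g, ""); // drop every non-digit character
  if (digits.length !== 10) throw new Error("invalid phone number");
  return digits;
}

module.exports = { normalizePhoneNumber };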

10. Cost Savings

By catching problems early, TDD can reduce the time and resources spent on fixing bugs and addressing issues in later stages of development or production.

Boosting both financial performance and competitiveness in your industry, saving costs allows development teams to deliver projects much faster, with fewer resources.

If you’re looking to take back control of your software development investments, it’s worth delving deeper into application portfolio management best practices to learn more.

Are there any alternatives to Test-Driven Development (TDD)?

Acceptance test-driven development (ATDD)

Acceptance Test-Driven Development (ATDD) is an agile software development process in which acceptance tests, defined together with stakeholders, are written before development begins.

Behavior driven development (BDD)

Behavior-Driven Development (BDD) encourages collaboration amongst a diverse mix of stakeholders to enhance communication. It also ensures software meets the desired behavior and business requirements.

How do you implement Test-Driven Development?

A typical TDD workflow includes the following steps:

1. Write a Test

Write a test to define the expected behavior of the code.

2. Run the Test

Carry out the test and make sure it fails. The code hasn’t been implemented yet, so you want the test to fail and show the test is working properly by accurately reflecting the missing functionality.

3. Write the Code

Create the minimum amount of code needed to pass the test. Fulfill the requirements and nothing more.

4. Refactor the Code (if needed)

Reducing complexities and strengthening readability, refactoring improves the code by making small tweaks without altering the code’s external behavior.

5. Repeat the Process

Repeat the cycle for each new piece of functionality or changes that need to be made.
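
To make the cycle concrete, here is a minimal sketch of one red-green iteration using JavaScript and the Jest test runner (the file and function names are purely illustrative):

// Step 1, in sum.test.js: write the test first. It fails at first
// because sum doesn't exist yet (the "red" phase).
const sum = require("./sum");

test("adds two numbers", () => {
  expect(sum(2, 3)).toBe(5);
});

// Step 3, in sum.js: the minimum code needed to make the test pass
// (the "green" phase). Refactor afterwards if needed.
function sum(a, b) {
  return a + b;
}

module.exports = sum;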

Helping you better understand your domain as you develop it, building robust and scalable apps aligned with your business domain is incredibly important too. For example, you could register a .ai domain if you work in the world of machine learning, or if you have a store based in Anguilla, to boost brand awareness.

Final Thoughts

It’s clear to see that, when used right, the TDD method instills true value and lowers costs across the board, presenting an array of benefits to savvy software development teams.

TDD allows developers to build a safe environment in which to unearth all the bugs before they harm the whole system. If you’re looking for a methodology renowned for consistent quality and flexibility, test-driven development is the way forward.

Categories
Community

State of Developer Wellness report 2024

In 2023, we ran our first ever Developer Wellness survey with the aim of better understanding developers and their levels of well-being and happiness. Last year’s report sparked crucial conversations about well-being in the developer community, shedding light on the challenges developers were facing in their careers.

This year, we return with an even larger survey (nearly 1,000 developers from 86 countries!) to dive deeper. The survey was live for fifteen days during March 2024. More than half of the developers who participated were aged between 18 and 44 years old, but we also had 10 developers younger than 18, while 7% were above 55 years old.

83% of developers reported feeling burnout at some point in their careers

Burnout, characterised by exhaustion, energy depletion, increased distance from your job and reduced efficiency, is a significant concern in the developer world. The demanding nature of the work, coupled with factors like tight deadlines, constant learning curves, and potential isolation, can contribute to this state.

However, despite the concerning figure, many developers have started focusing on their well-being now more than ever. More than half of the developers have access to wellness tech through their employers in 2024 – fitness trackers, mindfulness apps, etc. – and more developers have started prioritising their physical and mental health amidst the stiff competition and uncertainty surrounding the tech industry. 

We found that 34% of developers exercise once or twice per week, while 40% exercise at least 3 times per week! More than half of developers also manage to get 6-7 hours of sleep every night, although there is definitely some room for improvement.

In 2024, 84% of developers had to work overtime at least occasionally

Overtime is a common experience for many developers, although the frequency varies. While more than half (53%) find it acceptable, 39% express dissatisfaction without deeming it a deal-breaker, and 8% consider it overwhelming. 

This distribution suggests a spectrum of attitudes towards work-life balance among developers. It underscores the importance for companies to foster environments that prioritise employee well-being while acknowledging the demands of the tech industry.

But what are the tips/strategies that could help you improve your well-being and start waking up full of energy? How can you set clear boundaries at work?

We got you! You’ll find your well-being toolkit in our Wellness report. On top of that, you’ll also discover:

  • Workplace Perceptions: the workplace setup, employer support, sense of purpose
  • From Burnout to Balance: Are you on the struggle bus? Learn how developers are conquering burnout and achieving work-life harmony.
  • Beyond the Code: work-life balance, recharging & well-being
  • Developer Wellness Champions: Unveiling the secrets to developer well-being, straight from our Developer Nation community members!

and more!

Ready to join a thriving developer community that prioritises well-being?

Download the report today, share it with your network, and let’s build a culture of wellness that promotes the mental, physical and emotional well-being of the developer industry! 

Categories
Community

Driving Digital Transformation: The Crucial Impact of Data Analytics

Picture yourself sailing a boat out on the ocean. Seeing the shore is impossible as thick fog sweeps in over the waves. Fortunately, you can see a lighthouse’s beam warning you to stay clear of the jagged cliffs up ahead. The ship’s captain waves you off when you point out this light and says he’d rather trust his instincts. 

Forrester found that 50% of the decisions made by companies are intuitive and subjective. In the same way that a tall lighthouse warns sailors of danger with its powerful beam, big data analytics can provide the direction and focus needed to drive digital transformation.

With the unexpected disruption in the market, companies needed to digitize their interactions with staff and clients. Organizations of all shapes, sizes, and sectors have had to reinvent their business strategies and procedures, regardless of whether their objective is to compete, survive, or disrupt.

What Does Digital Transformation Mean?

The term “digital transformation” is frequently used, although it doesn’t have a universal definition, and different companies may interpret it in various ways. For a retail store, it could entail modifying the business plan to add online sales alongside its physical location.

However, if a business already has digital commerce set up, it could entail implementing new digital technologies or altering existing procedures to analyze consumer behavior and attributes, hyper-personalize its product offerings, and enhance the customer experience.

Whatever definition you choose, a successful digital transformation requires the proper infrastructure, technology, and data strategy to modernize your business and make it a more competitive and agile player in the market. Technology, digital workflows, and organizational transformation are all necessary. And because data analytics is an essential catalyst for any digital transformation endeavor, it must be prioritized.

Why Does Data Analytics Drive Digital Transformation?

Businesses must adapt as consumers’ expectations and technologies change. A company’s ability to make fast, well-informed, data-driven decisions will determine its ability to compete and prosper. Data can also be sold as a product, allowing your business to compete in the digital market, or be utilized to develop new products and enhance current ones.

Analytics provide the insights that result in well-informed decision-making, whereas data supplies the facts. By using data analytics as the backbone of your digital transformation, from the outset and throughout, you can get past important contemporary business issues that would otherwise stand in the way of successful data projects.

1. Problems with Data Quality

Good data can support a company in achieving its goals. However, failing to take action to guarantee data quality might result in expensive errors and lost opportunities. 

When you value data as an asset for the entire organization, you can take ownership, management, and security of the data into consideration. These factors will improve decision-making throughout the organization by fostering transparency and trust in the data.

2. Data Silos

Segmenting and siloing data across many platforms, technologies, and business divisions can make it more difficult for enterprises to integrate the data, extract insights from it, and maximize its value. 

You can consider technologies and techniques that enable merging data from many sources and systems to view the big picture when you use data analytics as the main driver of your digital transformation.

3. Complications with Legacy Applications and Systems

Legacy systems can be a security risk to the company since they are frequently expensive to update and maintain, and they can sometimes be difficult to integrate with other, more contemporary components of your infrastructure. 

They make it harder for you to compete in the digital market. By approaching this from a data analytics perspective, you may choose technologies appropriate for the job and promote user adoption, agility, improved security, and peak performance.

4. Fulfilling Expectations of Customers

Today’s customers demand flawless end-to-end experiences, and businesses depend on data to help deliver them. Data enables you to gain insight into your target audience’s demographics, requirements, and preferences as well as what, when, and how often they buy and interact.

To satisfy evolving customer expectations, a digital transformation that prioritizes data analytics can facilitate the implementation of technology and procedures that will assist in gathering and analyzing that data.

Benefits of Using Data Analytics in Digital Transformation

There are obvious advantages to following a lighthouse when sailing open waters. The same holds for data analytics. These are just a few of the reasons to incorporate it into your everyday activities, particularly as you head toward digital transformation.

1. Enhances the Efficiency of Digital Transformation

Utilizing data to examine user or client behavior allows you to make adjustments that will boost productivity and hasten the achievement of your objectives. For instance, when customers are abandoning the purchase process, you can use statistics to pinpoint the issue and modify the process accordingly to boost the chance that they will finish it.

2. Assists in Formulating an Effective Digital Transformation Plan

A clearly defined plan is necessary before you start your digital transformation journey. By basing decisions on facts, you can be sure you aren’t acting in a vacuum. Along the road, you can also keep note of the successes and setbacks of your strategy to figure out where, if necessary, to change course.

3. Allows Complex Process Automation

Automation lowers the possibility of error while also saving time and money. You can determine which procedures can be automated and gain insights into the most efficient ways to do it with data analytics. It will also be essential to the automation process, enabling the highest level of efficacy for any AI and machine learning algorithms.

4. Aids in Performance Monitoring

A thorough understanding of the success of your digital transformation can be obtained through data analytics. You can monitor important indicators to ensure your plan works, like page views, sales, and consumer involvement. This enables you to continue the forward motion of your digital transformation by making timely modifications.

5. Boosts Agility

Data analytics makes it possible to form judgments quickly and correctly, a crucial component of agility in the digital transformation process. You are better equipped to react to shifts in the market or client expectations when you have access to the relevant data when required.

Challenges of Employing Data Analytics in Digital Transformation

Data analytics is a great tool, but it comes with some obstacles. To ensure data is reliable and helpful, you must first learn how to collect it and then appropriately evaluate it. Here are several challenges to be aware of.

1. Data Quality

Data analytics in the digital transformation strategy won’t function successfully without best-in-class data. Accordingly, the data must be precise, current, and reliable. It also has to be free of bias and gathered from trustworthy sources. This may present a hurdle, but accurate data analysis requires it.

2. Scalability

Data analytics is not a static process; it must be scalable to meet a company’s growing needs. As more data is generated, analytics must meet the demand. This can be pricey, but it is essential to ensure data analytics is effective and efficient.

3. Security

Addressing security concerns is crucial in data analytics. Because data is acquired from numerous sources, it is critical to guarantee that information is secure and that no illegal access occurs. As firms become reliant on sensitive data, implementing a data governance and management strategy becomes increasingly important.

Best Practices of Data Analytics in Digital Transformation

Knowing where a lighthouse is does not guarantee that your boat will arrive safely on shore. You must have a plan in place for how to use the information. Here are some best practices to follow when you embark on your data analytics journey.

1. Data Visualization

Data visualization is an effective tool for data analysis. It makes data easier to understand by presenting it visually. It can assist in discovering patterns, trends, and connections, allowing for more educated judgments.

Stormboard is a tool that brings all of these principles together. This data-driven collaboration platform delivers data to everyone on your team in an understandable format and automates the reporting process, ensuring that all stakeholders are updated on your processes.

2. Data Democratization

Data democratization means making data available to all stakeholders who require it. This includes internal teams, customers, partners, and the general public. Data democratization allows companies to guarantee that all parties have access to the information they need to make the best decisions possible.

For data to be useful to your team, they must understand where to get it and how to use it. This procedure will necessitate training and updating your team’s analytical capabilities.

3. Automation

One essential element of data analytics is automation. Automation makes it possible to gather, process, and disseminate data quickly and effectively. Organizations may ensure accuracy and consistency while saving money, time, and resources by automating procedures. 

Conclusion

The importance of data analytics in digital transformation has been highlighted by navigating through its domain. Businesses can make better judgments by using predictive modeling and historical data analysis. Managing large amounts of occasionally unstructured data to create real-world applications is becoming easier thanks to automation and the knowledge of data engineering services.

There has never been a stronger link between skillful data analytics and skillful business planning. In a time when strategic foresight is based primarily on data, effectively managing a business requires a firm grasp of data analytics. The capacity to interpret data not only reveals the state of the market but also encourages businesses to anticipate and react to emerging trends, resulting in the development of resilient, flexible, and forward-thinking strategies.

Categories
Community

Boosting Developer Productivity: Tools and Techniques for Efficient Coding

From ensuring that our smartphones operate efficiently to creating the software that runs large enterprise systems, developers are the brains behind much of the technological advancement we’re seeing today. However, software development is neither quick nor easy, and it doesn’t come cheap.

In fact, it takes around 4.5 months for the average software development project to be completed at an average cost of $36,000. With demand for such projects at an all-time high, developers need to get into a flow and experience deep focus to be productive.

That’s why using the right tools and techniques to enhance coding efficiency is so crucial. And that’s exactly what we explore in more detail below. Let’s dive in.

Understanding Developer Productivity

Developer productivity can be understood by exploring some of the objectives and key results (OKRs) against which developers’ work is measured. Some of these are time-to-completion, bug rate, and code coverage.

Despite working toward clear OKRs, developers face common challenges that hinder their productivity. Examples of these include:

  • Interruptions and meetings
  • Micro-management and tight deadlines
  • Vagueness and unclear prioritization
  • A distracting workplace environment
  • Uncontrolled changes in the project’s scope
  • Unclear product definition process
  • Tool multiplicity and hardware
  • Lack of documentation

Techniques and Strategies for Boosting Developer Productivity

Software team leaders and project managers who are aiming to boost the productivity of their developer teams should consider the following strategies:

  • Minimizing distractions and multitasking: When developers write code, they are in a space of deep focus. The smallest distractions could lead to drops in productivity and have other negative effects. The same is true when you require your developers to multitask. Whether it’s attending to incessant phone calls or unplanned stand-ups, it’s necessary to create a positive space where they can thrive. Give them sufficient time to prepare for planned meetings in advance, ensure they are working in a quiet environment, and avoid micro-managing them to avoid frustration and poor productivity. You can also use task apps to organize tasks by priority and set time limits so your developers don’t have to waste time and attention preparing their to-do list. 
  • Optimizing the Integrated Development Environment (IDE): There are software apps that help developers operate more productively. Essentially, this is known as an IDE, and it combines functionalities that include software editing automation, building, testing, and packaging. IDEs can improve coding efficiency through additional capabilities such as syntax highlighting, intelligent code completion, refactoring support, debugging, etc.
  • Clear project specifications: The importance of well-defined project specs in reducing misunderstandings cannot be stressed enough. Project team leads should introduce well-defined project deadlines with achievable milestones along the way. There should also be verification by the client or interested party of the expectations of deliverables that the project should produce upon completion. Other key criteria include having a clear budget, setting out quality assurance requirements, and software requirement specifications (SRS), including functional requirements, non-functional requirements, and technical requirements.
  • Eliminating unneeded tests: While testing may be a natural part of the software development process in ensuring conformity with business requirements and technical specifications, it shouldn’t go overboard. Instead, there should be processes in place that review and aim to optimize the testing activities. Ultimately, this can reduce the execution time for the final product. 
  • Utilizing No-Code Platforms: In recent years, the rise of no-code platforms has offered developers a new approach to streamline development processes. These platforms allow for the creation of software applications without the need for traditional programming, enabling developers to focus on higher-level tasks while still achieving efficient results. Integrating such platforms into development workflows can significantly boost productivity by accelerating the development cycle and reducing the need for manual coding tasks.

Developer Productivity Tools

While there may be many developer time tracking and productivity tools available, we’ve curated the top two to help you with different development tasks. Here are the tools that topped our list:

Sublime Text

Developed by Jon Skinner, Sublime Text is a versatile text editor for code, markup, and documentation. It is incredibly fast in launching and can handle large files with ease. Available for Windows, macOS, and Linux, its cross-platform compatibility means switching between different operating systems is a breeze and no functionalities are lost.

For those developers who would like to customize their coding environment, its functionality can be extended using community-contributed packages. Meanwhile, there are plugins that can be used, too. With a minimalist and clean interface, it offers a clutter-free environment for distraction-free writing.

What is more, it offers the ability to make multiple selections and edits at the same time. It offers a command palette to help you access numerous functions quickly. And for more complex projects, you can split your code into numerous columns or rows for easy comparison and editing. Finally, it can be configured to automatically save your files regularly.

GitHub

While there are distinct advantages and disadvantages of cloud computing, using GitHub is all about the benefits. This is a website and cloud-based service that helps developers boost their productivity. Think of it as a massive shared workspace where multiple developer collaborators can work on a project, store code, and implement version control to manage changes to their code.

It’s all about improving developer productivity and taking it to the next level seamlessly. What is more, it has a user-friendly interface, making it easy for novice and experienced developers to share, merge, change, and write code in one place.

Conclusion

With the techniques and tools mentioned above, project managers can ensure greater developer productivity without compromising the mental health of their team members.

Many developers experience burnout and this needs to be prevented with proper collaboration and communication, supported by the right tools.

Enhancing developer productivity in coding can be a more streamlined process as managers take their wellbeing into account.

Nikola Baldikov is a skilled SEO expert who is dedicated to helping businesses thrive. He is the esteemed founder of InBound Blogging, where his expertise lies in search engine optimization and crafting effective content strategies. Throughout his career, he has had the pleasure of collaborating with a wide range of companies, regardless of their scale, and has consistently aided them in accomplishing their objectives online. In his leisure time, he finds joy in engaging in football matches and dance routines.

Categories
Community

Enhancing Online Security: Best Practices for Developers

Developing a new software platform, mobile application, or online tool can be a great opportunity to offer innovative tools to the public. It can also present some serious risks. There will be those who seek to steal your intellectual property during the dev process. Alternatively, your completed product may be targeted by those who want to exploit valuable user data.

This makes it vital that your development team enhances its online security measures. 

Be Proactive

As a developer — or a leader of a dev team — it’s important not to treat security as a set of superficial defensive measures. This reactive attitude can put you and your applications on the back foot, struggling against the onslaught of threats. You and your team need to be proactive in making security as central and important to development as your coding.

One good approach to this is to make the product secure by design. As the name suggests, this process is about incorporating strong online security into the design phase of the development lifecycle. You’ll avoid waiting to consider security until the testing phase of the project, or fixing bugs in the beta phase, as is common. Instead, alongside brainstorming the key features of your product, your team should be looking at what the specific security challenges of the product are likely to be and how to minimize them. This allows you to build a strong security foundation from the outset.

Another way to be proactive in implementing security measures is to ensure your team follows Secure Software Development Lifecycle (SSDLC) protocols. This is effectively a set of actions that are baked into every task developers on your team perform so that they can identify and handle potential issues before they become problematic. It includes creating a culture of security in which threats are discussed and considered regularly. It should involve frequent cybersecurity training so that your dev team is fully aware of the latest threats and protection techniques. Importantly, the development environment itself should be secure, both digitally and physically.

Utilize Advanced Encryption Techniques

Encryption is one of the most powerful tools for ensuring online security. This is particularly effective for minimizing unauthorized access to data that is likely to be shared online both during the development lifecycle and by consumers when using the final product.

Identify and use strong encryption algorithms

Algorithms are the basis upon which encryption operates. Therefore, it’s important to utilize the most appropriate algorithms both for the product itself and protecting your networks. For instance, Advanced Encryption Standard (AES) is a common tool for development teams. This symmetric algorithm breaks data down into fixed-size blocks and performs multiple encryption rounds on each one. Some software and apps that require end-user authentication to access sensitive data — like financial information — may be better served by asymmetric encryption, such as the Rivest-Shamir-Adleman (RSA) protocol.
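
As a minimal sketch, here is how AES-256-GCM encryption might look in Node.js using the built-in crypto module (key handling is deliberately simplified; in practice the key would come from a managed key store, as discussed next):

const crypto = require("crypto");

const key = crypto.randomBytes(32); // 256-bit key, generated inline for illustration only
const iv = crypto.randomBytes(12);  // a unique IV must be used for every message

function encrypt(plaintext) {
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { ciphertext, authTag: cipher.getAuthTag() };
}

function decrypt({ ciphertext, authTag }) {
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(authTag); // GCM verifies integrity as well as confidentiality
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

console.log(decrypt(encrypt("sensitive user data"))); // "sensitive user data"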

Adopting solid key management

Any encryption algorithm you adopt requires keys to be generated and shared to decrypt the information. It’s vital that you implement management measures to mitigate unauthorized access to and use of these keys. It’s important to formalize which members of the team can obtain and use these keys. It’s also vital to regularly change keys, much as you might update a password to keep it strong.

Conduct Vulnerability Assessments and Improvements

The cybersecurity landscape is in flux. Even within the timeline of your development process, new threats can emerge and come into favor. One of the best practices developers need to adopt is conducting regular vulnerability assessments and making relevant improvements.

Perhaps the most convenient approach during development is using automated scanning software. You can invest in tools that scan both the specific code of your project alongside your overall IT infrastructure. There’s even an increasing number of artificial intelligence (AI) driven scanners that use machine learning algorithms to learn about and adapt to the security landscape in relation to your development. In addition, utilizing a DevOps monitoring tool can allow you to see real-time performance issues that could suggest weaknesses in security, such as slow response times.

It’s also wise to remember that your development team’s workflow can be a source of vulnerability. For instance, too many unnecessary repetitive security processes might cause dev staff to become complacent and overlook key protective actions. A commitment to regular process improvement can help you not only minimize weak points but also boost efficiency. Not to mention it helps you to notice changes in the security landscape and adapt to them. You can do this by taking time to map out both formal and informal processes visually in flow diagrams at milestones during the development lifecycle. This helps you to analyze where inefficiencies occur and what processes you can consolidate and strengthen.

Conclusion

With some solid security best practices, you can ensure your development project is protected from threats throughout the project’s life cycle. This should include adopting secure-by-design protocols and strong encryption, among other measures. Wherever possible, make certain that you have a cybersecurity expert embedded in your dev team or available to consult regularly. This can help you both implement effective processes and stay abreast of any potential threats you need to prepare for.

Categories
Community

8 Indexing Strategies to Optimize Database Performance

Databases provide the backbone for almost every application and system we rely on, acting like a digital filing system for storing and retrieving essential information. Whether it’s organizing customer data in a CRM or handling transactions in a banking system, an efficient database is crucial for a smooth user experience.

However, when we get into large volumes of data and more complex queries, database management can become daunting. That’s where good indexing strategies can make all the difference. 

Think of it like tidying up your digital filing cabinet so you can quickly find what you need without rummaging through multiple drawers and folders to locate the correct file.

By organizing and structuring your data in a way that facilitates quick retrieval, indexing can make a big difference in how well your database performs. Here, we’ll explore some strategies to help you do just that.

Database indexing best practices

Before you settle on a strategy, it’s worth understanding the different ways you can approach indexing to improve query selection and overall database performance.

Identify key queries and columns

Before getting started with indexing, you need to identify the type of queries your application is running regularly and which columns are involved in those queries. This helps you to focus your efforts on areas that will give the best results. There’s no point in spending time and energy indexing columns that rarely get used.

For example, let’s say you’re developing an app for an online bookstore, and one of the most common queries is searching for books by author name. In this case, creating an index on the “author” column can dramatically improve the performance of search queries.

Data orchestration tools can examine query patterns and usage statistics to pinpoint the most commonly executed queries in your database. What is orchestration, we hear you ask. 

When we talk about data, orchestration is the process of managing and coordinating various tasks like collecting, processing, and analyzing data from different sources. This helps to keep data operations well-organized and efficient.

By understanding which queries are commonly used, database administrators can prioritize indexing efforts on the columns involved in these queries.

Avoid over-indexing

While indexing can undoubtedly speed up query performance, as the saying goes, you can have too much of a good thing. 

Over-indexing isn’t just a waste of time; it can actually have the opposite of the desired effect and hinder database performance.

Keep in mind that every index you add takes up storage space and needs managing within the database. Plus, having too many indexes in play can slow down your insert and update performance because your database will be working overtime to update multiple indexes with every change.

To avoid this, follow data indexing best practices such as those covered in Apache Hive documentation. Aim to strike a balance between query performance and keeping the database easy to manage. 

Focus on indexing columns that are frequently used in WHERE clauses, JOIN conditions, and ORDER BY clauses. Also, think about using composite indexes for queries that involve multiple columns.

Regularly monitor and tune indexes

Creating indexes isn’t one of those jobs you can do once and forget about. Because data and query patterns often evolve over time, you need to regularly check and adjust them. 

It’s similar to the practices of Machine Learning Ops (MLOps), where ongoing monitoring ensures the model is still effective. Similarly, consistently reviewing and fine-tuning indexes plays a pivotal role in managing their effectiveness. 

Failure to do so can lead to technical debt, where outdated or inefficient indexes accumulate over time, resulting in degraded performance and increased maintenance overhead.

Use SQL tools like MySQL’s EXPLAIN or Microsoft SQL Server’s Query Execution Plan. These will give you a solid view of how queries are being executed and which indexes are well utilized. You can then more easily see where to add missing indexes and remove ones you no longer need. It also helps you spot opportunities to update existing ones to better suit query patterns.

Let’s look at what that means in practice. Suppose you notice a particular query performing poorly despite having an index. Upon closer inspection, you discover that the index’s cardinality (i.e. uniqueness) is low, leading to poor selectivity. In this case, modifying the index or adding additional columns to improve selectivity could significantly boost that query’s performance.
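
For instance, a quick check in MySQL might look like this (the table and column names are hypothetical):

EXPLAIN SELECT title, price FROM books WHERE author = 'Ursula K. Le Guin';

-- If the plan reports a full table scan (type: ALL), adding an index may help:
CREATE INDEX idx_books_author ON books (author);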

Consider using covering indexes

A covering index includes all the columns necessary to fulfill a query. This means that the database doesn’t need to keep accessing the underlying table. 

To return to our filing cabinet analogy, you can think of it as having the right folders set out in front of you so you don’t have to search through the entire cabinet to find what you need. Using covering indexes can speed up search queries by reducing the number of overall disk I/O operations.

For example, consider a call center analytics software that logs details of each customer interaction. This might include data such as:

  • Caller ID
  • Call duration
  • Call timestamp
  • Outcome

If you’re frequently running reports on the total duration of calls, creating a covering index on the caller ID and call duration fields can optimize query performance. This allows the software to retrieve call duration information directly from the index without having to repeatedly access the main call log table.
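
As a sketch, with assumed table and column names, the covering index and the report it serves might look like this:

-- The index holds every column the report needs:
CREATE INDEX idx_calls_caller_duration ON call_log (caller_id, call_duration);

-- This query can now be answered from the index alone,
-- without touching the main call_log table:
SELECT caller_id, SUM(call_duration) AS total_duration
FROM call_log
GROUP BY caller_id;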

Monitor and manage index fragmentation

Index fragmentation occurs when the logical sequence of index pages is not in sync with the physical arrangement. This can make data storage less efficient and slow down search queries. It’s like a library’s card catalog not matching the actual locations of the books on the shelves. 

If you don’t catch this and fix it, the problem will only get worse as more data is added or updated. It’s essential to keep a close eye on your indexes and tidy them up regularly. 

One solution is containerization, which provides a structured environment for managing databases. Most modern systems also offer tools for detecting and addressing index fragmentation, such as rebuilding or reorganizing indexes.
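
Typical maintenance commands look like the following (syntax varies by engine; these lines are illustrative, not prescriptive):

ALTER INDEX ALL ON call_log REBUILD;     -- SQL Server: rebuild heavily fragmented indexes
ALTER INDEX ALL ON call_log REORGANIZE;  -- SQL Server: lighter-weight defragmentation
REINDEX TABLE call_log;                  -- PostgreSQL's equivalent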

8 database indexing strategies to try

Not all indexing strategies are created equal. When it comes to finding the best indexing strategy for your database, you need to consider a few things, including:

  • What type of data you’re working with
  • Which queries you run often
  • What performance goals you want to achieve

With that in mind, here are a few examples of indexing strategies for different situations.

1. Single-column indexes

Single-column indexes work well for databases with tables containing a large number of rows and where queries frequently filter or sort data based on a single column. For instance, if you’re regularly looking up users by their usernames, create an index for the “username” column in the user table for faster retrieval.
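
A sketch of that index, with assumed names:

CREATE INDEX idx_users_username ON users (username);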

2. Composite indexes

If your common queries involve columns in a WHERE clause or involve ORDER BY and GROUP BY operations on multiple columns, composite indexes might be more useful. For example, if you have a sales database where you’re frequently searching for sales by date and location together, you can create an index for both the “date” and “location” columns.
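
For example (names assumed):

CREATE INDEX idx_sales_date_location ON sales (sale_date, location);

-- Useful for queries that filter on both columns:
SELECT * FROM sales WHERE sale_date = '2024-03-01' AND location = 'Berlin';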

3. Unique indexes

These ensure data integrity by enforcing uniqueness on one or more columns. They are beneficial for columns that should not contain duplicate values, such as primary keys or email addresses in a user table.

4. Clustered indexes

Some databases feature rows that are physically stored in order based on the index key. In these cases, clustered indexes can improve the performance of range queries or sequential scans. For example, if you organize time-series data by date, clustering the primary key will make it quicker to find information chronologically.

5. Covering indexes

These indexes contain all necessary information for answering a query so the database doesn’t have to revert to the original data table. They’re helpful for queries with SELECT, JOIN, and WHERE clauses. 

This can significantly improve query performance, especially in scenarios where you might need to generate data-driven insights from complex queries that involve multiple columns or tables. For example, if you often create reports using data from multiple columns, a covering index could include all those columns to speed up the process.

For organizations managing large-scale data processing tasks, such as those involving HPC batch jobs, implementing covering indexes can significantly improve query performance, especially when dealing with complex queries across multiple columns or tables.

Another crucial consideration for database optimization is ensuring smooth operations during critical periods, such as website launches. Utilizing a comprehensive website launch checklist can help ensure that your database infrastructure is adequately prepared to handle increased traffic and demands on query performance during such events.

6. Partial indexes

When a subset of data is frequently queried, partial indexes can be created to cover only that subset, reducing the index size and improving query performance. An example is creating a partial index for active users in a user table where only rows with “active = true” are indexed. 
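
In PostgreSQL, which supports partial indexes, that example might be written like this (names assumed):

CREATE INDEX idx_users_active_username ON users (username) WHERE active = true;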

In cloud environments dealing with massive datasets, partial indexes can help you manage resources more efficiently and maintain optimal performance. What is cloud native architecture? This refers to apps built specifically to work well in cloud environments. It involves using cloud services and concepts like microservices, containerization, and orchestration. It’s frequently used for apps that need to perform in an agile environment and be quickly scaled up or down.

7. Expression indexes

These indexes are created based on expressions or functions applied to one or more columns. They are useful for queries involving computed values or transformations. For example, indexing the result of a mathematical operation or string concatenation performed on columns.
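
A PostgreSQL-style sketch (names assumed):

CREATE INDEX idx_users_email_lower ON users (lower(email));

-- A query using the same expression can then use the index:
SELECT * FROM users WHERE lower(email) = 'ada@example.com';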

8. Hash indexes

Particularly useful for equality comparisons, hash indexes can provide fast access to data with low cardinality columns or when accessing a large number of rows randomly. They are suitable for scenarios like indexing boolean or enumerated columns.

Database indexing – optimize database performance

In database management, optimizing queries is key to ensuring your database performs well across all platforms, from web to mobile. To do this, you need a solid indexing strategy. 

Choosing the right database index can directly impact business operations. When your database is well-organized, it means employees and users can find what they need quickly, leading to tangible benefits from improved response times to streamlined operations and reduced costs.

Understanding the different approaches and best practices means you’ll be better equipped to streamline your data and manage it efficiently.

Pohan Lin – Senior Web Marketing and Localizations Manager

Pohan Lin is the Senior Web Marketing and Localizations Manager at Databricks, a global Data and AI provider connecting the features of data warehouses and data lakes to create lakehouse architecture. With over 18 years of experience in web marketing, online SaaS business, and ecommerce growth, Pohan is passionate about innovation and is dedicated to communicating the significant impact data has in marketing. Pohan has written for other domains such as Spiceworks and Parcel Monitor. Here is his LinkedIn.

Categories
Community Tips

How Do UX Design Principles Elevate Customer Experiences?

User Experience (UX) Design principles play a key role in increasing customer experience. UX principles focus on creating products that are user-friendly and meaningful to use.

If you want to design a user-friendly interface, white-label service providers who are experts in design can help you out. White-label web development companies have UI/UX experts who know how to place each UI element in your product. So, white-label agencies can help you build a more responsive and highly interactive design that helps users move around the website.

Here are the things white-label agencies take care of while designing the UI/UX of your product.

6 UX Design Principles for Increasing Customer Experience

#1 Simple and Clear Designs

Keeping your website design simple and clear is an integral part of your UX strategy. This involves defining the navigation menus clearly, designing intuitive layouts, and using effective language.

Make sure to add inputs and messages along the design wherever needed. It will help enhance the experience of the user. Your white-label design service provider will prioritize simplicity while crafting the solution. They will use the existing successful websites as their guide to define a clean and organized layout. 

The services will devise a strategy to make navigation intuitive and guided. This would help people move around without being stuck at any point. Moreover, they can plan for a “one-task, one-screen” layout that avoids clutter. 

According to the research paper published by Core, simplicity in design is about going deep into your user’s minds. There are 3 ways to achieve simplicity in design:

• Maintain clarity: Understand and design for your users’ main goals

• Make use of automation: Design for a minimum amount of conscious and cognitive effort

• Limit options: Design for a strong “information scent”

#2 Crafting Responsive Designs

Your users access the same websites across devices and resolutions. For instance, a user may access the website on the Chrome browser on their Android tablet. Another user may access the website on their iPhone browser. 

It is also possible the same user opens the website on an Android phone and tablet simultaneously. Responsive designs will automatically adapt to the platform’s needs and screen sizes. The design will stay consistent, and users will not face any friction while moving around. This adaptability will enhance the user’s experience.

Your white-label service provider can help you implement the responsive design layout. They are crucial in imagining and testing responsive designs. They will thoroughly test the design and address layout issues in advance.

#3 Ensuring Design Consistency

Consistency is key to keeping your users happy. You must ensure your design is aligned with your brand’s identity. Each element must be cohesive and defined. 

You must add the brand’s color scheme, typography, and design styles while creating the website. This would make it recognizable and relatable to the users. You can improve the overall appearance and ensure a professional design outcome. 

A white-label website design and development service provider works with defined guidelines. They are aware of the design expectations and nuances. As a result, companies can offer clean and consistent designs. Companies would design the wireframe to prototype to eliminate inconsistencies and provide a smooth layout.

#4 Well-defined Information Architecture

Information flow is pivotal in enhancing the user’s experience. You must define a logical movement and the organization of the content. When the user moves from one particular information to another, they must sense an intuitive flow. 

This would increase the user’s engagement on the website and allow them to find the information faster.  You can connect with your white-label service provider to define the sitemap and wireframes for your website. This would establish an organized information flow. You can design the user journeys and map them in your website design. 

Companies can also help you conduct usability tests and validate the information flow for engagement.

#5 Iterative Design with Feedback Loops

Knowing what your users think of the design is crucial for designing an appropriate website. You must ensure you have a feedback loop that brings back the messages from the user. This would help build a user-centric website.

You must use an iterative design strategy to implement the loops and leverage them to avail the feedback. 

You must have defined mechanisms to help collect the user’s feedback. This would include surveys and analytics tools. White-label service providers can implement these feedback loops and incorporate iterative design for excellent user insights.

Companies can use user insights to build an optimized website aligned with the user’s preferences and needs.

#6 Accessibility Design Considerations

Diverse users, including people with disabilities, will access your website. You must prepare your website for all user types. Ensure the website is aligned with ethical design considerations while designing for web accessibility.

Implementation: The white-label service provider is well aware of accessibility guidelines. Their understanding of accessibility standards would help them implement the right headings and alt text for images.

Moreover, they would ensure the design is accessible via screen readers and other inputs. This would address all the barriers and ensure inclusivity.

Conclusion

It is crucial to create UX-led white-label websites. This would enhance your design strategy and extend exceptional results. You can elevate your user’s experience by implementing consistent and clear designs. 

Moreover, it offers an organized information architecture and accessible design that boosts the website’s quality. When your website meets the functional, aesthetic, and quality standards, it is usable, user-friendly, and highly engaging.

This would improve session length and the conversion rate for your business. Prioritizing UX design principles in your website design is no longer just a strategy; it is the definition of a successful website.

Categories
Community

Building a Restaurant Guide with Next.js and Storyblok

In this tutorial, we will explore how to integrate Next.js with Storyblok while learning some Next.js and headless CMS features. We will also see how Storyblok works and enable the live preview of Storyblok’s Visual Editor. To make things interesting, we will build a Restaurant Guide along the way. Here is a look at what we will be building –

Hint - If you're in a hurry, you can check out the repo on GitHub. You can also clone the space by clicking here. 

Requirements

To follow this tutorial, you will need the following:

  • Basic understanding of JavaScript, React and Next.js
  • Node.js LTS version
  • An account in the Storyblok App

Creating a Next.js Project

Let’s first create a Next.js project with the following command – 

npx create-next-app@latest

Feel free to choose your preferences, but make sure to add Tailwind CSS to the project, as the code in this tutorial uses Tailwind for styling. Let’s also install the Storyblok React SDK with the following command –

npm i @storyblok/react

The SDK will allow us to interact with the API and enable live editing for us. Let’s now start the development server –

npm run dev

Once the server is started, open your browser at http://localhost:3000. You should see the following screen – 

For security reasons, http is not supported inside the Visual Editor of Storyblok, so we need to set up a proxy from an https port to http port 3000. You can read more about it in the setting up a proxy guide. We will be using port 3010 for https.
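One way to set this up (among those covered in the guide) is the local-ssl-proxy npm package; the tool choice here is just one option, not a requirement –

npx local-ssl-proxy --source 3010 --target 3000

With the Next.js dev server still running on port 3000, this serves the same app over https on port 3010.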

Hint - Check out Storyblok’s Kickstart page to see in detail how to connect Storyblok with any technology, in both new and existing projects.

Creating a Space in Storyblok

Now, let’s log in to the Storyblok App. Once logged in, you will see a list of your spaces; every card here is a space. 

You can consider a space a content repository that contains everything: content, users, components, and so on. For this tutorial, let’s treat one space as one project. You can read more about spaces in the Structure of Content guide on the Storyblok website. Let’s now create a new space for our Next.js project by clicking the Add Space button at the top right. Choose New Space from the list of options. There are a couple of plans to choose from when you create a space; for this tutorial, feel free to choose the Community Plan, which is free and requires no payment information.

Then, we can enter the Space name and Server location.

Setting up the Space

Once the space is created, you will see that there is a Home Story created inside the Content section and a couple of blocks (or components) in the Block Library section. We will be creating more stories and blocks according to the requirements of our app. As Storyblok is a headless CMS, you can create stories built from different blocks and consume the content inside the stories with the help of APIs on any frontend layer (in this case, Next.js).

Hint - You can read more about a Story in the Structure of Content guide, it is basically an entry.

Let’s go to the settings of the space, and then to the Visual Editor tab. This is where you set the location of the Visual Editor. Let’s change it to https://localhost:3010/, as this is our dev server after the proxy.

Now let’s go to the home story inside the Content section; you will see that our Next.js website loads there. It shows a 404 for now, but the aim was to set up the Visual Editor environment.

Integrating Next.js and Storyblok

Now, let’s add some code to our Next.js project to get the integration with Storyblok working and add live editing. First, delete the page.js file inside the app folder. Then create a [[...slug]] folder and a page.js file inside it. This optional catch-all segment lets us handle every possible dynamic route. You can read more about Dynamic Routes in the official Next.js docs.
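After these changes, the relevant part of the project should look roughly like this –

app/
├── [[...slug]]/
│   └── page.js    // catches every route, including the root
├── layout.js
└── globals.css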

We will add the code that renders every route inside this page.js file, but let’s first initialize Storyblok. To do so, go to the layout.js file and add the storyblokInit function to it. You can add the following –


import { storyblokInit, apiPlugin} from "@storyblok/react/rsc" 

// Existing Code

storyblokInit({
  accessToken: 'your-access-token',
  use: [apiPlugin]
})

We import storyblokInit and apiPlugin from the package and call the initialization function. You can find your space’s access token inside the space settings; make sure to copy the preview token. You can create more tokens as required.
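One assumption worth flagging: the SDK defaults to the EU region. If you chose a different server location when creating the space (for example, the US), you may also need to pass the region through apiOptions – a minimal sketch:

storyblokInit({
  accessToken: 'your-access-token',
  use: [apiPlugin],
  // Assumption: the space was created in the US region; omit for the default EU region
  apiOptions: {
    region: 'us',
  },
})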

apiPlugin enables the functionality of the Storyblok JS Client, which lets us interact with the Storyblok API through the SDK. If you don’t want to add it, you can fetch the content your preferred way by making a GET request to Storyblok’s Content Delivery API.
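For illustration, a plain GET request against the Content Delivery API would look roughly like this (the token is a placeholder and error handling is omitted):

// Fetch the draft version of the home story without the SDK client
const res = await fetch(
  "https://api.storyblok.com/v2/cdn/stories/home?version=draft&token=your-access-token"
);
const { story } = await res.json();
console.log(story.content);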

Once this is done, we also need to add a special wrapper component that will allow us to make edits in real time. As Next.js does everything on the server side by default with React Server Components, it is not possible to run client-side JavaScript there and listen to such events. Hence, this component will be a client component. 

Create a components folder, then create a new file named StoryblokProvider.js inside it. Paste the following code into that file – 


/** 1. Tag it as client component */
"use client";
import { storyblokInit, apiPlugin } from "@storyblok/react/rsc";

/** 2. Import your components */
// import Page from "./Page"

/** 3. Initialize it as usual */
storyblokInit({
  accessToken: "your-access-token",
  use: [apiPlugin],
  components: {},
});

export default function StoryblokProvider({ children }) {
  return children;
}

Here we do the initialization again, on the client side; the earlier initialization in the layout is used to fetch the data. This time we also pass a components key, which will contain the map of all the components we create for dynamic rendering.

We need to wrap the app inside this component. The layout.js file should now look like this – 


import { storyblokInit, apiPlugin} from "@storyblok/react/rsc"
import StoryblokProvider from "@/components/StoryblokProvider"
import './globals.css';
export const metadata = {
  title: 'Storyblok and Next.js 14',
  description: 'A Next.js and Storyblok app using app router ',
}
storyblokInit({
  accessToken: 'your-access-token',
  use: [apiPlugin]
})
export default function RootLayout({ children }) {
  return (
    <StoryblokProvider>
      <html lang="en">
        <body>{children}</body>
      </html>
    </StoryblokProvider>
  )
}

Now let’s add the logic to the page.js file to render all the routes. Paste the following code inside the file – 


import { getStoryblokApi } from "@storyblok/react/rsc";
import StoryblokStory from "@storyblok/react/story";

export default async function Page({ params }) {
  let slug = params.slug ? params.slug.join("/") : "home";

  const storyblokApi = getStoryblokApi();
  let { data } = await storyblokApi.get(
    `cdn/stories/${slug}`,
    { version: "draft" },
    { cache: "no-store" }
  );

  return (
    <div className="mb-40">
      <StoryblokStory story={data.story} bridgeOptions={{}} />
    </div>
  );
}

export async function generateStaticParams() {

  const storyblokApi = getStoryblokApi();
  let { data } = await storyblokApi.get("cdn/links/", {
    version: "draft",
  });

  let paths = [];

  Object.keys(data.links).forEach((linkKey) => {
    if (data.links[linkKey].is_folder) {
      return;
    }
    const slug = data.links[linkKey].slug;
    if (slug == "home") {
      return;
    }
    let splittedSlug = slug.split("/");
    paths.push({ slug: splittedSlug });
  });
  return paths;
}

In this file, we import getStoryblokApi, which is used to fetch data from Storyblok. The first thing to look at is the generateStaticParams function provided by Next.js. This function defines the dynamic routes that should be statically generated at build time, which is pretty useful for static generation. You can read more about the function in the Next.js Docs.

In this function, we fetch all the links that exist inside Storyblok, which means we get all the entries. We use the links endpoint of the API, which gives us access to all the links without having to fetch the complete stories. You can take a look at the links endpoint in the API reference. We skip the home slug here, as the root route (with no slug segments) already renders it. 

Once we have all the links, we return the paths array in the format generateStaticParams requires. From there, the Page function handles the rendering logic: it first retrieves the slug and then fetches the content for that slug using the API. In the fetch call, the first parameter is the slug, the second is the API options, where we define the version in this case (more options can be passed), and the third lets us pass custom fetch options. 

Storyblok stores two versions of content: draft and published. By default, when you create something or make changes, the new content is stored as a draft until you hit Publish. 

Hint - Only Preview token can fetch the draft as well as the published version of content. Public token can only fetch the published version.
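In a real project you would typically fetch the draft version during development and the published version in production. A minimal sketch of that switch (the environment check is just one way to do it):

// Choose the content version based on the environment
const version =
  process.env.NODE_ENV === "production" ? "published" : "draft";

let { data } = await storyblokApi.get(`cdn/stories/${slug}`, { version });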

We also pass { cache: "no-store" } as the third parameter so that Next.js does not cache the request and we always see the latest content as it changes.
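If you do want some caching in production, Next.js also supports time-based revalidation through the same fetch options; assuming the SDK forwards these fetch options to fetch (as it does with the cache option above), a sketch would be:

// Re-fetch from Storyblok at most once every 60 seconds instead of never caching
let { data } = await storyblokApi.get(
  `cdn/stories/${slug}`,
  { version: "draft" },
  { next: { revalidate: 60 } }
);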

After we get the data, we use the StoryblokStory component to dynamically render the components received in the content. We pass the story property, which contains the content of the story, and the bridgeOptions property, which takes the options for the bridge. 

Now with this all done, we are ready to create the new stories and components! 

Creating Stories and Components

Let’s start by creating a Restaurant component first. This will help us understand the basics, and then we can add multiple fields as well as components. Go to the Block Library in Storyblok; you will see that a few blocks have already been created. We won’t be using these, so let’s create a new one by clicking the Create New button at the top right. Name it restaurant and choose Content type block. 

There are three different types of components in Storyblok: Content type blocks can be standalone stories, Nestable blocks can be added inside other blocks, and Universal blocks can be used as both.

Now once the component is added, we need to add a couple of fields to it. Let’s start by adding a name field and a background image field. The preselected field type is Text, but you can change it while creating the field or after it is created. For background_image, select (or change to) the Asset type.

Once the component is created, we can use it to create new Restaurant stories that store the content for each restaurant. Let’s go to the Content section and add a new folder named Restaurants. This folder will store all the restaurants. Select Restaurant as the default content type.

Once the folder is created, add a new story inside it for a restaurant. Feel free to choose the name of the restaurant. 

Once the story is created, you will see the fields that we added in our component. 

The fields are empty, and our frontend is not showing anything at this moment. Add content to the fields and hit the Save button at the top right. Even after saving, nothing is shown in the Visual Editor. This is because we are missing one important step – creating the component on the frontend. You should also see a warning in the terminal saying that the component doesn’t exist.

Let’s now create a Restaurant component in our Next.js project. Create a new file named Restaurant.js inside the components folder and paste the following code –


import { storyblokEditable } from "@storyblok/react/rsc";

const Restaurant = ({ blok }) => (
  <main {...storyblokEditable(blok)}>
    <div>
      <div
        className={`min-h-[500px] relative
          flex
          items-end
          justify-center
          p-9
          my-6
          rounded-[5px]
          overflow-hidden`}
      >
        <div className="relative z-10 text-center">
          <h1 className="text-7xl text-white font-bold mb-3">{blok.name}</h1>
        </div>
        <img
          src={`${blok.background_image?.filename}`}
          alt={blok.background_image?.alt}
          className="absolute top-0 left-0 z-0 w-full h-full object-cover"
        />
      </div>
    </div>
  </main>
);
export default Restaurant;

In this component, we receive a property named blok, which StoryblokStory passes down from the story we gave it. This blok property contains all the information coming from the story – in this case, the name we added along with the background_image. We can access those properties directly on the blok object: for the name it is blok.name, and for the image it is blok.background_image?.filename. This is because an asset field gives you an object that contains the image’s URL under the property filename, along with other information.
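For reference, the value of an asset field is an object shaped roughly like this (all values below are hypothetical):

// Illustrative shape of blok.background_image
{
  id: 123456,
  alt: "A plate of pasta",
  name: "",
  focus: "",
  title: "",
  filename: "https://a.storyblok.com/f/your-space-id/1920x1080/abc123/pasta.jpg",
  copyright: "",
  fieldtype: "asset"
}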

The components where live editing is required must have the storyblokEditable function applied to the root element, as we have here on the main tag. This gives us the dotted outlines inside the Visual Editor and makes the components clickable.

Once we save this component, we also need to add it to the initialization in the StoryblokProvider.js file – 


/** 1. Tag it as client component */
"use client";
import { storyblokInit, apiPlugin } from "@storyblok/react/rsc";

/** 2. Import your components */
import Restaurant from "./Restaurant";

/** 3. Initialize it as usual */
storyblokInit({
  accessToken: "your-access-token",
  use: [apiPlugin],
  components: {
    'restaurant': Restaurant
  },
});
export default function StoryblokProvider({ children }) {
  return children;
}

As soon as we save this, we should see the preview in the Visual Editor! Now, if you try to change the text or the image, you will even see a live preview in real time. Congratulations, we now have everything working for us. Now it is all about creating new fields and new components. 

Let’s now add the following fields (along with the types) to the Restaurant block – 

  • description – Textarea 
  • cuisines – Multi-Options
  • vegan – Boolean
  • city – Text
  • dishes – Blocks

You can add more fields of your choice. Once the fields are added, we just need to map them on the frontend so our Restaurant component can use them. 

For the cuisines field that we added, you can add options of your choice.

The dishes field that we added is of type Blocks. Blocks is a special type that allows other nestable components to be inserted inside it. As it is recommended to keep components as granular as possible, we can make another component named dish that can be added inside that field.

Let’s add a new nestable block named dish with the following fields – 

  • name – Text
  • description – Textarea
  • image – Asset

Let’s now go to the newly added restaurant, fill in the other fields, and add a dish block there. 

Now, to make all these fields visible, we need to add them to our Restaurant component in Next.js, and we also need to add a Dish component on our frontend. Paste the following code into a new file named Dish.js under the components folder –


import { storyblokEditable } from "@storyblok/react/rsc";
const Dish = ({ dish }) => {
  return (
    <section {...storyblokEditable(dish)}>
      <div className="mx-auto flex w-80 flex-col justify-center bg-white rounded-2xl shadow-xl shadow-gray-400/20">
        <img
          className="aspect-video w-80 rounded-t-2xl object-cover object-center"
          src={dish.image.filename}
          alt={dish.image.alt}
        />
        <div className="p-6">
          {/* <small className="text-gray-900 text-xs">
            Rating - {dish.rating.value}
          </small> */}
          <h1 className="text-2xl font-medium text-gray-700 pb-2">
            {dish.name}
          </h1>
          <p className="text text-gray-500 leading-6">{dish.description}</p>
        </div>
      </div>
    </section>
  );
};
export default Dish;

Make sure to add this to the storyblokInit components map, just as we did for the Restaurant component. Then paste the following code into the Restaurant.js file – 




import { storyblokEditable } from "@storyblok/react/rsc";
import Dish from "./Dish";
const Restaurant = ({ blok }) => (
  <main {...storyblokEditable(blok)}>
    <div>
      <div
        className={`min-h-[500px]
          relative
          flex
          items-end
          justify-center
          p-9
          my-6
          rounded-[5px]
          overflow-hidden`}
      >
        <div className="relative z-10 text-center">
          <h1 className="text-7xl text-white font-bold mb-3">{blok.name}</h1>
        </div>
        <img
          src={`${blok.background_image?.filename}`}
          alt={blok.background_image?.alt}
          className="absolute top-0 left-0 z-0 w-full h-full object-cover"
        />
      </div>
      <div className="px-6">
        <h2 className="text-2xl font-bold">{blok.description}</h2>
        <hr className="flex-grow border-t mt-2 border-gray-300" />
        <div className="flex justify-between mt-4">
          <p className="text-xl">📍{blok.city}</p>
          <div className="flex">
            {blok.cuisines.map((c) => (
              <span
                key={c}
                className="px-4 py-1 rounded-full mx-2 text-white text-sm bg-green-500"
              >
                {c}
              </span>
            ))}
          </div>
        </div>
        <div className="flex justify-between gap-4 mt-4">
          <p className="font-thin">Vegan {blok.vegan ? "✅" : "❌"}</p>
          {/* <p className="font-thin">
            Rating - <span className="font-bold">{blok.rating.value}</span>
          </p> */}
        </div>
      </div>
      <div className="mt-8 grid w-full grid-cols-1 gap-2 mx-auto sm:grid-cols-3">
        {/* Key each dish by its unique _uid to avoid React key warnings */}
        {blok.dishes.map((d) => (
          <div key={d._uid}>
            <Dish dish={d}></Dish>
          </div>
        ))}
      </div>
    </div>
  </main>
);
export default Restaurant;

Once this is done, we should see the other fields rendered as well. The best part is that whenever you change anything – adding or deleting a dish, changing the images or text – it is all visible in real time. Feel free to try dragging the components around as well to see the power of the Visual Editor.

At this point, we have covered all the important pieces and our website is fully functional. To make it better, let’s add a few more restaurants to the folder. 

To make our website look good, let’s improve the landing page as well. Delete the existing home story, then go to the Block Library to make some changes and add new blocks. You can go ahead and delete the page block as well. Let’s create the following components now – 

  • landing_page
    • body – Blocks
  • hero
    • headline – Text
    • subheadline – Text
    • image – Asset
    • layout – Single-Option with two options (constrained and full_width)
  • featured_restaurants
    • restaurants – Multi-Options

In the restaurants field of the featured_restaurants block, change the source to Stories and set the folder path as well as the content type, as shown below – 

This will allow us to select the restaurants in the block from the Restaurants folder.

Now let’s add the following components to our Next.js project –

LandingPage.js – 


import { storyblokEditable, StoryblokComponent } from "@storyblok/react/rsc";
 
const LandingPage = ({ blok, restaurants }) => (
  <main {...storyblokEditable(blok)}>
    {blok.body.map((nestedBlok) => (
      <StoryblokComponent restaurants={restaurants} blok={nestedBlok} key={nestedBlok._uid} />
    ))}
  </main>
);
 
export default LandingPage;

The StoryblokComponent here allows us to render the components dynamically; it is also what the StoryblokStory component uses behind the scenes.

Hero.js


import { storyblokEditable } from "@storyblok/react/rsc";
const Hero = ({ blok }) => {
  return (
    <div {...storyblokEditable(blok)} className={`min-h-[500px]
    relative
    flex
    items-end
    justify-center
    p-9
    my-6
    rounded-[5px]
    overflow-hidden ${blok.layout === 'constrained' ? 'container mx-auto' : ''}`}>
      <div className="relative z-10 text-center">
        <h1 className="text-6xl text-white font-bold mb-3">{blok.headline}</h1>
        <h2 className="text-4xl text-white font-light">{blok.subheadline}</h2>
      </div>
      <img
        src={`${blok.image?.filename}/m/filters:brightness(-50)`}
        alt={blok.image?.alt}
        className="absolute top-0 left-0 z-0 w-full h-full object-cover"
      />
    </div>
  );
};
export default Hero;
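One detail worth noting in Hero.js: the /m/filters:brightness(-50) suffix appended to the image URL uses Storyblok’s Image Service to darken the image so the white headline stays readable. The same URL pattern supports other transformations as well; for example (illustrative URLs):

// Original asset URL delivered by the API
// https://a.storyblok.com/f/your-space-id/photo.jpg

// Resize to 800px wide, preserving the aspect ratio
// https://a.storyblok.com/f/your-space-id/photo.jpg/m/800x0

// Combine resizing with the brightness filter
// https://a.storyblok.com/f/your-space-id/photo.jpg/m/800x0/filters:brightness(-50)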

FeaturedRestaurants.js – 


import { storyblokEditable } from "@storyblok/react/rsc";
const FeaturedRestaurants = ({ blok }) => {
  return (
    <section {...storyblokEditable(blok)}>
      <p className="text-5xl mt-8 text-center">Featured Restaurants</p>
      <div className="mt-8 grid w-full grid-cols-1 gap-2 mx-auto sm:grid-cols-3">
      {blok.restaurants.map((r) => (
          <div key={r.slug}>
            <div className="mx-auto flex w-80 flex-col justify-center bg-white rounded-2xl shadow-xl shadow-gray-400/20">
              <img
                className="aspect-video w-80 rounded-t-2xl object-cover object-center"
                src={r.content.background_image?.filename}
                alt={r.content.background_image?.alt}
              />
              <div className="p-6">
                <small className="text-gray-900 text-xs">
                  {/* Rating - {reestaurant.rating.value} */}
                </small>
                <h1 className="text-2xl font-medium text-gray-700 pb-2">
                  {r.content.name}
                </h1>
                <p className="text text-gray-500 leading-6">{r.content.description}</p>
              </div>
            </div>
          </div>
      ))}
      </div>
 
    </section>
  );
};
export default FeaturedRestaurants;

The final version of StoryblokProvider.js with all the components should look like this – 


/** 1. Tag it as client component */
"use client";
import { storyblokInit, apiPlugin } from "@storyblok/react/rsc";
/** 2. Import your components */
import FeaturedRestaurants from "./FeaturedRestaurants";
import LandingPage from "./LandingPage";
import Hero from "./Hero";
import Restaurant from "./Restaurant";
/** 3. Initialize it as usual */
storyblokInit({
  accessToken: "your-access-token",
  use: [apiPlugin],
  components: {
    restaurant: Restaurant,
    hero: Hero,
    landing_page: LandingPage,
    featured_restaurants: FeaturedRestaurants,
  },
});
export default function StoryblokProvider({ children }) {
  return children;
}

Now let’s go ahead and create a new home story using the landing_page block. Add two blocks, hero and featured_restaurants, to its body field. Let’s also add data to their fields and select the restaurants in the Featured Restaurants field. You will see that we get an error here; this is because the restaurants we selected aren’t stored as complete stories but as references. If you check the API response, you will see there are just a few UUIDs under the field. 

We need to resolve this field to get the data for the restaurants inside the component. For this, there is a parameter we can pass named resolve_relations. We also need to resolve the relations for the bridge. 

Resolving Relations (for referencing one story in another story)

In the page.js file where we fetch the data, we need to add the parameter as shown below –

let { data } = await storyblokApi.get(
  `cdn/stories/${slug}`,
  { version: "draft", resolve_relations: ["featured_restaurants.restaurants"] },
  { cache: "no-store" }
);

Also, as mentioned earlier, there is an option for passing the bridge options in StoryblokStory. To do that, change the component tag to the following in the same file –

<StoryblokStory story={data.story} bridgeOptions={{resolveRelations: ["featured_restaurants.restaurants"]}} />

And that is all! The error should be gone now, and the preview will display the content rendered as it should be, with live editing enabled. We can now make any number of changes and add different blocks, and it will all be editable in real time. 

Similarly, more blocks (components) and fields can be created. This was just a basic app to show the capabilities of Next.js and a headless CMS – the possibilities are endless. Make sure to check out the space mentioned in the hint at the top, along with the repository, for a few more ideas. You can even try the demo space that is available when creating a new space. 

Conclusion

In this tutorial, we saw how to integrate Next.js and Storyblok. We saw how to manage and consume content using the API, and how to enable a real-time visual experience using the Visual Editor. We also went through a couple of features that Next.js offers to create great user experiences. 

Resources to look at – 

  • Storyblok Technologies Hub – https://www.storyblok.com/technologies
  • Storyblok React SDK – https://github.com/storyblok/storyblok-react
  • Next.js Docs – https://nextjs.org/
  • Storyblok Visual Editor – https://www.storyblok.com/docs/guide/essentials/visual-editor
  • Storyblok JS Bridge – https://www.storyblok.com/docs/Guides/storyblok-latest-js
Categories
Community

Can AI exist in a sustainable way?

The fictional but prescient Dr. Ian Malcolm noted in 1993’s Jurassic Park, “…your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” Generative AI’s rapid expansion, driven by the growing size of large language models, has felt something akin to genetically engineering a T. rex. It is certainly fun and exciting, but what are the consequences?

Hardware makers are also beefing up CPUs, GPUs, and TPUs (all the PUs, really) to support the training and distribution of those models. But just as history, and our favorite science fiction movies, have shown us, there is a cost. Of course, we’re all wary that Skynet might emerge (and frankly, time will tell), but the more pressing matter is the consumption of electricity and water.

Addressing the AI elephant in the room

At Cisco, we’ve been baking predictive AI into our platforms for years, helping IT operations make insightful, and even proactive, decisions. Across compute, storage, and networking infrastructure, the application of predictive AI and observability has been incredibly useful in helping organizations scale and optimize their actual infrastructure usage. With APIs paving the way for multi-platform integration, we’re seeing wholesale Day 0 to Day N solutions that help organizations manage usage and more.


What the research says

While these gains are exciting, the underlying machine learning technologies that support predictive AI do not have the same resource needs as generative AI, which requires new approaches to reducing its carbon footprint and overall environmental impact.

Over the last five years or so, researchers at universities like Berkeley and the University of Massachusetts saw past the horizon and started experimenting with, and proving, methods that could be employed to lessen the energy consumption (and carbon footprint) of technologies ranging from natural language processing (NLP) to large language models (LLMs). They even go as far as to prescribe both software/algorithm and hardware/infrastructure improvements to alleviate the carbon footprint created by training and using NLP and LLMs. Even better, similar activities are underway to measure the impact of AI technology on water usage as well.

But that’s not the whole story…

As of today, the true nature of AI’s impact on energy consumption is REALLY hard to quantify. Article after article tries to dig into the actual effect of using generative AI technologies. The challenge is that the combination of a large number of variables (what task is being done, how the data center is set up, what processors are being used, and so on) and IP secrecy (there is a LOT of money to be made here) makes reaching a true, tangible answer difficult. Not to mention, there is no way of knowing whether those running LLM-based services are employing any of the proven mitigations noted above. 

The best current research can offer is a range of estimates, from the annual energy usage of an average U.S. home to that of an average mid-size country. That’s an unmanageably wide range, which makes the actual impact, and the ways to mitigate it, difficult to identify.

So it seems that, at least in the short term, newer AI technologies will have an increased impact on energy consumption and water usage, to the possible detriment of the environment.

Problem solving, the developer way

So how can AI exist in conjunction with sustainability efforts? Ah, that’s the interesting part: AI just may be the answer to its own problems. The difficulty mentioned above – figuring out the impact of AI usage on energy and water consumption – is currently being worked on by AI sustainability initiatives.

In theory, the models would then be able to suggest solutions to increased water and electricity consumption. In a slightly less sophisticated approach, predictive AI elements are starting to be used to simply turn things off. This is the simplest answer: eliminate situations where energy is generated but not actually used – and the really cool thing is that AI can help us with that.

Amid this technological advancement, developers have an extraordinary opportunity to make a real impact on a sustainable future.

Getting involved

Cisco’s Build for Better coding challenge is open March 14 – April 22, 2024, and invites all developers to harness their skills in AI, Observability, and Sustainability to make a real-world impact. Learn more and commit your code by Earth Day.

Categories
Community

Developer Nation Donation Program: Supporting Charities for a Better Tomorrow

At Developer Nation, our commitment extends beyond just gathering data and insights from the developer community; we believe in giving back and making a positive impact. Through our Donation Program, we support causes that resonate with our mission and values, ensuring that our community’s voice is heard not only in surveys but also in charitable endeavours.

How Does it Work?

For every survey wave conducted, we allocate a portion of our resources to donate to charities chosen by the Developer Nation Community. For each completed survey response, we contribute $0.10 towards our donation goal, aiming to reach at least $1,600 in total donations!

The 26th Developer Nation Survey

During our 26th Developer Nation global survey, we were thrilled to receive 13,852 qualified responses. This remarkable engagement helped us reach $1,385 in donations, inspiring us to continue giving back to our community in various ways.

Charities Supported in the 26th Survey

In line with our commitment to supporting developer-centric initiatives and other causes valued by our community, we distributed our donations among several deserving charities. Here’s a breakdown of the organizations we supported and the contributions they received based on community votes:

1. FreeCodeCamp

FreeCodeCamp is dedicated to providing free coding education to individuals worldwide. With 5,197 (out of 13,852) survey participants showing their support, we were able to donate $520 to further their mission of making coding accessible to everyone.

2. Mozilla Foundation

The Mozilla Foundation champions an open and accessible internet for all, promoting privacy, security, and digital literacy. With 2,997 votes from our community, we contributed $300 to advance their mission of a healthier internet ecosystem.

3. Engineers Without Borders

Engineers Without Borders (EWB) harnesses the skills of engineers to address global challenges, from clean water access to sustainable infrastructure. Supported by 2,584 out of 13,852 survey participants, we donated $258 to support their impactful projects worldwide.

4. Ada Developers Academy

Ada Developers Academy is dedicated to increasing diversity in tech by providing women and gender-diverse individuals with comprehensive software development training. With 1,759 votes from our community, we contributed $176 to empower more individuals with the skills and opportunities they need to thrive in tech.

5. ADRA (Adventist Development and Relief Agency)

ADRA works tirelessly to address social injustice and provide humanitarian aid to communities in need worldwide. With 1,209 votes from our community, we donated $121 to support their efforts in promoting sustainable development, disaster relief, and advocacy.

Conclusion: Fostering Community and Social Responsibility

At Developer Nation, we believe that giving back to society is integral to our mission. We extend our deepest gratitude to our community members for their enthusiastic participation in our Donation Program. Your contributions have not only exceeded our expectations but also reaffirmed our collective commitment to making a difference.

As we continue our journey, we welcome your thoughts and suggestions for future donations. Your feedback plays a crucial role in shaping our philanthropic efforts, ensuring that we remain aligned with the causes that matter most to you.

Join us in our mission to build a brighter, more inclusive future through technology and compassion. Together, we can make a meaningful impact on the world!