
Enhancing Online Security: Best Practices for Developers

Developing a new software platform, mobile application, or online tool can be a great opportunity to offer innovative solutions to the public. It can also present serious risks. Some will seek to steal your intellectual property during development; others may target your completed product to exploit valuable user data.

This makes it vital that your development team enhances its online security measures. 

Be Proactive

As a developer — or a leader of a dev team — it’s important not to treat security as a set of superficial defensive measures. This reactive attitude can put you and your applications on the back foot, struggling against the onslaught of threats. You and your team need to be proactive in making security as central and important to development as your coding.

One good approach is to make the product secure by design. As the name suggests, this means incorporating strong online security into the design phase of the development lifecycle. Rather than waiting until the testing phase to consider security, or even fixing bugs in beta as is common, your team should be identifying the product’s likely security challenges and how to minimize them alongside brainstorming its key features. This allows you to build a strong security foundation from the outset.

Another way to be proactive in implementing security measures is to ensure your team follows Secure Software Development Lifecycle (SSDLC) protocols. This is effectively a set of actions that are baked into every task developers on your team perform so that they can identify and handle potential issues before they become problematic. It includes creating a culture of security in which threats are discussed and considered regularly. It should involve frequent cybersecurity training so that your dev team is fully aware of the latest threats and protection techniques. Importantly, the development environment itself should be secure, both digitally and physically.

Utilize Advanced Encryption Techniques

Encryption is one of the most powerful tools for ensuring online security. This is particularly effective for minimizing unauthorized access to data that is likely to be shared online both during the development lifecycle and by consumers when using the final product.

Identify and use strong encryption algorithms

Algorithms are the basis upon which encryption operates. Therefore, it’s important to utilize the most appropriate algorithms both for the product itself and for protecting your networks. For instance, the Advanced Encryption Standard (AES) is a common tool for development teams. This symmetric algorithm breaks data down into fixed-size blocks and performs multiple encryption rounds on each block. Some software and apps that require end-user authentication to access sensitive data — like financial information — may be better served by asymmetric encryption, such as the Rivest-Shamir-Adleman (RSA) algorithm.
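As a rough sketch of what this looks like in practice, here is a minimal symmetric-encryption example in Python using the third-party cryptography package (an assumed dependency; the data and key handling are purely illustrative):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit symmetric key
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # must be unique per message; never reuse with the same key
ciphertext = aesgcm.encrypt(nonce, b"user financial record", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"user financial record"

AES-GCM is used in this sketch because it provides authentication as well as confidentiality; the same library also exposes RSA primitives for the asymmetric cases mentioned above.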

Adopting solid key management

Any encryption algorithm you adopt requires keys to be generated and shared to decrypt the information. It’s vital that you implement management measures to mitigate unauthorized access to and use of these keys. It’s important to formalize which members of the team can obtain and use these keys. It’s also vital to regularly change keys, much as you might update a password to keep it strong.

Conduct Vulnerability Assessments and Improvements

The cybersecurity landscape is in constant flux. Even within the timeline of your development process, new threats can emerge and gain traction. One of the best practices developers need to adopt is conducting regular vulnerability assessments and making the relevant improvements.

Perhaps the most convenient approach during development is using automated scanning software. You can invest in tools that scan both the specific code of your project alongside your overall IT infrastructure. There’s even an increasing number of artificial intelligence (AI) driven scanners that use machine learning algorithms to learn about and adapt to the security landscape in relation to your development. In addition, utilizing a DevOps monitoring tool can allow you to see real-time performance issues that could suggest weaknesses in security, such as slow response times.

It’s also wise to remember that your development team’s workflow can be a source of vulnerability. For instance, too many unnecessary repetitive security processes might cause dev staff to become complacent and overlook key protective actions. A commitment to regular process improvement can help you not only minimize weak points but also boost efficiency. Not to mention it helps you to notice changes in the security landscape and adapt to them. You can do this by taking time to map out both formal and informal processes visually in flow diagrams at milestones during the development lifecycle. This helps you to analyze where inefficiencies occur and what processes you can consolidate and strengthen.

Conclusion

With some solid security best practices, you can ensure your development project is protected from threats throughout its life cycle. This should include adopting secure-by-design protocols and strong encryption, among other measures. Wherever possible, make certain you have a cybersecurity expert embedded in your dev team or available to consult regularly. This can help you both implement effective processes and stay abreast of any potential threats you need to prepare for.


8 Indexing Strategies to Optimize Database Performance

Databases provide the backbone for almost every application and system we rely on, acting like a digital filing system for storing and retrieving essential information. Whether it’s organizing customer data in a CRM or handling transactions in a banking system, an efficient database is crucial for a smooth user experience.

However, when we get into large volumes of data and more complex queries, database management can become daunting. That’s where good indexing strategies can make all the difference. 

Think of it like tidying up your digital filing cabinet so you can quickly find what you need without rummaging through multiple drawers and folders to locate the correct file.

By organizing and structuring your data in a way that facilitates quick retrieval, indexing can make a big difference in how well your database performs. Here, we’ll explore some strategies to help you do just that.

Database indexing best practices

Before you settle on a strategy, it’s worth understanding the different ways you can approach indexing to improve query selection and overall database performance.

Identify key queries and columns

Before getting started with indexing, you need to identify the type of queries your application is running regularly and which columns are involved in those queries. This helps you to focus your efforts on areas that will give the best results. There’s no point in spending time and energy indexing columns that rarely get used.

For example, let’s say you’re developing an app for an online bookstore, and one of the most common queries is searching for books by author name. In this case, creating an index on the “author” column can dramatically improve the performance of search queries.
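As a hedged sketch of that example using Python’s built-in sqlite3 module (the schema and names are illustrative, not from any real project):

import sqlite3

conn = sqlite3.connect("bookstore.db")
conn.execute("CREATE TABLE IF NOT EXISTS books (id INTEGER PRIMARY KEY, title TEXT, author TEXT)")

# index the column your most common query filters on
conn.execute("CREATE INDEX IF NOT EXISTS idx_books_author ON books(author)")

# this lookup can now use the index instead of scanning the whole table
rows = conn.execute("SELECT title FROM books WHERE author = ?", ("Ursula K. Le Guin",)).fetchall()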

Data orchestration tools can examine query patterns and usage statistics to pinpoint the most commonly executed queries in your database. What is orchestration, we hear you ask. 

When we talk about data, orchestration is the process of managing and coordinating various tasks like collecting, processing, and analyzing data from different sources. This helps to keep data operations well-organized and efficient.

By understanding which queries are commonly used, database administrators can prioritize indexing efforts on the columns involved in these queries.

Avoid over-indexing

While indexing can undoubtedly speed up query performance, as the saying goes, you can have too much of a good thing. 

Over-indexing isn’t just a waste of time; it can actually have the opposite of the desired effect and hinder database performance.

Keep in mind that every index you add takes up storage space and needs managing within the database. Plus, having too many indexes in play can slow down your insert and update performance because your database will be working overtime to update multiple indexes with every change.

To avoid this, follow data indexing best practices such as those covered in Apache Hive documentation. Aim to strike a balance between query performance and keeping the database easy to manage. 

Focus on indexing columns that are frequently used in WHERE clauses, JOIN conditions, and ORDER BY clauses. Also, think about using composite indexes for queries that involve multiple columns.

Regularly monitor and tune indexes

Creating indexes isn’t one of those jobs you can do once and forget about. Because data and query patterns often evolve over time, you need to regularly check and adjust them. 

It’s similar to Machine Learning Ops (MLOps), where ongoing monitoring ensures a model is still effective. Consistently reviewing and fine-tuning indexes plays the same pivotal role in maintaining their effectiveness.

Failure to do so can lead to technical debt, where outdated or inefficient indexes pile up over time, resulting in degraded performance and increased maintenance overhead.

Use SQL tools like MySQL’s EXPLAIN or Microsoft SQL Server’s Query Execution Plan. These will give you a solid view of how queries are being executed and which indexes are well utilized. You can then more easily see where to add missing indexes and remove ones you no longer need. It also helps you spot opportunities to update existing ones to better suit query patterns.
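The article’s examples are MySQL and SQL Server, but the idea is easy to demo in a self-contained way with SQLite’s equivalent, EXPLAIN QUERY PLAN (continuing the illustrative bookstore schema):

import sqlite3

conn = sqlite3.connect("bookstore.db")
conn.execute("CREATE TABLE IF NOT EXISTS books (id INTEGER PRIMARY KEY, title TEXT, author TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_books_author ON books(author)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT title FROM books WHERE author = ?", ("Ursula K. Le Guin",)
).fetchall()
print(plan)  # expect a row like: SEARCH books USING INDEX idx_books_author (author=?)

If the plan reports a full table scan instead, that’s your cue that an index is missing or unusable for that query.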

Let’s look at what that means in practice. Suppose you notice a particular query performing poorly despite having an index. Upon closer inspection, you discover that the index’s cardinality (i.e. uniqueness) is low, leading to poor selectivity. In this case, modifying the index or adding additional columns to improve selectivity could significantly boost that query’s performance.

Consider using covering indexes

A covering index includes all the columns necessary to fulfill a query. This means that the database doesn’t need to keep accessing the underlying table. 

To return to our filing cabinet analogy, you can think of it as having the right folders set out in front of you so you don’t have to search through the entire cabinet to find what you need. Using covering indexes can speed up search queries by reducing the number of overall disk I/O operations.

For example, consider a call center analytics software that logs details of each customer interaction. This might include data such as:

  • Caller ID
  • Call duration
  • Call timestamp
  • Outcome

If you’re frequently running reports on the total duration of calls, creating a covering index on the caller ID and call duration fields can optimize query performance. This allows the software to retrieve call duration information directly from the index without having to repeatedly access the main call log table.
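A minimal sketch of that call-log example, again using sqlite3 with a hypothetical schema:

import sqlite3

conn = sqlite3.connect("calls.db")
conn.execute("CREATE TABLE IF NOT EXISTS calls (caller_id TEXT, duration_s INTEGER, ts TEXT, outcome TEXT)")

# every column the report touches lives in the index, so the table itself is never read
conn.execute("CREATE INDEX IF NOT EXISTS idx_calls_caller_duration ON calls(caller_id, duration_s)")
report = conn.execute("SELECT caller_id, SUM(duration_s) FROM calls GROUP BY caller_id").fetchall()

In SQLite’s query plans this shows up as USING COVERING INDEX; other engines have equivalent notations.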

Monitor and manage index fragmentation

Index fragmentation occurs when the logical sequence of index pages is not in sync with the physical arrangement. This can make data storage less efficient and slow down search queries. It’s like a library’s card catalog not matching the actual locations of the books on the shelves. 

If you don’t catch this and fix it, the problem will only get worse as more data is added or updated. It’s essential to keep a close eye on your indexes and tidy them up regularly. 

Most modern database systems offer tools for detecting and addressing index fragmentation, such as rebuilding or reorganizing indexes, so make use of them as part of your regular maintenance.
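As a small illustration, SQLite can rebuild an index in place with REINDEX; other engines expose analogous maintenance commands (for example, rebuilding or reorganizing indexes in SQL Server):

import sqlite3

conn = sqlite3.connect("bookstore.db")
conn.execute("CREATE TABLE IF NOT EXISTS books (id INTEGER PRIMARY KEY, title TEXT, author TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_books_author ON books(author)")
conn.execute("REINDEX idx_books_author")  # rebuild the index structure from scratch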

8 database indexing strategies to try

Not all indexing strategies are created equal. When it comes to finding the best indexing strategy for your database, you need to consider a few things, including:

  • What type of data you’re working with
  • Which queries you run often
  • What performance goals you want to achieve

With that in mind, here are a few examples of indexing strategies for different situations.

1. Single-column indexes

Single-column indexes work well for databases with tables containing a large number of rows and where queries frequently filter or sort data based on a single column. For instance, if you’re regularly looking up users by their usernames, create an index for the “username” column in the user table for faster retrieval.

2. Composite indexes

If your common queries involve columns in a WHERE clause or involve ORDER BY and GROUP BY operations on multiple columns, composite indexes might be more useful. For example, if you have a sales database where you’re frequently searching for sales by date and location together, you can create an index for both the “date” and “location” columns.
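A hedged sketch of that sales example with sqlite3 (illustrative schema):

import sqlite3

conn = sqlite3.connect("sales.db")
conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, sale_date TEXT, location TEXT, amount REAL)")

# column order matters: this index also serves filters on sale_date alone,
# but not on location alone (the leftmost-prefix rule)
conn.execute("CREATE INDEX IF NOT EXISTS idx_sales_date_location ON sales(sale_date, location)")
conn.execute("SELECT amount FROM sales WHERE sale_date = ? AND location = ?", ("2024-03-01", "Berlin"))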

3. Unique indexes

These ensure data integrity by enforcing uniqueness on one or more columns. They are beneficial for columns that should not contain duplicate values, such as primary keys or email addresses in a user table.
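For instance, a minimal sqlite3 sketch with a hypothetical user table:

import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE UNIQUE INDEX IF NOT EXISTS idx_users_email ON users(email)")

conn.execute("INSERT INTO users (email) VALUES (?)", ("ada@example.com",))
# inserting the same email again raises sqlite3.IntegrityError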


4. Clustered indexes

Some databases feature rows that are physically stored in order based on the index key. In these cases, clustered indexes can improve the performance of range queries or sequential scans. For example, if you organize time-series data by date, clustering the primary key will make it quicker to find information chronologically.

5. Covering indexes

These indexes contain all the information necessary to answer a query, so the database doesn’t have to go back to the original data table. They’re helpful for queries with SELECT, JOIN, and WHERE clauses.

This can significantly improve query performance, especially in scenarios where you might need to generate data-driven insights from complex queries that involve multiple columns or tables. For example, if you often create reports using data from multiple columns, a covering index could include all those columns to speed up the process.

The same applies at larger scales: organizations running heavy data processing workloads, such as HPC batch jobs, often see outsized gains from covering indexes on their most complex queries.

Another crucial consideration for database optimization is ensuring smooth operations during critical periods, such as website launches. Utilizing a comprehensive website launch checklist can help ensure that your database infrastructure is adequately prepared to handle increased traffic and demands on query performance during such events.

6. Partial indexes

When a subset of data is frequently queried, partial indexes can be created to cover only that subset, reducing the index size and improving query performance. An example is creating a partial index for active users in a user table where only rows with “active = true” are indexed. 
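Here’s what that looks like as a sqlite3 sketch (SQLite represents the boolean as active = 1; names are illustrative):

import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, username TEXT, active INTEGER)")

# only rows matching the WHERE clause are indexed, keeping the index small
conn.execute("CREATE INDEX IF NOT EXISTS idx_active_usernames ON users(username) WHERE active = 1")
conn.execute("SELECT id FROM users WHERE active = 1 AND username = ?", ("ada",))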

In cloud environments dealing with massive datasets, partial indexes can help you manage resources more efficiently and maintain optimal performance. What is cloud native architecture? This refers to apps built specifically to work well in cloud environments. It involves using cloud services and concepts like microservices, containerization, and orchestration. It’s frequently used for apps that need to perform in an agile environment and be quickly scaled up or down.

7. Expression indexes

These indexes are created from expressions or functions applied to one or more columns. They are useful for queries involving computed values or transformations, such as indexing the result of a mathematical operation or a string concatenation performed on columns.
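A brief sqlite3 sketch of an expression index for case-insensitive lookups (illustrative names):

import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, username TEXT)")

# index the computed value so queries on the same expression can use it
conn.execute("CREATE INDEX IF NOT EXISTS idx_username_lower ON users(lower(username))")
conn.execute("SELECT id FROM users WHERE lower(username) = ?", ("ada",))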

8. Hash indexes

Particularly useful for equality comparisons, hash indexes can provide fast access to data in low-cardinality columns or when accessing a large number of rows at random. They are suitable for scenarios like indexing boolean or enumerated columns.

Database indexing – optimize database performance

In database management, optimizing queries is key to ensuring your database performs well across all platforms, from web to mobile. To do this, you need a solid indexing strategy. 

Choosing the right database index can directly impact business operations. When your database is well-organized, it means employees and users can find what they need quickly, leading to tangible benefits from improved response times to streamlined operations and reduced costs.

Understanding the different approaches and best practices means you’ll be better equipped to streamline your data and manage it efficiently.

Pohan Lin – Senior Web Marketing and Localizations Manager

Pohan Lin is the Senior Web Marketing and Localizations Manager at Databricks, a global Data and AI provider connecting the features of data warehouses and data lakes to create lakehouse architecture. With over 18 years of experience in web marketing, online SaaS business, and ecommerce growth, Pohan is passionate about innovation and dedicated to communicating the significant impact data has in marketing. Pohan has written for other domains such as Spiceworks and Parcel Monitor. Here is his LinkedIn.


How Do UX Design Principles Elevate Customer Experiences?

User Experience (UX) design principles play a key role in improving customer experience. UX principles focus on creating products that are user-friendly and meaningful to use.

If you want to design a user-friendly interface, white-label service providers who are experts in design can help you out. White-label web development companies have UI/UX experts who know how to place each UI element in your product. So, white-label agencies can help you build a more responsive, highly interactive design that helps users move around the website.

Here are the things white-label agencies take care of while designing the UI/UX of your product.

6 UX Design Principles for Improving Customer Experience

#1 Simple and Clear Designs

Keeping your website design simple and clear is an integral part of your UX strategy. This involves defining navigation menus clearly, designing intuitive layouts, and using effective language.

Make sure to add input prompts and messages throughout the design wherever needed; they help enhance the user’s experience. Your white-label design service provider will prioritize simplicity while crafting the solution, using existing successful websites as a guide to define a clean and organized layout.

The services will devise a strategy to make navigation intuitive and guided. This would help people move around without being stuck at any point. Moreover, they can plan for a “one-task, one-screen” layout that avoids clutter. 

According to a research paper published on Core, simplicity in design is about going deep into your users’ minds. There are three ways to achieve simplicity in design:

• Maintain clarity: Understand and design for your users’ main goals

• Make use of automation: Design for a minimum amount of conscious and cognitive effort

• Limit options: Design for a strong “information scent”

#2 Crafting Responsive Designs

Your users access the same websites across devices and resolutions. For instance, a user may access the website on the Chrome browser on their Android tablet. Another user may access the website on their iPhone browser. 

It is also possible the same user opens the website on an Android phone and tablet simultaneously. Responsive designs will automatically adapt to the platform’s needs and screen sizes. The design will stay consistent, and users will not face any friction while moving around. This adaptability will enhance the user’s experience.

Your white-label service provider can help you implement the responsive design layout. They are crucial in imagining and testing responsive designs. They will thoroughly test the design and address layout issues in advance.

#3 Ensuring Design Consistency

Consistency is key to keeping your users happy. You must ensure your design is aligned with your brand’s identity. Each element must be cohesive and defined. 

You must add the brand’s color scheme, typography, and design styles while creating the website. This would make it recognizable and relatable to the users. You can improve the overall appearance and ensure a professional design outcome. 

A white-label website design and development service provider works with defined guidelines. They are aware of the design expectations and nuances. As a result, these companies can offer clean and consistent designs, taking the design from wireframe to prototype to eliminate inconsistencies and deliver a smooth layout.

#4 Well-defined Information Architecture

Information flow is pivotal in enhancing the user’s experience. You must define a logical movement and organization of the content. When the user moves from one piece of information to another, they should sense an intuitive flow.

This would increase the user’s engagement on the website and allow them to find information faster. You can work with your white-label service provider to define the sitemap and wireframes for your website, establishing an organized information flow. You can design user journeys and map them into your website design.

Companies can also help you conduct usability tests and validate the information flow for engagement.

#5 Iterative Design with Feedback Loops

Knowing what your users think of the design is crucial for designing an appropriate website. You must ensure you have a feedback loop that brings back the messages from the user. This would help build a user-centric website.

You must use an iterative design strategy to implement these loops and leverage them to collect feedback.

You must have defined mechanisms to help collect the user’s feedback. This would include surveys and analytics tools. White-label service providers can implement these feedback loops and incorporate iterative design for excellent user insights.

Companies can use user insights to build an optimized website aligned with the user’s preferences and needs.

#6 Accessibility Design Considerations

Diverse users, including people with disabilities, will access your website. You must prepare your website for all user types. Ensure the website aligns with ethical design considerations when designing for web accessibility.

White-label service providers are well aware of accessibility guidelines. Their understanding of accessibility standards helps them implement proper heading structures and alt text for images.

Moreover, they would ensure the design is accessible via screen readers and other inputs. This would address all the barriers and ensure inclusivity.

Conclusion

It is crucial to create UX-led white-label websites. This will enhance your design strategy and deliver exceptional results. You can elevate your user’s experience by implementing consistent and clear designs.

Moreover, an organized information architecture and accessible design boost the website’s quality. When your website meets functional, aesthetic, and quality standards, it is usable, user-friendly, and highly engaging.

This will improve session length and conversion rates for your business. Prioritizing UX design principles in your website design is no longer just a strategy; it is the definition of a successful website.


Systemd journal logs: A Game-Changer for DevOps and Developers

“Why bother with it? I let it run in the background and focus on more important DevOps work.” — a random DevOps Engineer at Reddit r/devops

In an era where technology is evolving at breakneck speed, it’s easy to overlook the tools that are right under our noses. One such underutilized powerhouse is the systemd journal. For many, it’s merely a tool to check the status of systemd service units or to tail the most recent events (journalctl -f). Others, who mainly do container work, ignore its existence entirely.

What is the purpose of systemd-journal?

The systemd journal includes very important information: kernel errors, application crashes, out-of-memory process kills, storage-related anomalies, crucial security intel like ssh or sudo attempts and security audit logs, connection/disconnection errors, network-related problems, and a lot more. The journal is brimming with data that can offer deep insights into the health and security of our systems, and yet many professional system and DevOps engineers tend to ignore it.

Of course, we use log management systems like Loki, Elastic, Splunk, DataDog, and others. But do we really go to the trouble of configuring our log pipelines (and accepting the additional cost) to push systemd journal logs into them? We usually don’t.

On top of this, what if I told you that there’s an untapped reservoir of potential within the systemd journal? A potential that could revolutionize the way developers, sysadmins, and DevOps professionals approach logging, troubleshooting, and monitoring.

But how does systemd-journal work?

The systemd journal isn’t just a logging tool; it’s an intricate system that offers dynamic fields for every log entry. Yes, you read that right: each log line may have its own unique fields, annotating and tagging it with any number of additional name-value pairs (and the value part can even be binary data). This is unlike most log management systems, which are optimized for uniform, table-like logs with fields common to all entries. The systemd journal, on the other hand, is optimized for managing an arbitrary number of fields on each log entry, without any uniformity. This feature gives the tool amazing power.

Check, for example, coredumps. systemd developers have annotated all application crashes with a plethora of information: environment variables, mount info, process status, open files, signals, and everything else that was available at the time the application crashed.

Now, imagine a world where application developers don’t just log errors, but annotate those logs with rich information: the request path, internal component states, source and destination details, and everything needed to identify the exact case and state in which the log line appeared. How much time would such error logging save? It would be a game-changer, enabling faster troubleshooting, precise error tracking, and efficient service maintenance.
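To make that concrete, here is a minimal sketch using the python-systemd bindings (an assumed dependency; every application-level field name below is hypothetical):

from systemd import journal

# each keyword argument becomes a queryable field on this journal entry
journal.send(
    "payment failed: upstream timeout",
    PRIORITY=journal.LOG_ERR,
    REQUEST_PATH="/api/v1/charge",
    CUSTOMER_TIER="enterprise",
    UPSTREAM_HOST="payments.internal",
    RETRY_COUNT="2",
)

Entries logged this way can later be filtered with plain field matches, such as journalctl REQUEST_PATH=/api/v1/charge, with no log-parsing pipeline in between.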

All this power is hidden behind a very cryptic journalctl command. So, at Netdata we decided to reveal this power and make it accessible to everyone.

Try it for yourself in one of our Netdata demo rooms here.


5 DevOps best practices to reinforce with monitoring tools

This blog is contributed to Developer Nation by Netdata

As part of a modern software development team, you’re asked to do a lot. You’re supposed to build faster, release more frequently, crush bugs, and integrate testing suites along the way. You’re supposed to implement and practice a strong DevOps culture, read entire novels about SRE best practices, go agile, or add a bunch of Scrum ceremonies to everyone’s calendar. Every week, the industry recommends that you “shift-left” another part of the DevOps pipeline, to the point where you’re supposed to handle everything from unit testing to production deployment optimization from day one.

While you might have some experience with monitoring software, the reality is that, in aggregate, many of the people around you probably don’t. According to the Stack Overflow Developer Survey 2020, nearly 40% of developers have less than 5 years of professional experience. There’s not enough time for anyone to learn all these DevOps tools and best practices while also putting meaningful code into a GitHub repository on a regular basis.

Monitoring, and the metrics data it creates, can be a powerful way to encourage DevOps best practices through a common language, and implementing it doesn’t have to be complicated or time-consuming. By combining a DevOps mindset with a “full-stack” monitoring tool, you can start getting instant feedback about the performance and availability of what you’re trying to build—without waiting another 5 years for your team’s DevOps experience to catch up.

If your team has already settled on a monitoring tool, you can start applying these best practices today. If you’re still looking for the right piece of kit, you can start making informed tooling decisions based on what’s going to strengthen your team.

Focus on infrastructure monitoring first

When we talk about monitoring software for DevOps teams, we’re talking primarily about infrastructure monitoring. Infrastructure monitoring is the practice of collecting metrics data about the performance and availability of an application’s “full stack.” That’s everything including the hardware, any virtualized environment, the operating system, and any services (like databases, message queues, or web servers) that might make your application possible.

Depending on the full stack’s complexity, infrastructure monitoring can mean keeping an eye on a single virtual machine (VM) running on Google Cloud Platform (GCP), a Kubernetes (k8s) deployment with dozens of ephemeral nodes that scale horizontally during periods of high usage, or anything in between.

Key infrastructure metrics to keep an eye on with your monitoring tool (Netdata included) span CPU utilization, memory usage, disk I/O, and networking.

If you can collect eBPF metrics, that’s even better, even if you aren’t experienced enough to make sense of them yet. eBPF metrics are still very much the cutting edge of infrastructure monitoring, providing extremely granular detail into exactly how the Linux kernel deals with your full stack, so there’s still a lot of flux in recommendations and best practices.

Monitor performance and availability in every environment

Modern DevOps teams should be monitoring the full stack no matter where it runs. This presents quite a large break from tradition, where the operations (Ops) team handled monitoring only once the application was running in production. The perception was that seeing users interact with a full stack was the only way to catch real bugs.

The latest best practices acknowledge that it’s possible—even inevitable—to catch bugs early by monitoring everywhere. That starts with local development servers and extends to any number of testing, staging, or production environments. That also means the monitoring tool should work whether the application is running off the latest M1 MacBook Air or in a multi-cloud deployment across dozens of virtual machines (VMs).

Before you go rushing into your next release process, take time to develop the tooling to monitor in more places. That might mean creating a custom Dockerfile for local development, or adding hooks into your CI/CD toolchain to deploy a fresh staging environment every time a developer reaches a milestone.


Collect everything, worry about it when you need it

The only way to know that something is going wrong with your application’s infrastructure is to have the data to support it. One common practice is to vacuum up every metric, store it for 2-3 weeks, and have it available if you need to go back in time and root cause an issue or outage.

One way to ensure you’re collecting everything is to choose a tool with high granularity. Every infrastructure monitoring tool collects and visualizes metrics at a specific granularity, which is another way of talking about the time period between one point of collection and the next.

One data point every 60 seconds = low granularity
One data point every 1 second = high granularity

If you have a transient-but-critical error that comes and goes within 5 seconds, a low-granularity solution might not even show a blip, which means you still don’t know anything went wrong in the first place.

With low granularity, metrics are averaged out over long periods of time, which has the unwanted effect of flattening what should be worrying spikes into nothing more than blips in the noise.
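A tiny Python illustration of that flattening effect (the numbers are made up for the demo):

import statistics

# a hypothetical 60-second window: ~5% CPU with a 5-second spike to 95%
per_second = [5.0] * 55 + [95.0] * 5

print(max(per_second))              # 95.0 -> per-second granularity exposes the spike
print(statistics.mean(per_second))  # 12.5 -> one 60-second average flattens it into noise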

Netdata itself uses an internal time-series database for storing per-second metrics in an efficient way, which gives you tons of flexibility to find the sweet spot between disk space considerations and keeping historical metrics around long enough for proper analysis.

Some DevOps teams even use tools like eBPF, which collect and visualize metrics with event granularity, meaning they can show you every event, not just an average/minimum/maximum of the data between two points in time.

Break down silos with metrics

One of the DevOps mindset’s core purposes is to break down the silos between what used to be separate development (Dev) and operations (Ops) teams. In the past, the Dev team finished writing code, flung it across the fence to the Ops team, and wiped its hands clean of whatever came next. The Ops team then spent its days putting out fires and trying to understand how the application worked.

DevOps is designed to stop this unproductive cycle, but it only works if everyone has access to the same platform and uses the same language: metrics. Choose a tool that’s accessible to everyone who touches application code or controls the production environment. That doesn’t mean having one person who controls the infrastructure monitoring dashboards and lets the rest of the organization look at it in read-only mode.

Make sure your monitoring tool encourages the sharing of information. Let anyone on your team, no matter their role, peek at your configurations or dashboards. By looking over your shoulder, they might learn something valuable, like a metric they’d previously overlooked or a unique troubleshooting strategy. On the other hand, the tool should also let anyone experiment and explore in a “sandbox” that doesn’t affect the core health and availability dashboards.

For example, Netdata Cloud uses the concepts of War Rooms, which are shared containers for DevOps teams who need to do infrastructure monitoring. Every node, alert, and custom dashboard in that War Room is shared between everyone, but each team member can freely create, reconfigure, and learn. No more keys to the dashboarding kingdom, and no more worrying about messing up someone else’s perfectly-crafted troubleshooting experience.

Bubble it up into continuous monitoring

While continuous integration (CI) and continuous delivery (CD; CI/CD) have gotten all the attention, a lot of DevOps practitioners have forgotten about continuous monitoring (CM). This practice helps DevOps teams track, identify, and make decisions from all collected metrics, across all environments, in real time.

While some consider CM the last part of the DevOps pipeline—the practice of monitoring an application in production—other organizations bring CM to the entire CI/CD toolchain, monitoring internal processes and tooling to identify issues before being released into the wild.

With a sophisticated CM strategy in place, your team can better respond to ongoing incidents, with the added benefit of making leaps in the 4 key metrics for DevOps success: mean time to acknowledge (MTTA), mean time to recovery (MTTR), mean time between failures (MTBF), and mean time to failure (MTTF). You’ll improve company-wide visibility into the performance and availability of your stack, and you’ll end up driving real business results, like happier users and improved retention. Because Netdata deploys (easily) everywhere, has highly-granular metrics, and lets users of all experience levels explore and learn their infrastructure, it’s perfect for leveling up a DevOps team with CM.

Don’t have a DevOps monitoring tool yet?

The IT infrastructure monitoring tools that make all these best practices come to life come in a huge variety of shapes and sizes, from open-source toolchains you cobble together on your own to enterprise-friendly monoliths that do everything but cost a ton.

Because there are so many moving parts, a lot of developers and DevOps teams hesitate when choosing an IT monitoring tool, and then end up with something that doesn’t actually empower them, knock down silos between teams, or ramp up the speed of development.

One choice that enables all of the above best practices, and many more, is Netdata. Download the free and open-source Netdata Agent to start implementing DevOps best practices and improving your team’s performance know-how.

Once you’re seeing metrics with per-second granularity, familiarize yourself with Netdata’s documentation and guides to explore more opportunities to explore, troubleshoot, and resolve even the most complex of full-stack issues.


Monitoring vs Observability: Understanding the Differences

This blog is contributed to Developer Nation by Netdata

As systems increasingly shift towards distributed architectures to deliver application services, the roles of monitoring and observability have never been more crucial. Monitoring delivers the situational awareness you need to detect issues, while observability goes a step further, offering the analytical depth to understand the root cause of those issues.

Understanding the nuanced differences between monitoring and observability is crucial for anyone responsible for system health and performance. In dissecting these methodologies, we’ll explore their unique strengths, dive into practical applications, and illuminate how to strategically employ each to enhance operational outcomes.

To set the stage, consider a real-world scenario that many of us have encountered: It’s 3 a.m., and you get an alert that a critical service is down. Traditional monitoring tools may tell you what’s wrong, but they won’t necessarily tell you why it’s happening, leaving that part up to you. With observability, the tool enables you to explore your system’s internal state and uncover the root cause faster and more easily.

The Conceptual Framework

Monitoring has its roots in the early days of computing, dating back to mainframes and the first networked systems. The primary objective was straightforward: keep the system up and running. Threshold-based alerts and basic metrics like CPU usage, memory consumption, and disk I/O were the mainstay. These metrics provided a snapshot but often lacked the context needed for debugging complex issues.

Observability, on the other hand, is a relatively new paradigm, inspired by control theory and complex systems theory. It came to prominence with the rise of microservices, container orchestration, and cloud-native technologies. Unlike monitoring, which focuses on known problems, observability is designed to help you understand unknown issues. The concept gained traction as systems became too complex to understand merely through predefined metrics or logs.

Monitoring: The Watchtower

Monitoring is about gathering data to answer known questions. These questions usually take the form of metrics, alerts, and logs configured ahead of time. In essence, monitoring systems act as a watchtower, constantly scanning for pre-defined conditions and alerting you when something goes awry. The approach is inherently reactive; you set up alerts based on what you think will go wrong and wait.

For instance, you might set an alert for when CPU usage exceeds 90% for a prolonged period. While this gives you valuable information, it doesn’t offer insights into why this event is occurring. Was there a sudden spike in user traffic, or is there an inefficient code loop causing the CPU to max out?

Observability: The Explorer

Observability is a more dynamic concept, focusing on the ability to ask arbitrary questions about your system, especially questions you didn’t know you needed to ask. Think of observability as an explorer equipped with a map, compass, and tools that allow you to discover and navigate unknown territories of your system. With observability, you can dig deeper into high-cardinality data, enabling you to explore the “why” behind the issues.

For example, you may notice that latency has increased for a particular service. Observability tools will allow you to drill down into granular data, like traces or event logs, to identify the root cause, whether it be an inefficient database query, network issues, or something else entirely.

Key Differences between Monitoring & Observability

Data

Monitoring and observability rely heavily on three fundamental data types: metrics, logs, and traces. However, the approach each takes to collecting, examining, and utilizing this data can differ significantly.

Metrics in Monitoring vs Observability

Metrics serve as the backbone of both monitoring and observability, providing numerical data that is collected over time. However, the granularity, flexibility, and usage of these metrics differ substantially between the two paradigms.

Monitoring: Predefined and Aggregate Metrics

In a monitoring setup, metrics are often predefined and tend to be aggregate values, such as averages or sums calculated over a specific time window. These metrics are designed to trigger alerts based on known thresholds. For example, you might track the average CPU usage over a five-minute window and set an alert if it exceeds 90%. While this approach is effective for catching known issues, it lacks the context needed to understand why a problem is occurring.

Observability: High-Fidelity, High-Granularity and Context-Rich Metrics

Observability platforms go beyond merely collecting metrics; they focus on high-granularity, real-time metrics that can be dissected and queried in various ways. Here, you’re not limited to predefined aggregate values. You can explore metrics like request latency at the 99th percentile over a one-second interval or look at the distribution of database query times for a particular set of conditions. This depth allows for a more nuanced understanding of system behavior, enabling you to pinpoint issues down to their root cause.
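As a hedged illustration of the kind of query this enables, here is a tiny nearest-rank 99th-percentile calculation over a batch of per-second latency samples (all numbers invented for the demo):

def p99(samples: list[float]) -> float:
    """Nearest-rank approximation of the 99th percentile."""
    ordered = sorted(samples)
    return ordered[int(0.99 * (len(ordered) - 1))]

# hypothetical one-second batch of request latencies in milliseconds
latencies = [12.0] * 980 + [480.0] * 20
print(p99(latencies))  # 480.0 -> the slow tail is visible; the mean (~21.4 ms) hides it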

A critical aspect that is often overlooked is the need for real-time, high-fidelity metrics, which are metrics sampled at very high frequencies, often per second. In a system where millions of transactions are happening every minute, a five-minute average could hide critical spikes that may indicate system failure or degradation. Observability platforms are generally better suited to provide this level of granularity than traditional monitoring tools.

Logs: Event-Driven in Monitoring vs Queryable in Observability

Logs provide a detailed account of events and are fundamental to both monitoring and observability. However, the treatment differs.

Monitoring: Event-Driven Logs

In monitoring systems, logs are often used for event-driven alerting. For instance, a log entry indicating an elevated permissions login action might trigger an alert for potential security concerns. These logs are essential but are typically consulted only when an issue has already been flagged by the monitoring system.

Observability: Queryable Logs

In observability platforms, logs are not just passive records; they are queryable data points that can be integrated with metrics and traces for a fuller picture of system behavior. You can dynamically query logs to investigate anomalies in real-time, correlating them with other high-cardinality data to understand the ‘why’ behind an issue.

Proactive vs Reactive

The second key difference lies in how these approaches are generally used to interact with the system.

Monitoring: Set Alerts and React

Monitoring is generally reactive. You set up alerts for known issues, and when those alerts go off, you react. It’s like having a fire alarm; it will notify you when there’s a fire, but it won’t tell you how the fire started, or how to prevent it in the future.

Observability: Continuous Exploration

Observability, by contrast, is more proactive. With an observability platform, you’re not just waiting for things to break. You’re continually exploring your data to understand how your system behaves under different conditions. This allows for more preventive measures and enables engineers to understand the system’s behavior deeply.

Opinionated Dashboards and Charts

Navigating the sprawling landscape of system data can be a daunting task, particularly as systems scale and evolve. Both monitoring and observability tools offer dashboards and charts as a solution to this challenge, but the philosophy and functionality behind them can differ significantly.

Monitoring: Pre-Built and Prescriptive Dashboards

In the realm of monitoring, dashboards are often pre-built and prescriptive, designed to highlight key performance indicators (KPIs) and metrics that are generally considered important for the majority of use-cases. For instance, a pre-configured dashboard for a database might focus on query performance, CPU usage, and memory consumption. These dashboards serve as a quick way to gauge the health of specific components within your system.

  • Quick Setup: Pre-built dashboards require little to no configuration, making them quick to deploy.
  • Best Practices: These dashboards are often designed based on industry best practices, providing a tried-and-true set of metrics that most organizations should monitor.
  • Lack of Flexibility: Pre-built dashboards are not always tailored to your specific needs and might lack the ability to perform ad-hoc queries or deep dives.
  • Surface-Level Insights: While useful for a quick status check, these dashboards may not provide the contextual data needed to understand the root cause of an issue.

Observability: Customizable and Exploratory Dashboards

Contrastingly, observability platforms often allow for much greater customization and flexibility in dashboard creation. You can build your own dashboards that focus on the metrics most relevant to your specific application or business needs. Moreover, you can create ad-hoc queries to explore your data in real-time.

  • Deep Insights: Custom dashboards allow you to drill down into high-cardinality data, providing nuanced insights that can lead to effective problem-solving.
  • Contextual Understanding: Because you can tailor your dashboard to include a wide range of metrics, logs, and traces, you get a more contextual view of system behavior.
  • Complexity: The flexibility comes at the cost of complexity. Building custom dashboards often requires a deep understanding of the data model and query language of the observability platform.
  • Time-Consuming: Crafting a dashboard that provides valuable insights can be a time-consuming process, especially if you’re starting from scratch.

Netdata aims to deliver the best of both worlds by giving you out-of-the-box, opinionated, yet powerful, flexible, and customizable dashboards for every single metric.

Real-World Applications: Monitoring vs Observability

Understanding the key differences between monitoring and observability is pivotal, but these concepts are best illustrated through real-world use cases. Below, we delve into some sample scenarios where each approach excels, offering insights into their practical applications.

Network Performance

Monitoring tools are incredibly effective for tracking network performance metrics like latency, packet loss, and throughput. These metrics are often predefined, allowing system administrators to quickly identify issues affecting network reliability. For example, if a VPN connection experiences high packet loss, monitoring tools can trigger an alert, prompting immediate action.

Debugging Microservices

In a microservices architecture, services are loosely coupled but have to work in harmony. When latency spikes in one service, it can be a herculean task to pinpoint the issue. This is where observability shines. By leveraging high-cardinality data and dynamic queries, engineers can dissect interactions between services at a granular level, identifying bottlenecks or failures that are not immediately obvious.

Case Study: Transitioning from Monitoring to Observability

Consider a real-world example of a SaaS company that initially relied solely on monitoring tools. As their application grew in complexity and customer base, they started noticing unexplained latency issues affecting their API. Traditional monitoring tools could indicate that latency had increased but couldn’t offer insights into why it was happening.

The company then transitioned to an observability platform, enabling them to drill down into granular metrics and traces. They discovered that the latency was tied to a specific database query that only became problematic under certain conditions. Using observability, they could identify the issue, fix the inefficient query, and substantially improve their API response times. This transition not only solved their immediate problem but equipped them with the tools to proactively identify and address issues in the future.

Synergy and Evolution: The Future of Monitoring and Observability

The choice between monitoring and observability isn’t binary; often, they can complement each other. Monitoring provides the guardrails that keep your system running smoothly, while observability gives you the tools to understand your system deeply, especially as it grows in complexity.

As we continue to push the boundaries of what’s possible in software development and system architecture, both monitoring and observability paradigms are evolving to meet new challenges and leverage emerging technologies. The sheer volume of data generated by modern systems is often too vast for humans to analyze in real-time. AI and machine learning algorithms can sift through this sea of information to detect anomalies and even predict issues before they occur. For example, machine learning models can be trained to recognize the signs of an impending system failure, such as subtle but unusual patterns in request latency or CPU utilization, allowing for preemptive action.

Monitoring and observability serve distinct but complementary roles in the management of modern software systems. Monitoring provides a reactive approach to known issues, offering immediate alerts for predefined conditions. It excels in areas like network performance and infrastructure health, acting as a first line of defense against system failures. Observability, on the other hand, allows for a more proactive and exploratory interaction with your system. It shines in complex, dynamic environments, enabling teams to understand the ‘why’ behind system behavior, particularly in microservices architectures and real-world debugging scenarios.

Netdata: Real-Time Metrics Meet Deep Insights

Netdata offers capabilities that span both monitoring and observability. It delivers real-time, per-second metrics, making it a powerful resource for those in need of high-fidelity data. Netdata provides out-of-the-box dashboards for every single metric as well as the capability to build custom dashboards, bridging the gap between static monitoring views and the dynamic, exploratory nature of observability. Whether you’re looking to simply keep an eye on key performance indicators or need to dig deep into system behavior, Netdata offers a balanced, versatile solution.

Check out Netdata’s public demo space or sign up today for free, if you haven’t already.

Happy Troubleshooting!


Docker container monitoring with Netdata

This blog is contributed to Developer Nation by Netdata

Properly monitoring the health and performance of Docker containers is an essential skill for solo developers and large teams alike. As your infrastructure grows in complexity, it’s important to streamline every facet of the performance of your apps/services. Plus, it’s essential that the tools you use to make those performance decisions work across teams, and allow for complex scaling architectures.

Netdata does all that, and thanks to our Docker container collector, you can now monitor the health and performance of your Docker containers in real-time.

With Docker container monitoring enabled via cgroups, you get real-time, interactive charts showing key CPU, memory, disk I/O, and networking metrics for entire containers. Plus, you can use other collectors to monitor the specific applications or services running inside Docker containers.

With these per-second metrics at your fingertips, you can get instant notifications about outages, performance hiccups, or excessive resource usage, visually identify the anomaly, and fix the root cause faster.

What is Docker?

Docker is a virtualization platform that helps developers deploy their software in reproducible and isolated packages called containers. These containers have everything the software needs to run properly, including libraries, tools, and the application’s source code or binaries. And because these packages contain everything the application needs, they run anywhere, eliminating the problem of code that works in testing but not in production.

Docker containers are a popular platform for distributing software via Docker Hub, as we do for Netdata itself. But perhaps more importantly, containers are now being “orchestrated” with programs like Docker Compose, and platforms like Kubernetes and Docker Swarm. DevOps teams also use containers to orchestrate their microservices architectures, making them a fundamental component of scalable deployments.

How Netdata monitors Docker containers

Netdata uses control groups—most often referred to as cgroups—to monitor Docker containers. cgroups is a Linux kernel feature that limits and tracks the resource usage of a collection of processes. When you combine resource limits with process isolation (thanks, namespaces!), you get what we commonly refer to as containers.

Linux uses virtual files, usually placed at /sys/fs/cgroup/, to report the existing containers and their resource usage. Netdata scans these files and directories every few seconds (configurable via the “check for new cgroups every” option in netdata.conf) to find added or removed cgroups.
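As a rough illustration of the discovery idea (not Netdata’s actual implementation, which is more involved and also handles cgroup v1 layouts), walking the unified hierarchy from Python looks like this:

from pathlib import Path

# every directory containing a cgroup.procs file is one cgroup,
# e.g. a Docker container on a cgroup v2 system
for procs_file in Path("/sys/fs/cgroup").rglob("cgroup.procs"):
    print(procs_file.parent)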

The best part about monitoring Docker containers with Netdata is that it’s zero-configuration. If you have Docker containers running when you install Netdata, it’ll auto-detect them and start monitoring their metrics. If you spin up Docker containers after installing Netdata, restart it with sudo service netdata restart or the appropriate variant for your system, and you’ll be up and running!

Read more about Netdata’s cgroup collector in our documentation.

View many containers at-a-glance

Netdata auto-detects running containers and auto-populates the right-hand menu with their IDs or container names, based on the configuration of your system. This interface is expandable to any number of Docker containers you want to monitor with Netdata, whether it’s 1, 100, or 1,000.

Netdata also uses its meaningful presentation to organize CPU and memory charts into families, so you can quickly understand which containers are using the most CPU, memory, disk I/O, or networking, and begin correlating that with other metrics from your system.

Get alarms when containers go awry

Netdata comes with pre-configured CPU and memory alarms for every running Docker container. Once Netdata auto-detects a Docker container, it initializes three alarms: RAM usage, RAM+swap usage, and CPU utilization for the cgroup. These alarms calculate their usage based on the cgroup limits you set, so they’re completely dynamic to any Docker setup.

You can, of course, edit your health.d/cgroups.conf file to modify the existing alarms or create new ones entirely.

Dive into real-time metrics for containerized apps and services

Netdata’s Docker monitoring doesn’t stop with entire containers—it’s also fully capable of monitoring the apps/services running inside those containers. This way, you’ll get more precise metrics for your mission-critical web servers or databases, plus all the pre-configured alarms that come with that collector!

You can monitor specific metrics for any of the 200+ apps/services like MySQL, Nginx, or Postgres, with little or no configuration on your part. Just set the service up using the recommended method, and Netdata will auto-detect it.

For example, here are some real-time charts for an Nginx web server, running inside of a Docker container, while it’s undergoing a stress test.

Visit our documentation and use the search bar at the top to find out how to monitor your favorite containerized service.

What’s next?

To get started monitoring Docker containers with Netdata, install Netdata on any system running the Docker daemon. Netdata will auto-detect your cgroups and begin monitoring the health and performance of any running Docker containers.

If you already have Netdata installed and want to enable Docker monitoring, restart Netdata using the appropriate command for your system.

Netdata handles ephemeral Docker containers without complaint, so don’t worry about situations where you’re scaling up and down on any given system. As soon as a new container is running, Netdata dynamically attaches all the relevant alarms, and you can see new charts after refreshing the dashboard.

For a more thorough investigation of Netdata’s Docker monitoring capabilities, read our cgroups collector documentation and our Docker Engine documentation. You can also learn about running Netdata inside of a container in your ongoing efforts to containerize everything.

Categories
Community

Building a Restaurant Guide with Next.js and Storyblok

In this tutorial, we will explore how to integrate Next.js with Storyblok while learning some Next.js and headless CMS features. We will also see how Storyblok works and enable live preview in Storyblok’s Visual Editor. To make things interesting, we will build a Restaurant Guide along the way. Here is a look at what we will be building –

Hint - If you’re in a hurry, you can check out the repo on GitHub. You can also clone the space by clicking here.

Requirements

To follow this tutorial, you will need the following:

  • Basic understanding of JavaScript, React and Next.js
  • Node.js LTS version
  • An account in the Storyblok App

Creating a Next.js Project

Let’s first create a Next.js project with the following command –

npx create-next-app@latest

Feel free to choose your own preferences, but make sure to add Tailwind CSS to the project, as the code in this tutorial uses Tailwind for styling. Let’s also install the Storyblok React SDK with the following command –

npm i @storyblok/react

The SDK allows us to interact with the API and enables live editing. Let’s now start the development server –

npm run dev

Once the server is started, open your browser at http://localhost:3000. You should see the following screen – 

For security reasons, plain HTTP is not supported inside Storyblok’s Visual Editor, so we need to set up a proxy from an HTTPS port to HTTP port 3000. You can read more about this in the setting up a proxy guide. We will be using port 3010 for HTTPS.
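
One common way to do this, and the approach assumed here, is the local-ssl-proxy npm package, which forwards an HTTPS port to a local HTTP port –

npm install -g local-ssl-proxy
local-ssl-proxy --source 3010 --target 3000

With the proxy running, https://localhost:3010 serves the same app as http://localhost:3000.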

Hint - Check out Storyblok’s Kickstart page to see in detail how to connect Storyblok with any technology, in both new and existing projects.

Creating a Space in Storyblok

Now, let’s log in to the Storyblok App. Once logged in, you will see a list of your spaces; every card here is a space.

You can think of a space as a content repository that contains everything: content, users, components, and so on. For this tutorial, one space will be one project. You can read more about spaces in the Structure of Content guide on the Storyblok website. Let’s now create a new space for our Next.js project by clicking the Add Space button at the top right, and choose New Space from the list of options. There are a couple of plans available when you create a space; for this tutorial, feel free to choose the Community Plan, which is free and requires no payment information.

Then, we can enter the Space name and Server location.

Setting up the Space

Once the space is created, you will see a Home story inside the Content section and a couple of blocks (or components) in the Block Library section. We will create more stories and blocks according to the app’s requirements. As Storyblok is a headless CMS, you can create stories built from different blocks and consume their content through APIs on any frontend layer (in this case, Next.js).

Hint - You can read more about a story in the Structure of Content guide; it is essentially a content entry.

Let’s go to the settings of the space, then to the Visual Editor tab. This is where you set the location of the Visual Editor. Let’s change it to https://localhost:3010/, as this is our dev server behind the proxy.

Now, if we open the Home story in the Content section, we will see our Next.js website there. It shows a 404 for now, but the aim was to set up the Visual Editor environment.

Integrating Next.js and Storyblok

Now, let’s add some code to our Next.js project to get the Storyblok integration working and add live-editing functionality. First, delete the page.js file inside the app folder. Then create a [[...slug]] folder with a page.js file inside it; this optional catch-all segment matches every possible route. You can read more about Dynamic Routes in the official Next.js docs.
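
The app folder should end up looking like this –

app/
├── [[...slug]]/
│   └── page.js
├── globals.css
└── layout.js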

We will add the code to render every route inside this page.js file, but let’s first initialize Storyblok. To do so, go to the layout.js file and call the storyblokInit function there. You can add the following –


import { storyblokInit, apiPlugin } from "@storyblok/react/rsc";

// Existing Code

storyblokInit({
  accessToken: 'your-access-token',
  use: [apiPlugin]
})

We are importing storyblokInit and apiPlugin from the package and calling the init function. You can find your space’s access token in the space settings; make sure to copy the preview token. You can create more tokens as needed.

apiPlugin lets us use the Storyblok JS Client, which allows us to interact with the Storyblok API through the SDK. If you don’t want to add it, you can fetch the content your preferred way by making a GET request to Storyblok’s Content Delivery API.
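
For reference, here is a minimal sketch of such a request; the token is a placeholder and the home slug is just an example –

const response = await fetch(
  "https://api.storyblok.com/v2/cdn/stories/home?version=draft&token=your-access-token"
);
const { story } = await response.json();
console.log(story.content);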

Once this is done, we also need to add a special wrapper component that allows us to make edits in real time. Because Next.js renders everything on the server by default with React Server Components, it is not possible to run client-side JavaScript and listen to such events there. Hence, this wrapper will be a Client Component.

Create a components folder, then create a new file named StoryblokProvider.js inside it. Paste the following code there –


/** 1. Tag it as client component */
"use client";
import { storyblokInit, apiPlugin } from "@storyblok/react/rsc";

/** 2. Import your components */
// import Page from "./Page"

/** 3. Initialize it as usual */
storyblokInit({
  accessToken: "your-access-token",
  use: [apiPlugin],
  components: {},
});

export default function StoryblokProvider({ children }) {
  return children;
}

Here we do the initialization again on the client side; the earlier initialization in the layout is used to fetch data. This time we also pass a components key, which will contain a map of all the components we create, enabling dynamic rendering.

We need to wrap the app inside this component. The layout.js file should now look like this – 


import { storyblokInit, apiPlugin } from "@storyblok/react/rsc";
import StoryblokProvider from "@/components/StoryblokProvider"
import './globals.css';
export const metadata = {
  title: 'Storyblok and Next.js 14',
  description: 'A Next.js and Storyblok app using app router ',
}
storyblokInit({
  accessToken: 'your-access-token',
  use: [apiPlugin]
})
export default function RootLayout({ children }) {
  return (
    <StoryblokProvider>
      <html lang="en">
        <body>{children}</body>
      </html>
    </StoryblokProvider>
  )
}

Now let’s add the logic to the page.js file to render all the routes. Paste the following code inside the file – 


import { getStoryblokApi } from "@storyblok/react/rsc";
import StoryblokStory from "@storyblok/react/story";

export default async function Page({ params }) {
  let slug = params.slug ? params.slug.join("/") : "home";

  const storyblokApi = getStoryblokApi();
  let { data } = await storyblokApi.get(
    `cdn/stories/${slug}`,
    { version: "draft" },
    { cache: "no-store" }
  );

  return (
    <div className="mb-40">
      <StoryblokStory story={data.story} bridgeOptions={{}} />
    </div>
  );
}

export async function generateStaticParams() {

  const storyblokApi = getStoryblokApi();
  let { data } = await storyblokApi.get("cdn/links/", {
    version: "draft",
  });

  let paths = [];

  Object.keys(data.links).forEach((linkKey) => {
    if (data.links[linkKey].is_folder) {
      return;
    }
    const slug = data.links[linkKey].slug;
    if (slug == "home") {
      return;
    }
    let splittedSlug = slug.split("/");
    paths.push({ slug: splittedSlug });
  });
  return paths;
}

In this file, we import getStoryblokApi, which is used to fetch data from Storyblok. The first thing to notice is the generateStaticParams function provided by Next.js. It defines the dynamic routes that should be statically generated at build time, which is very useful for static generation. You can read more about the function in the Next.js Docs.

In this function, we fetch all the links inside Storyblok, which gives us all the entries. We use the links endpoint of the API, which returns just the links so we don’t have to fetch complete stories; take a look at the links endpoint in the API reference. We skip the home slug here because the root route already handles it (the Page function falls back to "home" when no slug is present).

Once we have all the links, we return the paths array in the format generateStaticParams requires. The Page function then handles the rendering logic: it first derives the slug and fetches that slug’s content through the API. In the get call, the first parameter is the path, the second is the API options (where we set the version in this case; more options can be passed), and the third passes custom fetch options.

Storyblok stores two versions of content: draft and published. By default, when you create something or make changes, the new content is stored as a draft until you hit publish.

Hint - Only the preview token can fetch both the draft and the published version of content. The public token can only fetch the published version.
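
To illustrate, the two calls below differ only in the version option (a small sketch reusing the storyblokApi client from the code above) –

// Draft content – requires the preview token
await storyblokApi.get(`cdn/stories/${slug}`, { version: "draft" });

// Published content – works with a public token as well
await storyblokApi.get(`cdn/stories/${slug}`, { version: "published" });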

We also pass { cache: "no-store" } as the third parameter so that Next.js doesn’t cache the request and serve stale data when the content changes.

After we get the data, we use the StoryblokStory component to render the components received in the content dynamically. We pass the story property, which contains the component’s content, and the bridgeOptions property, which takes options for the Storyblok Bridge.

With all this done, we are ready to create new stories and components!

Creating Stories and Components

Let’s start by creating a Restaurant component. This will help us understand the basics; then we can add more fields and components. Go to the Block Library in Storyblok, and you will see a few blocks already created. We won’t be using these, so create a new one by clicking the Create New button at the top right. Name it restaurant and choose Content type block.

There are three different types of components in Storyblok: Content type blocks can be standalone stories, Nestable blocks can be added inside other blocks, and Universal blocks can be used as both.

Once the component is added, we need to add a couple of fields to it. Let’s start with a name field and a background image field. The pre-selected field type is Text, but you can change it while creating the field or afterwards. For background_image, select (or change to) the Asset type.

Once the component is created, we can use it to create new restaurant stories that store the content for each restaurant. Go to the Content section and add a new folder named Restaurants; this folder will hold all the restaurants. Select Restaurant as the default content type.

Once the folder is created, add a new story inside it for a restaurant. Feel free to choose the name of the restaurant. 

Once the story is created, you will see the fields that we added in our component. 

The fields are empty, and our frontend is not showing anything at this moment. Add content to the fields and hit the Save button at the top right. Even after saving, nothing is shown in the Visual Editor. This is because we are missing one important step: creating the component on the frontend. You should also see a warning in the terminal saying that the component doesn’t exist.

Let’s now create a Restaurant component in our Next.js project. Create a new file named Restaurant.js inside the components folder and paste the following code –


import { storyblokEditable } from "@storyblok/react/rsc";

const Restaurant = ({ blok }) => (
  <main {...storyblokEditable(blok)}>
    <div>
      <div
        className={`min-h-[500px] relative
          flex
          items-end
          justify-center
          p-9
          my-6
          rounded-[5px]
          overflow-hidden`}
      >
        <div className="relative z-10 text-center">
          <h1 className="text-7xl text-white font-bold mb-3">{blok.name}</h1>
        </div>
        <img
          src={`${blok.background_image?.filename}`}
          alt={blok.background_image?.alt}
          className="absolute top-0 left-0 z-0 w-full h-full object-cover"
        />
      </div>
    </div>
  </main>
);
export default Restaurant;

In this component, we receive a property named blok, passed down from the story we gave to StoryblokStory. This blok prop contains all the information coming from the story; in this case, the name we added along with background_image. We can access those properties directly from the blok object: blok.name for the name and blok.background_image?.filename for the image. The asset field returns an object containing additional information, with the image’s URL under the filename property.
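
For a clearer picture, the value of an asset field looks roughly like the object below; the values are made up and the exact set of keys may vary –

// Illustrative shape of blok.background_image (hypothetical values)
{
  "fieldtype": "asset",
  "id": 987654,
  "filename": "https://a.storyblok.com/f/12345/2000x1200/abc123/restaurant.jpg",
  "alt": "Outdoor seating at the restaurant",
  "title": ""
}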

Components that need live editing must spread the storyblokEditable function over their root element, as we do here on the main tag. This renders the dotted outlines inside the Visual Editor and makes the components clickable.

Once we save this component, we also need to add it to the initialization in the StoryblokProvider.js file –


/** 1. Tag it as client component */
"use client";
import { storyblokInit, apiPlugin } from "@storyblok/react/rsc";

/** 2. Import your components */
import Restaurant from "./Restaurant";

/** 3. Initialize it as usual */
storyblokInit({
  accessToken: "your-access-token",
  use: [apiPlugin],
  components: {
    'restaurant': Restaurant
  },
});
export default function StoryblokProvider({ children }) {
  return children;
}

As soon as we save this, we should see the preview in the Visual Editor! Now, if you try to change the text or the image, you will see a live preview in real time. Congratulations, everything is now wired up; from here it is all about creating new fields and components.

Let’s now add the following fields (along with their types) to the Restaurant block –

  • description – Textarea 
  • cuisines – Multi-Options
  • vegan – Boolean
  • city – Text
  • dishes – Blocks

You can add more fields of your choice. Once the fields are added, we just need to map them in the frontend so our Restaurant component can use them.

For the cuisines field that we added, you can add options of your choice.

The dishes field is of type Blocks, a special type that allows other nestable components to be inserted inside it. As it is recommended to keep components as granular as possible, we will make another component named dish that can be added inside this field.

Let’s add a new nestable block named dish with the following fields – 

  • name – Text
  • description – Textarea
  • image – Asset

Let’s now go to the newly added restaurant, fill in the other fields, and add a dish block there.

To make all these fields visible, we need to use them in our Restaurant component, and we also need a Dish component on the frontend. Paste the following code in a new file named Dish.js under the components folder –


import { storyblokEditable } from "@storyblok/react/rsc";
const Dish = ({ dish }) => {
  return (
    <section {...storyblokEditable(dish)}>
      <div className="mx-auto flex w-80 flex-col justify-center bg-white rounded-2xl shadow-xl shadow-gray-400/20">
        <img
          className="aspect-video w-80 rounded-t-2xl object-cover object-center"
          src={dish.image?.filename}
          alt={dish.image?.alt}
        />
        <div className="p-6">
          <h1 className="text-2xl font-medium text-gray-700 pb-2">
            {dish.name}
          </h1>
          <p className="text text-gray-500 leading-6">{dish.description}</p>
        </div>
      </div>
    </section>
  );
};

export default Dish;

Make sure to add this to the storyblokInit components map, as we did with the Restaurant component. Then paste the following code in the Restaurant.js file –




import { storyblokEditable } from "@storyblok/react/rsc";
import Dish from "./Dish";

const Restaurant = ({ blok }) => (
  <main {...storyblokEditable(blok)}>
    <div>
      <div
        className={`min-h-[500px]
          relative
          flex
          items-end
          justify-center
          p-9
          my-6
          rounded-[5px]
          overflow-hidden`}
      >
        <div className="relative z-10 text-center">
          <h1 className="text-7xl text-white font-bold mb-3">{blok.name}</h1>
        </div>
        <img
          src={`${blok.background_image?.filename}`}
          alt={blok.background_image?.alt}
          className="absolute top-0 left-0 z-0 w-full h-full object-cover"
        />
      </div>
      <div className="px-6">
        <h2 className="text-2xl font-bold">{blok.description}</h2>
        <hr className="flex-grow border-t mt-2 border-gray-300" />
        <div className="flex justify-between mt-4">
          <p className="text-xl">📍{blok.city}</p>
          <div className="flex">
            {blok.cuisines.map((c) => (
              <span
                key={c}
                className="px-4 py-1 rounded-full mx-2 text-white text-sm bg-green-500"
              >
                {c}
              </span>
            ))}
          </div>
        </div>
        <div className="flex justify-between gap-4 mt-4">
          <p className="font-thin">Vegan {blok.vegan ? "✅" : "❌"}</p>
        </div>
      </div>
      <div className="mt-8 grid w-full grid-cols-1 gap-2 mx-auto sm:grid-cols-3">
        {blok.dishes.map((d) => (
          <div key={d._uid}>
            <Dish dish={d} />
          </div>
        ))}
      </div>
    </div>
  </main>
);

export default Restaurant;

Once this is done, we should see the other fields rendered as well. The best part is that whenever you change anything, like adding or deleting a dish or changing images or text, everything is visible in real time. Feel free to try dragging the components around to see the power of the Visual Editor.

At this point, we have covered all the important pieces and have a fully functional website. To make it better, let’s add a few more restaurants to the folder.

To polish the website, let’s improve the landing page as well. Delete the existing Home story, then go to the Block Library to make some changes and add new blocks. You can go ahead and delete the page block too. Let’s create the following components –

  • landing_page
    • body – Blocks
  • hero
    • headline – Text
    • subheadline – Text
    • image – Asset
    • layout – Single-Option with two options (constrained and full_width)
  • featured_restaurants
    • restaurants – Multi-Options

In the restaurants field of the featured_restaurants block, change the source to Stories, set the path to the Restaurants folder, and restrict the content type, as shown below –

This will allow us to select the restaurants in the block from the Restaurants folder.

Now let’s add the following components to our Next.js project –

LandingPage.js – 


import { storyblokEditable, StoryblokComponent } from "@storyblok/react/rsc";
 
const LandingPage = ({ blok, restaurants }) => (
  <main {...storyblokEditable(blok)}>
    {blok.body.map((nestedBlok) => (
      <StoryblokComponent restaurants={restaurants} blok={nestedBlok} key={nestedBlok._uid} />
    ))}
  </main>
);
 
export default LandingPage;

The StoryblokComponent here allows us to render components dynamically; it is also used behind the scenes by the StoryblokStory component.

Hero.js


import { storyblokEditable } from "@storyblok/react/rsc";
const Hero = ({ blok }) => {
  return (
    <div {...storyblokEditable(blok)} className={`min-h-[500px]
    relative
    flex
    items-end
    justify-center
    p-9
    my-6
    rounded-[5px]
    overflow-hidden ${blok.layout === 'constrained' ? 'container mx-auto' : ''}`}>
      <div className="relative z-10 text-center">
        <h1 className="text-6xl text-white font-bold mb-3">{blok.headline}</h1>
        <h2 className="text-4xl text-white font-light">{blok.subheadline}</h2>
      </div>
      <img
        src={`${blok.image?.filename}/m/filters:brightness(-50)`}
        alt={blok.image?.alt}
        className="absolute top-0 left-0 z-0 w-full h-full object-cover"
      />
    </div>
  );
};
export default Hero;

FeaturedRestaurants.js –


import { storyblokEditable } from "@storyblok/react/rsc";

const FeaturedRestaurants = ({ blok }) => {
  return (
    <section {...storyblokEditable(blok)}>
      <p className="text-5xl mt-8 text-center">Featured Restaurants</p>
      <div className="mt-8 grid w-full grid-cols-1 gap-2 mx-auto sm:grid-cols-3">
        {blok.restaurants.map((r) => (
          <div key={r.slug}>
            <div className="mx-auto flex w-80 flex-col justify-center bg-white rounded-2xl shadow-xl shadow-gray-400/20">
              <img
                className="aspect-video w-80 rounded-t-2xl object-cover object-center"
                src={r.content.background_image?.filename}
                alt={r.content.background_image?.alt}
              />
              <div className="p-6">
                <h1 className="text-2xl font-medium text-gray-700 pb-2">
                  {r.content.name}
                </h1>
                <p className="text text-gray-500 leading-6">{r.content.description}</p>
              </div>
            </div>
          </div>
        ))}
      </div>
    </section>
  );
};

export default FeaturedRestaurants;

The final version of StoryblokProvider.js with all the components should look like this –


/** 1. Tag it as client component */
"use client";
import { storyblokInit, apiPlugin } from "@storyblok/react/rsc";
/** 2. Import your components */
import FeaturedRestaurants from "./FeaturedRestaurants";
import LandingPage from "./LandingPage";
import Hero from "./Hero";
import Restaurant from "./Restaurant";
/** 3. Initialize it as usual */
storyblokInit({
  accessToken: "your-access-token",
  use: [apiPlugin],
  components: {
    restaurant: Restaurant,
    hero: Hero,
    landing_page: LandingPage,
    featured_restaurants: FeaturedRestaurants,
  },
});
export default function StoryblokProvider({ children }) {
  return children;
}

Now let’s go ahead and create a new Home story using the landing_page block. Add two blocks, hero and featured_restaurants, to its body field. Let’s also fill in their fields and select some restaurants in the Featured Restaurants field. You will see that we get an error here: the restaurants we select aren’t stored as complete stories but as references. If you check the API response, you will see just a few UUIDs under the field.

We need to resolve this field to get the restaurant data inside the component. For this, there is a parameter we can pass, named resolve_relations. We also need to resolve the relations for the bridge.

Resolving Relations (referencing one story inside another)

In the page.js file where we fetch the data, we need to add the parameter as shown below –

let { data } = await storyblokApi.get(
  `cdn/stories/${slug}`,
  {
    version: "draft",
    resolve_relations: ["featured_restaurants.restaurants"],
  },
  { cache: "no-store" }
);

Also, as mentioned earlier, StoryblokStory accepts bridge options. To pass the relations there, change the component tag in the same file to the following –

<StoryblokStory story={data.story} bridgeOptions={{resolveRelations: ["featured_restaurants.restaurants"]}} />

And that is all! The error should be gone now, and the preview will render the content properly with live editing enabled. We can now make any number of changes and add different blocks, and it will all be editable in real time.

Similarly, you can create more blocks (components) and fields. This was a basic app to show the capabilities of Next.js with a headless CMS; the possibilities are endless. Make sure to check out the space mentioned in the hint at the top, along with the repository, for a few more ideas. You can even try the demo space that is offered when creating a new space.

Conclusion

In this tutorial, we saw how to integrate Next.js and Storyblok, how to manage and consume content using the API, and how to enable a real-time visual editing experience with the Visual Editor. We also touched on a couple of features Next.js offers for building great user experiences.

Resources to look at – 

  • Storyblok Technologies Hub – https://www.storyblok.com/technologies
  • Storyblok React SDK – https://github.com/storyblok/storyblok-react
  • Next.js Docs – https://nextjs.org/
  • Storyblok Visual Editor – https://www.storyblok.com/docs/guide/essentials/visual-editor
  • Storyblok JS Bridge – https://www.storyblok.com/docs/Guides/storyblok-latest-js
Categories
Community

Can AI exist in a sustainable way?

The fictional but prescient Dr. Ian Malcolm noted in 1993’s Jurassic Park, “…your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” Generative AI’s rapid expansion, driven by the growing size of large language models, has felt somewhat akin to genetically engineering a T. rex. It is certainly fun and exciting, but what are the consequences?

Hardware makers are also beefing up CPUs, GPUs, and TPUs (all the PUs, really) to support the training and distribution of these models. But just as history, and our favorite science fiction movies, have shown us, there is a cost. Of course, we’re all wary that Skynet might emerge (and frankly, time will tell), but the more pressing matter is the consumption of electricity and water.

Addressing the AI elephant in the room

At Cisco, we’ve been baking predictive AI into our platforms for years, helping IT operations make insightful, even proactive, decisions. Across compute, storage, and networking infrastructure, applying predictive AI and observability has been incredibly useful in helping organizations scale and optimize their infrastructure usage. With APIs paving the way for multi-platform integration, we’re seeing wholesale Day 0 to Day N solutions that help organizations manage usage and more.


What the research says

While these gains are exciting, the underlying machine learning technologies behind predictive AI do not have the same resource needs as generative AI, which requires new approaches to reducing its carbon footprint and overall environmental impact.

Over the last five years or so, researchers at universities like Berkeley and the University of Massachusetts saw past the horizon and began experimenting with and proving methods to lessen the energy consumption (and carbon footprint) of technologies ranging from natural language processing (NLP) to large language models (LLMs). They even go as far as to prescribe both software/algorithm and hardware/infrastructure improvements to alleviate the carbon footprint created by training and using NLP and LLMs. Even better, similar work is underway to measure AI’s impact on water usage.

But that’s not the whole story…

As of today, the true nature of AI’s impact on energy consumption is REALLY hard to quantify. Article after article tries to dig into the actual effect of using generative AI technologies. The challenge is that the combination of a huge number of variables (what task is being done, how the data center is set up, what processors are being used, etc.) and IP secrecy (there is a LOT of money to be made here) makes reaching a true, tangible answer difficult. Not to mention, there is no way of knowing whether those running LLM-based services are employing any of the proven mitigations noted above.

The best the current research can offer is a range of estimates, from the annual energy usage of an average U.S. home up to that of an average mid-size country. That is an unmanageably wide range, which makes the actual impact, and the ways to mitigate it, difficult to identify.

So it seems that, at least in the short term, newer AI technologies will have an increased impact on energy consumption and water usage, to the detriment of the environment.

Problem solving, the developer way

So how can AI exist alongside sustainability efforts? Ah, that’s the interesting part: AI may just be the answer to its own problems. The difficulty mentioned above, figuring out the impact of AI usage on energy and water consumption, is currently being worked on by AI sustainability initiatives.

In theory, the resulting models would be able to suggest solutions to increased water and electricity consumption. In a slightly less sophisticated form, predictive AI is starting to be used to simply turn things off. This is the simplest answer: eliminate situations where energy is generated but not actually used, and the really cool thing is that AI can help us with that.

Amid this technological advancement, developers have an extraordinary opportunity to make a real impact on a sustainable future.

Getting involved

Cisco’s Build for Better coding challenge is open March 14 – April 22, 2024, and invites all developers to harness their skills in AI, observability, and sustainability to make a real-world impact. Learn more and commit your code by Earth Day.

Categories
Community

Developer Nation Donation Program: Supporting Charities for a Better Tomorrow

At Developer Nation, our commitment extends beyond just gathering data and insights from the developer community; we believe in giving back and making a positive impact. Through our Donation Program, we support causes that resonate with our mission and values, ensuring that our community’s voice is heard not only in surveys but also in charitable endeavours.

How Does it Work?

For every survey wave conducted, we allocate a portion of our resources to donate to charities chosen by the Developer Nation community. For each completed survey, we contribute $0.10 towards our donation goal, aiming to reach at least $1,600 in total donations!

The 26th Developer Nation Survey

During our 26th Developer Nation global survey, we were thrilled to receive 13,852 qualified responses. This remarkable engagement helped us reach $1,385 in donations, inspiring us to continue giving back to our community in various ways.

Charities Supported in the 26th Survey

In line with our commitment to supporting developer-centric initiatives and other causes valued by our community, we distributed our donations among several deserving charities. Here’s a breakdown of the organizations we supported and the contributions they received based on community votes:

1. FreeCodeCamp

FreeCodeCamp is dedicated to providing free coding education to individuals worldwide. With 5,197 (out of 13,852) survey participants showing their support, we were able to donate $520 to further their mission of making coding accessible to everyone.

2. Mozilla Foundation

The Mozilla Foundation champions an open and accessible internet for all, promoting privacy, security, and digital literacy. With 2,997 votes from our community, we contributed $300 to advance their mission of a healthier internet ecosystem.

3. Engineers Without Borders

Engineers Without Borders (EWB) harnesses the skills of engineers to address global challenges, from clean water access to sustainable infrastructure. Supported by 2,584 out of 13,852 survey participants, we donated $258 to support their impactful projects worldwide.

4. Ada Developers Academy

Ada Developers Academy is dedicated to increasing diversity in tech by providing women and gender-diverse individuals with comprehensive software development training. With 1,759 votes from our community, we contributed $176 to empower more individuals with the skills and opportunities they need to thrive in tech.

5. ADRA (Adventist Development and Relief Agency)

ADRA works tirelessly to address social injustice and provide humanitarian aid to communities in need worldwide. With 1,209 votes from our community, we donated $121 to support their efforts in promoting sustainable development, disaster relief, and advocacy.

Conclusion: Fostering Community and Social Responsibility

At Developer Nation, we believe that giving back to society is integral to our mission. We extend our deepest gratitude to our community members for their enthusiastic participation in our Donation Program. Your contributions have not only exceeded our expectations but also reaffirmed our collective commitment to making a difference.

As we continue our journey, we welcome your thoughts and suggestions for future donations. Your feedback plays a crucial role in shaping our philanthropic efforts, ensuring that we remain aligned with the causes that matter most to you.

Join us in our mission to build a brighter, more inclusive future through technology and compassion. Together, we can make a meaningful impact on the world!