
Create accessible websites with Drupal

neerajskydiver | Mon, 03/20/2023 - 15:00

As the world becomes increasingly digital, it’s more important than ever to ensure that websites are accessible to everyone. Accessibility is about designing websites that can be used by people with disabilities, such as visual or hearing impairments, as well as those who rely on assistive technology like screen readers. In this article, I’ll explore recommendations for creating accessible websites with Drupal, a popular open source content management system.

Why accessibility is important

First, consider why accessibility is important. According to the World Health Organization, over 1 billion people worldwide live with some form of disability. In the United States alone, 26% of adults have some form of disability. Ensuring that websites are accessible is not only a moral imperative, it’s also a legal requirement. In the US, websites must comply with the Americans with Disabilities Act (ADA) and Section 508 of the Rehabilitation Act, which sets standards for accessibility in federal agencies.

4 tips for creating accessible websites with Drupal

Here are some tips for creating accessible websites with Drupal:

  1. Choose accessible themes and modules: When selecting themes and modules for your Drupal website, it’s important to choose those designed with accessibility in mind. The Drupal community has created a number of themes and modules that are specifically designed for accessibility. You can also use tools like the Web Accessibility Evaluation Tool (WAVE) to test the accessibility of themes and modules before you install them.
  2. Design for keyboard navigation: Many people with disabilities rely on keyboard navigation to access websites. To ensure that your Drupal website can be navigated using a keyboard, you should make sure that all interactive elements are reachable with a keyboard and that the order in which elements are accessed with the keyboard makes sense. You can use the Drupal Accessibility module to test your website’s keyboard navigation.
  3. Use ARIA attributes: Accessible Rich Internet Applications (ARIA) is a set of attributes that can be added to HTML elements to make them more accessible. ARIA attributes can be used to provide additional information to assistive technology, such as screen readers. For example, you can use ARIA attributes to describe the purpose of a button or a link. Drupal has built-in support for ARIA attributes.
  4. Test for accessibility compliance: To ensure that your Drupal website is accessible, test it for compliance with accessibility standards like the Web Content Accessibility Guidelines (WCAG). There are a number of tools available for testing accessibility compliance, such as Accessibility Insights for Web.
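As a concrete illustration of tip 3, ARIA attributes can describe an otherwise ambiguous control to a screen reader. This is a generic HTML sketch, not output from any particular Drupal theme or module:

```html
<!-- Without a label, a screen reader announces only "button" for an icon-only control. -->
<button aria-label="Close dialog">
  <span aria-hidden="true">&times;</span>
</button>

<!-- aria-describedby points assistive technology at a longer explanation. -->
<a href="/report.pdf" aria-describedby="report-note">Download report</a>
<p id="report-note">PDF, 2 MB, opens in a new window.</p>
```

Note the `aria-hidden="true"` on the decorative icon: it keeps the screen reader from reading the multiplication sign aloud while the accessible name comes from `aria-label`.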

Examples of accessible websites using Drupal

Several organizations have successfully implemented accessible websites using Drupal. Here are two of my favorites.

  1. University of Colorado Boulder: The University of Colorado Boulder used Drupal to redesign its website with accessibility in mind. They used Drupal’s built-in accessibility features, as well as custom modules, to ensure that their website is compliant with accessibility standards. As a result, they saw a significant increase in traffic and engagement from users with disabilities.

  2. Connecticut Children’s Medical Center: Connecticut Children’s Medical Center used Drupal to create an accessible website for patients and their families. They used Drupal’s built-in accessibility features, as well as custom modules, to provide features like keyboard navigation and ARIA attributes. The website has been praised for its accessibility and has won several awards.

Access for all

Creating accessible websites is essential for ensuring that everyone can access digital content. Drupal has a number of features and modules that can help make websites more accessible, including built-in accessibility features, themes and modules designed for accessibility, and support for ARIA attributes. By implementing these recommendations, you can create an accessible website that provides a better user experience for all users.

Use the open source Drupal CMS to create accessible websites that provide open access to everyone.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Develop on Kubernetes with open source tools

rberrelleza | Mon, 03/20/2023 - 15:00

Over the last five years, a massive shift in how applications get deployed has occurred. It’s gone from self-hosted infrastructure to the world of the cloud and Kubernetes clusters. This change in deployment practices brought a lot of new things to the world of developers, including containers, cloud provider configuration, container orchestration, and more. There’s been a shift away from coding monoliths towards cloud-native applications consisting of multiple microservices.

While application deployment has advanced, the workflows and tooling for development have largely remained stagnant. They didn’t adapt completely or feel “native” to this brave new world of cloud-native applications. This can mean an unpleasant developer experience, involving a massive loss in developer productivity.

But there’s a better way. What if you could seamlessly integrate Kubernetes and unlimited cloud resources with your favorite local development tools?

The current state of cloud-native development

Imagine that you’re building a cloud-native application that includes a Postgres database in a managed application platform, a data set, and three different microservices.

Normally, this would involve the following steps:

  1. Open a ticket to get your IT team to provision a DB in your corporate AWS account.
  2. Go through documentation to find where to get a copy of last week’s DB dump from your staging environment (you are not using prod data in dev, right?)
  3. Figure out how to install and run service one on your local machine.
  4. Figure out how to install and run service two on your local machine.
  5. Figure out how to install and run service three on your local machine.

And that’s just to get started. Once you’ve made your code changes, you then have to go through these steps to test them in a realistic environment:

  1. Create a Git branch
  2. Commit your changes
  3. Figure out a meaningful commit message
  4. Push your changes
  5. Wait your turn in the CI queue
  6. CI builds your artifacts
  7. CI deploys your application
  8. You finally validate your changes

I’ve worked with teams where this process takes anywhere from a few minutes to several hours. But as a developer, waiting even a few minutes to see whether my code works was a terrible experience. It was slow, frustrating, and made me dread making complex changes.

Simplify your cloud-native development workflow with Crossplane and Okteto

Crossplane is an open source project that connects your Kubernetes cluster to external, non-Kubernetes resources and allows platform teams to build a custom Kubernetes API to consume those resources. This enables you to do something like kubectl apply -f db.yaml to create a database in any cloud provider. And this enables your DevOps or IT team to give you access to cloud infra without having to create accounts, distribute passwords, or manually limit what you can or can’t do. It's self-service heaven.
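The kubectl apply -f db.yaml step might look like the following claim. This is a sketch: the exact API group, kind, and field names depend on the Crossplane providers and composite resource definitions your platform team installs, so treat everything below as illustrative:

```yaml
# db.yaml -- a hypothetical Crossplane claim for a Postgres database.
# The apiVersion/kind assume a custom API published by your platform
# team; the names and parameters will differ in your cluster.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: my-app-db
  namespace: dev
spec:
  parameters:
    storageGB: 20
  writeConnectionSecretToRef:
    name: my-app-db-conn   # credentials land in this Kubernetes Secret
```

The point is the interface: as a developer you apply a small, Kubernetes-native YAML file, and the platform team's Crossplane configuration decides which cloud provider and instance class actually back it.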

The Okteto CLI is an open source tool that enables you to build, develop, and debug cloud native applications directly in any Kubernetes cluster. Instead of writing code, building, and then deploying in Kubernetes to see your changes, you simply run okteto up, and your code changes are synchronized in real time. At the same time, your application is hot-reloaded in the container. It’s a fast inner loop for cloud-native applications.
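An okteto.yml driving that okteto up workflow might look roughly like this. The image name, command, and paths below are placeholders, and the manifest schema varies between Okteto versions, so consult the Okteto documentation for the exact format:

```yaml
# okteto.yml -- hypothetical development manifest for one service.
name: api
image: okteto/golang:1   # dev container image (placeholder)
command: bash
sync:
  - .:/usr/src/app       # local code <-> container, synced in real time
forward:
  - 8080:8080            # reach the in-cluster service on localhost:8080
```

With a manifest like this, okteto up swaps the deployed container for a development container, keeps your local files synchronized into it, and forwards ports so the remote service feels local.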

On their own, each of these tools is very useful, and I recommend you try them both. The Crossplane and Okteto projects enable you to build a great developer experience for you and your team, making building cloud-native applications easier, faster, and joyful.

Here’s the example I mentioned in the previous section, but instead of a traditional setup, imagine you’re using Crossplane and Okteto:

  1. You type okteto up
  2. Okteto deploys your services in Kubernetes while Crossplane provisions your database (and data!)
  3. Okteto synchronizes your code changes and enables hot-reloading in all your services


At this point, you have a live environment in Kubernetes, just for you. You saved a ton of time by not having to go through IT, figuring out local dependencies, and remembering the commands needed to run each service. And because everything is defined as code, it means that everyone in your team can get their environment in exactly the same way. No degree in cloud infrastructure required.

But there’s one more thing. Every time you make a code change, Okteto automatically refreshes your services without requiring you to commit code. There’s no waiting for artifacts to build, no redeploying your application, or going through lengthy CI queues. You can write code, save the file, and see your changes running live in Kubernetes in less than a second.

How’s that for a fast cloud-native development experience?

Get into the cloud

If you’re building applications meant to run in Kubernetes, why are you not developing in Kubernetes?

Using Crossplane and Okteto together gives your team a fast cloud-native development workflow. By introducing Crossplane and Okteto into your team:

  • Everyone on your team can spin up a fully-configured environment by running a single command
  • Your cloud development environment spans Kubernetes-based workloads, as well as cloud services
  • Your team can share a single Kubernetes cluster instead of having to spin up one cluster on every developer machine, CI pipeline, and so on
  • Your development environment looks a lot like your production environment
  • You don’t have to train every developer on Kubernetes, containers, cloud providers, and so on.

Just type okteto up, and you’re developing within seconds!

Use Crossplane and Okteto for cloud-native development in a matter of seconds.


How I got my first job in tech and helped others do the same

discombobulateme | Sat, 03/18/2023 - 15:00

Two years ago, I got an interview with Sauce Labs when they opened an internship in the Open Source Program Office (OSPO). There was a lot of competition, and I didn’t have the usual technical background you might think a tech company would be looking for. I was stumbling out of a career in the arts, having taken a series of technical courses to learn as much Python and JavaScript as I could manage. And I was determined not to squander the chance I had at an interview working in open source, which had been the gateway for my newfound career path.

It was in the PyLadies of Berlin community that I met Eli Flores, a mentor and friend who ultimately referred me for the interview. I would probably not have had a chance for an interview in Sauce Labs if it hadn’t been for Eli.

My CV was bad.

I was trying to assert technical skills I didn’t have, emulating what I thought an interviewer for the position would want to read. Of course, the selection process is difficult from the other side, too. Somebody has to sift through stacks of paper to find a candidate with the right skills who fits the required role, while hoping for someone who brings a unique perspective to the organization. Referrals cut through this: you offer a chance at an interview based on the judgment of someone you trust. The risk, on the other hand, is ending up with clones of the people around you.

This is where referral programs shine the most. And this was the story of how I got my first job in tech.

Was a referral enough? Many would consider that they’d done their good deed for the year. But not Eli.

Eli was the first female software engineer hired by Sauce Labs in Germany. By the time I arrived, there were three of us: Eli, myself, and Elizabeth, a junior hired one year before. Based on her own struggles, Eli kept an eye on me, invited me to regular check-ins, and provided me with practical information about building my career path based on what the company would consider a checklist. She didn’t just share a link and walk away. She explained what it meant, and some “traps” built into the system. Leadership, at the time, hadn’t been trained to recognize their biases, and that had affected Eli’s career path.

Besides that, she put together a formal document explaining to those with the power to make decisions why they needed to give me a junior position at the end of my internship. She gathered information among my peers, found out who had hiring power, prepared them months before my contract ended, and gave me the insight I needed to defend my position.

I did my part.

When things looked uncertain about my contract renewal, I asked a friend and mentor what to do, and what was expected. I asked others who’d been in my place recently. I built a document measuring my progress along the months, ensuring that my achievements clearly intersected with the company’s interpretation of the engineering career path. With that, I could demonstrate that Eli was right: They had every reason to keep me, not according to subjective feelings, but with objective metrics.

Defining my role

There was still a big problem, though. Sauce wanted to keep me, but they didn’t know what to do with me. Junior roles require guidance, and the progressive collection of knowledge. I’d found a passion for the Open Source Program Office, where I could actively collaborate with the open source community. But an OSPO may be one of the most complex departments in a company. It gathers open source and business understanding, and it requires autonomy to make connections between business needs and the needs of open source. My peers were mostly staff engineers, contributing to open source projects critical to the business, and those are complex contributions.

One of my peers, Christian Bromann, was also seeking to grow his managerial skills, and so he took me under his wing. We started having regular 1-on-1 sessions, discussing what it meant to do open source in a business setting. He invited me to get closer to the foundations and projects he was part of, and we did several pair programming sessions to help me understand what mattered most to engineers tasked with meeting specific requirements. He unapologetically placed a chair at the company’s table for me, integrating me into the business, and so my role became clear and defined.

I had help from several colleagues in various other departments to stay and grow as a professional. They showed me all the other things I didn’t know about the corporate world, including the single most important thing I didn’t know existed in business: the ways we were actually working to make lives better. We had diversity, equity, and inclusion (DEI) groups, environmental initiatives, employee resource groups, informal mentorship, and cross-department support. The best thing about Sauce Labs is our people. They are incredibly smart and passionate humans, from whom I learn lessons daily.

A short time later, I decided it was time for me to give back.

I looked back and saw all the others that came before me, and helped me land a job I enjoy and that had critically improved my life. I urgently felt the need to bring another chair to this table for someone else. I started digging to find a way to make sense of how a for-profit organization could have a fellowship program.

A fellowship program in a for-profit organization

I was now formally occupying a role that bridged the OSPO and the Community departments. My main task was developer relations focused on open source communities (I know, it’s a dream job!).

The imbalance between contribution to and consumption of open source, especially within infrastructure (which business depends upon), is always a risk for the ecosystem. So the question is: what do a company and an open source project have in common?

The answer is humans.

There are many legal issues that make it hard for a for-profit company to run a fellowship program, and they differ from country to country because laws do. Germany has a lot of protections in place for workers. As my human resources department told me: “If it smells like a job, it is a job.” That usually means taxes and expenses, and of course cost is always a major determining factor when launching a new program.

An internship implies that you are training someone to be hired after the training period, and therefore requires a pre-approved budget with a year’s salary accounted for. A fellowship, by contrast, is a looser contract, closer to a scholarship, and spans only a specific amount of time. That makes it a great fit for an open source project, as similar initiatives like Google Summer of Code and Outreachy demonstrate.

The model I was proposing focused on the humans. I wanted to facilitate entry into the field for aspiring local technologists. I’d gone through similar programs myself, and I knew how frustrating they could be. They’re competitive, and to have any hope of being selected, you have to commit to months of unpaid work prior to the application process.

By creating several small local initiatives, I believed the whole open source ecosystem could benefit. I felt that lowering the barriers to entry by not being so competitive, and making the application process easier, would surely bring more people in, especially the ones unable to commit to months of unpaid work.

Fellowship

The Open Source Community Fellowship is a six-month paid program that connects for-profit organizations with open source projects to foster diversity in contribution and governance in open source.

Having employees as mentors decreases the cost of the program and brings huge value to a company, because it trains employees to be better mentors to others. Several studies demonstrate the benefits of formal and informal mentorship within companies, including a sense of belonging, and it tends to help retain talent. Many companies expect employees to demonstrate mentorship skills in order to reach senior levels, but it’s a skill that must be put into practice. By giving employees two hours a week to develop this skill, very little work time is lost for a lot of long-term benefit.

The open source project a business connects with needs to be critical to the business. If you’re going to pay a certain number of people to work for six months exclusively on a project, then there needs to be an obvious benefit from that expenditure. I encourage making fellowships interdisciplinary, because most open source projects need help with documentation, translation, design, and community support.

And yes, a fellowship should be six months, no less. Programs that offer only three months, perhaps with a stipend, aren’t long enough for proper onboarding and commitment. The maintainers of tomorrow need to be integrated into the communities of today, and that takes time.

Lastly, yes, it has to be a paid program. We need sponsorship, not just mentorship. Although mentorship helps you grow your network, we all have bills to pay. Paying fellows a salary allows them to truly commit to the project.

Sauce Labs is sponsoring the program for the first time; it started in December 2022 with five fellows across the USA. We hope this becomes a program that exemplifies the soul of the free software movement, so you can fork it, modify it, and redistribute it.

You’ve got the power

We’re often faced with the question, “What can I do?” Instead of feeling overwhelmed by difficulties that will always exist, acknowledge all the power you have in your current situation. Here are some ideas based on my own story:

  • Become a community organizer. No groups nearby? Create your own, and others will follow. Support is needed.
  • Become a mentor. Join initiatives or create a formal or informal program at your company.
  • Pay attention to your colleagues, and proactively offer your help. Even with a steady job, you still need help to grow. Use your privilege to get all voices heard in meetings.
  • Adopt a fellowship program to call your own. It’s a replicable model, easy to implement, and it brings innumerable benefits to the open source ecosystem.

There’s always something we can do to make the world around us a little better, and you are an important piece of that.

I wouldn't be where I am today without my mentors. Now, I have my dream job in open source.


How being open at work results in happy customers

katgaines | Fri, 03/17/2023 - 15:03

Every interaction we have with another person is influenced by our emotions. Those emotions have ripple effects, especially when it comes to how you do business. A happy customer patronizes your business more, recommends it to family and friends, writes a positive review, and ultimately spends more money with you than if they'd been disappointed. The most basic rule of good customer service reflects this: if something isn't going as expected, work to make it right (within reason), and you'll save the relationship.

In tech, you can act on this in a few ways. If you listen to customer feedback, create products they'll find useful and intuitive, and nurture those positive associations with your project, then you'll do well. But there's an oft-overlooked component of your customer's emotional perception of your business: the customer support team.

Customer support team

The interactions handled by a support team carry a high emotional charge for the customer. Software needs to work, and it needs to work now.

Software faces a unique challenge when it comes to how a customer-facing team builds a relationship: it's primarily a virtual interaction. For in-person customer care, an employee wields the superpower of eye contact, a strong emotional influence. Facial expressions force us to interact with more empathy than, say, a voice over the phone or an email response.

When that's not possible, though, the ability to shift the emotional tone to a calm one can be challenging. It's easy for a customer to have a natural bias toward online support. Maybe they've had a bad experience with heavily automated support in the past. There are plenty of badly configured chatbots, unnavigable phone menus, and dispassionate robotic voices to add fuel to the fire when emotions are already high. A customer may have talked to a support agent who's miserable at work and therefore apathetic to the outcome. The customer carries these experiences into their emotional approach when asking for help. This can create stress for the agent who picks up their ticket, and a vicious cycle repeats.

Because of the high-stakes, emotional nature of customer support (CS), your business has an opportunity. Corral these big emotions through the people who have the most access to them. The key to doing this successfully is to remember the ripple effect. A customer service agent with the necessary tools and knowledge at their fingertips is happy. A happy customer service agent has better conversations with customers. You can set yourself apart from competitors by creating happy customer support agents in an empowered and knowledgeable customer service team. How is this done?

Preparing for success

If you’re a leader in customer support, or a stakeholder elsewhere in the organization (engineering, product, and so on) who works with support a lot, you can work in key areas to make the lives of your support agents a little easier:

Create visibility

As a customer support agent, you need data about the customer you're helping. You need to know the systems your customer is using, and the products you're meant to support. It's crucial to have visibility into other teams in the organization, because they have that kind of data. You need to know who to ask for help when a problem arises, what the known issues are, what's being worked on already, and so on.

Siloed departments are a common major barrier to achieving visibility across teams. This can be made worse by tools and systems that don't connect departments, such as a spreadsheet directory or filing issues in an internal tracking tool. When this is the case, the customer service department can't get timely information on the status of an issue, and the engineering department can't get a good feel for how customers are experiencing the issue.

If your customer service team is given visibility into the complexity faced by your engineering teams, it's easier to clearly articulate the state of issues to customers and stakeholders. Customer service teams can create visibility for engineering, too. Crucial information about problems can come from your customers. When engineering has visibility into customer issues, they're better equipped to prioritize for customer needs.

Everyone works hard to prevent customers from being affected by issues, but that's not always realistic. Use the data your customers give your customer service team about what's wrong, and empower your customer service agents to become part of an incident response process rather than just reacting to it.

Make difficult moments easy

Customer support is a difficult job. If you have never worked in customer service, I'm giving you some homework: shadow your customer support team so you can understand where friction happens. It's a great way to get to know who your customers really are, by seeing them in their highest emotional moments, and seeing how your team navigates that. Customer service means all the questions coming your way, few of the answers at your fingertips, manual tasks to complete, and not enough people to share the load.

Make the job easy for customer service where you can. It will pay off. Maybe you can help the team automate mundane tasks to better focus on more interesting problems. Often this manifests in chatbots, but it's worth being creative here. For example, can automation be applied when escalating tickets to engineering? That could free an agent to work on their troubleshooting process, rather than the manual steps of making that escalation happen.
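As a toy illustration of that escalation idea, an automation hook could route a ticket to engineering based on a few fields, sparing the agent the manual steps. The ticket shape, thresholds, and team names below are hypothetical, not any particular platform's API:

```python
# Hypothetical auto-escalation rule: decide whether a support ticket
# should page engineering, based on severity and the number of affected
# customers. All fields, thresholds, and team names are illustrative.
from dataclasses import dataclass

@dataclass
class Ticket:
    severity: str            # "low" | "high" | "critical"
    affected_customers: int
    product_area: str

def should_escalate(ticket: Ticket) -> bool:
    """Return True when the ticket warrants an engineering escalation."""
    if ticket.severity == "critical":
        return True
    # A high-severity issue hitting many customers also escalates.
    return ticket.severity == "high" and ticket.affected_customers >= 10

def escalation_target(ticket: Ticket) -> str:
    """Pick the on-call rotation to notify (placeholder team names)."""
    routes = {"checkout": "payments-oncall", "auth": "identity-oncall"}
    return routes.get(ticket.product_area, "platform-oncall")
```

Encoding rules like these means the agent's troubleshooting notes, not the mechanics of the hand-off, get their attention during an escalation.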

You can use tooling your engineering team might already have in place to find these opportunities. Operations platforms can be shared to put both teams' metrics out in the open, helping everyone stay aligned on common goals.

The feedback loop required for a mature software development life cycle needs the customer service team to operate effectively. You can only do that with shared visibility across your organization.

Making it easy also means proactive design, especially when it comes to processes for critical moments. You probably have a process to manage major incidents. When you share these tools and processes with customer service, you enable greater visibility and gain valuable insight and teammates along the way. During an incident, customer service can play a few key roles:

Aggregating customer reported issues

When an incident is triggered, engineering needs to quickly find out how much of the service is impacted: how many features, how deeply they are affected, and whether they are slow or completely offline. Customer impact is part of that, which customer service can help uncover by associating inbound customer complaints with technical incidents to help drive priorities. As customer service receives reports of issues during an incident, that data becomes part of the incident's measured impact and is incorporated into the resolution process.

Prioritization of SLA

Your customer service team is in a unique position to help confirm the impact of an incident on the end user. They have insight into when services are reaching their Service Level Agreement (SLA) for certain customers, and can alert the responding team. This is an important piece of information to manage, and engineering teams might not have visibility into those contractual agreements. This aids in the prioritization of issues during incidents. CS can advise on whether or not an incident should be escalated or have its severity increased based on the customer intelligence they are receiving. More customer impact could mean a higher severity level for the incident, more responders included in the triage, and more stakeholders informed.
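One way CS tooling can surface that SLA insight is a deadline check that flags tickets nearing breach so responders can raise incident severity. This is a hypothetical helper; real SLA terms live in contracts and support platforms:

```python
# Hypothetical SLA check: given when a customer reported an issue and
# their contractual response window, flag tickets nearing breach.
# The warning threshold and time windows are illustrative only.
from datetime import datetime, timedelta

def sla_minutes_remaining(reported_at: datetime,
                          sla_window: timedelta,
                          now: datetime) -> float:
    """Minutes left before the contractual response deadline."""
    return (reported_at + sla_window - now).total_seconds() / 60

def needs_severity_bump(reported_at: datetime,
                        sla_window: timedelta,
                        now: datetime,
                        warn_threshold_minutes: float = 30) -> bool:
    """True when a ticket is within the warning window of its SLA."""
    return sla_minutes_remaining(reported_at, sla_window, now) <= warn_threshold_minutes
```

A check like this is exactly the customer intelligence the responding engineers usually lack: they can see the outage, but not which contracts it is about to violate.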

Liaisons and stakeholder communication

Speaking of stakeholders, customer service can take the lead when it comes to codifying communication practices for incidents. Customer service can take ownership of policies around messaging for customers, template responses, and communication processes. Templates with clear information and status pages to keep up to date are just some of the assets they can manage.

Post-incident follow-up

You'll always encounter customers who watch your status page like a hawk for updates. These customers and others ask customer service for updates if they don't see progress. You can ease the cognitive load of responding to these customers with the newfound connection with the incident process. If you hold incident reviews, then customer service must be part of that conversation. The tone of a conversation changes when a customer service agent has extra data to present to users about the impact of the incident, the resolution, and long-term plans for prevention. Your customer feels consistency, and your agent feels real ownership of the conversation.

At the end of the day, involving your customer service team through the entire process, from start to finish, allows them to gain control of their own destiny. It lets them provide valuable input back into the resolution process, and leverage their improved experience to improve the customer experience.

Invest in people

You can't create a happy employee out of thin air. Customer service leaders need help doing this. People need investment in career growth, the ability to collaborate with their peers, and a voice in the organization to know that their feedback is heard.

Your customer support team is not here to report on metrics to the business or to slog through the queue. Investing means giving them time and space to expand their skills and grow in their careers. For customer service leaders, this comes with knowing you may not keep them in support forever. You can build a strong team that offers phenomenal support, and also creates a hiring funnel into the rest of the business.

The first level of this is up-leveling agents within support. It's common to have a "premium" support team, or similar, for customers who need a high touch level of support and the ability to get help at any hour. Hiring 24x7 staff won't help a customer service leader shed their team's status as a cost center, but developing a staffing model that uses the existing team's time efficiently can. Sharing tooling with engineering can be one way to get there. For example, if engineering is on call for responding to issues, customer service can use the same tooling to provide a creative solution, rotating a specialized team for those odd hours or high priority issues.

This can open up a new career path for those who want to be on a team with specialized knowledge. Having a team that can be notified as-needed, rather than fully staffed at all times, staring at a queue, and waiting for incoming requests, allows leaders to scale their customer experience efficiently.

Empowering customer service teams to reach out to other teams and advocate for customers also creates new communications channels and opportunities. Your customer service team can serve as a gateway into your organization for technical personnel who are still building skills. A close relationship with engineering supports career growth. Shared processes promote this. So does a shadowing program, having a subject matter expert in support departments for different product areas, and intentionally building career paths to assist transitions when it's time to do so. Customer service agents who transition to other departments bring with them their customer focus and dedication to the customer experience. This is a valuable addition to teams in your organization which increases empathy across the board.

The modern software development life cycle doesn't end when code is checked into a repository and all the tests turn green. A constant feedback loop from users back into development planning links user requirements directly to the product management phase of the cycle. Organizations across various industries have seen the benefits of adopting shared goals and purposes across different teams. Include your customer service team in larger organization-wide initiatives, like DevOps transformations and automation projects. Doing this increases the effectiveness of customer-focused teams, and improving their day-to-day work in turn improves the experience they can provide for customers. In a nutshell: Happy agents translate to happy customers.

The way the teams within your organization interact affects customer experience. Open communication and shared knowledge can transform your business.

SCaLE Careers Business What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

My first pull request at age 14

Fri, 03/17/2023 - 15:00
My first pull request at age 14 neilnaveen Fri, 03/17/2023 - 03:00

My name is Neil Naveen, and I'm a 14-year-old middle schooler who's been coding for seven years. I have also been coding in Golang for two years.

Coding isn't my only passion, though. I've been practicing Jiu-Jitsu for four years and have competed in multiple competitions. I'm passionate about coding and Jiu-Jitsu, as they teach me important life lessons.


I started coding on CodeCombat, which taught me many fundamental coding skills.

One of the most exciting moments in my coding journey was when I ranked 16th out of around 50,000 players in a multiplayer arena hosted by CodeCombat. I was just 11 years old then, and it was an incredible achievement for me. It gave me the confidence to continue exploring and learning new things.


After CodeCombat, I moved on to a site that helped me hone my algorithm coding skills with problems tailored to teach specific algorithms.

Coding Game

When I turned 13, I moved on to bot programming on Coding Game. The competition was much more intense, so I had to use better algorithms. For example, when creating ultimate tic-tac-toe AI, I used algorithms like Minimax and Monte Carlo Tree Search to make my code fast and efficient.
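A compact minimax search can be sketched in a few lines. The example below is illustrative code for plain 3x3 tic-tac-toe, not the actual contest bot; the "ultimate" variant layers a board of boards on top of the same idea.

```python
# Minimax for 3x3 tic-tac-toe. Board: list of 9 cells, each 'X', 'O',
# or None. 'X' is the maximizing player.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): +1 if X can force a win, -1 if O can, 0 for a draw."""
    win = winner(board)
    if win == 'X':
        return 1, None
    if win == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    best = None
    for move in moves:
        board[move] = player                      # try the move...
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[move] = None                        # ...then undo it
        if best is None or \
           (player == 'X' and score > best[0]) or \
           (player == 'O' and score < best[0]):
            best = (score, move)
    return best

# X to move with X on squares 0 and 1: minimax finds the winning move, square 2.
print(minimax(['X', 'X', None, 'O', 'O', None, None, None, None], 'X'))  # → (1, 2)
```

With perfect play from both sides, `minimax([None] * 9, 'X')` evaluates the empty board to a draw (score 0), though the full search takes a moment; contest-grade bots add pruning or Monte Carlo Tree Search to stay inside time limits.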

GitHub CLI

One day, I saw my dad using an open source tool called GitHub CLI, and I was fascinated by it. GitHub CLI is a tool that allows users to interact with the GitHub API directly from the command line without ever having to go to GitHub itself.

Another day, my dad was reviewing PRs from a bot designed to detect vulnerabilities in dependencies.

Later, I thought about GitHub CLI and this bot, and wondered whether GitHub CLI itself was being monitored by a security bot. It turned out that it was not.

So I created a fix and included a security audit for GitHub CLI.

To my delight, my contribution was accepted and merged into the project, which was a thrilling moment for me. It was an excellent opportunity to contribute to a significant, popular tool like GitHub CLI, and to help secure it. Here's the link to my PR:

Commit your code

I hope my story will inspire other young people to explore and contribute to the open source world. Age isn't a barrier to entry. Everyone should explore and contribute. If you're curious, you can check out my website and my Leetcode profile, or watch the recording of my talk at CloudNativeSecurityCon.

I'm grateful for the opportunities I've had so far, and I'm excited to see what the future holds for me. Thank you for reading my story!

Age is not a barrier for contributing to open source.


Write documentation that actually works for your community

Thu, 03/16/2023 - 15:00
Write documentation that actually works for your community olga-merkulova Thu, 03/16/2023 - 03:00

What distinguishes successful and sustainable projects from those that disappeared into the void? Spoiler — it's community. Community is what drives an open source project, and documentation is one of the foundational blocks for building a community. In other words, documentation isn't only about documentation.

Establishing good documentation can be difficult, though. Users don't read documentation because it's inconvenient, it goes out of date very quickly, there's too much, or there's not enough.

The development team doesn't write documentation because of the "it's obvious to me, so it's obvious to everyone" trap. They don't write because they are too busy making the project exist. Things are developing too fast, or they're not developing fast enough.

But good documentation remains the best communication tool for groups and projects. This is especially true considering that projects tend to get bigger over time.

Documentation can be a single source of truth within a group or a company.  This is important when coordinating people toward a common goal and preserving knowledge as people move on to different projects.

So how do you write appropriate documentation for a project and share it with the right people?

What is successful community documentation?

To succeed in writing documentation in your community:

  • Organize your routine

  • Make it clear and straightforward

  • Be flexible; adapt the routine to each specific situation

  • Use version control

Image by:

(Olga Merkulova, CC BY-SA 4.0)

Being flexible doesn't mean being chaotic. Many projects have succeeded just because they are well-organized.

James Clear (author of Atomic Habits) wrote, "You do not rise to the level of your goals. You fall to the level of your systems." Be sure to organize the process so that the level is high enough to achieve success.

Design the process

Documentation is a project. Think of writing docs as writing code. In fact, documentation can be a product and a very useful one at that.

This means you can use the same processes as in software development: analysis, capturing requirements, design, implementation, and maintenance. Make documentation one of your processes.

Think about it from different perspectives while designing the process. Not all documentation is the right documentation for everyone.

Most users only need a high-level overview of a project, while API documentation is probably best reserved for developers or advanced users.

Developers need library and function documentation. Users are better served by example use cases, step-by-step guides, and an architectural overview of how a project fits in with the other software they use.

Image by:

(Olga Merkulova, CC BY-SA 4.0)

Ultimately, before creating any process, you must determine what you need:

  • Focus groups: This includes developers, integrators, administrators, users, sales, operations, and executives

  • Level of expertise: Keep in mind beginner, intermediate, and advanced users

  • Level of detail: There's room for a high-level overview as well as technical detail, so consider how you want these presented

  • Journeys and entry points: Consider how people find the documentation and how they use it

Pondering these questions helps you structure the information you want to communicate through documentation, and it defines clear criteria for what the documentation must contain.

Here's how to approach building a process around documentation.

Coding conventions

The code itself should make sense. Documentation should be expressed through good class names, file names, and so on. Create common coding standards and make a self-documenting code process by thinking about:

  • Consistent variable naming conventions

  • Understandable class and function naming schemes

  • Avoiding deep nesting, or not nesting at all

  • Not simply copying-and-pasting code

  • Keeping methods short

  • Avoiding magic numbers (use named constants instead)

  • Extracting methods and variables where it improves clarity

  • Using meaningful directory structures, modules, packages, and files
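As a hypothetical sketch of what several of these conventions look like in practice (the names and the 24-hour staleness policy are invented for illustration):

```python
# Named constant instead of a magic number: a bare 3600 scattered
# through the code would need a comment everywhere; the name documents itself.
SECONDS_PER_HOUR = 3600

def is_stale(age_seconds: float, max_age_hours: int = 24) -> bool:
    """Return True when a cached entry is older than the allowed age."""
    return age_seconds > max_age_hours * SECONDS_PER_HOUR

def prune_stale_entries(entries: dict) -> dict:
    """Keep only the entries whose age is still within the allowed window."""
    # The small extracted helper (is_stale) keeps nesting shallow and the
    # intent readable without extra documentation.
    return {name: age for name, age in entries.items() if not is_stale(age)}

print(prune_stale_entries({"fresh": 120.0, "old": 200000.0}))  # → {'fresh': 120.0}
```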

Testing along with engineering

Testing isn't only about how code should behave. It's also about how to use an API, functions, methods, and so on. Well-written tests can reveal base and edge case scenarios. There's even a test-driven development practice that focuses on creating test cases (step by step scenarios of what should be tested and how) before code development.
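A tiny test-first sketch of that practice (the slugify helper and its cases are invented for illustration): the assertions stating base and edge cases were written before the function body, which was then filled in to make them pass.

```python
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.strip().lower().split())

# The tests came first, pinning down base and edge cases:
assert slugify("Hello World") == "hello-world"                    # base case
assert slugify("  Spaces   everywhere ") == "spaces-everywhere"   # messy whitespace
assert slugify("") == ""                                          # edge case: empty input
print("all slugify tests pass")
```

Notice how the assertions double as documentation: they show exactly how the API is meant to be used.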

Version control

Version control, even for your documentation, helps you track the logic of your changes. It can help you answer why a change was made.

Make sure commit messages explain WHY a change was made, not just WHAT was changed.
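For example (the repository, file, and wording below are invented for illustration):

```shell
# Create a throwaway repo and commit a docs change with a message that
# records WHY the change happened, not just what changed.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "docs@example.com"
git config user.name "Docs Author"

echo "Use port 8443 for the admin interface." > install.md
git add install.md

# Weak:   git commit -m "update install.md"
# Better: capture the reason for the change:
git commit -q -m "Document the 8443 admin port: users reported firewall blocks on 8080"

git log --oneline -1
```

Months later, `git log` on that file answers "why 8443?" without archaeology.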

The more engaging the documentation process is, the more people will get into it. Add creativity and fun to it. You should think about readability of documentation by using:

  • software code conventions

  • diagrams and graphs (that are also explained in text)

  • mind maps

  • concept maps

  • infographics

  • images (highlight important parts)

  • short videos

By using different ways of communication, you offer more ways to engage with your documentation. This can help forestall misunderstanding (different languages, different meanings), and different learning styles.

Here are some software tools for creating documentation:

  • Javadoc, Doxygen, JsDoc, and so on: Many languages have automated documentation tools to help capture major features in code
  • Web hooks and CI/CD engines: Allow continuous publication of your documentation
  • reStructuredText, Markdown, Asciidoc: File formats and processing engines help you produce beautiful and usable documentation out of plain text files
  • ReadTheDocs: A documentation host that can be attached to a public Git repository
  • LibreOffice Draw, Dia: Produce diagrams, graphs, mind maps, roadmaps, planning, standards, and metrics
  • Peek, Asciinema: Record the commands you run in your terminal
  • VokoscreenNG: Capture your screen, including mouse clicks
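To illustrate the first item: documentation generators read structured comments straight from the source, so the docs live next to the code. Here is a small Python sketch using docstrings and the built-in doctest module (Javadoc and Doxygen apply the same idea to Java and C/C++); the conversion function is just an invented example:

```python
def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a Celsius temperature to Fahrenheit.

    The examples below are rendered by documentation tools AND
    verified by doctest, so the docs can't silently go stale:

    >>> celsius_to_fahrenheit(0)
    32.0
    >>> celsius_to_fahrenheit(100)
    212.0
    """
    return celsius * 9 / 5 + 32

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # fails loudly if the documented examples drift from reality
```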

Documentation is vital

Documenting processes and protocols is just as important as documenting your project itself. Most importantly, make information about your project, and the way it's created, engaging.

How quickly people can enter a project and its processes, and understand how everything works, is an important quality, because it helps ensure continued engagement. Simple processes and a clear understanding of what needs to be done come from building one shared "language" in the team.

Documentation is designed to convey value, which means demonstrating something through words and deeds. It doesn't matter whether the audience is a member of your team or a user of your application.

Think of the process as a continuum, supported by your means of communication, your processes, and your documentation.

Image by:

(Olga Merkulova, CC BY-SA 4.0)

Documentation is a means of communication.

Establishing good documentation can be difficult, but it's critical to effective communication. Follow this framework for writing and sharing documentation with the right people.


How I returned to open source after facing grief

Thu, 03/16/2023 - 15:00
How I returned to open source after facing grief Amita Thu, 03/16/2023 - 03:00

The open source community is a wonderful place where people come together to collaborate, share knowledge, and build amazing things. I still remember my first contribution in Fedora 12 years ago, and since then it’s been an amazing journey.

However, life can sometimes get in the way and cause us to take a break from participation. The COVID-19 pandemic has affected us all in different ways, and for some, it has been a time of immense loss and grief. I lost my loved one during the pandemic, and it has been the most difficult life event to deal with. It caused me to take a break from the Fedora community, as well.

For those in the open source community who have had to take a break due to the loss of a loved one, returning to coding and contributing to projects can feel daunting. However, with some thought and planning, it is possible to make a comeback and once again become an active member of the community.

First and foremost, it is important to take care of yourself and allow yourself the time and space to grieve. Grief is a personal and unique experience. There is no right or wrong way to go through it. It is important to be kind to yourself. Don’t rush into things before you are ready.

Once you’re ready to start contributing again, there are a few things you can do to make your comeback as smooth as possible.

Reach out to other contributors

This is a hard truth: nothing stops for you and technology is growing exponentially. When I rejoined Fedora recently, I felt the world had changed around me so fast. From IRC to Telegram to Signal and Matrix, from IRC meetings to Google Meet, from Pagure to GitLab, from mailing lists to discussion forums, and the list goes on. If you haven’t been active in your community for a while, it can be helpful to reach out to your friends in the community and let them know that you’re back and ready to contribute again. This can help you reconnect with people and get back into the swing of things. They may have some suggestions or opportunities for you to get involved in. I am grateful to my Fedora friend Justin W. Flory, who helped me out selflessly to ensure I found my way back into the community.

Start small

In the past, I served as Fedora Diversity, Equity, & Inclusion (D.E.I.) Advisor, which is one of the Fedora Council member positions. It was a big job. I recognized that, and I knew that were I to think of doing the same job immediately after my break, then it would have been a burden that could threaten to cause early burnout. It’s vitally important to take it easy. Start small.

If you’re feeling overwhelmed by the thought of diving back into a big project, start small. There are plenty of small tasks and bugs that need to be fixed, and tackling one of these can help you ease back into the community.

Find a mentor

If you’re feeling unsure about how to get started or where to focus your efforts, consider finding a mentor. A mentor (in my case, Justin W. Flory) can provide guidance, advice, and support as you make your comeback.

Show gratitude

An open source community is built on the contributions of many people. A healthy community is grateful for your contribution. Showing gratitude is part of making a community healthy. Show your gratitude to others who help you, guide you, and give you feedback.

Block your calendar

Initially, it may take some time to get back to the rhythm of contributing. It helps to schedule some time in your calendar for open source work. It can be weekly/bi-weekly, depending on your availability. Remember, every contribution counts, and that is the beauty of the open source world. This trick will help you to get into a regular routine.

Two steps forward, one step back

Finally, it’s important to remember that it’s okay to take a step back if you need it. Grief is not a linear process. You may find that you need to take a break again in the future. It’s important to be honest with yourself and others about your needs. Take the time you need to take care of yourself.

Return on your own terms

Returning to the open source community after a period of grief can be challenging. It’s also an opportunity to reconnect with something you are passionate about and make a positive impact in the world. In time, you’ll find that you’re able to pick up where you left off, and re-engage with the community once again.

I dedicate this, my first ever article, to my late younger brother Mr. Nalin Sharma, who left us at the age of 32 due to COVID-19 in 2021. He was a passionate engineer and full of life. I hope he is in a better place now, and I am sure he will always be alive in my memories.

Contributing to open source projects after losing a loved one can feel daunting. Here's my advice for how to rejoin the community.


How to set up your own open source DNS server

Wed, 03/15/2023 - 15:00
How to set up your own open source DNS server Amar1723 Wed, 03/15/2023 - 03:00

A Domain Name System (DNS) server associates a human-readable domain name with the numeric IP address of the machine that hosts it. This is how your web browser knows where in the world to look for data when you enter a URL, or when a search engine returns a URL for you to visit. DNS is a great convenience for internet users, but it's not without drawbacks. For instance, paid advertisements appear on web pages because your browser naturally uses DNS to resolve where those ads "live" on the internet. Similarly, software that tracks your movement online is often enabled by services resolved over DNS. You don't want to turn off DNS entirely, because it's very useful. But you can run your own DNS service so you have more control over how it's used.
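To make that name-to-address mapping concrete, here's a minimal Python sketch that asks the system resolver (the same machinery your browser uses) for a host's addresses:

```python
import socket

def resolve(hostname: str) -> list:
    """Return the unique IP addresses the system resolver reports for hostname."""
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

# "localhost" resolves locally, with no network round trip required.
print(resolve("localhost"))
```

Every ad, tracker, and image on a web page triggers lookups like this one, which is exactly what a local DNS server gets the chance to filter.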

I believe it's vital that you run your own DNS server so you can block advertisements and keep your browsing private, away from providers attempting to analyze your online interactions. I've used Pi-hole in the past and still recommend it today. However, lately, I've been running the open source project Adguard Home on my network. I found that it has some unique features worth exploring.

Adguard Home

Of the open source DNS options I've used, Adguard Home is the easiest to set up and maintain. You get many DNS resolution solutions, such as DNS over TLS, DNS over HTTPS, and DNS over QUIC, within one single project.

You can set up Adguard as a container or as a native service using a single script:

$ curl -s -S -L \

Look at the script so you understand what it does. Once you're comfortable with the install process, run it:

$ sh ./

Some of my favorite features of AdGuard Home:

  • An easy admin interface

  • Block ads and malware with the Adguard block list

  • Options to configure each device on your network individually

  • Force safe search on specific devices

  • Set HTTPS for the admin interface, so your remote interactions with it are fully encrypted

I find that Adguard Home saves me time. Its block lists are more robust than those on Pi-hole. You can quickly and easily configure it to run DNS over HTTPS.

No more malware

Malware is unwanted content on your computer. It's not always directly dangerous to you, but it may enable dangerous activity for third parties. That's not what the internet was ever meant to do. I believe you should host your own DNS service to keep your internet history private and out of the hands of known trackers such as Microsoft, Google, and Amazon. Try Adguard Home on your network.

Take control of your internet privacy by running your own DNS server with the open source project, Adguard Home.


Synchronize databases more easily with open source tools

Wed, 03/15/2023 - 15:00
Synchronize databases more easily with open source tools Li Zongwen Wed, 03/15/2023 - 03:00

Change Data Capture (CDC) uses server agents to record insert, update, and delete activity applied to database tables. CDC provides details on changes in an easy-to-use relational format. It captures the column information and metadata needed to apply the changes to the target environment for modified rows. This information is stored in a change table that mirrors the column structure of the tracked source table.

Capturing change data is no easy feat. However, the open source Apache SeaTunnel project is a data integration platform that provides CDC functionality, with a design philosophy and feature set that make these captures possible, above and beyond existing solutions.

CDC usage scenarios

Classic use cases for CDC include data synchronization and backups between heterogeneous databases. You may synchronize data between MySQL, PostgreSQL, MariaDB, and similar databases in one scenario. In a different example, you could synchronize the data to a full-text search engine. With CDC, you can create backups of data based on what CDC has captured.

When designed well, the data analysis system obtains data for processing by subscribing to changes in the target data tables. There's no need to embed the analysis process into the existing system.

Sharing data state between microservices

Microservices are popular, but sharing information between them is often complicated. CDC is a possible solution. Microservices can use CDC to obtain changes in other microservice databases, acquire data status updates, and execute the corresponding logic.

Update cache

The concept of Command Query Responsibility Segregation (CQRS) is the separation of command activity from query activity. The two are fundamentally different:

  • A command writes data to a data source.
  • A query reads data from a data source.

The problem is, when does a read event happen in relation to when a write event happened, and what bears the burden of making those events occur?

It can be difficult to update a cache. You can use CDC to obtain data update events from a database and let that control the refresh or invalidation of the cache.

CQRS design usually uses two different storage instances to support business query and change operations. Because two stores are used, you can ensure strong data consistency with distributed transactions, at the cost of availability, performance, and scalability. Alternatively, you can use CDC to ensure eventual consistency of the data, which has better performance and scalability, but at the cost of data latency, which the industry can currently keep in the range of milliseconds.

For example, you could use CDC to synchronize MySQL data to your full-text search engine, such as ElasticSearch. In this architecture, ElasticSearch searches all queries, but when you want to modify data, you don't directly change ElasticSearch. Instead, you modify the upstream MySQL data, which generates a data update event. This event is consumed by the ElasticSearch system as it monitors the database, and the event prompts an update within ElasticSearch.
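The consuming side of that pattern can be sketched as follows; the event shape and the in-memory stand-in for a search index are invented for illustration, not SeaTunnel's or ElasticSearch's actual APIs:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChangeEvent:
    op: str                     # 'insert', 'update', or 'delete'
    key: str                    # primary key of the changed row
    row: Optional[dict] = None  # new column values (None for deletes)

@dataclass
class SearchIndex:
    """Stand-in for a downstream store such as a full-text search engine."""
    docs: dict = field(default_factory=dict)

    def apply(self, event: ChangeEvent) -> None:
        """Replay one upstream change, keeping the index eventually consistent."""
        if event.op == "delete":
            self.docs.pop(event.key, None)
        else:  # insert and update are both upserts downstream
            self.docs[event.key] = event.row

# Replaying a stream of captured changes from the upstream database:
index = SearchIndex()
for event in [
    ChangeEvent("insert", "1", {"title": "CDC intro"}),
    ChangeEvent("update", "1", {"title": "CDC in depth"}),
    ChangeEvent("delete", "1"),
]:
    index.apply(event)
print(index.docs)  # → {} (the delete removed the upserted document)
```

The key property is that the consumer never queries the source directly; it only replays the ordered change stream, which is what keeps the two stores eventually consistent.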

In some CQRS systems, a similar method can be used to update the query view.

Pain points

CDC isn't a new concept and various existing projects implement it. For many users, though, there are some disadvantages to the existing solutions.

Single table configuration

With some CDC software, you must configure each table separately. For example, to synchronize ten tables, you need to write ten source SQLs and Sink SQLs. To perform a transform, you also need to write the transform SQL.

Sometimes, a table's configuration can be written by hand, but only when the volume is small. When the volume is large, type mapping or parameter configuration errors may occur, resulting in high operation and maintenance costs.

Apache SeaTunnel is an easy-to-use data integration platform hoping to solve this problem.

Schema evolution is not supported

Some CDC solutions support sending DDL events but do not support passing them on to the Sink so that it can make synchronous changes. Even a CDC tool that can capture such an event may not be able to propagate it through the engine, because it cannot change the type information of the transform based on the DDL event (so the Sink cannot follow the DDL event to apply the change).

Too many links

On some CDC platforms, each table being synchronized requires its own link. When there are many tables, a lot of links are required. This puts pressure on the source JDBC database and opens too many Binlog connections, which may result in repeated log parsing.

SeaTunnel CDC architecture goals

Apache SeaTunnel is an open source high-performance, distributed, and massive data integration framework. To tackle the problems the existing data integration tool's CDC functions cannot solve, the community "reinvents the wheel" to develop a CDC platform with unique features. This architectural design is based on the strengths and weaknesses of existing CDC tools.

Apache SeaTunnel supports:

  • Lock-free parallel snapshot history data.
  • Log heartbeat detection and dynamic table addition.
  • Sub-database, sub-table, and multi-structure table reading.
  • Schema evolution.
  • All the basic CDC functions.

Apache SeaTunnel reduces operations and maintenance costs for users and can dynamically add tables.

For example, when you want to synchronize the entire database and add a new table later, you don't need to maintain it manually, change the job configuration, or stop and restart jobs.

Additionally, Apache SeaTunnel supports reading sub-databases, sub-tables, and multi-structure tables in parallel. It also allows schema evolution, DDL transmission, and changes supporting schema evolution in the engine, which can be changed to Transform and Sink.

SeaTunnel CDC current status

Currently, CDC has the basic capabilities to support the incremental and snapshot phases. It also supports MySQL for real-time and offline use. The MySQL real-time test is complete, and the offline test is coming. Schema evolution is not supported yet because it involves changes to Transform and Sink. The dynamic discovery of new tables is not yet supported either, though some interfaces have been reserved for multi-structure tables.

Project outlook

As an Apache incubation project, the Apache SeaTunnel community is developing rapidly. The next community planning session has these main directions:

1. Expand and improve connector and catalog ecology

We're working to enhance many connector and catalog features, including:

  • Support more connectors, including TiDB, Doris, and Stripe.
  • Improve existing connectors in terms of usability and performance.
  • Support CDC connectors for real-time, incremental synchronization scenarios.

Anyone interested in connectors can review Umbrella.

2. Support for more data integration scenarios (SeaTunnel Engine)

There are pain points that existing engines cannot solve, such as the synchronization of an entire database, the synchronization of table structure changes, and the large granularity of task failure.

We're working to solve those issues. Anyone interested in the CDC engine should look at issue 2272.

3. Easier to use (SeaTunnel Web)

We're working to provide a web interface to make operations easier and more intuitive. Through a web interface, we will make it possible to display Catalog, Connector, Job, and related information, in the form of DAG/SQL. We're also giving users access to the scheduling platform to easily tackle task management.

Visit the web sub-project for more information on the web UI.

Wrap up

Database activity often must be carefully tracked to manage changes based on activities such as record updates, deletions, or insertions. Change Data Capture provides this capability. Apache SeaTunnel is an open source solution that addresses these needs and continues to evolve to offer more features. The project and community are active and your participation is welcome.

The open source Apache SeaTunnel project is a data integration platform that makes it easy to synchronize data.

Image by:

Jason Baker. CC BY-SA 4.0.


5 of the most curious uses of the Raspberry Pi

Tue, 03/14/2023 - 15:00
5 of the most curious uses of the Raspberry Pi AmyJune Tue, 03/14/2023 - 03:00

Recently, I was on a call where it was said that the open source community is a combination of curiosity and a culture of solutions. And curiosity is the basis of our problem-solving. We use a lot of open source when solving problems of all sizes, and that includes Linux running on the supremely convenient Raspberry Pi.

We all have such different lived experiences, so I asked our community of writers about the most curious use of a Raspberry Pi they've ever encountered. I have a hunch that some of these fantastic builds will spark an idea for others.

Experimentation with the Raspberry Pi

For me, the Raspberry Pi has been a great tool to add extra development resources on my home network. If I want to create a new website or experiment with a new software tool, I don't have to bog down my desktop Linux machine with a bunch of packages that I might only use once while experimenting. Instead, I set it up on my Raspberry Pi.

If I think I'm going to do something risky, I use a backup boot environment. I have two microSD cards, which allows me to have one plugged into the Raspberry Pi while I set up the second microSD to do whatever experimenting I want to do. The extra microSD doesn't cost that much, but it saves a ton of time when I want to experiment on a second image. Just shut down, swap microSD cards, reboot, and immediately I'm working on a dedicated test system.

When I'm not experimenting, my Raspberry Pi acts as a print server to put my non-WiFi printer on our home network. It is also a handy file server over SSH so that I can make quick backups of my important files.

Jim Hall

The popularity of the Raspberry Pi

The most amazing thing I've seen about the Raspberry Pi is that it normalized and commoditized the idea of small-board computers and made them genuinely and practically available to folks.

Before the Raspberry Pi, we had small-board computers in a similar fashion, but they tended to be niche, expensive, and nigh unapproachable from a software perspective. The Raspberry Pi was cheap, and cheap to the point of making it trivial for anyone to get one for a project (ignoring the current round of unobtainium it's been going through). Once it was cheap, people worked around the software challenges and made it good enough to solve many basic computing tasks, down to being able to dedicate a full and real computer to a task, not just a microcontroller.

We've got a plethora of good, cheap-ish, small-board computers, and this gives way to tinkering, toying, and experimenting. People are willing to try new ideas, even spurring more hobbyist hardware development to support these ideas.

Honestly, that is by far the most amazing and radical thing I've seen from the Raspberry Pi: how it's fundamentally changed everyone's perception of what computing is (at the level the Raspberry Pi excels at, anyway) and given rise not only to its own ecosystem but to countless others.

John ‘Warthog9' Hawley

Raspberry Pi for the bees

In 2018, my younger brother and I kept several beehives and used a Raspberry Pi and various sensors to monitor the temperature and humidity of our hives. We also planned to implement a hive scale to observe honey production in summer and measure the weight in winter to see if the bees had enough food left. We never got around to doing that.

Our little monitoring solution was based on a Raspberry Pi 2 Model B, ran Raspbian Stretch (based on Debian 9), and had a temperature and humidity sensor connected (DHT11). We had three or four of those sensors in the hives to measure the temperature at the entrance hole, under the lid, and in the lowest frame. We connected the sensor directly to the Pi and used the Python_DHT sensor library to read the data. We also set up InfluxDB, Telegraf, and finally, Grafana to visualize the data.

If you want to know more about our setup, we published an article on our little monitoring solution in Linux Magazine.
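As a flavor of the data path, here is a minimal Python sketch of how one DHT11 reading ends up in InfluxDB. The measurement and tag names are hypothetical, not the actual scripts from our setup; InfluxDB's line protocol is simply `measurement,tags fields timestamp`:

```python
# Hypothetical sketch: format one DHT11 reading for InfluxDB's
# line protocol (measurement,tag-set field-set timestamp).
def to_line_protocol(hive, sensor, temperature_c, humidity_pct, ts_ns):
    """Build one line-protocol record for a hive climate reading."""
    return (
        f"hive_climate,hive={hive},sensor={sensor} "
        f"temperature={temperature_c:.1f},humidity={humidity_pct:.1f} {ts_ns}"
    )

# A reading from the sensor under the lid of hive 1
print(to_line_protocol("1", "lid", 34.2, 61.0, 1670000000000000000))
```

Once records in this shape reach InfluxDB, Grafana can chart them directly.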

Heike Jurzik

Go retro with the Raspberry Pi

One thing I would love to create with the Raspberry Pi is a simulation of how to program machine language into an old-style computer using "switches and lights." This looks to be fairly straightforward using the GPIO pins on the Raspberry Pi. For example, their online manual shows examples of how to use GPIO to switch an LED on and off or to use buttons to get input. I think it should be possible with some LEDs and switches, plus a small program running on the Raspberry Pi to emulate the old-style computer. But I lack the free time to work on a project like this, which is why I wrote the Toy CPU to emulate it.
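The software side of such a front panel is simple: the switches just spell out one instruction word, bit by bit. A hypothetical Python sketch of that decoding (not an actual Toy CPU routine):

```python
# Hypothetical front-panel decoder: each switch is one bit of the
# instruction word being deposited, most significant bit first.
def switches_to_word(switches):
    """Convert a list of switch states (MSB first) to an integer."""
    word = 0
    for on in switches:
        word = (word << 1) | (1 if on else 0)
    return word

# Eight switches set to on,off,on,off,on,off,on,off
print(switches_to_word([True, False] * 4))   # prints 170 (0b10101010)
```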

Jim Hall

Build a toy with the Raspberry Pi

When my daughter was four, she asked for a "Trolls music box" for Christmas. She could picture it perfectly in her head. It would be pink and sparkly with her name on it. When she opened the box, the theme song from the popular movie would play. She could store her trolls and other treasures in the box. After searching everywhere online and in stores, I could not find one that measured up to her imagination. My husband and I decided we could build one ourselves in our own toyshop (i.e., his home office). The center of it all was, of course, the Raspberry Pi. He used light sensors and a Python script to make the song play at just the right moment. We placed the tech discreetly in the bottom of the music box and decorated it with her aesthetic in mind. That year, holiday magic was made possible with open source! 

Lauren Pritchett

People use the Raspberry Pi for all kinds of things. What's caught your attention?

Image by:

Dwight Sipler on Flickr


Jim Hall is an open source software advocate and developer, best known for usability testing in GNOME and as the founder + project coordinator of FreeDOS. At work, Jim is CEO of Hallmentum, an IT executive consulting company that provides hands-on IT Leadership training, workshops, and coaching.


Heike is a FLOSS enthusiast, technical writer, and author of several Linux books.

Heike discovered Linux in 1996, while she was working at the University's Center for Applied Computer Science. In her spare time Heike hangs out at Folk and Bluegrass sessions, playing the fiddle.


John works for VMware in the Open Source Program Office on upstream open source projects. In a previous life he's worked on the MinnowBoard open source hardware project, led the system administration team on, and built desktop clusters before they were cool. For fun he's built multiple starship bridges, a replica of K-9 from a popular British TV show, done in-flight computer vision processing from UAVs, and designed and built a pile of his own hardware.

He's cooked delicious meals for friends, and is a connoisseur of campy 'bad' movies. He's a Perl programmer who's been maliciously accused of being a Python developer as well.


Lauren is the managing editor for When she's not organizing the editorial calendar or digging into the data, she can be found going on adventures with her family and German shepherd rescue dog, Quailford. She is passionate about spreading awareness of how open source technology and principles can be applied to areas outside the tech industry such as education and government.


Calculate pi by counting pixels

Tue, 03/14/2023 - 15:00
Calculate pi by counting pixels Jim Hall Tue, 03/14/2023 - 03:00

For Pi Day this year, I wanted to write a program to calculate pi by drawing a circle in FreeDOS graphics mode, then counting pixels to estimate the circumference. I naively assumed that this would give me an approximation of pi. I didn't expect to get 3.14, but I thought the value would be somewhat close to 3.0.

I was wrong. Estimating the circumference of a circle by counting the pixels required to draw it will give you the wrong result. No matter what resolution I tried, the final pi calculation of circumference divided by diameter was always around 2.8.

You can't count pixels to calculate pi

I wrote a FreeDOS program using OpenWatcom C that draws a circle to the screen, then counts the pixels that make up that circle. I wrote it in FreeDOS because DOS programs can easily enter graphics mode by using the OpenWatcom _setvideomode function. The _VRES16COLOR video mode puts the display into 640×480 resolution at 16 colors, a common "classic VGA" screen resolution. In the standard 16-color DOS palette, color 0 is black, color 1 is blue, color 7 is a low-intensity white, and color 15 is a high-intensity white.

In graphics mode, you can use the _ellipse function to draw an ellipse to the screen, from some starting x,y coordinate in the upper left to a final x,y coordinate in the lower right. If the height and width are the same, the ellipse is a circle. Note that in graphics mode, x and y count from zero, so the upper left corner is always 0,0.

Image by:

(Jim Hall, CC BY-SA 4.0)

You can use the _getpixel function to get the color of a pixel at a specified x,y coordinate on the screen. To show the progress in my program, I also used the _setpixel function to paint a single pixel at any x,y on the screen. When the program found a pixel that defined the circle, I changed that pixel to bright white. For other pixels, I set the color to blue.

Image by:

(Jim Hall, CC BY-SA 4.0)

With these graphics functions, you can write a program that draws a circle to the screen, then iterates over all the x,y coordinates of the circle to count the pixels. For any pixel that is color 7 (the color of the circle), add one to the pixel count. At the end, you can use the total pixel count as an estimate of the circumference:

#include <stdio.h>
#include <graph.h>

int main()
{
    unsigned long count;
    int x, y;

    /* draw a circle */

    _setvideomode(_VRES16COLOR); /* 640x480 */
    _setcolor(7); /* white */
    _ellipse(_GBORDER, 0, 0, 479, 479);

    /* count pixels */

    count = 0;
    for (x = 0; x <= 479; x++) {
        for (y = 0; y <= 479; y++) {
            if (_getpixel(x, y) == 7) {
                count++;
                /* highlight the pixel */
                _setcolor(15); /* br white */
                _setpixel(x, y);
            }
            else {
                /* highlight the pixel */
                _setcolor(1); /* blue */
                _setpixel(x, y);
            }
        }
    }

    /* done */

    _setvideomode(_DEFAULTMODE);

    printf("pixel count (circumference?) = %lu\n", count);
    puts("diameter = 480");
    printf("pi = c/d = %f\n", (double) count / 480.0);

    return 0;
}

But counting pixels to determine the circumference underestimates the actual circumference of the circle. Because pi is the ratio of the circumference of a circle to its diameter, my pi calculation was noticeably lower than 3.14. I tried several video resolutions, and I always got a final result of about 2.8:

pixel count (circumference?) = 1356
diameter = 480
pi = c/d = 2.825000
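The underestimate is easy to reproduce without DOS graphics. Here's a short Python sketch that counts the pixels of a 480-pixel-diameter circle; it assumes the classic midpoint circle algorithm as a stand-in for OpenWatcom's _ellipse, which may rasterize slightly differently:

```python
# Count pixels on a rasterized circle, mimicking the DOS program.
# Assumption: the midpoint circle algorithm stands in for _ellipse.
def circle_pixels(r):
    """Return the set of (x, y) pixels of a midpoint-algorithm circle."""
    pixels = set()
    x, y, err = r, 0, 1 - r
    while x >= y:
        # plot all eight octants
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pixels.add((px, py))
        y += 1
        if err < 0:
            err += 2 * y + 1
        else:
            x -= 1
            err += 2 * (y - x) + 1
    return pixels

count = len(circle_pixels(240))   # radius 240 -> diameter 480
print(count, count / 480.0)       # the "pi" estimate comes out near 2.8
```

Dividing the pixel count by the diameter again lands near 2.8, so the effect isn't an artifact of the DOS rasterizer.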

You need to measure the distance between pixels to get pi

The problem with counting pixels to estimate the circumference is that the pixels are only a sample of a circular drawing. Pixels are discrete points in a grid, while a circle is a continuous drawing. To provide a better estimate of the circumference, you must measure the distance between pixels and use that total measurement for the circumference.

To update the program, you must write a function that calculates the distance between any two pixels: x0,y0 and x,y. You don't need a bunch of fancy math or algorithms here, just the knowledge that the OpenWatcom _ellipse function draws only solid pixels in the color you set for the circle. The function doesn't attempt to provide antialiasing by drawing nearby pixels in some intermediate color. That allows you to simplify the math. In a circle, pixels are always directly adjacent to one another: vertically, horizontally, or diagonally.

For pixels that are vertically or horizontally adjacent, the pixel "distance" is simple. It's a distance of 1.

For pixels that are diagonally adjacent, you can use the Pythagorean theorem of a²+b²=c² to calculate the distance between two diagonal pixels as the square root of 2, or approximately 1.414.

double pixel_dist(int x0, int y0, int x, int y)
{
    if (((x - x0) == 0) && ((y0 - y) == 1)) {
        return 1.0;
    }

    if (((y0 - y) == 0) && ((x - x0) == 1)) {
        return 1.0;
    }

    /* if ( ((y0-y)==1) && ((x-x0)==1) ) { */
    return 1.414;
    /* } */
}

I wrapped the last "if" statement in comments so you can see what the condition is supposed to represent.

To measure the circumference, we don't need to examine the entire circle. We can save a little time and effort by working on only the upper left quadrant. This also lets us know the starting coordinate of the first pixel in the circle; we'll skip the first pixel at 0,239 and instead use it as the starting x0,y0 coordinate for measuring the quarter-circumference.

Image by:

(Jim Hall, CC BY-SA 4.0)

The final program is similar to our "count the pixels" program, but instead measures the tiny distances between pixels in the upper left quadrant of the circle. You may notice that the program counts down the y coordinates, from 238 to 0. This accommodates the assumption that the known starting x0,y0 coordinate in the quarter-circle is 0,239. With that assumption, the program only needs to evaluate the y coordinates between 0 and 238. To estimate the total circumference of the circle, multiply the quarter-measurement by 4:

#include <stdio.h>
#include <graph.h>

double pixel_dist(int x0, int y0, int x, int y)
{
    ...
}

int main()
{
    double circum;
    int x, y;
    int x0, y0;

    /* draw a circle */

    _setvideomode(_VRES16COLOR); /* 640x480 */
    _setcolor(7); /* white */
    _ellipse(_GBORDER, 0, 0, 479, 479);

    /* calculate circumference, use upper left quadrant only */

    circum = 0.0;
    x0 = 0;
    y0 = 479 / 2;

    for (x = 0; x <= 479 / 2; x++) {
        for (y = (479 / 2) - 1; y >= 0; y--) {
            if (_getpixel(x, y) == 7) {
                circum += pixel_dist(x0, y0, x, y);
                x0 = x;
                y0 = y;
                /* highlight the pixel */
                _setcolor(15); /* br white */
                _setpixel(x, y);
            }
            else {
                /* highlight the pixel */
                _setcolor(1); /* blue */
                _setpixel(x, y);
            }
        }
    }

    circum *= 4.0;

    /* done */

    _setvideomode(_DEFAULTMODE);

    printf("circumference = %f\n", circum);
    puts("diameter = 480");
    printf("pi = c/d = %f\n", circum / 480.0);

    return 0;
}

This provides a better estimate of the circumference. It's still off by a bit, because measuring a circle using pixels is still a pretty rough approximation, but the final pi calculation is much closer to the expected value of 3.14:

circumference = 1583.840000
diameter = 480
pi = c/d = 3.299667
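The same correction can be sketched in Python against a simulated circle (again assuming a midpoint rasterizer as a stand-in for _ellipse): walk one octant in drawing order, add 1 for straight steps and the square root of 2 for diagonal steps, then multiply by 8:

```python
import math

# Measure inter-pixel distances along one octant of a midpoint-algorithm
# circle (assumed stand-in for _ellipse), then scale up to the full circle.
def measured_circumference(r):
    x, y, err = r, 0, 1 - r
    total = 0.0
    prev_x, prev_y = x, y
    while x >= y:
        # step is 0 (first pixel), 1 (straight), or sqrt(2) (diagonal)
        total += math.hypot(x - prev_x, y - prev_y)
        prev_x, prev_y = x, y
        y += 1
        if err < 0:
            err += 2 * y + 1
        else:
            x -= 1
            err += 2 * (y - x) + 1
    return 8.0 * total   # eight octants make the whole circle

c = measured_circumference(240)
print(c, c / 480.0)   # lands a little above 3.14, like the C program
```

The result overshoots pi slightly, just as the C program does, because diagonal steps of a pixel chain overestimate the length of a smooth arc.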

Happy Pi Day! Does counting pixels get you the circumference of a circle?


How I destroyed my Raspberry Pi

Tue, 03/14/2023 - 15:00
How I destroyed my Raspberry Pi hANSIc99 Tue, 03/14/2023 - 03:00

I wanted to write an article demonstrating "How to automate XYZ with the Raspberry Pi" or some other interesting, curious, or useful application around the Raspberry Pi. As you might realize from the title, I cannot offer such an article anymore because I destroyed my beloved Raspberry Pi.

The Raspberry Pi is a standard device on every technology enthusiast's desk. As a result, tons of tutorials and articles tell you what you can do with it. This article instead covers the dark side: I describe what you had better not do!

Cable colors

I want to provide some background before I get to the actual point of destruction. You have to deal with different cable colors when doing electrical work in and around the house. Here in Germany, each house connects to the three-phase AC supply grid, and you usually find the following cable colors:

  • Neutral conductor: Blue
  • (PE) Protective conductor: Yellow-green
  • (L1) Phase 1: Brown
  • (L2) Phase 2: Black
  • (L3) Phase 3: Grey

For example, when wiring a lamp, you pick up neutral (N, blue) and phase (L, 1/3 chance that it is brown), and you get 230V AC between them.

Wiring the Raspberry Pi

Earlier this year, I wrote an article about OpenWrt, an open source alternative firmware for home routers. In the article, I used a TP-Link router. However, the original plan was to use my Raspberry Pi 4.

Image by:

(Stephan Avenwedde, CC BY-SA 4.0)

The idea was to build a travel router that I could install in my caravan to improve the internet connectivity at a campsite (I'm the kind of camper who can't do without the internet). To do so, I added a separate USB Wi-Fi dongle to my Raspberry Pi to connect a second Wi-Fi antenna and installed OpenWrt. Additionally, I added a 12V-to-5V DC/DC converter to connect to the 12V wiring in the caravan. I tested this setup with a 12V vehicle battery on my desk, and it worked as expected. After everything was set up and configured, I started to install it in my caravan.

In my caravan, I found a blue and a brown wire, connected them to the 12V-to-5V DC/DC converter, put the fuses back in, and…

Image by:

(Stephan Avenwedde, CC BY-SA 4.0)

The chip that disassembled itself is the actual step-down converter. I was so confident that the blue wire was at 0V potential and the brown one at 12V that I didn't even measure. I have since learned that the blue cable is at 12V and the brown cable is at ground potential (which is pretty common in vehicle electronics).

Wrap up

Since this accident, my Raspberry Pi has never booted up. Because the prices for the Raspberry Pi have skyrocketed, I had to find an alternative. Luckily, I came across the TP-Link travel router, which can also run OpenWrt and does its job satisfactorily. In closing: It's better to measure too often than one time too few.

I learned the hard way, so you don't have to.

Image by:

kris krüg


Control your Raspberry Pi with Lua

Tue, 03/14/2023 - 15:00
Control your Raspberry Pi with Lua alansmithee Tue, 03/14/2023 - 03:00

Lua is a sometimes misunderstood language. It’s different from other languages, like Python, but it’s a versatile extension language that’s widely used in game engines, frameworks, and more. Overall, I find Lua to be a valuable tool for developers, letting them enhance and expand their projects in some powerful ways.

You can download and run stock Lua as Seth Kenlon explained in his article Is Lua worth learning, which includes simple Lua code examples. However, to get the most out of Lua, it’s best to use it with a framework that has already adopted the language. In this tutorial, I demonstrate how to use a framework called Mako Server, which is designed for enabling Lua programmers to easily code IoT and web applications. I also show you how to extend this framework with an API for working with the Raspberry Pi’s GPIO pins.


Before following this tutorial, you need a running Raspberry Pi that you can log into. While I will be compiling C code in this tutorial, you do not need any prior experience with C code. However, you do need some experience with a POSIX terminal.


To start, open a terminal window on your Raspberry Pi and install the following tools for downloading code using Git and for compiling C code:

$ sudo apt install git unzip gcc make

Next, compile the open source Mako Server code and the Lua-periphery library (the Raspberry Pi GPIO library) by running the following command:

$ wget -O \

Review the script to see what it does, and run it once you’re comfortable with it:

$ sh ./

The compilation process may take some time, especially on an older Raspberry Pi. Once the compilation is complete, the script asks you to install the Mako Server and the lua-periphery module to /usr/local/bin/. I recommend installing them to simplify using the software. If you no longer need them, you can uninstall them later:

$ cd /usr/local/bin/
$ sudo rm mako

To test the installation, type mako into your terminal. This starts the Mako Server, and you should see some output in your terminal. You can stop the server by pressing CTRL+C.

IoT and Lua

Now that the Mako Server is set up on your Raspberry Pi, you can start programming IoT and web applications and working with the Raspberry Pi’s GPIO pins using Lua. The Mako Server framework provides a powerful, easy-to-use API for Lua developers to create IoT applications, and the lua-periphery module lets Lua developers interact with the Raspberry Pi’s GPIO pins and other peripheral devices.

Start by creating an application directory and a .preload script, which contains Lua code for testing the GPIO. The .preload script is a Mako Server extension that’s loaded and run as a Lua script when an application is started.

$ mkdir gpiotst
$ nano gpiotst/.preload

Copy the following into the Nano editor and save the file:

-- Load and access the LED interface
local LED = require('periphery').LED

local function doled()
   local led = LED("led0") -- Open LED led0
   trace"Turn LED on"
   led:write(true)  -- Turn on LED (set max brightness)
   ba.sleep(3000)   -- 3 seconds
   trace"Turn LED off"
   led:write(false) -- Turn off LED (set zero brightness)
   led:close()
end

-- Defer execution
-- to after Mako has started


The above Lua code controls the main Raspberry Pi LED using the Lua-periphery library you compiled and included with the Mako Server. The script begins by loading the periphery library (a shared library) using the Lua require function. The returned value is a Lua table with all GPIO API functions; however, you only need the LED API, which you access directly by appending .LED after calling require. Next, the script defines a single function called doled that does the following:

  1. Opens the Raspberry Pi main LED identified as led0 by calling the LED function from the periphery library and by passing it the string led0.
  2. Prints the message Turn LED on to the trace (the console).
  3. Activates the LED by calling the write method on the LED object and passing it the Boolean value true, which sets the maximum brightness of the LED.
  4. Waits for 3 seconds by calling ba.sleep(3000).
  5. Prints the message Turn LED off to the trace.
  6. Deactivates the LED by calling the write method on the LED object and passing it the Boolean value false, which sets zero brightness of the LED.
  7. Closes the LED by calling the close function on the LED object.

At the end of the .preload script, the doled function is passed as an argument to a function that defers its execution. This allows the execution of the doled function to be deferred until after the Mako Server has started.

To start the gpiotst application, run the Mako Server as follows:

$ mako -l::gpiotst

The following text is printed in the console:

Opening LED: opening 'brightness': Permission denied.

Accessing GPIO requires root access, so stop the server by pressing CTRL+C and restart the Mako Server as follows:

$ sudo mako -l::gpiotst

Now the Raspberry Pi LED turns on for 3 seconds. Success!

Lua unlocks IoT

In this primer, you learned how to compile the Mako Server, including the GPIO Lua module, and how to write a basic Lua script for turning the Raspberry Pi LED on and off. I’ll cover further IoT functions, building upon this article, in future articles.

In the meantime, you can delve deeper into the Lua-periphery GPIO library by reading its documentation to understand more about its functions and how to use it with different peripherals. To get the most out of this tutorial, consider following the interactive Mako Server Lua tutorial to get a better understanding of Lua, web, and IoT. Happy coding!

Learn how to use the Lua programming language to program Internet of Things (IoT) devices and interact with General Purpose Input/Output (GPIO) pins on a Raspberry Pi.

Image by:

Dwight Sipler on Flickr


7 questions for the OSI board candidates

Mon, 03/13/2023 - 15:00
7 questions for the OSI board candidates Luis Mon, 03/13/2023 - 03:00

The Open Source Initiative (OSI) is a non-profit organization that promotes open source and that maintains, and evaluates compliance with, the Open Source Definition. Every year the OSI holds elections for its board of directors. It has become somewhat of a tradition for me to write questions for OSI board candidates.

In past years, I've asked questions about the focus of the organization and how the board should work with staff. The board has since acted decisively by hiring its first executive director, Stefano Maffulli. It has also expanded staffing in other ways, like hiring a policy director. To me, this is a huge success, and so I didn't pose those questions again this year.

Repeated questions

Other prior questions are worth repeating. In particular:

Your time: "You have 24 hours in the day and could do many different things. Why do you want to give some of those hours to OSI? What do you expect your focus to be during those hours?"

This question is a good one to ask of applicants to any non-profit board. Board work is often boring, thankless, and high-stakes. Anyone going into it needs to have not just a reason why but also a clear, specific idea of what they're going to do. "Help out" is not enough—what do you conceive of as the role of the board? How will you help execute the board's fiduciary duties around finances, executive oversight, and so on?

OSI has had trouble retaining board members in the past, including one current candidate who resigned partway through a previous term. So getting this right is important.

Broader knowledge: What should OSI do about the tens of millions of people who regularly collaborate to build software online (often calling that activity, colloquially, open source) but don't know what OSI is or what it does?

I have no firm answers to this question—there's a lot of room for creativity here. I do think, though, that the organization has in recent years done a lot of good work in this direction, starting in the best way—by doing work to make the organization relevant to a broader number of folks. I hope new board members have more good ideas to continue this streak.

New at OSI

Two of my questions this year focus on changes that are happening inside OSI.

Licensing process: The organization has proposed improvements to the license-review process. What do you think of them? 

Licensing is central to the organization's mission, and it is seeking comments on a plan to improve its process. Board members shouldn't need to be licensing experts, but since they will be asked to finalize and approve this process, they must have some theory of how the board should approach this problem.

OSI initiative on AI: What did you think of the recent OSI initiative on AI? If you liked it, what topics would you suggest for similar treatment in the future? If you didn't like it, what would you improve, or do instead?

The OSI's Deep Dive on AI represents one of the most interesting things the organization has done in a long time. In it, the organization deliberately went outside its comfort zone, trying to identify and bridge gaps between the mature community of open software and the new community of open machine learning. But it was also a big use of time and resources. Many different answers are legitimate here (including "that shouldn't be a board-level decision") but board members should probably have an opinion of some sort.

New outside forces

Finally, it's important for OSI to carefully respond to what's happening in the broader world of open. I offer three questions to get at this:

Regulation: New industry regulation in both the EU and US suggests that governments will be more involved in open source in the future. What role do you think OSI should play in these discussions? How would you, as a board member, impact that?

The OSI has done a lot of work on the upcoming EU Cyber Resilience Act, joining many other (but not all) open organizations. This will not be the last government regulation that might directly affect open software. How OSI should prioritize and address this is, I think, a critical challenge in the future.

Solo maintainers: The median number of developers on open source projects is one, and regulation and industry standards are increasing their burden. How (if at all) should OSI address that? Is there tension between that and industry needs?

Many of the candidates work at large organizations—which is completely understandable since those organizations have the luxury of giving their employees time for side projects like OSI. But the median open software project is small. I would love to hear more about how the candidates think about bridging this gap, especially when unfunded mandates (both from governments and industry standards) seem to be continually increasing.

Responsible licensing: There are now multiple initiatives around "responsible" or "ethical" licensing, particularly (but not limited to) around machine learning. What should OSI's relationship to these movements and organizations be?

A new generation of developers is taking the ethical implications of software seriously. This often includes explicitly rejecting the position that unfettered source-code access is a sine qua non of software that empowers human beings. OSI does not need to accept that position, but it must have some theory of how to react: silence? firm but clear rejection? constructive alliance? educational and marketing opportunity? 

The Bottom Line

The OSI has come a long way in the past few years, and recent board members have a lot to be proud of. But it's still a small organization, in tumultuous times for this industry. (And we've unfortunately had recent reminders that board composition matters for organizations in our space.) Every OSI member should take this vote seriously, so I hope these questions (and the candidates' answers on the OSI blog) help make for good choices.

The OSI is holding its board elections. Here are the important issues facing the Open Source Initiative.


Monitor Kubernetes cloud costs with open source tools

Mon, 03/13/2023 - 15:00
By mattray

Kubernetes is a powerful platform for managing dynamic containerized applications in the cloud, but it can be difficult to see where costs are incurred or to manage the cost efficiency of your resources. That's where OpenCost comes in. OpenCost is a cloud cost monitoring tool that integrates seamlessly with Kubernetes, allowing you to track your cloud spend in real time so you can optimize your resources accordingly.

[ Also read How to measure the cost of running applications ]

OpenCost is an open source CNCF Sandbox project and specification for the real-time monitoring of cloud costs associated with Kubernetes deployments. The specification models current and historical Kubernetes cloud spend and resource allocation by service, deployment, namespace, labels, and much more. This data is essential for understanding and optimizing Kubernetes for both cost and performance from the applications down through the infrastructure.

Requirements and installation

Getting started with OpenCost is a relatively straightforward process. OpenCost uses Prometheus for both monitoring and metric storage. You can install it from the Prometheus Community Kubernetes Helm Chart.

Install Prometheus

Begin by installing Prometheus using the following command:

$ helm install my-prometheus --repo https://prometheus-community.github.io/helm-charts prometheus \
  --namespace prometheus --create-namespace \
  --set pushgateway.enabled=false \
  --set alertmanager.enabled=false -f \
  https://raw.githubusercontent.com/opencost/opencost/develop/kubernetes/prometheus/extraScrapeConfigs.yaml

Install OpenCost

Next, install OpenCost using the kubectl command:

$ kubectl apply --namespace opencost -f \
  https://raw.githubusercontent.com/opencost/opencost/develop/kubernetes/opencost.yaml

This command deploys OpenCost to your cluster and starts collecting data. That is all that most installations require. You can use your own Prometheus installation or customize the deployment with the OpenCost Helm Chart.

Testing and access

OpenCost automatically detects whether it runs on AWS, Azure, or GCP, and you can configure it to provide pricing for on-premises Kubernetes deployments. Begin by forwarding ports for API and UI access:

$ kubectl port-forward --namespace opencost service/opencost 9003 9090

Within about five minutes, you can verify the UI and server are running, and you may access the OpenCost UI at http://localhost:9090.

Monitor costs

You are ready to start monitoring your cloud costs with OpenCost deployed to your Kubernetes cluster. The OpenCost dashboard provides real-time visibility into your cloud spend, allowing you to identify cost anomalies and optimize your cloud resources. You can view your cloud spend by nodes, namespaces, pods, tags, and more.

(Image: Matthew Ray, CC BY-SA 4.0)

The kubectl cost plugin provides easy CLI queries to Kubernetes cost allocation metrics. It allows developers, operators, and others to determine the cost and efficiency for any Kubernetes workload quickly.

$ kubectl cost --service-port 9003 \
    --service-name opencost --kubecost-namespace opencost \
    --allocation-path /allocation/compute pod \
    --window 5m --show-efficiency=true
+-------+---------+-------------+----------+---------------+
|CLUSTER|NAMESPACE|POD          |MONTH RATE|COST EFFICIENCY|
+-------+---------+-------------+----------+---------------+
|cl-one |kube-syst|coredns-db...| 1.486732 |      0.033660 |
|       |         |             | 1.486732 |      0.032272 |
|       |         |kube-prox...7| 1.359577 |      0.002200 |
|       |         |kube-prox...x| 1.359577 |      0.002470 |
|       |opencost |opencost...5t| 0.459713 |      0.187180 |
|       |kube-syst|aws-node-cbwl| 0.342340 |      0.134960 |
|       |         |aws-node-gbfh| 0.342340 |      0.133760 |
|       |prometheu|my-prome...pv| 0.000000 |      0.000000 |
|       |         |             | 0.000000 |      0.000000 |
|       |         |my-prome...89| 0.000000 |      0.000000 |
+-------+---------+-------------+----------+---------------+
| SUMMED|         |             | 6.837011 |               |
+-------+---------+-------------+----------+---------------+


You can also use the API to extract the data programmatically into your platform of choice.
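For example, here's a minimal Python sketch (not an official client) that queries the same allocation endpoint used by kubectl cost above and sums cost per allocation key. The URL, query parameters, and the totalCost field name are my assumptions about the OpenCost allocation API and may differ across versions:

```python
import json
import urllib.request

# Assumed endpoint, matching the --allocation-path used by kubectl cost above,
# via the port-forward set up earlier.
OPENCOST_URL = "http://localhost:9003/allocation/compute?window=5m&aggregate=namespace"

def fetch_allocations(url=OPENCOST_URL):
    """Query the port-forwarded OpenCost API and return the parsed JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def cost_by_key(allocation_sets):
    """Sum 'totalCost' per allocation key across a list of allocation maps.

    The payload is assumed to be a list of {key: {"totalCost": float}} maps,
    one per time window; the field names are assumptions, not a spec.
    """
    totals = {}
    for window in allocation_sets:
        for key, alloc in window.items():
            totals[key] = totals.get(key, 0.0) + alloc.get("totalCost", 0.0)
    return totals

# Usage against a live cluster (uncomment with the port-forward running):
# body = fetch_allocations()
# for key, cost in sorted(cost_by_key(body["data"]).items(), key=lambda kv: -kv[1]):
#     print(f"{key:20s} {cost:10.6f}")
```

If the response shape differs in your deployment, only cost_by_key needs adjusting, since fetching and aggregating are kept separate.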

Kubernetes optimization strategies

Now that you have a handle on your cloud costs, it's time to optimize your Kubernetes environment. Optimization is an iterative process. Start at the top of the stack (containers) and work through each layer. The efficiencies compound at each step. There are many ways to optimize Kubernetes for cost efficiency, such as:

  1. Look for abandoned workloads and unclaimed volumes: Pods and storage that are no longer in use or are disconnected continue to consume resources without providing value.
  2. Right-size your workloads: Ensure you're using the right size containers for your workloads. Investigate over- and under-allocated containers.
  3. Autoscaling: Autoscaling can help you save costs by only using resources when needed.
  4. Right-size your cluster: Too many or too-large nodes may be inefficient. Finding the right balance between capacity, availability, and performance may greatly reduce costs.
  5. Investigate cheaper node types: There's a lot of variation in CPU, RAM, networking, and storage. Switching to ARM architectures may unlock even greater savings.
  6. Invest in a FinOps team: A dedicated team within your organization can look for ways to unlock greater savings by coordinating reserved instances, spot instances, and savings plans.
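As a back-of-the-envelope illustration of item 2 (right-sizing), the underlying arithmetic fits in a few lines of Python. The function name, the 20% headroom default, and the efficiency measure are illustrative assumptions of mine, not OpenCost output:

```python
def rightsize(requested, observed_peak, headroom=1.2):
    """Suggest a new resource request: observed peak plus a safety margin.

    requested and observed_peak are in the same unit (e.g. millicores or MiB).
    Returns (suggestion, efficiency), where efficiency = observed_peak / requested.
    """
    if requested <= 0:
        raise ValueError("requested must be positive")
    efficiency = observed_peak / requested
    suggestion = observed_peak * headroom
    return suggestion, efficiency

# Example: a container requesting 1000m CPU but peaking at 150m is only 15%
# efficient; a request near 180m (150m * 1.2 headroom) would be a better fit.
```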
Get started today

Monitoring costs in a Kubernetes environment can be challenging, but with OpenCost, it doesn't have to be. To get started with OpenCost and take control of your cloud spend, visit the OpenCost website, get the code in GitHub, check out the OpenCost documentation, and get involved in the #opencost channel in the CNCF Slack.

[ Related read How to prioritize cloud spending ]



Keep the solar system in your pocket with a Raspberry Pi

Mon, 03/13/2023 - 15:00
By cherrybomb

In my previous article, I talked about Stellarium in a web browser. This stellar software allows for a 3D view of your local sky, with amazing telescopic and scripting features. In this article, I'll show you how to put those stars in your pocket using a Raspberry Pi 4, along with scripting some animations for fun. If you remember my Halloween-themed article, I covered using a Raspberry Pi to make a festive Halloween jack-o-lantern. I've repurposed this same Raspberry Pi to create a traveling galaxy box. That's what I'll be calling it today. So let's get packed for a galactic trip.

What do we need for this trip?
  • One Raspberry Pi 4 (with peripherals)
  • Raspberry Pi OS
  • A sense of astronomical wonder
Install Stellarium on your Raspberry Pi

To install Raspberry Pi OS, I followed the Raspberry Pi Imager instructions. I used the SD card edition of the install in my previous article, but you can choose whichever method you prefer using the Raspberry Pi Imager. Make sure you update your OS with the most recent software.

You can do this by running these commands:

$ sudo apt update
$ sudo apt upgrade

That might take a minute to run if, for example, you've left your Pi alone since October.

Cue some elevator music for me. It's gonna be a moment.

Next up, to install Stellarium, just run one simple command:

$ sudo apt install stellarium

And poof! It's installing.

Countdown to launch

Once you've got Stellarium installed, go to your application menu, and under education, you'll see the Stellarium app. I grabbed a screenshot just to show you.

(Image: Jess Cherry, CC BY-SA 4.0)


When you first open Stellarium, it asks you for some configuration items. This includes your location, timezone, and anything else it didn't catch automatically at startup. Since my Pi already had a location configured and the timezone was set, it opened up without issue directly to my location. However, I want to show you what those settings look like, along with the highlighted button to click on the very left to change the configuration as you need:

(Image: Jess Cherry, CC BY-SA 4.0)

You can also download more stars in the extras section:

(Image: Jess Cherry, CC BY-SA 4.0)

I downloaded all of the catalogs. Each download also tells you how many stars are in the catalog, which is an incredible feature.

Each one of these sections does have some pretty neat things to look over. I'm also covering scripting in this article, though, so I'll just briefly skim through some interesting sections I played with.

Choose from the many available extensions. In this screenshot, I was looking at the Meteor Showers plugin, which allows you to create some simulations of your own meteor showers.

(Image: Jess Cherry, CC BY-SA 4.0)

You also have a bunch of other interesting plugins as you go.

In the information section, you can customize what you want to be visible on your screen when you click on an object.

(Image: Jess Cherry, CC BY-SA 4.0)

In the Time section, you can pick the time and date formats and pick entirely different times and dates to look at different skies as they appeared in history or will appear in the future. In this set of screenshots, I chose something completely random, and it worked:

(Image: Jess Cherry, CC BY-SA 4.0)

Finally, before I get to the fun part, you have the tools section, where you can enable and disable how your personal planetarium works. You can change labels, add a location for screenshots, change how your mouse works, and so much more.

(Image: Jess Cherry, CC BY-SA 4.0)

Time to script some animations

In my previous article, I mentioned in passing that you can script your own animations. That's where the scripts tab comes in:

(Image: Jess Cherry, CC BY-SA 4.0)

Before I get to scripting an animation, you might notice that some animations are already available for you to watch and use. For instance, in the picture above, I have a partial lunar eclipse highlighted. This is one of the animated series you can watch before you get started with scripting on your own.

Because the animations don't stop on their own with the movement of time, you can press K on your keyboard at any time to stop the animation. While Stellarium uses the QtScript engine as its scripting tool of choice, some users choose to go with TypeScript. However, it's easy enough to make some simple "hello world" scripts using the script console within Stellarium. To do this, press the F12 key on your keyboard while inside the application, and a nifty console pops up.

Next on the list is a simple "Hello Galaxy" test. First, use the core.debug function to verify that the scripting engine prints correctly to the log window:

core.debug("Hello Galaxy");

Press the Play button in the top left of the console to run the script and:

(Image: Jess Cherry, CC BY-SA 4.0)

You can see the output in the next tab:

(Image: Jess Cherry, CC BY-SA 4.0)


Now that you know that works, you can move on to something a tad bit larger, like putting words on the screen.

In this example, you'll be using the LabelMgr (label manager) class to create a "Hello Galaxy" label to display on the screen. But this time, you can pick the size and color of the font, as well as its location on the screen.

var label = LabelMgr.labelScreen("Hello Galaxy", 200, 200, true, 60, "#ff080200");
core.wait(60);
LabelMgr.deleteLabel(label);

This label manager displays the text "Hello Galaxy" in all black on the screen with a font size of 60 for 60 seconds. The two 200s are the label's positions along the horizontal and vertical axes of the screen. The true Boolean makes the label visible on screen.

(Image: Jess Cherry, CC BY-SA 4.0)

Now that you have a cool onscreen setup, you can explore the scripting functionality further. It's not for the timid. I provided some notes on the scripting engine to a friend who specializes in JavaScript to see if he had any pointers for me, and the response I received was, "Oh dear."

Final thoughts

If you want to travel and teach astronomy, having this software on a Raspberry Pi is a great way to go. Easy installation lets you have stars in your pocket quickly. If you have location functionality enabled on your Pi, you can easily and immediately have a view of your local sky given to you in real time while you travel. For some more interesting bits on the scripting engine, I suggest you look over the official documentation. There's a good section on TypeScript integration for those who have experience in TypeScript.


(Image: NASA, used with permission)


Our favorite tech books written by women

Sun, 03/12/2023 - 15:00
By AmyJune

Whether you're a fast reader or someone who takes a book at a leisurely pace, a whole month is ample time to explore a new book. As March is Women's History Month, we asked contributors to tell us about their favorite books by women authors.

Leadership books

As an engineer turned agency CTO, one book I consider "required reading" for all of my technical leads and managers is The Manager's Path by Camille Fournier. Fournier does a stellar job outlining all aspects of the sometimes-awkward, sometimes-exhilarating progression from code to mentorship to management to executive leadership. The book is wonderfully tactical and real in its approach and will give you practical leadership tips no matter where you are in your engineering career. I highly recommend it.

Kat White

These three books inspired and empowered me to own and shape my career in tech. They offer three different points of view: the manager, the engineer, and the hacker. You can understand what it's like to be in these roles and decide the path you want to pursue. You'll get complete and relevant information to act mindfully and to deal with obstacles.

Hacking Capitalism can be particularly useful to marginalized people who are willing to fight back.

Camilla Conte

Profit Without Oppression by Kim Crayton

Rob McBryde

Programming guides

—Lewis Cowles

I'm inspired by Laura Thomson, current SVP of Engineering at Fastly, Board Trustee at Internet Society, and co-author of the best-selling PHP and MySQL Web Development from Addison-Wesley Professional.

Ben Ramsey

Women making history

I enjoyed Programmed Inequality by Mar Hicks

—Evelyn Mitchell

Proving Ground: The Untold Story of the Six Women Who Programmed the World's First Modern Computer by Kathy Kleiman

DJ Billings

Grace Hopper: Admiral of the Cyber Sea by Kathleen Broome Williams


Share your favorites

What books are you reading by women authors? Share your recommendations in the comments!


(Image: Flickr, CC BY 2.0, modified by Jen Wike Huger)


Open source reimagines the traditional keyboard

Sat, 03/11/2023 - 16:00
By luisteia

After one hundred years of evolution in typewriting capability, our keyboard layout remains essentially the same: a rectangular board with apparently random clusters of keys. Why is this, and how can we do better?

Keyboard origins

After the first mechanical typewriters appeared in the early 1900s, the entire typing effort was human-powered via rather complex mechanical systems. One example is the Continental typewriter commercialized in 1904, seen below. The orthogonal clustering of keys―where each row is side-shifted with respect to the following―resulted from the need to avoid collisions between the various cantilever beams that connect each button, via a complex mechanism, to the relief that strikes the sheet, thus creating the letter. This design, with the keys of one row appearing in the space between the keys of the preceding row, still exists on today's keyboards.

(Image: Original photo by Mali Maeder. Modified by Luis Teia. CC BY-SA 2.0)

With passing years, more compact and robust electrical motors and actuators gradually replaced the cantilever part of this complex mechanism (for example, see the IBM Selectric commercialized in 1961). With the advent of computers, the ability to transform typing action into visible text relies on three main parts: The screen, the computer, and the keyboard (in a laptop, these three subsystems exist in one portable block). Overall, the physical mechanism was almost entirely replaced by electronics (i.e., wiring, electrically powered, and governed by programming), as seen with any of the current keyboards for sale on Amazon. Once again, the orthogonal disposition of keys remained essentially unchanged.

(Image: Original photo by Viktor Hanacek and Athena. Modified by Luis Teia. CC BY-SA 2.0)

Overall, this translates into more than 100 years of monumental evolution in how we transform typing action into printed or screen-perceived text (from human-powered to electrical-powered with advanced computing), but with little to no development of the human-to-machine interface (both in terms of its logic arrangement and ergonomic features).

The missing link

I believe that during the electrification of typewriters over the past century, the link between human physiology and the arrangement of the keys on a keyboard, as an interface between human and machine, has been mostly overlooked. A critical question still has no answer: Is the current arrangement of keys on my keyboard the most efficient and intuitive solution, effortwise and timewise? This article hopes to kick off an exchange of ideas on fundamental out-of-the-box approaches to creating novel and radical open source keyboard constructions that address this question. The idea presented here is just one possibility out of many.

[ Related read 17 open source technologists share their favorite keyboards ]

Considering the potential benefits in terms of effort and time saved from a more efficient typing activity, multiplied by millions or even billions of users that type every day, the overall effect can yield a potentially colossal reward with a global impact on both social and professional activities.

Ergonomic reflection

Any design starts with an adequate understanding of the dynamics of the system with which the device interfaces. For example, a jet engine interfaces with an aircraft. In turn, a keyboard interfaces with a person, making the following questions pertinent during its design:

  • What is the easiest and most effortless way to place a word on a screen?
  • How does the mind construct any given word, and how do I type the letters to do so?
  • What are the words that I most often use?
  • How do the way words are typed and their recurrence affect the key layout?
  • Do I want to move my fingers, hands, arms, or a combination of all?
  • What is the effort impact on a full day of typing?

The natural geometry of a hand and fingers comprises a center at the palm with a combination of fanning-out fingers, which can be radially extended. The hand (including fingers) can be rotated about the wrist, and the arm, in turn, can translate (including the hand and fingers) from front to back and side to side. The amount of effort required to exercise these degrees of freedom increases as one moves from the fingers to the arm simply because of the increased mass that is displaced. Moving our arms instead of our fingers makes us slower and tires us more quickly.

(Image: Original photo by Anna Shvets. Modified by Luis Teia. CC BY-SA 2.0)

The oscillatory flow within words

The English language (for example) has more than 200,000 words, with a minimum of 3,000 words required for effective communication. The basis of such language starts with pronouns (I, you, he, she, it, we, and they), followed by the primary verbs (be, do, and have). This small group of words is probably the most used in everyday typing activity, comprising English-speaking sentences worldwide. A foundation can be constructed for an efficient keyboard arrangement with this information. A similar approach could be undertaken to match other languages. No one way suits all languages and/or user preferences.
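As a quick, informal check of this vowel-consonant alternation (a toy measure of my own, not drawn from any keyboard literature), you can count how often adjacent letters in those core words switch between the two groups:

```python
VOWELS = set("aeiou")

def alternation_rate(word):
    """Fraction of adjacent letter pairs that switch between vowel and consonant."""
    letters = word.lower()
    pairs = list(zip(letters, letters[1:]))
    if not pairs:
        return 0.0  # one-letter words like "I" have no transitions
    switches = sum((a in VOWELS) != (b in VOWELS) for a, b in pairs)
    return switches / len(pairs)

# Core words from the paragraph above: pronouns and primary verbs.
words = ["i", "you", "he", "she", "it", "we", "they", "be", "do", "have"]

# alternation_rate("have") is 1.0: every keystroke switches groups.
```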

Words are typed in a natural alternating way moving between vowels and consonants. This back-and-forth oscillation between vowels and consonants perpetuates in sentences. Vibration theory tells us that all oscillations occur around a center; hence, it is natural to assume (as an ergonomic reflection) that a center exists in our future keyboard. From a physics perspective, particle or single-point entity oscillatory systems (not distributed)―such as those found in planetary orbits and quantum resonators―tell us that oscillations of a point about a center (in our case, the point is our finger focus controlled by our linear mind) incur the presence of circular or arc type motion. Finally, the impact of these analogies in our keyboard design suggests the presence of a center and an arc distribution of information. Indeed, the ancient Greek theatres (and, as a legacy, modern ones) embody such architecture―i.e., a center surrounded by circular arcs for effective sound migration or information transfer.

Consider the case where the vowels are clustered at the center of the keyboard (as shown below), in a vowel core, so to speak. It is a plausible assumption that of all the vowels, A―the first letter of the alphabet―is the most suitable to be at the center (of both the vowel core and the keyboard). The other four vowels, E, I, O, and U, form four quadrants surrounding the letter A, placed in a second circular track of higher radius. In this disposition, it is logical to distribute the consonants in layers around this core. In turn, the arrangement of the consonants should be such that the keywords mentioned above can be written easily and intuitively (and preferably horizontally from left to right, just as the mind writes when using pen and paper). Such a circular layout reflects and favors the natural word-forming movement that transits from vowels to consonants and vice versa. Below are a few examples of how a word is typed in such a keyboard architecture.

(Image: Luis Teia, CC BY-SA 4.0)

The typing path or sequence is superimposed (highlighted in red) for some of the keywords above. The inward and outward motion of the digitizing finger is visible (for example, see the word Have in the top left corner). It does not really matter if one or multiple fingers type the word because our linear mind can only focus on one letter at a time. That is, the path shown follows the typing sequence of the mind as it constructs the word on a keyboard.
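To make the idea of a typing path concrete, here is a toy model in Python. The polar coordinates are illustrative assumptions (A at the center, the other vowels on an inner ring, a few consonants farther out), not the actual layout; the sketch simply sums the straight-line finger travel for a word:

```python
import math

# Hypothetical polar placement (radius, angle in degrees) for a few keys:
# A at the center, E/I/O/U on an inner ring, some consonants on an outer ring.
LAYOUT = {
    "A": (0, 0),
    "E": (1, 45), "I": (1, 135), "O": (1, 225), "U": (1, 315),
    "H": (2, 30), "V": (2, 60), "T": (2, 90),
}

def to_xy(radius, angle_deg):
    """Convert a polar key position to Cartesian coordinates."""
    rad = math.radians(angle_deg)
    return radius * math.cos(rad), radius * math.sin(rad)

def path_length(word):
    """Total straight-line distance the finger travels to type the word."""
    points = [to_xy(*LAYOUT[ch]) for ch in word.upper()]
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))
```

Under this placement, typing "HAVE" traces the out-and-back oscillation described above: out to H, in through A, out to V, and back in toward E.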

After forming the inner vowel circular core surrounded by an inner and outer arc of consonants, several tracks are gradually added with increasing radius. The resulting keyboard layout is shown below and is briefly explained as follows. First is the track possessing signs and larger keys such as Space and Enter―the most used and largest keys on a keyboard―followed by a track with function keys and, finally, a smaller track at the top for higher functions.

The keys surrounding the enclosure are attributed to hotkeys, such as volume control, play, stop, and internet browser control. Keys with opposites, such as < > and ( ), are placed on either side of the vertical symmetry axis. A higher-level keyboard containing the numerals and algebra operators (as well as additional characters) is available by continuously pressing the Shift key (at the bottom left) or by pressing the CAPS key once (at the top center). One of the most used functions in all programs is SAVE; hence, a key is placed at the top for easy access.

My previous open source publication with The Journal of Open Engineering (TJOE) has shown that the number distribution around the operators substantially reduces computational time and effort. For further details, take a look at the TJOE publication New Calculator Design for Efficient Interface based on the Circular Group Approach. I will explain in-depth how the one-handed keyboard was created and its advantages in a more lengthy future publication (possibly in The Journal of Open Engineering).

(Image: Luis Teia, CC BY-SA 4.0)

The resulting circular one-handed keyboard is around 24.5cm long, 21.2cm wide, and 4cm thick. The vertical symmetry makes it suitable for right- and left-handers. The most significant change between the two usages is the location of the Spacebar and Enter keys, but one can easily train the mind to swap these (i.e., for a person using their left hand, the thumb presses the Enter and the little finger, the Spacebar, and vice versa). Regarding its general design, the streamlined arc outboard enclosure was inspired by the bridge of the starship Enterprise from the TV series Star Trek: The Next Generation.

Have you ever noticed how highway curves incline to push and keep the cars on track naturally? Similarly, the inward curvature of the surface of the consonant (and outer) tracks generates a radial reactive force component after each pressed key that helps guide the finger back to the center. It assists with the contraction part of the oscillatory movement of writing a word. Conversely, the convex curvature of the vowel core does the opposite. It helps with the expansion part of the oscillatory movement of writing a word.

(Image: Luis Teia, CC BY-SA 4.0)

Computer experts know that data swapping (RAM data transfer to and from a hard drive) during information exchange slows the system down significantly. Likewise, swapping one's hand and attention between the keyboard and mouse generates a similar detrimental effect. The one-handed keyboard is designed to commit one hand solely to writing while the other hand is fully available to operate the mouse, thus removing the inefficiency caused by keyboard-mouse hand swapping.

To envision the ultimate use of this keyboard, consider the analogy of the skilled accountant who can quickly enter large calculations on a keypad without looking. The ultimate goal is extrapolating this skill to the one-handed keyboard. This is the great expectation of its creator―to provide the ability for a person to write whole texts with less effort and more speed than with regular rectangular keyboards. Of course, the mind needs to map the new keyboard first, but its largely intuitive arrangement should facilitate this process.

Computer-aided design construction

The ability to successfully sell a product depends directly on how good and realistic it looks. Computer simulation software is the cheapest way to drive a product through its design and maturation cycle. The alternative, building and testing physical models, is expensive and time-consuming. Open source Computer-Aided Design (CAD) software offers an even greater advantage, as there are no usage licenses to pay. The current drawback is that these tools still lag behind the capability of commercially available alternatives. There are several front-running open source software solutions for CAD. Having worked for over a decade with commercial CAD software, I believe FreeCAD is one of the most complete free counterparts, presenting the closest capabilities to those offered by commercial packages. Indeed, designing highly complex machines with multiple moving parts is now possible using this software. There are still bugs to solve, and the voyage may not be as smooth as in commercial software, but it is possible to do it at zero cost. You can create a jet engine in FreeCAD; it is already that advanced!

[Image: Luis Teia, CC BY-SA 4.0]

FreeCAD has all the fundamental (and even more advanced) features required to create complex parts and assemblies. The most important elements of its interface are:

  • Left side: A tree listing the operations performed (with their labels and attributes) and their sequence.
  • Top: With Part Design mode selected from the drop-down menu, controls (from left to right) for the type of model presented, the orientation of the camera, the creation of sketches and planes, and finally the options that add or remove material (namely extrusion, revolution, lofting, and so on).
  • Boolean operations: Adding or subtracting two volumes, plus the possibility of chamfering and filleting edges.

The FreeCAD file containing the final version of the keyboard is freely available at Figshare. It remains unlocked, allowing anyone to alter its design. In reality, I expect to see mutations of this design or its inherent features in a future generation of one-handed keyboards or other devices requiring an efficient and intuitive single-handed interface. Download the latest version of the open source FreeCAD software (version 0.20.2) to load the model.

[Image: Luis Teia, CC BY-SA 4.0]


Some lessons learned from using FreeCAD during this project:

  • Save often and in new files, especially after major modifications, as the ability to update dependencies often lacks robustness (especially for complex models with many sketches, planes, and extrusions/revolutions/lofts). Making changes uptree may be difficult.
  • Build your planes and sketches first (when there are many, these are tasks that FreeCAD is robust in executing), then add material by extrusion/lofting from those sketches. Experience showed me FreeCAD could generate unstable results in a component with many complex features, especially when using additive lofting with more than four sketches.
  • Use the spacebar to quickly show or hide an item, such as a plane, sketch, or volume. Make your life easier, and hide what you don't need to see.
  • Leave chamfering and filleting to the end. Also remember to always overlap material (or at least make it adjacent) in every newly added material operation (e.g., a revolving sweep) with respect to the existing body (e.g., an existing extruded block); otherwise, no volume is generated and no error window appears, leaving you wondering what happened. FreeCAD does not generate two separate volumes within the same body.

While commercial CAD software is already fairly advanced in realistically rendering models, FreeCAD is still evolving. However, various free and open source rendering options are available, including the animation software Blender, CADRays from OpenCascade (Windows only), LuxCoreRender, and POV-Ray.

CADRays offers a simpler interface than Blender, but it is still a robust option for producing photorealistic renders of CAD models. It's open source, and you can download and use the program with free registration.

On Linux, download the source code. You need both the occt.git and the cadrays.git repositories. Both use CMake, so the compilation process is the usual CMake incantation:

$ mkdir build
$ cd build
$ cmake ..
$ make

A simple GUI exists, allowing for the most basic operations. On the left side, the Scene tab shows a tree of the part or assembly. Below it are the settings to control resolution, rendering, and camera type. To the right, one can select the type of material (there are built-in options, such as metal and plastic), or one can program custom materials. Superimposed is a Transformation tab that controls the attitude of the part or assembly. Below it are the controls for lighting, the ascension and azimuth attitude, the color of the illumination, and the background. At the center is a window showing how the model looks with the present settings (and, thus, how the exported image will look). Below that is a command interface that gives access to a greater variety of more in-depth functions; type help to list them.

[Image: Luis Teia, CC BY-SA 4.0]

Some lessons learned from using CADRays during this project:

  • Before loading your complex part, press OpenGL (bottom left corner in the Rendering tab). The reason is that the Monte Carlo calculations (immediately done after loading the step file) can significantly slow down the computer because the Rendering setting is on GI. With OpenGL, the model is simplified, making handling easier.
  • Orientate your CAD part (translate, rotate, and fit) before switching back the Rendering from OpenGL to GI to avoid getting frustrated with lags during part movement.
  • Create a white background with the command vsetcolorbg 255 255 255.
  • More accurate orientation (rotation, translation, and fit to window) can be achieved by using the command line at the bottom (type help for options) instead of using a mouse or tabs that do not provide full control.
  • CADRays does not save a project file that stores your work with the model and your settings, so be sure to export the image you want before exiting. Otherwise, you must repeat your work the next time you open the program.

Future work

The present model only defines the exterior dimensions and shape resulting from the novel design approach. The next steps are defining the internal assembly (including how the keys attach to the butterfly spring support) and designing the internal electronics. Those tasks are slated for future follow-up work. Ultimately, anyone with access to 3D printing and an electronics lab should be able to create and test a working prototype. Alternatively, if you are an enthusiast wanting to get a feel for what the one-handed keyboard would look like, you can 3D print it as is and experiment with how easily you can type sentences. Nowadays, this service is reasonably accessible via online manufacturing stores. For those who want to be even more involved, the fact that the one-handed keyboard design is open source provides the opportunity for it to be commercially developed as a product by a company producing computer-related equipment.


What is an edge-native application?

Fri, 03/10/2023 - 16:00
fdesbiens

Cloud-native. Those two simple words redefined how we build, deploy, and even consume software. Twenty-five years ago, when I had hair I could pull on when having trouble with my code, the idea that I would use an office suite contained in my web browser would have sounded like science fiction. And yet, here we are. Gone are the days of installing an office suite from a pile of floppy disks. Naturally, the three- or four-year gap between releases is also a thing of the past. Nowadays, new features may appear anytime and anywhere, with no installation required. It's an understatement to say that the velocity of the software industry increased by an order of magnitude.

This evolution is a tremendous achievement from my perspective. Of course, if you are a bit younger than me or even one of those "cloud-native" developers who never experienced anything else, you surely see things differently. At this point, cloud-native is de facto how things are done—even for me. It is not exciting or new, just assumed. However, there are still applications living outside the cloud. Those applications are increasingly impacting our personal and professional lives. You see, software is eating the world. And this means cloud-native applications are increasingly supplemented with edge-native ones.

In this article, I will define edge applications and describe their characteristics. But before I get to that, it's important to understand how the cloud and edge environments differ.

What's the difference between edge and cloud?

We are increasingly surrounded by everyday objects powered by software. This is why I think the term Internet of Things will eventually be replaced by Software Defined Everything. This idea has profound implications for the way applications and services are built. In most cases, connecting things directly to the cloud doesn't make sense. This is where edge computing comes into play. Edge computing, at its core, provides compute, networking, and storage capabilities at the network's border, closer to the source of the data. It makes it possible to reduce latency and optimize bandwidth usage. It makes applications more resilient and lets users control where their data is located physically.

The edge and the cloud are two very different computing environments. The cloud is defined by the on-demand availability of resources. You can create new virtual machine instances, add network capacity, or change the network topology anytime. Cloud resources are naturally limited, but those limits are so high that most users will never bump into them. By nature, the cloud is homogeneous, centralized, and large-scale.

  • Homogeneous: The cloud is homogeneous since the compute resources available fit into several predefined types. It's possible to create hundreds or even thousands of instances of the same type, all identical unless customized. From a developer's perspective, they represent a stable and predictable hardware platform.
  • Centralized: The cloud is centralized. You can deploy resources in specific data centers or geographical zones, but everything is managed from a central console.
  • Large-scale: Finally, the cloud is inherently a large-scale environment. Your specific deployment may be small, but the resources at your disposal are immense in terms of capacity and performance.

The edge is the polar opposite of the cloud. It is heterogeneous, distributed, and small-scale. When designing edge computing solutions, picking the right hardware for the job is critical. In particular, battery-operated devices require top-notch power efficiency, which means that processors based on the ARM and RISC-V architectures are much more common than in the cloud. Moreover, many edge devices must interact with legacy equipment; they usually feature ports not found on IT equipment and leverage protocols specific to operational technology. For these reasons, edge hardware often differs from one physical location to another. By definition, edge computing implies distributed applications; data processing is performed in various physical locations across the whole edge-to-cloud continuum. Lastly, the edge is a small-scale environment. Of course, deploying thousands of edge nodes to support a solution is possible. However, the compute, networking, and storage resources will be severely limited in any specific location.

The distinctions between the two environments mean that edge-native applications differ greatly from cloud-native ones. Let's now look more closely at them.

Edge-native application characteristics

Edge-native applications share some characteristics with cloud-native ones. Both application types rely on microservices. They expose APIs, often in a RESTful way, which enables service composition. Both types of applications consist of loosely coupled services. This design prevents the creation of affinities between the services and enhances the overall resiliency of the application. Finally, both are built by teams leveraging a DevOps approach focused on continuous integration. Edge-native applications, however, will often avoid continuous deployment—especially for applications driven by real-time or mission-critical requirements.

Edge-native applications naturally possess specific characteristics setting them apart. Specifically, these are lifespan, heterogeneity, and constraints.

  • Lifespan: Edge-native applications typically have a long lifespan. They are usually tied to significant capital investments, such as heavy machinery or industrial equipment, which means they must be maintained and operated for years, if not decades.
  • Heterogeneity: Edge-native applications exhibit heterogeneity. Here, I am not only referring to the edge nodes themselves but also to the origin of all the components of the solution. No one supplier can provide you with the sensors, actuators, microcontrollers, edge nodes, software, and networking equipment needed to build and deploy a solution. At a minimum, you must deal with several vendors with different support lifecycles and commitments to your particular market.
  • Constraints: Edge-native applications are constrained by the realities that edge hardware and software face in the field. It's a harsh world outside the data center. The list of threats is long: Extreme temperatures, humidity, electromagnetic interference, water or dust ingress, and vibrations all contribute to shortening the life of the equipment and skewing sensor readings. Power consumption is also a huge concern, and not just for battery-operated devices. Higher consumption results in more heat, which requires a beefier cooling system that, in turn, adds cost and another point of failure. Finally, since edge computing applications are inherently distributed, they are completely dependent on the network. You must assume the network will be unstable and unreliable and design the solution accordingly.

Another important consideration is that edge computing bridges the world of information technology (IT) and operational technology (OT). The latter provides the components of industrial control systems used to operate factories, pipelines, water distribution systems, and wind farms, for example. In other words, OT is part of the technology universe where Programmable Logic Controllers (PLCs) and Supervisory Control and Data Acquisition Systems (SCADA) reign supreme. IT and OT embody two different approaches to computing. IT relies on off-the-shelf components that are replaceable and updated frequently. A three-year lifecycle for laptops is common in many organizations. On the other hand, OT is about purpose-built solutions controlling critical infrastructure, which means updates are infrequent. Thus, one could say that, for many organizations, IT is a service to the business, while OT is the business itself.

Not all edge-native applications target OT use cases. But since they may exist outside of the cloud and the corporate data center, they are exposed to many of the dangers OT applications must face and must fulfill many of the same requirements. This requires a change of mindset from developers.

Defining edge-native

We now have a solid understanding of the edge environment and the characteristics of edge-native applications. Now is the time to answer the question in this article's title: What is an edge-native application?

My answer to this question goes like this:

A distributed application made of virtualized or containerized microservices that are deployed outside the cloud and the corporate data center. Edge-native applications are optimized for field use, resilient, adapted to mobility, orchestrated, and leverage zero-trust and zero-touch operational models.

This definition implies that it is not enough to have distributed nodes processing data locally to have an edge-native application. Such applications, after all, are built from the ground up for the edge using a specific approach. Think about it: Is any random transactional website a cloud-native application? Of course not.

Optimized for field use

Because they often run on constrained hardware, edge-native applications are optimized for size and power consumption. There is little to no elasticity at the edge, meaning you must manage resources carefully, especially considering the longer lifespan expected from the solution. You could be tempted to future-proof your edge nodes by providing them with additional compute power, memory, and storage. However, this is not cost-effective and, in any case, will probably not be viable over the long term.


Resilient

Edge-native applications possess great resiliency, given their distributed nature. They assume that nodes, services, and even the network may fail anytime. They manage such outages as seamlessly as possible by buffering outbound transactions and data for later. Nodes near a failed system should be able to take over, although with reduced quality of service.

Adapted to mobility

Edge-native applications can connect to mobile networks and be deployed on nodes onboard vehicles. They are location-aware, meaning they can dynamically take advantage of services available in the local environment. They can also adapt their behavior according to local regulations and requirements. Moreover, they can leverage location-based routing when needed. This enables them to pick the most cost-effective option to transmit data and receive commands.


Orchestrated

The components of edge-native applications may be deployed inside containers, but virtual machines, serverless functions, and binaries also play a role. The lifecycle of all these deployment artifacts must be carefully orchestrated to scale up or down certain services or to stage incremental updates. Naturally, this reliance on orchestration makes the whole application more resilient since additional service instances can be spun up to face adverse conditions or operational challenges.


The zero trust model implies that, by default, no device is trusted. Every device and node is seen as a potential attack vector. The zero trust model involves systematic device authentication and authorization, with limitations on the scope and timeframe of the access granted. In addition, data must be encrypted in motion and at rest.

Zero touch

Edge-native applications require credentials for authentication, authorization, and even device attestation. The latter involves using certificates or similar means to prove a device's unique identity and trustworthiness. Zero touch onboarding means that such credentials are deployed automatically from a central location as soon as a device connects to the network. Manual manipulations are error-prone and potential attack vectors, so eliminating them is an important priority.

Wrap up

If you are a cloud-native developer, I hope you now realize how the edge computing space differs from what you are familiar with. These differences should not prevent you from jumping in, however. The languages and techniques you know are certainly applicable at the edge. With the right mindset and a willingness to learn, you can become a productive edge-native developer.

If you are looking for a community of experienced edge-native developers to strike up a conversation with, join the Edge Native working group at the Eclipse Foundation.


How the GDB debugger and other tools use call frame information to determine the active function calls

Fri, 03/10/2023 - 16:00
wcohen

In my previous article, I showed how debuginfo is used to map between the current instruction pointer (IP) and the function or line containing it. That information is valuable in showing what code the processor is currently executing. However, having more context for the calls that lead up to the current function and line being executed is also extremely helpful.

For example, suppose a function in a library has an illegal memory access due to a null pointer being passed as a parameter into the function. Just looking at the current function and line shows that the fault was triggered by attempted access through a null pointer. However, what you really want is the full context of the active function calls leading up to that null pointer access, so you can determine how the null pointer was initially passed into the library function. This context information is provided by a backtrace, which allows you to identify which functions could be responsible for the bogus parameter.

One thing’s certain: Determining the currently active function calls is a non-trivial operation.

Function activation records

Modern programming languages have local variables and allow for recursion where a function can call itself. Also, concurrent programs have multiple threads that may have the same function running at the same time. The local variables cannot be stored in global locations in these situations. The locations of the local variables must be unique for each invocation of the function. Here’s how it works:

  1. The compiler emits code that creates a function activation record each time a function is called, so local variables are stored in a unique location per call.
  2. For efficiency, the processor stack is used to store the function activation records.
  3. A new function activation record is created at the top of the processor stack for the function when it’s called.
  4. If that function calls another function, then a new function activation record is placed above the existing function activation record.
  5. Each time there is a return from a function, its function activation record is removed from the stack.

The function activation record is created by code at the start of the function called the prologue. The removal of the function activation record is handled by the function epilogue. The body of the function can use the stack memory set aside for it for temporary values and local variables.

Function activation records can be variable size. For some functions, there's no need for space to store local variables; the activation record only needs to store the return address of the function that called this function. For other functions, significant space may be required to store local data structures in addition to the return address. This variation in frame sizes leads compilers to use frame pointers to track the start of the function's activation frame. Now the function prologue code has the additional task of storing the old frame pointer before creating a new frame pointer for the current function, and the epilogue has to restore the old frame pointer value.

The way that the function activation record is laid out, the return address and old frame pointer of the calling function are constant offsets from the current frame pointer. With the old frame pointer, the next function’s activation frame on the stack can be located. This process is repeated until all the function activation records have been examined.

Optimization complications

There are a couple of disadvantages to having explicit frame pointers in code. On some processors, relatively few registers are available. Dedicating one of them to the frame pointer leaves fewer registers for other values, forcing more memory operations and slowing the resulting code. Having explicit frame pointers may also constrain the code that the compiler can generate, because the compiler may not intermix the function prologue and epilogue code with the body of the function.

The compiler’s goal is to generate fast code where possible, so compilers typically omit frame pointers from generated code. Keeping frame pointers can significantly lower performance, as shown by Phoronix’s benchmarking. The downside of omitting frame pointers is that finding the previous calling function’s activation frame and return address are no longer simple offsets from the frame pointer.

Call Frame Information

To aid in the generation of function backtraces, the compiler includes DWARF Call Frame Information (CFI) to reconstruct frame pointers and to find return addresses. This supplemental information is stored in the .eh_frame section of the executable. Unlike traditional debuginfo for function and line location information, the .eh_frame section is in the executable even when the executable is generated without debug information, or when the debug information has been stripped from the file. The call frame information is essential for the operation of language constructs like throw-catch in C++.

The CFI has a Frame Description Entry (FDE) for each function. As one of its steps, the backtrace generation process finds the appropriate FDE for the current activation frame being examined. Think of the FDE as a table, with each row representing one or more instructions, with these columns:

  • Canonical Frame Address (CFA), the location the frame pointer would point to
  • The return address
  • Information about other registers

The encoding of the FDE is designed to minimize the amount of space required. The FDE describes the changes between rows rather than fully specifying each row. To further compress the data, starting information common to multiple FDEs is factored out and placed in Common Information Entries (CIE). This makes the FDE more compact, but it also requires more work to compute the actual CFA and find the return address location. The tool must start from the uninitialized state. It steps through the entries in the CIE to get the initial state on function entry, then it moves on to process the FDE, starting at the FDE's first entry and processing operations until it reaches the row that covers the instruction pointer currently being analyzed.

Example use of Call Frame Information

Start with a simple example: a function that converts Fahrenheit to Celsius. Inlined functions do not have entries in the CFI, so the __attribute__((noinline)) on the f2c function ensures the compiler keeps f2c as a real function.

#include <stdio.h>

int __attribute__ ((noinline)) f2c(int f)
{
        int c;
        printf("converting\n");
        c = (f - 32.0) * 5.0 / 9.0;
        return c;
}

int main (int argc, char *argv[])
{
        int f;
        scanf("%d", &f);
        printf("%d Fahrenheit = %d Celsius\n", f, f2c(f));
        return 0;
}

Compile the code with:

$ gcc -O2 -g -o f2c f2c.c

The .eh_frame is there as expected:

$ eu-readelf -S f2c | grep eh_frame
[17] .eh_frame_hdr        PROGBITS     0000000000402058 00002058 00000034  0 A  0   0  4
[18] .eh_frame            PROGBITS     0000000000402090 00002090 000000a0  0 A  0   0  8

We can get the CFI information in human readable form with:

$ readelf --debug-dump=frames f2c > f2c.cfi

Generate a disassembly file of the f2c binary so you can look up the addresses of the f2c and main functions:

$ objdump -d f2c > f2c.dis

Find the following lines in f2c.dis to see the start of f2c and main:

0000000000401060 <main>:
0000000000401190 <f2c>:

In many cases, all the functions in the binary use the same CIE to define the initial conditions before a function’s first instruction is executed. In this example, both f2c and main use the following CIE:

00000000 0000000000000014 00000000 CIE
  Version:               1
  Augmentation:          "zR"
  Code alignment factor: 1
  Data alignment factor: -8
  Return address column: 16
  Augmentation data:     1b
  DW_CFA_def_cfa: r7 (rsp) ofs 8
  DW_CFA_offset: r16 (rip) at cfa-8
  DW_CFA_nop
  DW_CFA_nop

For this example, don’t worry about the Augmentation or Augmentation data entries. Because x86_64 processors have variable length instructions from 1 to 15 bytes in size, the “Code alignment factor” is set to 1. On a processor that only has 32-bit (4-byte) instructions, this would be set to 4, allowing more compact encoding of how many bytes a row of state information applies to. In a similar fashion, the “Data alignment factor” makes the adjustments to where the CFA is located more compact. On x86_64, the stack slots are 8 bytes in size.

The column in the virtual table that holds the return address is 16. This is used in the instructions at the tail end of the CIE. There are four DW_CFA instructions. The first instruction, DW_CFA_def_cfa describes how to compute the Canonical Frame Address (CFA) that a frame pointer would point at if the code had a frame pointer. In this case, the CFA is computed from r7 (rsp) and CFA=rsp+8.

The second instruction DW_CFA_offset defines where to obtain the return address CFA-8. In this case, the return address is currently pointed to by the stack pointer (rsp+8)-8. The CFA starts right above the return address on the stack.

The DW_CFA_nop at the end of the CIE is padding to keep alignment in the DWARF information. The FDE can also have padding at the end for alignment.

Find the FDE for main in f2c.cfi, which covers the main function from 0x401060 up to, but not including, 0x401097:

00000084 0000000000000014 00000088 FDE cie=00000000 pc=0000000000401060..0000000000401097
  DW_CFA_advance_loc: 4 to 0000000000401064
  DW_CFA_def_cfa_offset: 32
  DW_CFA_advance_loc: 50 to 0000000000401096
  DW_CFA_def_cfa_offset: 8
  DW_CFA_nop

Before executing the first instruction in the function, the CIE describes the call frame state. However, as the processor executes instructions in the function, the details change. First the instructions DW_CFA_advance_loc and DW_CFA_def_cfa_offset match up with the first instruction in main at 401060, which adjusts the stack pointer down by 0x18 (24 bytes). The CFA has not changed location but the stack pointer has, so the correct computation for the CFA at 401064 is rsp+32. That’s the extent of the prologue in this code. Here are the first couple of instructions in main:

0000000000401060 <main>:
  401060: 48 83 ec 18           sub    $0x18,%rsp
  401064: bf 1b 20 40 00        mov    $0x40201b,%edi

The DW_CFA_advance_loc makes the current row apply to the next 50 bytes of code in the function, until 401096. The CFA is at rsp+32 until the stack adjustment instruction at 401092 completes execution. The DW_CFA_def_cfa_offset restores the CFA calculation to the same as on entry into the function. This is expected, because the next instruction at 401096 is the return instruction (ret), which pops the return address off the stack.

  401090: 31 c0                 xor    %eax,%eax
  401092: 48 83 c4 18           add    $0x18,%rsp
  401096: c3                    ret

The FDE for the f2c function uses the same CIE as the main function, and covers the range 0x401190 to 0x4011c3:

00000068 0000000000000018 0000006c FDE cie=00000000 pc=0000000000401190..00000000004011c3
  DW_CFA_advance_loc: 1 to 0000000000401191
  DW_CFA_def_cfa_offset: 16
  DW_CFA_offset: r3 (rbx) at cfa-16
  DW_CFA_advance_loc: 29 to 00000000004011ae
  DW_CFA_def_cfa_offset: 8
  DW_CFA_nop
  DW_CFA_nop
  DW_CFA_nop

The objdump output for the f2c function in the binary:

0000000000401190 <f2c>:
  401190: 53                    push   %rbx
  401191: 89 fb                 mov    %edi,%ebx
  401193: bf 10 20 40 00        mov    $0x402010,%edi
  401198: e8 93 fe ff ff        call   401030
  40119d: 66 0f ef c0           pxor   %xmm0,%xmm0
  4011a1: f2 0f 2a c3           cvtsi2sd %ebx,%xmm0
  4011a5: f2 0f 5c 05 93 0e 00  subsd  0xe93(%rip),%xmm0   # 402040 <__dso_handle+0x38>
  4011ac: 00
  4011ad: 5b                    pop    %rbx
  4011ae: f2 0f 59 05 92 0e 00  mulsd  0xe92(%rip),%xmm0   # 402048 <__dso_handle+0x40>
  4011b5: 00
  4011b6: f2 0f 5e 05 92 0e 00  divsd  0xe92(%rip),%xmm0   # 402050 <__dso_handle+0x48>
  4011bd: 00
  4011be: f2 0f 2c c0           cvttsd2si %xmm0,%eax
  4011c2: c3                    ret

In the FDE for f2c, the initial DW_CFA_advance_loc covers the single-byte instruction at the beginning of the function. Following the advance operation, there are two additional operations: a DW_CFA_def_cfa_offset changes the CFA to %rsp+16, and a DW_CFA_offset indicates that the initial value of %rbx is now at CFA-16 (the top of the stack).

Looking at this f2c disassembly, you can see that a push is used to save %rbx onto the stack. One of the advantages of omitting the frame pointer in the code generation is that compact instructions like push and pop can be used to store and retrieve values from the stack. In this case, %rbx is saved because %edi, which holds the argument f on entry, is needed to pass the string argument to the printf function (actually converted to a puts call), but the initial value of f must be preserved for the later computation. The DW_CFA_advance_loc of 29 bytes to 4011ae shows the next state change just after pop %rbx, which recovers the original value of %rbx. The DW_CFA_def_cfa_offset notes that the pop changed the CFA to %rsp+8.

GDB using the Call Frame Information

Having the CFI information allows the GNU Debugger (GDB) and other tools to generate accurate backtraces. Without CFI information, GDB would have a difficult time finding the return address. You can see GDB making use of this information if you set a breakpoint at line 7 of f2c.c. GDB places the breakpoint before the pop %rbx in f2c has executed, so the return address is not yet at the top of the stack.

GDB is able to unwind the stack and, as a bonus, is also able to fetch the argument f that is currently saved on the stack:

$ gdb f2c
[...]
(gdb) break f2c.c:7
Breakpoint 1 at 0x40119d: file f2c.c, line 7.
(gdb) run
Starting program: /home/wcohen/present/202207youarehere/f2c
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".
98 converting
Breakpoint 1, f2c (f=98) at f2c.c:8
8               return c;
(gdb) where
#0  f2c (f=98) at f2c.c:8
#1  0x000000000040107e in main (argc=<optimized out>, argv=<optimized out>) at f2c.c:15

Call Frame Information

The DWARF Call Frame Information provides a flexible way for a compiler to include information for accurate unwinding of the stack. This makes it possible to determine the currently active function calls. I've provided a brief introduction in this article, but for more details on how DWARF implements this mechanism, see the DWARF specification.
