
Postgres DevOps database administrator: A day in the life

Thu, 03/09/2023 - 16:00
doug.ortiz

A Postgres DevOps DBA plays a critical role in modern IT organizations that rely on Postgres as their primary database management system. The role involves many responsibilities, skills, and tasks, including managing database design and architecture, managing infrastructure, ensuring high availability and security, and performing routine maintenance tasks such as tuning, backup and recovery, and monitoring.

This article summarizes the common responsibilities and skills expected of a Postgres DevOps DBA in today's enterprise environments.

Database design and architecture

Two primary responsibilities of a Postgres DevOps DBA are database design and architecture. This role requires a deep understanding of the application's data storage requirements and the business logic involved. That knowledge informs designing and creating database schemas and tables, configuring indexes and other database objects to optimize query performance, and choosing the right version of Postgres to use. The DBA must ensure the database is designed for scalability and maintainability, considering future growth and data retention needs.

Performance tuning

Another critical area of responsibility is performance tuning. A Postgres DevOps DBA must be able to identify and resolve performance issues by monitoring database performance metrics and analyzing query performance. The role must also have a deep understanding of the database and be able to configure it for optimal performance, including optimizing queries and indexes, tuning memory settings, and identifying and addressing performance bottlenecks.
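The kind of triage involved can be sketched in a few lines of Python. This assumes you have exported rows from a statistics view such as pg_stat_statements; the field names below are illustrative, not an actual Postgres schema:

```python
# Sketch: rank queries by total execution time, as a DBA might do
# with data exported from pg_stat_statements. The field names here
# are illustrative, not a fixed Postgres API.

def rank_slow_queries(stats, top_n=3):
    """Return the top_n queries ordered by total_time descending."""
    return sorted(stats, key=lambda row: row["total_time"], reverse=True)[:top_n]

stats = [
    {"query": "SELECT * FROM orders WHERE id = $1", "calls": 50000, "total_time": 1200.0},
    {"query": "SELECT count(*) FROM events", "calls": 10, "total_time": 98000.0},
    {"query": "UPDATE users SET last_seen = now()", "calls": 700, "total_time": 4300.0},
]

for row in rank_slow_queries(stats):
    print(f'{row["total_time"]:>10.1f} ms  {row["query"]}')
```

In practice, the same ranking would be done directly in SQL against the statistics view; the point is that tuning starts from measurement, not guesswork.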

Backup and recovery

Backup and recovery are also key areas of responsibility. The DBA must have a solid understanding of backup and recovery solutions and must design and implement a backup strategy that ensures that data is always recoverable in the event of data loss. They must also validate the recovery process and implement high-availability and disaster recovery solutions to minimize downtime and data loss.
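As a small illustration of the retention side of a backup strategy, here is a Python sketch that decides which nightly backups fall outside a retention window. The dates and the 14-day window are assumptions for the example:

```python
# Sketch of a retention-policy check for nightly base backups.
# The backup dates and the 14-day window are assumptions for
# illustration, not a recommendation.
from datetime import date, timedelta

def backups_to_prune(backup_dates, today, keep_days=14):
    """Return backup dates older than the retention window."""
    cutoff = today - timedelta(days=keep_days)
    return [d for d in backup_dates if d < cutoff]

backups = [date(2023, 3, 1), date(2023, 2, 10), date(2023, 3, 8)]
print(backups_to_prune(backups, today=date(2023, 3, 9)))
```

A real strategy also validates that pruned backups are no longer needed for point-in-time recovery before deleting anything.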


Security

Security is another critical area of responsibility. The DBA ensures the database is secure by implementing access controls, encryption, and other security measures to protect the data. They must also stay up to date with the latest security trends and best practices and apply them to protect against potential threats.
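For instance, granting least-privilege access to a read-only reporting role can be scripted. The sketch below only builds GRANT statements as strings; the role and table names are hypothetical:

```python
# Sketch: generate least-privilege GRANT statements for a read-only
# reporting role. The role and table names are made up for the example.

def readonly_grants(role, tables):
    """Build SQL statements granting read-only access on the given tables."""
    stmts = [f"GRANT USAGE ON SCHEMA public TO {role};"]
    stmts += [f"GRANT SELECT ON {t} TO {role};" for t in tables]
    return stmts

for stmt in readonly_grants("reporting_ro", ["orders", "customers"]):
    print(stmt)
```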

Infrastructure management

Infrastructure management is also a key responsibility. These DBAs must manage the hardware, network, and storage infrastructure and provision the infrastructure to support Postgres. They must also configure the infrastructure for performance and availability and scale the infrastructure as necessary to accommodate data growth.

[ Related read: 3 tips to manage large Postgres databases ]

Automation and scripting

This role must be able to automate repetitive tasks such as backups, monitoring, and patching using tools like Ansible, Terraform, and Kubernetes. They must also be familiar with automation best practices to ensure tasks are automated efficiently and effectively. Automation reduces the potential for human error, improves efficiency, and allows the DBA to focus on more complex tasks.
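As a minimal sketch of this kind of automation, the following Python builds the nightly pg_dump commands that a cron job or playbook might execute. The database names and backup path are assumptions for illustration:

```python
# Sketch: build the nightly pg_dump commands a playbook or cron job
# might run. Database names and the backup path are illustrative.

def dump_commands(databases, backup_dir="/var/backups/pg"):
    """Return one pg_dump command string per database."""
    return [
        f"pg_dump --format=custom --file={backup_dir}/{db}.dump {db}"
        for db in databases
    ]

for cmd in dump_commands(["sales", "inventory"]):
    print(cmd)
```

In a real deployment, a tool such as Ansible would template and schedule these commands rather than a hand-rolled script, but the principle of generating repetitive work from data is the same.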

Monitor and configure alerts

Monitoring the database and infrastructure, and setting up alerts to flag issues, is extremely important. The role must also take proactive measures to prevent downtime and data loss, using monitoring tools such as Nagios, Zabbix, and Prometheus to detect potential issues early.
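A monitoring rule ultimately reduces to comparing metrics against thresholds. This Python sketch shows that idea; the metric names and limits are invented for the example and not tied to any specific tool:

```python
# Sketch: a threshold check of the kind a Prometheus or Nagios rule
# encodes. Metric names and limits are assumptions for the example.

THRESHOLDS = {"replication_lag_s": 30, "connections_used_pct": 90}

def fired_alerts(metrics, thresholds=THRESHOLDS):
    """Return the names of metrics that exceed their threshold."""
    return [name for name, limit in thresholds.items()
            if metrics.get(name, 0) > limit]

print(fired_alerts({"replication_lag_s": 45, "connections_used_pct": 71}))
```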


Collaboration

In addition to these technical responsibilities, a Postgres DevOps DBA must also collaborate with other IT teams, such as development, operations, and security, to integrate the database into the larger IT ecosystem. DBAs must also document their work and stay up to date with the latest trends and best practices in Postgres and DevOps. This involves engaging with stakeholders to gather requirements, establish priorities, and align the database with the organization's broader goals.

Wrap up

In conclusion, a Postgres DevOps DBA plays a critical role in modern IT organizations that rely on Postgres as their primary database management system. How do your current skills and expectations match this list? Are you on the right track to excel as a DBA in a modern database environment?


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Contribute to open source without code

Thu, 03/09/2023 - 16:00
Debra Chen

An open source "community" means different things to different people. I think of open source a little like "falling in love" because it is about people and relationships. Treat open source as a community because, without people, there is no source, open or otherwise.

I'm a member of the Apache DolphinScheduler community. Because that project is intentionally low-code, it appeals to many people who aren't software developers. Sometimes, people who don't write code aren't sure whether there's a meaningful way to contribute to an open source project that exists mainly because of source code. I know from experience that there is, and I will explain why in this article.

Contributions to the community

In the Apache DolphinScheduler project, I'm mainly responsible for global operation, influence, and caring for the community.

Some people say that projects are big trees, with open source being the soil. That's an apt analogy, and it demonstrates the importance of actively nurturing the thing you're trying to help grow.

I have a simpler idea: Do everything possible to make it good.

A community requires constant attention, not because it's needy but because it is part of life. Community is the people living amongst you, whether in your physical or online space.

Since joining the open source community, I have independently initiated and organized events, including:

  • Coordinated, on average, one meetup in China per month.
  • Recommended that the community participate in technology-sharing events within the big data field.
  • Coordinated with almost all of the open source projects within China's big data field, visiting and communicating with those communities individually.

In my opinion, an excellent project should grow in a good ecosystem, and a community needs to go out to exchange ideas, share resources, and cooperate with other excellent communities. Everyone should feel the benefits the community brings to their work.

My overseas expansion follows the same pattern. Of course, it's difficult to do that effectively due to differences in cultures and languages. It takes energy, but it's worth it.

So far, we have successfully held meetups overseas, including in the United States, India, Singapore, Germany, France, Finland, and more.

So how do I contribute to DolphinScheduler? Am I committing code to the project? Am I a community manager? Do I have an official title?

I think of myself as an assistant. I foster communication and connection, and that, as much as any code contribution, is an example of the "Apache Way."

Get started with DolphinScheduler

I first learned about open source when I worked at OpenAtom Foundation as an open source education operation manager. As China's first open source foundation, OpenAtom operates many projects, exemplified by OpenHarmony.

I joined the DolphinScheduler community and found a group of people who were eager to share knowledge, provide guidance and support, and keen to help others discover a tool they would find useful in their own lives.

DolphinScheduler aims to be an influential scheduler worldwide, helping teams work in an Agile and efficient way.

First impressions of the community

It's common to hear complaints from the community about project development. We all have complaints from time to time. Maybe you reported a bug, but the developers didn't address your problem. Or maybe you had a great idea for a feature, but the team ignored it. If you're a member of an open source community, you've heard these grievances before, and if you haven't, you eventually will.

I've learned that these voices are all important to an open source community. It's a good sign when you hear this feedback because it means the community is willing to find bugs, report them, and ask and answer questions. Hearing those complaints may reveal places in the project's structure that need to be improved. Is there a volunteer from the community who can respond to bug reports and triage them so they get to the right developer? Is there a volunteer group waiting to be formed to respond promptly to questions from newcomers in your project's Discourse or forum?

A greeter at the door of your open source project can help invite tentative community members in. A greeter can also ensure that there's no "gatekeeping" happening. Everyone's welcome and everyone has something to contribute, even if all they can offer is an atmosphere of helping one another.

As much as you or I wish we could solve technical issues for everyone, it's not practical. But anyone can be willing to help find a solution—that's one of the great strengths of a community. These users spontaneously serve as their community's "customer service" department.

Within the DolphinScheduler project, we have many such helpers (Yan Jiang, Xu Zhiwu, Zhang Qichen, Wang Yuxiang, Xiang Zihao, Yang Qiyu, Yang Jiahao, Gao Chufeng, and Gao Feng, in no particular order!). Even though they don't develop the solution, they work tirelessly to find the person who can.

Words to the community

If you want to become a committer through non-code contributions or don't have time to make a code contribution, then the first step is to join the community. There's no sign-up form or approval process, but there's also no fast track. You join a community by participating. Through reliable and consistent participation, you develop relationships with others.

I'm available for a chat and always eager to talk about global event organization, documentation, feedback, and more.

Become a committer

Apache DolphinScheduler faces many challenges. Many companies, even ones that support open source, choose non-open business tooling. I want to work with community partners to make DolphinScheduler a world-class scheduling tool. I hope everyone can harvest the technical achievements they want and that DolphinScheduler helps get them there.

Join our community and help us promote an open and Agile way of working. Or find a project in need of your non-coding skills. Find out just how cool and fun it is to empower a community of your peers!




Compiler optimization and its effect on debugger line information

Thu, 03/09/2023 - 16:00
wcohen

In my previous article, I described the DWARF information used to map regular and inlined functions between an executable binary and its source code. Functions can be dozens of lines, so you might like to know specifically where the processor is in your source code. The compiler includes information mapping between instructions and specific lines in the source code to provide a precise location. In this article, I describe line mapping information, and some of the issues caused by compiler optimizations.

Start with the same example code from the previous article:

#include <stdio.h>
#include <stdlib.h>

int a;
double b;

int main(int argc, char* argv[])
{
        a = atoi(argv[1]);
        b = atof(argv[2]);
        a = a + 1;
        b = b / 42.0;
        printf("a = %d, b = %f\n", a, b);
        return 0;
}

The compiler only includes the line mapping information when the code is compiled with debugging information enabled (the -g option):

$ gcc -O2 -g example.c -o example

Examining line number information

Line information is stored in a machine-readable format, but human-readable output can be generated with llvm-objdump or objdump.

$ llvm-objdump --line-numbers example

For the main function, you get output listing the assembly code instruction with the file and line number associated with the instruction:

0000000000401060 <main>:
; main():
; /home/wcohen/present/202207youarehere/example.c:9
  401060: 53                            pushq   %rbx
; /usr/include/stdlib.h:364
  401061: 48 8b 7e 08                   movq    8(%rsi), %rdi
; /home/wcohen/present/202207youarehere/example.c:9
  401065: 48 89 f3                      movq    %rsi, %rbx
; /usr/include/stdlib.h:364
  401068: ba 0a 00 00 00                movl    $10, %edx
  40106d: 31 f6                         xorl    %esi, %esi
  40106f: e8 dc ff ff ff                callq   0x401050
; /usr/include/bits/stdlib-float.h:27
  401074: 48 8b 7b 10                   movq    16(%rbx), %rdi
  401078: 31 f6                         xorl    %esi, %esi
; /usr/include/stdlib.h:364
  40107a: 89 05 c8 2f 00 00             movl    %eax, 12232(%rip)   # 0x404048
; /usr/include/bits/stdlib-float.h:27
  401080: e8 ab ff ff ff                callq   0x401030
; /home/wcohen/present/202207youarehere/example.c:12
  401085: 8b 05 bd 2f 00 00             movl    12221(%rip), %eax   # 0x404048
; /home/wcohen/present/202207youarehere/example.c:14
  40108b: bf 10 20 40 00                movl    $4202512, %edi      # imm = 0x402010
; /home/wcohen/present/202207youarehere/example.c:13
  401090: f2 0f 5e 05 88 0f 00 00       divsd   3976(%rip), %xmm0   # 0x402020 <__dso_handle+0x18>
  401098: f2 0f 11 05 a0 2f 00 00       movsd   %xmm0, 12192(%rip)  # 0x404040
; /home/wcohen/present/202207youarehere/example.c:12
  4010a0: 8d 70 01                      leal    1(%rax), %esi
; /home/wcohen/present/202207youarehere/example.c:14
  4010a3: b8 01 00 00 00                movl    $1, %eax
; /home/wcohen/present/202207youarehere/example.c:12
  4010a8: 89 35 9a 2f 00 00             movl    %esi, 12186(%rip)   # 0x404048
; /home/wcohen/present/202207youarehere/example.c:14
  4010ae: e8 8d ff ff ff                callq   0x401040
; /home/wcohen/present/202207youarehere/example.c:16
  4010b3: 31 c0                         xorl    %eax, %eax
  4010b5: 5b                            popq    %rbx
  4010b6: c3                            retq

The first instruction at 0x401060 maps to the original source code file example.c line 9, the opening { for the main function.

The next instruction, 0x401061, maps to line 364 of stdlib.h, the inlined atoi function. This sets up one of the arguments to the later strtol call.

The instruction 0x401065 is also associated with the opening { of the main function.

Instructions 0x401068 and 0x40106d set the remaining arguments for the strtol call that takes place at 0x40106f. In this case, you can see that the compiler has reordered the instructions, which causes some bouncing between line 9 of example.c and line 364 of the stdlib.h include file as you step through the instructions in the debugger.

You can also see some mixing of instructions for lines 12, 13, and 14 from example.c in the output of llvm-objdump above. The compiler has moved the divide instruction (0x401090) for line 13 before some of the instructions for line 12 to hide the latency of the divide. As you step through the instructions in the debugger for this code, you see the debugger jump back and forth between lines rather than executing all of the instructions from one line before moving on to the next. Also notice as you step through that line 13, with the divide operation, was not shown, but the divide definitely occurred to produce the output. You can see GDB bouncing between lines when stepping through the program's main function:

(gdb) run 1 2
Starting program: /home/wcohen/present/202207youarehere/example 1 2
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".

Breakpoint 1, main (argc=3, argv=0x7fffffffdbe8) at /usr/include/stdlib.h:364
364       return (int) strtol (__nptr, (char **) NULL, 10);
(gdb) print $pc
$10 = (void (*)()) 0x401060
(gdb) next
10        a = atoi(argv[1]);
(gdb) print $pc
$11 = (void (*)()) 0x401061
(gdb) next
11        b = atof(argv[2]);
(gdb) print $pc
$12 = (void (*)()) 0x401074
(gdb) next
10        a = atoi(argv[1]);
(gdb) print $pc
$13 = (void (*)()) 0x40107a
(gdb) next
11        b = atof(argv[2]);
(gdb) print $pc
$14 = (void (*)()) 0x401080
(gdb) next
12        a = a + 1;
(gdb) print $pc
$15 = (void (*)()) 0x401085
(gdb) next
14        printf ("a = %d, b = %f\n", a, b);
(gdb) print $pc
$16 = (void (*)()) 0x4010ae
(gdb) next
a = 2, b = 0.047619
15        return 0;
(gdb) print $pc
$17 = (void (*)()) 0x4010b3

With this simple example, you can see that the order of instructions does not match the original source code. When the program runs normally, you would never observe those changes. However, they are quite visible when using a debugger to step through the code. The boundaries between lines of code become blurred. This has other implications: when you set a breakpoint on the line following a line that updates a variable, the compiler's instruction scheduler may have moved that update past the location where you expect it, and you won't get the expected value for the variable at the breakpoint.

Which of the instructions for a line get the breakpoint?

With the previous example.c, the compiler generated multiple instructions to implement individual lines of code. How does the debugger know which of those instructions should be the one that it places the breakpoint on? There’s an additional statement flag in the line information that marks the recommended locations to place the breakpoints. You can see those instructions marked with S in the column below SBPE in eu-readelf --debug-dump=decodedline example:

DWARF section [31] '.debug_line' at offset 0x50fd:

CU [c] example.c
 line:col SBPE* disc isa op address
  (Statement Block Prologue Epilogue *End)

 /home/wcohen/present/202207youarehere/example.c (mtime: 0, length: 0)
    9:1  S        0   0  0  0x0000000000401060
   10:2  S        0   0  0  0x0000000000401060
 /usr/include/stdlib.h (mtime: 0, length: 0)
  362:1  S        0   0  0  0x0000000000401060
  364:3  S        0   0  0  0x0000000000401060
 /home/wcohen/present/202207youarehere/example.c (mtime: 0, length: 0)
    9:1           0   0  0  0x0000000000401060
 /usr/include/stdlib.h (mtime: 0, length: 0)
  364:16          0   0  0  0x0000000000401061
  364:16          0   0  0  0x0000000000401065
 /home/wcohen/present/202207youarehere/example.c (mtime: 0, length: 0)
    9:1           0   0  0  0x0000000000401065
 /usr/include/stdlib.h (mtime: 0, length: 0)
  364:16          0   0  0  0x0000000000401068
  364:16          0   0  0  0x000000000040106f
  364:16          0   0  0  0x0000000000401074
 /usr/include/bits/stdlib-float.h (mtime: 0, length: 0)
   27:10          0   0  0  0x0000000000401074
 /usr/include/stdlib.h (mtime: 0, length: 0)
  364:10          0   0  0  0x000000000040107a
 /home/wcohen/present/202207youarehere/example.c (mtime: 0, length: 0)
   11:2  S        0   0  0  0x0000000000401080
 /usr/include/bits/stdlib-float.h (mtime: 0, length: 0)
   25:1  S        0   0  0  0x0000000000401080
   27:3  S        0   0  0  0x0000000000401080
   27:10          0   0  0  0x0000000000401080
   27:10          0   0  0  0x0000000000401085
 /home/wcohen/present/202207youarehere/example.c (mtime: 0, length: 0)
   12:2  S        0   0  0  0x0000000000401085
   12:8           0   0  0  0x0000000000401085
   14:2           0   0  0  0x000000000040108b
   13:8           0   0  0  0x0000000000401090
   13:4           0   0  0  0x0000000000401098
   12:8           0   0  0  0x00000000004010a0
   14:2           0   0  0  0x00000000004010a3
   12:4           0   0  0  0x00000000004010a8
   13:2  S        0   0  0  0x00000000004010ae
   14:2  S        0   0  0  0x00000000004010ae
   15:2  S        0   0  0  0x00000000004010b3
   16:1           0   0  0  0x00000000004010b3
   16:1           0   0  0  0x00000000004010b6
   16:1  *        0   0  0  0x00000000004010b6
  • Groups of instructions are delimited by the path to the source file for those instructions.
  • The left column contains the line number and column that the instruction maps back to, followed by the flags.
  • The hexadecimal number at the end of each row is the address of the instruction that the entry describes.

If you look carefully at the output, you see that some instructions map back to multiple lines in the code. For example, 0x0000000000401060 maps to both lines 9 and 10 of example.c. The same instruction also maps to lines 362 and 364 of /usr/include/stdlib.h. The mappings are not one-to-one: one line of source code may map to multiple instructions, and one instruction may map to multiple lines of code. When the debugger decides to print a single line mapping for an instruction, it might not be the one that you expect.
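This many-to-many relationship is easy to demonstrate programmatically. The following Python sketch groups simplified line-table rows, mimicking a few entries from the output above, by address:

```python
# Sketch: group decoded line-table entries by address to show how one
# instruction address can map to several source lines. The rows mimic
# a simplified subset of eu-readelf --debug-dump=decodedline output.
from collections import defaultdict

sample = """\
9:1 S 0x0000000000401060
10:2 S 0x0000000000401060
362:1 S 0x0000000000401060
364:3 S 0x0000000000401060
364:16 . 0x0000000000401061
"""

def lines_per_address(text):
    """Map each instruction address to the line:col entries that reference it."""
    mapping = defaultdict(list)
    for row in text.splitlines():
        line_col, _flag, addr = row.split()
        mapping[addr].append(line_col)
    return mapping

for addr, lines in lines_per_address(sample).items():
    print(addr, "->", ", ".join(lines))
```

Running this shows 0x401060 collecting four distinct line:column entries, which is exactly the ambiguity a debugger has to resolve when it prints "the" source line for an address.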

Merging and eliminating of lines

As you saw in the output of the detailed line mapping information, mappings are not one-to-one. There are cases where the compiler can eliminate instructions because they have no effect on the final result of the program. The compiler may also merge instructions from separate lines through optimizations, such as common subexpression elimination (CSE), and omit that the instruction could have come from more than one place in the code.

The following example was compiled on an x86_64 Fedora 36 machine, using GCC 12.2.1. Depending on the particular environment, you may not get the same results, because different versions of compilers may optimize the code differently.

Note the if-else statement in the code: both branches perform the same expensive divide. The compiler factors out the divide operation.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
        int a, b, c;

        a = atoi(argv[1]);
        b = atoi(argv[2]);
        if (b) {
                c = 100 / a;
        } else {
                c = 100 / a;
        }
        printf("a = %d, b = %d, c = %d\n", a, b, c);
        return 0;
}

Looking at objdump -dl whichline, you see one divide operation in the binary:

/home/wcohen/present/202207youarehere/whichline.c:13
  401085: b8 64 00 00 00        mov    $0x64,%eax
  40108a: f7 fb                 idiv   %ebx

Line 13 is one of the lines with a divide, but you might suspect that other line numbers are associated with those addresses. Look at the output of eu-readelf --debug-dump=decodedline whichline to check.

Line 11, where the other divide occurs, is not in this list:

 /usr/include/stdlib.h (mtime: 0, length: 0)
  364:16          0   0  0  0x0000000000401082
  364:16          0   0  0  0x0000000000401085
 /home/wcohen/present/202207youarehere/whichline.c (mtime: 0, length: 0)
   10:2  S        0   0  0  0x0000000000401085
   13:3  S        0   0  0  0x0000000000401085
   15:2  S        0   0  0  0x0000000000401085
   13:5           0   0  0  0x0000000000401085

If the results are unused, the compiler may completely eliminate generating code for some lines.

Consider the following example, where the else clause computes c = 100 * a, but does not use it:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
        int a, b, c;

        a = atoi(argv[1]);
        b = atoi(argv[2]);
        if (b) {
                c = 100 / a;
                printf("a = %d, b = %d, c = %d\n", a, b, c);
        } else {
                c = 100 * a;
                printf("a = %d, b = %d\n", a, b);
        }
        return 0;
}


Compile eliminate.c with GCC:

$ gcc -O2 -g eliminate.c -o eliminate

When looking through the output generated by objdump -dl eliminate, there’s no sign of the multiplication for 100 * a (line 14) of eliminate.c. The compiler has determined that the value was not used and eliminated it.

Maybe it’s hidden as one of the other views of line information. You can use eu-readelf with the --debug-dump option to get a complete view of the line information:

$ eu-readelf --debug-dump=decodedline eliminate > eliminate.lines

It turns out that GCC did record some mapping information. It seems that 0x4010a5 maps to the multiplication statement, in addition to the printf at line 15:

 /home/wcohen/present/202207youarehere/eliminate.c (mtime: 0, length: 0)
  …
   18:1           0   0  0  0x00000000004010a4
   14:3  S        0   0  0  0x00000000004010a5
   15:3  S        0   0  0  0x00000000004010a5
   15:3           0   0  0  0x00000000004010b0
   15:3  *        0   0  0  0x00000000004010b6

Optimization affects line information

The line information included in compiled binaries is helpful for pinpointing where in the code the processor is. However, optimization can affect the line information, and what you see when debugging the code.

When using a debugger, expect the boundaries between lines of code to be fuzzy, and expect the debugger to bounce between them when stepping through the code. An instruction might map to multiple lines of source code, but the debugger may report only one. The compiler may entirely eliminate instructions associated with a line of code, and it may or may not include line mapping information for them. The line information generated by the compiler is helpful, but keep in mind that something might be lost in translation.



8 examples of influential women in tech

Wed, 03/08/2023 - 16:00
AmyJune

A journey through open source is rarely something you do alone. Your hobby, career, and life have been affected by others in the tech space, and statistically, some of those people have been women. That's one of the many reasons International Women's Day exists, and it's a good excuse to reflect upon the women who have inspired your career in tech. We asked contributors for their thoughts.

Inspirational women

Dr. Kathleen Greenaway

One of the women that inspired me was my university professor, Dr. Kathleen Greenaway. She was exactly who I wanted to be. I remember her saying at a women's event about breaking the glass ceiling that she couldn't believe that we were still talking about it so many years later. I now find myself thinking the very same thing. This is just one example, but she was it.

Shanta Nathwani

Hilary Mason

I owe my knowledge of and start in PHP to Hilary Mason. While she was a professor at Johnson & Wales in Providence, RI, she ran an elective study on server-side programming. She showed us PHP, and for a final project had us build something using a database. I think I built a simple login system and a commenting tool or something. I love telling folks I learned PHP from a woman (the lead data scientist at, at that!)

John E. Picozzi

Carie Fisher

The most inspirational woman in tech for me is Carie Fisher. I met her when I first started getting involved in the accessibility community. She invited me to help with projects and helped me through my impostor syndrome when applying to jobs, getting certified, and speaking at conferences. Her compassion and devotion to digital inclusion is matched by only a few.

AmyJune Hineline

Kanopi Studios

I've been working in tech for 25 years and have often been the only female developer in a company or department. Then I joined Kanopi Studios, a women-owned and led agency with many smart, tech-savvy women from whom I am inspired every day. My gender is no longer a barrier to my career success. I feel respected and heard, and my accomplishments are recognized.

Cindy Williams

Barbara Liskov and Sandi Metz

I think Barbara Liskov is one of the most influential figures in our field. I also really like Sandi Metz, whose speaking and teaching skills have helped me a lot in my career. I recommend any of her books or conference videos.



I have been inspired by a number of women in my life, both personally and professionally. I always say that my mother, my sister, and my grandmother have been great references for me in everything. But I also have great colleagues with whom I work today who are references for me. I always think: those people who have been important to you, try to keep them close. When I was studying development, we had no references. No one taught us that the first programmer was a woman, or that we have WiFi and GPS thanks to women. There is a very good book I am reading right now, The Invisible Woman, which I highly recommend.

Marta Torre

Sarah Drasner

Engineering Management for the Rest of Us by Sarah Drasner, written by an amazing woman in tech, brought another amazing woman in tech to my attention. This book (and the amazing dev manager, Jody, who sent copies to all the leads) is the reason I am going to facilitate some discussions about how we experience feedback differently. We realized that a lot of folks may not really know how to talk about what they need or what works for them, so an open, casual chat where we share some good and bad experiences (optionally, of course) and look at examples of different styles will hopefully be a really helpful collaborative learning experience.

Fei Lauren

Sheryl Sandberg

My first book about women in tech, recommended to me at the WomenPower conference in Hannover, Germany, was Lean In: Women, Work, and the Will to Lead by Sheryl Sandberg. I was impressed not only by her own path but very much by how she used the strengths we as women are given, and what makes us different, for her own success and the company's success.

Anne Faulhaber

Your own influence

In open source, maybe more than anywhere, we all are influences on each other. Sharing and collaborating are built into the process of open source. Tell us about the influences you've had during your open source journey.


Hamilton, Ontario, Canada

Equipped with a Bachelor of Commerce in Information Technology Management and a Black Belt in Karate, there’s nothing Shanta Nathwani can’t do. She is a full-stack developer who specializes in WordPress, Node and React with a love for life and learning.

A natural teacher, Shanta spent six years teaching WordPress courses at Sheridan College as well as having given more than 40 WordCamp talks. Her topics include data architecture, custom post types and ACF, as well as beginner topics like posts versus pages and how to create a website in 30 minutes. She served as a QA Supervisor at a software company for 2 years before starting her own company, Namara Technologies Inc., where she served as the President & CEO. After being accepted at as an expert, she is now the Project Liaison Manager for the platform.

When she’s not working, she can be found volunteering as a Co-organizer for the Hamilton WordPress Meetup Group, taking stunning photographs, singing karaoke, and, of course, practicing martial arts.

Providence, RI

My official role is Solution Architect at EPAM working from home in Rhode Island. My unofficial role at any organization I work for is resident Drupal fanatic; I believe strongly in contributing to the Drupal community and supporting open source in any way I can.

I’m the organizer of the Drupal Providence Meetup, an Acquia-certified Site Builder, a co-host on Talking Drupal, and a co-organizer of the New England Drupal Camp. I hold a bachelor's degree in Web Management and Internet Commerce, as well as an associate's degree in Web Development from Johnson & Wales University. Throughout my career I have crafted Drupal solutions for organizations like CVS Caremark, Leica Geosystems, Blue Cross Blue Shield, Marriott International, Rhode Island School of Design, and Getty Images.

When I’m not immersed in the world of Drupal, I enjoy spending time with my family, traveling, sampling craft beer and coffee, and cooking!


I am a full-stack web developer based in Nashville, TN with over 20 years of professional experience creating and maintaining websites for businesses, schools, non-profits and healthcare organizations.

I am currently employed as a Drupal Engineer at Kanopi Studios, where I work within the support department to maintain and enhance customers' Drupal and WordPress websites.


Freelance full-stack developer. In love with open source, teamwork, and good practices in software development, and an active volunteer on the WordPress support and translation teams.

Recently, I've been a very active #Diversity volunteer on the global WordPress team, because I see it as unfair to exclude people simply because they are different.


What cloud developers need to know about hardware

Wed, 03/08/2023 - 16:00
What cloud developers need to know about hardware JayF Wed, 03/08/2023 - 03:00

It's easy to forget the progress that people in tech have made. In the early 2000s, most local user groups held regular install fests. Back then, to configure a single machine to run Linux well, we had to know intimate details about hardware and how to configure it. Now, almost twenty years later, we represent a project whose core ideal is to make getting a single computer to run Linux as easy as an API call. In this new world, operators and developers alike no longer have to worry about the hardware in their servers. This change has had a profound impact on the next generation of operators and developers.

In the early days of computer technology, you had to put your hands on the hardware frequently. If a computer needed more memory, you just added it. As time passed, technology also evolved in big ways. This ended up moving the operator further from the hardware. What used to be a trip to the data center is now a support ticket to have remote hands on the hardware. Eventually, hardware was disposed of altogether. Instead, you now summon and destroy "servers" with simple commands and no longer have to worry about hardware.

Here is the real truth: hardware exists because it is needed to power clouds. But what is a cloud, really?

Why hardware is critical to the cloud

A cloud is a centralization of foundational resources built upon layers of abstraction. It can range from a hypervisor running a few VMs in your homelab to levels of complexity that include custom servers, networking gear, containers, and technology designed from the ground up to focus on efficiencies of scale.

They are nebulous. They evolve.

Those entering technology today don't have the same hands-on experiences as more experienced developers had. Many are trained to use clouds from their earliest interactions with computers. They don't know a world without a button to change the memory allocation. They can point their attention to higher levels in the technology stack. Yet without an understanding of the foundations the infrastructure they use is built upon, they are implicitly giving away their opportunity to learn the lower levels of the stack, including hardware. No fault exists here because the implementer and operator of the cloud infrastructure have made specific choices to intentionally make their products easier to use.

This means that now, more than ever, you have to think intentionally about what trade-offs you make — or others make — when choosing to use cloud technologies. Most people will not know what trade-offs have been made until they get their first oversized cloud bill or first outage caused by a "noisy neighbor". Can businesses trust their vendors to make trade-offs that are best for their operations? Will vendors suggest more efficient or more profitable services? Let the buyer (or engineer!) beware.

[ Related read 5 things open source developers should know about cloud services providers ]

Thinking intentionally about trade-offs requires looking at your requirements and goals from multiple perspectives. Infrastructure decisions and the trade-offs therein are inherent to the overall process, design, or use model for that project. This is why they must be planned for as soon as possible. Multiple different paths must be considered in order to find your project a good home.


First, there is the axis of the goal to be achieved, or the service provided. This may come with requirements around speed, quality, or performance. This can in itself drive a number of variables. You may need specialized hardware such as GPUs to process a request with acceptable speed. Will this workload need to auto-scale, or not? Of course, these paths are intertwined. The question already jumps to "Will my wallet auto-scale?"

Business requirements are another part of this to consider. Your project may have specific security or compliance requirements which dictate where data is stored. Proximity to related services is also a potential concern. This includes ensuring a low-latency connection to a nearby stock exchange or ability to provide a high-quality local video cache as part of a content delivery network.

Then there is the final part: the value and cost of the service provided — how much one wishes to, or can, spend to meet the requirements. This is tightly bound to the first path: the "what" your business is and the "how" of its operation. This can be something as mundane as whether your business prefers CapEx or OpEx.

[ Also read Cloud services: 4 ways to get the most from your committed spend ]

When looking at these options it is easy to see how changing any one variable can begin to change the other variables. They are inherently intertwined, and some technologies may allow for these variables to shift dynamically. Without understanding lower layers of substrate, you risk taking paths that further this dynamic model of billing. For some, this is preferred. For others, it can be dreaded.

Even though learning hardware-specific knowledge has become more optional in modern technology stacks, we hope this article has encouraged you to look into what you may be missing out on without even knowing. Hardware improvements have been a large part of feature delivery and efficiency gains, shrinking computers from room-sized monstrosities to small enough to implant inside a human. We hope you take time to stop, learn, and consider what hardware platform your next project will be running on, even if you don't control it.

If you are a student who hasn't gotten their head out of the clouds yet, go find an old computer, install a stick of RAM, and challenge yourself to learn something new.

The cloud is everywhere, so hardware is more critical than ever.

Image by: Photo by Ian Stauffer on Unsplash


Own your cloud with NextcloudPi on the Raspberry Pi

Wed, 03/08/2023 - 16:00
Own your cloud with NextcloudPi on the Raspberry Pi hej Wed, 03/08/2023 - 03:00

You can now say goodbye to big commercial cloud providers and manage your appointments, contacts, and other data in your own home network. Install NextcloudPi on your Raspberry Pi in less than 30 minutes, synchronize your mobile devices with your own Nextcloud, and gain total digital sovereignty and privacy!

I remember when the first Raspberry Pi hit the market in 2012. My Linux friends and I were absolutely thrilled: a tiny computer, available for little money, but with enough computing power to be useful, and with a fully-fledged Linux system running on it, too! We started all sorts of DIY projects: media centers, web servers, blogs, control centers for our smart homes, and even a monitoring solution for beehives.

Last year in December, I decided to install and run my own cloud on the Raspberry Pi. After some digging around, I settled on NextcloudPi, a ready-made instance of Nextcloud. The open source software runs not only on the Raspberry Pi, but on many other single-board computers and other operating systems.

This article shows how to install and configure NextcloudPi. I also explain how to secure the system and talk about different backup and restore methods.

In addition to this tutorial, you can check out how to synchronize data from Google Workspace and Apple iCloud with Nextcloud in my previous articles.


To run NextcloudPi on a Raspberry Pi, you need at least a Raspberry Pi 2. However, newer models like the Raspberry Pi 3, Pi 3+, and especially the Raspberry Pi 4 are much faster. At my home, I use a Raspberry Pi 3 Model B+ with a 64-bit quad-core processor (1.4 GHz) and 1 GB of RAM. Since I don't use a graphical desktop environment, this equipment is completely sufficient for my purposes (a maximum of 10 users/devices).

What else do you need? Here is a checklist:

  • A microSD card with 8 GB minimum capacity

  • A computer to write the NextcloudPi image to the SD card

  • An Ethernet cable (NextcloudPi also works over Wi-Fi, but a wired connection is much more stable and faster)

  • Optional: additional external storage for your cloud's data. Depending on how much data you have, choose a USB stick or an external hard drive of sufficient size

Normally, you don't need a monitor or an external keyboard. After you have flashed the NextcloudPi image onto the SD card, the rest of the setup and operation is done through a web interface. You can also access the Raspberry Pi over SSH from Linux, Windows, and macOS, as well as from your mobile devices.

Nextcloud vs. NextcloudPi

Of course, you can always install Nextcloud on Raspberry Pi OS (formerly Raspbian) or another operating system on the mini computer. However, this means you have to install the operating system, set up a database server and a database — a complex task that scares off many beginners. The process is easier and faster with NextcloudPi. The open source project offers ready-made images for various single-board computers. It also has an installation script that allows you to set up your own cloud in no time at all.

NextcloudPi takes a lot of work off your hands. It installs and configures Nextcloud so that you can start right away. The developers have published a list of supported hardware/systems on their website. For example, Raspberry Pi (all models, there are also Berryboot images to run NextcloudPi directly from an external hard drive), Odroid, Rock64, RockPro64, Banana Pi, and so on. They also offer a container image that runs on all architectures and operating systems that run containers.

In addition to the current Nextcloud version, NextcloudPi includes a web server with a pre-configured database connection. An administration interface for the web browser is also included so that beginners can quickly find their way around. Those who prefer administration on the command-line can activate SSH access. NextcloudPi has a number of useful presets, including automatic HTTPS forwarding, HSTS (HTTP Strict-Transport-Security, a security mechanism for HTTPS connections), PHP extensions to improve performance, and more. The configuration wizard assists you with the first steps and the formatting of external USB media as well as access from outside using various Dynamic DNS services.

Nextcloud itself has a number of pre-installed apps, including a calendar, address book, dashboard, file sharing, PDF viewer, image and file management, notes and tasks, and Nextcloud Activities.

Flash the image to the SD card

The GitHub repository contains ready-made images for the Raspberry Pi and other devices. After downloading and unpacking the zip file, you can use the dd command on Linux or macOS to write the image to the SD card:

sudo dd bs=4M conv=fsync if=NextCloudPi[…].img of=/dev/mmcblkXX

Replace /dev/mmcblkXX with the device name of your SD card. Double-check that you have the correct device, because dd overwrites the target without asking for confirmation!
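Since dd offers no safety net of its own, a small wrapper can help. The sketch below is purely illustrative and not part of NextcloudPi: the flash_image function and the example paths are hypothetical placeholders. It refuses to write to a device that is currently mounted, which catches the most common dd accident.

```shell
# Illustrative safety wrapper around dd (not part of NextcloudPi).
# flash_image and the example paths below are placeholders for your own values.
flash_image() {
    img="$1"
    dev="$2"
    # Refuse to write to a device that appears as a mount source in /proc/mounts.
    if grep -q "^$dev" /proc/mounts; then
        echo "Refusing to write: $dev is mounted." >&2
        return 1
    fi
    sudo dd bs=4M conv=fsync if="$img" of="$dev" status=progress
}

# Example: flash_image NextCloudPi.img /dev/mmcblk0
```

Unmount the card's partitions first if your desktop auto-mounted them; the wrapper then lets the write proceed.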

Alternatively, you can use the open source program Etcher to write the image to the SD card. It runs on Linux, Windows, and macOS. Simply select the image on your hard drive (Flash from file) or enter the address of the NextcloudPi image in GitHub (Flash from URL). After that, click Select target, select the SD card, and Flash starts the writing process.

Image by: (Heike Jurzik, CC BY-SA 4.0)

Boot and activate NextcloudPi

Insert the prepared SD card into the slot of the Raspberry Pi, connect the Ethernet cable to a free port on your network switch or router, and connect the Raspberry Pi to the power supply to boot it. First, you need to find out the IP address of the Raspberry Pi. This is how you access both the web interface for setting up NextcloudPi and Nextcloud itself.

If you are using a router equipped with a DHCP server, you can look in the router's administration interface to see which address it has assigned to the Raspberry Pi.

Alternatively, use a network scanner like nmap on the command-line to find out the IP.
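For example, a rough sketch of that scan (the guess_subnet helper is hypothetical; it assumes a typical /24 home network and the iproute2 ip tool, so adjust both to your setup, and install nmap first if needed):

```shell
# Derive the local /24 subnet from the default route, then ping-scan it.
# guess_subnet is an illustrative helper, not a standard tool.
guess_subnet() {
    ip route | awk '/^default/ {print $3; exit}' | sed 's|\.[0-9]*$|.0/24|'
}

# Example:
#   sudo nmap -sn "$(guess_subnet)"
```

In nmap's output, the Raspberry Pi is usually easy to spot by its hostname or its MAC vendor string.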

Open a web browser and enter the IP address or hostname of your Pi. This opens the configuration wizard. By default, NextcloudPi only has a self-signed SSL/TLS certificate that hasn't been signed by a known certificate authority (CA). Most web browsers warn against such self-signed certificates. In this case, it's safe to ignore the warning, accept the risk and continue.

Next, you see the NextcloudPi Activation screen. The web interface contains information about the two accounts it has created: one for the NextcloudPi administrator and one for Nextcloud. At this point, it's a good idea to take notes or save the passwords in a password manager so you can change them later. Click on the Activate button, log in as the user ncp along with the associated password to start the configuration wizard.

Image by: (Heike Jurzik, CC BY-SA 4.0)

NextcloudPi configuration wizard

The first time you navigate to your Pi in your browser, click the Run button to start the configuration wizard.

Image by: (Heike Jurzik, CC BY-SA 4.0)

Switch to the USB Configuration tab to set up an external USB device (for example, an external hard disk or a USB stick) for the Nextcloud data. If the USB medium already has a suitable file system (Ext2, Ext3, Ext4, or Btrfs), you can continue by clicking Skip. Otherwise, instruct NextcloudPi with the Format USB button to format the disk with the Btrfs file system. Caution! Formatting erases all data on the medium!

In the External access tab, you can set up NextcloudPi so that the system can be accessed from outside (the internet). At this point, I recommend selecting no. You can always connect various dynamic DNS services later through the NextcloudPi NETWORKING menu. After this last step, the initial setup is complete. The Finish tab offers two links to access your new Nextcloud installation and the NextcloudPi dashboard.

First steps in NextcloudPi

The NextcloudPi administration interface is very intuitive. The menu bar shows the version number, a language switcher, a search function, and icons that link to your own Nextcloud installation. You can also find information about the system, access existing backups and snapshots, display an overview of the Nextcloud configuration, log files, and re-start the configuration wizard. Use the icon on the far right to shut down or restart the operating system.

In the left sidebar, there are seven menus containing essential options for managing the NextcloudPi system:

  • BACKUPS: (Automatic) backups, configure backup media, define a backup schedule, export and import the NextcloudPi configuration, create a snapshot of the Btrfs file system, restore existing backups

  • CONFIG: Display and (re)set the password for the administrator account ncp, move the Nextcloud database to an external (USB) device, move the Nextcloud data directory, force secure HTTPS connections, restart Nextcloud with a clean configuration, and configure system limits

  • NETWORKING: Activate NFS, SSH access, various DNS services and providers, TLS/SSL certificates with Let's Encrypt, port forwarding for access from the outside, a static IP address, trusted proxy servers, and Samba

  • SECURITY: Configure the firewall and the intrusion prevention system Fail2ban, and initiate a manual security check

  • SYSTEM: Activate the monitoring for Prometheus and automatic mounting of USB devices, check the status of external hard disks and save the system logs in RAM to protect the SD card, define the size and location of the swap space, and activate compressed RAM to improve swap performance

  • TOOLS: Use various utilities for fixing permissions of Nextcloud data files, formatting USB drives (Btrfs file system), and switching Nextcloud's maintenance mode on and off

  • UPDATES: Enable automatic Nextcloud and NextcloudPi updates, notifications about new versions and regular updates of all installed Nextcloud apps, update the current instance to a new Nextcloud version, install NextcloudPi updates, and activate the automatic installation of security updates

Starting Nextcloud

You can log into your Nextcloud with the username ncp and the password displayed in the activation window. The Nextcloud dashboard offers quick access to certain files and folders, your calendar, and your online status (Online, Away, Do not disturb, and so on). You can also select your location for a weather forecast.

All installed Nextcloud apps are listed in the top menu bar: Files, Photos, Activity, Contacts, Calendar, Notes, and so on. If you select an app, a menu in the left sidebar provides filters and tasks associated with it. In the Files app, for example, you get search functions and filters that provide quick access to your own files and folders or those shared with you. In the Contacts app, on the other hand, there is a button for creating new contacts and managing groups and circles. At the bottom left, you can access the app's settings.

All Nextcloud users can find their personal settings by clicking on the profile picture or the initials of the username in the top right corner. The administrator account ncp may also (un)install apps, manage user accounts, and perform other administrative tasks.

Image by: (Heike Jurzik, CC BY-SA 4.0)

Before you start importing your address book and calendars, it's a good idea to create a new user account without administrator privileges for your daily work. For security reasons, you should only use the ncp account when you change something in the configuration. This includes installing and updating Nextcloud apps, creating users and groups, and so on.

To create a new user, click on the icon with the N in the top right corner to open the settings and select the Users entry there. In the left sidebar, click the New user button, enter the username, first and last name, a password, and an email address. You can also add the account to an existing group. In the Default quota field, you can define how much hard disk space is granted to the user in the cloud.


Once you have Nextcloud up and running, you can synchronize your Android or Apple devices. You can read about how to do that in my previous articles, Switch from Google Workspace to Nextcloud and Switch from iCloud to Nextcloud.

Set access to your system

Even if you operate Nextcloud in the local network only and no services are accessible from the outside, it's vital to consider additional security measures. Here are some suggestions:

  • Activate SSH access: To enable SSH access for the Raspberry Pi, go to NETWORKING > SSH in the NextcloudPi web interface. Click the Active checkbox and enter a password for the pi account. Finally, click Apply to start the SSH service on the Raspberry Pi. By default, the password raspberry is set up for the user pi. You must change it to something different. After your first login through SSH, the system prompts you to change this password as well!

  • Set up a Firewall: You can activate the firewall in the Security > UFW section. The Uncomplicated Firewall (UFW) is a frontend for the powerful but quite complex Netfilter firewall Iptables. The NextcloudPi developers simplify the setup by entering the three essential ports in the web interface that UFW should allow: 80 (HTTP), 443 (HTTPS), and 22 (SSH). Click Active and then Apply to start the firewall. In the dialogue window you can see messages from the operating system about added rules.

  • Fail2ban: If you only use Nextcloud on your home network, you can do without this setup. If, on the other hand, Nextcloud is accessible from the internet, then set up this additional protective measure via SECURITY > fail2ban. Fail2ban protects services against brute-force attacks: it blocks IP addresses after a certain number of failed connection attempts — first temporarily and then permanently.

  • TLS/SSL Certificates: By default, NextcloudPi includes self-signed SSL/TLS certificates. This can lead to a browser warning. In the case of NextcloudPi in your local network, the warning is merely a technicality, and you can define an exception in the respective web browser. Alternatively, you can generate valid TLS/SSL certificates with the Let's Encrypt certificate authority (or set up your own certificate authority).

  • Enable 2FA for Nextcloud: You can activate two-factor authentication (2FA) through the menu Administration > Security. In the security section of the Nextcloud app catalog, there are numerous apps that set up a second factor for logging in — provided by an app or as hardware (with a YubiKey, for example). If you have enabled two-factor authentication, set up an app password under Personal > Security. You can use this password for authentication on your Apple or Android devices so that the synchronization of contacts and calendars succeeds. For security reasons, the password is only displayed once.

  • Password Policies: You can also set up password policies for Nextcloud in Administration > Security. For example, you can define a minimum password length, the number of days until a user password expires, and the number of login attempts before an account gets blocked. Additionally, you can forbid common passwords and enforce combinations of upper and lower case, numeric, and special characters.

Image by: (Heike Jurzik, CC BY-SA 4.0)

For the security of an operating system, it is essential to install security updates promptly. For NextcloudPi, there are a total of three different updates: NextcloudPi updates, Nextcloud updates, and the apps installed. There are also updates for the underlying operating system (Raspberry Pi OS). Use the menu Updates in the NextcloudPi interface to activate notifications about new versions, automatic updates, and manual updates of Nextcloud apps.

Backup and restore

Now that you have your own cloud, ideally within your own four walls — that's the end of the setup. The article could end right here. It could, that is, if there weren't one more essential topic to discuss: backups! The small SD cards in the Raspberry Pi are particularly prone to hardware defects compared to standard disks, but there can also be other reasons for hardware failure. NextcloudPi contains all the necessary tools to create automated backups and restore them in an emergency.

You can immediately back up your data at any time via the NextcloudPi web interface. To do this, open the nc-backup entry from the BACKUPS menu. The backups are better off on an external USB device than on the SD card — an external hard drive not only has more space but is also more reliable. To create a full backup that includes not only the NextcloudPi configuration but also the Nextcloud database, the Nextcloud apps and the users' data (calendar, contacts, and other files), activate the Include data checkbox. Optionally compress the data by ticking the checkbox Compress. An alternative to the full backup is to simply export the NextcloudPi configuration via the menu entry BACKUPS > nc-export-ncp. This way you only save the NextcloudPi settings — without the Nextcloud database, its configuration, and user data.

To make things easier, you can activate automatic backups (BACKUPS > nc-backup-auto). Again, it's up to you whether you want to include the users' data and whether you want to compress the backups. In the default settings, NextcloudPi creates a full backup every seven days after clicking Apply. After four weeks, it overwrites the oldest version. Both methods, the manual backup and the automatic backup, have the advantage that they can be set up quickly. However, the backups can — depending on the amount of data in your Nextcloud — take up a large amount of space, even if they are compressed. As the amount of data grows, NextcloudPi also needs more time to create the backups. The cloud is in maintenance mode during this time and cannot be used.

As an alternative, you can create incremental backups by creating and synchronizing Btrfs snapshots. This backup method saves disk space and is significantly more performant than the other approaches, especially for large amounts of data in the cloud. This variant has a real advantage: during the backup, Nextcloud does not have to be put into maintenance mode, so there is no downtime. The B-tree FS (also called Butter FS) is a Copy-On-Write (COW) file system that allows snapshots of the current data to be created from the running system. These are frozen images of a subvolume at the time of creation, and initially they require almost no additional disk space.

Please note: To do this, the Nextcloud data directory must be moved from the SD card (or another data medium) to a USB drive with a Btrfs file system. You may have already configured this in the configuration wizard; otherwise, you can move the data now:

  1. Format the external USB drive with the Btrfs file system (TOOLS > nc-format-USB).

  2. Move the Nextcloud data directory to the external USB disk (CONFIG > nc-datadir). You should see the message The NC data directory has been moved successfully in the window below.

  3. As a test, you can now create a snapshot (BACKUPS > nc-snapshot) before activating the automatic snapshots feature.

  4. To create the snapshots automatically, open the nc-snapshot-auto entry from the BACKUPS menu, check Active and click Apply. NextcloudPi now automatically creates a Btrfs snapshot of the Nextcloud data directory every hour.

The newly cloned directories only ever take up as much additional storage space as new files have been added since the last snapshot — so the whole thing is extremely efficient in terms of space. You can incrementally send the snapshots to another Btrfs file system. NextcloudPi supports you with the setup (BACKUPS > nc-snapshot-sync). You can choose either another external drive or a directory on a remote computer. This must be accessible over SSH (without a password) and the disk must also be formatted with Btrfs.
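Under the hood, these menu entries boil down to standard btrfs commands. A hedged sketch of what nc-snapshot roughly does (the snapshot_ncdata function and both example paths are illustrative placeholders, not the exact paths NextcloudPi uses):

```shell
# Roughly what nc-snapshot does: a read-only Btrfs snapshot of the Nextcloud
# data subvolume. snapshot_ncdata and the example paths are placeholders.
snapshot_ncdata() {
    src="$1"    # e.g. /media/USBdrive/ncdata
    dst="$2"    # e.g. /media/USBdrive/ncp-snapshots
    # Timestamped, read-only (-r) snapshot; -r is required for btrfs send.
    snap="$dst/ncdata_$(date +%Y-%m-%d_%H%M%S)"
    sudo btrfs subvolume snapshot -r "$src" "$snap" && echo "$snap"
}

# Example: snapshot_ncdata /media/USBdrive/ncdata /media/USBdrive/ncp-snapshots
# Incremental sync to a second Btrfs disk, roughly what nc-snapshot-sync
# automates ("$old_snap" must already exist on both sides):
#   sudo btrfs send -p "$old_snap" "$new_snap" | sudo btrfs receive /media/backup
```

The read-only flag matters: btrfs send only accepts read-only snapshots, which is why incremental replication and hourly snapshots combine so well here.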

After you have set up your backup strategy, you should always test whether you can restore your data with the NextcloudPi web interface. For almost all methods presented here, there is a corresponding menu item for recovery:

  • nc-import-ncp: import your NextcloudPi configuration

  • nc-restore-snapshot: restore a specific snapshot

  • nc-restore: restore a full backup of your Nextcloud
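Beyond testing restores, it also pays to verify that backups keep appearing at all. A minimal sketch, assuming a weekly schedule (the check_backups function and the example path are hypothetical, not part of NextcloudPi):

```shell
# Illustrative sanity check (not part of NextcloudPi): warn if the newest
# file in the backup directory is older than the expected interval.
check_backups() {
    dir="$1"
    max_days="${2:-8}"   # default: weekly backups plus one day of slack
    # Any file modified less than $max_days days ago counts as a fresh backup.
    latest=$(find "$dir" -type f -mtime -"$max_days" | head -n 1)
    if [ -z "$latest" ]; then
        echo "WARNING: no backup newer than $max_days days in $dir" >&2
        return 1
    fi
    echo "OK: recent backup found: $latest"
}

# Example: check_backups /media/USBdrive/ncp-backups 8
```

Run from cron on another machine, a check like this turns a silent backup failure into a visible warning.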

Image by: (Heike Jurzik, CC BY-SA 4.0)

Explore Nextcloud

There is an active Nextcloud community out there:

The NextcloudPi developers have put together a website explaining which questions are best asked where.

NextcloudPi offers a cost-effective and robust alternative to commercial cloud providers. By installing it on a Raspberry Pi, you can have control over your data and ensure privacy. With its compatibility with Android and Apple devices, it makes it easy to synchronize your appointments, contacts, and other data. The installation process is straightforward and can be completed in under 30 minutes. By taking the necessary steps to secure the system and implementing backup and restore methods, you can have peace of mind knowing your data is safe.

For more in-depth information on NextcloudPi, you can check out my book Nextcloud on the Raspberry Pi: Set up your own cloud with NextcloudPi. It provides a comprehensive guide to setting up and using NextcloudPi, and will help you get the most out of your personal cloud solution — take the first step towards digital sovereignty!

This article has been adapted from Heike Jurzik's book, Nextcloud on the Raspberry Pi.


Image by: Photo by Ian Stauffer on Unsplash


The power of sisterhood and allyship in open source

Wed, 03/08/2023 - 16:00
The power of sisterhood and allyship in open source discombobulateme Wed, 03/08/2023 - 03:00

A little more than two years ago, I switched my career from artist to software developer. I didn’t do it alone. I had the support of PyLadies Berlin, the local Berlin chapter of an international volunteer-based group made to support women in technology.

We are used to the term “career change” as if it were a break in a trajectory. But in my experience, that's never really the case. A person cannot erase themselves from what they consist of, and this richness of diverse backgrounds led to several breakthroughs for me. Individual journeys, often far from computer science, hold accountability for the social implications of technology and bring richness and creativity to the technology industry.

Being an artist has always given me freedom and opened doors to explore several fields, from architecture to sciences. A great part of my artistic experience took place in hackerspaces in Brazil, surrounded by the Free/Libre Open Source Software (FLOSS) ideology, the open free/libre culture of sharing. Nowadays, for several ideological and practical reasons that do not fall within the scope of this article, the most common term is “open source”. And lucky for me, my career switch started with an internship in an Open Source Program Office (OSPO), which made this switch feel — almost — like home.

Standing on the shoulders of giants

We all benefit from open source. Whether you code or not, the software you use relies on it. Since it is an open culture where everything is built upon the work of others, it’s common to hear the term “standing on the shoulders of giants”, which refers to the idea that advancements are built upon the work and contributions of those who came before us. This highlights the importance of learning from the experiences and accomplishments of others.

This article is meant to unveil whose shoulders I am standing on. And this is not only to show my gratitude to them but also to answer a question I was asked while being interviewed by Kevin Ball and Christopher Hiller at JSParty: What can you do to improve diversity in your surroundings?

“Standing on the shoulders of giants” applies not only to open source; it is also the basis of sisterhood in technology, which recognizes the roles of female pioneers and leaders in the field. By acknowledging the contributions of women who came before us, we can gain inspiration and insight into the challenges they faced and learn from their experiences in overcoming them. In this way, we “stand on the shoulders of giants” and build upon their work to create a more inclusive and supportive environment for women and underestimated people in technology.

By supporting one another, recognizing the importance of learning from the experiences of others, and forming a supportive network, we can work together to overcome challenges and build a better future for all by creating a more equitable environment. By doing so, we are creating new giants for others to stand upon in the future.

Organizing a local community: Meili Triantafyllidi and Jessica Greene

I joined PyLadies Berlin, which was founded by Meili in 2013. Jessica, one of the organizers, was a junior software engineer at Ecosia. Being a community organizer means using your free time to passionately do all the work needed to create a safe, supportive networking and learning space. It includes finding a hosting place, promoting the event, curating themes, finding speakers, and most importantly, listening to the needs of the community.

Being new in a multicultural city and trying to find my place in it, PyLadies was less a place to learn Python and more a center to be welcomed and understood.

According to the narrative we are told, tech is the new promised land everyone is heading to, with infinite job postings, the freedom to switch countries, and well-paid careers. This isn’t being offered in other sectors, or at least not at this scale. Communities focused on bringing diversity to the space make this a realistic possibility for everyone.

Every event starts with community announcements, a simple slide containing an agenda, and promotions for similar events. Two of the events I heard about there guided me to my career path: the Rail Girls Summer of Code program and FrauenLoop. Feeling compelled to give back to the supportive community that had welcomed me, I became one of the co-organizers.

Networking and learning: FrauenLoop

Founded by Dr. Nakeema Stefflbauer in 2016, FrauenLoop is committed to changing the face of EU-based tech companies. The program is divided into three-month cycles composed of weekly evening classes and weekend workshops that train women who don’t have a tech industry connection.

The learning curriculum is developed around the professional needs of women, from technical industry-focused classes to workshops delivered by women on how the tech industry really works and how to successfully navigate it. Some common topics are salary negotiation and practicing technical interviews. Most recently, in response to the layoffs, there was a workshop run with the Berlin Tech Workers Coalition about Demystifying the Termination Challenge Process.

The focus is on women, especially migrants and those changing family status or careers, who are ready to start job searching.

Being around Nakeema is itself an inspiration. The program was a starting point for understanding what coding means and learning the basics of web development. But the greatest part was connecting with others who would later become PyLadies co-organizers, speakers, mentors in side projects, and friends.

FrauenLoop also gives its students the opportunity to come back as mentors. For me, this was the breaking point that definitively set my path. I have been a mentor for over a year, and it has improved my confidence and reinforced my own learning. When you are challenged with the responsibility of facilitating learning for others, you inevitably have to learn yourself.

There I met Victoria Hodder, who became my partner in applying to Rail Girls Summer of Code.

Diversity programs: from Rail Girls Summer of Code to Ecosia Summer of Code

Rail Girls Summer of Code was a global fellowship program for women and non-binary coders where successful applicants received a three-month scholarship to work on existing open source projects. The program was active from 2013 to 2020.

The application was submitted by a team, meaning two people from the same city. While it was a remote program, having a local peer ensured accountability and support.

It also required a place to work, an environment suitable for working for three months. This place could be your home, a co-working space, a work office, or in the best-case scenario, a coaching company. Although the coaching company had no obligation beyond offering a space to work, it connected us with a local company and gave us a space to have visibility and network with people within the industry we wanted to enter.

Jessica, my PyLadies Berlin co-organizer, had kick-started her career in tech with the program. She proposed Ecosia, her then and current company, to be the coaching company for two teams. One team was myself and Victoria (we focused on web development) and the other was Taciana Cruz and Karina Cordeiro (they focused on data).

During the three-month application period, the COVID-19 pandemic hit hard. Victoria and I had been sort of selected for the Rail Girls program after getting involved with the if-me project. Sort of selected. Communication with Rail Girls got really messy by the end of the selection period, until they finally canceled the program at the last minute.

We were all devastated. The weight of the pandemic hit us hard, crushing not only a chance for a paid job but a dream of starting a new career that had been cultivated for so long.

Jessica, a junior software developer at the time, knew that. So she took a step further and, instead of feeling powerless, she took a stand. She piled more work on top of her personal struggles navigating her new role and created the Ecosia Summer of Code.

Ecosia couldn’t cover scholarships, so Jessica developed a mentorship program instead. The program used the company’s available resources, offering mentorship from highly qualified professionals to fill gaps in our knowledge. When Victoria and Karina dropped out of the initiative because they needed paid jobs, Taciana and I managed to continue on individual projects. We found common themes to work on and supported each other.

About a year later, I was invited by one of those mentors, Jakiub Fialla, to talk about open source to the company. I am still connected with a few others, and every now and then, I stop by and meet some of them when they host PyLadies Berlin events. How sweet is that?

Sponsoring diversity: Coyotiv and Armagan Amcalar

When Rail Girls was canceled, I saw an Instagram post about a bootcamp offering a full stack web development program scholarship.

The application was fairly simple, so I applied. I quickly received a spontaneous invite for an interview. Depressed, messy, and hopeless, I attended without any preparation, so I was brutally honest. The conversation was equally honest, which I highly appreciated.

The interviewer was Armagan Amcalar, the founder of the Coyotiv School of Software Engineering. Coming from a music background, Armagan is creative and thinks critically about the world around him. The school itself started after he offered free crash courses at Women Techmakers Berlin for three years. He doesn’t give a rote diversity speech; he acts on it, offering scholarships to all full-time participants.


I got the scholarship, and together with four other people (three of them women), the first cohort was formed. The bootcamp lasted for 17 super-intense weeks and was fundamental in changing my perspective on code. Unlike other places where I had tried to learn, the framework we chose was the least of Armagan’s concerns. Instead, it was all about understanding what we were doing and thinking about software engineering as a creative, powerful tool shaping the world we live in. I didn’t get just a scholarship; I got a friend and mentor for life who offered me a turning point and opened a door to a better life.

Do you think I am overreacting? Talk to people around me. My partner, who had known me for about 14 years at that point, commented on how much I had changed: disciplined, vibrant, happy about the things I was learning along the way, having deep conversations about software and its context, letting go of a life-long career in the arts without feeling conflicted, and finding a purpose. It was so remarkable that he joined a few cohorts after me.

The school provided me with technical knowledge, interview training, CV support, and public speaking training. Graduation was not only about developing a personal full-stack project. You also had to give back to open source, in recognition that so much software is built upon it, by publishing a library on npm. Node Package Manager (npm) is a JavaScript package registry that allows you to reuse code by easily installing it in your JavaScript-based projects. Although I had been involved with the free software movement and open source for over a decade, I’d never thought I could give back to it with actual code.

My contribution

This is how rainbow-penguin was born. It’s an npm library that sends motivational messages to developers while coding. Maybe it’s not a very functional tool. Still, to me, it was a needed tool based on my personal experience wading through the frustrations of learning to code, contributing to the if-me project, and hearing so many similar stories from other learners.

Through my experiences in these programming communities, I learned that code is much bigger than the lines of code, and how powerful and necessary it is to have allies. No matter who you are or what you think you know, there are opportunities within the free and open source software communities. Your participation doesn't have to be big, because together our contributions are greater than their sum. Take the first step. Find your allies within open source.


Image by: LGBTQ Symbols via Pixabay. CC0.


Open Source Career Day at SCaLE 20x

Tue, 03/07/2023 - 19:00
By lufthans

Southern California Linux Expo (SCaLE) has been a great place for careers in Open Source, and the Open Source Career Day (OSCD) is returning for SCaLE 20x on Sunday, March 12th.

For many years, SCaLE had a career and jobs Birds of a Feather (BoF) meeting one evening during SCaLE, and a physical job board throughout the conference. But in 2020, RaiseMe created its first Open Source Career Day.

This year at SCaLE, the career BoF team and RaiseMe are working together again to bring a full day of career activities for the Sunday schedule. There will also be a career BoF on Saturday evening. I'm the OSCD co-chair this year, but I'm not just a chair. I'm also a client. I found my current employer through the SCaLE job board.

Speed pitching

At the BoF, we plan to have Speed Pitching. Everyone gets a chance to practice their interview response to "tell us about yourself" several times during the BoF. It's an elevator pitch for you.

Sunday, we have 30-minute career consulting sessions from 11:30 to 14:30. Participants can ask for resumé review, recareering assistance, or career guidance. Signups are open now through SCaLE 20x. When signing up, please check all the times you can be available.

This year's career schedule

For presentations, we start the day with a recareering and resume-writing clinic. Bryna is bringing us a follow-up to a success story, bridging OSCD at SCaLE 18x to this year.

Next, Miguel leads a panel on diversity successes in FLOSS projects and how we can learn from them for future successes in FLOSS and the workplace.

After lunch, Fatima is in from Canada to give us excellent strategies for early career success. There are lots of resources for getting that first job, and Fatima wants to help us have a good start once we get the early career job.

We’ll wrap up the presentations with a panel on hiring manager insights. The panel will discuss what hiring managers are looking for in job applicants and give career advancement advice.

Thanks to the diligence of the SCaLE organizers, we're getting headshots on Sunday as well. The photographer is available from 11:00 to 13:00.

OSCD's Sunday schedule wraps up before SCaLE 20x’s closing keynote from Ken Thompson.


When registering for SCaLE, use code JOBS to get 50% off registration. See you in Pasadena!


Switch from iCloud to Nextcloud

Tue, 03/07/2023 - 16:00
By hej

If you're wary of committing your data to cloud services controlled by a corporation but love the convenience of remote storage and easy web-based access, you're not alone. The cloud is popular because of what it can do. But the cloud doesn't have to be closed. Luckily, the open source Nextcloud project provides a personal and private cloud application suite.

It's easy to install and import data—including contacts, calendars, and photos. The real trick is getting your data from cloud providers like iCloud. In this article, I demonstrate the steps you need to take to migrate your digital life to Nextcloud.

Migrate your data to Nextcloud

As with Android devices, you must first transfer your existing data from Apple's iCloud to Nextcloud. Then you can set up two new accounts on your Apple devices to automatically synchronize address books and appointments. Apple supports CalDAV for calendars and CardDAV for contacts, so you don't even need to install an extra app.

To export your address book, you can either open the Contacts app on your iPhone/iPad or log into iCloud in your web browser:

  1. Select all address book entries you want to transfer to Nextcloud and choose File > Export > Export vCard to save a .vcf file on your local disk.

  2. Import the .vcf file into Nextcloud. To do this, select the Contacts app, click Settings at the bottom left and select the Import contacts button. In the following dialogue window, click Select local file, and open the previously saved vCard.
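If you prefer the command line, the same import can be scripted against Nextcloud's CardDAV endpoint. This is a minimal sketch, assuming the default remote.php/dav path and an address book named "contacts"; the server name, user, and password in the example call are placeholders for your own installation:

```python
# Upload a saved vCard to Nextcloud over CardDAV instead of using the web UI.
# The server name, user, password, and address book name ("contacts") are
# placeholders -- adjust them to your installation.
import base64
import urllib.request

def carddav_object_url(server, user, book, filename):
    """Build the URL of a single vCard inside a Nextcloud address book."""
    return (f"https://{server}/remote.php/dav/"
            f"addressbooks/users/{user}/{book}/{filename}")

def upload_vcard(server, user, password, vcf_path):
    url = carddav_object_url(server, user, "contacts",
                             vcf_path.rsplit("/", 1)[-1])
    with open(vcf_path, "rb") as f:
        req = urllib.request.Request(url, data=f.read(), method="PUT")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "text/vcard")
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 201 means the contact was created

# Example call (not run here):
# upload_vcard("nextcloudpi.local", "hej", "app-password", "contacts.vcf")
```

This uploads one contact per .vcf file; for a bulk export containing many contacts, the web UI import described above is simpler.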

To set up a CardDAV account on your iPhone or iPad, go to Settings > Contacts > Accounts > Add Account:

  1. Select Other and then Add CardDAV account. In the Server field, enter the URL of Nextcloud (for example, https://nextcloudpi.local). Below this is space for the username and password of the Nextcloud account. Open the Advanced Settings for the new account.

  2. Ensure the Use SSL option is enabled. The account URL is usually set correctly. It contains, amongst other things, the host name of your Nextcloud and your user name.

To create a new account on macOS for synchronizing address books, open the Contacts app and select Add Account from the Contacts menu. Activate the checkbox Other Contacts Account and click on Continue. You can accept the CardDAV entry. In the Account Type drop-down menu, select Manual entry.

Image by: Heike Jurzik, CC BY-SA 4.0

Enter your Nextcloud user name, password, and server address. The current macOS version requires you to specify port 443 (for SSL) in the server address. For example, if the address of your Nextcloud is https://nextcloudpi.local and the username is hej, then enter the following in the field:

    nextcloudpi.local:443
Syncing your calendars

Exporting your calendars works similarly. Through the Calendar app, you can do this with iCloud in the browser, on your smartphone/tablet, or the macOS desktop.

First, set the calendar to public. This doesn't mean that everyone can access your calendar; the setting is only used to generate a link for a calendar subscription. Copy the URL to the clipboard. You can't yet import the calendar into Nextcloud directly from this link, because Nextcloud expects an .ics file (iCalendar) rather than a link. Here is how to generate such a file from the link:

  1. Copy the link to the clipboard

  2. Paste the link into the address bar of a web browser

  3. Change the beginning of the URL and replace webcal with http

  4. Press Enter and save the .ics file on your disk
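The four steps above can also be scripted. This is a small sketch of the same scheme rewrite and download; the webcal link in the comment is a made-up example, not a real iCloud address:

```python
# Rewrite a webcal:// subscription link and download the .ics file it
# points to, mirroring the manual browser steps described above.
import urllib.request

def webcal_to_http(url):
    """Replace the webcal scheme so an ordinary HTTP client can fetch it."""
    if url.startswith("webcal://"):
        return "http://" + url[len("webcal://"):]
    return url

def save_ics(webcal_url, path):
    with urllib.request.urlopen(webcal_to_http(webcal_url)) as resp:
        with open(path, "wb") as out:
            out.write(resp.read())

# save_ics("webcal://example.com/published/calendar.ics", "calendar.ics")
```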

Image by: Heike Jurzik, CC BY-SA 4.0


You can now import the .ics file. To do this, open the Calendar app in Nextcloud, click Calendar settings at the bottom left and then Import calendar. Select the .ics file you saved in the file manager.

Repeat this process for all iCloud calendars. After that, it's time to replace the old iCloud synchronization service.

Synchronizing events

To synchronize new events with Nextcloud, set up a new account on your client devices (smartphone, tablet, desktop):

  • iPhone/iPad: Settings / Calendar / Accounts / Add Account, select Other and then choose Add CalDAV Account. In the Server field, enter your local Nextcloud URL, which is https://nextcloudpi.local. You can see a space for the username and password of the Nextcloud account.

  • macOS: Open the Calendar app and select Add Account from the Calendar menu. Activate the checkbox Other CalDAV Account and click Continue. From the Account Type drop-down menu, select Manual entry. Enter your Nextcloud username and password as well as the Nextcloud server address. Don't forget to specify the port 443 (for SSL) in the server address; otherwise the account setup will fail.

Tip: If you want to synchronize other files like documents, photos, videos, and so on, in addition to your contacts and calendars, you can install the Nextcloud app offered in the App Store.

This article has been adapted from Heike Jurzik's book, Nextcloud on the Raspberry Pi.


10 ways Wikimedia does developer advocacy

Tue, 03/07/2023 - 16:00
By srish_aka_tux

In a previous article, I wrote about Wikipedia’s rich history as a repository of open knowledge. Supporting Wikipedia as a platform ensures that the information it contains is available to everyone, but it’s a big job. In this article, I’m going to introduce you to the vastness of Wikipedia’s technology landscape and the technical community behind it. I’ll examine the role of developer advocacy in supporting the technical community through dedicated open source software mentoring programs and events, awards and ceremonies, grants and partnerships, a developer portal, and more. This helps us engage with volunteer developers using the technology behind Wikipedia and its sister projects. Through this article, you’ll understand what developer advocacy can look like for a nonprofit organization, and gather new ideas for building stronger developer communities.

Advocacy for Wikimedia Developers

Broadly, advocacy for developers means “being the voice of developers”. It’s about advocating for their needs and offering them the necessary support to be successful. This could be in the form of resources, helping build essential technical skills, and sharing ideas for projects and tasks where they can best contribute. Overall, it means enabling a healthy environment where they can be their best productive selves.

It is essentially taking them through the open source contributor funnel (users > contributors > maintainers), and supporting them in every phase. In the case of Wikimedia, developer advocacy is also about liaising between developers and the broader movement (staff and non-staff) and helping build relationships between the two.

A good return on investment (ROI) for developer advocacy in open source organizations like Wikimedia is based on a thriving technical community. We don’t sell or market to our developers. Many “developer-first” for-profit organizations rely on developers consuming their products or services as a primary market. At Wikipedia and in much of open source, a happy new contributor is one who makes contributions that bring significant impact to the technical ecosystem, feels a sense of identity and belonging within the community, and also ends up staying in the community. This is a paramount requirement for open source software projects like Wikipedia, and the project’s life depends on Developers and Developer Advocates working synchronously.


This impact feeds into an improved experience for the Wikimedia editors using the software, which ultimately helps them contribute to the organization’s mission of developing free educational content for the world with increased activity and ease of use. Developer advocacy plays a crucial role in Wikimedia, so now I’ll address how that’s expressed through action. These approaches may be an inspiration for your own open source projects, and they may help sustain your project and your community.

Initiatives around developer advocacy

Wikimedia has a dedicated Technical Engagement team of 16 people. It’s heavily focused on advocacy work for developers and supporting the technical community. In addition, several initiatives are ongoing in the community around the world to help local developers and connect them with the global community. Though there are countless initiatives to engage volunteer developers in Wikimedia software projects, primarily in the last decade, this section looks at the ten most popular or currently active initiatives.

Open source outreach and mentoring programs

In the open source world, the Google Summer of Code (run by Google) and Outreachy (run by Software Freedom Conservancy) are two widely known outreach and mentoring programs that introduce open source software development to new contributors. Outreachy in particular encourages participation from individuals belonging to marginalized groups underrepresented in the global tech space, and aims to foster diversity in tech. Both programs have been operating for over fifteen years, in which many open source organizations participate.

Wikimedia participates in both programs every year and has onboarded hundreds of contributors to its projects through these internships. The majority of them are now part of software projects used by the broader community. There are also many examples of participants who became long-term contributors and project maintainers, who continue to make meaningful contributions not just to Wikimedia but to the broader open source community, and who play leading roles in it.


Partnerships

Wikimedia’s community thrives on numerous partnerships with the outside world to unlock free knowledge. While the resources and bandwidth of technical contributors are limited, Wikimedia’s tech community welcomes new partnerships and has collaborated with several initiatives and projects that align with the organization’s mission. Over the years, such collaborations have been instrumental in expanding the contributor base, fostering innovation for a better world, and encouraging and promoting the use of its open source ecosystem.

An ongoing initiative is a partnership with Google’s fellowship program for the Abstract Wikipedia project. Abstract Wikipedia makes it possible for people to share more knowledge in more languages by helping create language-independent articles on a new platform, with a supportive community, at a new top-level Wikimedia project, Wikifunctions. Through this fellowship, nine Google employees are offering their pro-bono technical services to speed up the project’s development and growth.

In the past, there have also been efforts to foster collaborations with academia through organizations like POSSE. This provides universities and professors with professional development resources to encourage teaching with open source projects and engaging students in contributing to them.


Grants

Historically, Wikimedia has funded several grants for its community members, supporting their work with their communities and with mission-aligned organizations worldwide to amplify the movement’s vision, promote knowledge equity, and foster collaboration and cross-cultural exchange. Different types and sizes of funding are available. In the technical landscape, grants are typically available for people to enhance existing technology or develop new tools to support the Wikimedia projects, to organize and participate in local technology meetups or Hackathons in their communities, and so on. A dedicated accelerator program called Unlock Accelerator also promotes open innovation for technology solutions and provides training throughout the program to help participants achieve their goals.

Community recognition

In open source organizations and projects, community members play a crucial role in shaping them. Volunteers join these communities out of intrinsic motivation rather than for tangible benefits. Even so, recognition helps build a culture of appreciation, contributes to community members’ well-being, gives them a sense of belonging, and can strengthen their commitment to the community’s mission. There are several recognition initiatives in Wikimedia’s broader community, particularly in the technical community. A popular one, the Coolest Tool Award, selects the top ten tools developed by volunteers each year. Nominations come from the community in various categories (newcomer, editor, developer, and so on). The award is unique in that it is given in typical wiki style: The wiki page of a winning tool is edited live and updated with an award template during the ceremony!

Other forms of recognition include giving swag, posting badges on talk pages, and sending a thank-you for edits made to a technical article, using a wiki feature that community members often use to appreciate one another's work.

Community capacity building

Wikimedia software developers come from all walks of life with diverse skill sets. Some are working professionals, some are university students, and some work full-time with the various chapters and user groups that are part of Wikimedia. They have varying learning needs and motivations for growing their skills. The capacity-building programs in the technical spaces focus primarily on volunteer developers at a beginner or intermediate level in the skills needed to contribute to specialized areas. These developers are interested in contributing to Wikimedia’s core technical projects or improving existing technical workflows on their local wikis to meet specific community use cases. For instance, a contributor might be suited to helping administer a site, dealing with vandalism, configuring bots or user scripts, or something else.

Small wiki toolkits is a global initiative that focuses on building capacity among community members of smaller language wikis by running technical workshops on various topics, developing toolkits, and building a network of individuals from these communities so they can help each other. These efforts are aimed at enabling learners to gain essential skills and potential trainers to benefit from the resources and ultimately help grow content on their wikis. Several local initiatives operate with the same intention in communities worldwide. For example, developers in India run IndicTechCom, an initiative to address the local needs of editors in their communities by implementing technical solutions.

Technical support

Wikimedia’s technical ecosystem is massive. Every project follows its own set of contribution guidelines, norms, and channels for communication. As many Wikimedia technical projects and communities are decentralized, it can sometimes be overwhelming for people to look for ways to start, to approach fellow community members, and to even ask for technical support. Despite this, there are a few centralized systems that most projects rely on for various forms of collaboration, including:

  • Phabricator for issue tracking
  • Gerrit for code collaboration
  • IRC channels
  • Mailing lists and talk pages for project-related discussions
  • Wikis for project documentation

These are the venues where staff and community members offer technical support and have co-established communication norms for interacting with others and with the broader community. For example, we suggest that everyone communicate in the open whenever possible, do prior research before asking a question, use inclusive language, and be patient while waiting for code reviews.

Platform and services

Platforms and services within the technical ecosystem act as fuel for the innovation and productivity of thousands of developers. Wikimedia provides cloud services infrastructure at no cost to its developers to host, run, and maintain their tools, with dedicated support from technical teams. This cloud infrastructure is built upon OpenStack and Kubernetes. As of this writing, about 1,500 users leverage these services.

The cloud engineers supporting this essential infrastructure keep the services running, ensure they are up to date with the latest technologies, and provide their best efforts to support the community.

Wikimedia makes its APIs and datasets available for public use in research and development. Developers can use the Wikimedia APIs to interact with the Wikimedia sites and to search, create, and modify content. APIs have enabled Wikimedia and its third-party developers to build tools with compelling use cases, such as real-time visualization of edits made to Wikipedia and dashboards exploring gender diversity in Wikimedia projects. Big tech organizations rely heavily on these APIs: Amazon’s Alexa pulls information from Wikipedia, Facebook uses it to fight misinformation, and Google uses it to improve search results.
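As a taste of what these APIs look like, here is a minimal sketch that runs a full-text search through Wikipedia's public MediaWiki Action API. The search term and User-Agent string are just examples:

```python
# Query the MediaWiki Action API for articles matching a search term.
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def build_search_url(term, limit=3):
    """Assemble an action=query full-text search request URL."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": term,
        "srlimit": limit,
        "format": "json",
    }
    return API + "?" + urllib.parse.urlencode(params)

def search(term):
    """Fetch the search results and return the matching article titles."""
    req = urllib.request.Request(build_search_url(term),
                                 headers={"User-Agent": "api-demo/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [hit["title"] for hit in data["query"]["search"]]

# search("developer advocacy")  # returns a list of article titles
```

The same action=query endpoint exists on every wiki that runs MediaWiki, so a tool written against it works across Wikimedia's sister projects as well.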

A new project, “Wikimedia Enterprise,” makes commercial-grade APIs available to bigger commercial content users for consuming data at a large scale as a paid service.

Community metrics and health

Community metrics are crucial for understanding and communicating developer contributions, the impact of the projects and programs targeted toward the community, and the community's overall health. Wikimedia uses CHAOSS's GrimoireLab analytics tool to track contributions and activities in various collaboration and communication venues. This gives insights such as how many issues are filed and resolved, how many changesets are submitted and reviewed, which projects receive the most contributions, which individuals and organizations contribute, and so on.

Quantitative data often prompts qualitative information gathering. It has inspired research studies, such as one on the low retention rates of developers, to understand their motivations for joining the project, why they choose to stay, what challenges they face, and why they leave. The tool allows writing custom queries to fetch various kinds of data, and it is available to anyone in the community to obtain specific project metrics and analyze them as needed.

Hackathons and events

Though most of the work happens online, Wikimedia's local and global communities meet through Hackathons and technical events for in-person collaboration, knowledge exchange, and getting to know each other. Global Hackathons take place twice a year, alongside several local events throughout the year. During these events, people get together to hack on projects under a specific focus area, get feedback on their work in the form of code reviews from others, and brainstorm new ideas. Newcomers in the geographic area join in to understand how they can get involved. There are sessions and workshops oriented toward teaching specific skills, and a final showcase in which participants present what they have accomplished during the Hackathon.

For us, Hackathons have been some of the most successful developer advocacy-assisted activities for open source projects.

Developer resources

For a new developer trying to join a Wikimedia project, navigating through this giant and complex technical environment to understand where they might fit best can be quite challenging initially. Wikimedia’s recent developer portal ensures a smooth first interaction and experience with the community and brings all the resources under one roof. New and existing developers can use the portal to understand various technical areas and processes to contribute to numerous projects, browse them by programming languages and explore demo apps, discover and share tools, learn how to develop them, learn about the community, and so on.

Developer advocacy

Developer advocacy has emerged as a key field in tech in the last decade and evolved significantly, even though there are fewer resources about it for open source projects than for for-profit tech companies. I hope this article gives you a window into how one of the largest open source communities welcomes and supports its technical volunteers through various initiatives, and offers new ideas to implement in your own developer communities.

Are you interested in joining Wikimedia’s technical community? Explore the resources above and learn how to get involved.

It's important to grow and nurture your open source community. Find out how Wikimedia does it.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Switch from Google Workspace to Nextcloud

Mon, 03/06/2023 - 16:00
Switch from Google Workspace to Nextcloud hej Mon, 03/06/2023 - 03:00

If you're wary of committing your data to cloud services controlled by a corporation but you love the convenience of remote storage and easy web-based access, then you're not alone. The cloud is popular because of what it can do. But the cloud doesn't have to be closed. Luckily, the open source Nextcloud project provides a personal and private cloud application suite.

It's easy to install and import data—including contacts, calendars, and photos. The real trick is getting your data from cloud providers like Google. In this article I demonstrate the steps you need to take to migrate your digital life from an Android device to Nextcloud.

Migrate your data to Nextcloud

I wrote this article using a Raspberry Pi running Nextcloud, but the process is the same regardless of how you choose to run Nextcloud.

Two network protocols are used to exchange data between Nextcloud and your Android device: CalDAV (calendar) and CardDAV (contacts). Android doesn't natively support these two protocols, so you need an additional app for Android smartphones and tablets. DAVx⁵ synchronizes calendars and contacts between Android devices and Nextcloud. It is available free of charge as an APK from F-Droid, or for a small fee (approximately US$6) from the Google Play Store and other app stores.

Before you set up synchronization with the app, you need to export existing contacts and calendars and import them into Nextcloud:

  1. Log into your Google account and make sure the automatic synchronization of contacts and appointments with Google is switched on. This ensures that your data is up to date in the Google cloud before you export it.

  2. Open the Google Apps menu and select the Contacts entry to open the address book. In the left sidebar, click on Export. In the next dialogue, select all contacts and save everything as a vCard (.vcf file). If you only want to export certain address book entries, select them beforehand and choose Selected contacts in the Export dialogue.

  3. Import the .vcf file into Nextcloud. To do this, select the Contacts app, click Settings at the bottom left, and click the Import contacts button. In the following dialogue window, click Select local file, and open the previously saved vCard.

It's just as simple to export and then import your calendars:

  1. Visit the Google website and open the Calendar app. On the left, you see all your own and subscribed calendars (My calendars). The right side displays the day, week, or month views. Click the gear icon to access the settings.

  2. On the left, click Import and export. By default, all calendars are marked for export. There is no option to save only individual calendars.

  3. Click Export and save the .zip file to the hard disk. Unpack the .zip file. It contains several .ics files (iCalendar format), one for each Google calendar.

  4. Open the Calendar app in Nextcloud, click Calendar settings at the bottom left and then Import calendar. Select one or more .ics files you saved in the file manager.
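The .ics files are also plain text, with one BEGIN:VEVENT block per calendar entry, so you can run a similar optional check over the unpacked export folder before importing. The helper names here are illustrative:

```python
import glob

def count_events(path):
    """Count VEVENT blocks (one per calendar entry) in an iCalendar file."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip().upper() == "BEGIN:VEVENT")

def summarize_exports(folder):
    """Map each .ics file in the unpacked export folder to its event count."""
    return {path: count_events(path) for path in glob.glob(f"{folder}/*.ics")}

# Usage, assuming the export was unpacked into "calendar-export":
#   print(summarize_exports("calendar-export"))
```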


After a short time, all events appear in your Nextcloud calendar. Repeat this process for all Google calendars. Now everything is ready to replace the old Google synchronization service. From now on, you create new entries through your Android device.

Connecting to your Nextcloud account

Install the DAVx⁵ app and confirm it has access to your calendars and contacts. Tap the orange plus sign to connect the Nextcloud account to the app. Select Login with URL and user name, then enter your Nextcloud user name and password. The Base URL field takes an address that you can find in Nextcloud's Calendar app: click Calendar settings at the bottom left and scroll down to Copy primary CalDAV address.

Click Login to connect to your Nextcloud, Create account, and select Groups are separate vCards from the Contact group method drop-down menu. You can use the sliders in the CardDAV and CalDAV sections to define the contacts and address books you want the app to synchronize.

Once the app has completed the synchronization process, you can access the data through standard Android apps for calendars and address books. You can start the synchronization manually with the Refresh icon at the bottom right. You can also set the app to always run in the background. Note that this may affect the battery life of the device.

This article has been adapted from Heike Jurzik's book, Nextcloud on the Raspberry Pi.

How to synchronize your data between Android and Nextcloud.



How Wikipedia helps keep the internet open

Mon, 03/06/2023 - 16:00
How Wikipedia helps keep the internet open srish_aka_tux Mon, 03/06/2023 - 03:00

Wikipedia is one of the most significant open source software projects, in part because it's a lot bigger than you may realize. And yet anyone can contribute content, and anyone can contribute code to many technical areas of the projects that work behind the curtain to keep Wikipedia running.

Over 870 Wikipedia and umbrella sites are available in different languages, and all of them operate with a common goal of “developing free educational content and disseminating it effectively and globally.” For example, Wikimedia Commons is a repository of free media files, and as of today, it has over 68 million images. Wikisource is a free library of textual sources with over 5 million articles and website subdomains active for 72 languages. Wikidata is an accessible repository of over 99 million data items used across several Wikipedia-related sites.

These projects are supported and maintained by the Wikimedia Foundation, a non-profit organization headquartered in San Francisco. The organization also empowers hundreds of thousands of volunteers worldwide to contribute free knowledge to these projects. Behind this community of knowledge gatherers and producers, a lot of work goes into maintenance, technical support, and administrative work to keep these sites up and running. From the outside looking in, you might still wonder what more work could remain in developing Wikipedia’s software. After all, it’s one of the top ten most visited websites in the world; it serves its purpose well and provides access to the best possible information.

The truth is that every article on Wikipedia leverages thousands of software tools for its creation, editing, and maintenance. These are crucial steps in ensuring equitable, reliable, and fast access to information no matter where you are in the world. When you browse Wikipedia or any other Wikimedia sites, the software you interact with is called MediaWiki, a powerful collaboration and documentation software that powers the content of Wikipedia. It comes with a default set of features. To further enhance the software’s capabilities, you can install various extensions. They’re too numerous to mention, but two notable extensions are:

  • VisualEditor: A WYSIWYG rich-text editor for MediaWiki-powered wikis
  • Wikibase: Allows storing, managing, and accessing structured data which Wikipedia pulls from Wikidata.

All of this apparent ancillary tooling makes the modern Wikipedia, and each one is important for its functioning.

Wikimedia and Mediawiki

Overall, Wikipedia’s technology ecosystem is vast! Because MediaWiki, one of the most popular software packages in the Wikimedia world, is available under an open source license, over four hundred thousand projects and organizations use it to host their content. For example, NASA uses it to organize its content around space missions and their knowledge base!

In addition, there are many other bots, tools, desktop and mobile apps that help with content access, creation, editing, and maintenance. For example, bots in particular help drastically reduce the workload of editors by automating repetitive and tedious tasks, such as fighting vandalism, suggesting articles to newcomers, fact-checking articles, and more. InternetArchiveBot is a popular bot that frequently communicates with the Wayback Machine to fix dead links on Wikipedia.

Tools are software applications that support various contributors in their work. For example, organizers can access tools for conducting editathons, running campaigns, educational courses around Wikipedia editing, and so on. As of May 2022, bots and tools contribute 36.6% of edits made to 870 Wikimedia wikis, demonstrating their significant impact on the ecosystem.

Kiwix is a well-known offline reader and a desktop application that provides access to Wikipedia in limited internet access regions, particularly in educational settings. Mobile apps for Wikipedia and Wikimedia Commons allow editors to contribute articles and media files through their devices too, making our knowledge platforms accessible to a larger audience around the world.

The next time you are browsing a Wikipedia article and notice frequent changes being made to it in real-time in the wake of a recent event, you might be able to visualize better what might be happening behind the scenes.

Wikipedia’s technical community

Wikipedia was launched in 2001 with about ten developers at the time. Since the inception of the Wikimedia Foundation in 2003, the developer pool has grown vastly. About a thousand developers now contribute to various projects within our knowledge movement. This number fluctuates yearly, depending on the number of active contributors and staff members, initiatives supporting volunteer developers, global events such as the pandemic, and so on.

Members in the technical community contribute in various ways and roles. There are code contributors, documentarians, designers, advocates, mentors, community organizers, testers, translators, site administrators, and more.

According to a survey of new developers, Wikimedia, like other open source projects, draws many contributors from the United States, Europe, and India, and is growing in other regions of the world.

Volunteer developers have similar motivations as Wikipedia editors. They join as contributors to support the free knowledge mission, learn and gain new skills, improve the experience of other editors, and so on. One of the volunteer developers from India says, “While I joined as an editor, I started to familiarize myself with the tech behind Wikipedia because there were significantly fewer contributors in the Hindi Wikipedia community who could address our local language needs through technology.”


Between July 2021 and June 2022, looking only at code repositories hosted in Wikimedia’s Gerrit instance, 514 developers contributed 45,621 merged software changes to 1225 repositories. Of these contributions, 48.52% came from outside the Wikimedia Foundation by other organizations and independent developers. Some of these developers are also part of various user groups, chapters, and affiliate bodies working in different regions to promote the use and encourage contributions to Wikimedia projects. These numbers do not include the additional developers who chose to host their code externally instead, or code that is hosted directly on wiki pages, such as gadgets or modules.

Making a difference

Wikipedia is a vast repository of knowledge, available to everyone. In many ways, it’s the embodiment of the original vision of what the internet can and should be: A source of information, understanding, and collaboration.

You can be a part of Wikipedia as a contributor, either by sharing your knowledge in articles, or by helping to build the software that makes it all possible. If you’re interested in joining Wikimedia’s technical community, then explore the resources on our developer site, and learn how to get involved.

Wikipedia embodies the spirit of the original vision of the internet, and you can be a part of it.



Build a Raspberry Pi monitoring dashboard in under 30 minutes

Mon, 03/06/2023 - 16:00
Build a Raspberry Pi monitoring dashboard in under 30 minutes Keyur Paralkar Mon, 03/06/2023 - 03:00

If you’ve ever wondered about the performance of your Raspberry Pi, then you might need a dashboard for your Pi. In this article, I demonstrate how to quickly build an on-demand monitoring dashboard for your Raspberry Pi so you can see your CPU performance, memory, and disk usage in real time, and add more views and actions later as you need them.

If you’re already using Appsmith, you can also import the sample app directly and get started.


Appsmith is an open source, low-code app builder that helps developers build internal apps like dashboards and admin panels easily and quickly. It’s a great choice for your dashboard, and reduces the time and complexity of traditional coding approaches.

For the dashboard in this example, I display usage stats for:

  • CPU
    • Percentage utilization
    • Frequency or clock speed
    • Count
    • Temperature
  • Memory
    • Percentage utilization
    • Percentage available memory
    • Total memory
    • Free memory
  • Disk
    • Percentage disk utilization
    • Absolute disk space used
    • Available disk space
    • Total disk space
Creating an endpoint

You need a way to get this data from your Raspberry Pi (RPi) and into Appsmith. The psutil Python library is useful for monitoring and profiling, and the Flask-RESTful Flask extension creates a REST API.

Appsmith calls the REST API every few seconds to refresh data automatically, and gets a JSON object in response with all desired stats as shown:

{ "cpu_count": 4, "cpu_freq": [ 600.0, 600.0, 1200.0 ], "cpu_mem_avail": 463953920, "cpu_mem_free": 115789824, "cpu_mem_total": 971063296, "cpu_mem_used": 436252672, "cpu_percent": 1.8, "disk_usage_free": 24678121472, "disk_usage_percent": 17.7, "disk_usage_total": 31307206656, "disk_usage_used": 5292728320, "sensor_temperatures": 52.616 }1. Set up the REST API

If your Raspberry Pi doesn’t have Python on it yet, open a terminal on your Pi and run this install command:

$ sudo apt install python3

Now set up a Python virtual environment for your development:

$ python -m venv PiData

Next, activate the environment. You must do this after rebooting your Pi.

$ source PiData/bin/activate
$ cd PiData

To install Flask and Flask-RESTful and dependencies you’ll need later, create a file in your Python virtual environment called requirements.txt and add these lines to it:

flask
flask-restful
gunicorn

Save the file, and then use pip to install them all at once. You must do this after rebooting your Pi.

(PiData)$ python -m pip install -r requirements.txt

Next, create a file named to house the logic for retrieving the RPi’s system stats with psutil. Paste this code into your file:

from flask import Flask
from flask_restful import Resource, Api
import psutil

app = Flask(__name__)
api = Api(app)

class PiData(Resource):
    def get(self):
        return "RPI Stat dashboard"

api.add_resource(PiData, '/get-stats')

if __name__ == '__main__':

Here’s what the code is doing:

  • Use app = Flask(__name__) to define the app that nests the API object.
  • Use Flask-RESTful’s API method to define the API object.
  • Define PiData as a concrete Resource class in Flask-RESTful to expose methods for each supported HTTP method.
  • Attach the resource, PiData, to the API object, api, with api.add_resource(PiData, '/get-stats').
  • Whenever you hit the URL /get-stats, PiData is returned as the response.
2. Read stats with psutil

To get the stats from your Pi, you can use these built-in functions from psutil:

  • cpu_percent, cpu_count, cpu_freq, and sensors_temperatures functions for the percentage utilization, count, clock speed, and temperature of the CPU, respectively. sensors_temperatures reports the temperature of all the devices connected to the RPi. To get just the CPU’s temperature, use the key cpu-thermal.
  • virtual_memory for total, available, used, and free memory stats in bytes.
  • disk_usage to return the total, used, and free stats in bytes.
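One caveat worth checking before relying on the cpu-thermal key: the sensor names psutil reports vary by platform. A small defensive lookup, sketched here with hypothetical function and fallback-key names that are not part of the article's code, avoids a KeyError on other systems:

```python
def cpu_temperature(sensors, preferred=("cpu-thermal", "cpu_thermal", "coretemp")):
    """Return the first reading from the first matching sensor key, or None.

    `sensors` is the dict returned by psutil.sensors_temperatures();
    each value is a list of readings with a .current attribute.
    """
    for key in preferred:
        readings = sensors.get(key)
        if readings:
            return readings[0].current
    return None

# Usage on the Pi (requires psutil):
#   import psutil
#   print(psutil.sensors_temperatures().keys())   # see which keys exist
#   print(cpu_temperature(psutil.sensors_temperatures()))
```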

Combining all of the functions in a Python dictionary looks like this:

system_info_data = {
    'cpu_percent': psutil.cpu_percent(1),
    'cpu_count': psutil.cpu_count(),
    'cpu_freq': psutil.cpu_freq(),
    'cpu_mem_avail': memory.available,
    'cpu_mem_used': memory.used,
    'disk_usage_used': disk.used,
    'disk_usage_percent': disk.percent,
    'sensor_temperatures': psutil.sensors_temperatures()['cpu-thermal'][0].current,
}

The next section uses this dictionary.

3. Fetch data from the Flask-RESTful API

To see data from your Pi in the API response, update to include the dictionary system_info_data in the PiData class:

from flask import Flask
from flask_restful import Resource, Api
import psutil

app = Flask(__name__)
api = Api(app)

class PiData(Resource):
    def get(self):
        memory = psutil.virtual_memory()
        disk = psutil.disk_usage('/')
        system_info_data = {
            'cpu_percent': psutil.cpu_percent(1),
            'cpu_count': psutil.cpu_count(),
            'cpu_freq': psutil.cpu_freq(),
            'cpu_mem_avail': memory.available,
            'cpu_mem_used': memory.used,
            'disk_usage_used': disk.used,
            'disk_usage_percent': disk.percent,
            'sensor_temperatures': psutil.sensors_temperatures()['cpu-thermal'][0].current,
        }
        return system_info_data

api.add_resource(PiData, '/get-stats')

if __name__ == '__main__':

Your script’s ready. Run the script:

$ python
 * Serving Flask app "PiData" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not run this in a production environment.
 * Debug mode: on
 * Running on (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!

You have a working API!
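As a quick client-side check, you can query the endpoint from any machine with Python. This is a sketch, not part of the article's stack: fetch_stats assumes the Flask development server's default address of localhost port 5000 (adjust host and port to match your setup), and format_stats is a hypothetical helper for printing the payload.

```python
import json
import urllib.request

def fetch_stats(base_url=""):
    """Fetch the stats JSON from the Flask endpoint (network request)."""
    with urllib.request.urlopen(f"{base_url}/get-stats") as response:
        return json.load(response)

def format_stats(stats):
    """One-line human-readable summary of the JSON payload."""
    return (f"CPU {stats['cpu_percent']}% | "
            f"mem used {stats['cpu_mem_used'] / 2**30:.2f} GiB | "
            f"disk {stats['disk_usage_percent']}%")

# Usage (requires the server to be running):
#   print(format_stats(fetch_stats()))
```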

4. Make the API available to the internet

You can interact with your API on your local network. To reach it over the internet, however, you must open a port in your firewall and forward incoming traffic to the port made available by Flask. However, as the output of your test advised, the Flask development server is meant for development, not production. To make your API available to the internet safely, use the gunicorn production server, which you installed during the project setup stage.

Now you can start your Flask API. You must do this any time you’ve rebooted your Pi.

$ gunicorn -w 4 'PyData:app'
Serving on

To reach your Pi from the outside world, open a port in your network firewall and direct incoming traffic to the IP address of your PI, at port 8000.

First, get the internal IP address of your Pi:

$ ip addr show | grep inet

Internal IP addresses start with 192.168, 10, or 172.16 through 172.31.

Next, you must configure your firewall. There’s usually a firewall embedded in the router you get from your internet service provider (ISP). Generally, you access your home router through a web browser. Your router’s address is sometimes printed on the bottom of the router, and it begins with either 192.168 or 10. Every device is different, though, so there’s no way for me to tell you exactly what you need to click on to adjust your settings. For a full description of how to configure your firewall, read Seth Kenlon’s article Open ports and route traffic through your firewall.

Alternately, you can use localtunnel to use a dynamic port-forwarding service.

Once you’ve got traffic going to your Pi, you can query your API:

$ curl { "cpu_count": 4, "cpu_freq": [ 600.0, 600.0, 1200.0 ], "cpu_mem_avail": 386273280, ...

If you have gotten this far, the toughest part is over.

5. Repetition

If you reboot your Pi, you must follow these steps:

  1. Reactivate your Python environment with source PiData/bin/activate
  2. Refresh the application dependencies with pip
  3. Start the Flask application with gunicorn

Your firewall settings are persistent, but if you’re using localtunnel, then you must also start a new tunnel after a reboot.

You can automate these tasks if you like, but that’s a whole other tutorial. The final section of this tutorial is to build a UI on Appsmith using the drag-and-drop widgets, and a bit of Javascript, to bind your RPi data to the UI. Believe me, it’s easy going from here on out!

Build the dashboard on Appsmith. Image by:

(Keyur Paralkar, CC BY-SA 4.0)

To get to a dashboard like this, you need to connect the exposed API endpoint to Appsmith, build the UI using Appsmith’s widgets library, and bind the API’s response to your widgets. If you’re already using Appsmith, you can just import the sample app directly and get started.

If you haven’t done so already, sign up for a free Appsmith account. Alternately, you can self-host Appsmith.

Connect the API as an Appsmith datasource

Sign in to your Appsmith account.

  1. Find and click the + button next to QUERIES/JS in the left nav.
  2. Click Create a blank API.
  3. At the top of the page, name your project PiData.
  4. Get your API’s URL. If you’re using localtunnel, use the URL the localtunnel command gave you, and as always append /get-stats to the end for the stat data. Paste it into the first blank field on the page, and click the RUN button.

Confirm that you see a successful response in the Response pane.

Image by:

(Keyur Paralkar, CC BY-SA 4.0)

Build the UI

The interface for Appsmith is pretty intuitive, but I recommend going through the Building your first application on Appsmith tutorial if you feel lost.

For the title, drag and drop a Text, Image, and Divider widget onto the canvas. Arrange them like this:

Image by:

(Keyur Paralkar, CC BY-SA 4.0)

The Text widget contains the actual title of your page. Type in something cooler than “Raspberry Pi Stats”.

The Image widget houses a distinct logo for the dashboard. You can use whatever you want.

Use a Switch widget for a toggled live data mode. Configure it in the Property pane to get data from the API you’ve built.

For the body, create a place for CPU Stats with a Container widget using the following widgets from the Widgets library on the left side:

  • Progress Bar
  • Stat Box
  • Chart

Do the same for the Memory and Disk stats sections. You don’t need a Chart for disk stats, but don’t let that stop you from using one if you can find uses for it.

Your final arrangement of widgets should look something like this:

Image by:

(Keyur Paralkar, CC BY-SA 4.0)

The final step is to bind the data from the API to the UI widgets you have.

Bind data to the widgets

Head back to the canvas and find your widgets in sections for the three categories. Set the CPU Stats first.

To bind data to the Progress Bar widget:

  1. Click the Progress Bar widget to see the Property pane on the right.
  2. Look for the Progress property.
  3. Click the JS button to activate Javascript.
  4. Paste {{ PiData.data.cpu_percent ?? 0 }} in the field for Progress. That code references the stream of data from your API named PiData. Appsmith caches the response data within the .data operator of PiData. The key cpu_percent contains the data Appsmith uses to display the percentage of, in this case, CPU utilization.
  5. Add a Text widget below the Progress Bar widget as a label.
Image by:

(Keyur Paralkar, CC BY-SA 4.0)

There are three Stat Box widgets in the CPU section. Binding data to each one is exactly the same as for the Progress Bar widget, except that you bind a different data attribute from the .data operator. Follow the same procedure, with these exceptions:

  • {{ PiData.data.cpu_freq[0] ?? 0 }} to show clock speed.
  • {{ PiData.data.cpu_count ?? 0 }} for CPU count.
  • {{ PiData.data.sensor_temperatures ?? 0 }} for CPU temperature data.

Assuming all goes to plan, you end up with a pretty dashboard like this one:

Image by:

(Keyur Paralkar, CC BY-SA 4.0)

CPU utilization trend

You can use a Chart widget to display the CPU utilization as a trend line, and have it automatically update over time.

First, click the widget, find the Chart Type property on the right, and change it to LINE CHART. To see a trend line, store cpu_percent in an array of data points. Your API currently returns this as a single data point in time, so use Appsmith’s storeValue function (an Appsmith-native implementation of a browser’s setItem method) to get an array.

Click the + button next to QUERIES/JS, create a new JavaScript object, and name it utils.

Paste this Javascript code into the Code field:

export default {
    getLiveData: () => {
        //When switch is on:
        if (Switch1.isSwitchedOn) {
            setInterval(() => {
                let utilData = appsmith.store.cpu_util_data;
                storeValue("cpu_util_data", [...utilData, { x: PiData.data.cpu_percent, y: PiData.data.cpu_percent }]);
            }, 1500, 'timerId')
        } else {
            clearInterval('timerId');
        }
    },
    initialOnPageLoad: () => {
        storeValue("cpu_util_data", []);
    }
}

To initialize the Store, you’ve created a JavaScript function in the object called initialOnPageLoad, and you’ve housed the storeValue function in it.

You initialize cpu_util_data as an empty array with storeValue("cpu_util_data", []);. This function runs on page load.

So far, the code stores one data point from cpu_util_data in the Store each time the page is refreshed. To store an array, you use the x and y subscripted variables, both storing values from the cpu_percent data attribute.

You also want this data stored automatically by a set interval between stored values. When the function setInterval is executed:

  1. The value stored in cpu_util_data is fetched.
  2. The API PiData is called.
  3. cpu_util_data is updated as x and y variables with the latest cpu_percent data returned.
  4. The value of cpu_util_data is stored in the key utilData.
  5. Steps 1 through 4 are repeated if and only if the function is set to auto-execute. You set it to auto-execute with the Switch widget, which explains why there is a getLiveData parent function.
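The steps above amount to a simple append-on-tick pattern. Here is the same logic sketched in Python, with a plain dict standing in for Appsmith's key-value Store; the names store and on_tick are illustrative, not part of Appsmith:

```python
# What initialOnPageLoad sets up: an empty history under the store key.
store = {"cpu_util_data": []}

def on_tick(store, cpu_percent):
    """One setInterval iteration: append the newest reading to the history."""
    history = store["cpu_util_data"]
    # Mirrors the JS spread: a new array with the latest {x, y} point appended.
    store["cpu_util_data"] = history + [{"x": cpu_percent, "y": cpu_percent}]

# Simulate three ticks' worth of cpu_percent readings:
for reading in (1.8, 2.4, 3.1):
    on_tick(store, reading)

# store["cpu_util_data"] now holds three points, oldest first.
```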

Navigate to the Settings tab to find all the parent functions in the object and set initialOnPageLoad to Yes in the RUN ON PAGE LOAD option.

Image by:

(Keyur Paralkar, CC BY-SA 4.0)

Now refresh the page for confirmation.

Return to the canvas. Click the Chart widget and locate the Chart Data property. Paste the binding {{ appsmith.store.cpu_util_data }} into it. This populates your chart if you run the utils object yourself a few times. To run this automatically:

  1. Find and click the Live Data Switch widget in your dashboard’s title.
  2. Look for the onChange event.
  3. Bind it to {{ utils.getLiveData() }}. The Javascript object is utils, and getLiveData is the function that activates when you toggle the Switch on, which fetches live data from your Raspberry Pi. But there’s other live data, too, so the same switch works for them. Read on to see how.
Bind all the data

Binding data to the widgets in the Memory and Disk sections is similar to how you did it for the CPU Stats section.

For Memory, bindings change to:

  • {{ (PiData.data.cpu_mem_used / PiData.data.cpu_mem_total) * 100 ?? 0 }} for the Progress Bar.
  • {{ (PiData.data.cpu_mem_total / 1073741824).toFixed(2) ?? 0 }} GB, {{ (PiData.data.cpu_mem_avail / 1073741824).toFixed(2) ?? 0 }} GB, and {{ (PiData.data.cpu_mem_free / 1073741824).toFixed(2) ?? 0 }} GB for the three Stat Box widgets.

For Disk, bindings on the Progress Bar, and Stat Box widgets change respectively to:

  • {{ PiData.data.disk_usage_percent ?? 0 }}
  • {{ (PiData.data.disk_usage_total / 1073741824).toFixed(2) ?? 0 }} GB
  • {{ (PiData.data.disk_usage_used / 1073741824).toFixed(2) ?? 0 }} GB and {{ (PiData.data.disk_usage_free / 1073741824).toFixed(2) ?? 0 }} GB for the three Stat Box widgets.

The Chart here requires updating the utils object you created for CPU Stats: add a storeValue key called disk_util_data, nested under getLiveData, that follows the same logic as cpu_util_data and the CPU utilization trend chart.

export default {
    getLiveData: () => {
        //When switch is on:
        if (Switch1.isSwitchedOn) {
            setInterval(() => {
                const cpuUtilData = appsmith.store.cpu_util_data;
                const diskUtilData = appsmith.store.disk_util_data;
                storeValue("cpu_util_data", [...cpuUtilData, { x: PiData.data.cpu_percent, y: PiData.data.cpu_percent }]);
                storeValue("disk_util_data", [...diskUtilData, { x: PiData.data.disk_usage_percent, y: PiData.data.disk_usage_percent }]);
            }, 1500, 'timerId')
        } else {
            clearInterval('timerId');
        }
    },
    initialOnPageLoad: () => {
        storeValue("cpu_util_data", []);
        storeValue("disk_util_data", []);
    }
}

Here's a visualization of the data flow when the Switch toggles live data on and off through the utils JavaScript object:

Image by:

(Keyur Paralkar, CC BY-SA 4.0)

Toggled on, the charts change like this:

Image by:

(Keyur Paralkar, CC BY-SA 4.0)


Pretty, minimalistic, and totally useful.


As you get more comfortable with psutil, JavaScript, and Appsmith, I think you'll find you can tweak your dashboard easily and endlessly to do really cool things like:

  • See trends from the previous week, month, quarter, year, or any custom range that your RPi data allows
  • Build an alert bot for threshold breaches on any stat
  • Monitor other devices connected to your Raspberry Pi
  • Extend psutil to another computer with Python installed
  • Monitor your home or office network using another library
  • Monitor your garden
  • Track your own life habits

Until the next awesome build, happy hacking!

Use Python to make an API for monitoring your Raspberry Pi hardware and build a dashboard with Appsmith.

Image by:

Internet Archive Book Images. Modified by CC BY-SA 4.0

Raspberry Pi

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Create templates for your video graphics with Inkscape

Sat, 03/04/2023 - 16:00
Create templates for your video graphics with Inkscape mairin Sat, 03/04/2023 - 03:00

Recently, I recorded a 15-minute tutorial with supporting materials on how to automate graphics production in Inkscape. I demonstrated this by building a base template and automatically replacing various text strings in the file from a CSV using the Next Generator Inkscape extension by Maren Hachmann. In case you'd rather read instead of watching a video, you can read the accompanying article, How I automate graphics creation with Inkscape.

Based on popular demand from that tutorial, I created a more advanced tutorial that expands upon the last one. It demonstrates how to automate image replacement and changing colors using the same method.

You can watch it on the Fedora Design Team Linux Rocks PeerTube channel or the embedded YouTube video below:

In this article, I provide some context for how this tutorial is useful. I also include a very high-level summary of the content in the video in case you'd rather skim text and not watch a video.

Conference talk card graphics

The background for this tutorial continues from the original tutorial: For each Flock/Nest conference, you need a graphic for each talk for the online platform used to host the virtual conference. There are usually 50 or more talks for large events. That's a lot of graphics to produce manually.

With this tutorial, you learn how to make a template like this in Inkscape:

Image by:

(Máirín Duffy, CC BY-SA 4.0)

And a CSV file like this:

ConferenceName, TalkName, PresenterNames, TrackNames, BackgroundColor1, BackgroundColor2, AccentColor, Photo
BestCon, The Pandas Are Marching, Beefy D. Miracle, Exercise, 51a2da, 294172, e59728, beefy.png
Fedora Nest, Why Fedora is the Best Linux, Colúr and Badger, The Best Things, afda51, 0d76c4, 79db32, colur.png
BambooFest 2022, Bamboo Tastes Better with Fedora, Panda, Panda Life, 9551da, 130dc4, a07cbc, panda.png
AwesomeCon, The Best Talk You Ever Heard, Dr. Ver E. Awesome, Hyperbole, da51aa, e1767c, db3279, badger.png

You can combine them to generate one graphic per row in the CSV, where the background color of the slide, the background color of the track name, the speaker headshot background, and the speaker headshot image change accordingly:

Image by:

(Máirín Duffy, CC BY-SA 4.0)

There are many things you can use this technique for. You can use it to create consistent cover images for your channel videos. You can even use it to create awesome banners and graphics for Fedora as a member of the Fedora Design Team!

Install the Inkscape Next Generator extension

The first step to creating these is to install the Next Generator extension for Inkscape created by Maren Hachmann:

  1. Go to the website and download the next_gen.inx and next_gen.py files from the top level of the repo.
  2. Then go into the Edit > Preferences > System dialog in Inkscape. Search for the User Extensions directory listing and click the Open icon next to it. Drag the .inx and .py files into that folder.
  3. Finally, you should close all open Inkscape windows and restart Inkscape. The new extension is under the Extensions menu: Extensions > Export > Next Generator.
Create the template

Each header of your CSV file (in my example: ConferenceName, TalkName, PresenterNames) is a variable you can place in an Inkscape file that serves as your template. Take a look at the example SVG template file for directions. If you want the TalkName to appear in your template, create a text object in Inkscape and put the following content into it:

%VAR_TalkName%
When you run the extension, the %VAR_TalkName% text is replaced with the TalkName listed for each row of the CSV. So for the first row, %VAR_TalkName% is replaced with the text The Pandas Are Marching for the first graphic. For the second graphic, the TalkName is Why Fedora is the Best Linux. You continue doing this until you get to the TalkName column for each graphic.
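To make the mechanism concrete, here is a rough Python sketch of the same find-and-replace loop. It is purely illustrative; the extension does the real work on actual SVG and CSV files, and the template and CSV strings here are cut-down versions of the examples above:

```python
import csv
import io

# Cut-down template and CSV based on the examples in this article.
template = "<svg><text>%VAR_TalkName%</text><text>%VAR_PresenterNames%</text></svg>"

csv_text = """ConferenceName,TalkName,PresenterNames
BestCon,The Pandas Are Marching,Beefy D. Miracle
Fedora Nest,Why Fedora is the Best Linux,Colúr and Badger"""

graphics = []
for row in csv.DictReader(io.StringIO(csv_text)):
    svg = template
    for header, value in row.items():
        # Each %VAR_Header% placeholder is replaced with that row's value.
        svg = svg.replace(f"%VAR_{header}%", value)
    graphics.append(svg)

print(graphics[0])
```

One graphic per CSV row falls out of the loop, which is exactly the behavior described above.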

Extend the template for color changes

There's not much you have to do for color changes except decide what colors you want to change. You can come up with field names for them in your CSV, and pick out colors for each row of your CSV. In my example CSV, there are two colors of the background gradient that change (BackgroundColor1 and BackgroundColor2) and an accent color (AccentColor) that is used to color the conference track name background lozenge as well as the outline on the speaker headshot:

BackgroundColor1, BackgroundColor2, AccentColor
51a2da, 294172, e59728
afda51, 0d76c4, 79db32
9551da, 130dc4, a07cbc
da51aa, e1767c, db3279

Change only certain items of the same color

There is one trick you need if the same color should change in some parts of the image but stay the same in others.

The way color changes work in Next Generator is a simple find-and-replace mechanism. So when you tell Next Generator in Inkscape to replace the color code #ff0000 (which is in the sample template, and what I like to call obnoxious red) with some other color (let's say #aaaa00), it replaces every single object in the file that has #ff0000 as a color with the new value, #aaaa00.

If you want just the conference track name background's red to change color, but you want the border around the speaker's headshot to stay red in all of the graphics, there's a little trick to achieve this. Use the HSV controls in the Fill and Stroke dialog in Inkscape to nudge the red item you don't want changed by just one notch. Changing it to #fa0000 gives it a different hex value for its color code. Then anything with #ff0000 changes color according to the values in your CSV, while anything with #fa0000 stays red, unaffected by the color replacement mechanism.
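The selectivity of this trick is easy to verify. A minimal Python sketch, using a made-up SVG fragment rather than the actual template:

```python
# Simple find-and-replace, the way Next Generator does it: only exact
# matches of the old hex value change, so the nudged #fa0000 object is
# untouched.
svg = '<rect fill="#ff0000"/><circle stroke="#fa0000"/>'

recolored = svg.replace("ff0000", "aaaa00")

print(recolored)  # <rect fill="#aaaa00"/><circle stroke="#fa0000"/>
```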

Now a couple of things to note about color codes:

  • Do not use # in the CSV or the JSON (more on the JSON below) for these color values.
  • Only use the first six "digits" of the hex color code. Inkscape by default includes eight; the last two are the alpha channel/opacity value for the color. (But wait, how do you use different opacity color values here then? You might be able to use an inline stylesheet that changes the fill-opacity value for the items you want transparency on, but I have not tested this yet.)
Extending the template for image changes

First, you want to add "filler" images to your template. Do this by linking them. Do not embed them when you import them into Inkscape. I linked one in the template: photo.png.

Then, prep the CSV for the color changes and for the image changes. You need to come up with field names for any images you want to be swapped in your CSV. You can list the image filenames you want to replace in each row of your CSV. In the example CSV, you have just one image with a field name of "Photo":

Photo
beefy.png
colur.png
panda.png
badger.png

Note that the images as listed in the CSV are just filenames. I recommend placing these files in the same directory as your template SVG file. Then you won't have to worry about specifying specific file paths. This makes your template more portable (tar or zip it up and share).

Build the JSON for the Next Generator dialog

The final (and trickiest) bit of getting this all to work is to write some JSON-formatted key-value pairs for Next Generator. This maps the colors and images present in the template file to the field names and column headers in your CSV file, so the extension knows what goes where.

Here is the example JSON I used:


Where did I come up with those color codes for the JSON? They are all picked from the template.svg file. The color code 51a2da is the lighter blue color in the circular gradient in the background and  294172 is the darker blue towards the bottom of the gradient. The color code ff0000 (obnoxious red) is the color border around the speaker headshot and the background lozenge color behind the track name.
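Based on the values just described, a snippet of this shape works. Treat it as a sketch: the mapping direction (each template value to the CSV column that replaces it) follows my reading of the Next Generator documentation, so double-check it against the extension's README:

```json
{
  "51a2da": "BackgroundColor1",
  "294172": "BackgroundColor2",
  "ff0000": "AccentColor",
  "photo.png": "Photo"
}
```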

Where did the photo.png filename come from? That's the name of the filler image I used for the headshot placement. If you're in Inkscape and not sure of the filename of the image you're using, right-click it, select Image Properties, and check the value in the URL field of the sidebar that pops up.

Run the Generator

Once your template is ready, simply run the Next Generator extension by loading your CSV into it. Select which variables (header names) you want to use in each file name, and copy-paste your JSON snippet into the Non-text values to replace field of the dialog:

Image by:

(Máirín Duffy, CC BY-SA 4.0)


Hit apply and enjoy!

Troubleshoot color and image replacement issues

I leave you with some hard-won knowledge on how to troubleshoot when color and/or image replacement is not working:

  • Image names are just the filename. Keep the images in the same directory as your template. You do not need to use the full file path. (This will make your templates more portable since you can tar or zip up the directory and share it.)
  • Image names, color values, and variable names in the spreadsheet do not need any " or ' unless you need to escape a comma (,) character in a text field. Image names, color values, and variable names always need quotes in the JSON.
  • The # character does not precede color values. It won't work if you add it.
  • By default, Inkscape gives you an 8-digit hex value for color codes. The last two digits are the alpha channel/opacity value for the color (ff0000ff is fully opaque bright red). Remove the last two digits so you are using the base 6-digit hex code (the RGB values); otherwise, the color replacement won't work.
  • Check that you have all variable names in the JSON written exactly the same as in the CSV header entries except with " in the JSON (BackgroundColor1 in the CSV is "BackgroundColor1" in the JSON.)
  • Use the filename for the default image you are replacing in the template. You do not use the ObjectID or any other Inkscape-specific identifier for the image. Also, link the image instead of embedding it.
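Cleaning up those color codes is easy to automate. A hedged Python helper (to_rgb_hex is a name invented here, not part of any tool):

```python
def to_rgb_hex(color: str) -> str:
    """Normalize a color code for the CSV/JSON: no '#', no alpha digits."""
    color = color.lstrip("#")  # the '#' must not appear in the CSV or JSON
    return color[:6]           # drop Inkscape's 2-digit alpha, if present

print(to_rgb_hex("#ff0000ff"))  # ff0000
```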

Level up your conference talks and video templates with these Inkscape tricks.



A trivia vending machine made with a Raspberry Pi

Fri, 03/03/2023 - 16:00
A trivia vending machine made with a Raspberry Pi pshapiro Fri, 03/03/2023 - 03:00

As an educator working at a public library, I keep my eyes peeled for interesting uses of the Raspberry Pi. From where I sit, the Trivia Vending Machine project out of Dallas, Texas, is one of the most creative and interesting uses of these amazing devices. Using a Raspberry Pi to replace the coin box on a food vending machine is a stroke of genius by Greg Needel and his team. The potential uses of this idea are far-reaching. Check out this short YouTube video to see the Trivia Vending Machine in action.

The original Trivia Vending Machine focused on science questions, but you could build a Trivia Vending Machine with any questions—history, civics, literature, and so on. The most engaging use is to encourage students to write their own questions—and answer each other's questions. And consider this: Instead of disbursing food, the vending machine could disburse coupons to local businesses. One way I earn a living is by teaching guitar lessons, and I'd gladly donate a guitar lesson as a coupon for a Trivia Vending Machine. However, a student must rack up a suitable number of points to earn one of my guitar lessons.

Stretch your imagination a little further. Would it be possible to have logic puzzles for students to solve to get food (or coupons) from the vending machine? Yes, that would not be difficult to create. Maybe Sudoku puzzles, Wordle, KenKen, Sokoban, or any other puzzle. Students could play these puzzles with a touch screen. How about chess? Sure, students could solve chess puzzles to get food (or coupons).

Did you notice in the video that the original Trivia Vending Machine is large and heavy? Designing a smaller one—perhaps one-third the size that fits on a rolling cart—could make for easier transport between schools, libraries, museums, and maker faires.

The inside of a Trivia Vending Machine is composed of stepper motors. You can buy these used on the web. A web search for "used vending machine motors" turns up the Vending World and the VendMedic websites.


If you are a member of a makerspace, tell your fellow members about the Trivia Vending Machine. It's an open invention, not patented, so anyone can build it. (Thank you, Greg Needel.) I imagine the coding for such a device is not too difficult. It would be lovely if someone could create a GitHub repository of such code—and maybe some accompanying explanatory screencasts.

Although the Trivia Vending Machine did not win an award in the Red Bull Creations contest, this invention is still award-worthy. Someone should track down Greg Needel and give him a suitable prize. What should that award look like? It might look like $25k or $50k. I say three cheers for Greg Needel and his creative team. They took the Raspberry Pi in the direction that the inventors of this computer intended—a tinkerer's delight. Bold and beautiful. Bold, beautiful, and open. Could you ask for anything more?

One last thing. The Trivia Vending Machine was created several years ago with an early Raspberry Pi model. Current Raspberry Pi computers are much faster and more responsive. So, any lags in the interaction you notice in the above-mentioned video no longer exist on today's Raspberry Pi models.

Oh, I want one of those candy bars so bad. I'm smacking my lips together. Remind me; how many points do I need to earn to get a Snickers bar? Whatever it takes. I'll do whatever it takes.

Using a Raspberry Pi to replace the coin box on a food vending machine is a stroke of genius.


How I automate graphics creation with Inkscape

Fri, 03/03/2023 - 16:00
How I automate graphics creation with Inkscape mairin Fri, 03/03/2023 - 03:00

I recorded a 15-minute tutorial demonstrating how to automate the production of graphics from a CSV file or spreadsheet (basically a mail merge for graphics) in Inkscape. It uses the Next Generator Inkscape extension from Maren Hachmann.

You can watch it on the Fedora Design Team Linux Rocks PeerTube channel (PeerTube is open source!) or the embedded YouTube video below:

In this article, I provide some context for how this tutorial is useful. I also include a very high-level summary of the content in the video in case you'd rather skim text and not watch a video.

Conference talk card graphics

Each Flock/Nest needs a graphic for each talk for the online platform you use to host a virtual conference. There are usually about 50 or more talks for large events like this. That's a lot of graphics to produce manually.

With this tutorial, you learn how to make a template like this in Inkscape:

Image by:

(Máirín Duffy, CC BY-SA 4.0)

And a CSV file like this:

ConferenceName, TalkName, PresenterNames
BestCon, The Pandas Are Marching, Beefy D. Miracle
Fedora Nest, Why Fedora is the Best Linux, Colúr and Badger
BambooFest 2022, Bamboo Tastes Better with Fedora, Panda
AwesomeCon, The Best Talk You Ever Heard, Dr. Ver E. Awesome

Combine them to generate one graphic per row in the CSV, like so:

Image by:

(Máirín Duffy, CC BY-SA 4.0)

Conference graphics are a good example of how you can apply this tutorial. You could also use it to generate business cards (it outputs a PDF), personalized birthday invitations, personalized graphics for students in your classroom (like student name cards for their desks), and signage for your office. You can use it to create graphics for labeling items, too. You can even use it to create awesome banners and graphics for Fedora as a member of the Fedora Design Team! There are a ton of possibilities for how you can apply this technique, so let your imagination soar.

The Inkscape Next Generator extension

The first step to create these images is to install the Next Generator extension for Inkscape created by Maren Hachmann:

  1. Go to the website and download the next_gen.inx and next_gen.py files from the top level of the repo.
  2. Then go into the Edit > Preferences > System dialog in Inkscape. Search for the User Extensions directory listing and click the Open icon. Drag the .inx and .py files into that folder.
  3. Finally, you should close all open Inkscape windows and restart Inkscape. The new extension is under the Extensions menu: Extensions > Export > Next Generator.
Create a template

Each header of your CSV file (in my example: ConferenceName, TalkName, PresenterNames) is a variable you can place in an Inkscape file that serves as your template. Take a look at the example SVG template file for directions. If you want the TalkName to appear in your template, create a text object in Inkscape and put the following content into it:

%VAR_TalkName%
When you run the extension, the %VAR_TalkName% text is replaced with the TalkName listed for each row of the CSV. So for the first row, %VAR_TalkName% is replaced with the text The Pandas Are Marching for the first graphic. For the second graphic, the TalkName is Why Fedora is the Best Linux. You continue doing this until you get to the TalkName column for each graphic.

Run the generator

Once your template is ready, run the Next Generator extension by loading your CSV. Then, select which variables (header names) you want to use in each file name and hit the Apply button.

In a future article, I will provide a tutorial on more advanced use of this extension, like changing colors and graphics included in each file.

This article was originally published on the author's blog and has been republished with permission.

Follow along this Inkscape tutorial to create conference talk card graphics at scale.

Image by:

Ray Smith


How Web3 and a mesh organizational design can turn challenges into opportunities

Thu, 03/02/2023 - 16:00
How Web3 and a mesh organizational design can turn challenges into opportunities jenkelchner Thu, 03/02/2023 - 03:00

We're in a new era (or at least the early days of a new chapter)—not just a new period in our technological history but also a new paradigm for how people work and contribute to solving problems. One significant challenge I've found in working with leaders is that most organizations are not designed to adapt—let alone thrive—in this new era.

With the rapid emergence of Web3 technologies and the rise of open source software as the basis for these advances, I see multiple challenges every organization can turn into epic opportunities immediately. I detail these in my recently published book, Mesh. In this article, I'll offer a quick overview of three of the most distinct opportunities: Reliance on distributed structures rather than decentralized ones, trapped and untapped value, and the emergence of Web3.

What is Web3?

Many people have preconceived notions of what Web3, or features like blockchain, is or isn't. Web3 refers to the next generation of the internet, which is decentralized and enables more direct, secure, and private interactions between users without intermediaries. Instead of relying on centralized systems like companies or governments, Web3 uses technology such as blockchain to create a network of peers who can transact and exchange value directly with each other. Blockchain provides a secure and transparent ledger for recording transactions and tracking data, enabling trust and collaboration. This results in a more open, transparent, and fair environment where users have greater control over their data. Simply put, Web3 is a more empowering and equitable internet. The features and technology of Web3 bring new opportunities for organizations to improve insights, strengthen connections and build trust as we transform many aspects of how we work and do business.

Distributed and decentralized Challenge 1: Distributed workforce and systems without an updated organizational model

"Distributed" and "decentralized" are often used interchangeably, but they have slightly different meanings.

"Distributed" refers to the distribution or spread of resources, tasks, or functions across multiple locations or devices. This can include movements like distributed computing, where a single task is broken down and executed across multiple computers, or distributed systems, where different system components are located in different places and communicate over a network, though still coordinated by some central actor or authority.

"Decentralization," on the other hand, refers to relocating power and authority away from a central point or hierarchy. This approach can apply to many different types of systems where power is channeled among various branches or levels of organizational systems where decision-making is allocated across varying levels of management, contributors, or community members rather than concentrated in a single entity. Decentralization can increase the speed of decision-making and action.

The fact that a system is distributed does not necessarily make it decentralized, yet many organizations still wonder why distributing their resources and actors doesn't lead to performance gains. For example, think of the turn now to a "distributed workforce," which refers to a group of individuals working remotely or from different geographic locations and collaborating using technology and communication tools to achieve a common goal. Workers can be distributed but not decentralized, relying on the same outmoded hierarchical systems and structures that typically slow them down—now simply at a distance.

Opportunity: Improve autonomy, resilience, and adaptability through decentralization

Decentralization may also be a way to increase autonomy, resilience, and adaptability in organizations, networks, or systems. By distributing decision-making and power across multiple actors, decentralization allows for more local and diverse perspectives. It can reduce the risk of a single point of failure, making the overall system more robust and adaptable to change.

Trapped and untapped value Challenge 2: When trapped and untapped value exists, it can limit the growth and development of the organization

In the early days of my consulting career, I began to use the phrase "trapped value" when doing strategic analysis work. Since then, my understanding of the concept has evolved; I now see value in organizations as trapped and untapped value (TUV). Understanding TUV helps organizations identify barriers to growth in a more nuanced way.

TUV is the value available to an organization that is nevertheless not being used. Trapped value could result from a limitation or barrier that prevents a goal from being realized; untapped value can be overlooked, unseen, or undervalued opportunities and resources.

Organizations rely on a combination of procedures, systems, and workflows to operate. However, as time passes, organizational actors introduce new procedures and systems without considering the previous ones or their potential effect on the organization. These changes can lead to siloed departments, communication breakdowns, and an accumulation of procedures that result in pockets of TUV.

Opportunity: Releasing the regenerative value of capabilities and capacities

Throughout Mesh, my co-author Gráinne Hamilton and I examine burnout, or performance TUV; blackouts, or lack of visibility of skills; and the circuit breakers that prevent optimized deployment of resources through the organization and ecosystem. We also explain how an organization's various "power generators" offer sets of capabilities and capacities for releasing TUV. Doing this can lead to organizational regeneration. An organization has avenues for transforming through continuous cycles of change, remaining relevant and sustainable. With an open, regenerative culture that innovates on how its people contribute, the people's capabilities become more visible, teams become more balanced, and value can be realized in newly beneficial ways.

The emergence of Web3 and its features Challenge 3: It's all so new and overwhelming

Web3 technologies, such as blockchain, smart contracts, and tokenization, are relatively new. While some organizations actively use these technologies, many others are still experimenting with them, exploring their potential benefits, and figuring out how to implement them best.

Blockchain, for example, has been used in supply chain management to increase transparency, reduce costs, and improve security. Some organizations also use blockchain to develop digital identities and create new business models, such as the tokenization of assets. Smart contracts—self-executing contracts with the terms of the agreement written into code—are used in industries such as finance, insurance, and real estate to automate processes and reduce costs.

Many organizations are still evaluating the potential benefits and how to implement these technologies best to grow and scale their business.

Opportunity: Gain insights from data, establish trust, and build the future

Web3 technologies offer organizations a wide range of opportunities to gain insights, improve connections, and establish trust for organizational gain. However, organizations also need to consider how to integrate and take full advantage of these technologies. Decentralization and Web3 also create unparalleled opportunities for rapid organizational and commercial benefits and for building connections and trust. For example, data that can be sourced and verified via blockchain can provide incredible visibility and pose new connections to solutions. Likewise, decentralizing contributions and community can empower relationships across "borders" and boundaries to rapidly solve problems or even maintain ongoing projects, while trustless transactions can enhance organizational trust.

I recommend that organizations prioritize digital transformation initiatives that align with the shift towards Web3 technologies and be mindful of their organizational models, processes, and cultural behaviors to take advantage of this new era.

An open and regenerative organizational culture can optimize the benefits of decentralization

Organizations need to adopt more open and decentralized practices to effectively engage with and maximize the use of Web3 technologies.

Open approaches tend to espouse granularity, flexibility, and reusability. Open source cultural practices organize around transparency, participation, and community. The open organization characteristics, for example, elevate the idea of people coming together to actively participate and co-create in a community.

Looking deeper at Web3, we see that its tenets revolve around transparency, autonomy, and decentralization; they emphasize an infrastructure that enables distributed components that may or may not collaborate to develop outputs but still connect in an organizational context.

We can connect these complementary views on open practices by stressing three tenets: Connection, visibility, and trust (incidentally, this is what my co-author and I do in Mesh). These tenets are designed to bridge the convergences and divergences of open practices and Web3 while also expanding upon them. It's important to be human-centric and prioritize personal agency and psychological safety while at the same time enabling open, regenerative behaviors to foster a mesh organizational design.

Organizations can optimize the benefits of decentralization by catalyzing the components and connections of an open and regenerative culture and recognizing that a mesh structure can connect, work together, create value, and attract others to participate. As a model, mesh allows organizations to make the most of things like data, internal knowledge, and contributions of its people by using, combining, and reworking them. We can manage the distributed and decentralized nature of how we work and collaborate with Web3 solutions such as blockchain, tokenization, and smart contracts. Visualization tools can enable collating, connecting, and clustering data from various sources to provide helpful insights, optimize processes, and ultimately release value.

Unlock the potential of decentralization.


Parallel and distributed computing with Raspberry Pi clusters

Thu, 03/02/2023 - 16:00
Parallel and distributed computing with Raspberry Pi clusters visimpscot2 Thu, 03/02/2023 - 03:00

Since the Raspberry Pi's launch, creators have based countless computer science education projects on the humble pocket-sized system on a chip. These have included many projects exploring low-cost Raspberry Pi clusters to introduce parallel and distributed computing (PDC) concepts.

The UK Open University (OU) provides distance education to students of diverse ages, experiences, and backgrounds, which raises some issues not faced in more traditional universities. The OU experiment using Raspberry Pi clusters to introduce PDC concepts to distance learning students began in 2019 and has been featured in an academic paper but deserves to be known more widely.

The project uses Raspberry Pi clusters based on the OctaPi instructions, released under a Creative Commons Licence by GCHQ. Eight Raspberry Pis are connected in a private network using a router and a switch. One of the Raspberry Pis acts as the lead, while the others are servers providing results back to the lead device. Programs written in Python run on the lead Pi, and the dispy package distributes activities across cores in the cluster.
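The course programs themselves aren't reproduced here, but the division of labor on the lead Pi can be sketched with dispy. The node pattern and the workload below are illustrative assumptions, not the project's actual code:

```python
# Hedged sketch of how the lead Pi can distribute work with dispy; the
# node pattern and the workload are assumptions, not the course's code.
def compute(n):
    # CPU-bound job executed on a server Pi: sum of squares below n
    return sum(i * i for i in range(n))

def run_on_cluster(tasks, nodes="192.168.0.*"):
    import dispy  # needed only on the lead Pi
    cluster = dispy.JobCluster(compute, nodes=[nodes])
    jobs = [cluster.submit(n) for n in tasks]
    results = [job() for job in jobs]  # job() waits for the result
    cluster.close()
    return results

# Without a cluster, the same jobs run serially on one core:
print([compute(n) for n in (10, 100)])  # → [285, 328350]
```

The key point is that the function is shipped to the server Pis once, and each submitted job then runs wherever a core is free.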

Three programs have been developed for the clusters, and you can download them from the Git repository.

Two of the programs are text-based and linked to search problems: the traveling salesperson problem and password hashing. Because they exhaustively search a solution space, these are ideal for teaching PDC concepts. The third program is graphical. The image combiner takes three images as input, each with non-overlapping obstructions. It constructs an unobstructed image by comparing the RGBA values pixel by pixel across the three images and selecting the median.
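The median trick can be sketched in a few lines of Python; for simplicity, each "image" here is a flat list of channel values rather than a real RGBA bitmap:

```python
# Sketch of the median-pixel idea behind the image combiner; each
# "image" here is simplified to a flat list of channel values rather
# than a real RGBA bitmap.
def combine(images):
    # An obstruction appears in at most one of the three images, so the
    # per-position median always comes from two unobstructed values.
    return [sorted(values)[1] for values in zip(*images)]

a = [10, 10, 255, 10]  # obstructed at index 2
b = [10, 255, 10, 10]  # obstructed at index 1
c = [10, 10, 10, 255]  # obstructed at index 3
print(combine([a, b, c]))  # → [10, 10, 10, 10]
```

Because each pixel's median is independent, this is an embarrassingly parallel task that splits cleanly across cluster cores.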

Using the cluster

The Open University is a distance learning institution, so students access the clusters through a web interface. Remote access to the clusters uses the OpenSTEM Labs infrastructure at the university. Ten clusters (eight built with Pi 4, two built with Pi 3B+) are installed into racks, with webcams pointed at each cluster.

The students select which program to run, the number of cores to use, and the parameters for the selected program. As output, they see the time the program takes to run on an individual Raspberry Pi compared to the cluster with the selected number of cores. The student also sees the output from the program: the password hashing result, the shortest and longest traveling salesperson routes, or the non-occluded image.

(Image: Peter Cheer, CC BY-SA 4.0)

A webcam shows a live stream of the cluster. The lead Pi has an LED display to show the program's state as it runs. The webcam makes it clear to students that they are experimenting with real dedicated hardware rather than getting simulated or pre-recorded results.

(Image: Peter Cheer, CC BY-SA 4.0)

Each program has two associated activities illustrating different aspects of program design and PDC operations. One of the main learning points is that parallel and distributed computing can provide significant performance advantages, but at a cost: time and resources are spent dividing and distributing the problem and, in reverse, recombining the results. The second learning point is that efficiency is significantly affected by program design.
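This trade-off can be made concrete with the standard speedup and efficiency measures; the timings below are made-up example numbers, not results from the OU clusters:

```python
# Speedup and parallel efficiency, the standard measures behind the
# learning point above; the timings are made-up example numbers.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, cores):
    # Perfect scaling would give efficiency 1.0; distribution and
    # recombination overhead pushes it below that.
    return speedup(t_serial, t_parallel) / cores

print(speedup(120.0, 20.0))        # → 6.0
print(efficiency(120.0, 20.0, 8))  # → 0.75
```

A run that is 6x faster on 8 cores is only 75% efficient, and comparing such numbers across programs is exactly what the activities encourage.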

Students like it

Currently, the use of the Raspberry Pi clusters is optional. Based on the findings so far, though, students enjoy it and are motivated by having remote access to physical hardware.

One student has said, "It was really interesting to be able to use real clusters instead of having it virtualized."

Another adds, "It was really exciting to be able to actually see a cluster working and see the real effects of working with multiple cores. It was great to be able to try this out for myself, not just to read the theory about it!"

Students use the clusters to undertake learning activities designed to teach the principles of PDC rather than writing and running their own programs. The experience of developing a low-cost Raspberry Pi cluster for distance learning students demonstrates the benefits that remote practical activities can have for teaching PDC concepts and engaging students.

When I asked Daniel Gooch, one of the team members behind the project, about it, he said: "For me, where we differ is that we've taken an existing set of Raspberry Pi instructions and worked on integrating in additional wrap-around material to ensure it can cope with the distance and scale we operate on."

This academic experiment using Raspberry Pi clusters introduces parallel and distributed computing (PDC) concepts to distance learning students.

Image by: Dwight Sipler on Flickr

Raspberry Pi Education

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

3 myths about open source CMS platforms

Wed, 03/01/2023 - 16:00
pierina.wetto | Wed, 03/01/2023 - 03:00

There are two choices when it comes to building a website. You can choose an open source platform like Drupal or WordPress, or a proprietary platform overseen by a company like Adobe or Microsoft. How do you know which is best for your website?

Things to consider:

  • How much user support will I get?

  • Which is better for security?

  • Is the cost within budget?

For organizations with limited budgets, the choice is either an open source site or something less flexible like Wix or Squarespace – the cost attached to a proprietary platform might be out of reach. However, for a large enterprise organization, both approaches have pros and cons worth addressing.

Proprietary platforms can be attractive to many large organizations for several reasons. In addition to promising great platforms customized to the client's business needs, proprietary arrangements typically offer full hosting plans. The company behind the CMS handles all updates, upgrades, security issues, and bugs – often 24/7.

While proprietary software comes with a high price point, there's a sense of justification behind it: at least you get what you pay for.

It's worth noting, though, that many of the world's biggest corporate brands use Drupal as their CMS of choice, including General Electric, Tesla, IBM, Paramount Global, United Airlines, and the Royal Family. The Government of Australia operates on Drupal, as does the Government of Ontario, the Canadian Security Intelligence Service (CSIS), several US state governments, and countless other government agencies around the world.

So, why do organizations that have large budgets for web development opt for an open source platform, despite the supposed advantages touted by proprietary providers?

The answers are numerous, ranging from a need for financial accountability to the supportive nature of the Drupal community. These factors more than make up for any potential shortcomings of the open source model.

This article runs through some popular myths around proprietary and open source platforms that continue to influence decision making.

Myth #1: Proprietary platforms provide better user support

One of the main selling points of proprietary platforms is that their vendors promise 24/7 client support should anything go wrong with the site or should you need anything customized. This 24/7 support comes at a cost. For institutions concerned about sudden emergencies, it is an appealing offering that, for many, justifies the price tag.

What proprietary vendors won't tell you, however, is that open source platforms like Drupal provide much of the same service (typically in tandem with an agency and an infrastructure partner like Acquia or Pantheon). This is provided at no cost through their networks of volunteers and sponsored contributors.

Drupal, for example, is supported by a global community of hundreds of thousands of contributors who work collaboratively to address technical issues and improve the platform.

In the Drupal world, when you find a bug and create a report within the community, the response — while not necessarily instantaneous — is typically fast. While mission-critical sites like government platforms will need to pay somebody to be available for 24/7 support, this broader community support is of enormous benefit to all Drupal users.

Proprietary platforms do have counterparts to this type of community, but they're oftentimes much smaller. Sitecore, for example, advertises that it has a community of 20,000 developers. This is a drop in the bucket compared to the scope of the Drupal developer community.

Myth #2: Proprietary is more secure than open source

This is a stubborn myth — understandably. Open source code, by its nature, is publicly available to anyone, including individuals with malicious intent. In contrast, proprietary platforms keep their codebases under lock and key. The for-profit nature of proprietary vendors gives them a greater (financial) incentive to track down and neutralize bad actors.

The unpopular truth is that proprietary platforms are every bit as vulnerable to attacks as their open source counterparts — if not more so.

For one thing, most security breaches don't come from hackers scouring source code for weak spots, but from avoidable human lapses such as failures to follow security guidelines, improper software setup, use of easy passwords, lack of data validation processes, and absence of data encryption techniques. These lapses are no less likely to occur on a proprietary platform than they are on an open source one.

Paradoxically, the open source nature of platforms like Drupal is actually more of a help than a liability when it comes to cybersecurity. Open source code means that anyone with the know-how can search for and identify vulnerabilities. And with an army of over a million developers contributing behind the scenes, it's safe to say that Drupal takes its security very seriously. Proprietary vendors, by contrast, are limited in this capacity by their cybersecurity staffing numbers.

Myth #3: Proprietary costs more, so you get more value

It's widely believed that when you opt for a less expensive product —in this case, an open source website — you're either settling for a "less-good" quality product or setting yourself up for additional costs down the road in the form of upgrades and modifications. Proprietary websites may cost more at the outset, but at least you know you're getting something of real quality and the costs are predictable.

In truth, there is no difference in quality between open source and proprietary websites. It all depends on the quality of workmanship that goes into building the sites. And while any website project is vulnerable to budget overruns, proprietary platforms are actually more prone to them than open source ones.

When you opt for a proprietary platform, you automatically commit to paying for a license. This may be a one-time cost or a recurring subscription fee. In many cases, proprietary providers charge on a "per-seat" basis, meaning that the larger your team gets, the more expensive maintaining your website becomes. An open source site, by contrast, costs nothing beyond what you spend on design, and is in fact much more predictable from a cost standpoint.
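To make the per-seat scaling concrete, here is a toy cost model; the $1,200/seat/year fee and the team sizes are invented numbers purely for illustration:

```python
# Toy model of the per-seat licensing argument; the fee and the team
# sizes are invented numbers purely for illustration.
def proprietary_cost(seats, per_seat_per_year, years):
    # Licence cost grows linearly with team size and contract length
    return seats * per_seat_per_year * years

def open_source_licence_cost(seats, years):
    # No licence fee; design and development are still paid for
    return 0

for seats in (10, 50, 200):
    print(seats, proprietary_cost(seats, 1200, 3))  # → 36000, 180000, 720000
```

The point isn't the specific dollar figures but the shape of the curve: licence cost scales with headcount, while open source licence cost stays flat at zero.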

This is of particular importance to governments, whose website development and renewal costs are publicly available and subject to intense media scrutiny. The Government of Canada faced negative press after it hired Adobe to restructure a vast swath of federal websites under a single URL. A project originally valued at $1.54 million in 2015 had by the following year ballooned to $9.2 million. While details were scant, some of this budget overrun was attributed to additional staffing requirements. Cue impending doom music.

Websites built on open source platforms like Drupal aren't cheap to develop, but the costs are almost always more predictable. And when it's the taxpayers who are footing the bill, this is a major advantage.

Bonus: Open source = wider talent base

If you're a large government organization with complex web needs, chances are you'll be looking to hire in-house developers. From this standpoint, it makes much more sense to opt for an open source web platform in terms of available talent. The magnitude of the Drupal community relative to, say, Sitecore, means that your LinkedIn search is far more likely to turn up Drupal specialists in your area than Sitecore experts.

Similar disparities exist when it comes to training your staff. Drupal training is widely available and affordable, including customized options. Becoming a licensed developer for something run by Adobe, by contrast, is a much more complex and expensive undertaking.

Why Drupal specifically?

I've touted Drupal extensively throughout this post, as Evolving Web is the home of many Drupal trainers, developers, and experts. However, it's far from the only open source CMS option out there. WordPress remains the world's most popular CMS platform, used by some 43% of the world's websites.

Drupal does, however, stand out from the pack in a number of important ways. The Drupal platform simply has more features and is a lot more supportive of customization than most of its open source competitors. This is perhaps less of a big deal if you're a small business or organization with a narrow area of focus. But government websites are generally complex, high-traffic undertakings responsible for disseminating a wide range of content to a diverse array of audiences.

Other cool government sites are using it

Evolving Web recently redesigned the official website for the City of Hamilton. As the main online hub for Canada's ninth largest municipal area, serving some 800,000 people, the City of Hamilton website caters to a wide range of audiences, from residents and local business people to tourists and foreign investors. Its services run the gamut, enabling residents to plan public transit use, pay property taxes, find employment, apply for a marriage license, and get information on recreational activities, among many other options.

The City of Hamilton site exemplifies many of Drupal's strengths. Like many government websites, it encompasses vast swaths of data and resources and is subject to considerable surges in traffic, both of which Drupal is well equipped to handle. The site revamp also involved corralling various third-party services (including the recreation sign-up and council meeting scheduler) and a half-dozen websites that existed outside of Drupal. This required creative solutions of the sort that the Drupal community excels at developing.

Drupal upholds accessibility standards

A further advantage of Drupal for government websites is that its publishing platform, along with all of its other features and services, is designed to be fully accessible in accordance with WCAG standards. Drupal's default settings ensure accurate interpretation of text by screen readers and provide recommendations for accessible color contrast and intensity. They also generate accessible pictures and forms, and Drupal's core themes incorporate skip navigation.

You are in good company

All this attests to the strengths of the open source model — and of Drupal in particular — underpinned as it is by an army of over a million contributors. Thanks to this, the platform is in a constant state of improvement and innovation, of which every single Drupal user is a beneficiary.

Join the club

At Evolving Web, we specialize in helping organizations harness their online presence with open source platforms like Drupal and WordPress. Let's keep in touch!

Join our inner circle and sign up for our newsletter, where you'll get insider content and hear more about upcoming training programs, webinars and events.

Open source alternatives to proprietary platforms offer benefits for developers and users alike.

Drupal Alternatives

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Use your Raspberry Pi as a streaming server

Wed, 03/01/2023 - 16:00
Erbeck | Wed, 03/01/2023 - 03:00

There are various reasons to stream live video from webcams, and the Raspberry Pi platform is well suited to such applications: it requires little power for continuous use as a live-streaming server. It can take input from a Raspicam camera module, a USB cam, or other network video signals, and it acts as an RTMP, HLS, and SRT server. This article shows how to set up the Raspberry Pi as a streaming server using HLS streaming. All you need in addition is a video source.

You can follow the steps described here even without a Raspberry Pi, and further installation instructions are available for Windows, Linux, and macOS.


The application is datarhei Restreamer, a graphical user interface for the datarhei Core, which runs the well-known media framework FFmpeg under the hood. The easiest way to start with datarhei Restreamer is to install the official Docker container. The pull command downloads and installs the program from Docker Hub automatically, and Restreamer starts immediately after the installation. If you don't have a Raspberry Pi, use one of the other Docker containers on the datarhei Restreamer GitHub page (e.g., AMD64 or GPU Cuda support).

datarhei Restreamer and datarhei Core are both open source software under the Apache License 2.0.

Here's the command for an installation on a Raspberry Pi 3 and above with GPU support:

docker run -d --restart=always --name restreamer \
  -v /opt/restreamer/config:/core/config \
  -v /opt/restreamer/data:/core/data \
  --privileged \
  -p 8080:8080 -p 8181:8181 \
  -p 1935:1935 -p 1936:1936 \
  -p 6000:6000/udp \
  datarhei/restreamer:rpi-latest

Regardless of which command you use, the --privileged option is needed only to access local devices, such as a USB camera.

After installation, connect the Raspberry Pi to the local network. Then open the web-based GUI in a browser by navigating to http://device-ip:8181/ui.
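If the page doesn't load, a quick way to check reachability from another machine on the LAN is a small script like this one. The IP address below is a placeholder for your Pi's address; port 8181 matches the docker command above:

```python
# Quick check that the Restreamer web GUI answers on the LAN.
# The address below is a placeholder; substitute your Pi's IP.
import urllib.request

def ui_reachable(host, port=8181, timeout=3):
    url = f"http://{host}:{port}/ui"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200  # the UI is served at /ui
    except OSError:
        return False  # connection refused, timeout, or DNS failure

print(ui_reachable("192.168.0.50"))  # True once the container is up
```

A False result usually means the container isn't running yet, the IP is wrong, or a firewall is blocking port 8181.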

You should see the following screen:

(Image: Sven Erbeck, CC BY-SA 4.0)

Assign the password, and the system is ready for the first login. A wizard then starts to help you configure the first video source.

Hint: The above Docker command permanently stores the configuration data with the login name and password in the /opt/restreamer/config folder.


The application consists of three logical parts: Video input, system dashboard, and video output. The video input and output run independently of each other.

Video input

The wizard will help you to create a video source right from the start. This can be a USB video source, the Raspberry Pi camera, or a network source like an IP cam or an m3u8 file from a network. HLS, RTMP, and real-time SRT protocol are ready to use. The wizard helps to configure the video resolution and sound correctly. In the last step, you can assign different licenses from Creative Commons. It is worth taking a look at the video signal settings. You will find several options, like transcoding or rotating the video for vertical video platforms.


After successfully creating the video signal, you will land in the dashboard.

(Image: Sven Erbeck, CC BY-SA 4.0)

It is the central starting point for all other settings. To see the program's full functionality, you can switch to expert mode in system preferences.

The dashboard contains the following:

  • Video signal settings.
  • Active content URLs for the RTMP, SRT, and HLS servers, and a snapshot.
  • All active publication services for restreaming.
  • The wizard for creating additional video sources.
  • The system menu.
  • Live statistics for the video signal.
  • Live system monitoring.
Video output

There are different ways to play the video signal.

The publication website is the simplest option: an immediately ready landing page hosted internally by Restreamer. The player page can also transmit to Chromecast and AirPlay. Basic settings, like adjusting the background image and adding a logo to the player, are possible directly in the Restreamer. Those who know HTML can customize the page for themselves, and advanced users can inject code to use the site with external modules like a chat. A statistics module under the video player shows the active viewers and all views, and the Share button supports the distribution of the live stream. HTTPS certificates for the website can be activated with Let's Encrypt without much effort, and with simple port forwarding for HTTPS to the LAN IP of the Raspberry Pi, the website is publicly accessible.

(Image: Sven Erbeck, CC BY-SA 4.0)

The publication services are a great way to restream content. There are numerous ready-made modules for popular websites like YouTube, Twitch, or PeerTube, as well as for other streaming software and popular CDNs. Complete control over the video protocols allows streaming to any RTMP-, HLS-, or SRT-capable destination address. An HTML snippet with the video player can be embedded in web pages.

(Image: Sven Erbeck, CC BY-SA 4.0)

Save power while streaming with Raspberry Pi

This article shows how to turn the Raspberry Pi into a streaming server. The Raspberry Pi platform lets you work with various video signals in a power-saving way. The presets make it easy to configure the server, and advanced users can make further adjustments to the system. You can use it for restreaming, hosting a live stream on a website, or integration into system landscapes with OBS. Support for different video sources and transport protocols offers great flexibility as the basis for a project and makes this system highly customizable. Furthermore, the datarhei Core with FFmpeg makes it easy for software developers to extend all application processes.

The program turns the Raspberry Pi into a dedicated streaming server. Depending on your internet upload, you can live stream to websites or multi-stream to different video networks independently and without an additional video provider.

Before installing, you can test a fully functional demo on the project website with the login name admin and password demo.

Stream live video from webcams with a Raspberry Pi and restream videos to social networks.

Raspberry Pi

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.