opensource.com


How I scan family photos on Linux

Sun, 04/17/2022 - 15:00
By Alan Formy-Duval

Linux isn't just something that runs on servers and powers the internet. It's a safe place for your data, your family history and memories, your work and your play. In other words, it's a platform for real life.

Case in point: Right now I'm in the middle of a project scanning old family photos. I have been using Fedora Linux with the GNOME desktop for a few years, so I didn't have to install any additional software packages. I just plug my scanner into the USB port, start up the scanning software (Document Scanner), and I'm good to go. Keep reading to see how I did it.

Digitizing memories

Many people are interested in learning more about their family history, ancestry, and legacy. With the technology now available, digitizing old artifacts has become a common practice. Whether it's your 1980s cassette collection, high school artwork, or old family photos, putting them into a digital format is a modern method of preservation and future proofing.

My mom recently gave me some photos of some of my ancestors, so I have several images that I want to preserve. Scanning them not only provides a certain sense of permanence but also allows me to manipulate them in ways that were unheard of in the era when they were captured. For instance, I have a photo of my grandfather, who unfortunately passed away a few years before my birth. By digitizing his photo, I can zoom in, get to know him, and maybe relate to him in a way that otherwise would be impossible.

Workflow

The first thing to do is plug my Canon scanner into the USB port. When I open Document Scanner, it detects my Canon LiDE 210 scanner. Next, I place the photo onto the flatbed scanner. I adjust the settings for 2400 DPI image resolution to ensure I capture every detail.

Then I click scan. At this resolution the scan may take a while, but once it is complete, I can crop the image as needed and save it.

By the way, as I scan my photos and write this article, I'm also enjoying some of my favorite music with Clementine, an open source audio player—on the same computer. Performance hit? Not a bit!

Once scanning is complete, I can save the cropped image as a PDF, JPG, or whatever format I choose.

Real life

Allow me to introduce my grandfather and my Uncle George, circa 1944. George was a World War II veteran who saw action in Europe battling the Nazis. My grandfather, on the right, was the foreman of a southeastern North Carolina lumber mill. While he didn't see the battlefield, he was in charge of captured Nazi POWs assigned to work at his mill. He described them as young boys who just wanted to go home to their families.

Image by: Alan Formy-Duval, CC BY-SA 4.0

Final thoughts

As a dedicated Linux desktop user, I sometimes hear people say they don't use Linux because there are certain tasks it can't perform. Linux is all I use, and I haven't had that problem for roughly 14 years and counting. Whether you're looking for a pleasant pastime or a way to be more productive, there's likely a solution for you that runs on Linux.

With Linux, I can connect with my ancestors in unexpected ways.


How the C programming language has grown

Fri, 04/15/2022 - 15:00
By Jim Hall

The C programming language will turn fifty years old in 2022. Yet despite its long history, C still ranks near the top of surveys of the most popular and most-used programming languages. For example, check out the TIOBE Index, which tracks the popularity of different programming languages. Many Linux applications are written in C, including the GNOME desktop.

I interviewed Brian Kernighan, co-author (with Dennis Ritchie) of The C Programming Language book, to learn more about the C programming language and its history.

Where did the C programming language come from?

C is an evolution of a sequence of languages intended for system programming—that is, writing programs like compilers, assemblers, editors, and ultimately operating systems. The Multics project at MIT, with Bell Labs as a partner, planned to write everything in a high-level language (a new idea at the time, roughly 1965). They were going to use IBM's PL/1, but it was very complicated, and the promised compilers didn't arrive in time.

After a brief flirtation with a subset called EPL (by Doug McIlroy of Bell Labs), Multics turned to BCPL, a much simpler and cleaner language designed and implemented by Martin Richards of Cambridge, who I think was visiting MIT at the time. When Ken Thompson started working on what became Unix, he created an even simpler language, based on BCPL, that he called B. He implemented it for the PDP-7 used for the first proto-Unix system in 1969.

BCPL and B were both "typeless" languages; that is, they had only one data type, integer. The DEC PDP-11, which arrived on the scene in about 1971 and was the computer for the first real Unix implementation, supported several data types, notably 8-bit bytes as well as 16-bit integers. For that, a language that also supported several data types was a better fit. That's the origin of C.


How was C used within Bell Labs and the early versions of Unix?

C was originally used only on Unix, though after a while, there were also C compilers for other machines and operating systems. Mostly it was used for system-programming applications, which covered quite a spectrum of interesting areas, along with a lot of systems for managing operations of AT&T's telephone network.

What was the most interesting project written in C at Bell Labs?

Arguably, the most interesting, memorable, and important C program was the Unix operating system itself. The first version of Unix in 1971 was in PDP-11 assembly language, but by the time of the fourth edition, around 1973, it was rewritten in C. That was truly crucial since it meant that the operating system (and all its supporting software) could be ported to a different kind of computer basically by recompiling everything. Not quite that simple in practice, but not far off.

You co-authored The C Programming Language book with Dennis Ritchie. How did that book come about, and how did you and Dennis collaborate on the book?

I had written a tutorial on Ken Thompson's B language to help people get started with it. I upgraded that to a tutorial on C when it became available. And after a while, I twisted Dennis's arm to write a C book with me. Basically, I wrote most of the tutorial material, except for the system call chapter, and Dennis had already written the reference manual, which was excellent. Then we worked back and forth to smooth out the tutorial parts; the reference manual stayed pretty much the same since it was so well done from the beginning. The book was formatted with the troff formatter, one of many tools on Unix, and I did most of the formatting work.

When did C become a thing that other programmers outside of Bell Labs used for their work?

I don't really remember well at this point, but I think C mostly followed along with Unix for the first half dozen years or so. With the development of compilers for other operating systems, it began to spread to other systems besides Unix. I don't recall when we realized that C and Unix were having a real effect, but it must have been in the mid to late 1970s.

Why did C become such an influential programming language?

The primary reason in the early days was its association with Unix, which spread rapidly. If you used Unix, you wrote in C. Later on, C spread to computers that might not necessarily run Unix, though many did because of the portable C compiler that Steve Johnson wrote. The workstation market, with companies like Sun Microsystems, MIPS (which became SGI), and others, was enabled by the combination of Unix and C. The IBM PC came somewhat later, about 1982, and C became one of the standard languages, under MS-DOS and then Windows. And today, most Internet of Things (IoT) devices will use C.

C remains a popular programming language today, some 50 years after its creation. Why has C remained so popular?

I think C hit a sweet spot with efficiency and expressiveness. In earlier times, efficiency really mattered since computers were slow and had limited memory compared to what we are used to today. C was very efficient, in the sense that it could be compiled into efficient machine code, and it was simple enough that it was easy to see how to compile it. At the same time, it was very expressive, easy to write, and compact. No other language has hit that kind of spot quite so well, at least in my humble but correct opinion.

How has the C programming language grown or changed over the years?

C has grown modestly, I guess, but I haven't paid much attention to the evolving C standards. There are enough changes that code written in the 1980s needs a bit of work before it will compile, but it's mostly related to being honest about types. Newer features like complex numbers are perhaps useful, but not to me, so I can't make an informed comment.

What programming problems can be solved most easily in C?

Well, it's a good language for anything, but today, with lots of memory and processing power, most programmers are well served by languages like Python that take care of memory management and other more high-level constructs. C remains a good choice for lower levels where squeezing cycles and bytes still matter.

C has influenced other programming languages, including C++, Java, Go, and Rust. What are your thoughts on these other programming languages?

Almost every language is in some ways a reaction to its predecessors. To over-simplify a fair amount, C++ adds mechanisms to control access to information, so it's better than C for really large programs. Java is a reaction to the perceived complexity of C++. Go is a reaction to the complexity of C++ and the restrictions of Java. Rust is an attempt to deal with memory management issues in C (and presumably C++) while coming close to C's efficiency.

They all have real positive attributes, but somehow no one is ever quite satisfied, so there will always be more languages that, in their turn, react to what has gone before. At the same time, the older languages, for the most part, will remain around because they do their job well, and there's an embedded base where they are perfectly fine, and it would be infeasible to reimplement in something newer.

Thanks to Brian for sharing this great history of the C programming language!

Would you like to learn C programming? Start with these popular C programming articles from the last year: 5 ways to learn the C programming language in 2022.


My favorite build options for Go

Thu, 04/14/2022 - 15:00
By Gaurav Kamathe

One of the most gratifying parts of learning a new programming language is finally running an executable and getting the desired output. When I discovered the programming language Go, I started by reading some sample programs to get acquainted with the syntax, then wrote small test programs. Over time, this approach helped me get familiar with compiling and building the program.

The build options available for Go provide ways to gain more control over the build process. They can also provide additional information to help break the process into smaller parts. In this article, I demonstrate some of the options I have used. Note: I use the terms build and compile interchangeably.

Getting started with Go

I am using Go version 1.16.7; however, the commands given here should work on most recent versions as well. If you do not have Go installed, you can download it from the Go website and follow the instructions for installation. Verify the version you have installed by opening a command prompt and typing:

$ go version

The response should look like this, depending on your version.

go version go1.16.7 linux/amd64
$

Basic compilation and execution of Go programs

I'll start with a sample Go program that simply prints "Hello World" to the screen.

$ cat hello.go
package main

import "fmt"

func main() {
        fmt.Println("Hello World")
}
$

Before discussing more advanced options, I'll explain how to compile a sample Go program. I make use of the build option followed by the Go program source file name, which in this case is hello.go.

$ go build hello.go

If everything is working correctly, you should see an executable named hello created in your current directory. You can verify that it is in ELF binary executable format (on the Linux platform) by using the file command. You can also execute it and see that it outputs "Hello World."

$ ls
hello  hello.go
$
$ file ./hello
./hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
$
$ ./hello
Hello World
$

Go provides a handy run option in case you do not want a resulting binary and instead just want to see whether the program works correctly and prints the desired output. Keep in mind that even if you do not see the executable in your current directory, Go still compiles and produces the executable somewhere, runs it, and then removes it from the system. I explain how in a later section of this article.

$ go run hello.go
Hello World
$
$ ls
hello.go
$

Under the hood

The above commands worked like a breeze to run my program with minimal effort. However, if you want to find out what Go does under the hood to compile these programs, Go provides a -x option that prints everything Go does to produce the executable.

A quick look tells you that Go creates a temporary working directory within /tmp, produces the executable, and then moves it to the current directory where the source Go program was present.

$ go build -x hello.go

WORK=/tmp/go-build1944767317
mkdir -p $WORK/b001/

<< snip >>

mkdir -p $WORK/b001/exe/
cd .
/usr/lib/golang/pkg/tool/linux_amd64/link -o $WORK \
/b001/exe/a.out -importcfg $WORK/b001 \
/importcfg.link -buildmode=exe -buildid=K26hEYzgDkqJjx2Hf-wz/\
nDueg0kBjIygx25rYwbK/W-eJaGIOdPEWgwC6o546 \
/K26hEYzgDkqJjx2Hf-wz -extld=gcc /root/.cache/go-build /cc \
/cc72cb2f4fbb61229885fc434995964a7a4d6e10692a23cc0ada6707c5d3435b-d
/usr/lib/golang/pkg/tool/linux_amd64/buildid -w $WORK \
/b001/exe/a.out # internal
mv $WORK/b001/exe/a.out hello
rm -r $WORK/b001/

This helps solve the mystery when a program runs but no resulting executable is created within the current directory. Using -x shows that the executable file was indeed created in a /tmp working directory and was executed. However, unlike with the build option, the executable is not moved to the current directory, which makes it appear that no executable was created.

$ go run -x hello.go


mkdir -p $WORK/b001/exe/
cd .
/usr/lib/golang/pkg/tool/linux_amd64/link -o $WORK/b001 \
/exe/hello -importcfg $WORK/b001/importcfg.link -s -w -buildmode=exe -buildid=hK3wnAP20DapUDeuvAAS/E_TzkbzwXz6tM5dEC8Mx \
/7HYBzuaDGVdaZwSMEWAa/hK3wnAP20DapUDeuvAAS -extld=gcc \
/root/.cache/go-build/75/ \
7531fcf5e48444eed677bfc5cda1276a52b73c62ebac3aa99da3c4094fa57dc3-d
$WORK/b001/exe/hello
Hello World

Mimic compilation without producing the executable

Suppose you don't want to compile the program and produce an actual binary, but you do want to see all steps in the process. You can do so by using the -n build option, which prints the steps that it would normally run without actually creating the binary.

$ go build -n hello.go

Save temp directories

A lot of work happens in the /tmp working directory, which is deleted once the executable is created and run. But what if you want to see which files were created in the compilation process? Go provides a -work option that can be used when compiling a program. The -work option prints the working directory path in addition to running the program, but it doesn't delete the working directory afterward, so you can move to that directory and examine all the files created during the compile process.

$ go run -work hello.go
WORK=/tmp/go-build3209320645
Hello World
$
$ find /tmp/go-build3209320645
/tmp/go-build3209320645
/tmp/go-build3209320645/b001
/tmp/go-build3209320645/b001/importcfg.link
/tmp/go-build3209320645/b001/exe
/tmp/go-build3209320645/b001/exe/hello
$
$ /tmp/go-build3209320645/b001/exe/hello
Hello World
$

Alternative compilation options

What if, instead of using the build/run magic of Go, you want to compile the program by hand and end up with an executable that can be run directly by your operating system (in this case, Linux)? This process can be divided into two parts: compile and link. Use the tool option to see how it works.

First, use the tool compile option to produce hello.o, an ar archive that contains the intermediate object file. Next, use the tool link option on this hello.o file to produce the final executable, which you can then run.

$ go tool compile hello.go
$
$ file hello.o
hello.o: current ar archive
$
$ ar t hello.o
__.PKGDEF
_go_.o
$
$ go tool link -o hello hello.o
$
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
$
$ ./hello
Hello World
$

To peek further into the link process that produces the executable from the hello.o file, you can use the -v option, which prints verbose output, including the search for the runtime.a package that is included in every Go executable.

$ go tool link -v -o hello hello.o
HEADER = -H5 -T0x401000 -R0x1000
searching for runtime.a in /usr/lib/golang/pkg/linux_amd64/runtime.a
82052 symbols, 18774 reachable
        1 package symbols, 1106 hashed symbols, 77185 non-package symbols, 3760 external symbols
81968 liveness data
$

Cross-compilation options

Now that I've explained the compilation of a Go program, I'll demonstrate how Go allows you to build an executable targeted at different hardware architectures and operating systems by providing two environment variables—GOOS and GOARCH—before the actual build command.

Why does this matter? As an example, an executable produced for the ARM (aarch64) architecture won't run on an Intel (x86_64) machine; trying to run it produces an Exec format error.

These options make it trivial to produce cross-platform binaries.

$ GOOS=linux GOARCH=arm64 go build hello.go
$
$ file ./hello
./hello: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, not stripped
$


$ ./hello
bash: ./hello: cannot execute binary file: Exec format error
$
$ uname -m
x86_64
$
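If you're curious which target platforms your Go toolchain can build for, go tool dist list prints every supported GOOS/GOARCH pair. The output below is abbreviated, and the exact list depends on your Go version:

$ go tool dist list
android/arm64
darwin/amd64
linux/amd64
linux/arm64
windows/amd64

<< snip >>
$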

You can read my earlier blog post about my experiences with cross-compilation using Go to learn more.

View underlying assembly instructions

Source code is not converted directly into an executable; the compiler first generates an intermediate assembly format, which is then assembled into the executable. In Go, this intermediate form is an abstract assembly language rather than the native instructions of the underlying hardware.

To view this intermediate assembly format, pass -gcflags="-S" to the build command. The output shows the assembly instructions:

$ go build -gcflags="-S" hello.go
# command-line-arguments
"".main STEXT size=138 args=0x0 locals=0x58 funcid=0x0
        0x0000 00000 (/test/hello.go:5) TEXT    "".main(SB), ABIInternal, $88-0
        0x0000 00000 (/test/hello.go:5) MOVQ    (TLS), CX
        0x0009 00009 (/test/hello.go:5) CMPQ    SP, 16(CX)
        0x000d 00013 (/test/hello.go:5) PCDATA  $0, $-2
        0x000d 00013 (/test/hello.go:5) JLS     128

<< snip >>
$

You can also use the objdump -s option, as shown below, to see the assembly instructions for an executable program that was already compiled.

$ ls
hello  hello.go
$
$
$ go tool objdump -s main.main hello
TEXT main.main(SB) /test/hello.go
  hello.go:5            0x4975a0                64488b0c25f8ffffff      MOVQ FS:0xfffffff8, CX                 
  hello.go:5            0x4975a9                483b6110                CMPQ 0x10(CX), SP                      
  hello.go:5            0x4975ad                7671                    JBE 0x497620                           
  hello.go:5            0x4975af                4883ec58                SUBQ $0x58, SP                         
  hello.go:6            0x4975d8                4889442448              MOVQ AX, 0x48(SP)                      

<< snip >>
$

Strip binaries to reduce their size

Go binaries are typically large. For example, a simple Hello World program produces a 1.9M-sized binary.

$ go build hello.go
$
$ du -sh hello
1.9M    hello
$
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
$

To reduce the size of the resulting binary, you can strip off information not needed during execution. Using -ldflags with the -s -w flags, which omit the symbol table and debugging information, makes the resulting binary slightly lighter, at 1.3M.

$ go build -ldflags="-s -w" hello.go
$
$ du -sh hello
1.3M    hello
$
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
$

Conclusion

I hope this article introduced you to some handy Go build options that can help you understand the Go compilation process better. For additional information on the build process and other interesting options available, refer to the help section:

$ go help build


A guide to JVM parameters for Java developers

Thu, 04/14/2022 - 15:00
By Jayashree Huttanagoudar

When you write source code, you're writing code for humans to read. Computers can't execute source code until the code is compiled into machine language, a generic term referring to any number of languages required by a specific machine. Normally, if you compile code on Linux, it runs on Linux, and if you compile code on Windows, it runs on Windows, and so on. However, Java is different. It doesn't target an actual machine. It targets something called the Java Virtual Machine (JVM), and so it can run on any machine.
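As a minimal illustration of that portability, here is the basic compile-and-run cycle, assuming a source file named Hello.java like the one shown later in this article. The javac compiler produces bytecode (a .class file), and the java command starts a JVM to execute it:

$ javac Hello.java
$ java Hello
Inside Hello World!

The same Hello.class file can be copied to any machine with a compatible JVM and run there unchanged.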


Java source code gets compiled into bytecode which is run by a JVM installed on a computer. The JVM is an execution engine, but it's not one you usually interact with directly. It runs quietly, processing Java bytecode. Most people don't need to think or even know about the JVM, but it can be useful to understand how the JVM works so you can debug and optimize Java code. For example:

  • In the production environment, you might find a deployed application needs a performance boost.

  • If something goes wrong in an application you've written, both the developer and end-user have options to debug the problem.

  • Should you want to know the details of the Java Development Kit (JDK) being used to develop or run a Java application, you can get those details by querying the JVM.

This article introduces some basic JVM parameters to help in these scenarios…

Image by: Jayashree Huttanagoudar, CC BY-SA 4.0

What's the difference between a JVM, JDK, and JRE?

Java has a lot of J-acronyms, including JVM, JDK, and JRE.

  • The Java Development Kit (JDK) is used by programmers who need development libraries to use in their code.

  • The Java Runtime Environment (JRE) is employed by people who want to run a Java application.

  • The Java Virtual Machine (JVM) is the component that runs Java bytecode.

The JDK contains both a JRE and a JVM, but some Java distributions also provide an alternate download containing just a JRE (which includes a JVM).

Image by: Jayashree Huttanagoudar, CC BY-SA 4.0

Java is open source, so different companies build and distribute JDKs. You can install more than one on your system, which can be helpful when you're working on or using different Java projects, some of which might use an old JDK.

To list the JDKs on your Linux system, you can use the alternatives command:

$ alternatives --config java
There are 2 programs that provide java.
Selection Command
-----------------------------------------------
*+ 1 java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.13.0.8-2.fc35.x86_64/bin/java)
2 java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-2.fc35.x86_64/jre/bin/java)

Enter to keep the current selection[+], or type selection number:

To switch between available JDKs, run the command again:

$ sudo alternatives --config java

Another option is to use SDKMan, which helps you download, update, and manage the JDKs on your system.

What is JVM tuning?

Tuning a JVM is the process of adjusting JVM parameters to improve the performance of the Java application. It also helps to diagnose application failure.

In general, it's important to consider these points before tuning:

  • Cost: Sometimes, improving the hardware running your code can improve an application's performance. That might seem like a "cheat," but consider how much time you're willing to spend tuning the JVM parameters. Sometimes, an application requires more memory to perform as desired, and no amount of software hacking will change that.

  • Desired outcome: Stability is more important than performance in the long run. If your tuning affects stability, it's probably better to choose your tuning parameters more conservatively.

  • Underlying issues: Sometimes, what looks like a JVM problem is actually an issue with the host operating system. Before tuning the JVM, ensure that the JVM's platform is working as expected.

  • Memory leaks: If you find yourself reaching for Garbage Collection (GC) tuning parameters, there are likely memory leaks that need to be fixed in the application code.
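With those points in mind, tuning in practice usually means passing flags on the java command line. As a rough illustration only (the heap sizes, the garbage collector choice, and the myapp.jar name are placeholders, not recommendations), a tuned invocation might look like this:

$ java -Xms512m -Xmx2g -XX:+UseG1GC -jar myapp.jar

Here -Xms and -Xmx set the initial and maximum heap sizes, and -XX:+UseG1GC selects the G1 garbage collector.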

Types of JVM Parameters

JVM parameters are grouped under three categories: Standard options, Non-standard, and Advanced.

Standard options

All JVM implementations support standard options. Run the 'java' command in a terminal to see a list of standard options.

$ java
Usage: java [options] <mainclass> [args...]
        (to execute a class)
   or  java [options] -jar <jarfile> [args...]
        (to execute a jar file)

 where options include:

        -cp <class search path of directories and zip/jar files>
        -classpath <class search path of directories and zip/jar files>
        --class-path <class search path of directories and zip/jar files>
                A : separated list of directories, JAR archives,
                and ZIP archives to search for class files.
        --enable-preview
                allow classes to depend on preview features of this release

To specify an argument for a long option, you can use --<name>=<value> or
--<name> <value>.

These are all standard options included with any JVM, and you can safely use them as you would use any command-line option. For example, to validate the command-line options, create a VM, and load the main class without actually executing the main class, use:

$ java --dry-run <classfile>

Non-standard options

Non-standard options start with -X. These are for general-purpose use and are specific to a particular JVM implementation. To list these options:
 

$ java -X
    -Xbatch           disable background compilation
    -Xbootclasspath/a:<directories and zip/jar files separated by :>
                      append to end of bootstrap class path
    -Xinternalversion
                      displays more detailed JVM version information than the
                      -version option
    -Xloggc:<file>    log GC status to a file with time stamps
[...]

These extra options are subject to change without notice and are not supported by all JVM implementations.

A JVM built by Microsoft may have different options than one built by Red Hat, and so on.

To get detailed JVM version information, use the following option:
 

$ java -Xinternalversion --version
OpenJDK 64-Bit Server VM (11.0.13+8) for linux-amd64 JRE (11.0.13+8), built on Nov 8 2021 00:00:00 by "mockbuild" with gcc 11.2.1 20210728 (Red Hat 11.2.1-1)

To see the property settings, use:

$ java -XshowSettings:properties --version

Advanced options

These options are not for casual use; they are used for tuning specific areas of the HotSpot VM. They are subject to change, and there is no guarantee that all JVM implementations will support them.

These options start with -XX. To list these options, use the following command:

$ java -XX:+UnlockDiagnosticVMOptions -XX:+PrintFlagsFinal -version
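The output of that command runs to hundreds of flags, so it helps to filter it for the one you care about. For instance, to check the effective maximum heap size (MaxHeapSize is just one flag among many):

$ java -XX:+PrintFlagsFinal -version | grep MaxHeapSize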

For example, to trace class loading, use the following command:

$ java -XX:+TraceClassLoading Hello

The Hello.java file contains:

$ cat Hello.java
public class Hello {
  public static void main(String[] args) {
    System.out.println("Inside Hello World!");
  }
}
 

Another common problem you might face is an OOM (Out Of Memory) error, which can happen without much debug information. To solve such a problem, you can use the debug option -XX:+HeapDumpOnOutOfMemoryError, which creates a .hprof file with debug information when the error occurs.
 

$ cat TestClass.java
import java.util.ArrayList;
import java.util.List;

public class TestClass {
  public static void main(String[] args) {
    List list = new ArrayList();
    for (int i = 0; i < 1000; i++) {
      list.add(new char[1000000]);
    }
  }
}
$ javac TestClass.java
$ java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx1g TestClass
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid444496.hprof ...
Heap dump file created [1018925828 bytes in 1.442 secs]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at TestClass.main(TestClass.java:8)

There are tools to look at this .hprof file to understand what went wrong.

Conclusion

By understanding and using JVM and JVM parameters, both developers and end users can diagnose failures and improve the performance of a Java application. The next time you're working with Java, take a moment to look at the options available to you.


How to join Apache Cassandra during Google Summer of Code

Wed, 04/13/2022 - 15:00
By Stefan Miklosovic

It's time to don your shades and sandals! Apache Cassandra will be participating in the Google Summer of Code (GSoC) in 2022 again after a successful project in 2021, and the program this year has some changes we are excited to announce.

GSoC is a Google-sponsored program that promotes open source development and enables individuals to submit project proposals to open source mentor organizations. Applicants whose proposals are accepted get paid to work on their project during the Northern Hemisphere's summer. The Apache Software Foundation (ASF) has been a GSoC mentor organization for the past 17 years. It acts as an umbrella organization, which means that applicants can submit project proposals to any subproject within the ASF, including Apache Cassandra.


Last year I was a mentor, and I found that being able to switch hats and look at the program from a different perspective was invaluable. Back in 2013, I participated in my first GSoC as a student, so it is not far from the truth to say that I am a good example of how well GSoC can work! Once I dipped my toes into open source, I was immediately hooked. Even if you do not gain committer status in your first GSoC project, the exposure to the world of open source will help to get you there eventually.

Big changes to GSoC eligibility

Previously, the program was open only to post-secondary students, such as university students or recent graduates. This year, however, it will be open to anyone 18 years old or older who is an open source newcomer.

GSoC recognizes that the program can benefit anyone at various stages of their career, including people changing careers, those who are self-taught, those returning to the workforce, and more. The goal is to create a starting point for anyone who is not sure how to get started in open source or uncertain whether open source communities would welcome their contributions.

You can find more details about the program on the official GSoC website, including information on stipends.

Apache Cassandra GSoC project ideas

Currently, we have two project ideas with appointed mentors, but you are welcome to propose other projects.

Add support to EXPLAIN (CASSANDRA-17380)
Mentor: Benjamin Lerer

This is a project for adding functionality to CQL so that it supports EXPLAIN statements, which provide users with a way to understand how their query will be executed and some information on the amount of work that will be performed. For more details, see Cassandra Enhanced Proposal (CEP) draft 4.

Produce and verify BoundedReadCompactionStrategy as a unified general-purpose compaction algorithm (CASSANDRA-17381)
Mentor: Joey Lynch

This project focuses on performing validation and making the necessary code changes to introduce a new compaction strategy in Cassandra. You'll need prior knowledge in Java programming, and algorithm optimization skillsets would be useful too. Previous experience with Cassandra is helpful but not required. Compaction is a somewhat isolated part of the codebase that can be independently tested and even published as separate jars as compaction strategies are pluggable.

How to get involved

If you are interested in contributing to Apache Cassandra during GSoC, please join the #cassandra-gsoc room on Slack and introduce yourself! Potential mentors will give you initial instructions on how to get started and suggest some warm-up tasks.

Getting started with Apache Cassandra development

The best way to get started if you are new to Apache Cassandra is to get acquainted with the project's documentation and set up a local development environment. You will be able to play around with a locally running instance via cqlsh and nodetool to get a feel for how to use the database. If you run into problems or roadblocks during this exercise, do not be shy about asking questions on #cassandra-gsoc.

Google Summer of Code tips

There are many good resources on the web on preparing for GSoC, particularly the ASF GSoC Guide and the Python community notes on GSoC expectations. The best GSoC participants are self-motivated and proactive. Following the tips above should increase your chances of getting selected and delivering your project successfully. Good luck!


5 open source tools for developing on the cloud

Wed, 04/13/2022 - 15:00
By Seth Kenlon

When developing software on the cloud, your environment is fundamentally different from what is on your laptop. This is a benefit to the development process because your code adapts to the environment it is running on. This article will go over five different integrated development environments (IDEs) that can improve your programming experience.

Che

While it's perfectly acceptable to develop on a local IDE with minimal integration to a local platform like OKD or minikube, there's a better option. Che is an IDE designed for, and that runs on, Kubernetes. For a developer, an IDE that's aware of the peculiarities of the cloud can be useful.

Some developers don't like using an IDE, because they feel an IDE can manage too much of their code for them, making them feel distant from the code base. But when your code is being developed on the cloud, there's a lot of benefit to letting the cloud remain abstract. You don't need to know about the platform you're coding on, because you're coding for an ephemeral, yet totally predictable, container. If you let your IDE be your primary interface, you don't have to worry about the filesystem you're using or the layout of the system. You can focus on your code, while your IDE manages your environment.

CodeReady Workspaces

A natural extension to running an IDE on Kubernetes is the ability to run your choice of several IDEs on Kubernetes. CodeReady Workspaces is an OpenShift feature that launches popular IDEs in a container.

Whether your language of choice is Python, Java, Go, Rust, C or C++, JavaScript, .Net, or something else, you can probably benefit from a good IDE. CodeReady Workspaces has access to VS Code, JetBrains, Che, Theia, and more. There are plenty of good arguments for standardizing a development team on the same IDE, and that's precisely what CodeReady Workspaces can make possible.

CodeReady Workspaces runs on OpenShift, so it can be used with several different cloud providers, including Red Hat OpenShift Service on AWS platform, but also Azure, Google Cloud, as well as your own private OpenStack cloud.

Container hubs

In the software development world, there are libraries for you to use so you don't have to reinvent technology that someone else has already worked hard to figure out. Similarly, the cloud has containers that afford developers and sysadmins the same luxury.

When you're developing a cloud-native application and realize you need some standardized component (for instance, a database), you can import a container that provides that component. All you have to do is look at the inputs and outputs of the container, as if it were a function in code, and write your software accordingly.

There are many popular and reliable container hubs out there.

Like software libraries, well-supported containerized components have the advantage that they're maintained by someone else. While you could learn to make your own containers and run your own custom support applications, your first stop should be a container hub.

Buildah

When it comes time to build your own container, whether it's because a container hub doesn't have a well-maintained container for what you need, or because the container you need is your own, there are tools out there to make the process easy.

Even if you're developing applications in relative isolation, when you're developing for the cloud, your application at some point is sure to be deployed as a container. There are a few different ways to build a container. You can base your work on existing containers, or if you really need to start fresh you can build a container from scratch.

Whatever tactic you use, you want your solution to ultimately become automated so you can integrate it with your CI/CD or build process. Buildah is a flexible, easy to learn tool that's well worth using.
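As a rough sketch of the kind of sequence Buildah supports (the base image and the commands run inside the container here are illustrative placeholders, not a recommended recipe):

$ ctr=$(buildah from registry.fedoraproject.org/fedora:35)
$ buildah run "$ctr" -- dnf install -y python3
$ buildah copy "$ctr" ./app /opt/app
$ buildah config --cmd "/opt/app/start.sh" "$ctr"
$ buildah commit "$ctr" my-app-image

Each step is an ordinary command, which makes it straightforward to wrap the whole sequence in a script for your CI/CD pipeline.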

Kubectl

Depending on which cloud provider you're dealing with, the kubectl command may or may not be available to you, but there's a difference between using a command and knowing a command.

I've found that learning kubectl has been significant in my understanding of the underlying components of cloud technology. As a developer, you may never need to know what nodes are present in a cluster, but it can be nice to understand that they exist, why, and what they do. You also may not need to worry about what namespace your container runs in, or what pod it's a member of, but it can be useful to understand what is and isn't available inside and outside of a namespace or a pod.
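A few read-only kubectl commands illustrate the kind of visibility described above (assuming you have a kubeconfig pointing at a cluster; my-namespace is a placeholder):

$ kubectl get nodes
$ kubectl get namespaces
$ kubectl get pods -n my-namespace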

[ Download now: Kubectl cheat sheet ]

Regardless of whether your cloud provider gives you access to kubectl or you need to run it for a project, if you're the type of developer who wants to understand the whole stack, then you should learn kubectl.

A developing story

Cloud-native application development is an evolving story. New tools continue to surface, and the task of developing in the cloud is getting easier and more accessible. One thing's for sure: you should definitely take the chance to get familiar with using different environments when computing in the cloud. With so many open source tools to make life in the cloud feel like your native environment, you have plenty of options to choose from.

[ Take the free online course: Deploying containerized applications ]


Try this open source alternative to Salesforce

Tue, 04/12/2022 - 15:00
By Laryn Kragt Bakker

CiviCRM is an open source constituent relationship management (CRM) system designed to help you manage information about your organization's contacts, members, donations, and events. It's built specifically for nonprofits, so you won't find yourself having to try to shoehorn your organizational workflow into a business-oriented model (as some find themselves doing when using similar CRMs).

Even better, it's built to be extremely flexible and customizable. You can create custom fields, location types, contact sub-types, relationship types, financial types, and more. Best of all, it's customized to your nonprofit's specific needs. There are no hard-coded limits on the number of contacts you can store—and likewise, no arbitrary limits or thresholds that trigger a higher monthly fee. (I've heard an unfortunate story of a group that started out with, for example, a "Nonprofit Success Pack" on a proprietary CRM. When they outgrew it, they found themselves trapped in what turned out to be a very expensive system, or wanting to expand functionality with "apps" that ended up unexpectedly costing hundreds of dollars a month.)

Breaking down data silos

If your organization is storing data in many different places ("data silos"), you can imagine the benefit of having an accessible, centralized place to store information: no more maintaining multiple spreadsheets and databases to track your constituents. No more updating information in one place only to find out the contact is also being tracked in a different location with inconsistent, unreliable data. No more donations in one system, event registrations in another, and email lists in a third location. When you (or the contact themselves, if you allow it) update the information, it's updated across the board. When you view a contact, you can see at a glance, for example, their donation history, their attendance at your events, and their membership status.

Privacy is always a concern, and policies like the General Data Protection Regulation (GDPR) work hard to preserve it. Using CiviCRM, you can fully own your data and store it on your own server and under your privacy policy, rather than off-loading it to third party systems with their own privacy policies. (There's also a GDPR extension to provide GDPR-specific functionality such as a "Forget Me" button).

Costs

Though the software is open source and available without charge, there can be costs associated with CiviCRM. These range from hosting fees to the cost of hiring consultants or developers who can help you set up, configure, and maintain the installation or import historical data from another system. The difference is that instead of these costs being sunk into a proprietary profit-based system, they go towards deepening a sharing, educational, empowering economy that can benefit other nonprofits as well.

Core components and functionality

CiviCRM covers a lot of ground. Here are some of its core components and functionalities:

  • CRM: Tracks contact addresses, demographics, relationships with other contacts, activities with your organization, and custom fields you've set up. Uses groups, tags, and saved searches to categorize your contacts. A powerful API allows developers to do more extensive integrations and customization if the need is there.

  • Events: Tracks attendance to past events, and sets up events that allow registration (with or without payment).

  • Memberships: Tracks membership status and allows membership registration and renewal (with or without payment).

  • Donations: Tracks donation and contribution history and creates as many customized donation pages as needed. It also allows peer-to-peer fundraising pages.

  • Mailings: Sends email to contacts in certain groups or to subsets based on filters/searches. Allows people to sign up to mailing lists while verifying/updating their contact information. Uses special links in your emails to prepopulate the forms on your site with the contact's information and links it directly to their record.

  • Payment gateways: Connects directly to your payment gateway of choice with no additional middle layer or additional fee on top of your processor's fee. This puts more of the donor's money in your organization's bank account. This also allows you to change your payment processor if needed without having to change your entire CRM.

Contributed extensions of note

Those are just the broad categories, of course. CiviCRM is flexible, so if you need more than what's provided by default, you can install extensions developed by the community.

Here are some examples:

  • Contact Layout Editor: Take control of the contact summary screen. Rearrange, rename, and design blocks. Drag and drop fields. Choose which layout gets shown to each type of user, and design custom layouts for each contact type.

  • DIY Forms: Use Backdrop or Drupal's powerful Webform module to push data directly to your CRM database from form submissions on your site. Allows multiple contacts to be added and edited from a single form, including relationships between them.

  • CiviVolunteer: Manage your volunteers.

  • Extended Reports: Generate detailed, customizable reports.

  • Mosaico drag and drop email builder: Use the Mosaico library for a drag and drop interface when creating email templates.

Examples of integrations with the website

CiviCRM currently requires a CMS (although there is some talk about allowing it to run as a stand-alone system in the future). At the moment it supports Backdrop, Drupal, WordPress, and Joomla. It can be run on one of these as a separate CRM-specific subdomain or can be integrated directly into your main site (if you use one of these CMS's). Here are some examples of the types of website integrations that are possible:

  • Private members area/intranet: Sync membership or group status from CiviCRM to a role on the website, allowing access to private information for certain contacts.

  • Member directory with filters: Pull specific contact information into a searchable directory on your site that dynamically updates based on data in CiviCRM.

  • DIY updates: Allow contacts to update their own information and avoid manual entry of hand-written or emailed changes. Use deduplication rules to match contacts, allow logged in users to update their own info and send special customized links in an email to pre-populate the form with the contact's information and link it directly to their record (whether they are logged in or not).

  • Mailing list signup: Add a mailing list signup option on contact forms.

  • Create user accounts: Allow users to register for an account on your website while also capturing their information in CiviCRM.

Try CiviCRM

The CiviCRM community is interested in educating and empowering users of all kinds. There are free online manuals for users, administrators, and developers. If you think your organization could benefit from CiviCRM, ask a question or just give it a try.


The path to an open world begins with inclusivity

Tue, 04/12/2022 - 15:00
By Ron McFarland

For the past few weeks, collaborator Brook Manville and I have been offering our thoughts on (and analyses of) Johan Norberg's Open: The Story of Human Progress. My first article simply explained what the author means by the word "open." Thinking alongside Norberg now, I'd like to discuss future ways we might apply open organization principles to bring about prosperous societies globally. Norberg hints at such directions, but I believe further detail is required if we'll be able to set up action plans for the future.

The importance of inclusivity

One of the open organization principles, namely inclusivity, is extremely important when developing a more open society.

Open Organization resources Download resources Join the community What is an open organization? How open is your organization?

To highlight this importance, Norberg often asks readers to consider the differences between two groups or communities: an "inside" group and an "outside" group. When we consider people from our inside group, we see them as individuals. If we don't approve of one member, we limit our judgment only to that person. When we consider people from our outside group, or community, we see them as representatives of the whole group (forming a stereotype).

Here is where inclusivity becomes important. When the two groups interact, stereotypes can erode and both groups' members get seen as individuals. This doesn't mean the two groups become completely integrated; it just means each develops a deeper level of understanding about the other. And while differences are still apparent, joint interests emerge, too. Both groups can begin working together on efforts that serve them both.

To improve our inclusivity, we could ask how often people spend time with groups outside of or different from theirs. And does the group seem satisfied with that degree of contact?

Multi-dimensional group identities

When inside groups begin making more contact with different, outside groups, they begin to sow the seeds of more open global societies.

Naturally, we all are members of many communities, and we have many identities (which Norberg explains). Consider just a few: nationality, religion, race, political party, specialty/profession, language, educational level, and gender. Some of these dimensions of identity we have in common with others; other aspects of our identities differentiate us from others. By seeking common connections in the dimensions of our identities that we share, we can begin pursuing a better understanding of those "other" dimensions of identity that seem different to us. Problems arise when we identify too strongly (or too extremely) with only one facet of our identities, refusing to acknowledge the existence or importance of those different from us.

So how can we get people with such extreme, single-dimension identifications to be more inclusive?

"Tolerance" is a common theme in Norberg's work, but the concept on its own isn't sufficient for building open societies.

Steps to more inclusive societies

What Norberg calls "tolerance" is for me one aspect of a multi-part system of values that can help us create more open and inclusive global societies. I'd describe them like this:

  1. Recognition: We must first recognize the existence of other identities outside our own. This doesn't involve any kind of judgment about those identities, just acknowledgement of their existence.
  2. Respect: With recognition should come respect for those identities. It involves not just acknowledging others' existence, but validating it in some way. For example, say I travel to another city to meet fans of an opposing sports team. What I have to do is respect fans of that "competing" city's team—not determine whether that team is "bad" or "good," but try to understand why a person would be a fan of that team (particularly, say, if that fan grew up in that city).
  3. Understanding: This involves trying to understand what others are thinking and feeling. Here we begin to see others as individuals with unique histories and experiences that have led to the development of their identities. Simply put, we need to see the context others inhabit and understand how it might differ from ours.
  4. Tolerance: Norberg argues for greater tolerance of differences throughout Open. What he's describing here is the (occasionally uncomfortable) work of holding different identities in tension with our own. It takes effort. When I hear a different perspective, how do I respond? Am I overly defensive? Do I respond quickly, or pause to consider the other person's context? Do I ask clarifying questions to get a feeling for their thinking? These questions will help me become more tolerant of different and divergent perspectives.
  5. Optimism: Being optimistic is helpful for becoming open to differences. As Norberg notes, fear leads to pessimism, which narrows people's thinking and makes them look inward. It blinds them to others' perspectives. For instance, when giving sales seminars, I'd notice that people who were pessimistic about making sales had a harder time spotting the needs of the customer. They were too preoccupied looking inward. Optimism leads to thinking outwardly, which can lead to inclusivity.
  6. Patience: Finally, I think patience is also required to improve our ability to work through differences. When the differences are so great, the work of understanding those differences also seems great. It can feel easier simply not to do it. The best advice might just be to give yourself some time to think about the differences. After some time, a new perspective and better understanding might come to the surface. Patience is key.
Barriers to openness

Norberg stresses that most societies actually maintain barriers that prevent them from becoming more open. Barriers to growth and opportunities mean talented people will seek to join groups in societies where those barriers don't exist. Norberg offers several historical examples of this tendency.

But what are those barriers?

They relate to four types of mobility:

  1. Physical mobility: The ability to physically move where one wishes and away from a place or situation one wants to avoid is a form of physical mobility. How easily can a person or group move from one area to another in which they perceive more opportunity for growth?
  2. Professional mobility: This is the ability to move into different professions when the need for one's current professional skills is declining. How easily can a person or group develop new skills and adopt professions in higher demand?
  3. Social mobility: Are people or groups able to move to communities that promote their vision of human development—and away from communities that don't? How accepting is a community? How welcoming of new and different people? Can new ideas flourish here through discussions with newcomers?
  4. Psychological mobility: This refers to emotional desire or fear associated with moving to new and different environments and away from others. How psychologically adaptable and flexible is a person or group in general? Can people emotionally adjust to changes within their current surroundings?

Understanding these various forms of mobility can help a group or society learn how it can become more open.

Toward a more open world

As I've written previously, a more globalized society tends to be a more open society. Norberg clearly feels similarly, arguing that throughout history, when societies become more accepting of differences and open up, they tend to prosper, innovating and developing faster. It is due to openness, he says, that human civilization has progressed more in the past 200 years than it has in the past 20,000. Clearly, openness is important enough to warrant our efforts in helping it flourish.

We should be thinking of our work as "non-zero sum," even striving for a "plus-sum game," where one group's successes propel everyone forward.

In the end, Norberg's work stresses the importance of combining inclusivity with another open organization characteristic: collaboration. Future successes will rely, he says, on overcoming the idea that social progress involves a kind of "zero-sum game" in which one group's gains represent net loss for others. We should, he insists, be thinking of our work as "non-zero sum," even striving for a "plus-sum game," where one group's successes propel everyone forward. All groups have their own strengths and weaknesses; by working collaboratively, they can leverage each other's strengths far more effectively, increasing total benefit for all.

Openness, tolerance, recognition, and respect lead to increased trade, better division of labor, greater specialization, the quicker iteration and perfection of processes, and global understanding. As Norberg notes, openness helps new forms of expression, new ideas, new business models, and new insights circulate, leading to greater prosperity. Furthermore, Norberg notes that scientific advancement depends on open exchange of information, viewpoints, criticisms, and concepts. Isolation and restricted interaction, on the other hand, have historically led to failure—and will continue to do so in the future, should we not avoid them.

Building open societies requires respect, understanding, and patience.

Image by:

Opensource.com

The Open Organization Diversity and inclusion Read the series Open exchange, open doors, open minds: A recipe for global progress Making the case for openness as the engine of human progress 4 questions about the essence of openness This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 1 Comment Register or Login to post a comment. Image by:

opensource.com

Ron McFarland | April 12, 2022

After writing this article, I started thinking of the value of an open society or organization.  Consider this.  

What is the value of more open organizations and societies?

To answer this question, I would like to compare it with product standardization.  What is the value of standardizing an electrical outlet socket for example?  Imagine every country, city, town and even individual home having different home appliance electrical outlets and sockets.  If that were the case, the cost, quality and performance would be hundreds or even thousands of times worse than one standardized electrical outlet design.

Look at the extremes: on one hand, a single standardized socket design that is mass-produced using advanced automation; on the other, thousands, possibly even millions of versions, all made by hand at varying levels of quality.  The benefit of standardizing these electrical sockets could be 1,000 times that of a handmade version.  Open organizations and societies offer that exact same value to their community members.  Openness helps find and standardize what is best for wide communities.

 

6 reasons this nonprofit chose Backdrop for its open source CMS

Mon, 04/11/2022 - 15:00
6 reasons this nonprofit chose Backdrop for its open source CMS Laryn Kragt Bakker Mon, 04/11/2022 - 03:00

As a nonprofit that builds websites for other nonprofits (among other things), the Stuart Center has used a variety of platforms over the years based on experience and feedback from nonprofit partners. In the early days, we built straight HTML websites, and as content management system (CMS) technology blossomed, we used Mambo (and Joomla after it forked). We then moved on to WordPress and Drupal, which we have used for years now. As things evolve and change, we always have to reassess and adjust course based on the audience we are serving. Of course, it's not just the technology that changes, but also the focus, priorities, and end-user experience of the various projects.

The Stuart Center serves primarily small to mid-sized nonprofits. Many of our partners don't have a full-time website person, and often people wear many hats. Funding is usually tight. They have website needs that can't always fit into a cookie-cutter solution, and they appreciate flexibility from a website so it can be a solid base that can grow and scale and adapt as their needs do. This is why we've often turned to Drupal over the last decade or so.

In recent years, we've shifted to suggesting Backdrop to our partners as an alternative to Drupal. Backdrop not only maintains many of Drupal's strengths; it also brings some strong new features and pays particular attention to just the kind of groups that make up our audience. Here are some of the deciding factors we considered as we've observed and begun to participate in the Backdrop project. (To be clear, this is not intended as a knock on the other projects so much as an endorsement of Backdrop. We've used WordPress and Drupal plenty, and I expect we'll use them again when the project is right.)

More great content Free online course: RHEL technical overview Learn advanced Linux commands Download cheat sheets Find an open source alternative Explore open source resources

Power, flexibility, scalability

In Backdrop, the best parts of Drupal are kept, including many of the improvements that went into Drupal 8, such as configuration management, and key features like the ability to create dynamic, customizable views in the core software. This makes it much more flexible out of the box than WordPress. There are also new Backdrop-specific improvements, such as a user-friendly yet powerful layout system that allows separate landing pages and sections with unique page structures, with blocks of various kinds of content placed differently on the page. We have the power to build a solid, flexible base that we can scaffold on top of as budget allows and ideas grow. Sometimes a group doesn't have the budget for a full project all at once. With Backdrop, we can do a project in multiple phases, building on what we started rather than starting over or redoing a lot of work.

Usability and empowerment

There has been a continuous stream of usability improvements in Backdrop as the team cleans up some of the inconsistencies and less-than-ideal user interface aspects of Drupal 7. Some of them are small and the kind of thing you don't even notice when they're fixed (which is what you want), and others bring new features into play (like the addition of an image library in the editor, and the fact that images uploaded through the editor are tracked elsewhere on the site as well). We had a partner email us the day after we upgraded and slip in a comment about how they loved the new image library, so we know that these improvements are being noticed by our end users. Attention to little details, like the full-featured editor experience (which works nicely out of the box) and upgraded JavaScript libraries, makes Backdrop much more enjoyable to use as a content editor or site builder on a day-to-day basis.

Backdrop's principles fit hand-in-glove with the Stuart Center's values, especially the way we try to educate and empower our partners. Make sure it's usable. Include features that the majority need and want. Keep it simple but extendable. This is critical as we build a site and then try to empower our partners to take on as much of the editorial and administrative aspects as they are willing and able. We always tell our partners that, based on their needs, we will help train them to handle as much as they are able on the site. We'll be here for questions as needed, realizing some groups will need more support and others will need less (or none).

Affordability

As we at the Stuart Center evaluated Backdrop, we had to look at the total cost of ownership of the website for our partners. The fact that building a Drupal site will cost more than building a Backdrop site is one thing, but the costs of ongoing maintenance and upgrades, and the potential need for a more expensive hosting plan, are others. We've found that we can develop Backdrop sites quickly, which means lower costs, and that the Backdrop principle of backwards compatibility has made updates and upgrades very smooth. (With core updates already possible via the admin interface, and potentially automated in the future, another affordability win for our partners is on the horizon.) Some of our smaller partners have sites on shared hosting, and Backdrop's philosophy includes a principle of maintaining great performance so the system can run on lower-cost hosting.

It's worth noting here that there is a smooth upgrade path from Drupal 7 to Backdrop, whereas the process of moving from Drupal 7 to Drupal 8 and above is a more intensive migration process. 

Security, maintenance, and a four year track record

At the Stuart Center, we've been evaluating Backdrop for several years, even as we've built sites with it and participated in the community by porting, maintaining, and developing modules. Backdrop is over seven years old now, with scheduled, on-time releases every four months. Security updates are generally managed in collaboration with the Drupal 7 security team on issues that affect both systems. The ability to update the core software via the administrative interface (similar to how you can already update contributed modules) is a great feature for partners that want to manage things more completely. The long-term goal of providing an option for automated security updates is another step in the right direction. I've appreciated the thoughtful conversation around whether it will be possible to provide a "security updates only" release for minor versions, a branch that doesn't force you to upgrade to the latest functionality but does let security patches flow in as seamlessly as possible, minimizing the chance that an automated update could cause a breakage.

I expect that we'll see a spike in Backdrop adoption as we approach the end-of-life of Drupal 7.

Community and leadership

Most of the above considerations are from the end-user perspective, but the developer's Backdrop experience is also one of empowerment. The community is still comparatively small, but it's growing, and the general attitude feels welcoming and warm, "structured to promote participation and collaboration." Modern workflows and development tools like GitHub make it simpler to get involved in one way or another.

The leadership structure is one of the hidden gems of the project. Rather than one person or an enterprise-focused company having outsized control of a community-powered project, the Backdrop Project Management Team is set up as a diverse group representing "all perspectives of the Backdrop community". This gives peace of mind that the project won't stray from the principles that focus on the needs of small and mid-size nonprofits, companies, and groups, and the big shops and enterprise developers that want to shift everything to headless systems or other expensive functionality can't use their size to steer the project away from its intended audience. They are still welcome to use Backdrop and participate in the contrib space, of course, just not to commandeer the project!

Integration with CiviCRM

Another important check mark for Backdrop CMS is its deep integration with CiviCRM, allowing nonprofits or companies to take advantage of all the benefits CiviCRM offers: centralized donor and contact management, memberships, donations, events with registration, case management, email blasts, and the privacy win of self-hosted data.

Conclusion

If you work in a small or medium-sized nonprofit, the points above will probably resonate in many ways. At the Stuart Center, we've been very pleased with Backdrop's direction and we're looking forward to the project's future as a tool to help us and our partners build a better world. I expect we'll use other systems from time to time, but Backdrop feels like a "Swiss Army knife" of sorts and should be part of the conversation.

If you have a web project coming up or are just interested in discussing Backdrop or asking us a question or two, join the live chat or feel free to get in touch with us at the Stuart Center. You can also follow the Nonprofit Backdrop Twitter feed.

Here are some of the deciding factors that the Stuart Center considered as they observed and began to participate in the Backdrop project.

Image by:

Opensource.com

Tools Alternatives Business What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

5 open source alternatives to Doodle polls

Mon, 04/11/2022 - 15:00
5 open source alternatives to Doodle polls Don Watkins Mon, 04/11/2022 - 03:00

Scheduling meetings can be a nearly insurmountable task. Finding a time that works for everyone in the same organization, let alone across different time zones, can feel like trying to solve a puzzle with a missing piece.

There are several web applications that can help you send around a poll to find out what times work for each participant, with several different options for dates and times provided. By taking the intersection of all the good times, you can uncover the ideal meeting schedule.

Should you want to host your own meeting poll software, there are several open source options available to you, and recently I had the occasion to try five of them.

More great content Free online course: RHEL technical overview Learn advanced Linux commands Download cheat sheets Find an open source alternative Explore open source resources

Framadate

Framadate is produced by the French not-for-profit association Framasoft. Offered as a hosted web application, Framadate is ad-free, supports real-time collaboration, is multilingual, and can help with planning and documenting meetings. It's released under a CeCILL-B license. The source code is available on GitLab if you want to review it, contribute to it, or download and self-host it.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Dudle

Dudle is an open source poll and event scheduling application. It was released with a GPL v.3 license. You can easily create scheduling and polling events in over twenty different languages, either anonymously or with a distinct URI. Dudle comes with different stylesheets to give your event scheduling or polling a distinct appearance. The source code is available, and you can run the software on your own server.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Nextcloud

There's not much Nextcloud can't do. It has an application called Nextcloud Polls that allows you to create and share polls from the familiar Nextcloud interface. It's written in PHP and Vue.js, and is released under an AGPL 3.0 license. It has many features, including easy poll creation, the ability to hide results from other users, and the option for an automatic expiration date. You can easily export to HTML and different spreadsheet formats. You can report bugs or request new features, and there's an active development community.

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Croodle

Croodle doesn't have a demo server, so you're likely to use this only if you intend to self-host. It's released under an MIT license and is written in PHP. Croodle is encrypted end-to-end. All data (poll title, description, options, user names, and so on) are encrypted and decrypted in the browser using 256-bit AES encryption. If you're building a site and you want to include polling as a service, this is a great project to try.

 

Rallly

Rallly (yes, that's three L's) provides a simple and direct interface for quickly scheduling events and allowing participants to vote on the date and time. Released under an MIT license, Rallly's source code is available for you to review or contribute to. Its primary means of delivery is as a container, so there's essentially no configuration required to quickly launch an instance on your own server using Podman, Kubernetes, or Docker. Rallly also has excellent documentation to help you set it up in your own environment, even for more complex setups.
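
As a minimal sketch, assuming the project's published container image and default port (a real deployment also needs database and mail settings, so consult Rallly's self-hosting documentation):

# Image name and port are assumptions; adjust per Rallly's docs.
$ podman run -d --name rallly -p 3000:3000 docker.io/lukevella/rallly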

Image by:

(Seth Kenlon, CC BY-SA 4.0)

Polls

Whether you run a temporary solution in a container, an occasional poll on Nextcloud, or build a full productivity suite around scheduling, there are plenty of open source solutions for getting input from your event's participants. The world feels smaller thanks to the number of video calls we now make, and it's effortless to coordinate your meetups across time zones and busy schedules.

Whether you run a temporary solution in a container, an occasional poll on Nextcloud, or build a full productivity suite around scheduling, there are plenty of open source solutions for getting input from your event's participants.

Alternatives Nextcloud Business Tools What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 1 Comment Register or Login to post a comment. Image by:

opensource.com

Laurent | April 11, 2022

And what about Dolibarr survey ?
https://www.dolibarr.org/presentation-surveys-polls.php

Automate checking for flaws in Python with Thoth

Mon, 04/11/2022 - 15:00
Automate checking for flaws in Python with Thoth Fridolin Pokorny Mon, 04/11/2022 - 03:00

Most cyberattacks take advantage of publicly known vulnerabilities. Many programmers can automate builds using Continuous Integration/Continuous Deployment (CI/CD) or DevOps techniques. But how can we automate the checks for security flaws that turn up hourly in different free and open source libraries? Many methods now exist to ferret out buggy versions of libraries when building an application.

This article will focus on Python because it boasts some sophisticated tools for checking the security of dependencies. In particular, the article explores Project Thoth because it pulls together many of these tools to automate Python program builds with security checks as part of the resolution process. One of the authors, Fridolín, is a key contributor to Thoth.

More on security The defensive coding guide 10 layers of Linux container security SELinux coloring book More security articles

Inputs to automated security efforts

This section lists efforts to provide the public with information about vulnerabilities. It focuses on tools related to the article's subject: Reports of vulnerabilities in open source Python libraries.

Common Vulnerabilities and Exposures (CVE) program

Any discussion of software security has to start with the comprehensive CVE database, which pulls together flaws discovered by thousands of scattered researchers. The other projects in this article depend heavily on this database. It's maintained by the U.S. National Institute of Standards and Technology (NIST), and additions to it are curated by MITRE, a non-profit corporation specializing in open source software and supported by the U.S. government. The CVE database feeds numerous related projects, such as the CVE Details statistics site.

A person or automated tool can find exact packages and versions associated with security vulnerabilities in a structured format, along with less structured text explaining the vulnerability, as seen below.

Image by:

(Fridolín Pokorný and Andy Oram, CC BY-SA 4.0)

Security efforts by the Python Packaging Authority

The Python Packaging Authority (PyPA) is the major organization creating best practices for open source packages in the Python language. Volunteers from many companies support PyPA. Security-related initiatives by PyPA are significant advances in making Python robust.

PyPA's Advisory Database curates known vulnerabilities in Python packages in a machine-readable form. Yet another project, pip-audit, supported by PyPA, audits application requirements and reports any known vulnerabilities in the packages used. Output from pip-audit can be in both human-readable and structured formats such as JSON. Thus, automated tools can consult the Advisory Database or pip-audit to warn developers about the risks in their dependencies.
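
As a rough illustration, here's how you might run pip-audit against a project (requirements.txt is simply the conventional file name):

$ pip install pip-audit
$ pip-audit                                 # audit the packages in the current environment
$ pip-audit -r requirements.txt -f json     # audit a requirements file and emit JSON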

A video by Dustin Ingram, a maintainer of PyPI, explains how these projects work.

Open Source Insights

An initiative called Open Source Insights tries to help open source developers by providing information in structured formats about dependencies in popular language ecosystems. Such information includes security advisories, license information, libraries' dependencies, etc.

To exercise Open Source Insights a bit, we looked up the popular TensorFlow data science library and discovered that (at the time of this writing) it has a security advisory on PyPI (see below). Clicking on the MORE DETAILS button shows links that can help research the advisory (second image).

Image by:

(Fridolín Pokorný and Andy Oram, CC BY-SA 4.0)

Image by:

(Fridolín Pokorný and Andy Oram, CC BY-SA 4.0)

Interestingly, the version of TensorFlow provided by the Node.js package manager (npm) had no security advisories at that time. The programming languages used in this case may be the reason for the difference. However, the apparent inconsistency reminds us that provenance can make a big difference, and we'll show how an automated process for resolving dependencies can adapt to such issues.

Open Source Insights obtains dependency information on Python packages by installing them into a clean environment. Python packages are installed by the pip resolver—the most popular installation tool for Python libraries—from PyPI, the most popular index listing open source Python libraries. Vulnerability information for each package is retrieved from the Open Source Vulnerability database (OSV). OSV acts as a triage service, grouping vulnerabilities across multiple language ecosystems.

Open Source Insights would be a really valuable resource if it had an API; we expect that the developers will add one at some point. Even though the information is currently available only as web pages, the structured format allows automated tools to scrape the pages and look for critical information such as security advisories.

Security Scorecards by the Open Source Security Foundation

Software quality—which is intimately tied to security—calls for basic practices such as conducting regression tests before checking changes into a repository, attaching cryptographic signatures to releases, and running static analysis. Some of these practices can be detected automatically, allowing security experts to rate the security of projects on a large scale.

An effort called Security Scorecards, launched in 2020 and backed by the Open Source Security Foundation (OpenSSF), currently lists a couple of dozen such automated checks. Most of these checks depend on GitHub services and can be run only on projects stored in GitHub. The project is still very useful, given the dominance of GitHub for open source projects, and represents a model for more general rating systems.
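
Here's a minimal sketch of running the Scorecard command-line tool; the environment variable and flag reflect the project's documented usage as we understand it, and the target repository is just an example:

# Scorecard queries the GitHub API, so it needs a token (assumption based on the project's README).
$ export GITHUB_AUTH_TOKEN=<your token>
$ scorecard --repo=github.com/ossf/scorecard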

Project Thoth

Project Thoth is a cloud-based tool that helps Python programmers build robust applications, a task that includes security checking along with many other considerations. Red Hat started Thoth, and it runs in the Red Hat OpenShift cloud service, but its code is entirely open source. The project has built up a community among Python developers. Developers can copy the project's innovations in other programming languages.

A tool that helps programmers find libraries and build applications is called a resolver. The popular pip resolver generally picks the most recent version of each library, but is sophisticated enough to consider the dependencies of dependencies in a hierarchy called a dependency graph. pip can even backtrack and choose a different version of a library to handle version range specifications found by traversing the dependency graph.

When it comes to choosing the best version of a dependency, Thoth can do much more than pip. Here is an overview of Thoth with a particular eye to how it helps with security.

Thoth overview

Thoth considers many elements of a program's environment when installing dependencies: the CPU and operating system on which the program will run, metadata about the application's container such as the ones extracted by Skopeo, and even information about the GPU that a machine learning application will use. Thoth can take into account several other variables, but you can probably guess from the preceding list that Thoth was developed first to support machine learning in containers. The developer provides Thoth with information about the application's environment in a configuration file.

What advantages does the environment information give? It lets Thoth exclude versions of libraries with known vulnerabilities in the specified environment. A developer who notices that a build fails or has problems during a run can store information about what versions of dependencies to use or avoid in a specification called a prescription, consulted by Thoth for future users.
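
In practice, developers talk to Thoth through its Thamos command-line client. Here's a minimal sketch of that workflow; the command names reflect our reading of the Thoth documentation, so double-check them there:

$ pip install thamos
$ cd my-project/        # hypothetical project directory
$ thamos config         # generate a .thoth.yaml describing the runtime environment
$ thamos advise         # ask Thoth's resolver for a recommended, locked set of dependencies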

Thoth can even run tests on programs and their environments. Currently, it uses Clair to run static testing over the content of container images and stores information about the vulnerabilities found. In the future, Thoth's developers plan to run actual applications with various combinations of library versions, using a project from the Python Code Quality Authority (PyCQA) named Bandit. Thoth will run Bandit on each package source code separately and combine results during the resolution process.

The different versions of the various libraries can cause a combinatorial explosion (too many possible combinations to test them all). Thoth, therefore, models dependency resolution as a Markov Decision Process (MDP) to decide on the most productive subset to run.

Sometimes security is not the primary concern. For instance, perhaps you plan to run a program in a private network isolated from the Internet. In that case, you can tell Thoth to prioritize some other benefit, such as performance or stability, over security.

Thoth stores its dependency choices in a lock file. Lock files "lock in" particular versions of particular dependencies. Without the lock files, subtle security vulnerabilities and other bugs can creep into the production application. In the worst case, without locking, users can be confronted with so-called "dependency confusion attacks".

For instance, a resolver might choose to get a library from an index with a buggy version because the index from which the resolver usually gets the dependency is temporarily unavailable.

Another risk is that an attacker might bump up a library's version number in an index, causing a resolver to pick that version because it is the most recent one. The desired version exists in a different index but is overlooked in favor of the one that seems more up-to-date.
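
Thoth generates and consumes its lock file for you, but the underlying idea is the same as ordinary Python pinning. As a generic sketch using pip-tools (not Thoth itself), you can pin every transitive dependency, with hashes, so installs are reproducible:

$ pip install pip-tools
$ pip-compile --generate-hashes requirements.in    # writes a fully pinned requirements.txt
$ pip install --require-hashes -r requirements.txt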

Wrap-up

Thoth is a complicated and growing collection of open source tools. The basic principles behind its dependency resolutions can be an inspiration for other projects. Those principles are:

  1. A resolver should routinely check for vulnerabilities by scraping websites such as the CVE database, running static checks, and through any other sources of information. The results must be stored in a database.
  2. The resolver has to look through the dependencies of dependencies and backtrack when it finds that some bug or security flaw calls for changing a decision that the resolver made earlier.
  3. The resolver's findings and information passed back by the developers using the resolver should be stored and used in future decisions.

In short, with the wealth of information about security vulnerabilities available these days, we can automate dependency resolution and produce safer applications.

Project Thoth pulls together many open source tools to automate program builds with security checks as part of the resolution process.

Image by:

opensource.com

Security and privacy DevOps Python What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 32 points Arlington, Massachusetts, USA

Andy is a writer and editor in the computer field. His editorial projects at O'Reilly Media ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. Andy also writes often on health IT, on policy issues related to the Internet, and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM (Brussels), DebConf, and LibrePlanet. Andy participates in the Association for Computing Machinery's policy organization, named USTPC, and is on the editorial board of the Linux Professional Institute.

| Follow praxagora Open Enthusiast Register or Login to post a comment.

New book teaches readers how to tell data stories with PostgreSQL

Sun, 04/10/2022 - 15:00
New book teaches readers how to tell data stories with PostgreSQL Joshua Allen Holm Sun, 04/10/2022 - 03:00

SQL databases can be daunting but can also be very fun if you know how to use them. The information contained in a database can provide many insights to someone who knows how to properly query and manipulate the data. Practical SQL, 2nd Edition: A Beginner's Guide to Storytelling with Data by Anthony DeBarros teaches readers how to do just that.

More great content Free online course: RHEL technical overview Learn advanced Linux commands Download cheat sheets Find an open source alternative Explore open source resources

The content

DeBarros, currently a data editor at the Wall Street Journal, pulls from his practical experience in journalism to teach readers how to tell stories with data. The book consists of an introduction, 20 chapters, and several appendices. The introduction sets the tone for the book, explaining what the book is about and who it is for, and the 20 chapters teach lessons about various database topics. Chapter 1 is the traditional "how to set up your environment" chapter and covers how to install PostgreSQL on Windows, macOS, or Linux (specifically, Ubuntu). The following chapters cover the basics of working with SQL databases, like creating databases and tables, performing basic queries, understanding data types, importing and exporting data, and basic math and stats functions. The chapters then progress to more complex topics like joining tables and extracting, inspecting, and modifying data. By the time the reader reaches the book's midpoint, they should have a solid understanding of how databases work.

The chapters in the second half of the book, starting with chapter 11, explore advanced topics.

  • Chapter 11 covers statistical functions.
  • Chapter 12 explains how to work with dates and time.
  • Chapter 13 teaches advanced query techniques.
  • Chapter 14 explores text mining features.
  • Chapter 15 looks at analyzing spatial data using PostGIS.
  • Chapter 16 explains how to work with JSON data.
  • Chapter 17 shows how to use views, functions, and triggers.
  • Chapter 18 discusses using PostgreSQL from the command line.
  • Chapter 19 covers database maintenance.

The final chapter, Chapter 20: Telling Your Data's Story, shifts away from the practical aspects of chapters one through 19 toward providing advice about telling stories using data. Again, Debarros pulls from his experience as a journalist to offer lessons about the whys, hows, and best practices of doing data journalism or data storytelling. If chapters one through 19 are the tools in the toolbox, chapter 20 is a sample blueprint that will inspire the reader to create their own project.

The exercises

There are SQL files and other supplemental resources for the exercises in each chapter in the book's GitHub Repository, except for chapter 20, which has no activities. The repository also contains a file with solutions for each of the "try it yourself" end-of-chapter exercises.

The exercises throughout the book are all very interesting. While the earliest chapters are understandably basic (there are only so many ways to teach CREATE DATABASE and CREATE TABLE), they provide an excellent foundation for the more advanced topics later in the book. The advanced exercises use real-world data to give verisimilitude to the learning experience. The database of choice for Practical SQL, 2nd Edition is PostgreSQL, but the book makes some mentions of different databases when things might work differently. However, it is very much a PostgreSQL book, so that is something to keep in mind.

Final thoughts

Practical SQL, 2nd Edition is a well-written and informative book that can help someone begin to master SQL. Even more importantly, it is an extremely enjoyable book that will keep the reader engaged with interesting, thought-provoking exercises. Anyone interested in learning the ins and outs of PostgreSQL should consider picking up this book. The book's only drawback is that it is a PostgreSQL book, not a database-agnostic book, so anyone trying to learn MySQL, MariaDB, or some other SQL-based database might want to choose a book that focuses on that particular database. The overall "Guide to Storytelling with Data" lessons are something a moderately experienced MySQL or MariaDB user can apply to their database of choice, but this book is not the ideal first book for learning a non-PostgreSQL database. That one caveat aside, I highly recommend Practical SQL, 2nd Edition to anyone wanting to learn PostgreSQL and how to tell stories with data.

Practical SQL, 2nd Edition: A Beginner's Guide to Storytelling with Data by Anthony DeBarros offers an informative and enjoyable way to learn SQL.

Image by:

Opensource.com

Databases What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 1 Comment Register or Login to post a comment. madtom1999 | April 12, 2022

Worth noting that PostgreSQL allow Stored Procedure overloading which can make development a nightmare if you are unaware of it.

Explaining Git branches with a LEGO analogy

Sat, 04/09/2022 - 15:00
Explaining Git branches with a LEGO analogy Seth Kenlon Sat, 04/09/2022 - 03:00

Creating a new branch in a code repository is a pretty common task when working with Git. It's one of the primary mechanisms for keeping unrelated changes separate from one another. It's also very often the main designator for what gets merged into the main branch.

Without branches, everything would either have to be cherrypicked, or else all your work would be merged effectively as a squashed rebase. The problem is, branches inherit the work of the branch from which they're forked, and that can lead to you accidentally pushing commits you never intended to be in your new branch.

The solution is to always fork off of main (except when you mean not to). It's an easy rule to say, but unfortunately it's equally as easy to forget, so it can help to look at the reasoning behind the rule.

More on Git What is Git? Git cheat sheet Markdown cheat sheet New Git articles

A branch is not a folder

It's natural to think of a Git branch as a folder.

It's not.

When you create a branch, you're not creating a clean environment, even though it might seem like you are. A branch inherits all of the data that its parent contains. If the parent branch is the main branch, then your new branch contains the common history of your project. But if the parent branch is another branch off of main, then your new branch contains the history in main plus the history of the other branch. I often think in terms of LEGO bricks, so here's a visual example that isn't one of those complex Git node graphs (but actually is, secretly).

Say your main branch is a LEGO plate.

Image by:

(Seth Kenlon CC BY-SA 4.0)

When you create a branch off of main, you add a brick. Suppose you add a branch called blue.

Image by:

(Seth Kenlon BY-SA 4.0)

The blue branch contains the history of the base plate plus whatever work you do on blue. In code, this is what's happened so far:

$ git branch
* main
$ git checkout -b blue

Branch of a branch

If you create yet another branch while you're still in your blue branch, then you're building on top of main as well as blue. Suppose you create a branch called red because you want to start building out a new feature.

Image by:

(Seth Kenlon CC BY-SA 4.0)

There's nothing inherently wrong with this, as long as you understand that your red branch is built on top of blue. All the work you did in the blue branch also exists in red. As long as you didn't want red to be a fresh start containing only the history of your main branch, this is a perfectly acceptable method of building your repo. Be aware, though, that the project owner isn't able to, for instance, accept the red changes without also accepting a bunch of blue changes, at least not without going to a lot of trouble.

Clean break

If what you actually want to do is to develop blue and red as separate features so that the project owner can choose to merge one and not the other, then you need the two branches to both be based only on main. It's easy to do that. You just checkout the main branch first, and then create your new branch from there.

$ git branch
* blue
main
$ git checkout main
$ git checkout -b red

Here's what that looks like in LEGO:

Image by:

(Seth Kenlon CC BY-SA 4.0)

Now you can deliver just blue to the project owner, or just red, or both, and the project owner can decide what to attach to main on the official repository. Better still, both blue and red can be developed separately going forward. Even if you finish blue and it gets merged into main, once the developer of red merges in changes from main then what was blue becomes available to new red development.

Image by:

(Seth Kenlon CC BY-SA 4.0)

Branch example

Here's a simple demonstration of this principle. First, create a Git repository with a main branch:

$ mkdir example
$ cd example
$ git init -b main

Populate your nascent project with an example file:

$ echo "Hello world" > example.txt
$ git add example.txt
$ git commit -m 'Initial commit'

Then checkout a branch called blue and make a silly commit that you don't want to keep:

$ git checkout -b blue
$ fortune > example.txt
$ git add example.txt
$ git commit -m 'Unwisely wrote over the contents of example.txt'

Take a look at the log:

$ git log --oneline
ba9915d Unwisely wrote over the contents of example.txt
55d4811 Initial commit

First, assume you're happy to continue developing on top of blue. Create a branch called red:

$ git checkout -b red

Take a look at the log:

$ git log --oneline
ba9915d Unwisely wrote over the contents of example.txt
55d4811 Initial commit

Your new red branch, and anything you develop in red, contains the commit you made in blue. If that's what you want, then you may proceed with development. However, if you intended to make a fresh start, then you need to create red off of main instead.

Now checkout your main branch:

$ git checkout main

Take a look at the log:

$ git log --oneline
55d4811 Initial commit

Looks good so far. The blue branch is isolated from main, so it's a clean base from which to branch in a different direction. Time to reset the demo. Because you haven't done anything on red yet, you can safely delete it. Were this happening in real life and you'd started developing on red, then you'd have to cherrypick your changes from red into a new branch.

This is just a demo, though, so it's safe to delete red:

$ git branch -D red

Now create a new branch called red. This version of red is intended as a fresh start, distinct from blue.

$ git checkout -b red
$ git log --oneline
55d4811 Initial commit

Try making a new commit:

$ echo "hello world" >> example.txt
$ git add example.txt
$ git commit -m 'A new direction'

Look at the log:

$ git log --oneline
de834ff A new direction
55d4811 Initial commit

Take one last look at blue:

$ git checkout blue
$ git log --oneline
ba9915d Unwisely wrote over the contents of example.txt
55d4811 Initial commit

The red branch has a history all its own.

The blue has a history all its own.

Two distinct branches, both based on main.

Fork with care

Like many Git users, I find it easier to keep track of my current branch by using a Git-aware prompt. After reading Moshe Zadka's article on it, I've been using Starship.rs and I've found it to be very helpful, especially when making lots of updates to a packaging project that requires all merge requests to contain just one commit on exactly one branch.

With hundreds of updates being made across 20 or more participants, the only way to manage this is to checkout main often, pull, and create a new branch. Starship reminds me instantly of my current branch and the state of that branch.

Whether you fork a new branch off of the main branch or off of another branch depends on what you're trying to achieve. The important thing is that you understand that it matters where you create a branch. Be mindful of your current branch.

Use this helpful LEGO analogy to understand why it matters where you branch in Git.

Image by: Image credits: CC BY-SA 4.0 Klaatu Einzelgänger Git What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 2 Comments Register or Login to post a comment. Image by:

opensource.com

Victorhck | April 11, 2022

Was brilliant to use Lego for example!
Thanks!

BTW: The link to starship article is wrong and gives a 404 error. It's this one:

https://opensource.com/article/22/2/customize-prompt-starship

Greetings!

Image by:

Opensource.com

AmyJune Hineline | April 12, 2022

Thanks for the comment! I updated the link.

In reply to by Victorhck

Peek inside your Git repo with rev-parse

Fri, 04/08/2022 - 15:00
Peek inside your Git repo with rev-parse Seth Kenlon Fri, 04/08/2022 - 03:00

I use Git a lot. In fact, there's probably an argument that I sometimes misuse it. I use Git to power a flat-file CMS, a website, and even my personal calendar.

To misuse Git, I write a lot of Git hooks. One of my favorite Git subcommands is rev-parse, because when you're scripting with Git, you need information about your Git repository just as often as you need information from it.

More on Git What is Git? Git cheat sheet Markdown cheat sheet New Git articles

Getting the top-level directory

For Git, there are no directories farther back than its own top-level folder. That's in part what makes it possible to move a Git directory from, say, your computer to a thumb drive or a server with no loss of functionality.

Git is only aware of the directory containing a hidden .git directory and any tracked folders below that. The --show-toplevel option displays the root directory of your current Git repository. This is the place where it all starts, at least for Git.

Here's an obvious example of how you might use it:

$ cd ~/example.git
$ git rev-parse --show-toplevel
/home/seth/example.git

It becomes more useful when you're farther in your Git repo. No matter where you roam within a repo, rev-parse --show-toplevel always knows your root directory:

$ cd ~/example.git/foo/bar/baz
$ git rev-parse --show-toplevel
/home/seth/example.git

In a similar way, you can get a pointer to what makes that directory the top level: the hidden .git folder.

$ git rev-parse --git-dir
/home/seth/example.git/.git

Find your way home

The --show-cdup option tells you (or your script, more likely) exactly how to get to the top-level directory from your current working directory. It's a lot easier than trying to reverse engineer the output of --show-toplevel, and it's more portable than hoping a shell has pushd and popd.

$ git rev-parse --show-cdup
../../..

Interestingly, you can lie to --show-cdup, if you want to. Use the --prefix option to fake the directory you're making your inquiry from:

$ cd ~/example.git/foo/bar/baz
$ git rev-parse --prefix /home/seth/example.git/foo --show-cdup
../

Current location

Should you need confirmation of where a command is being executed from, you can use the --is-inside-work-tree and --is-inside-git-dir options. These return a Boolean value based on the current working directory:

$ pwd
/home/seth/example.git/.git/hooks
$ git rev-parse --is-inside-git-dir
true

$ git rev-parse --is-inside-work-tree
false

Git scripts

The rev-parse subcommand is utilitarian. It's not something most people are likely to need every day. However, if you write a lot of Git hooks or use Git heavily in scripts, it may be the Git subcommand you always wanted without knowing you wanted it.
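
For example, here's a minimal, hypothetical pre-commit hook that leans on rev-parse so it behaves the same from any subdirectory (the src/ path and the TODO check are stand-ins for whatever your script actually needs to do):

#!/bin/sh
# .git/hooks/pre-commit (hypothetical example)

# Bail out if we're somehow not inside a work tree.
[ "$(git rev-parse --is-inside-work-tree)" = "true" ] || exit 1

# Resolve the repository root so paths work from any subdirectory.
TOP=$(git rev-parse --show-toplevel)

# Example policy: refuse the commit while TODO markers remain in src/.
if grep -rn "TODO" "${TOP}/src" 2>/dev/null; then
    echo "Remove TODO markers before committing."
    exit 1
fi

exit 0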

Try it out the next time you invoke Git in a script.

If you write a lot of Git hooks or use Git heavily in scripts, rev-parse may be the subcommand you always wanted without knowing you wanted it.

Git What to read next Make your own Git subcommands This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. 1 Comment Register or Login to post a comment. Image by:

Opensource.com

weent18 | April 8, 2022

Amazing a good deal of beneficial information

10 Git tips we can't live without

Wed, 04/06/2022 - 15:00
10 Git tips we can't live without AmyJune Hineline Wed, 04/06/2022 - 03:00

Git tips are a dime a dozen, and it's a good thing because you can never get enough of them. If you use Git every day, then every tip, trick, and shortcut you can find is potentially time and effort saved. I asked Opensource.com community members for their favorite Git hacks. Here they are!

More on Git What is Git? Git cheat sheet Markdown cheat sheet New Git articles

add

The git add --patch (-p for short) command kicks off an interactive review of chunks of changes that you can add, split into smaller chunks, or ignore (among other things). That way, you can be sure to limit your changes to a specific commit.

Kevin Thull

I use git add -p to review changes a hunk at a time before they're committed. It lets you check whether you forgot to remove some sketchy ideas, stray comments, or other things you shouldn't commit.

Ryan Price

amend

The Git option --amend is a helpful alternative to creating several commits and then squashing them into one commit through an interactive rebase. I like that you can continually amend your first commit to add additional changes as necessary.

Ashley Hardin
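
Here's a minimal sketch of that flow (the file name is hypothetical); because --amend rewrites the commit, keep it to commits you haven't pushed yet:

$ git commit -m 'Add login form'
$ git add login.html             # a follow-up change you forgot
$ git commit --amend --no-edit   # fold it into the previous commit, keeping the message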

bisect

I know I screwed something up, but I have no idea when. That's what git-bisect is for.

In a previous life, I was a backend developer of Drupal and WordPress at an agency outside of Chicago. I had multiple customer sites I was working on at any one time, bouncing back and forth between them like a pinball. Every now and then, someone would find an undocumented feature in my code and, with fuzzy memory because of all the different client sites, git-bisect would come in handy with helping me find the culprit.

Eric Michalsen
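
A typical session looks roughly like this (v1.2.0 stands in for any revision you know was good):

$ git bisect start
$ git bisect bad                 # the current HEAD is broken
$ git bisect good v1.2.0         # a known-good tag or commit (placeholder)
# Git checks out a commit halfway between; test it, then mark it:
$ git bisect good                # or: git bisect bad
# ...repeat until Git names the first bad commit, then clean up:
$ git bisect reset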

blame

Contrary to the name of the git blame command, I don't use it to blame others. It's great when you're taking over repositories that you didn't initialize. You can see when certain changes were completed and hopefully the commit messages behind them too. It's a wonderful troubleshooting tool.

Miriam Goldman
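
For example (the file name and line range are hypothetical):

$ git blame README.md                  # who last touched each line, and in which commit
$ git blame -L 40,60 src/header.php    # limit the report to lines 40 through 60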

checkout

I use git checkout - to change to the previous branch. It's handy for switching from a feature branch back to the main development branch and back again.

Kevin Thull

diff

I like the git diff --staged command to review all of the staged changes before committing.

Kevin Thull

status

I seem to touch many files when I work, and git status is a lifesaver. Understanding the state of the working directory and staging area with git status has helped me learn those core concepts in Git and make sure all my work is committed!

Ravi Lachhman, Field CTO at Shipa

squash (rebase -i)

I like to use this command to squash several commits into one. Start by using git rebase -i HEAD~# where # is the number of commits to squash. Change pick to squash on each commit that should become part of the one above it. Then edit your commit messages as you see fit. It keeps the commit history very tidy.

Ryan Marks
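
As a sketch, the rebase command opens a todo list that you then edit; the hashes and messages below are hypothetical:

$ git rebase -i HEAD~3

# The editor opens with something like this; change "pick" to "squash"
# on the commits you want folded into the one above:
pick 1a2b3c4 Add CSV export
squash 5d6e7f8 Fix typo in export header
squash 9a0b1c2 Address review feedback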

update-index

Use this with caution!

$ git update-index --assume-unchanged path/to/file

It's handy for marking a file unchanged that has local changes or just one of those pesky files that continually swears it's been changed even though you've done absolutely nothing to it.

Kevin Thull

worktree

One of my favorites is the git-worktree feature, which manages simultaneous checkouts of different branches in a repo without making a wholesale copy of the files.

Here's an example:

$ git worktree add ../feature-branch feature-branch

KURT W.K.

Wrap up

Now it's time to suggest your own favorite tips. What Git commands save you time, or prevent mistakes? What tips do you pass on to new Git users at your workplace? Let us know as we celebrate 17 years of Git goodness this week!

Opensource.com community members share their favorite Git tips for saving time or preventing mistakes.

Image by:

Opensource.com

Git Opensource.com community What to read next What Git aliases are in your .bashrc? A practical guide to using the git stash command This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

Make your own Git subcommands

Wed, 04/06/2022 - 15:00
Make your own Git subcommands Seth Kenlon Wed, 04/06/2022 - 03:00

Git is pretty famous for having lots of subcommands, like clone, init, add, mv, restore, bisect, blame, show, rebase, and many more. In a previous article, I wrote about the very useful rev-parse subcommand for Git. Even with all of these subcommands available, users still come up with functions to improve their Git experience. While you're free to create Git-related commands and run them as scripts, it's easy to make your own custom Git subcommands. You can even integrate them with Git through rev-parse.

More on Git What is Git? Git cheat sheet Markdown cheat sheet New Git articles

Create a simple Git script

A script that integrates with Git can be as complex as you need it to be, or it can be short and straightforward.

As a simple example, assume you've created a script that gathers the file names of your latest commit and places them into a file called latest.txt. You use this script after every commit for reporting purposes, and you've decided it would be handy to be able to run the script as if it were a built-in feature of Git, for instance, with the command git report.

Currently, running git report renders this error:

$ git report
git: 'report' is not a git command. See 'git --help'.

The most similar command is
        bugreport

Here's the script that generates your report:

#!/bin/sh

TOP=$(git rev-parse --show-toplevel)
HASH=$(git log --pretty=format:'%h' -n 1)

mkdir "${TOP}"/reports || true

git diff-tree \
--no-commit-id --name-only \
-r HEAD > "${TOP}"/reports/$HASH

Save this file as git-report.sh somewhere in your PATH.

You're not going to run this script directly, so do include the .sh extension in the name. Make the script executable:

$ chmod +x git-report.sh

Create the front-end command

You can force Git to run git report through rev-parse and launch your git-report.sh script instead of returning an error. Here's the script:

#!/bin/sh

git-report.sh

Save this file as git-report (do not use a .sh file extension) somewhere on your PATH. Make the script executable:

$ chmod +x git-report

Running your custom Git command

Now test it out. First, create the infrastructure and some sample data:

$ mkdir myproject ; cd !$
$ git init
$ echo "foo" > hello.txt
$ git add hello.txt
$ git commit -m 'first file'
$ git report

Look inside the reports directory to see a record of your latest commit:

$ cat reports/2e3efd8
hello.txt

Pass arguments to your script

You can pass arguments through rev-parse, too. I help maintain a Git subcommand called Git-portal, which helps manage large multimedia files associated with a Git repository.

It's a little like Git LFS or Git Annex, except without the overhead of versioning the actual media files. (The symlinks to the files are kept under version control, but the contents of the files are treated independently, which is useful in scenarios where the artistic process is distinct from the development process.)

To integrate Git-portal with Git as a subcommand, I use a simple front-end shell script, which in turn invokes rev-parse for literal parsing.

The example script provided in this article can't take any parameters, but the actual Git scripts I write almost always do. Here's how arguments are passed, for example, to Git-portal:

#!/bin/sh

ARG=$(git rev-parse --sq-quote "$@")
CMD="git-portal.sh $ARG"
eval "$CMD"

The --sq-quote option quotes all arguments following git portal and includes them as new parameters for git-portal.sh, which is the script with all the useful functions in it. A new command is assembled into the CMD variable, and then that variable is evaluated and executed.

Here's a simple example you can run. It's a modified version of a script I use to resize and apply accurate copyright (or copyleft, as the case may be) EXIF data to images for a flat-file CMS I use Git to manage.

This is a simplified example, and there's no good reason for this script to be a Git subcommand, but it's demonstrative. You can use your imagination to find ways to expand it to use Git functions.

Call this file git-imager.sh:

#!/bin/sh
## Requires
# GNU Bash
# exiftool (sometimes packaged as perl-Image-exiftool)
# Image Magick

PIC="${1}"
LEFT="${2}"

# Resize and center-crop the image to a 512x512 square.
mogrify -geometry 512x512^ -gravity Center \
        -crop 512x512+0+0 "${PIC}"

# Write the copyright (or copyleft) string into the image's EXIF data.
exiftool -Copyright="${LEFT}" "${PIC}"

The front-end script for Git integration passes all parameters (the file name and the license) through to the git-imager.sh script.

#!/bin/sh

ARG=$(git rev-parse --sq-quote "$@")
CMD="git-imager.sh $ARG"
eval "$CMD"

Try it out:

$ git imager logo.jpg "Tux CC BY-SA"
 1 image files updated

Easy Git customization

Creating your own Git subcommand makes your custom scripts feel like natural components of Git. For users, it makes the new subcommands easy to remember, and it helps your subcommands integrate into the rest of everyone's Git workflow.

Creating your own Git subcommand makes your custom scripts feel like natural components of Git.

Image by:

Ray Smith

Git What to read next This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License. Register or Login to post a comment.

4 questions about the essence of openness

Tue, 04/05/2022 - 15:00
4 questions about the essence of openness Brook Manville Tue, 04/05/2022 - 03:00

Despite some quibbles I voiced in the first part of my review of Johan Norberg's Open: The Story of Human Progress, the author's argument remains engaging—especially at this historical juncture, as the world witnesses Russia's invasion of Ukraine, and authoritarian movements around the world are threatening the sustainability of the liberal democracies Norberg identifies as bastions of openness.

But if the world were to somehow make a transformational leap towards more global openness, what would actually be required? The conceptual challenge of that is daunting—but a first step might begin simply by articulating some of the unexplored questions that follow from Open's overall vision.

These are meaty questions, to be sure—about the actual "performance gains" more openness will deliver, about the design considerations any community would need to make if it chose to pursue open transformation, and about how, precisely, that work would have to be undertaken in different contexts.


So in this second part of my review, I want to elaborate on a few of these. I'll also offer some explanation of how they pertain to the historical narrative Norberg lays out, and why getting clear on them now deserves attention from any would-be open advocates and reformers. At the heart of the inquiries here is a single, probing question: What can we actually do to promote more openness of the sort that Norberg envisions—if that is indeed something we desire?

What's the point of openness?

The first question Norberg raises for me is this: What, overall, is the purpose of pursuing more openness for one's entity? How do we more concretely define the goals of becoming more open?

This question may sound so obvious that it needn't even be raised. But, surprisingly, Norberg is much less explicit on the point than he should be. His argument wavers between treating "open" as a self-evident good and, at other times, as the basis for a given historical success he imputes to this or that nation, based on, say, features like "greater innovation" and "community-wide problem-solving." My point here is not to deny the benefits of openness, but rather to push for an articulation of more specific and concretely defined benefits of open strategies that an organization or other entity is right to reach for.

For instance, if a benefit of greater openness is its impact on human "progress," then we need to articulate precisely what that term signifies. And that means the possibility of inviting criticisms. Recently, for example, in another spirited but also controversial book, Enlightenment Now, Steven Pinker took a cut at the difficult work of defining "progress." For him, progress can be made concrete and measurable by assessing modernity's improvement against several specific variables of human civilization (e.g., increase in life-span, more positive healthcare outcomes, growth of wealth, increase of knowledge, etc.). Pinker associates such improvements that benefit us today with the growth of reason, science and humanism—Enlightenment values forged over the last several centuries. His charge to modern societies is to keep building progress by fortifying and extending such Enlightenment practices. Pinker's approach to explaining progress has been variously challenged, but his case at least begins with a firm definitional stake in the ground.

We should expect the same of Norberg's manifesto. As Norberg explains, humans' psychological and sociological tendencies often create ongoing barriers to sustaining open approaches to life and work. If we're to undertake the (clearly difficult) work of fostering more open institutions, we'll want to understand specifically the value these approaches deliver, and why we should push for them.

In other words, what's the real prize for all the trouble?


Norberg's overall conceit about purpose (to which I'm sympathetic) is fundamentally practical: we should pursue openness because it creates and facilitates material improvements and superior success in whatever a society's members are trying to achieve, as well as broader economic welfare. But he also says little about less material aspects of life in an open community: the joys and fulfillment of greater individual freedom, and the horizon-broadening exposure to new ideas and cultural inputs. These are more intangible and somewhat relative, historically contingent, and perhaps more fraught. As Norberg observes, horizon-broadening exposure to other cultural influences can also be the fuel of tribal antagonism—not a benefit for many people, but rather a source of identity-threatening fear.

And how does this intersect with the view that greater openness is less about any benefits to incumbent members of a nation and more about satisfying the human rights of newcomers to it? For many constituents in liberal democracies, openness is less an opportunistic strategy and more a moral obligation to fellow humans. Tensions between this view and others point to the need for more intentional discussion of the purpose of—and goals in potentially pursuing—any stance of greater openness.

What's the scale of openness?

The next question Norberg's Open raises for me is: What "unit(s) of analysis" are the appropriate focus of "open-building"? Do different entities require different approaches to "open"?

By "unit of analysis," I mean the size and nature of the entity being studied—a fancy phrase that asks about the "relative bigness" and make-up of whatever human grouping one is trying to make more open. Norberg's aim is bold: raise existing discussions about open innovation and cross-boundary collaboration in business organizations and open software networks to much greater scale, examining political, cultural, and economic entities (including nation states, empires, and entire civilizations). From time to time, he also focuses on smaller organizational forms (cities, indigenous communities, and groups of nomadic peoples). For Norberg, "more open" benefits human groupings of any size.

But does it always? Because his understanding of what makes for "progress" is so fluid, one wonders if the concept itself not only needs stricter definition, but also if it scales in all the ways Norberg believes. Taking as a hypothesis that the answer to the latter is "yes," further analysis would look for differentiation, segmentation of type, and appropriate qualification of what "open" can expect to deliver at these various scales.

If indeed "open" always delivers some kind of progress, we could sharpen our understanding of why and how it does this, given the substantial differences among different kinds of organizations and human groupings. We might also examine the question of whether there's some upper limit to size. Can a culture and set of practices for open innovation expand infinitely, or does it ultimately hit a ceiling?

What's the limit of openness?

Next, I wonder: Does "open" ever need to respect certain boundaries? When, where, and why?

The "unit of analysis question" presumes that some entities might strive to be open, but others—perhaps because they're bigger, smaller, or simply different—may not. The implication here is that openness involves limits or boundaries. But Norberg leaves unresolved the question of whether "open" has boundaries. Does an open world, for instance, mean one in which significant distinctions between all entities eventually disappear, because no boundaries should exist between any human groupings, lest tribal enmity flourish and progress cease? If this latter condition is part of Norberg's vision (when discussing climate change, he edges slightly in this direction), it's certainly not implied in most of his discussion. Instead, following the lessons of history, he suggests that some entities will be more open than others, and those that are will fare better.


Then, a follow-on issue: If leaders or citizens of a certain society, nation, or extended community decide to open themselves up, when and how do they draw any kind of boundaries at all? This leads to a host of related questions, all familiar to anyone who has studied open organizational design, implementation, or practice:

  • Who defines membership for participating and enjoying the fruits of openness?
  • What rights and privileges (if any) accrue to people in a community or society who aren't full "members," and how will those be determined?
  • Who, if anyone, should not be allowed to join an open community? Is potential exclusion based on nefarious designs for individual greed and power? Because an entity wishes ill upon the well-intentioned, more open members? Or are there other suitable prohibitions?
  • What principles, practices, and institutions are necessary to decide and defend certain boundaries (both in terms of people and territory, and even including virtual networks), given that boundaries themselves can undermine openness, even if they are also necessary to defend and preserve it?
What's the best way to foster openness?

Norberg's book raises one final question for me: What form of governance is needed to advance and sustain openness?

Towards the end of his book, Norberg seems to imply that liberal democracies are the most appropriate stewards of open societies. That seems logical enough. But as we've seen, his historical survey celebrates openness as achieved and/or utilized by the harsh Mongolian emperor Genghis Khan, Roman imperial rulers, and late-17th-century Britain under William of Orange (who worked with an elected parliament but remained a monarch with still substantial powers). Norberg also notes that modern China has been experimenting, so far relatively successfully, with a blend of authoritarian rule and more open economic practices. To what degree, and in what ways, is "open" independent of the kind of rule that organizes the society that might benefit from it? What systems or regimes of governance espouse the most viable "vision" of openness? And how do those systems structure decision-making power?

An organization's governance model is critical not just to creating openness but also to sustaining that posture over time (especially given all the human tendencies to retreat from the natural challenges and setbacks it brings, as Norberg elaborates). Consider, for example, some of the critical, but also enduring, decisions that governing institutions and their leaders must make in connection with both development and maintenance of openness:

Agreeing on the boundaries of and membership in an open community: Decisions about both are never static or settled. Expansion or contraction of territory, growth of new populations, changing diversity and complexity of members who compose the community—all require new decisions and potential changes or refinements to past agreements. Maintaining what it means to be open, and for whom and where, must be a continuing process managed on behalf of all members (and would-be members, too).

Defending those boundaries and membership(s): The maintenance of the open society is not only about achieving what might be called "internal agreement" among membership stakeholders, but also about warding off external threats or major challenges to those decisions by hostile outside powers. Open societies must ultimately struggle with both domestic and foreign challenges to how open they think they wish to be.

Defining the excellence that justifies openness: Norberg glosses over any explicit discussion of this, but the book's implied premise is that open societies ultimately thrive because they foster competitive selection of the "best ideas," which leads to innovation. In other words, open societies win and thrive because they make decisions according to some standard of "excellence"—not familiarity, parochial preferences, or ideological priorities. But "excellence" is not always simple and self-evident to any particular community: What criteria will be used to define it? How will conflicts about it be resolved? How will it be maintained as the pre-eminent basis for action? These are governance questions, and members of open communities themselves—or those entrusted to govern them—must agree on how those challenges will be met and differing opinions among members mediated.


Defining whether and how those who achieve excellence will be differentially entitled or rewarded: This question, which follows directly from the above issue, moves from understanding what excellence is to agreeing who deserves to benefit from it. In short, it's a question of organizational governance. And as I've framed it here, it's a question of a governance model often called "meritocracy" (meaning those with the greatest knowledge, skills, and accomplishment should be recognized by a society's people as most worthy to lead, and also to bear the greatest burdens of ensuring the greater good for all). Today, however, meritocracy is a fraught and contested term, particularly among those who criticize its associated systems of excessive credentialism (status conferred by educational degrees rather than objective performance), and who raise concerns that reliance on judgment by excellence unfairly discounts the social scaffolding that helps some people more than others achieve success. Others simply argue that meritocracy undermines the dignity and equality upon which democratic governance models depend. Such controversies do not directly relate to social and economic strategies for pursuing openness, but they will ultimately be pulled into efforts aimed at intentionally creating it. Similarly, like questions of boundaries and membership, societal concepts of what makes excellence and how merit should be judged will require ongoing adjustment and revision, as the entity grows larger and more complex (which performance success tends to drive).

When and how to modify any community's state of "openness" when required: The history of democratic (or generally "open") communities and nations is filled with examples of their peoples deciding that openness must be reduced or curtailed—sometimes temporarily, sometimes more enduringly. Ancient Athens became steadily more elitist—less open—about its citizenship during the "Golden Age." Ancient Romans, during the period of their Republic, enacted a law to allow temporary suspensions of freedoms and the appointment of a dictator in times of grave emergency, a measure abused by Julius Caesar when he made himself dictator for life. But during Rome's later imperial age, despite the overall autocratic rule, social mobility and commercial relationships became steadily more open. Modern American democracy has witnessed periods of welcoming immigration and open trade and others of exclusionary practices and high tariffs. Openness inevitably becomes a moving target, as populations, external events, and social beliefs change. How should future open-seeking societies and nations govern a process that can never stand still?

It is the great value of Norberg's book that it can inspire and raise hope for a generally more open world. But the conundrums that come along with such aspirations now require much more thinking from the rest of us—from both academic study and the harsh lessons of practice.

A more open world isn't inevitable. Building one will take work—and it raises complex questions like these.

Image by:

Opensource.com

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

What Git aliases are in your .bashrc?

Tue, 04/05/2022 - 15:00
What Git aliases are in your .bashrc? AmyJune Hineline Tue, 04/05/2022 - 03:00

Many open source users love a good Bash alias and are usually happy to show off a particularly robust .bashrc file when given the chance. If you're a frequent user of Git, you might benefit from a few Git aliases mixed in with your other Bash aliases. Alternatively, you can create aliases specific to Git with the git config command. For example, this sets git co as a shorthand for git checkout:

$ git config --global alias.co checkout

I asked our contributors for their favorite and most useful Git aliases so that you could take advantage of their ideas. Here are their suggestions.

Aliases

Here's an easy way to see just the most recent log entry:

git config alias.last 'log -1 HEAD'
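
Once that's set (the command above stores the alias in the current repository's configuration; add --global to make it available everywhere), it runs like any other Git command:

$ git last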

Opensource.com author Sachin Patil uses hist for reviewing logs:

log --pretty=format:'%h %ai [%an] %s%d' --graph
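
That line is the alias body rather than a full command. One way to wire it up under the name hist, which is the name Sachin uses (the quoting below works in Bash, and --global is optional):

$ git config --global alias.hist "log --pretty=format:'%h %ai [%an] %s%d' --graph"
$ git hist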

Sachin creates this Bash alias for pull requests:

# github pull request. 
# Usage: git pr

pr="\!sh -c 'git fetch $1 pull/$2/head:$3 && git checkout $3' -"
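
However you wire that up (as a line in the [alias] section of your .gitconfig, for example), usage looks like this, assuming the remote points at a GitHub repository; the remote name, pull request number, and branch name below are hypothetical:

$ git pr origin 42 pr-42

That fetches the head of pull request #42 from the origin remote into a local branch named pr-42 and checks it out.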

The git diff command is helpful for all kinds of comparisons, but sometimes all you really want is the file name of what's changed. Kristen Pol creates an alias to shorten the --name-only option:

git diff --name-only
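
If you'd rather not type the option each time, you can register it as a Git alias; the alias name changed here is my own choice, not Kristen's:

$ git config --global alias.changed 'diff --name-only'
$ git changed HEAD~1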

Kristen says, "We typically have a lot of development happening simultaneously, so knowing the most recent commit across all branches is handy." Here's a command Kristen aliases for that purpose:

git branch --remotes --verbose --sort=-committerdate

Everybody appreciates a fresh start. This is Kristen's alias for wiping out a branch, leaving it fresh and clean:

alias gitsuperclean='git reset --hard; git clean --force -d -x'

Custom filter-repo command

Chris has been using a "third-party" Git command called git-filter-repo.

Chris explains the alias. "Ever want to pull a specific directory out of a larger Git repository and make it its own, separate repo, while keeping all the Git history? That's exactly what filter-repo does."
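
As a rough sketch of that workflow (the repository URL and directory name are hypothetical, and git filter-repo expects to run in a fresh clone unless you pass --force):

$ git clone https://example.com/big-repo.git
$ cd big-repo
$ git filter-repo --subdirectory-filter docs/

Afterward, the repository contains only the history that touched docs/, with that directory promoted to the new project root.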

Your aliases

What Git command do you use so often that you alias it? Do you use Bash aliases or Git aliases, or a mix of both? Tell us in the comments!


Image by:

kris krüg

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

My guide to understanding Git rebase -i

Tue, 04/05/2022 - 15:00
My guide to understanding Git rebase -i Vaishnavi R Tue, 04/05/2022 - 03:00

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. It is an essential tool in an open source developer's toolkit.

This article covers why and how to use the git rebase --interactive (-i for short) command. This is considered an intermediate Git command, but it can be very useful once you start working with large teams.

This command is one of the most powerful in Git. The git rebase command helps you manage multiple commits, including:

  • Flatten (or squash in Git terminology) several commits so that they look like they were all done at once
  • Delete one of the commits
  • Split one commit into two
  • Reorder the commits

Yes, the git rebase command can rewrite your repository's commit history by rearranging, modifying, and even deleting commits. So let's get started!

Helpful instructions in the rebase message

The git rebase -i interface is, as its long form --interactive flag implies, an interactive interface. It provides a list of commits, and then you choose what actions you want Git to take on each of them. Taking no action (leaving every line as pick) is valid, but the rebase only changes your history if you mark at least one commit for a different action or reorder the lines.

  • p, pick — Use this commit as-is.
  • r, reword — Use this commit, but edit its commit message.
  • e, edit — Use this commit, but pause the rebase so you can amend its contents or message.
  • s, squash — Meld this commit into the previous commit, combining their messages.
  • d, drop — Delete this commit.
Squashing commits

Suppose you have two commits, and you want to squash them into one. You do this by running git rebase -i HEAD~2 (that's two commits back from your current position) and putting the word squash in front of the commit you want to fold into the one before it.

$ git rebase --interactive HEAD~2

Running this command gives you a list of commits in your text editor that looks something like this:

pick 718710d Commit1.
pick d66e267 Commit2.

If you want to make a single commit from two commits, then you have to modify the script to look like this:

pick 718710d Commit1.
squash d66e267 Commit2.

Finally, save and exit the editor. Git then applies the change and opens an editor so you can combine the two commit messages:

# This is a combination of 2 commits.
# This is the 1st commit message:

Fuse smoke test versioning.

# This is the commit message #2:

Updated the code.

# Please enter the commit message for your changes.

Saving this results in a single commit that introduces the changes of the two previous commits.
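
If you want to confirm the result, the short log should now show one commit where there used to be two (the hash below is a placeholder; yours will differ):

$ git log --oneline -1
1a2b3c4 Fuse smoke test versioning.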

Reordering commits

Suppose you have three commits, and you want to change their order such that Commit3 is first, Commit2 is second, and then third is Commit1. Run git rebase -i HEAD~3 to make this happen:

$ git rebase --interactive HEAD~3

This script is opened in your editor:

pick 8cfd1c4 Commit1
pick 718710d Commit2
pick d77e267 Commit3

Modify the script like this:

pick d77e267 Commit3
pick 718710d Commit2
pick 8cfd1c4 Commit1

When you save and exit the editor, Git rewinds your branch to the parent of these commits and applies d77e267, then 718710d, and then 8cfd1c4. Because the commits are recreated in the new order, they receive new hashes, so don't expect the old IDs to match afterward.

Delete a commit

Suppose you have two commits and want to get rid of the second one. You can delete it using the git rebase -i script.

$ git rebase -i HEAD~2

This script is opened in your editor:

pick 8cfd1c4 Commit1
pick 718710d Commit2

Place the word drop before the commit you want to delete, or you can just delete that line from the rebase script.

pick 8cfd1c4 Commit1
drop 718710d Commit2

Then save and exit the editor. Git applies the changes and deletes that commit.

This can cause merge conflicts if later commits in the sequence depend on the one you just deleted, so use it carefully. If something goes wrong, though, you still have ways to recover, as described below.

Rebase with caution

If you get partway through a rebase and decide it's not a good idea, use the git rebase --abort command to undo everything you've done so far. If you have finished a rebase and decide it's wrong or not what you want, you can use git reflog to recover an earlier version of your branch.
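
As a rough sketch of that recovery (the HEAD@{} position you reset to depends on what git reflog shows in your repository, so treat the number here as a placeholder):

$ git reflog
$ git reset --hard HEAD@{2}

The reflog lists every position your HEAD has recently occupied; pick the entry from just before the rebase and reset to it.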

Rebasing is powerful and can be useful to keep your repo organized and your history clean. Some developers like to rebase the main branch so that the commits tell a clear story of development, while others prefer for all commits to be preserved exactly as they were proposed for merging from other branches. As long as you think through what your repo needs and how a rebase might affect it, the git rebase command and the git rebase -i interface are useful commands.


Image by:

Image from Unsplash.com, Creative Commons Zero 

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Extend Kubernetes service discovery with Stork and Quarkus

Mon, 04/04/2022 - 15:00
Extend Kubernetes service discovery with Stork and Quarkus Daniel Oh Mon, 04/04/2022 - 03:00

In a traditional monolithic architecture, applications knew where their backend services lived through static hostnames, IP addresses, and ports, and the IT operations team maintained those static configurations for service reliability and system stability. This Day 2 operation has changed significantly since microservices began running in distributed networking systems, because microservices need to communicate with multiple backend services to improve load balancing and service resiliency.


The microservices topology became much more complex as the service applications were containerized and placed on Kubernetes. Because the application containers can be terminated and recreated anytime by Kubernetes, the applications can't know the static information in advance. The microservices don't need to be configured with the static information of the backend applications because Kubernetes handles service discovery, load balancing, and self-healing dynamically and automatically.

However, Kubernetes doesn't support programmatic service discovery and client-based load balancing through integrated application configurations. SmallRye Stork is an open source project that solves this problem, providing the following benefits and features:

  • Augment service discovery capabilities
  • Support for Consul and Kubernetes
  • Custom client load-balancing features
  • Manageable and programmatic APIs

Nevertheless, Java developers need some time to adapt to the Stork project and integrate it with an existing Java framework. Luckily, Quarkus lets developers plug Stork's features into Java applications, and this article demonstrates how.

Create a new Quarkus project using Quarkus CLI

Using the Quarkus command-line tool (CLI), create a new Maven project. The following command will scaffold a new reactive RESTful API application:

$ quarkus create app quarkus-stork-example -x rest-client-reactive,resteasy-reactive  

The output should look like this:

...
[SUCCESS] ✅  quarkus project has been successfully generated in:
--> /Users/danieloh/Downloads/demo/quarkus-stork-example
...

Open a pom.xml file and add the following Stork dependencies: stork-service-discovery-consul and smallrye-mutiny-vertx-consul-client. Find the solution to this example here.


<dependency>
  <groupId>io.smallrye.stork</groupId>
  <artifactId>stork-service-discovery-consul</artifactId>
</dependency>
<dependency>
  <groupId>io.smallrye.reactive</groupId>
  <artifactId>smallrye-mutiny-vertx-consul-client</artifactId>
</dependency>

Create new services for the discovery

Create two services (hero and villain) that the Stork load balancer will discover. Create a new services directory in src/main/java/org/acme. Then create a new HeroService.java file in src/main/java/org/acme/services.

Add the following code to the HeroService.java file that creates a new HTTP server based on the Vert.x reactive engine:

@ApplicationScoped
public class HeroService {

    @ConfigProperty(name = "hero-service-port", defaultValue = "9000") int port;

    public void init(@Observes StartupEvent ev, Vertx vertx) {
        vertx.createHttpServer()
                .requestHandler(req -> req.response().endAndForget("Super Hero!"))
                .listenAndAwait(port);
    }
   
}

Next, create another service by creating a VillainService.java file. The only difference is that you need to set a different name, port, and return message in the init() method as below:

@ConfigProperty(name = "villain-service-port", defaultValue = "9001") int port;

public void init(@Observes StartupEvent ev, Vertx vertx) {
        vertx.createHttpServer()
                .requestHandler(req -> req.response().endAndForget("Super Villain!"))
                .listenAndAwait(port);
}

Register the services to Consul

As I mentioned earlier, Stork lets you use Consul, through the Vert.x Consul client, for service registration. Create a new ConsulRegistration.java file in src/main/java/org/acme/services to register the two service instances under the same name (my-rest-service). Then add the following ConfigProperty fields and init() method:

@ApplicationScoped
public class ConsulRegistration {

    @ConfigProperty(name = "consul.host") String host;
    @ConfigProperty(name = "consul.port") int port;

    @ConfigProperty(name = "hero-service-port", defaultValue = "9000") int hero;
    @ConfigProperty(name = "villain-service-port", defaultValue = "9001") int villain;

    public void init(@Observes StartupEvent ev, Vertx vertx) {
        ConsulClient client = ConsulClient.create(vertx, new ConsulClientOptions().setHost(host).setPort(port));

        client.registerServiceAndAwait(
                new ServiceOptions().setPort(hero).setAddress("localhost").setName("my-rest-service").setId("hero"));
        client.registerServiceAndAwait(
                new ServiceOptions().setPort(villain).setAddress("localhost").setName("my-rest-service").setId("villain"));

    }

}

Delegate the reactive REST client to Stork

The hero and villain services are normal reactive RESTful services that could be called directly through their own endpoints. Instead, you delegate them to Stork for service discovery, selection, and invocation.

Create a new interface MyRestClient.java file in the src/main/java directory. Then add the following code:

@RegisterRestClient(baseUri = "stork://my-rest-service")
public interface MyRestClient {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    String get();
}

The baseUri that starts with stork:// enables Stork to discover the services and select one based on load balancing type. Next, modify the existing resource file or create a new resource file (MyRestClientResource) to inject the RestClient (MyRestClient) along with the endpoint (/api) as seen below:

@Path("/api")
public class MyRestClientResource {
   
    @RestClient MyRestClient myRestClient;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String invoke() {
        return myRestClient.get();
    }

}

Before you run the application, configure Stork to use the Consul server in the application.properties as shown below:

consul.host=localhost
consul.port=8500

stork.my-rest-service.service-discovery=consul
stork.my-rest-service.service-discovery.consul-host=localhost
stork.my-rest-service.service-discovery.consul-port=8500
stork.my-rest-service.load-balancer=round-robin

Test your application

You have several ways to run a local Consul server. For this example, run the server using a container. This approach is probably simpler than installing or referring to an external server. Find more information here.

$ docker run --rm --name consul -p 8500:8500 -p 8501:8501 consul:1.7 agent -dev -ui -client=0.0.0.0 -bind=0.0.0.0 --https-port=8501

Run your Quarkus application using Dev mode:

$ cd quarkus-stork-example
$ quarkus dev

The output looks like this:

...
INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, jaxrs-client-reactive, rest-client-reactive, resteasy-reactive, smallrye-context-propagation, vertx]

--
Tests paused
Press [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>
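
At this point, both the hero and villain instances should have registered themselves with Consul. If you want to verify that before calling the REST endpoint, Consul's HTTP catalog API (on port 8500, as configured above) lists the instances registered under my-rest-service; this optional check isn't part of the original walkthrough:

$ curl http://localhost:8500/v1/catalog/service/my-rest-service

The response is a JSON array containing the hero and villain registrations.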

Access the RESTful API (/api) to see the services being selected by the round-robin load-balancing mechanism. Run the following curl loop in your local terminal:

$ while true; do curl localhost:8080/api ; echo ''; sleep 1; done

The output should look like this:

Super Villain!
Super Hero!
Super Villain!
Super Hero!
Super Villain!
...

Wrap up

You learned how Quarkus enables developers to integrate client-based load balancing into reactive Java applications using Stork and Consul. Developers also get a better experience through live coding while they keep developing reactive applications in Quarkus. For more information about Quarkus, visit the Quarkus guides and practices.


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.
